We have a pricing dataset in which either the contained values or the number of records can change. The number of added or removed records is small compared to the number of value changes. The dataset usually has between 50 and 500 items, each with 8 properties.
We currently use AJAX to fetch a JSON structure representing the dataset and update a webpage from it, applying the new values and adding or removing items where necessary.
We make the request with two hash values, one for the values and another for

I need the following algorithm. Say we have 5 (binary 101) and 7 (binary 111); the numbers are arbitrary. I need to add these numbers using a recursive method and, at the same time, bitwise methods, such that the result is:
101
+
111
----
1100
i.e. 12. Can anybody help me?
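A minimal recursive sketch in Python of the carry-propagation idea (XOR adds the bits without carrying; AND shifted left is the carry):

```python
def add(a, b):
    # recursive bitwise addition for non-negative integers:
    # base case: no carry left to propagate
    if b == 0:
        return a
    # a ^ b adds the bits without carry; (a & b) << 1 is the carry
    return add(a ^ b, (a & b) << 1)
```

Calling `add(5, 7)` recurses until the carry dies out and yields 12 (binary 1100).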

I am developing a banner ads system. Each banner will be stored in the DB and will have a weight (an integer). How can I efficiently transform these weights into percentages?
For ex:
banner_1 70
banner_2 90
banner_3 150
and I want banner 1 displayed 22% of the time,
the second 29%,
and the third 48%.
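One straightforward way, matching the truncating rounding in the example (note 22 + 29 + 48 = 99; a largest-remainder pass would be needed if the percentages must sum to exactly 100):

```python
def weights_to_percentages(weights):
    # integer percentage of the total weight, truncated as in the example
    total = sum(weights)
    return [w * 100 // total for w in weights]
```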

Describe an algorithm that can determine the length of an array in O(log n).
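One common reading of this puzzle (an assumption, since the question doesn't say) is that the only allowed operation is "try to read index i", with out-of-range accesses detectable. Under that assumption, exponential doubling followed by binary search gives O(log n):

```python
def array_length(a):
    # O(log n) length: double an upper bound until it falls out of
    # bounds, then binary search for the boundary
    def in_bounds(i):
        try:
            a[i]
            return True
        except IndexError:
            return False

    if not in_bounds(0):
        return 0
    hi = 1
    while in_bounds(hi):    # O(log n) doublings
        hi *= 2
    lo = hi // 2            # invariant: lo in bounds, hi out of bounds
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if in_bounds(mid):
            lo = mid
        else:
            hi = mid
    return lo + 1
```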

Are there any papers that describe the algorithm behind the FindCornerSubPix function in OpenCV? I cannot find any documentation that describes it.

I'm building an app where videos can be 'Liked' (upvoted), and we're tracking unique views, but there is no downvoting.
This article
seems to outline the standard approach to ranking videos that can be both upvoted and downvoted, preventing early submissions from dominating purely on seniority. However, the math is a bit too advanced for me to tell whether equating views (which might count as 'apathy votes') with downvotes for this purpose will still produce useful results. This will s
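For reference, the usual formula from that kind of article is the Wilson score lower bound. A sketch that treats each unique view as a trial and each like as a success, so non-liking viewers act as implicit downvotes (whether that assumption "breeds useful results" is exactly the open question here):

```python
import math

def wilson_lower_bound(likes, views, z=1.96):
    # lower bound of the 95% Wilson confidence interval for the
    # like-rate; assumption: every unique view is counted as a vote
    if views == 0:
        return 0.0
    p = likes / views
    denom = 1 + z * z / views
    centre = p + z * z / (2 * views)
    margin = z * math.sqrt((p * (1 - p) + z * z / (4 * views)) / views)
    return (centre - margin) / denom
```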

Suppose that I have two computational complexities:
O(k * M(n)) - the complexity of modular exponentiation, where k is the number of exponent bits, n is the number of digits, and M(n) is the complexity of Newton's division algorithm.
O(log^6(n)) - the complexity of another algorithm.
How can I determine which of these two complexities is less "expensive"? In fact, the notation M(n) is what confuses me most.

I believe that the Hamiltonian cycle problem can be summed up as the following:
Given an undirected graph G = (V, E), a
Hamiltonian circuit is a tour in G passing through
every vertex of G once and only once.
Now, what I would like to do is reduce my problem to this. My problem is:
Given a weighted undirected graph G, integer k, and vertices u, v
both in G, is there a simple path in G from u to v
with total weight of AT LEAST k?
So knowing that the Hamiltonian cycle problem

We have a huge chunk of data and we want to perform a few operations on it. Removing duplicates is one of the main operations.
Ex.
a,me,123,2631272164
yrw,wq,1237,123712,126128361
yrw,dsfswq,1323237,12xcvcx3712,1sd26128361
These are three entries in a file, and we want to remove duplicates based on the 1st column, so the 3rd row should be deleted. Each row may have a different number of columns, but the column we are interested in will always be present.
An in-memory operation doesn't look fe
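If the set of distinct keys (not the full rows) fits in memory, a single streaming pass suffices; otherwise an external sort on the key column is the usual fallback. A sketch of the streaming case:

```python
def dedupe_by_first_column(lines):
    # keeps the first occurrence of each key (first comma-separated
    # field); only the keys are held in memory, not the rows
    seen = set()
    for line in lines:
        key = line.split(",", 1)[0]
        if key not in seen:
            seen.add(key)
            yield line
```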

An array a[] contains all of the integers from 0 to N, except one. However, you cannot access an element with a single operation. Instead, you can call get(i, k), which returns the kth bit of a[i], or you can call swap(i, j), which swaps the ith and jth elements of a[]. Design an O(N) algorithm to find the missing integer.
(For simplicity, assume N is a power of 2.)
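A sketch of the standard bit-partitioning idea (simulating get(i, k) as reading a single bit of a value): each round determines one bit of the missing number and halves the candidate set, so the total work is N + N/2 + N/4 + ... = O(N).

```python
def find_missing(a, num_bits):
    """a holds all of 0..N except one, with N = 2**num_bits."""
    get = lambda x, k: (x >> k) & 1        # the get(i, k) primitive
    present = list(a)                      # values actually in the array
    full = list(range(2 ** num_bits + 1))  # values that should be there
    missing = 0
    for k in range(num_bits + 1):
        p0 = [x for x in present if get(x, k) == 0]
        p1 = [x for x in present if get(x, k) == 1]
        f0 = [x for x in full if get(x, k) == 0]
        f1 = [x for x in full if get(x, k) == 1]
        if len(p0) < len(f0):              # a zero-bit value is missing
            present, full = p0, f0
        else:                              # bit k of the answer is 1
            missing |= 1 << k
            present, full = p1, f1
    return missing
```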

Given a grid (or table) with x*y cells, each containing a value. Most of these cells have a value of 0, but there may be a "hot spot" somewhere on the grid: a cell with a high value. The neighbours of this cell then also have values > 0, and the farther away from the hot spot, the lower the value in the respective grid cell.
So the hot spot can be seen as the top of a hill, with values decreasing the farther we are from it. At a certain distance the values drop to 0 again.
Now I

Say I have two arrays :
a=[10 21 50 70 100 120];
b=[18 91];
I want to match the (single) elements across a and b that are closest AND within 10 units of each other.
Result :
idxa=[1 2 3 4 5 6]
idxb=[2 5]
where the matching elements share the same number.
I am confused because I am unsure how to ensure (for example) that 18 matches with 21 instead of 10, since both meet the requirement of being within 10 units. Also, I'd like to do this across several (up to 8) lists and the c
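One reasonable policy (an assumption, since the question leaves the tie-break open) is "globally closest first": sort all cross-list pairs within tolerance by distance and assign greedily, so 18 pairs with 21 (distance 3) before 10 (distance 8) ever gets a chance. A sketch for two lists; for up to 8 lists the same idea can be run pairwise:

```python
def match_closest(a, b, tol=10):
    # greedy globally-closest matching: consider all pairs within tol,
    # closest first, never reusing an element from either list
    pairs = sorted(
        (abs(x - y), i, j)
        for i, x in enumerate(a)
        for j, y in enumerate(b)
        if abs(x - y) <= tol
    )
    used_a, used_b, matches = set(), set(), {}
    for d, i, j in pairs:
        if i not in used_a and j not in used_b:
            used_a.add(i)
            used_b.add(j)
            matches[j] = i  # b[j] is matched with a[i]
    return matches
```

For a = [10, 21, 50, 70, 100, 120] and b = [18, 91] this yields {0: 1, 1: 4}, i.e. 18↔21 and 91↔100 (0-indexed).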

I have a number of distinct elements in an array and want to find those items first before sorting the array. I was thinking of using a hash table to find the elements, but is that possible since I then have to access the table to get the elements again? Am I on the right track?

Tags: algorithm, packet, dfa, inspection, network-security
I'm currently reading the paper Algorithms to Accelerate Multiple Regular Expressions Matching for Deep Packet Inspection, about the Delayed Input DFA.
According to Lemma 1 in the paper, a DFA is equivalent to its corresponding Delayed Input DFA. But consider the counterexample below:
Let f(i, s) denote the transition function, where s is the current state and i is the input character.
DFA:
f(a, 1) = 3, f(b,1) = 3, f(c, 1) = 3, f(a, 2) = 3, f(b, 2) = 3
The corresponding Delayed input DFA:
f(a, 1) = 3,

Tags: algorithm, complexity-theory, big-o, asymptotic-complexity, big-theta
I am developing an algorithm which takes O(log^3 n). (NOTE: take O as Big Theta, though Big O would be fine too.)
I am unsure whether O(log^3 n), or even O(log^2 n), is considered more, less, or equally complex as O(n log n).
If I were to follow the rules straight away, I'd say O(n log n) is the more complex one, but I still don't have any clue as to why or how.
I've done some research but I haven't been able to find an answer to this question.
Thank you very much.
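A quick numeric check makes the ordering visible: the ratio (n log n) / log³ n = n / log² n grows without bound, so O(log³ n) is strictly cheaper asymptotically.

```python
import math

# log^3 n falls ever further behind n log n as n grows
for n in [2 ** 10, 2 ** 20, 2 ** 40]:
    lg = math.log2(n)
    print(f"n = {n}: log^3 n = {lg ** 3:.0f}, n log n = {n * lg:.0f}")
```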

How do you find the maximum subset sum <= K in an array, with no other constraints (the chosen elements don't have to be contiguous)?
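Assuming the elements are non-negative integers (the usual setting for this problem), this is subset-sum up to K, solvable with a DP bitset in O(nK) bit operations:

```python
def max_subset_sum_at_most_k(nums, k):
    # bit i of `achievable` is set iff some subset sums to exactly i;
    # assumes non-negative integer elements
    achievable = 1
    mask = (1 << (k + 1)) - 1      # keep only sums <= k
    for x in nums:
        achievable |= (achievable << x) & mask
    return achievable.bit_length() - 1   # highest achievable sum <= k
```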

Analysing the time complexity of this pseudocode; on the right is my take on the number of times each line runs. I'm not sure whether to use log n, n log n, or simply n for the while-loop. Please help.

                              times
1  sum = 0                    1
2  i = 1                      1
3  while i <= n               log n + 1
4      for j = 1 to n         n log n
5          sum = sum + j      n log n
6      i = 2i                 log n
7  return sum                 1
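Instrumenting the loops empirically supports the tallies above: the while-body runs ⌊log₂ n⌋ + 1 times (i doubles each pass) and the inner body n times per outer pass, so the total is Θ(n log n):

```python
def instrumented(n):
    # run the pseudocode and count how often each loop body executes
    total, i = 0, 1
    outer = inner = 0
    while i <= n:
        outer += 1
        for j in range(1, n + 1):
            inner += 1
            total += j
        i *= 2
    return outer, inner

# for n = 1024: outer == 11 (= log2(1024) + 1) and inner == 11 * 1024
outer, inner = instrumented(1024)
```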

We have 3 variants of Merge sort.
Top down
Bottom up
Natural
Are any of these adaptive algorithms? That is, if an array is already sorted, will they take advantage of the sorted order?
As I see it, whether or not an array is sorted, merge sort will still perform the comparisons and then merge, so none of them are adaptive.
Is my understanding correct?
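One detail worth checking for the "natural" variant: it begins by splitting the input into maximal ascending runs, so an already-sorted array is a single run and that variant can finish in O(n), which is what "adaptive" means; top-down and bottom-up split blindly. A sketch of the run detection:

```python
def runs(a):
    # split a into maximal non-decreasing runs, as natural merge sort
    # does before merging; a sorted input yields exactly one run
    out, start = [], 0
    for i in range(1, len(a) + 1):
        if i == len(a) or a[i] < a[i - 1]:
            out.append(a[start:i])
            start = i
    return out
```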

I am trying to find the actual time taken by both these algorithms to execute, and what I found was inconsistent with information in many places on the internet, which says insertion sort is better. However, I found that bubble sort executed more quickly. My code was as follows.
Bubble sort
for (int j = 0; j < a.length - 1; j++) {
    for (int i = 0; i < a.length - 1 - j; i++) {
        count++;
        if (a[i] > a[i + 1]) {
            temp = a[i];
            a[i] = a[i + 1];
            a[i + 1] = temp;
        }
    }
}

My problem is that I have two lists, for the sake of simplicity with 4 items each; the items are not unique. The items are sorted by a calculated value, let's call it rank. Now I have to display 2 items per page out of the 8 items, but only unique values, ordered by rank.
So I have this two lists:
List A
A - 1
B - 2
D - 5
C - 6
List B
A - 2
D - 3
B - 4
C - 5
So I need the first page of items ordered by rank, with offset 0 and limit 2, which would be:
First page
A(list A) - 1
B(lis
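A sketch of one way to produce the pages described: keep each item's best (lowest) rank across the lists, sort once, then slice:

```python
def page(lists, page_no, per_page=2):
    # lists: sequences of (item, rank) pairs; dedupe by keeping each
    # item's best rank, then paginate the sorted result
    best = {}
    for lst in lists:
        for item, rank in lst:
            if item not in best or rank < best[item]:
                best[item] = rank
    ordered = sorted(best.items(), key=lambda kv: kv[1])
    start = page_no * per_page
    return ordered[start:start + per_page]
```

With the two example lists, page 0 is [("A", 1), ("B", 2)], matching the expected first page.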

I have an implementation of merge sort. I'm not too sure about the quality of my implementation, so I ran it with lists of N up to 6000 and graphed the time it took to sort each time. I know that merge sort should be O(n log n), but my graph looks to be more linear than logarithmic (maybe slightly polynomial). Am I missing something in my interpretation of this graph? Furthermore, I'm not sure what to think of the increasing spread as N grows towards 6000. Does this look right?
Here is an

I know this question has been asked many times. However, after digging through all the related posts on Stack Overflow, I still can't find a definitive answer. Can you help?
My question is: for such a problem, does an O(log n) solution exist? To make the problem clearer, we can study two cases separately: case (1) A has no duplicates and case (2) A has duplicate elements.
I heard a binary-search-based solution can supposedly achieve O(log n), but I don't understan

I'm looking for algorithms or data structures specifically for dealing with ambiguities.
In my particular current field of interest I'm looking into ambiguous parses of natural languages, but I assume there must be many fields in computing where ambiguity plays a part.
I can find a lot out there on trying to avoid ambiguity but very little on how to embrace ambiguity and analyse ambiguous data.
Say a parser generates these alternative token streams or interpretations:
A B1 C
A B2 C
A B3 B4

Is there any way to print the value of n^1000 without using BigInt? I have been thinking along the lines of some sort of shift logic but haven't been able to come up with anything good yet.
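One language-agnostic technique is to store the number as an array of decimal digits and do schoolbook multiplication, which needs no big-integer support (Python has big integers built in, so this is purely illustrative of the approach):

```python
def power_digits(n, e):
    # compute n**e as a decimal string using only small-integer
    # arithmetic: digits are kept least-significant first
    digits = [1]
    for _ in range(e):
        carry = 0
        for i in range(len(digits)):
            carry, digits[i] = divmod(digits[i] * n + carry, 10)
        while carry:
            carry, d = divmod(carry, 10)
            digits.append(d)
    return "".join(map(str, reversed(digits)))
```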

I thought of an encoding system, arr[(encoded code)] = (decoded code), as follows:
arr is an array of binary non-negative integers, less than 2^16 (pow(2, 16)).
arr[0]=0
each of arr[1...16] has one 1. (1, 10, 100, 1000...)
each of arr[17...136] has two 1s. (11, 101, 110, 1001...)
each of arr[137...696] has three 1s. (111, 1011, 1101, 1110...)
... each of binary in arr[(sum of 16C(n-1))...(sum of (16Cn))-1] has n 1s.
each of arr[65519...65534] has fifteen 1s.
arr[65535] is 2^16-1 (1111 1111 1111 1111).
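The table described is exactly "all 16-bit values ordered by popcount, ties broken numerically", so it can be generated in one line and used to sanity-check the index ranges:

```python
# decoded-value table: index = rank in (popcount, numeric value) order
arr = sorted(range(2 ** 16), key=lambda v: (bin(v).count("1"), v))
```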

I am using A* algorithm to find the shortest path between a box and a movable character in a navmesh graph. The box is meant to follow the character.
If the character is behind an obstacle or a room (for example), then the box would travel via the shortest distance, as shown in the figure. Is there a way to prevent the box from moving through this path and make it walk around the room?
Since I have to avoid moving back while finding a path between points, any entry in the closed list is first checked to see whet

How do I find a survivable shortest-path tree in a directed graph where each directed edge is positively weighted?
The tree need not include all the nodes of the graph; "shortest-path" means the maximal distance from the source to any destination is minimized; and "survivable" means that in the graph pruned by removing the tree's directed edges, another such tree can still be found.

If I run minimax/expectimax from the current (start) state, suppose the root has three children (chance nodes). Suppose the algorithm finds the optimal terminal node and, in turn, the optimal child of the root; that means it will choose the particular move leading to the terminal state with the optimal utility. Call the path from the root to that terminal state path P.
And we assume the opposite player also makes a move as predic

I am really new to algorithm programming. I know that sequence alignment with dynamic programming follows the algorithm below:
Alg: Compute C[i, j]: the min cost to align (the first i symbols of X) with (the first j symbols of Y) (C1: cost for a mismatch, C2: cost for gap alignment)
and define d[i, j] = { C1 if X[i] ≠ Y[j], 0 otherwise }
Compute C[i, j]:
case 1: align X[i] with Y[j]
C[i, j] = C[i-1, j-1] + d[i, j]
case 2: either X[i] or Y[j] is aligned to a gap
C[i, j] = min{ C[i-1, j] + C2, C[i, j-1] + C2 }
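A bottom-up version of that recurrence, with the gap cost C2 filling the first row and column:

```python
def align_cost(x, y, c1, c2):
    # c1: mismatch cost, c2: gap cost; C[i][j] is the min cost to align
    # the first i symbols of x with the first j symbols of y
    n, m = len(x), len(y)
    C = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        C[i][0] = i * c2          # align a prefix of x against gaps
    for j in range(1, m + 1):
        C[0][j] = j * c2
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = 0 if x[i - 1] == y[j - 1] else c1
            C[i][j] = min(C[i - 1][j - 1] + d,    # match/mismatch
                          C[i - 1][j] + c2,       # gap in y
                          C[i][j - 1] + c2)       # gap in x
    return C[n][m]
```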

I am using the FMOD library to apply FFT to an audio stream, providing me with a constantly updating fixed number of frequency bins. Each bin represents an equal frequency range, with a value between 0 and 1 to represent the intensity of this range from the processed audio. FMOD documentation states that these values can be represented in decibels, where val is the value between 0 and 1:
Decibels = 10.0f * (float)log10(val) * 2.0f
I am attempting to make an automated strobe-like beat detecting

I have seen that in some cases the complexity of nested loops is O(n^2), but I was wondering in which cases nested loops can have the following complexities:
O(n)
O(log n) - I have seen such a case somewhere, but I do not recall the exact example.
Is there any kind of formula or trick to calculate the complexity of nested loops? Sometimes when I apply summation formulas I do not get the right answer.
Some examples would be great, thanks.
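One classic O(n) nested loop: if the inner index never resets, the inner body runs at most n times across the entire outer loop (the two-pointer pattern). O(log n) arises analogously when both loop bounds shrink or grow geometrically. A sketch of the first case:

```python
def nested_but_linear(n):
    # looks quadratic, but j only moves forward: the inner body runs
    # at most n times in total, so the whole thing is O(n)
    ops = 0
    j = 0
    for i in range(n):
        while j < n and j < 2 * i:
            j += 1
            ops += 1
    return ops
```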

I have a set of points (x, y) and need to return the two points with minimal distance between them.
I am using this:
http://www.cs.ucsb.edu/~suri/cs235/ClosestPair.pdf
but I don't really understand how the algorithm works.
Can you explain more simply how it works, or suggest another idea?
Thanks!
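A compact version of the divide-and-conquer algorithm from those slides may help: split on the median x, recurse on both halves, then check only a thin strip around the split line. This simplified version re-sorts the strip, giving O(n log² n) instead of the slides' O(n log n), but the structure is the same:

```python
import math

def closest_pair(points):
    # returns the minimal distance among >= 2 points
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def solve(p):  # p is sorted by x
        if len(p) <= 3:
            return min(dist(a, b)
                       for i, a in enumerate(p) for b in p[i + 1:])
        mid = len(p) // 2
        mx = p[mid][0]
        d = min(solve(p[:mid]), solve(p[mid:]))
        # only points within d of the dividing line can beat d
        strip = sorted((q for q in p if abs(q[0] - mx) < d),
                       key=lambda q: q[1])
        for i, a in enumerate(strip):
            for b in strip[i + 1:i + 8]:  # at most 7 neighbours matter
                d = min(d, dist(a, b))
        return d

    return solve(sorted(points))
```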

Level: Newbie
q1) Do true random numbers exist?
q2) If I were to reverse-engineer a series of numbers, would it still be random?
q3) Are arithmetic and geometric progressions (AP, GP) random series?
q4) Is there any formula for random number generation?
q5) Can anyone write an algorithm for true random numbers?

Tags: algorithm, dynamic-programming, pseudocode
I am working on a task which requires me to solve the following algorithmic problem:
- You have a collection of items (their weights): [w1, w2, ..., wn]
- You have a bag whose capacity is W
- You need to fill the bag as much as possible with a subset of the given items.
So this is not the "0/1 knapsack" problem, as we deal only with weights (we have a single parameter per item). Therefore I assume it might have a solution (knapsack is NP-complete) or some kind of a
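For what it's worth, this weights-only variant is exactly subset-sum (still NP-complete in general), but with integer weights and a modest W the standard pseudo-polynomial DP works fine:

```python
def fill_bag(weights, W):
    # reachable[s] is True iff some subset of the items sums to s;
    # the answer is the largest reachable s <= W
    reachable = [True] + [False] * W
    for w in weights:
        for s in range(W, w - 1, -1):   # backwards: each item used once
            if reachable[s - w]:
                reachable[s] = True
    return max(s for s in range(W + 1) if reachable[s])
```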

Is it possible to have such a graph for every n? If so, is it possible to generate such a graph programmatically?
Thanks in advance.

I have 2 point clouds (sets of points in 3D space) and an iterative algorithm. One of the clouds (call it A) is constant across iterations, and the other (call it B(i)) is slightly different on each iteration (B(i+1) differs from B(i) in only a few points). On every iteration i, for each point of A, my algorithm should find the closest point in B(i).
My question is: how can I compute these distances as fast as possible?
Here is what I've already tried:
Brute force compu
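A uniform grid (spatial hash) is a good fit here: when B(i+1) differs from B(i) in a few points, only those points need to be removed and re-inserted. A sketch with ring-expanding lookup (assumes B is non-empty and a cell size near the typical nearest-neighbour distance; a k-d tree is the usual alternative):

```python
from collections import defaultdict
import math

def cell_of(p, cell):
    return tuple(int(math.floor(c / cell)) for c in p)

def build_grid(points, cell):
    # hash every point of B into a cubic cell; moved points can be
    # removed/re-inserted incrementally between iterations
    grid = defaultdict(list)
    for p in points:
        grid[cell_of(p, cell)].append(p)
    return grid

def nearest(grid, cell, p):
    # search outward in shells of cells until no closer point can exist
    cx, cy, cz = cell_of(p, cell)
    best, best_d = None, float("inf")
    r = 0
    while best is None or (r - 1) * cell <= best_d:
        for kx in range(cx - r, cx + r + 1):
            for ky in range(cy - r, cy + r + 1):
                for kz in range(cz - r, cz + r + 1):
                    if max(abs(kx - cx), abs(ky - cy), abs(kz - cz)) != r:
                        continue          # only the shell at radius r
                    for q in grid.get((kx, ky, kz), []):
                        d = math.dist(p, q)
                        if d < best_d:
                            best, best_d = q, d
        r += 1
    return best
```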

Given n non-negative integers a1, a2, ..., an, where each represents
a point at coordinate (i, ai). n vertical lines are drawn such that
the two endpoints of line i is at (i, ai) and (i, 0). Find two lines,
which together with x-axis forms a container, such that the container
contains the most water
What I do not understand about this question is how I am supposed to know the y-coordinate (height) of the n vertical lines.

I didn't systematically take a Data Structures and Algorithms course at university (I've just read some books) and would like to ask if there is a well-defined algorithm for the following operation on a binary tree:
For a given binary tree and a positive integer n, search its leaves. If the difference between the depths of two adjacent leaves (imagine all leaves displayed as an array, so two adjacent leaves may be in two different sub-trees) is larger than n, subdivide the leaf with the lower depth. Re

I am not sure if such questions are accepted, and I would gladly delete/edit it if not, but I think this is not just a discussion question whose answer depends on opinion; a fact lies behind the solution.
So I was reading about level-order traversal of a binary tree, and there is an O(n) solution using a queue data structure.
The algorithm is like this
1) Create an empty queue q
2) temp_node = root
3) Loop while temp_node is not NULL
a) print temp_node->d
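The completed algorithm in Python; each node is enqueued and dequeued exactly once, which is where the O(n) bound comes from:

```python
from collections import deque

class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def level_order(root):
    # BFS: visit nodes level by level using a FIFO queue
    out, q = [], deque([root] if root else [])
    while q:
        node = q.popleft()
        out.append(node.val)
        if node.left:
            q.append(node.left)
        if node.right:
            q.append(node.right)
    return out
```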

I'd like the algorithm for the "highest product of 3" problem implemented in Haskell. Here's the problem statement:
Given an array of integers, find the highest product you can get from
three of the integers.
For example given [1, 2, 3, 4], the algorithm should return 24. And given [-10, -10, 5, 1, 6], the highest product of 3 would be 600 = -10*-10*6.
My attempt (assumed no negatives for the first try):
sol2' a b c [] = a*b*c
sol2' a b c (x:xs) = sol2' a' b' c' xs
  where
    a' = if x > a then x else a
    b' = if x > a then a else if x > b then x else b
    c' = if x > b then b else if x > c then x else c
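For comparison, a short Python version that also handles negatives (the answer is either the three largest numbers or the two most negative times the largest), a case the track-three-maxima approach above misses:

```python
def highest_product_of_3(xs):
    # sort once; the best product uses either the top three values or
    # the two smallest (possibly large-magnitude negatives) and the max
    s = sorted(xs)
    return max(s[-1] * s[-2] * s[-3], s[0] * s[1] * s[-1])
```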

Bubble sort has computational complexity O(n^2). So if we have, for example, a 3.5 GHz CPU, are these calculations correct?
1 000 000 * 1 000 000 = 10^12
3.5 GHz makes ~6 000 000 operations per minute (I think so; please correct me if this is not true)
(10^12 / (6 * 10^6)) / 60 = ~2777 hours
Is this true?
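A quick unit check under the question's own assumptions (10¹² elementary steps, one simple operation per cycle, which is idealised but fine for orders of magnitude): a 3.5 GHz CPU performs about 3.5·10⁹ operations per second, not 6·10⁶ per minute, so the estimate lands in minutes rather than thousands of hours:

```python
ops = 10 ** 12        # n^2 comparisons for n = 10^6
per_sec = 3.5e9       # one simple operation per cycle at 3.5 GHz
seconds = ops / per_sec
print(seconds)        # roughly 286 seconds (~5 minutes)
```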

I'm in need of a general algorithm to calculate the (x,y) coordinates of a point (b), when I have two other points (origin and a), an angle (angle), and a distance (distance).
They will be used like this, to form a solid-filled radial progress indicator:
origin is placed at the center of a GUI view
a is placed just beyond one of the edges of the view
angle is the angle formed from a to origin to b
b will be placed somewhere else beyond the edges of the view, such that when the path from a to
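A sketch of the geometry (hypothetical names, matching the description): take the direction from origin to a, rotate by angle, and step out distance units. Flip the angle's sign for GUI coordinate systems where y grows downward:

```python
import math

def point_from_angle(origin, a, angle, distance):
    # direction from origin to a, in radians
    base = math.atan2(a[1] - origin[1], a[0] - origin[0])
    t = base + angle   # counter-clockwise rotation by `angle`
    return (origin[0] + distance * math.cos(t),
            origin[1] + distance * math.sin(t))
```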

The problem is as follows: we want to build a wooden board composed of exactly k planks. We're given two types of planks: shorter and longer. How to determine all possible lengths of such a board?
The solution to this problem can be found here.
The pseudocode is:
getAllLengths(k, shorterAmount, longerAmount) {
    getAllLengths(k, 0, shorterAmount, longerAmount, lengths)
}
getAllLengths(k, totalAmount, shorterAmount, longerAmount, Set lengths) {
    if (k == 0) {
        lengths.add(totalAmount);
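With only two plank lengths, no recursion is needed at all: a board of exactly k planks that uses i short ones has length i*shorter + (k-i)*longer, so the whole answer set has at most k+1 values:

```python
def all_lengths(k, shorter, longer):
    # enumerate the number of short planks used (0..k); the rest are long
    return sorted({i * shorter + (k - i) * longer for i in range(k + 1)})
```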

I have two audio WAV files and have to align the similar or identical content in both. There can be extra content (content not present in the other file) in either or both of the files.
I have tried the DTW algorithm, but DTW also maps the components that are extra, i.e. present in only one file.
The input will be two WAV files and the output will be the timestamps of the windows of both audios containing the aligned part.
There can b

I'm trying to parallelize some software that computes some recursive linear equations. I think some of them might be adapted into prefix sums. A couple of examples of the kinds of equations I'm dealing with are below.
The standard prefix sum is defined as:
y[i] = y[i-1] + x[i]
One equation I'm interested in looks like prefix sum, but with a multiplication:
y[i] = A*y[i-1] + x[i]
Another is having deeper recursion:
y[i] = y[i-1] + y[i-2] + x[i]
Outside of ways of tackling these two variatio
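The y[i] = A·y[i−1] + x[i] case does fit the prefix-sum framework: each step is the affine map y → A·y + x, and composition of affine maps is associative, which is all a parallel scan needs. A sequential sketch of the reformulation (the y[i−1] + y[i−2] variant works the same way with 2×2 matrices):

```python
def combine(f, g):
    # f applied first, then g; each map is (a, b) meaning y -> a*y + b;
    # this composition is associative, so a parallel scan applies
    a1, b1 = f
    a2, b2 = g
    return (a2 * a1, a2 * b1 + b2)

def recurrence_as_scan(A, xs, y0=0):
    # running composition reproduces y[i] = A*y[i-1] + x[i]
    acc = (1, 0)            # identity map
    out = []
    for x in xs:
        acc = combine(acc, (A, x))
        a, b = acc
        out.append(a * y0 + b)
    return out
```

With A = 1 this reduces to the standard prefix sum.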

I have an array like this:
array = [
0, 1, 2, 3,
4, 5, 6, 7,
8, 9, 10, 11,
12, 13, 14, 15
];
And an input like following:
input = [5,6,9,10]
When we compare this input with the array, we can see that the input actually forms a square on the array.
input = [6,9,10]
This one, on the contrary, forms a triangle.
I want to write a function that checks whether the given input forms a rectangle or square on the array. If so it returns true, else false. How can I write that function?
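A sketch, assuming the indices address a row-major grid of known width (4 in the example): map each index to (row, column) and check that the set is exactly the cartesian product of its rows and columns:

```python
def is_rectangle(indices, width=4):
    # indices form a (filled) rectangle iff the cell set equals
    # rows x cols of the cells it touches
    cells = {(i // width, i % width) for i in indices}
    rows = {r for r, _ in cells}
    cols = {c for _, c in cells}
    return cells == {(r, c) for r in rows for c in cols}
```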

I am not sure I understood what a "Boolean string" is.
Here's the complete question:
Given a Boolean string, the DIVIDE(i, j) function returns a k such that k = i−1 if all the values
between i and j are TRUE; otherwise, the number of TRUE values between i and k is equal to the
number of FALSE values between k + 1 and j. (a) Implement a linear-time algorithm for the
DIVIDE function. (b) UPDATE(i, b, k) returns an appropriate value of k if the ith value is
changed to b. Implement a constant t
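A "Boolean string" is just a sequence of TRUE/FALSE values. For part (a), a single scan works because the quantity (#TRUE in the prefix) − (#FALSE in the suffix) increases by exactly 1 at each position, so it crosses zero no later than k = j:

```python
def divide(s, i, j):
    # linear-time DIVIDE for a Boolean string s (i, j are 1-indexed):
    # if s[i..j] is all TRUE, return i-1; otherwise return k with
    # (#TRUE in s[i..k]) == (#FALSE in s[k+1..j])
    seg = s[i - 1:j]
    if all(seg):
        return i - 1
    trues = 0                        # TRUEs in the prefix s[i..k]
    falses_right = seg.count(False)  # FALSEs in the suffix s[k+1..j]
    for k in range(i - 1, j + 1):
        if trues == falses_right:
            return k
        # extend the prefix with position k+1; either way the
        # difference trues - falses_right goes up by exactly 1
        if seg[k - i + 1]:
            trues += 1
        else:
            falses_right -= 1
```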

Problem:
Given a knapsack with weight W and a set of items where each item has a value and weight, find the max value you can pack in the knapsack.
So I have been reading online, and all the solutions I saw use DP, but I can't understand why.
What I thought of is:
for each item find its value per weight
pack as much of the item with the highest value per weight as you can in the knapsack
If the knapsack is full return the value
Otherwise do the same for the next highest value per weight item (
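The greedy value-per-weight idea fails because items are indivisible (it is optimal only for the fractional knapsack). A small counterexample plus the DP everyone points to:

```python
# counterexample: greedy by density takes (60, 10) then (100, 20) for a
# total value of 160, but the optimum skips the densest item: 100 + 120
items = [(60, 10), (100, 20), (120, 30)]  # (value, weight), capacity 50

def knapsack(items, W):
    # best[w] = max value achievable with capacity w; iterate weights
    # backwards so each item is used at most once
    best = [0] * (W + 1)
    for value, weight in items:
        for w in range(W, weight - 1, -1):
            best[w] = max(best[w], best[w - weight] + value)
    return best[W]
```

Here `knapsack(items, 50)` returns 220, beating the greedy's 160.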

This is a variation of a question I came across in one of the mock-interview videos.
Assume there's a bunch of routes, e.g. [rome->dallas, dallas->rome, london->paris, paris->frankfurt, london->dallas, frankfurt->rome], etc. Now I want to add a new location to this route set, e.g. delhi. The question is to find the minimum number of connections I need to build such that I can cover all the above places from Delhi.
My solution is:
Build a graph and get the list of strongly connected c

I would like to know how to derive the time complexity for the Heapify Algorithm for Heap Data Structure.
I am asking this question in light of the book "Fundamentals of Computer Algorithms" by Ellis Horowitz. I am adding some screenshots of the algorithm as well as the derivation given in the book.
Algorithm:
Derivation for worst case complexity:
I understood the first and last parts of this calculation, but I cannot figure out how 2^(i-1) x (k-i) changed into i2^(k-i-1).
Al
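If the derivation matches the usual heapify analysis, the step in question is just a change of summation variable, j = k − i (so i = k − j), which maps one form onto the other term by term:

```latex
\sum_{i=1}^{k} 2^{\,i-1}\,(k-i)
  \;=\; \sum_{j=0}^{k-1} j \, 2^{\,k-j-1}
  \qquad (j = k - i)
```

Renaming j back to i then gives the book's form, i·2^(k−i−1).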
