## algorithms – One-dimensional packing problem: Optimal decomposition of music structure

I am currently working on my Master's thesis on the visualization of music structure, and I'm looking to find an optimal description of the repetitions found in a piece of music.

## Problem Description

Given a section range in a song in seconds (or samples), e.g. (10, 20), I can look up where this section is repeated. We then end up with a set of repeating sections like ((10, 20), (40, 50), (70, 80)). We call this a group. A group has a certain fitness assigned to it.

(As a side note, the fitness of a group is defined as a combination of the sum of similarity values and how much of the song the sections cover altogether.)

Our goal is to find a set of disjoint groups that altogether have the highest fitness: the optimal decomposition of the repetitions. Below are two different valid decompositions of the same song, one coarse and one fine.

We are provided with a set of all candidates, here’s a small selection, sorted top to bottom by fitness:

## Current Greedy Method

1. Sort all candidates by fitness
2. Pick group G with the highest fitness
3. Remove any groups from candidates if they have overlap with G
4. Repeat from step 2 until no candidates are left
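The four steps above can be sketched as follows (the type and function names are illustrative, not taken from the thesis code):

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// A section is a (start, end) range in seconds; a group is a set of
// repeating sections plus a fitness score.
struct Group {
    std::vector<std::pair<double, double>> sections;
    double fitness;
};

// Two sections overlap if their ranges intersect.
bool sectionsOverlap(const std::pair<double, double>& a,
                     const std::pair<double, double>& b) {
    return a.first < b.second && b.first < a.second;
}

// Any section of g1 overlapping any section of g2 disqualifies the pair.
bool groupsOverlap(const Group& g1, const Group& g2) {
    for (const auto& s1 : g1.sections)
        for (const auto& s2 : g2.sections)
            if (sectionsOverlap(s1, s2)) return true;
    return false;
}

// Steps 1-4 of the greedy method: sort by fitness, repeatedly take the
// best remaining group, and discard everything that overlaps it.
std::vector<Group> greedyDecompose(std::vector<Group> candidates) {
    std::sort(candidates.begin(), candidates.end(),
              [](const Group& a, const Group& b) { return a.fitness > b.fitness; });
    std::vector<Group> chosen;
    for (const auto& g : candidates) {
        bool clash = false;
        for (const auto& c : chosen)
            if (groupsOverlap(g, c)) { clash = true; break; }
        if (!clash) chosen.push_back(g);
    }
    return chosen;
}
```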

## Bonus

Sometimes the candidates overlap ever so slightly, which in this context should perhaps not immediately lead to disqualification.

There are options to relax the no-overlap rule. Note that each of the sections in a group has a different brightness. This brightness corresponds to a confidence, so in a group some sections are more certain to be proper repetitions than others.

For a group of sections G that we wish to add to a set of groups of sections S, we can:

• simply remove sections from G if they overlap with any sections in S
• trim the sides of to-be-added sections from G if they overlap with any sections in S
• keep the overlapping sections in G

I hope this problem is interesting enough to you to give it a shot!
Thank you!

## algorithms – Can you give me insight into the time and space complexity of those pairing functions?

I am looking for a pairing function with linear time complexity.

I found

1. one in Steven Pigeon’s thesis (p 115, §5.3.6.3) described in English on MathOverflow that works by bit interleaving.
2. another from Regan’s 1992 paper (p 289, last paragraph) that works by bit concatenation based on a state machine.

The one from Pigeon seems $$O(1)$$ in space and $$O(n)$$ in time to me, assuming fixed-length datatypes (e.g. int64). Space $$O(1)$$ would follow from the fact that

$$\operatorname{bitlength}(z) = 2 \cdot \max(\operatorname{bitlength}(x), \operatorname{bitlength}(y))$$

$$O(n)$$ time seems to follow trivially from the absence of any operation apart from bit interleaving, which is $$O(1)$$ per pair of bits.
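For concreteness, a minimal sketch of interleaving under the fixed-width (int64) assumption: bit i of x goes to position 2i and bit i of y to position 2i+1 (the exact bit order in Pigeon's thesis may differ):

```cpp
#include <cstdint>

// Pairing by bit interleaving (a Morton / Z-order code): each of the 32
// bit pairs takes O(1) work, so the loop is O(n) in the bit length and
// O(1) in space for fixed-width integers.
uint64_t interleave(uint32_t x, uint32_t y) {
    uint64_t z = 0;
    for (int i = 0; i < 32; ++i) {
        z |= (uint64_t)((x >> i) & 1) << (2 * i);     // x's bit i -> position 2i
        z |= (uint64_t)((y >> i) & 1) << (2 * i + 1); // y's bit i -> position 2i+1
    }
    return z;
}
```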

What bugs me is that in Regan (1992), the algorithm is more involved but claims identical time and space complexity. I guess I am misunderstanding something.

Is the Pigeon function really $$O(n)$$ in time? If the problem of integer pairing has such a simple solution, how come Cantor’s and the Rosenberg-Strong pairing functions are cited so widely while having inferior performance characteristics?

## algorithms – What does it mean that a set of intervals is sorted by the right and left endpoints?

While reading a paper (On the k-coloring of intervals), I came upon the following description:

“Input: An integer k, and a set of n intervals sorted by right and left endpoints. The intervals are indexed in order of increasing right endpoint, and it is assumed that all endpoints are positive integers.”

What does it mean that they are sorted by “right and left endpoints”? Does it mean that each interval is of the form (left endpoint, right endpoint)?

## algorithms – Finding redundant edges on a maximum matchings in bipartite graph

Given a bipartite graph, I need to suggest an algorithm that finds the redundant edges.
A redundant edge is an edge whose removal leaves the size of the maximum matching unchanged.

My algorithm is as follows:

1. Connect source and sink nodes
2. Set $$c(e) = 1$$ for all edges $$(s, v_1), (v_1, v_2), (v_2, t)$$, where $$s, v_1, v_2, t \in V$$.
3. Run Edmonds-Karp algorithm on the graph
4. Look at the last residual network, all the backward edges are the redundant edges
5. Flip the direction of the edges on the last residual graph, set the capacities to 0 and run Edmonds-Karp again.
6. Look at the last residual network, and take the backward edges again

We’ll end up with a list of edges such that removing any one of them still leaves a maximum matching of the same size.

Is my algorithm correct?
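Independent of whether the residual-network approach above is correct, a slow but straightforward baseline is useful for testing it: compute the maximum matching size once, then recompute it with each edge removed in turn. A sketch using Kuhn's augmenting-path algorithm (names illustrative):

```cpp
#include <utility>
#include <vector>

struct Bipartite {
    int nLeft, nRight;
    std::vector<std::pair<int, int>> edges; // (left, right), 0-indexed
};

// Try to match left vertex u along an augmenting path.
bool tryAugment(int u, const std::vector<std::vector<int>>& adj,
                std::vector<int>& matchR, std::vector<bool>& seen) {
    for (int v : adj[u]) {
        if (seen[v]) continue;
        seen[v] = true;
        if (matchR[v] < 0 || tryAugment(matchR[v], adj, matchR, seen)) {
            matchR[v] = u;
            return true;
        }
    }
    return false;
}

// Maximum matching size, optionally skipping one edge (skipEdge = -1: none).
int maxMatching(const Bipartite& g, int skipEdge) {
    std::vector<std::vector<int>> adj(g.nLeft);
    for (int i = 0; i < (int)g.edges.size(); ++i)
        if (i != skipEdge) adj[g.edges[i].first].push_back(g.edges[i].second);
    std::vector<int> matchR(g.nRight, -1);
    int size = 0;
    for (int u = 0; u < g.nLeft; ++u) {
        std::vector<bool> seen(g.nRight, false);
        if (tryAugment(u, adj, matchR, seen)) ++size;
    }
    return size;
}

// An edge is redundant iff removing it leaves the matching size unchanged.
std::vector<int> redundantEdges(const Bipartite& g) {
    int base = maxMatching(g, -1);
    std::vector<int> redundant;
    for (int i = 0; i < (int)g.edges.size(); ++i)
        if (maxMatching(g, i) == base) redundant.push_back(i);
    return redundant;
}
```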

## algorithms – Find a minimum weight elementary path with a given constraint

I have a problem, given below, that I am really stuck with: my solution runs longer than the time I desired (about 1 second). I would really appreciate your help building a more efficient algorithm than mine.

Given an undirected graph G = (V, E) with two weights w1 and w2 on each edge. The graph has N vertices and M edges. A K-elementary path S is a subgraph of G that is an elementary path with exactly K edges.

Find a K-elementary path S that minimizes the sum of w1 over all its edges, subject to the sum of w2 over all edges of S being at most a given value Q. If no such path exists, print the value -1.

Input:

The first line with four values N, M, K, Q (2 <= N, K <= 50, 1 <= M <= 2*N, 1 <= Q <= 10^9)

The next M lines show the edges of the graph: V1 V2 W1 W2 (1 <= V1, V2 <= N, 1 <= W1 <= 10^4, 1 <= W2 <= 10^4)

Output: one integer, the minimum w1-weight of the K-elementary path found; -1 if none exists.

Sample test case:

Input:

```
5 7 3 6
1 2 1 2
1 4 2 2
1 5 3 6
2 3 3 2
2 4 4 4
3 4 5 1
4 5 4 7
```

Output:

```
6
```

First of all, I want to quote the definition of an elementary path: a path that does not visit any vertex more than once.

In short, for this problem we need to find a K-elementary path S such that the sum of w1 over its edges is minimum, the sum of w2 over its edges is less than or equal to Q, and it has exactly K edges.

I have a backtracking approach in which I try to build all paths satisfying the second condition (on w2) and then take the minimum under the first (on w1), but, as you might expect, the time complexity is quite high. I find it hard to convert this to dynamic programming or any other method that reduces the time complexity. I added some branch-and-bound conditions, but it is still slow.

Below is my source code, which you can refer to, though I am not sure how useful it is:

```cpp
#include <bits/stdc++.h>
using namespace std;
#define N 51
#define INF 1000000000
int n, m, K, Q;
bool appear[N];       // appear[v]: v is already on the current path
int W1[N][N];         // W1[u][v]: first weight of edge (u, v), 0 if absent
int W2[N][N];         // W2[u][v]: second weight of edge (u, v), 0 if absent
int currentSum1 = 0;
int currentSum2 = 0;
int source = 0;
int res = INF;
int Log[N];           // Log[k]: k-th vertex on the current path
int minElement = INF; // smallest w1 in the graph, used for the lower bound

// v can extend the path if it is unvisited and adjacent to the last vertex
bool check(int k, int v)
{
    return !appear[v] && W1[Log[k - 1]][v] != 0;
}

void solution()
{
    if (currentSum1 != 0 && currentSum1 < res)
        res = currentSum1;
}

void solve(int k)
{
    for (int v = 1; v <= n; v++)
    {
        // Branch-and-bound: prune if the w2 budget would be exceeded, or if
        // even the most optimistic completion cannot beat the best w1 so far
        if (check(k, v) && currentSum2 + W2[Log[k - 1]][v] <= Q
            && currentSum1 + (K - k) * minElement <= res)
        {
            Log[k] = v;
            currentSum2 += W2[Log[k - 1]][v];
            currentSum1 += W1[Log[k - 1]][v];
            appear[v] = true;
            if (k == K)
                solution();
            else
                solve(k + 1);
            currentSum1 -= W1[Log[k - 1]][v];
            currentSum2 -= W2[Log[k - 1]][v];
            appear[v] = false;
        }
    }
}

int main()
{
    ios_base::sync_with_stdio(false);
    cin.tie(nullptr);
    cin >> n >> m >> K >> Q;
    for (int i = 0; i < m; i++)
    {
        int x, y, w1, w2;
        cin >> x >> y >> w1 >> w2;
        minElement = min(minElement, w1);
        W1[x][y] = W1[y][x] = w1;
        W2[x][y] = W2[y][x] = w2;
    }
    for (int v = 1; v <= n; v++) // try every vertex as the starting point
    {
        source = v;
        currentSum1 = currentSum2 = 0;
        Log[0] = v;
        for (int i = 1; i <= n; i++)
            appear[i] = false;
        appear[source] = true;
        solve(1);
    }
    cout << (res != INF ? res : -1) << endl;
}
```

## algorithms – What’s the average number of transistor switches needed to do an N-bit x N-bit multiply?

I want to know how switch-efficient a multiplier can be. If I need to do many $$N$$-bit by $$N$$-bit multiplies, and each bit is determined by flipping a coin, what’s the average number of transistor switches that will be required per multiply, in terms of $$N$$?

## algorithms – Some nodes in a binary search tree are broken. How can we fix it in place by swapping nodes?

Given a binary search tree (which can have any height), the values of some
nodes were changed and now violate the BST property. How can we recover the
binary search tree property in place by just swapping a node with one of its children?

I think it can be done in $$O(n^2)$$ because a BST has height at most $$n$$: start from the parents of the leaves and, whenever two nodes violate the BST property, swap the child with its parent. But I want to give a formal proof; does anyone have a suggestion?
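To make "two nodes violate the BST property" precise: every node must lie within the open interval imposed by its ancestors, not merely compare correctly with its immediate children. A minimal validity checker (illustrative, not part of the repair algorithm itself):

```cpp
#include <climits>

// A node of the tree; children may be null.
struct Node { int val; Node* left; Node* right; };

// A tree is a valid BST iff every node's value lies strictly inside the
// (lo, hi) range inherited from its ancestors.
bool isBST(const Node* n, long lo, long hi) {
    if (!n) return true;
    if (n->val <= lo || n->val >= hi) return false;
    return isBST(n->left, lo, n->val) && isBST(n->right, n->val, hi);
}
```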

## algorithms – Find optimal alignment between 2 strings with gap cost function

I have a homework question that I have been trying to solve for many hours without success; maybe someone can guide me to the right way of thinking about it.

The problem:

Given two strings S1 and S2, find the score of their optimal global alignment with gaps.
The gap costs are given by a general function $$w(k)$$. It is known that for gap lengths $$k \ge d$$, $$w(k)$$ equals a constant C.

Suggest an algorithm solving the problem with `space O(min{|S1|,|S2|}*d)` and `time O(|S1|*|S2|*d).`

Instruction: When choosing the optimal gap length at each matrix entry, process separately the gaps
of length less than d and the longer gaps. Store in each matrix entry, in addition to the regular
optimal value, the optimal value obtained by using the longer gaps.

Now we learned the following 2 algorithms:

Alignment with gaps where we do not know anything about the cost function:

Alignment with affine gap cost function
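For reference, the general gap-cost recurrence (the first of the two methods), with $$V(i,j)$$ the optimal score of aligning the prefixes $$S_1[1..i]$$ and $$S_2[1..j]$$, and $$s(a,b)$$ the substitution score, is usually written as:

$$V(i,j) = \max \begin{cases} V(i-1,\,j-1) + s(S_1[i],\,S_2[j]) \\ \max_{1 \le k \le i} \left( V(i-k,\,j) - w(k) \right) \\ \max_{1 \le k \le j} \left( V(i,\,j-k) - w(k) \right) \end{cases}$$

(This assumes gap costs are subtracted from a similarity score; adjust signs for a distance formulation.) The instruction then suggests splitting each inner maximum into $$k < d$$, handled directly, and $$k \ge d$$, where $$w(k) = C$$ is constant, so only the best $$V$$ value over those offsets needs to be tracked.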

My attempt:

I know that I have to use a d-row table in order to meet the space requirement,
and that I need to combine both methods, but I'm having trouble merging them into one recursive formula.
Here is what I have done so far:

But I’m not sure how to include the cost of elongating an existing gap that is already longer than d, and I'm not even sure how to check whether my recursive formula is correct.
Any help would be appreciated!

## algorithms – Find largest k numbers in Max Heap in O(n log k)

I have a data stream with n numbers and I want to find the largest k of them (k ≪ n). I want to use a priority queue implemented as a max heap of size k, so I will have space complexity O(k), which is space efficient. The algorithm I will use works in the following way:

• Insert the first k elements into the priority queue
• Find the min of them
• While there is more data in the stream (while data < n):
• if element_value > min:
• remove the min
• insert the new value
• find the new min

I have found in various posts that this way I can achieve time complexity O(n * log k). But when I want to find the new min, I will need O(k) in the worst case, and I am confused about what I can do.
(The time complexity would then be O(kn), which is not better than O(n log k).)
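One common remedy for the O(k) find-min concern: if the size-k structure is itself a min-heap, its root is always the minimum of the current top k, so "find new min" is O(1) and each replacement costs O(log k), giving O(n log k) overall. A sketch using `std::priority_queue`:

```cpp
#include <functional>
#include <queue>
#include <vector>

// Keep the k largest stream elements with a min-heap of size k.
// Returns them in ascending order.
std::vector<int> topK(const std::vector<int>& stream, int k) {
    std::priority_queue<int, std::vector<int>, std::greater<int>> heap; // min-heap
    for (int x : stream) {
        if ((int)heap.size() < k)
            heap.push(x);                     // fill up to k elements
        else if (x > heap.top()) {            // x beats the smallest of the top k
            heap.pop();                       // O(log k)
            heap.push(x);                     // O(log k)
        }
    }
    std::vector<int> out;
    while (!heap.empty()) { out.push_back(heap.top()); heap.pop(); }
    return out;
}
```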

## algorithms – Finding a series of numbers that maximise $$\sum f_i(x_i)$$

Let $$f_1, f_2, \ldots, f_m : \{0, \ldots, m\} \rightarrow \mathbb{Z}$$.

My task is to find an algorithm that finds a series of numbers $$x_1, x_2, \ldots, x_m \in \{0, \ldots, m\}$$ that maximises $$\sum_{i=1}^m f_i(x_i)$$ subject to $$\sum_{i=1}^m x_i \le m$$.

To be honest I can’t really wrap my head around this problem. Any suggestions on ways to look at this problem differently?
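One way to look at it: treat the bound m as a shared budget and fill in, function by function, the best achievable total for each budget, which is a knapsack-style dynamic program. A sketch under the assumption that f is supplied as a matrix, with `f[i][x]` standing for $$f_{i+1}(x)$$ (names illustrative):

```cpp
#include <algorithm>
#include <climits>
#include <vector>

// dp[b] = best achievable sum using the functions processed so far with
// total budget b; each function adds one layer. O(m^3) time, O(m) space.
long long maximizeSum(const std::vector<std::vector<long long>>& f, int m) {
    const long long NEG = LLONG_MIN / 4;       // "unreachable" sentinel
    std::vector<long long> dp(m + 1, 0);       // zero functions: sum is 0
    for (const auto& fi : f) {
        std::vector<long long> next(m + 1, NEG);
        for (int b = 0; b <= m; ++b)
            for (int x = 0; x <= b; ++x)       // spend x of the budget here
                next[b] = std::max(next[b], dp[b - x] + fi[x]);
        dp = std::move(next);
    }
    return dp[m];
}
```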