## numerical linear algebra – Numerically solving the optimization problem $\min \| x \|_{\ell^1}$ s.t. $\| Ax-b \|_{\ell^2} \leq \delta$

Consider a linear system $$Ax=b$$ with matrix $$A$$ and right-hand side $$b$$, and suppose one is interested in a sparse solution of this system. In the situation where the right-hand side is corrupted by noise, one can solve the minimization problem
$$\min \| Ax-b \|_{\ell^2} \quad \text{s.t.} \quad \| x \|_{\ell^1} \leq \delta.$$
This corresponds to the LASSO algorithm with regularization parameter $$\delta$$. On the other hand, one can try to solve the optimization problem
$$\min \| x \|_{\ell^1} \quad \text{s.t.} \quad \| Ax-b \|_{\ell^2} \leq \delta. \tag{1}$$
This problem was, for instance, considered in Candès’ famous paper “Towards a mathematical theory of super-resolution”. I’m interested in solving problem (1) numerically with Python, but I have limited Python skills. I was wondering if there is any implementation which solves problem (1). For the LASSO there are many packages, but I couldn’t find one for problem (1) so far.

Thanks a lot for your help!


## oc.optimization and control – how to draw a diagram of a rectangle using optimization

The question is:
A rectangle has its two lower corners on the x-axis and its two upper corners on the curve $$y = 10e^{-x^2/18}$$.

A) draw the diagram

I am not being lazy and asking someone to just solve it. I genuinely need to learn how to solve it and would like to compare the correct answer with mine.
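For part A), the diagram is just the bell-shaped curve with a rectangle whose lower corners sit on the x-axis and whose upper corners sit symmetrically on the curve. A minimal matplotlib sketch (the half-width `x0 = 3` is an arbitrary illustration choice, not the optimal value):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

def curve(x):
    return 10 * np.exp(-x**2 / 18)

xs = np.linspace(-10, 10, 400)
x0 = 3.0          # half-width of the rectangle (illustrative, not optimal)
y0 = curve(x0)    # height: upper corners lie on the curve

fig, ax = plt.subplots()
ax.plot(xs, curve(xs), label=r"$y = 10e^{-x^2/18}$")
# Rectangle with lower corners (-x0, 0) and (x0, 0) on the x-axis.
ax.add_patch(plt.Rectangle((-x0, 0), 2 * x0, y0, fill=False))
ax.legend()
fig.savefig("rectangle_diagram.png")
```

The optimization part then amounts to maximizing the area $$A(x) = 2x \cdot 10e^{-x^2/18}$$ over the half-width $$x > 0$$.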


## mathematical optimization – NMinimize differential evolution: how do search points and initial points really work?

I am puzzled by the settings of a specific method for `NMinimize` called `DifferentialEvolution`, given for example here. The settings include both `"SearchPoints"`, or the size of the population of evolving points, and `"InitialPoints"`, or the initial population(?). What I don’t quite understand is that `"SearchPoints"` can be set to a value different from the number of specified `"InitialPoints"` and `NMinimize` will often happily proceed, and sometimes it won’t.

What might be happening under the hood? If there are fewer `"InitialPoints"` than `"SearchPoints"`, will the population be filled up with random points to meet the specified number of `"SearchPoints"` before evolution begins, or does it proceed with a population that is in fact smaller than `"SearchPoints"`? What about the opposite case?

Here is an example where we give far fewer `"InitialPoints"` than there are `"SearchPoints"`.

``````Clear[f, c, v, x1, x2, y1, y2, y3];
f = 2 x1 + 3 x2 + 3 y1/2 + 2 y2 - y3/2;
c = {x1^2 + y1 == 5/4, x2^(3/2) + 3 y2/2 == 3, x1 + y1 <= 8/5,
   4 x2/3 + y2 <= 3, y3 <= y1 + y2, 0 <= x1 <= 10, 0 <= x2 <= 10,
   0 <= y1 <= 1, 0 <= y2 <= 1,
   0 <= y3 <= 1, {y1, y2, y3} \[Element] Integers};
v = {x1, x2, y1, y2, y3};

NMinimize[{f, c}, v, Method -> "DifferentialEvolution"]
(*{7.66718, {x1 -> 1.11803, x2 -> 1.31037, y1 -> 0, y2 -> 1, y3 -> 1}}*)

points = 5;
searchpoints = 50;
listpoints = Transpose[{RandomReal[{0, 10}, points],
    RandomReal[{0, 10}, points], RandomReal[{0, 1}, points],
    RandomReal[{0, 1}, points],
    RandomReal[{0, 1}, points]}];

NMinimize[{f, c}, v,
 Method -> {"DifferentialEvolution", "SearchPoints" -> searchpoints,
   "InitialPoints" -> listpoints}]
(*{7.66718, {x1 -> 1.11803, x2 -> 1.31037, y1 -> 0, y2 -> 1, y3 -> 1}}*)
``````

If there are more `"InitialPoints"` than there are `"SearchPoints"`, `NMinimize` sometimes works and sometimes doesn’t, depending, for instance, on the random seed.
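I don’t know what `NMinimize` actually does internally, but one plausible behavior (and one a hand-rolled differential-evolution implementation might use) is to pad a too-small initial population with random points up to `SearchPoints`, and to truncate a too-large one. A Python sketch of that padding logic, offered purely as a guess at what could be happening, not as Wolfram’s actual algorithm:

```python
import numpy as np

def prepare_population(initial_points, search_points, bounds, rng=None):
    """Pad (or truncate) an initial population to exactly `search_points` rows.

    This mimics one *guess* at NMinimize's behavior; it is not its actual code.
    bounds: sequence of (low, high) pairs, one per variable.
    """
    rng = rng or np.random.default_rng()
    pop = np.asarray(initial_points, dtype=float)
    lo, hi = np.asarray(bounds, dtype=float).T
    if len(pop) < search_points:
        # Fill the shortfall with uniform random points inside the bounds.
        extra = rng.uniform(lo, hi, size=(search_points - len(pop), len(lo)))
        pop = np.vstack([pop, extra])
    else:
        # Too many initial points: keep only the first `search_points`.
        pop = pop[:search_points]
    return pop
```

Under this hypothesis the first case (fewer initial points) always proceeds, while whether an oversized population works would depend on how strictly the implementation validates it, which could explain the seed-dependent failures.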


## mathematical optimization – Maximize a simple expression based on EuclideanDistance takes too long

Do you know if there is a mistake in this command’s syntax? It is very simple, and still it takes too long; I cannot obtain any result at all:

``````Maximize[{EuclideanDistance[{a, d}, {b, e}] + EuclideanDistance[{b, e}, {c, f}] +
   EuclideanDistance[{c, f}, {a, d}],
  0 <= a <= 1, 0 <= d <= 1, 0 <= b <= 1, 0 <= e <= 1, 0 <= c <= 1,
  0 <= f <= 1}, {a, d, b, e, c, f}]
``````
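There is likely no syntax mistake; `Maximize` attempts an exact, symbolic global optimization, and the nested square roots coming from `EuclideanDistance` make that very expensive. A numeric approach is usually the practical route. A sketch of the same problem, the perimeter of a triangle with vertices in the unit square, using SciPy’s local optimizer instead (the starting point is an arbitrary choice of mine, and a local optimizer gives no global guarantee):

```python
import numpy as np
from scipy.optimize import minimize

def neg_perimeter(z):
    # z = (a, d, b, e, c, f): three points (a,d), (b,e), (c,f) in the unit square.
    p = z.reshape(3, 2)
    return -(np.linalg.norm(p[0] - p[1])
             + np.linalg.norm(p[1] - p[2])
             + np.linalg.norm(p[2] - p[0]))

z0 = np.array([0.1, 0.1, 0.9, 0.9, 0.9, 0.1])   # arbitrary starting point
res = minimize(neg_perimeter, z0, bounds=[(0, 1)] * 6)
perimeter = -res.fun   # maximized perimeter found from this start
```

From this start the optimizer drives the vertices into corners of the square; restarting from several random points would give more confidence that the best local solution is global.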


## Demystification of SQL Server optimization process

We would like to see all variants of the query plan considered by the SQL Server optimizer during query optimization. SQL Server offers quite detailed insight via `querytraceon` options. For example, `QUERYTRACEON 3604, QUERYTRACEON 8615` allows us to print out the MEMO structure, and `QUERYTRACEON 3604, QUERYTRACEON 8619` prints out a list of the transformation rules applied during the optimization process. That is great; however, we have several problems with the trace outputs:

1. It seems that the MEMO structure contains only final variants of the query plan, or variants that were later rewritten into the final one. Is there a way to find “unsuccessful/unpromising” query plans?
2. The operators in the MEMO do not contain a reference to the corresponding SQL parts. For example, the LogOp_Get operator does not contain a reference to a specific table.
3. The transformation rules do not contain a precise reference to MEMO operators; therefore, we cannot be sure which operators were transformed by a given transformation rule.

Let me show this with a more elaborate example. Consider two artificial tables `A` and `B`:

``````WITH x AS (
SELECT n FROM
(
VALUES (0), (1), (2), (3), (4), (5), (6), (7), (8), (9)
) v(n)
),
t1 AS
(
SELECT ones.n + 10 * tens.n + 100 * hundreds.n + 1000 * thousands.n + 10000 * tenthousands.n + 100000 * hundredthousands.n as id
FROM x ones, x tens, x hundreds, x thousands, x tenthousands, x hundredthousands
)
SELECT
CAST(id AS INT) id,
CAST(id % 9173 AS int) fkb,
CAST(id % 911 AS int) search,
LEFT('Value ' + CAST(id AS VARCHAR) + ' ' + REPLICATE('*', 1000), 1000) AS padding
INTO A
FROM t1;

WITH x AS (
SELECT n FROM
(
VALUES (0), (1), (2), (3), (4), (5), (6), (7), (8), (9)
) v(n)
),
t1 AS
(
SELECT ones.n + 10 * tens.n + 100 * hundreds.n + 1000 * thousands.n AS id
FROM x ones, x tens, x hundreds, x thousands
)
SELECT
CAST(id AS INT) id,
CAST(id % 901 AS INT) search,
LEFT('Value ' + CAST(id AS VARCHAR) + ' ' + REPLICATE('*', 1000), 1000) AS padding
INTO B
FROM t1;
``````

Right now, I run one simple query

``````SELECT a1.id, a1.fkb, a1.search, a1.padding
FROM A a1 JOIN A a2 ON a1.fkb = a2.id
WHERE a1.search = 497 AND a2.search = 1
OPTION(RECOMPILE,
MAXDOP 1,
QUERYTRACEON 3604,
QUERYTRACEON 8615)

``````

I get quite complex output describing the MEMO structure (you may try it yourself), with 15 groups. Here is the picture that visualizes the MEMO structure as a tree. From the tree one may observe that certain rules were applied before the optimizer found the final query plan, for example `join commute` (`JoinCommute`), `join to hash join` (`JNtoHS`), or `enforce sort` (`EnforceSort`). As mentioned, it is possible to print out the whole set of rewriting rules applied by the optimizer using the `QUERYTRACEON 3604, QUERYTRACEON 8619` options.
The problems:

1. We may find the `JNtoSM` (`Join to sort merge`) rewriting rule in the 8619 list; however, the sort-merge operator is not in the MEMO structure. I understand that the sort-merge was probably more costly, but why is it not in the MEMO?
2. How can we know whether a `LogOp_Get` operator in the MEMO refers to table A or table B?
3. If I see the rule `GetToIdxScan - Get -> IdxScan` in the 8619 list, how can I map it to the MEMO operators?

There are only limited resources about this. I have read many of Paul White’s blog posts about transformation rules and the MEMO; however, the above questions remain unanswered. Thanks for any help.


## plotting – Manipulate and Plot of Tangent Point in Optimization Problem: Solve Problems

I want to illustrate how changes in the values of the exogenous variables and parameters (T, w, α) change the optimal values of the two endogenous variables (f, c) = (f*, c*). The solution is given by a tangency condition and a constraint.

Changes in α should move the U-graph along the Bcon-graph; changes in T and w change the Bcon-graph, and therefore the optimal values of f and c as well as the U-graph.

``````U = f^\[Alpha]*c^(1 - \[Alpha])
Bcon = c - (T - f)*w
MRS = D[U, f]/D[U, c]
AbsSlpCon = D[Bcon, f]
TC = MRS - AbsSlpCon
sols = Solve[{TC == 0, Bcon == 0}, {f, c}]
{SuperStar[f], SuperStar[c]} = {f, c} /. Last[sols]
c1[T_, w_] := c /. Solve[c - (T - f)*w == 0, c]
c2[T_, w_, \[Alpha]_] := c /. Solve[U[SuperStar[f], SuperStar[c]] == U[f, c], c]
Manipulate[Plot[{c1[T, w], c2[T, w, \[Alpha]]}, {f, 0, 24}, PlotRange -> {25, 3000}], {T, 8, 24}, {w, 100, 500}, {\[Alpha], 0, 1}]
``````

Unfortunately,

1. I cannot use Bcon in line 8 to define c1[T_, w_] but have to copy the expression there to get a linear graph in the plot;

2. I get no output for c2[T_, w_, \[Alpha]_] in line 9, which should show the U-graph tangent to the Bcon-graph.

“Solve::ifun: Inverse functions are being used by Solve, so some solutions may not be found; use Reduce for complete solution information.”

Any hints or suggestions?

Thanks!
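As a cross-check on the algebra, independent of the Mathematica issues: assuming the intended setup is a Cobb–Douglas utility U = f^α c^(1−α) with budget line c = (T − f)w, the tangency condition (MRS equal to the budget line’s slope w) plus the budget constraint give the closed forms f* = αT and c* = (1 − α)Tw. A SymPy sketch verifying this:

```python
import sympy as sp

f, c, T, w, alpha = sp.symbols("f c T w alpha", positive=True)

U = f**alpha * c**(1 - alpha)
Bcon = c - (T - f) * w                   # budget constraint, == 0 at optimum
MRS = sp.diff(U, f) / sp.diff(U, c)      # marginal rate of substitution
tangency = sp.Eq(MRS, w)                 # MRS equals the budget line's slope

sol = sp.solve([tangency, sp.Eq(Bcon, 0)], [f, c], dict=True)[0]
f_star = sp.simplify(sol[f])             # expected: alpha*T
c_star = sp.simplify(sol[c])             # expected: (1 - alpha)*T*w
```

If the Mathematica `sols` disagrees with these closed forms, the problem is in the setup rather than in `Manipulate` or `Plot`.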


## optimization – Are lessons on tail recursion transferable to languages that don’t optimize for it?

I’m currently reading through Structure and Interpretation of Computer Programs (SICP). During the course of that book, the lesson of “you can optimize recursive procedures by writing them as tail recursive” is drilled into the reader again and again. In fact, I’m nearly 100 pages in and have yet to see a single for or while loop – it’s all been recursion. This is starting to worry me. To my knowledge, optimizing tail calls in a way that effectively turns tail-recursive procedures into iterative procedures is not a common feature in modern programming languages. This gives me my question: If I’m using a language that does not optimize tail recursion, how can I apply the lessons SICP has been teaching? Is the knowledge at all transferable?
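One way the lessons transfer: a tail-recursive procedure carries all of its state in its arguments, so it translates mechanically into a loop even in a language (like Python or Java) that does not eliminate tail calls. A sketch of that manual translation:

```python
# Tail-recursive form: all state lives in the arguments (n, acc).
def fact_tail(n, acc=1):
    if n == 0:
        return acc
    return fact_tail(n - 1, acc * n)   # tail call: nothing left to do after it

# Mechanical translation: the tail call becomes a rebinding of the loop state.
def fact_loop(n):
    acc = 1
    while n != 0:
        n, acc = n - 1, acc * n        # mirrors the recursive call's arguments
    return acc

# The loop version uses O(1) stack; the recursive one would overflow for
# large n in a language without tail-call optimization.
```

So the discipline SICP teaches (structuring recursion so the state is explicit in the arguments) is exactly what makes this rewrite trivial, which is why the lessons remain useful.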


## optimization – Minimization of sum of ratios objective with linear constraints

I am trying to solve a minimization model to global optimality. The objective function is a sum of ratios. The numerator of each fraction is quadratic, and the denominator is linear, as follows:

$$\min f(X,Y)=\sum_k \frac{a\,x_k^2+\left(a+b\right)x_k y_k+b\,y_k^2}{2c_k\left(c_k-x_k-y_k\right)}$$

The non-negative variables are

$$X=\left(x_1, x_2, \ldots, x_n\right)$$

$$Y=\left(y_1, y_2, \ldots, y_n\right)$$

As for the rest of the notation, $$a$$, $$b$$, and all $$c_k$$ are positive parameters. All constraints are linear. My perception is that the objective function is not convex, as it is a sum of quasi-convex functions. The number of variables of each type is around 20, so an optimization software product like LINGO, Maple, etc. can solve the model fairly quickly, but global optimality is not guaranteed. Do you have any idea how we can convincingly solve the model to reach the global optimum, or at least ensure our solutions are close to it?
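Sum-of-ratios programs are indeed non-convex in general, and guaranteed global methods (e.g. branch-and-bound schemes from the fractional-programming literature) are specialized. A pragmatic first check is multistart local optimization: if many random starts all reach the same value, that is at least empirical evidence of near-global optimality. A sketch on a toy instance of the objective above, with n = 2 and an illustrative linear constraint $$x_k + y_k \geq 1$$ that I made up to stand in for the unspecified linear constraints (all parameter values are arbitrary):

```python
import numpy as np
from scipy.optimize import minimize

# Toy instance with n = 2 ratio terms; a, b, c_k chosen arbitrarily.
a, b = 1.0, 2.0
c = np.array([3.0, 4.0])

def f(z):
    # z = (x1, x2, y1, y2)
    x, y = z[:2], z[2:]
    num = a * x**2 + (a + b) * x * y + b * y**2
    den = 2 * c * (c - x - y)
    return np.sum(num / den)

# Box bounds chosen so that c_k - x_k - y_k > 0 everywhere (denominator > 0).
bounds = [(0, 1.4), (0, 1.9), (0, 1.4), (0, 1.9)]
# Illustrative linear constraints x_k + y_k >= 1 (my assumption, not the model's).
cons = [{"type": "ineq", "fun": lambda z, k=k: z[k] + z[2 + k] - 1.0}
        for k in range(2)]

rng = np.random.default_rng(0)
best = None
for _ in range(20):   # multistart from random points in the box
    z0 = rng.uniform([bd[0] for bd in bounds], [bd[1] for bd in bounds])
    res = minimize(f, z0, bounds=bounds, constraints=cons, method="SLSQP")
    if res.success and (best is None or res.fun < best.fun):
        best = res
```

This gives no certificate of global optimality, but agreement across restarts, or a lower bound from a convex relaxation if one can be derived, narrows the gap considerably.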


## optimization – Graph coloring with fixed-size color classes

I’m interested in coloring a graph, but with slightly different objectives than the standard problem. It seems like the focus of most graph-coloring algorithms (DSATUR etc) is to minimize the number of color classes used.

My goal, in contrast, is to maximize the number of color classes of fixed size N.

As a concrete example, say I have a graph with `100` nodes, and I’d like to color the graph with color classes of size `N = 30`. With an optimal algorithm and the right graph, I could find 3 such groups that color 90 total nodes, with 10 nodes left over. A lesser algorithm might only produce 2 such groups, with 40 nodes left over that cannot be colored with a size-30 color class.

I figure I can solve this problem with a Greedy Algorithm, but it won’t be optimal. Or I could model this in a constraint solver, but it might not employ some clever graph-specific tricks that could come in handy.

Does this specific problem have a name? Or an established algorithm to solve it? Thank you!
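One useful framing: a color class is an independent set, so the problem is to pack as many disjoint independent sets of size exactly N as possible (it is related to equitable coloring and to independent-set packing, though I am not aware of a single standard name). The greedy baseline mentioned above is easy to sketch; it repeatedly grows an independent set among the uncolored nodes and emits it once it reaches size N, with no optimality guarantee:

```python
def greedy_fixed_size_classes(adj, N):
    """Greedily pack disjoint independent sets of size exactly N.

    adj: dict mapping each node to the set of its neighbors (symmetric).
    Returns a list of color classes, each a set of exactly N nodes.
    """
    remaining = set(adj)
    classes = []
    while True:
        cls = set()
        # Grow an independent set among the remaining (uncolored) nodes.
        for v in sorted(remaining):            # deterministic order
            if all(u not in adj[v] for u in cls):
                cls.add(v)
                if len(cls) == N:
                    break
        if len(cls) < N:
            return classes                     # no further size-N class found
        classes.append(cls)
        remaining -= cls
```

A constraint or ILP formulation (binary variable "node v gets class i", independence and exact-size constraints, maximize the number of used classes) would give the optimal packing that this greedy pass can miss.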


## Optimization for simple 360 walk-around in Reactjs

I am developing a feature with React.js, a car 360° walk-around, but I’m running into a lot of trouble with latency and asset requests.

These problems are explained below.

You can see the example, demo, or reference that I’m trying to reproduce (only the drag feature):
https://spins.spincar.com/spincarcomdemo/wp0ab2a70jl135405

And right now this is all I have done, my page:

https://d2eidjcwcgbmpq.cloudfront.net/

This is the link to the source code of my page (GitHub):

https://github.com/otekdo/sample-carousel

• Extra note: I’m only focused on the drag feature.

Now I’ll try to explain the first problem:

1. Repetitive image requests:

What I mean by this is that every time I start dragging on the canvas, a new request for the image is made.

Click this link in order to see the image which shows these repeated requests.

If you open the devtools of your preferred browser (for example, Chrome DevTools), you can see that both my page and the demo page pre-load the images before the user can use the drag feature.

But the main difference is:

1. On my page, every time I start dragging on the canvas, a new image request is made, which is not optimal for cost and performance.

2. On the demo page, no new request happens; it just pre-loads the images as explained above.

Now, this is the help I am asking for:

I would love any advice on how to avoid these repetitive requests. That’s why I also uploaded my source code, so you can download it, make the changes, and show me a workaround. I’ve already spent two days racking my brain looking for a solution, and I’m open to big changes.

Now I’ll try to explain the second problem:

1. Latency of the drag feature

What I mean by this latency bug is that the dragging feature on the demo page is faster than the dragging feature on my website.

I would love any advice on how to make my website’s dragging as fast as the demo example.

In other words: I would love to find a way to make my canvas render as responsively and quickly as the demo page.