## Optimization – executing a MySQL cursor in a stored procedure takes a long time

I have a MySQL procedure with a cursor. An example of the cursor query is given below:

```sql
DECLARE cs CURSOR FOR
  (SELECT a.mag_id FROM A a
   WHERE a.creation_date BETWEEN v_fromdate AND v_todate
     AND a.type_id IN (SELECT type_id FROM key2 WHERE sessionId = v_sessionId)
     AND a.mag_id  IN (SELECT magid FROM key1 WHERE sessionId = v_sessionId ORDER BY magid))
  UNION
  (SELECT b.combo_mag_id FROM B b
   WHERE b.creation_date BETWEEN v_fromdate AND v_todate
     AND b.type_id      IN (SELECT type_id FROM key2 WHERE sessionId = v_sessionId)
     AND b.combo_mag_id IN (SELECT magid FROM key1 WHERE sessionId = v_sessionId ORDER BY magid));

DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;

SET v_cur_time = NOW();
SET v_cur_query = 'cursor';
OPEN cs;
SELECT TIMESTAMPDIFF(SECOND, v_cur_time, NOW()) INTO v_diff;
SELECT CONCAT('Current Query: ', v_cur_query, ' Time taken: ', v_diff);
```

Both table A and table B have millions of records. I've partitioned both tables by creation date. For a 3-month date range, the cursor takes almost 4 minutes to execute. But when I ran the same query with the same parameters and date range in the MySQL Workbench editor, it took only 22 seconds. Can anyone tell me why it runs faster in the SQL editor, and, since I need to use it in the stored procedure, is there a way to tweak it?

## Algorithm analysis – would an optimization version of the 3-partition problem be NP-complete / NP-hard?

The objective function $$f$$ (to minimize) is not fully formalized in the question. It says the groups should be "as close as possible to the goal", so it seems natural to require that $$f$$ is $$0$$ when the sum of the elements in each group equals the goal, and greater than $$0$$ otherwise.

Two examples of such functions are the maximum and the sum of the absolute differences between the sum of the elements in each group and the target.

Under these conditions, the optimization problem above is strongly NP-hard, since the answer to the decision problem is "yes" if and only if an optimal solution $$S$$ of the optimization problem satisfies $$f(S) = 0$$.

It would not be NP-complete because it does not belong to NP (which only contains decision problems).
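As a concrete illustration (a sketch only; the question itself never fixes $$f$$), both candidate objectives vanish exactly on a perfect grouping and are positive otherwise:

```python
# Sketch of the two candidate objectives named above (an assumption:
# the original question never formalizes f).  `groups` is a partition of
# the input into triples; `goal` is the common target sum.

def f_max(groups, goal):
    """Maximum absolute deviation of any group sum from the goal."""
    return max(abs(sum(g) - goal) for g in groups)

def f_sum(groups, goal):
    """Sum of the absolute deviations of the group sums from the goal."""
    return sum(abs(sum(g) - goal) for g in groups)

# A perfect 3-partition of [1, 2, 4, 3, 3, 1] with goal 7:
perfect = [(1, 2, 4), (3, 3, 1)]
assert f_max(perfect, 7) == 0 and f_sum(perfect, 7) == 0

# An imperfect grouping of the same numbers (sums 6 and 8):
imperfect = [(1, 2, 3), (4, 3, 1)]
assert f_max(imperfect, 7) == 1 and f_sum(imperfect, 7) == 2
```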

## Performance – query optimization and indexing

The database relations can be found here: http://www.tpc.org/tpc_documents_current_versions/pdf/tpc-h_v2.18.0.pdf

```sql
SELECT l_orderkey, SUM(l_extendedprice * (1 - l_discount)) AS revenue, o_orderdate, o_shippriority
FROM customer, orders, lineitem
WHERE c_mktsegment = 'BUILDING' -- '(SEGMENT)'
  AND c_custkey = o_custkey
  AND l_orderkey = o_orderkey
  AND o_orderdate < '1995-03-15' -- '(DATE)'
  AND l_shipdate > '1995-03-15' -- '(DATE)'
GROUP BY l_orderkey, o_orderdate, o_shippriority
ORDER BY revenue DESC, o_orderdate;
```

I wanted to optimize this query by rewriting it with explicit joins, like this:

```sql
SELECT l_orderkey, SUM(l_extendedprice * (1 - l_discount)) AS revenue, o_orderdate, o_shippriority
FROM customer
JOIN orders ON c_custkey = o_custkey
JOIN lineitem ON o_orderkey = l_orderkey
WHERE c_mktsegment = 'BUILDING' -- '(SEGMENT)'
  AND o_orderdate < '1995-03-15' -- '(DATE)'
  AND l_shipdate > '1995-03-15' -- '(DATE)'
GROUP BY l_orderkey, o_orderdate, o_shippriority
ORDER BY revenue DESC, o_orderdate;
```

Now all I have to do is create indexes for the tables. For this problem, a maximum of 3 indexes per table can be created, with the primary key counting as one of them.

The indexes I've created are the following (these are the columns used in the GROUP BY and ORDER BY clauses):

```sql
CREATE INDEX idx_l_orderkey ON lineitem(l_orderkey);
CREATE INDEX idx_o_orderdate_shippriority ON orders(o_orderdate, o_shippriority);
```

Are there any other indexes I can use to further speed up this query? I feel I need another index on c_mktsegment for the customer table. If there are other strategies I have forgotten, please enlighten me. Many thanks.

## Performance optimization – speeding up integration over implicit regions

Similar to my previous question, running

```mathematica
Clear["Global`*"];
ineq = (1 + y + y^2) r[4] y > 0;
Integrate[1, {r[1], r[2], r[3], r[4]} \[Element] reg1] // Simplify // AbsoluteTiming
```

produced

$$\left\{165.709,\ \frac{8 y^4 + 26 y^3 + 39 y^2 + 32 y + 12}{24 (y + 1)^2 \left(y^2 + y + 1\right)}\right\}$$

Is there a way to speed this up using the implicit region? Or is there a faster way to find integration limits from an inequality? An explanation why this is not possible is also accepted.

## opengl – optimizing texture fetches using higher mip levels

Suppose I have a shader program in DirectX or OpenGL that renders a full-screen quad, and in the pixel/fragment shader I sample some huge textures at random texture coordinates. The texture coordinate is the same for all texture samples within one shader invocation, but it differs between invocations. These fetch operations cause a performance drop. I even suspect that, because of the size of the textures, the GPU texture cache is not big enough and is not used efficiently.

Now my theoretical question: can I optimize performance by using low-resolution (e.g. 32×32) mask textures created by mipmapping the large textures, so that when the value in the mask texture at a higher mip level shows that a given texture coordinate is not relevant, I can skip the full-size texture fetch? Something like this in HLSL (the GLSL code would be quite similar, but without the [branch] attribute):

```hlsl
float2 tc = calculateTexCoordinates();
bool performHeavyComputations = testValue(largeMipmappedTexture.SampleLevel(sampler, tc, 5));

float result = 0;

[branch]
if (performHeavyComputations)
{
    result += largeMipmappedTexture.SampleLevel(sampler, tc, 0);
}
```

About 50% of the texels at mip level 5 fail the test, so many shader invocations would not need to sample the full-size textures.

But this introduces a branch into the code. Can that branch hurt performance more than the full-size texture fetch it avoids? Different GPUs may behave differently, and some may not even support branching, performing two fetches instead of one.

I can test this code on some computers, but my question is theoretical.

And can you suggest further optimizations if this does not work properly?

## mathematical optimization – Find the maximum of x for a set of conditions

I'm trying to find the maximum of x, subject to `0 <= x <= 3` and some other conditions. This works well:

```mathematica
Clear[x, y]
FindMaximum[{x, 0 <= x <= 3}, x]
```

But this does not:

```mathematica
Clear[x, y]
FindMaximum[{x, 0 <= x <= 3 && y == 1}, x]
```

How can I get the maximum x from the second list of constraints?

## Optimization – How do I set up a solid MySQL / MariaDB server for +3000 active connections?

Background

I have an app that connects to an RDS instance on Amazon (r3.large), and it works smoothly. However, the boss does not like spending $1000 a month on it, so we are moving it in-house.

We are getting Dell servers with two physical Xeon processors and a ton of RAM, basically the equivalent of an r5.2xlarge to r5.4xlarge RDS instance.

I'm not an expert on Linux systems; so far I've only installed MySQL through the Apache Friends XAMPP package, and I never get the same result twice. (I'm a beginner, please be nice.)

Considerations

The server will mainly be used as a LAMP server, so other web-based apps will also use MySQL.
The app that uses the current RDS instance on Amazon consists of three Windows executables, each of which opens its own independent connection to the database.

We have +500 PCs in 3 different locations, and we plan to expand to +700 PCs by next year.

We estimate 5,000,000 to 5,500,000 queries per hour on a busy day, and an average of 2,300 active connections to the DB.
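For rough sizing, those hourly estimates translate into the following sustained rates (plain arithmetic, not a benchmark):

```python
# Converting the estimated hourly load above into queries per second.
low, high = 5_000_000, 5_500_000      # queries per hour on a busy day

qps_low, qps_high = low / 3600, high / 3600
print(f"~{qps_low:.0f} to ~{qps_high:.0f} queries/second sustained")
# prints "~1389 to ~1528 queries/second sustained"
```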

A few times a day, the admin team runs extensive reports that are very resource-intensive (they take up to 40 seconds to complete).

Question

Apart from quitting my job and finding a better one, do you have any recommendations for configuring and/or installing MySQL on this server?

Thanks.

## Optimization – How to create a wipe tool algorithm?

I'm creating an algorithm for a smudge tool, and it has to work pixel by pixel.

The concept of the smudge tool is simple:

onMouseMove – copy the pixels of the old point to the new point using a brush stamp

I'm having problems with the bitwise operations; the algorithm does not draw the pixels correctly. (I'm writing this algorithm from scratch, which can lead to stupid mistakes.)

```actionscript
diameter  = brush.size;
_bitData  = _canvas.bitmapData;
_bitWidth = _bitData.rect.width;                           // width of the canvas

_bitVector   = _bitData.getVector(_bitData.rect);          // 1-D vector of uint (canvas pixels)
_brushVector = brush.bitmapData.getVector(brush.bitmapData.rect); // 1-D vector of uint (brush stamp)

// (oldX, oldY): previous mouse point; (X, Y): new mouse point
brushVectorIndex = 0;
for (yIndex = 0; yIndex < diameter; yIndex++)
{
    for (xIndex = 0; xIndex < diameter; xIndex++)
    {
        yCor = yIndex + oldY;
        xCor = xIndex + oldX;

        if (_bitData.rect.contains(xCor, yCor))
        {
            bitVectorIndex_old = (yCor * _bitWidth) + xCor;
            bitVectorIndex_new = ((Y + yIndex) * _bitWidth) + (X + xIndex);

            // Alpha map from the brush combined with the old point's pixel
            brushPixelAlpha = _brushVector[brushVectorIndex] & _bitVector[bitVectorIndex_old] & 0xFF000000;

            // Add the old point's color channels to the alpha map
            brushPixel = brushPixelAlpha | (_bitVector[bitVectorIndex_old] & 0x00FFFFFF);

            // Alpha map for the new pixel
            pixelAlpha = (brushPixel | _bitVector[bitVectorIndex_new]) & 0xFF000000;

            // Add color to the new pixel's alpha map using the brush stamp
            pixel = pixelAlpha | (brushPixel & 0x00FFFFFF);

            _bitVector[bitVectorIndex_new] = pixel;
        }
        brushVectorIndex++;
    }
}
_bitData.setVector(_bitData.rect, _bitVector);
```

Any suggestions for speeding up this code are also welcome, because it runs 10,000 times per frame.
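For comparison, here is a minimal per-pixel sketch in Python of what one smudge step presumably intends: copying the old point's color over the new point, weighted by the brush's alpha. This blending rule is an assumption on my part; note that ANDing the alpha channels, as the code above does, zeroes out bits instead of blending them, which may be one source of the wrong pixels.

```python
# Minimal sketch (assumptions: 32-bit ARGB pixels; the brush alpha is
# used as a blend weight, unlike the bitwise AND in the question).

def blend_pixel(src: int, dst: int, brush: int) -> int:
    """Copy src over dst, weighted by the brush pixel's alpha channel."""
    a = (brush >> 24) & 0xFF                 # brush opacity, 0..255
    out = 0
    for shift in (16, 8, 0):                 # R, G, B channels
        s = (src >> shift) & 0xFF
        d = (dst >> shift) & 0xFF
        out |= ((s * a + d * (255 - a)) // 255) << shift
    return (dst & 0xFF000000) | out          # keep the destination alpha

# A fully opaque brush copies the source color verbatim:
assert blend_pixel(0xFF112233, 0xFF445566, 0xFF000000) == 0xFF112233
# A fully transparent brush leaves the destination untouched:
assert blend_pixel(0xFF112233, 0xFF445566, 0x00000000) == 0xFF445566
```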

## Filter Optimization Problem – MathOverflow

Let me first give you some background information. I am developing embedded software and have a filtering problem where I could use the help of mathematicians.

I have a filter table with the following properties:

• up to 100 entries
• 50 entries support exact-match filtering
• 50 entries support range-match filtering

Because this is embedded software, range filtering is done on bits. That means I can only filter over power-of-two (base-2) ranges. This part cannot be changed.
For example:

• Numbers from 0-1 (1 bit)
• Numbers from 0-3 (2 bits)
• Numbers from 0-7 (3 bits)
• Numbers from 0-15 (4 bits)

Now to my problem: assuming the numbers are random, what would be the best way to arrange them?

In reality we're talking about 32-bit numbers, but for the sake of simplicity, let's focus on numbers up to 1000.

An example: if I have the numbers 1, 5, 565, one solution would be to add them to exact filters, since I only have 3 of them.

Another example: I have the numbers 0, 1, 2, 9, 565. I could add them all to exact filters.
Another approach is to put 0, 1, 2 into a range filter and, after a match, check that I have not received a false positive (the number 3). The rest can go to exact filters.

As you can see, the problem is not trivial once we move to higher numbers. I want to define a cost function, but I do not know how to set it up properly.

The most important requirement is that all numbers are covered by the filters.
The second is that I use the maximum number of exact filters.
The third is that the range filters produce as little overhead as possible, i.e. as few false positives to re-check as possible after a filter reports a match.

The reason for these "weights" is that this part is performance-critical: it runs often, and after this step I want the smallest possible number of additional checks to determine whether the number really was in the filters.
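To make the overhead term concrete, here is a small sketch (in Python, purely illustrative) of an aligned power-of-two range filter and its false positives, matching the 0, 1, 2 example above:

```python
# Sketch of one power-of-two (aligned) range filter as described above:
# a filter that wildcards the low `bits` bits matches the whole aligned
# block containing `base`.  False positives are matched values that are
# not in the wanted set, i.e. the "overhead" the third criterion minimizes.

def block(base: int, bits: int) -> range:
    """All values matched by a range filter ignoring the low `bits` bits."""
    lo = base & ~((1 << bits) - 1)          # align base down to the block start
    return range(lo, lo + (1 << bits))

def false_positives(wanted: set, base: int, bits: int) -> set:
    """Values the filter accepts although they are not wanted."""
    return set(block(base, bits)) - wanted

# The example from the question: putting 0, 1, 2 into a 2-bit range filter
# covers 0..3, so 3 is a false positive that must be rejected afterwards.
assert set(block(0, 2)) == {0, 1, 2, 3}
assert false_positives({0, 1, 2}, 0, 2) == {3}
```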