performance – Choosing between network optimisation and CPU usage in client-side web development

There cannot be a universal performance tradeoff. A web app targeted at content creators working from a beefy gaming PC on a gigabit internet connection will settle on a very different tradeoff from a page targeted at people in developing countries using an outdated smartphone over a shaky mobile data connection where bandwidth is measured in the low kbps.

What is striking, though, is that you are focused on generating a massive CSS file. The question is not whether this CSS should be generated locally or pre-built, but whether this CSS approach is appropriate at all. If your issue is the limited support for attr(), trying to brute-force your way around that limitation is questionable. Techniques such as custom properties (variables), style attributes on individual elements, or a bit of JavaScript are likely to be faster and are far better supported. While I cannot recommend them in the general case, frameworks like React and Vue are a nice example of using JavaScript to manage the data flow from an element’s attributes into its CSS styles.

optimization – Non-convex linear program optimisation with an infinite number of OR constraints

I am aware that when a linear problem is subject to OR constraints, it becomes a non-convex optimisation problem. For example,

$x = 0$ OR $1 \leq x \leq 2$.

I could not find a detailed explanation of this situation anywhere online. I’d appreciate it if anyone could explain it in more detail.
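For what it’s worth, a standard way to make such a disjunction tractable (the usual binary-variable reformulation from mixed-integer programming; my addition, not part of the question) is to introduce $z \in \{0,1\}$ and replace the OR by

$$z \leq x \leq 2z, \qquad z \in \{0,1\}.$$

If $z = 0$ this forces $x = 0$; if $z = 1$ it forces $1 \leq x \leq 2$. The non-convexity is thereby moved into the integrality constraint on $z$, and the model can be handed to a MILP solver instead of an LP solver.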



Drone may have autonomously attacked humans for the first time

A military drone may have autonomously attacked humans for the first time without being instructed to do so, according to a recent report by the UN Security Council.

The report, published in March, claimed that the AI drone – a Kargu-2 quadcopter – produced by Turkish military tech company STM, attacked retreating soldiers loyal to Libyan General Khalifa Haftar.

The 548-page report by the UN Security Council’s Panel of Experts on Libya did not delve into detail on whether there were any deaths due to the incident, but it raises questions about whether global efforts to ban killer autonomous robots before they are built may be futile.

Over the course of the year, the UN-recognised Government of National Accord pushed the Haftar Affiliated Forces (HAF) back from the Libyan capital Tripoli, and the drone may have been operational since January 2020, the experts noted.

“Logistics convoys and retreating HAF were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2,” the UN report noted.

Kargu is a “loitering” drone that uses machine learning-based object classification to select and engage targets, according to STM, and also has swarming capabilities that allow 20 drones to work together.

“The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability,” the experts wrote in the report.

Many robotics and AI researchers, including Elon Musk, and various other prominent personalities like Stephen Hawking and Noam Chomsky, have called for a ban on “offensive autonomous weapons”, such as those with the potential to search for and kill specific people based on their programming.

Experts have warned that the datasets used to train these autonomous killer robots to classify and identify objects such as buses, cars and civilians may not be sufficiently complex or robust, and that the artificial intelligence (AI) system may learn the wrong lessons.

They have also warned of the “black box” in machine learning, in which the decision-making process in AI systems is often opaque, posing a real risk of fully autonomous military drones engaging the wrong targets, with the reasons remaining difficult to unravel.

Zachary Kallenborn, a national security consultant specialising in unmanned aerial vehicles, believes there is a greater risk of something going wrong when multiple such autonomous drones communicate and coordinate their actions, such as in a drone swarm.

“Communication creates risks of cascading error in which an error by one unit is shared with another,” Kallenborn wrote in The Bulletin.

“If anyone was killed in an autonomous attack, it would likely represent a historic first known case of artificial intelligence-based autonomous weapons being used to kill,” he added.


multivariable calculus – Constrained optimisation problem: finding the nature of stationary points using a graphical approach

In a question I’m asked to

  1. Find the stationary points of $f(x,y,z) = x^2 + y^2 + z^2$ subject to the constraint $g(x,y,z) = x^2 + 2y^2 - z^2 = 1$

  2. Identify their nature by sketching the constraint surface $g$ in different coordinate planes.

Now the first part’s easy using the method of Lagrange multipliers.

Here’s my working:

Set $\phi(x,y,z,\lambda) = f - \lambda g$, then

$\partial_x \phi = 2x - 2x\lambda$

$\partial_y \phi = 2y - 4y\lambda$

$\partial_z \phi = 2z + 2z\lambda$

$\partial_\lambda \phi = -g = 0$

$\partial_x \phi = 0 \Rightarrow x = 0$ or $\lambda = 1$.

Case $x = 0$: $\partial_y \phi = 0 \Rightarrow y = 0$ or $\lambda = \frac{1}{2}$. If $y = 0$, then from the constraint we get $-z^2 = 1$ – no solutions in $\mathbb{R}$. Hence it must be that $\lambda = \frac{1}{2}$. So $\partial_z \phi = 0$ implies $z = 0$. Then from the constraint, $2y^2 = 1 \Rightarrow y = \pm \frac{1}{\sqrt{2}}$.

Case $x \neq 0$: Then $\lambda = 1$, so $\partial_y \phi = \partial_z \phi = 0 \Rightarrow y = z = 0$. Then from the constraint we get $x^2 = 1 \Rightarrow x = \pm 1$.

Hence the stationary points are $\color{red}{(x,y,z,\lambda) = (\pm 1, 0, 0, 1),\ (0, \pm \frac{1}{\sqrt{2}}, 0, \frac{1}{2})}$

I’m not sure about the second part. I’ve sketched $g = 1$ in each of the coordinate planes:

  • the $yz$-plane (a hyperbola, zeros at $y = \pm \frac{1}{\sqrt{2}}$, asymptotes at $z = \pm\sqrt{2}\,y$)

  • the $xy$-plane (an ellipse, intersecting the $x$- and $y$-axes at $x = \pm 1$ and $y = \pm \frac{1}{\sqrt{2}}$, respectively)

  • the $xz$-plane (a hyperbola, zeros at $x = \pm 1$, asymptotes at $z = \pm x$)


So the resulting 3D surface is a hyperboloid. But how do I decide on the nature of the stationary points? What’s the intuition behind this? And why am I sketching the constraint surface, not the surface defined by the function $f$ that we want to minimise?

I know how to do this using the Hessian, but what I’m interested in is the graphical approach.

We have $H = \begin{bmatrix} 2(1-\lambda) & 0 & 0 \\ 0 & 2(1-2\lambda) & 0 \\ 0 & 0 & 2(1+\lambda) \end{bmatrix}$

So the eigenvalues of $H$ are its diagonal entries.

  • At the SP with $\lambda = 1$, the eigenvalues are then $0$, $-2$, $4$
  • At $\lambda = \frac{1}{2}$, the eigenvalues are $1$, $0$, $3$.

At the SP with $\lambda = 1$ there are both positive and negative eigenvalues, suggesting a saddle point, while at $\lambda = \frac{1}{2}$ the nonzero eigenvalues are all positive, suggesting a minimum (right?).
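One way to build that graphical intuition numerically is to eliminate one variable with the constraint and probe $f$ on the surface near each stationary point. A small Python sketch of the idea (my own illustration, not part of the question):

```python
# Restrict f = x^2 + y^2 + z^2 to the surface x^2 + 2y^2 - z^2 = 1.

def f_near_x_axis(y, z):
    # Eliminate x via x^2 = 1 - 2y^2 + z^2 (valid near (+-1, 0, 0)),
    # giving f restricted to the surface as 1 - y^2 + 2z^2.
    x2 = 1 - 2 * y**2 + z**2
    return x2 + y**2 + z**2

def f_near_y_axis(x, z):
    # Eliminate y via y^2 = (1 - x^2 + z^2) / 2 (valid near (0, +-1/sqrt(2), 0)),
    # giving f restricted to the surface as 1/2 + x^2/2 + 3z^2/2.
    y2 = (1 - x**2 + z**2) / 2
    return x**2 + y2 + z**2

print(f_near_x_axis(0.0, 0.0))   # 1.0 at the stationary point
print(f_near_x_axis(0.1, 0.0))   # ~0.99: f decreases along y
print(f_near_x_axis(0.0, 0.1))   # ~1.02: f increases along z
print(f_near_y_axis(0.0, 0.0))   # 0.5 at the stationary point
```

Mixed signs of the restricted quadratic terms around $(\pm 1, 0, 0)$ are what a saddle looks like on the surface, while around $(0, \pm \frac{1}{\sqrt{2}}, 0)$ every direction increases $f$. This is also why one sketches $g$ rather than $f$: the nature of a constrained stationary point is a property of $f$ restricted to the constraint surface.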

sql server 2014 – DB optimisation for table having around 35 million records

I have a main table, Table X, with around 75 million records.
I have another table, Y; X and Y have a one-to-many relation. Table Y has a CreatedYear column. I am querying data for the year 2020, and Y has around 35 million records for it. I also have tables A, B and C (each with a foreign key to table X) which have a one-to-one relation with X. I join X, Y, A, B and C with inner joins, with a mandatory filter on CreatedYear that brings the result down to 35 million records. I have applied pagination on this using OFFSET ... FETCH NEXT to get only 30 rows.
When I look at the actual query plan, it is all clustered index seek operations and the cost distribution seems uniform, yet my query takes around 12 minutes to execute. I have selected only the columns I need. Please help me reduce the time for this.

SELECT col1, col2, col3
FROM X
INNER JOIN Y ON X.ID = Y.RefId AND Y.CreatedYear = @Year AND Y.Active = 1

I have a clustered index on RefId and on Id, and a non-clustered index on Y over (RefId, CreatedYear, ActiveInd).

What can be done to optimise this query?

What is the relation between the Lagrange multipliers and the solution of the dual in a linear optimisation problem?

Consider the following linear minimisation problem
$$(1) \qquad \min_{x \geq 0}\; a^\top x \quad \text{s.t. } B^\top x = c$$


  • $x$ is the $p \times 1$ vector of unknowns.

  • $a$ is a $p \times 1$ vector of real numbers.

  • $B$ is a $p \times k$ matrix of real scalars.

  • $c$ is a $k \times 1$ vector of real scalars.

  • $p > k$.

Consider the Lagrangian of (1)

$$(2) \qquad L(x,\mu,\nu) = a^\top x + \mu^\top\left(B^\top x - c\right) + \nu^\top x$$

Consider also the dual of (1)
$$(3) \qquad \max_{y}\; c^\top y \quad \text{s.t. } By \leq a$$

Question: what is the relation between the dual of (1) and the Lagrange multipliers of (1)?

Let $\mathcal{X}^*$ be the set of minimisers of (1).

Let $\mathcal{Y}^*$ be the set of maximisers of (3).

Let $\mathcal{M}^*$ be the set of Lagrange multipliers $\mu$ corresponding to each element of $\mathcal{X}^*$.

Let $\mathcal{V}^*$ be the set of Lagrange multipliers $\nu$ corresponding to each element of $\mathcal{X}^*$.

Is $\mathcal{Y}^* = \mathcal{M}^*$? What is the relation between $\mathcal{Y}^*$ and $\mathcal{V}^*$?
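A sketch of how the two are linked, using the KKT conditions for (1) with the sign convention of (2) (my own derivation, so the signs are worth double-checking). Stationarity and complementary slackness give

$$\nabla_x L = a + B\mu + \nu = 0, \qquad \nu \leq 0, \qquad \nu^\top x = 0,$$

so $a + B\mu = -\nu \geq 0$, i.e. $B(-\mu) \leq a$: the point $y = -\mu$ is feasible for (3). At a primal optimum $x^*$, using $B^\top x^* = c$ and $\nu^\top x^* = 0$,

$$a^\top x^* = (-B\mu - \nu)^\top x^* = -\mu^\top c = c^\top(-\mu),$$

so $y = -\mu$ attains the primal optimal value and is therefore dual optimal. Under this convention one would expect $\mathcal{Y}^* = -\mathcal{M}^*$, and $\nu = By - a$, so $-\nu$ collects the slacks of the dual constraints (the reduced costs).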

php – Fibonacci series optimisation

How can I optimise this? I am getting a timeout error when I execute it.
I am trying to run this on WampServer on Windows 10.
I actually wrote this code for a test, but the test fails when the values of $a and $b are huge.

Any help will be welcome. Thank you.

    /**
     * @param $n
     * @param false $offset
     * @param false $injectResult
     * @return int|mixed
     */

It seems that this part is the culprit:

function fibonacci($n, $offset = false, $injectResult = false)
{
    if ($offset == $n && $injectResult) {
        return $injectResult;
    }
    if ($n == 0) {
        return 0;
    }
    if ($n == 1) {
        return 1;
    }
    if ($n > 1) {
        return fibonacci($n - 1) + fibonacci($n - 2);
    }
}

    /**
     * Function to sum the Fibonacci series result
     * @param $a
     * @param $b
     * @return float|int
     */

This part seems OK:

function sumFibonacci($a, $b)
{
    $diff = $b - $a;
    $result = [];
    for ($i = 0; $i <= $diff; $i++) {
        if ($i == 0) {
            $result[$i] = fibonacci($a);
        } else {
            $result[$i] = fibonacci($a + $i, $a - ($i - 1), $result[$i - 1]);
        }
    }
    return array_sum($result);
}

// Example of how to use it
echo sumFibonacci(38, 58);

How can I optimise this if, for example, $a = 38 and $b = 58, i.e. sumFibonacci(38, 58)?
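For comparison, the same computation can be done iteratively so that each Fibonacci number is produced exactly once. A sketch in Python (the question’s code is PHP; this illustrates the idea rather than being a drop-in replacement):

```python
def sum_fibonacci(a, b):
    """Sum of the Fibonacci numbers F(a) through F(b), inclusive."""
    total = 0
    prev, curr = 0, 1              # F(0), F(1)
    for n in range(b + 1):
        if n >= a:
            total += prev          # prev == F(n) at the top of iteration n
        prev, curr = curr, prev + curr
    return total

print(sum_fibonacci(38, 58))       # finishes instantly, unlike the recursion
```

The same two-variable loop translates directly to PHP, turning an exponential-time recursion into a single linear pass.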


python – Hash Table Optimisation

def checkMagazine(magazine, note):

    # Creating 2 empty dictionaries for the "magazine" and the "note", then filling them up
    UniqueWordsMag = set(magazine)
    UniqueCountMag = [0] * len(UniqueWordsMag)
    UniqueWordDictMag = dict(zip(UniqueWordsMag, UniqueCountMag))

    UniqueWordsNote = set(note)
    UniqueCountNote = [0] * len(UniqueWordsNote)
    UniqueWordDictNote = dict(zip(UniqueWordsNote, UniqueCountNote))

    for i in magazine:
        if i in list(UniqueWordDictMag.keys()):
            UniqueWordDictMag[i] += 1

    for i in note:
        if i in list(UniqueWordDictNote.keys()):
            UniqueWordDictNote[i] += 1

    # Checking for existence in the magazine, then checking for the correct count;
    # print "No" if it does not fulfil the conditions
    Success = False
    DesiredCount = len(note)
    Count = 0

    for index, i in enumerate(UniqueWordsNote):
        if i in list(UniqueWordDictMag.keys()):
            if UniqueWordDictNote[i] <= UniqueWordDictMag[i]:
                Count += UniqueWordDictNote[i]

    if Count == DesiredCount:
        Success = True
    print("Yes" if Success else "No")

def main():
    mn = input().split()

    m = int(mn[0])

    n = int(mn[1])

    magazine = input().rstrip().split()

    note = input().rstrip().split()

    checkMagazine(magazine, note)

if __name__ == "__main__":
    main()

This is a question on HackerRank called Hash Tables: Ransom Note:

Given the words in the magazine and the words in the ransom note, print Yes if he can replicate his ransom note exactly using whole words from the magazine; otherwise, print No.

My code is currently taking too long on some lists, e.g. lists of size 30,000. Are there any optimisations I can make to make this a bit more legible and faster?

Here is an example format:

6 4
give me one grand today night
give one grand today

Output: yes

6 5
two times three is not four
two times two is four

Output: no
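A common linear-time alternative uses collections.Counter to count each list once and compare the counts directly; a sketch (my own code, not the poster’s):

```python
from collections import Counter

def check_magazine(magazine, note):
    # Count every word in each list once, then check that the note never
    # needs more copies of a word than the magazine supplies.
    mag_counts = Counter(magazine)
    note_counts = Counter(note)
    ok = all(mag_counts[w] >= c for w, c in note_counts.items())
    print("Yes" if ok else "No")
    return ok

# The two examples from the question:
check_magazine("give me one grand today night".split(),
               "give one grand today".split())   # prints Yes
check_magazine("two times three is not four".split(),
               "two times two is four".split())  # prints No
```

Equivalently, `not (note_counts - mag_counts)` works, since Counter subtraction keeps only positive counts.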

optimization – How to solve an optimisation equation with unknown parameters?

Given an example equation:

$$ z = Mx + Ny $$

where M, N are unknown parameters and x, y, z are features of a dataset.

My initial guess is to use gradient descent and the least squares error to obtain M and N from the dataset.

After that, we construct the inequality constraints and apply them, together with the fitted equation (with M and N now known), in a Lagrange-multiplier method to minimise z.

Is this a correct approach to the problem?
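The least-squares step described here can be sketched in a few lines. Everything below is synthetic for illustration (the dataset and the "true" values M = 2, N = -3 are made up); np.linalg.lstsq solves the problem in closed form, and gradient descent on the squared error would converge to the same values:

```python
import numpy as np

# Synthetic dataset: features x, y and target z with true M = 2, N = -3.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = rng.normal(size=200)
z = 2.0 * x - 3.0 * y + rng.normal(scale=0.01, size=200)

# Least-squares fit of z = M*x + N*y: stack the features column-wise.
A = np.column_stack([x, y])
(M, N), *_ = np.linalg.lstsq(A, z, rcond=None)
print(M, N)   # close to 2 and -3
```

With M and N known, the second stage (minimising z subject to the inequality constraints) becomes a standard constrained optimisation, e.g. via scipy.optimize.minimize.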

performance – Sorting Algorithms Optimisation Python

def MinimumSwaps(Queue):
    MinSwaps = 0
    for i in range(len(Queue) - 1):
        if Queue[i] != i + 1:
            for j in range(i + 1, len(Queue)):
                if Queue[j] == i + 1:
                    Queue[i], Queue[j] = Queue[j], Queue[i]
                    MinSwaps += 1
    return MinSwaps

def main():
    Result = MinimumSwaps([7, 1, 3, 2, 4, 5, 6])
    print(Result)

if __name__ == "__main__":
    main()

The question: You are given an unordered array consisting of consecutive integers [1, 2, 3, …, n] without any duplicates. You are allowed to swap any two elements. You need to find the minimum number of swaps required to sort the array in ascending order.

The issue is that what I have provided is inefficient and fails on very large arrays; however, I’ve tried to optimise it as much as I can and I’m not aware of another technique to use. This question is likely related to a particular sorting algorithm, but is there any way to modify the above code to make it faster?
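For reference, a linear-time variant (my own sketch, not the poster’s code) exploits the fact that the values are exactly 1..n: the sorted position of value v is v - 1, so a value-to-index map lets us jump straight to the element that belongs at each position instead of scanning for it:

```python
def minimum_swaps(arr):
    arr = list(arr)                           # work on a copy
    pos = {v: i for i, v in enumerate(arr)}   # value -> current index
    swaps = 0
    for i in range(len(arr)):
        want = i + 1                          # value that belongs at index i
        if arr[i] != want:
            j = pos[want]                     # where that value currently sits
            pos[arr[i]], pos[want] = j, i     # update the map before swapping
            arr[i], arr[j] = arr[j], arr[i]
            swaps += 1
    return swaps

print(minimum_swaps([7, 1, 3, 2, 4, 5, 6]))   # 5
```

The swap count is unchanged (each swap still puts at least one element in its final place); only the inner linear search is replaced by a dictionary lookup.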