opengl – Normal calculation based on the Voronoi pattern

I apply a 3D Voronoi pattern to a mesh. With these loops I can calculate the cell position, an ID and the distance.

But I would like to calculate a normal based on the generated pattern.
How can I use this pattern and the associated cells to create a normal, or to realign the current normal?

The goal is to give the mesh a faceted look. The normals within each cell should all point in the same direction, and adjacent cells should point in different directions. These directions should be based on the original mesh normals; I do not want to completely discard the mesh normals and have them point in random directions.

That's how I generate the Voronoi pattern.

float3 p = floor(position);
float3 f = frac(position);

float3 id = 0.0;            // hash of the winning cell, used as its ID
float distance = 10.0;      // squared distance to the nearest cell centre
float3 cellPosition = 0.0;  // integer coordinates of the winning cell
float3 normal = 0.0;        // <-- this is what I would like to derive

for (int k = -1; k <= 1; k++)
{
    for (int j = -1; j <= 1; j++)
    {
        for (int i = -1; i <= 1; i++)
        {
            float3 cell = float3(float(i), float(j), float(k));
            float3 random = hash3(p + cell);             // per-cell hash
            float3 r = cell - f + random * angleOffset;  // offset to the jittered cell centre
            float d = dot(r, r);                         // squared distance to that centre

            if (d < distance)
            {
                id = random;
                distance = d;
                cellPosition = cell + p;
                normal = ?   // how should this be computed?
            }
        }
    }
}
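One possible way to fill in that last line (my own sketch, not from the original post): once the loop has found the winning cell, derive a deterministic per-cell direction from the cell's hash and blend it with the interpolated mesh normal. Here meshNormal and facetStrength are assumed inputs that do not appear in the code above, and hash3 is assumed to return values in [0, 1].

// After the loop: tilt the original mesh normal by a per-cell random direction.
float3 cellHash    = hash3(cellPosition);               // same hash the winning cell was picked with
float3 randomDir   = normalize(cellHash * 2.0 - 1.0);   // map [0,1] to [-1,1]
float3 facetNormal = normalize(lerp(meshNormal, randomDir, facetStrength));

Because cellPosition is constant across a cell, every point of that cell gets the same facetNormal, which produces flat, faceted shading, while blending toward meshNormal keeps the directions anchored to the original surface orientation instead of being fully random.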

Performance Optimization – Accelerate the calculation on the Mac Pro

Recently I had the opportunity to move my license from a MacBook laptop to a Mac Pro. Although I am fairly ignorant in this area, I understand that the Mac Pro should perform calculations much faster than my laptop.

My problem is that, when running the same notebook, the runtime on the Mac Pro is actually slower! That is very frustrating. I am posting on this forum in the hope that someone can shed some light on the subject and finally get me out of this ignorance…

Here are the specifications of my MacBook:
[screenshot: MacBook specifications]

And here are the specifications of the Mac Pro:
[screenshot: Mac Pro specifications]

I am not familiar with the concepts of parallel computing. I suspect that parallel computation only helps in certain cases, and not per se for solving matrix equations or simplifying long expressions. However, I see that the Mac Pro's processor speed is higher (3 GHz), so it seems fair to expect the same notebook to run faster on the Mac Pro.

Here is a screenshot of my settings:
[screenshot: settings]
And here is a screenshot showing the percentage of CPU used:
[screenshot: CPU usage]

I have not found a similar question on this forum. However, if someone points one out to me, I will delete this as a duplicate, and I apologize in advance if that is the case. Of course, any comment or explanation is always welcome. Thanks!

magento2 – Base Price + Percentage to be displayed and used for the price calculation

A Magento store has to update its prices daily from a CSV file.

The price in the CSV is the "base price". How can I increase that price by a certain percentage to add the profit margin, instead of pre-processing the CSV?

I found the following file:
storeUri/vendor/magento/module-catalog/Model/Product/Type/Price.php

and changed the getPrice() method to:

public function getPrice($product)
    {
        $percentageProfit = 1.07; // multiplies by 1.07, i.e. adds 7%
        return $product->getData('price') * $percentageProfit;
    }

It seems to work, but I am not sure whether it interferes with other features across the board (e.g. volume discounts, coupons, etc.).

Thank you very much.
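Not part of the original question, but a common alternative worth noting: rather than editing Price.php under vendor/ (which is overwritten on upgrade), the same markup can be applied from a custom module via an after-plugin, registered in that module's etc/di.xml against Magento\Catalog\Model\Product\Type\Price. The module and class names below are placeholders; this is only a sketch, and it carries the same caveat about volume discounts, coupons, etc.

<?php
namespace Vendor\PriceMarkup\Plugin;

/**
 * Hypothetical after-plugin that multiplies the base price by a markup
 * instead of modifying the file under vendor/magento/module-catalog/.
 */
class AddMarkupToPrice
{
    private const MARKUP = 1.07; // +7%

    /**
     * Runs after \Magento\Catalog\Model\Product\Type\Price::getPrice().
     *
     * @param \Magento\Catalog\Model\Product\Type\Price $subject
     * @param float|string|null $result the original base price
     * @return float
     */
    public function afterGetPrice(\Magento\Catalog\Model\Product\Type\Price $subject, $result)
    {
        return (float) $result * self::MARKUP;
    }
}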

Differential geometry – Tautological 1-form, identifications in the pullback calculation

In Lee's Introduction to Smooth Manifolds, he introduces the (coordinate-free) definition of the tautological $1$-form through a pullback calculation. It looks like he makes some identifications along the way, but I am having trouble understanding them. So I did the calculation from scratch, but I do not know how to connect it with what Lee does.

For a manifold $M$ we have the following data:

$\pi \colon T^*M \to M,\quad (q, \varphi) \mapsto q$

$d\pi_{(q,\varphi)} \colon T_{(q,\varphi)}(T^*M) \to T_qM$, where $\bigl(d\pi_{(q,\varphi)} v\bigr)(f) = v(f \circ \pi)$ for a smooth real-valued $f$,

$d\pi^*_{(q,\varphi)} \colon T^*_qM \to T^*_{(q,\varphi)}(T^*M)$, where $d\pi^*_{(q,\varphi)}(\omega_q)(v) = \omega_q\bigl(d\pi_{(q,\varphi)}(v)\bigr)$.

So, in coordinates, if $\omega_q = \sum a_i(q)\, dx^i$, where it is understood that $dx^i$ acts on $T_qM$, then

$d\pi^*_{(q,\varphi)}(\omega_q) = \sum (a_i \circ \pi)(q,\varphi)\, d\pi^*_{(q,\varphi)}(dx^i) = \sum a_i(q)\, d(x^i \circ \pi).$

In particular, if $\omega = dx^i$, then $d\pi^*_{(q,\varphi)}(dx^i) = d(x^i \circ \pi)$, where it is understood that $d(x^i \circ \pi)$ acts on $T_{(q,\varphi)}(T^*M)$.

Lee has $d\pi^*(dx^i) = dx^i$, so I guess he is already making some identifications: since the domain and codomain of $d\pi^*$ are different, this cannot be literally true. Is the identification $T_{(q,\varphi)}(T^*M) \cong T^*_{(q,\varphi)}(T^*M)$? How do I see this intuitively? I am lost in symbol pushing. This is not a homework assignment; I am learning differential geometry on my own. Does anyone have a hint for a more careful treatment of this problem?
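For what it is worth, here is one way to read Lee's shorthand that is consistent with the calculation above (my own gloss, not a quote from the book): in the induced chart $(x^i, \xi_i)$ on $T^*M$, the coordinate function $x^i \circ \pi$ is again written simply as $x^i$, so

$$d\pi^*_{(q,\varphi)}\bigl(dx^i\big|_q\bigr) = d\bigl(x^i \circ \pi\bigr)\big|_{(q,\varphi)} = dx^i\big|_{(q,\varphi)},$$

where the last equality is pure notation: the symbol $dx^i$ on the right denotes the differential of the coordinate function on $T^*M$, not on $M$. On this reading no identification of $T_{(q,\varphi)}(T^*M)$ with $T^*_{(q,\varphi)}(T^*M)$ is needed, only an abuse of notation.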

Encoding the tail operation in the untyped lambda calculus

Exercise 5.2.8 in Pierce proposes encoding lists with a fold operation; I have learned that this encoding is also known as Scott encoding. I got most of the encoding right:

$\text{nil} = \lambda c.\ \lambda n.\ n$

$\text{cons} = \lambda h.\ \lambda t.\ \lambda c.\ \lambda n.\ c\; h\; (t\; c\; n)$

$\text{isnil} = \lambda l.\ l\; (\lambda h.\ \lambda t.\ \text{fls})\; \text{tru}$

$\text{head} = \lambda l.\ l\; (\lambda h.\ \lambda t.\ h)\; \text{fls}$

but tail defeated me; I guess I did not think about the nil case. However, the book's solution astonished me:

$\text{tail} = \lambda l.\ \text{fst}\; (l\; (\lambda x.\ \lambda p.\ \text{pair}\; (\text{snd}\; p)\; (\text{cons}\; x\; (\text{snd}\; p)))\; (\text{pair}\; \text{nil}\; \text{nil}))$

Can someone explain the strategy behind this encoding?
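A sketch of the strategy, as I read it (my own unpacking, not from Pierce): the fold carries along a pair whose second component is the portion of the list processed so far and whose first component is that portion's tail. Writing the step function as $s = \lambda x.\ \lambda p.\ \text{pair}\;(\text{snd}\;p)\;(\text{cons}\;x\;(\text{snd}\;p))$, it maps a pair $(t, r)$ to $(r,\ \text{cons}\;x\;r)$, so folding over $l = [h_1, \ldots, h_k]$ gives

$$l\; s\; (\text{pair}\;\text{nil}\;\text{nil}) \;\longrightarrow\; \text{pair}\;[h_2,\ldots,h_k]\;[h_1,\ldots,h_k],$$

and $\text{fst}$ extracts the tail. On the empty list the fold simply returns the seed $\text{pair}\;\text{nil}\;\text{nil}$, so $\text{tail}\;\text{nil}$ evaluates to $\text{nil}$, which is exactly the awkward nil case mentioned above.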

Calculus and analysis – A possible bug in Limit

I tried to calculate the limit of an expression, say:

Limit[1, c -> I, Assumptions -> Im[c] > 1]

And I got a Limit::cas message. At first I thought that maybe the name c was used by some package, but the problem persists even if I replace c with other names. The version of Mathematica on my computer is "11.0.0 for Linux x86 (64-bit) (July 28, 2016)". Does anyone see similar problems with other versions?

calculated column – Calculating the fiscal month

I have seen a number of calculations for the fiscal year and quarter, but I hope someone can help me with a calculated column for fiscal months. Our company's fiscal months run from the 29th of one month to the 28th of the next. Given a date and time field, I would like to return a result of the form YYYY-MM. (For example, if the date is 11/28/2019, the fiscal month is 2019-11; if the date is 11/29/2019, the fiscal month is 2019-12.)

I am working in SharePoint 2013.

I have no idea where to start. Any help is greatly appreciated.
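Not from the original post, but a hedged starting point, assuming the date/time column is called [Date]: SharePoint 2013 calculated columns use Excel-style formulas, and (as in Excel) DATE() should roll a month value of 13 over into January of the next year, so shifting any date on or after the 29th forward by one month and formatting the result gives the YYYY-MM string:

=TEXT(DATE(YEAR([Date]),MONTH([Date])+IF(DAY([Date])>=29,1,0),1),"yyyy-mm")

The column's return type would need to be "Single line of text". For example, 11/28/2019 stays in month 11 and yields 2019-11, 11/29/2019 is shifted to month 12 and yields 2019-12, and 12/29/2019 rolls over to 2020-01.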

Calculus and analysis – Speeding up the exact evaluation of an integral with non-constant limits?

I am working on a problem whose result is an integral of the form

$\text{Integrate}\left[c, \{x_1, 0, m_1\}, \{x_2, x_1, m_2\}, \ldots, \{x_n, x_{n-1}, m_n\}\right]$

where

$m_1 \le m_2 \le \cdots \le m_{n-1} \le m_n$ and $n > 50$; all $m_i$ are non-negative rationals, and $c$ is a non-negative rational constant.

That is, each subsequent integration variable has a lower limit equal to the previous variable.

I can evaluate this with acceptable speed using NIntegrate with AdaptiveQuasiMonteCarlo as the method, but I would prefer, if possible, to get exact results.

Evaluating the exact result as written above takes a long time.
Is there a technique to speed this up in Mathematica?

A small example that runs in a reasonable amount of time:

Integrate[1, {x1, 0, 2}, {x2, x1, 4}, {x3, x1, 6}, {x4, x1, 8}, {x5, 
  x1, 10}, {x6, x1, 12}, {x7, x1, 14}, {x8, x1, 16}, {x9, x1, 
  18}, {x10, x1, 20}, {x11, x1, 22}, {x12, x1, 24}, {x13, x1, 
  26}, {x14, x1, 28}, {x15, x1, 30}, {x16, x1, 32}, {x17, x1, 
  34}, {x18, x1, 36}, {x19, x1, 38}, {x20, x1, 40}, {x21, x1, 
  42}, {x22, x1, 44}, {x23, x1, 46}, {x24, x1, 48}, {x25, x1, 
  50}, {x26, x1, 52}, {x27, x1, 54}, {x28, x1, 56}, {x29, x1, 
  58}, {x30, x1, 60}, {x31, x1, 62}, {x32, x1, 64}, {x33, x1, 
  66}, {x34, x1, 68}, {x35, x1, 70}, {x36, x1, 72}, {x37, x1, 
  74}, {x38, x1, 76}, {x39, x1, 78}, {x40, x1, 80}, {x41, x1, 
  82}, {x42, x1, 84}, {x43, x1, 86}, {x44, x1, 88}, {x45, x1, 
  90}, {x46, x1, 92}, {x47, x1, 94}, {x48, x1, 96}, {x49, x1, 
  98}, {x50, x1, 100}]

This takes a few seconds on my computer, while the NIntegrate version takes a few hundredths of a second. I would be glad if I could get the exact result in a few tenths of a second.

Ideas?
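Not from the original question, but one observation about the example exactly as written: every inner variable $x_2, \ldots, x_{50}$ appears only in its own limits $\{x_k, x_1, 2k\}$ and the integrand is constant, so each inner integral simply contributes a factor $(2k - x_1)$, and the whole expression collapses to a one-dimensional integral that Integrate evaluates exactly almost instantly:

(* equivalent 1-D form of the 50-variable example above *)
Integrate[Product[2 k - x1, {k, 2, 50}], {x1, 0, 2}]

If the real problem uses chained limits $\{x_k, x_{k-1}, m_k\}$ as in the general form, the same idea still suggests integrating iteratively from the innermost variable outward, since each step integrates a polynomial and returns a polynomial in the next variable.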