## replacement – Fast Fourier coefficients for a long sum

I have a very long expression involving a sum of exponentials, say:

``````
Sum[(RandomReal[] + RandomReal[] a) Exp[2 I ii x Pi], {ii, -10, 10}]
``````

I want the fastest way to get the list of Fourier coefficients. The naive ways are pretty time-consuming:

``````
Table[FourierCoefficient[f, x, ii], {ii, -10, 10}]; // Timing (* 3.49906 *)
Table[1/(2 Pi) Integrate[f Exp[-ii I x], {x, -Pi, Pi}], {ii, -10, 10}]; // Timing (* 7.3649 *)
``````

Is there a way to make it faster?
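One standard trick worth noting: a finite trigonometric sum of 21 harmonics is fully determined by 21 equally spaced samples over one period, so a single FFT recovers every coefficient exactly, with no symbolic integration. Below is a self-contained NumPy sketch with its own 21-term test function (not the asker's `f`); the same sampling idea can be applied in Mathematica with its `Fourier` function, minding its normalization conventions.

```python
import numpy as np

# A NumPy sketch of the sampling trick: a sum of 21 complex exponentials
# exp(2*pi*1j*k*x), k = -10..10, is fully determined by 21 equally spaced
# samples over one period, so one FFT recovers every coefficient exactly.
rng = np.random.default_rng(1)
n = 10
N = 2 * n + 1
coeffs = rng.random(N) + 1j * rng.random(N)            # stand-ins for c_{-10..10}

k = np.arange(-n, n + 1)
x = np.arange(N) / N                                   # sample points in [0, 1)
f = (coeffs[:, None] * np.exp(2j * np.pi * k[:, None] * x)).sum(axis=0)

c_hat = np.fft.fft(f) / N                              # index m holds c_m (mod N)
recovered = np.concatenate([c_hat[-n:], c_hat[:n + 1]])  # reorder to k = -10..10
```

Evaluating the expression numerically at the sample points and doing one FFT should take milliseconds rather than seconds.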

## php – Searching for best practice – Long calculating method

I’m working on a Laravel project for engineers. The software calculates the stability and pressure for specific wooden connections.
As you can imagine, depending on the connection, there are many calculations with many values to be done.

First, there was the approach of writing one calculation function/method per connection type (inside the connection type class), where all necessary fields are calculated.
Then it came to my mind to create a method that calculates each value separately:

``````
public function calculateConnection($connectionValues) {
    $connection->a = $this->getAValue($connectionValues);
    $connection->b = $this->getBValue($connectionValues);
    ...
}

public function getAValue($connectionValues) {
    return ...
}

public function getBValue($connectionValues) {
    return ...
}
``````

I have a much better overview of the specific values when creating a function for each value instead of calculating everything in one method. The issue, however, is that the values depend on each other: value `a` is needed to calculate `b`.

So all methods need to be called in the correct order. Of course, I could check whether `a` has already been calculated before calculating `b`, but that would blow up the code so much.

I'm interested in what the best practice is in this kind of situation.
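One common way out, sketched here in Python rather than PHP (the formulas and input keys are hypothetical stand-ins for the real engineering calculations), is memoize-on-first-access: each value is its own cached property, and a property that needs `a` simply reads it, so the dependency order resolves itself without any "has `a` been calculated yet?" checks.

```python
from functools import cached_property

class Connection:
    """Each value is a lazily computed, cached property; dependencies
    between values resolve themselves on first access."""

    def __init__(self, values):
        self.values = values  # raw input values (hypothetical keys)

    @cached_property
    def a(self):
        # hypothetical formula standing in for getAValue()
        return self.values["width"] * 2

    @cached_property
    def b(self):
        # depends on a: reading self.a computes and caches it on demand
        return self.a + self.values["load"]

conn = Connection({"width": 120, "load": 4.5})
result = conn.b  # b can be asked for first; a is computed automatically
```

The same idea can be expressed in PHP with a small per-value cache array inside the class, so each getter stays a separate, readable method.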

## updates – If I buy the cheapest possible Android smartphone today, how long can I keep it without having to buy a new one?

I’m forced to get a “smartphone”, even though I want nothing to do with them, simply to be able to use it as a “digital identification” thing and similar things for companies that refuse to provide a website and offer only an “app”. So I’m trying to find the cheapest possible one.

There appear to be only two “kinds”: iPhone (ridiculously expensive beyond words) and Android (less expensive, but infested with Google). Since I don’t actually want to use it for its intended purpose, and will only power it on briefly when forced to, I’m going to have to pick an Android one.

The cheapest one I can find where I live is $189. A bit more than I expected. Could’ve sworn they sold sub-$100 ones not long ago. Either way, what I’m wondering is whether they are going to make me keep buying new ones every few years, or whether I can just perpetually update this one with new versions of the “Android” OS. Or do new versions eventually require newer phone hardware and refuse to install, leaving me with a rotting brick that eventually stops working?

I have a feeling that these things are made to become obsolete rather quickly. Is that feeling correct?

## INNER JOIN taking so long for even simple query

My inner queries run fast when executed independently (<2 sec). I then applied an INNER JOIN to the two subquery results, and the final output takes around 25 minutes.

``````SELECT DISTINCT
tableA.consumer,
tableA.ltm,
tableA.consumer_dpip,
tableA.consumer_dpname,
tableA.consumer_port,
tableB.provider,
tableB.provider_DPIP,
tableB.provider_DPNAME,
CASE when instr(tableB.provider_DPIP, tableA.consumer_dpip) = 0
then  tableB.provider_port
else null end ,
tableB.endpoint,
tableB.endpoint_port
FROM
(
select DISTINCT
sender.consumername   AS consumer,
devices.ltmip         AS ltm,
devices.dpip          AS consumer_dpip,
devices.dpname        AS consumer_dpname,
appdevices.port       AS consumer_port
from
vz.t_vz_mapnames                  mapnames,
vz.t_vz_sender                    sender,
vz.t_vz_application_devices       appdevices,
vz.t_vz_device_info               devices
WHERE
mapnames.sno = sender.parentid
AND appdevices.appid = sender.consumername
AND upper(appdevices.zone) = devices.zone
AND mapnames.type = 'consumerRoutingTable'
AND appdevices.type = 'Consumer'
AND devices.env = 'Production'
) tableA
inner join
(
select  sender.consumername   AS consumer,
sender.providername   AS provider,
LISTAGG(distinct(devices.dpip), ',') WITHIN GROUP(order by devices.dpip) as provider_DPIP,
LISTAGG(distinct(devices.dpname), ',') WITHIN GROUP(order by devices.dpname) as provider_DPNAME,
appdevices.port as provider_port,
dest.url              AS endpoint,
dest.port             AS endpoint_port
from
vz.t_vz_mapnames                  mapnames,
vz.t_vz_sender                    sender,
vz.t_vz_application_devices       appdevices,
vz.t_vz_device_info               devices,
vz.t_vz_ep_destination_template   dest
WHERE
mapnames.sno = sender.parentid
AND appdevices.appid = sender.providername
AND upper(appdevices.zone) = devices.zone
AND dest.applicationid = sender.providername
AND mapnames.type = 'consumerRoutingTable'
AND appdevices.type = 'Provider'
AND devices.env = 'Production'
AND dest.env = 'PROD'
group by sender.consumername,sender.providername,appdevices.port, dest.url, dest.port
) tableB
on tableA.consumer = tableB.consumer and tableB.endpoint is not null and tableB.endpoint <> ' '
order by tableA.consumer, tableA.ltm
``````

Am I doing anything wrong here, or do I need compound indexes? Any input is highly appreciated.

## chain reorganization – How do nodes handle long reorgs?

In a reorg, where a node receives blocks for a chain longer than or equal in length to the chain it previously followed as the longest, what is the process for that node to handle the situation?

I can imagine that when the new chain is clearly longer, the node would have to revert its UTXO set to the point at which the chains diverged, add back to the mempool the transactions that were confirmed since the divergence, and then revalidate all blocks in the new chain. I can also imagine that some of the work already done in validating transactions could be reused, but not all of it.

How is the UTXO set reverted? Do nodes maintain a history of UTXO set diffs?
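Conceptually, yes: Bitcoin Core persists per-block undo data (the rev*.dat files) recording the coins each block spent, so a block can be "disconnected" by dropping the outputs it created and restoring the ones it consumed. A toy Python sketch of that bookkeeping (a simplified key/value model, not Core's actual data structures):

```python
# Toy sketch of per-block undo records: connecting a block returns the data
# needed to revert it, and disconnecting restores exactly the coins spent.
class UtxoSet:
    def __init__(self):
        self.utxos = {}  # (txid, output_index) -> amount

    def connect_block(self, created, spent):
        """Apply a block and return an undo record for a possible reorg."""
        undo_spent = {outpoint: self.utxos.pop(outpoint) for outpoint in spent}
        self.utxos.update(created)
        return {"created": list(created), "spent": undo_spent}

    def disconnect_block(self, undo):
        """Revert a block: drop its new outputs, restore the spent ones."""
        for outpoint in undo["created"]:
            del self.utxos[outpoint]
        self.utxos.update(undo["spent"])

s = UtxoSet()
s.utxos[("a", 0)] = 50
undo = s.connect_block(created={("b", 0): 30}, spent=[("a", 0)])
s.disconnect_block(undo)  # UTXO set is back to its pre-block state
```

During a real reorg the node disconnects blocks back to the fork point, applying each block's undo record in reverse order, then connects and fully validates the blocks of the new branch.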

In the case where the lengths of the chains are not significantly different, I can imagine that a node might want to maintain both chains concurrently until it becomes clear which will be longer. Are nodes programmed to do this?

What is the longest reorg a node can handle without revalidating the entire chain from the genesis block?

These questions are primarily about how the software is currently written, rather than about theoretical ways nodes could handle these cases.

## usb on the go – Why does an OTG mount take a long time on some phones? Can it be sped up?

I have an external 500 GB SSD. I connect it to Android devices via an OTG cable.

It works fine. Once it’s mounted, it’s pretty fast.

The only problem is the mounting time. On some devices, mounting is fast, about 6 seconds, which is fine for me. But on other devices, mounting can take up to 50 seconds. They are all running Android 10, across different models and brands.

I am wondering why the mounting times differ so much among devices, and whether it is possible to speed up mounting on the slow devices (without root).

Thank you very much.

## blockchain – Bitcoin Energy Consumption and Carbon Footprint a good long term investment?

How can Bitcoin be a good long-term investment, given that one transaction is equivalent to the carbon footprint of 986,125 Million VISA transactions?

If they use the Lightning Network, which takes transactions off the blockchain, wouldn’t that defeat the whole purpose of having a ledger of transactions?

Bitcoin Energy Consumption Index

## long exposure – Issues with dark frame subtraction: Dark frames adding “noise” and changing image color/tint

While editing some landscape shots with stars, I tried to use dark frames to reduce the noise.
More precisely, my approach was to take a series of shots, then firstly to subtract dark frames from each shot, secondly to use the mean of the series for the foreground to further reduce noise, and thirdly to use an astro stacking tool (Sequator) to stack the sky.

Instead of reducing noise, the dark frame subtraction:

1. increased the noise, or rather added some dark/monochrome noise.
2. changed the white balance/tinted the image (see below).

I do not understand why this is happening or what I am doing wrong.

Procedure / troubleshooting attempted:

• All photos were shot in succession with the same settings (15 sec, ISO 6400, in-camera dark frame subtraction disabled).

• All photos were shot with the same white balance.

• While shooting the darkframes, both the lens cap and the viewfinder cover were applied.

• Photos were imported from my Pentax K1ii, converted to DNG in LR, and exported to PS without any editing/import presets applied.

• I used PS, placed the darkframe layer(s) above my picture, and used the subtract blending mode.

• I followed basic instructions found here and in various videos on dark frame subtraction in Photoshop. Note that basically all of them cover dark frame subtraction with one frame (or use tools other than Photoshop). I have tried using both one and three frames; the results are similar, albeit more pronounced with three.

• I used the free tool Sequator to subtract dark frames instead (and to align the stars). Adding the dark frames here made absolutely no difference.

• (This is an edit/composite done with the frames I tried to subtract darkframes of)

• A crop of the first picture, with (3) dark frames subtracted:

• A crop of the second picture, without dark frames subtracted:
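For comparison, the calibration that dedicated stackers perform is a plain linear subtraction of an averaged master dark, done on linear (raw-like) data; applying a subtract blend to gamma-encoded, white-balanced renderings is not the same operation, which is one plausible source of the added grain and tint shift. A NumPy sketch with synthetic stand-in data (hypothetical values, not the actual Pentax files):

```python
import numpy as np

# Synthetic stand-ins for linear sensor data: three light frames and
# three dark frames, as single-channel 64x64 arrays.
rng = np.random.default_rng(0)
lights = rng.uniform(100.0, 200.0, size=(3, 64, 64))
darks = rng.uniform(0.0, 10.0, size=(3, 64, 64))

master_dark = darks.mean(axis=0)                       # averaging suppresses random noise
calibrated = np.clip(lights - master_dark, 0.0, None)  # subtract, clip at black
```

The key points are that the subtraction happens per channel on linear values, and that several darks are averaged into one master dark first, so the darks' own random noise is not injected into the lights.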

## cortex prime – Can a player character avoid dying as long as they still have Plot Points?

I’ve read Cortex Prime and now I’m wondering whether a PC can die. The rules say that “you can spend a PP to avoid being taken out of the scene” (which I translate as dying).

Does this mean that as long as a player still has PP, their character can’t die if they don’t wish so?

## sharepoint online – How to see how long a document has until disposition?

I’m new to the SharePoint Records Management space (E5 licence). I’m using a test environment to play around and figure out how things work and what they’ll look like for my team. I’ve created some record labels with event-based retention periods, applied these labels to some documents, and then created the event. I’m fairly confident I’ve done all of these steps right, but to check, I’m hoping to see how long the documents with this retention label and event have before disposition.

Is there a way to do this? The Data Classification overview shows that no retention labels have been applied. I’ve clicked around, including on the document I applied the label to, and can’t find a way to access this information.