How do I choose new lightning channels in order to minimize hops when rebalancing?

Currently my fees are quite high when I try to rebalance, and I noticed that I can’t rebalance some channels from any of my other channels. I have only six or seven channels, with capacities between 500,000 and 1,500,000 sats each, and would like to expand to at least 10 for now.

Up until now, my strategy for choosing nodes to open channels with has been to maximise the total number of nodes my node can reach with the fewest hops.
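For context, here is roughly how I evaluate candidates today, as a toy sketch. The real graph would come from my node (e.g. `lncli describegraph`); all node names below are made up:

```python
# Toy sketch: score each candidate peer by the average hop count from my
# node after adding one channel to it. The real graph would come from my
# node (e.g. `lncli describegraph`); node names here are made up.
from collections import deque

def hops_from(graph, start):
    """BFS hop counts from `start` to every reachable node."""
    dist = {start: 0}
    q = deque([start])
    while q:
        u = q.popleft()
        for v in graph.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def avg_hops_with_new_channel(graph, me, peer):
    """Average hops from `me` if a channel me<->peer were opened."""
    g = {n: set(nbrs) for n, nbrs in graph.items()}
    g.setdefault(me, set()).add(peer)
    g.setdefault(peer, set()).add(me)
    dist = hops_from(g, me)
    return sum(dist.values()) / (len(dist) - 1)  # exclude myself

def best_new_peer(graph, me, candidates):
    return min(candidates, key=lambda c: avg_hops_with_new_channel(graph, me, c))
```

This only scores reachability; it says nothing about fees or liquidity, which I suspect matter more for rebalancing.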

How can I simulate a Bayer filter (or just RGB channels) using Photoshop layers?

How can I essentially combine pure red, green, and blue info in 3 layers to create full color?
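To be concrete, this is the combination I mean, sketched in Python rather than Photoshop (pixel values are made up): each layer carries one channel, and an additive blend (Photoshop’s Linear Dodge (Add)) merges them back into full colour.

```python
# The additive combination: three layers that each carry one channel are
# summed per pixel, which is what Linear Dodge (Add) does in Photoshop.
# Pixel values are made up for illustration.
full = [(200, 120, 30), (10, 240, 90)]  # example RGB pixels

red_layer   = [(r, 0, 0) for r, g, b in full]
green_layer = [(0, g, 0) for r, g, b in full]
blue_layer  = [(0, 0, b) for r, g, b in full]

combined = [
    (r1 + r2 + r3, g1 + g2 + g3, b1 + b2 + b3)
    for (r1, g1, b1), (r2, g2, b2), (r3, g3, b3)
    in zip(red_layer, green_layer, blue_layer)
]
assert combined == full  # the sum reconstructs the original exactly
```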

You have to start with pure ‘Red’, pure ‘Green’, and pure ‘Blue’ color information. But that’s not what you can get from a Bayer masked sensor, since the actual colors of each set of filters are not ‘Red’, ‘Green’, and ‘Blue’.

It’s not what we get from the cones in our retinas, either.

Keep in mind that there’s no specific color intrinsic in any wavelength of visible light, or other wavelengths of electromagnetic radiation for that matter. The color we see in a light source at a specific wavelength is a product of our perception of it, not of the light source itself. A different species may well not perceive wavelengths included in the human defined visible spectrum, just as many species of bugs and insects can perceive light in near infrared wavelengths that do not produce a chemical response in human retinas.

Color is a construct of how our eye-brain system perceives electromagnetic radiation at certain wavelengths.

Our Bayer masks mimic our retinal cones far more than they mimic our RGB output devices.

The actual colors to which each type of retinal cone is most sensitive:

[Chart: spectral sensitivity curves of the three types of human retinal cones]

Compare that to the typical sensitivity measurements of digital cameras (I’ve added vertical lines where our RGB – and sometimes RYGB – color reproduction systems output the strongest):

[Chart: measured spectral sensitivities of typical digital camera sensors, with vertical lines at the strongest RGB output wavelengths]

The Myth of “only” red, “only” green, and “only” blue

If we could create a sensor so that the “blue” filtered pixels were sensitive to only 420nm light, the “green” filtered pixels were sensitive to only 535nm light, and the “red” filtered pixels were sensitive to only 565nm light, it would not produce an image that our eyes would recognize as anything resembling the world as we perceive it. To begin with, almost all of the energy of “white light” would be blocked from ever reaching the sensor, so it would be far less sensitive to light than our current cameras are. Any source of light that didn’t emit or reflect light at one of the exact wavelengths listed above would not be measurable at all, so the vast majority of a scene would be very dark or black. It would also be impossible to differentiate objects that reflect a LOT of light at, say, 490nm and none at 615nm from objects that reflect a LOT of 615nm light but none at 490nm, if they both reflected the same amounts of light at 535nm and 565nm. It would be impossible to tell apart many of the distinct colors we perceive.

Even if we created a sensor so that the “blue” filtered pixels were only sensitive to light below about 480nm, the “green” filtered pixels only to light between 480nm and 550nm, and the “red” filtered pixels only to light above 550nm, we would not be able to capture and reproduce an image that resembles what we see with our eyes. Although it would be more efficient than the sensor described above as sensitive to only 420nm, only 535nm, and only 565nm light, it would still be much less sensitive than the overlapping sensitivities provided by a Bayer masked sensor. The overlapping nature of the sensitivities of the cones in the human retina is what gives the brain the ability to perceive color from the differences in the responses of each type of cone to the same light. Without such overlapping sensitivities in a camera’s sensor, we wouldn’t be able to mimic the brain’s response to the signals from our retinas. We would not be able, for instance, to discriminate at all between something reflecting 490nm light and something reflecting 540nm light. In much the same way that a monochromatic camera can not distinguish between any wavelengths of light, but only between intensities of light, we would not be able to discriminate the colors of anything that is emitting or reflecting only wavelengths that all fall within one of the three color channels.
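The role of the overlap can be shown numerically. A sketch with idealised Gaussian response curves (the peak wavelengths follow the cone figures above; the 40nm width is an arbitrary assumption): because the curves overlap, the ratio of the two responses identifies the wavelength independently of intensity, which is exactly what non-overlapping bands cannot do.

```python
# Idealised Gaussian responses peaking at 535nm ("green") and 565nm
# ("red"); the 40nm width is an arbitrary assumption. Because the curves
# overlap, the ratio of responses pins down the wavelength regardless of
# intensity; non-overlapping bands could not do this.
import math

def response(wavelength, peak, width=40.0):
    return math.exp(-((wavelength - peak) / width) ** 2)

def hue_ratio(wavelength):
    g = response(wavelength, 535.0)
    r = response(wavelength, 565.0)
    return r / g  # intensity cancels out of the ratio

# 490nm and 540nm light produce clearly different ratios,
# so they are distinguishable even at unknown brightness.
```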

Think of how it is when we are seeing under very limited spectrum red lighting. It is impossible to tell the difference between a red shirt and a white one. They both appear the same color to our eyes. Similarly, under limited spectrum red light anything that is blue in color will look very much like it is black because it isn’t reflecting any of the red light shining on it and there is no blue light shining on it to be reflected.

The whole idea that red, green, and blue would be measured discretely by a “perfect” color sensor is based on oft-repeated misconceptions about how Bayer masked cameras reproduce color (The green filter only allows green light to pass, the red filter only allows red light to pass, etc.). It is also based on a misconception of what ‘color’ is.

How Bayer Masked Cameras Reproduce Color

Raw files don’t really store any colors per pixel. They only store a single brightness value per pixel.

It is true that with a Bayer mask the light reaching each pixel well is filtered with either a “Red”, “Green”, or “Blue” filter. But there’s no hard cutoff where only green light gets through to a green filtered pixel or only red light gets through to a red filtered pixel. There’s a lot of overlap.² A lot of red light and some blue light gets through the green filter. A lot of green light and even a bit of blue light makes it through the red filter, and some red and green light is recorded by the pixels that are filtered with blue. Since a raw file is a set of single luminance values, one per pixel on the sensor, there is no actual color information in a raw file. Color is derived by comparing adjoining pixels that are each filtered for one of three colors by the Bayer mask.

Each photon vibrating at the corresponding frequency for a ‘red’ wavelength that makes it past the green filter is counted just the same as each photon vibrating at a frequency for a ‘green’ wavelength that makes it into the same pixel well.³

It is just like putting a red filter in front of the lens when shooting black and white film. It doesn’t result in a monochromatic red photo, nor in a B&W photo where only red objects have any brightness at all.
Rather, when photographed in B&W through a red filter, red objects appear a brighter shade of grey than green or blue objects that are the same brightness in the scene as the red object.

The Bayer mask in front of monochromatic pixels doesn’t create color either. What it does is change the tonal value (how bright or how dark the luminance value of a particular wavelength of light is recorded) of various wavelengths by differing amounts. When the tonal values (gray intensities) of adjoining pixels filtered with the three different color filters used in the Bayer mask are compared then colors may be interpolated from that information. This is the process we refer to as demosaicing.
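A toy version of that comparison step, collapsing each RGGB 2x2 block into one RGB pixel. This is a crude half-resolution sketch, not what real raw converters do, but it shows where the colour comes from:

```python
# Crude half-resolution demosaic: each RGGB 2x2 block of single luminance
# values becomes one RGB pixel. Real converters interpolate at full
# resolution; this only illustrates where the colour comes from.
def demosaic_rggb_half(raw):
    rgb = []
    for y in range(0, len(raw), 2):
        row = []
        for x in range(0, len(raw[0]), 2):
            r = raw[y][x]                            # top-left: red site
            g = (raw[y][x + 1] + raw[y + 1][x]) / 2  # average the two greens
            b = raw[y + 1][x + 1]                    # bottom-right: blue site
            row.append((r, g, b))
        rgb.append(row)
    return rgb
```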

What Is ‘Color’?

Equating certain wavelengths of light to the “color” humans perceive for that specific wavelength is a bit of a false assumption. “Color” is very much a construct of the eye/brain system that perceives it and doesn’t really exist at all in the portion of the range of electromagnetic radiation that we call “visible light.” While it is the case that light that is only a discrete single wavelength may be perceived by us as a certain color, it is equally true that some of the colors we perceive are not possible to produce by light that contains only a single wavelength.

The only difference between “visible” light and other forms of EMR that our eyes don’t see is that our eyes are chemically responsive to certain wavelengths of EMR while not being chemically responsive to other wavelengths. Bayer masked cameras work because their sensors mimic the trichromatic way our retinas respond to visible wavelengths of light and when they process the raw data from the sensor into a viewable image they also mimic the way our brains process the information gained from our retinas. But our color reproduction systems rarely, if ever, use three primary colors that match the three respective wavelengths of light to which the three types of cones in the human retina are most responsive.

c# – Event-driven architecture: how should channels be used?

I’m using an event-driven architecture to perform real-time signal processing and to provide independent metrics.

[Diagram: signal-processing node architecture]

I decided to use a Redis cluster to act as a cache and a message bus.

I’m a bit confused about the best route for the architecture. Each node has other nodes which subscribe to its values. For some nodes the information is important and should be written to the DB, while for others it should just be passed along to the next node in the chain (since the value at that time isn’t important enough to store globally).

  1. How much overhead is involved when using channels? Should I use a single channel and parse everything from there, or would it be better to use a channel for each node?

  2. Should every event use Redis? Some parts of the communication don’t need to be stored: (Time, Price, Quantity) come from the exchange, and I don’t see why I would need to publish them to be persisted. However, for simplicity I don’t want to have to manage multiple code paths.
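Here is roughly what I have in mind for per-node channels with a per-event persist flag, so ephemeral ticks and persisted metrics share one code path. The channel/key names and the `PERSISTED` set are placeholders, and the `redis-py` calls are confined to `publish()` so the routing logic works without a server:

```python
# Per-node channels with a per-event persist flag. Channel/key names and
# the PERSISTED set are placeholders; `r` is assumed to be a redis.Redis
# client, used only inside publish() so route() can be tested standalone.
import json

PERSISTED = {"metric"}  # event types worth writing to the DB

def route(event):
    """Return (channel, persist_flag) for an event dict."""
    channel = f"node:{event['node']}"  # one channel per producing node
    persist = event["type"] in PERSISTED
    return channel, persist

def publish(r, event):
    channel, persist = route(event)
    payload = json.dumps(event)
    r.publish(channel, payload)                 # fan out to subscribers
    if persist:
        r.rpush(f"history:{channel}", payload)  # durable copy for a DB writer
```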

windows – ffmpeg batch file that uses the # of audio channels to select rip method

I am looking to write an ffmpeg or python batch file that can take a set of mp4 video files and follow 1 of 2 rip methods based on the number of audio channels. For example, let’s say I have a folder with 5 mp4 video files in it. 3 of them have 5.1 channel audio while the other 2 have 2.0 channel audio. For the mp4 files with 5.1 channel audio, I would like ffmpeg to use a simple bat file like this:

for %%g in (*.mp4) do (
ffmpeg -i "%%g" -vf scale=720:-4 -crf 20 -c:a ac3 -b:a 224k "%%~ng.surround.mp4"
)

For the mp4 files with 2.0 channel audio, I would like them to use another simple ffmpeg bat file:

for %%g in (*.mp4) do (
ffmpeg -i "%%g" -vf scale=720:-4 -crf 20 -c:a aac -b:a 128k "%%~ng.stereo.mp4"
)

The video encoding is identical, but the audio varies based on the number of audio channels in the original video. I am running this on a Windows 10 machine. I’m not very familiar with ffprobe, but I was wondering if it could be used in the batch process to direct ffmpeg toward the correct method, either ac3 for surround or aac for stereo.

To make it even more complicated, I was also thinking about including ffprobe in this process to determine whether any of the videos need to be cropped and, if they do, add the crop to the ffmpeg command as well.
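Here is an untested Python sketch of what I imagine: ffprobe reports the channel count of the first audio stream, and the ffmpeg argument list is built from it, mirroring the bat files above. File names are examples.

```python
# Untested sketch: ffprobe reports the channel count of the first audio
# stream, and the ffmpeg argument list is built from it. The ffprobe and
# ffmpeg flags mirror the bat files above; file names are examples.
import subprocess
from pathlib import Path

def audio_channels(path):
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "a:0",
         "-show_entries", "stream=channels",
         "-of", "default=noprint_wrappers=1:nokey=1", str(path)],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return int(out)

def ffmpeg_args(path, channels):
    stem = Path(path).stem
    if channels >= 6:  # 5.1 -> AC-3 at 224k
        audio, out = ["-c:a", "ac3", "-b:a", "224k"], f"{stem}.surround.mp4"
    else:              # stereo -> AAC at 128k
        audio, out = ["-c:a", "aac", "-b:a", "128k"], f"{stem}.stereo.mp4"
    return ["ffmpeg", "-i", str(path), "-vf", "scale=720:-4",
            "-crf", "20", *audio, out]

def main():
    for f in Path(".").glob("*.mp4"):
        subprocess.run(ffmpeg_args(f, audio_channels(f)), check=True)

# main()  # uncomment to process every .mp4 in the current folder
```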

Thank you for any assistance!

lightning network – What are all the payment channels implementations?

The Lightning Network is much more than an implementation of a payment channel. It is a peer-to-peer network whose core purpose is to route payments through a set of multiple payment channels.

Then, I think you are confused with regard to the quote: “many others” here refers to persons, not projects. See this wiki page for mentions of some of them.

Finally, as you mention, Elements defines sidechains (a peer-to-peer network whose core purpose is to update a blockchain that is somehow pegged to Bitcoin), which are not “building on the concept of payment channels”.
However, payment channels may be a good abstraction for interconnecting sidechains with Bitcoin, or between them, though this is not yet deployed.

Rendertexture skips channels sometimes

When rendering a camera to a render texture (for a portal effect), it seems, under certain circumstances, to skip rendering the red channel entirely.

I cannot put my finger on why this might be happening at all. It seems to depend on the angle of the camera, as it stays color-shifted (or not) if I keep the camera in the same position/rotation.

As can be seen in the below pictures, the camera preview does in fact pick up on the full color.

If it is perhaps relevant, I am working in the URP with HDR enabled.

EDIT: Upon further investigation, it seems it can actually skip any channel; it just skips red far more often. I just ran into a case where only the green channel was rendered…
It appears to be linked to being forced to render HDR color, for whatever reason?

[Image: Render Texture output]

sharepoint online – Map new MS Teams channels to existing folders (am I doing things correctly?)

We have a SharePoint modern team site which contains a document library with 2 folders, “General” & “test”, as follows:

[Screenshot: document library with “General” and “test” folders]

But when I access this SharePoint site inside MS Teams, I can only see the “General” folder, as follows:

[Screenshot: MS Teams showing only the “General” folder]

So is it fine/recommended if, inside MS Teams, I create a new channel named “test”, so that the new channel gets mapped automatically to the “test” folder inside the SharePoint document library? Then I would have my MS Teams as follows:

[Screenshot: MS Teams with “General” and “test” channels]

I am doing this because the user asked us to have all the main folders under the Software Development title (on the same level as the General channel), so he does not have to click on General to access the main folder. So is creating a new MS Teams channel for each main folder a valid approach, or does it have drawbacks?