I’ve had cameras in the past with very sensitive shutter buttons. Sometimes what I intended as a half press was enough pressure to register as a full press. The resulting vibration of the camera as the mirror, shutter, and shutter-reset mechanism were actuated was usually enough to disengage the full press, and the effect was very much like what you describe. Practice a bit with your camera and see if that might be the case.
Regardless of the camera’s configuration, if the mirror is cycling, the shutter is operating, and an image is being recorded to the memory card upon an actual half press of the shutter button, your camera is malfunctioning. There is no configuration option for the EOS 77D (or any other Canon EOS camera of which I’m aware) that enables an image to be recorded when the shutter button is half pressed. Images should only be recorded with a full press of the shutter button.
Like xenoid advises in the comments to the question, I’d recommend trying a wired cable release (you can get a generic one for about $5 on Amazon or eBay) with a two-stage button. The cheaper wired releases would probably be better in this regard, as there is more of a difference in the “feel” between a half press and a full press than with your camera’s shutter button.
If the malfunction continues with a half press of the wired release, then the issue is not in your shutter button itself but in the way the camera responds to the shutter button’s position.
If the issue is not present when you half press the button on the wired remote, then the issue is probably in the contacts in the shutter button and the camera is interpreting that as a very short duration full press.
Every soft fork or consensus change involves a (very small) non-zero risk of a network split. That risk is considerably lower for a soft fork than for, say, a hard fork (where all nodes need to upgrade). That’s why soft forks aren’t attempted every month or year. All you can do is minimize that risk.
Aaron lays out some scenarios that are theoretically possible. Any incompatibility between “Bitcoin Core” and “Bitcoin Taproot” during the Speedy Trial deployment is, in my view, highly unlikely. If Speedy Trial fails to activate and we reach November 2022 (note 2022, not 2021) without miners activating, then we are in a similar scenario to the UASF in 2017, where it depends on what the economic majority is running. I can’t predict what the economic majority would be running in November 2022, but I highly suspect the delayed Taproot activation would be at the top of everyone’s minds.
You do have to weigh these risks of a network split against the risk of miners deliberately blocking Taproot activation, potentially forever. If we were to say “no more UASFs ever again” because we don’t want to take any network-split risk, we would be handing miners a permanent veto over the activation of soft forks that have community consensus. So you have to weigh the risk of the latter, which would be just as concerning (if not more concerning) to people.
So in summary these are subtle trade-offs. A number of developers have worked hard to minimize the risk of a network split. But it doesn’t get to zero unless you literally never try a soft fork again. And that would mean that Bitcoin would never seriously improve again.
I have a very simple standalone Java application that, essentially, allows users to tick which messages they want to transfer and then click Go.
We now need to provide different users with a different set of messages on the UI.
e.g.:
Company 1 can only tick messages A, B, C; the rest aren’t visible to them.
Company 2 can only tick messages B, D, E; the rest aren’t visible to them.
Company 3 can tick any of A, B, C, D, E.
The UI is loosely coupled to the business logic, in that each “Message” is processed by the same code behind it, just with different parameters passed in. So the UI is just starting one thread per message, of the same object type but with different parameters per thread, depending on what was ticked.
I am trying to understand whether there’s any way of getting Eclipse, Ant, or some other tool to generate the UI from a per-company config file, rather than having to maintain multiple copies of the UI code in our codebase.
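For what it’s worth, you may not need Eclipse or Ant to generate anything at build time: since the business logic already takes parameters, the UI itself can be assembled at runtime from a small per-company config file. Here is a minimal Swing sketch of that idea; the file name, property key, and class name are all made up for illustration, not taken from your codebase:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;
import javax.swing.*;

// Sketch: build the checkbox list at runtime from a per-company config
// file instead of maintaining one UI class per company.
// Hypothetical config format (e.g. company1.properties):
//   visibleMessages=A,B,C
public class MessagePicker {
    public static void main(String[] args) throws IOException {
        Properties config = new Properties();
        // Each company gets its own file; the path is passed on the command line.
        try (FileInputStream in = new FileInputStream(args[0])) {
            config.load(in);
        }

        JPanel panel = new JPanel();
        panel.setLayout(new BoxLayout(panel, BoxLayout.Y_AXIS));
        // Only the messages listed in the config become visible checkboxes.
        for (String messageId : config.getProperty("visibleMessages", "").split(",")) {
            panel.add(new JCheckBox(messageId.trim()));
        }

        JFrame frame = new JFrame("Message Transfer");
        frame.add(panel);
        frame.pack();
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}
```

The ticked boxes would then map straight onto the existing one-thread-per-message logic, and onboarding a new company becomes a config change rather than another copy of the UI code.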
For a given Android Studio release published by Google, how can I cryptographically verify the authenticity and integrity of the .tar.gz file that I downloaded before I copy it onto a USB drive and attempt to install it on my laptop?
Today I wanted to download Android Studio, but the download page said nothing about how to cryptographically verify the integrity and authenticity of their release after download.
I expected to see a message on the download page telling me:
The fingerprint of their PGP release signing key,
A link to further documentation, and
Links to (a) a manifest file (e.g. SHA256SUMS) and (b) a detached signature of that manifest file (e.g. SHA256SUMS.asc, SHA256SUMS.sig, SHA256SUMS.gpg, etc.)
Unfortunately, the only information I found on the download page was how to verify the integrity of the tarball using a SHA-256 checksum found in a table on the same page. Obviously, this checks integrity but not authenticity. And it provides no real security, because the checksum isn’t out-of-band from the .tar.gz itself: anyone who could tamper with the download could just as easily tamper with the checksum on the page.
How can I perform cryptographic integrity and authenticity verification with Google’s Android Studio releases?
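For the integrity half (which is all the download page currently offers), the check boils down to hashing the tarball and comparing against the published value. A minimal Java sketch of that comparison follows; the file name and the expected digest are placeholder assumptions, not real values from the page:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.DigestInputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;

// Sketch: check a downloaded tarball against the SHA-256 value published
// in the table on the download page. File name and digest are placeholders.
public class VerifyChecksum {
    public static void main(String[] args) throws IOException, NoSuchAlgorithmException {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        // Stream the file through the digest so the whole tarball
        // never needs to fit in memory at once.
        try (InputStream in = new DigestInputStream(
                Files.newInputStream(Paths.get("android-studio-linux.tar.gz")), sha256)) {
            in.transferTo(OutputStream.nullOutputStream());
        }
        String actual = HexFormat.of().formatHex(sha256.digest());
        String expected = "<sha-256 value copied from the download page>";
        System.out.println(actual.equalsIgnoreCase(expected)
                ? "OK: checksum matches"
                : "MISMATCH: computed " + actual);
    }
}
```

As the question notes, this only proves the bytes match what the same page claims; authenticity would still require a detached signature verified against a signing key obtained out-of-band.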
OVH’s CEO Octave Klaba has released a video giving an update on the SBG fire. In it, he discusses the extent of the damage, results of initial investigation, and future plans.
It’s a pretty low-budget production (just Klaba talking to the camera), but in a way that enhances its authenticity and transparency.
SBG2 and part of SBG1 are destroyed, but most of SBG1 and all of SBG3 and SBG4 are intact.
They don’t know the cause of the fire yet. Firefighters’ cameras showed one of the UPSes on fire, and maintenance had been performed on that UPS earlier that day. Whether this is coincidence or cause is still being investigated.
More information is expected at the beginning of next week.
I’m working on a project that uses Semantic Versioning. The commit history can be generalized as:
Also, the current version is present in source code (so that the software can use it for various purposes).
I’d like to start implementing a process that I’ve been seeing around:
That has development commits contain a version such as x.y.z-dev. The idea is that x.y.z will be the next release, but we are currently developing it.
That reserves x.y.z for the one commit that is a release.
That directly after a release, updates the source code to use a new x.y.z-dev version.
This keeps the software, as seen in development commits, from erroneously suggesting that it represents a release version.
The issue I’m running into is knowing which version to increment to after a release. Semantic Versioning has requirements for what kinds of changes can appear in a new version. For example, going from 1.0.0 to 2.0.0 indicates that a backwards-incompatible change has been made to some interface. But directly after a release (when the version is incremented to a new -dev version), it’s hard to say what kinds of changes the next release will end up containing.
For example, if we just released 1.2.3, incremented to 1.2.4-dev, and then introduce a backwards-incompatible change, 1.2.4-dev is now invalid and should be 2.0.0-dev.
Should I just do another increment to the next -dev major version during the development cycle when we notice that such a change has occurred? It seems iffy that commits would then exist with a version that would never be released.
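For what it’s worth, the scheme described above hangs together because SemVer gives pre-release versions lower precedence than the corresponding normal version, so 1.2.4-dev sorts before 1.2.4. A hypothetical sketch of the in-source version constant across one cycle (names and values are illustrative only):

```java
// Illustration of the -dev scheme described above, using a hypothetical
// in-source version constant. Per SemVer precedence rules, a pre-release
// tag such as "-dev" sorts *before* the bare version: 1.2.4-dev < 1.2.4.
public final class Version {
    // Commit directly after releasing 1.2.3:
    public static final String VERSION = "1.2.4-dev";

    // The single release commit would change this one line to:
    //   public static final String VERSION = "1.2.4";

    // If a breaking change lands mid-cycle, the constant would be
    // re-bumped, e.g. to "2.0.0-dev", leaving 1.2.4-dev as a version
    // string that never corresponds to any release.
    private Version() {}
}
```

Under that reading, re-bumping mid-cycle is consistent with the spec; the cost is exactly the one the question identifies, namely commits carrying a version that will never be released.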
I ran into a hardware issue that forced me to downgrade to a lower (5.4) kernel. After some months, the issue seems fixed and the latest kernels (5.10 at this moment) work for me again. My problem now is that, apparently because I manually kept the old default Ubuntu kernel and then manually upgraded by one point release, my system is no longer downloading/installing the standard release kernel updates.
I manually installed (via Mainline) the latest 5.10 kernel, but as with most kernels from Mainline, they are indicated by a Tux icon. My last default install kernel is still there, indicated with a round orange Ubuntu logo. It seems like I’m off track now.
I’d rather not keep using Mainline to download/install stable kernel updates semi-manually. How can I get “back on track” to receive the normal Ubuntu-pushed kernel updates in a standard software update?