Bluetooth – How do BLE Secure Connections ensure man-in-the-middle protection?

I understand that BLE Secure Connections pairing is an improvement over legacy pairing. The problem with legacy pairing was that an attacker could easily brute-force the initial TK value.

With Secure Connections, by contrast, both devices generate an ECDH key pair and exchange their public keys during pairing.

Since BLE does not use public key certificates, how does a device know whether a public key actually belongs to the entity it wants to communicate with?

I know there is an authentication check later in the pairing process, but that is a similar idea to legacy pairing; only the order is changed.
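
For context, here is a minimal sketch of the idea in Node.js. It is only an illustration, not the actual Bluetooth pairing functions (the spec defines its own f4/g2 confirm functions): an unauthenticated ECDH exchange, followed by a numeric-comparison step in which both users compare a short value derived from the exchanged public keys and fresh nonces. An active man-in-the-middle has to run two separate ECDH exchanges, so the two displayed values will not match.

// Simplified illustration only, NOT the Bluetooth f4/g2 functions:
// an unauthenticated ECDH exchange plus a numeric-comparison step.
const crypto = require("crypto");

// Each device generates a P-256 ECDH key pair and sends only its public key.
const deviceA = crypto.createECDH("prime256v1");
const deviceB = crypto.createECDH("prime256v1");
const pubA = deviceA.generateKeys();
const pubB = deviceB.generateKeys();

// Both sides derive the same shared secret; a passive eavesdropper cannot.
const secretA = deviceA.computeSecret(pubB);
const secretB = deviceB.computeSecret(pubA);
console.log(secretA.equals(secretB)); // true

// Numeric comparison (simplified): both devices display a 6-digit value
// derived from the exchanged public keys and fresh nonces. An active MITM
// has to run two separate ECDH exchanges, so the two displays differ.
function comparisonValue(pubX, pubY, nonceX, nonceY) {
  const h = crypto.createHash("sha256")
    .update(Buffer.concat([pubX, pubY, nonceX, nonceY]))
    .digest();
  return h.readUInt32BE(0) % 1000000;
}
const nonceA = crypto.randomBytes(16);
const nonceB = crypto.randomBytes(16);
console.log(comparisonValue(pubA, pubB, nonceA, nonceB)); // users compare this on both screens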

Authentication – How do you ensure that only your own native app communicates with your own API?

I'm developing an API and several apps that access it, each with a different scope, including a native mobile app. I'm wondering what a good strategy would be to authenticate my own native app (or, more specifically, my users) to my own API.

I can't find a recommended way to make sure that my client (in this case a native app) is really the one communicating with my API.

For example, suppose I implement the authorization code flow to authenticate my users, with a server at mobile.mydomain.com acting as the OAuth client. My mobile app only talks to mobile.mydomain.com, and mobile.mydomain.com can talk to api.mydomain.com as a confidential client because the client ID / client secret is never made public.

So far, so good: api.mydomain.com can be sure that calls are coming from mobile.mydomain.com. However, mobile.mydomain.com cannot be sure who is sending requests to it, and it is still possible to masquerade as my mobile app by creating another app that simply shows the same login button, runs the same OAuth2 flow, and ultimately receives a token it can use to talk to mobile.mydomain.com.
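
To make that concrete, here is a minimal sketch (the token endpoint URL and client identifiers are hypothetical) of the code-for-token exchange that mobile.mydomain.com performs as a confidential client. Note that nothing in this request tells the authorization server which app actually obtained the authorization code:

// Sketch of the exchange performed server-side by mobile.mydomain.com.
// Endpoint URL and client identifiers are hypothetical placeholders.
// Note: nothing in this request proves WHICH app obtained the code.
async function exchangeCode(authorizationCode) {
  const response = await fetch("https://auth.mydomain.com/oauth/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "authorization_code",
      code: authorizationCode,
      redirect_uri: "https://mobile.mydomain.com/callback",
      client_id: "mobile-backend",               // public identifier
      client_secret: process.env.CLIENT_SECRET,  // stays on the server
    }),
  });
  return response.json(); // { access_token, refresh_token, ... }
}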

How is that different from using the password flow (which I know is not recommended) and embedding the client ID / client secret in the app? (The client_secret is completely useless in this case.)

=> Basically only the client ID needs to be known from the API perspective.

How does Google make sure that a request really comes from the Gmail app and not from another app that does exactly the same thing with the same redirect URL, etc.? (Which would not be that harmful anyway, as it still requires a username / password.) I suspect it simply can't know for sure.

PS: I am aware that OAuth2 is not used for authentication, but only for authorization

Authentication – How do online identity verification companies ensure that their APIs are not misused?

I am trying to implement photo ID verification along with live selfie verification in my Android / iOS apps.

I figured I might be able to implement these functions using Python machine learning libraries. However, I have no idea how to prevent hackers from sending verification data directly to my app's server.

Nowadays, many online identity verification companies use "liveness" detection, which prevents users from submitting photos of other people's photos or ID cards and confirms that the images have not been tampered with. Some even require short videos to confirm liveness.

But what if the attacker is not a normal user but a programmer? What can we do if the programmer calls our APIs directly and sends photos or videos to the server? Then the liveness detection becomes useless, because we cannot distinguish a selfie submitted directly by the programmer from a freshly taken selfie.

Any solutions? My only guess is that the way to prevent this type of attack is to have users perform random actions generated by the server, for example saying something shown on the screen, or writing server-generated random numbers on a piece of paper and taking a picture with it.
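
As a rough sketch of that idea (all names are hypothetical and this is only one possible design): the server issues a short-lived, HMAC-protected challenge, and the verification step only passes if the challenge digits actually appear in the submitted photo or video. A replayed or pre-recorded submission then fails because it cannot contain the fresh digits.

// Sketch of a server-generated, time-limited challenge that must be embedded
// in the verification material (e.g. digits written on paper or read aloud).
const crypto = require("crypto");
const SERVER_KEY = crypto.randomBytes(32); // kept only on the server

function issueChallenge() {
  const digits = String(crypto.randomInt(0, 1000000)).padStart(6, "0");
  const expires = Date.now() + 5 * 60 * 1000; // 5-minute window
  const mac = crypto.createHmac("sha256", SERVER_KEY)
    .update(`${digits}:${expires}`)
    .digest("hex");
  return { digits, expires, mac }; // sent to the app
}

function verifyChallenge({ digits, expires, mac }, digitsSeenInVideo) {
  const expected = crypto.createHmac("sha256", SERVER_KEY)
    .update(`${digits}:${expires}`)
    .digest("hex");
  return Date.now() < expires
    && crypto.timingSafeEqual(Buffer.from(mac, "hex"), Buffer.from(expected, "hex"))
    && digitsSeenInVideo === digits; // digits extracted from the submitted photo/video
}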

How can travelers ensure their safety while flying during the COVID-19 pandemic?

Given the current pandemic, how can travelers ensure that they are not infected during a flight? Would it be sufficient to wear an N95 mask with safety goggles, or is a full hazmat suit required to ensure absolute safety?

I am aware that the best course of action is to avoid travel altogether, but let's assume that this is not an option.

Hash – Is multiplying hashes a valid way to ensure that two sets of records are identical (regardless of order)?

Suppose "User A" contains a record as below. Each entry has been hashed (sha256) to ensure integrity within a single entry. You cannot change the data of a single entry without changing the corresponding hash:

[
  { data: "000000", hash: "91b4d142823f7d20c5f08df69122de43f35f057a988d9619f6d3138485c9a203" },
  { data: "111111", hash: "bcb15f821479b4d5772bd0ca866c00ad5f926e3580720659cc80d39c9d09802a" },
  { data: "345345", hash: "dbd3b3fcc3286d927ec214c5648fbb226353a239789750f51430b1e6e9d91f4f" },
]

And "User B" has the same data, but in a slightly different order. Hashes are of course the same:

[
  { data: "345345", hash: "dbd3b3fcc3286d927ec214c5648fbb226353a239789750f51430b1e6e9d91f4f" },
  { data: "111111", hash: "bcb15f821479b4d5772bd0ca866c00ad5f926e3580720659cc80d39c9d09802a" },
  { data: "000000", hash: "91b4d142823f7d20c5f08df69122de43f35f057a988d9619f6d3138485c9a203" },
]

I want both users to be able to check that they have exactly the same set of records, ignoring the sort order. As an extreme example, if a hacker were able to replace User B's files with otherwise valid-looking data, the users should be able to compare a hash of their entire record sets and detect the mismatch.

I was thinking of calculating a "total hash" that both users can compare. It should be practically impossible to construct a different but valid-looking set of records that results in the same "total hash". Since the order can change, though, this is a little tricky.

I may have a possible solution, but I'm not sure if it is safe enough. Is it safe at all?

My idea is to convert each SHA-256 hash to an integer (a JavaScript BigInt) and multiply them all together modulo a fixed modulus, to get a total hash of similar length:

var hashsize = BigInt("0x" + "f".repeat(64)); // 2^256 - 1, keeps the total at 256 bits
var totalhash = BigInt(1); // start at 1, not 0, otherwise every product would be 0

for (var i = 0; i < entries.length; i++) {
  var entryhash = BigInt("0x" + entries[i].hash); // parse the hex digest as an integer
  totalhash = totalhash * entryhash % hashsize;   // multiplication is commutative, so order does not matter
}
totalhash = totalhash.toString(16); // convert from BigInt back to a hex string

This should result in the same total hash for User A and User B, unless someone has manipulated the data, right? How difficult would it be to create a slightly different but valid-looking set of records that produces the same total hash? Or is there a better way to accomplish this (without sorting!)?
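
For what it's worth, here is a quick sanity check of the order-independence part, wrapping the loop above in a helper (totalHash) and using the sample entries from the question:

// Quick check: wrap the loop above in a helper and compare both orderings.
function totalHash(entries) {
  var hashsize = BigInt("0x" + "f".repeat(64));
  var total = BigInt(1);
  for (var i = 0; i < entries.length; i++) {
    total = total * BigInt("0x" + entries[i].hash) % hashsize;
  }
  return total.toString(16);
}

var userA = [
  { data: "000000", hash: "91b4d142823f7d20c5f08df69122de43f35f057a988d9619f6d3138485c9a203" },
  { data: "111111", hash: "bcb15f821479b4d5772bd0ca866c00ad5f926e3580720659cc80d39c9d09802a" },
  { data: "345345", hash: "dbd3b3fcc3286d927ec214c5648fbb226353a239789750f51430b1e6e9d91f4f" },
];
var userB = userA.slice().reverse(); // same entries, different order

console.log(totalHash(userA) === totalHash(userB)); // true: multiplication ignores order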

Web development – How to ensure that print-related functionality is protected when layouts change

I am trying to develop a precautionary strategy to ensure that a printing function (a print-specific page view, or an optimization of the standard view for printing a wiki page) is protected from accidental breakage by future contributors when changes are made to the wiki pages and to the CSS layout. How could I approach this?
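
One possible direction, as a minimal sketch only (it assumes a Node.js toolchain with Puppeteer; the URL and CSS selectors are hypothetical placeholders): an automated check in CI that renders the page with print media emulation and asserts that the print stylesheet still hides and shows the expected elements.

// Sketch of an automated print-layout regression check.
// Assumes Puppeteer is installed; URL and selectors are hypothetical.
const puppeteer = require("puppeteer");

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto("https://wiki.example.org/SomePage");
  await page.emulateMediaType("print"); // apply the @media print stylesheet

  // Assert that the print view still behaves as intended.
  const results = await page.evaluate(() => {
    const hidden = (sel) => {
      const el = document.querySelector(sel);
      return !el || getComputedStyle(el).display === "none";
    };
    return {
      navigationHidden: hidden(".sidebar"),  // navigation should not be printed
      contentVisible: !hidden("#content"),   // article body must remain visible
    };
  });

  await browser.close();
  if (!results.navigationHidden || !results.contentVisible) {
    console.error("Print layout regression detected:", results);
    process.exit(1); // fail the CI job
  }
  console.log("Print layout OK");
})();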

centos7 – How can you ensure that systemd correctly tracks service status when stopping the service fails?

I'm having so much fun with systemd, it's incredible!
I'm trying to work around the following systemd problem.

Basically, I'm running a Java daemon in the foreground.
This application needs to shut down gracefully, and it can only do so while idle, since long-running transactions need to complete first.

Our original sysvinit solution worked so well for years that we didn't realize we were relying on an unusual pattern … The shutdown script can actually fail, and the usual paradigm is to check return codes, right? So if an attempt is made to stop the service and this attempt fails:

  • The operator is informed
  • The service continues

Now consider the following dummy service that represents our use case:

[Unit]
Description=Dummy

[Service]
User=user
Group=group
ExecStart=java -jar app.jar
ExecStop=gracefull_shutdown_script_that_could_timeout_or_fail_if_stopping_is_not_possible_at_the_moment.sh
# If ExecStop fails, keep the main process running.
KillMode=none

[Install]
WantedBy=multi-user.target

What happens if gracefull_shutdown_script_that_could_timeout_or_fail_if_stopping_is_not_possible_at_the_moment.sh fails when systemctl stop dummy is called?

  • The user is not informed of any error.
  • An exit code of 0 is returned.
  • The main PID remains active.
  • The service is marked as "Active: failed (Result: timeout)".

However, the service is in fact still running and can handle further requests. Once this state is reached, running systemctl stop dummy again has no effect.

Strangely, running systemctl start dummy after this second attempt does then process the stop request …

I think I may need to file a bug against systemd, but I would like to hear about any workarounds you can suggest.

Thank you very much.

What level of physical destruction is sufficient to ensure that an SSD is not readable?

My organization upgraded some printers and decommissioned the internal SSDs by running the memory chips through a band saw, cutting each chip in half and, in some cases, tearing entire sections off the circuit board.

These printers were used in a way that makes it likely they contain PHI / HIPAA-covered information.

I seek advice on whether this method of destruction was sufficient or not.

I don't think it was, but I would like additional resources.

I have posted what I have found so far as an answer, since it may well answer my question, but I'm hoping for other contributions.

How can you ensure that vulnerability management does not compromise security?

When running vulnerability scans, a particular version of, say, Node.js is often reported as vulnerable, along with the recommendation to upgrade to a newer version. Then there are insecure TLS / SSL protocols such as TLS 1.0 and SSL 3.0, where the recommendation is to disable them completely. To me, each of these recommendations is a change that has to be applied to a specific application, host, etc. Now I'm wondering: how can you make sure that such a change does not itself reduce security or put it at risk? How can you ensure that the new version of Node.js does not contain even more serious vulnerabilities? And how does change management fit in? Should updating the Node.js version or disabling insecure TLS / SSL protocols go through a change request, or shouldn't it?

Do replicated, distributed multi-primary systems ensure sequential consistency?

I know that replicated, distributed primary-backup systems ensure sequential consistency. My question is whether multi-primary systems can achieve this as well. I mean, if you use consensus (e.g., the Paxos algorithm) to agree on an order for the incoming requests, sequential consistency is likely achieved. But if you use conflict-free replicated data types (CRDTs) instead, is sequential consistency still achieved?
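
To illustrate the distinction being drawn (a hypothetical sketch, not taken from any particular system; the merge and value helpers are illustrative): a state-based G-Counter CRDT converges by merging per-replica maxima, so replicas agree on the final value without ever agreeing on a single global order of operations, which is what a consensus-ordered log would provide.

// Hypothetical illustration: a state-based G-Counter CRDT. Replicas converge
// by merging (per-replica max), but no single global order of increments is
// ever agreed on, unlike a consensus-ordered (e.g. Paxos) log.
function merge(a, b) {
  const out = {};
  for (const id of new Set([...Object.keys(a), ...Object.keys(b)])) {
    out[id] = Math.max(a[id] || 0, b[id] || 0);
  }
  return out;
}
const value = (c) => Object.values(c).reduce((s, n) => s + n, 0);

// Two primaries accept writes concurrently.
let replicaA = { A: 2 }; // A saw two increments locally
let replicaB = { B: 3 }; // B saw three increments locally

// Both converge to the same value regardless of merge order...
console.log(value(merge(replicaA, replicaB))); // 5
console.log(value(merge(replicaB, replicaA))); // 5
// ...but neither replica can say in which order the five increments happened.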