cryptography – Encryption layers – Information Security Stack Exchange

I’m curious, as a green programmer: if one used layers of encryption methods, would the result be more difficult to crack, or even impossible?

Example:

Layer 1- encryption method 1 “Encrypt this string”

Apply crypto = “encrypted mumbo jumbo”

Layer 2- encryption method 2 “encrypted mumbo jumbo”

Apply crypto layer 2 = “encrypted mumbo jumbo becomes encrypted mumbo jumbo”

And so on…

Does this heighten security, and if so, would this take a long time to decrypt?
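
For concreteness, here is a minimal sketch of the layering the question describes, cascading two AES-256-GCM layers with independent keys using Python’s `cryptography` package (the names and the choice of cipher are illustrative, not anything the question specifies):

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_layer(key: bytes, plaintext: bytes) -> bytes:
    """One encryption layer: AES-256-GCM with a random nonce prepended."""
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

# Two independent keys -- reusing one key for both layers adds nothing.
key1 = AESGCM.generate_key(bit_length=256)
key2 = AESGCM.generate_key(bit_length=256)

layer1 = encrypt_layer(key1, b"Encrypt this string")  # "encrypted mumbo jumbo"
layer2 = encrypt_layer(key2, layer1)                  # layer 2 over layer 1
```

With independent keys, a cascade like this is at least as strong as its first layer (a classical result due to Maurer and Massey), so layering can help, but it is not “impossible” to crack and it doubles the key-management burden.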

cryptography – Which Diffie-Hellman Groups does TLS 1.3 support? And should we use TLS 1.3 as a guide?

(1) I’m curious whether the following 10 different DH Groups are the only groups that TLS 1.3 supports,

Yes, in TLS 1.3, only the groups listed in RFC 8446 can be used. It’s possible that a future RFC will add more groups without changing the protocol version, but I’m not aware of any such plans.

(2) and as such, are they the only ones that we should be using?

You certainly shouldn’t be using other groups in TLS 1.3 since they wouldn’t comply with the protocol. Your software isn’t likely to support other groups in TLS 1.3 anyway.

(2) To elaborate on my second query: Given that TLS 1.3 was developed over years by experts, and that it only supports certain cipher mechanisms – can we take that to mean that these are the only mechanisms that should be used?

No: it’s ok to use other mechanisms in older versions of the protocol. Not every mechanism that is safe to use made it into TLS 1.3: some that are safe but offer no practical advantage (speed, smaller code, smaller message size, etc.) were left out. One of the goals of TLS 1.3 was to reduce the complexity of the protocol, which means fewer choices.

With TLS ≤1.2, you need to balance security (as in: risk of implementation bugs, known protocol weaknesses, or yet undiscovered protocol weaknesses) with interoperability. (This is true with TLS 1.3 as well, but 1.3 hasn’t been around for long enough to have interoperability problems, when it can be negotiated at all.) Due to the number of existing options and the diversity of the existing software, there’s no single right answer for where to put the balance.

Some discussion of which curves should be used has already taken place here, which mentions that secp256r1 and secp384r1 are best.

That discussion was in the context of TLS 1.2, which didn’t support the same curves. There’s no reason to reject any of the curves supported in TLS 1.3 except maybe secp521r1, which is susceptible to implementation weaknesses. (As the name hints, secp521r1 involves 521-bit numbers – and yes, it’s 521 and not 512. Because this is slightly larger than a multiple of 64, there’s a non-negligible chance that certain intermediate numbers will have a most significant word of 0, and insufficiently protected implementations might leak that fact because multiplying by such a number is slightly faster. This leak can be enough to allow an attacker to reconstruct the private key after a moderate number of connection attempts.) Curve25519 is perfectly fine and arguably preferable to the secp curves because it’s easier to implement securely. Back in 2015 it was not commonly supported in TLS, but in 2021 it’s a standard part of TLS 1.3. Curve448 is slower and has no particular advantage (barring yet unknown weaknesses in other curves), but it’s ok to use it.
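
For reference, here is a minimal sketch of an X25519 (Curve25519) key agreement, the exchange TLS 1.3 performs internally, using Python’s `cryptography` package; in real TLS the library does this for you, so this is purely illustrative:

```python
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

# Each side generates an ephemeral key pair for the handshake.
client_priv = X25519PrivateKey.generate()
server_priv = X25519PrivateKey.generate()

# Each side combines its own private key with the peer's public key.
client_shared = client_priv.exchange(server_priv.public_key())
server_shared = server_priv.exchange(client_priv.public_key())

assert client_shared == server_shared  # both sides get the same 32 bytes
```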

For finite-field Diffie-Hellman, don’t use groups smaller than 2048 bits. Older versions of TLS allow custom groups, and there’s no consensus on whether to make use of that. On the one hand, using standard groups might allow an attacker with sufficient computing power (read: NSA) to precompute a very large number of values, which would then make attacks feasible. On the other hand, generating good custom groups is slow and hard, and doing it badly risks creating vulnerabilities that are easier to exploit, such as letting a man-in-the-middle persuade the participants to use a weak group. This is why TLS 1.3 mandates the use of known good groups.
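
To illustrate why generating custom groups is slow: with Python’s `cryptography` package (an illustrative stand-in, not what a TLS stack actually does), producing a fresh 2048-bit group can take from seconds to minutes, while working inside an existing group is fast:

```python
from cryptography.hazmat.primitives.asymmetric import dh

# Searching for suitable large primes is the slow, hard part that makes
# per-deployment custom groups unattractive.
parameters = dh.generate_parameters(generator=2, key_size=2048)

# Key generation and exchange within a known group are fast by comparison.
alice = parameters.generate_private_key()
bob = parameters.generate_private_key()
shared = alice.exchange(bob.public_key())
```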

cryptography – Why is a password encrypted with AES and then sent back to the server together with the key with RSA (Instagram)?

I am trying to understand an encryption process on a website (Instagram). As far as I know, a public key is sent from the server to the client. Then the password is encrypted with AES_GCM_256, packed together with the AES key in an array, and then put in a sealed box with the public key from the server.

Is a sealed box the same as simply encrypting the array with RSA?

Why do you do that?

I mean, if you find out the RSA private key and then decrypt the data encrypted with RSA, wouldn’t you also have the AES key to decrypt the password?

And the public key is very short:

297e5cd13e20f701d57bd5a1ee82bcead9a20e4080bc6c737917b868eb65f505

Only 64 hex characters, so 256 bits.

Is that even safe enough for RSA?
Or is the key Curve25519?

As far as I know, an RSA key should be at least 2048 bits, right?

I would appreciate a link or the answer to a few questions 🙂

Best regards
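
For what it’s worth, here is a sketch of the general pattern the question describes, using PyNaCl’s `SealedBox` (libsodium sealed boxes are built on X25519, which would match a 32-byte public key like the one shown; whether Instagram does exactly this is an assumption here, and the password is a placeholder):

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from nacl.public import PublicKey, SealedBox

server_pk_hex = "297e5cd13e20f701d57bd5a1ee82bcead9a20e4080bc6c737917b868eb65f505"

# Step 1: encrypt the password with a fresh AES-256-GCM key.
aes_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, b"hunter2", None)

# Step 2: seal the AES key and ciphertext to the server's public key.
# Only the holder of the matching private key can open the sealed box.
sealed = SealedBox(PublicKey(bytes.fromhex(server_pk_hex))).encrypt(
    aes_key + nonce + ciphertext
)
```

A sealed box is not plain RSA: it performs an ephemeral X25519 key exchange against the recipient’s public key and then applies authenticated encryption. And yes, whoever holds the matching private key (the server, by design) can recover the AES key and thus the password.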

cryptography – Keyless entry system

A keyless entry system means just that: it allows entry into the vehicle (by unlocking the doors) without using a key, usually via some sort of radio fob. Many vehicles also pair this fob with an anti-theft (immobilizer) system and require the fob to start the car, but the two need not be the same thing.

In systems with radio fobs, cryptography is generally useful to prevent an attacker from (a) guessing all possibilities and (b) listening for messages and then guessing the key used to send them. Some insecure fobs have used 16-bit LFSRs, which fail both of these tests. More secure fobs use AES, which, if used with a suitably sized key in a secure way, prevents both of these from being problems.
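
As a sketch of what that AES usage can look like, here is a toy challenge-response exchange using AES-CMAC via Python’s `cryptography` package (the protocol details are invented for illustration; real fobs vary):

```python
import os

from cryptography.hazmat.primitives.ciphers import algorithms
from cryptography.hazmat.primitives.cmac import CMAC

fob_key = os.urandom(16)  # AES key provisioned into both fob and car

def fob_respond(key: bytes, challenge: bytes) -> bytes:
    """The fob proves key possession by MACing the car's fresh challenge."""
    mac = CMAC(algorithms.AES(key))
    mac.update(challenge)
    return mac.finalize()

challenge = os.urandom(16)  # the car picks a fresh random challenge
response = fob_respond(fob_key, challenge)

verifier = CMAC(algorithms.AES(fob_key))  # the car checks the response
verifier.update(challenge)
verifier.verify(response)  # raises InvalidSignature on a wrong response
```

Because each challenge is fresh and random, a recorded response is useless for a later unlock attempt, and forging a response requires breaking AES.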

However, cryptography doesn’t prevent relay attacks where the attacker attempts to impersonate the fob to the car and the car to the fob. That’s because the problem isn’t that the message isn’t secure, but that the fob is not close. Usually this is solved by requiring a round-trip message to be within a certain number of milliseconds so that the fob is provably within a certain distance according to the speed of light and the expected performance of the fob. This same technique is used to prevent relay attacks on contactless credit cards as well.
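
As rough illustrative arithmetic (the numbers below are invented for the example): even one millisecond of slack corresponds to about 150 km of one-way light travel, so the usable bound comes from subtracting the fob’s tightly characterized processing time from a much smaller time budget.

```python
# Distance bound from a timed challenge-response (illustrative numbers).
C = 299_792_458        # speed of light in m/s
round_trip_s = 2.0e-6  # allowed round-trip budget: 2 microseconds (invented)
processing_s = 1.0e-6  # fob's expected processing time (invented)

# Whatever time remains after processing is radio travel, out and back.
max_distance_m = C * (round_trip_s - processing_s) / 2
print(f"fob is provably within ~{max_distance_m:.0f} m")  # ~150 m
```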

cryptography – Is double sha256 the best choice for Bitcoin?

The typical reason one uses double hashing is to deal with length-extension attacks. That’s because any Merkle–Damgård algorithm that outputs its entire state (e.g., SHA-1, SHA-256, and SHA-512) is vulnerable to a length extension attack, where anyone who knows the hash of a message (and its length) can compute a valid hash for that message with additional data appended, without knowing the message itself.
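
Bitcoin’s double hash is easy to express with Python’s `hashlib`. Because the outer hash is computed over the fixed-size inner digest rather than over the attacker-extendable message, extending the message no longer yields a valid hash:

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """SHA-256d as used by Bitcoin: hash the SHA-256 digest again."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

print(double_sha256(b"hello").hex())
```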

There are other algorithms, such as SHA-3 and BLAKE2, which don’t have this problem because they use a different construction. SHA-3 keeps a large state and outputs only a portion of it, while BLAKE2 flags the last block it processes to distinguish it. A design lacking this problem is preferable these days.

However, those algorithms didn’t exist at the time Bitcoin was created (2008), and SHA-256 was the standard hash algorithm to use for secure contexts, even though it has this weakness.

Whether an algorithm is “better” in a context depends on one’s needs. Presently, if one needs security against length extension attacks, one chooses SHA-3 or BLAKE2. If one needs performance, one uses BLAKE2 or SHA-256 (if accelerated on the relevant hardware). If one needs compliance, one uses SHA-2 or SHA-3. There are many criteria to consider.

In the context of when the design was made, the choice was probably responsible and defensible and was the best that could be done under the circumstances, even if we would prefer a different algorithm today (because we have better ones available). Since SHA-256 is presently considered secure and robust, there’s little reason to change right now. If in the future that changes, then using a different algorithm may be warranted.

cryptography – Literature question on secure multiparty computation

Consider $N$ players and assume that a private channel between each pair of players is available. Player $i \in \{1, \ldots, N\}$ holds a secret $S_i$, which is a sequence of $B$ random bits.

The players want to perform the secure bitwise addition modulo 2, $S_1 + \ldots + S_N$, i.e., after communication over their private channels, the $N$ players can compute the sum if they pool their information together, while any set of $N-1$ players cannot learn any information about the sum.

It is well known that this secure addition can be performed from linear secret sharing schemes (Reference). In this case the communication complexity, i.e., the overall amount of information exchanged between the players, is $N(N-1)B$ bits (each player applies a secret sharing scheme to share his secret with the $N-1$ other players by sending each of them a share of size $B$ bits).
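
A minimal sketch of that linear (XOR) scheme, showing the share pattern behind the $N(N-1)B$ count (plain Python; the private channels themselves are not simulated):

```python
import os

N, B = 4, 16  # number of players, secret size in bytes

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

secrets = [os.urandom(B) for _ in range(N)]

# Each player i splits S_i into N shares whose XOR is S_i, keeps one,
# and sends one to each other player: N*(N-1) transfers of B bytes.
shares = []
for s in secrets:
    parts = [os.urandom(B) for _ in range(N - 1)]
    last = s
    for p in parts:
        last = xor(last, p)
    shares.append(parts + [last])

# Player j locally XORs the N shares it holds (one from each player).
partials = [b"\x00" * B for _ in range(N)]
for i in range(N):
    for j in range(N):
        partials[j] = xor(partials[j], shares[i][j])

# Pooling the N partial sums recovers the XOR of all the secrets.
total = b"\x00" * B
for p in partials:
    total = xor(total, p)

expected = b"\x00" * B
for s in secrets:
    expected = xor(expected, s)
assert total == expected
```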

Questions:

  • Is there another coding scheme that achieves a better communication complexity for this specific problem?

  • What is the best known lower bound on the communication complexity for this specific problem?

cryptography – How to create the same ssh key pair on different systems at different times

Is it possible to generate the same ssh key pair on multiple systems?

No. That would not be secure. Key pairs should be unique, and they are generated in a fashion that attempts to ensure this. As no communication happens during generation, no hard guarantee can be made, but the keys can be big enough that the chance of a collision is essentially zero.

To have the same keys in multiple places, copy them. They are plain text files, which can be copied. But a better alternative would probably be to generate one key pair per system and add all the desired public keys to the hosts you connect to. That way, you can revoke a single key pair if you lose control of its private key.
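
The usual tool for “one key pair per system” is `ssh-keygen -t ed25519`. As a sketch of what that generation amounts to, using Python’s `cryptography` package (illustrative only; use ssh-keygen in practice):

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Each call draws fresh randomness, so two systems will, with
# overwhelming probability, never produce the same pair.
key = Ed25519PrivateKey.generate()

public_line = key.public_key().public_bytes(
    serialization.Encoding.OpenSSH, serialization.PublicFormat.OpenSSH
)
print(public_line.decode())  # the line to add to authorized_keys on a host
```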

cryptography – Tests/Techniques for Writing Encryption Algorithms

So I’m trying to design/make my own encryption algorithm, and I believe it to be ok. I’m not saying it’s extra secure, but I know it’s not useless. What I was wondering is whether there are any specific tests I can carry out in order to get a better representation of how secure my algorithm is.

Here is some data from tests that I have carried out:

  • Repeated blocks of data are different from each other
  • 1-bit change in password will completely change the result
  • 1-bit change in input text completely changes the output
  • even if you encrypt the same file with the same password you will get a different output each time because each encryption gets a different random salt
  • when tested over a large number of files, the proportion of bytes that matched at the same position in both the encrypted and unencrypted file was about 0.3%; random data gives about 0.34%, and other encryption schemes get about 0.28–0.29% (see the measurement sketch below)

Here I have tried to lay out the mode of operation for my algorithm. I believe it is most closely related to the PCBC mode of operation, but there are some differences in the way I have laid mine out:

Mode of Operation
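
One way to make measurements like the byte-match figures above reproducible is to script them. A sketch, with AES-GCM standing in for the questioner’s (unpublished) algorithm:

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def byte_match_rate(a: bytes, b: bytes) -> float:
    """Fraction of positions where the two byte strings agree.

    Two independent uniformly random byte streams agree at a rate of
    1/256, about 0.39%; good ciphertext should be close to that.
    """
    n = min(len(a), len(b))
    return sum(a[i] == b[i] for i in range(n)) / n

plaintext = os.urandom(1_000_000)
key, nonce = AESGCM.generate_key(bit_length=256), os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)  # stand-in cipher
print(f"match rate: {byte_match_rate(plaintext, ciphertext):.4%}")
```

Note that statistical results like these can only rule an algorithm out; they cannot establish that it is secure.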

cryptography – Show that hash function $H$ using RSA is collision resistant if RSA is $(T, \epsilon)$-hard

Let $(N, e)$ be sampled from the RSA problem ($N = pq$, $e \in \mathbb{Z}_{\phi(N)}^*$).

We sample a random $y \in \mathbb{Z}_N^*$ and define:
$$f_0(x) = x^e, \qquad f_1(x) = y x^e$$

Now, let $k = 10n$ and for $x \in \{0,1\}^k$ define the hash function $H$:
$$H((N,e,y), x) = f_{x_1}(\cdots f_{x_k}(1) \cdots)$$

I need to show that if the RSA problem is $(T, \epsilon)$-hard, then $H$ is $(T - k \cdot \mathrm{poly}(n), \epsilon)$-collision resistant.

My idea is to assume that I can find a collision in $H$ with probability greater than $\epsilon$; that means I can find $a_0, a_1 \in \mathbb{Z}_N^*$ such that $f_0(a_0) = f_1(a_1)$, and with this I can get the $e$-th root of $y$ and solve RSA.

However, I am not so sure whether what I am doing is correct, and I am not sure how to write it formally.

Help would be appreciated.
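
To write the key step formally (a sketch, taking as given the question’s assumption that a collision in $H$ yields $a_0, a_1 \in \mathbb{Z}_N^*$ with $f_0(a_0) = f_1(a_1)$):

$$a_0^e \equiv y \, a_1^e \pmod{N} \quad\Longrightarrow\quad y \equiv \left(a_0 a_1^{-1}\right)^e \pmod{N},$$

so $a_0 a_1^{-1}$ is an $e$-th root of $y$, contradicting the $(T, \epsilon)$-hardness of RSA. The reduction runs the collision finder once and then performs $O(k)$ extra modular operations, which is where the $T - k \cdot \mathrm{poly}(n)$ time bound comes from.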