I disagree with a friend about how best to defend against a nasty evil-twin attack at the local café, airport, or hotel. From what I've read, it's best to always use a VPN when connecting to a public network. My friend claims that it's enough to only connect to websites that support HTTPS. Who is right, and why? Thank you so much!
I'm currently working on a security product (a VPN), and we have an important requirement that I cannot figure out.
The connection between the user and the VPN server is based on the OTP algorithm (one-time pad) and I also have SSL on the server.
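For context, a one-time pad is at its core an XOR of the message with a truly random pad of equal length. A minimal sketch of the primitive (illustrative only, not our production code):

```python
import secrets

def otp_xor(data: bytes, pad: bytes) -> bytes:
    """XOR data with a pad; applying it twice with the same pad decrypts.

    For real OTP security, the pad must be truly random, at least as long
    as the data, kept secret, and never reused.
    """
    if len(pad) < len(data):
        raise ValueError("pad must be at least as long as the data")
    return bytes(d ^ p for d, p in zip(data, pad))

pad = secrets.token_bytes(32)
ciphertext = otp_xor(b"certificate bytes here", pad)
plaintext = otp_xor(ciphertext, pad)   # round-trips back to the original
```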
During the SSL handshake, the certificate is sent to the client for verification. But we also want to encrypt the certificate with the OTP before it is sent over the network.
The client is an iOS app. I am also looking for a way to have the OTP-encrypted certificate validated at the device level first, before it is validated by the SSL handshake. It's an extra layer of security we want to integrate.
Any idea how I can do that? As far as I know, the SSL handshake is an automated process and cannot be controlled.
Suppose I want to exchange a key for a symmetric cipher (such as AES) with someone without meeting them in person. What would be the most secure way to do this over the internet? My first instinct would be to use a custom RSA channel over HTTPS for the highest level of security.
I need a future-proof method. (Remember that this only needs to be done once, so a "crazy" method can be considered as an answer.)
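For comparison, the textbook answer to remote key agreement is an authenticated Diffie-Hellman exchange rather than a custom RSA channel. A toy sketch with deliberately tiny numbers, for illustration only (real deployments use vetted 2048-bit+ groups such as those in RFC 7919, or X25519):

```python
# Toy Diffie-Hellman - tiny parameters, never use in practice.
p, g = 23, 5                 # public group parameters (toy-sized)

a = 6                        # Alice's secret exponent
b = 15                       # Bob's secret exponent
A = pow(g, a, p)             # Alice sends A over the open channel
B = pow(g, b, p)             # Bob sends B over the open channel

shared_alice = pow(B, a, p)  # both sides derive the same shared value
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
```

An eavesdropper sees only p, g, A, and B; recovering the shared value requires solving the discrete logarithm problem. Note that plain DH must still be authenticated (signatures, certificates) to stop a man in the middle.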
When the user connects to eve-mail.com, Eve can read all data sent to her, since the TLS connection is terminated at her server. Whether Eve then proxies the user's requests on to Alice depends simply on whether Eve wants to keep up the illusion; it is not really relevant.
Can this be prevented? Only to a degree. The user is connecting to eve-mail.com willingly and trusts it. OTP 2FA does not help much either:
eve-mail.com can simply proxy the OTP as well.
However, there are some defenses.
If you use a YubiKey or a similar alternative as the second factor, the device prevents exactly this kind of phishing by using origin-bound keys. From the FIDO U2F specification:
When the user registers the U2F device with an account at a particular origin (for example, http://www.company.com), the device creates a new key pair that can only be used at that origin, and gives the origin the public key to be associated with the account. When the user authenticates (i.e., logs in) to the origin, the origin (in this case http://www.company.com) can verify that the user has the U2F device by checking, in addition to the username and password, a signature created by the device.
As long as you registered your key at alice-mail.com, you are protected by the origin-bound keys.
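The origin binding can be sketched in a few lines. This toy model (real U2F devices use per-origin EC key pairs and asymmetric signatures, not HMAC) shows why a phishing origin gains nothing:

```python
import hashlib
import hmac
import secrets

class ToyU2FDevice:
    """Toy model of origin-bound keys; real devices hold per-origin EC key pairs."""

    def __init__(self):
        self._keys = {}                       # origin -> device-held secret

    def register(self, origin: str) -> None:
        # A fresh key is minted for this origin only.
        self._keys[origin] = secrets.token_bytes(32)

    def sign(self, origin: str, challenge: bytes) -> bytes:
        # The device has no key for an origin it never registered with,
        # so eve-mail.com cannot obtain a valid response.
        if origin not in self._keys:
            raise KeyError(f"no key registered for {origin}")
        return hmac.new(self._keys[origin], challenge, hashlib.sha256).digest()

device = ToyU2FDevice()
device.register("https://alice-mail.com")
device.sign("https://alice-mail.com", b"server challenge")   # works
# device.sign("https://eve-mail.com", b"...") would raise KeyError
```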
Without a security key, services can still require that logins be confirmed via a second channel. Both Apple and Google require new sign-ins to be confirmed via a second factor, and the confirmation prompt conventionally displays both the IP address and the approximate location of the login request. An attentive user can notice that the location or IP address does not match their own, deny the login request, reset the password, and so on.
I think the first step would be to make the ID less guessable. If it is a short number, there are tools that allow attackers to enumerate these IDs without knowing anything about your system.
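For example, Python's secrets module produces identifiers that cannot be enumerated; a sketch (any cryptographically secure random generator works equally well):

```python
import secrets

# A short sequential ID like 10234 can be enumerated by simply counting;
# a random 128-bit token cannot be guessed in practice.
card_id = secrets.token_urlsafe(16)   # 16 random bytes, URL-safe base64 encoded
```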
As you mentioned in your question, you already use HTTPS, which is great.
In general, it helps a bit to put the parameters in the POST body, as they are then less likely to end up in log files that can leak out of your environment. Almost every API gateway / web server logs the URL (including path and query parameters) in its logs. Very few log the request body, as it can be very large.
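As a sketch (api.example.com is a placeholder endpoint), the difference is simply where the identifier travels:

```python
import urllib.parse
import urllib.request

API = "https://api.example.com/cards"        # placeholder endpoint

# Leaky: the ID rides in the URL and lands in every access log along the path.
leaky_url = f"{API}?card_id=10234"

# Better: the ID goes in the POST body, which servers rarely log.
body = urllib.parse.urlencode({"card_id": "10234"}).encode()
req = urllib.request.Request(API, data=body, method="POST")
# (not sent here; urllib.request.urlopen(req) would submit it)
```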
Of course, all of this depends on your application and what threats you are actually defending against.
Since the TLS connection is terminated at the TLS proxy, client authentication via client certificates is also terminated there. Because the TLS proxy does not have the client's private key, it cannot use the client's original certificate when connecting to the final server with TLS.
The only way to pass on the client's original certificate, or information about it, is outside of TLS, for example by inserting some fields into the HTTP request header, as described here for HAProxy. The server or web application must then check these fields at the application level instead of relying on client-certificate validation at the TLS level.
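At the application level this boils down to trusting headers set by the proxy. A minimal sketch, assuming hypothetical header names X-SSL-Client-Verify and X-SSL-Client-DN (the actual names and values depend on the http-request set-header rules in your HAProxy frontend):

```python
def client_cert_ok(headers: dict) -> bool:
    """Check client-certificate info forwarded by a TLS-terminating proxy.

    This is only safe if the proxy strips these headers from all incoming
    requests, so a client cannot spoof them. Header names are assumptions.
    """
    if headers.get("X-SSL-Client-Verify") != "SUCCESS":   # assumed proxy convention
        return False
    dn = headers.get("X-SSL-Client-DN", "")
    return "CN=expected-client" in dn                     # assumed local policy

# client_cert_ok({"X-SSL-Client-Verify": "SUCCESS",
#                 "X-SSL-Client-DN": "CN=expected-client,O=Example"})
```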
I do not really trust the guy, and I am trying to take the time to learn what this is all about. What do you think?
```shell
#!/bin/bash
name=ourwebdomain.local
openssl req -new -newkey rsa:2048 -sha256 -days 3650 -nodes -x509 \
    -keyout $name.key -out $name.crt -config <(cat <<-EOF
[req]
distinguished_name = req_distinguished_name
x509_extensions = v3_req
prompt = no
[req_distinguished_name]
CN = $name
[v3_req]
keyUsage = keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = $name
DNS.2 = *.$name
EOF
)
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain ourwebsitdomain.local.crt
```
This was accompanied by two more files; here is the output of file:
```
$ file ./*
generate_ssl:        ASCII text
ourdomain.local.crt: PEM certificate
ourdomain.local.key: ASCII text
```
I am not so much worried as curious. Oh, and he added this file to our GitHub repo some time ago. He has been behaving very strangely lately and I want to understand what he is doing.
```
myname-MacBook-Pro% file dump.rdb
dump.rdb: data
myname-MacBook-Pro% ls -lh | grep rdb
-rwxr--r--  1 myname  staff    92B Aug 29 22:44 dump.rdb
```
As explained in the post you linked, the proxy decrypts all traffic and then encrypts it again, signed with a different certificate. Therefore, the certificate you receive differs from the website's own. One way to detect this is to compare the certificates you receive when visiting a website from your corporate network with those you receive when visiting it from outside that network (for example, from home).
All browsers have a way to display a certificate's fingerprint. In Chrome, click the green lock in the URL bar, open the certificate details, and view the certificate.
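The same comparison can be scripted. A sketch using only Python's standard library; run it from inside and outside the corporate network and compare the output:

```python
import hashlib
import socket
import ssl

def fingerprint(der: bytes) -> str:
    """SHA-256 fingerprint in the colon-separated form browsers display."""
    digest = hashlib.sha256(der).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

def server_cert_fingerprint(host: str, port: int = 443) -> str:
    """Complete a TLS handshake with host:port and fingerprint the leaf certificate."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return fingerprint(tls.getpeercert(binary_form=True))

# print(server_cert_fingerprint("example.com"))  # compare from two networks
```

If the fingerprint for the same site differs between networks, something on the corporate side is re-signing the traffic.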
Some caveats, though:
- Some websites serve different certificates to different users for various reasons, so a mismatch for a single website may not be enough to conclude that there is a TLS proxy.
- The proxy may not intercept traffic for all domains. Even if all the certificates you have tested match, you cannot know that traffic will never be intercepted.
How do you get around these caveats? You would need to go through the trust store of your operating system and of your browser (if it ships its own) to see whether any certificates have been added there. For instructions, see this question.
Finally, it should be noted that intercepting TLS is perfectly legitimate for an employer. And since at work you are probably using a computer your employer provided, there are many other ways it can monitor your browsing habits and all other activity on that machine without relying on a proxy. So if you want privacy, do not use your work computer or network.
Your employer may have legitimate reasons, but your ISP certainly does not. Luckily, an ISP cannot do this easily. Why? The answer is in your quote:
If your company has set up the proxy correctly, you will not know that anything is amiss, because it has ensured that the proxy's internal SSL certificate is registered as a valid certificate on your computer. Otherwise, a pop-up error message will warn you; if you click to continue anyway, the fake digital certificate is accepted.
Since your ISP does not control your computer, it has no way to install a root certificate there (unless you do it for them). So you would get a warning about a wrong certificate every time you visit a site via HTTPS. In other words: not very stealthy.
If you are still worried that someone has sneaked a root certificate into your trust store, you can check using the method described above, but compare against the certificates received from a computer on a different ISP, preferably even in another country. Or you can simply compare them against the preinstalled public key pins for sites that use HPKP (note that HPKP is now deprecated in major browsers).
I recognized the problem that a MITM proxy can pose for my privacy and started to look more closely into how to detect one as a client. Things I check so far are:
- Check who issued the certificate and whether it is a self-signed certificate that has been installed in my own root certificate store.
- Check whether web content is served to me from a local IP address instead of the external server's IP address, and whether that local IP address is always the same.
- Check whether my system proxy settings are configured to point to localhost.
My questions are:
Beyond certificate pinning, what else can I check if all of the above come back negative but I still have reason to believe I am being proxied? A typical example: Avast states that it uses a MITM proxy to scan all web traffic except the whitelisted URLs of some banks, yet I see none of the usual MITM signals. I have read somewhere that they could read directly from process memory, but implementing and maintaining that sounds expensive if they have already bundled a MITM proxy into their product and openly announce that they use it. Is there a way to run a MITM without triggering any of the listed checks? A kind of transparent MITM proxy for Windows machines?
I have a website that needs to be available over both HTTP and HTTPS. However, I want users to use plain HTTP only when really necessary (obviously). The idea I came up with is to redirect to HTTPS, together with HSTS, on mydomain.com, and to offer plain HTTP on http.mydomain.com. I would ask search engines not to index the http subdomain; it should be found only through instructions on my page itself. This should prevent users from inadvertently using HTTP; the choice would have to be explicit.
My question is: what kinds of attacks does this approach open me up to? Phishing attacks seem inevitable; an attacker could always trick a victim into using the insecure domain and hope they do not notice. I could put a permanent warning banner on the http subdomain, but this only helps if the attacker cannot modify packets in flight. The second problem is DNS spoofing, where an attacker points mydomain.com at http.mydomain.com, or points http.mydomain.com at their own servers. However, more and more clients validate DNSSEC, and DNSSEC is enabled for my domain, so I hope this attack vector keeps shrinking.
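The setup described above might look like this in nginx (a sketch only; the certificate paths are placeholders, and the X-Robots-Tag noindex header is one way to keep the subdomain out of search indexes):

```nginx
# Canonical host: redirect plain HTTP to HTTPS and pin it with HSTS.
server {
    listen 80;
    server_name mydomain.com;
    return 301 https://mydomain.com$request_uri;
}

server {
    listen 443 ssl;
    server_name mydomain.com;
    ssl_certificate     /etc/ssl/mydomain.com.crt;   # placeholder paths
    ssl_certificate_key /etc/ssl/mydomain.com.key;
    add_header Strict-Transport-Security "max-age=31536000" always;
    # ... site configuration ...
}

# Explicit plain-HTTP subdomain, kept out of search indexes.
server {
    listen 80;
    server_name http.mydomain.com;
    add_header X-Robots-Tag "noindex, nofollow" always;
    # ... site configuration ...
}
```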
Is there anything I am missing? Is there a better approach to what I am trying to do?