VPS requirements and setup for an HTTP reverse proxy

Hi, I need to know whether a reverse proxy with nginx needs a lot of resources (I say nginx because I've read it's the usual recommendation for what I have planned). My plan is to serve only static files, hosted on a back-end server, and deliver them through Cloudflare with an Edge Cache TTL of one month.

The flow will look like this:

A visitor requests an image from img.mydomain.com > the traffic for this subdomain passes through Cloudflare with edge caching (proxied) > cache misses are forwarded to my HTTP reverse-proxy VPS > the proxy reaches the origin server.

Can this VPS handle it?

CPU: 1 vCore
RAM: 1 GB
SSD: 25 GB
Traffic: unlimited
Bandwidth: 200 Mbps

I need advice on the configuration, and on whether it's worth enabling caching on the VPS itself or not, or what else I can do. I have about 50,000 daily visitors on this site, and the files are just pictures. A sketch of the nginx config I have in mind is below.

PS: I need a VPS in the middle even though I use Cloudflare; removing it is not an option.

Sorry for my English, and thanks.

tls – How can I keep allowing plain HTTP while preventing its accidental use?

I have a website that needs to be available over both HTTP and HTTPS. However, I want users to use HTTP only when really necessary (obviously). The idea I came up with is to redirect mydomain.com to HTTPS and serve it with HSTS, and to offer plain HTTP on http.mydomain.com. I would ask search engines not to index the http subdomain; it should be discoverable only through instructions on my own pages. This should prevent users from using HTTP inadvertently: the choice would have to be explicit. A sketch of the server layout I have in mind follows.

My question is: what kinds of attacks does this approach open me up to? Phishing seems unavoidable; an attacker could always trick a victim into using the insecure domain and hope they don't notice. I could put a permanent warning banner on the http subdomain, but that would only help if the attacker cannot modify packets in flight. The second problem is DNS spoofing, where an attacker points mydomain.com at http.mydomain.com, or points http.mydomain.com at their own servers. However, more and more clients validate DNSSEC, and DNSSEC is enabled for my domain, so I hope that attack vector keeps shrinking.

Is there anything I've missed? Is there a better approach to what I'm trying to do?

htaccess – Incorrect path appended to URL when redirecting from HTTP to HTTPS

I have an OVH-hosted WordPress site and I'm having problems redirecting HTTP to HTTPS.

http://www.example.com redirects nicely to https://www.example.com

https://www.example.com redirects nicely to https://example.com

However, http://example.com keeps leading to https://example.com/www.

I do not know where this extra /www comes from.

My .htaccess looks like this:


# If the request came in over plain HTTP, redirect to the same host and path on HTTPS
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

firewalls – Can my router be DDoSed if HTTP is filtered on the WAN side?

Yes, DDoS is possible even if no ports are open.

Remember that a DDoS attack works by exhausting a resource: computing power, memory, storage, or bandwidth. So if an attacker sends more traffic than your link can receive, your network access will be significantly affected. It does not matter whether any service is actually listening for the packets.

The most effective way to counter such an attack is to work with your upstream ISP and have them discard any packets believed to be part of the attack. That is neither cheap nor easy, but it is how most large Internet services protect themselves.

The other option is to increase your network bandwidth so that you have more capacity than the attacker can fill. However, this is usually even more expensive than the previous solution.

linux – QEMU guest cannot access nginx over HTTP on the host

I have nginx installed on the host so that the QEMU/KVM guest can reach it over HTTP, but when I open the address 192.168.122.1 in the guest's browser, my web app does not load.

I suspect it is a problem with iptables or forwarding.

Here is my iptables -S output:

-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-A INPUT -i virbr0 -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -i virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
-A INPUT -i virbr0 -p udp -m udp --dport 67 -j ACCEPT
-A INPUT -i virbr0 -p tcp -m tcp --dport 67 -j ACCEPT
-A INPUT -i virbr0 -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -i virbr1 -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -d 192.168.122.0/24 -o virbr0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 192.168.122.0/24 -i virbr0 -j ACCEPT
-A FORWARD -i virbr0 -o virbr0 -j ACCEPT
-A FORWARD -s 192.168.1.0/24 -d 192.168.122.0/24 -o virbr0 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -o virbr0 -p udp -m udp --dport 68 -j ACCEPT
-A OUTPUT -o virbr0 -p tcp -m tcp --sport 80 -m conntrack --ctstate ESTABLISHED -j ACCEPT

And here is my firewall-cmd --list-all:

public (active)
  target: default
  icmp-block-inversion: no
  interfaces: enp0s25 wlp3s0
  sources: 
  services: cockpit dhcpv6-client ssh
  ports: 80/tcp 1433/tcp
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 

Is something wrong here?

I tried opening 192.168.1.2 (my host's LAN address) from a VirtualBox VM, and that works fine, but it fails from QEMU/KVM.

Pinging both 192.168.122.1 and 192.168.1.2 from the guest works, though. The checks I plan to run next are below.

server-side scripting – Can there be a problem if I do not know the content type in the HTTP response when content is retrieved with wget?

The content type is used by browsers to determine how to display the content; they must choose the presentation method based on it. Rendering plain text is very different from rendering HTML.

wget performs the same action on all files: it stores them on disk. In my experience it pays no attention to the Content-Type header; it just saves the file. Most file systems have no mechanism for storing metadata about a file beyond its name and permissions, so the content type is not saved at all.

The only exception might be macOS. Mac file systems can store far more metadata about a file than most other file systems. I have never used wget on a Mac, but it is possible that the content type is saved there as file metadata, which would then affect which program the default action opens the file with.

On other systems, of course, applications usually guess the type from the file extension (such as .html). Since that guess can also be wrong, the behaviour depends on which application the system picks to open the saved file.

If you process the UUID-named files with a specific script or application, it probably does not matter to you what the operating system would choose as the default editor or viewer for them. I cannot imagine your use case causing unforeseen problems on Windows, macOS, or Linux.

Enabling HTTP/2 makes the site much slower in nginx

After enabling HTTP/2 for my site, I found that performance dropped dramatically: download speeds got much slower, and large image requests block other API calls.

Here is a sample web page to illustrate this problem; it boils down to roughly this (a sketch, with the filenames described below):



(img.jpg is a ~700 KB image and foo.txt is a small text file; everything is served directly by nginx.)

Here is the timeline diagram when HTTP/2 is NOT enabled (listen 443 ssl):

(timeline screenshot: http1_new)

... and here it is when HTTP/2 is enabled (listen 443 ssl http2):

(timeline screenshot: http2_new)

You can see that HTTP/2 causes longer load times for both img.jpg and foo.txt.

Here is the site configuration:

server {
    listen 443 ssl http2; # for the HTTP/2 case
    # listen 443 ssl;     # for the HTTP/1.1 case
    server_name h2.test.**********;
    root /home/******/h2-test/;
}

I'm using nginx 1.14.2 on Ubuntu 16.04.6 LTS. Do you have any suggestions for resolving this problem?

apache2 – Redirecting HTTP to HTTPS in Apache without blocking HTTP entirely

Background

I added an SSL certificate with Let's Encrypt / Certbot on my Debian 9 (Stretch) host.

To change my Apache configuration, certbot essentially copies the vhost.conf file, wraps it for SSL, and inserts the Include, SSLCertificateFile, and SSLCertificateKeyFile entries.

The old HTTP vhost.conf was then modified to redirect HTTP to HTTPS using Rewrite rules.

I'm glad the site is HTTPS for all end users, but I might want a PHP script to be able to request something from localhost over HTTP, and forcing those requests to use HTTPS seems unnecessary when the traffic is entirely local.

For the purposes of this question, I reset the old HTTP vhost.conf file to handle HTTP traffic as before.

Question

So my question is whether this works correctly in my HTTP vhost file:


    Redirect permanent "/" "https://mydomain.ltd/"

From my tests it certainly appears to. However, SSL is new to me, and while it is generally straightforward, there is plenty of potential for edge cases where things do not work as intended.

Further considerations

I like my proposed solution because it should redirect all users with a modern browser to HTTPS without completely blocking HTTP access. I also prefer Redirect permanent over Rewrite rules, since it is simpler and probably more efficient. Then again, the fact that a fairly high percentage of the advice on the Internet suggests the rewrite method is somewhat disconcerting! Both variants in vhost context are sketched below for comparison.

http – Is `git instaweb` ready to use?

I have a host where Git is served with a simple git daemon --export-all --enable=receive-pack, which I hope is only exposed on my IPv4 address.

I just ran git instaweb, and suddenly a web interface is also being served on port 1234, presumably reachable on my IPv4 address as well.

Well... is gitweb, being read-only by default, safe enough to be exposed to the whole Internet? I mean port 1234, not port 9418. Is it?

This is a cross-post from http://superuser.com/questions/1472320.