The future of web hosting

Hi! About once a week, I want to share my thoughts on the industry with everyone, especially those who do not know much about technology and web hosting. I also want to sharpen my writing skills and contribute in every way I can. I'll make sure the next post covers the latest news on web hosting and the industry around it. Feel free to chime in with your own thoughts!

22/9/19
In recent years, the internet has grown faster than ever. User-generated content has shifted from plain text to a range of multimedia formats, including video and audio.
Web hosting has grown along with it. For anyone who has ever managed a website, the term web hosting is nothing new.
In a few words, web hosting refers to a computer or set of computers that store a website's data and serve it to visitors' browsers when they open the site.
Twenty-odd years ago, when the web was taking off, developers had limited hosting options: either you built a server yourself or you paid a company a large sum to use its hardware and resources.
All that has changed, and there are now countless hosting options. The most common are shared hosting, dedicated hosting, VPS hosting and WordPress hosting.
Today, many people speculate about what the future of web hosting will look like.
Before getting into where web hosting might go in the coming years, I'd like to walk through how it developed, and then look ahead.

THE EVOLUTION
Let me go back to the mid-nineties, when the web was still young, the dot-com bubble was still growing, and web hosting was a do-it-yourself affair. A lot has changed since then, and I do not just mean the look of the web. Google has overtaken Yahoo, Facebook has displaced MySpace, and it is easier to host a website than ever before.
As I go through this evolution, I'll call each phase a "generation."

FIRST GENERATION
In this phase, people began resorting to tools, though in a very primitive form.
This generation of hosting ran on workstations. Like other tools of the time, they were awkward, primitive and inefficient. But mind you, they were the best technology of their day, and they got the job done.
Running a web server on a single workstation is unusual nowadays.
During this generation, an entire website depended on the state of a single PC: one error and the site went down. It was primitive, but a good start. People find workarounds and eventually improve on them.

SECOND GENERATION:
The simplest fix to the workstation problem was to put the machines in a room where accidents were unlikely. That is how the data center was born. The risk with the workstation was that everything depended on the state of a single piece of hardware or software; the data center resolved that, improving physical security and keeping unauthorized people away from the machines. The downside was the huge investment required to set up and maintain a data center. Data centers consume too much of everything: too much space, too much power, too much time, too much planning. They are hard to plan and far too rigid and inflexible.

THIRD GENERATION
The potential of the first generation was recognized and improved upon to reach the second stage. But the second stage had drawbacks of its own, and people needed a more viable, longer-lasting answer to the hosting problem. To make up for the disadvantages of the second generation (the data center), they had to think outside the box. They stopped chasing more space, more hardware and more security, and started thinking about something new. This time the virtual server was born.
The virtual server was a breakthrough, one of a kind, something that had never been seen or seriously considered before. It decoupled the operating system from the physical hardware, paving the way for application isolation and far better use of the hardware. The real power of virtual servers showed when they were combined into virtual clusters.
Although this generation (the virtual server) took the web hosting industry to a new level, it still faced the same problems as the data center. It demanded a lot of everything: a lot of investment, a lot of resources, almost unlimited time. To make sure they were on the right path, developers had to put in more effort and money than anticipated, for a small payoff in the end. That created the demand for another change.

FOURTH GENERATION
Over time, the virtual data center ran into the same issues as the older data center. Even when managed properly, the servers were better and more efficient, but they were still tied to one physical location.
Critical thinking at this stage produced an advanced and elegant solution: cloud hosting. It tackles every problem and disadvantage found in the data center.
First and foremost, cloud hosting was flexible and scalable. Websites consume exactly the bandwidth, RAM, compute and storage they need, no more and no less. In the cloud, there was no room for wasted resources.
With the birth of cloud hosting, a site is no longer tied to one specific physical server. This was a very good answer to the data center's dependence on a single physical location. Cloud servers are spread across different locations, which brings a number of advantages. Taking a server offline is no longer a problem, and the resilience issue of the data center was addressed as well. Cloud servers can replicate themselves to other cloud servers, so whatever goes wrong with one of them, the site is back up and running within minutes, whereas a physical server is difficult to fix or replace when a major problem occurs.
Virtual servers helped separate software from hardware, but cloud servers take it to a whole new level: they separate the software from the entire data center.
This also changed how people think about servers. With cloud hosting, servers went from being a fixed investment and a static resource to being ephemeral and on demand. They appear and disappear as needed and adapt to the users' requirements.
All of these improvements of cloud hosting over the physical data center make maintenance and operation cheaper to run and more cost-effective for customers.

THE FUTURE OF WEB HOSTING
Looking back at web hosting's past and its beginnings, we should appreciate what we have now. Not long ago, things were far from smooth. It is amazing how fast the technology develops and improves. Nobody knows what the future holds, but we can make educated guesses.
Given past trends, it is safe to say that cloud hosting will not stay the same forever. It is still a young development, and people are still working out the best ways to use it. It is evolving as fast as people can think of new applications to build on it. Even legacy applications benefit: developers are looking for ways to run them in a hybrid fashion so they too can take advantage of cloud hosting.
Many people and services already depend on cloud servers today, and services like Loggly and Netflix will be joined by many more in the future. Falling prices for cloud computing, and for services hosted on cloud platforms, are nothing new either.
The future will surely bring something more far-sighted and more abstract than the cloud, but it will take many more years for the cloud itself to reach its full potential. Maybe one day everything, not just the web, will be hosted in the cloud. Nobody knows.

Web Applications – Authentication and Authorization – Front End and Back End Dilemma

I'm working on a centralized authentication and authorization API system and I'm stuck in a dilemma between front-end and back-end.

The front-end person says that only one request should be sent to this API, authenticating and authorizing the user in one go. The response would include the JWT, the user's role(s), and the user data.

The back-end person argues that the front-end should make two calls: first to authenticate the user (login), which returns a JWT, and second to authorize, retrieving the user's permissions, roles, and user data.

Which approach is right? Personally, I find the first suggestion more logical, but I cannot come up with a good argument that stops the back-end person from saying, "but you can just add a second call to the API for authorization."
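
For concreteness, here is a minimal sketch of what the single-call approach could look like, assuming a Flask back end with PyJWT; the endpoint name, field names and credential check are hypothetical stand-ins for the real system.

# Minimal sketch of the single-call approach (hypothetical endpoint and field
# names; assumes Flask and PyJWT). On login, the API returns a JWT whose
# claims already carry the user's roles, plus basic profile data, so no
# second "authorize" round trip is needed.
import datetime
import jwt  # PyJWT
from flask import Flask, request, jsonify

app = Flask(__name__)
SECRET = "change-me"  # placeholder signing key

def verify_credentials(username, password):
    # Placeholder: look the user up in your own store.
    if username == "alice" and password == "s3cret":
        return {"id": 1, "name": "Alice", "roles": ["admin", "editor"]}
    return None

@app.route("/auth/login", methods=["POST"])
def login():
    body = request.get_json(force=True)
    user = verify_credentials(body.get("username"), body.get("password"))
    if user is None:
        return jsonify({"error": "invalid credentials"}), 401
    token = jwt.encode(
        {
            "sub": str(user["id"]),
            "roles": user["roles"],  # roles travel inside the token itself
            "exp": datetime.datetime.now(datetime.timezone.utc)
                   + datetime.timedelta(hours=1),
        },
        SECRET,
        algorithm="HS256",
    )
    # One response: token, roles, and user data together.
    return jsonify({"token": token, "roles": user["roles"], "user": user})

If the two-call design wins instead, the same token would simply be presented to a separate endpoint that returns the roles and user data; the sketch above just folds that information into the login response.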

Web Development – Why are methods like GET and POST needed in the HTTP protocol?

Please note: the question has been changed/clarified since this answer was first written. A further answer, addressing the latest revision of the question, follows the second horizontal rule.

What is the need for methods such as GET and POST in the HTTP protocol?

They form the basis of the HTTP protocol, along with a few other things such as header formats and the rules for separating headers from the body.

Can we not implement the HTTP protocol with just a request and a response text?

No, because whatever you created would not be the HTTP protocol.

For example, the URL contains a request that maps to a function on the server side, such as a servlet, and the response sends back HTML and JavaScript.

Congratulations, you have just invented a new protocol! If you now set up a standards body to operate, maintain, and develop it, then one day it could overtake HTTP.

I suppose that sounds a bit facetious, but there is nothing magical about the internet, TCP/IP, or the communication between servers and clients. One side connects and sends a few words to start a conversation. The conversation only works if both ends stick to an agreed specification, so that requests are understood and reasonable answers come back. This is no different from any dialogue in the world: you speak English, your neighbor speaks Chinese, and the best you can hope for is that by waving your hand, pointing, and shaking your fist you can convey the message that you do not want him parking his car in front of your house.

Back on the internet, if you open a socket to an HTTP-compliant web server and send:

EHLO
AUTH LOGIN

(the beginning of an SMTP email transmission), you will not get a sensible answer back. You could write the most perfectly SMTP-compliant client, but your web server will never talk to it, because this conversation is all about the shared protocol: no shared protocol, no joy.

This is why you cannot implement the HTTP protocol without implementing the HTTP protocol. If what you write does not match the protocol, it simply is not the protocol; it is something else, and it will not work the way the protocol specifies.
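
To make the "no shared protocol, no joy" point concrete, here is a rough sketch using plain Python sockets (example.com stands in for any HTTP server): the first payload follows the HTTP grammar and gets a parseable status line back; the second sends SMTP-style lines to the same port, and the server either answers with an error or simply drops the connection.

# Rough illustration: the same TCP connection, two different "conversations".
# Only the one that follows the HTTP grammar gets a sensible reply.
import socket

def talk(host, payload):
    with socket.create_connection((host, 80), timeout=5) as s:
        s.sendall(payload)
        return s.recv(4096)

host = "example.com"  # stand-in host

# 1) A well-formed HTTP/1.1 request: method, path, version, headers, blank line.
http_request = (
    "GET / HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Connection: close\r\n"
    "\r\n"
).encode()
print(talk(host, http_request)[:80])  # starts with something like b"HTTP/1.1 200 OK"

# 2) SMTP-style lines sent to a web server: not the shared protocol, so
#    expect a 400-style error or a silently closed connection instead.
smtp_greeting = b"EHLO example.com\r\nAUTH LOGIN\r\n"
print(talk(host, smtp_greeting)[:80])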

If we run with your example for a moment: the client connects and sends only something that looks like a URL, the server understands it and sends back only something that looks like HTML/JS (a web page), and it might well work. But what have you saved? A few bytes by not saying GET? A few more by dropping that annoying header? The server saves a little too, but what happens when you cannot work out what it sent you? What if you asked for a URL ending in JPEG and it sends you bytes that make up an image, but a PNG? An incomplete PNG at that. If only we had a header telling us how many bytes the content was, then we would know whether the bytes we received really are the whole file. What if the server gzipped the response to save bandwidth but did not tell you? You would spend considerable computing power working out what it had sent.

At the end of the day, we need meta information, information about the information. We need headers. We need files to have names, extensions and creation dates. We need people to have birthdays so we know when to wish them well, and so on. The world is full of protocols and contextual information so that we do not have to sit down and work everything out from scratch every single time. It costs a little space, but it is worth it.
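
As a small illustration of why that meta information matters, here is a sketch using Python's standard http.client (example.com is just a placeholder host): the Content-Length, Content-Type and Content-Encoding headers are what let the client decide whether it received the whole body and how to interpret it.

# Sketch: headers tell the client how much data to expect and what it is.
# Without them we would have to guess when the body ends and what format it uses.
from http.client import HTTPSConnection

conn = HTTPSConnection("example.com")  # placeholder host
conn.request("GET", "/")
resp = conn.getresponse()

declared = resp.getheader("Content-Length")                 # how many bytes to expect
ctype = resp.getheader("Content-Type", "unknown")           # how to interpret them
encoding = resp.getheader("Content-Encoding", "identity")   # e.g. gzip

body = resp.read()
print(f"type={ctype} encoding={encoding} declared={declared} received={len(body)}")
if declared is not None and int(declared) != len(body):
    print("Body is incomplete (or the server used chunked transfer instead).")
conn.close()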


Is the implementation of different HTTP methods really necessary?

You do not have to implement the entire specified protocol, and that is usually true of anything. I do not know every word in the English language. My Chinese neighbor is also a software developer, but in a different industry, and he does not even know the Chinese terms for some of the things used in my industry, let alone the English ones. The good news, though, is that we can both pick up a document on implementing HTTP, he can write the server and I can write the client, in different languages and on different architectures, and they will still work together because they stick to the protocol.

It may turn out that none of your users ever issues anything other than a GET request, never uses persistent connections, never sends anything other than JSON as a body, and never accepts anything other than text/plain, so you could write a really cut-down web server that meets only those very limited requirements of the client browser. But you could not just arbitrarily throw away the basic rules governing how "a string of text is passed over a socket", which is what HTTP is. You cannot get rid of the basic idea that the request will be a string of the form:

VERB URL VERSION
header: value

maybe_body

The response, likewise, contains a version, a status code, and possibly headers. Change any of this and it is not HTTP any more; it is something different, and it will only work with something designed to understand it. By these definitions HTTP is what it is, so if you want to implement it, you have to follow the definitions.
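
To show how little is actually required, here is a toy sketch of such a "really reduced" server: it parses only the request line above and the blank line that ends the headers, answers GET with a fixed JSON body, and rejects everything else. It is not a serious implementation, but what it does speak is still a recognisable subset of HTTP.

# Toy sketch of a "really reduced" server: it parses only the request line
# ("VERB URL VERSION"), answers GET with a fixed body, and rejects everything
# else. Still recognisably HTTP, just a tiny subset of it.
import socket

def serve(port=8080):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        while True:
            conn, _ = srv.accept()
            with conn:
                raw = conn.recv(65536).decode("latin-1", errors="replace")
                request_line = raw.split("\r\n", 1)[0]  # "VERB URL VERSION"
                parts = request_line.split(" ")
                if len(parts) == 3 and parts[0] == "GET":
                    body = b'{"hello": "world"}'
                    head = (
                        "HTTP/1.1 200 OK\r\n"
                        "Content-Type: application/json\r\n"
                        f"Content-Length: {len(body)}\r\n"
                        "Connection: close\r\n\r\n"
                    ).encode()
                    conn.sendall(head + body)
                else:
                    conn.sendall(b"HTTP/1.1 405 Method Not Allowed\r\n"
                                 b"Content-Length: 0\r\n\r\n")

if __name__ == "__main__":
    serve()

Pointing a browser or curl at http://127.0.0.1:8080/ would get the JSON back; anything other than a GET gets a 405.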


Update

Your question has evolved a bit. Here is an answer to it in its current form:

Why does the HTTP protocol have methods at all?

Historically, you have to appreciate that things were much less flexible in their design and implementation, to the point where there was no scripting, and even the idea that pages could be dynamic, generated in memory on the fly and pushed down the socket rather than read from a static file on disk, did not yet exist. The very early web revolved around the notion of static pages containing links to other pages: everything was a file on disk, and navigation mostly meant the client making GET requests for pages at URLs, with the server mapping each URL to a file on disk and sending it back. There was also the notion that this web of documents linking to each other should be a developing, evolving thing, so it made sense for there to be a set of methods that allowed suitably authorized users to update it without needing access to the server's file system; that is the use case behind PUT and DELETE. Other methods such as HEAD returned only meta information about a document, so the client could decide whether to fetch it again. Those were the days of dial-up modems, really primitive, slow technology, and it could be a big saving to retrieve just the metadata of a half-megabyte file, discover it had not changed, and serve the local copy from the cache instead of downloading it again.

This gives the methods their historical context: back then, the URL was the inflexible part, simply mapping to pages on disk, so the method was useful because it let the client express its intention for the file and let the server handle each method differently. In the original vision of a hypertext web (and it really was just text), there was no idea that URLs would be virtual, or used for switching or mapping.
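
A quick sketch of that HEAD use case with Python's standard library (the URL and cached ETag are placeholders): fetch only the metadata first, and download the body only if the server reports something different from the local copy.

# Sketch: ask for metadata with HEAD first, and only GET the body if the
# resource looks different from our cached copy (placeholder URL and cache).
from urllib.request import Request, urlopen

URL = "https://example.com/big-file"  # placeholder URL
cached_etag = '"abc123"'              # pretend this came from a local cache

head = urlopen(Request(URL, method="HEAD"))
etag = head.headers.get("ETag")
size = head.headers.get("Content-Length")
print(f"server reports etag={etag}, size={size} bytes")

if etag is not None and etag == cached_etag:
    print("Unchanged: reuse the local copy and skip the download.")
else:
    body = urlopen(Request(URL, method="GET")).read()
    print(f"Downloaded {len(body)} bytes.")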

I do not intend this answer to be a documented history with dates and citations for when things began to change; you can read Wikipedia for that. Suffice it to say that over time the desire grew for the web to become ever more dynamic, and at both ends of the server-client connection the possibilities for creating a rich multimedia experience expanded. Browsers supported a huge variety of content formatting tags, and each of them raced to implement media-rich features and new ways of making things look good.

On the client side came scripting, plugins and browser extensions intended to turn the browser into a powerhouse for everything. On the server side, the big push was the active generation of content from algorithms or database data, and it has developed to the point where there are probably only a few actual files left on disk. Sure, we still keep an image or a script as a file on the web server for the browser to fetch, but increasingly the images the browser shows and the scripts it runs are not files you could open in a file explorer, but content generated on demand: SVG that describes how to draw the pixels instead of a bitmap array of pixels, or JavaScript emitted from a higher-level form such as TypeScript.

In a modern multi-megabyte page, only a fraction of the content is likely to be fixed files on a hard drive. Database data is formatted and rendered into the HTML the browser consumes, produced by the server through various programming routines that the URL refers to in some way.

I mentioned in the comments to the question that it is a bit of a full circle. Back when computers cost hundreds of thousands and filled rooms, it was common for many users to share one very powerful central mainframe through hundreds of dumb terminals: a keyboard and a green screen, sending a bit of text, getting a bit of text back. Over time, as computing power grew and prices fell, desktop computers became more powerful than the old mainframes, and the ability to run powerful apps locally made the mainframe model obsolete. It never really went away, though, because things swung back the other way, toward a central server providing most of the useful app functionality and hundreds of client machines that only draw to the screen and send data to and receive data from the server. These days, even though your computer is smart enough to run its own copies of Word and Outlook, we have Office Online and the like, where the browser is merely the device that draws the picture on the screen, and the document or email you are writing lives on the server: stored there, sent from there, shared with other users from there, with the browser acting as a thin shell that gives a partial view of something living elsewhere, all the time.

The answers give a sense of why the concept of methods exists in the first place. This leads to another related question:

For example, Gmail sends a PUT/POST request with data when it creates something. How does the browser know which method to use?

By default GET is used, per convention/specification; that is what the specification mandates for when you type a URL into the address bar and press Enter.

Does the Gmail page sent by the server contain the method name to use when Gmail makes its requests?

This is one of the key things I point out in the comments above: the modern web is not really about pages any more. Once, pages were files on a hard drive that the browser retrieved. Then they became pages mostly generated dynamically by pouring data into a template, but it was still the "ask the server for a new page, fetch the page, show the page" cycle. The page swapping got really refined; you did not see it load and resize and shift its layout around, which made it feel more fluid, but it was still the browser replacing one page, or part of a page, with another.

The modern way of doing things is the single-page application: the browser holds a document in memory, displays it in a particular way, and its script calls back to the server, gets some information, and edits the document so that part of the page visibly changes to show the new information, all without the browser ever loading a new page. It has become just a UI that updates in parts, like a typical client app such as Word or Outlook. New elements appear over other elements and can be dragged around to simulate dialog boxes, and so on. All of this is the browser's script engine sending requests, using whichever HTTP method the developer chooses, fetching data, and poking at the document the browser is drawing. You can think of the modern browser as a brilliant device, something like an entire operating system or a virtual computer: a programmable device that provides a fairly standardized way of drawing things on the screen, playing audio, capturing user input, and sending it off for processing. All you have to do to get your UI drawn is give it some HTML/CSS to build that UI, and then keep tweaking the HTML so that the browser changes what it draws. Heck, people are so used to single-page apps that they change the URL programmatically even though no navigation (no request for a brand-new page) is happening.

When we visit www.gmail.com, the GET method must be used. How does the browser know that this method is to be used?

True, because that is what is specified. The first request, just as it always was in the past, is to get some HTML to draw a user interface, and from then on the script either pokes at and manipulates that one page forever, or fetches another page whose own script nudges and manipulates its UI to make it reactive.

As some answers show, we could even use the DELETE method to create new users. That raises the question of what the intent behind the HTTP methods really is, because ultimately it is entirely up to the server which function it assigns to a URL. Why should the client tell the server which method to use for a URL?

History. Legacy. In theory we could throw away all the HTTP methods tomorrow; we are at a level of programming sophistication where methods are arguably redundant, because URLs can be processed to the point where they themselves tell the server, as the dispatch mechanism, that you want to save the body data as a draft email. There is no file on the server at /emails/draft/save/1234; the server is programmed to take that URL apart and knows it should save the body data as a draft email with ID 1234.
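
As a hypothetical illustration of that URL-as-dispatcher idea (a Flask sketch; the route and the in-memory "storage" are made up for the example): there is no file at the path, the server just takes the URL apart and acts on it, and the method carries little information beyond "this request has a body".

# Hypothetical sketch: the URL itself tells the server what to do.
from flask import Flask, request, jsonify

app = Flask(__name__)
drafts = {}  # stand-in for real storage

@app.route("/emails/draft/save/<int:draft_id>", methods=["POST"])
def save_draft(draft_id):
    # There is no file at this path; the server takes the URL apart and
    # knows to store the request body as draft e-mail number draft_id.
    drafts[draft_id] = request.get_data(as_text=True)
    return jsonify({"saved": draft_id, "bytes": len(drafts[draft_id])})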

So it is quite possible to sideline the methods, were it not for the enormous weight of compatibility that has grown up around them. It is perfectly reasonable to use them only as far as you need to, largely ignore the rest, and do whatever gets your thing working. But we still need the methods we do use to mean something to the browser and the server our apps are built on. If a client-side script wants the underlying browser to send data, it has to use a method that makes the browser do the required steps, probably a POST, because GET packs all the variable information into the URL and many servers limit its length. If the client wants a long response from the server, it should not use HEAD, because HEAD is defined to have no response body at all. The browser and server you happen to use may not enforce these limits, but one day they will meet a different implementation at the other end, and in the spirit of interoperation it pays to stick to the specification.

python 3.x – How do I use Selenium web drivers to test 2 players participating in the same game?

I'm trying to create a web-based board game (in Python, Flask and Angular).
Players navigate to the site and enter their name. They then land in a lobby where they can create a new game or join an existing one.
I want to test the front end with Selenium WebDriver to check that one player can join a game that another player has created.
I've tried doing this with threads that each create their own WebDriver object, and with threads that share a single WebDriver object, but either way the two players never seem to end up in the same game.

What would be the best way to achieve this?
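
One way to structure such a test, sketched below, is to drive two completely independent WebDriver sessions from a single test, sequentially rather than from separate threads (WebDriver instances are not thread-safe, so if you do use threads, give each thread its own driver). The base URL and the element IDs here are hypothetical and would have to match the real front end.

# Sketch of a two-player test using two independent WebDriver sessions
# (separate browsers mean separate cookies and sessions). The URL and
# element IDs are hypothetical and need to be adapted to the actual UI.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

BASE_URL = "http://localhost:5000"  # assumed local Flask dev server

player1 = webdriver.Chrome()
player2 = webdriver.Chrome()
try:
    # Player 1 enters the lobby and creates a game.
    player1.get(BASE_URL)
    player1.find_element(By.ID, "name").send_keys("Alice")
    player1.find_element(By.ID, "enter-lobby").click()
    player1.find_element(By.ID, "create-game").click()
    game_id = WebDriverWait(player1, 10).until(
        EC.visibility_of_element_located((By.ID, "game-id"))
    ).text

    # Player 2 enters the lobby and joins that specific game by its id.
    player2.get(BASE_URL)
    player2.find_element(By.ID, "name").send_keys("Bob")
    player2.find_element(By.ID, "enter-lobby").click()
    player2.find_element(By.ID, "join-game-" + game_id).click()

    # Both sessions should now report the same game id.
    player2_game_id = WebDriverWait(player2, 10).until(
        EC.visibility_of_element_located((By.ID, "game-id"))
    ).text
    assert player2_game_id == game_id
finally:
    player1.quit()
    player2.quit()

Driving the two sessions from one linear script also makes the ordering deterministic: the join click only happens once the game id is known to exist.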

Hey, I introduce myself – Web Hosting Talk


  1. Hey, I introduce myself

    Hello, my name is Stan, I'm 29 years old and I'm new to the forum. I'm looking for information to learn about web hosting and to experiment with a project.


Boxne Web Hosting

Boxne offers dedicated servers, web hosting, reseller hosting and virtual machines at very reasonable prices, including 24/7 technical support and a 99.9999% uptime guarantee.

Promotions: $0.01 for your first month of reseller hosting with the promo code 1CentHosting.
20% off your first month when purchasing a dedicated server or VPS with the coupon code Server20.

https://boxne.com

Unique web design for a very low price: $125

Unique web design for a very reasonable price

Hi there! I am Mohammad Jahir Uddin Babor. I completed my master's degree and have spent more than 5 years as a designer at an IT company. Now I work as a full-time freelancer. I am an expert in HTML5, CSS3, JavaScript and Bootstrap, and I have designed some excellent websites for my clients.


Avoid sending spam emails – Web Hosting Talk

I'm using an Amazon AWS EC2 instance with cPanel/WHM.
By default, AWS assigns a new instance an invalid PTR record and restricts it from sending email.
To request the removal of these restrictions, I have to answer an important question from them.
Your question is: "Please provide details of the steps you have taken to prevent this account from being associated with sending unsolicited emails."

Does anyone know what should be done to prevent that?

Web Development – Should SOAP-based web services be used only with non-browser-based applications?

From what I understand from reading around the internet, SOAP is an application-level protocol. If so, can I assume that anything that happens through the browser (which uses the HTTP application protocol) is NOT a SOAP-based web service, and most likely a REST-style service? Does this also mean that SOAP only comes into the picture for non-browser applications?

Addendum: my further research has shown that SOAP is an application-layer protocol that can be (and often is) carried over another application-layer protocol, HTTP. (I had not realized that one application-layer protocol could be carried over another; I had a strict 7-layer OSI model in mind.) This means that browser-based applications can also use the SOAP protocol. Please correct me if I'm wrong here.
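
That understanding matches how SOAP is usually deployed: the call is just an ordinary HTTP POST carrying an XML envelope, which is why it can travel over the same transport a browser (or any HTTP client) uses. Below is a minimal sketch with the requests library; the endpoint, SOAPAction and namespace are placeholders, not a real service.

# Sketch: a SOAP call carried as a plain HTTP POST with an XML envelope.
# Endpoint, SOAPAction and namespace below are placeholders.
import requests

envelope = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetQuote xmlns="http://example.com/stocks">
      <Symbol>ACME</Symbol>
    </GetQuote>
  </soap:Body>
</soap:Envelope>"""

response = requests.post(
    "https://example.com/stockservice",  # placeholder endpoint
    data=envelope,
    headers={
        "Content-Type": "text/xml; charset=utf-8",
        "SOAPAction": "http://example.com/stocks/GetQuote",  # placeholder action
    },
    timeout=10,
)
print(response.status_code)
print(response.text[:200])  # the reply is another XML envelope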

Web Development – Why do links to my site's pages from Google search results and from Facebook show 403 Forbidden errors when clicked?

I have a website and it works well. But since yesterday, when I click a link to my website from a Google search result, the following error appears:

You do not seem to have permission to access this page. 403 Error. Forbidden.

When I share a link to one of my web pages on my Facebook page and then click the shared link, the same error is displayed.

In both cases, however, if I copy the link (from the Google search results or from the Facebook page) and paste it directly into the browser, the page opens without any problem and no error is shown.

I use Hostgator Shared Hosting.

I'm really confused about why this happens. Please guide me. I worked day and night to build up traffic, and now this problem is eating away at the reputation my site has earned with the search engine, and the traffic graph in my analytics is falling.

I hope the problem is clear. Please help. Thank you in advance.
