web servers – Handling replicated requests received via web servers only once

Setup: IIS web servers, load balancers, ASP.NET running on the web servers.
What we mean by a replicated request: a GET/POST request from a user to the same URL, received more than once within a short time window, e.g. 1 or 2 seconds.

Unknowns: whether the load balancers are replicating the same request to different IIS servers, or whether the network in front of the load balancers has replicated the request.

Known: users are NOT sending multiple requests, and even if they were, whether those are handled once or more is not pertinent.

From the server logs we can see that 0.1% to 1% of requests are being replicated anywhere from 2 to 17 times or more, received by the same or different IIS servers.

Are there any known patterns for handling replicated requests in this scenario? We can include a request ID in all the requests if needed, and just use the request ID to identify duplicates. But this is a jury-rigged solution, I have no reference material for it, and it might come with a number of new problems that we will only become aware of after implementation.

This scenario seems common enough to have existing solutions, but googling “iis replicated requests” does not help. I tried looking up the Two Generals Problem, but that didn’t seem relevant to this case either.

Any hints on what topic/solution/terminology to search for are appreciated.
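For what it’s worth, the usual search terms for this are “idempotency key”, “request deduplication”, and “idempotent receiver”. The shape of the request-ID idea mentioned above can be sketched like this, in JavaScript for brevity (the same logic ports to ASP.NET middleware); the window length, clock injection, and method name are assumptions for illustration:

```javascript
// Sketch of request deduplication keyed on a client-supplied request ID.
// Remembers each ID for a sliding window; duplicates inside the window are dropped.
class RequestDeduplicator {
  constructor(windowMs = 2000, now = Date.now) {
    this.windowMs = windowMs;
    this.now = now;        // injectable clock, for testing
    this.seen = new Map(); // requestId -> timestamp first seen
  }

  // Returns true only the first time an ID is seen inside the window.
  shouldProcess(requestId) {
    const t = this.now();
    // Evict expired entries so the map does not grow without bound.
    for (const [id, ts] of this.seen) {
      if (t - ts > this.windowMs) this.seen.delete(id);
    }
    if (this.seen.has(requestId)) return false;
    this.seen.set(requestId, t);
    return true;
  }
}
```

Note that a per-process map only catches duplicates landing on the same server; since your duplicates also arrive at different IIS servers, the seen-set would need to live in shared storage (a database or distributed cache) with an atomic insert-if-absent.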

javascript – Consuming an unoptimized endpoint in a web app – JS, PWA

I’m building a web app in plain JavaScript, using Node.js and Express, with the goal of turning it into a PWA. One of the requirements is consuming an API whose endpoint is not optimized for bulk reads (opening the link loads indefinitely until it hangs), so I will need a cache of the data in the application’s local state; I was advised to use Redux.

My question: I don’t have much experience working with this data source, handling a local cache, or using Redux. Could anyone tell me which tools and libraries to use?

Also, if there is a better way to receive this data: I imagine receiving packets of data on demand would be a good approach.

GitHub project: https://github.com/HenrickyL/pwa-pelikan-tm
front end: http://pelikan-tm.herokuapp.com/
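Whether or not Redux ends up in the picture, the core of a local cache is just “store the response with a timestamp, and reuse it while it is fresh”. A minimal sketch in plain JavaScript (the TTL value and class name are assumptions; in a PWA you would typically back this with localStorage/IndexedDB or a service-worker cache rather than an in-memory map):

```javascript
// Minimal time-to-live cache: keeps the last value per key and
// serves it again while it is younger than ttlMs.
class TtlCache {
  constructor(ttlMs = 10 * 60 * 1000, now = Date.now) {
    this.ttlMs = ttlMs;
    this.now = now;           // injectable clock, for testing
    this.entries = new Map(); // key -> { value, storedAt }
  }

  get(key) {
    const e = this.entries.get(key);
    if (!e) return undefined;
    if (this.now() - e.storedAt > this.ttlMs) {
      this.entries.delete(key); // expired
      return undefined;
    }
    return e.value;
  }

  set(key, value) {
    this.entries.set(key, { value, storedAt: this.now() });
  }
}
```

Fetching “packets of data on demand” then becomes: check the cache for the batch key first, and only hit the slow endpoint on a miss.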

web app – “Click-and-drop” alternative to drag-and-drop, for accessibility (and maybe mobile-web)

I have observed some people fumble with a mouse. They seem unable to synchronize clicking the buttons and moving the mouse consistently, to the point where they drag things when they only meant to click on them. Or, in trying to use drag-and-drop, they inadvertently drop too soon because they can’t keep their finger on the mouse button for the whole travel. Or they have trouble aiming the mouse while keeping the button pressed.

The web application is a simple toolbar. Think stickers, like gold stars for favorites, or thumbs-down for disfavor, or a red X for deletion. I want users to be able to apply those stickers from a “toolbar” on the edge, to objects on a web page. Drag-and-drop works for this.

For less dextrous users I envision what I’m calling click-and-drop. Instead of dragging with the mouse button depressed the whole time, it proceeds like this: one full click (mousedown + mouseup) on the toolbar “picks up” the icon. Now the mouse-cursor is replaced by the icon while the mouse moves, possibly with a grabbing hand next to it. A second full click at the destination “drops” the icon and the mouse-cursor goes back to what it was. Is there a name for this already? Maybe that action is more like dipping a paintbrush in a watercolor well, rather than a tool-belt.
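The interaction described above is a two-state machine, which keeps the implementation simple whatever it ends up being called. A sketch in plain JavaScript, with the DOM wiring reduced to comments (the class and method names are hypothetical):

```javascript
// Two-state "click-and-drop": first full click picks up a sticker,
// second full click drops it on a target (or the user cancels).
class ClickAndDrop {
  constructor() {
    this.carrying = null; // null = idle, otherwise the picked-up sticker
  }

  // Wire to a full click (mousedown + mouseup) on a toolbar icon.
  pickUp(sticker) {
    this.carrying = sticker;
    // e.g. swap the mouse cursor for the sticker icon here
  }

  // Wire to a click anywhere else; returns what was applied, if anything.
  drop(target) {
    if (this.carrying === null) return null; // idle: an ordinary click
    const applied = { sticker: this.carrying, target };
    this.carrying = null;
    // e.g. restore the normal cursor here
    return applied;
  }

  // Wire to the Escape key so users can bail out mid-carry.
  cancel() {
    this.carrying = null;
  }
}
```

One accessibility bonus of this select-then-apply pattern is that it maps directly onto keyboard operation (focus the sticker, press Enter, focus the target, press Enter), which press-and-hold drag-and-drop does not.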

Would this click-and-drop alternative be helpful for a good portion of dexterity-challenged web users? Are there better alternatives?

Without muddying this question too much, I was further thinking this alternative might carry over to mobile or tablet devices. Touch the toolbar; the icon hovers, say, in the upper right corner; pan to the destination; tap the destination once to “drop” the icon. I bring this up mainly for the sake of consistency across devices, though perhaps an entirely different scheme is better for the multi-touch world.

android – How can I perform a web redirect from any URL?

My personal project consists of setting up an access point with hostapd that works as a bridge.

The problem comes with one of the details I’d like to implement: redirection to a specific web page. Let me explain:

As soon as the user connects to the network (or tries to browse the internet), a page (already built) should pop up showing certain information about the site they’ve connected to (their IP, the domain, and data already assembled in a file called “Saludo.html”).

Thanks, best regards.

web development – Back-end solution for pulling from CSV files

I’m building a data visualization that displays COVID information for the United States, at the city, state, and county level.

The ultimate source of truth is three CSVs published by the New York Times on GitHub in this repo:

The CSVs are updated once per day with new data from the previous day.

The front-end involves selecting a state, county, and type of statistic (number of deaths, number of cases, etc.). Three line charts are then displayed, showing the rate of change over time – at the national, state, and county level.

Right now, the app is purely front-end. It downloads the set of three CSVs (which are quite large), then runs a series of calculations on the data, and when the Promise completes, the visualization is finally displayed in the browser. It takes a good 5–10 seconds to complete on a good internet connection – hardly sustainable in production – and it also requires the user to download the entirety of the data, even though they might only be looking at a few combinations of states / counties.

Is there a solution that could speed this up, without requiring a back-end? Or is a formal database / backend structure needed?

Here is my general idea of what the back-end solution would entail (I would use a Node.js / Express REST API setup), but I’m looking for suggestions:

  1. Deploy a Node.js script that downloads the CSVs once per day and puts the data in a database. I could either download the entirety of the CSVs and rewrite the entire database, or download just the new data and add it to the database.

  2. Do some additional calculations on the data (for example, calculate change from the previous day) and then send those to the database. These additional calculations could also be done client-side (this is how it is working currently in my front-end solution)

  3. When the user loads the page, have the front-end query the back-end for a list of states and counties, so the front-end can load.

  4. When the user selects a state / county combination, send just that information to the back-end via a REST API. Have the back-end query the database and return just the requested information to the front-end.
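The four steps above can be sketched end to end. To keep the sketch dependency-free, here is the core transformation (cumulative NYT-style rows to a per-day-change series for one state/county), which is the piece worth testing; the Express route and the daily refresh job are reduced to comments, and the field names follow the NYT CSV headers (date, county, state, cases, deaths):

```javascript
// Given cumulative rows, produce a time series of daily change for the
// requested statistic ('cases' or 'deaths') in one state/county.
function dailyChange(rows, state, county, stat) {
  const series = rows
    .filter(r => r.state === state && r.county === county)
    .sort((a, b) => a.date.localeCompare(b.date)); // ISO dates sort lexically
  return series.map((r, i) => ({
    date: r.date,
    // First day has no predecessor, so report the cumulative value as-is.
    value: i === 0 ? r[stat] : r[stat] - series[i - 1][stat],
  }));
}

// In the full setup this sits behind a REST endpoint (step 4), e.g.:
//   app.get('/api/series', (req, res) => {
//     const { state, county, stat } = req.query;
//     res.json(dailyChange(loadRowsFromDb(state, county), state, county, stat));
//   });
// and a daily scheduled job re-downloads the CSVs and refreshes the
// database (step 1), so each page load transfers only the rows requested.
```

Doing the change calculation server-side (step 2) keeps the payload tiny; doing it client-side also works since the filtered series is already small.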

Miscellaneous concerns:

a. Obviously, a no-backend solution would be preferred, but I can’t think of a way to query these CSVs with just the user-supplied information without downloading them in their entirety.

b. From a database perspective, is it a big lift / cost to delete all the data and rewrite it entirely? Or would it be more cost-efficient (assuming this is a cloud-based solution) to only add the new data? (This assumes the old data does not change.)

c. I’ve been looking at GraphQL as an alternative to REST, but I’m not sure it will solve the problem of having to download the CSVs in their entirety and “store” them somewhere. There are several open-source APIs online already that provide a more convenient way to query the data:


But these all seem to be pulling from the CSVs, and they take a long time. Is this because they are accessing the data from a CSV instead of a database, which I’m assuming has much faster access?

amazon web services – How to save an audio stream to S3

Is there a way to save a WAV audio stream to Amazon S3? I have a mobile app that records audio and sends the audio chunks to a server instance over a websocket connection for processing. Part of this processing involves saving the stream to a storage service. The issue is that S3 multipart uploads have a 5 MB minimum part size, and since the average chunk my stateless server receives is about 2048 bytes, 5 MB is unachievable per chunk. Ideally I would like to avoid writing to a temp file on the server’s disk: instances are closed and restarted often, meaning any temp files would be lost, and thus parts of the audio file would also be lost. I need a way to write the chunks the server receives directly to a storage service. Any advice would be great, thanks!
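The standard workaround is to buffer the incoming ~2 KB chunks in memory until a part reaches the 5 MB minimum, then ship that part with the multipart-upload API; only the final part may be smaller. The buffering logic is independent of the SDK and can be sketched as follows (the class name, part size parameter, and flush callback are assumptions; the callback is where the SDK’s UploadPart call would go):

```javascript
// Accumulates small chunks in memory and hands a combined buffer to
// `flush` whenever at least `partSize` bytes have been collected.
class ChunkBuffer {
  constructor(partSize, flush) {
    this.partSize = partSize; // 5 * 1024 * 1024 for S3 multipart parts
    this.flush = flush;       // e.g. (buf, partNumber) => upload the part
    this.chunks = [];
    this.bytes = 0;
    this.partNumber = 0;
  }

  write(chunk) {
    this.chunks.push(chunk);
    this.bytes += chunk.length;
    if (this.bytes >= this.partSize) this._emit();
  }

  // Call when the websocket closes: ships the last (possibly small) part.
  end() {
    if (this.bytes > 0) this._emit();
  }

  _emit() {
    this.partNumber += 1;
    this.flush(Buffer.concat(this.chunks), this.partNumber);
    this.chunks = [];
    this.bytes = 0;
  }
}
```

With the AWS SDK this pairs with CreateMultipartUpload, UploadPart, and CompleteMultipartUpload. Holding ~5 MB per active stream in memory avoids the temp file entirely, though if an instance dies mid-stream the not-yet-flushed tail is still lost; parts already uploaded become the recovery unit.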

Venus Web Solutions – Shared Web Hosting Plan for $2.37/month

Venus Web Solutions recently submitted their first ever LowEndBox exclusive offer for their shared web hosting plan. It’s been a little while since we’ve featured a shared web hosting package, but sometimes you just can’t beat the low cost and simplicity of shared web hosting. Venus Web Solutions’ WHOIS is not public, and you can find their ToS/Legal Docs on their website. They are a registered company in India (#27BSJPK2416A1Z6). They accept PayPal and Credit/Debit Card as payment methods.

A little about Venus Web Solutions in their own words: 

“We provide website hosting services across Linux, Windows and WordPress hosting platforms. Our servers are highly configured and can handle DDoS attacks of up to 1 Tbps. We use 100% SSD disks. We provide separate disk space for email; depending on your plan you will get additional email disk space. We use a customized control panel configured with inbuilt apps like a file permission checker, malware scanner, website builder, etc. We provide more than 70 one-click installs, so you get a wide range of apps to install on your website. We provide a free SSL certificate to all the sites hosted on our servers.”

Here is their offer: 

Windows / Linux Shared Hosting

  • 1 Website
  • 100% SSD Web Space
  • 2 GB Web Space
  • Free SSL Certificate
  • Unlimited Bandwidth
  • Windows / Linux Hosting Platform
  • 50 x 1GB Mailboxes
  • 1 x 1GB MySQL Database
  • 10 x Sub Domains
  • Order Now

More information after the break. Remember to leave your feedback in the comments below!

A little more about Venus Web Solutions:

  • Our hosting platforms are load-balanced and autoscaling. Each website hosted on our servers accesses multi-server resources that scale to meet the website’s needs. There is no single point of failure, which means optimal reliability 24x7x365.
  • Our powerful control panel is integrated with a malware scanner app. With its help you can scan your website and remove any malware on your own.
  • Clients can block single IP addresses, entire subnets, or even whole countries from their site via our powerful control panel. Websites and VPSes are protected against denial-of-service attacks.
  • Our next feature is the Sitemap Generator, which will crawl a website and create an XML sitemap you can upload immediately. This app is integrated into our powerful control panel. We also provide a free CDN and support more than 70 one-click installs, including WordPress.

Please let us know if you have any questions/comments and enjoy!

Jon Biloh

I’m Jon Biloh and I own LowEndBox and LowEndTalk. I’ve spent my nearly 20-year career in IT building companies, and now I’m excited to focus on building and enhancing the community at LowEndBox and LowEndTalk.

python – Web scraping – Pagination

Hello, I’m doing web scraping with Python and can see the information from all the pages in the console; I use Spyder 4.1.4. The number of pages varies between 49 and 57. I’m using a while loop and it runs fine, but I’d like to compile all the information extracted from all the pages. I can’t, because every time the loop iterates the string changes value; that is, it holds the current page’s information and the previous page’s information is erased. How do I save the data without creating a file for each page? I’d like to generate a single file at the end with the compilation of all the pages. Thank you very much.
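The fix is language-independent even though the question’s code is Python: create the accumulator before the loop, append each page’s results inside the loop, and write one file after the loop ends. The pattern, sketched here in JavaScript with the page-fetching function as a stand-in for the real scraping call:

```javascript
// Accumulate every page's rows into one array, then serialize once.
function scrapeAllPages(fetchPage, pageCount) {
  const allRows = [];               // created BEFORE the loop
  for (let page = 1; page <= pageCount; page++) {
    const rows = fetchPage(page);   // stand-in for the real scraping call
    allRows.push(...rows);          // append, never overwrite
  }
  return allRows;                   // write a single file from this
}
```

In Python the equivalent is a list initialized before the while loop, extended with each page’s data, and dumped to disk once after the loop.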