api design – Auto-refreshing web resources – JavaScript SPA best practices

I have a resource that is fetched via the browser fetch API and displayed to users in an SPA. The details of individual items in the resource list, as well as the order and membership of the list, change often.

I will give users the ability to refresh manually, but I'm curious whether anyone has experience designing good auto-refresh behavior and can share a list of gotchas / best practices.

For example, the user probably doesn't need updates to the resource if the tab is not currently in focus or if they are AFK.
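To make that concrete, here's a minimal sketch of what I have in mind (the /api/items endpoint and renderItems() function are placeholders): poll on an interval, skip fetches while the tab is hidden, and refresh immediately when the tab regains focus.

```typescript
const REFRESH_INTERVAL_MS = 30_000;

function renderItems(items: unknown[]): void {
  // Placeholder renderer; the real app's rendering logic would go here.
  console.log("rendered", items.length, "items");
}

async function refreshItems(): Promise<void> {
  // Skip the network round trip entirely while the tab is hidden.
  if (document.visibilityState === "hidden") return;
  try {
    const response = await fetch("/api/items");
    if (!response.ok) return; // keep showing the last good data on errors
    renderItems(await response.json());
  } catch {
    // Network hiccup: leave the current view alone and try again next tick.
  }
}

// Poll while the page is open...
setInterval(() => void refreshItems(), REFRESH_INTERVAL_MS);

// ...and refresh immediately when the user comes back to the tab.
document.addEventListener("visibilitychange", () => {
  if (document.visibilityState === "visible") void refreshItems();
});
```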

Architecture – Separation of concerns and other best practices between controllers, services, providers and storage in ASP.NET when building a REST web API

I've traditionally been a desktop app developer, but circumstances have put me in charge of the web client and the corresponding REST API logic for a project I'm involved in. Unfortunately I'm a one-man show, so my opportunities to learn new patterns or techniques from coworkers are somewhat limited. When I started out, I had the chance (briefly) to work with a contractor who gave me the idea that my server-side REST logic should be separated into a controller (where the actual GET / PUT / POST / DELETE methods live) and a service that does the heavy lifting. As I was told, the service could in turn interact with one or more providers or stores.

My understanding is that a provider would contain logic that interacts with another system, perhaps a different web API, an odd piece of legacy code, or proprietary hardware (such as a temperature gauge). A store, on the other hand, would encapsulate the CRUD logic for actual data objects in SQL, NoSQL, text files, etc.

Assuming all of this makes sense and is what the professionals do, he advised me to work these terms into my class names, so that:

a PizzaController would hand the incoming web API calls off to the PizzaService, which in turn could talk to both the PizzaProvider and the RefridgeratorStore.

I'm not 100% sure that this is how the real world actually does it – but it sounded believable to me, I've generally adopted the pattern, and so far it has worked well enough for organizing my logic.
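To make that concrete, here's roughly how I picture the layering – a minimal sketch (TypeScript only for illustration, since the pattern itself is language-agnostic; the method names fetchSpecials, listIngredients and getMenu are made up):

```typescript
class PizzaProvider {
  // Talks to an external system (another web API, legacy code, hardware...).
  async fetchSpecials(): Promise<string[]> {
    return ["margherita", "quattro formaggi"];
  }
}

class RefridgeratorStore {
  // Encapsulates CRUD against the actual data storage (SQL, NoSQL, files...).
  async listIngredients(): Promise<string[]> {
    return ["mozzarella", "tomato", "basil"];
  }
}

class PizzaService {
  constructor(
    private provider = new PizzaProvider(),
    private store = new RefridgeratorStore(),
  ) {}

  // The heavy lifting lives here, composed from providers and stores.
  async getMenu(): Promise<{ specials: string[]; ingredients: string[] }> {
    const [specials, ingredients] = await Promise.all([
      this.provider.fetchSpecials(),
      this.store.listIngredients(),
    ]);
    return { specials, ingredients };
  }
}

class PizzaController {
  constructor(private service = new PizzaService()) {}

  // The GET/PUT/POST/DELETE handlers only translate HTTP to service calls.
  async get(): Promise<{ specials: string[]; ingredients: string[] }> {
    return this.service.getMenu();
  }
}
```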

But here are a few questions:

First, is this view of class separation really how others structure their code? And if I'm close, but not quite there, what corrections should I make?

Second, is it legitimate for one service to instantiate and use a second service? For example, what if my PizzaService has to decide whether we want delivery or want to make a pizza from scratch – maybe it wants to either call up the PizzaProvider -or- hand off to a PizzaMakerService. If the PizzaService doesn't make that decision, then the decision logic would have to live earlier in the food chain (no pun intended). That would suggest my PizzaController would have to decide whether to use the PizzaService -or- the PizzaMakerService, and that doesn't smell right to me.

And finally (following the pattern I was shown), my services frequently return a data object to my controller, where the controller assigns one or more properties to a ViewModel that is returned to my client. I have found that I can just as easily assign the relevant data bits to an anonymous object (C#) on the fly and return that to my client. The JSON returned is the same. So why introduce a class definition for a ViewModel at all? Is there a taboo against building an anonymous object in the controller and returning it?
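The difference reads roughly like this to me (sketched in TypeScript since the point is language-agnostic; the Pizza entity is made up): a named view model spells the response shape out once, while the ad-hoc object produces the same JSON but leaves that shape implicit wherever it is built.

```typescript
// A named view model makes the response contract explicit and reusable...
interface PizzaViewModel {
  name: string;
  price: number;
}

interface Pizza { // made-up domain entity returned by the service
  name: string;
  price: number;
  internalCostBasis: number; // not something we want to expose to the client
}

function toViewModel(pizza: Pizza): PizzaViewModel {
  return { name: pizza.name, price: pizza.price };
}

// ...whereas an ad-hoc object serializes to the same JSON but the shape is
// only implied by whatever the controller happens to assemble.
function toAdHoc(pizza: Pizza) {
  return { name: pizza.name, price: pizza.price };
}
```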

I realize that (in my situation) I can do pretty much anything I want – how I name classes, how I separate logic, when I use anonymous objects – it's really all my code. But these questions have been on my mind for some time, and I would like to do things as close to "right" as possible. It's likely that these questions (or a variation of them) have been asked and answered before, so I apologize for any overlap – but for the life of me I can't find any direct answers.

Thank you so much!

Programming Practices – Good etiquette for 2 optional arguments that cannot both be used

For clarity, I would define two functions:

norm_pdf_from_variance(x, mu, variance) and

norm_pdf_from_precision(x, mu, precision).

Both could call a common "private" function to do the calculation.

The alternative is a single value v and a boolean isPrecision, but that's a bad and confusing idea. Go for clarity.
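A minimal sketch of that layout, keeping the names above (TypeScript here only for illustration, and precision is taken to mean 1/variance):

```typescript
// Shared "private" helper: the normal pdf parameterized by variance.
function norm_pdf(x: number, mu: number, variance: number): number {
  const coefficient = 1 / Math.sqrt(2 * Math.PI * variance);
  const exponent = -((x - mu) ** 2) / (2 * variance);
  return coefficient * Math.exp(exponent);
}

function norm_pdf_from_variance(x: number, mu: number, variance: number): number {
  return norm_pdf(x, mu, variance);
}

function norm_pdf_from_precision(x: number, mu: number, precision: number): number {
  // Precision is assumed here to be the reciprocal of the variance.
  return norm_pdf(x, mu, 1 / precision);
}

// e.g. norm_pdf_from_precision(0, 0, 1) === norm_pdf_from_variance(0, 0, 1)
```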

postgresql – Best practices for using migrations in multi-tenant applications in .NET Core

I am developing a multi-tenant application with .NET Core and PostgreSQL using code first, in which each client has their own database created from an admin dashboard. Assuming I have 6 clients, I have 6 databases. How can I properly apply migrations to these 6 databases when I change an entity?
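To illustrate what I mean, here is a rough sketch of the loop I imagine (TypeScript only for illustration; applyMigrations is a made-up stand-in for the real migration runner, and the connection strings are placeholders):

```typescript
// Placeholder list of tenant connection strings, e.g. loaded from the
// catalog database that the admin dashboard writes to.
const tenantConnectionStrings: string[] = [
  "Host=db;Database=tenant_a;Username=app;Password=...",
  "Host=db;Database=tenant_b;Username=app;Password=...",
];

// Made-up stand-in for whatever actually applies pending migrations
// against a single database.
async function applyMigrations(connectionString: string): Promise<void> {
  console.log(`applying pending migrations to ${connectionString}`);
}

async function migrateAllTenants(): Promise<void> {
  for (const connectionString of tenantConnectionStrings) {
    // Apply the same set of pending migrations to every tenant database.
    await applyMigrations(connectionString);
  }
}

// e.g. run migrateAllTenants() as part of deploying an entity change.
```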

Summation – Local Variables in Sum and Table – Best Practices?

I stumbled upon local variables when defining a function in Mathematica over on math.SE and decided to ask about it here. Sorry if it's a duplicate – the only really relevant question with a detailed answer I could find here is "How do I avoid nested With[]?", but I find it somewhat too technical and essentially not the same question.

In short, definitions like f[n_]:=Sum[Binomial[n,k],{k,0,n}] are very dangerous, since you never know when you will use them with a symbolic k: say, f[k-1] evaluates to 0. That was actually a big surprise for me: for some reason, I thought that summation variables and the dummy variables in constructs like Table were automatically localized!

As explained in the answers there, it is not entirely clear what to use here: Module is perfectly fine, but would share variables across stacked frames. Block doesn't solve the problem. There were also suggestions to use Unique or formal symbols.

What is the optimal solution? Is there a way to somehow automatically localize dummy variables?

Patterns and practices – microservices with varying resource requirements

I'm building a system that consists of a few microservices on AWS.
I've run into the need for a particular MS to do the same logical work across a wide range of workload sizes.
For example, one workload might need 2 vCPU and 4 GB RAM, while another would need 4 vCPU and 16 GB RAM.
What is the best course of action here? Make all instances large enough to cover the biggest workload? Run 2 ECS services, one with 2 vCPU and one with 4 vCPU, each fed by its own SQS queue, to handle the workload?
Or is there another sensible option here?

Design – Best practices in dealing with authentication and validation with ReactJS

I've been using the plain MEN stack, but I recently learned ReactJS. Now I'm just trying to figure out how to connect my frontend to my backend, and I have a couple of problems.

The current problem concerns authentication. In my old MEN applications I just use PassportJS (with passport-local) and everything simply worked. When a user makes a request, I can just do req.user.id and easily see which user is making the POST request. With ReactJS, I assumed I had to do something different (even if I didn't, I still wanted to see what other options people were using). I remembered that I had a Udemy course called "The Advanced Web Developer Bootcamp" by Colt and a few others, so I went to the final project, where they built a full-stack app, and looked at the key parts to see how they did it.

This is how they built the app. They used JWT on the backend. When a user signs up / logs in, a JWT is returned to the user. On the backend they also created two middlewares: one named "isLoggedIn", which only checks whether the user is logged in, and another named "isAuthenticated", which checks whether the ID in the parameters matches the ID in the JWT (this way a user Bob cannot make POST requests on Steve's behalf even if Bob somehow knew Steve's user ID, since Bob would not know Steve's JWT). So far everything is great.

The problem

So it seems that they are storing the JWT in the Redux store, which doesn't seem great, because when a user leaves the page and comes back, doesn't that essentially mean they have been logged out? Another popular option is to use localStorage, but I've read a bit and it doesn't seem like a good idea, since any script on the page has access to localStorage, which is a pretty big security issue.

So how do I do this? What is the best way to keep everything safe and relatively simple? The JWT backend side makes perfect sense to me – how we compare and validate the hashes and tokens – but the front end part seems a bit murky.

tldr: JWT was used for authentication in a full-stack app that uses the MEAN stack. The back-end part makes perfect sense: you hash the password, save it, compare it with what the user entered, return the JWT token, etc. The part I'm struggling with is how to keep the token safe on the front end, since I have to send the JWT back every time I make an API call. What is the best-practice way to do this?

PS: If there is a better / easier way, I'm all ears!
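For reference, the request side I have in mind looks roughly like this (a minimal sketch; where the token actually lives – memory, localStorage, or an httpOnly cookie set by the server – is exactly the open question):

```typescript
// Minimal sketch: attach the JWT to every API call.
// This only shows the request side; how/where the token is stored is the
// open question above.
let jwt: string | null = null; // set after a successful login response

export function setToken(token: string): void {
  jwt = token;
}

export async function apiFetch(path: string, init: RequestInit = {}): Promise<Response> {
  const headers = new Headers(init.headers);
  if (jwt) headers.set("Authorization", `Bearer ${jwt}`);
  return fetch(path, { ...init, headers });
}

// Usage: apiFetch("/api/posts", { method: "POST", body: JSON.stringify(data) })
```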

Copywriting – Best practices for application notifications

As a developer, I'm always confused about the three basic values of an application alert:

  1. Title
  2. Message
  3. Close button

What should go into these 3 values? Questions I frequently ask myself:

  • Should the title be a summary of what happened or where the message came from?
  • Should messages use exclamation marks?
  • Should messages be complete sentences (ending with a period)?
  • How much detail should the message contain?

So, as a single question: are there some ground rules for writing alert / notification messages? Bonus points if those ground rules answer my smaller questions above.

Please share your thoughts …

macbook pro – What are best practices for backup / restore for a mid-2015 MBP, and is my Time Machine restore, with an estimated 25 hours for a backup of roughly 150 GB, a typical experience?

I have a mid-2015 MBP (Mojave 10.14.6) with a 500 GB SSD.

What is the fastest device I can use to restore to a specific date using Time Machine?

I'm assuming I want something connected to the Thunderbolt port (20 Gbps). Should I buy an enclosure that supports SATA and Thunderbolt 2 and then buy a 1 TB drive that runs at 7200 rpm? Any specific recommendations? I can't seem to find a Thunderbolt 2 drive enclosure, although Thunderbolt 3 enclosures do seem to exist.

What are the best practices to make the process pain free?

If I know that recovery is scheduled, is it much faster to do a full backup and full recovery?

Can you arrange the data currently on the disk so that a Time Machine restore runs faster? I have a lot of source code on the drive, which means a lot of small files. I've read that lots of small files can slow performance. Maybe excluding the source code from the Time Machine backups would help dramatically?

My experience

I have done two time machine restores in my life.

The first was when I switched from my previous machine to this MBP. The process was pretty quick – the MBP was a fresh install, the SSD was brand new. My notes seem to say that my backup was 13 GB and it took 9 minutes to restore, including wiping the hard drive. Everything worked perfectly. I don't remember what media I used, but I'm sure it was a USB stick.

The second was a few days ago. My backup was now ~160 GB of data. I will be upgrading to a newer MBP next year, so I wanted to test Time Machine backups and my ability to restore in case my drive or MBP fails. I used a USB 3.1 Patriot 256 GB USB stick.

I went through the steps: booted with Cmd-R and inserted my USB stick. I chose the date from the Time Machine list and clicked Restore. I erased my drive as part of the process. The restore started well enough, the drive was wiped fairly quickly, and the first estimated restore time was 2 hours and 20 minutes. I watched it for a while and went to sleep. Six hours later I checked the progress and it said ~100 GB restored, with another 20 hours to go! Since I expected to use my machine that day, I was quite discouraged and anxious. This was clearly not the best plan.

I searched and read what I could find. I found an article saying there is a bug where inserting your USB key before entering the recovery partition (Cmd-R) can cause USB 1.0 speeds to be used. After thinking it over, I decided to stop and try again. I went through similar steps and watched the estimate keep getting longer. I tried quite a few more things, such as checking the hard drive and backup drive with Disk Utility, and at one point canceled while I had no recovery partition (great times). Fortunately, the Mac can boot using Internet Recovery (which is pretty awesome). After playing around a lot more and waiting about 15 minutes to boot from the internet, I decided I had no choice but to wait for the Time Machine restore to finish.

I watched the progress over the course of the day as the estimate changed:

  • 2 hours (9 a.m.)
  • 4 hours (9:07)
  • 11 hours (9:24)
  • 12 hours (9:52)
  • 18 hours (10:25)
  • 20 hours (10:56)
  • 22 hours (11:05)
  • 25 hours (noon)

The amount of data restored increased slowly. Around 2 p.m. the estimate began to decrease and showed 17 hours. At 5 p.m. it was back up to 22 hours. Quite the experience.

I had plugged in my MBP as this would likely take some time, but the screen would go black for energy reasons. I would press the spacebar to check the progress.

  • 21 hours 5 minutes (~ 6 p.m.)

At 5:50 p.m. it showed 140.44 GB and 21 hours and 5 minutes. The progress bar showed a progress line for 140.44 out of 500 GB.

I came back a little later to check. The screen was black, so I hit the spacebar. Nothing happened. I looked at the flash drive, which was still blinking red. I hit the spacebar a couple more times – nothing. I walked away and thought about my options – was it frozen? Should I turn it off? I looked for answers on the internet. A new term entered my vocabulary – the black screen of death. While searching for answers, I heard the Mac restart – some hope appeared.

I walked over to the computer – half expecting that the startup chime had been an auditory hallucination. The computer had restarted and the login prompt was displayed! Bam, 21 hours finished in 10 minutes. Given that my backup was around 150 GB, the 140 was probably 140 out of 150 rather than out of 500, and Time Machine isn't smart enough to figure that out.

I entered my password and waited for the login – the spinner appeared. I waited, it spun, I waited more, it spun some more. I went away and came back 15 minutes later; it was still spinning. Great times.

I searched the internet and read about using single-user mode to create a new administrator user. I created my new administrator, went through the setup screens, and was able to log in as the new administrator. The article mentioned deleting the damaged user, but I decided to try logging in again instead. I logged out of the backup user and tried my original user. This time it worked and opened an iCloud authentication window (probably the reason it was spinning before).

After a marathon, my MBP was finally back where I needed it.

Is this a fairly typical Time Machine restore experience?

I'm not interested in running the marathon again without training to improve my user experience. Any general recommendations?

Do people use Time Machine for this type of restore, or is it really a last-resort tool?

This question – best practices for backup and reinstallation – seems to cover some recommendations as of 2011. I assume things have changed somewhat since then.

PS: If you think this question is long, you should try a Time Machine restore.
