amazon web services – How to ensure that the application is installed on an EC2 instance each time a new instance is created

If you provision a Spot EC2 instance, it can be stopped at any time and a new instance created in its place. How can I ensure that the new instance contains all the configuration and applications I have installed? Do I need to use other AWS services to achieve this?

Data – How to determine whether the services are actually used by a customer

In this case, the creation date of the records in the database can be a hint: the newer the entry, the higher the probability that it is still current. There may also be a field indicating how frequently the services are used, or whether they have been used by anyone at all.
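If such fields exist, a simple query can surface them. A minimal sketch, assuming a hypothetical schema with a users table and a service_events table that logs each use (all names are illustrative only):

    -- Hypothetical schema: users(id, created_at), service_events(user_id, used_at).
    -- Flag customers whose most recent use of the services is within the last 90 days.
    SELECT u.id,
           u.created_at,
           MAX(e.used_at)                               AS last_used,
           COUNT(e.used_at)                             AS usage_count,
           MAX(e.used_at) >= now() - INTERVAL '90 days' AS likely_active
    FROM users u
    LEFT JOIN service_events e ON e.user_id = u.id
    GROUP BY u.id, u.created_at;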

People often sign up without actually becoming consumers at the same time. It is worth quantifying how large this group is.

Sending e-mails can also be an option, in addition to making premium offers.

Have you thought about qualitative research?

It will give you initial insights and an overview of the relevant topics, which you can then confirm quantitatively.

postgresql – Ensure uniqueness of the values in a bigint array created by merging two bigint arrays

What is the most efficient way to get only the unique values in a bigint array created by merging two other bigint arrays?
For example, the operation select ARRAY[1,2] || ARRAY[2, 3] should give 1,2,3 as a result. I checked the intarray extension and saw that it does not work with bigint.
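One possible approach (just a sketch, not necessarily the most efficient) is to unnest the concatenated array and re-aggregate only the distinct values; unlike intarray, this works for bigint[]:

    -- Merge two bigint arrays and keep each value only once.
    SELECT ARRAY(
        SELECT DISTINCT x
        FROM unnest(ARRAY[1,2]::bigint[] || ARRAY[2,3]::bigint[]) AS t(x)
        ORDER BY x
    ) AS merged;
    -- => {1,2,3}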

SEO – Will a sitemap ensure that pages served by AJAX requests are crawled?

I have a website, started just 2 weeks ago, where I publish articles. I'd like to keep the pages as clean as possible and load more content (links to other articles) via AJAX requests triggered by user action (for now, clicks). I have read up a bit; most of the articles and blog posts on this topic are outdated. I understand that Google used to support crawling AJAX content, but no longer does. Some posts also recommend serving the content through pagination instead. I also read about sitemaps, and I know they give search engine crawlers an indication of which pages to crawl.

However, will crawlers miss these links because they are only reachable by clicking the Load More button? Does a sitemap make sure that crawlers visit those URLs?

java – Ensure thread execution order when write operations must be performed before reads

I am currently designing a server with the following structure:

  • A TCP thread pool that receives data from the network
  • A queue that contains these requests
  • A fixed-size worker thread pool that takes requests from the queue and performs some work: a ConcurrentHashMap is read and the results are analyzed. After the work completes, the results are placed in a write queue.
  • A write queue that stores write requests for the ConcurrentHashMap
  • A fixed-size writer thread pool that takes requests from the write queue and applies them to the ConcurrentHashMap

Server restrictions:

  • For a specific entry in the ConcurrentHashMap, a write request arrives over the network several seconds before the read request. The threads must never allow the read to be scheduled before the write has occurred.

My previous ideas:

  • First of all, I hope that the three to five seconds are enough for the write to finish before the read
  • Give the writer threads high priority and the reader threads low priority
  • A read can check the timestamp of the last write. If that write is old enough (say, a few minutes or hours ago), the read can ignore it as outdated, but then it knows that it cannot make an informed decision. I could have the thread re-run the read when an entry looks outdated, but that feels awkward.
  • The write queue was added so that the TCP pool can put write requests directly into the write queue (they do not have to be processed by the worker pool) instead of into the job queue, while read requests go into the worker queue and, after the worker pool has processed them, some log information is later put into the write queue.

Are there ways to make it less likely that a read occurs before its write, or are the precautions I've taken sufficient? Should I use the re-read mechanism?

Is a write queue really needed, or can I use worker threads more generally to handle writes as well? Write order is not important.

SQL Server – How can I ensure database connectivity on a spotty network? (and allow completely offline operation)

My place of work has a pretty spotty network … we have fairly regular (though usually short) failures.

Not so long ago, my team logged application data on a SQL Server owned and managed by IT (I'm a non-IT programmer, so I have no control over what they do). The applications on the network, and especially those that used the SQL Servers, have been modified to point to a MySQL instance. The data stored in MySQL is then periodically pushed to SQL Server whenever connectivity can be ensured.

This has significantly improved reliability, but there are still some annoying errors due to intermittent network issues …

Due to poor network conditions and the desire to run software completely offline, I was asked to investigate the local database options …

We are currently using MySQL … I do not know of a local MySQL option that is easy to configure. I also do not want to be managing hundreds of MySQL instances, but the data we record locally will eventually be migrated to either a MySQL database or SQL Server.

What options are available for a very lightweight / easy-to-configure database? What about options for easy migration of this data without much effort?

To be honest, the only thing that comes to mind is to have a bunch of local databases (possibly something like SQLite) and to run scripts at a certain frequency that read from those local databases and send the data over the network to a database server, which seems like a lot of work …
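If that route were taken, the local side could stay very small. A rough sketch, assuming SQLite and an entirely hypothetical log_entries table, with a periodic job doing the transfer:

    -- Hypothetical local staging table (SQLite). A periodic job pushes rows to
    -- the central server and flags them once the transfer has been acknowledged.
    CREATE TABLE IF NOT EXISTS log_entries (
        id          INTEGER PRIMARY KEY,
        recorded_at TEXT    NOT NULL,
        payload     TEXT    NOT NULL,
        synced      INTEGER NOT NULL DEFAULT 0
    );

    -- The sync job reads the unsent rows and sends them over the network ...
    SELECT id, recorded_at, payload FROM log_entries WHERE synced = 0;

    -- ... and marks them only after the server has acknowledged the batch.
    UPDATE log_entries SET synced = 1 WHERE synced = 0;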

I'm afraid the whole setup feels pretty hacky, and I personally think that the responsibility for data storage should lie with the application itself (the application should detect that the network has failed and wait until it can resend the data). I also think there should be more pressure on IT to improve the reliability of the network and of SQL Server.

java – Ensure the uniqueness of a 10-character alphanumeric string

I have been asked to develop a service that needs to generate millions of random, unique alphanumeric strings of length 10 every day (I cannot increase this length; it is a customer requirement). Given one ID, the next ID must not be guessable.

I want to make sure that all IDs are unique, as these IDs will later be used in a database (not my own) and will represent the ID of a product, so each one MUST be unique. I know that the probability of a collision is low, but I want to rule it out.

I have no problem generating a random string, but I want to make sure that all generated IDs are really unique.
I've been thinking about using a local SQL database and inserting a few million IDs a day into it (which ensures uniqueness because the ID would be the primary key). I would then fetch these IDs, mark them as "processed" and send them out.

To improve insertion performance, I have considered having a table for each year (one partition?).
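A minimal sketch of what the primary-key approach could look like, assuming PostgreSQL and a hypothetical generated_ids table (all names are illustrative only):

    -- The ID itself is the primary key, so a duplicate can never be stored.
    CREATE TABLE generated_ids (
        id         CHAR(10)    PRIMARY KEY,
        created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
        processed  BOOLEAN     NOT NULL DEFAULT false
    );

    -- When inserting a batch, colliding IDs are silently skipped, so the
    -- generator only needs to top up the shortfall instead of failing the batch.
    INSERT INTO generated_ids (id)
    VALUES ('A1b2C3d4E5'), ('F6g7H8i9J0')
    ON CONFLICT (id) DO NOTHING;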

Do you think this is a good solution to ensure uniqueness?
Would you have another solution that might be better than an SQL database?

Many thanks.

Tips and tricks – How to ensure the safety of tripod and camera when shooting with a remote control

I will be travelling to Paris alone in December for business. I often use a tripod to take artistic shots while traveling, but I have always stayed right next to it while photographing other subjects. This time I was thinking about taking a few pictures of myself in front of the camera with a timer / remote control. (Example of the types of photos I will take)

I was wondering if anyone has tips for keeping the camera and equipment safe during such shots. I will try to shoot as much as possible at sunrise to minimize the number of tourists around, but I'm still worried about theft and the like, especially in Paris.

Signature – How does Bitcoin ensure that no one can duplicate a transaction?

But what prevents someone, such as the recipient of the original transaction, from simply sending the same exact transaction with the same data and signature?

Bitcoin has no account balances; instead it works with the concept of unspent transaction outputs (UTXOs). Each output of a transaction (except OP_RETURN ones) leads to the creation of a separate UTXO. When you create a transaction in Bitcoin, you consume these UTXOs completely and create new ones. This is done by referring to the outpoint (txid and n) from which the UTXO originates. For example, if you control 2 UTXOs (1 BTC and 0.5 BTC) and you want to send 1.25 BTC to your friend, you must consume both UTXOs and send the remaining 0.25 BTC back to yourself as change (ignoring fees). When the recipient of a transaction output tries to broadcast the same transaction again, the Bitcoin nodes determine, while verifying the transaction, that those UTXOs no longer exist, and the transaction is therefore invalid.

When full nodes start syncing from the genesis block, they build up a database of all of these UTXOs. Each transaction removes the UTXOs it spends and adds new ones. This database is stored in the chainstate directory and aggressively cached in memory.
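Purely as an illustration (Bitcoin Core actually keeps this set in a LevelDB key-value store, not in SQL), the bookkeeping can be pictured as a table keyed by outpoint:

    -- Illustration only: the UTXO set modelled as a table keyed by outpoint.
    CREATE TABLE utxo_set (
        txid   CHAR(64) NOT NULL,   -- transaction that created the output
        vout   INTEGER  NOT NULL,   -- output index ("n") within that transaction
        amount BIGINT   NOT NULL,   -- value in satoshis
        PRIMARY KEY (txid, vout)    -- each outpoint can exist at most once
    );

    -- Spending an output removes its row. If the same transaction is broadcast
    -- a second time, the referenced rows are already gone, so the spend is
    -- rejected as invalid.
    DELETE FROM utxo_set WHERE txid = 'f4184f...' AND vout = 0;  -- placeholder txid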