linux – Methods for tracking processing time for long running ADD INDEX call in MySQL

I’ve kicked off index creation on a very large table in MySQL, and while I expected it to take a long time, I’m now 5 days in and wondering whether there’s any way to debug potential issues or whether I should simply let it run. I don’t have a precise row count, but to estimate, it’s in the hundreds of billions of rows and the table is ~400 GB on disk. Neither memory nor CPU usage appears to be heavily taxed (memory ~8 GB out of 16 GB total).

The call I made from within MySQL is as follows:

alter table prices add index(dataDate, ticker, expDate, type), add index(symbol), algorithm=inplace, lock=none;

Running show processlist from a different MySQL session shows the call with State ‘altering table’, so the call doesn’t appear blocked. Is there anything else I can check to gauge progress?

For reference, I’m working with MySQL 8 on Ubuntu 18.04.
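MySQL 8 can report ALTER TABLE progress through the Performance Schema, provided the InnoDB stage instruments and stage consumers are enabled (ideally before the ALTER starts; enabling them mid-run may miss stages already underway). A sketch of the queries involved:

```sql
-- Enable the InnoDB ALTER TABLE stage instruments and stage consumers
-- (best done before kicking off the ALTER).
UPDATE performance_schema.setup_instruments
   SET ENABLED = 'YES', TIMED = 'YES'
 WHERE NAME LIKE 'stage/innodb/alter%';

UPDATE performance_schema.setup_consumers
   SET ENABLED = 'YES'
 WHERE NAME LIKE '%stages%';

-- Then poll progress while the ALTER runs:
SELECT EVENT_NAME, WORK_COMPLETED, WORK_ESTIMATED
  FROM performance_schema.events_stages_current;
```

Note that WORK_ESTIMATED is revised as the operation proceeds, so the ratio is a rough gauge rather than an exact ETA.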

machine learning – How to extract an open research question from text with Natural Language Processing?

Researchers often broadly state one or several research problems as an “open research question” or “open research problem”, a “research gap” or “desideratum”, or they make “suggestions for further research”. The same terms are used across disciplines. Some disciplines refer to categories of research gaps, such as an “evidence gap” (commonly used in medical research). There are probably fewer than a dozen terms used to describe the same thing.

As long as you are dealing with quantitative research (from the natural to the social sciences), these open research questions are mostly posed, or at least repeated, in the “conclusion”, “future research” or “discussion” section of an article. A very limited number of phrases are used to introduce a desideratum, such as “further research… (is needed to/will show etc.)…” or “it remains to be seen, if…”. One could prepare a fairly comprehensive list of such phrases rather quickly. Often these phrases also mark the beginning of the statement, which should be helpful.

In other words, you know which keywords and phrases to look for, and their position in the document gives you a measure of relevance. Also, many open research problems are phrased directly as questions.
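As a minimal sketch of that phrase-and-position approach (the cue list and the naive sentence splitter here are illustrative placeholders, not a vetted phrase inventory):

```python
import re

# Hypothetical cue phrases; a real list would be curated per discipline.
GAP_CUES = [
    r"further research (?:is needed|will show|should)",
    r"it remains to be seen",
    r"an open (?:question|problem)",
    r"research gap",
    r"future work",
]
CUE_RE = re.compile("|".join(GAP_CUES), re.IGNORECASE)

def find_gap_sentences(text):
    """Return (position, sentence) pairs for sentences containing a
    research-gap cue; position runs from 0.0 (start) to 1.0 (end)."""
    # Naive sentence split; a production pipeline would use spaCy or NLTK.
    sentences = re.split(r"(?<=[.?!])\s+", text)
    hits = []
    for i, sent in enumerate(sentences):
        if CUE_RE.search(sent):
            position = i / max(len(sentences) - 1, 1)
            hits.append((position, sent.strip()))
    return hits

sample = ("We evaluated the model. Results were strong. "
          "However, it remains to be seen if this transfers to other domains.")
print(find_gap_sentences(sample))
```

A position near 1.0 means the sentence sits at the end of the text, which correlates with the “conclusion”/“discussion” sections where such statements cluster.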

There has to be an element of human curation (aided by further NLP analysis), since it will be difficult to extract the exact statement, and the statement has to be validated as non-redundant and preferably categorized (where applicable, as an “evidence gap”, “method gap”, “sample gap”, etc.).

Assume you have a database of full-text articles: most pre-print servers give full access to content and metadata, while others give free access to metadata and scientometric data. What methods, algorithms and software solutions (open source or SaaS) would you use or test?

Links:
APIs of science repositories and the access they give:

https://guides.lib.berkeley.edu/information-studies/apis

unity – can no longer drag post processing profile into CenterEyeAnchor in OVRCameraRig -> TrackingSpace, is there an alternative step?

The video https://youtu.be/gh4k0Q1Pl7E?t=120 shows a step where I need to drag a newly created post-processing profile onto the CenterEyeAnchor. I’m aware this video demos an older version of the Post Processing Stack package, so the profile can no longer be added as a component that way. Is there an alternative way to achieve the same step, or is it no longer necessary?
I’m very new to Unity.

I am using

Unity 2019.3.14
Post Processing 2.3.0

winforms – C# Windows.Forms.Timer synchronous Task Processing with multiple Timers

In the test scenario below, I’d like to trigger some tasks using multiple timers. One event can trigger another event.
An event must finish processing before a new one can start. Events triggered while another event is processing should queue up and start once nothing is processing. The timers don’t need to be accurate.

The current problem with the code below is that things get mixed up: Line2 starts processing while Line1 still hasn’t finished. How do I make the jobs queue up and process in order?

The console output of this code is:
Line1 Processing
Line1 CompletedLine2 Processing
Line2 CompletedLine1 Processing
……..

What I want is:
Line1 Processing Line1 Completed
Line2 Processing Line2 Completed
……..
I’m a beginner, so please be patient 🙂

public partial class Form1 : Form
    {
        readonly System.Windows.Forms.Timer myTimer1 = new System.Windows.Forms.Timer();
        readonly System.Windows.Forms.Timer myTimer2 = new System.Windows.Forms.Timer();

        int leadTime1 = 100;
        int leadTime2 = 100;

        public Form1()
        {
            InitializeComponent();
            TaskStarter();

        }

        private void TaskStarter()
        {
            myTimer1.Tick += new EventHandler(myEventTimer1);
            myTimer1.Tick += new EventHandler(myEventTimer2);

            myTimer1.Interval = leadTime1;
            myTimer2.Interval = leadTime2;

            myTimer1.Start();
        }

        private void myEventTimer1(object source, EventArgs e)
        {
            myTimer1.Stop();
            Console.WriteLine("Line1 Processing ");
            MyTask();
            Console.Write(" Line1 Completed");
            myTimer2.Start();
            myTimer1.Enabled = true;
        }

        private void myEventTimer2(object source, EventArgs e)
        {
            myTimer2.Stop();
            Console.WriteLine("Line2 Processing ");
            MyTask();
            Console.Write(" Line2 Completed");
            myTimer2.Enabled = true;
        }

        private void MyTask()
        {
            Random rnd = new Random();
            int tleadtime = rnd.Next(1000, 5000);
            Thread.Sleep(tleadtime);
        }
    }
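The ordering the asker wants can be produced by routing every tick into a single FIFO queue drained by one worker, so the timer handlers only enqueue work and never run it themselves. A sketch of that shape (shown in Python for brevity; in C# the analogous pieces would be a BlockingCollection plus one long-running consumer thread, and the names and timings below are illustrative):

```python
import queue
import threading
import time

jobs = queue.Queue()

def worker():
    # Single consumer: jobs run strictly one after another, FIFO.
    while True:
        job = jobs.get()
        if job is None:          # sentinel to shut the worker down
            break
        job()
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

log = []
def make_job(name):
    def job():
        log.append(f"{name} Processing")
        time.sleep(0.01)         # stand-in for MyTask()
        log.append(f"{name} Completed")
    return job

# Each timer tick would just enqueue; ticks never do the work themselves.
jobs.put(make_job("Line1"))
jobs.put(make_job("Line2"))
jobs.join()                      # wait until both jobs have finished
print(log)
# → ['Line1 Processing', 'Line1 Completed', 'Line2 Processing', 'Line2 Completed']
```

Because there is exactly one consumer, a "Line2" job can never start while a "Line1" job is still running, regardless of how the timers interleave.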

Is there a search engine with reverse search mechanism: processing multiple queries and retrieving a single result for each of them?

A regular search engine retrieves a number of results matching a single query. I need one that does the opposite: it lets you input a number of queries and displays a single match for each of them. If there is no match, I need to know that too. An implementation of this approach exists, but its application is limited to the database embedding it (PubMed). The Batch Citation Matcher lets a user identify unique content in databases covering material also present in PubMed: the user submits a text file, the engine turns the file’s content (separated in a required way) into a number of queries, and then displays a table with the queries in one column and, in another, whether each query’s text was found in the database being compared against PubMed. Is there an analogous application that can be applied to any website?

Best Payment Processing Company – Payment processors

Are you looking for the best payment processing company for your business? What do you want in your merchant account? We have payment processing solutions for all businesses and provide everything you need: free setup, no hidden fees, an e-verification system, live chat and email support, a dedicated dashboard, and no bank visit. Apply now!

Architectural Patterns – Batch Processing: Workload Distribution Solutions

I'm working on a product that is a multi-tenant cloud solution. When it comes to repeatable batch processing, we have a set pattern.

  1. We configure a job so that it wakes up at regular intervals and executes its logic. On each run it iterates over all clients in succession, reads the information about each client (stored in a database) and executes the business logic.
  2. More than one instance of a particular job runs (on different nodes / servers) to ensure fault tolerance. An instance of a job acquires a lock on a database table row that is statically assigned to the job; by "acquire" I mean that the job marks a column on that row as claimed. This ensures that only one of the multiple instances of the same job gets the lock and processes data, so no tenant is processed twice.
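The atomic row-claiming step described in point 2 is commonly implemented as a single conditional UPDATE; the table and column names below are hypothetical:

```sql
-- Hypothetical schema: job_locks(job_name, owner, claimed_at).
-- Only one instance's UPDATE can match the WHERE clause, so exactly
-- one node wins the claim; the timestamp guard lets a stale lock
-- (from a crashed node) expire and be re-claimed later.
UPDATE job_locks
   SET owner = 'node-42', claimed_at = NOW()
 WHERE job_name = 'nightly-billing'
   AND (owner IS NULL OR claimed_at < NOW() - INTERVAL 10 MINUTE);
-- Run the job only if the affected-row count is 1.
```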

Lately we've seen the need to redesign this to handle larger volumes and general scalability issues.

We want multiple instances of a job to work on mutually disjoint partitions of the workload, so that we can make better use of our resources and also increase our throughput.

Are there known patterns / technologies for this?

Micro-payment channels – Processing of time-locked contracts based on the time elapsed in the Lightning network

Assume that in a payment route A -> B -> C, A wants to pay 0.01 BTC to C. A locks 0.01 BTC with B in an HTLC with a timeout period of, for example, 2 days, and B locks 0.01 BTC with C in an HTLC with a timeout period of 1 day. Can something be arranged so that if, for example, C delays disclosing the secret (preimage) by half a day, it receives only 0.005 BTC and 0.005 BTC is returned to B? Can the contract support such dynamic payment settlement based on the time lost? In the worst case, if C does not react at all, B gets back the full 0.01 BTC.

Processing – is Kodak Flexicolor SM (C-41SM) Tank Final Rinse a good stabilizer?

It will work fine! Initially, the C-41 film stabilizer was a surfactant (Photo-Flo) plus formaldehyde. The formaldehyde acted as a biocide, protecting the film from mold and other beasts. Second, the formaldehyde formed peptide bonds. Film uses a gelatin binder to glue the photosensitive materials and dyes to the film base. Gelatin is an organic substance made from animal bones and connective tissue, and as such it is food for bacteria. The dyes are also organic; they are oily globules. Under the microscope, gelatin resembles spaghetti, and the peptide bonds staple the spaghetti strands in place where they overlap. This holds the dyes in position and prevents them from migrating. Over time, formaldehyde was flagged as carcinogenic, and color films were modified so that a biocide plus a surfactant alone does the trick. A modern C-41 final rinse does exactly this: it contains the surfactant and a mild biocide similar to the one found in antibacterial hand soap.