Specific thread limit settings for SER

Hope everyone is having a good weekend so far!
I was wondering… SER is smart in that it scales back the thread count so it doesn't freeze.
Would it be useful to let users set multiple thread limits?
I notice that re-verification uses very few resources, while scraping and posting are heavier.
Would giving verifying/re-testing, e-mail checking, scraping, and posting each their own thread limit be good? Or a mess? lol

mysql – Why is it slow: "SELECT * … ORDER BY id LIMIT 50000, 2"

car_trims is an InnoDB table with ~40 columns, an average row length of 230 bytes, and no columns of type TEXT or BLOB.

car_trims.id is PK

Query 1: 0.0007 seconds

SELECT * FROM car_trims ORDER BY id LIMIT 2

Query 2: 0.023 seconds

SELECT id FROM car_trims ORDER BY id LIMIT 50000, 2

Query 3: 0.09 seconds

SELECT * FROM car_trims ORDER BY id LIMIT 50000, 2

First of all, I don't understand why query 2 is this slow, though it's still reasonably acceptable. What I really don't understand is why query 3 takes almost 100 ms just to fetch the row data by primary key.

As I understand it, the database should take the PK from memory, use it to locate the row on disk, and read it out, so query 3 should not take much longer than query 1.
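For the offset cost in particular, a common workaround is a "deferred join" (a sketch only, using the table from the question; it exploits the fact that query 2 above is already several times faster than query 3): locate the two target ids with the cheap id-only scan first, then fetch the full rows for just those two ids.

```sql
-- Find the two ids with the cheap ORDER BY id ... LIMIT scan,
-- then read the full row data for only those two ids.
SELECT t.*
FROM car_trims AS t
JOIN (SELECT id FROM car_trims ORDER BY id LIMIT 50000, 2) AS sub
  ON t.id = sub.id
ORDER BY t.id;
```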

EXPLAIN

id  select_type  table      partitions  type   possible_keys  key      key_len  ref   rows   filtered  Extra
1   SIMPLE       car_trims  NULL        index  NULL           PRIMARY  4        NULL  50002  100.00    NULL

my.cnf

[mysqld]
#
# * Basic Settings
#
user        = mysql
pid-file    = /var/run/mysqld/mysqld.pid
socket      = /var/run/mysqld/mysqld.sock
port        = 3306
basedir     = /usr
datadir     = /var/lib/mysql
tmpdir      = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
sql_mode = "NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
bind-address        = 127.0.0.1
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam-recover-options  = BACKUP
#
# * Query Cache Configuration
#
query_cache_type=0
#query_cache_limit  = 1M
#query_cache_size        = 16M
#
# * Logging and Replication
#
# Both location gets rotated by the cronjob.
# Be aware that this log type is a performance killer.
# As of 5.1 you can enable the log at runtime!
#general_log_file        = /var/log/mysql/mysql.log
#general_log             = 1
#
# Error log - should be very few entries.
#
log_error = /var/log/mysql/error.log
#
# Here you can see queries with especially long duration
#log_slow_queries   = /var/log/mysql/mysql-slow.log
#long_query_time = 2
#log-queries-not-using-indexes
#
# The following can be used as easy to replay backup logs or for replication.
# note: if you are setting up a replication slave, see README.Debian about
#       other settings you may need to change.
#server-id      = 1
#log_bin            = /var/log/mysql/mysql-bin.log
expire_logs_days    = 10
max_binlog_size   = 100M
#binlog_do_db       = include_database_name
#binlog_ignore_db   = include_database_name
#
# * InnoDB
#
# InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
# Read the manual for more InnoDB related options. There are many!
#
# * Security Features
#
# Read the manual, too, if you want chroot!
# chroot = /var/lib/mysql/
#
# For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
#
# ssl-ca=/etc/mysql/cacert.pem
# ssl-cert=/etc/mysql/server-cert.pem
# ssl-key=/etc/mysql/server-key.pem


# Custom Stuff
performance-schema=0
event_scheduler=ON
slow-query-log=1
long-query-time=1
max_user_connections=1000
max_connections=1100
table_open_cache=8192
key_buffer_size=64M #myisam table index buffer
max_connect_errors=20
max_allowed_packet=256M
sort_buffer_size=2M
read_buffer_size=2M
read_rnd_buffer_size=4M
myisam_sort_buffer_size=64M
max_heap_table_size=256M
tmp_table_size=256M
thread_cache_size=100

concurrent_insert=2
innodb_buffer_pool_size=1024M
innodb_buffer_pool_instances=8
innodb_flush_method=O_DIRECT
innodb_flush_log_at_trx_commit=2
innodb_log_file_size=32M            #see http://dev.mysql.com/doc/refman/5.0/en/adding-and-removing.html
innodb_old_blocks_time=1000
innodb_stats_on_metadata=off
innodb_log_buffer_size=16M
innodb_file_per_table=1
open_files_limit=10000

mariadb – Why is it not possible to use a "SELECT" result as a parameter for certain clauses like LIMIT?

EXAMPLE:

CREATE TABLE t (A INT);
INSERT INTO t VALUES(1),(2),(3),(4),(5);

This works:

SELECT A FROM t WHERE A=(SELECT 1)

This does not:

SELECT A FROM t LIMIT (SELECT 1);

This works:

SELECT A FROM t WHERE A=substring((SELECT 123),1,1);

This does not:

SELECT A FROM t WHERE A=1 PROCEDURE ANALYSE((SELECT 1),10000);

So some clauses and functions accept SELECT subqueries and others do not. I would have thought it would all work: (SELECT 1) returns a 1 as a parameter, and as long as the parameter has the expected type, it should be accepted. That seems to be true in some cases but not in others. Why is that?
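For LIMIT in particular, the usual workaround in MySQL/MariaDB is a user variable plus a prepared statement, since LIMIT does accept a placeholder there (a sketch against the table t above):

```sql
-- LIMIT cannot take a subquery, but it can take a placeholder
-- inside a prepared statement.
SET @n := (SELECT 1);
PREPARE stmt FROM 'SELECT A FROM t LIMIT ?';
EXECUTE stmt USING @n;
DEALLOCATE PREPARE stmt;
```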

Algorithms – Upper bound on the runtime complexity of LOOP programs

Recently I learned about LOOP programs, which always terminate and have the same computational power as the primitive recursive functions.
In addition, the primitive recursive functions (as far as I understand) can compute everything that does not grow faster than $\mathrm{Ack}(n)$.

Does this mean that the upper bound on the runtime complexity of LOOP programs is $O(\mathrm{Ack}(n))$? And are there functions, similar to Ackermann's, that cannot be computed by primitive recursive functions but grow more slowly than $\mathrm{Ack}(n)$?
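To make the growth rate concrete, here is a small Python sketch of the Ackermann–Péter function (a textbook definition, not tied to any particular LOOP formalism); it is total and computable, yet not primitive recursive:

```python
def ack(m, n):
    """Ackermann-Peter function: total and computable, but not
    primitive recursive -- it eventually outgrows every primitive
    recursive function."""
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

print(ack(2, 3))  # 9   (A(2, n) = 2n + 3)
print(ack(3, 3))  # 61  (A(3, n) = 2^(n+3) - 3)
```

Even tiny arguments explode: ack(4, 2) already has tens of thousands of digits, which is why a LOOP program simulating it needs nesting depth that grows with the input.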

(Sorry for spelling and grammar)

Character limit in sentiment analysis

I run the search and retrieve a set number of tweets, but I get truncated text ending in "…" instead of the whole tweet. I already tried setting tweet_mode='extended', but it did not work.
Another problem is that I pass a date as a reference and the search is not filtered by date.

import tweepy
from unidecode import unidecode

# api is assumed to be an already-authenticated tweepy.API instance.
# Fetching tweets; textPT must be a list ("()" creates a tuple, which has no append()).
textPT = []
for tweet in tweepy.Cursor(api.search, q="jairbolsonaro", tweet_mode='extended',
                           lang="pt", since='2019-10-10').items(100):
    textPT.append(unidecode(tweet.full_text))
print(textPT)
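One known gotcha with extended mode (a hedged sketch — `full_tweet_text` is a hypothetical helper name, not part of tweepy): retweets keep a truncated `full_text` even with `tweet_mode='extended'`, and the complete text lives on the nested `retweeted_status`.

```python
from types import SimpleNamespace

def full_tweet_text(status):
    """Return the complete text of a status fetched with
    tweet_mode='extended'. Retweets keep a truncated full_text;
    the untruncated text is on status.retweeted_status."""
    if hasattr(status, "retweeted_status"):
        return status.retweeted_status.full_text
    return status.full_text

# Demo with stand-in objects (no API call needed):
rt = SimpleNamespace(
    full_text="RT @someone: truncated...",
    retweeted_status=SimpleNamespace(full_text="the complete original text"),
)
print(full_tweet_text(rt))  # the complete original text
```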

Security – How could PSBT handle the data capacity limit of QR codes?

A QR code has very limited data capacity: according to Wikipedia, a single QR code can hold less than 3 KB of binary data.

It's not uncommon for a PSBT to reach this 3 KB limit, especially one that spends non-SegWit UTXOs.

SegWit makes it possible to commit to the input amounts when signing. To sign non-SegWit inputs on an offline/hardware wallet, however, the complete data of the previous transactions is still required to validate the input amounts. Otherwise it could be a security issue: a malicious party or malware could secretly manipulate the inputs to trick the user into paying an unexpectedly high amount.
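To get a feel for the numbers, here is a minimal Python sketch that splits a serialized PSBT into frames that each fit one QR code. This is a naive illustration only — real wallets use dedicated multi-part schemes (e.g. Blockchain Commons' UR format) rather than ad-hoc chunking, and the payload below is fake:

```python
import base64

def chunk_psbt(psbt_bytes, max_frame_chars=2900):
    """Naively split a serialized PSBT (base64-encoded, as PSBTs
    usually are for transport) into pieces small enough for a
    single QR code's ~3 KB capacity."""
    b64 = base64.b64encode(psbt_bytes).decode("ascii")
    return [b64[i:i + max_frame_chars]
            for i in range(0, len(b64), max_frame_chars)]

# A fake ~10 KB payload standing in for a large legacy-input PSBT.
payload = b"psbt\xff" + b"\x00" * 10_000
frames = chunk_psbt(payload)
print(len(frames))  # 5 frames needed at ~2.9 KB each
```

Note that base64 inflates the size by a third, which is one reason a transaction with a few legacy inputs (each carrying a full previous transaction) overruns a single QR code so quickly.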

Calculus and Analysis – A possible bug in Limit

I tried to calculate the limit of an expression, say:

Limit[1, c -> I, Assumptions -> Im[c] > 1]

And I got a Limit::cas message. At first I thought the name c might be in use by some package, but the problem persists even if I replace c with other names. The version of Mathematica on my computer is "11.0.0 for Linux x86 (64-bit) (July 28, 2016)". Has anyone seen similar problems in other versions?

Bounding $1/\zeta(s)$ under GRH

Let $T \geq 0$ and assume GRH($T+100$), that is, that all non-trivial zeros $\rho$ of the Riemann zeta function with $|\operatorname{Im}(\rho)| \leq T + 100$ satisfy $\operatorname{Re}(\rho) = 1/2$. Can we then give a good upper bound for $|1/\zeta(s)|$ (that is, a good lower bound for $|\zeta(s)|$) for $s = \sigma + it$ with $|t| \leq T$ and $\sigma = 3/4$ or $\sigma = 1$, say?

(There are known analogous bounds for $\zeta'(s)/\zeta(s)$.)