linux – Importing big MySQL Database

I am trying to import a 6 GB database on RHEL 7.
I have been reading up on this and changed a number of settings.
Here are my main settings in my.cnf:

debug-info = TRUE
max_allowed_packet=8200M;
net_buffer_length=1000000M;
post_max_size=4096M
max_exection_time = 60 * 60;
upload_max_filesize=6000M
read_buffer_size = 2014K
connect_timeout = 1000000
net_write_timeout = 1000000
wait_timeout = 1000000
memory_limit=6000M

After changing my settings I restarted the mysql service and then checked the variables:

mysql> SHOW VARIABLES LIKE '%max%';
+------------------------------------------------------+----------------------+
| Variable_name                                        | Value                |
+------------------------------------------------------+----------------------+
| binlog_max_flush_queue_time                          | 0                    |
| ft_max_word_len                                      | 84                   |
| group_concat_max_len                                 | 1024                 |
| innodb_adaptive_max_sleep_delay                      | 150000               |
| innodb_change_buffer_max_size                        | 25                   |
| innodb_compression_pad_pct_max                       | 50                   |
| innodb_file_format_max                               | Barracuda            |
| innodb_ft_max_token_size                             | 84                   |
| innodb_io_capacity_max                               | 2000                 |
| innodb_max_dirty_pages_pct                           | 75.000000            |
| innodb_max_dirty_pages_pct_lwm                       | 0.000000             |
| innodb_max_purge_lag                                 | 0                    |
| innodb_max_purge_lag_delay                           | 0                    |
| innodb_max_undo_log_size                             | 1073741824           |
| innodb_online_alter_log_max_size                     | 134217728            |
| max_allowed_packet                                   | 4194304              |
| max_binlog_cache_size                                | 18446744073709547520 |
| max_binlog_size                                      | 1073741824           |
| max_binlog_stmt_cache_size                           | 18446744073709547520 |
| max_connect_errors                                   | 100                  |
| max_connections                                      | 151                  |
| max_delayed_threads                                  | 20                   |
| max_digest_length                                    | 1024                 |
| max_error_count                                      | 64                   |
| max_execution_time                                   | 0                    |
| max_heap_table_size                                  | 16777216             |
| max_insert_delayed_threads                           | 20                   |
| max_join_size                                        | 18446744073709551615 |
| max_length_for_sort_data                             | 1024                 |
| max_points_in_geometry                               | 65536                |
| max_prepared_stmt_count                              | 16382                |
| max_relay_log_size                                   | 0                    |
| max_seeks_for_key                                    | 18446744073709551615 |
| max_sort_length                                      | 1024                 |
| max_sp_recursion_depth                               | 0                    |
| max_tmp_tables                                       | 32                   |
| max_user_connections                                 | 0                    |
| max_write_lock_count                                 | 18446744073709551615 |
| myisam_max_sort_file_size                            | 9223372036853727232  |
| optimizer_trace_max_mem_size                         | 16384                |
| parser_max_mem_size                                  | 18446744073709551615 |
| performance_schema_max_cond_classes                  | 80                   |
| performance_schema_max_cond_instances                | -1                   |
| performance_schema_max_digest_length                 | 1024                 |
| performance_schema_max_file_classes                  | 80                   |
| performance_schema_max_file_handles                  | 32768                |
| performance_schema_max_file_instances                | -1                   |
| performance_schema_max_index_stat                    | -1                   |
| performance_schema_max_memory_classes                | 320                  |
| performance_schema_max_metadata_locks                | -1                   |
| performance_schema_max_mutex_classes                 | 210                  |
| performance_schema_max_mutex_instances               | -1                   |
| performance_schema_max_prepared_statements_instances | -1                   |
| performance_schema_max_program_instances             | -1                   |
| performance_schema_max_rwlock_classes                | 40                   |
| performance_schema_max_rwlock_instances              | -1                   |
| performance_schema_max_socket_classes                | 10                   |
| performance_schema_max_socket_instances              | -1                   |
| performance_schema_max_sql_text_length               | 1024                 |
| performance_schema_max_stage_classes                 | 150                  |
| performance_schema_max_statement_classes             | 193                  |
| performance_schema_max_statement_stack               | 10                   |
| performance_schema_max_table_handles                 | -1                   |
| performance_schema_max_table_instances               | -1                   |
| performance_schema_max_table_lock_stat               | -1                   |
| performance_schema_max_thread_classes                | 50                   |
| performance_schema_max_thread_instances              | -1                   |
| range_optimizer_max_mem_size                         | 8388608              |
| slave_max_allowed_packet                             | 1073741824           |
| slave_pending_jobs_size_max                          | 16777216             |
+------------------------------------------------------+----------------------+

70 rows in set (0.01 sec)

$ mysql -u user -p database_name < database_dump.sql --force --wait --reconnect

ERROR 2006 (HY000) at line 4432: MySQL server has gone away
ERROR 2006 (HY000) at line 4433: MySQL server has gone away
...
ERROR 2006 (HY000) at line 5707: MySQL server has gone away
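Note that several of the listed directives (post_max_size, upload_max_filesize, memory_limit, max_execution_time) are PHP settings that mysqld ignores, my.cnf values take neither trailing semicolons nor arithmetic like `60 * 60`, and the SHOW VARIABLES output confirms max_allowed_packet is still at its 4 MB default. A minimal sketch of the server-side portion only (values illustrative, not tuned recommendations):

```ini
# Sketch of the mysqld-relevant part only -- the PHP-style directives
# (post_max_size, upload_max_filesize, memory_limit) do not belong in my.cnf.
[mysqld]
max_allowed_packet = 1G      # 1G is the hard upper limit; larger values are capped
net_read_timeout   = 600
net_write_timeout  = 600
wait_timeout       = 28800
```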

mysql – why should I join 2 tables with 1 table apart

I could not solve this problem on HackerRank and had to look up the solution: I was joining the wrong tables.

There are 4 tables:
hackers
challenges
submissions made by hackers and their scores
difficulty table with levels.

Write a query to print the respective hacker_id and name of hackers
who achieved full scores for more than one challenge.

Submissions table fields: submission_id, hacker_id, challenge_id, score

Challenges table fields: challenge_id, hacker_id, difficulty_level

The way I joined: hackers + challenges, challenges + difficulty, challenges + submissions.

select hackers.hacker_id, name
from submissions
inner join challenges on submissions.challenge_id = challenges.challenge_id
inner join difficulty on difficulty.difficulty_level = challenges.difficulty_level
inner join hackers on challenges.hacker_id = hackers.hacker_id -- here is the wrong part!
where difficulty.score = submissions.score
  and difficulty.difficulty_level = challenges.difficulty_level
group by hackers.hacker_id, name
having count(challenges.challenge_id) > 1
order by count(challenges.challenge_id) desc, hackers.hacker_id

However, the right way was almost the same – except that I should have joined hackers to submissions on hacker_id, not to challenges on hacker_id.

The correct way:

select hackers.hacker_id, name
from submissions
inner join challenges on submissions.challenge_id = challenges.challenge_id
inner join difficulty on difficulty.difficulty_level = challenges.difficulty_level
inner join hackers on submissions.hacker_id = hackers.hacker_id
where difficulty.score = submissions.score
  and difficulty.difficulty_level = challenges.difficulty_level
group by hackers.hacker_id, name
having count(challenges.challenge_id) > 1
order by count(challenges.challenge_id) desc, hackers.hacker_id

What’s the logic behind joining hackers to submissions on hacker_id vs. joining hackers to challenges on hacker_id?

Why does it produce a different result? A hacker makes submissions, so it should not matter whether I join challenges + submissions + hackers or submissions + hackers + challenges…
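The difference comes from what each hacker_id column means: in challenges it identifies the challenge's *author*, while in submissions it identifies the *submitter*, and those can be different people. A minimal sketch with invented data, using SQLite to keep it self-contained:

```python
import sqlite3

# Invented miniature dataset for illustration only.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE hackers (hacker_id INT, name TEXT);
CREATE TABLE challenges (challenge_id INT, hacker_id INT, difficulty_level INT);
CREATE TABLE submissions (submission_id INT, hacker_id INT, challenge_id INT, score INT);

INSERT INTO hackers VALUES (1, 'Alice'), (2, 'Bob');
-- Challenge 10 was CREATED by Alice (hacker_id = 1) ...
INSERT INTO challenges VALUES (10, 1, 1);
-- ... but the submission came from Bob (hacker_id = 2).
INSERT INTO submissions VALUES (100, 2, 10, 50);
""")

# Joining hackers via challenges.hacker_id credits the challenge AUTHOR.
wrong = con.execute("""
SELECT h.name FROM submissions s
JOIN challenges c ON s.challenge_id = c.challenge_id
JOIN hackers h ON c.hacker_id = h.hacker_id
""").fetchall()

# Joining hackers via submissions.hacker_id credits the SUBMITTER.
right = con.execute("""
SELECT h.name FROM submissions s
JOIN challenges c ON s.challenge_id = c.challenge_id
JOIN hackers h ON s.hacker_id = h.hacker_id
""").fetchall()

print(wrong)  # [('Alice',)] -- the author, not the hacker who scored
print(right)  # [('Bob',)]   -- the actual submitter
```

So the join order is not the issue; the join *column* decides which person each submission row gets attached to.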

mysql – Summing values in PyQt5

I have the following problem.

I have this Python + PyQt5 project in development that works like this: in the code field I enter a number and click “Enviar” (Send), and it returns the description and price in their respective labels. It also adds each submitted product to the “itens comprados” (purchased items) field, which is a listWidget. The MySQL table has the following columns: codigo (integer), nome_produto (varchar) and preco (float).

But I have tried countless ways to sum the prices of the added products, without success. I tried building a loop (with while), with no luck, and I tried incrementing with:

preco = item(1)
soma = 0
soma += preco

I also tried storing the values in a list and doing SUM(nome_lista). None of that worked.

The code of the main function is:

from PyQt5 import uic, QtWidgets, QtCore, QtGui
import mysql.connector

mydb = mysql.connector.connect(host="localhost", user="root", database="db_farmacia", password="")
cursor = mydb.cursor()

def funcao_1():
    cursor = mydb.cursor()
    codigo = pdv.lineEdit.text()

    # Parameterized query instead of interpolating user input into the SQL
    cursor.execute("SELECT nome_produto, preço FROM tb_produtos WHERE codigo = %s;", (codigo,))

    for item in cursor.fetchall():
        descricao = item[0]  # result tuples are indexed with [], not called like item(0)
        preco = item[1]
        pdv.label_3.setText(f'{descricao}')
        pdv.label_7.setText(f'{preco}')
        pdv.listWidget.insertItem(0, f'{descricao}.....{preco}')

app = QtWidgets.QApplication([])
pdv = uic.loadUi("interface_2.ui")
pdv.pushButton.clicked.connect(funcao_1)

pdv.show()
app.exec()

I attached an image to try to explain it better. Can anyone help?
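A minimal sketch of one way to get the sum (the helper name and sample strings are invented; in the real app the entries could be read back with pdv.listWidget.item(i).text()): since each inserted item embeds the price after the '.....' separator, the prices can be parsed out and totalled:

```python
# Hypothetical helper: sum the prices back out of the "descricao.....preco"
# strings that funcao_1 inserts into the listWidget.
def total_from_items(items):
    total = 0.0
    for entry in items:
        # the price is whatever follows the '.....' separator
        price = float(entry.rsplit('.....', 1)[1])
        total += price
    return total

# Invented sample entries, in the same format used by insertItem
items = ['Aspirina.....5.5', 'Dipirona.....3.25']
print(total_from_items(items))  # 8.75
```

In the PyQt app, the list could be rebuilt as `[pdv.listWidget.item(i).text() for i in range(pdv.listWidget.count())]` each time a product is added, and the result shown in a total label. The key point is that the running total must live outside the fetchall loop (which starts fresh on every click), not be reset inside it.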

How can I disable mysql_native_password and mysql_old_password on MariaDB / MySQL?

I only want to use unix_socket as the sole authentication method.
Both mysql_native_password and mysql_old_password are installed and enabled by default, but I’d like to disable them.

I’m unable to uninstall them:

MariaDB [(none)]> uninstall plugin mysql_native_password;
ERROR 1619 (HY000): Built-in plugins cannot be deleted            

I’ve followed the instructions here:
https://mariadb.com/kb/en/mysql_plugin/

I’ve created disabled_plugins.ini in /etc/mysql with contents:

disabled_plugins
mysql_old_password

Then:

MariaDB [(none)]> mysql_plugin disabled_plugins DISABLE;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'mysql_plugin disabled_plugins DISABLE' at line 1

This is on WSL1, running 10.3.27-MariaDB-0+deb10u1-log on Debian 10.

Can someone help?

Thanks
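For what it's worth, the ERROR 1064 above comes from running mysql_plugin inside the SQL client: it is an operating-system command, not an SQL statement, so it would be invoked from a shell with mysqld stopped (paths and service name below are illustrative, and note the earlier ERROR 1619 already shows that built-in plugins such as mysql_native_password cannot be removed this way):

```shell
# mysql_plugin is a command-line tool, not SQL -- run it from the shell
# while the server is stopped. Paths and service name are illustrative.
sudo systemctl stop mariadb
mysql_plugin --datadir=/var/lib/mysql mysql_old_password DISABLE
sudo systemctl start mariadb
```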

Theme, WordPress, MySQL and PHP version updates – which files and folders are affected?

How do I keep my Files during WordPress and Theme Updates?

Don’t edit core WordPress files or the files of themes or plugins you didn’t develop yourself. If those are updated they are replaced entirely and you will lose any changes you make.

If you want to customise the theme, create a Child Theme. If you want other functionality, or to customise a plugin, create your own plugin.

Will the custom tables I made also be gone during a WordPress update?

No.

Does a MySQL version update remove my custom tables in WordPress?

No. MySQL doesn’t know or care which tables are yours and which are from WordPress.

MySQL group replication – how to find queries causing replication lag?

How can we determine which queries are causing a set of MySQL servers running Group Replication to have replication lag?

We have a MySQL Group Replication cluster with 9 servers, and we are currently experiencing replication lag (i.e., high values of performance_schema.replication_group_member_stats.COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE on several servers).

We noticed that neither slow queries nor frequent queries are necessarily correlated with lag across servers. We have also noticed that some specific joins we had in our application were causing lag spikes and, once we removed them, the lag reduced. This is, however, not the case for all joins.

Is there a tool or a methodology we can use to find the queries causing the lag?
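A starting point (assuming MySQL 8.0's performance_schema tables) is to identify which members are queueing transactions and then, on a lagging member, sample what the applier workers are doing while the lag is occurring:

```sql
-- Which members are behind?
SELECT MEMBER_ID, COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE
  FROM performance_schema.replication_group_member_stats;

-- On a lagging member: which transactions are the applier workers on?
SELECT WORKER_ID, APPLYING_TRANSACTION,
       APPLYING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP
  FROM performance_schema.replication_applier_status_by_worker;
```

Repeated sampling of the second query during a lag spike points at the transactions the applier is stuck on, which can then be matched back to the originating statements.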

mysql – How can I count the number of records that have a given value from another column as the value of a field?

I have this table, and I want to write an SQL query that, for each record of C, counts how many repeated records it has in column B.

| A    | B                | C          | D    | E     |
|------|------------------|------------|------|-------|
| 6050 | 6010203080504022 | 0000752740 | 6050 | 200   |
| 1050 | 101010           | 0000752740 | 1050 | 1000  |
| 3100 | 301015           | 0000752740 | 3100 | 2000  |
| 1100 | 101020           | 0000752740 | 1100 | 1000  |
| 2100 | 20103020         | 0001074123 | 2100 | 378   |
| 3550 | 301035           | 0001074123 | 3550 | 3392  |
| 5251 | 501025           | 0001074123 | 5251 | 1543  |
| 3530 | 301060           | 0001074123 | 3530 | 2850  |
| 6650 | 8010             | 0001074123 | 6650 | 157   |
| 4050 | 401010           | 0001074123 | 4050 | 10292 |
| 3900 | 301030           | 0001074123 | 3900 | 6288  |
| 2400 | 20103070         | 0001074123 | 2400 | 47689 |
| 7100 | 6010203020       | 0001074123 | 7100 | 5762  |
| 6800 | 601020308030     | 0001074123 | 6800 | 6     |
| 6600 | 8010             | 0001074123 | 6600 | 15709 |
| 3500 | 301050           | 0001074123 | 3500 | 4736  |
| 1230 | 101035           | 0001074123 | 1230 | 46004 |
| 1300 | 101030           | 0001074123 | 1300 | 1028  |
| 4330 | 401035           | 0001074123 | 4330 | 395   |

This is the result I am hoping for.

| A    | B                | C          | D    | E     | CONTEO |
|------|------------------|------------|------|-------|--------|
| 6050 | 6010203080504022 | 0000752740 | 6050 | 200   | 1      |
| 1050 | 101010           | 0000752740 | 1050 | 1000  | 1      |
| 3100 | 301015           | 0000752740 | 3100 | 2000  | 1      |
| 1100 | 101020           | 0000752740 | 1100 | 1000  | 1      |
| 2100 | 20103020         | 0001074123 | 2100 | 378   | 1      |
| 3550 | 301035           | 0001074123 | 3550 | 3392  | 1      |
| 5251 | 501025           | 0001074123 | 5251 | 1543  | 1      |
| 3530 | 301060           | 0001074123 | 3530 | 2850  | 1      |
| 6650 | 8010             | 0001074123 | 6650 | 157   | 2      |
| 4050 | 401010           | 0001074123 | 4050 | 10292 | 1      |
| 3900 | 301030           | 0001074123 | 3900 | 6288  | 1      |
| 2400 | 20103070         | 0001074123 | 2400 | 47689 | 1      |
| 7100 | 6010203020       | 0001074123 | 7100 | 5762  | 1      |
| 6800 | 601020308030     | 0001074123 | 6800 | 6     | 1      |
| 6600 | 8010             | 0001074123 | 6600 | 15709 | 2      |
| 3500 | 301050           | 0001074123 | 3500 | 4736  | 1      |
| 1230 | 101035           | 0001074123 | 1230 | 46004 | 1      |
| 1300 | 101030           | 0001074123 | 1300 | 1028  | 1      |
| 4330 | 401035           | 0001074123 | 4330 | 395   | 1      |

THANK YOU VERY MUCH!!!
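One way to express this is a window function counting rows per (C, B) pair. A sketch using SQLite with only a few of the rows above (in MySQL 8.0+ the same `COUNT(*) OVER (PARTITION BY ...)` works; older MySQL versions would need a correlated subquery or a self-join on C and B instead):

```python
import sqlite3

# Illustrative sketch: CONTEO counts, for each row, how many rows share the
# same C *and* the same B value. Only a few of the question's rows are loaded.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (A TEXT, B TEXT, C TEXT, D TEXT, E INT)")
con.executemany("INSERT INTO t VALUES (?,?,?,?,?)", [
    ('6650', '8010',   '0001074123', '6650', 157),
    ('6600', '8010',   '0001074123', '6600', 15709),
    ('4050', '401010', '0001074123', '4050', 10292),
    ('1050', '101010', '0000752740', '1050', 1000),
])

rows = con.execute("""
SELECT A, B, C, E,
       COUNT(*) OVER (PARTITION BY C, B) AS CONTEO
FROM t
""").fetchall()
for r in rows:
    print(r)
```

The two rows sharing B = 8010 within C = 0001074123 get CONTEO 2; every other row gets 1, matching the expected result.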

MySQL Group Replication starts master node with super_read_only

I am trying to set up MySQL Group Replication. The only problem is that when I try to start the replication group, it starts with super_read_only.

Here are the configurations in my my.cnf file:

[mysqld]

max_binlog_size = 4096
default_authentication_plugin     = mysql_native_password

log_bin                           = mysql-bin-1.log
enforce_gtid_consistency          = ON
gtid_mode                         = ON
log_slave_updates                 = ON
binlog_checksum                   = NONE

plugin-load-add                   = group_replication.so
plugin-load-add                   = mysql_clone.so
relay_log_recovery                = ON
transaction_write_set_extraction  = XXHASH64
loose_group_replication_start_on_boot                    = OFF
loose_group_replication_group_name                       = 74fe8890-679f-4e93-9169-a7edfbc1d427
loose_group_replication_group_seeds                      = mysql_cluster_mysql0_1:3306, mysql_cluster_mysql1_1:3306, mysql_cluster_mysql2_1:3306
loose_group_replication_single_primary_mode              = ON
loose_group_replication_enforce_update_everywhere_checks = OFF
bind-address = 0.0.0.0

The instances run inside Docker, which is why the group seed addresses use these hostnames.

Also, here is the procedure for setting up the master instance.

DELIMITER $$

USE `db`$$

DROP PROCEDURE IF EXISTS `set_as_master`$$

CREATE DEFINER=`root`@`%` PROCEDURE `set_as_master`()
BEGIN
  SET @@GLOBAL.group_replication_bootstrap_group=1;
  CREATE USER IF NOT EXISTS 'repl'@'%';
  GRANT REPLICATION SLAVE ON *.* TO repl@'%';
  FLUSH PRIVILEGES;
  CHANGE MASTER TO MASTER_USER='root' FOR CHANNEL 'group_replication_recovery';
  START GROUP_REPLICATION;
  -- SELECT * FROM performance_schema.replication_group_members;
END$$

DELIMITER ;

After running CALL set_as_master; in SQLyog, the process gets stuck on the lines below.

'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''.

2021-03-03T21:47:55.934818Z 8 [System] [MY-013587] [Repl] Plugin group_replication reported: 'Plugin 'group_replication' is starting.'

2021-03-03T21:47:55.935929Z 9 [System] [MY-011565] [Repl] Plugin group_replication reported: 'Setting super_read_only=ON.'

Why does it run with super_read_only=ON?
Is there anything I missed during configuration or in the script?

MySQL version is 8.0.23.
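For context, 'Setting super_read_only=ON' during START GROUP_REPLICATION is expected behavior: every member enables it while joining, and only the elected primary clears it once the member reaches ONLINE. One thing the procedure above never does is switch group_replication_bootstrap_group back off; as a sketch, the usual bootstrap sequence on the first node only is (later nodes run just START GROUP_REPLICATION):

```sql
-- On the FIRST node only:
SET GLOBAL group_replication_bootstrap_group = ON;
START GROUP_REPLICATION;
SET GLOBAL group_replication_bootstrap_group = OFF;

-- Check that this member became PRIMARY and ONLINE:
SELECT MEMBER_HOST, MEMBER_STATE, MEMBER_ROLE
  FROM performance_schema.replication_group_members;
```

If MEMBER_STATE never leaves RECOVERING, super_read_only stays on because the primary election never completes.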

mysql – Does a huge key length value for a multibyte column affect index performance?

When I look at EXPLAIN results, the key_len value is always calculated from the declared column length multiplied by the maximum number of bytes per character for the chosen encoding. Say, for a varchar(64) column using the utf8 encoding, key_len is 192.

Does this number affect performance in any way, and should I reduce it when possible? I mean, does it make MySQL reserve space somewhere that remains unused, or is it just a maximum possible value, while the space actually used depends on the exact data length?

So the actual question is: if I have a column that contains only Latin letters and numbers, should I change its encoding from utf8 to latin1 for the sake of the space occupied by the index and overall index performance?