mysql – Stored procedure to invert a bit value

I need a stored procedure that takes two parameters, an Id (INT) and a column name (CHAR(1)), and inverts the bit value in table #TEST5 in the row matching the Id, in the named column. If the existing value is NULL, no change should be made. (Note: the #TEST5 temporary table and BIT columns are SQL Server syntax, despite the mysql tag.)

CREATE TABLE #TEST5 ([Id] INT, [A] BIT, [B] BIT, [C] BIT, [D] BIT, [E] BIT);
INSERT INTO #TEST5 ([Id], [A], [C], [E]) VALUES (1, 'true', 'false', 'true');
INSERT INTO #TEST5 ([Id], [A], [B], [C]) VALUES (2, 'true', 'true', 'true');
INSERT INTO #TEST5 ([Id], [C], [D], [E]) VALUES (1, 'false', 'false', 'true');
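For reference, a minimal sketch of the invert-unless-NULL logic, shown here with Python and SQLite rather than as a T-SQL procedure. The allow-list check stands in for validating the column-name parameter, since identifiers cannot be bound as placeholders; table and column names mirror the question.

```python
import sqlite3

def invert_bit(conn, row_id, column):
    # Identifiers cannot be bound as parameters, so validate the column
    # name against an allow-list before interpolating it into the SQL.
    if column not in {"A", "B", "C", "D", "E"}:
        raise ValueError("unknown column")
    # 1 - value flips 0 <-> 1; the IS NOT NULL clause skips NULLs entirely.
    conn.execute(
        f"UPDATE TEST5 SET {column} = 1 - {column} "
        f"WHERE Id = ? AND {column} IS NOT NULL",
        (row_id,),
    )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TEST5 (Id INT, A INT, B INT, C INT, D INT, E INT)")
conn.execute("INSERT INTO TEST5 (Id, A, C, E) VALUES (1, 1, 0, 1)")

invert_bit(conn, 1, "A")   # 1 -> 0
invert_bit(conn, 1, "B")   # B is NULL, so the row is left unchanged
row = conn.execute("SELECT A, B, C FROM TEST5 WHERE Id = 1").fetchone()
```

In T-SQL the same idea would use dynamic SQL (`sp_executesql`) to substitute the validated column name.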

mysql – Deleting a row matching two variables, with a LIMIT

I am developing a PHP script that stores the user's session in a database.

When a user logs out of the server, only one row should be removed (since the same user can be logged in more than once).

When the server restarts, all sessions stored by this server (i.e. with the same IP address) should be removed.

  • Table name: totalconcurrent
  • Columns in table:
    • ID (int, autoincrement, 11)
    • serverip (mediumint)
    • userid (text)

Run on:

mysql  Ver 15.1 Distrib 5.5.64-MariaDB, for Linux (x86_64) using readline 5.1

The query runs correctly but does not delete anything.

    elseif ($_GET('status') == "logout"){
        $sql = "DELETE FROM totalconcurrent WHERE (serverip,userid) IN ((INET_ATON('".get_server_ip()."'),'".$_GET('id')."')) LIMIT 1;";
        if ($conn->query($sql) === TRUE) {
            echo "1 Session of ".$_GET('id')." removed";
        } else {
            echo "Error: " . $sql . "
" . $conn->error; } }

Likewise, this query runs without errors but does not delete anything:

    elseif ($_GET('status') == "reboot"){
        $sql = "DELETE FROM totalconcurrent WHERE serverip IN ((INET_ATON('".get_server_ip()."')));";

        if ($conn->query($sql) === TRUE) {
            echo "Server rebooted, removed all session stored in this server";
        } else {
            echo "Error: " . $sql . "
" . $conn->error; } }

I have tried many variations of these queries without finding the right approach.

What queries do I need?
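A sketch of both deletes with bound parameters instead of string concatenation (shown with Python and SQLite; table and column names mirror the question). MySQL supports `DELETE … WHERE serverip = ? AND userid = ? LIMIT 1` directly, whereas SQLite needs the LIMIT pushed into a subquery, as below:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE totalconcurrent "
             "(ID INTEGER PRIMARY KEY AUTOINCREMENT, serverip INT, userid TEXT)")
# Sample data: the same user logged in twice on server 1234.
conn.executemany("INSERT INTO totalconcurrent (serverip, userid) VALUES (?, ?)",
                 [(1234, "alice"), (1234, "alice"), (5678, "bob")])

# Logout: remove exactly one matching session.
conn.execute(
    "DELETE FROM totalconcurrent WHERE ID IN ("
    "  SELECT ID FROM totalconcurrent WHERE serverip = ? AND userid = ? LIMIT 1)",
    (1234, "alice"),
)
remaining = conn.execute(
    "SELECT COUNT(*) FROM totalconcurrent WHERE userid = 'alice'").fetchone()[0]

# Reboot: remove every session stored by this server.
conn.execute("DELETE FROM totalconcurrent WHERE serverip = ?", (1234,))
total = conn.execute("SELECT COUNT(*) FROM totalconcurrent").fetchone()[0]
```

Bound parameters also avoid the SQL-injection risk of interpolating `$_GET['id']` into the query; in PHP the equivalent is a prepared statement via mysqli or PDO.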

mysql – Splitting a user database across pools

I need to split a database of users across X pools, and every pool must end up with nearly the same number of users from each email provider.

To explain better, suppose my database contains:
100 Outlook.com users, 50 Live.com, 10 Gmail.com and 1 Toto.com,
and I am asked to split the database into 5 pools (it could also be 6, 9 or something else).

Then pools A/B/C/D should each get 20 Outlook.com, 10 Live.com and 2 Gmail.com users,
and pool E the same as above, plus the 1 Toto.com user.

I've already split the database by email provider, but I can't work out how to balance the pools per ESP almost equally.
The database contains more than 500 distinct domains, and some domains have only 2 or 3 users. I have to distribute it all wisely :p
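One simple approach is to deal each provider's users across the pools round-robin, card-game style, so every pool ends up within one user of the others for every provider. A sketch (sample data; real code would read the users and domains from the database):

```python
from collections import defaultdict
from itertools import cycle

def split_into_pools(users, num_pools):
    """Deal each provider's users across the pools in turn."""
    pools = [[] for _ in range(num_pools)]
    by_provider = defaultdict(list)
    for email in users:
        by_provider[email.split("@")[1].lower()].append(email)
    for provider_users in by_provider.values():
        target = cycle(range(num_pools))
        for user in provider_users:
            pools[next(target)].append(user)
    return pools

# The example distribution from the question: 100/50/10/1 users over 5 pools.
users = ([f"u{i}@outlook.com" for i in range(100)]
         + [f"l{i}@live.com" for i in range(50)]
         + [f"g{i}@gmail.com" for i in range(10)]
         + ["t0@toto.com"])
pools = split_into_pools(users, 5)
sizes = [len(p) for p in pools]  # pool 0 collects the odd Toto.com user
```

Since each provider's deal restarts at pool 0, the first pools accumulate the remainders; rotating the starting pool per provider would spread those leftovers too.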

Performance – MySQL – What options in the configuration file affect memory usage?

I was wondering how to manage MySQL's memory usage, because by default it uses up to 350 MB while idle on my computer. I have no memory problems; I honestly just wondered how it could be done.

I found several answers on optimizing the configuration file; they worked as intended, and one of them even reduced memory usage to 100 MB.


Questions:

1.- Which of the options have the most impact on memory usage?

2.- Where can I find out how these options affect performance? (Documentation / books / everything)


Example configuration file with which MySQL needs only 100 MB (it's a Docker container):

[mysqld]
performance_schema = 0
skip-host-cache
skip-name-resolve
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
datadir = /var/lib/mysql
secure-file-priv = NULL
skip-external-locking
max_connections = 100
connect_timeout = 5
wait_timeout = 600
max_allowed_packet = 16M
thread_cache_size = 128
sort_buffer_size = 4M
bulk_insert_buffer_size = 16M
tmp_table_size = 32M
max_heap_table_size = 32M
myisam_recover_options = BACKUP
key_buffer_size = 128M
table_open_cache = 400
myisam_sort_buffer_size = 512M
concurrent_insert = 2
read_buffer_size = 2M
read_rnd_buffer_size = 1M
long_query_time = 10
expire_logs_days = 10
max_binlog_size = 100M
default_storage_engine = InnoDB
innodb_buffer_pool_size = 32M
innodb_log_buffer_size = 8M
innodb_file_per_table = 1
innodb_open_files = 400
innodb_io_capacity = 400
innodb_flush_method = O_DIRECT

[mysqldump]
quick
quote-names
max_allowed_packet = 16M

[isamchk]
key_buffer = 16M
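Regarding question 1: the biggest levers are the global buffers (allocated once) and the per-connection buffers (allocated up to once per connection, so multiplied by `max_connections`). A rough back-of-envelope worst-case estimate for the config above, using the classic formula; it deliberately omits thread stacks, join buffers and in-memory temp tables, so treat it as an upper-bound sketch, not an exact accounting:

```python
# Figures in MB, taken from the configuration file above.
global_buffers = {
    "innodb_buffer_pool_size": 32,  # allocated once, the main InnoDB cache
    "innodb_log_buffer_size": 8,
    "key_buffer_size": 128,         # MyISAM index cache, also global
}
per_connection_buffers = {
    "sort_buffer_size": 4,          # each allocated per connection as needed
    "read_buffer_size": 2,
    "read_rnd_buffer_size": 1,
}
max_connections = 100

worst_case_mb = (sum(global_buffers.values())
                 + max_connections * sum(per_connection_buffers.values()))
```

With every connection busy, this config could in theory reach roughly 868 MB, which is why `max_connections` and the per-connection buffer sizes matter as much as the headline `innodb_buffer_pool_size`.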

mysql – How can I prevent a table from being marked as crashed with "last (automatic?) repair failed"?

This is the second time in the past 4 days that one of the tables in my database has been corrupted. This is the error I saw in my Apache log:

PHP Fatal error:  Uncaught exception 'PDOException' with message 'SQLSTATE[HY000]: General error: 144 Table 'TABLE_NAME' is marked as crashed and last (automatic?) repair failed'

I managed to fix it manually using:

myisamchk -r -f $TABLE_NAME

But of course this is not a long-term solution.

I need to understand why this is happening so that I can prevent it from happening again in the future.

  • The total size of the database is 2 GB, 100 tables, 7.5 million rows.
  • The table in which it happened is the largest in the database: 1.2 million rows, 650 MB.
  • Server version: 5.5.50
  • Ubuntu 14.04.1
  • PHP 5.5.9
  • I checked, and there was enough free disk space. It does not appear to have been a RAM issue so far.
  • The host on which the database is located is c3.xlarge (8 GB RAM)

The database is on the server's hard drive. I'm considering moving it to RDS and wonder if that would help.

What do you suggest I do? Is there a way to analyze MySQL and understand where the problem is?

mysql – How does the RDBMS table store the row data internally?

Does an RDBMS store a table's rows contiguously within a data block? And when a data block is full, is the next block allocated contiguously?

As I understand it, the DBMS must reserve a set of contiguous blocks (an extent) for a table when the table is created; once that extent is full, another set of contiguous blocks is allocated to the table. Is that right? This would help the DBMS perform range scans efficiently.

For me, the answer to the first question is definitely yes. For the second, it holds only as long as contiguous blocks are available.

mysql – Quoting in an INSERT statement that uses FROM_UNIXTIME

In a Python environment I have the following variable:

post_time_ms="1581546697000"

This is a Unix-style time with milliseconds.

In my table, "created_date_time" is defined as the datetime column.

I am trying to use an INSERT statement of the form:

sql_insert_query = "INSERT INTO myTable (id_string, text,
 created_date_time) VALUES ('identifier', 'text_content',
 FROM_UNIXTIME('post_time/1000')"

I can't figure out how to quote this correctly. When I run the query as shown above, I get:

"Failed to insert record 1292 (22007): Truncated incorrect DECIMAL value: 'tweet_post_time/1000'

I've tried every variation of single quotes / no quotes I can think of, but I always get errors.

For example if I:

sql_insert_query = "INSERT INTO myTable (id_string, text,
 created_date_time) VALUES ('identifier', 'text_content',
 FROM_UNIXTIME('post_time'/1000)"

I get:

Failed to insert record 1292 (22007): Truncated incorrect DOUBLE value: 'tweet_post_time'

I even went so far as to convert the Unix-style value "1581546697000" as follows:

post_time_mysql = datetime.fromtimestamp(int(post_time)/1000)

and then:

sql_insert_query = "INSERT INTO myTable (id_string, text,
 created_date_time) VALUES ('identifier', 'text_content',
 'post_time_mysql')"

and although

print(post_time_mysql)

Issues "2020-02-14 09:25:28",

I am still getting this error for the query above:

Failed to insert record 1292 (22007): Incorrect datetime value: 'post_time_mysql' for column `myDatabase`.`myTable`.`created_date_time` at row 1

Any ideas / suggestions?
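The errors all stem from putting the variable's *name* inside string literals, so MySQL receives the literal text `'post_time/1000'` rather than the number. A parameterized query passes the value itself and lets `FROM_UNIXTIME` convert it server-side. A sketch (the `cursor.execute` line assumes a DB-API driver such as mysql.connector or PyMySQL):

```python
from datetime import datetime, timezone

post_time_ms = "1581546697000"

# %s placeholders are filled in by the driver; no manual quoting needed.
sql_insert_query = (
    "INSERT INTO myTable (id_string, text, created_date_time) "
    "VALUES (%s, %s, FROM_UNIXTIME(%s / 1000))"
)
params = ("identifier", "text_content", int(post_time_ms))
# cursor.execute(sql_insert_query, params)

# Equivalent client-side conversion, if preferred; pass dt as the third
# parameter and drop FROM_UNIXTIME from the statement.
dt = datetime.fromtimestamp(int(post_time_ms) / 1000, tz=timezone.utc)
```

The same applies to the final attempt: `'post_time_mysql'` in the query string is the five-word literal, not the Python variable; binding it as a parameter fixes that too.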

mysql – Slow query when matching columns from the second table in JOIN with GROUP BY

I have been unable to optimize this MySQL query satisfactorily. It takes about 1.2 seconds on my development machine. If I remove the GROUP BY clause, or the conditions matching columns from the second table of the join (the OR … LIKE … lines), performance improves significantly (EXPLAIN SELECT results below).

Any suggestions for speeding this up are welcome.

SELECT
    `product_ndc`,
    CONCAT(`brand_name`, ' (', `generic_name`, ')') AS `name`,
    `dosage_form`,
    `dea_schedule`,
    `labeler_name`,
    `ingredients`,
    MATCH (`brand_name`, `generic_name`, `labeler_name`, `ingredients`) AGAINST ('codeine' IN BOOLEAN MODE) AS `score`
FROM product_tbl
LEFT JOIN package_tbl ON (`product_tbl`.`id` = `package_tbl`.`id`)
WHERE MATCH (`brand_name`, `generic_name`, `labeler_name`, `ingredients`) AGAINST ('codeine' IN BOOLEAN MODE)
OR `package_ndc` LIKE '4%'
OR `package_ndc_11dig` LIKE '4%'
OR `fuzzed_package_ndc` LIKE '4%'
OR `fuzzed_package_ndc_11dig` LIKE '4%'
GROUP BY `product_tbl`.`id`
ORDER BY `score` DESC
LIMIT 25;

SHOW CREATE TABLE product_tbl (the table contains 111,502 rows)

CREATE TABLE `product_tbl` (
  `id` mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
  `product_id` varchar(48) NOT NULL,
  `product_ndc` varchar(10) NOT NULL,
  `spl_id` varchar(36) NOT NULL,
  `rxcui` varchar(8) NOT NULL,
  `brand_name` varchar(255) NOT NULL,
  `generic_name` varchar(520) NOT NULL,
  `dosage_form` varchar(255) NOT NULL,
  `dea_schedule` varchar(3) NOT NULL,
  `labeler_name` varchar(255) NOT NULL,
  `is_original_packager` tinyint(1) NOT NULL,
  `finished` tinyint(1) NOT NULL,
  `ingredients` text NOT NULL,
  PRIMARY KEY (`id`),
  KEY `product_ndc` (`product_ndc`),
  KEY `brand_name` (`brand_name`),
  KEY `generic_name` (`generic_name`),
  KEY `dosage_form` (`dosage_form`),
  KEY `dea_schedule` (`dea_schedule`),
  KEY `labeler_name` (`labeler_name`),
  FULLTEXT KEY `ingredients` (`ingredients`),
  FULLTEXT KEY `ft_all` (`brand_name`,`generic_name`,`labeler_name`,`ingredients`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8

SHOW CREATE TABLE package_tbl (the table contains 205,042 rows)

CREATE TABLE `package_tbl` (
  `id` mediumint(8) unsigned DEFAULT NULL,
  `package_ndc` char(12) NOT NULL,
  `package_ndc_11dig` char(13) NOT NULL,
  `fuzzed_package_ndc` varchar(10) NOT NULL,
  `fuzzed_package_ndc_11dig` varchar(11) NOT NULL,
  `description` varchar(255) NOT NULL,
  KEY `package_ndc` (`package_ndc`),
  KEY `package_ndc_11dig` (`package_ndc_11dig`),
  KEY `fuzzed_package_ndc` (`fuzzed_package_ndc`),
  KEY `fuzzed_package_ndc_11dig` (`fuzzed_package_ndc_11dig`),
  KEY `id` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8

EXPLAIN for my original query (shown above)

+----+-------------+-------------+------------+-------+------------------------------------------------------------------------------------------------------+---------+---------+--------------------+--------+----------+---------------------------------+
| id | select_type | table       | partitions | type  | possible_keys                                                                                        | key     | key_len | ref                | rows   | filtered | Extra                           |
+----+-------------+-------------+------------+-------+------------------------------------------------------------------------------------------------------+---------+---------+--------------------+--------+----------+---------------------------------+
|  1 | SIMPLE      | product_tbl | NULL       | index | PRIMARY,product_ndc,brand_name,generic_name,dosage_form,dea_schedule,labeler_name,ingredients,ft_all | PRIMARY | 3       | NULL               | 102739 |   100.00 | Using temporary; Using filesort |
|  1 | SIMPLE      | package_tbl | NULL       | ref   | id                                                                                                   | id      | 4       | fwr.product_tbl.id |      1 |   100.00 | Using where                     |
+----+-------------+-------------+------------+-------+------------------------------------------------------------------------------------------------------+---------+---------+--------------------+--------+----------+---------------------------------+

Removing the OR … LIKE … conditions

If I remove the conditions that match columns from the join's second table (the OR … LIKE … lines), the query time drops to 0.02 seconds. This is just an experiment to show that the problem is related to comparing values in the joined table; I really do need to filter on that table, so this is not a viable option.

SELECT
    `product_ndc`,
    CONCAT(`brand_name`, ' (', `generic_name`, ')') AS `name`,
    `dosage_form`,
    `dea_schedule`,
    `labeler_name`,
    `ingredients`,
    MATCH (`brand_name`, `generic_name`, `labeler_name`, `ingredients`) AGAINST ('codeine' IN BOOLEAN MODE) AS `score`
FROM product_tbl
LEFT JOIN package_tbl ON (`product_tbl`.`id` = `package_tbl`.`id`)
WHERE MATCH (`brand_name`, `generic_name`, `labeler_name`, `ingredients`) AGAINST ('codeine' IN BOOLEAN MODE)
GROUP BY `product_tbl`.`id`
ORDER BY `score` DESC
LIMIT 25;
+----+-------------+-------------+------------+----------+------------------------------------------------------------------------------------------------------+--------+---------+--------------------+------+----------+----------------------------------------------+
| id | select_type | table       | partitions | type     | possible_keys                                                                                        | key    | key_len | ref                | rows | filtered | Extra                                        |
+----+-------------+-------------+------------+----------+------------------------------------------------------------------------------------------------------+--------+---------+--------------------+------+----------+----------------------------------------------+
|  1 | SIMPLE      | product_tbl | NULL       | fulltext | PRIMARY,product_ndc,brand_name,generic_name,dosage_form,dea_schedule,labeler_name,ingredients,ft_all | ft_all | 0       | const              |    1 |   100.00 | Using where; Using temporary; Using filesort |
|  1 | SIMPLE      | package_tbl | NULL       | ref      | id                                                                                                   | id     | 4       | fwr.product_tbl.id |    1 |   100.00 | Using index                                  |
+----+-------------+-------------+------------+----------+------------------------------------------------------------------------------------------------------+--------+---------+--------------------+------+----------+----------------------------------------------+

Removing the GROUP BY … clause

If I remove the GROUP BY product_tbl.id line, the query time is 0.0015 seconds. This is great, but then I get duplicated rows for the data I need.

SELECT
    `product_ndc`,
    CONCAT(`brand_name`, ' (', `generic_name`, ')') AS `name`,
    `dosage_form`,
    `dea_schedule`,
    `labeler_name`,
    `ingredients`,
    MATCH (`brand_name`, `generic_name`, `labeler_name`, `ingredients`) AGAINST ('codeine' IN BOOLEAN MODE) AS `score`
FROM product_tbl
LEFT JOIN package_tbl ON (`product_tbl`.`id` = `package_tbl`.`id`)
WHERE MATCH (`brand_name`, `generic_name`, `labeler_name`, `ingredients`) AGAINST ('codeine' IN BOOLEAN MODE)
OR `package_ndc` LIKE '4%'
OR `package_ndc_11dig` LIKE '4%'
OR `fuzzed_package_ndc` LIKE '4%'
OR `fuzzed_package_ndc_11dig` LIKE '4%'
ORDER BY `score` DESC
LIMIT 25;
+----+-------------+-------------+------------+----------+---------------+--------+---------+--------------------+--------+----------+------------------+
| id | select_type | table       | partitions | type     | possible_keys | key    | key_len | ref                | rows   | filtered | Extra            |
+----+-------------+-------------+------------+----------+---------------+--------+---------+--------------------+--------+----------+------------------+
|  1 | SIMPLE      | product_tbl | NULL       | fulltext | NULL          | ft_all | 3099    | NULL               | 102739 |   100.00 | Ft_hints: sorted |
|  1 | SIMPLE      | package_tbl | NULL       | ref      | id            | id     | 4       | fwr.product_tbl.id |      1 |   100.00 | Using where      |
+----+-------------+-------------+------------+----------+---------------+--------+---------+--------------------+--------+----------+------------------+

Using a subquery

I tried selecting the matching records from package_tbl with a subquery. It's a little faster, but still slow – about 0.6 seconds:

SELECT
    `product_ndc`,
    CONCAT(`brand_name`, ' (', `generic_name`, ')') AS `name`,
    `dosage_form`,
    `dea_schedule`,
    `labeler_name`,
    `ingredients`,
    MATCH (`brand_name`, `generic_name`, `labeler_name`, `ingredients`) AGAINST ('codeine' IN BOOLEAN MODE) AS `score`
FROM product_tbl
WHERE MATCH (`brand_name`, `generic_name`, `labeler_name`, `ingredients`) AGAINST ('codeine' IN BOOLEAN MODE)
OR `product_tbl`.`id` IN (
    SELECT id FROM package_tbl
    WHERE `package_ndc` LIKE '4%'
    OR `package_ndc_11dig` LIKE '4%'
    OR `fuzzed_package_ndc` LIKE '4%'
    OR `fuzzed_package_ndc_11dig` LIKE '4%'
)
GROUP BY `product_tbl`.`id`
ORDER BY `score` DESC
LIMIT 25;
+----+-------------+-------------+------------+------+------------------------------------------------------------------------------------------------------+------+---------+------+--------+----------+-----------------------------+
| id | select_type | table       | partitions | type | possible_keys                                                                                        | key  | key_len | ref  | rows   | filtered | Extra                       |
+----+-------------+-------------+------------+------+------------------------------------------------------------------------------------------------------+------+---------+------+--------+----------+-----------------------------+
|  1 | PRIMARY     | product_tbl | NULL       | ALL  | PRIMARY,product_ndc,brand_name,generic_name,dosage_form,dea_schedule,labeler_name,ingredients,ft_all | NULL | NULL    | NULL | 102739 |   100.00 | Using where; Using filesort |
|  2 | SUBQUERY    | package_tbl | NULL       | ALL  | package_ndc,package_ndc_11dig,fuzzed_package_ndc,fuzzed_package_ndc_11dig,id                         | NULL | NULL    | NULL | 192238 |    37.57 | Using where                 |
+----+-------------+-------------+------------+------+------------------------------------------------------------------------------------------------------+------+---------+------+--------+----------+-----------------------------+
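One common workaround for OR conditions that span two tables is to run each arm as its own index-friendly query and UNION the ids, so the optimizer can use the fulltext index for one arm and the package_ndc indexes for the others instead of scanning product_tbl. A sketch against the schema above (untested against this data; the GROUP BY is no longer needed because the join is gone):

```sql
SELECT
    `product_ndc`,
    CONCAT(`brand_name`, ' (', `generic_name`, ')') AS `name`,
    `dosage_form`,
    `dea_schedule`,
    `labeler_name`,
    `ingredients`,
    MATCH (`brand_name`, `generic_name`, `labeler_name`, `ingredients`)
        AGAINST ('codeine' IN BOOLEAN MODE) AS `score`
FROM product_tbl
WHERE `product_tbl`.`id` IN (
    -- each branch can use its own index; UNION deduplicates the ids
    SELECT `id` FROM product_tbl
    WHERE MATCH (`brand_name`, `generic_name`, `labeler_name`, `ingredients`)
        AGAINST ('codeine' IN BOOLEAN MODE)
    UNION
    SELECT `id` FROM package_tbl WHERE `package_ndc` LIKE '4%'
    UNION
    SELECT `id` FROM package_tbl WHERE `package_ndc_11dig` LIKE '4%'
    UNION
    SELECT `id` FROM package_tbl WHERE `fuzzed_package_ndc` LIKE '4%'
    UNION
    SELECT `id` FROM package_tbl WHERE `fuzzed_package_ndc_11dig` LIKE '4%'
)
ORDER BY `score` DESC
LIMIT 25;
```

Whether this wins depends on how selective the LIKE '4%' prefixes are; EXPLAIN on each branch will show whether the per-branch indexes are actually chosen.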

mysqli – MySQL query with a sum over a field

I have the following problem: I have a table called "devoluciones" (returns), and I need a query that sums the quantities already returned for each product, restricted to rows that share a common value in the "comp_sal" field. I need to add up all the quantities per product code listed in the table. I think I'm missing something. Below is my query, which currently adds up all the product units without distinguishing the code. I added an index, but it doesn't help. Please, can someone help me?

  $sql2 = "SELECT SUM(devoluciones.cantdev) as cantdev, 
                  devoluciones.id_producto as devprod 
           from devoluciones, productos 
           where devoluciones.comp_sal = '$idcomp' 
           and devoluciones.id_producto = productos.id_producto  ";

$result2 = mysqli_query($conn, $sql2);

while ($row = mysqli_fetch_array($result2)) {

    $devprod = $row['devprod'];

    $cantidadev[$devprod] = $cantidadev[$devprod] + $row['cantdev'];

}
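The missing piece is most likely a GROUP BY: without it, SUM() collapses every matching row into one grand total instead of one total per product code. A sketch of the difference (shown with Python and SQLite; sample data and column names mirror the question, and the join to productos is omitted for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE devoluciones (id_producto TEXT, comp_sal TEXT, cantdev INT)")
conn.executemany(
    "INSERT INTO devoluciones VALUES (?, ?, ?)",
    [("A", "c1", 2), ("A", "c1", 3), ("B", "c1", 4), ("A", "c2", 9)],
)

# GROUP BY id_producto yields one summed row per product code,
# restricted to the rows sharing the given comp_sal value.
rows = conn.execute(
    "SELECT id_producto, SUM(cantdev) FROM devoluciones "
    "WHERE comp_sal = ? GROUP BY id_producto ORDER BY id_producto",
    ("c1",),
).fetchall()
cantidadev = dict(rows)  # per-product totals, no PHP-side accumulation needed
```

With GROUP BY in the SQL, the PHP loop no longer needs to accumulate totals itself; each fetched row already carries the sum for one product.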