mysql – MariaDB crashed: Unknown/unsupported storage engine: InnoDB

I have a Debian GNU/Linux 9 droplet (4 GB RAM, 2 CPUs) on DigitalOcean. Tonight, without my having changed anything, my database (MariaDB) crashed with the errors below. I run a WordPress site with InnoDB and MyISAM tables:

2020-10-17  0:51:18 140430430813568 [Note] InnoDB: Using mutexes to ref count buffer pool pages
2020-10-17  0:51:18 140430430813568 [Note] InnoDB: The InnoDB memory heap is disabled
2020-10-17  0:51:18 140430430813568 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2020-10-17  0:51:18 140430430813568 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
2020-10-17  0:51:18 140430430813568 [Note] InnoDB: Compressed tables use zlib 1.2.8
2020-10-17  0:51:18 140430430813568 [Note] InnoDB: Using Linux native AIO
2020-10-17  0:51:18 140430430813568 [Note] InnoDB: Using SSE crc32 instructions
2020-10-17  0:51:18 140430430813568 [Note] InnoDB: Initializing buffer pool, size = 500.0M
InnoDB: mmap(549126144 bytes) failed; errno 12
2020-10-17  0:51:18 140430430813568 [ERROR] InnoDB: Cannot allocate memory for the buffer pool
2020-10-17  0:51:18 140430430813568 [ERROR] Plugin 'InnoDB' init function returned error.
2020-10-17  0:51:18 140430430813568 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
2020-10-17  0:51:18 140430430813568 [Note] Plugin 'FEEDBACK' is disabled.
2020-10-17  0:51:18 140430430813568 [ERROR] Unknown/unsupported storage engine: InnoDB
2020-10-17  0:51:18 140430430813568 [ERROR] Aborting
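In the log above, errno 12 from mmap() is ENOMEM: the server asked the kernel for ~549 MB for the buffer pool and the allocation was refused, which on a 4 GB droplet usually means other processes (e.g. the web server and PHP) had already consumed the memory and there is no swap to fall back on. A minimal sketch of the usual stopgap, assuming the `[mysqld]` section of the config shown below (the values here are hypothetical and should be sized to the actual workload):

```ini
# [mysqld] section of the MariaDB config - illustrative values only
innodb_buffer_pool_size = 256M   ; was 500M; must fit alongside the web server and PHP
query_cache_size        = 16M    ; was 50M; also counts toward the server's footprint
```

After editing, restarting the service and watching the error log should show whether the pool now allocates cleanly.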

My full DB conf:

# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
# See the examples of server my.cnf files in /usr/share/mysql/

# this is read by the standalone daemon and embedded servers

# this is only for the mysqld standalone daemon

# * Basic Settings
user        = mysql
pid-file    = /var/run/mysqld/
socket      = /var/run/mysqld/mysqld.sock
port        = 3306
basedir     = /usr
datadir     = /var/lib/mysql
tmpdir      = /tmp
lc-messages-dir = /usr/share/mysql

# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
bind-address        =

# * Fine Tuning
key_buffer_size     = 16M
max_allowed_packet  = 16M
thread_stack        = 192K
thread_cache_size       = 8
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam_recover_options  = BACKUP
#max_connections        = 100
#table_cache            = 64
#thread_concurrency     = 10

innodb_buffer_pool_instances = 1
innodb_buffer_pool_size = 500M
max_heap_table_size     = 25M
tmp_table_size          = 25M
#log_slow_queries        = /var/log/mysql/mysql-slow.log
#long_query_time = 2

# * Query Cache Configuration
query_cache_limit   = 2M
query_cache_size        = 50M

# * Logging and Replication
# Both location gets rotated by the cronjob.
# Be aware that this log type is a performance killer.
# As of 5.1 you can enable the log at runtime!
#general_log_file        = /var/log/mysql/mysql.log
#general_log             = 1
# Error log - should be very few entries.
log_error = /var/log/mysql/error.log
# Enable the slow query log to see queries with especially long duration
#slow_query_log_file    = /var/log/mysql/mariadb-slow.log
#long_query_time = 10
#log_slow_rate_limit    = 1000
#log_slow_verbosity = query_plan
# The following can be used as easy to replay backup logs or for replication.
# note: if you are setting up a replication slave, see README.Debian about
#       other settings you may need to change.
#server-id      = 1
#log_bin            = /var/log/mysql/mysql-bin.log
expire_logs_days    = 10
max_binlog_size   = 100M
#binlog_do_db       = include_database_name
#binlog_ignore_db   = exclude_database_name

# * InnoDB
# InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
# Read the manual for more InnoDB related options. There are many!

# * Security Features
# Read the manual, too, if you want chroot!
# chroot = /var/lib/mysql/
# For generating SSL certificates you can use for example the GUI tool "tinyca".
# ssl-ca=/etc/mysql/cacert.pem
# ssl-cert=/etc/mysql/server-cert.pem
# ssl-key=/etc/mysql/server-key.pem
# Accept only connections using the latest and most secure TLS protocol version.
# ..when MariaDB is compiled with OpenSSL:
# ssl-cipher=TLSv1.2
# ..when MariaDB is compiled with YaSSL (default in Debian):
# ssl=on

# * Character sets
# MySQL/MariaDB default is Latin1, but in Debian we rather default to the full
# utf8 4-byte character set. See also client.cnf
character-set-server  = utf8mb4
collation-server      = utf8mb4_general_ci

# * Unix socket authentication plugin is built-in since 10.0.22-6
# Needed so the root database user can authenticate without a password but
# only when running as the unix root user.
# Also available for other users if required.
# See

# this is only for embedded server

# This group is only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here

# This group is only read by MariaDB-10.1 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand

[htop screenshots omitted: the first page of htop, and the result after a few minutes. I could not export/copy the htop output as text.]

I would be happy if you could help me out!!

Thanks a lot

innodb – MySQL Slow Query WARNING

I am running a FiveM (Grand Theft Auto V multiplayer game modification) server, which uses MySQL as its database. Once a lot of data is stored, queries execute very slowly and a Slow Query warning appears in my console. Can someone help me? How can I fix or improve this and make query execution faster? When I receive the slow-query warning, everyone on my server experiences a delay: for example, opening a menu that reads from the database takes 1-2 minutes. A brand-new database has no problem; the issue only appears once the database grows past about 40 MB, which I don't think should be a problem. I'd be glad if someone could help.

Slow query warnings:

 (esx_billing) (4825ms) INSERT INTO billing (identifier, sender, target_type, target, label, amount) VALUES (?, ?, ?, ?, ?, ?) : ("steam:110000142fd3d53","steam:110000142fd3d53","society","society_police","Speedcamera (80KM/H) - Your speed: 148 KM/H - ",1300)

 (esx_inventoryhud_trunk) (1456ms) SELECT * FROM trunk_inventory WHERE plate = ? : ("EIK 160 ")

 (esplugin_mysql) (2232ms) UPDATE users SET `money`=?, `bank`=? WHERE `identifier`=? : (6066,337190,"steam:110000136560c03")

 (gcphone) (3332ms) UPDATE phone_messages SET phone_messages.isRead = 1 WHERE phone_messages.receiver = ? AND phone_messages.transmitter = ? : ("391-2698","774-8865")

Is this something in MySQL's configuration? Do I need to change some values/settings in the .ini file to avoid these slow queries? They only happen when more than 40 players are connected to my server, so I assume more players = more queries.
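The logged SELECT and UPDATE statements each filter on a single column, so if those columns are unindexed, every execution is a full table scan, which gets slower as the tables grow (and the 4.8 s INSERT is more likely lock contention caused by those scans than the INSERT itself). A hedged sketch, assuming the table and column names from the log excerpts above and that no such indexes exist yet:

```sql
-- speculative indexes derived from the WHERE clauses in the slow-query log;
-- check first with SHOW INDEX FROM <table> that they are actually missing
ALTER TABLE trunk_inventory ADD INDEX idx_plate (plate);
ALTER TABLE users           ADD INDEX idx_identifier (identifier);
ALTER TABLE phone_messages  ADD INDEX idx_recv_trans (receiver, transmitter);
```

If the columns are already indexed, the next suspect would be the server's `innodb_buffer_pool_size` being too small for the working set.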

innodb – Cannot drop empty database in mariadb

I am having trouble dropping a database on my server.
innodb_file_per_table is set to ON.

We previously had a huge table (1 TB) which we dropped with a DROP TABLE command. I believe it completed successfully, but it left a file called #sql-ib116.ibd in the directory.

We have completely emptied the database, but when I issue DROP DATABASE dbname, it seems to get stuck in the "closing tables" phase.

I then tried to create an equivalent .frm file and drop the table using this guide, to no avail.

I'm running out of server space and would ideally like to manually remove the .ibd file.
If I delete the file from the datadir, will this cause issues? I'm not worried about the data in this .ibd file. I also have many other databases in this MariaDB instance; will deleting it affect the rest?

Any ideas would be fantastic!
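#sql-ib*.ibd files are InnoDB intermediate tables left behind by an interrupted ALTER or DROP. Before touching the file on disk, it may help to check whether InnoDB's internal data dictionary still references it, since a dictionary entry with no matching file (or vice versa) is what typically makes DROP DATABASE hang. A sketch, assuming a MariaDB 10.x server where the InnoDB information_schema tables are available:

```sql
-- list internal InnoDB table entries that still reference orphan #sql tables
SELECT TABLE_ID, NAME
FROM information_schema.INNODB_SYS_TABLES
WHERE NAME LIKE '%#sql%';
```

If an entry shows up, the documented `#mysql50#` DROP TABLE trick lets the server remove both the entry and the file itself, which is safer than deleting the .ibd manually.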

innodb – What is the best configuration for a MySQL instance with a lot of databases and lot of tables?

So you have 100GB of data held in 3000 databases?

Each of those databases is trivially small (I'd guess about 30 MB each).

I would seriously suggest that you need to reconsider your desire to segment / segregate your data in this way. It’s almost certainly not the best way to do things.

You’re trying to run a Windows Server with only 4GB of RAM?
I’m surprised it even starts up!

OK (I just checked): our friends in Redmond recommend “at least 2GB” of RAM, but all that machine will be capable of doing is running the operating system itself and keeping the office a bit warmer with its fan.

If you want to run any other software on it, then you need more memory and, with a DBMS, the more the merrier, generally speaking.

innodb – What is the best mysql configuration for mysql instance with a lot of databases and lot of tables inside?

I have a MySQL instance with more than 3000 databases. Each database contains more than 200 tables, and the total data across all of them comes to around 100 GB. I am using Windows Server 2012 R2 with 4 GB of RAM. The server's RAM utilization was always very high, so I tried to restart the system, but the restart is not working: it shows "restarting" for a long time and never completes. When I checked the logs, I saw there is a memory issue. What is the best configuration for MySQL with the above architecture, and what do I need to do to make this work without failure in the future?

[Warning] InnoDB: Difficult to find free blocks in the buffer pool (1486 search iterations)! 1486 failed attempts to flush a page! Consider increasing the buffer pool size. It is also possible that in your Unix version fsync is very slow, or completely frozen inside the OS kernel. Then upgrading to a newer version of your operating system may help. Look at the number of fsyncs in diagnostic info below. Pending flushes (fsync) log: 0; buffer pool: 0. 26099 OS file reads, 1 OS file writes, 1 OS fsyncs. Starting InnoDB Monitor to print further diagnostics to the standard output.
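The warning itself points at the first lever: the buffer pool is too small for the working set. With 3000 databases x 200+ tables, the table-open and table-definition caches also churn constantly. A hedged my.ini sketch for a 4 GB Windows host (the exact values are assumptions and would need tuning against actual usage, leaving roughly half the RAM for the OS and the per-connection buffers):

```ini
; [mysqld] section of my.ini - illustrative values for a 4 GB host
innodb_buffer_pool_size = 1G     ; the warning's "consider increasing", without starving the OS
table_open_cache        = 4000   ; 600k+ tables will evict entries constantly at the default
innodb_file_per_table   = ON     ; keeps each table's space reclaimable individually
```

That said, the answer above stands: with this many databases on 4 GB of RAM, more memory (or consolidation of the databases) will help far more than any single setting.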

innodb – MySQL .ibd files not named after tables

In my Linux filesystem, there are many .ibd files (innodb_file_per_table is ON) that are not named after tables. What are these files, and what data do they contain? The question has arisen due to space limitations on the server.

4.2G    ./var/lib/mysql/wift/#sql-ib179-1438865579.ibd
4.2G    ./var/lib/mysql/wift/#sql-ib179-1413146901.ibd
4.2G    ./var/lib/mysql/wift/#sql-ib179-1376672335.ibd
4.2G    ./var/lib/mysql/wift/#sql-ib179-1355103119.ibd
4.2G    ./var/lib/mysql/wift/#sql-ib179-1163730678.ibd
128M    ./var/lib/mysql/wift/customers.ibd
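Files named #sql-ib*.ibd are InnoDB intermediate tables, typically left behind when an in-place ALTER TABLE crashed or was killed partway through, so each 4.2 GB file above is likely an abandoned rebuild of a large table. Both MySQL and MariaDB document a way to drop such an orphan without touching the filesystem: prefix the on-disk name with `#mysql50#` so the server takes the file name literally. A sketch, assuming the files above belong to the `wift` schema:

```sql
-- the #mysql50# prefix tells the server to use the literal on-disk file name
DROP TABLE `wift`.`#mysql50##sql-ib179-1438865579`;
```

If the server refuses because its data dictionary has no entry for the file, deleting the file manually while the server is stopped is the documented fallback, but the DROP should be tried first.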

innodb – Nullable integer as part of composite primary key

I want to store phone numbers as integers and I’ve created this SQL:

DROP TABLE `user_id_phone`;
CREATE TABLE `user_id_phone` (
    `user_id`      int      unsigned NOT NULL,
    `country_code` smallint unsigned NOT NULL,
    `number`       bigint   unsigned NOT NULL,
    `ext`          smallint unsigned NULL,
    PRIMARY KEY (`country_code`, `number`, `ext`)
);

But my server (10.4.14-MariaDB) creates the field ext as NOT NULL:

`ext` smallint(5) unsigned NOT NULL,

if ext is part of the composite primary key:

PRIMARY KEY (`country_code`, `number`, `ext`) 

If ext is not part of the primary key:

PRIMARY KEY (`country_code`, `number`),

the field's DDL is correct after creation:

  `ext` smallint(5) unsigned DEFAULT NULL,
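This is standard behavior rather than a MariaDB quirk: SQL requires every primary-key column to be NOT NULL, so the server silently coerces `ext` when it joins the key. If "no extension" must still be representable, one option is a NOT NULL sentinel value inside the key; a sketch using the table from the question, with 0 as the assumed sentinel:

```sql
-- 0 stands in for "no extension" so the column can participate in the PK
CREATE TABLE `user_id_phone` (
    `user_id`      int      unsigned NOT NULL,
    `country_code` smallint unsigned NOT NULL,
    `number`       bigint   unsigned NOT NULL,
    `ext`          smallint unsigned NOT NULL DEFAULT 0,
    PRIMARY KEY (`country_code`, `number`, `ext`)
);
```

The alternative is to keep `ext` nullable and use a UNIQUE KEY instead of the PRIMARY KEY, accepting that NULLs are not considered equal by a unique index.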


innodb – Which files in the MySQL data directory are safe to delete and recreate with the innodb_file_per_table setting?

I have a brand-new MySQL 8 setup installed on CentOS 7. Unfortunately, I forgot to set innodb_file_per_table. I would like to dump the databases and recreate them with the innodb_file_per_table setting. I have googled, and it says it is safe to delete ibdata1, ib_logfile0 and ib_logfile1. But I can see many other files, for example binlog.000001, binlog.index, .pem files, mysql.ibd, #innodb_temp, undo_001, undo_002, performance_schema, mysql, mysql.sock, mysqlx.sock, sys, etc. Which files should I delete to get a fresh setup with my new settings?
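Selectively deleting files in a MySQL 8 datadir is risky: mysql.ibd holds the transactional data dictionary, and removing it destroys the instance. The usual route is to rebuild the whole datadir from a logical dump. A rough sketch, assuming default CentOS 7 paths and service name, that credentials are configured, and that the dump has been verified before anything is deleted:

```shell
# 1. dump everything while the old server is still up
mysqldump --all-databases --routines --events > /root/full_dump.sql

# 2. stop the server and set the old datadir aside (do NOT delete it yet)
systemctl stop mysqld
mv /var/lib/mysql /var/lib/mysql.old

# 3. set innodb_file_per_table=ON in my.cnf, then initialize a fresh datadir
mysqld --initialize --user=mysql

# 4. start and reload; /var/lib/mysql.old can be removed once everything checks out
systemctl start mysqld
mysql < /root/full_dump.sql
```

Note that `mysqld --initialize` generates a temporary root password in the error log, which has to be reset before the reload in step 4.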

innodb – order by slowing down query with multiple joins and limit/offset on larger result sets

I am having trouble with the following query, which takes quite a long time to process when the result set is large. The limit and offset can change, as this is used with pagination. The range on capture_timestamp can also change, but in this example it finds ALL results (between 0 and 9999999999 – this field is an int UTC timestamp). The ORDER BY seems to take up most of the processing time. The optimizer uses user_id for the table join but then never uses anything for the ordering.

On the logs table I have the following indexes :

PRIMARY : activity_id
user_id : (user_id, capture_timestamp)
capture_timestamp : capture_timestamp (added this to see if by itself would make a difference - it did not)

There are keys set up for all the ON joins.

This particular query, for example, has 2,440,801 results (the logs table itself currently holds 18,332,067 rows), but I am only showing the first 10 sorted by capture_timestamp, and it takes roughly 7 seconds to return the results.


SELECT ...
FROM computers
    JOIN users ON users.computer_id = computers.computer_id
    JOIN logs  ON logs.user_id = users.user_id AND logs.capture_timestamp BETWEEN :cw_date_start AND :cw_date_end
WHERE computers.account_id = :cw_account_id AND computers.status = 1
ORDER BY logs.capture_timestamp DESC
LIMIT 0,10

The ANALYZE output:

    [0] => Array
            [ANALYZE] => {
  "query_block": {
    "select_id": 1,
    "r_loops": 1,
    "r_total_time_ms": 6848.2,
    "filesort": {
      "sort_key": "logs.capture_timestamp desc",
      "r_loops": 1,
      "r_total_time_ms": 431.25,
      "r_limit": 10,
      "r_used_priority_queue": true,
      "r_output_rows": 11,
      "temporary_table": {
        "table": {
          "table_name": "computers",
          "access_type": "ref",
          "possible_keys": ["PRIMARY", "account_id_2", "account_id"],
          "key": "account_id_2",
          "key_length": "4",
          "used_key_parts": ["account_id"],
          "ref": ["const"],
          "r_loops": 1,
          "rows": 294,
          "r_rows": 294,
          "r_total_time_ms": 0.4544,
          "filtered": 100,
          "r_filtered": 100,
          "attached_condition": "computers.`status` = 1"
        },
        "table": {
          "table_name": "users",
          "access_type": "ref",
          "possible_keys": ["PRIMARY", "unique_filter"],
          "key": "unique_filter",
          "key_length": "4",
          "used_key_parts": ["computer_id"],
          "ref": ["db.computers.computer_id"],
          "r_loops": 294,
          "rows": 1,
          "r_rows": 3.415,
          "r_total_time_ms": 0.7054,
          "filtered": 100,
          "r_filtered": 100,
          "using_index": true
        },
        "table": {
          "table_name": "logs",
          "access_type": "ref",
          "possible_keys": ["user_id", "capture_timestamp"],
          "key": "user_id",
          "key_length": "4",
          "used_key_parts": ["user_id"],
          "ref": ["db.users.user_id"],
          "r_loops": 1004,
          "rows": 424,
          "r_rows": 2431.1,
          "r_total_time_ms": 4745.3,
          "filtered": 100,
          "r_filtered": 100,
          "index_condition": "logs.capture_timestamp between '0' and '9999999999'"
        }
      }
    }
  }
}


Is there anything I can do here to speed this up? When the result set is smaller, everything is pretty much immediate, though I guess that is because there isn't as much sorting to do.
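The ANALYZE output above confirms where the time goes: the logs lookup via the user_id key reads ~2.4 M rows (r_total_time_ms 4745.3) before the top-10 sort. One commonly suggested option is to denormalize account_id into logs so that ORDER BY ... LIMIT can be satisfied by walking a single index backwards instead of sorting the whole result. A hedged sketch using the table names from the question; the new column and index are assumptions, and the column would need to be backfilled from computers via users before the index is useful:

```sql
-- hypothetical denormalization: lets InnoDB read the newest matching rows directly
ALTER TABLE logs
    ADD COLUMN account_id int(10) unsigned NOT NULL,
    ADD KEY account_ts (account_id, capture_timestamp);

SELECT ...            -- same select list as before
FROM logs
JOIN users     ON users.user_id = logs.user_id
JOIN computers ON computers.computer_id = users.computer_id AND computers.status = 1
WHERE logs.account_id = :cw_account_id
  AND logs.capture_timestamp BETWEEN :cw_date_start AND :cw_date_end
ORDER BY logs.capture_timestamp DESC
LIMIT 0, 10;
```

The trade-off is write-side: every insert into logs must carry the account_id, and the extra index adds maintenance cost on an 18 M-row table.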

Additions :

CREATE TABLE `computers` (
  `computer_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `account_id` int(10) unsigned NOT NULL,
  `status` tinyint(1) unsigned NOT NULL,
  `version` varchar(10) COLLATE utf8_unicode_ci NOT NULL,
  `os` tinyint(1) unsigned NOT NULL,
  `computer_uid` varchar(64) COLLATE utf8_unicode_ci NOT NULL,
  `computer_name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `last_username` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `uninstall` tinyint(1) unsigned NOT NULL,
  `capture_timestamp` int(10) unsigned NOT NULL,
  PRIMARY KEY (`computer_id`),
  UNIQUE KEY `account_id_2` (`account_id`,`computer_uid`),
  KEY `account_id` (`account_id`,`status`),
  CONSTRAINT `computers_ibfk_1` FOREIGN KEY (`account_id`) REFERENCES `accounts` (`account_id`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=14362124 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci    

CREATE TABLE `users` (
  `user_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `computer_id` int(10) unsigned NOT NULL,
  `username` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `changed` tinyint(1) unsigned NOT NULL,
  `timestamp` int(10) unsigned NOT NULL,
  `ctimestamp` int(10) unsigned NOT NULL,
  `stimestamp` int(10) unsigned NOT NULL,
  UNIQUE KEY `unique_filter` (`computer_id`,`username`),
  CONSTRAINT `users_ibfk_1` FOREIGN KEY (`computer_id`) REFERENCES `computers` (`computer_id`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci

CREATE TABLE `logs` (
  `activity_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `user_id` int(10) unsigned NOT NULL,
  `event_title` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `event_target` text COLLATE utf8_unicode_ci NOT NULL,
  `capture_timestamp` int(10) unsigned NOT NULL,
  `timestamp` int(10) unsigned NOT NULL,
  `demo` tinyint(1) unsigned NOT NULL DEFAULT 0,
  PRIMARY KEY (`activity_id`) USING BTREE,
  KEY `timestamp` (`timestamp`,`demo`),
  KEY `user_id` (`user_id`,`capture_timestamp`) USING BTREE,
  KEY `capture_timestamp` (`capture_timestamp`)
) ENGINE=InnoDB AUTO_INCREMENT=444156934 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci

How to safely mirror an InnoDB table in MySQL?

I have one MySQL InnoDB table with heavy traffic on row updates. The table is also frequently used for heavy SELECT queries (especially GROUP BY for summary reports). As I understand it, these heavy SELECT queries interfere with or reduce update performance. So my idea is to mirror this table, so that the heavy SELECT queries run against the mirror (a 15-minute delay is still acceptable).

FYI, this table is approximately 10 GB. I have used a MySQL event (scheduled every 15 minutes) to copy this table to another with an INSERT INTO ... SELECT query. It actually needs more than 60 seconds for the copy, and update queries are still impacted while this event runs.

So, is there any best/common practice for mirroring a MySQL table with minimal impact on current queries (especially updates) on the master table?

Note: I want to do this on the same server.
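Instead of re-copying all 10 GB every 15 minutes, a common pattern is an incremental copy keyed on a last-modified column, so each run only touches rows changed since the previous one. A sketch, assuming a hypothetical `updated_at` TIMESTAMP column (maintained with ON UPDATE CURRENT_TIMESTAMP), identical column layouts between the two tables, a primary key on the source, and the event scheduler enabled:

```sql
-- copy only rows modified since the last run; updated_at is an assumed column
CREATE EVENT IF NOT EXISTS refresh_report_copy
ON SCHEDULE EVERY 15 MINUTE
DO
  REPLACE INTO report_copy
  SELECT * FROM live_table
  WHERE updated_at >= NOW() - INTERVAL 16 MINUTE;  -- 1-minute overlap for safety
```

Because only the changed slice is read and written, the lock footprint on live_table shrinks from a 60+ second scan to a short range read, which should reduce the impact on concurrent updates considerably.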