mysql 5.6 – Table is full even with innodb_file_per_table

I am trying to create an index on my table using an ALTER query.
My my.cnf file:

innodb_data_home_dir = /usr/local/mysql5/data
innodb_data_file_path = ibdata1:60021538816;ibdata2:300M;ibdata3:30000M;ibdata4:10000M;ibdata5:10000M:autoextend
innodb_buffer_pool_instances = 3
innodb_buffer_pool_size = 3G
innodb_additional_mem_pool_size = 8M
# Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 256M
innodb_additional_mem_pool_size = 128M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 2
innodb_lock_wait_timeout = 100
innodb_file_per_table
innodb_flush_method=O_DIRECT

Still, every time I run my ALTER query

alter table user add unique index idx_emailHash (emailHash);

it fails with Table 'user' is full.
What am I missing? I am using MySQL 5.6.

Some more info

(root@db data)# ll | grep user
-rw-rw----. 1 mysql mysql       19551 Jun 10 14:33 user.frm
-rw-rw----. 1 mysql mysql 28412215296 Jun 10 22:58 user.ibd

(root@db data)# ll | grep ibd
-rwxr-xr-x. 1 mysql mysql 60021538816 Jun 10 22:58 ibdata1
-rw-rw----. 1 mysql mysql   314572800 Jun 10 22:20 ibdata2
-rw-rw----. 1 mysql mysql 31457280000 Jun 10 22:33 ibdata3
-rw-rw----. 1 mysql mysql 10485760000 Jun 10 22:51 ibdata4
-rw-rw----. 1 mysql mysql 10485760000 Jun 10 22:51 ibdata5
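
One thing I checked, in case it is useful context: a common cause of "Table is full" even with innodb_file_per_table is simply running out of free disk space mid-operation, since a copying ALTER (ALGORITHM=COPY) writes a full second copy of the table before swapping it in. A rough sketch of the arithmetic, using the user.ibd size shown above as a fallback (paths are from this setup; adjust for yours):

```shell
# Rough free-space check before the ALTER. The datadir path and the
# fallback size are the ones from this question; adjust as needed.
DATADIR=/usr/local/mysql5/data
ibd_bytes=$(stat -c %s "$DATADIR/user.ibd" 2>/dev/null || echo 28412215296)

# A copying ALTER rebuilds the table as a complete second copy, so
# budget at least the table's own size in free space (plus temp sort
# files in tmpdir for the index build).
need_gb=$(( ibd_bytes / 1024 / 1024 / 1024 + 1 ))
echo "free space needed: at least ${need_gb} GB"
df -h "$DATADIR" 2>/dev/null || true
```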

mysql – Transition to innodb_file_per_table for 30+ databases

Currently I have a MySQL Server with innodb_file_per_table = 0

Based on a number of factors, it seems to make sense for me to change that to innodb_file_per_table = 1

I understand that to make this all work I need to

  1. Use mysqldump to export all databases into individual files
  2. Shutdown mysql
  3. Change the innodb_file_per_table setting
  4. Restart mysql
  5. Import each database

Question is, can I do the following

Day #1

  1. Shutdown mysql
  2. Change the innodb_file_per_table setting
  3. Restart mysql
  4. Export some databases
  5. Import those databases back into mysql

Day #2

  1. Export some databases
  2. Import those databases back into mysql

Until I’ve exported and re-imported all of the databases?

I’m assuming that during the transition some databases will continue to use the ibdata1 file while the exported and re-imported database will use their own datastores. Will having a mix of datastores cause issues or am I OK doing a phased transition?
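
Roughly what I have in mind for each day's batch, sketched as a dry run (the database names are placeholders, and this assumes innodb_file_per_table is already ON; remove the echos to actually run the commands):

```shell
# Dry-run sketch of one day's batch: dump, drop, and re-import each
# database so its tables land in their own .ibd files. Database names
# are placeholders; remove the 'echo's to execute for real.
for db in sales hr reporting; do
  echo "mysqldump --single-transaction --routines $db > /backup/$db.sql"
  echo "mysql -e 'DROP DATABASE $db; CREATE DATABASE $db;'"
  echo "mysql $db < /backup/$db.sql"
done
```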

innodb – What files of Mysql lib directory safe to be deleted and to be recreated with innodb_file_per_table settings?

I have a brand new MySQL 8 setup installed on CentOS 7. Unfortunately I forgot to set innodb_file_per_table. I would like to dump the databases and recreate them with innodb_file_per_table enabled. I have googled, and the advice says it is safe to delete ibdata1, ib_logfile0, and ib_logfile1. But I can see many other files in the data directory, for example binlog.000001, binlog.index, .pem files, mysql.ibd, #innodb_temp, undo_001, undo_002, performance_schema, mysql, mysql.sock, mysqlx.sock, sys, etc. My question is: which files should I delete to get a fresh setup with my new settings?
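
To frame what I'm trying to achieve: instead of picking individual files to delete, the alternative I was considering is a full dump and re-initialization, sketched here as a dry run (the datadir path is an assumption for a default install; remove the echos to actually run it):

```shell
# Dry-run sketch: dump everything, wipe the datadir, and let mysqld
# recreate it under the new settings. The datadir is an assumed default
# location; remove the 'echo's to execute for real.
DATADIR=/var/lib/mysql
echo "mysqldump --all-databases --routines --events > /root/all.sql"
echo "systemctl stop mysqld"
echo "rm -rf $DATADIR/*"
echo "mysqld --initialize-insecure --user=mysql --datadir=$DATADIR"
echo "systemctl start mysqld"
echo "mysql < /root/all.sql"
```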

mysql – Row size too large, even with innodb_file_per_table

I am hitting a row-size error for which I could not find a solution. The error messages are all variations of "Row size too large":

Row size too large (> 8126). Changing some columns to TEXT or BLOB may help. In current row format, BLOB prefix of 0 bytes is stored inline.
SQL error: 1118

We may be pushing the limits of what the database, as currently set up, can handle. I have seen lots of recommendations for similar situations, but they did not work for us.

I know that for very wide rows, vertical partitioning is the better solution. At the moment, restructuring the database is not an option. Can I fix this in the configuration? Some answers to similar questions give the impression that I can.

In many reports I found that people were using lots of VARCHAR columns, which eventually pushed up the row size, and the recommendation was to switch to TEXT or BLOB. Some of our problematic tables already use LONGTEXT; I tried converting them all to TINYTEXT, and I still got the row-size error.
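
As I understand the arithmetic behind that recommendation (a back-of-the-envelope sketch; the 768-byte and 20-byte figures are the documented in-row costs for COMPACT-format column prefixes versus DYNAMIC-format off-page pointers):

```shell
# Why many long columns blow the ~8126-byte in-row limit on a default
# 16K page: with the COMPACT row format, each TEXT/BLOB column keeps a
# 768-byte prefix in the row; with DYNAMIC, a column stored fully
# off-page costs only a 20-byte pointer in the row.
cols=11
compact=$(( cols * 768 ))   # 8448 bytes: already over the ~8126 limit
dynamic=$(( cols * 20 ))    # 220 bytes: comfortably under it
echo "COMPACT worst case: $compact bytes"
echo "DYNAMIC worst case: $dynamic bytes"
```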


Here is what I've tried, unsuccessfully, on several servers, some of which were fresh MySQL installations with brand-new databases.

I always made sure the following were set before creating the databases:

innodb_file_format = Barracuda
innodb_file_per_table = 1

Then a few of the different changes I've tried:

  • Increasing the log file size: innodb_log_file_size = 8G
  • ROW_FORMAT=COMPRESSED on a single table – I could not get that to work
  • innodb_default_row_format = dynamic globally
  • internal_tmp_disk_storage_engine = MyISAM – suggested in a bug report
  • Increasing the page size: innodb_page_size = 64K – suggested in the documentation

I also found a bug report suggesting that the error itself is wrong and should have been fixed, but I'm not sure whether that applies here. Maybe we are actually fine and the error can be ignored, since one comment says it is not an error, though I'm not sure what "not an error" is supposed to mean:

[3 Jun 2014 19:28] Daniel Price

The "Row size too large (> 8126)" error is not an error.

Why was the default setting of innodb_file_per_table changed?

Is there a drawback to using file-per-table tablespaces? I wonder why the default value of innodb_file_per_table was changed in MySQL 5.7.

The documentation shows that the default value used to be "OFF", so there must be a reason:
https://dev.mysql.com/doc/refman/5.5/de/innodb-parameters.html#sysvar_innodb_file_per_table

Reference: Is innodb_file_per_table advisable?

innodb – Amazon MySQL 5.6 RDS with innodb_file_per_table = 0: disk-space side effects of dropping and recreating the same database

I had an event last night where I dropped a database and then immediately created the same database again (and filled it with minimal data). About 2 hours later, I received an alarm that the MySQL instance was low on disk space. It looks like I lost more than 20 GB of space.

I understand that with innodb_file_per_table = 0 dropping a database does not free up space, but this seemed to have an even stranger effect: the 20 GB was not lost all at once, but gradually over about an hour.

[RDS free-storage graph]

Is that normal?