MySQL: Batch-Drop Tables

Dropping all tables of a given MySQL database is easy. We fetch all table names from information_schema, concatenate them into a series of “DROP TABLE” commands, and pipe them to mysql for execution:

mysql -u root -p$MYSQL_PW -B -N -e "select concat('drop table $TARGETSCHEMA.',table_name,';') from information_schema.tables where table_schema='$TARGETSCHEMA'" information_schema | mysql -u root -p$MYSQL_PW $TARGETSCHEMA

Replace $MYSQL_PW with your MySQL root password and $TARGETSCHEMA with your target database. Instead of root you can also use another user account with the necessary privileges.

-B enables batch mode (it can also be omitted here).
-N suppresses the column names in the result set.
-e executes the following SQL.
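To make the pipeline's shape visible without a live server, here is a sketch of what the first mysql call produces for a hypothetical schema “exampledb” with three tables; the printf loop stands in for the information_schema query:

```shell
# Simulate the output of the first mysql call for a hypothetical schema
# "exampledb" with tables t1..t3 (printf stands in for the real query).
TARGETSCHEMA=exampledb
DROPS=$(for t in t1 t2 t3; do
  printf 'drop table %s.%s;\n' "$TARGETSCHEMA" "$t"
done)
echo "$DROPS"   # this text is what gets piped into the second mysql call
```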

A handy shell script for our purpose could look like this:

#!/bin/bash
# drop all schema-tables
# 2014-07-11 Marc Tempel

if [ $# -ne 2 ]
then
	echo "usage: $0 <targetschema> <mysql_root_pwd>"
	exit 1
fi

export MYSQL_PW=$2
TARGETSCHEMA=$1
TCOUNT=`mysql -u root -p$MYSQL_PW -B -N -e "select count(*) from information_schema.tables where table_schema='$TARGETSCHEMA'" information_schema`

read -p "Dropping all $TCOUNT tables of schema $TARGETSCHEMA. This can't be undone! Press ENTER to continue or CTRL-C to abort."

mysql -u root -p$MYSQL_PW -B -N -e "select concat('drop table $TARGETSCHEMA.',table_name,';') from information_schema.tables where table_schema='$TARGETSCHEMA'" information_schema | mysql -u root -p$MYSQL_PW $TARGETSCHEMA

#EOF
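One hedged caveat: if the schema uses InnoDB foreign keys, individual DROPs can fail because another table still references them. A sketch of a workaround is to wrap the generated statements in FOREIGN_KEY_CHECKS toggles; the printf below stands in for the real information_schema query, and the schema name is hypothetical:

```shell
# Wrap the generated DROP statements in FOREIGN_KEY_CHECKS toggles so
# InnoDB foreign keys don't block the drops. printf stands in for the
# real "mysql -B -N -e ..." call against information_schema.
SCRIPT=$( { echo 'SET FOREIGN_KEY_CHECKS=0;'
            printf 'drop table exampledb.%s;\n' t1 t2
            echo 'SET FOREIGN_KEY_CHECKS=1;'; } )
echo "$SCRIPT"   # in the real script: pipe this into mysql
```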

MySQL: Dump tables with wildcard

Dumping a certain table with mysqldump goes like this:

mysqldump -u root -pPWD exampledb table1 > exampledb_table1.dump

This command exports the table “table1” from DB exampledb and stores it in the file “exampledb_table1.dump”.

To export more than one table we pass their names to this command:

mysqldump -u root -pPWD exampledb table1 table2 table3 > exampledb_tables.dump

If we need to export several tables, e.g. those with a given prefix (like “smw_”), we may want to use a wildcard to save us from stringing together all the table names. This can be done like this:

mysqldump -u root -pPWD exampledb `echo "show tables like 'smw_%';" | mysql -u root -pPWD exampledb | sed '/Tables_in/d'` > exampledb.dump

Edit 2016-11-13:

Or a little smarter:

mysqldump -u root -pPWD exampledb `mysql -u root -pPWD -B -N -e "show tables like 'smw_%'" exampledb` > exampledb.dump

The “-B” and “-N” switches suppress the table format and the column header, and “-e” executes the given statement “show tables…”.
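The same pattern can be written with $() instead of backticks. In the sketch below, printf stands in for the mysql call so the mechanism is visible without a server: the shell's word splitting turns each returned name into its own argument for mysqldump.

```shell
# $() instead of backticks; printf stands in for:
#   mysql -u root -pPWD -B -N -e "show tables like 'smw_%'" exampledb
TABLES=$(printf 'smw_pages\nsmw_props\n')
set -- $TABLES          # unquoted on purpose: split names into words
echo "$# tables: $*"    # these words would be passed on to mysqldump
```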

MySQL: Batch-Renaming of Tables

Recently I had the following problem: I had a database (let’s call it “exampledb”) whose table names all had a constant prefix like “DUMMYTEXT_”. To get rid of this prefix I had to rename every table via

"ALTER TABLE DUMMYTEXT_TAB1 RENAME TO TAB1"

There were ~100 tables and this would have been a very annoying task doing it manually for every table.
Luckily there is a way to do the renaming in a batch for multiple tables, which makes it a matter of a few minutes.

Every MySQL server hosts a schema that contains the metadata of all databases on that server – the INFORMATION_SCHEMA. In that schema there is a table called “TABLES” which gives us the information we need: the names of all my to-be-renamed exampledb tables. So I created the appropriate SQL statement, piped it on a bash shell into the “mysql” executable and stored the result in a file called “rename.sql”:

echo "select concat('alter table ',table_name,' rename to ', substr(table_name,11),';') from information_schema.TABLES where table_schema='exampledb';" | mysql -u root -pPWD exampledb > ./rename.sql

In the above command the “SUBSTR(table_name,11)” cuts off the first 10 characters (“DUMMYTEXT_”) of every table name. The CONCAT builds all the necessary rename statements from the remaining string. The “mysql -u root -pPWD exampledb” logs into DB “exampledb” with user “root” using password “PWD” (mind the omitted whitespace behind the “-p” switch). The output of that command is then written to the file “./rename.sql”.
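A quick sanity check of the offset, using the shell equivalent of SUBSTR(table_name,11): the prefix “DUMMYTEXT_” is exactly 10 characters long, so the new name starts at character 11.

```shell
# cut -c11- keeps everything from character 11 on, mirroring SUBSTR(...,11)
NEWNAME=$(echo 'DUMMYTEXT_TAB1' | cut -c11-)
echo "$NEWNAME"   # TAB1
```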

The first line of the resulting file “rename.sql” contains the SQL statement itself, echoed as the column header.
To remove that line from the rename file one can use “sed”:

sed -i '/^select/d' ./rename.sql

After removing that line I had a rename-command-file that I now could feed into my database to correct my tablenames:

mysql -u root -pPWD exampledb < ./rename.sql

edit 2013-04-02
Here is a smarter approach as a one-liner:

E:\APPS\xampp\mysql\bin>mysql -u root -e "show tables;" test
+----------------+
| Tables_in_test |
+----------------+
| dummytext_tab1 |
| dummytext_tab2 |
| dummytext_tab3 |
| dummytext_tab4 |
| dummytext_tab5 |
+----------------+

E:\APPS\xampp\mysql\bin>mysql -u root -e "select concat('alter table ',table_name,' rename to ',
substr(table_name,11),';') from information_schema.tables where table_schema='test';" --column-names
=false test | mysql -u root test

E:\APPS\xampp\mysql\bin>mysql -u root -e "show tables;" test
+----------------+
| Tables_in_test |
+----------------+
| tab1           |
| tab2           |
| tab3           |
| tab4           |
| tab5           |
+----------------+

E:\APPS\xampp\mysql\bin>

With this command I can do all the above without resorting to “echo” and “sed” in one line and without using an auxiliary command-file:

mysql -u root -e "select concat('alter table ',table_name,' rename to ', substr(table_name,11),';') from information_schema.tables where table_schema='test';" --column-names=false test | mysql -u root test

Instead of “echo” the “-e” switch executes the given SQL, and in place of “sed” I suppress the printing of the column label with “--column-names=false”. So I can pipe the resulting rename SQL directly back to mysql and do the renaming.
Please note that in this example the name of my database is “test” – not “exampledb” as above – and that I omitted the “-p” switch, as the root user on this test instance runs without a password.
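As a further idea (a sketch I have not run against a live server): MySQL can rename several tables in one statement (“RENAME TABLE a TO b, c TO d”), so GROUP_CONCAT could build a single combined statement instead of ~100 separate ones:

```sql
-- Build one combined RENAME TABLE statement for all tables of schema 'test'
select concat('rename table ',
              group_concat(table_name, ' to ', substr(table_name,11)
                           separator ', '),
              ';')
from information_schema.tables
where table_schema = 'test';
```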

MySQL: concatenating strings from multiple rows in one table-field

If you need to concatenate strings from multiple rows into one single field via a SQL query, MySQL gives you the useful GROUP_CONCAT() function.

First we create a table and insert some values, with ID as our designated “group by” column:

mysql> create table t1 (id int, txt varchar(3));
Query OK, 0 rows affected (0.00 sec)

mysql> insert into t1 values (1,'ABC'),(2,'def'),(1,'GHI'),(2,'jkl'),(1,'mno'),(2,'pqr');
Query OK, 6 rows affected (0.00 sec)
Records: 6  Duplicates: 0  Warnings: 0

mysql> select * from t1;
+------+------+
| id   | txt  |
+------+------+
|    1 | ABC  |
|    2 | def  |
|    1 | GHI  |
|    2 | jkl  |
|    1 | mno  |
|    2 | pqr  |
+------+------+
6 rows in set (0.00 sec)

mysql>

Then we do a SELECT on that table and concatenate the group-specific strings with GROUP_CONCAT():

mysql> select id, group_concat(txt) from t1 group by id;
+------+-------------------+
| id   | group_concat(txt) |
+------+-------------------+
|    1 | ABC,GHI,mno       |
|    2 | def,jkl,pqr       |
+------+-------------------+
2 rows in set (0.01 sec)

mysql>

We can also define the string separator with the SEPARATOR keyword:

mysql> select id, group_concat(txt SEPARATOR ':') from t1 group by id;
+------+---------------------------------+
| id   | group_concat(txt SEPARATOR ':') |
+------+---------------------------------+
|    1 | ABC:GHI:mno                     |
|    2 | def:jkl:pqr                     |
+------+---------------------------------+
2 rows in set (0.00 sec)

mysql>
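One caveat worth knowing (per the MySQL documentation; the statement below is a sketch I have not run here): the result of GROUP_CONCAT() is truncated to the system variable group_concat_max_len, which defaults to 1024 bytes. For long lists, raise the limit for the session first:

```sql
-- GROUP_CONCAT output is capped at @@group_concat_max_len (default 1024);
-- raise the limit for this session before concatenating long groups:
SET SESSION group_concat_max_len = 1000000;
select id, group_concat(txt separator ':') from t1 group by id;
```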

Oddities about MySQL

MySQL silently truncates data that is longer than the specified field length:

mysql> create table t2 (col1 varchar(5));
Query OK, 0 rows affected (0.00 sec)

mysql> insert into t2 values ('12345');
Query OK, 1 row affected (0.00 sec)

mysql> insert into t2 values ('123456789');
Query OK, 1 row affected, 1 warning (0.00 sec)

mysql> select * from t2;
+-------+
| col1  |
+-------+
| 12345 |
| 12345 |
+-------+
2 rows in set (0.00 sec)

mysql>

Let’s try this on an Oracle-Database:

SQL>create table t2 (col1 varchar2(5));

Table created.

SQL>insert into t2 values ('12345');

1 row created.

SQL>insert into t2 values ('123456789');
insert into t2 values ('123456789')
                       *
ERROR at line 1:
ORA-01401: inserted value too large for column

SQL>select * from t2;

COL1
-----
12345

SQL>
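For completeness (a sketch, not run here): MySQL can be told to behave like Oracle in this respect by enabling a strict SQL mode, which turns the silent truncation into a hard error:

```sql
-- With a strict sql_mode the oversized INSERT fails instead of warning:
SET SESSION sql_mode = 'STRICT_ALL_TABLES';
insert into t2 values ('123456789');
-- expected to fail with "Data too long for column 'col1'"
```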

In MySQL you can’t UPDATE a table when that same table is referenced in a subquery in the WHERE condition of the statement:

mysql> create table t1 (c1 varchar(10),c2 varchar(10));
Query OK, 0 rows affected (0.00 sec)

mysql> insert into t1 values ('abc','abcs@ssdef');
Query OK, 1 row affected (0.00 sec)

mysql> insert into t1 values ('abc','abcs@ssdef');
Query OK, 1 row affected (0.00 sec)

mysql> insert into t1 values ('abc','abcsssdef');
Query OK, 1 row affected (0.00 sec)

mysql> insert into t1 values ('abc','abcs@ssdef');
Query OK, 1 row affected (0.00 sec)

mysql> update t1 set c2='xyz' where c2 in (select c2 from t1 where instr(c2,'@')>0);
ERROR 1093 (HY000): You can't specify target table 't1' for update in FROM clause
mysql> select * from t1;
+------+------------+
| c1   | c2         |
+------+------------+
| abc  | abcs@ssdef |
| abc  | abcs@ssdef |
| abc  | abcsssdef  |
| abc  | abcs@ssdef |
+------+------------+
4 rows in set (0.00 sec)

mysql>

This behaviour is officially documented here:
http://dev.mysql.com/doc/refman/5.0/en/subquery-errors.html
And it is independent of the storage engine used (tested with MyISAM and InnoDB).

What about Oracle? Let’s see:

SQL>create table t1 (c1 varchar2(10),c2 varchar2(10));

Table created.

SQL>insert into t1 values ('abc','abcs@ssdef');

1 row created.

SQL>insert into t1 values ('abc','abcs@ssdef');

1 row created.

SQL>insert into t1 values ('abc','abcsssqw');

1 row created.

SQL>insert into t1 values ('abc','abcs@ssdef');

1 row created.

SQL>select * from t1;

C1         C2
---------- ----------
abc        abcs@ssdef
abc        abcs@ssdef
abc        abcsssqw
abc        abcs@ssdef

SQL>update t1 set c2='xyz' where c2 in (select c2 from t1 where instr(c2,'@')>0);

3 rows updated.

SQL>select * from t1;

C1         C2
---------- ----------
abc        xyz
abc        xyz
abc        abcsssqw
abc        xyz

SQL>
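For the MySQL side there is a commonly used workaround (a sketch, untested here): wrapping the subquery in a derived table makes MySQL materialize it as a temporary table first, so the UPDATE no longer references t1 directly in the FROM clause:

```sql
-- Wrap the subquery in a derived table; MySQL materializes it first,
-- so the UPDATE may reference t1 again:
update t1 set c2='xyz'
where c2 in (select c2
             from (select c2 from t1 where instr(c2,'@')>0) as tmp);
```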

More astonishing oddities about MySQL can be found here:
http://www.databasejournal.com/features/mysql/article.php/3519116/MySQL-Oddities.htm. It’s really worth reading!

Most of these oddities depend on the given “SQL Server Mode” and appear with MySQL’s default settings. By changing the “SQL Server Mode” one can make MySQL behave like a “traditional” RDBMS. More info on that here: http://dev.mysql.com/doc/refman/5.5/en/server-sql-mode.html

Locked rows in mediawiki-db

A few times recently I had the problem that I got a timeout from the wiki DB when requesting a certain page:
 
“Lock wait timeout exceeded; try restarting the transaction”

Querying the MySQL DB for running processes with “SHOW PROCESSLIST” showed one or more sessions waiting for an exclusive lock on a data row while trying to run an “UPDATE” or “SELECT … FOR UPDATE” on the “page” table.
The lock is required because the page_counter column is updated every time a page is requested.
Normally the update should be very quick, but here it seemed as if another session still held a lock on the data row of my requested page and wouldn’t release it.

Blogger Venu Anuganti wrote about an odd locking problem with MySQL’s InnoDB engine, which I verified on a test DB:

When a transaction holding a lock is stopped by an error (e.g. “ERROR 1062 (23000): Duplicate entry ’10’ for key ‘PRIMARY’”), the lock is not released instantly but held until a commit or rollback is issued. This causes other sessions that request the same lock to wait until the lock is released or a timeout is hit.

According to the docs this is intended behavior:

“A duplicate-key error rolls back the SQL statement, if you have not specified the IGNORE option in your statement.”

Note: This states that not the entire transaction is rolled back, but just the failed statement. The handling of such an error is left to the application.

Also interesting in this context (same link as quote above):

“A lock wait timeout causes InnoDB to roll back only the single statement that was waiting for the lock and encountered the timeout. (Until MySQL 5.0.13 InnoDB rolled back the entire transaction if a lock wait timeout happened. You can restore this behavior by starting the server with the --innodb_rollback_on_timeout option, available as of MySQL 5.0.32.)”

As I’m using InnoDB for the wiki-tables I think it’s likely that this is the cause of my errors.

Until now I know neither which piece of code is responsible for the error nor how to automatically detect and fix such errors. All I could do to fix it was to gradually kill the oldest inactive DB sessions (“SHOW PROCESSLIST”, “KILL session_id”) of the wiki DB user until my test statement (“SELECT * FROM PAGE WHERE PAGE_ID=108 FOR UPDATE”) ran through.
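On servers that expose the InnoDB metadata tables in the information_schema (MySQL 5.1 with the InnoDB plugin and later), the guessing with “SHOW PROCESSLIST” can be narrowed down. A sketch, untested on my wiki instance:

```sql
-- List InnoDB transactions running for more than 60 seconds, together
-- with the session id one would pass to KILL:
select trx_mysql_thread_id, trx_started, trx_state
from information_schema.innodb_trx
where trx_started < now() - interval 60 second;
-- then, if appropriate: KILL <trx_mysql_thread_id>;
```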

Mediawiki: mysteries of charsets

In the past whenever I had to install a new wiki I always wondered which charset-option to choose:

I thought it would be best to let the DB know about the charset of the stored data so that a charset conversion is always possible. Hence I chose “MySQL 4.1/5.0 UTF-8” – and for new (empty) wikis there was never a problem.

The problems arose when I had to import DB dumps from older wikis. Mainly the page titles were often erroneous when they contained “special chars” like ü, ö, ä, ß. At first it seemed simple: I checked the table definitions from the old wiki, which were latin1 (or ISO-8859-1 to be ISO-conform), then exported using “--default-character-set=latin1” and imported with “--default-character-set=utf8”. But in vain: page titles scrambled.
Usually I exported and imported back and forth with various charset settings until it worked – and when it worked I almost never had a clue why.

I connect from a Windows client through PuTTY to a MySQL DB on a Linux server. And every element along the line has its own charset definition – MySQL even several (too many):

Server characterset:    latin1
Db     characterset:    latin1
Client characterset:    latin1
Conn.  characterset:    latin1

And every MySQL table can even have a different charset for each of its columns. What a mess! I had virtually no chance to tell the REAL charset by looking into the dump file – because everything between my eyes and the dump could conceal it with its own settings.

By accident I discovered that the Linux command “less” shows any special, non-ASCII chars in a dump file as hex codes. This was the point when the fog cleared. Now I was able to tell in which charset the data was encoded by looking up the hex codes of known characters in code tables for latin1 and utf-8. And guess what: it turned out that the supposed latin1 tables actually contained utf-8-encoded data!
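The byte patterns are easy to check without “less”, too: a “ü” is the single byte 0xfc in latin1 but the two-byte sequence 0xc3 0xbc in utf-8. A small shell demo, where the octal escapes write a UTF-8 “ü”:

```shell
# Dump the raw bytes of a UTF-8 "ü" (octal \303\274 = hex c3 bc);
# a latin1-encoded dump would show the single byte fc instead.
HEX=$(printf '\303\274' | od -An -tx1 | tr -d ' \n')
echo "$HEX"   # c3bc
```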

There are two ways to get this data uncorrupted into the new db:

1.) The latin1-way:

Export the data with “--default-character-set=latin1” (utf8 is the default for mysqldump). The DB believes the data is already latin1 and won’t change it – actually doing a utf-8 export. To be safe it is best to export with “--default-character-set=binary”, as this always dumps with no conversion.

 mysqldump -u USERNAME -pPWD --default-character-set=binary dbname > dbname.dump

Knowing the data is utf-8, we import using “--default-character-set=utf8”:

 mysql -u USERNAME -pPWD --default-character-set=utf8 dbname < dbname.dump

As “latin1” is the charset given in the CREATE statements of the tables in the dump file, the tables will be created accordingly. This forces the DB to do an automatic conversion from utf-8 to latin1 during the import. The result is latin1 data in latin1 tables.

2.) The utf-8-way:

The export (and the import as well) is now a two-step affair.

First we export only the metadata (the CREATE statements) of the DB using “--no-data”:

 mysqldump -u USERNAME -pPWD --no-data --skip-set-charset dbname > dbname_meta.dump 

In order to change the charset in each of the DB’s CREATE statements, we replace every occurrence of “latin1” in the dump file with “utf8”:

 sed -i 's/latin1/utf8/g' dbname_meta.dump 

But beware that this is a blind shot: the sed command replaces EVERY occurrence, everywhere in the dump. We do this under the assumption that it hits the right places. And as we have no real data in the dump, which would be modified as well, it should be all right.
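A slightly less blind variant (assuming the usual “DEFAULT CHARSET=latin1” table options that mysqldump emits) restricts the replacement to the table options. Demonstrated here on an inline sample line instead of a file:

```shell
# Replace latin1 only inside the table options, not everywhere; the
# sample line stands in for a CREATE TABLE footer from the dump file.
LINE=') ENGINE=MyISAM DEFAULT CHARSET=latin1;'
FIXED=$(echo "$LINE" | sed 's/DEFAULT CHARSET=latin1/DEFAULT CHARSET=utf8/')
echo "$FIXED"
```

On a real dump this would be `sed -i 's/DEFAULT CHARSET=latin1/DEFAULT CHARSET=utf8/g' dbname_meta.dump`.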

Second, we export just the data – using “--default-character-set=binary” to prevent a “double utf-8 conversion” and “--no-create-info” to export only the row data:

 mysqldump -u USERNAME -pPWD --default-character-set=binary --no-create-info --skip-set-charset dbname > dbname_data.dump 

We now have a metadata file with utf8-enabled table CREATEs and a data dump in utf-8 encoding. So the import should be obvious:

We create the tables by importing the metadump in an empty db:

 mysql -u USERNAME -pPWD dbname < dbname_meta.dump 

…and then import the data:

 mysql -u USERNAME -pPWD --default-character-set=utf8 dbname < dbname_data.dump 

And finally we have utf8 tables with utf-8 data in them.