Archive for March, 2011

[todo] kill a mysql connection which has slept more than a certain time

March 30th, 2011

Here is a script to kill all connections: http://www.coderholic.com/mysql-connection-killer/.

But I need a script that kills only the connections that have been sleeping longer than a given period of time.
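
In the meantime, here is a minimal sketch of the idea (assuming MySQL >= 5.1, where INFORMATION_SCHEMA.PROCESSLIST is available; the 300-second threshold is just an example). It only generates the KILL statements, so you can review them before running them:

-- list a KILL statement for every connection that has slept > 300 s
SELECT CONCAT('KILL ', id, ';') AS kill_stmt
FROM information_schema.processlist
WHERE command = 'Sleep' AND time > 300;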

Also, there are some timeout variables that can be set so that idle connections are dropped sooner:

mysql> show variables like '%timeout%';

and you can set these variables to small values:

mysql> set global wait_timeout=30;
mysql> set global interactive_timeout=30;
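
These settings last only until the server restarts; to make them permanent, put the same values into the [mysqld] section of my.cnf:

[mysqld]
wait_timeout        = 30
interactive_timeout = 30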

what if you cannot start your mysql server

March 24th, 2011

Most likely you have changed some settings in your my.cnf, especially added new parameters that your server version does not support. The safest way to alter my.cnf is to first check whether each parameter is supported by the current version, or to test SET parameter=value in a mysql console.
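
For example, to test a parameter (wait_timeout here is just an example) before writing it into my.cnf:

mysql> show variables like 'wait_timeout';
mysql> set global wait_timeout=60;

If the variable is unknown to your version, the SHOW returns an empty set and the SET fails with an "unknown system variable" error, which is exactly the kind of parameter that would prevent the server from starting if it were in my.cnf.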

problem when creating a secondary index on a large innodb table

March 18th, 2011

The table has only 2 integer columns (id, friendId) with the composite primary key (id, friendId). Now I want to search in the reverse direction: find users by friendId. Without an index on friendId, each search scans the whole table and takes about 90 s. So I decided to create an index on friendId.

First, I tried CREATE INDEX index_name ON tb_name (column_name). After 30 hours it still hadn't finished, and I had no idea when it would, so I interrupted it.

Then I truncated the whole table, altered the table to add the new index, and reloaded all the data with LOAD DATA INFILE …. INTO TABLE tb_name. During the load, only 5% of the CPU was used while io_wait stayed high. The final time it took was 55 hours:

mysql> LOAD DATA INFILE '/home/weiping/rr.dump/rr.friends.all.20100310.csv' INTO TABLE friendships;
Query OK, 137138510 rows affected (2 days 7 hours 46 min 5.46 sec)

My my.cnf is like this:

delay_key_write=ALL
bulk_insert_buffer_size=2048M
key_buffer              = 768M
max_allowed_packet      = 16M
thread_stack            = 192K
thread_cache_size       = 144
query_cache_limit       = 1M
query_cache_size        = 32M
innodb_buffer_pool_size = 1024M

Of course, I had switched off these unnecessary checks:

SET unique_checks=0;
SET sql_log_bin=0;
SET foreign_key_checks=0;

I had loaded the same data before without the secondary index, and it took only 30 minutes! Now I'm wondering why it takes so much longer with just one more index…

BTW, will increasing sort_buffer_size help massive insertion (with a secondary index)?
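
For comparison, the approach usually suggested for this case is the opposite order: load the data without the secondary index, then build the index in one pass afterwards, so the friendId keys get sorted in bulk instead of being inserted in random order. A sketch (idx_friendId is just a name I picked):

SET unique_checks=0;
SET sql_log_bin=0;
SET foreign_key_checks=0;
LOAD DATA INFILE '/home/weiping/rr.dump/rr.friends.all.20100310.csv' INTO TABLE friendships;
ALTER TABLE friendships ADD INDEX idx_friendId (friendId);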

notes for migrating large data in mysql (innodb)

March 13th, 2011

Here migrating means exporting & importing. The tips below may help you when you need to migrate a large amount of data. As a reference, here are the two big tables I exported & imported:

users (21,000,000 rows):

CREATE TABLE `users` (
`id` int(11) NOT NULL,
`name` char(30) CHARACTER SET utf8 NOT NULL,
`city` char(50) CHARACTER SET utf8 DEFAULT NULL,
`insertTime` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`assignedTo` smallint(5) unsigned NOT NULL DEFAULT '0',
`iconUrl` varchar(100) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `assignedTo` (`assignedTo`),
KEY `insertTime` (`insertTime`),
KEY `users_crawlTime` (`crawlTime`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1

friendships (131,000,000 rows):

CREATE TABLE `friendships` (
`id` int(11) NOT NULL,
`friendId` int(11) NOT NULL,
PRIMARY KEY (`id`,`friendId`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1

There are 2 ways to export & import data:

  • Using SELECT … INTO OUTFILE … to export, and LOAD DATA INFILE … INTO TABLE … to import.
  • Using mysqldump … to export, and source … to import.

1. SELECT … INTO OUTFILE … & LOAD DATA INFILE … INTO TABLE …

The basic usage of SELECT INTO is:

SELECT … FROM tb_name INTO OUTFILE '/home/me/datadump/xxxx.txt';

The basic usage of LOAD DATA is:

LOAD DATA INFILE '/home/me/datadump/xxxx.txt' INTO TABLE tb_name [FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n'];

1.1 Tips for SELECT INTO

  • use ORDER BY primary_key.
  • increase sort_buffer_size (see the sketch below).
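
Putting the two tips together, a sketch for the friendships table above (the buffer size and path are just examples):

SET SESSION sort_buffer_size = 64 * 1024 * 1024;
SELECT * FROM friendships
ORDER BY id, friendId
INTO OUTFILE '/home/me/datadump/friendships.txt';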

1.2 Tips for LOAD DATA

  • turn off foreign-key checking, binary logging & unique checking.
    set foreign_key_checks=0;
    set sql_log_bin=0;
    set unique_checks=0;
    load data ...
  • turn on delay_key_write and increase bulk_insert_buffer_size by editing the [mysqld] section of my.cnf. Here is an example:
    delay_key_write=ALL 
    bulk_insert_buffer_size=2048M

2. mysqldump … & source …

The basic syntax for mysqldump is:

shell> mysqldump [options] db_name [tbl_name ...]
shell> mysqldump [options] --databases db_name ...
shell> mysqldump [options] --all-databases

And source is easy to use:

mysql> source /home/me/datadump/xxxx.sql

2.1 Tips for mysqldump

  • use the switch --opt. This option is shorthand for --add-drop-table --add-locks --create-options --disable-keys --extended-insert --lock-tables --quick --set-charset. The --opt option is enabled by default; you can use --skip-opt to disable it.
  • use the switch --order-by-primary. It dumps each table sorted by its primary key, or by its first unique index if such an index exists. This is useful when dumping a MyISAM table to be loaded into InnoDB.
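
For example, to dump one table with both switches (mydb is a hypothetical database name):

shell> mysqldump --opt --order-by-primary mydb friendships > /home/me/datadump/friendships.sql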


3. other tips:

  • load everything in a single transaction instead of committing per row (see the sketch below).
  • split a big file into several smaller files, e.g. 1M records per file.
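
A sketch of both tips combined, assuming the dump was first split into pieces (e.g. with split -l 1000000; the part file names are made up):

SET autocommit=0;
LOAD DATA INFILE '/home/me/datadump/part_001.txt' INTO TABLE tb_name;
LOAD DATA INFILE '/home/me/datadump/part_002.txt' INTO TABLE tb_name;
COMMIT;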

remember 'character set utf8' while loading data into mysql

March 10th, 2011

Recently, I tried to load some data containing Chinese characters into MySQL. The data is saved in a txt file, separated by tabs ('\t') and encoded in utf8. My mysql table also has the right encoding, utf8.

I knew I had to set the character encoding to utf8, so I used 'set names utf8' before the command that loads the data:

set names utf8;
LOAD DATA LOCAL INFILE '/home/me/datadump/my_data.txt'
INTO TABLE my_table;

However, it turns out this doesn't work. When checking the data in a terminal, I got the right results without 'set names utf8' but garbled results with it. Of course, when using this data in a utf8-encoded application, I also got garbled results.

After searching, I found out why: 'set names' only changes the connection character set, while LOAD DATA interprets the file according to the database character set unless you specify one explicitly. So the right way to load the data is:

LOAD DATA LOCAL INFILE '/home/me/datadump/my_data.txt'
INTO TABLE my_table CHARACTER SET UTF8;

move Mysql’s datadir on ubuntu

March 8th, 2011

MySQL's datadir on ubuntu is /var/lib/mysql by default. If your DB grows bigger and bigger and finally exceeds the capacity of the /var partition, you may want to move it to some other place. Here are the steps.

  1. stop the server
    sudo service mysql stop

    or

    sudo /etc/init.d/mysql stop
  2. copy data to the new folder
    For example, suppose you want to put the new folder under /home2/:

    sudo cp -R /var/lib/mysql /home2/

    Now /home2/mysql is the new datadir. Of course, you can change it to whatever name you want.

    Here I deleted the InnoDB log files ib_logfile* by:

    sudo rm /home2/mysql/ib_logfile*

    Maybe you also want to change the original datadir to some other name to make sure it won’t be in effect any more.

    sudo mv /var/lib/mysql /var/lib/mysql.bak
  3. change the permission of the new directory
    Make sure the owner of /home2/mysql and all its subdirectories is mysql:mysql. When necessary, use chown to change the ownership:

    sudo chown -R mysql:mysql /home2/mysql

    Also make sure the permissions of the new datadir and its subdirectories are the same as those under /var/lib/mysql. In my experiment, the permission of the table files in each database was changed to 400 during copying, so I had to use chmod to correct them:

    sudo chown -R me:me /home2/mysql;
    sudo chmod 660 /home2/mysql/*/*;
    sudo chown -R mysql:mysql /home2/mysql

    The reason why I changed the owner to me (my username on my computer) at the beginning and changed it back to mysql at the end is so that I could use the wildcard * to change all files' permissions at once.

  4. modify my.cnf and point datadir to the new directory
    sudo vi /etc/mysql/my.cnf

    Find datadir and change it to:

    datadir=/home2/mysql

    It seems that in MySQL 4 you also need to change basedir and socket accordingly, but I'm not sure.

  5. add the new directory to the AppArmor configuration file
    This part is specific to some Linux distributions, such as Ubuntu >= 7.04. AppArmor is a Linux Security Module implementation of name-based access controls; it confines individual programs to a set of listed files and POSIX 1003.1e draft capabilities.

    You can use

    sudo aa-status

    to check whether mysql is confined by AppArmor. If so, use

    sudo vi /etc/apparmor.d/usr.sbin.mysqld

    to add the new datadir at the end of the file.

  6. restart the AppArmor service
    sudo /etc/init.d/apparmor reload
  7. start the mysql server
    sudo service mysql start

    or

    sudo /etc/init.d/mysql start

    Refer to /var/log/mysql/error.log to see if there are any errors.

  8. do some tests.
    List all the databases, select some records from a table, create a new table and check that it appears in the new datadir, insert some records, delete them, and so on; a minimal test session follows this list.
    WARNING: you may need to run MySQL with the new datadir for a few days before deleting the old datadir.
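
A minimal test session along those lines (move_test is a throwaway table in the default test database):

mysql> SHOW VARIABLES LIKE 'datadir';   -- should now print /home2/mysql/
mysql> SHOW DATABASES;
mysql> CREATE TABLE test.move_test (id INT PRIMARY KEY);
mysql> INSERT INTO test.move_test VALUES (1);
mysql> SELECT * FROM test.move_test;
mysql> DROP TABLE test.move_test;

The new table's files should appear under /home2/mysql/test/ while it exists.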

export mysql data to a custom directory on ubuntu linux

March 8th, 2011

On ubuntu, mysql is confined by AppArmor to certain directories. Thus when you try to export data into a directory other than /tmp, you'll get an Errcode 13 (permission denied). So if you really want to export db data into a custom directory, you need to add it to the AppArmor configuration file.

First, check whether mysql is restricted to certain dirs:

sudo aa-status

If you see mysqld listed, then add the custom dirs at the end of the configuration file:

sudo vi /etc/apparmor.d/usr.sbin.mysqld
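
The new lines follow the same pattern as the stock /var/lib/mysql entries in that profile. Assuming the custom directory is /home2/dump (a made-up path), you would add:

  /home2/dump/ r,
  /home2/dump/** rwk,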

Finally, reload the service:

sudo /etc/init.d/apparmor reload

reset mysql root password on linux (ubuntu)

March 7th, 2011
  1. Stop the MySQL Server.
    sudo /etc/init.d/mysql stop

    If you cannot stop the MySQL server normally due to some problem, try this command to list all mysql processes and then kill them one by one:

    ps aux | grep mysql
  2. Start MySQL without password checking.
    sudo mysqld_safe --skip-grant-tables &
  3. Log in to MySQL as root.
    mysql -u root mysql
  4. Set your new password.
    UPDATE user SET Password=PASSWORD('NEWPASSWORD') WHERE User='root';
    FLUSH PRIVILEGES;

    The more secure way is to use SET PASSWORD; note that while the server runs with --skip-grant-tables, you must run FLUSH PRIVILEGES first to reload the grant tables, or SET PASSWORD will be refused:

    FLUSH PRIVILEGES;
    SET PASSWORD FOR 'root'@'%' = PASSWORD('NEWPASSWORD');