Posts Tagged ‘mysql’

MySQL replication setup and error fixes

April 17th, 2013

Section 1: set up replication

1.1 on master:

* change my.cnf
* create slave user and grant proper privileges
* restart server
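As a minimal sketch of the master side (the server-id, log file name, user name, and password below are placeholders to adapt): the my.cnf change means giving the master a server-id and enabling binary logging under [mysqld] (e.g. server-id = 1 and log_bin = mysql-bin), and the slave user can be created like this:

```sql
-- create a dedicated replication user; 'repl' and 'slave_password' are placeholders
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'slave_password';
```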

1.2 on slave

* change my.cnf with the master user created above
* restart server
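A sketch of the slave side (the server-id must differ from the master's; host, user, password, log file, and position are placeholders, with the log file and position taken from SHOW MASTER STATUS on the master):

```sql
-- run on the slave after setting server-id = 2 in its my.cnf and restarting
  MASTER_LOG_POS=0;
```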

Section 2: fix error on replication

2.1 on master:

* check master status:
mysql> show master status \G
* check processlist:
mysql> show processlist;

2.2 on slave

* check slave status and you will find the problem:
mysql> show slave status \G
* check processlist:
mysql> show processlist;
* identify the problem
root> grep mysql /var/log/syslog

2.3 fix problems

* way 1: use SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1 to tell the slave to skip one query (the invalid one that caused replication to stop). If you'd like to skip two queries, use SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 2; instead, and so on.
mysql> STOP SLAVE;
mysql> START SLAVE;

* way 2: edit my.cnf and add 'slave-skip-errors = NNN,NNN,NNN', where the numbers are the errno(s) you found in /var/log/syslog.
root> vi /etc/
and add the line 'slave-skip-errors = NNN,NNN,NNN' in the slave section.
root> /etc/init.d/mysql restart
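When reading the slave status in section 2.2 above, these are the fields that usually reveal the problem (field names from the SHOW SLAVE STATUS output):

```sql
mysql> show slave status \G
-- Slave_IO_Running / Slave_SQL_Running: both should say Yes
-- Last_Errno / Last_Error: the error that stopped replication
-- Seconds_Behind_Master: NULL while replication is broken
```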

How to keep the order of SELECT WHERE IN() in mysql

April 21st, 2012

The answer is to use ORDER BY FIND_IN_SET(col_name, order_values).

Here is an example. A table `students` has 2 columns: `id`, `name`.
Suppose you want to find the rows with the IDs 4,5,2,1, and keep them in the order of 4,5,2,1 as well.

Here is the SQL:

SELECT id, name FROM students
WHERE id IN (4,5,2,1)
ORDER BY FIND_IN_SET(id, '4,5,2,1');
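A note on the same technique: as far as I know, MySQL's FIELD() function gives the same effect with a plain argument list instead of a CSV string; a sketch using the same hypothetical table:

```sql
-- FIELD(id, 4,5,2,1) returns the 1-based position of id in the argument list,
-- so the rows come back in the order 4, 5, 2, 1
SELECT id, name FROM students
WHERE id IN (4,5,2,1)
ORDER BY FIELD(id, 4,5,2,1);
```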

mysqlpp connection timeout problem

October 3rd, 2011

If you have an application that holds a long-lived connection to MySQL via mysqlpp, you may at first think of using Connection::connected() to determine whether the connection is still valid. But this actually doesn't work once the connection has timed out.

After checking the reference, I found that Connection::ping() is a good way to detect a timeout.

Here is a piece of test code:

#include <iostream>
#include <string>
#include <mysql++.h>

using namespace std;

int main() {
	string db_schema("DB_SCHEMA");
	string db_host("DB_HOST");
	string db_user("DB_USER");
	string db_pass("DB_PASS");
	string qry_str("SELECT MAX(user_id) FROM users");

	mysqlpp::Connection cn;
	cn.connect(db_schema.c_str(), db_host.c_str(), db_user.c_str(), db_pass.c_str());
	mysqlpp::Query qry = cn.query(qry_str);
	mysqlpp::StoreQueryResult res =;
	cout << "first query result: " << res[0][0] << endl;

	string tmp;
	cout << "Wait for a period of time to let the connection time out, then input any string to continue: " << endl;
	cin >> tmp;

	// ping() returns false once the server has dropped the connection
	cout << "ping result: " << << endl;
	// connected() still reports true here, which is exactly the problem
	cout << "connected: " << cn.connected() << endl;

	// reconnect and run the query again
	cn.connect(db_schema.c_str(), db_host.c_str(), db_user.c_str(), db_pass.c_str());
	mysqlpp::Query qry2 = cn.query(qry_str);
	res =;
	cout << "second query result: " << res[0][0] << endl;

	return 0;
}
But of course, I’m still new to mysqlpp. There may be a better way to do this. Please comment below if you have a better way.

NOTE: Here is how to change mysql connection idle timeout.

UPDATE: Just found another way to handle the timeout problem: use Connection::set_option(new mysqlpp::ReconnectOption(true)). This is much more elegant.
Here is a piece of sample code:

mysqlpp::Connection cn;
cn.set_option(new mysqlpp::ReconnectOption(true));
cn.connect(db_schema.c_str(), db_host.c_str(), db_user.c_str(), db_pass.c_str());

set mysql connection idle timeout

October 3rd, 2011

In the config file my.cnf (my.ini on Windows), under [mysqld], add these 2 lines:

# set timeout to be 1h (3600 seconds)
wait_timeout = 3600
interactive_timeout = 3600
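If restarting the server is inconvenient, the same idle timeouts can, as far as I know, also be changed at runtime with global scope (this affects only connections opened afterwards):

```sql
SET GLOBAL wait_timeout = 3600;
SET GLOBAL interactive_timeout = 3600;
-- verify the current values
SHOW VARIABLES LIKE '%timeout%';
```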

bash script for killing all mysql processes / connections

September 29th, 2011

Here is a small bash script for killing all mysql processes (or connections).

The basic idea is to select all process IDs into a tmp file (each process ID a line), and then read the file line by line and kill the process, finally remove the tmp file.

This script only works for MySQL >= 5.1.7, because before 5.1.7 the process list is not available in the table INFORMATION_SCHEMA.PROCESSLIST.


  • either provide the MYSQL root password as a parameter to the script:
    user> MysqlRootPass
  • or edit the password line mysqlpass=$1 to read:
    mysqlpass=MysqlRootPass and then execute the script without a parameter.

    Of course, you can also edit the other variables in the script file.



#!/bin/bash

# mysql host/user/pass who has the privilege to read schema INFORMATION_SCHEMA
# (host and user are defaults; edit as needed)
mysqlhost=localhost
mysqluser=root
mysqlpass=$1

# tmp file path (must be writable by the mysqld server for INTO OUTFILE; edit as needed)
tmpprocesslistfile=/tmp/mysqlprocesslist.txt

# select all process IDs into the tmp file. each line has a process id.
echo "SELECT id FROM INFORMATION_SCHEMA.PROCESSLIST INTO OUTFILE '$tmpprocesslistfile';" | mysql -h$mysqlhost -u$mysqluser -p$mysqlpass INFORMATION_SCHEMA

# read the file line by line and kill the process
while read qryid
	echo "kill $qryid;" | mysql -h$mysqlhost -u$mysqluser -p$mysqlpass
done < "$tmpprocesslistfile"

# remove the tmp file
echo 'All connections have been killed.';
echo 'Please enter sudo password to remove the tmp file if necessary.';
sudo rm $tmpprocesslistfile
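An alternative sketch that avoids the tmp file (and the sudo cleanup): have MySQL generate the KILL statements itself, then paste the output back into a mysql console. The CONNECTION_ID() filter excludes the current session so it is not killed mid-way:

```sql
-- each output row is a ready-to-run KILL statement
WHERE ID <> CONNECTION_ID();
```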

Install & use mysql++ on linux (debian squeeze, ubuntu)

July 28th, 2011

The key to using mysql++ on a debian/ubuntu system is finding where the mysql++ .h header files and the .so binary files are.

First, install mysqlclient and mysql++ on debian squeeze:

root# apt-get install libmysqlclient-dev libmysqld-dev libmysql++-dev libmysql++-doc libmysql++3

This will install the mysql header files in /usr/include/mysql, the mysql++ header files in /usr/include/mysql++, and and in /usr/lib. These are all the files we need.

Now let us use the mysql++ library.

Save the simple code below as test.cpp

#include <iostream>
#include <mysql++.h>

int main() {
	mysqlpp::String s("hello, world");
	std::cout << s << std::endl;
	return 0;
}

And save the makefile as Makefile:

CXX := g++
CXXFLAGS := -I/usr/include/mysql -I/usr/include/mysql++
LDFLAGS := -L/usr/lib -lmysqlpp -lmysqlclient -lnsl -lz -lm

all: test

test: test.cpp
	$(CXX) $(CXXFLAGS) -o test test.cpp $(LDFLAGS)

clean:
	rm -f test *.o

Note that the command line under each target must start with a tab.

Finally, execute

make all

The build result is an executable named test.

allow mysql to get outer connections

July 26th, 2011

If you want your mysql to accept outer connections, do not forget to set:

bind-address            =

(or your server's public IP) in my.cnf.

Otherwise, you will get an error message like this: "the target machine actively refused it".

SQLSTATE[HY000]: General error: 2053 in ZF

May 15th, 2011

After updating PDO, my program began to see some weird problems. One of them is "SQLSTATE[HY000]: General error: 2053". After some time, I finally found that it is because I passed NULL as the second parameter ($bind) of Zend_Db_Adapter_Abstract::fetchAll($sql, $bind = array(), $fetchMode = null).

So, instead of this:

$popList = $this->getAdapter()->fetchAll($sql, null, Zend_Db::FETCH_OBJ);

one should:

$popList = $this->getAdapter()->fetchAll($sql, array(), Zend_Db::FETCH_OBJ);

when you don’t need to bind any parameter with the SQL.

what if you cannot start your mysql server

March 24th, 2011

I guess you have changed some settings in your my.cnf, especially if you have added new parameters. The best way to alter my.cnf is to first check whether the parameter is supported by the current version, or to test SET parameter=value in a mysql console.
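A quick way to do that check before touching my.cnf (the variable name here is just an example):

```sql
-- an empty result means the running server does not know this variable,
-- and putting it in my.cnf could prevent the server from starting
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
```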

problem when creating a secondary index on a large innodb table

March 18th, 2011

The table has only 2 integer columns (id, friendId) with the combined key (id, friendId). Now I want to search users in reverse, by friendId. Without an index on friendId, each search scans the whole table and takes about 90s. So I decided to create an index on friendId.

First, I tried: CREATE INDEX index_name ON tb_name (column_name). After 30 hours it still had not finished and I had no idea when it would, so I interrupted it.

Then I truncated the whole table, altered the table to add the new index, and then reinserted all the data with LOAD DATA INFILE ... INTO TABLE tb_name. During the procedure only 5% of the CPU was used and the I/O wait was high. The final run took 55 hours:

mysql> LOAD DATA INFILE '/home/weiping/rr.dump/rr.friends.all.20100310.csv' INTO TABLE friendships;
Query OK, 137138510 rows affected (2 days 7 hours 46 min 5.46 sec)

My my.cnf is like this:

key_buffer              = 768M
max_allowed_packet      = 16M
thread_stack            = 192K
thread_cache_size       = 144
query_cache_limit       = 1M
query_cache_size        = 32M
innodb_buffer_pool_size = 1024M

Of course, I switched off the unnecessary checks:

SET unique_checks=0;
SET sql_log_bin=0;
set foreign_key_checks=0;

When I loaded the data before, without the secondary index, it took only 30 minutes! Now I'm wondering why it took so long with one more index…

BTW, would increasing the size of the sort buffer help massive insertion (with a secondary index)?