Archive for the ‘mysql’ category

problem when creating a secondary index on a large innodb table

March 18th, 2011

The table has only two integer columns (id, friendId), with the composite primary key (id, friendId). Now I want to look users up in reverse, by friendId. Without an index on friendId, each search scans the whole table and takes about 90 s, so I decided to create an index on friendId.

First, I tried: CREATE INDEX index_name ON tb_name (column_name). After 30 hours it still hadn't finished and I had no idea when it would, so I interrupted it.

Then I truncated the whole table, altered it to add the new index, and reinserted all the data with LOAD DATA INFILE … INTO TABLE tb_name. During the load only about 5% of the CPU was used and io_wait was high. It finally took 55 hours:

mysql> LOAD DATA INFILE '/home/weiping/rr.dump/rr.friends.all.20100310.csv' INTO TABLE friendships;
Query OK, 137138510 rows affected (2 days 7 hours 46 min 5.46 sec)

My my.cnf is like this:

delay_key_write=ALL
bulk_insert_buffer_size=2048M
key_buffer              = 768M
max_allowed_packet      = 16M
thread_stack            = 192K
thread_cache_size       = 144
query_cache_limit       = 1M
query_cache_size        = 32M
innodb_buffer_pool_size = 1024M

Of course, I have switched off these unnecessary checks:

SET unique_checks=0;
SET sql_log_bin=0;
set foreign_key_checks=0;

When I loaded the same data before, without the secondary index, it took only 30 minutes! Now I'm wondering why it takes so much longer with one more index…

BTW, will increasing sort_buffer_size help massive insertion (with a secondary index)?
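One approach worth trying in this situation (a sketch only; the index name is invented, and note that on the old built-in InnoDB the index build itself rewrites the table, while the InnoDB plugin's fast index creation makes the second step much cheaper) is to load the data with no secondary index at all and build the index in one pass afterwards:

```sql
-- Sketch: load first, index later.
-- friendId_idx is a hypothetical name; adjust the dump path to yours.
LOAD DATA INFILE '/path/to/friends.csv' INTO TABLE friendships;
-- One bulk sort/build instead of per-row index maintenance:
CREATE INDEX friendId_idx ON friendships (friendId);
```

The idea is that maintaining a secondary index during 137M single-row inserts causes scattered random IO, while a single build after the load can sort the keys sequentially.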

notes for migrating large data in mysql (innodb)

March 13th, 2011

Here, migrating means exporting & importing. The tips below may help when you need to move a large amount of data. I'll also list the resource consumption of my export & import of two big tables:

users (21,000,000 rows):

CREATE TABLE `users` (
  `id` int(11) NOT NULL,
  `name` char(30) CHARACTER SET utf8 NOT NULL,
  `city` char(50) CHARACTER SET utf8 DEFAULT NULL,
  `insertTime` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `assignedTo` smallint(5) unsigned NOT NULL DEFAULT '0',
  `iconUrl` varchar(100) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `assignedTo` (`assignedTo`),
  KEY `insertTime` (`insertTime`),
  KEY `users_crawlTime` (`crawlTime`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1

relations (131,000,000 rows):

CREATE TABLE `friendships` (
  `id` int(11) NOT NULL,
  `friendId` int(11) NOT NULL,
  PRIMARY KEY (`id`,`friendId`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1

There are two ways to export & import data:

  • Use SELECT … INTO OUTFILE … to export, and LOAD DATA INFILE … INTO TABLE … to import.
  • Use mysqldump … to export, and source … to import.

1. SELECT … INTO OUTFILE … & LOAD DATA INFILE … INTO TABLE …

The basic usage of SELECT INTO is:

SELECT … FROM tb_name INTO OUTFILE '/home/me/datadump/xxxx.txt';

The basic usage of LOAD DATA is:

LOAD DATA INFILE '/home/me/datadump/xxxx.txt' INTO TABLE tb_name [FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n'];

1.1 Tips for SELECT INTO

  • use ORDER BY primary_key, so the rows are written in primary-key order.
  • increase sort_buffer_size.

1.2 Tips for LOAD DATA

  • turn off foreign key checks, binary logging & unique checks.
    set foreign_key_checks=0;
    set sql_log_bin=0;
    set unique_checks=0;
    load data ...
  • turn on delay_key_write and increase bulk_insert_buffer_size by editing the [mysqld] section in my.cnf. Here is an example:
    delay_key_write=ALL 
    bulk_insert_buffer_size=2048M

2. mysqldump … & source …

The basic syntax for mysqldump is:

shell> mysqldump [options] db_name [tbl_name ...]
shell> mysqldump [options] --databases db_name ...
shell> mysqldump [options] --all-databases

And the source is easy to use:

mysql> source /home/me/datadump/xxxx.sql

2.1 Tips for mysqldump

  • use the switch --opt. This option is shorthand for "--add-drop-table --add-locks --create-options --disable-keys --extended-insert --lock-tables --quick --set-charset". The --opt option is enabled by default; use --skip-opt to disable it.
  • use the switch --order-by-primary. This dumps each table sorted by its primary key, or by its first unique index if one exists. It is useful when dumping a MyISAM table that will be loaded into InnoDB.


3. other tips:

  • wrap the whole import in a single transaction (one commit).
  • split a big file into several smaller files, e.g. 1M records per file.
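The file-splitting tip can be done with the standard split tool. A small runnable sketch, using 2 records per piece instead of 1M (file names here are made up):

```shell
set -e
# Build a stand-in dump file with 5 records (tab-separated, like an outfile).
printf '1\t10\n2\t20\n3\t30\n4\t40\n5\t50\n' > rows.txt
# Split into pieces of 2 lines each; use -l 1000000 for 1M records per file.
split -l 2 rows.txt part_
# Pieces are named part_aa, part_ab, part_ac ...
ls part_*
```

Each piece can then be loaded (and committed) separately, which keeps transactions and redo logging bounded.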

remember 'character set utf8' while loading data into mysql

March 10th, 2011

Recently, I tried to load some data containing Chinese characters into MySQL. The data was saved in a txt file, separated by '\t' and encoded as utf8. My MySQL table also has the right encoding, utf8.

I knew I had to set the character encoding to utf8, so I used 'set names utf8' before the load:

set names utf8;
LOAD DATA LOCAL INFILE '/home/me/datadump/my_data.txt'
INTO TABLE my_table;

However, it turned out this doesn't work. Checking the data in a terminal, I got correct results without 'set names utf8', but garbled results with it. And when using the data in a utf8-encoded application, I also got garbled results.

After searching, I found the right way to load data is:

LOAD DATA LOCAL INFILE '/home/me/datadump/my_data.txt'
INTO TABLE my_table CHARACTER SET UTF8;

move Mysql’s datadir on ubuntu

March 8th, 2011

MySQL's datadir on Ubuntu is /var/lib/mysql by default. If your DB grows bigger and bigger and finally exceeds the limit of the /var partition, you may want to move it somewhere else. Here are the steps.

  1. stop the server
    sudo service mysql stop

    or

    sudo /etc/init.d/mysql stop
  2. copy the data to the new folder
    For example, to put the new folder under /home2/:

    sudo cp -Rp /var/lib/mysql /home2/

    (-R copies the directory tree recursively; -p preserves ownership, permissions and timestamps.)

    Now /home2/mysql is the new datadir. Of course, you can change it to whatever name you want.

    Here I deleted the InnoDB log files ib_logfile* (they will be recreated on startup):

    sudo rm /home2/mysql/ib_logfile*

    You may also want to rename the original datadir, to make sure it can't silently stay in use:

    sudo mv /var/lib/mysql /var/lib/mysql.bak
  3. change the ownership and permissions of the new directory
    Make sure the owner of /home2/mysql and all its subdirectories is mysql:mysql. If necessary, use chown to fix it:

    sudo chown -R mysql:mysql /home2/mysql

    Also make sure the permissions of the new datadir and its subdirectories match those under /var/lib/mysql. In my case, the permissions of the table files in each database were changed to 400 during the copy, so I had to correct them with chmod:

    sudo chown -R me:me /home2/mysql;
    sudo chmod 660 /home2/mysql/*/*;
    sudo chown -R mysql:mysql /home2/mysql

    I changed the owner to me (my username on this machine) first and back to mysql at the end so that the wildcard * could fix all the files' permissions at once.

  4. modify my.cnf to point datadir at the new directory
    sudo vi /etc/mysql/my.cnf

    Find datadir and change it to:

    datadir=/home2/mysql

    It seems that in MySQL 4 you also need to change basedir and socket accordingly, but I'm not sure.

  5. add the new directory to the AppArmor profile
    This part is specific to some Linux distributions such as Ubuntu >= 7.04. AppArmor is a Linux Security Module implementation of name-based access control; it confines individual programs to a set of listed files and posix 1003.1e draft capabilities.

    You can use

    sudo aa-status

    to check whether mysql is confined by AppArmor. If so, use

    sudo vi /etc/apparmor.d/usr.sbin.mysqld

    to add the new datadir at the end of the file.

  6. restart the apparmor service
    sudo /etc/init.d/apparmor reload
  7. start the mysql server
    sudo service mysql start

    or

    sudo /etc/init.d/mysql start

    Check /var/log/mysql/error.log to see if there are any errors.

  8. do some tests
    List all the databases, select some records from a table, create a new table and check whether it appears in the new datadir, insert some records, delete them, and so on.
    WARNING: you may want to run MySQL with the new datadir for a few days before deleting the old one.

export mysql data to a custom directory on ubuntu linux

March 8th, 2011

On Ubuntu, mysqld is confined by AppArmor to certain directories. So when you try to export data into a directory other than /tmp, you'll get an Errcode 13 (permission denied). If you really want to export data into a custom directory, you need to add it to the AppArmor profile.

First, check whether mysql is restricted to certain dirs:

sudo aa-status

If you see mysqld listed, add the custom directories at the end of the profile:

sudo vi /etc/apparmor.d/usr.sbin.mysqld

Finally, reload the service:

sudo /etc/init.d/apparmor reload
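For example, the lines to add typically look like this (the /home2/dumps path is an invented example; check the existing entries in the profile for the exact style used on your system):

```
/home2/dumps/ r,
/home2/dumps/** rw,
```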

reset mysql root password on linux(ubuntu)

March 7th, 2011
  1. Stop the MySQL Server.
    sudo /etc/init.d/mysql stop

    If you cannot stop the MySQL server normally due to some problem, use this command to list all mysql processes, then kill them one by one.

    ps aux | grep mysql
  2. Start MySQL without privilege checks.
    sudo mysqld_safe --skip-grant-tables &
  3. Login to MySQL as root.
    mysql -u root mysql
  4. Set your new password.
    UPDATE user SET Password=PASSWORD('NEWPASSWORD') WHERE User='root';
    FLUSH PRIVILEGES;

    The more secure way is:

    SET PASSWORD FOR 'root'@'%' = PASSWORD('NEWPASSWORD');
    FLUSH PRIVILEGES;

commonly used mysql commands

June 13th, 2010

start up & shut down mysql:

sudo /etc/init.d/mysql start;
sudo /etc/init.d/mysql stop;

or
sudo mysqladmin shutdown -p;

show db & tables:

show databases;
show tables;
desc tableName;

show the definition of a table:

show create table tableName;

show server status

SHOW FULL PROCESSLIST;
show status like 'Conn%';
show status like '%onn%';
$ mysqladmin status

SHOW TABLE STATUS;

profile a query

EXPLAIN SELECT * FROM tableName;
or
SET profiling=1;
SELECT * FROM tableName;
...
SET profiling=0;
SHOW profiles;
SHOW profile for query 1;

Add a user

CREATE USER 'nb'@'localhost' IDENTIFIED BY 'yourpassword';
GRANT SELECT, INSERT, UPDATE, DELETE ON somedb.* TO 'nb'@'localhost';

show warnings

SHOW WARNINGS [LIMIT [offset,] row_count]
SHOW COUNT(*) WARNINGS

see: http://dev.mysql.com/doc/refman/5.5/en/show-warnings.html

Compress the result from mysqldump

Compress directly with gzip:
mysqldump <mysqldump options> | gzip > outputfile.sql.gz

Decompress and import with gunzip:
gunzip < outputfile.sql.gz | mysql <mysql options>

Compress directly with bzip2:
mysqldump <mysqldump options> | bzip2 > outputfile.sql.bz2

Decompress and import with bunzip2:
bunzip2 < outputfile.sql.bz2 | mysql <mysql options>

Compress directly with lzma:
mysqldump <mysqldump options> | lzma > outputfile.sql.lzma

Decompress and import with unlzma:
unlzma < outputfile.sql.lzma | mysql <mysql options>

NOTE:

mysqldump <mysqldump options>

can be replaced with something like:

mysqldump -hYourHost -uYourUserName -pYourPassword YourDatabase
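The compress/restore pairing can be sanity-checked without a MySQL server by standing a plain file in for the mysqldump output (file names here are invented):

```shell
set -e
# Stand-in for the dump produced by mysqldump.
printf 'CREATE TABLE t (id INT);\n' > outputfile.sql
# Compress, as "mysqldump ... | gzip > outputfile.sql.gz" would.
gzip -c outputfile.sql > outputfile.sql.gz
# Decompress the same way "gunzip < outputfile.sql.gz | mysql ..." feeds mysql.
gunzip < outputfile.sql.gz > roundtrip.sql
# The round trip must be lossless.
cmp outputfile.sql roundtrip.sql && echo OK
```

The same pattern applies verbatim to the bzip2 and lzma variants; only the compressor names change.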

reference: http://mediakey.dk/~cc/compressing-mysqldump-output-mysql-gzip-bzip2-and-lzma-7z/


Read/Write Splitting with MySQL-Proxy

April 8th, 2010

Original post: MySQL Proxy learns R/W Splitting

MySQL Forge wiki: http://forge.mysql.com/wiki/MySQL_Proxy_RW_Splitting

Download MySQL Proxy: http://dev.mysql.com/downloads/mysql-proxy/

Getting started with MySQL Proxy: http://dev.mysql.com/tech-resources/articles/proxy-gettingstarted.html

Chinese note: http://www.hiadmin.com/?p=190

Export results from MySQL into an external file

February 10th, 2010

1. From mysql command line:

  • SELECT colA, colB FROM table1 INTO OUTFILE "/tmp/output.txt";

The full SELECT syntax is:

SELECT
    [ALL | DISTINCT | DISTINCTROW ]
      [HIGH_PRIORITY]
      [STRAIGHT_JOIN]
      [SQL_SMALL_RESULT] [SQL_BIG_RESULT] [SQL_BUFFER_RESULT]
      [SQL_CACHE | SQL_NO_CACHE] [SQL_CALC_FOUND_ROWS]
    select_expr [, select_expr ...]
    [FROM table_references
    [WHERE where_condition]
    [GROUP BY {col_name | expr | position}
      [ASC | DESC], ... [WITH ROLLUP]]
    [HAVING where_condition]
    [ORDER BY {col_name | expr | position}
      [ASC | DESC], ...]
    [LIMIT {[offset,] row_count | row_count OFFSET offset}]
    [PROCEDURE procedure_name(argument_list)]
    [INTO OUTFILE 'file_name' export_options
      | INTO DUMPFILE 'file_name'
      | INTO var_name [, var_name]]
    [FOR UPDATE | LOCK IN SHARE MODE]]

2. From the console: combine mysql login & exporting

  • echo "SELECT colA, colB FROM table1" | mysql -uXXX -pYYY SchemaName > /tmp/output.txt;
  • mysql -uXXX -pYYY SchemaName < /tmp/export.sql > /tmp/output.txt;


The corresponding syntax for loading the outfile back is:

LOAD DATA [LOW_PRIORITY | CONCURRENT] [LOCAL] INFILE 'file_name'
    [REPLACE | IGNORE]
    INTO TABLE tbl_name
    [CHARACTER SET charset_name]
    [{FIELDS | COLUMNS}
        [TERMINATED BY 'string']
        [[OPTIONALLY] ENCLOSED BY 'char']
        [ESCAPED BY 'char']
    ]
    [LINES
        [STARTING BY 'string']
        [TERMINATED BY 'string']
    ]
    [IGNORE number LINES]
    [(col_name_or_user_var,...)]
    [SET col_name = expr,...]


server performance tuning suggestions

November 26th, 2009

Contents
1. Server configuration suggestions
2. MySQL performance analysis and suggestions
3. System performance analysis

A long time ago, at my previous company, I wrote this server analysis advice for 中企动力. It is really just some simple parameter adjustments plus a first-pass look at system performance with tools like vmstat and top.
I'm posting it in the hope that it helps others learn, and additions are welcome! [This document is for reference and study only; serious tuning is more involved, so discussion is welcome!]

1. Server configuration

First read the Apache tuning suggestions below, then adjust the relevant parameters and watch how the server behaves.
Apache tuning suggestions:
Go to the /usr/local/apache2/conf/extra directory.
After the steps above, Apache runs normally. But for a site with even moderate traffic, the default configuration cannot keep up, so we still need to adjust some parameters to get good performance under heavy load. Below are notes on the parameters in httpd.conf that matter most for performance.

(1) Timeout: the maximum time, in seconds, Apache will wait while receiving a request or sending the requested content; beyond that it abandons the request and releases the connection. The default is 120; 60 is recommended, and 30 or 15 for high-traffic sites.

(2) KeepAlive: controls whether Apache allows multiple requests on one connection; on by default. For most forum-style sites it is usually set to off.

(3) MPM – prefork.c: by default Apache uses the prefork (process-based) working model, and this block of parameters is the heart of Apache performance tuning. You will find the following section in the configuration file:

<IfModule prefork.c>
StartServers         5
MinSpareServers      5
MaxSpareServers     10
MaxClients          15
MaxRequestsPerChild  0
</IfModule>

This is the section that controls Apache's worker processes. To understand the parameters above, first look at how Apache manages processes. On Unix systems, many services' daemons create a single process at startup to answer incoming connection requests; the service then listens on its port, and when a request from a client arrives at that port, the service process handles it. While handling a request the process is occupied, so any other requests that arrive in the meantime must queue until the current request completes and the process is freed. More and more requests pile up in the queue, and in practice the service's throughput becomes very poor. Apache's prefork model solves this problem nicely; here is how Apache actually works so efficiently.

When Apache starts, it launches StartServers idle processes, ready to handle requests. As requests come in, the pool of spare processes shrinks; when it drops to MinSpareServers, Apache forks additional processes in advance so there are always spares available, which greatly reduces the chance of requests queueing and is exactly why the model is called pre-fork. Following the lifecycle further: suppose Apache has started 200 processes to handle requests, so in theory about 205 processes exist. After a while, say 100 of those requests have been answered; those 100 processes become idle, leaving roughly 105 spare processes. Keeping that many idle processes around serves no purpose and actually lowers overall server performance, so will Apache really hold 105 spares? Of course not! Apache constantly checks itself, and whenever it finds more than MaxSpareServers idle processes it shuts some down to keep the spare pool bounded. That should give a fair picture of how Apache works; see the Apache manual for the full details.

Two parameters remain: MaxClients and MaxRequestsPerChild. MaxClients is the maximum number of clients Apache will serve simultaneously; once MaxClients connections exist, further clients get a "server busy" error page. The default of 15 is clearly far too low for mid-size and large sites. Suppose you need to allow 512 concurrent clients: you set MaxClients to 512, save httpd.conf, and restart Apache; unfortunately, the restart fails with an error telling you MaxClients can be at most 256. Don't be discouraged, a world-class web server is not that limited. By default MaxClients is indeed capped at 256, but you can raise the cap freely with the ServerLimit parameter. Put simply, ServerLimit is the bucket and MaxClients is the water: to hold more water (MaxClients) you swap in a bigger bucket (a larger ServerLimit). Just remember that MaxClients must never be set higher than ServerLimit!

Note: MaxClients ≤ ServerLimit

Finally, let's look at MaxRequestsPerChild, which limits how many requests a single child process will handle before Apache terminates and replaces it; recycling children this way guards against slow resource leaks. The default is 0, meaning children are never recycled. Note that setting this value too low causes needless process churn and can lead to access problems; if you have no special need, or the traffic is modest, keep the default, while for very heavy traffic a value such as 2048 is often recommended.

With all that explained, here is the recommended prefork.c section after the changes:

<IfModule prefork.c>
StartServers         5
MinSpareServers      5
MaxSpareServers     10
ServerLimit       1024
MaxClients         768
MaxRequestsPerChild  0
</IfModule>

With these adjustments, Apache gains a considerable performance improvement.

2. MySQL tuning suggestions and analysis

MySQL tuning steps:
1. Check the hardware, i.e. the big three: CPU, memory, disk (I/O).
2. Check the MySQL configuration parameters.
3. Check MySQL's runtime status, e.g. with the mysqlreport tool.
4. Check the MySQL slow query log.
Work through these in order first, then look for problems in the application code.

Concrete methods and suggestions:
1. As root on the database server, first check the relevant logs for anything unusual: tail -n100 xxx.err
2. Log in to MySQL as root:
mysql -uroot -p
 show processlist;

3. Use the show status command.
MySQL prints a long list; the official reference is at http://www.mysql.com/doc/e…
The meanings are:
aborted_clients: number of connections aborted because the client terminated improperly
aborted_connects: number of failed attempts to connect to MySQL
com_xxx: number of times each xxx statement has been executed (one counter per statement)
connections: number of connection attempts to MySQL
Created_tmp_disk_tables: temporary tables created on disk
Created_tmp_tables: temporary tables created in memory
Created_tmp_files: number of temporary files created
Key_read_requests: The number of requests to read a key block from the cache
Key_reads: The number of physical reads of a key block from disk
Max_used_connections: maximum number of connections in use simultaneously
Open_tables: number of tables currently open
Open_files: number of files currently open
Opened_tables: total number of tables that have been opened
Questions: number of queries sent to the server
Sort_merge_passes: if this is large, increase the sort_buffer value in my.cnf
Uptime: number of seconds the server has been up

Suggestions for better performance:
1. If Opened_tables is too large, increase table_cache in my.cnf.
2. If Key_reads is too large, increase key_buffer_size in my.cnf; Key_reads/Key_read_requests gives the key cache miss rate.
3. If Handler_read_rnd is too large, many of your SQL queries are scanning entire tables instead of using indexes.
4. If Threads_created is too large, increase thread_cache_size in my.cnf; Threads_created/Connections gives the thread cache miss rate.
5. If Created_tmp_disk_tables is too large, increase tmp_table_size in my.cnf so that in-memory temporary tables are used instead of on-disk ones.

Note: all of these parameters can be changed in /etc/my.cnf.

For deeper MySQL tuning, see the related posts.
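The miss rates in points 2 and 4 are plain ratios; with made-up counter values (as they might appear in SHOW STATUS output) they can be computed like this:

```shell
# Hypothetical counter values, standing in for SHOW STATUS output.
key_reads=12000
key_read_requests=4000000
threads_created=30
connections=1500

# Key cache miss rate (point 2) and thread cache miss rate (point 4).
awk -v r=$key_reads -v q=$key_read_requests \
    'BEGIN { printf "key cache miss rate: %.4f\n", r/q }'
awk -v t=$threads_created -v c=$connections \
    'BEGIN { printf "thread cache miss rate: %.4f\n", t/c }'
```

A key cache miss rate well under 0.01 is usually considered healthy; the thread numbers above would suggest a reasonable thread_cache_size.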

3. System load and performance analysis: methods and tools
vmstat

Procs
-r:
processes running or waiting for a CPU time slice; if this stays above 1 for long periods, it can indicate you need more CPU.
-b:
processes in uninterruptible sleep, usually caused by IO.

Memory
-swpd: memory swapped out to disk (KB by default)
If swpd is nonzero, or even fairly large, say over 100M, but si and so stay at 0 over time, there is nothing to worry about and system performance is not affected.
-free: idle physical memory
-buff: memory used as buffer cache, buffering reads and writes to block devices
-cache: memory used as page cache, i.e. the filesystem cache
A large cache value means many files are being cached; if the frequently accessed files all fit in the cache, the disk read rate bi will be very small.

Swap
-si: swap-in, pages read from disk into memory
-so: swap-out, pages written from memory to disk
When memory is sufficient both values are 0; if they stay above 0 for long periods, system performance suffers, since swapping consumes both disk IO and CPU.
Some people see very little free memory, or free near 0, and conclude that memory is short. In fact you cannot judge from that alone; look at si and so as well. If free is small but si and so are also small (mostly 0), then don't worry; system performance is not being affected.

Io
-bi: total data read from block devices (disk reads) (KB/s)
-bo: total data written to block devices (disk writes) (KB/s)
Under random disk IO, the larger these two values (say beyond 1M), the larger the CPU IO-wait value you will see.

System
-in: interrupts per second
-cs: context switches per second
The larger these two values, the more CPU time you will see consumed by the kernel.

Cpu
-us: percentage of CPU time spent in user processes
High us means user code is consuming a lot of CPU; if usage stays above 50% for long periods, consider optimizing the program's algorithms or otherwise speeding it up (e.g. PHP/Perl).
-sy: percentage of CPU time spent in the kernel
High sy means the kernel is consuming a lot of CPU, which is not a healthy sign; investigate the cause.
-wa: percentage of CPU time spent waiting for IO
High wa means serious IO wait, possibly caused by heavy random disk access, or by a disk bandwidth bottleneck (block operations).
-id: percentage of CPU time idle

Scenario analysis
Which values in a vmstat output deserve attention?
-Procs r: many running processes; the system is busy.
-Io bo: disk writes somewhat heavy; for large-file writes anything under 10M is usually fine, for small-file writes under 2M is basically normal.
Cpu us: consistently above 50; acceptable during peak hours.
Cpu wa: slightly high.
Cpu id: consistently below 50; acceptable during peak hours.

Performance analysis with top

This command shows the state of running processes, CPU usage, system load, memory usage and so on. It is the most convenient tool for checking how the system's processes are doing; by default it shows the most active processes, sorted by how much CPU each uses. It can display the usage of all CPUs, and also which CPU each process is running on.
I habitually use it to see which processes, or which kinds of processes, consume the most CPU and memory, so as to quickly pin down processes with performance problems or abnormal behavior.

Memory fields shown by top (the second Mem line):
-actv
active: active memory pages, currently mapped into processes
-in_d
inactive_dirty: inactive pages whose data has been modified and must be written back to disk
-in_c
inactive_clean: inactive pages whose data is clean and can be reallocated

Question:
how do in_d and in_c differ from the cache and buffer memory?

My understanding:
actv, in_d and in_c are how the VM subsystem organizes and manages memory, while buffer is block-device read/write buffering and cache is the filesystem cache.

About the top tool:
process states shown by top (the STAT column):
-D: uninterruptible sleep, usually the state of an IO operation
-R: running, or waiting in the run queue
-S: sleeping
-T: stopped or suspended (e.g. by Ctrl+Z), possibly also being traced by strace
-Z: zombie; the process has finished, but its parent has not reaped it, so it will be handed over to init to be cleaned up
-W: no resident physical memory; its pages have been swapped out
<: a high-priority process
-N: low priority

Sometimes a process shows several state flags at once, e.g. SWN or SW.

From: http://bbs.linuxtone.org/thread-40-1-1.html