Splitting strings with PL/SQL

The PL/SQL built-in packages include dbms_utility.comma_to_table, which can split a string on the ',' delimiter, but it also restricts the type of the returned character array, so it is not a general-purpose routine. If your requirement happens to match its definition, it is worth considering. Its counterpart, dbms_utility.table_to_comma, converts a string array back into a comma-separated string.

The following implementation is adapted from an example by Tom Kyte.

Purpose: split a string with a PL/SQL function and return the corresponding NUMBER collection.

CREATE OR REPLACE TYPE typ_number_table AS TABLE OF NUMBER;

CREATE OR REPLACE FUNCTION split_string(p_string    IN VARCHAR2
                                       ,p_delimiter IN VARCHAR2 DEFAULT ',')
  RETURN typ_number_table IS
  l_string    VARCHAR2(32000) DEFAULT p_string || p_delimiter;
  l_num       NUMBER;
  l_num_table typ_number_table := typ_number_table();
BEGIN
  LOOP
    l_num := instr(l_string, p_delimiter);
    EXIT WHEN(nvl(l_num, 0) = 0);
    l_num_table.EXTEND;
    l_num_table(l_num_table.COUNT) := to_number(trim(substr(l_string, 1, l_num - 1)));
    l_string := substr(l_string, l_num + length(p_delimiter));
  END LOOP;
  RETURN l_num_table;
END;
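For readers more comfortable outside SQL, the same scan-and-cut loop can be sketched in Python (an illustrative mirror of the function above, not part of the original):

```python
def split_string(p_string: str, p_delimiter: str = ",") -> list:
    """Mirror of the PL/SQL loop: append a trailing delimiter, then
    repeatedly cut off and convert the piece before the next delimiter."""
    l_string = p_string + p_delimiter
    result = []
    while True:
        pos = l_string.find(p_delimiter)   # instr() analogue; -1 when absent
        if pos == -1:
            break
        result.append(float(l_string[:pos].strip()))
        l_string = l_string[pos + len(p_delimiter):]
    return result
```

split_string('1, 2, 3') yields [1.0, 2.0, 3.0], just as SELECT * FROM TABLE(split_string('1, 2, 3')) would return three NUMBER rows.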

A parallel query example

The system has 2 CPUs with 4 cores each, RAID 1+0 storage, and the database is Oracle 10gR2 64-bit.

1. First, run the query as-is
SQL> set autot trace
SQL> SELECT COUNT(*)
FROM channels
WHERE channels.snapshot_time > to_date('10/28/2008 00:00'
                                      ,'MM/DD/YYYY HH24:MI:SS')
      AND channels.snapshot_time <= to_date('10/29/2008 23:50'
                                           ,'MM/DD/YYYY HH24:MI:SS');

Elapsed: 00:01:15.96

Execution Plan
----------------------------------------------------------
Plan hash value: 1848729565

-----------------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name         | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |              |     1 |    11 |   533K  (7)| 01:46:45 |       |       |
|   1 |  SORT AGGREGATE            |              |     1 |    11 |            |          |       |       |
|*  2 |   FILTER                   |              |       |       |            |          |       |       |
|   3 |    PARTITION RANGE ITERATOR|              |    89M|   936M|   533K  (7)| 01:46:45 |   KEY |   KEY |
|*  4 |     INDEX FAST FULL SCAN   | CHANNELS_UNQ |    89M|   936M|   533K  (7)| 01:46:45 |   KEY |   KEY |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(TO_DATE('10/28/2008 00:00','MM/DD/YYYY HH24:MI:SS')<TO_DATE('10/29/2008
              23:50','MM/DD/YYYY HH24:MI:SS'))
   4 - filter("CHANNELS"."SNAPSHOT_TIME"<=TO_DATE('10/29/2008 23:50','MM/DD/YYYY HH24:MI:SS') AND
              "CHANNELS"."SNAPSHOT_TIME">TO_DATE('10/28/2008 00:00','MM/DD/YYYY HH24:MI:SS'))
Statistics
----------------------------------------------------------
       8962  recursive calls
          0  db block gets
     308209  consistent gets
     306423  physical reads
          0  redo size
        518  bytes sent via SQL*Net to client
        492  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
        141  sorts (memory)
          0  sorts (disk)
          1  rows processed

As shown above, the default plan used the index and took about 1 minute 16 seconds.

2. Force a full table scan
SQL> alter system flush buffer_cache;
SQL> alter system flush shared_pool;

SQL>
SELECT /*+ full(channels)*/
 COUNT(*)
FROM channels
WHERE channels.snapshot_time > to_date('10/28/2008 00:00'
                                      ,'MM/DD/YYYY HH24:MI:SS')
      AND channels.snapshot_time <= to_date('10/29/2008 23:50'
                                           ,'MM/DD/YYYY HH24:MI:SS');

Elapsed: 00:00:49.31

Execution Plan
----------------------------------------------------------
Plan hash value: 4225188383

-------------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name     | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |          |     1 |    11 |   714K  (6)| 02:22:51 |       |       |
|   1 |  SORT AGGREGATE            |          |     1 |    11 |            |          |       |       |
|*  2 |   FILTER                   |          |       |       |            |          |       |       |
|   3 |    PARTITION RANGE ITERATOR|          |    89M|   936M|   714K  (6)| 02:22:51 |   KEY |   KEY |
|*  4 |     TABLE ACCESS FULL      | CHANNELS |    89M|   936M|   714K  (6)| 02:22:51 |   KEY |   KEY |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(TO_DATE('10/28/2008 00:00','MM/DD/YYYY HH24:MI:SS')<TO_DATE('10/29/2008
              23:50','MM/DD/YYYY HH24:MI:SS'))
   4 - filter("CHANNELS"."SNAPSHOT_TIME"<=TO_DATE('10/29/2008 23:50','MM/DD/YYYY HH24:MI:SS')
              AND "CHANNELS"."SNAPSHOT_TIME">TO_DATE('10/28/2008 00:00','MM/DD/YYYY HH24:MI:SS'))
Statistics
----------------------------------------------------------
       8922  recursive calls
          0  db block gets
     404835  consistent gets
     403115  physical reads
          0  redo size
        518  bytes sent via SQL*Net to client
        492  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
        141  sorts (memory)
          0  sorts (disk)
          1  rows processed

The full table scan took 49 seconds.

3. Query with degree of parallelism 4

SQL> alter system flush buffer_cache;
SQL> alter system flush shared_pool;
SQL>
SELECT /*+ full(channels) parallel(channels,4) */
 COUNT(*)
FROM channels
WHERE channels.snapshot_time > to_date('10/28/2008 00:00'
                                      ,'MM/DD/YYYY HH24:MI:SS')
      AND channels.snapshot_time <= to_date('10/29/2008 23:50'
                                           ,'MM/DD/YYYY HH24:MI:SS');

Elapsed: 00:00:11.96

Execution Plan
----------------------------------------------------------
Plan hash value: 166461870

---------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name     | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |          |     1 |    11 |   198K  (6)| 00:39:37 |       |       |        |      |            |
|   1 |  SORT AGGREGATE         |          |     1 |    11 |            |          |       |       |        |      |            |
|*  2 |   PX COORDINATOR        |          |       |       |            |          |       |       |        |      |            |
|   3 |    PX SEND QC (RANDOM)  | :TQ10000 |     1 |    11 |            |          |       |       |  Q1,00 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE      |          |     1 |    11 |            |          |       |       |  Q1,00 | PCWP |            |
|*  5 |      FILTER             |          |       |       |            |          |       |       |  Q1,00 | PCWC |            |
|   6 |       PX BLOCK ITERATOR |          |    89M|   936M|   198K  (6)| 00:39:37 |   KEY |   KEY |  Q1,00 | PCWC |            |
|*  7 |        TABLE ACCESS FULL| CHANNELS |    89M|   936M|   198K  (6)| 00:39:37 |   KEY |   KEY |  Q1,00 | PCWP |            |
---------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(TO_DATE('10/28/2008 00:00','MM/DD/YYYY HH24:MI:SS')<TO_DATE('10/29/2008 23:50','MM/DD/YYYY HH24:MI:SS'))
   5 - filter(TO_DATE('10/28/2008 00:00','MM/DD/YYYY HH24:MI:SS')<TO_DATE('10/29/2008 23:50','MM/DD/YYYY HH24:MI:SS'))
   7 - filter("CHANNELS"."SNAPSHOT_TIME"<=TO_DATE('10/29/2008 23:50','MM/DD/YYYY HH24:MI:SS') AND
              "CHANNELS"."SNAPSHOT_TIME">TO_DATE('10/28/2008 00:00','MM/DD/YYYY HH24:MI:SS'))
Statistics
----------------------------------------------------------
       9661  recursive calls
          4  db block gets
     405834  consistent gets
     403140  physical reads
        632  redo size
        518  bytes sent via SQL*Net to client
        492  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
        152  sorts (memory)
          0  sorts (disk)
          1  rows processed

The full scan with parallelism took about 12 seconds.
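The timings line up roughly with the degree of parallelism; a quick arithmetic check (Python, illustrative only):

```python
serial_full_scan = 49.31    # seconds, full scan without parallelism
parallel_full_scan = 11.96  # seconds, full scan with parallel(channels,4)

# Speedup comes out close to the requested DOP of 4
speedup = serial_full_scan / parallel_full_scan
```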

Enabling asynchronous I/O for Oracle 10gR2 on RHEL 5

A record of the steps; the content is drawn mainly from this article

Platform: Red Hat Enterprise Linux 5 64-bit, Oracle 10gR2 10.2.0.4 64-bit

1. As root, install the following required RPM packages

# rpm -Uvh libaio-0.3.106-3.2.x86_64.rpm
# rpm -Uvh libaio-devel-0.3.106-3.2.x86_64.rpm

2. Enable asynchronous I/O at the system level
  Unlike the RHEL 3 procedure described in [Note 225751.1], there is no need to set aio-max-size, and that file does not exist under '/proc/sys/fs', because the limit on I/O size was removed starting with the 2.6 kernel [Note 549075.1]. Also, according to [Note 471846.1], Oracle recommends setting aio-max-nr to 1048576 or higher.

# echo 1048576 > /proc/sys/fs/aio-max-nr

To make this kernel parameter permanent, add the following line to /etc/sysctl.conf
fs.aio-max-nr = 1048576
and apply it with
# /sbin/sysctl -p

3. Enable asynchronous I/O at the database level
  First, change the database parameters. Unlike the RHEL 3 procedure in [Note 225751.1], Oracle 10gR2 has asynchronous I/O support enabled by default, so there is no need to relink the database software; the file 'skgaioi.o' does not exist under '$ORACLE_HOME/rdbms/lib' either. In some situations, however, Oracle cannot report I/O activity or events to the operating system [Note 365416.1], so the following steps are needed.

From here on, switch to the oracle user

SQL> alter system set disk_asynch_io=TRUE scope=spfile;

SQL> alter system set filesystemio_options=asynch scope=spfile;

SQL>shutdown immediate
$ cd $ORACLE_HOME/rdbms/lib
$ ln -s /usr/lib/libaio.so.1 skgaio.o
$ make PL_ORALIBS=-laio -f ins_rdbms.mk async_on
SQL>startup

  In Oracle 10gR2, AIO is already enabled by default. You can check whether the oracle binary was built with AIO support using ldd or nm; any output means support is enabled.

[oraprod@db01 ~]$ /usr/bin/ldd $ORACLE_HOME/bin/oracle | grep libaio
libaio.so.1 => /usr/lib64/libaio.so.1 (0x00002aaaac4a9000)
[oraprod@db01 ~]$ /usr/bin/nm $ORACLE_HOME/bin/oracle | grep io_getevent
w io_getevents@@LIBAIO_0.4

4. Check whether asynchronous I/O is in use
  According to [Note 370579.1], you can verify that AIO is active by looking at the slabinfo statistics (slab is the Linux memory allocator). When the AIO-related memory structures have been allocated, nonzero values in the second and third columns of the kiocb row mean AIO is in use. Unlike kernel 2.4.x there is no kiobuf row, because kiobuf was removed from the kernel as of 2.5.43.

$ cat /proc/slabinfo | grep kio
kioctx 64 110 384 10 1 : tunables 54 27 8 : slabdata 11 11 0
kiocb 13 315 256 15 1 : tunables 120 60 8 : slabdata 21 21 44
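The check described above boils down to finding the kiocb row and testing its second and third columns; a small illustrative sketch in Python (not part of the original procedure):

```python
def aio_in_use(slabinfo_text: str) -> bool:
    """True when the kiocb slab row shows allocated AIO control blocks,
    i.e. columns 2 and 3 (active objects, total objects) are both nonzero."""
    for line in slabinfo_text.splitlines():
        fields = line.split()
        if fields and fields[0] == "kiocb":
            return int(fields[1]) > 0 and int(fields[2]) > 0
    return False

# The kiocb row from the output above parses as "in use":
sample = "kiocb 13 315 256 15 1 : tunables 120 60 8 : slabdata 21 21 44"
```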

An example of the CBO misusing an index

The channels table has the index channels_unq(svr_grp_id, channels_record_id, snapshot_time)

Execute the following statement

SELECT channels.channel_record_id
      ,SUM(channels.max_viewers) AS viewer_sum
FROM channels
WHERE channels.snapshot_time > to_date('11/25/2008 23:50'
                                      ,'MM/DD/YYYY HH24:MI:SS')
      AND channels.snapshot_time <= to_date('12/06/2008 23:50'
                                           ,'MM/DD/YYYY HH24:MI:SS')
GROUP BY channels.channel_record_id;

474 rows selected.

Elapsed: 02:06:21.62

Execution Plan
----------------------------------------------------------
Plan hash value: 443160641

---------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name         | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |              |     1 |    39 |   365   (1)| 00:00:05 |       |       |
|   1 |  HASH GROUP BY                       |              |     1 |    39 |   365   (1)| 00:00:05 |       |       |
|*  2 |   FILTER                             |              |       |       |            |          |       |       |
|   3 |    PARTITION RANGE ITERATOR          |              |     1 |    39 |   364   (0)| 00:00:05 |   KEY |   KEY |
|   4 |     TABLE ACCESS BY LOCAL INDEX ROWID| CHANNELS     |     1 |    39 |   364   (0)| 00:00:05 |   KEY |   KEY |
|*  5 |      INDEX SKIP SCAN                 | CHANNELS_UNQ |     1 |       |   364   (0)| 00:00:05 |   KEY |   KEY |
---------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(TO_DATE('11/25/2008 23:50','MM/DD/YYYY HH24:MI:SS')<TO_DATE('12/06/2008 23:50','MM/DD/YYYY HH24:MI:SS'))
   5 - access("CHANNELS"."SNAPSHOT_TIME">TO_DATE('11/25/2008 23:50','MM/DD/YYYY HH24:MI:SS') AND "CHANNELS"."SNAPSHOT_TIME"<=TO_DATE('12/06/2008 23:50','MM/DD/YYYY HH24:MI:SS'))
       filter("CHANNELS"."SNAPSHOT_TIME">TO_DATE('11/25/2008 23:50','MM/DD/YYYY HH24:MI:SS') AND "CHANNELS"."SNAPSHOT_TIME"<=TO_DATE('12/06/2008 23:50','MM/DD/YYYY HH24:MI:SS'))

Statistics
----------------------------------------------------------
       8100  recursive calls
          0  db block gets
  577374676  consistent gets
    4631561  physical reads
      59904  redo size
      11269  bytes sent via SQL*Net to client
        833  bytes received via SQL*Net from client
         33  SQL*Net roundtrips to/from client
         91  sorts (memory)
          0  sorts (disk)
        474  rows processed

It took over two hours. As shown at step 5, the optimizer incorrectly chose the index channels_unq.

Add a hint to the query to force a full table scan

SELECT /*+ FULL(channels) */channels.channel_record_id
      ,SUM(channels.max_viewers) AS viewer_sum
FROM channels
WHERE channels.snapshot_time > to_date('11/25/2008 23:50'
                                      ,'MM/DD/YYYY HH24:MI:SS')
      AND channels.snapshot_time <= to_date('12/06/2008 23:50'
                                           ,'MM/DD/YYYY HH24:MI:SS')
GROUP BY channels.channel_record_id;

474 rows selected.

Elapsed: 00:12:33.59

Execution Plan
----------------------------------------------------------
Plan hash value: 4197359631

-------------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name     | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |          |     1 |    39 |  1770K  (1)| 05:54:04 |       |       |
|   1 |  HASH GROUP BY             |          |     1 |    39 |  1770K  (1)| 05:54:04 |       |       |
|*  2 |   FILTER                   |          |       |       |            |          |       |       |
|   3 |    PARTITION RANGE ITERATOR|          |     1 |    39 |  1770K  (1)| 05:54:04 |   KEY |   KEY |
|*  4 |     TABLE ACCESS FULL      | CHANNELS |     1 |    39 |  1770K  (1)| 05:54:04 |   KEY |   KEY |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(TO_DATE('11/25/2008 23:50','MM/DD/YYYY HH24:MI:SS')<TO_DATE('12/06/2008 23:50','MM/DD/YYYY HH24:MI:SS'))
   4 - filter("CHANNELS"."SNAPSHOT_TIME">TO_DATE('11/25/2008 23:50','MM/DD/YYYY HH24:MI:SS') AND "CHANNELS"."SNAPSHOT_TIME"<=TO_DATE('12/06/2008 23:50','MM/DD/YYYY HH24:MI:SS'))
Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
    2762409  consistent gets
    2391028  physical reads
       4236  redo size
      11269  bytes sent via SQL*Net to client
        833  bytes received via SQL*Net from client
         33  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        474  rows processed

This time it used the full scan and finished in about 12 and a half minutes.

I had no time to track down the exact cause; two guesses:
1. The statistics may not have been gathered recently
2. Since bind variables were used (simplified away in this example), it may be a bind variable peeking problem

Linux I/O performance testing

Wanting to get a feel for the I/O performance of our development environment, I tested it with four tools: dd, orion, iozone, and bonnie++.
Development environment configuration:
Intel SR1625 server, 2 CPUs, 32GB RAM, RAID 1+0 on the on-board controller over 8 SATA disks at 7200rpm
OS: RHEL 5.3 64-bit
Since physical memory is 32GB, every test used 60GB+ of data to avoid cache effects.
1. First, a rough measurement with the built-in dd command, 8k block size
dd only gives an approximate result, and it measures sequential rather than random I/O
Read test
# time dd if=/dev/sda2 of=/dev/null bs=8k count=8388608
8388608+0 records in
8388608+0 records out
68719476736 bytes (69 GB) copied, 516.547 seconds, 133 MB/s
real    8m36.926s
user    0m0.117s
sys     0m55.216s
Write test
# time dd if=/dev/zero of=/opt/iotest bs=8k count=8388608
8388608+0 records in
8388608+0 records out
68719476736 bytes (69 GB) copied, 888.398 seconds, 77.4 MB/s
real    14m48.743s
user    0m3.678s
sys     2m47.158s
Read/write test
# time dd if=/dev/sda2 of=/opt/iotest bs=8k count=8388608
8388608+0 records in
8388608+0 records out
68719476736 bytes (69 GB) copied, 1869.89 seconds, 36.8 MB/s
real    31m10.343s
user    0m2.613s
sys     3m25.548s
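dd's MB/s figure is simply bytes copied divided by elapsed seconds, in decimal megabytes; checking the three runs above (Python, illustrative only):

```python
def mb_per_s(bytes_copied: int, seconds: float) -> float:
    # dd reports decimal megabytes: 1 MB = 1_000_000 bytes
    return bytes_copied / seconds / 1e6

read_mbps = mb_per_s(68719476736, 516.547)    # sequential read, ~133 MB/s
write_mbps = mb_per_s(68719476736, 888.398)   # sequential write, ~77.4 MB/s
mixed_mbps = mb_per_s(68719476736, 1869.89)   # concurrent read+write, ~36.8 MB/s
```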
2. Next, a test with Oracle's orion tool
Unpack it and it is ready to use
# gzip -d orion_linux_x86-64.gz
Testing asynchronous I/O requires the libaio library
# export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib64
# echo $LD_LIBRARY_PATH
:/opt/oracle/product/10.2.0/lib:/usr/lib64
Create a configuration file mytest.lun listing the partitions to test. Note that the file name prefix must match the testname used below
# vi mytest.lun
Check the contents of mytest.lun
# cat mytest.lun
/dev/sda2
Start with a simple test
# ./orion_linux_x86-64 -run simple -testname mytest -num_disks 8
Check the test results
# cat mytest_20081111_1431_summary.txt
ORION VERSION 11.1.0.7.0
Commandline:
-run simple -testname mytest -num_disks 8
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:,      0
Large Columns:,      0,      1,      2,      3,      4,      5,      6,      7,      8,      9,     10,     11,     12,     13,     14,     15,     16
Total Data Points: 38
Name: /dev/sda2 Size: 629143441920
1 FILEs found.
Maximum Large MBPS=56.97 @ Small=0 and Large=7
Maximum Small IOPS=442 @ Small=40 and Large=0
Minimum Small Latency=14.62 @ Small=1 and Large=0
Maximum MBPS is 56.97; maximum IOPS is 442
Now test 8k random reads
# ./orion_linux_x86-64 -run advanced -testname mytest -num_disks 8 -size_small 8 -size_large 8 -type rand &
Check the results
# cat mytest_20081111_1519_summary.txt
ORION VERSION 11.1.0.7.0
Commandline:
-run advanced -testname mytest -num_disks 8 -size_small 8 -size_large 8 -type rand
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 8 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:,      0
Large Columns:,      0,      1,      2,      3,      4,      5,      6,      7,      8,      9,     10,     11,     12,     13,     14,     15,     16
Total Data Points: 38
Name: /dev/sda2 Size: 629143441920
1 FILEs found.
Maximum Large MBPS=3.21 @ Small=0 and Large=13
Maximum Small IOPS=448 @ Small=38 and Large=0
Minimum Small Latency=15.16 @ Small=1 and Large=0
Maximum MBPS is 3.21 (why so low??); maximum IOPS is 448
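The low MBPS is actually consistent rather than surprising: with each random read only 8 KB, 448 IOPS bounds the achievable bandwidth. A back-of-the-envelope check (illustrative only):

```python
iops = 448     # maximum small IOPS reported by orion
io_kb = 8      # size of each random read in KB

# Upper bound on throughput in MB/s; close to orion's measured 3.21
mbps_ceiling = iops * io_kb / 1024
```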
Next, a 1M sequential read test; it failed, reason unknown...
# ./orion_linux_x86-64 -run advanced -testname mytest -num_disks 8 -size_small 1024 -size_large 1024 -type seq
ORION: ORacle IO Numbers — Version 11.1.0.7.0
mytest_20081114_1349
Test will take approximately 73 minutes
Larger caches may take longer
rwbase_run_test: rwbase_reap_req failed
rwbase_run_process: rwbase_run_test failed
rwbase_rwluns: rwbase_run_process failed
orion_warm_cache: Warming cache failed. Continuing
Check the results
# cat mytest_20081111_1620_summary.txt
ORION VERSION 11.1.0.7.0
Commandline:
-run advanced -testname mytest -num_disks 8 -size_small 1024 -size_large 1024 -type seq
This maps to this test:
Test: mytest
Small IO size: 1024 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 4
Force streams to separate disks: No
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
No results; the test failed
3. A test with iozone
Install
# tar -xvf iozone3_345.tar
# make linux-AMD64
Use a 64GB file, test only write/rewrite and read/reread with record sizes from 4k to 16k, and generate an Excel file iozone.wks at the same time
# ./iozone -Rab iozone.wks -s64G -i 0 -i 1 -y 4k -q 16k
        Iozone: Performance Test of File I/O
                Version $Revision: 3.345 $
                Compiled for 64 bit mode.
                Build: linux-AMD64
        Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
                     Al Slater, Scott Rhine, Mike Wisner, Ken Goss
                     Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
                     Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
                     Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
                     Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
                     Fabrice Bacchella, Zhenghua Xue, Qin Li.
        Run began: Tue Nov 11 10:23:25 2008
        Excel chart generation enabled
        Auto Mode
        File size set to 67108864 KB
        Using Minimum Record Size 4 KB
        Using Maximum Record Size 16 KB
        Command line used: ./iozone -Rab iozone.wks -s64G -i 0 -i 1 -y 4k -q 16k
        Output is in Kbytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                            random  random    bkwd   record   stride                                 
              KB  reclen   write rewrite    read    reread    read   write    read  rewrite     read   fwrite frewrite   fread  freread
        67108864       4   72882   69470   104898   125512
        67108864       8   72083   69256   133689   109061
        67108864      16   73375   69155   142019   116034
iozone test complete.
Excel output is below:
"Writer report"
        "4"  "8"  "16"
"67108864"   72882  72083  73375
"Re-writer report"
        "4"  "8"  "16"
"67108864"   69470  69256  69155
"Reader report"
        "4"  "8"  "16"
"67108864"   104898  133689  142019
"Re-Reader report"
        "4"  "8"  "16"
"67108864"   125512  109061  116034
As shown, 8k writes run at about 72MB/s and reads at about 133MB/s, fairly close to the dd results
Now test 8k random read/write on a 64GB file
# ./iozone -Rab iozone.wks -s64G -i 2 -y 8k -q 8k
        Iozone: Performance Test of File I/O
                Version $Revision: 3.345 $
                Compiled for 64 bit mode.
                Build: linux-AMD64
        Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
                     Al Slater, Scott Rhine, Mike Wisner, Ken Goss
                     Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
                     Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
                     Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
                     Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
                     Fabrice Bacchella, Zhenghua Xue, Qin Li.
        Run began: Fri Nov 14 15:52:01 2008
        Excel chart generation enabled
        Auto Mode
        File size set to 67108864 KB
        Using Minimum Record Size 8 KB
        Using Maximum Record Size 8 KB
        Command line used: ./iozone -Rab iozone.wks -s64G -i 2 -y 8k -q 8k
        Output is in Kbytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                            random  random    bkwd   record   stride                                 
              KB  reclen   write rewrite    read    reread    read   write    read  rewrite     read   fwrite frewrite   fread  freread
        67108864       8
Error reading block at 6501007360
read: Success
It errored out (??)
4. Finally, a test with bonnie++
Install
# export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib64
# ./configure
# make
# make install
Start the test; the default file size is twice the amount of RAM
# bonnie++ -d /opt/IOTest/ -m sva17 -u root
Using uid:0, gid:0.
Writing with putc()…done
Writing intelligently…done
Rewriting…done
Reading with getc()…done
Reading intelligently…done
start ’em…done…done…done…
Create files in sequential order…done.
Stat files in sequential order…done.
Delete files in sequential order…done.
Create files in random order…done.
Stat files in random order…done.
Delete files in random order…done.
Version 1.03e       ——Sequential Output—— –Sequential Input- –Random-
                    -Per Chr- –Block– -Rewrite- -Per Chr- –Block– –Seeks–
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
sva17           63G 52391  84 35222   7 34323   6 56362  88 131568  10 176.7   0
                    ——Sequential Create—— ——–Random Create——–
                    -Create– –Read— -Delete– -Create– –Read— -Delete–
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
sva17,63G,52391,84,35222,7,34323,6,56362,88,131568,10,176.7,0,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
Sequential write: 52391KB/s per character at 84% CPU; 35222KB/s per block at 7% CPU
Sequential read: 56362KB/s per character at 88% CPU; 131568KB/s per block at 10% CPU
Random seeks: 176.7/s at 0% CPU
The last two sections are all '+++++': those file operations finished too quickly for bonnie++ to report a meaningful figure
Conclusion: different tools build different test workloads with different emphases, so their results can vary considerably.
MBPS:
dd and iozone are close: about 130+ reading and 70+ writing.
orion reads at about 57; writes were not tested (the write test destroys all files on the partition!)
bonnie++ reads about 130 per block and writes about 35; per character it reads about 56 and writes about 52
IOPS:
dd: no result
orion: about 440 (read-only)
iozone: errored out
bonnie++: 176.7 (read/write)

The long-awaited LISTAGG

Reading Tom's "11 things about 11gr2" slides, he covers how to aggregate rows into a delimited string:
1. In 9i you had to implement your own aggregate function

2. In 10g, sys_connect_by_path can be used

SQL> select deptno,
  2         substr(
  3         max(sys_connect_by_path(ename, '; ')),
  4         3) enames
  5    from (
  6  select deptno,
  7         ename,
  8         row_number()
  9         over
 10         (partition by deptno
 11          order by ename) rn
 12    from emp
 13         )
 14   start with rn = 1
 15  connect by prior deptno = deptno
 16      and prior rn+1 = rn
 17    group by deptno
 18    order by deptno
 19  /

DEPTNO ENAMES
———- ——————–
        10 CLARK; KING; MILLER
        20 ADAMS; FORD; JONES;
           SCOTT; SMITH
 
        30 ALLEN; BLAKE;
           JAMES; MARTIN;
           TURNER; WARD

3. 11gR2 provides LISTAGG, which is more convenient
SQL> select deptno,
  2         listagg( ename, '; ' )
  3         within group
  4         (order by ename) enames
  5    from emp
  6   group by deptno
  7   order by deptno
  8  /

    DEPTNO ENAMES
———- ——————–
        10 CLARK; KING; MILLER
        20 ADAMS; FORD; JONES;
           SCOTT; SMITH

        30 ALLEN; BLAKE;
           JAMES; MARTIN;
           TURNER; WARD
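Semantically, LISTAGG is a group-wise ordered string join. An illustrative Python equivalent over made-up emp-style rows (hypothetical data, not taken from the database above):

```python
from itertools import groupby

emp = [(20, "FORD"), (10, "KING"), (10, "CLARK"),
       (20, "ADAMS"), (10, "MILLER")]

def listagg(rows, sep="; "):
    """Effect of GROUP BY deptno with LISTAGG(ename, '; ')
    WITHIN GROUP (ORDER BY ename): sort by (deptno, ename),
    group on deptno, join the names in order."""
    out = {}
    for deptno, grp in groupby(sorted(rows), key=lambda r: r[0]):
        out[deptno] = sep.join(ename for _, ename in grp)
    return out
```

listagg(emp) yields {10: "CLARK; KING; MILLER", 20: "ADAMS; FORD"}, one delimited string per department.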