How to handle large volumes of data on PostgreSQL?

mailing list: pgsql-admin.postgresql.org

from: Johann Spies

..loaded about 4,900,000,000 records into one of two tables (the second table held 7,200,684 rows) in a database called 'firewall', built an index on one date field (which took a few days), used that index to copy about 3,800,000 of those records from the first table to a third table, deleted the copied records from the first table, and dropped the third table.
This took about a week on a server with two quad-core CPUs and 8 GB of RAM.
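
As a rough sketch of that copy-and-delete cycle (the table and column names big_table, archive, and logdate are hypothetical; the original post does not give the schema):

    -- Build an index on the date field; on billions of rows this can take days.
    CREATE INDEX big_table_logdate_idx ON big_table (logdate);

    -- Copy the matching records to a third table, using that index.
    CREATE TABLE archive AS
    SELECT * FROM big_table
    WHERE logdate < DATE '2009-01-01';

    -- Delete the copied records from the first table, then drop the copy.
    DELETE FROM big_table WHERE logdate < DATE '2009-01-01';
    DROP TABLE archive;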

Table partitioning is needed.
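
A minimal sketch of what that looks like with PostgreSQL's declarative partitioning (the table name firewall_log and the date column logdate are assumptions, not from the original post):

    -- Partition the big table by range on the date field (PostgreSQL 10+).
    CREATE TABLE firewall_log (
        id      bigint,
        logdate date NOT NULL,
        payload text
    ) PARTITION BY RANGE (logdate);

    -- One partition per month. Queries filtering on logdate scan only
    -- the relevant partitions, and old data can be removed by dropping
    -- a partition instead of running a slow bulk DELETE.
    CREATE TABLE firewall_log_2009_01 PARTITION OF firewall_log
        FOR VALUES FROM ('2009-01-01') TO ('2009-02-01');
    CREATE TABLE firewall_log_2009_02 PARTITION OF firewall_log
        FOR VALUES FROM ('2009-02-01') TO ('2009-03-01');

With this layout, the week-long copy/delete cycle described above collapses to a single DROP TABLE on the old partition.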

Distribute tables across different disks through tablespaces. Tweak the shared_buffers and work_mem settings.
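
A minimal sketch of both ideas (the tablespace name, path, and memory values here are illustrative, not recommendations from the thread):

    -- Put indexes (or whole tables) on a separate disk via a tablespace.
    -- The directory must exist and be owned by the postgres OS user.
    CREATE TABLESPACE fastdisk LOCATION '/mnt/disk2/pgdata';
    CREATE INDEX firewall_log_logdate_idx
        ON firewall_log (logdate) TABLESPACE fastdisk;

    -- work_mem can be raised per session for big sorts and hashes.
    SET work_mem = '256MB';

    -- shared_buffers is set in postgresql.conf and needs a restart, e.g.:
    --   shared_buffers = 2GB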

RAID 5/6 are very, very slow when it comes to small disk *writes*.

At minimum, use a hardware RAID controller with RAID 0 or 10, and 10k rpm or 15k rpm drives; SAS is preferred.

On SATA, about the only quick disks are the Western Digital Raptors.

Look at the view called pg_stat_activity. Do: SELECT * FROM pg_stat_activity;
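
Beyond the bare SELECT *, a slightly more focused query (column names as in PostgreSQL 9.2 and later; older releases used procpid and current_query) shows the longest-running active statements first:

    -- List active statements, longest-running first, to spot
    -- queries stuck in a multi-day operation.
    SELECT pid, state, now() - query_start AS runtime, query
    FROM pg_stat_activity
    WHERE state <> 'idle'
    ORDER BY runtime DESC;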
