[ATrpms-users] mythfilldatabase increased run time
schanzle at nist.gov
Sat Jan 21 22:42:25 CET 2012
On 01/20/2012 03:16 PM, Brian Long wrote:
> On Fri, Jan 20, 2012 at 1:51 PM, John Welch <jrw3319 at gmail.com> wrote:
> Great, glad it worked out for you. My process *only* takes 20-25 minutes to run, so maybe I'm not as badly off as I thought. I don't know if I just became more aware of this process running after reading about it on the MythTV mailing list, but I just don't remember it hammering the system like it does now, prior to the upgrade to F16.
> I never would have known about the change either if I hadn't seen a couple of threads about it over on myth list.
> Good luck!
> Not sure if you guys saw the thread on mythtv-users about mythfilldatabase + MySQL hitting the disk hard. MySQL's config usually has "tmpdir = /tmp" which is not tmpfs on many hosts (it's part of /) and mythfilldatabase could create up to 500M of temp tables for each run. This was noticed when someone was trying to extend the life of their SSD boot disk.
> They changed /etc/my.cnf and set "tmpdir = /dev/shm" or remounted /tmp as tmpfs and it sped things up plus reduced wear on the SSD. Other folks suggested increasing the query cache, etc, but setting tmpdir to somewhere tmpfs was the biggest win.
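For anyone wanting to try the same change, here is a sketch of the two approaches mentioned above (the tmpfs size is an example, not taken from anyone's actual setup):

```
# /etc/my.cnf -- point MySQL's scratch space at a RAM-backed filesystem
[mysqld]
tmpdir = /dev/shm

# ...or instead remount /tmp as tmpfs via a line like this in /etc/fstab:
# tmpfs  /tmp  tmpfs  defaults,size=1G  0 0
```

Either way, mysqld has to be restarted for the tmpdir change to take effect.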
Thanks for the suggestion. I tried it (setting tmpdir), restarted mysqld (after stopping mythbackend...doh, check for recordings in progress, stupid!), and noticed no significant difference this morning (8.5 mins), within a few seconds of yesterday's run.
I've read that temporary tables may spill to disk if there is insufficient memory/key space. I currently have no files from mysql in /dev/shm, but that's not to say they didn't come and go. Also, when I was stracing the long 2-hour run, it was doing massive numbers of inserts, so those may not benefit. I may not have been resorting to on-disk temp tables anyway, since I bumped up several of my buffer/cache sizes years ago. Here's the nitty-gritty of my.cnf:
# egrep -v '^#|^$' /etc/my.cnf
key_buffer = 16M
table_cache = 128
sort_buffer_size = 2M
myisam_sort_buffer_size = 8M
query_cache_size = 16M
I stumbled over "mysqltuner.pl" (google for it), a neat perl script that digests MySQL's internal metrics, summarizes them, and gives suggestions at the end. Beyond increasing thread_cache_size (I don't have multiple connections), it suggested increasing table_cache above 128. It also said: "Temporary tables created on disk: 5% (58 on disk / 1K total)", so I don't think that's a big bottleneck for me now. mysqltuner also pointed out that the bulk of my tables (84M) are MyISAM, not InnoDB. I wasn't sure, so that's helpful when reading the MySQL documentation, where it's easy to get lost in details that may not apply to your situation. I also have a Read/Write ratio of 11%/89%, which supports my mythfilldatabase observations.
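That "temporary tables created on disk" figure is just the ratio of two server status counters (Created_tmp_disk_tables over Created_tmp_tables, which you can read with SHOW GLOBAL STATUS LIKE 'Created_tmp%'). A minimal sketch of the arithmetic, using the rounded counter values from my run above as example inputs:

```python
# Sketch: recompute the on-disk temp-table ratio that mysqltuner reports.
# The counter values below are the rounded figures from the run described
# above ("58 on disk / 1K total"); on a live server you would read them via:
#   SHOW GLOBAL STATUS LIKE 'Created_tmp%';

status = {
    "Created_tmp_disk_tables": 58,   # internal temp tables created on disk
    "Created_tmp_tables": 1000,      # internal temp tables created in total
}

def disk_tmp_ratio(status):
    """Fraction of internal temporary tables that ended up on disk."""
    return status["Created_tmp_disk_tables"] / status["Created_tmp_tables"]

print(f"Temporary tables created on disk: {disk_tmp_ratio(status):.1%}")
```

With these inputs that comes out to about 5.8%, consistent with the "5%" mysqltuner printed; anything in the single digits suggests temp-table spill isn't the bottleneck.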
BTW, my root filesystem (including /var/lib/mysql) is on an md RAID1 device using old traditional "spinning rust" 1TB 7200 RPM Hitachi disks (only avg 50 MB/sec, 80 near the beginning of the disk). Your mileage may vary.
It might be interesting, as an experiment, to temporarily change MySQL's datadir to /dev/shm too and run mythfilldatabase, just to see how fast it would be. [Of course, that would mean stopping mythbackend and mysqld, copying the datadir, changing my.cnf, restarting the daemons, running mythfilldatabase, then putting everything back.]
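A rough sketch of that experiment as shell commands (paths and service names are Fedora defaults, offered only as an illustration; it assumes /dev/shm has room for the whole datadir and that nothing is recording):

```
# EXPERIMENT ONLY -- /dev/shm is RAM-backed; its contents vanish on reboot.
systemctl stop mythbackend mysqld

# Copy the live datadir into tmpfs, preserving ownership and permissions
cp -a /var/lib/mysql /dev/shm/mysql

# Point mysqld at the copy: in /etc/my.cnf under [mysqld], set
#   datadir = /dev/shm/mysql
systemctl start mysqld
time mythfilldatabase

# Put everything back: stop mysqld, restore the original datadir line
# in my.cnf, then restart both daemons
systemctl stop mysqld
systemctl start mysqld mythbackend
```

Comparing the `time` output against a normal run would show how much of the runtime is disk-bound versus CPU-bound.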
Thanks again for the suggestion, hope the above helps someone!