FreeBSD and ZFS

0 like 0 dislike
32 views
As many people know, there is an OS called FreeBSD. Whether it is good or bad doesn't matter; that is beyond the scope of this question. Those who like writing things such as "FreeBSD, RIP" are kindly asked to head over to LOR (linux.org.ru) and leave that remark there.
There is also a file system called ZFS, developed at the recently swallowed-up Sun Microsystems. The file system is very interesting and really quite remarkable.

I'm a systems administrator at Habrahabr, and a serious upgrade of the server farm is expected fairly soon. One of the ideas is to use ZFS. I have recently been testing ZFS on FreeBSD 8.1-RELEASE. So far it has been smooth sailing: not a single kernel panic, and the speed is satisfactory. The feedback on the Internet, however, is very mixed, sometimes simply inadequate. The level of abstraction the file system gives is simply stunning: I can juggle datasets on the fly however I like; the speed is good, at times faster than UFS2+SU, and deploying it is easy too. I'm delighted with the transparent compression of datasets, the snapshots and the other goodies. I tried it on my test server: everything works fine, no problems noticed.
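To give a sense of what I mean by managing things on the fly, the workflow looks roughly like this (a minimal sketch; the pool name "tank", the dataset name and the device names are made up for illustration):

# create a mirrored pool from two spare disks (hypothetical device names)
zpool create tank mirror ada1 ada2

# carve out a dataset and change its properties on the fly
zfs create tank/www
zfs set compression=on tank/www
zfs set quota=50G tank/www

# snapshots are instant and cheap
zfs snapshot tank/www@before-upgrade
zfs list -t snapshot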

Still, I would like to hear from people who have actually dealt with deploying ZFS on a production server running FreeBSD and have run this combination under real load for a decent amount of time. Synthetic tests are also interesting, but to a lesser extent, because synthetic tests are, well, synthetic. Note: I only use stable OS releases, so the question mostly concerns those.

7 Answers

0 like 0 dislike
About a year with FreeBSD + ZFS in production on a file server. Average output is about 1 TB of traffic per day. Server: CPU: 2x Opteron 2214, RAM: 32 GB, controller: AMCC 9650SE-12M, drives: Seagate NS series, 10-12 of them, plus an Intel X25-M SSD as the cache device. No hangs related to ZFS in all the time the server has been running. The only problem has been replacing failed drives in raidz: you run a replace on the drive, the pool resilvers onto the new drive, but the old one does not leave the configuration. I found a PR for this bug; whether it has been fixed or not, I don't know.
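For reference, the replace procedure in question is roughly the following (a sketch, not the exact commands from that server; the pool name and device names are hypothetical):

# after physically swapping the dead disk, start the rebuild onto it
zpool replace tank da5          # same slot reused; or: zpool replace tank <old> <new>

# watch the resilver and the resulting configuration
zpool status -v tank

The complaint above is that once the resilver finishes, the old device stays listed in the zpool status output instead of dropping out.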
0 like 0 dislike
On a home server (Celeron 3300, 2 GB RAM, 4x WD 2 TB EARS; FreeBSD 8.1 amd64) I use ZFS in two configurations: a 4-disk mirror (RAID1) and a RAID10:

root@server:/usr/local/etc (1768) zpool status
  pool: storage
 state: ONLINE
 scrub: none requested
config:

        NAME              STATE     READ WRITE CKSUM
        storage           ONLINE       0     0     0
          mirror          ONLINE       0     0     0
            gpt/storage0  ONLINE       0     0     0
            gpt/storage3  ONLINE       0     0     0
          mirror          ONLINE       0     0     0
            gpt/storage1  ONLINE       0     0     0
            gpt/storage2  ONLINE       0     0     0

errors: No known data errors

  pool: zroot
 state: ONLINE
 scrub: none requested
config:

        NAME             STATE     READ WRITE CKSUM
        zroot            ONLINE       0     0     0
          mirror         ONLINE       0     0     0
            gpt/system0  ONLINE       0     0     0
            gpt/system2  ONLINE       0     0     0
            gpt/system3  ONLINE       0     0     0
            gpt/system1  ONLINE       0     0     0

errors: No known data errors

The drives are partitioned like this (with a 2 MB offset at the start of each disk to work around the Advanced Format issues of the WD EARS drives):

root@server:/usr/local/etc (1771) gpart show
=>        34  3907029101  ada0  GPT  (1.8 T)
          34        2014        - free -  (1.0 M)
        2048         128     1  freebsd-boot  (64K)
        2176     8388608     2  freebsd-swap  (4.0 G)
     8390784    41943040     3  freebsd-zfs  (20G)
    50333824  3856695311     4  freebsd-zfs  (1.8 T)

=>        34  3907029101  ada1  GPT  (1.8 T)
          34        2014        - free -  (1.0 M)
        2048         128     1  freebsd-boot  (64K)
        2176     8388608     2  freebsd-swap  (4.0 G)
     8390784    41943040     3  freebsd-zfs  (20G)
    50333824  3856695311     4  freebsd-zfs  (1.8 T)

=>        34  3907029101  ada2  GPT  (1.8 T)
          34        2014        - free -  (1.0 M)
        2048         128     1  freebsd-boot  (64K)
        2176     8388608     2  freebsd-swap  (4.0 G)
     8390784    41943040     3  freebsd-zfs  (20G)
    50333824  3856695311     4  freebsd-zfs  (1.8 T)

=>        34  3907029101  ada3  GPT  (1.8 T)
          34        2014        - free -  (1.0 M)
        2048         128     1  freebsd-boot  (64K)
        2176     8388608     2  freebsd-swap  (4.0 G)
     8390784    41943040     3  freebsd-zfs  (20G)
    50333824  3856695311     4  freebsd-zfs  (1.8 T)
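For completeness, a layout like the one shown is usually produced with something along these lines (a sketch, not necessarily the exact commands used here; the zfs labels follow the gpt/systemN and gpt/storageN names visible above, the swap label is made up):

# GPT scheme, data partitions starting at sector 2048 so they stay aligned
# for the 4K-sector WD EARS drives
gpart create -s gpt ada0
gpart add -b 2048 -s 128 -t freebsd-boot ada0
gpart add -s 4G  -t freebsd-swap -l swap0    ada0
gpart add -s 20G -t freebsd-zfs  -l system0  ada0
gpart add        -t freebsd-zfs  -l storage0 ada0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0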

Problem: read and write speeds on the ZFS RAID10 are slow.
For example, writing:

dd if=/dev/zero of=/storage/test.file bs=1000M count=1
1+0 records in
1+0 records out
1048576000 bytes transferred in 33.316996 secs (31472705 bytes/sec)

Or reading:

dd if=/storage/test.file of=/dev/null bs=1000M count=1
1+0 records in
1+0 records out
1048576000 bytes transferred in 13.424865 secs (78107005 bytes/sec)

In systat it looks like this:

2 users    Load  0,29  0,12  0,04                             Oct 19 14:27

 9,7% Sys   0,0% Intr   0,0% User   0,0% Nice   90,3% Idle

Disks   ada0   ada1   ada2   ada3    da0  pass0  pass1
KB/t     128    128    128    127   0,00   0,00   0,00
tps      156    173    188    145      0      0      0
MB/s   19,51  23,48  21,62  18,03   0,00   0,00   0,00
%busy     18     35     35     16      0      0      0

(memory, paging and interrupt columns of the systat screen omitted)

Reading directly from the disks themselves, on the other hand, is quite acceptable:

1073741824 bytes transferred in 9.673196 secs (111001764 bytes/sec)
root@server:/usr/home/dyr (1769) dd if=/dev/gpt/storage1 of=/dev/null bs=1024M count=1
1+0 records in
1+0 records out
1073741824 bytes transferred in 9.887180 secs (108599400 bytes/sec)
root@server:/usr/home/dyr (1770) dd if=/dev/gpt/storage2 of=/dev/null bs=1024M count=1
1+0 records in
1+0 records out
1073741824 bytes transferred in 9.736273 secs (110282635 bytes/sec)
root@server:/usr/home/dyr (1772) dd if=/dev/gpt/storage3 of=/dev/null bs=1024M count=1
1+0 records in
1+0 records out
1073741824 bytes transferred in 11.112231 secs (96627025 bytes/sec)

Why, I don't understand.

The vfs.zfs sysctls, for reference:

vfs.zfs.l2c_only_size: 3535428608
vfs.zfs.mfu_ghost_data_lsize: 23331328
vfs.zfs.mfu_ghost_metadata_lsize: 20963840
vfs.zfs.mfu_ghost_size: 44295168
vfs.zfs.mfu_data_lsize: 0
vfs.zfs.mfu_metadata_lsize: 0
vfs.zfs.mfu_size: 11698176
vfs.zfs.mru_ghost_data_lsize: 22306304
vfs.zfs.mru_ghost_metadata_lsize: 8190464
vfs.zfs.mru_ghost_size: 30496768
vfs.zfs.mru_data_lsize: 512
vfs.zfs.mru_metadata_lsize: 0
vfs.zfs.mru_size: 20443648
vfs.zfs.anon_data_lsize: 0
vfs.zfs.anon_metadata_lsize: 0
vfs.zfs.anon_size: 1048576
vfs.zfs.l2arc_norw: 1
vfs.zfs.l2arc_feed_again: 1
vfs.zfs.l2arc_noprefetch: 0
vfs.zfs.l2arc_feed_min_ms: 200
vfs.zfs.l2arc_feed_secs: 1
vfs.zfs.l2arc_headroom: 2
vfs.zfs.l2arc_write_boost: 8388608
vfs.zfs.l2arc_write_max: 8388608
vfs.zfs.arc_meta_limit: 106137600
vfs.zfs.arc_meta_used: 104179208
vfs.zfs.mdcomp_disable: 0
vfs.zfs.arc_min: 53068800
vfs.zfs.arc_max: 424550400
vfs.zfs.zfetch.array_rd_sz: 1048576
vfs.zfs.zfetch.block_cap: 256
vfs.zfs.zfetch.min_sec_reap: 2
vfs.zfs.zfetch.max_streams: 8
vfs.zfs.prefetch_disable: 1
vfs.zfs.check_hostid: 1
vfs.zfs.recover: 0
vfs.zfs.txg.write_limit_override: 0
vfs.zfs.txg.synctime: 5
vfs.zfs.txg.timeout: 10
vfs.zfs.scrub_limit: 10
vfs.zfs.vdev.cache.bshift: 16
vfs.zfs.vdev.cache.size: 10485760
vfs.zfs.vdev.cache.max: 16384
vfs.zfs.vdev.aggregation_limit: 131072
vfs.zfs.vdev.ramp_rate: 2
vfs.zfs.vdev.time_shift: 6
vfs.zfs.vdev.min_pending: 4
vfs.zfs.vdev.max_pending: 10
vfs.zfs.cache_flush_disable: 0
vfs.zfs.zil_disable: 0
vfs.zfs.zio.use_uma: 0
vfs.zfs.version.zpl: 4
vfs.zfs.version.spa: 15
vfs.zfs.version.dmu_backup_stream: 1
vfs.zfs.version.dmu_backup_header: 2
vfs.zfs.version.acl: 1
vfs.zfs.debug: 0
vfs.zfs.super_owner: 0
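Most of these are loader tunables, so anyone who wants to experiment with them (the ARC size, for instance) would set them in /boot/loader.conf, roughly like this (values purely illustrative, not a recommendation for this 2 GB machine):

# /boot/loader.conf
vfs.zfs.arc_max="512M"          # cap the ARC
vfs.zfs.arc_min="128M"
vfs.zfs.prefetch_disable="0"    # prefetch is off in the dump above; re-enabling it is one common experiment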

Another mild annoyance is that the sharesmb and sharenfs filesystem properties show up among the available options; it's clear what they do on Solaris, but on FreeBSD, as far as I understand, they simply don't work.
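The usual workaround on FreeBSD is simply to export the dataset's mountpoint through the stock NFS server instead of sharenfs (a sketch; the path and network below are examples):

# /etc/rc.conf
nfs_server_enable="YES"
mountd_enable="YES"
rpcbind_enable="YES"

# /etc/exports -- a ZFS mountpoint is exported like any other filesystem
/storage -network 192.168.1.0 -mask 255.255.255.0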
0 like 0 dislike
Synthetic test: www.phoronix.com/scan.php?page=article&item=zfs_ext4_btrfs&num=3
But then, it's Phoronix, and the test, as usual, has little relation to reality )

I actually have one server, a backup one, that runs ZFS. Out of curiosity, since there's absolutely no need for it there. Well, it seems to work; what else is there to say? There's no particular load, the power never gets cut, and FreeBSD doesn't hang.
I think that if you genuinely want the benefits of ZFS, then it's worth using. If you don't really need them, then don't bother, because it is still rather experimental.
0 like 0 dislike
I use ZFS on FreeBSD 8.1 for a backup server (raidz across a pack of 2 TB drives).
The load is small (it's just backups); zero problems.
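For anyone curious, a setup like that boils down to a couple of commands (a sketch; the device names and the dataset name are made up):

# one raidz vdev across the pack of 2 TB disks, plus a dataset for the dumps
zpool create backup raidz da0 da1 da2 da3 da4
zfs create -o compression=on backup/dumps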
0 like 0 dislike
About a month of testing FreeBSD with ZFS as a file server. It is accessed by roughly 400-800 people per day. So far I'm completely happy and haven't noticed any problems. If there are still no problems after another month, I'll put ZFS on the main file server, hoping to reduce the load a little.
0 like 0 dislike
Guys, has anyone lived through the failure of one of the hard drives combined into a ZFS raid?
On Solaris everything is fine and ZFS can well be trusted, but will it go as smoothly if something happens on FreeBSD?
Of course everything is backed up, but I'm mostly interested in how well the system survives the failure of one of the disks.
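One way to answer that without waiting for a real failure is to yank a member out administratively and see whether the pool keeps serving (a sketch; the pool and vdev names follow the gpt-labelled example above):

# take one mirror member offline; the pool should go DEGRADED but stay usable
zpool offline storage gpt/storage1
zpool status storage

# put it back; ZFS resilvers the missed writes on its own
zpool online storage gpt/storage1

# a scrub afterwards verifies all checksums across the pool
zpool scrub storage
zpool status storage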
0 like 0 dislike
Yes, in principle it's tolerable. I wouldn't say it's faster, but it's clearly more convenient.
