AES-NI support would be a great addition for a real SMB NAS...even SMB's should be concerned with security. What are the chances NAS manufacturers will come out with devices based on AMD Kabini? AMD does a lot less feature segmentation in their chips and Kabini has AES-NI so it seems like a better solution until Intel matches that with Atoms (low TDP Haswells will be too expensive.)
The two features I look for in off-the-shelf NASes are ECC RAM, and the ZFS file system. Unfortunately, it seems that none so far have saw fit to include them.
Instantaneous near-unlimited number of snapshots, end-to-end checksums, integrated raid features without requiring RAID controllers, integrated volume management, storage pooling, etc, etc, etc.
Once you get beyond 1 harddrive, using anything other than ZFS (or other pooled storage system) is heavenly. There's just no comparison to ext*+LVM.
I wonder how multi-user performance would scale if it had a 10 Gbps uplink to a 1 Gbps access switch. Maybe I'm out of touch with arrays of this size, but those numbers seem low for an 8-disk array. Maybe it has to do with the Atom CPU? Maybe the RAID controller has no cache? Honestly I'd be highly disappointed if I spent $1000 on the chassis and another $1000-2000 on hard drives and could barely reach 1 Gbps under the best circumstances.
Try again, They used 8x WD4000FYYZ, They run $410 each... If you get a deal on them. Upwards of $500 if you go to a brick and mortar store... at 400 each, that's $3200 just for the drives for their enterprise class drives for this test. Most people aren't going to use them.
No, you missed my other point... The 8-drive RAID 5 is a failure waiting to happen, enterprise class or not. When a drive does fail, you'll have to repair it. During that 38+ hours... That is the MOST likely time (also when all the drives are old, warn, and getting their tails kicked in doing massively hard reads) that another one is going to fail... Then you lose the entire array and all of your data. That was the point I was trying to make.
A single hard drive is also a failure waiting to happen, enterprise class or not. When a drive does fail, you don't even get the benefit of 24/7 uptime provided by RAID-5 even when the array is degraded. You don't even have the chance to rebuild your RAID array.
I don't think anyone here ever claimed it was... If they did, I missed it. It's all about keeping data during a repair. Drives won't last forever and 38 hours is a long time to beat on the array to rebuild. On old drives, odds of a second failure go up drastically.
You building something yourself and someone else buying it aint the best comparison. You have to set up so many things. Time is money. Who has time to do that?
Sorry, can't edit comments... But ya, performance on this is weak. One of mine, of which empty cost the same, but supports Raid 6, can hold transfers much faster including 400M writes, 600M reads, etc. and that's using 5400 RPM consumer grade drives... 700/900M using performance based hardware or more. Mine is a media share server only needing to serve the house so 4-6 Pure HD sources (all legal, sorry, I do not agree with piracy) at the same time is plenty and this is way more then enough. But this is actually the 'slowest' way I could build it... I went for green since I didn't need any speed in this setup... speed in a real Raid is very easy. Writing is a bit slower, especially in Raid 6 due to the complicated error bit calculations... Reading is butter.
For short distance, Cat-6 works fine. My whole house is wired Cat-6 for < $800 minus the electrician who was also a friend of mine. So complain all ya like... Just cause you wanna sit there and do wi-fi isn't my fault.
To me, that's just too much. I can build the core box itself, FAR more powerful, albiet a bit larger, BUT capable of far more then just sitting there. Can serve as a Subsonic or Plex server, MEDIA stream, Media extender server to Xbox, etc. Even do it's own data workload (handbrake/etc. while running OSx or Windows or even Linux. Anything I choose.). It doesn't have to be a dummy box. And I have two of these running 24/7 and they use VERY little power while doing file server duties. If I load up the CPU to do other tasks, then they'll obviously load up a bit more but...
Anyhow, I can make, right now, say an A6 5400K (3.6G dual-core APU) with 16G 1866 CAS10, a Seasonic 620 modular, Fractal Design insulated (silent) tower to hold 8 fast swapable bays and a boot drive, an A75 USB3 board, AND the Areca ARC-1223, 6G Raid 6 card. (SAS cards break down to control SATA drives for those thinking about that...) all for $944.94 right now. And that comes with one giga-bit NIC already. Add more if ya want, or more whatever... That's the point. Plus these cases are dead silent. I even have the one with windows and you can't hear anything from them. They are a bit more expensive and you could save $50 going with cheaper options though but I was being frivolous. Here's a screenshot of one I just did for a core for a small one at work: http://www.sirgcal.com/images/misc/raid6coreexampl...
* The whole point is; I don't understand these 'boxes'. They use nonstandard raid for one. Synology Raid. Which also means if it fails you can't put it on a regular RAID controller to retrieve your data. At least that's how they used to be. Perhaps not anymore.
* But their price is SO high it doesn't make sense. You can build one yourself, better capabilities all the way around in every way, cheaper. And if you ONLY want raid 5, you can knock about $300 off the price tag. Raid 6 is the bulk of that cost... But honestly IMHO necessary with those sizes, and that many drives in the array...
If you actually have no clue how to build a PC, perhaps... But find your neighborhood nerd to help ya. Still without RAID 6, these just don't serve a purpose. Get two smaller arrays instead. 4-drives or less for raid 5. Can these even do hot-spares? At least that would be something... It would be a live drive waiting to take over in case of a failure. Not quite RAID 6, but sorta kinda a bit more helpful, at least for safety. They didn't mention it.
UPDATE: After looking carefully over these screenshots - I think their review might be SERIOUSLY lacking... I see a RAID 6 option in the setup for the box. But it's greyed out. Probably because they didn't have any drives in it when they were there is my guess. need 4-5 drives MINIMUM to do it to start. But with this many drives, Even testing a RAID 5 is just honestly a bit stupid. It should have been tested RAID 6 and in that situation, might actually be a more attractive option if it is capable and performs.
But then again, RAID 5 generally is faster then RAID 6 due to the added calculations for the extra parity.. And it's RAID 5 performance was pretty weak unless I'm reading the numbers wrong. That is, if the RAID 6 is actually activatable within this device and not just an option within their software that is disabled in this device all together. But I would have thought this review would have tested that mode since that is what an 8-drive setup should have been setup for.
The benchmarks were done with all 8-bays filled with WD RE Drives in RAID 5.
The screenshots show that we can have disk groups. So, for example, you could allocate 4 disks to one disk group and run a RAID 5 volume on it. Then, the other 4 disks could be in another group and you could run a RAID 6 volume in that group.
What is the problem with performance that you are seeing? These Atom-based NAS units basically saturate the network link (accounting for overheads). Remember two links teamed is 2 Gbps in this case and that translates to a maximum of 250 MBps. Accounting for overhead, I see units saturate between 210 - 230 MBps and never have had any unit go above that unless I am teaming 4 ports or more (as you can see in our QNAP TS-EC1279U-RP review)
I will take your feedback about RAID-6 evaluation into consideration in the next round of benchmarks.
How is single client, 1.5 MB/s throughput at about 100 ms latency "stellar?" That sounds absolutely abysmal to me. I'm curious to know how you set up IOMeter... I'd like to repeat the test on my own box and see how it fares.
There comes a time in your life where you just want things to work without the hassle of them breaking every time you turn around. I OWN the 5bay unit (for over a year now) and can say that the UX is wonderful on these. They configure to let you know when something goes wrong (send email, beep, send SMS, etc) so you can fix the issue. Please look at the product before you make conclusions that they are only "dumb" boxes. You can run Plex, and Many other media servers in addition to a DNS, DHCP, Web server with PHP and various CMS installs. Photo Management, Surveillance, etc....
On another note, a inexperienced individual commented that an issue will arise when a drive fails and the array must rebuild. If you are using quality drives and constantly spinning the drives, the chance of a two drive failure is very low. As anyone that has years of experience with computers, keep the drives spinning and things will be fine, it is when you shut down and start up that issues come into play.
I'd say that's even better than using some H/W Raid controller. Good luck replacing one of those with something else than an identical controller with the very same firmware etc.
Wow great timing! Been looking for a NAS with huge storage capabilities to transfer data offsite. Haven't seen many around... Buffalo Terastation looks good but I haven't seen reviews for those or any other modern NAS systems. Thanks for the review!
Did I miss it? But I didn't see it support Raid 6? But Raid 5, ESPECIALLY with large drives, is just asking for failure. I personally have one 8-drive array, building my 2nd now. First with 2TB drives, new one with 4TB drives. Both are Raid 6. Old one 12TB, new one will be 24TB. Ya you lose 2 drives of usable space but that creates 3-drive failure protection. Or basically, when a drive fails and you're rebuilding, you have protection from another drive failing. Cause THAT is what it will happen...
But I didn't see anything in the whole thing about Raid 6 at all. I would Never build an 8-drive system with Raid 5... Not especially with consumer grade hardware... Without Raid 6... It's just not worth it for large array...
3-drive failure, as it it takes 3 to kill the array.. Point is you can be repairing one, if another one fails, your not dead yet... As you would be with RAID 5...
Happy with raid 5 on a 4 bay NAS but I still watch carefully.
The problem is simple. I, like most people, if buying an 8 bay NAS would buy all the disks at the same time so there is a high chance the disks are all from the same manufacture batch. So if one disk in a batch fails there is a higher chance of another failing soon after - I know because it has happened to me.
So for 8 disk NAS Raid 6 is a key feature.
That still gives me a 24TB array. Say 16-18 Gb per lossless blu-ray rip leaves room for 1200 blu rays movies (or 4000 if you are happy with some compression) and about 2000 episodes of TV epsiodes at standard definition (no compression) and maybe 3000 CDs.
Don't get all your drives from one source or vendor. Buy an assortment of drives for your array, and you'll be much less likely to have 2 drives fail at once.
But that's never enough, building my 2nd 24TB rig now actually.. :-/ But I refuse to compress my BRs. I do strip out everything but the movies, but I also do NOT pirate them. I buy them and put them on my server. No one gets them either. Being in a wheelchair, it's one of my few hobbies though so I have a LOT of movies...
Apart from the wheelchair part I do exactly as SirGCal. Using a standard Blu-ray rip (at high quality rather than original) a 2 hour movie comes in at about 15Gb file. Some are a bit larger (17Gb), some a bit smaller (13.5Gb is the smallest).
That chews up a 6TB rig very quickly - particular as 6TB hard disk space is not 6TB because HD manufacturers do not quote HD space in binary but decimal units (the difference is about 7% per TB)..
I look forward to when HD come in 10TB sizes! That would be enough on my 4 bay QNAP 419+ which I consider to be an ideal consumer box - plug it in and it works
Can someone explain to me why this would be better than making your own NAS (with FreeNAS or something similar)? Correct me if I'm wrong but you should be able to put together a nice PC for NAS purposes for under $400 without a RAID card.... $800 (give or take) with a raid card that should be better then this no?
Exactly! You'd even be agreeably surprise that ZFS is better than a RAID card. Additionnaly your own build server will have much more RAM. I was about to buy a QNAP 4 bays. But after spending sometimes to read more about NAS4Free, I realize that a "roll you own" NAS server beats the prebuilts on all performance factors: better case, silence, better CPU, RAM, etc. etc. There is a big inconvenience though, you need to learn NAS4Free (or FreeNAS. the commercial implementation).
Case in point: my NAS server costs me less than $400 (I have the luxury to wait for quality parts to go one sale): Fractal Design R4, Corsair VX 550, 8GB G-SKill Snipper DDR 1600. Just waiting for a good mobo + AMD low power CPU and I am ready.
Not necessarily better or worse. I was looking to replace my WHS and I didn't feel like doing another build. I wanted something compact, quiet, and efficient since it stays on 24/7. The Synology came highly recommended and I didn't feel like doing test builds to figure out which OS I wanted to use.
See my reply above to a few posts. I just put it up a few minutes ago... It IS better.. And quite a bit cheaper. The Synology stuff really is NOT very good for the savvy. In-fact, ESPECIALLY with this many drives, your data is at too much risk... I tried to explain it in detail. Sorry for the rather long windedness of the post but I try to be detailed.
Or you're talking ZFS compression over RAID? I was thinking about something completely different... haven't slept in 36 hours... Twins teething... fun... sorry. But that should work fine on any of these RAID cards.
@SirGCal Thank you for all the info you gave. Coincidentally, I have decided to go with the Fractal Define R4 for silence, exactly as you stated. Regarding ZFS, I think this article might be of your interest, in particular the section "What ZFS Gives You that Controllers Can't"
I have two of those cases myself. Three in the office. It's so quiet. Love it. Mine has windows too. Still very silent and cool with 8 drives running 24'7 (add more fans).
As for the RAID-Z, they only compare it in that article to RAID5. while I agree in that case sure it's better. Much is. They don't compare it to RAID 6 where I think it's performance and failover won't keep up. But this particular method I'm not familiar with so I'd have to play with it to know for sure to run comparisons. I am not a RAID 5 fan at all since arrays have grown beyond the 4 TB range overall size to be honest. In those cases, this would likely be my choice.
The appropriate comparison would be RAID-Z vs RAID-5, and RAID-Z2 vs RAID-6. In each case, ZFS wins if you're dedicating the same amount of space to parity data.
I'll check out RAID-Z2. My only immediate pause would be moving it to another RAID card from a card failure... That is something worth considering if you run a large array. But other then that. When I get ready to build this next array, if possible I will run some tests.
You could also look at raidz3 which is triple parity.
ZFS works file for small number of disks, but it really shines with larger numbers. Avoid "RAID controllers" as much as possible -- "simple" HBA is way better choice -- performance wise.
god glad I made a ZFS server. This thing is expensive, slow and more power hungry than my system. For reference I built mine for a third of the prices. Reach internally 300 MB+ speeds externally limited to the 1 Gbit port and uses 60 watt when resilvering.
A word of caution for Mac users. I researched a NAS "to death" before purchasing the DS1512+ about six months ago. I have a large number of computer systems including vintage Unix based machines, OS X, Linux and Windows. SAMBA and NFS appear to work reasonably well with the Synology DSM, but there is a fundamental issue with AFP support that remains uncorrected in the latest DSM 4.2 build - the support for Unix style file permissions is broken and DSM overrides the OS X permissions with default values.
Synology did improve the behaviour in DSM 4.2 and at least the execute bit can now be correctly set on the remote mounts, but the read and write permissions still do not work. I was extremely disappointed to find such a fundamental issue with a system that is advertised as fully OS X compatible and also widely recommended for Mac users.
So again, why not build your own server, cheaper. More effective, more capable. Using your own OS and your own mounting systems. You could even include Samba and NFS directly if you wanted purely. Works for sure then.
What interface? It's a network storage... It's a mounted drive, or a website address, or a dymanic drive like \192.168.1.100\Share\ That's all you need to get to your data from any system connected to the network. What 'interface' do you use? A webbrowser? Change it to HTTP:// and add Apache or IIS... Don't blame the box because you don't know what you're doing.
Please re-read my original post. This has nothing to do with my knowledge or my abilities but rather everything to do with a device that one purchases that claims to do something but fails to do it.
Again, this just reenforces my examples of why one should build their own instead of buying someone elses 'package' of problems. You put on your own tools/addons/etc. Put on the parts you need. The interfaces you want. Whatever GUI accesses your users want to use to access it with (ya, they can all be added in almost any flavor of OS in some form)... All this does is give it to you in a plug & pray form factor. I added an update below though from a friend who bought one though and his experiences. Even against my recommendations but, I only advise and support.
I have a friend who does too. He has one. To be fair I offered to build him a box like mine also. He went with one of these this time around. He has the same drives as mine this time and it is running RAID 6 (that's what made me think about giving him a call).
Downsides, it's slow. He's only seeing about 120M writes and 180M reads with it. A lot slower then my rig. Plus it can't do all the other dedicated things (Subsonic/Plex/Handbrake/etc) that mine does all the time even over the internet to serve up my files and videos while I'm away. At least he can't figure out how to get it to run Subsonic anyhow...
So it does do RAID 6. Huge bonus there. Kudos to that. But it is also propriotary. Tried taking the array out and putting it on my cards and they didn't recognize it. Common problem with these boxes. Whereas I can take them out of my cards between each other and they all recognize each other between brands as long as they are in standard RAID formats (one benefit of using standard formats). Incase a card ever fails. But even with the SAME network hardware (I bought a spool and we did both of our homes and we got the same switches/routers/etc. so all of that is identical and his house is actually simpler, fewer connections.)
So that's just one real world example but there ya go. At least it does do RAID 6, still Anand dropped the ball on that one and should have tested it for ya/us... Performance seems a bit off but it does work. He hasn't had to do any repairs or rebuilds/grows yet though so can't give anything on that. He built it fully populated out of the door. But at least it does work RAID 6 for those wondering. If your going to go with a box, and a big box at that, at LEAST use RAID 6 or something better. 8 drives is not good for RAID 5...
Well no. It also simply means a matter of time an data loss... For example. I keep my pictures and home movies on my RAID array also along with my private BluRay/DVD collection. To Re-rip my private collection from scratch would take literally years. To have it on hand is the benefit of the array. Possible but very inconvenient. To lose the pictures and home videos would be catastrophic. I do have backups of those off-sight but, RAID 6 still helps prevent either one of these failures from happening. Hence the use of a very large array anyhow. If you're going big, go smart or don't do it. Just to have to recreate the data, possible or not, would be insanely difficult and time consuming and that is the entire POINT to having the array to begin with. Convenience.
You've missed my point. My point is that event a total failure of an array should not mean data loss. RAID is not a backup solution. You should be using a backup solution for data you can't afford to lose, not RAID. Lost uptime is not as expensive in most home environments as it is in most business environments. It's convenient to tolerate a hard disk failure with no downtime at home, but in most cases, the downtime isn't costing you money, so all you're buying with that fault tolerance is convenience because again, RAID is not a backup solution - you should have your data backed up elsewhere.
OK, while you're pulling your many TB down from whatever backup service over your internet connection, also killing your internet pipe, slowing it down for everyone in the process for likely weeks or months to get the pull unless your one of the few on fiber or FIOS, I'd rather not have to repopulate 24-28TB of data from backup in the first place. Good luck with that. While I do keep a backup, it's far better not to need it.
We were finally able to get the Subsonic module loaded and working on it properly and it works fine for music... mostly. But it doesn't have the horsepower to transcode bluray content, even just one viewing, on the fly. I don't know if it's memory or CPU or both but even over the local network (which is disgustingly overkill) it just can't do it. Choppy, stutters, etc. where as mine is smooth and uses ~ 10% CPU or less. I wouldn't think it was that hard on the CPU but... We're still trying to get this to work as it is one of the requirements for him to keep/use this box.
Forgive me if this is a stupid question, but what's the reason for USB and eSATA ports on a box like this? I understand the basic point of a NAS (as a single box where I can dump a buncha drives and have the HW provide some level of RAID) but how do the USB/eSATA ports play into this?
Is the idea that, after I have filled this thing up with 8 internal drives but I need still more space, I start adding drives via the external ports?
There are ways to extend the array, but honestly it becomes a point where the most reliable way becomes to buy or build another array. Doing it as a server in box, you can do 8/12/16/24 drive configurations... This stand alone is the first 8 box setup I've seen aside from rack servers which obviously are true rigs costing a LOT more.
You don't need to convince me. I've built my storage around 5-yr old Macs connected to a bunch of 5-yr old drives using Apple RAID and AFP. Maybe not the right solution for everyone, but meets my needs, and basically free.
But that's not the point. My question remains. For the people who ARE the target for this sort of device, what's the point of the USB/eSATA parts. Our reviewer, for example, wanted USB ports in front of the box. Why? What would he do with them?
Sorry, According to their own site, you can use two of their own Synology DS513s to increase the capacity to 18 drives. However 18 drives even as RAID 6 becomes not so hot. 8-12 drives is about my limit. At 16 I make two RAID groups and then one volume for the virtual array cluster to use the data from. Then you have 4 parity drives but much better drive protection crossed the array instead of just 2 drives of parity. That's another discussion though. They sell bigger boxes though that I think actually do this type of configurations though. I'd have to research it though. But even their reported numbers don't show great performance. Still should be OK for most home use.
If you want to plug in a single drive and just add it as a shared folder, I think it will do that. I can ask my friend to give it a go if he gets home and see if ya like.
I think I have a feeling now, from what both you and Ganesh have said.
Seems a strangely limited market, to have an environment that wants so much storage, but no-one is willing to just use one of the machines around to plug a drive into and have it act as a file server. But, I guess, I'm not the target audience.
Ya, that's the catch. For what it is, it's not bad. But the biggest problem in my eyes is that's all it is. It can't do the "other" things that a server could do such as run the other software packages that my servers do... Or at least we haven't figured out how to make it do so yet. We've been beating the pants off my friends rig trying to make it run something like Subsonic which is a media streaming service to stream your own media files to your self when your offsite. Music and videos... I love it and he was hoping to use it also but isn't getting his Synology box to run anything this complicated yet. In some ways I'm actually a bit surprised since it's just a java daemon. (in windows it's a service). I thought of all my software tools, this one might actually work. And there might be a way, we haven't tried hard yet. Or the other fear is the actual CPU won't be capable of trans-coding on the fly... at least videos. We're pretty sure the software will install, but the Atom's are pretty weak. We'll see.. Worst case I guess, we setup yet another server to feed off it for the streaming. Sort-of defeats the purpose but... If it can't do it...
Yes, you can add the DX510 expansion chassis via the eSATA ports and get a total of (5 + 5) 10 more bays. That is why you have the 18 in the DS1812+ :)
@name99 the USB/eSATA ports allow to make a backup of the NAS on external drives or may be dump content on your NAS. They are not to extend the capacity of your NAS.
OK. Thanks. Again to me seems a strange use case which can easily be duplicated just by uing one of the client machines, but I guess when you're selling something costing a $K you try to add in any random thing you can think of to make it appear worth the money.
Your NFS numbers seem way too low compared to the CIFS numbers. Might want to drop the 'tcp' from the options, is the most likely culprit. NFS defaults to udp, not sure why you're changing that.
I see a number of vociferous comments about how a ZFS build / building your own NAS will offer better performance and how Synology (or, for that matter, any other vendor's off-the-shelf NAS offering) is just too costly. Let me try to address the issue:
1. Building your own NAS with a configuration tuned to what you require will obviously be more cost effective and efficient - no doubts about that. Synology and other such solutions are targeted towards SMB / SOHO users who don't have the expertise to build a NAS on their own, or feel that their time is better spent buying a off-the-shelf ready-to-use offering from a vendor. Maybe the IT admin of the SMB has better things to do than sitting down and building a PC and installing the appropriate OS etc. These off-the-shelf NAS units are just plug and play.
2. Expandability: Units such as the DS1812+ offer the ability to extend the number of bays by providing support for extension units (DX510 has 5 bays and you can attach two of them to the unit). Plug them in and you have a total of 18-bays. Try adding that to your own build (first, you have to make sure the eSATA port you connect the new bays support port multipliers, then you have to spend a lot of time reconfiguring your host OS to recognize and add the new drives in the new bays to your existing array -- these are not impossible things, but just suck up a lot of time)
3. Features : NAS vendors offer 'app stores' to extend the feature set. For example, I am currently trying out Surveillance Station on the DS1812+ right now. Ready-to-use minutes after installing it. On your PC, you have to set up something like iSpy and spend time making sure it is compatible with all your equipment. Synology becomes a one-stop-shop for such features.
In summary, yes, if you are tech savvy and have a lot of time at your disposal, you are better off building your own NAS. There is plenty of open source software available to enable such systems (and to be fair, we are working towards evaluating a custom-built NAS for some time). We elect to do extended coverage of NAS units such as the DS1812+ and QNAP TS-EC1279U-RP because a large number of readers are IT admins / IT decision making people at many SMB / SOHO firms, and they are looking for off-the-shelf solutions. The off-the-shelf NAS market is pretty huge, and that is why you have a large number of vendors doing quite well with increasing revnue.. QNAP, Synology, Thecus, Netgear, Iomega / LenovoEMC, Asustor... The list is pretty big..
Thank you for bringing some sense to this one sided thread. Building your own NAS can be done at a lower cost and offer you great benefits. However you will be hard pressed to build something more refined then this unit especially in regards to size and ease of use.
The simplest form - an HP Microserver + Freenas is pretty easy to assemble. Allows for ECC memory and up to 16gb of it too for higher performance. Still is a tiny form factor, low power, low noise. If expandability is the driver, large PC cases and motherboards with PCIX cards will always win. If features matter, a linux install offers faster development and many more.
I combine them all together - Microserver with 16gb, an SSD for caching, solaris (full install) with virtualbox, Ubuntu in a VM. It's still a lightweight processor (I'd prefer one of the ULV ivy or haswells), but it kills an Atom.
Companies are flocking to this market because it offers nice margins...like the markup Puget might put on their beautiful systems.
The Microserver is a great build your own NAS box but it does not stand up to the DS1812+. For one you can only hold 5 3.5" disk max, while the Synology can hold eight and they are all hot swappable. How about warranty/support anyone?
My opinion is based on owning both types of systems. I just sold my 1511+ + DX510 and I own a OI + Nappit ZFS array. They both have pros and cons. You can almost look at it like a person who's looking to buy a Mac verses a computer nerd who builds all of his boxes. There is a reason why Apple is in business and it's similar to why the Synology, QNAP's and Netgear's are able to sell NAS's.
Madhelp - the Microserver trivially takes 6 drives and has the PCIX slots to take more externally if you really wanted and needed the capacity. Or you could just buy two of them - with memory and a second NIC they're still only 400 each, compared to the $999 price of this unit bare. Either way, the DS1812+ can't stand up to the cpu, the features offered by zfs, the memory capacity, the overall feature set. And you can certainly get support for the software (not freenas, but WHS, or Nextenta or Solaris, others, and have the usual year warranty for the hardware.
Synology and the others combine decent software, easy of use for a limited feature set, and barely good enough hardware into a package. IOW, one out of 3.
6 hot swappable drives? I don't think so. WHS is discontinued and all of those other Solaris based products you listed cost thousands of dollars to buy and support. You might get more CPU and the ability to add more ram to a Microserver box but whats the result? For a storage box it's still going to be slower then a DS1812+ in regards to throughput. In fact while the reviewer dismiss the new DS1813+ it now can deliver 350MB/s reads and 200MB/s writes to the network. I've never see anything close to that from a Microserver, point being its in another class. You guys might complain about the cost but you get what you pay for.
There's nothing special about the DS hardware in term of drive throughput. Using bonnie on the box itself, I've benchmarked various disk organizations on the microserver and 0+1 got reads of 370MB. Writes tend to max in the ballpark of 100 (efrx 2tb drives)- mirroring doesn't speed up rights, it slows it, and raid Z of course requires the parity compute/writes. I'm not going to stripe. However, the more interesting stats aren't about sequential access, which is measurebating, and more about iops. Adding 16G of memory and an SSD caching drive into the zfs pool substantially increases iops.
If you stick to the single onboard NIC, of course you're not going to do better than 1gbit on transfers to other hosts. But you can add a dual intel for $130. Not sure what a quad card would cost, though for the context of most users here, that's a silly feature. Needs switch support (much more $$ than a dumb switch), and needs a lot of users pulling at max. Not the case in the home. Unless the clients are also going to run multiple nics, it's an unusable capacity.
I can't quickly see the disk config that is needed to support the metrics you cite. But if its an 8 drive raid5, that's performance at a risk profile I won't accept. 0+1, otoh, would be the way to go.
Hot swap doesn't work in Solaris (which cost me 0$, not thousands), but my understanding is that it was present in WHS. Isn't an essential feature to me @ home, but I can see others putting more value in it.
I understand your points and I agree. My points about the sequential throughput are just there to cite capability out of the box with the Synology. In regards to IOPS if that's your goal you can load SSD's into the DS1813+ and achieve some seriously high numbers similar to 16GB of ARC and an SSD L2ARC drive.
For the same reasons you would add a quad NIC to the Micro sever are the same reasons you would spend $1000 on a DS1813+. I agree the average home user would not need a quad nic on their NAS, nor would they need a DS1813+. The DS1813+ is built for a SOHO or power user.
This is just shooting in the dark but I would imagine that the metrics I stated above could be produced by one mid tier SSD or three 3TB WD Black drives.
How swap is has to be supported within the hardware and within the software. From my understanding the issue is with the HP Microserver not supporting it. I can't speak on Solaris but I'm using Open Indiana and I can hot swap all day. Also my comment in regards to Solaris was about support. Call Oracle and try to purchase a support contract, it's expensive. Synology comes with a standard 3 year warranty. In a dire situation the guys at Synology will SSH into your box and fix it. Again you get what you pay for.
Sorry, that wasn't the point I was trying to make. Reading through your article; it was completely void of anything in reference to RAID 6. This box should never be run in RAID 5 mode with all 8 drives going and that should have definitely have been explained for those 'laymen' users that for sure wouldn't have known any better. Otherwise you know darn well they would have bought the rig, gotten 8 drives and followed this review and built a RAID 5 array and a few years from now lost it all. The unit might be a phenominal NAS in and of itself, but test it as it really SHOULD be used responsibly by the general public... Or at least two RAID 5 volumes linked.. that would have been better then one giant raid 5 single array. That was the one biggest problem I had with the article. I had to do considerable research and until a friend actually told me he had one I didn't know it was RAID 6 capable. The whole point of these are huge arrays for would-be responsible backups. They are NOT secure backups per-say but at the same time we don't want to lose 20+ TB of data because a drive crapped out and the array had one ecc hickup on a 35+ hour rebuild. I thought I made that a bit more clear in half a dozen posts above this one.
I said it before and I'll say it again... using RAID6 should not protect you from data loss any more than RAID5 will. RAID is not a backup solution and should not be treated as one.
That may be so, but isn't Home/SMB NAS typically for a backup target as well as its media functions?
That said it is both bewildering and disappointing NAS manufacturers haven't embraced ZFS. The only constraint that comes to mind is memory but that is a stupid reason to bail for x86 devices.
This is a pretty sleek unit. However, considering the msrp of $999.00 (with no drives), it is possible to build a superior unit with custom components. Here is a recent example from my custom Freenas Build with ZFS file system with RaidZ1. Drives used in this setup are older assorted Sata3 1TB drives. All settings in FreeNas are default values and no performance optimizations have been made. The reason I am posting is simply to illustrate the fact that far better results can be achieved for about the same cost. Data security that ZFS offers is priceless in my humble opinion.
Greetings. I am confused as to which way to go for a NAS unit. First let me define, this will be for individual personal home network use and SoHo operation. Synology and QNAP seem to be the 2 most popular brands. I really am not so much 'brand' conscious as I am for the product that gives me the best bang for the buck, and features I want. Perhaps this 8 bay would be overkill for such a personal level of need? I might be better off with two 2-bay (or 4-bay) models and synchronize (backup) between them? The other feature I would desire is the HDMI for output as HTPC to the frontroom TV. Thus, noise level as well as the HDMI is a consideration. In short, 1. should I go for QNAP or Synology for these considerations? 2. Which model? (I currently have not quite 8TB of data (two 4TB drives externally hooked up to a MacMini over USB3). Thanks.
1. I personally like Synology and have experience of 3 models, 212j, 2411 and 1512. All of them has been working fine and they are very easy to configure. The 212j wasn't the fastest one around though but I didn't expect it to neither. 2. That depends on the level of protection you want to run and how much your data will grow over the time you expect the device to "live". Pls remember that no RAID-level whatsoever is a replacement for a proper backup (preferably off-site and off-line if you ask me).
Thanks for that feedback. I did a search for 2411 and 1512 but they seem to be 'past tense' models for Synology. But what I did find is there are 8-bay and 12-bay models it seems. I think this goes way beyond my needs and perhaps even data growth. Perhaps a 4-bay or 5-bay might be more suitable for me in terms of growth and capacity. And then, to have a double NAS of the same time where one is main and the other fall back, or, a backup to the main.
Currently I am not doing RAID on my 2-bay DS213. I just do each disk as independent volumes and then back those up over USB3 to an external box housing two more matching drives. Simple but it works.
The draw for me was the HDMI port on the QNAP NAS whereby I could also have the NAS double over as a HTPC Media Server as well. I hear that Synology is suppose to release a DS714 that also has HDMI, and supposedly in June. But, they have been completely mute about any information on the product. But on the other hand, perhaps I should not let HDMI port be a deciding factor as to which NAS I do buy.
We’ve updated our terms. By continuing to use the site and/or by logging into your account, you agree to the Site’s updated Terms of Use and Privacy Policy.
93 Comments
Back to Article
MadMan007 - Thursday, June 13, 2013 - link
AES-NI support would be a great addition for a real SMB NAS...even SMB's should be concerned with security. What are the chances NAS manufacturers will come out with devices based on AMD Kabini? AMD does a lot less feature segmentation in their chips and Kabini has AES-NI so it seems like a better solution until Intel matches that with Atoms (low TDP Haswells will be too expensive.)JDG1980 - Thursday, June 13, 2013 - link
The two features I look for in off-the-shelf NASes are ECC RAM, and the ZFS file system. Unfortunately, it seems that none so far have saw fit to include them.pwr4wrd - Friday, June 14, 2013 - link
I completely agree with you, Even for home/SOHO use, what good is a NAS unit if you dont have data integrity.Samus - Saturday, June 15, 2013 - link
This will change with the Atom family supporting ECC. I don't know of any real advantages ZFS has over ext4 for home/soho.phoenix_rizzen - Monday, June 17, 2013 - link
Instantaneous near-unlimited number of snapshots, end-to-end checksums, integrated raid features without requiring RAID controllers, integrated volume management, storage pooling, etc, etc, etc.Once you get beyond 1 harddrive, using anything other than ZFS (or other pooled storage system) is heavenly. There's just no comparison to ext*+LVM.
Jeff7181 - Thursday, June 13, 2013 - link
I wonder how multi-user performance would scale if it had a 10 Gbps uplink to a 1 Gbps access switch. Maybe I'm out of touch with arrays of this size, but those numbers seem low for an 8-disk array. Maybe it has to do with the Atom CPU? Maybe the RAID controller has no cache? Honestly I'd be highly disappointed if I spent $1000 on the chassis and another $1000-2000 on hard drives and could barely reach 1 Gbps under the best circumstances.DigitalFreak - Thursday, June 13, 2013 - link
There is no RAID controller. The SATA ports are either off of the Intel embedded ports, or more likely off of a 3rd party controller.SirGCal - Thursday, June 13, 2013 - link
Try again, They used 8x WD4000FYYZ, They run $410 each... If you get a deal on them. Upwards of $500 if you go to a brick and mortar store... at 400 each, that's $3200 just for the drives for their enterprise class drives for this test. Most people aren't going to use them.Gigaplex - Thursday, June 13, 2013 - link
That just backs up their point even more. Spending $1k-2k instead isn't likely to get you faster drives.SirGCal - Friday, June 14, 2013 - link
No, you missed my other point... The 8-drive RAID 5 is a failure waiting to happen, enterprise class or not. When a drive does fail, you'll have to repair it. During that 38+ hours... That is the MOST likely time (also when all the drives are old, warn, and getting their tails kicked in doing massively hard reads) that another one is going to fail... Then you lose the entire array and all of your data. That was the point I was trying to make.saiyan - Sunday, June 16, 2013 - link
A single hard drive is also a failure waiting to happen, enterprise class or not. When a drive does fail, you don't even get the benefit of 24/7 uptime provided by RAID-5 even when the array is degraded. You don't even have the chance to rebuild your RAID array.Seriously, RAID is NOT a backup.
SirGCal - Monday, June 17, 2013 - link
I don't think anyone here ever claimed it was... If they did, I missed it. It's all about keeping data during a repair. Drives won't last forever and 38 hours is a long time to beat on the array to rebuild. On old drives, odds of a second failure go up drastically.Duckhunt2 - Saturday, February 15, 2014 - link
You building something yourself and someone else buying it aint the best comparison. You have to set up so many things. Time is money. Who has time to do that?SirGCal - Thursday, June 13, 2013 - link
Sorry, can't edit comments... But ya, performance on this is weak. One of mine, of which empty cost the same, but supports Raid 6, can hold transfers much faster including 400M writes, 600M reads, etc. and that's using 5400 RPM consumer grade drives... 700/900M using performance based hardware or more. Mine is a media share server only needing to serve the house so 4-6 Pure HD sources (all legal, sorry, I do not agree with piracy) at the same time is plenty and this is way more then enough. But this is actually the 'slowest' way I could build it... I went for green since I didn't need any speed in this setup... speed in a real Raid is very easy. Writing is a bit slower, especially in Raid 6 due to the complicated error bit calculations... Reading is butter.santiagoanders - Friday, June 14, 2013 - link
You have a 10G network to run media sharing? Overkill much?SirGCal - Friday, June 14, 2013 - link
For short distance, Cat-6 works fine. My whole house is wired Cat-6 for < $800 minus the electrician who was also a friend of mine. So complain all ya like... Just cause you wanna sit there and do wi-fi isn't my fault.santiagoanders - Monday, June 17, 2013 - link
And how much did you pay for the 10Gbe adapters and switch?Guspaz - Thursday, June 13, 2013 - link
Is it just me, or is the price of this thing not listed anywhere in the article? Benchmarks are meaningless without a price to give them context.DigitalFreak - Thursday, June 13, 2013 - link
The 1812+ runs around $999, and the 1813+ is $1099.SirGCal - Friday, June 14, 2013 - link
To me, that's just too much. I can build the core box itself, FAR more powerful, albiet a bit larger, BUT capable of far more then just sitting there. Can serve as a Subsonic or Plex server, MEDIA stream, Media extender server to Xbox, etc. Even do it's own data workload (handbrake/etc. while running OSx or Windows or even Linux. Anything I choose.). It doesn't have to be a dummy box. And I have two of these running 24/7 and they use VERY little power while doing file server duties. If I load up the CPU to do other tasks, then they'll obviously load up a bit more but...Anyhow, I can make, right now, say an A6 5400K (3.6G dual-core APU) with 16G 1866 CAS10, a Seasonic 620 modular, Fractal Design insulated (silent) tower to hold 8 fast swapable bays and a boot drive, an A75 USB3 board, AND the Areca ARC-1223, 6G Raid 6 card. (SAS cards break down to control SATA drives for those thinking about that...) all for $944.94 right now. And that comes with one giga-bit NIC already. Add more if ya want, or more whatever... That's the point. Plus these cases are dead silent. I even have the one with windows and you can't hear anything from them. They are a bit more expensive and you could save $50 going with cheaper options though but I was being frivolous. Here's a screenshot of one I just did for a core for a small one at work: http://www.sirgcal.com/images/misc/raid6coreexampl...
* The whole point is; I don't understand these 'boxes'. They use nonstandard raid for one. Synology Raid. Which also means if it fails you can't put it on a regular RAID controller to retrieve your data. At least that's how they used to be. Perhaps not anymore.
* But their price is SO high it doesn't make sense. You can build one yourself, better capabilities all the way around in every way, cheaper. And if you ONLY want raid 5, you can knock about $300 off the price tag. Raid 6 is the bulk of that cost... But honestly IMHO necessary with those sizes, and that many drives in the array...
If you actually have no clue how to build a PC, perhaps... But find your neighborhood nerd to help ya. Still without RAID 6, these just don't serve a purpose. Get two smaller arrays instead. 4-drives or less for raid 5. Can these even do hot-spares? At least that would be something... It would be a live drive waiting to take over in case of a failure. Not quite RAID 6, but sorta kinda a bit more helpful, at least for safety. They didn't mention it.
SirGCal - Friday, June 14, 2013 - link
UPDATE: After looking carefully over these screenshots - I think their review might be SERIOUSLY lacking... I see a RAID 6 option in the setup for the box. But it's greyed out. Probably because they didn't have any drives in it when they were there is my guess. need 4-5 drives MINIMUM to do it to start. But with this many drives, Even testing a RAID 5 is just honestly a bit stupid. It should have been tested RAID 6 and in that situation, might actually be a more attractive option if it is capable and performs.But then again, RAID 5 generally is faster then RAID 6 due to the added calculations for the extra parity.. And it's RAID 5 performance was pretty weak unless I'm reading the numbers wrong. That is, if the RAID 6 is actually activatable within this device and not just an option within their software that is disabled in this device all together. But I would have thought this review would have tested that mode since that is what an 8-drive setup should have been setup for.
ganeshts - Friday, June 14, 2013 - link
The benchmarks were done with all 8-bays filled with WD RE Drives in RAID 5.The screenshots show that we can have disk groups. So, for example, you could allocate 4 disks to one disk group and run a RAID 5 volume on it. Then, the other 4 disks could be in another group and you could run a RAID 6 volume in that group.
What is the problem with performance that you are seeing? These Atom-based NAS units basically saturate the network link (accounting for overheads). Remember two links teamed is 2 Gbps in this case and that translates to a maximum of 250 MBps. Accounting for overhead, I see units saturate between 210 - 230 MBps and never have had any unit go above that unless I am teaming 4 ports or more (as you can see in our QNAP TS-EC1279U-RP review)
I will take your feedback about RAID-6 evaluation into consideration in the next round of benchmarks.
Jeff7181 - Monday, June 17, 2013 - link
How is single client, 1.5 MB/s throughput at about 100 ms latency "stellar?" That sounds absolutely abysmal to me. I'm curious to know how you set up IOMeter... I'd like to repeat the test on my own box and see how it fares.mitchdbx - Saturday, June 15, 2013 - link
There comes a time in your life where you just want things to work without the hassle of them breaking every time you turn around. I OWN the 5bay unit (for over a year now) and can say that the UX is wonderful on these. They configure to let you know when something goes wrong (send email, beep, send SMS, etc) so you can fix the issue. Please look at the product before you make conclusions that they are only "dumb" boxes. You can run Plex, and Many other media servers in addition to a DNS, DHCP, Web server with PHP and various CMS installs. Photo Management, Surveillance, etc....On another note, a inexperienced individual commented that an issue will arise when a drive fails and the array must rebuild. If you are using quality drives and constantly spinning the drives, the chance of a two drive failure is very low. As anyone that has years of experience with computers, keep the drives spinning and things will be fine, it is when you shut down and start up that issues come into play.
mitchdbx - Saturday, June 15, 2013 - link
More FYI about the RAID levels....http://forum.synology.com/wiki/index.php/What_is_S...
Micke O - Monday, June 17, 2013 - link
Synology aren't using some "nonstandard raid" with SHR. They are using mdadmThis is how to restore an array in standard PC using linux if your DiskStation would fail:
http://www.synology.com/support/faq_show.php?lang=...
I'd say that's even better than using some H/W Raid controller. Good luck replacing one of those with something else than an identical controller with the very same firmware etc.
Insomniator - Thursday, June 13, 2013 - link
Wow great timing! Been looking for a NAS with huge storage capabilities to transfer data offsite. Haven't seen many around... Buffalo Terastation looks good but I haven't seen reviews for those or any other modern NAS systems. Thanks for the review!SirGCal - Thursday, June 13, 2013 - link
Did I miss it? But I didn't see it support Raid 6? But Raid 5, ESPECIALLY with large drives, is just asking for failure. I personally have one 8-drive array, building my 2nd now. First with 2TB drives, new one with 4TB drives. Both are Raid 6. Old one 12TB, new one will be 24TB. Ya you lose 2 drives of usable space but that creates 3-drive failure protection. Or basically, when a drive fails and you're rebuilding, you have protection from another drive failing. Cause THAT is what it will happen...But I didn't see anything in the whole thing about Raid 6 at all. I would Never build an 8-drive system with Raid 5... Not especially with consumer grade hardware... Without Raid 6... It's just not worth it for large array...
Gigaplex - Thursday, June 13, 2013 - link
No, it only creates 2-drive failure protection. Lose 3 drives in RAID6, and you're toast.SirGCal - Friday, June 14, 2013 - link
3-drive failure, as it it takes 3 to kill the array.. Point is you can be repairing one, if another one fails, your not dead yet... As you would be with RAID 5...cjs150 - Friday, June 14, 2013 - link
Happy with raid 5 on a 4 bay NAS but I still watch carefully.The problem is simple. I, like most people, if buying an 8 bay NAS would buy all the disks at the same time so there is a high chance the disks are all from the same manufacture batch. So if one disk in a batch fails there is a higher chance of another failing soon after - I know because it has happened to me.
So for 8 disk NAS Raid 6 is a key feature.
That still gives me a 24TB array. Say 16-18 Gb per lossless blu-ray rip leaves room for 1200 blu rays movies (or 4000 if you are happy with some compression) and about 2000 episodes of TV epsiodes at standard definition (no compression) and maybe 3000 CDs.
That should be enough!
SirGCal - Friday, June 14, 2013 - link
My point exactly!JeffFlanagan - Friday, June 14, 2013 - link
Don't get all your drives from one source or vendor. Buy an assortment of drives for your array, and you'll be much less likely to have 2 drives fail at once.brennok - Friday, June 14, 2013 - link
I guess I am not like most people. I used SHR2 so I could fill it with various disks as I upgraded. I only started with four 1TB Reds.SirGCal - Friday, June 14, 2013 - link
But that's never enough, building my 2nd 24TB rig now actually.. :-/ But I refuse to compress my BRs. I do strip out everything but the movies, but I also do NOT pirate them. I buy them and put them on my server. No one gets them either. Being in a wheelchair, it's one of my few hobbies though so I have a LOT of movies...cjs150 - Monday, June 17, 2013 - link
Apart from the wheelchair part I do exactly as SirGCal. Using a standard Blu-ray rip (at high quality rather than original) a 2 hour movie comes in at about 15Gb file. Some are a bit larger (17Gb), some a bit smaller (13.5Gb is the smallest).That chews up a 6TB rig very quickly - particular as 6TB hard disk space is not 6TB because HD manufacturers do not quote HD space in binary but decimal units (the difference is about 7% per TB)..
I look forward to when HD come in 10TB sizes! That would be enough on my 4 bay QNAP 419+ which I consider to be an ideal consumer box - plug it in and it works
Babar Javied - Friday, June 14, 2013 - link
Can someone explain to me why this would be better than making your own NAS (with FreeNAS or something similar)? Correct me if I'm wrong but you should be able to put together a nice PC for NAS purposes for under $400 without a RAID card.... $800 (give or take) with a raid card that should be better then this no?Peroxyde - Friday, June 14, 2013 - link
Exactly! You'd even be agreeably surprise that ZFS is better than a RAID card. Additionnaly your own build server will have much more RAM. I was about to buy a QNAP 4 bays. But after spending sometimes to read more about NAS4Free, I realize that a "roll you own" NAS server beats the prebuilts on all performance factors: better case, silence, better CPU, RAM, etc. etc. There is a big inconvenience though, you need to learn NAS4Free (or FreeNAS. the commercial implementation).Case in point: my NAS server costs me less than $400 (I have the luxury to wait for quality parts to go one sale): Fractal Design R4, Corsair VX 550, 8GB G-SKill Snipper DDR 1600. Just waiting for a good mobo + AMD low power CPU and I am ready.
brennok - Friday, June 14, 2013 - link
Not necessarily better or worse. I was looking to replace my WHS and I didn't feel like doing another build. I wanted something compact, quiet, and efficient since it stays on 24/7. The Synology came highly recommended and I didn't feel like doing test builds to figure out which OS I wanted to use.SirGCal - Friday, June 14, 2013 - link
See my reply above to a few posts. I just put it up a few minutes ago... It IS better.. And quite a bit cheaper. The Synology stuff really is NOT very good for the savvy. In-fact, ESPECIALLY with this many drives, your data is at too much risk... I tried to explain it in detail. Sorry for the rather long windedness of the post but I try to be detailed.SirGCal - Friday, June 14, 2013 - link
Oh, and you could do it with ZFS; I just like RAID and am more familiar with it than with ZFS.
SirGCal - Friday, June 14, 2013 - link
Or you're talking ZFS compression over RAID? I was thinking about something completely different... haven't slept in 36 hours... Twins teething... fun... sorry. But that should work fine on any of these RAID cards.
Peroxyde - Friday, June 14, 2013 - link
@SirGCal Thank you for all the info you gave. Coincidentally, I have decided to go with the Fractal Define R4 for silence, exactly as you stated. Regarding ZFS, I think this article might be of interest to you, in particular the section "What ZFS Gives You that Controllers Can't": http://constantin.glez.de/blog/2010/01/home-server...
SirGCal - Friday, June 14, 2013 - link
I have two of those cases myself, three in the office. It's so quiet. Love it. Mine has windows too. Still very silent and cool with 8 drives running 24/7 (add more fans).
As for RAID-Z, that article only compares it to RAID 5, and while I agree it's better in that case (much is), they don't compare it to RAID 6, where I don't think its performance and failover will keep up. But I'm not familiar with this particular method, so I'd have to play with it and run comparisons to know for sure. I am not a RAID 5 fan at all now that arrays have grown beyond about 4TB in overall size, to be honest. In those cases, this would likely be my choice.
JDG1980 - Friday, June 14, 2013 - link
The appropriate comparison would be RAID-Z vs RAID-5, and RAID-Z2 vs RAID-6. In each case, ZFS wins if you're dedicating the same amount of space to parity data.
SirGCal - Sunday, June 16, 2013 - link
I'll check out RAID-Z2. My only immediate pause would be moving it to another RAID card after a card failure... That is something worth considering if you run a large array. But other than that, when I get ready to build this next array, I will run some tests if possible.
danbi - Monday, June 17, 2013 - link
You could also look at raidz3, which is triple parity. ZFS works fine for small numbers of disks, but it really shines with larger numbers. Avoid "RAID controllers" as much as possible -- a "simple" HBA is the way better choice, performance-wise.
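In capacity terms, the parity tradeoff is straightforward (a toy sketch, ignoring ZFS metadata and slop space):

    # Usable space vs. failures survived for an 8-bay box of 4TB drives.
    def usable_tb(drives, drive_tb, parity_disks):
        return (drives - parity_disks) * drive_tb

    for name, p in (("RAID 5 / RAID-Z1", 1), ("RAID 6 / RAID-Z2", 2), ("RAID-Z3", 3)):
        print(f"{name}: {usable_tb(8, 4, p)} TB usable, survives {p} drive failure(s)")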
Hakker9nl - Friday, June 14, 2013 - link
God, I'm glad I made a ZFS server. This thing is expensive, slow, and more power-hungry than my system. For reference, I built mine for a third of the price. It reaches 300+ MB/s internally, is limited externally by the 1 Gbit port, and uses 60 watts when resilvering.
SirGCal - Friday, June 14, 2013 - link
EXACTLY my point above. Thanks for helping me illustrate it. I tend to be long-winded trying to explain things completely...
t-rexky - Friday, June 14, 2013 - link
A word of caution for Mac users. I researched a NAS "to death" before purchasing the DS1512+ about six months ago. I have a large number of computer systems, including vintage Unix-based machines, OS X, Linux, and Windows. SAMBA and NFS appear to work reasonably well with the Synology DSM, but there is a fundamental issue with AFP support that remains uncorrected in the latest DSM 4.2 build: the support for Unix-style file permissions is broken, and DSM overrides the OS X permissions with default values.
Synology did improve the behaviour in DSM 4.2, and at least the execute bit can now be correctly set on the remote mounts, but the read and write permissions still do not work. I was extremely disappointed to find such a fundamental issue with a system that is advertised as fully OS X compatible and also widely recommended for Mac users.
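For anyone who wants to reproduce the symptom, here is a minimal sketch (the mount point below is just an example; point it at a file on your AFP share):

    # Create a file on the mounted share, request specific permission
    # bits, then read back what the server actually stored.
    import os, stat

    path = "/Volumes/NAS-Share/permtest.txt"  # example AFP mount point
    open(path, "w").close()
    os.chmod(path, 0o640)                     # ask for rw-r-----
    actual = stat.S_IMODE(os.stat(path).st_mode)
    print(f"requested 640, got {actual:o}")   # a mismatch means DSM overrode the bits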
For anyone interested in more details, here is the full story: http://forum.synology.com/enu/viewtopic.php?f=64&a...
SirGCal - Friday, June 14, 2013 - link
So again, why not build your own server? Cheaper, more effective, more capable. Use your own OS and your own mounting systems. You could even include Samba and NFS directly if you wanted. Works for sure then.
t-rexky - Friday, June 14, 2013 - link
That approach would work fine for me, but unfortunately not for the others at home who need a reasonably user-friendly interface...
SirGCal - Friday, June 14, 2013 - link
What interface? It's network storage... It's a mounted drive, a website address, or a UNC path like \\192.168.1.100\Share\. That's all you need to get to your data from any system connected to the network. What 'interface' do you use? A web browser? Change it to http:// and add Apache or IIS... Don't blame the box because you don't know what you're doing.
t-rexky - Friday, June 14, 2013 - link
Please re-read my original post. This has nothing to do with my knowledge or my abilities but rather everything to do with a device that one purchases that claims to do something but fails to do it.
SirGCal - Friday, June 14, 2013 - link
Again, this just reinforces my examples of why one should build their own instead of buying someone else's 'package' of problems. You put on your own tools/addons/etc. Put on the parts you need, the interfaces you want, whatever GUI access your users want to use (ya, they can all be added in almost any flavor of OS in some form)... All this does is give it to you in a plug & pray form factor. I added an update below from a friend who bought one, though, and his experiences. It was against my recommendations, but I only advise and support.
SirGCal - Friday, June 14, 2013 - link
I have an update for you guys that like these: I have a friend who likes them too - he has one. To be fair, I offered to build him a box like mine also. He went with one of these this time around. He has the same drives as mine, and it is running RAID 6 (that's what made me think about giving him a call).
Downsides: it's slow. He's only seeing about 120MB/s writes and 180MB/s reads with it. A lot slower than my rig. Plus it can't do all the other dedicated things (Subsonic/Plex/Handbrake/etc.) that mine does all the time, even over the internet, to serve up my files and videos while I'm away. At least he can't figure out how to get it to run Subsonic anyhow...
So it does do RAID 6. Huge bonus there. Kudos to that. But it is also proprietary. Tried taking the array out and putting it on my cards, and they didn't recognize it. Common problem with these boxes. Whereas I can move arrays between my cards, even between brands, and they all recognize each other as long as they are in standard RAID formats (one benefit of using standard formats), in case a card ever fails. And this is even with the SAME network hardware (I bought a spool and we did both of our homes, and we got the same switches/routers/etc., so all of that is identical, and his house is actually simpler, with fewer connections).
So that's just one real-world example, but there ya go. At least it does do RAID 6; still, Anand dropped the ball on that one and should have tested it for ya/us... Performance seems a bit off, but it does work. He hasn't had to do any repairs or rebuilds/grows yet, though, so I can't say anything about that. He built it fully populated out of the gate. But at least RAID 6 does work, for those wondering. If you're going to go with a box, and a big box at that, at LEAST use RAID 6 or something better. 8 drives is not good for RAID 5...
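To put a rough number on that RAID 5 risk, here is a back-of-the-envelope sketch. It assumes the common 1-in-10^14-bits unrecoverable read error (URE) spec for consumer drives (enterprise drives are typically rated 10^15), and that a rebuild must read every surviving drive end to end:

    # Odds of hitting at least one URE during a degraded RAID 5 rebuild.
    import math

    def rebuild_ure_probability(drives, drive_tb, ure_per_bit=1e-14):
        bits_read = (drives - 1) * drive_tb * 1e12 * 8  # read all surviving drives
        return 1 - math.exp(-ure_per_bit * bits_read)

    print(f"8x4TB RAID 5, consumer spec:   {rebuild_ure_probability(8, 4):.0%}")         # ~89%
    print(f"8x4TB RAID 5, enterprise spec: {rebuild_ure_probability(8, 4, 1e-15):.0%}")  # ~20%

Even granting that spec sheets are pessimistic, an 8-drive single-parity array of big disks leaves very little margin; dual parity (RAID 6 / RAID-Z2) is what absorbs that one bad read.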
Jeff7181 - Friday, June 14, 2013 - link
One could argue that RAID6, or even RAID5 for that matter, is unnecessary in a home environment where downtime means lost money. Data loss is not (or should not be) a concern. RAID provides performance and fault tolerance.
RAID is not a backup solution and should not be treated as such. You should have another copy of your data elsewhere.
Jeff7181 - Friday, June 14, 2013 - link
That should read... where downtime doesn't mean lost money.
SirGCal - Friday, June 14, 2013 - link
Well, no. It's also simply a matter of time and data loss... For example, I keep my pictures and home movies on my RAID array, along with my private Blu-ray/DVD collection. To re-rip my private collection from scratch would take literally years. Having it on hand is the benefit of the array. Possible, but very inconvenient. To lose the pictures and home videos would be catastrophic. I do have backups of those off-site, but RAID 6 still helps prevent either one of these failures from happening. Hence the use of a very large array anyhow. If you're going big, go smart or don't do it. Having to recreate the data, possible or not, would be insanely difficult and time-consuming, and that is the entire POINT of having the array to begin with. Convenience.
Jeff7181 - Monday, June 17, 2013 - link
You've missed my point. My point is that even a total failure of an array should not mean data loss. RAID is not a backup solution. You should be using a backup solution for data you can't afford to lose, not RAID. Lost uptime is not as expensive in most home environments as it is in most business environments. It's convenient to tolerate a hard disk failure with no downtime at home, but in most cases the downtime isn't costing you money, so all you're buying with that fault tolerance is convenience, because again, RAID is not a backup solution - you should have your data backed up elsewhere.
SirGCal - Monday, June 17, 2013 - link
OK, but while you're pulling your many TB down from whatever backup service over your internet connection - killing your internet pipe and slowing it down for everyone in the process, for likely weeks or months, unless you're one of the few on fiber or FiOS - I'd rather not have to repopulate 24-28TB of data from backup in the first place. Good luck with that. While I do keep a backup, it's far better not to need it.
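For a sense of scale, an idealized sketch (sustained line rate, no protocol overhead or throttling):

    # How long a full restore over the wire takes at various link speeds.
    def restore_days(tb, mbps):
        return tb * 1e12 * 8 / (mbps * 1e6) / 86400

    for mbps in (25, 100, 1000):
        print(f"24 TB at {mbps:>4} Mbps: {restore_days(24, mbps):5.1f} days")
    # 24 TB at   25 Mbps:  88.9 days
    # 24 TB at  100 Mbps:  22.2 days
    # 24 TB at 1000 Mbps:   2.2 days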
Jeff7181 - Tuesday, June 18, 2013 - link
Who said anything about "cloud" backup? Buy additional high-capacity drives and only spin them up to perform backups.
SirGCal - Sunday, June 16, 2013 - link
UPDATE ON SYNOLOGY: We were finally able to get the Subsonic module loaded and working on it properly, and it works fine for music... mostly. But it doesn't have the horsepower to transcode Blu-ray content on the fly, even for just one viewer. I don't know if it's memory or CPU or both, but even over the local network (which is disgustingly overkill) it just can't do it. Choppy, stutters, etc., whereas mine is smooth and uses ~10% CPU or less. I wouldn't have thought it was that hard on the CPU, but... We're still trying to get this to work, as it is one of the requirements for him to keep/use this box.
name99 - Friday, June 14, 2013 - link
Forgive me if this is a stupid question, but what's the reason for USB and eSATA ports on a box like this? I understand the basic point of a NAS (as a single box where I can dump a buncha drives and have the HW provide some level of RAID) but how do the USB/eSATA ports play into this?
Is the idea that, after I have filled this thing up with 8 internal drives but I need still more space, I start adding drives via the external ports?
SirGCal - Friday, June 14, 2013 - link
There are ways to extend the array, but honestly it gets to a point where the most reliable option is to buy or build another array. Doing it as a server-in-a-box, you can do 8/12/16/24-drive configurations... This standalone is the first 8-bay setup I've seen aside from rack servers, which obviously are true rigs costing a LOT more.
name99 - Friday, June 14, 2013 - link
You don't need to convince me. I've built my storage around 5-yr-old Macs connected to a bunch of 5-yr-old drives using Apple RAID and AFP. Maybe not the right solution for everyone, but it meets my needs, and is basically free.
But that's not the point. My question remains: for the people who ARE the target for this sort of device, what's the point of the USB/eSATA ports? Our reviewer, for example, wanted USB ports in front of the box. Why? What would he do with them?
SirGCal - Friday, June 14, 2013 - link
Sorry - according to their own site, you can use two of their DX513 expansion units to increase the capacity to 18 drives. However, 18 drives even as RAID 6 becomes not so hot. 8-12 drives is about my limit. At 16, I make two RAID 6 groups and then one volume for the virtual array cluster to pull its data from. Then you have 4 parity drives, but much better protection across the array than just 2 drives of parity. That's another discussion, though. They sell bigger boxes that I think actually do this type of configuration; I'd have to research it. But even their reported numbers don't show great performance. Still, it should be OK for most home use.
If you want to plug in a single drive and just add it as a shared folder, I think it will do that. I can ask my friend to give it a go when he gets home, if you like.
name99 - Friday, June 14, 2013 - link
I think I have a feeling now, from what both you and Ganesh have said.
It seems a strangely limited market: an environment that wants this much storage, but where no one is willing to just plug a drive into one of the machines around and have it act as a file server. But, I guess, I'm not the target audience.
SirGCal - Saturday, June 15, 2013 - link
Ya, that's the catch. For what it is, it's not bad. But the biggest problem in my eyes is that's all it is. It can't do the "other" things that a server could do, such as run the other software packages that my servers do... Or at least we haven't figured out how to make it do so yet. We've been beating the pants off my friend's rig trying to make it run something like Subsonic, which is a media streaming service for streaming your own media files to yourself when you're offsite. Music and videos... I love it, and he was hoping to use it also, but he isn't getting his Synology box to run anything this complicated yet. In some ways I'm actually a bit surprised, since it's just a Java daemon (in Windows it's a service). Of all my software tools, I thought this one might actually work. And there might be a way; we haven't tried hard yet. The other fear is that the CPU won't be capable of transcoding on the fly... at least for videos. We're pretty sure the software will install, but the Atoms are pretty weak. We'll see... Worst case, I guess, we set up yet another server to feed off it for the streaming. Sort of defeats the purpose, but... if it can't do it...
Micke O - Monday, June 17, 2013 - link
The drives in each DX513 must be their own volume. No BIG volume spanning all the drives in the main unit and the expansion units is possible.
ganeshts - Friday, June 14, 2013 - link
Yes, you can add the DX510 expansion chassis via the eSATA ports and get a total of (5 + 5) 10 more bays. That is why you have the 18 in the DS1812+ :)
Peroxyde - Friday, June 14, 2013 - link
@name99 the USB/eSATA ports allow you to make a backup of the NAS onto external drives, or maybe dump content onto your NAS. They are not there to extend the capacity of your NAS.
name99 - Friday, June 14, 2013 - link
OK. Thanks.
Again, to me it seems a strange use case which can easily be duplicated just by using one of the client machines, but I guess when you're selling something costing a $K you try to add in any random thing you can think of to make it appear worth the money.
don_k - Friday, June 14, 2013 - link
Your NFS numbers seem way too low compared to the CIFS numbers. You might want to drop the 'tcp' from the mount options; that's the most likely culprit. NFS defaults to UDP; not sure why you're changing that.
ganeshts - Friday, June 14, 2013 - link
I see a number of vociferous comments about how a ZFS build / building your own NAS will offer better performance and how Synology (or, for that matter, any other vendor's off-the-shelf NAS offering) is just too costly. Let me try to address the issue:
1. Building your own NAS with a configuration tuned to what you require will obviously be more cost-effective and efficient - no doubts about that. Synology and other such solutions are targeted towards SMB / SOHO users who don't have the expertise to build a NAS on their own, or feel that their time is better spent buying an off-the-shelf ready-to-use offering from a vendor. Maybe the IT admin of the SMB has better things to do than sitting down and building a PC, installing the appropriate OS, etc. These off-the-shelf NAS units are just plug and play.
2. Expandability: Units such as the DS1812+ offer the ability to extend the number of bays by providing support for expansion units (the DX510 has 5 bays, and you can attach two of them to the unit). Plug them in and you have a total of 18 bays. Try adding that to your own build: first, you have to make sure the eSATA port you connect the new bays to supports port multipliers; then you have to spend a lot of time reconfiguring your host OS to recognize the drives in the new bays and add them to your existing array. These are not impossible things, but they just suck up a lot of time.
3. Features: NAS vendors offer 'app stores' to extend the feature set. For example, I am currently trying out Surveillance Station on the DS1812+ right now. Ready to use minutes after installing it. On your PC, you have to set up something like iSpy and spend time making sure it is compatible with all your equipment. Synology becomes a one-stop shop for such features.
In summary, yes, if you are tech-savvy and have a lot of time at your disposal, you are better off building your own NAS. There is plenty of open-source software available to enable such systems (and, to be fair, we have been working towards evaluating a custom-built NAS for some time). We elect to do extended coverage of NAS units such as the DS1812+ and QNAP TS-EC1279U-RP because a large number of readers are IT admins / IT decision-makers at many SMB / SOHO firms, and they are looking for off-the-shelf solutions. The off-the-shelf NAS market is pretty huge, and that is why you have a large number of vendors doing quite well with increasing revenue: QNAP, Synology, Thecus, Netgear, Iomega / LenovoEMC, Asustor... The list is pretty big.
MadHelp - Friday, June 14, 2013 - link
Thank you for bringing some sense to this one-sided thread. Building your own NAS can be done at a lower cost and offer you great benefits. However, you will be hard-pressed to build something more refined than this unit, especially in regards to size and ease of use.
bsd228 - Friday, June 14, 2013 - link
The simplest form - an HP MicroServer + FreeNAS - is pretty easy to assemble. It allows for ECC memory, and up to 16GB of it too, for higher performance. Still a tiny form factor, low power, low noise. If expandability is the driver, large PC cases and motherboards with PCIe cards will always win. If features matter, a Linux install offers faster development and many more of them.
I combine them all together - MicroServer with 16GB, an SSD for caching, Solaris (full install) with VirtualBox, Ubuntu in a VM. It's still a lightweight processor (I'd prefer one of the ULV Ivy Bridge or Haswell parts), but it kills an Atom.
Companies are flocking to this market because it offers nice margins...like the markup Puget might put on their beautiful systems.
MadHelp - Saturday, June 15, 2013 - link
The MicroServer is a great build-your-own NAS box, but it does not stand up to the DS1812+. For one, it can only hold five 3.5" disks max, while the Synology can hold eight, and they are all hot-swappable. How about warranty/support, anyone?
My opinion is based on owning both types of systems. I just sold my 1511+ + DX510, and I own an OI + napp-it ZFS array. They both have pros and cons. You can almost look at it like a person who's looking to buy a Mac versus a computer nerd who builds all of his boxes. There is a reason why Apple is in business, and it's similar to why the Synologys, QNAPs, and Netgears are able to sell NASes.
bsd228 - Saturday, June 15, 2013 - link
MadHelp - the MicroServer trivially takes 6 drives and has the PCIe slots to take more externally if you really wanted and needed the capacity. Or you could just buy two of them - with memory and a second NIC they're still only $400 each, compared to the $999 price of this unit bare. Either way, the DS1812+ can't stand up to the CPU, the features offered by ZFS, the memory capacity, or the overall feature set. And you can certainly get support for the software (not FreeNAS, but WHS, Nexenta, Solaris, or others) and have the usual year warranty for the hardware.
Synology and the others combine decent software, ease of use for a limited feature set, and barely-good-enough hardware into a package. IOW, one out of three.
MadHelp - Sunday, June 16, 2013 - link
6 hot-swappable drives? I don't think so. WHS is discontinued, and all of those other Solaris-based products you listed cost thousands of dollars to buy and support. You might get more CPU and the ability to add more RAM to a MicroServer box, but what's the result? For a storage box, it's still going to be slower than a DS1812+ in regards to throughput. In fact, while the reviewer dismisses the new DS1813+, it can now deliver 350MB/s reads and 200MB/s writes to the network. I've never seen anything close to that from a MicroServer; point being, it's in another class. You guys might complain about the cost, but you get what you pay for.
bsd228 - Monday, June 17, 2013 - link
There's nothing special about the DS hardware in terms of drive throughput. Using bonnie on the box itself, I've benchmarked various disk organizations on the MicroServer, and 0+1 got reads of 370MB/s. Writes tend to max out in the ballpark of 100 (EFRX 2TB drives) - mirroring doesn't speed up writes, it slows them, and RAID-Z of course requires the parity computes/writes. I'm not going to stripe. However, the more interesting stats aren't about sequential access, which is measurebating, but about IOPS. Adding 16GB of memory and an SSD caching drive to the ZFS pool substantially increases IOPS.
If you stick to the single onboard NIC, of course you're not going to do better than 1Gbit on transfers to other hosts. But you can add a dual-port Intel card for $130. Not sure what a quad card would cost, though in the context of most users here, that's a silly feature. It needs switch support (much more $$ than a dumb switch) and needs a lot of users pulling at max. Not the case in the home. Unless the clients are also going to run multiple NICs, it's an unusable capacity.
I can't quickly see the disk config that would be needed to support the metrics you cite. But if it's an 8-drive RAID 5, that's performance at a risk profile I won't accept. 0+1, OTOH, would be the way to go.
Hot swap doesn't work in Solaris (which cost me $0, not thousands), but my understanding is that it was present in WHS. It isn't an essential feature to me at home, but I can see others putting more value on it.
MadHelp - Tuesday, June 18, 2013 - link
I understand your points and I agree. My points about the sequential throughput are just there to cite out-of-the-box capability with the Synology. In regards to IOPS, if that's your goal, you can load SSDs into the DS1813+ and achieve some seriously high numbers, similar to 16GB of ARC and an SSD L2ARC drive.
The same reasons you would add a quad NIC to the MicroServer are the same reasons you would spend $1000 on a DS1813+. I agree the average home user would not need a quad NIC on their NAS, nor would they need a DS1813+. The DS1813+ is built for a SOHO or power user.
This is just shooting in the dark, but I would imagine that the metrics I stated above could be produced by one mid-tier SSD or three 3TB WD Black drives.
Hot swap has to be supported in both the hardware and the software. From my understanding, the issue is with the HP MicroServer not supporting it. I can't speak to Solaris, but I'm using OpenIndiana and I can hot swap all day. Also, my comment in regards to Solaris was about support. Call Oracle and try to purchase a support contract; it's expensive. Synology comes with a standard 3-year warranty. In a dire situation, the guys at Synology will SSH into your box and fix it. Again, you get what you pay for.
SirGCal - Saturday, June 15, 2013 - link
Sorry, that wasn't the point I was trying to make. Reading through your article, it was completely void of anything in reference to RAID 6. This box should never be run in RAID 5 mode with all 8 drives going, and that should definitely have been explained for those 'layman' users who wouldn't have known any better. Otherwise, you know darn well they would have bought the rig, gotten 8 drives, followed this review, built a RAID 5 array, and a few years from now lost it all. The unit might be a phenomenal NAS in and of itself, but test it as it really SHOULD be used responsibly by the general public... Or at least as two linked RAID 5 volumes; that would have been better than one giant single RAID 5 array. That was the one biggest problem I had with the article. I had to do considerable research, and until a friend actually told me he had one, I didn't know it was RAID 6 capable. The whole point of these is huge arrays for would-be responsible backups. They are NOT secure backups per se, but at the same time we don't want to lose 20+ TB of data because a drive crapped out and the array had one ECC hiccup on a 35+ hour rebuild. I thought I made that a bit more clear in half a dozen posts above this one.
DigitalFreak - Monday, June 17, 2013 - link
"I thought I made that a bit more clear in half a dozen posts above this one."I tuned out after the 3rd one.
Jeff7181 - Tuesday, June 25, 2013 - link
I said it before and I'll say it again... using RAID6 should not protect you from data loss any more than RAID5 will. RAID is not a backup solution and should not be treated as one.
Gimfred - Thursday, July 18, 2013 - link
That may be so, but isn't a Home/SMB NAS typically a backup target as well as serving its media functions?
That said, it is both bewildering and disappointing that NAS manufacturers haven't embraced ZFS. The only constraint that comes to mind is memory, but that is a poor reason to bail on x86 devices.
pwr4wrd - Friday, June 14, 2013 - link
This is a pretty sleek unit. However, considering the MSRP of $999.00 (with no drives), it is possible to build a superior unit with custom components. Here is a recent example from my custom FreeNAS build with the ZFS file system and RAID-Z1. The drives used in this setup are older assorted SATA III 1TB drives. All settings in FreeNAS are default values, and no performance optimizations have been made.
The reason I am posting is simply to illustrate the fact that far better results can be achieved for about the same cost. The data security that ZFS offers is priceless, in my humble opinion.
NASPT Test Results (MB/s):
HDVideo_1Play 95.653
HDVideo_2Play 111.941
HDVideo_4Play 113.313
HDVideo_1Record 237.236
HDVideo_1Play_1Record 90.348
ContentCreation 10.446
OfficeProductivity 53.119
FileCopyToNAS 74.114
FileCopyFromNAS 93.923
DirectoryCopyToNAS 7.255
DirectoryCopyFromNAS 41.833
PhotoAlbum 16.066
FreeNAS (software version 8.3.1) server components:
CPU: Intel Xeon E3-1230 @ 3.20GHz
RAM: 16GB ECC DDR3 Kingston @ 1333MHz
Motherboard: Supermicro X9SCM-F
Network Controllers: Onboard Intel® 82579LM and 82574L, 2x Gigabit LAN ports
Boot Drive: Corsair 32 GB Flash drive
Duckhunt2 - Saturday, February 15, 2014 - link
What about the power consumption?
tokyojerry - Sunday, June 16, 2013 - link
Greetings. I am confused as to which way to go for a NAS unit. First let me define the use: this will be for individual personal home network use and SOHO operation. Synology and QNAP seem to be the two most popular brands. I am not so much 'brand' conscious as I am after the product that gives me the best bang for the buck and the features I want. Perhaps this 8-bay would be overkill for such a personal level of need? I might be better off with two 2-bay (or 4-bay) models and synchronizing (backup) between them? The other feature I would like is HDMI output for use as an HTPC on the front-room TV. Thus, noise level as well as HDMI is a consideration. In short:
1. Should I go for QNAP or Synology for these considerations?
2. Which model? (I currently have not quite 8TB of data: two 4TB drives externally hooked up to a Mac mini over USB 3.) Thanks.
Micke O - Monday, June 17, 2013 - link
1. I personally like Synology and have experience with three models: the 212j, 2411, and 1512. All of them have been working fine, and they are very easy to configure. The 212j wasn't the fastest one around, but I didn't expect it to be either.
2. That depends on the level of protection you want to run and how much your data will grow over the time you expect the device to "live". Please remember that no RAID level whatsoever is a replacement for a proper backup (preferably off-site and offline, if you ask me).
tokyojerry - Wednesday, June 19, 2013 - link
Thanks for that feedback. I did a search for the 2411 and 1512, but they seem to be 'past tense' models for Synology. What I did find is that there are 8-bay and 12-bay models. I think this goes way beyond my needs and perhaps even my data growth. Perhaps a 4-bay or 5-bay might be more suitable for me in terms of growth and capacity. And then have a second NAS of the same type, where one is the main and the other a fallback, or a backup to the main.
Currently I am not doing RAID on my 2-bay DS213. I just run each disk as an independent volume and then back those up over USB 3 to an external box housing two more matching drives. Simple, but it works.
The draw for me was the HDMI port on the QNAP NAS, whereby I could also have the NAS double as an HTPC media server. I hear that Synology is supposed to release a DS714 that also has HDMI, supposedly in June, but they have been completely mute about any information on the product. On the other hand, perhaps I should not let an HDMI port be the deciding factor in which NAS I buy.
Thanks for the input.
klassobanieras - Thursday, June 27, 2013 - link
How does it deal with silent corruption? What happens if you yank the power cord during a write? How do I get my data off the disks if the NAS dies?
God forbid a NAS review ever tell me any of these things.
andypost - Monday, July 29, 2013 - link
Why is there still no integration of a 10Gbps Ethernet interface in these storage/networking products?