CPUs

Once again, the primary purpose of a file server is storage.  It is not computational prowess, nor producing high frame rates in games.  All other components, including the CPU, should take a back seat to the hard drives, case, and power supply in the context of a home file server.  File servers do not - repeat, do not - need the latest, greatest, most powerful processors to work well.  In fact, file serving is not a particularly taxing task, especially not for a home file server that will likely never have to distribute data to more than a handful of clients simultaneously.  Therefore, rather than using a powerful, power-hungry CPU, it's a better idea to use a less capable but more frugal chip.

From A(tom) to Z(acate)

Intel's Oak Trail (using Atom CPUs) and AMD's Brazos (using Zacate APUs) platforms are both up to the task of file serving.  However, neither platform produces a particularly pleasant experience with Windows Home Server 2011.  Both platforms take an agonizingly long time to install WHS2011, and neither will be capable of doing much more than simply serving - transcoding video on an Atom or Zacate WHS2011 system is painfully slow.  That said, both Oak Trail and Brazos are sufficient to run WHS2011, especially if your file server will be performing only basic tasks like streaming MP3s and storing photos.

My preferred Atom home server motherboard/CPU combo is the ASUS AT5NM10T-I, a passively-cooled Atom D525 (1.8GHz dual core with Hyper-Threading) solution that sports four SATA ports (rather than the two found on most Intel boards) and a PCIe x4 slot.  The PCIe expansion slot is useful for adding a SATA controller card in case you want your file server to house more than four drives.  Note that it uses laptop SODIMMs rather than standard desktop DIMMs, but considering how inexpensive DDR3 is currently, this does not affect the system's cost.  One thing to keep in mind when selecting an Atom-based file server: go for the most recent models that are dual core and have Hyper-Threading - the price premium is very modest and the performance increase is palpable.

ASRock's E350M1 is a more fully-featured Zacate motherboard that includes the E-350 APU (1.6GHz dual core), four SATA ports, an eSATA port (useful for backups), as well as VGA, DVI, and HDMI outputs.  While multiple display outputs might not be an important consideration for a file server, more flexibility is always better.  At just under $100 through Newegg currently, it is an exceptional value.  Its expandability is limited to one PCIe x16 slot, which can also accommodate PCIe x4 and x1 cards - while PCIe 'up-plugging' can be hit or miss on some motherboards, I have had excellent success up-plugging on this particular model.

In comparing Atom with Zacate in the context of a file server, the regular laptop or desktop experience paints a useful picture.  Atom is barely sufficient; Zacate is sufficient.  Zacate's main strength is its integrated GPU, which is not particularly useful for a file server, but its CPU prowess is also substantially better than Atom's.  While the Atom officially draws less power on paper (with a TDP of 13W) than the E-350 APU (at 18W), in practice the two platforms consume very similar amounts of power, both at idle and under loads typical of a file server (which do not tax the E-350's integrated GPU).  Given that the two platforms are priced similarly and use similar amounts of electricity, and that the AMD platform is generally more flexible, it is difficult to recommend the Atom-based solution in light of Zacate's substantial performance advantage.

The Sandy Bridge Pentiums

Anand recently reviewed the Sandy Bridge-based Pentiums, some of which have been available for a few months now.  These CPUs are excellent home file server processors: they have enough muscle to run WHS2011 smoothly and produce a very pleasant computing experience, they use little power under load, and they are not expensive.  The Intel Pentium G620 has become my go-to file server CPU since its release back in Q2.  It is the least expensive Sandy Bridge desktop CPU at less than $80, and though its TDP is 65W on paper, in reality it uses far less power under real world loads.  It even uses less power than its more powerful yet still juice-sipping Core i3-2100 cousin, and this combination of frugality and capability makes it difficult to recommend any comparably priced AMD CPU.  Unless you are on an extremely tight budget, a G620 makes more sense for a file server than, say, the AMD Athlon II X2 250.  You can get an idea of how the G620 and 250 compare by looking at Bench - though since Bench doesn't include the 250, we're using the 255, which is ever so slightly faster.  Pay particular attention to the power usage levels: at idle, the G620 system uses over 20W less than the 250 system, and under load the gap is larger still, though it stays under 50W.

But what about the lowly, dirt cheap Sempron 145?  Its TDP is only 45W, and though it's a single core CPU, it's still powerful enough for a file server, even one running WHS2011.  Again, though, there's a difference between official TDP and real world power draw: my own testing shows that the Sempron 145 also idles nearly 20W higher than the G620.  So the same conclusion applies: unless you are on an extreme budget, you're better off with the G620 than the 145.  The extra power consumed by the AMD CPUs compared to the Sandy Bridge Pentium translates directly into heat dumped into your file server's case.  Whether this is enough heat to make a difference depends on your case and cooling solution - but in my experience, it's enough to push hard drive temperatures from the high 30s to the mid 40s Celsius in especially small cases.

Though power consumption factors prominently in our recommendation of the Pentium G620 over the Athlon II X2 250, it's important not to lose sight of the forest for the trees: a 20W difference in power consumption for a file server CPU is the rough equivalent of leaving a smaller, lower wattage incandescent bulb like a reading lamp on 24/7 in your home.  Ultimately, the decision is simple: is a $25 or more premium for the G620 worth saving 20W+ on your electric bill over the long haul?
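To put that in concrete terms, here's a quick back-of-the-envelope sketch of the math (the $0.11/kWh figure is an assumed average US residential electricity rate - substitute your own):

    # Rough annual cost of a 20W difference in file server power draw,
    # assuming the machine runs 24/7 and electricity costs about $0.11/kWh (assumed rate).
    watts_saved = 20                                  # approximate power advantage of the G620
    kwh_per_year = watts_saved * 24 * 365 / 1000      # about 175 kWh per year
    cost_per_kwh = 0.11                               # assumed rate in dollars; adjust for your utility
    savings_per_year = kwh_per_year * cost_per_kwh    # about $19 per year
    print(f"~{kwh_per_year:.0f} kWh/year, ~${savings_per_year:.2f}/year saved")

At that assumed rate, a $25 premium pays for itself in a little over a year of continuous operation - sooner if your electricity is more expensive.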

The lilliputian Intel Pentium G620 heatsink and fan is a good indication of the chip's heat output

Motherboards

Whether you use a mini-ITX, micro-ATX, or full-size ATX motherboard will largely be dictated by the size of the case you decide to house your home file server in.

Mini-ITX

Mini-ITX boards sacrifice expandability for small size.  Few ITX motherboards have more than four SATA ports, limiting them to file servers holding at most 12TB (four of the largest commercially available 3TB hard drives).  However, most ITX motherboards have at least one expansion slot, which can hold a SATA controller card.  There are many LGA 1155 motherboards compatible with the Intel Pentium G620, and most have very similar feature sets.  In a mini-ITX case, however, board layout becomes critically important.  My favorite ITX 1155 motherboard is Giada's MI-H61-01, specifically because its four SATA ports are clustered in the lower right corner of the board (when it's mounted), right by the front panel connectors and 20 pin ATX power port.  Because the four pin CPU power connector is located in the upper left corner, cable management is a breeze and facilitates excellent airflow - everything goes to either the lower right or upper left corner, allowing cables to be run along the top or bottom of the mini-ITX case.  The Giada MI-H67-01 has a nearly identical layout and is sometimes less expensive than the H61 board.  Though Giada is a newcomer to the North American market and does not have the reputation of older brands like ASUS, for what it's worth, I have used many of these boards in both file server and regular ITX desktop builds and have been completely satisfied with them.  Remember, if you want to build a mini-ITX file server that will hold six hard drives, you will need a PCIe SATA controller card with two SATA ports, such as the SYBA SD-SA2PEX-2IR or Rosewill RC-211.

Micro-ATX

Micro-ATX LGA 1155 boards can sport up to seven SATA ports (four SATA II and three SATA III), but most come with four or six.  As with mini-ITX file server boards, layout is important when stuffing many hard drives into a micro-ATX case.  The Biostar TH67B places all six of its SATA ports at the very bottom right corner of the board.  Unfortunately, there are no micro-ATX 1155 options that push the 20+4 pin power connector to either the very top or bottom of the board, but at least this Biostar board has its four pin power connector at the very top.

Full ATX

Cable management is rarely as difficult in a full-size ATX case as it can be in micro-ATX and mini-ITX cases, so board layout is perhaps less important for a full ATX file server motherboard, but it is still a consideration - ten hard drives can become very messy!  ATX 1155 boards max out at ten SATA ports; however, ten port boards are typically $200 or more, whereas eight port boards like the ASUS P8P67 can be found for as little as $125.  Thus, if you absolutely need ten HDDs in your file server, it makes more sense to spend $125 on the board and $25 on a two port SATA controller card than $200 on a ten port motherboard.  I like this ASUS board for multiple HDD systems because its SATA ports are mounted perpendicular to the board, facing forward, at about the same height as one PCIe x1 and one PCI slot - routing all of the SATA cables from roughly the same height keeps them tidier and allows better airflow than if they came off the board at different heights.

RAM

File servers do not need high performance, low latency, high frequency RAM.  FreeBSD, FreeNAS, and Ubuntu all run well with 2GB of RAM under loads typical of a home file server, but they run palpably smoother with 4GB.  WHS2011 runs much more smoothly with 4GB.  All of the file server OSes run even better with 8GB of RAM.  As RAM prices continue to fall, 8GB kits have regularly been available for less than $30 after rebate.  Because RAM prices have been so dynamic lately, rather than recommending a specific product, we'll simply recommend that you shop around!  You should be able to find 2GB, 4GB, or 8GB kits for about $5/GB without a rebate, or less than $5/GB after rebate.

Now that we've covered CPUs, motherboards, and RAM, the next page discusses case and power supply options.

Comments

  • HMTK - Monday, September 5, 2011 - link

    Inferior as in PITA for rebuilds and stuff like that. On my little Proliant Microserver I use the onboard RAID because I'm too cheap to buy something decent and it's only my backup machine (and domain controller, DHCP, DNS server) but for lots of really important data I'd look for a true RAID card with an XOR processor and some kind of battery protection: on the card or a UPS.
  • fackamato - Tuesday, September 6, 2011 - link

    I've used Linux MD software RAID for 2 years now, running 7x 2TB 5400 rpm "green" drives, and never had an issue (except one Samsung drive which died after 6 months).

    This is on an Atom system. It took roughly 24h to rebuild to the new drive (CPU limited of course), while the server was happily playing videos in XBMC.
  • Sivar - Tuesday, September 6, 2011 - link

    This is not true in my experience.
    Hardware RAID cards are far, far more trouble than software RAID when using non-enterprise drives.

    The reason:
    Nearly all hard drives have read errors, sometimes frequently.
    This usually isn't a big deal: The hard drive will just re-read the same area of the drive over and over until it gets the data it needs, then probably mark the trouble spot as bad, remapping it to spare area.

    The problem is that consumer hard drives are happy to spend a LONG time rereading the trouble spot -- far longer than most hardware RAID cards will wait before deciding the drive is not responding and dropping it, even though it's a perfectly good drive.

    For "enterprise" SATA drives, often the *only* difference, besides price, is that enterprise drives have a firmware flag set to limit their error recovery time, preventing them from dropping unless they have a real problem. Look up "TLER" for more information.

    Hardware RAID cards generally assume they are using enterprise drives. With RAID software it varies, but in Linux and Windows Server 2008R2 at least, I've never had a good drive drop. This isn't to say it can't happen, of course.

    ------------------------------

    For what it's worth, I recommend Samsung drives for home file servers. The 2TB Samsung F4 has been excellent. Sadly, Samsung is selling its HDD business.

    I expressly do not recommend the Western Digital GP (Green) series, unless you can find older models from before TLER was expressly disabled in the firmware (even as an option).
  • Havor - Sunday, September 4, 2011 - link

    HighPoint RocketRAID 2680 SGL PCI-Express x4 SATA / SAS (Serial Attached SCSI) Controller Card

    In stock.
    Now: $99.00

    http://www.newegg.com/Product/Product.aspx?Item=N8...

    Screw software raid, and then there are many cards with more options like online array expansion.
  • Ratman6161 - Tuesday, September 6, 2011 - link

    For home use, a lot/most people are probably not going to build a file server out of all new components. We are mostly recycling old stuff. My file server is typically whatever my old desktop system was. So when I built my new i7-2600K system, my old Core 2 Quad 6600 desktop system became my new server. But...the old P35 motherboard in this system doesn't have RAID and has only 4 SATA ports. It does have an old IDE Port. So it got my old IDE CD-ROM, and three hard drives that were whatever I had laying around. Had I wanted RAID though, I would probably get a card.

    Also, as to the OS: a lot of people using a machine as a home file server are not going to need ANY "server" OS. If you just need to share files between a couple of people, any OS you might run on that machine is going to give you the ability to do that. Another consideration is that a lot of services and utilities have special "server" versions that will cost you more. Example: I use Mozy for cloud backup, but if I tried to do that on a Windows Server, it would detect that it was a server and want me to upgrade to the Mozy Pro product, which costs more. So by running the "server" on an old copy of Windows XP, I get around that issue. Unless you really need the functionality for something, I'd steer clear of an actual "server" OS.
  • alpha754293 - Tuesday, September 6, 2011 - link

    @Rick83

    "MY RAID card recommendation is a mainboard with as many SATA ports as possible, and screw the RAID card."

    I think that's somewhat of a gross overstatement. And here's why:

    It depends on what you're going to be building your file server for, how much data you anticipate putting on it, and how important that data is. Like, would it be a big deal if you lost all of it? Some of it? A week's worth? A day's worth? (i.e. how fault tolerant ARE you?)

    For most home users, that's likely going to be like pictures, music, and videos. With 3 TB drives at about $120 a pop (upwards of $170 a pop), do you really NEED a dedicated file server? You can probably just set up an older, low-powered machine with a Windows share and that's about it.

    @Rick83/PCTC2

    I think that when you're talking about rebuild rates, it depends on what RAID level you were running. Right now, I've got a 27 TB RAID5 server (30 TB raw, 10 * 3TB, 7200 rpm Hitachi SATA-3 on Areca ARC-1230 12-port SATA-II PCIe x8 RAID HBA); and it was going to take 24 hours using 80% background initialization or 10 hours with foreground initialization. So I would imagine that if I had to rebuild the entire 27 TB array; it's going to take a while.

    re: SW vs. HW RAID
    I've had experience with both. First was onboard SAS RAID (LSI1068E), then ZFS on 16*500 GB Hitachi 7200 rpm SATA drives on an Adaptec 21610 (16-port SATA RAID HBA), and now my new system. Each has its merits.

    SW RAID - pros:
    It's cheap. It's usually relatively easy to set up. They work reasonably well (most people probably won't be able to practically tell the difference in performance). It's cheap.

    SW RAID - cons:
    As I've experienced, twice: if you don't have backups, you can be royally screwed. Unless you've actually TRIED transplanting a SW RAID array, it SOUNDS easy, but it almost never is. A lot of the time, there are a LOT of things happening/running in the background that are transparent to the end user, so if you try to transplant the array, it doesn't always work. And if you've ever tried transplanting a Windows install (even without RAID), you'll know that.

    There's like the target, the LUN, and a bunch of other things that tell the system about the SW RAID array.

    It's the same with ZFS. In fact, ZFS is maybe a little bit worse because I think there was like a 56-character tag that each hard drive gets as a unique ID. If you pulled a drive out from one of the slots and swapped it with another, haha...watch ZFS FREAK out. Kernel panics are sooo "rampant" that they had a page that told you how to clear the ZFS pool cache to stop the endless kernel panic (white screen of death) loop. And then once you're back up and running, you had to remount the ZFS pool. Scrub it, to make sure no errors, and then you're back up.

    Even Sun's own premium support says that in the event of a catastrophic failure with SW RAID, restore your data from back-ups. And if that server WAS your backup server -- well...you're SOL'd. (Had that happen to me TWICE because I didn't export and down the drives before pulling them out.)

    So that's that. (Try transplanting a Windows SW RAID....haha...I dare you.) And if you transplanted a single Windows install enough times, eventually you'll fully corrupt the OS. It REALLLY hates it when you do that.

    HW RAID - pros:
    Usually it's a lot more resilient. A lot of them have memory caches, and some even have backup battery modules that help preserve the write intent operations in the event of a power failure so that at the next power-up, the controller will complete the replay.* (*where/when supported). It's to prevent data corruption in the event that, say, you are in the middle of copying something onto the server and then the power dies. It's more important with automated write operations, but since most people kind of slowly pick and choose what they put on the server anyway, that's usually not too bad. You might remember where it left off and pick it up from there manually.

    It's usually REALLY REALLY fast because it doesn't have OS overhead.

    ZFS was a bit of an exception because it waits until a buffer of operations is full before it actually commits them to disk. So, you can get a bunch of 175 MB/s bursts (onto a single 2.5" Fujitsu 73 GB 10krpm SAS drive), but your clients might still be reporting 40 MB/s. On newer processors, it was effectively idle. On an old Duron 1800, it would register 14% CPU load doing the same thing.

    HW RAID - cons:
    Cost. Yes, the controllers are expensive. But you can also get some older systems/boards with onboard HW RAID (like LSI-based controllers), and they work.

    With a PCIe x8 RAID HBA, even with PCIe 1.0 slots, each lane is 2 Gbps (250 MB/s) in each direction. So an 8-lane PCIe 1.0 card can do 16 Gbps (2 GB/s) each way, or 32 Gbps (4 GB/s) combined. SATA-3 is only good to 6 Gbps (750 MB/s including overhead). The highest I'm hitting with my new 27 TB server is just shy of the 800 MB/s mark. Sustained read is 381 MB/s (limited by the SATA-II connector interface). It's the fastest you can get without PCIe SSD cards. (And as far as I know, you CAN'T RAID the PCIe SSD cards. Not yet anyway.)
  • Brutalizer - Friday, September 9, 2011 - link

    It doesn't sound like I have the same experience with ZFS as you.

    For instance, your hw-raid ARECA card - is it in JBOD mode? You know that hw-raid cards can seriously mess with ZFS?

    I have pulled disks and replaced them without problems, you claim you had problems? I have never heard of such problems.

    I have also pulled out every disk, and inserted them again in other slots and everything worked fine. No problem. It helps to do a "zpool export" and "import" also.

    I don't understand all your problems with ZFS. Something is wrong; you should be able to pull out disks and replace them without problems. ZFS is designed for that. I don't understand why you don't succeed.
  • plonk420 - Sunday, September 4, 2011 - link

    friend has had good luck with a $100ish 8xSATAII PCI-X Supermicro card (no raid). he uses lvm in ubuntu server. i think they have some PCI-e cards in the same price range, too.

    i got a cheapish server-grade card WITH raid (i had to do some heavy research to see if it was compatible with linux), however it seems there's no SMART monitoring on it (at least in the drive manager GUI; i'm a wuss, obviously).
  • nexox - Wednesday, September 7, 2011 - link

    Well, there are about a million replies here, but I think I've got some information that others have missed:

    1) Motherboard SATA controllers generally suck. They're just no good. I don't know why this site insists on benchmarking SSDs with them. They tend to be slow and handle errors poorly. Yes, I've tested this a fair amount.

    2) Hardware RAID has its positives and negatives, but generally it's not necessary, at least in Linux with mdraid - I can't speak for Windows.

    So what do you do with these facts? You get a quality Host Bus Adapter (HBA). These cards generally provide basic RAID levels (0, 1), but mostly they just give you extra SAS/SATA ports with decent hardware. I personally like the LSI HBAs (since LSI bought most of the other storage controller companies), which come in 3gbit and 6gbit SAS/SATA, on PCI-Express x4 and x8, with anywhere from 4 to 16 ports. 8 lanes of PCI-Express 2.0 will support about 4GB/s read, which should be enough. And yes, SAS controllers are compatible with SATA devices.

    Get yourself an LSI card for your storage drives, use onboard SATA for your boot drives (software raid1), and run software raid5 for storage.

    Of course this means you can't use an Atom board, since they generally don't have PCIe, and even the Brazos boards only offer PCIe x4 (even if the slots look like x16).

    For some reason SAS HBAs are some kind of secret, but they're really the way to go for a reliable, cheap(ish) system. I have a $550 (at the time) 8 port hardware RAID card, which is awesome (managed to read from a degraded 8 disk raid5, CPU limited, at 550MB/s on relatively old and slow 1TB drives, which isn't going to happen with software raid), but when I build my next server (or cluster - google ceph) I will be going with software raid on a SAS HBA.
  • marcus77 - Saturday, October 6, 2012 - link

    I would recommend euroNAS http://www.euronas.com as the OS because it gives you more flexibility (you can decide which hw to use and can upgrade it easily).

    Raid controllers don't always make sense - especially when it comes to recovery (multiple drive failures), software raid is much more powerful than most raid controllers.

    If you wish to use many drives you will need an additional controller - LSI makes pretty good HBAs - they don't provide raid functionality but have many ports for the drives. You could use it in combination with software raid. http://www.lsi.com/products/storagecomponents/Page...

    If you are looking for a real HW raid controller, I would recommend Adaptec - they have very good Linux support and are widely used in storage servers.
