
Comments

Comment reply on cheeseandcereal's Completed Build: 126TB NAS + Proxmox Virtualization Server

  • 10 months ago
  • 1 point

It really depends what you're trying to do. A common argument for hardware RAID in the past was that it offloaded the calculations from your CPU onto a dedicated controller, but nowadays pretty much any form of software RAID is cheap enough (from a computational perspective) that it really doesn't matter anymore.

The main thing I dislike about hardware RAID is that you're stuck with traditional RAID 0/1/5/6 or some combination of those. With those, it's impossible to change the array size/disk configuration after the array has been created. For that reason, I'm not too fond of traditional RAID. If that doesn't bug you, traditional RAID is fine, although it's still lacking in features compared to something like ZFS.

As for software RAID, I'm not particularly fond of OS-dependent RAID solutions (where you create the RAID inside the OS), because that locks you into that OS. Something like Intel's RST, where you can create a RAID before boot, is a really great, cheap, and easy solution, as long as you have a compatible Intel CPU (AMD doesn't offer anything like this at the moment).

With all of that said, if you have the hardware (namely the RAM), and the willingness to learn a little bit about storage administration, I definitely prefer ZFS over any RAID setup because of its features, performance, reliability, and configurability. ZFS is one of the most popular filesystems in production servers for a good reason. The only thing about ZFS is that you have to research and know what you're doing before you start making pools/VDEVs, because some of the settings you choose initially can never be changed afterwards (although this isn't particularly different from traditional RAID, other than the fact that there are fewer settings to set/tweak than with ZFS).
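To give a concrete (hypothetical) example of what I mean by settings being locked in: the VDEV layout and sector-size setting are chosen at creation time, while things like compression can be flipped later. A rough sketch driving the standard zpool/zfs tools; the pool name, disk paths, and values are just placeholders, not my actual setup:

```python
# Illustrative sketch only: pool name, disk paths, and property values are
# placeholders, not a recommendation for any particular build.
import subprocess

disks = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde", "/dev/sdf"]

# The ashift (sector size) and the RAIDZ layout/width are fixed here; changing
# them later means destroying and recreating the pool.
subprocess.run(["zpool", "create", "-o", "ashift=12", "tank", "raidz2", *disks],
               check=True)

# Properties like compression, by contrast, can be changed at any time.
subprocess.run(["zfs", "set", "compression=lz4", "tank"], check=True)
```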

Comment reply on cheeseandcereal's Completed Build: 126TB NAS + Proxmox Virtualization Server

  • 11 months ago
  • 1 point

Reading back on my post, I think I definitely was a little harsh. I guess I shouldn't have called it half-baked, but I stand by my point that BTRFS is absolutely not production ready. I have heard many accounts from colleagues who tried deploying BTRFS with smaller customers, only to have a filesystem bug destroy data. You only have to experience that once to never want to use it again. This sentiment is backed up in many places, including RHEL's decision to deprecate BTRFS support. And of course, that's all still putting aside the fact that the feature set simply isn't there with BTRFS to make it compelling in the first place.

It would certainly be ideal if OpenZFS were licensed better, but as it is, there is no precedent that anyone really cares about ZFS + Linux from any sort of legal perspective. Remember that Canonical actually ships Ubuntu with the ZFS kernel module pre-installed by default, and some lawyers have backed this up, saying that from a legal perspective it's within the terms of the licensing of both Linux and ZFS.

For something like the boot drive of a server, something like ext4 or XFS is fine, and slightly better when used in tandem with LVM, but for big pools of disks, you really want a filesystem that was designed for hardware failure while managing many disks at a time. Features like copy-on-write (which enables things like native snapshotting) can also be a huge plus, and a huge money-saver when you work them into your backup workflow; at that point you almost CAN'T afford NOT to use them.
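For example (heavily simplified, with made-up dataset and snapshot names), the kind of snapshot-based backup step I'm talking about looks roughly like this; snapshots are nearly free to take, and incremental sends only ship the blocks that changed:

```python
# Rough sketch of an incremental, snapshot-based backup step. Dataset and
# snapshot names are placeholders; this is the plain zfs snapshot/send/receive
# pattern, not my exact workflow.
import subprocess
from datetime import date

dataset = "tank/data"                       # hypothetical dataset
today = f"{dataset}@backup-{date.today()}"  # e.g. tank/data@backup-2018-11-01
previous = f"{dataset}@backup-previous"     # placeholder for the last snapshot

subprocess.run(["zfs", "snapshot", today], check=True)

# Send only the delta since the previous snapshot and receive it into a
# separate backup pool (or pipe it over SSH to another machine).
send = subprocess.Popen(["zfs", "send", "-i", previous, today],
                        stdout=subprocess.PIPE)
subprocess.run(["zfs", "receive", "backup/data"], stdin=send.stdout, check=True)
send.wait()
```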

Comment reply on cheeseandcereal's Completed Build: 126TB NAS + Proxmox Virtualization Server

  • 11 months ago
  • 2 points

Mainly containerization of all the services I want to run over a network. I.e., I have a separate VM for my media server, network sharing software, VPN host, etc, etc. Also, having a platform where I can spin up different environments on different OSes is really nice as a software engineer: I can quickly test something on Windows+BSD+Linux, even going as far as to spin up a whole new VM if I need to test on a clean system.

Basically, for anything I want a server for, I can spin up its own personal environment, rather than cluttering up one big server running a single OS.

Comment reply on cheeseandcereal's Completed Build: 126TB NAS + Proxmox Virtualization Server

  • 11 months ago
  • 1 point

In regards to BTRFS: As far as I'm concerned, BTRFS is a half-baked filesystem solution for the people on Linux who were butt-hurt about the CDDL licensing of OpenZFS and wanted a GPL-compliant filesystem equivalent (although BTRFS is nowhere near the feature parity or stability of ZFS). There have been many arguments over the years about whether CDDL is truly compatible with GPLv2, and although Stallman disagrees, there are many people (including lawyers) who say it's probably fine, and people continue to work on ZFS on Linux regardless, with no one trying to stop them yet. I don't think ZFS will ever be shipped as part of mainline Linux because of the licensing, but that doesn't stop it from being more supported, stable, and suitable for real-world use than BTRFS. I think you'd be hard-pressed to find almost ANYONE running BTRFS in production because of how experimental/unsupported it is. If you do some research, you'll see just how embryonic BTRFS really is (for a filesystem, losing data is the worst type of failure, and it still happens with BTRFS deployments). ZFS is tried-and-true, and really the only decent open source local storage pooling solution. I could go on talking about it forever, but I'll save us both some time...

Why have all my storage in my hypervisor? Well, that's because it's my home network, and it reduces network/hardware overhead, since my only goal is to share storage with the LAN+VMs and I have no need for a dedicated SAN. You mentioned an iSCSI box shared to the hypervisor, and I did consider that at one point, but as you mention, it's just unnecessary network overhead, on top of needing an entirely extra box to host the storage. I would rather over-provision a single server and have everything in one. Proxmox is simply some VM-management software slapped on top of Debian, so even though I call it a hypervisor, it really is a full Linux OS just running some management software, and less of a true hypervisor. I realize that ZFS uses a lot of resources that take away from what's available to all my VMs, and I'm OK with that. Also note that like 90% of my VMs are actually Linux containers (LXC). This, combined with the fact that I don't really run a lot of RAM-heavy software like databases, redis, etc (with the exception being ZFS, of course), means that my virtualization workload is much, much, much less RAM intensive than a typical hypervisor workload. Most of my VMs (containers) use less than 200MB of RAM when running.

The other thing about using primarily containers for virtualization is that these VMs get native access to the ZFS filesystem/pool without ANY extra overhead. They can access data from the pool exactly like the hypervisor does, through the shared kernel (NO networking stack involved). This means I can have ZFS volumes shared between VMs with literally 0 overhead (where traditionally you'd need some sort of SAN and a very good network to support this, on top of the general overhead and latency of running all that storage traffic over wires on a network). This allows me to take the 'micro-services' approach to virtualization, where I can basically dedicate an entire VM (container) to each major application I want to run (so my network fileshare software can run independent of my media server, which can run independent of everything else, despite sharing a lot of the same volumes/files). This is great from a security perspective, and allows me to achieve really nice isolation of services without a single massive VM, with effectively 0 extra overhead from a storage/CPU perspective and only a tiny bit of extra overhead from a RAM perspective (something in the realm of 50-150MB per container).
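In practice (on Proxmox) that sharing is just a bind mount of a host ZFS dataset into each container. A minimal sketch using Proxmox's pct tool, with made-up container IDs, dataset path, and mount points:

```python
# Hedged sketch: bind-mount one ZFS dataset from the host into two LXC
# containers via Proxmox's pct tool. Container IDs, the dataset path, and
# the mount points are all placeholders. Both containers then see the same
# files through the shared kernel -- no NFS/iSCSI or network stack involved.
import subprocess

dataset_path = "/tank/media"  # hypothetical ZFS dataset mounted on the host

for ctid in (101, 102):
    subprocess.run(
        ["pct", "set", str(ctid), "-mp0", f"{dataset_path},mp=/mnt/media"],
        check=True,
    )
```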

Any KVM-based VMs will still need to use something like NFS to talk back to the hypervisor to get access to any shared storage (aside from what is directly allocated to them as a VM), which has a lot of overhead. If I were primarily using Windows or KVM virtual machines, this model of having the storage on my hypervisor wouldn't make much sense, because even though network traffic doesn't have to go over a wire to access the shared storage, it's still going through a networking stack on essentially 2 different machines (VM + hypervisor) ON TOP OF the regular kernel calls to access that data. Fortunately, I keep the VMs that require emulated kernels to a minimum, and I simply don't share my bulk storage with them (they just get allocated whatever they need as their primary block volume, and don't interact with any other storage).

tl;dr: BTRFS is kind of a joke tbh; ZFS is battle-tested, stable, and has more features. With my primary virtualization being containers (not emulated kernels with KVM), my RAM usage is much lower than a typical virtualization workload, hence the low amount of RAM. This is coupled with the fact that containers get native access to the ZFS pool with no overhead, so if I have one big storage pool on my hypervisor, I can share parts of it between VMs (containers) with literally 0 extra overhead. Something like a SAN with iSCSI would actually be detrimental to performance, even if it had no network latency, simply because all storage requests would have to go through networking stacks.

Comment reply on cheeseandcereal's Completed Build: 126TB NAS + Proxmox Virtualization Server

  • 11 months ago
  • 1 point

That's the point of all the Noctua fans. With every fan in the system running at max speed 100% of the time (my current config), the fans are still much quieter than the disks, which really aren't that loud. I did this very intentionally so I could have it out in the open without it being a nuisance. My old NAS was actually much louder than this one, despite only having 3 fans and 7 drives.

This thing just sits next to my TV for now; I'm planning on getting a server rack when I upgrade my networking and get proper rack-mounted switches.

Comment reply on cheeseandcereal's Completed Build: 126TB NAS + Proxmox Virtualization Server

  • 11 months ago
  • 2 points

Depends what you're using it for, and what storage solution you go with. If it's purely a NAS, and doesn't do dual duty as a virtualization server (like mine does), then that's more than enough CPU/RAM to do whatever you want for storage. FreeNAS is a nice OS with a web interface that can help you build ZFS pools, which are basically a really nice form of RAID.

If you are trying to do VMs as well, you can, but you'll definitely want more RAM if you do ZFS, and even if you don't, with the 6700K you probably won't want to spin up more than maybe 4-ish VMs total. Still totally doable though.

Comment reply on cheeseandcereal's Completed Build: 126TB NAS + Proxmox Virtualization Server

  • 11 months ago
  • 1 point

No, but I hoard/archive data.

Comment reply on cheeseandcereal's Completed Build: 126TB NAS + Proxmox Virtualization Server

  • 11 months ago
  • 2 points

Yup, I'm planning on putting in 10Gb once I upgrade my local network to support it. My home network just isn't 10Gb yet, so I didn't preemptively get a 10Gb card.

Not too worried about tape drives for backup; any actually critical data is backed up in more places than just this NAS. Losing data on this would be annoying, but not catastrophic (again, this isn't a prod system, just a home server). The only reason I went RAID-Z2 is that having another drive die while replacing/resilvering is a very real possibility with big drives like these 10TB ones, especially since many are made in the same batch and are likely to fail around the same time. I'm not too concerned about the data integrity, and I would never run a system like this in prod anyway.

I also hope the lack of ECC RAM doesn't bite me. I would've preferred ECC, but I was (clearly) willing to trade it for speed. Like I mentioned, good unbuffered ECC DDR4 just doesn't really exist.

Comment reply on cheeseandcereal's Completed Build: 126TB NAS + Proxmox Virtualization Server

  • 11 months ago
  • 1 point

A computer that runs many virtual computers with lots of storage.

Comment reply on cheeseandcereal's Completed Build: 126TB NAS + Proxmox Virtualization Server

  • 11 months ago
  • 1 point

The bulk of it is media and general data archiving. Although it's worth noting it ends up being about 70TiB usable after all the redundancy and whatnot is calculated. Reliable data storage is not cheap, unfortunately.

Comment reply on cheeseandcereal's Completed Build: 126TB NAS + Proxmox Virtualization Server

  • 11 months ago
  • 1 point

Yeah, with those features, the extra mobo price is definitely worth it. The price on Epyc was definitely the tipping point for me though. They do have a lot more cache, but, for example, that 7251 is currently selling for more than a 1950X (you can commonly find 1950Xs at or below $500 now that the 2000-series Threadrippers are out), yet it has half the cores and a noticeably slower clock rate. So even with 2 of them, making them over 2x the cost, the 1950X will still perform better in raw compute.

As for RAM speeds, I know Zen really likes faster RAM, especially the larger Threadripper/Epyc processors with multiple CCXs and Infinity Fabric. With that said, I'm not sure how much more performance you'll get out of +266MHz on the RAM, BUT with Threadripper you can easily overclock RAM well past the officially supported 2667MHz (most people report that using an XMP profile on 3200MHz RAM generally just works out of the box after selecting it in the BIOS), which actually does give a sizeable boost to performance.

That's why I went with the higher-clocked (non-ECC) RAM for this build. Threadripper can actually utilize it.

Comment reply on cheeseandcereal's Completed Build: 126TB NAS + Proxmox Virtualization Server

  • 11 months ago
  • 2 points

Yup, Epyc would be ideal (support for multiple CPUs as well as buffered ECC; Threadripper supports unbuffered ECC, but good luck finding good unbuffered ECC DDR4), but it's like 2-4x the price of the equivalent Threadripper CPU, not to mention the more expensive motherboard as well. If you've got a lot more money to spend, Epyc is the way to go if you don't want to drop $10k+ on the equivalent Intel Xeon CPUs.

Proxmox is awesome. Even though I'm a Linux guy, I would have preferred SmartOS due to the amazing ZFS support on illumos, but SmartOS is basically Intel-only due to its lack of support for AMD's virtualization extensions. With that said, being able to run LXC containers on Proxmox is great due to the insanely low overhead (you can run Ubuntu Server with like 50MB of RAM). I just wish ZFS on Linux were a tad farther along (although it certainly isn't bad).

Comment reply on cheeseandcereal's Completed Build: Gaming/Editing PC

  • 32 months ago
  • 1 point

That's correct, you can actually see in one of the build pictures that I have an SSD mounted in one of those 2.5" drive trays, with the other 2.5" drive tray sitting adjacent to it (so you can see what they look like with and without a drive). The trays which hold the drives come with the case. No adapters required.

Comment reply on cheeseandcereal's Completed Build: Project Red Saber

  • 40 months ago
  • 2 points

The Z97 is a much better board. First of all, you get the Z97 chipset, which has a lot more features, including overclocking support, and the GD65 Gaming is a really high-quality board with 12-phase power delivery to the CPU, among various other features. If the price difference is only $20 and you aren't on a super tight budget, I would highly recommend the Z97 board.

Comment reply on cheeseandcereal's Completed Build: Bryan's Computer

  • 40 months ago
  • 1 point

There's a hard drive cage below the 5.25" bays.

Comment reply on cheeseandcereal's Completed Build: Basic Desktop (with expandibility)

  • 42 months ago
  • 1 point

The case is honestly great for the price. It's just about the smallest case you can buy for a standard form factor desktop. I have really no complaints about it, especially considering the price. The USB 3.0 front ports are great on such a low-priced case, it's very sturdy and feels well built, and everything fits quite well. It will even be able to fit a decent-sized graphics card, which is awesome.

As for ASRock boards, don't get me wrong, they do make some good mobos. With that said, my experience with the lower-end ASRock boards has been quite terrible. It seems there's always some ASRock board that undercuts the competition for any specific chipset/feature list, but they always end up failing me. I've bought something like 9 ASRock motherboards in total, pretty much all of them being the cheapest board for some particular chipset. Out of those 9 boards, I have literally had 7 fail. I have just come to the conclusion that they cut corners to get their prices so low on some of their motherboards, and it shows in their longevity. Thus, I will never buy an ASRock board for myself, or for any build I do for anyone else, ever again. I've said "one more time" too many times when it comes to cheap ASRock boards. The $20 savings I generally get is never worth it.

Comment reply on cheeseandcereal's Completed Build: Gaming/Editing PC

  • 45 months ago
  • 1 point

As for the RAM and microstutters, you don't get microstutters when you're using a FreeSync panel, so that's not even a concern, and I would much rather take the higher overclocking potential, which is a much more tangible benefit, over a higher RAM speed.

As for the SSD, I didn't include mail-in rebates and such. The price I actually ended up paying for the BX200 SSD was around $110, not $130, and there's no way I could justify an extra $40 for a slightly faster SSD. Honestly, the main benefit of jumping to an EVO would be the 4K read speeds, which would be about doubled, and a few extra IOPS, but other than that, they are going to perform largely the same.

Comment reply on cheeseandcereal's Completed Build: Gaming/Editing PC

  • 45 months ago
  • 1 point

I could've gone with an 850 EVO, but really the only improvement there is in random 4K reads, which wasn't a huge concern, so I went with the savings instead. As for RAM, in 95% of applications RAM speed makes absolutely no difference, and you can get better CPU overclocks when you're not pushing your RAM that high, so in my opinion lower-speed RAM with better CAS latency is actually the better choice in a lot of cases, especially if it saves you money.

Comment reply on cheeseandcereal's Completed Build: Gaming/Editing PC

  • 45 months ago
  • 1 point

I didn't include the mail-in rebates in the price either. I have about $100 worth of mail-in rebates for this comp.

Comment reply on cheeseandcereal's Completed Build: Gaming/Editing PC

  • 45 months ago
  • 1 point

With OpenCL-accelerated tasks, it works great. In gaming it's roughly equivalent to, if not a bit better than, a GTX 970.

Comment reply on cheeseandcereal's Completed Build: NAS Beast/Personal Server

  • 45 months ago
  • 1 point

Yeah, no problem, glad to help.

Comment reply on cheeseandcereal's Completed Build: NAS Beast/Personal Server

  • 45 months ago
  • 1 point

Yeah, I am serving video to multiple people both in and out of my local network. Even within my local network, a lot of my content cannot be direct streamed because it uses unsupported formats (FLAC audio, H.265 video, or subtitle burn-in, to name a few), so for those I have to live encode even for local network playback. And yes, a similar Xeon probably would have been a better solution IF I didn't need decent onboard graphics, since I can't throw in a cheap GPU with the RAID card occupying my PCIe slot, and graphics was also a requirement for my personal uses. Also, devices such as a Chromecast don't let me simply play the video files via basic network sharing, and the live encoding on my Plex server helps reduce a bit of the strain of playing back high-bitrate video on a few of the lower-power devices that I (or others) use to watch my Plex content, even at max quality. There's always the option to lower the quality as well, which everyone accessing my server from outside my local network uses.

Comment reply on cheeseandcereal's Completed Build: NAS Beast/Personal Server

  • 45 months ago
  • 1 point

Yes, I'm actually well aware of that board, as it's by far the most popular board to put inside the DS380B. I probably would have gone with that solution, but for my personal uses, I require much more computational performance. I have a few programs that I run that are quite CPU-intensive beyond just Plex media encoding. That's why I have such an expensive CPU in my NAS as well.

Just as a side note, when you're streaming 1080p content to 12 users at a time, you're probably using direct play rather than live encoding 12 1080p streams (or if you are transcoding, they are very low-bitrate streams). With my CPU, I can actually live encode about 3 Blu-ray quality (~30Mb/s 1080p) videos at a time. If I were using direct play, I could stream to practically as many users as I wanted, since that's basically just serving files and requires almost no CPU at all.
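The arithmetic behind that (with assumed numbers, not measurements) is simple: direct play is limited by network bandwidth, while live encoding is limited by CPU. A rough sanity check:

```python
# Back-of-the-envelope numbers, not benchmarks. Direct play just moves bits,
# so the 1Gb link is the limit; live encoding is limited by the CPU instead.
stream_mbps = 30          # assumed Blu-ray-quality 1080p bitrate (Mb/s)
gigabit_mbps = 1000       # 1Gb Ethernet, ignoring protocol overhead

direct_play_streams = gigabit_mbps // stream_mbps   # ~33 streams by bandwidth
transcode_streams = 3                                # what this CPU can encode

print(f"Direct play limit (network-bound): ~{direct_play_streams} streams")
print(f"Live transcode limit (CPU-bound):  ~{transcode_streams} streams")
```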

Comment reply on cheeseandcereal's Completed Build: NAS Beast/Personal Server

  • 45 months ago
  • 1 point

Yeah, they're actually SFF-8087 breakout cables with SATA ends, and I just bought the cheapest ones I could find. I kind of wish I had gone back and bought the shortest ones I could find instead, because stuffing all 8 of those ridiculously long SATA breakouts into this tiny case was a challenge to say the least. I ended up stuffing most of the extra length into the empty space in the SSD cage since there was literally no other place to put it lol.

Comment reply on cheeseandcereal's Completed Build: NAS Beast/Personal Server

  • 45 months ago
  • 1 point

Well over SATA SSD speeds for sequential reads. Gotta love striping. And that's all in RAID 5, so any one of my drives could completely fail and I wouldn't lose any data. The 4K reads and writes are pretty typical hard drive values, but since I'm not booting off my RAID array (I have an SSD for my OS drive) and this is NAS storage, I will literally never be working with lots of small files at a time, so that number is basically irrelevant. Write speeds could be faster, but I don't have a working battery for my RAID card right now, so my array is in write-through mode rather than write-back with the 512MB of onboard RAM on the RAID controller. Also, it's RAID 5, so with parity calculations on every write it will inherently be relatively slow, but still fast enough to nearly saturate a 1Gb network connection regardless.

http://imgur.com/Zs2CxT4
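For some rough context on those numbers (assumed per-drive speed, not a benchmark), even a conservative estimate for an array like this is several times what 1Gb Ethernet can carry:

```python
# Assumed numbers for illustration only. Sequential reads in RAID 5 stripe
# across the drives (roughly one drive's worth of throughput goes to parity),
# so even spinning disks collectively blow past a 1Gb network link.
drives = 8
per_drive_mb_s = 150                     # assumed sequential speed per disk
approx_raid5_read = (drives - 1) * per_drive_mb_s
gigabit_mb_s = 1000 / 8                  # 1Gb/s is only ~125 MB/s

print(f"Approx. RAID 5 sequential read: {approx_raid5_read} MB/s")
print(f"1Gb Ethernet ceiling:           {gigabit_mb_s:.0f} MB/s")
```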

Comment reply on cheeseandcereal's Completed Build: NAS Beast/Personal Server

  • 45 months ago
  • 3 points

That's only for running a ZFS filesystem, which is essentially a type of software RAID (not technically called RAID, but the same basic idea). I'm running my RAID in hardware, and I'm not using ZFS (I'm not even using Linux as the OS for this machine, so I couldn't run ZFS if I wanted to). If I wanted to go that route I would need a bigger motherboard, since almost all mini-ITX motherboards only support up to 16GB of RAM. That 1GB of RAM per 1TB of storage rule of thumb is so that ZFS can do lots of caching as well as error checking, but with a hardware RAID solution like mine, that's all handled by the RAID controller itself, so there's no need for extra RAM.
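Just to illustrate why that rule of thumb clashes with a mini-ITX board (the pool size here is hypothetical, not my array):

```python
# Hypothetical numbers showing why the "1GB RAM per 1TB" ZFS guideline runs
# into a 16GB mini-ITX RAM ceiling. Not a statement about my actual array.
pool_tb = 24                 # hypothetical usable pool size in TB
suggested_ram_gb = pool_tb   # rule of thumb: ~1GB of RAM per TB of storage
itx_ram_limit_gb = 16        # typical mini-ITX limit at the time

if suggested_ram_gb > itx_ram_limit_gb:
    print(f"ZFS guideline wants ~{suggested_ram_gb}GB of RAM, "
          f"but the board tops out at {itx_ram_limit_gb}GB")
```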

Comment reply on cheeseandcereal's Completed Build: Red Saber 2.0

  • 46 months ago
  • 1 point

Yeah, I wouldn't recommend it. Bought it on sale for $150 about a year ago, and a lot of the pleather is already starting to tear, the chair doesn't go as high as I would like, and something broke inside mine, so now it has a permanent tilt to the right. And all of this after only a year. It is comfortable though.

Comment reply on cheeseandcereal's Completed Build: Red Saber 2.0

  • 46 months ago
  • 1 point

Yeah, I'm pretty happy with my particular 4790K. I would be willing to bet I could push it to 5.0GHz if I REALLY wanted to, probably somewhere near 1.5 volts with some tinkering, but I'm not that interested.

Comment reply on cheeseandcereal's Completed Build: Red Saber 2.0

  • 46 months ago
  • 1 point

I was going to buy sleeved cables, but everywhere I looked when upgrading, they were out of stock for a full replacement cable set, and I really don't want to use extensions and add even more cabling to my build.

Comment reply on cheeseandcereal's Completed Build: Red Saber 2.0

  • 46 months ago
  • 1 point

No, that was done on purpose for 2 reasons. 1: It pulls air from the GPUs out of the case. It's basically like an extra exhaust since I'm not worried about the PSU getting too hot. 2: My computer is on carpet and it would have basically no airflow if it was the other way since it would be choked by the carpet.

Comment reply on cheeseandcereal's Completed Build: Project Red Saber

  • 58 months ago
  • 1 point

I have never seen those, but now that you mention it, I looked it up and I will totally get one. I will get more monitors in the future, but I need a bigger desk first :(
