
RAID NVMe M.2 SSDs

Racer2k5x

4 months ago

Is it possible now to RAID your NVMe M.2 drives? I know there was something about it last year, but I heard it was a major flop. Also, if it is possible, would it be worth buying a second NVMe M.2 drive for RAID, or is it better to just use it on its own?

Comments

  • 4 months ago
  • 1 point

Yes, NVMe RAID is possible; Intel and AMD each have their own methods. Whether it's worth it depends on what you want to get out of it.

  • 4 months ago
  • 1 point

Linus pulled it off:

https://youtu.be/lzzavO5a4OQ

Not much reason to RAID NVMe drives, though. Also it can be a headache to set up.

  • 4 months ago
  • 1 point

Is it possible now to RAID your NVMe M.2 drives? I know there was something about it last year, but I heard it was a major flop.

Well, how are you determining whether something is a flop? Outside of enterprise use, RAID has always been a flop based on user numbers. And when consumers start talking about RAID, it's almost always RAID0, because they just want faster performance without the cost and complexity of the RAID levels that provide redundancy (or redundancy plus performance). And even then, beyond talking about it, very few consumer-level users ever actually bother.

So then you take something like NVMe, which is already way faster than nearly any SATA RAID configuration could ever be. But until recently NVMe drives were very expensive per GB, so buying at least two to RAID0 them got you insane levels of performance that would probably only show up in benchmarks. Outside of some hardcore enthusiast benchmarking junkies, who is this solution for? It's not hard to see why that never made RAID0 popular.

So yeah, unneeded and unwanted solutions do tend to be flops. In general I'd say RAID is a flop among typical consumer-level users, and I'd be willing to bet there are at least 20 times more GeForce RTX 2080 Ti users right now than there are users running RAID0 on any home system, now or even at RAID0's peak in popularity.

Even genuinely useful RAID configurations aren't popular among average users: they're just not needed for most general use, and users have largely shown a willingness to spend only a pittance on storage, which pretty much precludes RAID becoming popular or widespread for general use.

  • 4 months ago
  • 1 point

Home users are not always RAID 0. There are people at home who run a NAS or a Plex server so they can store all their videos and watch them on their TV or other network-attached devices. In cases like that, depending on the number of drives, they would usually use RAID 1/5/6/10 for data redundancy. Others absolutely don't want to lose data, so they may use RAID 1 for reliability, but my big warning is to still back up anyway. Backblaze is a good option to pair with self-managed backups.

RAID 5 and 6 are great if you have 3 to 8 HDDs in a NAS. RAID 5 reserves one drive's worth of capacity for parity, so if you lose one drive you don't lose any data; RAID 6 reserves two drives' worth, giving two-drive redundancy. Eight drives in RAID 6 give you six drives of capacity, and you can lose up to 2 drives at once without losing any data, no matter which 2 they are. You're not really getting much speed out of that, especially compared to SSD speeds, but it's more than fast enough for storing a ton of video.
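The capacity arithmetic above can be sketched as a small helper. This is a hypothetical illustration assuming equal-sized drives, not any vendor's exact accounting:

```python
def usable_drives(total_drives, level):
    """Return how many drives' worth of capacity is usable at a given RAID level."""
    if level == 0:          # striping, no redundancy
        return total_drives
    if level == 1:          # mirroring: half the drives hold copies
        return total_drives // 2
    if level == 5:          # one drive's worth of parity
        return total_drives - 1
    if level == 6:          # two drives' worth of parity
        return total_drives - 2
    if level == 10:         # striped mirrors
        return total_drives // 2
    raise ValueError("unsupported RAID level")

# 8 drives in RAID 6 -> 6 drives of usable capacity,
# and any two drives can fail without data loss.
print(usable_drives(8, 6))  # -> 6
```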

The speed of today's NVMe SSDs, like the Samsung Pros, is crazy fast. SATA SSDs are still really good and are fast enough for the vast majority of home users. Just keep in mind that the real-world speed difference between a SATA SSD and an NVMe SSD, loading programs and such, feels much smaller than the difference between an HDD and a SATA SSD. Going from a 3-5 minute Windows boot time down to 15-20 seconds is a much bigger change than going from 15-20 seconds down to 10-15 seconds.
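To put rough numbers on the boot-time point above (using the midpoints of the ranges in the comment, which are just illustrative figures):

```python
# Approximate boot times, taken from the ranges in the comment above.
hdd_boot = 240   # ~4 minutes on a hard drive
sata_boot = 18   # ~15-20 s on a SATA SSD
nvme_boot = 12   # ~10-15 s on an NVMe SSD

# The HDD -> SATA jump is roughly an order of magnitude;
# SATA -> NVMe is only a modest fraction on top of that.
print(f"HDD  -> SATA: {hdd_boot / sata_boot:.1f}x faster")
print(f"SATA -> NVMe: {sata_boot / nvme_boot:.1f}x faster")
```

That ~13x versus ~1.5x ratio is why the SATA-to-NVMe upgrade feels so much smaller in daily use.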

  • 4 months ago
  • 1 point

Yes, yes, there are no absolutes. But I'm going to maintain that home users running NAS or Plex servers are a pretty small minority, and that anyone talking about running NVMe in RAID is almost certainly drooling over the theoretical performance of NVMe in RAID0. And I'll maintain that when a user angling to improve their system's performance throws out RAID without qualifying it at all, they're almost certainly talking about RAID0.

Whenever a user talks about upgrading their GPU, it's almost always a gaming GPU and not a workstation GPU. Of course some users want a workstation GPU; they're just a minority not worth worrying about unless they qualify their specific needs.

Or when a user talks about getting RAM, they're almost always talking about non-ECC RAM. Of course there's a tiny minority of users on ECC RAM, but they're going to have to mention that when discussing the RAM in their system; no one is going to assume it.

  • 4 months ago
  • 1 point

You might need a Threadripper or server motherboard simply loaded with PCIe lanes (and the BIOS option to bifurcate them if necessary; AMD typically provides this). That might mean using the 16 lanes normally given to the GPU to drive four x4 NVMe M.2 drives (I doubt you can find a reasonably priced board that provides multiple two-lane drives). I think there was a thread on this board about how splitting a slot's lanes directly into NVMe ports was relatively low-cost, while providing anything else required a PCIe switch and got expensive.
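The bifurcation math above is simple lane division. A quick sketch, assuming a board whose BIOS supports splitting the slot (the function name here is made up for illustration):

```python
def bifurcate(slot_lanes, lanes_per_drive=4):
    """How many NVMe drives fit in one slot, and how many lanes are left over."""
    drives, leftover = divmod(slot_lanes, lanes_per_drive)
    return drives, leftover

print(bifurcate(16))     # (4, 0): an x16 slot splits into four x4 drives
print(bifurcate(16, 2))  # (8, 0): eight x2 drives, which in practice needs a switch
```

Without bifurcation support, fitting more drives than the slot's natural split requires a PCIe switch card, which is where the cost comes in.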

There was a thread on RealWorldTech (a place where most people know what they are talking about) that discussed ZFS RAID-Z. While most RAID implementations avoid pulling all the data across a stripe (and thus checking the parity for the whole stripe) unless they need to read or write the entire stripe, RAID-Z appears to do it every time, which reduces your small-read throughput to roughly that of a single drive. There was a note that with M.2 drives you might well be willing to take this hit (it might not even slow down the program).
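The throughput hit described above can be modeled crudely. This is a toy illustration of the two read policies, not real ZFS internals, and the IOPS figure is an arbitrary example:

```python
def random_read_iops(drive_iops, n_drives, full_stripe):
    """Toy model of small random-read IOPS for an n-drive parity array."""
    if full_stripe:
        # Every read touches all drives at once, so the array
        # can only service one read at a time: single-drive speed.
        return drive_iops
    # "Lazy" scheme: each small read lands on one drive, so reads
    # aimed at different drives proceed in parallel.
    return drive_iops * n_drives

print(random_read_iops(500_000, 4, full_stripe=True))   # 500000
print(random_read_iops(500_000, 4, full_stripe=False))  # 2000000
```

With NVMe drives, even the full-stripe figure may already exceed what the workload can consume, which is the point the thread was making.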

I'd be very surprised if RAIDing NVMe weren't common in the server world (using either the standard "lazy" read/write scheme or the more thorough RAID-Z-style one, depending on how much they care about data integrity versus performance).

And pretty much all NVMe (and SATA SSD) drives already use RAIN (Redundant Array of Independent NAND), often with larger-block LDPC codes rather than the smaller Reed-Solomon schemes associated with RAID. Even drives with a single NAND chip/stack are likely using LDPC error correction in some other way.

