

/g/ - Technology


File: 28 KB, 675x500, raid 5.png
No.50060224

i need a 20TB storage solution expandable to 50TB. Bandwidth is an issue, so using something like AWS is out because it would literally take 10 weeks to upload 1TB.

I was thinking just build a NAS with a raid 5 controller and add 5TB disks as needed.

What's the best way to implement this?

>> No.50060276


Server chassis, a good hardware RAID card, a UPS so a power outage doesn't kill your array, ideally a RAID controller with its own small battery backup. About $2000.

>> No.50060344


I'm running a 16TB NAS with FreeNAS.

I have a Fractal Node 304 mini-ITX case with six 4TB drives.

I threw a SAS PCI Express RAID card in it.

It's using an AMD A-5300 APU (dual-core 3.4GHz) with 16GB of RAM.

Shit's pretty cash

>> No.50060479

shit sounds cache

>> No.50060502
File: 59 KB, 500x380, CARLOS!.jpg


>> No.50060527

if you can handle the fact that btrfs's raid5/6 code is brand new (read: considered usable but not tested a whole lot), that'd be the best

adding disks to a btrfs raid is the least painful of them all (literally just plug in a disk and run a command and it's instantly part of the raid, don't need to stop what you're doing)

plus you can convert between raid levels at any time as well, so you could start out on something more stable like raid10, and convert it to raid5/6 later on
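
to sketch what that looks like in practice (mount point and device path are just examples):

```shell
# add a new disk to a mounted btrfs filesystem; it's part of the
# filesystem as soon as the command returns
btrfs device add /dev/sde /mnt/storage

# rebalance so existing data gets spread across the new disk too
btrfs balance start /mnt/storage

# later, convert data and metadata from e.g. raid10 to raid6 in place
btrfs balance start -dconvert=raid6 -mconvert=raid6 /mnt/storage
```

the balance runs online, so you can keep using the filesystem while it works.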

>> No.50060540

>raid 5
Stop right there, RAID 5 has been deprecated since the arrival of 1TB drives.
If one of your 1TB+ drives dies in a RAID 5, there is a significant probability that another one will die while the data is being reconstructed onto a replacement drive, and then all your data is lost.

>> No.50060625

>What's the best way to implement this?
Buy a NAS that can fit 11 drives. Install drives. Install FreeNAS. Set up RAID.

Anything obvious I'm missing about this question?

Oh, and do yourself a favor and use software raid. Hardware raid is not something you want to be using in 2015.

>> No.50060651

i'm not too concerned about my 2TB drives, but with new 6TB drives i'm not sure i'd go with less than dual parity (raid6). i just finished replacing one of my disks (after a good 5 years of service) and it took over a day to get the raid back in order

the ratio between speed and size is the problem: HDD capacities are growing faster than their transfer speeds

>> No.50060655

had this happen ages ago when i was working IT support.
the sysadmin had a raid 5 with 3 hot spares (kill me). 1 drive failed, then 1 more failed during reconstruction onto the 1st hot spare.

array lost at the end of it

>> No.50060675

>raid 5 with 3 hot spares
now that's just pure silliness

what on earth stopped him from doing raid6 with 2 hot spares?

>> No.50060692


It's fine if you just want to maximize capacity while minimizing cost with a reasonable amount of data security. Obviously you wouldn't use RAID 5 for anything critical, just stuff that would be a pain to replace. RAID 6 and nested RAID exist for that.

>> No.50060718

this, most people use multiple single-disk-single-volume setups, even raid5 is better than that

>> No.50060719

>HDDs getting faster
Has this even happened in the last few years?

>> No.50060784

i don't buy HDDs often enough to know about newer ones, but probably not much

to explain what i mean for others: 15 years ago you'd have something like a 10GB hdd that did perhaps 10MB/s, so it would take only about 16 minutes to read everything off it. a newer, say, 2TB drive that can do 100MB/s is 200 times bigger but only 10 times faster, so reading the whole disk takes about 5.5 hours
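
that arithmetic as a quick sketch (the sizes and throughputs are the round numbers from above, not measurements):

```python
def full_read_time_hours(capacity_bytes: float, throughput_bps: float) -> float:
    """Time to read an entire drive end to end, in hours."""
    return capacity_bytes / throughput_bps / 3600

# ~2000: 10 GB drive at 10 MB/s -> roughly a quarter hour
old = full_read_time_hours(10e9, 10e6)

# ~2015: 2 TB drive at 100 MB/s -> roughly 5.5 hours
new = full_read_time_hours(2e12, 100e6)

print(f"old: {old * 60:.0f} min, new: {new:.1f} h")
```

the same ratio is why raid rebuild windows (and the odds of a second failure inside them) keep getting longer.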

>> No.50060868
File: 1.07 MB, 5616x3744, 1429211077160.jpg

Really, anon.

>> No.50060869

Here it says most SATA drives have an unrecoverable read error rate of one per 10^14 bits (12.5 TB).
If my logic is sound, even with dual parity, when your second 6TB drive fails the rebuild will need to read about 12TB to reconstruct both, which means another drive will probably throw a URE just after you reconstruct the second (and any one after that).
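
a rough sketch of that estimate (treating UREs as independent per-bit events, which is a simplification; real drives tend to do better than the datasheet number):

```python
import math

def ure_probability(tb_read: float, ure_rate_per_bit: float = 1e-14) -> float:
    """Chance of at least one unrecoverable read error while reading tb_read TB."""
    bits_read = tb_read * 1e12 * 8
    # P(at least one URE) = 1 - (1 - rate)^bits, computed stably
    return -math.expm1(bits_read * math.log1p(-ure_rate_per_bit))

# reading ~12 TB during a rebuild, at the 10^14 datasheet rate
print(f"{ure_probability(12):.0%}")  # comes out around 60%
```

so at face value the datasheet rate does make a full 12TB rebuild read a coin flip or worse.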

>> No.50060885

that's only a problem with shit filesystems without checksumming support

>> No.50060908

wait sorry, i'm too tired

of course, checksumming won't help you when the raid is degraded, silly me

>> No.50060931

-- oh, that said, slightly corrupting one file isn't exactly up there in the list of concerns for most home raid setups

and at least checksumming makes it possible to know which file that is

>> No.50061009

> not having a SAN rack in the basement

>> No.50061063

software raid? i guess if you are a pleb

well, raid isn't really a backup, is it, so this is more of an annoyance than the destroyer of worlds you make it out to be

>Buy a NAS
shilling hard

>3 hot spares

who used to make desktop 2TB HDDs rated at 10^15? there was one a while ago but i haven't seen any in a good few years now. even the 'pro' color disks are shit, aren't they?

>> No.50061116

>software raid? i guess if you are a pleb
m8, it's 2015, not 1995

hardware raid is the apple of raid: expensive, inflexible, non-portable, and it's not even faster or safer anymore

>> No.50061120

>well raid isn't really a backup is it so this is more of an annoyance than the destroyer of worlds you make it out to be
For the people who have backups, yes. But think of the ignorance of the average person and remember that most likely more than half of people are more ignorant than that.

>> No.50061139

The real problem is how shit our storage mediums are.

>> No.50061152

>more than half are more ignorant than the average

>> No.50061439

You can ship hard drives to Amazon and they'll copy them to S3 for you.

>> No.50061656

Use ZFS; FreeNAS would probably be the easiest. Build a 20TB raidz vdev and put it in a new zfs pool. When you expand later, simply create another raidz vdev and add it to the same pool. ZFS stripes across the vdevs like a JBOD.
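
a sketch of that with the zpool tooling (pool name and device paths are placeholders):

```shell
# create a pool from a first raidz (single-parity) vdev
zpool create tank raidz /dev/da0 /dev/da1 /dev/da2 /dev/da3 /dev/da4

# later, grow the pool by adding a second raidz vdev;
# zfs stripes new writes across both vdevs
zpool add tank raidz /dev/da5 /dev/da6 /dev/da7 /dev/da8 /dev/da9
```

note you expand a whole vdev at a time, not disk by disk, so plan vdev sizes up front.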

>> No.50061705

with btrfs you can expand any type of raid instantly, one disk at a time

>> No.50061762
File: 34 KB, 284x393, happychuck.jpg

This makes me smile

>> No.50062596

>using 1TB drives in 2015.
I'm just going to use like 10 4TB drives

>> No.50062642
File: 78 KB, 500x476, English motherfucker.jpg

>Reading comprehension
According to what he said that's even worse.

>> No.50062674

>What's the best way to implement this?

>Disks in JBOD mode
>ZFS (on FreeBSD or Linux)

>> No.50063755

o shit I didn't know this. this may actually be the solution I'm looking for. I wanted to use S3 but there's almost no bandwidth at location

>> No.50065168

Norco RPC-4224
IBM M1015 in IT mode
HP SAS Expander
Supermicro mobo
