• @[email protected]
    link
    fedilink
    English
    232 months ago

    If you aren’t running a home server with tons of storage, this product is not for you. If the price is right, 40TB to 50TB is a great upgrade path for massive storage capacity without having to either buy a whole new backplane to support more drives or build an entirely new server. I see a lot of comments comparing 4TB SSDS to 40TB HDD’s so had to chime in. Yes, they make massive SSD storage arrays too, but a lot of us don’t have those really deep pockets.

    • Björn Tantau · 2 points · 2 months ago

      I’m still waiting for prices to fall below 10 € per TB. Lost a 4 TB drive prematurely in the 2010s. I thought I could just wait a bit until 8 TB drives cost the same. You know, the same kind of price drops HDDs have always had about every 2 years or so. Then a flood or an earthquake or both happened and destroyed some factories and prices shot up and never recovered.

    • @[email protected] · 3 points · 2 months ago

      I expect many people aren’t upgrading at every small incremental improvement either. It’s the 20 TB HDDs that are ready to be replaced.

      • @[email protected] · 2 points · 2 months ago

        I’d buy two and only turn the second one on for a once-a-month backup. For one lone pirate just running two drives, it would basically be endgame. You’re good.

        • thermal_shock · 2 points · 2 months ago

          I wish. I’ve got 6000 movies, 200 series, 300k songs, games, etc., pushing 30 TB of usage. I need to redo my setup; right now it’s RAID 10. I know it’s not the most efficient with space, but I feel much better about redundancy.

          • @[email protected] · 2 points · 2 months ago

            We all prioritize the data we want. I don’t carry PS3 games because I have zero interest in them, same for several shows I’d never watch, and I don’t back up FLAC, I downsample to 320 kbps, so I doubt I’ll break 25 TB any time in the next five years.

    • thermal_shock · 4 points · 2 months ago

      Thank you! I lol’d at the guy with one in his main PC. Like, why?

  • @[email protected] · 9 points · 2 months ago

    No thanks. I’d rather have 4TB SSDs that cost $100. We were getting close to that in 2023, but then the memory manufacturers decided to collude and jacked up prices.

    • Jolteon · 12 points · 2 months ago

      You do realize that there is probably a fair chunk of people on here who can say that unironically?

      • @[email protected] · 1 point · 2 months ago

        I’ve only got 32 TB in the family RAID 5 (actually 24 TB, since I lose a drive to parity), but my girl really loves her trash TV while she works, so we’re at like 90% full.

        And I’m just a podunk machinist.
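        For reference, a quick sketch of that parity arithmetic (assuming four 8 TB drives, which matches the 32 TB raw / 24 TB usable figures above):

        ```python
        # RAID 5 usable capacity: one drive's worth of space goes to parity.
        def raid5_usable_tb(drive_count: int, drive_size_tb: float) -> float:
            if drive_count < 3:
                raise ValueError("RAID 5 needs at least 3 drives")
            return (drive_count - 1) * drive_size_tb

        print(raid5_usable_tb(4, 8.0))  # 24.0 TB usable out of 32 TB raw
        ```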

  • Stern · 10 points · 2 months ago

    that’s a lot of ~~porn~~ high quality videos

  • Admiral Patrick · 110 points · 2 months ago

    Having been burned many times in the past, I won’t even trust 40 GB to a Seagate drive let alone 40 TB.

    Even in enterprise arrays where they’re basically disposable when they fail, I’m still wary of them.

    • @[email protected] · 10 points · 2 months ago

      Same here. I’ve been burned by SSDs too, though: a Samsung Evo Pro drive crapped out on me just months after buying it. It was under warranty and replaced at no cost, but I still lost all my data and config/settings.

      • @[email protected] · 19 points · 2 months ago

        Any disk can and will fail at some point in time. Backup is your best friend. Some sort of disk redundancy is your second best friend.

    • @[email protected] · 15 points · 2 months ago

      I feel the exact same about WD drives and I’m quite happy since I switched to Seagate.

      • @[email protected] · 20 points · 2 months ago

        Don’t look at Backblaze drive reports then. WD is pretty much all good, Seagate has some good models that are comparable to WD, but they have some absolutely unforgivable ones as well.

        Not every Seagate drive is bad, but nearly every chronically unreliable drive in their reports is a Seagate.

        Personally, I’ve managed hundreds of drives in the last couple of decades. I won’t touch Seagate anymore due to their inconsistent reliability from model to model (and when it’s bad, it’s bad).

        • @[email protected] · 12 points · 2 months ago

          > Don’t look at Backblaze drive reports then

          I have.

          But after personally having suffered 4 complete disk failures of WD drives in less than 3 years, it’s really more of a “fool me once” situation.

          • @[email protected] · 6 points · 2 months ago

            It used to be important to check the color of a WD drive. I can’t remember all of them, but off the top of my head I remember Blue dying the most. They used to have black, red, and maybe a green model; now they have purple and gold as well. Each was designated for certain purposes / reliability levels.

            Source: Used to be a certified Apple/Dell/HP repair tech, so I was replacing hard drives daily.

            • @[email protected] · 7 points · 2 months ago

              Gold is the enterprise line. Black is enthusiast, blue is desktop, red is NAS, purple is NVR, and green is external. Green you almost certainly don’t want (they do their own power management), and red is likely to be SMR. But otherwise they’re not too different. If you saw a lot of blues failing, it’s probably because the systems you supported used blue almost exclusively.

              • @[email protected] · 2 points · 2 months ago

                I thought green was “eco.” At least the higher-end external ones tend to be red drives, which is famously why people shuck them to use internally because they’re often cheaper than just buying a red bare drive directly, for some reason.

                • @[email protected] · 2 points · 2 months ago

                  Correct about the greens. They used to be (and might still be) the ones that ran at a lower RPM.

                • @[email protected] · 1 point · 2 months ago

                  You might be right. Although I think it’s been pretty hit or miss with which drives they use in those enclosures.

    • @[email protected] · 52 points · 2 months ago

      Still, it’s a good thing if it means energy savings at data centers.

      For home and SMB use there’s already a notable absence of backup and archival technologies to match available storage capacities. Developing one without the other seems short sighted.

      • @[email protected] · 27 points · 2 months ago

        I still wonder what’s stopping vendors from producing “chonk store” devices: slow but reliable bulk-storage SSDs.

        Just in terms of physical space, you could easily fit 200 micro SD cards in a 2.5" drive, have everything replicated five times, and end up with a reasonably reliable device (extremely simplified, I know).

        I just want something for lukewarm storage that doesn’t require a datacenter and/or 500 W of continuous power draw.
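        A rough sketch of the replication arithmetic behind that thought experiment (the 1 TB card size is a made-up figure, not from the comment):

        ```python
        # Usable capacity when every block is stored on `replicas` of the cards.
        def replicated_capacity_tb(cards: int, card_size_tb: float, replicas: int) -> float:
            return cards * card_size_tb / replicas

        # 200 hypothetical 1 TB cards with 5-way replication, as suggested above:
        print(replicated_capacity_tb(200, 1.0, 5))  # 40.0 TB usable
        ```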

        • Justin · 7 points · 2 months ago

          They make bulk-storage SSDs with QLC for enterprise use.

          https://youtu.be/kBTdcdJC_L4

          The reason they’re not used for consumer use cases yet is that raw NAND chips are still more expensive than hard drives. People don’t want to pay $3k for a 50 TB SSD if they can buy a $500 50 TB HDD and they don’t need the speed.

          For what it’s worth, 8 TB TLC PCIe 3 U.2 SSDs are only $400 used on eBay these days, which is a pretty good option if you’re trying to move away from noisy, slow HDDs. Four of those in RAID 5 plus a DIY NAS would get you 24 TB of formatted, super fast Nextcloud/Immich storage for ~$2k.
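          A quick cost/capacity check on that build (the split between drive cost and the rest of the NAS is an assumption made to match the ~$2k figure):

          ```python
          drives = 4
          drive_tb = 8
          drive_price = 400      # USD each, used 8 TB U.2 TLC (commenter's figure)
          nas_budget = 400       # USD, assumed remainder of the ~$2k total

          usable_tb = (drives - 1) * drive_tb            # RAID 5: one drive's worth to parity
          total_cost = drives * drive_price + nas_budget
          print(usable_tb, "TB usable for roughly $", total_cost)  # 24 TB, ~$2000
          ```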

          • @[email protected] · 3 points · 2 months ago

            I can’t place why, but the thought of used enterprise SSDs still sketches me out more than HDDs. Maybe it’s just that I only ever think of RAID in terms of hard drives, paired with a decade+ of hearing about SSD reliability issues, which are very different from the more familiar problems HDDs can have.

            The power and noise difference makes it more appealing to me, moreso than the speed, personally. Maybe when consumer bottom-barrel SSDs get a little better I could be convinced into RAIDing a bunch of them and hoping one cold spare is enough.

            EDIT: I can acquire new ~$200 4 TB Orico-branded drives where I am relatively easily. Hm.

        • @[email protected] · 3 points · 2 months ago

          Flash drives are much worse than hard drives for cold storage. The charge in flash will leak.

          If you want cheap storage, back it up to another drive and unplug it.

        • ferret · 13 points · 2 months ago

          Cost. The speed of flash storage is an inherent quality and not something manufacturers are selecting for typically. I assure you if they knew how to make some sort of Super MLC they absolutely would.

          • @[email protected] · 13 points · 2 months ago

            It’s not inherent in terms of “more store=more fast”.

            You could absolutely take older, more established production nodes to produce higher-quality, longer-lasting flash storage. The limitation is hardly ever space; it’s heat. So putting that kind of flash storage, with intentionally slowed-down controllers, into regular 2.5" or even 3.5" form factors should be possible.

            Cost could be an issue because the market isn’t seen as very large.

      • Justin · 2 points · 2 months ago

        Eh, hard drives are archival storage these days. They are DOG SLOW and loud. Any real-time system like Nextcloud should probably be using SSDs now.

        • @[email protected] · 17 points · 2 months ago

          Hard drives are also relatively cheap and fast enough for many purposes. My PCs use SSDs for system drives but HDDs for some data drives, and my NAS will use hard drives until SSDs become more affordable.

          • Justin · 2 points · 2 months ago

            Yeah, I still use hard drives for storing movies, logs, and backups on my NAS cluster, but using them for Nextcloud or remote game storage is too slow. I also live in an apartment and the scrubs are too loud. There’s only about a 5:1 price premium, so it’s worth just going all-flash unless you have something like 30 TB of storage needs.

    • TrackinDaKraken · 2 points · 2 months ago

      Same. Between work and home, I’ve had ~30 Seagate drives fail after less than a year. I stopped buying them for personal use many years ago, but work still insists, because they’re cheaper. I have 1 TB WD Black drives that are over ten years old and still running. My newest WD Black is a 6 TB drive, and I’ve had it for seven years. I dunno if WD Black is still good, but that’s the first one I’ll try if I need a new drive.

    • @[email protected] · 18 points · 2 months ago

      My first Seagate HDD started clicking as I was moving data to it from my older drive, just after I purchased it. This was way back in the 00s. In a panic, I started moving data back to my older drive (because I was moving instead of copying) and then THAT one started having issues also.

      Turns out that when I overclocked my CPU I had forgotten to lock the PCI bus, which resulted in an effective overclock of the HDD interfaces. It was OK until I tried moving mass amounts of data and the HDD tried to keep up instead of letting the buffer fill up and making the OS wait.

      I reversed the OC and despite the HDDs getting so close to failure, both of them lasted for years after that without further issue.

  • alaphic · 56 points · 2 months ago

    Why in the world does this seem to use an inaccurate depiction of the Xbox Series X expansion card for its thumbnail?

    • @[email protected] · 2 points · 2 months ago

      I remember switching away from floppies to a much faster, enormous 80 MB hard drive. Never did come close to filling that thing.

      Today, my CPU’s cache is larger than that hard drive.

    • r00ty · 9 points · 2 months ago

      I bought my first HDD second-hand. It was advertised as 40 MB, but it was actually 120 MB. How happy was young me?

  • @[email protected] · 11 points · 2 months ago

    I deal with large data chunks, and 40 TB drives are an interesting idea… until you consider one failing.

    RAIDs and arrays of smaller drives for these large data sets still make more sense than putting all the eggs in one giant basket.

    • @[email protected] · 15 points · 2 months ago

      You’d still put the 40 TB drives in a RAID? But eventually you’ll be limited by the number of bays, so a larger drive size is better.

      • @[email protected] · 15 points · 2 months ago

        They’re also ignoring how many times this conversation has been had…

        We never stopped using RAID at any other increase in drive density; there’s no reason to pick this as the time to stop.

        • Justin · 4 points · 2 months ago

          RAID 5 is becoming less viable due to the increasing rebuild times, necessitating RAID 1 instead. But new drives have better IOPS too, so maybe it’s not as severe as predicted.

          • @[email protected] · 3 points · 2 months ago

            Yeah, I would not touch RAID 5 in this day and age; it’s just not safe enough, and there’s not much of an upside to it when large-capacity SSDs exist. A RAID 1 mirror is fast enough with SSDs now, or you could go RAID 10 to amplify speed.

            • @[email protected] · 1 point · 2 months ago

              When setting up RAID 1 instead of RAID 5 means an extra few thousand dollars of cost, RAID 5 is fine, thank you very much. Also, SSDs in the sizes many people need are not cheap, and not even a thing at the consumer level.

              5 × 10 TB WD Reds here. SSD isn’t an option, and neither is RAID 1. My ISP is going to hate me for the next few months after I set up Backblaze, haha.

              • @[email protected] · 1 point · 2 months ago

                But have you had to deal with the rebuild of one of those when a drive fails? It sucks waiting for a really long time wondering if another drive is going to fail causing complete data loss.

                • @[email protected] · 1 point · 2 months ago

                  Not a 10 TB one yet, thankfully, but I did a 4 TB in my old NAS recently after it started giving warnings. It was a few days, IIRC. Not ideal, but better than the thousands of dollars it would cost to go to RAID 1. I’d love RAID 1, but until we get 50 TB consumer drives for under $1k it’s not happening.

            • Justin · 2 points · 2 months ago

              To be fair, all the big storage clusters use either mirroring or erasure coding these days. For bulk storage, 4+2 or 8+2 erasure coding is pretty fast, but for databases you should always use mirroring to speed up small writes. But yeah, for home use, just use LVM or ZFS mirrors.
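              For reference, a minimal sketch of the usable-capacity trade-off between the erasure-coding layouts mentioned above and plain mirroring:

              ```python
              # k data shards + m parity shards: usable fraction of raw capacity is k / (k + m).
              def ec_efficiency(k: int, m: int) -> float:
                  return k / (k + m)

              print(ec_efficiency(4, 2))   # ~0.67 -> 4+2 keeps two-thirds of raw capacity
              print(ec_efficiency(8, 2))   # 0.80  -> 8+2 keeps 80%
              print(ec_efficiency(1, 1))   # 0.50  -> a plain two-way mirror, for comparison
              ```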

      • @[email protected] · 1 point · 2 months ago

        It depends on a lot of factors. If you only need ~30 TB of storage and two spare RAID disks, 3 × 40 TB disks will be much more costly than 6 × 10 TB disks, or even 4 × 20 TB disks.

      • @[email protected] · 3 points · 2 months ago

        Of course, because you don’t want to lose the data if one of the drives dies. And backing up that much data is painful.

    • @[email protected] · 8 points · 2 months ago

      The main issue I see is that the gulf between capacity and transfer speed is now so vast with mechanical drives that restoring the array after drive failure and replacement is unreasonably long. I feel like you’d need at least two parity drives, not just one, because letting the array be in a degraded state for multiple days while waiting for the data to finish copying back over would be an unacceptable risk.
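      To put rough numbers on that rebuild window, capacity divided by sustained sequential throughput gives a best-case floor (the 280 MB/s figure is an assumption for a large modern HDD, not a quoted spec):

      ```python
      # Best-case rebuild time: every byte has to be rewritten at the drive's sequential rate.
      def rebuild_hours(capacity_tb: float, throughput_mb_s: float) -> float:
          return capacity_tb * 1e6 / throughput_mb_s / 3600

      print(round(rebuild_hours(40, 280), 1))  # ~39.7 hours, assuming no other load on the array
      ```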

      • @[email protected] · 3 points · 2 months ago

        I upgraded my seven-year-old 4 TB drives to 14 TB drives (both setups RAID 1). A week later, one of the 14 TB drives failed. It was a tense time waiting for a new drive and the 24 hours or so of resilvering. No issues since, but boy was that an experience. I’ve since added some automated backup processes.

      • @[email protected] · 2 points · 2 months ago

        Yes, this, and also scrubs and SMART tests. I have six 14 TB spinning drives, and a long SMART test takes roughly a week, so running two at a time takes close to a month to do all six, and then it all starts over again. So for half to 75% of the time, two of my drives are doing SMART tests. Then there are scrubs, which I do monthly. I would consider larger drives if it didn’t mean that my SMART/scrub schedule would take more than a month. Rebuilds aren’t too bad, and I have double redundancy for extra peace of mind, but I also wouldn’t want those taking much longer either.
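        The schedule math for that staggered testing, as a quick sketch (figures taken from the comment above):

        ```python
        import math

        drives = 6
        per_test_days = 7      # ~a week for a long SMART test on a 14 TB drive
        concurrent = 2         # drives tested at the same time

        batches = math.ceil(drives / concurrent)
        print(batches * per_test_days, "days of testing per cycle")  # 21 days, before monthly scrubs
        ```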

    • @[email protected] · 5 points · 2 months ago

      I guess the idea is you’d still do that, but have more data in each array. It does raise the risk of losing a lot of data, but that can be mitigated by sensible RAID design and backups. And then you save power for the same amount of storage.

    • @[email protected] · 39 points · 2 months ago

      Black Ops 6 just demanded another 45 GB for an update on my PS5, when the game is already 200 GB. AAA devs are making me look more into small indie games to spend my money on, ones that don’t eat the whole hard drive. Great job, folks.

      Edit: meant to say that instead of buying a bigger hard drive, I’ll support a small dev instead.

      • @[email protected] · 4 points · 2 months ago

        Ok, I’m sorry, but… HOW??? How is it possibly two hundred fucking gigabytes??? What the fuck is taking up so much space???

        • @[email protected] · 2 points · 2 months ago

          It’s more than triple the size of the next-largest game. All I want is the zombies mode and space to install other games. I could probably cull 80% of the COD suite and be just fine, but I have to carry the whole bag to reach in 🤷‍♂️

        • Darren · 1 point · 2 months ago

          The game is 50 GB; the other 200 GB is just “fuck you” space.

        • @[email protected] · 1 point · 2 months ago

          It’s mostly textures, video, and audio.

          The game code is probably less than 10 GB.

          Change languages in your game; I’m willing to bet it doesn’t download a language pack for whatever language you choose, because they all ship with the install.

          You need multiple textures for different screens, resolutions, etc. to provide the best-looking results. Multiply that by the number of unique environments…

          Same with all of the video cutscenes in games: they play pre-rendered videos for cutscenes.
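          For a sense of scale, here is the raw size of a single uncompressed 4K RGBA texture (on-disk formats use block compression, so real assets are smaller; this just shows how quickly per-asset sizes multiply):

          ```python
          # Raw size of one uncompressed width x height RGBA texture, in megabytes.
          def raw_texture_mb(width: int, height: int, bytes_per_pixel: int = 4) -> float:
              return width * height * bytes_per_pixel / 1e6

          print(round(raw_texture_mb(4096, 4096), 1))  # ~67.1 MB for one 4K texture, before mipmaps
          ```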

      • @[email protected] · 7 points · 2 months ago

        I arrived at that point a few years ago. You’re in for a world of discovery. As an fps fan myself I highly recommend Ultrakill. There’s a demo so you don’t have to commit.

        • @[email protected] · 1 point · 2 months ago

          Thanks, I’ll check it out. The gf and I like to shoot zombies, but were it not part of PS Plus I surely wouldn’t give them $70. I’ve been playing a lot of Balatro recently, a poker roguelike from a sole developer with simple graphics but very fun special powers.

      • Dizzy Devil Ducky · 19 points · 2 months ago

        That is absolutely egregious. A 200 GB game with a 45 GB update? These days you’d be lucky to see me installing a game that’s more than 20-30 GB, because I consider that to be about the most acceptable amount of bloat for a game.

        • @[email protected] · 5 points · 2 months ago

          Agreed, it’s getting out of control. The most annoying thing is I’m not interested in PvP, just zombies, so probably 80% of that is all just bloat on my hard drive.

        • @[email protected] · 3 points · 2 months ago

          It requires an additional 45 GB clear to accommodate the update file and is currently sitting at 196.5 GB. Deleting Hitman and queuing it up after the update is simple enough technically for someone like me with a wired high-speed connection and no data cap, but it’s still a pain in the ass and way too big for a single game.

      • @[email protected] · 6 points · 2 months ago

        Do you have time to talk about our Lord and Savior FACTORIO? Here; just have a quick taste.

        • @[email protected] · 1 point · 2 months ago

          I haven’t played much since my curved monitor got broken, but between Stellaris and RimWorld I don’t know if there’s enough time in a day for another build queue. I’ll check it out at some point and fall into the rabbit hole, I’m sure.

    • @[email protected] · 5 points · 2 months ago

      I don’t know about that. These are spinning disks so they aren’t exactly going to be fast when compared to solid state drives. Then again, I wouldn’t exactly put it past some of the AAA game devs out there.

    • @[email protected] · 15 points · 2 months ago

      Oh, they’ll do compression alright: they’ll ship every asset in a dozen resolutions with different lossy compression algos so they don’t need to spend dev time actually handling model and texture downscaling properly. And games will still run like crap, because reasons.

      • MentalEdge · 4 points · 2 months ago

        Games can’t really compress their assets much.

        Stuff like textures generally uses a lossless bitmap format. The compression artefacts you get with lossy formats, while unnoticeable to the human eye, can cause much more visible rendering artefacts once the game engine goes to calculate how light should interact with the material.

        That’s not to say devs couldn’t be more efficient, but it does explain why games don’t really compress that well.

        • @[email protected] · 2 points · 2 months ago

          When I say “compress” I mean downscale. I’m suggesting they could have dozens of copies of each texture and model in a host of different resolutions (number of polygons, pixels for textures, etc.), instead of handling that in the code. I’m not exactly sure how they currently do low vs. medium vs. high settings; I’m just suggesting that they could solve that using a ton more data if they essentially had no limitations in terms of customer storage space.

          • MentalEdge · 5 points · 2 months ago

            Uuh. That is exactly how games work.

            And that’s completely normal. Every modern game has multiple versions of the same asset at various detail levels, all of which are used. And when you choose between “low, medium, high” that doesn’t mean there’s a giant pile of assets that go un-used. The game will use them all, rendering a different version of an asset depending on how close to something you are. The settings often just change how far away the game will render at the highest quality, before it starts to drop down to the lower LODs (level of detail).

            That’s why the games aren’t much smaller on console, for example. They’re not stripping out a pile of unnecessary assets for different PC graphics settings; those assets are all part of how modern games work.

            “Handling that in the code” would still involve storing it all somewhere after “generation”, the same way shaders are better generated in advance, lest you get a stuttery mess.

            And it isn’t how most games do things even today. Such code does not exist. Not yet, at least. Human artists produce better results, and hence games ship with every version of every asset.

            Finally automating this is what Unreal’s Nanite system has only recently promised to do, but it has run into snags.
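            A toy sketch of the distance-based LOD selection described above (the cut-off distances are arbitrary example values, not from any particular engine):

            ```python
            # Pick which pre-authored detail level to render based on camera distance.
            LOD_CUTOFFS = [(20.0, "lod0_full_detail"), (60.0, "lod1_medium"), (150.0, "lod2_low")]

            def pick_lod(distance: float) -> str:
                for cutoff, asset in LOD_CUTOFFS:
                    if distance <= cutoff:
                        return asset
                return "lod3_billboard"   # very far away: cheapest representation

            print(pick_lod(10.0))   # lod0_full_detail
            print(pick_lod(90.0))   # lod2_low
            ```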

          • @[email protected] · 1 point · 2 months ago

            > When I say “compress” I mean downscale. I’m suggesting they could have dozens of copies of each texture and model in a host of different resolutions.

            Yeah, that’s generally the best way to do it for optimal performance. Games sometimes have an adjustable in-game option to control this: LOD (level of detail).

  • @[email protected] · 5 points · 2 months ago

    Can’t wait to see how these 40 TB hard drives, a wonderment of technology, will be used to further shove AI down my throat.