• ByteOnBikes@slrpnk.net · 4 points · 6 hours ago

    Ignoring the Seagate part, which makes sense… Is there a reason to be wary of 36TB specifically?

    I recall IT people losing their minds when we hit 1TB, back when the average hard drive was like 80GB.

    So this growth seems right.
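    For scale, a quick back-of-envelope check (the 80GB era date is an assumption on my part, not from the post):

    ```python
    import math

    # ~80 GB was typical around the mid-2000s (assumed date); the new drive is 36 TB.
    old_gb, new_gb = 80, 36_000
    doublings = math.log2(new_gb / old_gb)   # ~8.8 capacity doublings

    # If roughly 20 years have passed, that's one doubling every ~2.3 years,
    # which is in line with historical hard-drive growth.
    print(f"{doublings:.1f} doublings, one every {20 / doublings:.1f} years")
    ```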

    • schizo@forum.uncomfortable.business · 6 points · 5 hours ago

      It’s RAID rebuild times.

      The bigger the drive, the longer the time.

      The longer the time, the more likely the rebuild will fail.

      That said, modern RAID is much more robust against this kind of fault, but still: with a single parity drive, one dead drive, and a rebuild in progress, losing another drive means you’re fucked.
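      To put rough numbers on that, here’s a sketch (the ~250MB/s sustained throughput and the 1-in-10^15-bit URE rate are assumed, typical vendor figures, not from this thread):

      ```python
      import math

      capacity_bytes = 36e12        # 36 TB drive
      throughput_bps = 250e6        # assumed ~250 MB/s sustained rebuild speed
      ure_per_bit    = 1e-15        # assumed vendor unrecoverable-read-error rate

      # Best case, a rebuild reads the surviving drive end to end.
      rebuild_hours = capacity_bytes / throughput_bps / 3600   # ~40 hours

      # Chance of hitting at least one URE while reading a full drive,
      # treating errors as independent per-bit events (Poisson approximation).
      p_ure = 1 - math.exp(-ure_per_bit * capacity_bytes * 8)  # ~25%

      print(f"rebuild: ~{rebuild_hours:.0f} h, URE chance per full read: {p_ure:.0%}")
      ```

      So the bigger the drive, the longer that exposure window gets.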

      • notfromhere@lemmy.ml · 1 point · 27 minutes ago

        Just rebuilt onto Ceph and it’s a game changer. Drive fails? Who cares, replace it with a bigger drive and go about your day. If the total drive count is large enough, then depending on whether you’re using EC (erasure coding) or replication, recovery can pull data from tons of drives instead of a handful.
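        A toy model of why that helps (the per-drive throughput and cluster size are assumed for illustration; real recovery is also throttled by backfill settings and the network):

        ```python
        failed_tb      = 36      # data to re-create from the dead drive, in TB
        per_drive_mbps = 250     # assumed sustained MB/s each surviving OSD contributes

        def recovery_hours(source_drives: int) -> float:
            """Hours to re-replicate the lost data when `source_drives`
            OSDs each contribute per_drive_mbps of recovery bandwidth."""
            return failed_tb * 1e6 / (source_drives * per_drive_mbps) / 3600

        print(f"RAID-style, 1 drive:  {recovery_hours(1):.0f} h")    # ~40 h
        print(f"Ceph-style, 50 OSDs: {recovery_hours(50):.1f} h")    # ~0.8 h
        ```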

    • katy ✨@lemmy.blahaj.zone · 4 points · 6 hours ago

      > I recall IT people losing their minds when we hit 1TB

      1TB? I remember when my first computer had a state-of-the-art 200MB hard drive.

      • Keelhaul@sh.itjust.works · 6 points · 5 hours ago

        Quick note: HDD storage doesn’t use transistors to store data, so it isn’t really directly related to Moore’s law. SSDs do use transistors/nanostructures (NAND flash) for storage, so their capacity is more closely tied to Moore’s law.