• TimeSquirrel@kbin.social · 1 month ago

    So let’s stop calling it “deleted” then, and call it what it is. “Forgetting”.

    I’m not sure what you actually want the OS to do about it other than, as I said, fill it with random data.
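
    A minimal sketch of that idea, assuming a Linux system with GNU coreutils and a filesystem mounted at /mnt/data (the path is just a placeholder): filling the free space with random data overwrites whatever “forgotten” blocks are still lying around, and then the filler file is removed.

        # Fill free space with random data; dd stops when the disk is full,
        # hence the || true. /mnt/data/fillfile is a placeholder path.
        dd if=/dev/urandom of=/mnt/data/fillfile bs=1M status=progress || true
        sync
        rm /mnt/data/fillfile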

    • borari@lemmy.dbzer0.com · edited · 1 month ago

      I think this is just semantics at this point, but to me there is a difference between “deleted” and “erased”. I see deleted as the typical “moved to trash” or rm action, with erased being overwritten bits, or like microwaving a drive.

      Edit - If I remember correctly, deleting something in most OSes/filesystems just removes the pointer to that file on disk. The data just hangs out until new data is written to those sectors. The solution, other than the one you mentioned about encrypting stored data and destroying the key when you want the data “deleted”, would be to only ever store data in volatile memory. That would make for a horrendous user experience though.
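
      A small illustration of that, assuming a Linux box, root access, and a partition at /dev/sdX1 (the device name is made up, and demo.txt is assumed to live on that partition): after rm, the file’s contents can often still be found by scanning the raw block device.

          # Write a recognizable marker, flush it to disk, then “delete” it.
          echo "SECRET-MARKER-12345" > demo.txt
          sync
          rm demo.txt

          # Scan the raw partition: the marker often still turns up, because
          # rm only dropped the filesystem’s pointer to the data.
          grep -a "SECRET-MARKER-12345" /dev/sdX1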

      • Hildegarde@lemmy.world · 1 month ago

        You can delete files by overwriting the data. On Linux it’s shred -zu [file]. It’s slow, but good to do if you are deleting sensitive data.

        It’s good it’s not the standard delete function.
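
        For reference, a hedged example of what that can look like (the filename is just a placeholder; -v, -n, -z and -u are standard GNU shred options):

            # Three overwrite passes, a final pass of zeros to hide the
            # shredding, then deallocate and remove the file.
            shred -v -n 3 -z -u secrets.txt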

        • Liz@midwest.social · 1 month ago

          Question: what fraction of bits do you need to randomly flip to ensure the data is unrecoverable?

          • barsoap@lemm.ee · edited · 1 month ago

            Information theory aside: in practice, all of them, because you can’t write bit-by-bit, and if you leave full bytes untouched there might still be enough information for an attacker to learn something, especially if it’s the “did this computer once store this file” kind of information rather than the actual file contents.

            If I’m not completely mistaken, overwriting the file once is enough to prevent recovery by logical means, that is, reading the bits the way the manufacturer intended you to. Physical forensics can go further, discerning “this bit, before it got overwritten, was a 1 or a 0” by looking very closely at the physical medium; how much flipping you need to defeat that will depend on the physical details.
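
            For the “logical means” case, a single pass of random data over the whole device is the usual approach. A rough sketch, assuming /dev/sdX is the target drive (placeholder name, and this destroys everything on it):

                # One pass of random data across the entire device.
                # /dev/sdX is a placeholder -- double-check before running.
                dd if=/dev/urandom of=/dev/sdX bs=1M status=progress conv=fsync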

            And I wouldn’t be too terribly sure about that electromagnet you built into your case to erase your HDD with a panic button: it’s in a fixed place and will have a fixed magnetic field, so it’s going to scramble everything, sure, but the way it scrambles is highly uniform, so the bits can probably be recovered. If you want to be really sure, buy a crucible and melt the thing.

            Also, may I interest you in this stylish tin-foil hat, special offer.

          • Hildegarde@lemmy.world · 1 month ago

            If you delete normally, only the index entries for the files are removed, so the data can be recovered by a recovery program reading the “empty” space on the disk and looking for readable data.

            If you do a single-pass erase, the bits will be overwritten one time. About half the bits will end up unchanged, but that makes little difference: any recovery software trying to read them will read the newly written bits instead of the old ones and will not be able to recover anything.

            However, forensic investigation can probably recover data after a single-pass erase. The shred command defaults to 3 passes, but you can do many more if you need to be even more sure.

            Unless you have data that someone would spend large sums on forensics to recover, 1 to 3 passes is probably enough.
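
            A hedged sketch of what “many more” passes might look like with shred, assuming /dev/sdX is the target device (the device name and pass count are made up for illustration):

                # Seven random-data passes over the whole device, verbose output.
                shred -v -n 7 /dev/sdX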