The Naked Scientists

The Naked Scientists Forum

Author Topic: Where does the information 'go' after it has been deleted on a computer?  (Read 17816 times)

Offline Chemistry4me

  • Neilep Level Member
  • ******
  • Posts: 7709
    • View Profile
Where does the information 'go' after it has been permanently deleted on a computer?  ??? ??? I have no idea  ??? Can you tell me?  :)


 

Offline Madidus_Scientia

  • Neilep Level Member
  • ******
  • Posts: 1451
    • View Profile
Don't think of it as going somewhere; think of it as becoming different information. Say you have the information 1010101010 on a hard drive and you delete it, and it becomes 0000000000. It's still data, but it doesn't mean anything.
 

Offline Chemistry4me

  • Neilep Level Member
  • ******
  • Posts: 7709
    • View Profile
Hmmm... so that's how it works.
 

Offline Chemistry4me

  • Neilep Level Member
  • ******
  • Posts: 7709
    • View Profile
The stuff in your recycle bin isn't really deleted is it? Not until you permanently delete it?
 

Offline Madidus_Scientia

  • Neilep Level Member
  • ******
  • Posts: 1451
    • View Profile
No, and even when you empty the recycle bin the data usually remains there until it is overwritten with new data; your OS basically just flags the chunk of data that was the deleted file as "not there". It's a lot faster for the hard drive to do this than to run over every byte of data and reset it.

You can use software to recover this data if you get to it before it's overwritten.
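The flagging scheme can be sketched as a toy model (Python; `ToyDisk` and its methods are invented names for illustration, not any real API). Deleting a file only removes the table entry; the raw blocks keep their contents, which is what recovery software goes looking for:

```python
class ToyDisk:
    """Toy filesystem: deleting a file only removes its table entry."""

    def __init__(self):
        self.blocks = {}  # block number -> bytes actually on the platter
        self.table = {}   # filename -> list of block numbers

    def write_file(self, name, chunks):
        nums = []
        for chunk in chunks:
            n = len(self.blocks)      # next free block number
            self.blocks[n] = chunk
            nums.append(n)
        self.table[name] = nums

    def delete_file(self, name):
        # Only the lookup entry goes; the block contents stay put.
        del self.table[name]

    def undelete_scan(self):
        # "Recovery software": read raw blocks the table no longer points to.
        used = {n for nums in self.table.values() for n in nums}
        return [c for n, c in self.blocks.items() if n not in used]
```

Writing a file, deleting it, and then scanning the raw blocks still turns up the original chunks, until something new is written over them.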

 

Offline RD

  • Neilep Level Member
  • ******
  • Posts: 8128
  • Thanked: 53 times
    • View Profile
Quote
You can use software to recover this data if you get to it before it's overwritten.

and shredder software to destroy the info permanently,
 e.g. if you were selling your computer and sensibly wished to erase all your data from it.

Shredding is irreversible so think very carefully before doing it.
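A minimal sketch of what shredder software does (`shred` here is a hypothetical Python function, not a real tool; real shredders make many patterned passes, and on journaling filesystems or SSDs with wear-levelling an in-place overwrite may not even reach the original blocks):

```python
import os

def shred(path, passes=3):
    """Overwrite a file's contents with random bytes, then unlink it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # replace every byte in place
            f.flush()
            os.fsync(f.fileno())        # force this pass out to the device
    os.remove(path)                     # finally drop the directory entry
```

Unlike a normal delete, after this the old contents are gone from those blocks as well as from the file table, so recovery software finds nothing useful.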
 

Offline nicephotog

  • Sr. Member
  • ****
  • Posts: 387
  • Thanked: 7 times
  • H h H h H h H h H h
    • View Profile
    • Freeware Downloads
In UNIX, only the "position lookup" in the FAT or journalising 'file allocation table' is deleted, i.e. the records of where the file's parts start and end; that's called unlinking, hence the "unlink" command. In Windows it also overwrites the used FAT byte-sized sectors.
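Unlinking can be demonstrated directly from Python on a Unix-like system (this sketch assumes Linux or similar): `os.unlink` removes only the directory entry, so an already-open file descriptor can still read the data afterwards.

```python
import os
import tempfile

# Create a file containing some data.
fd, path = tempfile.mkstemp()
os.write(fd, b"secret data")
os.close(fd)

f = open(path, "rb")         # keep a file descriptor open
os.unlink(path)              # removes only the directory entry (the name)

print(os.path.exists(path))  # the name is gone from the directory
print(f.read())              # but the data is still readable via the open fd
f.close()
```

Once the last descriptor is closed, the blocks are merely marked free; their contents linger until reused.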
 

Offline Chemistry4me

  • Neilep Level Member
  • ******
  • Posts: 7709
    • View Profile
Could you please put that in simple English? [:I][:I]
 

Offline LeeE

  • Neilep Level Member
  • ******
  • Posts: 3382
    • View Profile
    • Spatial
Unix (and unix-like systems such as Linux) use inode-based filesystems, not FAT-based filesystems.  NTFS is not generally regarded as a FAT filesystem either.
 

Offline Don_1

  • Neilep Level Member
  • ******
  • Posts: 6890
  • Thanked: 7 times
  • A stupid comment for every occasion.
    • View Profile
    • Knight Light Haulage
Never mind 'where does information go after it has been deleted', I would like to know where the hell it goes before in some cases!!!
 

Offline LeeE

  • Neilep Level Member
  • ******
  • Posts: 3382
    • View Profile
    • Spatial
Quote
Never mind 'where does information go after it has been deleted', I would like to know where the hell it goes before in some cases!!!


Now that is a good question  ;D

An old computing haiku:

A file that big?
It might be very useful
But now it is gone
« Last Edit: 27/01/2009 14:28:27 by LeeE »
 

Offline Don_1

  • Neilep Level Member
  • ******
  • Posts: 6890
  • Thanked: 7 times
  • A stupid comment for every occasion.
    • View Profile
    • Knight Light Haulage
User Error, Windows was not shut down properly last time.

I bloody know, because you bloody froze!!!

« Last Edit: 27/01/2009 14:40:51 by Don_1 »
 

Offline LeeE

  • Neilep Level Member
  • ******
  • Posts: 3382
    • View Profile
    • Spatial
I think the funniest Windows error message I've seen was the one that concluded with:  "The error was: no error"

A bit of denial going on there, methinks.
 

Offline LeeE

  • Neilep Level Member
  • ******
  • Posts: 3382
    • View Profile
    • Spatial
And then there's:

« Last Edit: 27/01/2009 23:50:16 by LeeE »
 

Offline wolfekeeper

  • Neilep Level Member
  • ******
  • Posts: 1092
  • Thanked: 11 times
    • View Profile
FWIW, the answer to the question is that as you delete the information, it gets overwritten with zeros. It takes a minimal amount of energy to do that (to change the ones to zeros; the zeros don't take any particular energy in principle), and that produces a certain amount of heat.

So, basically, it increases the 'entropy' of the world and generates a bit of heat.
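The minimum heat here is quantified by Landauer's principle: erasing one bit of information dissipates at least $k_B T \ln 2$ of energy. At room temperature (roughly 300 K) that works out as:

```latex
E_{\min} = k_B T \ln 2
         \approx (1.38\times10^{-23}\ \mathrm{J/K}) \times (300\ \mathrm{K}) \times 0.693
         \approx 2.9\times10^{-21}\ \mathrm{J\ per\ bit}
```

A vanishingly small amount per bit, but a real, fundamental lower bound on the heat generated by erasure.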
« Last Edit: 27/01/2009 19:08:49 by wolfekeeper »
 

Offline Don_1

  • Neilep Level Member
  • ******
  • Posts: 6890
  • Thanked: 7 times
  • A stupid comment for every occasion.
    • View Profile
    • Knight Light Haulage
LeeE, that's fantastic.
 

Offline Chemistry4me

  • Neilep Level Member
  • ******
  • Posts: 7709
    • View Profile
It really is!
 

Offline yor_on

  • Naked Science Forum GOD!
  • *******
  • Posts: 11987
  • Thanked: 4 times
  • (Ah, yes:) *a table is always good to hide under*
    • View Profile
The only sure way to make your hard disk keep quiet is with a hammer.

There are examples of hard disks that had been overwritten nine times and still 'squeaked'.

Join the don't-trust-your-HD movement.
Buy a Suzuki :)
 

Offline DoctorBeaver

  • Naked Science Forum GOD!
  • *******
  • Posts: 12656
  • Thanked: 3 times
  • A stitch in time would have confused Einstein.
    • View Profile
yor_on - the only totally safe way is to melt it.
 

Offline nicephotog

  • Sr. Member
  • ****
  • Posts: 387
  • Thanked: 7 times
  • H h H h H h H h H h
    • View Profile
    • Freeware Downloads
There is a piece of GNU software for keys and signatures in email messages called PGP (Pretty Good Privacy); it has a "shredder" bin system (or command) which overwrites the track sectors where the file chunk(s) are placed 32 times. Truthfully, retrieving overwritten tracks requires removing the hard disc's platters and placing them in a special reader machine. After only a few overwrites, the edges of the track can (or could) still yield most of the information from the last few overwrites.
 

Offline Vern

  • Neilep Level Member
  • ******
  • Posts: 2072
    • View Profile
    • Photonics
The files are stored on disk drives in little short strips called sectors. Usually a file will use several sectors. The beginning and end of each little strip has an ID number. There is a master file on each drive that stores the numbers of all the little strips that contain data and the names of the files that the data goes to. When you delete a file, you just remove the stored numbers in the master file so that the little strips can be re-used. Until they are re-used, the data is still there on the drive.

Edit: I don't know exactly how NTFS works but it seems to divide a drive into tracks that circle around like the grooves on a phonograph record. Each track is divided up into a bunch of sectors. When a file is stored, it gets a list of available sectors from a master file on the drive. It then writes the file into the sectors. The sectors need not be all in a row. The file may be split up into sectors all over the drive.

The master file is kept in RAM, and only written back onto the drive when time is available. Also the files themselves are not written back to the drive instantly. That's why when you get a hard shutdown, you can get into trouble. The stuff in RAM that was supposed to be written back to the drive is lost. Then the master file on the drive might have the wrong numbers for the locations of the data on the drive.

I think it works that way :)
« Last Edit: 06/02/2009 22:51:28 by Vern »
 

Offline LeeE

  • Neilep Level Member
  • ******
  • Posts: 3382
    • View Profile
    • Spatial
Quote
...The master file is kept in RAM, and only written back onto the drive when time is available. Also the files themselves are not written back to the drive instantly. That's why when you get a hard shutdown, you can get into trouble. The stuff in RAM that was supposed to be written back to the drive is lost. Then the master file on the drive might have the wrong numbers for the locations of the data on the drive.

I think it works that way :)

What you're talking about there is write-caching and while it can deliver some performance benefits, it's generally a bad idea and is not often used, precisely because of the reasons you give.  Even without write-caching though, it's still possible to get a crash during write procedures and this is where you'll sometimes see disk recovery being required on restart.
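The hazard can be sketched with a toy model (the Python class below is invented purely to illustrate the failure mode, not any real cache implementation):

```python
class WriteBackCache:
    """Toy write-back cache: writes land in RAM first, reach 'disk' on flush."""

    def __init__(self):
        self.disk = {}    # what has actually been persisted
        self.dirty = {}   # pending writes held only in RAM

    def write(self, key, value):
        self.dirty[key] = value       # fast: touches RAM only

    def flush(self):
        self.disk.update(self.dirty)  # slow: the real disk write
        self.dirty.clear()

    def crash(self):
        self.dirty.clear()            # power loss: unflushed writes vanish
```

A write followed by a crash before the next flush simply disappears, which is exactly why a crash mid-write can leave the on-disk tables stale and force disk recovery on restart.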

Write-caching is not the same as working on a file in an application; with most applications, such as editing a word-processing document in something like Word etc., you'll lose anything that you haven't saved, or which hasn't been auto-saved, because the system has not been asked to write the data back to disk.  I do fondly remember the DEC VAX VMS editor, which was journalised and wouldn't lose anything beyond the last keystroke in a crash; every keystroke was logged to the journal file, and the screen was updated to show the effect of the keystroke, but the keystrokes weren't actually applied to the file until you saved it.  As a consequence, if the system crashed, the original file would still be in its unedited state, and the keystroke journal, which may have been left open, would simply be closed by the filesystem check on restart and then be re-applied when you restarted the editor on the original file.  You could optionally watch all your keystrokes being re-applied when restarting the editor, which was quite entertaining.

I can't understand why this scheme is not used with all software these days; the overhead is tiny.
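The keystroke-journal scheme can be sketched like this (a toy Python model; the class and file layout are invented for illustration, not taken from the VMS editor itself):

```python
import json
import os

class JournaledBuffer:
    """Toy keystroke-journalled editor buffer."""

    def __init__(self, journal_path):
        self.journal_path = journal_path
        self.text = ""
        # On startup, replay any journal left over from a crash.
        if os.path.exists(journal_path):
            with open(journal_path) as j:
                for line in j:
                    self.text += json.loads(line)

    def keystroke(self, ch):
        # Log the keystroke durably *before* applying it to the buffer.
        with open(self.journal_path, "a") as j:
            j.write(json.dumps(ch) + "\n")
            j.flush()
            os.fsync(j.fileno())
        self.text += ch

    def save(self):
        # Once the file proper has been saved, the journal can be discarded.
        os.remove(self.journal_path)
```

Constructing a fresh buffer against the same journal file replays the logged keystrokes, mimicking the editor's crash recovery.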
 

Offline Vern

  • Neilep Level Member
  • ******
  • Posts: 2072
    • View Profile
    • Photonics
I think write-caching is used all the time. Also read-caching, if you want to call it that. An operating system can run out of RAM much faster than from disk. I think the speed advantage is about two orders of magnitude with caching.

I know in Linux, and I think in Vista, a paging method is used where applications page themselves into RAM only as required, so if a particular part of the app is not needed it may never even be loaded into RAM.

Edit: After doing a little research I find that this might be user controllable. Maybe I'm living in the past and just haven't caught up to speed :)

Quote
Enabling or Disabling the Disk Write Caching

   1. Right-click My Computer, and then click Properties.
   2. Click the Hardware tab.
   3. Click Device Manager.
   4. Click the plus sign (+) next to the Disk Drives branch to expand it.
   5. Right-click the drive on which you want to enable or disable disk write caching, and then click Properties.
   6. Click the Disk Properties tab.
   7. Click to select or clear the Write Cache Enabled check box as appropriate.
   8. Click OK.
« Last Edit: 07/02/2009 15:29:27 by Vern »
 

Offline LeeE

  • Neilep Level Member
  • ******
  • Posts: 3382
    • View Profile
    • Spatial
Read-caching is very common, it's normal in fact, but that's because there are very few problems associated with it.  The only time I can think of right now, when it can be a bad idea, is with multi-user transactional databases, where in some circumstances, read caching might lead to stale data being read.  This is largely taken in to consideration in the design of the DB engine and caching policies though and rarely has to be considered by the application designer.

As you point out, software can run more quickly from RAM than from disk, so in most systems any unused RAM will be used as cache; on the system I'm using right now 56% of RAM is being used as cache.

Although the CPUs page memory in and out, I think you're really referring to swap space here, which is where RAM contents, both system and application data and code (obviously, it makes no sense to swap cache data), are written to disk when RAM usage exceeds RAM capacity; and yes, RAM contents that are not used will be written to swap if the RAM they occupy is required for something else.  Once you've started using swap space though, because you've filled the RAM with software and data, the amount of RAM used for cache will be tiny.  As the system I'm writing this on is using 56% of RAM for cache, you'll be able to guess that I'm currently using no swap space at all.  (This isn't because I've got lots of RAM - just 512MB in fact - but because I'm using Linux and not Windows.  On an identically spec'd system that dual boots XP, XP uses around 280 MB of swap after start-up, and that's without running an SQL DB server, which I do on the Linux boxes.)

Write-caching though, like I said, comes with obvious dangers and in general is a bad idea.  Windows may allow you to enable it but I wouldn't recommend it.
 

Offline yor_on

  • Naked Science Forum GOD!
  • *******
  • Posts: 11987
  • Thanked: 4 times
  • (Ah, yes:) *a table is always good to hide under*
    • View Profile
Nice explanation LeeE.

That's what I like about Linux too. It uses its 'resources' so efficiently.
And with a journaling filesystem like ReiserFS, it seems very hard to lose any data due to loss of electricity etc. I've never had a situation where Linux couldn't recreate the 'information' on the hard disk. On the other hand, I find XP Pro quite stable too. I don't really have an explanation for why I use XP privately instead; habit, I suppose :).

It's all this constant 'speeding up' of computers that hides the benefits of Linux. You won't notice the performance difference when using it privately, but if I want to use an OS 'professionally' then it's Linux that gets my vote, not XP, however much I may like XP's simple and intuitive interface.

How many of you use a 'dual boot' privately? I do, and I know that Vern does too :)
It may be rather geeky, perhaps?
 
