Cogito ergo sum

How to restore files from disk storage with bad sectors using GNU ddrescue on Ubuntu Linux



Losing access to our precious data due to hardware failure is a nightmare for all of us. Bad sectors on a hard disk drive (HDD) or similar failures can block normal data copy/transfer operations and render some of your files inaccessible. Fortunately, there are advanced tools and techniques available that can help rescue this data. One such tool is GNU ddrescue, which I’ve successfully used on my Ubuntu Linux setup. In this blog post, I’ll walk you through the process of restoring files from disk storage with bad sectors/errors using this invaluable tool.

Bad sectors can result from physical damage to the HDD, wear and tear over time, or even manufacturing defects. When the OS encounters these bad sectors during a read operation, it might hang, fail, or report I/O errors.

GNU ddrescue

GNU ddrescue is a data recovery tool that can help you when you’re in a pinch. Unlike the standard dd tool, which might choke on bad sectors, ddrescue is specifically designed to handle such anomalies. It attempts to read the data as efficiently as possible, rescuing the easy parts first and approaching bad areas from both sides. If you don’t have GNU ddrescue installed on your Ubuntu machine, execute the following commands from the terminal to install it:

sudo apt-get update
sudo apt-get install gddrescue

In some cases, you can read from and write to a disk normally; however, reading/copying particular files fails due to bad sectors or other errors on the disk. Luckily, with GNU ddrescue you have two options: 1) copying/restoring the whole disk or 2) copying/restoring particular files. In my case, I wanted to copy a specific file (a 45.9 GB video file) that I couldn’t copy with the usual CTRL+C and CTRL+V or with the cp command from the terminal. Whenever I tried to copy it, the process would get stuck after copying 28 GB of the file, and after a couple of minutes it would end with a read failed: Input/output error

Restoring/Copying the files using GNU ddrescue

Here’s a sample command to copy a file that can’t be copied using the cp, pv, or xcp commands. The command will try to copy the file from source_dir/source_file to destination_dir/destination_file. It will copy the file completely from the source to the destination, even if there are read errors. This means that the areas of the file the program still can’t read after retrying will simply be left unrecovered in the output (the number of retry passes can be specified using the -r parameter).

ddrescue -vvvvv -r3 -P2 -n --no-trim source_dir/source_file destination_dir/destination_file

-vvvvv Controls the level of output verbosity. The more v’s you put in there, the more verbose the output of the command will be.
-r3 or --retry-passes=3 Sets the maximum number of retry passes over the bad areas before giving up. The higher the number, the longer the command can take to finish.
-P2 or --data-preview=2 Shows a preview of the data being copied. The higher the number, the more lines of data you will see during the copy/restore process.
-n or --no-scrape Skips the scraping phase.
-N or --no-trim Skips the trimming phase.
Here’s the output of the command above (remember to change the source and destination paths):

GNU ddrescue 1.23
About to copy 45936 MBytes from 'source_dir/source_file' (45936796003) to 'destination_dir/destination_file' (45936796003)
    Starting positions: infile = 0 B,  outfile = 0 B
    Copy block size: 128 sectors       Initial skip size: 1024 sectors
Sector size: 512 Bytes
Direct in: no     Direct out: no     Sparse: no     Truncate: no     
Trim: no          Scrape: no         Max retry passes: 3

Press Ctrl-C to interrupt
Data preview:
                            No data available                                 

     ipos:   36614 MB, non-trimmed:    2777 kB,  current rate:       0 B/s
     opos:   36614 MB, non-scraped:        0 B,  average rate:  16364 kB/s
non-tried:        0 B,  bad-sector:        0 B,    error rate:    6553 B/s
  rescued:   45934 MB,   bad areas:        0,        run time:     46m 47s
pct rescued:   99.99%, read errors:       63,  remaining time:         n/a
                              time since last successful read:         19s

The file I was trying to restore was 45.9 GB. As you can see, the program succeeded in rescuing 99.99% of the data (practically, it restored the whole file 😛 ) within 47 minutes, while encountering 63 read errors. Contrary to what I had first assumed, there were no bad sectors (bad-sector: 0 B). Something else was wrong with the disk, though I don’t know exactly what. Regardless of the cause, I was not able to copy the file from the external hard disk any other way.

However, in cases where there are a lot of errors, the copying/restoring process can take much longer. When I tried to restore another file, it took almost four hours (3h 57m) to finish the job.

Data preview:
00A37F0000  CA E3 F2 94 57 04 3C 30  64 23 01 87 30 72 07 12  ....W.<0d#..0r..
00A37F0010  21 D5 2F 0C 9A EE 99 2A  01 C9 B5 6E 53 60 3F D3  !./....*...nS`?.

     ipos:    2743 MB, non-trimmed:    8634 kB,  current rate:    1638 B/s
     opos:    2743 MB, non-scraped:        0 B,  average rate:   3145 kB/s
non-tried:        0 B,  bad-sector:        0 B,    error rate:    1638 B/s
  rescued:   44807 MB,   bad areas:        0,        run time:  3h 57m 24s
pct rescued:   99.98%, read errors:      204,  remaining time:         n/a
                              time since last successful read:          0s

I hope you have enjoyed the post and learned something new! If you have any questions or need help regarding similar problems, just write a comment below, and I will help you 🙂

About the author

Peshmerge Morad

Data Science student and a software engineer whose interests span multiple fields.

