jsjrodman: you could add chunked checksums so you can identify where you have problems and both compensate and auto-identify problem patterns
JMich: Chunked checksums (assuming you mean checking each 10MB chunk's checksum) are already in place, though I think they get checked only once the download has completed, before merging.
I was trying to download some artwork yesterday. The downloader said it was downloading 50MB of a 23MB file. Looking at the logger and the file more closely, I saw it was deciding it didn't like what it had downloaded and redownloading the whole thing.
Looking at the logger, it failed twice.
I shut down the downloader and downloaded the file using wget.
Afterwards, I started up the downloader to do something else, and it decided to keep downloading the file I'd already correctly downloaded for it.
I had to go into the directory where it keeps the .bin files and delete the related file.
Problems:
1 - checksums should be more fine-grained than 23MB. Really, a hash tree should be generated on request that gets down to some small chunk size like 128k (see the first sketch after this list). This would save bandwidth for everyone. Meanwhile, if you're experiencing underrun or overrun problems (wtfff???), you can correct for them efficiently using an implementation of rsync or a similar protocol.
2 - downloader should communicate more clearly than "downloading 4x the size of the file"
3 - downloader clearly has some bug for that condition, having failed twice. I suspect it was due to bandwidth exhaustion.
4 - downloader fails to check target files on startup if it has a .bin file saying it needs to download them (see the second sketch after this list).
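
To make point 1 concrete, here is a rough Python sketch of the per-chunk verification I mean. The 128k chunk size, SHA-256, and the function names are just illustrative, not anything the downloader actually exposes:

    import hashlib

    CHUNK_SIZE = 128 * 1024  # the 128k granularity suggested in point 1

    def chunk_hashes(path):
        """Hash a file in fixed-size chunks; this list is what the server
        would publish so clients can verify pieces individually."""
        hashes = []
        with open(path, "rb") as f:
            while True:
                chunk = f.read(CHUNK_SIZE)
                if not chunk:
                    break
                hashes.append(hashlib.sha256(chunk).hexdigest())
        return hashes

    def bad_chunks(path, expected):
        """Compare a local file against the published chunk hashes and return
        the indices that need re-fetching, instead of redownloading 23MB."""
        actual = chunk_hashes(path)
        # A truncated file simply has fewer chunks; missing ones count as bad.
        return [i for i, h in enumerate(expected)
                if i >= len(actual) or actual[i] != h]

Only the chunks that come back bad would need to be re-fetched (e.g. via HTTP range requests), which is where the bandwidth saving comes from.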
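
And for point 4, the startup check could be as simple as this; expected_sha256 stands in for whatever checksum the downloader already stores for the file, and the function name is made up:

    import hashlib
    import os

    def should_redownload(target_path, expected_sha256):
        """On startup, don't trust a leftover .bin queue entry blindly: if the
        target file already exists and its hash matches, skip the download."""
        if not os.path.exists(target_path):
            return True
        h = hashlib.sha256()
        with open(target_path, "rb") as f:
            for block in iter(lambda: f.read(1024 * 1024), b""):
                h.update(block)
        return h.hexdigest() != expected_sha256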
JMich: What kind of trouble handling the .bin files? So far, out of 305 games, I've had the downloader crash on me only a few times, and it was usually specific games, not everything, which was reported and fixed.
I haven't encountered many problems. However, yesterday I added a bunch of content to the download queue, and the downloader crashed.
I restarted the downloader; it crashed again.
At this point, the downloader was going to crash every time it started up. Clearly it was having trouble handling the .bin files in the queue dir.
Just handle the exception, folks. Report the problem, and quarantine the .bin you can't read successfully. Auto-upload the backtrace, or paste it into a dialog and ask the user to send it.
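
Something along these lines; the directory names and the parse_bin callable are placeholders for whatever the downloader really uses, this is just the shape of the fix:

    import logging
    import os
    import shutil

    QUEUE_DIR = "queue"           # hypothetical location of the .bin entries
    QUARANTINE_DIR = "queue/bad"  # where unreadable entries get parked

    def load_queue(parse_bin):
        """Load every .bin entry with the supplied parser; a corrupt entry is
        logged and moved aside instead of crashing the downloader on startup."""
        os.makedirs(QUARANTINE_DIR, exist_ok=True)
        entries = []
        for name in sorted(os.listdir(QUEUE_DIR)):
            if not name.endswith(".bin"):
                continue
            path = os.path.join(QUEUE_DIR, name)
            try:
                entries.append(parse_bin(path))
            except Exception:
                logging.exception("Could not read queue entry %s, quarantining it", name)
                shutil.move(path, os.path.join(QUARANTINE_DIR, name))
        return entries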
JMich: Downloads corrupted by slow internet is a new one, you could try reducing the concurrent downloads to 1 or 2 at a time.
Already done. It still fails when bandwidth is exhausted.
JMich: And what do you mean by "fail" on getting items descriptions? Something equivalent to "couldn't connect to the website, please re-try"? If yes, then you can either remove and re-add it as you are currently doing, or restart the downloader, which should also allow it to recheck.
"Connection error, please try again later!" . The bar is filled-gray, not empty gray. Restarting the downloader doesn't help anything --
Ah, I stand corrected. It does seem to retry the items on restart. In the past I must have had cases where I just got errors again, because I certainly did that roundtrip and found the errors still present.
Really though:
1 - I'm betting there's a timeout on the GET
2 - If you're manually timing out on your end, don't. Just design your request asynchronously.
3 - If the webserver is timing the client out, retry. You can tell the user this, e.g. "request timed out, retrying" (see the sketch after this list).
4 - if you get a number of server-timeout failures, you can give up
5 - let the user right-click retry.
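
Roughly what I have in mind for points 3 and 4, in sketch form; get_description, the URL handling, and the report_status callback are placeholders, not the downloader's real code:

    import time
    import urllib.error
    import urllib.request

    MAX_RETRIES = 5

    def get_description(url, report_status):
        """Fetch an item description, retrying on timeouts and server errors
        instead of greying the row out after a single failed attempt."""
        for attempt in range(1, MAX_RETRIES + 1):
            try:
                with urllib.request.urlopen(url) as resp:
                    return resp.read()
            except (urllib.error.URLError, TimeoutError):
                if attempt == MAX_RETRIES:
                    raise  # give up and surface the error
                report_status("request timed out, retrying (%d/%d)"
                              % (attempt, MAX_RETRIES))
                time.sleep(2 ** attempt)  # simple exponential backoff

On the final failure the exception is surfaced rather than swallowed, which is where the right-click retry from point 5 would hook in.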
JMich: P.S. Out of personal curiosity, what OS are you using? (feel free to ignore this part)
Win7, 64 bitzzzz
-----
Well, this got into details I didn't initially expect, so I'm attaching the log if anyone is interested. Unfortunately the log isn't written using human-readable time, and I don't remember the times clearly, so I can't point to line numbers etc. Probably annoying to fish through, but if you want it, here it is.