There are no heat issues with my internal Toshiba HDD; it usually runs between 40 and 50 °C, no matter the transfer size. This sturdy HDD was designed with huge data transfers in mind, and that is not critical because it is a 24/7 datacenter drive built for exactly those loads. What matters is simply that it needs proper airflow (a small external enclosure is bad, except for quick backups) and that it should not be spun down constantly (endless start/stop cycles), so no sleep mode here; that can be damaging over time.

SSDs, on the other hand, are not as heat tolerant, but external SSDs can handle much more than internal drives because they are much slower, so their controllers do not produce a lot of heat. The external Samsung T5 SSD (my download drive) is actually the coldest-running drive, at just 30 °C after a redundancy check; the second coldest is the Toshiba HDD at 41 °C (up to 50 °C during extended loads; this is my archive drive for huge data transfers). Internal SSDs are usually tuned for short "bursts" of no more than about 30 seconds of full load, so they only handle installed games, not any "datacenter matters".
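(For reference, here is a rough sketch of how such temperatures can be read out with smartmontools on Linux; the device path /dev/sda is just a placeholder, and attribute names vary between drive models, so treat this as an illustration only.)

```python
import subprocess

# Placeholder device path; adjust to the drive you want to check (Linux, smartmontools installed).
device = "/dev/sda"

# "smartctl -A" prints the SMART attribute table; on most HDDs and SATA SSDs the drive
# temperature appears as "Temperature_Celsius" (the raw value is the last regular column).
output = subprocess.run(["smartctl", "-A", device], capture_output=True, text=True).stdout

for line in output.splitlines():
    if "Temperature_Celsius" in line or "Airflow_Temperature_Cel" in line:
        # Attribute rows have ten columns; column 10 is the raw value, e.g. "41".
        print(f"{device}: about {line.split()[9]} °C")
        break
else:
    print(f"{device}: no temperature attribute found (NVMe drives report temperature differently).")
```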

The hottest HDD I ever had was an air-filled 10 TB Ultrastar in a prebuilt external enclosure. During huge file transfers it could heat up to 70 °C, yet those drives are still alive. Regarding quality, I think Toshiba and Ultrastar make the most reliable drives. The Ultrastar is in theory also a 24/7 drive, but for huge file transfers it needs active cooling, especially an air-filled, high-performance model; those are the hottest HDDs ever made. So the manufacturer was rather foolish to put the hottest HDDs into a small external enclosure... yet they still survive the extreme heat... simply great quality.

The hottest SSD is an internal Samsung 990 PRO; it can heat up to 70 °C or even more in about a minute. But this drive is not used for datacenter work... it only handles installed games, so it is rarely under load for more than 20 seconds in a row. This is NOT a datacenter drive, not even a download drive... and it should not be used as one. Samsung optimized it to launch applications as fast as possible.

The issue I had was on the GOG servers... this is very clear at this point.

There is no general rule; it depends on how a drive was built and what it was intended for... so with the right drive (datacenter spec) you can even transfer files 24 hours in a row.

But using a very fast "burst type" SSD for that would be deadly... as they are tuned to handle short loads... for example when loading a game.
Post edited August 09, 2023 by Xeshra
Timboli: Drives may be more heat tolerant these days, at least in theory, but that in reality means they should suffer less during normal use, not be an excuse to stress them to the max moving huge amounts of data around.
As already explained, drives are not more heat tolerant "these days"; rather, they have more specialized uses these days... and that has to be taken into account very carefully if a drive is never to fail (theoretically or hypothetically, of course...).

Timboli: And as always, drives can die any time, so no assurance of anything.
Correct, IT without backups is asking for trouble. However, in my experience some drives have a higher risk of failing under certain conditions, and that can still be taken into account. No need to maximize your risks by choosing the wrong hardware.

Yet I still do not understand why you are pointing at my drives... there were server issues for over a week, which is now the main culprit for the corrupted data. Right now the servers still seem to be producing failures.

Regarding my own backups... you probably cannot guess how many drives I already own. I cannot count them anymore... it is well above the number of my fingers. Most of them still work fine... but all of those drives, with the exception of the most expensive one, a 16 TB Seagate IronWolf (it REALLY was expensive two years ago), are simply too small by now, and I can no longer store a sufficient amount of data on them. So my data was split up across countless drives, which ultimately left many of them behind on updates. Those drives held a lot of "aged" data that was no longer in its newest state.

The only drive that held all the data in its newest state was the 16 TB drive, which I could barely afford... because those greedy crypto miners were driving HDD prices crazy in those days. So slowly this became the only drive with the newest data, and I was praying every month that I would finally be able to get more backup drives... but then all the PC issues started and my cash was wiped out over and over... IT is just pretty damn expensive "these days"... which is the main reason I was unable to back up the newest data several times.

And of course... my luck was bad enough that on the exact day my 2x 20 TB drives arrived... right during that file transfer... the old 16 TB drive failed... the data was no longer readable and there were many block errors. I had to reformat the whole 16 TB drive and overwrite everything in order to return it under warranty afterwards... and strangely, as soon as the entire 16 TB HDD had finished overwriting... all the block errors were gone and, according to SMART, the HDD was in "perfect health"... a really weird story, but that is how it happened!

Now I do not know what to do with this old 16 TB drive, because without any detectable failure my warranty claim may fail, and the new 2x 20 TB drives could finally do their work, but NOW THE GOG SERVERS were failing...

Yeah... how can a single human have so much bad luck... crazy. But at least it can only get better... so the future looks rather bright.

So I was basically fighting for years to get the damn data fully backed up... and now I hope I can get it done... and, if I can somehow afford it, get even more 20 TB drives (there are never enough; only the money bag decides).

If I did not love IT and data... I would say "be gone", because with all the costs involved you could buy a new car... but anyway, I already have a working car, so I do not even care.
Post edited August 09, 2023 by Xeshra
Timboli: GOG provide two types of MD5 values for Offline Installers.

Type 1. For each installer file. This will determine whether what was downloaded matches what is on the GOG servers.

Type 2. For each file inside an installer file. This should determine if there is any corruption of an individual file, based on the MD5 record before compiling into an Offline Installer file.

Galaxy and some third party downloaders (i.e. gogcli.exe and gogrepo.py) use Type 1.

When installing a game, Type 2 is checked. Or you can do a simple, but long check (test) with InnoExtract.

Type 1 is a much quicker check, but not as reliable as Type 2.
An offline installer file can pass Type 1 but might not pass Type 2 if it was already corrupted before downloading.

Type 1 really only checks if downloading was okay and no corruption occurred during that process. Type 1 check is usually sufficient, and preferred due to speed, unless you have a lot of spare time to do the much longer check of Type 2.

Neither check is a complete guarantee, as even with Type 2, a file might be corrupted before being added to an Offline Installer file (package).
I only use "Type 2" and usually this is the most safe way of verification. "Typ 1" is only useful for the server, internally... so the server may know that it is not corrupted while hosted on the server itself. However, if the server was having a critical corruption... it may be hard getting the proper data anymore, unless there is a redundant backup (on GoG you never know...).

To some extent you can use "Type 1" to make sure you downloaded exactly the data stored on the server, so it can detect file-transfer corruption. However, as long as the server and your own system run stably, that will rarely ever happen. If a system can run a game stably... downloading a game is most likely "a walk in the park".
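(As an illustration of a Type 1 check: hash the downloaded installer and compare it against the MD5 value published for that file. The file name and the expected hash below are made-up placeholders, not real GOG values.)

```python
import hashlib

def md5_of_file(path: str, chunk_size: int = 1024 * 1024) -> str:
    """Hash the file in chunks so huge installers do not need to fit into RAM."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder values: the real expected MD5 would come from GOG (or from your own records).
installer = "setup_some_game_1.0.exe"
expected_md5 = "0123456789abcdef0123456789abcdef"

actual_md5 = md5_of_file(installer)
if actual_md5 == expected_md5:
    print("Type 1 check passed: the download matches the published checksum.")
else:
    print(f"Type 1 check FAILED: got {actual_md5}, expected {expected_md5}")
```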

So indeed the most secure check is Type 2, which should be created only once, when the freshly added dev files are turned into the installer files. It works somewhat like a signature and is pretty safe.

This redundancy check can not only detect a transfer failure, it can even detect general corruption of any of the files involved.

So while Galaxy users only use Type 1, checking for "server parity", the installer uses Type 2, looking for corruption inside the files themselves. Maybe some users have an additional tool that gathers the original server parity values; if a server is screwing up, they may even be able to detect a server failure this way, using Type 1 only.

However, "Typ 2" can detect any failure because it is looking for the parity during the time of "dev-file-creation" which is the most accurate parity possible. Even that one is not failure-proof, yes, but the risk of having a failure is close to none... unless the dev is uploading it while the server is unstable... then indeed there is many failures possible. So it is extremely critical having a stable and accurately running server... else the possible odds will increase near endless.

There is still another issue: if the installer is a single EXE WITHOUT a BIN file, the game cannot be checked for redundancy... I cannot say exactly why; it is simply the way it was designed. Some wise devs split their files into two parts even when the size is less than 4 GB... so the redundancy check still works. However, most devs do not care at all, or may even lack the knowledge of how it works.

So basically, most installers smaller than 4 GB that lack a BIN file have no redundancy check.

I could check it myself, BUT without a correct redundancy value I cannot verify the integrity. So somehow we need a file with the correct value. Type 1 might be usable for that, but Type 1 is NOT really safe, because the file can already be broken on the server, and with Type 1 the user can no longer detect that... unless the user has an archive with the correct MD5 values, or maybe Galaxy has such an archive (I cannot say).
Post edited August 09, 2023 by Xeshra
You've lost me with those long replies.

All I did was provide you with some basic information to help sort things out, as you seemed unsure.

Either your issue is a download issue or drive issue or the source is corrupted. The source being corrupted is less likely than the other two. GOG have been having server issues, but that doesn't necessarily mean that what is on the server is corrupted. It is more likely the delivery process is corrupted. I'd be doing Type 1 and Type 2 checks if I was getting corrupted files, to cover both angles.

I mentioned drive issues because you mentioned the following.

It will take very long because the data is huge; in total 650 GB will have to pass the check.
Doing that in one sitting without a break, if you were, would put your drive at risk, as would downloading that amount without breaks if you had done that. Drive temperature reports don't give the full story.
Post edited August 11, 2023 by Timboli
I said it was a GOG issue; you can believe me or... simply not.

I have no clue about your story, and you do not need to explain the "full story"...

I will say it again: I have a datacenter HDD and it has no issues moving large files...

As for my internal SSDs, they heat up to the max in about a minute under full load... but with very low load (for example moving files to a HDD) even those will never overheat.
Post edited August 12, 2023 by Xeshra