I'm not interested in getting into a big debate, so I probably won't respond to any replies to this, but I thought these claims needed a little more context.
shaddim: I have followed the technical development of linux in great detail for years now, and I also noticed the arrival of the HIB with great pleasure. And I have seen the rejection of too many technologies which could have helped ISVs provide third-party apps in a reasonable way: often because the distros feared they could lose importance, and also because some unix traditionalists thought "what worked somehow for decades must therefore be good enough for the next decades, don't touch the architecture!" One example is FatELF, developed by Ryan Gordon, the genius who provides 90% of the HIB ports to linux; another is [url=http://web.archive.org/web/20080331092730/http://www.linux.com/articles/60124]Autopackage[/url], which could have ended the painful distro fragmentation.
I can definitely agree that the reception FatELF received was a big problem (though, on the glibc side, not unexpected. Ulrich Drepper is an infamous jerk who even rejected a patch to fix the skewed distribution produced by strfry() because "Stop wasting people's time! Nobody cares about this crap." It's one of the reasons Debian switched to EGLIBC.), but Autopackage failed to gain adoption because, architecturally, it's a messy hack, the developers have an arrogant "If we disagree, the world is wrong" attitude and, for better or for worse, many Linux developers care about both of those things.
(I'd link you to a flurry of blog posts going into more detail from as far back as 2005, but the domains are dead, they don't seem to have been saved by the Wayback Machine, and the only reason I still have access to some of them is that I save local copies of almost every page I read using the ScrapBook Firefox extension.)
Efforts do seem to be under way, but God only knows whether this attempt will meet with more success than previous ones. (Here's hoping, since this one actually has the major distro vendors participating.)
shaddim: Another example is GIMP, which is functionally fine software, but it also has the most horrible UX (leaving aside software whose functionality is bad too).
No argument. It's not as bad now that they finally gave in and implemented single-window mode (especially with some fine-tuning by someone who knows good UI design principles), but it could still use improvement, and Synfig Studio (a Flash-like animation tool) still follows the horrendous mistakes made in GIMP's multi-window mode and actually does worse.
(Though, in GIMP's case, last I checked, part of the problem was that they were still making the same mistake Firefox used to make: long, heavy, slow release cycles with inefficient merging based on Subversion.)
shaddim: As there is a common misunderstanding here, I will answer in detail: dependency hell was solved for windows with win2000 and private DLLs (meaning apps bring their own set of DLLs). This works because windows always prefers DLLs in the local directory over system libraries. This works fine and SOLVES dependency hell once and for all... by clear separation between apps and system. Also, the overhead is negligible (as DLLs are only a very small part of modern apps). In linux, on the other hand, there is no concept of local private libraries; applications always use system libs before local ones (ignoring ugly hacks like LD_LIBRARY_PATH, which redirects ALL libs). Which means everything in linux is in sync and permanently at risk that one app pulls the wrong version and destabilizes the complete system, which is why I call this system (hopefully) "managed dependency hell".
This means every application needs to be synchronized with a distro and its libs. If I want to provide packages and deploy the app for linux, this results in support pain and many packages. Take a look at the humble library: for linux you need 10 packages where windows is fine with one. Distro-agnostic packaging is a problem. Bundle systems (like Autopackage) try to solve that (bundles are common under windows and MacOS), but face major resistance from the distros.
Private DLLs are basically Windows's way of getting the benefits and downsides of statically-linked libraries without actually statically linking. The only advantage they have is that, like LD_LIBRARY_PATH hacks, users can swap in another version if the developer screws up and is too lazy/busy to release a newer version.
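Concretely, the "LD_LIBRARY_PATH hack" usually amounts to a tiny launcher that prepends the game's bundled lib/ directory before starting the real binary. Here's a minimal sketch of the idea; the directory layout and the "game.bin" name are just assumptions, and real ports typically do this with a small shell wrapper rather than Python:
[code]
#!/usr/bin/env python3
# Minimal sketch of the "LD_LIBRARY_PATH hack" used by many Linux game ports:
# prepend a bundled lib/ directory so the dynamic linker finds the game's
# private copies of shared libraries before the system ones.
# (Paths and the "game.bin" name are hypothetical.)
import os
import subprocess
import sys

game_dir = os.path.dirname(os.path.abspath(__file__))
bundled_libs = os.path.join(game_dir, "lib")  # the game ships its own .so files here

env = dict(os.environ)
# Note: this redirects lookup for ALL shared libraries the process loads,
# not just the ones the game bundles -- which is exactly why it's a hack.
env["LD_LIBRARY_PATH"] = bundled_libs + ":" + env.get("LD_LIBRARY_PATH", "")

sys.exit(subprocess.call([os.path.join(game_dir, "game.bin")] + sys.argv[1:], env=env))
[/code]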
In fact, SDL2 just added support for overriding static linking, so that ported games using static linking rather than just LD_LIBRARY_PATH hacks can still allow the user or distro packager to force a newer version of SDL on the game in order to apply fixes.
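As I understand it, the override is driven by an SDL_DYNAMIC_API environment variable that points SDL's dynamic-API shim at a replacement library, even when the game linked SDL statically. Roughly like this; the library path is just a guess at a typical 64-bit Debian/Ubuntu location, and "./game.bin" is a placeholder:
[code]
#!/usr/bin/env python3
# Sketch: forcing a game's (possibly statically linked) SDL2 onto a newer
# system copy via SDL's dynamic-API override. The library path below is an
# assumption for a typical 64-bit Debian/Ubuntu install; adjust as needed.
import os
import subprocess

env = dict(os.environ)
env["SDL_DYNAMIC_API"] = "/usr/lib/x86_64-linux-gnu/libSDL2-2.0.so.0"

# "./game.bin" stands in for whatever the port's real binary is called.
subprocess.call(["./game.bin"], env=env)
[/code]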
In essence, it's a trade-off between security and stability. Private DLLs put responsibility for updates on the application provider, while fully shared libraries make it easy for the OS's update system to patch a security bug, at the risk of breaking an application if the library or the application isn't rigorous about maintaining and sticking to an agreed-upon stable ABI.
It's similar to the situation that has prompted Microsoft to take steps like requiring webcams to use the Microsoft-provided USB Video Class driver if they want Windows Logo certification: the majority of remaining BSODs are caused by third-party drivers that Microsoft can't fix, because it doesn't have enough leverage to force the vendors to update them.
As Donnie Berkholz explains in "On package management: Negating the downsides of bundling", there's no simple solution.
I'd be careful citing Ingo Molnar. He's burned enough credibility and made enough enemies over the years on other issues that a lot of people will just dismiss his arguments because he made them.