You can run an entire, productive modern Linux (minus a modern browser*) on 128MB of RAM and one slow core. If you push lower, you start running into issues. I would recommend having around 200MB or so of swap for sudden spikes of memory usage. An aggressive userspace OOM killer may make life easier.
On Linux, if you just run SDDM to launch Xfce, you will quickly OOM the system, because SDDM will stay in memory. The same goes for most display managers. So the real way is to just `startx` your desktop environment directly and use console login.
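As a minimal sketch (assuming i3 is installed and X is already configured), a one-line ~/.xinitrc is enough:

    # ~/.xinitrc -- hand X straight to the window manager, no display manager
    exec i3

After logging in on the console, run `startx`, and only the X server plus i3 stay resident.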
i3 is the best call for usability/a modern feeling, with extremely low memory usage. The reasoning is that, if you're used to sway or i3, this will feel like home, and it has all the features you need to be productive. Anything else will eat more RAM, from what I've tried. It also feels really fast, even if your GPU is struggling, because there are no animations and very little movement.
I would personally recommend Alpine, as it really comes with nothing extra. You can then install a desktop environment manually (or use their setup-desktop script if you have plenty of RAM and storage). TinyCore is a bit too wild to do modern computing on; the paradigms are just too outdated, the installation is a bit of a pain, and the installer would OOM on the same system where I can run my entire i3 alpine setup.
DSL seems cool, I haven't tried it; I just wanted to share my experience.
You can try all of this by setting up a qemu VM. Be aware that you will need some extra RAM just for the BIOS, so if you configure 210MB you'll end up with around 128MB usable, or so. Your OS will accurately report how much is usable.
You can then set CPU cores with usage limits, limit HDD speeds to 2000s HDD speeds (so that your swap isn't fast), and so on. It's a fun exercise to try to develop software on such a limited machine, and it's fun to see which software launches and which doesn't.
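A rough sketch of such a VM (image names are placeholders, the throttling numbers are just guesses at 2000s-era disk speeds, the CPU model is an old one QEMU happens to ship, and the CPUQuota trick assumes a systemd host):

    # one slow core, ~128MB usable after firmware overhead, throttled virtio disk;
    # run as root or add --user to systemd-run
    systemd-run --scope -p CPUQuota=40% \
      qemu-system-x86_64 -m 210M -smp 1 -cpu pentium2 \
        -drive file=alpine.img,format=raw,if=virtio,throttling.bps-total=30000000,throttling.iops-total=120 \
        -cdrom alpine-standard.iso -boot d

The throttled disk is what keeps swap honest; without it, a modern SSD hides most of the memory pressure.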
*: the browser is an issue. Firefox is the preferable option, but it wouldn't launch on so little RAM. NetSurf or elinks/lynx etc. is the way to go, but websites that rely on JS to work, like any and all Rust documentation sites, will be completely unusable.
DSL is much older, and the original version came as a 50MB disk image.
The current version clocks in at ~700MB, which is still very small compared to any modern Linux installation media.
On the other hand, it seems like DSL takes a more extreme approach to slimming down than the i3/Xfce route, plus DSL ships Dillo, which is arguably the most modern-ish (to the extent possible) and lightest browser in existence.
I still remember burning the dsl livecd image to a mini-cd shaped like a credit card (edges trimmed off) and using it on university workstations.
Yeah, I had one of these in my wallet with DSL.
https://en.wikipedia.org/wiki/Bootable_business_card
I'm sure I burned a Gentoo stage 1 boot ISO onto one of those card-sized discs.
These days, DSL is just Debian with less installed. The 700 MB is a curated list of software chosen to fit on a CD image, but you have access to the full Debian repos.
It's kind of cheating, but I wonder if you could set up some kind of "server side rendering proxy" that would run all the JS on a given page, and send the client a plain html page with hyperlinks in place of interactive JS elements.
Edit: https://www.brow.sh/
Opera Mini's "extreme mode" takes this approach. The server pre-renders content, also stripping out things the client doesn't need or that would require a lot of resources/bandwidth.
Note that this does present a bit of a man-in-the-middle scenario, and Opera's chief income is from advertising (and "query").
Just use Dillo. If something has videos, use mpv+yt-dlp with a few lines in ~/.config/mpv/config and ~/yt-dlp.conf.
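A minimal sketch of what those two files might contain, capping streams at 480p to keep CPU and RAM use down (illustrative values, not the poster's actual settings):

    # ~/.config/mpv/config
    ytdl-format=best[height<=480]   # don't fetch streams bigger than the screen
    hwdec=auto                      # use hardware decoding when the GPU supports it

    # ~/yt-dlp.conf
    -f "best[height<=480]"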
That's a wonderful idea! Thank you!
Would that work with CORS?
Interesting post, thank you.
"Back in the day" people were running HP technical workstations, with X11 with CDE, with 128MB RAM, on Pentium-II equivalent speed CPUs - and they liked it!
It built character.
Kids these days...
Another distro worth noting here is EasyOS, a current project by Puppy Linux creator Barry Kauler: https://easyos.org/
I remember having tested it, but can't remember what it was like :) -- at least it didn't make me switch from Tiny Core Linux, which I've used extensively. From a superficial, distro-hopper view, DSL, Puppy, EasyOS and Tiny Core all feel quite similar, I guess.
As a side note, it is interesting to see DSL and TC on the HN front page in two consecutive days of 2025. Both are very old projects; I wonder what's the impulse behind this current interest.
- Use Dillo with gopher/gemini plugins. Add mpv+yt-dlp; some of the HN comments on this page have my hints posted.
- Cookies setup for HN:
- Use ZRAM (see the sketch after this list).
- Tiling sucks on small resolutions. Use CWM or IceWM.
- XTerm is very small and works fine. I can post my Xresources here.
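A rough sketch of the ZRAM setup (assuming the zram kernel module and util-linux's zramctl are available; size and compressor are just example values):

    modprobe zram
    zramctl --find --size 256M --algorithm zstd   # prints the device it set up, e.g. /dev/zram0
    mkswap /dev/zram0
    swapon --priority 100 /dev/zram0              # prefer compressed RAM over disk swap

On 128MB of RAM, compressed swap in RAM absorbs small spikes far faster than a 2000s-era disk would.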
Don't forget the links2 classic web browser! (It's still missing <video> and <audio> element support on X11/Wayland, though.)
Server-side rendering will collect (steal) personal info, so it is a no-go. The only right solution is for online services to provide a plain web site alongside the WHATWG-cartel web app, if that fits the service, of course. There is no other way, and hardcore regulation is very probably required.
I recently used it to boot a ~1996 Compaq Presario from CD-ROM to image the hard drive to a USB stick before wiping it for my retro-computer fun :)
It's kind of sad to hear "adult" people claim in all seriousness that it's reasonable for a kernel alone to use more memory than the minimum requirement for running Windows 95, an operating system with a kernel, drivers, a graphical user interface and even a few graphical user-space applications.
I got this insight from a previous thread: you can run Linux with a GUI on the same specs as Win95 just fine if your display resolution is 640x480. The framebuffer size is the issue.
That and the fact that everything is 64 bit now. The Linux kernel is certainly much bigger though and probably has many more drivers loaded.
It is not just one factor, but the size of a single bitmap of the screen is certainly an issue.
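The arithmetic is easy to check (shell arithmetic, assuming a single uncompressed buffer):

    echo $(( 640 * 480 * 1 / 1024 )) KiB      # 8bpp 640x480, Win95-era: 300 KiB
    echo $(( 1920 * 1080 * 4 / 1024 )) KiB    # 32bpp 1080p: 8100 KiB, roughly 8 MiB per buffer

And a modern compositor typically keeps more than one such buffer around.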
I mean, why is that a problem? Win95 engineering reflects the hardware of the time, the same way today's software engineering reflects the hardware of our time. There's no ideal here, there's no "this is correct," etc.; it's all constantly changing.
This is like car guys today bemoaning the simpler carburetor age, or the car guys before them bemoaning the Model T age of simplicity. It's silly.
There will never be a scenario where you need all this lightweight stuff outside of extreme edge cases, and there's SO MUCH lightweight stuff it's not even a worry.
Also, it's funny you should mention Win95, because I suspect that reflects your age, but a lot of people here are from the DOS/first Mac/Win 2.0 age, and for that crowd Win95 was the horrible resource pig and complexity nightmare. Tech press and nerd culture back then were incredibly anti-95 for 'dumbing it all down' and 'being slow', but now it's seen as the gold standard of 'proper computing.' So it's all relative.
The way I see hardware and tech is that we are forced to ride a train. It makes stops, but it cannot stop. It will always go to the next stop. Wanting to stay at a certain stop doesn't make sense and is in fact counter-productive. I won't go into this, but Linux on the desktop could have been a bigger contender if the Linux crowd and companies had been willing to break a lot of things and 'start over' to be more competitive with Mac or Windows, which at the time did break a lot of things and did 'start over' to a certain degree.
The various implementations of the Linux desktop always came off clunky and tied to Unix-culture conventions which don't really fit the desktop model, which wasn't really appealing for a lot of people, and a lot of that was based on nostalgia and this sort of idealizing of old interfaces and concepts. I love KDE, but it's definitely not remotely as appealing as the Win11 or macOS GUI and ease of use.
In other words, when nostalgia isn't pushed back on, we get worse products. I see so much unquestioned nostalgia in tech spaces; I think it's something that hurts open source projects and even many commercial ones.
If you can compile the kernel, though, there is no reason that W95 should be any smaller than your specifically compiled kernel; in fact it should be much bigger.
However, this is of course easier said than done.
> There will never be a scenario where you need all this lightweight stuff
I think there are many.
Some examples:
* The fastest code is the code you don't run.
Smaller = faster, and we all want faster. Moore's law is over, Dennard scaling is long gone, and smaller feature sizes are getting absurdly difficult and therefore expensive to fab. So if we want our computers to keep getting faster, as we've got used to over the last 40-50 years, then the only way to keep delivering that will be to start ruthlessly optimising, shrinking, and finding more efficient ways to implement what we've got used to.
Smaller systems are better for performance.
* The smaller the code, the less there is to go wrong.
Smaller doesn't just mean faster, it should mean simpler and cleaner too. Less to go wrong. Easier to debug. Wrappers and VMs and bytecodes and runtimes are bad: they make life easier but they are less efficient and make issues harder to troubleshoot. Part of the Unix philosophy is to embed the KISS principle.
So that's performance and troubleshooting. We aren't done.
* The less you run, the smaller the attack surface.
Smaller code and less code means fewer APIs, fewer interfaces, fewer points of failure. Look at djb's decades-long policy of offering rewards to people who find holes in qmail or djbdns. Look at OpenBSD. We all need better, more secure code. Smaller, simpler systems built from fewer layers mean more security, less attack surface, less to audit.
Higher performance, easier troubleshooting, and better security. That's three reasons.
Practical examples...
The Atom editor spawned an entire class of app: Electron apps, Javascript on Node, bundled with Chromium. Slack, Discord, VSCode: there are multiple such apps used by tens to hundreds of millions of people now. Look at how vast they are. Balena Etcher is a, what, nearly 100 MB download to write an image to USB? Native apps like Rufus do it in a few megabytes. Smaller ones like USBimager do it in hundreds of kilobytes. A dd command does it in under 100 bytes.
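For instance, one sketch of that sub-100-byte dd command (replace /dev/sdX with the actual target device, and triple-check it; dd will happily overwrite anything):

    dd if=image.iso of=/dev/sdX bs=4M status=progress conv=fsync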
Now some of the people behind Atom wrote Zed.
It's 10% of the size and 10x the speed, in part because it's a native Rust app.
The COSMIC desktop looks like GNOME, works like GNOME Shell, but it's smaller and faster and more customisable because it's native Rust code.
GNOME Shell is Javascript running on an embedded copy of Mozilla's Javascript runtime.
Just like dotcoms wanted to dis-intermediate business, remove middlemen and distributors for faster sales, we could use disintermediation in our software. Fewer runtimes, better smarter compiled languages so we can trap more errors and have faster and safer compiled native code.
Smaller, simpler, cleaner, fewer layers, fewer abstractions: these are all good things which are desirable.
Dennis Ritchie and Ken Thompson knew this. That's why Research Unix evolved into Plan 9, which puts way more stuff through the filesystem to remove whole types of API. Everything's in a container all the time, and the filesystem abstracts the network, the GUI and more. It has under 10% of the syscalls of Linux and a kernel of about 5MB of source, and yet it has much of Kubernetes in there.
Then they went further, replaced C too, made a simpler safer language, embedded its runtime right into the kernel, and made binaries CPU-independent, and turned the entire network-aware OS into a runtime to compete with the JVM, so it could run as a browser plugin as well as a bare-metal OS. Now we have ubiquitous virtualisation so lean into it: separate domains. If your user-facing OS only runs in a VM then it doesn't need a filesystem or hardware drivers, because it won't see hardware, only virtualised facilities, so rip all that stuff out. Your container host doesn't need to have a console or manage disks.
This is what we should be doing. This is what we need to do. Hack away at the code complexity. Don't add functionality, remove it. Simplify it. Enforce standards by putting them in the kernel and removing dozens of overlapping implementations. Make codebases that are smaller and readable by humans.
Leave the vast bloated stuff to commercial companies and proprietary software where nobody gets to read it except LLM bots anyway.
I agree with this take. Win95's 4MB minimum/8MB recommended memory requirement and a 20MHz processor are seen as the acceptable place to draw the line, but there were graphical desktops on the market before that running on systems with 128K of RAM and 8MHz processors. Why aren't we considering Win95's requirements ridiculously bloated?
Yep, at the time the Amiga crowd was laughing at the bloat. But now it's suddenly the gold standard of efficiency? I think a lot of people like to be argumentative because they refuse to understand they are engaging in mere nostalgia and not in anything factual or logical.
Popular in 2024 (399 points, 179 comments) https://news.ycombinator.com/item?id=39215846
Should really be titled "Damn Small Linux 2024", as this is a reboot of an older distro.
I was going to comment that it must have been posted multiple times before 2024, but since this is a refresh of the older distro, those posts probably used different URLs. I'm not sure what's new about it to warrant a post today; the last release is RC7 from June 2024, and the webpage is full of popup ads that are really annoying.
Perhaps someone discovered it for the first time today? If so, this used to be much smaller. 50MB vs 700MB today. I mean, it's a damn small linux that includes Firefox... that doesn't seem quite right to me.
I think the spiritual (and actual) successor to DSL is http://tinycorelinux.net . Which was also discussed here two days ago: https://news.ycombinator.com/item?id=46173547 .
Thankfully the linked page includes discussion by the author on how it used to be 50MB and why he decided to revisit his original project with a new scope and size limit.
Every time I looked at DSL, I never understood the need to include 4 Web Browsers in a distro that supposedly prides itself on size.
When you look at the actual list of those 4, it's not as hard to understand any more.
It's Firefox, Dillo, Links2 and Netsurf GTK :)
Dillo is something I'd love to daily drive like I did 20 years ago, but it would just fail on most modern websites. But it's what, 2MB in total (binary+libraries)?
Links2 is text-terminal oriented. No modern browser can do that natively at all. All the competition is even smaller (w3m, lynx). Plus links2 can run in graphics mode, even on a framebuffer, so you can run it without an X server at all.
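If I remember right, graphics mode is just the -g switch (assuming your links2 build was compiled with graphics support):

    links2 -g https://example.org   # on a console framebuffer, no X needed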
So Fx is the only "general purpose" browser on that list, but is just too big for old hardware.
So you can use as little CPU and RAM as necessary to browse the page you want to read at any given moment.
Agreed. Why not have one installed by default and the other 3 could be recommended by DSL as alternatives?
This is an effort to preserve RAM more than disk while still having software that works.
I remember that while backpacking in Asia I kept a bootable DSL USB with me. When I'd visit a net cafe I'd simply boot into my DSL environment and bypass the entire windows PC (which was full of spyware and password stealing programs).
It worked great back then, I'm sure it works even better now.
Took me a moment to understand what "DSL" relates to in this context. I thought of "dynamic scripting language" at first and was confused. But here it just means "Damn Small Linux".
The problem with old computers isn't that they're slow but that they fail randomly, so they don't need a "smaller" Linux; they need more resiliency that can cope with random RAM errors, corrupt disks, and absurd CPU instruction failures.
The size was a 90s problem.
The real issue is that old hardware uses a lot of electrical power. You can get a small single-board computer with at least as much computing power as those machines but using 20 to 30 times less electrical power, and fitting in the palm of your hand.
It's not really a problem for most retro-computing enthusiasts; it only comes out to a couple of bucks a month in electricity, and that's assuming you leave the computer running all month.
It's not an issue, it's just a price to pay.
What sorts of techniques can be used to deal with those issues?
Do you have any recommendations on resilient software and practices?
My old computers that I still run _are_ 90s machines.
Well, technically the Eee is from '07. But it is 32-bit, with everything that entails.
Why are there so many spammy junk ads on this site? :|
I run NoScript + uBlock and I thought "wow what a pretty, simple site!". Maybe the author does the same.
Hundreds of ad partners. The GDPR banner is so annoying I won't even go further.
Smugly behind Brave and Adguard, I had no idea...
A great reason to try and support small distros is that older computers can still be used as long as they work.
There are also some charities that ship old PCs to Africa and install a small Linux distro on them, e.g.:
https://www.computers4charity.org/computers-for-africa
https://worldcomputerexchange.org/
Excellent point.
When I lived in London I helped clients donate a lot of kit to ComputerAid International:
https://www.computeraid.org/
And what's now Computers4Charity:
https://www.computers4charity.org/
I used to use this on a CD-ROM, for SSH-ing into my personal server to check email (from work or SO's place), when I didn't have a laptop or handheld with me. USB flash drives often didn't boot by default on PC hardware, but CD-ROMs did.
Later, I made an immutable "Swiss Army knife" USB stick distro called LilDeb, based on Debian Live. And literally carried it on a Swiss Army pocketknife on my keychain. LilDeb was implemented in two big shell scripts, which you can still see at: https://www.neilvandyke.org/lildeb/
Distros like these help troubleshoot boxes that are old/slow but also not used as computers in the traditional sense: for example, network boxes, NAS units, video-recording boxes, etc. that can't run the latest LTS Ubuntu well but can boot a distro like DSL. Getting VGA output on these things with a fast-booting distro helps you fix things like corrupt drives, bad partitioning, and bad boot loaders, which just needs a few terminal commands and a distro that boots up quickly.
It once took Ubuntu 18.04 30 minutes to boot on an old dual-core Intel network box. I switched to Xubuntu and it was about 5 minutes. Imagine having to do multiple reboots.
DSL uses the latest Debian kernel. It is not any smaller than Ubuntu.
Yeah it is.
It fits into under 700 MB and runs in well under 100 MB of RAM. The default Ubuntu image is about 6 GB now and takes a gig of RAM.
Have you not tried it? I have:
https://www.theregister.com/2024/02/14/damn_small_linux_retu...
The GUI is usually the problem. I have booted Xubuntu and it's still slow. Slow systems with older GPUs simply can't keep up with newer desktops. Most of the time I need a terminal, but a lightweight desktop can help if I quickly want to open a browser and search for something so that I can copy-paste more complicated commands.
It is still slow.
Try Alpine. It's amazing.
Xubuntu 22.04 took nearly 10 GB of disk and half a gig of RAM. I measured it:
https://www.theregister.com/2022/08/18/ubuntu_remixes/
Alpine takes 1.1 GB of disk and under 200 MB of RAM.
https://www.theregister.com/2025/12/05/new_lts_kernel_and_al...
Both running a full Xfce $CURRENT desktop, in a Virtualbox VM.
Back in the early 00's I used DSL in university, as it had an x86 emulator included. I could plug the USB drive in and run Linux under Windows. Kept that on a "huge" 512MB thumb drive that cost over $100. Still have that drive and it still works.
Damn Small was my first ‘home’ distro if only because it took roughly four hours to download the ISO on dialup.
Those were some… painful times.
The other day another OS, Tiny Core Linux, was posted. Today this. I really wonder what the trade-off is? Day-to-day usability, security, or something else will surely be missing?
Here's a cool story of someone using a mini Linux (not DSL) to save a company-wide bug at a fast food chain.
https://web.archive.org/web/20100520020401/http://therealedw...
The wildest part of that to me is that OP couldn't help out because there was somehow something more pressing than every non-franchise store not being able to accept payment and likely needing a lot of expertise to patch it fast enough.
There isn't enough detail for me to know how serious it was in regards to stores not being able to accept payments, but the blog's date being 2010, and it being written about a time when every megabyte of file size mattered, makes me think it was possibly much earlier, like maybe 2001-2005. If that's the case, maybe not being able to process cards wasn't as big of a deal as it would be today, because you could likely assume most if not all customers could pay with cash if card payments were down.
I thought the same. Probably his boss first had to realize that OP's teammates needed more help.
I didn't know it was revived, had some fun with it back in the day. I'm curious to see how it compares against Alpine which is also very compact because of musl.
Alpine is more compact, not just because of musl but because of BusyBox.
DSL uses the Debian kernel, Glibc, and the GNU utils.
I salvaged many an aging computer with DSL back in the early 2000s. Great to see life breathed back into the project.
Damn Small Linux was one of the first distros I tried out as a teenager with a LiveCD. It's sad it fell apart due to its lead being rather an asshole to his contributors and overall incompetent.
BTW, one of the former core contributors went on to make their own distro, called Tiny Core Linux.
How does this compare to Alpine Linux, Amazon Linux and Slackware, including zipslack? Tiny Core Linux?
TinyCore’s “Core” is still just 17 MB and is text-based. It includes the tools to install everything else. It only supports wired networks for the most part.
“TinyCore” is 23MB. It includes a minimal GUI desktop.
“CorePlus” at 243 MB is internationalized, has a half-dozen more window managers to choose from, has wireless networking tools, and a remastering tool to spin your own.
http://tinycorelinux.net/downloads.html
TCL seems more like the modern DSL than DSL 2024 does.
I had no issues at all running Alpine with a UI on a simulated 128MB RAM machine with a few GB of storage (with simulated 2000s disk speeds). That's 128MB not counting memory for the BIOS, of course.
i3 and NetSurf made that entirely possible. Mind you, the only things that didn't work well were Firefox (wouldn't launch due to OOM) and compiling Rust projects. Single translation units in Rust would immediately OOM the system, whereas C, C++, etc. worked fine.
Seems to have been HN'd. Might be a bit too small to handle the traffic.