I would love to like Nextcloud, and it's pretty great that it exists at all. That alone makes it better than... well, everything else, which I haven't found.
What frustrates me is that it looks like it works, but once in a while it breaks in a way that is pretty much irreparable (or at least not in a practical way).
I want to run an iOS/Android app that backs up images to my server. I tried the iOS app and when it works, it's cool. It's just that once in a while I get errors like "locked webdav" files and it never seems to recover, or sometimes it just stops synchronising and the only way to recover seems to be to restart the sync from zero. It will gladly re-upload 80GB of pictures "for nothing", with each one discarded when it arrives on the server because it already exists (or so it seems; maybe it just overwrites everything).
The thing is that I want my family to use the app, so I can't access their phone for multiple hours every 2 weeks; it has to work reliably.
If it was just for backing up my photos... well I don't need Nextcloud for that.
Again, alternatives just don't seem to exist where I can install an app on my parents' iPhones and have it synchronise their photo gallery in the background. Except iCloud, I guess.
I stopped using Nextcloud when the iOS app lost data.
For some reason the app disconnected from my account in the background from time to time (annoying, but I didn't think it was critical). Once, I pasted data into Nextcloud through the Files app integration; it didn't sync because it was disconnected, didn't say anything, and lost the data.
I never had data outright vanish, but similar to the comment you replied to, it was just unreliable. I found Syncthing much more useful over the long haul. The last 3 times I've had to do anything with it were simply to manage having new machines replace old ones.
Syncthing sadly doesn't let you skip downloading certain folders or files, but I just moved those to other storage. It beats the Nextcloud headache.
Recently people built a super-lightweight alternative named copyparty[0]. To me it looks like it does everything people tend to need, without all the bloat.
I think "people" deserves clarification: Almost the entire thing was written by a single person and with a _seriously_ impressive feature set. The launch video is well worth a quick watch: https://www.youtube.com/watch?v=15_-hgsX2V0&pp=ygUJY29weXBhc...
I don't say this to diminish anyone else's contribution or criticize the software, just to call out the absolutely herculean feat this one person accomplished.
I have tried running micro (https://micro-editor.github.io/) on my phone, but running tmux and vim on a phone is another beast entirely.
I have found that typing normally is really preferable on Android, and I didn't like having to press colons or Ctrl or anything. Micro is just such a great thing overall that it fit perfectly: when I had that device, I was writing more basic Python on my phone than on my PC.
Back then I was running Alpine in UserLand, and I learnt a lot trying to get that Alpine VM of sorts to work with Python, since it basically refused to. I've probably forgotten most of it now, but the solution was very hacky (gcompat, maybe) and I liked it.
This is not a full alternative, as it only covers files. Note what the article says: "I like what Nextcloud offers with its feature set and how easily it replaces a bunch of services under one roof (files, calendar, contacts, notes, to-do lists, photos etc.), but ".
For us Nextcloud AIO is the best thing under the sun. It works reasonably well for our small company (about 10 ppl) and saves us from Microsoft. I'm very grateful to the developers.
Hopefully they are able to act on such findings, or rewrite it in Go :-). Hmm, if Berlin (Germany) didn't waste so much money on ill-advised, ideology-driven, long-term state-destroying actions and "NGOs", they would have enough money to fund hundreds of such rewrites. Alas...
Why should Germany be wasting public money on a private company who keeps shoveling more and more restrictions on their open-source-washed "community" offering, and whose "enterprise" pricing comes in at twice* the price MS365 does for fewer features, worse integration, and with added costs for hosting, storage, and maintenance?
* or same, if excluding nextcloud talk, but then missing a chat feature
It makes a lot of sense for Germany to keep some independence from foreign proprietary cloud providers (Microsoft, Google); money very well invested imo. It helps the local industry and data stays under German sovereignty.
I find your "open-source-washed" remark deplaced and quite deragoraty. Nextcloud is, imo, totally right to (try to) monetize. They have to, they must further improve the technical backbone to stay competitive with the big boys.
At the very least their app store, which is pretty much required for OIDC, most 2FA methods, and some other features, stops working at 500 users. AFAIK you can still manually install addons, it's just the integration that's gone, though I'm not 100% sure. Same with their notification push service (which is apparently closed source?[0]), which wouldn't be as much of an issue if there were proper docs on how to stand up your own instance of that.
IIRC they also display a banner on the login screen to all users advertising the enterprise license, and start emailing enterprise ads to all admin users.
Their "fair use policy"[1] also includes some "and more" wording.
There is no way it’s going to be completely rewritten from scratch in Go, and none of whatever Germany is or isn’t doing affects that in any way shape or form.
Actually, it's already been done by the former Nextcloud fork/predecessor. OwnCloud shared a big percentage of the Nextcloud codebase, but they decided to rewrite everything under the name OCIS (OwnCloud Infinite Scale) a couple of years ago. Recently, OwnCloud got acquired by Kiteworks and it seemed like they got in a fight with most of the staff. So big parts of the team left to start "OpenCloud", which is a fork of OCIS and is now a great competitor to Nextcloud. It's much more stable and uses less resources, but it also does a lot less than Nextcloud (namely only File sharing so far. No Apps, no Groupware.)
I think what you described is basically ownCloud Infinite Scale (ocis). I haven't tested it myself but it's something I've been considering. I run normal owncloud right now over nextcloud as it avoided a few hiccups that I had.
It makes perfect sense to me that nextcloud is a good fit for a small company.
My biggest gripe with having used it for far longer than I should have was always that it expected far too much maintenance (4 month release cadence) to make sense for individual use.
Doing that kind of regular upkeep on a tool meant for a whole team of people is a far more reasonable cost-benefit analysis. Especially since it only needs one technically savvy person working behind the scenes, and is very intuitive and familiar on its front-end. Making for great savings overall.
> NOTE: full bidirectional sync, like what nextcloud and syncthing does, will never be supported! Only single-direction sync (server-to-client, or client-to-server) is possible with copyparty
For your specific use case of photos, Immich is the front runner and a much better experience. Sadly for the general Dropbox replacement I haven't found anything either.
> Sadly for the general Dropbox replacement I haven't found anything either.
I had really good luck with Seafile[0]. It's not a full groupware solution, just primarily a really good file syncing/Dropbox solution.
Upsides: everything worked reliably for me, it was much faster, it does chunk-level deduplication and some other things, has native apps for everything, is supported by rclone, has a FUSE mount option, supports mounting as a "virtual drive" on Windows, supports publicly sharing files, shared "drives", end-to-end encryption, and practically everything else I'd want out of a "file syncing solution".
The only thing I didn't like about it is that it stores all of your data as, essentially, opaque chunks on disk that are pieced together using the data in the database. This is how it achieves the performance, deduplication, and other things I _liked_. However it made me a little nervous that I would have a tough time extracting my data if anything went horribly wrong. I took backups. Nothing ever went horribly wrong over 4 or 5 years of running it. I only stopped because I shelved a lot of my self-hosting for a bit.
I can confirm this. We have been using it for 10 years now in our research lab. No data loss so far. Performance is great. Integration with OnlyOffice works quite well (there were sync problems a few years ago - I think upgrading OnlyOffice solved this issue).
Syncthing is under my "want to like" list but I gave up on it. I'm a one person show who just wants to sync a few dozen markdown files across a few laptops and a phone. Every time I'd run it I'd invariably end up with conflict files. It got to the point where I was spending more time merging diffs than writing. How it could do that with just one person running it I have no idea.
That should not happen. I use it a lot and have never had this issue; there may be something wrong with your setup.
A good idea is to run it on an always-on server and add your share there as an encrypted one (i.e. you set the folder password on all your devices but not on the server); this pretty much gives you a Dropbox-like experience, since you have a central place to sync with even when your other devices are not online.
My Syncthing experience matches Oxodao's. Over years with >10k files / 100 gb, I've only ever had conflicts when I actually made conflicting simultaneous changes.
I use it on my phone (configured to only sync on WiFi), laptop (connected 99% of the time), and server (up 100% of the time).
The always-up server/laptop as a "master node" are probably key.
I don't think there is a good alternative to open-source Syncthing; nothing else just does syncing the way Syncthing does.
Let me know if you know of any alternatives that have worked for you. I haven't tried Syncthing myself, but I have heard good things about it overall, so I feel like I like it already even without trying it, I guess.
If you just need a Dropbox replacement for file syncing, Nextcloud is fine if you use the native file system integrations and ignore the web and WebDAV interfaces.
I would say the opposite. Ente has one huge advantage: it is e2ee, so it's a must if you are hosting someone else's photos. But if you are planning to run something on your server/NAS for yourself, then Immich has many advantages (which often follow from not being e2ee). For example, your files are still plain files on disk, so there's less worry about something breaking unrecoverably. And you can add external locations. With Ente it is just about backing up your phone photos; Immich works pretty well as a camera photo organizer.
Does it have a mobile app that backs up the photos while in the background and can essentially be "forgotten"? That's pretty much what I need for my family: their photos need to get to my server magically.
There is also "memories for nextcloud" which basically matches immich in feature set (was ahead until last month), nextcloud+memories make a very strong replacement for gdrive or dropbox
Yeah I guess my issue is that if I can't trust the mobile app not to lose my photos (or stop syncing, or not sync everything), then I just can't use it at all. There is no point in having Nextcloud AND iCloud just because I don't trust Nextcloud :D.
Does its iOS/Android app automatically back up photos in the background? When I looked into Immich (didn't try it), it sounded like it was more of a server thing. I need the automation so that my family can forget about it.
I use Syncthing as a Dropbox replacement, and I like it. I have a machine at home running it that is accessible over the net. Not the prettiest, but it works!
Does it recover though, or do you end up in situations where your setup is essentially broken?
Like if I backup photos from iOS, then remove a subset of those from iOS to make space on the phone (but obviously I want to keep them on the cloud), and later the mobile app gets out of sync, I don't want to end up in a situation where some photos are on iOS, some on the cloud, but none of the devices has everything, and I have no easy way to resync them.
It won't recover unless I do something... sometimes just quitting the iPhone app and then toggling backups off and on works, but not always. I had to completely delete and reinstall the app once to get it to work, and had to resync all 45,000 images/videos I had.
I have had the server itself fail in strange ways where I had to restart it. I had to do a full fresh install once when it got hopelessly confused and I was getting database errors saying records either existed when they shouldn't or didn't exist when they should.
I think I am a pretty skilled sysadmin for these types of things, having both designed and administered very large distributed systems for two decades now. Maybe I am doing things wrong, but I think there are just some gotchas still in the project.
Right, that's the kind of issues I am concerned about.
iCloud / Google Photos just don't have that, they really never lose a photo. It's very difficult for me to convince my family to move to something that may lose their data, when iCloud / Google Photos works and is really not that expensive.
It has gotten more stable as I have used it for a while. I think if you want to do it, just wait until it is stable and you have a good backup routine before relying on it.
Have you looked into https://filebrowser.org/? While it's not a drop-in replacement for Google Drive/Dropbox, it has been serving me well for a similar quick use case.
It works well for smaller folders, but it slows down to a crawl with folders that contain thousands of files. If I add a file to an empty shared folder it syncs almost instantly, but if I take a photo, both sides become aware of the change rather quickly and then just sit around for 5 minutes doing nothing before starting the transfer.
How many thousands? I have a folder with a total of 12760 files spread across several subfolders, but the largest, I think, is the one with 3827 files.
I've noticed the sync isn't instantaneous, but if I ping one device from the other, it starts immediately. I think Android has some kind of network related sleep somewhere, since the two nixos ones just sync immediately.
The Nextcloud Android app is particularly bad if you use it to back up your camera's DCIM directory and then delete the photos on your phone: it overwrites the files on Nextcloud as new photos are taken. I get why this happens, but it is terrible.
I don't doubt that large amounts of JavaScript can often cause issues, but even when cached, Nextcloud feels sluggish. When I look at just the network tab on a refresh of the calendar page, it makes 124 network calls, 31 of which aren't cached. It seems to be making a call per calendar, each of which takes over 30ms, so that stacks up the more calendars you have (and you have several by default, like contact birthdays).
The JavaScript performance trace shows that over 50% of the work is in making the asynchronous calls that pull those calendars and other data one by one, and then in all the refresh updates it triggers while putting them onto the page.
Supporting all these N calendar calls are individual pulls for calendar rooms, calendar resources, and the user's "principals": all separate network calls, some of which must be gating the later per-calendar calls.
It's not just that: it also makes calls for notifications, groups, user status, and multiple heartbeats to complete the page, all before it tries to get the calendar details.
This is why I think it feels slow: it pulls down the page, and then the JavaScript pulls down all the bits of data for everything on the screen with individual calls, often waiting for responses before it can make further calls, of which there can be N many depending on what the user is doing.
So across the local network (2.5Gbps) that is a second, most of it spent waiting on the network. If I use the regular 4G level of throttling it takes 33.10 seconds! Really goes to show how badly this design copes with extra latency.
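To make that waterfall concrete, here is a rough sketch (the endpoint path is made up for illustration, not Nextcloud's actual API) of why N sequential awaited fetches cost roughly N round trips, while firing them in parallel costs roughly one:

```typescript
// Illustrative only: the endpoint path below is invented, not Nextcloud's real API.
async function loadCalendarsSequentially(ids: string[]) {
  const calendars: unknown[] = [];
  for (const id of ids) {
    // Each await finishes a full round trip before the next request starts,
    // so total time is roughly N * RTT (e.g. 30 calls * 300ms on 4G ~ 9s).
    const res = await fetch(`/api/calendars/${id}`);
    calendars.push(await res.json());
  }
  return calendars;
}

async function loadCalendarsInParallel(ids: string[]) {
  // Firing all requests at once overlaps the waiting, so total time is
  // roughly one RTT plus server processing, regardless of N.
  const responses = await Promise.all(
    ids.map((id) => fetch(`/api/calendars/${id}`))
  );
  return Promise.all(responses.map((res) => res.json()));
}
```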
I was going to say... The size of the JS only matters the first time you download it, unless there are a lot of tiny files instead of a bundle or two. What the article is complaining about doesn't seem like the root cause of the slowness.
When it comes to JS optimization in the browser there's usually a few great big smoking guns:
1. Tons of tiny files: Bundle them! Big bundle > zillions of lazy-loaded files.
2. Lots of AJAX requests: We have WebSockets for a reason!
3. Race conditions: Fix your bugs :shrug:
4. Too many JS-driven animations: Use CSS or JS that just manipulates CSS.
Nextcloud appears to be slow because of #2. Both #1 and #2 are dependent on round-trip times (HTTP request to server -> HTTP response to client) which are the biggest cause of slowness on mobile networks (e.g. 5G).
Modern mobile network connections have plenty of bandwidth to deliver great big files/streams but they're still super slow when it comes to round-trip times. Knowing this, it makes perfect sense that Nextcloud would be slow AF on mobile networks because it follows the REST philosophy.
My controversial take: GIVE REST A REST already! WebSockets are vastly superior and they've been around for FIFTEEN YEARS now. Do I understand why they're so much lower latency than REST calls on mobile networks? Not really: In theory, it's still a round-trip but for some reason an open connection can pass data through an order of magnitude (or more) lower latency on something like a 5G connection.
15MB of JavaScript is 15MB of code that your browser is trying to execute. It’s the same principle as “compiling a million lines of code takes a lot longer than compiling a thousand lines”.
It's a lot more complicated than that. If I have a 15MB .js file and it's just a collection of functions that get called on-demand (later), that's going to have a very, very low overhead because modern JS engines JIT compile on-the-fly (as functions get used) with optimization happening for "hot" stuff (even later).
If there's 15MB of JS that gets run immediately after page load, that's a different story. Especially if there are lots of nested calls. Ever drilled down deep into a series of function calls inside the performance report for the JS on a web page? The more layers of nesting you have, the greater the overhead.
DRY as a concept is great from a code readability standpoint, but it's not ideal for performance when it comes to things like JS execution (haha). I'm actually disappointed that modern bundlers don't normally inline calls at the JS layer. IMHO, they rely too much on the JIT to optimize hot call sites when that could've been done by the bundler. Instead, bundlers tend to optimize for file size, which is becoming less and less of a concern as bandwidth has far outpaced JS bundle sizes.
The entire JS ecosystem is a giant mess of "tiny package does one thing well" that is dependent on n layers of "other tiny package does one thing well." This results in LOADS of unnecessary nesting when the "tiny package that does one thing well" could've just written their own implementation of that simple thing it relies on.
Don't think of it from the perspective of, "tree shaking is supposed to take care of that." Think of it from the perspective of, "tree shaking is only going to remove dead/duplicated code to save file size." It's not going to take that 10-line function that handles <whatever> and put that logic right where it's used (in order to shorten the call tree).
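As a rough illustration of the difference (all names here are made up), this is the kind of call-tree flattening a bundler could do but typically doesn't:

```typescript
// Hypothetical sketch of "inlining at the bundler level"; every name is invented.

// Typical package layering after bundling: the call chain is preserved,
// so each invocation pays for several nested stack frames.
function isObject(x: unknown): boolean {
  return typeof x === "object" && x !== null;
}
function isPlainObject(x: unknown): boolean {
  return isObject(x) && Object.getPrototypeOf(x) === Object.prototype;
}
function mergeDefaults(opts: unknown, defaults: object): object {
  return isPlainObject(opts) ? { ...defaults, ...(opts as object) } : defaults;
}

// What a hypothetical inlining bundler could emit instead: the same logic
// collapsed into the call site, shortening the call tree the JIT would
// otherwise have to optimize away at runtime.
function mergeDefaultsInlined(opts: unknown, defaults: object): object {
  const plain =
    typeof opts === "object" &&
    opts !== null &&
    Object.getPrototypeOf(opts) === Object.prototype;
  return plain ? { ...defaults, ...(opts as object) } : defaults;
}
```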
That 15mb still needs to be parsed on every page load, even if it runs in interpreted mode. And on low end devices there’s very little cache, so the working set is likely to be far bigger than available cache, which causes performance to crater.
Ah, that's the thing: "on page load". A one-time expense! If you're using modern page routing, "loading a new URL" isn't actually loading a new page... The client is just simulating it via your router/framework by updating the page URL and adding an entry to the history.
Also, 15MB of JS is nothing on modern "low end devices". Even an old, $5 Raspberry Pi 2 won't flinch at that and anything slower than that... isn't my problem! Haha =)
There comes a point where supporting 10yo devices isn't worth it when what you're offering/"selling" is the latest & greatest technology.
It shouldn't be, "this is why we can't have nice things!" It should be, "this is why YOU can't have nice things!"
When you write code with this mentality it makes my modern CPU with 16 cores at 4GHz and 64GB of RAM feel like a Pentium 3 running at 900MHz with 512MB of RAM.
>There comes a point where supporting 10yo devices isn't worth it
Ten years isn't what it used to be in terms of hardware performance. Hell, even back in 2015 you could probably still make do with a computer from 2005 (although it might have been on its last legs). If your software doesn't run properly (or at all) on ten-year-old hardware, it's likely people on five-year-old hardware, or with a lower budget, are getting a pretty shitty experience.
I'll agree that resources are finite and there's a point beyond which further optimizations are not worthwhile from a business sense, but where that point lies should be considered carefully, not picked arbitrarily and the consequences casually handwaved with an "eh, not my problem".
>Do I understand why they're so much lower latency than REST calls on mobile networks? Not really: In theory, it's still a round-trip but for some reason an open connection can pass data through an order of magnitude (or more) lower latency on something like a 5G connection.
It's because a TLS handshake takes more than one roundtrip to complete. Keeping the connection open means the handshake needs to be done only once, instead of over and over again.
It's up to the client to do that. I'm merely explaining why someone would see a latency improvement switching from HTTPS to websockets. If there's no latency improvement then yes, the client is keeping the connection alive between requests.
Yes and no: There's still a rather large latency improvement even when you're using plain HTTP (not that you should go without encryption).
I was very curious, so I asked AI to explain why WebSockets would have so much lower latency than regular HTTP, and it gave some (uncited, but logical) reasons:
Once a WebSocket is open, each message avoids several sources of delay that an HTTP request can hit—especially on mobile. The big wins are skipping connection setup and radio wakeups, not shaving a few header bytes.
Why WebSocket "ping/pong" often beats HTTP GET /ping on mobile:

* No connection setup on the hot path
  - HTTP (worst case): DNS + TCP 3-way handshake + TLS handshake (HTTPS) before you can send the request. On mobile RTTs (60–200+ ms), that's 1–3 extra RTTs, i.e., 100–500+ ms just to get started.
  - HTTP with keep-alive/H2/H3: Better (no new TCP/TLS), but pools can be empty or closed by OS/radios/idle timers, so you still pay setup sometimes.
  - WebSocket: You pay the TCP+TLS+Upgrade once. After that, a ping is just one round trip on an already-open connection.
* Mobile radio state promotions
  - Cellular modems drop to low-power states when idle. A fresh HTTP request can force an RRC "promotion" from idle to connected, adding tens to hundreds of ms.
  - A long-lived WebSocket with periodic keepalives tends to keep the radio in a faster state, or makes promotion more likely to already be done, so your message departs immediately.
  - Trade-off: keeping the radio "warm" costs battery; most realtime apps tune keepalive intervals to balance latency vs power.
* Fewer app/stack layers per message
  - HTTP request path: request line + headers (often cookies, auth), routing/middleware, logging, etc. Even with HTTP/2 header compression, the server still parses and runs more machinery.
  - WebSocket after upgrade: tiny frame parsing (client-to-server frames are a 2-byte header + 4-byte mask + payload), often handled in a lightweight event loop. Much less per-message work.
* No extra round trips from CORS preflight
  - A simple GET usually avoids preflight, but if you add non-safelisted headers (e.g., Authorization) the browser will first send an OPTIONS request. That's an extra RTT before your GET.
  - WebSocket doesn't use CORS preflights; the Upgrade carries an Origin header that servers can validate.
* Warm path effects
  - Persistent connections retain congestion window and NAT/firewall state, reducing first-packet delays and occasional SYN drops that new HTTP connections can encounter on mobile networks.
* What about encryption (HTTPS/WSS)?
  - Handshake cost: TLS adds 1–2 RTTs (TLS 1.3 is 1-RTT; 0-RTT is possible but niche). If you open and close HTTP connections frequently, you keep paying this. A WebSocket pays it once, then amortizes it over many messages.
  - After the connection is up, the per-message crypto cost is small compared to network RTT; the latency advantage mainly comes from avoiding repeated handshakes.
* How much do headers/bytes matter?
  - For tiny messages, both HTTP and WS fit in one MTU. The few hundred extra bytes of HTTP headers rarely change latency meaningfully on mobile; the dominant factor is extra round trips (connection setup, preflight) and radio state.
* When the gap narrows
  - If your HTTP requests reuse an existing HTTP/2 or HTTP/3 connection, have no preflight, and the radio is already in a connected state, a minimal GET /ping and a WS ping/pong both take roughly one network RTT. In that best case, latencies can be similar.
  - In real mobile conditions, the chances of hitting at least one of the slow paths above are high, so WebSocket usually looks faster and more consistent.
Wow. Talk about inefficiency. It just said the same thing I did, but using twenty times as many characters.
>Yes and no: There's still a rather large latency improvement even when you're using plain HTTP (not that you should go without encryption).
Of course. An unencrypted HTTP request takes a single roundtrip to complete. The client sends the request and receives the response. The only additional cost is to set up the connection, which is also saved when the connection is kept open with a websocket.
I've never seen anybody recommend WebSockets instead of REST. I take it this isn't a widely recommended solution? Do you mean specifically for mobile clients only?
WebSockets are the secret ingredient to amazing low- to medium-user-count software. If you practice using them enough and build a few abstractions over them, you can produce incredible “live” features that REST-designs struggle with.
Having used WebSockets a lot, I’ve realised that it’s not the simple fact that WebSockets are duplex or that it’s more efficient than using HTTP long-polling or SSEs or something else… No, the real benefit is that once you have a “socket” object in your hands, and this object lives beyond the normal “request->response” lifecycle, you realise that your users DESERVE a persistent presence on your server.
You start letting your route handlers run longer, so that you can send the result of an action, rather than telling the user to “refresh the page” with a 5-second refresh timer.
You start connecting events/pubsub messages to your users and forwarding relevant updates over the socket you already hold. (Trying to build a delta update system for polling is complicated enough that the developers of most bespoke business software I’ve seen do not go to the effort of building such things… But with WebSockets it’s easy, as you just subscribe before starting the initial DB query and send all broadcasted updates events for your set of objects on the fly.)
You start wanting to output the progress of a route handler to the user as it happens (“Fetching payroll details…”, “Fetching timesheets…”, “Correlating timesheets and clock in/out data…”, “Making payments…”).
Suddenly, as a developer, you can get live debug log output IN THE UI as it happens. This is amazing.
AND THEN YOU WANT TO CANCEL SOMETHING because you realise you accidentally put in the actual payroll system API key. And that gets you thinking… can I add a cancel button in the UI?
Yes, you can! Just make a 'ctx.progress()' method. When called, if the user has cancelled the current RPC, it throws an RPCCancelled error that's caught by the route handling system. There's an optional first argument for a progress message to the end user. Maybe add a "no-cancel" flag too for critical sections.
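A minimal sketch of what such a cancellable context could look like, following the names used above (RPCCancelled, ctx.progress); everything else is invented for illustration:

```typescript
// Illustrative only, not a real framework API.
class RPCCancelled extends Error {}

class RpcContext {
  private cancelled = false;

  cancel() {
    this.cancelled = true;
  }

  // Report progress to the client; bail out if the user hit "cancel",
  // unless the handler marked this step as a critical section.
  progress(message?: string, opts?: { noCancel?: boolean }) {
    if (this.cancelled && !opts?.noCancel) throw new RPCCancelled();
    if (message) this.send({ type: "progress", message });
  }

  private send(msg: object) {
    // Write a frame to the user's open WebSocket (omitted in this sketch).
  }
}

// A route handler sprinkles ctx.progress() between steps, so cancellation
// takes effect at the next checkpoint and the UI gets live status updates.
async function runPayroll(ctx: RpcContext) {
  ctx.progress("Fetching payroll details…");
  // ...fetch...
  ctx.progress("Fetching timesheets…");
  // ...fetch...
  ctx.progress("Making payments…", { noCancel: true }); // critical section
  // ...pay...
}
```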
And then you think about live collaboration for a bit… that’s a fun rabbit hole to dive down. I usually just do “this is locked for editing” or check the per-document incrementing version number and say “someone else edited this before you started editing, your changes will be lost — please reload”. Figma cracked live collaboration, but it was very difficult based on what they’ve shared on their blog.
And then… one day… the big one hits… where you have a multistep process and you want Y/N confirmation from the user or some other kind of selection. The sockets are duplex! You can send a message BACK to the RPC client, and have it handled by the initiating code! You just need to make it so devs can add event listeners on the RPC call handle on the client! Then, your server-side route handler can just “await” a response! No need to break up the handler into multiple functions. No need to pack state into the DB for resumability. Just await (and make sure the Promise is rejected if the RPC is cancelled).
If you have a very complex UI page with live-updating pieces, and you want parts of it to be filterable or searchable… This is when you add “nested RPCs”. And if the parent RPC is cancelled (because the user closes that tab, or navigates away, or such) then that RPC and all of its children RPCs are cancelled. The server-side route handler is a function closure, that holds a bunch of state that can be used by any of the sub-RPC handlers (they can be added with ‘ctx.addSubMethod’ or such).
The end result is: while building out any feature of any “non-web-scale” app, you can easily add levels of polish that are simply too annoying to obtain when stuck in a REST point of view. Sure, it’s possible to do the same thing there, but you’ll get frustrated (and so development of such features will not be prioritised). Also, perf-wise, REST is good for “web scale” / high-user-counts, but you will hit weird latency issues if you try to use for live, duplex comms.
WebSockets (and soon HTTP3 transport API) are game-changing. I highly recommend trying some of these things.
After all my years of web development, my rules are thus:
* If the browser has an optimal path for it, use HTTP (e.g. images where it caches them automatically or file uploads where you get a "free" progress API).
* If I know my end users will be behind some shitty firewall that can't handle WebSockets (like we're still living in the early 2010s), use HTTP.
* Requests will be rare (per client): Use HTTP.
* For all else, use WebSockets.
WebSockets are just too awesome! You can use a simple event dispatcher for both the frontend and the backend to handle any given request/response and it makes the code sooooo much simpler than REST. Example:
WSDispatcher.on("pong", pongFunc);
...and `WSDispatcher` would be the (singleton) object that holds the WebSocket connection and has `on()`, `off()`, and `dispatch()` functions. When the server sends a message like `{"type": "pong", "payload": "<some timestamp>"}`, the client calls `WSDispatcher.dispatch("pong", "<some timestamp>")` which results in `pongFunc("<some timestamp>")` being called.
It makes reasoning about your API so simple and human-readable! It's also highly performant and fully async. With a bit of Promise wrapping, you can even make it behave like a synchronous call in your code which keeps the logic nice and concise.
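For readers who want the shape of that dispatcher spelled out, here is a rough sketch inferred from the description above (not the commenter's actual code); the endpoint URL is a placeholder:

```typescript
// Rough sketch of the dispatcher described above: a singleton wrapping one
// WebSocket and routing {type, payload} messages to registered listeners.
type Handler = (payload: unknown) => void;

class Dispatcher {
  private listeners = new Map<string, Set<Handler>>();
  private ws: WebSocket;

  constructor(url: string) {
    this.ws = new WebSocket(url);
    this.ws.onmessage = (event) => {
      const { type, payload } = JSON.parse(event.data);
      this.dispatch(type, payload);
    };
  }

  on(type: string, handler: Handler) {
    if (!this.listeners.has(type)) this.listeners.set(type, new Set());
    this.listeners.get(type)!.add(handler);
  }

  off(type: string, handler: Handler) {
    this.listeners.get(type)?.delete(handler);
  }

  dispatch(type: string, payload: unknown) {
    this.listeners.get(type)?.forEach((handler) => handler(payload));
  }

  send(type: string, payload: unknown) {
    this.ws.send(JSON.stringify({ type, payload }));
  }
}

// Singleton, as described; the URL is a placeholder.
const WSDispatcher = new Dispatcher("wss://example.invalid/ws");
WSDispatcher.on("pong", (ts) => console.log("pong at", ts));
```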
In my latest pet project (collaborative editor) I've got the WebSocket API using a strict "call"/"call:ok" structure. Here's an example from my WEBSOCKET_API.md:
### Create Resource
```javascript
// Create story
send('resources:create', {
  resource_type: 'story',
  title: 'My New Story',
  content: '',
  tags: {},
  policy: {}
});

// Create chapter (child of story)
send('resources:create', {
  resource_type: 'chapter',
  parent_id: 'story_abc123', // This would actually be a UUID
  title: 'Chapter 1'
});

// Response:
{
  type: 'resources:create:ok', // <- Note the ":ok"
  resource: { id: '...', resource_type: '...', ... }
}
```
I've got a `request()` helper that makes the async nature of the WebSocket feel more like a synchronous call. Here's what that looks like in action:
```typescript
const wsPromise = getWsService(); // Returns the WebSocket singleton

// Create resource (story, chapter, or file)
async function createResource(data: ResourcesCreateRequest) {
  loading.value = true;
  error.value = null;
  try {
    const ws = await wsPromise;
    const response = await ws.request<ResourcesCreateResponse>(
      "resources:create",
      data // <- The payload
    );
    // resources.value because it's a Vue 3 `ref()`:
    resources.value.push(response.resource);
    return response.resource;
  } catch (err: any) {
    error.value = err?.message || "Failed to create resource";
    throw err;
  } finally {
    loading.value = false;
  }
}
```
For reference, errors are returned in a different, more verbose format where "type" is "error" in the object that the `request()` function knows how to deal with. It used to be ":err" instead of ":ok" but I made it different for a good reason I can't remember right now (LOL).
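For illustration, a plausible request() helper along these lines could pair each call with its ":ok" or error reply via a correlation id; the `id` field and exact message shape are assumptions here, not the commenter's actual implementation:

```typescript
// Plausible sketch of a request() helper like the one described above.
// Assumes the server echoes back a correlation id on each reply; the `id`
// field and message shape are assumptions, not the real protocol.
function makeRequest(ws: WebSocket) {
  let nextId = 0;
  const pending = new Map<
    number,
    { resolve: (v: any) => void; reject: (e: Error) => void }
  >();

  ws.addEventListener("message", (event) => {
    const msg = JSON.parse(event.data);
    const entry = pending.get(msg.id);
    if (!entry) return; // unsolicited/broadcast message, handled elsewhere
    pending.delete(msg.id);
    if (msg.type === "error") entry.reject(new Error(msg.message));
    else if (msg.type.endsWith(":ok")) entry.resolve(msg);
  });

  return function request<T>(type: string, payload: object): Promise<T> {
    const id = ++nextId;
    ws.send(JSON.stringify({ id, type, ...payload }));
    return new Promise<T>((resolve, reject) =>
      pending.set(id, { resolve, reject })
    );
  };
}
```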
Aside: There are still THREE firewalls that suck so bad they can't handle WebSockets: SophosXG Firewall, WatchGuard, and McAfee Web Gateway.
The thing that kills me is that Nextcloud had an _amazing_ calendar a few years ago. It was way better than anything else I have used. (And I tried a lot, even the calendar add-on for Thunderbird. Which may or may not be built in these days, I can't keep track.)
Then at some point the Nextcloud calendar was "redesigned" and now it's completely terrible. Aesthetically, it looks like it was designed for toddlers. Functionally, adding and editing events is flat out painful. Trying to specify a time range for an event is weird and frustrating. It's better than not having a calendar, but only just.
There are plenty of open source calendar _servers_, but no good open source web-based calendars that I have been able to find.
It's so rare for teams to do data loading well, rarer still to get effective caching, and a product's footing here often only degrades with time. The various sync ideas out there offer such alluring potential: a consistent way to get the client the updated live data it needs.
Side note: I'm also hoping the JS / TC39 source phase imports proposal (aka `import source`) can help large apps like NextCloud defer loading more of their JS until needed. But the waterfall you call out here seems like the really bad side of NextCloud's architecture!
https://github.com/tc39/proposal-source-phase-imports
Having at some point maintained a soft fork / patch-set for Nextcloud.. yes, there is so much performance left on the table. With a few basic patches the file manager, for example, sped up by magnitudes in terms of render speed.
The issue remains that the core itself feels like layers upon layers of encrusted code that, instead of being fixed, have just had another layer added: "Something fundamentally wrong? Just add Redis as a dependency. Does it help? Unsure. Let's add something else. Don't like having the config in a DB? Let's move some of it to ini files (or vice versa)... etc., etc." It feels like that's the cycle; it ain't pretty and I don't trust the result at all. Eventually I abandoned the project.
Edit: at some point I reckon some part of the ecosystem recognised some of these issues and hence Owncloud remade a large part of the fundamentals in Golang. It remains unknown to me whether this sorted things or not. All of these projects feel like they suffer badly from "overbuild".
Edit-edit: another layer to add to the mix is that the "overbuild" situation is probably largely what allows the hosting economy around these open source solutions to thrive since Nextcloud and co. are so over-engineered and badly documented that they -require- a dedicated sys-admin team to run well.
This is my theory as well. NC has grown gradually, almost in silos; every piece of it is some plugin they've imported from contributions at some point.
For example, the reason there's no cohesiveness with a common websocket bus for all those ajax calls is that they all started out as separate plugins.
NC has gone full modularity and lost performance for it. What we need is a more focused and cohesive tool for document sharing.
Honestly I think today with IaC and containers, a better approach for selfhosting is to use many tools connected by SSO instead of one monstrosity. The old Unix philosophy, do one thing but do it well.
This still needs cohesive authorization and central file sharing and access rules across apps.
And some central concept of projects to move all content away from people and into the org and roles
1. Did you open a request to port these basic patches back upstream? If you have order-of-magnitude speed improvements, it would be awesome to share them!
2. You definitely don't need an entire sysadmin team to run Nextcloud. At my work (a large organisation) there are three instances running for different parts/purposes, of which only one is run by more than one person, and I myself run both my personal instance and one for a nonprofit with ~100 people. It's really not much work after setup (and there are plenty of far more complicated systems to set up, trust me).
1. There was no point, having thought about it a bit; a lot of the patches (in essence it was at most a handful) revolved around disabling features, which in turn could never have been upstreamed. An example was, as mentioned elsewhere in this comment section, the abysmal performance of the thumbnail generation feature: it never cached right, never worked right, and even when it did it would absolutely kill listings of larger folders of media. This was basically hacked out and partially replaced with much simpler generation for images alone, and suddenly the file manager worked again for clients.
2. Guess that's debatable, or maybe even skill dependent (mea culpa), and also largely a question of how comfortable one is with systems that cannot be reasoned about cleanly (similar to TFA, I just could not stand the bloat; it made me feel more than mildly unwell working with it). Eventually it was GDPR reqs that drove us towards the big G across multiple domains.
On another note, it strikes me how the attempts at re-generating folder listings online really are Sisyphean work; there should be a clean way to fold multi-user access tokens into the filesystems of phones/PCs/etc. The closest pseudo-example at the moment I guess is classic Google Drive, but of course it would need gating and security on the OS side of things that works to a standard across multiple ecosystems (Apple, MS, Android, iPhone, Linux etc.) ... yeeeeah, better keep polishing that HTML ball of spaghetti I guess ;)
I don't think this article actually does a great job of explaining why Nextcloud feels slow. It shows lots of big numbers for MBs of JavaScript being downloaded, but how does that actually impact the user experience? Is the "slow" Nextcloud just sitting around waiting for these JS assets to load and parse?
From my experience, this doesn't meaningfully impact performance. Performance problems come from "accidentally quadratic" logic in the frontend, poorly optimised UI updates, and too many API calls.
It downloads a lot of JavaScript, it decompresses a lot of JavaScript, it parses a lot of JavaScript, it runs a lot of JavaScript, it creates a gazillion onFoundMyNavel event callbacks which all run JavaScript, it does all manner of uncontrolled DOM-touching while its millions of script fragments do their thing, it xhr’s in response to xhrs in response to DOM content ready events, it throws and swallows untold exceptions, has several dozen slightly unoptimized (but not too terrible) page traversals, … the list goes on and on. The point is this all adds up, and having 15MB of code gives a LOT of opportunity for all this to happen. I used to work on a large site where we would break out the stopwatch and paring knife if the homepage got to more than 200KB of code, because it meant we were getting sloppy.
15+ megabytes of executable code begins to look quite insane when you start to take a gander at many AAA games. You can produce a non-trivial Unity WebGL build that fits in <10 megabytes.
Windows 3.11 also wasn’t shipped to you over a cellular connection when you clicked on it. If it were, 6x1.44MB would have been considered quite unacceptable.
Oh totally, but - normal caching behavior would lead to different results than reported in the article. It would impact cold-start scenarios, not every page load. So something else is up.
I've played around with many self-hosted file manager apps. My first one was Ajaxplorer which then became Pydio. I really liked Pydio but didn't stick with it because it was too slow. I briefly played with Nextcloud but didn't stick with it either.
Eventually I ran into FileRun and loved it, even though it wasn't completely open source. FileRun is fast, worked on both desktop and mobile via browser nicely, and I never had an issue with it. It was free for personal use a few years ago, and unfortunately is not anymore. But it's worth the license if you have the money for it.
I tried setting up SeaFile but I had issues getting it working via a reverse proxy and gave up on it.
I like copyparty (https://github.com/9001/copyparty) - really dead simple to use and quick like FileRun - but the web interface is not geared towards casual users. I also miss FileRun's "Request a file" feature which worked very nicely if you just wanted someone to upload a file to you and then be done.
On the topic of self-hosted file manager apps, I've really liked "filebrowser". Pair it with Syncthing or another sync daemon and you've got a minimal self-hosted Dropbox clone.
> I also miss Filerun's "Request a file" feature which worked very nicely if you just wanted someone to upload a file to you and then be done.
With the disclaimer that I've never used Filerun, I think this can be replicated with copyparty by means of the "shares" feature (--shr). That way, you can create a temporary link for other people to upload to, without granting access to browse or download existing files. It works like this: https://a.ocv.me/pub/demo/#gf-bb96d8ba&t=13:44
Copyparty can't (and doesn't want to) replace Nextcloud for many use cases because it supports one-way sync only. The readme is pretty clear about that. I'm toying with the idea of combining it with Syncthing (for all those devices where I don't want to do a full sync), does anybody have experience with that? I've seen some posts that it can lead to extreme CPU usage when combined with other tools that read/write/index the same folders, but nothing specifically about Syncthing.
Combining copyparty with Syncthing is not something I have tested extensively, but I know people are doing this, and I have yet to hear about any related issues. It's also a use case I want to support, so if you /do/ hit any issues, please give word! I've briefly checked how Syncthing handles the symlink-based file deduplication, and it seemed to work just fine.
The only precaution I can think of is that copyparty's .hist folder should probably not be synced between devices. So if you intend to share an entire copyparty volume, or a folder which contains a copyparty volume, then you could use the `--hist` global-option or `hist` volflag to put it somewhere else.
As for high CPU usage, this would arise from copyparty deciding to reindex a file when it detects that the file has been modified. This shouldn't be a concern unless you point it at a folder which has continuously modifying files, such as a file that is currently being downloaded or otherwise slowly written to.
A good thing about Nextcloud is that by learning one tool, you get a full suite of collaboration apps: sync, file sharing, calendar, notes, collectives, office (via Collabora or OnlyOffice), and more. These features are pretty good, plus you get things like photo management and Talk, which are decent.
Sure, some people might argue that there are specialized tools for each of these functions. And that’s true. But the tradeoff is that you'd need to manage a lot more with individual services. With Nextcloud, you get a unified platform that might be good enough to run a company, even if it’s not very fast and some features might have bugs.
The AIO has addressed issues like update management and reliability; it has been very good in my experience. You get a fully tested, ready-to-go package from Nextcloud.
That said, I wonder: if the platform were rewritten in a more performance-efficient language than PHP, with a simplified codebase and trimmed-down features, would it run faster? The UI could also be more polished; the Synology DSM web interface, for example, looks really nice.
Rewriting in a lower-level language won't do much for NC, because it's mostly slow due to inefficient I/O organization: mountains of XHRs, inefficient fetching, DB querying, etc. None of that is implicitly fixed by a rewrite in any language, and all of it can be fixed in the PHP stack as well.
I think one of the reasons that helped OC/NC get off the ground was precisely that the sysadmins running it can often do a little PHP, which is just enough to get it customized for the client. Raising the bar for contribution by using lower level languages might not be a desirable change of direction in that case.
The thing I don't get is that based on the article the front-end is as bloated as the back-end.
That said, there's an Owncloud version called Infinite Scale which is written in Go.[1] Honestly I tried to go that route, but its requirements are pretty opinionated (Ubuntu LTS 22.04 or 24.04 and lots of Docker containers littering your system), though it looks like it's getting a lot of development.
Most of the OCIS team left to start OpenCloud, which is an OCIS fork. And its hardware requirements are pretty tame. It's a very nice replacement for Nextcloud if you don't need the groupware features/apps and are only looking for file sharing.
> it's requirements are pretty opinionated (Ubuntu LTS 22.04 or 24.04
Hm?
> This guide describes an installation of Infinite Scale based on Ubuntu LTS and docker compose. The underlying hardware of the server can be anything as listed below as long it meets the OS requirements defined in the Software Stack
If the developers can only get it to run in a pile of ubuntu containers, then it's extremely likely they haven't thought through basic things you need to operate a service, like supply chain security, deterministic builds, unit testing, upgrades, etc.
I see 6 officially supported linux distributions. I don't know where anyone got the idea that they can only get it to run on ubuntu. It's containerized. Who cares what the host os is, beyond "it can run containers"?
Nextcloud is something I have a somewhat love-hate relationship with. On one hand, I've used Nextcloud for ~7 years to backup and provide access to all of my family's photos. We can look at our family pictures and memories from any computer, and it's all private and runs mostly without any headaches.
On the other hand, Nextcloud is so far from being something like Google Docs, and I would never recommend it as a general replacement to someone who can't tolerate "jank", for lack of a better word. There are so many small papercuts you'll notice when using it as a power user. Right off the top of my head, uploading large files is finicky, and no amount of web server config tinkering gets it to always work; thumbnail loading is always spotty, and it's significantly slower than it needs to be (I'm talking orders of magnitude).
With all that said, I'm so grateful for Nextcloud since I don't have a replacement, and I would prefer not having all our baby and vacation pictures feeding some big corporation's AI. We really ought to have a safe, private place to store files in 2025 that the average person can wrap their head around. I only wish my family took better advantage of it, since I'm essentially providing them with unlimited storage.
I once discovered and reported a vulnerability in Nextcloud's web client that was due to them including an outdated version of a JavaScript-based PDF viewer. I always wondered why they couldn't just use the browser's PDF viewer. I made $100, which was a large amount to me as a 16 year old at the time.
I recently needed to show a PDF file inside a div in my app. All I wanted was to show it and make it scrollable. The file comes from a fetch() with authorization headers.
Nextcloud is bloated and slow, but it works and is reliable. I've been running a small instance in a business setting with around 8 daily users for many years. It is rock solid and requires zero maintenance.
But people rarely use the web apps. Instead, it's used more like a NAS with the desktop sync client being the primary interface. Nobody likes the web apps because they're slow. The Windows desktop sync client has a really annoying update process, but other than that is excellent.
I could replace it with a traditional NAS, but the main feature keeping me there is an IMAP authentication plugin. This allows users to sign in with their business email/password. It works so well and makes it so much easier to manage user accounts, revoke access, do password resets, etc.
Web apps don't have to be slow. I prefer web apps over system apps, as I don't have to install extra programs into my system and I have more control over those apps:
- a service decides it's a good idea to load some tracking stuff from 3rd-party? I just uMatrix block it;
- a page has an unwanted element? I just uBlock block it;
- a page could have a better look? I just userstyle style it;
- a page is missing something that could be added on client side? I just userscript script it
I know people here don't like it when one answers complaints about OSS projects with "go fix it then", but seeing the comment section here, it's hard not to at least think it.
About 50-100 people saying that they know exactly why NC is slow, bloated, and bad, but failing to a) point out a valid alternative, or b) act and do something about it.
I'm going to say that I love NC despite its slow performance. I own my storage, I can do Google Drive stuff without selling my soul (aka data) to the devil and I can go patch up stuff, since the code is open.
Is downloading lots of JS and waiting a few seconds bad? Yes. But did I pay for any of it? No. Am I the product as a result of choosing NC? Also no.
Having a basic file system with a dropbox alternative and being able to go and "shop" for extensions and extra tools feels so COOL and fun. Do I want to own my password manager? Bam, covered. Do I want to centralise calendar, mail and kanban into one? Bam, covered.
The codebase is AGPL, it installs easily, and you don't need to do surgery on every new update.
I've been running it without hiccups for over 6 years now.
Would I love it to be as fast and smooth as a platform developed by an evil tech behemoth which wants to swallow everyone's data? Of course. Am I happy NC exists? Yes!
And if you got this far, dear reader, give it a try. It's free and you can delete it in a second, but if you find something to improve and know how, go help; it helps us all :)
I gave up on using Nextcloud because every time it updated it accumulated more and more errors, and there was no way I was going to use software that I had to troubleshoot after every single update. Also, the defaults for pictures are apparently quite stupid: instead of making and showing tiny thumbnails, the thumbnails are unnecessarily large, and loading them for a folder of pictures takes forever. You can apparently tell it to make smaller thumbnails, but again, why am I having to fix everything myself? These should be sane defaults. Unfortunately, I just can't trust Nextcloud.
I gave up updating Nextcloud. It works for what I use it for and I don't feel like I'm missing anything. I'd rather not spend 4+ hours updating and fixing confusing issues without any tangible benefit.
I was expecting the author to open the profiler tab instead of just staring at the network tab. But it's yet another "heavy JavaScript bad" rant.
You really consider 1 MB of JS too heavy for an application with hundreds of features? How exactly are developers supposed to fit an entire web app into that? Why does this minimalism suddenly apply only to JavaScript? Should every desktop app be under 1 MB too? Is Windows Calculator's 30 MB binary also an offense to your principles?
What year is it, 2002? Even low-band 5G gives you 30–250 Mbps down. At those speeds, 20 MB of JS downloads in well under a second. So what's the math behind the 5–10 second figure? What about the cache? Is it turned off for you, so that you redownload the whole of Nextcloud from scratch every time?
Nextcloud is undeniably slow, but the real reasons show up in the profiler, not the network tab.
I've spent the past year using a network called O2 here in the UK. Their 5G SA coverage depends a lot on low band (n28/700MHz) and had issues in places where you'd expect it to work well (London, for example). I've experienced sub 1Mbps speeds and even data failing outdoors more than once. I have a good phone, I'm in a city, and using what until a recent merger was the largest network in the country.
I know it's not like this everywhere or all the time, but for those working on sites, apps, etc, please don't assume good speeds are available.
That's really quite odd. There isn't even 5G in my area, yet I get a stable 100 Mbps download speed on 4G LTE, outdoors and indoors, any time of day. Is 5G a downgrade? Is it considered normal service in the UK when the latest generation of cellular network provides connection speeds comparable to 3G, launched in 2001? How is this even acceptable in 2025? Would anyone in the UK start complaining if it were downgraded to 100Kbps? Or should we design apps for that case?
Such an underrated comment. You really can have 500MB of dependencies for your app because you're on MacOS, and it's still going to be fast, because memory use has nothing to do with performance.
Pretty much the same with JavaScript: modern engines are amazingly fast, or at least they really don't depend on the amount of raw JavaScript fed to them.
First and foremost, I agree with the meat of your comment.
But I wanted to point out, regarding your comment, that it DOES very much matter that apps meant to be transmitted over a remote connection are, indeed, as slim as possible.
You must be thinking about 5G in a city with good infrastructure, right?
I'm right now having a coffee on a road trip, with a 4G connection, and just loading this HN page took like 8~10 seconds. Imagine a bulky and bloated web app if I needed to quickly check a copy of my ID stored in NextCloud.
It's time we normalize testing network-bounded apps through low-bandwidth, high-latency network simulators.
> You really consider 1 MB of JS too heavy for an application with hundreds of features? How exactly are developers supposed to fit an entire web app into that? Why does this minimalism suddenly apply only to JavaScript? Should every desktop app be under 1 MB too? Is Windows Calculator 30 MB binary also an offense to your principles?
Yes, I don't know, because it runs in the browser, yes, yes.
Fantastic recommendation, it's like exactly what the doctor ordered given the premise of this thread. Does Bewcloud play nice with DAV or other open protocols or (dare I hope) nextcloud apps? I wouldn't mind using nextcloud apps paired with a better web front end.
If every aspect of Nextcloud was as clean, quick and light-weight as PhoneTrack this world would be a different place. The interface is a little confusing but once I got the hang of it it's been awesome and there's just nothing like it. I use an old phone in my murse with PhoneTrack on it and that way if I leave it on the bus (again) I actually have a chance of finding it.
No $35/month subscription, and I'm not sharing my location data with some data aggregator (aside from Android of course).
Nextcloud is an old product that inherits from ownCloud, which has been developed in PHP since 2010.
It has extensibility at its core through the thousands of extensions available.
I'm not saying that sourcehut is the same in any way, but I want the difference between GitHub and sourcehut to be like the difference between Nextcloud and an alternative.
> Nextcloud is an old product that inherits from ownCloud, which has been developed in PHP since 2010.
Tough situation to be in, I don't envy it.
> It has extensibility at its core through the thousands of extensions available.
Sure, but I think for some limited use cases, something better could be imagined.
The article mentions Vikunja as an alternative to Nextcloud Tasks, and I can give it a solid recommendation as well. I wanted a self-hosted task management app with some lightweight features for organizing tasks into projects, ideally with a kanban view, but without a full-blown PM feature set. I tried just about every task management app out there, and Vikunja was the only one that ticked all the boxes for me.
Some specific things I like about it:
* Basic todo app features are compatible with CalDAV clients like tasks.org
* Several ways of organizing tasks: subtasks, tags, projects, subprojects, and custom filters
* list, table, and kanban views
* A reasonably clean and performant frontend that isn't cluttered with stuff I don't need (i.e., not Jira)
And some other things that weren't hard requirements, but have been useful for me:
* A REST API, which I use to export task summaries and comments to markdown files (to make them searchable along with my other plaintext notes)
* A 3rd party CLI tool: https://gitlab.com/ce72/vja
* OIDC integration (currently using it with Keycloak)
* Easily deployable with docker compose
I know this post is more about nextcloud... but can i just say, this one feature from Vikunja ("...export task summaries and comments...") sounds great!!! One of the features i seek out when i look for task/project management software is the ability to easily and comprehensively provide nice exports, and that said exports *include comments*!!
Either apps lack such an export, or it's very minimal, or it includes lots of things, except comments... Sometimes an app might have a REST API, and I'd need to build something non-trivial to start pulling out the comments, etc. I feel like it's silly in this day and age.
My desire for comments to be included in exports is for local search... but also because i use comments for sort of thinking aloud, sort of like inline task journaling... and when comments are lacking, it sucks!
In fact, when i hear folks suggest to simply stop using such apps and merely embrace the text-file todo approach, they cite their having full access to comments as a feature... and i can't dispute their claim! But barely any non-text-based apps highlight the inclusion of comments. So, i have to ask: is it just me (who doesn't use a text-based todo workflow), plus all the folks who *do use* a text-based todo flow, who actually care about access to comments!?!
Yeah, I hear you. I almost started using a purely text-based todo workflow for those same reasons, but it was hard to give up some web UI features, like easily switching between list and kanban-style views.
My use case looks roughly like this: for a given project (as in hobby/DIY/learning, not professional work), I typically have general planning/reference notes in a markdown file synced across my devices via Nextcloud. Separately, for some individual tasks I might have comments about the initial problem, stuff I researched along the way, and the solution I ended up with. Or just thinking out loud, like you mentioned. Sometimes I'll take the effort to edit that info into my main project doc, but for the way I think, it's sometimes more convenient for me to have that kind of info associated with a specific task. When referring to it later, though, it's really handy to be able to use ripgrep (or other search tools) to search everything at once.
To clarify, though, Vikunja doesn't have a built-in feature that exports all task info including comments, just a REST API. It did take a little work to pull all that info together using multiple endpoints (in this case: projects, tasks, views, comments, labels). Here's a small tool I made for that, although it's fairly specific to my own workflow: https://github.com/JWCook/scripts/tree/main/vikunja-export
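For anyone curious what that stitching looks like, here's a rough TypeScript sketch of pulling tasks and their comments into one markdown string. The endpoint paths, field names, and VIKUNJA_URL / VIKUNJA_TOKEN variables are assumptions for illustration and may not match your Vikunja version; treat it as a starting point, not a reference.

// Rough sketch of stitching Vikunja data together over its REST API and emitting
// markdown. Endpoint paths, field names, and the VIKUNJA_URL / VIKUNJA_TOKEN
// variables are assumptions for illustration; check your instance's API docs.
const BASE = process.env.VIKUNJA_URL ?? "https://vikunja.example.com/api/v1";
const TOKEN = process.env.VIKUNJA_TOKEN ?? "";

async function get<T>(path: string): Promise<T> {
  const res = await fetch(`${BASE}${path}`, {
    headers: { Authorization: `Bearer ${TOKEN}` },
  });
  if (!res.ok) throw new Error(`GET ${path} failed: ${res.status}`);
  return (await res.json()) as T;
}

interface Task { id: number; title: string; done: boolean; }
interface TaskComment { id: number; comment: string; }

async function exportProject(projectId: number): Promise<string> {
  // One call for the task list, then one call per task for its comments.
  const tasks = await get<Task[]>(`/projects/${projectId}/tasks`);
  const lines: string[] = [];
  for (const task of tasks) {
    lines.push(`## ${task.done ? "[x]" : "[ ]"} ${task.title}`);
    const comments = await get<TaskComment[]>(`/tasks/${task.id}/comments`);
    for (const c of comments) lines.push(`> ${c.comment}`);
  }
  return lines.join("\n");
}

exportProject(1).then(console.log).catch(console.error);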
> Yeah, I hear you. I almost started using a purely text-based todo workflow for those same reasons, but it was hard to give up some web UI features, like easily switching between list and kanban-style views.
Yeah, i like me some kanban! Which is one reason i've resisted the text-based workflow...so far. ;-)
> ...Vikunja doesn't have a built-in feature that exports all task info including comments, just a REST API. It did take a little work...
Aww, man, then i guess i misread. I thought it was sort of easier than that. Well, i guess that's not all bad. It's possible, but simply requires a little elbow grease. I used to use Trello, which does include comments in their JSON export, but i had my own little python app to copy out and filter only the key things i wanted - like comments - and reformat them into other text formats like CSV, etc. But Trello is not open source, so it's not an option for me anymore. Well, thanks for sharing (and for making!) your vikunja export tool! :-)
nextcloud just feels abandoned, even if it isn't of course.
maybe paying customers are getting a different/updated/tuned version of it. maybe not. but the only thing that keeps me using it is that there aren't any real selfhosted alternatives.
why is it slow? if you just blink or take a breath, it touches the database. years ago i tried to optimise it a bit and noticed that there is a horrible amount of DB transactions without any apparent reason.
Because it feels worse and more broken as time goes on. Just like any other abandoned web app, except it's being made worse and slower as an active, deliberate, ongoing choice
I know that this is supposed to be targeted at NextCloud in particular, but I think it's a good standalone "you should care about how much JavaScript you ship" post as well.
What frustrates me about modern web development is that everyone is focused on making it work much more than they are on making sure it works fast. Then when you push back, the response is always something like "we need to not spend time over-optimizing."
I just checked Google Calendar: it's under a 3 MB download for JS (around 8 MB uncompressed). It's also a lot more responsive than the Nextcloud web UI. Even then, it's not necessarily the size; I think that's mostly a symptom of the larger issues likely at play.
There are a lot of requests made in general; these can be good, bad or indifferent depending on the actual connection channels and configuration of the server itself. The pieces are too disconnected from each other... the Nextcloud org has 350 repositories on GitHub. I'm frankly surprised it's more than 30 or so... it's literally 10x what even a generous expectation would be... I'd rather deal with a crazy mono-repo at that point.
OP really focused on payload size, which is why I was curious.
> On a clean page load [of nextcloud], you will be downloading about 15-20 MB of Javascript, which does compress down to about 4-5 MB in transit, but that is still a huge amount of Javascript. For context, I consider 1 MB of Javascript to be on the heavy side for a web page/app.
> …Yes, that Javascript will be cached in the browser for a while, but you will still be executing all of that on each visit to your Nextcloud instance, and that will take a long time due to the sheer amount of code your browser now has to execute on the page.
While Nextcloud may have a ~60% bigger JS payload, it sounds like that could be a bit of a misdirection/misdiagnosis, and it's really about the performance characteristics of the JS rather than strictly payload size or number of lines of code executed.
On a Google Doc load chosen by whatever my browser location bar autocompleted, I get around twenty JS files, the two biggest are 1MB and 2MB compressed.
Yeah, without a deeper understanding it's really hard to say... just the surface level look, I'm not really at all interested in diving deeper myself. I'd like to like it... I tried out a test install a couple times but just felt it was clunky. Having a surface glance at the org and a couple of the projects, it doesn't surprise me that it felt that way.
Gmail should be server-side, with as much JS as you want to use. Unless they moved away from the philosophy they started with GWT (Google Web Toolkit) for Gmail, and perhaps even Inbox (RIP).
I've used nextcloud for close to I think 8 years now as a replacement for google drive.
However my need for something like google drive has reduced massively, and nextcloud continues to be a massive maintenance pain due to its frustratingly fast release cadence.
I don't want to have to log into my admin account and baby it through a new release and migration every four months! Why aren't there any LTS branches? The amount of admin work that nextcloud requires only makes sense for when you legitimately have a whole group of people with accounts that are all utilizing it regularly.
This is honestly the kick in the pants I need to find a solution that actually fits my current use-case. (I just need to sync my fuckin keepass vault to my phone, man.) Syncthing looks promising with significantly less hassle...
I've been running NC on my home server and basically update it maybe once a year or so? Probably even less, so it's definitely not a must to update every time. Plus via snap it's pretty simple.
The major shortcoming of NextCloud, in my opinion, is that it's not able to do sync over LAN. Imagine wanting to synchronize 1 TB+ of data and not being able to do so over a 1 Gbps+ local connection, when another local device has all the necessary data. There is some workaround involving "split DNS", but I haven't gotten around to it. Other than that, I thought NC was absolutely fantastic.
If not, and you don't want to set up dnsmasq just for Nextcloud over LAN, then DNS-based adblock software like AdGuard Home would be a good option (as in, it would give you more benefit for the amount of time/effort required). With AdGuard, you just add a line under Filters -> DNS rewrites. PiHole can do this as well (it's been awhile since I've used it, but I believe there's a Local DNS settings page).
Otherwise, if you only have a small handful of devices, you could add an entry to /etc/hosts (or equivalent) on each device. Not pretty, but it works.
You could also upload directly to the filesystem and then run occ files:scan, or if the storage is mounted as external it just works.
Another method is to set your machine's /etc/hosts (or equivalent) to point at the local IP of the instance (if the device is only on the LAN you can keep it; otherwise remove it after the large transfer).
Your router shouldn't send traffic addressed to itself out over your ISP's connection anyway; it just loops it back internally. So running over LAN only helps if your switch is faster than your router.
I had a similar issue with a public game server that required connecting through the WAN even if clients were local on the LAN. I considered split DNS (resolving the name differently depending on the source) but it was complicated for my setup. Instead I found a one-line solution on my OpenBSD router:
pass in on $lan_if inet proto tcp to (egress) port 12345 rdr-to 192.168.1.10
It basically says "pass packets from the LAN interface towards the WAN (egress) on the game port and redirect the traffic to the local game server". The local client doesn't know anything happened, it just worked.
> The major shortcoming of NextCloud, in my opinion, is that it's not able to do sync over LAN.
That’s an interesting way to describe a lack of configuration on your part.
Imagine me saying: "The major shortcoming of Google Drive, in my opinion, is that it's not able to sync files from my phone. There is some workaround involving an app called 'Google Drive' that I have to install on my phone, but I haven't gotten around to it. Other than that, Google Drive is absolutely fantastic."
Like most of us I think, I really, really wanted to like nextcloud. I put it on an admittedly somewhat slow dual Xeon server, gave it all 32 threads and many, many gigabytes of ram.
Even on a modern browser on a brand new leading-edge computer, it was completely unusably slow.
Horrendous optimization aside, NC is also chasing the current fad of stripping out useful features and replacing them with oceans of padding. The stock photos app doesn't even have the ability to sort by date! That's been table stakes for a photo viewer since the 20th goddamn century.
When Windows Explorer offers a more performant and featureful experience, you've fucked up real bad.
I would feel incredibly bad and ashamed to publish software in the condition that NextCloud is in. It is IMO completely unacceptable.
One thing that could help with this is to use a CDN for these static assets, while still hosting Nextcloud on your own.
We had a similar situation with some notebooks running in production, which were quite slow to load because they were pulling in a lot of JS files / WASM just to show the UI. This was not part of our core logic, and using a CDN to load these, while still relying on the private prod instance for business logic, helped significantly.
I have a feeling this would be helpful here as well.
(tangential) Reading the comments, several mentioned "copyparty"; I'd never heard of it before, haven't used it, haven't reviewed it, but their "feature showcase" video makes me want to give it a shot https://www.youtube.com/watch?v=15_-hgsX2V0 :)
The original Doom 2 rendered 64,000 pixels (320x200). 4K UHD monitors now show 8.3 million pixels.
YMMV.
Of course, Doom 2 is full of Carmack shenanigans to squeeze every possible ounce of performance out of every byte, written in hand optimized C and assembly. Nextcloud is delivered in UTF-8 text, in a high level scripting language, entirely unoptimized with lots of low hanging fruit for improvement.
Sure, but I doubt there is more image data in the delivered Nextcloud payload than in Doom 2; games famously need textures, whereas a website usually needs mostly vector and CSS-based graphics.
Actually, Carmack did squeeze every possible ounce of performance out of DOOM; however, that does not always mean he was optimizing for size.
If you want to see a project optimized for size, you might check out ".kkrieger" from ".theprodukkt", which accomplishes a 3D shooter in 97,280 bytes.
You know how many characters 20 MB of UTF-8 text is, right? If we are talking about JavaScript, it's probably mostly ASCII, so quite close to 20 million characters. If we take a wild estimate of 80 characters per line, that would be 250,000 lines of code.
I personally think 20 MB is outrageous for any website, webapp or similar, especially if you want to offer a product to a wide range of devices on a lot of different networks. Reloading a huge chunk of that on every page load feels like bad design.
Developers usually take for granted the modern convenience of a good network connection; imagine using this on a slow connection, it would be horrid.
Even in western "first world" countries there are still quite a few people connecting with outdated hardware or slow connections; we often forget them.
If you are making any sort of webapp, you ideally have to think about every byte you send to your customer.
I mean, if you’re going to include carmack’s relentless optimizer mindset in the description, I feel like your description of the NextCloud situation should probably end with “and written by people who think shipping 15MB of JavaScript per page is reasonable.”
Sure, but what people leave out is that it's mostly C and assembly. That just isn't realistic anymore if you want a better developer experience that leads to faster feature rollout, better security, and better stability.
This is like when people reminisce about the performance of windows 95 and its apps while forgetting about getting a blue screen of death every other hour.
Exactly. JavaScript is a higher-level language with a lot of required functionality built in. Compared to C, you would need to write way less actual code in JavaScript to achieve the same result (for most tasks), for example graphics or maths routines. Therefore it's crazy that it's that big.
I think it's a double-edged sword of Open-Source/FLOSS... some problems are hard and take a lot of effort. One example I consistently point to is core component libraries... React has MUI and Mantine, and I'm not familiar with any open-source alternatives that come close. As a developer, if there was one for Leptos/Yew/Dioxus, I'd have likely jumped ship to Rust+WASM. They're all fast enough with different advantages and disadvantages.
All said... I actually like TypeScript and React fine for teams of developers... I think NextCloud likely has coordination issues that go beyond the language or even libraries used.
Windows 2000 was quite snappy on my Pentium 150, and pretty rock solid. It was when I stopped being good at fixing computers because it just worked, so I didn't get much practice.
I did get a BSOD from a few software packages in Win2k, but it was fewer and much farther between than Win9x/me... I didn't bump to XP until after SP3 came out... I also liked Win7 a lot. I haven't liked much of Windows since 7 though.
1. Indiscriminate use of packages when a few lines of code would do.
2. Loading everything on every page.
3. Poor bundling strategy, if any.
4. No minification step.
5. Polyfilling for long dead, obsolete browsers
6. Having multiple libraries that accomplish the same thing
7. Using tools and then not doing any optimization at all (like using React and not enabling React Runtime)
Arguably things like an email client and file storage are apps and not pages so a SPA isn't unreasonable. The thing is, you don't end up with this much code by being diligent and following best practices. You get here by being lazy or uninformed.
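As a concrete illustration of points 2 and 3 in the list above, here's a minimal sketch of lazy-loading a heavy dependency with a dynamic import so it stays out of the initial bundle. "heavy-markdown-editor" and its exports are hypothetical names, and the chunk-splitting behaviour depends on your bundler.

// Minimal sketch: defer a heavy dependency until the user actually needs it. Most
// bundlers (webpack, Vite, Rollup) turn a dynamic import() into a separate chunk
// that is only fetched when this code path runs. "heavy-markdown-editor" is hypothetical.
async function openEditor(target: HTMLElement, initialText: string) {
  // Not fetched or parsed as part of the initial page load.
  const { createEditor } = await import("heavy-markdown-editor");
  createEditor(target, { value: initialText });
}

document.getElementById("edit-note")?.addEventListener("click", () => {
  const target = document.getElementById("editor-root");
  if (target) openEditor(target, "# notes");
});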
What is React runtime? I looked it up and the closest thing I came across is the newly announced React Compiler. I have a vested interest in this because I'm currently working on a micro-SaaS that uses React heavily and is still suffering bundle bloat even after performing all the usual optimizations.
When you compile JSX to JavaScript, it produces a series of function calls representing the structure of the JSX. In a recent major version, React added a new set of functions which are more efficient at both runtime and during transport, and don't require an explicit import (which helps cut down on unnecessary dependencies).
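To make that concrete, this is roughly what the two transforms emit for a tiny element. The output is simplified, but the import from react/jsx-runtime is the actual mechanism behind the newer "automatic" runtime.

// Roughly what <div className="greeting">Hello</div> compiles to (simplified).

// Classic transform: every file needs React in scope, and each element becomes a
// React.createElement(type, props, ...children) call.
import React from "react";
const classicOutput = React.createElement("div", { className: "greeting" }, "Hello");

// Automatic transform (React 17+): the compiler injects this import itself and emits
// jsx()/jsxs() calls with children folded into the props object.
import { jsx as _jsx } from "react/jsx-runtime";
const automaticOutput = _jsx("div", { className: "greeting", children: "Hello" });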
React compiler is awesome for minimizing unnecessary renders but doesn't help with bundle size; might even make it worse. But in my experience it really helps with runtime performance if your code was not already highly optimized.
I think some of the issues here are that, first, Nextcloud tries to be compatible with any managed / shared hosting.
They also treat every "module"/"app", whatever you call it, as a completely distinct SPA without providing much of an SDK/framework.
Which means each app adds its own deps, manages its own build, etc...
Also, don't forget that an app can even be a part of a screen, not the whole thing.
Nextcloud is a mess. It tries to do everything. The only reason I keep it in production is because it's a hassle to transition my files and DAVx info elsewhere.
The HTTP upload is miserable: it's slow, it fails with no message, it fails to start, it hangs. When uploading duplicate files the popup is confusing. The UI is slow, the addons break on every update. The gallery is very bad; now we use Immich.
I find the Nextcloud client really buggy on the Mac, especially the VFS integration. The file syncing is also really slow. I switched back to P2P file syncing via Syncthing and Resilio Sync out of frustration.
Many have brought up using WebSockets instead of REST API calls. It looks like they're already working in that direction; scroll down to "Developer tools and APIs": https://nextcloud.com/blog/nextcloud-hub25-autumn/
While I tend to agree... I've been on enough relatively modern web apps that can hit 8 MB pretty easily, usually because bundling and tree shaking are broken. You can save a lot by being judicious.
IMO, the worst offenders are when you bring in charting/graphing libraries into things when either you don't really need them, or otherwise not lazy loading where/when needed. If you're using something like React, then a little reading on SVG can do wonders without bloating an application. I've ripped multi-mb graphing libraries out to replace them with a couple components dynamically generating SVG for simple charting or overlays.
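For a sense of scale, a hand-rolled sparkline in React is only a couple dozen lines of SVG. This is a sketch of that approach; the component name and props are made up, and you'd want real error handling before using it seriously.

// A hand-rolled sparkline as a React component: a sketch of the "few lines of SVG
// instead of a multi-MB charting dependency" idea. Component name and props are made up.
import React from "react";

export function Sparkline({ data, width = 120, height = 30 }: {
  data: number[]; width?: number; height?: number;
}) {
  if (data.length === 0) return null;
  const max = Math.max(...data);
  const min = Math.min(...data);
  const range = max - min || 1;
  // Map each value to an x,y pair inside the viewBox.
  const points = data
    .map((v, i) => {
      const x = (i / Math.max(data.length - 1, 1)) * width;
      const y = height - ((v - min) / range) * height;
      return `${x.toFixed(1)},${y.toFixed(1)}`;
    })
    .join(" ");
  return (
    <svg viewBox={`0 0 ${width} ${height}`} width={width} height={height}>
      <polyline points={points} fill="none" stroke="currentColor" strokeWidth={1.5} />
    </svg>
  );
}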
Nextcloud server is written in PHP. Of course it is slow. It's also designed to be used as an office productivity suite, meaning a lot of features you may not actually use are enabled by default, and those services come with their own cron jobs and so on.
At the risk of pointing out the obvious: PHP is limited to single-threaded processes and has garbage collection. It's certainly not the fastest language one could use for handling multiple concurrent jobs.
On the other hand, in 99.99% of web applications you do not need self-baked concurrency. Instead, use a queue system which handles this. I've used this with 20 million background jobs per day without hassle; it scales very well horizontally and vertically.
I've never used Nextcloud, but I always imagined that the point is you can run the services but then plug in any calendar app etc. You don't have to be running Nextcloud's calendar, I thought. Did I misunderstand how it works?
I would assume that the people for whom a slow web based calendar is a problem (among other slow things on the web interface) are people who want to be using it if it performed well.
They wouldn't just make a bad slow web interface on purpose to enlighten people as to how bad web interfaces are, as a complicated way of pushing them toward integrated apps.
In my case, I want file/photo syncing, calendar syncing, and contact syncing.
Nextcloud provides all 3 in a package that pretty much just works, in my experience (despite being kinda slow).
The Notes app is a pretty nice wrapper around a specific folder full of markdown files, I mostly use it on my phone, and on my desktop I just use my favorite editor to poke at the .md files directly.
Oh, and when a friend group wanted a better way to figure out which day to get together, I just installed the Polls app with a few clicks and we use that now.
I am a bit disappointed in the performance, but I've been running this setup for years and it "just works" for me. I understand how it works, I know how to back it up (and, more importantly restore from that backup!)
If there's another open-source, self-hosted project that has WebDAV, CalDAV, and CardDAV all in one package, then I might consider switching, but for now Nextcloud is "good enough" for me.
I went from cloud to local SMB shares to Nextcloud to Seafile. Really happy with the latter. It works, no bloat, versioning and some file sharing. The pro version is free for 3 or fewer users. I use the CLI client to mount the libraries into folders and share that with SMB + subst X: into the root directory on laptops for family. Borgbackup of that offsite for backup.
I've read good things about Seafile and have considered setting it up on my Homelab... though when I looked at the documentation, it too seemed quite large and I worried it wouldn't be the lightweight solution I'm looking for.
It's not selective sync, but you can get something similar with Ignore Files [1] in Syncthing. This functionality can also be configured via the web GUI and within apps such as MobiusSync [2].
I think you could replace Nextcloud's syncing and file access use cases with Syncthing and Copyparty respectively. IMO the biggest downside is that Copyparty's UX is... somewhat obtuse. It's super fast and functional, though.
Nextcloud, and before it Owncloud, have been "in production" in my household for nearly a decade at this point. There have been some botched updates and sync problems over the years, but it's been by far the most reliable app I've hosted.
In terms of privacy & security, like everything it comes down to risk model and the trade-offs you make to exist in the modern world. Nextcloud is for sharing files, if nothing short of perfect E2EE is tolerable it's probably not the solution for you, not to mention the other 99.999% of services out there.
I think most of the problems people report come down to really bad defaults that let it run like shit on very low-spec boxes that shouldn't be supported (ie raspi gen 1/2 back in the day). Installing redis and configuring php-fpm correctly fixes like 90% of the problems, other than the bloated Javascript as mentioned in the op.
End of the day, it's fine. Not perfect, not ideal, but fine.
for me it's a family photo backup with calendars (private and shared ones) running in a VM on the net.
its webui is rarely used by anyone (except me), everyone is using their phones (calendars, files).
does it work? yes. does anyone other than me care about the bugs? no. but no one really _uses_ it as if it was deployed for a small office of 10-20-30 people. on the other hand, there are companies paying for it.
Nextcloud is not perfect, but it's still one of the major projects that has not shifted to a business-oriented licence and where all components are available and not paywalled behind an enterprise edition.
So yes, not perfect, bloated JS, but it works and is maintained.
So I'd rather thank all the developers involved in Nextcloud than whine about bloated JS.
This post completely misses the point. Linear downloads ~6.1 MB of JS over the network, decompressed to ~31 MB, and still feels snappy.
Applications like linear and nextcloud aren't designed to be opened and closed constantly. You open them once and then work in that tab for the remainder of your session.
As others have pointed out in this thread, "feeling slow" is mostly due to the number of fetch requests and the backend serving those requests.
It felt unnecessarily complex for such a simple task as file synchronization. I prefer Unison. Unfortunately, it is a blast from the past written in OCaml and there is no Android app :-(
Just like any other modern app: first you make it work using frameworks. Then, as soon as the "Core" product is done - just a few more features - then we'll circle back around to ripping out those bloated frameworks for something more lithe. Shouldn't be more than two weeks, now. Most of the base stuff is done. Just another feature or two. I mean, a little longer, if we have some issues with those features, sure. But we'll get back around to a simpler UI right after! Just those features, their bugs and support, and then - well documentation. Just the minimum stuff. Enough to know what we did when we come back to it. But we'll whip up those docs and then it's right on to slimming down the frontend! Won't be long now...
As someone who has hosted a few Nextcloud instances for a few years: Nextcloud can be quick if you make it work. If you want to get a good feel for how quick it can be, rent a Hetzner storage box (1 TB for below 5 euros a month).
You sadly can't just install Nextcloud on your vanilla server and expect it to perform well.
Do you have any tips and tricks to share? I'm running a self-hosted instance on an old desktop PC in my basement for me and a couple family members. Performance is kinda meh, and I don't think it's due to resource constraints on the server itself. This is after following the performance recommendations in the admin console to tweak php.ini settings.
I don't think I will ever use something like that. I work in over 10 PCs everyday and my only synchronisation is a 16 GB USB stick. I keep all important work, apps and files there.
I would love to like Nextcloud, it's pretty great that it does exist. Just that makes it better than... well everything else I haven't found.
What frustrates me is that it looks like it works, but once in a while it breaks in a way that is pretty much irreparable (or at least not in a practical way).
I want to run an iOS/Android app that backs up images on my server. I tried the iOS app and when it works, it's cool. It's just that once in a while I get errors like "locked webdav" files and it never seems to recover, or sometimes it just stops synchronising and the only way to recover seems to be to restart the sync from zero. It will gladly upload 80GB of pictures "for nothing", discarding each one when it arrives on the server because it already exists (or so it seems, maybe it just overwrites everything).
The thing is that I want my family to use the app, so I can't access their phone for multiple hours every 2 weeks; it has to work reliably.
If it was just for backing up my photos... well I don't need Nextcloud for that.
Again, alternatives just don't seem to exist, where I can install an app on my parent's iOS and have it synchronise their photo gallery in the background. Except I guess iCloud, that is.
I stopped using Nextcloud when the iOS app lost data.
For some reason the app disconnected from my account in the background from time to time (annoying but didn't think it was critical). Once I pasted data on Nextcloud through the Files app integration, it didn't sync because it was disconnected and didn't say anything, and it lost the data.
Oof, sounds painful. It's hard to use anything when you can't trust its fundamentals.
I never had data outright vanish, but similar to the comment you replied to, it was just unreliable. I found Syncthing much more useful over the long haul. The last 3 times I've had to do anything with it were simply to manage having new machines replace old ones.
Syncthing sadly doesn't let you not download some folders or files, but I just moved those to other storage. It beats the Nextcloud headache.
Recently people built a super-lightweight alternative, named copyparty[0]. To me it looks like it does everything people tend to need, without all the bloat.
[0]: https://github.com/9001/copyparty
I think "people" deserves clarification: Almost the entire thing was written by a single person and with a _seriously_ impressive feature set. The launch video is well worth a quick watch: https://www.youtube.com/watch?v=15_-hgsX2V0&pp=ygUJY29weXBhc...
I don't say this to diminish anyone else's contribution or criticize the software, just to call out the absolutely herculean feat this one person accomplished.
Yeah, "people" there pretty much means one dude. It's mind-boggling how much that little program can do considering it had one dev.
Don't forget, "Lot of the code was written on a mobile phone using tmux and vim on a bus". That's crazy.
I have tried to run micro (https://micro-editor.github.io/) on my phone, but it's some other beast entirely if someone is running tmux and vim on their phone.
I have found that typing normally is really preferable on Android, and I usually didn't like having to press colons or Ctrl or anything. Micro is really just such a great thing overall; it fit so perfectly that when I had that device, I was coding more basic Python on my phone than I was on my PC.
Although back then I was running Alpine on UserLand, and I learnt a lot trying to make that Alpine VM of sorts work with Python, as it basically refused to. I think I learnt a lot which I might have forgotten now, but the solution was very hacky (maybe gcompat) and I liked it.
This is not an alternative as it only covers files. Mind what is in the article: "I like what Nextcloud offers with its feature set and how easily it replaces a bunch of services under one roof (files, calendar, contacts, notes, to-do lists, photos etc.), but ".
For us Nextcloud AIO is the best thing under the sun. It works reasonably well for our small company (about 10 ppl) and saves us from Microsoft. I'm very grateful to the developers.
Hopefully they are able to act upon such findings, or rewrite it in Go :-). Mmh, if Berlin (Germany) didn't waste so much money on ill-advised, ideology-driven and long-term state-destroying actions and "NGOs", they would have enough money to fund hundreds of such rewrites. Alas...
Why should Germany be wasting public money on a private company who keeps shoveling more and more restrictions on their open-source-washed "community" offering, and whose "enterprise" pricing comes in at twice* the price MS365 does for fewer features, worse integration, and with added costs for hosting, storage, and maintenance?
* or same, if excluding nextcloud talk, but then missing a chat feature
It makes a lot of sense for Germany to keep some independence from foreign proprietary cloud providers (Microsoft, Google); money very well invested, imo. It helps the local industry, and data stays under German sovereignty.
I find your "open-source-washed" remark misplaced and quite derogatory. Nextcloud is, imo, totally right to (try to) monetize. They have to; they must further improve the technical backbone to stay competitive with the big boys.
Could you expand on what restrictions they have placed on the community version?
At the very least their app store, which is pretty much required for OIDC, most 2FA methods, and some other features, stops working at 500 users. AFAIK you can still manually install addons, it's just the integration that's gone, though I'm not 100% sure. Same with their notification push service (which is apparently closed source?[0]), which wouldn't be as much of an issue if there were proper docs on how to stand up your own instance of that.
IIRC they also display a banner on the login screen to all users advertising the enterprise license, and start emailing enterprise ads to all admin users.
Their "fair use policy"[1] also includes some "and more" wording.
[0] https://github.com/nextcloud/notifications/issues/82
[1] https://nextcloud.com/fairusepolicy/
There is no way it’s going to be completely rewritten from scratch in Go, and none of whatever Germany is or isn’t doing affects that in any way shape or form.
Actually, it's already been done by the former Nextcloud fork/predecessor. ownCloud shared a big percentage of the Nextcloud codebase, but they decided to rewrite everything under the name OCIS (ownCloud Infinite Scale) a couple of years ago. Recently, ownCloud got acquired by Kiteworks, and it seemed like they got into a fight with most of the staff. So big parts of the team left to start "OpenCloud", which is a fork of OCIS and is now a great competitor to Nextcloud. It's much more stable and uses fewer resources, but it also does a lot less than Nextcloud (namely only file sharing so far; no apps, no groupware).
https://github.com/opencloud-eu
I think what you described is basically ownCloud Infinite Scale (ocis). I haven't tested it myself, but it's something I've been considering. I run normal ownCloud right now instead of Nextcloud, as it avoided a few hiccups that I had.
OCIS seems to have lost most of their team. They now work on a fork called OpenCloud. https://github.com/opencloud-eu
It makes perfect sense to me that nextcloud is a good fit for a small company.
My biggest gripe with having used it for far longer than I should have was always that it expected far too much maintenance (4 month release cadence) to make sense for individual use.
Doing that kind of regular upkeep on a tool meant for a whole team of people is a far more reasonable cost-benefit analysis. Especially since it only needs one technically savvy person working behind the scenes, and is very intuitive and familiar on its front-end. Making for great savings overall.
Hetzner's Storage Share product line offers a managed Nextcloud instance. I'm using them as I didn't want to deal with updating it myself.
The only downside is you can't use apps/plugins which require additional local tools (e.g. ocrmypdf), but others can be used just fine.
Calling remotely hosted services works (e.g. if you have Elasticsearch on a VPS and set up the Nextcloud full-text search app accordingly).
I found copyparty to be too busy on the UI/UX side of things. I've settled on dufs[0]; it's quick to deploy, fast to use, and cross-platform.
[0] https://github.com/sigoden/dufs
Do you have a systemd unit for it, run it with Docker, or simply run it manually as needed? I find its simplicity perfect!
I run it manually as needed. It's already packaged for both Alpine Linux and Homebrew which suits my ad-hoc needs wonderfully!
Copyparty looks amazing, wow
https://www.youtube.com/watch?v=15_-hgsX2V0
> everything people tend to need
> NOTE: full bidirectional sync, like what nextcloud and syncthing does, will never be supported! Only single-direction sync (server-to-client, or client-to-server) is possible with copyparty
Is sync not the primary use of nextcloud?
For your specific use case of photos, Immich is the front runner and a much better experience. Sadly for the general Dropbox replacement I haven't found anything either.
> Sadly for the general Dropbox replacement I haven't found anything either.
I had really good luck with Seafile[0]. It's not a full groupware solution, just primarily a really good file syncing/Dropbox solution.
Upsides are everything worked reliably for me, it was much faster, does chunk-level deduplication and some other things, has native apps for everything, is supported by rclone, has a fuse mount option, supports mounting as a "virtual drive" on Windows, supports publicly sharing files, shared "drives", end-to-end encryption, and practically everything else I'd want out of "file syncing solution".
The only thing I didn't like about it is that it stores all of your data as, essentially, opaque chunks on disk that are pieced together using the data in the database. This is how it achieves the performance, deduplication, and other things I _liked_. However it made me a little nervous that I would have a tough time extracting my data if anything went horribly wrong. I took backups. Nothing ever went horribly wrong over 4 or 5 years of running it. I only stopped because I shelved a lot of my self-hosting for a bit.
[0]: https://www.seafile.com/en/home/
I can confirm this. We have been using it for 10 years now in our research lab. No data loss so far. Performance is great. Integration with OnlyOffice works quite well (there were sync problems a few years ago - I think upgrading OnlyOffice solved this issue).
Yeah, went with that as well. It’s blazingly fast compared to NC.
Pretty sure that NextCloud uses Seafile behind the scenes unless I’m mistaken.
You are mistaken.
thanks for sharing. been looking for something like this for awhile
For a general file sharing / storage solution there is also OpenCloud: https://opencloud.eu/de
It's what I want to try next. Written in go, it looks promising.
Too many Cloud things! OwnCloud, NextCloud, OpenCloud. There have* to be better names available...
Look into syncthing for a dropbox replacement, have been using it for years, very satisfied.
Syncthing is under my "want to like" list but I gave up on it. I'm a one person show who just wants to sync a few dozen markdown files across a few laptops and a phone. Every time I'd run it I'd invariably end up with conflict files. It got to the point where I was spending more time merging diffs than writing. How it could do that with just one person running it I have no idea.
That should not happen. I use it a lot and have never had this issue; there may be something wrong with your setup.
A good idea is to have it on an always-on server and add your share as an encrypted one (like you set the password on all your apps but not on the server); this pretty much results in a dropbox-like experience since you have a central place to sync even when your other devices are not online
My Syncthing experience matches Oxodao's. Over years with >10k files / 100 gb, I've only ever had conflicts when I actually made conflicting simultaneous changes.
I use it on my phone (configured to only sync on WiFi), laptop (connected 99% of the time), and server (up 100% of the time).
The always-up server/laptop as a "master node" are probably key.
I had this when I had a windows system in the mix. Windows handles case differently in filenames than linux and macOS, and it caused conflicts.
Same. I don't know why so many people like syncthing.
I don't think there is a good alternative to open-source Syncthing in the way Syncthing just does syncing, no?
Let me know if you know of any alternatives which have helped you. I haven't tried Syncthing, but I have heard good things about it overall, so I feel like I like it already even if I haven't tried it, I guess.
If you just need a Dropbox replacement for file syncing, Nextcloud is fine if you use the native file system integrations and ignore the web and WebDAV interfaces.
I'd say Ente Photos is at least as good as, if not better than, Immich.
https://github.com/ente-io/ente
I would say the opposite. Ente has one huge advantage, and that is that it's E2EE, so it's a must if you are hosting someone else's photos. But if you are planning to run something on your server/NAS for yourself, then Immich has many advantages (that often relate to not being E2EE). For example... your files are still files on the disk, so there's less worry about something unrecoverably breaking. And you can add external locations. With Ente it is just about backing up your phone photos. Immich works pretty well as a camera photo organizer.
The Ente desktop app has a continuous export function that’ll just dump everything into plain file directories.
It makes a little more sense when you’re using their cloud version, because otherwise you’re storing the data twice.
Does it have a mobile app that backs up the photos while in the background and can essentially be "forgotten"? That's pretty much what I need for my family: their photos need to get to my server magically.
I'm a very happy Ente Photos user as well.
I replaced all my Dropbox uses with SyncThing (and love it). I run an instance on my server at all times and on every client.
There is also "memories for nextcloud" which basically matches immich in feature set (was ahead until last month), nextcloud+memories make a very strong replacement for gdrive or dropbox
Yeah I guess my issue is that if I can't trust the mobile app not to lose my photos (or stop syncing, or not sync everything), then I just can't use it at all. There is no point in having Nextcloud AND iCloud just because I don't trust Nextcloud :D.
Does its iOS/Android app automatically back up the photos in the background? When I looked into Immich (didn't try it), it sounded like it was more of a server thing. I need the automation so that my family can forget about it.
I use Syncthing as a Dropbox replacement, and I like it. I have a machine at home running it that is accessible over the net. Not the prettiest, but it works!
I love Immich too, but I have also run into a lot of issues with syncing large libraries. The iPhone app will just hang sometimes.
Does it recover though, or do you end up in situations where your setup is essentially broken?
Like if I backup photos from iOS, then remove a subset of those from iOS to make space on the phone (but obviously I want to keep them on the cloud), and later the mobile app gets out of sync, I don't want to end up in a situation where some photos are on iOS, some on the cloud, but none of the devices has everything, and I have no easy way to resync them.
It won't recover unless I do something... sometimes just quitting the iPhone app and then toggling backups off and on works, but not always. I had to completely delete and reinstall the app once to get it to work, and had to resync all 45,000 images/videos I had.
I have had the server itself fail in strange ways where I had to restart it. I had to do a full fresh install once when it got hopelessly confused and I was getting database errors saying records either existed when they shouldn't or didn't exist when they should.
I think I am a pretty skilled sysadmin for these types of things, having both designed and administered very large distributed systems for two decades now. Maybe I am doing things wrong, but I think there are just some gotchas still with the project.
Right, that's the kind of issues I am concerned about.
iCloud / Google Photos just don't have that, they really never lose a photo. It's very difficult for me to convince my family to move to something that may lose their data, when iCloud / Google Photos works and is really not that expensive.
It has gotten more stable as I have used it for a while. I think if you want to do it, just wait until it is stable and you have a good backup routine before relying on it.
I have found adding the following four lines to the immich proxy host in nginx proxy manager (advanced tab) solved my immich syncing issues:
client_max_body_size 50000M;
proxy_read_timeout 600s;
proxy_send_timeout 600s;
send_timeout 600s;
FWIW, my library is about 22000 items large. Hope this helps someone.
I too have found Syncthing + Filebrowser to be a sufficient substitute for Dropbox.
Have you looked into https://filebrowser.org/? While it's not drop-in replacement for Google Drive/Dropbox, it has been serving me well for similar quick usecase.
For photos, you can't beat Immich.
I’ve tried every scheme under the sun and Immich is the only thing I’ve ever seen that actually works for this use case
I switched to FolderSync for the upload from mobile. Works like a charm!
I know, it sucks that the official apps are buggy as hell, but the server side is real solid
This also happened to me with my nextcloud, thankfully I did not lose any photos. I transitioned to Immich for my photos and have not looked back.
I use syncthing, I've got a folder shared between my phone, laptop and media center, and it just syncs everything easily.
It works well for smaller folders but it slows down to a crawl with folders that contain thousands of files. If I add a file to an empty shared folder it will sync almost instantly but if I take a photo both sides become aware of the change rather quickly but then they just sit around for 5 minutes doing nothing before starting the transfer.
how many thousands? I have a folder with a total of 12760 files spread within several folders, but the largest I think is the one with 3827 files.
I've noticed the sync isn't instantaneous, but if I ping one device from the other, it starts immediately. I think Android has some kind of network related sleep somewhere, since the two nixos ones just sync immediately.
I do the same it's so convenient
SyncThing
The Nextcloud Android app is particularly bad if you use it to back up your camera's DCIM directory and then delete the photos on your phone. It overwrites the files on Nextcloud as new photos are taken. I get why this happens, but it is terrible.
it's bad for everything.
i have lots of txt files on my phone which are just not synced up to my server (the files on the server are 0 byte long).
i'm using txt files to take notes because the Notes app never worked for me (I get sync errors on any android phone while it works on iphone).
I don't doubt that large amounts of JavaScript can often cause issues, but even when cached, Nextcloud feels sluggish. When I look at just the network tab on a refresh of the calendar page, it makes 124 network calls, 31 of which aren't cached. It seems to be making a call per calendar, each of which takes over 30 ms, so that stacks up the more calendars you have (and you have a number by default, like contact birthdays).
The JavaScript performance trace shows over 50% of the work is in making the asynchronous calls to pull those calendars and other network calls one by one, and then in all the refresh updates it causes when putting them onto the page.
Supporting all these N calendar calls are individual pulls for calendar rooms, calendar resources and "principals" for the user. All separate individual network calls, some of which must be gating the later individual calendar calls.
It's not just that; it also makes a call for notifications, groups, user status and multiple heartbeats to complete the page, all before it tries to get the calendar details.
This is why I think it feels slow: it's pulling down the page, and then the JavaScript is pulling down all the bits of data for everything on the screen with individual calls, waiting for the responses before it can progress to make further calls, of which there can be N many depending on what the user is doing.
So across the local network (2.5 Gbps) that is a second, and most of it is waiting for the network. If I use the regular 4G level of throttling it takes 33.10 seconds! Really goes to show how badly this design does with extra latency.
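A toy illustration of why that call pattern hurts: with per-calendar requests issued one after another, total time grows with the number of calendars times the round-trip latency, whereas issuing them concurrently (or batching server-side) costs roughly one round trip. The /calendars/... path below is a placeholder, not Nextcloud's actual API.

// Why per-resource round trips hurt on high-latency links: N sequential awaits cost
// roughly N * RTT, while firing the requests together costs roughly one RTT (plus the
// slowest response). The /calendars/... path is a placeholder, not Nextcloud's real API.
async function loadCalendarsSequentially(ids: string[]) {
  const results = [];
  for (const id of ids) {
    // Each iteration waits for the previous response before the next request starts.
    results.push(await fetch(`/calendars/${id}`).then((r) => r.json()));
  }
  return results;
}

async function loadCalendarsInParallel(ids: string[]) {
  // All requests go out at once; total wall time is close to the slowest single call.
  return Promise.all(ids.map((id) => fetch(`/calendars/${id}`).then((r) => r.json())));
}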
I was going to say... the size of the JS only matters the first time you download it, unless there are a lot of tiny files instead of a bundle or two. What the article is complaining about doesn't seem like it's the root cause of the slowness.
When it comes to JS optimization in the browser there's usually a few great big smoking guns:
Nextcloud appears to be slow because of #2. Both #1 and #2 are dependent on round-trip times (HTTP request to server -> HTTP response to client), which are the biggest cause of slowness on mobile networks (e.g. 5G). Modern mobile network connections have plenty of bandwidth to deliver great big files/streams but they're still super slow when it comes to round-trip times. Knowing this, it makes perfect sense that Nextcloud would be slow AF on mobile networks because it follows the REST philosophy.
My controversial take: GIVE REST A REST already! WebSockets are vastly superior and they've been around for FIFTEEN YEARS now. Do I understand why they're so much lower latency than REST calls on mobile networks? Not really: in theory, it's still a round-trip, but for some reason an open connection can pass data through with an order of magnitude (or more) lower latency on something like a 5G connection.
15MB of JavaScript is 15MB of code that your browser is trying to execute. It’s the same principle as “compiling a million lines of code takes a lot longer than compiling a thousand lines”.
It's a lot more complicated than that. If I have a 15MB .js file and it's just a collection of functions that get called on-demand (later), that's going to have a very, very low overhead because modern JS engines JIT compile on-the-fly (as functions get used) with optimization happening for "hot" stuff (even later).
If there's 15MB of JS that gets run immediately after page load, that's a different story. Especially if there's lots of nested calls. Ever drill down deep into a series of function calls inside the performance report for the JS on a web page? The more layers of nesting you have, the greater the overhead.
DRY as a concept is great from a code readability standpoint, but it's not ideal for performance when it comes to things like JS execution (haha). I'm actually disappointed that modern bundlers don't normally inline calls at the JS layer. IMHO, they rely too much on the JIT to optimize hot call sites when that could've been done by the bundler. Instead, bundlers tend to optimize for file size, which is becoming less and less of a concern as bandwidth has far outpaced JS bundle sizes.
The entire JS ecosystem is a giant mess of "tiny package does one thing well" that is dependent on n layers of "other tiny package does one thing well." This results in LOADS of unnecessary nesting when the "tiny package that does one thing well" could've just written their own implementation of that simple thing it relies on.
Don't think of it from the perspective of "tree shaking is supposed to take care of that." Think of it from the perspective of "tree shaking is only going to remove dead/duplicated code to save file size." It's not going to take that 10-line function that handles <whatever> and put that logic right where it's used (in order to shorten the call tree).
That 15 MB still needs to be parsed on every page load, even if it runs in interpreted mode. And on low-end devices there's very little cache, so the working set is likely to be far bigger than available cache, which causes performance to crater.
Ah, that's the thing: "on page load". A one-time expense! If you're using modern page routing, "loading a new URL" isn't actually loading a new page... The client is just simulating it via your router/framework by updating the page URL and adding an entry to the history.
Also, 15MB of JS is nothing on modern "low end devices". Even an old, $5 Raspberry Pi 2 won't flinch at that and anything slower than that... isn't my problem! Haha =)
There comes a point where supporting 10yo devices isn't worth it when what you're offering/"selling" is the latest & greatest technology.
It shouldn't be, "this is why we can't have nice things!" It should be, "this is why YOU can't have nice things!"
When you write code with this mentality it makes my modern CPU with 16 cores at 4 GHz and 64 GB of RAM feel like a Pentium 3 running at 900 MHz with 512 MB of RAM.
Please don't.
THANK YOU
>There comes a point where supporting 10yo devices isn't worth it
Ten years isn't what it used to be in terms of hardware performance. Hell, even back in 2015 you could probably still make do with a computer from 2005 (although it might have been on its last legs). If your software doesn't run properly (or at all) on ten-year-old hardware, it's likely people on five-year-old hardware, or with a lower budget, are getting a pretty shitty experience.
I'll agree that resources are finite and there's a point beyond which further optimizations are not worthwhile from a business sense, but where that point lies should be considered carefully, not picked arbitrarily and the consequences casually handwaved with an "eh, not my problem".
>Do I understand why they're so much lower latency than REST calls on mobile networks? Not really: In theory, it's still a round-trip but for some reason an open connection can pass data through an order of magnitude (or more) lower latency on something like a 5G connection.
It's because a TLS handshake takes more than one roundtrip to complete. Keeping the connection open means the handshake needs to be done only once, instead of over and over again.
doesn’t HTTP keep connections open?
It's up to the client to do that. I'm merely explaining why someone would see a latency improvement switching from HTTPS to websockets. If there's no latency improvement then yes, the client is keeping the connection alive between requests.
Yes and no: There's still a rather large latency improvement even when you're using plain HTTP (not that you should go without encryption).
I was very curious, so I asked an AI to explain why WebSockets would have so much lower latency than regular HTTP, and it gave some (uncited, but logical) reasons:
> Once a WebSocket is open, each message avoids several sources of delay that an HTTP request can hit, especially on mobile. The big wins are skipping connection setup and radio wakeups, not shaving a few header bytes.
It then went on at length about why WebSocket "ping/pong" often beats HTTP GET /ping on mobile, what encryption (HTTPS/WSS) adds, how much headers/bytes matter, and when the gap narrows.
Wow. Talk about inefficiency. It just said the same thing I did, but using twenty times as many characters.
>Yes and no: There's still a rather large latency improvement even when you're using plain HTTP (not that you should go without encryption).
Of course. An unencrypted HTTP request takes a single roundtrip to complete. The client sends the request and receives the response. The only additional cost is to set up the connection, which is also saved when the connection is kept open with a websocket.
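For what it's worth, this is roughly what "the client keeping the connection open" looks like in Node: a keep-alive agent pays the TCP+TLS handshake once and reuses the socket for later requests, which is where most of the WebSocket-style latency win comes from. The host and paths are placeholders, and actual reuse still depends on the server honoring keep-alive.

// Sketch: explicit connection reuse in Node. A keep-alive agent pays the TCP+TLS
// handshake once and rides the same socket for later requests, which is most of
// the latency win an open WebSocket gets you. Host and paths are placeholders.
import https from "node:https";

const agent = new https.Agent({ keepAlive: true, maxSockets: 4 });

function get(path: string): Promise<string> {
  return new Promise((resolve, reject) => {
    https
      .get({ host: "example.com", path, agent }, (res) => {
        let body = "";
        res.on("data", (chunk) => (body += chunk));
        res.on("end", () => resolve(body));
      })
      .on("error", reject);
  });
}

(async () => {
  // The second call can reuse the first call's connection instead of handshaking again.
  await get("/status");
  await get("/status");
})();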
Yes and no. Have you considered that the problem is that a TLS handshake takes more than one round trip to complete?
/s
I've never seen anybody recommend WebSockets instead of REST. I take it this isn't a widely recommended solution? Do you mean specifically for mobile clients only?
WebSockets are the secret ingredient to amazing low- to medium-user-count software. If you practice using them enough and build a few abstractions over them, you can produce incredible “live” features that REST-designs struggle with.
Having used WebSockets a lot, I’ve realised that it’s not the simple fact that WebSockets are duplex or that it’s more efficient than using HTTP long-polling or SSEs or something else… No, the real benefit is that once you have a “socket” object in your hands, and this object lives beyond the normal “request->response” lifecycle, you realise that your users DESERVE a persistent presence on your server.
You start letting your route handlers run longer, so that you can send the result of an action, rather than telling the user to “refresh the page” with a 5-second refresh timer.
You start connecting events/pubsub messages to your users and forwarding relevant updates over the socket you already hold. (Trying to build a delta update system for polling is complicated enough that the developers of most bespoke business software I’ve seen do not go to the effort of building such things… But with WebSockets it’s easy, as you just subscribe before starting the initial DB query and send all broadcast update events for your set of objects on the fly.)
You start wanting to output the progress of a route handler to the user as it happens (“Fetching payroll details…”, “Fetching timesheets…”, “Correlating timesheets and clock in/out data…”, “Making payments…”).
Suddenly, as a developer, you can get live debug log output IN THE UI as it happens. This is amazing.
AND THEN YOU WANT TO CANCEL SOMETHING because you realise you accidentally put in the actual payroll system API key. And that gets you thinking… can I add a cancel button in the UI?
Yes, you can! Just make a ‘ctx.progress()’ method. When called, if the user has cancelled the current RPC, then throw an RPCCancelled error that’s caught by the route handling system. There’s an optional first argument for a progress message to the end user. Maybe add a “no-cancel” flag too for critical sections.
And then you think about live collaboration for a bit… that’s a fun rabbit hole to dive down. I usually just do “this is locked for editing” or check the per-document incrementing version number and say “someone else edited this before you started editing, your changes will be lost — please reload”. Figma cracked live collaboration, but it was very difficult based on what they’ve shared on their blog.
And then… one day… the big one hits… where you have a multistep process and you want Y/N confirmation from the user or some other kind of selection. The sockets are duplex! You can send a message BACK to the RPC client, and have it handled by the initiating code! You just need to make it so devs can add event listeners on the RPC call handle on the client! Then, your server-side route handler can just “await” a response! No need to break up the handler into multiple functions. No need to pack state into the DB for resumability. Just await (and make sure the Promise is rejected if the RPC is cancelled).
If you have a very complex UI page with live-updating pieces, and you want parts of it to be filterable or searchable… This is when you add “nested RPCs”. And if the parent RPC is cancelled (because the user closes that tab, or navigates away, or such) then that RPC and all of its children RPCs are cancelled. The server-side route handler is a function closure, that holds a bunch of state that can be used by any of the sub-RPC handlers (they can be added with ‘ctx.addSubMethod’ or such).
The end result is: while building out any feature of any “non-web-scale” app, you can easily add levels of polish that are simply too annoying to obtain when stuck in a REST point of view. Sure, it’s possible to do the same thing there, but you’ll get frustrated (and so development of such features will not be prioritised). Also, perf-wise, REST is good for “web scale” / high user counts, but you will hit weird latency issues if you try to use it for live, duplex comms.
WebSockets (and soon HTTP3 transport API) are game-changing. I highly recommend trying some of these things.
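To make that concrete, here is a minimal sketch of the call/progress/cancel pattern described above. Every name in it (ctx, RPCCancelled, the routes table, the message fields) is illustrative rather than taken from any particular framework:

```typescript
// Minimal RPC-over-WebSocket server sketch with progress + cancel (all names are illustrative).
import { WebSocketServer, WebSocket } from "ws";

class RPCCancelled extends Error {}

type Ctx = {
  progress: (text?: string) => void; // throws RPCCancelled if the client cancelled this call
  send: (payload: unknown) => void;  // push an intermediate event to the caller
};
type Handler = (ctx: Ctx, params: any) => Promise<unknown>;

const routes: Record<string, Handler> = {
  // A long-running handler that streams progress back to the UI.
  "payroll.run": async (ctx, params) => {
    ctx.progress("Fetching timesheets…");
    const sheets = await fetchTimesheets(params.period); // stand-in for real app logic
    ctx.progress("Making payments…");
    return { paid: sheets.length };
  },
};

const wss = new WebSocketServer({ port: 8080 });
wss.on("connection", (socket: WebSocket) => {
  const cancelled = new Set<number>(); // ids of calls the client asked to cancel

  socket.on("message", async (raw) => {
    const msg = JSON.parse(raw.toString()); // { id, type: "call" | "cancel", method?, params? }
    if (msg.type === "cancel") { cancelled.add(msg.id); return; }
    if (msg.type !== "call") return;

    const ctx: Ctx = {
      progress: (text) => {
        if (cancelled.has(msg.id)) throw new RPCCancelled();
        socket.send(JSON.stringify({ id: msg.id, type: "progress", text }));
      },
      send: (payload) => socket.send(JSON.stringify({ id: msg.id, type: "event", payload })),
    };

    const handler = routes[msg.method];
    if (!handler) {
      socket.send(JSON.stringify({ id: msg.id, type: "error", message: "unknown method" }));
      return;
    }
    try {
      const result = await handler(ctx, msg.params);
      socket.send(JSON.stringify({ id: msg.id, type: "result", result }));
    } catch (e) {
      const type = e instanceof RPCCancelled ? "cancelled" : "error";
      socket.send(JSON.stringify({ id: msg.id, type, message: String(e) }));
    }
  });
});

async function fetchTimesheets(period: string): Promise<unknown[]> { return []; }
```

Nested RPCs and awaiting a Y/N answer from the client are the same trick again: keep a map of pending ids per connection and resolve a Promise when the matching message arrives.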
Find someone to love you the way DecoPerson loves websockets.
After all my years of web development, my rules are thus:
WebSockets are just too awesome! You can use a simple event dispatcher for both the frontend and the backend to handle any given request/response, and it makes the code sooooo much simpler than REST. Example: ...and `WSDispatcher` would be the (singleton) object that holds the WebSocket connection and has `on()`, `off()`, and `dispatch()` functions. When the server sends a message like `{"type": "pong", "payload": "<some timestamp>"}`, the client calls `WSDispatcher.dispatch("pong", "<some timestamp>")`, which results in `pongFunc("<some timestamp>")` being called. It makes reasoning about your API so simple and human-readable! It's also highly performant and fully async. With a bit of Promise wrapping, you can even make it behave like a synchronous call in your code, which keeps the logic nice and concise.
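Something along these lines, presumably (a minimal sketch: only the on()/off()/dispatch() shape and the {"type", "payload"} message format come from the description above, the rest is assumed):

```typescript
// Sketch of a singleton WebSocket event dispatcher; names beyond on/off/dispatch are assumed.
type Listener = (payload: unknown) => void;

class Dispatcher {
  private listeners = new Map<string, Set<Listener>>();
  private ws: WebSocket;

  constructor(url: string) {
    this.ws = new WebSocket(url);
    this.ws.onmessage = (event) => {
      // e.g. {"type": "pong", "payload": "<some timestamp>"}
      const { type, payload } = JSON.parse(event.data as string);
      this.dispatch(type, payload);
    };
  }

  on(type: string, fn: Listener) {
    if (!this.listeners.has(type)) this.listeners.set(type, new Set());
    this.listeners.get(type)!.add(fn);
  }

  off(type: string, fn: Listener) {
    this.listeners.get(type)?.delete(fn);
  }

  dispatch(type: string, payload: unknown) {
    this.listeners.get(type)?.forEach((fn) => fn(payload));
  }

  send(type: string, payload: unknown) {
    this.ws.send(JSON.stringify({ type, payload }));
  }
}

// The singleton described above; the URL is a placeholder.
export const WSDispatcher = new Dispatcher("wss://example.invalid/ws");

// Usage: WSDispatcher.on("pong", pongFunc); a server "pong" message then calls pongFunc(payload).
```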
In my latest pet project (collaborative editor) I've got the WebSocket API using a strict "call"/"call:ok" structure. Here's an example from my WEBSOCKET_API.md:
I've got a `request()` helper that makes the async nature of the WebSocket feel more like a synchronous call. Here's what that looks like in action: … For reference, errors are returned in a different, more verbose format where "type" is "error" in the object, which the `request()` function knows how to deal with. It used to be ":err" instead of ":ok" but I made it different for a good reason I can't remember right now (LOL). Aside: there are still THREE firewalls that suck so bad they can't handle WebSockets: Sophos XG Firewall, WatchGuard, and McAfee Web Gateway.
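A sketch of what such a `request()` helper can look like (the actual snippets are elided above; the "call" -> "call:ok" pairing and the "type": "error" failure shape come from the description, while the field names and the assumption that the server echoes an id are mine):

```typescript
// Sketch of a Promise-wrapped request() over a WebSocket using a "call"/"call:ok" convention.
let nextId = 0;
const pending = new Map<number, { resolve: (v: any) => void; reject: (e: any) => void; okType: string }>();

const ws = new WebSocket("wss://example.invalid/ws"); // placeholder URL

ws.onmessage = (event) => {
  const msg = JSON.parse(event.data as string); // e.g. {"id": 1, "type": "doc:update:ok", "payload": {...}}
  const entry = pending.get(msg.id);
  if (!entry) return; // not a reply to one of our calls (could be a broadcast event instead)
  pending.delete(msg.id);
  if (msg.type === "error") entry.reject(msg);            // the verbose error format described above
  else if (msg.type === entry.okType) entry.resolve(msg.payload);
};

// Makes the async socket round-trip feel like a synchronous call at the call site.
function request(type: string, payload: unknown): Promise<any> {
  const id = ++nextId;
  return new Promise((resolve, reject) => {
    pending.set(id, { resolve, reject, okType: `${type}:ok` });
    ws.send(JSON.stringify({ id, type, payload }));
  });
}

// Usage: const doc = await request("doc:update", { id: 42, text: "hello" });
```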
The thing that kills me is that Nextcloud had an _amazing_ calendar a few years ago. It was way better than anything else I have used. (And I tried a lot, even the calendar add-on for Thunderbird. Which may or may not be built in these days, I can't keep track.)
Then at some point the Nextcloud calendar was "redesigned" and now it's completely terrible. Aesthetically, it looks like it was designed for toddlers. Functionally, adding and editing events is flat out painful. Trying to specify a time range for an event is weird and frustrating. It's better than not having a calendar, but only just.
There are plenty of open source calendar _servers_, but no good open source web-based calendars that I have been able to find.
Sync Conf is next week, and this sort of issue is so part of what I hope maybe can just go away. https://syncconf.dev/
Efforts like Electric SQL to have APIs/protocols for bulk fetching all changes (to a "table") is where it's at. https://electric-sql.com/docs/api/http
It's so rare for teams to do data loading well, rarer still that we get effective caching, and often a product's footing here only degrades with time. The various sync ideas out there offer such an alluring potential: a consistent way to get the client the updated, live data it needs.
Side note, I'm also hoping the JS / TC39 source phase imports proposal (aka import source) can help large apps like NextCloud defer loading more of its JS until needed. But the waterfall you call out here seems like the real bad side (of NextCloud's architecture)! https://github.com/tc39/proposal-source-phase-imports
Having at some point maintained a soft fork / patch-set for Nextcloud.. yes, there is so much performance left on the table. With a few basic patches the file manager, for example, sped up by magnitudes in terms of render speed.
The issue remains that the core itself feels like layers upon layers of encrusted code that instead of being fixed have just had another layer added ... "something fundamental wrong? Just add Redis as a dependency. Does it help? Unsure. Let's add something else. Don't like having the config in a db? Let's move some of it to ini files (or vice versa)..etc..etc." it feels like that's the cycle and it ain't pretty and I don't trust the result at all. Eventually abandoned the project.
Edit: at some point I reckon some part of the ecosystem recognised some of these issues and hence Owncloud remade a large part of the fundamentals in Golang. It remains unknown to me whether this sorted things or not. All of these projects feel like they suffer badly from "overbuild".
Edit-edit: another layer to add to the mix is that the "overbuild" situation is probably largely what allows the hosting economy around these open source solutions to thrive since Nextcloud and co. are so over-engineered and badly documented that they -require- a dedicated sys-admin team to run well.
This is my theory as well. NC has grown gradually in silos almost, every piece of it is some plugin they've imported from contributions at some point.
For example the reason there's no cohesiveness with a common websocket bus for all those ajax calls is because they all started out as a separate plugin.
NC has gone full modularity and lost performance for it. What we need is a more focused and cohesive tool for document sharing.
Honestly I think today with IaC and containers, a better approach for selfhosting is to use many tools connected by SSO instead of one monstrosity. The old Unix philosophy, do one thing but do it well.
This still needs cohesive authorization and central file sharing and access rules across apps. And some central concept of projects to move all content away from people and into the org and roles
Two things:
1. Did you open a pull request upstream with these basic patches? If you have order-of-magnitude speed improvements it would be awesome to share!
2. You definitely don't need an entire sysadmin team to run Nextcloud. At my work (a large organisation) there are three instances running for different parts/purposes, of which only one is run by more than one person, and I run both my personal instance and one for a nonprofit with ~100 people myself. It's really not much work after setup (and plenty of other systems are a lot more complicated to set up, trust me).
1. There was no point, having thought about it a bit; a lot of the patches (in essence it was at most a handful) revolved around disabling features, which in turn could never have been upstreamed. An example was, as mentioned elsewhere in this comment section, the abysmal performance of the thumbnail generation feature: it never cached right, it never worked right, and even when it did it would absolutely kill listings of larger folders of media. This was basically hacked out and partially replaced with much simpler generation on images alone, and suddenly the file manager worked again for clients.
2. Guess that's debatable, or maybe even skill dependent (mea culpa), and also largely a question of how comfortable one is with systems that cannot be reasoned about cleanly (similar to TFA I just could not stand the bloat, it made me feel more than mildly unwell working with it). Eventually it was GDPR reqs that drove us towards the big G across multiple domains.
On another note it strikes me how the attempts at re-gen'ing folder listings online really is Sisyphus work, there should be a clean way to enfold multiuser/access-tokens into the filesystems of phones/PCs/etc. The closest pseudo example at the moment I guess is classic Google Drive but of course it would need gating and security on the OS side of things that works to a standard across multiple ecosystems (Apple, MS, Android, iPhone, Linux etc.) ... yeeeeah, better keep polishing that HTML ball of spaghetti I guess ;)
I don't think this article actually does a great job of explaining why Nextcloud feels slow. It shows lots of big numbers for MBs of Javascript being downloaded, but how does that actually impact the user experience? Is the "slow" Nextcloud just sitting around waiting for these JS assets to load and parse?
From my experience, this doesn't meaningfully impact performance. Performance problems come from "accidentally quadratic" logic in the frontend, poorly optimised UI updates, and too many API calls.
It downloads a lot of JavaScript, it decompresses a lot of JavaScript, it parses a lot of JavaScript, it runs a lot of JavaScript, it creates a gazillion onFoundMyNavel event callbacks which all run JavaScript, it does all manner of uncontrolled DOM-touching while its millions of script fragments do their thing, it xhr’s in response to xhrs in response to DOM content ready events, it throws and swallows untold exceptions, has several dozen slightly unoptimized (but not too terrible) page traversals, … the list goes on and on. The point is this all adds up, and having 15MB of code gives a LOT of opportunity for all this to happen. I used to work on a large site where we would break out the stopwatch and paring knife if the homepage got to more than 200KB of code, because it meant we were getting sloppy.
15+ megabytes of executable code begins to look quite insane when you start to take a gander at many AAA games. You can produce a non-trivial Unity WebGL build that fits in <10 megabytes.
It’s the kind of code size where you analyze it and find 13 different versions of jquery and a hundred different bespoke console.log wrappers.
Yes and Windows 3.11 came on 6 1.44MB floppy disks. Modern software is so offensive.
Windows 3.11 also wasn’t shipped to you over a cellular connection when you clicked on it. If it were, 6x1.44MB would have been considered quite unacceptable.
But at least they’re not prematurely optimizing
Agreed. Plus if it truly downloads all of that every time, something has gone wrong with caching.
Overeager warming/precomputation of resources on page load (rather than on use) can be a culprit as well.
Relying on cache to cover up a 15MB JavaScript load is a serious crutch.
Oh totally, but - normal caching behavior would lead to different results than reported in the article. It would impact cold-start scenarios, not every page load. So something else is up.
I've played around with many self-hosted file manager apps. My first one was Ajaxplorer which then became Pydio. I really liked Pydio but didn't stick with it because it was too slow. I briefly played with Nextcloud but didn't stick with it either.
Eventually I ran into FileRun and loved it, even though it wasn't completely open source. FileRun is fast, worked on both desktop and mobile via browser nicely, and I never had an issue with it. It was free for personal use a few years ago, and unfortunately is not anymore. But it's worth the license if you have the money for it.
I tried setting up SeaFile but I had issues getting it working via a reverse proxy and gave up on it.
I like copyparty (https://github.com/9001/copyparty) - really dead simple to use and quick like FileRun - but the web interface is not geared towards casual users. I also miss FileRun's "Request a file" feature which worked very nicely if you just wanted someone to upload a file to you and then be done.
On the topic of self-hosted file manager apps, I've really liked "filebrowser". Pair it with Syncthing or another sync daemon and you've got a minimal self-hosted Dropbox clone.
* https://github.com/filebrowser/filebrowser
* https://github.com/hurlenko/filebrowser-docker
> I also miss Filerun's "Request a file" feature which worked very nicely if you just wanted someone to upload a file to you and then be done.
With the disclaimer that I've never used Filerun, I think this can be replicated with copyparty by means of the "shares" feature (--shr). That way, you can create a temporary link for other people to upload to, without granting access to browse or download existing files. It works like this: https://a.ocv.me/pub/demo/#gf-bb96d8ba&t=13:44
Copyparty can't (and doesn't want to) replace Nextcloud for many use cases because it supports one-way sync only. The readme is pretty clear about that. I'm toying with the idea of combining it with Syncthing (for all those devices where I don't want to do a full sync), does anybody have experience with that? I've seen some posts that it can lead to extreme CPU usage when combined with other tools that read/write/index the same folders, but nothing specifically about Syncthing.
Combining copyparty with Syncthing is not something I have tested extensively, but I know people are doing this, and I have yet to hear about any related issues. It's also a usecase I want to support, so if you /do/ hit any issues, please give word! I've briefly checked how Syncthing handles the symlink-based file deduplication, and it seemed to work just fine.
The only precaution I can think of is that copyparty's .hist folder should probably not be synced between devices. So if you intend to share an entire copyparty volume, or a folder which contains a copyparty volume, then you could use the `--hist` global-option or `hist` volflag to put it somewhere else.
As for high CPU usage, this would arise from copyparty deciding to reindex a file when it detects that the file has been modified. This shouldn't be a concern unless you point it at a folder which has continuously modifying files, such as a file that is currently being downloaded or otherwise slowly written to.
A good thing about Nextcloud is that by learning one tool, you get a full suite of collaboration apps: sync, file sharing, calendar, notes, collectives, office (via Collabora or OnlyOffice), and more. These features are pretty good, plus you get things like photo management and Talk, which are decent.
Sure, some people might argue that there are specialized tools for each of these functions. And that’s true. But the tradeoff is that you'd need to manage a lot more with individual services. With Nextcloud, you get a unified platform that might be good enough to run a company, even if it’s not very fast and some features might have bugs.
The AIO has addressed issues like update management and reliability; it's been very good in my experience. You get a fully tested, ready-to-go package from Nextcloud.
That said, I wonder, if the platform were rewritten in a more performance-efficient language than PHP, with a simplified codebase and trimmed-down features, would it run faster? The UI could also be more polished (see Synology DSM web interface). The interface in Synology looks really nice!
rewriting in a lower-level language won't do too much for NC, because it's mostly slow due to inefficient IO organization - things like mountains of XHRs, inefficient fetching, db querying etc. - None of that will be implicitly fixed by a rewrite in any language and can be fixed in the PHP stack as well. I think one of the reasons that helped OC/NC get off the ground was precisely that the sysadmins running it can often do a little PHP, which is just enough to get it customized for the client. Raising the bar for contribution by using lower level languages might not be a desirable change of direction in that case.
The thing I don't get is that based on the article the front-end is as bloated as the back-end.
That said, there's an Owncloud version called Infinite Scale which is written in Go.[1] Honestly I tried to go that route, but its requirements are pretty opinionated (Ubuntu LTS 22.04 or 24.04 and lots of Docker containers littering your system). Still, it looks like it's getting a lot of development.
[1] https://doc.owncloud.com/
Most of the OCIS team left to start OpenCloud, which is an OCIS fork. And its hardware requirements are pretty tame. It's a very nice replacement for Nextcloud, if you don't need the groupware features/apps and are only looking for file sharing.
> its requirements are pretty opinionated (Ubuntu LTS 22.04 or 24.04
Hm?
> This guide describes an installation of Infinite Scale based on Ubuntu LTS and docker compose. The underlying hardware of the server can be anything as listed below as long it meets the OS requirements defined in the Software Stack
https://doc.owncloud.com/ocis/next/depl-examples/ubuntu-comp...
The Software Stack section goes on to say it just needs Docker, Docker Compose, shell access, and sudo.
Ubuntu and sudo are probably only mentioned because the guide walks you through installing docker and docker compose.
If the developers can only get it to run in a pile of ubuntu containers, then it's extremely likely they haven't thought through basic things you need to operate a service, like supply chain security, deterministic builds, unit testing, upgrades, etc.
I see 6 officially supported linux distributions. I don't know where anyone got the idea that they can only get it to run on ubuntu. It's containerized. Who cares what the host os is, beyond "it can run containers"?
Nextcloud is something I have a somewhat love-hate relationship with. On one hand, I've used Nextcloud for ~7 years to backup and provide access to all of my family's photos. We can look at our family pictures and memories from any computer, and it's all private and runs mostly without any headaches.
On the other hand, Nextcloud is so far from being something like Google Docs, and I would never recommend it as a general replacement to someone who can't tolerate "jank", for lack of a better word. There are so many small papercuts you'll notice when using it as a power user. Right off the top of my head, uploading large files is finicky, and no amount of web server config tinkering gets it to always work; thumbnail loading is always spotty, and it's significantly slower than it needs to be (I'm talking orders of magnitude).
With all that said, I'm so grateful for Nextcloud since I don't have a replacement, and I would prefer not having all our baby and vacation pictures feeding some big corporation's AI. We really ought to have a safe, private place to store files in 2025 that the average person can wrap their head around. I only wish my family took better advantage of it, since I'm essentially providing them with unlimited storage.
I once discovered and reported a vulnerability in Nextcloud's web client that was due to them including an outdated version of a JavaScript-based PDF viewer. I always wondered why they couldn't just use the browser's PDF viewer. I made $100, which was a large amount to me as a 16 year old at the time.
Here is a blog post I wrote at the time about the vulnerability (CVE-2020-8155): https://tripplyons.com/blog/nextcloud-bug-bounty
I recently needed to show a PDF file inside a div in my app. All I wanted was to show it and make it scrollable. The file comes from a fetch() with authorization headers.
I could not find a way to do this without pdf.js.
This made me try it once more, and I got something to work with some Blobs, resource URLs, sanitization and iframes.
So I guess it is possible
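For anyone hitting the same wall, here's a minimal sketch of that approach (the endpoint, token, and sizing are placeholders):

```typescript
// Fetch a PDF with an Authorization header, wrap it in a Blob, and show it in an iframe.
async function showPdf(container: HTMLElement, url: string, token: string) {
  const resp = await fetch(url, { headers: { Authorization: `Bearer ${token}` } });
  if (!resp.ok) throw new Error(`Failed to fetch PDF: ${resp.status}`);

  const blob = await resp.blob();              // raw PDF bytes
  const objectUrl = URL.createObjectURL(blob); // blob: URL the browser's viewer can load

  const frame = document.createElement("iframe"); // an <object type="application/pdf"> also works
  frame.src = objectUrl;
  frame.style.width = "100%";
  frame.style.height = "100%";
  container.appendChild(frame);

  // Call URL.revokeObjectURL(objectUrl) when the viewer is torn down to free memory.
}
```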
Yeah, blobs seem like the right way to do it.
There does not seem to be a way to configure anything though. It looks quite bad with the default zoom level and the toolbar…
The html object tag can just show a pdf file by default. Just fetch it and pass the source there.
What is the problem with that exactly in your case?
I think it can't do that on iOS? Don't know if that is the relevant thing in the choice being discussed though. Not sure about Android.
Nextcloud is bloated and slow, but it works and is reliable. I've been running a small instance in a business setting with around 8 daily users for many years. It is rock solid and requires zero maintenance.
But people rarely use the web apps. Instead, it's used more like a NAS with the desktop sync client being the primary interface. Nobody likes the web apps because they're slow. The Windows desktop sync client has a really annoying update process, but other than that is excellent.
I could replace it with a traditional NAS, but the main feature keeping me there is an IMAP authentication plugin. This allows users to sign in with their business email/password. It works so well and makes it so much easier to manage user accounts, revoke access, do password resets, etc.
> Nobody likes the web apps because they're slow.
Web apps don't have to be slow. I prefer web apps over system apps, as I don't have to install extra programs into my system and I have more control over those apps:
- a service decides it's a good idea to load some tracking stuff from 3rd-party? I just uMatrix block it;
- a page has an unwanted element? I just uBlock block it;
- a page could have a better look? I just userstyle style it;
- a page is missing something that could be added on client side? I just userscript script it
Do you also prefer a web-based file browser? My main use for Nextcloud is files and a desktop sync is crucial and integrates with the OS.
I know people here don't like it when one answers to complaints about OSS projects with "go fix it then" but seeing the comment section here, it's hard to not at least think it.
About 50-100 people saying that they know exactly why NC is slow, bloated, bad, but failing to a) point out a valid alternative, or b) act and do something about it.
I'm going to say that I love NC despite its slow performance. I own my storage, I can do Google Drive stuff without selling my soul (aka data) to the devil and I can go patch up stuff, since the code is open.
Is downloading lots of JS and waiting a few seconds bad? Yes. But did I pay for any of it? No. Am I the product as a result of choosing NC? Also no.
Having a basic file system with a dropbox alternative and being able to go and "shop" for extensions and extra tools feels so COOL and fun. Do I want to own my password manager? Bam, covered. Do I want to centralise calendar, mail and kanban into one? Bam, covered.
Codebase is AGPL, installs easily and you don't need to do surgery every new update.
I've been running it without hiccups for over 6 years now.
Would I love it to be as fast and smooth as a platform developed by an evil tech behemoth which wants to swallow everyone's data? Of course, am I happy NC exists? Yes!
And if you got this far, dear reader, give it a try. It's free and you can delete it in a second but if you find something to improve and know how, go help, it helps us all :)
I gave up on using Nextcloud because every time it updated it accumulated more and more errors and there was no way I was going to use a software that I had to troubleshoot every single update. Also the defaults for pictures are apparently quite stupid and so instead of making and showing tiny thumbnails for pictures, the thumbnails are unnecessarily large and loading the thumbnails for a folder of pictures takes forever. You can fix this and tell it to make smaller thumbnails apparently, but again, why am I having to fix everything myself? These should be sane defaults. Unfortunately, I just can't trust Nextcloud.
My NextCloud server completely borked itself with an automatic update sometime in the last ~10 months. It's completely unresponsive.
I haven't bothered to fix it.
I gave up updating Nextcloud. It works for what I use it for and I don't feel like I'm missing anything. I'd rather not spend 4+ hours updating and fixing confusing issues without any tangible benefit.
I was expecting the author to open the Profiler tab instead of just staring at Network. But it's yet another "heavy JavaScript bad" rant.
You really consider 1 MB of JS too heavy for an application with hundreds of features? How exactly are developers supposed to fit an entire web app into that? Why does this minimalism suddenly apply only to JavaScript? Should every desktop app be under 1 MB too? Is Windows Calculator 30 MB binary also an offense to your principles?
What year is it, 2002? Even low-band 5G gives you 30–250 Mbps down. At those speeds, 20 MB of JS downloads in well under a second. So what's the math behind the 5–10 second figure? What about the cache? Is it turned off for you, and you redownload the whole of Nextcloud from scratch every time?
Nextcloud is undeniably slow, but the real reasons show up in the profiler, not the network tab.
> Even low-band 5G gives you 30–250 Mbps down.
On paper. In practice, it can be worse than that.
I've spent the past year using a network called O2 here in the UK. Their 5G SA coverage depends a lot on low band (n28/700MHz) and had issues in places where you'd expect it to work well (London, for example). I've experienced sub 1Mbps speeds and even data failing outdoors more than once. I have a good phone, I'm in a city, and using what until a recent merger was the largest network in the country.
I know it's not like this everywhere or all the time, but for those working on sites, apps, etc, please don't assume good speeds are available.
That's really quite odd. There isn't even 5G in my area, yet I get a stable 100 Mbps download speed on 4G LTE, outdoors and indoors, any time of the day. Is 5G a downgrade? Is it considered normal service in the UK when the latest generation of cellular network provides connection speeds comparable to 3G, launched in 2001? How is this even acceptable in the year 2025? Would anyone in the UK start complaining if it were downgraded to 100 Kbps? Or should we design apps for that case?
Such an underrated comment. You can really have 500 MB of dependencies for your app because you're on macOS and it's still gonna be fast, because memory use has nothing to do with performance.
Pretty much the same with JavaScript: modern engines are amazingly fast, or at least they really don't depend on the amount of raw JavaScript fed to them.
> low-band 5G gives you 30–250
First and foremost, I agree with the meat of your comment.
But I wanted to point out that it DOES very much matter that apps meant to be transmitted over a remote connection are, indeed, as slim as possible.
You must be thinking about 5G on a city with good infrastructure, right?
I'm right now having a coffee on a road trip, with a 4G connection, and just loading this HN page took like 8~10 seconds. Imagine a bulky and bloated web app if I needed to quickly check a copy of my ID stored in NextCloud.
It's time we normalize testing network-bounded apps through low-bandwidth, high-latency network simulators.
> You really consider 1 MB of JS too heavy for an application with hundreds of features? How exactly are developers supposed to fit an entire web app into that? Why does this minimalism suddenly apply only to JavaScript? Should every desktop app be under 1 MB too? Is Windows Calculator 30 MB binary also an offense to your principles?
Yes, I don't know, because it runs in the browser, yes, yes.
I have been considering https://bewcloud.com/ + Immich as an alternative
Nextcloud's client support is very good though and it has some great apps, I use PhoneTrack on road trips a lot
Immich is a night and day improvement for photos vs nextcloud. You could roll it in addition if you wanted to try.
Fantastic recommendation, it's like exactly what the doctor ordered given the premise of this thread. Does Bewcloud play nice with DAV or other open protocols or (dare I hope) nextcloud apps? I wouldn't mind using nextcloud apps paired with a better web front end.
> I use PhoneTrack on road trips a lot
If every aspect of Nextcloud was as clean, quick and light-weight as PhoneTrack this world would be a different place. The interface is a little confusing but once I got the hang of it it's been awesome and there's just nothing like it. I use an old phone in my murse with PhoneTrack on it and that way if I leave it on the bus (again) I actually have a chance of finding it.
No $35/month subscription, and I'm not sharing my location data with some data aggregator (aside from Android of course).
NextCloud does feel slow. What I want is not only a cloud service that does lots of common tasks, but it also should do it lightly and simply.
I'm extremely tempted to write a lightweight alternative. I'm thinking sourcehut [1] vs GitHub.
[1] https://sourcehut.org/
I made one such lightweight alternative frontend: https://github.com/mickael-kerjean/filestash
Take a look at OpenCloud. It's a Go-based rewrite by the former OwnCloud team.
It works very well, has polished UI and uses very little resources. It also does a lot less than Nextcloud.
https://github.com/opencloud-eu
Just compare comparable products.
Nextcloud is an old product that inherits from Owncloud, which has been developed in PHP since 2010. It has extensibility at its core, with thousands of extensions available.
So yaaay compare it with source hut ...
> Just compare comparable products.
> So yaaay compare it with source hut ...
I'm not saying that sourcehut is the same in any way, but I want the difference between GitHub and sourcehut to be like the difference between NextCloud and the alternative.
> Nextcloud is an old product that inherits from Owncloud, which has been developed in PHP since 2010.
Tough situation to be in, I don't envy it.
> It has extensibility at its core through the thousands of extensions available.
Sure, but I think for some limited use cases, something better could be imagined.
Aren't you just confirming the parent that Nextcloud is the big, feature-rich behemoth like Github?
Maybe that's the problem: "old product that inherits from Owncloud".
The article mentions Vikunja as an alternative to Nextcloud Tasks, and I can give it a solid recommendation as well. I wanted a self-hosted task management app with some lightweight features for organizing tasks into projects, ideally with a kanban view, but without a full-blown PM feature set. I tried just about every task management app out there, and Vikunja was the only one that ticked all the boxes for me.
Some specific things I like about it:
And some other things that weren't hard requirements, but have been useful for me:
I know this post is more about Nextcloud... but can I just say, this one feature from Vikunja, "...export task summaries and comments...", sounds great!!! One of the features I seek out when I look for task/project management software is the ability to easily and comprehensively provide nice exports, and that said exports *include comments*!!
Either apps lack such an export, or it's very minimal, or it includes lots of things, except comments... Sometimes an app might have a REST API, and I'd need to build something non-trivial to start pulling out the comments, etc. I feel like it's silly in this day and age.
My desire for comments to be included in exports is for local search... but also because I use comments for sort of thinking aloud, sort of like inline task journaling... and when comments are lacking, it sucks!
In fact, when I hear folks suggest to simply stop using such apps and merely embrace the text-file todo approach, they cite having full access to comments as a feature... and I can't dispute their claim! But barely any non-text-based apps highlight the inclusion of comments. So I have to ask: is it just me (who doesn't use a text-based todo workflow), and then all the other folks who *do use* a text-based todo flow, who actually care about access to comments?!?
<rant over>
Yeah, I hear you. I almost started using a purely text-based todo workflow for those same reasons, but it was hard to give up some web UI features, like easily switching between list and kanban-style views.
My use case looks roughly like this: for a given project (as in hobby/DIY/learning, not professional work), I typically have general planning/reference notes in a markdown file synced across my devices via Nextcloud. Separately, for some individual tasks I might have comments about the initial problem, stuff I researched along the way, and the solution I ended up with. Or just thinking out loud, like you mentioned. Sometimes I'll take the effort to edit that info into my main project doc, but for the way I think, it's sometimes more convenient for me to have that kind of info associated with a specific task. When referring to it later, though, it's really handy to be able to use ripgrep (or other search tools) to search everything at once.
To clarify, though, Vikunja doesn't have a built-in feature that exports all task info including comments, just a REST API. It did take a little work to pull all that info together using multiple endpoints (in this case: projects, tasks, views, comments, labels). Here's a small tool I made for that, although it's fairly specific to my own workflow: https://github.com/JWCook/scripts/tree/main/vikunja-export
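For the curious, the gist of that approach looks roughly like this (a sketch only: the endpoint paths are assumptions based on the description above, and the real tool linked above handles more details like views and labels):

```typescript
// Rough sketch of pulling tasks and comments out of a Vikunja instance over its REST API.
// The base URL, token, endpoint paths and field names are assumptions; check the API docs.
const BASE = "https://vikunja.example.invalid/api/v1";
const TOKEN = "<your API token>";

async function get(path: string): Promise<any> {
  const resp = await fetch(`${BASE}${path}`, { headers: { Authorization: `Bearer ${TOKEN}` } });
  if (!resp.ok) throw new Error(`GET ${path} failed: ${resp.status}`);
  return resp.json();
}

async function exportAll() {
  const projects = await get("/projects");
  for (const project of projects) {
    const tasks = await get(`/projects/${project.id}/tasks`);
    for (const task of tasks) {
      const comments = await get(`/tasks/${task.id}/comments`);
      console.log(`# ${project.title} / ${task.title}`);
      for (const c of comments) console.log(`- ${c.comment}`);
    }
  }
}

exportAll();
```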
> Yeah, I hear you. I almost started using a purely text-based todo workflow for those same reasons, but it was hard to give up some web UI features, like easily switching between list and kanban-style views.
Yeah, I like me some kanban! Which is one reason I've resisted the text-based workflow... so far. ;-)
> ...Vikunja doesn't have a built-in feature that exports all task info including comments, just a REST API. It did take a little work...
Aww, man, then I guess I misread. I thought it was sort of easier than that. Well, I guess that's not all bad. It's possible, it simply requires a little elbow grease. I used to use Trello, which does include comments in its JSON export, but I had my own little Python app to copy out and filter only the key things I wanted - like comments - and reformat them to other text formats like CSV, etc. But Trello is not open source, so it's not an option for me anymore. Well, thanks for sharing (and for making!) your Vikunja export tool! :-)
nextcloud just feels abandoned, even if it isn't of course.
maybe paying customers are getting a different/updated/tuned version of it. maybe not. but the only thing that keeps me using it is that there aren't any real self-hosted alternatives.
why is it slow? if you just blink or take a breath, it touches the database. years ago I tried to optimise it a bit and noticed that there's a horrible amount of DB transactions without any apparent reason.
also, the android client is so broken...
I'm not sure why you feel like it is abandoned. There is a steady release cadence and the changelog[0] clearly shows that much is being worked on.
[0]: https://nextcloud.com/changelog/#latest32
yes of course there's progress and new features and it's not really abandoned per se.
but the feeling is that the outdated or simply bad decisions aren't fixed or redesigned.
it could be made 100 times better.
Because it feels worse and more broken as time goes on. Just like any other abandoned web app, except it's being made worse and slower as an active, deliberate, ongoing choice
On the same note: a Jira ticket, as configured where I work, is 42 MB for the entire page. And I use ad blockers, so I'm already skipping the page-counting stuff.
Wow, that's a lot. Our local installation, measured with a zero-cache reload (we self-host so as not to suffer their slooooow cloud):
82 / 86 requests; 1,694 kB / 1,754 kB transferred; 6,220 kB / 6,281 kB resources; Finish: 11.73 s; DOMContentLoaded: 1.07 s; Load: 1.26 s
I know that this is supposed to be targeted at NextCloud in particular, but I think it's a good standalone "you should care about how much JavaScript you ship" post as well.
What frustrates me about modern web development is that everyone is focused on making it work much more than on making sure it works fast. Then when you go to push back, the response is always something like "we need to not spend time over-optimizing."
Sent this straight to the team slack haha.
I'm curious how much Javascript eg gmail and google docs/drive give you, in comparison.
I just checked Google Calendar: it's under 3 MB of JS downloaded (around 8 MB uncompressed). It's also a lot more responsive than the Nextcloud web UI. Even then, it's not necessarily the size; I think that's mostly a symptom of the larger issues likely at play.
There are a lot of requests made in general; these can be good, bad or indifferent depending on the actual connection channels and configuration with the server itself. The pieces are too disconnected from each other... the NextCloud org has 350 repositories on GitHub. I'm frankly surprised it's more than 30 or so... it's literally 10x even a generous expectation... I'd rather deal with a crazy mono-repo at that point.
OP really focused on payload size, is why I was curious.
> On a clean page load [of nextcloud], you will be downloading about 15-20 MB of Javascript, which does compress down to about 4-5 MB in transit, but that is still a huge amount of Javascript. For context, I consider 1 MB of Javascript to be on the heavy side for a web page/app.
> …Yes, that Javascript will be cached in the browser for a while, but you will still be executing all of that on each visit to your Nextcloud instance, and that will take a long time due to the sheer amount of code your browser now has to execute on the page.
While Nextcloud may have a ~60% bigger JS payload, sounds like perhaps that could have been a bit of a misdirection/misdiagnosis, and it's really about performance characteristics of the JS rather than strictly payload size or number of lines of code executed.
On a Google Doc load chosen by whatever my browser location bar autocompleted, I get around twenty JS files, the two biggest are 1MB and 2MB compressed.
Yeah, without a deeper understanding it's really hard to say... just the surface level look, I'm not really at all interested in diving deeper myself. I'd like to like it... I tried out a test install a couple times but just felt it was clunky. Having a surface glance at the org and a couple of the projects, it doesn't surprise me that it felt that way.
Gmail should be server-side, with as much JS as you want to use. Unless they moved away from the philosophy they started with GWT (Google Web Toolkit) for Gmail, and perhaps even Inbox (RIP).
I've used nextcloud for close to I think 8 years now as a replacement for google drive.
However my need for something like google drive has reduced massively, and nextcloud continues to be a massive maintenance pain due to its frustratingly fast release cadence.
I don't want to have to log into my admin account and baby it through a new release and migration every four months! Why aren't there any LTS branches? The amount of admin work that nextcloud requires only makes sense for when you legitimately have a whole group of people with accounts that are all utilizing it regularly.
This is honestly the kick in the pants I need to find a solution that actually fits my current use-case. (I just need to sync my fuckin keepass vault to my phone, man.) Syncthing looks promising with significantly less hassle...
The linuxserver.io image for Nextcloud requires considerably less babysitting for upgrades: https://docs.linuxserver.io/images/docker-nextcloud
As long as you only upgrade one major version at a time, it doesn't require putting the server in maintenance mode or using the occ cli.
Been running NC on my home server and basically maybe update it once a year or so? Even less probably, so definitely not a must to update every time. Plus via snap it's pretty simple.
Might also consider Vaultwarden/Bitwarden as a self-host alternative. Yeah it's client-server... that said, been pretty happy as a user.
The major shortcoming of NextCloud, in my opinion, is that it's not able to do sync over LAN. Imagine wanting to synchronize 1TB+ of data and not being able to do so over a 1 Gbps+ local connection, when another local device has all the necessary data. There is some workaround involving "split DNS", but I haven't gotten around to it. Other than that, I thought NC was absolutely fantastic.
Check if your router has an option to add custom DNS entries. If you're using OpenWRT, for example, it's already running dnsmasq, which can do split DNS relatively easily: https://blog.entek.org.uk/notes/2021/01/05/split-dns-with-dn...
If not, and you don't want to set up dnsmasq just for Nextcloud over LAN, then DNS-based adblock software like AdGuard Home would be a good option (as in, it would give you more benefit for the amount of time/effort required). With AdGuard, you just add a line under Filters -> DNS rewrites. PiHole can do this as well (it's been awhile since I've used it, but I believe there's a Local DNS settings page).
Otherwise, if you only have a small handful of devices, you could add an entry to /etc/hosts (or equivalent) on each device. Not pretty, but it works.
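For concreteness, assuming the instance answers as cloud.example.com at 192.168.1.10 (both placeholders), the dnsmasq and hosts-file variants look like:

```
# dnsmasq (e.g. on OpenWRT): answer cloud.example.com with the LAN address
address=/cloud.example.com/192.168.1.10

# or, per device, a line in /etc/hosts
192.168.1.10  cloud.example.com
```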
Or just use ipv6!
You could also upload directly to the filesystem and then run occ files:scan, or if the storage is mounted as external it just works.
Another method is to point your machine's /etc/hosts (or equivalent) at the local IP of the instance (if the device is only ever on the LAN you can keep the entry, otherwise remove it after the large transfer).
Mind that your router should not send traffic addressed to itself out to the ISP anyway; it just loops it back internally, so it never has to go over your ISP's connection. So running over the LAN only helps if your switch is faster than your router.
I had a similar issue with a public game server that required connecting through the WAN even if clients were local on the LAN. I considered split DNS (resolving the name differently depending on the source) but it was complicated for my setup. Instead I found a one-line solution on my OpenBSD router:
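(The exact rule isn't reproduced here; a reconstruction with placeholder interface, port, and address would look roughly like this:)

```
# pf.conf sketch; em0 = LAN interface, 27015 = game port, 192.168.1.50 = local game server (all placeholders)
pass in quick on em0 inet proto { tcp udp } from em0:network to (egress) port 27015 rdr-to 192.168.1.50
```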
It basically says "pass packets from the LAN interface towards the WAN (egress) on the game port and redirect the traffic to the local game server". The local client doesn't know anything happened; it just worked.
> The major shortcoming of NextCloud, in my opinion, is that it's not able to do sync over LAN.
That’s an interesting way to describe a lack of configuration on your part.
Imagine me saying: "The major shortcoming of Google Drive, in my opinion, is that it's not able to sync files from my phone. There is some workaround involving an app called 'Google Drive' that I have to install on my phone, but I haven't gotten around to it. Other than that, Google Drive is absolutely fantastic."
I use it on LAN without a problem (using mDNS). Sure it runs with self signed certificates, but that’s ok with me.
I wonder how bewCloud[1] stacks up against NextCloud, since it's meant to be a "modern and simpler alternative" to it. Has anyone tested it?
[1] https://bewcloud.com/
Like most of us I think, I really, really wanted to like nextcloud. I put it on an admittedly somewhat slow dual Xeon server, gave it all 32 threads and many, many gigabytes of ram.
Even on a modern browser on a brand new leading-edge computer, it was completely unusably slow.
Horrendous optimization aside, NC is also chasing the current fad of stripping out useful features and replacing them with oceans of padding. The stock Photos app doesn't even have the ability to sort by date! That's been table stakes for a photo viewer since the 20th goddamn century.
When Windows Explorer offers a more performant and featureful experience, you've fucked up real bad.
I would feel incredibly bad and ashamed to publish software in the condition that NextCloud is in. It is IMO completely unacceptable.
One thing that could help with this is to use a CDN for these static assets, while still hosting Nextcloud on your own.
We had a similar situation with some notebooks running in production, which were quite slow to load because they were pulling in a lot of JS files / WASM just for showing the UI. This was not part of our core logic, and using a CDN to load these, while still relying on the private prod instance for the business logic, helped significantly.
I have a feeling this would be helpful here as well.
(Tangential) Reading the comments, several mentioned "copyparty". Never heard of it before, haven't used it, haven't reviewed it, but their "feature showcase" video makes me want to give it a shot: https://www.youtube.com/watch?v=15_-hgsX2V0 :)
For reference, 20 MB is three hundred and thirteen Commodores.
The complete Doom 2, including all graphics, maps, music and sound effects, shipped on 4 floppies, totalling 5.76MB.
The original Doom 2 ran 64,000 pixels (320x200). 4k UHD monitors now show 8.3 million pixels.
YMMV.
Of course, Doom 2 is full of Carmack shenanigans to squeeze every possible ounce of performance out of every byte, written in hand optimized C and assembly. Nextcloud is delivered in UTF-8 text, in a high level scripting language, entirely unoptimized with lots of low hanging fruit for improvement.
Sure, but I doubt there is more image data in the delivered Nextcloud payload than in Doom 2; games famously need textures, whereas a website usually needs mostly vector and CSS-based graphics.
Actually Carmack did squeeze every possible ounce of performance out of DOOM; however, that does not always mean he was optimizing for size. If you want to see a project optimized for size, you might check out ".kkrieger" from ".theprodukkt", which accomplishes a 3D shooter in 97,280 bytes.
You know how many characters 20 MB of UTF-8 text is, right? If we are talking about JavaScript it's probably mostly ASCII, so quite close to 20 million characters. If we take a wild estimate of 80 characters per line, that would be 250,000 lines of code.
I personally think 20MB is outrageous for any website, webapp or similar. Especially if you want to offer a product to a wide range of devices on a lot of different networks. Reloading a huge chunk of that on every page load feels like bad design.
Developers usually take for granted the modern convenience of a good network connection; imagine using this on a slow connection, it would be horrid. Even in western "first world" countries there are still quite a few people connecting with outdated hardware or slow connections, and we often forget them.
If you are making any sort of webapp you ideally have to think about every byte you send to your customer.
yes, but why isn't it optimised? not as extreme as doom had to be, but a bit better? especially the low hanging fruit.
this is why i think there's another version for customers who are paying for it, with tuning, optimization, whatever.
I mean, if you’re going to include carmack’s relentless optimizer mindset in the description, I feel like your description of the NextCloud situation should probably end with “and written by people who think shipping 15MB of JavaScript per page is reasonable.”
You know apps don't store pixels, right? So why are you counting pixels?
A single picture that looks decent on a modern screen, taken from a modern camera, can easily be larger than the original Doom 2 binary.
You don't need pictures for a CRUD app. Should all be vectorial in any case.
The article suggests that it takes 14 MB of Javascript to do just the calendar. I doubt all of my calendar events for 2025 add up to 14 MB.
Or the same number of 64k intros[1][2][3]...
[1]: https://www.youtube.com/watch?v=iXgseVYvhek
[2]: https://www.youtube.com/watch?v=ZWCQfg2IuUE
[3]: https://www.youtube.com/watch?v=4lWbKcPEy_w
Sure, but what people leave out is that it's mostly C and assembly. That just isn't realistic anymore if you want a better developer experience that leads to faster feature rollout, better security, and better stability.
This is like when people reminisce about the performance of windows 95 and its apps while forgetting about getting a blue screen of death every other hour.
Exactly. JavaScript is a higher-level language with a lot of required functionality built in. Compared to C, you would need to write way less actual code in JavaScript to achieve the same result, for example graphics or maths routines. Therefore it's crazy that it's that big.
I think it's a double-edged sword of open source/FLOSS... some problems are hard and take a lot of effort. One example I consistently point to is core component libraries... React has MUI and Mantine, and I'm not familiar with any open-source alternatives that come close. As a developer, if there was one for Leptos/Yew/Dioxus, I'd have likely jumped ship to Rust+WASM. They're all fast enough, with different advantages and disadvantages.
All said... I actually like TypeScript and React fine for teams of developers... I think NextCloud likely has coordination issues that go beyond the language or even libraries used.
Windows 2000 was quite snappy on my Pentium 150, and pretty rock solid. It was when I stopped being good at fixing computers because it just worked, so I didn't get much practice.
I did get a BSOD from a few software packages in Win2k, but it was fewer and much farther between than Win9x/me... I didn't bump to XP until after SP3 came out... I also liked Win7 a lot. I haven't liked much of Windows since 7 though.
Currently using Pop + Cosmic.
Win2000 is in the same class as Win95 despite being slightly more stable. It still locked up and crashed more frequently than modern software.
Then you did something special. For me Win2k was at least three orders of magnitude more stable, and based on my buddies that was not exceptional.
Does anyone know what they are doing wrong to create such large bundles? What is the lesson here?
Not paying attention.
1. Indiscriminate use of packages when a few lines of code would do.
2. Loading everything on every page.
3. Poor bundling strategy, if any.
4. No minification step.
5. Polyfilling for long dead, obsolete browsers
6. Having multiple libraries that accomplish the same thing
7. Using tools and then not doing any optimization at all (like using React and not enabling React Runtime)
Arguably things like an email client and file storage are apps and not pages so a SPA isn't unreasonable. The thing is, you don't end up with this much code by being diligent and following best practices. You get here by being lazy or uninformed.
What is React runtime? I looked it up and the closest thing I came across is the newly announced React Compiler. I have a vested interest in this because I'm currently working on a micro-SaaS that uses React heavily and still suffers from bundle bloat even after performing all the usual optimizations.
When you compile JSX to JavaScript, it produces a series of function calls representing the structure of the JSX. In a recent major version, React added a new set of functions which are more efficient at both runtime and during transport, and don't require an explicit import (which helps cut down on unnecessary dependencies).
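Roughly, the difference in compiler output looks like this (simplified):

```typescript
// Classic JSX transform (pre React 17): React must be in scope in every file,
// and <div className="x">hi</div> compiles to:
import React from "react";
const el = React.createElement("div", { className: "x" }, "hi");

// Automatic runtime (React 17+): the compiler injects the import itself,
// and the same JSX compiles to:
import { jsx } from "react/jsx-runtime";
const el2 = jsx("div", { className: "x", children: "hi" });
```

In TypeScript this is the `"jsx": "react-jsx"` compiler option; Babel's React preset has a similar `runtime: "automatic"` switch.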
React compiler is awesome for minimizing unnecessary renders but doesn't help with bundle size; might even make it worse. But in my experience it really helps with runtime performance if your code was not already highly optimized.
I think some of the issue here is that, first, Nextcloud tries to be compatible with any managed / shared hosting.
They also treat every "module"/"app", whatever you call it, as a completely distinct SPA without providing much of an SDK/framework. Which means each app adds its own deps, manages its own build, etc...
Also don't forget that an app can even be part of a screen, not the whole thing.
Nextcloud is a mess. It tries to do everything. The only reason I keep it in production is because it's a hassle to transition my files and DAVx info elsewhere.
The HTTP upload is miserable: it's slow, it fails with no message, it fails to start, it hangs. When uploading duplicate files the popup is confusing. The UI is slow, the add-ons break on every update. The gallery is very bad; now we use Immich.
I find the Nextcloud client really buggy on the Mac, especially the VFS integration. The file syncing is also really slow. I switched back to P2P file syncing via Syncthing and Resilio Sync out of frustration.
Many have brought up using more websockets instead of REST API calls. It looks like they're already working in that direction; scroll down to "Developer tools and APIs": https://nextcloud.com/blog/nextcloud-hub25-autumn/
>For context, I consider 1 MB of Javascript to be on the heavy side for a web page/app.
I feel like > 2kb of Javascript is heavy. Literally not needed.
While I tend to agree... I've been on enough relatively modern web apps that can hit 8mb pretty easily, usually because bundling and tree shaking are broken. You can save a lot by being judicious.
IMO, the worst offenders are when you bring in charting/graphing libraries into things when either you don't really need them, or otherwise not lazy loading where/when needed. If you're using something like React, then a little reading on SVG can do wonders without bloating an application. I've ripped multi-mb graphing libraries out to replace them with a couple components dynamically generating SVG for simple charting or overlays.
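As a sketch of the kind of thing that can replace a charting dependency (not from any library, just hand-rolled SVG in a React component):

```typescript
// A tiny SVG sparkline component; often enough to avoid pulling in a multi-MB charting library.
type SparklineProps = { data: number[]; width?: number; height?: number; stroke?: string };

export function Sparkline({ data, width = 120, height = 32, stroke = "currentColor" }: SparklineProps) {
  if (data.length < 2) return null;
  const min = Math.min(...data);
  const max = Math.max(...data);
  const range = max - min || 1; // avoid division by zero for flat data

  // Map each data point into SVG coordinates (y is inverted: 0 is the top of the viewBox).
  const points = data
    .map((v, i) => {
      const x = (i / (data.length - 1)) * width;
      const y = height - ((v - min) / range) * height;
      return `${x},${y}`;
    })
    .join(" ");

  return (
    <svg width={width} height={height} viewBox={`0 0 ${width} ${height}`}>
      <polyline points={points} fill="none" stroke={stroke} strokeWidth="1.5" />
    </svg>
  );
}
```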
Preact has been fairly faithful to staying under 10 kB (compressed)! (even though they haven't updated the original <3 kB claim since forever)
It is slow, and the code seems to be messy enough to be fragile. Being written in PHP doesn't help performance either.
Nextcloud server is written in PHP. Of course it is slow. It's also designed to be used as an office productivity suite meaning a lot of features you may not actually use are enabled by default and those services come with their own cronjobs and so on.
PHP is super fast today. I've built two customer-facing web products with PHP, each of which became a million-dollar business. And they were very fast!
https://dev.to/dehemi_fabio/why-php-is-still-worth-learning-...
At the risk of stating the obvious: PHP is limited to single-threaded processes and has garbage collection. It's certainly not the fastest language one could use for handling multiple concurrent jobs.
That's incorrect. PHP has concurrency included.
On the other hand, in 99.99% of web applications you do not need self-baked concurrency. Instead, use a queue system which handles this. I've used this with 20 million background jobs per day without hassle; it scales very well horizontally and vertically.
They didn’t say it was the fastest. Just that the language per se is fast enough.
> the language per se is fast enough
I literally explained why this is not the case.
And Nextcloud being slow in general is not a new complaint from users.
I've never used Nextcloud, but I always imagined that the point is you can run the services but then plug in any calendar app etc. You don't have to be running Nextcloud's calendar, I thought. Did I misunderstand how it works?
If dav works best for you, you're using it right.
I would assume that the people for whom a slow web-based calendar is a problem (among other slow things in the web interface) are people who would want to use it if it performed well.
They wouldn't make a bad, slow web interface on purpose to enlighten people about how bad web interfaces are, as a complicated way of pushing them toward integrated apps.
Their calendar plugin provides CalDAV, so you could just use your local calendar app that syncs with the server over that protocol.
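For example, with a stock install at cloud.example.com (hostname and username are placeholders here), clients like DAVx⁵ or Apple Calendar discover calendars under the standard DAV endpoint:

    https://cloud.example.com/remote.php/dav/
    # individual calendars live under paths like
    https://cloud.example.com/remote.php/dav/calendars/USERNAME/personal/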
Sooooo why not just host any CalDAV server instead? Like, why is Nextcloud so popular compared to self-hosting CalDAV?
In my case, I want file/photo syncing, calendar syncing, and contact syncing.
Nextcloud provides all 3 in a package that pretty much just works, in my experience (despite being kinda slow).
The Notes app is a pretty nice wrapper around a specific folder full of markdown files, I mostly use it on my phone, and on my desktop I just use my favorite editor to poke at the .md files directly.
Oh, and when a friend group wanted a better way to figure out which day to get together, I just installed the Polls app with a few clicks and we use that now.
I am a bit disappointed in the performance, but I've been running this setup for years and it "just works" for me. I understand how it works, and I know how to back it up (and, more importantly, restore from that backup!).
If there's another open-source, self-hosted project that has WebDAV, CalDAV, and CardDAV all in one package, then I might consider switching, but for now Nextcloud is "good enough" for me.
Ok so it's just the convenience of being a package, thank you for explaining.
I'm still setting up my own home server, adding one functionality at a time. I wanted to like Nextcloud but it's just too bloated.
Radicale is a good calendar replacement. I'd rather have single-function apps at this point.
Any good file syncing/drive replacements? My Synology exists pretty much because Synology Drive works so well syncing Mac and iOS.
I went from cloud to local SMB shares to Nextcloud to Seafile. Really happy with the latter: it works, no bloat, versioning, and some file sharing. The pro version is free with 3 or fewer users. I use the CLI client to mount the libraries into folders and share those with SMB, plus subst X: into the root directory on laptops for the family. Borg backup of that offsite for backup.
I've read good things about Seafile and have considered setting it up in my homelab... though when I looked at the documentation, it too seemed quite large, and I worried it wouldn't be the lightweight solution I'm looking for.
Seafile works pretty well. The iOS app is ass though. Everything else is rock solid.
Where does it store metadata like the additional file properties you can add? Does it use Alternate Data Streams for anything?
Does the AI run locally?
For anyone who might find it useful, here's a Reddit thread from 3 years ago on a few concerns about SeaFile I'd love to see revisited with some updated discussion: https://www.reddit.com/r/selfhosted/comments/wzdp2p/are_ther...
Seems like the AI runs wherever you want it - you enter an API endpoint.
https://manual.seafile.com/13.0/extension/seafile-ai/
You might like Peergos, which is E2EE as well. Disclosure: I work on it.
https://peergos.org
You can try it out easily here: https://peergos-demo.net
Our iOS app is still in the works, though.
Syncthing is great, but it doesn't offer selective sync or virtual files if you need those features.
ownCloud Infinite Scale might be the best option for a full-featured file sync setup, as that's all it does.
It's not selective sync, but you can get something similar with ignore files [1] in Syncthing (a minimal sketch follows after the links). This functionality can also be configured via the web GUI and within apps such as MobiusSync [2].
1. https://docs.syncthing.net/users/ignoring.html
2. https://mobiussync.com
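For what it's worth, a minimal .stignore sketch (the file names are made up; // starts a comment, ! keeps a file that a later pattern would otherwise ignore, and (?d) lets Syncthing delete the ignored file if it blocks a directory removal):

    // keep this one file even though the next pattern would ignore it
    !/Camera/Important.mov
    /Camera/*.mov
    (?d).DS_Store
    node_modules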
I think you could replace Nextcloud's syncing and file access use cases with Syncthing and Copyparty respectively. IMO the biggest downside is that Copyparty's UX is... somewhat obtuse. It's super fast and functional, though.
Pretty happy with Resilio Sync. I use it on Mac, and linux in a docker container.
It is proprietary: it has the words "license" and "price" on its page => crapware.
Unison. Unfortunately it has no mobile apps, though.
rsync, ftp, and smb have all existed for decades and work very well on spotty, slow connections (maybe not smb) and are very, very small utilities.
Copyparty. Found that recently and absolutely love it.
Is Nextcloud reliable enough for "production" use?
Last time I heard a certain privacy community recommended against Nextcloud due to some issues with Nextcloud E2EE.
Nextcloud, and before it Owncloud, have been "in production" in my household for nearly a decade at this point. There have been some botched updates and sync problems over the years, but it's been by far the most reliable app I've hosted.
In terms of privacy & security, like everything it comes down to risk model and the trade-offs you make to exist in the modern world. Nextcloud is for sharing files, if nothing short of perfect E2EE is tolerable it's probably not the solution for you, not to mention the other 99.999% of services out there.
I think most of the problems people report come down to really bad defaults that let it run like shit on very low-spec boxes that shouldn't be supported (i.e. Raspberry Pi gen 1/2 back in the day). Installing Redis and configuring PHP-FPM correctly fixes like 90% of the problems (sketch below), other than the bloated JavaScript mentioned in the OP.
End of the day, it's fine. Not perfect, not ideal, but fine.
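For reference, the caching half of that is roughly this in Nextcloud's config.php (a sketch assuming the APCu and Redis PHP extensions are installed and Redis runs on localhost; adjust host/port to your setup):

    'memcache.local' => '\OC\Memcache\APCu',
    'memcache.locking' => '\OC\Memcache\Redis',
    'redis' => [
      'host' => 'localhost',
      'port' => 6379,
    ],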
the question is, what's your use case?
for me it's a family photo backup with calendars (private and shared ones) running in a VM on the net.
its webui is rarely used by anyone (except me), everyone is using their phones (calendars, files).
does it work? yes. does anyone other than me care about the bugs? no. but no one really _uses_ it as if it were deployed for a small office of 10-20-30 people. on the other hand, there are companies paying for it.
for this use case, it's reliable enough.
Kinda. In the long run you will definitely stumble upon a ton of bugs, but they mostly have some workarounds. Mostly.
Nextcloud isn't perfect, but it's still one of the few major projects that hasn't shifted to a business-oriented license and where all components are available rather than paywalled behind an enterprise edition.
So yes, not perfect, bloated JS, but it works and is maintained.
So I'd rather thank all the developers involved in Nextcloud than whine about bloated JS.
>So I'd rather thank all the developers involved in Nextcloud than whine about bloated JS.
Good news! You can do both.
That's not quite right. There are features that are only available to enterprise customers, or require proprietary plug-ins like Sendent.
Do I need them for my home server? No. Do I need them for my company? Yes, but costs compared to MS 365 are negligible.
Maybe it's because of using PHP?
Nope. PHP is sufficiently fast.
This post completely misses the point. Linear downloads ~6.1 MB of JS over the network, decompressed to ~31 MB, and it still feels snappy.
Applications like Linear and Nextcloud aren't designed to be opened and closed constantly. You open them once and then work in that tab for the remainder of your session.
As others have pointed out in this thread, "feeling slow" is mostly due to the number of fetch requests and the backend serving those requests.
I think there's something cool possible in running the Nextcloud plugin API over Sandstorm's auth and sandboxing.
syncthing otoh barely even has a web ui, so it's really fast :-P
It felt unnecessarily complex for such a simple task as file synchronization. I prefer Unison. Unfortunately, it's a blast from the past written in OCaml, and there is no Android app :-(
Syncthing has been very "set it and forget it" for me. It updates itself occasionally but I haven't had to fix anything yet.
Just like any other modern app: first you make it work using frameworks. Then, as soon as the "Core" product is done - just a few more features - then we'll circle back around to ripping out those bloated frameworks for something more lithe. Shouldn't be more than two weeks, now. Most of the base stuff is done. Just another feature or two. I mean, a little longer, if we have some issues with those features, sure. But we'll get back around to a simpler UI right after! Just those features, their bugs and support, and then - well documentation. Just the minimum stuff. Enough to know what we did when we come back to it. But we'll whip up those docs and then it's right on to slimming down the frontend! Won't be long now...
Microsoft Teams goes hold my beer and downloads more than 75 MB of Javascript.
As someone who has hosted a few Nextcloud instances for a few years: Nextcloud can be quick if you put the work in. If you want to get a good feel for how fast it can be, rent a Hetzner storage box (1 TB for under 5 euros a month).
You sadly can't just install Nextcloud on a vanilla server and expect it to perform well.
Do you have any tips and tricks to share? I'm running a self-hosted instance on an old desktop PC in my basement for me and a couple family members. Performance is kinda meh, and I don't think it's due to resource constraints on the server itself. This is after following the performance recommendations in the admin console to tweak php.ini settings.
Javascript making PHP look bad.
I don't think I will ever use something like that. I work on over 10 PCs every day, and my only synchronisation is a 16 GB USB stick. I keep all important work, apps, and files there.
[flagged]
This comment feels AI generated
"You're absolutely right!"
Could an installable PWA solve this?
Could more diligence in the codebase solve this?
> Could ignoring the problem solve this?