
A new architecture for Nextcloud

A recurring complaint about Nextcloud is how slow it can be for certain tasks. Why is Nextcloud slow? That is a very broad question, and Nextcloud is not always slow. In fact it has improved vastly over the past couple of years: each new PHP version gets faster, Nextcloud developers are now more careful to streamline assets to avoid dozens of requests just to load the web interface, and so on and so forth.

But still, tasks like uploading many small files or browsing the Gallery remain very slow. Too slow, if we compare them with the alternatives.

Both use cases have something in common: many requests need to be processed in parallel, and this is precisely where Nextcloud struggles.

There are two main reasons for this, and both revolve around the architecture of Nextcloud and of traditional PHP projects in general.

The process of a traditional PHP request

In the classic PHP architecture, the service is not really running while idle. It is the HTTP server that is listening on the network; it detects whenever a request requires PHP processing and passes it over to the PHP interpreter.

Now, there has been an evolution in how the HTTP daemon serves PHP requests. Nowadays PHP-FPM is most commonly used: a separate service that implements the FastCGI protocol. The HTTP server forwards the request (via mod_proxy_fcgi in Apache) to PHP-FPM, which typically hands it to a worker process, spawning a new one if needed, to interpret the PHP code; once the request has been served, the worker either exits or goes back to idling, waiting for a new task.
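For reference, this hand-off is just a couple of lines of Apache configuration; the socket path below is a common Debian-style default and will vary between distributions:

```apache
# Route every .php request to the PHP-FPM daemon over its unix socket
<FilesMatch "\.php$">
    SetHandler "proxy:unix:/run/php/php-fpm.sock|fcgi://localhost"
</FilesMatch>
```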

This means that with every HTTP request, the application (Nextcloud in this case) has to bootstrap and tear down: parse its configuration file, initialize its state, process the request and clean everything up.

Compare that to a traditional C or C++ service, such as Apache itself, which loads its configuration, allocates memory for its main structures and stays listening for incoming connections. It only needs to serve each request; it does not have to start over from zero every time.

Now, staying with the Apache example a bit longer: the classic way for this and many other services to operate is to fork a new child process every time a request arrives, so that the child processes it while the parent goes back to listening on the socket.

The issue with this is that forking a new process is computationally expensive: the OS now has to context-switch so that every process gets its share of CPU time. Additionally it reserves a lot of memory, since each child holds a copy of the parent's memory in copy-on-write (COW) mode. The memory won't actually be copied until it diverges between the processes, but unless we use overcommit it is accounted for twice.

It turns out that it is more efficient to keep working on a single thread, or perhaps a small thread pool, to avoid those costs: no spawning and limited context switching. This model relies on asynchronous OS primitives such as epoll, which allow separate concurrent events to be processed sequentially by a single thread. Libraries such as libevent, libev and libuv build on these primitives to attend to reads, writes and timers in a closed loop, where the library user only needs to register callbacks. This is called the event loop architecture.

This is where Nginx came strong onto the scene, since it uses precisely this architecture and achieves better performance with it. Apache ended up adopting the same strategy with the event MPM, and then many others followed.

This paradigm is now all over the place: async runtimes in Rust, Boost.Asio in C++, and libuv in C, the event loop behind NodeJS, are some notable examples.


Nextcloud is a traditional PHP application. It is like Apache before the event MPM. Think of opening the Gallery: lots of requests are sent to the server at once to retrieve the dozens of thumbnails that need to be painted on the screen.

For every one of those thumbnails, we load the configuration file, parse the environment, load every Nextcloud app with all its hooks, and go through security protections such as CSRF checks, tokens and the like.

So the number one thing to do to improve the situation is to load as little as possible for every request; in other words, make each request as light as possible. This can be done without touching the architecture.

The real improvement, though, would come from taking the step that the rest of the industry has already taken: Nextcloud should adopt the event loop model.

This way, the service is already running when a request arrives. The environment is already loaded, and we are waiting in the event loop, ready to dispatch requests without initializing and tearing everything down for each one of them. In this architecture, retrieving a thumbnail would mostly boil down to one database access and one file system access.
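The warm-process model can be sketched as follows. This is a hypothetical illustration in Python's asyncio rather than PHP, and the names (`load_config`, `handle_thumbnail`, the one-line "protocol") are invented for the sketch; the point is that the expensive bootstrap runs once, before any request, and the per-request handler does only the useful work:

```python
import asyncio

CONFIG = None  # parsed once at startup, not once per request


def load_config():
    # Hypothetical stand-in for what Nextcloud does on every request today:
    # parsing config, loading apps, registering hooks. Here it runs once.
    return {"thumb_size": 128}


async def handle_thumbnail(reader, writer):
    # Per-request work is only the useful part; in the real case this
    # would be a database access and a file system access.
    name = (await reader.readline()).strip().decode()
    writer.write(f"thumb:{name}:{CONFIG['thumb_size']}\n".encode())
    await writer.drain()
    writer.close()
    await writer.wait_closed()


async def main():
    global CONFIG
    CONFIG = load_config()  # environment loaded exactly once
    # Port 0 asks the OS for any free port; a real service would fix one.
    return await asyncio.start_server(handle_thumbnail, "127.0.0.1", 0)
```

Every concurrent thumbnail request is then served by the same long-lived process, multiplexed by the event loop underneath asyncio.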

There are already PHP frameworks that do this and are worth exploring, such as ReactPHP, which was proposed in this pull request.

This is obviously a huge paradigm shift, but it is one that should be taken seriously in order to remain relevant. To anybody who thinks this is too costly, I would just point to the sunk cost fallacy.

Check out this post to read more about other Gallery performance issues.

Author: nachoparker

Humbly sharing things that I find useful [ github dockerhub ]


  1. Hello, this is very interesting (although I am not an expert in the architecture). Is this your point of view, or something that has been decided and planned? Thank you!

    1. Hi sk,

      That seems to be a PR to provide push functionality through SSE. Cool stuff, but unrelated. You can see some discussion about the event loop proposal in the link included in the post, but it doesn’t seem to be getting much traction so I wanted to bring it up again to stir some discussion.

      Excited to see the push capabilities implemented though, maybe I would suggest to use websockets instead.

      Cheers, thanks for dropping by

  2. I want to say thank you for the investigation. Without you I wouldn’t have known about Nextcloud’s disadvantages. I will think about migrating to another OSS service.

  3. Thank you for this very interesting article. You are right, Nextcloud devs should try to do something about velocity in order to stay in the race. I don’t have sufficient knowledge to say “your solution is the right one”, but it makes sense to me. Sorry for my bad English. I hope my comment will help (a little bit) you to have good SEO. Great website. Thanks a lot

  4. Hi,
    do you think Nextcloud can become more competitive when compared to Dropbox or Seafile, which are written in C++ and Python?
    Don’t get me wrong, I use NextcloudPi and it has a lot of advantages over Seafile, but some things are just very disappointing sometimes.
    I can download 1 GB of pictures from the web interface to my PC in about 15-20 minutes.
    Downloading them with the iOS app to my iPad takes more than 4 hours….
    Anyways I wanted to thank you for your work and the effort you put in this project.
    I am willing to test tweaks if you plan to adjust NCP to a more powerful SBC, but I already tried quite a bit and don’t think it is possible without architecture changes.

    1. I see your problem, but I personally don’t have it. When I go to my cloud via the domain it is pretty slow because of my internet, but if I use the IP I have an upload of 10 Mbit/s and a download of 23 Mbit/s.

  5. Does NextCloud 21 implement any aspects that you brought up in this article?

    I saw mention that NC 21 had performance improvements (probably different from this article), but only to those that take the time to set it up. How do you expect NC 21 to change performance on NextCloudPi?

    P.S. Thanks for your awesome work with NextCloudPi. I much prefer it over the Snap version of Nextcloud for control, but it is significantly easier than setting up an entire server software stack by hand. It helps me learn all the pieces at my own pace.
