Facing slow load times with Next.js caching for dynamic blogs, the author designed a custom caching system that serves content instantly, updates in the background, and forces a refresh after a month of inactivity. The blog's storage was also migrated to an S3 proxy and miniserve for efficient file distribution, with edge caching set to six hours. After initial concerns, the system has been running smoothly for two weeks. The author also notes progress on the riscv-mc emulator project.
Continuing from #7: the built-in caching in Next.js wasn't a good fit for my dynamic blogs, and pages were loading painfully slowly. So I stayed up late and designed and implemented my own version. It avoids making requests entirely for a certain period of time, then starts making background requests as visits come in, so readers still see the content without delay. Finally, it forces a request when nobody has visited for about a month. I also added the last-updated check from Gist, so it's even faster.
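Here's a minimal sketch of the idea in TypeScript (the names, thresholds, and in-memory map are my illustration, not the actual implementation, and the Gist last-updated check is left out):

```typescript
// A minimal stale-while-revalidate sketch with hypothetical names and thresholds.
type Entry<T> = { value: T; fetchedAt: number; lastVisit: number };

const FRESH_MS = 60 * 60 * 1000;           // skip requests entirely within this window
const IDLE_MS = 30 * 24 * 60 * 60 * 1000;  // ~a month without visits forces a request

const cache = new Map<string, Entry<unknown>>();

async function getCached<T>(key: string, fetcher: () => Promise<T>): Promise<T> {
  const now = Date.now();
  const hit = cache.get(key) as Entry<T> | undefined;

  // Cold cache, or nobody has visited for about a month: block on a fresh fetch.
  if (!hit || now - hit.lastVisit > IDLE_MS) {
    const value = await fetcher();
    cache.set(key, { value, fetchedAt: now, lastVisit: now });
    return value;
  }

  hit.lastVisit = now;

  // Past the quiet period: serve the cached copy instantly, refresh in the background.
  if (now - hit.fetchedAt > FRESH_MS) {
    void fetcher()
      .then((value) => {
        cache.set(key, { value, fetchedAt: Date.now(), lastVisit: Date.now() });
      })
      .catch(() => { /* keep serving the stale copy if the refresh fails */ });
  }

  return hit.value;
}
```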
Everyone loves a cache hit, don't they? Haha
But I haven't tested it fully; I don't quite trust my 2 AM brain, and I'm really worried. Whatever, let's wait for a while and find out.
Uh, two weeks have passed, so far so good!
In the past two weeks, you may have already seen the announcement. Yeah, we deprecated the old zhenghuo.
It held so many JSON and zip files, not plain text, so it was never a good fit for GitHub.
The new storage is based on S3 proxy and miniserve.
S3 proxy is a proxy layer that lets me modify the file system via the S3 API. I already have a program that syncs local files to remote S3 servers, so why not make full use of it? Moreover, S3 is the standard!
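For illustration, any S3-compatible client can talk to such a proxy; a sync might look like this with the AWS CLI (the bucket name and endpoint are placeholders, and my actual sync program is a custom one):

```sh
# Placeholder bucket and endpoint: sync a local directory through the S3 proxy.
aws s3 sync ./zhenghuo s3://zhenghuo --endpoint-url https://s3.example.com
```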
And miniserve is a simple file-distribution web server with a clean, clear UI.
The best choice for me would be an Apache-like one, but I'm really worried that Nginx may have some security issues. miniserve, though, fits my requirements pretty well; it can even serve a whole directory as a .tar.gz download! And it streams the archive, so it won't use much memory. I don't care about the compression ratio; the key point is that it supports downloading directories.
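Starting it looks something like this (the path and port are my assumptions):

```sh
# Serve the directory, with .tar.gz downloads of whole directories enabled.
miniserve /srv/zhenghuo --port 8080 --enable-tar-gz
```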
I set the caching time on the edge to 6 hours for everything, including the directory view, the files, and the directory archives. The point is that they may change over time, through version updates, bug fixes, or something else, so the TTL shouldn't be too long. And in case of a huge amount of traffic, putting them on Cloudflare's CDN is a good choice; the content isn't that dynamic anyway.
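One way to get such a 6-hour edge TTL (I'm not claiming this is the exact setup) is to emit Cache-Control from the origin and let Cloudflare honor it; a Cloudflare Cache Rule would work just as well:

```nginx
# Sketch: a 6-hour TTL (21600 s) driven by origin headers.
# The upstream address is a placeholder.
location / {
    add_header Cache-Control "public, max-age=21600";
    proxy_pass http://127.0.0.1:8080;
}
```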
And I overrode the default favicon.svg with my own using Nginx. That wasn't so easy, since this site is connected through a Cloudflare tunnel. I created a new server block in Nginx, listening on a different port, and found out that `nginx -s reload` can't fully reload a change like this.
When I was overriding it, my LLM told me: DON'T REMOVE THE BREAK IN YOUR REWRITE BLOCK! Then it thought for a while: the rewrite in the first location block will restart location matching, so the request will be matched again, skip the first block, and be proxied by the second block... uhhh, it works. But you may experience an error if blahblahblah...
So I removed that break from the rewrite.
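The final shape is roughly this (a sketch; the ports, paths, and upstream are placeholders, not my real config):

```nginx
server {
    listen 8081;

    location = /favicon.svg {
        # No "break": after the rewrite, nginx re-matches locations with the
        # new URI, so the request falls through to the proxy block below.
        # With "break", processing would stay inside this location and the
        # file would be looked up on local disk instead.
        rewrite ^ /static/my-favicon.svg;
    }

    location / {
        proxy_pass http://127.0.0.1:8080;  # miniserve behind the tunnel
    }
}
```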
When uploading the files, I found that the CA of my IPv4 certificate is not included in Python's and OpenSSL's trusted CA bundles, which explains why one of my classmates on a Mac saw a warning when trying to connect to my service.
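A quick way to check how a certificate chain verifies against the local OpenSSL trust store (the hostname here is a placeholder):

```sh
# Prints the presented chain and the verification result; look for
# "Verify return code: 0 (ok)" at the end of the output.
openssl s_client -connect example.com:443 -servername example.com </dev/null
```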
I remember I've explained the reason before, but I'll write it down again: most of my services are behind Cloudflare, but in some cases that would be too slow, and, uh... Okay, that's the benefit of writing blogs: accessing the services directly over IPv4 is not a good choice after all. Cloudflare does have an edge node in my city, so the speed won't be affected too much. I will fix it soon.
And riscv-mc has made really great progress, including a terminal, a nice keyboard, and a blocking read ecall. (In reality, read is blocking by default, and that's how the libraries are designed, so if it isn't blocking in my emulator, it causes issues.)
I will post a new video about it soon.