This sort of makes me sad. Redis has strayed from what its original goal/purpose was.
I’ve been using it since it was in beta. Simple, clear, fast.
The company I’m working for now keeps trying to add more and more functionality using Redis that doesn’t belong in Redis, and then complains about Redis scaling issues.
I may be biased, but I think this announcement is actually a very good sign for Redis, since it shows that the focus is back on the community edition, that is, the source tree you can just download from GitHub (and I believe this is an effect of the license change: it is possible for the company to work on the public tree without competitors being able to cut & paste the code into SaaS services).
There are a few things that are interesting to me about this discussion, related to complexity and use cases outside the original scope.
1. You can still download Redis and type "make" and it builds without dependencies whatsoever like in the past, and that's it.
2. Then you run it and use just the subset of Redis that you like. The additional features are not imposed on the user, nor do they impact the rest of the user experience. This is, actually, a lot like how it used to be when I ran the project: I added Pub/Sub, Lua scripting, geo indexing, streams, all stuff that, at first, people felt was out of scope, and yet many of these turned out to be among the best features. Now it is perfectly clear that Pub/Sub belonged in Redis very well, for instance.
3. This release has improvements to the foundations, in terms of latency, for example. This means that even if you just use your small subset, you can benefit from the continued developments. Sometimes systems take the bad path of becoming less efficient over time.
So I understand the sentiment, but I still see Redis remaining quite lean, at least in the version 8 that I just downloaded and am inspecting right now.
What do you think doesn't belong in Redis? I've always viewed Redis as basically "generic datastructures in a database" — as opposed to say, Memcached, which is a very simple in-memory-only key/value store (that has always been much faster than Redis). It's hard for me to point to specific features and say: that doesn't belong in Redis! Because Redis has generally felt (to me) like a grab bag of data structures + algorithms, that are meant to be fairly low-latency but not maximally so, where your dataset has to fit in RAM (but is regularly flushed to disk so you avoid cold start issues).
If your application can't survive the Redis server being wiped without issues, you're using Redis wrong.
Why not just use Memcached, then? Memcached is much better as an ephemeral cache than Redis — Redis is single-threaded. The point of Redis is all of its extra features: if you're limiting yourself to Memcached-style usage, IMO you're using Redis wrong and should just use Memcached.
Valkey is not single-threaded.
Also, the data types of Redis are practical for caching more complex stuff; they're not for using it as a database, though.
Redis supports multiple forms of replication for HA
I always just think of Redis as a HashMap As A Service that only supports string keys.
It's nice if the stuff stays there, because my application will be faster. If it goes down I need a few seconds to re-populate it and we're back.
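As a rough sketch of that pattern (cache-aside), assuming redis-py; the DB lookup below is a made-up stand-in and the key names and TTL are just illustrative:

```python
import redis

r = redis.Redis(decode_responses=True)

def load_user_name_from_db(user_id: int) -> str:
    # Hypothetical stand-in for the real source of truth (SQL, API, ...).
    return f"user-{user_id}"

def get_user_name(user_id: int) -> str:
    """Cache-aside: try Redis first, fall back and repopulate on a miss."""
    key = f"user:{user_id}:name"
    cached = r.get(key)
    if cached is not None:
        return cached
    value = load_user_name_from_db(user_id)
    r.set(key, value, ex=300)  # 5-minute TTL; losing it just means a slower first hit
    return value
```

If the whole instance gets wiped, nothing breaks; the next requests just pay the repopulation cost.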
You should use Memcached if you're only using Redis as an ephemeral hashmap. It's much faster.
If your application is happy with an empty Redis, then why run Redis in the first place?
What you say is good in theory, but doesn’t hold in practice.
We use memcached instead of Redis. Cache different layers in different instances so one going down hurts but doesn’t kill. Or at least it didn’t when I was there. They’ve been trying to squeeze cluster sizes, and I guarantee that’s no longer sufficient: multiple circuit breakers will open if more than one cache goes tits up.
Cache and Sessions
Both running in-memory speed up an application, but you can survive both being nuked (minus potentially logging everyone out).
No. Cache protects your other services from peak traffic. Which often leads to wrong sizing of those services to reap efficiency gains. Autoscaling can’t necessarily keep up with that sort of problem.
Remember how I mentioned circuit breakers?
The only time we had trouble with memcached was when we set the max memory a little too high and it restarted due to lack of memory. Which of course likes to happen during high traffic.
Not fixing those would have resulted in a metastable situation.
Pub/Sub is a huge use case for me
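For anyone who hasn't used it, the API surface is tiny. A minimal sketch with redis-py (channel name and payload are just examples):

```python
import redis

r = redis.Redis(decode_responses=True)

# Subscriber side (normally a separate process/worker).
p = r.pubsub()
p.subscribe("events")

# Publisher side.
r.publish("events", "cache invalidated: user:42")

# The first message is the subscribe confirmation, then the published payload.
for _ in range(2):
    msg = p.get_message(timeout=1.0)
    if msg and msg["type"] == "message":
        print(msg["channel"], msg["data"])
```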
The key is in the name: "Redis-tribution".
If you're not redistributing, then you're using it wrong. Only once redistribution has successfully occurred (i.e. you can reboot the redis process and recover), is the goal of redis fulfilled.
This.
Sure, there's persistence but it always seemed like an afterthought. It's also unavailable in most hosted Redis services or very expensive when it's available.
There's also HA and clustering, which makes data loss less likely but that might not be good enough.
For the people wondering who would ever use Redis this way, check out Sidekiq! https://sidekiq.org/ "Ephemeral" jobs can be a big trade-off that many Rails teams aren't really aware of until it's too late. The Sidekiq docs didn't mention this last time I checked, so I can't really blame people when they go for the "standard"/"best" job system and are surprised when it gets super expensive to host.
Yup, agree. Or as I like to call Redis, your "db building kit"
Of course if what you need is a traditional DB then go with a traditional DB
But it offers those data structures and other stuff that fewer competitors have (or have in a quirkier way).
> that has always been much faster than Redis
Do you have some reliable recent benchmarks comparing the two?
Rarely seen Redis viewed as a database, even if that has been their push for the last few years.
There are Redis-protocol compatible databases like Aerospike and Kvrocks that are useful if you want a KV store that isn't always in-memory.
Redis Enterprise has started to lean into being able to do this too.
Generic data structures in memory, grab bag of structures and algorithms... sounds more like a programming language or library than an external tool. C++ STL for example would fit these descriptions perfectly.
Doing everything is a recipe for bloat. In a database, in a distributed cache, in a programming language, in anything.
Don't think the argument is "everything", just the things that can be done within the protocol. There's really not much bloat being added considering the "limitations": https://redis.io/docs/latest/develop/reference/protocol-spec
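For context, the protocol really is that small: every command is just an array of bulk strings over a socket. A sketch of speaking raw RESP by hand, assuming a Redis server on localhost:6379:

```python
import socket

# RESP framing for: SET foo bar  -> an array of 3 bulk strings.
cmd = b"*3\r\n$3\r\nSET\r\n$3\r\nfoo\r\n$3\r\nbar\r\n"

with socket.create_connection(("localhost", 6379)) as s:
    s.sendall(cmd)
    print(s.recv(64))  # expect b'+OK\r\n', a RESP simple-string reply
```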
I think it wouldn't be unfair to compare it to Golang, which has, in my opinion, a quite unbloated stdlib that allows you to do almost anything without external libraries!
This is what I see everywhere. Something is a success and then everybody starts using it wrong. Like Elasticsearch as a database: people use it for searching and then start using it as a primary database. Mostly pushed by management, BTW, not always the software engineers.
You'd be surprised how many engineers make these kinda decisions.
That does not match my experience. Engineers learn a new tool, that tool is successful in solving a problem. Whether it is recency bias, incorrect pattern matching, or simply laziness, the tool is used again but with reduced success. Repeat that process a few more times (sometimes in different organizations) and now the tool is way outside the domain, ill-fit to the task at hand, and a huge pain.
That often happens when the engineers who pushed that tool get promoted a few times and build their careers on said tool, which is where I have seen this being pushed down. But I think it's important to note that in most cases it is still engineers.
I worked somewhere where a team, that can only be described as a clown team, decided to use Elastic as the “database” for the entire login/auth microservice, that other teams depended on.
It was so slow and terrible.
> The company I’m working for now keeps trying to add more and more functionality using Redis that doesn’t belong in Redis, and then complains about Redis scaling issues.
This doesn't sound like a Redis issue; you're just not using the right tool for the job.
Totally agree. It's definitely not the right tool for what they're doing, but some of the engineers don't seem to know better, or understand the point of being able to run scripts on Redis.
Lots of Lua scripting and calculations being done on Redis that have nothing to do with the data that's local to Redis. It's infuriating.
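For contrast, the kind of Lua that does make sense on Redis is a short script over keys that live in Redis, e.g. an atomic fixed-window rate limiter. A sketch with redis-py; the key name and window are illustrative:

```python
import redis

r = redis.Redis()

# The script only touches keys local to the server, so it stays atomic and fast.
RATE_LIMIT_LUA = """
local current = redis.call('INCR', KEYS[1])
if current == 1 then
    redis.call('EXPIRE', KEYS[1], ARGV[1])
end
return current
"""

limiter = r.register_script(RATE_LIMIT_LUA)
count = limiter(keys=["ratelimit:api:client-42"], args=[60])  # 60-second window
print("requests in current window:", count)
```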
The inclusion of Redis timeseries is huge!
This was available for a long time as an extension as part of Redis Stack, but most hosted Redis providers don't make extensions available (I'm assuming due to nuances in Redis's not-quite-open licensing).
If cloud providers which include Redis are now going to include this, it opens up a lot of potential for my use case.
When do you want to store your time series data in Redis and not in a database like TimescaleDB or ClickHouse, which are optimized for on-disk storage and analytics queries?
Likely when it's small enough to keep in RAM and you want to do some sort of on-the-fly aggregation/correlation.
Then you can usually just store it in the memory of your application. No need to complicate your stack by running another service.
When you need to be able to retrieve the timeseries data for some period of time, storing it in the application memory doesn't really work since the application will restart whenever updates are made.
Also, Redis timeseries offers the ability to downsample to some defined period, which is really handy (and AFAIK isn't really provided by other timeseries databases), as well as to set a retention policy.
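Roughly what that looks like, as a sketch using redis-py's generic execute_command. This assumes the time-series commands are available on the server; key names, retention periods, and bucket sizes are just examples:

```python
import redis

r = redis.Redis()

# Raw series: keep only 1 hour of raw samples (retention is in milliseconds).
r.execute_command("TS.CREATE", "temp:raw", "RETENTION", 3_600_000)

# Downsampled series: 1-minute averages, retained for 30 days.
r.execute_command("TS.CREATE", "temp:1m_avg", "RETENTION", 2_592_000_000)
r.execute_command("TS.CREATERULE", "temp:raw", "temp:1m_avg",
                  "AGGREGATION", "avg", 60_000)

# '*' stamps the sample with the server clock.
r.execute_command("TS.ADD", "temp:raw", "*", 21.5)
```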
Some large IoT systems need ephemeral timeseries.
Which you can store just fine in-memory in a normal data structure. And if you need advanced query capabilities or a query planner there is DuckDB. Using Redis seems like you get most of the disadvantages of having to run a whole database with few of the advantages.
Your application can consist of multiple processes.
Isn't this just RocksDB?
Or DuckDB
It was always, and still is, a license issue. Redis stack had a proprietary license and now Redis has a proprietary license.
I thought people stopped using Redis and moved on to a fork because of licensing issues. Is this true?
We used Redis in a project where we also manage hosting on a semi-public cloud infrastructure, and I had trouble figuring out if the license change applied to our situation. We didn't want to pay a lawyer to figure this out, so we switched to Valkey.
No. Like, yes, they pissed a lot of people off and some people did migrate. But a large majority of "enterprise" customers didn't; it's just too much effort for a service you are paying for anyway.
I dunno, MongoDB is as if it's gone, due to a license change in 2018. So asking if Redis should be thought of the same way as MongoDB is a legitimate question.
I just gave valkey-container its 100th star https://github.com/valkey-io/valkey-container
"As if it's gone"? Is this a joke?
Mongodb is gone; everyone stopped using it. The publicly traded company behind it with thousands of employees and over a billion in revenue is a figment of your imagination.
It's kind of like how Java still exists but doesn't commonly run in browsers in the form of a Java Applet. It exists behind the scenes and I'm sure many who used to use it now use it indirectly.
It's sort of as if it's gone. TFA is about something I no longer recognize as what I used to mean when I talked about Redis. Since the license change, the project with the trademark no longer fits that concept. Valkey does. I'm not sure where I can find something that fits my old concept of MongoDB.
Valkey is still drop-in compatible for now, so migration is pretty easy.
Plus, AWS ElastiCache gives you something like a 30% price cut when you switch to the Valkey-powered engine, which makes it a pretty good incentive.
We've switched to https://github.com/microsoft/Garnet and been very happy
Only AWS did, and their fork is already lacking several important new features like HEXPIRE.
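For anyone who hasn't seen it, HEXPIRE (Redis 7.4+) puts TTLs on individual hash fields rather than on the whole key. A rough sketch via redis-py's execute_command; key and field names are just examples:

```python
import redis

r = redis.Redis()

r.hset("session:42", mapping={"token": "abc", "csrf": "xyz"})

# Expire only the 'csrf' field after 60 seconds.
# Syntax: HEXPIRE key seconds FIELDS numfields field [field ...]
r.execute_command("HEXPIRE", "session:42", 60, "FIELDS", 1, "csrf")

# HTTL reports the per-field TTL (-1 = field exists but has no TTL).
print(r.execute_command("HTTL", "session:42", "FIELDS", 2, "token", "csrf"))
```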
There is a lot of adoption of Valkey. I know it's fun to hate on AWS, but there is a wider shift than just them.
https://cloud.google.com/blog/products/databases/announcing-... https://upcloud.com/blog/now-supporting-valkey https://aiven.io/blog/introducing-aiven-for-valkey https://www.instaclustr.com/blog/valkey-now-available/ https://elest.io/open-source/valkey
>`HEXPIRE`
Finally. Hope they implement this soon in Valkey.
We're working on it, you can follow the progress here: https://github.com/valkey-io/valkey/issues/640.
It's available on Dragonfly: https://github.com/dragonflydb/dragonfly
Here are the docs: https://www.dragonflydb.io/docs/command-reference/hashes/hex...
Great, thank you so much!
Obnoxious amount of cookie/spam popups.