Better thought out caching mechanism that actually works #802
Merged
This one follows up #794 again. That change included a template cache, with the idea being that we could reuse a rendered sqlc query when all input values were expected to be stable. For example, when replacing something like a schema name, we'd presumably always be replacing the template value with the same schema over and over again, so it'd be better to save on the work.

But that caching system hadn't been adequately thought through, and wouldn't really work. I realized when I was putting #798 (explicit schemas) together that if you injected two schema values from two different clients, you'd end up using the same cached value, which is wrong.

Here, we build out the cache layer a little more so that it considers input values and named args, all of which must be consistent for a cached value to be returned. We also add tests to make sure of it.

Building a cache key unfortunately has to rely on concatenating some strings together and presorting map keys for stability, but even with the extra work involved, a cache hit still comes out quite a bit faster than a miss (~2.5x faster), so it seems worth doing:

```
$ go test ./rivershared/sqlctemplate -bench=.
goos: darwin
goarch: arm64
pkg: github.com/riverqueue/river/rivershared/sqlctemplate
cpu: Apple M4
BenchmarkReplacer/WithCache-10       1626988    735.7 ns/op
BenchmarkReplacer/WithoutCache-10     695517    1710 ns/op
PASS
ok  github.com/riverqueue/river/rivershared/sqlctemplate  3.419s
```
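For illustration, the stable-key construction described above (presorting map keys and concatenating the pieces) might look something like this sketch in Go. The function and parameter names here are hypothetical, not the actual `sqlctemplate` API:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// cacheKey builds a deterministic cache key from a template string, its
// input replacement values, and its named args. Including the input values
// in the key is what prevents two clients injecting different schemas from
// colliding on the same cached render.
//
// NOTE: cacheKey, inputValues, and namedArgs are illustrative names only.
func cacheKey(tmpl string, inputValues map[string]string, namedArgs []string) string {
	var sb strings.Builder
	sb.WriteString(tmpl)

	// Go map iteration order is randomized, so presort the keys to make
	// the concatenated result stable across calls.
	keys := make([]string, 0, len(inputValues))
	for k := range inputValues {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		// NUL separators reduce the chance of two different inputs
		// concatenating to the same string.
		sb.WriteString("\x00" + k + "=" + inputValues[k])
	}

	// Sort a copy of the named args too, leaving the caller's slice alone.
	sorted := append([]string(nil), namedArgs...)
	sort.Strings(sorted)
	sb.WriteString("\x00" + strings.Join(sorted, ","))

	return sb.String()
}

func main() {
	const tmpl = "SELECT * FROM /* TEMPLATE: schema */river_job"

	// Two clients injecting different schemas now get distinct keys.
	k1 := cacheKey(tmpl, map[string]string{"schema": "client_a."}, []string{"kind"})
	k2 := cacheKey(tmpl, map[string]string{"schema": "client_b."}, []string{"kind"})
	fmt.Println(k1 == k2) // false
}
```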
**bgentry** (Contributor) approved these changes on Mar 9, 2025:

> Looks solid, good catch on this.
The author (Contributor) replied:

> ty man.