from django.core.paginator import Paginator
from django.db import OperationalError, connection, transaction
from django.utils.functional import cached_property


class TimeLimitedPaginator(Paginator):
    """
    Paginator that enforces a timeout on the count operation.
    When the timeout is reached a "fake" large value is returned instead.

    Why does this hack exist? On every admin list view, Django issues a
    COUNT on the full queryset. There is no simple workaround. On big tables,
    this COUNT is extremely slow and makes things unbearable. This solution
    is what we came up with.
    """
    @cached_property
    def count(self):
        # We set the timeout in a db transaction to prevent it from
        # affecting other transactions.
        with transaction.atomic(), connection.cursor() as cursor:
            cursor.execute('SET LOCAL statement_timeout TO 200;')
            try:
                return super().count
            except OperationalError:
                return 9999999999
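To hook this into the admin, the paginator class goes on a ModelAdmin (a minimal sketch; MyModel and MyModelAdmin are placeholder names, not from the original):

```python
from django.contrib import admin

# Hypothetical model admin for illustration; MyModel stands in for
# whatever large table is slowing down the change list.
@admin.register(MyModel)
class MyModelAdmin(admin.ModelAdmin):
    # Django's admin uses this class for the change list's pagination,
    # so the capped COUNT applies to every admin list view of MyModel.
    paginator = TimeLimitedPaginator
    # Optionally hide the exact total, since it may be the fake value.
    show_full_result_count = False
```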
Hey @maheshgawali, very cool!
To be on the safe side, better to move the second execute to a finally clause.
Oh yes, updated, thanks for the pointer!
Just a note/question: it doesn't work on our Django deployment; it causes other admin queries to time out.
From a quick investigation, we suspect it's because we have ATOMIC_REQUESTS enabled. Under such a setup, with transaction.atomic() does NOT open a new transaction; instead it creates a savepoint, which is not enough to isolate the local statement timeout.
Maybe something to mention as a requirement of this approach in the associated blog post?
Hey @browniebroke, it makes sense that it won't work with savepoints, as you mentioned. I actually never saw this setting being used, interesting. I wonder what the use case is for setting ATOMIC_REQUESTS to True.
My suggestion to you is to fetch the current timeout before the count, set the new one, and then revert back to the previous value (preferably in a finally clause).
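That save-and-restore pattern might look something like this (a sketch against a bare DB-API cursor; statement_timeout is the hypothetical helper's name, and the SHOW syntax is PostgreSQL-specific):

```python
from contextlib import contextmanager

@contextmanager
def statement_timeout(cursor, timeout_ms):
    # Remember the current timeout so we can put it back afterwards.
    cursor.execute('SHOW statement_timeout;')
    previous = cursor.fetchone()[0]
    cursor.execute(f'SET statement_timeout TO {int(timeout_ms)};')
    try:
        yield cursor
    finally:
        # Restore the old value even if the query raised, so the
        # connection is left as we found it. No transaction needed,
        # which is why this also works with ATOMIC_REQUESTS.
        cursor.execute(f"SET statement_timeout TO '{previous}';")
```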
I dunno what changed along the way, or where exactly it's entering a transaction higher up the call stack in my project, but this wouldn't work for me until I moved the try/except outside of with transaction.atomic(). The original code hides the error inside the transaction, preventing an auto-rollback. Otherwise it's quite elegant for a (literally) quick solution to another one of Django's admin woes!
# https://medium.com/@hakibenita/optimizing-django-admin-paginator-53c4eb6bfca3
from django.core.paginator import Paginator
from django.db import OperationalError, connection, transaction
from django.utils.functional import cached_property


class TimeoutPaginator(Paginator):
    @cached_property
    def count(self):
        try:
            with transaction.atomic(), connection.cursor() as cursor:
                cursor.execute('SET LOCAL statement_timeout TO 200;')
                return super().count
        except OperationalError:
            return 9999999999
The SET LOCAL command only affects the current transaction. It shouldn't matter where the try/except is, because when the function ends the transaction ends either way: either an exception was raised, or the context was closed. But if it's working for you, stick with it!
Aye, Postgres is handy in that way, but with the try/except inside, Django wasn't rolling back the savepoint. So any outer transaction was entering an aborted state from the timeout error, and that was causing stuff down the line to fail... I think. Honestly I don't understand the inner workings in question, but as you say, if it works it works~
Hi,
Thanks for this @hakib
Modified it to use the MySQL backend: cursor.execute('SET SESSION MAX_EXECUTION_TIME=2000;')
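Put together, a MySQL flavor of the paginator might look like this (a sketch I have not tested against MySQL; MAX_EXECUTION_TIME is in milliseconds, applies only to SELECT statements, and requires MySQL 5.7.8+):

```python
from django.core.paginator import Paginator
from django.db import OperationalError, connection
from django.utils.functional import cached_property


class MySQLTimeLimitedPaginator(Paginator):
    @cached_property
    def count(self):
        with connection.cursor() as cursor:
            # SESSION scope outlives the query, so restore the default
            # in a finally clause (0 means "no limit") to avoid
            # affecting later queries on this connection.
            cursor.execute('SET SESSION MAX_EXECUTION_TIME=2000;')
            try:
                return super().count
            except OperationalError:
                return 9999999999
            finally:
                cursor.execute('SET SESSION MAX_EXECUTION_TIME=0;')
```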
ARG........
I now realize, after many lost hours, that the main reason I was repeatedly receiving strange errors I could not interpret is that
cursor.execute('SET LOCAL statement_timeout TO 200;')
was not recognized by my SQLite backend!
I have been searching the internet for a way to implement a query execution timeout on SQLite, without any success.
Could one of you tell me how I could do that?
Thanks in advance
Hey @formacude,
As far as I can tell, SQLite does not provide a way to limit the execution time of a single query the way MySQL and PostgreSQL do. The closest I could find is this.
If you come up with a solution for SQLite please share ;)
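One option worth trying from Python's standard sqlite3 module is a progress handler, which can interrupt a running statement (a sketch; execute_with_timeout is a made-up helper name, and the instruction interval of 1000 is a guess you may want to tune):

```python
import sqlite3
import time

def execute_with_timeout(conn, sql, timeout_seconds, params=()):
    # SQLite calls the handler roughly every N virtual-machine
    # instructions; a truthy return value aborts the current statement
    # with sqlite3.OperationalError.
    deadline = time.monotonic() + timeout_seconds
    conn.set_progress_handler(lambda: time.monotonic() > deadline, 1000)
    try:
        return conn.execute(sql, params).fetchall()
    finally:
        # Passing None removes the handler again.
        conn.set_progress_handler(None, 1000)
```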