Why does SQLAlchemy with psycopg2 use_native_unicode have poor performance?
Posted by Bob Dover on Stack Overflow
Published on 2012-11-20T04:57:31Z
I'm having a difficult time figuring out why a simple SELECT query takes so long with SQLAlchemy when executing raw SQL: I'm getting 14,600 rows/sec, but when running the same query through psycopg2 without SQLAlchemy, I'm getting 38,421 rows/sec.
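For reference, the psycopg2-only baseline can be measured with something like the following (a minimal sketch, not my exact benchmark code; the DSN is a placeholder):

import time
import psycopg2

# Plain psycopg2 fetch of the whole table, timed end to end (DSN is a placeholder).
conn = psycopg2.connect("dbname=mydb host=localhost")
cur = conn.cursor()
start = time.time()
cur.execute("SELECT * FROM logtable")
rows = cur.fetchall()
elapsed = time.time() - start
print("%d rows in %.2fs (%.0f rows/sec)" % (len(rows), elapsed, len(rows) / elapsed))
cur.close()
conn.close()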
After some poking around, I realized that toggling SQLAlchemy's use_native_unicode parameter in the create_engine call makes a huge difference.
This query takes 0.5 seconds to retrieve 7,300 rows:
from sqlalchemy import create_engine
engine = create_engine("postgresql+psycopg2://localhost...",
                       use_native_unicode=True)
r = engine.execute("SELECT * FROM logtable")
fetched_results = r.fetchall()
This query takes 0.19 seconds to retrieve the same 7,300 rows:
engine = create_engine("postgresql+psycopg2://localhost...",
                       use_native_unicode=False)
r = engine.execute("SELECT * FROM logtable")
fetched_results = r.fetchall()
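For what it's worth, the two timings above can be reproduced with a small harness like this (not my original measurement code; the connection URL is a placeholder):

import time
from sqlalchemy import create_engine

def time_fetch(native):
    # Build an engine with the given use_native_unicode setting and time a full fetchall().
    engine = create_engine("postgresql+psycopg2://localhost/mydb",  # placeholder URL
                           use_native_unicode=native)
    start = time.time()
    rows = engine.execute("SELECT * FROM logtable").fetchall()
    print("use_native_unicode=%s: %d rows in %.2fs" % (native, len(rows), time.time() - start))

time_fetch(True)
time_fetch(False)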
The only difference between the two queries is use_native_unicode. But SQLAlchemy's own docs state that it is better to keep use_native_unicode=True (http://docs.sqlalchemy.org/en/latest/dialects/postgresql.html).
Does anyone know why use_native_unicode is making such a big performance difference? And what are the ramifications of turning off use_native_unicode?
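One concrete check I can think of for the ramifications (this is my own guess, not something from the docs): with use_native_unicode=False, text columns fetched via raw SQL might come back undecoded, which could be verified by inspecting the Python type of a fetched value. Column 0 is assumed here to be a text column:

# Compare the Python type of the first column under each setting (assumes column 0 is text).
for native in (True, False):
    engine = create_engine("postgresql+psycopg2://localhost/mydb",  # placeholder URL
                           use_native_unicode=native)
    first_row = engine.execute("SELECT * FROM logtable").fetchall()[0]
    print(native, type(first_row[0]))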