# sqlite/base.py
# Copyright (C) 2005-2020 the SQLAlchemy authors and contributors
# <see AUTHORS file>
#
# This module is part of SQLAlchemy and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php

r"""
.. dialect:: sqlite
    :name: SQLite

.. _sqlite_datetime:

Date and Time Types
-------------------

SQLite does not have built-in DATE, TIME, or DATETIME types, and pysqlite does
not provide out of the box functionality for translating values between Python
`datetime` objects and a SQLite-supported format. SQLAlchemy's own
:class:`~sqlalchemy.types.DateTime` and related types provide date formatting
and parsing functionality when SQLite is used. The implementation classes are
:class:`_sqlite.DATETIME`, :class:`_sqlite.DATE` and :class:`_sqlite.TIME`.
These types represent dates and times as ISO formatted strings, which also
nicely support ordering. There's no reliance on typical "libc" internals for
these functions so historical dates are fully supported.
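
To illustrate, a minimal sketch of a round trip through the ISO string
storage form (the engine, table and column names here are arbitrary)::

    import datetime

    from sqlalchemy import (
        Column, DateTime, Integer, MetaData, Table, create_engine
    )

    engine = create_engine("sqlite://")
    metadata = MetaData()
    events = Table(
        "events", metadata,
        Column("id", Integer, primary_key=True),
        Column("created", DateTime),
    )
    metadata.create_all(engine)

    with engine.connect() as conn:
        conn.execute(
            events.insert(),
            {"created": datetime.datetime(2011, 3, 15, 12, 5, 57)},
        )
        # stored in SQLite as the string '2011-03-15 12:05:57.000000';
        # parsed back into a datetime.datetime on SELECT
        row = conn.execute(events.select()).first()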

Ensuring Text affinity
^^^^^^^^^^^^^^^^^^^^^^

The DDL rendered for these types is the standard ``DATE``, ``TIME``
and ``DATETIME`` indicators. However, custom storage formats can also be
applied to these types. When the storage format is detected as containing
no non-numeric characters, the DDL for these types is rendered as
``DATE_CHAR``, ``TIME_CHAR``, and ``DATETIME_CHAR``, so that the column
continues to have textual affinity.
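
As a brief sketch (table and column names are arbitrary), a storage format
consisting only of digits causes the ``_CHAR`` form of the DDL name to be
emitted::

    from sqlalchemy import Column, MetaData, Table
    from sqlalchemy.dialects.sqlite import DATE

    # "%(year)04d%(month)02d%(day)02d" renders only digits, so on SQLite
    # the column below is emitted as "d DATE_CHAR" rather than "d DATE".
    t = Table(
        "t", MetaData(),
        Column("d", DATE(
            storage_format="%(year)04d%(month)02d%(day)02d",
            regexp=r"(\d{4})(\d{2})(\d{2})",
        )),
    )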

.. seealso::

    `Type Affinity <http://www.sqlite.org/datatype3.html#affinity>`_ -
    in the SQLite documentation

.. _sqlite_autoincrement:

SQLite Auto Incrementing Behavior
----------------------------------

Background on SQLite's autoincrement is at: http://sqlite.org/autoinc.html

Key concepts:

* SQLite has an implicit "auto increment" feature that takes place for any
  non-composite primary-key column that is specifically created using
  "INTEGER PRIMARY KEY" for the type + primary key.

* SQLite also has an explicit "AUTOINCREMENT" keyword, that is **not**
  equivalent to the implicit autoincrement feature; this keyword is not
  recommended for general use. SQLAlchemy does not render this keyword
  unless a special SQLite-specific directive is used (see below). However,
  it still requires that the column's type is named "INTEGER".

Using the AUTOINCREMENT Keyword
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To specifically render the AUTOINCREMENT keyword on the primary key column
when rendering DDL, add the flag ``sqlite_autoincrement=True`` to the Table
construct::

    Table('sometable', metadata,
          Column('id', Integer, primary_key=True),
          sqlite_autoincrement=True)
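
The expected rendering for the above (a sketch of the emitted DDL, given the
single integer primary key column) is approximately::

    CREATE TABLE sometable (
        id INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT
    )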

Allowing autoincrement behavior for SQLAlchemy types other than Integer/INTEGER
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

SQLite's typing model is based on naming conventions. Among other things, this
means that any type name which contains the substring ``"INT"`` will be
determined to be of "integer affinity". A type named ``"BIGINT"``,
``"SPECIAL_INT"`` or even ``"XYZINTQPR"``, will be considered by SQLite to be
of "integer" affinity. However, **the SQLite autoincrement feature, whether
implicitly or explicitly enabled, requires that the name of the column's type
is exactly the string "INTEGER"**. Therefore, if an application uses a type
like :class:`.BigInteger` for a primary key, on SQLite this type will need to
be rendered as the name ``"INTEGER"`` when emitting the initial ``CREATE
TABLE`` statement in order for the autoincrement behavior to be available.

One approach to achieve this is to use :class:`.Integer` on SQLite
only using :meth:`.TypeEngine.with_variant`::

    table = Table(
        "my_table", metadata,
        Column("id", BigInteger().with_variant(Integer, "sqlite"), primary_key=True)
    )

Another is to use a subclass of :class:`.BigInteger` that overrides its DDL
name to be ``INTEGER`` when compiled against SQLite::

    from sqlalchemy import BigInteger
    from sqlalchemy.ext.compiler import compiles

    class SLBigInteger(BigInteger):
        pass

    @compiles(SLBigInteger, 'sqlite')
    def bi_c(element, compiler, **kw):
        return "INTEGER"

    @compiles(SLBigInteger)
    def bi_c(element, compiler, **kw):
        return compiler.visit_BIGINT(element, **kw)

    table = Table(
        "my_table", metadata,
        Column("id", SLBigInteger(), primary_key=True)
    )

.. seealso::

    :meth:`.TypeEngine.with_variant`

    :ref:`sqlalchemy.ext.compiler_toplevel`

    `Datatypes In SQLite Version 3 <http://sqlite.org/datatype3.html>`_

.. _sqlite_concurrency:

Database Locking Behavior / Concurrency
---------------------------------------

SQLite is not designed for a high level of write concurrency. The database
itself, being a file, is locked completely during write operations within
transactions, meaning exactly one "connection" (in reality a file handle)
has exclusive access to the database during this period - all other
"connections" will be blocked during this time.

The Python DBAPI specification also calls for a connection model that is
always in a transaction; there is no ``connection.begin()`` method,
only ``connection.commit()`` and ``connection.rollback()``, upon which a
new transaction is to be begun immediately. This may seem to imply
that the SQLite driver would in theory allow only a single filehandle on a
particular database file at any time; however, there are several
factors both within SQLite itself as well as within the pysqlite driver
which loosen this restriction significantly.

However, no matter what locking modes are used, SQLite will still always
lock the database file once a transaction is started and DML (e.g. INSERT,
UPDATE, DELETE) has at least been emitted, and this will block
other transactions at least at the point that they also attempt to emit DML.
By default, the length of time on this block is very short before it times out
with an error.

This behavior becomes more critical when used in conjunction with the
SQLAlchemy ORM. SQLAlchemy's :class:`.Session` object by default runs
within a transaction, and with its autoflush model, may emit DML preceding
any SELECT statement. This may lead to a SQLite database that locks
more quickly than is expected. The locking mode of SQLite and the pysqlite
driver can be manipulated to some degree, however it should be noted that
achieving a high degree of write-concurrency with SQLite is a losing battle.

For more information on SQLite's lack of write concurrency by design, please
see
`Situations Where Another RDBMS May Work Better - High Concurrency
<http://www.sqlite.org/whentouse.html>`_ near the bottom of the page.

The following subsections introduce areas that are impacted by SQLite's
file-based architecture and additionally will usually require workarounds to
work when using the pysqlite driver.

.. _sqlite_isolation_level:

Transaction Isolation Level / Autocommit
----------------------------------------

SQLite supports "transaction isolation" in a non-standard way, along two
axes. One is that of the
`PRAGMA read_uncommitted <http://www.sqlite.org/pragma.html#pragma_read_uncommitted>`_
instruction. This setting can essentially switch SQLite between its
default mode of ``SERIALIZABLE`` isolation, and a "dirty read" isolation
mode normally referred to as ``READ UNCOMMITTED``.

SQLAlchemy ties into this PRAGMA statement using the
:paramref:`_sa.create_engine.isolation_level` parameter of
:func:`_sa.create_engine`.
Valid values for this parameter when used with SQLite are ``"SERIALIZABLE"``
and ``"READ UNCOMMITTED"`` corresponding to a value of 0 and 1, respectively.
SQLite defaults to ``SERIALIZABLE``, however its behavior is impacted by
the pysqlite driver's default behavior.

When using the pysqlite driver, the ``"AUTOCOMMIT"`` isolation level is also
available, which will alter the pysqlite connection using the ``.isolation_level``
attribute on the DBAPI connection and set it to None for the duration
of the setting.

.. versionadded:: 1.3.16 added support for SQLite AUTOCOMMIT isolation level
   when using the pysqlite / sqlite3 SQLite driver.
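
For example, a minimal sketch of selecting each level at engine configuration
time (the database URL is arbitrary)::

    from sqlalchemy import create_engine

    eng_serializable = create_engine(
        "sqlite:///some.db", isolation_level="SERIALIZABLE"
    )
    eng_dirty_read = create_engine(
        "sqlite:///some.db", isolation_level="READ UNCOMMITTED"
    )
    eng_autocommit = create_engine(
        "sqlite:///some.db", isolation_level="AUTOCOMMIT"
    )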

The other axis along which SQLite's transactional locking is impacted is
via the nature of the ``BEGIN`` statement used. The three varieties
are "deferred", "immediate", and "exclusive", as described at
`BEGIN TRANSACTION <http://sqlite.org/lang_transaction.html>`_. A straight
``BEGIN`` statement uses the "deferred" mode, where the database file is
not locked until the first read or write operation, and read access remains
open to other transactions until the first write operation. But again,
it is critical to note that the pysqlite driver interferes with this behavior
by *not even emitting BEGIN* until the first write operation.

.. warning::

    SQLite's transactional scope is impacted by unresolved
    issues in the pysqlite driver, which defers BEGIN statements to a greater
    degree than is often feasible. See the section :ref:`pysqlite_serializable`
    for techniques to work around this behavior.

SAVEPOINT Support
----------------------------

SQLite supports SAVEPOINTs, which only function once a transaction is
begun. SQLAlchemy's SAVEPOINT support is available using the
:meth:`_engine.Connection.begin_nested` method at the Core level, and
:meth:`.Session.begin_nested` at the ORM level. However, SAVEPOINTs
won't work at all with pysqlite unless workarounds are taken.
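
At the Core level, a minimal sketch of SAVEPOINT usage (assuming the pysqlite
workarounds referenced below are in place; ``engine`` and ``some_table`` are
hypothetical)::

    with engine.connect() as conn:
        trans = conn.begin()
        savepoint = conn.begin_nested()  # emits SAVEPOINT
        try:
            conn.execute(some_table.insert(), {"data": 1})
            savepoint.commit()  # emits RELEASE SAVEPOINT
        except Exception:
            savepoint.rollback()  # rolls back to the SAVEPOINT only
        trans.commit()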

.. warning::

    SQLite's SAVEPOINT feature is impacted by unresolved
    issues in the pysqlite driver, which defers BEGIN statements to a greater
    degree than is often feasible. See the section :ref:`pysqlite_serializable`
    for techniques to work around this behavior.

Transactional DDL
----------------------------

The SQLite database supports transactional :term:`DDL` as well.
In this case, the pysqlite driver not only fails to start transactions,
it also ends any existing transaction when DDL is detected, so again,
workarounds are required.

.. warning::

    SQLite's transactional DDL is impacted by unresolved issues
    in the pysqlite driver, which fails to emit BEGIN and additionally
    forces a COMMIT to cancel any transaction when DDL is encountered.
    See the section :ref:`pysqlite_serializable`
    for techniques to work around this behavior.

.. _sqlite_foreign_keys:

Foreign Key Support
-------------------

SQLite supports FOREIGN KEY syntax when emitting CREATE statements for tables,
however by default these constraints have no effect on the operation of the
table.

Constraint checking on SQLite has three prerequisites:

* At least version 3.6.19 of SQLite must be in use
* The SQLite library must *not* be compiled with the SQLITE_OMIT_FOREIGN_KEY
  or SQLITE_OMIT_TRIGGER symbols
* The ``PRAGMA foreign_keys = ON`` statement must be emitted on all
  connections before use.

SQLAlchemy allows for the ``PRAGMA`` statement to be emitted automatically for
new connections through the usage of events::

    from sqlalchemy.engine import Engine
    from sqlalchemy import event

    @event.listens_for(Engine, "connect")
    def set_sqlite_pragma(dbapi_connection, connection_record):
        cursor = dbapi_connection.cursor()
        cursor.execute("PRAGMA foreign_keys=ON")
        cursor.close()

.. warning::

    When SQLite foreign keys are enabled, it is **not possible**
    to emit CREATE or DROP statements for tables that contain
    mutually-dependent foreign key constraints;
    to emit the DDL for these tables requires that ALTER TABLE be used to
    create or drop these constraints separately, for which SQLite has
    no support.

.. seealso::

    `SQLite Foreign Key Support <http://www.sqlite.org/foreignkeys.html>`_
    - on the SQLite web site.

    :ref:`event_toplevel` - SQLAlchemy event API.

    :ref:`use_alter` - more information on SQLAlchemy's facilities for handling
    mutually-dependent foreign key constraints.

.. _sqlite_on_conflict_ddl:

ON CONFLICT support for constraints
-----------------------------------

SQLite supports a non-standard clause known as ON CONFLICT which can be applied
to primary key, unique, check, and not null constraints. In DDL, it is
rendered either within the "CONSTRAINT" clause or within the column definition
itself depending on the location of the target constraint. To render this
clause within DDL, the extension parameter ``sqlite_on_conflict`` can be
specified with a string conflict resolution algorithm within the
:class:`.PrimaryKeyConstraint`, :class:`.UniqueConstraint`, and
:class:`.CheckConstraint` objects. Within the :class:`_schema.Column` object,
there are individual parameters ``sqlite_on_conflict_not_null``,
``sqlite_on_conflict_primary_key``, and ``sqlite_on_conflict_unique``, which
each correspond to the three types of relevant constraint that can be
indicated from a :class:`_schema.Column` object.

.. seealso::

    `ON CONFLICT <https://www.sqlite.org/lang_conflict.html>`_ - in the SQLite
    documentation

.. versionadded:: 1.3

The ``sqlite_on_conflict`` parameters accept a string argument which is just
the resolution name to be chosen, which on SQLite can be one of ROLLBACK,
ABORT, FAIL, IGNORE, and REPLACE. For example, to add a UNIQUE constraint
that specifies the IGNORE algorithm::

    some_table = Table(
        'some_table', metadata,
        Column('id', Integer, primary_key=True),
        Column('data', Integer),
        UniqueConstraint('id', 'data', sqlite_on_conflict='IGNORE')
    )

The above renders CREATE TABLE DDL as::

    CREATE TABLE some_table (
        id INTEGER NOT NULL,
        data INTEGER,
        PRIMARY KEY (id),
        UNIQUE (id, data) ON CONFLICT IGNORE
    )

When using the :paramref:`_schema.Column.unique` flag to add a UNIQUE
constraint to a single column, the ``sqlite_on_conflict_unique`` parameter can
be added to the :class:`_schema.Column` as well, which will be added to the
UNIQUE constraint in the DDL::

    some_table = Table(
        'some_table', metadata,
        Column('id', Integer, primary_key=True),
        Column('data', Integer, unique=True,
               sqlite_on_conflict_unique='IGNORE')
    )

rendering::

    CREATE TABLE some_table (
        id INTEGER NOT NULL,
        data INTEGER,
        PRIMARY KEY (id),
        UNIQUE (data) ON CONFLICT IGNORE
    )

To apply the FAIL algorithm for a NOT NULL constraint,
``sqlite_on_conflict_not_null`` is used::

    some_table = Table(
        'some_table', metadata,
        Column('id', Integer, primary_key=True),
        Column('data', Integer, nullable=False,
               sqlite_on_conflict_not_null='FAIL')
    )

this renders the column inline ON CONFLICT phrase::

    CREATE TABLE some_table (
        id INTEGER NOT NULL,
        data INTEGER NOT NULL ON CONFLICT FAIL,
        PRIMARY KEY (id)
    )

Similarly, for an inline primary key, use ``sqlite_on_conflict_primary_key``::

    some_table = Table(
        'some_table', metadata,
        Column('id', Integer, primary_key=True,
               sqlite_on_conflict_primary_key='FAIL')
    )

SQLAlchemy renders the PRIMARY KEY constraint separately, so the conflict
resolution algorithm is applied to the constraint itself::

    CREATE TABLE some_table (
        id INTEGER NOT NULL,
        PRIMARY KEY (id) ON CONFLICT FAIL
    )

.. _sqlite_type_reflection:

Type Reflection
---------------

SQLite types are unlike those of most other database backends, in that
the string name of the type usually does not correspond to a "type" in a
one-to-one fashion. Instead, SQLite links per-column typing behavior
to one of five so-called "type affinities" based on a string matching
pattern for the type.

SQLAlchemy's reflection process, when inspecting types, uses a simple
lookup table to link the keywords returned to provided SQLAlchemy types.
This lookup table is present within the SQLite dialect as it is for all
other dialects. However, the SQLite dialect has a different "fallback"
routine for when a particular type name is not located in the lookup map;
it instead implements the SQLite "type affinity" scheme located at
http://www.sqlite.org/datatype3.html section 2.1.

The provided typemap will make direct associations from an exact string
name match for the following types:

:class:`_types.BIGINT`, :class:`_types.BLOB`,
:class:`_types.BOOLEAN`, :class:`_types.CHAR`,
:class:`_types.DATE`, :class:`_types.DATETIME`,
:class:`_types.DECIMAL`, :class:`_types.FLOAT`,
:class:`_types.INTEGER`, :class:`_types.NUMERIC`,
:class:`_types.REAL`, :class:`_types.SMALLINT`,
:class:`_types.TEXT`, :class:`_types.TIME`,
:class:`_types.TIMESTAMP`, :class:`_types.VARCHAR`,
:class:`_types.NVARCHAR`, :class:`_types.NCHAR`

When a type name does not match one of the above types, the "type affinity"
lookup is used instead:

* :class:`_types.INTEGER` is returned if the type name includes the
  string ``INT``
* :class:`_types.TEXT` is returned if the type name includes the
  string ``CHAR``, ``CLOB`` or ``TEXT``
* :class:`_types.NullType` is returned if the type name includes the
  string ``BLOB``
* :class:`_types.REAL` is returned if the type name includes the string
  ``REAL``, ``FLOA`` or ``DOUB``.
* Otherwise, the :class:`_types.NUMERIC` type is used.

.. versionadded:: 0.9.3 Support for SQLite type affinity rules when reflecting
   columns.
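
A short sketch of the affinity fallback in practice (the table and type names
here are hypothetical, chosen to miss the exact-match lookup table)::

    from sqlalchemy import create_engine, inspect

    engine = create_engine("sqlite://")
    engine.execute("CREATE TABLE t (x SPECIAL_INT, y XYZBLOBQPR)")

    cols = inspect(engine).get_columns("t")
    # "SPECIAL_INT" is not in the lookup table but contains "INT", so it
    # reflects as INTEGER; "XYZBLOBQPR" contains "BLOB" and reflects as
    # NullType.
    print([col["type"] for col in cols])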

.. _sqlite_partial_index:

Partial Indexes
---------------

A partial index, e.g. one which uses a WHERE clause, can be specified
with the DDL system using the argument ``sqlite_where``::

    tbl = Table('testtbl', m, Column('data', Integer))
    idx = Index('test_idx1', tbl.c.data,
                sqlite_where=and_(tbl.c.data > 5, tbl.c.data < 10))

The index will be rendered at create time as::

    CREATE INDEX test_idx1 ON testtbl (data)
    WHERE data > 5 AND data < 10

.. versionadded:: 0.9.9

.. _sqlite_dotted_column_names:

Dotted Column Names
-------------------

Using table or column names that explicitly have periods in them is
**not recommended**. While this is generally a bad idea for relational
databases in general, as the dot is a syntactically significant character,
the SQLite driver, up until version **3.10.0** of SQLite, has a bug which
requires that SQLAlchemy filter out these dots in result sets.

.. versionchanged:: 1.1

    The following SQLite issue has been resolved as of version 3.10.0
    of SQLite. SQLAlchemy as of **1.1** automatically disables its internal
    workarounds based on detection of this version.

The bug, entirely outside of SQLAlchemy, can be illustrated thusly::

    import sqlite3

    assert sqlite3.sqlite_version_info < (3, 10, 0), "bug is fixed in this version"

    conn = sqlite3.connect(":memory:")
    cursor = conn.cursor()

    cursor.execute("create table x (a integer, b integer)")
    cursor.execute("insert into x (a, b) values (1, 1)")
    cursor.execute("insert into x (a, b) values (2, 2)")

    cursor.execute("select x.a, x.b from x")
    assert [c[0] for c in cursor.description] == ['a', 'b']

    cursor.execute('''
        select x.a, x.b from x where a=1
        union
        select x.a, x.b from x where a=2
    ''')
    assert [c[0] for c in cursor.description] == ['a', 'b'], \
        [c[0] for c in cursor.description]

The second assertion fails::

    Traceback (most recent call last):
      File "test.py", line 19, in <module>
        [c[0] for c in cursor.description]
    AssertionError: ['x.a', 'x.b']

Where above, the driver incorrectly reports the names of the columns
including the name of the table, which is entirely inconsistent vs.
when the UNION is not present.

SQLAlchemy relies upon column names being predictable in how they match
to the original statement, so the SQLAlchemy dialect has no choice but
to filter these out::

    from sqlalchemy import create_engine

    eng = create_engine("sqlite://")
    conn = eng.connect()

    conn.execute("create table x (a integer, b integer)")
    conn.execute("insert into x (a, b) values (1, 1)")
    conn.execute("insert into x (a, b) values (2, 2)")

    result = conn.execute("select x.a, x.b from x")
    assert result.keys() == ["a", "b"]

    result = conn.execute('''
        select x.a, x.b from x where a=1
        union
        select x.a, x.b from x where a=2
    ''')
    assert result.keys() == ["a", "b"]

Note that above, even though SQLAlchemy filters out the dots, *both
names are still addressable*::

    >>> row = result.first()
    >>> row["a"]
    1
    >>> row["x.a"]
    1
    >>> row["b"]
    1
    >>> row["x.b"]
    1

Therefore, the workaround applied by SQLAlchemy only impacts
:meth:`_engine.ResultProxy.keys` and :meth:`.RowProxy.keys()`
in the public API. In
the very specific case where an application is forced to use column names that
contain dots, and the functionality of :meth:`_engine.ResultProxy.keys` and
:meth:`.RowProxy.keys()` is required to return these dotted names unmodified,
the ``sqlite_raw_colnames`` execution option may be provided, either on a
per-:class:`_engine.Connection` basis::

    result = conn.execution_options(sqlite_raw_colnames=True).execute('''
        select x.a, x.b from x where a=1
        union
        select x.a, x.b from x where a=2
    ''')
    assert result.keys() == ["x.a", "x.b"]

or on a per-:class:`_engine.Engine` basis::

    engine = create_engine("sqlite://", execution_options={"sqlite_raw_colnames": True})

When using the per-:class:`_engine.Engine` execution option, note that
**Core and ORM queries that use UNION may not function properly**.

"""  # noqa

import datetime
import numbers
import re

from .json import JSON
from .json import JSONIndexType
from .json import JSONPathType
from ... import exc
from ... import processors
from ... import schema as sa_schema
from ... import sql
from ... import types as sqltypes
from ... import util
from ...engine import default
from ...engine import reflection
from ...sql import ColumnElement
from ...sql import compiler
from ...types import BLOB  # noqa
from ...types import BOOLEAN  # noqa
from ...types import CHAR  # noqa
from ...types import DECIMAL  # noqa
from ...types import FLOAT  # noqa
from ...types import INTEGER  # noqa
from ...types import NUMERIC  # noqa
from ...types import REAL  # noqa
from ...types import SMALLINT  # noqa
from ...types import TEXT  # noqa
from ...types import TIMESTAMP  # noqa
from ...types import VARCHAR  # noqa


class _SQliteJson(JSON):
    def result_processor(self, dialect, coltype):
        default_processor = super(_SQliteJson, self).result_processor(
            dialect, coltype
        )

        def process(value):
            try:
                return default_processor(value)
            except TypeError:
                if isinstance(value, numbers.Number):
                    return value
                else:
                    raise

        return process


class _DateTimeMixin(object):
    _reg = None
    _storage_format = None

    def __init__(self, storage_format=None, regexp=None, **kw):
        super(_DateTimeMixin, self).__init__(**kw)
        if regexp is not None:
            self._reg = re.compile(regexp)
        if storage_format is not None:
            self._storage_format = storage_format

    @property
    def format_is_text_affinity(self):
        """return True if the storage format will automatically imply
        a TEXT affinity.

        If the storage format contains no non-numeric characters,
        it will imply a NUMERIC storage format on SQLite; in this case,
        the type will generate its DDL as DATE_CHAR, DATETIME_CHAR,
        TIME_CHAR.

        .. versionadded:: 1.0.0

        """
        spec = self._storage_format % {
            "year": 0,
            "month": 0,
            "day": 0,
            "hour": 0,
            "minute": 0,
            "second": 0,
            "microsecond": 0,
        }
        return bool(re.search(r"[^0-9]", spec))

    def adapt(self, cls, **kw):
        if issubclass(cls, _DateTimeMixin):
            if self._storage_format:
                kw["storage_format"] = self._storage_format
            if self._reg:
                kw["regexp"] = self._reg
        return super(_DateTimeMixin, self).adapt(cls, **kw)

    def literal_processor(self, dialect):
        bp = self.bind_processor(dialect)

        def process(value):
            return "'%s'" % bp(value)

        return process


class DATETIME(_DateTimeMixin, sqltypes.DateTime):
    r"""Represent a Python datetime object in SQLite using a string.

    The default string storage format is::

        "%(year)04d-%(month)02d-%(day)02d %(hour)02d:%(minute)02d:%(second)02d.%(microsecond)06d"

    e.g.::

        2011-03-15 12:05:57.105580

    The storage format can be customized to some degree using the
    ``storage_format`` and ``regexp`` parameters, such as::

        import re
        from sqlalchemy.dialects.sqlite import DATETIME

        dt = DATETIME(storage_format="%(year)04d/%(month)02d/%(day)02d "
                                     "%(hour)02d:%(minute)02d:%(second)02d",
                      regexp=r"(\d+)/(\d+)/(\d+) (\d+):(\d+):(\d+)"
                      )

    :param storage_format: format string which will be applied to the dict
     with keys year, month, day, hour, minute, second, and microsecond.

    :param regexp: regular expression which will be applied to incoming result
     rows. If the regexp contains named groups, the resulting match dict is
     applied to the Python datetime() constructor as keyword arguments.
     Otherwise, if positional groups are used, the datetime() constructor
     is called with positional arguments via
     ``*map(int, match_obj.groups(0))``.

    """  # noqa

    _storage_format = (
        "%(year)04d-%(month)02d-%(day)02d "
        "%(hour)02d:%(minute)02d:%(second)02d.%(microsecond)06d"
    )

    def __init__(self, *args, **kwargs):
        truncate_microseconds = kwargs.pop("truncate_microseconds", False)
        super(DATETIME, self).__init__(*args, **kwargs)
        if truncate_microseconds:
            assert "storage_format" not in kwargs, (
                "You can specify only "
                "one of truncate_microseconds or storage_format."
            )
            assert "regexp" not in kwargs, (
                "You can specify only one of "
                "truncate_microseconds or regexp."
            )
            self._storage_format = (
                "%(year)04d-%(month)02d-%(day)02d "
                "%(hour)02d:%(minute)02d:%(second)02d"
            )

    def bind_processor(self, dialect):
        datetime_datetime = datetime.datetime
        datetime_date = datetime.date
        format_ = self._storage_format

        def process(value):
            if value is None:
                return None
            elif isinstance(value, datetime_datetime):
                return format_ % {
                    "year": value.year,
                    "month": value.month,
                    "day": value.day,
                    "hour": value.hour,
                    "minute": value.minute,
                    "second": value.second,
                    "microsecond": value.microsecond,
                }
            elif isinstance(value, datetime_date):
                return format_ % {
                    "year": value.year,
                    "month": value.month,
                    "day": value.day,
                    "hour": 0,
                    "minute": 0,
                    "second": 0,
                    "microsecond": 0,
                }
            else:
                raise TypeError(
                    "SQLite DateTime type only accepts Python "
                    "datetime and date objects as input."
                )

        return process

    def result_processor(self, dialect, coltype):
        if self._reg:
            return processors.str_to_datetime_processor_factory(
                self._reg, datetime.datetime
            )
        else:
            return processors.str_to_datetime


class DATE(_DateTimeMixin, sqltypes.Date):
    r"""Represent a Python date object in SQLite using a string.

    The default string storage format is::

        "%(year)04d-%(month)02d-%(day)02d"

    e.g.::

        2011-03-15

    The storage format can be customized to some degree using the
    ``storage_format`` and ``regexp`` parameters, such as::

        import re
        from sqlalchemy.dialects.sqlite import DATE

        d = DATE(
            storage_format="%(month)02d/%(day)02d/%(year)04d",
            regexp=re.compile("(?P<month>\d+)/(?P<day>\d+)/(?P<year>\d+)")
        )

    :param storage_format: format string which will be applied to the
     dict with keys year, month, and day.

    :param regexp: regular expression which will be applied to
     incoming result rows. If the regexp contains named groups, the
     resulting match dict is applied to the Python date() constructor
     as keyword arguments. Otherwise, if positional groups are used, the
     date() constructor is called with positional arguments via
     ``*map(int, match_obj.groups(0))``.
    """

    _storage_format = "%(year)04d-%(month)02d-%(day)02d"

    def bind_processor(self, dialect):
        datetime_date = datetime.date
        format_ = self._storage_format

        def process(value):
            if value is None:
                return None
            elif isinstance(value, datetime_date):
                return format_ % {
                    "year": value.year,
                    "month": value.month,
                    "day": value.day,
                }
            else:
                raise TypeError(
                    "SQLite Date type only accepts Python "
                    "date objects as input."
                )

        return process

    def result_processor(self, dialect, coltype):
        if self._reg:
            return processors.str_to_datetime_processor_factory(
                self._reg, datetime.date
            )
        else:
            return processors.str_to_date


class TIME(_DateTimeMixin, sqltypes.Time):
    r"""Represent a Python time object in SQLite using a string.

    The default string storage format is::

        "%(hour)02d:%(minute)02d:%(second)02d.%(microsecond)06d"

    e.g.::

        12:05:57.105580

    The storage format can be customized to some degree using the
    ``storage_format`` and ``regexp`` parameters, such as::

        import re
        from sqlalchemy.dialects.sqlite import TIME

        t = TIME(storage_format="%(hour)02d-%(minute)02d-"
                                "%(second)02d-%(microsecond)06d",
                 regexp=re.compile("(\d+)-(\d+)-(\d+)(?:-(\d+))?")
                 )

    :param storage_format: format string which will be applied to the dict
     with keys hour, minute, second, and microsecond.

    :param regexp: regular expression which will be applied to incoming result
     rows. If the regexp contains named groups, the resulting match dict is
     applied to the Python time() constructor as keyword arguments. Otherwise,
     if positional groups are used, the time() constructor is called with
     positional arguments via ``*map(int, match_obj.groups(0))``.
    """

    _storage_format = "%(hour)02d:%(minute)02d:%(second)02d.%(microsecond)06d"

    def __init__(self, *args, **kwargs):
        truncate_microseconds = kwargs.pop("truncate_microseconds", False)
        super(TIME, self).__init__(*args, **kwargs)
        if truncate_microseconds:
            assert "storage_format" not in kwargs, (
                "You can specify only "
                "one of truncate_microseconds or storage_format."
            )
            assert "regexp" not in kwargs, (
                "You can specify only one of "
                "truncate_microseconds or regexp."
            )
            self._storage_format = "%(hour)02d:%(minute)02d:%(second)02d"

    def bind_processor(self, dialect):
        datetime_time = datetime.time
        format_ = self._storage_format

        def process(value):
            if value is None:
                return None
            elif isinstance(value, datetime_time):
                return format_ % {
                    "hour": value.hour,
                    "minute": value.minute,
                    "second": value.second,
                    "microsecond": value.microsecond,
                }
            else:
                raise TypeError(
                    "SQLite Time type only accepts Python "
                    "time objects as input."
                )

        return process

    def result_processor(self, dialect, coltype):
        if self._reg:
            return processors.str_to_datetime_processor_factory(
                self._reg, datetime.time
            )
        else:
            return processors.str_to_time


colspecs = {
    sqltypes.Date: DATE,
    sqltypes.DateTime: DATETIME,
    sqltypes.JSON: _SQliteJson,
    sqltypes.JSON.JSONIndexType: JSONIndexType,
    sqltypes.JSON.JSONPathType: JSONPathType,
    sqltypes.Time: TIME,
}

ischema_names = {
    "BIGINT": sqltypes.BIGINT,
    "BLOB": sqltypes.BLOB,
    "BOOL": sqltypes.BOOLEAN,
    "BOOLEAN": sqltypes.BOOLEAN,
    "CHAR": sqltypes.CHAR,
    "DATE": sqltypes.DATE,
    "DATE_CHAR": sqltypes.DATE,
    "DATETIME": sqltypes.DATETIME,
    "DATETIME_CHAR": sqltypes.DATETIME,
    "DOUBLE": sqltypes.FLOAT,
    "DECIMAL": sqltypes.DECIMAL,
    "FLOAT": sqltypes.FLOAT,
    "INT": sqltypes.INTEGER,
    "INTEGER": sqltypes.INTEGER,
    "JSON": JSON,
    "NUMERIC": sqltypes.NUMERIC,
    "REAL": sqltypes.REAL,
    "SMALLINT": sqltypes.SMALLINT,
    "TEXT": sqltypes.TEXT,
    "TIME": sqltypes.TIME,
    "TIME_CHAR": sqltypes.TIME,
    "TIMESTAMP": sqltypes.TIMESTAMP,
    "VARCHAR": sqltypes.VARCHAR,
    "NVARCHAR": sqltypes.NVARCHAR,
    "NCHAR": sqltypes.NCHAR,
}


class SQLiteCompiler(compiler.SQLCompiler):
    extract_map = util.update_copy(
        compiler.SQLCompiler.extract_map,
        {
            "month": "%m",
            "day": "%d",
            "year": "%Y",
            "second": "%S",
            "hour": "%H",
            "doy": "%j",
            "minute": "%M",
            "epoch": "%s",
            "dow": "%w",
            "week": "%W",
        },
    )

    def visit_now_func(self, fn, **kw):
        return "CURRENT_TIMESTAMP"

    def visit_localtimestamp_func(self, func, **kw):
        return 'DATETIME(CURRENT_TIMESTAMP, "localtime")'

    def visit_true(self, expr, **kw):
        return "1"

    def visit_false(self, expr, **kw):
        return "0"

    def visit_char_length_func(self, fn, **kw):
        return "length%s" % self.function_argspec(fn)

    def visit_cast(self, cast, **kwargs):
        if self.dialect.supports_cast:
            return super(SQLiteCompiler, self).visit_cast(cast, **kwargs)
        else:
            return self.process(cast.clause, **kwargs)

    def visit_extract(self, extract, **kw):
        try:
            return "CAST(STRFTIME('%s', %s) AS INTEGER)" % (
                self.extract_map[extract.field],
                self.process(extract.expr, **kw),
            )
        except KeyError as err:
            util.raise_(
                exc.CompileError(
                    "%s is not a valid extract argument." % extract.field
                ),
                replace_context=err,
            )

    def limit_clause(self, select, **kw):
        text = ""
        if select._limit_clause is not None:
            text += "\n LIMIT " + self.process(select._limit_clause, **kw)
        if select._offset_clause is not None:
            if select._limit_clause is None:
                text += "\n LIMIT " + self.process(sql.literal(-1))
            text += " OFFSET " + self.process(select._offset_clause, **kw)
        else:
            text += " OFFSET " + self.process(sql.literal(0), **kw)
        return text

    def for_update_clause(self, select, **kw):
        # sqlite has no "FOR UPDATE" AFAICT
        return ""

    def visit_is_distinct_from_binary(self, binary, operator, **kw):
        return "%s IS NOT %s" % (
            self.process(binary.left),
            self.process(binary.right),
        )

    def visit_isnot_distinct_from_binary(self, binary, operator, **kw):
        return "%s IS %s" % (
            self.process(binary.left),
            self.process(binary.right),
        )

    def visit_json_getitem_op_binary(self, binary, operator, **kw):
        if binary.type._type_affinity is sqltypes.JSON:
            expr = "JSON_QUOTE(JSON_EXTRACT(%s, %s))"
        else:
            expr = "JSON_EXTRACT(%s, %s)"

        return expr % (
            self.process(binary.left, **kw),
            self.process(binary.right, **kw),
        )

    def visit_json_path_getitem_op_binary(self, binary, operator, **kw):
        if binary.type._type_affinity is sqltypes.JSON:
            expr = "JSON_QUOTE(JSON_EXTRACT(%s, %s))"
        else:
            expr = "JSON_EXTRACT(%s, %s)"

        return expr % (
            self.process(binary.left, **kw),
            self.process(binary.right, **kw),
        )

    def visit_empty_set_expr(self, element_types):
        return "SELECT %s FROM (SELECT %s) WHERE 1!=1" % (
            ", ".join("1" for type_ in element_types or [INTEGER()]),
            ", ".join("1" for type_ in element_types or [INTEGER()]),
        )


class SQLiteDDLCompiler(compiler.DDLCompiler):
    def get_column_specification(self, column, **kwargs):

        coltype = self.dialect.type_compiler.process(
            column.type, type_expression=column
        )
        colspec = self.preparer.format_column(column) + " " + coltype
        default = self.get_column_default_string(column)
        if default is not None:
            if isinstance(column.server_default.arg, ColumnElement):
                default = "(" + default + ")"
            colspec += " DEFAULT " + default

        if not column.nullable:
            colspec += " NOT NULL"

            on_conflict_clause = column.dialect_options["sqlite"][
                "on_conflict_not_null"
            ]
            if on_conflict_clause is not None:
                colspec += " ON CONFLICT " + on_conflict_clause

        if column.primary_key:
            if (
                column.autoincrement is True
                and len(column.table.primary_key.columns) != 1
            ):
                raise exc.CompileError(
                    "SQLite does not support autoincrement for "
                    "composite primary keys"
                )

            if (
                column.table.dialect_options["sqlite"]["autoincrement"]
                and len(column.table.primary_key.columns) == 1
                and issubclass(column.type._type_affinity, sqltypes.Integer)
                and not column.foreign_keys
            ):
                colspec += " PRIMARY KEY"

                on_conflict_clause = column.dialect_options["sqlite"][
                    "on_conflict_primary_key"
                ]
                if on_conflict_clause is not None:
                    colspec += " ON CONFLICT " + on_conflict_clause

                colspec += " AUTOINCREMENT"

        if column.computed is not None:
            colspec += " " + self.process(column.computed)

        return colspec

    def visit_primary_key_constraint(self, constraint):
        # for columns with sqlite_autoincrement=True,
        # the PRIMARY KEY constraint can only be inline
        # with the column itself.
        if len(constraint.columns) == 1:
            c = list(constraint)[0]
            if (
                c.primary_key
                and c.table.dialect_options["sqlite"]["autoincrement"]
                and issubclass(c.type._type_affinity, sqltypes.Integer)
                and not c.foreign_keys
            ):
                return None

        text = super(SQLiteDDLCompiler, self).visit_primary_key_constraint(
            constraint
        )

        on_conflict_clause = constraint.dialect_options["sqlite"][
            "on_conflict"
        ]
        if on_conflict_clause is None and len(constraint.columns) == 1:
            on_conflict_clause = list(constraint)[0].dialect_options["sqlite"][
                "on_conflict_primary_key"
            ]

        if on_conflict_clause is not None:
            text += " ON CONFLICT " + on_conflict_clause

        return text

    def visit_unique_constraint(self, constraint):
        text = super(SQLiteDDLCompiler, self).visit_unique_constraint(
            constraint
        )

        on_conflict_clause = constraint.dialect_options["sqlite"][
            "on_conflict"
        ]
        if on_conflict_clause is None and len(constraint.columns) == 1:
            on_conflict_clause = list(constraint)[0].dialect_options["sqlite"][
                "on_conflict_unique"
            ]

        if on_conflict_clause is not None:
            text += " ON CONFLICT " + on_conflict_clause

        return text

    def visit_check_constraint(self, constraint):
        text = super(SQLiteDDLCompiler, self).visit_check_constraint(
            constraint
        )

        on_conflict_clause = constraint.dialect_options["sqlite"][
            "on_conflict"
        ]

        if on_conflict_clause is not None:
            text += " ON CONFLICT " + on_conflict_clause

        return text

    def visit_column_check_constraint(self, constraint):
        text = super(SQLiteDDLCompiler, self).visit_column_check_constraint(
            constraint
        )

        if constraint.dialect_options["sqlite"]["on_conflict"] is not None:
            raise exc.CompileError(
                "SQLite does not support on conflict clause for "
                "column check constraint"
            )

        return text

    def visit_foreign_key_constraint(self, constraint):

        local_table = constraint.elements[0].parent.table
        remote_table = constraint.elements[0].column.table

        if local_table.schema != remote_table.schema:
            return None
        else:
            return super(SQLiteDDLCompiler, self).visit_foreign_key_constraint(
                constraint
            )

    def define_constraint_remote_table(self, constraint, table, preparer):
        """Format the remote table clause of a CREATE CONSTRAINT clause."""

        return preparer.format_table(table, use_schema=False)

    def visit_create_index(
        self, create, include_schema=False, include_table_schema=True
    ):
        index = create.element
        self._verify_index_table(index)
        preparer = self.preparer
        text = "CREATE "
        if index.unique:
            text += "UNIQUE "
        text += "INDEX %s ON %s (%s)" % (
            self._prepared_index_name(index, include_schema=True),
            preparer.format_table(index.table, use_schema=False),
            ", ".join(
                self.sql_compiler.process(
                    expr, include_table=False, literal_binds=True
                )
                for expr in index.expressions
            ),
        )

        whereclause = index.dialect_options["sqlite"]["where"]
        if whereclause is not None:
            where_compiled = self.sql_compiler.process(
                whereclause, include_table=False, literal_binds=True
            )
            text += " WHERE " + where_compiled

        return text


class SQLiteTypeCompiler(compiler.GenericTypeCompiler):
    def visit_large_binary(self, type_, **kw):
        return self.visit_BLOB(type_)

    def visit_DATETIME(self, type_, **kw):
        if (
            not isinstance(type_, _DateTimeMixin)
            or type_.format_is_text_affinity
        ):
            return super(SQLiteTypeCompiler, self).visit_DATETIME(type_)
        else:
            return "DATETIME_CHAR"

    def visit_DATE(self, type_, **kw):
        if (
            not isinstance(type_, _DateTimeMixin)
            or type_.format_is_text_affinity
        ):
            return super(SQLiteTypeCompiler, self).visit_DATE(type_)
        else:
            return "DATE_CHAR"

    def visit_TIME(self, type_, **kw):
        if (
            not isinstance(type_, _DateTimeMixin)
            or type_.format_is_text_affinity
        ):
            return super(SQLiteTypeCompiler, self).visit_TIME(type_)
        else:
            return "TIME_CHAR"

    def visit_JSON(self, type_, **kw):
        # note this name provides NUMERIC affinity, not TEXT.
        # should not be an issue unless the JSON value consists of a single
        # numeric value. JSONTEXT can be used if this case is required.
        return "JSON"


class SQLiteIdentifierPreparer(compiler.IdentifierPreparer):
    reserved_words = set(
        [
            "add",
            "after",
            "all",
            "alter",
            "analyze",
            "and",
            "as",
            "asc",
            "attach",
            "autoincrement",
            "before",
            "begin",
            "between",
            "by",
            "cascade",
            "case",
            "cast",
            "check",
            "collate",
            "column",
            "commit",
            "conflict",
            "constraint",
            "create",
            "cross",
            "current_date",
            "current_time",
            "current_timestamp",
            "database",
            "default",
            "deferrable",
            "deferred",
            "delete",
            "desc",
            "detach",
            "distinct",
            "drop",
            "each",
            "else",
            "end",
            "escape",
            "except",
            "exclusive",
            "exists",
            "explain",
            "false",
            "fail",
            "for",
            "foreign",
            "from",
            "full",
            "glob",
            "group",
            "having",
            "if",
            "ignore",
            "immediate",
            "in",
            "index",
            "indexed",
            "initially",
            "inner",
            "insert",
            "instead",
            "intersect",
            "into",
            "is",
            "isnull",
            "join",
            "key",
            "left",
            "like",
            "limit",
            "match",
            "natural",
            "not",
            "notnull",
            "null",
            "of",
            "offset",
            "on",
            "or",
            "order",
            "outer",
            "plan",
            "pragma",
            "primary",
            "query",
            "raise",
            "references",
            "reindex",
            "rename",
            "replace",
            "restrict",
            "right",
            "rollback",
            "row",
            "select",
            "set",
            "table",
            "temp",
            "temporary",
            "then",
            "to",
            "transaction",
            "trigger",
            "true",
            "union",
            "unique",
            "update",
            "using",
            "vacuum",
            "values",
            "view",
            "virtual",
            "when",
            "where",
        ]
    )


class SQLiteExecutionContext(default.DefaultExecutionContext):
    @util.memoized_property
    def _preserve_raw_colnames(self):
        return (
            not self.dialect._broken_dotted_colnames
            or self.execution_options.get("sqlite_raw_colnames", False)
        )

    def _translate_colname(self, colname):
        # TODO: detect SQLite version 3.10.0 or greater;
        # see [ticket:3633]

        # adjust for dotted column names. SQLite
        # in the case of UNION may store col names as
        # "tablename.colname", or if using an attached database,
        # "database.tablename.colname", in cursor.description
        if not self._preserve_raw_colnames and "." in colname:
            return colname.split(".")[-1], colname
        else:
            return colname, None


class SQLiteDialect(default.DefaultDialect):
    name = "sqlite"
    supports_alter = False
    supports_unicode_statements = True
    supports_unicode_binds = True
    supports_default_values = True
    supports_empty_insert = False
    supports_cast = True
    supports_multivalues_insert = True
    tuple_in_values = True

    default_paramstyle = "qmark"
    execution_ctx_cls = SQLiteExecutionContext
    statement_compiler = SQLiteCompiler
    ddl_compiler = SQLiteDDLCompiler
    type_compiler = SQLiteTypeCompiler
    preparer = SQLiteIdentifierPreparer
    ischema_names = ischema_names
    colspecs = colspecs
    isolation_level = None

    construct_arguments = [
        (sa_schema.Table, {"autoincrement": False}),
        (sa_schema.Index, {"where": None}),
        (
            sa_schema.Column,
            {
                "on_conflict_primary_key": None,
                "on_conflict_not_null": None,
                "on_conflict_unique": None,
            },
        ),
        (sa_schema.Constraint, {"on_conflict": None}),
    ]

    _broken_fk_pragma_quotes = False
    _broken_dotted_colnames = False

    @util.deprecated_params(
        _json_serializer=(
            "1.3.7",
            "The _json_serializer argument to the SQLite dialect has "
            "been renamed to the correct name of json_serializer. The old "
            "argument name will be removed in a future release.",
        ),
        _json_deserializer=(
            "1.3.7",
            "The _json_deserializer argument to the SQLite dialect has "
            "been renamed to the correct name of json_deserializer. The old "
            "argument name will be removed in a future release.",
        ),
    )
    def __init__(
        self,
        isolation_level=None,
        native_datetime=False,
        json_serializer=None,
        json_deserializer=None,
        _json_serializer=None,
        _json_deserializer=None,
        **kwargs
    ):
        default.DefaultDialect.__init__(self, **kwargs)
        self.isolation_level = isolation_level

        if _json_serializer:
            json_serializer = _json_serializer
        if _json_deserializer:
            json_deserializer = _json_deserializer
        self._json_serializer = json_serializer
        self._json_deserializer = json_deserializer

        # this flag used by pysqlite dialect, and perhaps others in the
        # future, to indicate the driver is handling date/timestamp
        # conversions (and perhaps datetime/time as well on some hypothetical
        # driver ?)
        self.native_datetime = native_datetime

        if self.dbapi is not None:
            self.supports_right_nested_joins = (
                self.dbapi.sqlite_version_info >= (3, 7, 16)
            )
            self._broken_dotted_colnames = self.dbapi.sqlite_version_info < (
                3,
                10,
                0,
            )
            self.supports_default_values = self.dbapi.sqlite_version_info >= (
                3,
                3,
                8,
            )
            self.supports_cast = self.dbapi.sqlite_version_info >= (3, 2, 3)
            self.supports_multivalues_insert = (
                # http://www.sqlite.org/releaselog/3_7_11.html
                self.dbapi.sqlite_version_info
                >= (3, 7, 11)
            )
            # see http://www.sqlalchemy.org/trac/ticket/2568
            # as well as http://www.sqlite.org/src/info/600482d161
            self._broken_fk_pragma_quotes = self.dbapi.sqlite_version_info < (
                3,
                6,
                14,
            )

    _isolation_lookup = {"READ UNCOMMITTED": 1, "SERIALIZABLE": 0}

    def set_isolation_level(self, connection, level):
        try:
            isolation_level = self._isolation_lookup[level.replace("_", " ")]
        except KeyError as err:
            util.raise_(
                exc.ArgumentError(
                    "Invalid value '%s' for isolation_level. "
                    "Valid isolation levels for %s are %s"
                    % (level, self.name, ", ".join(self._isolation_lookup))
                ),
                replace_context=err,
            )
        cursor = connection.cursor()
        cursor.execute("PRAGMA read_uncommitted = %d" % isolation_level)
        cursor.close()

    def get_isolation_level(self, connection):
        cursor = connection.cursor()
        cursor.execute("PRAGMA read_uncommitted")
        res = cursor.fetchone()
        if res:
            value = res[0]
        else:
            # http://www.sqlite.org/changes.html#version_3_3_3
            # "Optional READ UNCOMMITTED isolation (instead of the
            # default isolation level of SERIALIZABLE) and
            # table level locking when database connections
            # share a common cache."
            # pre-SQLite 3.3.0 defaults to 0
            value = 0
        cursor.close()
        if value == 0:
            return "SERIALIZABLE"
        elif value == 1:
            return "READ UNCOMMITTED"
        else:
            assert False, "Unknown isolation level %s" % value

    def on_connect(self):
        if self.isolation_level is not None:

            def connect(conn):
                self.set_isolation_level(conn, self.isolation_level)

            return connect
        else:
            return None

    @reflection.cache
    def get_schema_names(self, connection, **kw):
        s = "PRAGMA database_list"
        dl = connection.execute(s)

        return [db[1] for db in dl if db[1] != "temp"]

    @reflection.cache
    def get_table_names(self, connection, schema=None, **kw):
        if schema is not None:
            qschema = self.identifier_preparer.quote_identifier(schema)
            master = "%s.sqlite_master" % qschema
        else:
            master = "sqlite_master"
        s = ("SELECT name FROM %s " "WHERE type='table' ORDER BY name") % (
            master,
        )
        rs = connection.execute(s)
        return [row[0] for row in rs]

    @reflection.cache
    def get_temp_table_names(self, connection, **kw):
        s = (
            "SELECT name FROM sqlite_temp_master "
            "WHERE type='table' ORDER BY name "
        )
        rs = connection.execute(s)

        return [row[0] for row in rs]

    @reflection.cache
    def get_temp_view_names(self, connection, **kw):
        s = (
            "SELECT name FROM sqlite_temp_master "
            "WHERE type='view' ORDER BY name "
        )
        rs = connection.execute(s)

        return [row[0] for row in rs]

    def has_table(self, connection, table_name, schema=None):
        info = self._get_table_pragma(
            connection, "table_info", table_name, schema=schema
        )
        return bool(info)

    @reflection.cache
    def get_view_names(self, connection, schema=None, **kw):
        if schema is not None:
            qschema = self.identifier_preparer.quote_identifier(schema)
            master = "%s.sqlite_master" % qschema
        else:
            master = "sqlite_master"
        s = ("SELECT name FROM %s " "WHERE type='view' ORDER BY name") % (
            master,
        )
        rs = connection.execute(s)

        return [row[0] for row in rs]

    @reflection.cache
    def get_view_definition(self, connection, view_name, schema=None, **kw):
        if schema is not None:
            qschema = self.identifier_preparer.quote_identifier(schema)
            master = "%s.sqlite_master" % qschema
            s = ("SELECT sql FROM %s WHERE name = '%s' " "AND type='view'") % (
                master,
                view_name,
            )
            rs = connection.execute(s)
        else:
            try:
                s = (
                    "SELECT sql FROM "
                    " (SELECT * FROM sqlite_master UNION ALL "
                    "  SELECT * FROM sqlite_temp_master) "
                    "WHERE name = '%s' "
                    "AND type='view'"
                ) % view_name
                rs = connection.execute(s)
            except exc.DBAPIError:
                s = (
                    "SELECT sql FROM sqlite_master WHERE name = '%s' "
                    "AND type='view'"
                ) % view_name
                rs = connection.execute(s)

        result = rs.fetchall()
        if result:
            return result[0].sql

    @reflection.cache
    def get_columns(self, connection, table_name, schema=None, **kw):
        pragma = "table_info"
        # computed columns are treated as hidden; they require table_xinfo
        if self.server_version_info >= (3, 31):
            pragma = "table_xinfo"
        info = self._get_table_pragma(
            connection, pragma, table_name, schema=schema
        )
        columns = []
        tablesql = None
        for row in info:
            name = row[1]
            type_ = row[2].upper()
            nullable = not row[3]
            default = row[4]
            primary_key = row[5]
            hidden = row[6] if pragma == "table_xinfo" else 0

            # hidden has value 0 for normal columns, 1 for hidden columns,
            # 2 for computed virtual columns and 3 for computed stored columns
            # https://www.sqlite.org/src/info/069351b85f9a706f60d3e98fbc8aaf40c374356b967c0464aede30ead3d9d18b
            if hidden == 1:
                continue

            generated = bool(hidden)
            persisted = hidden == 3

            if tablesql is None and generated:
                tablesql = self._get_table_sql(
                    connection, table_name, schema, **kw
                )

            columns.append(
                self._get_column_info(
                    name,
                    type_,
                    nullable,
                    default,
                    primary_key,
                    generated,
                    persisted,
                    tablesql,
                )
            )
        return columns

    def _get_column_info(
        self,
        name,
        type_,
        nullable,
        default,
        primary_key,
        generated,
        persisted,
        tablesql,
    ):

        if generated:
            # the type of a column "cc INTEGER GENERATED ALWAYS AS (1 + 42)"
            # somehow is "INTEGER GENERATED ALWAYS"
            type_ = re.sub("generated", "", type_, flags=re.IGNORECASE)
            type_ = re.sub("always", "", type_, flags=re.IGNORECASE).strip()

        coltype = self._resolve_type_affinity(type_)

        if default is not None:
            default = util.text_type(default)

        colspec = {
            "name": name,
            "type": coltype,
            "nullable": nullable,
            "default": default,
            "autoincrement": "auto",
            "primary_key": primary_key,
        }
        if generated:
            sqltext = ""
            if tablesql:
                pattern = r"[^,]*\s+AS\s+\(([^,]*)\)\s*(?:virtual|stored)?"
                match = re.search(
                    re.escape(name) + pattern, tablesql, re.IGNORECASE
                )
                if match:
                    sqltext = match.group(1)
            colspec["computed"] = {"sqltext": sqltext, "persisted": persisted}
        return colspec

    def _resolve_type_affinity(self, type_):
        """Return a data type from a reflected column, using affinity rules.

        SQLite's goal for universal compatibility introduces some complexity
        during reflection, as a column's defined type might not actually be a
        type that SQLite understands - or indeed, may not be defined *at all*.
        Internally, SQLite handles this with a 'data type affinity' for each
        column definition, mapping to one of 'TEXT', 'NUMERIC', 'INTEGER',
        'REAL', or 'NONE' (raw bits). The algorithm that determines this is
        listed in http://www.sqlite.org/datatype3.html section 2.1.

        This method allows SQLAlchemy to support that algorithm, while still
        providing access to smarter reflection utilities by recognizing
        column definitions that SQLite only supports through affinity (like
        DATE and DOUBLE).

        """

        match = re.match(r"([\w ]+)(\(.*?\))?", type_)
        if match:
            coltype = match.group(1)
            args = match.group(2)
        else:
            coltype = ""
            args = ""

        if coltype in self.ischema_names:
            coltype = self.ischema_names[coltype]
        elif "INT" in coltype:
            coltype = sqltypes.INTEGER
        elif "CHAR" in coltype or "CLOB" in coltype or "TEXT" in coltype:
            coltype = sqltypes.TEXT
        elif "BLOB" in coltype or not coltype:
            coltype = sqltypes.NullType
        elif "REAL" in coltype or "FLOA" in coltype or "DOUB" in coltype:
            coltype = sqltypes.REAL
        else:
            coltype = sqltypes.NUMERIC

        if args is not None:
            args = re.findall(r"(\d+)", args)
            try:
                coltype = coltype(*[int(a) for a in args])
            except TypeError:
                util.warn(
                    "Could not instantiate type %s with "
                    "reflected arguments %s; using no arguments."
                    % (coltype, args)
                )
                coltype = coltype()
        else:
            coltype = coltype()

        return coltype

    @reflection.cache
    def get_pk_constraint(self, connection, table_name, schema=None, **kw):
        constraint_name = None
        table_data = self._get_table_sql(connection, table_name, schema=schema)
        if table_data:
            PK_PATTERN = r"CONSTRAINT (\w+) PRIMARY KEY"
            result = re.search(PK_PATTERN, table_data, re.I)
            constraint_name = result.group(1) if result else None

        cols = self.get_columns(connection, table_name, schema, **kw)
        pkeys = []
        for col in cols:
            if col["primary_key"]:
                pkeys.append(col["name"])

        return {"constrained_columns": pkeys, "name": constraint_name}
1846 @reflection.cache
1847 def get_foreign_keys(self, connection, table_name, schema=None, **kw):
1848 # sqlite makes this *extremely difficult*.
1849 # First, use the pragma to get the actual FKs.
1850 pragma_fks = self._get_table_pragma(
1851 connection, "foreign_key_list", table_name, schema=schema
1852 )
1854 fks = {}
1856 for row in pragma_fks:
1857 (numerical_id, rtbl, lcol, rcol) = (row[0], row[2], row[3], row[4])
1859 if not rcol:
1860 # no referred column, which means it was not named in the
1861 # original DDL. The referred columns of the foreign key
1862 # constraint are therefore the primary key of the referred
1863 # table.
1864 referred_pk = self.get_pk_constraint(
1865 connection, rtbl, schema=schema, **kw
1866 )
1867 # note that if the table doesn't exist, we still get back a record;
1868 # it just has no columns in it
1869 referred_columns = referred_pk["constrained_columns"]
1870 else:
1871 # note we use this list only if this is the first column
1872 # in the constraint. for subsequent columns we ignore the
1873 # list and append "rcol" if present.
1874 referred_columns = []
1876 if self._broken_fk_pragma_quotes:
1877 rtbl = re.sub(r"^[\"\[`\']|[\"\]`\']$", "", rtbl)
1879 if numerical_id in fks:
1880 fk = fks[numerical_id]
1881 else:
1882 fk = fks[numerical_id] = {
1883 "name": None,
1884 "constrained_columns": [],
1885 "referred_schema": schema,
1886 "referred_table": rtbl,
1887 "referred_columns": referred_columns,
1888 "options": {},
1889 }
1892 fk["constrained_columns"].append(lcol)
1894 if rcol:
1895 fk["referred_columns"].append(rcol)
1897 def fk_sig(constrained_columns, referred_table, referred_columns):
1898 return (
1899 tuple(constrained_columns)
1900 + (referred_table,)
1901 + tuple(referred_columns)
1902 )
1904 # then, parse the actual SQL and attempt to find DDL that matches
1905 # the names as well. SQLite saves the DDL in whatever format
1906 # it was typed in, so we need to be liberal here.
1908 keys_by_signature = dict(
1909 (
1910 fk_sig(
1911 fk["constrained_columns"],
1912 fk["referred_table"],
1913 fk["referred_columns"],
1914 ),
1915 fk,
1916 )
1917 for fk in fks.values()
1918 )
1920 table_data = self._get_table_sql(connection, table_name, schema=schema)
1921 if table_data is None:
1922 # system tables, etc.
1923 return []
1925 def parse_fks():
1926 FK_PATTERN = (
1927 r"(?:CONSTRAINT (\w+) +)?"
1928 r"FOREIGN KEY *\( *(.+?) *\) +"
1929 r'REFERENCES +(?:(?:"(.+?)")|([a-z0-9_]+)) *\((.+?)\) *'
1930 r"((?:ON (?:DELETE|UPDATE) "
1931 r"(?:SET NULL|SET DEFAULT|CASCADE|RESTRICT|NO ACTION) *)*)"
1932 )
1933 for match in re.finditer(FK_PATTERN, table_data, re.I):
1934 (
1935 constraint_name,
1936 constrained_columns,
1937 referred_quoted_name,
1938 referred_name,
1939 referred_columns,
1940 onupdatedelete,
1941 ) = match.group(1, 2, 3, 4, 5, 6)
1942 constrained_columns = list(
1943 self._find_cols_in_sig(constrained_columns)
1944 )
1945 if not referred_columns:
1946 referred_columns = constrained_columns
1947 else:
1948 referred_columns = list(
1949 self._find_cols_in_sig(referred_columns)
1950 )
1951 referred_name = referred_quoted_name or referred_name
1952 options = {}
1954 for token in re.split(r" *\bON\b *", onupdatedelete.upper()):
1955 if token.startswith("DELETE"):
1956 options["ondelete"] = token[6:].strip()
1957 elif token.startswith("UPDATE"):
1958 options["onupdate"] = token[6:].strip()
1959 yield (
1960 constraint_name,
1961 constrained_columns,
1962 referred_name,
1963 referred_columns,
1964 options,
1965 )
1967 fkeys = []
1969 for (
1970 constraint_name,
1971 constrained_columns,
1972 referred_name,
1973 referred_columns,
1974 options,
1975 ) in parse_fks():
1976 sig = fk_sig(constrained_columns, referred_name, referred_columns)
1977 if sig not in keys_by_signature:
1978 util.warn(
1979 "WARNING: SQL-parsed foreign key constraint "
1980 "'%s' could not be located in PRAGMA "
1981 "foreign_keys for table %s" % (sig, table_name)
1982 )
1983 continue
1984 key = keys_by_signature.pop(sig)
1985 key["name"] = constraint_name
1986 key["options"] = options
1987 fkeys.append(key)
1988 # assume the remainders are the unnamed, inline constraints; just
1989 # use them as-is, since it's extremely difficult to parse inline
1990 # constraints
1991 fkeys.extend(keys_by_signature.values())
1992 return fkeys
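# --- Illustrative sketch (not part of the original source): FK_PATTERN
# applied to hypothetical DDL, showing the captured groups that feed the
# signature matching above.
import re

FK_PATTERN = (
    r"(?:CONSTRAINT (\w+) +)?"
    r"FOREIGN KEY *\( *(.+?) *\) +"
    r'REFERENCES +(?:(?:"(.+?)")|([a-z0-9_]+)) *\((.+?)\) *'
    r"((?:ON (?:DELETE|UPDATE) "
    r"(?:SET NULL|SET DEFAULT|CASCADE|RESTRICT|NO ACTION) *)*)"
)
ddl = (
    "CREATE TABLE child (pid INTEGER, CONSTRAINT fk_parent "
    "FOREIGN KEY (pid) REFERENCES parent (id) ON DELETE CASCADE)"
)
m = re.search(FK_PATTERN, ddl, re.I)
# group 3 (quoted referred table) is None here; group 4 caught the bare name
assert m.group(1, 2, 4, 5, 6) == (
    "fk_parent", "pid", "parent", "id", "ON DELETE CASCADE"
)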
1994 def _find_cols_in_sig(self, sig):
1995 for match in re.finditer(r'(?:"(.+?)")|([a-z0-9_]+)', sig, re.I):
1996 yield match.group(1) or match.group(2)
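# --- Illustrative sketch (not part of the original source): the signature
# parser yields quoted and bare identifiers alike from a column list.
import re

def find_cols_in_sig(sig):
    for match in re.finditer(r'(?:"(.+?)")|([a-z0-9_]+)', sig, re.I):
        yield match.group(1) or match.group(2)

assert list(find_cols_in_sig('"user id", name')) == ["user id", "name"]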
1998 @reflection.cache
1999 def get_unique_constraints(
2000 self, connection, table_name, schema=None, **kw
2001 ):
2003 auto_index_by_sig = {}
2004 for idx in self.get_indexes(
2005 connection,
2006 table_name,
2007 schema=schema,
2008 include_auto_indexes=True,
2009 **kw
2010 ):
2011 if not idx["name"].startswith("sqlite_autoindex"):
2012 continue
2013 sig = tuple(idx["column_names"])
2014 auto_index_by_sig[sig] = idx
2016 table_data = self._get_table_sql(
2017 connection, table_name, schema=schema, **kw
2018 )
2019 if not table_data:
2020 return []
2022 unique_constraints = []
2024 def parse_uqs():
2025 UNIQUE_PATTERN = r'(?:CONSTRAINT "?(.+?)"? +)?UNIQUE *\((.+?)\)'
2026 INLINE_UNIQUE_PATTERN = (
2027 r'(?:(".+?")|([a-z0-9]+)) +[a-z0-9_ ]+? +UNIQUE'
2028 )
2030 for match in re.finditer(UNIQUE_PATTERN, table_data, re.I):
2031 name, cols = match.group(1, 2)
2032 yield name, list(self._find_cols_in_sig(cols))
2034 # we need to match inlines as well, as we seek to differentiate
2035 # a UNIQUE constraint from a UNIQUE INDEX, even though these
2036 # are kind of the same thing :)
2037 for match in re.finditer(INLINE_UNIQUE_PATTERN, table_data, re.I):
2038 cols = list(
2039 self._find_cols_in_sig(match.group(1) or match.group(2))
2040 )
2041 yield None, cols
2043 for name, cols in parse_uqs():
2044 sig = tuple(cols)
2045 if sig in auto_index_by_sig:
2046 auto_index_by_sig.pop(sig)
2047 parsed_constraint = {"name": name, "column_names": cols}
2048 unique_constraints.append(parsed_constraint)
2049 # NOTE: auto_index_by_sig might not be empty here,
2050 # the PRIMARY KEY may have an entry.
2051 return unique_constraints
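# --- Illustrative sketch (not part of the original source): the
# table-level UNIQUE pattern above against hypothetical DDL.
import re

UNIQUE_PATTERN = r'(?:CONSTRAINT "?(.+?)"? +)?UNIQUE *\((.+?)\)'
ddl = "CREATE TABLE t (a INTEGER, b INTEGER, CONSTRAINT uq_ab UNIQUE (a, b))"
m = re.search(UNIQUE_PATTERN, ddl, re.I)
assert m.group(1, 2) == ("uq_ab", "a, b")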
2053 @reflection.cache
2054 def get_check_constraints(self, connection, table_name, schema=None, **kw):
2055 table_data = self._get_table_sql(
2056 connection, table_name, schema=schema, **kw
2057 )
2058 if not table_data:
2059 return []
2061 CHECK_PATTERN = r"(?:CONSTRAINT (\w+) +)?" r"CHECK *\( *(.+) *\),? *"
2062 check_constraints = []
2063 # NOTE: we aren't using re.S here because we actually are
2064 # taking advantage of each CHECK constraint being all on one
2065 # line in the table definition in order to delineate. This
2066 # necessarily makes assumptions as to how the CREATE TABLE
2067 # was emitted.
2068 for match in re.finditer(CHECK_PATTERN, table_data, re.I):
2069 check_constraints.append(
2070 {"sqltext": match.group(2), "name": match.group(1)}
2071 )
2073 return check_constraints
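# --- Illustrative sketch (not part of the original source): CHECK_PATTERN
# against hypothetical multi-line DDL. Because re.S is deliberately not
# used, the greedy (.+) group stops at the end of the constraint's line.
import re

CHECK_PATTERN = r"(?:CONSTRAINT (\w+) +)?CHECK *\( *(.+) *\),? *"
ddl = (
    "CREATE TABLE t (\n"
    "    age INTEGER,\n"
    "    CONSTRAINT ck_age CHECK (age >= 0),\n"
    "    name TEXT\n"
    ")"
)
m = re.search(CHECK_PATTERN, ddl, re.I)
assert m.group(1, 2) == ("ck_age", "age >= 0")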
2075 @reflection.cache
2076 def get_indexes(self, connection, table_name, schema=None, **kw):
2077 pragma_indexes = self._get_table_pragma(
2078 connection, "index_list", table_name, schema=schema
2079 )
2080 indexes = []
2082 include_auto_indexes = kw.pop("include_auto_indexes", False)
2083 for row in pragma_indexes:
2084 # ignore implicit primary key index.
2085 # http://www.mail-archive.com/sqlite-users@sqlite.org/msg30517.html
2086 if not include_auto_indexes and row[1].startswith(
2087 "sqlite_autoindex"
2088 ):
2089 continue
2090 indexes.append(dict(name=row[1], column_names=[], unique=row[2]))
2092 # loop through the reflected indexes to get the column names.
2093 for idx in list(indexes):
2094 pragma_index = self._get_table_pragma(
2095 connection, "index_info", idx["name"]
2096 )
2098 for row in pragma_index:
2099 if row[2] is None:
2100 util.warn(
2101 "Skipped unsupported reflection of "
2102 "expression-based index %s" % idx["name"]
2103 )
2104 indexes.remove(idx)
2105 break
2106 else:
2107 idx["column_names"].append(row[2])
2108 return indexes
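# --- Illustrative sketch (not part of the original source), using the
# stdlib sqlite3 module to show the PRAGMA rows this method consumes; the
# table and index names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b INTEGER UNIQUE)")
conn.execute("CREATE INDEX ix_a ON t (a)")

# index_list reports both the explicit index and the auto-index that
# backs the UNIQUE constraint on "b"
names = {row[1] for row in conn.execute("PRAGMA main.index_list(t)")}
assert "ix_a" in names
assert any(n.startswith("sqlite_autoindex") for n in names)

# index_info rows are (rank in index, rank in table, column name);
# a None column name marks an expression-based index, skipped above
info = conn.execute("PRAGMA main.index_info(ix_a)").fetchall()
assert [row[2] for row in info] == ["a"]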
2110 @reflection.cache
2111 def _get_table_sql(self, connection, table_name, schema=None, **kw):
2112 if schema:
2113 schema_expr = "%s." % (
2114 self.identifier_preparer.quote_identifier(schema)
2115 )
2116 else:
2117 schema_expr = ""
2118 try:
2119 s = (
2120 "SELECT sql FROM "
2121 " (SELECT * FROM %(schema)ssqlite_master UNION ALL "
2122 " SELECT * FROM %(schema)ssqlite_temp_master) "
2123 "WHERE name = '%(table)s' "
2124 "AND type = 'table'"
2125 % {"schema": schema_expr, "table": table_name}
2126 )
2127 rs = connection.execute(s)
2128 except exc.DBAPIError:
2129 s = (
2130 "SELECT sql FROM %(schema)ssqlite_master "
2131 "WHERE name = '%(table)s' "
2132 "AND type = 'table'"
2133 % {"schema": schema_expr, "table": table_name}
2134 )
2135 rs = connection.execute(s)
2136 return rs.scalar()
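# --- Illustrative sketch (not part of the original source): the DDL that
# every regex-based parser above operates on comes verbatim from
# sqlite_master, exactly as the user typed it.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER PRIMARY KEY)")
row = conn.execute(
    "SELECT sql FROM sqlite_master WHERE name = 't' AND type = 'table'"
).fetchone()
assert row[0] == "CREATE TABLE t (a INTEGER PRIMARY KEY)"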
2138 def _get_table_pragma(self, connection, pragma, table_name, schema=None):
2139 quote = self.identifier_preparer.quote_identifier
2140 if schema is not None:
2141 statements = ["PRAGMA %s." % quote(schema)]
2142 else:
2143 # if no schema is given, PRAGMA looks in all attached databases,
2144 # so we would need to specify the "main" schema; however, since
2145 # we want 'temp' tables in the same namespace as 'main', we run
2146 # the PRAGMA against both schemas
2147 statements = ["PRAGMA main.", "PRAGMA temp."]
2149 qtable = quote(table_name)
2150 for statement in statements:
2151 statement = "%s%s(%s)" % (statement, pragma, qtable)
2152 cursor = connection.execute(statement)
2153 if not cursor._soft_closed:
2154 # work around SQLite issue whereby cursor.description
2155 # is blank when PRAGMA returns no rows:
2156 # http://www.sqlite.org/cvstrac/tktview?tn=1884
2157 result = cursor.fetchall()
2158 else:
2159 result = []
2160 if result:
2161 return result
2162 else:
2163 return []
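# --- Illustrative sketch (not part of the original source) of the
# main/temp fallback above: a TEMP table is invisible to "PRAGMA main."
# but reachable via "PRAGMA temp.", hence the two statements tried in
# order when no schema is given.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TEMP TABLE scratch (x INTEGER)")
assert conn.execute("PRAGMA main.table_info(scratch)").fetchall() == []
assert conn.execute("PRAGMA temp.table_info(scratch)").fetchall() != []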