Operation Reference

This file provides documentation on Alembic migration directives.

The directives here are used within user-defined migration files, within the upgrade() and downgrade() functions, as well as any functions further invoked by those.

A key design philosophy of the alembic.op functions is that, to the greatest degree possible, they internally generate the appropriate SQLAlchemy metadata, typically involving Table and Constraint objects. This is so that migration instructions can be given in terms of just the string names and/or flags involved. The exceptions to this rule include the op.add_column() and op.create_table() directives, which require full Column objects, though the table metadata is still generated here.

The functions here all require that a Context has been configured within the env.py script. Under normal circumstances this is always the case, as the migration scripts are invoked via the context.run_migrations() function which ultimately is derived from the Context object.

alembic.op.add_column(table_name, column)

Issue an “add column” instruction using the current change context.

e.g.:

from alembic.op import add_column
from sqlalchemy import Column, String

add_column('organization', 
    Column('name', String())
)        

The provided Column object can also specify a ForeignKey, referencing a remote table name. Alembic will automatically generate a stub “referenced” table and emit a second ALTER statement in order to add the constraint separately:

from alembic.op import add_column
from sqlalchemy import Column, INTEGER, ForeignKey

add_column('organization', 
    Column('account_id', INTEGER, ForeignKey('accounts.id'))
)        
Parameters:
  • table_name – String name of the parent table.
  • column – a sqlalchemy.schema.Column object representing the new column.
alembic.op.alter_column(table_name, column_name, nullable=None, server_default=False, name=None, type_=None, existing_type=None, existing_server_default=False, existing_nullable=None)

Issue an “alter column” instruction using the current change context.

Generally, only that aspect of the column which is being changed, i.e. name, type, nullability, default, needs to be specified. Multiple changes can also be specified at once and the backend should “do the right thing”, emitting each change either separately or together as the backend allows.

MySQL has special requirements here, since MySQL cannot ALTER a column without a full specification. When producing MySQL-compatible migration files, it is recommended that the existing_type, existing_server_default, and existing_nullable parameters be present for any column elements that are not being altered.

Type changes which are against the SQLAlchemy “schema” types Boolean and Enum may also add or drop constraints which accompany those types on backends that don’t support them natively. The existing_type argument is used in this case as well to remove a previous constraint.

Parameters:
  • table_name – string name of the target table.
  • column_name – string name of the target column.
  • nullable – Optional; specify True or False to alter the column’s nullability.
  • server_default – Optional; specify a string SQL expression, text(), or DefaultClause to indicate an alteration to the column’s default value. Set to None to have the default removed.
  • name – Optional; specify a string name here to indicate a column rename operation.
  • type_ – Optional; a TypeEngine type object to specify a change to the column’s type. For SQLAlchemy types that also indicate a constraint (i.e. Boolean, Enum), the constraint is also generated.
  • existing_type – Optional; a TypeEngine type object to specify the previous type. This is required for all MySQL column alter operations that don’t otherwise specify a new type, as well as when nullability is being changed on a SQL Server column. It is also used if the type is a so-called SQLAlchemy “schema” type which may define a constraint (i.e. Boolean, Enum), so that the constraint can be dropped.
  • existing_server_default – Optional; The existing default value of the column. Required on MySQL if an existing default is not being changed; else MySQL removes the default.
  • existing_nullable – Optional; the existing nullability of the column. Required on MySQL if the existing nullability is not being changed; else MySQL sets this to NULL.
alembic.op.bulk_insert(table, rows)

Issue a “bulk insert” operation using the current change context.

This provides a means of representing an INSERT of multiple rows which works equally well in the context of executing on a live connection as well as that of generating a SQL script. In the case of a SQL script, the values are rendered inline into the statement.

e.g.:

from datetime import date
from sqlalchemy.sql import table, column
from sqlalchemy import String, Integer, Date
from alembic.op import bulk_insert

# Create an ad-hoc table to use for the insert statement.
accounts_table = table('account',
    column('id', Integer),
    column('name', String),
    column('create_date', Date)
)

bulk_insert(accounts_table,
    [
        {'id':1, 'name':'John Smith', 'create_date':date(2010, 10, 5)},
        {'id':2, 'name':'Ed Williams', 'create_date':date(2007, 5, 27)},
        {'id':3, 'name':'Wendy Jones', 'create_date':date(2008, 8, 15)},
    ]
)
alembic.op.create_check_constraint(name, source, condition, **kw)

Issue a “create check constraint” instruction using the current change context.

e.g.:

from alembic.op import create_check_constraint
from sqlalchemy.sql import column, func

create_check_constraint(
    "ck_user_name_len",
    "user", 
    func.len(column('name')) > 5
)

CHECK constraints are usually expressed against SQL expressions, so ad-hoc table metadata is typically needed. The function will convert the given arguments into a sqlalchemy.schema.CheckConstraint bound to an anonymous table in order to emit the CREATE statement.

Parameters:
  • name – Name of the check constraint. The name is necessary so that an ALTER statement can be emitted. For setups that use an automated naming scheme such as that described at NamingConventions, name here can be None, as the event listener will apply the name to the constraint object when it is associated with the table.
  • source – String name of the source table. Currently there is no support for dotted schema names.
  • condition – SQL expression that’s the condition of the constraint. Can be a string or SQLAlchemy expression language structure.
  • deferrable – optional bool. If set, emit DEFERRABLE or NOT DEFERRABLE when issuing DDL for this constraint.
  • initially – optional string. If set, emit INITIALLY <value> when issuing DDL for this constraint.
alembic.op.create_foreign_key(name, source, referent, local_cols, remote_cols)

Issue a “create foreign key” instruction using the current change context.

e.g.:

from alembic.op import create_foreign_key
create_foreign_key("fk_user_address", "address", "user", ["user_id"], ["id"])

This internally generates a Table object containing the necessary columns, then generates a new ForeignKeyConstraint object which it then associates with the Table. Any event listeners associated with this action will be fired off normally. The AddConstraint construct is ultimately used to generate the ALTER statement.

Parameters:
  • name

    Name of the foreign key constraint. The name is necessary so that an ALTER statement can be emitted. For setups that use an automated naming scheme such as that described at NamingConventions, name here can be None, as the event listener will apply the name to the constraint object when it is associated with the table.

  • source – String name of the source table. Currently there is no support for dotted schema names.
  • referent – String name of the destination table. Currently there is no support for dotted schema names.
  • local_cols – a list of string column names in the source table.
  • remote_cols – a list of string column names in the remote table.
alembic.op.create_index(name, tablename, *columns, **kw)

Issue a “create index” instruction using the current change context.

e.g.:

from alembic.op import create_index
create_index('ik_test', 't1', ['foo', 'bar'])
alembic.op.create_table(name, *columns, **kw)

Issue a “create table” instruction using the current change context.

This directive receives an argument list similar to that of the traditional sqlalchemy.schema.Table construct, but without the metadata:

from sqlalchemy import INTEGER, VARCHAR, NVARCHAR, Column
from alembic.op import create_table

create_table(
    'accounts',
    Column('id', INTEGER, primary_key=True),
    Column('name', VARCHAR(50), nullable=False),
    Column('description', NVARCHAR(200))
)
Parameters:
  • name – Name of the table
  • *columns – collection of Column objects within the table, as well as optional Constraint objects and Index objects.
  • emit_events – if True, emit before_create and after_create events when the table is being created. In particular, the Postgresql ENUM type will emit a CREATE TYPE within these events.
  • **kw – Other keyword arguments are passed to the underlying Table object created for the command.
alembic.op.create_unique_constraint(name, source, local_cols, **kw)

Issue a “create unique constraint” instruction using the current change context.

e.g.:

from alembic.op import create_unique_constraint
create_unique_constraint("uq_user_name", "user", ["name"])

This internally generates a Table object containing the necessary columns, then generates a new UniqueConstraint object which it then associates with the Table. Any event listeners associated with this action will be fired off normally. The AddConstraint construct is ultimately used to generate the ALTER statement.

Parameters:
  • name

    Name of the unique constraint. The name is necessary so that an ALTER statement can be emitted. For setups that use an automated naming scheme such as that described at NamingConventions, name here can be None, as the event listener will apply the name to the constraint object when it is associated with the table.

  • source – String name of the source table. Currently there is no support for dotted schema names.
  • local_cols – a list of string column names in the source table.
  • deferrable – optional bool. If set, emit DEFERRABLE or NOT DEFERRABLE when issuing DDL for this constraint.
  • initially – optional string. If set, emit INITIALLY <value> when issuing DDL for this constraint.
alembic.op.drop_column(table_name, column_name, **kw)

Issue a “drop column” instruction using the current change context.

e.g.:

from alembic.op import drop_column

drop_column('organization', 'account_id')
Parameters:
  • table_name – name of table
  • column_name – name of column
  • mssql_drop_check – Optional boolean. When True, on Microsoft SQL Server only, first drop the CHECK constraint on the column using a SQL-script-compatible block that selects into a @variable from sys.check_constraints, then exec’s a separate DROP CONSTRAINT for that constraint.
  • mssql_drop_default – Optional boolean. When True, on Microsoft SQL Server only, first drop the DEFAULT constraint on the column using a SQL-script-compatible block that selects into a @variable from sys.default_constraints, then exec’s a separate DROP CONSTRAINT for that default.
alembic.op.drop_constraint(name, tablename)

Drop a constraint of the given name from the given table, using the current change context.

alembic.op.drop_index(name)

Issue a “drop index” instruction using the current change context.

e.g.:

from alembic.op import drop_index

drop_index("accounts")
alembic.op.drop_table(name)

Issue a “drop table” instruction using the current change context.

e.g.:

from alembic.op import drop_table

drop_table("accounts")
alembic.op.execute(sql)

Execute the given SQL using the current change context.

In a SQL script context, the statement is emitted directly to the output stream. There is no return result, however, as this function is oriented towards generating a change script that can run in “offline” mode. For full interaction with a connected database, use the “bind” available from the context:

from alembic.op import get_bind
connection = get_bind()

Also note that any parameterized statement here will not work in offline mode - INSERT, UPDATE and DELETE statements which refer to literal values would need to render inline expressions. For simple use cases, the inline_literal() function can be used for rudimentary quoting of string values. For “bulk” inserts, consider using bulk_insert().

For example, to emit an UPDATE statement which is equally compatible with both online and offline mode:

from sqlalchemy.sql import table, column
from sqlalchemy import String
from alembic.op import execute, inline_literal

account = table('account', 
    column('name', String)
)
execute(
    account.update()
        .where(account.c.name == inline_literal('account 1'))
        .values({'name': inline_literal('account 2')})
)

Note above we also used the SQLAlchemy sqlalchemy.sql.expression.table() and sqlalchemy.sql.expression.column() constructs to make a brief, ad-hoc table construct just for our UPDATE statement. A full Table construct of course works perfectly fine as well, though it is recommended practice to keep the definition of a table self-contained within the migration script, rather than imported from an application module which may change over time and break compatibility with older migrations.

Parameters:
  • sql – Any legal SQLAlchemy expression, including a plain string, a text() construct, or an insert(), update(), or delete() construct.
alembic.op.get_bind()

Return the current ‘bind’.

Under normal circumstances, this is the sqlalchemy.engine.Connection currently being used to emit SQL to the database.

In a SQL script context, this value is None. [TODO: verify this]

alembic.op.get_context()

Return the current Context object.

If configure() has not been called yet, raises an exception.

Generally, env.py scripts should access the module-level functions in alembic.context to get at this object’s functionality.

alembic.op.inline_literal(value, type_=None)

Produce an ‘inline literal’ expression, suitable for using in an INSERT, UPDATE, or DELETE statement.

When using Alembic in “offline” mode, CRUD operations aren’t compatible with SQLAlchemy’s default behavior surrounding literal values, which is that they are converted into bound values and passed separately into the execute() method of the DBAPI cursor. An offline SQL script needs to have these rendered inline. While inline literal values are an enormous security hole in an application that handles untrusted input, a schema migration is not run in that context, so literals are safe to render inline, with the caveat that advanced types like dates may not be supported directly by SQLAlchemy.

See op.execute() for an example usage of inline_literal().

Parameters:
  • value – The value to render. Strings, integers, and simple numerics should be supported. Other types like boolean, dates, etc. may or may not be supported yet by various backends.
  • type_ – Optional; a sqlalchemy.types.TypeEngine subclass stating the type of this value. In SQLAlchemy expressions, this is usually derived automatically from the Python type of the value itself, as well as based on the context in which the value is used.
alembic.op.rename_table(old_table_name, new_table_name, schema=None)

Emit an ALTER TABLE to rename a table.

Parameters:
  • old_table_name – old name.
  • new_table_name – new name.
  • schema – Optional, name of schema to operate within.
