SQLAlchemy 0.4 Documentation

Version: 0.4 Last Updated: 10/17/07 13:56:48

module sqlalchemy.pool

Connection pooling for DB-API connections.

Provides a number of connection pool implementations for a variety of usage scenarios and thread behavior requirements imposed by the application, DB-API or database itself.

Also provides a DB-API 2.0 connection proxying mechanism allowing regular DB-API connect() methods to be transparently managed by a SQLAlchemy connection pool.

Module Functions

def clear_managers()

Remove all current DB-API 2.0 managers.

All pools and connections are disposed.

def manage(module, **params)

Returns a proxy for module that automatically pools connections.

Given a DB-API 2.0 module and pool management parameters, returns a proxy for the module that will automatically pool connections, creating new connection pools for each distinct set of connection arguments sent to the decorated module's connect() function.

Arguments:

module
A DB-API 2.0 database module.
poolclass
The class used by the pool module to provide pooling. Defaults to QueuePool.

See the Pool class for options.
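For example, the standard library's sqlite3 module (assumed here purely for illustration; any DB-API 2.0 module works, and the filename is a placeholder) can be proxied roughly as follows. This is a sketch, not a documented recipe:

    import sqlite3
    import sqlalchemy.pool as pool

    # Replace the module reference with a pooling proxy.
    sqlite3 = pool.manage(sqlite3, poolclass=pool.QueuePool)

    # connect() now draws from a pool keyed on its arguments; close()
    # returns the connection to that pool instead of discarding it.
    conn = sqlite3.connect('some.db')
    cursor = conn.cursor()
    cursor.execute("select 1")
    cursor.close()
    conn.close()

    # When finished, dispose of all managed pools and their connections.
    pool.clear_managers()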

class AssertionPool(Pool)

A Pool implementation that allows at most one checked out connection at a time.

This will raise an exception if more than one connection is checked out at a time. Useful for debugging code that is using more connections than desired.
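A rough sketch of the intended usage pattern; the sqlite3 creator and filename are placeholders, and use_threadlocal=False is passed so that the second connect() is a genuine second checkout rather than a threadlocal re-use:

    import sqlite3
    import sqlalchemy.pool as pool

    p = pool.AssertionPool(lambda: sqlite3.connect('some.db'),
                           use_threadlocal=False)

    c1 = p.connect()
    try:
        c2 = p.connect()    # a second checkout should raise here
    except Exception, e:
        print "second checkout rejected:", e
    c1.close()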

def __init__(self, creator, **params)

Construct a new AssertionPool.

def create_connection(self)
def do_get(self)
def do_return_conn(self, conn)
def do_return_invalid(self, conn)
def status(self)

class NullPool(Pool)

A Pool implementation which does not pool connections.

Instead, it literally opens and closes the underlying DB-API connection on each connection open and close.
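A brief sketch of direct construction (the sqlite3 creator is a placeholder); each checkout opens a new DB-API connection and each checkin closes it:

    import sqlite3
    import sqlalchemy.pool as pool

    p = pool.NullPool(lambda: sqlite3.connect('some.db'))

    conn = p.connect()   # opens a brand-new sqlite3 connection
    conn.close()         # closes it outright; nothing is retained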

def do_get(self)
def do_return_conn(self, conn)
def do_return_invalid(self, conn)
def status(self)

class Pool(object)

Base class for connection pools.

This is an abstract class, implemented by various subclasses including:

QueuePool
Pools multiple connections using Queue.Queue.
SingletonThreadPool
Stores a single connection per execution thread.
NullPool
Doesn't do any pooling; opens and closes connections.
AssertionPool
Stores only one connection, and asserts that only one connection is checked out at a time.

The main argument, creator, is a callable function that returns a newly connected DB-API connection object; a combined example follows the option list below.

Options that are understood by Pool are:

echo
If set to True, connections being checked out of and returned to the pool will be logged to standard output, as well as pool sizing information. Echoing can also be achieved by enabling logging for the "sqlalchemy.pool" namespace. Defaults to False.
use_threadlocal
If set to True, repeated calls to connect() within the same application thread are guaranteed to return the same connection object, if one has already been retrieved from the pool and has not yet been returned. This allows code to retrieve a connection from the pool and then, while still holding that connection, to call other functions which also request a connection from the same pool; those functions will act upon the same connection the calling code is using. Defaults to True.
recycle
If set to a value other than -1, the number of seconds between connection recycling: upon checkout, if this timeout has been exceeded, the connection will be closed and replaced with a newly opened connection. Defaults to -1.
listeners
A list of PoolListener-like objects that receive events when DB-API connections are created, checked out and checked in to the pool.
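As a sketch of how the creator callable and these options fit together, here is a QueuePool (the concrete implementation described below) built against the standard library's sqlite3 module; the creator, filename and option values are placeholders:

    import sqlite3
    import sqlalchemy.pool as pool

    def getconn():
        # the creator: returns a new DB-API connection
        return sqlite3.connect('some.db')

    p = pool.QueuePool(getconn,
                       echo=False,            # set True to log checkouts/checkins
                       use_threadlocal=True,  # same thread -> same connection
                       recycle=3600)          # replace connections older than an hour

    # With use_threadlocal=True, both calls below yield the same underlying
    # DB-API connection while the first is still checked out.
    c1 = p.connect()
    c2 = p.connect()
    c2.close()
    c1.close()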
def __init__(self, creator, recycle=-1, echo=None, use_threadlocal=True, listeners=None)

Construct a new Pool.

def add_listener(self, listener)

Add a PoolListener-like object to this pool.
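A hedged sketch of a "PoolListener-like" object; the class name is invented here, and the hook signatures shown (connect, checkout, checkin) are assumptions based on the event descriptions above rather than a full specification:

    import sqlite3
    import sqlalchemy.pool as pool

    class LoggingListener(object):
        def connect(self, dbapi_con, con_record):
            print "new DB-API connection:", dbapi_con
        def checkout(self, dbapi_con, con_record, con_proxy):
            print "connection checked out:", dbapi_con
        def checkin(self, dbapi_con, con_record):
            print "connection checked in:", dbapi_con

    p = pool.QueuePool(lambda: sqlite3.connect('some.db'))
    p.add_listener(LoggingListener())

    c = p.connect()   # fires connect (first use) and checkout
    c.close()         # fires checkin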

def connect(self)
def create_connection(self)
def dispose(self)

Dispose of this pool.

This method leaves open the possibility of checked-out connections remaining open, so it is advised not to reuse the pool once dispose() is called, and instead to use a new pool constructed by the recreate() method (see the sketch following this method list).

def do_get(self)
def do_return_conn(self, conn)
def get(self)
def log(self, msg)
def recreate(self)

Return a new instance of this Pool's class with identical creation arguments.

def return_conn(self, record)
def status(self)
def unique_connection(self)
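The sketch referenced under dispose() above: rather than reusing a disposed pool, replace it with a freshly constructed copy (the creator and arguments are placeholders):

    import sqlite3
    import sqlalchemy.pool as pool

    p = pool.QueuePool(lambda: sqlite3.connect('some.db'), pool_size=5)

    # ... use p ...

    new_p = p.recreate()   # same creator and arguments, no connections yet
    p.dispose()            # close whatever the old pool still holds
    p = new_p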

class QueuePool(Pool)

Use Queue.Queue to maintain a fixed-size list of connections.

Arguments include all those used by the base Pool class, as well as:

pool_size
The size of the pool to be maintained. This is the largest number of connections that will be kept persistently in the pool. Note that the pool begins with no connections; once this number of connections is requested, that number of connections will remain. Defaults to 5.
max_overflow
The maximum overflow size of the pool. When the number of checked-out connections reaches the size set in pool_size, additional connections will be returned up to this limit. When those additional connections are returned to the pool, they are disconnected and discarded. It follows then that the total number of simultaneous connections the pool will allow is pool_size + max_overflow, and the total number of "sleeping" connections the pool will allow is pool_size. max_overflow can be set to -1 to indicate no overflow limit; no limit will be placed on the total number of concurrent connections. Defaults to 10.
timeout
The number of seconds to wait before giving up on returning a connection. Defaults to 30.
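A sketch of the sizing arguments in use (the sqlite3 creator is a placeholder, and use_threadlocal=False is passed so each connect() below is a distinct checkout): at most pool_size + max_overflow connections exist at once, and at most pool_size are retained once returned:

    import sqlite3
    import sqlalchemy.pool as pool

    p = pool.QueuePool(lambda: sqlite3.connect('some.db'),
                       pool_size=5,            # connections kept when idle
                       max_overflow=10,        # extra connections allowed under load
                       timeout=30,             # seconds to wait for a free connection
                       use_threadlocal=False)

    conns = [p.connect() for i in range(7)]    # 5 pooled + 2 overflow
    for c in conns:
        c.close()                              # the 2 overflow connections are discarded
    print p.status()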
def __init__(self, creator, pool_size=5, max_overflow=10, timeout=30, **params)

Construct a new QueuePool.

def checkedin(self)
def checkedout(self)
def dispose(self)
def do_get(self)
def do_return_conn(self, conn)
def overflow(self)
def recreate(self)
def size(self)
def status(self)

class SingletonThreadPool(Pool)

Maintains a single connection per thread.

Maintains one connection per thread, never moving a connection to a thread other than the one in which it was created.

This is used for SQLite, which does not handle multithreading by default, and which also requires a single connection if a :memory: database is being used.
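A sketch of direct use with an in-memory SQLite database (the sqlite3 creator is assumed for illustration); every connect() within a given thread returns that thread's single connection:

    import sqlite3
    import sqlalchemy.pool as pool

    p = pool.SingletonThreadPool(lambda: sqlite3.connect(':memory:'))

    c1 = p.connect()
    c2 = p.connect()     # same thread, hence the same underlying connection
    c2.close()
    c1.close()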

Options are the same as those of Pool, as well as:

pool_size
The number of threads in which to maintain connections at once. Defaults to 5.
def __init__(self, creator, pool_size=5, **params)

Construct a new SingletonThreadPool.

def cleanup(self)
def dispose(self)

Dispose of this pool.

This method leaves open the possibility of checked-out connections remaining open, so it is advised not to reuse the pool once dispose() is called, and instead to use a new pool constructed by the recreate() method.

def dispose_local(self)
def do_get(self)
def do_return_conn(self, conn)
def recreate(self)
def status(self)

class StaticPool(Pool)

A Pool implementation which stores exactly one connection that is returned for all requests.
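A brief sketch (the sqlite3 creator is a placeholder): every checkout hands back a proxy for the same stored DB-API connection:

    import sqlite3
    import sqlalchemy.pool as pool

    p = pool.StaticPool(lambda: sqlite3.connect(':memory:'))

    c1 = p.connect()
    c2 = p.connect()     # wraps the same single stored connection
    c2.close()
    c1.close()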

def __init__(self, creator, **params)

Construct a new StaticPool.

def create_connection(self)
def do_get(self)
def do_return_conn(self, conn)
def do_return_invalid(self, conn)
def status(self)