Peewee comes with numerous extras which I didn’t really feel like including in the main source module, but which might be interesting to implementers or fun to mess around with.
The playhouse includes modules for different database drivers or database-specific functionality:
- apsw_ext (an advanced sqlite driver)
- postgres_ext
- sqlite_ext

Modules which expose higher-level Python constructs:
- djpeewee
- gfk
- kv
- signals

As well as tools for working with databases:
- pwiz, a schema introspection tool
- schema migrations
- csv_loader
- connection pooling
- read_slave
- test_utils
The apsw_ext module contains a database class suitable for use with the apsw sqlite driver.
APSW Project page: https://code.google.com/p/apsw/
APSW is a really neat library that provides a thin wrapper on top of SQLite’s C interface, making it possible to use all of SQLite’s advanced features.
Here are just a few reasons to use APSW, taken from the documentation:
- APSW gives all functionality of SQLite, including virtual tables, virtual file systems, blob i/o, backups and file control.
- Connections can be shared across threads without any additional locking.
- Transactions are managed explicitly by your code.
- APSW can handle nested transactions.
- Unicode is handled correctly.
- APSW is faster.
For more information on the differences between apsw and pysqlite, check the apsw docs.
from playhouse.apsw_ext import *
db = APSWDatabase(':memory:')
class BaseModel(Model):
    class Meta:
        database = db

class SomeModel(BaseModel):
    col1 = CharField()
    col2 = DateTimeField()
Functions just like the Database.transaction() context manager, but accepts an additional parameter specifying the type of lock to use.
Parameters: lock_type (string) – type of lock to use when opening a new transaction
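For example, to open an exclusive transaction against the SomeModel table defined above (a minimal sketch; the lock types correspond to SQLite's BEGIN DEFERRED / IMMEDIATE / EXCLUSIVE):
import datetime

with db.transaction('exclusive'):
    # No other connection can write until this block exits.
    SomeModel.create(col1='hello', col2=datetime.datetime.now())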
Provides a way of globally registering a module. For more information, see the documentation on virtual tables.
Parameters:
- mod_name (string) – name to use for module
- mod_inst (object) – a module instance implementing apsw's virtual table interface
Unregister a module.
Parameters: mod_name (string) – name to use for module
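A sketch of registering and later unregistering a virtual table module; here module_instance stands in for any object implementing apsw's virtual table interface, and 'series' is a hypothetical module name:
db.register_module('series', module_instance)
# ... declare and query VirtualModel subclasses backed by 'series' ...
db.unregister_module('series')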
Note
Be sure to use the Field subclasses defined in the apsw_ext module, as they will properly handle adapting the data types for storage.
The postgresql extensions module provides a number of “postgres-only” functions, currently:
- hstore support
- JSON support
- server-side cursors
- ArrayField
- DateTimeTZField (timezone-aware datetimes)
- UUIDField
In the future I would like to add support for more of postgresql’s features. If there is a particular feature you would like to see added, please open a Github issue.
Warning
In order to start using the features described below, you will need to use the extension PostgresqlExtDatabase class instead of PostgresqlDatabase.
The code below will assume you are using the following database and base model:
from playhouse.postgres_ext import *
ext_db = PostgresqlExtDatabase('peewee_test', user='postgres')
class BaseExtModel(Model):
    class Meta:
        database = ext_db
Postgresql hstore is an embedded key/value store. With hstore, you can store arbitrary key/value pairs in your database alongside structured relational data.
Currently the postgres_ext module supports the following operations:
- Store and retrieve arbitrary dictionaries
- Filter by key(s) or partial dictionary
- Update/add one or more keys to an existing dictionary
- Delete one or more keys from an existing dictionary
- Select keys, values, or zip keys and values
- Retrieve a slice of keys/values
- Test for the existence of a key
- Test that a key has a non-NULL value
To start with, you will need to import the custom database class and the hstore functions from playhouse.postgres_ext (see above code snippet). Then, it is as simple as adding a HStoreField to your model:
class House(BaseExtModel):
    address = CharField()
    features = HStoreField()
You can now store arbitrary key/value pairs on House instances:
>>> h = House.create(address='123 Main St', features={'garage': '2 cars', 'bath': '2 bath'})
>>> h_from_db = House.get(House.id == h.id)
>>> h_from_db.features
{'bath': '2 bath', 'garage': '2 cars'}
You can filter by keys or partial dictionary:
>>> f = House.features
>>> House.select().where(f.contains('garage')) # <-- all houses w/garage key
>>> House.select().where(f.contains(['garage', 'bath'])) # <-- all houses w/garage & bath
>>> House.select().where(f.contains({'garage': '2 cars'})) # <-- houses w/2-car garage
Suppose you want to do an atomic update to the house:
>>> f = House.features
>>> new_features = House.features.update({'bath': '2.5 bath', 'sqft': '1100'})
>>> query = House.update(features=new_features)
>>> query.where(House.id == h.id).execute()
1
>>> h = House.get(House.id == h.id)
>>> h.features
{'bath': '2.5 bath', 'garage': '2 cars', 'sqft': '1100'}
Or, alternatively an atomic delete:
>>> query = House.update(features=f.delete('bath'))
>>> query.where(House.id == h.id).execute()
1
>>> h = House.get(House.id == h.id)
>>> h.features
{'garage': '2 cars', 'sqft': '1100'}
Multiple keys can be deleted at the same time:
>>> query = House.update(features=f.delete('garage', 'sqft'))
You can select just keys, just values, or zip the two:
>>> f = House.features
>>> for h in House.select(House.address, f.keys().alias('keys')):
... print h.address, h.keys
123 Main St [u'bath', u'garage']
>>> for h in House.select(House.address, f.values().alias('vals')):
... print h.address, h.vals
123 Main St [u'2 bath', u'2 cars']
>>> for h in House.select(House.address, f.items().alias('mtx')):
... print h.address, h.mtx
123 Main St [[u'bath', u'2 bath'], [u'garage', u'2 cars']]
You can retrieve a slice of data, for example, all the garage data:
>>> f = House.features
>>> for h in House.select(House.address, f.slice('garage').alias('garage_data')):
... print h.address, h.garage_data
123 Main St {'garage': '2 cars'}
You can check for the existence of a key and filter rows accordingly:
>>> for h in House.select(House.address, f.exists('garage').alias('has_garage')):
... print h.address, h.has_garage
123 Main St True
>>> for h in House.select().where(f.exists('garage')):
... print h.address, h.features['garage'] # <-- just houses w/garage data
123 Main St 2 cars
peewee has basic support for Postgres’ native JSON data type, in the form of JSONField.
Warning
Postgres supports a JSON data type natively as of 9.2 (full support in 9.3). In order to use this functionality you must be using the correct version of Postgres with psycopg2 version 2.5 or greater.
Note
You must be sure your database is an instance of PostgresqlExtDatabase in order to use the JSONField.
Here is an example of how you might declare a model with a JSON field:
import json
import urllib2
from playhouse.postgres_ext import *
db = PostgresqlExtDatabase('my_database')  # Note: must be PostgresqlExtDatabase.

class APIResponse(Model):
    url = CharField()
    response = JSONField()

    class Meta:
        database = db

    @classmethod
    def request(cls, url):
        fh = urllib2.urlopen(url)
        return cls.create(url=url, response=json.loads(fh.read()))
APIResponse.create_table()
# Store a JSON response.
offense = APIResponse.request('http://wtf.charlesleifer.com/api/offense/')
booking = APIResponse.request('http://wtf.charlesleifer.com/api/booking/')
# Query a JSON data structure using a nested key lookup:
offense_responses = APIResponse.select().where(
    APIResponse.response['meta']['model'] == 'offense')
When psycopg2 executes a query, normally all results are fetched and returned to the client by the backend. This can cause your application to use a lot of memory when making large queries. Using server-side cursors, results are returned a little at a time (by default 2000 records). For the definitive reference, please see the psycopg2 documentation.
Note
To use server-side (or named) cursors, you must be using PostgresqlExtDatabase.
To execute a query using a server-side cursor, simply wrap your select query using the ServerSide() helper:
large_query = PageView.select()  # Build query normally.

# Iterate over large query inside a transaction.
for page_view in ServerSide(large_query):
    # ... do some interesting analysis here ...
    pass

# Server-side resources are released.
If you would like all SELECT queries to automatically use a server-side cursor, you can specify this when creating your PostgresqlExtDatabase:
from playhouse.postgres_ext import PostgresqlExtDatabase
ss_db = PostgresqlExtDatabase('my_db', server_side_cursors=True)
Note
Server-side cursors live only as long as the transaction, so for this reason peewee will not automatically call commit() after executing a SELECT query. If you do not commit after you are done iterating, you will not release the server-side resources until the connection is closed (or the transaction is committed later). Furthermore, since peewee will by default cache rows returned by the cursor, you should always call .iterator() when iterating over a large query.
If you are using the ServerSide() helper, the transaction and call to iterator() will be handled transparently.
Identical to PostgresqlDatabase but required in order to support:
- hstore support
- JSON support
- server-side cursors
Parameters: server_side_cursors (bool) – whether SELECT queries should utilize server-side cursors
If using server_side_cursors, also be sure to wrap your queries with ServerSide().
Wrap the given select query in a transaction, and call its iterator() method to avoid caching row instances. In order for the server-side resources to be released, be sure to exhaust the generator (iterate over all the rows).
Parameters: select_query – a SelectQuery instance.
Return type: generator
Usage:
large_query = PageView.select()
for page_view in ServerSide(large_query):
    # Do something interesting.
    pass

# At this point server side resources are released.
Field capable of storing arrays of the provided field_class.
Parameters: field_class – a subclass of Field to use for the array values, e.g. CharField or IntegerField
You can store and retrieve lists (or lists-of-lists):
class BlogPost(BaseModel):
    content = TextField()
    tags = ArrayField(CharField)

post = BlogPost(content='awesome', tags=['foo', 'bar', 'baz'])
Additionally, you can use the __getitem__ API to query values or slices in the database:
# Get the first tag on a given blog post.
first_tag = (BlogPost
             .select(BlogPost.tags[0].alias('first_tag'))
             .where(BlogPost.id == 1)
             .dicts()
             .get())

# first_tag = {'first_tag': 'foo'}
Get a slice of values:
# Get the first two tags.
two_tags = (BlogPost
            .select(BlogPost.tags[:2].alias('two'))
            .dicts()
            .get())
# two_tags = {'two': ['foo', 'bar']}
Parameters: items – One or more items that must be in the given array field.
# Get all blog posts that are tagged with both "python" and "django".
Blog.select().where(Blog.tags.contains('python', 'django'))
Parameters: items – One or more items to search for in the given array field.
Like contains(), except will match rows where the array contains any of the given items.
# Get all blog posts that are tagged with "flask" and/or "django".
Blog.select().where(Blog.tags.contains_any('flask', 'django'))
A timezone-aware subclass of DateTimeField.
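A minimal sketch of declaring one, reusing the BaseExtModel from above (the Event model and its fields are hypothetical):
class Event(BaseExtModel):
    name = CharField()
    # Stored with timezone information in postgres.
    timestamp = DateTimeTZField()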
A field for storing and retrieving arbitrary key/value pairs. For details on usage, see hstore support.
Returns the keys for a given row.
>>> f = House.features
>>> for h in House.select(House.address, f.keys().alias('keys')):
... print h.address, h.keys
123 Main St [u'bath', u'garage']
Return the values for a given row.
>>> for h in House.select(House.address, f.values().alias('vals')):
... print h.address, h.vals
123 Main St [u'2 bath', u'2 cars']
Like python’s dict, return the keys and values in a list-of-lists:
>>> for h in House.select(House.address, f.items().alias('mtx')):
... print h.address, h.mtx
123 Main St [[u'bath', u'2 bath'], [u'garage', u'2 cars']]
Return a slice of data given a list of keys.
>>> f = House.features
>>> for h in House.select(House.address, f.slice('garage').alias('garage_data')):
... print h.address, h.garage_data
123 Main St {'garage': '2 cars'}
Query for whether the given key exists.
>>> for h in House.select(House.address, f.exists('garage').alias('has_garage')):
... print h.address, h.has_garage
123 Main St True
>>> for h in House.select().where(f.exists('garage')):
... print h.address, h.features['garage'] # <-- just houses w/garage data
123 Main St 2 cars
Query for whether the given key has a value associated with it.
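For example, assuming the House model from above, the hstore field's defined() method lets you select only the houses whose 'garage' key is present with a non-NULL value:
House.select().where(House.features.defined('garage'))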
Perform an atomic update to the keys/values for a given row or rows.
>>> query = House.update(features=House.features.update(
... sqft=2000,
... year_built=2012))
>>> query.where(House.id == 1).execute()
Delete the provided keys for a given row or rows.
Note
We will use an UPDATE query.
>>> query = House.update(features=House.features.delete(
... 'sqft', 'year_built'))
>>> query.where(House.id == 1).execute()
Parameters: value – Either a dict, a list of keys, or a single key.
Query rows for the existence of either:
- a single key
- a list of keys
- a partial dictionary
>>> f = House.features
>>> House.select().where(f.contains('garage')) # <-- all houses w/garage key
>>> House.select().where(f.contains(['garage', 'bath'])) # <-- all houses w/garage & bath
>>> House.select().where(f.contains({'garage': '2 cars'})) # <-- houses w/2-car garage
Parameters: keys – One or more keys to search for.
Query rows for the existence of any key.
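For example, assuming the House model from above ('basement' is a hypothetical key):
# Houses that have at least one of the given keys.
House.select().where(House.features.contains_any('garage', 'basement'))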
Field class suitable for storing and querying arbitrary JSON. When using this on a model, set the field’s value to a Python object (either a dict or a list). When you retrieve your value from the database it will be returned as a Python data structure.
Note
You must be using Postgres 9.2 / psycopg2 2.5 or greater.
Example model declaration:
db = PostgresqlExtDatabase('my_db')

class APIResponse(Model):
    url = CharField()
    response = JSONField()

    class Meta:
        database = db
Example of storing JSON data:
url = 'http://foo.com/api/resource/'
resp = json.loads(urllib2.urlopen(url).read())
APIResponse.create(url=url, response=resp)
APIResponse.create(url='http://foo.com/baz/', response={'key': 'value'})
To query, use Python’s [] operators to specify nested key lookups:
APIResponse.select().where(
    APIResponse.response['key1']['nested-key'] == 'some-value')
A field for storing and retrieving UUID objects.
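A minimal sketch (the Ticket model and its field are hypothetical):
import uuid

class Ticket(BaseExtModel):
    token = UUIDField()  # Maps to the postgres UUID column type.

Ticket.create(token=uuid.uuid4())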
The SQLite extensions module provides support for some interesting sqlite-only features:
- defining custom aggregates, collations and functions
- specifying the isolation level for individual transactions
- basic support for virtual tables
- full-text search (via FTSModel)
Subclass of the SqliteDatabase that provides some advanced features only offered by Sqlite.
Class-decorator for registering custom aggregation functions.
Parameters:
- num_params – the number of parameters the aggregate accepts
- name – the name of the aggregate (defaults to the class name)
@db.aggregate(1, 'product')
class Product(object):
    """Like sum, except calculate the product of a series of numbers."""
    def __init__(self):
        self.product = 1

    def step(self, value):
        self.product *= value

    def finalize(self):
        return self.product

# To use this aggregate:
product = (Score
           .select(fn.product(Score.value))
           .scalar())
Function decorator for registering a custom collation.
Parameters: name – string name to use for this collation.
@db.collation()
def collate_reverse(s1, s2):
    return -cmp(s1, s2)

# To use this collation:
Book.select().order_by(collate_reverse.collation(Book.title))
As you might have noticed, the original collate_reverse function has a special attribute called collation attached to it. This extra attribute provides a shorthand way to generate the SQL necessary to use our custom collation.
Function decorator for registering user-defined functions.
@db.func()
def title_case(s):
    return s.title()

# Use in the select clause...
titled_books = Book.select(fn.title_case(Book.title))

import hashlib

@db.func()
def sha1(s):
    return hashlib.sha1(s).hexdigest()

# Use in the where clause...
user = User.select().where(
    (User.username == username) &
    (fn.sha1(User.password) == password_hash)).get()
With the granular_transaction helper, you can specify the isolation level for an individual transaction. The valid options are:
- exclusive
- immediate
- deferred
Example usage:
with db.granular_transaction('exclusive'):
    # no other readers or writers!
    (Account
     .update(balance=Account.balance - 100)
     .where(Account.id == from_acct)
     .execute())

    (Account
     .update(balance=Account.balance + 100)
     .where(Account.id == to_acct)
     .execute())
Subclass of Model that signifies the model operates using a virtual table provided by a sqlite extension.
Model class that provides support for Sqlite’s full-text search extension. Models should be defined normally, however there are a couple caveats:
- Indexes are ignored completely
- Sqlite will treat all column types as TEXT (although you can store other data types, Sqlite will treat them as text)
Therefore it usually makes sense to index the content you intend to search and a single link back to the original document, since all SQL queries except full-text searches and rowid lookups will be slow.
Example:
class Document(FTSModel):
    title = TextField()  # Type affinities are ignored by FTS, so use TextField.
    content = TextField()

Document.create_table(tokenize='porter')  # Use the porter stemmer.

# Populate documents using normal operations.
for doc in list_of_docs_to_index:
    Document.create(title=doc['title'], content=doc['content'])

# Use the "match" operation for FTS queries.
matching_docs = Document.select().where(match(Document.title, 'some query'))

# To sort by best match, use the custom "rank" function.
best = (Document
        .select(Document, Document.rank('score'))
        .where(match(Document.title, 'some query'))
        .order_by(SQL('score').desc()))

# Or use the shortcut method:
best = Document.match('some phrase')
If you have an existing table and would like to add search for a column on that table, you can specify it using the content option:
class Blog(Model):
    title = CharField()
    pub_date = DateTimeField()
    content = TextField()  # We want to search this.

class FTSBlog(FTSModel):
    content = TextField()

Blog.create_table()
FTSBlog.create_table(content=Blog.content)

# Now, we can manage content in the FTSBlog. To populate it with
# content:
FTSBlog.rebuild()

# Optimize the index.
FTSBlog.optimize()
The content option accepts either a single Field or a Model and can reduce the amount of storage used. However, content will need to be manually moved to/from the associated FTSModel.
Rebuild the search index – this only works when the content option was specified during table creation.
Optimize the search index.
Shorthand way of performing a search for a given phrase. Example:
for doc in Document.match('search phrase'):
    print 'match: ', doc.title
The Django ORM provides a very high-level abstraction over SQL and as a consequence is in some ways limited in terms of flexibility or expressiveness. I wrote a blog post describing my search for a “missing link” between Django’s ORM and the SQL it generates, concluding that no such layer exists. The djpeewee module attempts to provide an easy-to-use, structured layer for generating SQL queries for use with Django’s ORM.
A couple use-cases might be:
- Joining on fields that are not related by foreign key (for example, FK-ing to values in a char field)
- Performing aggregate queries on calculated values
- Features Django does not support, such as CASE statements
- Utilizing SQL functions Django does not support, such as SUBSTR
- Replacing nearly-identical SQL queries with composable, reusable data structures
Below is an example of how you might use this:
# Django model.
class Event(models.Model):
    start_time = models.DateTimeField()
    end_time = models.DateTimeField()
    title = models.CharField(max_length=255)

# Suppose we want to find all events that are longer than an hour. Django
# does not support this, but we can use peewee.
from datetime import timedelta
from playhouse.djpeewee import translate

P = translate(Event)
query = (P.Event
         .select()
         .where(
             (P.Event.end_time - P.Event.start_time) > timedelta(hours=1)))

# Now feed our peewee query into Django's `raw()` method:
sql, params = query.sql()
Event.objects.raw(sql, params)
The translate() function will recursively traverse the graph of models and return a dictionary populated with everything it finds. Back-references are not searched by default, but can be included by specifying backrefs=True.
Example:
>>> from django.contrib.auth.models import User, Group
>>> from playhouse.djpeewee import translate
>>> translate(User, Group)
{'ContentType': peewee.ContentType,
'Group': peewee.Group,
'Group_permissions': peewee.Group_permissions,
'Permission': peewee.Permission,
'User': peewee.User,
'User_groups': peewee.User_groups,
'User_user_permissions': peewee.User_user_permissions}
As you can see in the example above, although only User and Group were passed in to translate(), several other models which are related by foreign key were also created. Additionally, the many-to-many “through” tables were created as separate models since peewee does not abstract away these types of relationships.
Using the above models it is possible to construct joins. The following example will get all users who belong to a group that starts with the letter “A”:
>>> P = translate(User, Group)
>>> query = P.User.select().join(P.User_groups).join(P.Group).where(
... fn.Lower(fn.Substr(P.Group.name, 1, 1)) == 'a')
>>> sql, params = query.sql()
>>> print sql # formatted for legibility
SELECT t1."id", t1."password", ...
FROM "auth_user" AS t1
INNER JOIN "auth_user_groups" AS t2 ON (t1."id" = t2."user_id")
INNER JOIN "auth_group" AS t3 ON (t2."group_id" = t3."id")
WHERE (Lower(Substr(t3."name", %s, %s)) = %s)
Translate the given Django models into roughly equivalent peewee models suitable for use in constructing queries. Foreign keys and many-to-many relationships will be followed and models generated, although back references are not traversed.
Parameters: models – one or more Django model classes
Returns: A dict-like object containing the generated models, but which supports dotted-name style lookups.
The following are valid options:
- recurse – follow foreign keys (default: True)
- max_depth – maximum depth to follow (default: None, unlimited)
- backrefs – follow back-references (default: False)
- exclude – a list of models to exclude
The gfk module provides a Generic ForeignKey (GFK), similar to Django. A GFK is composed of two columns: an object ID and an object type identifier. The object types are collected in a global registry (all_models).
How a GFKField is resolved:
1. The object type is looked up in the global registry of model classes (all_models) to find the matching model class.
2. The model class is queried for the instance whose primary key equals the object id.
Note
In order to use Generic ForeignKeys, your application’s models must subclass playhouse.gfk.Model. This ensures that the model class will be added to the global registry.
Note
GFKs themselves are not actually a field and will not add a column to your table.
Like regular ForeignKeys, GFKs support a “back-reference” via the ReverseGFK descriptor.
Example:
from playhouse.gfk import *
class Tag(Model):
    tag = CharField()
    object_type = CharField(null=True)
    object_id = IntegerField(null=True)
    object = GFKField('object_type', 'object_id')

class Blog(Model):
    tags = ReverseGFK(Tag, 'object_type', 'object_id')

class Photo(Model):
    tags = ReverseGFK(Tag, 'object_type', 'object_id')
How you use these is pretty straightforward hopefully:
>>> b = Blog.create(name='awesome post')
>>> Tag.create(tag='awesome', object=b)
>>> b2 = Blog.create(name='whiny post')
>>> Tag.create(tag='whiny', object=b2)
>>> b.tags # <-- a select query
<class '__main__.Tag'> SELECT t1."id", t1."tag", t1."object_type", t1."object_id" FROM "tag" AS t1 WHERE ((t1."object_type" = ?) AND (t1."object_id" = ?)) [u'blog', 1]
>>> [x.tag for x in b.tags]
[u'awesome']
>>> [x.tag for x in b2.tags]
[u'whiny']
>>> p = Photo.create(name='picture of cat')
>>> Tag.create(object=p, tag='kitties')
>>> Tag.create(object=p, tag='cats')
>>> [x.tag for x in p.tags]
[u'kitties', u'cats']
>>> [x.tag for x in Blog.tags]
[u'awesome', u'whiny']
>>> t = Tag.get(Tag.tag == 'awesome')
>>> t.object
<__main__.Blog at 0x268f450>
>>> t.object.name
u'awesome post'
Provide a clean API for storing “generic” foreign keys. Generic foreign keys are comprised of an object type, which maps to a model class, and an object id, which maps to the primary key of the related model class.
Setting the GFKField on a model will automatically populate the model_type_field and model_id_field. Similarly, getting the GFKField on a model instance will “resolve” the two fields, first looking up the model class, then looking up the instance by ID.
Provides a simple key/value store, using a dictionary API. By default the KeyStore will use an in-memory sqlite database, but any database will work.
To start using the key-store, create an instance and pass it a field to use for the values.
>>> kv = KeyStore(TextField())
>>> kv['a'] = 'A'
>>> kv['a']
'A'
Note
To store arbitrary python objects, use the PickledKeyStore, which stores values in a pickled BlobField.
Using the KeyStore it is possible to use “expressions” to retrieve values from the dictionary. For instance, imagine you want to get all keys which contain a certain substring:
>>> keys_matching_substr = kv[kv.key % '%substr%']
>>> keys_start_with_a = kv[fn.Lower(fn.Substr(kv.key, 1, 1)) == 'a']
Lightweight dictionary interface to a model containing a key and value. Implements common dictionary methods, such as __getitem__, __setitem__, get, pop, items, keys, and values.
Parameters:
- value_field (Field) – the type of field to use for the values
- database – a Database instance; defaults to an in-memory sqlite database
Example:
>>> from playhouse.kv import KeyStore
>>> kv = KeyStore(TextField())
>>> kv['a'] = 'foo'
>>> for k, v in kv:
... print k, v
a foo
>>> 'a' in kv
True
>>> 'b' in kv
False
Identical to the KeyStore except anything can be stored as a value in the dictionary. The storage for the value will be a pickled BlobField.
Example:
>>> from playhouse.kv import PickledKeyStore
>>> pkv = PickledKeyStore()
>>> pkv['a'] = 'A'
>>> pkv['b'] = 1.0
>>> list(pkv.items())
[(u'a', 'A'), (u'b', 1.0)]
This module contains helper functions for expressing things that would otherwise be somewhat verbose or cumbersome using peewee’s APIs.
Parameters:
- predicate – the predicate for the CASE expression, or None to use inline expressions
- expression_tuples – one or more 2-tuples of (expression, value)
- default – the default value (the ELSE clause)
Example SQL case statements:
-- case with predicate --
SELECT "username",
  CASE "user_id"
    WHEN 1 THEN 'one'
    WHEN 2 THEN 'two'
    ELSE '?'
  END
FROM "users";

-- case with no predicate (inline expressions) --
SELECT "username",
  CASE
    WHEN "user_id" = 1 THEN 'one'
    WHEN "user_id" = 2 THEN 'two'
    ELSE '?'
  END
FROM "users";
Equivalent function invocations:
User.select(User.username, case(User.user_id, (
    (1, "one"),
    (2, "two")), "?"))

User.select(User.username, case(None, (
    (User.user_id == 1, "one"),  # note the double equals
    (User.user_id == 2, "two")), "?"))
You can name the result of the CASE expression using the alias() method:
User.select(User.username, case(User.user_id, (
    (1, "one"),
    (2, "two")), "?").alias("id_string"))
Models with hooks for signals (a-la django) are provided in playhouse.signals. To use the signals, you will need all of your project’s models to be a subclass of playhouse.signals.Model, which overrides the necessary methods to provide support for the various signals.
from playhouse.signals import Model, post_save
class MyModel(Model):
    data = IntegerField()

@post_save(sender=MyModel)
def on_save_handler(model_class, instance, created):
    put_data_in_cache(instance.data)
The following signals are provided:
- pre_save – called immediately before an object is saved to the database; provides an additional keyword argument created, indicating whether the model is being saved for the first time or updated
- post_save – called immediately after an object is saved to the database; provides the same arguments as pre_save
- pre_delete – called immediately before an object is deleted from the database when Model.delete_instance() is used
- post_delete – called immediately after an object is deleted from the database
- pre_init – called when a model class is first instantiated
- post_init – called after a model class has been instantiated and the fields have been populated
Whenever a signal is dispatched, it will call any handlers that have been registered. This allows totally separate code to respond to events like model save and delete.
The Signal class provides a connect() method, which takes a callback function and two optional parameters for “sender” and “name”. If specified, the “sender” parameter should be a single model class and allows your callback to only receive signals from that one model class. The “name” parameter is used as a convenient alias in the event you wish to unregister your signal handler.
Example usage:
from playhouse.signals import *
def post_save_handler(sender, instance, created):
    print '%s was just saved' % instance

# our handler will only be called when we save instances of SomeModel
post_save.connect(post_save_handler, sender=SomeModel)
All signal handlers accept as their first two arguments sender and instance, where sender is the model class and instance is the actual model being acted upon.
If you’d like, you can also use a decorator to connect signal handlers. This is functionally equivalent to the above example:
@post_save(sender=SomeModel)
def post_save_handler(sender, instance, created):
    print '%s was just saved' % instance
Stores a list of receivers (callbacks) and calls them when the “send” method is invoked.
Add the receiver to the internal list of receivers, which will be called whenever the signal is sent.
Parameters:
- receiver (callable) – a callable that accepts at least two parameters, the sender (model class) and the instance
- sender – if specified, only instances of this model class will trigger the receiver
- name (string) – a short alias for the receiver, useful for disconnecting it later
from playhouse.signals import post_save
from project.handlers import cache_buster
post_save.connect(cache_buster, name='project.cache_buster')
Disconnect the given receiver (or the receiver with the given name alias) so that it no longer is called. Either the receiver or the name must be provided.
Parameters:
- receiver (callable) – the receiver to disconnect
- name (string) – the name alias the receiver was registered under
post_save.disconnect(name='project.cache_buster')
Iterates over the receivers and will call them in the order in which they were connected. If the receiver specified a sender, it will only be called if the instance is an instance of the sender.
Parameters: instance – a model instance
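As a sketch, a signal can also be created and sent manually; my_signal, my_handler and some_instance are placeholders (normally the playhouse.signals Model calls send() for you during save and delete):
my_signal = Signal()

def my_handler(sender, instance):
    print 'received signal from', instance

my_signal.connect(my_handler)
my_signal.send(some_instance)  # Calls my_handler(type(some_instance), some_instance).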
pwiz is a little script that ships with peewee and is capable of introspecting an existing database and generating model code suitable for interacting with the underlying data. If you have a database already, pwiz can give you a nice boost by generating skeleton code with correct column affinities and foreign keys.
If you install peewee using setup.py install, pwiz will be installed as a “script” and you can just run:
pwiz.py -e postgresql -u postgres my_postgres_db
This will print a bunch of models to standard output. So you can do this:
pwiz.py -e postgresql my_postgres_db > mymodels.py
python # <-- fire up an interactive shell
>>> from mymodels import Blog, Entry, Tag, Whatever
>>> print [blog.name for blog in Blog.select()]
Option | Meaning             | Example
-------|---------------------|--------------------
-h     | show help           |
-e     | database backend    | -e mysql
-H     | host to connect to  | -H remote.db.server
-p     | port to connect on  | -p 9001
-u     | database user       | -u postgres
-P     | database password   | -P secret
-s     | postgres schema     | -s public
The following are valid parameters for the engine: sqlite, mysql, postgresql.
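For example, to introspect a local SQLite database (the filenames are placeholders):
pwiz.py -e sqlite my_blog.db > blog_models.py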
Note
This module is still pretty experimental, but it provides a light API for doing schema migrations with postgresql.
Instantiate a migrator:
my_db = PostgresqlDatabase(...)
migrator = Migrator(my_db)
Adding a field to a model:
# declare a field instance
new_pubdate_field = DateTimeField(null=True)
# in a transaction, add the column to your model
with my_db.transaction():
    migrator.add_column(Story, new_pubdate_field, 'pub_date')
Renaming a field:
# specify the original name of the field and its new name
with my_db.transaction():
    migrator.rename_column(Story, 'pub_date', 'publish_date')
Dropping a field:
# specify the field name to drop
with my_db.transaction():
    migrator.drop_column(Story, 'some_old_field')
Setting nullable / not nullable
with my_db.transaction():
    # make pubdate not nullable
    migrator.set_nullable(Story, Story.pub_date, False)
Renaming a table
with my_db.transaction():
    migrator.rename_table(Story, 'stories')
This module contains helpers for loading CSV data into a database. CSV files can be introspected to generate an appropriate model class for working with the data. This makes it really easy to explore the data in a CSV file using Peewee and SQL.
Here is how you would load a CSV file into an in-memory SQLite database. The call to load_csv() returns a Model instance suitable for working with the CSV data:
from peewee import *
from playhouse.csv_loader import load_csv
db = SqliteDatabase(':memory:')
ZipToTZ = load_csv(db, 'zip_to_tz.csv')
Now we can run queries using the new model.
# Get the timezone for a zipcode.
>>> ZipToTZ.get(ZipToTZ.zip == 66047).timezone
'US/Central'
# Get all the zipcodes for my town.
>>> [row.zip for row in ZipToTZ.select().where(
... (ZipToTZ.city == 'Lawrence') & (ZipToTZ.state == 'KS'))]
[66044, 66045, 66046, 66047, 66049]
For more information and examples check out this blog post.
Load a CSV file into the provided database or model class, returning a Model suitable for working with the CSV data.
Parameters:
- db_or_model – a Database instance or a Model class to use for the CSV data
- filename – the path of the CSV file to load
- fields – a list of peewee Field instances to use for each column
- field_names – a list of names to use for the columns
- has_header (bool) – whether the first row of the file is a header
Return type: A Model suitable for querying the CSV data.
Basic example – field names and types will be introspected:
from peewee import *
from playhouse.csv_loader import *
db = SqliteDatabase(':memory:')
User = load_csv(db, 'users.csv')
Using a pre-defined model:
class ZipToTZ(Model):
    zip = IntegerField()
    timezone = CharField()

load_csv(ZipToTZ, 'zip_to_tz.csv')
Specifying fields:
fields = [DecimalField(), IntegerField(), IntegerField(), DateField()]
field_names = ['amount', 'from_acct', 'to_acct', 'timestamp']
Payments = load_csv(db, 'payments.csv', fields=fields, field_names=field_names, has_header=False)
Warning
This module should be considered experimental.
The pool module contains a helper class to pool database connections, as well as implementations for PostgreSQL and MySQL. The pool works by overriding the methods on the Database class that open and close connections to the backend. The pool can specify a timeout after which connections are recycled, as well as an upper bound on the number of open connections.
If your application is single-threaded, only one connection will be opened.
If your application is multi-threaded (this includes green threads) and you specify threadlocals=True when instantiating your database, then up to max_connections will be opened.
Note
If you intend to open multiple concurrent connections, specify threadlocals=True when creating your database, e.g.
db = PooledPostgresqlDatabase(
    'my_db',
    max_connections=8,
    stale_timeout=600,
    user='postgres',
    threadlocals=True)
Mixin class intended to be used with a subclass of Database.
Parameters:
- max_connections (int) – the maximum number of open connections; if exceeded, a ValueError is raised
- stale_timeout (int) – the number of seconds after which a connection is considered stale and will be recycled
Note
Connections will not be closed exactly when they exceed their stale_timeout. Instead, stale connections are only closed when a new connection is requested.
Note
If the number of open connections exceeds max_connections, a ValueError will be raised.
Close the currently-open connection without returning it to the pool.
Request a connection from the pool. If there are no available connections a new one will be opened.
By default conn will not be closed and instead will be returned to the pool of available connections. If close_conn=True, then conn will be closed and not be returned to the pool.
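Putting these documented methods together, a short sketch of the pooled connection lifecycle, using the db instance created above:
db.connect()       # Open a new connection, or reuse one from the pool.
# ... execute queries ...
db.close()         # Return the connection to the pool for reuse.

db.connect()
db.manual_close()  # Close the connection outright, bypassing the pool.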
Subclass of PostgresqlDatabase that mixes in the PooledDatabase helper.
Subclass of MySQLDatabase that mixes in the PooledDatabase helper.
The read_slave module contains a Model subclass that can be used to automatically execute SELECT queries against different database(s). This might be useful if you have your databases in a master / slave configuration.
Model subclass that will route SELECT queries to a different database.
Master and read-slaves are specified using Model.Meta:
# Declare a master and two read-replicas.
master = PostgresqlDatabase('master')
replica_1 = PostgresqlDatabase('replica_1')
replica_2 = PostgresqlDatabase('replica_2')
# Declare a BaseModel, the normal best-practice.
class BaseModel(ReadSlaveModel):
    class Meta:
        database = master
        read_slaves = (replica_1, replica_2)

# Declare your models.
class User(BaseModel):
    username = CharField()
When you execute writes (or deletes), they will be executed against the master database:
User.create(username='Peewee') # Executed against master.
When you execute a read query, it will run against one of the replicas:
users = User.select().where(User.username == 'Peewee')
Note
To force a SELECT query against the master database, manually create the SelectQuery.
SelectQuery(User) # master database.
Note
Queries will be dispatched among the read_slaves in round-robin fashion.
Contains utilities helpful when testing peewee projects.
Context manager that lets you use a different database with a set of models. Models can also be automatically created and dropped.
This context manager helps make it possible to test your peewee models using a “test-only” database.
Parameters:
- db – a Database instance to use for the duration of the context
- models – a list or tuple of Model classes to bind to the test database (tables can be automatically created and dropped)
Example:
from unittest import TestCase
from playhouse.test_utils import test_database
from peewee import *
from my_app.models import User, Tweet
test_db = SqliteDatabase(':memory:')
class TestUsersTweets(TestCase):
    def create_test_data(self):
        # ... create a bunch of users and tweets
        for i in range(10):
            User.create(username='user-%d' % i)

    def test_timeline(self):
        with test_database(test_db, (User, Tweet)):
            # This data will be created in `test_db`
            self.create_test_data()

            # Perform assertions on test data inside ctx manager.
            self.assertEqual(Tweet.timeline('user-0') [...])

        # once we exit the context manager, we're back to using the normal database