This defines the dump2py management command.


Write a dump of your database to a set of Python modules. Such a dump is useful as a daily backup, or as a snapshot taken before an upgrade that involves a data migration.

Usage: cd to your project directory and run:

$ python manage.py dump2py TARGET

This will create a Python dump of your database in the directory TARGET.

The directory will contain a main script and a series of .py files (one for every model) which are execfile()'d from that main script when the dump is restored.
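The mechanism described above can be sketched roughly as follows. This is an illustration of the idea, not the actual code generated by dump2py; the function name `run_dump` and the file layout are assumptions.

```python
import glob
import os


def run_dump(target_dir):
    """Execute every per-model .py file found in target_dir.

    A minimal sketch of how a main script could execute the
    per-model modules of a dump; the real dump2py main script
    may work differently.
    """
    executed = []
    for path in sorted(glob.glob(os.path.join(target_dir, "*.py"))):
        ns = {}
        # Python 3 replacement for the execfile() mentioned above:
        with open(path) as f:
            exec(compile(f.read(), path, "exec"), ns)
        executed.append(os.path.basename(path))
    return executed
```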



Do not prompt for user input of any kind.


Tolerate database errors. This can help when making a partial snapshot of a database that is not (fully) in sync with the application code.


Don't complain if the TARGET directory already exists. This will potentially overwrite existing files.

--max-row-count <NUM>

Change the maximum number of rows per source file from its default value (50000) to NUM.

When a table contains many rows, the resulting .py file can become so large that it doesn't fit into memory, causing the Python process to get killed when it tries to restore the data. To avoid this, dump2py distributes the content over several files if a table contains more than NUM rows.

The default value has been "clinically tested" and should be small enough for most machines.
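The splitting behaviour described above can be sketched as follows. This is a minimal illustration of the chunking idea only; the function name `split_into_files` and the `mymodel_N.py` naming scheme are assumptions, not dump2py's actual implementation.

```python
def split_into_files(rows, max_row_count=50000):
    """Yield (filename, chunk) pairs, one per output file.

    If there are more than max_row_count rows, the content is
    distributed over several numbered files; otherwise a single
    file holds everything.
    """
    chunks = [rows[i:i + max_row_count]
              for i in range(0, len(rows), max_row_count)]
    if len(chunks) <= 1:
        yield ("mymodel.py", rows)
    else:
        for n, chunk in enumerate(chunks, 1):
            yield ("mymodel_%d.py" % n, chunk)
```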

Hint: When your process gets killed, consider restarting the web services on your server and trying again before resorting to this option. The web services on a long-running production site can occupy considerable amounts of memory, and a simple restart can fix your issue.


The main script of a Python dump generated by the dump2py command.

To restore a dump created using dump2py to your database, simply run its main script using the run management command:

$ python manage.py run mydump/





write_create_function(model, stream)


Command([stdout, stderr, no_color, force_color])