todowrite package

ToDoWrite: Hierarchical Task Management System.

A hierarchical task management system designed for complex project planning and execution. Built on the ToDoWrite data models, it provides both a standalone CLI and a Python module for programmatic use.

class AcceptanceCriteria(**kwargs)[source]

Bases: Base, TimestampMixin

ToDoWrite AcceptanceCriteria model for hierarchical task management.

__init__(**kwargs)

A simple constructor that allows initialization from kwargs.

Sets attributes on the constructed instance using the names and values in kwargs.

Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.

assignee
created_at
description
ended_on
extra_data
id
interface_contracts
labels
owner
progress
requirements
severity
started_on
status
title
updated_at
work_type
class Base(**kwargs)[source]

Bases: DeclarativeBase

Base class for all ToDoWrite models.

__init__(**kwargs)

A simple constructor that allows initialization from kwargs.

Sets attributes on the constructed instance using the names and values in kwargs.

Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.

metadata = MetaData()

Refers to the _schema.MetaData collection that will be used for new _schema.Table objects.

registry = <sqlalchemy.orm.decl_api.registry object>

Refers to the _orm.registry in use where new _orm.Mapper objects will be associated.
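
The kwargs-only constructor described above can be sketched in plain Python as follows. This is an illustration only: SQLAlchemy's real `DeclarativeBase` constructor validates against mapped columns and relationships, not ordinary class attributes, and the `DemoGoal` class here is a hypothetical stand-in.

```python
class KwargsBase:
    """Simplified sketch of a kwargs-only constructor that rejects
    names not defined on the class (an illustration of the behavior
    described above, not SQLAlchemy's actual implementation)."""

    def __init__(self, **kwargs):
        for key, value in kwargs.items():
            # Only keys present as attributes of the class are allowed.
            if not hasattr(type(self), key):
                raise TypeError(
                    f"{key!r} is an invalid keyword argument for "
                    f"{type(self).__name__}"
                )
            setattr(self, key, value)


class DemoGoal(KwargsBase):
    # Stand-ins for mapped columns.
    title = None
    description = None


goal = DemoGoal(title="Ship v1.0", description="First public release")
```

Passing a key that is not an attribute of the class (for example `DemoGoal(priority="high")`) raises `TypeError`, mirroring the "only keys that are present as attributes" rule above.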

class Command(**kwargs)[source]

Bases: Base, TimestampMixin

ToDoWrite Command model for hierarchical task management.

__init__(**kwargs)

A simple constructor that allows initialization from kwargs.

Sets attributes on the constructed instance using the names and values in kwargs.

Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.

acceptance_criteria_id
artifacts
property artifacts_list

Get artifacts as list.

assignee
cmd
cmd_params
created_at
description
ended_on
id
labels
mark_completed()[source]

Mark command as completed with current timestamp.

Return type:

None

mark_started()[source]

Mark command as started with current timestamp.

Return type:

None

output
owner
progress
runtime_env
property runtime_env_dict

Get runtime environment as dictionary.

severity
started_on
status
sub_tasks
title
updated_at
work_type
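
The `mark_started()` / `mark_completed()` lifecycle can be sketched as below. This is a minimal in-memory stand-in, not the mapped model: the field names (`started_on`, `ended_on`, `status`) come from the docs above, but the specific status strings are assumptions.

```python
from datetime import datetime, timezone


class CommandSketch:
    """Illustrative stand-in for the Command lifecycle helpers above.
    The real model persists these fields as mapped columns; the status
    values used here are assumptions."""

    def __init__(self, cmd):
        self.cmd = cmd
        self.started_on = None
        self.ended_on = None
        self.status = "planned"

    def mark_started(self):
        # Mirrors mark_started(): record the current timestamp.
        self.started_on = datetime.now(timezone.utc)
        self.status = "in_progress"

    def mark_completed(self):
        # Mirrors mark_completed(): record the current timestamp.
        self.ended_on = datetime.now(timezone.utc)
        self.status = "completed"


cmd = CommandSketch("pytest -q")
cmd.mark_started()
cmd.mark_completed()
```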
class Concept(**kwargs)[source]

Bases: Base, TimestampMixin

ToDoWrite Concept model for hierarchical task management.

__init__(**kwargs)

A simple constructor that allows initialization from kwargs.

Sets attributes on the constructed instance using the names and values in kwargs.

Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.

assignee
contexts
created_at
description
ended_on
extra_data
goals
id
labels
owner
progress
requirements
severity
started_on
status
title
updated_at
work_type
class Constraint(**kwargs)[source]

Bases: Base, TimestampMixin

ToDoWrite Constraint model for hierarchical task management.

__init__(**kwargs)

A simple constructor that allows initialization from kwargs.

Sets attributes on the constructed instance using the names and values in kwargs.

Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.

assignee
created_at
description
ended_on
extra_data
goals
id
labels
owner
progress
requirements
severity
started_on
status
title
updated_at
work_type
class Context(**kwargs)[source]

Bases: Base, TimestampMixin

ToDoWrite Context model for hierarchical task management.

__init__(**kwargs)

A simple constructor that allows initialization from kwargs.

Sets attributes on the constructed instance using the names and values in kwargs.

Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.

assignee
concepts
created_at
description
ended_on
extra_data
goals
id
labels
owner
progress
requirements
severity
started_on
status
title
updated_at
work_type
exception DatabaseInitializationError[source]

Bases: ToDoWriteError

Raised when database initialization from schema fails.

class DatabaseSchemaInitializer(validator=None)[source]

Bases: object

Helper class for database initialization using schema definitions.

Provides methods to create, drop, and verify database structure based on the ToDoWrite model schema.

__init__(validator=None)[source]

Initialize with schema validator.

create_database(database_url, drop_existing=False)[source]

Create a new database initialized with the ToDoWrite model schema.

Parameters:
  • database_url (str) – SQLAlchemy database URL

  • drop_existing (bool) – Whether to drop existing database first

Return type:

Engine

Returns:

SQLAlchemy engine for the created database

get_database_status(database_url)[source]

Get status information about the database.

Return type:

dict[str, object]

verify_database_structure(database_url)[source]

Verify that database matches the schema structure.

Return type:

bool

class ToDoWrite(database_url=None)[source]

Bases: object

Industry-standard ToDoWrite API for hierarchical task management.

This class provides a clean, professional interface for interacting with ToDoWrite’s hierarchical task management system, following Python community best practices and design patterns.

database_url

Database connection URL. If None, uses environment variables or defaults.

Type:

Optional[str]

Example

>>> # Initialize with default database
>>> app = ToDoWrite()
>>> app.init_database()

>>> # Initialize with custom database
>>> app = ToDoWrite("sqlite:///my_tasks.db")
>>> app.init_database()
__init__(database_url=None)[source]

Initialize ToDoWrite API instance.

Parameters:

database_url (str | None) – Database connection URL. Supports SQLite and PostgreSQL. If None, reads from TODOWRITE_DATABASE_URL environment variable or uses appropriate default.

Raises:

ValueError – If database_url format is invalid.

get_database_url()[source]

Get the current database URL.

Returns:

The database connection URL.

Return type:

str

init_database()[source]

Initialize the database with proper schema.

Creates all necessary tables and applies migrations if needed. Uses industry-standard database initialization practices.

Returns:

True if initialization was successful.

Return type:

bool

Raises:

DatabaseInitializationError – If database initialization fails.

class Goal(**kwargs)[source]

Bases: Base, TimestampMixin

ToDoWrite Goal model for hierarchical task management.

__init__(**kwargs)

A simple constructor that allows initialization from kwargs.

Sets attributes on the constructed instance using the names and values in kwargs.

Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.

add_label(label)[source]

Add a label to this goal.

Return type:

None

add_phase(phase)[source]

Add a phase to this goal.

Return type:

None

add_task(task)[source]

Add a task to this goal.

Return type:

None

classmethod all(session)[source]

Get all goals.

Return type:

list[Goal]

assignee
complete_work()[source]

Mark goal as completed by setting ended_on to current timestamp.

Return type:

None

concepts
constraints
contexts
classmethod create(title, description='', owner='', severity='', work_type='', assignee='')[source]

Create a new Goal instance with default values.

Return type:

Goal

created_at
delete(session)[source]

Delete this goal from the database.

Return type:

None

description
ended_on
extra_data
classmethod find_active(session)[source]

Get all active (not completed) goals.

Return type:

list[Goal]

classmethod find_by_id(session, goal_id)[source]

Find a goal by ID.

Return type:

Goal | None

classmethod find_by_title(session, title)[source]

Find a goal by title.

Return type:

Goal | None

classmethod find_completed(session)[source]

Get all completed goals.

Return type:

list[Goal]

classmethod from_dict(data)[source]

Create Goal instance from dictionary.

Return type:

Goal

get_summary()[source]

Get a brief summary of the goal.

Return type:

str

get_work_duration()[source]

Get the duration between start and completion in ISO format.

Return type:

str | None

id
is_completed()[source]

Check if goal is marked as completed.

Return type:

bool

is_started()[source]

Check if goal is marked as started.

Return type:

bool

is_valid()[source]

Check if goal data is valid.

Return type:

bool

labels
owner
phases
progress
remove_label(label)[source]

Remove a label from this goal.

Return type:

None

remove_phase(phase)[source]

Remove a phase from this goal.

Return type:

None

remove_task(task)[source]

Remove a task from this goal.

Return type:

None

save(session)[source]

Save this goal to the database.

Return type:

None

set_progress(progress)[source]

Set progress percentage (0-100).

Return type:

None

severity
start_work()[source]

Mark goal as started by setting started_on to current timestamp.

Return type:

None

started_on
status
tasks
title
to_dict()[source]

Convert goal to dictionary representation.

Return type:

dict[str, object]

updated_at
validate()[source]

Validate goal data and return list of errors.

Return type:

list[str]

work_type
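
The Goal lifecycle helpers above can be illustrated with a simplified in-memory sketch. Field and method names follow the docs; the exact validation and timestamp logic is an assumption, and nothing here touches a database session.

```python
from datetime import datetime, timezone


class GoalSketch:
    """Simplified illustration of the Goal lifecycle methods documented
    above (start_work, set_progress, complete_work, label helpers);
    not the actual mapped model."""

    def __init__(self, title):
        self.title = title
        self.progress = 0
        self.labels = []
        self.started_on = None
        self.ended_on = None

    def start_work(self):
        # start_work(): set started_on to the current timestamp.
        self.started_on = datetime.now(timezone.utc)

    def set_progress(self, progress):
        # set_progress(): accept only the documented 0-100 range.
        if not 0 <= progress <= 100:
            raise ValueError("progress must be between 0 and 100")
        self.progress = progress

    def complete_work(self):
        # complete_work(): set ended_on to the current timestamp.
        self.ended_on = datetime.now(timezone.utc)

    def is_started(self):
        return self.started_on is not None

    def is_completed(self):
        return self.ended_on is not None

    def add_label(self, label):
        if label not in self.labels:
            self.labels.append(label)

    def remove_label(self, label):
        if label in self.labels:
            self.labels.remove(label)


goal = GoalSketch("Ship v1.0")
goal.start_work()
goal.set_progress(60)
goal.add_label("release")
goal.complete_work()
```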
class InterfaceContract(**kwargs)[source]

Bases: Base, TimestampMixin

ToDoWrite InterfaceContract model for hierarchical task management.

__init__(**kwargs)

A simple constructor that allows initialization from kwargs.

Sets attributes on the constructed instance using the names and values in kwargs.

Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.

acceptance_criteria
assignee
created_at
description
ended_on
extra_data
id
labels
owner
phases
progress
severity
started_on
status
title
updated_at
work_type
class Label(**kwargs)[source]

Bases: Base, TimestampMixin

ToDoWrite Label model for categorizing and tagging other models.

__init__(**kwargs)

A simple constructor that allows initialization from kwargs.

Sets attributes on the constructed instance using the names and values in kwargs.

Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.

acceptance_criteria
commands
concepts
constraints
contexts
created_at
goals
id
interface_contracts
name
phases
requirements
steps
sub_tasks
tasks
updated_at
class Phase(**kwargs)[source]

Bases: Base, TimestampMixin

ToDoWrite Phase model for hierarchical task management.

__init__(**kwargs)

A simple constructor that allows initialization from kwargs.

Sets attributes on the constructed instance using the names and values in kwargs.

Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.

assignee
created_at
description
ended_on
extra_data
goals
id
interface_contracts
labels
owner
progress
severity
started_on
status
steps
title
updated_at
work_type
class Requirement(**kwargs)[source]

Bases: Base, TimestampMixin

ToDoWrite Requirement model for hierarchical task management.

__init__(**kwargs)

A simple constructor that allows initialization from kwargs.

Sets attributes on the constructed instance using the names and values in kwargs.

Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.

acceptance_criteria
assignee
concepts
constraints
contexts
created_at
description
ended_on
extra_data
id
labels
owner
progress
severity
started_on
status
title
updated_at
work_type
exception SchemaValidationError[source]

Bases: ToDoWriteError

Raised when schema validation fails.

class Step(**kwargs)[source]

Bases: Base, TimestampMixin

ToDoWrite Step model for hierarchical task management.

__init__(**kwargs)

A simple constructor that allows initialization from kwargs.

Sets attributes on the constructed instance using the names and values in kwargs.

Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.

assignee
created_at
description
ended_on
extra_data
id
labels
owner
phases
progress
severity
started_on
status
tasks
title
updated_at
work_type
class SubTask(**kwargs)[source]

Bases: Base, TimestampMixin

ToDoWrite SubTask model for hierarchical task management.

__init__(**kwargs)

A simple constructor that allows initialization from kwargs.

Sets attributes on the constructed instance using the names and values in kwargs.

Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.

assignee
commands
created_at
description
ended_on
extra_data
id
labels
owner
progress
severity
started_on
status
tasks
title
updated_at
work_type
class Task(**kwargs)[source]

Bases: Base, TimestampMixin

ToDoWrite Task model for hierarchical task management.

__init__(**kwargs)

A simple constructor that allows initialization from kwargs.

Sets attributes on the constructed instance using the names and values in kwargs.

Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.

assignee
property commands

Get all commands from all subtasks for this task.

Provides complete execution plan visibility by aggregating all commands across all subtasks belonging to this task.

Returns:

List of all Command objects from all subtasks in execution order.

property completed_commands_count

Get count of completed commands across all subtasks.

created_at
description
ended_on
property execution_progress_percentage

Calculate execution progress as percentage based on completed commands.

extra_data
goals
id
labels
owner
progress
severity
started_on
status
steps
sub_tasks
title
property total_commands_count

Get total number of commands across all subtasks.

updated_at
work_type
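
The aggregation behind `commands`, `completed_commands_count`, `total_commands_count`, and `execution_progress_percentage` can be sketched as follows. The property names come from the docs above; representing each subtask as a list of `(cmd, completed)` pairs is an assumption made purely for illustration.

```python
class TaskProgressSketch:
    """Illustrates how the Task progress properties documented above
    can be derived from subtask commands. Not the actual model: each
    subtask is modeled here as a list of (cmd, completed) pairs."""

    def __init__(self, sub_tasks):
        self.sub_tasks = sub_tasks

    @property
    def commands(self):
        # Aggregate every command across all subtasks, in order.
        return [c for subtask in self.sub_tasks for c in subtask]

    @property
    def total_commands_count(self):
        return len(self.commands)

    @property
    def completed_commands_count(self):
        return sum(1 for _, done in self.commands if done)

    @property
    def execution_progress_percentage(self):
        total = self.total_commands_count
        if total == 0:
            return 0.0
        return 100.0 * self.completed_commands_count / total


task = TaskProgressSketch(
    [[("build", True), ("test", True)], [("deploy", False)]]
)
```

With two of three commands completed, `task.execution_progress_percentage` is roughly 66.7.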
class ToDoWriteSchemaValidator(schema_path=None)[source]

Bases: object

ToDoWrite Model Schema Validator and Database Manager.

Provides programmatic access to:

1. Validate data against schemas
2. Initialize databases from schema definitions
3. Ensure model-schema consistency
4. Import schemas into properly typed database tables

__init__(schema_path=None)[source]

Initialize schema validator with schema file path.

get_all_association_table_schemas()[source]

Get all association table schemas.

Return type:

dict[str, AssociationTableSchema]

get_all_model_schemas()[source]

Get all model schemas.

Return type:

dict[str, ModelSchema]

get_associated_models(model_name)[source]

Get list of models that this model has relationships with.

Return type:

list[str]

get_association_table_schema(table_name)[source]

Get the schema definition for an association table.

Return type:

AssociationTableSchema

get_model_relationships(model_name)[source]

Get relationship information for a model.

Return type:

dict[str, object]

get_model_schema(model_name)[source]

Get the schema definition for a specific model.

Return type:

ModelSchema

get_schema_summary()[source]

Get a summary of the schema structure.

Return type:

dict[str, object]

initialize_database_from_schema(engine, drop_existing=False)[source]

Initialize database with all tables defined in the schema.

Parameters:
  • engine (Engine) – SQLAlchemy engine to use for database operations

  • drop_existing (bool) – Whether to drop existing tables first

Return type:

None

validate_model_data(model_name, data)[source]

Validate data against a specific model schema.

Return type:

bool
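
A minimal sketch of schema-based validation in the spirit of `validate_model_data()`. This is not the validator's real logic (which checks definitions loaded from the schema file); the field-to-type mapping and error strings below are assumptions for illustration.

```python
def validate_model_data_sketch(schema, data):
    """Check `data` against a {field: type} mapping and return a list
    of error strings (empty means valid) -- a simplified stand-in for
    the documented validate_model_data()."""
    errors = []
    for field, expected_type in schema.items():
        if field not in data:
            errors.append(f"missing field: {field}")
        elif not isinstance(data[field], expected_type):
            errors.append(f"{field}: expected {expected_type.__name__}")
    return errors


# Hypothetical Goal schema fragment.
goal_schema = {"title": str, "progress": int}
errors = validate_model_data_sketch(
    goal_schema, {"title": "Ship v1.0", "progress": "high"}
)
```

Here `errors` reports the mistyped `progress` field, while a well-formed record yields an empty list.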

create_engine(url, **kwargs)[source]

Create a new _engine.Engine instance.

The standard calling form is to send the URL as the first positional argument, usually a string that indicates database dialect and connection arguments:

engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test")

Note

Please review Database URLs for general guidelines in composing URL strings. In particular, special characters, such as those often part of passwords, must be URL encoded to be properly parsed.

Additional keyword arguments may then follow it which establish various options on the resulting _engine.Engine and its underlying Dialect and _pool.Pool constructs:

engine = create_engine(
    "mysql+mysqldb://scott:tiger@hostname/dbname",
    pool_recycle=3600,
    echo=True,
)

The string form of the URL is dialect[+driver]://user:password@host/dbname[?key=value..], where dialect is a database name such as mysql, oracle, postgresql, etc., and driver the name of a DBAPI, such as psycopg2, pyodbc, cx_oracle, etc. Alternatively, the URL can be an instance of URL.

**kwargs takes a wide variety of options which are routed towards their appropriate components. Arguments may be specific to the _engine.Engine, the underlying Dialect, as well as the _pool.Pool. Specific dialects also accept keyword arguments that are unique to that dialect. Here, we describe the parameters that are common to most _sa.create_engine() usage.

Once established, the newly resulting _engine.Engine will request a connection from the underlying _pool.Pool once _engine.Engine.connect() is called, or a method which depends on it such as _engine.Engine.execute() is invoked. The _pool.Pool in turn will establish the first actual DBAPI connection when this request is received. The _sa.create_engine() call itself does not establish any actual DBAPI connections directly.

See also

Engine Configuration (/core/engines)

Dialects (/dialects/index)

Working with Engines and Connections

Parameters:
  • connect_args – a dictionary of options which will be passed directly to the DBAPI’s connect() method as additional keyword arguments. See the example at Custom DBAPI connect() arguments / on-connect routines.

  • creator

    a callable which returns a DBAPI connection. This creation function will be passed to the underlying connection pool and will be used to create all new database connections. Usage of this function causes connection parameters specified in the URL argument to be bypassed.

    This hook is not as flexible as the newer _events.DialectEvents.do_connect() hook which allows complete control over how a connection is made to the database, given the full set of URL arguments and state beforehand.

    See also

    _events.DialectEvents.do_connect() - event hook that allows full control over DBAPI connection mechanics.

    Custom DBAPI connect() arguments / on-connect routines

  • echo=False

    if True, the Engine will log all statements as well as a repr() of their parameter lists to the default log handler, which defaults to sys.stdout for output. If set to the string "debug", result rows will be printed to the standard output as well. The echo attribute of Engine can be modified at any time to turn logging on and off; direct control of logging is also available using the standard Python logging module.

    See also

    Configuring Logging - further detail on how to configure logging.

  • echo_pool=False

    if True, the connection pool will log informational output such as when connections are invalidated as well as when connections are recycled to the default log handler, which defaults to sys.stdout for output. If set to the string "debug", the logging will include pool checkouts and checkins. Direct control of logging is also available using the standard Python logging module.

    See also

    Configuring Logging - further detail on how to configure logging.

  • empty_in_strategy

    No longer used; SQLAlchemy now uses “empty set” behavior for IN in all cases.

    Deprecated since version 1.4: The :paramref:`_sa.create_engine.empty_in_strategy` keyword is deprecated, and no longer has any effect. All IN expressions are now rendered using the “expanding parameter” strategy which renders a set of bound expressions, or an “empty set” SELECT, at statement execution time.

  • enable_from_linting

    defaults to True. Will emit a warning if a given SELECT statement is found to have un-linked FROM elements which would cause a cartesian product.

    Added in version 1.4.

  • execution_options – Dictionary execution options which will be applied to all connections. See execution_options()

  • future

    Use the 2.0 style _engine.Engine and _engine.Connection API.

    As of SQLAlchemy 2.0, this parameter is present for backwards compatibility only and must remain at its default value of True.

    The :paramref:`_sa.create_engine.future` parameter will be deprecated in a subsequent 2.x release and eventually removed.

    Added in version 1.4.

    Changed in version 2.0: All _engine.Engine objects are “future” style engines and there is no longer a future=False mode of operation.

  • hide_parameters

    Boolean, when set to True, SQL statement parameters will not be displayed in INFO logging nor will they be formatted into the string representation of StatementError objects.

    Added in version 1.3.8.

    See also

    Configuring Logging - further detail on how to configure logging.

  • implicit_returning=True – Legacy parameter that may only be set to True. In SQLAlchemy 2.0, this parameter does nothing. In order to disable “implicit returning” for statements invoked by the ORM, configure this on a per-table basis using the :paramref:`.Table.implicit_returning` parameter.

  • insertmanyvalues_page_size

    number of rows to format into an INSERT statement when the statement uses “insertmanyvalues” mode, which is a paged form of bulk insert that is used for many backends when using executemany execution typically in conjunction with RETURNING. Defaults to 1000, but may also be subject to dialect-specific limiting factors which may override this value on a per-statement basis.

    Added in version 2.0.

  • isolation_level

    optional string name of an isolation level which will be set on all new connections unconditionally. Isolation levels are typically some subset of the string names "SERIALIZABLE", "REPEATABLE READ", "READ COMMITTED", "READ UNCOMMITTED" and "AUTOCOMMIT" based on backend.

    The :paramref:`_sa.create_engine.isolation_level` parameter is in contrast to the :paramref:`.Connection.execution_options.isolation_level` execution option, which may be set on an individual Connection, as well as the same parameter passed to Engine.execution_options(), where it may be used to create multiple engines with different isolation levels that share a common connection pool and dialect.

    Changed in version 2.0: The :paramref:`_sa.create_engine.isolation_level` parameter has been generalized to work on all dialects which support the concept of isolation level, and is provided as a more succinct, up front configuration switch in contrast to the execution option which is more of an ad-hoc programmatic option.

  • json_deserializer

    for dialects that support the _types.JSON datatype, this is a Python callable that will convert a JSON string to a Python object. By default, the Python json.loads function is used.

    Changed in version 1.3.7: The SQLite dialect renamed this from _json_deserializer.

  • json_serializer

    for dialects that support the _types.JSON datatype, this is a Python callable that will render a given object as JSON. By default, the Python json.dumps function is used.

    Changed in version 1.3.7: The SQLite dialect renamed this from _json_serializer.

  • label_length=None

    optional integer value which limits the size of dynamically generated column labels to that many characters. If less than 6, labels are generated as “_(counter)”. If None, the value of dialect.max_identifier_length, which may be affected via the :paramref:`_sa.create_engine.max_identifier_length` parameter, is used instead. The value of :paramref:`_sa.create_engine.label_length` may not be larger than that of :paramref:`_sa.create_engine.max_identifier_length`.

  • logging_name

    String identifier which will be used within the “name” field of logging records generated within the “sqlalchemy.engine” logger. Defaults to a hexstring of the object’s id.

    See also

    Configuring Logging - further detail on how to configure logging.

    :paramref:`_engine.Connection.execution_options.logging_token`

  • max_identifier_length

    integer; override the max_identifier_length determined by the dialect. if None or zero, has no effect. This is the database’s configured maximum number of characters that may be used in a SQL identifier such as a table name, column name, or label name. All dialects determine this value automatically, however in the case of a new database version for which this value has changed but SQLAlchemy’s dialect has not been adjusted, the value may be passed here.

    Added in version 1.3.9.

  • max_overflow=10 – the number of connections to allow in connection pool “overflow”, that is connections that can be opened above and beyond the pool_size setting, which defaults to five. This is only used with QueuePool.

  • module=None – reference to a Python module object (the module itself, not its string name). Specifies an alternate DBAPI module to be used by the engine’s dialect. Each sub-dialect references a specific DBAPI which will be imported before first connect. This parameter causes the import to be bypassed, and the given module to be used instead. Can be used for testing of DBAPIs as well as to inject “mock” DBAPI implementations into the _engine.Engine.

  • paramstyle=None – The paramstyle to use when rendering bound parameters. This style defaults to the one recommended by the DBAPI itself, which is retrieved from the .paramstyle attribute of the DBAPI. However, most DBAPIs accept more than one paramstyle, and in particular it may be desirable to change a “named” paramstyle into a “positional” one, or vice versa. When this attribute is passed, it should be one of the values "qmark", "numeric", "named", "format" or "pyformat", and should correspond to a parameter style known to be supported by the DBAPI in use.

  • pool=None – an already-constructed instance of Pool, such as a QueuePool instance. If non-None, this pool will be used directly as the underlying connection pool for the engine, bypassing whatever connection parameters are present in the URL argument. For information on constructing connection pools manually, see Connection Pooling.

  • poolclass=None – a Pool subclass, which will be used to create a connection pool instance using the connection parameters given in the URL. Note this differs from pool in that you don’t actually instantiate the pool in this case, you just indicate what type of pool to be used.

  • pool_logging_name

    String identifier which will be used within the “name” field of logging records generated within the “sqlalchemy.pool” logger. Defaults to a hexstring of the object’s id.

    See also

    Configuring Logging - further detail on how to configure logging.

  • pool_pre_ping

    boolean, if True will enable the connection pool “pre-ping” feature that tests connections for liveness upon each checkout.

    Added in version 1.2.

  • pool_size=5 – the number of connections to keep open inside the connection pool. This used with QueuePool as well as SingletonThreadPool. With QueuePool, a pool_size setting of 0 indicates no limit; to disable pooling, set poolclass to NullPool instead.

  • pool_recycle=-1

    this setting causes the pool to recycle connections after the given number of seconds has passed. It defaults to -1, or no timeout. For example, setting to 3600 means connections will be recycled after one hour. Note that MySQL in particular will disconnect automatically if no activity is detected on a connection for eight hours (although this is configurable with the MySQLDB connection itself and the server configuration as well).

  • pool_reset_on_return='rollback'

    set the :paramref:`_pool.Pool.reset_on_return` parameter of the underlying _pool.Pool object, which can be set to the values "rollback", "commit", or None.

    See also

    Reset On Return

    Fully preventing ROLLBACK calls under autocommit - a more modern approach to using connections with no transactional instructions

  • pool_timeout=30

    number of seconds to wait before giving up on getting a connection from the pool. This is only used with QueuePool. This can be a float but is subject to the limitations of Python time functions which may not be reliable in the tens of milliseconds.

  • pool_use_lifo=False

    use LIFO (last-in-first-out) when retrieving connections from QueuePool instead of FIFO (first-in-first-out). Using LIFO, a server-side timeout scheme can reduce the number of connections used during non-peak periods of use. When planning for server-side timeouts, ensure that a recycle or pre-ping strategy is in use to gracefully handle stale connections.

    Added in version 1.3.

  • plugins

    string list of plugin names to load. See CreateEnginePlugin for background.

    Added in version 1.2.3.

  • query_cache_size

    size of the cache used to cache the SQL string form of queries. Set to zero to disable caching.

    The cache is pruned of its least recently used items when its size reaches N * 1.5. Defaults to 500, meaning the cache will always store at least 500 SQL statements when filled, and will grow up to 750 items at which point it is pruned back down to 500 by removing the 250 least recently used items.

    Caching is accomplished on a per-statement basis by generating a cache key that represents the statement’s structure, then generating string SQL for the current dialect only if that key is not present in the cache. All statements support caching, however some features such as an INSERT with a large set of parameters will intentionally bypass the cache. SQL logging will indicate statistics for each statement whether or not it was pulled from the cache.

    Note

    some ORM functions related to unit-of-work persistence as well as some attribute loading strategies will make use of individual per-mapper caches outside of the main cache.

    Added in version 1.4.

  • skip_autocommit_rollback

    When True, the dialect will unconditionally skip all calls to the DBAPI connection.rollback() method if the DBAPI connection is confirmed to be in “autocommit” mode. The availability of this feature is dialect specific; if not available, a NotImplementedError is raised by the dialect when rollback occurs.

    Added in version 2.0.43.

  • use_insertmanyvalues

    True by default, use the “insertmanyvalues” execution style for INSERT..RETURNING statements by default.

    Added in version 2.0.

Return type:

Engine

get_schema_validator()[source]

Get the default schema validator instance.

Return type:

ToDoWriteSchemaValidator

initialize_database(database_url, drop_existing=False)[source]

Initialize database with schema using default validator.

Return type:

Engine

class sessionmaker(bind=None, *, class_=<class 'sqlalchemy.orm.session.Session'>, autoflush=True, expire_on_commit=True, info=None, **kw)[source]

Bases: _SessionClassMethods, Generic[_S]

A configurable Session factory.

The sessionmaker factory generates new Session objects when called, creating them given the configurational arguments established here.

e.g.:

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# an Engine, which the Session will use for connection
# resources
engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/")

Session = sessionmaker(engine)

with Session() as session:
    session.add(some_object)
    session.add(some_other_object)
    session.commit()

Context manager use is optional; otherwise, the returned Session object may be closed explicitly via the Session.close() method. Using a try:/finally: block is optional; however, it will ensure that the close takes place even if there are database errors:

session = Session()
try:
    session.add(some_object)
    session.add(some_other_object)
    session.commit()
finally:
    session.close()

sessionmaker acts as a factory for Session objects in the same way as an Engine acts as a factory for Connection objects. To that end it also includes a sessionmaker.begin() method that provides a context manager which both begins and commits a transaction, as well as closes out the Session when complete, rolling back the transaction if any errors occur:

Session = sessionmaker(engine)

with Session.begin() as session:
    session.add(some_object)
    session.add(some_other_object)
# commits transaction, closes session

Added in version 1.4.

When calling upon sessionmaker to construct a Session, keyword arguments may also be passed to the method; these arguments will override those of the globally configured parameters. Below we use a sessionmaker bound to a certain Engine to produce a Session that is instead bound to a specific Connection procured from that engine:

Session = sessionmaker(engine)

# bind an individual session to a connection

with engine.connect() as connection:
    with Session(bind=connection) as session:
        ...  # work with session

The class also includes a method sessionmaker.configure(), which can be used to specify additional keyword arguments to the factory, which will take effect for subsequent Session objects generated. This is usually used to associate one or more Engine objects with an existing sessionmaker factory before it is first used:

# application starts, sessionmaker does not have
# an engine bound yet
Session = sessionmaker()

# ... later, when an engine URL is read from a configuration
# file or other events allow the engine to be created
engine = create_engine("sqlite:///foo.db")
Session.configure(bind=engine)

sess = Session()
# work with session

See also

Opening and Closing a Session - introductory text on creating sessions using sessionmaker.

__init__(bind=None, *, class_=<class 'sqlalchemy.orm.session.Session'>, autoflush=True, expire_on_commit=True, info=None, **kw)[source]

Construct a new sessionmaker.

All arguments here except for class_ correspond to arguments accepted by Session directly. See the Session.__init__() docstring for more details on parameters.

Parameters:
  • bind (Union[Engine, Connection, None]) – an Engine or other Connectable with which newly created Session objects will be associated.

  • class_ (Type[TypeVar(_S, bound= Session)]) – class to use in order to create new Session objects. Defaults to Session.

  • autoflush (bool) –

    The autoflush setting to use with newly created Session objects.

    See also

    Flushing - additional background on autoflush

  • expire_on_commit (bool) – the Session.expire_on_commit setting to use with newly created Session objects.

  • info (Optional[Dict[Any, Any]]) – optional dictionary of information that will be available via Session.info. Note this dictionary is updated, not replaced, when the info parameter is specified to the specific Session construction operation.

  • **kw (Any) – all other keyword arguments are passed to the constructor of newly created Session objects.

begin()[source]

Produce a context manager that provides both a new Session and a transaction that commits.

e.g.:

Session = sessionmaker(some_engine)

with Session.begin() as session:
    session.add(some_object)

# commits transaction, closes session

Added in version 1.4.

Return type:

AbstractContextManager[TypeVar(_S, bound= Session)]

configure(**new_kw)[source]

(Re)configure the arguments for this sessionmaker.

e.g.:

Session = sessionmaker()

Session.configure(bind=create_engine("sqlite://"))

Return type:

None

class_

validate_model_data(model_name, data)[source]

Validate data against model schema using default validator.

Return type:

bool
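validate_model_data returns a bool indicating whether the supplied data conforms to the named model's schema. The standalone sketch below shows the general shape of such a check using only the standard library; the actual todowrite validator works against JSON schemas, and the model name and field names here are illustrative assumptions.

```python
# Illustrative, stdlib-only sketch of per-model data validation; the
# real todowrite validator checks JSON schemas, but the shape is similar.
SCHEMAS = {
    # Hypothetical required fields and their types for a Task record.
    "Task": {"title": str, "status": str},
}


def validate_model_data(model_name: str, data: dict) -> bool:
    """Return True if data has every required field with the right type."""
    schema = SCHEMAS.get(model_name)
    if schema is None:
        # Unknown model names fail validation rather than raising.
        return False
    return all(
        field in data and isinstance(data[field], expected)
        for field, expected in schema.items()
    )


print(validate_model_data("Task", {"title": "Ship v1", "status": "pending"}))  # True
print(validate_model_data("Task", {"title": "Ship v1"}))  # False: no status
```

Because the function returns a plain bool, callers can gate inserts on it without handling validation exceptions.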

Core Components

The todowrite package provides:

  • 12 Hierarchical Models: Goal, Task, Command, etc.

  • Schema Validation: JSON schema validation and database integrity

  • Database Management: SQLAlchemy integration with multiple backends

  • CLI Integration: Command-line interface support

  • Tools: Schema generation, validation, and tracing utilities