todowrite package
ToDoWrite: Hierarchical Task Management System.
A hierarchical task management system for complex project planning and execution. Built on the ToDoWrite data models, it provides both a standalone CLI and a Python module for programmatic use.
- class AcceptanceCriteria(**kwargs)[source]
Bases: Base, TimestampMixin
ToDoWrite AcceptanceCriteria model for hierarchical task management.
- __init__(**kwargs)
A simple constructor that allows initialization from kwargs.
Sets attributes on the constructed instance using the names and values in kwargs. Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.
- assignee
- created_at
- description
- ended_on
- extra_data
- id
- interface_contracts
- labels
- owner
- progress
- requirements
- severity
- started_on
- status
- title
- updated_at
- work_type
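The kwargs-only constructor shared by these models can be illustrated with a plain-Python sketch. The class and field names below are hypothetical; the real models are SQLAlchemy mapped classes whose allowed keys come from their mapped columns and relationships:

```python
# Minimal sketch of the documented kwargs constructor: only keys that are
# already attributes of the class are accepted; anything else is rejected.
class ModelSketch:
    # stand-ins for mapped columns (hypothetical names)
    title = None
    status = None

    def __init__(self, **kwargs):
        for key, value in kwargs.items():
            if not hasattr(type(self), key):
                raise TypeError(f"{key!r} is not a mapped attribute")
            setattr(self, key, value)

criteria = ModelSketch(title="API returns 200", status="pending")
print(criteria.title)
```

Passing an unknown key (e.g. `ModelSketch(bogus=1)`) raises `TypeError`, mirroring SQLAlchemy's behavior for unmapped attribute names.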
- class Base(**kwargs)[source]
Bases: DeclarativeBase
Base class for all ToDoWrite models.
- __init__(**kwargs)
A simple constructor that allows initialization from kwargs.
Sets attributes on the constructed instance using the names and values in kwargs. Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.
- metadata = MetaData()
Refers to the MetaData collection that will be used for new Table objects.
- registry = <sqlalchemy.orm.decl_api.registry object>
Refers to the registry in use where new Mapper objects will be associated.
- class Command(**kwargs)[source]
Bases: Base, TimestampMixin
ToDoWrite Command model for hierarchical task management.
- __init__(**kwargs)
A simple constructor that allows initialization from kwargs.
Sets attributes on the constructed instance using the names and values in kwargs. Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.
- acceptance_criteria_id
- artifacts
- property artifacts_list
Get artifacts as list.
- assignee
- cmd
- cmd_params
- created_at
- description
- ended_on
- id
- labels
- output
- owner
- progress
- runtime_env
- property runtime_env_dict
Get runtime environment as dictionary.
- severity
- started_on
- status
- sub_tasks
- title
- updated_at
- work_type
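The artifacts_list and runtime_env_dict properties suggest that artifacts and runtime_env are stored in serialized form. A hedged sketch of that pattern, assuming JSON-encoded text columns (an assumption; these docs do not show the actual storage format):

```python
import json

# Hypothetical Command-like object whose artifacts/runtime_env columns
# hold JSON text, exposed as Python structures via read-only properties.
class CommandSketch:
    def __init__(self, artifacts="[]", runtime_env="{}"):
        self.artifacts = artifacts        # JSON array as text (assumed)
        self.runtime_env = runtime_env    # JSON object as text (assumed)

    @property
    def artifacts_list(self):
        return json.loads(self.artifacts) if self.artifacts else []

    @property
    def runtime_env_dict(self):
        return json.loads(self.runtime_env) if self.runtime_env else {}

cmd = CommandSketch(artifacts='["build.log", "dist/app.whl"]',
                    runtime_env='{"PYTHONPATH": "src"}')
print(cmd.artifacts_list)
```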
- class Concept(**kwargs)[source]
Bases: Base, TimestampMixin
ToDoWrite Concept model for hierarchical task management.
- __init__(**kwargs)
A simple constructor that allows initialization from kwargs.
Sets attributes on the constructed instance using the names and values in kwargs. Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.
- assignee
- contexts
- created_at
- description
- ended_on
- extra_data
- goals
- id
- labels
- owner
- progress
- requirements
- severity
- started_on
- status
- title
- updated_at
- work_type
- class Constraint(**kwargs)[source]
Bases: Base, TimestampMixin
ToDoWrite Constraint model for hierarchical task management.
- __init__(**kwargs)
A simple constructor that allows initialization from kwargs.
Sets attributes on the constructed instance using the names and values in kwargs. Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.
- assignee
- created_at
- description
- ended_on
- extra_data
- goals
- id
- labels
- owner
- progress
- requirements
- severity
- started_on
- status
- title
- updated_at
- work_type
- class Context(**kwargs)[source]
Bases: Base, TimestampMixin
ToDoWrite Context model for hierarchical task management.
- __init__(**kwargs)
A simple constructor that allows initialization from kwargs.
Sets attributes on the constructed instance using the names and values in kwargs. Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.
- assignee
- concepts
- created_at
- description
- ended_on
- extra_data
- goals
- id
- labels
- owner
- progress
- requirements
- severity
- started_on
- status
- title
- updated_at
- work_type
- exception DatabaseInitializationError[source]
Bases: ToDoWriteError
Raised when database initialization from schema fails.
- class DatabaseSchemaInitializer(validator=None)[source]
Bases: object
Helper class for database initialization using schema definitions.
Provides methods to create, drop, and verify database structure based on the ToDoWrite model schema.
- create_database(database_url, drop_existing=False)[source]
Create a new database initialized with the ToDoWrite model schema.
- class ToDoWrite(database_url=None)[source]
Bases: object
Industry-standard ToDoWrite API for hierarchical task management.
This class provides a clean, professional interface for interacting with ToDoWrite’s hierarchical task management system, following Python community best practices and design patterns.
- database_url
Database connection URL. If None, uses environment variables or defaults.
- Type:
Optional[str]
Example
>>> # Initialize with default database
>>> app = ToDoWrite()
>>> app.init_database()
>>> # Initialize with custom database
>>> app = ToDoWrite("sqlite:///my_tasks.db")
>>> app.init_database()
- __init__(database_url=None)[source]
Initialize ToDoWrite API instance.
- Parameters:
database_url (str | None) – Database connection URL. Supports SQLite and PostgreSQL. If None, reads from the TODOWRITE_DATABASE_URL environment variable or uses an appropriate default.
- Raises:
ValueError – If the database_url format is invalid.
- get_database_url()[source]
Get the current database URL.
- Returns:
The database connection URL.
- Return type:
str
- init_database()[source]
Initialize the database with proper schema.
Creates all necessary tables and applies migrations if needed. Uses industry-standard database initialization practices.
- Returns:
True if initialization was successful.
- Return type:
bool
- Raises:
DatabaseInitializationError – If database initialization fails.
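The documented URL resolution (explicit argument, then the TODOWRITE_DATABASE_URL environment variable, then a default) can be sketched as follows. The helper name and the SQLite default here are illustrative, not the package's actual internals:

```python
import os

# Hedged sketch of the documented lookup order for the database URL.
# resolve_database_url and the default value are hypothetical.
def resolve_database_url(database_url=None,
                         default="sqlite:///todowrite.db"):
    if database_url:
        return database_url  # explicit argument wins
    # fall back to the environment variable, then the default
    return os.environ.get("TODOWRITE_DATABASE_URL", default)

print(resolve_database_url("postgresql://localhost/tasks"))
```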
- class Goal(**kwargs)[source]
Bases: Base, TimestampMixin
ToDoWrite Goal model for hierarchical task management.
- __init__(**kwargs)
A simple constructor that allows initialization from kwargs.
Sets attributes on the constructed instance using the names and values in kwargs. Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.
- assignee
- complete_work()[source]
Mark goal as completed by setting ended_on to current timestamp.
- Return type:
- concepts
- constraints
- contexts
- classmethod create(title, description='', owner='', severity='', work_type='', assignee='')[source]
Create a new Goal instance with default values.
- Return type:
Goal
- created_at
- description
- ended_on
- extra_data
- id
- labels
- owner
- phases
- progress
- severity
- started_on
- status
- tasks
- title
- updated_at
- work_type
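Goal.create() and complete_work() together suggest a simple lifecycle: create with defaults, then stamp ended_on on completion. A plain-Python sketch of that behavior (the class below is a hypothetical stand-in; the real model persists via SQLAlchemy):

```python
from datetime import datetime, timezone

# Illustrative stand-in mirroring the documented Goal lifecycle.
class GoalSketch:
    def __init__(self, title, description="", status="pending"):
        self.title = title
        self.description = description
        self.status = status          # default value is an assumption
        self.ended_on = None

    @classmethod
    def create(cls, title, description=""):
        # mirrors the documented factory: build an instance with defaults
        return cls(title=title, description=description)

    def complete_work(self):
        # mirrors the documented behavior: set ended_on to "now"
        self.ended_on = datetime.now(timezone.utc)

goal = GoalSketch.create("Ship v1.0")
goal.complete_work()
print(goal.title)
```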
- class InterfaceContract(**kwargs)[source]
Bases: Base, TimestampMixin
ToDoWrite InterfaceContract model for hierarchical task management.
- __init__(**kwargs)
A simple constructor that allows initialization from kwargs.
Sets attributes on the constructed instance using the names and values in kwargs. Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.
- acceptance_criteria
- assignee
- created_at
- description
- ended_on
- extra_data
- id
- labels
- owner
- phases
- progress
- severity
- started_on
- status
- title
- updated_at
- work_type
- class Label(**kwargs)[source]
Bases: Base, TimestampMixin
ToDoWrite Label model for categorizing and tagging other models.
- __init__(**kwargs)
A simple constructor that allows initialization from kwargs.
Sets attributes on the constructed instance using the names and values in kwargs. Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.
- acceptance_criteria
- commands
- concepts
- constraints
- contexts
- created_at
- goals
- id
- interface_contracts
- name
- phases
- requirements
- steps
- sub_tasks
- tasks
- updated_at
- class Phase(**kwargs)[source]
Bases: Base, TimestampMixin
ToDoWrite Phase model for hierarchical task management.
- __init__(**kwargs)
A simple constructor that allows initialization from kwargs.
Sets attributes on the constructed instance using the names and values in kwargs. Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.
- assignee
- created_at
- description
- ended_on
- extra_data
- goals
- id
- interface_contracts
- labels
- owner
- progress
- severity
- started_on
- status
- steps
- title
- updated_at
- work_type
- class Requirement(**kwargs)[source]
Bases: Base, TimestampMixin
ToDoWrite Requirement model for hierarchical task management.
- __init__(**kwargs)
A simple constructor that allows initialization from kwargs.
Sets attributes on the constructed instance using the names and values in kwargs. Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.
- acceptance_criteria
- assignee
- concepts
- constraints
- contexts
- created_at
- description
- ended_on
- extra_data
- id
- labels
- owner
- progress
- severity
- started_on
- status
- title
- updated_at
- work_type
- exception SchemaValidationError[source]
Bases: ToDoWriteError
Raised when schema validation fails.
- class Step(**kwargs)[source]
Bases: Base, TimestampMixin
ToDoWrite Step model for hierarchical task management.
- __init__(**kwargs)
A simple constructor that allows initialization from kwargs.
Sets attributes on the constructed instance using the names and values in kwargs. Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.
- assignee
- created_at
- description
- ended_on
- extra_data
- id
- labels
- owner
- phases
- progress
- severity
- started_on
- status
- tasks
- title
- updated_at
- work_type
- class SubTask(**kwargs)[source]
Bases: Base, TimestampMixin
ToDoWrite SubTask model for hierarchical task management.
- __init__(**kwargs)
A simple constructor that allows initialization from kwargs.
Sets attributes on the constructed instance using the names and values in kwargs. Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.
- assignee
- commands
- created_at
- description
- ended_on
- extra_data
- id
- labels
- owner
- progress
- severity
- started_on
- status
- tasks
- title
- updated_at
- work_type
- class Task(**kwargs)[source]
Bases: Base, TimestampMixin
ToDoWrite Task model for hierarchical task management.
- __init__(**kwargs)
A simple constructor that allows initialization from kwargs.
Sets attributes on the constructed instance using the names and values in kwargs. Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships.
- assignee
- property commands
Get all commands from all subtasks for this task.
Provides complete execution plan visibility by aggregating all commands across all subtasks belonging to this task.
- Returns:
List of all Command objects from all subtasks in execution order.
- property completed_commands_count
Get count of completed commands across all subtasks.
- created_at
- description
- ended_on
- property execution_progress_percentage
Calculate execution progress as percentage based on completed commands.
- extra_data
- goals
- id
- labels
- owner
- progress
- severity
- started_on
- status
- steps
- sub_tasks
- title
- property total_commands_count
Get total number of commands across all subtasks.
- updated_at
- work_type
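The aggregation described by commands, completed_commands_count, total_commands_count, and execution_progress_percentage can be sketched in plain Python. The data shapes below are assumptions for illustration; the real properties walk SQLAlchemy relationships:

```python
# Hedged sketch: a task's commands are all of its subtasks' commands in
# order, and progress is completed commands over total commands.
def all_commands(sub_tasks):
    # flatten every subtask's command list, preserving order
    return [cmd for sub in sub_tasks for cmd in sub["commands"]]

def execution_progress_percentage(sub_tasks):
    commands = all_commands(sub_tasks)
    if not commands:
        return 0.0  # no commands yet: treat as 0% rather than divide by zero
    completed = sum(1 for c in commands if c["status"] == "completed")
    return 100.0 * completed / len(commands)

sub_tasks = [
    {"commands": [{"status": "completed"}, {"status": "pending"}]},
    {"commands": [{"status": "completed"}, {"status": "completed"}]},
]
print(execution_progress_percentage(sub_tasks))  # 75.0
```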
- class ToDoWriteSchemaValidator(schema_path=None)[source]
Bases: object
ToDoWrite Model Schema Validator and Database Manager.
Provides programmatic access to:
1. Validate data against schemas
2. Initialize databases from schema definitions
3. Ensure model-schema consistency
4. Import schemas into properly typed database tables
- get_associated_models(model_name)[source]
Get list of models that this model has relationships with.
- get_association_table_schema(table_name)[source]
Get the schema definition for an association table.
- Return type:
- create_engine(url, **kwargs)[source]
Create a new Engine instance.
The standard calling form is to send the URL as the first positional argument, usually a string that indicates database dialect and connection arguments:
engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test")
Note
Please review Database URLs for general guidelines in composing URL strings. In particular, special characters, such as those often part of passwords, must be URL encoded to be properly parsed.
Additional keyword arguments may then follow it which establish various options on the resulting Engine and its underlying Dialect and Pool constructs:
engine = create_engine(
    "mysql+mysqldb://scott:tiger@hostname/dbname",
    pool_recycle=3600,
    echo=True,
)
The string form of the URL is dialect[+driver]://user:password@host/dbname[?key=value..], where dialect is a database name such as mysql, oracle, postgresql, etc., and driver the name of a DBAPI, such as psycopg2, pyodbc, cx_oracle, etc. Alternatively, the URL can be an instance of URL.
**kwargs takes a wide variety of options which are routed towards their appropriate components. Arguments may be specific to the Engine, the underlying Dialect, as well as the Pool. Specific dialects also accept keyword arguments that are unique to that dialect. Here, we describe the parameters that are common to most create_engine() usage.
Once established, the newly resulting Engine will request a connection from the underlying Pool once Engine.connect() is called, or a method which depends on it such as Engine.execute() is invoked. The Pool in turn will establish the first actual DBAPI connection when this request is received. The create_engine() call itself does not establish any actual DBAPI connections directly.
- Parameters:
connect_args – a dictionary of options which will be passed directly to the DBAPI’s connect() method as additional keyword arguments. See the example at Custom DBAPI connect() arguments / on-connect routines.
creator – a callable which returns a DBAPI connection. This creation function will be passed to the underlying connection pool and will be used to create all new database connections. Usage of this function causes connection parameters specified in the URL argument to be bypassed.
This hook is not as flexible as the newer DialectEvents.do_connect() hook, which allows complete control over how a connection is made to the database, given the full set of URL arguments and state beforehand.
See also
DialectEvents.do_connect() – event hook that allows full control over DBAPI connection mechanics.
echo=False – if True, the Engine will log all statements as well as a repr() of their parameter lists to the default log handler, which defaults to sys.stdout for output. If set to the string "debug", result rows will be printed to the standard output as well. The echo attribute of Engine can be modified at any time to turn logging on and off; direct control of logging is also available using the standard Python logging module.
See also
Configuring Logging - further detail on how to configure logging.
echo_pool=False –
if True, the connection pool will log informational output such as when connections are invalidated as well as when connections are recycled to the default log handler, which defaults to sys.stdout for output. If set to the string "debug", the logging will include pool checkouts and checkins. Direct control of logging is also available using the standard Python logging module.
See also
Configuring Logging - further detail on how to configure logging.
empty_in_strategy –
No longer used; SQLAlchemy now uses “empty set” behavior for IN in all cases.
Deprecated since version 1.4: The :paramref:`_sa.create_engine.empty_in_strategy` keyword is deprecated, and no longer has any effect. All IN expressions are now rendered using the “expanding parameter” strategy which renders a set of bound expressions, or an “empty set” SELECT, at statement execution time.
enable_from_linting –
defaults to True. Will emit a warning if a given SELECT statement is found to have un-linked FROM elements which would cause a cartesian product.
Added in version 1.4.
execution_options – Dictionary execution options which will be applied to all connections. See Engine.execution_options().
future – Use the 2.0 style Engine and Connection API. As of SQLAlchemy 2.0, this parameter is present for backwards compatibility only and must remain at its default value of True. The :paramref:`_sa.create_engine.future` parameter will be deprecated in a subsequent 2.x release and eventually removed.
Added in version 1.4.
Changed in version 2.0: All Engine objects are “future” style engines and there is no longer a future=False mode of operation.
hide_parameters – Boolean, when set to True, SQL statement parameters will not be displayed in INFO logging nor will they be formatted into the string representation of StatementError objects.
Added in version 1.3.8.
See also
Configuring Logging - further detail on how to configure logging.
implicit_returning=True – Legacy parameter that may only be set to True. In SQLAlchemy 2.0, this parameter does nothing. In order to disable “implicit returning” for statements invoked by the ORM, configure this on a per-table basis using the :paramref:`.Table.implicit_returning` parameter.
insertmanyvalues_page_size –
number of rows to format into an INSERT statement when the statement uses “insertmanyvalues” mode, which is a paged form of bulk insert that is used for many backends when using executemany execution typically in conjunction with RETURNING. Defaults to 1000, but may also be subject to dialect-specific limiting factors which may override this value on a per-statement basis.
Added in version 2.0.
isolation_level –
optional string name of an isolation level which will be set on all new connections unconditionally. Isolation levels are typically some subset of the string names "SERIALIZABLE", "REPEATABLE READ", "READ COMMITTED", "READ UNCOMMITTED" and "AUTOCOMMIT" based on backend.
The :paramref:`_sa.create_engine.isolation_level` parameter is in contrast to the :paramref:`.Connection.execution_options.isolation_level` execution option, which may be set on an individual Connection, as well as the same parameter passed to Engine.execution_options(), where it may be used to create multiple engines with different isolation levels that share a common connection pool and dialect.
Changed in version 2.0: The :paramref:`_sa.create_engine.isolation_level` parameter has been generalized to work on all dialects which support the concept of isolation level, and is provided as a more succinct, up-front configuration switch in contrast to the execution option, which is more of an ad-hoc programmatic option.
json_deserializer –
for dialects that support the JSON datatype, this is a Python callable that will convert a JSON string to a Python object. By default, the Python json.loads function is used.
Changed in version 1.3.7: The SQLite dialect renamed this from _json_deserializer.
json_serializer – for dialects that support the JSON datatype, this is a Python callable that will render a given object as JSON. By default, the Python json.dumps function is used.
Changed in version 1.3.7: The SQLite dialect renamed this from _json_serializer.
label_length=None –
optional integer value which limits the size of dynamically generated column labels to that many characters. If less than 6, labels are generated as “_(counter)”. If None, the value of dialect.max_identifier_length, which may be affected via the :paramref:`_sa.create_engine.max_identifier_length` parameter, is used instead. The value of :paramref:`_sa.create_engine.label_length` may not be larger than that of :paramref:`_sa.create_engine.max_identifier_length`.
logging_name –
String identifier which will be used within the “name” field of logging records generated within the “sqlalchemy.engine” logger. Defaults to a hexstring of the object’s id.
See also
Configuring Logging - further detail on how to configure logging.
:paramref:`_engine.Connection.execution_options.logging_token`
max_identifier_length –
integer; override the max_identifier_length determined by the dialect. If None or zero, has no effect. This is the database’s configured maximum number of characters that may be used in a SQL identifier such as a table name, column name, or label name. All dialects determine this value automatically, however in the case of a new database version for which this value has changed but SQLAlchemy’s dialect has not been adjusted, the value may be passed here.
Added in version 1.3.9.
max_overflow=10 – the number of connections to allow in connection pool “overflow”, that is, connections that can be opened above and beyond the pool_size setting, which defaults to five. This is only used with QueuePool.
module=None – reference to a Python module object (the module itself, not its string name). Specifies an alternate DBAPI module to be used by the engine’s dialect. Each sub-dialect references a specific DBAPI which will be imported before first connect. This parameter causes the import to be bypassed, and the given module to be used instead. Can be used for testing of DBAPIs as well as to inject “mock” DBAPI implementations into the Engine.
paramstyle=None – the paramstyle to use when rendering bound parameters. This style defaults to the one recommended by the DBAPI itself, which is retrieved from the .paramstyle attribute of the DBAPI. However, most DBAPIs accept more than one paramstyle, and in particular it may be desirable to change a “named” paramstyle into a “positional” one, or vice versa. When this attribute is passed, it should be one of the values "qmark", "numeric", "named", "format" or "pyformat", and should correspond to a parameter style known to be supported by the DBAPI in use.
pool=None – an already-constructed instance of Pool, such as a QueuePool instance. If non-None, this pool will be used directly as the underlying connection pool for the engine, bypassing whatever connection parameters are present in the URL argument. For information on constructing connection pools manually, see Connection Pooling.
poolclass=None – a Pool subclass, which will be used to create a connection pool instance using the connection parameters given in the URL. Note this differs from pool in that you don’t actually instantiate the pool in this case, you just indicate what type of pool to be used.
pool_logging_name –
String identifier which will be used within the “name” field of logging records generated within the “sqlalchemy.pool” logger. Defaults to a hexstring of the object’s id.
See also
Configuring Logging - further detail on how to configure logging.
pool_pre_ping –
boolean, if True will enable the connection pool “pre-ping” feature that tests connections for liveness upon each checkout.
Added in version 1.2.
pool_size=5 – the number of connections to keep open inside the connection pool. This is used with QueuePool as well as SingletonThreadPool. With QueuePool, a pool_size setting of 0 indicates no limit; to disable pooling, set poolclass to NullPool instead.
pool_recycle=-1 –
this setting causes the pool to recycle connections after the given number of seconds has passed. It defaults to -1, or no timeout. For example, setting to 3600 means connections will be recycled after one hour. Note that MySQL in particular will disconnect automatically if no activity is detected on a connection for eight hours (although this is configurable with the MySQLDB connection itself and the server configuration as well).
pool_reset_on_return='rollback' –
set the :paramref:`_pool.Pool.reset_on_return` parameter of the underlying Pool object, which can be set to the values "rollback", "commit", or None.
See also
Fully preventing ROLLBACK calls under autocommit - a more modern approach to using connections with no transactional instructions
pool_timeout=30 –
number of seconds to wait before giving up on getting a connection from the pool. This is only used with QueuePool. This can be a float but is subject to the limitations of Python time functions, which may not be reliable in the tens of milliseconds.
pool_use_lifo=False – use LIFO (last-in-first-out) when retrieving connections from QueuePool instead of FIFO (first-in-first-out). Using LIFO, a server-side timeout scheme can reduce the number of connections used during non-peak periods of use. When planning for server-side timeouts, ensure that a recycle or pre-ping strategy is in use to gracefully handle stale connections.
Added in version 1.3.
plugins –
string list of plugin names to load. See CreateEnginePlugin for background.
Added in version 1.2.3.
query_cache_size –
size of the cache used to cache the SQL string form of queries. Set to zero to disable caching.
The cache is pruned of its least recently used items when its size reaches N * 1.5. Defaults to 500, meaning the cache will always store at least 500 SQL statements when filled, and will grow up to 750 items at which point it is pruned back down to 500 by removing the 250 least recently used items.
Caching is accomplished on a per-statement basis by generating a cache key that represents the statement’s structure, then generating string SQL for the current dialect only if that key is not present in the cache. All statements support caching, however some features such as an INSERT with a large set of parameters will intentionally bypass the cache. SQL logging will indicate statistics for each statement whether or not it was pulled from the cache.
Note
some ORM functions related to unit-of-work persistence as well as some attribute loading strategies will make use of individual per-mapper caches outside of the main cache.
Added in version 1.4.
skip_autocommit_rollback –
When True, the dialect will unconditionally skip all calls to the DBAPI connection.rollback() method if the DBAPI connection is confirmed to be in “autocommit” mode. The availability of this feature is dialect specific; if not available, a NotImplementedError is raised by the dialect when rollback occurs.
Added in version 2.0.43.
use_insertmanyvalues –
True by default, use the “insertmanyvalues” execution style for INSERT..RETURNING statements by default.
Added in version 2.0.
- Return type:
Engine
- initialize_database(database_url, drop_existing=False)[source]
Initialize database with schema using default validator.
- Return type:
- class sessionmaker(bind=None, *, class_=<class 'sqlalchemy.orm.session.Session'>, autoflush=True, expire_on_commit=True, info=None, **kw)[source]
Bases: _SessionClassMethods, Generic[_S]
A configurable Session factory.
The sessionmaker factory generates new Session objects when called, creating them given the configurational arguments established here.
e.g.:
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# an Engine, which the Session will use for connection resources
engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/")

Session = sessionmaker(engine)

with Session() as session:
    session.add(some_object)
    session.add(some_other_object)
    session.commit()
Context manager use is optional; otherwise, the returned Session object may be closed explicitly via the Session.close() method. Using a try:/finally: block is optional, however it will ensure that the close takes place even if there are database errors:
session = Session()
try:
    session.add(some_object)
    session.add(some_other_object)
    session.commit()
finally:
    session.close()
sessionmaker acts as a factory for Session objects in the same way as an Engine acts as a factory for Connection objects. In this way it also includes a sessionmaker.begin() method, which provides a context manager that both begins and commits a transaction, as well as closes out the Session when complete, rolling back the transaction if any errors occur:
Session = sessionmaker(engine)

with Session.begin() as session:
    session.add(some_object)
    session.add(some_other_object)
# commits transaction, closes session
Added in version 1.4.
When calling upon sessionmaker to construct a Session, keyword arguments may also be passed to the method; these arguments will override that of the globally configured parameters. Below we use a sessionmaker bound to a certain Engine to produce a Session that is instead bound to a specific Connection procured from that engine:
Session = sessionmaker(engine)

# bind an individual session to a connection
with engine.connect() as connection:
    with Session(bind=connection) as session:
        ...  # work with session
The class also includes a method sessionmaker.configure(), which can be used to specify additional keyword arguments to the factory, which will take effect for subsequent Session objects generated. This is usually used to associate one or more Engine objects with an existing sessionmaker factory before it is first used:
# application starts, sessionmaker does not have
# an engine bound yet
Session = sessionmaker()

# ... later, when an engine URL is read from a configuration
# file or other events allow the engine to be created
engine = create_engine("sqlite:///foo.db")
Session.configure(bind=engine)

sess = Session()  # work with session
See also
Opening and Closing a Session – introductory text on creating sessions using sessionmaker.
- __init__(bind=None, *, class_=<class 'sqlalchemy.orm.session.Session'>, autoflush=True, expire_on_commit=True, info=None, **kw)[source]
Construct a new sessionmaker.
All arguments here except for class_ correspond to arguments accepted by Session directly. See the Session.__init__() docstring for more details on parameters.
- Parameters:
bind (Union[Engine, Connection, None]) – an Engine or other Connectable with which newly created Session objects will be associated.
class_ (Type[TypeVar(_S, bound=Session)]) – class to use in order to create new Session objects. Defaults to Session.
autoflush (bool) – the autoflush setting to use with newly created Session objects.
See also
Flushing – additional background on autoflush
expire_on_commit=True – the :paramref:`_orm.Session.expire_on_commit` setting to use with newly created Session objects.
info (Optional[Dict[Any, Any]]) – optional dictionary of information that will be available via Session.info. Note this dictionary is updated, not replaced, when the info parameter is specified to the specific Session construction operation.
**kw (Any) – all other keyword arguments are passed to the constructor of newly created Session objects.
- begin()[source]
Produce a context manager that both provides a new Session as well as a transaction that commits.
e.g.:
Session = sessionmaker(some_engine)

with Session.begin() as session:
    session.add(some_object)
# commits transaction, closes session
Added in version 1.4.
- Return type:
AbstractContextManager[TypeVar(_S, bound= Session)]
- configure(**new_kw)[source]
(Re)configure the arguments for this sessionmaker.
e.g.:
Session = sessionmaker()

Session.configure(bind=create_engine("sqlite://"))
- Return type:
- class_
- validate_model_data(model_name, data)[source]
Validate data against model schema using default validator.
- Return type:
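What "validate data against model schema" involves can be sketched with a minimal required-field and type check. The schema dict and helper name below are hypothetical illustrations, not the package's real Goal schema (which, per the Core Components list, is a JSON schema definition):

```python
# Hedged sketch of schema validation: check required fields and basic types.
# GOAL_SCHEMA and validate_model_data_sketch are illustrative names only.
GOAL_SCHEMA = {
    "required": ["title"],
    "types": {"title": str, "progress": int},
}

def validate_model_data_sketch(schema, data):
    errors = []
    for field in schema["required"]:
        if field not in data:
            errors.append(f"missing required field: {field}")
    for field, expected in schema["types"].items():
        if field in data and not isinstance(data[field], expected):
            errors.append(f"{field} should be {expected.__name__}")
    return errors  # empty list means the data passed validation

print(validate_model_data_sketch(GOAL_SCHEMA,
                                 {"title": "Ship v1", "progress": 40}))
```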
Core Components
The todowrite package provides:
12 Hierarchical Models: Goal, Task, Command, etc.
Schema Validation: JSON schema validation and database integrity
Database Management: SQLAlchemy integration with multiple backends
CLI Integration: Command-line interface support
Tools: Schema generation, validation, and tracing utilities