Various interesting and useful bits of functionality
You need to import apsw.ext to use this module.
Pretty printing
format_query_table() makes nicely formatted query output - see
the example.
Logging and tracebacks
You can use log_sqlite() to forward SQLite log messages
to the logging module.
print_augmented_traceback() prints an exception the usual way,
but also includes the local variables in each frame, which APSW adds to make debugging quicker and easier.
Virtual Tables
Use index_info_to_dict() to get apsw.IndexInfo
in a format that is easier to print and work with.
Use make_virtual_module() to easily turn a Python function
into a virtual table source.
generate_series() and generate_series_sqlite() provide
the generate_series virtual table.
Query limiting, tracing and performance
Use query_limit with a block (example) to limit the total number of rows
processed or the elapsed time.
Use Trace with a block (example) to
see each SQL statement run, along with stats such as timing, rows
returned, changes made, memory used, and other SQLite counters.
Use ShowResourceUsage with a block (example) to get overall SQLite counters as well
as operating system timing, CPU usage, memory, and I/O.
Database storage usage
Use analyze_pages() to find out how much storage is in use, and
how fragmented it is.  See example output.  It is also available
graphically with page_usage_to_svg() - example output.
Accessing result rows by column name
See the example.
Use apsw.ext.DataClassRowFactory as an
apsw.Connection.row_trace for an entire connection, or
apsw.Cursor.row_trace for a specific cursor.
Converting types into and out of SQLite
SQLite only stores 5 types. (Any Python object can be used as a runtime-only value.)
- None 
- int 
- float 
- str 
- bytes 
Use TypesConverterCursorFactory as
apsw.Connection.cursor_factory to adapt values going into
SQLite, and convert them coming out.  See the example.
To convert values going into SQLite, do either of:
- Inherit from apsw.ext.SQLiteTypeAdapter and define a to_sqlite_value method on the class
- Call TypesConverterCursorFactory.register_adapter() with the type and an adapter function
To adapt values coming out of SQLite:
- Call TypesConverterCursorFactory.register_converter() with the exact type string used in the table and a converter function
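A minimal sketch covering both directions, using a hypothetical Point class stored in a column declared as POINT (the class, table, and type string are assumptions for illustration):

    import apsw
    import apsw.ext

    # Point is a hypothetical type used only for illustration
    class Point:
        def __init__(self, x, y):
            self.x, self.y = x, y

    factory = apsw.ext.TypesConverterCursorFactory()

    # going into SQLite: adapt a Point to a string SQLite can store
    factory.register_adapter(Point, lambda p: f"{p.x};{p.y}")
    # coming out of SQLite: the column must be declared with exactly this type string
    factory.register_converter("POINT", lambda v: Point(*map(float, v.split(";"))))

    db = apsw.Connection(":memory:")
    db.cursor_factory = factory
    db.execute("CREATE TABLE shapes(location POINT)")
    db.execute("INSERT INTO shapes VALUES(?)", (Point(1.5, 2.5),))
    for (location,) in db.execute("SELECT location FROM shapes"):
        print(location.x, location.y)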
Detailed Query Information
SQLite can provide lots of information about queries.  The
query_info() function can gather them up
for you.  This includes:
- readonly if the query makes no direct changes 
- first_query if multiple queries are provided 
- actions which databases, tables, columns, functions, views etc are referenced - see actions 
- query_plan which indices, tables scans etc are used to find the query results - see query plans 
- explain for the low level steps taken inside SQLite - see SQLite bytecode 
See the example.
API Reference
- apsw.ext.result_string(code: int) str[source]
- Turns a result or extended result code into a string. The appropriate mapping based on the value is used. 
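For example, passing the busy result code should print its name (a brief sketch, output assumed):

    import apsw
    import apsw.ext

    # apsw.SQLITE_BUSY is the integer result code 5
    print(apsw.ext.result_string(apsw.SQLITE_BUSY))  # expected output: SQLITE_BUSY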
- class apsw.ext.DataClassRowFactory(*, rename: bool = True, dataclass_kwargs: dict[str, Any] | None = None)[source]
- Returns each row as a dataclass, accessible by column name.

To use, set an instance as Connection.row_trace to affect all cursors, or on a specific cursor:

    connection.row_trace = apsw.ext.DataClassRowFactory()
    for row in connection.execute("SELECT title, sum(orders) AS total, ..."):
        # You can now access by name
        print(row.title, row.total)
        # you can get the underlying description
        print(row.__description__)

You can use as many instances of this class as you want, each across as many connections as you want.

Parameters:
- rename – Column names could be duplicated, or not valid in Python (eg a column named continue). If rename is True, then invalid/duplicate names are replaced with an underscore followed by their position, counting from zero. For example title, total, title, continue would become title, total, _2, _3. If rename is False then problem column names will result in TypeError raised by dataclasses.make_dataclass()
- dataclass_kwargs – Additional parameters when creating the dataclass as described in - dataclasses.dataclass(). For example you may want frozen = True to make the dataclass read-only, or slots = True to reduce memory consumption.
 
 - get_dataclass(description: tuple[tuple[str, str], ...]) tuple[Any, tuple[str, ...]][source]
- Returns the dataclass and a tuple of (potentially renamed) column names. The dataclass is what is returned for each row with that description. This method caches its results.
 - get_type(t: str | None) Any[source]
- Returns the type hint to use in the dataclass based on the type in the description. SQLite’s affinity rules are followed. The values have no effect on how your program runs, but can be used by tools like mypy. Column information like whether null is allowed is not present, so this is just a hint.
 - __call__(cursor: Cursor, row: SQLiteValues) Any[source]
- What the row tracer calls. This looks up the dataclass and column names, and then returns an instance of the dataclass.
 
- class apsw.ext.SQLiteTypeAdapter[source]
- A metaclass to indicate conversion to SQLite types is supported

This is one way to indicate your type supports conversion to a value supported by SQLite. You can either inherit from this class, or call the register method:

    apsw.ext.SQLiteTypeAdapter.register(YourClassHere)

Doing either is entirely sufficient and there is no need to register with TypesConverterCursorFactory

- abstractmethod to_sqlite_value() SQLiteValue[source]
- Return a SQLite compatible value for this object 
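A brief sketch of the inheritance approach, using a hypothetical Money class that stores an integer number of cents:

    import apsw.ext

    # Money is a hypothetical class used only for illustration
    class Money(apsw.ext.SQLiteTypeAdapter):
        def __init__(self, cents: int):
            self.cents = cents

        def to_sqlite_value(self):
            # must return one of the types SQLite supports - here an int
            return self.cents

Converting values back out of SQLite is still done with TypesConverterCursorFactory.register_converter().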
 
- class apsw.ext.TypesConverterCursorFactory(abstract_base_class: ABCMeta = SQLiteTypeAdapter)[source]
- Provides cursors that can convert objects into one of the types supported by SQLite, or back from SQLite

Parameters:
- abstract_base_class – Which metaclass to consider as conversion capable 
 - register_adapter(klass: type, callable: Callable[[Any], SQLiteValue]) None[source]
- Registers a callable that converts from klass to one of the supported SQLite types 
 - register_converter(name: str, callable: Callable[[SQLiteValue], Any]) None[source]
- Registers a callable that converts from a SQLite value 
 - __call__(connection: Connection) TypeConverterCursor[source]
- Returns a new converter cursor for the connection
 - adapt_value(value: Any) SQLiteValue[source]
- Returns SQLite representation of value 
 - convert_value(schematype: str, value: SQLiteValue) Any[source]
- Returns Python object from schema type and SQLite value 
 - wrap_bindings(bindings: Bindings | None) Bindings | None[source]
- Wraps bindings that are supplied to the underlying execute
 - wrap_sequence_bindings(sequenceofbindings: Iterable[Bindings]) Generator[Bindings, None, None][source]
- Wraps a sequence of bindings that are supplied to the underlying executemany 
 - class DictAdapter(factory: TypesConverterCursorFactory, data: Mapping[str, SQLiteValue])[source]
- Used to wrap dictionaries supplied as bindings 
 - class TypeConverterCursor(connection: Connection, factory: TypesConverterCursorFactory)[source]
- Cursor used to do conversions - execute(statements: str, bindings: Bindings | None = None, *, can_cache: bool = True, prepare_flags: int = 0, explain: int = -1) Cursor[source]
- Executes the statements doing conversions on supplied and returned values. See apsw.Cursor.execute() for parameter details.
 - executemany(statements: str, sequenceofbindings: Iterable[Bindings], *, can_cache: bool = True, prepare_flags: int = 0, explain: int = -1) Cursor[source]
- Executes the statements against each item in sequenceofbindings, doing conversions on supplied and returned values. See apsw.Cursor.executemany() for parameter details.
 
 
- apsw.ext.log_sqlite(*, level: int = logging.ERROR, logger: Logger | None = None) None[source]
- Send SQLite log messages to logging

Parameters:
- level – highest level to log at 
- logger – Use the specific logger 
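A minimal sketch, assuming the logging module is configured by your application:

    import logging

    import apsw.ext

    logging.basicConfig(level=logging.INFO)

    # forward SQLite log messages to the logging module
    apsw.ext.log_sqlite()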
 
 
- apsw.ext.print_augmented_traceback(exc_type: type[BaseException], exc_value: BaseException, exc_traceback: TracebackType, *, file: TextIO | None = None) None[source]
- Prints a standard exception, but also includes the value of variables in each stack frame, which APSW adds to help diagnostics and debugging.

Parameters:
- exc_type – The exception type 
- exc_value – The exception value 
- exc_traceback – Traceback for the exception 
- file – (default sys.stderr) Where the print goes
 
    try:
        ....
    except Exception as exc:
        apsw.ext.print_augmented_traceback(*sys.exc_info())
- apsw.ext.index_info_to_dict(o: IndexInfo, *, column_names: list[str] | None = None, rowid_name: str = '__ROWID__') dict[str, Any][source]
- Returns an apsw.IndexInfo as a dictionary.

If column_names is supplied then additional keys with column names are present, using rowid_name for the rowid.

Here is an example response:

    query = '''
        SELECT orderid, note FROM sales
        WHERE price > 74.99
              AND quantity<=?
              AND customer in ('Acme Widgets', 'Acme Industries')
        ORDER BY date desc
        LIMIT 10'''

    {'aConstraint': [{'collation': 'BINARY',
                      'iColumn': 5,
                      'iColumn_name': 'price',
                      'op': 4,
                      'op_str': 'SQLITE_INDEX_CONSTRAINT_GT',
                      'rhs': 74.99,
                      'usable': True},
                     {'collation': 'BINARY',
                      'iColumn': 7,
                      'iColumn_name': 'quantity',
                      'op': 8,
                      'op_str': 'SQLITE_INDEX_CONSTRAINT_LE',
                      'rhs': None,
                      'usable': True},
                     {'collation': 'BINARY',
                      'iColumn': 8,
                      'iColumn_name': 'customer',
                      'op': 2,
                      'op_str': 'SQLITE_INDEX_CONSTRAINT_EQ',
                      'rhs': None,
                      'usable': True},
                     {'collation': 'BINARY',
                      'op': 73,
                      'op_str': 'SQLITE_INDEX_CONSTRAINT_LIMIT',
                      'rhs': 10,
                      'usable': True}],
     'aConstraintUsage': [{'argvIndex': 0, 'in': False, 'omit': False},
                          {'argvIndex': 0, 'in': False, 'omit': False},
                          {'argvIndex': 0, 'in': True, 'omit': False},
                          {'argvIndex': 0, 'in': False, 'omit': False}],
     'aOrderBy': [{'desc': True, 'iColumn': 9, 'iColumn_name': 'date'}],
     'colUsed': {0, 3, 5, 7, 8, 9},
     'colUsed_names': {'date', 'note', 'customer', 'quantity', 'orderid', 'price'},
     'distinct': 0,
     'estimatedCost': 5e+98,
     'estimatedRows': 25,
     'idxFlags': 0,
     'idxFlags_set': set(),
     'idxNum': 0,
     'idxStr': None,
     'nConstraint': 4,
     'nOrderBy': 1,
     'orderByConsumed': False}
- apsw.ext.dbinfo(db: Connection, schema: str = 'main') tuple[DatabaseFileInfo | None, JournalFileInfo | WALFileInfo | None][source]
- Extracts fields from the database, journal, and wal files

Based on the file format description. The headers are read using apsw.Connection.read() so you see inside encrypted, compressed, zip etc formats, not necessarily the actual on disk file.

Memory databases return None for both.
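A brief sketch printing a few header fields, using a hypothetical on-disk database named example.db:

    import apsw
    import apsw.ext

    db = apsw.Connection("example.db")  # hypothetical database file
    main_info, journal_info = apsw.ext.dbinfo(db)
    if main_info is not None:  # memory databases return None for both
        print(main_info.page_size, main_info.page_count, main_info.text_encoding)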
- apsw.ext.find_columns(table_name: str, column_count: int, pk_columns: set[int], *, connection: Connection, schema: str | None = None) tuple[str, ...][source]
- Finds a matching table and returns column names for changeset_to_sql().

Changesets only include column numbers, not column names, so this method is used to find those names. Use it like this:

    changeset_to_sql(changeset,
                     get_columns=functools.partial(apsw.ext.find_columns, connection=db))

The table name has to match (following SQLite’s case insensitivity rules), have the correct number of columns, and corresponding primary key columns.

Parameters:
- table_name – Name of the expected table 
- column_count – How many columns there are 
- pk_columns – Which columns make up the primary key 
- connection – The connection to examine 
- schema – If None (default) then the main and all attached databases are searched, else only the named one.
 
- Raises:
- ValueError – If no corresponding table with matching column count and primary key columns can be found. 
 
- apsw.ext.changeset_to_sql(changeset: ChangesetInput, get_columns: Callable[[str, int, set[int]], tuple[str, ...]]) Iterator[str][source]
- Produces SQL equivalent to the contents of a changeset (or patchset)

Parameters:
- changeset – The changeset either as bytes, or a streaming callable. It is passed to apsw.Changeset.iter().
- get_columns – Because changesets only have column numbers, this is called with a table name, column count, and primary keys, and should return the corresponding column names to use in the SQL. See find_columns().
 
- SQL statements are provided one at a time, and will directly change one row. (More than one row can be affected due to foreign keys.) If a recorded row change was because of an indirect change then the SQL statement begins with the comment /* indirect */.

See the example
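A sketch of printing the SQL for a changeset, where db is an open Connection and changeset holds bytes recorded earlier with the session extension (both assumed to already exist):

    import functools

    import apsw.ext

    # db and changeset are assumed to be defined elsewhere
    for statement in apsw.ext.changeset_to_sql(
        changeset,
        get_columns=functools.partial(apsw.ext.find_columns, connection=db),
    ):
        print(statement)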
- class apsw.ext.Trace(file: TextIO | None, db: Connection, *, trigger: bool = False, vtable: bool = False, updates: bool = False, transaction: bool = False, truncate: int = 75, indent: str = '')[source]
- Use as a context manager to show each SQL statement run inside the block. Statements from your code as well as from other parts of SQLite are shown:

    with apsw.ext.Trace(sys.stdout, db):
        method()
        db.execute("SQL")
        etc

Parameters:
- file – File to print to. If None then no information is gathered or printed 
- db – Connection to trace.
- trigger – The names of triggers being executed are always shown. If this is True then each statement of an executing trigger is shown too.
- vtable – If True then statements executed behind the scenes by virtual tables are shown. 
- updates – If True and the preupdate_hook() is available, then inserted, updated, and deleted rows are shown. This is very helpful when you use bindings.
- transaction – If True then transaction start and commit/rollback will be shown, using commit/rollback hooks. 
- truncate – Truncates SQL text to this many characters 
- indent – Printed before each line of output 
 
- You are shown each regular statement start with a prefix of >, end with a prefix of < if there were in between statements like triggers, T indicating trigger statements, and V indicating virtual table statements. If updates is on, then INS, DEL, and UPD are shown followed by the rowid, and then the columns. For updates, unchanged columns are shown as .... Transaction control is shown with a ! prefix.

As each statement ends you are shown summary information:

- Time: 1.235 - Elapsed time since the statement started executing, in seconds. This is always shown.
- Rows: 5 - How many times SQLite stopped execution to provide a row to be processed.
- Changes: 77 - The difference in the total change count between when the statement started and when it ended. It will include changes made by triggers, virtual table code etc.
- FullScanRows: 12,334 - Number of rows visited doing a full scan of a table. This indicates an opportunity for an index.
- Sort: 5 - The number of times SQLite had to do a sorting operation. If you have indexes in the desired order then the sorting can be skipped.
- AutoIndexRows: 55,988 - SQLite had to create and add this many rows to an automatic index. This indicates an opportunity for an index.
- VmStep: 55,102 - How many internal steps were needed.
- Mem: 84.3KB - How much memory was used to hold the statement and working data.

Tracing is done with trace_v2().

See ShowResourceUsage() to get summary information about a block as a whole. You can use both at the same time.

See the example.
- class apsw.ext.ShowResourceUsage(file: TextIO | None, *, db: Connection | None = None, scope: 'thread' | 'process' | None = None, indent: str = '')[source]
- Use as a context manager to show a summary of time, resource, and SQLite usage inside the block:

    with apsw.ext.ShowResourceUsage(sys.stdout, db=connection, scope="thread"):
        # do things with the database
        connection.execute("...")
        # and other calculations
        do_work()

When the context finishes, a report is printed to the file. Only non-zero fields are shown - eg if no I/O is done then no I/O fields are shown. See the example.

Parameters:
- file – File to print to. If None then no information is gathered or printed 
- db – Connection to gather SQLite stats from if not None. Statistics from each SQL statement executed are added together.
- scope – Get thread or process stats, or None. Note that MacOS only supports process, and Windows doesn’t support either.
- indent – Printed before each line of output 
 
- Timing information comes from time.monotonic() and time.process_time(), resource usage from resource.getrusage() (empty for Windows), and SQLite from trace_v2().

See Trace() to trace individual statements. You can use both at the same time.

See the example.
- class apsw.ext.PageUsage(page_size: int, pages_used: int, sequential_pages: int, data_stored: int, cells: int, max_payload: int, tables: list[str], indices: list[str])[source]
- Returned by analyze_pages()
- class apsw.ext.DatabasePageUsage(page_size: int, pages_used: int, sequential_pages: int, data_stored: int, cells: int, max_payload: int, tables: list[str], indices: list[str], pages_total: int, pages_freelist: int, max_page_count: int)[source]
- Returned by analyze_pages() when asking about the database as a whole

- max_page_count: int
- Limit on the number of pages 
 
- apsw.ext.analyze_pages(con: Connection, scope: int, schema: str = 'main') DatabasePageUsage | dict[str, PageUsage][source]
- Summarizes page usage for the database

The dbstat virtual table is used to gather statistics. See example output.

Parameters:
- con – Connection to use 
- scope –
  - 0: The database as a whole. Returns a DatabasePageUsage.
  - 1: Tables and their indices are grouped together. Virtual tables like FTS5 have multiple backing tables which are grouped. Returns a dict where the key is the name of the table, and a corresponding PageUsage as the value. The PageUsage.tables and PageUsage.indices fields tell you which ones were included.
  - 2: Each table and index separately. Returns a dict of each name and a corresponding PageUsage where one of the PageUsage.tables and PageUsage.indices fields will have the name.
 
- Note: dbstat is present in PyPI builds, and many platform SQLite distributions. You can use pragma module_list to check. If the table is not present then calling this function will give apsw.SQLError with the message no such table: dbstat.
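A minimal sketch, assuming a hypothetical database file named example.db and a dbstat-enabled SQLite:

    import apsw
    import apsw.ext

    db = apsw.Connection("example.db")  # hypothetical database file

    whole = apsw.ext.analyze_pages(db, 0)  # scope 0: the database as a whole
    print(whole.pages_used, "of", whole.pages_total, "pages used")

    for name, usage in apsw.ext.analyze_pages(db, 1).items():  # scope 1: grouped by table
        print(name, usage.pages_used, usage.data_stored)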
- apsw.ext.page_usage_to_svg(con: Connection, out: TextIO, schema: str = 'main') None[source]
- Visualize database space usage as an SVG

You can hover or click on segments to get more details. The centre circle shows information about the database as a whole, the middle ring shows usage grouped by table (combining indices and shadow tables for virtual tables), while the outer ring shows each index and table separately.

Uses analyze_pages() to gather the information.

Parameters:
- con – Connection to query 
- out – Where the SVG is written to. You can use io.StringIO if you want it as a string.
- schema – Which attached database to query 
 
 - See example output. 
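A short sketch writing the SVG to a file, using a hypothetical database name:

    import apsw
    import apsw.ext

    db = apsw.Connection("example.db")  # hypothetical database file
    with open("usage.svg", "w", encoding="utf8") as out:
        apsw.ext.page_usage_to_svg(db, out)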
- exception apsw.ext.QueryLimitNoException[source]
- Indicates that no exception will be raised when a query_limit is exceeded
- class apsw.ext.query_limit(db: Connection, *, row_limit: int | None = None, timeout: float | None = None, row_exception: type[Exception] = QueryLimitNoException, timeout_exception: type[Exception] = QueryLimitNoException, timeout_steps: int = 100)[source]
- Use as a context manager to limit execution time and rows processed in the block

When the total number of rows processed hits the row limit, or timeout seconds have elapsed, an exception is raised to exit the block.

    with query_limit(db, row_limit=1000, timeout=2.5):
        db.execute("...")
        for row in db.execute("..."):
            ...
        db.execute("...")

Parameters:
- db – Connection to monitor 
- row_limit – Maximum number of rows to process, across all queries. None (default) means no limit
- timeout – Maximum elapsed time in seconds. None (default) means no limit
- row_exception – Class of exception to raise when row limit is hit 
- timeout_exception – Class of exception to raise when timeout is hit 
- timeout_steps – How often the elapsed time is checked in SQLite internal operations (see set_progress_handler())
 
- If the exception is QueryLimitNoException (default) then no exception is passed on when the block exits.

Row limits are implemented by trace_v2() to monitor rows. Only rows directly in your queries are counted - for example rows used in the background by virtual tables like FTS5, or by triggers, are not counted.

Time limits are implemented by set_progress_handler() to monitor the elapsed time. This means the elapsed time is only checked while running SQLite queries.

trace_v2() and set_progress_handler() implement multiple registrations so query_limit will not interfere with any you may have registered.

If you use nested query_limit blocks then only the limits set by the closest block apply within that block.

See the example
- apsw.ext.format_query_table(db: Connection, query: str, bindings: Bindings | None = None, *, colour: bool = False, quote: bool = False, string_sanitize: Callable[[str], str] | 0 | 1 | 2 = 0, binary: Callable[[bytes], str] = lambda x: ..., null: str = '(null)', truncate: int = 4096, truncate_val: str = ' ...', text_width: int = 80, use_unicode: bool = True) str[source]
- Produces query output in an attractive text table

See the example.

Parameters:
- db – Connection to run the query on 
- query – Query to run 
- bindings – Bindings for query (if needed) 
- colour – If True then ANSI colours are used to outline the header, and show the type of each value. 
- quote – If True then apsw.format_sql_value() is used to get a textual representation of a value
- string_sanitize – If this is a callable then each string is passed to it for cleaning up. Bigger numbers give more sanitization to the string. Using an example source string of:

    '''hello \\ \t\f\0日本語 world'''

  - 0 - Various whitespace (eg tabs, vertical form feed) is replaced, backslashes are escaped, and embedded nulls become \0. Example output: hello \\ 0日本語 world
  - 1 - After step 0, all non-ascii characters are replaced with their apsw.unicode.codepoint_name(). Example output: hello \\ \0{CJK UNIFIED IDEOGRAPH-65E5}{CJK UNIFIED IDEOGRAPH-672C}{CJK UNIFIED IDEOGRAPH-8A9E} world
  - 2 - All non-ascii characters and whitespace are replaced by a dot. Example output: hello.\……..world
- binary – Called to convert bytes to string 
- null – How to represent the null value 
- truncate – How many characters to truncate long strings at (after sanitization) 
- truncate_val – Appended to truncated strings to show it was truncated 
- text_width – Maximum output width to generate 
- use_unicode – If True then unicode line drawing characters are used. If False then +---+ and | are used.
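A brief sketch, assuming a hypothetical sales table exists in the database:

    import apsw
    import apsw.ext

    db = apsw.Connection("example.db")  # hypothetical database file
    print(apsw.ext.format_query_table(db, "SELECT * FROM sales LIMIT 5"))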
 
 
- class apsw.ext.VTColumnAccess(*values)[source]
- How the column value is accessed from a row, for make_virtual_module()

- By_Index = 1
- By number like with tuples and lists - eg row[3]
 - By_Name = 2
- By name like with dicts - eg row['quantity']
 - By_Attr = 3
- By attribute like with dataclasses - eg row.quantity
 
- apsw.ext.get_column_names(row: Any) tuple[Sequence[str], VTColumnAccess][source]
- Works out column names and access given an example row

row can be an instance of a row, or the class used to make one (eg a dataclass)

Column names come from:
  - isinstance of tuple with a _fields attribute (eg namedtuple()): row._fields
  - hasattr("__match_args__"): row.__match_args__ (if not empty)
  - otherwise: columnX where X is zero up to len(row)

Example usage:

    def method(arg1, arg2):
        yield {"fruit": "orange", "price": 17, "quantity": 2}

    example_row = next(method(0, 10))

    method.columns, method.column_access = apsw.ext.get_column_names(example_row)
- apsw.ext.make_virtual_module(db: Connection, name: str, callable: Callable, *, eponymous: bool = True, eponymous_only: bool = False, repr_invalid: bool = False) None[source]
- Registers a read-only virtual table module with db based on callable. The callable must have an attribute named columns with a list of column names, and an attribute named column_access with a VTColumnAccess saying how to access columns from a row. See get_column_names() for easily figuring that out.

The goal is to make it very easy to turn a Python function into a virtual table. For example the following Python function:

    def gendata(start_id, end_id=1000, include_system=False):
        yield (10, "2020-10-21", "readme.txt")
        yield (11, "2019-05-12", "john.txt")

    gendata.columns = ("user_id", "start_date", "file_name")
    gendata.column_access = VTColumnAccess.By_Index

Will generate a table declared like this, using HIDDEN for parameters:

    CREATE TABLE table_name(user_id,
                            start_date,
                            file_name,
                            start_id HIDDEN,
                            end_id HIDDEN,
                            include_system HIDDEN);

inspect.signature() is used to discover parameter names.

Positional parameters to callable come from the table definition.

    SELECT * from table_name(1, 100, 1);

Keyword arguments come from WHERE clauses.

    SELECT * from table_name(1) WHERE include_system=1;

iter() is called on callable with each iteration expected to return the next row. That means callable can return its data all at once (eg a list of rows), or yield them one row at a time. The number of columns must always be the same, no matter what the parameter values.

Parameters:
- eponymous – Lets you use the name as a table name without having to create a virtual table 
- eponymous_only – Can only reference as a table name 
- repr_invalid – If True then values that are not valid apsw.SQLiteValue will be converted to a string using repr()
 
- See the example

Advanced

The callable may also have an attribute named primary_key. By default the id() of each row is used as the primary key. If present then it must be a column number to use as the primary key. The contents of that column must be unique for every row.

If you specify a parameter to the table and in WHERE, or have non-equality for WHERE clauses of parameters, then the query will fail with apsw.SQLError and a message from SQLite of “no query solution”
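Continuing the gendata example above (assuming that function and its columns/column_access attributes are defined), registration and querying might look like this sketch:

    import apsw
    import apsw.ext

    db = apsw.Connection(":memory:")
    apsw.ext.make_virtual_module(db, "gendata", gendata)

    # 10 is bound to the hidden start_id parameter
    for row in db.execute("SELECT user_id, file_name FROM gendata(10)"):
        print(row)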
- apsw.ext.generate_series_sqlite(start=None, stop=0xFFFFFFFF, step=1)[source]
- Behaves like SQLite’s generate_series

Only integers are supported. If step is negative then values are generated from stop to start.

To use:

    apsw.ext.make_virtual_module(db, "generate_series", apsw.ext.generate_series_sqlite)
    db.execute("SELECT value FROM generate_series(1, 10)")

See also generate_series()
- apsw.ext.generate_series(start, stop, step=None)[source]
- Behaves like Postgres and SQL Server

Operates on floating point as well as integer. If step is not specified then it is 1 if stop is greater than start, and -1 if stop is less than start.

To use:

    apsw.ext.make_virtual_module(db, "generate_series", apsw.ext.generate_series)
    db.execute("SELECT value FROM generate_series(1, 10)")

See also generate_series_sqlite()
- apsw.ext.query_info(db: Connection, query: str, bindings: Bindings | None = None, *, prepare_flags: int = 0, actions: bool = False, expanded_sql: bool = False, explain: bool = False, explain_query_plan: bool = False) QueryDetails[source]
- Returns information about the query, without running it.

bindings can be None if you want to find out what the bindings for the query are.

Set the various parameters to True if you also want the actions, expanded_sql, explain, query_plan etc filled in.

See the example.
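A minimal sketch, using a hypothetical sales table created in memory:

    import apsw
    import apsw.ext

    db = apsw.Connection(":memory:")
    db.execute("CREATE TABLE sales(item, quantity)")  # hypothetical table

    info = apsw.ext.query_info(
        db,
        "SELECT item FROM sales WHERE quantity > ?",
        actions=True,
    )
    print(info.is_readonly, info.bindings_count)
    for action in info.actions:
        print(action.action_name, action.table_name)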
- class apsw.ext.QueryDetails(query: str, bindings: Bindings | None, first_query: str, query_remaining: str | None, is_explain: int, is_readonly: bool, has_vdbe: bool, bindings_count: int, bindings_names: tuple[str | None], description: tuple[tuple[str, str], ...], description_full: tuple[tuple[str, str, str, str, str], ...] | None, expanded_sql: str | None, actions: list[QueryAction] | None, explain: list[VDBEInstruction] | None, query_plan: QueryPlan | None)[source]
- A dataclass that provides detailed information about a query, returned by query_info()

- actions: list[QueryAction] | None
- A list of the actions taken by the query, as discovered via Connection.authorizer
 - explain: list[VDBEInstruction] | None
- A list of instructions of the internal code used by SQLite to execute the query 
 - query_plan: QueryPlan | None
- The steps taken against tables and indices described here 
 
- class apsw.ext.QueryAction(action: int, action_name: str, column_name: str | None = None, database_name: str | None = None, file_name: str | None = None, function_name: str | None = None, module_name: str | None = None, operation: str | None = None, pragma_name: str | None = None, pragma_value: str | None = None, table_name: str | None = None, trigger_name: str | None = None, trigger_or_view: str | None = None, view_name: str | None = None)[source]
- A dataclass that provides information about one action taken by a query

Depending on the action, only a subset of the fields will have non-None values

- action: int
- Authorizer code (also present in apsw.mapping_authorizer_function)
 - action_name: str
- The string corresponding to the action. For example action could be 21 in which case action_name will be SQLITE_SELECT 
 
- class apsw.ext.QueryPlan(detail: str, sub: list[QueryPlan] | None = None)[source]
- A dataclass for one step of a query plan
- class apsw.ext.VDBEInstruction(addr: int, opcode: str, comment: str | None = None, p1: int | None = None, p2: int | None = None, p3: int | None = None, p4: int | None = None, p5: int | None = None)[source]
- A dataclass representing one instruction and its parameters
- class apsw.ext.DatabaseFileInfo(filename: str, header: bytes, page_size: int, write_format: int, read_format: int, reserved_bytes: int, file_change_counter: int, page_count: int, freelist_pages: int, schema_cookie: int, schema_format: int, default_cache_size: int, autovacuum_top_root: int, text_encoding: str, user_version: int, incremental_vacuum: bool, application_id: int, version_valid_for: int, sqlite_version: int)[source]
- Information about the main database file returned by dbinfo()
- class apsw.ext.JournalFileInfo(filename: str, header: bytes, header_valid: bool, page_count: int, random_nonce: int, initial_pages: int, sector_size: int, page_size: int)[source]
- Information about the rollback journal returned by dbinfo()

See the file format description