Optuna Dashboard

Real-time dashboard for Optuna.

Getting Started

Installation

Prerequisite

Optuna Dashboard supports Python 3.7 or newer.

Installing from PyPI

You can install optuna-dashboard via PyPI or Anaconda Cloud.

$ pip install optuna-dashboard

You can also install the following optional dependencies to make optuna-dashboard faster:

$ pip install optuna-fast-fanova gunicorn

Installing from the source code

Since building the package requires compiling TypeScript files, pip install git+https://.../optuna-dashboard.git does not work. Please clone the git repository and execute the following commands to build an sdist package:

$ git clone git@github.com:optuna/optuna-dashboard.git
$ cd optuna-dashboard
# Node.js v16 is required to compile TypeScript files.
$ npm install
$ npm run build:prd
$ python -m build --sdist

Then you can install it as follows:

$ pip install dist/optuna-dashboard-x.y.z.tar.gz

See CONTRIBUTING.md for more details.

Command-line Interface

The most common way to use Optuna Dashboard is through its command-line interface. Assuming that Optuna’s optimization history is persisted using RDBStorage, you can launch the dashboard with optuna-dashboard <STORAGE_URL>.

import optuna

def objective(trial):
    x = trial.suggest_float("x", -100, 100)
    y = trial.suggest_categorical("y", [-1, 0, 1])
    return x**2 + y

study = optuna.create_study(
    storage="sqlite:///db.sqlite3",  # Specify the storage URL here.
    study_name="quadratic-simple"
)
study.optimize(objective, n_trials=100)
print(f"Best value: {study.best_value} (params: {study.best_params})")
After running the script above, launch the dashboard against the same storage:

$ optuna-dashboard sqlite:///db.sqlite3
Listening on http://localhost:8080/
Hit Ctrl-C to quit.

If you are using the JournalStorage classes introduced in Optuna v3.1, you can use them as below:

# JournalFileStorage
$ optuna-dashboard ./path/to/journal.log

# JournalRedisStorage
$ optuna-dashboard redis://localhost:6379

Using an official Docker image

You can also use an official Docker image instead of setting up your Python environment. The Docker image only supports SQLite3, MySQL (PyMySQL), and PostgreSQL (Psycopg2).

SQLite3

$ docker run -it --rm -p 8080:8080 -v `pwd`:/app -w /app ghcr.io/optuna/optuna-dashboard sqlite:///db.sqlite3

MySQL (PyMySQL)

$ docker run -it --rm -p 8080:8080 ghcr.io/optuna/optuna-dashboard mysql+pymysql://username:password@hostname:3306/dbname

PostgreSQL (Psycopg2)

$ docker run -it --rm -p 8080:8080 ghcr.io/optuna/optuna-dashboard postgresql+psycopg2://username:password@hostname:5432/dbname

Python Interface

A Python interface is also provided for users who want to use other storage implementations (e.g. InMemoryStorage). You can use the run_server() function as below:

import optuna
from optuna_dashboard import run_server

def objective(trial):
    x = trial.suggest_float("x", -100, 100)
    y = trial.suggest_categorical("y", [-1, 0, 1])
    return x**2 + y

storage = optuna.storages.InMemoryStorage()
study = optuna.create_study(storage=storage)
study.optimize(objective, n_trials=100)

run_server(storage)

Using Gunicorn or uWSGI server

Optuna Dashboard uses the wsgiref module from Python’s standard library by default. However, as described here, wsgiref is implemented for testing and debugging purposes. You can switch to another WSGI server implementation by using the wsgi() function.

wsgi.py
from optuna.storages import RDBStorage
from optuna_dashboard import wsgi

storage = RDBStorage("sqlite:///db.sqlite3")
application = wsgi(storage)

Then execute the following commands to start the server:

$ pip install gunicorn
$ gunicorn --workers 4 wsgi:application

or

$ pip install uwsgi
$ uwsgi --http :8080 --workers 4 --wsgi-file wsgi.py

Jupyter Lab Extension (Experimental)

You can install the Jupyter Lab extension via PyPI.

Screenshot for the Jupyter Lab Extension


To use it, click the tile to launch the extension, and enter your Optuna storage URL (e.g. sqlite:///db.sqlite3) in the dialog.

Browser-only version (Experimental)

GIF animation for the browser-only version

Browser-only version of Optuna Dashboard, powered by Wasm.

We’ve developed a version that operates solely within your web browser. There’s no need to install Python or any other dependencies. Simply open the following URL in your browser, drag and drop your SQLite3 file onto the page, and you’re ready to view your Optuna studies!

https://optuna.github.io/optuna-dashboard/

Warning

Currently, only a subset of features is available. However, you can still check the optimization history, hyperparameter importance, and more in graphs and tables.

VS Code and code-server Extension (Experimental)

You can install the VS Code extension via Visual Studio Marketplace, or install the code-server extension via Open VSX.

Screenshot for the VS Code Extension


To use it, right-click a SQLite3 file (*.db or *.sqlite3) in the file explorer and select “Open in Optuna Dashboard” from the dropdown menu. This extension leverages the browser-only version of Optuna Dashboard, so the same limitations apply.

Google Colaboratory

When you want to check the optimization history on Google Colaboratory, you can use the google.colab.output module as follows:

import optuna
import threading
from google.colab import output
from optuna_dashboard import run_server

def objective(trial):
    x = trial.suggest_float("x", -100, 100)
    return (x - 2) ** 2

# Run optimization
storage = optuna.storages.InMemoryStorage()
study = optuna.create_study(storage=storage)
study.optimize(objective, n_trials=100)

# Start Optuna Dashboard
port = 8081
thread = threading.Thread(target=run_server, args=(storage,), kwargs={"port": port})
thread.start()
output.serve_kernel_port_as_window(port, path='/dashboard/')

Then open http://localhost:8081/dashboard/ to browse the dashboard.

API Reference

General APIs

optuna_dashboard.run_server

Start optuna-dashboard and block until the server terminates.

optuna_dashboard.wsgi

This function exposes a WSGI interface for users who want to run on production-grade WSGI servers like Gunicorn or uWSGI.

optuna_dashboard.save_note

Save the note (Markdown format) to the Study or Trial.

optuna_dashboard.save_plotly_graph_object

Save a user-defined Plotly graph object to the study.

optuna_dashboard.artifact.get_artifact_path

Get the URL path for a given artifact ID.

Human-in-the-loop

Form Widgets

optuna_dashboard.register_objective_form_widgets

Register a list of form widgets to an Optuna study.

optuna_dashboard.register_user_attr_form_widgets

Register a list of form widgets to an Optuna study.

optuna_dashboard.dict_to_form_widget

Restore form widget objects from the dictionary.

optuna_dashboard.ChoiceWidget

A widget representing a choice with associated values.

optuna_dashboard.SliderWidget

A widget representing a slider for selecting a value within a range.

optuna_dashboard.TextInputWidget

A text input widget class that defines a text input field.

optuna_dashboard.ObjectiveUserAttrRef

A class representing a reference to a value of trial.user_attrs.

Preferential Optimization

optuna_dashboard.preferential.create_study

Like optuna.create_study(), but for preferential optimization.

optuna_dashboard.preferential.load_study

Like optuna.load_study(), but for preferential optimization.

optuna_dashboard.preferential.PreferentialStudy

A Study-like class for preferential optimization.

optuna_dashboard.preferential.samplers.gp.PreferentialGPSampler

Sampler for preferential optimization using Gaussian process.

optuna_dashboard.register_preference_feedback_component

Register a preference feedback component to the study.

Streamlit

optuna_dashboard.streamlit.render_trial_note

Write a trial’s note to the UI with Streamlit in Markdown format.

optuna_dashboard.streamlit.render_objective_form_widgets

Render user input widgets to UI with streamlit.

optuna_dashboard.streamlit.render_user_attr_form_widgets

Render user input widgets to UI with streamlit.

Error Messages

This section lists descriptions and background for common error messages and warnings raised or emitted by Optuna Dashboard.

Warning Messages

Human-in-the-loop optimization will not work with _CachedStorage in Optuna prior to v3.2.

This warning occurs when the storage object associated with the Optuna Study is of the _CachedStorage class.

When using RDBStorage with Optuna, it is implicitly wrapped with the _CachedStorage class for performance improvement. However, there is a bug in the _CachedStorage class that prevents Optuna from synchronizing the latest Trial information. This bug is not a problem for the general use case of Optuna, but it is critical for human-in-the-loop optimization.

If you are using a version prior to v3.2, please upgrade to v3.2 or later, use another storage class, or apply the following workaround to unwrap the _CachedStorage class:

import optuna

if isinstance(study._storage, optuna.storages._CachedStorage):
    study._storage = study._storage._backend

set_objective_names() function is deprecated. Please use study.set_metric_names() instead.

The set_objective_names() function has been ported to Optuna. Please use the study.set_metric_names() function instead.

Deprecated API:

optuna_dashboard.set_objective_names(study, ["objective 1", "objective 2"])

Corresponding Active API:

study.set_metric_names(["objective 1", "objective 2"])

upload_artifact() is deprecated. Please use optuna.artifacts.upload_artifact() instead.

The upload_artifact() function has been ported to Optuna. Please use the optuna.artifacts.upload_artifact() function instead.

Deprecated API:

optuna_dashboard.artifact.upload_artifact(artifact_backend, trial, file_path)

Corresponding Active API:

optuna.artifacts.upload_artifact(trial, file_path, artifact_store)

Please note that the order of arguments is different between the deprecated and active APIs.

FileSystemBackend is deprecated. Please use FileSystemArtifactStore instead.

The FileSystemBackend class has been ported to Optuna. Please use the FileSystemArtifactStore class instead.

Deprecated API:

optuna_dashboard.artifact.file_system.FileSystemBackend(base_path)

Corresponding Active API:

optuna.artifacts.FileSystemArtifactStore(base_path)

Boto3Backend is deprecated. Please use Boto3ArtifactStore instead.

The Boto3Backend class has been ported to Optuna. Please use the Boto3ArtifactStore class instead.

Deprecated API:

optuna_dashboard.artifact.boto3.Boto3Backend(bucket_name, client=None)

Corresponding Active API:

optuna.artifacts.Boto3ArtifactStore(bucket_name, client=None)

Tutorials

Tutorial: Human-in-the-loop Optimization using Objective Form Widgets

_images/hitl1.png

In tasks involving image generation, natural language, or speech synthesis, evaluating results mechanically can be tough, and human evaluation becomes crucial. Until now, managing such tasks with Optuna has been challenging. However, the introduction of Optuna Dashboard enables humans and optimization algorithms to work interactively and execute the optimization process.

In this tutorial, we will explain how to optimize hyperparameters to generate a simple image using Optuna Dashboard. While the tutorial focuses on a simple task, the same approach can be applied to optimize, for instance, more complex images, natural language, and speech.

The tutorial is organized as follows:

  • What is human-in-the-loop optimization?

  • Main tutorial

    • Theme

    • System architecture

    • Steps

    • Script explanation

What is human-in-the-loop optimization?

Human-in-the-loop (HITL) is a concept where humans play a role in machine learning or artificial intelligence systems. In HITL optimization in particular, humans are part of the optimization process. This is useful when it’s difficult for machines to evaluate the results and human evaluation is crucial. In such cases, humans will assess the results instead.

Generally, HITL optimization involves the following steps:

  1. An output is computed given the hyperparameters suggested by an optimization algorithm

  2. An evaluator (human) evaluates the output

Steps 1 and 2 are repeated to find the best hyperparameters.

HITL optimization is valuable in areas where human judgment is essential, like art and design, since it’s hard for machines to evaluate the output. For instance, it can optimize images created by generative models or improve cooking methods and ingredients for foods like coffee.

Main tutorial

Theme

In this tutorial, we will interactively optimize RGB values between 0 and 255 to generate a color that the user perceives as the “color of the sunset.” If someone already knows the RGB hyperparameter characteristics for their ideal “color of the sunset,” they can specify those values directly. However, even without knowing such characteristics, interactive optimization allows us to find good hyperparameters. Although the task is simple, this serves as a practical introduction to human-in-the-loop optimization, and can be applied to image generation, natural language generation, speech synthesis, and more.

_images/hitl2.jpeg _images/hitl3.jpeg

To implement HITL optimization, you need a way to interactively execute the optimization process, typically through a user interface (UI). Usually, users would have to implement such an interface themselves, but with Optuna Dashboard, everything is already set up for you. This is a major advantage of using Optuna Dashboard for this purpose.

System architecture

The system architecture for this tutorial’s example is as follows:

_images/hitl4.png

In HITL optimization using Optuna Dashboard, there are primarily the following components:

  1. Optuna Dashboard for displaying the outputs and making evaluations

  2. Database and File Storage to store the experiment’s data (Study)

  3. Script that samples hyperparameters from Optuna and generates outputs

The DB is the place where the information of the Study is stored. The Artifact Store is a place for storing artifacts (data, files, etc.) for the Optuna Dashboard. In this case, it is used as a storage location for the RGB images.

_images/hitl5.png

Our script repeatedly performs these steps:

  1. Monitor the Study’s state to maintain a constant number of Trials in progress (Running).

  2. Sample hyperparameters using the optimization algorithm and generate RGB images.

  3. Upload the generated RGB images to the Artifact Store.

_images/hitl6.png

Additionally, the evaluator, Optuna Dashboard, and Optuna perform the following processes:

  a. Optuna Dashboard retrieves the RGB images uploaded to the Artifact Store and displays the retrieved RGB images to the evaluator

  b. The evaluator reviews the displayed RGB images and inputs their evaluation of how close the displayed image is to the “color of the sunset” into the Optuna Dashboard

  c. Optuna Dashboard saves the evaluation results in the database

In the example of this tutorial, processes 1-3 and a-c are executed in parallel.

Steps

Given the above system, we carry out HITL optimization as follows:

  1. Environment setup

  2. Execution of the HITL optimization script

  3. Launching Optuna Dashboard

  4. Interactive HITL optimization

Environment setup

To run the script used in this tutorial, you need to install the following libraries:

$ pip install "optuna>=3.3.0" "optuna-dashboard>=0.12.0" pillow

You will use SQLite for the storage backend in this tutorial. No additional installation is required for it, since the sqlite3 module is included in Python’s standard library.

Execution of the HITL optimization script

Run the Python script below, which you can copy from main.py:

$ python main.py

Launching Optuna Dashboard

Run this command to launch Optuna Dashboard in a separate process.

$ optuna-dashboard sqlite:///db.sqlite3 --artifact-dir ./artifact

In the command, the storage is set to sqlite:///db.sqlite3 to persist Optuna’s trial history. To store the artifacts, --artifact-dir ./artifact is specified.

Listening on http://127.0.0.1:8080/
Hit Ctrl-C to quit.

When you run the command, you will see a message like the one above. Open http://127.0.0.1:8080/dashboard/ in your browser.

Interactive HITL optimization
_images/hitl7.png

You will see the main screen.

_images/hitl8.png

In this example, a study is created with the name “Human-in-the-loop Optimization.” Click on it. You will be directed to the page related to that study.

_images/hitl9.png

Click the third item in the sidebar. You will see a list of all trials.

_images/hitl10.png

For each trial, you can see its details such as RGB parameter values and importantly, the generated image based on these values.

_images/hitl11.gif

Let’s evaluate some of the images. For the first image, which is far from the “color of the sunset,” we rated it as “Bad.” For the next image, which is somewhat closer to the “color of the sunset,” we rated it as “So-so.” Continue this evaluation process for several trials. After evaluating about 30 trials, we should see an improvement.

We can review the progress of the HITL optimization through graphs and other visualizations.

_images/hitl12.png

This image shows an array of the generated images for up to 30 trials. The best ones are surrounded by thick lines.

_images/hitl13.png

By looking at the History plot, you can see that colors gradually get closer to the “color of the sunset”.

_images/hitl14.png

Additionally, by looking at the Parallel Coordinate plot, you can get an insight into the relationship between the evaluation and each hyperparameter.

Various other plots are available. Try exploring.

Script explanation

Let’s walk through the script we used for the optimization.

def suggest_and_generate_image(
    study: optuna.Study, artifact_store: FileSystemArtifactStore
) -> None:
    # 1. Ask new parameters
    trial = study.ask()
    r = trial.suggest_int("r", 0, 255)
    g = trial.suggest_int("g", 0, 255)
    b = trial.suggest_int("b", 0, 255)

    # 2. Generate image
    image_path = f"tmp/sample-{trial.number}.png"
    image = Image.new("RGB", (320, 240), color=(r, g, b))
    image.save(image_path)

    # 3. Upload Artifact
    artifact_id = upload_artifact(trial, image_path, artifact_store)
    artifact_path = get_artifact_path(trial, artifact_id)

    # 4. Save Note
    note = textwrap.dedent(
        f"""\
    ## Trial {trial.number}

    ![generated-image]({artifact_path})
    """
    )
    save_note(trial, note)

In the suggest_and_generate_image function, a new Trial is obtained and new hyperparameters are suggested for it. Based on those hyperparameters, an RGB image is generated as an artifact. The generated image is then uploaded to Optuna’s Artifact Store, and the image is also displayed in the Dashboard’s Note. For more information on how to use the Note feature, please refer to the API Reference of save_note().

def start_optimization(artifact_store: FileSystemArtifactStore) -> NoReturn:
    # 1. Create Study
    study = optuna.create_study(
        study_name="Human-in-the-loop Optimization",
        storage="sqlite:///db.sqlite3",
        sampler=optuna.samplers.TPESampler(constant_liar=True, n_startup_trials=5),
        load_if_exists=True,
    )

    # 2. Set an objective name
    study.set_metric_names(["Looks like sunset color?"])

    # 3. Register ChoiceWidget
    register_objective_form_widgets(
        study,
        widgets=[
            ChoiceWidget(
                choices=["Good 👍", "So-so👌", "Bad 👎"],
                values=[-1, 0, 1],
                description="Please input your score!",
            ),
        ],
    )

    # 4. Start Human-in-the-loop Optimization
    n_batch = 4
    while True:
        running_trials = study.get_trials(deepcopy=False, states=(TrialState.RUNNING,))
        if len(running_trials) >= n_batch:
            time.sleep(1)  # Avoid busy-loop
            continue
        suggest_and_generate_image(study, artifact_store)

The function start_optimization defines our loop for HITL optimization to generate an image resembling a sunset color.

  • First, at #1, an Optuna Study is created using TPESampler. Setting load_if_exists=True allows an existing Study to be reused, so the experiment can be resumed if it has already been created. constant_liar=True is set in TPESampler to prevent similar hyperparameters from being sampled when several trials are executed simultaneously (in this example, four at a time).

  • At #2, the name of the objective that the ChoiceWidget targets is set using the study.set_metric_names function. In this case, the name “Looks like sunset color?” is set.

  • At #3, the ChoiceWidget is registered using the register_objective_form_widgets() function. This widget is used to ask users for evaluation to find the optimal hyperparameters. In this case, there are three options: “Good 👍”, “So-so👌”, and “Bad 👎”, each with an evaluation value of -1, 0, and 1, respectively. Note that Optuna minimizes objective values by default, so -1 is Good. Other widgets for evaluation are also available, so please refer to the API Reference for details.

  • At #4, the suggest_and_generate_image function is used to generate an RGB image. The number of currently running (TrialState.RUNNING) trials is checked periodically to ensure that four trials are always running simultaneously. Trials are executed in batches like this because obtaining a result from a trial generally takes a long time, and batch parallelism reduces the time spent waiting for the next results. In this case, generating the images is instant, so batching is not strictly necessary, but it demonstrates the practice.

def main() -> NoReturn:
    tmp_path = os.path.join(os.path.dirname(__file__), "tmp")

    # 1. Create Artifact Store
    artifact_path = os.path.join(os.path.dirname(__file__), "artifact")
    artifact_store = FileSystemArtifactStore(artifact_path)

    if not os.path.exists(artifact_path):
        os.mkdir(artifact_path)

    if not os.path.exists(tmp_path):
        os.mkdir(tmp_path)

    # 2. Run optimize loop
    start_optimization(artifact_store)

In the main function, the location of the Artifact Store is set first, and the artifact and tmp folders are created if they do not already exist.

  • At #1, a FileSystemArtifactStore is created, which is one of the Artifact Store options provided by Optuna. The Artifact Store holds artifacts (data, files, etc.) generated during Optuna trials. For more information, please refer to the API Reference.

  • At #2, the start_optimization() function described above is called to start the HITL optimization.

Tutorial: Preferential Optimization

What is Preferential Optimization?

Preferential optimization is a method for optimizing hyperparameters focusing on human preferences: it determines which of a pair of trials is superior. Unlike human-in-the-loop optimization with objective form widgets, which relies on absolute evaluations, it significantly reduces fluctuations in evaluators’ criteria, ensuring more consistent results.

In this tutorial, we’ll interactively optimize RGB values to generate a color resembling a “sunset hue”, aligning with the problem setting in the previous tutorial. Familiarity with the tutorial on objective form widgets may enhance your understanding.

How to Run Preferential Optimization

In preferential optimization, two programs run concurrently: generator.py performing parameter sampling and image generation, and the Optuna Dashboard, offering a user interface for human evaluation.

System Architecture

First, ensure the necessary packages are installed by executing the following command in your terminal:

$ pip install "optuna>=3.3.0" "optuna-dashboard[preferential]>=0.13.0b1" pillow

Next, execute the Python script, copied from generator.py.

$ python generator.py

Then, launch Optuna Dashboard in a separate process using the following command.

$ optuna-dashboard sqlite:///example.db --artifact-dir ./artifact

Here, the storage is configured to sqlite:///example.db to retain Optuna’s trial history, and --artifact-dir ./artifact is specified to store the artifacts (output images).

Listening on http://127.0.0.1:8080/
Hit Ctrl-C to quit.

Upon executing the command, a message like the above will appear. Open http://127.0.0.1:8080/dashboard/ in your browser to view the Optuna Dashboard:

GIF animation for preferential optimization

Select the least sunset-like color from four trials to record human preferences.

Script Explanation

First, we specify the SQLite database URL and initialize the artifact store to house the images produced during the trial.

STORAGE_URL = "sqlite:///example.db"
artifact_path = os.path.join(os.path.dirname(__file__), "artifact")
artifact_store = FileSystemArtifactStore(base_path=artifact_path)
os.makedirs(artifact_path, exist_ok=True)

Within the main() function, we create dedicated Study and Sampler objects, since preferential optimization relies on comparison results between trials and lacks absolute evaluation values for each one.

Then, the component to be displayed on the human feedback pages is registered via register_preference_feedback_component(). The generated images are uploaded to the artifact store, and their artifact_id is stored in the trial user attribute (e.g., trial.user_attrs["rgb_image"]), enabling the Optuna Dashboard to display images on the evaluation feedback page.

from optuna_dashboard import register_preference_feedback_component
from optuna_dashboard.preferential import create_study
from optuna_dashboard.preferential.samplers.gp import PreferentialGPSampler

study = create_study(
    n_generate=4,
    study_name="Preferential Optimization",
    storage=STORAGE_URL,
    sampler=PreferentialGPSampler(),
    load_if_exists=True,
)
# Change the component displayed on the human feedback pages.
# By default (component_type="note"), the Trial's Markdown note is displayed.
user_attr_key = "rgb_image"
register_preference_feedback_component(study, "artifact", user_attr_key)

Following this, we create a loop that continuously checks whether new trials should be generated, and waits for human evaluation if not. Within the while loop, new trials are generated when should_generate() returns True. For each trial, RGB values are sampled, an image is generated from these values and saved temporarily, the image is uploaded to the artifact store, and finally the artifact_id is stored under the key specified via register_preference_feedback_component().

while True:
    # If study.should_generate() returns False, the generator waits for human evaluation.
    if not study.should_generate():
        time.sleep(0.1)  # Avoid busy-loop
        continue

    trial = study.ask()
    # Ask new parameters
    r = trial.suggest_int("r", 0, 255)
    g = trial.suggest_int("g", 0, 255)
    b = trial.suggest_int("b", 0, 255)

    # Generate an image
    image_path = os.path.join(tmpdir, f"sample-{trial.number}.png")
    image = Image.new("RGB", (320, 240), color=(r, g, b))
    image.save(image_path)

    # Upload Artifact and set artifact_id to trial.user_attrs["rgb_image"].
    artifact_id = upload_artifact(trial, image_path, artifact_store)
    trial.set_user_attr(user_attr_key, artifact_id)

LICENSE

This software is licensed under the MIT license and uses code from the SQLAlchemy (MIT) project; see LICENSE for more information.
