Core API Reference
This document provides detailed information about the core components and functions of MetaboT.
Main Module
The main module (app.core.main) provides the primary entry points and core functionality for MetaboT.
Functions
get_api_key
Retrieves the API key for the specified provider from environment variables.
Parameters:
- provider (str): Provider name matching a key in API_KEY_MAPPING

Returns:
- Optional[str]: API key if found, None otherwise
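Example (a minimal sketch, assuming get_api_key is importable from app.core.main and that "openai" is a provider key in API_KEY_MAPPING):

from app.core.main import get_api_key

# Returns the provider's key from the environment, or None if it is not set
api_key = get_api_key("openai")
if api_key is None:
    raise RuntimeError("No API key configured for provider 'openai'")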
create_litellm_model
Creates a ChatLiteLLM instance based on the model id and configuration from the app/config/params.ini file. The configuration includes model parameters such as temperature, max_retries, and optional base_url/api_base settings.
Parameters:
- config (configparser.SectionProxy): The configuration section from params.ini that contains model settings:
  - Required: "id" - model identifier (e.g., "gpt-4", "deepseek/...")
  - Optional: "temperature", "max_retries", "base_url", "api_base"

Returns:
- ChatLiteLLM: Configured ChatLiteLLM instance
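Example (a hedged sketch; the section name "model" is illustrative, and create_litellm_model is assumed importable from app.core.main):

import configparser
from app.core.main import create_litellm_model

config = configparser.ConfigParser()
config.read("app/config/params.ini")
# Pass the configparser.SectionProxy that holds the model settings
model = create_litellm_model(config["model"])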
llm_creation
def llm_creation(api_key: Optional[str] = None, params_file: Optional[str] = None) -> Dict[str, Union[ChatOpenAI, ChatLiteLLM]]
Creates and configures language model instances based on the configuration file.
Parameters:
- api_key (Optional[str]): OpenAI API key (optional, can be set via environment variable)
- params_file (Optional[str]): Path to an alternate configuration file

Returns:
- Dictionary of initialized language models
Example:
# Using the default configuration file
models = llm_creation()
# With custom config file
models = llm_creation(params_file="custom_params.ini")
langsmith_setup
Configures LangSmith integration for workflow tracking and monitoring. If the environment variable LANGCHAIN_API_KEY (or LANGSMITH_API_KEY) is provided, the function enables tracing and configures the default project and endpoint. Otherwise, it disables tracing.
Returns:
- Optional[Client]: LangSmith client if setup was successful, None otherwise
For advanced configuration details, see Advanced Configuration.
Example:
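A minimal sketch, assuming langsmith_setup is importable from app.core.main and takes no required arguments:

from app.core.main import langsmith_setup

# Enables tracing when LANGCHAIN_API_KEY / LANGSMITH_API_KEY is set, otherwise disables it
client = langsmith_setup()
if client is None:
    print("LangSmith tracing is disabled (no API key found)")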
Graph Management
The graph management module (app.core.graph_management.RdfGraphCustom) provides tools for interacting with the RDF knowledge graph.
RdfGraph Class
class RdfGraph:
def __init__(self, query_endpoint: str, standard: str = "rdf"):
"""
Initialize an RDF graph connection.
Args:
query_endpoint (str): SPARQL endpoint URL
standard (str): RDF standard to use (default: "rdf")
"""
Key Methods:
- get_schema: Retrieves the graph schema
- execute_query: Runs SPARQL queries against the graph
- save: Persists the graph state
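Example (a hedged sketch; assumes the class and methods behave as listed above, and the endpoint URL is a placeholder):

from app.core.graph_management.RdfGraphCustom import RdfGraph

graph = RdfGraph(query_endpoint="https://example.org/sparql")
schema = graph.get_schema()
results = graph.execute_query("SELECT * WHERE { ?s ?p ?o } LIMIT 10")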
Workflow Management
The workflow module (app.core.workflow.langraph_workflow) manages the processing pipeline.
Functions
create_workflow
def create_workflow(models: Dict[str, Union[ChatOpenAI, ChatLiteLLM]], endpoint_url: str, evaluation: bool = False, api_key: Optional[str] = None) -> Any
Creates a new workflow instance with the specified configuration.
Parameters:
- models (Dict[str, Union[ChatOpenAI, ChatLiteLLM]]): Dictionary of language model instances
- endpoint_url (str): The URL of the SPARQL endpoint
- evaluation (bool): Whether to run in evaluation mode
- api_key (Optional[str]): OpenAI API key (optional)

Returns:
- Workflow instance
process_workflow
Processes a query through the workflow.
Parameters:
- workflow: The workflow instance
- question (str): The query to process

Returns:
- Query results
Agent Factory
The agent factory module (app.core.agents.agents_factory) manages agent creation and configuration.
Functions
create_all_agents
Creates all required agents for the workflow.
Parameters:
models
: Dictionary of language modelsgraph
: RDF graph instance
Returns: - Dictionary of initialized agents
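Example (a hedged sketch; assumes create_all_agents is importable from app.core.agents.agents_factory and accepts the two arguments listed above):

from app.core.main import llm_creation
from app.core.agents.agents_factory import create_all_agents
from app.core.graph_management.RdfGraphCustom import RdfGraph

models = llm_creation()
graph = RdfGraph(query_endpoint="https://example.org/sparql")
agents = create_all_agents(models, graph)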
Utility Functions
The utils module (app.core.utils) provides common utility functions.
Functions
setup_logger
Configures a logger instance.
Parameters:
- name (str): Logger name

Returns:
- Configured logger instance
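Example (a minimal sketch, assuming setup_logger is importable from app.core.utils):

from app.core.utils import setup_logger

logger = setup_logger(__name__)
logger.info("Logger configured")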
load_config
Loads a configuration file.
Parameters:
- config_path (str): Path to configuration file

Returns:
- Parsed configuration object
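Example (a minimal sketch, assuming load_config is importable from app.core.utils; the path points at the project's default configuration file):

from app.core.utils import load_config

config = load_config("app/config/params.ini")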
Database Management
The database management module provides two specialized databases for saving the outputs of tools and agents, as well as managing workflow state. These databases are created automatically based on the deployment environment:
- When running locally: Databases are created as local files in the project directory
- When deployed in the cloud: Databases are created in the cloud environment
Tools Database
The tools database (app.core.memory.tools_database) manages data associated with agent tools. The module creates or returns a singleton instance of the tools database manager.
Features:
- Automatically discovers and tracks tool usage
- Maintains interaction history
- Stores tool-specific data
- Supports both SQLite (default) and custom database backends
- Auto-creates database based on environment (local/cloud)
Configuration:
- Default: Creates a local tools_database.db file when running locally
- Custom: Set via environment variables:
  - DATABASE_URL: Database connection string (used for cloud deployment)
  - TOOLS_DATABASE_MANAGER_CLASS: Custom manager class path
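A hedged configuration sketch in Python; the connection string and custom class path are placeholders (the memory database below follows the same pattern with MEMORY_DATABASE_MANAGER_CLASS):

import os

# Use a cloud database instead of the local tools_database.db file
os.environ["DATABASE_URL"] = "postgresql://user:password@host:5432/metabot"
# Optionally swap in a custom manager implementation
os.environ["TOOLS_DATABASE_MANAGER_CLASS"] = "my_package.managers.CustomToolsManager"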
Memory Database
The memory database (app.core.memory.custom_sqlite_file) manages workflow state and checkpoints. The module creates or returns a singleton instance of the memory database manager.
Features:
- Saves workflow state checkpoints
- Supports multi-threading (essential for Streamlit)
- Automatic cleanup on restart
- Thread-safe operations
- Auto-creates database based on environment (local/cloud)
Configuration:
- Default: Creates a local langgraph_checkpoint.db file when running locally
- Custom: Set via environment variables:
  - DATABASE_URL: Database connection string (used for cloud deployment)
  - MEMORY_DATABASE_MANAGER_CLASS: Custom manager class path
Usage Examples
Basic Query Processing
from app.core.main import llm_creation
from app.core.workflow.langraph_workflow import create_workflow, process_workflow
# Initialize components
endpoint_url = "your_endpoint_url"  # SPARQL endpoint of the knowledge graph
models = llm_creation()
# Create and run workflow
workflow = create_workflow(
models=models,
endpoint_url=endpoint_url,
evaluation=False
)
result = process_workflow(workflow, "Your query here")