Workflow Discovery & Validation

Workflows & Templates for Hillstar Orchestrator.

class workflows.WorkflowValidator[source]

Bases: object

Validate Hillstar workflows against schema, registry, and constraints.

SCHEMA_PATH = None
__init__(registry=None)[source]

Initialize validator with optional registry.

Parameters:

registry (ProviderRegistry | None)

load_schema()[source]

Load the workflow schema (from installed package or dev environment).

Return type:

dict[str, Any]

validate_schema(workflow)[source]

Validate workflow against JSON schema.

Parameters:

workflow (dict[str, Any]) – Workflow dictionary

Returns:

(valid: bool, errors: List[str])

Return type:

Tuple[bool, list[str]]

validate_model_config(model_config)[source]

Validate model_config section for coherence.

Parameters:

model_config (dict[str, Any]) – The model_config dictionary

Returns:

(valid: bool, errors: List[str])

Return type:

Tuple[bool, list[str]]

validate_graph_connectivity(workflow)[source]

Validate workflow graph connectivity (no disconnected components).

Parameters:

workflow (dict[str, Any]) – Workflow dictionary

Returns:

(valid: bool, errors: List[str])

Return type:

Tuple[bool, list[str]]
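The connectivity check described above can be sketched as an undirected component search over the workflow's nodes and edges. This is an illustrative implementation of the documented behavior (no disconnected components), not Hillstar's internal code; the node/edge representation is assumed.

```python
from collections import deque

def connected(nodes, edges):
    # True iff every node is reachable from the first node when edges
    # are treated as undirected -- i.e. a single connected component.
    if not nodes:
        return True
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, queue = {nodes[0]}, deque([nodes[0]])
    while queue:
        for nxt in adj[queue.popleft()] - seen:
            seen.add(nxt)
            queue.append(nxt)
    return len(seen) == len(nodes)
```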

validate_providers(workflow)[source]

Validate all referenced providers and models against registry.

Parameters:

workflow (dict[str, Any]) – Workflow dictionary

Returns:

(valid: bool, errors: List[str])

Return type:

Tuple[bool, list[str]]

validate_compliance(workflow)[source]

Validate compliance requirements for all providers.

Parameters:

workflow (dict[str, Any]) – Workflow dictionary

Returns:

(valid: bool, errors: List[str])

Return type:

Tuple[bool, list[str]]

validate_complete(workflow)[source]

Run all validations.

Parameters:

workflow (dict[str, Any]) – Workflow dictionary

Returns:

(valid: bool, errors: List[str])

Return type:

Tuple[bool, list[str]]

static validate_file(workflow_path)[source]

Validate a workflow file.

Parameters:

workflow_path (str) – Path to workflow.json

Returns:

(valid: bool, errors: List[str])

Return type:

Tuple[bool, list[str]]

static validate_schema_static(workflow)[source]

Static wrapper for validate_schema.

Parameters:

workflow (dict[str, Any])

Return type:

Tuple[bool, list[str]]

static validate_model_config_static(model_config)[source]

Static wrapper for validate_model_config.

Parameters:

model_config (dict[str, Any])

Return type:

Tuple[bool, list[str]]

static validate_providers_static(workflow)[source]

Static wrapper for validate_providers.

Parameters:

workflow (dict[str, Any])

Return type:

Tuple[bool, list[str]]

static validate_complete_static(workflow)[source]

Static wrapper for validate_complete.

Parameters:

workflow (dict[str, Any])

Return type:

Tuple[bool, list[str]]

static validate_file_static(workflow_path)[source]

Static wrapper for file validation.

Parameters:

workflow_path (str)

Return type:

Tuple[bool, list[str]]
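Every validator method above returns the same (valid, errors) tuple, and validate_complete runs all checks. The aggregation pattern this implies can be sketched as follows; the helper name and structure are illustrative, not Hillstar's internals.

```python
def run_all(checks, workflow):
    # Run each check (a callable returning (valid, errors)) and collect
    # every error; the workflow is valid iff no check reported an error.
    errors = []
    for check in checks:
        ok, errs = check(workflow)
        if not ok:
            errors.extend(errs)
    return (len(errors) == 0, errors)
```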

class workflows.ModelPresets[source]

Bases: object

Legacy class for backward compatibility.

New code should use PresetResolver instead.

Named strategies for model selection based on use case. Presets are dynamically generated from the ProviderRegistry.

TIER_MAPPING = {'balanced': 'standard', 'local_only': 'free', 'maximize_quality': 'expensive', 'minimize_cost': 'cheap'}
CAPABILITY_MAPPING = {'complex': ['reasoning', 'coding', 'analysis', 'complex_planning'], 'critical': ['reasoning', 'coding', 'analysis', 'complex_planning'], 'moderate': ['reasoning', 'coding', 'analysis'], 'simple': ['reasoning', 'analysis']}
static select(preset_name, complexity='moderate', provider_preference=None)[source]

Select model from a preset strategy (legacy).

Args:
  preset_name: One of "minimize_cost", "balanced", "maximize_quality", "local_only"
  complexity: Task complexity ("simple", "moderate", "complex", "critical")
  provider_preference: Optional list of preferred providers in order

Returns: Tuple of (provider, model_id, model_config), or None if no model available

Parameters:
  • preset_name (str)

  • complexity (str)

  • provider_preference (List[str] | None)

Return type:

Tuple[str, str, Dict[str, Any]] | None
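The two class attributes above drive selection: TIER_MAPPING turns the preset into a price tier and CAPABILITY_MAPPING turns complexity into required capabilities. A minimal sketch of the resulting filter is shown below; the model-record shape (`tier`, `capabilities` keys) is an assumption for illustration, not the actual registry format.

```python
TIER_MAPPING = {'balanced': 'standard', 'local_only': 'free',
                'maximize_quality': 'expensive', 'minimize_cost': 'cheap'}
CAPABILITY_MAPPING = {
    'simple': ['reasoning', 'analysis'],
    'moderate': ['reasoning', 'coding', 'analysis'],
    'complex': ['reasoning', 'coding', 'analysis', 'complex_planning'],
    'critical': ['reasoning', 'coding', 'analysis', 'complex_planning'],
}

def candidates(models, preset, complexity):
    # Keep models that match the preset's tier and cover every
    # capability the complexity level requires.
    tier = TIER_MAPPING[preset]
    needed = set(CAPABILITY_MAPPING[complexity])
    return [m for m in models
            if m['tier'] == tier and needed <= set(m['capabilities'])]
```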

static select_simple(preset_name, provider_preference=None)[source]

Select model for simple tasks using a preset.

Parameters:
  • preset_name (str)

  • provider_preference (List[str] | None)

Return type:

Tuple[str, str, Dict[str, Any]] | None

static select_moderate(preset_name, provider_preference=None)[source]

Select model for moderate tasks using a preset.

Parameters:
  • preset_name (str)

  • provider_preference (List[str] | None)

Return type:

Tuple[str, str, Dict[str, Any]] | None

static select_complex(preset_name, provider_preference=None)[source]

Select model for complex tasks using a preset.

Parameters:
  • preset_name (str)

  • provider_preference (List[str] | None)

Return type:

Tuple[str, str, Dict[str, Any]] | None

static select_critical(preset_name, provider_preference=None)[source]

Select model for critical tasks using a preset.

Parameters:
  • preset_name (str)

  • provider_preference (List[str] | None)

Return type:

Tuple[str, str, Dict[str, Any]] | None

static get_available_presets()[source]

Get list of available preset names.

Return type:

List[str]

static describe_preset(preset_name)[source]

Get description of a preset strategy.

Parameters:

preset_name (str)

Return type:

Dict

static get_preset_for_use_case(use_case, has_local_gpu=False, budget_constraint=False)[source]

Get recommended preset based on use case and constraints.

Args:
  use_case: One of "research", "production", "experimentation", "publication"
  has_local_gpu: Whether the user has a local GPU available
  budget_constraint: Whether budget is a primary concern

Returns: Preset name recommendation

Parameters:
  • use_case (str)

  • has_local_gpu (bool)

  • budget_constraint (bool)

Return type:

str

static get_fallback_chain(preset_name, complexity, provider_preference=None)[source]

Get provider fallback chain for a preset.

Args:
  preset_name: Preset name
  complexity: Task complexity
  provider_preference: Preferred providers

Returns: List of providers in fallback order

Parameters:
  • preset_name (str)

  • complexity (str)

  • provider_preference (List[str] | None)

Return type:

List[str]
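A fallback chain as documented above orders providers by preference and keeps the rest as backups. The sketch below shows one plausible ordering rule under that reading; it is not Hillstar's actual implementation.

```python
def fallback_chain(providers, preference=None):
    # Preferred providers first (in the order given), then the
    # remaining providers in their original order.
    if not preference:
        return list(providers)
    pref = [p for p in preference if p in providers]
    rest = [p for p in providers if p not in pref]
    return pref + rest
```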

class workflows.AutoDiscover[source]

Bases: object

Auto-detect Hillstar projects and suggest workflows.

PLANNING_KEYWORDS = ['plan', 'design', 'analyze', 'review', 'investigate', 'explore', 'study']
IMPLEMENTATION_KEYWORDS = ['implement', 'build', 'create', 'develop', 'code', 'write', 'script']
TESTING_KEYWORDS = ['test', 'validate', 'verify', 'check', 'quality', 'qa', 'audit']
QUALITY_KEYWORDS = ['quality', 'accuracy', 'precision', 'reproducibility', 'thorough', 'careful']
BUDGET_KEYWORDS = ['budget', 'cost', 'cheap', 'free', 'minimize', 'save']
LOCAL_KEYWORDS = ['local', 'offline', 'airgap', 'air-gapped', 'sensitive', 'private']
SPEED_KEYWORDS = ['fast', 'quick', 'speed', 'performance', 'efficiency', 'rapid']
static is_hillstar_project(start_dir='.')[source]

Detect if a directory is a Hillstar project.

Args: start_dir: Directory to check (default: current)

Returns: True if Hillstar project indicators found

Indicators:
  • python/hillstar/ directory (source or pip installation)
  • .hillstar/ directory (runtime artifacts)
  • workflow.json in current or subdirectories (workflow definition)

Parameters:

start_dir (str)

Return type:

bool
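The indicator check above can be sketched as a few filesystem probes. This version is simplified to the current directory only (the real method also scans subdirectories for workflow.json); the function name is illustrative.

```python
import os

def looks_like_hillstar(start_dir="."):
    # Any one of the documented indicators marks a Hillstar project.
    return any([
        os.path.isdir(os.path.join(start_dir, "python", "hillstar")),
        os.path.isdir(os.path.join(start_dir, ".hillstar")),
        os.path.isfile(os.path.join(start_dir, "workflow.json")),
    ])
```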

static get_project_info(start_dir='.')[source]

Get Hillstar project information.

Args: start_dir: Directory to analyze

Returns: Dictionary with:
  • is_hillstar: bool
  • has_schema: bool
  • has_artifacts: bool
  • has_workflows: bool
  • workflow_count: int
  • schema_path: str or None

Parameters:

start_dir (str)

Return type:

Dict[str, Any]

static classify_task(task_description)[source]

Classify task by keywords to infer requirements.

Args: task_description: Natural language task description

Returns: Dictionary with task type scores:
  • planning: float (0.0-1.0)
  • implementation: float
  • testing: float
  • quality: float
  • budget_conscious: float
  • local_only: float
  • speed_critical: float

Parameters:

task_description (str)

Return type:

Dict[str, float]
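Keyword-based scoring as described above can be sketched by counting hits from a keyword list and normalizing into [0.0, 1.0]. The divisor of 3 is an assumed normalization for illustration only; Hillstar's actual scoring may differ.

```python
# Keyword list copied from the AutoDiscover class attributes above.
PLANNING_KEYWORDS = ['plan', 'design', 'analyze', 'review',
                     'investigate', 'explore', 'study']

def keyword_score(task_description, keywords):
    # Count how many keywords appear as whole words, capped at 1.0.
    words = set(task_description.lower().split())
    hits = sum(1 for k in keywords if k in words)
    return min(1.0, hits / 3)
```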

static get_preset_suggestions(task_scores)[source]

Suggest presets based on task classification.

Args: task_scores: Task classification scores

Returns: List of (preset_name, confidence) tuples, sorted by confidence

Parameters:

task_scores (Dict[str, float])

Return type:

List[Tuple[str, float]]

static suggest_workflows(task_description, workflows, top_k=3)[source]

Suggest workflows based on task description.

Args:
  task_description: Natural language task
  workflows: Available workflow metadata
  top_k: Return top K matches

Returns: List of suggested workflows with relevance scores, sorted best-first

Parameters:
  • task_description (str)

  • workflows (List[Dict[str, Any]])

  • top_k (int)

Return type:

List[Dict[str, Any]]

static get_recommendations(task_description, workflows)[source]

Get comprehensive recommendations for a task.

Args:
  task_description: Natural language task
  workflows: Available workflows

Returns: Dictionary with:
  • task_classification: Task type scores
  • suggested_presets: List of (preset, confidence) tuples
  • suggested_workflows: List of matching workflows
  • recommendation_text: Human-readable summary

Parameters:
  • task_description (str)

  • workflows (List[Dict[str, Any]])

Return type:

Dict[str, Any]

static format_recommendations(recommendations)[source]

Format recommendations as human-readable text.

Args: recommendations: Output from get_recommendations()

Returns: Formatted text suitable for display to user

Parameters:

recommendations (Dict[str, Any])

Return type:

str

class workflows.WorkflowDiscovery[source]

Bases: object

Find and analyze Hillstar workflows in a directory tree.

static find_workflows(start_path='.', max_depth=5)[source]

Find all workflow.json files in directory tree.

Parameters:
  • start_path (str) – Directory to search from

  • max_depth (int) – Maximum directory depth to search

Returns:

List of absolute paths to workflow.json files

Return type:

List[str]
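A depth-limited scan matching the documented start_path/max_depth parameters can be sketched with os.walk and directory pruning. The function name and pruning details are illustrative assumptions, not the actual implementation.

```python
import os

def scan_workflows(start_path=".", max_depth=5):
    # Walk the tree, pruning traversal once max_depth levels below
    # start_path, and collect absolute paths to workflow.json files.
    found = []
    base = os.path.abspath(start_path)
    base_depth = base.count(os.sep)
    for root, dirs, files in os.walk(base):
        if root.count(os.sep) - base_depth >= max_depth:
            dirs[:] = []  # stop descending past max_depth
            continue
        if "workflow.json" in files:
            found.append(os.path.join(root, "workflow.json"))
    return found
```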

static get_workflow_info(workflow_path)[source]

Extract metadata from a workflow file.

Parameters:

workflow_path (str) – Absolute path to workflow.json

Returns:

Dictionary with workflow metadata

Raises:
  • ValueError – Invalid JSON

  • KeyError – Missing required fields

  • IOError – Unreadable file

Return type:

Dict[str, Any]

static get_all_workflow_info(start_path='.', max_depth=5)[source]

Find all workflows and return their metadata.

Parameters:
  • start_path (str) – Directory to search from

  • max_depth (int) – Maximum directory depth

Returns:

List of workflow metadata dictionaries

Return type:

List[Dict[str, Any]]

static find_in_current_project()[source]

Find all workflows in current project (with .hillstar/ or spec/ indicators).

Return type:

List[Dict[str, Any]]

workflows.discovery

Script

discovery.py

Path

python/hillstar/discovery.py

Purpose

Workflow discovery: Find and analyze workflow.json files in project directory.

Scans directory tree for workflow.json files and extracts metadata. Used by MCP server to discover available workflows.

Inputs

start_path (str): Directory to search from (default: current directory)

Outputs

List[str]: Absolute paths to workflow.json files
Dict: Workflow metadata (id, description, nodes, edges)

Assumptions

  • workflow.json files are valid JSON

  • Valid according to workflow-schema.json

Parameters

None (per-workflow)

Failure Modes

  • Invalid JSON → ValueError

  • Missing required fields → KeyError

  • Unreadable files → IOError
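The failure modes listed above map naturally onto Python's standard exceptions, as the sketch below shows; the real get_workflow_info may wrap or re-raise these differently.

```python
import json

def read_workflow(path):
    with open(path) as fh:    # unreadable file -> OSError (alias IOError)
        data = json.load(fh)  # invalid JSON -> ValueError (JSONDecodeError)
    return data["id"]         # missing required field -> KeyError
```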

Author: Julen Gamboa <julen.gamboa.ds@gmail.com>

Created

2026-02-07

Last Edited

2026-02-07

class workflows.discovery.WorkflowDiscovery[source]

Bases: object

Find and analyze Hillstar workflows in a directory tree.

static find_workflows(start_path='.', max_depth=5)[source]

Find all workflow.json files in directory tree.

Parameters:
  • start_path (str) – Directory to search from

  • max_depth (int) – Maximum directory depth to search

Returns:

List of absolute paths to workflow.json files

Return type:

List[str]

static get_workflow_info(workflow_path)[source]

Extract metadata from a workflow file.

Parameters:

workflow_path (str) – Absolute path to workflow.json

Returns:

Dictionary with workflow metadata

Raises:
  • ValueError – Invalid JSON

  • KeyError – Missing required fields

  • IOError – Unreadable file

Return type:

Dict[str, Any]

static get_all_workflow_info(start_path='.', max_depth=5)[source]

Find all workflows and return their metadata.

Parameters:
  • start_path (str) – Directory to search from

  • max_depth (int) – Maximum directory depth

Returns:

List of workflow metadata dictionaries

Return type:

List[Dict[str, Any]]

static find_in_current_project()[source]

Find all workflows in current project (with .hillstar/ or spec/ indicators).

Return type:

List[Dict[str, Any]]

workflows.auto_discover

Script

auto_discover.py

Path

python/hillstar/auto_discover.py

Purpose

Auto-discovery mechanism to detect Hillstar projects and suggest workflows.

Detects if current directory is a Hillstar project and finds available workflows. Used by Claude Code to automatically offer Hillstar integration.

Inputs

current_dir (str): Directory to check (default: current working directory)
task_description (str): Natural language task description
workflows (List[Dict]): Workflow metadata to search

Outputs

is_hillstar_project (bool): True if directory is Hillstar project
suggested_workflows (List[Dict]): Matching workflows ranked by relevance
workflow_suggestions (Dict): Workflow recommendations with confidence scores

Assumptions

  • Workflow files exist and are valid JSON

  • Workflow descriptions are informative

  • Task descriptions follow natural language patterns

Parameters

None (per-operation)

Failure Modes

  • No workflows found → Empty list

  • Invalid task description → Return all workflows

  • Directory not found → False

Author: Julen Gamboa <julen.gamboa.ds@gmail.com>

Created

2026-02-07

Last Edited

2026-02-07

class workflows.auto_discover.AutoDiscover[source]

Bases: object

Auto-detect Hillstar projects and suggest workflows.

PLANNING_KEYWORDS = ['plan', 'design', 'analyze', 'review', 'investigate', 'explore', 'study']
IMPLEMENTATION_KEYWORDS = ['implement', 'build', 'create', 'develop', 'code', 'write', 'script']
TESTING_KEYWORDS = ['test', 'validate', 'verify', 'check', 'quality', 'qa', 'audit']
QUALITY_KEYWORDS = ['quality', 'accuracy', 'precision', 'reproducibility', 'thorough', 'careful']
BUDGET_KEYWORDS = ['budget', 'cost', 'cheap', 'free', 'minimize', 'save']
LOCAL_KEYWORDS = ['local', 'offline', 'airgap', 'air-gapped', 'sensitive', 'private']
SPEED_KEYWORDS = ['fast', 'quick', 'speed', 'performance', 'efficiency', 'rapid']
static is_hillstar_project(start_dir='.')[source]

Detect if a directory is a Hillstar project.

Args: start_dir: Directory to check (default: current)

Returns: True if Hillstar project indicators found

Indicators:
  • python/hillstar/ directory (source or pip installation)
  • .hillstar/ directory (runtime artifacts)
  • workflow.json in current or subdirectories (workflow definition)

Parameters:

start_dir (str)

Return type:

bool

static get_project_info(start_dir='.')[source]

Get Hillstar project information.

Args: start_dir: Directory to analyze

Returns: Dictionary with:
  • is_hillstar: bool
  • has_schema: bool
  • has_artifacts: bool
  • has_workflows: bool
  • workflow_count: int
  • schema_path: str or None

Parameters:

start_dir (str)

Return type:

Dict[str, Any]

static classify_task(task_description)[source]

Classify task by keywords to infer requirements.

Args: task_description: Natural language task description

Returns: Dictionary with task type scores:
  • planning: float (0.0-1.0)
  • implementation: float
  • testing: float
  • quality: float
  • budget_conscious: float
  • local_only: float
  • speed_critical: float

Parameters:

task_description (str)

Return type:

Dict[str, float]

static get_preset_suggestions(task_scores)[source]

Suggest presets based on task classification.

Args: task_scores: Task classification scores

Returns: List of (preset_name, confidence) tuples, sorted by confidence

Parameters:

task_scores (Dict[str, float])

Return type:

List[Tuple[str, float]]

static suggest_workflows(task_description, workflows, top_k=3)[source]

Suggest workflows based on task description.

Args:
  task_description: Natural language task
  workflows: Available workflow metadata
  top_k: Return top K matches

Returns: List of suggested workflows with relevance scores, sorted best-first

Parameters:
  • task_description (str)

  • workflows (List[Dict[str, Any]])

  • top_k (int)

Return type:

List[Dict[str, Any]]

static get_recommendations(task_description, workflows)[source]

Get comprehensive recommendations for a task.

Args:
  task_description: Natural language task
  workflows: Available workflows

Returns: Dictionary with:
  • task_classification: Task type scores
  • suggested_presets: List of (preset, confidence) tuples
  • suggested_workflows: List of matching workflows
  • recommendation_text: Human-readable summary

Parameters:
  • task_description (str)

  • workflows (List[Dict[str, Any]])

Return type:

Dict[str, Any]

static format_recommendations(recommendations)[source]

Format recommendations as human-readable text.

Args: recommendations: Output from get_recommendations()

Returns: Formatted text suitable for display to user

Parameters:

recommendations (Dict[str, Any])

Return type:

str

workflows.validator

Script

validator.py

Path

python/hillstar/validator.py

Purpose

Workflow validation: Check workflows against schema, registry, and constraints.

Validates:
  • JSON schema compliance
  • Provider registry integration
  • Provider/model availability
  • Model configuration coherence
  • Budget constraints
  • Graph connectivity
  • Compliance requirements

Inputs

workflow (dict): Workflow JSON
config: HillstarConfig with ProviderRegistry
registry: ProviderRegistry instance

Outputs

(valid: bool, errors: List[str])

Assumptions

  • Workflow is valid JSON

  • ProviderRegistry is properly initialized

Author: Julen Gamboa <julen.gamboa.ds@gmail.com>

Created

2026-02-07

Last Edited

2026-02-14

class workflows.validator.WorkflowValidator[source]

Bases: object

Validate Hillstar workflows against schema, registry, and constraints.

SCHEMA_PATH = None
__init__(registry=None)[source]

Initialize validator with optional registry.

Parameters:

registry (ProviderRegistry | None)

load_schema()[source]

Load the workflow schema (from installed package or dev environment).

Return type:

dict[str, Any]

validate_schema(workflow)[source]

Validate workflow against JSON schema.

Parameters:

workflow (dict[str, Any]) – Workflow dictionary

Returns:

(valid: bool, errors: List[str])

Return type:

Tuple[bool, list[str]]

validate_model_config(model_config)[source]

Validate model_config section for coherence.

Parameters:

model_config (dict[str, Any]) – The model_config dictionary

Returns:

(valid: bool, errors: List[str])

Return type:

Tuple[bool, list[str]]

validate_graph_connectivity(workflow)[source]

Validate workflow graph connectivity (no disconnected components).

Parameters:

workflow (dict[str, Any]) – Workflow dictionary

Returns:

(valid: bool, errors: List[str])

Return type:

Tuple[bool, list[str]]

validate_providers(workflow)[source]

Validate all referenced providers and models against registry.

Parameters:

workflow (dict[str, Any]) – Workflow dictionary

Returns:

(valid: bool, errors: List[str])

Return type:

Tuple[bool, list[str]]

validate_compliance(workflow)[source]

Validate compliance requirements for all providers.

Parameters:

workflow (dict[str, Any]) – Workflow dictionary

Returns:

(valid: bool, errors: List[str])

Return type:

Tuple[bool, list[str]]

validate_complete(workflow)[source]

Run all validations.

Parameters:

workflow (dict[str, Any]) – Workflow dictionary

Returns:

(valid: bool, errors: List[str])

Return type:

Tuple[bool, list[str]]

static validate_file(workflow_path)[source]

Validate a workflow file.

Parameters:

workflow_path (str) – Path to workflow.json

Returns:

(valid: bool, errors: List[str])

Return type:

Tuple[bool, list[str]]

static validate_schema_static(workflow)[source]

Static wrapper for validate_schema.

Parameters:

workflow (dict[str, Any])

Return type:

Tuple[bool, list[str]]

static validate_model_config_static(model_config)[source]

Static wrapper for validate_model_config.

Parameters:

model_config (dict[str, Any])

Return type:

Tuple[bool, list[str]]

static validate_providers_static(workflow)[source]

Static wrapper for validate_providers.

Parameters:

workflow (dict[str, Any])

Return type:

Tuple[bool, list[str]]

static validate_complete_static(workflow)[source]

Static wrapper for validate_complete.

Parameters:

workflow (dict[str, Any])

Return type:

Tuple[bool, list[str]]

static validate_file_static(workflow_path)[source]

Static wrapper for file validation.

Parameters:

workflow_path (str)

Return type:

Tuple[bool, list[str]]

workflows.model_presets

Model Selection Presets

PURPOSE:

Data-driven preset system for intelligent model selection with temperature constraint enforcement. Provides four strategies (cost_saver, balanced, quality_first, premium) for different research contexts.

ARCHITECTURE:

  • PresetResolver: resolves (preset, complexity) to (provider, model, suggested_parameters)

  • Data-driven tier assignment based on pricing formulas (not hardcoded)

  • Parameter support inference with fallback logic for registry gaps

  • Non-negotiable temperature constraints enforced per model class

  • Backward compatibility: legacy ModelPresets class preserved

USAGE:

resolver = PresetResolver(
    preset_name="balanced",
    configured_providers=["openai", "anthropic"],
)
provider, model_id, params = resolver.resolve(
    complexity="moderate",
    use_case="code_writing",
)

CONSTRAINTS (Non-Negotiable):

  • General tasks: Temperature <= 0.3

  • Code writing: Temperature = 7.3e-7

  • Codebase exploration: 0.7 (Devstral-2 only)

  • Claude/OpenAI/Gemini: NO temperature (use effort/thinking)

  • Mistral: 0.3-1.0 for exploration

  • Local models: <= 0.15
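The constraints above can be sketched as a clamping function covering a simplified subset of the rules (the Mistral 0.3-1.0 exploration range is omitted here). The "model_class" labels are illustrative, not Hillstar identifiers.

```python
def clamp_temperature(model_class, use_case, requested):
    if model_class in ("claude", "openai", "gemini"):
        return None                  # use effort/thinking, never temperature
    if use_case == "code_writing":
        return 7.3e-7                # fixed near-zero temperature, any model
    if model_class == "devstral-2" and use_case == "codebase_exploration":
        return 0.7                   # the only case allowed at 0.7
    if model_class == "local":
        return min(requested, 0.15)  # hard cap for local models
    return min(requested, 0.3)       # general tasks
```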

Author: Julen Gamboa <julen.gamboa.ds@gmail.com>

class workflows.model_presets.PresetResolver[source]

Bases: object

Data-driven model resolver that enforces temperature and parameter constraints.

Selects models based on preset tier sequences, complexity escalation, and enforces all non-negotiable temperature rules per model class.

__init__(preset_name, configured_providers)[source]

Initialize resolver with preset and available providers.

Uses global provider registry (read-only) from config.provider_registry.get_registry().

Args:
  preset_name: One of cost_saver, balanced, quality_first, premium
  configured_providers: List of provider names in preference order

Parameters:
  • preset_name (str)

  • configured_providers (List[str])

resolve(complexity='moderate', use_case=None)[source]

Resolve (preset, complexity) to (provider, model, suggested_parameters).

Enforces all non-negotiable temperature constraints:
  • Temperature <= 0.3 for general tasks (all providers)
  • Temperature 0.7 ONLY for Devstral-2 + codebase_exploration
  • Temperature 0.00000073 for code_writing (any model)
  • No temperature for Claude/OpenAI/Gemini (use effort/thinking)
  • Mistral: allow 0.3-1.0 for exploration tasks
  • devstral-small-2 (local): CRITICAL cap <= 0.15

Args:
  complexity: simple, moderate, complex, critical
  use_case: Optional use case context (general, codebase_exploration, code_writing, etc.)

Returns: Tuple of (provider, model_id, suggested_parameters) or None

suggested_parameters contains:
  • temperature (if supported by model)
  • reasoning_effort or thinking (if reasoning model)
  • max_tokens
  • context_window
  • supports_temperature, supports_thinking, supports_reasoning_effort

Parameters:
  • complexity (str)

  • use_case (str | None)

Return type:

Tuple[str, str, Dict[str, Any]] | None

class workflows.model_presets.ModelPresets[source]

Bases: object

Legacy class for backward compatibility.

New code should use PresetResolver instead.

Named strategies for model selection based on use case. Presets are dynamically generated from the ProviderRegistry.

TIER_MAPPING = {'balanced': 'standard', 'local_only': 'free', 'maximize_quality': 'expensive', 'minimize_cost': 'cheap'}
CAPABILITY_MAPPING = {'complex': ['reasoning', 'coding', 'analysis', 'complex_planning'], 'critical': ['reasoning', 'coding', 'analysis', 'complex_planning'], 'moderate': ['reasoning', 'coding', 'analysis'], 'simple': ['reasoning', 'analysis']}
static select(preset_name, complexity='moderate', provider_preference=None)[source]

Select model from a preset strategy (legacy).

Args:
  preset_name: One of "minimize_cost", "balanced", "maximize_quality", "local_only"
  complexity: Task complexity ("simple", "moderate", "complex", "critical")
  provider_preference: Optional list of preferred providers in order

Returns: Tuple of (provider, model_id, model_config), or None if no model available

Parameters:
  • preset_name (str)

  • complexity (str)

  • provider_preference (List[str] | None)

Return type:

Tuple[str, str, Dict[str, Any]] | None

static select_simple(preset_name, provider_preference=None)[source]

Select model for simple tasks using a preset.

Parameters:
  • preset_name (str)

  • provider_preference (List[str] | None)

Return type:

Tuple[str, str, Dict[str, Any]] | None

static select_moderate(preset_name, provider_preference=None)[source]

Select model for moderate tasks using a preset.

Parameters:
  • preset_name (str)

  • provider_preference (List[str] | None)

Return type:

Tuple[str, str, Dict[str, Any]] | None

static select_complex(preset_name, provider_preference=None)[source]

Select model for complex tasks using a preset.

Parameters:
  • preset_name (str)

  • provider_preference (List[str] | None)

Return type:

Tuple[str, str, Dict[str, Any]] | None

static select_critical(preset_name, provider_preference=None)[source]

Select model for critical tasks using a preset.

Parameters:
  • preset_name (str)

  • provider_preference (List[str] | None)

Return type:

Tuple[str, str, Dict[str, Any]] | None

static get_available_presets()[source]

Get list of available preset names.

Return type:

List[str]

static describe_preset(preset_name)[source]

Get description of a preset strategy.

Parameters:

preset_name (str)

Return type:

Dict

static get_preset_for_use_case(use_case, has_local_gpu=False, budget_constraint=False)[source]

Get recommended preset based on use case and constraints.

Args:
  use_case: One of "research", "production", "experimentation", "publication"
  has_local_gpu: Whether the user has a local GPU available
  budget_constraint: Whether budget is a primary concern

Returns: Preset name recommendation

Parameters:
  • use_case (str)

  • has_local_gpu (bool)

  • budget_constraint (bool)

Return type:

str

static get_fallback_chain(preset_name, complexity, provider_preference=None)[source]

Get provider fallback chain for a preset.

Args:
  preset_name: Preset name
  complexity: Task complexity
  provider_preference: Preferred providers

Returns: List of providers in fallback order

Parameters:
  • preset_name (str)

  • complexity (str)

  • provider_preference (List[str] | None)

Return type:

List[str]