Description: Lists all available evaluations by scanning the tasks directory for YAML files. Returns metadata about each evaluation including name, description, type, and number of samples.
URL: /evals/
Method: GET
Response Schema:
List[Dict[str, Any]]
Each dictionary in the list contains:
{ "name": str, # Name of the evaluation "description": str, # Description of the evaluation "type": str, # Type of evaluation "num_samples": str, # Number of samples in the evaluation "paper_link": str, # Link to the paper describing the evaluation "file_name": str # Name of the YAML file}
Description: Launches an evaluation job by triggering the evaluator with the specified evaluation configuration. The evaluation is run asynchronously in the background.
URL: /evals/launch/
Method: POST
Request Payload:
class EvalRunRequest:
    eval_name: str          # Name of the evaluation to run
    workflow_id: str        # ID of the workflow to evaluate
    output_variable: str    # Output variable to evaluate
    num_samples: int = 100  # Number of random samples to evaluate
Response Schema:
class EvalRunResponse:
    run_id: str                        # ID of the evaluation run
    eval_name: str                     # Name of the evaluation
    workflow_id: str                   # ID of the workflow being evaluated
    status: EvalRunStatusEnum          # Status of the evaluation run
    start_time: datetime               # When the evaluation started
    end_time: Optional[datetime]       # When the evaluation ended (if completed)
    results: Optional[Dict[str, Any]]  # Results of the evaluation (if completed)
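A hedged example of launching an evaluation over HTTP. The `eval_name`, `workflow_id`, and `output_variable` values below are placeholders for illustration only, as is the base URL:

```python
import requests

BASE_URL = "http://localhost:8000"  # hypothetical deployment URL

# Placeholder payload matching the EvalRunRequest fields above.
payload = {
    "eval_name": "my_eval",        # placeholder evaluation name
    "workflow_id": "wf_123",       # placeholder workflow ID
    "output_variable": "answer",   # placeholder output variable
    "num_samples": 50,             # override the default of 100
}

response = requests.post(f"{BASE_URL}/evals/launch/", json=payload)
response.raise_for_status()

run = response.json()
print(f"Launched run {run['run_id']} with status {run['status']}")
```

Because the evaluation runs in the background, the response is returned immediately with the initial run status; use the run status endpoint below to check on progress.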
Description: Gets the status of a specific evaluation run, including results if the evaluation has completed.
URL: /evals/runs/{eval_run_id}
Method: GET
Parameters:
eval_run_id: str # ID of the evaluation run
Response Schema:
class EvalRunResponse:
    run_id: str                        # ID of the evaluation run
    eval_name: str                     # Name of the evaluation
    workflow_id: str                   # ID of the workflow being evaluated
    status: EvalRunStatusEnum          # Status of the evaluation run
    start_time: datetime               # When the evaluation started
    end_time: Optional[datetime]       # When the evaluation ended (if completed)
    results: Optional[Dict[str, Any]]  # Results of the evaluation (if completed)
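A sketch of polling this endpoint until the run finishes. The base URL and run ID are placeholders, and the terminal-state check is an assumption: the concrete values of EvalRunStatusEnum are not documented here, so the loop treats the presence of an `end_time` as the completion signal instead:

```python
import time
import requests

BASE_URL = "http://localhost:8000"      # hypothetical deployment URL
eval_run_id = "run_abc"                 # placeholder; use the run_id from /evals/launch/

# Poll until the run reports an end_time (assumed to indicate completion).
while True:
    response = requests.get(f"{BASE_URL}/evals/runs/{eval_run_id}")
    response.raise_for_status()
    run = response.json()
    if run.get("end_time") is not None:
        break
    time.sleep(10)

print(run["status"], run.get("results"))
```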
Description: Lists all evaluation runs, ordered by start time descending.
URL: /evals/runs/
Method: GET
Response Schema:
List[EvalRunResponse]
Where EvalRunResponse contains:
class EvalRunResponse:
    run_id: str                        # ID of the evaluation run
    eval_name: str                     # Name of the evaluation
    workflow_id: str                   # ID of the workflow being evaluated
    status: EvalRunStatusEnum          # Status of the evaluation run
    start_time: datetime               # When the evaluation started
    end_time: Optional[datetime]       # When the evaluation ended (if completed)
    results: Optional[Dict[str, Any]]  # Results of the evaluation (if completed)
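A minimal sketch of listing all runs, again assuming a placeholder base URL:

```python
import requests

BASE_URL = "http://localhost:8000"  # hypothetical deployment URL

# Fetch all evaluation runs (most recent first, per the endpoint description).
response = requests.get(f"{BASE_URL}/evals/runs/")
response.raise_for_status()

for run in response.json():
    print(run["run_id"], run["eval_name"], run["status"], run["start_time"])
```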