A chain for evaluating ReAct-style agents.

This chain evaluates a ReAct-style agent by reasoning over the sequence of actions it took and their outcomes.
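
Example

A minimal usage sketch. The grader model, the example tool, the import paths, and the toy trajectory below are assumptions for illustration; fromLLM and the trajectory-evaluation method are documented under Methods.

    import { ChatOpenAI } from "langchain/chat_models/openai"; // import path assumed
    import { Calculator } from "langchain/tools/calculator";   // example tool; path assumed
    import { TrajectoryEvalChain } from "langchain/evaluation"; // import path assumed

    // Grader model (assumption): any chat model implementing BaseChatModel will do.
    const graderLlm = new ChatOpenAI({ modelName: "gpt-4", temperature: 0 });

    // Pass the tools the evaluated agent had access to, so the grader knows about them.
    const chain = await TrajectoryEvalChain.fromLLM(graderLlm, [new Calculator()]);

    // A toy trajectory; the shape (AgentStep-like { action, observation } pairs) is an assumption.
    const agentTrajectory = [
      {
        action: { tool: "calculator", toolInput: "2 + 2", log: "I should use the calculator." },
        observation: "4",
      },
    ];

    const result = await chain.evaluateAgentTrajectory({
      input: "What is 2 + 2?",
      prediction: "2 + 2 is 4.",
      agentTrajectory,
    });
    console.log(result); // parsed verdict from the grading LLM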

Hierarchy

  • AgentTrajectoryEvaluator
    • TrajectoryEvalChain

Properties

llm: BaseLanguageModelInterface<any, BaseLanguageModelCallOptions>
outputKey: string = "text"
outputParser: TrajectoryOutputParser = ...
prompt: BasePromptTemplate<any, BasePromptValueInterface, any>
requiresInput: boolean = true
requiresReference: boolean = false
criterionName?: string
evaluationName?: string = ...
llmKwargs?: any
memory?: BaseMemory
skipInputWarning?: string = ...
skipReferenceWarning?: string = ...
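
On an instance created with fromLLM (see the sketch above), these fields can be inspected directly:

    console.log(chain.outputKey);         // "text" by default
    console.log(chain.requiresInput);     // true: an input string is required
    console.log(chain.requiresReference); // false: no reference label is needed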

Methods

  • Parameters

    • inputs: ChainValues[]
    • Optional config: (RunnableConfig | CallbackManager | (BaseCallbackHandler | BaseCallbackHandlerMethodsClass)[])[]

    Returns Promise<ChainValues[]>

    Deprecated

    Use .batch() instead. Will be removed in 0.2.0.

    Call the chain on all inputs in the list.
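
    Example

    A sketch of the recommended .batch() replacement; valuesForRunA and valuesForRunB are hypothetical ChainValues objects whose keys match this chain's prompt variables.

    // Prefer .batch(); each element describes one evaluation run.
    const results = await chain.batch([valuesForRunA, valuesForRunB]);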

  • Run the core logic of this chain and add to output if desired.

    Wraps _call and handles memory.

    Parameters

    • values: any
    • Optional config: BaseCallbackConfig | CallbackManager | (BaseCallbackHandler | BaseCallbackHandlerMethodsClass)[]

    Returns Promise<ChainValues>
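
    Example

    A sketch of a single run; `values` is a hypothetical ChainValues object whose keys match the chain's prompt variables.

    const output = await chain.call(values);
    console.log(output[chain.outputKey]); // result stored under outputKey ("text" by default)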

  • Check if the evaluation arguments are valid.

    Parameters

    • Optional reference: string

      The reference label.

    • Optional input: string

      The input string.

    Returns void

    Throws

    If the evaluator requires an input string but none is provided, or if the evaluator requires a reference label but none is provided.

  • Evaluate a trajectory.

    Parameters

    • args: LLMTrajectoryEvaluatorArgs
    • Optional callOptions: BaseLanguageModelCallOptions
    • Optional config: BaseCallbackConfig | CallbackManager | (BaseCallbackHandler | BaseCallbackHandlerMethodsClass)[]

    Returns Promise<ChainValues>

    The evaluation result.
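
    Example

    A sketch; `intermediateSteps` is assumed to come from the agent run being graded (for example, an AgentExecutor invoked with returnIntermediateSteps: true).

    const evaluation = await chain.evaluateAgentTrajectory({
      input: "How many seconds are there in a day?",
      prediction: "There are 86,400 seconds in a day.",
      agentTrajectory: intermediateSteps,
    });
    console.log(evaluation); // parsed result from the grading LLM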

  • Invoke the chain with the provided input and return the output.

    Parameters

    • input: ChainValues

      Input values for the chain run.

    • Optional options: RunnableConfig

    Returns Promise<ChainValues>

    Promise that resolves with the output of the chain run.
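
    Example

    A sketch; `values` is a hypothetical ChainValues object, and the second argument is an optional RunnableConfig (tags, callbacks, and so on).

    const result = await chain.invoke(values, { tags: ["trajectory-eval"] });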

  • Format the prompt with the given values and pass it to the LLM.

    Parameters

    • values: any

      Keys to pass to the prompt template.

    • Optional callbackManager: CallbackManager

      The CallbackManager to use.

    Returns Promise<EvalOutputType>

    Completion from LLM.

    Example

    llm.predict({ adjective: "funny" })
    
  • Parameters

    • inputs: Record<string, unknown>
    • outputs: Record<string, unknown>
    • returnOnlyOutputs: boolean = false

    Returns Promise<Record<string, unknown>>

  • Parameters

    • input: any
    • Optional config: RunnableConfig | CallbackManager | (BaseCallbackHandler | BaseCallbackHandlerMethodsClass)[]

    Returns Promise<string>

    Deprecated

    Use .invoke() instead. Will be removed in 0.2.0.

  • Create a new TrajectoryEvalChain.

    Parameters

    • llm: BaseChatModel<BaseLanguageModelCallOptions, BaseMessageChunk>
    • Optional agentTools: StructuredToolInterface<ZodObject<any, any, any, any, {}>>[]

      The tools used by the agent.

    • Optional chainOptions: Partial<Omit<LLMEvalChainInput<EvalOutputType, BaseLanguageModelInterface<any, BaseLanguageModelCallOptions>>, "llm">>

      The options for the chain.

    Returns Promise<TrajectoryEvalChain>
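
    Example

    A sketch; the chat model, the example tool, and the import paths are assumptions. Passing the agent's own tools lets the grader see which tools were available.

    import { ChatOpenAI } from "langchain/chat_models/openai"; // import path assumed
    import { Calculator } from "langchain/tools/calculator";   // example tool; path assumed

    const chain = await TrajectoryEvalChain.fromLLM(
      new ChatOpenAI({ modelName: "gpt-4", temperature: 0 }),
      [new Calculator()] // agentTools used by the evaluated agent
    );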

  • Parameters

    • Optional prompt: BasePromptTemplate<any, BasePromptValueInterface, any>
    • Optional agentTools: StructuredToolInterface<ZodObject<any, any, any, any, {}>>[]

    Returns BasePromptTemplate<any, BasePromptValueInterface, any>

  • Get the description of the agent tools.

    Parameters

    • agentTools: StructuredToolInterface<ZodObject<any, any, any, any, {}>>[]

    Returns string

    The description of the agent tools.
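
    Example

    A sketch; the static method name getAgentToolsDescription is an assumption (this page does not show the name), and agentTools is the same tool array passed to fromLLM.

    // Renders the tools' names and descriptions as text for the evaluation prompt.
    const toolDescriptions = TrajectoryEvalChain.getAgentToolsDescription(agentTools);
    console.log(toolDescriptions);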
