Models¶
- class aisploit.models.BedrockChat(*, model_id: str, client: Any = None, region_name: str | None = None, credentials_profile_name: str | None = None, config: Any = None, provider: str | None = None, model_kwargs: Dict | None = None, endpoint_url: str | None = None, streaming: bool = False, provider_stop_sequence_key_name_map: Mapping[str, str] = {'ai21': 'stop_sequences', 'amazon': 'stopSequences', 'anthropic': 'stop_sequences', 'cohere': 'stop_sequences', 'mistral': 'stop'}, guardrails: Mapping[str, Any] | None = {'id': None, 'trace': False, 'version': None}, name: str | None = None, cache: BaseCache | bool | None = None, verbose: bool = None, callbacks: Callbacks = None, tags: List[str] | None = None, metadata: Dict[str, Any] | None = None, custom_get_token_ids: Callable[[str], List[int]] | None = None, callback_manager: BaseCallbackManager | None = None)¶
Bases: BedrockChat, BaseChatModel
Wrapper class for interacting with chat models hosted on AWS Bedrock.
- supports_functions() → bool¶
Check if the model supports additional functions beyond basic chat.
- Returns:
bool: True if the model supports additional functions, False otherwise.
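Every model class in this module exposes the same supports_functions() check, so callers can branch before attempting function calling. A minimal sketch of that pattern, using a hypothetical stand-in class (StubChatModel and choose_strategy are illustrative, not part of aisploit):

```python
class StubChatModel:
    """Hypothetical stand-in mirroring the supports_functions() interface above."""

    def __init__(self, functions_supported: bool):
        self._functions_supported = functions_supported

    def supports_functions(self) -> bool:
        # Mirrors the documented contract: True if the model supports
        # additional functions beyond basic chat, False otherwise.
        return self._functions_supported


def choose_strategy(model) -> str:
    # Branch on the capability check before wiring up function/tool calls.
    return "function-calling" if model.supports_functions() else "plain-chat"


print(choose_strategy(StubChatModel(True)))   # function-calling
print(choose_strategy(StubChatModel(False)))  # plain-chat
```

Any of the concrete classes below can be passed to such a helper in place of the stub, since they all implement the same method.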
- class aisploit.models.ChatAnthropic(*, api_key: str | None = None, model_name: str = 'claude-3-opus-20240229', temperature: float = 1.0, name: str | None = None, cache: BaseCache | bool | None = None, verbose: bool = None, callbacks: Callbacks = None, tags: List[str] | None = None, metadata: Dict[str, Any] | None = None, custom_get_token_ids: Callable[[str], List[int]] | None = None, callback_manager: BaseCallbackManager | None = None, max_tokens_to_sample: int = 1024, top_k: int | None = None, top_p: float | None = None, timeout: float | None = None, max_retries: int = 2, anthropic_api_url: str | None = None, default_headers: Mapping[str, str] | None = None, model_kwargs: Dict[str, Any] = None, streaming: bool = False)¶
Bases: ChatAnthropic, BaseChatModel
A chat model based on Anthropic’s language generation technology.
- supports_functions() → bool¶
Check if the model supports additional functions beyond basic chat.
- Returns:
bool: True if the model supports additional functions, False otherwise.
- class aisploit.models.ChatGoogleGenerativeAI(*, api_key: str | None = None, model: str = 'gemini-pro', max_output_tokens: int = 1024, temperature: float = 1.0, safety_settings: Dict[HarmCategory, HarmBlockThreshold] | None = None, name: str | None = None, cache: BaseCache | bool | None = None, verbose: bool = None, callbacks: Callbacks = None, tags: List[str] | None = None, metadata: Dict[str, Any] | None = None, custom_get_token_ids: Callable[[str], List[int]] | None = None, callback_manager: BaseCallbackManager | None = None, google_api_key: SecretStr | None = None, credentials: Any = None, top_p: float | None = None, top_k: int | None = None, n: int = 1, max_retries: int = 6, timeout: float | None = None, client_options: Dict | None = None, transport: str | None = None, additional_headers: Dict[str, str] | None = None, client: Any = None, async_client: Any = None, default_metadata: Sequence[Tuple[str, str]] = None, convert_system_message_to_human: bool = False)¶
Bases: ChatGoogleGenerativeAI, BaseChatModel
Wrapper class for interacting with the Google Generative AI API for chat-based models.
- supports_functions() → bool¶
Check if the model supports additional functions beyond basic chat.
- Returns:
bool: True if the model supports additional functions, False otherwise.
- class aisploit.models.ChatOllama(*, model: str = 'llama2', temperature: float = 1.0, name: str | None = None, cache: BaseCache | bool | None = None, verbose: bool = None, callbacks: Callbacks = None, tags: List[str] | None = None, metadata: Dict[str, Any] | None = None, custom_get_token_ids: Callable[[str], List[int]] | None = None, base_url: str = 'http://localhost:11434', mirostat: int | None = None, mirostat_eta: float | None = None, mirostat_tau: float | None = None, num_ctx: int | None = None, num_gpu: int | None = None, num_thread: int | None = None, num_predict: int | None = None, repeat_last_n: int | None = None, repeat_penalty: float | None = None, stop: List[str] | None = None, tfs_z: float | None = None, top_k: int | None = None, top_p: float | None = None, system: str | None = None, template: str | None = None, format: str | None = None, timeout: int | None = None, keep_alive: int | str | None = None, headers: dict | None = None, callback_manager: BaseCallbackManager | None = None)¶
Bases: ChatOllama, BaseChatModel
Wrapper class for interacting with the ChatOllama model.
- supports_functions() → bool¶
Check if the model supports additional functions beyond basic chat.
- Returns:
bool: True if the model supports additional functions, False otherwise.
- class aisploit.models.ChatOpenAI(*, api_key: str | None = None, model: str = 'gpt-4', max_tokens: int = 1024, temperature: float = 1.0, name: str | None = None, cache: BaseCache | bool | None = None, verbose: bool = None, callbacks: Callbacks = None, tags: List[str] | None = None, metadata: Dict[str, Any] | None = None, custom_get_token_ids: Callable[[str], List[int]] | None = None, callback_manager: BaseCallbackManager | None = None, client: Any = None, async_client: Any = None, model_kwargs: Dict[str, Any] = None, base_url: str | None = None, organization: str | None = None, openai_proxy: str | None = None, timeout: float | Tuple[float, float] | Any | None = None, max_retries: int = 2, streaming: bool = False, n: int = 1, tiktoken_model_name: str | None = None, default_headers: Mapping[str, str] | None = None, default_query: Mapping[str, object] | None = None, http_client: Any | None = None, http_async_client: Any | None = None)¶
Bases: ChatOpenAI, BaseChatModel
Wrapper class for interacting with the OpenAI API for chat-based models.
- supports_functions() → bool¶
Check if the model supports additional functions beyond basic chat.
- Returns:
bool: True if the model supports additional functions, False otherwise.
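Each constructor signature above documents a default model name for its provider. A small, self-contained lookup capturing those documented defaults (the default_model_for helper is illustrative, not part of aisploit):

```python
# Default model names taken verbatim from the constructor signatures above.
DOCUMENTED_DEFAULTS = {
    "aisploit.models.ChatAnthropic": "claude-3-opus-20240229",
    "aisploit.models.ChatGoogleGenerativeAI": "gemini-pro",
    "aisploit.models.ChatOllama": "llama2",
    "aisploit.models.ChatOpenAI": "gpt-4",
}


def default_model_for(class_path: str) -> str:
    """Return the documented default model name for a given class path."""
    return DOCUMENTED_DEFAULTS[class_path]


print(default_model_for("aisploit.models.ChatOpenAI"))  # gpt-4
```

Passing an explicit model/model_name argument at construction time overrides these defaults.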