---
sidebar_label: Requesty
---
# Using Requesty With Roo Code
Roo Code supports accessing models through the [Requesty](https://www.requesty.ai/) AI platform. Requesty provides a single, optimized API for interacting with 150+ large language models (LLMs).
**Website:** [https://www.requesty.ai/](https://www.requesty.ai/)
## Getting an API Key
1. **Sign Up/Sign In:** Go to the [Requesty website](https://www.requesty.ai/) and create an account or sign in.
2. **Get API Key:** You can get an API key from the [API Management](https://app.requesty.ai/manage-api) section of your Requesty dashboard.
## Supported Models
Requesty provides access to a wide range of models. Roo Code will automatically fetch the latest list of available models. You can see the full list of available models on the [Model List](https://app.requesty.ai/router/list) page.
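Roo Code performs this fetch for you, but as a rough sketch of what such a request can look like, here is a minimal example assuming Requesty exposes an OpenAI-compatible `/models` endpoint at `https://router.requesty.ai/v1` (both the base URL and the response shape are assumptions here; confirm them in the Requesty docs):
```typescript
// Hedged sketch: list available models via an assumed OpenAI-compatible
// /models endpoint. Base URL and response shape are assumptions.
const res = await fetch("https://router.requesty.ai/v1/models", {
  headers: { Authorization: `Bearer ${process.env.REQUESTY_API_KEY}` },
});
if (!res.ok) throw new Error(`Request failed: ${res.status}`);
const { data } = await res.json();
for (const model of data) {
  console.log(model.id); // model IDs, e.g. "openai/gpt-4o" (format may vary)
}
```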
## Configuration in Roo Code
1. **Open Roo Code Settings:** Click the gear icon (<Codicon name="gear" />) in the Roo Code panel.
2. **Select Provider:** Choose "Requesty" from the "API Provider" dropdown.
3. **Enter API Key:** Paste your Requesty API key into the "Requesty API Key" field.
4. **Select Model:** Choose your desired model from the "Model" dropdown.
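Once configured, Roo Code talks to Requesty for you. If you want to sanity-check your API key outside of Roo Code first, here is a minimal sketch using the official `openai` npm package, assuming Requesty's router is OpenAI-compatible (the base URL and the model ID below are assumptions; check the Requesty docs and the Model List page for exact values):
```typescript
import OpenAI from "openai";

// Assumed setup: Requesty exposes an OpenAI-compatible router.
// Confirm the exact base URL in the Requesty docs.
const client = new OpenAI({
  apiKey: process.env.REQUESTY_API_KEY, // your key from app.requesty.ai/manage-api
  baseURL: "https://router.requesty.ai/v1",
});

async function main() {
  const completion = await client.chat.completions.create({
    model: "openai/gpt-4o-mini", // hypothetical ID; pick one from the Model List page
    messages: [{ role: "user", content: "Say hello." }],
  });
  console.log(completion.choices[0].message.content);
}

main().catch(console.error);
```
If this call succeeds, the same key should work in Roo Code's provider settings.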
## Tips and Notes
- **Optimizations**: Requesty offers a range of in-flight cost optimizations to lower your costs.
- **Unified and simplified billing**: Unrestricted access to all providers and models, automatic balance top-ups, and more via a single [API key](https://app.requesty.ai/manage-api).
- **Cost tracking**: Track cost per model, coding language, changed file, and more via the [Cost dashboard](https://app.requesty.ai/cost-management) or the [Requesty VS Code extension](https://marketplace.visualstudio.com/items?itemName=Requesty.requesty).
- **Stats and logs**: See your [coding stats dashboard](https://app.requesty.ai/usage-stats) or go through your [LLM interaction logs](https://app.requesty.ai/logs).
- **Fallback policies**: Keep your LLM working for you with fallback policies when providers are down.
- **Prompt caching**: Some providers support prompt caching. [Search models with caching](https://app.requesty.ai/router/list).
## Relevant Resources
- [Requesty YouTube channel](https://www.youtube.com/@requestyAI)
- [Requesty Discord](https://requesty.ai/discord)