Batch processing allows you to process multiple API requests efficiently in a single call. This page guides you through implementing batch processing with the JUHE API to improve your application's performance and reduce overhead.
## What is Batch Processing?
Batch processing is a method for sending multiple API requests in a single HTTP call, rather than making separate calls for each request. This approach offers several benefits:
- **Reduced Network Overhead:** Fewer HTTP connections mean lower latency
- **Lower Rate Limit Impact:** Multiple operations count as a single request toward your rate limit
- **Improved Throughput:** Process more data in less time
## How Batch Processing Works
The JUHE API batch endpoint accepts an array of individual requests and returns an array of corresponding responses:
```
POST /batch
```
Each request in the batch specifies:
- The HTTP method to use
- The API endpoint path
- Any parameters required by that endpoint
## Request Format
A batch request is structured as follows:
```json
{
  "requests": [
    {
      "method": "GET",
      "path": "/weather/current",
      "params": {
        "location": "New York"
      }
    },
    {
      "method": "GET",
      "path": "/weather/forecast",
      "params": {
        "location": "London",
        "days": 3
      }
    },
    {
      "method": "POST",
      "path": "/geo/ip",
      "body": {
        "ip": "8.8.8.8"
      }
    }
  ]
}
```
## Response Format
The batch endpoint returns responses in the same order as the requests:
```json
{
  "status": "success",
  "results": [
    {
      "status": "success",
      "data": {
        // Response for the first request (New York weather)
      }
    },
    {
      "status": "success",
      "data": {
        // Response for the second request (London forecast)
      }
    },
    {
      "status": "success",
      "data": {
        // Response for the third request (IP geolocation)
      }
    }
  ]
}
```
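Because results come back in the same order as the submitted requests, you can match each request spec to its outcome by index. A minimal sketch in Python (`pair_results` is an illustrative helper name, not part of any JUHE SDK):

```python
def pair_results(requests_list, batch_response):
    """Pair each submitted request spec with its result.

    Relies on the batch endpoint's ordering guarantee: results[i]
    corresponds to requests[i].
    """
    return list(zip(requests_list, batch_response["results"]))
```

This makes downstream logging and error reporting simpler, since each result carries its originating request alongside it.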
## Handling Partial Failures
If some requests within a batch succeed while others fail, the batch endpoint will still return a 200 OK status code. Each individual response within the results array will have its own status:
```json
{
  "status": "success",
  "results": [
    {
      "status": "success",
      "data": {
        // Response data
      }
    },
    {
      "status": "error",
      "code": "INVALID_PARAMETER",
      "message": "Invalid location parameter"
    },
    {
      "status": "success",
      "data": {
        // Response data
      }
    }
  ]
}
```
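One convenient pattern is to partition a batch response into successes and failures before processing. A sketch in Python (`split_results` is a hypothetical helper, not part of the API):

```python
def split_results(batch_response):
    """Separate per-request results by their individual status.

    Returns (successes, failures); each entry keeps the original index
    so failures can be traced back to the request that caused them.
    """
    successes, failures = [], []
    for index, result in enumerate(batch_response["results"]):
        if result["status"] == "success":
            successes.append((index, result["data"]))
        else:
            failures.append((index, result))
    return successes, failures
```

Keeping the index attached is important: without it, you cannot tell which original request a failure belongs to.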
## Implementation Examples
### JavaScript
```javascript
async function batchRequests(requests, apiKey) {
  const response = await fetch('https://hub.juheapi.com/batch', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${apiKey}`
    },
    body: JSON.stringify({ requests })
  });
  if (!response.ok) {
    throw new Error(`Batch request failed with HTTP status ${response.status}`);
  }
  return response.json();
}

// Example usage
const requests = [
  {
    method: 'GET',
    path: '/weather/current',
    params: { location: 'New York' }
  },
  {
    method: 'GET',
    path: '/weather/forecast',
    params: { location: 'London', days: 3 }
  }
];

batchRequests(requests, 'YOUR_API_KEY')
  .then(response => {
    // Check each result individually; the batch may have partially failed
    response.results.forEach((result, index) => {
      if (result.status === 'success') {
        console.log(`Request ${index} succeeded:`, result.data);
      } else {
        console.error(`Request ${index} failed:`, result.message);
      }
    });
  })
  .catch(error => {
    console.error('Batch request failed:', error);
  });
```
### Python
```python
import requests

def batch_requests(requests_array, api_key):
    url = "https://hub.juheapi.com/batch"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}"
    }
    payload = {"requests": requests_array}
    response = requests.post(url, json=payload, headers=headers)
    response.raise_for_status()  # surface HTTP-level failures early
    return response.json()

# Example usage
requests_array = [
    {
        "method": "GET",
        "path": "/weather/current",
        "params": {"location": "New York"}
    },
    {
        "method": "GET",
        "path": "/weather/forecast",
        "params": {"location": "London", "days": 3}
    }
]

response = batch_requests(requests_array, "YOUR_API_KEY")

# Check each result individually; the batch may have partially failed
for i, result in enumerate(response["results"]):
    if result["status"] == "success":
        print(f"Request {i} succeeded:", result["data"])
    else:
        print(f"Request {i} failed:", result["message"])
```
## Best Practices
- **Group Similar Requests:** Batch requests that target the same or related endpoints for the best performance
- **Limit Batch Size:** Keep batches to a reasonable size (max 20 requests per batch)
- **Handle Partial Failures:** Always check each result's status individually
- **Consider Dependencies:** If requests depend on each other's results, don't batch them together
- **Implement Retries Carefully:** If some batched requests fail, decide whether to retry the entire batch or only the failed requests
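The retry recommendation above can be sketched in Python. This assumes the response format documented earlier; `retry_failed` and the caller-supplied `send_batch` function are illustrative names, not part of any JUHE client library:

```python
def retry_failed(original_requests, batch_response, send_batch):
    """Re-submit only the requests whose individual result was an error.

    `send_batch` is a caller-supplied function that posts a list of
    request specs to the batch endpoint and returns the parsed JSON
    response. Returns the results list with retried entries merged in
    at their original positions.
    """
    failed = [
        (i, original_requests[i])
        for i, result in enumerate(batch_response["results"])
        if result["status"] == "error"
    ]
    if not failed:
        return batch_response["results"]

    retry_response = send_batch([req for _, req in failed])

    # Merge retried results back into their original slots
    merged = list(batch_response["results"])
    for (i, _), new_result in zip(failed, retry_response["results"]):
        merged[i] = new_result
    return merged
```

Retrying only the failed subset avoids re-running requests that already succeeded, which matters when some batched operations have side effects.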
## Limitations
- Maximum of 20 requests per batch
- All requests in a batch must use the same authentication
- Total payload size must be under 1MB
- Batch requests have a longer timeout (30 seconds vs. 10 seconds for individual requests)
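To stay under the 20-request limit, a long list of requests can be split into multiple batches. A minimal sketch (`chunk_requests` is a hypothetical helper name):

```python
def chunk_requests(requests_list, max_batch_size=20):
    """Split a request list into batches that respect the per-batch limit."""
    return [
        requests_list[i:i + max_batch_size]
        for i in range(0, len(requests_list), max_batch_size)
    ]
```

Each chunk can then be sent as its own batch call; remember that each batch still counts as one request toward your rate limit.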
## When to Avoid Batching
Batching may not be beneficial in these scenarios:
- When subsequent requests depend on earlier results
- For real-time, latency-sensitive operations
- For very large or complex requests that may timeout when combined
## Next Steps
Now that you understand batch processing, check out these related topics:
- Performance Optimization - Additional techniques to optimize your API usage
- Error Handling - Best practices for handling errors in batch requests
- Integration Examples - See complete integration examples for your preferred language