Pagination
Learn how to efficiently paginate through large sets of data using Beamery's API pagination system
All list endpoints in Beamery's API support pagination, letting you work through large datasets in manageable chunks. Paginating keeps individual responses fast and prevents timeouts.
How Pagination Works
Beamery uses offset-based pagination across all APIs. This means you specify how many records to skip (offset) and how many to return (limit).
Query Parameters
- limit (integer) - Number of records to return per request (default varies by endpoint, typically 20-50; max: 100)
- offset (integer) - Number of records to skip before starting to return results (default: 0)
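For example, requesting the third page of 50 records means skipping the first 100:

GET /v1/contacts?limit=50&offset=100

This returns records 101-150 (when that many exist).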
Response Structure
All paginated responses include a pagination object with metadata:
- total (integer) - Total number of records available across all pages
- limit (integer) - Number of records returned in this response
- offset (integer) - Number of records skipped for this response
- hasMore (boolean) - Whether there are more records available beyond this page
Example Implementation
Here's how to implement pagination when fetching contacts:
Request Parameters
- limit=50 - Return 50 contacts per page
- offset=0 - Start from the first contact (page 1)
- offset=50 - Skip the first 50 contacts (page 2)
- offset=100 - Skip the first 100 contacts (page 3)
Calculating Pages
- Current page: Math.floor(offset / limit) + 1
- Total pages: Math.ceil(total / limit)
- Next page offset: offset + limit
- Previous page offset: Math.max(0, offset - limit)
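These formulas translate directly into a small helper (plain JavaScript, using only the pagination fields documented above):

function pageInfo({ total, limit, offset }) {
  return {
    currentPage: Math.floor(offset / limit) + 1,
    totalPages: Math.ceil(total / limit),
    nextOffset: offset + limit,
    prevOffset: Math.max(0, offset - limit)
  }
}

// e.g. pageInfo({ total: 1250, limit: 50, offset: 0 })
// => { currentPage: 1, totalPages: 25, nextOffset: 50, prevOffset: 0 }

The request below fetches the first page of this example; the JSON response that follows contains the pagination object you would feed into the helper.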
curl -G https://frontier.beamery.com/v1/contacts \
  -H "Authorization: Bearer your_access_token" \
  -d limit=50 \
  -d offset=0
{
"success": true,
"contacts": [
{
"id": "contact_123",
"firstName": "John",
"lastName": "Doe",
"email": "john.doe@example.com"
}
],
"pagination": {
"total": 1250,
"limit": 50,
"offset": 0,
"hasMore": true
}
}
Pagination Best Practices
1. Start with Reasonable Limits
Use appropriate page sizes based on your use case:
- Small datasets: 20-50 records per page
- Large datasets: 50-100 records per page
- Real-time processing: 10-20 records per page
2. Handle Edge Cases
async function fetchAllContacts() {
  const allContacts = []
  const limit = 50
  let offset = 0
  let hasMore = true

  while (hasMore) {
    const response = await fetch(`/v1/contacts?limit=${limit}&offset=${offset}`, {
      headers: { 'Authorization': 'Bearer your_access_token' }
    })
    if (!response.ok) throw new Error(`HTTP ${response.status}`)
    const data = await response.json()

    // Guard against a missing or empty page so the loop cannot spin forever
    const contacts = data.contacts ?? []
    allContacts.push(...contacts)

    hasMore = data.pagination.hasMore && contacts.length > 0
    offset += limit
  }

  return allContacts
}
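With this helper in place, collecting every contact is a single call (in a module context where top-level await is available):

const contacts = await fetchAllContacts()
console.log(`Fetched ${contacts.length} contacts`)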
3. Monitor Performance
- Large offsets can be slower - consider using filters to reduce dataset size
- Small limits increase API calls - balance between performance and memory usage
- Implement caching for frequently accessed data
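As a starting point for that caching, here is a minimal in-memory sketch keyed by limit and offset. The fetchContactsCached name and the 60-second TTL are illustrative choices, not part of the Beamery API:

// Minimal in-memory page cache: keys are "limit:offset",
// entries expire after a TTL (the 60s value is arbitrary)
const pageCache = new Map()
const CACHE_TTL_MS = 60_000

async function fetchContactsCached(offset = 0, limit = 50) {
  const key = `${limit}:${offset}`
  const cached = pageCache.get(key)
  if (cached && Date.now() - cached.at < CACHE_TTL_MS) {
    return cached.data
  }

  const response = await fetch(`/v1/contacts?limit=${limit}&offset=${offset}`, {
    headers: { 'Authorization': 'Bearer your_access_token' }
  })
  const data = await response.json()
  pageCache.set(key, { data, at: Date.now() })
  return data
}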
4. Error Handling
async function fetchContactsWithRetry(offset = 0, limit = 50, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      const response = await fetch(`/v1/contacts?limit=${limit}&offset=${offset}`, {
        headers: { 'Authorization': 'Bearer your_access_token' }
      })
      if (!response.ok) {
        if (response.status === 429) {
          // Rate limited - back off exponentially, then retry
          await new Promise(resolve => setTimeout(resolve, 1000 * 2 ** attempt))
          continue
        }
        throw new Error(`HTTP ${response.status}`)
      }
      return await response.json()
    } catch (error) {
      if (attempt === maxRetries - 1) throw error
      // Network error - back off before the next attempt
      await new Promise(resolve => setTimeout(resolve, 1000 * 2 ** attempt))
    }
  }
  throw new Error('Max retries exceeded')
}
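A single page fetch with retries then looks like:

const page = await fetchContactsWithRetry(0, 50)
console.log(page.pagination) // e.g. { total: 1250, limit: 50, offset: 0, hasMore: true }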
Supported Endpoints
The following endpoints support pagination:
Core CRM
- GET /v1/contacts - List contacts
- GET /v1/vacancies - List job vacancies
- GET /v1/pools - List talent pools
Job Architecture
- GET /v1/ja/skills - List skills
- GET /v1/ja/roles - List roles
- GET /v1/ja/tasks - List tasks
- GET /v1/ja/departments - List departments
Taxonomy
- GET /v1/taxonomy/skills - List taxonomy skills
- GET /v1/taxonomy/roles - List taxonomy roles
Each endpoint may have different default limits and maximum limits. Check the specific endpoint documentation for details.
Common Pitfalls
1. Deep Pagination Performance
Very large offset values (e.g., offset=10000) can be slow. Consider:
- Using filters to reduce the dataset size (see the sketch after this list)
- Implementing cursor-based pagination for very large datasets
- Caching results when possible
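For instance, rather than paging through the entire dataset with ever-growing offsets, you can slice it into windows with a filter and keep offsets small within each window. The sketch below assumes hypothetical createdAfter/createdBefore query parameters - substitute whichever filters the contacts endpoint actually supports:

// Sketch: page within a date window so offsets stay small.
// NOTE: createdAfter/createdBefore are hypothetical filter names.
async function fetchWindow(createdAfter, createdBefore) {
  const results = []
  const limit = 100 // documented maximum page size
  let offset = 0
  let hasMore = true

  while (hasMore) {
    const params = new URLSearchParams({ limit, offset, createdAfter, createdBefore })
    const response = await fetch(`/v1/contacts?${params}`, {
      headers: { 'Authorization': 'Bearer your_access_token' }
    })
    const data = await response.json()
    results.push(...(data.contacts ?? []))
    hasMore = data.pagination.hasMore
    offset += limit
  }

  return results
}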
2. Data Consistency
Data may change between requests. Be prepared to handle:
- Records appearing multiple times if new records are added (see the deduplication sketch after this list)
- Records being missed if records are deleted
- Total count changing between requests
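A simple defence against duplicates across pages is to track the IDs you have already seen. A minimal sketch, assuming each record carries the id field shown in the example response:

const seenIds = new Set()

function dedupe(contacts) {
  // Keep only records whose id has not appeared on an earlier page
  return contacts.filter(contact => {
    if (seenIds.has(contact.id)) return false
    seenIds.add(contact.id)
    return true
  })
}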
3. Rate Limiting
- Beamery enforces rate limits (40 requests per second per company) - see the throttle sketch after this list
- Implement exponential backoff for retry logic
- Consider using bulk operations when available
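One way to stay under that limit on the client side is to space requests out. A minimal sketch; the 100 ms gap (at most ~10 requests per second) is a deliberately conservative choice, not a Beamery recommendation:

const MIN_GAP_MS = 100 // at most ~10 requests/second, well under the 40/s limit
let lastRequestAt = 0

async function throttledFetch(url, options) {
  const wait = lastRequestAt + MIN_GAP_MS - Date.now()
  if (wait > 0) await new Promise(resolve => setTimeout(resolve, wait))
  lastRequestAt = Date.now()
  return fetch(url, options)
}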
For more information about rate limiting, see the Rate Limiting documentation.