Datacenter availability
The S3-compatible API is available for network volumes in select datacenters. Each datacenter has a unique endpoint URL that you'll use when calling the API:

Datacenter | Endpoint URL |
---|---|
EUR-IS-1 | https://s3api-eur-is-1.runpod.io/ |
EU-RO-1 | https://s3api-eu-ro-1.runpod.io/ |
EU-CZ-1 | https://s3api-eu-cz-1.runpod.io/ |
US-KS-2 | https://s3api-us-ks-2.runpod.io/ |
US-CA-2 | https://s3api-us-ca-2.runpod.io/ |
Setup and authentication
1. Create a network volume
First, create a network volume in a supported datacenter. See Network volumes -> Create a network volume for detailed instructions.
2. Create an S3 API key
Next, you’ll need to generate a new key called an “S3 API key” (this is separate from your Runpod API key).
- Go to the Settings page in the Runpod console.
- Expand S3 API Keys and select Create an S3 API key.
- Name your key and select Create.
- Save the access key (e.g., `user_***...`) and secret (e.g., `rps_***...`) to use in the next step.
For security, Runpod will show your API key secret only once, so you may wish to save it elsewhere (e.g., in your password manager, or in a GitHub secret). Treat your API key secret like a password and don’t share it with anyone.
3. Configure AWS CLI
To use the S3-compatible API with your Runpod network volumes, you must configure your AWS CLI with the Runpod S3 API key you created.
- If you haven’t already, install the AWS CLI on your local machine.
- Run `aws configure` in your terminal.
- Provide the following when prompted:
  - AWS Access Key ID: Enter your Runpod user ID. You can find this in the Secrets section of the Runpod console, in the description of your S3 API key. By default, the description will look similar to: `Shared Secret for user_2f21CfO73Mm2Uq2lEGFiEF24IPw 1749176107073`. Here, `user_2f21CfO73Mm2Uq2lEGFiEF24IPw` is the user ID (yours will be different).
  - AWS Secret Access Key: Enter your Runpod S3 API key's secret access key.
  - Default Region name: You can leave this blank.
  - Default output format: You can leave this blank or set it to `json`.
The AWS CLI stores these credentials locally in the AWS credentials file (`~/.aws/credentials`).

Managing credentials
Verifying your AWS configuration
If you're experiencing authentication issues, use the following command to check your current AWS configuration:
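```sh
# Show the active credentials and where each one was loaded from
aws configure list
```

This command displays which credentials are currently active, the source of each credential (such as the config file or environment variables), and whether all required credentials are properly set.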
Environment variables override config files

AWS CLI uses the following priority order for credentials (highest to lowest):

1. Environment variables (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`)
2. AWS credentials file (`~/.aws/credentials`)
3. AWS config file (`~/.aws/config`)
- Check for existing environment variables.
- If outdated environment variables are set, unset them.
- Verify your configuration again.
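Here's a sketch of these three steps for a Unix-like shell (assuming bash or zsh; the variables shown are the standard AWS CLI credentials variables):

```sh
# 1. Check for existing AWS environment variables
env | grep AWS_

# 2. Unset any outdated credentials found above
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN

# 3. Verify the active configuration again
aws configure list
```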
Using the S3-compatible API
You can use the S3-compatible API to interact with your Runpod network volumes using standard S3 tools. Core AWS CLI operations such as `ls`, `cp`, `mv`, `rm`, and `sync` function as expected.
Network volume path mapping
Network volumes are mounted to Serverless workers at `/runpod-volume` and to Pods at `/workspace` by default. The S3-compatible API maps file paths as follows:

- Pod filesystem path: `/workspace/my-folder/file.txt`
- Serverless worker path: `/runpod-volume/my-folder/file.txt`
- S3 API path: `s3://NETWORK_VOLUME_ID/my-folder/file.txt`
s3 CLI examples
When using `aws s3` commands, you must pass in the endpoint URL for your network volume using the `--endpoint-url` flag and the datacenter ID using the `--region` flag.
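For example, a command against a volume in the EU-RO-1 datacenter takes this general shape (`NETWORK_VOLUME_ID` is a placeholder for your volume's ID; the examples below follow the same pattern):

```sh
aws s3 <subcommand> s3://NETWORK_VOLUME_ID/ \
    --endpoint-url https://s3api-eu-ro-1.runpod.io/ \
    --region EU-RO-1
```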
Unlike traditional S3 key-value stores, object names in the Runpod S3-compatible API correspond to actual file paths on your network volume. Object names containing special characters (e.g., `#`) may need to be URL-encoded to ensure proper processing.
List objects
Use `ls` to list objects in a network volume directory:
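```sh
# List the contents of a folder (placeholder volume ID and folder name)
aws s3 ls s3://NETWORK_VOLUME_ID/my-folder/ \
    --endpoint-url https://s3api-eu-ro-1.runpod.io/ \
    --region EU-RO-1
```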
Unlike standard S3 buckets, `ls` and `ListObjects` operations will list empty directories. `ls` operations may take a long time when used on a directory containing many files (over 10,000) or large amounts of data (over 10GB), or when used recursively on a network volume containing either.
Transfer files
Use `cp` to copy a file to a network volume:
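```sh
# Upload a local file to the volume (placeholder paths)
aws s3 cp local-file.txt s3://NETWORK_VOLUME_ID/my-folder/local-file.txt \
    --endpoint-url https://s3api-eu-ro-1.runpod.io/ \
    --region EU-RO-1
```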
Use `cp` to copy a file from a network volume to a local directory:
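```sh
# Download a file from the volume to a local directory (placeholder paths)
aws s3 cp s3://NETWORK_VOLUME_ID/my-folder/file.txt ~/local-dir/file.txt \
    --endpoint-url https://s3api-eu-ro-1.runpod.io/ \
    --region EU-RO-1
```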
Remove files
Use `rm` to remove a file from a network volume:
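```sh
# Delete a file on the volume (placeholder path)
aws s3 rm s3://NETWORK_VOLUME_ID/my-folder/file.txt \
    --endpoint-url https://s3api-eu-ro-1.runpod.io/ \
    --region EU-RO-1
```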
If you encounter a 502 "bad gateway" error during file transfer, try increasing `AWS_MAX_ATTEMPTS` to 10 or more:
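```sh
# AWS_MAX_ATTEMPTS raises the CLI's retry limit for the command that follows
AWS_MAX_ATTEMPTS=10 aws s3 cp local-file.txt s3://NETWORK_VOLUME_ID/local-file.txt \
    --endpoint-url https://s3api-eu-ro-1.runpod.io/ \
    --region EU-RO-1
```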
Sync directories
This command syncs a local directory (source) to a network volume directory (destination):
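```sh
# Mirror a local directory to the volume (placeholder paths)
aws s3 sync ./local-dir s3://NETWORK_VOLUME_ID/remote-dir \
    --endpoint-url https://s3api-eu-ro-1.runpod.io/ \
    --region EU-RO-1
```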
s3api CLI example
You can also use `aws s3api` commands (instead of `aws s3`) to interact with the S3-compatible API. For example, here's how you could use `aws s3api get-object` to download an object from a network volume:
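```sh
# Download an object; LOCAL_FILE is the output path on your machine
aws s3api get-object \
    --bucket NETWORK_VOLUME_ID \
    --key my-folder/my-file.txt \
    --endpoint-url https://s3api-eu-ro-1.runpod.io/ \
    --region EU-RO-1 \
    LOCAL_FILE
```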
Replace `LOCAL_FILE` with the desired path and name of the file after download (for example, `~/local-dir/my-file.txt`).

For a list of available `s3api` commands, see the AWS s3api reference.
Boto3 Python example
You can also use the Boto3 library to interact with the S3-compatible API, using it to transfer files to and from a Runpod network volume. The script below demonstrates how to upload a file to a Runpod network volume using Boto3. It takes command-line arguments for the network volume ID (used as the S3 bucket), the datacenter-specific S3 endpoint URL, the local file path, the desired object name (the file path on the network volume), and the AWS Region (which corresponds to the Runpod datacenter ID).

Boto3 script
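Here's a minimal sketch of such a script. It assumes your credentials are set as environment variables (described below); the `put_objects` function and the argument order follow the usage notes in this section, but treat the details as illustrative rather than canonical.

```python
import argparse
import os

import boto3
from botocore.config import Config


def put_objects(file_path, object_name, bucket, endpoint_url, region):
    """Upload a local file to a network volume via the S3-compatible API."""
    s3 = boto3.client(
        "s3",
        endpoint_url=endpoint_url,  # e.g., https://s3api-eu-ro-1.runpod.io/
        region_name=region,         # e.g., EU-RO-1 (the datacenter ID)
        # Read the Runpod S3 API key from the environment; fails loudly if unset.
        aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
        aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
        config=Config(signature_version="s3v4"),  # SigV4 request signing
    )
    # upload_file streams the file and switches to multipart upload for large files.
    s3.upload_file(file_path, bucket, object_name)
    print(f"Uploaded {file_path} to s3://{bucket}/{object_name}")


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Upload a file to a Runpod network volume.")
    parser.add_argument("bucket", help="Network volume ID (used as the S3 bucket)")
    parser.add_argument("endpoint_url", help="Datacenter-specific S3 endpoint URL")
    parser.add_argument("file_path", help="Local path of the file to upload")
    parser.add_argument("object_name", help="Destination file path on the network volume")
    parser.add_argument("region", help="Datacenter ID, e.g., EU-RO-1")
    args = parser.parse_args()
    put_objects(args.file_path, args.object_name, args.bucket, args.endpoint_url, args.region)
```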
To run this script, your Runpod S3 API key credentials must be set as environment variables, using the values from the Setup and authentication step:

- `AWS_ACCESS_KEY_ID`: Your Runpod S3 API key access key (e.g., `user_***...`).
- `AWS_SECRET_ACCESS_KEY`: Your Runpod S3 API key's secret (e.g., `rps_***...`).
Example usage
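For example, assuming the sketch above is saved as `upload_to_volume.py` (a hypothetical filename) and your credentials are exported:

```sh
export AWS_ACCESS_KEY_ID=user_***...
export AWS_SECRET_ACCESS_KEY=rps_***...

python upload_to_volume.py NETWORK_VOLUME_ID https://s3api-eu-ro-1.runpod.io/ \
    local_directory/file.txt remote_directory/file.txt EU-RO-1
```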
When uploading files with Boto3, you must specify the complete file path (including the filename) for both source and destination files. For example, for the `put_objects` method above, you must specify these arguments:

- `file_path`: The local source file (e.g., `local_directory/file.txt`).
- `object_name`: The remote destination file to be created on the network volume (e.g., `remote_directory/file.txt`).
Uploading very large files
You can upload large files to network volumes using S3 multipart upload operations (see the compatibility reference below). You can also use this helper script, available for download on GitHub, which dramatically improves reliability when uploading very large files (10GB+) by handling timeouts and retries automatically. See the script's GitHub page for an example of how to run it using command-line arguments.

S3 API compatibility reference
The tables below show which S3 API operations and AWS CLI commands are currently supported, so you can see what functionality is available and plan your development workflows accordingly. For detailed information on these operations, refer to the AWS S3 API documentation. If an operation is not listed below, it's not currently implemented. We are continuously expanding the S3-compatible API based on user needs and usage patterns.
Core operations
Operation | Supported | CLI Command | Notes |
---|---|---|---|
CopyObject | ✅ | `aws s3 cp`, `aws s3api copy-object` | Copy objects between locations |
DeleteObject | ✅ | `aws s3 rm`, `aws s3api delete-object` | Remove individual objects |
GetObject | ✅ | `aws s3 cp`, `aws s3api get-object` | Download objects |
HeadBucket | ✅ | `aws s3 ls`, `aws s3api head-bucket` | Verify bucket exists and permissions |
HeadObject | ✅ | `aws s3api head-object` | Retrieve object metadata |
ListBuckets | ✅ | `aws s3 ls`, `aws s3api list-buckets` | List available network volumes |
ListObjects | ✅ | `aws s3 ls`, `aws s3api list-objects` | List objects in a bucket (includes empty directories) |
ListObjectsV2 | ✅ | `aws s3 ls`, `aws s3api list-objects-v2` | Enhanced version of ListObjects |
PutObject | ✅ | `aws s3 cp`, `aws s3api put-object` | Upload objects (<500MB) |
DeleteObjects | ❌ | `aws s3api delete-objects` | Planned |
RestoreObject | ❌ | `aws s3api restore-object` | Not supported |
`ListObjects` operations may take a long time when used on a directory containing many files (over 10,000) or large amounts of data (over 10GB), or when used recursively on a network volume containing either.
Multipart upload operations
Files larger than 500MB must be uploaded using multipart uploads. The AWS CLI performs multipart uploads automatically.
Operation | Supported | CLI Command | Notes |
---|---|---|---|
CreateMultipartUpload | ✅ | `aws s3api create-multipart-upload` | Start multipart upload for large files |
UploadPart | ✅ | `aws s3api upload-part` | Upload individual parts |
CompleteMultipartUpload | ✅ | `aws s3api complete-multipart-upload` | Finish multipart upload |
AbortMultipartUpload | ✅ | `aws s3api abort-multipart-upload` | Cancel multipart upload |
ListMultipartUploads | ✅ | `aws s3api list-multipart-uploads` | View in-progress uploads |
ListParts | ✅ | `aws s3api list-parts` | List parts of a multipart upload |
Bucket management operations
Operation | Supported | CLI Command | Notes |
---|---|---|---|
CreateBucket | ❌ | `aws s3api create-bucket` | Use the Runpod console to create network volumes |
DeleteBucket | ❌ | `aws s3api delete-bucket` | Use the Runpod console to delete network volumes |
GetBucketLocation | ❌ | `aws s3api get-bucket-location` | Datacenter info available in the Runpod console |
GetBucketVersioning | ❌ | `aws s3api get-bucket-versioning` | Versioning is not supported |
PutBucketVersioning | ❌ | `aws s3api put-bucket-versioning` | Versioning is not supported |
GeneratePresignedURL | ❌ | `aws s3 presign` | Pre-signed URLs are not supported |
Access control and permissions
Operation | Supported | CLI Command | Notes |
---|---|---|---|
GetBucketAcl | ❌ | N/A | ACLs are not supported |
PutBucketAcl | ❌ | N/A | ACLs are not supported |
GetObjectAcl | ❌ | N/A | ACLs are not supported |
PutObjectAcl | ❌ | N/A | ACLs are not supported |
GetBucketPolicy | ❌ | N/A | Bucket policies are not supported |
PutBucketPolicy | ❌ | N/A | Bucket policies are not supported |
Object metadata and tagging
Operation | Supported | CLI Command | Notes |
---|---|---|---|
GetObjectTagging | ❌ | N/A | Object tagging is not supported |
PutObjectTagging | ❌ | N/A | Object tagging is not supported |
DeleteObjectTagging | ❌ | N/A | Object tagging is not supported |
Encryption and security
Operation | Supported | CLI Command | Notes |
---|---|---|---|
GetBucketEncryption | ❌ | N/A | Encryption is not supported |
PutBucketEncryption | ❌ | N/A | Encryption is not supported |
GetObjectLockConfiguration | ❌ | N/A | Object locking is not supported |
PutObjectLockConfiguration | ❌ | N/A | Object locking is not supported |
Known issues and limitations
ListObjects runs slowly or fails with "same next token" error
When running `aws s3 ls` or `ListObjects` on a directory with many files or large amounts of data (typically >10,000 files or >10 GB of data) for the first time, it may run very slowly, or the client may abort with a pagination error reporting that the same next token was received twice. This occurs because Runpod must compute and cache the MD5 checksum (i.e., ETag) for files created without the S3-compatible API. This computation can take several minutes for large directories or files, as the `ListObjects` request must wait until the checksum is ready.

Workarounds:

- The operation will typically complete successfully if you wait for the process to finish.
- If the client aborts with a pagination error, retry the operation after a brief pause.
Storage and time synchronization
- Storage capacity: Network volumes have a fixed storage capacity, unlike the virtually unlimited storage of standard S3 buckets. The `CopyObject` and `UploadPart` actions do not check for available free space beforehand and may fail if the volume runs out of space.
- Maximum file size: 4TB (the maximum size of a network volume).
- Object names: Unlike traditional S3 key-value stores, object names in the Runpod S3-compatible API correspond to actual file paths on your network volume. Object names containing special characters (e.g., `#`) may need to be URL-encoded to ensure proper processing.
- Time synchronization: Requests that are out of time sync by more than 1 hour will be rejected. This is more lenient than the 15-minute window specified by the AWS SigV4 authentication specification.
Multipart uploads
- The maximum size for a single part of a multipart upload is 500MB.
- The AWS S3 minimum part size of 5MB is not enforced.
- Multipart upload parts and metadata are stored in a hidden `.s3compat_uploads/` folder. This folder and its contents are automatically cleaned up when you call `CompleteMultipartUpload` or `AbortMultipartUpload`.
Timeout configuration for large files
When uploading large files (10GB+), you may encounter timeout errors during the `CompleteMultipartUpload` operation. To resolve this, we recommend using the multipart upload helper script. Or you can try increasing the timeout settings in your AWS tools.

For `aws s3` and `aws s3api` commands, use the `--cli-read-timeout` parameter:
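```sh
# Raise the socket read timeout (seconds; illustrative value)
aws s3 cp large-file.bin s3://NETWORK_VOLUME_ID/large-file.bin \
    --endpoint-url https://s3api-eu-ro-1.runpod.io/ \
    --region EU-RO-1 \
    --cli-read-timeout 300
```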
Or, configure the timeout in `~/.aws/config`:
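```ini
[default]
# Maximum socket read time in seconds (illustrative value)
cli_read_timeout = 300
```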