ReadOnlyFile lets you read OSS objects using standard Go file interfaces. Open an object with OpenFile, then use Read, Seek, Stat, and Close — the same way you'd work with a local file.
Prerequisites
Before you begin, ensure that you have:
- The oss:GetObject permission. See Grant custom permissions to a RAM user.
- Access credentials configured as environment variables. See Configure access credentials.
How it works
OpenFile returns a *ReadOnlyFile, which implements io.Reader, io.Seeker, and io.Closer. It supports two read modes:
Single stream mode (default): reads the object sequentially over a single connection with automatic reconnection.
Prefetch mode: downloads chunks in parallel before they are requested, improving throughput for large sequential reads. Enable it by setting EnablePrefetch = true.
In single stream mode, ReadOnlyFile reconnects automatically if the connection drops, which keeps reads robust in unstable network environments.
If prefetch mode is enabled and multiple out-of-order reads occur, the SDK falls back to single stream mode automatically.
API reference
```go
type ReadOnlyFile struct { ... }

func (c *Client) OpenFile(
	ctx context.Context,
	bucket string,
	key string,
	optFns ...func(*OpenOptions),
) (file *ReadOnlyFile, err error)
```

Parameters
| Parameter | Type | Description |
|---|---|---|
| ctx | context.Context | Request context. |
| bucket | string | Bucket name. |
| key | string | Object name. |
| optFns | ...func(*OpenOptions) | Optional. See OpenOptions below. |
OpenOptions
| Option | Type | Default | Description |
|---|---|---|---|
| Offset | int64 | 0 | Start offset when opening the object. Must be >= 0. |
| VersionId | *string | — | Object version to read. Valid only if multiple versions of the object exist. |
| RequestPayer | *string | — | Set to requester when pay-by-requester is enabled on the bucket. |
| EnablePrefetch | bool | false | Enable prefetch mode. |
| PrefetchNum | int | 3 | Number of chunks to prefetch. Effective only when EnablePrefetch is true. |
| ChunkSize | int64 | 6 MiB | Size of each prefetched chunk. Effective only when EnablePrefetch is true. |
| PrefetchThreshold | int64 | 20 MiB | Bytes read sequentially before prefetch starts. Effective only when EnablePrefetch is true. |
PrefetchNum, ChunkSize, and PrefetchThreshold take effect only when EnablePrefetch is true. Setting them without enabling prefetch mode has no effect.
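The optFns parameter follows Go's functional-options pattern: each function receives a pointer to the options struct and mutates only the fields it cares about, leaving the rest at their defaults. A minimal local sketch of the same pattern (the OpenOptions struct and openFile function below are simplified stand-ins for illustration, not the SDK's definitions):

```go
package main

import "fmt"

// OpenOptions mirrors a few fields of the SDK's options struct for illustration.
type OpenOptions struct {
	EnablePrefetch    bool
	PrefetchNum       int
	ChunkSize         int64
	PrefetchThreshold int64
}

// openFile seeds the documented defaults, then applies each option function
// in order, the same way variadic optFns parameters are typically consumed.
func openFile(optFns ...func(*OpenOptions)) OpenOptions {
	opts := OpenOptions{
		PrefetchNum:       3,
		ChunkSize:         6 << 20,  // 6 MiB
		PrefetchThreshold: 20 << 20, // 20 MiB
	}
	for _, fn := range optFns {
		fn(&opts)
	}
	return opts
}

func main() {
	opts := openFile(func(o *OpenOptions) {
		o.EnablePrefetch = true
		o.PrefetchNum = 8
	})
	fmt.Println(opts.EnablePrefetch, opts.PrefetchNum) // true 8
}
```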
Return values
| Value | Type | Description |
|---|---|---|
| file | *ReadOnlyFile | The file handle. Valid when err is nil. Implements io.Reader, io.Seeker, and io.Closer. |
| err | error | Non-nil if the call fails. |
ReadOnlyFile methods
| Method | Description |
|---|---|
| Read(p []byte) (int, error) | Reads up to len(p) bytes into p. Returns the number of bytes read and any error. |
| Seek(offset int64, whence int) (int64, error) | Sets the read position. whence: io.SeekStart (0), io.SeekCurrent (1), or io.SeekEnd (2). |
| Stat() (os.FileInfo, error) | Returns object metadata. Use info.Size() for object size, info.ModTime() for last modified time, and info.Sys().(http.Header) for OSS-specific HTTP headers such as Content-Type. |
| Close() error | Releases the file handle, memory, and active connections. |
Examples
All three examples below use the same initialization block to create an OSS client:
```go
import (
	"github.com/aliyun/alibabacloud-oss-go-sdk-v2/oss"
	"github.com/aliyun/alibabacloud-oss-go-sdk-v2/oss/credentials"
)

cfg := oss.LoadDefaultConfig().
	WithCredentialsProvider(credentials.NewEnvironmentVariableCredentialsProvider()).
	WithRegion("<region>")
client := oss.NewClient(cfg)
```

Replace <region> with your bucket's region ID (for example, cn-hangzhou). Credentials are read from environment variables. By default, the public endpoint is used. To access OSS from another Alibaba Cloud service in the same region, configure the internal endpoint. See Regions and endpoints.
Read an entire object using single stream mode
```go
f, err := client.OpenFile(context.TODO(), "<bucket>", "<object>")
if err != nil {
	log.Fatalf("failed to open file: %v", err)
}
defer f.Close()

written, err := io.Copy(io.Discard, f)
if err != nil {
	log.Fatalf("failed to read file: %v", err)
}
log.Printf("read %d bytes", written)
```

Read an entire object using prefetch mode
Set EnablePrefetch = true to download chunks ahead of reads. This increases throughput for sequential reads of large objects.
```go
f, err := client.OpenFile(context.TODO(), "<bucket>", "<object>",
	func(oo *oss.OpenOptions) {
		oo.EnablePrefetch = true
	},
)
if err != nil {
	log.Fatalf("failed to open file: %v", err)
}
defer f.Close()

written, err := io.Copy(io.Discard, f)
if err != nil {
	log.Fatalf("failed to read file: %v", err)
}
log.Printf("read %d bytes", written)
```

Read from a specific position using Seek
Use Stat to inspect object metadata, then Seek to position the read cursor before calling Read.
```go
f, err := client.OpenFile(context.TODO(), "<bucket>", "<object>")
if err != nil {
	log.Fatalf("failed to open file: %v", err)
}
defer f.Close()

// Get object size and last modified time.
info, err := f.Stat()
if err != nil {
	log.Fatalf("failed to stat file: %v", err)
}
log.Printf("size: %d, last modified: %v", info.Size(), info.ModTime())

// Access OSS-specific metadata via HTTP headers.
if header, ok := info.Sys().(http.Header); ok {
	log.Printf("content-type: %s", header.Get(oss.HTTPHeaderContentType))
}

// Seek to byte 123 and read the rest of the object.
if _, err = f.Seek(123, io.SeekStart); err != nil {
	log.Fatalf("failed to seek: %v", err)
}
written, err := io.Copy(io.Discard, f)
if err != nil {
	log.Fatalf("failed to read file: %v", err)
}
log.Printf("read %d bytes", written)
```