Documentation
¶
Overview ¶
Copyright (c) 2009-2025, quasardb SAS. All rights reserved.
Package qdb provides an API to a QuasarDB server, a high-performance time series database. Core types include HandleType, Entry, Cluster, Reader, Writer, and ColumnData; a typical batch read is h.NewReader(opts).FetchAll().
CRITICAL MEMORY SAFETY PATTERN: The Writer implements a 5-phase centralized pinning strategy that is the ONLY safe way to pass Go memory to C in this codebase. This pattern prevents segfaults from violating Go 1.23+ CGO pointer rules.
The 5-phase pattern in Push() (MANDATORY sequence):
- Prepare: Collect PinnableBuilders (no pointer assignments!)
- Pin: Pin all Go memory at once
- Build: Execute builders to populate C structures
- Execute: Call C API with pinned memory
- KeepAlive: Prevent GC collection until done
WHY THIS MATTERS:
- Direct pointer assignment before pinning = immediate segfault
- The builder pattern defers assignments until after pinning
- This is the ONLY pattern that works with GODEBUG=cgocheck=2
Memory strategies by column type:
- Int64/Double/Timestamp: Zero-copy with pinning (maximum performance)
- Blob/String: Copy to C memory (required for pointer safety)
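The phase ordering can be illustrated with a minimal, self-contained sketch. The pinnableBuilder type below is a simplified, hypothetical stand-in for the package's PinnableBuilder (whose internals are not documented here), and the C call itself is replaced by a print:

package main

import (
	"fmt"
	"runtime"
	"unsafe"
)

// pinnableBuilder is a hypothetical stand-in: it carries the Go object that
// must be pinned and a builder that is only executed after pinning.
type pinnableBuilder struct {
	object any
	build  func() unsafe.Pointer
}

func main() {
	xs := []int64{1, 2, 3}

	// Phase 1 - Prepare: collect builders, no pointer assignments yet.
	builders := []pinnableBuilder{{
		object: &xs[0],
		build:  func() unsafe.Pointer { return unsafe.Pointer(&xs[0]) },
	}}

	// Phase 2 - Pin: pin all Go memory in one batch.
	var pinner runtime.Pinner
	for _, b := range builders {
		pinner.Pin(b.object)
	}

	// Phase 3 - Build: only now materialize pointers destined for C structures.
	ptrs := make([]unsafe.Pointer, 0, len(builders))
	for _, b := range builders {
		ptrs = append(ptrs, b.build())
	}

	// Phase 4 - Execute: this is where the C API would be called with ptrs.
	fmt.Println("would pass", len(ptrs), "pinned pointers to C")

	// Phase 5 - KeepAlive/Unpin: release only after the C call has returned.
	runtime.KeepAlive(xs)
	pinner.Unpin()
}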
Index ¶
- Constants
- Variables
- func ClusterKeyFromFile(clusterPublicKeyFile string) (string, error)
- func CountUndefined() uint64
- func GetColumnDataBlob(cd ColumnData) ([][]byte, error)
- func GetColumnDataBlobUnsafe(cd ColumnData) [][]byte
- func GetColumnDataDouble(cd ColumnData) ([]float64, error)
- func GetColumnDataDoubleUnsafe(cd ColumnData) []float64
- func GetColumnDataInt64(cd ColumnData) ([]int64, error)
- func GetColumnDataInt64Unsafe(cd ColumnData) []int64
- func GetColumnDataString(cd ColumnData) ([]string, error)
- func GetColumnDataStringUnsafe(cd ColumnData) []string
- func GetColumnDataTimestamp(cd ColumnData) ([]time.Time, error)
- func GetColumnDataTimestampNative(cd ColumnData) ([]C.qdb_timespec_t, error)
- func GetColumnDataTimestampNativeUnsafe(cd ColumnData) []C.qdb_timespec_t
- func GetColumnDataTimestampUnsafe(cd ColumnData) []time.Time
- func Int64Undefined() int64
- func IsRetryable(err error) bool
- func MaxTimespec() time.Time
- func MinTimespec() time.Time
- func NeverExpires() time.Time
- func PreserveExpiration() time.Time
- func QdbTimespecSliceToTime(xs []C.qdb_timespec_t) []time.Time
- func QdbTimespecToTime(t C.qdb_timespec_t) time.Time
- func SetLogFile(path string)
- func SetLogger(l Logger)
- func TimeSliceToQdbTimespec(xs []time.Time) []C.qdb_timespec_t
- func TimeToQdbTimespec(t time.Time, out *C.qdb_timespec_t)
- func TimespecToStructG(tp C.qdb_timespec_t) time.Time
- func TsColumnInfoExToStructG(t C.qdb_ts_column_info_ex_t, entry TimeseriesEntry) tsColumn
- func UserCredentialFromFile(userCredentialFile string) (user, secret string, err error)
- func WithGC(t testHelper, testName string, testFunc func())
- func WithGCAndHandle(t testHelper, handle HandleType, testName string, testFunc func())
- type BlobEntry
- func (entry *BlobEntry) CompareAndSwap(newValue, newComparand []byte, expiry time.Time) ([]byte, error)
- func (entry BlobEntry) Get() ([]byte, error)
- func (entry BlobEntry) GetAndRemove() ([]byte, error)
- func (entry *BlobEntry) GetAndUpdate(newContent []byte, expiry time.Time) ([]byte, error)
- func (entry BlobEntry) GetNoAlloc(content []byte) (int, error)
- func (entry BlobEntry) Put(content []byte, expiry time.Time) error
- func (entry BlobEntry) RemoveIf(comparand []byte) error
- func (entry *BlobEntry) Update(newContent []byte, expiry time.Time) error
- type Cluster
- type ColumnData
- type ColumnDataBlob
- type ColumnDataDouble
- type ColumnDataInt64
- type ColumnDataString
- type ColumnDataTimestamp
- type Compression
- type DirectBlobEntry
- type DirectEntry
- type DirectHandleType
- func (h DirectHandleType) Blob(alias string) DirectBlobEntry
- func (h DirectHandleType) Close() error
- func (h DirectHandleType) Integer(alias string) DirectIntegerEntry
- func (h DirectHandleType) PrefixGet(prefix string, limit int) ([]string, error)
- func (h DirectHandleType) Release(buffer unsafe.Pointer)
- type DirectIntegerEntry
- type Encryption
- type Endpoint
- type Entry
- func (e Entry) Alias() string
- func (e Entry) AttachTag(tag string) error
- func (e Entry) AttachTags(tags []string) error
- func (e Entry) DetachTag(tag string) error
- func (e Entry) DetachTags(tags []string) error
- func (e Entry) Exists() bool
- func (e Entry) ExpiresAt(expiry time.Time) error
- func (e Entry) ExpiresFromNow(expiry time.Duration) error
- func (e Entry) GetLocation() (NodeLocation, error)
- func (e Entry) GetMetadata() (Metadata, error)
- func (e Entry) GetTagged(tag string) ([]string, error)
- func (e Entry) GetTags() ([]string, error)
- func (e Entry) HasTag(tag string) error
- func (e Entry) Remove() error
- type EntryType
- type ErrorType
- type Find
- type HandleOptions
- func (o *HandleOptions) GetClientMaxInBufSize() uint
- func (o *HandleOptions) GetClientMaxParallelism() int
- func (o *HandleOptions) GetClusterPublicKey() string
- func (o *HandleOptions) GetClusterPublicKeyFile() string
- func (o *HandleOptions) GetClusterURI() string
- func (o *HandleOptions) GetCompression() Compression
- func (o *HandleOptions) GetEncryption() Encryption
- func (o *HandleOptions) GetTimeout() time.Duration
- func (o *HandleOptions) GetUserName() string
- func (o *HandleOptions) GetUserSecret() string
- func (o *HandleOptions) GetUserSecurityFile() string
- func (o *HandleOptions) WithClientMaxInBufSize(size uint) *HandleOptions
- func (o *HandleOptions) WithClientMaxParallelism(n int) *HandleOptions
- func (o *HandleOptions) WithClusterPublicKey(key string) *HandleOptions
- func (o *HandleOptions) WithClusterPublicKeyFile(path string) *HandleOptions
- func (o *HandleOptions) WithClusterUri(uri string) *HandleOptions
- func (o *HandleOptions) WithCompression(compression Compression) *HandleOptions
- func (o *HandleOptions) WithEncryption(encryption Encryption) *HandleOptions
- func (o *HandleOptions) WithTimeout(timeout time.Duration) *HandleOptions
- func (o *HandleOptions) WithUserName(name string) *HandleOptions
- func (o *HandleOptions) WithUserSecret(secret string) *HandleOptions
- func (o *HandleOptions) WithUserSecurityFile(path string) *HandleOptions
- type HandleOptionsProvider
- type HandleType
- func MustSetupHandle(clusterURI string, timeout time.Duration) HandleType
- func MustSetupSecuredHandle(clusterURI, clusterPublicKeyFile, userCredentialFile string, ...) HandleType
- func NewHandle() (HandleType, error)
- func NewHandleFromOptions(options *HandleOptions) (HandleType, error)
- func NewHandleWithNativeLogs() (HandleType, error)
- func SetupHandle(clusterURI string, timeout time.Duration) (HandleType, error)
- func SetupSecuredHandle(clusterURI, clusterPublicKeyFile, userCredentialFile string, ...) (HandleType, error)
- func (h HandleType) APIBuild() string
- func (h HandleType) APIVersion() string
- func (h HandleType) AddClusterPublicKey(secret string) error
- func (h HandleType) AddUserCredentials(name, secret string) error
- func (h HandleType) Blob(alias string) BlobEntry
- func (h HandleType) Close() error
- func (h HandleType) Cluster() *Cluster
- func (h HandleType) Connect(clusterURI string) error
- func (h HandleType) DirectConnect(nodeURI string) (DirectHandleType, error)
- func (h HandleType) Find() *Find
- func (h HandleType) GetClientMaxInBufSize() (uint, error)
- func (h HandleType) GetClientMaxParallelism() (uint, error)
- func (h HandleType) GetClusterMaxInBufSize() (uint, error)
- func (h HandleType) GetLastError() (string, error)
- func (h HandleType) GetTagged(tag string) ([]string, error)
- func (h HandleType) GetTags(entryAlias string) ([]string, error)
- func (h HandleType) Integer(alias string) IntegerEntry
- func (h HandleType) Node(uri string) *Node
- func (h HandleType) NodeStatistics(nodeID string) (Statistics, error) (deprecated)
- func (h HandleType) Open(protocol Protocol) error
- func (h HandleType) PrefixCount(prefix string) (uint64, error)
- func (h HandleType) PrefixGet(prefix string, limit int) ([]string, error)
- func (h HandleType) Query(query string) *Query
- func (h HandleType) Release(buffer unsafe.Pointer)
- func (h HandleType) SetClientMaxInBufSize(bufSize uint) error
- func (h HandleType) SetClientMaxParallelism(threadCount uint) error
- func (h HandleType) SetCompression(compressionLevel Compression) error
- func (h HandleType) SetEncryption(encryption Encryption) error
- func (h HandleType) SetMaxCardinality(maxCardinality uint) error
- func (h HandleType) SetTimeout(timeout time.Duration) error
- func (h HandleType) Statistics() (map[string]Statistics, error)
- func (h HandleType) Table(alias string) TimeseriesEntry
- func (h HandleType) Timeseries(alias string) TimeseriesEntry (deprecated)
- func (h HandleType) TsBatch(cols ...TsBatchColumnInfo) (*TsBatch, error)
- type IntegerEntry
- type JSONPath
- type Logger
- type Metadata
- type NilLogger
- func (*NilLogger) Debug(msg string, args ...any)
- func (*NilLogger) Detailed(msg string, args ...any)
- func (*NilLogger) Error(msg string, args ...any)
- func (*NilLogger) Info(msg string, args ...any)
- func (*NilLogger) Panic(msg string, args ...any)
- func (*NilLogger) Warn(msg string, args ...any)
- func (*NilLogger) With(args ...any) Logger
- type Node
- type NodeLocation
- type NodeStatus
- type NodeTopology
- type PinnableBuilder
- type Protocol
- type Query
- type QueryPoint
- func (r *QueryPoint) Get() QueryPointResult
- func (r *QueryPoint) GetBlob() ([]byte, error)
- func (r *QueryPoint) GetCount() (int64, error)
- func (r *QueryPoint) GetDouble() (float64, error)
- func (r *QueryPoint) GetInt64() (int64, error)
- func (r *QueryPoint) GetString() (string, error)
- func (r *QueryPoint) GetTimestamp() (time.Time, error)
- type QueryPointResult
- type QueryResult
- func (r QueryResult) Columns(row *QueryPoint) QueryRow
- func (r QueryResult) ColumnsCount() int64
- func (r QueryResult) ColumnsNames() []string
- func (r QueryResult) ErrorMessage() string
- func (r QueryResult) RowCount() int64
- func (r QueryResult) Rows() QueryRows
- func (r QueryResult) ScannedPoints() int64
- type QueryResultValueType
- type QueryRow
- type QueryRows
- type Reader
- type ReaderChunk
- type ReaderColumn
- type ReaderOptions
- type RefID
- type Statistics
- type TestTimeseriesData
- type Time
- type TimeseriesEntry
- func (entry TimeseriesEntry) BlobColumn(columnName string) TsBlobColumn
- func (entry TimeseriesEntry) Bulk(cols ...TsColumnInfo) (*TsBulk, error)
- func (entry TimeseriesEntry) Columns() ([]TsBlobColumn, []TsDoubleColumn, []TsInt64Column, []TsStringColumn, ...)
- func (entry TimeseriesEntry) ColumnsInfo() ([]TsColumnInfo, error)
- func (entry TimeseriesEntry) Create(shardSize time.Duration, cols ...TsColumnInfo) error
- func (entry TimeseriesEntry) DoubleColumn(columnName string) TsDoubleColumn
- func (entry TimeseriesEntry) InsertColumns(cols ...TsColumnInfo) error
- func (entry TimeseriesEntry) Int64Column(columnName string) TsInt64Column
- func (t TimeseriesEntry) Name() string
- func (entry TimeseriesEntry) StringColumn(columnName string) TsStringColumn
- func (entry TimeseriesEntry) SymbolColumn(columnName, symtableName string) TsStringColumn
- func (entry TimeseriesEntry) TimestampColumn(columnName string) TsTimestampColumn
- type Timespec
- type TsAggregationType
- type TsBatch
- func (t *TsBatch) ExtraColumns(cols ...TsBatchColumnInfo) error
- func (t *TsBatch) Push() error
- func (t *TsBatch) PushFast() error
- func (t *TsBatch) Release()
- func (t *TsBatch) RowSetBlob(index int64, content []byte) error
- func (t *TsBatch) RowSetBlobNoCopy(index int64, content []byte) error
- func (t *TsBatch) RowSetDouble(index int64, value float64) error
- func (t *TsBatch) RowSetInt64(index, value int64) error
- func (t *TsBatch) RowSetString(index int64, content string) error
- func (t *TsBatch) RowSetStringNoCopy(index int64, content string) error
- func (t *TsBatch) RowSetTimestamp(index int64, value time.Time) error
- func (t *TsBatch) StartRow(timestamp time.Time) error
- type TsBatchColumnInfo
- type TsBlobAggregation
- type TsBlobColumn
- func (column TsBlobColumn) Aggregate(aggs ...*TsBlobAggregation) ([]TsBlobAggregation, error)
- func (column TsBlobColumn) EraseRanges(rgs ...TsRange) (uint64, error)
- func (column TsBlobColumn) GetRanges(rgs ...TsRange) ([]TsBlobPoint, error)
- func (column TsBlobColumn) Insert(points ...TsBlobPoint) error
- type TsBlobPoint
- type TsBulk
- func (t *TsBulk) GetBlob() ([]byte, error)
- func (t *TsBulk) GetDouble() (float64, error)
- func (t *TsBulk) GetInt64() (int64, error)
- func (t *TsBulk) GetRanges(rgs ...TsRange) error
- func (t *TsBulk) GetString() (string, error)
- func (t *TsBulk) GetTimestamp() (time.Time, error)
- func (t *TsBulk) Ignore() *TsBulk
- func (t *TsBulk) NextRow() (time.Time, error)
- func (t *TsBulk) Release()
- func (t *TsBulk) Row(timestamp time.Time) *TsBulk
- func (t TsBulk) RowCount() int
- type TsColumnInfo
- type TsColumnType
- type TsDoubleAggregation
- type TsDoubleColumn
- func (column TsDoubleColumn) Aggregate(aggs ...*TsDoubleAggregation) ([]TsDoubleAggregation, error)
- func (column TsDoubleColumn) EraseRanges(rgs ...TsRange) (uint64, error)
- func (column TsDoubleColumn) GetRanges(rgs ...TsRange) ([]TsDoublePoint, error)
- func (column TsDoubleColumn) Insert(points ...TsDoublePoint) error
- type TsDoublePoint
- type TsInt64Aggregation
- type TsInt64Column
- func (column TsInt64Column) Aggregate(aggs ...*TsInt64Aggregation) ([]TsInt64Aggregation, error)
- func (column TsInt64Column) EraseRanges(rgs ...TsRange) (uint64, error)
- func (column TsInt64Column) GetRanges(rgs ...TsRange) ([]TsInt64Point, error)
- func (column TsInt64Column) Insert(points ...TsInt64Point) error
- type TsInt64Point
- type TsRange
- type TsStringAggregation
- type TsStringColumn
- func (column TsStringColumn) Aggregate(aggs ...*TsStringAggregation) ([]TsStringAggregation, error)
- func (column TsStringColumn) EraseRanges(rgs ...TsRange) (uint64, error)
- func (column TsStringColumn) GetRanges(rgs ...TsRange) ([]TsStringPoint, error)
- func (column TsStringColumn) Insert(points ...TsStringPoint) error
- type TsStringPoint
- type TsTimestampAggregation
- type TsTimestampColumn
- func (column TsTimestampColumn) Aggregate(aggs ...*TsTimestampAggregation) ([]TsTimestampAggregation, error)
- func (column TsTimestampColumn) EraseRanges(rgs ...TsRange) (uint64, error)
- func (column TsTimestampColumn) GetRanges(rgs ...TsRange) ([]TsTimestampPoint, error)
- func (column TsTimestampColumn) Insert(points ...TsTimestampPoint) error
- type TsTimestampPoint
- type TsValueType
- type Writer
- type WriterColumn
- type WriterDeduplicationMode
- type WriterOptions
- func (options WriterOptions) DisableAsyncClientPush() WriterOptions
- func (options WriterOptions) DisableWriteThrough() WriterOptions
- func (options WriterOptions) EnableAsyncClientPush() WriterOptions
- func (options WriterOptions) EnableDropDuplicates() WriterOptions
- func (options WriterOptions) EnableDropDuplicatesOn(columns []string) WriterOptions
- func (options WriterOptions) EnableWriteThrough() WriterOptions
- func (options WriterOptions) GetDeduplicationMode() WriterDeduplicationMode
- func (options WriterOptions) GetDropDuplicateColumns() []string
- func (options WriterOptions) GetPushMode() WriterPushMode
- func (options WriterOptions) IsAsyncClientPushEnabled() bool
- func (options WriterOptions) IsDropDuplicatesEnabled() bool
- func (options WriterOptions) IsValid() bool
- func (options WriterOptions) IsWriteThroughEnabled() bool
- func (options WriterOptions) WithAsyncPush() WriterOptions
- func (options WriterOptions) WithDeduplicationMode(mode WriterDeduplicationMode) WriterOptions
- func (options WriterOptions) WithFastPush() WriterOptions
- func (options WriterOptions) WithPushMode(mode WriterPushMode) WriterOptions
- func (options WriterOptions) WithTransactionalPush() WriterOptions
- type WriterPushFlag
- type WriterPushMode
- type WriterTable
- func (t *WriterTable) GetData(offset int) (ColumnData, error)
- func (t *WriterTable) GetIndex() []time.Time
- func (t *WriterTable) GetIndexAsNative() []C.qdb_timespec_t
- func (t *WriterTable) GetName() string
- func (t *WriterTable) RowCount() int
- func (t *WriterTable) SetData(offset int, xs ColumnData) error
- func (t *WriterTable) SetDatas(xs []ColumnData) error
- func (t *WriterTable) SetIndex(idx []time.Time)
- func (t *WriterTable) SetIndexFromNative(idx []C.qdb_timespec_t)
Constants ¶
const QdbLogTimeKey = "qdb_time"
QdbLogTimeKey overrides the Record.Time when present as an attribute. The contract: if callers pass `QdbLogTimeKey, time.Time` (or the Attr equivalent), the slog adapter will use that value as the record’s timestamp and omit the attribute itself.
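For example, assuming the configured Logger forwards attributes to the slog adapter (logger, batchTime and rowCount are placeholders):

// batchTime becomes the record's timestamp; the attribute itself is omitted.
logger.Info("batch flushed", qdb.QdbLogTimeKey, batchTime, "rows", rowCount)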
Variables ¶
var TsColumnTypes = []TsColumnType{ TsColumnBlob, TsColumnDouble, TsColumnInt64, TsColumnString, TsColumnTimestamp, TsColumnSymbol, }
var TsValueTypes = []TsValueType{ TsValueBlob, TsValueDouble, TsValueInt64, TsValueString, TsValueTimestamp, }
Functions ¶
func ClusterKeyFromFile ¶
ClusterKeyFromFile retrieves cluster public key from a file.
Args:
clusterPublicKeyFile: Path to file containing cluster public key in PEM format
Returns:
string: Cluster public key content
error: File read error if any
Example:
key, err := qdb.ClusterKeyFromFile("/path/to/cluster.key")
if err != nil {
return err
}
func CountUndefined ¶
func CountUndefined() uint64
CountUndefined : returns a uint64 value corresponding to the quasardb undefined count value
func GetColumnDataBlob ¶ added in v3.14.2
func GetColumnDataBlob(cd ColumnData) ([][]byte, error)
GetColumnDataBlob extracts [][]byte from ColumnData.
func GetColumnDataBlobUnsafe ¶ added in v3.14.2
func GetColumnDataBlobUnsafe(cd ColumnData) [][]byte
GetColumnDataBlobUnsafe extracts [][]byte without type check.
func GetColumnDataDouble ¶ added in v3.14.2
func GetColumnDataDouble(cd ColumnData) ([]float64, error)
GetColumnDataDouble extracts []float64 from ColumnData.
func GetColumnDataDoubleUnsafe ¶ added in v3.14.2
func GetColumnDataDoubleUnsafe(cd ColumnData) []float64
GetColumnDataDoubleUnsafe extracts []float64 without type check.
func GetColumnDataInt64 ¶ added in v3.14.2
func GetColumnDataInt64(cd ColumnData) ([]int64, error)
GetColumnDataInt64 extracts []int64 from ColumnData.
func GetColumnDataInt64Unsafe ¶ added in v3.14.2
func GetColumnDataInt64Unsafe(cd ColumnData) []int64
GetColumnDataInt64Unsafe extracts []int64 without type check.
func GetColumnDataString ¶ added in v3.14.2
func GetColumnDataString(cd ColumnData) ([]string, error)
GetColumnDataString extracts []string from ColumnData.
func GetColumnDataStringUnsafe ¶ added in v3.14.2
func GetColumnDataStringUnsafe(cd ColumnData) []string
GetColumnDataStringUnsafe extracts []string without type check.
func GetColumnDataTimestamp ¶ added in v3.14.2
func GetColumnDataTimestamp(cd ColumnData) ([]time.Time, error)
GetColumnDataTimestamp extracts []time.Time from ColumnData.
func GetColumnDataTimestampNative ¶ added in v3.14.2
func GetColumnDataTimestampNative(cd ColumnData) ([]C.qdb_timespec_t, error)
GetColumnDataTimestampNative extracts []C.qdb_timespec_t from ColumnData.
func GetColumnDataTimestampNativeUnsafe ¶ added in v3.14.2
func GetColumnDataTimestampNativeUnsafe(cd ColumnData) []C.qdb_timespec_t
GetColumnDataTimestampNativeUnsafe extracts []C.qdb_timespec_t without type check.
func GetColumnDataTimestampUnsafe ¶ added in v3.14.2
func GetColumnDataTimestampUnsafe(cd ColumnData) []time.Time
GetColumnDataTimestampUnsafe extracts []time.Time without type check.
func Int64Undefined ¶
func Int64Undefined() int64
Int64Undefined : returns an int64 value corresponding to the quasardb undefined int64 value
func IsRetryable ¶ added in v3.14.2
IsRetryable checks if error is transient/retryable. Args:
err: any error (wrapped or direct)
Returns:
true: network/resource errors → retry
false: logic/permission errors → fail fast
Example:
if IsRetryable(err) { time.Sleep(backoff); retry() }
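A fuller retry loop might look like the following sketch, where doWrite is a placeholder for any operation that returns an error:

backoff := 100 * time.Millisecond
for attempt := 0; attempt < 5; attempt++ {
	err := doWrite() // placeholder operation
	if err == nil {
		break
	}
	if !qdb.IsRetryable(err) {
		return err // logic/permission errors: fail fast
	}
	time.Sleep(backoff)
	backoff *= 2
}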
func MaxTimespec ¶
MaxTimespec : returns a time value corresponding to the quasardb maximum timespec value
func MinTimespec ¶
MinTimespec : returns a time value corresponding to the quasardb minimum timespec value
func NeverExpires ¶
NeverExpires : returns a time value corresponding to the quasardb never-expires value
func PreserveExpiration ¶
PreserveExpiration : returns a time value corresponding to the quasardb preserve-expiration value
func QdbTimespecSliceToTime ¶ added in v3.14.2
func QdbTimespecSliceToTime(xs []C.qdb_timespec_t) []time.Time
QdbTimespecSliceToTime converts a slice of qdb_timespec_t to time.Time values.
func QdbTimespecToTime ¶ added in v3.14.2
func QdbTimespecToTime(t C.qdb_timespec_t) time.Time
QdbTimespecToTime converts qdb_timespec_t to a UTC time.Time.
func SetLogFile ¶
func SetLogFile(path string)
func SetLogger ¶ added in v3.14.2
func SetLogger(l Logger)
SetLogger replaces the package-level logger.
Performance trade-offs:
- Atomic swap is O(1) and contention-free for callers.
- Panics on nil to fail fast in mis-configuration scenarios.
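For instance, to silence all client-side logging (assuming the zero value of NilLogger is usable as a Logger):

// Replace the package-level logger; passing nil would panic.
qdb.SetLogger(&qdb.NilLogger{})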
func TimeSliceToQdbTimespec ¶ added in v3.14.2
func TimeSliceToQdbTimespec(xs []time.Time) []C.qdb_timespec_t
TimeSliceToQdbTimespec converts a slice of time.Time values to qdb_timespec_t slice.
func TimeToQdbTimespec ¶ added in v3.14.2
func TimeToQdbTimespec(t time.Time, out *C.qdb_timespec_t)
TimeToQdbTimespec writes t into out using the qdb_timespec_t format.
func TimespecToStructG ¶ added in v3.14.2
func TimespecToStructG(tp C.qdb_timespec_t) time.Time
TimespecToStructG converts qdb_timespec_t to time.Time in local timezone.
func TsColumnInfoExToStructG ¶ added in v3.14.2
func TsColumnInfoExToStructG(t C.qdb_ts_column_info_ex_t, entry TimeseriesEntry) tsColumn
func UserCredentialFromFile ¶
UserCredentialFromFile retrieves user credentials from a JSON file.
Args:
userCredentialFile: Path to JSON file containing username and secret_key
Returns:
string: Username from the file
string: Secret key from the file
error: File read or JSON parsing error if any
Example:
user, secret, err := qdb.UserCredentialFromFile("/path/to/user.json")
if err != nil {
return err
}
func WithGC ¶ added in v3.14.2
func WithGC(t testHelper, testName string, testFunc func())
WithGC provides memory isolation for tests by invoking garbage collection before and after test execution. This ensures proper memory cleanup between tests by calling Go's garbage collector.
Decision rationale:
- Ensures memory isolation between tests in high-memory scenarios
- Uses Go's garbage collector for memory management
- Logs timing metrics to track GC overhead
Key assumptions:
- test function follows standard testing patterns
Performance trade-offs:
- Adds GC overhead but ensures test reliability
- Acceptable cost for memory isolation in test environments
Usage example:
func TestMyMemoryIntensiveFunction(t *testing.T) {
WithGC(t, "TestMyMemoryIntensiveFunction", func() {
// Your test code here
})
}
func WithGCAndHandle ¶ added in v3.14.2
func WithGCAndHandle(t testHelper, handle HandleType, testName string, testFunc func())
WithGCAndHandle provides memory isolation for tests. This version maintains the same interface as the original but now only uses Go's garbage collection.
Decision rationale:
- Maintains API compatibility for existing test code
- Uses only Go GC for memory management
- Used for tests that have access to database handles
Usage example:
func TestMyDatabaseFunction(t *testing.T) {
handle := newTestHandle(t)
WithGCAndHandle(t, handle, "TestMyDatabaseFunction", func() {
// Your test code here
})
}
Types ¶
type BlobEntry ¶
type BlobEntry struct {
Entry
}
BlobEntry : blob data type
func (*BlobEntry) CompareAndSwap ¶
func (entry *BlobEntry) CompareAndSwap(newValue, newComparand []byte, expiry time.Time) ([]byte, error)
CompareAndSwap : Atomically compares the entry with comparand and updates it to new_value if, and only if, they match.
The function returns the original value of the entry in case of a mismatch. When it matches, no content is returned. The entry must already exist. Update will occur if and only if the content of the entry matches bit for bit the content of the comparand buffer.
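A sketch of a conditional update, where handle is a connected HandleType and the alias and payloads are illustrative (on a mismatch the returned slice is expected to hold the entry's current content):

entry := handle.Blob("config/feature-flag")
prev, err := entry.CompareAndSwap([]byte("on"), []byte("off"), qdb.NeverExpires())
if err != nil {
	// Swap did not happen; prev may contain the entry's current content.
	_ = prev
	return err
}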
func (BlobEntry) Get ¶
Get : Retrieve an entry's content
If the entry does not exist, the function will fail and return 'alias not found' error.
func (BlobEntry) GetAndRemove ¶
GetAndRemove : Atomically gets an entry from the quasardb server and removes it.
If the entry does not exist, the function will fail and return 'alias not found' error.
func (*BlobEntry) GetAndUpdate ¶
GetAndUpdate : Atomically gets and updates (in this order) the entry on the quasardb server.
The entry must already exist.
func (BlobEntry) GetNoAlloc ¶
GetNoAlloc : Retrieve an entry's content to already allocated buffer
If the entry does not exist, the function will fail and return 'alias not found' error. If the buffer is not large enough to hold the data, the function will fail and return a 'buffer is too small' error; the entry size will nevertheless be returned so that the caller may resize its buffer and try again.
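A resize-and-retry sketch (the initial buffer size and alias are illustrative):

entry := handle.Blob("payload")
buf := make([]byte, 64)
n, err := entry.GetNoAlloc(buf)
if err != nil && n > len(buf) {
	// Buffer was too small: n reports the entry size, so grow and retry.
	buf = make([]byte, n)
	n, err = entry.GetNoAlloc(buf)
}
if err != nil {
	return err
}
data := buf[:n] // entry content
_ = data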
func (BlobEntry) Put ¶
Put : Creates a new entry and sets its content to the provided blob.
If the entry already exists, the function will fail and return an 'alias already exists' error. You can specify an expiry or use NeverExpires if you don’t want the entry to expire.
func (BlobEntry) RemoveIf ¶
RemoveIf : Atomically removes the entry on the server if the content matches.
The entry must already exist. Removal will occur if and only if the content of the entry matches bit for bit the content of the comparand buffer.
type Cluster ¶
type Cluster struct {
HandleType
}
Cluster : An object permitting calls to a cluster
func (Cluster) PurgeAll ¶
PurgeAll : Removes irremediably all data from all the nodes of the cluster.
This function is useful when quasardb is used as a cache and is not the golden source. This call is not atomic: if the command cannot be dispatched on the whole cluster, it will be dispatched on as many nodes as possible and the function will return with a qdb_e_ok code. By default cluster does not allow this operation and the function returns a qdb_e_operation_disabled error.
func (Cluster) PurgeCache ¶
PurgeCache : Removes all cached data from all the nodes of the cluster.
This function is disabled on a transient cluster. Prefer purge_all in this case. This call is not atomic: if the command cannot be dispatched on the whole cluster, it will be dispatched on as many nodes as possible and the function will return with a qdb_e_ok code.
func (Cluster) TrimAll ¶
TrimAll : Trims all data on all the nodes of the cluster.
Quasardb uses Multi-Version Concurrency Control (MVCC) as a foundation of its transaction engine. It will automatically clean up old versions as entries are accessed. This call is not atomic: if the command cannot be dispatched on the whole cluster, it will be dispatched on as many nodes as possible and the function will return with a qdb_e_ok code. Entries that are not accessed may not be cleaned up, resulting in increasing disk usage. This function will request each node to trim all entries, release unused memory and compact files on disk. Because this operation is I/O and CPU intensive it is not recommended to run it when the cluster is heavily used.
type ColumnData ¶ added in v3.14.2
type ColumnData interface {
// Returns the type of data for this column
ValueType() TsValueType
// Ensures the underlying data pre-allocates a certain capacity, useful when we know
// we will have multiple incremental allocations, e.g. when we use it in combination
// with appendData.
EnsureCapacity(n int)
// Ensures underlying data is reset to 0
Clear()
// Returns the number of items in the array
Length() int
// PinToC prepares column data for zero-copy passing to C functions.
//
// CRITICAL: This is part of the centralized pinning strategy that prevents segfaults.
// The function returns a PinnableBuilder that MUST be pinned at the top level
// (in Writer.Push) before any C calls. This two-phase approach ensures:
// 1. All pointers are collected before any pinning occurs
// 2. The runtime.Pinner can pin all objects in one batch
// 3. No Go pointers are stored in C-accessible memory before pinning
//
// Parameters:
// h – handle for QDB memory allocation when copies are required
//
// Returns:
// builder – PinnableBuilder containing the object to pin and builder function
// release – cleanup function for any C allocations (called after C returns)
//
// Implementation notes:
// - Int64/Double/Timestamp: Direct pin of Go slice memory (zero-copy)
// - Blob/String: MUST copy to C memory to avoid storing Go pointers before pinning
// - The release function is always safe to call, even on error
PinToC(h HandleType) (builder PinnableBuilder, release func())
// contains filtered or unexported methods
}
ColumnData represents column data for reading/writing.
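A small sketch of constructing and reading back typed column data; attaching it to a WriterTable via SetData would be the typical next step (not shown here):

col := qdb.NewColumnDataInt64([]int64{1, 2, 3})
fmt.Println(col.ValueType(), col.Length()) // value type and number of values

// Checked accessor: fails if the underlying data is not int64.
xs, err := qdb.GetColumnDataInt64(&col)
if err != nil {
	return err
}
fmt.Println(xs) // [1 2 3]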
type ColumnDataBlob ¶ added in v3.14.2
type ColumnDataBlob struct {
// contains filtered or unexported fields
}
ColumnDataBlob stores binary column data.
func NewColumnDataBlob ¶ added in v3.14.2
func NewColumnDataBlob(xs [][]byte) ColumnDataBlob
NewColumnDataBlob creates binary column data.
func (*ColumnDataBlob) Clear ¶ added in v3.14.2
func (cd *ColumnDataBlob) Clear()
Clear resets to empty.
func (*ColumnDataBlob) EnsureCapacity ¶ added in v3.14.2
func (cd *ColumnDataBlob) EnsureCapacity(n int)
EnsureCapacity pre-allocates space for n values.
func (*ColumnDataBlob) Length ¶ added in v3.14.2
func (cd *ColumnDataBlob) Length() int
Length returns the number of values.
func (*ColumnDataBlob) PinToC ¶ added in v3.14.2
func (cd *ColumnDataBlob) PinToC(h HandleType) (builder PinnableBuilder, release func())
PinToC builds a C envelope for blob data and returns PinnableBuilder.
ZERO-COPY STRATEGY: For blobs, we pin each individual blob's data pointer rather than copying. This requires more complex pinning but provides optimal performance for large binary data transfers.
Implementation notes:
- Each []byte is pinned individually using unsafe.SliceData
- The envelope array is allocated in C memory
- Only the envelope is released, not the blob data (owned by Go)
Safety considerations:
- All blob data pointers are pinned before C access
- The centralized pinning strategy ensures correctness
- Blobs are immutable during C operations
func (*ColumnDataBlob) ValueType ¶ added in v3.14.2
func (cd *ColumnDataBlob) ValueType() TsValueType
ValueType returns TsValueBlob.
type ColumnDataDouble ¶ added in v3.14.2
type ColumnDataDouble struct {
// contains filtered or unexported fields
}
ColumnDataDouble stores float64 column data.
func NewColumnDataDouble ¶ added in v3.14.2
func NewColumnDataDouble(xs []float64) ColumnDataDouble
NewColumnDataDouble creates float64 column data.
func (*ColumnDataDouble) Clear ¶ added in v3.14.2
func (cd *ColumnDataDouble) Clear()
Clear resets to empty.
func (*ColumnDataDouble) EnsureCapacity ¶ added in v3.14.2
func (cd *ColumnDataDouble) EnsureCapacity(n int)
EnsureCapacity pre-allocates space for n values.
func (*ColumnDataDouble) Length ¶ added in v3.14.2
func (cd *ColumnDataDouble) Length() int
Length returns the number of values.
func (*ColumnDataDouble) PinToC ¶ added in v3.14.2
func (cd *ColumnDataDouble) PinToC(h HandleType) (builder PinnableBuilder, release func())
PinToC prepares float64 data for zero-copy passing to C.
ZERO-COPY STRATEGY: Numeric types (int64, float64, timestamp) can safely use zero-copy because they are simple value types without internal pointers. We return a PinnableBuilder that will provide the pointer AFTER pinning.
Why zero-copy is safe here:
- float64 is a value type with no internal pointers
- Pinning prevents GC from moving the slice during C operations
- No risk of violating string immutability or pointer rules
Performance: Zero allocations, zero copies - maximum efficiency.
func (*ColumnDataDouble) ValueType ¶ added in v3.14.2
func (cd *ColumnDataDouble) ValueType() TsValueType
ValueType returns TsValueDouble.
type ColumnDataInt64 ¶ added in v3.14.2
type ColumnDataInt64 struct {
// contains filtered or unexported fields
}
ColumnDataInt64 stores int64 column data.
func NewColumnDataInt64 ¶ added in v3.14.2
func NewColumnDataInt64(xs []int64) ColumnDataInt64
NewColumnDataInt64 creates int64 column data.
func (*ColumnDataInt64) Clear ¶ added in v3.14.2
func (cd *ColumnDataInt64) Clear()
Clear resets to empty.
func (*ColumnDataInt64) EnsureCapacity ¶ added in v3.14.2
func (cd *ColumnDataInt64) EnsureCapacity(n int)
EnsureCapacity pre-allocates space for n values.
func (*ColumnDataInt64) Length ¶ added in v3.14.2
func (cd *ColumnDataInt64) Length() int
Length returns the number of values.
func (*ColumnDataInt64) PinToC ¶ added in v3.14.2
func (cd *ColumnDataInt64) PinToC(h HandleType) (builder PinnableBuilder, release func())
PinToC prepares int64 data for zero-copy passing to C.
ZERO-COPY STRATEGY: Safe for numeric types like int64 because:
- Simple value type with no internal pointers
- Direct memory layout compatible with C
- Pinning prevents GC movement during C operations
Performance: Zero allocations, zero copies - maximum efficiency.
func (*ColumnDataInt64) ValueType ¶ added in v3.14.2
func (cd *ColumnDataInt64) ValueType() TsValueType
ValueType returns TsValueInt64.
type ColumnDataString ¶ added in v3.14.2
type ColumnDataString struct {
// contains filtered or unexported fields
}
ColumnDataString stores text column data.
func NewColumnDataString ¶ added in v3.14.2
func NewColumnDataString(xs []string) ColumnDataString
NewColumnDataString creates string column data.
func (*ColumnDataString) Clear ¶ added in v3.14.2
func (cd *ColumnDataString) Clear()
Clear resets to empty.
func (*ColumnDataString) EnsureCapacity ¶ added in v3.14.2
func (cd *ColumnDataString) EnsureCapacity(n int)
EnsureCapacity pre-allocates space for n values.
func (*ColumnDataString) Length ¶ added in v3.14.2
func (cd *ColumnDataString) Length() int
Length returns the number of values.
func (*ColumnDataString) PinToC ¶ added in v3.14.2
func (cd *ColumnDataString) PinToC(h HandleType) (builder PinnableBuilder, release func())
PinToC builds a C envelope for string data and returns PinnableBuilder.
COPYING STRATEGY: For strings, we copy each string to C memory using qdbCopyString rather than pinning. This provides safety by avoiding unsafe pointer operations while maintaining correct memory management through proper cleanup.
Implementation notes:
- Each string is copied to C memory using qdbCopyString
- The envelope array is allocated in C memory
- Both the envelope and copied strings are released
- Go strings are immutable, so copying is safe
Safety considerations:
- All string data is copied to C memory before C access
- No pinning required since we use copying instead
- String immutability is preserved (C cannot modify original strings)
func (*ColumnDataString) ValueType ¶ added in v3.14.2
func (cd *ColumnDataString) ValueType() TsValueType
ValueType returns TsValueString.
type ColumnDataTimestamp ¶ added in v3.14.2
type ColumnDataTimestamp struct {
// contains filtered or unexported fields
}
ColumnDataTimestamp stores timestamp column data.
func NewColumnDataTimestamp ¶ added in v3.14.2
func NewColumnDataTimestamp(ts []time.Time) ColumnDataTimestamp
NewColumnDataTimestamp creates timestamp column data.
func (*ColumnDataTimestamp) Clear ¶ added in v3.14.2
func (cd *ColumnDataTimestamp) Clear()
Clear resets to empty.
func (*ColumnDataTimestamp) EnsureCapacity ¶ added in v3.14.2
func (cd *ColumnDataTimestamp) EnsureCapacity(n int)
EnsureCapacity pre-allocates space for n values.
func (*ColumnDataTimestamp) Length ¶ added in v3.14.2
func (cd *ColumnDataTimestamp) Length() int
Length returns the number of values.
func (*ColumnDataTimestamp) PinToC ¶ added in v3.14.2
func (cd *ColumnDataTimestamp) PinToC(h HandleType) (builder PinnableBuilder, release func())
PinToC prepares timestamp data for zero-copy passing to C.
ZERO-COPY STRATEGY: Safe because we store data in C.qdb_timespec_t format:
- Data is already in C-compatible memory layout
- Direct pinning like int64/double (no conversion needed)
- Pinning prevents GC movement during C operations
Performance: Zero allocations, zero copies - maximum efficiency.
func (*ColumnDataTimestamp) ValueType ¶ added in v3.14.2
func (cd *ColumnDataTimestamp) ValueType() TsValueType
ValueType returns TsValueTimestamp.
type Compression ¶
type Compression C.qdb_compression_t
Compression is a compression parameter.
const (
	CompNone     Compression = C.qdb_comp_none
	CompBalanced Compression = C.qdb_comp_balanced
)
Compression values:
CompNone : No compression.
CompBalanced : Balanced compression for speed and efficiency, recommended value.
type DirectBlobEntry ¶
type DirectBlobEntry struct {
DirectEntry
}
DirectBlobEntry is an Entry for a blob data type
func (DirectBlobEntry) Get ¶
func (e DirectBlobEntry) Get() ([]byte, error)
Get returns an entry's contents
func (DirectBlobEntry) Put ¶
func (e DirectBlobEntry) Put(content []byte, expiry time.Time) error
Put creates a new entry and sets its content to the provided blob. This will return an error if the entry alias already exists. You can specify an expiry or use NeverExpires if you don’t want the entry to expire.
func (*DirectBlobEntry) Update ¶
func (e *DirectBlobEntry) Update(newContent []byte, expiry time.Time) error
Update creates or updates an entry and sets its content to the provided blob. If the entry already exists, the function will modify the entry. You can specify an expiry or use NeverExpires if you don’t want the entry to expire.
type DirectEntry ¶
type DirectEntry struct {
DirectHandleType
// contains filtered or unexported fields
}
DirectEntry is a base type for composition, similar to a regular entry.
func (DirectEntry) Remove ¶
func (e DirectEntry) Remove() error
Remove an entry from the local node's storage, regardless of its type.
This function bypasses the clustering mechanism and accesses the node local storage. Entries in the local node storage are not accessible via the regular API and vice versa.
The call is ACID, regardless of the type of the entry and a transaction will be created if need be.
type DirectHandleType ¶
type DirectHandleType struct {
// contains filtered or unexported fields
}
DirectHandleType is an opaque handle needed for maintaining a direct connection to a node.
func (DirectHandleType) Blob ¶
func (h DirectHandleType) Blob(alias string) DirectBlobEntry
Blob creates a direct blob entry object
func (DirectHandleType) Close ¶
func (h DirectHandleType) Close() error
Close releases a direct connection previously opened with DirectConnect
func (DirectHandleType) Integer ¶
func (h DirectHandleType) Integer(alias string) DirectIntegerEntry
Integer creates a direct integer entry object
func (DirectHandleType) PrefixGet ¶
func (h DirectHandleType) PrefixGet(prefix string, limit int) ([]string, error)
PrefixGet : Retrieves the list of all entries matching the provided prefix.
A prefix-based search will enable you to find all entries matching a provided prefix. This function returns the list of aliases. It’s up to the user to query the content associated with every entry, if needed.
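A sketch of direct node access; the node URI format shown is an assumption, and the prefix is illustrative:

dh, err := handle.DirectConnect("127.0.0.1:2836")
if err != nil {
	return err
}
defer dh.Close()

aliases, err := dh.PrefixGet("sensor.", 100)
if err != nil {
	return err
}
fmt.Println(len(aliases), "entries on this node match the prefix")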
func (DirectHandleType) Release ¶
func (h DirectHandleType) Release(buffer unsafe.Pointer)
Release frees API allocated buffers
type DirectIntegerEntry ¶
type DirectIntegerEntry struct {
DirectEntry
}
DirectIntegerEntry is an Entry for an int data type
func (DirectIntegerEntry) Add ¶
func (e DirectIntegerEntry) Add(added int64) (int64, error)
Add : Atomically increases or decreases a signed 64-bit integer.
The specified entry will be atomically increased (or decreased) according to the given addend value: to increase the value, specify a positive addend; to decrease the value, specify a negative addend. The function returns the result of the operation. The entry must already exist.
func (DirectIntegerEntry) Get ¶
func (e DirectIntegerEntry) Get() (int64, error)
Get returns the value of a signed 64-bit integer
func (DirectIntegerEntry) Put ¶
func (e DirectIntegerEntry) Put(content int64, expiry time.Time) error
Put creates a new signed 64-bit integer.
Atomically creates an entry of the given alias and sets it to a cross-platform signed 64-bit integer. If the entry already exists, the function returns an error. You can specify an expiry time or use NeverExpires if you don’t want the entry to expire. If you want to create or update an entry use Update. The value will be correctly translated independently of the endianness of the client’s platform.
func (DirectIntegerEntry) Update ¶
func (e DirectIntegerEntry) Update(newContent int64, expiry time.Time) error
Update creates or updates a signed 64-bit integer.
Atomically updates an entry of the given alias to the provided value. If the entry doesn’t exist, it will be created. You can specify an expiry time or use NeverExpires if you don’t want the entry to expire.
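A sketch of a node-local counter built on these calls, where dh is a DirectHandleType obtained from DirectConnect and the alias is illustrative:

counter := dh.Integer("metrics/requests")

// Create the entry (or reset it) with no expiry.
if err := counter.Update(0, qdb.NeverExpires()); err != nil {
	return err
}

// Atomically increment and read back the new value.
total, err := counter.Add(1)
if err != nil {
	return err
}
fmt.Println("requests so far:", total)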
type Encryption ¶
type Encryption C.qdb_encryption_t
Encryption is an encryption option.
const (
	EncryptNone Encryption = C.qdb_crypt_none
	EncryptAES  Encryption = C.qdb_crypt_aes_gcm_256
)
Encryption values:
EncryptNone : No encryption.
EncryptAES : Uses AES-GCM 256 encryption.
type Endpoint ¶
Endpoint : A structure representing a qdb URL endpoint
func RemoteNodeToStructG ¶ added in v3.14.2
func RemoteNodeToStructG(t C.qdb_remote_node_t) Endpoint
type Entry ¶
type Entry struct {
HandleType
// contains filtered or unexported fields
}
Entry : base type for composition; cannot be constructed directly
func (Entry) AttachTag ¶
AttachTag : Adds a tag entry.
Tagging an entry enables you to search for entries based on their tags. Tags scale across nodes. The entry must exist. The tag may or may not exist.
func (Entry) AttachTags ¶
AttachTags : Adds a collection of tags to a single entry.
Tagging an entry enables you to search for entries based on their tags. Tags scale across nodes. The function will ignore existing tags. The entry must exist. The tag may or may not exist.
func (Entry) DetachTag ¶
DetachTag : Removes a tag from an entry.
Tagging an entry enables you to search for entries based on their tags. Tags scale across nodes. The entry must exist. The tag must exist.
func (Entry) DetachTags ¶
DetachTags : Removes a collection of tags from a single entry.
Tagging an entry enables you to search for entries based on their tags. Tags scale across nodes. The entry must exist. The tags must exist.
func (Entry) ExpiresAt ¶
ExpiresAt : Sets the absolute expiration time of an entry.
Blobs and integers can have an expiration time and will be automatically removed by the cluster when they expire. The absolute expiration time is the Unix epoch, that is, the number of milliseconds since 1 January 1970, 00:00:00 UTC. To use a relative expiration time (that is, expiration relative to the time of the call), use ExpiresFromNow. To remove the expiration time of an entry, specify the value NeverExpires as the ExpiryTime parameter. Values in the past are refused, but the cluster will have a certain tolerance to account for clock skews.
func (Entry) ExpiresFromNow ¶
ExpiresFromNow : Sets the expiration time of an entry, relative to the current time of the client.
Blobs and integers can have an expiration time and will automatically be removed by the cluster when they expire. The expiration is relative to the current time of the machine. To remove the expiration time of an entry or to use an absolute expiration time use ExpiresAt.
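A sketch combining both expiry styles on a blob entry (alias and durations are illustrative):

session := handle.Blob("session/abc123")
if err := session.Put([]byte("payload"), qdb.NeverExpires()); err != nil {
	return err
}

// Absolute expiry: remove the entry at a fixed point in time.
if err := session.ExpiresAt(time.Now().Add(24 * time.Hour)); err != nil {
	return err
}

// Relative expiry: remove the entry 30 minutes from now.
if err := session.ExpiresFromNow(30 * time.Minute); err != nil {
	return err
}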
func (Entry) GetLocation ¶
func (e Entry) GetLocation() (NodeLocation, error)
GetLocation : Returns the primary node of an entry.
The exact location of an entry should be assumed random and users should not bother about its location as the API will transparently locate the best node for the requested operation. This function is intended for higher level APIs that need to optimize transfers and potentially push computation close to the data.
func (Entry) GetMetadata ¶
GetMetadata : Gets the meta-information about an entry, if it exists.
func (Entry) GetTagged ¶
GetTagged : Retrieves all entries that have the specified tag.
Tagging an entry enables you to search for entries based on their tags. Tags scale across nodes. The tag must exist. The complexity of this function is constant.
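A sketch of the tagging flow (aliases and tag names are illustrative):

order := handle.Blob("orders/1234")
if err := order.AttachTags([]string{"orders", "priority"}); err != nil {
	return err
}

// Later, retrieve every entry carrying the tag.
aliases, err := handle.GetTagged("priority")
if err != nil {
	return err
}
fmt.Println(aliases)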
func (Entry) GetTags ¶
GetTags : Retrieves all the tags of an entry.
Tagging an entry enables you to search for entries based on their tags. Tags scale across nodes. The entry must exist.
func (Entry) HasTag ¶
HasTag : Tests if an entry has the requested tag.
Tagging an entry enables you to search for entries based on their tags. Tags scale across nodes. The entry must exist.
func (Entry) Remove ¶
Remove : Removes an entry from the cluster, regardless of its type.
This call will remove the entry, whether it is a blob, integer, deque, or stream. It will properly untag the entry. If the entry spans multiple entries or nodes (deques and streams), all blocks will be properly removed. The call is ACID, regardless of the type of the entry, and a transaction will be created if need be.
type EntryType ¶
type EntryType C.qdb_entry_type_t
EntryType : An enumeration representing possible entries type.
const (
	EntryUninitialized EntryType = C.qdb_entry_uninitialized
	EntryBlob          EntryType = C.qdb_entry_blob
	EntryInteger       EntryType = C.qdb_entry_integer
	EntryHSet          EntryType = C.qdb_entry_hset
	EntryTag           EntryType = C.qdb_entry_tag
	EntryDeque         EntryType = C.qdb_entry_deque
	EntryStream        EntryType = C.qdb_entry_stream
	EntryTS            EntryType = C.qdb_entry_ts
)
EntryType Values
EntryUninitialized : Uninitialized value.
EntryBlob : A binary large object (blob).
EntryInteger : A signed 64-bit integer.
EntryHSet : A distributed hash set.
EntryTag : A tag.
EntryDeque : A distributed double-entry queue (deque).
EntryTS : A distributed time series.
EntryStream : A distributed binary stream.
type ErrorType ¶
type ErrorType C.qdb_error_t
ErrorType: QuasarDB error codes, wraps C.qdb_error_t
const (
	Success                      ErrorType = C.qdb_e_ok
	Created                      ErrorType = C.qdb_e_ok_created
	ErrUninitialized             ErrorType = C.qdb_e_uninitialized
	ErrAliasNotFound             ErrorType = C.qdb_e_alias_not_found
	ErrAliasAlreadyExists        ErrorType = C.qdb_e_alias_already_exists
	ErrOutOfBounds               ErrorType = C.qdb_e_out_of_bounds
	ErrSkipped                   ErrorType = C.qdb_e_skipped
	ErrIncompatibleType          ErrorType = C.qdb_e_incompatible_type
	ErrContainerEmpty            ErrorType = C.qdb_e_container_empty
	ErrContainerFull             ErrorType = C.qdb_e_container_full
	ErrElementNotFound           ErrorType = C.qdb_e_element_not_found
	ErrElementAlreadyExists      ErrorType = C.qdb_e_element_already_exists
	ErrOverflow                  ErrorType = C.qdb_e_overflow
	ErrUnderflow                 ErrorType = C.qdb_e_underflow
	ErrTagAlreadySet             ErrorType = C.qdb_e_tag_already_set
	ErrTagNotSet                 ErrorType = C.qdb_e_tag_not_set
	ErrTimeout                   ErrorType = C.qdb_e_timeout
	ErrConnectionRefused         ErrorType = C.qdb_e_connection_refused
	ErrConnectionReset           ErrorType = C.qdb_e_connection_reset
	ErrUnstableCluster           ErrorType = C.qdb_e_unstable_cluster
	ErrTryAgain                  ErrorType = C.qdb_e_try_again
	ErrConflict                  ErrorType = C.qdb_e_conflict
	ErrNotConnected              ErrorType = C.qdb_e_not_connected
	ErrResourceLocked            ErrorType = C.qdb_e_resource_locked
	ErrSystemRemote              ErrorType = C.qdb_e_system_remote
	ErrSystemLocal               ErrorType = C.qdb_e_system_local
	ErrInternalRemote            ErrorType = C.qdb_e_internal_remote
	ErrInternalLocal             ErrorType = C.qdb_e_internal_local
	ErrNoMemoryRemote            ErrorType = C.qdb_e_no_memory_remote
	ErrNoMemoryLocal             ErrorType = C.qdb_e_no_memory_local
	ErrInvalidProtocol           ErrorType = C.qdb_e_invalid_protocol
	ErrHostNotFound              ErrorType = C.qdb_e_host_not_found
	ErrBufferTooSmall            ErrorType = C.qdb_e_buffer_too_small
	ErrNotImplemented            ErrorType = C.qdb_e_not_implemented
	ErrInvalidVersion            ErrorType = C.qdb_e_invalid_version
	ErrInvalidArgument           ErrorType = C.qdb_e_invalid_argument
	ErrInvalidHandle             ErrorType = C.qdb_e_invalid_handle
	ErrReservedAlias             ErrorType = C.qdb_e_reserved_alias
	ErrUnmatchedContent          ErrorType = C.qdb_e_unmatched_content
	ErrInvalidIterator           ErrorType = C.qdb_e_invalid_iterator
	ErrEntryTooLarge             ErrorType = C.qdb_e_entry_too_large
	ErrTransactionPartialFailure ErrorType = C.qdb_e_transaction_partial_failure
	ErrOperationDisabled         ErrorType = C.qdb_e_operation_disabled
	ErrOperationNotPermitted     ErrorType = C.qdb_e_operation_not_permitted
	ErrIteratorEnd               ErrorType = C.qdb_e_iterator_end
	ErrInvalidReply              ErrorType = C.qdb_e_invalid_reply
	ErrOkCreated                 ErrorType = C.qdb_e_ok_created
	ErrNoSpaceLeft               ErrorType = C.qdb_e_no_space_left
	ErrQuotaExceeded             ErrorType = C.qdb_e_quota_exceeded
	ErrAliasTooLong              ErrorType = C.qdb_e_alias_too_long
	ErrClockSkew                 ErrorType = C.qdb_e_clock_skew
	ErrAccessDenied              ErrorType = C.qdb_e_access_denied
	ErrLoginFailed               ErrorType = C.qdb_e_login_failed
	ErrColumnNotFound            ErrorType = C.qdb_e_column_not_found
	ErrQueryTooComplex           ErrorType = C.qdb_e_query_too_complex
	ErrInvalidCryptoKey          ErrorType = C.qdb_e_invalid_crypto_key
	ErrInvalidQuery              ErrorType = C.qdb_e_invalid_query
	ErrInvalidRegex              ErrorType = C.qdb_e_invalid_regex
	ErrUnknownUser               ErrorType = C.qdb_e_unknown_user
	ErrInterrupted               ErrorType = C.qdb_e_interrupted
	ErrNetworkInbufTooSmall      ErrorType = C.qdb_e_network_inbuf_too_small
	ErrNetworkError              ErrorType = C.qdb_e_network_error
	ErrDataCorruption            ErrorType = C.qdb_e_data_corruption
	ErrPartialFailure            ErrorType = C.qdb_e_partial_failure
	ErrAsyncPipeFull             ErrorType = C.qdb_e_async_pipe_full
)
Error codes: errors are considered retryable by default, except logic, constraint, and permission failures.
Network/transient (retryable):
- ErrTimeout: network timeout
- ErrConnectionRefused/Reset: connection failed
- ErrUnstableCluster: temporary cluster issue
- ErrTryAgain: explicit retry request
- ErrResourceLocked: concurrent access conflict
- ErrNetworkError: generic network failure
Logic/programming (non-retryable):
- ErrInvalidArgument: bad parameter
- ErrIncompatibleType: type mismatch
- ErrInvalidQuery: malformed query
- ErrBufferTooSmall: insufficient buffer
Constraints (non-retryable):
- ErrAliasAlreadyExists: duplicate key
- ErrEntryTooLarge: size limit exceeded
- ErrQuotaExceeded: storage quota reached
Permissions (non-retryable):
- ErrAccessDenied: insufficient privileges
- ErrOperationNotPermitted: forbidden operation
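A sketch of classifying a failure, assuming the returned error compares against the ErrorType constants (errors.Is falls back to plain equality when no wrapping is involved):

data, err := handle.Blob("maybe-missing").Get()
if err != nil {
	switch {
	case errors.Is(err, qdb.ErrAliasNotFound):
		// Treat the entry as absent.
	case qdb.IsRetryable(err):
		// Transient failure: back off and retry.
	default:
		return err
	}
}
_ = data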
type Find ¶
type Find struct {
HandleType
// contains filtered or unexported fields
}
Find : a building type to execute a query. Retrieves all entries’ aliases that match the specified query. For the complete grammar, please refer to the documentation. Queries are transactional. The complexity of this function is dependent on the complexity of the query.
func (Find) ExecuteString ¶
ExecuteString : Execute a string query immediately
type HandleOptions ¶ added in v3.14.2
type HandleOptions struct {
// contains filtered or unexported fields
}
HandleOptions holds all configuration options for creating a handle.
func FromHandleOptionsProvider ¶ added in v3.14.2
func FromHandleOptionsProvider(provider HandleOptionsProvider) *HandleOptions
FromHandleOptionsProvider creates HandleOptions from a provider.
Args:
provider: HandleOptionsProvider interface implementation
Returns:
*HandleOptions: New options instance, nil if provider is nil
Note: User secrets cannot be copied for security reasons.
Example:
newOpts := qdb.FromHandleOptionsProvider(existingOpts)
if newOpts != nil {
newOpts.WithUserSecurityFile("/path/to/user.json")
}
func NewHandleOptions ¶ added in v3.14.2
func NewHandleOptions() *HandleOptions
NewHandleOptions creates a new HandleOptions builder.
Args:
None
Returns:
*HandleOptions: Builder for configuring handle options
Default values:
- Compression: CompBalanced
- Encryption: EncryptNone
- Timeout: 120 seconds
Example:
// Simple unsecured connection
opts := NewHandleOptions().
WithClusterUri("qdb://localhost:2836").
WithTimeout(30 * time.Second)
handle, err := qdb.NewHandleFromOptions(opts)
// Secured connection with files
opts := NewHandleOptions().
WithClusterUri("qdb://secure-cluster:2838").
WithClusterPublicKeyFile("/path/to/cluster.key").
WithUserSecurityFile("/path/to/user.json").
WithEncryption(qdb.EncryptAES)
handle, err := qdb.NewHandleFromOptions(opts)
// High-performance configuration
opts := NewHandleOptions().
WithClusterUri("qdb://cluster:2836").
WithCompression(qdb.CompNone).
WithClientMaxParallelism(16).
WithClientMaxInBufSize(64 * 1024 * 1024)
handle, err := qdb.NewHandleFromOptions(opts)
func (*HandleOptions) GetClientMaxInBufSize ¶ added in v3.14.2
func (o *HandleOptions) GetClientMaxInBufSize() uint
GetClientMaxInBufSize returns the current client max input buffer size value.
func (*HandleOptions) GetClientMaxParallelism ¶ added in v3.14.2
func (o *HandleOptions) GetClientMaxParallelism() int
GetClientMaxParallelism returns the current client max parallelism value.
func (*HandleOptions) GetClusterPublicKey ¶ added in v3.14.2
func (o *HandleOptions) GetClusterPublicKey() string
GetClusterPublicKey returns the current cluster public key value.
func (*HandleOptions) GetClusterPublicKeyFile ¶ added in v3.14.2
func (o *HandleOptions) GetClusterPublicKeyFile() string
GetClusterPublicKeyFile returns the current cluster public key file path value.
func (*HandleOptions) GetClusterURI ¶ added in v3.14.2
func (o *HandleOptions) GetClusterURI() string
GetClusterURI returns the current cluster URI value.
func (*HandleOptions) GetCompression ¶ added in v3.14.2
func (o *HandleOptions) GetCompression() Compression
GetCompression returns the current compression value.
func (*HandleOptions) GetEncryption ¶ added in v3.14.2
func (o *HandleOptions) GetEncryption() Encryption
GetEncryption returns the current encryption value.
func (*HandleOptions) GetTimeout ¶ added in v3.14.2
func (o *HandleOptions) GetTimeout() time.Duration
GetTimeout returns the current timeout value.
func (*HandleOptions) GetUserName ¶ added in v3.14.2
func (o *HandleOptions) GetUserName() string
GetUserName returns the current username value.
func (*HandleOptions) GetUserSecret ¶ added in v3.14.2
func (o *HandleOptions) GetUserSecret() string
GetUserSecret returns the current user secret value. Note: This method is kept for internal use but should be used carefully for security reasons.
func (*HandleOptions) GetUserSecurityFile ¶ added in v3.14.2
func (o *HandleOptions) GetUserSecurityFile() string
GetUserSecurityFile returns the current user security file path value.
func (*HandleOptions) WithClientMaxInBufSize ¶ added in v3.14.2
func (o *HandleOptions) WithClientMaxInBufSize(size uint) *HandleOptions
WithClientMaxInBufSize sets the client max input buffer size option.
func (*HandleOptions) WithClientMaxParallelism ¶ added in v3.14.2
func (o *HandleOptions) WithClientMaxParallelism(n int) *HandleOptions
WithClientMaxParallelism sets the client max parallelism option.
func (*HandleOptions) WithClusterPublicKey ¶ added in v3.14.2
func (o *HandleOptions) WithClusterPublicKey(key string) *HandleOptions
WithClusterPublicKey sets the cluster public key option.
func (*HandleOptions) WithClusterPublicKeyFile ¶ added in v3.14.2
func (o *HandleOptions) WithClusterPublicKeyFile(path string) *HandleOptions
WithClusterPublicKeyFile sets the cluster public key file path option.
func (*HandleOptions) WithClusterUri ¶ added in v3.14.2
func (o *HandleOptions) WithClusterUri(uri string) *HandleOptions
WithClusterUri sets the cluster URI option.
func (*HandleOptions) WithCompression ¶ added in v3.14.2
func (o *HandleOptions) WithCompression(compression Compression) *HandleOptions
WithCompression sets the compression option.
func (*HandleOptions) WithEncryption ¶ added in v3.14.2
func (o *HandleOptions) WithEncryption(encryption Encryption) *HandleOptions
WithEncryption sets the encryption option.
func (*HandleOptions) WithTimeout ¶ added in v3.14.2
func (o *HandleOptions) WithTimeout(timeout time.Duration) *HandleOptions
WithTimeout sets the timeout option.
func (*HandleOptions) WithUserName ¶ added in v3.14.2
func (o *HandleOptions) WithUserName(name string) *HandleOptions
WithUserName sets the username option.
func (*HandleOptions) WithUserSecret ¶ added in v3.14.2
func (o *HandleOptions) WithUserSecret(secret string) *HandleOptions
WithUserSecret sets the user secret option.
func (*HandleOptions) WithUserSecurityFile ¶ added in v3.14.2
func (o *HandleOptions) WithUserSecurityFile(path string) *HandleOptions
WithUserSecurityFile sets the user security file path option.
type HandleOptionsProvider ¶ added in v3.14.2
type HandleOptionsProvider interface {
GetClusterURI() string
GetClusterPublicKeyFile() string
GetClusterPublicKey() string
GetUserSecurityFile() string
GetUserName() string
// GetUserSecret() is intentionally omitted for security
GetEncryption() Encryption
GetCompression() Compression
GetClientMaxParallelism() int
GetClientMaxInBufSize() uint
GetTimeout() time.Duration
}
HandleOptionsProvider provides methods to retrieve handle configuration values. Note: User secrets are not exposed through this interface for security reasons.
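A configured *HandleOptions exposes all of these getters, so it can be passed where a HandleOptionsProvider is expected. A minimal sketch (the logSettings helper is illustrative, not part of the package):
func logSettings(p qdb.HandleOptionsProvider) {
	// Read-only inspection of connection settings; the user secret is deliberately unreachable here.
	fmt.Printf("uri=%s compression=%v parallelism=%d timeout=%s\n",
		p.GetClusterURI(), p.GetCompression(), p.GetClientMaxParallelism(), p.GetTimeout())
}

opts := qdb.NewHandleOptions().
	WithClusterUri("qdb://localhost:2836").
	WithTimeout(30 * time.Second)
logSettings(opts)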
type HandleType ¶
type HandleType struct {
// contains filtered or unexported fields
}
HandleType is an opaque handle to internal API-allocated structures needed for maintaining connection to a cluster.
func MustSetupHandle ¶
func MustSetupHandle(clusterURI string, timeout time.Duration) HandleType
MustSetupHandle creates and connects a handle, panics on error.
Args:
clusterURI: URI of the QuasarDB cluster (e.g. "qdb://localhost:2836")
timeout: Network operation timeout
Returns:
HandleType: Connected handle
Example:
h := qdb.MustSetupHandle("qdb://localhost:2836", 30*time.Second)
defer h.Close()
func MustSetupSecuredHandle ¶
func MustSetupSecuredHandle(clusterURI, clusterPublicKeyFile, userCredentialFile string, timeout time.Duration, encryption Encryption) HandleType
MustSetupSecuredHandle creates and connects a secured handle, panics on error.
Args:
clusterURI: URI of the QuasarDB cluster
clusterPublicKeyFile: Path to cluster public key file
userCredentialFile: Path to user credentials JSON file
timeout: Network operation timeout
encryption: Encryption type to use
Returns:
HandleType: Secured and connected handle
Example:
h := qdb.MustSetupSecuredHandle(
"qdb://secure-cluster:2838",
"/path/to/cluster.key",
"/path/to/user.json",
30*time.Second,
qdb.EncryptAES)
defer h.Close()
func NewHandle ¶
func NewHandle() (HandleType, error)
NewHandle creates a new handle.
Args:
None
Returns:
HandleType: Opened handle (not connected) with TCP protocol
error: Creation error if any
Example:
h, err := qdb.NewHandle()
if err != nil {
return err
}
defer h.Close()
func NewHandleFromOptions ¶ added in v3.14.2
func NewHandleFromOptions(options *HandleOptions) (HandleType, error)
NewHandleFromOptions creates and configures a new handle using the provided options.
Args:
options: Configuration options for the handle
Returns:
HandleType: Configured and connected handle
error: Creation or configuration error if any
Example:
opts := qdb.NewHandleOptions().
WithClusterUri("qdb://localhost:2836").
WithTimeout(30 * time.Second)
h, err := qdb.NewHandleFromOptions(opts)
if err != nil {
return err
}
defer h.Close()
func NewHandleWithNativeLogs ¶ added in v3.14.2
func NewHandleWithNativeLogs() (HandleType, error)
NewHandleWithNativeLogs creates a new handle with native C++ logging enabled.
Args:
None
Returns:
HandleType: Opened handle with native logging enabled
error: Creation error if any
Example:
h, err := qdb.NewHandleWithNativeLogs()
if err != nil {
return err
}
defer h.Close()
func SetupHandle ¶
func SetupHandle(clusterURI string, timeout time.Duration) (HandleType, error)
SetupHandle creates and connects a handle to a QuasarDB cluster.
Args:
clusterURI: URI of the QuasarDB cluster (e.g. "qdb://localhost:2836")
timeout: Network operation timeout
Returns:
HandleType: Connected handle
error: Creation or connection error if any
Example:
h, err := qdb.SetupHandle("qdb://localhost:2836", 30*time.Second)
if err != nil {
return err
}
defer h.Close()
func SetupSecuredHandle ¶
func SetupSecuredHandle(clusterURI, clusterPublicKeyFile, userCredentialFile string, timeout time.Duration, encryption Encryption) (HandleType, error)
SetupSecuredHandle creates and connects a secured handle with authentication.
Args:
clusterURI: URI of the QuasarDB cluster
clusterPublicKeyFile: Path to cluster public key file
userCredentialFile: Path to user credentials JSON file
timeout: Network operation timeout
encryption: Encryption type to use
Returns:
HandleType: Secured and connected handle
error: Creation, security setup, or connection error if any
Example:
h, err := qdb.SetupSecuredHandle(
"qdb://secure-cluster:2838",
"/path/to/cluster.key",
"/path/to/user.json",
30*time.Second,
qdb.EncryptAES)
if err != nil {
return err
}
defer h.Close()
func (HandleType) APIBuild ¶
func (h HandleType) APIBuild() string
APIBuild returns the QuasarDB API build information.
Args:
None
Returns:
string: Build information including version, compiler, and platform
Example:
build := h.APIBuild() // → "3.14.0-gcc-11.2.0-linux-x86_64"
func (HandleType) APIVersion ¶
func (h HandleType) APIVersion() string
APIVersion returns the QuasarDB API version string.
Args:
None
Returns:
string: API version (e.g. "3.14.0")
Example:
version := h.APIVersion() // → "3.14.0"
func (HandleType) AddClusterPublicKey ¶
func (h HandleType) AddClusterPublicKey(secret string) error
AddClusterPublicKey adds the cluster public key for secure communication.
Args:
secret: Cluster public key in PEM format
Returns:
error: Configuration error if any
Note: Must be called before Connect.
Example:
err := handle.AddClusterPublicKey(clusterKey)
if err != nil {
return err
}
func (HandleType) AddUserCredentials ¶
func (h HandleType) AddUserCredentials(name, secret string) error
AddUserCredentials adds a username and secret key for authentication.
Args:
name: Username for authentication
secret: User secret key/private key
Returns:
error: Configuration error if any
Example:
err := handle.AddUserCredentials("myuser", "mysecretkey")
if err != nil {
return err
}
func (HandleType) Blob ¶
func (h HandleType) Blob(alias string) BlobEntry
Blob creates an entry accessor for blob operations.
Args:
alias: Name of the blob entry
Returns:
BlobEntry: Blob entry accessor
Example:
blob := h.Blob("my_data")
err := blob.Put([]byte("Hello World"))
func (HandleType) Close ¶
func (h HandleType) Close() error
Close closes the handle and releases all resources.
Args:
None
Returns:
error: Closing error if any
Note: Terminates connections and releases all internal buffers.
Example:
err := h.Close()
if err != nil {
return err
}
func (HandleType) Cluster ¶
func (h HandleType) Cluster() *Cluster
Cluster creates an entry accessor for cluster operations.
Args:
None
Returns:
*Cluster: Cluster operations accessor
Example:
cluster := h.Cluster()
err := cluster.TrimAll()
func (HandleType) Connect ¶
func (h HandleType) Connect(clusterURI string) error
Connect connects a previously opened handle to a QuasarDB cluster.
Args:
clusterURI: URI in format qdb://<address>:<port> (IPv4/IPv6/domain)
Returns:
error: Connection error if any
Example:
err := h.Connect("qdb://localhost:2836")
if err != nil {
return err
}
// Multiple nodes: "qdb://node1:2836,node2:2836"
// IPv6: "qdb://[::1]:2836"
func (HandleType) DirectConnect ¶
func (h HandleType) DirectConnect(nodeURI string) (DirectHandleType, error)
DirectConnect opens a connection to a node for use with the direct API
The returned direct handle must be freed with Close(). Releasing the handle has no impact on non-direct connections or other direct handles.
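Example (sketch; assumes a reachable node at the given URI):
dh, err := h.DirectConnect("qdb://localhost:2836")
if err != nil {
	return err
}
// Closing the direct handle does not affect h or other direct handles.
defer dh.Close()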
func (HandleType) Find ¶
func (h HandleType) Find() *Find
Find creates an entry accessor for find query operations.
Args:
None
Returns:
*Find: Find query builder
Example:
results, err := h.Find().
Tag("important").
Type(qdb.EntryBlob).
Execute()
func (HandleType) GetClientMaxInBufSize ¶
func (h HandleType) GetClientMaxInBufSize() (uint, error)
GetClientMaxInBufSize gets the maximum incoming buffer size for client network operations.
Args:
None
Returns:
uint: Current maximum buffer size in bytes
error: Retrieval error if any
Example:
size, err := h.GetClientMaxInBufSize() // → 16777216
if err != nil {
return 0, err
}
func (HandleType) GetClientMaxParallelism ¶ added in v3.13.2
func (h HandleType) GetClientMaxParallelism() (uint, error)
GetClientMaxParallelism gets the maximum parallelism option of the client.
Args:
None
Returns:
uint: Current maximum parallelism thread count
error: Retrieval error if any
Example:
count, err := h.GetClientMaxParallelism() // → 16
if err != nil {
return 0, err
}
func (HandleType) GetClusterMaxInBufSize ¶
func (h HandleType) GetClusterMaxInBufSize() (uint, error)
GetClusterMaxInBufSize gets the maximum incoming buffer size allowed by the cluster.
Args:
None
Returns:
uint: Maximum buffer size allowed by cluster in bytes
error: Retrieval error if any
Example:
size, err := h.GetClusterMaxInBufSize() // → 67108864
if err != nil {
return 0, err
}
func (HandleType) GetLastError ¶
func (h HandleType) GetLastError() (string, error)
GetLastError retrieves last operation error.
Args:
None
Returns:
string: Error message text
error: Error code
Example:
msg, err := h.GetLastError() // → "Connection timeout"
func (HandleType) GetTagged ¶
func (h HandleType) GetTagged(tag string) ([]string, error)
GetTagged retrieves all entries that have the specified tag.
Args:
tag: Tag name to search for
Returns:
[]string: List of entry aliases with this tag
error: Retrieval error if any
Note: Tag must exist. Constant time complexity.
Example:
entries, err := h.GetTagged("important") // → ["entry1", "entry2"]
if err != nil {
return nil, err
}
func (HandleType) GetTags ¶
func (h HandleType) GetTags(entryAlias string) ([]string, error)
GetTags retrieves all tags of an entry.
Args:
entryAlias: Name of the entry to get tags from
Returns:
[]string: List of tag names
error: Retrieval error if any
Note: Entry must exist. Tags scale across nodes.
Example:
tags, err := h.GetTags("myentry") // → ["important", "data"]
if err != nil {
return nil, err
}
func (HandleType) Integer ¶
func (h HandleType) Integer(alias string) IntegerEntry
Integer creates an entry accessor for integer operations.
Args:
alias: Name of the integer entry
Returns:
IntegerEntry: Integer entry accessor
Example:
counter := h.Integer("my_counter")
err := counter.Put(42)
func (HandleType) Node ¶
func (h HandleType) Node(uri string) *Node
Node creates an entry accessor for node operations.
Args:
uri: URI of the node (e.g. "qdb://localhost:2836")
Returns:
*Node: Node accessor
Example:
node := h.Node("qdb://localhost:2836")
status, err := node.Status()
func (HandleType) NodeStatistics
deprecated
func (h HandleType) NodeStatistics(nodeID string) (Statistics, error)
NodeStatistics : Retrieve statistics for a specific node
Deprecated: Statistics will be fetched directly from the node using the new direct API
func (HandleType) Open ¶
func (h HandleType) Open(protocol Protocol) error
Open initializes a handle with the specified protocol.
Args:
protocol: Network protocol to use (e.g. ProtocolTCP)
Returns:
error: Initialization error if any
Note: No connection will be established. Not needed if you created your handle with NewHandle.
Example:
err := h.Open(qdb.ProtocolTCP)
if err != nil {
return err
}
func (HandleType) PrefixCount ¶
func (h HandleType) PrefixCount(prefix string) (uint64, error)
PrefixCount retrieves the count of entries matching the provided prefix.
Args:
prefix: Prefix string to match
Returns:
uint64: Number of entries matching the prefix
error: Counting error if any
Example:
count, err := h.PrefixCount("user:") // → 42
if err != nil {
return 0, err
}
func (HandleType) PrefixGet ¶
func (h HandleType) PrefixGet(prefix string, limit int) ([]string, error)
PrefixGet retrieves all entries matching the provided prefix.
Args:
prefix: Prefix string to match
limit: Maximum number of results to return
Returns:
[]string: List of entry aliases matching the prefix
error: Retrieval error if any
Example:
entries, err := h.PrefixGet("user:", 100) // → ["user:1", "user:2"]
if err != nil {
return nil, err
}
func (HandleType) Query ¶
func (h HandleType) Query(query string) *Query
Query creates an entry accessor for query operations.
Args:
query: SQL-like query string
Returns:
*Query: Query executor
Example:
q := h.Query("SELECT * FROM measurements WHERE value > 100")
result, err := q.Execute()
func (HandleType) Release ¶
func (h HandleType) Release(buffer unsafe.Pointer)
Release releases an API-allocated buffer.
Args:
buffer: Pointer to buffer allocated by QuasarDB API
Returns:
None
Note: Failure to call may cause memory leaks. Works with any API-allocated buffer type.
Example:
var tags **C.char
err := C.qdb_get_tags(h.handle, alias, &tags, &tagCount)
defer h.Release(unsafe.Pointer(tags))
func (HandleType) SetClientMaxInBufSize ¶
func (h HandleType) SetClientMaxInBufSize(bufSize uint) error
SetClientMaxInBufSize sets the maximum incoming buffer size for client network operations.
Args:
bufSize: Maximum buffer size in bytes
Returns:
error: Configuration error if any
Note: Only modify if expecting very large responses from server.
Example:
err := h.SetClientMaxInBufSize(64 * 1024 * 1024) // 64MB
if err != nil {
return err
}
func (HandleType) SetClientMaxParallelism ¶ added in v3.13.2
func (h HandleType) SetClientMaxParallelism(threadCount uint) error
SetClientMaxParallelism sets the maximum parallelism level for the client.
Args:
threadCount: Number of threads for concurrent operations
Returns:
error: Configuration error if any
Note: Higher values may improve throughput but consume more resources.
Example:
err := h.SetClientMaxParallelism(16)
if err != nil {
return err
}
func (HandleType) SetCompression ¶
func (h HandleType) SetCompression(compressionLevel Compression) error
SetCompression sets the compression level for outgoing messages.
Args:
compressionLevel: Compression type (CompNone, CompBalanced)
Returns:
error: Configuration error if any
Note: API can read any compression used by server regardless of this setting.
Example:
err := h.SetCompression(qdb.CompBalanced)
if err != nil {
return err
}
func (HandleType) SetEncryption ¶
func (h HandleType) SetEncryption(encryption Encryption) error
SetEncryption sets the encryption method for the handle.
Args:
encryption: Encryption type (EncryptNone or EncryptAES)
Returns:
error: Configuration error if any
Note: Must be called before Connect. See AddClusterPublicKey for adding public key.
Example:
err := h.SetEncryption(qdb.EncryptAES)
if err != nil {
return err
}
func (HandleType) SetMaxCardinality ¶
func (h HandleType) SetMaxCardinality(maxCardinality uint) error
SetMaxCardinality sets the maximum allowed cardinality for queries.
Args:
maxCardinality: Maximum cardinality value (minimum: 100)
Returns:
error: Configuration error if any
Note: Default value is 10,007. Minimum allowed value is 100.
Example:
err := h.SetMaxCardinality(50000)
if err != nil {
return err
}
func (HandleType) SetTimeout ¶
func (h HandleType) SetTimeout(timeout time.Duration) error
SetTimeout sets the timeout for all network operations.
Args:
timeout: Duration for network operations timeout
Returns:
error: Configuration error if any
Note: Lower timeouts increase risk of timeout errors. Server-side timeout might be shorter.
Example:
err := h.SetTimeout(30 * time.Second)
if err != nil {
return err
}
func (HandleType) Statistics ¶
func (h HandleType) Statistics() (map[string]Statistics, error)
Statistics : Retrieve statistics for all nodes
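Example (sketch): iterate per-node statistics; the map key identifies the node.
stats, err := h.Statistics()
if err != nil {
	return err
}
for nodeID, s := range stats {
	fmt.Printf("%s: engine %s, %d sessions available\n",
		nodeID, s.EngineVersion, s.Network.Sessions.AvailableCount)
}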
func (HandleType) Table ¶ added in v3.14.2
func (h HandleType) Table(alias string) TimeseriesEntry
Table creates an entry accessor for timeseries table operations.
Args:
alias: Name of the timeseries table
Returns:
TimeseriesEntry: Timeseries table accessor
Example:
table := h.Table("measurements")
err := table.Create(columns...)
func (HandleType) Timeseries
deprecated
func (h HandleType) Timeseries(alias string) TimeseriesEntry
Timeseries creates an entry accessor for timeseries operations.
Args:
alias: Name of the timeseries table
Returns:
TimeseriesEntry: Timeseries table accessor
Deprecated: Use Table instead.
Example:
ts := h.Timeseries("measurements")
func (HandleType) TsBatch ¶
func (h HandleType) TsBatch(cols ...TsBatchColumnInfo) (*TsBatch, error)
TsBatch creates an entry accessor for batch timeseries operations.
Args:
cols: Column information for batch operations
Returns:
*TsBatch: Batch operations accessor
error: Creation error if any
Example:
batch, err := h.TsBatch(columns...)
if err != nil {
return nil, err
}
defer batch.Release()
type IntegerEntry ¶
type IntegerEntry struct {
Entry
}
IntegerEntry : int data type
func (IntegerEntry) Add ¶
func (entry IntegerEntry) Add(added int64) (int64, error)
Add : Atomically increases or decreases a signed 64-bit integer.
The specified entry will be atomically increased (or decreased) according to the given addend value: to increase the value, specify a positive addend; to decrease it, specify a negative one. The function returns the result of the operation. The entry must already exist.
func (IntegerEntry) Get ¶
func (entry IntegerEntry) Get() (int64, error)
Get : Atomically retrieves the value of a signed 64-bit integer.
Atomically retrieves the value of an existing 64-bit integer.
func (IntegerEntry) Put ¶
func (entry IntegerEntry) Put(content int64, expiry time.Time) error
Put : Creates a new signed 64-bit integer.
Atomically creates an entry of the given alias and sets it to a cross-platform signed 64-bit integer. If the entry already exists, the function returns an error. You can specify an expiry time or use NeverExpires if you don’t want the entry to expire. If you want to create or update an entry, use Update. The value will be correctly translated independently of the endianness of the client’s platform.
func (*IntegerEntry) Update ¶
func (entry *IntegerEntry) Update(newContent int64, expiry time.Time) error
Update : Creates or updates a signed 64-bit integer.
Atomically updates an entry of the given alias to the provided value. If the entry doesn’t exist, it will be created. You can specify an expiry time or use NeverExpires if you don’t want the entry to expire.
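Example (sketch): a simple counter combining Update and Add; "page_views" is an illustrative alias.
counter := h.Integer("page_views")
// Create or reset the counter; NeverExpires keeps it until explicitly removed.
if err := counter.Update(0, qdb.NeverExpires()); err != nil {
	return err
}
total, err := counter.Add(1) // atomic increment; returns the new value
if err != nil {
	return err
}
fmt.Println("total:", total)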
type JSONPath ¶ added in v3.14.2
type JSONPath struct {
// contains filtered or unexported fields
}
JSONPath wraps parsed JSON data and provides dot-notation path navigation. Designed as a minimal replacement for gabs.Container to eliminate external dependencies.
Decision rationale:
- Avoids external dependency on gabs library for simple JSON path navigation.
- Provides familiar API to minimize migration effort.
Key assumptions:
- JSON is already parsed into map[string]interface{} or compatible structure.
- Path strings use dot notation (e.g., "parent.child.value").
- Type assertions are caller's responsibility after navigation.
Performance trade-offs:
- Path parsing allocates a string slice for split segments.
- Each navigation step performs type assertion and map lookup.
- Suitable for config/metadata access, not hot paths.
func (*JSONPath) Data ¶ added in v3.14.2
func (j *JSONPath) Data() interface{}
Data returns the underlying data at this path location. Returns nil if path was not found during navigation.
Decision rationale:
- Matches gabs.Data API for drop-in compatibility.
- Allows caller to perform type assertions as needed.
Key assumptions:
- Caller handles nil checks before type assertion.
- Type assertion panics are caller's responsibility.
Usage example:
// Get string value with type assertion:
if value := parsed.Path("key").Data(); value != nil {
strValue := value.(string)
}
func (*JSONPath) Path ¶ added in v3.14.2
Path navigates to a nested field using dot notation and returns a new JSONPath. Returns JSONPath with nil data if path cannot be resolved.
Decision rationale:
- Matches gabs.Path behavior for compatibility.
- Returns wrapper even on failure to allow safe chaining.
Key assumptions:
- Path segments separated by dots map to object keys.
- Intermediate values must be map[string]interface{} to continue traversal.
- Arrays/slices not supported in path notation (differs from full gabs).
Performance trade-offs:
- O(n) where n is number of path segments.
- String split allocates; consider caching if called repeatedly with same paths.
Usage example:
// Navigate nested config:
dbPath := config.Path("local.depot.rocksdb.root").Data()
if dbPath == nil {
// handle missing config key
}
type Logger ¶ added in v3.14.2
type Logger interface {
Detailed(msg string, args ...any) // NEW – maps to slog.Debug
Debug(msg string, args ...any)
Info(msg string, args ...any)
Warn(msg string, args ...any)
Error(msg string, args ...any)
Panic(msg string, args ...any) // NEW – maps to slog.Error
// With returns a logger that permanently appends the supplied attributes.
With(args ...any) Logger
}
Logger is the project-wide logging façade.
Decision rationale:
- Decouples QuasarDB Go API from any concrete logging package.
- Keeps the surface small; only the levels currently needed by the codebase are exposed, yet it is open for future extension.
Key assumptions:
- Structured logging is preferred; key-value pairs follow slog's "attr" convention (`msg, "k1", v1, …`).
- All methods are safe for concurrent use by multiple goroutines.
func NewSlogAdapter ¶ added in v3.14.2
NewSlogAdapter creates a Logger from a slog.Handler. This is primarily intended for testing purposes.
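Example (sketch, assuming the adapter is installed via SetLogger as shown for NilLogger below; requires the standard library packages log/slog and os):
handler := slog.NewTextHandler(os.Stderr, &slog.HandlerOptions{Level: slog.LevelDebug})
qdb.SetLogger(qdb.NewSlogAdapter(handler))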
type Metadata ¶
type Metadata struct {
Ref RefID
Type EntryType
Size uint64
ModificationTime time.Time
ExpiryTime time.Time
}
Metadata : A structure representing the metadata of an entry in the database.
type NilLogger ¶ added in v3.14.2
type NilLogger struct{}
NilLogger provides a Logger implementation that silences all log output.
Decision rationale:
- Enables users to completely disable QuasarDB logging when needed.
- Useful for production environments where QuasarDB logs should be suppressed.
- Zero-allocation implementation for optimal performance.
Key assumptions:
- All logging methods are no-ops and return immediately.
- With() returns a new NilLogger instance for consistency.
- Safe for concurrent use by multiple goroutines.
Usage example:
qdb.SetLogger(&qdb.NilLogger{})
type Node ¶
type Node struct {
HandleType
// contains filtered or unexported fields
}
Node : a structure giving access to various pieces of information or actions on a node
func (Node) Config ¶
Config :
Returns the configuration of a node as a JSON object serialized to a byte array; you can use a method of your choice to unmarshal it (an example is available using the gabs library). The configuration format is described in the documentation.
func (Node) RawConfig ¶
RawConfig :
Returns the configuration of a node. The configuration is a JSON object as a byte array, as described in the documentation.
func (Node) RawStatus ¶
RawStatus :
Returns the status of a node. The status is a JSON object as a byte array and contains current information of the node state, as described in the documentation.
func (Node) RawTopology ¶
RawTopology :
Returns the topology of a node. The topology is a JSON object as a byte array containing the node address, and the addresses of its successor and predecessor.
func (Node) Status ¶
func (n Node) Status() (NodeStatus, error)
Status :
Returns the status of a node. The status is a JSON object and contains current information of the node state, as described in the documentation.
func (Node) Topology ¶
func (n Node) Topology() (NodeTopology, error)
Topology :
Returns the topology of a node. The topology is a JSON object containing the node address, and the addresses of its successor and predecessor.
type NodeLocation ¶
NodeLocation : A structure representing the address of a quasardb node.
type NodeStatus ¶
type NodeStatus struct {
Memory struct {
VM struct {
Used int64 `json:"used"`
Total int64 `json:"total"`
} `json:"vm"`
Physmem struct {
Used int64 `json:"used"`
Total int64 `json:"total"`
} `json:"physmem"`
} `json:"memory"`
CPUTimes struct {
Idle int64 `json:"idle"`
System int64 `json:"system"`
User int64 `json:"user"`
} `json:"cpu_times"`
DiskUsage struct {
Free int64 `json:"free"`
Total int64 `json:"total"`
} `json:"disk_usage"`
Network struct {
ListeningEndpoint string `json:"listening_endpoint"`
Partitions struct {
Count int `json:"count"`
MaxSessions int `json:"max_sessions"`
AvailableSessions int `json:"available_sessions"`
} `json:"partitions"`
} `json:"network"`
NodeID string `json:"node_id"`
OperatingSystem string `json:"operating_system"`
HardwareConcurrency int `json:"hardware_concurrency"`
Timestamp time.Time `json:"timestamp"`
Startup time.Time `json:"startup"`
EngineVersion string `json:"engine_version"`
EngineBuildDate time.Time `json:"engine_build_date"`
Entries struct {
Resident struct {
Count int `json:"count"`
Size int `json:"size"`
} `json:"resident"`
Persisted struct {
Count int `json:"count"`
Size int `json:"size"`
} `json:"persisted"`
} `json:"entries"`
Operations struct {
Get struct {
Count int `json:"count"`
Successes int `json:"successes"`
Failures int `json:"failures"`
Pageins int `json:"pageins"`
Evictions int `json:"evictions"`
InBytes int `json:"in_bytes"`
OutBytes int `json:"out_bytes"`
} `json:"get"`
GetAndRemove struct {
Count int `json:"count"`
Successes int `json:"successes"`
Failures int `json:"failures"`
Pageins int `json:"pageins"`
Evictions int `json:"evictions"`
InBytes int `json:"in_bytes"`
OutBytes int `json:"out_bytes"`
} `json:"get_and_remove"`
Put struct {
Count int `json:"count"`
Successes int `json:"successes"`
Failures int `json:"failures"`
Pageins int `json:"pageins"`
Evictions int `json:"evictions"`
InBytes int `json:"in_bytes"`
OutBytes int `json:"out_bytes"`
} `json:"put"`
Update struct {
Count int `json:"count"`
Successes int `json:"successes"`
Failures int `json:"failures"`
Pageins int `json:"pageins"`
Evictions int `json:"evictions"`
InBytes int `json:"in_bytes"`
OutBytes int `json:"out_bytes"`
} `json:"update"`
GetAndUpdate struct {
Count int `json:"count"`
Successes int `json:"successes"`
Failures int `json:"failures"`
Pageins int `json:"pageins"`
Evictions int `json:"evictions"`
InBytes int `json:"in_bytes"`
OutBytes int `json:"out_bytes"`
} `json:"get_and_update"`
CompareAndSwap struct {
Count int `json:"count"`
Successes int `json:"successes"`
Failures int `json:"failures"`
Pageins int `json:"pageins"`
Evictions int `json:"evictions"`
InBytes int `json:"in_bytes"`
OutBytes int `json:"out_bytes"`
} `json:"compare_and_swap"`
Remove struct {
Count int `json:"count"`
Successes int `json:"successes"`
Failures int `json:"failures"`
Pageins int `json:"pageins"`
Evictions int `json:"evictions"`
InBytes int `json:"in_bytes"`
OutBytes int `json:"out_bytes"`
} `json:"remove"`
RemoveIf struct {
Count int `json:"count"`
Successes int `json:"successes"`
Failures int `json:"failures"`
Pageins int `json:"pageins"`
Evictions int `json:"evictions"`
InBytes int `json:"in_bytes"`
OutBytes int `json:"out_bytes"`
} `json:"remove_if"`
PurgeAll struct {
Count int `json:"count"`
Successes int `json:"successes"`
Failures int `json:"failures"`
Pageins int `json:"pageins"`
Evictions int `json:"evictions"`
InBytes int `json:"in_bytes"`
OutBytes int `json:"out_bytes"`
} `json:"purge_all"`
} `json:"operations"`
Overall struct {
Count int `json:"count"`
Successes int `json:"successes"`
Failures int `json:"failures"`
Pageins int `json:"pageins"`
Evictions int `json:"evictions"`
InBytes int `json:"in_bytes"`
OutBytes int `json:"out_bytes"`
} `json:"overall"`
}
NodeStatus : a json representation object containing the status of a node
type NodeTopology ¶
type NodeTopology struct {
Predecessor struct {
Reference string `json:"reference"`
Endpoint string `json:"endpoint"`
} `json:"predecessor"`
Center struct {
Reference string `json:"reference"`
Endpoint string `json:"endpoint"`
} `json:"center"`
Successor struct {
Reference string `json:"reference"`
Endpoint string `json:"endpoint"`
} `json:"successor"`
}
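Example (sketch): inspect a node's ring neighbours through the fields above.
topo, err := h.Node("qdb://localhost:2836").Topology()
if err != nil {
	return err
}
fmt.Println("predecessor:", topo.Predecessor.Endpoint)
fmt.Println("center:", topo.Center.Endpoint)
fmt.Println("successor:", topo.Successor.Endpoint)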
type PinnableBuilder ¶ added in v3.14.2
type PinnableBuilder struct {
Objects []interface{} // Objects to pin (empty for C-allocated memory)
Builder func() unsafe.Pointer // Closure executed AFTER pinning
}
PinnableBuilder implements the critical CGO memory safety pattern required by Go 1.23+.
Why This Pattern Exists ¶
Go 1.23 introduced stricter CGO pointer rules that forbid storing Go pointers in C-accessible memory before those pointers are pinned. Violating this rule causes immediate segfaults when GODEBUG=cgocheck=2 is enabled (which it is in production).
The Problem We Solve ¶
Consider this WRONG approach that causes segfaults:
var table C.qdb_table_t
table.data = (*C.double)(unsafe.Pointer(&goSlice[0])) // CRASH: Go pointer in C memory!
pinner.Pin(&goSlice[0])                               // Too late - already crashed
C.qdb_push_data(&table)
The crash happens because we stored a Go pointer (&goSlice[0]) in C-accessible memory (table.data) BEFORE pinning it. The Go runtime detects this and panics.
The Solution: Deferred Population ¶
PinnableBuilder separates the "what to pin" from "how to use it":
builder := NewPinnableBuilderSingle(&goSlice[0], func() unsafe.Pointer {
table.data = (*C.double)(unsafe.Pointer(&goSlice[0]))
return unsafe.Pointer(&goSlice[0])
})
// Later, in the correct sequence:
for _, obj := range builder.Objects {
pinner.Pin(obj) // 1. Pin first
}
builder.Builder() // 2. Then populate C structures
C.qdb_push_data(&table) // 3. Finally call C
Constructors ¶
Two constructors are provided for convenience:
- NewPinnableBuilderSingle: For pinning a single object
- NewPinnableBuilderMultiple: For pinning multiple objects (e.g., individual strings)
Field Usage ¶
Objects: Array of Go memory to pin. Empty for C-allocated memory.
Builder: Closure that populates C structures. Executed AFTER pinning is complete.
CRITICAL: This is the ONLY safe pattern for passing Go memory to C in this codebase. ¶
func NewPinnableBuilderMultiple ¶ added in v3.14.2
func NewPinnableBuilderMultiple(objects []interface{}, builder func() unsafe.Pointer) PinnableBuilder
NewPinnableBuilderMultiple creates a PinnableBuilder for multiple objects. Use this for string/blob types that need to pin individual elements.
Parameters:
- objects: Array of Go objects to pin
- builder: Function that populates C structures after pinning
Example:
objects := make([]interface{}, 0, len(strings))
for i := range strings {
if len(strings[i]) > 0 {
objects = append(objects, &strings[i])
}
}
builder := NewPinnableBuilderMultiple(objects, func() unsafe.Pointer {
// Populate C structures with pinned string data
return unsafe.Pointer(envelope)
})
func NewPinnableBuilderSingle ¶ added in v3.14.2
func NewPinnableBuilderSingle(object interface{}, builder func() unsafe.Pointer) PinnableBuilder
NewPinnableBuilderSingle creates a PinnableBuilder for a single object. Use this for numeric types (int64, float64, timestamp) that pin their first element.
Parameters:
- object: The Go object to pin (nil for C-allocated memory)
- builder: Function that populates C structures after pinning
Example:
builder := NewPinnableBuilderSingle(&slice[0], func() unsafe.Pointer {
cStruct.data = (*C.double)(unsafe.Pointer(&slice[0]))
return unsafe.Pointer(&slice[0])
})
type Query ¶
type Query struct {
HandleType
// contains filtered or unexported fields
}
Query : query object
type QueryPoint ¶
type QueryPoint C.qdb_point_result_t
QueryPoint : a variadic structure holding the result type as well as the result value
func (*QueryPoint) Get ¶
func (r *QueryPoint) Get() QueryPointResult
Get : retrieve the raw interface
func (*QueryPoint) GetBlob ¶
func (r *QueryPoint) GetBlob() ([]byte, error)
GetBlob : retrieve a blob from the interface
func (*QueryPoint) GetCount ¶
func (r *QueryPoint) GetCount() (int64, error)
GetCount : retrieve the count from the interface
func (*QueryPoint) GetDouble ¶
func (r *QueryPoint) GetDouble() (float64, error)
GetDouble : retrieve a double from the interface
func (*QueryPoint) GetInt64 ¶
func (r *QueryPoint) GetInt64() (int64, error)
GetInt64 : retrieve an int64 from the interface
func (*QueryPoint) GetString ¶
func (r *QueryPoint) GetString() (string, error)
GetString : retrieve a string from the interface
func (*QueryPoint) GetTimestamp ¶
func (r *QueryPoint) GetTimestamp() (time.Time, error)
GetTimestamp : retrieve a timestamp from the interface
type QueryPointResult ¶
type QueryPointResult struct {
// contains filtered or unexported fields
}
QueryPointResult : a query result point
func (QueryPointResult) Type ¶
func (r QueryPointResult) Type() QueryResultValueType
Type : gives the type of the query point result
func (QueryPointResult) Value ¶
func (r QueryPointResult) Value() interface{}
Value : gives the interface{} value of the query point result
type QueryResult ¶
type QueryResult struct {
// contains filtered or unexported fields
}
QueryResult : a query result
func (QueryResult) Columns ¶
func (r QueryResult) Columns(row *QueryPoint) QueryRow
Columns : create columns from a row
func (QueryResult) ColumnsCount ¶
func (r QueryResult) ColumnsCount() int64
ColumnsCount : get the number of columns of each row
func (QueryResult) ColumnsNames ¶
func (r QueryResult) ColumnsNames() []string
ColumnsNames : get the column names of each row
func (QueryResult) ErrorMessage ¶
func (r QueryResult) ErrorMessage() string
ErrorMessage : the error message in case of failure
func (QueryResult) RowCount ¶
func (r QueryResult) RowCount() int64
RowCount : the number of returned rows
func (QueryResult) Rows ¶
func (r QueryResult) Rows() QueryRows
Rows : get rows of a query table result
func (QueryResult) ScannedPoints ¶
func (r QueryResult) ScannedPoints() int64
ScannedPoints : number of points scanned
The actual number of scanned points may be greater
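Example (sketch): run a query and inspect the result shape with the accessors above; the table and filter are illustrative.
result, err := h.Query("SELECT * FROM measurements WHERE value > 100").Execute()
if err != nil {
	return err
}
fmt.Printf("rows=%d columns=%v scanned=%d\n",
	result.RowCount(), result.ColumnsNames(), result.ScannedPoints())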
type QueryResultValueType ¶
type QueryResultValueType int64
QueryResultValueType : an enum of possible query point result types
const (
	QueryResultNone      QueryResultValueType = C.qdb_query_result_none
	QueryResultDouble    QueryResultValueType = C.qdb_query_result_double
	QueryResultBlob      QueryResultValueType = C.qdb_query_result_blob
	QueryResultInt64     QueryResultValueType = C.qdb_query_result_int64
	QueryResultString    QueryResultValueType = C.qdb_query_result_string
	QueryResultTimestamp QueryResultValueType = C.qdb_query_result_timestamp
	QueryResultCount     QueryResultValueType = C.qdb_query_result_count
)
QueryResultNone : query result value none
QueryResultDouble : query result value double
QueryResultBlob : query result value blob
QueryResultInt64 : query result value int64
QueryResultString : query result value string
QueryResultSymbol : query result value symbol
QueryResultTimestamp : query result value timestamp
QueryResultCount : query result value count
type Reader ¶ added in v3.14.2
type Reader struct {
// contains filtered or unexported fields
}
Reader iterates over bulk data from QuasarDB.
func NewReader ¶ added in v3.14.2
func NewReader(h HandleType, options ReaderOptions) (Reader, error)
NewReader creates a reader for bulk data retrieval.
func (*Reader) Batch ¶ added in v3.14.2
func (r *Reader) Batch() ReaderChunk
Batch returns the current batch.
func (*Reader) FetchAll ¶ added in v3.14.2
func (r *Reader) FetchAll() (ReaderChunk, error)
FetchAll retrieves all data as a single batch.
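Example (sketch): read an entire table in one batch; "measurements" is an illustrative table name.
reader, err := qdb.NewReader(h, qdb.NewReaderDefaultOptions([]string{"measurements"}))
if err != nil {
	return err
}
batch, err := reader.FetchAll()
if err != nil {
	return err
}
fmt.Println("rows read:", batch.RowCount())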
type ReaderChunk ¶ added in v3.14.2
type ReaderChunk struct {
// contains filtered or unexported fields
}
ReaderChunk holds a batch of rows read from table.
func NewReaderChunk ¶ added in v3.14.2
func NewReaderChunk(cols []ReaderColumn, idx []time.Time, data []ColumnData) (ReaderChunk, error)
NewReaderChunk creates a chunk from columns, index, and data.
func (*ReaderChunk) Clear ¶ added in v3.14.2
func (rc *ReaderChunk) Clear()
Clear resets the chunk to empty state.
func (*ReaderChunk) Empty ¶ added in v3.14.2
func (rc *ReaderChunk) Empty() bool
Empty reports if the chunk has no data.
func (*ReaderChunk) EnsureCapacity ¶ added in v3.14.2
func (rc *ReaderChunk) EnsureCapacity(n int)
EnsureCapacity pre-allocates space for n rows.
func (*ReaderChunk) RowCount ¶ added in v3.14.2
func (rc *ReaderChunk) RowCount() int
RowCount returns the number of rows in the chunk.
type ReaderColumn ¶ added in v3.14.2
type ReaderColumn struct {
// contains filtered or unexported fields
}
ReaderColumn holds column metadata for reading.
func NewReaderColumn ¶ added in v3.14.2
func NewReaderColumn(n string, t TsColumnType) (ReaderColumn, error)
NewReaderColumn creates column metadata.
func NewReaderColumnFromNative ¶ added in v3.14.2
func NewReaderColumnFromNative(n *C.char, t C.qdb_ts_column_type_t) (ReaderColumn, error)
NewReaderColumnFromNative creates column from C types.
func (ReaderColumn) Name ¶ added in v3.14.2
func (rc ReaderColumn) Name() string
Name returns the column name.
func (ReaderColumn) Type ¶ added in v3.14.2
func (rc ReaderColumn) Type() TsColumnType
Type returns the column data type.
type ReaderOptions ¶ added in v3.14.2
type ReaderOptions struct {
// contains filtered or unexported fields
}
ReaderOptions configures bulk read operations.
func NewReaderDefaultOptions ¶ added in v3.14.2
func NewReaderDefaultOptions(tables []string) ReaderOptions
NewReaderDefaultOptions creates options for reading entire tables.
func NewReaderOptions ¶ added in v3.14.2
func NewReaderOptions() ReaderOptions
NewReaderOptions creates reader options with defaults.
func (ReaderOptions) WithBatchSize ¶ added in v3.14.2
func (ro ReaderOptions) WithBatchSize(batchSize int) ReaderOptions
WithBatchSize sets max rows per fetch.
func (ReaderOptions) WithColumns ¶ added in v3.14.2
func (ro ReaderOptions) WithColumns(columns []string) ReaderOptions
WithColumns sets columns to read (empty=all).
func (ReaderOptions) WithTables ¶ added in v3.14.2
func (ro ReaderOptions) WithTables(tables []string) ReaderOptions
WithTables sets tables to read.
func (ReaderOptions) WithTimeRange ¶ added in v3.14.2
func (ro ReaderOptions) WithTimeRange(start, end time.Time) ReaderOptions
WithTimeRange sets time range [start, end).
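Example (sketch): restrict a read to two columns and one day of data; the table and column names are illustrative.
start := time.Date(2025, 1, 1, 0, 0, 0, 0, time.UTC)
opts := qdb.NewReaderOptions().
	WithTables([]string{"measurements"}).
	WithColumns([]string{"value", "quality"}).
	WithBatchSize(10000).
	WithTimeRange(start, start.Add(24*time.Hour)) // half-open: [start, end)
reader, err := qdb.NewReader(h, opts)
if err != nil {
	return err
}
// Use reader.FetchAll() or iterate batches as needed.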
type Statistics ¶
type Statistics struct {
EngineBuildDate string `json:"engine_build_date"`
EngineVersion string `json:"engine_version"`
HardwareConcurrency int64 `json:"hardware_concurrency_count"`
Memory struct {
BytesResident int64 `json:"resident_bytes"`
ResidentCount int64 `json:"resident_count"`
Physmem struct {
Used int64 `json:"used_bytes"`
Total int64 `json:"total_bytes"`
} `json:"physmem"`
VM struct {
Used int64 `json:"used_bytes"`
Total int64 `json:"total_bytes"`
} `json:"vm"`
} `json:"memory"`
Network struct {
CurrentUsersCount int64 `json:"current_users_count"`
PartitionsCount int64 `json:"partitions_count"`
Sessions struct {
AvailableCount int64 `json:"available_count"`
UnavailableCount int64 `json:"unavailable_count"`
MaxCount int64 `json:"max_count"`
} `json:"sessions"`
} `json:"network"`
NodeID string `json:"chord.node_id"`
OperatingSystem string `json:"operating_system"`
Persistence struct {
BytesRead int64 `json:"read_bytes"`
BytesWritten int64 `json:"written_bytes"`
} `json:"persistence"`
Requests struct {
BytesOut int64 `json:"out_bytes"`
SuccessesCount int64 `json:"successes_count"`
TotalCount int64 `json:"total_count"`
} `json:"requests"`
Startup int64 `json:"startup_epoch"`
}
Statistics : JSON-adaptable structure with node information
type TestTimeseriesData ¶ added in v3.14.2
type TestTimeseriesData struct {
Alias string
BlobPoints []TsBlobPoint
DoublePoints []TsDoublePoint
Int64Points []TsInt64Point
StringPoints []TsStringPoint
TimestampPoints []TsTimestampPoint
SymbolPoints []TsStringPoint
}
TestTimeseriesData bundles the alias and the sample points that newTestTimeseriesAllColumns inserts.
type Time ¶ added in v3.14.2
type Time C.qdb_time_t
Alias for `C.qdb_time_t` so it can be used as `qdb.Time` by API users
type TimeseriesEntry ¶
type TimeseriesEntry struct {
Entry
}
TimeseriesEntry : timeseries entry data type
func (TimeseriesEntry) BlobColumn ¶
func (entry TimeseriesEntry) BlobColumn(columnName string) TsBlobColumn
BlobColumn : create a column object
func (TimeseriesEntry) Bulk ¶
func (entry TimeseriesEntry) Bulk(cols ...TsColumnInfo) (*TsBulk, error)
Bulk : create a bulk object for the specified columns
If no columns are specified, it uses the server-side registered columns.
func (TimeseriesEntry) Columns ¶
func (entry TimeseriesEntry) Columns() ([]TsBlobColumn, []TsDoubleColumn, []TsInt64Column, []TsStringColumn, []TsTimestampColumn, error)
Columns : return the current columns
func (TimeseriesEntry) ColumnsInfo ¶
func (entry TimeseriesEntry) ColumnsInfo() ([]TsColumnInfo, error)
ColumnsInfo : return the current columns information
func (TimeseriesEntry) Create ¶
func (entry TimeseriesEntry) Create(shardSize time.Duration, cols ...TsColumnInfo) error
Create : create a new timeseries
The first parameter is the shard size, i.e. the time span covered by each shard.
Ex: shardSize := 24 * time.Hour
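Example (sketch): create a table with daily shards; the table and column names are illustrative.
table := h.Table("measurements")
err := table.Create(24*time.Hour,
	qdb.NewTsColumnInfo("value", qdb.TsColumnDouble),
	qdb.NewTsColumnInfo("payload", qdb.TsColumnBlob))
if err != nil {
	return err
}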
func (TimeseriesEntry) DoubleColumn ¶
func (entry TimeseriesEntry) DoubleColumn(columnName string) TsDoubleColumn
DoubleColumn : create a column object
func (TimeseriesEntry) InsertColumns ¶
func (entry TimeseriesEntry) InsertColumns(cols ...TsColumnInfo) error
InsertColumns : insert columns into an existing timeseries
func (TimeseriesEntry) Int64Column ¶
func (entry TimeseriesEntry) Int64Column(columnName string) TsInt64Column
Int64Column : create a column object
func (TimeseriesEntry) Name ¶ added in v3.14.2
func (t TimeseriesEntry) Name() string
Name : Returns the name of the table
func (TimeseriesEntry) StringColumn ¶
func (entry TimeseriesEntry) StringColumn(columnName string) TsStringColumn
StringColumn : create a column object
func (TimeseriesEntry) SymbolColumn ¶ added in v3.13.0
func (entry TimeseriesEntry) SymbolColumn(columnName, symtableName string) TsStringColumn
SymbolColumn : create a column object (the symbol table name is not set)
func (TimeseriesEntry) TimestampColumn ¶
func (entry TimeseriesEntry) TimestampColumn(columnName string) TsTimestampColumn
TimestampColumn : create a column object
type Timespec ¶ added in v3.14.2
type Timespec C.qdb_timespec_t
Alias for `C.qdb_timespec_t` so it can be used as `qdb.Timespec` by API users
type TsAggregationType ¶
type TsAggregationType C.qdb_ts_aggregation_type_t
TsAggregationType typedef of C.qdb_ts_aggregation_type
const (
	AggFirst              TsAggregationType = C.qdb_agg_first
	AggLast               TsAggregationType = C.qdb_agg_last
	AggMin                TsAggregationType = C.qdb_agg_min
	AggMax                TsAggregationType = C.qdb_agg_max
	AggArithmeticMean     TsAggregationType = C.qdb_agg_arithmetic_mean
	AggHarmonicMean       TsAggregationType = C.qdb_agg_harmonic_mean
	AggGeometricMean      TsAggregationType = C.qdb_agg_geometric_mean
	AggQuadraticMean      TsAggregationType = C.qdb_agg_quadratic_mean
	AggCount              TsAggregationType = C.qdb_agg_count
	AggSum                TsAggregationType = C.qdb_agg_sum
	AggSumOfSquares       TsAggregationType = C.qdb_agg_sum_of_squares
	AggSpread             TsAggregationType = C.qdb_agg_spread
	AggSampleVariance     TsAggregationType = C.qdb_agg_sample_variance
	AggSampleStddev       TsAggregationType = C.qdb_agg_sample_stddev
	AggPopulationVariance TsAggregationType = C.qdb_agg_population_variance
	AggPopulationStddev   TsAggregationType = C.qdb_agg_population_stddev
	AggAbsMin             TsAggregationType = C.qdb_agg_abs_min
	AggAbsMax             TsAggregationType = C.qdb_agg_abs_max
	AggProduct            TsAggregationType = C.qdb_agg_product
	AggSkewness           TsAggregationType = C.qdb_agg_skewness
	AggKurtosis           TsAggregationType = C.qdb_agg_kurtosis
)
Each type gets its value between the begin and end timestamps of aggregation
type TsBatch ¶
type TsBatch struct {
// contains filtered or unexported fields
}
TsBatch represents a batch writer for efficient bulk insertion into timeseries tables.
Batch operations significantly improve performance when inserting large amounts of data. All columns must be specified at initialization and cannot be changed afterward.
func (*TsBatch) ExtraColumns ¶
func (t *TsBatch) ExtraColumns(cols ...TsBatchColumnInfo) error
ExtraColumns : Appends columns to the current batch table
func (*TsBatch) PushFast ¶
PushFast : Fast, in-place batch push that is efficient when doing lots of small, incremental pushes.
func (*TsBatch) Release ¶
func (t *TsBatch) Release()
Release : release the memory of the batch table
func (*TsBatch) RowSetBlob ¶
RowSetBlob : Set blob at specified index in current row
func (*TsBatch) RowSetBlobNoCopy ¶
RowSetBlobNoCopy : Set blob at specified index in current row without copying it
func (*TsBatch) RowSetDouble ¶
RowSetDouble : Set double at specified index in current row
func (*TsBatch) RowSetInt64 ¶
RowSetInt64 : Set int64 at specified index in current row
func (*TsBatch) RowSetString ¶
RowSetString : Set string at specified index in current row
func (*TsBatch) RowSetStringNoCopy ¶
RowSetStringNoCopy : Set string at specified index in current row without copying it
func (*TsBatch) RowSetTimestamp ¶
RowSetTimestamp : Add a timestamp to current row
type TsBatchColumnInfo ¶
TsBatchColumnInfo : Represents one column in a timeseries. Preallocate the underlying structure with the ElementCountHint.
func NewTsBatchColumnInfo ¶
func NewTsBatchColumnInfo(timeseries, column string, hint int64) TsBatchColumnInfo
NewTsBatchColumnInfo : Creates a new TsBatchColumnInfo
func TsBatchColumnInfoToStructInfoG ¶ added in v3.14.2
func TsBatchColumnInfoToStructInfoG(t C.qdb_ts_batch_column_info_t) TsBatchColumnInfo
type TsBlobAggregation ¶
type TsBlobAggregation struct {
// contains filtered or unexported fields
}
TsBlobAggregation : Aggregation of blob type
func NewBlobAggregation ¶
func NewBlobAggregation(kind TsAggregationType, rng TsRange) *TsBlobAggregation
NewBlobAggregation : Create new timeseries blob aggregation
func TsBlobAggregationToStructG ¶ added in v3.14.2
func TsBlobAggregationToStructG(t C.qdb_ts_blob_aggregation_t) TsBlobAggregation
func (TsBlobAggregation) Count ¶
func (t TsBlobAggregation) Count() int64
Count : returns the number of points aggregated into the result
func (TsBlobAggregation) Range ¶
func (t TsBlobAggregation) Range() TsRange
Range : returns the range of the aggregation
func (TsBlobAggregation) Result ¶
func (t TsBlobAggregation) Result() TsBlobPoint
Result : result of the aggregation
func (TsBlobAggregation) Type ¶
func (t TsBlobAggregation) Type() TsAggregationType
Type : returns the type of the aggregation
type TsBlobColumn ¶
type TsBlobColumn struct {
// contains filtered or unexported fields
}
TsBlobColumn : a time series blob column
func (TsBlobColumn) Aggregate ¶
func (column TsBlobColumn) Aggregate(aggs ...*TsBlobAggregation) ([]TsBlobAggregation, error)
Aggregate : Aggregate a sub-part of the time series.
It is an error to call this function on a non-existent time series.
func (TsBlobColumn) EraseRanges ¶
func (column TsBlobColumn) EraseRanges(rgs ...TsRange) (uint64, error)
EraseRanges : erase all points in the specified ranges
func (TsBlobColumn) GetRanges ¶
func (column TsBlobColumn) GetRanges(rgs ...TsRange) ([]TsBlobPoint, error)
GetRanges : Retrieves blobs in the specified range of the time series column.
It is an error to call this function on a non-existent time series.
func (TsBlobColumn) Insert ¶
func (column TsBlobColumn) Insert(points ...TsBlobPoint) error
Insert blob points into a timeseries
type TsBlobPoint ¶
type TsBlobPoint struct {
// contains filtered or unexported fields
}
TsBlobPoint : timestamped data
func NewTsBlobPoint ¶
func NewTsBlobPoint(timestamp time.Time, value []byte) TsBlobPoint
NewTsBlobPoint : Create new timeseries blob point
func TsBlobPointToStructG ¶ added in v3.14.2
func TsBlobPointToStructG(t C.qdb_ts_blob_point) TsBlobPoint
func (TsBlobPoint) Content ¶
func (t TsBlobPoint) Content() []byte
Content : return data point content
func (TsBlobPoint) Timestamp ¶
func (t TsBlobPoint) Timestamp() time.Time
Timestamp : return data point timestamp
type TsBulk ¶
type TsBulk struct {
// contains filtered or unexported fields
}
TsBulk : A structure that allows appending data to a timeseries
func (*TsBulk) GetTimestamp ¶
GetTimestamp : gets a timestamp in row
type TsColumnInfo ¶
type TsColumnInfo struct {
// contains filtered or unexported fields
}
TsColumnInfo : column information in timeseries
func NewSymbolColumnInfo ¶ added in v3.13.0
func NewSymbolColumnInfo(columnName, symtableName string) TsColumnInfo
func NewTsColumnInfo ¶
func NewTsColumnInfo(columnName string, columnType TsColumnType) TsColumnInfo
NewTsColumnInfo : create a column info structure
func TsColumnInfoExToStructInfoG ¶ added in v3.14.2
func TsColumnInfoExToStructInfoG(t C.qdb_ts_column_info_ex_t) TsColumnInfo
func (TsColumnInfo) Symtable ¶ added in v3.13.0
func (t TsColumnInfo) Symtable() string
Symtable : return column symbol table name
type TsColumnType ¶
type TsColumnType C.qdb_ts_column_type_t
TsColumnType : Timeseries column types
const (
	TsColumnUninitialized TsColumnType = C.qdb_ts_column_uninitialized
	TsColumnBlob          TsColumnType = C.qdb_ts_column_blob
	TsColumnDouble        TsColumnType = C.qdb_ts_column_double
	TsColumnInt64         TsColumnType = C.qdb_ts_column_int64
	TsColumnString        TsColumnType = C.qdb_ts_column_string
	TsColumnTimestamp     TsColumnType = C.qdb_ts_column_timestamp
	TsColumnSymbol        TsColumnType = C.qdb_ts_column_symbol
)
Values
TsColumnDouble : column is a double point
TsColumnBlob : column is a blob point
TsColumnInt64 : column is an int64 point
TsColumnTimestamp : column is a timestamp point
TsColumnString : column is a string point
TsColumnSymbol : column is a symbol point
func (TsColumnType) AsValueType ¶ added in v3.14.2
func (v TsColumnType) AsValueType() TsValueType
func (TsColumnType) IsValid ¶ added in v3.14.2
func (v TsColumnType) IsValid() bool
Returns true if this column is valid and non-null
type TsDoubleAggregation ¶
type TsDoubleAggregation struct {
// contains filtered or unexported fields
}
TsDoubleAggregation : Aggregation of double type
func NewDoubleAggregation ¶
func NewDoubleAggregation(kind TsAggregationType, rng TsRange) *TsDoubleAggregation
NewDoubleAggregation : Create new timeseries double aggregation
func TsDoubleAggregationToStructG ¶ added in v3.14.2
func TsDoubleAggregationToStructG(t C.qdb_ts_double_aggregation_t) TsDoubleAggregation
func (TsDoubleAggregation) Count ¶
func (t TsDoubleAggregation) Count() int64
Count : returns the number of points aggregated into the result
func (TsDoubleAggregation) Range ¶
func (t TsDoubleAggregation) Range() TsRange
Range : returns the range of the aggregation
func (TsDoubleAggregation) Result ¶
func (t TsDoubleAggregation) Result() TsDoublePoint
Result : result of the aggregation
func (TsDoubleAggregation) Type ¶
func (t TsDoubleAggregation) Type() TsAggregationType
Type : returns the type of the aggregation
type TsDoubleColumn ¶
type TsDoubleColumn struct {
// contains filtered or unexported fields
}
TsDoubleColumn : a time series double column
func (TsDoubleColumn) Aggregate ¶
func (column TsDoubleColumn) Aggregate(aggs ...*TsDoubleAggregation) ([]TsDoubleAggregation, error)
Aggregate : Aggregate a sub-part of a timeseries from the specified aggregations.
It is an error to call this function on a non-existent time series.
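Example (sketch): compute an arithmetic mean over a range. rng is assumed to be a TsRange built elsewhere (its constructor is not shown in this excerpt); table and column names are illustrative.
col := h.Table("measurements").DoubleColumn("value")
agg := qdb.NewDoubleAggregation(qdb.AggArithmeticMean, rng) // rng: a previously built TsRange
results, err := col.Aggregate(agg)
if err != nil {
	return err
}
fmt.Println("mean:", results[0].Result().Content(), "over", results[0].Count(), "points")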
func (TsDoubleColumn) EraseRanges ¶
func (column TsDoubleColumn) EraseRanges(rgs ...TsRange) (uint64, error)
EraseRanges : erase all points in the specified ranges
func (TsDoubleColumn) GetRanges ¶
func (column TsDoubleColumn) GetRanges(rgs ...TsRange) ([]TsDoublePoint, error)
GetRanges : Retrieves doubles in the specified range of the time series column.
It is an error to call this function on a non-existent time series.
func (TsDoubleColumn) Insert ¶
func (column TsDoubleColumn) Insert(points ...TsDoublePoint) error
Insert double points into a timeseries
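Example (sketch): append double points to a column; table and column names are illustrative.
col := h.Table("measurements").DoubleColumn("value")
err := col.Insert(
	qdb.NewTsDoublePoint(time.Now(), 3.14),
	qdb.NewTsDoublePoint(time.Now().Add(time.Second), 2.71))
if err != nil {
	return err
}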
type TsDoublePoint ¶
type TsDoublePoint struct {
// contains filtered or unexported fields
}
TsDoublePoint : timestamped double data point
func NewTsDoublePoint ¶
func NewTsDoublePoint(timestamp time.Time, value float64) TsDoublePoint
NewTsDoublePoint : Create new timeseries double point
func TsDoublePointToStructG ¶ added in v3.14.2
func TsDoublePointToStructG(t C.qdb_ts_double_point) TsDoublePoint
func (TsDoublePoint) Content ¶
func (t TsDoublePoint) Content() float64
Content : return data point content
func (TsDoublePoint) Timestamp ¶
func (t TsDoublePoint) Timestamp() time.Time
Timestamp : return data point timestamp
type TsInt64Aggregation ¶
type TsInt64Aggregation struct {
// contains filtered or unexported fields
}
TsInt64Aggregation : Aggregation of int64 type
func NewInt64Aggregation ¶
func NewInt64Aggregation(kind TsAggregationType, rng TsRange) *TsInt64Aggregation
NewInt64Aggregation : Create new timeseries int64 aggregation
func TsInt64AggregationToStructG ¶ added in v3.14.2
func TsInt64AggregationToStructG(t C.qdb_ts_int64_aggregation_t) TsInt64Aggregation
func (TsInt64Aggregation) Count ¶
func (t TsInt64Aggregation) Count() int64
Count : returns the number of points aggregated into the result
func (TsInt64Aggregation) Range ¶
func (t TsInt64Aggregation) Range() TsRange
Range : returns the range of the aggregation
func (TsInt64Aggregation) Result ¶
func (t TsInt64Aggregation) Result() TsInt64Point
Result : result of the aggregation
func (TsInt64Aggregation) Type ¶
func (t TsInt64Aggregation) Type() TsAggregationType
Type : returns the type of the aggregation
type TsInt64Column ¶
type TsInt64Column struct {
// contains filtered or unexported fields
}
TsInt64Column : a time series int64 column
func (TsInt64Column) Aggregate ¶
func (column TsInt64Column) Aggregate(aggs ...*TsInt64Aggregation) ([]TsInt64Aggregation, error)
Aggregate : Aggregate a sub-part of a timeseries from the specified aggregations.
It is an error to call this function on a non-existent time series.
func (TsInt64Column) EraseRanges ¶
func (column TsInt64Column) EraseRanges(rgs ...TsRange) (uint64, error)
EraseRanges : erase all points in the specified ranges
func (TsInt64Column) GetRanges ¶
func (column TsInt64Column) GetRanges(rgs ...TsRange) ([]TsInt64Point, error)
GetRanges : Retrieves int64s in the specified range of the time series column.
It is an error to call this function on a non-existent time series.
func (TsInt64Column) Insert ¶
func (column TsInt64Column) Insert(points ...TsInt64Point) error
Insert int64 points into a timeseries
type TsInt64Point ¶
type TsInt64Point struct {
// contains filtered or unexported fields
}
TsInt64Point : timestamped int64 data point
func NewTsInt64Point ¶
func NewTsInt64Point(timestamp time.Time, value int64) TsInt64Point
NewTsInt64Point : Create new timeseries int64 point
func TsInt64PointToStructG ¶ added in v3.14.2
func TsInt64PointToStructG(t C.qdb_ts_int64_point) TsInt64Point
func (TsInt64Point) Content ¶
func (t TsInt64Point) Content() int64
Content : return data point content
func (TsInt64Point) Timestamp ¶
func (t TsInt64Point) Timestamp() time.Time
Timestamp : return data point timestamp
type TsRange ¶
type TsRange struct {
// contains filtered or unexported fields
}
TsRange : timeseries range with begin and end timestamp
func TsRangeToStructG ¶ added in v3.14.2
func TsRangeToStructG(t C.qdb_ts_range_t) TsRange
type TsStringAggregation ¶
type TsStringAggregation struct {
// contains filtered or unexported fields
}
TsStringAggregation : Aggregation of string type
func NewStringAggregation ¶
func NewStringAggregation(kind TsAggregationType, rng TsRange) *TsStringAggregation
NewStringAggregation : Create new timeseries string aggregation
func TsStringAggregationToStructG ¶ added in v3.14.2
func TsStringAggregationToStructG(t C.qdb_ts_string_aggregation_t) TsStringAggregation
func (TsStringAggregation) Count ¶
func (t TsStringAggregation) Count() int64
Count : returns the number of points aggregated into the result
func (TsStringAggregation) Range ¶
func (t TsStringAggregation) Range() TsRange
Range : returns the range of the aggregation
func (TsStringAggregation) Result ¶
func (t TsStringAggregation) Result() TsStringPoint
Result : result of the aggregation
func (TsStringAggregation) Type ¶
func (t TsStringAggregation) Type() TsAggregationType
Type : returns the type of the aggregation
type TsStringColumn ¶
type TsStringColumn struct {
// contains filtered or unexported fields
}
TsStringColumn : a time series string column
func (TsStringColumn) Aggregate ¶
func (column TsStringColumn) Aggregate(aggs ...*TsStringAggregation) ([]TsStringAggregation, error)
Aggregate : Aggregate a sub-part of the time series.
It is an error to call this function on a non-existent time series.
func (TsStringColumn) EraseRanges ¶
func (column TsStringColumn) EraseRanges(rgs ...TsRange) (uint64, error)
EraseRanges : erase all points in the specified ranges
func (TsStringColumn) GetRanges ¶
func (column TsStringColumn) GetRanges(rgs ...TsRange) ([]TsStringPoint, error)
GetRanges : Retrieves strings in the specified ranges of the time series column.
It is an error to call this function on a non-existent time series.
func (TsStringColumn) Insert ¶
func (column TsStringColumn) Insert(points ...TsStringPoint) error
Insert string points into a timeseries
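A similarly hedged sketch of the aggregation path: NewStringAggregation and Aggregate are documented above, while the AggCount aggregation kind and the NewRange constructor are assumed names taken from the aggregation-type and range documentation elsewhere in this package (imports as in the earlier sketch).

// countStringPoints counts the string points in [start, end) via an aggregation.
func countStringPoints(column qdb.TsStringColumn, start, end time.Time) (int64, error) {
	// AggCount and NewRange are assumed names, documented outside this excerpt.
	agg := qdb.NewStringAggregation(qdb.AggCount, qdb.NewRange(start, end))
	results, err := column.Aggregate(agg)
	if err != nil {
		return 0, err
	}
	return results[0].Count(), nil
}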
type TsStringPoint ¶
type TsStringPoint struct {
// contains filtered or unexported fields
}
TsStringPoint : timestamped string data point
func NewTsStringPoint ¶
func NewTsStringPoint(timestamp time.Time, value string) TsStringPoint
NewTsStringPoint : Create new timeseries string point
func TsStringPointToStructG ¶ added in v3.14.2
func TsStringPointToStructG(t C.qdb_ts_string_point) TsStringPoint
func (TsStringPoint) Content ¶
func (t TsStringPoint) Content() string
Content : return data point content
func (TsStringPoint) Timestamp ¶
func (t TsStringPoint) Timestamp() time.Time
Timestamp : return data point timestamp
type TsTimestampAggregation ¶
type TsTimestampAggregation struct {
// contains filtered or unexported fields
}
TsTimestampAggregation : Aggregation of timestamp type
func NewTimestampAggregation ¶
func NewTimestampAggregation(kind TsAggregationType, rng TsRange) *TsTimestampAggregation
NewTimestampAggregation : Create new timeseries timestamp aggregation
func TsTimestampAggregationToStructG ¶ added in v3.14.2
func TsTimestampAggregationToStructG(t C.qdb_ts_timestamp_aggregation_t) TsTimestampAggregation
func (TsTimestampAggregation) Count ¶
func (t TsTimestampAggregation) Count() int64
Count : returns the number of points aggregated into the result
func (TsTimestampAggregation) Range ¶
func (t TsTimestampAggregation) Range() TsRange
Range : returns the range of the aggregation
func (TsTimestampAggregation) Result ¶
func (t TsTimestampAggregation) Result() TsTimestampPoint
Result : result of the aggregation
func (TsTimestampAggregation) Type ¶
func (t TsTimestampAggregation) Type() TsAggregationType
Type : returns the type of the aggregation
type TsTimestampColumn ¶
type TsTimestampColumn struct {
// contains filtered or unexported fields
}
TsTimestampColumn : a time series timestamp column
func (TsTimestampColumn) Aggregate ¶
func (column TsTimestampColumn) Aggregate(aggs ...*TsTimestampAggregation) ([]TsTimestampAggregation, error)
Aggregate : Aggregates a sub-part of the time series column using the specified aggregations.
It is an error to call this function on a non-existent time series.
func (TsTimestampColumn) EraseRanges ¶
func (column TsTimestampColumn) EraseRanges(rgs ...TsRange) (uint64, error)
EraseRanges : erase all points in the specified ranges
func (TsTimestampColumn) GetRanges ¶
func (column TsTimestampColumn) GetRanges(rgs ...TsRange) ([]TsTimestampPoint, error)
GetRanges : Retrieves timestamps in the specified ranges of the time series column.
It is an error to call this function on a non-existent time series.
func (TsTimestampColumn) Insert ¶
func (column TsTimestampColumn) Insert(points ...TsTimestampPoint) error
Insert timestamp points into a timeseries
type TsTimestampPoint ¶
type TsTimestampPoint struct {
// contains filtered or unexported fields
}
TsTimestampPoint : timestamped timestamp data point
func NewTsTimestampPoint ¶
func NewTsTimestampPoint(timestamp, value time.Time) TsTimestampPoint
NewTsTimestampPoint : Create new timeseries timestamp point
func TsTimestampPointToStructG ¶ added in v3.14.2
func TsTimestampPointToStructG(t C.qdb_ts_timestamp_point) TsTimestampPoint
func (TsTimestampPoint) Content ¶
func (t TsTimestampPoint) Content() time.Time
Content : return data point content
func (TsTimestampPoint) Timestamp ¶
func (t TsTimestampPoint) Timestamp() time.Time
Timestamp : return data point timestamp
type TsValueType ¶ added in v3.14.2
type TsValueType int
TsValueType : Timeseries value types
TsValueType enumerates the value types we can represent inside the database. Some values are stored differently from the column type the user sees: a good example is the Symbol column, where the user interacts with values as strings, but on disk they are stored as indexed integers.
const (
	TsValueNull TsValueType = iota
	TsValueDouble
	TsValueInt64
	TsValueTimestamp
	TsValueBlob
	TsValueString
)
func (TsValueType) AsColumnType ¶ added in v3.14.2
func (v TsValueType) AsColumnType() TsColumnType
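For example, AsColumnType can be used to derive a writer schema from observed value types; WriterColumn is documented further below, and the exact column constant returned is only assumed from the method name.

// A minimal sketch: declare a writer column whose type matches an int64 value type.
col := qdb.WriterColumn{
	ColumnName: "value",
	ColumnType: qdb.TsValueInt64.AsColumnType(), // presumably the int64 column type
}
_ = col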
type Writer ¶ added in v3.14.2
type Writer struct {
// contains filtered or unexported fields
}
Writer batches tables for bulk push.
func NewWriter ¶ added in v3.14.2
func NewWriter(options WriterOptions) Writer
NewWriter creates a writer with options.
func NewWriterWithDefaultOptions ¶ added in v3.14.2
func NewWriterWithDefaultOptions() Writer
NewWriterWithDefaultOptions creates a writer with default options.
func (*Writer) GetOptions ¶ added in v3.14.2
func (w *Writer) GetOptions() WriterOptions
GetOptions returns the writer's push configuration.
func (*Writer) GetTable ¶ added in v3.14.2
func (w *Writer) GetTable(name string) (WriterTable, error)
GetTable retrieves a table by name from the writer.
func (*Writer) Push ¶ added in v3.14.2
func (w *Writer) Push(h HandleType) error
Push writes all tables to the QuasarDB server.
CRITICAL: This method implements the 5-phase centralized pinning strategy that prevents segfaults when passing Go memory to C. This exact sequence is MANDATORY - any deviation will cause crashes in production.
- Phase 1: Prepare - Convert all data and collect PinnableBuilders
- Phase 2: Pin - Pin all Go memory at once before any C calls
- Phase 2.5: Build - Execute builder closures to populate C structures
- Phase 3: Execute - Call C API with pinned memory
- Phase 4: KeepAlive - Prevent GC collection until C is done
WHY THIS PATTERN: Go 1.23+ forbids storing Go pointers in C memory before pinning. The builder pattern defers pointer assignment until after pinning, preventing "cgo argument has Go pointer to unpinned Go pointer" panics. A generic, self-contained sketch of this pin-then-assign rule follows the list below.
This pattern is essential because:
- Go 1.23+ forbids storing Go pointers in C memory before pinning
- The C API may access gigabytes of data over 10-10000ms
- Premature GC collection would cause segfaults
- All memory must remain valid until C completely finishes
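The following stand-alone sketch is not this package's internal code; it is a generic illustration of the pin-then-assign rule described above, using runtime.Pinner (Go 1.21 or newer) against a toy C helper. The C struct and function here are invented purely for the illustration.

package main

/*
#include <stddef.h>
#include <stdint.h>

typedef struct {
    int64_t *data;
    size_t   count;
} int64_view_t;

static int64_t sum_view(const int64_view_t *v) {
    int64_t acc = 0;
    for (size_t i = 0; i < v->count; i++) {
        acc += v->data[i];
    }
    return acc;
}
*/
import "C"

import (
	"fmt"
	"runtime"
	"unsafe"
)

// sumViaC follows the pin -> build -> execute -> keep-alive ordering on a toy
// C helper; it only illustrates the rule, not this package's implementation.
func sumViaC(xs []int64) int64 {
	if len(xs) == 0 {
		return 0
	}

	var pinner runtime.Pinner
	defer pinner.Unpin()

	// Pin: the backing array is pinned before its address reaches C-visible memory.
	pinner.Pin(&xs[0])

	// Build: only after pinning is the Go pointer placed into the C structure.
	view := C.int64_view_t{
		data:  (*C.int64_t)(unsafe.Pointer(&xs[0])),
		count: C.size_t(len(xs)),
	}

	// Execute: the C call reads the pinned Go memory.
	total := int64(C.sum_view(&view))

	// KeepAlive: keep xs reachable until the C call has definitely returned.
	runtime.KeepAlive(xs)

	return total
}

func main() {
	fmt.Println(sumViaC([]int64{1, 2, 3})) // prints 6
}

Under the runtime's cgo pointer checks, removing the Pin call before the struct assignment is exactly the kind of deviation the warning above refers to.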
func (*Writer) SetTable ¶ added in v3.14.2
func (w *Writer) SetTable(t WriterTable) error
SetTable adds a table to the writer batch.
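Putting the Writer pieces together, a minimal push might look like the sketch below. The handle h is assumed to be already connected, and NewColumnDataInt64 and the TsColumnInt64 constant are assumed names based on the ColumnData and column-type documentation earlier in this package (imports as in the earlier sketch).

// pushOneTable builds a single-column table and pushes it through the writer.
func pushOneTable(h qdb.HandleType) error {
	cols := []qdb.WriterColumn{
		{ColumnName: "value", ColumnType: qdb.TsColumnInt64}, // assumed constant name
	}
	table, err := qdb.NewWriterTable("measurements", cols)
	if err != nil {
		return err
	}

	now := time.Now()
	table.SetIndex([]time.Time{now, now.Add(time.Second)})

	// NewColumnDataInt64 is assumed to be the int64 ColumnData constructor.
	if err := table.SetData(0, qdb.NewColumnDataInt64([]int64{1, 2})); err != nil {
		return err
	}

	w := qdb.NewWriterWithDefaultOptions()
	if err := w.SetTable(table); err != nil {
		return err
	}
	return w.Push(h)
}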
type WriterColumn ¶ added in v3.14.2
type WriterColumn struct {
ColumnName string // column identifier
ColumnType TsColumnType // data type
}
WriterColumn holds column metadata.
type WriterDeduplicationMode ¶ added in v3.14.2
type WriterDeduplicationMode C.qdb_exp_batch_deduplication_mode_t
WriterDeduplicationMode controls duplicate handling.
const (
	WriterDeduplicationModeDisabled WriterDeduplicationMode = C.qdb_exp_batch_deduplication_mode_disabled
	WriterDeduplicationModeDrop     WriterDeduplicationMode = C.qdb_exp_batch_deduplication_mode_drop
	WriterDeduplicationModeUpsert   WriterDeduplicationMode = C.qdb_exp_batch_deduplication_mode_upsert
)
type WriterOptions ¶ added in v3.14.2
type WriterOptions struct {
// contains filtered or unexported fields
}
WriterOptions configures batch push behavior.
func NewWriterOptions ¶ added in v3.14.2
func NewWriterOptions() WriterOptions
NewWriterOptions creates writer options with safe defaults.
func (WriterOptions) DisableAsyncClientPush ¶ added in v3.14.2
func (options WriterOptions) DisableAsyncClientPush() WriterOptions
DisableAsyncClientPush waits for server acknowledgement.
func (WriterOptions) DisableWriteThrough ¶ added in v3.14.2
func (options WriterOptions) DisableWriteThrough() WriterOptions
DisableWriteThrough allows server to cache pushed data.
func (WriterOptions) EnableAsyncClientPush ¶ added in v3.14.2
func (options WriterOptions) EnableAsyncClientPush() WriterOptions
EnableAsyncClientPush returns before server persistence.
func (WriterOptions) EnableDropDuplicates ¶ added in v3.14.2
func (options WriterOptions) EnableDropDuplicates() WriterOptions
EnableDropDuplicates activates deduplication on all columns.
func (WriterOptions) EnableDropDuplicatesOn ¶ added in v3.14.2
func (options WriterOptions) EnableDropDuplicatesOn(columns []string) WriterOptions
EnableDropDuplicatesOn activates deduplication on specific columns.
func (WriterOptions) EnableWriteThrough ¶ added in v3.14.2
func (options WriterOptions) EnableWriteThrough() WriterOptions
EnableWriteThrough bypasses server-side cache on push.
func (WriterOptions) GetDeduplicationMode ¶ added in v3.14.2
func (options WriterOptions) GetDeduplicationMode() WriterDeduplicationMode
GetDeduplicationMode returns the deduplication strategy.
func (WriterOptions) GetDropDuplicateColumns ¶ added in v3.14.2
func (options WriterOptions) GetDropDuplicateColumns() []string
GetDropDuplicateColumns returns columns used for deduplication.
func (WriterOptions) GetPushMode ¶ added in v3.14.2
func (options WriterOptions) GetPushMode() WriterPushMode
GetPushMode returns the configured push mode.
func (WriterOptions) IsAsyncClientPushEnabled ¶ added in v3.14.2
func (options WriterOptions) IsAsyncClientPushEnabled() bool
IsAsyncClientPushEnabled reports if async push is enabled.
func (WriterOptions) IsDropDuplicatesEnabled ¶ added in v3.14.2
func (options WriterOptions) IsDropDuplicatesEnabled() bool
IsDropDuplicatesEnabled reports if deduplication is enabled.
func (WriterOptions) IsValid ¶ added in v3.14.2
func (options WriterOptions) IsValid() bool
IsValid checks if the options are correctly configured.
func (WriterOptions) IsWriteThroughEnabled ¶ added in v3.14.2
func (options WriterOptions) IsWriteThroughEnabled() bool
IsWriteThroughEnabled reports if write-through is enabled.
func (WriterOptions) WithAsyncPush ¶ added in v3.14.2
func (options WriterOptions) WithAsyncPush() WriterOptions
WithAsyncPush enables async push mode.
func (WriterOptions) WithDeduplicationMode ¶ added in v3.14.2
func (options WriterOptions) WithDeduplicationMode(mode WriterDeduplicationMode) WriterOptions
WithDeduplicationMode sets the deduplication mode.
func (WriterOptions) WithFastPush ¶ added in v3.14.2
func (options WriterOptions) WithFastPush() WriterOptions
WithFastPush enables fast push mode.
func (WriterOptions) WithPushMode ¶ added in v3.14.2
func (options WriterOptions) WithPushMode(mode WriterPushMode) WriterOptions
WithPushMode sets the push mode.
func (WriterOptions) WithTransactionalPush ¶ added in v3.14.2
func (options WriterOptions) WithTransactionalPush() WriterOptions
WithTransactionalPush enables transactional push mode.
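Because each setter takes and returns a WriterOptions value, the options compose as a chain. The sketch below uses only methods documented above.

// Build options that push transactionally, bypass the server cache, and
// deduplicate on the "value" column, then hand them to a writer.
opts := qdb.NewWriterOptions().
	WithTransactionalPush().
	EnableWriteThrough().
	EnableDropDuplicatesOn([]string{"value"})

if !opts.IsValid() {
	// handle misconfiguration before pushing
}
w := qdb.NewWriter(opts)
_ = w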
type WriterPushFlag ¶ added in v3.14.2
type WriterPushFlag C.qdb_exp_batch_push_flags_t
WriterPushFlag controls batch push behavior.
const (
	WriterPushFlagNone            WriterPushFlag = C.qdb_exp_batch_push_flag_none
	WriterPushFlagWriteThrough    WriterPushFlag = C.qdb_exp_batch_push_flag_write_through
	WriterPushFlagAsyncClientPush WriterPushFlag = C.qdb_exp_batch_push_flag_asynchronous_client_push
)
type WriterPushMode ¶ added in v3.14.2
type WriterPushMode C.qdb_exp_batch_push_mode_t
WriterPushMode sets batch push consistency level.
const (
	WriterPushModeTransactional WriterPushMode = C.qdb_exp_batch_push_transactional
	WriterPushModeFast          WriterPushMode = C.qdb_exp_batch_push_fast
	WriterPushModeAsync         WriterPushMode = C.qdb_exp_batch_push_async
)
type WriterTable ¶ added in v3.14.2
type WriterTable struct {
TableName string
// contains filtered or unexported fields
}
WriterTable holds table data for batch push.
func MergeSingleTableWriters ¶ added in v3.14.2
func MergeSingleTableWriters(tables []WriterTable) (WriterTable, error)
MergeSingleTableWriters merges multiple WriterTables that share a single table name. All input tables must have identical table names and column schemas. The merge pre-allocates based on the total row count for performance. This specialized function is for cases where you already know every input targets the same table; otherwise use MergeWriterTables.
func MergeWriterTables ¶ added in v3.14.2
func MergeWriterTables(tables []WriterTable) ([]WriterTable, error)
MergeWriterTables merges multiple WriterTables by grouping them by table name. Tables with the same name must have identical schemas (columns and types). This is the primary merge function that handles all cases, including when all tables happen to be for the same table name. Returns a slice with one WriterTable per unique table name.
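A sketch of merging independently built batches before a single push, using only the functions documented here; the per-batch tables are assumed to have been populated as in the earlier writer sketch.

// pushMerged groups per-batch tables by table name and pushes them in one call.
func pushMerged(h qdb.HandleType, batches []qdb.WriterTable) error {
	merged, err := qdb.MergeWriterTables(batches)
	if err != nil {
		return err
	}
	w := qdb.NewWriterWithDefaultOptions()
	for _, t := range merged {
		if err := w.SetTable(t); err != nil {
			return err
		}
	}
	return w.Push(h)
}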
func NewWriterTable ¶ added in v3.14.2
func NewWriterTable(t string, cols []WriterColumn) (WriterTable, error)
NewWriterTable creates a table with the given columns.
func (*WriterTable) GetData ¶ added in v3.14.2
func (t *WriterTable) GetData(offset int) (ColumnData, error)
GetData retrieves column data at the given offset.
func (*WriterTable) GetIndex ¶ added in v3.14.2
func (t *WriterTable) GetIndex() []time.Time
GetIndex returns the table's timestamp index.
func (*WriterTable) GetIndexAsNative ¶ added in v3.14.2
func (t *WriterTable) GetIndexAsNative() []C.qdb_timespec_t
GetIndexAsNative returns the timestamp index as C timespecs.
func (*WriterTable) GetName ¶ added in v3.14.2
func (t *WriterTable) GetName() string
GetName returns the table name.
func (*WriterTable) RowCount ¶ added in v3.14.2
func (t *WriterTable) RowCount() int
RowCount returns the number of rows in the table.
func (*WriterTable) SetData ¶ added in v3.14.2
func (t *WriterTable) SetData(offset int, xs ColumnData) error
SetData sets column data at the given offset.
func (*WriterTable) SetDatas ¶ added in v3.14.2
func (t *WriterTable) SetDatas(xs []ColumnData) error
SetDatas sets data for all columns.
func (*WriterTable) SetIndex ¶ added in v3.14.2
func (t *WriterTable) SetIndex(idx []time.Time)
SetIndex sets the table's timestamp index.
func (*WriterTable) SetIndexFromNative ¶ added in v3.14.2
func (t *WriterTable) SetIndexFromNative(idx []C.qdb_timespec_t)
SetIndexFromNative sets the table's timestamp index from C timespecs.
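The getters above make a populated WriterTable inspectable before a push. This sketch assumes the column at offset 0 holds int64 data and uses GetColumnDataInt64 from the package index (imports as in the earlier sketches).

// describeTable prints the table name, its row index, and the first column's values.
func describeTable(t *qdb.WriterTable) error {
	fmt.Println(t.GetName(), "rows:", t.RowCount())
	for i, ts := range t.GetIndex() {
		fmt.Println("row", i, "at", ts)
	}

	cd, err := t.GetData(0) // offset 0 is assumed to be an int64 column
	if err != nil {
		return err
	}
	values, err := qdb.GetColumnDataInt64(cd)
	if err != nil {
		return err
	}
	fmt.Println("values:", values)
	return nil
}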
Source Files
¶
- cluster.go
- column_data.go
- constants.go
- direct.go
- entry.go
- entry_blob.go
- entry_integer.go
- entry_timeseries_blob.go
- entry_timeseries_common.go
- entry_timeseries_double.go
- entry_timeseries_int64.go
- entry_timeseries_string.go
- entry_timeseries_timestamp.go
- error.go
- find.go
- handle.go
- handle_options.go
- json_objects.go
- library_link.go
- logger.go
- node.go
- pinnable.go
- qdb_log_cgo.go
- query.go
- reader.go
- statistics.go
- test_utils.go
- time.go
- utils.go
- writer.go
- writer_options.go
- writer_table.go