mirror of
https://github.com/facebook/rocksdb.git
synced 2026-03-20 06:24:22 +00:00
Benchmarking tools
Andrew Kryczka edited this page 2024-01-08 10:12:20 -08:00
db_bench
db_bench is the main tool that is used to benchmark RocksDB's performance. RocksDB inherited db_bench from LevelDB, and enhanced it to support many additional options. db_bench supports many benchmarks to generate different types of workloads, and its various options can be used to control the tests.
If you are just getting started with db_bench, here are a few things you can try:
- Start with a simple benchmark like fillseq (or fillrandom) to create a database and fill it with some data
./db_bench --benchmarks="fillseq"
If you want more stats, add the meta operator "stats" and the --statistics flag.
./db_bench --benchmarks="fillseq,stats" --statistics
- Read the data back
./db_bench --benchmarks="readrandom" --use_existing_db
You can also combine multiple benchmarks in the string that is passed to --benchmarks so that they run sequentially. Example:
./db_bench --benchmarks="fillseq,readrandom,readseq"
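Beyond the benchmark names, db_bench accepts flags that control the size and shape of the workload. A sketch of a fuller invocation, combining the sequence above with common sizing flags (--num, --value_size, --threads, and --db are standard db_bench options; the particular values here are only illustrative):

```shell
# Fill 1M keys with 100-byte values into an explicit DB directory,
# then read them back randomly with 4 threads, printing stats at the end.
./db_bench --db=/tmp/db_bench_test \
           --benchmarks="fillseq,readrandom,stats" \
           --num=1000000 \
           --value_size=100 \
           --threads=4 \
           --statistics
```

Running with --db pointed at a persistent directory lets later invocations reuse the same database via --use_existing_db.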
More in-depth examples of db_bench usage can be found here and here.
Benchmarks List:
fillseq -- write N values in sequential key order in async mode
fillseqdeterministic -- write N values in the specified key order and keep the shape of the LSM tree
fillrandom -- write N values in random key order in async mode
filluniquerandomdeterministic -- write N values in a random key order and keep the shape of the LSM tree
overwrite -- overwrite N values in random key order in async mode
fillsync -- write N/100 values in random key order in sync mode
fill100K -- write N/1000 100K values in random order in async mode
deleteseq -- delete N keys in sequential order
deleterandom -- delete N keys in random order
readseq -- read N times sequentially
readtocache -- 1 thread reading database sequentially
readreverse -- read N times in reverse order
readrandom -- read N times in random order
readmissing -- read N missing keys in random order
readwhilewriting -- 1 writer, N threads doing random reads
readwhilemerging -- 1 merger, N threads doing random reads
readrandomwriterandom -- N threads doing random-read, random-write
prefixscanrandom -- prefix scan N times in random order
updaterandom -- N threads doing read-modify-write for random keys
appendrandom -- N threads doing read-modify-write with growing values
mergerandom -- same as updaterandom/appendrandom using merge operator. Must be used with merge_operator
readrandommergerandom -- perform N random read-or-merge operations. Must be used with merge_operator
newiterator -- repeated iterator creation
seekrandom -- N random seeks, call Next seek_nexts times per seek
seekrandomwhilewriting -- seekrandom and 1 thread doing overwrite
seekrandomwhilemerging -- seekrandom and 1 thread doing merge
crc32c -- repeated crc32c of 4K of data
xxhash -- repeated xxHash of 4K of data
acquireload -- load N*1000 times
fillseekseq -- write N values in sequential key, then read them by seeking to each key
randomtransaction -- execute N random transactions and verify correctness
randomreplacekeys -- randomly replaces N keys by deleting the old version and putting the new version
timeseries -- 1 writer generates time series data and multiple readers doing random reads on id
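As one example of using a mixed benchmark from the list above, readwhilewriting runs one writer alongside N reader threads against an existing database; a sketch (the --threads and --duration flags are standard db_bench options, and the values are illustrative):

```shell
# 8 reader threads plus 1 background writer for 60 seconds,
# against a database created by a previous fillseq/fillrandom run.
./db_bench --db=/tmp/db_bench_test \
           --benchmarks="readwhilewriting,stats" \
           --use_existing_db \
           --threads=8 \
           --duration=60 \
           --statistics
```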
For a list of all options:
$ ./db_bench -help
persistent_cache_bench
$ ./persistent_cache_bench -help
persistent_cache_bench:
USAGE:
./persistent_cache_bench [OPTIONS]...
...
Flags from utilities/persistent_cache/persistent_cache_bench.cc:
-benchmark (Benchmark mode) type: bool default: false
-cache_size (Cache size) type: uint64 default: 18446744073709551615
-cache_type (Cache type. (block_cache, volatile, tiered)) type: string
default: "block_cache"
-enable_pipelined_writes (Enable async writes) type: bool default: false
-iosize (Read IO size) type: int32 default: 4096
-log_path (Path for the log file) type: string default: "/tmp/log"
-nsec (nsec) type: int32 default: 10
-nthread_read (Lookup threads) type: int32 default: 1
-nthread_write (Insert threads) type: int32 default: 1
-path (Path for cachefile) type: string default: "/tmp/microbench/blkcache"
-volatile_cache_pct (Percentage of cache in memory tier.) type: int32
default: 10
-writer_iosize (File writer IO size) type: int32 default: 4096
-writer_qdepth (File writer qdepth) type: int32 default: 1
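Putting the flags above together, a sketch of an invocation (flag names are taken from the help output above; the values are illustrative):

```shell
# Run the persistent cache benchmark against the volatile (in-memory) cache
# tier for 30 seconds, with 4 lookup threads and 1 insert thread.
./persistent_cache_bench -benchmark \
                         -cache_type=volatile \
                         -nthread_read=4 \
                         -nthread_write=1 \
                         -nsec=30 \
                         -path=/tmp/microbench/blkcache \
                         -log_path=/tmp/log
```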