update doc 0.6.4

twmht 2017-04-17 00:01:19 +08:00
parent 9d87eb5bdc
commit e834f80b42
4 changed files with 58 additions and 73 deletions


@@ -215,47 +215,14 @@ Options object
| *Type:* ``[int]``
| *Default:* ``[1, 1, 1, 1, 1, 1, 1]``
.. py:attribute:: expanded_compaction_factor
.. py:attribute:: max_compaction_bytes
Maximum number of bytes in all compacted files. We avoid expanding
the lower level file set of a compaction if it would make the
total compaction cover more than
(expanded_compaction_factor * targetFileSizeLevel()) many bytes.
We try to limit the number of bytes in one compaction to be lower than this
threshold, but this is not guaranteed.
A value of 0 will be sanitized.
| *Type:* ``int``
| *Default:* ``25``
.. py:attribute:: source_compaction_factor
Maximum number of bytes in all source files to be compacted in a
single compaction run. We avoid picking too many files in the
source level so that the total source bytes of the compaction
do not exceed
(source_compaction_factor * targetFileSizeLevel()) many bytes.
If 1, pick one max-file-size amount of data as the source of
a compaction.
| *Type:* ``int``
| *Default:* ``1``
.. py:attribute:: max_grandparent_overlap_factor
Controls the maximum bytes of overlap with the grandparent level (i.e., level+2)
before we stop building a single file in a level->level+1 compaction.
| *Type:* ``int``
| *Default:* ``10``
.. py:attribute:: disable_data_sync
If true, the contents of data files are not synced
to stable storage. Their contents remain in the OS buffers until the
OS decides to flush them. This option is good for bulk-loading
of data. Once the bulk-loading is complete, please issue a
sync to the OS to flush all dirty buffers to stable storage.
| *Type:* ``bool``
| *Default:* ``False``
| *Default:* ``target_file_size_base * 25``
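A minimal sketch that sets this cap explicitly (the 64 MB figure and the
database path are illustrative, not recommendations)::

    import rocksdb

    opts = rocksdb.Options()
    opts.create_if_missing = True
    # cap the total bytes written by a single compaction at roughly 64 MB
    opts.max_compaction_bytes = 64 * 1024 ** 2
    db = rocksdb.DB('example.db', opts)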
.. py:attribute:: use_fsync
@@ -447,12 +414,6 @@ Options object
| *Type:* ``bool``
| *Default:* ``True``
.. py:attribute:: allow_os_buffer
Data being read from file storage may be buffered in the OS.
| *Type:* ``bool``
| *Default:* ``True``
.. py:attribute:: allow_mmap_reads
@@ -517,22 +478,24 @@ Options object
| *Type:* ``int``
| *Default:* ``0``
.. py:attribute:: verify_checksums_in_compaction
If ``True``, compaction will verify the checksum on every read that
happens as part of compaction.
| *Type:* ``bool``
| *Default:* ``True``
.. py:attribute:: compaction_style
The compaction style. Could be set to ``"level"`` to use level-style
compaction. For universal-style compaction use ``"universal"``.
compaction. For universal-style compaction use ``"universal"``. For
FIFO compaction use ``"fifo"``. To disable automatic compaction use ``"none"``.
| *Type:* ``string``
| *Default:* ``level``
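A minimal sketch selecting FIFO compaction (the database path is
illustrative)::

    import rocksdb

    opts = rocksdb.Options()
    opts.create_if_missing = True
    # one of 'level' (default), 'universal', 'fifo', 'none'
    opts.compaction_style = 'fifo'
    db = rocksdb.DB('fifo.db', opts)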
.. py:attribute:: compaction_pri
If compaction_style is level-based (kCompactionStyleLevel), this controls
which files within each level are prioritized to be picked for compaction.
| *Type:* Member of :py:class:`rocksdb.CompactionPri`
| *Default:* :py:attr:`rocksdb.CompactionPri.kByCompensatedSize`
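A minimal sketch selecting the minimum-overlapping-ratio heuristic under
level-style compaction (the database path is illustrative)::

    import rocksdb

    opts = rocksdb.Options()
    opts.create_if_missing = True
    opts.compaction_style = 'level'
    opts.compaction_pri = rocksdb.CompactionPri.kMinOverlappingRatio
    db = rocksdb.DB('level.db', opts)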
.. py:attribute:: compaction_options_universal
Options to use for universal-style compaction. They only make sense if
@@ -603,15 +566,6 @@ Options object
opts = rocksdb.Options()
opts.compaction_options_universal = {'stop_style': 'similar_size'}
.. py:attribute:: filter_deletes
Use the KeyMayExist API to filter deletes when this is true.
If KeyMayExist returns false, i.e. the key definitely does not exist, then
the delete is a no-op. KeyMayExist only incurs an in-memory lookup.
This optimization avoids writing the delete to storage when appropriate.
| *Type:* ``bool``
| *Default:* ``False``
.. py:attribute:: max_sequential_skip_in_iterations
@@ -726,6 +680,18 @@ Options object
*Default:* ``None``
CompactionPri
================
.. py:class:: rocksdb.CompactionPri
Defines the supported compaction priorities
.. py:attribute:: kByCompensatedSize
.. py:attribute:: kOldestLargestSeqFirst
.. py:attribute:: kOldestSmallestSeqFirst
.. py:attribute:: kMinOverlappingRatio
CompressionTypes
================
@@ -739,6 +705,10 @@ CompressionTypes
.. py:attribute:: bzip2_compression
.. py:attribute:: lz4_compression
.. py:attribute:: lz4hc_compression
.. py:attribute:: xpress_compression
.. py:attribute:: zstd_compression
.. py:attribute:: zstdnotfinal_compression
.. py:attribute:: disable_compression
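A minimal sketch selecting a compression type (this assumes the underlying
RocksDB build was compiled with lz4 support; otherwise opening the database
fails)::

    import rocksdb

    opts = rocksdb.Options()
    opts.create_if_missing = True
    # requires an lz4-enabled RocksDB build
    opts.compression = rocksdb.CompressionType.lz4_compression
    db = rocksdb.DB('compressed.db', opts)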
BytewiseComparator
==================


@@ -1,6 +1,6 @@
# -*- coding: utf-8 -*-
#
# pyrocksdb documentation build configuration file, created by
# python-rocksdb documentation build configuration file, created by
# sphinx-quickstart on Tue Dec 31 12:50:54 2013.
#
# This file is execfile()d with the current directory set to its
@@ -47,7 +47,7 @@ source_suffix = '.rst'
master_doc = 'index'
# General information about the project.
project = u'pyrocksdb'
project = u'python-rocksdb'
copyright = u'2014, sh'
# The version info for the project you're documenting, acts as replacement for
@@ -55,9 +55,9 @@ copyright = u'2014, sh'
# built documents.
#
# The short X.Y version.
version = '0.4'
version = '0.6'
# The full version, including alpha/beta/rc tags.
release = '0.4'
release = '0.6.4'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
@@ -180,7 +180,7 @@ html_static_path = ['_static']
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'pyrocksdbdoc'
htmlhelp_basename = 'python-rocksdbdoc'
# -- Options for LaTeX output ---------------------------------------------
@@ -200,7 +200,7 @@ latex_elements = {
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
('index', 'pyrocksdb.tex', u'pyrocksdb Documentation',
('index', 'python-rocksdb.tex', u'python-rocksdb Documentation',
u'sh', 'manual'),
]
@@ -230,7 +230,7 @@ latex_documents = [
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'pyrocksdb', u'pyrocksdb Documentation',
('index', 'python-rocksdb', u'python-rocksdb Documentation',
[u'sh'], 1)
]
@@ -244,8 +244,8 @@ man_pages = [
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'pyrocksdb', u'pyrocksdb Documentation',
u'sh', 'pyrocksdb', 'One line description of project.',
('index', 'python-rocksdb', u'python-rocksdb Documentation',
u'sh', 'python-rocksdb', 'One line description of project.',
'Miscellaneous'),
]


@@ -1,4 +1,4 @@
Welcome to pyrocksdb's documentation!
Welcome to python-rocksdb's documentation!
==========================================
Overview
@@ -11,7 +11,7 @@ Python bindings to the C++ interface of http://rocksdb.org/ using cython::
print db.get(b"a")
Tested with python2.7 and python3.4 and RocksDB version 3.12
Tested with python2.7 and python3.4 and RocksDB version 5.3.0
.. toctree::
:maxdepth: 2


@@ -1,4 +1,4 @@
Basic Usage of pyrocksdb
Basic Usage of python-rocksdb
*****************************
Open
@@ -197,6 +197,21 @@ The following example python merge operator implements a counter ::
# prints b'2'
print db.get(b"a")
We provide a set of default operators: ``uint64add``, ``put`` and ``stringappend``.
The following example uses ``uint64add``, where each operand is a ``uint64``::
import rocksdb
import struct
opts = rocksdb.Options()
opts.create_if_missing = True
opts.merge_operator = 'uint64add'
db = rocksdb.DB("test.db", opts)
# since every operand is a uint64, you need to pack it into a string
db.put(b'a', struct.pack('Q', 1000))
db.merge(b'a', struct.pack('Q', 2000))
assert struct.unpack('Q', db.get(b'a'))[0] == 3000
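Similarly, a short sketch with the default ``stringappend`` operator, which
concatenates operands (this assumes its default ``,`` delimiter)::

    import rocksdb

    opts = rocksdb.Options()
    opts.create_if_missing = True
    opts.merge_operator = 'stringappend'
    db = rocksdb.DB('append.db', opts)

    db.put(b'key', b'a')
    db.merge(b'key', b'b')
    assert db.get(b'key') == b'a,b'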
PrefixExtractor
===============