HashXHistoryPrefixRow issue with endianness of TxNums written to DB #91
Python scribe, when run on my machine (macOS ARM64), produces little-endian txnums for the hashXhistory prefix. (I just looked up M1 Mac behavior, and it's little-endian by default.) But some more exotic platform could write txnums in big-endian fashion.
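A minimal sketch of the underlying behavior: `array.array` serializes its items in the host machine's native byte order, so the same values produce different bytes on little- vs big-endian platforms.

```python
import array
import sys

# array.array('I') packs unsigned ints in the machine's NATIVE byte order;
# nothing in the serialized bytes records which order was used.
txnums = array.array('I', [1, 2, 3])
raw = txnums.tobytes()

# On x86-64 and Apple M1 this is 'little', so raw begins 01 00 00 00.
# On a big-endian host the very same call would begin 00 00 00 01.
print(sys.byteorder, raw.hex())
```

This is why the on-disk format ends up depending on whichever machine ran scribe.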
The endianness should either be corrected to big-endian, like the other numbers in the database, OR be locked to little-endian specifically.
For now, I am correcting the HashXHistoryValue implementation (herald.go) to read TxNums in little-endian form.
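For illustration, here is a Python sketch of the same fix: decode the packed uint32 run with an explicit little-endian format instead of relying on the host's byte order. The function name `unpack_txnums_le` is hypothetical, not from the repo; the actual fix lives in herald.go.

```python
import struct

def unpack_txnums_le(raw: bytes) -> list[int]:
    """Decode a packed run of uint32 TxNums, always little-endian.

    Hypothetical helper mirroring the herald.go change: the '<' in the
    struct format string forces little-endian regardless of the platform
    the reader runs on.
    """
    count = len(raw) // 4
    return list(struct.unpack(f'<{count}I', raw))

# Bytes as written by a little-endian scribe decode back to [1, 2]:
assert unpack_txnums_le(b'\x01\x00\x00\x00\x02\x00\x00\x00') == [1, 2]
```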
This is a consequence of using an `array.array('I')`, which uses the native endianness. And I think it's actually worse: the docs say "It can be 16 bits or 32 bits depending on the platform" (https://docs.python.org/3/library/array.html). Perhaps another library offers good enough performance and the ability to specify endianness, or we could make our own fast little-endian uint32 array serializer/deserializer in Cython.
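Short of a Cython extension, one possible sketch keeps `array.array` for the bulk copy but pins the wire format to little-endian, using `byteswap()` only on big-endian hosts. The function names are hypothetical, and the `itemsize` assertion guards against the 16-bit-`'I'` case the array docs warn about:

```python
import array
import sys

def txnums_to_le_bytes(values) -> bytes:
    """Serialize uint32 TxNums as little-endian on any platform.

    array.array keeps serialization fast; byteswap() runs only on
    big-endian hosts. 'I' is assumed to be 4 bytes, which holds on
    common platforms but is not guaranteed, hence the assertion.
    """
    a = array.array('I', values)
    assert a.itemsize == 4, "platform's unsigned int is not 32 bits"
    if sys.byteorder == 'big':
        a.byteswap()  # convert native (big) order to little-endian
    return a.tobytes()

def txnums_from_le_bytes(raw: bytes) -> list[int]:
    """Inverse of txnums_to_le_bytes: little-endian bytes -> ints."""
    a = array.array('I')
    assert a.itemsize == 4, "platform's unsigned int is not 32 bits"
    a.frombytes(raw)
    if sys.byteorder == 'big':
        a.byteswap()
    return a.tolist()
```

This stays close to the current code path, so the cost on little-endian machines (the common case) is unchanged.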