Below is a list of Hive-related programming issues worked on by BlockTrades team during the past two weeks:
Hived work (blockchain node software)
sql_serializer plugin (writes blockchain data to a HAF database)
Our primary hived work during this period was focused on testing, benchmarking, and making improvements to the SQL serializer plugin:
- Sped up sql_serializer by maintaining a persistent database connection (previously it connected and disconnected between writes to the database)
- Added a timestamp field to the HAF operations table to handle an issue with virtual operations, where an operation’s effective time can differ from that of the block in which it is generated:
https://gitlab.syncad.com/hive/hive/-/merge_requests/274
https://gitlab.syncad.com/hive/psql_tools/-/merge_requests/17
- Fixed a bug where the wrong block number was sent to the hive fork manager during a fork
- sql_serializer now uses futures to start threads, for better error handling and cleanup of threads (a conceptual sketch follows after this list). Further work on this issue will be completed next week.
- Fixed another fork-related bug in sql_serializer
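To illustrate why futures simplify thread management, here is a minimal Python analogue using concurrent.futures. This is not the sql_serializer code (which is C++); it only sketches the pattern: the future joins the worker during cleanup and re-raises any worker exception in the caller.

```python
# Conceptual Python analogue of using futures for worker threads. This is NOT
# the sql_serializer implementation (which is C++); it only illustrates why
# futures make error handling and thread cleanup simpler than raw threads.
from concurrent.futures import ThreadPoolExecutor

def write_batch(batch):
    # Placeholder for a worker that writes one batch of blockchain data
    # to the database; raises on failure.
    if not batch:
        raise ValueError("empty batch")
    return len(batch)

batches = [["op1", "op2"], ["op3"]]

# The with-block joins all workers even if one of them fails (cleanup), and
# future.result() re-raises any exception from the worker thread in the caller
# (error handling).
with ThreadPoolExecutor(max_workers=2) as executor:
    futures = [executor.submit(write_batch, b) for b in batches]
    for future in futures:
        rows_written = future.result()  # propagates worker exceptions here
        print("wrote", rows_written, "items")
```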
We modified fc’s json serializer to use a string with preallocated memory instead of a stream in order to speed up replays with the sql_serializer plugin (we saw speedups of as much as 40% on some systems). We expect this optimization will also result in significant speedups in hived's API response time, but we haven’t done any benchmarks for that yet.
Miscellaneous code cleanup
- Removed the active field from post-related API responses (and code)
- Removed obsolete plugins: tags_api and follow_api
- Removed legacy plugins
- Updated CI to use rebuilt docker images containing updated certificates
- Minor doc fix and updated seed nodes
- Moved the get_impacted_accounts function into the protocol folder
- Optimization: objects related to vesting delegation encapsulated and reduced in size
Test tool improvements (new Python library for testing hived)
- Added support for connecting wallet to remote node
- Enabled passing command line arguments to wallet
- Fixed a random CI failure related to the implementation of empty directory removal
- Updated existing tests to use new TestTools interface
- Made current directory available to user (via context.get_current_directory())
- Removed main net singleton
https://gitlab.syncad.com/hive/hive/-/merge_requests/270
We're also incorporating a library called faketime into TestTools that will enable us to make our fork testing repeatable. It will also allow us to speed up hived tests by using small, pre-generated block_logs as testing fixtures. Instead of requiring us to execute a sequence of transactions to set up a test case, the test will just replay hived using the pre-generated block log to create the desired testing environment.
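As a rough sketch of what such a block_log-based fixture could look like in a test, consider the example below. The replay argument, fixture path, and wallet call shown are illustrative assumptions, not the finalized TestTools interface.

```python
# Illustrative sketch only: the replay argument, fixture path, and wallet call
# are assumptions, not the finalized TestTools interface.
from test_tools import World

def test_vote_on_pregenerated_post():
    with World() as world:
        node = world.create_init_node()
        # Instead of broadcasting a long sequence of setup transactions, replay
        # a small pre-generated block_log; faketime lets the node accept the
        # historical timestamps recorded in that block_log.
        node.run(replay_from='fixtures/accounts_with_posts/block_log')
        wallet = node.attach_wallet()
        wallet.api.vote('alice', 'bob', 'bob-first-post', 100)
```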
Command-line wallet for Hived
- Added a default value to the server-rpc-endpoint option
- Added an exit function to the wallet API
- Continued work on supporting offline operation of the cli wallet
Hivemind (2nd layer applications + social media middleware)
As mentioned previously, we’re planning to migrate to Ubuntu 20 as the recommended deployment environment for hived and hivemind. As part of this change, we’re planning to move to postgres 12 for hivemind, because this is the default version of postgres shipped with Ubuntu 20.
We’ve modified the continuous integration (CI) system to test the develop branch against postgres 12 and we’ll probably release the final version of hivemind for postgres 10 in a day or so (with support for upgrading existing hivemind installations to the latest database schema changes).
Hive Application Framework: framework for building robust and scalable Hive apps
Fixing/Optimizing HAF-based account history app (Hafah)
We’re currently optimizing and testing our first HAF-based app (code-named Hafah) that emulates the functionality of hived’s account history plugin (and ultimately will replace it). A lot of our recent benchmarks focused on testing on different systems, with different numbers of threads allocated to the sql_serializer and the Hafah plugin.
We’ve also created a fully Python-based implementation of Hafah (the indexer for the original Hafah implementation is C++-based) to better showcase how a typical HAF-based app will look.
In preliminary benchmarks, the Python-based application was about 20% slower at indexing 5M blocks than the version with the C++-based indexer, but we still need to do full benchmark testing to see how the two implementations compare over a more realistic data set (e.g. 50M+ blocks). In terms of API performance, both should perform the same, since the Python API code is shared between the two implementations.
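To give a feel for the overall shape of a HAF-based app such as the Python Hafah indexer, here is a heavily simplified sync-loop sketch. The hive.app_next_block helper and the hive.operations columns referenced below are illustrative assumptions; the real interface is defined in the HAF (psql_tools) repository.

```python
# Heavily simplified sketch of a HAF app's sync loop. The helper function and
# table/column names are illustrative assumptions, not the exact HAF schema.
import psycopg2

def process_operation(block_num, body):
    pass  # placeholder for app-specific indexing logic

def sync(db_url, context='my_app'):
    conn = psycopg2.connect(db_url)
    cur = conn.cursor()
    while True:
        # Ask the hive fork manager for the next block range that is safe for
        # this app context to process (assumed helper signature).
        cur.execute("SELECT * FROM hive.app_next_block(%s)", (context,))
        row = cur.fetchone()
        if row is None or row[0] is None:
            conn.commit()
            continue  # nothing new yet; a real app would sleep briefly here
        first, last = row
        # Read operations for the range from HAF tables and update the app's
        # own tables within the same transaction.
        cur.execute(
            "SELECT block_num, body FROM hive.operations "
            "WHERE block_num BETWEEN %s AND %s",
            (first, last))
        for block_num, body in cur.fetchall():
            process_operation(block_num, body)
        conn.commit()  # commit app state together with the processed range
```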
Multi-threading the jsonrpc server used by HAF
We’ve completed a preliminary implementation of a multi-threaded jsonrpc server for use by HAF applications (and traditional hivemind). As mentioned in my previous report, we discovered that the jsonrpc server becomes a bottleneck for API traffic at high loads when the API calls themselves are fast. In the next week we’ll begin testing, benchmarking, and optimizing it.
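As a rough illustration of the idea (this is not the HAF jsonrpc server code), handling each JSON-RPC request in its own thread keeps cheap API calls from queuing behind a single-threaded dispatch loop:

```python
# Minimal illustration of a multi-threaded JSON-RPC endpoint using only the
# Python standard library. This is NOT the HAF jsonrpc server implementation,
# just a sketch of the threading idea it addresses.
import json
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

METHODS = {
    'ping': lambda params: 'pong',  # stand-in for a fast API method
}

class JsonRpcHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get('Content-Length', 0))
        request = json.loads(self.rfile.read(length))
        method = METHODS.get(request.get('method'))
        if method is None:
            response = {'jsonrpc': '2.0', 'id': request.get('id'),
                        'error': {'code': -32601, 'message': 'Method not found'}}
        else:
            response = {'jsonrpc': '2.0', 'id': request.get('id'),
                        'result': method(request.get('params'))}
        body = json.dumps(response).encode()
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == '__main__':
    # ThreadingHTTPServer handles each request in its own thread, so fast API
    # calls are not serialized behind one another.
    ThreadingHTTPServer(('127.0.0.1', 8095), JsonRpcHandler).serve_forever()
```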
Conversion of hivemind to HAF-based app
We didn’t have a chance to work on HAF-based hivemind during the previous week as we were tied up with HAF and the HAF account history app (and one of the key devs for this work was sick), but we’ll resume work on it during the upcoming week. Preliminary analysis leads us to believe we may be able to complete the work in just a week.
Condenser (source code for hive.blog and a number of other Hive frontend web sites)
I worked with @quochuy and jsalyers on a number of condenser-related issues:
- Support for TikTok videos
- Fixes for YouTube and Instagram videos
- Fix for muted, blacklisted, and followed pages
- Major code cleanup
- Investigated issues related to Let's Encrypt certificate delisting
Upcoming work for next week
- Release a final official version of hivemind with postgres 10 support, then update hivemind CI to start testing using postgres 12 instead of 10. We also want to run a full hivemind sync to headblock against the current develop version of hived.
- Finish testing fixes and optimizations to HAF base-level components (sql_serializer and forkmanager).
- For Hafah, we’ll be 1) testing/optimizing the new multithreaded jsonrpc server, 2) further benchmarking API performance, 3) verifying results against a hived account history node, and 4) continuing to set up continuous integration testing for Hafah.
- Resume work on HAF-based hivemind. Once we’re further along with HAF-based hivemind, we’ll test it using the fork-inducing tool.
- Continue work on speedup of TestTools-based tests.
- Investigate possible changes required to increase haircut limit for Hive-backed dollars (HBD) in hardfork 26. Public discussion is still needed as to the amount to increase the limit (personally I'm inclined to either 20% or 30%, with my preference for 30% to allow for a robust expansion of HBD supply).
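For readers unfamiliar with the haircut rule: it caps the HBD debt ratio, i.e. the value of outstanding HBD relative to the value of the HIVE supply at the feed price. The numbers below are made up and the calculation is simplified, but it shows what raising the limit to 20% or 30% would mean in practice:

```python
# Purely illustrative haircut-limit arithmetic with made-up numbers; the exact
# formula and supply figures used by hived may differ in detail.
hive_supply = 380_000_000      # hypothetical HIVE virtual supply
hive_price = 0.60              # hypothetical feed price in USD
hbd_supply = 24_000_000        # hypothetical outstanding HBD

debt_ratio = hbd_supply / (hive_supply * hive_price)
print(f"debt ratio: {debt_ratio:.1%}")

for limit in (0.10, 0.20, 0.30):
    max_hbd = limit * hive_supply * hive_price
    print(f"at a {limit:.0%} haircut limit, HBD supply could grow to "
          f"about {max_hbd:,.0f} before the haircut applies")
```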