Apart from the people I work with on a daily basis to improve Hive, many of you may not know what I am doing, or that I am doing anything at all, for which I apologize.
I should have known better and posted my witness updates more frequently.
Sometimes I kind of miss @nextgencrypto’s rants, as they were always a good wake-up call ;-)
HBD 20% APR
I’m always very careful and rather conservative when it comes to such decisions, but given current conditions this seems to be a good time to go this way.
HBD is heavily underappreciated. I don’t mean its monetary value, which by design should stay roughly pegged to USD, but its utility.
It takes 1.5s on average to transfer funds from `alice` to `bob`, and neither of them pays any fees.
There’s an internal, fee-less, decentralized HBD <-> HIVE market, and two-way conversions between the two.
And of course: the time-locked savings balance.
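Curious what the savings currently pay? Here’s a minimal sketch (Python, standard library only; my guess at the simplest way to check, not the only one) that asks a public API node for the current rate. `hbd_interest_rate` comes back in basis points, so 2000 means 20% APR, and since HF25 the interest is paid only on HBD held in savings.

```python
import json
import urllib.request

# Any public Hive API node will do; this is the one described later in this post.
URL = "https://api.openhive.network"

payload = {
    "jsonrpc": "2.0",
    "method": "condenser_api.get_dynamic_global_properties",
    "params": [],
    "id": 1,
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    props = json.load(resp)["result"]

# hbd_interest_rate is in basis points: 2000 means 20.00% APR,
# paid only on HBD held in the savings balance.
print(f"HBD savings APR: {props['hbd_interest_rate'] / 100:.2f}%")
```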
Strings attached? What does that mean?
Please remember that Hive is powerful and beautiful, but also very complex.
Yes, like a cello: it’s not easy to play.
As always: do your own research.
Make sure that you know everything you need to know about HBD <-> HIVE conversions and their rules.
- When is HBD interest collected?
- What are the HIVE/HBD supply, virtual supply, debt ratio, and the “haircut” rule? (A rough sketch follows the links below.)
- Learn about the HBD stabilizer (see the DHF proposal and the @hbd.funder comments supporting it).
See also:
- @ausbitbank’s https://hive.ausbit.dev/hbd
- @dalz's What Is The Inflation From HBD Interest? How High Can It Go And Is It Sustainable?
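As a starting point for that research, here’s a rough sketch of how the debt ratio can be computed from public API data. This is my approximation of what the consensus code does (the authoritative logic lives in `hived`): express the whole HBD supply in HIVE at the median feed price and compare it to the virtual supply. When the ratio climbs too high (10% as of this writing), the “haircut” rule kicks in and conversions no longer honor the full feed price.

```python
import json
import urllib.request

URL = "https://api.openhive.network"

def rpc(method, params=None):
    """Plain JSON-RPC call against a public Hive API node."""
    payload = {"jsonrpc": "2.0", "method": method, "params": params or [], "id": 1}
    req = urllib.request.Request(
        URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["result"]

def amount(s):
    """Turn a legacy asset string like '123.456 HBD' into a float."""
    return float(s.split()[0])

props = rpc("condenser_api.get_dynamic_global_properties")
feed = rpc("condenser_api.get_current_median_history_price")

# Median feed: how many HBD one HIVE is worth.
hive_price = amount(feed["base"]) / amount(feed["quote"])

# Express the whole HBD supply in HIVE and compare it to the virtual supply.
hbd_in_hive = amount(props["current_hbd_supply"]) / hive_price
debt_ratio = hbd_in_hive / amount(props["virtual_supply"])

print(f"debt ratio: {debt_ratio:.2%}")
print(f"hbd_print_rate: {props['hbd_print_rate'] / 100:.2f}%")
```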
Upcoming HardFork
You can read a lot of details about core development in @blocktrades’ updates, or listen to the Hive Core Dev Meetings (see @howo’s posts).
Public Testnet
The current testnet instance is slowly nearing its end of life (see `TESTNET_BLOCK_LIMIT`). Once it reaches it, it will be re-deployed using an updated codebase for all the components.
The addresses of the instances are listed in my previous post.
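If you’d like to estimate how much life the current instance has left, a trivial sketch follows. The endpoint below is a placeholder (substitute one of the addresses from that post), and the 3 million figure is the usual testnet block limit mentioned in the mirrornet section below.

```python
import json
import urllib.request

# Placeholder: substitute one of the testnet API addresses from my previous post.
TESTNET_URL = "https://testnet.example.invalid"

TESTNET_BLOCK_LIMIT = 3_000_000  # assumed: the usual limit baked into testnet builds

payload = {
    "jsonrpc": "2.0",
    "method": "condenser_api.get_dynamic_global_properties",
    "params": [],
    "id": 1,
}
req = urllib.request.Request(
    TESTNET_URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    head = json.load(resp)["result"]["head_block_number"]

blocks_left = TESTNET_BLOCK_LIMIT - head
# One block every 3 seconds, so this is an upper bound on the remaining lifetime.
print(f"{blocks_left} blocks left, roughly {blocks_left * 3 / 86400:.1f} days at most")
```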
Public Mirrornet (TBD)
The mirror instance is still experimental, and there are some ongoing struggles with its full deployment.
It takes a significant amount of time to deal with such issues, which comes as no surprise: unlike the “regular” testnet, which is limited to 3 million, usually sparse, blocks, the mirrornet is just like the Hive mainnet (actually slightly more complex, because an “override key” is added everywhere).
So when a bug like this happens:
```
database.cpp:4402 _apply_block ] 10 assert_exception: Assert Exception
is_canonical( c, canon_type ): signature is not canonical
{}
elliptic_secp256k1.cpp:161 public_key
{}
database.cpp:5160 validate_block_header
database.cpp:4402 _apply_block ] next_block.block_num(): 26256743
chain_plugin.cpp:668 replay_blockchain ] 10 assert_exception: Assert Exception
is_canonical( c, canon_type ): signature is not canonical
{}
elliptic_secp256k1.cpp:161 public_key
{}
database.cpp:5160 validate_block_header
rethrow
{"next_block.block_num()":26256743}
database.cpp:4402 _apply_block
```
it takes 29 hours just to get back to that same point of the replay, and that point is merely the transition between HF19 and HF20 rules: not even halfway to the mirrornet’s head block.
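To put that in perspective, a back-of-the-envelope estimate (the head block figure is a round approximation, and the constant-rate assumption is optimistic, since later blocks are denser and replay slower):

```python
# Back-of-the-envelope replay estimate; HEAD_BLOCK is a round approximation.
REPLAYED_BLOCKS = 26_256_743   # where the assertion above fired (HF19/HF20 boundary)
REPLAY_HOURS = 29              # time it took to get there
HEAD_BLOCK = 65_000_000        # rough mainnet head block at the time of writing

rate = REPLAYED_BLOCKS / (REPLAY_HOURS * 3600)  # ~250 blocks per second
full_replay_hours = HEAD_BLOCK / rate / 3600

print(f"~{rate:.0f} blocks/s, so a full replay takes at least {full_replay_hours:.0f} hours")
```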
The mirrornet is essential for more comprehensive testing and reduces the risk of failure due to bugs in HardFork logic. It’s better to catch them before they appear on the mainnet.
The public instance was not so public (only a few developers, such as @howo, were using it for their work, since playing with Resource Credits on a regular testnet doesn’t make much sense). It is currently down, waiting for fixes and a fully validated reindex before it becomes available again.
api.openhive.network
In the near future I will be switching from the current `master` of `hived` and `hivemind` to the `HEAD` of `develop`.
It will continue to use regular hived-powered account history.
There are some significant challenges when it comes to hardware resources, as Hive is constantly growing.
A year ago, when HF25 was coming, the `block_log` size was 350GB; now it’s nearly 550GB.
A running `hived` API instance (with account history) took 825GB; now it takes 1250GB.
Of course, that’s without taking `hivemind` and its database into account, which requires an additional 650GB or so (with the peak during resync being even larger).
As you can see, we’ve almost reached the 2TB requirement for fast storage (preferably NVMe).
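Putting the numbers above together (the projection is nothing more than naive linear extrapolation of last year’s growth):

```python
# Storage figures quoted above, in GB (HF25-era vs. now).
hived_api = (825, 1250)   # hived API instance with account history
hivemind_db = 650         # hivemind database, excluding the resync peak

total_now = hived_api[1] + hivemind_db
print(f"fast storage needed now: ~{total_now} GB")  # ~1900 GB, hence the 2TB mark

# Naive linear projection one year ahead, based on last year's growth.
growth = hived_api[1] - hived_api[0]
print(f"naive estimate a year from now: ~{total_now + growth} GB")
```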
Heavy development work on HAF and HAfAH is ongoing. It will greatly improve robustness and performance, but at the cost of even greater demand for fast storage space.
Deprecating Ubuntu 18.04 LTS
As I mentioned in one of `hivemind`’s recent issues, it would be a waste of time and resources to try to support all the various mixes of software versions. We’ve been trying to stick to the most recent Ubuntu LTS release while keeping support for the previous one when possible.
That shouldn’t be a big deal in the era of virtual machines and docker containers. However, new binaries taken from https://gtg.openhive.network/get might not work on older systems.