We’re now working on the next big set of patches to address slow syncing when a new node joins the network. We ran into some minor consensus issues along the way, but luckily those are not related to the core consensus itself and instead involve the surrounding structural components.
The network has been running stable for more than two weeks now, and we’re currently focused on making node syncing faster when a new participant is catching up with block production. To do that, we’ll introduce a synchronization performance patch.
Another issue we came across was a crash that took down several boot nodes (not consensus nodes). That was likely caused by someone trying to join the network with an outdated or corrupted version of the node.
We’ve been successfully testing our first incentivized testnet with our pre-launch node operations crew on Discord, and once again we thank you all for participating; it’s been a huge help!
Over the last couple of weeks, the network has been producing fewer core consensus-related issues, with more problems coming down to the supporting infrastructure around it. We’ve now identified and patched all the issues related to performance and edge cases. You can get a better idea of the issues we’ve been running into from our latest tech update here.
Currently, the testnet is back up and running, with all the major known problems patched. This week, we’re going to push more features and tests onto our internal development network.
The network has just been reset with most of the issues patched, and we’re still working to identify and patch problems that primarily have to do with performance as well as edge cases. We thank everyone who’s been running Taraxa nodes to help us pre-test on Discord; it’s been a huge help!
To give you a better idea of the work done over the past two weeks: the size and heterogeneous environments of the testnet right now have really been a big help in driving the discovery of these problems. Here are two example edge cases we ran into:
Moving forward with our pre-launch testing, we’re increasingly seeing more performance issues rather than core consensus ones, which is very good news. In many ways, we were able to observe these synchronization problems thanks to your active involvement in running the Taraxa nodes, which generated lots of data to help us identify and debug them. As of the time of writing, we’ve restarted the testnet and are now debugging the node synchronization and stalling issues, with a few big patches coming up this week.
The two main reasons for the node downtime surfaced last week:
The testnet is down at the moment, and we’re working on the network reset. Before crashing this weekend, it had been running stable for over two weeks, generating lots of useful data and letting us observe and troubleshoot problems and bugs (especially the one with node synchronization).
For now, we’ve wiped and restarted the testnet. To participate, please follow these instructions: https://docs.taraxa.io/node-setup/upgrade-a-node/data-reset.
We are now at the final testing stage in preparation for launching the first incentivized testnet, which will come in three distinct stages:
The staking contract has been completed and is now under review by the auditors. We’re also building the new version of the community site with the staking functionality.
Migrating the ERC-20 tokens onto the Taraxa network will enable actual staking of TARA.
We keep exploring the promise of using Taraxa’s audit log of informal transactions to track and quantify reputation from off-chain signals, such as the frequency and impact of link sharing, and other quantitative metrics and…
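To make this idea a bit more concrete, here is a minimal sketch of how off-chain signals like link-share frequency and impact might be combined into a single reputation score. The signal names, weights, and normalization below are entirely hypothetical; Taraxa’s actual metrics and scoring are still being explored.

```python
# Hypothetical sketch: combining off-chain signals into a reputation score.
# Signal names, weights, and normalization are illustrative only.

from dataclasses import dataclass


@dataclass
class OffChainSignals:
    link_shares: int        # how often the participant shared links
    avg_link_impact: float  # e.g., average reactions per share, scaled to 0..1
    endorsements: int       # other quantitative metrics would slot in here


def reputation_score(s: OffChainSignals) -> float:
    """Weighted combination of normalized signals, clamped to [0, 1]."""
    # Normalize unbounded counts with x / (x + k) so heavy activity
    # saturates instead of dominating the score.
    share_term = s.link_shares / (s.link_shares + 10)
    endorse_term = s.endorsements / (s.endorsements + 5)
    score = 0.5 * share_term + 0.3 * s.avg_link_impact + 0.2 * endorse_term
    return max(0.0, min(1.0, score))


print(reputation_score(OffChainSignals(link_shares=40, avg_link_impact=0.6, endorsements=5)))
# → 0.68
```

The saturating normalization is one simple way to keep a single very active account from dominating the metric; a production design would also need sybil resistance, which this sketch deliberately ignores.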
We’re now at the final testing stage in preparation for the launch of the incentivized Taraxa testnet. The testnet is back up and running, with some minor issues when resyncing nodes. You can join our node pre-testing crew on Discord, which is running a total of 26 nodes and helping us spot and troubleshoot problems. On to the updates!
On the application side, we’re now in the process of rewriting Marinate’s UI and building out an open API to allow for integrations with popular messenger platforms, such as Telegram.
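As a rough illustration of what such a messenger integration could look like, a small helper might translate an application event into a message payload for a Telegram-style bot endpoint. The event fields, function name, and payload shape below are purely hypothetical; the actual open API is still being built.

```python
# Hypothetical sketch: shaping an application event into a messenger payload.
# Field names and the payload format are illustrative, not Taraxa's real API.

def build_message_payload(chat_id: str, event: dict) -> dict:
    """Turn an event dict into a payload a Telegram-style bot API could send."""
    text = f"[{event['severity'].upper()}] {event['source']}: {event['message']}"
    return {
        "chat_id": chat_id,
        "text": text,
        # Silence low-priority notifications; many bot APIs support this flag.
        "disable_notification": event["severity"] == "info",
    }


payload = build_message_payload(
    "demo-chat",
    {"severity": "warning", "source": "boot-node-3", "message": "resync started"},
)
print(payload["text"])
# → [WARNING] boot-node-3: resync started
```

Keeping the payload construction separate from the HTTP delivery makes it easy to support several messenger platforms behind one open API, with each platform adapter handling only the transport.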