Anchor Hardfork Update: New point release
Update: A new release (Oxen 11.1.1) became available on February 28th, resolving several issues with Oxen 11.1.0.
Oxen 11.1.1 resolves a bug that prevented pulse blocks from being produced when no L2 provider was configured, which could lead to node decommissioning after 1-2 days.
Additionally, a logging issue has been fixed to ensure log levels and file settings are properly respected.
Lastly, we've addressed a startup bug where slow servers, particularly those restarting for the first time after upgrading from Oxen 10, could be repeatedly killed before completing the early rescan process. These fixes enhance node reliability and prevent unnecessary downtime.
Oxen is announcing changes to the rollout of the Oxen Anchor Hardfork. A new point release (version 11.1.0) will be released in advance of the Anchor Hardfork release.
Oxen 11.1.0 will contain all of the additional functionality which was slated for release in the Oxen Anchor Hardfork (now version 11.2.0), but will not trigger a hardfork. This functionality is described in detail here.
The decision to add the Oxen 11.1.0 point release was made out of caution: the Anchor Hardfork disables new Service Node registrations, meaning any nodes which are deregistered will not be able to re-register until Session Token's mainnet launch.
Oxen 11.1.0 will be a non-mandatory upgrade, and it will not hardcode any values for the eventual Anchor Hardfork.
This point release offers the opportunity to verify the proper function of new features in the Oxen mainnet environment without disabling new Service Node registrations. This provides more flexibility for both Session Contributors and the Oxen community to test the significant adjustments packaged in the new binaries (and catch any issues while Service Nodes can still be re-registered).
We have also received feedback from some node operators running multiple nodes who are concerned about the amount of requests which will be made to Arbitrum RPC providers. Over the next week we will be looking into streamlining L2 request caching to reduce the number of requests necessary for multi-node operators. This will ensure that we have more robust mechanisms in place for when the L2 tracker becomes mandatory.
When will you release the Oxen 11.1.0 binaries?
Oxen 11.1.0 is currently anticipated to be released on February 25. Developers are working to resolve a few technical issues before releasing 11.1.0.
The first relates to higher memory usage induced by storing snapshots of the Oxen blockchain.
Additionally, in test builds we have observed that nodes restarting oxend perform a very slow rescan of the chain, which can cause a Service Node to be deregistered if the rescan takes too long.
Fixes to both of these issues have already been identified, clearing the way for release of Oxen 11.1.0 next week.
Is Oxen 11.1.0 a mandatory upgrade?
Oxen 11.1.0 is not a mandatory upgrade. However, operators are strongly encouraged to upgrade in order to test the migration candidate. This will also allow operators to experiment with new features, such as connecting to an Arbitrum One RPC node, before they become strict requirements (after the Anchor Hardfork).
When will the Oxen Anchor Hardfork Binaries be released?
The release date of the Oxen Anchor Hardfork binaries is pending the success of the migration candidate (Oxen 11.1.0). Currently, the release and live testing of 11.1.0 is considered part of the testing allowance originally scheduled for the period before the Landing Hardfork.
How do I connect to an Arbitrum One RPC node?
While it is not required to connect to an Arbitrum One RPC node for this release, this is a good time for operators to learn how to do so. Operators who have participated in the Session Node testnet may already be familiar with this requirement using the Arbitrum Sepolia network.
There are two main options for fulfilling this requirement:
Public RPC providers
Running your own Arbitrum node
Public RPC providers
Public RPC providers handle the complexity and overhead of running an Arbitrum One node, offering a public endpoint you can query. A comprehensive list of official and third-party RPC providers for Arbitrum can be found here:
https://docs.arbitrum.io/build-decentralized-apps/reference/node-providers
You can sign up for a free account with most of these providers, which typically offer free usage up to a certain limit. In most cases, their free tiers will support up to 2 or 3 Session Nodes without exceeding usage limits. If you need to support more nodes, you can either register for multiple providers and use different endpoints for each set of nodes, or consider upgrading to a paid plan for higher usage.
Note: If you are operating several nodes (more than 2-3 per public provider), you might want to consider deploying a caching server, such as json-rpc-cache-proxy. Caching common queries significantly reduces the number of calls made to your RPC provider, improving performance and reducing costs.
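Before pointing your node at an endpoint, it is worth verifying that the endpoint responds and is actually serving Arbitrum One. A quick sketch (substitute your own provider URL; the official public Arbitrum endpoint is used here as a stand-in) is to request `eth_chainId` over standard JSON-RPC:

```shell
# Substitute the endpoint URL from your provider.
RPC_URL="https://arb1.arbitrum.io/rpc"

# Ask the endpoint for its chain ID via a standard JSON-RPC call.
curl -s --max-time 10 -X POST "$RPC_URL" \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"eth_chainId","params":[]}'

# Arbitrum One's chain ID is 42161, i.e. 0xa4b1 in hex, so the
# response should contain "result":"0xa4b1".
```

Note that public free tiers are usually rate limited, so avoid running checks like this in a tight loop.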
Running your own Arbitrum Node
If you are staking multiple nodes or prefer not to rely on a public RPC provider, you may want to run your own Arbitrum One node. Be aware, however, that the hardware requirements for running an Arbitrum One node are very demanding.
A full guide on how to run an Arbitrum One node using Nitro can be found here:
https://docs.arbitrum.io/run-arbitrum-node/run-full-node
Upgrading
Once you have your Arbitrum node or public provider set up, you will receive a URL or local address that looks something like this, depending on your provider or setup:
https://arb-mainnet.g.alchemy.com/v2/32bfi3gb298fbb32byfb32bf
Please note that for this point release, you will not be prompted to input your L2 provider URL/address, and it is not required to do so to upgrade.
However, operators who would like to connect to an Arbitrum One RPC node to prepare for the Anchor upgrade can add a line to their config file with the L2 provider URL:
l2-provider=https://url-to-l2-provider
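Putting it together, on a standard deb-package install this might look like the following (a sketch: the `/etc/oxen/oxen.conf` path and the `oxend` systemd unit name are assumptions based on a typical deb setup; adjust for your own deployment, and substitute your real endpoint URL):

```shell
# Config file path for a typical deb install (an assumption;
# adjust to wherever your oxend reads its config from).
CONF="/etc/oxen/oxen.conf"

# Append the L2 provider line, substituting your real endpoint.
echo "l2-provider=https://url-to-l2-provider" | sudo tee -a "$CONF"

# Restart oxend so it picks up the new setting.
sudo systemctl restart oxend
```

After restarting, check your node's logs to confirm it starts cleanly with the new setting in place.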
Support
If you would like to discuss 11.1.0 or require help with your node, please reach out via the Oxen Service Nodes channel on Telegram for assistance.