My Storj node has been *knocks on wood* working flawlessly, without any hiccups, for the past several weeks.
However, there are two things that have been bothering me as a SNO (storage node operator).
Firstly, the minimum payout threshold. Ethereum transactions are costly, so it's not worth it to send tiny amounts. This means I'm not getting my rewards every month; in fact, I haven't received a payment for months. There is a way around this (an L2 solution), but I'm going to wait for the Ethereum 2.0 upgrade. It will bring shard chains that should let the Ethereum network handle more transactions and thus (hopefully) lower gas fees.
Secondly, the power consumption. I made a wild guess about my node's power draw based on its hardware components. At the current price of electricity it might not be profitable; I'd need to measure the actual consumption to be sure.
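The math behind that worry is simple. Here's a back-of-envelope sketch; the wattage and electricity price below are made-up placeholders, not measurements from my node:

```python
def monthly_cost(watts, price_per_kwh):
    """Cost of running a device 24/7 for a 30-day month."""
    kwh = watts / 1000 * 24 * 30
    return kwh * price_per_kwh

# Hypothetical example: a 30 W node at 0.25 EUR/kWh
# costs about 5.40 EUR per month.
print(round(monthly_cost(30, 0.25), 2))
```

If the node earns less than that figure in a month, it's running at a loss, which is why measuring the real draw (e.g. with a plug-in power meter) matters.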
- Search: DuckDuckGo
- Email: ProtonMail
- Browser: Firefox + containers
- Chat: Signal, Element
- Calls: Jitsi Meet
- Office: LibreOffice, CryptPad
- Passwords: Bitwarden
- 2FA: andOTP
- Analytics: Koko
- Games: GOG
- CMS: WordPress
I tried two Linux distros (Fedora, Ubuntu) as my primary OS, but the software I want to use either isn't available or is a hassle to get running. I'm back on Windows 10. At least I made sure to disable all the "spying" options Windows offers during installation.
Merkle DAGs are Awesome!
The last one was particularly interesting since it got me thinking about how to apply DAGs in the context of Red Hat (see my LinkedIn profile for more details about my work).
A Merkle DAG uses content addressing to point to data. It does deduplication really well. One large dataset can be distributed across many nodes, enabling parallel downloads. Because of content hashing, you can verify the data hasn't been tampered with. Existing graphs can be made part of new graphs, which provides a way to do versioning.
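Those properties fall out of one trick: a node's address is the hash of its content plus its links. Here's a toy sketch (the store is just a dict; a real system like IPFS uses proper CIDs and codecs, this is only an illustration):

```python
import hashlib
import json

def put(store, data, links=()):
    """Store a node addressed by the hash of its content and links."""
    node = {"data": data, "links": list(links)}
    cid = hashlib.sha256(json.dumps(node, sort_keys=True).encode()).hexdigest()
    store[cid] = node  # identical content hashes to the same key: free dedup
    return cid

def verify(store, cid):
    """Recompute the hash to check the node wasn't tampered with."""
    node = store[cid]
    return hashlib.sha256(json.dumps(node, sort_keys=True).encode()).hexdigest() == cid
```

Putting the same bytes twice yields the same address (deduplication), and any change to a node changes its hash, so `verify` catches tampering anywhere in the graph below a root you trust.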
One idea is using Merkle DAGs for distributing data to & between edge nodes. It could save a ton of storage space and bandwidth. It could make downloads faster (you wouldn't rely on a single server to download the entire dataset in sequence). Nodes could host bits of data and exchange them. It could enable versioning data (just create a new graph that points to the unchanged content in the old graph and provides a new subtree with the new content).
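The versioning part can be shown with a self-contained toy example (again just a dict as the store, with hypothetical chunk names):

```python
import hashlib
import json

def put(store, data, links=()):
    """Content-addressed store: the key is the hash of data + links."""
    node = json.dumps({"data": data, "links": list(links)}, sort_keys=True)
    cid = hashlib.sha256(node.encode()).hexdigest()
    store[cid] = node
    return cid

store = {}

# Version 1: a root pointing at two chunks.
c1 = put(store, "chunk-1")
c2 = put(store, "chunk-2")
v1 = put(store, "root", [c1, c2])

# Version 2: only chunk-2 changed. The new root links to the
# old, unchanged chunk-1 node instead of re-storing it.
c2_edited = put(store, "chunk-2 (edited)")
v2 = put(store, "root", [c1, c2_edited])

# 5 unique nodes instead of 6: the unchanged subtree is shared.
print(len(store))  # → 5
```

Both versions stay fully readable from their respective roots, yet the unchanged data exists only once in the store.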
That’s all folks, see you in a month.
See all SNO Storj reports.