Show Posts
|
There is a certain kind of joy that comes with recognizing how little you actually own in the world... and then seeing real ownership in your BTC.
I recently consolidated two UTXOs from my cold storage into one because network fees are so low (it cost about 25 cents total to get into the next block). I connected to my node (running on a headless server in my basement, connected to the world through Starlink), and only had to wait about 3 minutes to see the block come in. I searched for the transaction ID and right there I could verify for myself the movement I made and the control I have, exercised directly on the Bitcoin network, in digital form.
I didn't need a website. I didn't need a phone. I didn't need an ID. I didn't need a bank. I didn't need permission. I didn't need anyone else.
I just needed a connection to the internet... and my hardware.
I think sometimes people lose sight of just how amazing true ownership can be. I feel like I own my house, but it can be taken from me. I feel like I own my life, but that too can be taken... I only really know a few things in this world. One of them is that I have ownership of this...
it is mine.
(If you don't run a node, I highly recommend trying it... it's not as difficult as you might think. If you have never consolidated UTXOs and verified a transaction yourself from your own node, those are two things worth doing)
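For anyone curious what "verifying it yourself" looks like in practice, here's a rough sketch against your own node's JSON-RPC interface. The txid, port, and credentials are placeholders, and getrawtransaction needs either txindex=1 or the block hash passed as a third argument:

Code:
import requests

RPC_URL = "http://127.0.0.1:8332"          # local node, default mainnet RPC port
RPC_AUTH = ("rpcuser", "rpcpassword")      # whatever is in your bitcoin.conf
TXID = "your-transaction-id-here"          # e.g. the consolidation tx you broadcast

def rpc(method, *params):
    payload = {"jsonrpc": "1.0", "id": "check", "method": method, "params": list(params)}
    r = requests.post(RPC_URL, json=payload, auth=RPC_AUTH, timeout=30)
    r.raise_for_status()
    return r.json()["result"]

tx = rpc("getrawtransaction", TXID, True)   # True = verbose/decoded result
print("confirmations:", tx.get("confirmations", 0))
print("outputs:", [(v["value"], v["scriptPubKey"].get("address")) for v in tx["vout"]])

No block explorer, no third party... just your own copy of the chain answering the question.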
|
|
|
Appreciate that tip (I've added blocksxor=0). I also discovered I wasn't running a stable release of 28, so I'm updating to 29.0.
Downloading and syncing again. Should fix it.
|
|
|
I've been running a full node for a while now, with no issues. It has been synced and reliably keeping up with new blocks. I was attempting to more quickly traverse the local data history without RPC, using blockchain_parser.blockchain to ingest into a new utxos table for analysis, but whenever it tried to index the raw data the CPU spiked to almost 100% for hours with no progress.

I think I discovered the problem: it's hung up on the first block. ~/.bitcoin/blocks/blk00000.dat starts with:

00000000 3d 11 2a 31 ab c4 8b 90 |=.*1....|

That's not right, is it? Is there a way to force an update of just that block? Curious if others have a better suggestion for how to run through the data. I'm also wondering how my initial block got like this?

Edit: Doing some more investigating, it seems all of the blocks (at least the first 5) start with those same bytes. Just checked the entire folder and every single file blk*.dat starts with the magic string 3d112a31. I think this means I need to redownload/sync my entire history? Still confused how this could happen... has anyone ever run into this before? Appreciate any insights that help me avoid having to go through this resync in the future. FWIW, I'm running Bitcoin Core version v28.99.0 on Ubuntu 24.04.1 LTS.
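Edit 2: For anyone who finds this later - this turned out to be the block-file obfuscation Bitcoin Core enables by default as of v28 (the blocksxor setting). The blk*.dat files on disk are XORed with an 8-byte key stored in blocks/xor.dat, so external parsers see garbage unless they un-XOR first. A minimal check, assuming a default datadir:

Code:
from pathlib import Path

blocks_dir = Path.home() / ".bitcoin" / "blocks"
key = (blocks_dir / "xor.dat").read_bytes()          # 8-byte obfuscation key
raw = (blocks_dir / "blk00000.dat").read_bytes()[:8]

# Each byte at file offset i is XORed with key[i % len(key)]
plain = bytes(b ^ key[i % len(key)] for i, b in enumerate(raw))
print(plain.hex())   # should start with f9beb4d9, the mainnet network magic

My understanding is you can't simply flip blocksxor=0 on an already-obfuscated datadir, which matches why I ended up re-syncing.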
|
|
|
Buying USD with BTC is an interesting choice, not one I would make... but maybe the entity needed it to actually get a thing.
I would think borrowing against it to get the thing that is inflating, irrational, and inconsistent (aka USD) would be the wiser move...
|
|
|
An ETF is an IOU... in that regard, it's more like fiat than true ownership. Sure, it comes with a 'promise' it won't be rehypothecated (printed), but the point remains: you're not in possession. Just like the value in fiat isn't really yours if it can be debased/inflated without your consent.
In this regard, ironically... ETFs are sort of the institutions' trojan horse into BTC... just as BTC is a trojan horse into the traditional finance industry. The fewer people that use ETFs the better... but people are going to take "the easy" path and find out the hard way what's actually ownership and what isn't. At least it's adoption, but I don't think it's "healthy" long term. It could actually do a lot of damage if a huge pile of BTC was lost or stolen... and the govt stepped in to reconcile the way they often do (taxing everyone to make sure those with a lot don't lose it).
|
|
|
Doesn't really answer your question, but the first thing is that quantum achievements at the level of cracking P2PK addresses would have far more valuable, and obvious, targets (think major banks, the IMF, or the power grids, as some examples) before it ever got to the point of coming for Satoshi's addresses. Not only is the cryptography securing BTC harder to break than many of those other targets, but the act of defeating it makes the intention of defeating it irrelevant. That is, if you had billions of dollars to devote to attempting to become the first to overtake the leaders on the path to this objective... what is your goal? If you could even surprise the network with this kind of attack, your prize is watching it collapse? Or put another way: though you're thinking "how will quantum impact BTC," you should instead be much more worried about "how will quantum impact the world," because that second question will hit way sooner, and truly be your canary.
That said, you're right, lots of discussions have come up on this topic. My maybe-not-so-popular opinion is that when the day comes that quantum resistance is required for the BTC network, there will likely be a window of adoption that will be broadcast widely. It'll be common knowledge: if you don't move to a quantum-resilient address by (insert some date here), then you are agreeing to let the network lock down (burn, what have you) the BTC at the addresses that don't migrate by said date.
This is just one of many potential ways to address it. It's probably not the best. The network will decide what is best... running a node is your way to vote.
|
|
|
I did a print-out of the combined total of all utxos where is_spent = false and I got back 20009210.72379106... which is close, but higher than the total supply in circulation (around 19.82m). I'm hoping the difference has something to do with a certain script type not triggering the is_spent variable I track... I'm also aware that the node maintains a current list of all utxos, so maybe my best bet is to do a comparison of utxo ID fields? Curious if anything stands out to anyone on this. Here is a print-out of amounts summed by script_type:

script_type            |   cnt    |   total_amount
-----------------------+----------+------------------
pubkeyhash             | 52505234 | 6812722.99744588
witness_v0_keyhash     | 49812368 | 5950961.07712848
scripthash             | 14085469 | 4212919.07096102
pubkey                 |    45304 | 1720781.42621185
witness_v0_scripthash  |  1589794 | 1161322.78037109
witness_v1_taproot     | 58752893 |  147776.69085131
nonstandard            |   187875 |    2627.60490072
multisig               |  1872416 |      57.31497578
nulldata               |   143801 |      41.75518261
witness_unknown        |      199 |       0.00576232

Not worth doing more dust accounting until I get this table synced with the current chain... I'll do a join with the tx_inputs table to check for situations where a spend has happened but possibly didn't get reflected in my utxos table. I'll have to mark down where this happened... hopefully it won't be an ongoing issue going forward, but I guess it could be accounted for programmatically as new data comes in... making progress...
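For a ground-truth comparison, the node itself can report the exact UTXO set total via gettxoutsetinfo (which, worth noting, never includes provably unspendable OP_RETURN outputs). A rough sketch of the audit I have in mind - RPC credentials and my table/column names (utxos, amount, is_spent, block_height) are placeholders for my own schema:

Code:
import requests
import psycopg2

def rpc(method, *params):
    payload = {"jsonrpc": "1.0", "id": "audit", "method": method, "params": list(params)}
    r = requests.post("http://127.0.0.1:8332", json=payload,
                      auth=("rpcuser", "rpcpassword"), timeout=600)
    r.raise_for_status()
    return r.json()["result"]

# Node-side truth: total BTC in the UTXO set (this scan can take a few minutes)
info = rpc("gettxoutsetinfo")
print("node height:", info["height"], "node total:", info["total_amount"])

# My-table-side total, restricted to the same height so the comparison is fair
conn = psycopg2.connect(dbname="btc", user="postgres")
cur = conn.cursor()
cur.execute("""
    SELECT SUM(amount) FROM utxos
    WHERE is_spent = false AND block_height <= %s
""", (info["height"],))
print("table total:", cur.fetchone()[0])

Whatever the difference is between those two numbers is exactly what the tx_inputs join needs to explain.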
|
|
|
I'll need to do some more digging into what exactly Ordinals are. I'm still cleaning up the data. I discovered that about 140m of my utxos seem to be OP_RETURN and have an amount of 0.00000000, but are marked as unspent (or rather, were never spent after being created). I'm guessing these were just place markers for people to add comments to the chain? I ordered these by script type and 99.9% of them are of type nulldata. I'm trying to decide if I should extract these into a separate table to maintain the integrity (in case in the future I want to distinguish them for some other purpose), or simply flag them via the is_spent column to get them out of the way when I do queries on spendable utxos. I suppose I could just exclude them by the nulldata association? Thinking out loud here...

OP can verify this data by analyzing the dust accumulation over time and seeing if it spikes exactly when the ordinals started spamming the network.
The other contributors are some wallets which don't allow coin control, leading to unnecessary dust being created in the change address.
Good idea. Once I can run some more efficient queries, I'll look at clustering them by date/time.
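One thing worth noting while I clean this up: OP_RETURN (nulldata) outputs are provably unspendable, and Bitcoin Core doesn't even keep them in its UTXO set, so excluding them at ingest time seems defensible. A minimal sketch of the check I have in mind - the script is assumed to be stored as raw hex, and the "unspendable" flag column is my own invention:

Code:
OP_RETURN = 0x6a

def is_provably_unspendable(script_hex: str) -> bool:
    """True for nulldata-style outputs: the script begins with OP_RETURN."""
    script = bytes.fromhex(script_hex)
    return len(script) > 0 and script[0] == OP_RETURN

# At ingest time, either skip these rows entirely or tag them so
# "spendable UTXO" queries can filter on a single boolean column.
row = {"script_hex": "6a0b68656c6c6f20776f726c64", "amount": 0}   # OP_RETURN "hello world"
if is_provably_unspendable(row["script_hex"]):
    row["unspendable"] = True     # hypothetical flag column in my utxos table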
|
|
|
Spent the last few months exploring the raw data after setting up a node, and constructing a relational database with a goal of tinkering with analytics -- exploring mostly. My database is way too large (almost 4TB - a lot of that indexing) so I need to work on condensing the data into more relevant tables for ongoing discovery. If this isn't the right place for this, apologies. But I thought I would document some findings, and hopefully engage in some discussion or get some inspiration on what else to look for. I realize I should probably have a time series dataset as well... I'm very curious to model the hodl waves. If any of my data points below are egregiously wrong, knowing that would be good also haha. It's entirely possible I have a data integrity issue, despite my best efforts on that front.

I started looking at the utxos (I'm storing them all, and toggling is_spent to true when they are used), trying to make sense of what would be considered dust, or likely lost due to time and size. Since Bitcoin's start, I see 3.2 billion utxos, of which 179,288,217 haven't been spent (yet)... of that group, 86,898,633 of them have a balance of less than 1000 sats (0.00001 BTC, or roughly $1 USD currently - at or below network fees). The combined total of all of this unspent 'dust' is 426.3546 BTC. Already this year (2025, or since block 877279) 18.95 BTC has been added to this pile of dust. If I set the threshold to 0.0001, the total is 56.9 BTC so far this year. Worth noting my database was last updated about 20 hours ago; I need to create a cron to ingest every new block as it comes. Work in progress...

Is this 0.0001 dust accumulation primarily due to negligence? Or is some other mechanism leading to so much of it? If anyone has ideas of other things to look into, I'd love some suggestions. I think my goal at the end of this is to help others learn about BTC (and with this one, learning how not to end up with dust - by reminding people to consolidate when possible). I have been told I'm fairly good at taking something complex and breaking it down into easier-to-understand language. I'm still very much in learning mode myself... having fun.
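For anyone wanting to poke at the same thing, the dust tallies above come from queries roughly like this (a sketch against my own schema - the table/column names are mine, amounts are stored in BTC, and 877279 is just the first block of 2025 as noted above):

Code:
import psycopg2

DUST_SATS = 1000            # 0.00001 BTC threshold used above
YEAR_START_HEIGHT = 877279  # first block of 2025

conn = psycopg2.connect(dbname="btc", user="postgres")
cur = conn.cursor()

# All-time unspent dust: count and combined value under the threshold
cur.execute("""
    SELECT COUNT(*), COALESCE(SUM(amount), 0)
    FROM utxos
    WHERE is_spent = false
      AND amount < %s
""", (DUST_SATS / 1e8,))
print("all-time dust (count, BTC):", cur.fetchone())

# Dust created so far this year
cur.execute("""
    SELECT COALESCE(SUM(amount), 0)
    FROM utxos
    WHERE is_spent = false
      AND amount < %s
      AND block_height >= %s
""", (DUST_SATS / 1e8, YEAR_START_HEIGHT))
print("2025 dust (BTC):", cur.fetchone()[0])

Swapping the threshold parameter is how I got the 0.0001 number as well.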
|
|
|
I'm running a full Bitcoin node from my basement on an Ubuntu Server and I'm trying to help support the Bitcoin network by opening port 8333 for inbound connections. However, I'm using Starlink as my internet provider, and I've encountered some challenges with port forwarding due to Starlink's use of CGNAT (Carrier-Grade NAT).
Has anyone here successfully opened port 8333 on a Starlink network for a Bitcoin node? If so, could you share your approach or any advice on how to bypass CGNAT or configure port forwarding on Starlink?
Here are a few specific questions I have:
- Is it possible to get a static IP or public IP with Starlink to make port forwarding work?
- Has anyone used VPN or tunneling solutions to work around this limitation with Starlink? If so, which one worked best?
I'd appreciate any tips or guidance from anyone with experience running a Bitcoin node on Starlink!
Thanks in advance!
|
|
|
This would also explain why it's slowly getting slower. The update search across a growing utxo table is getting linearly longer and longer... I'm up to 700m utxos.

Thinking out loud here: maybe you can have a look at how Bitcoin Core handles this? For each block, it checks all UTXOs in the chainstate directory, and (especially if it can cache everything) it's fast. Think of it this way: processing the data shouldn't take longer than the IBD, right?

Good point, you're absolutely right. I'll dive into that and see what I can find. An in-memory cache would alleviate most of the look-ups, letting the system focus strictly on the writing as it goes. As long as I can find a graceful way to handle crashes or abrupt stops of the script, so as not to lose the stored memory that hasn't yet been applied/updated.
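Sketching what I mean by the cache - loosely inspired by how Core keeps recent coins in its dbcache instead of hitting disk per input. This is my own simplified version, and the names are made up:

Code:
# Idea: buffer new outputs in memory, keyed by (txid, vout). Most outputs are
# spent soon after creation, so many spends can be marked in the buffer before
# the row is ever written - no per-input SELECT/UPDATE against Postgres.

utxo_buffer = {}        # (txid, vout) -> row dict, not yet flushed to the DB
deferred_spends = []    # spends of outputs already flushed; applied in bulk later

def add_output(txid, vout, amount, script_type):
    utxo_buffer[(txid, vout)] = {
        "txid": txid, "vout": vout, "amount": amount,
        "script_type": script_type, "is_spent": False, "spent_by": None,
    }

def spend_input(txid, vout, spending_txid):
    row = utxo_buffer.get((txid, vout))
    if row is not None:
        row["is_spent"] = True          # resolved entirely in memory
        row["spent_by"] = spending_txid
    else:
        deferred_spends.append((txid, vout, spending_txid))

def flush(db_insert_rows, journal_path="deferred_spends.csv"):
    """Write buffered rows to the DB (db_insert_rows is whatever bulk insert I
    end up using) and journal deferred spends to disk, so a crash mid-run
    doesn't lose the in-memory state."""
    db_insert_rows(list(utxo_buffer.values()))
    with open(journal_path, "a") as f:
        for txid, vout, spender in deferred_spends:
            f.write(f"{txid},{vout},{spender}\n")
    utxo_buffer.clear()
    deferred_spends.clear()

Calling flush() every N blocks is the checkpoint that keeps a crash from being catastrophic.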
|
|
|
It's the writing on disk (running close to 80% every time a block gets to the insert part -- which surprises me with write speeds around 7500MB/s)... I suspect the way I am batching everything for each block all at once is what is causing this speed issue (there can be quite a few utxos per block).

Shouldn't disk writes be handled by file system cache? To compare, even on an old HDD, writing 10 million small files (yes I do crazy things like that) is almost as fast as sustained writes. Reading them is very slow, because then the disk head needs to search for each file. Writing is fast, straight from file system cache onto the disk. I'm not sure how this would work with a database, but if writing 3500 transactions takes 4 seconds, that seems slow to me.

You're right. I was focused on the inserts, but it must be the select and update to the UTXO entries when I check transaction inputs that is causing this slowdown. While processing each new block, I'm taking new input information, looking for utxos that have been spent, and updating that in the utxo table once they are, along with the reference to the transaction that caused them to become spent - this can require looking across the entire database for utxos being referenced (spent). I think, instead, I can store this input information in a file as I go and defer all updates to the ref_index and is_spent columns of the utxo table until after I process all the blocks. That will likely be much faster than updating each UTXO one-by-one during ingestion as I find inputs to link to utxos from the past. I'll have to think about that. My problem now is I've been running for almost a week, and I don't want to miss something, leaving me in a state where I would have to start over haha. I think I'm getting closer to solving this... you've been right, it shouldn't take this long. This would also explain why it's slowly getting slower. The update search across a growing utxo table is getting linearly longer and longer... I'm up to 700m utxos. Appreciate the insights!
Completely depends on the filesystem. Most of them, like ext3, ext4 and such, use journalling, so when you batch all that data to write to the disk, it actually goes into the journal first. Usually the default setting of the journal is to write deltas of the changed bytes onto the disk. This is more reliable than just doing a write-back to the disk, but it's slightly slower. You can actually change the filesystem settings to more aggressively utilize the disk cache, but it will only take effect on the next reboot.

Interesting. I'll look into this.

==== Thanks again for all the thoughts and ideas. This has been extremely helpful!
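On the "defer all the is_spent/ref_index updates" idea above, the pattern I'm considering is: journal spends to a flat file during ingestion, bulk-load that file into a staging table, then apply one set-based UPDATE at the end instead of millions of single-row updates. A rough psycopg2 sketch, with my own (made-up here) table and column names:

Code:
import psycopg2

conn = psycopg2.connect(dbname="btc", user="postgres")
cur = conn.cursor()

# 1. Bulk-load the journaled spends (txid, vout, spending_txid) via COPY
cur.execute("""
    CREATE TEMP TABLE pending_spends (
        txid          text,
        vout          integer,
        spending_txid text
    )
""")
with open("deferred_spends.csv") as f:
    cur.copy_expert(
        "COPY pending_spends (txid, vout, spending_txid) FROM STDIN WITH (FORMAT csv)", f)

# 2. One set-based update instead of one UPDATE per input
cur.execute("""
    UPDATE utxos u
    SET is_spent = true,
        spent_by_txid = p.spending_txid
    FROM pending_spends p
    WHERE u.txid = p.txid AND u.vout = p.vout
""")
print("rows marked spent:", cur.rowcount)
conn.commit()

One big join like this lets Postgres plan the work once, rather than doing an index lookup per input as the blocks stream in.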
|
|
|
I would not use a Ryzen CPU for this if you are going to be dealing with large datasets / databases and searches. An EPYC is the better choice if you want to stick with AMD, and if you want to go Intel, use a good Xeon.
Same with RAM: if you are manipulating large data sets to analyze, you start with the largest one that you may want to look at - at this point, from your last post, it's the utxos - and double it, so you would want about 262GB of RAM. You could probably get away with 256GB; at that point it's still not ideal, but you would be able to load everything into RAM instead of pulling from the drive and look at it there. If you are going to do it, do it right.
I spend a lot of time telling customers 'I told you so' when they try to do things with lower-spec hardware and then complain it's slow. For a desktop PC, having to wait a few extra seconds here and there for some things because you got an i3 instead of an i5 or i7 is one thing. Depending on what you are doing in terms of analysis, this becomes hours instead of minutes.
-Dave
Good points. (Un)fortunately this is just a hobby/interest for now; if I get past the exploration phase and want to do real things with the data, I will need to scale up.

It's definitely not the CPU (23% capacity) that is the bottleneck, or the RPC commands. It's the writing on disk (running close to 80% every time a block gets to the insert part -- which surprises me with write speeds around 7500MB/s)... I suspect the way I am batching everything for each block all at once is what is causing this speed issue (there can be quite a few utxos per block), combined with the indexing I'm using to ensure data integrity (maybe not necessary now that it's running really stable... I just didn't want partially processed blocks re-writing on restarts). I'm ingesting about 15,000-20,000 blocks a day currently... I may attempt to change this so I add sets of 1000 at a time, instead of inserting the entire block all at once after being read. That would require some logic, as some of what's being added to the transactions table needs to be referenced and linked in other tables - so order of entry matters across the entire block. But at this pace, it'll get done one way or another within a couple weeks.

I'm up to block 481,000... and I'm just past 800GB for the database - but on average it's growing about 80GB per 12,000 blocks now (in 2017 things really picked up). I estimate, based on some assumptions from running test scripts on sections ahead of me, that this will end up being approximately 3.4TB in size when I'm done, so I'm about a third of the way there. I may move some of the tables onto an external drive once I index the things I'm really interested in. Slow and steady, I'll get there eventually.
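For what it's worth, the change I'm considering on the insert side is batching rows through psycopg2's execute_values instead of row-at-a-time inserts, with one commit per block. A sketch with my table layout stubbed in (it assumes a unique index on (txid, vout), which is also what makes restarts safe):

Code:
import psycopg2
from psycopg2.extras import execute_values

conn = psycopg2.connect(dbname="btc", user="postgres")
cur = conn.cursor()

def insert_outputs(rows, page_size=1000):
    """rows: list of (txid, vout, amount, script_type, block_height) tuples."""
    execute_values(
        cur,
        """
        INSERT INTO utxos (txid, vout, amount, script_type, block_height)
        VALUES %s
        ON CONFLICT (txid, vout) DO NOTHING  -- ignore rows from partially processed blocks
        """,
        rows,
        page_size=page_size,                 # sends 1000 rows per statement
    )

# inside the block-processing loop:
#   insert_outputs(all_outputs_for_this_block)
#   conn.commit()   # one commit per block keeps half-written blocks out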
|
|
|
It seems the recent talk of taxing Bitcoin, along with statements from the ECB and the US Fed, stems from the irony of Bitcoin’s journey as a high-risk, low-reward asset that was initially overlooked by high-net-worth individuals and institutions. Typically, these entities have a significant advantage: they invest in opportunities before IPOs, influence rules and regulations, and control the flow of fiat currency. But Bitcoin offered no such early advantage. For institutions like BlackRock, an early investment of substantial capital would have been almost impractical. Imagine, for instance, if BlackRock had tried to put $500 billion into Bitcoin when it was valued at $100 per coin—it would have risked distorting the market entirely, with considerable risk and little immediate reward.
Instead, Bitcoin’s early growth was propelled by individuals willing to take on that risk, often representing a significant share of their own net worth, even up to 50%. As a result, they contributed to Bitcoin’s rise to a trillion-dollar asset, gaining influence and financial returns that institutions typically command. Now, as these institutions recognize Bitcoin’s potential and want a stake in it, they’re not pleased to find themselves following rather than leading.
That’s exactly the irony: all this talk about taxing Bitcoin seems like a strategic narrative while these institutions quietly increase their exposure. The more they ease into Bitcoin, the more they signal to others that it’s a viable asset -- effectively paving the way for broader adoption. It’s a fascinating cycle where the very entities that once hesitated now drive momentum, underscoring Bitcoin’s unique rise. It's fun to watch this play out in our time. It'll be interesting to see what history paints the 2009-2029 period as...
|
|
|
Currently, I'm at 93m unique addresses, 210m input transactions, and 250m utxos
On block 374,000 I'm at 305GB for the postgres table size. At this rate, I may run out of storage, since I believe the first 150k blocks barely had any data to them.

I count 1,365,198,853 unique addresses (based on last week's data). If that's any indication, you're looking at about 15 times more data.

Maybe significantly more... the last 11k blocks (Fall 2015) added 40GB - at that rate, I'm looking at about 2.5T total, just for this database. This is where I'm at currently...

table_name          | total_size
--------------------+-----------
utxos               | 131 GB
transaction_inputs  | 81 GB
transaction_outputs | 77 GB
transactions        | 40 GB
addresses           | 19 GB
blocks              | 100 MB
wallets             | 16 kB
|
|
|
Hmm, I expected that it would only take hours with your build. Have you set a higher dbcache as I recommended in my previous reply? Because its default setting of 450MiB isn't ideal for your machine's specs.
Oh no, I missed that one. Just turned it up to 16GB and it didn't seem to change the speed. I'm on about block 400,000 after clearing the tables and starting over 36 hours ago (I noticed something in my code was failing to capture the addresses properly and everything was being labeled as 'unknown'). I've fixed it, and verified a few other issues with the data gathering. Started everything over.

Checking the system, it appears that RAM isn't being heavily used (only 3GB), and the real culprit is Postgres (taking up 50% of CPU - while processing each block in about 2-3 seconds - sometimes it will do 3-5 very quickly). This will eventually catch up, but that's not ideal. Probably not the optimal DB choice due to indexing (I turned the indexes off, but plan to turn them back on after I'm caught up)... I should probably add records at speed and deal with deconflicting and adding in the indexes after? Or possibly move to a time series database? I guess this gets into what exactly I want to do with the data... Haven't quite sorted that out yet, I wanted to see it first...

I tried removing all my ON CONFLICT statements, but that didn't seem to improve things. I tried batching, and it didn't change the speed much either. I think this is just a Postgres insert issue. I should find a faster way to dump the data in, probably from a flat file? I don't have much experience with datasets this large, I've usually gotten away with inserts as I go...

Currently, I'm at 93m unique addresses, 210m input transactions, and 250m utxos. On block 374,000 I'm at 305GB for the postgres table size. At this rate, I may run out of storage, since I believe the first 150k blocks barely had any data to them.

It's unfair how it's advertised to OP with "150-200Mbps download" while their legal document indicates that it's typically within that range. Anyways, 25Mbps (about 3MBps) isn't too bad for IBD, unless its bandwidth is used by other processes and devices.
Yep, it's a marketing thing. I can say though it worked great for the initial sync. I pulled it all down relatively fast... it did get me to about the 1TB soft 'limit' so I'm curious to see what the throttling will be for the next 3 weeks... so far it hasn't impacted streaming.

The year is a useless indicator for progress. It's better to look at "GB processed" if you can add that to your data processing.
Indeed, same for block height... which is what I'm printing. Good idea, I'll put that in the log.

You might want to look into upgrading the processor to a 7850X3D, if your motherboard supports it. It is currently the best the market has to offer.

Software: I plan to use Ubuntu Server. Is this a solid choice for running a full node and developing analytics tools, or are there other distributions better optimized for this kind of work?
If you're going to be installing a lot of packages from package managers and you don't mind the slightly longer setup time, then I say you should change your setup to Arch Linux. They have the latest versions of everything already in the package manager, so you don't have to fiddle with Pip/Poetry/Npm/Cargo. And then there's the AUR which should cover all the software you need to get, including Bitcoin Core. PS. How many disks does your motherboard support? It might be easier to back everything up and then create an LVM with one giant volume after you add those extra disks.

Thanks for the input, I'll look into those! I can add another (I would probably get a 4TB so I'm not worried, and possibly run a time series and relational database - each for a different purpose). I think I'm all in on this project now, I'm having fun so far.
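Following up on my own question above about dumping data in from a flat file: the Postgres-native route appears to be COPY, which psycopg2 exposes through copy_expert. A sketch, again with my own column names stubbed in:

Code:
import csv
import io
import psycopg2

conn = psycopg2.connect(dbname="btc", user="postgres")
cur = conn.cursor()

def copy_rows(rows):
    """Stream rows into Postgres with COPY, skipping most per-INSERT overhead.
    rows: iterable of (txid, vout, amount, script_type, block_height)."""
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    buf.seek(0)
    cur.copy_expert(
        "COPY utxos (txid, vout, amount, script_type, block_height) "
        "FROM STDIN WITH (FORMAT csv)",
        buf,
    )

# copy_rows(outputs_for_many_blocks)
# conn.commit()

The catch, as I understand it, is that COPY has no ON CONFLICT handling, so restart-safety would mean copying into a staging table first and merging from there.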
|
|
|
The sync was faster than I expected... took about 22 hours. Created a schema tonight for the 10 structures I'm focused on, along with an ingest script... sent it to work on the dataset -- looks like I should have this all organized in about 12 more hours... maybe a little longer than I expect, hard to tell because the first 10% of the chain processed so incredibly fast (a lot less activity in the early years, I assume)... I think I'm going to need more internal storage (2TB)... if my initial schema doesn't grow in scope (it will), I'll run out of space in about 2.75-3 years - an easy problem to solve, SSDs are relatively cheap. At least I have some time to figure that out. Getting an ideal GPU is probably next... Cheers!

Edit... wow, really slowed down around 25% (I guess that would be 2013ish). Might take a couple days longer than expected
|
|
|
Starlink is the best option for one of the properties I own (and lived in until recently.) Again, no issues downloading the blockchain or running the node once it's synchronized. Your hard drive choice will have more impact on the sync speed than your bandwidth.
...
There's nothing wrong with Ubuntu, but I prefer Debian. Debian is lighter, and unlike Ubuntu it's 100% open source. Ubuntu is built on Debian, so unless it has specific features you need (which it won't for your purposes) there's no reason to choose the more bulky OS. YMMV.
...
If you don't have a lot of experience with Linux or Bitcoin, it may be best for you to start with an Umbrel system. They are easy to set up, can be run on a Raspberry Pi, and offer one-click installation of an SPV server and blockchain explorer. Once you understand what you need, then you can research how to build a node the hard way.
Appreciate your insights. I just put the hardware together today, installed the OS, cloned bitcoin, compiled/ran the core and it's now downloading the blocks and verifying. Decided to stick to what I know (Ubuntu Server LTS)... hopefully I don't regret it later, but I guess I could always just rebuild if it comes to that. Any advice on potential pitfalls, better component choices, or tips for managing a full node with advanced analytics would be appreciated!
Based on your plan, overriding these settings' defaults may be needed: - "txindex" will enable some features of RPC commands like getrawtransaction which aren't available to nodes without a full transaction index.
Enabling it before the "Initial Block Download" will make sure that the database is built while syncing; enabling it later could take hours. - "maxmempool" will set your node's mempool size from the default 300MB to your preferred size.
This is useful for your use-case, to be able to keep most of the unconfirmed transactions, since the default isn't enough during "peak" seasons.
You may also increase your node's database cache with dbcache=<size in megabytes> for it to utilize more of your memory.

Thanks so much for this, looks like those will really help me. I put them into the config before I ran the core for the first time and started downloading/verifying.

Hardware Bottlenecks: Are there any obvious weak points in this build for running full node operations and handling data-intensive tasks like blockchain analytics? I'm especially interested in potential memory or storage issues.

The system is fine. If anything, it's overkill. One of my full nodes runs a Dell mini-pc, i5, 32gb of ram, and a 2TB nvme.

For a node, I agree. For blockchain analytics, I'd say the more RAM the better. @OP: before you do a lot of extra work, have you seen Bitcoin block data? It includes daily dollar prices.

I'll look for deals in the coming months, and see if I can upgrade to 64 or 128. Also thanks for the link, I'll check it out. I'm happy to stand up a node to help broaden the security of the network... and I'm sure having it all local will make processing easier, as I'm planning to do some complex indexing and datasets. At least I have ideas in mind... will need to see if any of them pan out. For now, while I'm waiting for the sync, this gives me something to look at.

Hardware Bottlenecks: Are there any obvious weak points in this build for running full node operations and handling data-intensive tasks like blockchain analytics? I'm especially interested in potential memory or storage issues.
If your software (which performs the analytics) can utilize a GPU, you should know the GT 1030 is an old, low-end GPU, so you probably want to get something faster. More RAM could allow faster analytics, since you could store more data in RAM rather than accessing it from the SSD.

Software: I plan to use Ubuntu Server. Is this a solid choice for running a full node and developing analytics tools, or are there other distributions better optimized for this kind of work?
By default, Ubuntu Server doesn't include a GUI. You probably want to use Ubuntu 24.04 LTS instead.

Future Expansion: I'm looking to scale this setup to handle machine learning models on the blockchain data in the future. Should I anticipate the need for more advanced GPUs or additional hardware as I expand the complexity of my models?
I don't know about the machine learning or AI field, but your build should support 4 RAM sticks and 2 GPUs. And FWIW, marketplaces which rent GPUs or high-end computers exist.

I guess I'll find out when I get into the heavy processing. I'm developing from scratch, so we'll see if I get frustrated with the limitations of the system and incorporate a GPU. If nothing else, this is my attempt at learning something new and stretching my development skills. I am energized by projects where I can apply previous experience to new problems. Not exactly sure what I would call a problem here, I'm just tinkering with the modeling for now... looking for something that might be of use to everyone as I go. I've developed full stack, and really enjoy the visualizations on the front end. Or making useful interactions with data. Cheers. Appreciate all the help so far. I'm sure I'll have questions soon
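One more follow-up on the config suggestions above (txindex / maxmempool / dbcache): a quick way I plan to sanity-check that they actually took effect, over RPC from Python (credentials and port are placeholders for whatever is in bitcoin.conf):

Code:
import requests

def rpc(method, *params):
    payload = {"jsonrpc": "1.0", "id": "cfgcheck", "method": method, "params": list(params)}
    r = requests.post("http://127.0.0.1:8332", json=payload,
                      auth=("rpcuser", "rpcpassword"), timeout=30)
    r.raise_for_status()
    return r.json()["result"]

# txindex: getindexinfo reports whether the full transaction index is built/synced
print("indexes:", rpc("getindexinfo"))

# maxmempool: getmempoolinfo reports the configured cap (in bytes) and current usage
info = rpc("getmempoolinfo")
print("maxmempool (MB):", info["maxmempool"] // (1024 * 1024),
      "usage (MB):", info["usage"] // (1024 * 1024))

As far as I know dbcache isn't exposed over RPC; the cache configuration lines near the top of debug.log at startup are where I'd confirm that one.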
|
|
|
Premise: I'm planning to set up a full Bitcoin node to run advanced analytics and answer some research questions I have on wallet behavior, scarcity, and how price fluctuations affect the ecosystem. I'm looking for feedback from the community on the hardware setup, potential bottlenecks, and issues I might not have considered at this stage.

About Me: I'm a data engineer with a background in computer science, network engineering, and physics. My professional experience includes working with large data sets, complex models, and analytics. Over the past decade, I've applied these skills in scientific research, stock market analysis, and behavioral studies. I'm now diving into blockchain data and seeking to develop models that address some unanswered questions.

Questions I'm Exploring:
- Modeling the behaviors of active vs. inactive wallets.
- How price volatility influences ecosystem behavior.
- Tracking scarcity flows between known and unknown entities over time.
- Identifying and analyzing "gatherers" — addresses that continuously accumulate BTC, regardless of price trends, and modeling their impact on scarcity.
- Projecting Bitcoin’s scarcity under various price scenarios up to 2050 and beyond.
- I’ve seen opinions on these topics, but I’m struggling to find solid research backed by real-world data models. If anyone knows of existing work in these areas, I’d love to hear about it!
Planned Build (Hardware):
- Processor: AMD Ryzen 7 7700
- Motherboard: MSI MAG B650 TOMAHAWK (AM5 socket)
- Memory: G.SKILL Trident Z5 RGB DDR5-6000 (32GB)
- Storage: Samsung 990 Pro 2TB (NVMe SSD for fast data access)
- Power Supply: Corsair RM850x
- Case: Corsair 4000D Airflow
- CPU Cooler: Corsair iCUE H100i Elite Capellix (AIO)
- Optional GPU: MSI GeForce GT 1030 2GB (mainly for potential machine learning features later)

My Questions/Concerns:
- Hardware Bottlenecks: Are there any obvious weak points in this build for running full node operations and handling data-intensive tasks like blockchain analytics? I'm especially interested in potential memory or storage issues.
- Connectivity: I'm on Starlink Residential (150-200Mbps download), which should be fine after the initial blockchain sync (~600GB). Does anyone have experience with how connectivity might impact node reliability, particularly in rural areas?
- Software: I plan to use Ubuntu Server. Is this a solid choice for running a full node and developing analytics tools, or are there other distributions better optimized for this kind of work?
- Future Expansion: I'm looking to scale this setup to handle machine learning models on the blockchain data in the future. Should I anticipate the need for more advanced GPUs or additional hardware as I expand the complexity of my models?

Any advice on potential pitfalls, better component choices, or tips for managing a full node with advanced analytics would be appreciated!
|
|
|
|