I only post this because I remember someone posting here that it would be highly illegal for a fund provider like BlackRock to pre-buy assets before opening said fund and before having orders from outside money to buy the assets of the new ETF. If I were a better poster I would find that post - and I might yet, in the future.
Highly illegal … for whom?
There is nothing illegal about BlackRock sock-puppets buying Bitcoin - ever since the Silk Road days or thereabouts.
Our wannabe overlords are above and beyond any law made for the plebs.
Talking about black rocks … this one is for you, exphorizon:
https://nobulart.com/comets-and-dragons/
Any 33rd degree Masons around here? Rhetorical question - guess not!
… and this one for you, microgoosens ol’ fren’:
Corbett Report
Travel whilst you can, dude, because next thing you know, our totalitarian kleptocratic governments will ask you to surrender a little more of your freedom.
For your safety of course - they’ll ask you to drop your pants and biometrically scan your asshole - for you to be able to travel.
Just kidding of course, but every asshole’s iris is unique, right?
I believe this is already the case for US citizens travelling to Europe - since last September, I think.
#short.tourism
Of course, many hotels are still being built on credit.
What could possibly go wrong?
Saying this as a result of a news item about an experiment in which six different AI chatbots forecast a $45k price for Bitcoin as the year winds down.
Although I would have loved it if it had been a perfect number 7, and not 6, AIs.
AIs are only as good at predicting as their database and their developers.
Basically they can’t predict shit; rather, they only throw up outcomes desirable to their handlers.
Non-biased AIs do not exist - to the best of my knowledge. How could they?
Here, have one too, Ληδα:
https://iceni.substack.com/p/a-fourth-talk-with-chatgpt
LLMs are a flawed technology. They can produce repetitive answers, and they fudge numbers and invent papers from whole cloth when asked for data.
LLMs can be made to say almost anything given a cleverly-crafted set of leading questions. It is rare for them to disagree, unless you explicitly contradict something in their model, or trip the silly alignment filters that companies like OpenAI use.
LLMs are not an authority on any subject, and everything they produce must be taken with a grain of salt.
Gosh, I just cannot dismiss my favourite deep-state actor:
https://expose-news.com/2023/11/02/your-family-was-used-as-labrats-graphene-mrna-nanotech-c19-vaccines/