POD5
Member

Offline
Activity: 334
Merit: 10
Keep smiling if you're losing!
April 11, 2025, 02:33:20 PM
How do you calculate the alpha values?
bc1qygk0yjdqx4j2sspswmu4dvc76s6hxwn9z0whlu
nomachine
April 11, 2025, 02:42:47 PM
Quote from: POD5
How do you calculate the alpha values?

Octave or MATLAB.
BTC: bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8
Bram24732
Member

Offline
Activity: 252
Merit: 26
April 11, 2025, 02:46:16 PM
Quote:
There is a very good use for them: their existence proves that the distribution is uniform. Anomalies in the results would indicate that something is wrong (for example, very hard-to-notice bugs in the code, resulting in incorrect hashes).

Quote:
That's how I confirm the brute force is working correctly, indeed. Also, I don't see the point of saving keys that have some 0-bits prefix. It saves some kernel instructions to simply skip that check and use the hash target itself as PoW evidence.

Quote:
I don't see how checking against 0 gives anything better/worse than checking against the target? Which kernel instructions do you skip by checking against the target? You would still have to do the CMP. Sure, you can mutualize the first 32 bits of the check, but then you need to check a subset of the next 32 bits, and you're introducing complexity, branching, etc. If anything, checking against zero frees the compiler from the dependency on the target registers for this instruction. That being said, my biggest issue with Puzzles 69, 71, etc. is that it is impossible to validate that the GPU computes ALL the data correctly (what if a bit flips due to cosmic rays? Impossible to know, unless each and every hash is verified); so if I were Bram, I wouldn't be so brave. I see too many points of failure.

Since I have a gazillion prefixes from 68, I can tell you that, statistically speaking, the odds of a bit flip or compute failure are extremely low, and dwarfed by the odds of a competitor finding the key before us. So it does not really matter in the financial modeling. Also, about 40% of the GPUs have ECC.
Bram24732
Member

Offline
Activity: 252
Merit: 26
April 11, 2025, 02:52:18 PM
Quote:
In fact, Bram's proof-of-work confirms the prefix theory, but they behave like stubborn children.

It does not. Also, there is no formal mathematical proof that this prefix theory works. There is no large enough empirical example that this prefix theory works; even on the full set of proofs for BTC67, it does not apply. All the statistics in the world, along with everyone on this forum who knows anything about math, including people who have done this as their day job for 20 years, tell you it does not work the way you think it does. Don't make me write a formal math proof you won't be able to read anyway, please.
kTimesG
April 11, 2025, 02:53:40 PM
Quote:
Also, I don't see the point of saving keys that have some 0-bits prefix. It saves some kernel instructions to simply skip that check and use the hash target itself as PoW evidence.

Quote:
I don't see how checking against 0 gives anything better/worse than checking against the target? Which kernel instructions do you skip by checking against the target? You would still have to do the CMP. Sure, you can mutualize the first 32 bits of the check, but then you need to check a subset of the next 32 bits, and you're introducing complexity, branching, etc. If anything, checking against zero frees the compiler from the dependency on the target registers for this instruction.

Because you're doing two checks (target and zero). There's also no need to check more than a single register in the kernel itself, or to complicate things.
Off the grid, training pigeons to broadcast signed messages.
Akito S. M. Hosana
Jr. Member
Offline
Activity: 420
Merit: 8
April 11, 2025, 03:00:04 PM
I don't have the nerve to improve this further. It seems everyone has lost their nerve for this.
Bram24732
Member

Offline
Activity: 252
Merit: 26
April 11, 2025, 03:14:34 PM (Last edit: April 16, 2025, 07:12:51 PM by mprep)
Quote:
Also, I don't see the point of saving keys that have some 0-bits prefix. It saves some kernel instructions to simply skip that check and use the hash target itself as PoW evidence.

Quote:
I don't see how checking against 0 gives anything better/worse than checking against the target? Which kernel instructions do you skip by checking against the target? You would still have to do the CMP. Sure, you can mutualize the first 32 bits of the check, but then you need to check a subset of the next 32 bits, and you're introducing complexity, branching, etc. If anything, checking against zero frees the compiler from the dependency on the target registers for this instruction.

Quote:
Because you're doing two checks (target and zero). There's also no need to check more than a single register in the kernel itself, or to complicate things.

Let's imagine I want all prefixes with 5 leading zero bytes. I can do:

Code:
const uint32_t candidate[5];
const bool check = !candidate[0] && !(candidate[1] & 0xFF000000);
// Then check against the target

If I want to mutualize the target check, I need something like this:

Code:
const uint32_t candidate[5];
if (target[0] != candidate[0]) {
    return; // We don't go any further
}
// Here, we mutualized the pattern check for the first int.
// You still need to check the extra byte here.
if ((candidate[1] & 0xFF000000) == (target[1] & 0xFF000000)) {
    // handle prefix found
}
if (target[1] != candidate[1]) {
    return; // We don't go any further
}
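For readers following along, here is a rough, hypothetical Python model of the two 40-bit checks described above. The word layout and function names are illustrative assumptions, not anyone's actual kernel code; note the explicit parentheses around the masks, since in C `==` binds tighter than `&`.

```python
# Toy model of the two checks: a 160-bit hash split into five 32-bit
# words, with candidate[0] holding the most significant bits.

def leading_40_bits_zero(candidate):
    # word 0 entirely zero, plus the top byte of word 1 -> 40 zero bits
    return candidate[0] == 0 and (candidate[1] & 0xFF000000) == 0

def leading_40_bits_match(candidate, target):
    # the same 40 bits, compared against the target instead of zero
    return (candidate[0] == target[0] and
            (candidate[1] & 0xFF000000) == (target[1] & 0xFF000000))

target = [0x1BA45E00, 0xDE000000, 0, 0, 0]  # made-up words, for illustration
cand   = [0x1BA45E00, 0xDE123456, 0, 0, 0]

print(leading_40_bits_zero(cand))            # False: word 0 is non-zero
print(leading_40_bits_match(cand, target))   # True: the first 40 bits agree
```

Either variant costs one comparison per word touched; the only real difference is whether the constant compared against lives in a register (the target) or is an immediate zero.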
Quote:
In fact, Bram's proof-of-work confirms the prefix theory, but they behave like stubborn children.

Quote from: Bram24732
It does not. Also, there is no formal mathematical proof that this prefix theory works. [rest of the post snipped; quoted in full above]

Quote:
Take it easy; this isn't a debate between a Christian and an atheist. The atheist theorizes that God doesn't exist but cannot prove it, and vice versa. Creating a pool doesn't make you Fermat or Pascal. You know that you can't 100% refute something without evidence. Therefore, you conjecture and I conjecture; both of us are right and wrong at the same time. However, it is a mathematical error to assert things you haven't firmly investigated.

This is me being easy. I believe I've kept it civil. I do not conjecture; I gave you the PRECISE reason why it does not work. Several times. Based on how statistics work. Creating a pool does not give me authority here, you are right. In fact, nothing does. I'm just hoping that at some point you come to the realisation that when a cryptographer tells you you're wrong about basic math, you have the humility to consider that it might be the case.

[moderator's note: consecutive posts merged]
kTimesG
April 11, 2025, 03:29:11 PM
Quote from: Bram24732
Because you're doing two checks (target and zero). There's also no need to check more than a single register in the kernel itself, or to complicate things. [the two code snippets snipped; quoted in full above]

Well, since your code is secret, and if this snippet is part of your code, my questions are simple:

1. Do you have a GPU that produces more than 2**32 matches (PoW or target) per launch, to make it worth complicating things like the above?
2. What's the difference between proving you found 48 zeros and proving you found the first 48 bits of the target hash?

Maybe it doesn't even matter in practice, but the GPU will still execute something that I don't see the need for. In my kernel I simply check a single 32-bit limb. If it matches, handle prefix found, and let the host bother about this and that regarding the other bits. The hashes produced per second can be counted on the fingers of one hand.
WanderingPhilospher
Sr. Member
  
Offline
Activity: 1456
Merit: 284
Shooters Shoot...
April 11, 2025, 03:35:00 PM
Quote:
In fact, Bram's proof-of-work confirms the prefix theory, but they behave like stubborn children.

Quote from: Bram24732
It does not. [rest of the post snipped; quoted in full above]

Would love it if said cryptographer could stop posting back-to-back posts and consolidate them. Should be easy, right? lol!

Also, about your ramblings on prefixes: what specifically does not work? There are different things you can do with found prefixes; I think you misunderstand what has been said about using them. Depending on how they are used, you should say, they offer no faster results versus sequential (key x to key z) or random sequential (such as what you are doing).

Just because you aren't using a method like the one I have outlined using prefixes does not mean it doesn't work. It works no better or worse than your method. And for ease of use and tracking, it's probably a lot easier than what you are doing; unless you upgraded from tracking in Excel. Alas, you use probabilities during your search as well, you just use them differently.
Bram24732
Member

Offline
Activity: 252
Merit: 26
April 11, 2025, 04:06:03 PM
Quote from: WanderingPhilospher
Would love it if said cryptographer could stop posting back-to-back posts and consolidate them. Should be easy, right? lol!

Sorry about that, not big on forum etiquette, especially since I posted from my phone.

Quote from: WanderingPhilospher
What specifically does not work?

Deriving any sort of probability from prefixes to predict unchecked keys.

Quote from: WanderingPhilospher
unless you upgraded from tracking in Excel

Never did; you must be confusing that with how I was tracking the brute-force operational data, which was indeed in Excel.

Quote from: WanderingPhilospher
Alas, you use probabilities during your search as well, you just use them differently.

I use them as confirmation, not prediction.

Quote:
Yes, theorizing vaguely. Like in the case of Marilyn vos Savant vs. the mathematicians of the world, who were influenced by theories.

Would be great if there were less gibberish and more maths in your posts. But hey, you do you. I can feel that my efforts here are not getting through; that's all right. Have fun with prefixes, guys!
crytoestudo
Newbie
Offline
Activity: 31
Merit: 0
April 11, 2025, 04:19:27 PM
I asked you for a tip on X and you were very polite to me. I believe you told the truth, and I was sad about the reality. But it's part of the game. Some people are not prepared for the truth. I thank you for telling the truth.

Quote from: Bram24732
[full post snipped; quoted in full above]
WanderingPhilospher
Sr. Member
  
Offline
Activity: 1456
Merit: 284
Shooters Shoot...
April 11, 2025, 04:26:11 PM
Quote from: Bram24732
I use them as confirmation, not prediction.

There may be some who use it solely as a prediction. However, that is not what I or McD are saying; well, at least not I.

I can take found prefixes and use probability x (based on matching h160 bits) and say, based on probabilities, the average range in between said matching h160 bits is z. I can use this to try to buffer out keys that more than likely will not yield the full h160 I am actually looking for. So I add this buffer around that random key. Now, if I search the remaining keys/ranges and do not find the one I was looking for, I merely shrink that buffer and search again in those now newly opened ranges.

This will work the same as your method, but I am hoping I can skip some keys based on a probability of distance in between h160s. One part of the "prefix" theory that can be argued is what a good buffer is for 40, 44, 48, 52, etc. bits. Each subrange will generate different average distances, so that part is up for debate: the buffer size. The other part one could argue is that if one is only using prefixes, and jumps from a found one in either direction (or both), and does not track everything, then to me they can miss the key; and that is very different from how I use found matching h160 bits. Could it work? Yes! Could it jump over the actual h160 we are searching for? Yes!

According to this, puzzle 69 starts with 1ba45e..... If you have it at the ready and could run it again, I am curious to see what it says keys 67 and 68 should have started with.
kTimesG
April 11, 2025, 04:35:49 PM (Last edit: April 11, 2025, 04:46:28 PM by kTimesG)
Quote from: WanderingPhilospher
There may be some who use it solely as a prediction. However, that is not what I or McD are saying; well, at least not I.

I can take found prefixes and use probability x (based on matching h160 bits) and say, based on probabilities, the average range in between said matching h160 bits is z. I can use this to try to buffer out keys that more than likely will not yield the full h160 I am actually looking for.

I'm not defending Bram here, but you missed the part where H160 is a uniform distribution.

Think about it this way: you found a prefix of length X. By your logic, on average there should be around just one such prefix in a range of size 2**X. Totally agree. However, nothing says there is any difference between considering the range around the found key versus any *other* set of 2**X keys (including the found one) picked from the entire space.

Now you'll say: OK, but I never ever found two keys next to each other both having a prefix of length X. Well, that happens because the probability of it happening is exactly one in (2**X times 2**X). However, this probability is the same whether you apply it to the found key and the one that comes right after it, or to the found key and any other key from anywhere else in the entire space. The exact same thing goes if you state "I never found two keys spaced apart by less than whatever amount".

If you buffer out the remaining range and try somewhere else, you will eventually find a new prefix, of course. But this will happen, on average, after exactly the same number of tries as if you hadn't buffered out the range. It is not a shortcut to anything. Now do you see things the same way?
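The claim above is easy to sanity-check with a small simulation: in a uniform stream of hash outcomes, a "prefix hit" is no rarer on the key immediately after a hit than anywhere else. A rough sketch (the 8-bit prefix width and sample size are arbitrary choices, not anything from the thread):

```python
import random

random.seed(1)

P = 1 / 256          # chance that a key's hash matches some 8-bit prefix
N = 2_000_000        # number of keys "scanned"

# one independent uniform trial per key, exactly like an ideal H160
hits = [random.random() < P for _ in range(N)]

overall = sum(hits) / N
# hit rate on the key immediately following a hit
following = [hits[i + 1] for i in range(N - 1) if hits[i]]
adjacent = sum(following) / len(following)

print(f"hit rate anywhere:          {overall:.5f}")
print(f"hit rate right after a hit: {adjacent:.5f}")
```

Both rates come out near 1/256; the position of the previous hit carries no information about the next one, which is the independence argument made above.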
WanderingPhilospher
Sr. Member
  
Offline
Activity: 1456
Merit: 284
Shooters Shoot...
April 11, 2025, 05:02:36 PM
Quote from: kTimesG
[full post snipped; quoted in full above] ... Now do you see things the same way?

I understand what you are saying, and no, it does not change the way I see things. I am hedging that this rare instance does not occur in the range I am searching, lol. If you say it is not rare, that is fine as well, because even if the key is right next to one I found and built a buffer around, I will still find it. And you may say, well, that's dumb, you could have found it if you had searched one more key, or not searched in this manner. And that is fine as well, because it is no different from what Bram is doing.

Example: let's say the key we want to find is 0x1a98201, and let's say we search random + sequential, like Bram is doing. So, to keep it small, let's say after Bram divides up his random ranges to search, they fall like this:

...
0x1a980ff:0x1a98100
0x1a98101:0x1a98200
0x1a98201:0x1a98300
...

Now let's say he landed on the range 0x1a98101:0x1a98200 and searched it sequentially, checking every key, and he landed on it at 3% into his search. Dang, missed it by one key!!!! But have no fear: at some point he will land on the range 0x1a98201:0x1a98300 and find the key.

But he has thousands of GPUs and the $ to do it this way. Nothing wrong with that, and maybe it is the preferred way to do it, having the GPU and $ power. But not everyone has that. So they try to hedge their bets and skip some keys up front to try to catch lightning in a bottle quickly, more quickly than Bram and his beast. But again, no worries if their hedge did not pan out the first time through the range; they merely reduce the buffers and re-attack. Very different approaches, but both will solve the key eventually.
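The "both will solve the key eventually" point can itself be sanity-checked in a toy keyspace: against a uniformly random target, the average number of keys checked is the same whether you scan in order or in any shuffled, skipping order. A quick sketch (the keyspace size and trial count are arbitrary):

```python
import random

random.seed(42)

M = 1000          # toy keyspace size
TRIALS = 5000

def avg_tries(make_order):
    """Average number of checks until the random target key is hit."""
    total = 0
    for _ in range(TRIALS):
        key = random.randrange(M)              # uniformly random "puzzle key"
        total += make_order().index(key) + 1   # position of the hit, 1-based
    return total / TRIALS

sequential = avg_tries(lambda: list(range(M)))
shuffled   = avg_tries(lambda: random.sample(range(M), M))

print(sequential, shuffled)   # both hover around (M + 1) / 2 = 500.5
```

Any exhaustive visiting order gives the same expected work; the strategies differ only in bookkeeping, not in odds.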
Bram24732
Member

Offline
Activity: 252
Merit: 26
April 11, 2025, 05:11:36 PM
Quote from: WanderingPhilospher
[full post snipped; quoted in full above] ... Very different approaches, but both will solve the key eventually.

Of course, if you scan the whole range, you find the key eventually. But what you're essentially saying here is that there are pkeys which are more likely to lead to the wallet than others; that's the whole reason for your search method. I say that it's not the case. All pkeys have an equal probability of being the correct one. Don't you agree?
kTimesG
April 11, 2025, 05:19:57 PM
Quote from: WanderingPhilospher
I am hedging that this rare instance does not occur in the range I am searching, lol. If you say it is not rare, that is fine as well, because even if the key is right next to one I found and built a buffer around, I will still find it.

And you may say, well, that's dumb, you could have found it if you had searched one more key, or not searched in this manner.

And that is fine as well, because it is no different from what Bram is doing.

There's nothing wrong with how you're doing it, as long as you are aware that it does not increase your chances at all, in any way. That's the main difference from what McD is claiming: that somehow it is more probable to find the key sooner, while at the same time exhausting the interval if needed, as a sort of "backup". In reality, the chances increase linearly with every new key you scan, no matter how you picked it. Jumping around the keyspace because of limited resources, in hopes of an advantage, is pure placebo.

It's like saying: we have to find a number between 1 and 100, and we only have 10 tries. Is it better to spread the tries around, or simply go from 1 to 10? Or from 99 down to 90? Or try only primes? Or numbers with high entropy? Skip symmetrical ones? If you carry out the simulations, all strategies have the same chance of success, because no one cares how the numbers were chosen; in the long run the experiments naturally distribute themselves uniformly across whatever numbers you picked to try.
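The 1-to-100 example above is trivial to simulate: under a uniform target, every 10-guess strategy wins about 10% of the time. A quick sketch (the strategy names and trial count are made up for illustration):

```python
import random

random.seed(7)

TRIALS = 100_000
strategies = {
    "first ten": set(range(1, 11)),                    # 1..10
    "last ten":  set(range(90, 100)),                  # 99 down to 90
    "spread":    set(range(5, 100, 10)),               # 5, 15, ..., 95
    "primes":    {2, 3, 5, 7, 11, 13, 17, 19, 23, 29}, # first ten primes
}

rates = {}
for name, guesses in strategies.items():
    wins = sum(random.randint(1, 100) in guesses for _ in range(TRIALS))
    rates[name] = wins / TRIALS
    print(f"{name:9s}: {rates[name]:.3f}")   # each close to 0.100
```

Every strategy converges to the same 10/100 success rate; only the number of keys covered matters, never which ones.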
Cringineer
Newbie
Offline
Activity: 2
Merit: 0
April 11, 2025, 05:40:24 PM
Not gonna lie, I am definitely a little jealous of the win, especially with it being the same person again so soon.
But you did get them fair and square so I can't be angry.
Congrats on the win, I cannot imagine what it must feel like
kTimesG
April 11, 2025, 05:40:55 PM
Quote from: kTimesG
[full post snipped; quoted in full above] ... Now do you see things the same way?

Quote:
Your theory fails again: statistical proximity introduces a bias that disrupts the initial uniformity of the distribution (based on the low probability that a prefix is closer than expected to another identical one). When you omit an appropriate range based on a found prefix, you are eliminating certain areas of the search space, and this changes the probability of finding prefixes in the remaining regions. Temporarily skipping spaces close to a found prefix would mean prioritizing search areas where the likelihood of finding matches is statistically higher.

You don't have anything to back up the BS you just wrote.

It's also easily proven wrong by basic simulations, which you probably didn't perform, or if you did, they were not done properly. If what you say were true by even 0.000...01%, congrats, you broke both SHA256 and RIPEMD-160, and even secp256k1, more or less.
kTimesG
April 11, 2025, 05:52:35 PM
Quote from: kTimesG
You don't have anything to back up the BS you just wrote. It's also easily proven wrong by basic simulations, which you probably didn't perform, or if you did, they were not done properly. If what you say were true by even 0.000...01%, congrats, you broke both SHA256 and RIPEMD-160, and even secp256k1, more or less.

Quote:
It's not worth arguing with you; you ramble on about absurd generalizations. What does what I said have to do with breaking hashes? We're discussing a finite space of an immutable set, not vast spaces. We're talking about probabilities in small subranges and the likelihood of finding another identical prefix.

You still haven't learned what an immutable set means, so the discussion can't really continue.
Bram24732
Member

Offline
Activity: 252
Merit: 26
April 11, 2025, 06:03:45 PM
Damn. Now I wonder what McD is like in real life. Is he this stubborn about stuff he doesn't understand there as well? I'm starting to fictionalise a character. I've spent too much time here 😆