https://en.wikipedia.org/wiki/Endianness

"Big-endianness is the dominant ordering in networking protocols (IP, TCP, UDP). Conversely, little-endianness is the dominant ordering for processor architectures (x86, most ARM implementations) and their associated memory. File formats can use either ordering; some formats use a mixture of both."
Thanks for the Wiki link, but I'm still unsure.
Everything inside serialized transaction data can be considered little-endian, while the txids used are in the default byte order that comes out of the hash function. These are sometimes referred to as being in little-endian too.
So why do we convert hashes to big-endian when searching for transactions and blocks, when internally everything else is in little-endian?
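To make the two orderings concrete, here is a minimal Python sketch of the txid convention in question. The transaction bytes below are a placeholder, not real transaction data; the point is only the byte reversal between the hash's raw output and the form used when searching:

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    # Bitcoin's hash function: SHA-256 applied twice
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# Placeholder standing in for a serialized transaction (not real tx data)
raw_tx = b"placeholder serialized transaction"

# Byte order as produced by the hash function; this is the order that
# appears inside serialized transaction and block data
txid_internal = double_sha256(raw_tx)

# Byte-reversed form shown by block explorers and used when searching
# for a transaction or block by its hash
txid_display = txid_internal[::-1]

print("internal:", txid_internal.hex())
print("display: ", txid_display.hex())
```

Nothing about the hash itself changes; the "big-endian" search form is simply the same 32 bytes printed in reverse order.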