I just made a script for this, so the OP can check how fast their PC can generate SHA-256 hashes.
```shell
time for a in $(seq 1 10); do echo "$a" | sha256sum; done
```
The code is for Linux; it generates 10 SHA-256 hashes and prints the timing:
```
real 0m0.019s
user 0m0.015s
sys 0m0.008s
```
Now let's try with 1,000 and see the time:
```
real 0m1.501s
user 0m1.413s
sys 0m0.570s
```
And with 10,000:
```
real 0m16.384s
user 0m14.474s
sys 0m5.943s
```
I get these results with the CPU below; maybe other users with faster machines could run the test and post their results:
```
*-cpu
  product: Intel(R) Core(TM) i5-6300HQ CPU @ 2.30GHz
  vendor: Intel Corp.
  physical id: 1
  bus info: cpu@0
  version: 6.94.3
  size: 2660MHz
  capacity: 3200MHz
  width: 64 bits
```
Thanks for the code!
However, it underestimates the hash rate, because it runs a shell command for every hash: the operating system has to load the `sha256sum` binary into memory, make the necessary system calls, and only then execute it. A lot of CPU cycles go into that overhead rather than into computing the hashes.
I think the proper way is to compute millions of hashes inside a C or Rust program (maybe C++ too) and measure the time within the program itself, using the language's standard library.
That's because the `time` command-line utility also counts loading the program into memory, setting up the main function, and exiting the program. That is a lot of system calls and CPU cycles that are not spent computing hashes, and they add up to a few milliseconds per invocation.
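As a rough sketch of that idea (using Python's stdlib `hashlib` so it runs anywhere without a compiler; a C or Rust version as suggested above would be faster still), you can compute all the hashes inside a single process and time only the hashing loop itself:

```python
import hashlib
import time

# Number of hashes to compute; adjust for your machine.
N = 100_000

start = time.perf_counter()
for i in range(N):
    # One SHA-256 per iteration, all inside a single process:
    # no fork/exec, no binary loading, no extra system calls.
    hashlib.sha256(str(i).encode()).hexdigest()
elapsed = time.perf_counter() - start

print(f"{N} hashes in {elapsed:.3f} s ({N / elapsed:,.0f} hashes/s)")
```

On typical hardware this reaches hundreds of thousands of hashes per second, orders of magnitude more than the one-process-per-hash loop, which shows how much of the shell benchmark was measuring process startup rather than SHA-256.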