Hmm, care to elaborate? I'd much rather use the block numbers.
In testing with -regtest I tried to use block #255 but simply could not get it to accept the number (it complained either that the number was negative, or that it was too big once I added the leading zero byte).
If you are able to work out how to do that correctly then please post here if you don't mind.
I shall - at first glance, I'd guess the problem might be here -
const CScriptNum nLockTime(stacktop(-1), fRequireMinimal, 5);
----
explicit CScriptNum(const std::vector<unsigned char>& vch, bool fRequireMinimal,
                    const size_t nMaxNumSize = nDefaultMaxNumSize)
{
    if (vch.size() > nMaxNumSize) {
        throw scriptnum_error("script number overflow");
    }
    if (fRequireMinimal && vch.size() > 0) {
        // Check that the number is encoded with the minimum possible
        // number of bytes.
        //
        // If the most-significant-byte - excluding the sign bit - is zero
        // then we're not minimal. Note how this test also rejects the
        // negative-zero encoding, 0x80.
        if ((vch.back() & 0x7f) == 0) {
            // One exception: if there's more than one byte and the most
            // significant bit of the second-most-significant-byte is set
            // it would conflict with the sign bit. An example of this case
            // is +-255, which encode to 0xff00 and 0xff80 respectively.
            // (big-endian).
            if (vch.size() <= 1 || (vch[vch.size() - 2] & 0x80) == 0) {
                throw scriptnum_error("non-minimally encoded script number");
            }
        }
    }
    m_value = set_vch(vch);
}