I thought about that but there wasn't a practical way to do smaller increments. The frequency of block generation is balanced between confirming transactions as fast as possible and the latency of the network.
The algorithm aims for an average of 6 blocks per hour. If it were 60 per hour, there would be 10 times as many blocks and the initial block download would take 10 times as long. It wouldn't work anyway because that would be only 1 minute average between blocks, too close to the broadcast latency when the network gets larger.
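The "aims for an average of 6 blocks per hour" part is done by periodically rescaling the proof-of-work target. A simplified sketch of Bitcoin's retargeting rule (the real client operates on a 256-bit compact-encoded target, but the proportional adjustment and 4x clamp are the same idea):

```python
# Sketch of Bitcoin-style difficulty retargeting (simplified).
RETARGET_INTERVAL = 2016          # blocks between adjustments
TARGET_SPACING = 10 * 60          # seconds per block (6 blocks/hour)

def next_target(old_target: int, actual_timespan: int) -> int:
    """Scale the proof-of-work target so blocks average TARGET_SPACING."""
    expected = RETARGET_INTERVAL * TARGET_SPACING
    # Clamp the adjustment to a factor of 4 in either direction, as Bitcoin does.
    actual = max(expected // 4, min(actual_timespan, expected * 4))
    return old_target * actual // expected

# Blocks came in twice as fast -> target halves (difficulty doubles).
halved = next_target(1 << 224, (2016 * 600) // 2)
```

A 1-minute chain would just use `TARGET_SPACING = 60`; the retarget logic itself doesn't care about the interval.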
Wouldn't that only be true if each 1 minute block contained the same volume of transactions as a 10 minute block? As long as the global transactions per second (tps) doesn't change, the total blockchain size should be roughly the same with 1 minute blocks as with 10 minute blocks, no?
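Rough numbers to back this up: at a fixed tps, the only extra cost of 10x more blocks is the per-block overhead, i.e. one 80-byte header (plus one coinbase transaction, whose size here is an assumed figure for scale) per block:

```python
# Rough chain-size comparison: transaction volume is the same either way,
# so the extra cost of 1-minute blocks is per-block overhead only.
HEADER_BYTES = 80                 # Bitcoin block header size
COINBASE_BYTES = 250              # assumed size of one coinbase tx
BLOCKS_PER_YEAR_10MIN = 6 * 24 * 365
BLOCKS_PER_YEAR_1MIN = 60 * 24 * 365

def yearly_overhead(blocks: int) -> int:
    return blocks * (HEADER_BYTES + COINBASE_BYTES)

extra = yearly_overhead(BLOCKS_PER_YEAR_1MIN) - yearly_overhead(BLOCKS_PER_YEAR_10MIN)
print(f"extra per year: {extra / 1e6:.1f} MB")   # → extra per year: 156.1 MB
```

So the initial block download grows by megabytes per year, not by 10x, as long as tps is what it is.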
As for whether or not a 1 minute block time would work, I think that really depends. You would certainly see more orphan blocks, and more confirmations would be required to achieve the same level of security, but I don't think it's completely infeasible. Over 95% of nodes have a network latency of <40 seconds, and by 1 minute I'd expect this number to be over 97%.
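A common back-of-envelope model for the orphan rate (not anything from the client itself): if block discovery is a Poisson process with mean interval T and a block takes d seconds to propagate, the chance that a competing block is found during propagation is roughly 1 - e^(-d/T). Plugging in the 40-second latency figure above:

```python
import math

# Back-of-envelope stale-block (orphan) rate: with mean block interval
# T seconds and propagation delay d seconds, P(competing block during
# propagation) ≈ 1 - e^(-d/T).  The 40 s delay is the figure quoted above.
def stale_rate(delay_s: float, interval_s: float) -> float:
    return 1.0 - math.exp(-delay_s / interval_s)

for interval in (600, 60):        # 10-minute vs 1-minute blocks
    print(interval, round(stale_rate(40.0, interval), 3))
# 600 -> ~0.064 (about 6% orphans), 60 -> ~0.487 (nearly half)
```

By this crude model a 1-minute interval at 40-second propagation wastes close to half the network's work, which is why more confirmations would be needed for the same security.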
Just out of curiosity, what do you think is an appropriate trade-off between latency and block generation time? And how do you think the number of nodes affects average network latency? Correct me if I'm wrong, but shouldn't best- and average-case latency be reduced (or at least scale as O(log n)) with a larger network, since every node would attempt to increase its share of connections instead of forcing transactions to require more hops? (Assuming this is how you implemented it; I could be wrong on this.)
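A toy check of the O(log n) intuition (my own sketch, not how the client measures anything): if every node dials k random peers, the resulting graph's worst-case hop count from a broadcasting node grows roughly logarithmically with n. Here k=8 mirrors Bitcoin's default outbound connection count:

```python
import random
from collections import deque

# Toy model: n nodes, each dialing k random peers (connections are
# bidirectional).  BFS from node 0 gives the hop count needed for a
# broadcast to reach the farthest node.
def broadcast_hops(n: int, k: int = 8, seed: int = 1) -> int:
    rng = random.Random(seed)
    peers = {i: set() for i in range(n)}
    for i in range(n):
        for j in rng.sample(range(n), k):
            if j != i:
                peers[i].add(j)
                peers[j].add(i)
    dist = {0: 0}
    q = deque([0])
    while q:
        u = q.popleft()
        for v in peers[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return max(dist.values())

for n in (100, 1000, 10000):
    print(n, broadcast_hops(n))
```

Growing the network 100x only adds a couple of hops in this model, so per-hop latency (and how aggressively nodes add connections) matters more than raw node count.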