Cycle time is usually a constant value: the time between two consecutive clock ticks. It also bounds how many operations the CPU can perform per second. This value is essentially constant, except on some special clockless (asynchronous) CPUs.
But RAM is a totally different component: it takes a different amount of time to serve different requests (depending on how the memory hierarchy is built, the number of caches, their sizes, and so on). So a notion of "average time" is, in my opinion, much more meaningful here than a notion of average clock cycle time. Moreover, since RAM is a separate entity, it can operate on an external clock or use some other mechanism entirely; from the CPU's point of view that doesn't matter, as long as it provides the same API. In a sense, this abstracts away how the cache really works physically, which makes the "average time" even more useful, since we don't always know exactly how long a RAM request will take.
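To make this concrete, here is a minimal sketch using the standard average-memory-access-time (AMAT) formula; the formula and all the timing numbers below are my own illustrative assumptions, not something specified above. It shows how per-level hit times and miss rates collapse into a single average that the CPU-facing "API" can expose:

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average Memory Access Time = hit time + miss rate * miss penalty."""
    return hit_time + miss_rate * miss_penalty

# Hypothetical latencies in nanoseconds. The L2's average time serves as
# the L1's miss penalty, so the levels compose into one number.
l2 = amat(hit_time=4.0, miss_rate=0.2, miss_penalty=100.0)  # L2 backed by DRAM
l1 = amat(hit_time=1.0, miss_rate=0.05, miss_penalty=l2)    # L1 backed by L2

print(round(l1, 2))  # the single average access time the CPU effectively sees
```

The point of the composition is that the CPU never needs to know whether a given request hit in L1, L2, or went all the way to DRAM; one average figure characterizes the whole hierarchy.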