Full disk encryption – can a KDF reduce the need to re-key XTS?

NIST SP 800-57 Part 1 implies in section 8.2.4 (2) that a key derived from a KDF is as good as a key generated by a real CSPRNG:

If the key derivation key is known to an adversary, it can generate any of
the derived keys. Therefore, keys derived from a key derivation key
are only as secure as the key derivation key itself. As long as the
key derivation keys are kept secret, the derived keys can be used in
the same way as randomly generated keys.

The recommendation in IEEE P1619/D16 is not to encrypt more than "a few hundred terabytes" with a single key, so that the security proof for XEX still guarantees that attacks are unlikely to succeed:

• Use of a single cryptographic key for more than a few hundred terabytes of data opens the possibility of attacks, as described in D.4.3. The limitation on the size of data encrypted with a single key is not unique to this standard. It comes directly from the fact that AES has a block size of 128 bits and is not mitigated by using AES with a 256-bit key.

The problem here is that if you encrypt more than about 1 TB of data with a single key, the security proof for ciphertext indistinguishability no longer applies. Quoting from the public comments on XTS mode, which clarify the matter:

When Annex D.4.3 states that the same key should not be used to encrypt more
than 2^40 blocks (approx. 16 terabytes – not 1 terabyte), this is for an
extremely conservative bound of 1 in 2^53 on the success probability of a
distinguishing attack. D.4.3 also shows that encrypting a petabyte limits
the probability of the attack's success to no more than 2^-37, and
encrypting an exabyte limits the probability of success to no more than
2^-17. This is the guarantee given by the security proof, and the actual
probability of a successful distinguishing attack could be lower if further
research tightens the security bounds. Also, the result of success in this
attack is merely distinguishing XTS from an ideal block cipher, which cannot
be leveraged to recover the encryption key or the plaintext. In essence,
this is the point at which the security is approximately that of ECB mode.
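
To connect the quoted figures to block counts (my own back-of-the-envelope arithmetic, assuming 16-byte AES blocks):

$$1~\mathrm{PB} = 2^{50}~\mathrm{bytes} = 2^{46}~\mathrm{blocks} \;\Rightarrow\; \mathrm{advantage} \le 2^{-37}, \qquad 1~\mathrm{EB} = 2^{60}~\mathrm{bytes} = 2^{56}~\mathrm{blocks} \;\Rightarrow\; \mathrm{advantage} \le 2^{-17}.$$

Ten extra doublings of data cost about twenty bits of security, so the bound grows roughly quadratically in the number of blocks encrypted under one key.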

My question is: can a KDF solve this problem?

Given a sector number I, plaintext P and key K, encryption with XTS normally looks like this:

XTS(key=K, plaintext=P, sector=I)

Would using instead

XTS(key=KDF(key=K, salt=I%10), plaintext=P, sector=I)

allow us to encrypt ten times more data while the security guarantee still holds?
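
To make the proposal concrete, here is a rough sketch of the key-derivation step (purely illustrative R; HMAC-SHA256 from the digest package stands in for a proper KDF such as HKDF):

library(digest)  # hmac()

# Illustrative only: derive one sub-key per sector group (sector mod 10),
# so each derived key only ever encrypts one tenth of the sectors.
sector_key <- function(K, sector) {
  group <- sector %% 10   # the salt I % 10 from the scheme above
  hmac(key = K, object = as.character(group), algo = "sha256", raw = TRUE)
}

# The encryption step would then be:
# XTS(key = sector_key(K, I), plaintext = P, sector = I)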

On the one hand, it seems like this cannot work: we are basically using the KDF as a CSPRNG, which makes no sense.

On the other hand, if you take the NIST recommendation literally, it seems acceptable.

The crux of the problem, as I understand it, is that because XTS uses 128-bit blocks, there is a good chance of finding two identical ciphertext blocks if you encrypt enough data.

But is the problem really mitigated if, instead of

C_1 = XTS(K, I, P_1)
C_2 = XTS(K, J, P_2)

the attacker has

C_1 = XTS(KDF(K, I%10), I, P_1)
C_2 = XTS(KDF(K, J%10), J, P_2) ?

Any insight on this topic would be appreciated.

r – Problem with the xts object required for the Srates function in the YieldCurve package

I can extract the relevant coefficients for the Svensson model (from the Svensson function). However, I cannot generate predicted values with the Srates function. I do not understand the error about needing an xts object.

I have some real Treasury data (yields and their maturities – in months, not years) and want to fit a yield curve. Of course, I could plug the coefficients from the Svensson function into the Svensson formula myself, but I would like to use the Srates function for convenience/accuracy.

I've tried formatting the maturities as ts and xts objects, but I still get the error:

Error in xts(ts(maturities), order.by = index) : 
  order.by requires an appropriate time-based object

library(YieldCurve)

# Set up data ---------------------------------------------------------------

Yields <- c(0.01, 0.09, 0.13, 0.39, 0.76, 1.72, 2.41, 3, 3.68, 3.92)
Maturities <- c(1, 6, 12, 24, 36, 60, 84, 120, 240, 360)

# Fit with Svensson ----------------------------------------------------------

Svcoefs <- Svensson(Yields, Maturities)

Srates(Svcoefs, seq(1, 360, by = 1))  # ideally a predicted value for each maturity
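
For reference, this is roughly the direction I suspect is needed – a guess on my part, assuming the YieldCurve functions want the yields (not the maturities) wrapped in an xts indexed by an observation date, and that whichRate = "Spot" is what I am after:

library(YieldCurve)
library(xts)

Yields <- c(0.01, 0.09, 0.13, 0.39, 0.76, 1.72, 2.41, 3, 3.68, 3.92)
Maturities <- c(1, 6, 12, 24, 36, 60, 84, 120, 240, 360)

# Wrap the yields in a one-row xts keyed by an (arbitrary) observation date;
# the maturities stay a plain numeric vector of months.
Yields_xts <- xts(t(Yields), order.by = as.Date("2019-01-01"))

Svcoefs <- Svensson(Yields_xts, Maturities)
SvCurve <- Srates(Svcoefs, seq(1, 360, by = 1), whichRate = "Spot")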

Type Conversion – How do I convert a list of xts objects in R to weekly averages with period.apply?

I am trying to create weekly averages from the xts objects that I have split into a list, but I keep getting the error:

Error in isOrdered(INDEX) : 
  (list) object cannot be coerced to type 'double'

I tried to use the period.apply function.

rate_data_xts <- xts(rate_data[, -2], order.by = rate_data[, 2])

lanes_xts <- split(rate_data_xts, rate_data_xts$laneid)

tx_ca_reefer <- lanes_xts[["TX_CA_Reefer"]]

head(tx_ca_reefer)

> head(tx_ca_reefer)
PONumber LoadDate PracticalMiles laneid TruckPayPerMile
2018-01-28 9819414 2018-01-28 1543 TX_CA_Reefer 1.6850
2018-01-28 9848220 2018-01-28 1552 TX_CA_Reefer 2.5128
2018-01-29 9826639 2018-01-29 1246 TX_CA_Reefer 2.4077
2018-01-29 9827379 2018-01-29 1396 TX_CA_Reefer 1.3610
2018-01-29 9828055 2018-01-29 1535 TX_CA_Reefer 1.8241
2018-01-29 9828701 2018-01-29 1604 TX_CA_Reefer 1.8703
Warning message:
In zoo(rval, index(x)[i]) :
  some methods for "zoo" objects do not work if the index entries in 'order.by' are not unique

end_points <- map(lanes_xts, endpoints, on = 'weeks')

lanes_weekly_xts <- period.apply(lanes_xts, INDEX = end_points, FUN = mean)

Error in isOrdered(INDEX) : 
  (list) object cannot be coerced to type 'double'

What I want is a weekly average for every xts object in the list. Any help would be appreciated.
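
For reference, this is roughly what I am after for each list element – an untested sketch, assuming TruckPayPerMile is the only column I need to average (coerced back to numeric because the xts was built from a mixed data frame):

library(xts)

lanes_weekly_xts <- lapply(lanes_xts, function(x) {
  # per element: pull out the pay column, make it numeric again, then
  # average over weekly endpoints computed for this element only
  pay <- xts(as.numeric(x$TruckPayPerMile), order.by = index(x))
  period.apply(pay, INDEX = endpoints(pay, on = "weeks"), FUN = mean)
})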

xts – using lapply over an environment in R

I have an environment that contains time series data for 50 stocks pulled from Yahoo Finance. What I want is to run the volatility function from the TTR package on each of these variables. I keep getting the error "Data must be of a vector type, was 'NULL'".

    getSymbols(ETFS, auto.assign = TRUE, env = hub)
    lapply(hub, FUN = volatility(OHLC(x), n = 20))

I also tried

    lapply(hub, FUN = function(x) volatility(OHLC(x), n = 20))

but I get the error:

Error in runCov(x, x, n, use = "all.obs", sample = sample, cumulative) : 
  Series contains non-leading NAs
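
For completeness, this is the kind of thing I am trying to get working – a sketch, guessing that the NA error comes from gaps in some of the downloaded series (hence na.omit) and using eapply because hub is an environment:

library(quantmod)  # getSymbols(), OHLC()
library(TTR)       # volatility()

# drop internal NAs that runCov() complains about, then compute 20-day volatility
vols <- eapply(hub, function(x) volatility(na.omit(OHLC(x)), n = 20))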