time series – Estimating Stock Beta Using the Kalman Filter

This is a basic question about using the KalmanFilter function. Let's say I have time series data for AAPL and the S&P 500.

I want to use the KalmanFilter function (or other related Kalman Filter functions in Mathematica) to estimate the time-varying beta(t) in the linear model:

$$ \text{AAPL}(t) = \beta(t)\,\text{SP500}(t) + e(t) $$

where $e(t)$ is a random error process.
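Not a Mathematica-specific answer, but the usual way to set this up is to let $\beta(t)$ follow a random walk and run a Kalman filter on the resulting scalar state-space model. Below is a minimal NumPy sketch of that filter; the noise variances `q` and `r` and the initial values `beta0` and `p0` are assumed tuning parameters, not anything from the original post.

```python
import numpy as np

def time_varying_beta(y, x, q=1e-5, r=1e-3, beta0=1.0, p0=1.0):
    """Scalar Kalman filter for y_t = beta_t * x_t + e_t, where beta_t
    follows a random walk beta_t = beta_{t-1} + w_t.  q and r are the
    state- and observation-noise variances; they (and beta0, p0) are
    assumed values, typically chosen by maximum likelihood in practice."""
    beta, p = beta0, p0
    filtered = np.empty(len(y))
    for t, (yt, xt) in enumerate(zip(y, x)):
        # Predict: the random-walk state keeps its mean, uncertainty grows by q.
        p = p + q
        # Update with the observation yt = beta * xt + e_t.
        s = xt * xt * p + r              # innovation variance
        k = p * xt / s                   # Kalman gain
        beta = beta + k * (yt - xt * beta)
        p = (1.0 - k * xt) * p
        filtered[t] = beta
    return filtered
```

In practice this is usually run on returns rather than price levels, and an intercept is often added to the observation equation; both are straightforward extensions.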

Thanks!

fa.functional analysis – Estimating certain tensor norms on Banach spaces

Let $X$ and $Y$ be Banach spaces. An operator $u:X\to Y$ is called nuclear if $u$ can be written as $u=\sum_{n=1}^\infty x_n^*\otimes y_n$ with $(x_n^*)\subseteq X^*$, $(y_n)\subseteq Y$ such that $\sum_{n=1}^\infty\|x_n^*\|\,\|y_n\|<\infty.$ Define $N(u):=\inf\{\sum_{n=1}^\infty\|x_n^*\|\,\|y_n\|\}$, the infimum being taken over all such representations. Denote $C(n):=\sup\{N(BA):\|A\|_{\ell_1^n\to\ell_\infty^n}\leq 1,\ \|B\|_{\ell_\infty^n\to\ell_\infty^n}\leq 1\}.$ Is $\sup\limits_{n\geq 1}C(n)<\infty$?

simulation – What is the procedure for performing a binning analysis in Monte Carlo, or more generally, estimating autocorrelation times?

I’m working on a Monte Carlo project similar to the Ising model. I’ve found many examples on which I’ve based my code; my code is here: https://github.com/danielsela42/MC_TBG_Model/blob/master/mc_project/mcproj_binned.py

From the papers I’ve read on binning analysis, the error estimates are supposed to converge as the binning level increases. Mine end up oscillating after a certain binning level, and so I’m getting negative autocorrelation times.

I was hoping someone could either verify my procedure is correct, or explain a good procedure for dealing with correlated sampling.
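For comparison, here is a minimal sketch of the standard binning procedure, assuming a 1-D NumPy array of scalar measurements: at each level, consecutive pairs of bins are averaged, the error of the mean is recomputed, and the integrated autocorrelation time is read off from the ratio of the converged error to the naive (level-0) error. The last few levels contain very few bins and are noisy, which is typically where the oscillation you describe shows up.

```python
import numpy as np

def binning_analysis(samples, min_bins=32):
    """Binning analysis for a 1-D array of correlated MC measurements.
    Assumes len(samples) >= min_bins.  Returns the error estimate at each
    binning level and the integrated autocorrelation time tau_int."""
    x = np.asarray(samples, dtype=float)
    errors = []
    while len(x) >= min_bins:
        # Error of the mean at the current binning level.
        errors.append(np.sqrt(np.var(x, ddof=1) / len(x)))
        # Merge consecutive pairs of bins (dropping a trailing odd sample).
        n = (len(x) // 2) * 2
        x = 0.5 * (x[0:n:2] + x[1:n:2])
    errors = np.array(errors)
    # In practice one should read off the plateau value rather than blindly
    # taking the last (noisiest) level; the last level is used here for brevity.
    tau_int = 0.5 * ((errors[-1] / errors[0]) ** 2 - 1.0)
    return errors, tau_int
```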

Thank you in advance for the help!

This is my first time here. If this is not the right place to post this, where would be better?

software engineering – Estimating memory footprint of a C program (and programs in a scripting language using bindings to a C library)

I wanted to test the feasibility of an idea that might become part of a small project, so I’m setting up a quick prototype in Python (with bindings to a C library). If any of this proves sufficient for an interesting application, I might make it the main topic of my thesis.

I was able to plot how much memory this prototype uses, somewhat experimentally, using psutil and randomly generated problem instances, but I would like to know whether there is a more principled, engineering-style approach to memory profiling. (I want to start learning about these topics, because I would like to make reliable estimates in order to plan ahead better.)
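For what it’s worth, the experimental measurement itself can stay very simple: the resident set size (RSS) reported by psutil covers the whole process, including allocations made inside the C library, which Python-only tools such as tracemalloc would miss. A minimal sketch (the list comprehension is just a placeholder for whatever builds one of your problem instances):

```python
import os
import psutil

def rss_mib():
    # Resident set size of the current process in MiB; this includes memory
    # allocated by C extensions, not just Python objects.
    return psutil.Process(os.getpid()).memory_info().rss / 2**20

before = rss_mib()
instance = [list(range(1000)) for _ in range(1000)]  # placeholder problem instance
after = rss_mib()
print(f"approximate footprint of the instance: {after - before:.1f} MiB")
```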

I’m looking for literature suggestions and practical advice.
Thanks in advance!

linear algebra – Estimating Zeta from Hv and Hu, system of equations, estimation problem

Does anyone here know how $\beta$ could be estimated in terms of $H_u, H_v$ from the equations below?

$$ H_u = \left| \zeta\; \left(\frac{ e^{-j\cdot 2\cdot\pi}-e^{-j\cdot 2\cdot\pi(1/T)(T-\Delta t)}}{j2\pi}\right)-\zeta-\beta \right|^2 $$

$$ H_v = \left| \zeta\; \left(\frac{ e^{j\cdot 2\cdot\pi}-e^{j\cdot 2\cdot\pi(1/T)(\Delta t)}}{j2\pi}\right)+\zeta\beta \right|^2 $$

These are the absolute squares of $H_u$ and $H_v$.

I am working on an estimation problem in which $\beta$, which appears inside $H_u$ and $H_v$, has to be estimated from $H_u$ and $H_v$.

Could anyone tell me about a mathematical procedure for how to do this?

Here is a previous example where we solved almost the same problem:

https://math.stackexchange.com/questions/3984685/estimating-p-from-a1-and-a1-system-of-equations-an-estimation-problem/3984708#3984708

$$ A_1 = \left| \alpha\; \left(\frac{ 1- e^{-j\cdot 2\cdot\pi \rho}}{j2\pi \rho}\right) \right|^2 $$

$$ A_2 = \left| \alpha\; \left(\frac{ 1- e^{-j\cdot 2\cdot\pi \rho}}{j2\pi+j2\pi \rho}\right) \right|^2 $$

$$ \rho=\frac{A_2+\sqrt{A_1 A_2}}{A_1-A_2} $$

Dividing the $A_1$ equation by the $A_2$ equation gives

$$\frac{A_1}{A_2}=\frac{(\rho+1)^2}{\rho^2},$$

which is a quadratic in $\rho$. I solved it, selected the root I needed, and found $\rho$ in terms of $A_1, A_2$.
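For reference, the intermediate step in that earlier derivation is just cross-multiplication followed by the quadratic formula:

$$\rho^2(A_1-A_2)-2A_2\,\rho-A_2=0
\quad\Longrightarrow\quad
\rho=\frac{A_2\pm\sqrt{A_2^2+A_2(A_1-A_2)}}{A_1-A_2}=\frac{A_2\pm\sqrt{A_1A_2}}{A_1-A_2},$$

and the $+$ root is the one quoted above.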

linear programming – estimating optimal solution for LP with strict inequalities

I have an LP problem with strict inequalities that cannot be relaxed. I understand that most LP solvers require the problem to have no strict inequalities, since with strict inequalities an optimum does not always exist (the supremum need not be attained).

However, is there precedent (e.g., a paper or package) for allowing strict inequalities and then “stepping” towards the solution via something like gradient descent?

This would allow the optimum to be estimated with a fairly high degree of accuracy (which could be increased with more steps), especially given the restriction to a convex polytope, and it seems like it would be useful in many situations.
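I don’t have a specific paper to point to, but one way to prototype the idea is to tighten each strict inequality by a shrinking margin eps and solve the resulting ordinary LPs, whose objective values approach the supremum of the original problem. A toy sketch using scipy.optimize.linprog (the example problem here is made up):

```python
from scipy.optimize import linprog

# Toy problem: maximize x + y subject to x + y < 1, x >= 0, y >= 0.
# No optimum is attained, but tightening the strict constraint to
# x + y <= 1 - eps and shrinking eps approaches the supremum (= 1).
c = [-1.0, -1.0]                      # linprog minimizes, so negate the objective
A_ub = [[1.0, 1.0]]
for eps in (1e-1, 1e-3, 1e-6):
    res = linprog(c, A_ub=A_ub, b_ub=[1.0 - eps], bounds=[(0, None), (0, None)])
    print(f"eps={eps:g}: objective ~ {-res.fun:.6f}")
```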

Estimating an expectation value of a function of a random variable and a non-random variable using Monte Carlo simulation

I have a function g(x, Z), where Z has a uniform normal distribution and x is not a random variable (I have a range of pre-specified values at which to evaluate it). I want to generate a plot of the expectation value E[g(x, Z)] (call it Eg) as a function of x.

In order to use Monte Carlo simulation to generate this plot with n=1000 draws of Z, must I resample Z anew for every value of x that I plan to evaluate? Or can I rely on the same n=1000 sample of Z values for every value of x?
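As a sketch of the second option (reusing one fixed sample of Z for every x, often called common random numbers): each point of the curve is still an ordinary Monte Carlo average, and the curve comes out smoother because the same noise is shared across x, at the cost of the errors being correlated between neighbouring x values. Here `g` and the standard-normal choice for Z are placeholders for your actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
z = rng.standard_normal(n)              # one fixed sample of Z (placeholder distribution)

def g(x, z):
    # Placeholder for the actual function g(x, Z).
    return np.cos(x * z)

xs = np.linspace(0.0, 5.0, 50)                 # pre-specified x values
eg = np.array([g(x, z).mean() for x in xs])    # same 1000 draws reused at every x
```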

Thank you!

least squares – Estimating the roots of a non-linear system

I’m working on a computational mathematics research project. As part of this project, I’m going to use the GNU Scientific Library’s multiroot finder to solve a system of 3 non-linear equations in 2 unknowns.

To algorithmically solve for the roots of a non-linear system, I need to choose a starting value/range (an estimate of the roots). So my question is: how do I determine the starting value? I’d like to do this in the simplest way possible, and a rough estimate should suffice.
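One simple, rough heuristic is a coarse grid search over a plausible bounding box: evaluate the residual of the system at every grid point and hand the point with the smallest residual norm to the multiroot solver as the starting value. A minimal sketch of the idea in Python (the three equations here are made up for illustration; GSL itself would be called from C):

```python
import itertools
import numpy as np

def residual(p):
    # Placeholder for your three equations in two unknowns; returns a length-3 vector.
    x, y = p
    return np.array([x**2 + y - 2.0, x - y**2, x * y - 1.0])

# Coarse grid over an assumed bounding box; the point with the smallest
# residual norm becomes the starting guess for the real root finder.
grid = np.linspace(-5.0, 5.0, 21)
start = min(itertools.product(grid, grid),
            key=lambda p: np.linalg.norm(residual(p)))
print("starting guess:", start)
```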