It’s a bit of a vague question, but I don’t know the best way to process some data. I’m working on data fitting: I generate X-ray spectra with random parameter values that I save, then try to fit the data I just created. I already have a Python script that retrieves those values and computes the error. What I’d like to do is save the error for each parameter every time I generate the data, fit it, and process the error.
Here’s my problem: I don’t exactly know how to do it, and most importantly, what to use. I’m thinking of computing the mean percentage error for each parameter, a confidence interval, etc. I don’t know whether I could simply write another Python script for that, or whether I should use a database automated with Python (which I’ve never done), though I’m worried that’s probably overkill. I was also considering a plain spreadsheet, but again, I don’t know what would be the most efficient approach.
I won’t have that much data, and the analysis I want to do is pretty simple, but I’d like it to be automatic: ideally, after fitting, I’d run a script, and the errors for those values would be appended to an existing document (or database, if need be), alongside all the errors I’d already computed.
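To make the idea concrete, here is a rough sketch of the append step I have in mind, using a plain CSV file and only the standard library (the dict of per-parameter errors is hypothetical; it stands in for whatever my fitting script actually produces):

```python
import csv
import os

def append_errors(csv_path, errors):
    """Append one row of per-parameter errors to a running CSV log.

    `errors` is assumed to be a dict mapping parameter name -> error value
    (hypothetical format; adapt to the real output of the fitting script).
    """
    # Write the header only the first time the file is created.
    write_header = not os.path.exists(csv_path)
    with open(csv_path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=sorted(errors))
        if write_header:
            writer.writeheader()
        writer.writerow(errors)

# After each fit, one call adds a row to the accumulated log:
append_errors("fit_errors.csv", {"amplitude": 2.3, "center": 0.8, "width": 1.5})
```

Later, the analysis could be a few lines of pandas over that file, e.g. `pd.read_csv("fit_errors.csv").describe()` to get the mean and spread per parameter, so a database may indeed be unnecessary at this scale.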
Thanks for reading!