Physics Department | Center For Optical Technologies | Lehigh University  

Prof. Ivan Biaggio  

Research Group  




PHY474 - Seminar in Modern Physics

Experimental data and theoretical models. Curve Fitting and its Challenges

The syllabus explains the main framework of this seminar. Below are a few useful hints and links about technologies, a list of topics, a list of the case studies to be used for homework, and some more details on everything.

About LaTeX

If you are on a Mac (I can't really advise on other platforms), go get TeXShop and TeX Live. (The link takes you to a page where you will find what they call a "smaller distribution". This smaller distribution is all you need, unless you really want to do very fancy things.)

One well-accepted addition to LaTeX for writing scientific (physics) papers, used for publishing in the American Physical Society journals, is RevTeX. If you download the distribution above, it should already contain the latest version of RevTeX.

Example: Download the write-up that I prepared to describe one of the case studies. The LaTeX/RevTeX file is DispersionFitting.tex. This file uses a figure, which is the pdf file IndexDataPlot.pdf. Once you are ready with your LaTeX software, put these two files in a new directory, open DispersionFitting.tex, and run LaTeX on it.
Apart from describing one of the case studies, the file also shows off everything you need to know to understand and use LaTeX. Look at how things are done in DispersionFitting.tex, and copy them when you do your own work.
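If you would rather start from a blank file than from the example, a minimal RevTeX skeleton looks like the following. (The document class options, title, and author below are only illustrative; use DispersionFitting.tex as the actual reference for the conventions of this seminar, and use revtex4-1 instead of revtex4-2 if that is what your installation provides.)

```latex
\documentclass[aps,prb,preprint]{revtex4-2} % or revtex4-1, depending on your TeX Live
\usepackage{graphicx} % needed to include pdf figures

\begin{document}

\title{My Seminar Write-Up}
\author{A. Student}
\affiliation{Physics Department, Lehigh University}

\begin{abstract}
One or two sentences summarizing the work.
\end{abstract}

\maketitle

\section{Introduction}
Body text goes here. A figure is included like this:
\begin{figure}
\includegraphics[width=\columnwidth]{IndexDataPlot}
\caption{Caption text.}
\end{figure}

\end{document}
```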

About Curve Fitting

Apart from the standard literature that you can find using the various literature search services, one good way to jump right in is to have a look at the handbooks of fitting software. These tend to have the advantage of being very practice-oriented. If you can skip over the technical material about software and user interfaces that is often present in these manuals, the rest often gives good insight into the main problems.

Here are a few links to such handbooks:

Software for this Seminar

If you have access to a Mac and are not already addicted to a preferred technology, use pro Fit. There is a free trial version available that should be enough for most of what we will do. In addition, as soon as I get it I will distribute a registration code valid for this seminar. If you have the registration code, download the full package. pro Fit lets you define your own functions for curve fitting using either a custom version of the venerable Pascal programming language or the more fashionable Python. The first is easier to write and to explain to people; if you know the second, you can do a lot more.

Then there is free downloadable software such as Sage, gnuplot, or Python, plus other commercial packages such as Origin and more general tools like Mathematica. And I am sure there are many other options.

In any case, do not use Excel with the Solver add-in. Excel is most definitely not a tool for general-purpose scientific data analysis, and it is infinitely better to learn a proper tool that gives you the flexibility to implement any mathematical function you can think of. Remember that the motivation of this seminar, and the point of view it takes, is to give you the ability to quickly, simply, and correctly analyze the data you will inevitably obtain as part of your research.

First Part

This is the list of topics from which the first presentations will be selected. When you develop one of these topics, remember that the intent is that you teach your audience how to ultimately use what you are talking about. Don't give just an abstract introduction copied from a theory book or paper. Simplify the concepts if needed, and use plenty of examples.
  1. Comparing data and models. Thursday, June 6 — Keith
    Suggested content:
    A function y = f(x, a1, a2, a3, ...) can be used to fit a data set (x, y) by an optimum choice of the parameters a1, a2, a3, ... Review how one can calculate a number that gives a quantitative measure of the deviation between function and data, which then depends on the values of the parameters a1, a2, a3, ... This is the quantity that the fitting algorithms will then minimize. Discuss the role of experimental errors, taking into account different error distributions and experimental errors in multiple dimensions (i.e., what to do if there are errors in x as well as in y).
  2. What is curve fitting? Tuesday, June 11 — Charles
    Suggested content:
    Algorithms to minimize chi-squared: a general overview. The difference between linear and nonlinear curve fitting. Nonlinear curve fitting: an overview of the most common methods, without going into too many details: steepest descent, Gauss-Newton, Levenberg-Marquardt, and others.
  3. The Levenberg Marquardt method. Thursday, June 13 — Mike
    Suggested content:
    Explanation of the principles on which the algorithm rests, and of the data structures it uses. Some literature:
    Kenneth Levenberg (1944). "A Method for the Solution of Certain Non-Linear Problems in Least Squares". The Quarterly of Applied Mathematics 2: 164-168.
    Donald Marquardt (1963). "An Algorithm for Least-Squares Estimation of Nonlinear Parameters". SIAM Journal on Applied Mathematics 11 (2): 431-441. doi:10.1137/0111030.
    Jose Pujol (2007). "The solution of nonlinear inverse problems and the Levenberg-Marquardt method". Geophysics 72, W1. doi:10.1190/1.2732552.
  4. Other fitting algorithms. Tuesday, June 18 — Kebra
    Suggested content:
    Discussion and overview of how algorithms can be extended to work in more general cases. One example is fitting with errors in the independent variable (see reference below) but there should be others. Investigate the literature, give an overview. One relevant paper:
    P. L. Jolivette (1993). "Least-squares fits when there are errors in X". Computers in Physics 7 (2).
  5. Interpreting curve fitting results. Wednesday, June 19 — Vincent
    Suggested content:
    The result of a curve-fitting operation is not only the set of parameters a1, a2, a3, ... that leads to the minimum of the chi-squared. Additional information can be provided on how precise the estimates of those parameters are. Discuss how errors can be obtained for every parameter, and the case of parameters that are or are not independent of each other. Discuss the covariance matrix, which is one of the outputs of the Levenberg-Marquardt algorithm after a fit. Explain the role of its diagonal elements, and of the off-diagonal ones.
  6. Evaluation of fitting quality. Tuesday, June 25 — Nikhil
    Suggested content:
    How to evaluate how good a fit is. The "goodness of fit" quantity, and how it is influenced by experimental errors. The problem of evaluating confidence intervals for the parameters beyond standard deviations. Monte Carlo methods to get the confidence intervals.
  7. Curve fitting in more than two dimensions. Wednesday, June 26 — Andrew
    Suggested content:
    Fitting in multiple dimensions, of data sets that consist of multidimensional data of the kind ({x1_i,x2_i,x3_i,...}, {y1_i,y2_i,y3_i,...}), possibly with an error vector for every yk_i. Discussion of how this can be implemented using standard algorithms, or of other ways to do it.
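The quantity at the heart of topics 1 and 2 can be made concrete in a few lines of code. Here is a minimal sketch, in plain Python, of the error-weighted chi-squared that the fitting algorithms minimize; the data set, the straight-line model, and the trial parameter values are all made up for illustration.

```python
# Weighted chi-squared for a model y = f(x, a1, a2) against data with errors.
# The data, model, and parameter values below are invented for illustration.

def model(x, a1, a2):
    # a simple two-parameter model: a straight line
    return a1 + a2 * x

def chi_squared(params, xs, ys, sigmas):
    """Sum of squared, error-weighted deviations between model and data."""
    a1, a2 = params
    return sum(((y - model(x, a1, a2)) / s) ** 2
               for x, y, s in zip(xs, ys, sigmas))

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.1, 2.9, 5.2, 6.8]      # roughly y = 1 + 2x, plus noise
sigmas = [0.1, 0.1, 0.2, 0.2]  # one experimental error per data point

print(chi_squared((1.0, 2.0), xs, ys, sigmas))  # deviation at a trial (a1, a2)
```

A fitting algorithm then varies (a1, a2) to make this number as small as possible; the different algorithms of topics 2 and 3 differ only in how they choose the next trial values of the parameters.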

Second Part

The topic of the second set of presentations will be decided during the first part of the seminar. A couple of examples are below. But maybe you have some data or some problem that you would like to discuss and present. Basically, can you come up with your own case study and then discuss it?
  1. The Peak Fitting Problem. Thursday, July 18 — Mike
    Suggested content:
    Peak fitting is peculiar because it can conceivably use dozens and dozens of parameters and still be a very stable problem. This is because when the peaks are well separated from each other, the parameters of each peak are totally independent of those of the others, as can be readily seen from the covariance matrix. But in other cases the peaks are close together, and this changes. The talk should be an overview of the standard peak-fitting methods and their main challenges.
    As part of this presentation, consider this data set as a further case study. It shows how a certain photoluminescence spectrum changes depending on light polarization. The columns contain the data for polarization angles from 0 to 90 degrees. Thanks to peak fitting, one can separate the bands that are polarization dependent from those that are not, and identify their positions and amplitudes.
  2. Title and Description to be posted soon. Tuesday, July 23 — Nikhil
  3. Title and Description to be posted soon. Thursday, July 25 — Andrew
  4. Title and Description to be posted soon. Tuesday, July 30 — Kebra
  5. Title and Description to be posted soon. Thursday, August 1 — Charlie
  6. Determining the best values of the fundamental physical constants. Tuesday, August 6 — Keith
    The best values of the fundamental physical constants are identified by a least-squares fitting procedure that takes into account all the various measurements that are available and how they depend on each other. This adjustment is done every four years. Here is the 2006 article, and here is the 2010 one in preprint form. A description of the procedure is also found here, but many more resources can be found. The use of the covariance matrix is described in Appendix F (page 484) of this reference.
  7. Tips and Tricks. Thursday, August 8 — Vincent
    Go through some interesting examples of fitting situations. Suggestions: how to fit with a constraint that says that the function must go through a specific data point no matter what; more examples of how multidimensional data can be transformed into (x,y) data for fitting with conventional algorithms; another example similar to the "physical constants" one, where one determines a set of parameters by minimizing the deviations from a set of independent measurements; any other interesting special case.
  8. Special purpose algorithms.
    This could build upon topics like peak fitting or fitting using parametric functions. Are there special-purpose algorithms that are interesting for these applications? A special-purpose algorithm for peak fitting would be an interesting thing to discuss.
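The peak-fitting model discussed above is typically just a sum of peak functions, with one set of parameters per peak. A minimal sketch in Python, using Gaussian peaks and made-up parameter values, shows why the parameter count grows so quickly and why well-separated peaks decouple:

```python
import math

def gaussian(x, amplitude, center, width):
    """A single Gaussian peak."""
    return amplitude * math.exp(-((x - center) / width) ** 2)

def multi_peak(x, peaks):
    """Sum of Gaussian peaks; `peaks` is a list of (amplitude, center, width).

    With N peaks the model has 3*N parameters. When the peaks are well
    separated, each one contributes essentially nothing at the positions of
    the others, which is why their parameters come out nearly independent
    (small off-diagonal covariance matrix elements)."""
    return sum(gaussian(x, *p) for p in peaks)

# two well-separated peaks (made-up values)
peaks = [(1.0, 2.0, 0.3), (0.5, 5.0, 0.4)]
print(multi_peak(2.0, peaks))  # near the first peak, the second is negligible
```

Real spectra often call for Lorentzian or Voigt peak shapes plus a background term, but the structure of the model, and of the fitting problem, is the same.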

Case Studies

Here is a list of the case studies that we will experiment with and discuss. All case studies are supposed to be ongoing projects during the course. The idea is that you start with your first best idea, and that you then keep improving your approach by implementing what we learn in the following weeks. The aim is that you have a proper and detailed treatment of each case study by the end of the course.
  1. Decay times
    Here we consider a measurement of the time dynamics of the photoluminescence emitted by a sample after excitation with what can effectively be considered a delta function in time. After excitation, the signal emitted by the sample decays. Here is the data. The first column is time; the next two columns are the signal emitted by the sample in two different conditions. Unfortunately, one of the two conditions leads to a lower signal strength and noisier data. The questions now are: (1) Are the decays exponential? (2) What are the decay time constants [the tau in exp(-t/tau)]? (3) Is any difference in decay time constants between the two data sets significant? After you have answered these questions, conclude by giving the values of the two exponential time constants and the experimental errors with which they are affected.
    Hint: Consider modeling this data with an exponential decay with an offset, that is, a function of the form exp(-t/tau) + c. Consider taking into account the data before time zero in your model.
    • Discuss how the two parameters (tau, c) influence each other.
    • Each data set will deliver a value for the two parameters (tau,c). Discuss the relative position of these two values. Are they "far enough apart"?
    • Again, use the various tools that we discussed or will discuss: standard deviations, covariance matrix, size of the minimum in the chi-2 landscape.
  2. Mystery data
    No lengthy description this time. Here is the data file. The first column is the inverse of the temperature (in K) multiplied by 1000 K. The second column is the temperature. The third column is what was measured; you can think of it as a rate. To simplify, assume that the physical process involved makes this rate exponentially dependent on 1/T, as described by exp(E/(kB T)), where kB is Boltzmann's constant. See if you can determine the energy E, and discuss what the nature of the experimental errors in this data set must be.
    Hint: You can consider three different approaches:
    • Transformation of the data into a linear problem by taking the log of the y-values, followed by a linear fit.
    • Nonlinear fit using percent errors in the y-values.
    • Nonlinear fit of the y-values vs. T (not 1/T), assuming a constant x-error of 1 degree (and vanishingly small y-errors).
  3. Refractive Index Dispersion
    A description is found in DispersionFitting.tex and IndexDataPlot.pdf if you want to look at the LaTeX source code. Otherwise here is a pdf.
    In addition, here is the data set, as a zipped pro Fit file, and as a text file.
    The result of this analysis should be:
    • A discussion of the refractive index model that can be reasonably used to describe the data, and of the appropriate set of parameters.
    • A plot of the temperature dependence of these parameters.
    In addition, look at the information returned by the Levenberg-Marquardt fitting algorithm:
    • Standard deviations of the fitted parameters.
    • Full covariance matrix.
    • Compare the above between the case where you fit the data with three independent Sellmeier oscillators and the case where you fit it with two oscillators plus a term proportional to the square of the wavelength.
  4. Transmission of a thin film on a substrate
    A description is found in FilmTransmissionFitting.tex and Tplot.pdf/TPlot2.pdf if you want to look at the LaTeX source code. Otherwise here is a pdf.
    In addition, here are the data sets, as zipped pro Fit files and as text files: three films, another film.
    The results of this analysis should be:
    • A plot of the refractive index vs. wavelength as can be obtained from this data, including a discussion of the wavelength range over which this result can be trusted, and an indication of the precision with which the index can be known.
    • A plot of the absorption vs. wavelength as can be obtained from this data, over the wavelength interval for which the result can be trusted.
    • The thicknesses of the three films, with an estimate of the precision with which they can be known.
  5. Chi-squared landscape and fitting behavior
    The idea here is to test the fitting algorithms in your software of choice and determine how they behave. Here is the data set. This data should be fitted by the function f(x,a1,a2) = sqrt( sqr(1+a1*x) + sqr(a2*x)).
    • Plot the chi-squared landscape as a contour plot (a topographical map) in the coordinates (a1,a2). I suggest a range between -1 and 1 for both parameters.
    • Start a fit using various initial values for the two parameters. Try at least the following values for (a1,a2), but you can try more: (0,0), (1,0), (10,0), (0,0.2), (0.0,0.01).
    • Try out (if you can) a couple of different fitting programs, see if they all behave the same. Check the number of iterations they need for every starting value, check if there are any difficulties.
    • Discuss what happens, explain any difference in behavior when using different starting values.
    • Do it yourself: evaluate by hand the first step in the fitting process using steepest descent, Gauss-Newton, and the Levenberg-Marquardt method with some value of "lambda"; discuss what happens for the different initial values.
    Hint: The function fits the data around (a1,a2) = (-0.5,0.6).
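To illustrate the first approach suggested in the "Mystery data" hint, taking the log of the y-values turns the exponential model into a straight line in 1/T, which can then be fitted with the closed-form linear least-squares formulas. The sketch below uses synthetic, noise-free data generated from a known E (the prefactor, energy, and temperatures are made up), so the fit can be checked; the sign of the exponent follows the case-study description.

```python
import math

kB = 8.617e-5  # Boltzmann's constant in eV/K

def linear_fit(xs, ys):
    """Unweighted least-squares fit of y = a + b*x (closed-form solution)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    a = (sy - b * sx) / n                          # intercept
    return a, b

# synthetic rate data: rate = A * exp(E / (kB * T)) with known, made-up A and E
A, E = 1e6, 0.5  # prefactor and energy in eV
Ts = [250.0, 300.0, 350.0, 400.0]
rates = [A * math.exp(E / (kB * T)) for T in Ts]

# transform: log(rate) = log(A) + (E/kB) * (1/T) is linear in 1/T
xs = [1.0 / T for T in Ts]
ys = [math.log(r) for r in rates]
a, b = linear_fit(xs, ys)
print(b * kB)  # recovers E (exactly here, since the data is noise-free)
```

With real, noisy data the log transformation also transforms the errors, which is exactly why the hint asks you to compare this approach with the nonlinear fits using percent errors or x-errors.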

Case Studies - Deliverables

Produce a written report for each of the four case studies. To do this, go through the case studies and apply all the new things that we learn. At the end you will analyze the results of the fit including a discussion of errors of the fitted parameters, chi-2 landscape, shallowness of the minimum, covariance matrix. You should discuss all case studies among yourselves, and we will keep discussing them in class week after week, while you build up your treatment.
After the last two talks, re-visit the case studies one more time while applying the latest that you have learned, like the concept of confidence intervals.
