Commit 1e3e67c2 by Parameswaran Ajith

added problems on statistical inference.

parent 3d0c76a8
Showing with 27 additions and 0 deletions
@@ -78,6 +78,7 @@
\input{diff.tex}
\section{Numerical integration}
\label{sec:integr}
\input{integration.tex}
\section{Ordinary differential equations: Initial value problems}
@@ -93,8 +94,11 @@
\input{fourier.tex}
\section{Curve fitting}
\label{sec:curve_fitting}
\input{curve.tex}
\section{Statistical inference}
\input{statinf.tex}
%\section{Lab 2}
%\input{rest.tex}
Here we solve the curve-fitting problem presented in Sec.~\ref{sec:curve_fitting} using Bayesian statistical inference. Given some data $d$ and a model $\mathcal{M}$, Bayesian parameter estimation involves computing the posterior distribution of the parameters $\theta$ describing the model. Using Bayes' theorem, we can write
\begin{equation}
p(\theta | d, \mathcal{M}) = \frac{p(\theta | \mathcal{M}) ~ p(d | \theta , \mathcal{M})}{p(d | \mathcal{M})},
\end{equation}
where $p(\theta | \mathcal{M})$ is the prior distribution of the parameters $\theta$, $p(d | \theta, \mathcal{M})$ is the likelihood of the data given the parameters $\theta$ and the model $\mathcal{M}$, and
\begin{equation}
p(d | \mathcal{M}) = \int p(\theta | \mathcal{M}) ~ p(d | \theta , \mathcal{M}) \, d\theta
\label{eq:evidence}
\end{equation}
is the evidence of the model $\mathcal{M}$. Bayesian model selection involves comparing the evidences of different models, say $\mathcal{M}_1$ and $\mathcal{M}_2$, by means of the likelihood ratio (Bayes factor) between the two models:
\begin{equation}
\mathcal{B}^1_2 = \frac{p(d | \mathcal{M}_1)}{p(d | \mathcal{M}_2)}.
\end{equation}
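As a minimal sketch of how these quantities can be evaluated in practice, the following Python snippet computes the posterior and the evidence integral of Eq.~\eqref{eq:evidence} on a grid for a toy one-parameter Gaussian model, using the trapezoidal rule. The data, noise level, and grid limits are illustrative assumptions, not part of the course data set.

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal rule: integral of y(x) over the grid x."""
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

# Toy data: noisy measurements of a constant signal. The true value,
# noise level and grid limits are illustrative assumptions.
rng = np.random.default_rng(42)
mu_true, sigma = 2.0, 0.5
d = rng.normal(mu_true, sigma, size=20)

# Model M1: d_i ~ N(mu, sigma), one free parameter mu with a uniform prior.
mu_grid = np.linspace(0.0, 4.0, 1001)
log_like = np.array([-0.5 * np.sum(((d - mu) / sigma) ** 2) for mu in mu_grid])
prior = np.full_like(mu_grid, 1.0 / (mu_grid[-1] - mu_grid[0]))

# Evidence Z = \int p(mu|M1) p(d|mu,M1) dmu, via the trapezoidal rule.
# The constant factor exp(log_like.max()) is dropped for numerical
# stability; it cancels in a Bayes factor computed on the same data.
integrand = prior * np.exp(log_like - log_like.max())
Z = trapezoid(integrand, mu_grid)

# Normalised posterior p(mu | d, M1) and its maximum
posterior = integrand / Z
mu_map = mu_grid[np.argmax(posterior)]
```

The same pattern generalises to the problems below: only the likelihood and the parameter grid change.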
\subsubsection{Problems}
\begin{enumerate}
\item Compute the posterior distribution $p(H_0 | d, \mathcal{M}_1)$ of the Hubble constant $H_0$ using the Supernova Cosmology Project data with $z \leq 0.1$. Take Hubble's law [Eq.~\eqref{eq:Hubble_law}] as the model $\mathcal{M}_1$, and assume a uniform prior for $H_0$ in the interval $(10, 100)$~km/s/Mpc.
\item Repeat the analysis using the full data set. What differences do you see in the posterior?
\item Compute the joint posterior distribution $p(H_0, \Omega_M | d, \mathcal{M}_2)$ of the Hubble constant $H_0$ and the matter density $\Omega_M$, taking the $\Lambda$CDM model [Eq.~\eqref{eq:lcdm}] as $\mathcal{M}_2$. You can compute the posterior on a two-dimensional grid. Assume uniform priors for $H_0$ in the interval $(10, 100)$~km/s/Mpc and for $\Omega_M$ in the interval $(0, 1)$.
\item Compute the likelihood ratio (Bayes factor) between Hubble's law and the $\Lambda$CDM model by evaluating the evidences [Eq.~\eqref{eq:evidence}] of the two models using one of the numerical integration methods from Sec.~\ref{sec:integr}.
\end{enumerate}
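A sketch of how the first problem might be set up, using synthetic low-redshift data in place of the Supernova Cosmology Project sample. The redshifts, distances, and error bars below are invented for illustration; the real exercise reads them from the data file.

```python
import numpy as np

c = 299792.458  # speed of light in km/s

# Synthetic stand-in for the low-redshift supernova sample. All numbers
# (H0_true, redshift range, 5 Mpc scatter) are illustrative assumptions.
rng = np.random.default_rng(1)
H0_true = 70.0                                   # km/s/Mpc
z = rng.uniform(0.01, 0.1, size=30)
sigma_D = np.full(30, 5.0)                       # distance errors in Mpc
D = c * z / H0_true + rng.normal(0.0, sigma_D)   # Hubble's law: D = c z / H0

# Uniform prior for H0 on (10, 100) km/s/Mpc; posterior evaluated on a grid.
H0_grid = np.linspace(10.0, 100.0, 901)
log_like = np.array([-0.5 * np.sum(((D - c * z / H0) / sigma_D) ** 2)
                     for H0 in H0_grid])
posterior = np.exp(log_like - log_like.max())    # flat prior cancels here
posterior /= np.sum(0.5 * (posterior[1:] + posterior[:-1]) * np.diff(H0_grid))

H0_map = H0_grid[np.argmax(posterior)]           # maximum a posteriori H0
```

For the model-selection problem, keep the likelihood normalisation constants and integrate the prior-times-likelihood over each model's parameter grid to obtain the evidences.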
\ No newline at end of file