Beyond Filling Factor


It is time to go beyond the concept of filling factors, and this is a very simple proposal to this end. Imagine that we have a certain pixel that we subdivide into $N$ equal-area zones. The polarimetric signal that we measure is given by the sum of the fields of the $N$ zones plus additive Gaussian noise with zero mean and variance $\sigma_n^2$:

$$V = \sum_{i=1}^N \alpha_i V_i + \epsilon.$$

The trick is to consider that all the $V_i$ fields are drawn from a multivariate Gaussian distribution with zero mean and a certain covariance matrix $\mathrm{C}$ of dimension $N\times N$. The sum of $N$ numbers with such a distribution is a normal random variable with zero mean and variance equal to

$$ \mathrm{var}(V) = \sum_{i=1}^N \alpha_i^2 \mathrm{var}(V_i) + 2 \sum_{1\le i < j \le N} \alpha_i \alpha_j \mathrm{cov}(V_i,V_j) + \sigma_n^2. $$

In the simple case in which all the variables have the same variance $\sigma^2$, a common correlation $\rho$ and a single common value $\alpha_i=\alpha$, this simplifies to:

$$ \mathrm{var}(V) = \alpha^2 N \sigma^2 + \alpha^2 N(N-1) \rho \sigma^2 + \sigma_n^2 = \alpha^2 \sigma^2 \left[ N + N(N-1)\rho \right] + \sigma_n^2. $$
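As a quick sanity check of this expression, here is a minimal numpy sketch (the numerical values are arbitrary, chosen only for illustration) that draws the $N$ equicorrelated zone fields, adds the noise, and compares the sample variance of $V$ with the closed form:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative values: number of zones, common variance, correlation,
# common alpha and noise variance
N, sigma2, rho, alpha, sigma_n2 = 8, 1.5, 0.3, 1.0, 0.1

# Equicorrelated covariance matrix: sigma^2 on the diagonal,
# rho*sigma^2 off the diagonal
C = sigma2 * (rho * np.ones((N, N)) + (1.0 - rho) * np.eye(N))

# Many realizations of the N zone fields plus the Gaussian noise
n_samples = 200_000
V_i = rng.multivariate_normal(np.zeros(N), C, size=n_samples)
V = alpha * V_i.sum(axis=1) + rng.normal(0.0, np.sqrt(sigma_n2), n_samples)

# Closed-form variance from the expression above
var_theory = alpha**2 * sigma2 * (N + N * (N - 1) * rho) + sigma_n2
print(V.var(), var_theory)  # the two values should agree closely
```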

We assume that the value of $N$ for each pixel is drawn from a Poisson distribution characterized by a parameter $\lambda$. The parameters of our model are the value of $N$ for each pixel, together with the global $\sigma^2$, $\rho$ and $\lambda$, which we assume common to all pixels. Therefore, a direct application of Bayes' theorem gives:

$$ p(\sigma^2,\rho,\lambda,\mathbf{N}|\mathbf{D}) \propto p(\mathbf{D}|\sigma^2,\rho,\lambda,\mathbf{N})\, p(\sigma^2,\rho,\lambda,\mathbf{N}), $$

where $\mathbf{D}=[D_1,\ldots,D_M]$ represents the set of $M$ observed values of the circular polarization in the pixels. Assuming that all pixels are statistically independent and expanding the prior, we find:

$$ p(\sigma^2,\rho,\lambda,\mathbf{N}|\mathbf{D}) \propto \left[ \prod_{j=1}^M p(D_j|\sigma^2,\rho,N_j) \right] \left[ \prod_{j=1}^M p(N_j|\lambda) \right] p(\sigma^2) p(\rho) p(\lambda). $$

Given the generative model, each individual likelihood is given by:

$$ p(D_j|\sigma^2,\rho,N_j) = \mathcal{N}(D_j|0,\sigma^2 \left[ N_j + N_j(N_j-1)\rho \right] + \sigma_n^2), $$

where we have assumed that $\alpha_i=1, \forall i$. The log-likelihood is given by:

$$ \log p(D_j|\sigma^2,\rho,N_j) = -\frac{1}{2} \log 2\pi - \frac{1}{2} \log \left[ \sigma^2 \left( N_j + N_j(N_j-1)\rho \right) + \sigma_n^2 \right] -\frac{1}{2}\frac{D_j^2}{\sigma^2 \left[ N_j + N_j(N_j-1)\rho \right] + \sigma_n^2}. $$
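In code, this per-pixel log-likelihood is a direct transcription of the expression above; the following sketch (the function name and argument order are my own choice) evaluates it:

```python
import numpy as np

def log_likelihood(D_j, sigma2, rho, N_j, sigma_n2):
    # Variance of the measured signal for a pixel with N_j zones:
    # sigma^2 [N_j + N_j (N_j - 1) rho] + sigma_n^2
    var = sigma2 * (N_j + N_j * (N_j - 1) * rho) + sigma_n2
    # Zero-mean Gaussian log-density evaluated at D_j
    return -0.5 * np.log(2.0 * np.pi) - 0.5 * np.log(var) - 0.5 * D_j**2 / var
```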

Likewise, the log-Poisson term is given by:

$$ \log p(N_j|\lambda) = N_j \log \lambda - \lambda - \log N_j! $$
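A small sketch of this term, using `scipy.special.gammaln` for $\log N_j!$ and checking the result against `scipy.stats.poisson.logpmf`:

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import poisson

def log_poisson(N_j, lam):
    # N_j log(lambda) - lambda - log(N_j!), with log N_j! = gammaln(N_j + 1)
    return N_j * np.log(lam) - lam - gammaln(N_j + 1.0)

# The two values should agree to machine precision
print(log_poisson(4, 2.5), poisson.logpmf(4, 2.5))
```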

The marginalization of the $N_j$ cannot be carried out analytically, so we have to resort to approximate methods. One possibility is to use MCMC methods, but the dimensionality of $\mathbf{N}$ is expected to be very large. Instead, we note that the marginalization factorizes over pixels:

$$ p(\sigma^2,\rho,\lambda|\mathbf{D}) \propto \prod_{j=1}^M \left[ \sum_{N_j=1}^\infty p(D_j|\sigma^2,\rho,N_j) p(N_j|\lambda) \right] p(\sigma^2) p(\rho) p(\lambda). $$

The log-posterior is then given, up to an additive constant, by:

$$ \log p(\sigma^2,\rho,\lambda|\mathbf{D}) = \sum_{j=1}^M \log \left[ \sum_{N_j=1}^\infty p(D_j|\sigma^2,\rho,N_j)\, p(N_j|\lambda) \right] +\log p(\sigma^2) +\log p(\rho) + \log p(\lambda). $$
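Putting the pieces together, here is a vectorized sketch of this marginalized log-posterior, where `D` is the array of the $M$ measured signals. It assumes flat priors (so the $\log p(\sigma^2)$, $\log p(\rho)$ and $\log p(\lambda)$ terms are dropped) and truncates the infinite sum over $N_j$ at a value `N_max`, with `logsumexp` keeping the inner sum numerically stable:

```python
import numpy as np
from scipy.special import gammaln, logsumexp

def log_posterior(D, sigma2, rho, lam, sigma_n2, N_max=50):
    # Candidate numbers of zones; following the text, N_j starts at 1,
    # and the infinite sum is truncated at N_max (safe once the Poisson
    # tail beyond N_max is negligible for the current lambda)
    N = np.arange(1, N_max + 1)
    # Per-N variance of the measured signal
    var = sigma2 * (N + N * (N - 1) * rho) + sigma_n2
    # log p(D_j | sigma^2, rho, N) for every pixel and every N: shape (M, N_max)
    log_like = -0.5 * np.log(2.0 * np.pi * var) - 0.5 * D[:, None]**2 / var
    # log p(N | lambda) for every candidate N
    log_prior_N = N * np.log(lam) - lam - gammaln(N + 1.0)
    # Marginalize N per pixel with logsumexp, then sum the pixel terms
    return logsumexp(log_like + log_prior_N, axis=1).sum()
```

With flat priors this quantity can be maximized directly, or handed to a generic MCMC sampler over the three remaining parameters $(\sigma^2,\rho,\lambda)$.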