Le Activity Fünf

The activity is entitled: Fourier transform model of image formation. (I got this.) The mathematical side of the Fourier transform has been discussed previously in AP 185. In image formation, we now apply the Fourier transform to convert an image from its physical dimensions to spatial frequency. To familiarize myself with the discrete Fourier transform, I took the Fourier transforms of different images. (Programs written in Python.)


Figure 1. (top-left) Synthetic circle image, (top-middle) Fourier transform of the synthetic circle showing an Airy pattern, (top-right) Fourier transform of the Airy pattern. (bottom-left) Image of the letter ‘A’, (bottom-right) Fourier transform of the letter ‘A’.

I started with a circular aperture and the capital letter ‘A’ as my initial test images. As predicted by the analytical Fourier transform, an Airy pattern is produced when I take the Fourier transform of a circular aperture. Here are the other synthetic images produced:


Figure 2. (top-left pair) Sinusoid/corrugated roof and its Fourier transform, (top-right pair) double slit and its Fourier transform, (middle-left pair) modified double slit and its Fourier transform, (middle-right pair) centered square and its Fourier transform, (bottom pair) Gaussian bell curve and its Fourier transform.
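As a sanity check, the circle-to-Airy step above can be sketched in a few lines of numpy (the grid size and radius here are arbitrary test values, not necessarily the ones used for the figures):

```python
import numpy as np

# Synthetic circular aperture on an M x M grid (M and the radius are test values)
M = 128
x = np.linspace(-5, 5, M)
xx, yy = np.meshgrid(x, x)
A = (np.sqrt(xx**2 + yy**2) <= 0.5).astype(float)

# 2D discrete Fourier transform; fftshift moves the zero-frequency
# component to the center of the array for display
F = np.fft.fftshift(np.fft.fft2(A))
intensity = np.abs(F)

# The Airy-like pattern is brightest at the zero-frequency center,
# where |F| equals the sum of the aperture values
peak = np.unravel_index(np.argmax(intensity), intensity.shape)
```

Displaying `intensity` with matshow then shows the concentric rings of the Airy pattern.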

Next, we were to simulate the image of the word ‘VIP’ as viewed through an aperture. To do this, we convolve the ‘VIP’ image with the aperture by multiplying the image’s Fourier transform with the aperture (treated as already being in the frequency domain) and taking the inverse transform. The resulting image still shows the word ‘VIP’, now written in an Airy-pattern-like font. These are the images acquired:


Figure 3. (left) Image of the aperture used in the convolution, treated as already being in the Fourier domain, (middle) image of the word ‘VIP’, (right) the convolution, i.e., the inverse transform of the product of the Fourier transform of the ‘VIP’ image and the aperture.
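In code, this convolution is a pointwise product in the frequency domain followed by an inverse transform. A minimal sketch, with a synthetic square standing in for the ‘VIP’ image (the shapes and sizes are made-up stand-ins):

```python
import numpy as np

M = 128
x = np.linspace(-1, 1, M)
xx, yy = np.meshgrid(x, x)

# Stand-in object and a circular aperture acting as the transfer function
obj = ((np.abs(xx) < 0.3) & (np.abs(yy) < 0.3)).astype(float)
aperture = (np.sqrt(xx**2 + yy**2) <= 0.2).astype(float)

# The aperture is taken to be already in the frequency domain, so it is
# fftshift-ed to match the quadrant layout of fft2's unshifted output
H = np.fft.fftshift(aperture)
filtered = np.abs(np.fft.ifft2(H * np.fft.fft2(obj)))
```

The result is a low-pass-filtered (blurred, ringing) version of the object, which is why the ‘VIP’ text comes out in an Airy-pattern-like font.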

Next, we are to locate all the A’s in the phrase: THE RAIN IN SPAIN STAYS MAINLY IN THE PLAIN. To do this, we perform correlation between the image of the phrase and the image of the letter ‘A’ by multiplying the Fourier transform of the text with the complex conjugate of the Fourier transform of the letter ‘A’, then taking the inverse transform. If the text image is inverted (written from bottom right to top left), the peaks in the correlation image mark the locations of the letter A’s in the text. These are the images acquired:


Figure 4. (left) Image of the letter ‘A’, (middle) image of the text, (right) and the correlation of the two images.
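The correlation step can be checked on a toy pair of images, where a small patch plays the role of the letter ‘A’ and is planted at a known location (all sizes and positions here are made up):

```python
import numpy as np

M = 64
text = np.zeros((M, M))
template = np.zeros((M, M))

# A 3x3 bright patch as a stand-in "letter", planted at row 40, column 20;
# the template holds the same patch at the origin
patch = np.ones((3, 3))
text[40:43, 20:23] = patch
template[0:3, 0:3] = patch

# Correlation: FT of the text times the conjugate of the FT of the template,
# then an inverse transform; the peak marks where the patch sits in the text
corr = np.abs(np.fft.ifft2(np.fft.fft2(text) * np.conjugate(np.fft.fft2(template))))
peak = np.unravel_index(np.argmax(corr), corr.shape)
```

The peak lands at (40, 20), the planted location, with height 9 (the patch’s total energy).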

For this activity, I would give myself a score of 8 because of the missing outputs. This is the Python code I used for the activity.

# -*- coding: utf-8 -*-
"""
Created on Wed Sep 9 11:02:14 2015

@author: jesli
"""
# AP186 Activity 5
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from PIL import Image  # 'import Image' only works on very old PIL installs
import scipy as sp

def gaussian(x):
    return np.exp(-(x**2)/2)

M = 128
W = 10
dim = range(M)
x = np.linspace(-5, 5, M)
xx, yy = np.meshgrid(x, x)

## Circle
#r = np.sqrt(xx**2 + yy**2)
#A = np.zeros([M, M])
#for i in dim:
#    for j in dim:
#        if r[i][j] <= 0.5:
#            A[i][j] = 1

## Sinusoid
#A = np.sin(xx*np.pi)

## Simulated double slit
#A = np.zeros([M, M])
#for i in dim:
#    for j in dim:
#        if np.abs(xx[i][j]) >= 1.5 and np.abs(xx[i][j]) <= 1.7:
#            A[i][j] = 1

## Double slit 2
#A = np.zeros([M, M])
#for i in dim:
#    for j in dim:
#        if np.abs(xx[i][j]) >= 1.5 and np.abs(xx[i][j]) <= 1.7 and np.abs(yy[i][j]) <= 4:
#            A[i][j] = 1

## Square function
#A = np.zeros([M, M])
#for i in dim:
#    for j in dim:
#        if np.abs(xx[i][j]) <= 1 and np.abs(yy[i][j]) <= 1:
#            A[i][j] = 1

## 2D Gaussian
#r = np.sqrt(xx**2 + yy**2)
#A = gaussian(2*r)

## 'A' image to matrix
#image = Image.open("A.jpg")
#arr = np.array(image)/np.max(np.array(image))
#A = arr[:, :, 0]

# 'VIP' image to matrix and convolution
image = Image.open("VIP.jpg")
arr = np.array(image)/np.max(np.array(image))  # named 'arr' to avoid shadowing the Image module
B = arr[:, :, 0]
#A = np.fft.fftshift(A)   # aperture, treated as already in the frequency domain
#Bft = np.fft.fft2(B)
#C = A*Bft
#D = np.fft.ifft2(C)
#intensity = np.abs(D)

## Text correlation
#image1 = Image.open("Text.jpg")
#B = np.array(image1)/np.max(np.array(image1))
#image2 = Image.open("A2.jpg")
#A = np.array(image2)/np.max(np.array(image2))
#a = np.fft.fft2(A)
#b = np.fft.fft2(B)
#b = np.conjugate(b)
#c = a*b
#C = np.fft.ifft2(c)
#intensity = np.abs(C)
#intensity = np.fft.fftshift(intensity)

# Edge detection by convolution
A = np.array([[-1, -1, -1], [2, 2, 2], [-1, -1, -1]])  # horizontal-edge kernel
C = np.zeros([128, 128])
x = 62
y = x
C[x:x+A.shape[0], y:y+A.shape[1]] = A  # place the kernel at the center of the array
C = np.fft.fftshift(C)                 # shift the kernel to the origin
c = np.fft.fft2(C)                     # transform the kernel before multiplying
b = np.fft.fft2(B)
d = c*b
D = np.fft.ifft2(d)
intensity = np.abs(D)

#ft = np.fft.fft2(A)
#intensity = np.abs(ft)
#shifted = np.fft.fftshift(intensity)
#ft2s = np.fft.fft2(shifted)
#ft2s = np.abs(ft2s)
#ft2s = np.fft.fftshift(ft2s)
#ft2i = np.fft.fft2(intensity)
#ft2i = np.abs(ft2i)
#ft2i = np.fft.fftshift(ft2i)

#plt.figure(1)
#plt.matshow(A, cmap=cm.gray)
plt.figure(2)
plt.matshow(intensity, cmap=cm.gray)
#plt.figure(3)
#plt.matshow(shifted, cmap=cm.gray)
#plt.figure(4)
#plt.matshow(ft2s, cmap=cm.gray)
#plt.figure(5)
#plt.matshow(ft2i, cmap=cm.gray)

Le Activity Vier

I got sick. I missed some classes. My SIVP toolbox won’t install properly. And now I’m in a hurry to finish this. Such is life. Much thanks to Ron-sama for letting me use his laptop for this activity.

First on the list is to approximate the area of a synthetic image. We learned how to make synthetic images in Activity 3. For this part of the activity, I borrowed a circle with radius 0.4.


Figure 1. (left) Synthetic circle with radius = 0.4 units (analytical area ≈ 0.503 sq units), and (right) edge detection function applied to the circle in Scilab.

Using the Scilab program in APPENDIX A!, I was able to approximate the area of the synthetic circle: A_sum = 0.50 when rounded to 2 significant figures, against an analytical area of π(0.4)² ≈ 0.503. This tells me that the program correctly implements Green’s Theorem.
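The discrete Green’s theorem the program implements is the shoelace formula, A = (1/2)|sum(x_i*y_(i+1) − x_(i+1)*y_i)| over boundary points ordered along the contour. A quick pure-Python check on shapes with known areas (the vertex lists are hand-picked examples):

```python
def shoelace_area(pts):
    # Signed-area sum from Green's theorem; abs() drops the orientation sign
    n = len(pts)
    s = 0.0
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]  # wrap around to close the contour
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0

unit_square = [(0, 0), (1, 0), (1, 1), (0, 1)]  # area 1
triangle = [(0, 0), (2, 0), (0, 2)]             # area 2
```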

Now to approximate the area of a particular place of interest with our new-found power of Green’s Theorem. The first place that came to my mind was (of course) Leyte, my hometown.

Figure 2. (left) Google Maps image of Leyte, (middle) area of interest extracted using Paint and ImageJ, (right) edges detected using a Scilab function.

I realize that it is quite ambitious to accurately approximate the area of this figure given its irregular shape. Simply sorting the theta values to determine the direction of the contour is less effective here, since edge points at different parts of the contour can share the same theta value. But for experiment’s sake, I tried theta sorting on this irregular shape to see how much it deviates from known values. Using the program in APPENDIX B!, I obtained an approximation of the area of Leyte, and using the Measure function of ImageJ, I converted my pixel dimensions to real-world dimensions.

A_sum = 6.953e+9 sq meters, while, according to Google, A_leyte = 2845 sq miles = 7.3685e+9 sq meters, which is off by about 4e+8 sq meters (5.6% from the Google estimate).

I’m not sure how accurate Google’s estimate is, but I suppose it’s hard to calculate precisely given tides, etc.
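The theta-value problem can be made concrete: for a notched (non-convex) shape, distinct boundary points share the same theta, so sorting by theta scrambles the contour and the shoelace sum lands on the wrong area. A small made-up example (a 6 x 6 square with a 4 x 2 notch cut out, true area 28):

```python
import math

def shoelace_area(pts):
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2.0

# Square from -3..3 with a notch (x in [-1, 3], y in [-1, 1]) removed: 36 - 8 = 28
notched = [(3, 1), (3, 3), (-3, 3), (-3, -3), (3, -3), (3, -1), (-1, -1), (-1, 1)]

# Re-ordering the same points by theta about the origin, as the Scilab code does,
# gives a different traversal because several points share a theta value
by_theta = sorted(notched, key=lambda p: math.atan2(p[1], p[0]))
```

With this ordering the theta-sorted area comes out wrong (24 rather than 28 here), which is the kind of deviation seen in the Leyte estimate.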

Here’s another way to hopefully increase the accuracy of area estimation.11949629_10204710301339686_1181272444_n                   11938150_10204710301019678_1173318094_n 11938035_10204710301099680_1853213757_n 11938853_10204710301059679_249292654_n                             11951051_10204710300979677_315703352_n 11938150_10204710300859674_241570497_n 11949670_10204710300939676_286384897_n

Figure 3. (top) Leyte divided into 6 parts, (middle and bottom rows) the divisions of Leyte as separate images.

I decided to divide Leyte into “less irregular” parts and add their areas afterwards, so that the calculation errors from equal theta values are minimized. This also means using a different off-center origin for each division.
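The idea behind subdividing is that shoelace areas add: computing each piece about its own origin and summing recovers the whole. A toy check with a rectangle split into halves (coordinates invented for the test):

```python
def shoelace_area(pts):
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2.0

whole = [(0, 0), (4, 0), (4, 2), (0, 2)]   # 4 x 2 rectangle, area 8
left  = [(0, 0), (2, 0), (2, 2), (0, 2)]   # left half, area 4
right = [(2, 0), (4, 0), (4, 2), (2, 2)]   # right half, area 4

# The shoelace sum of a closed contour is translation-invariant,
# so each piece may use its own off-center origin
left_shifted = [(x - 1, y - 1) for (x, y) in left]
```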

Here’s a very sleepy ducky to break the ice (I suddenly wanted to have a pet duck)


Aaaaaand for the last part of the activity, I am to use ImageJ to analyze an image using a known pixel-to-length ratio. This is the image to be measured:

Scan Mario

To calibrate ImageJ’s measurements, I measured a line of known length (beside the ‘l’) and used the Set Scale function, giving a calibration of 80.002 pixels/cm. Measuring a different line (above the coffee bean), I then used the calibrated scale to estimate the length of that line and the area of the blue circle.

ImageJ measured the line to be 0.968 cm and the blue circle to be 6.323 sq cm. The physical estimates are about 1 cm for the line and 6.606 sq cm for the circle.
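The calibration arithmetic itself is just a ratio: a length in cm is the pixel length divided by the scale (pixels per cm), and areas divide by the scale squared. The pixel values below are hypothetical, picked only to illustrate the conversion:

```python
scale = 80.002      # pixels per cm, from ImageJ's Set Scale

line_px = 77.44     # hypothetical pixel length of the measured line
line_cm = line_px / scale

area_px = 40464.0   # hypothetical pixel area of the blue circle
area_cm2 = area_px / scale**2   # area converts with the square of the scale
```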

Now to rate myself: I’d give myself a 9, for not being able to fix the theta value problem in my selected image.

APPENDIX A!

//Green's theorem implemented using theta value sorting on a synthetic circle
nx = 200;
ny = 200;
x = linspace(-1, 1, nx);
y = linspace(-1, 1, ny);
[X, Y] = ndgrid(x, y);
r = sqrt(X.^2 + Y.^2);
A = zeros(nx, ny);
A(find(r < 0.4)) = 1;
clf;
imwrite(A, 'Circle.bmp');
imwrite(A, 'Circle.jpg');
im = imread('Circle.bmp');
E = edge(im, 'canny');
imwrite(E, 'CircleE.jpg');
imshow(E);
[x1, y1] = find(E);
Rx = x(x1);
Ry = y(y1);
theta = atan(Ry, Rx)*180/%pi;       // two-argument atan, in degrees
C = cat(1, theta, Rx, Ry);
[D, k] = gsort(C(1,:), 'g', 'i');   // sort edge points by increasing theta
sorted = C(:, k);
A_sum = 0;
for i = 1:length(sorted(1,:))
    if i == length(sorted(1,:)) then  // close the contour with the first point
        A_pie = 0.5*(sorted(2,i)*sorted(3,1) - sorted(2,1)*sorted(3,i));
    else
        A_pie = 0.5*(sorted(2,i)*sorted(3,i+1) - sorted(2,i+1)*sorted(3,i));
    end
    A_sum = A_sum + A_pie;  // accumulate every term, including the closing one
end
A_sum = abs(A_sum);         // signed sum; abs makes it orientation-independent

APPENDIX B!

//Green's theorem implemented using theta value sorting on Leyte
im = imread('Leyte1.bmp');
leyte_bw = rgb2gray(im);
E = edge(leyte_bw, 'canny');
//imshow(E);
[xc, yc] = size(im);
pix2dim = 20000/67.25;   // meters per pixel, from the ImageJ scale measurement
x = linspace(-1*xc*pix2dim/2, xc*pix2dim/2, 700);
[x1, y1] = find(E);
Rx = x(x1);
Ry = x(y1);
theta = atan(Ry, Rx)*180/%pi;
C = cat(1, theta, Rx, Ry);
[D, k] = gsort(C(1,:), 'g', 'i');
sorted = C(:, k);
A_sum = 0;
for i = 1:length(sorted(1,:))
    if i == length(sorted(1,:)) then  // close the contour with the first point
        A_pie = 0.5*(sorted(2,i)*sorted(3,1) - sorted(2,1)*sorted(3,i));
    else
        A_pie = 0.5*(sorted(2,i)*sorted(3,i+1) - sorted(2,i+1)*sorted(3,i));
    end
    A_sum = A_sum + A_pie;  // accumulate every term, including the closing one
end
A_sum = abs(A_sum);         // signed sum; abs makes it orientation-independent