Algorithms - How they work.

In this video I show the gain computation, for all features and valid data intervals, used to create a decision tree. I used the Iris dataset, available in the scikit-learn datasets module, plus a simple data augmentation technique (basic random noise) to create more samples.
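As an illustration of the gain computation described above, here is a minimal Python sketch (this is not the code from the video; the entropy-based split and the toy petal-length values are illustrative):

```python
import math

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    total = len(labels)
    result = 0.0
    for c in set(labels):
        p = labels.count(c) / total
        result -= p * math.log2(p)
    return result

def gain(values, labels, threshold):
    """Information gain of splitting one feature at `threshold`."""
    left = [l for v, l in zip(values, labels) if v <= threshold]
    right = [l for v, l in zip(values, labels) if v > threshold]
    if not left or not right:
        return 0.0
    weighted = (len(left) * entropy(left)
                + len(right) * entropy(right)) / len(labels)
    return entropy(labels) - weighted

# toy feature (petal length) with two classes; values are made up
values = [1.4, 1.5, 4.7, 4.5]
labels = ['setosa', 'setosa', 'versicolor', 'versicolor']
best = max(gain(values, labels, t) for t in values)
```

The best split separates the two classes perfectly, so its gain equals the full entropy of the labels (1 bit for two balanced classes).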
The source code and presentation used to create the video are available at https://github.com/tkorting/youtube/how-decision-trees-works-2/
Please like and share the video, and subscribe to my channel.
In this video I explain how the kNN (k Nearest Neighbors) algorithm works for image classification. We vary the maximum distance at which neighbors are considered for classification (from 1 to 100), in order to show the evolution of the classification. I selected 9 samples from 3 patterns (bare soil, urban areas, vegetation) and used k = 3. I provide an animation of the scatterplot over 3 visible spectral bands of a CBERS satellite image (available at https://github.com/tkorting/remote-sensing-images).
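The voting step of kNN with k = 3 can be sketched as follows (a toy Python example with made-up 3-band pixel samples, not the actual data from the video):

```python
import math
from collections import Counter

def knn_classify(samples, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest samples."""
    dists = sorted((math.dist(s, query), l)
                   for s, l in zip(samples, labels))
    votes = Counter(l for _, l in dists[:k])
    return votes.most_common(1)[0][0]

# toy 3-band pixel values for the three patterns in the video
samples = [(40, 35, 30), (45, 40, 33), (42, 38, 31),   # bare soil
           (90, 85, 80), (95, 88, 82), (92, 86, 81),   # urban areas
           (30, 60, 25), (28, 62, 27), (31, 58, 24)]   # vegetation
labels = ['soil'] * 3 + ['urban'] * 3 + ['veg'] * 3
pred = knn_classify(samples, labels, (91, 86, 80), k=3)  # 'urban'
```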
The source code used to create the animations is available at https://github.com/tkorting/youtube/tree/master/knn-for-image-classification
The slides are available at https://prezi.com/bf7r0vaasqim/?utm_campaign=share&utm_medium=copy
Download free remote sensing images at http://www.dgi.inpe.br/catalogo
Please like and share the video, and subscribe to my channel.
In this video I explain how the Circle Hough Transform works: every edge pixel detected in the original image (using the Canny algorithm) casts votes into an accumulator. The accumulator always corresponds to one specific radius, so it only detects circles of that size; for a new radius, a new accumulator must be computed. The algorithm is based on the Wikipedia article (https://en.wikipedia.org/wiki/Circle_Hough_Transform), adapted to a single radius, whereas the original algorithm computes all radii.
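The single-radius accumulator can be sketched like this (a simplified Python version; the video uses Canny edges from a real image, while here the edge points are synthetic):

```python
import math

def circle_hough(edge_points, width, height, radius):
    """Vote for candidate circle centres at one fixed radius:
    each edge point votes for every centre lying `radius` away."""
    acc = [[0] * width for _ in range(height)]
    for x, y in edge_points:
        for theta in range(360):
            a = int(round(x - radius * math.cos(math.radians(theta))))
            b = int(round(y - radius * math.sin(math.radians(theta))))
            if 0 <= a < width and 0 <= b < height:
                acc[b][a] += 1
    return acc

# synthetic edge points sampled on a circle of radius 5 centred at (10, 10)
pts = [(10 + round(5 * math.cos(math.radians(t))),
        10 + round(5 * math.sin(math.radians(t)))) for t in range(0, 360, 10)]
acc = circle_hough(pts, 21, 21, 5)
votes, best_a, best_b = max((acc[b][a], a, b)
                            for b in range(21) for a in range(21))
```

The accumulator peak lands on (or immediately next to, because of pixel rounding) the true centre (10, 10).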
The source code used to create the animations is available at https://github.com/tkorting/youtube/tree/master/how-circle-hough-transform-works
The slides are available at https://prezi.com/mol2d1rmbpwi/?utm_campaign=share&utm_medium=copy
Download free remote sensing images at http://www.dgi.inpe.br/catalogo
Please like and share the video, and subscribe to my channel.
In this video I explain how image pan-sharpening works, combining a multispectral remote sensing image with a panchromatic image, both from the WorldView-2 sensor. For this purpose, I converted one RGB composition of the multispectral image to the HSV color space, using scikit-image in Python. Then I replaced the Value component with the panchromatic band, and converted the resulting HSV (HSpan) image back to an RGB composition.
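The per-pixel substitution can be sketched with the standard library's colorsys module (the video uses scikit-image on whole arrays; the pixel values below are made up):

```python
import colorsys

def pansharpen_pixel(r, g, b, pan):
    """Replace the Value of one RGB pixel (all channels in [0, 1]) with
    the co-registered panchromatic value, keeping hue and saturation."""
    h, s, _ = colorsys.rgb_to_hsv(r, g, b)
    return colorsys.hsv_to_rgb(h, s, pan)

# made-up values: a dull reddish multispectral pixel, a bright pan pixel
r, g, b = pansharpen_pixel(0.4, 0.2, 0.1, 0.9)
```

Because only V changes, the band ratios (the hue) of the original pixel survive: 0.4:0.2:0.1 becomes 0.9:0.45:0.225.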
The source code used to create the animations is available at https://github.com/tkorting/youtube/tree/master/image-pan-sharpening
The slides are available at https://prezi.com/yo0iwam-at3m/?utm_campaign=share&utm_medium=copy
The reference book can be found at https://www.amazon.com/Introductory-Digital-Image-Processing-Perspective-dp-013405816X/dp/013405816X/
Download free remote sensing images at http://www.dgi.inpe.br/catalogo
Please like and share the video, and subscribe to my channel.
In this video I show some basic techniques for image enhancement, based on single-pixel transformation functions, which can be:
- gain
- offset
- logarithm
- inverse logarithm
- square root (nth root)
- square (nth power)
- negative
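These point operations can be sketched in Python as follows, for intensities scaled to [0, 1] (the gain/offset constants are illustrative, not the exact ones from the video):

```python
import math

# single-pixel transfer functions on intensities in [0, 1]
def gain_offset(p, gain=1.2, offset=0.05):
    return min(1.0, gain * p + offset)   # clip to the valid range

def log_transform(p):
    return math.log1p(p) / math.log(2.0)  # log(1 + p), normalised to [0, 1]

def inverse_log(p):
    return 2.0 ** p - 1.0                 # inverse of log_transform

def nth_root(p, n=2):
    return p ** (1.0 / n)                 # square root when n == 2

def nth_power(p, n=2):
    return p ** n                         # square when n == 2

def negative(p):
    return 1.0 - p
```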
The source code and images used to create the animations are available at https://github.com/tkorting/youtube/tree/master/how-image-enhancement-works
The slides are available at https://prezi.com/5wblhdsbaqbz/?utm_campaign=share&utm_medium=copy
The reference book can be found at https://www.amazon.com/Digital-Image-Processing-Rafael-Gonzalez/dp/0133356728
Download free remote sensing images at http://www.dgi.inpe.br/catalogo
Please like and share the video, and subscribe to my channel.
In this video I show the application of the Normalized Difference Water Index (NDWI), with a threshold that splits the image into water and non-water targets, and apply combined line detectors to detect ships in medium-resolution remote sensing images.
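The NDWI thresholding step can be sketched as follows (the McFeeters formulation using the green and NIR bands; the 0.0 threshold and the pixel values are illustrative):

```python
def ndwi(green, nir):
    """McFeeters NDWI: water pixels tend to be positive, because
    water reflects green light but absorbs near-infrared."""
    return (green - nir) / (green + nir)

# toy (green, nir) reflectance pairs
pixels = [(0.30, 0.05),   # open water: very low NIR
          (0.20, 0.40)]   # vegetated land: strong NIR
water_mask = [ndwi(g, n) > 0.0 for g, n in pixels]
```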
I captured the images from the Sentinel-hub EO-Browser site https://apps.sentinel-hub.com/eo-browser/
The source code used to create the animations and results is available at https://github.com/tkorting/youtube/tree/master/basic-ship-detection-in-rs
The presentation used to create the video is available at https://prezi.com/ugigibuzz33b/?utm_campaign=share&utm_medium=copy
Please like and share the video, and subscribe to my channel.
In this video I show a basic per-band index, called RATIO, which is useful to extract rooftop information from remote sensing RGB (true color) images.
There is a well-known index in remote sensing, called NDBI (Normalized Difference Built-up Index), which uses more than the visible bands to highlight built-up areas, including rooftops. Here I describe a simpler approach that can be applied to common satellite images obtained from Google Earth, for example.
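The description does not give the exact formula, so here is one common "basic ratio" of this kind, purely as an illustration (the red share of the total RGB brightness; the RATIO index in the video may differ):

```python
def band_ratio(r, g, b):
    """Hypothetical basic band ratio: the red share of total brightness.
    This is an illustrative stand-in, not necessarily the video's index."""
    total = r + g + b
    return r / total if total else 0.0

# ceramic rooftops are often red-dominant in true-color imagery
rooftop = band_ratio(180, 90, 70)   # made-up RGB values
```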
The source code used to create the animations and results is available at https://github.com/tkorting/youtube/tree/master/basic-rooftop-extraction
The presentation used to create the video is available at https://prezi.com/j9q1iydyzzqb/?token=137d190405bfefd94ca48110055ac01248eb784a123ad8ef95480996efda9111&utm_campaign=share&utm_medium=copy
Please like and share the video, and subscribe to my channel.
In this video I show a basic change detection scheme for Remote Sensing images.
I show an example with two CBERS-4/PAN5 images of the same place, one from 2015 and the other from 2018, calling them It1 and It2.
I propose an adaptation of the NDVI formula, which I call NDTS (Normalized Difference of Time Series) and which is basically:
NDTS = (It2 - It1)/(It2 + It1)
Then we square the result and apply a visually chosen threshold, followed by mode filtering, to highlight the detected changes between the two images.
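The NDTS computation and thresholding can be sketched as follows (the threshold value is illustrative, and the mode filtering step is omitted):

```python
def ndts(it1, it2):
    """Normalized Difference of Time Series for one pixel pair."""
    return (it2 - it1) / (it2 + it1)

def change_mask(img1, img2, threshold=0.1):
    """Square the NDTS and threshold it; the 0.1 threshold is
    illustrative, and the mode filtering step is left out here."""
    return [[ndts(a, b) ** 2 > threshold for a, b in zip(r1, r2)]
            for r1, r2 in zip(img1, img2)]

it1 = [[100, 100],
       [100, 100]]   # made-up 2015 pixel values
it2 = [[102, 100],
       [300, 100]]   # made-up 2018 pixel values
mask = change_mask(it1, it2)   # only the strongly changed pixel survives
```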
The LaTeX equations and source code for algorithms of this presentation are available at:
https://github.com/tkorting/youtube/tree/master/basic-change-detection-in-rs
The presentation is available at:
https://prezi.com/wjqol0gbp33v/?utm_campaign=share&utm_medium=copy
The two images are available at:
https://drive.google.com/open?id=1eK3APbGJG5AsIx4KL9Tj1r6PLBbhjsIC
https://drive.google.com/open?id=1vJp1bHlRgkm7InJvEHo0IRtg3aPUKP01
In this video we explain how to create integral images (also called summed-area tables). This technique is used in image processing to quickly compute sums over rectangular regions of an image.
Other techniques, such as the SURF method, employ integral images.
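A minimal Python sketch of the construction and of a constant-time rectangle-sum query (four lookups, regardless of rectangle size):

```python
def integral_image(img):
    """ii[y][x] holds the sum of img over the rectangle [0..y) x [0..x);
    the extra zero row and column simplify the query below."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = (img[y][x] + ii[y][x + 1]
                                + ii[y + 1][x] - ii[y][x])
    return ii

def rect_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] with exactly four table lookups."""
    return ii[y1][x1] - ii[y0][x1] - ii[y1][x0] + ii[y0][x0]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 3, 3))  # 28 = 5 + 6 + 8 + 9
```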
The original presentation is available at: http://prezi.com/qrm7ty_wjlok/?utm_campaign=share&utm_medium=copy
The source code used to create the animations in this video and equations in the presentation is available at:
https://github.com/tkorting/youtube/tree/master/integral-images
Based on the publication by Achanta et al. (2010), I created this video to visually represent the application of the SLIC algorithm in the context of superpixel generation.
I applied it to an RGB remote sensing image to detect 100 superpixels. The original presentation is available at xxx, and the source code in Python, created to compute the superpixels and produce a beautiful animation, is available at https://github.com/tkorting/youtube/blob/master/slic/main.py
The original algorithm's description is as follows:
SLIC Superpixels
Authors: Radhakrishna Achanta, Appu Shaji, Kevin Smith, Aurelien Lucchi, Pascal Fua, and Sabine Susstrunk
Abstract. Superpixels are becoming increasingly popular for use in computer vision applications. However, there are few algorithms that output a desired number of regular, compact superpixels with a low computational overhead. We introduce a novel algorithm that clusters pixels in the combined five-dimensional color and image plane space to efficiently generate compact, nearly uniform superpixels. The simplicity of our approach makes it extremely easy to use – a lone parameter specifies the number of superpixels – and the efficiency of the algorithm makes it very practical. Experiments show that our approach produces superpixels at a lower computational cost while achieving a segmentation quality equal to or greater than four state-of-the-art methods, as measured by boundary recall and under-segmentation error. We also demonstrate the benefits of our superpixel approach in contrast to existing methods for two tasks in which superpixels have already been shown to increase performance over pixel-based methods.
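The core of SLIC is its combined five-dimensional distance between a pixel and a cluster centre, which in the 2010 formulation can be sketched as (the m and S values below are illustrative defaults, not the paper's experiments):

```python
import math

def slic_distance(center, pixel, m=10.0, S=20.0):
    """Combined distance from Achanta et al. (2010), where each point is
    (l, a, b, x, y) in CIELAB + image coordinates; m weighs spatial
    compactness against colour similarity and S is the grid interval."""
    l1, a1, b1, x1, y1 = center
    l2, a2, b2, x2, y2 = pixel
    d_lab = math.sqrt((l1 - l2) ** 2 + (a1 - a2) ** 2 + (b1 - b2) ** 2)
    d_xy = math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2)
    return d_lab + (m / S) * d_xy
```

Each pixel is assigned to the centre minimising this distance, and centres are then recomputed, k-means style, within a limited spatial window.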
Follow my podcast: http://anchor.fm/tkorting
In this video I present a simple example of a CNN (Convolutional Neural Network) applied to the classification of digit images. The CNN is one of the best-known Deep Learning algorithms.
I first explain the basics of neural networks, i.e. the artificial neuron, followed by the concept of convolution and the common layers in a CNN, such as the convolutional, pooling, fully connected, and softmax classification layers.
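A toy forward pass through these layers can be sketched in Python (the input patch and filter weights below are made up; in a real CNN the weights are learned):

```python
import math

def conv2d_valid(img, k):
    """'Valid' 2-D convolution (really cross-correlation, as in most CNNs)."""
    kh, kw = len(k), len(k[0])
    return [[sum(k[j][i] * img[y + j][x + i]
                 for j in range(kh) for i in range(kw))
             for x in range(len(img[0]) - kw + 1)]
            for y in range(len(img) - kh + 1)]

def relu(fm):
    return [[max(0.0, v) for v in row] for row in fm]

def maxpool2x2(fm):
    return [[max(fm[y][x], fm[y][x + 1], fm[y + 1][x], fm[y + 1][x + 1])
             for x in range(0, len(fm[0]) - 1, 2)]
            for y in range(0, len(fm) - 1, 2)]

def softmax(z):
    e = [math.exp(v - max(z)) for v in z]
    s = sum(e)
    return [v / s for v in e]

# tiny 5x5 patch with a vertical edge, and one 2x2 edge filter
img = [[0, 0, 1, 1, 1]] * 5
k = [[-1, 1],
     [-1, 1]]
fm = maxpool2x2(relu(conv2d_valid(img, k)))           # 2x2 feature map
scores = softmax([v for row in fm for v in row][:2])  # toy 2-class head
```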
I read several references to prepare this material, but the main references are:
* Towards better exploiting convolutional neural networks for Remote Sensing scene classification. By Keiller Nogueira, Otávio Penatti, Jefersson dos Santos
* Everything you wanted to know about Deep Learning for computer vision but were afraid to ask. By Moacir Ponti, Leonardo Ribeiro, Tiago Nazaré, Tu Bui, John Collomosse
I also created an Octave (Matlab-like) implementation of the basic CNN shown in this video, which is available on my GitHub. Please follow the link for more details on the source code:
https://github.com/tkorting/youtube/tree/master/deep-learning-cnn
This presentation is available at my Prezi site, at this link:
https://prezi.com/n_r8p1ytanyh/?utm_campaign=share&utm_medium=copy
Thanks for watching this video, please like and share, and subscribe to my channel.
Regards
Follow my podcast: http://anchor.fm/tkorting
In this video we describe the DTW algorithm, which is used to measure the distance between two time series. It was originally proposed in 1978 by Sakoe and Chiba for speech recognition, and it is still used today for time series analysis. DTW is one of the most widely used similarity measures for time series: it computes the optimal global alignment between two series, accommodating temporal distortions between them.
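The classic dynamic program behind DTW can be sketched as follows (a plain Python version, without the windowing constraints discussed in the references):

```python
def dtw(a, b):
    """Dynamic Time Warping distance between two sequences, via the
    classic O(len(a) * len(b)) dynamic program over a cost matrix."""
    inf = float('inf')
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])        # local distance
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

print(dtw([1, 2, 3], [1, 2, 2, 3]))  # 0.0: the repeated 2 is absorbed
```

Unlike the Euclidean distance, DTW can stretch one series in time, which is why the repeated sample above costs nothing.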
Source code of graphs available at
https://github.com/tkorting/youtube/blob/master/how-dtw-works.m
The presentation was created using as references the following scientific papers:
1. Sakoe, H., Chiba, S. (1978). Dynamic programming algorithm optimization for spoken word recognition. IEEE Trans. Acoustic Speech and Signal Processing, v26, pp. 43-49.
2. Souza, C.F.S., Pantoja, C.E.P., Souza, F.C.M. Verificação de assinaturas offline utilizando Dynamic Time Warping. Proceedings of the IX Brazilian Congress on Neural Networks, v. 1, pp. 25-28. 2009.
3. Mueen, A., Keogh, E. Extracting Optimal Performance from Dynamic Time Warping. Available at: http://www.cs.unm.edu/~mueen/DTW.pdf
Subscribe to my channel!
In this video we provide an animation of spatial filtering in image processing. We give two examples: one with a highpass spatial filter and the other with a lowpass spatial filter applied to an image.
The kernel for the highpass filter is
H = [-1 -1 -1
     -1  9 -1
     -1 -1 -1]
The kernel for the lowpass filter is
L = [1/9 1/9 1/9
     1/9 1/9 1/9
     1/9 1/9 1/9]
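Applying either kernel amounts to a 3x3 convolution over the image; a minimal Python sketch, using the two kernels above on a made-up patch (borders are simply copied here):

```python
def convolve3x3(img, kernel):
    """Apply a 3x3 kernel to the interior of a grayscale image;
    border pixels are copied unchanged from the input."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(kernel[j][i] * img[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3))
    return out

H = [[-1, -1, -1],
     [-1,  9, -1],
     [-1, -1, -1]]     # highpass (sharpening) kernel
L = [[1/9, 1/9, 1/9],
     [1/9, 1/9, 1/9],
     [1/9, 1/9, 1/9]]  # lowpass (mean) kernel

img = [[10, 10, 10],
       [10, 50, 10],
       [50, 10, 10]]
sharp = convolve3x3(img, H)   # boosts the bright centre pixel
smooth = convolve3x3(img, L)  # averages the 3x3 neighbourhood
```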
The reference book can be found at https://www.amazon.com/Digital-Image-Processing-Rafael-Gonzalez/dp/0133356728
Follow my podcast: http://anchor.fm/tkorting
In this video I explain how the Hough Transform works to detect lines in images. It first applies an edge detection algorithm to the input image, and then computes the Hough Transform to find the combinations of rho and theta values where the most collinear edge points occur. The algorithm can also be applied to detect circles, but here I only present a visual example of line detection.
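The voting scheme can be sketched as follows (a simplified Python version with synthetic edge points; the animation in the video uses a real image):

```python
import math

def hough_lines(edge_points, n_theta=180):
    """Vote in (theta, rho) space: every edge point (x, y) lies on all
    lines rho = x*cos(theta) + y*sin(theta), so it votes once per theta."""
    acc = {}
    for x, y in edge_points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = int(round(x * math.cos(theta) + y * math.sin(theta)))
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    return acc

# synthetic edge points lying on the horizontal line y = 5
pts = [(x, 5) for x in range(20)]
acc = hough_lines(pts)
(t, rho), votes = max(acc.items(), key=lambda kv: kv[1])
```

All 20 points vote for the same bin near theta = 90 degrees with rho = 5, which is exactly the line y = 5; using a dictionary also handles the negative rho values that steep lines can produce.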
The main animation starts at 3:40
To create the animation I used Octave 4 with the image and geometry packages.
The reference book can be found at https://www.amazon.com/Digital-Image-Processing-Rafael-Gonzalez/dp/0133356728
Source code for animation at
https://github.com/tkorting/youtube/tree/master/hough-transform
Subscribe to my channel!
In this video we explain the HSV color model and provide an animation showing how to build the HSV color cylinder.
The main animation starts at 1:19
Below is the source code (in Octave) used to create the steps of the animation:

clear all;
figure(1);
clear plot;
clf;
MIN_I = 0;
STEP_I = 0.075;
MAX_I = 1;
MIN_S = 0;
STEP_S = 0.1;
MAX_S = 1;
MIN_H = 0;
STEP_H = 10;
MAX_H = 360;
vector_i = [];
vector_h = [];
vector_s = [];
my_struct = struct();
position_size = 0;
for i = MIN_I:STEP_I:MAX_I
  for s = MIN_S:STEP_S:MAX_S
    for h = MIN_H:STEP_H:MAX_H
      # skip some hues where saturation is low, near the cylinder axis
      if (s < 0.2)
        h += STEP_H;
      endif

      # cylindrical (H, S) to cartesian coordinates, with V as height
      position_i = i;
      position_h = s * sin(h * pi / 180);
      position_s = s * cos(h * pi / 180);
      int_i = position_i * 100;
      int_h = position_h * 100;
      int_s = position_s * 100;
      [r, g, b] = ihs_to_rgb(i, h, s);
      ihs_distance = sqrt(int_i^2 + int_h^2 + int_s^2);
      # field names sort by intensity, then distance, for orderfields below
      ihs_name = sprintf('%09d - %09d - %f %f %f', int_i, ihs_distance, int_i, int_h, int_s);
      my_struct = setfield(my_struct, {1}, ihs_name, {position_i position_h position_s r g b i h s});
      position_size++;
    endfor
  endfor
endfor
ordered_struct = orderfields(my_struct);
animation_step = 0;
degree = 110;
degree_final = 160;
degree_step = (degree_final - degree) / position_size;
for [val, key] = ordered_struct
  i = val{1};
  h = val{2};
  s = val{3};
  r = val{4};
  g = val{5};
  b = val{6};
  value_i = val{7};
  value_h = val{8};
  value_s = val{9};
  # plot data
  plot3(h, s, i, '*', 'color', [r g b], 'linewidth', 5);
  # configure plot
  zlabel('VALUE');
  axis([-1 1 -1 1 0 1]);
  title_str = sprintf('H = %d\nS = %0.2f\nV = %0.2f', value_h, value_s, value_i);
  #title(title_str, 'horizontalAlignment', 'left');
  grid on;
  box off;
  # rotate the graph according to current degree
  view(degree, 30 + 7.5 * sin(degree/60));
  degree = degree + degree_step;
  hold on;
  output_filename = sprintf('ihs-steps/animation-%09d.png', animation_step);
  print(output_filename, '-S560,420', '-dpng', '-color');
  animation_step = animation_step + 1;
endfor

function [R, G, B] = ihs_to_rgb(i, h, s)
  H = h / 360;
  S = s;
  V = i;
  # based on the standard HSV to RGB transformation
  H = H * 6;
  I = floor(H);
  F = H - I;
  M = V * (1 - S);
  N = V * (1 - (S * F));
  K = V * (1 - (S * (1 - F)));
  R = 0;
  G = 0;
  B = 0;
  if (I == 0)
    R = V; G = K; B = M;
  elseif (I == 1)
    R = N; G = V; B = M;
  elseif (I == 2)
    R = M; G = V; B = K;
  elseif (I == 3)
    R = M; G = N; B = V;
  elseif (I == 4)
    R = K; G = M; B = V;
  elseif (I == 5)
    R = V; G = M; B = N;
  endif
endfunction