Published by Kauane Guerrero. Modified more than 10 years ago.
1
Segmentation II
Paulo Sérgio Rodrigues
PEL205
2
Global Processing Using Graphs

For a sequence of nodes n1, ..., nk, in which each node ni is the successor of ni-1, the sequence is called a path from n1 to nk, and the cost of this path can be given by the sum of the costs of its edges:

c = c(n1, n2) + c(n2, n3) + ... + c(nk-1, nk)
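The path-cost definition above can be sketched in a few lines; `path_cost` and the toy edge-cost function below are illustrative names, not part of the original slides:

```python
def path_cost(nodes, c):
    """Total cost of the path nodes[0] -> ... -> nodes[-1]:
    the sum of the costs of its successive edges."""
    return sum(c(nodes[i - 1], nodes[i]) for i in range(1, len(nodes)))

# Example with a toy edge cost (absolute intensity difference):
cost = path_cost([10, 12, 7, 7], lambda a, b: abs(a - b))
print(cost)  # 7  (|12-10| + |7-12| + |7-7|)
```

In edge-based segmentation the edge cost is usually derived from gradient magnitudes, and the minimum-cost path traces a boundary.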
3
Global Processing Using Graphs
6
Threshold-Based Segmentation
7
Iterative Algorithm for Threshold Determination

Input: monochrome image I; Output: binarization threshold T.
1 - Initialize the threshold T as the mean of the intensities;
2 - Binarize the input image I using the threshold T;
3 - Compute the new threshold Tn as the midpoint of the mean intensities of the two classes produced in step 2;
4 - If Tn = T, stop; otherwise set T = Tn and return to step 2.
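A minimal sketch of the iterative algorithm above, assuming the step-3 update is the midpoint of the two class means (the usual Ridler-Calvard rule); `iterative_threshold` and the toy image are illustrative:

```python
def iterative_threshold(image, eps=0.5):
    """Iterative threshold selection: start at the global mean intensity,
    then repeatedly set T to the midpoint of the means of the two classes
    it induces, until T stops changing (within eps)."""
    pixels = [p for row in image for p in row]
    t = sum(pixels) / len(pixels)            # step 1: mean intensity
    while True:
        low = [p for p in pixels if p <= t]  # step 2: the two classes
        high = [p for p in pixels if p > t]
        m1 = sum(low) / len(low) if low else t
        m2 = sum(high) / len(high) if high else t
        t_new = (m1 + m2) / 2.0              # step 3: new threshold
        if abs(t_new - t) < eps:             # step 4: converged
            return t_new
        t = t_new

img = [[10, 12, 11], [200, 210, 205], [12, 198, 11]]
print(iterative_threshold(img))  # a value between the two intensity groups
```

On this bimodal toy image the threshold settles between the dark (~11) and bright (~200) populations.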
8
Threshold-Based Segmentation
12
Split and Merge Segmentation

1 - Divide the image into 4 quadrants (regions).
2 - For each region, if it is not homogeneous, subdivide it recursively, returning to step 1; if it is homogeneous, it becomes a leaf of the QuadTree.
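The split phase above can be sketched as a recursive quadrant subdivision. The `homogeneous` predicate (a simple intensity-range test) and the leaf bookkeeping are illustrative assumptions, and the merge phase is omitted:

```python
def split(img, x0, y0, w, h, is_homogeneous, leaves):
    """Recursive 'split' phase of Split and Merge: a non-homogeneous region
    is cut into four quadrants; a homogeneous one becomes a QuadTree leaf,
    recorded as (x0, y0, w, h)."""
    if w <= 0 or h <= 0:
        return
    region = [row[x0:x0 + w] for row in img[y0:y0 + h]]
    if is_homogeneous(region) or (w <= 1 and h <= 1):
        leaves.append((x0, y0, w, h))
        return
    hw, hh = max(w // 2, 1), max(h // 2, 1)
    split(img, x0, y0, hw, hh, is_homogeneous, leaves)
    split(img, x0 + hw, y0, w - hw, hh, is_homogeneous, leaves)
    split(img, x0, y0 + hh, hw, h - hh, is_homogeneous, leaves)
    split(img, x0 + hw, y0 + hh, w - hw, h - hh, is_homogeneous, leaves)

def homogeneous(region):
    vals = [v for row in region for v in row]
    return max(vals) - min(vals) < 10

# Toy 4 x 4 image: left half dark (0), right half bright (255).
img = [[0, 0, 255, 255] for _ in range(4)]
leaves = []
split(img, 0, 0, 4, 4, homogeneous, leaves)
print(len(leaves))  # 4 homogeneous quadrants become leaves
```

The merge phase would then join adjacent leaves whose union is still homogeneous.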
13
Split and Merge Segmentation
14
K-means Clustering Segmentation

- Given a set of n data points in d-dimensional space and an integer k
- We want to find the set of k points in d-dimensional space that minimizes the mean squared distance from each data point to its nearest center
- No exact polynomial-time algorithms are known for this problem

From "A Local Search Approximation Algorithm for k-Means Clustering" by Kanungo et al.
15
K-means Algorithm

- Has been shown to converge to a locally optimal solution
- But can converge to a solution arbitrarily bad compared to the optimal solution

From "K-means-type algorithms: A generalized convergence theorem and characterization of local optimality" by Selim and Ismail, and "A Local Search Approximation Algorithm for k-Means Clustering" by Kanungo et al.

[Figure: k = 3 data points, optimal centers vs. heuristic centers]
16
Euclidean Distance

Now to find the distance between two points, say the origin and the point A = (3, 4):

d = sqrt(3^2 + 4^2) = sqrt(25) = 5

Simple and fast! Remember this when we consider the complexity!
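The slide's example can be checked directly; the helper name `euclidean` is illustrative:

```python
import math

def euclidean(p, q):
    """Euclidean distance between two points of equal dimension."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Distance from the origin to A = (3, 4), as on the slide:
print(euclidean((0, 0), (3, 4)))  # 5.0
```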
17
Finding a Centroid

We use the following equation to find the centroid of k points in n-dimensional space: the coordinate-wise mean, C = (1/k)(p1 + p2 + ... + pk).

Let's find the centroid of 3 points in 2D, say: (2, 4), (5, 2), (8, 9).
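The centroid of the three 2D points on the slide, computed with the coordinate-wise mean; the name `centroid` is illustrative:

```python
def centroid(points):
    """Centroid of k n-dimensional points: the mean of each coordinate."""
    k = len(points)
    return tuple(sum(coords) / k for coords in zip(*points))

print(centroid([(2, 4), (5, 2), (8, 9)]))  # (5.0, 5.0)
```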
18
K-means Algorithm

1 - Choose k initial center points randomly.
2 - Cluster the data using Euclidean distance (or another distance metric).
3 - Calculate new center points for each cluster using only the points within that cluster.
4 - Re-cluster all the data using the new center points. This step can cause data points to be placed in a different cluster.
5 - Repeat steps 3 and 4 until the center points have moved such that, in step 4, no data points move from one cluster to another, or some other convergence criterion is met.

From "Data Analysis Tools for DNA Microarrays" by Sorin Draghici.
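The five steps above can be sketched in plain Python; the function name, seed, and toy data are illustrative, and a real implementation would use vectorized NumPy:

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """K-means: random initial centers (step 1), assignment by squared
    Euclidean distance (steps 2/4), centroid update (step 3), repeated
    until assignments stop changing (step 5)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)                      # step 1
    assign = None
    for _ in range(iters):
        new_assign = [min(range(k),
                          key=lambda j: sum((a - b) ** 2
                                            for a, b in zip(p, centers[j])))
                      for p in points]                   # steps 2 and 4
        if new_assign == assign:                         # step 5: converged
            break
        assign = new_assign
        for j in range(k):                               # step 3
            members = [p for p, c in zip(points, assign) if c == j]
            if members:
                centers[j] = tuple(sum(x) / len(members) for x in zip(*members))
    return centers, assign

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, labels = kmeans(pts, 2)
print(labels[0] == labels[1] == labels[2] != labels[3])  # True: two tight groups found
```

With two well-separated groups the algorithm recovers them regardless of the random initialization, though in general the result depends on it (non-determinism, as a later slide notes).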
19
An Example with k = 2

1. We pick k = 2 centers at random.
2. We cluster our data around these center points.

Figure reproduced from "Data Analysis Tools for DNA Microarrays" by Sorin Draghici.
20
K-means Example with k = 2

3. We recalculate the centers based on our current clusters.

Figure reproduced from "Data Analysis Tools for DNA Microarrays" by Sorin Draghici.
21
K-means Example with k = 2

4. We re-cluster our data around our new center points.

Figure reproduced from "Data Analysis Tools for DNA Microarrays" by Sorin Draghici.
22
K-means Example with k = 2

5. We repeat the last two steps until no more data points are moved into a different cluster.

Figure reproduced from "Data Analysis Tools for DNA Microarrays" by Sorin Draghici.
23
Characteristics of k-means Clustering

- The random selection of initial center points creates the following properties:
  - Non-determinism
  - May produce clusters without patterns
    - One solution is to choose the centers randomly from existing patterns

From "Data Analysis Tools for DNA Microarrays" by Sorin Draghici.
24
Algorithm Complexity

- Linear in the number of data points, N
- Can be shown to run in time cN, where c does not depend on N but on the number of clusters, k
- Low computational complexity
- High speed

From "Data Analysis Tools for DNA Microarrays" by Sorin Draghici.
25
Analysis of K-Means
27
Entropy-Based Segmentation

- Traditional BGS entropy
- q-Entropy
- Applications of q-entropy to digital image processing
28
Traditional BGS Entropy - History

Rudolf Clausius (1822-1888): Clausius was the first to give a definition for entropy.
Ludwig Boltzmann (1844-1906): Boltzmann conceived the modern concept of entropy.
In the beginning, the idea of entropy was tied only to measuring the capacity of physical systems to perform work.
29
Laws of Thermodynamics

[Diagram: total energy = work + losses]

First Law: energy can be neither created nor destroyed.
Second Law: there can only be work if there is entropy.
30
Traditional BGS Entropy - History

Max Planck (1858-1947): Planck was the true originator of the formula attributed to Boltzmann, S = k log W.
Willard Gibbs (1839-1903): Gibbs introduced the well-known formula S = -k Σ p_i ln p_i.
With Planck and Gibbs, entropy transcended thermodynamics and became associated with statistical mechanics.
31
Entropy and Information Theory

Claude Shannon (1916-2001): Shannon associated entropy with a quantity of information.
Information theory arose in the 1940s, originating in telegraphy and telephony. It was later used by cybernetics in the study of the exchange of information in living or mechanical organisms.
32
Entropy and Information Theory

Claude Shannon (1916-2001): Shannon associated entropy with a quantity of information.
Information theory found fertile ground in several areas, among them economics, statistics, linguistics, psychology, ecology, pattern recognition, medicine, artificial intelligence, ...
33
Generalization of Classical Entropy

It has been known for more than a century that traditional BG entropy cannot explain certain physical systems. Such systems are characterized by:
- long-range spatial interactions
- long-range temporal interactions
- fractal behavior at the boundaries
They are called Non-Extensive Systems.
34
Generalization of Classical Entropy

Examples:
- turbulence
- mass and energy of galaxies
- the Zipf-Mandelbrot law of linguistics
- financial risk theory
35
Generalization of Classical Entropy

The Zipf-Mandelbrot law of linguistics: Don Quijote (Miguel de Cervantes), extraction of relevant words, ordered by rank.
36
Generalization of Classical Entropy

Mass and energy of galaxies.
37
Generalization of Classical Entropy

Financial risk theory: when a loss is expected, some people prefer to take the risk; when a gain is expected, some people prefer not to take the risk.
38
Generalization of Classical Entropy

Citation of scientific papers.
39
Non-Extensive Entropy

Constantino Tsallis
40
Non-Extensive Entropy
41
Additive property of Shannon entropy: S(A + B) = S(A) + S(B) for independent systems A and B.

Tsallis entropy formula: S_q = (1 - Σ_i p_i^q) / (q - 1), which recovers Shannon entropy in the limit q → 1.

Pseudo-additive property of Tsallis entropy: S_q(A + B) = S_q(A) + S_q(B) + (1 - q) S_q(A) S_q(B).
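The Tsallis entropy and its pseudo-additivity can be verified numerically; the two distributions and the value q = 2 below are illustrative:

```python
def tsallis(p, q):
    """Tsallis entropy S_q = (1 - sum_i p_i**q) / (q - 1);
    it recovers Shannon entropy in the limit q -> 1."""
    return (1.0 - sum(pi ** q for pi in p if pi > 0)) / (q - 1.0)

# Pseudo-additivity for two independent systems A and B:
#   S_q(A+B) = S_q(A) + S_q(B) + (1 - q) * S_q(A) * S_q(B)
pa, pb, q = [0.5, 0.5], [0.25, 0.75], 2.0
joint = [a * b for a in pa for b in pb]      # independent joint distribution
lhs = tsallis(joint, q)
rhs = tsallis(pa, q) + tsallis(pb, q) + (1 - q) * tsallis(pa, q) * tsallis(pb, q)
print(abs(lhs - rhs) < 1e-12)  # True
```

For q > 1 the cross term is negative (sub-extensive); for q < 1 it is positive (super-extensive); at q = 1 ordinary additivity is recovered.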
42
Background and foreground distributions

Background and foreground Tsallis entropy
43
Pseudo-additivity for the background and foreground distributions. Here, t_opt is the ideal partition, i.e., the threshold that maximizes the pseudo-additivity of the Tsallis entropy.
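A sketch of the thresholding criterion: choose the t that maximizes the pseudo-additive Tsallis entropy of the background (bins below t) and foreground (bins from t upward). The function name, the q value, and the toy histogram are illustrative assumptions:

```python
def tsallis_threshold(hist, q=0.8):
    """Pick the threshold t maximizing S_q(bg) + S_q(fg)
    + (1 - q) * S_q(bg) * S_q(fg); `hist` is a normalized histogram."""
    def s_q(p):
        total = sum(p)
        if total == 0:
            return 0.0
        p = [x / total for x in p if x > 0]   # class-conditional distribution
        return (1.0 - sum(x ** q for x in p)) / (q - 1.0)

    best_t, best_val = 0, float("-inf")
    for t in range(1, len(hist) - 1):
        sa, sb = s_q(hist[:t]), s_q(hist[t:])
        val = sa + sb + (1 - q) * sa * sb     # pseudo-additivity
        if val > best_val:
            best_t, best_val = t, val
    return best_t

# Bimodal toy histogram: mass around bins 1-2 and bins 6-7.
hist = [0.05, 0.25, 0.20, 0.0, 0.0, 0.05, 0.25, 0.20]
print(tsallis_threshold(hist))  # a bin between the two modes
```

The recursive (NESRA-style) variant described on the following slides re-applies this criterion inside each resulting partition.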
44
A new partition of the background and foreground for a new application of the Tsallis entropy.
45
Respectively, the new Tsallis entropies for the new background and foreground regions.
46
General equation of pseudo-additivity for one recursion.
47
Here, t_opt is the ideal partition, i.e., the threshold that maximizes the pseudo-additivity of the Tsallis entropy for the new partition.
48
Visual Segmentation Results

Original ultrasound, benign tumor. Left column: 1 recursion; right column: 3 recursions. Row 1: q = 0.00001; row 2: q = 1.0 (Shannon); row 3: q = 4.
49
Visual Segmentation Results

Original ultrasound, malignant tumor. Left column: 1 recursion; right column: 3 recursions. Row 1: q = 0.00001; row 2: q = 1.0 (Shannon); row 3: q = 4.
50
Visual Segmentation Results, Benign Tumor

Upper left: NESRA with 16 clusters (3 recursions); upper right: fuzzy c-means with 16 clusters. Lower left: k-means with 8 clusters; lower right: SOM with 16 neurons.
51
Visual Segmentation Results, Malignant Tumor

Upper left: NESRA with 16 clusters (3 recursions); upper right: fuzzy c-means with 16 clusters. Lower left: k-means with 8 clusters; lower right: SOM with 16 neurons.
52
Some Natural Image Results

Results of applying three approaches to image segmentation. Column 1: the proposed (NESRA) method; column 2: bootstrap; column 3: fuzzy c-means.
53
Some Natural Image Results

Results of applying three approaches to image segmentation. Column 1: the proposed (NESRA) method; column 2: bootstrap; column 3: fuzzy c-means.
54
Some Natural Image Results

Results of applying three approaches to image segmentation. Column 1: k-means; column 2: SOM; column 3: watershed.
55
Some Natural Image Results

Results of applying three approaches to image segmentation. Column 1: k-means; column 2: SOM; column 3: watershed.
56
Synthetic Image Results

The synthetic image used to compare the robustness of the methods under increasing Gaussian noise. The two concentric circles have radii 100 and 50, and the intensities of the background, outer circle, and inner circle are 150, 100, and 50, respectively. The leftmost image is the original; the other three, from left to right, have Gaussian noise with μ = 0 and σ² = 0.01, 0.05, and 0.1, respectively.
57
Synthetic Image Results

The segmentation results of the six algorithms considered in this paper (NESRA, bootstrap, fuzzy c-means, k-means, SOM, watershed). In this illustration, Gaussian noise with zero μ and σ² = 0.1 (the highest noise level used) was applied to the original image, and afterwards a 9 x 9 2D adaptive filter was used to smooth the noise. In the specific case of the NESRA algorithm we used the parameter q = 0.001, since it generates the best visual result, with more homogeneous and less noisy regions.
58
The estimated (black) and original (white) curves superimposed on the original image, corresponding to the segmentations of the synthetic image (NESRA, bootstrap, fuzzy c-means, k-means, SOM, watershed). Only the watershed curve was traced manually, since we do not have good precision of the boundary in this case.
59
Robustness, Outer Circle

Comparative performance of the five methods as a function of increasing Gaussian noise. The x-axis is σ².
60
Robustness, Inner Circle

Comparative performance of the five methods as a function of increasing Gaussian noise. The x-axis is σ².
61
Performance in Achieving Homogeneous Regions

Comparative performance of the five methods according to the estimated area inside the inner circle, outer circle, and background regions. The performance percentage is the average over the three regions. The x-axis is σ²; the y-axis is the average estimated area (over the three regions) divided by the real area.