VERSION 2024.01
[Figure: rock classification with ML and DL algorithms. ML fits data into an equation (theory driven), while DL derives the equation from the data (data driven); both support supervised and unsupervised learning.]
Find the shortest distance between each data point and each centroid.
Yes/no condition: the program ends when the change in the shortest distances between centroids and data points falls below a threshold (near 0).
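The loop in the flowchart can be sketched in pure Python as below. This is a minimal illustration, not a production implementation; the data points, tolerance, and initialization (first k points as centroids) are assumptions for the example.

```python
def kmeans(points, k, tol=1e-6, max_iter=100):
    """Minimal K-means: assign each point to its nearest centroid,
    recompute centroids, stop when the centroids barely move."""
    centroids = list(points[:k])  # simple deterministic initialization
    clusters = [[] for _ in range(k)]
    for _ in range(max_iter):
        # assignment step: shortest distance between point and centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        # update step: move each centroid to the mean of its cluster
        new_centroids = []
        for c, cluster in zip(centroids, clusters):
            if cluster:
                new_centroids.append(tuple(sum(x) / len(cluster)
                                           for x in zip(*cluster)))
            else:
                new_centroids.append(c)  # keep an empty cluster's centroid
        # yes/no condition: stop when the centroid shift is near 0
        shift = sum(sum((a - b) ** 2 for a, b in zip(c, n))
                    for c, n in zip(centroids, new_centroids))
        centroids = new_centroids
        if shift < tol:
            break
    return centroids, clusters

# two well-separated toy blobs
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centroids, clusters = kmeans(pts, 2)
```

On this toy data the loop converges in a few iterations to one centroid per blob.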
RGB format is used for display on digital screens such as computers, TVs, and mobile phones.
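As a small illustration of the format, an RGB image can be stored as a grid of (red, green, blue) triplets and, as a common preprocessing step, reduced to grayscale. The 2x2 image below is made up for the example; the weights are the standard ITU-R BT.601 luma coefficients.

```python
# A 2x2 RGB image: each pixel holds (red, green, blue) in 0-255.
image = [[(255, 0, 0), (0, 255, 0)],
         [(0, 0, 255), (255, 255, 255)]]

def to_gray(img):
    """Luma-weighted grayscale conversion (ITU-R BT.601 weights)."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b)
             for (r, g, b) in row] for row in img]
```

Pure red, green, and blue map to different gray levels because the eye is most sensitive to green.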
[Figure: the input data are run through iterations 1, 2, ..., k. Which iterated result has the lowest sum of values?]
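Comparing iterated results by their "sum of values" can be sketched as computing, for each candidate set of centroids, the total squared distance from every point to its nearest centroid (often called inertia) and keeping the run with the smallest total. The points and candidate centroids below are made up for illustration.

```python
def inertia(points, centroids):
    """Sum over all points of the squared distance to the nearest
    centroid -- the 'sum of values' compared across runs."""
    return sum(min(sum((a - b) ** 2 for a, b in zip(p, c))
                   for c in centroids)
               for p in points)

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
# hypothetical centroids produced by two different runs
runs = {"run 1": [(0.5, 0.5), (10.5, 10.5)],
        "run 2": [(5, 5), (11, 11)]}
best = min(runs, key=lambda r: inertia(pts, runs[r]))
```

Run 1 places a centroid inside each blob, so it wins with the lower total.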
FCNN style
LeNet style
Neural networks are the building blocks of DL: the perceptron is the basic unit of a neural network, and stacking neural networks into multiple layers forms a deep learning algorithm.
misclassify (non-linear)
After the transfer function computes the summed product of node inputs and weights, the activation function is applied; with ReLU, only positive values remain after this process.
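The two steps can be sketched as below: a transfer function (weighted sum) followed by a ReLU activation. The input and weight values are made up for the example.

```python
def transfer(inputs, weights, bias=0.0):
    """Transfer function: the summed product of inputs and weights."""
    return sum(x * w for x, w in zip(inputs, weights)) + bias

def relu(z):
    """ReLU activation: keeps positive values, zeroes out the rest."""
    return max(0.0, z)

z = transfer([1.0, -2.0, 3.0], [0.5, 0.25, -1.0])  # 0.5 - 0.5 - 3.0 = -3.0
y = relu(z)                                        # negative sum becomes 0.0
```

A negative weighted sum is clipped to zero, while a positive one passes through unchanged.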
tabular data
image
The input data passed to a feature map or to multiple nodes can be a vector or a matrix. Before this step, one-hot encoding or feature extraction is required.
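One-hot encoding can be sketched as mapping each categorical label to a binary vector with a single 1. The rock-type labels below are hypothetical examples.

```python
def one_hot(labels):
    """Map each categorical label to a binary vector (one-hot encoding)."""
    classes = sorted(set(labels))                 # fixed class order
    index = {c: i for i, c in enumerate(classes)}
    return [[1 if index[label] == i else 0
             for i in range(len(classes))]
            for label in labels]

vectors = one_hot(["basalt", "granite", "basalt"])
```

Each label becomes a vector the network can consume as numeric input.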
Classical terminology overlaps here: artificial neural network (ANN), multilayer perceptron (MLP), and fully connected layers can all describe the network architecture shown.
Input layer: one node holds one pixel, so the first layer has 25 x 32 = 800 nodes. In a more complex network, the input layer might instead take an extracted feature (a 3D matrix).
input layer
hidden layer
weight connections
Hidden layer: the first learning process starts in this layer. Increasing the number of nodes and layers here can help the algorithm classify data better; for example, the eye of a giraffe might be extracted by some node in the layer. Keep in mind that adding more nodes and layers increases computational cost.
Weight connections: the edges connecting nodes between layers are the weights of the network; they enhance important features and diminish insignificant ones, so some connections carry more weight (useful features) than others (less valuable features).
Output layer: the result from this layer depends on the activation function. For regression, the predicted output is a real value; for classification, a probability-type activation yields confidence scores for each class.
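The layers described above can be sketched as a tiny forward pass: a fully connected hidden layer with ReLU, then a sigmoid output that yields a probability. The 2-node sizes, weights, and input values are made up for readability (a real input layer here would have 800 nodes).

```python
import math

def dense(inputs, weights, biases):
    """One fully connected layer: a weighted sum plus bias per node."""
    return [sum(x * w for x, w in zip(inputs, row)) + b
            for row, b in zip(weights, biases)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x = [0.5, -1.0]                                   # toy input features
hidden = [max(0.0, z)                             # hidden layer with ReLU
          for z in dense(x, [[1.0, 0.0], [0.0, 1.0]], [0.0, 0.5])]
y = sigmoid(dense(hidden, [[1.0, -1.0]], [0.0])[0])  # output probability
```

The sigmoid output lands strictly between 0 and 1, which is what makes it usable as classification confidence.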
[Figure: activation functions by layer. Output layer: sigmoid (binary), softmax (multiclass). Hidden layer: hyperbolic tangent, rectified linear unit (ReLU), leaky ReLU.]
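The activation functions named above can each be written in a few lines; the leaky-ReLU slope of 0.01 is a common default, assumed here for illustration.

```python
import math

def sigmoid(z):                 # binary output: squashes to (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def softmax(zs):                # multiclass output: probabilities summing to 1
    m = max(zs)                 # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in zs]
    total = sum(exps)
    return [e / total for e in exps]

def tanh(z):                    # hidden layers: squashes to (-1, 1)
    return math.tanh(z)

def relu(z):                    # hidden layers: zero for negative inputs
    return max(0.0, z)

def leaky_relu(z, slope=0.01):  # small negative slope avoids "dead" units
    return z if z > 0 else slope * z
```

Sigmoid suits binary outputs, softmax multiclass outputs, and tanh/ReLU/leaky ReLU are typical hidden-layer choices.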
K-Means
SVM
decision tree
learn a simple feature from the blue data
learn a more complex feature from the blue plus orange data
block networks