Wednesday, November 6, 2024
Each neuron consists of the following components:
- Input Signal
- Weight
- Bias
- Summation Function
- Activation Function
- Output
Input Signal:
Neurons receive preprocessed features as inputs, much as
biological receptors respond to stimuli like light. Each neuron processes these inputs to
produce a single output.
Weight:
Weights in neurons are adjusted during learning, similar to the tuning of
synapses in the brain. These adjustments fine-tune the neuron's output to match the
desired outcome of the network's training.
Bias:
The bias is a crucial parameter that adds flexibility to a neuron's output,
allowing it to activate effectively even with no input. It helps the network fit the data
better, enabling the modelling of complex functions and decision boundaries.
Summation Function:
This function calculates the weighted sum of the inputs and
weights, then adds the bias, setting the stage for the neuron's activation.
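As a minimal sketch, the summation step can be written directly in Python; the input, weight, and bias values below are made up for illustration:

```python
# Summation step of a single neuron: z = w1*x1 + w2*x2 + ... + b
inputs = [0.5, 0.3, 0.2]    # hypothetical preprocessed feature values
weights = [0.4, 0.7, -0.2]  # hypothetical learned weights
bias = 0.1                  # hypothetical learned bias

# Weighted sum of inputs plus the bias term
z = sum(w * x for w, x in zip(weights, inputs)) + bias
```

The resulting value `z` is what gets passed to the activation function in the next step.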
Activation Function:
Applies a non-linear transformation to the summation output,
determining whether the neuron activates. It's essential for enabling multi-layer
networks to learn beyond linear classification, with common types including Sigmoid,
Tanh, ReLU, and Softmax.
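The four activation functions named above can each be sketched in a few lines of plain Python using only the standard library:

```python
import math

def sigmoid(z):
    # Squashes any real value into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def tanh(z):
    # Squashes any real value into the range (-1, 1)
    return math.tanh(z)

def relu(z):
    # Passes positive values through; outputs 0 for negative inputs
    return max(0.0, z)

def softmax(zs):
    # Converts a list of scores into probabilities that sum to 1;
    # subtracting the max first keeps exp() numerically stable
    m = max(zs)
    exps = [math.exp(z - m) for z in zs]
    total = sum(exps)
    return [e / total for e in exps]
```

Softmax differs from the other three in that it operates on a whole vector of scores (typically in the output layer), while Sigmoid, Tanh, and ReLU apply element-wise.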
Output:
The value produced by passing the weighted sum of inputs and bias
through the activation function becomes the neuron's output, which then serves as input
to subsequent layers in the network.
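Putting the pieces together, a full forward pass through one neuron (summation followed by activation) might look like this sketch, again with made-up input, weight, and bias values and a sigmoid activation chosen for illustration:

```python
import math

def neuron_output(inputs, weights, bias):
    # Weighted sum of inputs plus bias...
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    # ...passed through a sigmoid activation
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical values: this output would feed the next layer as one of its inputs
out = neuron_output([1.0, 0.5], [0.6, -0.4], 0.2)
```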
Input Layer:
The initial layer of a neural network directly interfaces with the input data,
consisting of one node per feature of the input.
For example, a 28x28 pixel image could have an input layer with 784 nodes.
Each node represents a single feature or pixel value, transmitting this information
unchanged to the hidden layers of the network for further processing.
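The 28x28 example above amounts to flattening a 2-D grid of pixel intensities into a 1-D list of 784 input values; a minimal sketch with placeholder pixel data:

```python
# A 28x28 grayscale image as a 2-D grid of pixel intensities
# (all zeros here as placeholder data)
rows, cols = 28, 28
image = [[0.0] * cols for _ in range(rows)]

# Flatten row by row into a single list: one value per input-layer node
input_layer = [pixel for row in image for pixel in row]
```

Each of the 784 entries in `input_layer` is passed unchanged to one node of the input layer.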
Function: