Package net.bmahe.genetics4j.neat
Class Activations
java.lang.Object
net.bmahe.genetics4j.neat.Activations
Utility class providing common activation functions for NEAT (NeuroEvolution of Augmenting Topologies) neural networks.
Activations contains a collection of mathematical functions commonly used as activation functions in neural networks. These functions transform the weighted sum of inputs at each node into the node's output value, introducing non-linearity that enables neural networks to learn complex patterns.
Available activation functions:
- Linear: Simple linear transformation with slope and bias parameters
- Sigmoid: Logistic function providing smooth transition between 0 and 1
- Hyperbolic tangent: Smooth function providing output between -1 and 1
- Identity: Pass-through function for linear networks
- NEAT paper: Sigmoid variant with the steepness value (4.9) used in the original NEAT research
Function variations (illustrated in the example after this list):
- Float versions: Optimized for float precision networks
- Double versions: Higher precision for sensitive applications
- Parameterized versions: Customizable function parameters
- Pre-configured versions: Common parameter combinations
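For example, the sigmoid family is available in each of these flavors (Function here is java.util.function.Function):

// Pre-configured versions: default steepness of 1.0
Function<Double, Double> defaultSigmoid = Activations.sigmoid;
Function<Float, Float> defaultFloatSigmoid = Activations.sigmoidFloat;
// Parameterized versions: custom steepness
Function<Double, Double> steeperSigmoid = Activations.sigmoid(2.0);
Function<Float, Float> steeperFloatSigmoid = Activations.sigmoidFloat(2.0f);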
Common usage patterns:
// Use standard sigmoid activation (the pre-defined field, not a method reference)
FeedForwardNetwork network = new FeedForwardNetwork(
    inputNodes, outputNodes, connections, Activations.sigmoid
);

// Custom sigmoid with a steeper transition
Function<Double, Double> customSigmoid = Activations.sigmoid(2.0);
FeedForwardNetwork steeperNetwork = new FeedForwardNetwork(
    inputNodes, outputNodes, connections, customSigmoid
);

// Hyperbolic tangent for outputs in the (-1, 1) range
FeedForwardNetwork tanhNetwork = new FeedForwardNetwork(
    inputNodes, outputNodes, connections, Activations.tanh
);

// Identity activation for regression problems
FeedForwardNetwork linearNetwork = new FeedForwardNetwork(
    inputNodes, outputNodes, connections, Activations.identity
);

// Float versions for memory efficiency
Function<Float, Float> floatSigmoid = Activations.sigmoidFloat;
Activation function characteristics (see the sketch after this list):
- Sigmoid: Smooth, bounded [0,1], good for binary classification
- Tanh: Smooth, bounded [-1,1], zero-centered, often preferred over sigmoid
- Linear: Unbounded, preserves gradients, suitable for regression
- Identity: No transformation, useful for pass-through connections
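A quick sketch of these ranges using the pre-defined fields (values in comments are approximate):

// Sigmoid: bounded (0, 1), crosses 0.5 at x = 0
Activations.sigmoid.apply(0.0);   // 0.5
Activations.sigmoid.apply(5.0);   // ~0.993
// Tanh: bounded (-1, 1), zero-centered
Activations.tanh.apply(0.0);      // 0.0
Activations.tanh.apply(-5.0);     // ~-0.9999
// Identity: unbounded pass-through
Activations.identity.apply(42.0); // 42.0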
Performance considerations (see the example after this list):
- Float vs Double: Float versions use less memory and may be faster
- Function references: Pre-defined functions avoid object creation
- Mathematical operations: Optimized implementations for common cases
- Branch prediction: Simple functions improve CPU branch prediction
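For instance, when the default parameters suffice, reusing a pre-defined field avoids allocating a new function object at each call site:

// Shared, pre-defined instance: no allocation at the call site
Function<Double, Double> reused = Activations.sigmoid;
// Each factory call creates a fresh function object
Function<Double, Double> fresh = Activations.sigmoid(1.0);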
Integration with NEAT evolution (see the sketch after this list):
- Network evaluation: Applied to hidden and output nodes during forward propagation
- Fitness computation: Affects network behavior and resulting fitness
- Gradient flow: Function choice impacts learning and evolution dynamics
- Problem matching: Different problems benefit from different activation functions
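As a rough illustration of where the activation function is applied during network evaluation, here is a minimal sketch of a single node's forward-propagation step. The Connection accessors fromNodeIndex() and weight() are assumptions for illustration, not a verified API:

// Hypothetical sketch: the node's weighted input sum is passed through
// the activation function to produce the node's output.
static double evaluateNode(final List<Connection> incoming,
        final Map<Integer, Double> nodeValues,
        final Function<Double, Double> activation) {
    double sum = 0.0;
    for (final Connection connection : incoming) {
        // assumed accessors on Connection
        sum += nodeValues.get(connection.fromNodeIndex()) * connection.weight();
    }
    return activation.apply(sum); // e.g. Activations.sigmoid
}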
Field Summary
- static Function<Double, Double> identity: Identity activation function for double values (f(x) = x).
- static Function<Float, Float> identityFloat: Identity activation function for float values (f(x) = x).
- static Function<Double, Double> neatPaper: Sigmoid activation function with steepness 4.9 as used in the original NEAT paper (double version).
- static Function<Float, Float> neatPaperFloat: Sigmoid activation function with steepness 4.9 as used in the original NEAT paper (float version).
- static Function<Double, Double> sigmoid: Standard sigmoid activation function for double values (steepness = 1.0).
- static Function<Float, Float> sigmoidFloat: Standard sigmoid activation function for float values (steepness = 1.0).
- static Function<Double, Double> tanh: Hyperbolic tangent activation function for double values.
- static Function<Float, Float> tanhFloat: Hyperbolic tangent activation function for float values.
Constructor Summary
- private Activations()
Method Summary
- static Function<Double, Double> linear(double a, double b): Creates a linear activation function with specified slope and bias for double values.
- static Function<Float, Float> linearFloat(float a, float b): Creates a linear activation function with specified slope and bias for float values.
- static Function<Double, Double> sigmoid(double a): Creates a sigmoid activation function with specified steepness for double values.
- static Function<Float, Float> sigmoidFloat(float a): Creates a sigmoid activation function with specified steepness for float values.
Field Details
sigmoidFloat
Standard sigmoid activation function for float values (steepness = 1.0).

sigmoid
Standard sigmoid activation function for double values (steepness = 1.0).

identityFloat
Identity activation function for float values (f(x) = x).

identity
Identity activation function for double values (f(x) = x).

tanhFloat
Hyperbolic tangent activation function for float values. Output range: (-1, 1).

tanh
Hyperbolic tangent activation function for double values. Output range: (-1, 1).

neatPaperFloat
Sigmoid activation function with steepness 4.9 as used in the original NEAT paper (float version).

neatPaper
Sigmoid activation function with steepness 4.9 as used in the original NEAT paper (double version).
Constructor Details

Activations
private Activations()
Private constructor to prevent instantiation of this utility class.
Method Details
linearFloat
Creates a linear activation function with specified slope and bias for float values. The linear function computes: f(x) = a * x + b
Parameters:
a - the slope parameter
b - the bias parameter
Returns:
a linear activation function
linear
Creates a linear activation function with specified slope and bias for double values. The linear function computes: f(x) = a * x + b
Parameters:
a - the slope parameter
b - the bias parameter
Returns:
a linear activation function
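For example, with slope 2.0 and bias 1.0 (the float version behaves identically with float arguments):

Function<Double, Double> f = Activations.linear(2.0, 1.0);
f.apply(3.0); // 2.0 * 3.0 + 1.0 = 7.0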
sigmoidFloat
Creates a sigmoid activation function with specified steepness for float values. The sigmoid function computes: f(x) = 1 / (1 + exp(-a * x))
Output range: (0, 1)
Parameters:
a - the steepness parameter (higher values create steeper transitions)
Returns:
a sigmoid activation function
sigmoid
Creates a sigmoid activation function with specified steepness for double values. The sigmoid function computes: f(x) = 1 / (1 + exp(-a * x))
Output range: (0, 1)
Parameters:
a - the steepness parameter (higher values create steeper transitions)
Returns:
a sigmoid activation function
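For example, the steepness parameter controls how sharply the output transitions around x = 0, where every variant crosses 0.5 (values in comments are approximate):

Function<Double, Double> gentle = Activations.sigmoid(1.0);
Function<Double, Double> steep = Activations.sigmoid(4.9); // steepness used in the NEAT paper
gentle.apply(0.0); // 0.5
gentle.apply(1.0); // ~0.731
steep.apply(1.0);  // ~0.993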