# Simple C++ Neural Network Library

This repository contains a lightweight, dependency-free implementation of a basic feed-forward neural network in C++. It's designed for understanding the core concepts of neural networks, including forward and backward propagation, different activation functions, and common loss functions.

## Features

* **Multi-layer Architecture:** Create neural networks with an arbitrary number of hidden layers.
* **Configurable Layers:** Each `Layer` can be customized with an `input_size` and an `output_size`.
* **Built-in Activation Functions:** Supports commonly used activation functions (sketched below, after the build instructions):
    * `ReLU` (Rectified Linear Unit)
    * `Sigmoid`
    * `Linear` (identity)
* **Custom Activation Functions:** Provides an option to define and use custom activation functions and their derivatives.
* **Built-in Loss Functions:** Includes standard loss functions for training (also sketched below):
    * Mean Squared Error (MSE) for regression tasks.
    * Cross-Entropy for classification tasks. Softmax itself is not provided; the Cross-Entropy derivative assumes a Softmax output layer, as is typical for classification.
* **Custom Loss Functions:** Allows integration of custom loss functions and their derivatives.
* **Forward Propagation:** Computes the network's output for a given input.
* **Backward Propagation (Backpropagation):** Calculates gradients and updates weights and biases using a specified learning rate (see the dense-layer sketch below).
* **Simple Memory Management:** Handles memory allocation for weights, biases, and internal caches.

## Code Structure

* `network.hpp` / `network.cpp`:
    * Define and implement the `NeuralNetwork` class, which manages an array of `Layer` objects.
    * Handle the overall forward and backward passes for the entire network.
    * Contain the definitions and implementations of the supported loss functions (MSE, Cross-Entropy) and their derivatives, and manage the selection and application of the loss function during training.
* `layer.hpp` / `layer.cpp`:
    * Define and implement the `Layer` class, representing a single computational layer within the network.
    * Manage a layer's weights, biases, and internal caches (`cached_input`, `activation_function_input_cache`, `output_values`).
    * Contain the definitions and implementations of the supported activation functions (`ReLU`, `Sigmoid`, `Linear`) and their derivatives, and perform the forward and backward computations for an individual layer.

## Dependencies

This project uses only standard C++ library headers. No external libraries are required.

## Building the Project

To compile the source files, you will need a C++ compiler (e.g., g++). Navigate to the root directory of the project and use a command similar to the following:

```bash
g++ -std=c++17 network.cpp layer.cpp your_main_file.cpp -o neural_network_example
```

Replace `your_main_file.cpp` with the name of the file containing your `main` function, where you instantiate and use the `NeuralNetwork` and `Layer` classes.
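## Activation Functions at a Glance

For reference, this is roughly what the three built-in activations and their derivatives compute. This is a minimal sketch with illustrative names; the actual signatures in `layer.cpp` may differ.

```cpp
#include <algorithm>
#include <cmath>

// ReLU: passes positives through, clamps negatives to zero.
float relu(float x)            { return std::max(0.0f, x); }
float relu_derivative(float x) { return x > 0.0f ? 1.0f : 0.0f; }

// Sigmoid: squashes input into (0, 1).
float sigmoid(float x)            { return 1.0f / (1.0f + std::exp(-x)); }
float sigmoid_derivative(float x) { float s = sigmoid(x); return s * (1.0f - s); }

// Linear: the identity function, with a constant slope of 1.
float linear(float x)          { return x; }
float linear_derivative(float) { return 1.0f; }
```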
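The two built-in losses amount to the following. Same caveat applies: this is a sketch, not the library's actual code. Note that the Cross-Entropy gradient simplifies to `prediction - target` only when the output layer is Softmax, which matches the assumption stated under Features.

```cpp
#include <cmath>
#include <cstddef>

// Mean Squared Error over n outputs, plus its per-output derivative.
float mse(const float* pred, const float* target, std::size_t n) {
    float sum = 0.0f;
    for (std::size_t i = 0; i < n; ++i) {
        float d = pred[i] - target[i];
        sum += d * d;
    }
    return sum / static_cast<float>(n);
}
float mse_derivative(float pred, float target, std::size_t n) {
    return 2.0f * (pred - target) / static_cast<float>(n);
}

// Cross-Entropy for a one-hot or probabilistic target. Combined with a
// Softmax output layer, its gradient w.r.t. the pre-softmax logits
// simplifies to (pred - target).
float cross_entropy(const float* pred, const float* target, std::size_t n) {
    float sum = 0.0f;
    for (std::size_t i = 0; i < n; ++i) {
        sum -= target[i] * std::log(pred[i] + 1e-7f); // epsilon avoids log(0)
    }
    return sum;
}
```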
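Finally, the forward and backward passes described under Features boil down to the standard dense-layer equations. The sketch below shows a hypothetical single-sample version; the weight layout, names, and update scheme are assumptions for illustration, not the library's internals.

```cpp
#include <cstddef>

// Forward step of one dense layer: z = W*x + b.
// W is assumed row-major with shape [out x in].
void layer_forward(const float* W, const float* b, const float* x,
                   float* z, std::size_t in, std::size_t out) {
    for (std::size_t j = 0; j < out; ++j) {
        z[j] = b[j];
        for (std::size_t i = 0; i < in; ++i)
            z[j] += W[j * in + i] * x[i];
        // The activation f(z[j]) would be applied here, with x and z cached
        // for the backward pass (cf. cached_input and
        // activation_function_input_cache in Layer).
    }
}

// Backward step with plain SGD. dz[j] is dLoss/dz_j, i.e. the upstream
// gradient already multiplied by the activation derivative.
void layer_backward(float* W, float* b, const float* x,
                    const float* dz, float lr,
                    std::size_t in, std::size_t out) {
    for (std::size_t j = 0; j < out; ++j) {
        for (std::size_t i = 0; i < in; ++i)
            W[j * in + i] -= lr * dz[j] * x[i]; // dL/dW_ji = dz_j * x_i
        b[j] -= lr * dz[j];                     // dL/db_j  = dz_j
    }
    // Propagating to the previous layer would use dx_i = sum_j W_ji * dz_j
    // (computed before the weight update; omitted here for brevity).
}
```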
## Usage Example (Conceptual)

While a `main` function is not provided, here's how you would typically use this library:

```cpp
#include "network.hpp"
#include "layer.hpp"

#include <iostream>
#include <vector>

int main() {
    // Define layers
    // Example: input of 10 features, 2 hidden layers, 1 output layer
    Layer* layers = new Layer[3];
    layers[0] = Layer(10, 8, ActivationType::RELU);    // 10 inputs, 8 neurons, ReLU activation
    layers[1] = Layer(8, 5, ActivationType::SIGMOID);  // 8 inputs, 5 neurons, Sigmoid activation
    layers[2] = Layer(5, 1, ActivationType::LINEAR);   // 5 inputs, 1 neuron, Linear activation (regression output)

    // Create a neural network instance with MSE loss
    NeuralNetwork nn(layers, 3, LossType::MSE);

    // Prepare example input and target data
    std::vector<float> input_data(10, 0.5f);  // example input vector
    std::vector<float> target_data(1, 0.8f);  // example target vector

    // Buffer for the network output
    std::vector<float> output_buffer(1);

    // Forward pass
    nn.forward(input_data.data(), output_buffer.data());
    std::cout << "Initial Prediction: " << output_buffer[0] << std::endl;

    // Calculate the initial loss
    float initial_loss = nn.calculate_loss(output_buffer.data(), target_data.data(), 1);
    std::cout << "Initial Loss: " << initial_loss << std::endl;

    // Training loop (simplified for demonstration)
    float learning_rate = 0.01f;
    for (int epoch = 0; epoch < 1000; ++epoch) {
        nn.forward(input_data.data(), output_buffer.data());
        nn.backward(target_data.data(), learning_rate);

        float current_loss = nn.calculate_loss(output_buffer.data(), target_data.data(), 1);
        if (epoch % 100 == 0) {
            std::cout << "Epoch " << epoch << ", Loss: " << current_loss << std::endl;
        }
    }

    nn.forward(input_data.data(), output_buffer.data());
    std::cout << "Final Prediction: " << output_buffer[0] << std::endl;

    // The 'layers' array is owned and deleted by the NeuralNetwork destructor,
    // so no explicit delete[] is needed here once 'nn' goes out of scope.
    return 0;
}
```

## License

This project is licensed under the LGPL-3.0-or-later license. See the `LICENSE` file (if present) for details.