neural_network

package module
v0.0.0-...-6265c4b
Published: Aug 19, 2025 License: MIT Imports: 4 Imported by: 0

README

🧠 Neural Network Go Library


Simple and efficient neural network implementation for Go
Optimized for genetic algorithms and neuroevolution



✨ Features

🏗️ Architecture
  • Fully connected feedforward networks
  • Arbitrary topology support
  • Tanh activation function
  • Zero external dependencies
🧬 Genetic Algorithms
  • Built-in mutation support
  • Network crossover
  • Deep cloning
  • Population evolution
💾 Persistence
  • JSON serialization
  • Save/Load models
  • Complete topology preservation
  • Human-readable format

🚀 Quick Start

Installation
go get github.com/Sekerator/neural-network
Basic Usage
package main

import (
    "fmt"

    nn "github.com/Sekerator/neural-network"
)

func main() {
    // Create neural network
    network := nn.NewNeuralNetwork()

    // Initialize: 3 inputs → 2 hidden layers (5, 3 neurons) → 2 outputs
    network.Init(3, 2, []int{5, 3}, 2)

    // Set inputs (forward propagation runs automatically)
    network.SetInputs([]float32{0.5, 0.8, -0.2})

    // Read outputs
    for _, neuron := range network.OutputNeurons {
        fmt.Println(neuron.Result)
    }
}

📖 API Reference

Core Methods

Method              Description           Parameters
NewNeuralNetwork()  Create new network    -
Init(...)           Initialize topology   inputs, layers, neurons[], outputs
SetInputs(...)      Set input values      []float32
Calculate()         Forward propagation   -

Genetic Operations

Method              Description           Parameters
Mutate(...)         Random mutations      chance, range
Clone()             Deep copy             -
Crossover(...)      Combine networks      *NeuralNetwork

Persistence

Method              Description           Parameters
Save(...)           Export to JSON        filename
Load(...)           Import from JSON      filename

🎯 Examples

XOR Problem

Complete XOR solution using genetic algorithm:

cd examples/xor
go run main.go
Simple Network

Basic network operations demo:

cd examples/simple
go run main.go

🏛️ Architecture

graph LR
    I1[Input 1] --> H1[Hidden 1]
    I2[Input 2] --> H1
    I1 --> H2[Hidden 2]
    I2 --> H2
    H1 --> O[Output]
    H2 --> O
Components
  • NeuralNetwork - Main container managing topology
  • Neuron - Computational unit with bias and activation
  • Synapse - Weighted connection between neurons

🎮 Applications

Perfect for:

  • Game AI - NPC behavior, decision making
  • Robotics - Controller evolution
  • Optimization - Genetic algorithm problems
  • Research - Neuroevolution experiments
  • Education - Learning neural networks

⚡ Performance

  • 🎯 Uses float32 for memory efficiency
  • 🚀 Optimized forward propagation
  • 📦 Minimal memory allocations
  • 🔧 Zero dependencies

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

👤 Author

Created with ❤️ by Sekerator



Documentation

Overview

Package neural_network provides a simple and efficient neural network implementation optimized for genetic algorithms and neuroevolution applications. It supports feedforward networks with arbitrary topology and includes built-in support for mutation, crossover, and cloning operations.


Types

type NetworkData

type NetworkData struct {
	InputCount  int       `json:"input_count"`  // Number of input neurons
	OutputCount int       `json:"output_count"` // Number of output neurons
	HiddenShape []int     `json:"hidden_shape"` // Neurons per hidden layer
	Weights     []float32 `json:"weights"`      // All synapse weights
	Biases      []float32 `json:"biases"`       // All neuron biases (except inputs)
	Topology    [][]int   `json:"topology"`     // Synapse connections [from, to]
}

NetworkData represents the serializable form of a neural network. It's used for saving and loading networks to/from JSON format.

type NeuralNetwork

type NeuralNetwork struct {
	// InputNeurons holds the input layer neurons that receive external inputs
	InputNeurons []*Neuron

	// OutputNeurons holds the output layer neurons that produce the network's results
	OutputNeurons []*Neuron

	// HiddenNeurons is a 2D slice where each element represents a hidden layer
	// containing its neurons. The first index is the layer number, second is the neuron index
	HiddenNeurons [][]*Neuron

	// Synapses contains all connections between neurons with their weights
	Synapses []*Synapse
}

NeuralNetwork represents a feedforward neural network with configurable topology. It consists of input neurons, hidden layers with neurons, and output neurons, all connected by synapses with adjustable weights.

func NewNeuralNetwork

func NewNeuralNetwork() *NeuralNetwork

NewNeuralNetwork creates and returns a new empty neural network instance. The network must be initialized with Init() before use.

func (*NeuralNetwork) Calculate

func (net *NeuralNetwork) Calculate()

Calculate performs forward propagation through the network. It calculates the output of each neuron layer by layer, from hidden layers to output. Each neuron's output is the tanh activation of the weighted sum of its inputs plus bias.

This method is automatically called by SetInputs() and usually doesn't need to be called directly.

func (*NeuralNetwork) Clone

func (net *NeuralNetwork) Clone() *NeuralNetwork

Clone creates a deep copy of the neural network. The cloned network has the same topology, weights, and biases but is completely independent - modifications to the clone won't affect the original.

Returns a new NeuralNetwork instance with identical parameters.

This is useful for preserving good networks in genetic algorithms or creating variations without modifying the original.

func (*NeuralNetwork) Crossover

func (net *NeuralNetwork) Crossover(other *NeuralNetwork) *NeuralNetwork

Crossover performs genetic crossover between two neural networks with the same topology. Each weight and bias in the child has a 50% chance of coming from either parent.

Parameters:

  • other: the second parent network (must have the same topology)

Returns a new child network with mixed parameters from both parents.

This is commonly used in genetic algorithms to combine successful networks. The networks must have identical topology (same number of neurons and layers).

func (*NeuralNetwork) Init

func (net *NeuralNetwork) Init(inputNeuronCount, hiddenLayerCount int, hiddenNeuronCount []int, outputNeuronCount int)

Init initializes the neural network with the specified topology. It creates all neurons and synapses with random initial weights and biases in range [-0.5, 0.5].

Parameters:

  • inputNeuronCount: number of input neurons
  • hiddenLayerCount: number of hidden layers
  • hiddenNeuronCount: slice containing the number of neurons for each hidden layer
  • outputNeuronCount: number of output neurons

The function creates a fully connected feedforward network where each neuron in a layer is connected to all neurons in the next layer.
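For a fully connected topology, the number of parameters Init creates follows directly from the layer sizes: one weight per pair of adjacent-layer neurons, and one bias per non-input neuron. A small sketch (paramCounts is an illustrative helper, not part of the API):

```go
package main

import "fmt"

// paramCounts returns the number of weights and biases a fully connected
// feedforward topology contains: every neuron connects to all neurons in
// the next layer, and every non-input neuron carries one bias.
func paramCounts(inputs int, hidden []int, outputs int) (weights, biases int) {
	layers := append([]int{inputs}, hidden...)
	layers = append(layers, outputs)
	for i := 0; i+1 < len(layers); i++ {
		weights += layers[i] * layers[i+1]
	}
	for _, n := range layers[1:] {
		biases += n
	}
	return
}

func main() {
	// The Quick Start topology: 3 inputs, hidden layers of 5 and 3, 2 outputs.
	w, b := paramCounts(3, []int{5, 3}, 2)
	fmt.Println(w, b) // 3*5 + 5*3 + 3*2 = 36 weights, 5+3+2 = 10 biases
}
```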

func (*NeuralNetwork) Load

func (net *NeuralNetwork) Load(filename string) error

Load deserializes a neural network from a JSON file. It reconstructs the complete network topology, weights, and biases.

Parameters:

  • filename: path to the input JSON file

Returns an error if the file cannot be read or parsed.

func (*NeuralNetwork) Mutate

func (net *NeuralNetwork) Mutate(mutateChance float32, mutateRange float32)

Mutate applies random mutations to the network's weights and biases. This is commonly used in genetic algorithms for introducing variation.

Parameters:

  • mutateChance: probability (0.0 to 1.0) that each weight/bias will be mutated
  • mutateRange: standard deviation of the normal distribution used for mutations

Mutations are added to existing values using a normal distribution. If any weight or bias becomes NaN or Inf, it's reset to a random value in [-0.5, 0.5].

func (*NeuralNetwork) Save

func (net *NeuralNetwork) Save(filename string) error

Save serializes the neural network to a JSON file. The saved file contains all network parameters including topology, weights, and biases.

Parameters:

  • filename: path to the output JSON file

Returns an error if the file cannot be created or written.

func (*NeuralNetwork) SetInputs

func (net *NeuralNetwork) SetInputs(Inputs []float32)

SetInputs sets the input values for the network and automatically triggers forward propagation to calculate all neuron outputs.

Parameters:

  • Inputs: slice of float32 values to set as inputs

If the number of inputs doesn't match the number of input neurons, the function returns without making any changes.

type Neuron

type Neuron struct {
	// Bias is the threshold value added to the weighted sum before activation.
	// It allows the neuron to shift its activation function, making it more flexible.
	// Typically initialized to a random value in range [-0.5, 0.5].
	Bias float32

	// Result stores the neuron's output value after activation.
	// For input neurons, this is set directly from external inputs.
	// For hidden and output neurons, this is calculated during forward propagation.
	Result float32

	// Synapses contains all incoming connections from neurons in the previous layer.
	// Each synapse has a weight and a reference to the source neuron.
	// The weighted sum of all synapse inputs forms the neuron's pre-activation value.
	Synapses []*Synapse
}

Neuron represents a single computational unit in the neural network. It holds a bias value, the current result after activation, and references to all incoming synapses that connect it to neurons in the previous layer.

The neuron performs two main operations: 1. Summation: adds weighted inputs from all incoming synapses plus bias 2. Activation: applies the tanh activation function to produce the final output

func (*Neuron) Activate

func (n *Neuron) Activate()

Activate applies the hyperbolic tangent (tanh) activation function to the neuron's result. The tanh function maps any input to the range [-1, 1], providing non-linearity to the network while maintaining zero-centered outputs.

The activation function is: Result = tanh(Result)

This method modifies the Result field in place and is called automatically during forward propagation after summing all weighted inputs and bias.

Properties of tanh activation:

  • Output range: [-1, 1]
  • Zero-centered (unlike sigmoid)
  • Smooth gradient
  • Can suffer from vanishing gradients for very large or small inputs

type Synapse

type Synapse struct {
	// Weight determines the strength and direction of the connection.
	// Positive weights amplify the signal, negative weights inhibit it.
	// Typically initialized to random values in range [-0.5, 0.5].
	// During forward propagation: output = input * Weight
	//
	// The weight is adjusted through:
	//   - Mutation: random changes during genetic evolution
	//   - Crossover: mixing weights from parent networks
	//   - Manual setting: when loading from saved models
	Weight float32

	// LeftNeuron points to the source neuron (from the previous layer).
	// During forward propagation, the Result value from this neuron
	// is multiplied by the Weight to contribute to the next neuron's input.
	// This creates a directed connection from LeftNeuron to the neuron
	// that owns this synapse in its Synapses slice.
	LeftNeuron *Neuron
}

Synapse represents a weighted connection between two neurons in the network. It acts as the communication channel through which signals flow from one neuron to another, with the weight determining the strength and sign of the connection.

In a feedforward network, synapses connect neurons from one layer to neurons in the next layer, forming a directed acyclic graph. The weight of the synapse is multiplied by the output of the source neuron to contribute to the input of the destination neuron.

Synapses are the primary learnable parameters in the network, along with neuron biases. During genetic algorithms, synapse weights are mutated and crossed over to evolve the network's behavior.

Directories

Path             Synopsis
examples
examples/simple  command
examples/xor     command
