A Python implementation of the Broad Learning System, a novel neural network architecture that provides an efficient alternative to deep learning for various machine learning tasks. This implementation leverages scikit-learn for robust preprocessing and regularized learning.
The Broad Learning System is a flat network architecture that uses random feature mapping and enhancement nodes to create a broad structure instead of a deep one. This approach offers several advantages:
- Fast Training: Uses closed-form solutions via Ridge regression
- Incremental Learning: Support for adding feature and enhancement groups dynamically
- Versatile: Handles both classification and regression tasks
- Efficient: No need for complex backpropagation algorithms
- Quick Setup: Easy-to-use configuration system
- Flexible Architecture: Configurable feature and enhancement groups
- Built-in Preprocessing: StandardScaler and OneHotEncoder integration
- Multiple Activations: Support for identity, tanh, sigmoid, and ReLU activations
- Incremental Learning: Add groups and refit without full retraining
- Train/Test Split: Built-in data splitting utilities
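The fast, backprop-free training above reduces to the standard ridge regression closed form: with the broad-layer output matrix $A$ (feature and enhancement node outputs concatenated) and targets $Y$, the output weights come from a single regularized least-squares solve (standard ridge identity; the repository's exact formulation may differ):

$$W = (A^\top A + \lambda I)^{-1} A^\top Y$$

where $\lambda$ corresponds to the `lambda_reg` parameter.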
- Python 3.8+
- NumPy
- scikit-learn
- pandas (optional)
- Clone the repository:

```bash
git clone https://github.com/FamALouiz/Broad-Learning-System.git
cd Broad-Learning-System
```

- Install dependencies:

```bash
pip install -r requirements.txt
```

Quick start:

```python
import numpy as np
from BLS import BLSConfig, BroadLearningSystem

# Generate sample data
X = np.random.randn(1000, 32)
y = (X[:, 0] - 0.8 * X[:, 1] > 0).astype(int)

# Split data
X_train, X_test, y_train, y_test = BroadLearningSystem.split(
    X, y, test_size=0.25, random_state=1, stratify=y
)

# Configure the model
config = BLSConfig(
    n_feature_groups=16,
    feature_group_size=12,
    n_enhancement_groups=8,
    enhancement_group_size=12,
    feature_activation="tanh",
    enhancement_activation="tanh",
    lambda_reg=1e-2,
    standardize=True,
    random_state=1,
)

# Train and evaluate
model = BroadLearningSystem(config).fit(X_train, y_train)
train_acc = (model.predict(X_train) == y_train).mean()
test_acc = (model.predict(X_test) == y_test).mean()
print(f"Train accuracy: {train_acc:.4f}")
print(f"Test accuracy: {test_acc:.4f}")
```

The BLSConfig class provides comprehensive configuration options:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `n_feature_groups` | int | 10 | Number of feature mapping groups |
| `feature_group_size` | int | 10 | Size of each feature group |
| `n_enhancement_groups` | int | 10 | Number of enhancement groups |
| `enhancement_group_size` | int | 10 | Size of each enhancement group |
| `feature_activation` | str | "tanh" | Activation function for feature nodes |
| `enhancement_activation` | str | "tanh" | Activation function for enhancement nodes |
| `lambda_reg` | float | 1e-2 | Ridge regression regularization parameter |
| `add_bias` | bool | True | Whether to add bias terms |
| `standardize` | bool | True | Whether to standardize input features |
| `random_state` | int/None | 42 | Random seed for reproducibility |
Supported activation functions:

- `"identity"`: Linear activation (f(x) = x)
- `"tanh"`: Hyperbolic tangent
- `"sigmoid"`: Sigmoid function
- `"relu"`: Rectified Linear Unit
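As an illustrative sketch (not the repository's internals — the actual dispatch in `BLS.py` may differ), the four activations correspond to simple NumPy functions:

```python
import numpy as np

# Hypothetical name-to-function mapping for the four supported activations
ACTIVATIONS = {
    "identity": lambda x: x,
    "tanh": np.tanh,
    "sigmoid": lambda x: 1.0 / (1.0 + np.exp(-x)),
    "relu": lambda x: np.maximum(x, 0.0),
}

x = np.array([-2.0, 0.0, 2.0])
for name, fn in ACTIVATIONS.items():
    print(name, fn(x))
```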
The BLS supports dynamic expansion of the network architecture:
```python
# Train initial model
model = BroadLearningSystem(config).fit(X_train, y_train)

# Expand the network
model.add_feature_groups(4)      # Add 4 more feature groups
model.add_enhancement_groups(4)  # Add 4 more enhancement groups

# Refit only the output weights (fast)
model.refit_output(X_train, y_train)

# Evaluate expanded model
expanded_acc = (model.predict(X_test) == y_test).mean()
print(f"Accuracy after expansion: {expanded_acc:.4f}")
```

For classification tasks, you can get prediction probabilities:
```python
# Get class probabilities
probabilities = model.predict_proba(X_test)
print(f"Prediction probabilities shape: {probabilities.shape}")
```

The BLS automatically detects regression tasks when the target is continuous:
```python
# Regression example: build a continuous target, then split it alongside X
# so that training inputs and targets stay aligned after shuffling
y_regression = X[:, 0] + 0.5 * X[:, 1] + np.random.normal(0, 0.1, X.shape[0])
Xr_train, Xr_test, yr_train, yr_test = BroadLearningSystem.split(
    X, y_regression, test_size=0.25, random_state=1
)
model_reg = BroadLearningSystem(config).fit(Xr_train, yr_train)
predictions = model_reg.predict(Xr_test)
```

Project structure:

```
Broad-Learning-System/
├── BLS.py              # Main BroadLearningSystem implementation
├── BLS_config.py       # Configuration dataclass
├── main.py             # Example usage script
├── requirements.txt    # Dependencies (currently empty)
├── LICENSE             # MIT License
└── README.md           # This file
```
The Broad Learning System works in three main steps:
- Feature Mapping: Input data is mapped through random feature groups with configurable activation functions
- Enhancement: Feature outputs are further processed through enhancement groups
- Output Learning: A Ridge regression solver computes optimal output weights using a closed-form solution
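The three steps above can be sketched in plain NumPy (a minimal illustration, not the repository's code — names like `W_f` and `W_e` are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))   # input data
y = (X[:, 0] - 0.5 * X[:, 1] > 0).astype(float).reshape(-1, 1)

# 1. Feature mapping: random projection + activation (weights never trained)
W_f = rng.standard_normal((8, 20))
Z = np.tanh(X @ W_f)

# 2. Enhancement: project the feature nodes through a second random map
W_e = rng.standard_normal((20, 10))
H = np.tanh(Z @ W_e)

# 3. Output learning: closed-form ridge regression over the broad layer [Z | H]
A = np.hstack([Z, H])
lam = 1e-2
W_out = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

train_acc = float(((A @ W_out > 0.5).astype(float) == y).mean())
print(f"train accuracy: {train_acc:.3f}")
```

Only `W_out` is learned; the two random projections stay fixed, which is why there is no backpropagation.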
The network structure can be represented as:
```
Input  →  Feature Groups  →  Enhancement Groups  →  Output Layer
  ↓            ↓                     ↓                    ↓
  X   →   [F₁, F₂, ...]   →     [E₁, E₂, ...]    →        Y
```
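This structure also shows why incremental expansion is cheap. A self-contained NumPy sketch (illustrative only, not the repository's code): adding enhancement nodes merely widens the broad layer, so only the closed-form output solve is repeated.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 8))
y = (X[:, 0] > 0).astype(float).reshape(-1, 1)

def ridge(A, y, lam=1e-2):
    """Closed-form ridge solve for the output weights."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

# Initial broad layer: 16 feature nodes + 8 enhancement nodes
Z = np.tanh(X @ rng.standard_normal((8, 16)))
H = np.tanh(Z @ rng.standard_normal((16, 8)))
A = np.hstack([Z, H])
W = ridge(A, y)

# Expansion: append 8 new enhancement nodes; the random input-side
# projections are untouched, and only the cheap output solve is redone
H_new = np.tanh(Z @ rng.standard_normal((16, 8)))
A_wide = np.hstack([A, H_new])
W_wide = ridge(A_wide, y)
print(A.shape, A_wide.shape)   # broad layer widens from 24 to 32 columns
```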
Contributions are welcome! Please feel free to submit issues, feature requests, or pull requests.
This project is licensed under the MIT License - see the LICENSE file for details.
- Chen, C. L. P., & Liu, Z. (2017). Broad learning system: An effective and efficient incremental learning system without the need for deep architecture. IEEE Transactions on Neural Networks and Learning Systems, 29(1), 10–24.
This implementation uses scikit-learn for robust preprocessing and regularized learning, ensuring compatibility with the broader Python machine learning ecosystem.