Neural Network Prediction Model Code


This article provides a complete, PyTorch-based implementation of a general-purpose neural network prediction model, using structured-data regression as the running example. It covers the full workflow: data preprocessing, model construction, training, and prediction. The code is annotated throughout and can be quickly adapted to classification, time-series forecasting, and other tasks.

### Prerequisites
Install the required libraries before running the code:
```bash
pip install torch numpy scikit-learn matplotlib pandas
```

### Complete Runnable Code
#### 1. Import Dependencies
```python
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
import numpy as np
from sklearn.datasets import make_regression
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
```

#### 2. Dataset Preparation and Preprocessing
```python
# Generate a synthetic regression dataset (replace with pd.read_csv to load your own data)
X, y = make_regression(n_samples=1000, n_features=10, noise=0.1, random_state=42)
y = y.reshape(-1, 1)  # reshape labels to match the model's expected input

# Split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Standardize features and targets (scale differences can hurt convergence)
scaler_X = StandardScaler()
scaler_y = StandardScaler()
X_train = scaler_X.fit_transform(X_train)
X_test = scaler_X.transform(X_test)
y_train = scaler_y.fit_transform(y_train)
y_test = scaler_y.transform(y_test)

# Custom PyTorch dataset wrapping feature/label arrays
class CustomDataset(Dataset):
    def __init__(self, features, labels):
        self.features = torch.tensor(features, dtype=torch.float32)
        self.labels = torch.tensor(labels, dtype=torch.float32)

    def __len__(self):
        return len(self.features)

    def __getitem__(self, idx):
        return self.features[idx], self.labels[idx]

# Build data loaders
batch_size = 32
train_dataset = CustomDataset(X_train, y_train)
test_dataset = CustomDataset(X_test, y_test)
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False)
```
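Before training, it can help to pull one batch from the loader and confirm that the shapes match what the model expects. A minimal sanity check, re-declaring the dataset class on small random arrays so the snippet stands alone:

```python
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class CustomDataset(Dataset):
    def __init__(self, features, labels):
        self.features = torch.tensor(features, dtype=torch.float32)
        self.labels = torch.tensor(labels, dtype=torch.float32)

    def __len__(self):
        return len(self.features)

    def __getitem__(self, idx):
        return self.features[idx], self.labels[idx]

# Toy arrays standing in for the scaled training data (100 samples, 10 features)
X_demo = np.random.rand(100, 10).astype(np.float32)
y_demo = np.random.rand(100, 1).astype(np.float32)

loader = DataLoader(CustomDataset(X_demo, y_demo), batch_size=32, shuffle=True)
batch_X, batch_y = next(iter(loader))
print(batch_X.shape, batch_y.shape)  # torch.Size([32, 10]) torch.Size([32, 1])
```

A shape mismatch here (for example, labels of shape `(32,)` instead of `(32, 1)`) is the usual cause of broadcasting warnings from `nn.MSELoss` later on.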

#### 3. Model Definition
```python
class FCNet(nn.Module):
    def __init__(self, input_dim=10, hidden_dim=64, output_dim=1):
        super(FCNet, self).__init__()
        # Three-layer fully connected network
        self.layers = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim // 2),
            nn.ReLU(),
            nn.Linear(hidden_dim // 2, output_dim)
        )

    def forward(self, x):
        return self.layers(x)

# Instantiate the model on GPU if available, otherwise CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = FCNet().to(device)
```

#### 4. Training Configuration and Loop
```python
# Training hyperparameters
epochs = 50
learning_rate = 0.001
criterion = nn.MSELoss()  # MSE for regression; use nn.CrossEntropyLoss() for classification
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

# Training loop
model.train()
train_loss_list = []
for epoch in range(epochs):
    total_train_loss = 0
    for batch_X, batch_y in train_loader:
        batch_X = batch_X.to(device)
        batch_y = batch_y.to(device)

        # Forward pass and loss computation
        outputs = model(batch_X)
        loss = criterion(outputs, batch_y)

        # Backward pass and parameter update
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        total_train_loss += loss.item()

    avg_train_loss = total_train_loss / len(train_loader)
    train_loss_list.append(avg_train_loss)
    if (epoch + 1) % 5 == 0:
        print(f"Epoch [{epoch+1}/{epochs}], train loss: {avg_train_loss:.6f}")
```

#### 5. Evaluation and Prediction
```python
model.eval()
test_loss = 0
preds = []
trues = []
with torch.no_grad():  # disable gradient tracking to speed up inference
    for batch_X, batch_y in test_loader:
        batch_X = batch_X.to(device)
        batch_y = batch_y.to(device)
        outputs = model(batch_X)
        loss = criterion(outputs, batch_y)
        test_loss += loss.item()
        preds.append(outputs.cpu().numpy())
        trues.append(batch_y.cpu().numpy())

# Invert the standardization to recover predictions on the original scale
preds = scaler_y.inverse_transform(np.concatenate(preds, axis=0))
trues = scaler_y.inverse_transform(np.concatenate(trues, axis=0))
avg_test_loss = test_loss / len(test_loader)
print(f"Average test loss: {avg_test_loss:.6f}")

# Plot predictions against ground truth
plt.figure(figsize=(10, 6))
plt.plot(trues[:100], label="ground truth", marker='o')
plt.plot(preds[:100], label="prediction", marker='x')
plt.xlabel("Sample index")
plt.ylabel("Target value")
plt.title("Predictions vs. ground truth (first 100 test samples)")
plt.legend()
plt.show()
```

### Adapting the Code to Other Tasks
1. **Classification**: set the output layer's `output_dim` to the number of classes, replace the loss function with `nn.CrossEntropyLoss()`, and obtain predicted classes at inference time with `torch.argmax(outputs, dim=1)`.
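A minimal sketch of those three changes, using a hypothetical 3-class setup (the class count and batch size here are illustrative, not from the original example):

```python
import torch
import torch.nn as nn

num_classes = 3  # hypothetical: output_dim becomes the number of classes
clf = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, num_classes),  # raw logits; CrossEntropyLoss applies softmax internally
)
criterion = nn.CrossEntropyLoss()  # expects logits and integer class labels

X = torch.randn(8, 10)
y = torch.randint(0, num_classes, (8,))  # labels are class indices, not one-hot vectors
loss = criterion(clf(X), y)

pred_classes = torch.argmax(clf(X), dim=1)  # predicted class per sample
print(pred_classes.shape)  # torch.Size([8])
```

Note that, unlike the regression labels, classification targets are 1-D integer tensors, so the `y.reshape(-1, 1)` step from the regression pipeline should be dropped.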
2. **Time-series prediction**: swap the fully connected network for an LSTM or Transformer. An example LSTM model:
```python
class LSTMNet(nn.Module):
    def __init__(self, input_dim=1, hidden_dim=64, output_dim=1, num_layers=2):
        super(LSTMNet, self).__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        out, _ = self.lstm(x)
        out = self.fc(out[:, -1, :])  # predict from the last time step's output
        return out
```
3. **Mitigating overfitting**: add `nn.Dropout(0.2)` layers to the network, apply L2 regularization, or use early stopping to improve generalization.
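As a concrete sketch of the first two options, the `FCNet` above can be given Dropout after each hidden activation, and L2 regularization can be applied through Adam's `weight_decay` parameter (the dropout rate and decay coefficient below are illustrative defaults, not tuned values):

```python
import torch
import torch.nn as nn

class FCNetDropout(nn.Module):
    """FCNet variant with Dropout after each hidden activation."""
    def __init__(self, input_dim=10, hidden_dim=64, output_dim=1, p=0.2):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden_dim, hidden_dim // 2), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden_dim // 2, output_dim),
        )

    def forward(self, x):
        return self.layers(x)

model = FCNetDropout()
# weight_decay adds an L2 penalty on the parameters to every update
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

# Dropout is only active in train mode; model.eval() disables it,
# so inference is deterministic
model.eval()
x = torch.randn(4, 10)
print(model(x).shape)  # torch.Size([4, 1])
```

Remember to call `model.train()` before each training epoch and `model.eval()` before evaluation, otherwise Dropout will randomly zero activations at inference time.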

This article was generated with the assistance of an AI large language model (Doubao-Seed-1.6).

