OpenNN neural network development: program crashes after adding layers to a regression neural network
I am developing a neural network for a regression task with OpenNN, and I have a problem with the network's layers. I cloned OpenNN's master branch a few weeks ago. Every time I try to add a layer, my program crashes without giving any error message. Since I am implementing a network for a regression problem, I looked at OpenNN's yacht_hydrodynamics_design example, but after copying that code into my project I ran into this problem. So far I have tried adding a Scaling layer and an Unscaling layer, and neither works. This is my code so far:
bool NNetwork::preparationForTraining(const string& filedata) {
    int inputLayerSize = 5;
    int outputLayerSize = 1;
    int hiddenLayerSize = round(sqrt((inputLayerSize * inputLayerSize) + (outputLayerSize * outputLayerSize)));
    int layers = 3;
    try {
        Tensor<Index, 1> neural_network_architecture(layers);
        neural_network_architecture.setValues({inputLayerSize, hiddenLayerSize, outputLayerSize});
        neuralnetwork = NeuralNetwork(NeuralNetwork::Approximation, neural_network_architecture);
    }
    catch (...) {
        cerr << "Failed to initialize Neural Network" << endl;
        return false;
    }
    try {
        dataset = DataSet(filedata, ';', true);
    }
    catch (...) {
        cerr << "Can not read Feature File" << endl;
        return false;
    }
    if (dataset.get_input_variables_number() != inputLayerSize) {
        cerr << "Wrong size of input layer" << endl;
        return false;
    }
    if (dataset.get_target_variables_number() != outputLayerSize) {
        cerr << "Wrong size of output layer" << endl;
        return false;
    }
    // prepare the data set
    // get the information of the variables, such as names and statistical descriptives
    Tensor<string, 1> inputs_names = dataset.get_input_variables_names();
    Tensor<string, 1> targets_names = dataset.get_target_variables_names();
    // the instances are divided into training, selection and testing subsets
    dataset.split_samples_random();
    // get the numbers of input and target variables
    Index input_variables_number = dataset.get_input_variables_number();
    Index target_variables_number = dataset.get_target_variables_number();
    // scale the data set with the minimum-maximum scaling method
    Tensor<string, 1> scaling_inputs_methods(input_variables_number);
    scaling_inputs_methods.setConstant("MinimumMaximum");
    Tensor<Descriptives, 1> inputs_descriptives = dataset.scale_input_variables(scaling_inputs_methods);
    Tensor<string, 1> scaling_target_methods(target_variables_number);
    scaling_target_methods.setConstant("MinimumMaximum");
    Tensor<Descriptives, 1> targets_descriptives = dataset.scale_target_variables(scaling_target_methods);
    // prepare the neural network
    // introduce information in the layers for a more precise calibration
    neuralnetwork.set_inputs_names(inputs_names);
    neuralnetwork.set_outputs_names(targets_names);
    cout << "inputs names: " << inputs_names << endl;
    cout << "targets names: " << targets_names << endl;
    // configure the scaling layer of the neural network
    ScalingLayer* scaling_layer_pointer = neuralnetwork.get_scaling_layer_pointer(); // program crashes here
    scaling_layer_pointer->set_scaling_methods(ScalingLayer::MinimumMaximum);
    scaling_layer_pointer->set_descriptives(inputs_descriptives);
    // configure the unscaling layer of the neural network
    UnscalingLayer* unscaling_layer_pointer = neuralnetwork.get_unscaling_layer_pointer();
    unscaling_layer_pointer->set_unscaling_methods(UnscalingLayer::MinimumMaximum);
    unscaling_layer_pointer->set_descriptives(targets_descriptives);
    return true;
}
As you can see, I have a class called NNetwork, which is declared as follows (header file):
using namespace OpenNN;
using namespace Eigen;

namespace covid {
    class NNetwork {
    public:
        explicit NNetwork();
        ~NNetwork() = default;
        bool preparationForTraining(const string& filedata);
        bool training();
        bool testing();
        bool predict(const string& filedata, std::vector<double>& prediction);
        bool loadNN();
    private:
        OpenNN::NeuralNetwork neuralnetwork;
        OpenNN::DataSet dataset;
    };
}
When I remove the last six lines of code in preparationForTraining, the program continues until the next crash happens in the function training(), which is called immediately after preparationForTraining:
bool NNetwork::training() {
    // set the training strategy, which is composed of a loss index and an optimization algorithm
    TrainingStrategy training_strategy(&neuralnetwork, &dataset); // program crashes here next
    training_strategy.set_loss_method(TrainingStrategy::NORMALIZED_SQUARED_ERROR);
    training_strategy.set_optimization_method(TrainingStrategy::ADAPTIVE_MOMENT_ESTIMATION);
    // configure the optimization algorithm
    AdaptiveMomentEstimation* adam = training_strategy.get_adaptive_moment_estimation_pointer();
    adam->set_loss_goal(1.0e-3);
    adam->set_maximum_epochs_number(10000);
    adam->set_display_period(1000);
    try {
        // start the training process
        const OptimizationAlgorithm::Results optimization_algorithm_results = training_strategy.perform_training();
        optimization_algorithm_results.save("E:/vitalib/vitalib/optimization_algorithm_results.dat");
    }
    catch (...) {
        return false;
    }
    return true;
}
I have the feeling I am missing something, perhaps a crucial line of code or something similar. It would be great if someone with OpenNN experience could help me.
Update: I moved the whole body of preparationForTraining into main, and now the program no longer crashes. But that is not what I am looking for, since I would rather do this inside the member function.