DeepLearning4J Problems with INDArray
public void playFullGame(MultiLayerNetwork m1, MultiLayerNetwork m2) {
    boolean player = false;
    while (!this.isOver) {
        float[] f = Main.rowsToInput(this.rows);
        System.out.println(f.length); // prints 42
        INDArray input = Nd4j.create(f);
        this.addChip(Main.getHighestOutput(player ? m1.output(input) : m2.output(input)), player);
        player = !player;
    }
}
I create the INDArray with INDArray input = Nd4j.create(f);, but the call m1.output(input) throws the following exception:
Exception in thread "AWT-EventQueue-0" org.deeplearning4j.exception.DL4JInvalidInputException: Input size (63 columns; shape = [1, 63]) is invalid: does not match layer input size (layer # inputs = 42) (layer name: layer2, layer index: 2, layer type: OutputLayer)
I don't understand why the created INDArray is two-dimensional, or where the 63 comes from.
Edit:
MultiLayerNetwork configuration:
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .seed(randSeed).optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
        .updater(new Nesterovs(0.1, 0.9)).list()
        .layer(new DenseLayer.Builder().nIn(numRows * numColums).nOut(63).activation(Activation.RELU)
                .weightInit(WeightInit.XAVIER).build())
        .layer(new DenseLayer.Builder().nIn(63).nOut(63).activation(Activation.RELU)
                .weightInit(WeightInit.XAVIER).build())
        .layer(new OutputLayer.Builder(LossFunction.NEGATIVELOGLIKELIHOOD).nIn(numRows * numColums).nOut(7)
                .activation(Activation.SOFTMAX).weightInit(WeightInit.XAVIER).build())
        .build();
The 63 comes from the network itself: the input and output sizes of your layers do not line up.
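Concretely, in the configuration above the second hidden layer produces 63 outputs, but the OutputLayer that follows declares nIn(numRows * numColums), i.e. 42 inputs. That pair of lines is exactly what the exception is reporting ("63 columns ... layer # inputs = 42"):

// this layer feeds 63 activations forward ...
.layer(new DenseLayer.Builder().nIn(63).nOut(63).activation(Activation.RELU)
        .weightInit(WeightInit.XAVIER).build())
// ... but the output layer only expects numRows * numColums = 42 inputs
.layer(new OutputLayer.Builder(LossFunction.NEGATIVELOGLIKELIHOOD).nIn(numRows * numColums).nOut(7)
        .activation(Activation.SOFTMAX).weightInit(WeightInit.XAVIER).build())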
If you are new to neural networks, the key point is that a basic 2D (feed-forward) network specifies a number of inputs and outputs per layer, and the number of inputs of a layer should match the number of outputs of the previous layer.
For dense layers, the number of inputs of the first layer needs to match the number of input columns in your dataset. Whatever your dataset is, make sure that column count matches.
Note that setting the number of inputs and outputs for every layer by hand is error-prone. Instead, only set the number of outputs for each layer and use DL4J's setInputType API to supply the number of columns.
In your case, add InputType.feedForward (https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/inputs/InputType.java#L107):
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .seed(randSeed).optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
        .updater(new Nesterovs(0.1, 0.9)).list()
        .layer(new DenseLayer.Builder().nOut(63).activation(Activation.RELU)
                .weightInit(WeightInit.XAVIER).build())
        .layer(new DenseLayer.Builder().nOut(63).activation(Activation.RELU)
                .weightInit(WeightInit.XAVIER).build())
        .layer(new OutputLayer.Builder(LossFunction.NEGATIVELOGLIKELIHOOD).nOut(7)
                .activation(Activation.SOFTMAX).weightInit(WeightInit.XAVIER).build())
        // let DL4J infer each layer's nIn from the input type (numRows * numColums columns)
        .setInputType(InputType.feedForward(numRows * numColums))
        .build();
There are more examples here (mostly CNNs): https://github.com/eclipse/deeplearning4j-examples/search?q=setInputType
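As for why the created INDArray shows up as two-dimensional: Nd4j.create(f) builds the array from a flat float[], and depending on the ND4J version that gives you either a [1, 42] row vector or a rank-1 [42] array. Reshaping to [1, 42] explicitly makes the "one example, 42 feature columns" intent clear. A minimal sketch of the game loop under that assumption, with the network configured for numRows * numColums = 42 inputs as above:

public void playFullGame(MultiLayerNetwork m1, MultiLayerNetwork m2) {
    boolean player = false;
    while (!this.isOver) {
        float[] f = Main.rowsToInput(this.rows); // 42 board values
        // one example (row) with 42 feature columns, matching the network's input size
        INDArray input = Nd4j.create(f).reshape(1, f.length);
        INDArray output = player ? m1.output(input) : m2.output(input);
        this.addChip(Main.getHighestOutput(output), player);
        player = !player;
    }
}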