How to build a Convolutional Neural Net in Azure Machine Learning?
Someone should add "net#" as a tag. I'm trying to improve a neural network in Azure Machine Learning Studio by turning it into a convolutional neural network, following this tutorial:
https://gallery.cortanaintelligence.com/Experiment/Neural-Network-Convolution-and-pooling-deep-net-2
The difference between my case and the tutorial is that I'm doing regression with 35 features and 1 label, while they do classification with 28x28 features and 10 labels.
I started from the basic examples and got them working:
input Data [35];
hidden H1 [100]
from Data all;
hidden H2 [100]
from H1 all;
output Result [1] linear
from H2 all;
Now, converting this to a convolutional net is where I go wrong. Neither the tutorial nor the documentation here: https://docs.microsoft.com/en-us/azure/machine-learning/machine-learning-azure-ml-netsharp-reference-guide explains how to compute the node tuple values for the hidden layers. The tutorial says:
hidden C1 [5, 12, 12]
from Picture convolve {
  InputShape  = [28, 28];
  KernelShape = [ 5,  5];
  Stride      = [ 2,  2];
  MapCount = 5;
}
hidden C2 [50, 4, 4]
from C1 convolve {
  InputShape  = [ 5, 12, 12];
  KernelShape = [ 1,  5,  5];
  Stride      = [ 1,  2,  2];
  Sharing     = [ F,  T,  T];
  MapCount = 10;
}
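The tutorial's shapes are not arbitrary; they follow the standard no-padding convolution formula. A short sketch (the helper name `conv_out` is mine, not part of Net#) that reproduces the tutorial's numbers:

```python
def conv_out(size, kernel, stride):
    # number of positions a window of length `kernel` can take
    # sliding over `size` inputs with step `stride` (no padding)
    return (size - kernel) // stride + 1

# C1: 28x28 input, 5x5 kernel, stride 2 -> 12x12 per feature map,
# and MapCount = 5 gives the leading 5 in [5, 12, 12]
print(conv_out(28, 5, 2), conv_out(28, 5, 2))  # 12 12

# C2: input [5, 12, 12], kernel [1, 5, 5], stride [1, 2, 2]
# -> per-dimension window counts [5, 4, 4]; with Sharing = F on the
#    first dimension, the 5 windows times MapCount 10 give 50 maps,
#    hence [50, 4, 4]
print(conv_out(5, 1, 1), conv_out(12, 5, 2), conv_out(12, 5, 2))  # 5 4 4
```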
It seems that [5, 12, 12] and [50, 4, 4] appear out of nowhere, along with KernelShape, Stride and MapCount. How do I know which values work for my example? I tried using the same values, but it didn't work, and I have a feeling that since they have a [28, 28] input while I have a [35], I need tuples of 2 integers instead of 3.
I just tried some random values that seemed related to the tutorial:
const { T = true; F = false; }
input Data [35];
hidden C1 [7, 23]
from Data convolve {
  InputShape  = [35];
  KernelShape = [7];
  Stride      = [2];
  MapCount = 7;
}
hidden C2 [200, 6]
from C1 convolve {
  InputShape  = [ 7, 23];
  KernelShape = [ 1,  7];
  Stride      = [ 1,  2];
  Sharing     = [ F,  T];
  MapCount = 14;
}
hidden H3 [100]
from C2 all;
output Result [1] linear
from H3 all;
This seems impossible to debug, since the only error Azure Machine Learning Studio gives is:
Exception":{"ErrorId":"LibraryException","ErrorCode":"1000","ExceptionType":"ModuleException","Message":"Error 1000: TLC library exception: Exception of type 'Microsoft.Numerics.AFxLibraryException' was thrown.","Exception":{"Library":"TLC","ExceptionType":"LibraryException","Message":"Exception of type 'Microsoft.Numerics.AFxLibraryException' was thrown."}}}Error: Error 1000: TLC library exception: Exception of type 'Microsoft.Numerics.AFxLibraryException' was thrown. Process exited with error code -2
Finally, my setup is
Thanks for your help!
The correct network definition for an input of 35 columns, with the given kernels and strides, is the following:
const { T = true; F = false; }
input Data [35];
hidden C1 [7, 15]
from Data convolve {
  InputShape  = [35];
  KernelShape = [7];
  Stride      = [2];
  MapCount = 7;
}
hidden C2 [14, 7, 5]
from C1 convolve {
  InputShape  = [ 7, 15];
  KernelShape = [ 1,  7];
  Stride      = [ 1,  2];
  Sharing     = [ F,  T];
  MapCount = 14;
}
hidden H3 [100]
from C2 all;
output Result [1] linear
from H3 all;
First, C1 = [7, 15]. The first dimension is simply the MapCount. For the second dimension, the kernel shape defines the length of the "window" that scans the input columns, and the stride defines how far it moves at each step. So the kernel windows cover columns 1-7, 3-9, 5-11, ..., 29-35, which yields 15 for the second dimension when you count the windows.
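The window count can be verified with a quick sketch (the helper name `conv_out` is mine, assuming the usual no-padding formula):

```python
def conv_out(size, kernel, stride):
    # number of windows of length `kernel` that fit in `size`
    # inputs when stepping by `stride` (no padding)
    return (size - kernel) // stride + 1

# 35 input columns, kernel 7, stride 2: enumerate the (start, end)
# column pairs each window covers, 1-indexed
windows = [(s + 1, s + 7) for s in range(0, 35 - 7 + 1, 2)]
print(len(windows))              # 15
print(windows[0], windows[-1])   # (1, 7) (29, 35)
assert len(windows) == conv_out(35, 7, 2)
```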
Next, C2 = [14, 7, 5]. The first dimension is again the MapCount. For the second and third dimensions, the 1×7 kernel "window" has to cover the 7×15 input, with strides of 1 and 2 along the respective dimensions, which yields 7 and 5.
Note that if you want to flatten the output, you can specify a C2 hidden-layer shape of [98, 5] or even [490].
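The C2 shape and its flattened variants come from the same arithmetic; a sketch (again with my own `conv_out` helper, not a Net# construct):

```python
def conv_out(size, kernel, stride):
    # no-padding convolution output size along one dimension
    return (size - kernel) // stride + 1

map_count = 14
# C1 output is [7, 15]; kernel [1, 7] with stride [1, 2] over it
dims = [conv_out(7, 1, 1), conv_out(15, 7, 2)]  # [7, 5]
c2 = [map_count] + dims
print(c2)                        # [14, 7, 5]
# equivalent flattened declarations for the same node count
print([c2[0] * c2[1], c2[2]])    # [98, 5]
print([c2[0] * c2[1] * c2[2]])   # [490]
```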