ValueError: logits and labels must have the same shape, but got shapes [2] and [2,1]
Please help me understand an error in my TensorFlow.js code. I am trying to do binary classification and fit a dataset.
Simplified example: https://jsfiddle.net/9w8hx21o/4/
In this example I have 4 observations, each of shape 4 x 7, with four labels. At the start of training I get the error "logits and labels must have the same shape, but got shapes [2] and [2,1]".
const xs = [
  [
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 1, 1],
  ],
  [
    [2, 2, 2, 2, 2, 2, 2],
    [2, 2, 2, 2, 2, 2, 2],
    [2, 2, 2, 2, 2, 2, 2],
    [2, 2, 2, 2, 2, 2, 2],
  ],
  [
    [3, 3, 3, 3, 3, 3, 3],
    [3, 3, 3, 3, 3, 3, 3],
    [3, 3, 3, 3, 3, 3, 3],
    [3, 3, 3, 3, 3, 3, 3],
  ],
  [
    [4, 4, 4, 4, 4, 4, 4],
    [4, 4, 4, 4, 4, 4, 4],
    [4, 4, 4, 4, 4, 4, 4],
    [4, 4, 4, 4, 4, 4, 4],
  ]
]
const ys = [0, 1, 0, 1]

const model = tf.sequential()
model.add(tf.layers.inputLayer({
  inputShape: [4, 7]
}))
model.add(tf.layers.conv1d({
  filters: 16,
  kernelSize: 2,
  activation: 'relu',
}))
model.add(tf.layers.flatten())
model.add(tf.layers.dense({
  units: 1,
  activation: 'sigmoid'
}))
model.summary()
model.compile({
  optimizer: 'adam',
  loss: 'binaryCrossentropy',
  metrics: ['accuracy']
})

const xDataset = tf.data.array(xs);
const yDataset = tf.data.array(ys);
const xyDataset = tf.data.zip({xs: xDataset, ys: yDataset}).batch(2).shuffle(2)

const print_xyDataset = async () => {
  await xyDataset.forEachAsync(e => {
    console.log('\n');
    for (let key in e) {
      console.log(key + ':');
      console.log('Shape ' + e[key].shape)
      e[key].print();
    }
  })
}
print_xyDataset()

const train = async () => {
  await model.fitDataset(xyDataset, {
    epochs: 4,
    callbacks: {
      onEpochEnd: async (epoch, logs) => {
        console.log(`EPOCH (${epoch + 1}): Train Accuracy: ${(logs.acc * 100).toFixed(2)}\n`);
      },
    }
  })
}
train().catch(e => console.log(e))
You are most likely running a newer version of TF.js. When the true labels are missing the extra dimension that the predictions have, older TF versions produced mathematically equivalent but internally unintended behavior; newer versions raise this shape error instead. Change the labels to
const ys = [[0], [1], [0], [1]]
and see whether that resolves the problem.