Invalid argument:Not enough time for target transition sequence
I tried to run this HTR model https://github.com/arthurflor23/handwritten-text-recognition but it gives me the error Invalid argument: Not enough time for target transition sequence. I think the problem is in ctc_batch_cost. My image size is (137, 518) and the max_len of the text is 137. Any ideas on how to fix this?
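For context, this is a minimal sketch of how Keras's ctc_batch_cost is typically called (toy shapes and variable names are my own, not the repo's actual code):

import numpy as np
import tensorflow as tf
from tensorflow.keras import backend as K

# Toy shapes for illustration only (not the repo's actual values).
batch, time_steps, num_classes = 2, 10, 5

y_pred = tf.nn.softmax(tf.random.uniform((batch, time_steps, num_classes)))
labels = np.array([[1, 2, 3, 0], [2, 2, 1, 3]])  # integer-encoded texts
input_length = np.full((batch, 1), time_steps)   # model time steps per sample
label_length = np.array([[3], [4]])              # true label lengths

# CTC needs input_length to cover the label: at least label_length steps,
# plus one blank between each pair of repeated adjacent characters.
# If it does not, ctc_batch_cost raises
# "Invalid argument: Not enough time for target transition sequence".
loss = K.ctc_batch_cost(labels, y_pred, input_length, label_length)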
I solved this problem; it was caused by the input size. Here is my model summary:
Layer (type)                 Output Shape              Param #
=================================================================
input (InputLayer)           [(None, 1024, 128, 1)]    0
_________________________________________________________________
conv2d (Conv2D)              (None, 1024, 64, 16)      160
_________________________________________________________________
p_re_lu (PReLU)              (None, 1024, 64, 16)      16
_________________________________________________________________
batch_normalization (BatchNo (None, 1024, 64, 16)      112
_________________________________________________________________
full_gated_conv2d (FullGated (None, 1024, 64, 16)      4640
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 1024, 64, 32)      4640
_________________________________________________________________
p_re_lu_1 (PReLU)            (None, 1024, 64, 32)      32
_________________________________________________________________
batch_normalization_1 (Batch (None, 1024, 64, 32)      224
_________________________________________________________________
full_gated_conv2d_1 (FullGat (None, 1024, 64, 32)      18496
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 512, 16, 40)       10280
_________________________________________________________________
p_re_lu_2 (PReLU)            (None, 512, 16, 40)       40
_________________________________________________________________
batch_normalization_2 (Batch (None, 512, 16, 40)       280
_________________________________________________________________
full_gated_conv2d_2 (FullGat (None, 512, 16, 40)       28880
_________________________________________________________________
dropout (Dropout)            (None, 512, 16, 40)       0
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 512, 16, 48)       17328
_________________________________________________________________
p_re_lu_3 (PReLU)            (None, 512, 16, 48)       48
_________________________________________________________________
batch_normalization_3 (Batch (None, 512, 16, 48)       336
_________________________________________________________________
full_gated_conv2d_3 (FullGat (None, 512, 16, 48)       41568
_________________________________________________________________
dropout_1 (Dropout)          (None, 512, 16, 48)       0
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 256, 4, 56)        21560
_________________________________________________________________
p_re_lu_4 (PReLU)            (None, 256, 4, 56)        56
_________________________________________________________________
batch_normalization_4 (Batch (None, 256, 4, 56)        392
_________________________________________________________________
full_gated_conv2d_4 (FullGat (None, 256, 4, 56)        56560
_________________________________________________________________
dropout_2 (Dropout)          (None, 256, 4, 56)        0
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 256, 4, 64)        32320
_________________________________________________________________
p_re_lu_5 (PReLU)            (None, 256, 4, 64)        64
_________________________________________________________________
batch_normalization_5 (Batch (None, 256, 4, 64)        448
_________________________________________________________________
reshape (Reshape)            (None, 256, 256)          0
_________________________________________________________________
bidirectional (Bidirectional (None, 256, 256)          296448
_________________________________________________________________
dense (Dense)                (None, 256, 256)          65792
_________________________________________________________________
bidirectional_1 (Bidirection (None, 256, 256)          296448
_________________________________________________________________
dense_1 (Dense)              (None, 256, 332)          85324
=================================================================
Look at the final layer (dense_1): its second dimension is 256. These are the CTC time steps, so your text labels must be at most 256 characters; longer labels trigger the error. The number of time steps is determined by the input width (1024 here, downsampled by a factor of 4 through the convolutions), which is why the input size matters.
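For anyone hitting the same error, a quick sanity check along these lines can catch the mismatch before training (a hypothetical helper; model and texts stand in for your own Keras model and encoded labels):

def check_ctc_capacity(model, texts):
    # The model's output time dimension (dense_1's second axis, 256 in the
    # summary above) bounds the label length that CTC can align.
    time_steps = model.output_shape[1]
    max_label_len = max(len(t) for t in texts)
    assert max_label_len <= time_steps, (
        f"max label length {max_label_len} exceeds {time_steps} CTC time "
        f"steps; widen the input image or shorten the labels")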