MRI Segmentation Error related to input shape dimension (Input 0 of layer conv2d is incompatible with the layer)
I am trying to perform MRI segmentation with a deep-learning model, but I get an error related to the image dimensions and I don't know why.
import numpy as np
import nibabel as nib
import matplotlib.pyplot as plt
%matplotlib inline

img = nib.load('/content/drive/My Drive/Programa2/P1_FL_final.nii.gz')
img_np = img.get_fdata()
print(type(img_np), img_np.shape)

# Plot one slice of the image
img_slice = img_np[:, :, 20]
plt.imshow(img_slice, cmap='gray')

# Make prediction
img_analised = img_np
# img_analised = img_np[:, :, :]  # I was trying to change the dimensions
print(img_analised.shape)  # Image shape: (480, 512, 30)
newmodel.predict(img_analised)
Error message:
ValueError: Input 0 of layer conv2d is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [32, 512, 30]
The problem was the input image shape: the model expects 4 different MRI modalities as input channels, while I was using fewer. Once I changed that, it worked.
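For reference, the `min_ndim=4` in the error means `Conv2D` expects a 4D tensor of shape `(batch, height, width, channels)`, while `img_np` is a 3D volume of shape `(height, width, slices)`. A minimal NumPy sketch of one common fix, treating each axial slice as a separate sample with one channel (the array here is a dummy stand-in for `img_np`; if the model was actually trained on 4 stacked modalities, the channel axis would instead need those 4 modalities):

```python
import numpy as np

# Dummy volume with the shape from the question: (height, width, slices)
volume = np.zeros((480, 512, 30))

# Move the slice axis to the front so each slice becomes one sample,
# then append a trailing channel axis: (slices, height, width, 1)
batch = np.expand_dims(np.transpose(volume, (2, 0, 1)), axis=-1)
print(batch.shape)  # (30, 480, 512, 1)
```

A `batch` shaped this way satisfies the `ndim=4` requirement; whether a single channel is semantically correct depends on how `newmodel` was trained.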