The TFLite Android app throws Fatal Error: "Buffer Overflow Exception"
I recently tried to build an object-detection Android app using a TFLite model. I built my own custom model (a Keras model in HDF5 format) and successfully converted it to a custom TFLite model with the following command:
tflite_convert --keras_model_file=detect.h5 --output_file=detect.tflite --output_format=TFLITE --input_shapes=1,300,300,3 --input_arrays=normalized_input_image_tensor --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' --inference_type=QUANTIZED_UINT8 --mean_values=128 --std_dev_values=127 --change_concat_input_ranges=false --allow_custom_ops
I then added the associated metadata to this particular model using the following code:
import tensorflow as tf
from tflite_support import metadata as _metadata

populator = _metadata.MetadataPopulator.with_model_file("detect.tflite")
populator.load_associated_files(["labelmap.txt"])
populator.populate()
I then configured this model in TensorFlow's Android Example package, making adjustments to the Build.gradle file, DetectorActivity.java, and TFLiteObjectDetectionAPIModel.java respectively. I also made some UI changes according to what I needed and how. In addition, I had to change the 'numBytesPerChannel' value for the Float model from '4' to '3', because I was getting an error like this:
Cannot convert between a TensorFlowLite buffer with XYZ bytes and a ByteBuffer with ABC bytes
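The size mismatch in that message is plain arithmetic: the input buffer must hold batch × height × width × channels × bytesPerChannel bytes. A quantized uint8 model uses 1 byte per channel and a float model uses 4, so setting numBytesPerChannel to 3 makes neither case match. A minimal sketch (the 1×300×300×3 shape is taken from the tflite_convert command above):

```python
# Expected input-buffer size = batch * height * width * channels * bytes_per_channel
def input_buffer_bytes(batch, height, width, channels, bytes_per_channel):
    return batch * height * width * channels * bytes_per_channel

# Quantized uint8 model: 1 byte per channel
print(input_buffer_bytes(1, 300, 300, 3, 1))  # 270000
# Float32 model: 4 bytes per channel
print(input_buffer_bytes(1, 300, 300, 3, 4))  # 1080000
```

If the buffer the app allocates does not equal one of these two numbers exactly, the interpreter rejects it (the "Cannot convert" error) or the writes overflow it.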
The build succeeds, but the debugger throws a fatal "BufferOverflowException" at me:
11/13 14:57:02: Launching 'app' on Physical Device. Install successfully finished in 16 s 851 ms. $ adb shell am start -n "org.tensorflow.lite.examples.detection/org.tensorflow.lite.examples.detection.DetectorActivity" -a android.intent.action.MAIN -c android.intent.category.LAUNCHER -D Waiting for application to come online: org.tensorflow.lite.examples.detection.test | org.tensorflow.lite.examples.detection Waiting for application to come online: org.tensorflow.lite.examples.detection.test | org.tensorflow.lite.examples.detection Connected to process 22667 on device 'samsung-sm_m315f-RZ8N50B0M5K'. Waiting for application to come online: org.tensorflow.lite.examples.detection.test | org.tensorflow.lite.examples.detection Connecting to org.tensorflow.lite.examples.detection Connected to the target VM, address: 'localhost:46069', transport: 'socket' Capturing and displaying logcat messages from application. This behavior can be disabled in the "Logcat output" section of the "Debugger" settings page. I/mples.detectio: Late-enabling -Xcheck:jni E/mples.detectio: Unknown bits set in runtime_flags: 0x8000 D/ActivityThread: setConscryptValidator setConscryptValidator - put W/ActivityThread: Application org.tensorflow.lite.examples.detection is waiting for the debugger on port 8100... I/System.out: Sending WAIT chunk I/System.out: Debugger has connected waiting for debugger to settle... I/chatty: uid=10379(org.tensorflow.lite.examples.detection) identical 1 line I/System.out: waiting for debugger to settle... I/System.out: waiting for debugger to settle... I/System.out: waiting for debugger to settle... I/System.out: waiting for debugger to settle... I/System.out: waiting for debugger to settle... I/System.out: waiting for debugger to settle... I/chatty: uid=10379(org.tensorflow.lite.examples.detection) identical 2 lines I/System.out: waiting for debugger to settle... I/System.out: waiting for debugger to settle... 
I/System.out: debugger has settled (1478) I/mples.detectio: Waiting for a blocking GC ClassLinker I/mples.detectio: WaitForGcToComplete blocked ClassLinker on ClassLinker for 7.502ms D/tensorflow: CameraActivity: onCreate org.tensorflow.lite.examples.detection.DetectorActivity@4d5b875 D/PhoneWindow: forceLight changed to true [] from com.android.internal.policy.PhoneWindow.updateForceLightNavigationBar:4274 com.android.internal.policy.DecorView.updateColorViews:1547 com.android.internal.policy.PhoneWindow.dispatchWindowAttributesChanged:3252 android.view.Window.setFlags:1153 com.android.internal.policy.PhoneWindow.generateLayout:2474 I/MultiWindowDecorSupport: [INFO] isPopOver = false I/MultiWindowDecorSupport: updateCaptionType >> DecorView@59812d[], isFloating: false, isApplication: true, hasWindowDecorCaption: false, hasWindowControllerCallback: true D/MultiWindowDecorSupport: setCaptionType = 0, DecorView = DecorView@59812d[] W/mples.detectio: Accessing hidden method Landroid/view/View;->computeFitSystemWindows(Landroid/graphics/Rect;Landroid/graphics/Rect;)Z (greylist, reflection, allowed) W/mples.detectio: Accessing hidden method Landroid/view/ViewGroup;->makeOptionalFitsSystemWindows()V (greylist, reflection, allowed) I/CameraManagerGlobal: Connecting to camera service D/VendorTagDescriptor: addVendorDescriptor: vendor tag id 3854507339 added I/CameraManagerGlobal: Camera 0 facing CAMERA_FACING_BACK state now CAMERA_STATE_CLOSED for client com.snapchat.android API Level 1 I/CameraManagerGlobal: Camera 1 facing CAMERA_FACING_FRONT state now CAMERA_STATE_CLOSED for client com.dolby.dolby234 API Level 2 I/CameraManagerGlobal: Camera 2 facing CAMERA_FACING_BACK state now CAMERA_STATE_CLOSED for client com.whatsapp API Level 1 I/CameraManagerGlobal: Camera 20 facing CAMERA_FACING_BACK state now CAMERA_STATE_CLOSED for client android.system API Level 2 I/CameraManagerGlobal: Camera 23 facing CAMERA_FACING_BACK state now CAMERA_STATE_CLOSED for client 
android.system API Level 2 I/CameraManagerGlobal: Camera 3 facing CAMERA_FACING_FRONT state now CAMERA_STATE_CLOSED for client com.sec.android.app.camera API Level 2 I/CameraManagerGlobal: Camera 4 facing CAMERA_FACING_FRONT state now CAMERA_STATE_CLOSED for client vendor.client.pid<4503> API Level 2 I/CameraManagerGlobal: Camera 50 facing CAMERA_FACING_BACK state now CAMERA_STATE_CLOSED for client com.sec.android.app.camera API Level 2 I/CameraManagerGlobal: Camera 52 facing CAMERA_FACING_BACK state now CAMERA_STATE_CLOSED for client android.system API Level 2 I/CameraManagerGlobal: Camera 54 facing CAMERA_FACING_BACK state now CAMERA_STATE_CLOSED for client android.system API Level 2 I/tensorflow: CameraActivity: Camera API lv2?: false D/tensorflow: CameraActivity: onStart org.tensorflow.lite.examples.detection.DetectorActivity@4d5b875 D/tensorflow: CameraActivity: onResume org.tensorflow.lite.examples.detection.DetectorActivity@4d5b875 I/ViewRootImpl@a101c3c[DetectorActivity]: setView = com.android.internal.policy.DecorView@59812d TM=true MM=false I/ViewRootImpl@a101c3c[DetectorActivity]: Relayout returned: old=(0,0,1080,2340) new=(0,0,1080,2340) req=(1080,2340)0 dur=31 res=0x7 s={true 532883185664} ch=true D/OpenGLRenderer: createReliableSurface : 0x7c1211ecc0(0x7c12502000) D/OpenGLRenderer: makeCurrent EglSurface : 0x0 -> 0x0 I/mali_winsys: new_window_surface() [1080x2340] return: 0x3000 D/OpenGLRenderer: eglCreateWindowSurface : 0x7c120c3600 I/CameraManagerGlobal: Camera 0 facing CAMERA_FACING_BACK state now CAMERA_STATE_OPEN for client org.tensorflow.lite.examples.detection API Level 1 I/tensorflow: CameraConnectionFragment: Desired size: 640x480, min size: 480x480 I/tensorflow: CameraConnectionFragment: Valid preview sizes: [1920x1080, 1440x1080, 1280x720, 1088x1088, 1024x768, 960x720, 720x720, 720x480, 640x480] I/tensorflow: CameraConnectionFragment: Rejected preview sizes: [800x450, 640x360, 352x288, 320x240, 256x144, 176x144] CameraConnectionFragment: 
Exact size match found. W/Gralloc3: mapper 3.x is not supported I/gralloc: Arm Module v1.0 W/Gralloc3: allocator 3.x is not supported D/OpenGLRenderer: makeCurrent EglSurface : 0x0 -> 0x7c120c3600 I/Choreographer: Skipped 34 frames! The application may be doing too much work on its main thread. I/ViewRootImpl@a101c3c[DetectorActivity]: MSG_WINDOW_FOCUS_CHANGED 1 1 D/InputMethodManager: prepareNavigationBarInfo() DecorView@59812d[DetectorActivity] D/InputMethodManager: getNavigationBarColor() -855310 D/InputMethodManager: prepareNavigationBarInfo() DecorView@59812d[DetectorActivity] D/InputMethodManager: getNavigationBarColor() -855310 V/InputMethodManager: Starting input: tba=org.tensorflow.lite.examples.detection ic=null mNaviBarColor -855310 mIsGetNaviBarColorSuccess true , NavVisible : true , NavTrans : false D/InputMethodManager: startInputInner - Id : 0 I/InputMethodManager: startInputInner - mService.startInputOrWindowGainedFocus I/ViewRootImpl@a101c3c[DetectorActivity]: MSG_RESIZED: frame=(0,0,1080,2340) ci=(0,83,0,39) vi=(0,83,0,39) or=1 D/InputMethodManager: prepareNavigationBarInfo() DecorView@59812d[DetectorActivity] getNavigationBarColor() -855310 V/InputMethodManager: Starting input: tba=org.tensorflow.lite.examples.detection ic=null mNaviBarColor -855310 mIsGetNaviBarColorSuccess true , NavVisible : true , NavTrans : false D/InputMethodManager: startInputInner - Id : 0 I/CameraManagerGlobal: Camera 0 facing CAMERA_FACING_BACK state now CAMERA_STATE_ACTIVE for client org.tensorflow.lite.examples.detection API Level 1 W/TFLiteObjectDetectionAPIModelWithInterpreter: cow1 cow2 cow3 cow4 W/TFLiteObjectDetectionAPIModelWithInterpreter: cow5 cow6 I/tflite: Initialized TensorFlow Lite runtime. I/tensorflow: DetectorActivity: Camera orientation relative to screen canvas: 90 I/tensorflow: DetectorActivity: Initializing at size 640x480 I/tensorflow: DetectorActivity: Preparing image 1 for detection in bg thread. W/System: A resource failed to call close. 
I/tensorflow: DetectorActivity: Running detection on image 1
E/AndroidRuntime: FATAL EXCEPTION: inference
Process: org.tensorflow.lite.examples.detection, PID: 22667
java.nio.BufferOverflowException
at java.nio.Buffer.nextPutIndex(Buffer.java:542)
at java.nio.DirectByteBuffer.putFloat(DirectByteBuffer.java:809)
at org.tensorflow.lite.examples.detection.tflite.TFLiteObjectDetectionAPIModel.recognizeImage(TFLiteObjectDetectionAPIModel.java:187)
at org.tensorflow.lite.examples.detection.DetectorActivity.run(DetectorActivity.java:183)
at android.os.Handler.handleCallback(Handler.java:883)
at android.os.Handler.dispatchMessage(Handler.java:100)
at android.os.Looper.loop(Looper.java:237)
at android.os.HandlerThread.run(HandlerThread.java:67)
I/Process: Sending signal. PID: 22667 SIG: 9
Disconnected from the target VM, address: 'localhost:46069', transport: 'socket'
The stack trace points to these lines:
In TFLiteObjectDetectionAPIModel.java:
private static final float IMAGE_MEAN = 127.5f;
private static final float IMAGE_STD = 127.5f;
//...
@Override
protected void addPixelValue(int pixelValue) {
imgData.putFloat((((pixelValue >> 16) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
imgData.putFloat((((pixelValue >> 8) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
imgData.putFloat(((pixelValue & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
}
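Note that for a model converted with --inference_type=QUANTIZED_UINT8, the per-pixel writes should be single bytes, not 4-byte floats; three putFloat calls per pixel will overflow a uint8-sized input buffer. A hedged Python sketch of the two preprocessing paths (the mean/std values mirror IMAGE_MEAN/IMAGE_STD above; the function names are illustrative, not from the app):

```python
import struct

IMAGE_MEAN = 127.5
IMAGE_STD = 127.5

def add_pixel_float(buf, pixel_value):
    """Float model: three 4-byte floats per pixel (mirrors the putFloat calls)."""
    for shift in (16, 8, 0):  # R, G, B channels packed in an ARGB int
        channel = (pixel_value >> shift) & 0xFF
        buf += struct.pack('<f', (channel - IMAGE_MEAN) / IMAGE_STD)
    return buf

def add_pixel_quantized(buf, pixel_value):
    """Quantized uint8 model: three raw bytes per pixel, no normalization."""
    for shift in (16, 8, 0):
        buf.append((pixel_value >> shift) & 0xFF)
    return buf

float_buf = add_pixel_float(bytearray(), 0xFF8040)
quant_buf = add_pixel_quantized(bytearray(), 0xFF8040)
print(len(float_buf), len(quant_buf))  # 12 3
```

So a 300×300 image needs 12 bytes per pixel on the float path but only 3 on the quantized path; writing floats into a buffer sized for the quantized model is exactly the kind of mismatch that triggers BufferOverflowException.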
In DetectorActivity.java:
@Override
public void run() {
LOGGER.i("Running detection on image " + currTimestamp);
final long startTime = SystemClock.uptimeMillis();
final List<Detector.Recognition> results = detector.recognizeImage(croppedBitmap);
lastProcessingTimeMs = SystemClock.uptimeMillis() - startTime;
Please let me know if I have missed any steps or done anything wrong.
P.S. - Before this I used a pre-trained model, and the app worked fine except that it displayed all the bounding boxes at once, with negligible variation across detections. I am currently using a custom-trained model that looks like this (via Netron):
TFLite Model
I have run into this error before. It usually happens when one of two things is wrong in the code that feeds data into the model:
The image size is incorrect (i.e., the buffer you are feeding in is larger than the buffer allocated as the model's input buffer).
The data type you are feeding in is incorrect (i.e., the model expects uint8 but you are supplying a buffer full of float values).
If you converted the model via the tflite_convert tool, the input type can sometimes change between float and int (depending on your parameters).
Is there any way for you to check whether either of these is happening?
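One way to check both conditions is to read the model's expected input shape and dtype in Python and compare them with what the app writes. In a real script you would obtain the details from the TFLite interpreter (`tf.lite.Interpreter(model_path=...).get_input_details()[0]`); the helper below is a plain-arithmetic sketch, and the sample dict stands in for that result with hypothetical values:

```python
# Bytes per element for the dtypes tflite_convert commonly produces.
DTYPE_BYTES = {'uint8': 1, 'int8': 1, 'float32': 4}

def required_input_bytes(details):
    """Bytes the input tensor expects: product of the shape times dtype size."""
    n = 1
    for dim in details['shape']:
        n *= dim
    return n * DTYPE_BYTES[details['dtype']]

# Sample stand-in for interpreter.get_input_details()[0]
details = {'shape': [1, 300, 300, 3], 'dtype': 'uint8'}
print(required_input_bytes(details))  # 270000
```

If the ByteBuffer the app allocates (or the number of bytes it actually writes per frame) differs from this number, you have found the mismatch.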
I was able to resolve the error. Basically, the tflite model I was using had a very large input size. This was because I used a custom model (meaning the fine-tuning was arbitrary), so the model was not compatible with Android's TFLiteObjectDetectionAPI.
Another thing here is that I accidentally used a mobilenet-v2 model as the reference model to train my own, whereas the TFLiteObjectDetectionAPI uses ssd-mobilenet-v1 as its default model. I don't think this had anything to do with my error, but it could raise compatibility exceptions that lead to some ambiguous errors.
So I used this link to train my model with custom parameters, and I had to make some changes in the pipeline.config file. Other than that, my trained model apparently works fine and gives me decent results with an accuracy of 70%, which is sufficient for my current purpose.
Thanks to @Saeid, @T.K, and @Alex for the help. I appreciate how the TensorFlow training workflow is structured.
I have now resolved this issue; in hindsight it was a small mistake. Let me know if there is anything else!
Cheers!