Unsupported Image Format when trying to load an image from a response body
Starting from the gotham-rs routing example, I am trying to build a REST API that extracts an image from the request body so it can be analyzed. I upload an image to the server with the following cURL command:
curl -i -X POST -F "image=@/Users/DanielBank/Desktop/grace_hopper.jpg" http://127.0.0.1:7878/
When I try to load the image from memory, I get an Unsupported Image Format error:
thread 'gotham-worker-0' panicked at 'called `Result::unwrap()` on an `Err` value: UnsupportedError("Unsupported image format")', src/libcore/result.rs:999:5
/src/main.rs:
extern crate futures;
extern crate gotham;
extern crate hyper;
extern crate mime;
extern crate url;

use futures::{future, Future, Stream};
use hyper::{Body, StatusCode};

use gotham::handler::{HandlerFuture, IntoHandlerError};
use gotham::helpers::http::response::create_response;
use gotham::router::builder::{build_simple_router, DefineSingleRoute, DrawRoutes};
use gotham::router::Router;
use gotham::state::{FromState, State};

use tract_core::ndarray;
use tract_core::prelude::*;

/// Extracts the image from a POST request and responds with a prediction tuple (probability, class)
fn prediction_handler(mut state: State) -> Box<HandlerFuture> {
    let f = Body::take_from(&mut state)
        .concat2()
        .then(|full_body| match full_body {
            Ok(valid_body) => {
                // load the model
                let mut model = tract_tensorflow::tensorflow()
                    .model_for_path("mobilenet_v2_1.4_224_frozen.pb")
                    .unwrap();

                // specify input type and shape
                model
                    .set_input_fact(
                        0,
                        TensorFact::dt_shape(f32::datum_type(), tvec!(1, 224, 224, 3)),
                    )
                    .unwrap();

                // optimize the model and get an execution plan
                let model = model.into_optimized().unwrap();
                let plan = SimplePlan::new(&model).unwrap();

                let body_content = valid_body.into_bytes();

                // extract the image from the body as input
                let image = image::load_from_memory(body_content.as_ref())
                    .unwrap()
                    .to_rgb();
                let resized =
                    image::imageops::resize(&image, 224, 224, ::image::FilterType::Triangle);
                let image: Tensor =
                    ndarray::Array4::from_shape_fn((1, 224, 224, 3), |(_, y, x, c)| {
                        resized[(x as _, y as _)][c] as f32 / 255.0
                    })
                    .into();

                // run the plan on the input
                let result = plan.run(tvec!(image)).unwrap();

                // find and display the max value with its index
                let best = result[0]
                    .to_array_view::<f32>()
                    .unwrap()
                    .iter()
                    .cloned()
                    .zip(1..)
                    .max_by(|a, b| a.0.partial_cmp(&b.0).unwrap());

                // respond with the prediction tuple
                let res = create_response(
                    &state,
                    StatusCode::OK,
                    mime::TEXT_PLAIN,
                    format!("{:?}", best.unwrap()),
                );
                future::ok((state, res))
            }
            Err(e) => future::err((state, e.into_handler_error())),
        });

    Box::new(f)
}

/// Create a `Router`
fn router() -> Router {
    build_simple_router(|route| {
        route.post("/").to(prediction_handler);
    })
}

/// Start a server and use a `Router` to dispatch requests
pub fn main() {
    let addr = "127.0.0.1:7878";
    println!("Listening for requests at http://{}", addr);
    gotham::start(addr, router())
}
Cargo.toml:
[package]
name = "offline-ml"
description = "Offline ML but with a REST API that's not Offline"
version = "0.1.0"
authors = ["Daniel Bank"]
edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
futures = "0.1"
gotham = "0.4.0"
hyper = "0.12"
image = "0.22.2"
mime = "0.3"
tract-core = "0.4.2"
tract-tensorflow = "0.4.2"
url = "2.1.0"
The problem is that the request body does not contain just the image: cURL's -F option sends a multipart/form-data payload, so the part headers (Content-Disposition, Content-Type) and boundary markers sit alongside the image bytes. You need to parse it somehow, according to RFC 7578.
Surprisingly, I could not find a decent ready-made MIME multipart/form-data parser crate. The code below works, but it resorts to extremely crude regex splitting instead of proper parsing. It also leaves out the tensor model part:
extern crate image;
extern crate mime;

use futures::{future, Future, Stream};
use gotham::handler::{HandlerFuture, IntoHandlerError};
use gotham::helpers::http::response::create_response;
use gotham::router::builder::{build_simple_router, DefineSingleRoute, DrawRoutes};
use gotham::router::Router;
use gotham::state::{FromState, State};
use hyper::{Body, StatusCode};
use regex::bytes::Regex;

fn prediction_handler(mut state: State) -> Box<HandlerFuture> {
    let f = Body::take_from(&mut state)
        .concat2()
        .then(|full_body| match full_body {
            Ok(valid_body) => {
                let body_content = valid_body.into_bytes();
                let re = Regex::new(r"\r\n\r\n").unwrap();
                let contents: Vec<_> = re.split(body_content.as_ref()).collect();
                let image = image::load_from_memory(contents[1]).unwrap().to_rgb();
                let res = create_response(
                    &state,
                    StatusCode::OK,
                    mime::TEXT_PLAIN,
                    format!("{:?}\r\n", image.dimensions()),
                );
                future::ok((state, res))
            }
            Err(e) => future::err((state, e.into_handler_error())),
        });

    Box::new(f)
}
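For reference, the same payload can also be pulled out without a regex by walking the multipart structure directly: take the dash-boundary from the first line of the body, skip past the blank line that ends the part headers, and stop at the next boundary delimiter. This is only a minimal sketch for the single-file body produced by the cURL command above, not a full RFC 7578 parser, and extract_first_part (with its inner find helper) is a hypothetical name, not part of any crate:

/// Minimal sketch: extract the first part's payload from a multipart/form-data
/// body, assuming the dash-boundary is the very first line of the body
/// (hypothetical helper, not a complete RFC 7578 parser).
fn extract_first_part(body: &[u8]) -> Option<&[u8]> {
    // Find a byte pattern in `haystack`, starting the search at `from`.
    fn find(haystack: &[u8], needle: &[u8], from: usize) -> Option<usize> {
        haystack
            .get(from..)?
            .windows(needle.len())
            .position(|window| window == needle)
            .map(|i| i + from)
    }

    // The first line is the dash-boundary, e.g. "------------------------abc123".
    let boundary_end = find(body, b"\r\n", 0)?;
    let boundary = &body[..boundary_end];

    // The payload starts after the blank line that terminates the part headers
    // (Content-Disposition, Content-Type, ...).
    let headers_end = find(body, b"\r\n\r\n", boundary_end)?;
    let payload_start = headers_end + 4;

    // It ends right before the CRLF that precedes the next boundary delimiter,
    // which for a single-file upload is the closing "--boundary--" line.
    let mut delimiter = b"\r\n".to_vec();
    delimiter.extend_from_slice(boundary);
    let payload_end = find(body, &delimiter, payload_start)?;

    Some(&body[payload_start..payload_end])
}

In the handler above, contents[1] would then be replaced by extract_first_part(body_content.as_ref()).expect("malformed multipart body"). Unlike the regex split, this also drops the trailing \r\n--boundary-- bytes; the regex version only gets away with keeping them because the JPEG decoder appears to ignore trailing data after the end-of-image marker.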