How to convert the magnetic field vector from sensor coordinate space to camera coordinate space?
I want to take the magnetic field vector (i.e. x, y, z in microtesla) that we collect from the TYPE_MAGNETIC_FIELD position sensor and put it into the same coordinate system as the Frame's camera, which we get from ARCore.
The magnetic field vector is in the sensor coordinate system; we need it in the camera coordinate system. I believe I can do this with the following two APIs, which are available on every camera Frame (the NDK versions are referenced because I prefer that documentation):
- getAndroidSensorPose() - Gets the pose of the Android sensor coordinate frame in world coordinate space for this frame. I read this as "SensorToWorldPose."
- getPose() - Gets the pose of the physical camera in world space for the latest frame. I read this as "CameraToWorldPose."
Below, I compute the magnetic vector in the camera coordinate system (magneticVectorInCamera). When I test it (by passing a weak magnet around the phone and comparing the result against iOS's CLHeading raw x, y, z values), I don't get the values I expect. Any suggestions?
scene.addOnUpdateListener(frameTime -> { processFrame(this.sceneView.getArFrame()); });
public void processFrame(Frame frame) {
    if (frame.getCamera().getTrackingState() != TrackingState.TRACKING) {
        return;
    }
    // Get the magnetic vector in sensor coordinates that we stored in the onSensorChanged() callback
    float[] magneticVectorInSensor = {x, y, z};
    // Get sensor to world
    Pose sensorToWorldPose = frame.getAndroidSensorPose();
    // Get world to camera
    Pose cameraToWorldPose = frame.getCamera().getPose();
    Pose worldToCameraPose = cameraToWorldPose.inverse();
    // Get sensor to camera
    Pose sensorToCameraPose = sensorToWorldPose.compose(worldToCameraPose);
    // Get the magnetic vector in camera coordinate space
    float[] magneticVectorInCamera = sensorToCameraPose.rotateVector(magneticVectorInSensor);
}
@Override
public void onSensorChanged(SensorEvent sensorEvent) {
    int sensorType = sensorEvent.sensor.getType();
    switch (sensorType) {
        case Sensor.TYPE_MAGNETIC_FIELD:
            mMagnetometerData = sensorEvent.values.clone();
            break;
        default:
            return;
    }
    x = mMagnetometerData[0];
    y = mMagnetometerData[1];
    z = mMagnetometerData[2];
}
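For context, here is a minimal sketch of how the magnetometer listener might be registered (the registration code is not shown in the original post; the SENSOR_DELAY_GAME rate and placement in onResume() are assumptions):

// Hypothetical registration, e.g. in onResume(); assumes this class implements SensorEventListener.
SensorManager sensorManager = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
Sensor magnetometer = sensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD);
if (magnetometer != null) {
    sensorManager.registerListener(this, magnetometer, SensorManager.SENSOR_DELAY_GAME);
}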
Here are sample log lines I get from this:
V/processFrame: magneticVectorInSensor: [-173.21014, -138.63983, 54.873657]
V/processFrame: sensorToWorldPose: t:[x:-1.010, y:-0.032, z:-0.651], q:[x:-0.28, y:-0.62, z:-0.21, w:0.71]
V/processFrame: cameraToWorldPose: t:[x:-0.941, y:0.034, z:-0.610], q:[x:-0.23, y:0.62, z:0.66, w:-0.35]
V/processFrame: worldToCameraPose: t:[x:-0.509, y:0.762, z:-0.647], q:[x:0.23, y:-0.62, z:-0.66, w:-0.35]
V/processFrame: sensorToCameraPose: t:[x:-0.114, y:0.105, z:-1.312], q:[x:0.54, y:-0.46, z:-0.08, w:-0.70]
V/processFrame: magneticVectorInCamera: [15.159668, 56.381603, 220.96408]
One thing that confuses me is why my sensorToCamera pose changes as I move the phone:
sensorToCameraPose: t:[x:0.068, y:-0.014, z:0.083], q:[x:0.14, y:-0.65, z:-0.25, w:-0.70]
sensorToCameraPose: t:[x:0.071, y:-0.010, z:0.077], q:[x:0.11, y:-0.66, z:-0.23, w:-0.70]
sensorToCameraPose: t:[x:0.075, y:-0.007, z:0.070], q:[x:0.08, y:-0.68, z:-0.20, w:-0.70]
sensorToCameraPose: t:[x:0.080, y:-0.007, z:0.061], q:[x:0.05, y:-0.69, z:-0.18, w:-0.70]
sensorToCameraPose: t:[x:0.084, y:-0.008, z:0.052], q:[x:0.01, y:-0.69, z:-0.17, w:-0.70]
sensorToCameraPose: t:[x:0.091, y:-0.011, z:0.045], q:[x:-0.03, y:-0.69, z:-0.17, w:-0.70]
sensorToCameraPose: t:[x:0.094, y:-0.017, z:0.037], q:[x:-0.09, y:-0.69, z:-0.17, w:-0.70]
sensorToCameraPose: t:[x:0.098, y:-0.026, z:0.027], q:[x:-0.16, y:-0.67, z:-0.17, w:-0.70]
sensorToCameraPose: t:[x:0.100, y:-0.037, z:0.020], q:[x:-0.23, y:-0.65, z:-0.19, w:-0.70]
sensorToCameraPose: t:[x:0.098, y:-0.046, z:0.012], q:[x:-0.30, y:-0.62, z:-0.20, w:-0.70]
sensorToCameraPose: t:[x:0.096, y:-0.055, z:0.005], q:[x:-0.35, y:-0.59, z:-0.19, w:-0.70]
sensorToCameraPose: t:[x:0.092, y:-0.061, z:-0.003], q:[x:-0.41, y:-0.56, z:-0.18, w:-0.70]
sensorToCameraPose: t:[x:0.086, y:-0.066, z:-0.011], q:[x:-0.45, y:-0.52, z:-0.17, w:-0.70]
sensorToCameraPose: t:[x:0.080, y:-0.069, z:-0.018], q:[x:-0.49, y:-0.49, z:-0.16, w:-0.70]
sensorToCameraPose: t:[x:0.073, y:-0.071, z:-0.025], q:[x:-0.53, y:-0.45, z:-0.15, w:-0.70]
sensorToCameraPose: t:[x:0.065, y:-0.072, z:-0.031], q:[x:-0.56, y:-0.42, z:-0.13, w:-0.70]
sensorToCameraPose: t:[x:0.059, y:-0.072, z:-0.038], q:[x:-0.59, y:-0.38, z:-0.13, w:-0.70]
sensorToCameraPose: t:[x:0.053, y:-0.071, z:-0.042], q:[x:-0.61, y:-0.35, z:-0.12, w:-0.70]
sensorToCameraPose: t:[x:0.047, y:-0.069, z:-0.046], q:[x:-0.63, y:-0.32, z:-0.11, w:-0.70]
sensorToCameraPose: t:[x:0.041, y:-0.067, z:-0.048], q:[x:-0.64, y:-0.28, z:-0.10, w:-0.70]
sensorToCameraPose: t:[x:0.037, y:-0.064, z:-0.050], q:[x:-0.65, y:-0.26, z:-0.10, w:-0.70]
sensorToCameraPose: t:[x:0.032, y:-0.060, z:-0.052], q:[x:-0.67, y:-0.23, z:-0.09, w:-0.70]
sensorToCameraPose: t:[x:0.027, y:-0.057, z:-0.054], q:[x:-0.68, y:-0.20, z:-0.08, w:-0.70]
Note - there are a couple of other questions about converting the magnetic field vector into global coordinate space (i.e. this and this), but I couldn't find anything for camera coordinate space.
There were two problems with my code above.
First, I was using compose incorrectly. To transform by A and then by B, you need to do B.compose(A). With that fix, I started getting a consistent sensorToCameraPose, as sketched below.
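A minimal sketch of that ordering rule, using the same ARCore calls as the corrected code further down:

// B.compose(A) applies A first, then B: here A is sensor-to-world and B is world-to-camera,
// so the result maps sensor coordinates into camera coordinates.
Pose sensorToWorldPose = frame.getAndroidSensorPose();
Pose worldToCameraPose = frame.getCamera().getPose().inverse();
Pose sensorToCameraPose = worldToCameraPose.compose(sensorToWorldPose);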
Second, after that fix, x and y were rotated 90° from what I expected. From u/inio on Reddit:
So usually for phone form-factor devices there will be a 90° rotation between the camera coordinate system (which is defined to have +x point in the direction of the horizontal axis of the physical camera image, typically the long axis of the device) and the android sensor coordinate system (which has +y pointing away from the android navigation buttons, and +x thus along the short axis of the device). The difference you describe is an 88.8° rotation. Maybe you want the virtual camera pose? Source
I tested with getDisplayOrientedPose(). With it, I got what I expected while in portrait mode. However, if I flipped to landscape, the coordinate system changed and I was off by 90° again. So I applied the rotation myself:
public void processFrame(Frame frame) {
    if (frame.getCamera().getTrackingState() != TrackingState.TRACKING) {
        return;
    }
    // Get the magnetic vector in sensor coordinates that we stored in the onSensorChanged() callback
    float[] magneticVectorInSensor = {x, y, z};
    // Get sensor to world
    Pose sensorToWorldPose = frame.getAndroidSensorPose();
    // Get camera to world
    Pose cameraToWorldPose = frame.getCamera().getPose();
    // +90° rotation about Z
    // https://github.com/google-ar/arcore-android-sdk/issues/535#issuecomment-418845833
    Pose CAMERA_POSE_FIX = Pose.makeRotation(0, 0, (float) Math.sqrt(0.5f), (float) Math.sqrt(0.5f));
    Pose rotatedCameraToWorldPose = cameraToWorldPose.compose(CAMERA_POSE_FIX);
    // Get world to camera
    Pose worldToCameraPose = rotatedCameraToWorldPose.inverse();
    // Get sensor to camera
    Pose sensorToCameraPose = worldToCameraPose.compose(sensorToWorldPose);
    // Get the magnetic vector in camera coordinate space
    float[] magneticVectorInCamera = sensorToCameraPose.rotateVector(magneticVectorInSensor);
}
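One way to sanity-check the fix (my own sketch, not part of the original answer) is to log the rotation part of sensorToCameraPose at the end of processFrame(). Since the magnetometer and camera are rigidly mounted in the device, the quaternion should now stay essentially constant while the phone moves:

// The sensor-to-camera transform is a fixed property of the hardware, so with the
// corrected compose order its rotation should no longer drift as the phone moves.
Log.v("processFrame", String.format("sensorToCameraPose q: [%.2f, %.2f, %.2f, %.2f]",
        sensorToCameraPose.qx(), sensorToCameraPose.qy(),
        sensorToCameraPose.qz(), sensorToCameraPose.qw()));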