Use 8th Wall XR image detection size to scale target object in Unity3D
Is it possible to adjust the size of the placed model based on the real-world size of the detected image? I have a painting that I am augmenting with an AR model: once the image is detected, the model replaces the painting and should cover it exactly. The painting is 45 cm wide, and that width is supplied to the XRImageDetectionController script. When I run my app with the target image visible at its real size (45 cm x 28 cm), everything works as expected. Ideally, I would like to demo this augmented painting in different settings where the real-world image may be printed at a different size (with the same aspect ratio). My device is an ARCore-compatible Android phone.
I recently started using 8th Wall, but I have not created a project of my own yet (I have only played with the demo projects and read the source code), so I am not 100% sure this will work, but here goes:
If you look at the 8th Wall XRDataTypes.cs file, you will find the data types XRDetectionTexture, XRDetectionImage and XRDetectedImageTarget. Each of these data types carries one or more size fields; a small sketch after the three listings below shows how they can be combined.
XRDetectionTexture:
/**
* A unity Texture2D that can be used as a source for image-target detection.
*/
[Serializable] public struct XRDetectionTexture {
[...]
/**
* The expected physical width of the image-target, in meters.
*/
public float widthInMeters;
[...]
}
XRDetectionImage:
/**
* Source image data for a image-target to detect. This can either be constructed manually, or
* from a Unity Texture2d.
*/
public struct XRDetectionImage {
/**
* The width of the source binary image-target, in pixels.
*/
public readonly int widthInPixels;
/**
* The height of the source binary image-target, in pixels.
*/
public readonly int heightInPixels;
/**
* The expected physical width of the image-target, in meters.
*/
public readonly float targetWidthInMeters;
[...]
}
XRDetectedImageTarget:
/**
* An image-target that was detected by an AR Engine.
*/
public struct XRDetectedImageTarget {
[...]
/**
* Width of the detected image-target, in unity units.
*/
public readonly float width;
/**
* Height of the detected image-target, in unity units.
*/
public readonly float height;
[...]
}
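To make the relationship between these fields concrete, here is a tiny, hedged helper that compares the physical width the target was configured with against the width that was actually detected. Only the field names come from the excerpts above; everything else, including the helper's name, is illustrative:

public static class DetectionSizeUtil {
  // Illustrative helper, not part of the 8th Wall API. Assumes 1 Unity unit = 1 metre.
  // Returns how much larger (or smaller) the target appears in the real world than
  // the physical width it was configured with (1.0 means "exactly the configured size").
  public static float DetectedToConfiguredRatio(
      XRDetectedImageTarget detected, XRDetectionImage source) {
    return detected.width / source.targetWidthInMeters;
  }
}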
I have not done this myself, so I cannot give you a working code example, but the 8th Wall documentation on the basics of image detection seems quite good and does show that an instance of XRDetectedImageTarget is passed into the callback method specified for the detected model (see the image-detection figure in the 8th Wall documentation, 2019-01-18).
So if you know the desired ratio of your model to the image (e.g. "the width of the model should be half the width of the detected image"), then in the callback you should be able to do something like the following:
// Calculating the size ratio may be more involved than this; treat it as pseudocode.
// xrDetectedImageTarget.width is in Unity units, while targetWidthInMeters is the
// physical width the image target was configured with.
var sizeRatio = xrDetectedImageTarget.width / xrDetectionImage.targetWidthInMeters;
var placedModel = Instantiate(prefabModel, newPosition, newRotation, parentTransform);
placedModel.transform.localScale = this.transform.localScale * sizeRatio;
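For a bit more context, here is an untested sketch of what such a callback could look like as a complete MonoBehaviour. Only the width field of XRDetectedImageTarget is taken from the excerpts above; the method name OnImageDetected, the way it gets registered with the image detection controller, and the placement logic are assumptions you would need to adapt to your own setup:

using UnityEngine;

public class PaintingOverlay : MonoBehaviour {
  // The model that should cover the painting.
  public GameObject prefabModel;

  // The physical width the image target was configured with (0.45f for a 45 cm painting).
  public float configuredWidthInMeters = 0.45f;

  // Hypothetical callback; wire it up however your XRImageDetectionController
  // setup expects image-detected callbacks to be registered.
  public void OnImageDetected(XRDetectedImageTarget detectedTarget) {
    // Ratio between the size the painting actually appears at and the size it was
    // configured with; 1.0 means the real print is exactly configuredWidthInMeters wide.
    float sizeRatio = detectedTarget.width / configuredWidthInMeters;

    // Placement (position/rotation) is kept trivial here; only the scaling matters
    // for this question.
    var placedModel = Instantiate(prefabModel, transform.position, transform.rotation, transform);
    placedModel.transform.localScale *= sizeRatio;
  }
}

Since the prefab was authored to cover the painting at its 45 cm width, multiplying its existing localScale by the ratio keeps the aspect ratio intact while matching whatever size the print actually is.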
Hope that works/helps!