After canvas zoom with pivot point, the x and y coordinates are wrong
I'm trying to implement zooming on a canvas that should be centered on a pivot point. Zooming works fine, but afterwards the user should be able to select elements on the canvas. The problem is that my translation values don't seem to be correct, because they have a different offset than when I don't zoom to a pivot point (zooming and dragging without a pivot point works fine).
I used some code from this example.
The relevant code is:
class DragView extends View {

    private static float MIN_ZOOM = 0.2f;
    private static float MAX_ZOOM = 2f;

    // These constants specify the mode that we're in
    private static int NONE = 0;
    private static int DRAG = 1;
    private static int ZOOM = 2;
    private int mode = NONE;

    public ArrayList<ProcessElement> elements;

    // Visualization
    private boolean checkDisplay = false;
    private float displayWidth;
    private float displayHeight;

    // These two variables keep track of the X and Y coordinates of the finger when it
    // first touches the screen
    private float startX = 0f;
    private float startY = 0f;

    // These two variables keep track of the amount we need to translate the canvas along
    // the X and the Y coordinate; also the offset from the initial 0,0
    private float translateX = 0f;
    private float translateY = 0f;

    private float lastGestureX = 0;
    private float lastGestureY = 0;
    private float scaleFactor = 1.f;
    private ScaleGestureDetector detector;

    ...

    private void sharedConstructor() {
        elements = new ArrayList<ProcessElement>();
        flowElements = new ArrayList<ProcessFlow>();
        detector = new ScaleGestureDetector(getContext(), new ScaleListener());
    }

    /**
     * Checked once to get the measured screen height/width.
     * @param hasWindowFocus
     */
    @Override
    public void onWindowFocusChanged(boolean hasWindowFocus) {
        super.onWindowFocusChanged(hasWindowFocus);
        if (!checkDisplay) {
            displayHeight = getMeasuredHeight();
            displayWidth = getMeasuredWidth();
            checkDisplay = true;
        }
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        ProcessBaseElement lastElement = null;

        switch (event.getAction() & MotionEvent.ACTION_MASK) {
            case MotionEvent.ACTION_DOWN:
                mode = DRAG;

                // Check if an element has been touched.
                // We need the absolute position, which is why we take the offset into consideration.
                touchedElement = isElementTouched((translateX * -1 + event.getX()) / scaleFactor,
                        (translateY * -1 + event.getY()) / scaleFactor);

                if (touchedElement == null) {
                    // We assign the current X and Y coordinates of the finger to startX and startY,
                    // minus the previously translated amount for each coordinate. This works even
                    // the first time we translate, because the initial value of both variables is zero.
                    startX = event.getX() - translateX;
                    startY = event.getY() - translateY;
                } else {
                    // An element has been touched -> no need to take the offset into consideration,
                    // because no dragging of the canvas is possible
                    startX = event.getX();
                    startY = event.getY();
                }
                break;
            case MotionEvent.ACTION_MOVE:
                if (mode != ZOOM) {
                    if (touchedElement == null) {
                        translateX = event.getX() - startX;
                        translateY = event.getY() - startY;
                    } else {
                        startX = event.getX();
                        startY = event.getY();
                    }
                }
                if (detector.isInProgress()) {
                    lastGestureX = detector.getFocusX();
                    lastGestureY = detector.getFocusY();
                }
                break;
            case MotionEvent.ACTION_UP:
                mode = NONE;
                break;
            case MotionEvent.ACTION_POINTER_DOWN:
                mode = ZOOM;
                break;
            case MotionEvent.ACTION_POINTER_UP:
                break;
        }
        detector.onTouchEvent(event);
        invalidate();
        return true;
    }

    private ProcessBaseElement isElementTouched(float x, float y) {
        for (int i = elements.size() - 1; i >= 0; i--) {
            if (elements.get(i).isTouched(x, y))
                return elements.get(i);
        }
        return null;
    }

    @Override
    public void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        canvas.save();

        if (detector.isInProgress()) {
            canvas.scale(scaleFactor, scaleFactor, detector.getFocusX(), detector.getFocusY());
        } else {
            canvas.scale(scaleFactor, scaleFactor, lastGestureX, lastGestureY); // zoom
        }

        // We need to divide by the scale factor here, otherwise we end up with excessive panning
        // based on our zoom level, because the translation amount also gets scaled according to
        // how much we've zoomed into the canvas.
        canvas.translate(translateX / scaleFactor, translateY / scaleFactor);

        drawContent(canvas);
        canvas.restore();
    }

    /**
     * Scales the canvas.
     */
    private class ScaleListener extends ScaleGestureDetector.SimpleOnScaleGestureListener {
        @Override
        public boolean onScale(ScaleGestureDetector detector) {
            scaleFactor *= detector.getScaleFactor();
            scaleFactor = Math.max(MIN_ZOOM, Math.min(scaleFactor, MAX_ZOOM));
            return true;
        }
    }
}
Elements are saved with their absolute position on the canvas (dragging is taken into account). I suspect that I'm not taking the new offset from the pivot point into consideration for translateX and translateY, but I can't figure out where and how I should do that.
Any help would be appreciated.
OK, so you basically want to find out what position a given screen X/Y coordinate corresponds to after the view has been scaled around some pivot point {Px, Py}.
So, let's try to break it down.
For the sake of argument, let's assume Px & Py = 0, and s = 2. This means the view is zoomed by a factor of 2 around the top-left corner of the view.
In this case, the screen coordinate {0, 0} corresponds to {0, 0} in the view, because that point is the only one that hasn't changed. Generally speaking, if the screen coordinate equals the pivot point, there is no change.
What happens if the user clicks on some other point, let's say {2, 3}? In that case, the position that was once {2, 3} has now moved by a factor of 2 away from the pivot point (which is {0, 0}), and so the corresponding position is {4, 6}.
This is all easy when the pivot point is {0, 0}, but what happens when it's not?
Well, let's look at another case - the pivot point is now at the bottom-right corner of the view (width = w, height = h - {w, h}). Again, if the user clicks at the same position, then the corresponding position is also {w, h}, but let's say the user clicks somewhere else, for example {w - 2, h - 3}. The same logic applies here: the translated position is {w - 4, h - 6}.
To sum things up, what we are trying to do is convert a screen coordinate to a translated coordinate. We need to perform the same operation on the X/Y coordinate we receive as the one we perform on every pixel of the zoomed view.
Step 1 - we want to translate the X/Y position with respect to the pivot point:
X = X - Px
Y = Y - Py
Step 2 - then we scale X and Y:
X = X * s
Y = Y * s
Step 3 - then we translate back:
X = X + Px
Y = Y + Py
If we apply this to the last example I gave (I will demonstrate for X only):
Original value: X = w - 2, Px = w
Step 1: X <-- X - Px = w - 2 - w = -2
Step 2: X <-- X * s = -2 * 2 = -4
Step 3: X <-- X + Px = -4 + w = w - 4
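The three steps above, plus their inverse (which maps a screen touch back to pre-zoom coordinates, the direction needed for hit-testing), can be sketched in plain Java. This is a minimal standalone sketch of the math, not part of the Android API; the class and method names (PivotZoom, contentToScreen, screenToContent) are made up for illustration:

```java
/** Sketch of the pivot-zoom mapping described in the steps above. */
final class PivotZoom {

    /** Maps a pre-zoom (content) coordinate to its on-screen position
     *  after scaling by s around the pivot {px, py}. */
    static float[] contentToScreen(float x, float y, float px, float py, float s) {
        float tx = x - px;          // Step 1: translate so the pivot is the origin
        float ty = y - py;
        tx *= s;                    // Step 2: scale
        ty *= s;
        return new float[] { tx + px, ty + py };  // Step 3: translate back
    }

    /** Inverse mapping: undoes the three steps in reverse order,
     *  turning a screen touch into a pre-zoom coordinate. */
    static float[] screenToContent(float x, float y, float px, float py, float s) {
        return new float[] { (x - px) / s + px, (y - py) / s + py };
    }

    public static void main(String[] args) {
        float w = 100f;
        // The worked example: X = w - 2 with pivot Px = w and s = 2 maps to w - 4
        float[] p = contentToScreen(w - 2, 0, w, 0, 2f);
        System.out.println(p[0]); // prints 96.0 (= w - 4)
    }
}
```

For selecting elements after a pivot zoom, it is the inverse direction (screenToContent) that matters: apply it to the touch coordinates before comparing them against element positions stored in unscaled canvas coordinates.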
Once you apply this to any X/Y you receive that is relevant to the pre-zoom state, the point will be translated so that it is relative to the zoomed state.
Hope this helps.