"Seeing" Flutter User-Side Issues

Xianyu (Idle Fish) Technology - Jingkong

Introduction
After an app is released, the most troublesome problem for developers is how to reproduce and locate user-side problems after delivery. This is a blank area in the industry, which lacks a complete, systematic solution. Combining its own business pain points, the Xianyu technical team has proposed a new set of technical ideas on Flutter to solve this problem.

We capture the UI event stream and the flow of business data at the bottom layer of the system, and use the captured data to reproduce online problems through an event playback mechanism. This article first introduces the principles of Flutter touch and gesture events, then how to record Flutter UI gesture events, then how to restore and play back those events, and finally attaches the overall architecture diagram covering both native and Flutter recording and playback. To make this article easier to follow, readers may first read my previous article on native recording and playback, "Thousands of People, Thousands of Faces: Online Problem Playback Technology".

Background
Today's apps generally provide an entry for users to report problems. Such feedback usually takes one of two forms:

Describe the problem directly in text, possibly with a screenshot
Record a video of the problem directly
These two forms of feedback often bring the following complaints:

User: typing out the problem is time-consuming and labor-intensive
Developer 1: I can't understand what this piece of user feedback means
Developer 2: I can roughly understand what the user means, but I can't reproduce it offline
Developer 3: I watched the video the user recorded, but I still can't reproduce it offline or locate the problem
So, to solve the above problems, we designed an online problem playback system around a new set of ideas.

Flutter Gesture Basics
If we want to record and play back Flutter UI events, we must first understand the fundamentals of Flutter UI gestures.

1. Flutter UI Raw Touch Data: Pointer

We can understand the gesture system in Flutter in two layers. The first layer is raw touch data (pointers), which describes the time, type, location, and movement of pointers (e.g., touch, mouse, and stylus) on the screen. The second layer is gestures, which describe semantic actions composed of one or more raw pointer movements. Raw touch data alone generally doesn't carry any meaning.
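To make the two layers concrete, here is a minimal sketch of my own (not the article's code): Listener exposes the first layer (raw pointer data), while GestureDetector exposes the second layer (recognized semantic gestures).

import 'package:flutter/material.dart';

// Layer 1 (Listener) receives raw pointer data; layer 2 (GestureDetector)
// receives the semantic gesture assembled from that data.
class TwoLayerDemo extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Listener(
      onPointerDown: (PointerDownEvent e) =>
          debugPrint('raw pointer down at ${e.position}'),
      child: GestureDetector(
        onTap: () => debugPrint('semantic tap recognized'),
        child: Container(width: 100.0, height: 100.0, color: Colors.amber),
      ),
    );
  }
}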

The system passes the raw touch data to the native layer, and the native layer passes it to Flutter through the Flutter view channel.

The interface through which Flutter receives raw data from native is as follows:

void _handlePointerDataPacket(ui.PointerDataPacket packet) {
  // We convert pointer data to logical pixels so that e.g. the touch slop can be
  // defined in a device-independent manner.
  _pendingPointerEvents.addAll(
      PointerEventConverter.expand(packet.data, ui.window.devicePixelRatio));
  if (!locked)
    _flushPointerEventQueue();
}

2. Flutter UI Hit Testing

When the screen receives a touch, the Dart framework performs a hit test on your application to determine which views (RenderObjects) exist at the location where the touch meets the screen. The touch event is then dispatched to the innermost RenderObject, and from there it bubbles up through the RenderObject tree until every RenderObject on the path has been traversed. Given this mechanism, you can see that the last entry in the traversed list of RenderObjects is WidgetsFlutterBinding (strictly speaking, WidgetsFlutterBinding is not a RenderObject); WidgetsFlutterBinding will be introduced later.

void _handlePointerEvent(PointerEvent event) {
  assert(!locked);
  HitTestResult result;
  if (event is PointerDownEvent) {
    assert(!_hitTests.containsKey(event.pointer));
    result = HitTestResult();
    hitTest(result, event.position);
    _hitTests[event.pointer] = result;
    assert(() {
      if (debugPrintHitTestResults)
        debugPrint('$event: $result');
      return true;
    }());
  } else if (event is PointerUpEvent || event is PointerCancelEvent) {
    result = _hitTests.remove(event.pointer);
  } else if (event.down) {
    result = _hitTests[event.pointer];
  } else {
    return; // We currently ignore add, remove, and hover move events.
  }
  if (result != null)
    dispatchEvent(event, result);
}

The above code uses hitTest() to detect which views the current touch pointer event falls on.

Finally, the event is processed by dispatchEvent(event, result).

void dispatchEvent(PointerEvent event, HitTestResult result) {
  assert(!locked);
  assert(result != null);
  for (HitTestEntry entry in result.path) {
    try {
      entry.target.handleEvent(event, entry);
    } catch (exception, stack) {
      // Error reporting elided.
    }
  }
}
The above code calls the gesture handling of each view (RenderObject) in turn for the current touch event, letting each decide whether or not to accept it.

entry.target is the RenderObject corresponding to each widget. Every RenderObject implements the HitTestTarget interface, which declares handleEvent, so each RenderObject must implement handleEvent. This interface is where gesture recognition is handled.

abstract class RenderObject extends AbstractNode with DiagnosticableTreeMixin implements HitTestTarget
Except for WidgetsFlutterBinding at the end of the path, each view's RenderObject calls its own handleEvent to recognize gestures: it judges whether to give up on the current gesture, and if not, it throws its recognizer into a router (this router is the gesture arena). Finally, WidgetsFlutterBinding calls its handleEvent to decide, in a unified way, which of these gesture recognizers ultimately wins. So WidgetsFlutterBinding.handleEvent is in fact a unified processing entry point. Its code is as follows:

void handleEvent(PointerEvent event, HitTestEntry entry) {
  pointerRouter.route(event);
  if (event is PointerDownEvent) {
    gestureArena.close(event.pointer);
  } else if (event is PointerUpEvent) {
    gestureArena.sweep(event.pointer);
  }
}
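For context, pointerRouter.route(event) above only reaches recognizers that have registered a route for that pointer. A recognizer typically registers when a view's handleEvent hands it the down event. Below is my simplified sketch of that registration, modeled on Flutter's OneSequenceGestureRecognizer (it is not the article's code):

import 'package:flutter/gestures.dart';

// A recognizer starts competing for a pointer by subscribing to the
// PointerRouter (so route(event) reaches it) and by joining the gesture
// arena (which close()/sweep() above will later resolve).
GestureArenaEntry startTracking(
    GestureRecognizer recognizer, PointerDownEvent event, PointerRoute route) {
  GestureBinding.instance.pointerRouter.addRoute(event.pointer, route);
  return GestureBinding.instance.gestureArena.add(event.pointer, recognizer);
}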
3. Flutter UI Gesture Resolution

From the above introduction we can conclude that a single touch event may trigger multiple gesture recognizers. The framework decides which gesture the user intends by having each recognizer join a "gesture arena". The gesture arena decides which gesture wins using the following rules, which are very simple:

At any time, a recognizer can declare itself defeated and leave the arena. If only one recognizer remains in the arena, that recognizer wins, and the winner alone receives the touch event and responds to it.
At any time, a recognizer can declare itself victorious; it wins, and all other remaining recognizers lose.
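The arena's resolution is easy to observe with a standard GestureDetector. In this small demo of mine (not from the article), attaching both onTap and onDoubleTap forces the two recognizers to compete, so the tap callback fires only after the double-tap recognizer declares defeat:

import 'package:flutter/material.dart';

// With both recognizers attached, a single tap is reported only after the
// double-tap recognizer gives up in the gesture arena.
class ArenaDemo extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return GestureDetector(
      onTap: () => debugPrint('tap won the arena'),
      onDoubleTap: () => debugPrint('double tap won the arena'),
      child: Container(width: 200.0, height: 200.0, color: Colors.blue),
    );
  }
}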
4. Flutter UI Gesture Example

The following example shows a screen window composed of views A, B, C, D, E, F, K, and G, where view A is the root view, i.e., the bottommost view. The red circle represents the touch point, which falls in the middle of view G.


According to the hit test, the view path that responds to this touch event is traversed as:

WidgetsFlutterBinding <— A <— C <— K <— G (where G, K, C, and A are RenderObjects)

After the path list is traversed, each view in G, K, C, A has its entry.target.handleEvent called to put its recognizer into the arena to compete. Of course, some views voluntarily give up recognizing the touch event based on their own logic. The process is as follows:


handleEvent() is called in the order G -> K -> C -> A -> WidgetsFlutterBinding, and finally WidgetsFlutterBinding's own handleEvent() decides, in a unified way, which gesture recognizer wins.

The winning gesture recognizer calls back into the upper-level business code through its callback method. The process is as follows:


Flutter UI Recording
From the Flutter gesture handling described above, we only need to wrap the callback on the gesture recognizer to intercept the gesture callback; during interception we can then read the view chain WidgetsFlutterBinding <— A <— C <— K <— G. We only need to record this tree, the node-related attributes, and the gesture type; playback then matches this information back to the corresponding view on the current interface. The following is the recording code for the tap event. The recording code for other gesture types follows the same principle and is skipped here.

static GestureTapCallback onTapWithRecord(GestureTapCallback orgOnTap, BuildContext context) {
  if (null != orgOnTap && null != context) {
    final GestureTapCallback onTapWithRecord = () {
      if (bStartRecord) {
        saveTapInfo(context, TouchEventUIType.OnTap, null);
      }
      if (null != orgOnTap) {
        orgOnTap();
      }
    };
    return onTapWithRecord;
  }
  return orgOnTap;
}
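For illustration, a hypothetical call site might look like the following (the hosting class and the bStartRecord wiring are not shown in the article, so the surrounding names here are assumptions):

// Hypothetical usage: route the business onTap through the recording
// wrapper so the tap is captured whenever recording is enabled.
Widget buildCell(BuildContext context) {
  return GestureDetector(
    onTap: onTapWithRecord(() => debugPrint('cell tapped'), context),
    child: const Text('tap me'),
  );
}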

static void saveTapInfo(BuildContext context, TouchEventUIType type, Offset point) {
  if (null == point && null != pointerPacketList && pointerPacketList.isNotEmpty) {
    final ui.PointerDataPacket last = pointerPacketList.last;
    if (null != last && null != last.data && last.data.isNotEmpty) {
      final ui.Rect rect = QueReplayTool.getWindowRect(context);
      point = new Offset(
          last.data.last.physicalX / ui.window.devicePixelRatio - rect.left,
          last.data.last.physicalY / ui.window.devicePixelRatio - rect.top);
    }
  }
  final RecordInfo record = createTapRecordInfo(context, type, point);
  if (null != record) {
    FlutterQuestionReplayPlugin.saveRecordDataToNative(record);
  }
  clearPointerPacketList();
}
The recording flow chart is as follows:


Flutter UI Playback
UI playback is divided into two parts. The first part matches the recorded information to the corresponding view on the current interface. The second part simulates the relevant gesture on that view. This part is both difficult and important: it involves how to generate the raw touch data (time, type, coordinates, and direction); if any of these are set unreasonably or incorrectly, it can cause a crash. It also involves handling a scrolling distance that doesn't match the target and needs to be compensated, how to compensate, and so on.
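As a minimal illustration of the second part, the following sketch of mine injects a synthetic tap by replaying a down/up pair, assuming the same entry point the playback code later in this article uses (ui.window.onPointerDataPacket):

import 'dart:ui' as ui;

// Inject a synthetic tap at a logical offset by synthesizing a down/up
// pair and delivering it through the engine's pointer-data entry point.
void injectTap(ui.Offset logicalPoint) {
  final double ratio = ui.window.devicePixelRatio;
  ui.PointerDataPacket packetFor(ui.PointerChange change, int micros) {
    return ui.PointerDataPacket(data: <ui.PointerData>[
      ui.PointerData(
          timeStamp: Duration(microseconds: micros),
          change: change,
          kind: ui.PointerDeviceKind.touch,
          device: 1,
          physicalX: logicalPoint.dx * ratio,
          physicalY: logicalPoint.dy * ratio),
    ]);
  }

  ui.window.onPointerDataPacket(packetFor(ui.PointerChange.down, 0));
  ui.window.onPointerDataPacket(packetFor(ui.PointerChange.up, 1000));
}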

The following is the flow chart for scroll event playback. Playback of other gesture types follows the same principle.


In the preprocessing above, "recognition consumption" refers to the scrolling distance the gesture recognizer consumes (the touch slop) in deciding whether the movement qualifies as a scroll gesture.

So, to make a control scroll, we must first generate enough touch point data for the gesture recognizer to accept it as a scroll event; only then will the subsequent scrolling actions take effect. A small sketch of this budget follows.
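The touch slop is a known constant in Flutter (kTouchSlop in the gestures library), so the number of pre-roll move events can be budgeted up front. A minimal sketch of that calculation, where preRollMoves is my hypothetical helper and unit is the per-step offset from the article's code:

import 'package:flutter/gestures.dart' show kTouchSlop;

// The synthetic stream must move farther than the touch slop before a
// drag recognizer accepts it, so budget extra move events up front.
int preRollMoves(double unit) => (kTouchSlop / unit.abs()).ceil() + 1;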

The scroll processing logic code is as follows:

void verticalScroll(double dstPoint, double moveDis) {
  preReplayPacket = null;
  if (0.0 != moveDis) {
    // Calculate the scrolling direction and the per-step pixel offset (unit)
    // here; elided because the code is too long.
    int count =
        ((ui.window.devicePixelRatio * moveDis) / (unit.abs())).round() * 2;
    if (count < minCount) {
      // Guarantee a minimum number of steps: below 50/2 = 25 the scroll may
      // not respond, because the distance is consumed by other controls'
      // scroll detection. Also, if count is too small, the touch ends
      // (ui.PointerChange.up) before any scrolling happens, which may
      // trigger a spurious cell click event.
      count = minCount;
    }
    final double physicalX =
        rect.center.dx * ui.window.devicePixelRatio; // 376.0;
    double physicalY;
    final double needOffset = (count * unit).abs();
    final double targetHeight = rect.size.height * ui.window.devicePixelRatio;
    final int scrollPadding = rect.height ~/ 4;
    if (needOffset <= targetHeight / 2) {
      physicalY = rect.center.dy * ui.window.devicePixelRatio;
    } else if (needOffset > targetHeight / 2 && needOffset < targetHeight) {
      physicalY = (orgMoveDis > 0)
          ? (rect.bottom - scrollPadding) * ui.window.devicePixelRatio
          : (rect.top + scrollPadding) * ui.window.devicePixelRatio;
    } else {
      physicalY = (orgMoveDis > 0)
          ? (rect.bottom - scrollPadding) * ui.window.devicePixelRatio
          : (rect.top + scrollPadding) * ui.window.devicePixelRatio;
      count = ((rect.height - 2 * scrollPadding) *
              ui.window.devicePixelRatio /
              unit.abs())
          .round();
    }
    final List packetList =
        createTouchDataList(count, unit, physicalY, physicalX);
    exeScrollTouch(packetList, dstPoint);
  } else {
    new Timer(const Duration(microseconds: fpsInterval), () {
      replayScrollEvent();
    });
  }
}
The above code roughly does the following:

1. Calculate the scrolling direction and the offset unit of each generated touch data point
2. Calculate the starting position of the scroll
3. Generate the list of raw scrolling touch data
4. Loop over the raw touch data, checking each time whether the target position has been reached; if not, keep feeding data

The code that generates the list of raw scrolling touch data is as follows:

The first data point is down touch data and the rest are move touch data. The up data does not need to be generated here; it is generated only after the scrolling distance reaches the target position. Why is it designed this way? That is left for you to think about!

List createTouchDataList(int count, double unit, double physicalY, double physicalX) {
  final List packetList = [];
  int uptime = 0;
  for (int i = 0; i < count; i++) {
    ui.PointerChange change;
    if (0 == i) {
      change = ui.PointerChange.down;
    } else {
      change = ui.PointerChange.move;
      physicalY += unit;
      if (i < 15) {
        // Make the first few points cover more distance in a short time,
        // to avoid triggering click and long-press events.
        physicalY += unit;
        physicalY += unit;
      }
    }
    uptime += replayOnePointDuration;
    final ui.PointerData pointer = new ui.PointerData(
        timeStamp: new Duration(microseconds: uptime),
        change: change,
        kind: ui.PointerDeviceKind.touch,
        device: 1,
        physicalX: physicalX,
        physicalY: physicalY,
        buttons: 0,
        pressure: 0.0,
        pressureMin: 0.0,
        pressureMax: touchPressureMax,
        distance: 0.0,
        distanceMax: 0.0,
        radiusMajor: downRadiusMajor,
        radiusMinor: 0.0,
        radiusMin: downRadiusMin,
        radiusMax: downRadiusMax,
        orientation: orientation,
        tilt: 0.0);
    final List pointerList = [];
    pointerList.add(pointer);
    final ui.PointerDataPacket packet =
        new ui.PointerDataPacket(data: pointerList);
    packetList.add(packet);
  }
  return packetList;
}
The raw touch data is sent in a loop. The code that decides whether to keep feeding data is as follows:

We use a timer to continuously send touch data to the system, and before each send we check whether the target position has been reached.

void exeScrollTouch(List packetList, double dstPoint) {
  Timer.periodic(const Duration(microseconds: fpsInterval), (Timer timer) {
    final ScrollableState state = element.state;
    final double curPoint = state.position.pixels;
    // ui.window.physicalSize.height * state.position.pixels / RecordInfo.recordedWindowH;
    final double offset = (dstPoint - curPoint).abs();
    final bool existOffset = offset > 1;
    if (packetList.isNotEmpty && existOffset) {
      sendTouchData(packetList, offset);
    } else if (packetList.isNotEmpty) {
      record.succ = true;
      timer.cancel();
      packetList.clear();
      if (null != preReplayPacket) {
        final ui.PointerDataPacket packet = createUpTouchPointPacket();
        if (null != packet) {
          ui.window.onPointerDataPacket(packet);
        }
      }
      new Timer(const Duration(microseconds: fpsInterval), () {
        replayScrollEvent();
      });
    } else if (existOffset) {
      record.succ = true;
      timer.cancel();
      packetList.clear();
      final ui.PointerDataPacket packet = createUpTouchPointPacket();
      if (null != packet) {
        ui.window.onPointerDataPacket(packet);
      }
      verticalScroll(dstPoint, dstPoint - curPoint);
    } else {
      finishReplay();
    }
  });
}
Overall Architecture Diagram of Problem Playback
The diagram below covers both native and Flutter, and both UI and data.


Summary
This article has given a rough introduction to the playback of Flutter UI gesture problems. The core consists of four parts: the principles of Flutter gestures, Flutter UI recording, Flutter UI playback, and the overall architecture diagram. Due to limited space, all four parts are introduced in general terms rather than in detail; please understand. There is actually a lot of Flutter recording and playback code; only the more important and easier-to-read parts are attached here, and the less important or harder-to-read code is omitted.
If you are interested in the technical points covered here, you can follow our public account; in the future we will publish detailed, in-depth analyses of them.
If you find anything wrong above, please point it out. Thanks!
Follow-up Work
So far, our Flutter UI recording and playback has been developed, but it still needs continued optimization and deepening. We will optimize further on two points:

1. How to simulate more realistic touch events during playback, such as scrolling acceleration, since a real scroll follows a curve of changing velocity
2. How to resolve inconsistencies between recording and playback. For example, the user typed 1, 2, 3 on the keyboard, and we intercepted gestures 1, 2, 3 during recording; but due to a bug in the upper business layer, input 3 did not respond at the time and the input box only showed 12. During playback we simulate gestures 1, 2, 3, and the input box ends up showing 123, so recording and playback are inconsistent. Solving this is a troublesome problem; we will address it later, and we already have a solution for it.

Author: Xianyu Technology
