Improved Input and Gestures
by Michael Perry · in iTorque 2D · 03/25/2011 (9:16 am) · 39 replies
Greetings everyone! This thread is the official feedback-gathering source for improving iTorque 2D's input system. This means enhancing the current touch support and adding new features like gestures. Before proceeding, I have two important notes:
Foreword
1. The content of this thread is not guaranteed to be in an official release, nor does it allude to an upcoming release. This is part of my research and development.
2. Don't remind me it's my day off. I know it's my day off. The guys in IRC already reminded me. My wife already reminded me. I'm pretty sure my dogs gave me a look that meant "Isn't this your day off?". I like choosing how to use my day off, and my choice for the first part of the day is to gut the input system for iTorque 2D to suit the needs of my game. In all likelihood, this change will benefit the iT2D licensees...so why not kill two birds with one stone?
Current Input Status
Alright, with that out of the way, let's talk iOS input. What is currently supported? Well, if you are reading the Official Documentation, you already know about the current touch support. If you are not reading the docs, at least read the code. If you are doing neither, check back on this thread in a month. This is about moving forward. So let's see what we have:
// Called when the user touches the screen in any way
// %touchCount - How many fingers are touching
// %touchX - Screen coordinate of the touch on the X axis
// %touchY - Screen coordinate of the touch on the Y axis
function oniPhoneTouchDown( %touchCount, %touchX, %touchY )
{
}
// As soon as the user stops touching the screen
// %touchCount - How many fingers were removed from the screen
// %touchX - Where the finger left the screen on the X axis
// %touchY - Where the finger left the screen on the Y axis
function oniPhoneTouchUp( %touchCount, %touchX, %touchY )
{
}
// Called continuously when a user has touched the screen, then started dragging
// %touchCount - How many fingers are currently moving across the screen
// %touchX - Screen coordinate of the dragging on the X axis
// %touchY - Screen coordinate of the dragging on the Y axis
function oniPhoneTouchMove( %touchCount, %touchX, %touchY )
{
}
// Called when the user quickly taps the screen
// %touchCount - How many fingers tapped the screen
// %touchX - Screen coordinate of the tap on the X axis
// %touchY - Screen coordinate of the tap on the Y axis
function oniPhoneTouchTap ( %touchCount, %touchX, %touchY )
{
}
Alright, great start. This is a step in the right direction. However, people have already noticed there are limitations here. Edward Maurina wrote a Multi-Touch Enhancement resource for iTorque 2D, which is an excellent fix for those limitations. He specifically states why he wrote the resource: iT2D cannot handle one, two, and three touches down and moving. This...is kind of a problem. So his resource provides the following functionality in TorqueScript:
function oniPhoneTouchDown( %touchNums, %touchX, %touchY )
{
    %numTouches = getWordCount( %touchNums );

    for(%count = 0; %count < %numTouches; %count++)
    {
        %curTouch = getWord( %touchNums, %count );
        %curX = getWord( %touchX, %count );
        %curY = getWord( %touchY, %count );

        // Do something with this data, for example:
        echo("Touch number ", %curTouch, " was started at X:", %curX, " Y:", %curY );
    }
}
Much better. This is a lot closer to what we want. However, even looking at his code I'm seeing limitations to the system. Keep in mind iOS 4 can support up to 11 contact points. Also, a prime directive for iTorque 2D development is better compliance with Apple standards. This results in our iOS code being more familiar to developers with iOS experience.
Also, a blatant limitation is that there is no gesture support. People have been asking how to do pinch, swipe, tap and other gestures for years now. So how do we proceed? I'm going to reserve the next four or five posts to walk through potential ideas of how to improve the current input system (possibly by erasing it and adding something new).
Continued in next post...
About the author
Programmer.
#22
03/29/2011 (7:20 am)
OK here's my formal proposal:
Proposal For Multitouch (adopting finger recognition in iPhoneInput.mm):
- DeviceID or any unique identifier is added to the 'ScreenTouchEvent' structure.
- The current GuiEvent structure is either derived or expanded to include the unique ID and other mouse information typically stored by the Canvas in processScreenTouchEvent.
- A 'Hit' scene object/window takes a copy of the mLastEvent (that is passed through by reference) and stores this in an ordered list/array (removal of say finger 2 in a list would lead us to think finger 3 had become finger 2, perhaps not ideal behaviour - use either a vector or place a hit order variable in the GuiEvent).
- onMouseX callback functions have the added parameter of %deviceOrder that is the order in which the device was added to the ordered list.
- Ensure Scene/Canvas passes along loss of focus events so that objects always remove touch records from their lists (scenegraph could potentially contain a quick reference keyed to DeviceID, this could also speed up event processing).
- Each scene/object maintains a list of added gestures (can just be a class) and processes these one at a time, each time a mouse event takes place on the object, by accessing the object's mouse events list.
- If a gesture was caught, bail out of event processing and call the specified gesture callback.
None of these changes should break existing functionality, and they make minimal changes to the user front end whilst allowing a simple but effective gesture system to be put in place. Potentially, common gestures could be supplied out of the box, implemented in C++ for quick processing. This concept is also device-agnostic and not just limited to the iPhone touch screen. If it simplifies matters, we could deprecate onMouseX and simultaneously call onTouchX if the naming is confusing any users.
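To make the ordered-list point concrete: the bookkeeping could look like the sketch below. This is an illustrative Python model, not engine code, and every name in it is invented; the idea is only that each touch keeps the hit order it was assigned on touch-down, so lifting an earlier finger never renumbers the ones that remain.

```python
# Sketch of the proposed touch bookkeeping: each touch keeps the hit order
# assigned when it went down, so removing "finger 2" never makes "finger 3"
# look like finger 2. Names are illustrative, not engine API.

class TouchList:
    def __init__(self):
        self._touches = {}    # deviceID -> hit order
        self._next_order = 0

    def touch_down(self, device_id):
        # Record the order in which this touch arrived.
        self._touches[device_id] = self._next_order
        self._next_order += 1
        return self._touches[device_id]

    def touch_up(self, device_id):
        # Drop the record; surviving touches keep their original order.
        self._touches.pop(device_id, None)
        if not self._touches:
            self._next_order = 0  # reset once all fingers are lifted

    def order_of(self, device_id):
        return self._touches.get(device_id)
```

With this shape, the %deviceOrder parameter mentioned above stays stable for the lifetime of each touch.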
What do you guys think?
#23
03/29/2011 (10:37 am)
I'm building a multi-touch table using TGB for the game engine and TUIO for the input library. My use case involves having multiple people using the same device (which I've also seen on iPad games).
What I found was necessary was that the finger ID had to be passed as part of the input processing. Otherwise, as Alistair Lowe states, I would find that fingers appeared to move large distances.
I'd have to think about it more, but I like Alistair's idea of adding gestures to objects. The way I use multi-touch is like the following:
* On "finger down", assign the "finger ID" to the object and pass along the event to that object.
* On "finger move" or "finger up", pass along the event to the object assigned that "finger ID".
* The object processes the gesture as necessary for its type.
To be honest, this was created so that I could quickly create prototypes. It did end up working fairly well, though.
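The three steps above can be sketched as a small dispatcher. This is a hypothetical Python illustration (not TGB or TUIO code); the object and method names are invented:

```python
# Minimal sketch of the routing described above: "finger down" binds a
# finger ID to whatever object was hit, and later events for that ID go
# to the same object, which can then run its own gesture logic.

class TouchRouter:
    def __init__(self):
        self._owner = {}  # finger ID -> object

    def finger_down(self, finger_id, hit_object):
        self._owner[finger_id] = hit_object
        hit_object.handle("down", finger_id)

    def finger_move(self, finger_id):
        obj = self._owner.get(finger_id)
        if obj:
            obj.handle("move", finger_id)

    def finger_up(self, finger_id):
        obj = self._owner.pop(finger_id, None)
        if obj:
            obj.handle("up", finger_id)

class SceneObject:
    def __init__(self):
        self.events = []

    def handle(self, kind, finger_id):
        # A real object would process the gesture as needed for its type.
        self.events.append((kind, finger_id))
```

Two fingers on two different objects never cross-contaminate, which is the whole point of routing by finger ID.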
#24
03/29/2011 (6:33 pm)
I just looked back over my TUIO code and found another item for the "wish list" (which is probably way too much for an iPhone update).
TUIO and Microsoft Surface both support object tracking. There are images that you can place on the bottom of objects, and the input system tells you that the item was added, moved, or removed. You also get the unique ID that is encoded in that image and the angle of the object.
Again, for my use-case, I'm implementing board games. There are physical objects that are added to the table and I have to track their locations and ownership. Each piece has a unique ID, so ownership is easy. I just also have to manage the locations of those objects, too.
The TUIO signatures are like this:
* onCursorAdd( id, position )
* onCursorUpdate( id, position )
* onCursorRemove( id, position )
* onObjectAdd( id, position, angle )
* onObjectUpdate( id, position, angle )
* onObjectRemove( id, position, angle )
I then do gesture recognition as appropriate for those objects within the update (and/or remove).
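As a rough illustration of consuming those callbacks, a piece tracker might keep an ID-keyed table of positions and angles. This is a hypothetical Python sketch, not actual TUIO client code:

```python
# Illustrative consumer of TUIO-style object callbacks: keep a table of
# physical pieces (ID -> position and angle) as they are added, moved,
# and removed from the table surface. Names beyond the callback shape
# shown above are invented.

class PieceTracker:
    def __init__(self):
        self.pieces = {}  # unique ID -> (position, angle)

    def on_object_add(self, obj_id, position, angle):
        self.pieces[obj_id] = (position, angle)

    def on_object_update(self, obj_id, position, angle):
        # Gesture recognition and ownership logic would hook in here.
        self.pieces[obj_id] = (position, angle)

    def on_object_remove(self, obj_id, position, angle):
        self.pieces.pop(obj_id, None)
```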
#25
03/30/2011 (4:59 am)
You've got to look at this from the Apple API perspective, though. That list of callbacks looks even more complicated than what's been proposed ;)
#26
03/30/2011 (11:58 am)
I totally agree... this is way too complicated for what's being proposed. But looking at this for future expansion, if I can't track individual IDs, then I can never create a multi-user experience. As an example, two fingers moving on an object doesn't necessarily mean "scale". If there was a way to combine the oniPhoneTouch* events with gesture processing, it would be pretty good. Especially if the events could then be "locked" to objects in the scene.
#27
03/30/2011 (12:14 pm)
@William - Indeed, that was my primary concern with the current implementation. I think most people use the per-object callbacks and related concepts; they're intuitive and easy to use. I hope that, from my proposed idea, you can see there's very little overhead involved in keeping this concept alive with multitouch.
#28
03/31/2011 (2:36 pm)
Hey Michael,
This is a really brilliant thread. It's refreshing to see this kind of thought process actually documented.
I personally would prioritize this over retina support or Box2D support as gestures are a fairly critical part of the iOS interface, and simplifying/improving iT2D gesture support in this manner would go a long way.
Additionally, your catch-all method concerns me. If my code at the beginning of that event starts a switch statement where I then handle each event type differently, I don't see the improvement. By having each event clearly named with specific event handlers, I feel it would make immediate sense how I would code my game's reaction to that event. I also think that'd be easier to document in the long run.
Lastly, much thanks for doing this exercise, this is really awesome.
#29
06/21/2011 (11:52 am)
Revisiting this thread. Everyone has had a chance to let things sink in or possibly forget the discussion. I'm applying theory to code right now, so does anyone have any more thoughts on this matter?
I am reviewing what I have written, Edward's resource, and Alistair's Object-level multi-touch support thread (which was created after this).
#30
06/27/2011 (11:28 am)
I came to a conclusion and planned out the final implementation. Good news and bad news. Bad news first:
Backwards compatibility is taking a big hit. If all goes as planned, the onMouseX events are going to be deprecated. Continuing to use them may be nonfunctional or ill-advised.
Take a minute to breathe and let the panic subside. Ok, the good news:
I am potentially implementing three new device types into iT2D. I was afraid it would be painful, but I think the result will be much more intuitive to users. I am looking at implementing TouchDevice, AccelerometerDevice, GyroscopeDevice. This will fold ActionMap back into the system and leave the door open to per-object touch interaction.
Previously, the only use for ActionMaps in iT2D was mapping the joystick, which would be signaled by the accelerometer. That looked like this:
new ActionMap(moveMap);
moveMap.bind(joystick0, xaxis, "moveX");
moveMap.bind(joystick0, yaxis, "moveY");

function moveX(%val)
{
    // %val is the axis change in the horizontal
}

function moveY(%val)
{
    // %val is the axis change in the vertical
}
Now, you are looking at the following pseudo code:
moveMap.bind(accelerometer, xaxis, "accelerateX");
moveMap.bind(gyroscope, xaxis, "tiltX");
moveMap.bind(touch, down, "touchDown");

function accelerateX(%val, %rate)
{
    // %val is the acceleration in the x-axis
    // %rate is the rate of acceleration
}

function tiltX(%val)
{
    // %val is the tilt in the x-axis
}

function touchDown(%touchCount, %xTouches, %yTouches)
{
    // %touchCount is how many fingers are hitting the screen
    // %xTouches is a collection of the fingers' x locations
    // %yTouches is a collection of the fingers' y locations
}
I'm actually pretty happy with this approach, but the good news keeps coming. I am not deprecating the per-object input reactions. Previously we had this:
function Player::onMouseDown(%this, %modifier, %worldPos, %numClicks)
{
}
If all goes well, you will soon see this function in the docs and demos:
function Player::onTouchDown(%this, %numTouches, %xTouches, %yTouches)
{
}
As an added bonus, I will reverse the flow of Torque a little. On Windows and OS X, the mouse will trigger the ::onTouchX events. If the wiring and superglue holds, you are still going to be able to test on desktop machines.
So I achieve the two goals I was chasing. First, making iT2D more compliant and familiar to iOS development. Second, still allow for ::onXXX events on objects (should make Alistair happy).
Unfortunately, this removes the ability to test on Windows via joystick, but I justify this by the fact that you cannot test joystick on OS X and not many people are sporting joysticks on their desk these days (assumption).
Gestures are still tricky, but this is how I presented it in IRC based on the current plan. There is an issue with gestures vs regular touch. When you are performing a gesture, you are still getting regular touch feedback. If you isolate all of your functionality to touch handling, then try to do something based on gestures, your game design needs to be prepared for it.
For those of you who have played Angry Birds, analyze how they handle this. In their game, pinching allows a player to zoom in and zoom out. However, you are still moving the camera via dragging. That is what they limited their gestures to, based on a design decision.
It would be difficult for iT2D to handle input in a way that accommodates everyone's input design. The correct way has not changed; listen for gesture signals and provide the data. It will be up to you to figure out how your game reacts to it.
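As one concrete example of "provide the data and let the game decide": a custom pinch signal can be derived from two tracked touch positions by reporting the current distance between them relative to the distance when the gesture began. The Python sketch below is purely illustrative; it is not the engine's gesture code, and all names are made up.

```python
import math

# Illustrative custom pinch detection: given the two tracked touch
# positions, report a scale factor relative to where the pinch started.
# Whether the game zooms, ignores it, or does something else is up to you.

class PinchDetector:
    def __init__(self):
        self._start = None  # distance between touches when the pinch began

    @staticmethod
    def _distance(p0, p1):
        return math.hypot(p1[0] - p0[0], p1[1] - p0[1])

    def begin(self, p0, p1):
        self._start = self._distance(p0, p1)

    def update(self, p0, p1):
        # > 1.0 means the fingers moved apart (zoom in),
        # < 1.0 means they pinched together (zoom out).
        return self._distance(p0, p1) / self._start
```

In the Angry Birds example above, a game would feed this signal into camera zoom while still handling single-finger drags through the regular touch callbacks.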
In one of my posts, I prototyped one solution. I used a single global variable that tracked the pinch amount. This used the UIGestureRecognizer approach (which I'm not a fan of). If you wanted to react to a gesture, you always had access to that variable, which freed you up to handle normal touch code however you want.
Now, if I were to implement the non-preferred concept (using a second, invisible UIView), I could grab gestures more easily. As it stands, I'm going to be writing custom gesture code. This is troublesome in itself, because I have to come up with the algorithm instead of using what Apple provides out of the box for UIViews.
However, I have learned a bit more about views since working on Game Center support in Preview 1. I might have a new trick up my sleeve that allows me to use gesture recognizers, but it's just a theory.
To sum up, I am off and running. I can still adjust small pieces of code based on any further feedback from anyone still reading, but I feel confident that this is the best approach for the future.
I know having to port your previous code from ::onMouseX is going to be annoying, but at least you are going to be able to port to functions that have near identical uses.
That's my brain dump of the month, so enjoy the read. I am going heads down for a little bit to get this implementation working. I may not be too responsive in the forums, but at least you know why.
#31
06/28/2011 (3:28 pm)
Looking forward to the results - good luck!
#32
06/28/2011 (4:06 pm)
Hi Michael, great to hear you are tackling this. Having an official solution will be great.
BUT:
Part of what Alistair and I were talking about in the other thread was the need to correctly identify each touch with a number or ID of some sort. If I am tracking three fingers and the second one down is now lifted, your code will (seemingly, by your description) start giving me the X and Y values for the third touch in the place I was expecting the values for the second touch. It would be handy if each touch had an ID so I can get its X and Y and not confuse them with other touches. Alistair and I went into lengthy discussion of this, which may be helpful to explain it further. Click Here to review that discussion.
Thanks!
#33
06/29/2011 (9:17 am)
@Warthog - Ah, thanks for pointing this out. I'll take this into consideration when designing the new devices.
#34
07/06/2011 (9:18 am)
@Warthog and all - I've made some progress and can demonstrate what the prototype code looks like. It's not drastically different from what I showed before, but the underlying code does take into account touch persistence. Each touch will have an ID, which will be numerical from a range of 0-11, similar to EFM's resource. For example, when you touch with a single finger it will have an ID of 0 and an X/Y coordinate. Touch down with another finger, and it will have an ID of 1. Lifting the original finger will release ID 0, but the remaining finger will persist with an ID of 1.
How you store this ID and make use of it will be up to you. The naming of the parameters helps clear this up as well (though we know you can name parameters anything you want).
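The persistence rule described above amounts to lowest-free-slot allocation. Here is an illustrative Python model of it; this is not the engine implementation, and the slot cap and names are assumptions:

```python
# Sketch of the ID persistence described above: a new touch takes the
# lowest free slot, and lifting a finger frees only that slot, so the
# remaining touches keep their IDs. The cap is illustrative.

MAX_TOUCHES = 11

class TouchIDAllocator:
    def __init__(self):
        self._in_use = {}  # slot -> platform touch handle

    def touch_began(self, handle):
        for slot in range(MAX_TOUCHES):
            if slot not in self._in_use:
                self._in_use[slot] = handle
                return slot
        return -1  # no free slot

    def touch_ended(self, handle):
        for slot, h in list(self._in_use.items()):
            if h == handle:
                del self._in_use[slot]
                return slot
        return -1
```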
new ActionMap(moveMap);
moveMap.bind(touch, down, touchDownFunc);
moveMap.bind(touch, up, touchUpFunc);
moveMap.bind(touch, move, touchMoveFunc);
moveMap.push();

function touchDownFunc(%touchIDs, %touchesX, %touchesY)
{
    %touchCount = getFieldCount(%touchIDs);
    echo("Number of touch downs: " @ %touchCount);

    for(%i = 0; %i < %touchCount; %i++)
    {
        %id = getField(%touchIDs, %i);
        %xCoordinate = getField(%touchesX, %i);
        %yCoordinate = getField(%touchesY, %i);
        echo("Touch " @ %id);
        echo("X/Y: " @ %xCoordinate SPC %yCoordinate);
    }
}

function touchUpFunc(%touchIDs, %touchesX, %touchesY)
{
    %touchCount = getFieldCount(%touchIDs);
    echo("Number of fingers lifted: " @ %touchCount);

    for(%i = 0; %i < %touchCount; %i++)
    {
        %id = getField(%touchIDs, %i);
        %xCoordinate = getField(%touchesX, %i);
        %yCoordinate = getField(%touchesY, %i);
        echo("Touch " @ %id);
        echo("X/Y: " @ %xCoordinate SPC %yCoordinate);
    }
}

function touchMoveFunc(%touchIDs, %touchesX, %touchesY)
{
    %touchCount = getFieldCount(%touchIDs);
    echo("Number of touch moves: " @ %touchCount);

    for(%i = 0; %i < %touchCount; %i++)
    {
        %id = getField(%touchIDs, %i);
        %xCoordinate = getField(%touchesX, %i);
        %yCoordinate = getField(%touchesY, %i);
        echo("Touch " @ %id);
        echo("X/Y: " @ %xCoordinate SPC %yCoordinate);
    }
}
As I mentioned earlier, I'm trying to get everything hooked up to per-object interaction using the new system:
function someObject::onTouchDown(%this, %touchIDs, %touchesX, %touchesY)
{
}

function someObject::onTouchUp(%this, %touchIDs, %touchesX, %touchesY)
{
}

function someObject::onTouchMove(%this, %touchIDs, %touchesX, %touchesY)
{
}
While the last chunk of code resembles the ::onMouseXXX functions, the code beneath the hood ties into the rest of the input system. This also brings back the constant motion (onMoved or onDragged) callbacks. The double function call problems should also be alleviated by this.
So...does this sound like a step in the direction you were hoping for?
#35
07/06/2011 (9:29 am)
I didn't want to jam the whole concept of the motion device changes into my last post, but I'm sure you are curious about it. The implementation for gyroscope and accelerometer is essentially finished. In IRC, Conor O'Kane expressed concern about the accelerometer's functionality changing. There was a lengthy and detailed discussion about how the accelerometer should report values. I had to change the way it functioned, which somewhat breaks backwards compatibility. Fret not, because there is replacement code that works just as well.
The discussion boiled down to this:
1. Accelerometer provides acceleration and gravity
2. Previous accelerometer was presented and used like a joystick/gyroscope (not good)
3. Devices with gyroscope can provide true and accurate yaw/pitch/roll
Here is how the old system worked:
new ActionMap(moveMap);
moveMap.bind(joystick0, xaxis, "moveCloudX");
moveMap.bind(joystick0, yaxis, "moveCloudY");
moveMap.push();
function moveCloudX(%val)
{
if(%val > 0.2)
Cloud.setLinearVelocityX(40);
else if(%val < -0.2)
Cloud.setLinearVelocityX(-40);
else
Cloud.setLinearVelocityX(0);
}
function moveCloudY(%val)
{
if(%val > 0.2)
Cloud.setLinearVelocityY(-40);
else if(%val < -0.2)
Cloud.setLinearVelocityY(40);
else
Cloud.setLinearVelocityY(0);
}
I'm not a fan for two reasons. First, we are mapping to a joystick, which is just plain confusing to a new user. Second, acceleration is not a constant value; it is based on motion: how fast the device moved along the X/Y/Z axes. However, you can get a constant value by way of gravity. If you have a gyroscope, it gets even better, because you get a rate of rotation for each axis as well as a constant yaw/pitch/roll.
Furthermore, no one was really using the dead zone parameters. I saw a lot of if/else checks to help reduce sensitivity. So here is the new functionality:
new ActionMap(moveMap);

// Only pass in non-0 values when the gravity shift
// on an axis exceeds the absolute value of 0.2
moveMap.bind(accelerometer, gravityx, "D", "-0.2 0.2", "moveCloudX");
moveMap.bind(accelerometer, gravityy, "D", "-0.2 0.2", "moveCloudY");

// Only pass in non-0 values when the acceleration
// along an axis exceeds the absolute value of 0.09
moveMap.bind(accelerometer, accelx, "D", "-0.09 0.09", "accelerationX");
moveMap.bind(accelerometer, accely, "D", "-0.09 0.09", "accelerationY");
moveMap.bind(accelerometer, accelz, "D", "-0.09 0.09", "accelerationZ");

// No dead zones used, so full sensitivity is utilized
// This can be precise to the 0.001 measurement
moveMap.bind(gyroscope, pitch, "pitchFunc");
moveMap.bind(gyroscope, yaw, "yawFunc");
moveMap.bind(gyroscope, roll, "rollFunc");

moveMap.push();
Pros:
1. No more joystick
2. You are not tied to one set of values
3. You gain access to gyroscope (if supported)
4. The engine becomes more compliant by using CMMotionManager
Cons:
1. The new motion system is only compatible with devices running iOS 4.x and higher
2. Projects using the old accelerometer/joystick bindings will not work. The code needs to be updated to use either gyroscope or accelerometer gravity values
Offsetting the cons:
1. A fallback will be available for developers targeting iOS versions prior to 4.0
2. The data values for the accelerometer gravity bindings are comparable to the old joystick values
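To make the fallback concrete, here is a hedged sketch of how a project might prefer the gyroscope when present and fall back to the accelerometer gravity bindings otherwise. The `getGyroscopeAvailable()` query is a hypothetical helper for illustration, not a confirmed engine function; the bind syntax follows the examples above.

```
new ActionMap(moveMap);

if (getGyroscopeAvailable())
{
   // Preferred path: constant yaw/pitch/roll from the gyroscope
   moveMap.bind(gyroscope, pitch, "pitchFunc");
   moveMap.bind(gyroscope, roll, "rollFunc");
}
else
{
   // Fallback path: gravity values are comparable to the old
   // joystick values, with the same 0.2 dead zone
   moveMap.bind(accelerometer, gravityx, "D", "-0.2 0.2", "moveCloudX");
   moveMap.bind(accelerometer, gravityy, "D", "-0.2 0.2", "moveCloudY");
}

moveMap.push();
```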
#36
07/08/2011 (2:19 am)
Hi Michael, thumbs up from me!
#37
07/18/2011 (11:08 am)
Update: The new touchscreendevice is working splendidly. A welcome change is being able to push and pop different ActionMaps that handle touch events differently. You will not have to jam a lot of logic into single oniPhoneTouchX functions.
Separate ActionMap declarations:
new ActionMap(menuActionMap);
menuActionMap.bind(touchdevice, touchdown, "menuTouchDown");
menuActionMap.bind(touchdevice, touchmove, "menuTouchMove");
menuActionMap.bind(touchdevice, touchup, "menuTouchUp");

new ActionMap(gameActionMap);
gameActionMap.bind(touchdevice, touchdown, "gameTouchDown");
gameActionMap.bind(touchdevice, touchmove, "gameTouchMove");
gameActionMap.bind(touchdevice, touchup, "gameTouchUp");
Push and pop
function onMenuStart()
{
menuActionMap.push();
}
function startGame()
{
menuActionMap.pop();
gameActionMap.push();
}
All of the previously mentioned functionality is in as well. Individual finger IDs and tracking work. So, I've moved on to converting onMouseX events to onTouchX events.
I thought the worst was over, but this is actually the hardest part of improving the input system. onMouseX events are really embedded in the engine. Just open up a game's Visual Studio project and do a search in files for onMouseDown. Every GUI control has one, including t2dSceneWindow, which is what signals onMouseDown for all objects in your scene.
The quickest solution would end up breaking all GUI functionality, except t2dSceneWindow. Despite how many people I see using a level for their interface, I cannot assume everyone has abandoned the built-in GUI system. To make matters worse, the GUIEvent that controls where onMouseX events occur is separate from the other Event structs in iT2D...further complicating matters.
While writing this post I had an idea. I think I can extend per-object multitouch to all t2dSceneObjects without breaking GUIEvent. This is the last step for the next preview, so we are very very close.
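As a rough illustration of what per-object multitouch could look like once onMouseX becomes onTouchX, here is a hedged sketch. The callback names and the %touchID parameter are assumptions about the eventual API based on this post, not confirmed signatures.

```
// Hypothetical per-object touch callbacks, tracking each finger by ID
function Cloud::onTouchDown(%this, %touchID, %worldPos)
{
   // Remember which finger grabbed this object
   %this.activeTouch = %touchID;
}

function Cloud::onTouchDragged(%this, %touchID, %worldPos)
{
   // Only follow the finger that originally touched us
   if (%touchID == %this.activeTouch)
      %this.setPosition(%worldPos);
}

function Cloud::onTouchUp(%this, %touchID, %worldPos)
{
   if (%touchID == %this.activeTouch)
      %this.activeTouch = -1;
}
```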
#38
05/11/2012 (5:11 pm)
Apologies for necro'ing an old thread, but did gesture support ever make its way into the released codebase?
#39
05/12/2012 (7:23 am)
@John - The native gestures that the iOS SDK supplies did not go into the 1.5 release. The improved input system allows you to replicate the gestures using the same kind of logic in TorqueScript.
With an upcoming release, it will be easier to directly connect native iOS gestures, since every app will have a root UIViewController.
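To show what "the same kind of logic in TorqueScript" might look like, here is a hedged sketch of a simple horizontal swipe detector built on the new touch bindings. The bind names follow the earlier examples in this thread; the callback parameter list and the 100-pixel threshold are illustrative assumptions.

```
new ActionMap(swipeMap);
swipeMap.bind(touchdevice, touchdown, "swipeStart");
swipeMap.bind(touchdevice, touchup, "swipeEnd");
swipeMap.push();

function swipeStart(%touchID, %x, %y)
{
   // Record where this finger first landed
   $swipeStartX[%touchID] = %x;
}

function swipeEnd(%touchID, %x, %y)
{
   // Treat a release more than 100 pixels away as a swipe
   %delta = %x - $swipeStartX[%touchID];

   if (%delta > 100)
      echo("Swipe right");
   else if (%delta < -100)
      echo("Swipe left");
}
```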
#40
05/12/2012 (7:33 am)
Thanks for the update, Michael. I really want to eventually end up with the native gestures. Emulating them on my own would never provide quite the same "gesture response" that an iOS user is familiar with on a subconscious level, if that makes any sense :)
It isn't such a high priority with my current game, but it's just something I'm looking at on the horizon.
Alistair Lowe (Torque 3D Owner)
I think the interesting thing that came out of this for me is that gestures just interpret input, so ideally the initial implementation should simply expand the existing functionality by tracking multiple devices or fingers (a finger is like a one-button mouse).
Looking at Apple's official docs, simple single-handed gestures are determined manually (detecting a swipe/tap/drag is down to the user). Multi-touch gestures have a helper, as Michael has pointed out, but the end result for a pinch/zoom can be brought back down to a standard drag with a scaling factor. However, in view of expandability, perhaps we should have an official mechanism for implementing gestures and a callback for a certain gesture.
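A hedged sketch of that pinch-to-scale reduction, assuming two tracked finger positions are available (the function name and parameters here are illustrative, not engine API; VectorSub and VectorLen are standard Torque console functions):

```
// Compute a scaling factor from the change in distance
// between two tracked fingers
function getPinchScale(%oldPosA, %oldPosB, %newPosA, %newPosB)
{
   %oldDist = VectorLen(VectorSub(%oldPosA, %oldPosB));
   %newDist = VectorLen(VectorSub(%newPosA, %newPosB));

   // Guard against a degenerate pinch with both fingers together
   if (%oldDist == 0)
      return 1;

   // > 1 means the fingers moved apart (zoom in),
   // < 1 means they moved together (zoom out)
   return %newDist / %oldDist;
}
```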
A user may, for example, want to implement a 5-finger swipe or a complex gesture; there should be a mechanism in place for them to do this and have it catch events at the scene-object level. With the larger array of touch devices now out, and the fact that gestures don't necessarily have to be touch based, I'm firmly of the belief that the gesture system should be device-agnostic. For this to happen, a scene object would need to be aware of what devices are currently acting upon it.
So for example:
// New Gesture - like behaviour system
if (!isObject(fiveFingerSwipeGesture))
{
%template = new GestureTemplate(fiveFingerSwipeGesture);
// Fields here
}
function fiveFingerSwipeGesture::ProcessInput(%object)
{
// Determine gesture here
}
// Add gesture to an object
%sceneObject.addGesture( fiveFingerSwipeGesture );
// Process callback from gestures (which will suppress standard callbacks if result was true)
function sceneObject::onGesture( %gesture )
{
if( %gesture.type $= "fiveFingerSwipeGesture" )
{
// Do stuff specific to that gesture
}
}
Under the hood, a sceneObject stores the details of the current device states acting upon it: initially, a list of mice that have "clicked down" on the object, in the order that they did so. Any onMouseMove information is updated in this list; the attached gestures process the updates, and if a gesture is detected they block the onMouseMove callback and instead trigger the onGesture callback.
How about something like that?
I think this will also help with the unnatural situation in TGB where I often had a sceneObject::onMouseDown but had to deal with the mouse up in the scene window, because the finger was released off the object, and thus had to track various globals, etc.