Improved Input and Gestures
by Michael Perry · in iTorque 2D · 03/25/2011 (9:16 am) · 39 replies
Greetings everyone! This thread is the official feedback gathering source for improving iTorque 2D's input system. This means enhancing the current touch support and adding new features like gestures. Before proceeding, I have two important notes:
Foreword
1. The content of this thread is not guaranteed to be in an official release, nor does it allude to an upcoming release. This is part of my research and development.
2. Don't remind me it's my day off. I know it's my day off. The guys in IRC already reminded me. My wife already reminded me. I'm pretty sure my dogs gave me a look that meant "Isn't this your day off?". I like choosing how to use my day off and my choice for the first part of the day is to gut the input system for iTorque 2D to suit the needs of my game. In all likelihood, this change will benefit the iT2D licensees...so why not kill two birds with one stone?
Current Input Status
Alright, with that out of the way, let's talk iOS input. What is currently supported? Well, if you are reading the Official Documentation you already know about the current touch support. If you are not reading the docs, at least read the code. If you are doing neither, check back on this thread in a month. This is about moving forward. So let's see what we have:

// Called when the user touches the screen in any way
// %touchCount - How many fingers are touching
// %touchX - Screen coordinate of the touch on the X axis
// %touchY - Screen coordinate of the touch on the Y axis
function oniPhoneTouchDown( %touchCount, %touchX, %touchY )
{
}
// As soon as the user stops touching the screen
// %touchCount - How many fingers were removed from the screen
// %touchX - Where the finger left the screen on the X axis
// %touchY - Where the finger left the screen on the Y axis
function oniPhoneTouchUp( %touchCount, %touchX, %touchY )
{
}
// Called continuously when a user has touched the screen, then started dragging
// %touchCount - How many fingers are currently moving across the screen
// %touchX - Screen coordinate of the dragging on the X axis
// %touchY - Screen coordinate of the dragging on the Y axis
function oniPhoneTouchMove( %touchCount, %touchX, %touchY )
{
}
// Called when the user quickly taps the screen
// %touchCount - How many fingers tapped the screen
// %touchX - Screen coordinate of the tap on the X axis
// %touchY - Screen coordinate of the tap on the Y axis
function oniPhoneTouchTap ( %touchCount, %touchX, %touchY )
{
}

Alright, great start. This is a step in the right direction. However, people have already noticed there are limitations here. Edward Maurina wrote a Multi-Touch Enhancement resource for iTorque 2D, which is an excellent fix for those limitations. He specifically states why he wrote the resource: iT2D cannot handle 1, 2, and 3 touches down and moving. This...is kind of a problem. So his resource provides the following functionality in TorqueScript:
function oniPhoneTouchDown( %touchNums, %touchX, %touchY )
{
   %numTouches = getWordCount( %touchNums );

   for(%count = 0; %count < %numTouches; %count++)
   {
      %curTouch = getWord( %touchNums, %count );
      %curX = getWord( %touchX, %count );
      %curY = getWord( %touchY, %count );

      // Do something with this data, for example:
      echo("Touch number ", %curTouch, " was started at X:", %curX, " Y:", %curY );
   }
}

Much better. This is a lot closer to what we want. However, even looking at his code I'm seeing limitations to the system. Keep in mind iOS 4 can support up to 11 contact points. Also, a prime directive for iTorque 2D development is better compliance with Apple standards. This results in our iOS code being more familiar for developers with iOS experience.
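On the engine side, those space-separated word lists have to be assembled before the script callback fires. Here's a minimal sketch of how that packing might look in C++; the Touch struct and packTouches are illustrative assumptions, not Maurina's actual resource code:

```cpp
#include <string>
#include <vector>

// Illustrative only: a touch point as the engine might track it.
struct Touch { int id; int x; int y; };

// Build "0 1", "120 300" style word lists, one word per touch, which
// getWord()/getWordCount() can then pull apart on the script side.
void packTouches(const std::vector<Touch>& touches,
                 std::string& nums, std::string& xs, std::string& ys) {
    for (size_t i = 0; i < touches.size(); ++i) {
        const char* sep = (i == 0) ? "" : " ";
        nums += sep + std::to_string(touches[i].id);
        xs   += sep + std::to_string(touches[i].x);
        ys   += sep + std::to_string(touches[i].y);
    }
}
```

Each string then becomes one argument to the oniPhoneTouchDown callback.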
Also, a blatant limitation is that there is no gesture support. People have been asking how to do pinch, swipe, tap and other gestures for years now. So how do we proceed? I'm going to reserve the next four or five posts to walk through potential ideas of how to improve the current input system (possibly by erasing it and adding something new).
Continued in next post...
#2
03/25/2011 (9:17 am)
Gesture Approach 1: Built In System
When I say "Built In System", I am referring to how Apple tells you to implement gestures. Experienced iOS developers have already picked up on the fact that the original iOS SDK and docs were not geared toward game development. My own opinion is that Apple did not expect the explosion of games that hit the app store. Game developers who roll their own apps are much more familiar with a blank window and shell app. They are not going to be using a UIView, they are going to be using a UIWindow and OpenGL ES. At least, that's what the original iTorque 2D developers did.

What does that mean? iTorque 2D does not have a UIView to attach gestures to. That means iTorque 2D does not support UIGestureRecognizer objects out of the box. No pinch support for you! Unless...
Ok, my brain came up with two immediate ideas for implementing built in gesture support:
1. Ship with an already made UIViewController, attach gestures to that
2. Create a UIViewController programmatically, attach gestures to that
My immediate concerns were:
1. How will a UIViewController running on top of iTorque 2D's UIWindow affect rendering? Will it draw on top of the iT2D scene graph?
2. How will it affect performance? I have not profiled additional draw calls and functionality for the UIViewController.
3. How will it affect the existing input system? If the touches are associated with the window and ::onMouseXXX are associated with the scene graph, what happens when a UIView is added to the app?
Hmmm...considerations. No telling if I can get this to work, or worse, get this to work and break other stuff. I think I need to read more docs and code before I just start jamming code into the engine.
#3
03/25/2011 (9:17 am)
Gesture Approach 2: Custom System
The built in functionality the iOS SDK provides looks simple enough. I've written a few custom apps from scratch, so I get the syntax and the methodology. I've also made some iTorque apps (obviously), so I can immediately recognize what problems I could run into. Which is why I'm glad there are options.

Options are good. Programmers like options. Programmers hate limitations. Before I try to prototype anything, I should explore my options. Let's stay focused on the pinch gesture. If you look at the UIPinchGestureRecognizer Class Reference, you can tell it really only provides two values: scale and velocity. What are those?
scale: The scale factor relative to the points of the two touches in screen coordinates.
velocity: The velocity of the pinch in scale factor per second. (read-only)
We have a winner. The scale property tells us how much pinching has occurred. So what are the important questions to ask if we want to roll our own custom solution?
What is the scale really?
Forgive me if I butcher this or am way off, but the scale appears to come from the distance between the two touch points, compared against their separation when the gesture began.
How can we reproduce that value without the UIPinchGestureRecognizer?
Create our own formula for calculating distance between two touch points
What other object/function is available to help?
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event;
How can I narrow that down even further? Well, a pinch gesture occurs when two fingers are moving toward each other. That means touchesMoved is where we want to be.
The following is some sample code from a book I'm reading:
// We have touch motion, do something with it
-(void) touchesMoved: (NSSet *) touches withEvent: (UIEvent *) event
{
    // Get the touch collection
    NSSet *allTouches = [event allTouches];

    // Based on the number of touches, jump to specific case handling
    switch ( [allTouches count] )
    {
        case 1:
        {
            // One finger touching, do nothing
            break;
        }
        case 2:
        {
            // Multiple touches moving at once
            // We should check for pinching here

            // Finger 1
            UITouch *touch1 = [[allTouches allObjects] objectAtIndex:0];
            // Finger 2
            UITouch *touch2 = [[allTouches allObjects] objectAtIndex:1];

            // Get actual coordinates of the touch points
            CGPoint touch1PT = [touch1 locationInView:[self view]];
            CGPoint touch2PT = [touch2 locationInView:[self view]];

            // So now what?
            // Get the distance between the two points
            // Compare that to the distance from the last check
            // Current distance < last distance, we have pinch
            break;
        }
    }
}

Alright, so the above code could use some work and is not iTorque 2D ready. Still, it is feasible to reproduce the functionality of pinching without using a gesture recognizer. Now we are making some good progress...except, does iTorque 2D already handle all of the touches?
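Before moving on, the distance comparison that the sample's closing comments describe can be sketched as a small standalone routine. This is my hedged reading of the idea, not code from the book or the engine, and the function names are mine:

```cpp
#include <cmath>

// Euclidean distance between two touch points.
static float distanceBetween(float x1, float y1, float x2, float y2) {
    float dx = x2 - x1;
    float dy = y2 - y1;
    return std::sqrt(dx * dx + dy * dy);
}

// Call once per move event with the two current touch points.
// lastDistance carries state between calls (start it at the initial
// finger separation). Returns true when the fingers moved closer
// together since the previous call, i.e. a pinch inward.
bool isPinchingInward(float x1, float y1, float x2, float y2,
                      float& lastDistance) {
    float current = distanceBetween(x1, y1, x2, y2);
    bool pinching = current < lastDistance;
    lastDistance = current;
    return pinching;
}
```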
#4
03/25/2011 (9:21 am)
Complete Touch Support?
In the first post, I mentioned that the current touch system for iTorque 2D has limitations. It can handle multiple touches, but it is limited and can glitch if used improperly. Out of the box, the current touch system is not ready for the custom gesture from the last post. It is not even close to resembling the sample code from the last post. What does the iTorque 2D touchesMoved look like?

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
   NSUInteger touchCount = 0;

   // Enumerates through all touch objects
   for (UITouch *touch in touches) {
      CGPoint point = [touch locationInView:self];
      CGPoint prevPoint = [touch previousLocationInView:self]; //EFM

      // PUAP -Mat platform::init sets the window to 480x480 for easy rotation, which means
      // this needs to be done to keep the cursor at the right point, if we are in landscape
      if( platState.portrait == false ) {
         //point.y -= (480 - 320);
      }

      createMouseMoveEvent( touchCount, point.x, point.y, prevPoint.x, prevPoint.y ); //EFM
      touchCount++;
   }
}

It's creating a mouse event. Wait, where's the oniPhoneTouchDown function? It gets called in the processMultipleTouches function, located in iPhoneInput.mm. That function is called inside of Input::process() (same source file). Input::process() is a platform function, meaning every platform has its own version. Inside of processMultipleTouches, the TouchDownEvents vector keeps track of our touch points. How does that get filled? Yup, we found the loop:
bool createMouseDownEvent( S32 touchNumber, S32 x, S32 y ) {
   ScreenTouchEvent event;
   event.xPos = x;
   event.yPos = y;
   event.action = SI_MAKE;

   //Luma: Update position
   Canvas->setCursorPos( Point2I( x, y ) );

   TouchDownEvents.push_back( touchEvent( touchNumber, x, y ) );
   Game->postEvent(event);

   return true; //return false if we get bad values or something
}

Is the above bad code? Absolutely not. It works and can be extended easily. The question is, can it or should it be extended to handle custom gestures? The very important issue we have to start thinking about is how all of this will be presented to TorqueScript and the rest of the engine.
So now we review the options:
1. Implement the new UIViewController and gesture recognizer system
2. Overhaul the current touch system to better handle custom gesture calculations
3. Do both and find out which one works the best.
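For a feel of what option 2 could look like at the C++ level, here is a hedged sketch that mirrors the createMouseDownEvent pattern above. ScreenGestureEvent, GestureAction and the local queue are my assumptions, not existing engine types; the real engine posts through Game->postEvent:

```cpp
#include <vector>

// Hypothetical gesture event types (not engine code).
enum GestureAction { GESTURE_PINCH, GESTURE_TAP, GESTURE_SWIPE };

struct ScreenGestureEvent {
    GestureAction action;
    float xPos, yPos;   // centroid of the touches involved
    float value;        // pinch scale, tap count, or swipe velocity
};

// Stand-in for the engine's event queue (Game->postEvent in real code).
static std::vector<ScreenGestureEvent> gGestureEvents;

bool createPinchEvent(float x, float y, float scale) {
    ScreenGestureEvent event;
    event.action = GESTURE_PINCH;
    event.xPos = x;
    event.yPos = y;
    event.value = scale;
    gGestureEvents.push_back(event);  // mirrors TouchDownEvents.push_back
    return true;
}
```

The appeal of this route is that gestures ride the same queue as every other input event, so the script-facing side stays uniform.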
#5
03/25/2011 (9:24 am)
Working Prototype
It's apparent I've been doing some studying. I wouldn't be cramming "Read. Read Code. Code" down your throat if I did not practice it myself. I've spent the past week reading docs, books and code. Satisfied with the knowledge I gained, I was ready to code. I had a one hour window yesterday to experiment. There was a tough choice to make. Do I create a sample gesture app using a blank iOS project or do I dive right into iTorque 2D and start wrecking stuff?

INTO iTORQUE 2D WE GO

Boot up the mac. Open the R&D version of iT2D. Create a new project. Open Xcode_iPhone project. Stare at screen blankly for about 10 minutes...
...
...
I went with option 1 first. Add a UIViewController to iTorque 2D and implement the gesture recognizers for pinch and tap. In less than an hour, I had the code in, compiling and (shockingly) gesture data being pushed to TorqueScript. Wait, that worked on the first attempt?
Is not possible!

Did I run Instruments? No. Did I perform exhaustive compatibility testing? No. This is a prototype, so it doesn't have to be pretty. I would not put it in a final build of iT2D or my game until I have tested it. However, the goal was to test the feasibility of one of the options. Here's what I did:
1. Right-clicked on the iTorque 2D Game project
2. Chose Create New File...
3. Selected UIViewController subclass
4. Made it a subclass of UIViewController, with "With XIB for user interface" disabled
5. Named the class "gestureView" (no quotes)
6. Renamed gestureView.m to gestureView.mm (to work with the C++ system)
7. Added the following code to the top of gestureView.mm
#include "platformiPhone/platformiPhone.h"
#include "console/consoleTypes.h"
8. Added the following code to gestureView.mm (anywhere)
float lastScaleFactor = 1;
// Implement viewDidLoad to do additional setup after loading the view, typically from a nib.
- (void)viewDidLoad
{
    UITapGestureRecognizer *tapGesture = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleTapGesture:)];
    tapGesture.numberOfTapsRequired = 2;
    [self.view addGestureRecognizer:tapGesture];

    UIPinchGestureRecognizer *pinchGesture = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(pinchGesture:)];
    [self.view addGestureRecognizer:pinchGesture];

    [pinchGesture release];
    [tapGesture release];

    Con::addVariable("pinchScale", TypeF32, &lastScaleFactor);

    [super viewDidLoad];
}

- (IBAction) handleTapGesture:(UIGestureRecognizer *) sender
{
    // We have tapping
}

- (IBAction) pinchGesture:(UIGestureRecognizer *) sender
{
    CGFloat factor = [(UIPinchGestureRecognizer *) sender scale];

    if(sender.state == UIGestureRecognizerStateEnded)
    {
        if(factor > 1)
        {
            lastScaleFactor += (factor - 1);
        }
        else
        {
            lastScaleFactor *= factor;
        }
    }
}

9. Open TGBAppDelegate.mm and add the following to the top:
#include "gestureView.h"
10. Just above applicationDidFinishLaunching, create a gestureView:
gestureView *gestureViewTest;
11. In applicationDidFinishLaunching, add the following just above #ifdef TORQUE_ALLOW_ORIENTATIONS:
bool multiTouch = dAtob( Con::getVariable( "$pref::iDevice::UseMultitouch" ) );
if(multiTouch)
{
gestureViewTest = [[gestureView alloc] init];
[gestureViewTest.view sizeToFit];
[self.window addSubview:gestureViewTest.view];
}

12. In - (void)dealloc, perform some cleanup:
bool multiTouch = dAtob( Con::getVariable( "$pref::iDevice::UseMultitouch" ) );
if(multiTouch)
    [gestureViewTest release]; // release rather than calling dealloc directly

13. ???
14. Profit
The above gets the data from the iOS UIPinchGestureRecognizer and exposes it to TorqueScript as the variable "$pinchScale". I created a couple of examples using that value: camera zooming, sprite scaling. Seemed to work quite well. I did not see a performance hit just from looking at the screen (no hitches or lag), but that doesn't mean this is 100% correct. As a prototype, this was a success.
When time ran out, I came up with this thread idea. I have not started on an enhanced touch system and custom gesture code, but I know where I would start. Before I created this thread, I hopped into the GarageGames IRC channel to bounce a few ideas off other iT2D users. I had a fuzzy idea of how I might start with improving the system for my game, but I was curious to find out what other iT2D users would want. The results were pretty interesting.
#6
03/25/2011 (9:24 am)
Initial Theories
Sometimes I get stuck in a prototyping loop. Before I go any further down the rabbit hole, I want to plan out some possible end results. In other words, what would I want a reusable system for improved touch and gestures to look like? I'm currently sitting on two ideas:

1. Improve the current oniPhoneXXX functions and create new functions in TorqueScript that handle gestures. The new code would look something like this:
function oniPhoneTouchDown( %touchNums, %touchX, %touchY )
{
}
function oniPhoneTouchUp( %touchNums, %touchX, %touchY )
{
}
function oniPhoneTouchMove (%touchNums, %touchX, %touchY )
{
}
function oniPhonePinch (%touchNums, %touchX, %touchY, %scaleFactor )
{
}
function oniPhoneTap (%touchNums, %touchX, %touchY, %tapCount )
{
}

Clearly, the above code looks like it would solve the issues that prompted this thread. However, I like options. I really started thinking about code duplication, the performance cost of constantly going between C++ and TorqueScript and whether multiple functions were easier or harder for people to use...
2. One function to rule them all:
// %touches - Collection of touch coordinates (in screen space)
// %touchCount - Number of touch events on the screen
// %event - What kind of touch events are happening
// %pinchScale - If the event is a pinch, this value will be filled
// with something other than 0
// %tapCount - If the event is a tap, this value will be filled with something other than 0
function iOSInputEvent(%touches, %touchCount, %event, %pinchScale, %tapCount)
{
// %event == 0, touch down
// %event == 1, touch up
// %event == 2, touch move
// %event == 3, pinch
// %event == 4, tap
// %event == 5, swipe
}

I'm starting to like the single function approach more and more. There's going to be less bouncing around, all my input handling is in a single space, I get all the data I can possibly need and I have reduced potential script overhead.
Seeing these two options really helps set the right frame of mind when going back into prototyping mode. I also have stability and performance to consider, so my prototyping is going to be more focused. All in all, "Read. Read Code. Code" got me extremely far in a short amount of time. Now it's just a matter of repeating the process, slowly narrowing down the problems until a solution is finalized.
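As a thought experiment on the single-function idea, the engine-side marshaling could look something like this. In the real engine the call would go through Con::executef; the stub below just formats the call so the argument flow is visible. Everything here except the event codes from the comment block above is an assumption:

```cpp
#include <cstdio>
#include <string>

// Event codes matching the iOSInputEvent comment block.
enum InputEventType {
    TOUCH_DOWN = 0, TOUCH_UP = 1, TOUCH_MOVE = 2,
    PINCH = 3, TAP = 4, SWIPE = 5
};

// Stand-in for Con::executef("iOSInputEvent", ...): formats the script
// call that the engine would fire for every input event.
std::string dispatchInputEvent(const std::string& touches, int touchCount,
                               InputEventType event, float pinchScale,
                               int tapCount) {
    char buffer[128];
    std::snprintf(buffer, sizeof(buffer),
                  "iOSInputEvent(\"%s\", %d, %d, %.2f, %d)",
                  touches.c_str(), touchCount, event, pinchScale, tapCount);
    return buffer;
}
```

One call site in C++, one handler in TorqueScript: that is the whole appeal of the unified approach.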
#7
03/25/2011 (9:25 am)
Your Thoughts?
I'm really interested in what you all think about this thread and its contents. I'm excited about the potential discussion, but I can tell you I'm going to find the following most helpful:

1. Point out anything I was wrong about
2. Weigh in on the options I presented, even offer your own
3. Poke holes where you need to
4. Set a priority for this kind of work. Is it more or less important than retina support? What about Game Center? Box 2D?
I'm unlocking this now. Let's hear your side.
#8
03/26/2011 (4:13 am)
Isn't this your day off?
#9
03/26/2011 (5:16 am)
Hmmm, racking my brains over this.

I think we should keep the three functions as with the mouse. Down, Up and Move. Maybe even leave the functions as MouseUp etc. for easy testing/porting/external device support rather than have two mouse-like input types with different names.
I think we should keep the same format but add an extra variable or two:
onMouseMove(%this, %modifier, %worldPosition, %clicks, %scale);
Scale just gives us the zoom factor between two fingers (defaulting to 1 in all other situations). Ultimately a pinch has a central point, which is what gets placed in %worldPosition, dealing with translation this way too. Swipe is just looking at mouse movement over time (perhaps a C++ tracked %accel variable).
onMouseUp(%this, %modifier, %worldPosition, %clicks)
%clicks can let us know if a tap occurred and how many taps took place.
The important thing is that the C++ underbelly keeps track of which finger touched which object where mouseevents were enabled. If a user wants to count the number of fingers on a single object they can just increment/decrement a counter on mouse-down and mouse up but I suspect the most typical scenario will be one finger per object, aside from complex manipulative gestures, and we need to support multiple fingers on multiple objects.
I think we're potentially overcomplicating matters by adding to what we have, I already found this when mixing the itouch functions with mousedown functions when porting code from TGB.
My suggestion is that we scrap all iXXX based commands and go with mouseup/down/move, adding 1 to 2 variables to move. Porting is easy, it will likely work better with any future iPhone peripherals, the system is proven easy to use and people are familiar with it. I personally very much like these three commands for being simple, with the ability to use them in anything from a very simple fashion up to a very complex fashion.
Most of the work needs to go into C++ in tracking all available fingers offered by iOS and allowing each one to have a state and an associated object if they're currently touched down.
Generally only the first finger placed on an object on iPhone can cause movement, except for pinch, where:
The first two fingers to touch down on an object are noted, and if movement is detected on one, we track the change in their midpoint location to determine movement, and we track the change in their distance from each other to determine scale.
#10
03/26/2011 (10:12 am)
@Alistair - There are two problems with continuing to use the onMouseXXX functions:
1. It's been shown that new users get extremely confused. "Wait, mouse functions? I'm trying to use a touch screen!"
2. One way or another the input system needs to be revamped. Why continue to extend a system designed for PC and Mac instead of continuing to push the engine toward more iOS-driven standards? A chance to improve the input system is a chance to break old ways.
On the flip side, there are two benefits to the onMouseXXX system:
1. As mentioned in the documentation, onMouseXXX functions know exactly what object is being clicked on.
2. Torque 2D and previous iTorque 2D users have a chance to port their code over.
#11
03/26/2011 (11:25 am)
How about a transitional version? Deprecate the terrible old system, add iOS-friendly functionality, and remove the old system in a later release when people have gotten tired of "THIS IS OUTDATED! USE <function> INSTEAD!" warnings :)
Touch-based devices operate differently from desktop computers, so it's sometimes better to force people out of their old ways to make a better interface. Input needs special attention on mobile devices. It's about more than just screen real estate, although that can affect how you design your input methods. Your hand is often in the way on an iPhone, so tap-to-drag is more commonly useful than hold-to-drag.
#12
03/26/2011 (12:44 pm)
Regarding the function iOSInputEvent, is this just an idea you are currently working on? When I tried it, just placing echoes inside, there were no returns. I realize that gestures may not currently be possible in iTGB, only in Xcode, but if it is possible, here is my thinking...
To have pinch zoom we must measure scale and velocity. Velocity is a function of time, so we must record the position of the original touches, the distance they travel and the time it took. I can get the start positions, finish positions and scale (distance), but I have been trying to call a schedule from the touch functions without success. So I began placing a schedule in the object's mouseDown/Up functions, but having a schedule fire every 100 ms seemed to clog up the system. Weird, I know, but this schedule call severely slowed the program.
// This callback is invoked when the cloud is touched or clicked
function gesturesBehavior::onMouseDown(%this, %modifier, %worldPos)
{
    // Begin the timer used to measure the time between down and up mouse events
    $timer = 0;
    $timerOn = true;
    %this.schedule(100, "startTimer");
}

function gesturesBehavior::startTimer(%this)
{
    if ($timerOn)
    {
        // Increment the timer; on mouse-up the timer stops, and this count
        // is used as the time measurement to calculate velocity.
        $timer = $timer + 1;
        echo("$timer" @ $timer);
        %this.schedule(100, "startTimer");
    }
}
I need to better figure out how this can be done, and not to sound like a dick, but shouldn't it be oTGB's job to provide a simple example of how to make pinch zoom work? Or are you still working on that? Sorry in advance if this is frustrating, but it is for me, as I have spent the past day trying to get pinch zoom to work.
My code for pinch zoom
function oniPhoneTouchMove( %touchCount, %touchX, %touchY )
{
    if (%touchCount == 2)
    {
        echo("on iPhone Touch Move");
        /// touch 1 xy pos
        %z1FmPx = getWord(%touchX, 0);
        %z1FmPy = getWord(%touchY, 0);
        echo("%z1FmPx" @ %z1FmPx);
        echo("%z1FmPy" @ %z1FmPy);
        /// touch 2 xy pos
        %z2FmPx = getWord(%touchX, 1);
        %z2FmPy = getWord(%touchY, 1);
        echo("%z2FmPx" @ %z2FmPx);
        echo("%z2FmPy" @ %z2FmPy);
        /// the vectors of the running touch positions
        %zCP1 = %z1FmPx SPC %z1FmPy;
        %zCP2 = %z2FmPx SPC %z2FmPy;
        /// compare distances of positions
        %zRPCompare = t2dVectorDistance(%zCP1, %zCP2);
        echo("%zRPCompare" @ %zRPCompare);
        /// compare the original distance of the finger positions (recorded in touchDown) to the running distance
        if ($zOGCompare < %zRPCompare)
        {
            %zoomBy = %zRPCompare / $zOGCompare;
            /// this call to zoom is currently commented out, as it is not allowing the system to run
            //sceneWindow2D.setCurrentCameraZoom(%zoomBy);
        }
        else if ($zOGCompare > %zRPCompare)
        {
            %zoomBy = %zRPCompare / $zOGCompare;
            /// this call to zoom is currently commented out, as it is not allowing the system to run
            //sceneWindow2D.setCurrentCameraZoom(%zoomBy);
        }
    }
}
#13
03/26/2011 (1:09 pm)
Quote: Regarding the function iOSInputEvent, is this just an idea you are currently working on? When I tried it, just placing echoes inside, there were no returns.
This is all research and development. In my first foreword note:
Quote: 1. The content of this thread is not guaranteed to be in an official release, nor does it allude to an upcoming release. This is part of my research and development.
Thanks for the insight on what you are trying to do with the current system.
#14
03/26/2011 (1:41 pm)
So do you think it's possible? Any ideas on how to record velocity? Velocity is also required for creating the "slide to dampen" (inertial deceleration) effect when dragging an object, like in the web browser on Apple devices. Also, why would
    /// in the touch move function
    sceneWindow2D.setCurrentCameraZoom();
... be as hard for the program to digest as it seems to be?
#15
03/26/2011 (2:32 pm)
@rennie - Sorry, this isn't a troubleshooting thread. This is dedicated to my original topic and moving forward with a new system, not so much fixing or working with the current one.
#16
03/26/2011 (2:52 pm)
@Michael - Perhaps just have functions with identical functionality but different names if someone prefers to use touch terminology, but honestly I think it's more confusing to have two sets of functions, even if one is more appropriately named.
What I'm also trying to say is that the input system doesn't need revamping, at least not the front end. The core concepts are good and simple, and touch devices work just like mice realistically anyway. We just need to take what we have, a single-click single-object system, and expand it to multi-click multi-object with support for gestures on single objects; no need to rewrite it. I honestly believe this would get the most use out of iTorque: taking the existing simple system and core concepts and expanding it to deal with A. gestures and B. multi-touch.
With the current system, if I want to check that a click was made on an object, I just write onMouseDown and that's it. With the combined function, even at the most entry level you're going to have an if/switch to determine the event in question, the number of fingers, etc., and there's technically no need for a pinch/swipe event, as they can be interpreted as standard mouse movement.
To answer your question about expanding a system built for PC and Mac:
- We know it's very easy to use, portable, and has an existing user base/documentation.
- Touch screen functionality is not dissimilar to mice; a standard touch system is identical in the manner you handle it, and the behaviour of multi-touch is like having multiple devices. There is also the concept of gestures, but these need not be directly associated with multi-touch screens.
- Macs now have an official gesture touch mouse, and Microsoft and various third parties also have touch gesture mice for Windows; my MacBook has a multi-touch gesture pad, interchangeable with a plug-in mouse. So any functionality we implement is also very relevant to TGB and ideally should end up in the next release. In short, multi-touch/gesture isn't an iPhone-specific concept, and this is my reason for wanting a system that is standard across all current versions of Torque.
#18
03/26/2011 (5:28 pm)
@Alistair - I think you are underestimating the difference between traditional input devices and touch devices. Sure, at the core they both have an event and a coordinate for the event. However, trying to support it all creates a lot of bloat. The inputs for an iOS device are finite. For a desktop mouse, you have a lot of extra functionality and things to check for.
Quote: With the current system if I want to check a click was made on an object I just say onMouseDown and that's it
Except you have to do that for every object that needs to use it. So every object in the scene has to enable callbacks for onMouseXXX functions, which means multiple manual steps and more overhead.
Btw, this is exactly the kind of back and forth I was hoping for. This is key to trying to solidify a plan.
#19
03/26/2011 (8:49 pm)
@Michael - Aside from extra buttons, which may crop up on future iOS devices (Apple already has a patent to expand their touch technology in this way), what extra functionality is there between the two? Particularly in regard to Torque's current implementation, which primarily only looks at the left mouse button, with separate commands that can remain unimplemented for focus and right-click events (which you may still want to keep code for, on the off chance there's an external peripheral in the future).
I accept that enabling/checking callbacks causes overhead, but the overhead of checking the event type and managing which objects you're interested in manually and from script would be far greater, both in time and CPU usage. If you were to expose touch events for each object, you'd still have the same issue regarding callbacks. If that level of potential optimisation mattered to me, I'd be writing all my touch code in C++ as opposed to TorqueScript. What makes the current system appealing is the ease of use.
I guess what I'm saying is I don't see how the other suggestions at this stage better expose iOS functionality, at the loss of portability/ease of use etc. Perhaps once we have some ideas finalised we could write out some common usage scenarios implementing each feature and see which is harder to code or results in more overhead.
#20
03/27/2011 (1:01 pm)
Touch isn't necessarily about extra functionality, although up to 11 clicks at the same time, with EACH CLICK having a different location, I dare say IS additional functionality over one mouse. I challenge you to find a mouse which can be in 11 places at once :)
Here's something which doesn't translate to mousing at all:
mattgemmell.com/2010/05/09/ipad-multi-touch
Employee Michael Perry
ZombieShortbus
Read. Read Code. Code
I've said it before. I've put it in the docs. I'm saying it now. And guess what? I'll continue to hammer "Read. Read Code. Code" into the brains of everyone until it is a common phrase on this site. Why? Because the usage of it in our forums will greatly enhance our community. Here's how:
That sucks. Sure, new users can sometimes be annoying. So can veteran Torque users. At some point, we all started off as new users and someone found us annoying. A good community member will at least point a new user in the right direction if they don't have time to hold their hand.
Here's what I would love to see:
That's awesome. Is it a lot of work? No. That took me 5 minutes to do and has provided a much better newcomer experience for our community. This is perhaps the best way to answer a "noob" question if you don't have time to provide a solution or you don't know a solution. Anyway, that's my rant. Let's apply it to this thread.
Read How Apple Does it
Touch Events
In the latest version of iTorque 2D, the right methods are being used:
If you read the docs I just linked, you will find the UIResponder Class Reference, which details what each one does. Alright, so it's good to know iTorque 2D is connecting to the official API.
Gesture Recognizers
This does not exist in iTorque 2D. That appears to be what we want, right? Well, it's not that simple. If you read the above documentation, really read through, you will notice there is a major gap in functionality between touches and gestures.
My understanding of the docs is that the touchesXXX methods are standalone. All they need is a window and a running app delegate. Unfortunately, gestures need more: the UIGestureRecognizer subclasses require a UIView of some kind.
How did I glean this highly useful piece of information? I READ the docs. How do I prove this information? I started reading code. Check out the Touches Sample Code. Specifically, click on the MyView.m class:
I've removed a lot of extra code to focus on the practice:
- (void)addGestureRecognizersToPiece:(UIView *)piece
{
    UIPinchGestureRecognizer *pinchGesture = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(scalePiece:)];
    [pinchGesture setDelegate:self];
    [piece addGestureRecognizer:pinchGesture];
    [pinchGesture release];
}
You need to have some understanding and experience working with views and UIKit to immediately get what's going on, but that's why you have been reading docs and code...right? If you aren't, then I recommend you do. If you are only interested in the end result, then you may want to skip the next few posts and jump right to the Initial Theories and Your Thoughts posts.