Monthly Archive: May 2011

If you want to add an animation to your app, the first thing that comes to mind is probably an animated GIF. But when you do some research on how to use animated GIFs in Android, you will find out that Android does not play this file type: it will always just display one static frame of the animation. Android does, however, provide two powerful mechanisms for creating various types of animations. One is tweened animation, where you define transformations such as alpha, rotation, scaling and position changes on your objects, optionally combined with interpolators that control acceleration. The other is a frame-by-frame mechanism, where you define a set of drawable resources and the time intervals for which they are displayed. In this blog post I want to give an introduction on how such a frame-by-frame animation can be implemented in Android.

If you are interested in the tweened animation mechanism, I suggest the blog post at http://stuffthathappens.com/blog/2008/11/13/android-animation-101/, which shows some basic usage of this animation type.
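Just to give a flavour of that mechanism, a simple fade-out tween might look like the following minimal sketch; someView stands for any View you already hold a reference to, and the class name is made up for illustration:

import android.view.View;
import android.view.animation.AlphaAnimation;

// Minimal sketch of a tweened animation: fade a view out over half a second.
public class TweenExample {

	static void fadeOut(View someView) {
		AlphaAnimation fadeOut = new AlphaAnimation(1.0f, 0.0f); // from fully opaque to fully transparent
		fadeOut.setDuration(500);     // duration in milliseconds
		fadeOut.setFillAfter(true);   // keep the final alpha value after the animation ends
		someView.startAnimation(fadeOut);
	}
}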

For this blog post I’ve created a small example app which will just play such a frame-by-frame animation in an activity. The animation which I’ve used is the following Newton’s cradle animation.

Animation of Newton's Cradle (animated gif)
As always, you can browse or download the source code of the example app in our SVN repository at Google Code (http://code.google.com/p/android-animation-example/) or use svn to check the project out.

Basics of the Frame-by-Frame Animation

A frame-by-frame animation is defined in an XML description file in the drawable folder of your project. This XML animation can then be referenced in your code like any other drawable, so it is possible, for example, to set it as the background of an ImageView. If the animation drawable is set as a background resource, Android handles the inflation, and the resulting animation object (android.graphics.drawable.AnimationDrawable) can later be retrieved with the getBackground method.

It’s also possible to create frame-by-frame animations at runtime using the AnimationDrawable class directly. This class provides a method which can be used to add frames:

  • addFrame(Drawable frame, int duration)
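For illustration, building such an animation programmatically might look like the following minimal sketch; the helper class and method names are made up, and the frame drawables (frame0, frame1, …) are the ones introduced later in this post:

import android.graphics.drawable.AnimationDrawable;
import android.widget.ImageView;

// Minimal sketch: build a frame-by-frame animation in code instead of XML.
// The class and method names are illustrative only.
public class RuntimeAnimationExample {

	static void attachAnimation(ImageView imageView) {
		AnimationDrawable animation = new AnimationDrawable();
		// each frame is added with its drawable and a duration in milliseconds
		animation.addFrame(imageView.getResources().getDrawable(R.drawable.frame0), 50);
		animation.addFrame(imageView.getResources().getDrawable(R.drawable.frame1), 50);
		// ... add the remaining frames the same way
		animation.setOneShot(false);                // loop the animation
		imageView.setBackgroundDrawable(animation); // pre-API 16 setter, matching this post's time frame
		// start() must be called once the window is visible, see further below
	}
}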

In the following example I will use an XML definition of the animation rather than the programmatic way of adding frames.

The AnimationDrawable class also provides several methods which can be used to control the animation playback:

  • void setOneShot(boolean oneShot)

    This method tells the AnimationDrawable whether the animation should be repeated (oneShot = false) or played only once (oneShot = true).

  • void start()

    With the help of this method it is possible to start the animation playback. The animation will loop if this was previously configured with setOneShot(false).

    This method should not be confused with the public void run() method, which should not be called directly.

  • void stop()

    This method can be used to stop the animation playback.
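Put together, controlling an animation that has been set as a view background might look like the following small sketch; the helper class and method names are made up, and the full activity code follows later in this post:

import android.graphics.drawable.AnimationDrawable;
import android.widget.ImageView;

// Small sketch: retrieve the AnimationDrawable from a view background and control it.
// The class and method names are illustrative only.
public class AnimationControlExample {

	static void setAnimationRunning(ImageView imageView, boolean running) {
		// works because an AnimationDrawable was set as the background resource
		AnimationDrawable anim = (AnimationDrawable) imageView.getBackground();
		anim.setOneShot(false);   // false = loop, true = play only once
		if (running) {
			anim.start();         // begin playback (once the window is visible)
		} else {
			anim.stop();          // stop playback, e.g. when the activity is hidden
		}
	}
}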

Defining a Frame-by-Frame Animation as an XML Drawable

To define a frame-by-frame animation drawable we have to create an XML file within the drawable folder of the project. The root tag for this kind of animation is the animation-list tag. This element provides the attribute android:oneshot which, analogous to the setOneShot method, defines whether the animation should be repeated (false) or played only once (true). Within this root animation-list element there can be multiple item elements, each representing one frame of the animation. An item has two main attributes, android:drawable and android:duration: android:drawable references the drawable which will be used for this frame, and android:duration defines how long this frame will be displayed, in milliseconds.

In our example I’ve taken the Newton’s cradle animation (GIF) and extracted its frames (you can do this, for example, with the help of GIMP). The result of this extraction is 36 frame images named frame0.jpg up to frame35.jpg. These frames are used in the animation.xml file within the drawable folder to create the required animation:

<?xml version="1.0" encoding="utf-8"?>
<animation-list xmlns:android="http://schemas.android.com/apk/res/android"
android:oneshot="false">

	<item android:drawable="@drawable/frame0" android:duration="50" />
	<item android:drawable="@drawable/frame1" android:duration="50" />
	<item android:drawable="@drawable/frame2" android:duration="50" />
	<item android:drawable="@drawable/frame3" android:duration="50" />

	...

	<item android:drawable="@drawable/frame35" android:duration="50" />

</animation-list>

Using a Frame-by-Frame Animation in an Activity

Now that we have defined the animation we can use it in our example activity. The layout of the activity provides an ImageView with the id imageAnimation which will be used to display the animation in the activity:

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
	android:orientation="vertical"
	android:layout_width="fill_parent"
	android:layout_height="fill_parent"
	android:background="#E6E6E6">

	<ImageView android:id="@+id/imageAnimation"
		android:layout_width="wrap_content"
		android:layout_height="wrap_content"
		android:adjustViewBounds="true" />

</LinearLayout>

Within the activity code there are two places where we have to take care of the animation. The first is the onCreate method. Here we set the content view to the corresponding XML layout and then set the background resource of the imageAnimation ImageView to our frame-by-frame animation. Furthermore, we store the reference to the ImageView because we will need it later to actually start the animation:

public class Animation extends Activity {

	ImageView animation;
	@Override
	public void onCreate(Bundle savedInstanceState) {
		super.onCreate(savedInstanceState);
		setContentView(R.layout.main);
		animation = (ImageView)findViewById(R.id.imageAnimation);

		animation.setBackgroundResource(R.drawable.animation);
	}

You might think that we should also start the animation in the onCreate method, but that's not possible: if we call the start method during onCreate, Android won't start the animation. The same applies to the onResume method. Other tutorials might advise you to use a timer in these methods which starts the animation after some delay, but this approach is problematic. If the delay is too short the animation won't start, and if it's too long the user experience suffers. Furthermore, the required delay is device specific, so while the delay might be perfectly chosen for a fast device, on an older and slower device the animation won't start.

The best place to actually start the animation is the onWindowFocusChanged method. The Android API reference describes it as follows: “Called when the current Window of the activity gains or loses focus. This is the best indicator of whether this activity is visible to the user.” As you can imagine, it is sufficient to start the animation once the activity is actually visible to the user, and not before. Furthermore, this method can be used to stop the animation (to free resources) when the activity is no longer visible. To get a reference to the animation we use the getBackground method of the ImageView, which returns an AnimationDrawable because we previously set the background resource to an animation. Then we check whether the activity is visible (hasFocus = true) or not and start or stop the animation accordingly. In our example the corresponding method is implemented as follows:

	@Override
	public void onWindowFocusChanged (boolean hasFocus) {
		super.onWindowFocusChanged(hasFocus);
		AnimationDrawable frameAnimation = 
			(AnimationDrawable) animation.getBackground();
		if(hasFocus) {
			frameAnimation.start();
		} else {
			frameAnimation.stop();
		}
	}
}

Now everything is implemented and the animation will be played whenever the activity is visible to the user.

by Kevin Kratzer

The technical advances of recent years, such as fast processors, rapid data transmission, better displays and advanced miniaturization, allow augmented reality applications to run on a smartphone or a tablet computer. Thus, such applications are making their way into the everyday world. They are mostly designed for marketing, sales and advertising; many are simply gimmicks intended to demonstrate the possibilities of the new technology.

In industry, augmented reality applications have been used for quite some time. However, the necessary hardware is complex: special camera systems, sensors, displays and eye-tracking devices. The applications are therefore limited to specific areas.

In my previous post “Using the Android Interface Definition Language (AIDL) to make a Remote Procedure Call (RPC) in Android” I’ve explained the basics on how inter-process communication can be implemented in Android. Now we will take a look at a specialized field in this area: Using Parcelables as parameters of an AIDL method.
As described in the previous post, a remote method signature may use either primitive Java types or any class that implements the android.os.Parcelable interface. Using this interface it is possible to define arbitrarily complex data types which can be used as parameters or return values of a remote method call.
I’ve created a modified version of the first example app to give you a basic example which relies on a Parcelable object as a method return value.
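To give you a rough idea of what such a class looks like before you dive into the code, here is a minimal Parcelable sketch; the class name and fields are made up for illustration and are not necessarily the ones used in the example app:

import android.os.Parcel;
import android.os.Parcelable;

// Minimal sketch of a Parcelable data type that could be returned by an AIDL method.
// The class name and fields are illustrative only.
public class SensorReading implements Parcelable {

	private final String sensorName;
	private final double value;

	public SensorReading(String sensorName, double value) {
		this.sensorName = sensorName;
		this.value = value;
	}

	// Recreate the object from a Parcel; the read order must match writeToParcel.
	private SensorReading(Parcel in) {
		sensorName = in.readString();
		value = in.readDouble();
	}

	@Override
	public void writeToParcel(Parcel dest, int flags) {
		dest.writeString(sensorName);
		dest.writeDouble(value);
	}

	@Override
	public int describeContents() {
		return 0;
	}

	// Required CREATOR field used by the framework to unmarshal the object.
	public static final Parcelable.Creator<SensorReading> CREATOR =
			new Parcelable.Creator<SensorReading>() {
		public SensorReading createFromParcel(Parcel in) {
			return new SensorReading(in);
		}

		public SensorReading[] newArray(int size) {
			return new SensorReading[size];
		}
	};
}

Remember that such a custom type additionally has to be declared in its own .aidl file (e.g. parcelable SensorReading;) so that it can be referenced from the remote interface definition.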

Think of an iOS app which plays an HTML5 video you can stop, play and interact with. This is exactly what we’re going to build in this tutorial.

The HTML

Because only small changes are necessary, we will start by editing the HTML/JS code provided in this article. First, we attach a JavaScript onclick attribute to the surrounding element of every item via the setupItems() function:

var t = $(items[i][3]);

(Note that the encoding is necessary because the above string is passed as a function parameter.)

As you can see, Overlay.call() is called with the appropriate item id when clicking on a box. call() works similarly to the way we used window.location to submit data to the iOS parent app in a previous article: it takes the id and forwards the page to something like ?cmd=call&param=2. This string can later be read by the iOS app.

That’s it! Now fire up Xcode and create a new view-based application. Call it App Solut InteractiveVideoInterface and open your ViewController header file. Put this code in:

@interface App_Solut_InteractiveVideoInterfaceViewController : UIViewController <UIWebViewDelegate> {
	IBOutlet UIWebView *webView;
	IBOutlet UILabel *label;
}

@property (retain, nonatomic) UIWebView *webView;
@property (retain, nonatomic) UILabel *label;

@end

Save the file, switch to Interface Builder and edit your ViewController's xib file. Drag both a web view and a label onto the screen and connect the outlets you just created. To do so, control-drag from your web view object to File's Owner and select delegate. Then do the same vice versa: control-drag (or right-click-drag if you own a standard mouse) from File's Owner to both the web view and the label and select the appropriate entries from the list. You have now created the connection between your code and your interface.

In your implementation file synthesize both the label and the webview:

@synthesize webView;
@synthesize label;

Similar to the WebView interface code, we add the following delegate method to catch all webpage requests:

- (BOOL)webView:(UIWebView*)webView shouldStartLoadWithRequest:(NSURLRequest*)request navigationType:(UIWebViewNavigationType)navigationType {
	NSString *url = [[request URL] absoluteString];
	NSArray *urlArray = [url componentsSeparatedByString:@"?"];
	NSString *cmd = @"";
	NSMutableArray *paramsToPass = nil;
	if([urlArray count] > 1) {
		NSString *paramsString = [urlArray objectAtIndex:1];
		NSArray *urlParamsArray = [paramsString componentsSeparatedByString:@"&"];
		cmd = [[[urlParamsArray objectAtIndex:0] componentsSeparatedByString:@"="] objectAtIndex:1];
		int numCommands = [urlParamsArray count];
		paramsToPass = [[NSMutableArray alloc] initWithCapacity:numCommands-1];
		for(int i = 1; i < numCommands; i++){
			NSString *aParam = [[[urlParamsArray objectAtIndex:i] componentsSeparatedByString:@"="] objectAtIndex:1];
			[paramsToPass addObject:aParam];
		}
	}
	if([cmd compare:@"call"] == NSOrderedSame) {
		NSString *message = [[paramsToPass objectAtIndex:0] stringByReplacingPercentEscapesUsingEncoding:NSASCIIStringEncoding];
		[label setText:[NSString stringWithFormat:@"You just clicked button #%@", message]];
	}
	/* Only load the page if it is not an app call */
	NSRange aSubStringRange = [url rangeOfString:@"appcall"];
	if(aSubStringRange.length != 0){
		NSLog(@"App call found: request cancelled");
		return NO;
	} else {
		return YES;
	}
}

You can see that the value given in the parameter param is attached to a string which is then set as the label text. In this scenario, every video overlay sends its own ID to the app when it is clicked. From this point on your imagination is set free: you can react to the user's click and do whatever you want.

Download sample project

Again, you can find a sample project at GitHub: InteractiveVideoInterface-Example

A WebView element is a smart and easy way to integrate both platform-independent and dynamic content into an ordinary iOS app. While using it to display webpages is simple, things become more complicated when some kind of interaction between the parent app and the webpage is needed. At the moment there is no common way for a page to send messages to its WebView controller. This article shows the basic principles of a two-way communication, providing a workaround for the missing functionality.
At the end you'll find a download link to a full iOS sample project.

Calling JavaScript code from your App

This part is easy. Assuming you have a WebView set up in your project displaying some page, the method stringByEvaluatingJavaScriptFromString does the job. Example:

[myWebView stringByEvaluatingJavaScriptFromString:@"alert('Hello World!');"];

This will display – you guessed it – an alert message inside the webpage. That way you can call any JavaScript function available to your webpage from the outside, making it a snap to alter pages on the fly.

Calling functions in your App from inside a WebView

As I mentioned earlier, the other way around is far more complicated. First, we have to think about a way a webpage can communicate with an app. Fortunately, the WebView element provides some interesting hooks.
The delegate method shouldStartLoadWithRequest is called whenever a page is requested. Here is an example implementation:

- (BOOL)webView:(UIWebView*)webView shouldStartLoadWithRequest:(NSURLRequest*)request navigationType:(UIWebViewNavigationType)navigationType {
   [...]
}

The great thing about this method is that it is called before the page actually gets loaded and that it expects a boolean return value. Returning YES makes the WebView start loading the requested page, while returning NO stops it from doing so.
Now the plan is as follows: a JavaScript function loads a fictional page, including some GET parameters, via window.location. Because of a special keyword in the URL the app is able to figure out whether a real page should be loaded or an in-app function should be called (an app call). At that point the method can cancel the request and execute the action, or just let the page load.
So the above function needs to get two things done:

  • check whether a normal link or an app call is requested and return the proper boolean
  • extract given parameters and check if they fit into predefined actions

Based on a sample by tetontech, this is the code we are going to talk about:

- (BOOL)webView:(UIWebView*)webView shouldStartLoadWithRequest:(NSURLRequest*)request navigationType:(UIWebViewNavigationType)navigationType {
	NSString *url = [[request URL] absoluteString];
	NSLog(@"Requesting: %@",url);
	NSArray *urlArray = [url componentsSeparatedByString:@"?"];
	NSString *cmd = @"";
	NSMutableArray *paramsToPass = nil;
	// isolate command and parameters
	if([urlArray count] > 1){
		NSString *paramsString = [urlArray objectAtIndex:1];
		NSArray *urlParamsArray = [paramsString componentsSeparatedByString:@"&"];
		cmd = [[[urlParamsArray objectAtIndex:0] componentsSeparatedByString:@"="] objectAtIndex:1];
		int numCommands = [urlParamsArray count];
		paramsToPass = [[NSMutableArray alloc] initWithCapacity:numCommands-1];
		for(int i = 1; i < numCommands; i++){
			NSString *aParam = [[[urlParamsArray objectAtIndex:i] componentsSeparatedByString:@"="] objectAtIndex:1];
			[paramsToPass addObject:aParam];
		}
	}
	if([cmd compare:@"toggleWorking"] == NSOrderedSame){
		NSLog(@"Turning working indicator...");
		if([UIApplication sharedApplication].networkActivityIndicatorVisible == NO) {
			[UIApplication sharedApplication].networkActivityIndicatorVisible = YES;
			NSLog(@"...on");
		} else {
			[UIApplication sharedApplication].networkActivityIndicatorVisible = NO;
			NSLog(@"...off");
		}
	} else if([cmd compare:@"logMessage"] == NSOrderedSame) {
		NSString *message = [[paramsToPass objectAtIndex:0] stringByReplacingPercentEscapesUsingEncoding:NSASCIIStringEncoding];
		NSLog(@"Received JS message: %@",message);
	}
	// only load the page if it is the initial index.html file
	NSRange aSubStringRange = [url rangeOfString:@"index.html"];
	if(aSubStringRange.length != 0){
		return YES;
	} else {
		NSLog(@"App call found: request cancelled");
		return NO;
	}
}

As stated before, this function extracts and isolates every command and parameter given and runs through a couple of if-statements to check whether a command is recognized. If that's the case, additional arguments may be used to execute an action.

In this example there are just two possible actions to call. Requesting ?cmd=toggleWorking turns the spinning network activity indicator in the iPad's status bar on or off depending on its current state. The second action, logMessage, might be called like this: ?cmd=logMessage&param=Hello%20World. It will forward any given message to the debug console as a log.

NSRange aSubStringRange = [url rangeOfString:@"index.html"];
if(aSubStringRange.length != 0){

The last check is really important! Assuming your main page is named index.html, it makes sure that this page gets loaded at startup. All other requests are blocked for now, but you can most likely think of different approaches to make sure app calls don't get loaded.

Download sample project

Head over to GitHub and download the WebViewInterface-Example.
Source: