Thursday, October 29, 2015

New Course on Developing Android Apps for Google Cast and Android TV

Posted by Josh Gordon, Developer Advocate



Go where your users are: the living room! Google Cast lets users stream their favorite apps from Android, iOS and the Web right onto their TV. Android TV turns a TV into an Android device, only bigger!







We've partnered with Udacity to launch a new online course - Google Cast and Android TV Development. This course teaches you how to extend your existing Android app to work with these technologies. It’s packed with practical advice, code snippets, and deep dives into sample code.



You can take advantage of both, without having to rewrite your app. Android TV is just Android on a new form factor, and the Leanback library makes it easy to add a big screen and cinematic UI to your app. Google Cast comes with great samples and guides to help you get started. Google also provides the Cast Companion Library, which makes it faster and easier to add cast to your Android app.



This class is part of our larger series on Ubiquitous Computing across other Google platforms, including Android Wear and Android Auto. Designed as short, standalone courses, you can take any on its own, or take them all!



Get started now and try it out at no cost; your users are waiting!






Wednesday, October 28, 2015

Learn top tips from Kongregate to achieve success with Store Listing Experiments

Posted by Lily Sheringham, Developer Marketing at Google Play


Editor’s note: This is another post in our series featuring tips from developers finding success on Google Play. We recently spoke to games developer Kongregate, to find out how they use Store Listing Experiments successfully. - Ed.



With Store Listing Experiments in the Google Play Developer Console, you can conduct A/B tests on the content of your store listing pages. Test versions of the text and graphics to see which ones perform best, based on install data.



Kongregate increases installs by 45 percent with Store Listing Experiments



Founded in 2006 by brother and sister Jim and Emily Greer, Kongregate is a leading mobile games publisher specializing in free to play games. Kongregate used Store Listing Experiments to test new content for the Global Assault listing page on Google Play. By testing with different audience sizes, they found a new icon that drove 92 percent more installs, while variant screenshots achieved an impressive 14 percent improvement. By picking the icons, screenshots, and text descriptions that resonated most with users, Kongregate saw installs increase by 45 percent on the improved page.



Kongregate’s Mike Gordon, VP of Publishing; Peter Eykemans, Senior Producer; and Tammy Levy, Director of Product for Mobile Games, talk about how to successfully optimise mobile game listings with Store Listing Experiments.







Kongregate’s tips for success with Store Listing Experiments



Jeff Gurian, Sr. Director of Marketing at Kongregate, also shares his do’s and don’ts on how to use experiments to convert more of your visitors, thereby increasing installs. Check them out below:
























Do’s

  • Do start by testing your game’s icon. Icons can have the greatest impact (positive or negative) on installs — so test early!

  • Do have a question or objective in mind when designing an experiment. For example, does artwork visualizing gameplay drive more installs than artwork that doesn’t?

  • Do run experiments long enough to achieve statistical significance. How long it takes to get a result can vary due to changes in traffic sources, location of users, and other factors during testing.

  • Do pay attention to the banner, which tells you if your experiment is still “in progress.” When it has collected enough data, the banner will clearly tell you which variant won or if it was a tie.

Don’ts

  • Don’t test too many variables at once. It makes it harder to determine what drove results. The more variables you test, the more installs (and time) you’ll need to identify a winner.

  • Don’t test artwork only. Also test screenshot ordering, videos, and text to find what combinations increase installs.

  • Don’t target too small an audience with your experiment variants. The more users you expose to your variants, the more data you collect, the faster you get results!

  • Don’t interpret a test where the control attribute performs better than variants as a waste. You can still learn valuable lessons from what “didn’t work.” Iterate and try again!




Learn more about how Kongregate optimized their Play Store listing with Store Listing Experiments. Learn more about Google Play products and best practices to help you grow your business globally.



Tuesday, October 27, 2015

Introducing a New Course on Developing Android Apps for Auto

Posted by Wayne Piekarski, Developer Advocate



Android Auto brings the Android platform to the car in a way that’s optimized for the driving experience, allowing the user to keep their hands on the wheel, and their eyes on the road. To learn how to extend your existing media and messaging apps to work within a car, we collaborated with Udacity to introduce a new course on Ubiquitous Computing with Android Auto.






Designed by Developer Advocates from Google, the course shows you how to take advantage of your existing Android knowledge to work on this new platform. The best part is that Android Auto is based on extensions to the regular Android framework, so you don't need to rewrite your existing apps to support it. You'll learn how to implement messaging apps by using notification extensions. You'll also learn how audio players just work on Android Auto when you use the Android media APIs. In both cases, we work through some simple Android samples, and then show what changes are needed to extend them for Android Auto. Finally, we show a complete music-playing sample and how it works across other platforms like Android Wear.
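
To give a flavor of what that looks like, here is a minimal, hypothetical sketch of extending a messaging notification for Android Auto with the support library's CarExtender; the conversation name, message text, PendingIntents, RemoteInput, and resource IDs are placeholders, not code from the course:

// Build an "unread conversation" that Android Auto can read aloud and reply to.
// readIntent, replyIntent, remoteInput and NOTIFICATION_ID are placeholders.
NotificationCompat.CarExtender.UnreadConversation.Builder conversationBuilder =
        new NotificationCompat.CarExtender.UnreadConversation.Builder("Alice")
                .setLatestTimestamp(System.currentTimeMillis())
                .addMessage("Are we still on for lunch?")
                .setReadPendingIntent(readIntent)
                .setReplyAction(replyIntent, remoteInput);

Notification notification = new NotificationCompat.Builder(context)
        .setSmallIcon(R.drawable.ic_message)
        .setContentTitle("Alice")
        .setContentText("Are we still on for lunch?")
        .extend(new NotificationCompat.CarExtender()
                .setUnreadConversation(conversationBuilder.build()))
        .build();

NotificationManagerCompat.from(context).notify(NOTIFICATION_ID, notification);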



If you have an interest in Android-based messaging or media apps, then you need to learn about Android Auto. Users want to be able to take their experience to other places, such as their cars, and not just on their phones. Having Auto support will allow you to differentiate your app, and give users another reason to try it.



This class is part of our larger series on Ubiquitous Computing across Google platforms, such as Android Wear, Android Auto, Android TV, and Google Cast. Designed as short, standalone courses, you can take any course on its own, or take them all! The Android Auto platform is a great opportunity to add functionality that will distinguish your app from others. This Udacity course will get you up to speed quickly with everything you need to get started.



Get started now and try it out at no cost; your users are waiting!



Monday, October 26, 2015

New in Android Samples: Authenticating to remote servers using the Fingerprint API

Posted by Takeshi Hagikura and Yuichi Araki, Developer Programs Engineers



As we announced in the previous blog post, Android 6.0 Marshmallow is now publicly available to users. Along the way, we’ve been updating our samples collection to highlight exciting new features available to developers.



This week, we’re releasing AsymmetricFingerprintDialog, a new sample demonstrating how to securely integrate with compatible fingerprint readers (like Nexus Imprint) in a client/server environment.



Let’s take a closer look at how this sample works, and talk about how it complements the FingerprintDialog sample we released earlier during the public preview.



Symmetric vs Asymmetric Keys



The Android Fingerprint API protects user privacy by keeping users’ fingerprint features carefully contained within secure hardware on the device. This guards against malicious actors, ensuring that users can safely use their fingerprint, even in untrusted applications.



Android also provides protection for application developers, providing assurances that a user’s fingerprint has been positively identified before providing access to secure data or resources. This protects against tampered applications, providing cryptographic-level security for both offline data and online interactions.



When a user activates their fingerprint reader, they’re unlocking a hardware-backed cryptographic vault. As a developer, you can choose what type of key material is stored in that vault, depending on the needs of your application:




  • Symmetric keys: Similar to a password, symmetric keys allow encrypting local data. This is a good choice for securing access to databases or offline files.

  • Asymmetric keys: Provide a key pair, consisting of a public key and a private key. The public key can be safely sent across the internet and stored on a remote server. The private key can later be used to sign data, such that the signature can be verified using the public key. Signed data cannot be tampered with, and positively identifies the original author of that data. In this way, asymmetric keys can be used for network login and authenticating online transactions. Similarly, the public key can be used to encrypt data, such that the data can only be decrypted with the private key.



This sample demonstrates how to use an asymmetric key, in the context of authenticating an online purchase. If you’re curious about using symmetric keys instead, take a look at the FingerprintDialog sample that was published earlier.
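
For comparison, here is a minimal sketch of generating a fingerprint-gated symmetric key in the Android keystore, roughly along the lines of what the FingerprintDialog sample does (KEY_NAME is a placeholder key alias):

// Generate an AES key in the Android keystore that can only be used after
// the user has authenticated with a fingerprint.
KeyGenerator keyGenerator = KeyGenerator.getInstance(
        KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore");
keyGenerator.init(new KeyGenParameterSpec.Builder(KEY_NAME,
        KeyProperties.PURPOSE_ENCRYPT | KeyProperties.PURPOSE_DECRYPT)
        .setBlockModes(KeyProperties.BLOCK_MODE_CBC)
        .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_PKCS7)
        .setUserAuthenticationRequired(true)
        .build());
SecretKey secretKey = keyGenerator.generateKey();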



Here is a visual explanation of how the Android app, the user, and the backend fit together using the asymmetric key flow:





1. Setting Up: Creating an asymmetric keypair



First you need to create an asymmetric key pair as follows:



KeyPairGenerator keyPairGenerator = KeyPairGenerator.getInstance(
        KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore");
keyPairGenerator.initialize(
        new KeyGenParameterSpec.Builder(KEY_NAME,
                KeyProperties.PURPOSE_SIGN)
                .setDigests(KeyProperties.DIGEST_SHA256)
                .setAlgorithmParameterSpec(new ECGenParameterSpec("secp256r1"))
                .setUserAuthenticationRequired(true)
                .build());
keyPairGenerator.generateKeyPair();


Note that .setUserAuthenticationRequired(true) requires that the user authenticate with a registered fingerprint to authorize every use of the private key.

Then you can retrieve the created private and public keys as follows:




// Retrieve the public key (this is what you will enroll with your backend)
KeyStore keyStore = KeyStore.getInstance("AndroidKeyStore");
keyStore.load(null);
PublicKey publicKey =
        keyStore.getCertificate(MainActivity.KEY_NAME).getPublicKey();

// Retrieve a handle to the private key (the key material never leaves the keystore)
PrivateKey key = (PrivateKey) keyStore.getKey(KEY_NAME, null);


2. Registering: Enrolling the public key with your server



Second, you need to transmit the public key to your backend so that in the future the backend can verify that transactions were authorized by the user (i.e. signed by the private key corresponding to this public key).
This sample uses a fake backend implementation for reference, so it only mimics the transmission of the public key; in a real app you would need to transmit the public key over the network.



boolean enroll(String userId, String password, PublicKey publicKey);
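
As one illustration that is not part of the sample, the public key could be serialized for that call by Base64-encoding its X.509 encoding, and the (fake) backend could then reconstruct it from the same bytes:

// Encode the public key as a string so it can be sent to the backend.
String encodedPublicKey = Base64.encodeToString(
        publicKey.getEncoded(), Base64.NO_WRAP);

// On the receiving side, rebuild the PublicKey from the transmitted bytes.
byte[] keyBytes = Base64.decode(encodedPublicKey, Base64.NO_WRAP);
PublicKey restoredKey = KeyFactory.getInstance("EC")
        .generatePublic(new X509EncodedKeySpec(keyBytes));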


3. Let’s Go: Signing transactions with a fingerprint



To allow the user to authenticate the transaction, e.g. to purchase an item, prompt the user to touch the fingerprint sensor.





Then start listening for a fingerprint as follows:



Signature signature = Signature.getInstance("SHA256withECDSA");
KeyStore keyStore = KeyStore.getInstance("AndroidKeyStore");
keyStore.load(null);
PrivateKey key = (PrivateKey) keyStore.getKey(KEY_NAME, null);
signature.initSign(key);
FingerprintManager.CryptoObject cryptoObject =
        new FingerprintManager.CryptoObject(signature);

CancellationSignal cancellationSignal = new CancellationSignal();
FingerprintManager fingerprintManager =
        context.getSystemService(FingerprintManager.class);
fingerprintManager.authenticate(cryptoObject, cancellationSignal, 0, this, null);


4. Finishing Up: Sending the data to your backend and verifying



After successful authentication, send the signed piece of data (in this sample, the contents of a purchase transaction) to the backend, like so:



Signature signature = cryptoObject.getSignature();
// Include a client nonce in the transaction so that the nonce is also signed
// by the private key; the backend can then reject any transaction that reuses
// a nonce, preventing replay attacks.
Transaction transaction = new Transaction("user", 1, new SecureRandom().nextLong());
try {
    signature.update(transaction.toByteArray());
    byte[] sigBytes = signature.sign();
    // Send the transaction and its signature to the dummy backend
    if (mStoreBackend.verify(transaction, sigBytes)) {
        mActivity.onPurchased(sigBytes);
        dismiss();
    } else {
        mActivity.onPurchaseFailed();
        dismiss();
    }
} catch (SignatureException e) {
    throw new RuntimeException(e);
}


Last, verify the signed data in the backend using the public key enrolled in step 2:



@Override
public boolean verify(Transaction transaction, byte[] transactionSignature) {
    try {
        if (mReceivedTransactions.contains(transaction)) {
            // This transaction (including its client nonce) was already received,
            // so reject it to prevent replay attacks.
            return false;
        }
        mReceivedTransactions.add(transaction);
        PublicKey publicKey = mPublicKeys.get(transaction.getUserId());
        Signature verificationFunction = Signature.getInstance("SHA256withECDSA");
        verificationFunction.initVerify(publicKey);
        verificationFunction.update(transaction.toByteArray());
        if (verificationFunction.verify(transactionSignature)) {
            // The transaction is verified with the public key associated with
            // the user; do any post-purchase processing on the server here.
            return true;
        }
    } catch (NoSuchAlgorithmException | InvalidKeyException | SignatureException e) {
        // In a real app, report an error back to the user instead of failing silently.
    }
    return false;
}


At this point, you can assume that the user is correctly authenticated with their fingerprint because, as noted in step 1, user authentication is required before every use of the private key. Let’s do the post-processing in the backend and tell the user that the transaction is successful!



Other updated samples


We also have a couple of Marshmallow-related updates to the Android For Work APIs this month for you to peruse:



  • AppRestrictionEnforcer and AppRestrictionSchema
    These samples were originally released when the App Restriction feature was introduced as part of the Android for Work API in Android 5.0 Lollipop. AppRestrictionEnforcer demonstrates how to set restrictions on other apps as a profile owner. AppRestrictionSchema defines some restrictions that can be controlled by AppRestrictionEnforcer. This update shows how to use two additional restriction types introduced in Android 6.0.


We hope you enjoy the updated samples. If you have any questions regarding the samples, please visit us on our GitHub page and file issues or send us pull requests.

    Friday, October 23, 2015

    Coffee with a Googler: Bullet Time with the Cloud Spin Team

    Posted by Laurence Moroney, Developer Advocate



    As part of Google Cloud Platform’s Next roadshow, the team decided to make a demo that anybody could get involved in, and the concept of Google Cloud Spin was born. The idea, influenced by the “bullet time” scenes from The Matrix, was simple -- build a rig of cameras, have them take a number of pictures, and stitch them together into an animated GIF that looks like this:





    Coffee with a Googler caught up with the team responsible for this demo to talk about how they built it, what technical challenges they faced, and how they used the cloud to manage the many cameras and create an animated GIF like the one above.



    There was so much great information for developers in the exploration of this project that we’ve split our interview into two episodes. The first is with Ray Tsang, Developer Advocate, who tells us about the ins and outs of setting up a number of phones to take pictures to create a spin. From their original prototype (lots of Googlers holding up selfie sticks) to their final version (shooting videos that are synchronized on an audio signal, controlled by events in Firebase), it’s a fascinating conversation. Part Two of our conversation, covering how the right frame was extracted from each video and how those frames were assembled into a cloud spin, is coming soon!



    To learn more about this project, visit the Google Cloud Platform blog.




    Virtual currency: Sources and Sinks

    Posted by Damien Mabin, Developer Advocate



    More and more mobile games base their economic model on free-to-play with virtual currencies, yet there are plenty of pitfalls to be aware of while developing your game. One of these pitfalls is having an unbalanced economy. Sources and Sinks, a handy feature included in the Play Games Services toolset, is specifically designed to help measure the balance of your game’s virtual economy.



    It helps you visualise the state of your current in-game economy in one simple diagram. In diagram 1 (below), with time along the x-axis and the amount of virtual currency along the y-axis, we see two curves:




    • One showing the amount of virtual currency earned by players (orange line)

    • The other one showing the amount spent by players (green line)





    Diagram 1: Poorly Monetizing Economy



    What do the curves in the diagram tell us? In this case, that our game is likely not going to monetize well. Users are spending less currency than they are earning, resulting in a surplus. There is no sense of scarcity for the user, which may indicate that your players do not understand how they can spend currency or that there is value to them in doing so. It would be a good idea in this case to re-evaluate how much content is available to spend virtual currency on and how discoverable this content is to your users. Alternatively, you may want to consider decreasing the amount of earned in-game currency available (inflation can be a bad thing). Ultimately, you want your curves to change as demonstrated in diagram 2 (below).




    Diagram 2: Balancing economy



    That’s a lot better! Now your users are spending more than they earn… Wait! How is that possible? For two reasons. First, players are spending the stock of currency they accumulated before your changes. Second, remember that these diagrams should not track the virtual currency that users purchase through in-app purchases. If you wait a few more days, you should see the two curves converge a bit; the delta between them is the amount of virtual currency users purchase through IAP:




    Diagram 3: Stabilised economy



    With Play Games Services you can get this visualisation with two lines of code! It works on iOS and Android and doesn’t require the user to sign in to Play Games. What you will have in your Android or iOS app is something like this:
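
    As a rough sketch (not taken from this post), the client-side calls might look like the following, assuming you report your currency flows as Play Games Services events; the event IDs and the connected GoogleApiClient are placeholders:

    // Report virtual currency movements as Play Games Services events.
    // COINS_EARNED_EVENT_ID and COINS_SPENT_EVENT_ID are placeholder event IDs
    // created in the Developer Console; mGoogleApiClient is assumed to be an
    // already-connected GoogleApiClient with the Games API enabled.
    Games.Events.increment(mGoogleApiClient, COINS_EARNED_EVENT_ID, coinsEarned);
    Games.Events.increment(mGoogleApiClient, COINS_SPENT_EVENT_ID, coinsSpent);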





    You can find more information about the integration here.



    Once the client integration is done, you can go into your Google Play Developer Console to visualise the curves. Go into the “Game services” section, and click on “Player analytics -> Overview.”


    Thursday, October 22, 2015

    Google Developers teams up with General Assembly to launch Android Development Immersive training course

    Posted by Peter Lubbers, Senior Program Manager, Google Developer Training



    Today at the Big Android BBQ we announced that we have teamed up with General Assembly (GA), a global education institution transforming thinkers into creators, to create a new Android Development Immersive training course. This 12-week, full-time course will be offered beginning in January 2016 at GA’s New York campus, and in February at GA’s San Francisco campus and will roll out to additional campuses over the course of the next year. It is the first in-person training program of its kind that Google Developers has designed and built.





    The Google Developer Relations team teamed up with General Assembly to ensure the Android Development Immersive bootcamp provides developers with access to the best instructors and the latest and greatest hands-on material to create successful app experiences and businesses. To effectively reach over a billion Android users globally, it's important for developers to build high-quality apps that are beautifully designed, performant, and delightful to use.



    “We are constantly looking at the economy and job market for what skills are most in-demand. Demand for developers who can address this market and build new applications is tremendous,” said Jake Schwartz, co-founder and CEO, General Assembly. “Developing this course in partnership with Google Developers allows us to provide students with the most relevant skills, ensuring a reliable pipeline of talented developers ready to meet the urgent demand of companies in the Android ecosystem, a key component of GA's education-to-employment model."



    Registration in the Android Development Immersive includes access to GA’s career preparation services and support, also known as Outcomes, which includes assistance in creating portfolio-ready projects, access to career development workshops, networking events, and coaching and support in the job search process. Through in-person hiring events, mock interviews, and GA’s online job search platform, graduates connect with GA’s hiring partners, a network of close to 2,000 employers globally.



    One of these employers is Vice Media. "I'm really excited to see the candidates coming out of the GA Android course. The fact that they're working with both Google and potential employers to shape the curriculum around real-world problems will make a huge difference. Textbook learning is one thing, but classroom learning with practitioners is a level we have all been waiting for. In fact, Vice Media is going to be hiring an apprentice right out of this course," said Ben Jackson, Director of Mobile Apps for Vice Media.



    Learn more and sign up here.

    Wednesday, October 21, 2015

    Get your bibs ready for Big Android BBQ!



    Posted by Colt McAnlis, Senior Texas-based Developer Advocate



    We’re excited to be involved in the Big Android BBQ (BABBQ) this year because of one thing: passion! Just like BBQ, Android development is much better when passionate people obsess over it. This year’s event is no exception.





    Take +Ian Lake for example. His passion about Android development runs so deep, he was willing to chug a whole bottle of BBQ sauce just so we’d let him represent Android Development Patterns at the conference this year. Or even +Chet Haase, who suffered a humiliating defeat during the Speechless session last year (at the hands of this charming bald guy). He loves BBQ so much that he’s willing to come back and lose again this year, just so he can convince you all that #perfmatters. Let’s not forget +Reto Meier. That mustache was stuck on his face for days. DAYS! All because he loves Android Development so much.



    When you see passion like this, you just have to be part of it. Which is why this year’s BABBQ is jam packed with awesome Google Developers content. We’re going to be talking about performance, new APIs in Marshmallow 6.0, NDK tricks, and Wear optimization. We even have a new set of code labs so that folks can get their hands on new code to use in their apps.



    Finally, we haven’t even mentioned our BABBQ attendees, yet. We’re talking about people who are so passionate about an Android development conference that they are willing to travel to Texas to be a part of it!



    If BBQ isn’t your thing, or you won’t be able to make the event in person, the Android Developers and Google Developers YouTube channels will be there in full force. We’ll be recording the sessions and posting them to Twitter and Google+ throughout the event.



    So, whether you are planning to attend in person or watch online, we want you to remain passionate about your Android development.

    New Courses -- Developing Watch Apps for Android Wear

    Posted by Wayne Piekarski, Developer Advocate



    We recently released a new Udacity course on Ubiquitous Computing with Android Wear, built as a collaboration between Udacity and Google. Android Wear watches allow users to get access to their information quickly, with just a glance, using an always-on display. By taking this course, you will learn everything you need to know to reach your users with notifications, custom watch faces, and even full apps.



    Designed by Developer Advocates from Google, the course is a practical approach to getting started with Android Wear. It takes you through code snippets, and deep dives into sample code, showing you how easy it is to extend your existing Android apps to work on Android Wear. It also covers how to design user interfaces and great experiences for this unique wearable platform, which is important because the interface of the watch needs to be glanceable and unobtrusive for all day use.
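
    As a small taste of the kind of extension the course covers, here is a rough, hypothetical sketch of adding a Wear-specific action to an existing notification; the PendingIntent, strings, resource IDs, and NOTIFICATION_ID are placeholders:

    // Add an extra action that only appears on the watch.
    NotificationCompat.Action mapAction = new NotificationCompat.Action.Builder(
            R.drawable.ic_map, "Show map", mapIntent).build();

    Notification notification = new NotificationCompat.Builder(context)
            .setSmallIcon(R.drawable.ic_event)
            .setContentTitle("Dinner at 8pm")
            .setContentText("Table for two at the usual place")
            .extend(new NotificationCompat.WearableExtender().addAction(mapAction))
            .build();

    NotificationManagerCompat.from(context).notify(NOTIFICATION_ID, notification);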








    This class is part of our larger series on Ubiquitous Computing across Google platforms, such as Android Wear, Android Auto, Android TV, and Google Cast. Designed as short, standalone courses, you can take any course on its own, or take them all! The Android Wear platform is a great opportunity to add functionality that will distinguish your app from others; and this Udacity course will get you up to speed quickly and easily.



    Get started now and try it out at no cost; your users are waiting!



    Tuesday, October 20, 2015

    Android Developer Story: RogerVoice takes advantage of beta testing to launch its app on Android first

    Posted by Lily Sheringham, Google Play team



    RogerVoice is an app which enables people who are hearing impaired to make phone calls through voice recognition and text captions. Founded by Olivier Jeannel, who grew up with more than 80 percent hearing loss, the company successfully raised $35,000 through Kickstarter to get off the ground. Today the team publicly released the app on the Android platform first.



    The team behind RogerVoice talk about how material design and beta testing helped them create an interface which is accessible and intuitive to navigate for users.







    Learn more about how RogerVoice built its app with the help of Google Play features:



    • Material Design: How Material Design helps you create beautiful, engaging apps.

    • Beta testing: Learn more about using beta testing on Google Play for your app.

    • Developer Console: Make the most of the Google Play Developer Console to publish your apps and grow and engage your user base.

    Monday, October 19, 2015

    Introducing the Tech Entrepreneur Nanodegree

    Originally posted on Google Developers Blog


    Posted by Shanea King-Roberson, Program Manager



    As a developer, writing your app is important. But even more important is getting it into the hands of users. Ideally millions of users. To that end, you can now learn what it takes to design, validate, prototype, monetize, and market app ideas from the ground up and grow them into a scalable business with the new Tech Entrepreneur Nanodegree.



    Designed by Google in partnership with Udacity, the Tech Entrepreneur Nanodegree takes 4-7 months to complete. We have teamed up with some of the most successful thought leaders in this space to provide students with a unique and battle-tested perspective. You’ll meet Geoffrey Moore, author of “Crossing the Chasm”; Pete Koomen, co-founder of Optimizely; Aaron Harris and Kevin Hale, Partners at Y-Combinator; Nir Eyal, author of the book “Hooked: How to Build Habit Forming Products” and co-founder of Product Hunt; Steve Chen, co-founder of YouTube; the rapid prototyping company InVision; and many more.



    All of the content that makes up this nanodegree is available online for free at udacity.com/google. In addition, Udacity provides paid services, including access to coaches, guidance on your project, help staying on track, career counseling, and a certificate when you complete the nanodegree.








    The Tech Entrepreneur offering will consist of the following courses:




    • Product Design: Learn Google’s Design Sprint methodology, Ideation & Validation, UI/UX design and gathering the right metrics.

    • Prototyping: Experiment with rapid low- and high-fidelity prototyping on mobile and the web using online tools.

    • Monetization: Learn how to monetize your app and how to set up an effective payment funnel.

    • App Marketing: Understand your market, analyze competition, position your product, prepare for launch, acquire customers and learn growth hacks.

    • How to get your startup started: Find out whether you really need venture capital funding, evaluate build vs. buy, and learn simple ways to monitor and maintain your startup business effectively.




    Pitch your ideas in front of Venture Capitalists



    Upon completion, students will receive a joint certificate from Udacity and Google. The top graduates will also be invited to an exclusive pitch event, where they will have the opportunity to pitch their final product to venture capitalists at Google.

    Friday, October 16, 2015

    Game Performance: Vertex Array Objects

    Posted by Shanee Nishry



    Previously, we showed how you can use vertex layout qualifiers to increase the performance and determinism of your OpenGL application. In this post, we’ll show another useful technique that will help you improve performance and write cleaner code when drawing objects.



    Binding the vertex buffer



    Before drawing onto the screen, you need to bind your vertex data (e.g. positions, normals, UVs) to the corresponding vertex shader attributes. To do that, you need to bind the vertex buffer, enable the generic vertex attribute, and use glVertexAttribPointer to describe the layout of the buffer.



    Therefore, a draw call might look like this:



    const GLuint ATTRIBUTE_LOCATION_POSITIONS   = 0;
    const GLuint ATTRIBUTE_LOCATION_TEXTUREUV = 1;
    const GLuint ATTRIBUTE_LOCATION_NORMALS = 2;

    // Bind shader program, uniforms and textures
    // ...

    // Bind the vertex buffer
    glBindBuffer( GL_ARRAY_BUFFER, vertex_buffer_object );

    // Set the vertex attributes
    glEnableVertexAttribArray( ATTRIBUTE_LOCATION_POSITIONS );
    glVertexAttribPointer( ATTRIBUTE_LOCATION_POSITIONS, 3, GL_FLOAT, GL_FALSE, 32, 0 );

    glEnableVertexAttribArray( ATTRIBUTE_LOCATION_TEXTUREUV );
    glVertexAttribPointer( ATTRIBUTE_LOCATION_TEXTUREUV, 2, GL_FLOAT, GL_FALSE, 32, 12 );

    glEnableVertexAttribArray( ATTRIBUTE_LOCATION_NORMALS );
    glVertexAttribPointer( ATTRIBUTE_LOCATION_NORMALS, 3, GL_FLOAT, GL_FALSE, 32, 20 );

    // Draw elements
    glDrawElements( GL_TRIANGLES, count, GL_UNSIGNED_SHORT, 0 );


    There are several reasons why we might not like this code very much. The first is that we need to cache the layout of the vertex buffer to enable and disable the right attributes before drawing. This means we are either hard-coding or saving some amount of data for a nearly meaningless task.



    The second reason is performance. Having to tell the drivers which attributes to individually activate is suboptimal. It would be best if we could precompile this information and deliver it all at once.



    Lastly, and purely for aesthetics, our draw call is cluttered by long boilerplate code. It would be nice to get rid of it.








    Did you know there is another reason why someone might frown on this code? The code is making use of layout qualifiers, which is great! But, since it’s already using OpenGL ES 3+, it would be even better if the code also used Geometry Instancing. By batching many instances of a mesh into a single draw call, you can really boost performance.
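
    On Android, a minimal Java sketch of that idea using GLES30 might look like this; vao, instanceBufferObject, count, and instanceCount are assumed to exist, and for clarity the per-instance attribute is set up at draw time rather than recorded into the VAO once:

    // Bind the mesh's VAO as usual.
    GLES30.glBindVertexArray(vao);

    // Attribute 3 holds per-instance data (e.g. a vec4 offset); advance it once
    // per instance instead of once per vertex.
    GLES30.glBindBuffer(GLES30.GL_ARRAY_BUFFER, instanceBufferObject);
    GLES30.glEnableVertexAttribArray(3);
    GLES30.glVertexAttribPointer(3, 4, GLES30.GL_FLOAT, false, 16, 0);
    GLES30.glVertexAttribDivisor(3, 1);

    // Draw many instances of the mesh with a single call.
    GLES30.glDrawElementsInstanced(
            GLES30.GL_TRIANGLES, count, GLES30.GL_UNSIGNED_SHORT, 0, instanceCount);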




    So how can we improve on the above code?



    Vertex Array Objects (VAOs)



    If you are using OpenGL ES 3 or higher, you should use Vertex Array Objects (or "VAOs") to store your vertex attribute state.



    Using a VAO allows the drivers to compile the vertex description format for repeated use. In addition, this frees you from having to cache the vertex format needed for glVertexAttribPointer, and it also results in less per-draw boilerplate code.



    Creating Vertex Array Objects



    The first thing you need to do is create your VAO. This is created once per mesh, alongside the vertex buffer object and is done like this:



    const GLuint ATTRIBUTE_LOCATION_POSITIONS   = 0;
    const GLuint ATTRIBUTE_LOCATION_TEXTUREUV = 1;
    const GLuint ATTRIBUTE_LOCATION_NORMALS = 2;

    // Bind the vertex buffer object
    glBindBuffer( GL_ARRAY_BUFFER, vertex_buffer_object );

    // Create a VAO
    GLuint vao;
    glGenVertexArrays( 1, &vao );
    glBindVertexArray( vao );

    // Set the vertex attributes as usual
    glEnableVertexAttribArray( ATTRIBUTE_LOCATION_POSITIONS );
    glVertexAttribPointer( ATTRIBUTE_LOCATION_POSITIONS, 3, GL_FLOAT, GL_FALSE, 32, 0 );

    glEnableVertexAttribArray( ATTRIBUTE_LOCATION_TEXTUREUV );
    glVertexAttribPointer( ATTRIBUTE_LOCATION_TEXTUREUV, 2, GL_FLOAT, GL_FALSE, 32, 12 );

    glEnableVertexAttribArray( ATTRIBUTE_LOCATION_NORMALS );
    glVertexAttribPointer( ATTRIBUTE_LOCATION_NORMALS, 3, GL_FLOAT, GL_FALSE, 32, 20 );

    // Unbind the VAO to avoid accidentally overwriting the state
    // Skip this if you are confident your code will not do so
    glBindVertexArray( 0 );


    You have probably noticed that this is very similar to our previous code section except that we now have the addition of:



    // Create a vertex array object
    GLuint vao;
    glGenVertexArrays( 1, &vao );
    glBindVertexArray( vao );


    These lines create and bind the VAO. All glEnableVertexAttribArray and glVertexAttribPointer calls after that are recorded in the currently bound VAO, and that greatly simplifies our per-draw procedure as all you need to do is use the newly created VAO.



    Using the Vertex Array Object



    The next time you want to draw using this mesh all you need to do is bind the VAO using glBindVertexArray.



    // Bind shader program, uniforms and textures
    // ...

    // Bind Vertex Array Object
    glBindVertexArray( vao );

    // Draw elements
    glDrawElements( GL_TRIANGLES, count, GL_UNSIGNED_SHORT, 0 );


    You no longer need to go through all the vertex attributes. This makes your code cleaner, makes per-frame calls shorter and more efficient, and allows the drivers to optimize the binding stage to increase performance.








    Did you notice we are no longer calling glBindBuffer? This is because the VAO records which buffer is bound when glVertexAttribPointer is called, even though the VAO does not record glBindBuffer calls themselves.



    Want to learn more how to improve your game performance? Check out our Game Performance article series. If you are building on Android you might also be interested in the Android Performance Patterns.



    Thursday, October 15, 2015

    Android Support Library 23.1

    Posted by Ian Lake, Developer Advocate



    The Android Support Library is a collection of libraries available on a wide array of API levels that help you focus on the unique parts of your app, providing pre-built components, new functionality, and compatibility shims.



    With the latest release of the Android Support Library (23.1), you will see improvements across the Support V4, Media Router, RecyclerView, AppCompat, Design, Percent, Custom Tabs, Leanback, and Palette libraries. Let’s take a closer look.



    Support V4



    The Support V4 library focuses on making supporting a wide variety of API levels straightforward with compatibility shims and backporting specific functionality.



    NestedScrollView is a ScrollView that supports nested scrolling back to API 4. You’ll now be able to set an OnScrollChangeListener to receive callbacks when the scroll X or Y positions change.
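
    For example, a quick sketch of that callback (myNestedScrollView is assumed to be a NestedScrollView from your layout):

    // Listen for scroll position changes on a NestedScrollView.
    myNestedScrollView.setOnScrollChangeListener(
            new NestedScrollView.OnScrollChangeListener() {
        @Override
        public void onScrollChange(NestedScrollView v, int scrollX, int scrollY,
                int oldScrollX, int oldScrollY) {
            // React to the new scroll position, e.g. fade a toolbar shadow.
            Log.d("Scroll", "scrollY=" + scrollY);
        }
    });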



    There are a lot of pieces that make up a fully functioning media playback app, with much of it centered around MediaSessionCompat. A media button receiver, a key part of handling playback controls from hardware or Bluetooth controls, is now formalized in the new MediaButtonReceiver class. This class makes it possible to forward received playback controls to a Service which is managing your MediaSessionCompat, reusing the Callback methods already required for API 21+ and centralizing support for all API levels and all media control events in one place. A simplified constructor for MediaSessionCompat is also available, automatically finding a media button receiver in your manifest for use with MediaSessionCompat.
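
    As a rough sketch of the simplified setup (the callback bodies are placeholders for your own playback logic):

    // Create a media session with the simplified constructor; the tag is only
    // used for debugging.
    MediaSessionCompat mediaSession = new MediaSessionCompat(context, "MyMediaService");
    mediaSession.setFlags(MediaSessionCompat.FLAG_HANDLES_MEDIA_BUTTONS
            | MediaSessionCompat.FLAG_HANDLES_TRANSPORT_CONTROLS);
    mediaSession.setCallback(new MediaSessionCompat.Callback() {
        @Override
        public void onPlay() {
            // Start or resume playback here.
        }

        @Override
        public void onPause() {
            // Pause playback here.
        }
    });
    mediaSession.setActive(true);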



    Media Router



    The Media Router Support Library is the key component for connecting and sending your media playback to remote devices, such as video and audio devices with Google Cast support. It also provides the mechanism, via MediaRouteProvider, to enable any application to create and manage a remote media playback device connection.



    In this release, MediaRouteChooserDialog (the dialog that controls selecting a valid remote device) and MediaRouteControllerDialog (the dialog to control ongoing remote playback) have both received a brand new design and additional functionality as well. You’ll find the chooser dialog sorts devices by frequency of use and includes a device type icon for easy identification of different devices while the controller dialog now shows current playback information (including album art).





    To feel like a natural part of your app, the content color for both dialogs is now based on the colorPrimary of your alert dialog theme:



    <!-- Your app theme set on your Activity -->
    <style name="AppTheme" parent="Theme.AppCompat.Light.DarkActionBar">
        <item name="colorPrimary">@color/primary</item>
        <item name="colorPrimaryDark">@color/primaryDark</item>
        <item name="alertDialogTheme">@style/AppTheme.Dialog</item>
    </style>

    <!-- Theme for the dialog itself -->
    <style name="AppTheme.Dialog" parent="Theme.AppCompat.Light.Dialog.Alert">
        <item name="colorPrimary">@color/primary</item>
        <item name="colorPrimaryDark">@color/primaryDark</item>
    </style>



    RecyclerView



    RecyclerView is an extremely powerful and flexible way to show a list, grid, or any view of a large set of data. One advantage over ListView or GridView is the built in support for animations as items are added, removed, or repositioned.



    This release significantly changes the animation system for the better. By using the new ItemAnimator’s canReuseUpdatedViewHolder() method, you’ll be able to choose to reuse the existing ViewHolder, enabling item content animation support. The new ItemHolderInfo and associated APIs give the ItemAnimator the flexibility to collect any data it wants at the correct point in the layout lifecycle, passing that information into the animate callbacks.
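
    For example, a custom animator that opts in to reusing the same ViewHolder for change animations might look something like this sketch built on DefaultItemAnimator (not code from the release):

    // Reuse the existing ViewHolder on item changes so its content can be
    // animated in place instead of cross-fading two ViewHolders.
    public class ContentChangeItemAnimator extends DefaultItemAnimator {
        @Override
        public boolean canReuseUpdatedViewHolder(RecyclerView.ViewHolder viewHolder) {
            return true;
        }
    }

    // ...
    recyclerView.setItemAnimator(new ContentChangeItemAnimator());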



    Note that this new API is not backward compatible. If you previously implemented an ItemAnimator, you can instead extend SimpleItemAnimator, which provides the old API by wrapping the new API. You’ll also notice that some methods have been entirely removed from ItemAnimator. For example, if you were calling recyclerView.getItemAnimator().setSupportsChangeAnimations(false), this code won’t compile anymore. You can replace it with:



    ItemAnimator animator = recyclerView.getItemAnimator();
    if (animator instanceof SimpleItemAnimator) {
        ((SimpleItemAnimator) animator).setSupportsChangeAnimations(false);
    }


    AppCompat



    One component of the AppCompat Support Library has been providing a consistent set of widgets across all API levels, including the ability to tint those widgets to match your branding and accent colors.



    This release adds tint aware versions of SeekBar (for tinting the thumb) as well as ImageButton and ImageView (providing backgroundTint support) which will automatically be used when you use the platform versions in your layouts. You’ll also find that SwitchCompat has been updated to match the styling found in Android 6.0 Marshmallow.



    Design



    The Design Support Library includes a number of components to help implement the latest in the Google design specifications.



    TextInputLayout expands its existing functionality of floating hint text and error indicators with new support for character counting.



    AppBarLayout supports a number of scroll flags which affect how child views react to scrolling (e.g. scrolling off the screen). New to this release is SCROLL_FLAG_SNAP, ensuring that when scrolling ends, the view is not left partially visible. Instead, it will be scrolled to its nearest edge, making it fully visible or scrolling it completely off the screen. You’ll also find that AppBarLayout now allows users to start scrolling from within the AppBarLayout rather than only from within your scrollable view - this behavior can be controlled by adding a DragCallback.
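
    If you prefer to set the snap behavior in code rather than in XML, a sketch of the programmatic equivalent of app:layout_scrollFlags="scroll|snap" might look like this (toolbar is assumed to be a direct child of your AppBarLayout):

    AppBarLayout.LayoutParams params =
            (AppBarLayout.LayoutParams) toolbar.getLayoutParams();
    params.setScrollFlags(AppBarLayout.LayoutParams.SCROLL_FLAG_SCROLL
            | AppBarLayout.LayoutParams.SCROLL_FLAG_SNAP);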



    NavigationView provides a convenient way to build a navigation drawer, including the ability to create menu items using a menu XML file. We’ve expanded the functionality possible, adding the ability to set custom views for items via app:actionLayout or using MenuItemCompat.setActionView().



    Percent



    The Percent Support Library provides percentage based dimensions and margins and, new to this release, the ability to set a custom aspect ratio via app:aspectRatio. By setting only a single width or height and using aspectRatio, the PercentFrameLayout or PercentRelativeLayout will automatically adjust the other dimension so that the layout uses a set aspect ratio.



    Custom Tabs



    The Custom Tabs Support Library allows your app to utilize the full features of compatible browsers including using pre-existing cookies while still maintaining a fast load time (via prefetching) and a custom look and actions.



    In this release, we’re adding a few additional customizations, such as hiding the URL bar when the page is scrolled down with the new enableUrlBarHiding() method. You’ll also be able to update the action button in an already launched custom tab via your CustomTabsSession with setActionButton() - perfect for providing a visual indication of a state change.
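
    Here is a rough sketch of both calls (session is an existing CustomTabsSession; the activity, Uri, Bitmap, and description are placeholders):

    // Launch a custom tab that hides the URL bar as the user scrolls down.
    CustomTabsIntent customTabsIntent = new CustomTabsIntent.Builder(session)
            .enableUrlBarHiding()
            .build();
    customTabsIntent.launchUrl(activity, Uri.parse("https://example.com"));

    // Later, update the action button in the already launched tab to reflect
    // a state change, e.g. after the user adds the page to favorites.
    session.setActionButton(updatedIcon, "Added to favorites");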



    Navigation events via CustomTabsCallback#onNavigationEvent() have also been expanded to include the new TAB_SHOWN and TAB_HIDDEN events, giving your app more information on how the user interacts with your web content.



    Leanback



    The Leanback library makes it easy to build user interfaces on TV devices. This release adds GuidedStepSupportFragment for a support version of GuidedStepFragment as well as improving animations and transitions and allowing GuidedStepFragment to be placed on top of existing content.



    You’ll also be able to annotate different types of search completions in SearchFragment, and VerticalGridFragment gains staggered slide transition support.



    Palette



    Palette, used to extract colors from images, now supports extracting from a specific region of a Bitmap with the new setRegion() method.
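
    For instance, a small sketch that extracts a palette from only the top quarter of a bitmap (defaultColor is a placeholder fallback):

    Palette.from(bitmap)
            .setRegion(0, 0, bitmap.getWidth(), bitmap.getHeight() / 4)
            .generate(new Palette.PaletteAsyncListener() {
        @Override
        public void onGenerated(Palette palette) {
            // Use a color from just that region, e.g. to tint a toolbar.
            int color = palette.getVibrantColor(defaultColor);
        }
    });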



    SDK available now!



    There’s no better time to get started with the Android Support Library. You can get started developing today by updating the Android Support Repository from the Android SDK Manager.



    For an in depth look at every API change in this release, check out the full API diff.



    To learn more about the Android Support Library and the APIs available to you through it, visit the Support Library section on the Android Developer site.


    Wednesday, October 14, 2015

    Google Play Developer Console introduces Universal App Campaigns and User Acquisition performance reporting

    Posted by Frederic Mayot, Google Play team



    At Google I/O in May, we previewed some new and powerful tools to help you further grow your business and improve decision making based on smarter insights on Google Play. We are happy to announce that, today, these features are live in the Developer Console.



    User Acquisition: AdWords Campaigns



    With just a few simple steps, universal app campaigns let you easily set up ad campaigns from within the Google Play Developer Console and promote your app across Google Play, Google Search, YouTube, and the Google Display Network. You will now be able to more effectively find and grow your install base with the help of Google’s unparalleled reach.




    App install ads generated from one universal app campaign



    Universal app campaigns automatically pull in images, video, and descriptions from your Google Play store listing to generate ad formats that look great wherever they are placed. From there, our systems automatically optimize your campaigns and experiment with different creatives and bids to maximize app install volume as close as possible to your target cost-per-install.




    "With universal app campaigns, we only had to set up one campaign that drove more than 10,000 new installs in one month and install volume is continuing to trend up over time. We're also seeing a 20% lower CPI compared to other channels." – José Maria Pertusa, CMO of Linio




    To get started with your first campaign, select the User Acquisition tab for your app in the Developer Console and choose ‘AdWords Campaigns.’



    User Acquisition: Performance report



    When you’re growing an audience for your app, you’ll want to understand where your most valuable users are coming from. The new performance report on the User Acquisition tab in the Developer Console lets you see how people are finding your Play Store listing, how many install your app, and how many go on to make purchases.







    The performance report also tracks marketing links tagged with UTM tags, so you’ll be able to get more granular detail on how well your promotion is doing. Once you’ve got visitors to your Play Store listing, you’ll want to start thinking of ways to increase the number of visitors turning into installers. The new Store Listing Experiments feature can help you run A/B tests to do just that.



    How to get started in the Developer Console



    To learn how to take advantage of these new features in the Developer Console, watch the DevByte video below in which I explain how to set up your first universal app campaign and how to view the new data offered on the performance tab.







    We hope you’ll use these user acquisition tools to grow a valuable audience for your app or game. We continue to improve our features for developers based on your feedback – like the recent improvements to beta testing and Store Listing Experiments – in order to help you grow your app or game business globally on Google Play.

    Monday, October 12, 2015

    Bringing Google Cardboard and VR to the world

    Originally posted on the Google Developers Blog


    Posted by Brandon Wuest, Software Engineer & Stereoscopic Sightseer



    Google Cardboard is bringing virtual reality worldwide. Starting today, the Google Cardboard app is available in 39 languages and over 100 countries on both Android and iOS devices. Additionally, the Cardboard developer docs are now published in 10 languages to help developers build great VR experiences. With more than 15 million installs of Cardboard apps from Google Play, we're excited to bring VR to even more people around the world.



    More Works with Google Cardboard viewers



    Anyone can make their own Cardboard viewer with the open designs ready for download. If you'd rather not DIY, choose from the growing family of certified viewers, including the Mattel View-Master and Zeiss VR One GX, on sale now.





    Better tools for building


    The Cardboard SDKs for Android and Unity have been updated to address your top two requests: drift correction and Unity performance. This update includes a major overhaul of the sensor fusion algorithms that integrate the signals from the gyroscope and accelerometer. These improvements substantially decrease drift, especially on phones with lower-quality sensors. The Cardboard SDK for Unity now supports a fully Unity-native distortion pass. This improves performance by avoiding all major plugin overhead, and enables Cardboard apps to work with Metal rendering on iOS and multi-threaded rendering on Android. All of this adds up to better VR experiences for your users.



    More places


    Finally, to help bring you to more places, you can now explore Google Street View in Cardboard. So, download the updated Google Street View app for Android or iOS, and grab your Cardboard to immerse yourself in destinations around the world.





    With Cardboard available in more places, we're hoping to bring the world just a little bit closer to everyone. Happy exploring!


    Watch Google Play’s Playtime event: How to find success for your app or game business

    Posted by Lily Sheringham, The Google Play team



    Google Play has kicked off its annual event series Playtime, which is running in 12 countries globally. Playtime offers developers the opportunity to learn tips and best practices about how to grow your app or games business and succeed on Android and Google Play.



    You can now watch the Playtime talks, listed below, on the Android Developers YouTube channel. The playlist opens with a video about inspirational developers who are changing the world around them with their apps.












    Build better apps













    You say you want a mobile revolution (13 minutes)

    There are now more than one billion Android users worldwide—a long way from when we launched the first Android phone back in 2008. Hear the latest about the Android and Google Play momentum.

    Build better (25 minutes)

    Learn top tips that you should consider when developing and distributing a successful app or game and how you can leverage M, the Google Play Developer Console and Android Studio.





    Grow a valuable audience

















    Grow & engage users from the Google Play Developer Console (7 minutes)


    Learn from other developers who have taken advantage of Store Listing Experiments and other tools in the Developer Console to dramatically increase their conversions.

    Maximise installs from every channel (11 minutes)


    Get insights into how app promotion can help you reach your audience at the right time. Learn how you can maximize installs from every channel, collect data and insights, automate management and drive user engagement and lifetime value.

    How to succeed in the kids and family space (22 minutes)


    Learn how to design high-quality apps for families and how to successfully engage your audience. You'll also get practical tips to help you boost your reach, retention and revenue.





    Engage and retain your app’s users














    The rules of games, for apps (24 minutes)


    Learn how games drive monetization and how to turn these insights into best practices for apps.

    Boost engagement with smarter interactions (28 minutes)


    How do you connect with your users? How does your app interact with the world around us? This session highlights the most exciting new developer features of the Android platform to help you improve the way you engage with your app users.





    Grow your game business & engage your players



















    Grow your business with Player Analytics (24 minutes)


    Learn how Player Analytics gives game developers unique insight into the first few moments of gameplay, what happens before critical events like churning or spending, and which players are likely to spend and churn.

    Smarter player engagement (23 minutes)


    Learn how successful game developers make full use of the Google Play platform to engage their players for months, if not years.

    The future of gaming at Google (26 minutes)


    Learn about the current ecosystem and the features across platforms that will help you achieve success on Android. Hear about virtual reality games and products which will inspire the development of games in the future.





    Monetization & international growth

















    Monetization and pricing strategies for different users (17 minutes)


    Get key insights into how having a considered price and revenue optimisation strategy can help you maximize revenue from very different users.

    Go global by being local (13 minutes)


    Hear pro tips and best practices, including first hand experiences from apps and games developers, that will help you grow the reach of your apps and games globally.

    Going global - developers share their tips (22 minutes)


    Gain insight into best practices and learn how to develop a successful global app and games business from Google Play and developer panelists from Citymapper, Jelly Button Games, Musixmatch and Social Point.





    Developers share their tips for success

















    Developer talk #1: Material Design for Forza Football (5 minutes)


    Learn and get inspired from best practices on Material Design presented by Sebastian Fürle - Android Developer, Football Addicts

    Developer talk #2: Directed creativity: From build weeks to billions (11 minutes)


    Learn and get inspired from best practices on directed creativity, from building your app to distribution presented by Tom Grinsted - Group Product Manager: Mobile & Devices, Guardian News and Media

    Developer talk #3: Building apps for fast growth markets (8 minutes)


    Learn about building apps for fast growth and emerging markets from Sergio Cucinella - Software Engineer, Truecaller




    For more videos about Android development and finding success on Google Play, subscribe to the Android Developers channel on YouTube and follow +Android Developers.

    Wednesday, October 7, 2015

    Keep users’ content safe with Google Drive

    Posted by Dan McGrath, Product Manager, Drive SDK & Partnerships



    Chances are, you’re developing an app that creates or manages data. And chances are, your users really care about that content — be it photos and documents, or calorie counts and exercise stats.



    Whatever it is, you probably don’t want it stuck on a single device — especially since people are replacing their phones and tablets every couple of years (every now and then… shtuff happens). With Google Drive, you can help users access their data at any time, from just about anywhere:




    • Drive APIs give developers a free and easy way to save and retrieve user content using Google Drive

    • In Android 6.0 Marshmallow, there’s also a new way to save app data and settings to Drive automatically



    As your app grows in popularity, Google Drive can scale along with it. In fact, WhatsApp now lets users back up their media and conversations to Google Drive, which translates to about one saved item for every person on the planet — every single day.





    Visit our developer site to learn more, and definitely reach out if you want to discuss more in-depth integrations. We’re here to help make your app great, and to keep users’ content safe.