Google I/O 2009 – The GWT Compiler for an Optimized Future


Bruce Johnson:
Well, good morning. What did you think about Wave? [cheers and applause] Thanks very much. Yesterday, about the same time
in this same room, we talked about the features
coming in GWT 2.0. And as I think you can
probably see now, a lot of those features
were developed–yes, thank you. I have to switch the monitors. But a lot of those features
were developed because of the really extreme level of sophistication that teams like the Wave team and some other really large
Google Apps including AdWords– they’re really,
really trying to do cutting-edge types of things,
as you’ve seen. So these technologies that
we were working on in G-W-T, they happen
because there’s a need. And then we figure out
the best solution that we can come up with and then they are able
to take our improvements and continue working, and another barrier
has been removed. So we’re really excited
about Wave, and if you want to see
in more detail how the Wave client team built that really nice
UI with GWT and how they take advantage
of code splitting, which is the topic
of our talk today, go see the talk later on about building a Wave client
with GWT. My name is Bruce Johnson,
again. I work on G-W-T. This is…ahem… mostly about compiler output, mostly focusing on
code splitting, but there are a few other things
that we’ll cover too, today. But first, in the spirit
of what I was just describing, it’s leverage, right? You use
the Java language and Java IDEs to construct your client code. And GWT cross-compiles it
and optimizes things for you, and we add fancy new APIs. You recompile as the new
version of GWT comes out, and, you know, good things
happen to your code, and often you have to do
very little work. That is what I would call
leverage, and that’s the basic
promise of GWT. So would you take this deal? Tweak your app for one hour… and you get in return
50% script size reduction. Well, that’s exactly what
you get with code splitting. We did this with the showcase. The showcase sample
was available as of GWT 1.5, and we said, you know, we should really use code
splitting in showcase. One hour later, showcase
downloads twice as fast. That is high leverage. Similarly, with Wave, actually,
Wave on the iPhone, those guys came from Sydney
to visit us in Atlanta, and they were talking about wanting to compile their
client code for the iPhone. And it was, at that time,
a very large amount of code, and it was bringing the iPhone
browser to its knees. Well, fortunately, Lex here had just put the wraps on the first version
of code splitting, and he spent a day or two
with them. And two days later, their app
was working on the iPhone. And it’s just really phenomenal to see the kind of leverage
that you can get. So…starting to become
redundant now, but just to actually show you
quantitatively, one hour prior,
showcase was 100K. An hour later,
it was 50K to download. Another really interesting
aspect of code splitting, which you’ve probably seen by now since we’ve talked about it in several
talks and in the keynote, is that it doesn’t
download code that doesn’t run, because it actually
is based on the control flow of your code. So you don’t spend time downloading bytes
over the network that don’t actually
end up running, which is really important
from a usability and latency standpoint. When you do download, you know,
additional code, even if users ultimately use every bit of functionality
in your entire application, you can amortize the cost
of downloading that code as they use the app. In Kelly Norton’s talk about Measuring in Milliseconds
yesterday– which if you didn’t catch
yesterday, you should check out
on YouTube later– he talks about certain
threshold numbers for turnaround time, and 100 milliseconds
is a key number. So if you do five things
that take 20 milliseconds each, the user’s not even aware
that anything happens. But if you do one thing
that takes 100 milliseconds, the user is aware, and they start to have
worse perceptions about the quality
of what you’re doing. So if you can spread the cost
of downloading code around, then it’s basically nothing
but a good user experience the whole time. Another subtle
but important point is don’t think that when
you download a code fragment it means you’re hitting
the network. If you set up your HTTP response
caching headers correctly, you can feel confident that you can cache those code
fragments permanently. You can say this thing can be
cached until the sun explodes because each fragment has a guaranteed
unique identifier, right? So no two fragments
would ever be confused with each other. Even when you redeploy
the application, if there’s anything
that has changed, it will definitely download
the right new fragments. Okay, so… again, if you ever hear us
talk about GWT, we always talk about
optimization because, really, GWT is about making the end user experience
the best that it can be. It’s…
it is our mission. If some other technology
came out that produced
a better user experience, we would either try
to do better than that or we would become fans
of that technology. It’s not that we want you
to use GWT. We want you to build
excellent web applications for end users. So should you really care
about optimization? This is the last I’ll say it. Yes. Again, as Kelly explained
really well in Measuring in Milliseconds
yesterday, if you don’t think about
optimization pretty much
from the beginning, you’re guaranteed
to end up with patterns that are very,
very hard to optimize later. And later,
if you think about it, is the time when
you’re about to ship, which is exactly the time you don’t wanna make
architectural types of changes. So the best thing to do is the first few days,
just get something going. And then the second week,
you’re working on your app. Start thinking about
how to measure things and make sure that
the resulting application is as fast and as small
as it can be. So the rest of the talk is about tools to help you
analyze that in great detail. There’s a couple things
we should have said last year that aren’t strictly related
to code splitting but that do help you analyze
what the compiler’s doing. So the first thing is RPC. It’s a very convenient way to send Java objects back
and forth across the wire. It makes it easy to share
source code between your server side code
and your client side code. Lots of benefits. But it also generates
client side proxy code to serialize and deserialize
these various objects. And if you’re not careful, you know, 20% or 30%
of your total script size can be this generated RPC code. The RPC code generator
as part of GWT actually produces files
to help you understand what is going into the RPC
serializer code, and so it allows you to…
make better decisions about exactly how to define
your RPC interfaces, where to use broader
polymorphism or more specific types
in your RPC interfaces and use the new feature
in GWT 2.0, which is RPC blacklisting. So there’s two files. There’s a .gwt.rpc file that you’re supposed to deploy
onto your server which acts as sort of
a safeguard so that
an incoming RPC payload is only allowed to ask
to instantiate types that this whitelist file,
the gwt.rpc file, says are okay to instantiate
on your server. So it’s somewhat
of a security measure. But it also helps you know exactly which classes
the RPC system is considering being available
in the serialization process. And then
the module.rpc.log file explains,
in excruciating detail, the logic that the code–
the RPC generator used to decide whether something
should be part of the serialization
proxy logic or not. So I’ll just show you an example
of these files from DynaTable. We have… there’s the first one. So these are meant–
these in particular are meant for use
by software on the server, but you can look at them
as well. So you can just basically
read the first column and see, okay, so there’s
a person class. That’s serializable. There’s a professor class.
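As an aside, here is a hypothetical sketch of the kind of DTO hierarchy and service interface that produces whitelist entries like these. The real DynaTable sample differs in its details, and IsSerializable is stubbed below so the sketch stands alone; in real GWT code it comes from com.google.gwt.user.client.rpc.

```java
// Stub of GWT's IsSerializable marker interface so the sketch compiles on its own.
interface IsSerializable {}

// DTO classes of the sort listed in the .gwt.rpc whitelist file.
class Person implements IsSerializable {
    String name;
}

class Student extends Person {
    String schedule;
}

class Professor extends Person {
    String department;
}

// Declaring the return type as the broad supertype Person forces the
// RPC generator to emit client-side deserializers for every concrete
// subtype the server might send, so Student and Professor both end up
// in the whitelist.
interface SchoolCalendarService {
    Person[] getPeople(int startIndex, int maxCount);
}
```

The broader the declared types in the service interface, the more serializer code gets generated, which is exactly the cost the .gwt.rpc file helps you spot.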
There’s a schedule class. There’s a time slot class. And these all correlate to
the DTO classes that I’m using within the DynaTable sample
that’s part of G-W-T. I can see that there’s several
different exception types that get pulled in here,
and so on. So in this case,
those all make sense. But it’s also possible that you’ll look and find…
classes in there that you’re absolutely sure
don’t make sense. Sometimes if you use
very broad polymorphism, if you were to have
an RPC method that returned Serializable,
for example, the code generator
is basically forced to generate code
onto the client that can de-serialize
anything the server might send that is serializable. And so that’s
an explosion of code. And so you would look
at the .gwt.rpc file to get a sense of what is the RPC system really allowing me
to serialize? And then you understand the cost
for it a little bit more. Then…if you find something that just doesn’t look
like it ought to be there, you can look at
the .rpc.log file. And you want to look at
this one second, because there’s
a bunch of data here. But it goes through,
and for any particular class… it will tell you
whether it’s serializable– Oops. Let me try that again. Whether it’s serializable and how it decided
that it was serializable. So in this case, student is reachable because
it’s a subtype of person, and I had an RPC method that returned a person
polymorphically. And so the code generator
looks at person and says, well, the server can send
either a student or a professor as a person, and so I have to generate
serialization code for both. Anyway, this is
one of the ways that you can sort of start
to learn to read the tea leaves and optimize the size
of your client code. Another thing, when you’re
really trying to eke out every bit of size and speed
performance that you can get is you can actually watch
the compiler optimize a method. So this is pretty interesting. So I just created
a simple ShapeExample… and this is what
it looks like. In onModuleLoad
we have this local variable s
of type Shape. And of course,
there’s the Shape class. There’s this method
on the ShapeExample class that returns the shape,
which is a reference to a field which is defined to be a shape but is actually instantiated
as a SmallSquare. And a SmallSquare
derives from Square, which defines area to be the product of
the lengths of the sides, and then the SmallSquare class
returns the lengths of the sides which good programming practice has us making a constant
of two, right? So if you run the GWT compiler and you provide
the -Dgwt.jjs.traceMethods= flag and then the name of the method
that you want to watch, you can see what it does. In this case,
the onModuleLoad method, the final compiler logic just
before it turns into JavaScript looks like this. And so…
the purpose of the flag is so that you can actually
understand why it’s doing that. Even more importantly, if you look at some code and you
think it should be optimized, it will help you see
what’s happening step-by-step so that if, in fact,
it isn’t getting optimized, you can figure out where in
the chain that it’s going wrong. So let me show you
what that looks like. Let’s see… Oops. Just gonna make this
a little bit bigger here. How’s that? Okay. So what I’ve done
in each pass here… you’ve got each of these
sections marked here. Like the things
I’m highlighting are examples
of different passes that the compiler’s
optimizer performs. And we’re gonna be able
to trace what the Java code looks like
at each step. So here’s how it starts out. And what I’ve done to help you
follow the changes– something that’s going to
be changed is underlined. And then when that thing changes
in the next step, it’s blue. Okay?
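To make the pass-by-pass walkthrough easier to follow, here is a hypothetical reconstruction of the ShapeExample just described, recast as plain Java with a main method standing in for onModuleLoad so it runs outside GWT. The exact member names and types are assumptions:

```java
// Hypothetical reconstruction of the ShapeExample from the talk.
abstract class Shape {
    public abstract int getArea();
}

class Square extends Shape {
    // Area is the product of the side lengths.
    @Override
    public int getArea() {
        return getSideLength() * getSideLength();
    }

    public int getSideLength() {
        return 1;
    }
}

class SmallSquare extends Square {
    // Good programming practice has us making a constant of two.
    private static final int SIDE_LENGTH = 2;

    @Override
    public int getSideLength() {
        return SIDE_LENGTH;
    }
}

public class ShapeExample {
    // Field declared as Shape, but actually instantiated as a SmallSquare.
    private final Shape shape = new SmallSquare();

    public Shape getShape() {
        return shape;
    }

    public static void main(String[] args) {
        // In GWT this body would live in onModuleLoad.
        Shape s = new ShapeExample().getShape();
        System.out.println("Area is " + s.getArea()); // prints "Area is 4"
    }
}
```

The passes described below boil this whole program down to roughly the single string literal "Area is 4".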
So underline becomes blue. So the first thing
that happens is we realize
that this variable s is actually never reassigned,
so we can call it final. Okay, that’s kind of useful. Maybe that can serve us
down the road. And the next thing
that happens is… the compiler realizes
that this actually… it does this trick
to rewrite call sites so that instead of it being
a call to this.getShape(), it can be a call to
a static method getShape and pass in this as a parameter. There’s a really tricky
reason for this. In JavaScript, the keyword
“this” is four bytes. T-h-i-s.
That’s way too costly. If you can turn that
into a parameter and you can then obfuscate it
to the letter a… it’s the same object,
but you save 75% of those bytes. And that’s only
scratching the surface, but I won’t bore you
with the details of those kinds of tricks. Okay. The next step… we had decided
that the variable s was final, and so that means
it’s not gonna be reassigned. That means we can actually
tighten the type because we know for a fact that the getShape method,
as it was defined, returns a Square,
and then we can actually see that it ultimately returns
a SmallSquare. So we tighten the type
all the way down to the exact type that
it’s going to be assigned to. The next thing
the compiler looks at is this getArea call. It does the same thing
with getArea, and you can see here
that it makes it a static call. And the next thing it does– both of these lines
are underlined– I have no idea if it’s easy to follow what
I’m doing here or not, but… the next thing it does, is these two underlined lines
get replaced so the getShape call
gets inlined. Because if you remember,
getShape was a simple method that simply returned
the shape field, and so why not just make that
a direct field reference and get rid of that method
all together? So we’ve just made it faster because we don’t have
a method call, and we’ve just made
the code smaller because we don’t have to define
that extra function. And every function you define
costs bytes in the download. So we’ve inlined
the field reference, an then we’ve actually
inlined the definition of the getArea function. But if you’ll remember
when we started out, getArea was polymorphic. But because we were able
to de-virtualize, remove the polymorphism, we can actually see
that getArea call is the getArea call
on SmallSquare, which it inherits from Square: the side length
multiplied by itself. And then recursively,
we make the same change, and we recognize that, okay,
getSideLength actually is on the
SmallSquare class definitely, and it’s not overridden. And it definitely returns
the field… which happens to be 2, and we can propagate
the constant 2 back into the method
and then inline the method. So then it turns into this. And then 2×2–we can do that
math in the compiler. Why not do that at compile time? Why make the JavaScript
interpreter waste time doing that?
So we go ahead and do that. And then it becomes something that we can actually
concatenate the string, ’cause there’s
a well-defined way to transform a number
into a string. Then we’ve got string literal
plus string literal. Why not concatenate that
at compile time as well? And then the last thing is we don’t need this variable
Shape anymore because we’ve inlined
everything it did all the way down
into the call site. And so why not just get rid
of that variable altogether? And so you’re left with
optimized code. And it really is convenient
for the compiler to do that sort of work
on your behalf so that, as Lars said today
in the keynote, you can think about
your application and be more ambitious
and try to build more, you know, more functionality
that’s useful for the end users instead of having
to fiddle with the specifics
of hand optimizing little bits of JavaScript,
and so on. All right…that’s the stuff we really should have
told you about last year. The stuff we’re gonna tell you
more about this year is code splitting. Okay, so why is
code splitting useful? It’s kind of obvious,
but let’s get really specific. Traditionally, you download
the HTML host page, then you download
a large chunk of JavaScript, and then the code
starts to run. So this yellow area here,
the JS, we really would rather the code start running
before all that’s loaded. But you can’t generally do that unless there’s a sound way
to split the code apart. So what happens
with code splitting is you can break
that JavaScript into chunks. And so the user experience
goes…load the HTML host page, load the startup fragment, which is the left-most
JavaScript that you see here. Then the code
can start running. And because the code splitting
API is designed to be an asynchronous fetch of code, the user can continue using
the application even while the lazily loaded
fragment of code is still being fetched
in the background, because the network, you know,
might take a long time, right? So the application
remains usable even while more code
is loading. I guess that’s kind of obvious,
but the net effect is that you see the sum
of all the green is the amount of time that the user’s able
to use the application. And there’s a significantly
longer amount of time that the application
is active, and it stays active
the whole time. So…ultimately, when you’re
designing big applications, you should try to get used
to the idea of the app never being,
you know, fully loaded. It means that the more
you can think about various pieces of functionality
as being able to be broken off and pulled in on demand, generally,
the better off you’ll be. It’s not always strictly true,
obviously. I mean, some cases, you
definitely want functionality to be immediately available
from the very beginning. And that’s why this is developer
guided code splitting. You can decide where you’re
willing to tolerate, you know, a delay
under some circumstances, or where you’re not willing
to tolerate it. And you indicate that
by having a split point or not. But you’ve probably already
seen this API a lot now. GWT.runAsync,
it has a Callback. onSuccess happens
when the code loads, and then the failure path if the code cannot be loaded
for some reason. So now I’d like to introduce
Lex Spoon. Lex is the guy who actually
invented code splitting and made it all work, and he’s gonna take you
through an example of how to benefit from it and how to understand
exactly what’s happening if your code isn’t splitting
the way that you want to. Take it away, Lex.
Spoon: Thanks, Bruce. So… [applause] Let’s maybe start
on the optimistic side and imagine that the code
splitting goes the way you want. So as a running example,
you might have an application that can both read and write
something similar to an email. It might be an advanced email that Google’s about to release
later this year. So, as you can imagine,
the code to compose an email is actually pretty hefty, especially if you’ve put
a lot of effort into really making each keystroke
do exactly what you want. So that’s kind of unfortunate
if you don’t use code splitting because if you think about
the way your app is structured, all the code in yellow
in this diagram is code used
for composing email, and yet,
if you don’t split any code, all that has to download
before the inbox shows up on your user screen. So this is a good scenario
for code splitting because there’s a lot of code
that’s not needed initially that you’d like to let them
start reading email while you download the rest
of the code and background. So to see how that looks, in the initial version
of your app, you might have
a button somewhere… it has a callback
that calls this little method onComposeEmailButtonClicked. And in the initial version, it simply opens
the ComposeView. Well, you’d like
to split this out, so instead, what you can do is wrap that exact same code
inside a call to runAsync. So this is just like
setting up an event handler in a GUI framework, except the event in this case
is that more code has arrived. Now, it is also possible
that the code never arrives, so one downside
of code splitting is you have to think about
that extra possibility. And we force you to also write an onFailure branch
of your Callback. And that’s really gonna be
app-specific handler. So that’s pretty simple code,
we think. And as a result, you end up
with something like this. In the initial download,
you download the green parts and you download
the yellow parts later. You also initially download
this extra failure code which we all hate
to think about. But it is an extra
code path that you get. But usually that’s pretty small, and so in some, the initial
download of your app’s gotten a lot smaller. Something we like to brag about
about the system is that it’s not just your code
that gets split up either. The two different parts
of your reader… the email reading code might use one part
of the standard library. The email composing code
might use a different part. And we’ll actually
split that up for you too. So as you can see, 2 and 4 are actually
from the same library, but they got split in half. So that’s when things
are going good. If you’ll forgive me, I’ll dwell
for the rest of the talk on things
that don’t go so well and what you can do about it. So maybe you’re the guy who
did the initial code splitting, and you’re thinking, “Code splitting, code splitting,
code splitting.” And you got it split out
and you bragged about it at the weekly group meeting. And you sent some nice graphs. Next week, another teammate
of yours is thinking, “Keystroke handling, keystroke
handling, keystroke handling.” And they come into the meeting
and they show you some great keystroke
handling code they’ve written
down on the bottom right. You’ll notice this looks
suspiciously like the original code I showed you
for mouseClick handling. And the result
of writing this code is that your app’s
gonna look like this again. This is a little difficult
to deal with, I must admit, because the two different
developers, they’re just not thinking
about the same thing. And so they’ve both
accomplished their task, but unfortunately, the second guy has undone
your good work. So first of all, I’ll show you what to do
in this situation, ’cause no matter
how well you plan, things are gonna get messed up. And then I’ll try to show you some engineering practices
you might try to prevent this from happening
in the first place or at least reduce the odds. So first of all, imagine
you’ve got some build metrics, and you notice that, gee, my initial download
just went back up to almost
the whole program again. GWT 2.0 will include a tool
called the Story of Your Compile or, as we like to pronounce
our acronyms, it’s SOYC. I think it sounds like some kind of vitamin
breakfast shake. And it tells you… a few really useful things
about your program, including how to debug
this particular problem. So it tells you–
first of all, it tells you which parts of your code
download initially versus later. It tells you how big
each one of those is. So this is pretty useful
outside of code splitting because you want to– when you’re measuring your
latencies in milliseconds, every kilobyte counts, and you’d like to know
where to focus your efforts on shrinking
your code down. And importantly,
for code splitting, if something’s
in the initial download, it’ll tell you why. It’ll tell you what
the compiler was thinking when it… when it screwed up
your code splitting and put it in
the initial download. So let’s take a quick tour. If you had a simple
sample app that was… reading and composing email, if you open up
a Story of Your Compile, you’ll see something
that looks like this. So the front page of it
tells you the overall size in bytes of the code that’s gonna
download to users, of the different downloadable
chunks of code you’ve got. So the top line is the entire
amount of code you’ve got. That’s if they hit every single runAsync
in the program and they end up
downloading everything. This case, it’s 30 kilobytes.
It’s pretty small. The next line shows you
what they download to start with and in this case…
it’s almost the whole thing. So the bar’s pretty far over. This is
an unfortunate situation. The last line
just further rubs in the fact that this is
what downloads later after they reach
that split point, and you’ll notice it’s such
a small amount of code that the graphics didn’t even
render quite right for the bar. So… all right, so this
is what you’ll discover, and you’ll say, okay, what’s going on
in the initial download that looks wrong? This is a view we expect
you’ll be staring at a lot as you try to optimize
your program in the future. This shows you the size
of different parts of your code broken down
in four different ways. Today, I’d like to just focus
on the top left breakdown, which is a breakdown
by Java packages. So if you go down
that top list– it’s probably hard to read
on the screen– but the top one is java.util. So this is a pretty small app. It turns out
that the collection overhead or the implementation
of collection classes is actually pretty heavyweight
for this app. Then you’ll see the third entry
is java.lang, so it’s more runtime support. You’ll see a lot of
com/google.gwt user stuff. That’s the widget
implementation. All of this is probably
not what went wrong, assuming you kept
a fixed version of GWT for all of these samples. It must be the fact that something went wrong
in this package, which is–let’s imagine
the implementation of your… of your application. Well, if you select that item, it gives you
a breakdown of each on a class-by-class basis. So we’re drilling down into where we suspect
a problem might lie. And actually now,
it’s pretty obvious. If you’re someone
who knows your own app, you’ll see that this top bar
is humongous. It’s the EmailCompositionView. And especially
if you go talk to the person, the code splitter, code
splitter, code splitter person, they’re gonna say
that’s crazy. We don’t use the CompositionView
initially. And you say, well, yes, we do. If you click on this guy, it will go to each method
in that class, and it’ll tell you why
the compiler thinks you need it. So if I can draw your
attention to the third entry, the show method…you’ll see
it’s called directly by onComposeKeystrokeClick. That’s the method that this fellow
just implemented last week. You finally figured it out
on Tuesday or Wednesday. And you can go chase
that guy down and say, what you should
have done is you should have
coded it like this. So you show him
your ComposeEmail method, and then they do
the same thing in their
ComposeKeystroke method. I call this the Whack-a-Mole
solution to code splitting. You get it the way
you want it to work. You find out
there’s something wrong. And then you just go
look at your dependencies. You pick some method
in the chain. In this case, we picked
onComposeKeystrokeClick. And you put a cut point
right there. It won’t always work
the first time. Sometimes it’ll still be
in the initial download. You’ll go to here, and you’ll get a different
set of dependencies. So this is a very reactive
kind of way to improve your code splitting. It will eventually work. You will eventually cut out
enough little pieces that you get the part off
that you wanted. You’ll eventually
get back to here again. However, the Whack-a-Mole style
of problem solving leaves a little bit to be desired from sound software
engineering practices. You just don’t wanna be
a manager who’s watching your product
regularly degrade and then having your developers
go scramble to figure out, okay,
what went wrong, and then fix it back again. So… we’d like to come up with
some kind of systematic solution to this problem, and I want to tell you about one
we’ve been thinking about. This is a fairly new… programming permitted
we think, so maybe there’ll be other ones
coming down the road. But let me show you one that we call the async package
coding pattern. This is actually in use by
multiple GWT clients already, and they seem pretty happy
with it, so… The goal of this pattern is that you don’t want
everyone on the application– you don’t want every developer
to have to understand what every library did
to get its code split out. What you’d like to do is you’d like to at least
narrow the focus down to the developer
of a library. So in this case, we want to make the email
composition guy think about how to get
the code split out. And we want everybody else to just get the splitting
automatically. We don’t want them to have
this tempting method called show that they’re not
supposed to call. Except very carefully. So to accomplish this,
what you can do is… remove all direct access
to the classes in that package. So just don’t have a public
static method called show. Instead, provide them
one gateway class called EmailComposition, and make sure everybody has
to go through that gateway class to do anything
with your package. You can use Java protections
to accomplish this. Make sure that the only way
to create that gateway class is inside a runAsync somewhere. If you can write your app
using this pattern, you’re in pretty good shape, because you can only get at
the functionality through the gateway, and the gateway’s only
constructed inside a runAsync, so therefore, there’s no way
to access the code except after a runAsync
has been called. So that probably sounded
a little abstract. I’d like to walk through the way this code looks
real quick. So…make a gateway class usually named the same thing
as your actual module. And make a constructor private so that nobody but itself
can create it. Take all the static
methods you have, and make them instance methods. This way, if anybody
tries to call it, they’re gonna get a complaint
from the Java compiler instead of getting
a complaint from whoever’s watching
the performance of your app who noticed that it just
suddenly degraded. Now, to provide access
to this thing, make a static method, which is conventionally called
createAsync. It takes
a user-supplied callback, and then inside a runAsync, it creates an instance
of the gateway, and it calls the callback
with that instance. The rest of this
is pretty straightforward. If there’s a failure
to download the code, you just buck that
back to the user. And the actual callback
interface, as you can see, looks pretty much the same
as the RunAsyncCallback interface, except that
in the success branch, there’s now an argument. onCreated takes an argument which is an instance
of the gateway. On the user side,
it also looks a lot like just using a regular old
runAsync, so… this coding pattern,
we don’t think, is gonna be very difficult
for users to have to deal with compared to what
they’d write anyway. The only difference is that in their onCreated
success branch they get an instance
of the gateway, which is called view
in this case. And then once they’ve got
that instance, they can do stuff with it
like… they can call “show”
and pop up a composition window. So that’s a pretty fun pattern. That’s the basics of it. I’d like to mention that once you’re writing
your own gateway class, you can… you’re in a position
to improve on it. For example, the way I showed it
to you the first time, you create a new instance of it
every time in every callback. You may as well cache
that instance as a private static field
of the gateway so that if they call it again,
they’ll get the same one. That way, you can have
the equivalent of variables that are
scoped to the module. Another fun one is that… the caller doesn’t have to always go through
the createAsync. They only need to do it once
and get an actual instance. So what you can do
is you can arrange your code such that it takes instances
of that class– key parts of your code require
an instance of that class as a prerequisite to even constructing
that part of the code. This is a dependency
injection style where you’re injecting
dependencies via constructor parameters. So just as an example, if you’re writing
a SpellCheckingPlugin to your CompositionView,
you might– suppose it takes
an instance of the gateway as a constructor parameter and then just says
this.view=view. It just hangs onto it. Well, now anywhere
in this plugin that you want, you can make direct calls
onto the view object. So by having the forethought to save aside the instance
of that module, you’ve made the rest of the code
in the plugin pretty convenient. You don’t have to sprinkle
createAsyncs all over your code if you save
the instance of the view. It still turns out that dependency injection
will help you in other ways. If you can stick around
for Ray Ryan’s talk after this, you’ll see how it helps you
with testing. You can mock out the view
for testing pretty easily once you’re using this style. So yeah, and once you’re
writing your own, you can improve it
in various ways. Like one of the common things
you want to do as an app writer
to be user friendly, if you’re about to hit
a network delay, you’d like to tell the user that the app hasn’t
just frozen up. So…and that can happen if you call a runAsync
and the code’s not available. However, if the code
is available, you don’t want to flash
an indication up that immediately flashes away. So you could put the onus
on the gateway class to figure out whether it needs
any more code to download. And you could, for example,
add an extra method to the callback interface
that says, hey, I need to actually
do a download. You might want to pop up
an indicator. And then otherwise
leave it alone. That’s just an example. We don’t know exactly what the optimal gateway class
looks like. But it looks like a promising
kind of coding pattern to think about. So that’s what we wanted
to show you today. We wanted to show you
some tools that we provide so that you can help improve
your own performance. As much as possible,
we try to make GWT just make your app rock. But to the extent it doesn’t,
we provide several tools that’ll help you debug
what’s going on and then incrementally
improve. So we showed you how to
reduce the code size due to RPC transmissions. This is an extraordinarily
common thing to happen with people who use GWT RPC: they just
accidentally put a type… they try to pass
the Exception class across the wire. And GWT dutifully
gives you code to serialize
every kind of exception that you have available
in your class path. If you look at these files, you can figure out
pretty quickly that you didn’t mean to send an IllegalArgumentException
across the wire and improve that. We’re showing you how you can look into what
the optimizer’s doing. A lot of times,
a small tweak to your code will yield just dramatically
better output. You love it when you get output
that looks like window.alert and then a literal,
a string literal. So…it’s worth trying
to tease the optimizer into doing those sort of things, and the only real way
you can do that is to actually look
at what it’s doing. And finally,
we’ve shown you how to use code splitting
and how to… how to debug
what’s going wrong with it if it doesn’t do
the right thing, and we’ve suggested
a coding pattern. So if you all have
any questions, we have plenty of time. You want to come up? [applause] Yeah, you wanna
come up here and… answer questions together?
[chuckles] man:…mike? Spoon: Oh, uh…yeah. They’d like you to go
to the mikes because this is all taped
and, uh… man: Okay. What happens
if you’re… you call the runAsync
from one point in your code– like you load the code and you wanna go get
something else right away while the user’s staring
at your initial page. Spoon: Sure. man: And then the user’s quick,
clicks the button. You call runAsync again. Is it smart enough to know that
the first one’s been called? Or was it gonna go
try to get it two times? Spoon: Oh…the implementation
won’t download it twice. But that is a tricky situation
for your app to deal with. So in fact, one of the things
that people do with these gateway classes– like a common name for them
is an asynchronous provider– is they’ll track the fact… they’ll track
outstanding requests and then be able
to do something smart if you get a request
for the same thing twice. The system doesn’t help you
program around that, but…if you do nothing,
what’ll happen is the code’ll load one time, and then it’ll run
your callback– both instances
of the callback you passed in. But you might want to code
your app carefully to… to deal with that. Johnson: You wouldn’t get
redundant requests for the same fragment, though,
to be clear. man: Okay.
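The gateway pattern from earlier in the talk, together with the cached instance and the outstanding-request tracking just described, can be sketched in plain Java. GWT.runAsync itself needs the toolkit, so a synchronous stand-in plays its part here; CompositionView, createAsync, and onDownloadNeeded are hypothetical names for illustration, not GWT API.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only. In real GWT, the body of createAsync would be
// wrapped in GWT.runAsync(new RunAsyncCallback() { ... }) and the
// "download" below would be the asynchronous fragment fetch.
class CompositionView {
    interface Callback {
        void onDownloadNeeded();            // a real fetch is about to start
        void onCreated(CompositionView v);  // success branch receives the instance
    }

    private static CompositionView instance;  // cached, "module-scoped" singleton
    private static List<Callback> pending;    // non-null while a request is in flight

    public String show() { return "composition window"; }

    public static void createAsync(Callback cb) {
        if (instance != null) { cb.onCreated(instance); return; } // no indicator flash
        if (pending != null) { pending.add(cb); return; }         // request outstanding: just queue
        pending = new ArrayList<>();
        pending.add(cb);
        cb.onDownloadNeeded();              // caller may pop up a loading indicator now
        instance = new CompositionView();   // stands in for the fragment download
        List<Callback> waiting = pending;
        pending = null;
        for (Callback c : waiting) c.onCreated(instance); // run every queued callback once
    }
}
```

With this shape, a second createAsync call while the first is outstanding only queues a callback rather than kicking off another fetch, and once the instance exists the onDownloadNeeded indicator never flashes up at all.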
Johnson: Yeah. man: Hi. Thanks, great talk. So far, you’ve shown how
the, uh, the GWT Toolkit, it optimizes the Java
into really tight JavaScript code. And using the code splitting,
you can, you know, not download more
JavaScript code than you need. But I’ve also heard
numerous mentions that it also packages and processes
the style sheet code, CSS code, and any other includes
that you put in there. So one question I had is what happens if you have,
say, a style sheet that is, uh…you’re using
sort of a cross-project because it’s got
all the different styles on objects you want, but you don’t need it
for the page that’s loading. Does the GWT understand that
and not load all that code? Spoon: Do we have dead–
It’s not very good at it. Does it at least do
dead code removal? Johnson: Yeah,
so if you noticed– Did you see the client bundle
stuff yesterday? man: Yeah.
Johnson: Yeah. So… that pattern of
resource bundling… it’s based on
you call a method in order to get the resource. And so…you know what I mean?
man: Yeah, yeah. Johnson: You call a method, and it returns
a text resource, okay? So one of the benefits
of that approach, aside from just simplicity, is that you can use normal
compiler control flow, dead code elimination-type
analysis to decide… whether a given method
is called at all. And so take code splitting out
of the equation for a second. If you don’t call
a particular method that would return
text resource, that text resource need not
be bundled in, first of all. So code splitting–
I mean, sorry. But resource bundling
in general can work well
with dead code elimination. That’s code splitting aside. When you bring code splitting
into it, it gets really exciting
because, as Lex pointed out
in one of the slides, code splitting can work
even at the level of individual methods
on the same class. So if you’ve got one
uber-client bundle that’s got a whole bunch
of different methods, it is possible, in theory– I’m not sure if it does this
in practice– Spoon: I’m not sure
how the bundling works. Johnson: Yeah.
Spoon: Strings is great. Johnson: Yeah, to be able
to split some of those methods
into the start-up fragments, some of those other methods
into– man: And the same with
style sheet information? Johnson: Right.
So it works generally for… But it’s based
on the interaction between the client bundle
code generator and runAsync. It’s possible that in
its current state we haven’t fiddled with exactly
the way it generates code to perfect that yet,
but it is possible. So if the code generator generates a different field
per method that gets called and each of those
are independent, fields can be split
across fragments. man: Interesting. Johnson: And so you can
automatically distribute the resources
across the fragments. man: And will the debug app
that you showed– I can’t remember
the name of it now– for saying where the compiler thought the code needed to be
optimized and whatnot, does it do the same
for other things besides the JavaScript classes? Spoon: Yeah. Is that
sample on here, Bruce? Johnson: I’m sorry,
were you talking about the Story of Your Compile? man: Yeah, Story of Your
Compile, right. Johnson: Yeah, that one
definitely is there. man: SOYC. Spoon: So the code splitter
works really well with strings, but it can only work on
actual Java code. man: Okay.
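Johnson’s earlier point about method-based resource access can be illustrated with a hand-written stand-in for the kind of class a generator might emit. ToolTips and its method names are hypothetical, and plain Java performs no such pruning itself; the sketch only shows the API shape that makes dead-code elimination and per-method splitting possible in GWT.

```java
// Hand-written stand-in for a generated messages/resources class.
// Because each string sits behind its own method, the GWT compiler's
// reachability analysis can prune methods that are never called, and the
// code splitter can place each surviving literal in whichever fragment
// first needs it.
class ToolTips {
    // Called from the initial download in this hypothetical app, so its
    // literal would land in the start-up fragment:
    static String sendButton() { return "Send this message"; }

    // Only called from the CompositionView, so the splitter could move
    // this literal into that later fragment:
    static String spellCheck() { return "Check spelling"; }

    // Never called anywhere: dead-code elimination would drop it entirely,
    // and the string would never be downloaded at all.
    static String legacyHint() { return "Press F1 for help"; }
}
```

The same reasoning is why an uncalled ClientBundle accessor need not pull its resource into the compiled output.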
Spoon: So… it depends on the particular
generator. The translation one
works really well because each message
that you translate ends up as an individual
string literal. So in your initial download
you need initially, and then when you load
the CompositionView, you’ll get all the tool tips
for the CompositionView later. Johnson: Unless the incredibly
polished aesthetics of this fool you,
this is still very early. But what we hope to do
is provide better visibility for each of the code
generators as well. So you’d be able to attribute this amount of code
to the RPC code generator, this amount of code
to the client bundle generator. We’d like to provide drill-downs
for all the different mechanisms and also make that
a generally sensible mechanism so if you write your own
code generators, it can participate
in the same way. But it’s, you know,
we’re sort of still at the beginning of this,
and that’s the vision for it. man: thank you. man: Hi.
Spoon: Hey. man: Uh, could you share
any thoughts about the prospects for precompiled sharable or shared libraries
in GWT? Spoon: Yeah, so… I just wanted to share something
real quick is that if you click on–
it does list– show you
the individual strings that you’re downloading
at different points. But… philosophically, when you’re providing
a web server, your users are all gonna
funnel through a website anyway. And it seems like
you might as well insert a compilation step,
at least at the funnel point. So for most of the time,
you’re really better off if you can figure out a way
to arrange your application so that there’s a place
where all the code gathers and you can run a cross app
compile right there. So that’s by far
where GWT focuses is you can write modules
for programmer consumption, but then when you do
your real compile, you put ’em all together. Now, it is possible
to arrange to have run-time linkups
of various kinds. You have to use– you have to make a bridge
through JavaScript yourself, but it does work. But as an example,
you can compile a Google Gadget for people who’ve looked
on iGoogle or things like that. You can provide a gadget
with a GWT App, and you could even write
a gadget host as a GWT App. And that’s gonna require
that you do a run-time linkup. Johnson: I mean, in terms
of our own roadmap, we really aren’t very bullish about the idea of
precompiled libraries. Because as soon as you do that,
prevent the opportunity to do dead code elimination,
for example, ’cause that’s something
the compiler would do. So in other words, you know, you saw, in fact,
from the Story of Your Compile that just the java.util classes were the single largest part
of this particular application. That’s with dead code
elimination already happening. So if we don’t call
ArrayList.remove, the code for that
doesn’t get pulled in, right? And it’s still
the biggest chunk. Imagine we cross-compiled
everything in java.util, right? That’s a lot of code. And it would be
really unfortunate to download that
to the client when, in fact, you’re only using
a small subset of it. So I’m not saying never, but we think
that this approach, doing lots of optimization
at compile time and using code splitting, is going to be ultimately
more fruitful. man: yeah,
I guess it depends on, you know, how many
applications you have and how big they are,
right? Johnson: Yeah, that’s true. There’s also, though,
the sort of… we don’t want to create
the web version of DLL hell. I think we’ve all, or many
of us have been there too. And that’s a really
dangerous thing to– so we’re
trying to avoid that. man: My question, I’m using,
let’s say, third-party library. Sometimes now it’s pretty huge
and takes a lot of time. I’m using probably
10% of this library. How in the future are those
libraries gonna split, even libraries that guys
from Google are writing? And I suppose they optimize it
for splitting or… how can I deal with this…
as a developer? Spoon: So far, libraries
usually aren’t split up, for whatever reason. The standard GWT library doesn’t have a single call
to runAsync internally. man: No.
Spoon: It just happens. So usually–so far what
people have done is split off…components
at the application level. Things like…
like for the showcase demo, there’s a separate runAsync called for each page
of the demo. And for Google Wave,
there’s a separate split out for the Wave editor,
you know, so their application level
of chunks are being split out so far. But in the future,
I don’t know. Johnson: To clarify too,
there is a difference– Lex was saying there’s not
a gwt.runAsync call within our libraries. But that doesn’t mean
that your runAsync calls won’t be able to split apart
your usage of our libraries, of the GWT libraries. man: It’s up to me
to split kind of in… Johnson: Right.
But we feel like because the decision
about where to split is completely tied in
with the user experience, we should generally provide just a traditional code base, and you should make
the splitting decisions and then the compiler will– man: I’m talking about
simple stuff. Let’s say I’m using Table
or I’m using Tree. Tree has a bunch
of related classes that are supposed to come
together. That’s natural packaging. But because
it’s a huge library they have Tree,
they have, uh, Grid. They have all this stuff. So I need to split, because I know the Tree brings
all those classes together– Obvious.
Johnson: Right. man: Yes, so…
Johnson: Yeah, that’s, I mean, basically code splitting
will help. man: And I still need
to do this by myself, yes? Johnson: Until we can find
a way to help you more. We will be thinking of ways.
man: Yes. Please find. Johnson: Well, you have
to try it first, then you have to complain. man: Hi. Um…I have a question
about the…like the failure. Like in all the examples that you guys have been showing
yesterday and today, basically what you seem
to be showing is like
when it says onFailure, you just make a little
alert to the user, which I think we can all agree
is not really realistic for a running application. So I’m wondering
if there’s any like paradigms you guys
have thought of for dealing with that. ‘Cause the way that we
write code now, you assume that you have
all the code, and this is changing that. I’m wondering what
you guys are doing about that. Johnson: Ray, are you here?
In the audience? Ray Ryan? So Ray Ryan worked on the new
AdWords rewrite with G-W-T, and that’s something they
definitely have to think about. What tends to happen
in large applications is that there’s a centralized
error reporting mechanism. So, you know, we showed
window.alert just to be succinct,
obviously, but really, what that would
probably look like is, you know, call the global
app instance onError event. Or maybe there’s
some specialized you know,
method that specifies exactly the error id
or something. But typically,
it would funnel to one consolidated ErrorHandler
that would show some UI to indicate something
to the user. man: And what causes
the failure? It’s like if the file
doesn’t load, it’s like a network error? Johnson: Right.
Like a network error. There wouldn’t be
spurious errors due to the compiler
making a mistake or something like that.
man: No, never. Johnson: Right. Like that’s– code splitting wouldn’t work
at all reliably if the compiler
could screw that up. Spoon:
But it’s perfectly normal that you open your email reader
at the airport, and then you walk
down the aisle, and you reopen your laptop
at the same page, and you’re not on
the network anymore. Johnson: So it’s really
transient network failures. The server, you know,
crashed for a minute, but it comes back up ’cause
it’s a cluster or something. man: Yeah. I guess
I was just wondering if there’s any sort of
like general paradigm of like try again,
you know what I’m saying? Johnson: Yeah, have you
been thinking about that, like having–
Spoon: I have. Most apps I look at
don’t do a very good job if they lose network access. This is aside
from code splitting. If the server’s not there, they tend to give you
bad behavior. Good stuff I’ve seen, though, is they pop up a loading dialog
when they’re trying. Good ones will actually
remove that, once they realize
it’s not gonna work. A lot of them leave it running. man: Gmail.
man: Gmail. Spoon: Some cool apps will show whether they think the network
is there or not, so if a download fails,
they’ll toggle it off, and you can click on it again
to basically indicate, okay, I think I’ve got
a network now. But it’s all very speculative. Johnson: One thing I’ve actually
heard just, I don’t know, random email
that rides around Google is that sometimes you get
spurious failures and just a simple retry,
and it will succeed. Not with code splitting,
but even RPCs or XHR or HTTP requests
in general. Sometimes the best thing to do
is just try it twice before you declare
a failure, and the code fragment loader
could do that. I don’t know if it does, but…
Spoon: We might want to do that. Johnson: You know,
it’s a little dicey, though, to have your libraries
do too much magic without your control. So if we can find
the right balance, we probably would do that. Spoon: We did go so far as
if your app retries, the low-level support
will retry the network request. So if your onFailure retries,
it will actually try again. Johnson: It’s really exactly
the same issue when we talk about RPC, ’cause RPCs can fail
in exactly the same way. So if your app is one designed
so that retry makes sense, then…there you go. man: Thanks. Johnson: Yeah, this is a very
low-level, primitive thing. It’s intended to be the absolute
kernel of functionality, and then you’ll build,
you know, patterns and frameworks
on top of it. man: Yeah, so during the demo
you showed using the dependency
injection pattern. And that’s a wonderful pattern. We use it on
the server side a lot. But we haven’t been able
to use it on the GWT side because there didn’t seem
to be a good library for it, or at least, I mean,
I haven’t looked too hard. But is there any projects
out there that you know about? Johnson: Yeah, come
to Ray Ryan’s talk called Architecture: Best Practices. There is such a library. It’s based on Guice, which is Google’s
dependency injection. It’s called Gin. So Gin and Guice
are quite the pair. And Ray’s gonna talk through
how you can use that to build the kind of app
you’re talking about on the client. man: Okay, excellent. man: Uh, well, that was
exactly my question. We use Gin in 1.5
and Suko in 1.6, and I was just wondering
if there was a more general API? Johnson: Gin is the one,
it looks like the one that everybody
seems to be liking a lot, so… man: Thanks.
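The constructor-injection style from the SpellCheckingPlugin example earlier in the talk, which Gin automates, might look like this in miniature. View and the method names here are hypothetical; making View an interface is precisely what lets a test mock it out, which is the testing benefit Johnson mentioned.

```java
// Sketch of injecting the gateway instance via a constructor parameter.
// The plugin requires a View up front, so everything inside it can call
// the view directly, with no further createAsync calls sprinkled around.
interface View {
    String highlight(String word);
}

class SpellCheckingPlugin {
    private final View view; // injected once, as a prerequisite to construction

    SpellCheckingPlugin(View view) {
        this.view = view; // "just hangs onto it"
    }

    String flagTypo(String word) {
        return view.highlight(word); // direct call anywhere in the plugin
    }
}
```

In a test, a trivial fake View can stand in for the real UI, so the plugin's logic is exercised with no widgets and no code-splitting machinery involved.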
Johnson: Yeah. man: Hi. It strikes me
that the process of getting your–
what would you call it– your sections done
is an iterative process where you compile your app, you inspect the decisions
that were made with Story of Your Compile, and then you break the
dependencies where you can and then you try it again. And the first thought
that comes to my mind when I see
Story of Your Compile is, since this is
an iterative process and the issues are sort of– like with hosted mode
versus web mode, what effect does
Story of Your Compile have on compile time? ‘Cause it’s
an iterative process. It seems that’s important. Spoon: I didn’t quite–
clearly understand. It is a compile time tool,
actually. man: Yeah.
Spoon: So, um… man: But the point is it’s generating
a lot of data, right? How does it affect
my compile time when I turn on
Story of Your Compile? Spoon: Ah, well…[laughs] Katherine is laughing
in the back row, I see. Johnson: Some folks who–
Spoon: It increases– it increases the compile time. And we try to keep the increase
as small as possible and on different days. But then we wanted
to output more information, and it gets worse again. So…it is optional right now. If you want the fastest compile,
you disable it. man: I see. Johnson: Yeah, we would
love to make that fast, as fast as we can. And, you know, there’s now
the Google plugin for Eclipse, and it does sort of
incremental, you know, ongoing compiles as you make
source code changes. Maybe possibly
in the future one day, we can kind of build that
Story of Your Compile– man: Or maybe consider having
an analysis mode, compile mode where you don’t actually
generate code but you just do
the dependence analysis and write that out. Johnson: Right.
Basically more control over how much stuff
you track in. That’s a good point.
Spoon: That’s a good idea. Johnson: I think I really
see the gist of your question
and point now which is you don’t always want
all the data for different types
of use cases of using it. I gotcha. man: Do you have any suggestions on how to get your code splits
to preload in the background? Spoon:
Well, you do wanna do it, and it doesn’t do it
automatically, so… [laughter] A common trend with GWT is that we provide
a very low-level tool, and then we try to figure out
the best practices and then we import them back. Is that fair to say? As we figure out
what they are. Right now,
what you can do to… if you use
the async package pattern, what you can do is have
some part in your code that just does
a chain of these calls in the order
you’d like to preload them. So it’s a little bit
verbose to do that, but that’s a pretty
effective way that you preload
one at a time. And you can even have
these preloaders check if your app has
any indication of… there’s something the user
is actually doing, then you can even like
not do the preload yet until the app quiesces. Johnson: Like you could set–
you could set a timer and then cancel the timer if there’s any interaction
with the app. Spoon: That kinda thing, yeah. Johnson: ‘Cause you wouldn’t
want an RPC to be usurped by a pre-fetch
of a code fragment. And that’s why
we don’t do it automatically, even though it’s kind of obvious
that you might want to. But the network connection’s
so precious, you might have like
real-time RPCs that need to happen
in response to user action. You don’t wanna do that– you don’t wanna usurp that by
loading code in the background. Spoon: Especially if you
think about mobile devices. If you start downloading code,
the network is swamped. Johnson: Right. So it’s really
easy to saturate the connection with stuff that
you might not even use. man: No, I see where
you’re coming from. I’m just thinking like
from a user perspective if they know they’re gonna
walk away, to use your example, from where they have
connectivity, they might want to,
you know, click a button that says
let me go offline with this. Johnson:
Actually, I think the way we might wanna solve that
is through the HTML5 AppCache. There’s already a linker–
I think it’s in the incubator– that can take a GWT module and produce the AppCache
manifest and everything. We haven’t done this yet
with code splitting in mind, but it would be possible
to create such a manifest so that you can essentially
download the whole app, including its fragments,
to the AppCache locally. And then it would just–
it would be a fetch, but it would be a fetch
from the local AppCache, not over the network. That would be really cool.
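The chained-preload approach Spoon described a little earlier, combined with Johnson’s cancel-on-interaction idea, could be sketched like this. Fragment and the names are hypothetical stand-ins for runAsync/createAsync-style calls whose success callbacks trigger the next load.

```java
import java.util.List;

// Sketch of preloading split points one at a time, in priority order,
// and backing off if the user starts interacting, so prefetch never
// usurps the connection from a real RPC.
class Preloader {
    interface Fragment { void load(Runnable onLoaded); }

    private boolean cancelled;

    /** Chain-loads fragments[i], fragments[i+1], ... unless cancelled. */
    void preloadFrom(List<Fragment> fragments, int i) {
        if (cancelled || i >= fragments.size()) return;
        // Each load's completion callback starts the next one in the chain.
        fragments.get(i).load(() -> preloadFrom(fragments, i + 1));
    }

    /** Call on any user interaction (e.g. from a timer reset) to stop prefetching. */
    void cancel() { cancelled = true; }
}
```

Hooking cancel() up to user events gives the "wait until the app quiesces" behavior: a pending chain simply stops advancing once the user does something.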
man: Thanks. Johnson: Yeah. man: This is somewhat
of a vague question, but I’ll ask anyway. Going back to the point
you were just making about sort of…
modulating what you download based on user activity or potentially
what else is going on in the, uh,
in the application. What do you see as being
sort of the other type of input that you would put into
that decision? In other words, we talked–
in discussion about two images sort of
blocking up all the connections, other activities going on,
some information, maybe, you know, I was trying
to render something heavy. Do you see that
the G-W-T App itself would ever get more information
of what’s going on on the page or with the browser that would help drive
that sort of a thing? What kind of things
are going on there? I guess that’s my question.
Johnson: Right. Hardware telepathy interface.
That would be awesome. Spoon:
Yeah. Seeing the future. I always thought of it
as app input mostly, not browser input,
but I’m not sure. Johnson: Well, the thing
that’s tricky about it to me is that sometimes
if nothing is happening, it’s ’cause the user is deciding
what they’re just about to do. So just because they haven’t
done something for two seconds, that might be the indication that they are about to do
something, not that they’re not about
to do something. And so I’m not sure what
the right answer is at all. Spoon: You know, it would help
if you could tell a browser that is a low-priority download. Johnson: That would be cool.
Spoon: I don’t think you can. Johnson: That would be cool. Also, you know, the pressure is
being reduced somewhat. The newer generation
of browsers do allow many more
than two outgoing connections. So that will be, you know,
mitigated, I guess, somewhat. Well, we are out of time. Now, there’s one thing
that we need to do here, which is to ask you
to fill out the… this. Provide your feedback and… if you have anything to say,
we would really like to know it so we can make IO even better
for you next year. Thanks very much for coming.
Spoon: Thanks, guys. [applause]
