Technical Article
The Developer Insight Series, Part 1: Write Dumb Code -- Advice From Four Leading Java Developers
By Janice J. Heiss, April 2009
Over the years, developers have talked about their favorite code, funniest code, most beautiful code, how to write code, how not to write code, the obstacles to writing good code, what they love and hate about writing code, the process of writing code, and so on. In the process, they have provided a lot of insight that is worth preserving.
Contents
- Brian Goetz: Write Dumb Code
- Heinz Kabutz: Classify "Goodness" by Using Good Object-Oriented Design Patterns
- Cay Horstmann: Patterns Are Not Magic Potions
- Kirk Pepperdine: Dumb Code Is More Readable
- See Also
Brian Goetz: Write Dumb Code

Brian Goetz
Sun Microsystems technology evangelist Brian Goetz has, since 2000, published some 75 articles on best practices, platform internals, and concurrent programming. He is the principal author of the book Java Concurrency in Practice, a 2006 Jolt Award finalist and the best-selling book at the 2006 JavaOne conference. Prior to joining Sun in August 2006, he spent 15 years as a consultant at his software firm, Quiotix, where, in addition to writing about Java technology, he spoke frequently at conferences and gave presentations on threading, the Java programming language memory model, garbage collection, Java technology performance myths, and other topics.
In addition, he has consulted on kernel internals, device drivers, protocol implementations, compilers, server applications, web applications, scientific computing, data visualization, and enterprise infrastructure tools. Goetz has participated in a number of open-source projects, including the Lucene text search and retrieval system, and the FindBugs static analysis toolkit.
At Sun, he serves as a consultant on a wide range of topics that extend from Java concurrency to the needs of Java developers, and he contributes to the development of the Java platform.
How can developers write Java code that performs well?
"Often, the way to write fast code in Java applications is to write dumb code -- code that is straightforward, clean, and follows the most obvious object-oriented principles."
Brian Goetz Technology Evangelist Sun Microsystems
The answer may seem counterintuitive. Often, the way to write fast code in Java applications is to write dumb code -- code that is straightforward, clean, and follows the most obvious object-oriented principles. This has to do with the nature of dynamic compilers, which are big pattern-matching engines. Because compilers are written by humans who have schedules and time budgets, the compiler developers focus their efforts on the most common code patterns, because that's where they get the most leverage. So if you write code using straightforward object-oriented principles, you'll get better compiler optimization than if you write gnarly, hacked-up, bit-banging code that looks really clever but that the compiler can't optimize effectively.
So clean, dumb code often runs faster than really clever code, contrary to what developing in C might have taught us. In C, clever source code turns into the expected idiom at the machine-code level, but it doesn't work that way in Java applications. I'm not saying that the Java compiler is too dumb to translate clever code into the appropriate machine code. It actually optimizes Java code more effectively than a C compiler optimizes C code.
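To make the point concrete, here is a minimal sketch (the class and method names are illustrative): the "clever" version hand-optimizes a multiplication into a shift, while the "dumb" version states its intent directly. A modern JIT performs this strength reduction itself, so the cleverness buys nothing and costs readability.

public class Scale {
    // Clever: relies on the reader knowing that << 3 means "times 8".
    static int scaleClever(int x) {
        return x << 3;
    }

    // Dumb: the JIT compiles this to the same shift instruction.
    static int scaleDumb(int x) {
        return x * 8;
    }
}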
My advice is this: Write simple straightforward code and then, if the performance is still not "good enough," optimize. But implicit in the concept of "good enough" is that you need to have clear performance metrics. Without them, you'll never know when you're done optimizing. You'll also need a realistic, repeatable testing program in place to determine if you're meeting your metrics. Once you can test the performance of your program under actual operating conditions, then it's OK to start tweaking, because you'll know if your tweaks are helping or not. But assuming, "Oh, gee, I think if I change this, it will go faster," is usually counterproductive in Java programming.
Because Java code is dynamically compiled, realistic testing conditions are crucial. If you take a class out of context, it will be compiled differently than it will in your application, which means performance must be measured under realistic conditions. So performance metrics should be tied to indices that have business value -- transactions per second, mean service time, worst-case latency -- factors that your customers will perceive. Focusing on performance characteristics at the micro level is often misleading and difficult to test, because it's hard to make a realistic test case for some small bit of code that you've taken out of context.
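As a minimal sketch of such a repeatable test (with a hypothetical doTransaction() standing in for the real work), the harness below warms up the code so the dynamic compiler can profile and compile it before the timed run begins:

public class TransactionBenchmark {
    static final int WARMUP = 100000;      // iterations before timing starts
    static final int MEASURED = 1000000;   // iterations actually measured

    public static void main(String[] args) {
        // Warmup phase: gives the JIT a chance to compile doTransaction(),
        // so the timed phase measures steady-state, compiled code.
        for (int i = 0; i < WARMUP; i++) {
            doTransaction();
        }
        long start = System.nanoTime();
        for (int i = 0; i < MEASURED; i++) {
            doTransaction();
        }
        long elapsed = System.nanoTime() - start;
        System.out.println("mean service time: "
                + (elapsed / MEASURED) + " ns");
    }

    static void doTransaction() {
        // stand-in for the real transaction under test
    }
}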
In 2003, you said, "Developers love to optimize code and with good reason. It is so satisfying and fun. But knowing when to optimize is far more important. Unfortunately, developers generally have horrible intuition about where the performance problems in an application will actually be." Do you still believe this?
It's truer today than it was four years ago, and more true for Java developers than it ever was for C developers. Most performance tuning reminds me of the old joke about the guy who's looking for his keys in the kitchen even though he lost them in the street, because the light's better in the kitchen. We're intimately familiar with the code that we write. The external services that we depend on, whether libraries or external agents, such as databases and web services, are out of sight and out of mind. So when we see a performance problem, we tend to think about places in our code that we can imagine being performance problems. But often, that's not the source of the performance problem -- it's somewhere else in the application's architecture.
Most performance problems these days are consequences of architecture, not coding -- making too many database calls or serializing everything to XML back and forth a million times. These processes are usually going on outside the code you wrote and look at every day, but they are really the source of performance problems. So if you just go by what you're familiar with, you'll be looking for your keys in the kitchen. This is a mistake that developers have always been subject to, and the more complex the application, the more it depends on code you didn't write. Hence, the more likely it is that the problem is outside of your code.
Performance analysis is much harder in the Java programming language than in C, where it is more straightforward because C bears a significant similarity to assembly language. The mapping from C code to machine code is fairly direct, and to the extent that it isn't, the compiler can show you the machine code.
Java applications don't work like C. The runtime constantly modifies the code based on changing conditions and observations. It starts out interpreting the code and then compiles it. It may invalidate the compiled code and recompile it based on information from profiling data or from loading other classes. As a result, the performance characteristics of your code will vary dramatically depending on the environment the code runs in. That makes it harder to say "This code is faster than that code" because you have to account for more context to make a reasonable performance analysis. There are also nondeterministic factors such as the timing and nature of compilation, the interaction of the loaded classes, and garbage collection. So it's harder to do the kind of microperformance optimization with Java code that one can do in C.
At the same time, the fact that the compilation is done at execution time means that the optimizer has far more information to work with than the C compiler does. It knows what classes are loaded and how the method being compiled has actually been used. As a result, it can make far better optimization decisions than a static compiler could. This is great for performance but means it's harder to predict the performance of a given block of code.
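A sketch of the kind of decision Goetz describes (the class names here are illustrative): while only one Shape implementation has been loaded, the call below is monomorphic, so HotSpot can devirtualize and even inline it; loading a second implementation later may force the runtime to throw that compiled code away and recompile. Running with the real HotSpot flag -XX:+PrintCompilation makes such compilations and recompilations visible.

interface Shape {
    double area();
}

class Circle implements Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

public class Totals {
    static double total(Shape[] shapes) {
        double sum = 0;
        // Monomorphic while Circle is the only loaded Shape: a candidate
        // for devirtualization and inlining by the dynamic compiler.
        for (Shape s : shapes) {
            sum += s.area();
        }
        return sum;
    }
}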
Heinz Kabutz: Classify "Goodness" by Using Good Object-Oriented Design Patterns

Heinz Kabutz
Java Champion Heinz Kabutz was raised in Cape Town, South Africa, where he developed a love of programming in junior high school through his explorations on a ZX Spectrum computer. He received a B.S. from the University of Cape Town, and at 25, a Ph.D., both in computer science. In 1998, he started his own software development company, Java Specialists, where he writes contract software, consults, and offers courses on Java technology and design patterns.
Kabutz is best known as the creator of the free Java Specialists' Newsletter, targeted to expert Java developers.
I asked Kabutz to respond to Brian Goetz's advice to write "dumb" code.
In my experience, good object-oriented design tends to produce faster and more maintainable Java code. But what is good code? I find it easier to classify "goodness" by using good object-oriented design patterns. I usually encourage software development companies to train all of their programmers in design patterns, from the most junior to the wise architect.
Teams that employ good design patterns find it much easier to tune their code, which will be less brittle and require less copying and pasting. The java.util.Arrays class is a good example of bad code. It contains two mergeSort(Object[]) methods, one taking a Comparator, the other using Comparable. The methods are virtually identical and could have been merged into one with the introduction of a DefaultComparator that would use the Comparable method. The strategy pattern would have avoided this design flaw.
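A sketch of the DefaultComparator Kabutz describes: it adapts the Comparable case to the Comparator case, so a single mergeSort taking a Comparator could serve both call sites. Raw types are kept here to match the pre-generics API under discussion.

import java.util.Comparator;

public class DefaultComparator implements Comparator {
    public int compare(Object o1, Object o2) {
        // Delegate to the elements' own natural ordering.
        return ((Comparable) o1).compareTo(o2);
    }
}

The Comparable-based overload could then simply delegate to the Comparator-based one, passing new DefaultComparator() as the strategy.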
"What is good code? I find it easier to classify 'goodness' by using good object-oriented design patterns. I usually encourage software development companies to train all of their programmers in design patterns, from the most junior to the wise architect."
Heinz Kabutz Java Champion and Creator of the Java Specialists' Newsletter
Coding by Ctrl-C and Ctrl-V can also hide performance problems. Let's assume you have several algorithms, all virtually identical, but in different parts of your system. If you measure performance, you might find that each of the algorithms takes 5 percent of your CPU, but if you add them up, they amount to 20 percent. Good design allows you to more easily change code and detect bottlenecks. Let me give an example to prove Brian Goetz's point.
In the early days of Java programming, I sometimes resorted to "clever" coding. For example, when I was optimizing a system written by a company in Germany, I changed the String addition to use StringBuffer, after we had optimized the architecture and design of the system and wanted to improve things a bit. Don't read too much into microbenchmarks. Performance advantages come from good design and an appropriate architecture.

We start with a basic concatenation based on +=:
public static String concat1(String s1, String s2, String s3,
                             String s4, String s5, String s6) {
    String result = "";
    result += s1;
    result += s2;
    result += s3;
    result += s4;
    result += s5;
    result += s6;
    return result;
}
String is immutable, so the compiled code will create many intermediate String objects, which can strain the garbage collector. A common remedy is to introduce StringBuffer, causing it to look like this:
public static String concat2(String s1, String s2, String s3,
                             String s4, String s5, String s6) {
    StringBuffer result = new StringBuffer();
    result.append(s1);
    result.append(s2);
    result.append(s3);
    result.append(s4);
    result.append(s5);
    result.append(s6);
    return result.toString();
}
But the code is becoming less legible, which is undesirable.
Using JDK 6.0_02 and the server HotSpot compiler, I can execute concat1() a million times in 2013 milliseconds, but concat2() in 734 milliseconds. At this point, I might congratulate myself for making the code three times faster. However, the user won't notice it if 0.1 percent of the program becomes three times faster.
Here's a third approach that I used to make my code run faster, back in the days of JDK 1.3. Instead of creating an empty StringBuffer, I sized it to the number of required characters, like so:
public static String concat3(String s1, String s2, String s3,
                             String s4, String s5, String s6) {
    return new StringBuffer(
            s1.length() + s2.length() + s3.length() + s4.length() +
            s5.length() + s6.length()).append(s1).append(s2).
            append(s3).append(s4).append(s5).append(s6).toString();
}
I managed to call that a million times in 604 milliseconds -- even faster than concat2(). But is this the best way to add the strings? And what is the simplest way?
The approach in concat4() illustrates another way:
public static String concat4(String s1, String s2, String s3,
                             String s4, String s5, String s6) {
    return s1 + s2 + s3 + s4 + s5 + s6;
}
You can hardly make it simpler than that. Interestingly, in Java SE 6, I can call the code a million times in 578 milliseconds, which is even better than the far more complicated concat3(). The method is cleaner, easier to understand, and quicker than our previous best effort.
Sun introduced the StringBuilder class in J2SE 5.0, which is almost the same as StringBuffer, except it's not thread-safe. Thread safety is usually not necessary with StringBuffer, since it is seldom shared between threads. When strings are added using the + operator, the compiler in J2SE 5.0 and Java SE 6 will automatically use StringBuilder. If StringBuffer is hard-coded, this optimization will not occur.
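In other words, the compiler turns concat4() into roughly the following -- a sketch of the idea, not the literal generated code:

public static String concat4Desugared(String s1, String s2, String s3,
                                      String s4, String s5, String s6) {
    // What javac emits for a chain of + operators since J2SE 5.0.
    return new StringBuilder().append(s1).append(s2).append(s3)
            .append(s4).append(s5).append(s6).toString();
}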
When a time-critical method causes a significant bottleneck in your application, it's possible to speed up string concatenation by doing this:
public static String concat5(String s1, String s2, String s3,
                             String s4, String s5, String s6) {
    return new StringBuilder(
            s1.length() + s2.length() + s3.length() + s4.length() +
            s5.length() + s6.length()).append(s1).append(s2).
            append(s3).append(s4).append(s5).append(s6).toString();
}
However, doing this prevents future versions of the Java platform from automatically speeding up the system, and again, it makes the code more difficult to read.
Cay Horstmann: Patterns Are Not Magic Potions

Cay Horstmann
Java Champion Cay Horstmann grew up in northern Germany and attended the Christian-Albrechts-Universität in Kiel, a harbor town by the Baltic Sea. With an M.S. in computer science from Syracuse University and a Ph.D. in mathematics from the University of Michigan, he is now a professor of computer science at San Jose State University in California. He was formerly a VP and CTO of a dot-com startup and, before that, owner of a successful company that sold a DOS program for editing scientific documents. In his spare time, Horstmann consults in Internet programming.
I agree with Brian Goetz. I learned over the years that it never pays to optimize code until after you profile. We all fret over caching values rather than recomputing them, eliminating layers, and so on. More often than not, it makes little difference in performance but introduces a huge headache in debugging.
I saw that you asked Heinz Kabutz the same question. He says, "I usually encourage software development companies to train all of their programmers in design patterns, from the most junior to the wise architect." I'm a bit uncomfortable with this. I agree that patterns should be a part of everyone's education, but I've had too many junior programmers sprinkle patterns over their code in the hope of improving it. Patterns are not magic potions, and it takes quite a bit more experience than is commonly acknowledged to use them wisely.
"I've had too many junior programmers sprinkle patterns over their code in the hope of improving it. Patterns are not magic potions, and it takes quite a bit more experience than is commonly acknowledged to use them wisely."
Cay Horstmann Java Champion and Professor of Computer Science at San Jose State University
Take the Java I/O library, which is imbued with the value of the decorator pattern. For example, BufferedReader is a decorator, and to get buffered reading from a file, you do this:
Reader reader = new BufferedReader(new FileReader("foo.txt"));
What if you also want lookahead? Now you need to insert a PushbackReader into the decorator chain.
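The resulting chain looks something like this -- a sketch using PushbackReader's default one-character pushback buffer to supply the lookahead:

PushbackReader reader =
        new PushbackReader(new BufferedReader(new FileReader("foo.txt")));
int c = reader.read();   // look ahead one character
reader.unread(c);        // push it back for the next read
                         // (assumes the stream was not at end-of-file)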
What a pain! I would have preferred more usability and less pattern dogma. In C++, buffering and lookahead are part of every file stream, which is so much more convenient in practice.
Kirk Pepperdine: Dumb Code Is More Readable

Kirk Pepperdine
Java Champion Kirk Pepperdine is a primary contributor to javaperformancetuning.com, which is widely regarded as the premier site for Java performance tuning information, and is the coauthor of Ant Developer's Handbook. He has been heavily involved in application performance since the beginning of his programming career and has tuned applications in a variety of languages: Cray Assembler, C, Smalltalk, and, since 1996, Java technology. He has also worked on building middleware for distributed applications.
He has worked with Cray supercomputers at the Canadian Department of Defense, as a consultant at Florida Power & Light, and as a senior consultant with GemStone Systems. He is currently an independent consultant and an editor at TheServerSide.com.
I asked Pepperdine to respond to Brian Goetz.
There are really two questions here: First, how does writing dumb code help with performance? And second, how does writing well-structured code help with performance? I'll answer the "dumb code" one first.
While we write code to run on a machine, the primary consumers of code are humans. Dumb code tends to be more readable and hence more understandable. If we can iron out the twists, then we have a better chance of avoiding the dumb mistakes that clever code may hide from us.
IBM's MMI (Mixed Mode Interpreter) and the JIT are tools that work to optimize our code for us through dynamic profiling. Complex code tends to confuse these tools, so that they provide either suboptimal optimizations or no optimizations at all.
"Because we're trained to look at code, when something goes wrong, we look at code. And no matter how good our code is, we can always find something wrong or ugly that's begging to be fixed. Finding ugly code will throw even the best developers off track -- because the code is ugly, they will guess that it's the source of the problem."
Kirk Pepperdine Java Champion and Independent Consultant
We can see this with a well-written microperformance benchmark. Most of the code in a well-written microbenchmark is there to confuse the JIT so that it doesn't translate our code into something that no longer measures the effect we're interested in. While a microbenchmark may be a pathological case, the same sorts of things can happen in our real application code.
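A sketch of the classic trap such code guards against (compute() is a hypothetical stand-in for the work under test): if the benchmark never uses its result, the JIT may prove the loop useless and eliminate it, so the timing measures nothing. The final println exists purely to keep the measured work alive.

long start = System.nanoTime();
long sum = 0;
for (int i = 0; i < 1000000; i++) {
    sum += compute(i);   // the work we actually want to measure
}
long elapsed = System.nanoTime() - start;
// Without this use of 'sum', dead-code elimination could remove the loop.
System.out.println(sum + " computed in " + elapsed + " ns");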
Another reason to write dumb code is that most of the complexities are due to some optimization that everyone thinks is needed. In many cases, these optimizations are premature. While I'm all for performance planning, I'm dead set against premature optimizations. When is a plan a plan, and when is it premature? I guess it's a little like the difference between art and porn: You'll know it when you see it.
Which brings us to the second question -- how does well-structured code help performance? Most performance problems can only be solved by adding or altering code in the application. I've found that if the code is well structured and loosely coupled, if the classes are cohesive, and the code uses delegation, I can avoid the whack-a-mole problem when I start to change code.
This problem can also be called shotgun refactoring -- if I make a change in one part of the application, other seemingly random parts of the application will break. And as I fix the breakage, I create a whole series of new breaks, and so on.
So how can we avoid this? First, follow the DRY -- Don't Repeat Yourself -- principle. Let's look at collections as an example. We would traditionally manage a query against a collection in Java by creating an iterator:
....
Iterator iter = customers.iterator();
while (iter.hasNext()) {
    Customer c = (Customer) iter.next();
    doStuff(c);
}
....
Here's the trap: If another part of our application needs to doStuff(), it's likely that the code will get repeated, either as a cut and paste or as simply rewritten. Either way, you've just violated DRY, which has numerous consequences. You've also neglected another design principle: Delegate, don't assume (responsibility).
By not delegating, you risk violating DRY. You certainly violate the principle of information hiding. Think of it this way: By doing a get, you've not only violated encapsulation but have tightly coupled your class to the callee. When you violate encapsulation by exporting state, you are forced to also export the application logic needed to manage that state and hence violate DRY. So you can see that this is wrong from many different perspectives.
Here's the big performance hit: Suppose the data structure that is being used is suboptimal, and suppose you recognize that it needs to be changed. If you've exported state and behavior using iterators or whatever, you've created the whack-a-mole problem. Take a look at what happens when we delegate:
import java.util.HashMap;

public class Customers {
    // Raw HashMap, keyed by customer id; the wrapper class hides it.
    HashMap allCustomers = new HashMap();

    public void putCustomer(Customer customer) {
        allCustomers.put(customer.getId(), customer);
    }

    public Customer getCustomer(String id) {
        return (Customer) allCustomers.get(id);
    }

    public Customers getCustomers(String pattern) {
        return doSomeStufftoGetACollectionOfCustomer(pattern);
    }
}
Here we have more code. But this is often the case when you demonstrate something in a tiny contrived example. You only see the benefits in large code bases, which, of course, make for horrible examples.
That aside, we have a class that wraps our intended collection. Our new class is semantically in tune with our domain. How many problem domains contain the word HashMap, and how many domains contain Customers? So we've corrected our vocabulary.
Next, we've provided a home for our queries, at least the queries that we've anticipated. If we get other patterns of queries, it may be possible to add in a secondary collection that is keyed differently. Think of it as adding another index on a database table.
The beauty of it is that because I've encapsulated and delegated the calls, I'm free to make these changes unencumbered by what my clients may or may not know. In other words, no whack-a-mole. Furthermore, every client will realize the performance benefit of the optimization. So this is a win all the way round.
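As a sketch of that secondary index, Customers could maintain a second map keyed by name (assuming a hypothetical Customer.getName()), and no client would need to change:

public class Customers {
    HashMap allCustomers = new HashMap();      // keyed by id
    HashMap customersByName = new HashMap();   // the "secondary index"

    public void putCustomer(Customer customer) {
        // Both indexes are maintained behind the encapsulation boundary.
        allCustomers.put(customer.getId(), customer);
        customersByName.put(customer.getName(), customer);
    }

    public Customer getCustomerByName(String name) {
        return (Customer) customersByName.get(name);
    }
}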
I intentionally didn't use generics. Using this development pattern, I don't need compile-time type checking on the collection(s), because the class API provides all the safety that's needed. I've always contended that it's a mistake to expose a raw collection in an API, and because of this, I've never felt that the primary use case for generics is justified. But that's another discussion.
So what happens if someone needs a query that we haven't provided? Closures might seem like a good solution, but they would have to be implemented very carefully or we could "closure" ourselves into a whack-a-mole problem.
The closure would have to access only those elements that could be classified as nonprimitive, by which I mean those elements that can function without knowledge of the underlying structure. This is in contrast to a primitive method, or a method that needs to understand the underlying data structure of the class. The point is to hide the implementation details from things outside of our domain.
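One careful shape for such a closure, sketched below with illustrative interface and method names: the caller's logic sees each Customer, but the underlying HashMap never escapes.

public interface CustomerBlock {
    void apply(Customer customer);
}

// Added to the Customers class: iterate internally, expose nothing
// about how the customers are stored.
public void eachCustomer(CustomerBlock block) {
    for (Iterator iter = allCustomers.values().iterator(); iter.hasNext();) {
        block.apply((Customer) iter.next());
    }
}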
To summarize, performance tuning often requires that I touch code, which is not much different from refactoring. All of the arguments that the Agile crowd put forth -- loose coupling, simple code, following good design patterns, having good unit testing in place, automated builds, and so on -- also apply to performance tuning.
See Also
- Brian Goetz Blog
- Heinz Kabutz Java Specialists' Newsletter
- Kirk Pepperdine Interview and Blog
- Top Java Developers Offer Advice to Students
- More Effective Java With Google's Joshua Bloch