After I published
part 0, I was criticized for being biased. I was told that the title suggested neutrality while the actual article was not neutral, and that "Why C# Is Better than Java" would have been a better title. In fact, "Java vs. C#" was just a working title that I forgot to change, and I agree that the suggested title would have been better. I was also criticized for exaggerating the damage caused by the Java approach and overstating the benefits of the C# approach. While that may be true, it should be clear that these are real problems and I am not making them up. Nor do I pretend to have discovered them: I have read about most of the issues listed here in interviews and articles by both Java and C# designers. The reader is also advised to read part 0, where my motivation is clearly described.
And now let's get into today's topic: generics.
Java generics are broken. Admittedly this is more a flaw in the runtime than in the language, but the two are closely related. Consider the following code:
public static <T> T foo(T arg) {
    try {
        Object o = "42";
        T t = (T) o;
        return t;
    } catch (ClassCastException ex) {
        return arg;
    }
}
…
Object o = foo(42);
System.out.println(o.getClass());
Oh my Google! I just managed to return a String from a method that returns an Integer! Even the die-hard Java fans should admit that this means something is broken. What is broken is the way generics are implemented: there are no actual types in place of generic type arguments at runtime, only Object. This is why the cast to T does not actually exist, and this is why no ClassCastException is caught. If the result were used in a place where an Integer was expected, there would be a ClassCastException on a line with no cast at all. But hey, at least we got a warning!
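To see the whole failure in one place, here is a self-contained sketch reusing the `foo` method from above (the class name `ErasureDemo` is mine, just for illustration):

```java
public class ErasureDemo {
    // Same method as above: the cast to T is erased, so no
    // ClassCastException is thrown inside foo itself.
    @SuppressWarnings("unchecked")
    public static <T> T foo(T arg) {
        try {
            Object o = "42";
            T t = (T) o;           // erased; effectively a no-op at runtime
            return t;
        } catch (ClassCastException ex) {
            return arg;            // never reached
        }
    }

    public static void main(String[] args) {
        Object o = foo(42);
        System.out.println(o.getClass());   // class java.lang.String

        try {
            // The compiler inserts a checkcast to Integer at this call site,
            // so the exception surfaces on a line with no visible cast.
            Integer i = foo(42);
            System.out.println(i);
        } catch (ClassCastException ex) {
            System.out.println("ClassCastException at the call site");
        }
    }
}
```

Note where the exception appears: not inside `foo`, where the cast is written, but at the assignment to `Integer i`, where no cast is visible at all.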
I believe the Java designers implemented generics this way so that existing runtime implementations would not require the additional, complex machinery needed for real generics. They were probably afraid that it would slow the adoption of Java 5 on platforms like mobile phones. By implementing generics only at the compiler level, they allowed runtime implementations to stay up to date with minimal effort. Maybe this was the right decision for the platform as a whole, but it surely hurt the language. On the bright side, this can be fixed in the runtime, and such a fix is being considered for Java 7.
.NET solves this problem by having real generics.
Warning: If you do not know what generics covariance and contravariance are and how they work in C# and Java, you may want to find out before reading on. You can easily find an explanation of the concept on Wikipedia, along with many articles describing how each language solved the problem. You can also read my explanation of the concept and how it works in C#.
This is not all on generics. The way covariance and contravariance are handled in Java increases the complexity of the language. Of course, every feature added to a language increases its complexity, but this case is especially interesting for two reasons. First, covariant and contravariant generics are hard to understand on their own; the concept can make your head hurt. I believe it is much more complex than concepts like lambdas or multiple inheritance, which is why I think that in this particular case a simpler approach should be favored. Second, Java is a language that has always been conservative and has taken pride in its simplicity. In fact, the main criticism Java proponents hold against C# is its complexity. Multiple inheritance is not in Java because it is considered complex, and the addition of lambdas to the language has been debated for years because (according to the opponents of the idea) they are too complex. In light of this, implementing a feature like generic variance in a complex way should be considered a mistake.
So why do I claim that Java's approach is complex? What is Java's approach anyway? It is so-called use-site variance: when you use an interface or class to define a variable or a parameter, you state the variance with a special syntax:
Iterable<Derived> derivedIterable = new ArrayList<Derived>();
Iterable<? extends Base> baseIterable = derivedIterable;
(extends is used for covariant and super for contravariant type arguments)
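Both wildcard forms can be shown in one runnable sketch (the `Base` and `Derived` classes are hypothetical stand-ins, as in the snippet above):

```java
import java.util.ArrayList;
import java.util.List;

public class VarianceDemo {
    static class Base {}
    static class Derived extends Base {}

    public static void main(String[] args) {
        // Covariance (? extends): a producer of Derived may be
        // read through a reference typed as a producer of Base.
        List<Derived> deriveds = new ArrayList<>();
        deriveds.add(new Derived());
        Iterable<? extends Base> producer = deriveds;
        for (Base b : producer) {
            System.out.println(b.getClass().getSimpleName()); // Derived
        }

        // Contravariance (? super): a consumer of Base may
        // accept Derived elements.
        List<Base> bases = new ArrayList<>();
        List<? super Derived> consumer = bases;
        consumer.add(new Derived());
        System.out.println(bases.size()); // 1
    }
}
```

Note that the variance annotations appear at every use site; the `List` and `Iterable` declarations themselves say nothing about variance.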
This allows basically the same things as the C# approach, but it puts the responsibility in the hands of the developer who uses the interface instead of the developer who designed it. I think we can agree that the people who design variant interfaces are fewer than the people who use them. What is more, people who design variant interfaces are on average more qualified than people who use them. After all, variance is mostly used with collections, and those are in the Java class library itself. This is why I believe C#'s approach of leaving the responsibility and complexity in the hands of the interface developers is better, especially when the language tries to be conservative.
That being said, the Java approach has some benefits. First of all, it increases flexibility by allowing variance to be used when the interface developer forgot, or did not know how, to build variance into the interface itself. Java also allows classes to exhibit variance; C# only supports it for interfaces and delegates due to limitations of the underlying CLR. So the Java implementation of generics actually has some benefits. However, I believe that the thing that justifies complicating the language with variance is interfaces, or to be more specific, one interface in particular: IEnumerable (Iterable in Java). Everything else will rarely be used in scenarios complicated enough to justify introducing additional syntax into the languages. Maybe I am not entirely correct, and C#'s delegates and Java's interfaces in the Observer pattern can benefit from variance when used to handle events, but neither of those is a class anyway.
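The point about classes can be illustrated in Java directly: a wildcard works on a concrete class such as `ArrayList`, not only on an interface (again using hypothetical `Base`/`Derived` classes):

```java
import java.util.ArrayList;

public class ClassVarianceDemo {
    static class Base {}
    static class Derived extends Base {}

    public static void main(String[] args) {
        ArrayList<Derived> deriveds = new ArrayList<>();
        deriveds.add(new Derived());

        // The wildcard applies to the concrete class ArrayList itself,
        // something C#'s declaration-site variance cannot express.
        ArrayList<? extends Base> covariant = deriveds;
        System.out.println(covariant.size()); // 1
    }
}
```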
Update:
Part 2 has been published.