The setup
I used to take it for granted that type inheritance in Java is transitive, that is,

    B extends A

and

    C extends B

imply that C is a subtype of A as well as of B. As long as A, B and C denote plain, non-generic classes, this is true to the best of my knowledge. But as I learned, admittedly only recently, things get a bit more interesting once generic type expressions come into play:
Let's make A a self-referential generic type like

    interface A<T extends A<T>> { T x(); }
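For the rest of this post I will assume that B and C sit on top of this generic A roughly as follows (a minimal sketch; the method body is my own illustration):

    class B implements A<B> {
        public B x() { return this; }   // T is bound to B, so x() returns B
    }

    class C extends B { }               // inherits x(), which still returns B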
The problem
Suppose we now want to introduce a parallel class hierarchy that depends on our A, B, C hierarchy, like this:

    interface D<T extends A<T>> ...
    class E implements D<B> ...
    class F implements D<C> ...
Now the compiler complains about the definition of class F, claiming that C is not a valid substitute for the parameter T extends A<T> of type D. At first this seems odd: type B is allowed, while type C, which is derived from B, is not allowed in the very same type expression with the same upper bound A.

After some thinking, this actually makes sense. What is the purpose of a self-referential type like A? Usually, to have some method or attribute that is declared in terms of the type of the intended subtypes (like x above). For the first subtype, B, the type parameter T is bound to B itself, and in B's implementation of x the return type is naturally B. The same holds, of course, for any subtype of B, in particular for C. That, in turn, means that C is not a legal direct implementation of A: C is not self-referential any more, because its inherited member x still returns a B. In other words, C is a subtype of A<B> (via B), but not of A<C>, and the bound T extends A<T> demands exactly the latter.
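A tiny snippet makes this tangible (a hypothetical Demo class, using the B and C sketched above):

    public class Demo {
        public static void main(String[] args) {
            C c = new C();
            B stillB = c.x();   // the inherited x() returns B, not C
            A<B> asAB = c;      // fine: C is an A<B>, via B
            // A<C> asAC = c;   // does not compile: C does not satisfy T extends A<T>
        }
    }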
A resolution
Once the real cause of the problem is clear, the solution is easy: the definition of D needs to be changed so that A's type parameter may refer to an arbitrary supertype of D's parameter:

    interface D<T extends A<? super T>>

Now E and F compile just fine as defined before. For F, the bound becomes A<? super C>, and C satisfies it: C is a subtype of A<B>, and B is a supertype of C.
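For completeness, here is the whole example with the relaxed bound in one piece (class bodies are my own illustration, not part of the original snippets):

    interface A<T extends A<T>> { T x(); }

    class B implements A<B> {
        public B x() { return this; }
    }

    class C extends B { }

    interface D<T extends A<? super T>> { }

    class E implements D<B> { }   // ok: B is an A<B>, and B is a supertype of B
    class F implements D<C> { }   // ok now: C is an A<B>, and B is a supertype of C

Incidentally, this is the same trick as the familiar <T extends Comparable<? super T>> bound on methods like Collections.sort: subclasses typically inherit Comparable<Base> rather than implementing Comparable<Self>, and the relaxed bound keeps them usable.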