What a pity. There are several strict JVM languages, but not enough of them have compound values, i.e., compound data that can be manipulated without being encumbered by pervasive object identities.
Laziness by default does not mean you can't have strict primitive types, which Eta does. Moreover, if you ever want to avoid object references in performance-sensitive contexts, you can allocate off-heap memory and work with it directly via the Ptr mechanism (which is backed by DirectByteBuffers). GHC 8 recently gained compound values via an extension called UnboxedSums. It would be tricky to implement on the JVM in Eta, but not impossible.
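For illustration, here is a minimal sketch of what off-heap access looks like through the standard Haskell FFI marshalling API, which Eta's Ptr mechanism mirrors (the buffer size, element type, and function name here are my own example, not from Eta's docs):

```haskell
import Foreign.Marshal.Alloc (allocaBytes)
import Foreign.Storable (peekElemOff, pokeElemOff, sizeOf)

-- Allocate a small off-heap buffer, write four Ints into it,
-- and read them back -- no heap objects for the elements.
sumOffHeap :: IO Int
sumOffHeap =
  allocaBytes (4 * sizeOf (0 :: Int)) $ \p -> do
    mapM_ (\i -> pokeElemOff p i (i * i :: Int)) [0 .. 3]
    fmap sum (mapM (peekElemOff p) [0 .. 3])  -- 0+1+4+9 = 14
```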
In a lazy language, a strictness annotation doesn't change the type of the annotated thing - bottom still inhabits that type.
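Concretely, in Haskell (function names here are mine): the bang pattern forces the argument, but the type is still plain `Int`, so a bottom-valued call still typechecks:

```haskell
{-# LANGUAGE BangPatterns #-}

strictId :: Int -> Int
strictId !x = x   -- strict in x, but the type is unchanged

ok :: Int
ok = strictId 42               -- fine

boom :: Int
boom = strictId (error "bad")  -- well-typed; fails only at runtime
```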
On the other hand, in a strict language with a laziness monad, the difference between, say, `foo -> bar` and `foo lazy -> bar` (ML syntax) is as clear as daylight. The types tell you what's going on.
OTOH, most of the things I actually use laziness for in Haskell do not fall into the `foo lazy -> bar` category, because they involve things like floating out IO actions or renaming things, and in fact changing the type makes them more cumbersome, e.g. going from
`if x then error "bad" else thing`
to
`let z = error "bad" in if x then z else thing`
That isn't a valid transformation in any strict language. This general idea is pervasive in the code I write: the act of binding something is immaterial to its evaluation. I think we use this style a lot more than we give ourselves credit for in Haskell. You can of course wrap this in a thunk, and some of the usage style can be approximated by a monadic type, but doing that pervasively is cumbersome and annoying. Being able to structure code this way is the biggest benefit I get from laziness. You also end up duplicating a lot of strict-vs-lazy code whichever way you pick, since "crossing the streams" is generally either forbidden by the type system (as in your example) or relies on different implicit characteristics (as in Haskell). It's not really clear to me that this is a win overall.
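To spell the point out in runnable form (the names are mine), the hoisted version is the same program in Haskell, because binding `z` never evaluates it:

```haskell
check :: Bool -> Int -> Int
check x thing = let z = error "bad" in if x then z else thing

-- check False 7 == 7: binding z is immaterial to its evaluation.
-- In a strict language the let itself would raise before the branch.
```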
I'm not opposed to strict languages, but IMO, if you want a strict language, you're better off forgoing the whole thing: use lambdas (thunks) where needed for small delays, and a good macro system to define control structures for everything else, rather than trying to shoehorn laziness into your types or whatever. The occasional thunk you need isn't really the benefit. Being able to decouple definition from evaluation is.
In any language, not just a lazy one, “let x = v in t” is beta-equivalent to “t[x:=v]” whenever “v” is beta-equivalent to a value in the language. Of course, in a call-by-need (or pure call-by-name) language, every term is a value. In a call-by-value language, some terms are not values (and this is a feature).
Yeah, and it also allows addressing beyond 2GB. I've been using DirectByteBuffers for the sake of compatibility with Android, which I don't think gives you access to the Unsafe API. I'll see if I can add a compiler option to use Unsafe on the JVMs that have it in the future. That'll probably do wonders for performance, since many Unsafe methods are JVM intrinsics. Unsafe is currently used in the Eta RTS for atomic CAS operations (which should be compatible with Android) on some of the RTS data types.
Is Eta strict or lazy?