For 2.5 years, from Feb 2022 to Aug 2024, I worked as a contractor at Generation Lambda, where I was part of the team that built Axo, a decentralised trading protocol on the Cardano blockchain. Initially I joined as a Senior Haskell Developer, but transitioned into a Tech Lead position after ~8 months.
Because Cardano, its libraries and its smart contract language are built with Haskell, we decided to use Haskell as well; at the time we started we could not find suitable industry-ready alternatives. Thus, over these 2.5 years I wrote a lot of Haskell code, and starting in 2024 a good amount of Rust code too.
Besides leading the backend team and architecting the backend system, I was in particular responsible for implementing:
- Logging facilities including GCP logging features
- Monitoring abstractions for Prometheus
- Distributed in-memory caching
- Database functionality, abstractions and integration
- Kafka functionality for Haskell
- Cardano transaction construction and reliable submission
- Cardano rollback handling
- Frontend integration
- REST server implementation and API evolution
- Cardano chain monitoring
- Backpressure handling
- Minor Smart Contract implementations in Plutus and Plutarch
- Cardano dependencies and Hard Fork (HF) migrations
- On-chain price signals extraction from other Dexes
- Rewards payout service
- A large number of unit tests
- Loads of integration tests
- A complex suite of E2E tests
Given that I had to get up to speed with Cardano, blockchain technology, industry Haskell and other state-of-the-art industry technology such as Docker, it was an extremely intense period - I don't think I have ever had to learn as much within such a short period in my life.
After an intense development period we successfully released Axo in January 2024, and after a few 3rd party integrations (Dexhunter, Steelswap,…) we established it as the 3rd largest Dex on Cardano in terms of trading volume - a position it still held at the time I left. My reason for leaving was to move on to something new outside of Cardano (more on that further down below), and transitioning out of the project after the successful release and 3rd party integrations felt like the right time to do so.
Due to the tremendous intensity of the experiences and breadth of learnings that happened in these 2.5 years I want to reflect on them here, focusing on three aspects in particular: Software Engineering, Cardano and Haskell.
Software Engineering
Software Engineering is the art of making the best possible decisions based on imperfect information - and then not getting into fights over ‘I told you so’ when perfect information becomes available later. – Thalers Corollary 42
I have repeatedly expressed that I view Software Engineering as a learning process, practicing it as an agile, scientific learning process with a fundamental focus on fast feedback loops - which is also how I taught it during my tenure as Lecturer in Software Engineering. All of this I could confirm and deepen during these 2.5 years, where the focus was on learning and mastering new technologies as fast as possible to deliver business value.
However, I also had to add another dimension to the way I think about Software Engineering: that of mental load. More specifically, I came to the realisation that mental load is the main bottleneck when developing a complex system. This is the main reason you need a larger team: to distribute the mental load, because even a very experienced individual with a lot of brain power cannot comprehend and balance the full mental load alone. The goal, then, is to have the right team size so you can split the mental load along proper boundaries, creating mental-load “regions”. To avoid the bus factor, each region should be covered by at least 2 people - but whether this is feasible or not is another story.
As you might expect, the mental load of Axo was very high, due to a few factors:
- Axo’s vision in itself was very ambitious and stretched very far (our own trading language, order book matching, …) - no one had done something like that in the crypto space before, so by its nature the mental load of the project had to be (very) high.
- Cardano is a very complex system to develop on because of terrible documentation, constantly breaking changes, and terrible or non-existent tooling. This means a lot of up-front learning and dealing with details one has to remember and keep track of. Battling the inherent limitations of the blockchain was also extremely frustrating and meant we had to backtrack a few times (more on that in the section on Cardano below).
- The use of Haskell, which has a very high mental overhead of its own and complicates things further due to its lack of tooling, documentation and libraries, and its problematic community (more on that in the section on Haskell below).
All of these notes on mental load might be rather obvious and trivial, but they have far-reaching implications: if you are not able to manage the expected mental load of a project and keep it within predictable, controllable ranges, then you should not embark on it, because it is going to result in vastly unpredictable schedules, substantially increased development costs and therefore potential cancellation - and the combination of Cardano & Haskell does not reduce mental load, quite the contrary; more on that in the sections on Cardano and Haskell below.
Other takeaways were:
- Integration effort drives the complexity of the whole project. A feature might be “clear” and “easy” to implement, but however easy it might be, as soon as you have to do some integration work - either with a UI team or with a 3rd party - it is going to dominate the whole feature. Therefore start integration efforts as early as possible, ideally right away.
- In systems engineering, don’t start with top-down abstractions and work your way towards the concrete - rather, start with the concrete and only carefully extract abstractions when the need arises; it is easy to do and pays off. Also, don’t hire top-down abstractors, they will make your life very hard (Ph.D.s have a tendency to fall into the trap of top-down abstractions - often, the more intelligent, the more susceptible).
- After a product is released, velocity slows down substantially due to interruptions from support requests and bug fixing. This is super obvious, but the effects were so blatantly visible in Axo’s case that it left a deep impression on me.
Having worked in a tremendously skilled and talented team of great people, we were often fooling around, coming up with half-joking, half-serious corollaries that summed up experiences along the way - in my case they were (obviously) called Thalers Corollaries. Below you find a list of my most memorable ones:
- More important than passing tests that you expect to pass are passing tests that you expect to fail.
- If your local unit/integration tests make wrong assumptions, then only full E2E tests can save you - or God.
- When you cannot explain a bug by pure reasoning about the inner workings of your system, revisit and check the fundamental assumptions of your systems - the deeper issue is very likely lurking there.
- In each project there is always this one brutal bug that is f***ing hard to find.
- There is no such thing as ‘too much log info’ in case of a critical error.
- To say that one should write MORE tests is not the point - you can always write MORE tests, and go on ad infinitum. The question rather is: did you write the RIGHT tests?
- Never underestimate the power of writing unit tests even for small functions that seem ‘obviously’ correct - you might be missing something obvious.
- ‘Hope’ is not a word used by a true Software Engineer.
- A programming language IS its tooling and ecosystem. Saying that a programming language is great despite its broken tooling or ecosystem is like saying ‘Communism is good in principle’ despite all its implementations having utterly and horribly failed.
- If you are doing DB stuff without integration tests, you are f***ed.
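The corollary about testing ‘obviously’ correct functions can be made concrete with a small sketch. The function and its edge case below are invented for illustration (they are not from the Axo code base), but they show the pattern: even a one-liner hides a decision that only a test forces you to make explicit.

```haskell
-- Hypothetical example, not from the Axo code base: a helper that clamps
-- a quantity into an allowed range. It looks 'obviously' correct...
clampQty :: Int -> Int -> Int -> Int
clampQty lo hi q = max lo (min hi q)

-- ...but writing the unit test forces you to decide the non-obvious edge
-- case: what should happen when the range is empty (lo > hi)? With this
-- implementation the lower bound silently wins.
testClampQty :: Bool
testClampQty = and
  [ clampQty 1 10 5    == 5   -- inside the range
  , clampQty 1 10 (-3) == 1   -- below the range
  , clampQty 1 10 99   == 10  -- above the range
  , clampQty 3 2  5    == 3   -- empty range: lower bound wins; intended?
  ]
```

In a real project these checks would of course live in HUnit or hspec rather than in a hand-rolled boolean.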
Cardano
I’m kind of polarizing. People either like me or they hate me. There’s not a lot of people in between. – Charles Hoskinson
To be brief: the Cardano development experience was a very unpleasant one. I believe the root cause to be IOG’s overly opinionated ways of doing things, as well as its insistence on being “research first” at all costs, even where plain, pragmatic engineering should be the priority.
- As of 2024 Cardano still has bad or non-existent documentation on fundamental parts of the system. Often the only way to get to the bottom of things is to look into the code and its comments (Haddock).
- The situation in early 2022 was particularly bad, as it was a few months before the Vasil HF, which introduced fundamental new features we needed for Axo. It is better now, e.g. Aiken and Kupo, and there also seems to be a version 2 of Plutarch in the works.
- Inherent design decisions of Cardano force a lot of work onto the off-chain side, which has its benefits (e.g. deterministic fees) but also leads to substantially higher complexity on the backend side.
- The way IOG slipped in the new protocol parameter MinFeeRefScriptCostPerByte without discussing it publicly with the community says everything about how IOG operates: preaching wine and drinking water.
- Offering functionality in a public API that people start to rely on, then ‘implicitly’ retiring it without removing it from the public API, and telling people that they are ‘doing it wrong’ when performance goes to hell due to arbitrary refactorings (see more on breaking changes below).
- Cardano’s consensus algorithm relies inherently on rollbacks, which complicates things a lot because they occur completely non-deterministically, making the system very hard to test. It is therefore no surprise that many Dapps on Cardano develop directly on mainnet instead of on testnets.
- There are not many publicly documented best practices, patterns or sources of expertise on how to do things - yes, there are consulting companies such as MLabs and Emurgo, but they all have close ties to IOG and they charge generously for their services.
- Some things feel tremendously experimental, such as the Plutus Application Backend (PAB), which was a stillbirth when it was released in 2022.
- There are a lot of breaking changes with each dependency update. Also keeping up with Hard Fork changes requires quite a lot of effort.
- The community resembles a cult around Charles Hoskinson, living within its own bubble, uncritical of Cardano’s limitations and issues, and talking down other blockchains.
- In some ways Cardano’s state is a reflection of the Haskell ecosystem (more on that below), which is no surprise, as Cardano is built on Haskell - maybe such things are infectious.
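To illustrate the rollback point from the list above: a chain follower must be able to undo the effects of recent blocks on demand. What follows is a minimal sketch of one common approach - my own illustration, not Axo’s actual implementation - keeping a bounded history of state snapshots keyed by slot, so that a rollback becomes “restore the snapshot at or before slot s and drop everything newer”:

```haskell
import qualified Data.Map.Strict as Map
import Data.Map.Strict (Map)

type Slot = Int

-- A follower carries the current state plus snapshots at recent slots.
-- (In practice the history must be pruned to the chain's security window.)
data Follower st = Follower
  { current :: st          -- state after the latest applied block
  , history :: Map Slot st -- recent snapshots, keyed by slot
  }

-- Apply a new block's effect and snapshot the resulting state.
rollForward :: Slot -> (st -> st) -> Follower st -> Follower st
rollForward slot apply f =
  let st' = apply (current f)
  in Follower st' (Map.insert slot st' (history f))

-- Roll back to a slot: restore the snapshot at or before it and discard
-- everything newer. If the rollback reaches past our history, we cannot
-- recover locally and must resync from scratch.
rollBackward :: Slot -> Follower st -> Maybe (Follower st)
rollBackward slot f =
  case Map.lookupLE slot (history f) of
    Nothing      -> Nothing
    Just (s, st) ->
      let (keep, _) = Map.split (s + 1) (history f) -- keys <= s survive
      in Just (Follower st keep)
```

The design choice here is classic space-for-safety: snapshots make rollbacks trivial and deterministic to test, at the cost of memory proportional to the history window.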
A few weeks ago I did some research into two other blockchains, Ethereum and Solana, and what I found made me realise what bad shape Cardano is in despite the money IOG puts into it: both have excellent documentation, well-documented patterns and best practices, open-source code of various Dapps so you can look at how certain problems were solved, great tooling, and everything is easy to integrate and install - it’s all there. On Solana I actually undertook a small experiment, building a simple aggregator for transactions and accounts in Rust, with a REST endpoint, using MongoDB. I did it in just ~12 hours. If you are new to the ecosystem, this is simply impossible on Cardano and Haskell.
I believe that “research first” is a noble undertaking, which has its place in the crypto and blockchain space - however only for core parts such as the consensus algorithm and smart contract language, which should be handled by researchers (as is done at IOG). The rest should follow pragmatic engineering principles, be done by experienced engineers and explicitly NOT be left to academics.
All of these issues with Cardano ultimately result in more costs and longer development (which increases costs as well). Therefore Cardano is objectively a rather unfavourable environment for crypto startups that want to get stuff done quickly: either you buy in experience from consulting companies to get things done quickly (but their services are not cheap), or you build up your own expertise, which takes a while and is equally expensive due to the time it takes and the risks it comes with (dead ends, architectural decisions,…). Having said that, yes, of course there are projects on Cardano, and quite a few, but ultimately they all copy the same Dapp concepts that have already been tried and proven successful on other chains such as Ethereum or Solana. A noteworthy exception here is Axo, which really explores uncharted territory.
Haskell
There are only two kinds of languages: the ones people complain about and the ones nobody uses. – Bjarne Stroustrup, The C++ Programming Language
A monad is just a monoid in the category of endofunctors, what’s the problem? – James Iry
In the last ~7 years I have written a lot of code in vastly different languages: Java, C, Erlang, JavaScript, TypeScript, Elm, Scala. However, the majority of the code was written in Haskell. Having by now a substantial amount of Haskell experience in the context of a Ph.D., teaching and real-world industry projects, I would now call myself a quite experienced Haskell senior (not an expert yet - that would probably need another 5-10 years).
Initially I started using Haskell as preparation for my Ph.D., where I then went on to use it to research pure functional programming for agent-based simulation. My motivation for this research topic was that after using OO languages (C++, Java,…) for ~10 years in the industry, and also teaching them, I was simply bored by this paradigm and was looking for an entirely new challenge, which I found in pure functional programming using Haskell. With my strong industry background I also became interested in how Haskell can be used for more real-world problems such as writing web backends. I continued this pursuit after my Ph.D., writing a few backends that are actually used successfully in production, and experimenting with various architectural patterns (3 layer cake, Free Monads,…). I also taught Haskell to postgraduates (see Teaching Haskell means teaching important concepts), and after quitting my job at uni I went on to become Tech & Team Lead at Generation Lambda, where we used Haskell to implement a decentralised trading platform on top of the Cardano blockchain.
At Generation Lambda we decided to use Haskell because we were building on the Cardano blockchain, which was - and is - built with Haskell. In a nutshell, when you look at the state of IOG’s Cardano code base, you see everything that is wrong with using Haskell in the industry. In hindsight the decision to use Haskell was a tremendously expensive one for IOG: due to the lack of tooling and a library ecosystem, IOG had to build a lot themselves. Also, the lack of established architectural patterns and best practices in Haskell led to nearly every major Cardano library (Plutus, Plutarch, PAB, cardano-node, cardano-api, ouroboros-network,…) having its own approach to structuring and implementing its architecture (plain IO, Free Monads, MTL/Type Classes, Freer Monads, type level magic,…). What matters in real-world Software Engineering is that you know the domain and master the underlying technology - we could clearly see this in the project at Generation Lambda: the bottleneck was understanding the domain and the complexities of the Cardano blockchain. Unfortunately, Haskell did not help here - it actually made things worse due to its terrible tooling, which also extends to the Cardano ecosystem.
Before I go into the details of what’s wrong with using Haskell in the industry, there is one area where Haskell shines exceptionally bright - in my opinion brighter than any other language I have used. Thanks to its strong static type system and its explicitness about side effects, Haskell allows you to move very fast at the beginning of a project, especially during technical prototyping, when you cannot afford to write extensive test suites yet. Continuous refactoring is at the core of modern Software Engineering - to adapt quickly to changing requirements, to prototype technical solutions and, most importantly, to pay back tech debt - and Haskell lets tests in the early phases of a project focus only on the pure core logic of your application. It allows blazingly fast application and architecture prototyping with high confidence despite low or no test coverage, and it makes it easy to refactor towards a more decoupled structure for better unit and integration testing as soon as E2E works in principle. At Generation Lambda this allowed us to move very fast in the beginning of the project without high test coverage (we wrote unit tests for some crucial and complex domain logic, which we implemented in TDD fashion), while still having high confidence in changes. This is only possible in Haskell due to its strong static type system and explicit side effects - we probably wouldn’t have been nearly as confident using e.g. Java or TypeScript.
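The structure that enables this confidence can be sketched in a few lines. The example below is hypothetical and heavily simplified (invented names, not Axo’s code): the domain logic is a pure function - no IO in its type, so it cannot secretly touch a database or the chain - and every refactoring of it is checked by the compiler before any test runs, while all effects live in a thin shell around it.

```haskell
-- Hypothetical, simplified illustration (not Axo's code).

data Order = Order { orderId :: Int, price :: Rational, qty :: Integer }

-- Pure core: match a buy order against the first acceptable ask.
-- (A real matcher would pick the best price; this is only a sketch.)
matchOrder :: Order -> [Order] -> Maybe (Order, [Order])
matchOrder buy asks =
  case [ a | a <- asks, price a <= price buy ] of
    []      -> Nothing
    (hit:_) -> Just (hit, [ a | a <- asks, orderId a /= orderId hit ])

-- Thin effectful shell: all IO lives here, injected as plain arguments,
-- so the shell can later be swapped for a tested, decoupled version.
runMatching :: IO [Order] -> (Order -> IO ()) -> Order -> IO ()
runMatching fetchAsks submitFill buy = do
  asks <- fetchAsks
  case matchOrder buy asks of
    Nothing       -> putStrLn "no match"
    Just (hit, _) -> submitFill hit
```

Because `matchOrder` is total and pure, early-phase tests only need to cover it, and any change to the `Order` type immediately surfaces every affected call site at compile time.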
It is widely known that Haskell has a quite steep learning curve, which is especially true if you come (as I did) from an imperative, OO background. In that case you need to completely re-wire your thinking to make way for the pure FP paradigm. It takes a while, but once you get there you are much more flexible in your thinking as a programmer. Back when I started learning Haskell there was not much good material around; this has improved a bit in recent years. However, I still consider Graham Hutton’s Programming in Haskell (2nd Edition) the best book to get into Haskell, as it focuses on the basics instead of overwhelming the reader with everything there possibly is in Haskell (as Haskell Programming From First Principles does).
A problem in Haskell is the obsession of its community with theoretical foundations, which it spins all the way back to Category Theory. This sometimes makes you believe that you need to master Category Theory before you can start programming in Haskell. This is wrong. The focus on theory definitely makes a lot of sense when it comes to research on type systems and computation, but not so much for industry usage. As a consequence, an annoying thing is that Monads are treated as something of a mystery in Haskell. They are not. They are just customisable semicolon operators, that is, ways of combining computations - where in OO or imperative languages we use the ; operator, in Haskell we use Monads. Sure, underneath there is much more to it, and it has a sound theoretical foundation, but that doesn’t matter most of the time, and especially not while you are learning the language.
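The “customisable semicolon” view can be shown concretely: each monad defines what happens *between* two statements. For Maybe, the semicolon short-circuits on failure, so error propagation comes for free - a small self-contained sketch:

```haskell
-- Division that can fail, as a Maybe computation.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- do-notation: every line break is a use of the Maybe-flavoured ';'.
-- If any step yields Nothing, the remaining steps never run.
calc :: Int -> Int -> Int -> Maybe Int
calc a b c = do
  x <- safeDiv a b
  y <- safeDiv x c
  pure (x + y)

-- The same computation with the 'semicolon' (>>=) written out explicitly,
-- which is exactly what the do-block above desugars to.
calc' :: Int -> Int -> Int -> Maybe Int
calc' a b c =
  safeDiv a b >>= \x ->
  safeDiv x c >>= \y ->
  pure (x + y)
```

Here `calc 10 2 5` yields `Just 6`, while `calc 1 0 2` yields `Nothing` without any explicit error check - the failure handling lives in Maybe’s definition of the semicolon, not in the business logic.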
Haskell makes side effects explicit in its type system. This is both its strength and its weakness: when a language takes such a strong stance on something, that stance becomes both its greatest strength and its greatest weakness. On the one hand it enables you to move very fast, because you can reason about side effects or their guaranteed absence; on the other hand it can slow you down, because dealing explicitly with side effects complicates things if you don’t anticipate them in advance. By now I have come to the conclusion that being explicit about side effects sounds nice, but does not justify the additional complexity you have to go through for mocking and for writing decoupled software.
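To show the ceremony I mean, here is a sketch of the common MTL-style pattern for making an effect mockable (one of several competing patterns; all names are invented, and the `mtl` package is assumed): the dependency is abstracted into a type class, with a production instance over IO and a pure instance for tests.

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
import Control.Monad.State

-- The abstraction: what the business logic may do with the store.
class Monad m => OrderStore m where
  saveOrder  :: String -> m ()
  listOrders :: m [String]

-- Production instance: would talk to a real database (stubbed here).
instance OrderStore IO where
  saveOrder o = putStrLn ("INSERT " ++ o) -- imagine a real DB call
  listOrders  = pure []

-- Test double: a pure, in-memory store on top of State.
newtype MockStore a = MockStore (State [String] a)
  deriving (Functor, Applicative, Monad)

instance OrderStore MockStore where
  saveOrder o = MockStore (modify (++ [o]))
  listOrders  = MockStore get

runMock :: MockStore a -> (a, [String])
runMock (MockStore m) = runState m []

-- Business logic written once against the abstraction, usable with both.
placeTwo :: OrderStore m => m [String]
placeTwo = saveOrder "buy 1 ADA" >> saveOrder "sell 2 ADA" >> listOrders
```

The mock lets you test `placeTwo` purely via `runMock`, without any IO - but note the price: a type class, two instances, a newtype and a language extension, just to swap out one dependency.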
Another problem is that Haskell feels a bit like C++: in both languages you can write tremendously complex code by using every possible language feature. In Haskell you can even go way beyond what C++ allows, by going down the fascinating rabbit hole of Dependent Types. While this might be fine and desired in a Ph.D., it is very rarely needed in an industry project and can become actively harmful if it gets out of control. You end up paying for the decision to use such features with increased complexity and by making it harder for less experienced Haskellers to get on board and understand what’s going on. The problem is that Haskell itself is the temptation, as it offers you the possibility to solve a problem perfectly using advanced type system and type-level features. This tendency to strive for a perfect solution just because it is possible, instead of for something that works well enough, is a dangerous rabbit hole in the industry - one that many Haskellers went down and never came out of again.
As a consequence you might find it quite difficult to find Haskellers with the right attitude, that is, a focus on getting stuff done without f***ing around. They exist, but they are difficult to find. This makes it more complicated to hire new people and to grow a team. However, if you do find such exceptional devs, you have very skilled, intelligent and capable people in your team. So in this way Haskell acts both as a negative and a positive gatekeeper.
A serious problem of Haskell is the lack of good and reliable tooling and the at times terrible library ecosystem. In one case we wanted to use Kafka Transactions, but found out that the only Kafka library in Haskell doesn’t support them - so we implemented them ourselves and contributed the work as a PR to the library. Sure, it is very nice to give something back to the open source community - all Haskell libraries available on Hackage/Stackage are open source - however, the lack of a readily available library means a potential slowdown, because you need to crank it out yourself (this also happened when we wanted to use Google Cloud Platform PubSub - the existing library hadn’t been touched in years). Tooling is just abysmal compared to other languages: there simply is no useful debugger, and IDE support, despite slowly picking up in the form of the Haskell Language Server (HLS), is still unstable and a far cry from e.g. IntelliJ IDEA. The issue is that tooling and library ecosystem are of such fundamental importance in real-world industry projects, and we are so used to them, that we only notice how crucial they are when they are lacking, as in Haskell. There are a few web frameworks such as Yesod and IHP, but they are either extremely complex or in their early stages - if you have once experienced how fast you can crank out a microservice system or web backend using Java Spring, you really wonder why this is still lacking in Haskell. Having said that, there are also amazing libraries such as Servant, Persistent and Criterion that are definitely as strong as similar libraries in Java.
So by now I have come to the conclusion that Haskell is probably the best language for researching type systems, programming languages, the nature of computation, Category Theory, compilers, parsing, doing exotic Ph.D.s and teaching new concepts to CS students to broaden their horizon - but that it is not a particularly good choice for industry usage unless you have an extremely compelling reason to choose Haskell, such as when the ecosystem of your underlying technology (e.g. Cardano) is built on Haskell, or you are implementing a new parser or language.
Don’t get me wrong, you can deliver industry projects in Haskell successfully and with very high quality. However, the reason I advise against it is that over the last years a much more compelling alternative has emerged and matured, one that seems to sit in exactly the sweet spot between functional and imperative programming, OO concepts, performance and type safety that you want in pragmatic real-world industry usage: Rust. I have been following this language for the last ~4 years, have by now written substantial production-ready code in it and supervised a Rust master thesis. I have come to the conclusion that, in a nutshell, it is the perfect mix of Haskell, Java and C(++): it has vast documentation, great IDE and tooling support, fast compile times (at least compared to GHC), a good library ecosystem, established practices, a welcoming and pragmatic community and a well-designed type system that is not too opinionated.
Yes, Haskell is probably the most powerful industry-ready language out there, but that is not what matters in a complex systems-engineering project. Rust is the better choice there because of its lower mental load, better tooling, better docs, more libraries, nicer community and proper guidelines on how to do things: all of this helps you focus on building stuff rather than f***ing around with the language.
I am just at the beginning of my Rust journey, so this is a well-informed opinion rather than a deep, real-world experience report. However, I plan to bet my future career fully on Rust and to extend my portfolio with much more industry experience in it over the next 5 years, abandoning Haskell in its favour - let’s see what I will write about Rust then.
Conclusion
As you can see from the above reflections, the time working on Axo was one of great challenges - but with great challenges come great learnings, and this I deeply enjoyed while working at Axo, just as the overall experience was very positive. I am grateful to have worked with extremely talented people on that project, who quickly formed into an extremely strong team that was fun and refreshing to lead and work in. Concluding, I can say that I am particularly proud that despite all these difficulties we were able to release a very strong, extremely robust and stable product (I didn’t say it can’t be done on Cardano and in Haskell, just that it is challenging) that made it into the top 3 Dexes on Cardano.