Today was the first day of the conference, and it was a very good day. I must say it largely met my expectations. I spent part of the day on the Architecture track and part on High Availability. But let's start at the beginning, with the keynote. Btw, no photos – my phone camera is useless.
Martin Fowler and Rebecca Parsons opened the conference with the keynote – Data Panorama. Martin was very funny, shouting about just how big data is to open the talk. And it turns out that data is Growing and how we use it is Changing. The keywords were Growing, Distributed, Valuable, Urgent and Connected, and I believed every word – who would argue with this? Nowadays, when even a fridge can tweet, data is coming from all sorts of devices, which wasn't the case just a few years ago; now the challenge is even deciding how much to store and for how long, let alone how to use it. The response to this change is NoSQL databases – Google with BigTable and Amazon with Dynamo started the trend out of necessity, the necessity being that they couldn't scale up anymore and had to scale out, and relational databases couldn't do it. So now we have a new wave of databases offering convenience (it's easier to store aggregates) and distribution (sharding). The talk went on about polyglot persistence and event sourcing, then about how data sources have changed – it's not tables only anymore, it's text and images and video and connections… and we need to analyze it differently – obviously – with an emphasis on visualization of the data. I could go on and on about this – obviously I found it highly inspirational.
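Before moving on – to make the polyglot persistence idea from the keynote concrete, here's a minimal Python sketch of the routing notion. The store classes are made-up stand-ins, not any real drivers:

```python
# Minimal sketch of polyglot persistence: route each kind of data to the
# store that suits it best. All class names here are hypothetical stand-ins
# for real drivers (a relational DB, a document DB, a graph DB).

class RelationalStore:
    """Stand-in for an RDBMS - good for transactional, tabular data."""
    def save(self, record):
        print(f"SQL INSERT: {record}")

class DocumentStore:
    """Stand-in for a document DB - good for whole aggregates."""
    def save(self, aggregate):
        print(f"Document write: {aggregate}")

class GraphStore:
    """Stand-in for a graph DB - good for connected data."""
    def save(self, edge):
        print(f"Graph edge: {edge}")

class PersistenceRouter:
    """Each data shape goes to the store that handles it naturally."""
    def __init__(self):
        self.orders = RelationalStore()   # financial records: need ACID
        self.sessions = DocumentStore()   # session aggregate: one read/write
        self.friends = GraphStore()       # social connections: traversals

router = PersistenceRouter()
router.orders.save({"order_id": 1, "total": 9.99})
router.sessions.save({"user": "alice", "cart": ["book", "pen"]})
router.friends.save(("alice", "follows", "bob"))
```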
I am really into this polyglot persistence – and when I was thinking about it, the simplest thing that sprang to mind was logging. I read somewhere how Hadoop was used for logging, and it was mentioned by the speakers along with map-reduce, so that's probably why I thought of it. Anyway, my impression is that we're not producing enough data – sounds silly, no? We could easily be logging much more data about the operation of our services, if saving that data to a relational database weren't perceived as heavyweight: the amount of logged data would probably be vast, it would use the most expensive storage, and how would I analyze it without very expensive tools? It's a non-functional concern, in the end. But if we used something like Hadoop, that would be much more palatable – maybe. Anyway, just thinking out loud; all these silly thoughts were heavily reinforced by the next two talks.
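For the record, here's roughly what I have in mind – a Hadoop-streaming-style mapper and reducer counting log events per service. The log format and the invocation are hypothetical, just to show the shape of it:

```python
#!/usr/bin/env python
# Hadoop-streaming-style mapper/reducer for counting log lines per service.
# Assumed (hypothetical) log format: "2012-03-07T10:15:00 payment-service ERROR ..."
# Rough invocation sketch:
#   hadoop jar hadoop-streaming.jar -mapper 'logcount.py map' \
#       -reducer 'logcount.py reduce' -input logs/ -output counts/
import sys

def mapper(lines):
    # Emit "service<TAB>1" for every log line.
    for line in lines:
        parts = line.split()
        if len(parts) >= 2:
            print(f"{parts[1]}\t1")

def reducer(lines):
    # Streaming reducers receive lines sorted by key,
    # so we can just sum runs of equal keys.
    current, count = None, 0
    for line in lines:
        key, value = line.rstrip("\n").split("\t")
        if key != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = key, 0
        count += int(value)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    mode = sys.argv[1] if len(sys.argv) > 1 else "map"
    (mapper if mode == "map" else reducer)(sys.stdin)
```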
Stefan Tilkov was talking about Breaking the monolith: Towards a system-of-systems architecture. The session was absolutely packed and it got very hot – it was just silly that the track hadn't been assigned a bigger room, as all its sessions were packed, nearly full or even overflowing. Anyway, Stefan talked about system boundaries, how the three-layer architecture everybody draws is too generic, and how one project doesn't necessarily mean one system. He went on to talk about system characteristics and argued that we should really have cross-system and internal architectures, rules and guidelines (macro and micro architectures, if you will), where the macro architecture defines separation and interaction between the systems, and the micro architecture is responsible for the individual systems – where we could even have different languages in different systems. Polyglot programming, how exciting! Now all this makes sense: cross-system rules and guidelines are something that changes rarely – we might be stuck with them for a decade – while the internal architecture can change more frequently and independently, as long as the functionality it provides looks the same from the outside. He carried on about loosely coupled systems and – again, echoing the keynote – about data integration, replication and redundancy. So, the takeaway is that redundancy in data is not necessarily bad, and most probably we already have it in our relational databases anyway, for various reasons. He argued that this maybe doesn't feel right but is better in the end. Well, I would love to try it – it all made a lot of sense. Anyway, I really enjoyed the talk; it just trailed off a bit towards the UI side at the end, and while I get it, my interest is more on the other end. All in all, very useful.
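A toy sketch of the macro/micro split as I understood it – the macro level pins down only the contract between systems, and each system keeps its internals to itself. All the names here are invented for illustration:

```python
# Toy illustration of macro vs. micro architecture: the macro level fixes
# only the contract shared across systems; each system is free internally.
from abc import ABC, abstractmethod

class CustomerLookup(ABC):
    """Macro-level contract: stable for years, shared across systems."""
    @abstractmethod
    def find_customer(self, customer_id: str) -> dict: ...

class CrmSystem(CustomerLookup):
    """Micro level: one system might sit on a relational database."""
    def find_customer(self, customer_id):
        return {"id": customer_id, "source": "SQL query"}

class LegacySystem(CustomerLookup):
    """Another system could be a wrapped mainframe, or even code in a
    different language behind a facade - the caller never knows."""
    def find_customer(self, customer_id):
        return {"id": customer_id, "source": "screen-scraped terminal"}

def print_invoice(lookup: CustomerLookup, customer_id: str):
    # Depends only on the macro contract, not on any system's internals.
    print(lookup.find_customer(customer_id))

print_invoice(CrmSystem(), "42")
print_invoice(LegacySystem(), "42")
```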
The next one was Games for the Masses by Jesper Richter-Reichhelm of Wooga – they make games for Facebook and were facing some serious challenges as the games became more and more popular – we're talking billions of requests per month. He has kindly shared the slides. What he said was that the game logic wasn't really complex; the load was the challenge – 100K+ DB operations/sec, 50K+ DB updates/sec. What they did was rewrite their backend 4 times in 2 years, and it was exciting to see how the architecture evolved. They started with Ruby and MySQL, went on to Ruby and Redis, then introduced a stateful server with Erlang and did saves to Amazon's S3, and finally settled on Ruby+Erlang. This is a perfect example of polyglot programming – Erlang is great for infrastructure kind of code, supporting sessions in a reliable and super-fast manner, while Ruby has syntax that's much more appealing to the eye (talking about readability of business logic). It was also interesting to see how many servers they ended up using and how they simplified their architecture in the end. I simply enjoyed the session.
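The slides tell the full story, but the stateful-server idea could look something like this in miniature – hot game state lives in memory and whole-aggregate snapshots go to a blob store now and then. The blob store is faked with a dict here, and none of this is Wooga's actual code:

```python
# Rough sketch of the stateful-server idea: keep hot game state in memory
# and persist snapshots occasionally, instead of hitting a DB per request.
import json

blob_store = {}  # stand-in for S3: key -> serialized snapshot

class GameSession:
    def __init__(self, player_id):
        self.player_id = player_id
        self.state = {"coins": 0, "level": 1}
        self.dirty = False

    def apply(self, action):
        # All gameplay mutations happen in memory - no DB write per action.
        if action == "earn_coin":
            self.state["coins"] += 1
        self.dirty = True

    def snapshot(self):
        # Periodic save of the whole aggregate as one blob, S3-style.
        if self.dirty:
            blob_store[f"sessions/{self.player_id}"] = json.dumps(self.state)
            self.dirty = False

session = GameSession("player-1")
for _ in range(3):
    session.apply("earn_coin")   # three in-memory updates...
session.snapshot()               # ...one persisted write
print(blob_store)
```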
The next was Michael Stal of Siemens with How Software Architects can Embrace Change. He raised some good points, like: why don't we learn anything from failed projects? Jokingly, he said we could shoot the right person with a silver bullet at the beginning of the project – that would be the silver bullet – ha. He talked about the balance between Agile and BDUF, i.e. unstable vs. nice-but-unsuitable architecture. In the end it all boiled down to iterative-incremental architecture development – small iterations with a mandatory review of the architecture and some time for refactoring. Reviews and refactorings are important to prevent architecture erosion. Design for change, design for testability. Around that time I ran out of paper – they should hand out bigger notebooks here. In all honesty the talk was a bit too fast for me and the slides too crammed – but when I went through my notes I could actually recall most of it. It was a good talk, just a bit tiring.
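As a tiny illustration of "design for change, design for testability" – inject collaborators rather than hard-wiring them, so both the architecture and the tests can swap implementations. The names are invented:

```python
# Dependency injection in miniature: the service depends on an interface
# shape, not a concrete class, so it is easy to change and easy to test.

class SmtpMailer:
    def send(self, to, body):
        print(f"SMTP -> {to}: {body}")

class FakeMailer:
    """Test double: records messages instead of sending them."""
    def __init__(self):
        self.sent = []
    def send(self, to, body):
        self.sent.append((to, body))

class SignupService:
    def __init__(self, mailer):
        self.mailer = mailer   # injected: swappable without touching this class

    def register(self, email):
        self.mailer.send(email, "Welcome!")

# Production wiring vs. test wiring:
SignupService(SmtpMailer()).register("user@example.com")
fake = FakeMailer()
SignupService(fake).register("user@example.com")
assert fake.sent == [("user@example.com", "Welcome!")]
```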
I switched to the High Availability track from there on. There were two talks; I'll go through them briefly. From zero to 10 million daily users in 4 weeks was about a sustainable speed of development in a startup social gaming company. A figure of 1 TB of analytics daily was mentioned – see, that's exactly the kind of data generation I was thinking about earlier. It wasn't the most inspiring talk, but it was useful nonetheless. Interestingly, social gaming teams there tend to be small and independent (similar to Wooga above). I didn't really agree with some inflammatory statements like "testing is dead" and "operations are dead", but I understand the background story. Also, there were some interesting pointers on dark releases and split testing. The following talk was on Event Sourcing – heh, again – but even though Martin is an excellent speaker I felt it lacked content a bit; I might've expected too much of it, to be honest. Still, the talk was very interesting and I'm nearly sold on this – before committing I need to get more information, but it seems to be one of the things really worth adopting.
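Since I keep coming back to event sourcing, here's the bare-bones version of the idea as I understand it – store the events, derive the state. Purely illustrative, not any particular framework:

```python
# Bare-bones event sourcing sketch: state is never stored directly, only
# the events that produced it; current state is rebuilt by replaying them.

class Account:
    def __init__(self):
        self.events = []     # the append-only source of truth
        self.balance = 0     # derived state (a projection)

    def _apply(self, event):
        kind, amount = event
        if kind == "deposited":
            self.balance += amount
        elif kind == "withdrawn":
            self.balance -= amount

    def record(self, event):
        self.events.append(event)   # store the fact...
        self._apply(event)          # ...then update the projection

    @classmethod
    def replay(cls, events):
        # Rebuilding state from history - this replaces loading a row.
        account = cls()
        for event in events:
            account.record(event)
        return account

acct = Account()
acct.record(("deposited", 100))
acct.record(("withdrawn", 30))
restored = Account.replay(acct.events)
print(restored.balance)  # 70 - same state, reconstructed purely from events
```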
The evening keynote was done by Greg Young – a great speaker – and it was excellent. It was about the weird and interesting things we do. The things that stuck were: reusability is overrated, DRY has its dark side too, we love solving problems that nobody actually has, and ultimately – software is there to bring value. So we should be writing good enough software – and that was the highlight of the day, actually. It was all about common sense, simplicity, back to basics.
All in all, it was a very enjoyable day; I heard loads of interesting and useful information and I'm looking forward to tomorrow's sessions. Thing to practice for tomorrow – conciseness – this was too long.
See reactions to the conference: #qconlondon.
