Editor's Note: Stefan Borsje is Co-founder & CTO at Karma.
No matter where you are in the world, chances are you’ve had to deal with spotty Wi-Fi connections before. Getting a Wi-Fi hotspot from your cellphone carrier is such a pain that many people don’t bother to do it. Karma is changing that. With Karma, you buy a tiny Wi-Fi device and pay for data as you go, with no expiration and no subscription plan.
After Karma joined StackShare and shared their software stack, we spoke with Stefan to learn more about the software behind Karma and their vision to build the world’s Wi-Fi network.
In an industry full of legacy vendors and large enterprise companies, Karma stands out as the only software startup in town. Leveraging AWS and a handful of other infrastructure service providers, they’ve built out software that no other Wi-Fi companies care to, allowing them to deliver a better end user experience. Read on to learn how Karma builds and ships software for Wi-Fi devices.
Yonas: Tell us a bit about Karma.
Stefan: Basically what we do is we sell internet in a tiny little box. It's a mobile Wi-Fi hotspot you take everywhere you go. I think there's a couple things we do differently compared to all the other guys. The most important one is that we don't have any subscriptions, so no contracts, no subscriptions, no nothing. You're not locked in in any way. You can just buy a Karma hotspot, then you buy a gigabyte, and once you're through that gigabyte, you buy a new one. That's basically all there is to it; it's a really simple model. We try to stay really honest with our customers as well. So if you buy a gigabyte, we don't have these crazy “you have to use it within a week or within a month” kind of rules. Just buy a gigabyte, that one is yours. You can use it whenever you want. If it takes you a year, that's fine with us. It doesn't expire. That's it. Really no nonsense.
Then our hotspots have a very cool feature where they all have an open Wi-Fi access point, so everybody around you can connect to your hotspot. But when they do, they're not using your data. They're using their own data. So everybody who connects to a hotspot uses their own data and once guests hop on to your hotspot, you get 100 megabytes free and the guest gets 100 megabytes as well just to get started using the service. That's basically how we try to acquire a lot of new customers.
Yonas: I like that. So where did the idea first come from?
Stefan: I'm originally from the Netherlands and so is my co-founder. And while we were traveling here in the US we noticed how hard it actually was to get online, especially while you're out on the go. You have to deal with Boingo Wi-Fi at airports. You have to deal with unreliable Wi-Fi in restaurants or coffee shops. There's a million places where you don't have Wi-Fi at all, so it's impossible to get online. Then when you wanted to buy something to solve that problem, you basically had to buy a hotspot with a two year contract and pay $50 a month, which is ridiculous if you travel very infrequently. That's incredibly expensive.
So that got us thinking, there has to be a better way to do this. There has to be an easier way to do this. Then again, if you look at all the telcos.... that's also where our name comes from. These guys could use some good karma. Nobody likes their carrier. Everybody's really fed up with all the subscriptions that you have, with all the very opaque plans that they have. You have the family plan, but it’s limited. These guys still think in “lines” that you have. That's totally not what we're talking about these days. You just have a bunch of devices and you just want to get them online. You don't want to deal with all kinds of different subscriptions or contracts. It's all just bullshit, so we just want to get rid of that.
Yonas: That makes a lot of sense. I think it is getting even more confusing on the carrier side. That's actually working to your guys' advantage. You'd think it would get easier, but it's getting even more complicated with the way that they roll out new plans. I remember that I was trying to add tethering to my current AT&T plan recently and I just couldn't figure it out. Their website is just way too confusing. So I gave up.
S: We have a lot of skeptics saying to us, "Hey, but why wouldn't I use tethering on my phone?" My first question is always, "When was the last time you did that?" And everybody's always, "Uhh. I don't think I ever did that." Exactly. That's exactly what the problem is. It's still too hard. You're still locked into limitations imposed on you by the carrier and it's just a mess.
Y: So before we dive into the software component can you talk a little bit about how you do networking?
S: We are basically an MVNO, which stands for Mobile Virtual Network Operator. That means that we are a company that runs on a carrier's network. We buy data wholesale and then resell it to our customers. It's a really easy, actually quite old, model. You also have a lot of MVNOs who did the same thing with minutes.
We currently run on the Clearwire network, which is a WiMAX network. That's the device that we sold up until recently. But right now we're in the pre-ordering phase of our new LTE hotspot, which runs on the Sprint LTE network.
We technically could run on multiple carriers if we wanted to. Our device is basically carrier agnostic, so we don't really care which carrier. As long as we have a data pipe, we can offer internet service to our customers. But there's all kinds of limitations, like if you want to go to a network, the device has to be certified on that network. You've got to have a contract with the network. It's not easy to roam between different networks because of technology differences. So it's actually still quite hard to figure that all out, but hopefully we can do that in the future.
Y: You're essentially sitting on top of the bigger carriers and just making that more accessible and much more user friendly.
S: Exactly. What we would love to see is for carriers to basically become dumb data pipes because they're very good at building networks, but they're not so good at dealing with customers. And I think that's where we could shine. We could take over that part and use those dumb pipes for the network and combine it in that way.
Y: Gotcha. You're based in the US. Are all your plans and offerings for US customers?
S: Yeah. Right now they are. Our office is officially in the US, but we also have a couple people in the Netherlands and even in a few other countries in Europe, so our team is pretty distributed.
Y: Alright, so we covered the basics. You've got the hotspot and you can connect it. Any device that you want can connect to it. Then you can actually have other people connect to the hotspot as guests or even using their own network. There's just a few basic questions from a technical perspective. Do you want to talk a little bit about your architecture and how you guys think of web and then mobile?
S: Sure. I think it's quite interesting actually. My co-founder Steven went to an MVNO summit last week. If you go to an MVNO summit, you'll meet a lot of other MVNOs but even more software suppliers. Historically for MVNOs it's not very common to be a software company and that's a really big difference between us and all the other guys. What all the other guys do is they just buy software from software vendors and use all those different pre-configured packages to put everything together.
We flipped that entirely around and we were like, no no no, we're going to do everything ourselves. What we basically did is built everything ourselves. We built our e-commerce platform. In the beginning, we even built our own shipping platform. We have our own data store where you can buy data. We have user management tools. We have device management tools. We have supporting APIs for our mobile apps. We have supporting APIs for our devices because they communicate with the backend. That's all built in-house.
We have internal APIs and also an API between our hotspots and our backend because, for example, when you sign into a hotspot, we need to check, “Are your credentials okay? Do you still have data left?” That's basically how we control access on the device. The devices are basically like tiny, little API clients that phone home to our main API.
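The device-side access check he describes could be sketched in Ruby roughly as follows. This is purely illustrative: the response fields and decision rule are assumptions, not Karma's actual API, and in production the reply would come from an HTTPS call to the main backend.

```ruby
require "json"

# Hypothetical sketch of the hotspot's "phone home" decision. In production
# the JSON reply would come from an HTTPS call to the main API; here we
# only model the decision logic around an assumed payload shape.
class AccessChecker
  def self.allow_access?(response_body)
    payload = JSON.parse(response_body)
    # Allow traffic only if credentials checked out and data remains.
    payload["credentials_valid"] && payload["bytes_remaining"].to_i > 0
  end
end

# The hotspot would send its credentials and parse a reply like this:
reply = '{"credentials_valid": true, "bytes_remaining": 52428800}'
AccessChecker.allow_access?(reply)  # => true
```

The point is that the device holds no business logic of its own; it just enforces a yes/no answer from the backend.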
Our architecture is running in Amazon VPC. That's actually what we started with and we're still very happy with. We’re pretty much tied into the entire platform. So we use EC2, but we also use SQS, SNS, DynamoDB. I think we use half of the services that they offer, so quite a lot. CloudFront, Route 53, S3. You name it.
Then on top of Amazon, we've built a lot of microservices. I think we have about 30 or 40 different services running. On top of those we have a couple web projects as well. Our microservices are basically just dealing with all the data and with all the business logic. Then we have APIs and websites running on top of that.
Then most of our services are written in Ruby. We started out as a Ruby shop, but we're slowly also moving some stuff to Go. We're using Go more and more these days, actually, including some backend services.
S: The first time I actually started using Go was for software on our devices. So on our hotspots we have some custom software running in the firmware. For the first device, that was actually completely built by our manufacturer. But for the second generation most of the parts are built by us in-house and we needed a way to quickly develop software for the device. But we don't have any C programmers in-house, so we were actually looking for something that basically sits in between the friendliness of Ruby, but the performance and the ability to be deployed on an embedded system which you get with C. That's basically what led us to Go and it's been awesome for that. It works so well and so great. Since it works so great, it pushed us into looking into whether we should start using this for some backend services as well.
Y: So you have 30 to 40 microservices, are they all feeding into the same database?
S: No. Our internal policy (it's not written down or anything) is basically that every microservice only touches its own data store, and they can't cross-touch each other's data stores, so we don't have multiple services running on the same databases. They all have their own stuff.
We do that because it makes it a lot easier to encapsulate data with certain functionality, but also to prevent changing the data in unexpected ways. So if you have two services changing stuff in the same database you might end up with data that's kind of broken, basically. What we wanted to be able to do is switch services out, including their data stores if we want to.
For example, another kind of rule that we have is that a service should be so small that we could technically rewrite it in two weeks, and it also means that we do that every once in a while. So if you think, “Hey we actually changed our mind about this, maybe a SQL database isn't the right storage for this,” and we want to use something else, we could technically do that. We could just swap the entire service and it shouldn’t matter for consumers of that service.
S: For most of the stuff we use MySQL. We just use Amazon RDS. But for some stuff we use Amazon DynamoDB. We love DynamoDB. It's amazing. We store usage data in there, for example. I think we have close to seven or eight hundred million records in there and it scales like you don't even notice it. You never notice any performance degradation whatsoever. It's insane, and the last time I checked we were paying about $150 for that.
Y: Wow. That's awesome.
S: Yeah. And it's basically just a really big key value store, but it's great.
Y: So you've got the microservices running all their own datastores. Do you want to talk a little bit about ... I'm assuming you have some sort of messaging or queuing layer. Do you want to talk a little bit about that?
S: Sure. Yeah. It's quite an interesting topic because it's still up for discussion internally. In the beginning we thought we wanted to start using something like RabbitMQ or maybe Kafka or maybe ActiveMQ. Back then we only had a few developers and no ops people. That has changed now, but we didn't really look forward to setting up a queuing cluster and making sure that all works.
What we did instead was we looked at what services Amazon offers to see if we can use those to build our own messaging system within those services. That's basically what we did. We wrote some clients in Ruby that can basically do the entire orchestration for us, and we run all our messaging on both SNS and SQS. Basically what you can do in Amazon services is you can use Amazon Simple Notification Service, so SNS, for creating topics and you can use queues to subscribe to these topics. That's basically all you need for a messaging system. You don't have to worry about scalability at all. That's what really appealed to us.
We have a lot of communication between the different services. Most of that communication is synchronous, so we just use REST and JSON for that. But if it's asynchronous or it's events that we want to track or stuff like that, then we use the message queue. So it's really, basically for event and notification based communication.
For example, if an order is placed on our website, in our case, a device needs to be shipped, so a notification needs to be sent to our distribution center as well. What our internal store service will do is it will publish a notification through the messaging system, saying “Hey this order has been finished. It's paid, so it can be shipped.” Then our internal shipping service will see the message and send it to our billing department. That's basically how we use messaging. I think the good thing about that is that it makes it very easy to have multiple services respond to a single notification and you don't need synchronous communication for that. So it's way easier to add more stuff once something happens.
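That fan-out pattern can be modeled in-process with a few lines of Ruby. This is a sketch, not Karma's code: in their setup the publish goes through SNS (with SQS queues subscribed to the topic), and the event names below are illustrative.

```ruby
require "json"

# In-process model of the SNS-topic / SQS-queue fan-out described above.
# In production the publisher would call SNS and each handler would poll
# its own SQS queue; event fields here are illustrative assumptions.
class Topic
  def initialize
    @subscribers = []
  end

  def subscribe(&handler)
    @subscribers << handler
  end

  # Every subscriber sees every message, which is what makes it cheap to
  # add a new reaction to an event without touching the publisher.
  def publish(event)
    message = JSON.generate(event)  # what would go over the wire
    @subscribers.each { |handler| handler.call(JSON.parse(message)) }
  end
end

orders = Topic.new
shipped = []
orders.subscribe { |msg| shipped << msg["order_id"] }  # shipping service
orders.subscribe { |msg| msg }                         # e.g. billing service

orders.publish("event" => "order.finished", "order_id" => 42)
shipped  # => [42]
```

Adding a third reaction to `order.finished` is one more `subscribe`; the store service that publishes the event never changes.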
Y: Great. So that covers the backend. Are there any big challenges that you faced early on, from a software perspective? Or what were some of the earlier challenges if you can think back to maybe your V1 or before you had actually shipped? Was it just getting the entire carrier network side of things set up? Or were there things on the software end that were really difficult early on that you figured out?
S: A lot of the challenges in our case have to do either with hardware or relationships with the carrier. Most of that is just a lot of contract negotiation and making sure that we can work with manufacturers. That's really important.
I think from an integration point of view, we're actually able to keep everything pretty lightweight. Even integration with the carrier is really lightweight in our case, which is great because our switch from Clearwire WiMAX to Sprint LTE was very straightforward.
Then from an integration standpoint, if you want to integrate with carriers you better prepare for a lot of legacy SOAP APIs. It is all documented, but it's like documents with eighty pages, while we only need three. That's a lot of work and a lot of figuring out what exactly are we trying to do and how much time are we spending on this?
So it's basically two worlds colliding. You have this really old enterprise world with a lot of enterprisey software and then you have us, which is more a startup kind of world. It's not the biggest challenge, but it posed some interesting problems. Also stuff like VPN integrations and that's a lot of work.
From the software perspective on our side, I think the most challenging part was figuring out how we were going to build this entire thing. Very early on we started out as an API and then a website on top of it, and that slowly grew to 40 different APIs and multiple websites and other services on top of that. We really grew that organically. Along the way we just kept thinking: Can we split this? Should we split this? Should we create separate services for these? That has just been a long process.
Y: But pretty early on you knew that you wanted a microservices architecture?
S: Yeah. I think it took us half a year to figure that out. We started out with one big API and we quickly noticed, hey, this is not going to work. Too much is going to be molded together, like one big app, one big backend service. That didn't really feel right, especially in terms of maintainability, but also scalability. So we thought: what if we start splitting it out and looking at a service-oriented architecture?
Y: By the way, for the microservices, is everything in Ruby?
S: Right now yeah, everything is Ruby. We use Sinatra a lot. I love Sinatra for APIs. It's really simple, really lightweight. It's awesome.
Y: Do you want to talk a little bit about specific apps?
S: I think I can talk about two things. Two things that are interesting. We have communication between the hotspots and our backend, which might be interesting. And we have communication between our mobile apps and our backend.
For our mobile apps, it's actually pretty simple. We just have a REST API and our mobile apps can just talk to that. We have an Android app and an iPhone app. Basically what they do is you can sign into those and then you can buy more data, you can see your usage for the past couple of days or weeks and you can see your current balance and that's basically it. That's pretty straightforward.
Then what the mobile apps can also do is they can also connect to the device directly. For example, let's say you have your iPhone, you connect your iPhone to a Karma. That Karma actually has a little API server on board. That's how our mobile apps communicate with the device directly without even needing an internet connection. So you just need the Wi-Fi connection between the hotspot and the phone, but you don't need the LTE connection for that.
Y: Oh wait. So the hotspot is connected, right? Then you're connecting your phone to the hotspot, instead of using your phone’s data?
S: Exactly, yeah. Your phone connects through the hotspot and it can use a local API on the hotspot itself, which basically allows us to communicate stuff like what's the current battery level, what's the current signal strength, am I connected yes or no, how many people are currently signed into the hotspot.
That's also what we show in the mobile app, so you can see exactly, “Oh this is how my hotspot is doing.” Basically, in that sense it's a companion app with a hotspot, but it's very useful. You can just leave the hotspot in your bag and you can still see it on your mobile phone.
Y: For the actual laptop connection, is there a different setup in terms of the services?
S: No. Not really. We treat all devices as equal. It doesn't really matter. When you connect, you have to go through a connection portal, which is basically similar to what you see when you connect to Starbucks Wi-Fi or Boingo Wi-Fi. You just get the pop-up page and you can connect or you can sign in through that. And that works on every device, so that's what we use.
We also have a lot of communication between the hotspot and our backend services. For example, the hotspot tries to figure out if you are allowed to access the internet, yes or no. It reports how much data you're currently using. You can basically manage all your settings in your own Karma dashboard and that will be synchronized to the hotspot. So there's all kinds of communication between the hotspot and the backend. For the new version, we're starting to use MQTT for that, which is a really awesome protocol that almost nobody has ever heard of.
It stands for MQ Telemetry Transport. It's a protocol. I think it was developed in the late '90s, actually, by IBM. But it didn't have a lot of very interesting applications until now. Because the Internet of Things business is really up and coming, this is going to get really interesting. That's mainly because MQTT is basically built for two things. It's built for a small network footprint. So if you use MQTT to send messages back and forth, you're not using a lot of data, which is cool if you're on a metered connection like an LTE connection, for example.
It can also deal with unreliable networks. So it has quality of service properties built in. One of the biggest users of MQTT is actually Facebook Messenger. If you have Facebook Messenger on your iPhone, that uses MQTT. It's a really cool protocol. I'm really excited about it. It's very nice. It's very lightweight.
Y: And that's being built into the next iteration of the product?
S: Yeah. Correct. Basically what it does is it sets up a TCP connection between the hotspot and our backend services. And then it uses MQTT to send messages over that connection.
Y: Right. Speaking of the network, how do you all deal with network outages?
S: It's not that big a deal for us right now because if you sign into a Karma hotspot, that's actually the only moment you notice that we communicate with our backends. As soon as you're signed in, we try to get out of the way. Then you basically have a connection with the Sprint network and that's it.
We're not running a service on top of that; we're not tunneling traffic or anything. It's just the raw Sprint connection that you're on. So if there's a network outage or whatever, then that's not really something that we can deal with. But what we do have to deal with is what happens if the network is slow or the signal is kind of crappy. Can we resend messages? Do we know whether they arrived at our backend services, yes or no? That's basically where the whole quality of service stuff comes in.
So we deal with some of that. We know it's not a server connection. It's a mobile connection and we know that connection is going to be crappy at some point.
Y: How do you know when the connection actually drops or if it drops? Because you said the only time you talk to the backend is when you connect.
S: Basically, MQTT has some built-in mechanics for that. One is it opens the MQTT connection to our backend servers and that's a persistent connection. That will stay open. Then every minute it will send a heartbeat, basically a ping, to our backend services to see, “Hey, is the connection still open?” When it drops, it just starts reconnecting automatically, so it has a connection again and can start sending messages again.
What happens if you don't notice that the connection drops is that you will lose messages, and there is also something built in for that. When you publish a message on an MQTT topic, you can basically give it a quality of service property as well. Within MQTT you have three levels of quality of service. You have quality of service level zero, which is basically best effort: you just try to send it and don't care whether it arrived. Then you have quality of service level one, which is basically “I sent the message and I want an acknowledgment.” That's at-least-once delivery. If it doesn't get the acknowledgement, it will just try to send it again. Then once it gets the acknowledgement, it's “Okay, cool, somebody saw this.” Then you have quality of service level two, which is basically exactly-once delivery. You send a message, get an acknowledgement from the receiver, hold the message while you confirm with the receiver that it's been received only once, and then you can release it. Those are the kinds of levels you have in MQTT to deal with that kind of stuff.
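Those QoS levels are carried in just two bits of the PUBLISH packet's fixed header, which is part of why MQTT's footprint is so small. A tiny Ruby encoder for that first header byte makes it concrete (per MQTT 3.1.1, section 3.3.1; a real client library such as the Ruby `mqtt` gem handles this for you):

```ruby
# First byte of an MQTT PUBLISH packet (MQTT 3.1.1, section 3.3.1):
# packet type in the high nibble, then DUP, two QoS bits, and RETAIN.
PUBLISH_TYPE = 0x3

def publish_fixed_header(qos:, dup: false, retain: false)
  raise ArgumentError, "QoS must be 0, 1 or 2" unless (0..2).include?(qos)
  (PUBLISH_TYPE << 4) | ((dup ? 1 : 0) << 3) | (qos << 1) | (retain ? 1 : 0)
end

publish_fixed_header(qos: 0)  # => 0x30, fire and forget
publish_fixed_header(qos: 1)  # => 0x32, at least once (expects a PUBACK)
publish_fixed_header(qos: 2)  # => 0x34, exactly once (PUBREC/PUBREL/PUBCOMP)
```

So the entire delivery contract for a message costs two bits on the wire, versus the headers of an HTTP round trip.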
Y: Very cool. By the way, your mobile apps are all native?
S: Yup. Objective-C and Java.
Y: Do you want to talk a little bit about web?
S: Sure. I think our most important web properties are, obviously the website where you can buy new devices.
Y: Oh right. Because you built your own e-commerce site.
S: Yeah. We just built everything ourselves.
Y: Did you look at some of the ecommerce packages and just thought it wasn't a good fit?
S: We did look at those, especially when we started out three years ago. But we just wanted to have more control, especially because we don't just sell physical products, but also digital products. The data packages that we sell, we want a lot of flexibility in those. So we want to be able to save your credit card details, so you can just buy new data with one click of a button. We wanted to do auto refill so if you're running low, we can automatically top it up again. All that kind of stuff. To be flexible enough for that we had to build it ourselves.
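The auto-refill rule he mentions could be sketched like this. Everything here is an illustrative assumption (threshold, pack size, class and method names), and a real version would charge the saved card through Stripe before crediting any data:

```ruby
GIGABYTE = 1_073_741_824

# Hypothetical sketch of the auto-refill rule: when auto-refill is on and
# a card is saved, top the account up once the balance drops below a
# threshold. All names and numbers are illustrative assumptions.
class Account
  attr_reader :bytes_remaining

  def initialize(bytes_remaining:, auto_refill: false, card_on_file: false)
    @bytes_remaining = bytes_remaining
    @auto_refill = auto_refill
    @card_on_file = card_on_file
  end

  # Called after each usage report from the hotspot.
  def record_usage(bytes, threshold: 50 * 1024 * 1024)
    @bytes_remaining -= bytes
    refill! if @auto_refill && @card_on_file && @bytes_remaining < threshold
  end

  private

  # In production this would charge the saved card (e.g. via Stripe)
  # before crediting the data.
  def refill!
    @bytes_remaining += GIGABYTE
  end
end

acct = Account.new(bytes_remaining: 60 * 1024 * 1024,
                   auto_refill: true, card_on_file: true)
acct.record_usage(20 * 1024 * 1024)
acct.bytes_remaining  # => 1_115_684_864 (40 MB left, topped up by 1 GB)
```

This kind of rule is exactly the flexibility an off-the-shelf e-commerce package for physical goods tends not to give you.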
Y: On the payment side, I imagine you're using one of the bigger payment services?
S: Yeah, we use Stripe a lot. Stripe is awesome. Then we also have support for PayPal and Bitcoin.
Y: Cool. PayPal and Bitcoin, through PayPal? Or through Coinbase?
Y: Do you want to just talk at a high level about how you do order fulfillment and that entire flow? Just at a high level. You don't need to go into too much detail there.
S: It's actually not that complicated. Basically when you order the device on our website, you go through our entire order backend, which basically means fraud control, we capture your payment and all that kind of stuff. Once everything is good to go, we send a notification to our fulfillment partner and they basically pick the order for us, make sure that there's a nice box around it and the right label is on the box and they ship it to you.
Y: Okay. So they take care of the labeling and all of that.
S: Yeah. We did it ourselves in the beginning, just to feel the pain of fulfillment, and that was very useful because now we know exactly what problems you run into when shipping boxes: having to put a label on it, making sure that the right information is on the label and on the box, and being able to identify exactly which device we're shipping to which customer.
Y: Okay. So you have your own internal fraud detection that you built out.
S: Yeah and then obviously the stuff that comes with Stripe.
Y: You work with one provider for the device? The hotspot device?
S: Yeah. We basically have manufacturers in Asia and we don't offer multiple devices at the same time. We had our WiMAX device, we now want to move to the LTE network. That means we have to get a new device, but we're not really interested in offering multiple devices at the same time. We just want to keep our product very simple. It's already hard enough for our customers to understand what we're actually doing because a lot of people don't even know the difference between 4G and Wi-Fi. That's already hard. There's no use for us offering multiple products. We just want to keep everything as simple as possible.
Y: So you're using Rails for the websites right?
S: Yeah we use Rails for webpages and projects, not for backend services.
Actually if you click through our website, you won't notice it but you're clicking through, I think, seven or eight different Rails projects. We tie those all together with a front-end library that we wrote, which basically makes sure that you have a consistent experience over all these different Rails apps.
Y: So you wrote your own front end framework?
S: Kind of. It's a gem, we call it Karmeleon. It's not a gem that we released. It's an internal gem. Basically what it does is it makes sure that we have a consistent layout across multiple Rails apps. Then we can share stuff like a menu bar or footer or that kind of stuff.
So if we start a new front end project it's always a Rails application. We pull in the Karmeleon gem with all our styling stuff and then basically the application is almost ready to be deployed. That would be an empty page, but you would still have top bar, footer, you have some custom components that you can immediately use. So it kind of bootstraps our entire project to be a front end project. Also we do use Bootstrap a lot. Everything that you see on our marketing website or web servers, it's all built from Bootstrap.
Y: Speaking of bootstrapping, do you want to touch on your build, test, and deploy process, starting from your local environment?
S: Sure. Since all the backend projects are Sinatra and Ruby and all the front end projects are Rails and Ruby, it's quite easy to install everything on your local machine. You just check out the project, install the gems and you're ready to go. We use Pow, the server by 37signals, a lot for local development, which is awesome because you can just add a host name and then it magically works. That's pretty cool. It's especially useful if you happen to run backend services locally as well. That works great.
Then once you've made a change, you push it to GitHub and builds are run via Travis CI. We use that quite a lot. And we try to test all our backend services. We use RSpec and Cucumber a lot. We currently use Chef to push to our staging or production servers.
Y: Chef to AWS. Are you using any VMs locally?
S: No. We tried doing that, but we didn't really like it. We didn't spend enough time on it, and if the VMs are outdated, it takes a lot of time to update everything. That didn't really work since, in total, if you add the Rails projects, we have about 50 projects and we don't want to have 50 VMs. That also doesn't work. It would basically mean you'd have one big development VM, and we never really liked that. We recently hired a DevOps engineer and his main focus over the next couple of months will be how we can move most of this to Docker, because Docker seems like a really good fit for what we're trying to do. That will work great, especially in testing, staging and production. And I wonder how well it will work in development as well.
Y: In terms of the team, how many engineers are pushing code at any given moment?
S: Myself included, we have eight engineers right now. A couple of them are backend engineers, a couple of them are firmware, mobile. I would say eight, but it's all spread out over a lot of different repositories.
Y: Okay. Are there any tools you want to mention that are really useful for you guys from a workflow perspective? Or any sort of messaging?
S: Slack. Especially since our team is distributed, it's really important to make sure that we can keep in touch with each other all day long. One other tool that we used quite heavily is iDoneThis. We use that a lot. That really gives us a lot of visibility, especially for the people that are remote, it gives a lot of visibility into okay, what's the rest of the company doing? Where are we heading? What are people working on, etc. Then we have weekly meetings. We have a weekly engineering meeting, for example, so that's just with the engineering team. That's just using Google Hangouts. We also have a weekly company meeting. That's basically where our CEO updates everybody on what we were working on last week, what changed, what happened, etc.
Y: Lastly, are there any challenges that you're dealing with now, from an engineering perspective?
S: I think the biggest issue for us is, if you have this whole platform of microservices, how do you make sure that they can communicate with each other and can do that well? What happens when you upgrade the service? Does it break any contracts between services? How does that work?
Then from an operations perspective, how do we actually run this whole system? That's why we're looking into Docker as well. Just make sure that we can isolate components, we can scale them individually if we need to, that we can easily manage them, and still be flexible in the way we run everything. I think flexibility is one of our biggest challenges.
Y: This was awesome. Thanks for doing this Stefan.