FLO HEALTH, INC.


Siarhei Zuyeu

Python was first used at Flo because we needed quick prototyping and product idea validation, so the very first backend architecture was built as a monolithic Python web service. Now we use it for ML-related projects, for building web services for our core product, and for a huge variety of utilities that help support our platforms and integrations with external tools.

We focus on getting the most beneficial aspects of the language while balancing development with Scala. Core application services include health-domain cycle predictions, user data management, chatbots, and more. Python helped us start development quickly and is still capable of managing high load.


Vladimir Burylov

iOS Developer at FLO HEALTH, INC.

  • We have been developing all new features in Swift since 2018, a choice that doesn’t need much explaining in 2023. The Objective-C part of the code is well isolated from the rest of the app and is slowly but steadily declining in size.
  • In 2022, we considered SwiftUI mature enough and started using it for all new UI code instead of the Texture framework we had used since 2018. The transition went smoothly because layout in SwiftUI is based on principles similar to Texture’s: it is declarative and relies on a container-based layout system. Texture had earlier been preferred over UIKit for the same reason (plus superior performance), but SwiftUI now offers all of those benefits and is also a first-party tool, actively supported and developed.
  • In 2023, we switched to The Composable Architecture (TCA) for all our business-logic-related code. We have been using a Redux-like architecture since 2019, when we decided to pursue this direction instead of the then-popular MVVM or VIPER. It took us some time to adjust initially, but we then enjoyed its benefits immensely: full transparency and control over state mutation, convenient testability, and composability that allows for immense scalability with minimal overhead. Initially, we used an in-house solution inspired by TCA and based on RxSwift. Since then, TCA has evolved considerably and added many features our solution lacked, including support for modern first-party tools like async/await and Combine, so with its 1.0 release, we finally decided to make the switch.
  • With the switch to SwiftUI and TCA in 2023, we adopted async/await and Combine in place of the previously used RxSwift. It was naturally replaced by first-party tools that provide the same or even superior functionality and integrate better with the rest of our stack.
  • We chose Swift Package Manager (SPM) over CocoaPods or Carthage back in 2020. Like the Swift language itself, it emerged in a somewhat limited state. We watched patiently how it evolved until it received all the features that we needed, like the ability to have binary packages. With SPM, we could fully embrace modularization in our app, as it allows us to easily create new internal modules and support any number of them.
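The Redux-style unidirectional data flow described above can be sketched in a few lines. This is a hand-rolled illustration, not the actual TCA API (the real `Reducer`, `Store`, and `Effect` types are far richer); all names here are made up for the example.

```swift
// State is a plain value type; the reducer is the only place it mutates.
struct CounterState: Equatable {
    var count = 0
}

enum CounterAction {
    case increment
    case decrement
}

// A pure reducer: full transparency over state mutation and trivial to unit-test.
func counterReducer(state: inout CounterState, action: CounterAction) {
    switch action {
    case .increment: state.count += 1
    case .decrement: state.count -= 1
    }
}

// A minimal store that feeds every action through the reducer.
final class Store {
    private(set) var state = CounterState()
    func send(_ action: CounterAction) {
        counterReducer(state: &state, action: action)
    }
}

let store = Store()
store.send(.increment)
store.send(.increment)
store.send(.decrement)
print(store.state.count)
```

Because the reducer is a pure function of state and action, tests can assert on the resulting state without mocking anything, which is the testability benefit the post describes.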

Vladimir Kurlenya

It’s pretty common when you read a success story about migrating from a monolith to microservices to see that people have a clear idea of what they already have; what they want to attain overall; that they have looked at all the pros and cons; and out of the plethora of available candidates, they chose Kubernetes. They have been faced with insurmountable problems, and with an unbelievable superhuman effort they resolved these issues and finally found the kind of happy resolution that happens “a year and a half into production.”

Was that how it was for us? Definitely not.

We didn’t spend a lot of time considering the idea of migrating to microservices like that. One day we just decided “why not give it a try?” There was no need to choose from the orchestrators at that point: Most of the dinosaurs were already on their last legs, except for Nomad, which was still showing some signs of life. Kubernetes became the de facto standard, and I had experience working with it. So we decided to pluck up our courage and try to run something non-critical in Kubernetes.

Considering that at that time all our infrastructure was in AWS, we also didn’t spend much time deciding to use EKS.

I’m struggling to remember who we chose as the guinea pig for the run in EKS — it might have been Jenkins. Or Prometheus. It’s difficult to say, but gradually all the new services were launched in EKS, and on the whole, everyone liked the approach.

The only thing that we didn’t understand was how to organize CI/CD.

At that time, we had the heady mix of Ansible/Terraform/Bitbucket, and we were not entirely satisfied with the results. Besides, we tried to practice delivery engineering and didn’t have a dedicated DevOps team, and there were many other factors too.

What did we need?

  • Unification — although we never required our teams to use a strictly defined stack, some consistency in CI/CD was desirable.
  • Decentralization — as mentioned earlier, we did not have a dedicated DevOps team, nor the desire (or need) to start one.
  • Relevance — not bleeding edge, but we wanted a tech stack that was on trend.
  • We also wanted the obvious things like speed, convenience, flexibility, etc.

It was safe to say that Helm was the standard for installing and running applications in EKS, so we didn’t use Ansible or Terraform for the management and templating of Kubernetes objects, although this solution was offered. We only used Helm (although there were lots of questions and complaints about it).

We also didn’t use Ansible or Terraform to manage Helm charts: it didn’t fit our desire for decentralization and wasn’t exactly convenient. Again, because we don’t have a DevOps team, our services can be deployed in EKS by any developer with the help of Helm, and we don’t need (or want) to be involved in this process. We therefore took the most controversial route: We wrote our own wrapper for Helm so it would work like an automatic transmission; more specifically, it would reduce interaction with the user to the decision to go or not to go (in our case, to deploy or not to deploy). Later, we added a general Helm chart to this wrapper, so a developer needs only two inputs to deploy:

  • What to deploy (the Docker image)
  • Where to deploy (dev, stage, prod, etc.)

So in all, the service deployment process was run from the repository of the same service by the same developer, exactly when and how the developer needed it. Our participation in this process was reduced to minimal consultation on some borderline cases and occasionally eliminating errors (where would we be without them?) in the wrapper.
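The shape of such a wrapper can be sketched as a small shell function that composes a standard `helm upgrade --install` invocation from the two inputs. This is an illustrative sketch only: the chart path, release name, and value keys are hypothetical, and the script merely prints the command instead of running it.

```shell
#!/bin/sh
# Hypothetical wrapper: the developer supplies only the image and the target
# environment; everything else comes from a shared, general-purpose chart.
deploy() {
  image="$1"   # what to deploy, e.g. registry.example.com/my-service:1.4.2
  env="$2"     # where to deploy: dev, stage, prod, etc.

  # Compose a standard Helm command; values-"$env".yaml carries the
  # per-environment settings. Printed here rather than executed.
  echo helm upgrade --install "my-service" ./charts/generic-service \
    --namespace "$env" \
    --set image.repository="${image%:*}" \
    --set image.tag="${image#*:}" \
    -f "values-${env}.yaml"
}

deploy "registry.example.com/my-service:1.4.2" "prod"
```

The point of the "automatic transmission" design is visible here: the developer never touches templating, namespaces, or flags, only the go/no-go decision with two values.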

And then we lived happily ever after. But our story isn’t about that at all.

In fact, I was asked to talk about why we use Kubernetes, not how it went. If I am honest (and as you can surely tell), I don’t have a clear answer. Maybe it would be better if I told you why we are continuing to use Kubernetes.

With Kubernetes, we were able to:

  • Better utilize EC2 instances
  • Obtain a better mix of decentralization (all the services arrive in Kubernetes from authorized repositories; we are not involved in the process) and centralization (we always see when, how, and from where a service arrives, via logs, audits, and events)
  • Conveniently scale a cluster (we use the combination cluster autoscaler and horizontal pod autoscaler)
  • Get convenient infrastructure debugging (not forgetting that Kubernetes is only one level of abstraction over several others, and even in the worst-case scenario there is a standard RHEL under the hood … well, at the very least we have that)
  • Get high levels of fault tolerance and self-healing for the infrastructure
  • Get a single (well, almost) and understandable CI/CD
  • Significantly shorten time to market (TTM)
  • Have an excellent reason to write this post

And although we didn’t get anything new, we like what we got.


Vladislav Ermolin

Android Engineer at Flo

Back in 2015, when Flo began, we chose Android SDK as the basis for our Android application.

Nowadays, we could choose from plenty of cross-platform SDK options, which would’ve probably saved us resources at the beginning of the product’s development life cycle. However, engineering resource utilization isn’t the only consideration for making decisions. If you wanted to create the best women’s health solution on the market, you would need to care about performance and seamless integration with operating system features too. The modern cross-platform SDKs have just begun to get closer to the native development option in that regard. The Kotlin Multiplatform Project is a good example of such a framework. Unfortunately, because it hasn't been around for a long time, it still has plenty of issues, so it currently doesn't fit our needs. However, we might consider it in the future. All in all, I believe that we made the right choice.

Over time, Android engineering best practices, tools, and the operating system itself evolved, giving developers multiple ways to implement the same features more effectively, both in terms of engineering team performance and device resource utilization. Our team evolved as well: We’ve come a long way from a single Android developer to a dozen feature teams that need to work on the same codebase simultaneously without stepping on each other's toes. We began caring more about cycle time because one can’t successfully compete by delivering value slowly.

For our dev team, these changes prompted a push to update the codebase in order to deliver value faster and speed up the adoption of new Android features, raising the overall level of quality at the same time.

We began with the modularization of our Android application. Using the power of the Gradle build system, we split our application into 70+ shared core modules and 30+ independent feature modules. Such a huge step required the revision of the application’s architecture. One could say that we moved to clean architecture; however, I would say that we use architecture driven by common software engineering principles like SOLID, DRY, KISS, etc. On the presentation layer, we switched from the MVP to the MVVM pattern. Implementation of this pattern, powered by the Jetpack Lifecycle components, simplifies Android component lifecycle management and increases the reusability of the code.
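The MVVM shape described above can be sketched without any Android dependencies. In the real app the view model would extend Jetpack's `androidx.lifecycle.ViewModel` and expose `LiveData` or `StateFlow`; here a plain observer list stands in so the snippet runs anywhere, and all names are illustrative.

```kotlin
// Immutable UI state: the view always re-renders from a fresh copy.
data class CycleUiState(val dayOfCycle: Int = 1, val loading: Boolean = false)

class CycleViewModel {
    private val observers = mutableListOf<(CycleUiState) -> Unit>()

    var state = CycleUiState()
        private set

    fun observe(observer: (CycleUiState) -> Unit) {
        observers += observer
        observer(state) // emit current state on subscription, like LiveData does
    }

    // The view forwards user intent; the view model owns all state changes.
    fun onDayLogged() {
        state = state.copy(dayOfCycle = state.dayOfCycle + 1)
        observers.forEach { it(state) }
    }
}

fun main() {
    val vm = CycleViewModel()
    vm.observe { println("day=${it.dayOfCycle}") }
    vm.onDayLogged() // the "view" re-renders from the new immutable state
}
```

Keeping the view model free of view references is what makes this pattern survive Android lifecycle churn and keeps the logic reusable and testable.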

Supporting such a setup would be barely possible without a dependency injection (DI) implementation. We settled on Dagger 2. This DI framework provides compile-time graph validation, multibinding, and scoping. Apart from that, it offers two ways to wire individual components into a single graph: subcomponents and component dependencies, each good for its own purpose. At Flo, we prefer component dependencies, as they better isolate features and positively impact build speed, but we use subcomponents closer to the leaves of the dependency graph as well.
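The component-dependency idea can be illustrated without Dagger's annotation processing. In real code, Dagger 2 generates this wiring from `@Component(dependencies = [...])`; the hand-rolled sketch below just shows the structure, and every name in it is made up.

```kotlin
// What the core graph exposes to features: an interface, not its internals.
interface CoreComponent {
    fun httpClient(): String // stand-in for a real shared dependency
}

class CoreComponentImpl : CoreComponent {
    override fun httpClient() = "OkHttpClient"
}

// A feature component takes the core component as a constructor dependency
// instead of being nested inside it as a subcomponent. The feature sees only
// the CoreComponent interface, which isolates it from core internals and
// keeps incremental builds fast.
class FeatureComponent(private val core: CoreComponent) {
    fun featureRepository() = "Repository(${core.httpClient()})"
}

fun main() {
    val feature = FeatureComponent(CoreComponentImpl())
    println(feature.featureRepository())
}
```

With subcomponents, by contrast, the parent graph must know about every child, so a change near the root can recompile every feature; component dependencies invert that, which is the build-speed benefit mentioned above.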

Though we still have Java code in the project, Kotlin has become our main programming language. Compared to Java, it has multiple advantages:

  • Improved type system, which, for example, makes it possible to avoid the “billion-dollar mistake” in the majority of cases
  • Rich and mature standard library, which provides solutions for many typical tasks out of the box and minimizes the need for extra utilities
  • Advanced features to better fit the open-closed principle (for example, extension functions and removal of checked exceptions let us improve the extendability of solutions)
  • The syntax sugar, which simply lets you write code faster (it’s hard to imagine modern Android development without data classes, sealed classes, delegates, etc.)

We attempt to use Kotlin wherever possible. Our build scripts are written in it, and we are also migrating the good old Bash scripts to KScript.
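The language advantages listed above fit in a few lines of plain Kotlin; all names here are invented for the illustration.

```kotlin
// Null safety: nullable types make the "billion-dollar mistake" explicit.
fun greet(name: String?): String = name?.let { "Hello, $it" } ?: "Hello, guest"

// Data and sealed classes: exhaustive, boilerplate-free domain modeling.
sealed class SyncResult
data class Success(val itemsSynced: Int) : SyncResult()
data class Failure(val reason: String) : SyncResult()

fun describe(result: SyncResult): String = when (result) { // compiler enforces exhaustiveness
    is Success -> "synced ${result.itemsSynced} items"
    is Failure -> "failed: ${result.reason}"
}

// Extension functions: extend a type without modifying it (open-closed principle).
fun String.initials(): String =
    split(" ").mapNotNull { it.firstOrNull()?.uppercase() }.joinToString("")

fun main() {
    println(greet(null))          // Hello, guest
    println(describe(Success(3))) // synced 3 items
    println("flo health".initials()) // FH
}
```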

Another huge step in Kotlin adoption is the migration from RxJava to Kotlin coroutines. RxJava is a superb framework for event-based programming. However, it is not the best choice for asynchronous programming. In that regard, Kotlin coroutines are a much wiser choice, offering more effective resource utilization, more readable error stack traces, finer control over the execution scope, and syntax that looks almost identical to synchronous code. In its main area of use — event-based programming — RxJava has also begun to lose ground. Being written in Java, it does not support Kotlin’s type system well. Besides, many of its operators are synchronous by design, which can limit developers. Flow, driven by Kotlin coroutines, addresses both of these drawbacks. Even though it is a much younger framework, we found that it perfectly fits our needs.
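The contrast reads clearly in code: a suspend function looks like synchronous code, and a Flow is a cold stream whose operators can themselves suspend. This sketch assumes the `kotlinx.coroutines` library on the classpath; the function names are illustrative.

```kotlin
import kotlinx.coroutines.delay
import kotlinx.coroutines.flow.flow
import kotlinx.coroutines.flow.map
import kotlinx.coroutines.flow.toList
import kotlinx.coroutines.runBlocking

// A suspend function: asynchronous, but reads top-to-bottom like sync code,
// with ordinary try/catch and readable stack traces.
suspend fun fetchCycleLength(): Int {
    delay(10) // stands in for a network call
    return 28
}

// A cold Flow: nothing runs until collected, once per collector.
fun cycleLengths() = flow {
    repeat(3) { emit(fetchCycleLength() + it) }
}

fun main() = runBlocking {
    val lengths = cycleLengths()
        .map { it + 1 } // Flow operators may suspend, unlike many Rx operators
        .toList()
    println(lengths) // [29, 30, 31]
}
```

Cancellation is also structured: when the collecting scope dies, the upstream suspend calls are cancelled automatically, with no manual `Disposable` bookkeeping.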

Perhaps the most noticeable sign that the above changes were not taken in vain is that you can now use Flo on your smartwatch powered by Android Wear. This is the second Flo app for the Android platform, and it effectively reuses the codebase of the mobile app. One of the core advantages of the Flo Watch app lies in Wear Health Services. It allows us to effectively and securely collect health-related data from the user’s device, if a user so chooses, and utilize it to improve the precision of cycle estimation.

But we won't stop chasing innovation!

Even though we migrated to ViewBinding, enjoying the extra type safety and reduced boilerplate, we couldn’t pass up the Jetpack Compose framework, which is going to be the next big thing both for Flo and for the whole mobile industry. It allows us to use Kotlin’s power to construct UI, reduces code duplication, increases the reusability of UI components, and unblocks building complex view hierarchies with less of a performance penalty. On the other hand, it requires changing the architecture approach once again. But that has never stopped us. So far, we’ve integrated it into one feature module and look forward to using it as the main UI framework in all the upcoming ones.

Finally, what about recent Android features support? Well, we continuously improve the app in that sense. Like most teams, we rely on different Jetpack, Firebase, and Play Services libraries to achieve that goal. We use them to move work to the background, implement push notifications, integrate billing, and many other little things, all of which improve the overall UX or let us reach out to users more effectively. However, we also invest in first-party tooling. In an effort to ensure secure and transparent management of user data, we implemented our own solutions for A/B testing, remote configuration management, logging, and analytics.

What about quality? Developers are responsible for the quality of the solutions they create. To ensure it, we use modern tools and approaches:

  • We chose Detekt and Android Lint for static code analysis. These frameworks prevent many issues from reaching production by analyzing the codebase at compile time. They are capable of finding the most common problems in Kotlin and Android-related code and ensure the whole team follows the same code style. When these frameworks do not provide the necessary checks out of the box, we implement them ourselves.
  • The above two frameworks are used both locally and in the continuous integration pipelines. In the latter, however, we additionally use the SonarCloud tool, which provides extended complexity, security, and potential-bug checks that run in the cloud.
  • To ensure that the code meets requirements, we use multiple layers of automated testing. Our test pyramid includes unit tests, which run on the JUnit 5 platform, and E2E tests powered by the Espresso framework. Together, these two approaches let developers get feedback fast while ensuring that features work as expected end to end.