Infobip Engineering Timeline

2008 / 2009

  • Developing our SMSC (SMS center) enabled us to send our first message. We started with a proof of concept, without much of a clue about what could happen. We got access to a real MNO (Mobile Network Operator), ran the test, and the first message was delivered. It broke on the second try, but we quickly fixed it. It's hard to believe it worked on the very first try.
  • Adoption of the Spring framework made development much easier.

2010 / 2011

  • Following a good hunch of our founder, we introduced a monetary / pricing model for our sGate (SMS firewall) product. Even though we had an MVP (hardware prototype) ready, it wasn't easy selling it to customers. It was a completely new product on the telecom market, and no one saw the use case or the value... not until we decided to add monetization. Then sales started to grow and we agreed on exclusive pricing. Quite a few competitors copied the model afterwards.
  • Our in-house-developed Deployment Manager tool enabled us to automate our deployment process and reduce human error. The Jenkins-based Continuous Integration / Continuous Deployment (CI/CD) pipeline paved the way for future QA automation.
  • Having two instances of our core SMS platform enabled us to deploy to production without stopping traffic, which was key given our growth. While one instance was being deployed, the other took over, letting us achieve higher availability.
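The two-instance idea above can be sketched in a few lines. This is a hypothetical illustration, not Infobip's actual routing code: traffic always goes through an atomically swappable reference to the "active" instance, so the other instance can be stopped, redeployed, and swapped back in without dropping requests.

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Function;

// Hypothetical sketch of two-instance (blue/green style) deployment:
// the routing layer always forwards to whichever instance is "active".
public class TwoInstances {
    static final AtomicReference<Function<String, String>> active =
            new AtomicReference<>();

    // Routing layer: every message goes to the currently active instance.
    static String handle(String message) {
        return active.get().apply(message);
    }

    public static void main(String[] args) {
        Function<String, String> instanceA = m -> "A processed " + m;
        Function<String, String> instanceB = m -> "B processed " + m;

        active.set(instanceA);
        System.out.println(handle("sms-1")); // served by A

        // Redeploy B with a new version, then atomically switch traffic.
        active.set(instanceB);
        System.out.println(handle("sms-2")); // served by B; A is now free to deploy
    }
}
```

In production the switch happens at the load balancer rather than inside one process, but the invariant is the same: at any moment exactly one instance is receiving traffic.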
  • Over time, the number of our servers exceeded our familiarity with Greek mythology. Zeus, Hera, Ares... We had more servers than there were Greek gods we could name them after.
  • A power outage in our data center in Zagreb motivated us to migrate our data center to Frankfurt. The new data center was bigger and more stable, with much better connectivity, new hardware, and blade servers.
  • Git replaced TFS, and teams spent less time on source code management administration.

2012

  • Linux started replacing Windows on our production servers. The LVS load balancer was the first service to migrate and broke the ice for the others. Soon all our Java services were deployed on Linux using our in-house-built Deployment Manager app.
  • Virtualization brought more flexibility in managing hardware resources to satisfy the needs of a fast-growing number of teams.

2014

  • We started breaking down our data pipeline into components. Our data warehouse was separated from our transactional database. The performance of traffic processing was improved because reporting wasn't affecting it anymore.
  • A standardized, in-house-developed inter-process communication layer (based on RMI) enabled our services to talk to each other. This was a prerequisite for micro-services.
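For readers unfamiliar with RMI, here is a minimal, self-contained sketch of the pattern: a server exports a remote object under a well-known name, and a client looks it up and calls it as if it were local. The `GreetingService` interface and the in-process registry are assumptions for illustration; Infobip's actual RMI contracts are not public.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Hypothetical remote contract shared between the two services.
interface GreetingService extends Remote {
    String greet(String name) throws RemoteException;
}

public class RmiSketch {
    static class GreetingImpl implements GreetingService {
        public String greet(String name) { return "Hello, " + name; }
    }

    // Export a server object, register it by name, look it up, and call it.
    static String greetViaRmi() throws Exception {
        GreetingService impl = new GreetingImpl();
        GreetingService stub =
                (GreetingService) UnicastRemoteObject.exportObject(impl, 0);
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("greeting", stub);

        // "Client" side: resolve the service by name and invoke it remotely.
        GreetingService client = (GreetingService)
                LocateRegistry.getRegistry("localhost", 1099).lookup("greeting");
        String reply = client.greet("Infobip");

        UnicastRemoteObject.unexportObject(impl, true); // let the JVM exit
        return reply;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(greetViaRmi());
    }
}
```

In a real deployment the registry, server, and client run in separate JVMs on separate hosts; standardizing this contract layer is what made splitting the monolith into independently deployable services practical.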
  • Since we were scaling fast in every possible sense, we had to shift from a monolith to a micro-services architecture. It brought faster releases and better organizational scalability. Our first such service was the HTTP API, followed by the Upload Tool and the Mobile Originated (MO) service.
  • The introduction of Docker made our deployment process simpler. We were no longer limited to deploying only our own services; we could deploy Redis, PostgreSQL, and more. We could be more creative in how we built our applications, and our development teams could deploy their services on their own, without relying on others. It increased the overall sense of ownership.

2015

  • We took the first steps towards Infobip Portal, then known as CUP (Customer Portal). These beginnings allowed us to unify everything under a single domain and subdomain - portal.infobip.com. We also settled on the unified technology stack that remains the basis of Infobip Portal today - React and Node.js/Express.

2016

  • The introduction of Elasticsearch enabled engineers to query data without impacting transactional databases. More people within the organization had access to data. This became the baseline for building our Data Science team.
  • The first version of Infobip Portal was released with products such as Login, Settings, Reports, Voice, OMNI, and more.

2017 / 2018

  • All customers migrated from the old customer portals to the new Infobip Portal.
  • Introduced Broadcast as the first step towards the Moments engagement platform.
  • Introduced Flow - an early iteration of our engagement platform that would later evolve into Moments, our complete engagement platform.
  • Introduced Webpack as the Web application build solution for the whole Infobip Portal.
  • We replaced the Graphite monitoring system with Prometheus. We increased the number of metrics we were tracking and were able to monitor all our services. Additionally, the monitoring system became much easier to use and maintain.
  • The data pipeline evolved to satisfy the demands of an ever-growing number of teams that needed to consume data. Kafka became the backbone of our data pipeline. We finally had a data pipeline that scales.
  • An in-house-developed API Gateway component enabled any development team to easily expose their services to clients. There was no more need for domain registration, and communication logs were centralized. Networking teams were offloaded, removing dependencies, and everything related to authentication, security, and load balancing was streamlined and centralized.

2019

  • We added new communication channels - WhatsApp, Viber, email, Voice, Telegram... - each with its own specific requirements, all on the same infrastructure and core platform services.
  • The old customer portal, CUP 1.0, was officially retired. We started using the new modular CUP 2.0 solutions such as Broadcast, Flow, Dashboard, and Reports, which let us scale its use from 5 to 20 teams.
  • New products:
    • Conversations, our Contact Center as a Service (CCaaS) and our first SaaS product
    • Entered the IoT segment, with SIM management for our clients
    • Mobile Identity, our solution for frictionless customer 2FA
  • Introduced SRE (Site Reliability Engineering) and incident management - a central, standardized way of handling and tracking stability and incidents
  • Introduced ClickHouse, a distributed database for client-facing analytics
  • Introduced Kubernetes to orchestrate our Docker containers
  • Self-Service was officially introduced and released to customers, together with Signup on Infobip's official website and the new customer homepage.

2020

  • New products:
    • Introduced Answers, our chatbot building platform
    • Introduced Moments, our complete engagement platform including Broadcast and Flow.
    • Introduced face and voice biometric verification, accompanied by ID-document extraction and KYC.
  • Introduced Opsgenie for monitoring and handling our on-call escalation policies
  • Officially shipped the major React 16 update for Infobip Portal, together with significant performance improvements
  • We introduced the Navigation module as an independent micro-frontend module, so our 30+ teams wouldn't be blocked by changes to it
  • Officially introduced Bepo 1.0 as Infobip's design system, together with its set of components, with the goal of a consistent UX across the whole Infobip Portal and easier updates

2021

  • Introduced Portal Web as the new serverless way of building web micro-frontend applications, aimed at bringing simplicity, stability, and consistency while making technical updates easier
  • Successfully integrated acquired platforms such as OpenMarket to scale Infobip's communication platform (part of the OnePlatform strategic initiative).
  • Infobip Portal became available in more regions, with the first customers migrated in the USA (New York DC) and India (Mumbai DC).
  • Officially replaced our localization solution with Smartcat, and released Translations as a new independent module.
  • Published the Infobip Engineering Handbook. You're reading it right now!