
DevOps Trends to Watch for in 2023



By Guillermo Carreras

As Director of Delivery, Guillermo Carreras implements BairesDev's campaigns while focusing on Agile development and digital transformation solutions.



As businesses strive for success in a post-pandemic landscape, technology is becoming increasingly important. With data analysis, artificial intelligence (AI), machine learning (ML), Internet of Things (IoT), 5G, blockchain, and cloud, just to name a few, companies are moving forward in their operations and their offerings by leaps and bounds. But technology on its own doesn’t solve problems or make companies successful. For that, organizations also need highly effective processes.

DevOps is one of those processes, enabling companies to develop applications more efficiently. To speed up their cycles and increase customer satisfaction, more companies are adopting the hallmarks of DevOps software development: transparency, clear communication, a focus on quality, and an emphasis on customer experience.

The tools used to achieve these ends are constantly shifting, and professionals should be aware of the technologies and processes at their disposal. In the following sections, we highlight emerging trends to watch for and explore as we move into 2023.

AI and ML

When introduced into the DevOps process, AI can help streamline and automate many tasks involved with software development. For example, AI can help write and modify code, automate deployments, and more, enabling engineers to focus on higher-level matters such as overseeing the overall development operation or innovating new applications. Machine learning is an aspect of AI that enables deeper automation, as machines gain the ability to learn from provided data or from operating within real scenarios.

In addition to automation, these technologies can offer enhanced feedback and alerts to make software engineering teams more effective. AI and ML can be used in the testing process to write test cases, generate test data, and use software in ways that may not occur to humans to try. AI can also be deployed for additional tasks throughout the software development life cycle, including code compiling, code completion, error checking, documentation lookup, decision-making, and estimating project costs and time.

Benefits of automation include greater speed, increased productivity, higher accuracy, and better quality. The evolution of AI and ML will include a more predictive functionality, which can examine the DevOps pipeline, identify issues, and recommend changes before problems emerge.
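As a concrete illustration of AI-assisted testing, the sketch below stands in for an ML test-data generator with simple rules: it produces edge-case inputs a human tester might not think to try and records which ones break a function under test. The function names and cases are illustrative assumptions, not any specific tool's API.

```python
import random

# Rule-based stand-in for an ML test-data generator: it emits
# boundary, empty, and oddly-encoded inputs alongside random ones.
def generate_test_inputs(seed=42, count=5):
    rng = random.Random(seed)
    edge_cases = ["", " " * 1000, "0", "-1", "\u0000", "🙂" * 50]
    generated = [str(rng.randint(-10**9, 10**9)) for _ in range(count)]
    return edge_cases + generated

def parse_quantity(text):
    """Toy function under test: parse a non-negative integer."""
    value = int(text)
    if value < 0:
        raise ValueError("quantity must be non-negative")
    return value

# Exercise the function with generated inputs and record failures,
# the way an AI-assisted test harness would surface weak spots.
failures = []
for case in generate_test_inputs():
    try:
        parse_quantity(case)
    except (ValueError, OverflowError) as exc:
        failures.append((case, type(exc).__name__))

print(f"{len(failures)} of {len(generate_test_inputs())} inputs failed")
```

A real ML-driven generator would learn which input shapes tend to expose bugs, but even this rule-based version shows the payoff: failing cases surface automatically instead of depending on a tester's imagination.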

Chaos Engineering

Chaos engineering is a part of the testing process in which engineers deliberately introduce failure, or chaos, to determine how much applications can handle. The intention is to force software to respond or even break down under certain conditions. While this method may seem to introduce more problems than it solves, it actually provides a way for engineers to identify reliability issues and address them before an application is deployed.

Software engineers who use chaos engineering must go into the process with a purposeful plan and view the process as more of an experiment than a way to identify a possible failure. The plan should include a hypothesis, exactly what will be tested, and the process for testing. After potential failure points have been identified, the next step is to mitigate them.

Chaos engineering supports some of the most important aspects of DevOps, including continuous testing and improvement and creating more reliable applications. The primary benefit of this method is avoiding failures that can upend the development process, slow down development, and waste time and money. The ultimate benefit is enabling companies to deploy higher quality applications that customers will be happy with, resulting in greater loyalty and higher revenues.

Cloud-Native Infrastructure

Cloud-native infrastructure provides the hardware and software foundation for running cloud-based applications. These applications are meant to take advantage of the benefits of cloud operations, including reliability, security, and handing over hardware maintenance to another entity. It aligns with the goals of DevOps in that it enables companies to reduce time to market and work more efficiently.

As more companies shift to the use of cloud applications, cloud-native development makes the most sense for building them. In developing cloud-based software, engineers can avoid the increasingly outdated process of investing in and managing physical on-premises infrastructure. In a time when remote and hybrid work are becoming ever more common, cloud-native development is also useful because engineers can work on a project and collaborate no matter where they are.

Benefits of cloud-native development include faster development and update times and consistent quality monitoring and improvement. Cloud-native software offers numerous points of contact for users to interact with, is more flexible and scalable than conventional software applications, and is based on evolving technology rather than the static environments that companies have employed in the past. 


Containerization

The containerization trend involves packaging software along with needed elements such as libraries, frameworks, tools, settings, and other supportive components. Containers are built by converting a container file to a container image, which is turned into an actual container by a runtime engine. The outcome is software that uses fewer resources and is easier to set up in new environments.
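The container-file-to-image-to-container flow can be sketched with a hypothetical Dockerfile (the base image and application file names are assumptions for illustration):

```dockerfile
# Container file: declares everything the app needs to run.
# Base image providing the language runtime.
FROM python:3.11-slim
WORKDIR /app
# Install library dependencies into the image.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Add the application code itself.
COPY app.py .
# What the container runtime executes on start.
CMD ["python", "app.py"]
```

Running `docker build -t myapp .` converts this file into an image, and `docker run myapp` has the runtime engine turn that image into a running container.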

Containers are the next step in the evolution of cloud technology. Prior to the cloud, software was run on physical machines, each with its own operating system. At times, applications were not compatible with the operating systems, so engineers attempted to test against all environments when building applications. But this process became cumbersome and time-consuming.

With containers, software applications can be built with the components that make them compatible with the OS they are running on. Benefits of containerization include increased efficiency and startup times for users and faster deployment and delivery time for engineers. Containers enable observability of information regarding the OS, the application, and other aspects of the system. 


DevSecOps

As technology advances, so does the ability of hackers to interfere with its operation. Security has become a major concern for application developers because companies that use their software have the potential to lose millions of dollars if a breach occurs. Additionally, end users who trust their most sensitive data to these companies can be harmed by an attack and lose faith in the business, costing even more in lost revenues and reputation.

That’s why many engineers are now including security within the DevOps process. Within a DevSecOps framework, engineers include security very early in the development process, rather than adding it on later. With this method, security is baked into the application. The DevSecOps process brings in various players in addition to the security team, including testing professionals. By using this method, teams can identify security vulnerabilities in the code early and often as it is being developed. 

DevSecOps enables companies to take advantage of DevOps best practices such as elevating security, increasing observability, and ensuring proper governance. It reduces the cost of adding security to applications, since it is already proactively integrated into the process.
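One small example of shifting security left is scanning source code for hardcoded credentials before it is ever committed. The sketch below is a hypothetical pre-commit check; the patterns are illustrative, not exhaustive, and real teams would use a dedicated scanner.

```python
import re

# Illustrative patterns for secrets that should never appear in code.
SECRET_PATTERNS = [
    re.compile(r"(?i)(password|passwd|secret)\s*=\s*['\"][^'\"]+['\"]"),
    re.compile(r"(?i)api[_-]?key\s*=\s*['\"][^'\"]+['\"]"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key id
]

def scan_source(text):
    """Return (line_number, line) pairs that look like leaked secrets."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            findings.append((lineno, line.strip()))
    return findings

sample = '''
db_host = "localhost"
password = "hunter2"
api_key = "sk-test-123"
'''
for lineno, line in scan_source(sample):
    print(f"line {lineno}: possible secret -> {line}")
```

Wired into a pre-commit hook or CI stage, a check like this catches vulnerabilities "early and often," at the cheapest possible point to fix them.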


Kubernetes

According to its website, Kubernetes is “a portable, extensible, open source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.” Kubernetes addresses the problem of applications taking up resources and limiting those available to other applications running on the same server, as was the situation when software was still run on physical on-premises hardware.

An interim solution to this problem was running multiple virtual machines (VMs) on a single physical server. This approach isolated applications, giving each adequate resources and security. Containers are the next evolution of virtual machines. Like a VM, a container includes its own files, memory, and other supporting components, but containers can also be used independently of the underlying infrastructure.

Kubernetes is useful for managing containers that run applications. “For example,” according to the Kubernetes website, “if a container goes down, another container needs to start. Wouldn’t it be easier if this behavior was handled by a system?” The site lists the system’s features, including service discovery and load balancing, storage orchestration, automated rollouts and rollbacks, automatic bin packing, self-healing, and secret and configuration management. 
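That declarative, self-healing behavior can be seen in a minimal Deployment manifest (the app and image names here are hypothetical):

```yaml
# Hypothetical Deployment: Kubernetes keeps three replicas of this
# container running and replaces any that go down (self-healing).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3          # the desired state the system maintains
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: example/web-app:1.0   # hypothetical image
          ports:
            - containerPort: 8080
```

If a container crashes, Kubernetes notices that the actual state no longer matches the declared state (three replicas) and starts a replacement automatically, which is exactly the behavior the Kubernetes site describes.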

Low-Code Development

Low-code tools offer the ability to build applications without having to write every line of code. Rather, low-code processes provide drag-and-drop elements to build applications, cutting time from the development process. Reducing time to market puts companies in a position to gain an edge in a highly competitive market and takes pressure off overworked engineers.

Additionally, professionals with less training, known as “citizen developers,” can begin to build applications they need on their own, using a visual interface and drag-and-drop tools. For example, HR managers might need an application that combines information from two applications and analyzes it. Using low-code tools, they could create that application themselves, leaving engineers to focus on more complex projects.


Low-code methodology used by citizen developers won’t replace the role of professional developers. However, it will change DevOps in several significant ways. It will reduce the workload on development and IT teams that are already stretched too thin and make their role more high-level within the organization. Meanwhile, with other professionals taking the lead on some application development, there is the potential for greater creativity and innovation. As a result of shifting roles, new opportunities will be created for evolving positions. 

The use of low-code development supersedes no-code development, which was based on the same principle but was inflexible, so builders couldn’t customize apps as much as they wanted. As this method continues to mature, it will impact every aspect of the development process, including ideation, analysis, coding, testing, deployment, and documentation.

Microservices Architecture

Microservices architecture is an alternative to monolithic application development. Monolithic applications are highly labor intensive to build and manage. The level of complexity, which is often high from the beginning, only grows as the application does. Adding features merely compounds it, and problems with one process can impact the entire service. 

Microservices architecture is a form of service-oriented architecture that involves separating large applications into smaller services. The result is that developers can work with each individual unit or application programming interface (API) to change its properties, depending on business needs. Engineers can identify problems with individual components and implement small feature changes without modifying the entire code base. This method has become a de facto standard for building large applications.
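The separation of concerns can be sketched in a few lines. The service names and routes below are illustrative: each service owns one capability behind its own API, so a change to pricing never touches the user service's code.

```python
def users_service(user_id):
    """Owned by the users team; deployable on its own."""
    return {"id": user_id, "name": f"user-{user_id}"}

def pricing_service(item):
    """Owned by the pricing team; can change independently."""
    prices = {"widget": 9.99, "gadget": 19.99}
    return {"item": item, "price": prices.get(item)}

# A thin API gateway routes each request to the service that owns it.
ROUTES = {
    "/users": users_service,
    "/price": pricing_service,
}

def handle(path, query):
    handler = ROUTES.get(path)
    if handler is None:
        return {"error": 404}
    return handler(query)

print(handle("/price", "widget"))
```

In a real deployment each function would be its own independently deployed process behind the gateway, but the boundary is the same: teams iterate on their service without coordinating a release of the whole application.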

Benefits include greater agility and manageability throughout the software development life cycle. Separate teams can work on ideas for smaller software components, shortening the length of the software development life cycle and enabling companies to test and deliver applications and updates more quickly, supporting greater competitiveness. 


Observability

To ensure accuracy in development, engineers must have reliable systems. Observability refers to having ways to ensure that reliability, including monitoring, identifying, and addressing incidents to reduce system downtime. Within the DevOps life cycle, continuous monitoring is where observability comes into play. Companies that adopt this process can save considerable costs based on reduced system downtime.

The concept of observability differs from monitoring. Monitoring is the process of gathering information about an application’s behavior and performance, using various metrics. Observability takes engineers deeper into what is happening with the application, using data that is converted into actionable insights. This process enables DevOps teams to see exactly what is happening so they can address problems immediately.

For observability to be most useful, it must use metrics that can measure the health of the system initially and as it changes over time. It must also record events as they take place, including a time stamp, so operators can understand when specific events occur and how they relate to other events. The notion of data observability refers to logging, collecting, and analyzing data related to system performance. Development teams use agile development as well as continuous integration and continuous deployment (CI/CD) to implement observability.
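The event-plus-metric idea can be sketched as structured logging. The event names and fields below are illustrative: each event carries a timestamp and context so later analysis can correlate events, and a health metric is derived from the raw stream rather than logged separately.

```python
import json
import time

EVENTS = []

def emit(event, **fields):
    """Record a structured, timestamped event and print it as JSON."""
    record = {"ts": time.time(), "event": event, **fields}
    EVENTS.append(record)
    print(json.dumps(record))

def error_rate(events):
    """A simple health metric derived from the raw event stream."""
    total = sum(1 for e in events if e["event"] == "request")
    errors = sum(1 for e in events
                 if e["event"] == "request" and e["status"] >= 500)
    return errors / total if total else 0.0

emit("request", path="/checkout", status=200, ms=42)
emit("request", path="/checkout", status=500, ms=913)
emit("cache_miss", key="prices")
emit("request", path="/cart", status=200, ms=17)

print(f"error rate: {error_rate(EVENTS):.0%}")
```

Because every event keeps its timestamp and context, an operator can go beyond the aggregate metric and drill into exactly which request failed and what happened around it; that drill-down is the difference between monitoring and observability.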

Serverless Computing

The need to maintain, provision, and periodically replace hardware can become cumbersome and expensive for organizations. An alternative is serverless computing, which moves server functionality to a cloud-native environment operated by an external vendor that manages both the cloud infrastructure and the scaling of the apps.

According to open-source software company Red Hat, “With serverless architecture … apps are launched only as needed. When an event triggers app code to run, the public cloud provider dynamically allocates resources for that code. … Serverless frees developers from routine and menial tasks associated with app scaling and server provisioning.” 

It’s important to note that, in serverless computing, servers still exist. But they are abstracted from the development process, enabling engineers to shift their focus from pipeline design to product development. Other benefits of serverless computing, for both developers and the companies they work for, include the ability to move expenses from CapEx to OpEx and gain better control over the operating budget. 
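In practice, the developer writes only an event handler and the provider runs it when a trigger fires. The sketch below uses an AWS Lambda-style signature as an assumption; the event shape and names are illustrative.

```python
import json

# Sketch of an event-driven serverless function. The (event, context)
# signature mirrors AWS Lambda's convention; the event fields here
# are hypothetical.
def handler(event, context=None):
    """Runs once per event and holds no server state of its own."""
    name = event.get("name", "world")
    body = {"message": f"Hello, {name}!"}
    return {"statusCode": 200, "body": json.dumps(body)}

# Locally we can invoke the handler directly with a test event;
# in production the cloud provider calls it when a trigger fires.
response = handler({"name": "DevOps"})
print(response)
```

Notice what is absent: no server setup, no port binding, no scaling logic. The provider allocates resources per invocation, which is what lets engineers shift their focus from pipeline design to product development.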

A serverless architecture also offers agility, reliability, and cost efficiency, as the entire software development life cycle, from development through testing to deployment, can be streamlined using this method. Additionally, processes can be accelerated, resulting in faster time to market and more time for engineers to work on critical tasks.

The Right Fit

For companies that use a DevOps development model, knowing about current technologies is important, but what is more important is proper planning. For example, the notion of citizen developers may sound like a great idea on its face. But it must be overseen to ensure that applications being developed will work within the greater ecosystem of a company. And, while citizen developers don’t need the same level of education as professional engineers, they do need some training to get started. 

Such an initiative should first involve identifying the potential based on team member needs and abilities. Some workers may already be doing citizen development, and it makes sense to find out what the extent is within the organization before moving forward. It’s also helpful to know what the demand is for application development and how much citizen development would actually be taking place if such a program were put into operation. 

All of this information can help teams develop goals for such a program, which will be the foundation for implementation. The next step is to identify and research platforms that could be useful in training employees on low-code development. Companies that want to implement new technologies must also develop a governance plan that sets rules and ensures they are followed, to make the best use of company resources. Finally, the organization would need to start training participants and enabling them to begin their first projects.

As is evident through just this one example, the introduction of a new process or technology into an organization is not simple, straightforward, or quick. Companies should carefully consider the benefits and drawbacks of each and resist the temptation to adopt something just because it is widely used by others or rumored to be the next big thing. 

