Cloud, cutting-edge web technologies, and the open source movement are helping us build, deploy, and scale innovative, intuitive applications at a rate never seen before. However, the networking world faces unique constraints: part of the application remains on devices, servers, gateways, and embedded systems (on-premises), while the core is still driven from the cloud. In this blog, I will discuss how to package the on-premises software application. In the blogs to follow, we will touch on cloud applications and how to automate their packaging.

On-premises software applications are platform-specific. While building native software applications, multiple builds/labels come out of the development team and are available in the configuration management system, say Git. At some point, we want to make a release out of this system.

What is it that we release?

We release a software package.

Ok, what is a software package?

A software package is a distributable archive that is installed on servers, computers, and personal devices. These software packages come in different formats. For example:
• deb: This package format is used by Debian, Ubuntu, Linux Mint, and several other derivatives. It was the first package type to be created.
• rpm: This package format was originally called Red Hat Package Manager. It is used by Red Hat, Fedora, SUSE, and several other smaller distributions.
• tar.xz: While it is just a compressed tarball, this is the format that Arch Linux uses.
All software packages consist of:
• Code
• Metadata (name, version, dependencies, etc.)
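As a concrete sketch, here is the layout of a minimal deb package, showing both halves: the code and the metadata. The package name, maintainer, and file contents below are purely illustrative:

```shell
# Illustrative deb package layout: code plus metadata.
mkdir -p hello-tool/DEBIAN hello-tool/usr/local/bin

# The code: a trivial executable.
printf '#!/bin/sh\necho "hello"\n' > hello-tool/usr/local/bin/hello
chmod +x hello-tool/usr/local/bin/hello

# The metadata: name, version, dependencies, etc.
cat > hello-tool/DEBIAN/control <<'EOF'
Package: hello-tool
Version: 1.0.0
Architecture: all
Maintainer: you@example.com
Description: A trivial example package
EOF

# On a Debian-based system, dpkg-deb turns this layout into a .deb:
#   dpkg-deb --build hello-tool
```

The same split holds for other formats: an rpm spec file or an npm package.json plays the role of the DEBIAN/control file above.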

Ok, Now how do we build/create a package?

The package building process varies per package type. Package types range from operating-system-level packages to packages running front-end code in browsers. Each package type has either a tool or a specific process to build the package from source code into an archive for distribution. For example:
• tar and zip commands for creating simple archives.
• npm comes with the npm pack command, which archives the codebase into a tarball for use on an npm registry.
• On Windows, you can use Visual Studio to build applications or create MSI installer packages.
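The simplest of these is the plain tarball. A minimal sketch, with an illustrative project name and a single source file standing in for a real codebase:

```shell
# Illustrative source tree for the package.
mkdir -p myapp-1.0.0
echo 'print("hello")' > myapp-1.0.0/main.py

# Archive and compress it into a distributable tarball.
tar -czf myapp-1.0.0.tar.gz myapp-1.0.0

# List the contents to verify what will be distributed.
tar -tzf myapp-1.0.0.tar.gz
```

A zip archive works the same way (`zip -r myapp-1.0.0.zip myapp-1.0.0`); which you choose usually depends on what the target platform expects.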

Ok, I created a package. What next? How do we make it available in a secure, reliable way?

What is the distribution mechanism? For this, we use package repositories. Package repositories are warehouses of packages along with their metadata, and they can be public or private. For example: the npm registry for npm packages, or APT repositories for deb packages.
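As an illustration of how a machine points at a repository: on Debian-based systems, the repositories APT pulls from are listed in a sources file. The entry below uses Debian's standard mirror and the bookworm release:

```text
# /etc/apt/sources.list
deb http://deb.debian.org/debian bookworm main
```

Private repositories work the same way; you add your own repository URL (plus its signing key) and the package manager treats it like any public one.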

Ok, I have my releases on a public repository. How can a user get these?

A user can access package repositories using package managers. Package managers are tools to interact with the package repositories containing software. They are used by developers to search for, install, and manage packages within the repositories. For example:
• apt on Debian
• npm for Node.js
• pip for Python
• yum on Red Hat

However, packages have a major problem. It is highly likely that the system where you deploy the software will have library versions different from those your software requires. So what do we do?

We use containers instead. Now, what are containers? A container is a standard unit of software that packages up code and all its dependencies in a sandbox. Since it is sandboxed, it runs in its own user space and can coexist with the libraries installed on the host system.

Ok, how do I create containers and release them?

For this, we will use the Docker platform. Docker has become the de facto standard for containerisation. Using docker, you can create container images and release them on Docker Hub, the public docker registry, or host them in a private docker registry. Using the docker engine, you can install and run any of these images on your system.
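As a sketch, a minimal Dockerfile packages an application together with its dependencies into an image. The base image and file names below are illustrative:

```dockerfile
# Pin a known base image so system libraries are fixed.
FROM python:3.12-slim

# Copy the application code into the image.
WORKDIR /app
COPY main.py .

# Command run when the container starts.
CMD ["python", "main.py"]
```

From the directory containing this file, `docker build -t myorg/myapp:1.0.0 .` creates the image, and `docker push myorg/myapp:1.0.0` releases it to a registry such as Docker Hub (the image name here is hypothetical).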

At Alethea, we use docker container images as the primary way to package our software.
Our software runs on the end-user system as a docker container (a container is a running instance of a docker image). Now, deployed software can have bugs. How do I get logs to debug and fix the bugs? As docker is sandboxed, you cannot access the log files directly. You need to follow one of the approaches below:

• Mount the log folder from the docker container to the host system.
• Store logs in a database.
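The first approach is a bind mount. A hedged sketch in docker-compose form, with an illustrative service name and paths:

```yaml
# docker-compose.yml (service name, image, and paths are illustrative)
services:
  myapp:
    image: myorg/myapp:1.0.0
    volumes:
      # Bind-mount the container's log directory onto the host so the
      # logs are readable outside the sandbox and survive restarts.
      - /var/log/myapp:/app/logs
```

The equivalent `docker run` flag is `-v /var/log/myapp:/app/logs`.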
After we deploy the initial docker container image, we will need to upgrade it over time.

Ok, So how do we upgrade a docker image?

We provide the upgrade as a zip containing the new docker container image and an upgrade script. The upgrade script stops the old container, removes the old docker image, then loads and starts the new image.
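A hedged sketch of what such an upgrade script can look like; the container name, image tags, and archive filename below are all illustrative, not our actual ones:

```shell
# Write an illustrative upgrade script of the kind shipped in the zip.
cat > upgrade.sh <<'EOF'
#!/bin/sh
set -e
# Stop and remove the running container, then drop the old image.
docker stop myapp || true
docker rm myapp || true
docker rmi myorg/myapp:old || true
# Load the new image from the shipped archive and start it.
docker load -i myapp-new.tar
docker run -d --name myapp myorg/myapp:new
EOF
chmod +x upgrade.sh
```

The `|| true` guards let the script run cleanly even on a fresh system where no old container exists yet.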

Oops! So many things to do. Can someone please do it for me? Absolutely…

Using a Jenkins pipeline, you can automate packaging and deployment. A typical pipeline will look like this:
pipeline {
  agent any
  stages {
    stage('Create docker image') {
      // image name is illustrative
      steps { sh 'docker build -t myorg/app:latest .' }
    }
    stage('Deploy') {
      steps { sh 'docker push myorg/app:latest' }
    }
  }
}

In summary, using docker container images for packaging has made our deliveries faster, simpler, more secure, and easier to maintain and upgrade.