When To Use Docker For Personal Development?

TL;DR: Docker is a very helpful tool for creating a development environment: if the setup runs on your machine, it runs anywhere. It works on a colleague's machine just as it does on staging and production. When a new team member logs in, three commands are enough to have the application(s) up and running (see the sketch below).
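
For concreteness, a minimal sketch of what that onboarding could look like, assuming the project ships a docker-compose.yml and lives in a hypothetical repository:

```
# Hypothetical onboarding for a new team member: clone, then build and start everything.
git clone https://example.com/acme/shop-api.git   # repository URL is illustrative
cd shop-api
docker compose up --build                         # builds the images and starts every service defined in docker-compose.yml
```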

Should I use Docker during development?

You might be looking for a way to speed up your application. Docker can dramatically speed up your development process, but that does not guarantee it will speed up the program itself. Even if it makes your application more scalable, meaning more people can use it, the performance of a single instance will often be slightly slower than it would be without Docker.

Can I use Docker for personal use?

Docker Personal is a fully functional subscription tier offered at no cost to individuals, students, educators, non-profit organizations, and small businesses*, which makes Docker accessible to these groups. Developers working on their own projects are the ideal candidates for Docker Personal.

Is Docker still relevant in 2022?

What are the repercussions of Kubernetes deprecating Docker? The situation is not as dire as it may sound. Let's preface this entire piece by stating that the only thing that changed in version 1.20 is that, if you are using Docker as your container runtime, you will now receive a deprecation notice.

  1. That is the only change, and that is it.
  2. Can I still use Docker for development? Yes, right now and for the foreseeable future.
  3. Keep in mind that Docker does not run Docker-specific images; it runs OCI-compliant containers.
  4. Kubernetes will continue to accept images in this format as long as Docker continues to produce them.

Can I still package my production apps with Docker? Absolutely, and for the same reasons given in response to the first question. Applications that have been packaged using Docker will not be affected in any way.

  1. You can still build and test containers with the tools you are used to and like using (see the sketch below).
  2. Images produced by Docker will continue to work in your cluster exactly as they always have, so you do not need to change your CI/CD pipelines or move to a different image registry.
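
As a rough illustration of that point (the image name, registry, and application files here are placeholders, not anything prescribed by the article), the familiar build-and-push flow keeps producing images that any CRI-compliant runtime can run:

```
# Hypothetical Dockerfile for a small Python app; adjust to your own stack.
cat > Dockerfile <<'EOF'
FROM python:3.10-slim
WORKDIR /app
COPY . .
RUN pip install --no-cache-dir -r requirements.txt
CMD ["python", "app.py"]
EOF

docker build -t registry.example.com/team/web:1.0 .   # produces an OCI-compliant image
docker push registry.example.com/team/web:1.0         # containerd or CRI-O can pull and run it unchanged
```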

What am I going to need to change? Nothing right now. After upgrading to version 1.20 you will see a deprecation notice if your cluster uses Docker as its runtime. The change does, however, send a loud and unambiguous message from the Kubernetes community about the direction they intend to take.

It is now time to begin preparing for the future. When will the change happen? According to the plan, the removal of all Docker dependencies is scheduled to be finished by version 1.23, in late 2021. What happens once dockershim is gone? At that point, Kubernetes cluster administrators will be required to switch to a CRI-compliant container runtime.

If you are an end user, you will not notice much of a difference. Unless you are running node customizations, you probably will not need any special preparation; you only need to test that your apps are compatible with the new container runtime.
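
One quick, low-risk way to see what your cluster is currently running (standard kubectl, no assumptions beyond cluster access):

```
# The CONTAINER-RUNTIME column shows docker://, containerd:// or cri-o:// for each node.
kubectl get nodes -o wide
```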

After upgrading to version 1.23, the following will no longer work properly or will cause other difficulties:

  1. Using the logging and monitoring features that are unique to Docker, that is, extracting Docker messages from a log or polling the Docker Application Programming Interface (API).
  2. Using Docker-specific optimizations.

Other things that will break: executing programs that rely on the Docker Command Line Interface; running Docker commands inside privileged pods, for example building images with the docker build command (see projects such as kaniko for alternatives); and Docker-in-Docker configurations.
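
A rough way to spot some of these dependencies is to search your workloads for mounts of the Docker socket; this is only a heuristic sketch, not an exhaustive audit:

```
# Pods that bind-mount /var/run/docker.sock depend on the Docker daemon and will need attention.
kubectl get pods --all-namespaces -o yaml | grep -n "docker.sock"
```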

  • Running Windows containers.
  • Although containerd works on Windows, its support is not yet on a par with Docker's. The goal is to have a stable containerd release for Windows by containerd version 1.20.
  • If you use a managed cluster on a cloud provider such as AWS EKS, Google GKE, or Azure AKS, make sure it uses a supported runtime before Docker support is discontinued.

If you are using one of these cloud providers, confirm that your cluster uses a supported runtime. Some providers are a few releases behind, which means you may have more time to plan, so verify with your service provider.

  • To give you an example, Google Cloud recently stated that it will switch the default runtime for all newly created worker nodes from Docker to containerd; however, you will still have the option to use Docker if you want.
  • If you run your own cluster, you will need to consider moving to a container runtime that is fully CRI-compliant, in addition to reviewing the issues described above.

If you run your own cluster, you will also need to check the points indicated above. The Kubernetes documentation provides step-by-step guides for the migration: changing over to containerd, or making the switch to CRI-O. If you wish to continue using Docker after version 1.23, another option is to follow the cri-dockerd project.
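
After switching a node over, you can sanity-check what the kubelet is actually talking to. This assumes crictl is installed on the node; where the CRI endpoint is configured depends on your setup:

```
# Run on the node itself: crictl reports the CRI runtime name and version (e.g. containerd).
sudo crictl version
# The kubelet's CRI endpoint shows up in its flags or config file.
ps aux | grep kubelet
```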

When should you not use containers?

Cons –

  • Because functionality is (ideally) split across many containers, the containers need to communicate with one another for anything to get done, which can require complex network configuration (see the sketch after this list). Containers do not form a single unit, so they must interact over the network. Some orchestration systems, such as Kubernetes, offer higher-level units like multi-container pods that make this a little easier, but it is still more involved than using VMs. That said, Adam Heczko offers the following clarification: "Actually, the L3 networking paradigm in Kubernetes is simpler than the L2 approach in OpenStack." So the amount of network work will differ depending on whether you are looking at communication between functions or between virtual machines (VMs).
  • They are still not regarded as being quite as secure as VMs, for a variety of reasons, though your experience may differ. As noted earlier, containers are still a relatively young technology and are not yet considered as secure as VMs, so when a very high level of security is essential, containers are usually not a good choice.
  • They can demand additional effort up front: to use containers correctly, you will have decomposed your application into its component services, which is advantageous but not required when working with virtual machines (VMs).
  • They are not always dependable: containers are typically designed for cloud-native computing, which assumes that any component can stop working at any time, so you need to make sure your application is architected for that possibility, even if that sounds discouraging.
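
To make the networking point in the first bullet concrete, here is a minimal sketch with plain Docker; the image name my-web-image and the DB_HOST variable are illustrative, not taken from the article:

```
# Containers can only reach each other by name once they share a user-defined network.
docker network create appnet
docker run -d --name db  --network appnet -e POSTGRES_PASSWORD=example postgres:15
docker run -d --name web --network appnet -e DB_HOST=db my-web-image   # the app resolves "db" through the network's DNS
```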

Should I use Docker for solo projects?

Ask HN: As a solo founder, what are the reasons not to use Docker?
74 points by juanse on July 18, 2021 | 178 comments
Small project, happily using generic VM. Everybody tell me I should use Docker. Where is the catch?


If you don’t have a reason to use it, and if it isn’t solving an existing problem for you, then don’t use it. This idea you should use docker or other container solutions when there is no compelling and logical problem they are solving is just a time suck. As a solo founder your time is super critical to manage closely, optimizing things that are not high paybacks is a bad idea. Docker can be great, just IMO and especially for a solo founder you need a compelling reason(s) to add any complexity and/or extra time to your system. Especially if your solution is a simple project that you deploy on a single VM (or maybe multiple for reliability) there is no reason to use docker. I’ve been a solo founder more then once, guard your time carefully. With some exceptions, only build things that have an immediate payback, use only enough tools to get it done and don’t add any complexity or dependencies that can cost you extra time to debug, understand or learn. As you add people you can do more and be a little more aggressive on things that don’t have immediate need/payback.


Amazing comment. Thank you. In this age of hype and complexity I really miss common sense. I agree 100% with your approach. Especially when we have access to a VM with 48 dedicated cores and almost 200GB of RAM for less than €500/month. If you go beyond that, you are really onto something.


This is crazy value for the money. Hard to imagine the limit where you need to leave this behind. My only concern is that with physical servers you don’t have snapshots that save your ass quickly in case of some process gone wrong.


I never understood the VM backup as a strategy. Think about backup from a database or application layer/perspective


The main benefit I see is quickly rollback to previous state. Really handy when something goes wrong for example updating the system.


Sure, but having a stateful VM is where the self-inflicted pain started imo. Though reading all parent posts, my point is not well placed.


There are lots of snapshot and backup solutions for physical servers. You could run your own vm solution like proxmox as well on the bare metal, takes 10 minutes to install locally.


Also, it allows you to focus on the problems you are having right now (building a product that runs on a server) instead of theoretical problems you might never reach later (scalability and portability). VMs are quite scalable, especially if you can scale vertically, and they massively reduce complexity.


Let’s say you have a run of the mill nodejs express app that connects to Postgres (let’s assume it’s managed in this scenario). How do you make use of all the 48 cores and the RAM available to you? (I’m genuinely interested so please ignore any snarky tone if any)


There’s two parts to this. The first is the broad question of “how will any app (regardless of the technology) saturate that much hardware”, and to that I simply say it won’t; it will saturate -something- first (with enough load), be it RAM, CPU, network I/O, disk I/O, etc, which will limit utilization of the rest. If this was your own datacenter you might try and “right size” to that, to try and have as little unused hardware capacity as possible, but in the cloud you don’t (can’t) do that. The second is maybe more what you’re trying to ask, which is given that node is single threaded, you would only be using one core, and that would implicitly also limit how much of the other resources you can use. To that, you’d use something like PM2, a node native utility which essentially spins up multiple instances of your app locally, and load balances across them, or the cluster lib, a less feature rich solution to the same problem, but supplied as part of the standard library.
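
A minimal sketch of the PM2 approach described in this comment; app.js stands in for whatever your entry point is:

```
# Cluster mode: one worker per available core, load-balanced by PM2.
npm install -g pm2
pm2 start app.js -i max
```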


Most likely answer: the application is small and there’s no need, and no useful way, to extract value many cores or a lot of memory. So don’t waste effort on something that isn’t needed. Boring answer: use a pool of threads (or subprocesses, like Apache) to deal with web requests. Without stupid locking, many concurrent requests can be handled for each core. If you don’t know your QoS targets, how many threads you need to meet them, and how much memory each thread requires it’s a serious red flag. Pragmatic answer: do you want to invest in fault tolerance and being able to upgrade your application? Then you need a load balancer to route traffic to multiple instances and switch them on and off properly, and a good load balancer is going to be outside your VM and probably a specific piece of hardware, irrespective of clever techniques in your application.


If you know you’re going to want this type of parallelism and simple deployments, use languages that have better threading stories. That said, one place I worked had a legacy Python service on 16+ core machines. We happened to deploy with dpkg, apt, and docker (I wouldn’t necessarily recommend this), and the start script spun up cpu count – 1 instances with CPU affinity set. There was something like nginx for routing out in front.


I will start by saying that I’m super average and still learning, but it is my understanding that you use RAM to keep things in memory (faster access), and cores for parallelization (different instances serving requests). In my case, I use Ruby on Rails, so I configure Puma based on the number of cores that I have to set the workers (threads). Not yet, but I will use Redis for caching. It is also an “in-memory” database.
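
A hedged one-liner for the Puma sizing mentioned above; the thread counts and port are illustrative:

```
# One Puma worker per core, 5 threads each; tune the numbers for your app.
bundle exec puma -w "$(nproc)" -t 5:5 -p 3000
```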


Using more resources than average also comes into play when using less performant (but rapid to develop with) technologies. Best practices are a funny thing; understanding where they come from and who they were intended for can go a long way in having better context.


> How do you make use of all the 48 cores and the RAM available to you? The point I take from gp’s comment is, there’s a lot of room to scale within a single instance when the nano-sized VM you started with is overwhelmed. Focus on the things that cause growth first, then add complexities of distributed computing when it is actually needed.


I don’t know about nodejs, but serving a twitter-like feed with an in-memory database of everyone’s tweets is one use-case. It’s a database query workload and would benefit from caching raw data or views of the twitter feed and has a latency requirement of 100 milliseconds or so.


Depending on your node.js version, nodejs has a cluster mode. Before that, tools like pm2 had cluster / load-balancing modes. But yeah, you can easily run as many instances of the node.js app as you like on your 60 virtual cores. Let node use half of those, then the rest for db, monitoring. Then have a load balancer, i.e. nginx. Edit: also node.js now has Workers, e.g. for cases where you need to do CPU-bound work. Remember node.js is really fast; most apps out there won’t be able to use it to the max.


I wouldn’t use docker even as an established company at this point. I’ve been using it for several years and it adds complexity rather than removing it in the long run. And yes I include Kubernetes in that as well. They work very well for specific narrow sets of requirements but are unnecessary and require headspace and headcount to look after. It’s better to religiously build simplicity, repeatability and fast automation into everything you do without it. That has a much better ROI on a product build because you have to do all those things anyway to make docker and kubernetes effective. The key thing to do is to build on technology that is portable, compact and repeatable which means at the moment something entirely self-contained (Go, .NET Core, Java, Python with some caveats come to mind)


I find this kinda insane. The days before docker I would spend a lot of time trying to reproduce environments. Docker isn’t hard to use and doesn’t add much complexity, but the payoff is huge in terms of having portable environments.


I spend significantly more time dealing with messes of Docker containers that were thrown together ad-hoc with a slew of bad practices than I ever did having to hunt down and resolve dependencies. Docker really makes it easy to create messes by bad or less experienced developers yet ultimately have something functional, for awhile. Setting up environments used to act as a filter to clear out these sort of messy systems. If you put together a system of distributed services, you were probably competent enough to make something that wasn’t horrible and considered maintainability. Now, with a few days Googling, anyone can spin up a distributed system of services with every bad practice known to man as to how those services are initialized, modified (yes, I rarely see stateless containers), and interconnected. With a sizable team of developers that aren’t under time pressures incentivized to build tech debt and pass it along, Docker (or rather container systems, not necessarily Docker) have a lot of useful purposes. I rarely see it used this way in the wild, I instead see it as a way to build tech debt. I’ll take the need to resolve a couple dependencies over that any-day of the week.


> Docker really makes it easy to create messes by bad or less experienced developers yet ultimately have something functional

I would like to see a ‘foot-gun index’ where we rank technologies and determine how likely someone is to fuck things up if not careful.


I haven’t seen a tech debt problem with containers, we mostly use infra as code and have PR reviews so that doesn’t happen. Junior engineers can make a mess with anything, it’s a senior engineers job to prevent that


Well it’s often my job to come in and fix these messes after both of those fail or there’s no one senior and competent to handhold and prevent these messes from forming until they’ve become unmanageable yet mission critical.


But Docker is not the only way to provide reproducible environments. I find Terraform + Ansible enough for such purposes.


Just out of curiosity: What does Terraform provide when you already have Ansible running?


Terraform focuses on creating/managing the machines you’ll be using, less so provisioning them (though you can somewhat). Ansible focuses on provisioning systems, with not much (any? not sure) capability for actually creating/managing the machines it’ll provision. I’ve also seen it recommended to use Ansible + Packer to provision an image, then use Terraform to create machines from the image. Saves you some troubles with connecting to your cloud machines.
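
A sketch of that Packer-then-Terraform flow, with placeholder file names (nothing here comes from the thread itself):

```
# Bake a machine image with Packer (which can run an Ansible playbook as a provisioner)...
packer build web-image.pkr.hcl
# ...then let Terraform create instances from the resulting image.
terraform apply
```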


Ansible has pretty much the same capabilities for managing machines as Terraform has. The fact that it reads the actual state instead of keeping it locally makes for one less headache.


Try producing two kubernetes clusters with exactly the same configuration with full management and monitoring stack and you will discover how difficult it is to scale this complexity up. Docker works for most trivial cases but for complex systems it becomes a shit show very quickly which is hard to reason about when you need to, which is whenever it is inevitably broken.


A lot of Docker criticism I’ve seen on HN and elsewhere equivocates between Docker and Kubernetes. I’ve never used Kubernetes, but I use Docker very frequently and find it a massive improvement over the previous state of affairs. Kubernetes does seem complex and difficult to learn, but Docker doesn’t.


Yes but a legacy app with no need for kubernetes scale gains huge benefits from docker. When you have to compile your libs from source because they are so old no packages exist anymore, then docker greatly aids both development and deployment.


Some languages definitely benefit from portable environments, others don’t really need it. My anecdata is that horsing around with Docker was still needed and when things went wrong, they went REALLY wrong. As with all things, figure out what you need, pick the right tool for the job, and all that good stuff.


I find it insane that you think configuring environments is the right way to program. I write my web programs in Go and there’s nothing to configure once a build is complete. I just get a binary and I can ship it / deploy it anywhere. I never found any use for Docker.


> The key thing to do is to build on technology that is portable, compact and repeatable

You mean like docker? I agree with the k8s part though. Committing your environment to git and reproducing it anywhere is great. Orchestrating thousands of that is horrible and adds too many extra layers.


Likewise, I’d also prefer not jumping on the hype and have tried to avoid K8s so far, but something I’d take docker for is deployment speed. Baking images and deploying a new ASG makes production deployments take around ~15 min. A Docker-based solution could probably cut it down to 5 mins. Also more inclined to use a simpler/hands-off service like ECS Fargate than straight kubernetes. Also seems like over half the jobs currently being posted require k8s skills, so it might be difficult to avoid it for much longer.


>Baking images and deploying new ASG makes production deployments take around ~ 15 min. >Docker based solution could probably cut it down to 5 mins. this reminds me of when I was working on the Danish government electronic invoicing project. The political driver of the project had made some statement along the line that a minute was going to be saved on every invoice sent (which I mean there would be millions and millions of minutes saved every month), and someone else made the observation in my hearing that he doubted that time saved would be monetizable. Which I took to mean that when small amounts of time are saved those are likely just swallowed by meaningless other tasks and not used effectively. I doubt the 10 minutes saved per deploy by using Docker would be monetizable.


probably )) For our biggest app, Drupal, deployments are kind of hands-on. Dev(s) are waiting for the deployment, doing some post-deployment steps, checking logs for any issues, making announcements, etc., and if we had to roll back, the easy solution is to find the previous deployment and run it again; speed would probably be beneficial here.


When they talked about time saved, were they referring to: A) effort (aka ‘man-hours’), or B) duration (aka ‘wall clock time’) ?


I am doing blue/green deployments with ansible in 30 seconds from package deploy across 20 nodes. YMMV but docker doesn’t inherently make anything faster itself.


Each VM can take a while to deploy:

  • launch VM: 1-2 min
  • config/deploy (ansible): 3-5 min
  • create image: 2-3 min
  • launch new ASG: 2-5 min

Containers greatly reduce most of these steps.


that’s one of the reasons to pay for cloud providers. You can adjust your paid resources to actual usage. This is one way to do it. (search Immutable Infrastructure) For this application, traffic is quite stable so it doesn’t come up often. But once or twice, we had some bug causing performance regressions. They were not visible on the first day, but as usage grew over time the cluster started growing with the problem. So it allowed the application to stay online (at an additional cost) until devs pick up the problem and have a solution ready, probably coming up in the following sprint only.


Out of curiosity, what kind of artifacts are you deploying and how are you building them?


Build in Jenkins. Self-contained ReadyToRun .NET Core apps. A couple of smaller Go apps. I rather like Go. The target package contains one file! Config is inherited from the environment and cloud provider as the targets are Amazon Linux 2 based.


Compiled executables are pretty simple to deploy without containers. The benefit of containers is that it can help standardize that type of benefit to other runtimes like Python/Ruby where it’s much more challenging to just ship an artifact. In addition to that, with cgroups and namespaces you get a lot of other benefits, which can be used with eg: systemd/etc but it’s nice when one tool handles all of it.


Yes exactly. It’s a solution to a problem. My point is it is better to avoid the problem than employ a solution.


It’s not easy when the only way to avoid the problem you describe is to rewrite your whole project(s) in a language like Go. I like Go and have used standalone binaries instead of Docker containers when deploying Go applications, but I don’t always work with Go applications. As other posters said, Docker largely exists to help with any situation where you don’t have a single binary. These situations are still very common for many developers and devops engineers. And in those cases, Docker can be very helpful.


Fargate provisioning is really slow. Minute or two at best, in my experience. I went back to ASG and Codebuild. Packer for AMI rebuilds.


I would remove Python from this list. Getting Python services that I don’t directly work on running on my dev machine is the only case where Docker has been a lifesaver for me. Not having to deal with Python’s insane amount of environment dependency (even with virtualenvs and all that business) is worth the bulk of running a Linux VM on my Mac. If I didn’t know better, I’d say Python was the entire reason technologies like Docker were invented in the first place.


> It’s better to religiously build simplicity, repeatability and fast automation into everything you do

Yea. Totally agree. This should be common sense. Unfortunately it’s not.

> The key thing to do is to build on technology that is portable, compact and repeatable which means at the moment something entirely self-contained (Go, .NET Core, Java, Python with some caveats come to mind)

Is there really anything other than Go that can achieve this? .NET/Java/Python all require the environment to be set up a certain way. And you have the problem of managing versions. Does python even let you produce a self-contained application?


How do you deploy a golang app? How do you handle rollbacks? Maybe not applicable but how do you scale up? The advantage of something like docker is it unlocks tools like k8s, ecs, azure container instances, DO app platform etc.

deploy = basically scp and do a blue/green switch-over. rollbacks = deploy the last version. scale up = add some more instances and redeploy the current version (not an issue as our load is fairly consistent). All of those tools, which I have used extensively, increase complexity tenfold. Just considering kubernetes resource allocation and CNI is an art which requires full-time staff to manage. Fargate is the only reasonable abstraction I have seen so far on this front.


I think the whole point of this exercise is to find out where to draw the line in terms of abstracting out deployment steps and increasing deployment speed. The vast majority will agree that it is NOT at bash scripts. Once you get to a few microservices, good luck maintaining that. C++ has been amazingly powerful for years, but there’s a reason its popularity isn’t exploding.


Bash scripts are pretty good in the beginning. I have scaled systems and they remain helpful as scale is built. A lot of bash commands are automated in Devops tools too.


Agreed. Ansible also calls plenty of command line commands, like bash scripts. Looking at Linode stackscripts or a DigitalOcean tutorials can be pretty surprising too.


Oh let me clear that up for you. This article and this comment discussion assumes a certain familiarity with computers and system administration, which, to ask a question like that, you must not have. So a bash script is this thing that’s very functional, flexible, legible, can be written quickly and easily, can be maintained and updated quickly and easily, and is even portable across platforms, and it gives you all that without a teetering fragile tower of library and framework dependencies that most other automation systems rely on. Every system already has bash or some equivalent Bourne shell for free. Assembly has none of those qualities. Are there any other mysteries we can clear up? I mean we all had to start somewhere and “there are no stupid questions” right?


> Oh let me clear that up for you. > Are there any other mysteries we can clear up? I mean we all had to start somewhere and “there are no stupid questions” right? I’m surprised at how big a jackass a single person can appear in such a concise response. Every time a new technology shows up, there are lots of “Why not just use ?” The answer is obvious, and can be most highlighted by leaning harder into the absurdity of “Why not just use ?”, which is, why does any technology exist beyond Assembly if all that’s old is good enough? But, given you can’t behave like a grown up, I’d rather not continue this comment tree.


That’s just the thing. Bash isn’t old and therefore lacking in value. It’s not absurd either. A lot of bash commands like sed automate editing config files that systems like ansible and docker rely on. Since you haven’t really shared your experience with bash, assembly (or not), or what you prefer instead, it’s really hard to openly entertain viewpoints that aren’t your own when the other party isn’t demonstrating it As well. I use bash, docker, Ansible or whatever else will do the job. I can orchestrate vms and also configure bare metal. I use public clouds, private clouds, hybrid clouds. These could all be interesting things to mutually learn and talk about but maybe it’s more enjoyable for you to have others put in mental labour for you. The one thing complex hosting experience gives you is not to discount any solution for fear of what others may think. It’s where the real world of scaling is. Deciding on what’s best solely by what’s new, popular or social proof can leave a developer exposed where the rest of groupthink may be – often reinventing the wheels over and over in new languages to learn the fundamentals of why it will or will not work out in the end.


You asked a question. I answered the question. The answer was neither a lie nor in error. If you want to remark that the answer was disingenuous without acknowledging the same about your own, then you are in no position to be talking about jackasses and growing up.


The thing about this answer is it requires learning a basic tech skill instead of trying to avoid it right away with a higher-level tool like docker. I’m a better user of docker because I know a little bit of bash. It’s not one or the other, like so many decisions folks think can be made by social proof. Not sure why you’re being downvoted; I keep in mind that anyone who advises on cloud usage might only have cloud experience to begin with. It’s not lost on me that while many jr devs are spending a ton of time putting their stack together and making it work, others can just build and ship code.


I’m not sure if you’ve used assembly before, but it can be a little more work than high-level tools like docker and kubernetes.


As per usual there is a lot of weird hatred of Docker and K8s on HN. Docker isn’t a complex tool and it doesn’t take 200 hours to learn enough to be proficient. I have to imagine all of these folks tried to learn Docker 5 years ago and got burnt and refuse to try it again. At its core, you create a Dockerfile which builds on top of a base image. You tell it what files to copy over and which commands to run to get the system in the state you want it. Then you build the image which runs through all those steps and gives you this reusable snapshot. You can create containers using this image locally or you can push it up to a registry and let any of your systems run that exact same service. You absolutely don’t have to use Docker, if you’re happy using plain VMs why switch? Kubernetes certainly takes longer to learn so I can understand why there is more frustration around it. Though once you learn it, it feels like you’ve got a super power. Being able to abstract yourself away from the underlying infrastructure is freeing. My co-founder and I originally tried to cut costs by using Terraform + Ansible to provision and maintain all of our infrastructure on basic VMs rather than using Kubernetes which is always more expensive. But we quickly decided it was worth the extra expense to use K8s for all the nice things we get with 0 effort. All that being said, all of us techies get way too in the weeds on all of these things that don’t matter. Don’t listen to what others are telling you that you need to use. If you prefer to install apps directly on your VMs, keep doing that. If you find Docker gives you a better QoL like it does me, use that instead. All the time you spend on infrastructure work is time that you could be working on new features!


If you are not already experienced with it, don’t use $_TOOL_. Your job/focus is ONLY to build a business, not to have cool tech. Think of yourself as an Uber driver: customers don’t care what is under the hood or whether the engine is souped up, they just want something to go A-B. Having said that, I can tell you for a solo founder some of the benefits of using docker:

  • Documenting what environment you have to set up is time consuming / error prone, and you will tend to do less of that as a single dev. That makes it harder to hire and train new people, or to remember for yourself what exact version of something you used 6 months back. A Dockerfile is a great way to do that.
  • HA and orchestration tooling with k8s or managed container platform services is not easy without a defined environment like docker. That means autoscaling, automatic recreation/reboot etc. can be handled at 3 am in the morning without you waking up.
  • Scaling 1-2 is harder than scaling 0-1. Setting up your environment to be treated like cattle and not pets from day 1 goes a long way toward that.


That’s from a birds eye view and is totally irrelevant to the current question. The state deployments are in right now, there’s so many viable, and somewhat easily accessible options. Choosing one over the other is not different in terms of amount spent learning, because each comes with its own set of issues. For example, I built out one service in GCP and another in AWS and they both roughly took about the same amount of time to “figure out”. I just wanted to see the differences. In general for my purposes both work fine, but I’m finding GCP cheaper. If someone had told me that in the beginning I would have just gone with GCP. Asking “Will X increase/help with my user growth/traction” is sort of irrelevant because I need to do one of 2 things and I just need help making the right choice.


> That’s from a birds eye view and is totally irrelevant to the current question.

Really? You don’t think speed of execution is far more important? Getting something to your users, validating assumptions and failing fast is also kinda important? IMHO using the latest tech is not. Now I know docker/docker-compose, but would I use it for my own projects? Meh, probably not. It doesn’t take THAT long to set up a box manually (apt install postgres redis nodejs nginx will get you a LONG way) or with an ansible script (if you know it and want to get fancy-pants on me). Spending time doing AWS/GCP, your users don’t care. Whether you can scale to 100M users like Facebook, your users don’t care either. If you know docker and can deploy faster than me writing all this, then great, use it. 95% of these projects will die within a year. My approach is everything within the first year is disposable. If it takes you a weekend to hack something in php (not the shiniest thing around, but if you know it, great) and push it to prod in a weekend, that’s better than learning rustlang (although subjectively better in some eyes) and it taking you 6 months. IMHO.

> The state deployments are in right now, there’s so many viable, and somewhat easily accessible options.

Here’s one that takes you about 1/2 hour (or less) to learn: `ssh prod; git clone github.com/foo/great-project.git; cd great-project; npm run prod:build` (or the equivalent). Done. Standard no-frills get-shit-done. Crappy, yes. But faster than your competitors.

> but I’m finding GCP cheaper.

I’m a tight wad. I like cheap. That’s why I think AWS/GCP are too expensive for what you get (AFAIK). Spin up a DO/Linode 4G instance for $20pm. Is it cheaper than that? I doubt it.

> I need to do one of 2 things and I just need help making the right choice.

The POINT, which I wish someone had told me at the start of my journey, is that you DON’T NEED (at least not in the beginning) anything but a strong attitude to get stuff done and released as fast as possible. Now you may think learning GCP and AWS is great for your long term career and there is truth to that. Whether it’s valuable to learn something from scratch to do a startup, I would question that.


> Really? You don’t think speed of execution is far more important? Getting something to your users, validating assumptions and failing fast is also kinda important? IMHO using the latest tech is not.

It is, but not if you are using outdated tech that will cost you many more hours just months down the line. Let’s say I’m working with a co-founder who knows delphi really really well. You think we’ll build anything in delphi, or spend a little time to learn some other modern language to ensure our speed stays stable in the future?

> My approach is everything within the first year is disposable.

Interesting approach but I respectfully disagree. I built a side SaaS app that really needs to stay up 24/7. I didn’t want to go back and set up server infrastructure multiple times, especially not with the amount of information available online. Your code of course will keep changing but there’s no reason you need to keep changing your infrastructure if you make the right choices in the beginning. I never once changed my infrastructure in the 4 years I’ve been using AWS (beanstalk).

> Done. Standard no frills get-shit-done. Crappy, yes. But faster than your competitors.

So much crappier. I would never do this for my paying customers. I have been running a somewhat critical piece of infrastructure for about 4 years now on AWS. I primarily use beanstalk, so in a way less work than a straight VPS, but yes it can be a bit more expensive. You know how much downtime I’ve had? Probably 3 hours TOTAL, and that too because I pushed something that had bad build commands. 3 hours over 4 years. I know because I have a monitoring service. I never had to restart a single server, upgrading has always been one click and deploying has always been one command. The peace of mind has been incomparable. Your system would leave me nervous every time I take a vacation. I go on vacations without a worry in the world. My system just took a couple weeks to learn the ins and outs and costs a little more. Mine over yours any day of the week. If I’m just messing around with a personal project, then I use a VPS because literally no one relies on it.


> You think we’ll build anything in delphi Haah of course not! 🙂 but doesn’t mean you should flip to the other extreme and learn how to do k8 and spend a whole bunch of time on it. >Interesting approach but I respectfully disagree. I built a side SaaS app that’s really important to stay up 24/7. From the get-go? Did you spend a whole bunch of time learning it or did someone else pay you? (As an employee). Most solo projects are not like this. > I would never do this for my paying customers. I have been running a somewhat critical piece of infrastructure for about 4 years now on AWS. Of course not, neither do I anymore. For paying customers who can afford it, AWS is a great solution. Most small timers with low traffic are happy with DO/Linode. You can be fancy and have CI/CD or webhooks of code checkin on the master branch. As a side note, back in the day, I’ve used Capistrano, which is pretty much the the above commands wrapped in ruby script. The OP has said he is going to use this as it’s a Rails project. However this is the crux. You’re most likely having someone else pay for hosting. You’re probably getting someone else to pay for you to learn how to do XYZ. Most likely you’re not even aware of how much this infra is costing the client. I really don’t have a problem with this. Good for you. However the person who posted the question was a solo-founder trying to understand whether to learn Docker. This probably is a person who is not a super techy. (A lot of people here know docker just becz of curiosity) and I would hazard a guess not a big runway. These reasons alone, IMHO I would suggest getting something out there, learning to see if there’s a market and reiterating.


> k8 and spend a whole bunch of time on it.

I fully agree and no one said that you should do it. But these days there’s so many abstraction layers that it’s not nearly as complicated as it was 2 years ago. 2 years ago I wouldn’t have even considered it, but now it might be on the table.

> from the get-go? Did you spend a whole bunch of time learning it or did someone else pay you?

Yup. Business SaaS product that had day and night use. No investment, no one paid me. Became “ramen profitable” year 2 but it’s not much. I’m not interested in growing it any more for various reasons and right now it’s just “side income”.

> However this is the crux. You’re most likely having someone else pay for hosting. You’re probably getting someone else to pay for you to learn how to do XYZ. Most likely you’re not even aware of how much this infra is costing the client.

Might be so but not in my case. I pay for everything and customers pay me a monthly fee depending on different params that aren’t even very closely tied to how much it costs me.

> However the person who posted the question was a solo-founder trying to understand whether to learn Docker.

When I started, I was a solo founder too. Have you looked into caprover? I came across it recently and it looks really interesting. It uses docker internally but it seems to be a somewhat mature product and has so many nice bells and whistles. Best of all, very little time investment (or so they promise). If I was restarting today, I would look into that. It would help me run everything without relying on AWS/GCP and will end up being cheaper if everything works as advertised.


>You system would leave me nervous every time I take a vacation. I go on vacations without a worry in the world. My system just took a couple weeks to learn the ins and outs and costs a little more. Mine over yours any day of the week. If I’m just messing around with a personal project, then I use a VPS because literally no one relies on it. Your assumption is that I can’t build your infrastructure or I choose not to. My assumption is the OP is doing this for a personal project not working for a paying client which you’ve also said you’d use a VPS. Solo founders generally don’t get paid until they get paying customers and generally don’t have lots of runway (3-6 months?). They are also doing other stuff to bootstrap the business (sales/marketing/content). That may change your attitude significantly whether a few weeks worth of learning was worth it.


I guess our difference of opinion is when you want to invest your time learning at least the semi-scalable infrastructure. It doesn’t have to be docker, it could even just be ansible. Or cop out like me and use App Engine/Beanstalk. I believe the initial time investment is worth it because if the project is even a little successful, I never even have to think about infrastructure. To me that’s important because I hate spending time on it. I’d rather do customer discovery or feature building. And now that I know how to set it up, I can set up entire new projects extremely fast, and reliably. It’s a life skill that I had to learn at some point (as an engineer) and I just did it earlier. SSHing into the machine and doing a git pull every time I deploy, and SSHing into the machine to make every minor environment change over time, will drive me crazy as I’m iterating through the product. Isn’t that the case for you? Currently one command takes care of everything. I just need to focus on business. To add to this, the strong infrastructure definitely “delighted” my customers. I would fix bugs in minutes because of the quick deploys/setups and nothing ever went down. Some automatically assumed I had this whole team behind me and I kinda went with it for more $$ lol.


>I guess our difference is opinion is when you want to invest your time learning Yes I think so. I’m very focused on delivering business value as soon as possible. There is often a lull after the first version where the biz endless discusses and re-evaluates everything. I use this as time to learn. Of course, the next time around you start from the new point and learn something else to make things better/faster/automated. Now I’m at a point that I really don’t do what I suggested, becz it’s all second nature. However I think for early founders (especially with not much runway), that investment in time can be a time sink where they are thinking they are in motion, but it might not be that important for the short term. In my experience, short term user feedback is important as the number of very early pivoting I’ve experienced (at startups) is frankly ridiculous. Sounds like you did things more grown-up than I experienced.


I find it extremely useful. It’s really easy to use, and after years of inactivity – I can reanimate projects “just like that” by running a single command. Use docker-compose to your advantage and create a simple set of deployments locally – in the same way you would on a VM for “prod”. Can’t stress how useful it is if you ever want to give it to somebody else, or hire a dev. “docker-compose up” and your dev is set up.
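
A minimal sketch of that workflow; the services and pinned versions are illustrative, and the web service assumes a Dockerfile in the project root:

```
cat > docker-compose.yml <<'EOF'
version: "3.8"
services:
  web:
    build: .               # assumes a Dockerfile next to this file
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:15.3   # pin versions so the stack still comes up years later
    environment:
      POSTGRES_PASSWORD: example
EOF

docker-compose up          # or "docker compose up" with the v2 plugin
```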


I kinda doubt that. You would need to update the components inside the docker container, right? If you simply activate the container after years of inactivity you will have a complete insecure setup, as there would be no security updates applied. That’s not even acceptable for side projects.


Dunno why you got downvoted, you are exactly correct. Another thing that rots when it’s “years” is the trusted CA bundle, so you cannot even talk to remote peers until you update it.


The usual trick is to mount /etc/ssl/certs from the host. This also means stuff like “extra” CAs can be configured at host level. As for the packaging issues: 99% of all CVEs are junk in context, not reachable except in exotic configurations, component not used in the container, kernel bugs reported against the kernel headers package in the container because of course no docker scanner filters those out, CVE actually misreported, tons of “locally” (aka not) exploitable issues, especially privesc. But yeah, ok, it’s the 1/100 that gets you.
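
A hedged example of that trick; my-old-image is a placeholder:

```
# Read-only bind mount so the container uses the host's regularly updated CA bundle.
docker run --rm -v /etc/ssl/certs:/etc/ssl/certs:ro my-old-image
```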


I know for solo-founder discussions about cut-throat effort expenditure judgements in the here and now don’t really need to consider their project being shuttered and later revived, but the first step of reviving old projects is never “put it on a public server” or “update all the packages to ensure the security patches are up to date”, it’s always just getting it running again. I’ve been in scenarios where that effort alone was significant.


Good luck updating old and hand-configured VM. Most of the time running Docker image is just invoking one app without entire system services (except host system), then securing is often just bumping base image version to some recent one.


You don’t hand configure VMs, just like you don’t hand configure containers. You use packer and vagrant for that, and you bump base image versions, just like in Dockerfiles.


That being said, some things to consider to answer your question:

  • complexity: not the easiest thing to learn
  • work overhead: you’ll need to set this up for the first time
  • linux is best: I don’t use much of docker outside of linux; it gets even more complicated on Mac and Windows
  • deployment: learn how to deploy your images to prod; there are multiple ways and it can be hard to decide what to use


  1. As a solo founder, double down on whatever tech stack is working for you and that you are familiar with. Focus on the customers, business and product.
  2. Time is the most precious resource at your disposal. Don’t spend it on learning a new tool/tech unless you don’t have any alternative.
  3. Why fix something that’s not broken?


I’m a bit perplexed by a lot of the comments here. In webdev, most of the complexity I find is either in the app itself or the aspects of deployment _beyond_ Docker. I usually set my Dockerfile up at the start of a project and use the image it produces across dev, testing, and prod. It takes me only a couple of minutes to set up and only a couple of minutes to (infrequently) modify. Even when I got “ambitious” with my Docker setup, the time investment was still same day. Cloud providers also have a lot of easy-to-use products which work with containers right out of the box. The same image would work on Heroku, Google Cloud Run, and AWS Fargate. I don’t need to make any changes to my app outside of providing different environment variables, just provide the image and we’re good.


How do you reproduce your production environment with “just” Docker? I don’t get it. As an example of a very simple production environment of mine: a VPC (private network) with 1 webserver, 1 nginx proxy, 1 db server and 1 jumpserver. The only place I use Docker is on the webserver (I run a Django shop, so I pack everything needed into a Docker image); the other servers do not run Docker containers. I use Ansible to provision these servers and also to maintain them (e.g., OS security updates, etc.), and I use systemd as well. With Docker I can only mimic the webserver. I cannot reproduce the interaction of systemd and nginx, for example; I cannot reproduce the way Ansible provisions my machines in production. Docker alone is not enough. What I do to reproduce my production environment locally is: Vagrant. I spin up 4 VM machines and a private network and voila: I have the same prod environment on my laptop! Now I can use Ansible and configure systemd as I would do in production.


Wow! Super interested in this (as it is similar to what I had in mind) Could you share further detail please? The jumpserver is haproxy or something similar? I thought previously that you had to put nginx + server application in every node. How do you configure and manage the whole thing? Thanks in advance!


– The jumpserver is just an Ubuntu machine with the ssh daemon running. It’s open to the internet. I try to follow the “best security practices” to harden this machine (but I still have a lot to learn): don’t allow root access, only allow access via ssh keys, etc. I don’t have ssh access to any of the machines inside the VPC, I use the jumpserver for this: I ssh into the jumpserver and then ssh into any other server (of course, I do this automatically. Search “ssh jumphost”) – nginx is installed only in one server. This server is also public to the internet: it’s the main entrance to my web application (so whenever you type mydomainname. com, it’s nginx the one that handles the request and redirects it to my web application) The other servers (the webserver with my application code) and the database server are only accessible from within the VPC (internal network) and are not open to the internet (there are some iptables rules set up) – At the moment I have everything in DigitalOcean. I use Terraform to create the servers (droplets) and then provision them via Ansible. So far so good. > I thought previously that you had to put nginx + server application in every node. I only have one application server because my load is low. In order to scale, sure you can have N application servers and M nginx servers. I don’t know why would I put nginx and the application code in 1 server (I think K8s does this?)

Wow, a lot of Docker dislike in this thread. I personally like it a lot and it makes my deployments (single serve) really quick and easy. I rsync a tarball of the image to my server and rerun my docker compose to go down and back up. Quick and easy. Dependencies are handled on image building and I can run the same image on my local machine before deployment. On the complexity side, there was definitely some learning to be done, but I think learning docker enough to get some work done takes 3 hours at most. Understand the concept, how building works, how networking works, and you’ve got a working knowledge. I don’t use k8s or any composition software, I just run my images as if they were little processes with their external dependencies (e.g. wsgi server) included. Also, if you want to use a PaaS, many of them accept a docker image, so that makes things way easier to deal with and lets you use anything you want, compared to having to conform to a project format. All that said, if you don’t think you need it, don’t bother. It’s not a game changer, but it’s handy for some. If you have a good way to deploy your applications consistently (there are countless), then go with that. If you’re interested in docker, learn it. In the end, choosing docker or not choosing docker won’t be the thing that makes your project successful.
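
A sketch of that save/rsync/load loop; the image name, user, and host are placeholders:

```
# Build locally, ship the image as a tarball, then load and restart it on the server.
docker build -t myapp:latest .
docker save myapp:latest | gzip > myapp.tar.gz
rsync myapp.tar.gz deploy@myserver:/tmp/
ssh deploy@myserver 'gunzip -c /tmp/myapp.tar.gz | docker load && docker compose up -d'
```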


It’s not Docker Dislike: we’re all saying “if you don’t know Docker then don’t waste time learning it. ” Starting a business is hard enough: spend your effort on your customers and implement the simplest possible thing.


> Small project, happily using generic VM. Everybody tell me I should use Docker. Where is the catch?

The catch is that you have a working system you are happy with, whereas using Docker introduces a learning curve (a time sink) and a configuration/testing cycle (at the least) that will not benefit you at all in your current use case. I say that authoritatively, as if it were not the case then you wouldn’t be ‘happily using generic VM’ and would, instead, be asking about whether Docker would solve your ongoing issue. On a more expansive and general note about Docker, it has its place. I’ve done Docker, but I’ve also done stuff ranging from the simple Git push-to-deploy, through third parties like Buddy/Codeship/Azure, cloud containers like AWS Fargate (with Docker), VMs with Terraform and Packer, and more. All have their sweet spots, but your question is specifically about Docker.

  • If you have low/zero code dependencies or can deploy pre-built binaries (eg Go, .NET Core) then you probably don’t care about the OS beyond choosing an LTS version you can redeploy to at will. For that situation you don’t need Docker; just blue/green onto existing or new images.
  • If you use interpreted/scripted languages (eg Python, Ruby), or you build binaries in-situ (eg Go, .NET Core), then you probably need full control over your OS configuration and installed tooling. For that situation you have options, including Docker.

All of that is overridden by considering the same question, but about dependencies in relation to your scaling methodology. If it supports scaling with standard images you may not need Docker. If it needs specific configurations then Docker is a good choice. And finally, if you plan on using larger deployment tools or cloud provisioning then there may be non-Docker options to choose from, but at that kind of scale introducing Docker is probably the easiest way forward as it is acceptable to pretty much all related tooling.


Thanks for the answer. Your comment made me think (I had a problem once after a system update). What other options do I have? So far I try not to touch much, just security updates from yarn or the ruby Gemfile, or directly one-to-one if it isn’t directly related.


That sounds like you built a nice snow flake that you might be confusing with a production ready thing. Maybe you are happy with it like that and things are fine. Or maybe it’s a problem you don’t know you have yet. I can’t answer those maybe’s for you. But probably the reason your friends are unhelpfully suggesting a solution is because they suspect you have that problem and might not be aware of it. A reason to ignore their advice would be that you genuinely know better and know what you are doing. Another reason might be that you have plenty of other/more interesting things to do. I’ve certainly sacrificed devops in favor of feature development a couple of times and it was usually the right choice. It’s important but not when it starts taking double digit percentages out of my time. But the difference between an informed choice and just winging it is knowledge. Docker is kind of easy to get and learn. So, read up and then decide if you have a use for it. Much better than mucking around with vagrant, vms, and stuff like ansible, puppet, etc. which was the fashionable thing to do before Docker was a thing. If you like your vms still, look into using packer to make creating them a bit more repeatable. A packer built vm image (with ansible or whatever) is a nice substitute for Docker. More work but you gain a bit flexibility. I’ve used it to build vms that run docker.


Do you know Docker already? If not, that's a reason not to use it. There are clear advantages to having a production system that's expressed in terms of a text file you can read, instead of a bunch of stuff that has accumulated over time and which you've forgotten. Especially if you're a solo maintainer of a system: because you're not a full-time SRE, you're not going to invest in making the system clearly documented, you have lots of other demands on your time, and that will bite you one day (e.g., do you have a plan for how you're going to test an upgrade to a new version of your OS or a new release of a language?). But there are plenty of ways to do that other than Docker. Principled usage of Ansible, or even a hand-made shell script, will work. If you're on a VM, you get 80% of the benefit of a container – you can spin up a new VM, try to provision onto it, switch over, and see what happens. Or, as another commenter mentioned, if you hire a second engineer and need to get them set up on a test environment of their own, they can run the Ansible playbook or shell script on their VM. If Docker is the tool you understand for getting replicable environments where everything is documented, use it. If it's not, use something else. But you should pick some tool.
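As a rough illustration of the hand-made-shell-script route, here is a minimal sketch, assuming a Debian/Ubuntu VM and an illustrative Rails-style stack (the package list, the deploy user, and /var/www/myapp are all hypothetical):

    #!/usr/bin/env bash
    # provision.sh - rerunnable setup for a fresh VM (illustrative packages and paths)
    set -euo pipefail

    sudo apt-get update
    sudo apt-get install -y nginx postgresql redis-server build-essential

    # create the deploy user and app directory only if they do not exist yet
    id -u deploy >/dev/null 2>&1 || sudo useradd -m -s /bin/bash deploy
    sudo mkdir -p /var/www/myapp
    sudo chown deploy:deploy /var/www/myapp

Running the same script against a fresh VM is exactly the "test an OS upgrade" plan mentioned above: provision a new machine, deploy onto it, and switch over.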


I think the consistent prod/dev env is overhyped. Almost all of my prod/dev bugs are due to prod-only optimizations, not different environments. In addition, the virtualization breaks a lot of dev tools (e.g. my debugger consistently breaks or randomly stops listening). That being said, I still use Docker to create a production image. I think Docker is great for deploying a single "entity" to a remote machine. No more forgetting to install dependencies, copy assets, configure auto-restart, etc. on the remote machine.


> Almost all of my prod/dev bugs are due to prod-only optimizations and not because of different environments. I've found this is because of the work to keep prod/dev consistent. When I didn't have automated builds that were the same in all environments, there were lots of bugs with packages, paths, environment, etc. Getting rid of all those just leaves the prod-only optimizations you now see.


Outside of special circumstances, I don't think I'd use Docker as a solo founder. I find the quality-of-life enhancements it offers are more desirable in a team setting. Since you know the tech intimately, you've probably got a solid deployment strategy that you can easily repeat and debug. That, generally, is good enough. Adopting Docker means learning a ton of tiny details to get your development experience working well, ironing out deployment quirks, accepting an inevitably slower development loop, and various other trade-offs. Sometimes the trade-offs are more than worth it (and we happily use it on our team of 5), but if your deployment strategy is already reliable and efficient, it seems unlikely to be worthwhile. For us, the ability to deploy around the world instantly and reliably is the feature we wanted most.


“Since you know the tech intimately, you’ve probably got a solid deployment strategy that you can easily repeat and debug.” That’s not necessarily the case, though. Just as easily, I can imagine the opposite might be true.


Docker might be useful to automate provisioning new servers if needed, but at least for development, "docker compose" is competing with simply keeping backups of VM disk images containing properly installed software. And with its many self-inflicted complications and little or no added value, Docker isn't competing very well.


Docker is awesome, but it solves a problem that a simple monolith backend app built with one of the most popular frameworks simply doesn't have. If you're building an app with Node.js, then you could deploy it to EBS or Heroku or Digital Ocean as is — those tools (as well as many others) support Node.js (and many other frameworks) out of the box without any need for containerisation. The best piece of code is the code not written, and that applies to Dockerfiles too. Don't waste your time and mental space on something you don't really need.


I see no advantage to using Docker for development, unless for some specific reason. But Docker (swarm mode in particular) is extremely handy and convenient for deployment. Once you learn the basics you barely need anything else for deploying solo-founder-scale projects. Just don't accidentally step into the Kubernetes black hole. Unless you want to suffer.


> Just don't accidentally step into the Kubernetes black hole. Unless you want to suffer. There is definitely space for simple container orchestration tools that aren't Kubernetes. Docker Swarm hits the sweet spot for simple deployments that don't require a full-on k8s setup.


I can only speak for myself. So:
– Easy setup (built-in)
– Re-using existing compose files across local, staging, and production (of course, swarm mode needs to be enabled locally – see the sketch below)
– Easy scaling
– Using other container-related tools (if they're not Kubernetes-specific) is mostly one step (e.g. Prometheus et al.)
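A minimal sketch of that workflow, assuming a single-node swarm, an existing docker-compose.yml, and a service named web defined in it (the stack name myapp is illustrative):

    # one-time: enable swarm mode locally
    docker swarm init

    # deploy the same compose file as a swarm stack
    docker stack deploy -c docker-compose.yml myapp

    # easy scaling: run three replicas of the web service
    docker service scale myapp_web=3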


Not a founder. The catch is that Docker fixes the Linux dependency-hell problem: the server's Linux distribution may not have the dependencies you need to run your application, and if you install dependencies that are not in the distro repository, such as shared libraries or other executables, they may override the existing ones and mess up the system or break something. Even a native executable may fail to run on the server if the server has an older version of glibc than the one the application was linked against; this problem may therefore also affect Ruby and Python libraries written in C, C++ or Fortran.

Another advantage is that Docker reproduces the development environment entirely and encapsulates its configuration. So Docker also saves time that would be spent on manual system-configuration chores: editing configuration files, creating symbolic links and so on. By using Docker, or any other container technology such as Podman from Red Hat, one can also get pre-configured, pre-made systems up and running quicker, and swap configurations. A reason for not using Docker would be when running an application written in Go (Golang), since Go statically links almost everything and does not even rely on glibc (the GNU C library), so an application built with Go can run even on old distributions.


That's why you install app dependencies into /opt and run your app from there. A simple LD_LIBRARY_PATH update will make your life easier. Docker adds complexity. Complexity makes you feel smart, but it's the enemy of reliability.
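A minimal sketch of that approach, with hypothetical paths under /opt/myapp:

    # bundle the app and the shared libraries it needs under /opt (paths are illustrative)
    sudo mkdir -p /opt/myapp/bin /opt/myapp/lib
    sudo cp ./myapp /opt/myapp/bin/
    sudo cp ./vendor/lib/*.so* /opt/myapp/lib/

    # point the dynamic linker at the bundled libraries, then run the app
    LD_LIBRARY_PATH=/opt/myapp/lib /opt/myapp/bin/myapp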


Familiarity bias? Maybe you are just accustomed to your current rat's nest. What if you have collisions in /opt? What if a change in the ordering of the paths in LD_LIBRARY_PATH causes a domino effect? Reliable? I'm not sure; it depends on the determinism and responsibilities of the deploy mechanism.


I actually deploy all Go services in Docker too, as it lets me leverage Linux's existing system control/monitoring/logging without writing custom scripts for each service.


It’d probably be worthwhile to know how you’re using and managing that generic VM. CloudFormation with something like Puppet or Chef? Just SSHing and git cloning your project as a deploy? Something else? Docker is, for most people, just a more simplified and consistent build and deployment strategy. Rather than having your deployment process have to include downloading and configuring dependencies, they’ll just be baked into the image you’re set to deploy. Rather than having potentially different operating systems on a host and build machine, it’s all just container config. Rather than having to deploy a Ruby service and a Go service differently, you just build their containers and send them to a container registry. Containers bring simplification. A lot of that simplification is ruined by Kubernetes, but if you’re currently on a generic VM, just switching to Docker might save you some headaches. It also might not, and I would say if you’re finding success without it, what is the reason to switch?
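A hedged sketch of that build-once, push, deploy-anywhere flow (the registry URL, image name, and port are illustrative):

    # build the image from the Dockerfile in the current directory and push it
    docker build -t registry.example.com/myapp:1.0.0 .
    docker push registry.example.com/myapp:1.0.0

    # on the server: pull and run the exact same artifact
    docker pull registry.example.com/myapp:1.0.0
    docker run -d --restart unless-stopped -p 127.0.0.1:3000:3000 registry.example.com/myapp:1.0.0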


Solo founder here. Not sure how to build a proper distributed backend without docker compose. My stack is Python, Celery, PostgreSQL, Redis. I have one remote VM (one each for staging and prod) and a simple deployment script that SSHes into the remote VM and does a 'docker compose up'. But yeah, Kubernetes is a big time sink with no value, and I chose to avoid it.
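A minimal sketch of such a deployment script, assuming a hypothetical host name and project path on the remote VM:

    #!/usr/bin/env bash
    # deploy.sh - rebuild and restart the stack on the remote VM (host and path are illustrative)
    set -euo pipefail

    ssh deploy@prod.example.com <<'EOF'
    cd /srv/myapp
    git pull --ff-only
    docker compose up -d --build
    EOF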


I’ve also found that docker-compose provides the ideal level of abstraction for most solo founders / small teams / early stage companies and projects. Unless you’re already extremely experienced with Kubernetes, or are in the very rare scenario where you truly have no choice but to rapidly, massively scale very early on, I’d say it’s probably typically a mistake to start a brand new project with Kubernetes instead of something lighter like docker-compose. (The only other reason, I suppose, would be if you just intrinsically really want to learn it and don’t have growth as your primary goal at that time. Just be sure to fully understand the opportunity cost.)


Analogy: "As a hobbyist rocket builder, what are reasons NOT to build a rocket hangar?" Sort of like:
1. You take 2 hours to spin up a server with a few web apps. Maybe you'll have to do it 1 or 2 more times within the next year.
2. You take 200 hours to learn Docker, in order to be able to spin up that same server in about 10 seconds. You'll still only have to do it 1 or 2 times within the next year.
Whereas the real use case is #3, where you tack onto #2 the following:
+ you have 20 developers
+ meanwhile 4 developers per year churn through your team
+ you have devops experts who can take advantage of all the connectivity and modularity of Docker and its ecosystem (including Kubernetes)
In this case, it's useful to have people who know Docker. Whereas if you're not doing any of the things above (and/or other container-related activities), then there's no need to dig into it (unless it's for fun – Docker is pretty fun to learn and can teach a dev new things).
Mostly: the cost vs. benefit does not exist at your scale. It's economics: the cost of the time to learn it, put it into practice, and understand its quirks, versus the benefit it brings to a prototype or small project, is a losing trade. Docker is cool and useful, but it's just an extra layer of stuff you won't need unless you need its extra devops benefits.
That said, I don't need Docker, but I still find it fun and interesting to learn. I've used it for tangential purposes:
– Personal projects: experimenting with something new, e.g. running Grafana or a new type of database that I don't want to install directly but want to play around with and even integrate into a project.
– At work: to connect to a database


In my professional opinion, Docker is great for testing, verifying workflows, and simplifying developer environments. At the end of the day, it’s just another layer of complexity that could be a liability. Do you need to use it? No. Like with everything else, it is a tool that requires time to master.


I love docker – for dev, testing and rolling out prototypes for light use. I know if any of my servers go down I can recreate them without too much hassle. Yes, it adds a layer of complexity but it also helps maintain a disciplined approach to infrastructure concerns.


1. Complexity. Docker is harder than starting a process.
2. Scale vertically first. I'm not even joking. Don't build things in a way that prevents scaling horizontally later, but scale vertically first. This also means you will likely be able to count your servers on your fingers. Everything about Docker will be more difficult to maintain for such a small number of servers. systemd unit files and some disk images are a lot easier to manage than k8s (see the sketch below).
3. I think you have this backwards. Ask yourself: what are the reasons you have to use Docker? Because I honestly can't think of one, unless you're getting into the business of supporting or running other folks' containers.
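For reference, a minimal sketch of the systemd-unit-file alternative, assuming a hypothetical binary at /opt/myapp/bin/myapp and a deploy user:

    # write a basic unit file for the service (names and paths are illustrative)
    sudo tee /etc/systemd/system/myapp.service >/dev/null <<'EOF'
    [Unit]
    Description=My app
    After=network-online.target

    [Service]
    ExecStart=/opt/myapp/bin/myapp
    Restart=on-failure
    User=deploy

    [Install]
    WantedBy=multi-user.target
    EOF

    sudo systemctl daemon-reload
    sudo systemctl enable --now myapp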


If you don’t need it, then don’t use it. This goes for all tools. I’m old and I’ve been using chroot jails since 2005. So I was never really impressed by docker. But if I ever needed to use it for scalability or dev reasons I would.


The most important reason is to spend as much of your dev time as you can building your product, using the tools you already know. There’s always another incremental improvement you can make to your tools. After you start using docker, it’ll be something else. Just make a decision to put that aside, because it doesn’t matter. Almost anything is good enough, just build your product! I wrote a blog post on this theme you might enjoy: https://davnicwil.com/just-build-the-product


Complexity. As a solo founder you need something extremely simple so you can focus your time on the business and not devops. Docker often leads you down a devops rabbit hole. With that said, if you pair Docker with an existing deployment service that is container-compatible, you can probably keep it simple while also setting yourself up for future deployments. The downside of a VM is that a lot of the time you make it hard for your future self to containerize.


Simplest answer: it’s a premature optimization. Without knowing anything about your application, I’ll guess that there’s a time when it will solve certain problems for you, but that time isn’t today. The same can be said of many technical solutions that small projects may not need. As such, you should be somewhat aware of what the options are without committing to those options today.


If you don't know Docker very well already, it's a waste of time that you could spend building a valuable product for your customers. Assuming you're just starting out, you want to invest your energy in understanding your customers' problems and how to solve them. They don't care how your code runs, what language you use, etc., so stick to the things you know.


I build my dev and prod environments using the same scripts with just a few changes. To make the two match more closely, dev is just a Vagrant VM. It works great and I've never been tempted to change. If I wanted to run microservices, Docker might be nice, but I'm told that microservices are really unpleasant to work with in the long run.

Some other commenters are talking about problems with dependencies and your production environment suffering config drift without Docker – this need never happen if you just use scripts to build your environments and never modify your servers manually; always do it by updating your scripts in version control. That's just infrastructure as code; it doesn't need Docker. A few bash scripts and environment variables can do the same job.


If you are a developer whose only job is to add features to applications that are already in production, then yes, Docker is enough (and probably encouraged). But if you are a solo founder, I imagine you're also in charge of infrastructure (provisioning machines, setting up backups, monitoring systems, keeping machines updated, etc.). For that scenario Docker is not enough: if you want to test your changes locally before applying them in production, you must be able to reproduce your production environment locally, and that means you need a way to run VMs locally (e.g. via Vagrant + VirtualBox) – unless you are willing to have VMs for testing purposes in the cloud (but it's the same story: Docker is not enough).


What you absolutely should have: A reproducible way to get your tech stack bootstrapped and deployed. This can be a VM and ansible, or nix, or Docker, or a set of shell scripts. Docker is just one way of doing things, and recently it’s the most common way. I used to be more cautious but so far I’ve not run into any bigger problems, so I’m using it for some things in my private infra. But I don’t think it’s actually better per se. Where it shines for me is spinning up a local database, or some service I need. Ephemeral stuff that I can throw away again, less so for long-running production deployments. (Unless you need/want k8s anyway)
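A minimal sketch of that ephemeral local-database use, assuming Postgres and throwaway credentials:

    # a disposable Postgres for local development; --rm discards the container on stop
    docker run --rm -d --name devdb \
      -e POSTGRES_PASSWORD=dev \
      -p 127.0.0.1:5432:5432 \
      postgres:15

    # ...use it for a while, then throw it away
    docker stop devdb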


– It's something additional you have to learn, although the basics are quite easy to get started with.
– It's easy to expose yourself to security issues – never ever publish your MongoDB port to the Internet on an Internet-facing machine, or it will get hijacked. Only publish ports you actually want visible to the Internet. If you want separate containers for databases and the application, use --link! (A sketch follows below.)
– Docker's financing and business model isn't exactly well thought out IMHO. Docker itself is open-source commodity tech by now, but it's well possible that Docker-the-company goes belly up.
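A hedged sketch of the safer pattern: either don't publish the database port at all and put both containers on a user-defined network (the modern replacement for --link), or publish only to loopback. The image name myapp:latest is hypothetical; mongo is the official image.

    # database: no -p flag, so it is reachable only from containers on the same network
    docker network create appnet
    docker run -d --name db --network appnet mongo:6

    # application: published to loopback only, so a host reverse proxy decides what the Internet sees
    docker run -d --name web --network appnet -p 127.0.0.1:8080:8080 myapp:latest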


Docker is fast compared to competitors for deploying to production, but it is slow compared to alternatives for development. (E.g. if you deploy to production in 20 minutes, people think that is fast; if your build cycle in development is 20 minutes, that is slow.) For a small system, the ratio of dev to deployment is on the side of dev. Docker's approach to I/O saves space if you have a lot of containers that share content, but it costs time for a small-scale system. Also, Docker doesn't subtract complexity. A Dockerfile takes all the work (most importantly, the understanding) it takes to write a bash script that can install your system in a VM, but you also need to know Docker's quirks.


I'm doing the opposite and deploy all my services (Go), PostgreSQL and nginx in Docker with docker compose. The isolation and easy deployment with some scripts work for me. I host my own registry in Docker too, so I can keep as many images as needed.


Docker is incredibly easy to get started with. I know this sounds harsh but if you can’t deal with learning docker, you’re going to have a really hard time with tech in general.


There are a bunch of technologies out there that are easy to learn, which doesn't mean we should learn them all. My concern was that I hadn't seen a clear need and justification, and I was doubtful I was missing something. I have managed so far to learn a full stack and deploy it all by myself, so I think I have at least the minimum needed to navigate new learning.


Agreed. Much simpler than k8s stuff, and you can build a simple build/deployment process without using one of those CI/CD platforms.


I didn't use Docker, but I also don't use containers or virtual machines in general. Serverless all the way, because I don't have the people-power for such low-level things.


If you already have your project running in a VM, then the transition time may not be worth it – plus the possible landmines you may hit in transitioning. You can gain parity between dev and prod, or at least get closer. There may be some security benefits as well, as it makes maintaining and updating the system easier. Tending VMs is hard, and you have to stay on top of updates.


I just use a VPS, with Ansible to automate dependency setup and Capistrano to deploy the app (Ruby on Rails). Very happy with this stack.


Exactly what I use, except for Ansible (I heard it had some problems with backwards compatibility). Have you reached the point of having to use a load balancer? If yes, could you tell me how, please?


Using nginx for load balancing is probably the first step I would try, as it's relatively easy. There may be better options out there, but it will probably take a fair amount of time to outgrow it (assuming you're not a rocket-ship unicorn).
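A minimal sketch of nginx as a first load balancer, assuming two app instances on hypothetical local ports and a Debian/Ubuntu-style layout where conf.d files are included:

    # define an upstream with two app servers and proxy traffic to it (addresses are illustrative)
    sudo tee /etc/nginx/conf.d/myapp.conf >/dev/null <<'EOF'
    upstream myapp_backends {
        server 127.0.0.1:3000;
        server 127.0.0.1:3001;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://myapp_backends;
        }
    }
    EOF

    sudo nginx -t && sudo systemctl reload nginx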


It's something else to learn. Learning takes time and distracts you from building your product. My backend is an Express app. Locally I just run it with Node. In production, I let Heroku run it. I also use AWS Lambda functions and Cloudflare Workers, neither of which uses Docker. I have no need to introduce Docker into my workflow.


The reason my team uses it is because our application has tons of complex system-level and python-level requirements, and docker has proven to be the easiest cross-platform isolation option. If our app was web-only we would go with a simple VM or web host. Docker has some handy build tools, but those also exist for VMs (e.g. vagrant).

I am an engineer/developer and practically work solo on many projects. Last year I started using Docker at work. Before Docker, all services were installed manually on physical/VM server machines. Getting my head around Docker took some time, but I feel it was time well spent. I started with Compose files, but moved to Docker Swarm (single node) to get configs/secrets. Currently I manage everything with Portainer, which lets me control all the swarms and services in a relatively easy way.

After a working stack is created and tested on the development server, installing it onto one or multiple production servers is quite easy. Depending on the case, upgrading services is relatively safe, as you can roll back to previous images if you or someone else messed up. The nicest thing for me is that all dependencies are inside the image. No need to worry about conflicting versions or setting up the same thing multiple times. Sadly, you still need to be aware of any security-related issues that might exist within the image and maintain them – but this same issue exists outside of Docker as well.

However, if you don't understand or think about what you are doing, it is still possible to screw yourself in many ways. But I feel that this is always possible in our field, no matter what tools you are using. While Docker is not a silver bullet, it has made my life a bit less stressful. And I think it is important to understand what you're setting up and how you are doing it. Personally, I would not want to go back to my old workflow.

P.S. Sadly, Swarm is deemed by many to be dead. It is relatively easy for a single person to manage, and I am not getting the same feeling from alternative solutions such as Kubernetes.
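A minimal sketch of that upgrade-and-rollback flow on a single-node swarm (service and image names are illustrative):

    # roll the service forward to a new image
    docker service update --image registry.example.com/myapp:v2 myapp_web

    # if the new version misbehaves, return to the previously deployed image
    docker service rollback myapp_web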


Many of the other comments here are kinda generic, like "don't use anything unless you need it". Here are some specific problems with Docker that I've seen bite people in the ass over the years, including at startups.

1. DATA DESTRUCTION. Docker is unsafe by default. If you don't take specific measures by setting up bind mounts etc., then it is very easy to get into a state where shutting down the container deletes all the files your server wrote inside it. This is especially nasty when the program inside the container has generated a private key that was then authorized to do something, and you lose it. Yes, I've seen this happen. What a mess.

2. PERFORMANCE. Docker on Linux is a relatively thin layer over kernel features and is reasonably fast. Docker on everything else is excruciatingly slow, to the point that it can break things. In particular, doing anything with Docker on macOS can be so slow that it yields an unusable workflow for developers. It's very common in startups (and these days even bigger firms) for devs to be able to choose their own OS, meaning it's easy for someone to Dockerize stuff in a fit of enthusiasm, and then people on Mac or Windows discover that it's no longer pleasant to develop on, or may not even work properly at all. Filesystem bridging is a particular pain point.

3. DISK SPACE LEAKS. Docker likes to download lots of very similar operating systems even when you already have one that works fine, and is very poor at deduplicating files. This can lead to Dockerized systems that appear to work for a while and then one day just go bam because they ran out of disk space.

4. BIZARRE AND UNINTUITIVE CACHING SEMANTICS. What's the difference between these two Dockerfile snippets?

    RUN apt-get update
    RUN apt-get install xyz abc

and

    RUN apt-get update && apt-get install xyz abc

It looks superficially like there's no difference, because a Dockerfile is sort of like a shell script that's setting up an OS image. But the first approach is subtly broken in ways that won't become apparent for a few months. The problem is that each and every command you run creates a new snapshot, and Docker assumes that every command is a pure functional transform of the prior state. This is incorrect. So what happens is the apt cache will be snapshotted, and when you add another package to the install line, it will try to use an out-of-date cache, failing because the mirror operators have removed the old versions.

5. SECURITY PROBLEMS.
5a. Docker requires fairly complex networking configuration, especially once you get into containers talking to each other. It is easy to screw this up and accidentally expose your unprotected database socket to the whole world, whilst believing it's firewalled. For example, Docker has been known to actually remove firewall rules from live production systems in the past.
5b. Docker images snapshot an entire OS, meaning it won't get any security updates unless you are careful to continually rebuild the images. It's easy to screw this up such that no updates are actually applied (see point 4), but in practice hardly anyone does this constant upgrading, so stale Docker images are such a large problem that whole startups exist to try and tackle it.

6. LACK OF SERVICE MANAGEMENT FEATURES. Docker is metadata-poor. It has no good way to express startup dependencies between containers, cannot express sandboxing rules, and has many other problems. People often think containers are sandboxes, but they aren't actually designed to be, and this has also led to nasty problems in the past.

7. USER/GROUP FLAKINESS. Docker containers have their own user/group namespace, although this is rarely what you want. It's easy to end up in a situation where software fails in subtle ways because the uid/gid database inside the container is wrong or missing information, because home directories aren't aligned, or because uids/gids outside the container aren't the same as those inside but they're sharing a filesystem. Example: the JVM queries the home directory of the user at startup. It can be the case that the user doesn't have one when using a container, so it sets the system property to '?', which no real Java software checks for. Cue lots of files and directories named '?' appearing. Have fun tracking that one down!

A common theme in the above is that Docker makes assumptions about UNIX (really, all operating systems) that don't reflect how they actually work. This is a good sign that it will blow up in some obscure way.

So if Docker has these problems, what's the alternative? Well, for what you describe I'd just use systemd with tarballs or, if you want a bit more support, DEBs. This is what I do and it works fine. Why? Well: systemd is conceptually simple and clean. It is metadata-rich, so you can do things like say one service depends on another. systemd sandboxing is not an accident; it's an explicit design feature with plenty of knobs to give you what you need, but it's also very easy to bring up a simple systemd service in a normal UNIX environment as a starting point. Because systemd is designed to work with normal software, it won't do things like delete files your service created just because it shut down, and it won't delete firewall rules just because it can.

For software distribution, rsync works fine, but DEB/RPM add support for running scripts on upgrade, which can be convenient. They can also pull in dependencies, and the operating system can keep those dependencies up to date for you automatically if you want. Or you can pin all your dependencies to specific versions and take complete control. Although automatic upgrades can make the environment less reproducible, if you are small and only have a few machines in the first place it doesn't really matter, because you'd be upgrading one machine and letting it soak for a bit before doing the others anyway, so keeping your machines identical is not going to happen.
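To illustrate the "plenty of knobs" point, here is a hedged sketch of a systemd drop-in that turns on a few sandboxing directives for a hypothetical myapp.service (whether each directive suits a given service needs checking case by case):

    # add sandboxing on top of an existing unit via a drop-in (paths are illustrative)
    sudo mkdir -p /etc/systemd/system/myapp.service.d
    sudo tee /etc/systemd/system/myapp.service.d/hardening.conf >/dev/null <<'EOF'
    [Service]
    # run as a transient, unprivileged user
    DynamicUser=yes
    # mount /usr, /boot and /etc read-only for this service, and hide /home
    ProtectSystem=strict
    ProtectHome=yes
    # give the service a private /tmp and forbid privilege escalation via setuid binaries
    PrivateTmp=yes
    NoNewPrivileges=yes
    EOF

    sudo systemctl daemon-reload && sudo systemctl restart myapp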


You really made the case crystal clear. I appreciate you sharing this point of view. Now I have no doubts.


It’s hard to answer without knowing your stack / what your product does. My guess is that most solo founders would be better off just throwing their app on Render or Heroku, and focusing their efforts on building out a good product. Again, depends entirely on your product.


A basic Ruby on Rails app. In full, I use PostgreSQL, Nginx and Puma. Probably Redis soon, and everything is mostly server-side rendered. Deployments use Capistrano.


Don't use a tool just for the sake of it. That said, I've been using Docker happily for my side project: it makes it easy (for me) to reason about things, and it makes it easy for someone else to reason about my application if I need external help.


Personally, for small projects, I like to use Docker for development and Heroku (the non-Docker version) for production. I strongly believe that without a devops person you're much better off without Docker and without running your own infrastructure.


This reminds me of an older Ask HN: “what exactly is docker?”. The answers were varied and complex. I don’t want to use a product if so many intelligent people simply can’t agree on its purpose.


As a solo founder this is not what you should be spending your limited time on. Hire someone to think about the small rocks and focus your time on the big rocks.


Docker isn't strictly needed, but learning 12-factor (or something newer along the same lines) before writing code would make it a bit more future-proof.


I object to the premise behind the post. Which is that your choice of technology should be dictated by peer pressure.


When I was in a very similar situation I evaluated Docker & Ansible and found Ansible more useful and robust.


I've run a few (failed) projects in the past as a solo founder as well, and my single most important learning is to keep things simple, especially when it comes to infrastructure. You want something that is super easy to deploy, roll back, and debug. Docker in itself is not necessarily easier or more difficult to manage – that depends on your orchestration solution. If you're using it to deploy to Heroku, that's probably fine. But if you intend to use something like Kubernetes or AWS ECS, I'd rather not, as they come with significant overhead and lots of moving parts. I'm currently managing a Kubernetes cluster in GCP for a big client, and I couldn't be more convinced that this is definitely not the kind of solution I'll use for my own projects. Finally, although it goes contrary to what is usually recommended, at the beginning I'd personally avoid Terraform and/or AWS CloudFormation as well, as they come with their own overhead and sometimes make small changes more difficult. And it's a context switch.


What's your reason to use a VM? It consumes more resources, takes ages to boot, and it's hard/slow to share…


Ages? In my case it takes less than 5 seconds. It is fairly easy, as it is just a Linux host, and they are dirt cheap. I don't really agree with you on that.


I use a Virtual Private Server, a commodity you can find at DigitalOcean, Linode, Hetzner, OVH… Currently I use Capistrano for deploying my Rails app. It takes a few minutes and it's been quite consistent and reliable so far.


Yikes… it sounds like a lot of people answering this post haven't ever scaled any application beyond their garages. "I think the consistent prod/dev env is overhyped" – I legitimately lol'd at this. Docker specifically allows you to simplify and control the process of matching a dev environment to a staging environment to a production environment, while leveraging simple scripting of the CI/CD build/deploy processes so that it is repeatable. Repeatable code and repeatable deployments are exactly the consistency a developer is supposed to want of their code and application.

Yes, if you have a VM with 48 dedicated cores and 192GB of RAM you can just go back to building a monolith like it's a retro 90's throwback party – but it would be far more efficient to aim for a zero-downtime blue/green deployment, so you can roll out your code without having to shut any service down – which you can achieve very easily by incorporating Docker and even minikube into your VM (see the sketch below).

So, if you don't want a consistent, repeatable build/deploy process, don't use Docker. If you don't want to take advantage of lessons learned and best practices from developers and sysadmins who have been doing this for 30+ years, don't use Docker. Spend a few hours to learn Docker – there are a zillion simple copy/pasta docs and Medium articles on how to get up and going with Docker and minikube.

20 years ago, the similar question would have been "Why would I use a VM instead of a physical machine?", with the same sort of answers: "I already have a huge server", "load balancers are stupid", etc. Today, VMs are the default go-to for people. Containers are the next step in properly abstracting the development-to-deployment process. There are massive advantages to being able to create VM images, handle delta differences between them to track changes over time, and deploy and scale at a level that you cannot achieve with even a few physical servers.

If you have never scaled an application multi-region, or deployed an application for 100,000+ people, you might not be aware of the need to make patching and code deployment a consistent rolling-update process. I highly suggest you learn this now, since if your business is successful and you hit a trend spike and your single VM / one service crashes, you can lose a lot more than the few hours it takes to learn Docker and build a container that could be uploaded to AWS ECR and scaled up to 11 any time you need.

Docker – or any type of container – is the current best-in-class tech that we have for repeatable software development and deployment. Don't shun it because of its popularity. Read about why it is advantageous to use it instead of looking for an answer to validate why you shouldn't.
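A hedged sketch of such a blue/green rollout on a single VM with Docker plus a host reverse proxy (the image name, ports and /healthz endpoint are all hypothetical, and the proxy configuration change itself is elided):

    # pull the new version and start it alongside the old one ("green" next to "blue")
    docker pull registry.example.com/myapp:v2
    docker run -d --name myapp_green -p 127.0.0.1:8081:8080 registry.example.com/myapp:v2

    # smoke-test the new instance before sending it traffic
    curl -fsS http://127.0.0.1:8081/healthz

    # repoint the reverse proxy (e.g. an nginx upstream) from 8080 to 8081, then reload it
    sudo nginx -s reload

    # once traffic is on green, retire blue
    docker stop myapp_blue && docker rm myapp_blue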


Is Docker free for personal use?

A rundown of the new features – Docker Business is a new product subscription that we are offering for companies who use Docker at scale for application development and require capabilities such as secure software supply chain management, single sign-on (SSO), and container registry access restrictions, among other things.

  • A modification to the conditions that apply to Docker Desktop has been included into our Docker Subscription Service Agreement.
  • Docker Desktop will continue to be available without charge for personal use, educational institutions, open-source initiatives that are not for profit, and small organizations with fewer than 250 workers and annual revenue of less than $10 million.

Commercial use in bigger businesses is only possible with the purchase of a premium membership (Pro, Team, or Business), which can cost as little as $5 per month. Docker Desktop is now included in the Docker Pro and Docker Team subscriptions, making it available for business usage.

  • The previously offered Docker Free membership is now known as Docker Personal.
  • There will not be any modifications made to the Docker Engine or any other upstream open-source Docker or Moby projects.
  • Read the Docker subscription FAQs to get a better understanding of how these changes will affect you.
  • The following sections offer a summary of each tier for your reference.

See Docker Pricing for a comparison of the features offered by each pricing tier.

Why you should not use Docker anymore?

Cattle versus Kittens – Kittens are pets. Each adorable tiny kitten has a name, is petted on a daily basis, receives specialized nutrition, and has unique requirements that include “cuddles.” Your kittens will perish if they do not receive your continual attention.

  • Everyone becomes upset when a kitten dies.
  • The “Cattle” application is the other sort of application.
  • Herds of cattle are managed by farmers, and each animal is assigned a number rather than a name.
  • Cattle are kept in fields rather than barns, and they take care of themselves for the most part.
  • It is possible that there are hundreds or even thousands of instances in the “herd” that reside someplace in the data center, but no one really cares about them very much.

If they become ill or pass away, someday someone will find them and help them – most likely with a massive tractor, but no ceremony at all. — Greg Ferro, "The Battle of the Cloud Platforms: Cows vs. Kittens – Nobody Pays Attention to the Kittens Dying"

[Figure: visualization of the cattle topology, from an Apache Mesos presentation slide.]

The services that are being managed must be horizontally scalable in order for the cattle topology to be used.

  1. This means that each colored node in this diagram must be equivalent to a node of the same color and that the cluster must be able to be easily scaled up or down by either adding or removing nodes.
  2. And the unexpected loss of certain nodes does not have a significant influence on the quality of the service as a whole; you may have noticed that this diagram does not include any priceless snowflakes.

Because the cattle topology is necessary for services that must be resistant to downtime, businesses that cannot afford such downtime (such as Google, Amazon, and others) will expend the effort necessary to ensure that their services conform to this pattern.

  • The challenge, however, is that not all services are simply adaptable in this way; frequently, sacrifices need to be made, and it may need a significant amount of effort from the programmer to run it through a chop shop in order to make those decisions.
  • For instance, application servers generally do not have a problem fitting into this paradigm since they are not stateful (and neither should they be); relational databases, on the other hand, frequently do.

It is not particularly simple to configure Postgres for this purpose; there are ways to achieve high availability (HA) with Postgres by employing replication, which produces a master and hot-standby slaves, but there is no universally accepted method for doing so.

  1. There are some unusual solutions available, such as Postgres-XC, that provide multi-master support but have a limited feature set (so complex in a different way).
  2. MySQL is an exception to this rule because it already has multi-master support built in, and the Mesos diagram conveniently includes it as an option.

And some services, such as Redis, have only recently gained capabilities that allow for horizontal scaling (Redis Cluster). Regarding Redis, running it in production requires individual kernel adjustments (namely, transparent huge pages disabled) – a sketch follows below.

  1. Because Docker containers share a kernel, any changes made to the kernel parameters will have an effect on each and every Docker container running on that system.
  2. This presents a challenge in the context of using Docker.
  3. In the event that you have parameters that are incompatible with one another, you will now be required to manage container allocations on individual VMs.

I have no problem seeing why Docker would place so much emphasis on this particular aspect of things, and I believe that resolving this issue at the macro level will likely result in solutions that will also trickle down to those of us who have more specific use cases (eventually).
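A brief sketch of why this is host-level tuning: transparent huge pages are a kernel setting, so changing them for one Redis container changes them for every container sharing that kernel (commands assume a typical Linux host; making the change persistent across reboots is left out):

    # inspect the current THP setting on the host
    cat /sys/kernel/mm/transparent_hugepage/enabled

    # disable THP for the whole kernel - this affects every container on the machine
    echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled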

What is the most popular use of Docker?

Docker is a container technology that is open source and is used by system administrators and software developers to build, ship, and execute distributed applications. Since its initial release in 2013, Docker has been a revolutionary technology. It has quickly become one of the most widespread and widely used containerization technologies.

Why should I go for Docker in my project?

Docker makes it possible to speed up the delivery cycles of software. Enterprise software has to be able to adapt swiftly to shifting conditions. This implies that it is simple to scale up to meet demand and simple to update in order to add new features as the needs of the business dictate.

Is Kubernetes replacing Docker?

The following is a list of our future steps, which are based on the comments you have provided: Documentation will be delivered on time for the 1.24 release, which is a commitment made by both CNCF and the team working on the 1.24 release. This involves creating a migration guide for cluster operators, providing more instructive blog entries like this one, and upgrading any existing code samples, tutorials, or tasks.

We are reaching out to the rest of the CNCF community in an effort to assist them in adjusting to this upcoming transition. Please feel free to join us if you are a contributor to a project that relies on Dockershim or if you are interested in assisting with the effort to migrate projects. When it comes to our transition tools as well as our documentation, there is always space for additional contributions.

In order to get things rolling, introduce yourself in the #sig-node channel of the Kubernetes Slack!

Should all developers know Docker?

6. How does Docker benefit developers? – The fact that Docker ensures that the execution environment is the same for all developers and all servers, including UAT, QA, and Production, is the most significant advantage it offers from the point of view of a programmer or a developer.

It is to everyone’s advantage that the project can be easily set up by any member of the team; there is no need to muck about with config, install libraries, or set up dependencies, for example. Docker is a platform that, to put it more plainly, makes it possible for us to create, deploy, and operate programs by utilizing containers.

You may learn more about the advantages that Docker provides to web developers by checking out the course “Getting Started with Docker” taught by Nigel Poulton and hosted on Pluralsight. This graphic illustrates many of the primary advantages that Docker presents to software developers and programmers.

  1. To participate in this course, you will need to have a Pluralsight subscription, which costs around $29 per month or $299 per year (there is a discount of 14% available currently).
  2. Because it grants quick access to more than 7000 online courses covering every conceivable aspect of technology, this membership comes highly recommended from me to all programmers.

You may also view this course for free by taking advantage of their 10-day free trial, which is another option. That sums up the several reasons why a developer in 2022 ought to become familiar with Docker. As I’ve mentioned before, using Docker significantly simplifies the process of developing and deploying your code as well as running your application.

  1. When your application is packaged inside a container, it simplifies deployment and scalability, and it drives automation.
  2. It simplifies DevOps and strengthens the resilience of your production environment.
  3. In the not-too-distant future, Docker and Kubernetes will play an important part in the software development process due to the increasing importance of cloud computing.

This will cause the container model to become the default paradigm for software development. Because of this, every Developer and DevOps engineer should study Docker. Not only will this help them perform better in their present work, but it will also add an in-demand talent to their résumé, increasing their chances of getting a better position.

  • Additional articles and courses about DevOps that you might find interesting:
  • 10 Courses for Developers that Focus on Docker and Kubernetes
  • The DevOps Developer RoadMap
  • My top recommendations for experienced developers looking to learn DevOps
  • 10 Free Courses for Programmers to Learn Amazon Web Services and the Cloud
  • Seven free online courses available to learn Selenium for DevOps
  • 10 Free Docker Courses Designed for Professionals Working in Java and DevOps
  • Learn Jenkins for Automation and DevOps with These Top 5 Courses
  • 7 Free Courses Available Online to Help You Learn Kubernetes
  • My top recommendations for becoming familiar with Amazon Web Services
  • 13 of the Most Effective DevOps Courses for Experienced Developers
  • The Five Best Books for Beginners to Read to Learn DevOps
  • 15 online courses to educate students about AWS, Docker, and Kubernetes

I'd like to thank you for reading this post up to this point.

If you concur that learning Docker in 2022 is essential for any developer or programmer, then kindly spread the word among your circle of friends and professional associates. We’re going to be able to assist each other become better developers and programmers if we work together. P.S.

Do I need a Docker account to use Docker?

Is having an account on the Docker Hub required in order to use docker containers? In no way is that required at all.

Is Docker still open source?

Therefore, the answer to this question is “no,” Docker is not an open-source project.

Is Docker just for Web Apps?

Why would someone want to use docker, for instance, if they want to run a Python script that automatically gets the latest global meteorological data every 12 hours? In this particular scenario, I wouldn’t. Create a cron job in order to execute the script.

When compared to Linux LXC/LXD containers, what are the benefits of utilizing Docker instead? LXC containers were the foundation upon which Docker was initially constructed. Since that time, it has migrated to a more modern standard known as libcontainer. Cross-platform compatibility with an ecosystem that is far broader is the primary advantage offered here.

Even while Docker is fast making containers accessible to users of operating systems other than Linux, the world of Linux containers and LXC certainly still has a role to play. It is really difficult for me to comprehend the advantages of utilizing Docker.

  • Docker presents a significant benefit to my development work, which is where I focus the majority of my attention.
  • My concerns regarding older projects that call for more recent versions of runtime libraries and dependencies have been eliminated.
  • Docker is the container that holds everything.
  • After that, there’s the matter of scaling up production and deploying it.

There are straightforward solutions for practically any case thanks to the community and user base that surround Docker. These solutions range from installations on a single server to auto-scaling and Netflix-level functionality, which I'll never get close to.

  1. Simply put, I’m having trouble comprehending what you’re saying.
  2. Docker should be seen outside of the context of a web application server and thought of more generally as a program or process that runs constantly and offers an API or service that other applications may use.
  3. Yes, it often involves web-based services; however, any process that has TCP/IP or UDP enabled should be able to function normally.

Database systems, caching systems, key-value stores, web servers, and anything else with an always running process that offers an application programming interface (API) through TCP/IP or UDP are examples of systems that fall into this category. As I was saying before, the most significant advantage of this approach is that it encapsulates the service as well as all of the runtime requirements that are associated with it.

  1. Do you require MongoDB versions 2.3.3 and 3.2.2 to run on your server? No issue.
  2. Both of them are contained in different containers, and they are able to operate independently (a sketch follows below).
  3. Do you want to use MySQL for this application and MongoDB for that application? Done.
  4. Containerization is a strong tool that helps keep programs isolated from one another and helps decrease the “works on my computer” problem.

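A hedged sketch of that "two versions side by side" point (image tags, volume names and host ports are illustrative, and the exact versions mentioned above may not be published on Docker Hub):

    # each MongoDB version gets its own container, data volume and host port
    docker run -d --name mongo44 -v mongo44-data:/data/db -p 127.0.0.1:27017:27017 mongo:4.4
    docker run -d --name mongo60 -v mongo60-data:/data/db -p 127.0.0.1:27018:27017 mongo:6.0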