These are philosophical questions you're asking; one could spend hours discussing them.
I don't see the point of standing up a full-blown CI/CD setup given the size of the project.
Well, you're going to cobble together some scripts and invent something anyway.
What difference does it make whether it's cron scripts on the server or a job in Jenkins? In terms of writing speed it's the same. So I think size doesn't matter here.
The only thing that matters is how clearly you have described the process (algorithm) of building and deploying the application.
From this point of view, my take is roughly the following:
1) git is not a deployment tool; git is only for versioning code
And the idea is that the end result of your work should be not code on GitHub, but some sane artifact that is ready to deploy (a Docker image, pip package, npm package, deb package, jar, war, zip in a pinch, etc.). If you produce artifacts, the question of tags disappears by itself: you have an artifact of version such-and-such, and that's it.
The server should know nothing about git, nor about any tags in it.
Here I would recommend packing everything into a Docker image, if only because the server then won't need to know anything about the application's dependencies and required libraries, nothing at all; all you need to install is Docker.
A huge advantage of Docker is that in the Dockerfile you are forced, willy-nilly, to describe accurately and explicitly every step required to install the application. And best of all, it is stored in the same repository, under git control. Lovely.
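To make that concrete, here is a minimal sketch of such a Dockerfile, assuming a Python app (the app name and start command are made up; adapt them to your stack):

```dockerfile
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself
COPY . .

# The exact start command depends on your app; "myapp" is hypothetical
CMD ["python", "-m", "myapp"]
```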
Artifacts should be stored in some kind of artifact repository (Artifactory or the like),
but if you want to keep things really simple, you can just keep the last few versions directly on the server, in a folder or something.
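In the Docker case, "keeping several versions" is just a matter of tagging each image with a version. A rough sketch, where the registry address and version number are made-up examples:

```bash
docker build -t myapp:42 .

# Push to a registry if you have one...
docker tag myapp:42 registry.example.com/myapp:42
docker push registry.example.com/myapp:42

# ...or, in the simplest setup, just keep the last few tagged images
# on the server itself; `docker images myapp` lists what you have
```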
2) once you have an artifact, it can be deployed
How exactly depends on the specifics of your project, but roughly speaking, let's say it's enough to get the artifact onto the server and put it in the right place.
Again, Jenkins handles this beautifully and it will take you all of 10 minutes. If you describe the logic in a Jenkinsfile, you win once more, because the deployment process (the algorithm) is again described EXPLICITLY. And it will also be under git control. (Jenkins only needs to know which repository to watch and where to find the Jenkinsfile.)
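For illustration, a rough sketch of what such a Jenkinsfile might look like. The stage names, registry URL and deploy target are all assumptions, not something from your project:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // BUILD_NUMBER (set by Jenkins) gives every artifact a version
                sh 'docker build -t registry.example.com/myapp:${BUILD_NUMBER} .'
            }
        }
        stage('Publish') {
            steps {
                sh 'docker push registry.example.com/myapp:${BUILD_NUMBER}'
            }
        }
        stage('Deploy') {
            steps {
                // Replace the running container with the new version;
                // "deploy@myserver" is a hypothetical target host
                sh '''
                    ssh deploy@myserver "
                        docker pull registry.example.com/myapp:${BUILD_NUMBER}
                        docker stop myapp || true
                        docker rm myapp || true
                        docker run -d --name myapp registry.example.com/myapp:${BUILD_NUMBER}
                    "
                '''
            }
        }
    }
}
```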
If instead you wire up some hidden cron script on the server that nobody knows anything about, then believe me, after a short while the whole thing will start getting complicated, something will be forgotten, something will be changed, and all of it together will bite you hard.
What is the advantage of this approach: if you need to roll back to the previous version, you do not need to rebuild the project, pulling everything from git again, because you still have the previous artifacts. A rollback in this case is no problem at all: just point at the previous version of the artifact, deploy it again, and you're done.
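A rollback then boils down to a couple of commands. A hedged sketch, assuming build 41 was the previous good version and its image is still available locally or in the registry:

```bash
docker stop myapp && docker rm myapp
docker run -d --name myapp registry.example.com/myapp:41
```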
3) Env Variables
When the application starts, it reads everything it needs from environment variables.
The deploy job can set these variables each time before the deploy; that would also be great, because it makes this knowledge explicit as well.
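On the deploy side this is just a matter of passing the variables to the container. A small sketch; the variable names here (DB_HOST, DB_PASSWORD) are invented for illustration:

```bash
docker run -d --name myapp \
  -e DB_HOST=10.0.0.5 \
  -e DB_PASSWORD="$DB_PASSWORD" \
  registry.example.com/myapp:42
```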
So, to sum up, we have:
- the build logic of the project is described in the Dockerfile and lives under git
- the deployment logic is in the Jenkinsfile, also lives under git and, most importantly, is code (Jenkinsfiles are written in Groovy; for simple things 30 minutes of study is all you need)
- on the server we installed nothing at all apart from Docker
- we keep several versions of our app around just in case and can roll back quickly without touching git at all
- the server knows nothing about git
- there is no extra deployment logic for your application living on the server
- with all of this in place it's very easy to add more servers to deploy to: roughly speaking, you just specify a different IP and set its env variables (if they differ, of course)
