On 10/05/2012 10:00 AM, Pierre-Yves Chibon wrote:
This week Seth, Toshio and I have been thinking about and playing with Jenkins.
The current Jenkins we use is administered by Luke at:
and runs on hardware which is not within the Fedora infrastructure.
This machine is:
Processor: Dual Xeon @ 2.50GHz (on a dual quad-core Xen dom0)
Memory: 1G allocated; 12G on dom0
OS: Red Hat Enterprise Linux Server 5.8
Python: python-2.4, 2.5, 2.6 and 2.7
This week had two co-occurring events:
- fedora-review did not build on this instance of jenkins due to missing
dependencies on the system
- Toshio started to port Kitchen to python3 and had no place to run his
unit-tests in an automated way.
So we thought about using our new cloud system for setting up Jenkins.
We now have two build nodes within our cloud, one running Fedora 17 and
one running EL6 (down right now as it is being rebuilt).
Where we stand:
- We can create nodes on our cloud
- Seth created an Ansible routine to configure the nodes directly after
their creation [http://fpaste.org/jRX1/raw/]
So adding new nodes to a Jenkins instance becomes really easy and rather quick.
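The fpaste link above will eventually expire, so for the archives, here is a minimal sketch of what such a post-creation play could look like (the host group, package list, and user name are assumptions for illustration, not Seth's actual routine):

```yaml
# Hypothetical sketch of a post-creation node setup play.
# Host group, packages, and user name are assumptions, not the real routine.
- hosts: jenkins-slaves
  user: root
  tasks:
  - name: install the packages a build node needs
    action: yum pkg=$item state=installed
    with_items:
    - java-1.6.0-openjdk
    - git
    - rpm-build
  - name: create the user the Jenkins master connects as
    action: user name=jenkins state=present
```

Run against a freshly created node, something like this gets it ready for the master to attach to it as a build slave.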
If we want to run our own Jenkins master, this is the setup I can think of:
* Configure the Jenkins master on a machine within the Fedora infrastructure
* This master is not allowed to run builds
* The master can send emails (the current Jenkins cannot, due to mail server restrictions)
* All builds run on nodes in the cloud
* Nodes are reinstalled every six months, when there is a new version of
Fedora, or when needed (via Ansible)
* Nodes can be thrown away at any time
* Upstream provides an RPM and a repo
* The RPM is pretty much a .jar file plus an init script that does java -jar;
everything else is extracted the first time the app is deployed
* We should be able to use mod_proxy or iptables to redirect the default
port 8080 to 80
* The master would have backups, but we should also be able to have an
Ansible routine to reinstall it (up to Jenkins' configuration)
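For the port redirect mentioned above, either of these should do it (untested sketches, assuming Jenkins listens on its default 8080 on the same host):

```shell
# iptables: redirect incoming port 80 to Jenkins' default 8080
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080

# or Apache mod_proxy, inside the vhost for the Jenkins hostname:
#   ProxyPass / http://localhost:8080/
#   ProxyPassReverse / http://localhost:8080/
```

The mod_proxy route is probably nicer for us since it lets Apache handle SSL and logging in front of Jenkins.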
Did I mention this is awesome? I think I did.
I will disclaim that I am now a good chunk of the way into the
Continuous Delivery book and have been thinking about how some things
might dovetail in the context of how we do things 'round these parts, so
that is likely some of the inspiration for some of the questions I have....
What are we "building"? Packages? Packages + test against multiple
versions? Build package + unit-test/run against nightly? Package + test
against multiple+"nightly"/latest, if passes then automagically
incorporate into new "nightly"? Or am I TOTALLY off base and you're
just thinking about how we build/test/redeploy our infrastructure apps?
Yes, I realize that a bunch of those are like, totally jumping the gun
and starting small is good, but curious if that is sort of the roadmap.
Or if there is a roadmap or if this is just "checking things out to see
what they might do for us."
To that end - I think it would be super cool for others observing to
know... what is the problem(s) we're trying to solve, or what is the
gain we're hoping to see? And yes - "we heard this was kind of cool so
we're just checking it out to see what it even does" is perfectly
reasonable (and, ahem, awesome). But we have all this new stuff going
on - "we haz a cloud," autoqa is continuing to evolve, new archs like
ARM, etc... - and some of it could potentially solve things. So I'm
wondering... is there a basic "things we could solve/improve" list?
Maybe this would be a cool topic for FUDCon if nothing else?
infrastructure mailing list