On Thu, 18 Aug 2011 00:17:00 +0530
Rahul Sundaram <email@example.com> wrote:
> We have been running Ask Fedora in the devel instance for a while now
> and have updated it several times as upstream fixed bugs and responded
> to feature requests.
> We have a custom CSS file, thanks to Suchakra, and with the help of PJP
> (co-sysadmin, cc'ed) we have configured it to run with Apache. I
> haven't set up Postfix or memcached yet on the devel instance, but at
> least Postfix has been tested locally. We will have to think about
> whether we should be running our own instances or hook into the
> existing infrastructure.
> An SOP has been written as well.
> We will add more details as we go forward. As a side note, while I was
> looking at memcached, I ran into an alternative Python binding for it
> (pylibmc) which apparently performs much faster and is being used by
> reddit. It has been packaged, as has the corresponding Django module
> (django-pylibmc). Details at
> Let me know what I need to do to move Ask Fedora to Staging.
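For anyone curious what wiring Django to pylibmc actually involves, it is mostly a settings change. A minimal sketch, assuming django-pylibmc is installed; the server address and options here are placeholders, not the actual Ask Fedora config:

```python
# settings.py fragment -- hypothetical values, not the real deployment config
CACHES = {
    'default': {
        # django-pylibmc provides this cache backend on top of the
        # pylibmc C binding to libmemcached
        'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',
        'LOCATION': '127.0.0.1:11211',  # placeholder memcached address
        'OPTIONS': {
            'binary': True,  # pylibmc supports the binary memcached protocol
        },
    }
}
```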
I think we can start looking at staging now.
Here's what I would suggest (and feedback especially from the other
application developers very welcome):
* Add it into proxy01.stg as https://admin.fedoraproject.org/ask/
(so we can share the cookie. Or do we want to do that anymore?)
* Make an ask01.stg instance to set it up on.
* Have it use db01.stg
* Have it use external memcached.
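One reason external memcached works nicely here: memcached clients hash each key to a server, so any frontend configured with the same server list reaches the same cache entry. A toy sketch of that mapping (naive modulo hashing with hypothetical host names; real clients like pylibmc use smarter schemes such as consistent hashing):

```python
import hashlib

def pick_server(key, servers):
    """Map a cache key to one server in a fixed list (naive modulo scheme)."""
    digest = hashlib.md5(key.encode('utf-8')).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# Hypothetical memcached hosts, not actual staging machines.
servers = ['memcached01.stg:11211', 'memcached02.stg:11211']

# Every app server with the same list picks the same node for a given key,
# so ask01 and a future ask02 would see each other's cached entries.
assert pick_server('question:42', servers) == pick_server('question:42', servers)
```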
With this setup, when we move to prod it would be using the proxy
servers and caching, but its own backend. If we found that it was under
too much load, we could add an 'ask02'. That should be possible, right?
Multiple instances with a shared backend db?
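Multiple app instances sharing one database should be fine for a Django app: each ask0N host just points its settings at the same backend. A hypothetical fragment (engine, credentials, and port are placeholders):

```python
# settings.py fragment -- identical on ask01, ask02, ...; values are placeholders
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'askfedora',
        'USER': 'askfedora',            # placeholder credentials
        'PASSWORD': 'changeme',
        'HOST': 'db01.stg',             # the shared staging db backend
        'PORT': '5432',
    }
}
```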
This also allows us to deploy faster, as our app servers are rhel5.
I'm a bit unsure whether we want the db on the ask01 instance itself or
on a shared db backend. On the one hand, that's fewer things on one
machine, and we could reboot/restart ask01 at times when we might not
be able to do the same to the db backend machine. But it's also another
machine to back up and manage databases on, and if we get replication
working, another place we would need to replicate.