Django server structure and conventions

I'm interested in best practices for organizing Django applications on a server.

  • Where do you host your Django code? The (now rather old) Almanac says /home/django/domains/somesitename.com/, but I've also seen things put in /opt/apps/somesitename/. I think the /opt/ idea sounds better as it isn't a global location, but I haven't seen that option used before, and it would arguably be better to host applications in the deployment user's local home directory.

  • Would you recommend having one global deployment user, one user per site, or one per site-environment (e.g. sitenamelive, sitenamestaging)? I'm thinking one per site.

  • How do you handle config files? I currently put them in an /etc/ folder at the top level of source control, for example /etc/nginx/somesite-live.conf.

  • How do you set up your servers and deploy? I have resisted Chef and Puppet for years in the hope of something Python-based. Silver Lining doesn't seem ready yet, and I have high hopes for Patchwork (https://github.com/fabric/patchwork/). We are currently using some custom Fabric scripts for deployment, but the "server provisioning" side is handled by a bash script plus some manual steps for adding keys and creating users. I'm going to look at Silk Deployment (https://bitbucket.org/btubbs/silk-deployment) as it seems to be the closest to our setup.
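To make the per-site idea concrete, here is a minimal, hypothetical sketch (not an existing tool) of provisioning one system user per site with code under /opt/apps/<site>/. It only builds the shell commands rather than running them, so the layout is easy to review; the paths and user names are assumptions:

```python
def provision_site(site):
    """Build (but don't run) the commands to set up one site:
    a dedicated system user whose home is the site's /opt/apps/ folder."""
    home = f"/opt/apps/{site}"
    return [
        # one system account per site, homed at the site folder
        f"useradd --system --create-home --home-dir {home} {site}",
        # a predictable sub-layout for code, logs and config
        f"mkdir -p {home}/src {home}/logs {home}/etc",
        # the site user owns its own tree, nothing global
        f"chown -R {site}:{site} {home}",
    ]

if __name__ == "__main__":
    for cmd in provision_site("somesitename"):
        print(cmd)
```

A real provisioning script would run these via subprocess or Fabric; printing them first makes the scheme auditable.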

Thanks!



1 answer


I think it would help to know more about what types of sites you are deploying: there will be differences based on the relationships between the sites, both programmatically and "legally" (as in the business relationship):

  • Having a system account per "site" can be handy if the sites are "owned" by different people - if you are a web designer or programmer with multiple clients, it may be beneficial to keep them separated.
  • If your sites are related - say a forum site, a blog site, etc. - you can use a single deployment setup (that is what ours is like).
  • For libraries: if they are hosted on reputable sources (PyPI, github, etc.) it is probably fine to leave them there and deploy from those - if they are on dodgy hosts that keep going up and down, we grab a copy and put it in the /thirdparty folder in our git repository.

Fabric

Fabric is awesome - if it's set up and configured right for you:

  • We have a policy that means no one should ever need to log into a server (which is mostly true - there are times when we want to look at a raw nginx log file, but it's rare).
  • We have our fabfile structured so that there are separate building-block functions (restart_nginx, restart_uwsgi, etc.), but also
  • higher-level "business" functions that run all the little blocks in the correct order - for us to update all our servers we just type "fab -i secretkey live deploy" - live selects the settings for the live servers, and deploy deploys (-i is optional if you have the right .ssh keys)
  • We even have a control flag that, when the live settings are in use, prompts "are you sure" before performing the deployment.
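As an illustration of that layering, here is a hypothetical stand-in (plain Python rather than the answerer's actual Fabric tasks): small building-block functions, a higher-level deploy that runs them in order, and an "are you sure" prompt guarding the live settings. It returns the commands instead of executing them, so the flow is easy to inspect; all host names and service commands are assumptions:

```python
# Per-environment settings; "live" is the only one guarded by a prompt.
SETTINGS = {
    "live": {"hosts": ["live1.example.com"], "confirm": True},
    "staging": {"hosts": ["staging.example.com"], "confirm": False},
}

# Building-block functions: one small action each.
def restart_nginx(host):
    return f"ssh {host} 'sudo service nginx restart'"

def restart_uwsgi(host):
    return f"ssh {host} 'sudo service uwsgi restart'"

# Higher-level "business" function: runs the blocks in the right order.
def deploy(env_name, ask=input):
    env = SETTINGS[env_name]
    # The safety flag: confirm before touching live.
    if env["confirm"] and ask("Deploy to LIVE - are you sure? [y/N] ") != "y":
        return []  # aborted, nothing to run
    commands = []
    for host in env["hosts"]:
        commands.append(restart_nginx(host))
        commands.append(restart_uwsgi(host))
    return commands
```

In real Fabric the environment selection and SSH execution would be handled by the library itself; the point here is only the two-layer structure plus the live-environment guard.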

Our code layout

So our code layout looks something like this:

/         <-- folder containing readme file etc
/bin/     <-- folder containing nginx & uwsgi binaries (!)
/config/  <-- folder containing nginx config and pip list but also things like pep8 and pylint configs 
/fabric/  <-- folder containing fabric deployment
/logs/    <-- holding folder that nginx logs get written into (but not committed)
/src/     <-- actual source is in here!
/thirdparty/ <-- third party libs that we didn't trust the hosting of for pip


This is probably controversial, because we commit our binaries to our repo, but it means that if I update nginx on the boxes and want to roll back, I can do it just by manipulating git - and I know which source version works against which binary build.



How our deployment works:

All our source code is hosted on a private bitbucket repository (we have many repositories and several users, which is why bitbucket works out better for us than github). We have a user account for the "servers" with its own ssh key for bitbucket.

A fabric deployment does the following on each server:

  • irc bot announces the start to the irc channel
  • git pull
  • pip deploy (from the pip requirements list in our repo)
  • syncdb
  • South migrate
  • restart uwsgi
  • restart celery
  • irc bot announces completion to the irc channel
  • availability testing starts
  • announce the results of the availability check (and post the report to a private pastebin)
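The step ordering above can be sketched as a dry-run plan. This is illustrative only - the announcer command, requirements path and service names are assumptions, not the answerer's actual scripts:

```python
# Ordered deploy pipeline: (description, shell command) pairs.
DEPLOY_STEPS = [
    ("announce start", "ircbot announce 'deploy starting'"),
    ("update code", "git pull"),
    ("install deps", "pip install -r config/pip-requirements.txt"),
    ("sync db", "python manage.py syncdb --noinput"),
    ("migrate", "python manage.py migrate"),  # South, in this answer's era
    ("restart uwsgi", "sudo service uwsgi restart"),
    ("restart celery", "sudo service celeryd restart"),
    ("announce done", "ircbot announce 'deploy finished'"),
]

def plan_deploy():
    """Return the shell commands in the order they would run."""
    return [cmd for _, cmd in DEPLOY_STEPS]
```

Keeping the plan as data makes it trivial to print a dry run, reorder steps, or skip some per environment before handing the commands to Fabric for execution.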

The "availability test" (think unit tests, but run against a real server) hits all the web pages and APIs with a "test" account to make sure it gets sane data back, without affecting the live stats.
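A minimal sketch of that idea: crawl a fixed list of URLs as the test account and collect anything that doesn't look sane. The URL list and the fetch callable are hypothetical stand-ins (a real version would use an HTTP client with the test account's credentials):

```python
import json

# URLs the check walks; purely illustrative.
TEST_URLS = ["/", "/api/status/", "/forum/", "/blog/"]

def availability_check(fetch):
    """fetch(url) -> (status_code, body). Return a list of failures,
    empty when every page responds sanely."""
    failures = []
    for url in TEST_URLS:
        status, body = fetch(url)
        if status != 200:
            failures.append((url, f"HTTP {status}"))
        elif url.startswith("/api/"):
            # API endpoints must return parseable JSON, not just a 200.
            try:
                json.loads(body)
            except ValueError:
                failures.append((url, "invalid JSON"))
    return failures
```

Because the fetcher is injected, the check itself can be exercised without a live server, and the deploy script can simply fail (and announce) when the returned list is non-empty.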

We also have a fallback git service, so if bitbucket is down it fails over to that gracefully, and we even have jenkins integration so that a commit to the "deploy" branch kicks off a deployment.

Scary bit

As we use cloud computing and expect to handle high traffic, our boxes spin up automatically. They start from a default image that contains a copy of the git repository etc., but that will invariably be out of date, so there's a startup script that runs a deployment against itself - meaning new boxes added to the cluster are automatically up to date.
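The boot-time catch-up could look something like this sketch (the repo path, requirements file and service names are assumptions): a fresh box from the stale image runs these steps once on first start so it joins the cluster current.

```python
def startup_plan(repo_dir="/opt/apps/site/src"):
    """Commands a freshly-booted box would run to bring the baked-in
    (and therefore stale) image copy of the code up to date."""
    return [
        f"git -C {repo_dir} pull",                          # refresh stale checkout
        "pip install -r config/pip-requirements.txt",       # catch new deps
        "python manage.py migrate",                         # apply new migrations
        "sudo service uwsgi start",                         # only then serve traffic
    ]
```

The important property is the ordering: the box must not start serving until the code, dependencies and schema have caught up with the rest of the cluster.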
