edx-west: use local temp dir for sockets and ansible cache #849
Conversation
@sefk a couple of things, I don't like the temp directories as part of the repo. Why not just set the cache path to /tmp/ansible-stage-cache or /tmp/ansible-prod-cache?

Having the temp dirs right there in the repo gets them created for you. If the dir doesn't exist, I believe ansible fails, and I don't know of a way to ensure they are in place short of some weird thing outside the play (makefile, git magic).

@sefk I don't think there are any ways to have them automatically created. I would suggest calling them something more informative like ec2_cache instead of 'tmp', but that's a nitpick. 👍

@feanil changing the dir name from tmp to ec2_cache is a good one. I made that change and repushed my branch. Can you give it one more review, please?
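For reference, a minimal sketch of the caching knobs in the ec2.py inventory's ini file (cache_path and cache_max_age in the [ec2] section are the stock settings of that script; the path below just illustrates the local-directory approach discussed here, it is not copied from the repo):

    [ec2]
    # write the inventory cache into a directory local to this checkout,
    # so concurrent stage and prod runs can't trample each other
    cache_path = ./ec2_cache
    # seconds before ec2.py refreshes the cache from the AWS API
    cache_max_age = 300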
@@ -0,0 +1,3 @@
*

!.gitignore
This is a new VPC for us where we're doing the analytics work.
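In case the pattern in the diff above is unfamiliar (a general git convention, not something specific to this PR): git does not track empty directories, so committing a .gitignore like this one keeps the directory itself in the repo while ignoring everything written into it:

    # ignore everything in this directory (caches, sockets, ...)
    *
    # ...except this file, so the directory is never empty and stays tracked
    !.gitignore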
Per feedback from @feanil on PR openedx-unsupported#849, we want our release branch to line up with what we're pulling into master. Going back to putting ssh connections in /tmp. They have globally unique names, putting them in /tmp is consistent with other things, and having them live in a dir named ec2_cache feels odd now.
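A minimal sketch of the ansible.cfg side of this (control_path under [ssh_connection] is Ansible's setting for where ControlPersist sockets live; the exact socket-name pattern below is illustrative, not copied from the repo):

    [ssh_connection]
    # %%h, %%p and %%r expand to host, port and remote user, giving each
    # socket a globally unique name even when every checkout shares /tmp
    control_path = /tmp/ansible-ssh-%%h-%%p-%%r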
I was having problems where concurrent installs could trample on each other. The instance that immediately affected me was output caching from ec2.py: the output of that command is different between staging and prod, and both were being written to /tmp/ansible_ec2.cache and .index. Fix here is to write to a local temp directory. This creates empty temp dirs to ensure that they are created in all repos.

While less likely, you could have collisions on named ssh sockets. Those are named with just the instance name, which could be re-used across VPCs. Putting those in the ./tmp dir too prevents that.

Note that for consistency I did away with just the plain ec2.ini file, and instead now there are prod- and stage- variants. This is clean but now means that you'll need to change your install command to look something like this:

    ANSIBLE_EC2_INI=prod-ec2.ini ANSIBLE_CONFIG=prod-ansible.cfg ansible-playbook -c ssh -u ubuntu -i ./ec2.py prod-app.yml

Conflicts:
	playbooks/edx-west/ansible.cfg
looks good to me. 👍

woot
…master edx-west: use local temp dir for sockets and ansible cache
@feanil -- this is a change that we've been running with for the last few weeks on edx-west/release. It was pulled into our config branch in #806. This just gets the same change onto master, anticipating that we'll be picking up master at some point soon.
Given that we're running this already and it only affects our plays, this should be a quick one. Thanks for reviewing. Original commit msg below.
I was having problems where concurrent installs could trample on each
other. The instance that immediately affected me was output caching
from ec2.py: the output of that command is different between staging and
prod, and both were being written to /tmp/ansible_ec2.cache and .index.
Fix here is to write to a local temp directory. This creates empty temp
dirs to ensure that they are created in all repos.
While less likely, you could have collisions on named ssh sockets.
Those are named with just the instance name, which could be re-used
across VPCs. Putting those in the ./tmp dir too prevents that.
Note that for consistency I did away with just the plain ec2.ini file,
and instead now there are prod- and stage- variants. This is clean but
now means that you'll need to change your install command to look
something like this:

    ANSIBLE_EC2_INI=prod-ec2.ini ANSIBLE_CONFIG=prod-ansible.cfg ansible-playbook -c ssh -u ubuntu -i ./ec2.py prod-app.yml
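Presumably the stage analogue follows the same naming convention (stage-app.yml here is my inference from prod-app.yml above; it isn't confirmed anywhere in the PR):

    # hypothetical stage invocation, mirroring the prod command above
    ANSIBLE_EC2_INI=stage-ec2.ini ANSIBLE_CONFIG=stage-ansible.cfg ansible-playbook -c ssh -u ubuntu -i ./ec2.py stage-app.yml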
Conflicts:
	playbooks/edx-west/ansible.cfg