
Tuesday, November 19, 2019

Managing dynamic inventory in private subnets using bastion jump box

The title of this post is quite long, but it describes a problem I ran into over the last few weeks. I had a VPC in AWS that created a variable number of instances in a private network, and it was quite complex to manage these instances using static inventory files. So I will explain how to handle this problem with Ansible.
Before continuing, I want to say that these articles are really good and can help you with this issue.
So you may be asking: if those articles are so good, why am I writing about this again? Easy: I'm doing this in GitLab CI, and I suppose other CI systems will run into similar issues. It's not possible to connect to the instances using only the instructions above.

First Step

We fetch our inventory dynamically. For this we will use the inventory scripts.
We need to modify the ec2.ini file, uncommenting vpc_destination_variable and setting its value to private_ip_address.
An example:
# For server inside a VPC, using DNS names may not make sense. When an instance
# has 'subnet_id' set, this variable is used. If the subnet is public, setting
# this to 'ip_address' will return the public IP address. For instances in a
# private subnet, this should be set to 'private_ip_address', and Ansible must
# be run from within EC2. The key of an EC2 tag may optionally be used; however
# the boto instance variables hold precedence in the event of a collision.
# WARNING: - instances that are in the private vpc, _without_ public ip address
# will not be listed in the inventory until You set:
vpc_destination_variable = private_ip_address
#vpc_destination_variable = ip_address
Be sure your ansible.cfg contains the following line:
host_key_checking = False
This is useful because we're running in CI, where we can't press Enter in a terminal to accept the host key.
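For reference, a minimal ansible.cfg for this setup might look like the fragment below. The inventory path is an assumption based on where you place the ec2.py script; adjust it to your layout.

```ini
[defaults]
# Skip the interactive host key prompt (required in CI)
host_key_checking = False
# Assumed path to the EC2 dynamic inventory script
inventory = ./ec2.py
```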
Then we begin working with our YAML file. As I'm running this in a container, I need to create the .ssh directory and the config file. Here it's important to add StrictHostKeyChecking=no; if we don't, the run will fail in our CI, since we can't press Enter. If you leave it out and run it locally, it will still work.
---
- name: Create ssh directory
  file:
    path: ~/.ssh/
    state: directory
    mode: '0700'


- name: Create ssh config file in local computer
  copy:
    dest: ~/.ssh/config
    content: |
       Host 10.*.*.*
       User ubuntu
       IdentityFile XXXXX.pem
       StrictHostKeyChecking=no
       ProxyCommand ssh -q -W %h:%p {{ lookup('env', 'IP') }}
       Host {{ lookup('env', 'IP') }}
       User ubuntu
       StrictHostKeyChecking=no
       IdentityFile XXXXX.pem
       ForwardAgent yes
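The generated ~/.ssh/config can be reproduced with plain shell for local inspection. This is a sketch using hypothetical values: 203.0.113.10 stands in for the bastion IP taken from the IP environment variable, and mykey.pem for the real key file.

```shell
# Sketch: build the same ssh config the Ansible task produces, so the
# layout (jump host + ProxyCommand) can be checked without running a playbook.
BASTION_IP="203.0.113.10"   # hypothetical bastion public IP
CONF_DIR="$(mktemp -d)"
cat > "${CONF_DIR}/config" <<EOF
Host 10.*.*.*
    User ubuntu
    IdentityFile mykey.pem
    StrictHostKeyChecking=no
    ProxyCommand ssh -q -W %h:%p ${BASTION_IP}

Host ${BASTION_IP}
    User ubuntu
    StrictHostKeyChecking=no
    IdentityFile mykey.pem
    ForwardAgent yes
EOF
# Confirm the jump host was substituted into the ProxyCommand line
grep -q "ProxyCommand ssh -q -W %h:%p ${BASTION_IP}" "${CONF_DIR}/config" && echo "config ok"
```

With this file in place, a plain `ssh 10.0.1.5` from the runner would be tunneled through the bastion automatically.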

And finally, we test it by running the ping module.
---
- name: test connection
  ping:
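The snippets above are task lists; a minimal playbook wrapping them might look like the sketch below. The file name ssh_setup.yml and the group name are assumptions, and the ssh config tasks run on localhost (the CI runner) while the ping targets the private instances through the bastion.

```yaml
---
- name: Prepare ssh config on the CI runner
  hosts: localhost
  connection: local
  tasks:
    - include_tasks: ssh_setup.yml   # the ssh directory/config tasks shown above

- name: Test connection to the private instances
  hosts: all        # or a group produced by ec2.py, e.g. tag_Env_dev
  gather_facts: no
  tasks:
    - name: test connection
      ping:
```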

In case you need the code: https://github.com/DiegoTc/bastionansible

Wednesday, October 2, 2019

Ansible using plugins for dynamic inventories


This is a small post about how to use inventory plugins in Ansible. If you are looking for the script-based approach, I recommend reading this article: http://gloriasilveira.com/setting-up-ansible-for-aws-with-dynamic-inventory-ec2.html
It explains it really well, or you can watch this video: https://www.youtube.com/watch?v=LnbqO1kTPqE&t=6s
But if you're looking to use inventory plugins, this article can help you.
First of all, why should I use inventory plugins if everyone on the internet is using the Python scripts?
Well, Ansible recommends them:
Inventory plugins take advantage of the most recent updates to Ansible’s core code. We recommend plugins over scripts for dynamic inventory. You can write your own plugin to connect to additional dynamic inventory sources. https://docs.ansible.com/ansible/latest/user_guide/intro_dynamic_inventory.html
The official Ansible guide is quite good, but there was a step that got me confused; probably my English isn't good enough and I didn't understand it.
We need to enable the plugin. There are two ways of doing this: we can edit the ansible.cfg file located at /etc/ansible/ansible.cfg, or the one in the local folder where we're working.
Ansible Documentation
According to Ansible, you need to enable the plugin as in the following code, but I was killing myself and couldn't make it work. So what's the correct way of doing it?
[inventory]
enable_plugins = host_list, script, auto, yaml, ini, toml
The way that works for me
I'm working with the AWS dynamic inventory. According to the documentation, the file name has to end in aws_ec2.(yml|yaml).
So I need to add aws_ec2 to the enabled plugins.
[inventory]
enable_plugins = aws_ec2, host_list, yaml, ini, script
After that, following the documentation is quite easy.
File name: demo.aws_ec2.yml
# Minimal example using environment vars or instance role credentials
# Fetch all hosts in us-east-1, the hostname is the public DNS if it exists, otherwise the private IP address
plugin: aws_ec2
regions:
  - us-east-1
To run it:
ansible-inventory -i demo.aws_ec2.yml --graph
If you need to use it in a playbook, passing a private key as a parameter:
ansible-playbook -i demo.aws_ec2.yml playbook.yaml --private-key KEY
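The aws_ec2 plugin supports more than just regions. A slightly fuller sketch is below; the tag name Env is an assumption, so substitute whatever tags your instances actually carry.

```yaml
# demo.aws_ec2.yml — extended sketch (tag names are assumptions)
plugin: aws_ec2
regions:
  - us-east-1
# Only include running instances
filters:
  instance-state-name: running
# Build groups from a tag, producing e.g. tag_Env_production
keyed_groups:
  - key: tags.Env
    prefix: tag_Env
# Use the private IP as the inventory hostname (useful behind a bastion)
hostnames:
  - private-ip-address
```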

Monday, March 25, 2019

Display GIT branch in the terminal

I ran into the issue of how to display my Git branch in the terminal. I found the answer on Ask Ubuntu: https://askubuntu.com/questions/730754/how-do-i-show-the-git-branch-with-colours-in-bash-prompt

Step 1
nano ~/.bashrc

Step 2
Add the following code at the end of the file:
# Show git branch name
force_color_prompt=yes
color_prompt=yes
parse_git_branch() {
 git branch 2> /dev/null | sed -e '/^[^*]/d' -e 's/* \(.*\)/(\1)/'
}
if [ "$color_prompt" = yes ]; then
 PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[01;31m\]$(parse_git_branch)\[\033[00m\]\$ '
else
 PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w$(parse_git_branch)\$ '
fi
unset color_prompt force_color_prompt
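To see what parse_git_branch actually produces, you can feed the sed pipeline some sample `git branch` output by hand:

```shell
# Simulate `git branch` output: the first sed expression drops every line
# that does not start with '*', and the second wraps the current branch
# name in parentheses.
printf '  master\n* feature/login\n' \
  | sed -e '/^[^*]/d' -e 's/* \(.*\)/(\1)/'
# → (feature/login)
```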
Step 3
Reload the file
source ~/.bashrc