Category: aws

Migrated the DB

Turns out that running an RDS database is quite pricey, so I'm moving the database over to the EC2 instance whilst I re-evaluate my architectural decisions.  Spot pricing still looks to be the best way forward, but I need to make sure I'm getting backups – a nightly MySQL dump to S3 is probably the best way to do it.
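
Something like this is what I have in mind for the nightly job – a rough sketch only, with placeholder database and bucket names, assuming awscli is installed and MySQL credentials live in ~/.my.cnf:

=========
#!/bin/sh
# Nightly MySQL dump to S3 – sketch only; DB_NAME and BUCKET are placeholders
PATH=$PATH:/usr/bin:/usr/local/bin

DB_NAME="wordpress"
BUCKET="s3://my-backup-bucket/mysql"
STAMP=$(date +%Y-%m-%d)

# Dump, compress and stream straight to S3 without touching local disk
mysqldump --single-transaction "$DB_NAME" | gzip | aws s3 cp - "$BUCKET/$DB_NAME-$STAMP.sql.gz"
=========

A root crontab entry along the lines of 0 2 * * * /usr/sbin/backup-db-to-s3 would take care of the nightly part.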

Test post to make sure the database is working as it should be.

AWS Certified Cloud Practitioner

I’m on the train on the way back from Manchester to Leeds, having just completed my AWS Cloud Practitioner examination.

I passed!

Not sure of my score yet – waiting for the report and the certificate to be available from my certification account.

I’ve just realised that this is my first certification since getting my Network+ way back in 2004.  That’s insane.  14 years.

No wonder interviewers asked me if I’d considered getting certified in anything recently – it’s like those restaurants with a “Restaurant of the Year 2004–2005” sticker in the window – it just looks out of date.

Quite nervous now about the Solutions Architect Associate exam that I have at the start of October.  This first exam was meant to be a breeze and it was bloody tough.  I guess that means it’s worth the $120 I paid for it, though.

Lots of focus on Direct Connect – a technology that I haven’t really had much chance to play around with yet – and a few questions about Amazon Rekognition, which is relatively new too.

In terms of advice: prepare, prepare, prepare, and read the white papers.  The practice exams won’t get you ready – the real questions are worded much more ambiguously, and the answers are so similar to one another that if you don’t truly know the material, you simply won’t get it right.

So yeah.  AWS certification path started!

More training to complete over the next few weeks.

Auto-updating Route53 DNS when you launch a new EC2 instance based on an AMI

I came across an issue with my DNS entries: every time my spot instance was terminated, I had to manually change the A record.  That’s not very cloud-like.

I found an article (below):

Auto-Register EC2 Instance in AWS Route 53

The problem with the article is that the API has changed since it was written, so the script no longer works.

Steps 1–5 are spot on, and most of step 6 is perfect aside from the script.  My fixes are below: I’ve updated the API call and also added a PATH statement so that the script will run non-interactively.

Following the blog above, if you are using an Amazon Linux AMI as your base image, you’ll already have the awscli package, so you can skip the first part of step 6 as well.

=========

vi /usr/sbin/update-route53-dns
#!/bin/sh
PATH=$PATH:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin

# Load configuration and export access key ID and secret for cli53 and aws cli
. /etc/route53/config
export AWS_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY

# The TimeToLive in seconds we use for the DNS records
TTL="300"

# Get the private and public hostname from EC2 resource tags
REGION=$(curl -s http://169.254.169.254/latest/dynamic/instance-identity/document | grep region | awk -F\" '{print $4}')
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
INTERNAL_HOSTNAME=$(aws ec2 describe-tags --filters "Name=resource-id,Values=$INSTANCE_ID" "Name=key,Values=internal-hostname" --region=$REGION --output=text | cut -f5)
PUBLIC_HOSTNAME=$(aws ec2 describe-tags --filters "Name=resource-id,Values=$INSTANCE_ID" "Name=key,Values=public-hostname" --region=$REGION --output=text | cut -f5)

# Get the local and public IP Address that is assigned to the instance
LOCAL_IPV4=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
PUBLIC_IPV4=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)

# Create or update the A records on Route53 with the public and private IP addresses
cli53 rrcreate --replace "$ZONE" "$INTERNAL_HOSTNAME $TTL A $LOCAL_IPV4"
cli53 rrcreate --replace "$ZONE" "$PUBLIC_HOSTNAME $TTL A $PUBLIC_IPV4"

=========
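
To have the script actually run at boot, something like the below works – a sketch assuming an Amazon Linux image, where rc.local lives at /etc/rc.d/rc.local and needs the executable bit set:

=========
chmod +x /usr/sbin/update-route53-dns

# Append the script to rc.local so it runs on every boot
echo "/usr/sbin/update-route53-dns" >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local
=========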

 

Whilst this is great if you have tags in place, sometimes you want something hardcoded, so that you can update the DNS records quickly in the event of a failure or the spot request going away.

=========

#!/bin/sh
PATH=$PATH:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin

# Load configuration and export access key ID and secret for cli53 and aws cli
. /etc/route53/config
export AWS_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY

# The TimeToLive in seconds we use for the DNS records
TTL="300"

# Hardcode the private and public hostnames instead of reading them from EC2 resource tags
INTERNAL_HOSTNAME=web01
PUBLIC_HOSTNAME=www

# Get the local and public IP Address that is assigned to the instance
LOCAL_IPV4=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
PUBLIC_IPV4=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)

# Create or update the A records on Route53 with the public and private IP addresses
cli53 rrcreate --replace "$ZONE" "$INTERNAL_HOSTNAME $TTL A $LOCAL_IPV4"
cli53 rrcreate --replace "$ZONE" "$PUBLIC_HOSTNAME $TTL A $PUBLIC_IPV4"
=========
Route53 config updated, cron job added, WordPress storage offloaded to an S3 bucket

Interesting lunch hour today – had to fix a script I’d found that updates Route53 DNS records on reboot, because the API has changed since the author wrote it.  I sent him my additions and it works a treat.

Also got bucket storage set up, so my images are all being served from S3.
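
Roughly, the sync side of it looks like this – a sketch with a placeholder bucket name; rewriting the image URLs so that WordPress actually serves them from S3 is a separate step:

=========
# One-off copy of the existing WordPress uploads into the bucket
aws s3 sync /var/www/html/wp-content/uploads s3://my-media-bucket/wp-content/uploads

# /etc/cron.d entry to pick up anything new every 15 minutes
*/15 * * * * root aws s3 sync /var/www/html/wp-content/uploads s3://my-media-bucket/wp-content/uploads
=========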

More technical detail on what I did to come soon.

Exam tomorrow.  Eeeeek.

A few tweaks

So, as expected, I’m going to need to make a few tweaks to the AMI and the running image, along with adding a bit of scripting magic to make things run more smoothly.

As for the EC2 instance, spot requests are definitely the way to go.  The cost saving is insane, and it’s not like this website is business critical.

That said, I would like to keep it up and running which in itself presents a few challenges.

Though the database backend is RDS-backed – ensuring that all posts come back as they were if anything happens to the website – the EBS volume is deleted upon VM termination.

This presents some challenges for the photoblog, plus any media that I upload here.  I either need to find a way to back it all up somewhere, or I can leverage S3 buckets to do something clever.

That’s this week’s challenge.

I also need to implement some cleverness around automatically updating Route53 entries from the web server should it move.  I’ve found a few articles on that to look at this week.

So far I’m pretty impressed with how well the knowledge has stuck.  Exam on Wednesday, then I can concentrate fully on the Solutions Architect Associate course.  Full steam ahead on that one.

Tried to get the cloud gaming rig back up tonight, but it would seem a month offline has screwed something up between Steam, Parsec and the VPN.  I need to go back to it, give it some TLC, update various bits and pieces and remove anything superfluous.

Once I’ve done all that, I’ll snapshot it again, start from scratch and create an easy-to-follow guide to gaming on a high-powered AWS rig with Steam 🙂

We’re in the cloud!

Technically we were always in the cloud.  I mean, after all, the cloud is just somebody else’s computer, right?

I was using a hosting company called tsohost – I’d been using them for years and years and years.  £50 a year: they managed the underlying infrastructure, and I managed the sites and mail and whatnot.

Mail was crap, it was slow, the SSL certs kept erroring and, to be honest, I didn’t think I was getting value for money.

A few weeks ago, I started my AWS training in earnest, learning about EC2, S3, Elastic Beanstalk, Route53 and a whole host of other really cool tech.

Built a WordPress system about six different ways – some more manually than others.

It’s amazing the breadth of experience I’ve gained since I first started doing this stuff and first registered my domain way back in 2001.  Things have changed a lot.

This blog will be mainly tech based – see the menu above for different kinds of posts.  This is the preliminary design; it’s subject to change, and I still need to carry on with some bits and pieces in the back end.

More information on what’s powering this thing to follow.

I might even end up sticking some load balancing and auto scaling on – but that may well cost a fortune, so a smallish Linux server with a separate RDS-backed MySQL database will do for now.

I should probably go to bed – I’ve been doing this stuff for the last six hours, since the kids went to bed.