Month: September 2018

Practice makes perfect

I’m three quarters of the way through my acloudguru AWS training course for Solutions Architect Associate. Now that I’ve done most of the VPC chapter, and my understanding of the subject is helped infinitely by my experience, I decided it was time to use the practice exam voucher on the AWS.training site to see how far off I am, with less than two weeks to go until I sit it.

The questions were tough and the answer options certainly very similar to one another – if you didn’t know for certain what a particular service is called or what it does, you were in trouble.

I scored 84%

Looks like the thing I need to focus on the most over the next week or so is security.

Also just need to firm up on a couple of bits of terminology that almost caught me out. The main thing is making sure I read the question all the way through and that I apply logic to the answers that ‘could’ be correct.

Studying is going well. Still enjoying the subject matter. Going all in on AWS.

Costings

This is absolutely crazy.

Look at the price difference between a t3.micro and a t3.small running spot.

It’s well under half the price.

Much cheaper running it this way than having Apache on a t2.small and a separate RDS instance.

S3 backups are running well – or they are now that I’ve sorted out the cron job. Had a little issue with a misplaced * instead of a 0, which meant I got 60 backups between 1am and 2am rather than just one backup at 1am 😉
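
For anyone who hits the same thing, the difference is just the minute field of the crontab entry. A quick before-and-after sketch – the script path here is a made-up placeholder rather than my actual backup script:

=========
# What I had: * in the minute field runs the job every minute of the 1am hour (60 backups)
# * 1 * * * /usr/local/bin/mysql-s3-backup.sh

# What it should have been: 0 in the minute field runs it once, at 01:00
0 1 * * * /usr/local/bin/mysql-s3-backup.sh
=========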

Manflu is real

So less than a week after returning to school, Harry has brought a cold home.

More importantly than that, he’s shared it with me, and now I have the cold.

Luckily, this appears to be a cold that is paying attention to Day Nurse, so I’m at least getting a clear head once the tablets have kicked in.

Lots of stuff going on recently: passed my AWS certification exam, and I’ve got another one at the start of October, so that’s going to be good.

Played a load of Two Point Hospital – it’s really good fun.

Also started watching Grey’s Anatomy – which I’m really enjoying!

Migration complete

So the RDS instance has been terminated now.

I’ve got S3-backed MySQL backups running nightly, so I’m saving myself an absolute fortune.
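
The backup itself is nothing clever – a minimal sketch of the sort of script the nightly cron job runs, with the bucket name and paths below being placeholders rather than my exact setup:

=========
#!/bin/sh
# /usr/local/bin/mysql-s3-backup.sh – nightly MySQL dump pushed to S3
# Bucket name and paths are placeholders
BUCKET="s3://example-backup-bucket/mysql"
STAMP=$(date +%F)

# Dump everything; MySQL credentials live in /root/.my.cnf so nothing sits on the command line
mysqldump --all-databases --single-transaction | gzip > /tmp/mysql-$STAMP.sql.gz

# Copy the dump up to the bucket, then tidy up the local copy
aws s3 cp /tmp/mysql-$STAMP.sql.gz "$BUCKET/mysql-$STAMP.sql.gz"
rm -f /tmp/mysql-$STAMP.sql.gz
=========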

Still some tuning to do, and I’m not sure whether I’m going to stick with spot pricing or look into reserved pricing. I need to get some tuning in place first with regards to Apache and MySQL, and make sure with Trusted Advisor that I’m running at the right sort of level.

If I can get it running properly on a t2.nano or a t3.nano, I could probably save a small fortune, paying just $70-ish over the course of three years rather than the on-demand price.

Spot is okay, but now that my database is running on the same EC2 instance, I risk data loss if Amazon were to terminate my instance.

It’s only the 12th of the month – look at the state of the costs:

We’d be looking at an easy $13 a month just for RDS – probably more like $16 a month. That’s just too expensive for what I need – and let’s be honest, RDS-backed MySQL is a little overkill for a couple of WordPress blogs.

In terms of EC2 cost:

The biggest cost here is obviously the gaming instance I threw up for a work demo last week – even though I’ve only done 9 hours with it, the cost is $2.48.

In comparison, the spot instance running my web server at the moment (a t3.small) has cost me just $1.22 for 178 hours.

Whilst it was fun to play around with RDS and the security groups governing rules in and out, it’s just not required for my use case.

We’ll see how the next few weeks go 🙂

Migrated the DB

Turns out that running an RDS database is quite pricey, so I’m moving this over to the EC2 instance whilst I re-evaluate my architectural decisions. Spot pricing still looks to be the best way forward, but I need to make sure I’m getting backups – I’ll have to set up a nightly MySQL backup to S3, which is probably the best way to do it.
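
For reference, the move itself is fairly mechanical – a rough sketch of the commands involved, where the RDS endpoint, usernames and database name are placeholders rather than my actual values:

=========
# Dump the WordPress database out of RDS (endpoint, user and db name are placeholders)
mysqldump -h mydb.xxxxxxxxxx.eu-west-1.rds.amazonaws.com -u admin -p wordpress > wordpress.sql

# Create the database locally on the EC2 instance and load the dump in
mysql -u root -p -e "CREATE DATABASE wordpress"
mysql -u root -p wordpress < wordpress.sql

# Then point WordPress at the local database in wp-config.php:
# define('DB_HOST', 'localhost');
=========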

Test post to make sure the database is working as it should be.

AWS Certified Cloud Practitioner

I’m on the train on the way back from Manchester to Leeds, having just completed my AWS Cloud Practitioner examination.

I passed!

Not sure of my score yet – waiting for the report and the certificate to be available from my certification account.

I’ve just realised that this is my first certification since getting my Network+ way back in 2004. That’s insane. 14 years.

No wonder interviewers asked me if I’d considered getting certified for anything recently – it’s like those restaurants that have a ‘restaurant of the year 2004-2005’ sticker in the window – it just looks out of date.

Quite nervous now about the Solutions Architect Associate exam that I have at the start of October. This first exam was meant to be a breeze and it was bloody tough. I guess that means it’s worth the $120 I paid for it, though.

Lots of focus on Direct Connect – a technology that I haven’t really had much chance to play around with as yet – and a few questions about Amazon Rekognition, which is a relatively new service too.

In terms of advice: prepare, prepare, prepare, and read the white papers. The practice exams won’t get you ready – the real questions are worded in a much harder way, and the answers are so similar to one another that if you don’t know what you’re answering, you simply won’t get it right.

So yeah. AWS certification path started!

More training to complete over the next few weeks.

Auto updating Route53 DNS when you launch a new EC2 instance based on an AMI

I came across an issue with my DNS entries that meant every time my spot instance was terminated, I had to manually change the A record. That’s not very cloud-like.

I found an article (below):

Auto-Register EC2 Instance in AWS Route 53

The problem with the article is that the API has changed since it was written, and the script no longer works.

Steps 1-5 are spot on, and most of step 6 is perfect – aside from the script. My fixes are below: I’ve updated the API call and also added a PATH statement so that the script will run non-interactively.

Following the blog above, if you are using an Amazon Linux AMI as your base image, you’ll already have the awscli package, so you can skip the first part of step 6 as well.

=========

vi /usr/sbin/update-route53-dns
#!/bin/sh
PATH=$PATH:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin

# Load configuration and export access key ID and secret for cli53 and aws cli
. /etc/route53/config
export AWS_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY

# The TimeToLive in seconds we use for the DNS records
TTL="300"

# Get the private and public hostname from EC2 resource tags
REGION=$(curl -s http://169.254.169.254/latest/dynamic/instance-identity/document | grep region | awk -F\" '{print $4}')
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
INTERNAL_HOSTNAME=$(aws ec2 describe-tags --filters "Name=resource-id,Values=$INSTANCE_ID" "Name=key,Values=internal-hostname" --region=$REGION --output=text | cut -f5)
PUBLIC_HOSTNAME=$(aws ec2 describe-tags --filters "Name=resource-id,Values=$INSTANCE_ID" "Name=key,Values=public-hostname" --region=$REGION --output=text | cut -f5)

# Get the local and public IP Address that is assigned to the instance
LOCAL_IPV4=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
PUBLIC_IPV4=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)

# Create a new or update the A-Records on Route53 with public and private IP address
cli53 rrcreate --replace "$ZONE" "$INTERNAL_HOSTNAME $TTL A $LOCAL_IPV4"
cli53 rrcreate --replace "$ZONE" "$PUBLIC_HOSTNAME $TTL A $PUBLIC_IPV4"

=========
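
The script expects /etc/route53/config to provide the zone name and the credentials it exports. The layout below is my assumption of what that file looks like rather than anything taken from the article, and the values are obviously placeholders:

=========
# /etc/route53/config – sourced by update-route53-dns
ZONE="example.com"

# Credentials for an IAM user that only needs Route 53 record changes and ec2:DescribeTags
AWS_ACCESS_KEY_ID="AKIAXXXXXXXXXXXXXXXX"
AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
=========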

Whilst this is great if you have tags in place, sometimes you want something hardcoded so you can update the DNS records quickly in case of a failure or the spot request going away.

=========

#!/bin/sh
PATH=$PATH:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin

# Load configuration and export access key ID and secret for cli53 and aws cli
. /etc/route53/config
export AWS_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY

# The TimeToLive in seconds we use for the DNS records
TTL="300"

# Hardcoded hostnames instead of the tag lookup
INTERNAL_HOSTNAME=web01
PUBLIC_HOSTNAME=www

# Get the local and public IP Address that is assigned to the instance
LOCAL_IPV4=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
PUBLIC_IPV4=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)

# Create a new or update the A-Records on Route53 with public and private IP address
cli53 rrcreate --replace "$ZONE" "$INTERNAL_HOSTNAME $TTL A $LOCAL_IPV4"
cli53 rrcreate --replace "$ZONE" "$PUBLIC_HOSTNAME $TTL A $PUBLIC_IPV4"
=========
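
To have the script actually fire when a fresh instance comes up from the AMI, it just needs to be executable and called at boot. The original article covers wiring this in; the snippet below is simply one possible way of doing it on Amazon Linux via rc.local, and is my assumption rather than something taken from the article:

=========
chmod +x /usr/sbin/update-route53-dns

# Call it from rc.local so it runs on every boot
echo "/usr/sbin/update-route53-dns" >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local
=========
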
Route53 config updated, cron job added, wordpress storage offloaded to an s3 bucket

Interesting lunch hour today – had to fix a script I’d found to update Route 53 DNS records on reboot, because the API has changed since the guy wrote it. Gave him my additions and it works a treat.

Also got bucket storage set up, so my images are all being served from S3.

More technical data on what I did to come soon.

Exam tomorrow.  Eeeeek.

A few tweaks

So, as expected, I’m going to need to make a few tweaks to the AMI and the running image, along with adding a bit of scripting magic to make things run more smoothly.

With regards to the EC2 instance, spot requests are definitely the way to go. The cost saving is insane, and it’s not like this website is business critical.

That said, I would like to keep it up and running which in itself presents a few challenges.

Though the database backend is RDS-backed – ensuring that all posts come back as they were if anything happens to the website – the EBS volume is deleted upon instance termination.

This presents some challenges for the photoblog, plus any media that I upload here. I either need to find a way to back it up somewhere, or I can leverage S3 buckets in order to do something clever.
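
Just as an illustration of the S3 idea, something as simple as syncing the uploads directory to a bucket would do it – the bucket name here is made up:

=========
# Push the WordPress uploads directory to a bucket so media survives instance termination
aws s3 sync /var/www/html/wp-content/uploads s3://example-media-bucket/uploads
=========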

That’s this weeks challenge.

I also need to implement some cleverness around automatically updating Route 53 entries from the web server should it move. Found a few articles I’m going to look at this week in regards to that.

So far I’m pretty impressed with the way the knowledge has stuck. Exam on Wednesday then I can concentrate fully on the solutions architect associate course. Full steam ahead on that one.

Tried to get the cloud gaming rig back up tonight, but it would seem a month offline has screwed something with Steam, Parsec and the VPN. I need to go back to that, give it some TLC, update various bits and pieces and remove anything superfluous.

Once I’ve done all that, I’ll snapshot it again, start from scratch and create an easy-to-follow guide for gaming using a high-powered AWS gaming rig and Steam 🙂