
AWS re:Invent

As a professional working with AWS, is there any event in the calendar year more exciting than AWS re:Invent?

 

Gutted I didn’t get the chance to go this year.  I am SO EXCITED by the new tech coming out of there though.

 

Robotics AND SATELLITE DATA!?

 

Amazing stuff.

Jefferson Frank Salary Survey: Key Findings

Sam Samarasekera, Business Manager at AWS recruitment firm Jefferson Frank, discusses the key findings of the company’s independent salary survey, exploring everything from diversity to salary benchmarks, certification and beyond. 

 

Amazon Web Services’ incredible growth comes with continued benefits to technology industry professionals, but until now they have been difficult to measure accurately. The Jefferson Frank Salary Survey just landed and, as the largest independent study of the industry, it offers a fascinating insight into our working environment. 

 

For the first time, you can get a detailed breakdown of how you can expect to be remunerated, as well as a more comprehensive idea of areas of growth. Not only is it valuable for making sure you’re being rewarded fairly in your present role, it also gives you a clear view of the areas of development that will improve your market value.

 

It also provides a wealth of knowledge for anyone else involved in the industry, including customers and partners, offering a detailed breakdown of the technology and the direction it’s heading in.

 

I’ve broken down some of the key findings from the 2018/19 survey:

 

How quickly is AWS growing?

 

Of the survey respondents, 65% had over 10 years’ experience working in the technology industry, with only 10% having less than three years in the industry, so the results are a good indicator of the views of experienced IT professionals. Just 7% of those interviewed had more than seven years’ experience working with AWS, and 63% had only started working with it in the last three years, pointing to this being a huge period of expansion for AWS across the technology sector.

 

How are companies implementing it?

 

With cost cited as the most important consideration when choosing a cloud provider, it’s no surprise that companies are making the switch to AWS internally. Only 14% of companies that migrated to a cloud provider used contractors, which means increasing your knowledge base will make you a vital team member going forward. Its ease of implementation makes it a big draw, so if your company hasn’t already made the switch, you can expect it to do so sometime soon. The average time taken to migrate to the cloud is 10 months, and with 66% of companies using internal resources to make the change, it seems unlikely you will escape it.

 

Core skill set

 

So what do you need to brush up on? The number of core products available to work with is vast and one of AWS’ key strengths, with users praising its scope for “tweaking” things rather than having to implement wholesale changes. Forty-three different products were each used by at least one in five AWS professionals, so there’s plenty to get your teeth into. The most popular by far is EC2, with 89% having experience with it. That’s a huge lead over S3 (70%) and CloudWatch (69%).

 

Regarding AWS products, partners expect the biggest increase in demand in the next 12 months will be for Amazon EC2, AWS Lambda and Amazon EKS, which is a handy pointer if you wish to get a head start. Basically, if you haven’t already, you need to learn EC2!

 

What makes EC2 so popular?

 

The majority of respondents considered EC2 a part of their core skill set, with EC2 Auto Scaling a close second. EC2’s scalable capacity is undoubtedly its biggest feature and the reason behind its continued growth. Beyond the free tier, the speed and ease with which capacity can be added make it the go-to resource, with its pay-as-you-go pricing also coming in for particular praise. Ease of use was another key reason behind its popularity and why it looks likely to continue to dominate. No wonder 76% of respondents listed EC2 as the most important area of product knowledge for cloud professionals.

 

Third-party tools

 

A lot of the development community has already had exposure to Jenkins, which explains why it’s the most-used third-party tool by quite a distance. Some 21% of respondents use it, with Terraform (13%) and Ansible (10%) lagging some way behind. The fact that it’s open source makes up for its tricky user interface, with users giving it many more ticks in the pros column than the cons.

 

Jenkins and Terraform are expected to be amongst the most in-demand tools in DevOps and big data in the next 12 months, so familiarisation with these as well as Java will definitely give you an advantage.

 

The all-important certifications

 

Over half of the survey participants were not AWS certified before taking up their current role, so it isn’t essential if you’re considering a job change. However, two-thirds of respondents now hold certs, so be prepared to knuckle down as it’s highly likely that you’ll be expected to gain them once you’re in position. If you simply wish to increase your earning potential or become more employable, they appear to be a must-have.

 

The financial benefits of certifications

 

We’ll start with the bad news: less than half of respondents’ employers paid for their certifications in full (49%), although a further 11% did contribute towards the cost. However, and this is the good news, 22% of professionals reported a direct increase in salary after passing their certs. Participants listed relevant certifications as their number one way to increase earning potential, so it’s a worthy investment even if you have to make it yourself.

 

Which certs do I need?

 

The top certifications are AWS Certified Solutions Architect, AWS Certified Developer and AWS Certified SysOps Administrator. However, accreditations in programming languages such as Java and PHP will also be advantageous, although Python is expected to be the most in-demand language in the next 12 months.

 

If you’d like more detail, the Jefferson Frank Annual Salary Survey is an invaluable resource for any technology professional. It’s available to download now and has a more comprehensive breakdown of salaries across the globe, as well as further information on products, certifications and tech trends over the past 12 months as well as the coming year.

Promotional credit cannot be used for Reserved Instances!

I’ve started to play around with Bamboo and a few other bits and pieces in the evenings, and I’ve managed to find a few promotional $150 AWS vouchers – the kind you’d usually get from AWS training courses – on eBay; prices range from about £11 to £35 depending on expiry.  Bamboo will not run on a t3.micro – it just doesn’t have the CPU capacity – it definitely needs 2 CPUs, even for a tiny installation – that’s Java for you 😉

 

I had a spot instance running, but it kept going away due to capacity.  It’s running again on a spot instance, but I’ve whacked the max price up to $1 an hour now, so hopefully that’ll never go away.  I’m also putting database backups in place to make sure I don’t lose anything should it go down.
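
For reference, a rough sketch of the kind of CLI call that sets that price ceiling – the spec file and its contents are placeholders, not my actual setup:

=========
# Persistent spot request with a $1.00/hour maximum price
# (launch-spec.json is an illustrative placeholder)
aws ec2 request-spot-instances \
    --spot-price "1.00" \
    --instance-count 1 \
    --type "persistent" \
    --launch-specification file://launch-spec.json
=========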

 

I’ve got $450 of credit on my account and it goes towards the EC2 cost along with quite a few other services.  Here’s the complete list:

 

  • Amazon Simple Storage Service
  • AWS Lambda
  • Amazon Glacier
  • Amazon Sumerian
  • Amazon Relational Database Service
  • Amazon SimpleDB
  • AWSCloudDirectoryInternal
  • Amazon CloudSearch
  • AWS IoT
  • Amazon Lightsail
  • Amazon Elastic Compute Cloud
  • AWS Elastic Beanstalk
  • Amazon Simple Notification Service
  • AWS Config
  • Amazon Redshift
  • AmazonCloudWatch
  • Amazon Elastic File System
  • AWS Data Pipeline
  • Amazon ElastiCache
  • Amazon CloudFront
  • Amazon Elastic Transcoder
  • Amazon Virtual Private Cloud
  • Amazon Lex
  • AWS Data Transfer
  • Amazon Simple EDI
  • AWS OpsWorks
  • AWS Storage Gateway
  • Amazon Simple Email Service
  • Amazon Machine Learning
  • Amazon SageMaker
  • Amazon Elastic MapReduce
  • AWS X-Ray
  • Amazon DynamoDB
  • Amazon EC2 Container Registry (ECR)
  • AWS Glue
  • AWS Budgets
  • Amazon Polly
  • Amazon Route 53
  • AWS Support (Basic)
  • Amazon Elasticsearch Service
  • AWS CloudTrail
  • Amazon Rekognition
  • AWS Key Management Service
  • Amazon Simple Queue Service
  • Amazon AppStream
  • Amazon Kinesis

 

So, seeing that EC2 was in the list, and there was no way in hell I was going to burn through that much credit by the end of next year, I decided to purchase some Reserved Instances: a t3.micro for three years and a t3.small for one year, all upfront – total cost was $264 + 20% VAT.  And this charged itself directly to my credit card, with pretty much no confirmation.  To be fair to Amazon, I did click the purchase button; I just expected to see a final confirmation screen showing me where the funds were going to come from.  That didn’t happen.

 

To be fair, there is documentation to support that:

 

https://aws.amazon.com/premiumsupport/knowledge-center/credits-ri/

 

I just didn’t see it before I clicked the old purchase button 😉
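
If you’d rather script future purchases than trust the console, something like this works – the offering ID below is a placeholder, and the --dry-run flag at least lets you rehearse the purchase without committing any money:

=========
# Look up All Upfront t3.micro offerings, then buy one
aws ec2 describe-reserved-instances-offerings \
    --instance-type t3.micro \
    --offering-type "All Upfront" \
    --product-description "Linux/UNIX" \
    --max-results 5

aws ec2 purchase-reserved-instances-offering \
    --reserved-instances-offering-id 438012d3-example \
    --instance-count 1 \
    --dry-run
=========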

 

Their support team have been lovely: they’ve managed to cancel the Reserved Instances and are starting the process to refund me.  I upgraded to Developer support at $29 a month to get my request actioned sooner.  I’ve not decided whether I’m going to keep that in place yet – there are benefits to it for sure, and as I start to do more and more over the coming months, it might be a good idea.  We’ll see.

 

WorkMail isn’t covered by the credit – that’s $4 per user per month – but to be honest, it is so much better than what I’m used to with my previous host that it makes me realise just how great an enterprise-level solution it is.

Practice makes perfect

I’m three-quarters of the way through my A Cloud Guru training course for AWS Solutions Architect Associate. Now that I’ve done most of the VPC chapter – and my understanding of the subject is helped infinitely by my experience – I decided it was time to use the practice exam voucher on the AWS.training site to see how far off I am, with less than two weeks to go until I sit it.

The questions were tough and the answer options certainly very similar to one another – if you don’t know for certain what a particular service is called or what it does, then you’re in trouble.

I scored 84%

Looks like the thing I need to focus on the most over the next week or so is security.

Also, I just need to firm up on a couple of bits of terminology that almost caught me out. The main thing is making sure I read each question all the way through and that I apply logic to the answers that ‘could’ be correct.

Studying is going well. Still enjoying the subject matter. Going all in on AWS.

Costings

This is absolutely crazy.

 

Look at the price difference between a t3.micro and a t3.small running spot.

 

It’s well under half the price.

 

Much cheaper running it this way than having Apache on a t2.small and a separate RDS instance.

 

S3 backups are running well – or they are now that I’ve sorted out the cron job.  I had a little issue with a misplaced * instead of a 0, which meant I got 60 backups between 1am and 2am rather than just one backup at 1am 😉
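
For anyone curious, the whole difference was one field in the crontab – the script path here is an illustrative placeholder:

=========
# Wrong: "* 1 * * *" matches every minute of the 1am hour – 60 runs
* 1 * * * /usr/local/bin/mysql-backup-to-s3.sh

# Fixed: "0 1 * * *" fires once, at 01:00
0 1 * * * /usr/local/bin/mysql-backup-to-s3.sh
=========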

Migration complete

So the RDS instance has been terminated now.

I’ve got S3-backed MySQL backups running nightly, so I’m saving myself an absolute fortune.
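
The backup itself is nothing fancy – a minimal sketch of the sort of script involved, assuming the aws CLI already has credentials; the bucket name and paths are placeholders:

=========
#!/bin/sh
# Dump all databases, compress, ship to S3, then tidy up
STAMP=$(date +%F)
DUMP=/tmp/mysql-$STAMP.sql.gz

mysqldump --all-databases --single-transaction | gzip > "$DUMP"
aws s3 cp "$DUMP" "s3://my-backup-bucket/mysql/mysql-$STAMP.sql.gz"
rm -f "$DUMP"
=========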

Still some tuning to do, and I’m not sure whether I’m going to stick with spot pricing or look into reserved pricing.  I need to get some tuning in place first with regards to Apache and MySQL, and make sure via Trusted Advisor that I’m running at the right sort of level.

If I can get it running properly on a t2.nano or a t3.nano, I could probably save a small fortune, paying just $70-ish over the course of three years rather than the on-demand price.

Spot is okay, but now that my database is running on the same EC2 instance, I risk data loss if Amazon were to terminate my instance.

It’s only the 12th of the month – look at the state of the costs:

We’d be looking at an easy $13 a month just for RDS – probably more like $16 a month.  Just too expensive for what I need – and let’s be honest, RDS-backed MySQL is a little overkill for a couple of WordPress blogs.

In terms of EC2 cost:

The biggest cost here is obviously the gaming instance I threw up for a work demo last week – even though I’ve only done 9 hours with it, the cost is $2.48.

In comparison, the spot instance running my web server at the moment (a t3 small) has cost me just $1.22 for 178 hours.

Whilst it was fun to play around with RDS and the security group rules governing traffic in and out, it’s just not required for my use case.

We’ll see how the next few weeks go 🙂

Migrated the DB

Turns out that running an RDS database is quite pricey – I’m moving this over to the EC2 instance whilst I re-evaluate my architectural decisions.  Spot pricing still looks to be the best way forward, but I need to make sure I’m getting backups – I’ll have to set up a nightly MySQL backup to S3, probably the best way to do it.

Test post to make sure the database is working as it should be.

AWS Certified Cloud Practitioner

I’m on the train on the way back from Manchester to Leeds, having completed my AWS Cloud Practitioner examination.

 

I passed!

 

Not sure on my score yet, waiting for the report and the certificate to be available from my certification account.

 

I’ve just realised that this is my first certification since getting my Network+ way back in 2004.  That’s insane.  14 years.

 

No wonder interviewers asked me if I’d considered getting certified in anything recently – it’s like those restaurants that have a ‘Restaurant of the Year 2004–2005’ sticker in the window – it just looks out of date.

 

Quite nervous now about the Associate Solutions Architect exam that I have at the start of October.  This first exam was meant to be a breeze and it was bloody tough.  I guess that means that it’s worth the $120 I paid for it though.

 

Lots of focus on Direct Connect – a technology that I haven’t really had much chance to play around with as yet – and a few questions about Amazon Rekognition, which is a relatively new technology too.

 

In terms of advice: prepare, prepare, prepare, and read the white papers.  The practice exams won’t get you ready – the real questions are worded so much harder, and the answers are so similar to one another that if you don’t know what you’re answering, you simply won’t get it right.

 

So yeah.  AWS certification path started!

 

More training to complete over the next few weeks.

Auto updating Route53 DNS when you launch a new EC2 instance based on an AMI

I came across an issue with my DNS entries that meant every time my spot instance was terminated, I had to manually change the A record.  That’s not very cloud-like.

Found an article (below):

Auto-Register EC2 Instance in AWS Route 53

The problem with the article is that the API has changed since it was written and the script no longer works.

Steps 1–5 are spot on, and most of step 6 is perfect, aside from the script.  My fixes are below: I’ve updated the API call and also added a PATH statement so that the script will run non-interactively.

Following the blog above, if you are using an Amazon Linux AMI as your base image, you’ll already have the awscli package, so you can skip the first part of step 6 as well.

 

=========

vi /usr/sbin/update-route53-dns
#!/bin/sh
PATH=$PATH:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin

# Load configuration and export access key ID and secret for cli53 and aws cli
. /etc/route53/config
export AWS_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY

# The TimeToLive in seconds we use for the DNS records
TTL="300"

# Get the private and public hostname from EC2 resource tags
REGION=$(curl -s http://169.254.169.254/latest/dynamic/instance-identity/document | grep region | awk -F\" '{print $4}')
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
INTERNAL_HOSTNAME=$(aws ec2 describe-tags --filters "Name=resource-id,Values=$INSTANCE_ID" "Name=key,Values=internal-hostname" --region=$REGION --output=text | cut -f5)
PUBLIC_HOSTNAME=$(aws ec2 describe-tags --filters "Name=resource-id,Values=$INSTANCE_ID" "Name=key,Values=public-hostname" --region=$REGION --output=text | cut -f5)

# Get the local and public IP Address that is assigned to the instance
LOCAL_IPV4=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
PUBLIC_IPV4=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)

# Create a new or update the A-Records on Route53 with public and private IP address
cli53 rrcreate --replace "$ZONE" "$INTERNAL_HOSTNAME $TTL A $LOCAL_IPV4"
cli53 rrcreate --replace "$ZONE" "www $TTL A $PUBLIC_IPV4"

=========
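
For completeness, the script sources /etc/route53/config for its credentials and zone, needs to be executable, and wants running at every boot – a minimal sketch, assuming your AMI still honours /etc/rc.local (the key values are placeholders):

=========
# Create the config file the script sources (placeholder values)
cat <<'EOF' > /etc/route53/config
AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
ZONE=example.com
EOF

# Make the script executable and run it on every boot, so a freshly
# launched instance registers its own DNS without manual steps
chmod +x /usr/sbin/update-route53-dns
echo "/usr/sbin/update-route53-dns" >> /etc/rc.local
=========

An instance profile with Route 53 and EC2 read permissions would avoid keeping keys on disk at all, but the config-file approach matches what the script above expects.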

 

Whilst this is great if you have tags in place, sometimes you want to have something hardcoded to update the DNS records quickly in case of failure/spot request going away.

=========

#!/bin/sh
PATH=$PATH:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin

# Load configuration and export access key ID and secret for cli53 and aws cli
. /etc/route53/config
export AWS_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY

# The TimeToLive in seconds we use for the DNS records
TTL="300"

# Hostnames hardcoded rather than pulled from EC2 resource tags
INTERNAL_HOSTNAME=web01
PUBLIC_HOSTNAME=www

# Get the local and public IP Address that is assigned to the instance
LOCAL_IPV4=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
PUBLIC_IPV4=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)

# Create a new or update the A-Records on Route53 with public and private IP address
cli53 rrcreate --replace "$ZONE" "$INTERNAL_HOSTNAME $TTL A $LOCAL_IPV4"
cli53 rrcreate --replace "$ZONE" "$PUBLIC_HOSTNAME $TTL A $PUBLIC_IPV4"
=========