A pretty cool post from Google’s blog that lets you quickly review 21 services in a series of short videos.
Some essential videos
You can find the full set of videos in the original post on Google’s Blog.
There is a “phenomenon” that I have experienced throughout my career that I like to call the “Reference Architecture Disappointment”.
Some people experience a similar effect when they go to the doctor’s consultation with several symptoms, only to find out that they may have a common cold. No frenzy at the hospital, no crazy consultations, no House MD TV scenes. Just paracetamol, water and rest!
So many years of medical school just to prescribe that?
Well, yes. The MD recognised a common cold among dozens of illnesses with the same set of symptoms and prescribed the simplest and best treatment. The question is, would you be able to do it?
Same thing when a Solutions Architect deals with a set of requirements. The “Architect” will select the best architecture that solves the business problem, as simply and efficiently as possible. Sometimes, that means using the “Reference Architecture” for that particular problem, with the necessary changes.
Those architectures emerge from practical experience and encompass patterns and best practices. Usually, reinventing the wheel is not a good idea.
Keep it simple and Rock On!
I’ve recertified at the time of writing this post – June 2021 – and wanted to share a bit of the experience and preparation for this one because it has been peculiar, to say the least.
My preparation is quite particular because I tend to prepare my own materials, using mostly my experience and the official resources available, which are quite good in general. So I checked out the certification site and the digital readiness training; it was almost the same as last time. I took the sample questions; they were new but similar to the old ones, nothing really novel there, except for CloudWatch Insights.
I started preparing using the official guide – I bought it in London in 2018 – the docs, some videos from the digital readiness course and my notes from different projects that I’d been working on; I couldn’t help thinking that the exam needed a refresh. After all, three years in “cloud-years” are a lot, maybe x2, due to the pace of current innovation. Also, many services were missing from the guide and exam outline, including Transit Gateway, AWS RAM, Global Accelerator … all very relevant for modern architectures.
Well, what do you know? I got an invitation to provide input on the new content outline of the exam!
I’m sure that you are aware that AWS Certified Advanced Networking is regarded as one of the most challenging certifications, if not the most. It’s certainly very subjective, depending on many personal factors. In addition, the subject matter is complex, and the official guide isn’t for beginners. It doesn’t hold hands – no funny stories about pets or people – and there is no official practice exam, except for the ones provided with the official guide. Finally, the exam doesn’t take any prisoners; it’s really tough.
As with the other Specialties, you might get questions solely about the subject matter at hand, but many of them will be cross-domain: Security, Architecture, Cost, Compliance, DevOps … It’s not an exam for beginners, and you should hold, at the very least, an associate certification or the equivalent experience.
Don’t forget this exam – and the rest of the certifications – tests experience, not only technical knowledge, so if you don’t have it, you will need to make up for it.
Sixty-five questions, multichoice, three hours – you know the drill.
A good surprise was waiting for me. I was expecting a new set of questions – one of my connections on LinkedIn mentioned it – and I got them. But I wasn’t expecting the exam to be so up-to-date! Really surprising, because I’d just finished a survey about the contents of the new revision of the exam.
Luckily, that wasn’t a problem because I prepare comprehensively, and networking seems to be a big part of any project I work on.
The current revision goes beyond the official guide and updates services and scenarios. I have to say that the quality of the questions is higher than in previous incarnations: clearer and better wording, and common real-life scenarios. Actually, I had faced most of them, so there are no unique special cases to trick you. However, that doesn’t mean they are easy. They are not. Some are lengthy, with similar responses, multi-choice …
The sample questions are very relevant, but (mostly) they don’t refer to the new services.
Happily, I passed and improved my score massively from last time, which is always nice 🙂
After the exam, I went online and found a post on the AWS certification blog discussing the contents of the exam, from April 2021. So I’d guess this update is quite recent.
I got the outlines from the original post by Nigel Harris – kudos, mate 🙂 The contents are absolutely relevant for the exam. I’m adding my personal notes – in italics – but check the original post for resources and the comments from the original author.
AWS Lambda, Lambda@Edge, Amazon CloudFront – CloudFront is key; understand how it works with different origins. Remember, the RTMP distribution has been deprecated – mostly outdated content on the official guide – expand and review with other resources.
AWS Global Cloud Infrastructure, Virtual Private Cloud (VPC)
Dynamic Host Configuration Protocol (DHCP) configurations, route tables, network-access control lists (NACLs), and security groups.
NAT gateways (NGW), internet gateways (IGW), egress-only internet gateways (EIGW), and virtual gateways (VGW).
All basic stuff; you should know all that by heart if you are attempting the exam – good content on the official guide, but expand with other resources.
VPNs, AWS Direct Connect – everything about them: technical specifications, scenarios, cost … good content on the official guide, but expand with other resources.
VPC peering, AWS Transit Gateway – everything about them: technical specifications, scenarios, cost … outdated content on the official guide – expand and review with other resources. You should know about Transit VPCs, though. The topic still appears on the exam, and you may have to deal with it in some project. If you don’t have real-life experience with the services, you should get some through laboratories or actual projects.
CloudFormation – got a few questions about it – good content on the official guide, but expand with other resources.
AWS PrivateLink, Gateway Endpoints, Interface endpoints – everything about them: technical specifications, scenarios, cost … good content on the official guide, but expand with other resources. If you don’t have real-life experience with the services, you should get some through laboratories or actual projects.
HIPAA, EU/US Privacy Shield, and PCI.
Mostly outdated content on the official guide – expand and review with other resources.
VPC flow logs, access logs for your application load balancer, and CloudFront logs.
Mostly outdated content on the official guide – expand and review with other resources.
Placement groups, jumbo frames, and elastic network adapters.
Good content on the official guide, but expand with other resources.
Mostly outdated content on the official guide, so expand with other resources. All those services are key, so make sure to get some real-life experience with them through laboratories or actual projects.
As I mentioned previously, while I was preparing for the recertification, I got an invitation to a survey about the contents of the new revision of the exam.
The thing is, the exam is updated. However, the official guide is not. So I’d guess this will be an opportunity to deliver a new guide and training content.
The new contents seem similar to the present incarnation, reducing the domains from five to four, adding new services, and increasing coverage of security, networking performance, reliability and monitoring. Potentially, there might be laboratories as well. The exam’s not getting any easier, that’s for sure 😉
I’d guess we may get a beta at the end of the year, looking forward to it!
Last Wednesday, 19/05/2021, I attended one of AWS’s Virtual Days that are being organized regularly. This time it was about migrations, which is a hot topic in the enterprise right now. Sometimes it feels like everything is about Machine Learning or other sideline subjects. Still, in reality, most big projects are about migrating apps from on-premises environments to the cloud.
The Virtual Day was organised around the following subjects:
I wanted to share some takeaway points from Day II, as I missed Day I, which I can only presume was about Lift & Shift tools and operations, which AWS has covered extensively. Services like AWS Application Migration Service – the console version of CloudEndure – or AWS Migration Hub are extremely comprehensive and cutting edge, on top of classic services like Storage Gateway or the Snow family.
A handy tool for analyzing and containerizing Java and .NET apps. My experience with the tool is very positive; it can really accelerate the migration of existing applications, as it generates several artefacts for services like ECS and Kubernetes.
This part of the webinar was really technical and covered a wide range of topics – I can’t complain, though 🙂
Lake Formation is an interesting service that I think has a lot of potential for the future. Actually, new features are on the way; we’ll see the direction that it takes.
At the moment, the most interesting feature is the centralized granular set of permissions to manage data sets securely. It took me some time to get my head around it, but after that it worked very well.
AWS Glue Studio looks interesting, but I haven’t used it just yet.
Very well known service and interesting webinar. The only point to highlight is the extensive catalogue of sources and destinations.
My experience in SAP workloads is minimal, so I was really impressed with AWS’s coverage of the subject. Exciting webinar.
The past 15th of March, I sat Google’s Professional Cloud Architect beta exam, so I’d like to share some of my thoughts now that some time has passed.
If you haven’t taken any Google certification before, let me tell you that this test could be very different from your expectations. This is not a highly technical exam; it’s focused on architecture scenarios. To understand it better and get a contrast, let’s explore another vendor’s version of the test first.
You are presented with seventy-five question-scenarios, highly technical and mostly based on their tech. It’s a challenging test, where you need to know many of the platform’s technical intricacies. Like Google’s, I think it reflects the culture and their idea of architecture; in this case, highly specialized in the vendor’s technologies. Don’t get me wrong, it’s challenging and a lot of fun. I allocated around nine months to take on that certification, and I’d had experience with the platform since 2010. The thing is, working for AWS is probably like that. I had some experience with them last year, and they were highly specialized in certain areas and technologies.
What’s the problem with that approach? I think that type of certification is confusing many. AWS is very clear, though; you’d need “Two or more years of hands-on experience designing and deploying cloud architecture on AWS”. This is a professional test and means that you should back that certification with professional experience because the exam is only a highly abstract version of the job’s technical side. You are supposed to have the soft skills, broad experience in different technologies and industries, and the intuition that comes with the job to succeed in real life.
Most architectures don’t live in a vacuum, and any change requires a lot of technical work – usually integrating with other technologies. But no company or customer is going to take your proposal at face value. A lot of discussions with different teams, questions, presentations, budgets, validations and certifications will happen before you can make any change … in a few words, it’s not all about knowing the technical side of things; sometimes, that’s the easiest part.
I get many messages from people from other fields and even other industries – Finance, Entertainment, Hospitality – who reversed the process. They took the certification with little experience in cloud or architecture, and now they can’t find a job. Why? Because they are missing many other skills, and some you can only get on the job. It’s an organic process.
I’m discussing the Beta version of the test, but I don’t think the final version will be very different, at least in the core values. I think this test pushes you to show the experience as an Architect as a whole, not just the technical side of things. So it can be a more difficult exam than AWS’s, even though it could be seen as easier on the surface as you don’t get that many complex scenarios with multi-choice answers that look very similar.
Overall, I think it’s a good and challenging update that now ranks high in terms of difficulty and reflects a bit better the Architect’s job – and Google’s take on it.
The past 24th of February, I attended the AWS Innovate – AI/ML Edition, Technical Decision Maker Track; it was an exciting event, so I’d like to share some quick takeaways:
📌 Scaling ML as a Journey; 7 fundamental steps: Culture, Team Enablement, Data Strategy, PoC, Repeatability, Scale, Evolution.
📌 S3 strong read-after-write consistency: it was introduced at last re:Invent, but only now have I had time to check it out properly. It’s an essential feature for migrations or Data Lakes, ensuring you always have the latest version of documents or files.
📌 New AWS AI Services such as Amazon Lookout for Vision: also introduced at last re:Invent; again, now I had the chance to try it. It seems very appropriate for industrial applications, such as finding defective parts.
📌 The proper way to architect AWS ML Apps: ML Lens
📌 Secure Machine Learning for Regulated Industries: I especially enjoyed this presentation, quite hands-on and with lots of real-life security practices for SageMaker.
I’m still going through the other tracks, so expect a full post in the coming weeks.
Image property of aws.com
It’s no secret what a huge revolution Kubernetes has ignited in the industry since Google introduced it back in 2014, so I’d guess we don’t need to go there.
The GCP offering for Kubernetes is GKE, which provides a fully managed environment for orchestrating and deploying containers in the cloud.
GKE now offers two operation modes:
In the Standard operation mode, the cluster is managed, but the infrastructure is configured and handled by the customer: it needs configuration for scaling and node provisioning, which provides a lot of flexibility.
Autopilot mode has been introduced to provide a full, streamlined NoOps experience: GKE fully manages the infrastructure. Node provisioning and scaling are handled automatically for you – no more worries about master and worker nodes. You lose some flexibility, though, but that’s the usual compromise.
I initially created a cluster in europe-north-1, but I got some problems deploying the pods to the cluster, so I changed it to usa-north, and it worked – I guess some region limitations at the moment, or some transient problems.
After creating the cluster, I deployed a basic web container with a web service but no node configuration, which speeds up and simplifies provisioning.
Per the documentation, Autopilot applies the following values for the pod’s resources:
Finally, I created a service to expose the endpoint to the world. I selected the LoadBalancer type because Autopilot doesn’t allow ExternalIPs; alternatively, you could use an Ingress.
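The steps above can be sketched as a pair of Kubernetes manifests. This is just an illustrative sketch: the names and the sample image are placeholders, and Autopilot applies its own defaults to any resource values you don’t set explicitly:

```yaml
# Hypothetical manifests for the walkthrough above; names and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: hello-web
        image: gcr.io/google-samples/hello-app:1.0   # sample image used in GKE docs
        ports:
        - containerPort: 8080
        resources:
          requests:            # explicit requests; Autopilot applies defaults otherwise
            cpu: 500m
            memory: 2Gi
---
apiVersion: v1
kind: Service
metadata:
  name: hello-web
spec:
  type: LoadBalancer           # Autopilot doesn't allow ExternalIPs; use LoadBalancer or Ingress
  selector:
    app: hello-web
  ports:
  - port: 80
    targetPort: 8080
```

Applying both with `kubectl apply -f` is enough for Autopilot to size and provision the underlying nodes on its own.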
And that’s all; our web app is ready. Autopilot automatically provisioned three nodes using e2-medium machines. After invoking the service a few times, the allocated resources were low, as shown in the image above.
I still need to do load testing – and cost calculations – with a complex application and see how the autoscaling behaves. But my initial impression is excellent: it couldn’t be easier to provision a Kubernetes cluster. Read more about it on the GCP blog.
Cloud Run is the Serverless Container Platform offering by GCP, launched back in 2019. I’m sure you get the idea: deploying apps very quickly – packaged in containers – in a fully managed environment.
After reading this book by Wietse Venema, and going through most of the hands-on examples, I can recommend it without any reservations – well, except for the price, but it’s a niche book after all 🙂
Right off the bat, I’ll tell you that it’s not an 800-page bible. It’s a relatively short book, very well written, concise and clear. Around 160 pages, packed with concepts, real-life advice and hands-on examples, catering to different audiences and proficiency levels. So you get short explanations about Docker as well as more advanced discussions about transaction concurrency and resource contention.
I love the experience of moving through the book, creating my own reading path, and coming back to it many times. It’s the sort of experience that you can’t have with other kinds of media, and it actually anchors the information in your mind. The structure of the book makes that very easy, adjusting to your experience level.
It reminds me a lot of a series of books released by O’Reilly back in the mid-2000s, the Notebook series. I own four of them, just found two of them around, but I have another two packed in containers – seriously 🙂
Official Docs – https://cloud.google.com/run/docs/?hl=es-AR
You probably remember how big a player Yahoo was in the tech scene from the late ’90s to the 2000s, and I surely still remember how their CEO rejected the $44.6 billion buyout offer from Microsoft in 2008 – ouch!
Now Yahoo is part of Verizon Media. They have just finished a massive migration of Hadoop and Enterprise Data Warehouse (EDW) workloads to Google Cloud’s BigQuery and Looker, becoming a big part of their MAW – Media Analytics Warehouse.
I don’t need to vouch for the power and flexibility of BigQuery as a tool; it’s well known: real-time or batch analytics, data warehousing, or even AI, without having to move the data out for processing, using just SQL.
I’ve been using it lately in that capacity – BigQuery ML – and it’s really easy, even from Jupyter Notebooks:
%load_ext google.cloud.bigquery

%%bigquery
SELECT
    source_year AS year,
    COUNT(is_male) AS birth_count
FROM `bigquery-public-data.samples.natality`
GROUP BY year
ORDER BY year DESC
LIMIT 15
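BigQuery ML models are also created with plain SQL, via a CREATE MODEL statement. As a minimal sketch, here is a small helper that assembles such a statement for a logistic regression over the same public natality dataset; the model path is a hypothetical placeholder, and you would pass the resulting string to your BigQuery client of choice:

```python
# Minimal sketch of assembling a BigQuery ML statement.
# The model path below is an illustrative placeholder.
def create_model_sql(model_path: str, source_table: str, label: str) -> str:
    """Build a CREATE MODEL statement for a BigQuery ML logistic regression."""
    return (
        f"CREATE OR REPLACE MODEL `{model_path}`\n"
        f"OPTIONS(model_type='logistic_reg', input_label_cols=['{label}']) AS\n"
        f"SELECT * FROM `{source_table}`"
    )

sql = create_model_sql(
    "my-project.my_dataset.natality_model",    # hypothetical project/dataset/model
    "bigquery-public-data.samples.natality",   # public sample dataset
    "is_male",
)
print(sql)
```

From a notebook, the same statement can simply be run in a `%%bigquery` cell.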
Read more about Verizon’s migration in the following article: