AWS Certified Solutions Architect Professional - SAP-C02 Exam Course Training - Practice Questions
Architecture Bytes | 2023-12-22
💫 Short Summary
The video covers a wide range of AWS services and solutions, including storage classes, data access patterns, security services, database management, data transfer, load balancing, and more. It emphasizes the importance of choosing the right solutions based on performance, durability, and cost-effectiveness for various scenarios. The video also discusses strategies for managing traffic within the AWS network, cloud device security, customer support services, compliance, and data protection. Overall, it provides insights into optimizing AWS resources, enhancing security, and implementing effective solutions for different use cases.
✨ Highlights
📊 Transcript
✦
Overview of AWS storage classes and solutions.
00:00Different storage classes like S3 Standard, S3 Intelligent-Tiering, S3 Infrequent Access, and S3 Glacier are discussed based on data access patterns and cost-effectiveness.
Storage solutions like Amazon EFS, EBS, S3, EC2 instance store, and FSx are explained with their features, limitations, and best use cases.
Emphasis on choosing the right storage solution for performance, durability, and cost, with examples of scenarios where each type of storage is most suitable.
✦
Features of S3 Access Points and Amazon FSx for Windows File Server.
04:48S3 access points allow control over shared data among different teams with varying security needs.
Amazon FSx for Windows File Server offers high-performance file storage with native Windows support.
Cross-region S3 replication needs versioning and filters for replication scope.
Hosting a static website on S3 removes the need for application servers or EC2 instances.
✦
AWS S3 Transfer Acceleration and Multi-part file upload optimize file transfers to S3.
09:04Transfer Acceleration utilizes CloudFront's Edge locations for faster transfers.
When versioning is enabled on an S3 bucket, existing objects are assigned a null version ID.
AWS File Gateway allows access to S3 buckets over NFS.
Storage Gateway types include File Gateway for NFS/SMB, Volume Gateway for block storage, and Tape Gateway for virtual tape libraries.
Amazon Macie locates personally identifiable information in S3 files.
AWS Secrets Manager securely manages and rotates database credentials.
✦
Safeguarding web applications with AWS security services.
16:03AWS WAF is used for web application firewall protection.
AWS Shield provides defense against DDoS attacks.
CloudFront offers DDoS protection.
Shield Standard and Shield Advanced have different features and benefits.
Autoscaling groups and CloudFront help mitigate attacks.
AWS Shield is emphasized as the best solution for preventing DDoS attacks.
✦
AWS security services such as Amazon Inspector, Security Hub, Systems Manager Run Command, GuardDuty, and Access Analyzer provide comprehensive vulnerability management and threat monitoring for AWS accounts.
18:30The PowerUserAccess policy offers restricted management capabilities for users and groups compared to the AdministratorAccess policy.
Single Sign-On integration with on-premise Active Directory users enables seamless access to the AWS Management Console for enhanced security and user experience.
✦
Techniques for controlling access to resources in AWS.
22:46Use pre-signed URLs for PDF reports in S3 for paid users and avoid public access.
Utilize CloudFront signed cookies for static images to maintain access while securing URLs.
Restrict access to EC2-hosted applications by geographic location with AWS WAF or Route 53.
Connect VPCs using VPC peering with non-overlapping CIDR blocks and understand different network connectivity options in AWS.
✦
Communication using IPsec and VPC peering between VPC A and VPC B is discussed in the video segment.
27:05Limitations of communication between VPC A and VPC C are highlighted due to the lack of direct (transitive) peering.
Gateway endpoints are explained for accessing S3 bucket or DynamoDB without internet, distinguishing them from Interface endpoints.
NAT instances and NAT Gateways are compared, emphasizing their differences and use cases.
The need for VPC interface endpoints for private connectivity to services like Kinesis data streams is addressed, focusing on network configuration and security measures.
✦
Overview of Cross-Zone Load Balancing and Networking Features on AWS.
31:07Traffic is evenly distributed across EC2 instances for optimal performance.
Shared services VPC allows multiple VPCs to securely access common resources.
Private hosted zones in Route 53 facilitate internal DNS resolution within a VPC.
VPN CloudHub connects remote branch offices to a central VPC, while Direct Connect provides high-speed connectivity to the AWS cloud.
✦
Setting up private access to S3 bucket within VPC and database migration.
35:36Creating a Gateway endpoint for S3 and implementing an S3 bucket policy is crucial for private access within a VPC.
Migrating MySQL to PostgreSQL can be done using the AWS Schema Conversion Tool (SCT) together with DMS.
Improving DynamoDB read performance can be achieved through DynamoDB Accelerator (DAX).
High availability and fault tolerance in RDS can be ensured through Multi-AZ configuration and read replicas, highlighting the inefficiency of using a Glue job or custom scripts for database migration.
✦
AWS Database Options and Migration Services.
39:50Dynamic capacity provisioning is recommended for adjusting DynamoDB read and write capacity units based on workload fluctuations.
Amazon Aurora Global Database with read replicas is the best option for globally distributed applications requiring rapid read access to a relational database.
AWS Application Discovery Service collects information on on-premise servers and applications for migration to AWS, while AWS Migration Hub tracks the application migration process.
Snowball Edge is recommended for transferring 50 terabytes of data quickly to AWS Cloud bypassing VPN bandwidth limitations.
✦
AWS solutions for data transfer and synchronization between on-premise storage and AWS.
45:03AWS DataSync and AWS Snowball Edge are recommended for efficient data transfer and synchronization between on-premise storage and AWS.
AWS DataSync over Direct Connect is recommended for moving 50 terabytes of data to AWS within a week.
AWS DataSync is suitable for synchronized backups when considering backup solutions for on-premise NFS server to Amazon EFS.
Rehosting is recommended for quickly moving on-premise applications to AWS with minimal changes.
✦
Utilizing AWS services for improved performance and efficiency.
48:13Using CloudFront for caching PDF reports can speed up downloads for users.
Lifecycle hooks in Amazon EC2 Auto Scaling aid in logging and investigating instances before termination.
API Gateway error code 413 indicates file size too large, recommending pre-signed URLs for direct S3 uploads.
AWS Data Lake with S3 storage is ideal for storing diverse data for analysis, accommodating structured and unstructured data.
✦
Overview of Amazon Athena and Redshift Spectrum in data analysis using SQL.
53:06Amazon Athena allows for interactive querying directly in S3, while Redshift Spectrum enables querying data in S3 without loading it into the Redshift cluster.
Effective querying in S3 involves partitioning data by date and storing it in Apache Parquet format for efficient performance.
Using AWS Database Migration Service (DMS) to replicate data from various RDBMS databases into Redshift can help consolidate data into a unified data warehouse cost-effectively.
Ensuring encrypted data at rest in Redshift can be achieved by enabling cross-region snapshots and creating snapshot copies with region-specific keys.
✦
Efficient data retrieval with S3 Select and scan range parameter.
56:25Amazon Athena for SQL-based querying of S3 data, AWS Glue for ETL tasks.
Real-time message reception with Kinesis Data Streams, data delivery with Kinesis Firehose to OpenSearch.
Compressed export of DynamoDB data to S3 for reduced storage costs.
Avoid complex and unnecessary movement of DynamoDB data to Redshift; cost savings with Reserved EC2 Instances.
✦
Types of AWS instance launch options are discussed.
01:00:15The importance of choosing the right instance type based on workload consistency and uptime requirements is highlighted.
AWS Organizations can be used for centralized management and governance of multiple AWS accounts.
Control tower, service control policies, and service catalog are emphasized for managing AWS accounts effectively.
Service control policies are recommended for enforcing rules across organizational account hierarchies to ensure consistent compliance and control over EC2 instance launches.
✦
Overview of AWS services for managing and optimizing traffic and API activity.
01:04:50AWS CloudTrail logs API activity for security compliance within an AWS account.
AWS Global Accelerator and Route 53 route user traffic based on lowest latency, with Global Accelerator offering quicker failover routing.
Different types of load balancers available, such as Application Load Balancer for web applications and Network Load Balancer for TCP/UDP traffic.
API Gateway routes requests to APIs based on rules, supporting stateless secure HTTP and REST APIs.
✦
Strategies for managing traffic within the AWS network include using network load balancers for fixed IP addresses and custom routing accelerators for directing users to specific destinations.
01:08:55Setting up application load balancers and Route 53 can help direct requests to ECS clusters based on subdomains.
CloudFront with Lambda@Edge functions can be used to dynamically adjust login pages based on device type, enhancing content delivery and user experience.
It is important to avoid unnecessary complexity and higher costs in solution structures when managing traffic within the AWS network.
✦
Using AWS Global Accelerator and blue-green deployment strategy for effective application transitions.
01:15:03AWS Global Accelerator offers fixed anycast IP addresses to route traffic efficiently.
Blue-green deployment enables precise control over user traffic distribution between old and new versions of an application.
Route 53 for DNS-based routing may encounter challenges with DNS caching and propagation delays during transitions.
AWS Config and Systems Manager are recommended for tracking configuration changes and ensuring compliance over time.
✦
AWS Control Tower automates setup for multi-account environments and centralized account management within AWS organizations.
01:18:15AWS Config and Monitoring Services like CloudWatch, CloudTrail, and AWS X-Ray provide auditing, compliance, and performance monitoring solutions.
Per-client throttling limits in API Gateway prevent system overload from excessive API calls.
AWS Trusted Advisor offers actionable recommendations for optimizing cost, security, and performance.
AWS Systems Manager Session Manager enables secure, auditable, and remote shell access to EC2 instances without relying on SSH key pairs or access keys.
✦
Overview of AWS X-Ray, VPC flow logs, AWS OpsWorks, and deployment strategies in AWS.
01:20:53AWS X-Ray helps developers analyze request flow in applications to identify performance issues in distributed systems.
VPC flow logs track IP traffic within the VPC and can be sent to Amazon CloudWatch or S3 for analysis.
AWS OpsWorks simplifies resource management and automates application deployment with Chef or Puppet.
Canary deployment enables gradual feature rollout to a subset of users before full deployment, while Blue-Green deployment is a different rollout approach.
✦
AWS video streaming and media content management services.
01:25:50AWS Elemental MediaConvert for transcoding and MediaLive for live video processing.
Amazon Rekognition for image and video analysis.
Amazon Transcribe is not ideal for categorizing media files based on content.
AWS offers AI services for text-to-speech conversion, sentiment analysis, machine learning model training, and data extraction from documents.
AWS can be utilized for traffic optimization in smart city projects through sensor monitoring and real-time signal adjustments.
✦
Overview of cloud device security and support through AWS IoT services.
01:29:25Comparison of data synchronization and offline access options, with a recommendation for AWS AppSync.
Explanation of using WebSockets API Gateway for granular control over synchronization but lacking offline functionality.
Optimizing deployment and maintenance of shared modules across multiple Lambda functions using Lambda layers.
✦
Key Highlights:
01:33:41Increasing memory allocation for a Lambda function increases CPU power, but increasing timeout does not have the same effect.
Delayed visibility for messages in SQS can be achieved by setting the DelaySeconds attribute.
The enhanced fan-out feature in Kinesis allows multiple consumers to read data independently from the same shard, avoiding contention.
Operations in the AWS CLI can be validated using the --dry-run flag.
Organizing departments under AWS organization, enabling consolidated billing, and using cost allocation tags can help analyze expenses by department.
✦
Building a cloud-based customer support service with Amazon Connect, Amazon Lex, and Amazon Comprehend.
01:38:10Discusses fortifying the security of an e-commerce platform hosted on AWS by utilizing security services like AWS Shield Advanced, Amazon GuardDuty, Amazon Inspector, Amazon CloudFront, AWS WAF, and AWS KMS.
Focus on mitigating common web vulnerabilities and securing sensitive customer data.
Optimal solution involves deploying AWS CloudFront with AWS WAF for content delivery and application-level protection.
Also involves using AWS KMS for encryption of customer payment information.
✦
Best practices for healthcare application handling patient records on AWS.
01:44:00Options A, B, and C do not meet compliance and data protection requirements.
Option D recommends using Amazon Macie to identify and protect sensitive data and AWS Config for continuous compliance assessment.
For high-demand media streaming service, AWS Elemental MediaLive, AWS Elemental MediaPackage, and Amazon CloudFront are recommended for global content delivery.
These solutions effectively optimize global content delivery for the media streaming service on AWS.
✦
Designing a network for a multinational gaming company hosting a real-time multiplayer game on AWS.
01:47:57Focus on low latency and high throughput connections for global gamers.
Utilizing AWS Global Accelerator and AWS Transit Gateway peering for optimized global routing and efficient traffic management among gaming servers.
AWS Direct Connect and AWS VPN are less suitable options due to the scenario not involving data centers or company offices.
The best solution lies in using AWS Global Accelerator and AWS Transit Gateway peering for a robust network infrastructure.
✦
Managing environments on AWS using security groups and network ACLs.
01:50:50Deploying individual VPCs for each environment and establishing VPC peering connections ensures network segregation and secure communication.
Utilizing a single VPC with unique security groups across availability zones is less secure than separate VPCs.
Creating separate VPCs for each environment and establishing VPN connections with Transit Gateway adds complexity.
The best solution is using separate VPCs for each environment, VPC peering, and security groups for managing data infrastructure on AWS for a media streaming platform.
✦
AWS storage options comparison for metadata, user profiles, and media files.
01:56:30DynamoDB and S3 are recommended for a ride-sharing app due to cost-effectiveness and scalability.
Redshift and EFS are not ideal for real-time analytics.
Kinesis Data Streams, Firehose, and Analytics are suggested for rapid data ingestion and analysis, with data stored in S3 for long-term analysis.
Athena is suitable for ad hoc queries but not real-time analytics.
✦
Data migration options for 70 terabytes include AWS DataSync, Direct Connect, Snowball Edge, and the AWS Server Migration Service.
01:59:08Snowball Edge is recommended due to its capacity for large data transfers.
A corporation planning to migrate 500 on-premise servers to AWS can collect VM details through scripting, exporting configuration details, or using the AWS Agentless Discovery Connector.
The AWS Agentless Discovery Connector provides automated data gathering and exploration within AWS Migration Hub for efficient migration planning.
✦
Challenges of maintaining a popular social media platform with image uploads while preventing inappropriate content sharing.
02:04:12Solutions proposed include writing a custom script, using Amazon Rekognition for image analysis, batch processing images with Amazon Comprehend, and invoking Amazon Lex.
The focus is on efficiently flagging and deleting inappropriate images with minimal development effort.
Conclusion: Amazon Rekognition is the best solution due to its deep learning capabilities for image analysis.
00:00 AWS Certified Solutions Architect Professional exam preparation. The exam questions are scenario based, so to begin with we will explore various scenarios to grasp key concepts, and after that review some sample questions and the best way to tackle them.

00:20 You have a very large number of objects on S3 with unpredictable access patterns. What storage class is suitable for them? S3 Intelligent-Tiering is the right answer; S3 Standard and Glacier are wrong answers here.

00:39 Let's look at the various S3 storage classes. S3 Standard is the default storage class. S3 Intelligent-Tiering is used where access patterns are not known; it automatically moves data to the most cost-effective storage tier by monitoring access patterns. S3 Standard-Infrequent Access is for long-lived, less frequently accessed data, for example accessed once in 30 days; here you are charged for a minimum period of 30 days. S3 One Zone-Infrequent Access is a variation of S3 Standard-IA where the data is stored in a single Availability Zone; it is less expensive than the Standard-IA storage class. S3 Glacier is for archiving data: with Glacier Instant Retrieval the minimum storage period is 90 days and data can be retrieved quickly, while with Glacier Deep Archive the retrieval time is in hours and it is the lowest-cost data archiving solution, used for rarely accessed data.
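As a rough illustration of how these classes come up in practice (not from the video; the bucket and key names are hypothetical), a boto3 sketch can upload an object directly into Intelligent-Tiering and add a lifecycle rule that archives a prefix to Deep Archive:

```python
import boto3

s3 = boto3.client("s3")

# Upload an object directly into the Intelligent-Tiering storage class
# (useful when access patterns are unknown).
s3.put_object(
    Bucket="example-reports-bucket",   # hypothetical bucket name
    Key="reports/2023/q4.pdf",
    Body=b"...",
    StorageClass="INTELLIGENT_TIERING",
)

# Lifecycle rule: move rarely accessed objects under logs/ to Deep Archive
# after 180 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-reports-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-old-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [{"Days": 180, "StorageClass": "DEEP_ARCHIVE"}],
        }]
    },
)
```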
01:42 Several EC2 instances in a VPC need fast, concurrent access to 10 terabytes of common data. What storage should be used for this data? Amazon EFS, or Elastic File System, is a simple, scalable, and highly available file system that can be easily shared across multiple Amazon EC2 instances, therefore it is the right storage solution here. S3 will not provide relatively fast access, EBS has limitations around attaching it to multiple EC2 instances, and EC2 instance storage is non-persistent local storage on EC2 instances, therefore these are wrong answers.

02:23 Let's look at the various storage types. Amazon EFS (Elastic File System) is scalable, shared file storage; multiple EC2 instances can easily access it. Amazon EBS (Elastic Block Store) can be attached to an EC2 instance; it has limitations around attaching it to multiple EC2 instances, provides persistent, durable volumes, and you can create snapshots from it. Amazon S3 is object storage; it is highly durable, scalable, and cost-effective. The EC2 instance store is temporary storage on an EC2 instance; it provides high I/O performance, and data is lost on instance termination. Amazon FSx is high-performance, managed file storage for Windows or Linux systems.

03:15 A company is running a distributed database across several EC2 instances with enough redundancy. The database needs to store some temporary data on storage that can support very high-speed reads and writes. What storage solution is best suited for this scenario? For this we can use the EC2 instance store, which offers excellent performance as it is directly attached to the instance, providing low-latency access for read and write operations. Given that the setup has redundancy to mitigate the risk of data loss due to instance failures, leveraging the EC2 instance store for temporary data is the best choice here. While both EBS and S3 offer reliable and persistent storage, they cannot match the high-speed reads and writes offered by the EC2 instance store.
04:10 Every department in a company has a dedicated subfolder in an S3 bucket. We need to ensure that IAM users can access only their assigned department subfolders. How can we do that? Here we can use S3 Access Points. S3 Access Points can be used to manage and control access to shared data among multiple teams with varying access needs while ensuring security, isolation, and simplified access management; therefore the various department users will be able to access only their own designated subfolders. An S3 bucket policy is the wrong answer here, as it does not provide the same level of granular control and configuration that S3 Access Points can provide in this case.
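As a minimal sketch of the idea (account ID, role, access point, and bucket names are all hypothetical), each team gets its own access point whose policy scopes access to that team's prefix:

```python
import json
import boto3

s3control = boto3.client("s3control")
account_id = "111122223333"  # hypothetical account ID

# One access point per team; its policy scopes access to that team's prefix.
s3control.create_access_point(
    AccountId=account_id,
    Name="finance-ap",                     # hypothetical access point name
    Bucket="example-shared-data-bucket",   # hypothetical bucket
)

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{account_id}:role/FinanceTeamRole"},
        "Action": ["s3:GetObject", "s3:PutObject"],
        # Access-point ARNs address objects via the /object/ suffix.
        "Resource": f"arn:aws:s3:us-east-1:{account_id}:accesspoint/finance-ap/object/finance/*",
    }],
}
s3control.put_access_point_policy(
    AccountId=account_id, Name="finance-ap", Policy=json.dumps(policy)
)
```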
05:06 You need a high-performance file system with native Windows support. Here you can use Amazon FSx for Windows File Server, a fully managed file storage service designed specifically to provide native compatibility with Windows-based applications and workloads; it supports the Server Message Block (SMB) protocol. Amazon FSx for Lustre is an incorrect answer here, as it is a POSIX-compliant file system optimized for Linux systems. EFS is not correct either, as we need native Windows support.

05:42 Let's summarize the Amazon FSx file systems; both are high-performance systems. First we have Amazon FSx for Lustre: this is POSIX compliant and suitable for Linux systems. Then we have Amazon FSx for Windows File Server: this provides native Windows support.
06:00 Replicate objects in an S3 bucket from one region to another. For this we can use S3 Cross-Region Replication, and for it to work the source and target buckets must have versioning enabled; to narrow down the scope of replication we can use filters. An AWS Glue job is a serverless ETL service and doesn't have any built-in functionality for cross-region replication of S3 buckets, and S3 Transfer Acceleration is for faster file uploads to S3, therefore both the AWS Glue job and S3 Transfer Acceleration are incorrect answers.
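A hedged boto3 sketch of this setup (the bucket names and replication role ARN are hypothetical; the role must already grant S3 the replication permissions):

```python
import boto3

s3 = boto3.client("s3")

# Cross-region replication requires versioning on BOTH buckets.
for bucket in ("example-source-bucket", "example-dest-bucket"):
    s3.put_bucket_versioning(
        Bucket=bucket, VersioningConfiguration={"Status": "Enabled"}
    )

s3.put_bucket_replication(
    Bucket="example-source-bucket",
    ReplicationConfiguration={
        # Role that S3 assumes to replicate on your behalf (hypothetical ARN).
        "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
        "Rules": [{
            "ID": "replicate-reports",
            "Status": "Enabled",
            "Priority": 1,
            "DeleteMarkerReplication": {"Status": "Disabled"},
            # Filter narrows the replication scope to one prefix.
            "Filter": {"Prefix": "reports/"},
            "Destination": {"Bucket": "arn:aws:s3:::example-dest-bucket"},
        }],
    },
)
```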
06:45 Your static website is hosted using an application server on an EC2 instance. How can you bring down the costs? Since this is a static website, you can host it on S3; you do not need an application server or an EC2 instance. On S3, simply create a bucket, upload your HTML, CSS, JavaScript, and image files, enable static website hosting on the bucket, and ensure your bucket permissions allow public read access to the objects. That's all, you're all set. Using an EC2 Spot Instance is a wrong answer: while a Spot Instance will reduce costs, it can be taken away at any moment, therefore this is not a reliable way of hosting a website. Amazon Lightsail is not cost-effective either; you can easily host a static website on S3 at low cost.
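A minimal sketch of enabling this with boto3 (the bucket name is hypothetical; a bucket policy allowing public s3:GetObject, and compatible Block Public Access settings, are still needed):

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-static-site"  # hypothetical bucket name

# Turn the bucket into a static website endpoint.
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# Upload a page; ContentType matters so browsers render it as HTML.
s3.put_object(
    Bucket=bucket,
    Key="index.html",
    Body=b"<html><body><h1>Hello</h1></body></html>",
    ContentType="text/html",
)
```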
07:49 Make the requester pay for accessing objects in your S3 bucket. For example, you have partner companies which are accessing files in your S3 bucket, and these are really large files. By default, you as the bucket owner would be paying for all outgoing costs for objects in your S3 bucket; here you want to make the requester pay. The solution is to activate the Requester Pays option on the S3 bucket: when this feature is activated, the requester is billed for request costs when accessing objects in the bucket. For this to work, the requester must also have an AWS account. Using cost allocation tags is a wrong answer; cost allocation tags are used to organize your resource costs in your cost allocation reports, for example production versus test resources. Enabling consolidated billing is also a wrong answer; consolidated billing is used when you want one bill for all AWS accounts in your organization.
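A short sketch of both sides of this arrangement (the bucket and key are hypothetical); note the requester must explicitly acknowledge the charges on each call:

```python
import boto3

s3 = boto3.client("s3")

# Bucket owner: shift request/data-transfer charges to the requester.
s3.put_bucket_request_payment(
    Bucket="example-large-files",  # hypothetical bucket
    RequestPaymentConfiguration={"Payer": "Requester"},
)

# Requester: must acknowledge the charges on every request.
s3.get_object(
    Bucket="example-large-files",
    Key="datasets/big-file.bin",
    RequestPayer="requester",
)
```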
09:04 Speed up file uploads to S3. For this we can use S3 Transfer Acceleration and multipart upload. S3 Transfer Acceleration leverages Amazon CloudFront's globally distributed edge locations to accelerate the transfer of files to S3 by optimizing the data path to the nearest edge location before routing it to the S3 bucket, while multipart upload enables breaking down large files into smaller parts, uploading them concurrently, and then merging them into a single object in S3. Using AWS CloudFront or AWS Global Accelerator on its own would be a wrong answer: both CloudFront and Global Accelerator leverage AWS's global network infrastructure to optimize the delivery of content and the routing of traffic, but they are not designed for faster uploads per se.
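Combining the two in boto3 might look like the following sketch (bucket and file names are hypothetical; the thresholds are illustrative):

```python
import boto3
from boto3.s3.transfer import TransferConfig
from botocore.config import Config

bucket = "example-uploads"  # hypothetical bucket

# 1) Enable Transfer Acceleration on the bucket (a one-time setting).
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket=bucket, AccelerateConfiguration={"Status": "Enabled"}
)

# 2) Create a client that targets the accelerate endpoint, and let
#    upload_file do a concurrent multipart upload for large files.
s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3.upload_file(
    "backup.tar.gz",
    bucket,
    "backups/backup.tar.gz",
    Config=TransferConfig(
        multipart_threshold=64 * 1024 * 1024,  # switch to multipart above 64 MB
        max_concurrency=8,                     # parallel part uploads
    ),
)
```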
10:01 When you enable versioning on an existing bucket, what version is assigned to the existing objects in the bucket? Existing objects will have their version ID set to null, so that is the correct answer, and a subsequent update to such an object will be assigned a new, unique version ID. Therefore "version zero" and "version one" are wrong answers, and any new object uploaded to this bucket gets its own unique version ID from the start.
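A quick way to see this behavior (the bucket name is hypothetical): list the object versions after enabling versioning and look for the "null" version IDs on pre-existing objects.

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-versioned-bucket"  # hypothetical bucket

s3.put_bucket_versioning(
    Bucket=bucket, VersioningConfiguration={"Status": "Enabled"}
)

# Objects that existed before versioning was enabled show VersionId == "null".
for v in s3.list_object_versions(Bucket=bucket).get("Versions", []):
    print(v["Key"], v["VersionId"], v["IsLatest"])
```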
10:37 Applications on your local premises need access to S3 buckets over NFS (Network File System). To enable access to S3 buckets over NFS from on premises, AWS File Gateway, which is part of the Storage Gateway service, can be used. Both Volume Gateway and Tape Gateway do not facilitate access to S3 buckets via NFS, so they are wrong answers.

11:08 Let's look at the various Storage Gateway types. File Gateway exposes S3 as NFS or SMB file shares; it acts as a file interface between on-premises applications and AWS. Volume Gateway presents cloud-backed iSCSI block storage volumes to on-premises applications and manages the data in Amazon S3; it facilitates low-latency access for frequently used data and supports snapshots, which are stored as EBS snapshots, as well as local caching. There are two modes in which it operates: in cached mode, the primary data resides in Amazon S3 with frequently accessed data cached locally for quick access; in stored mode, the primary data set resides on premises for low-latency access while asynchronously backing up to Amazon S3. Tape Gateway offers a virtual tape library for backup and archival needs; it stores the virtual tapes in Amazon S3.
12:24 Find personally identifiable information (PII) in CSV files in an S3 bucket. How would you do that, and what service would you use? Amazon Macie is the right solution. What are examples of personally identifiable information? Things like Social Security numbers, credit card numbers, and so on; Amazon Macie can locate such information in your S3 files. S3 Select is a wrong answer here: S3 Select is used to select data from S3 using SQL queries. AWS Config is also a wrong answer, because it does not help us identify any PII; instead it helps us with configuration management of our AWS account.
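A hedged sketch of kicking off such a scan via the Macie API (the job name, account ID, and bucket are hypothetical, and Macie must already be enabled in the account):

```python
import boto3

macie = boto3.client("macie2")

# One-time classification job that scans a bucket for sensitive data (PII).
macie.create_classification_job(
    jobType="ONE_TIME",
    name="scan-csv-for-pii",  # hypothetical job name
    s3JobDefinition={
        "bucketDefinitions": [{
            "accountId": "111122223333",        # hypothetical account
            "buckets": ["example-csv-bucket"],  # hypothetical bucket
        }]
    },
)
```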
13:18 An application on an EC2 instance accesses a database using database credentials. Where can we store the database credentials securely and rotate them as needed? For this we can use AWS Secrets Manager, a service that helps us securely manage, retrieve, and rotate sensitive credentials, API keys, and other secrets used by applications, eliminating the need to hardcode such sensitive information within application code. Both AWS KMS and the EC2 instance store are wrong answers: AWS KMS (Key Management Service) enables the creation and control of encryption keys, while the EC2 instance store provides temporary storage for EC2 instances, so any data stored there is neither secure nor permanent; it's lost when the EC2 instance terminates.
14:22terminates let's go over some AWS
14:24Security Services AWS wav or web
14:28application firewall it provides
14:30protection against common web exploits
14:33like cross-site scripting SQL injection
14:37Etc AWS Shield protection against dos
14:41attacks it comes in two flavors Shield
14:44standard which is free and shield
14:46Advanced which is a paid
14:48service Amazon guard
14:51Duty threat detection and monitoring for
14:54malicious activity Amazon
14:57inspector security assessment
15:00vulnerability scanning for ec2 and other
15:04services
15:06KMS encryption key management AWS
15:10certificate manager provision manage SSL
15:14certificates Amazon
15:16Mai discover sensitive data Amazon
15:21detective analyze and investigate
15:23security findings AWS security Hub
15:27aggregate security finding ings Cloud
15:30HSM is a hardware-based key
15:33storage Amazon Cognito for
15:36authentication authorization and user
15:38management for web and mobile
15:41apps how can you Safeguard your web
15:41 How can you safeguard your web application from common web exploits like SQL injection, cross-site scripting, etc.? For this you can use AWS WAF (Web Application Firewall), which you can configure with custom rules and filters to control web traffic and mitigate security threats. Both AWS Shield and CloudFront are wrong answers: AWS Shield is a managed distributed denial-of-service (DDoS) protection service, and CloudFront is not specifically designed to prevent such web exploits.
16:21 How can you safeguard your web application from DDoS (distributed denial-of-service) attacks? You can use AWS Shield or Shield Advanced for this. AWS Shield is a managed DDoS protection service that safeguards web applications running on AWS against common and large-scale DDoS attacks; this protection is available by default for free. AWS Shield Advanced is an optional paid service that extends the protection provided by AWS Shield with additional features such as advanced attack mitigation and cost protection against DDoS-related usage spikes. WAF, Auto Scaling groups, and CloudFront can help with DDoS protection to a limited extent, but AWS Shield remains the best way to solve this problem. WAF, as we know, is used to prevent web exploits; it does allow setting rules for rate limiting and controlling the number of requests from specific IP addresses, geographic locations, or other criteria, which can help mitigate the impact of certain types of DDoS attacks. Auto Scaling groups can help to some extent: for example, during a DDoS attack, if the existing instances are overwhelmed, Auto Scaling can automatically launch additional instances to handle the increased load, and once the attack subsides it can scale back in, reducing the resources to normal levels. The distributed architecture of CloudFront helps absorb and mitigate large-scale attacks by distributing the attack traffic across multiple edge locations, thereby reducing the impact on the origin server.
18:06 Check for vulnerabilities on Amazon EC2 instances. For this we can use Amazon Inspector, an automated vulnerability management service that continually scans EC2 and container workloads for software vulnerabilities and unintended network exposure. Although it primarily focuses on assessing the security posture of EC2 instances, it can also evaluate other AWS resources to a limited extent. AWS Security Hub aggregates security findings, and AWS Systems Manager Run Command allows you to remotely execute commands on multiple EC2 instances; neither Security Hub nor Systems Manager Run Command is suitable for this scenario.
18:59 Monitor malicious activity in your account and generate security findings. For this we can use Amazon GuardDuty, a threat detection service that continuously monitors for malicious activity and unauthorized behavior within an AWS account. It analyzes CloudTrail logs, VPC Flow Logs, and DNS logs to detect threats such as unauthorized access, compromised instances, and port probing. Although marked as wrong answers, both AWS CloudTrail Insights and AWS Security Hub help keep AWS accounts safe: CloudTrail Insights can detect unusual operational activity using CloudTrail logs, and AWS Security Hub aggregates security findings.
19:46 A company uses many different AWS accounts, one per department. Over the years, cross-account and public access to AWS resources has been mistakenly granted several times. How can the AWS administrator proactively identify security issues and unintended access to the resources? For this we can use AWS IAM Access Analyzer: it continuously monitors your resource policies and access control policies and provides findings and recommendations to enhance security. IAM access advisor and AWS Security Hub are also helpful here, although neither of them is the best answer: access advisor provides insights into service permissions granted to IAM roles, while Security Hub aggregates security findings and alerts from various services like GuardDuty, Inspector, Access Analyzer, etc.
20:46 Provide full access to AWS services and resources, but without the permission to manage users and groups. For this we can use the PowerUserAccess managed policy: this policy is similar to the AdministratorAccess policy but does not allow management of users and groups. The AdministratorAccess policy grants complete control and management capabilities over all AWS services, including managing users and groups, therefore that is the wrong answer, and there is no such thing as a "super user" policy.
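Attaching the AWS-managed policy is a one-liner, as in this sketch (the group name is hypothetical; the policy ARN is the standard AWS-managed one):

```python
import boto3

iam = boto3.client("iam")

# Attach the AWS-managed PowerUserAccess policy to a group, so its
# members get full service access minus IAM user/group management.
iam.attach_group_policy(
    GroupName="developers",  # hypothetical group
    PolicyArn="arn:aws:iam::aws:policy/PowerUserAccess",
)
```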
21:26 Give AWS console access to on-premises Microsoft Active Directory users. You can do this by setting up SSO (single sign-on) with your on-premises Active Directory using AD Connector; this allows your on-premises AD users to use their existing credentials to seamlessly sign in to the AWS Management Console. Among the wrong answers: synchronizing on-premises AD users with an AWS managed AD service is unnecessary, as we do not need to set up a new managed directory and synchronize users into it; and registering on-premises AD users as IAM users is wrong too, because we do not want to duplicate existing AD users as IAM users.
22:16 Can an SSL certificate generated by AWS Certificate Manager (ACM) for an Application Load Balancer in one region be used by an Application Load Balancer in another region? The answer is no: an SSL certificate generated by AWS Certificate Manager is region specific and cannot be directly used in another region, so you must generate the SSL certificate separately in each region where you intend to use it.
22:46 How can we control access to PDF reports stored in an S3 bucket, ensuring access to specific reports via URLs only for paid users? For this we can use S3 pre-signed URLs. A pre-signed URL uses security credentials to grant time-limited permission to download objects from an S3 bucket. By default, objects in an S3 bucket are private, so you can dynamically generate pre-signed URLs for the specific reports you want to provide access to and share them with your paid users. Configuring the S3 bucket for public access is the wrong answer here, because we do not want to grant access to our reports to everyone, only to our paid users.
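Generating such a link is a single call, as in this sketch (bucket and key names are hypothetical; the URL inherits the permissions of the credentials that sign it):

```python
import boto3

s3 = boto3.client("s3")

# Time-limited download link for one report; the URL embeds a signature,
# so the object itself can stay private.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-reports-bucket",   # hypothetical names
            "Key": "reports/2023-q4.pdf"},
    ExpiresIn=3600,  # link is valid for one hour
)
print(url)  # hand this URL to the paid user
```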
23:29 A web application has premium static images hosted in an S3 bucket that are served over CloudFront. How can you give select users access to all the images in the bucket while ensuring the access URLs do not change? For this we can use CloudFront signed cookies, which offer the ability to manage access to multiple content files without requiring URL alterations for each user; this method allows controlled access to the images while keeping the URLs consistent. CloudFront signed URLs is the wrong answer here because it is designed to restrict access to individual files, requiring URL modifications for each user's access, so it is less suitable for granting access to multiple files without changing URLs for each user.
24:18 How can you restrict access to your application, which is hosted on EC2 servers behind an Application Load Balancer, so that users from specific countries or geographic locations are unable to access it? We can do this using AWS WAF (Web Application Firewall), CloudFront, or Route 53. Implementing a custom solution with IP address checks would involve manually maintaining a list of IP addresses associated with various geographic locations; this method requires continuous updates to the IP database and is therefore less effective and scalable, so it is not a good solution.
25:07solution how would you connect two
25:10vpcs you can use VPC peering for this
25:14VPC peering establishes a network
25:16connection between two vpcs enabling
25:18communication and resource sharing
25:20between them as if they were part of a
25:22single Network while still allowing
25:24separate control over each VPC this does
25:27not require an internet gateway VPN or
25:30Hardware to establish the connection the
25:33cidr blocks of the peered vpcs must not
25:36overlap to avoid routing issues both
25:40Transit Gateway and sight tosite VPN are
25:42not the right Solutions here Transit
25:45gateways are typically used to connect
25:47several vpcs via a single Gateway
25:49simplifying networking between them
25:52while sight to sight VPN is encrypted
25:54connection between on-premise Network
25:56and aw
25:59VPC let's look at various network
26:01connectivity types and their key
26:03features VPC peering this is a private
26:07connection between two vpcs Transit
26:10Gateway is a centralized hub for
26:11connecting multiple vpcs VPN and on
26:14premise networks it has a Hub and spoke
26:18architecture Direct Connect is a
26:20dedicated private network connection
26:21between on premises and AWS Cloud it
26:25provides low latency and high bandwidth
26:28connection site to site VPN is encrypted
26:31connection between on premisis network
26:33and AWS VPC established over the
26:36Internet it provides secure
26:38communication using IPC
26:42protocol vpca is paired with
26:45vpcb vpcb is paired with
26:49vpcc can resources in VPC a reach
26:53vpcc answer is no vpca cannot not
26:57communicate with resources in vpcc as
27:00there is no direct peering between them
27:02each peering relationship is independent
27:05and there's no transitive peering to
27:07enable communication between resources
27:09in vpca and vpcc a separate peering
27:13connection between vpca and vpcc would
27:16need to be
27:18established access S3 bucket or Dynamo
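A boto3 sketch of establishing one such peering connection (VPC, route table, and CIDR values are hypothetical); note that routes must be added on both sides:

```python
import boto3

ec2 = boto3.client("ec2")

# Request a peering connection between two VPCs (IDs are hypothetical).
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-0aaa1111", PeerVpcId="vpc-0bbb2222"
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# The owner of the peer VPC accepts the request.
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Both sides still need routes to each other's (non-overlapping) CIDRs.
ec2.create_route(
    RouteTableId="rtb-0ccc3333",
    DestinationCidrBlock="10.1.0.0/16",   # peer VPC's CIDR
    VpcPeeringConnectionId=pcx_id,
)
```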
27:18 Access an S3 bucket or DynamoDB from a VPC without going over the internet. For this we can use a gateway endpoint. Without a gateway endpoint, requests from the VPC to S3 or DynamoDB would travel over the public internet; a gateway endpoint ensures that this does not happen and all communication happens over AWS infrastructure. Interface endpoint is a wrong answer here: an interface endpoint enables private connectivity to many services over AWS PrivateLink, including some AWS managed services and services hosted by AWS customers and partners. S3 is one of the supported services, but not DynamoDB, therefore this is a wrong answer. VPC peering is clearly a wrong answer because it is meant to connect two VPCs.

28:15 Let's look at the various endpoints. Gateway endpoint: provides a secure, private connection to Amazon S3 or DynamoDB from within a VPC; data does not go over the public internet. Interface endpoint: this, too, provides a secure, private connection, to many services powered by AWS PrivateLink; these include AWS services and those hosted by AWS partners. So what is AWS PrivateLink? It provides private connectivity between VPCs, supported AWS services, and your on-premises networks without exposing your traffic to the public internet.
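A sketch of creating both endpoint types (VPC, subnet, security group, and route table IDs are hypothetical; the service names follow the standard com.amazonaws.<region>.<service> pattern):

```python
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint: S3 traffic from the VPC stays on the AWS network.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0aaa1111",                          # hypothetical IDs
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0ccc3333"],
)

# Interface endpoint (PrivateLink) for a service like Kinesis Data Streams.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0aaa1111",
    ServiceName="com.amazonaws.us-east-1.kinesis-streams",
    SubnetIds=["subnet-0ddd4444"],
    SecurityGroupIds=["sg-0eee5555"],
    PrivateDnsEnabled=True,
)
```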
28:58 Allow an EC2 instance in a private subnet of a VPC to access the internet. By default, an EC2 instance in a private subnet does not have internet connectivity; we can provide this by using a NAT instance or a NAT gateway. Gateway endpoint and interface endpoint are wrong answers here: a gateway endpoint provides private connectivity to S3 or DynamoDB, while an interface endpoint provides private connectivity, using AWS PrivateLink, to various services.

29:31 Let's go over the difference between a NAT instance and a NAT gateway. A NAT instance is a user-managed EC2 instance used for NAT; you must provision it manually and it can be a single point of failure. A NAT gateway is a fully managed AWS service which is highly available and suitable for production loads.

29:55 Allow an EC2 instance in a private subnet to access only one particular URL, for example to download a patch; beyond this one URL there should be no other access to the internet. For this we can use a NAT gateway: it will provide the necessary internet connectivity, the subnet route table should be updated to use the NAT gateway, and the EC2 security group can restrict outbound access to a specific IP. This solves the problem. Route 53 is clearly a wrong answer here because it is a DNS management service.
30:35 Your application running on an EC2 instance in a private subnet needs to send data to Kinesis Data Streams privately, not over the internet. What network configuration is required for this? For this we need to set up a VPC interface endpoint for Kinesis Data Streams in the VPC, with private DNS enabled so that requests to the Kinesis endpoint resolve to the endpoint's private IP addresses, and of course the interface endpoint policy should allow the appropriate access permissions. Gateway endpoint is clearly a wrong answer here because that's for accessing S3 or DynamoDB privately.
31:17 You have set up one EC2 instance in Availability Zone 1 and three EC2 instances in Availability Zone 2; an Application Load Balancer is set up in front of them with cross-zone load balancing enabled. What percentage of requests goes to each instance? When cross-zone load balancing is enabled, traffic is distributed evenly across all EC2 targets; since we have four EC2 instances in total, each instance gets 25% of the traffic. What happens when cross-zone load balancing is disabled? In that case, traffic is distributed by Availability Zone, so AZ1 will receive 50% of the traffic and AZ2 will receive 50%. Since there is only one instance in AZ1, it will receive all of that 50% of the traffic, and since there are three instances in AZ2, the 50% of the traffic coming to that zone will be distributed across the three instances.
32:29 Multiple VPCs need access to common or shared services and resources. For this we can use a shared services VPC, an architectural model whereby centralized services or resources are hosted within a dedicated VPC, and other VPCs establish connectivity to it through mechanisms like VPC peering or Transit Gateway. This enables multiple VPCs to access common services securely while maintaining network isolation. A site-to-site VPN is for creating a secure connection from on premises to a VPC in the AWS cloud, and Route 53 is a DNS management service, so both of these answers are incorrect.
33:14incorrect manage DNS configuration for
33:17internal resources within a VPC for
33:20example a DB server must be reachable
33:23via a name like prod. db. example.com
33:27so how would you achieve this we can use
33:31private hosted zone for this a private
33:34hosted Zone in AWS Route 53 is a feature
33:37that allows users to create and manage
33:39custom domain names for internal
33:41resources within a VPC it enables
33:44private DNS resolution within the VPC
33:47keeping domain name resolution
33:48restricted within the specified VPC
33:51boundaries without public
33:53accessibility using a custom DNS server
33:56may be possible but it would require
33:58manual configuration and administration
34:01there is no need to do this therefore
34:04this is a wrong
34:06answer a company has several remote
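A sketch of creating the zone and the internal record (zone name, VPC ID, and the database IP are hypothetical):

```python
import time
import boto3

r53 = boto3.client("route53")

# Private hosted zone attached to one VPC (IDs/names are hypothetical).
zone = r53.create_hosted_zone(
    Name="example.com",
    VPC={"VPCRegion": "us-east-1", "VPCId": "vpc-0aaa1111"},
    CallerReference=str(time.time()),  # must be unique per request
    HostedZoneConfig={"Comment": "internal names", "PrivateZone": True},
)

# Point prod.db.example.com at the database's private IP.
r53.change_resource_record_sets(
    HostedZoneId=zone["HostedZone"]["Id"],
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "prod.db.example.com",
            "Type": "A",
            "TTL": 300,
            "ResourceRecords": [{"Value": "10.0.2.15"}],
        },
    }]},
)
```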
34:09 A company has several remote branch offices in different places; all of them need to securely connect with each other and to a VPC in one region. What is the best way to do this cost-efficiently? For this we will create a virtual private gateway in our VPC, and then each branch office uses a site-to-site VPN connection to connect to that virtual private gateway. This hub-and-spoke architectural model is called VPN CloudHub: all our remote branch offices are connected over VPN to the virtual private gateway in our VPC. Now, if a particular branch office needs high-speed, high-bandwidth connectivity to the AWS cloud or our VPC, it can use a Direct Connect connection to the virtual private gateway instead of VPN. Here we are dealing with a single VPC; what if there were multiple VPCs which the remote branch offices must connect to? In that case we must use a Transit Gateway instead of a virtual private gateway, since a virtual private gateway is always attached to a single VPC.
35:27 Allow private access to an S3 bucket only from resources in a specific VPC. We can do this first by creating a gateway endpoint for S3; this will ensure that S3 can be reached privately, not over the internet. Next, create an S3 bucket policy that restricts access to resources from that particular VPC only. A virtual private gateway is a wrong answer here, since a virtual private gateway facilitates secure and encrypted connectivity between the VPC and external networks by using VPN or Direct Connect, so that does not help in our use case here.
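The bucket-policy half of this setup can use the aws:SourceVpce condition key, as in this sketch (the bucket name and endpoint ID are hypothetical):

```python
import json
import boto3

s3 = boto3.client("s3")

# Deny all access to the bucket unless it arrives through the S3 gateway
# endpoint (endpoint ID and bucket name are hypothetical).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowOnlyViaVpcEndpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::example-private-bucket",
            "arn:aws:s3:::example-private-bucket/*",
        ],
        "Condition": {"StringNotEquals": {"aws:SourceVpce": "vpce-0abc1234"}},
    }],
}
s3.put_bucket_policy(Bucket="example-private-bucket", Policy=json.dumps(policy))
```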
36:16 Migrate a MySQL database to PostgreSQL. We can do this using the AWS Schema Conversion Tool (SCT) and the Database Migration Service (DMS) to migrate database schemas and replicate data from the source to the target database, either one time or as ongoing real-time data replication. Writing a Glue job or custom scripts are wrong answers: a Glue job is an ETL process and not well suited for this work, and there is no need to write custom scripts as we have a better solution.
36:53 Gaming data is being stored in DynamoDB. How can we improve DynamoDB read performance for creating gaming leaderboards? For this we can use DynamoDB Accelerator (DAX), a high-speed, distributed, in-memory cache for DynamoDB that can significantly reduce read latency for frequently accessed data. Between ElastiCache and DynamoDB Accelerator, DynamoDB Accelerator is the better choice here because our underlying database is DynamoDB and DAX is seamlessly integrated with it; therefore ElastiCache is not the best answer. Amazon ElastiCache supports multiple caching engines, one of which is Redis, so Redis, too, is not our best answer.
37:44 What is an efficient way to delete all data in a DynamoDB table? We can drop the table and recreate it. Deleting an entire DynamoDB table is the most efficient way to remove all data at once; it's quicker than deleting items individually, especially when dealing with large data sets. Scanning and deleting all items in a table is comparatively inefficient.
38:11inefficient how can we ensure High
38:13availability and fall tolerance with RDS
38:16for this we can set up RDS with multi-az
38:19configuration in this setup RDS
38:22replicates your primary database to a
38:24standby instance in a different
38:25availability Z Zone within the same
38:27region if the primary instance fails the
38:30system automatically fails over to the
38:32standby thereby reducing downtime and
38:34ensuring data durability this setup
38:37enhances reliability by maintaining a
38:39synchronized standby database for
38:42failover purposes let's look at other
38:45options RDS read replicas are
38:48asynchronous copies of the primary
38:50database they help distribute read
38:52traffic improve read performance and
38:55provide scalability for read intensive
38:57applications they do not directly
38:59provide High availability and don't
39:01support automatic failure their focus is
39:04to improve read performance therefore
39:06this is not a correct answer Amazon
39:09Aurora is a high performance relational
39:11database from Amazon we do not need to
39:14migrate to it just for high availability
39:16we already have a solution in RDS
39:19therefore this two is not a correct
39:23answer Dynamo DB throttles at certain
39:23 DynamoDB throttles at certain times of the day because of variable workloads. How can you handle this cost-effectively? We have three options here: use dynamic capacity provisioning, increase provisioned capacity, or use DynamoDB Accelerator. Dynamic capacity provisioning is the best option, where the provisioned read and write capacity units of DynamoDB are adjusted based on workload fluctuations. Since our workloads are not consistently high, increasing provisioned capacity is not required and is not cost-effective, and DynamoDB Accelerator can help with reducing throttling but is not cost-effective either, therefore it is not the best solution here.
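Dynamic capacity provisioning is typically wired up through Application Auto Scaling, as in this sketch (the table name and capacity bounds are hypothetical; a write-capacity policy would mirror this one):

```python
import boto3

aas = boto3.client("application-autoscaling")

# Let read capacity float between 5 and 500 units based on utilization.
aas.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/GameScores",             # hypothetical table
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=500,
)
aas.put_scaling_policy(
    PolicyName="read-target-tracking",
    ServiceNamespace="dynamodb",
    ResourceId="table/GameScores",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,  # aim for ~70% consumed read capacity
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)
```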
40:11 In a globally distributed application spanning multiple regions that requires rapid read access to a relational database, which database is most suitable? Amazon Aurora Global Database offers read replicas across regions, making it suitable for globally distributed applications, therefore this is the right answer. Amazon RDS lacks the inherent global replication features and low-latency access specifically tailored for distributed setups, therefore it is not the correct answer. And DynamoDB is a NoSQL database service provided by AWS; while DynamoDB excels in scalability, it's not a relational database and therefore does not fit the requirement.
41:00 How can you collect information on the servers and applications running in your on-premises environment in order to plan a migration to AWS? For this we can use the AWS Application Discovery Service; it collects information on on-premises servers, applications, and their dependencies. AWS Migration Hub is used to track the application migration process, therefore it is not a correct answer, and the AWS Application Migration Service, as the name suggests, is a migration service and not a discovery service.

41:38 Let's look at the various migration services. AWS Application Discovery Service: collects information on servers, running processes, and network connections in your on-premises environment using agents, collectors, etc. AWS Migration Hub: a tracking service that provides centralized tracking of application migrations. AWS Application Migration Service: helps migrate servers and applications to the AWS cloud from your on-premises environment or from other cloud providers; you can also migrate EC2 workloads across regions, Availability Zones, or accounts. It ensures continuous synchronization between the source and target environments during the migration process and creates EBS volumes along the way, so this is a lift-and-shift migration strategy with little or no downtime. Next we have VM Import/Export: here we can import VM images from an existing environment to Amazon EC2 and also export them back to your existing on-premises environment. This does not provide live synchronization; it is an offline process. You can also use it to create a repository of VM images for backup and disaster recovery purposes.
43:05 What is the most suitable method to transfer 50 terabytes of data from an on-premises network to the AWS cloud within a few days, considering the existence of a site-to-site VPN between the on-premises network and the AWS cloud? For this we can use Snowball Edge, a physical device designed for large-scale data transfers to AWS. It's ideal for transferring massive amounts of data quickly, bypassing the bandwidth limitations of a VPN connection; given the 50-terabyte volume and the time constraint, Snowball Edge physical transfer offers the most efficient solution. AWS DataSync is a service for online data transfer between on-premises storage and AWS, but transferring 50 terabytes over a VPN connection might not be feasible within the specified time frame due to limitations in bandwidth, therefore this is a wrong answer. Snowmobile is meant for exabyte-scale data and involves massive-scale transfer via a large physical device, making it impractical for a 50-terabyte transfer within a few days due to logistics, scale, and cost, therefore this, too, is an incorrect answer.

44:24 So we have the Snow family of devices, which support offline transfer of data to AWS. First we have Snowcone: this has a capacity of about 8 terabytes. Then we have Snowball Edge: it comes in multiple flavors, storage optimized and compute optimized; the storage-optimized one provides about 80 terabytes of HDD space. And then we have Snowmobile: this supports exabyte-scale data transfer in a truck and has a capacity of 100 petabytes.
45:03petabytes we need to move 50 tabt of
45:06data from an on-premise location to AWS
45:09Cloud within a week a highspeed 10gbps
45:12direct connect link exists between the
45:14networks what is the most efficient way
45:17to do this for this we can use AWS data
45:20sync over Direct Connect as it leverages
45:24the existing high-speed Direct Connect
45:26connection efficiently ensuring the data
45:28transfer of 50 tab is within the given
45:31time frame of 7 Days both AWS snowball
45:36Edge and
45:37snowmobile are excessive for this
45:40particular scenario and could introduce
45:43unnecessary complexity and time
45:46therefore they are not correct
45:49 How can you create a synchronized backup solution for an on-premises NFS server to Amazon EFS in the cloud? We have two options here, AWS DataSync and AWS File Gateway; let's look at them one by one. AWS DataSync is designed explicitly for efficient and automated data synchronization between on-premises storage and AWS storage services like Amazon EFS, S3, etc. Therefore it is suitable for a synchronized backup solution to EFS and is the correct answer. AWS File Gateway is primarily dedicated to enabling on-premises applications to access cloud storage, particularly S3, through NFS. While it facilitates data synchronization from on-premises NFS servers to the cloud, it lacks native support for synchronizing data directly with Amazon EFS; therefore it is not the correct answer.
46:47 Move an on-premises application to AWS quickly, without making changes and with minimal effort: this kind of migration is called rehosting, which is a lift-and-shift migration, and this is the correct answer. Replatforming is when we optimize our application to leverage some AWS services as we migrate, and refactoring is when we make significant changes to our application by restructuring or redesigning it. Since our requirement is to migrate without making changes, rehosting is the correct answer, and replatforming and refactoring are incorrect
answers. 47:33 How can we efficiently and quickly scale our on-premises web server, which hosts publicly accessible PDF reports, to accommodate a surge of millions of users expected during an upcoming marketing campaign? To efficiently handle the anticipated surge in users downloading PDF reports from our on-premises web server, we can set up an Amazon CloudFront distribution with the on-premises web server as the origin. CloudFront can cache the PDF reports, ensuring quicker and more efficient downloads for users; therefore using CloudFront is the correct answer here. Migrating the web server to the cloud would be an excessive step when the objective can be achieved more efficiently by leveraging CloudFront for caching purposes; therefore that is not the correct answer.
48:24 EC2 instances in your Auto Scaling group are abruptly terminating for unknown reasons. You want to log in to these EC2 instances before they are terminated, to download logs and investigate. How can you do this? For this you can use Auto Scaling group lifecycle hooks. Amazon EC2 Auto Scaling incorporates lifecycle hooks, allowing a designated time frame, typically one hour, for actions to finish before an instance transition, in this case before instance termination. When a scale-in event happens, a lifecycle hook temporarily halts instance termination, sending an Amazon EventBridge notification, and during this pause you can connect to the instance to retrieve logs or investigate prior to the termination. Using Amazon CloudWatch Events is the wrong answer because CloudWatch Events won't assist in pausing instance termination, which is essential in this scenario; they are more focused on monitoring and triggering actions based on events in an Auto Scaling group.
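As a rough illustration (not shown in the video), here's a minimal boto3 sketch of registering such a hook; the group name, hook name, and instance ID are hypothetical placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hold terminating instances in Terminating:Wait for up to one hour so
# logs can be pulled before the instance disappears.
autoscaling.put_lifecycle_hook(
    AutoScalingGroupName="my-asg",
    LifecycleHookName="log-collection-hook",
    LifecycleTransition="autoscaling:EC2_INSTANCE_TERMINATING",
    HeartbeatTimeout=3600,     # seconds the instance is paused
    DefaultResult="CONTINUE",  # proceed with termination if no response arrives
)

# After collecting logs, let the termination proceed immediately:
# autoscaling.complete_lifecycle_action(
#     AutoScalingGroupName="my-asg",
#     LifecycleHookName="log-collection-hook",
#     LifecycleActionResult="CONTINUE",
#     InstanceId="i-0123456789abcdef0",
# )
```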
49:31 An application supports uploading video files through API Gateway; however, it occasionally fails with error code 413 returned by API Gateway. How can we resolve this problem? Error code 413 indicates the uploaded file size is too large: API Gateway does not support payloads greater than 10 MB, therefore we must make some changes to our architecture to handle this. Using pre-signed URLs, we can upload files directly to S3, where the upload size limit is much, much higher; therefore this is our correct answer. Changing the API Gateway file upload limit is not possible, because this is not something that is configurable; therefore that is the wrong answer.
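As a quick sketch of the pre-signed URL approach (bucket and key names here are illustrative):

```python
import boto3

s3 = boto3.client("s3")

# Generate a time-limited URL that lets the client PUT the file straight
# to S3, bypassing the API Gateway 10 MB payload limit.
url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "video-uploads-bucket", "Key": "uploads/video.mp4"},
    ExpiresIn=900,  # URL valid for 15 minutes
)
# The client then uploads with a plain HTTP PUT, for example:
#   requests.put(url, data=open("video.mp4", "rb"))
```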
50:20 How can we ensure uninterrupted service for in-flight requests when an instance operating behind a load balancer is being deregistered or taken out of service? For this we can enable connection draining on the Application Load Balancer. When an instance is marked for deregistration, for example during a scale-in event, connection draining, also known as deregistration delay, allows the load balancer to finish serving the in-flight requests to that instance before taking it out of service. Our second option, disabling instance termination, is not good because it would hinder the scale-in process of the Auto Scaling group.
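A minimal sketch of tuning the deregistration delay with boto3 (the target group ARN is a placeholder):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Give in-flight requests up to 120 seconds to complete before a target
# is fully deregistered (the default is 300 seconds).
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/abc123",
    Attributes=[
        {"Key": "deregistration_delay.timeout_seconds", "Value": "120"},
    ],
)
```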
51:05 You have diverse structured and unstructured data for analysis; where is the best place to store it for processing? The correct answer here is an AWS data lake using S3, which is ideal for diverse data types: it offers scalable, cost-effective storage accommodating both structured and unstructured data for efficient processing and analysis. Both Redshift and DynamoDB are wrong answers here. Redshift suits structured data analysis but is less flexible and cost-effective for unstructured data or large-scale storage. DynamoDB is for structured operational data, not ideal for diverse or large volumes of structured and unstructured data intended for analysis.
51:50 How would you analyze data in S3 using SQL queries? We can use Amazon Athena for this, which is a serverless, interactive querying service that allows you to analyze data directly in S3 using SQL; it's designed for ad hoc querying and analysis of large-scale datasets. Let's look at Redshift Spectrum: it is a feature of Amazon Redshift, a data warehouse service in AWS, that enables users to run SQL queries against data stored directly in Amazon S3 without needing to load it into the Redshift cluster. Our use case is simply to query and analyze data in S3, so bringing Redshift into the equation is not necessary; therefore this is not the best answer. AWS DMS, the Database Migration Service, is not meant for querying or analyzing data in S3, therefore it's not a correct answer.
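As a small sketch of an ad hoc Athena query from boto3 (the database, table, and output bucket are hypothetical; the table would already be defined over the S3 data):

```python
import boto3

athena = boto3.client("athena")

# Kick off a serverless SQL query directly against data in S3.
resp = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM access_logs GROUP BY status",
    QueryExecutionContext={"Database": "weblogs"},
    ResultConfiguration={"OutputLocation": "s3://athena-results-bucket/queries/"},
)
print(resp["QueryExecutionId"])  # poll get_query_execution() until it completes
```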
52:47 Let's go over some analytics services. Amazon Redshift is a petabyte-scale data warehouse. Redshift Spectrum is a feature of Amazon Redshift which allows you to run SQL queries from Redshift directly against data stored in Amazon S3. Amazon Athena allows you to query data in S3 in an ad hoc manner using SQL. Amazon EMR is a managed Hadoop framework with support for Spark, HBase, Presto, and so on. AWS Glue allows you to create ETL jobs and also provides the Glue Data Catalog. QuickSight is a business intelligence and reporting tool.
53:37 How should you organize IoT sensor data in your AWS S3 data lake for effective querying based on date? You can partition the data by date and store it using the Apache Parquet format. This approach enables efficient querying based on date, because partitioning by date allows direct access to specific time frames, and using the Parquet format enhances query performance due to its columnar storage and compression benefits. Now the second option, partition the data by sensor ID and store it in CSV format, is not a good option. Partitioning by sensor ID might be helpful for specific sensor-based queries, but it doesn't efficiently address the need for querying data by date; additionally, storing data in CSV format could hinder query performance compared to using Parquet, due to CSV's lack of optimization for columnar storage and compression.
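A minimal sketch of writing date-partitioned Parquet, assuming pandas with pyarrow and s3fs installed; the bucket, prefix, and column names are illustrative:

```python
import pandas as pd

# Writing with partition_cols produces S3 keys like
#   s3://sensor-lake/readings/date=2024-01-15/<part>.parquet
# so engines like Athena can prune partitions when filtering by date.
df = pd.DataFrame(
    {
        "sensor_id": ["a1", "a2"],
        "reading": [21.4, 19.8],
        "date": ["2024-01-15", "2024-01-15"],
    }
)
df.to_parquet(
    "s3://sensor-lake/readings/",
    partition_cols=["date"],  # one folder per date value
    engine="pyarrow",
)
```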
54:42 Given that each department within a company uses its own RDBMS database, and the company aims to consolidate all this data into a unified data warehouse for comprehensive, regular analysis, what would be the most cost-effective approach to achieve this goal? For this we can use DMS to replicate and synchronize data from the various departmental RDBMS databases into Redshift, a specialized data warehousing platform, for further analysis. The second option is to export all the databases to an Amazon data lake; this is not a good option because the question specifically asked for a data warehouse.
55:26 What measures can a company take to ensure its encrypted-at-rest Redshift data warehouse is backed up in another region for disaster recovery? Broadly, two steps are necessary. The first is to enable cross-region snapshots so that Redshift snapshots are copied from one region to another. The next step is to create a snapshot copy grant in the target region. While doing this we need to specify a KMS key, and this key should be from KMS in the target region, as KMS keys are region-specific. Therefore the first answer, where we specify KMS keys from the target region, is the correct answer; the second answer is wrong because there we are specifying KMS keys from the source region.
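As a rough sketch of those two steps with boto3, under the assumption that the grant is created against a KMS key in the destination region; regions, names, and the key ARN are placeholders:

```python
import boto3

# Step 1: in the destination region, create a snapshot copy grant that
# references a KMS key from that same region (KMS keys are region-specific).
redshift_dest = boto3.client("redshift", region_name="us-west-2")
redshift_dest.create_snapshot_copy_grant(
    SnapshotCopyGrantName="dr-copy-grant",
    KmsKeyId="arn:aws:kms:us-west-2:111122223333:key/example-key-id",
)

# Step 2: on the source cluster, enable cross-region snapshot copy,
# referencing the grant created above.
redshift_src = boto3.client("redshift", region_name="us-east-1")
redshift_src.enable_snapshot_copy(
    ClusterIdentifier="analytics-cluster",
    DestinationRegion="us-west-2",
    RetentionPeriod=7,  # days to keep the copied snapshots
    SnapshotCopyGrantName="dr-copy-grant",
)
```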
56:17 Medical records stored in an S3 bucket contain important patient information within the first 100 bytes of each file. What approach can efficiently retrieve this specific information from all files in the bucket? S3 Select with the scan range parameter efficiently retrieves specific data from S3 objects without fetching entire files, making it perfect for getting the initial 100 bytes of each file; therefore this is the correct answer. Amazon Athena is designed for SQL-based querying of data in S3, and it operates on entire objects rather than fetching specific bytes or portions of files, so this is not the right answer. And an AWS Glue job is intended for ETL tasks and does not directly retrieve specific bytes from files in an S3 bucket; therefore this too is the wrong answer.
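A minimal sketch of S3 Select with a scan range, assuming CSV-like records; the bucket, key, and record layout are illustrative, and the scan range roughly limits processing to the first 100 bytes:

```python
import boto3

s3 = boto3.client("s3")

# Ask S3 to scan only the leading bytes of the object server-side,
# instead of downloading the whole file.
resp = s3.select_object_content(
    Bucket="medical-records-bucket",
    Key="records/patient-0001.csv",
    ExpressionType="SQL",
    Expression="SELECT * FROM S3Object s",
    InputSerialization={"CSV": {"FileHeaderInfo": "NONE"}},
    OutputSerialization={"CSV": {}},
    ScanRange={"Start": 0, "End": 100},
)
for event in resp["Payload"]:  # the response arrives as an event stream
    if "Records" in event:
        print(event["Records"]["Payload"].decode("utf-8"))
```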
57:14 How would you design an application on AWS to receive messages from various client devices and display visualizations in real time? We can use Amazon Kinesis Data Streams to receive messages from the client devices in real time; these messages would then flow into Kinesis Data Firehose, which acts as a delivery service directing the data to OpenSearch for storage and indexing. To visualize this data, Kibana can be used with OpenSearch, allowing you to create real-time visualizations and insights from the ingested messages. This meets all the requirements of our scenario, therefore it is the correct answer. In the second approach, it is suggested that we use Amazon SQS to receive messages, Lambda for processing and storing data into S3, and then visualize it using QuickSight. Using S3 as the data store and QuickSight for visualization does not provide real-time visualizations the way the first approach does; therefore this is not the best answer.
58:23 Let's go over some Kinesis services. Amazon Kinesis Data Streams ingests data streams at scale. Kinesis Data Firehose loads data streams into data stores like S3, Redshift, OpenSearch, etc. Kinesis Data Analytics is for real-time analytics on streaming data. Kinesis Video Streams streams video from devices to AWS.
58:55 DynamoDB has a lot of historical data that is only needed for analysis; how can we reduce the storage costs and perform ad hoc analysis on it? We can export the DynamoDB data to S3 in a compressed format and then delete the historical data from DynamoDB. Storing the data in S3 is going to reduce our storage cost, and we can use Amazon Athena to query and analyze the data in S3 as needed, so this is our correct answer. Our second option is to move the DynamoDB data to Redshift using ETL jobs. Moving NoSQL DynamoDB data to a relational data store like Redshift using ETL jobs is unnecessarily complex without any real benefits; therefore this is the wrong
answer. 59:48 You are running an Oracle database on an On-Demand EC2 instance, and the same database will continue to be in operation for several years. How can you reduce costs? We can do this by using a Reserved EC2 Instance, which offers significant cost savings compared to an On-Demand Instance for long-term, predictable workloads like our Oracle database. Spot Instances are significantly cheaper than On-Demand Instances, but they are not suitable for long-term, consistent workloads: they can be interrupted for various reasons, which makes them unsuitable for a multi-year continuous operation where uptime is crucial. Therefore using Spot Instances is the wrong answer. Our third option is to migrate to RDS; the question specifically states that the same database will continue to be in operation for several years, therefore migrating to RDS is not indicated here, and hence this is not the correct answer.
01:00:55 Let's look at the various EC2 instance launch types. On-Demand: available immediately, without long-term commitment. Reserved: provides significant cost savings for committed usage. Spot Instances: available at lower cost from unused capacity, and they can be taken away at any time. Dedicated Hosts: physical servers dedicated solely to your use, with per-host billing; you have control over EC2 placement, and they can be useful for regulatory needs. Dedicated Instances: these are instances on hardware dedicated to you; there is per-instance billing, and you do not have control over instance placement.
Change the subnet of an EC2 instance: this is not possible; you cannot change the subnet of an EC2 instance. Instead, you can create an AMI from the EC2 instance and launch it in the desired subnet, in which case it is essentially a new EC2 instance. And the wrong answer is to stop the instance and change the subnet; you cannot do that.
01:02:12 How can you streamline and standardize the management of multiple AWS accounts across various company departments? For this we can use AWS Organizations, which allows you to centrally manage and govern multiple accounts by creating a hierarchy, applying policies across the organization, and simplifying administrative tasks; hence this is our best answer. Cross-account roles are used to delegate access between AWS accounts securely; they are more about access control than centralizing management tasks across accounts, therefore this is not a correct answer. AWS Direct Connect is an obvious wrong answer: it is used for establishing a dedicated network connection between an on-premises network and AWS, and it's not designed for managing multiple AWS accounts together.
01:03:05 Let's go over some AWS Organizations-related services. First is AWS Organizations: it is used to manage and govern multiple accounts centrally in a hierarchy. Next we have Control Tower, for automated setup of multi-account environments; Service Control Policies, for enforcement of organizational policies; and Service Catalog, which is a catalog of approved AWS
services. 01:03:36 How can you prevent accounts in your AWS organization from launching EC2 instances without a specific tag? For this we can use Service Control Policies, or SCPs, in AWS Organizations, which enable the enforcement of rules across the organization's account hierarchy. By attaching an SCP to the organization's root, you can mandate specific requirements, such as tagging, across all accounts within the organization; this ensures consistent compliance and control over EC2 instance launches. Using an AWS Config rule is incorrect in this context: while AWS Config rules can monitor resource configurations for compliance, they don't directly prevent EC2 instance launches without specific tags. And Amazon CloudWatch alarms also don't directly prevent EC2 instance launches without tags; CloudWatch alarms are primarily used for monitoring and triggering actions based on specific conditions in your AWS environment.
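A rough sketch of such an SCP created with boto3; the tag key, policy name, and root ID are hypothetical, and the organization must have all features enabled:

```python
import json
import boto3

org = boto3.client("organizations")

# Deny launching EC2 instances unless a CostCenter tag is supplied
# in the launch request (the Null condition is true when the tag is absent).
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {"Null": {"aws:RequestTag/CostCenter": "true"}},
        }
    ],
}

policy = org.create_policy(
    Name="require-costcenter-tag",
    Description="Block EC2 launches without a CostCenter tag",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
# Attach at the root so it applies to every account in the organization:
# org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"],
#                   TargetId="r-examplerootid")
```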
01:04:41 How can you create a centralized log of events that occur within an AWS organization and its member accounts? Let's first understand what CloudTrail is. AWS CloudTrail is an AWS service that logs and tracks API activity within an AWS account, providing detailed records of the actions taken and helping with security, compliance, and troubleshooting. An organization trail centralizes event logging specifically for an AWS organization and its member accounts, offering comprehensive monitoring and tracking capabilities at the organization level. Therefore our correct answer is an organization trail, not a plain CloudTrail trail. And CloudWatch Logs primarily captures logs from various AWS services, but it doesn't inherently centralize event logging for an entire AWS organization.
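A minimal sketch of creating an organization trail, assuming it runs from the organization's management account with CloudTrail trusted access enabled and a bucket policy that allows CloudTrail writes; the names are illustrative:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.create_trail(
    Name="org-audit-trail",
    S3BucketName="org-cloudtrail-logs-bucket",
    IsOrganizationTrail=True,  # collect events from all member accounts
    IsMultiRegionTrail=True,   # capture activity in every region
)
cloudtrail.start_logging(Name="org-audit-trail")
```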
01:05:34 You have deployed an application across various AWS regions to serve a global user base. How can you efficiently direct user traffic to the regional endpoint with the lowest latency, and swiftly reroute traffic if a regional endpoint fails? Both AWS Global Accelerator and Route 53 can route user traffic to a regional endpoint based on lowest latency and, in case of a failover, route traffic to a healthy endpoint. However, there is a difference in how quickly failover routing happens: with Global Accelerator the failover is quick, while with Route 53 there may be a delay because of the way DNS works and how its changes are propagated. Therefore AWS Global Accelerator is the better choice in this scenario. CloudFront doesn't explicitly offer similar capabilities; it's primarily focused on delivering cached content quickly to end users from its edge locations, therefore it is not the correct answer.
01:06:39 Let's go over some routing services. Elastic Load Balancing comes in several kinds. The Application Load Balancer is suitable for web application traffic and does not support a static IP address. The Network Load Balancer can be used when you want to work directly with TCP or UDP traffic, and it supports a static IP address. The Classic Load Balancer is legacy and no longer recommended for use. The Gateway Load Balancer is for virtual appliances. API Gateway is used to route requests to APIs based on rules; it supports stateless, secure HTTP and REST APIs and the stateful WebSockets protocol. CloudFront is a content delivery network and can cache content at the edge; using the Lambda@Edge feature, you can run Lambda functions at the edge locations. Global Accelerator utilizes static anycast IP addresses to direct traffic over the AWS global network; it supports rapid failover by dynamically rerouting traffic to healthy endpoints within seconds. Route 53 is a DNS management service and supports DNS-based failover, which may encounter some delays due to DNS caching.
01:08:08 How can you ensure that an application service running on EC2 instances behind an Application Load Balancer can be accessed by multiple clients that require a fixed or static IP address to invoke the service? As per our scenario, the Application Load Balancer is the entry point for the service, and assigning a static IP address directly to an Application Load Balancer isn't possible. So, to ensure a fixed IP as the entry point into our application, we can introduce a Network Load Balancer preceding the Application Load Balancer; a Network Load Balancer can have a static IP address, so that's our correct answer. Using CloudFront is incorrect here, since CloudFront operates through a distributed network of edge locations and doesn't provide a singular static IP address for exposure or use.
01:09:04 A multiplayer gaming application hosts game servers on various EC2 instances. How can we ensure that players in a particular game session always connect to their designated game server or EC2 instance? For this we can use a custom routing accelerator, which empowers you to apply your own custom logic for directing users to a particular Amazon EC2 destination. This capability maintains the advantages of Global Accelerator while granting increased control and flexibility in managing traffic distribution; therefore this is our correct answer. Route 53, on the other hand, excels at DNS-based routing but does not offer the same low-level control and optimizations as the custom routing accelerator for managing traffic within the AWS network at a deeper level; therefore it is not the correct answer.
01:09:56 How can we structure our solution to ensure that requests for specific subdomains are directed to the corresponding ECS clusters catering to each service? For this we can set up an Application Load Balancer to direct requests to the relevant ECS cluster by leveraging host-based routing. So a request for support.example.com is routed to the support target group, and the support ECS cluster would be associated with this particular target group. Next we configure Route 53 to manage the main domain example.com and use a wildcard entry like *.example.com to direct all subdomain requests to the Application Load Balancer. So support.example.com, app.example.com, and web.example.com all get routed through the Application Load Balancer for further processing and allocation to the correct ECS cluster. That was our best answer. Here's another approach: we could use individual Application Load Balancers for each subdomain, assign the corresponding ECS cluster to receive requests from each, and in Route 53 create entries for each subdomain so that they point directly to their corresponding Application Load Balancers. This second approach can lead to unnecessary complexity and higher costs compared to the more consolidated approach discussed previously, so it is not a good solution and is therefore the wrong answer.
01:11:32solution and therefore the wrong
01:11:36answer you have an AWS hosted web
01:11:39application with application load
01:11:40balancer as the entry point the deployed
01:11:43application and code cannot be changed
01:11:46so how can you dynamically modify the
01:11:48login page delivered to end users based
01:11:51on device type they are using
01:11:54for this we can employ cloudfront with
01:11:56Lambda atage function to dynamically
01:11:59adjust the login page according to the
01:12:01device type from which it is requested
01:12:03so Lambda at Edge extends the
01:12:05capabilities of AWS Lambda to the edge
01:12:07locations of Amazon cloudfront allowing
01:12:10you to run Lambda functions in response
01:12:12to cloudfront events so this allows you
01:12:14to customize and enhance content
01:12:17delivery or security and user experience
01:12:20at the edge of the AWS Network the
01:12:22second approach Ro of using API Gateway
01:12:25with Lambda functions to customize login
01:12:27Pages dynamically is complex and not
01:12:30viable since this is a web application
01:12:33serving Pages using API Gateway is not
01:12:36warranted here therefore this is the
01:12:38wrong
answer. 01:12:40 You have deployed a web application on an ECS cluster with an Application Load Balancer in front and CloudFront for caching at the edge. How can you ensure that the Application Load Balancer can be accessed only via CloudFront and not directly? For this we can add custom HTTP headers to the requests that CloudFront sends to the Application Load Balancer: you configure your CloudFront distribution to add these headers, and then configure your Application Load Balancer to only accept requests that contain them. That's our correct answer. Now let's look at the second approach: ensure that the Application Load Balancer's security groups restrict access to only the CloudFront IPs. The problem with this approach is that CloudFront IPs can change over time; therefore this is not a viable approach, and hence it is the wrong answer.
01:13:36viable approach and hence the wrong
01:13:40answer in a gaming application that uses
01:13:43UDP and is deployed across multiple
01:13:45regions Global game plays connect to the
01:13:47nearest healthy Regional endpoint for
01:13:49optimal latency how can we establish a
01:13:52fixed IP address as the entry point into
01:13:55the gaming application AWS Global
01:13:58accelerator provides a pair of static
01:14:00anycast IP addresses that act as a fixed
01:14:03entry point routing traffic through the
01:14:05AWS Global
01:14:07Network to your gaming applications
01:14:10Regional endpoints so that's the best
01:14:13solution for this
01:14:14scenario let's look at the second option
01:14:17configure Route 53 to map a domain name
01:14:21to elastic IP addresses of Regional
01:14:24endpoints while this may be feasible but
01:14:27it is not the most optimal solution for
01:14:30the given scenario as we are looking for
01:14:32a direct fixed IP address for This
01:14:35Global
01:14:38application you are rolling out a new
01:14:40version of your e-commerce application
01:14:42on AWS and aiming for fast yet controll
01:14:46transition for it users from the old to
01:14:48the new version both the versions of the
01:14:51application will continue to exist and
01:14:53serve users until all users have been
01:14:56transitioned how would you manage this
01:14:58transition
01:15:00effectively for this we can use AWS
01:15:02Global
01:15:03accelerator and blue green deployment
01:15:06strategy so with AWS Global accelerator
01:15:09implementing a blue green deployment
01:15:11strategy becomes seamless it allows for
01:15:14smooth transition and precise control
01:15:16over distribution of user traffic
01:15:18between your existing blue and new green
01:15:22deployments of your
01:15:23application therefore this is our
01:15:26correct answer let's look at the second
01:15:27option of using Route 53 Route 53 relies
01:15:31on DNS based routing and faces
01:15:34challenges with DNS caching causing
01:15:36potential delays in propagating changes
01:15:38during blue green user traffic
01:15:41transition therefore it doesn't offer
01:15:44the granular traffic management for sift
01:15:48transitions between environments like
01:15:50AWS Global accelerator does hence it is
01:15:54the wrong
answer. 01:15:58 As an account administrator, how can you track configuration changes in your account over a period of time and monitor them for compliance? We can use AWS Config for this, which provides compliance monitoring and configuration change tracking. AWS Systems Manager is for unified resource management and automation of operational tasks, patching, etc., and AWS Control Tower is for automated setup of multi-account environments and centralized account management within AWS Organizations. Therefore Systems Manager and Control Tower are not the right
answers. 01:16:39 Let's go over some AWS config and monitoring services. AWS Config: audits configuration changes and compliance. Systems Manager: centralized ops data and automation. Resource Access Manager: shares AWS resources across accounts. CloudWatch: monitoring, logs, metric collection, and alarms. CloudTrail: records AWS API activity for auditing of resource changes. AWS X-Ray: request
tracing. 01:17:16 How would you automate common maintenance and deployment tasks across a fleet of EC2 instances? You can use the AWS Systems Manager Agent for this: install the Systems Manager Agent on these EC2 instances, after which you can centrally manage and automate various operational tasks on them. That's our correct answer. Amazon Inspector is a security assessment service; for example, you could use it to find vulnerabilities on EC2 instances. And AWS Config is for evaluating and auditing the configuration of AWS resources like EC2 instances. Therefore both Amazon Inspector and AWS Config are wrong answers
here. 01:18:04 How can you address the issue of some clients overloading the company's system by making an excessive number of API calls through an Amazon API Gateway REST API? We can solve this problem by implementing per-client throttling limits in API Gateway and utilizing API keys as client identifiers. This allows you to control and limit the number of requests each client can make, thus preventing system overload, and this is our chosen solution. AWS X-Ray primarily focuses on providing insights into application performance by tracing requests as they move through various services; therefore it is not an optimal solution for the given scenario.
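A rough sketch of per-client throttling with usage plans and API keys; the API id, stage, names, and limits are illustrative, and the API's methods would need the API-key-required setting enabled:

```python
import boto3

apigw = boto3.client("apigateway")

# Each client gets an API key tied to a usage plan that caps its rate.
plan = apigw.create_usage_plan(
    name="standard-tier",
    throttle={"rateLimit": 100.0, "burstLimit": 200},  # requests/sec and burst
    quota={"limit": 100000, "period": "MONTH"},        # hard monthly cap
    apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],
)
key = apigw.create_api_key(name="client-acme", enabled=True)
apigw.create_usage_plan_key(
    usagePlanId=plan["id"], keyId=key["id"], keyType="API_KEY"
)
```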
01:18:51 Okay, a company has many resources which may be idle, underutilized, or unsecured. How can it generate actionable recommendations to optimize cost, security, and performance? For this we can use AWS Trusted Advisor, which provides actionable recommendations for optimizing cost, security, performance, and fault tolerance. AWS Cost Explorer, on the other hand, focuses mainly on cost management and analysis, and AWS Config monitors configuration changes but doesn't directly offer actionable recommendations for optimizing cost, security, and performance. Therefore AWS Trusted Advisor is our chosen solution, and not Cost Explorer or AWS
01:19:44and log into an Amazon E2 instance
01:19:47without relying on SSH key pairs or
01:19:50access
01:19:51keys for this we can use use AWS systems
01:19:54manager session manager it provides
01:19:57secure auditable and remote shell access
01:20:00to your Amazon ec2 instances without
01:20:04requiring open inbound ports or SSH Keys
01:20:07it allows you to connect to instance
01:20:09directly from the AWS Management console
01:20:13or through the AWS
01:20:15CLI therefore this is the correct answer
01:20:19our second option is to use AWS systems
01:20:22manager
01:20:23run command now that allows you to
01:20:25execute commands on one or more ec2
01:20:29instances remotely without the need for
01:20:32SSH or access keys since our question
01:20:35pertains to logging into ec2 instance
01:20:38and not running commands remotely this
01:20:40is not the right
answer. 01:20:43 How would you trace user requests through your application and visualize them to understand how your application and its underlying services are performing? For this you can use AWS X-Ray, which helps developers track and understand how requests flow through their applications, giving insights into performance issues, if any, within distributed systems. AWS CloudTrail doesn't trace the flow of requests within an application; rather, it tracks API-level actions and changes made to AWS resources, therefore it is not the correct answer. AWS CloudWatch is a monitoring and logging service, and tracing requests is not a native feature of the service, therefore it is not the correct
answer. 01:21:29 Capture all IP traffic flowing to and from a VPC: to monitor and capture inbound and outbound IP traffic within a VPC, VPC Flow Logs can be employed. This feature captures data about the IP traffic that traverses network interfaces within the VPC, and the collected log data can be directed to Amazon CloudWatch Logs or stored in Amazon S3. Neither Amazon CloudWatch nor AWS X-Ray is designed for this specific purpose: Amazon CloudWatch lacks the capability to independently capture this type of traffic, while AWS X-Ray traces API calls within an account and does not capture IP
traffic. 01:22:17 Seamlessly monitor network traffic in near real time on an EC2 instance for suspicious activities or security threats, continuously and without disrupting the instance: for this we can use Traffic Mirroring. Here we mirror traffic from the source EC2 instance to a target EC2 instance or appliance and run network monitoring tools on that target. Amazon Inspector is a wrong answer here because it does not monitor traffic directly on the EC2 instance, although it does help us find vulnerabilities on the instance. And VPC Flow Logs can capture network traffic information and send the logs to CloudWatch or S3, where they can be analyzed further; however, it is a wrong answer because it is not a real-time solution.
01:23:13 How do you leverage Chef or Puppet to automate AWS resource configuration and application deployment effectively? For this we can use AWS OpsWorks. OpsWorks streamlines the management of AWS resources and automates application deployment by utilizing Chef or Puppet as its core configuration management engines, so this is our correct answer. Let's look at the other options. CloudFormation: the question scenario specifically focuses on using the automation platforms Chef or Puppet for resource configuration and deployment, whereas CloudFormation's primary strength lies in infrastructure provisioning; therefore this is not the correct answer. Our third option is CodeDeploy, which is excellent for automating application deployments on AWS but doesn't offer the detailed configuration management capabilities of Chef or Puppet, and therefore this is the wrong
answer. 01:24:20 Roll out new features or changes gradually to a subset of users or servers before the full deployment: for this we can use a canary deployment, which is a progressive rollout of an application to a subset of users before rolling it out fully. Our second option, the blue/green deployment strategy, is not a progressive rollout; therefore it is not the right
answer. 01:24:50 Let's discuss some deployment strategies. Canary: an incremental or progressive rollout of new features or updates to a small subset of users or servers. A/B testing: here we perform split testing of two versions of something, like a web page, an app feature, etc., to gauge which version performs better. Blue/green: here we maintain two environments, blue and green; we can release the new version of our application in the green environment and switch traffic from one environment to the other to perform our
tests. 01:25:34 How can you architect a solution for an on-demand video streaming platform? The video files are stored in an S3 bucket, and the service should support dynamically adjusting video quality in real time for different bandwidths and devices, while ensuring optimal playback across varied formats. For this we can use AWS Elemental MediaConvert, which transcodes the video files to support various formats and adaptive bitrate streaming, while CloudFront caches and distributes the content globally; that's our solution. Let's look at the other options. MediaLive is geared towards live video processing and streaming; it's primarily used for encoding and streaming live video content to various devices. However, our use case is an on-demand video streaming platform which works with video files, therefore MediaLive is not the correct answer. Amazon Connect is a cloud-based contact center service which allows you to set up and manage customer contact centers efficiently, to handle calls, chats, and so on; it does not support video streaming, therefore it is the wrong
answer. 01:26:53 Let's go over some AWS Media Services. AWS Elemental MediaConvert is for file-based video transcoding. MediaLive is for live video processing and encoding. MediaPackage is for media content packaging, and MediaStore is durable media storage. MediaTailor is used for personalized ad insertion, and Amazon Interactive Video Service, or IVS, offers live streaming and interactive experiences.
01:27:26 You have a large collection of images and videos in a digital media library. How can you organize and categorize the media files efficiently based on their content? For this we can use Amazon Rekognition, which has comprehensive image and video analysis capabilities using machine learning and computer vision technologies. It can do object and scene detection, facial analysis, text detection, and so on; therefore it is suitable for categorizing the media files based on their content. Amazon Transcribe is not the correct answer: it is a speech-to-text conversion service; in other words, it transcribes audio files into accurate, timestamped
text. 01:28:15 Let's go over some Amazon AI services. Amazon Rekognition is used for image and video analysis. Amazon Polly does text-to-speech conversion. Lex is for creating chatbots. Comprehend has natural language processing capabilities, so it can be used for sentiment analysis or key-phrase extraction from text. Amazon Transcribe does audio-to-text conversion. Amazon SageMaker is a platform for building, training, and deploying machine learning models. Amazon Personalize creates personalized recommendations using ML algorithms. Textract extracts text and data from scanned documents. Amazon Forecast generates accurate forecasts based on time-series data using machine
learning. 01:29:14 For a smart-city traffic optimization project using sensors at intersections to monitor traffic patterns and adjust signals in real time, which combination of AWS services ensures secure data transmission to the cloud, device security, and support for an immediate response for traffic management? For this we can use AWS IoT Core, IoT Device Defender, and an AWS Lambda function: IoT Core acts as a message broker receiving messages from the sensors, IoT Device Defender monitors the devices to ensure security, and the AWS Lambda function processes the messages. Our second option is to use Amazon Kinesis Data Streams, Kinesis Data Firehose, and S3. Although you can use Kinesis Data Streams to receive messages from sensors, it is always better to use AWS services which are purpose-built for a use case, like IoT Core in this case, and in this option there is no specific provision for device security; therefore, comparatively, this is not a good
option. 01:30:28 Let's go over some IoT services. IoT Core serves as a message broker and has other features like a rules engine. IoT Device Defender monitors device behavior for security. IoT Device Management can be used for remote management of devices. IoT Greengrass extends AWS IoT cloud capabilities to the edge. IoT SiteWise can collect industrial data and monitor operations. IoT Analytics is used for analyzing data from IoT devices. IoT TwinMaker is used for digital representation of physical
devices. 01:31:17 How would you manage syncing data in real time between users and the backend, as well as enabling access to data while offline, for a gaming app being developed for both mobile and web on AWS? We could use AWS AppSync for this, which provides a managed GraphQL service that automatically handles data synchronization across different devices and the backend, and also enables offline access through caching mechanisms. This is useful for applications like chat applications, collaborative tools, gaming apps, or any application where multiple users need to interact with shared data in real time. Let's look at the second option: here you could use the WebSockets API of API Gateway, and it would work for the scenario. With a WebSockets API you have more granular control over the synchronization process, as it allows direct control over the WebSocket connections; however, unlike AppSync, offline functionality is not part of its core features and must be custom-built. Therefore, comparatively, this may not be the best
approach. 01:32:34 You have a set of AWS Lambda functions written in Python that all require the same custom module for data processing. How would you optimize the deployment and maintenance of this shared module across these functions? We can use Lambda layers for this. Lambda functions can be set up to fetch supplementary code and content as layers, which are zip archives containing libraries, custom runtimes, or dependencies. Layers enable the use of those libraries within the function without bundling them into the deployment package, thereby keeping the package size minimal. Our second option is to individually package the custom module with each Lambda function deployment. Duplicating the custom module across each function's deployment package increases the size of each package unnecessarily and complicates the process of maintaining and updating the shared module across multiple functions; therefore this is not a good option.
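A minimal sketch of publishing and attaching such a layer; the layer name, function name, and zip file are hypothetical, and for Python runtimes the zip must place the module under a python/ directory:

```python
import boto3

lam = boto3.client("lambda")

# Publish the shared module once as a layer; every function then
# references the layer ARN instead of bundling the module itself.
layer = lam.publish_layer_version(
    LayerName="shared-data-processing",
    Content={"ZipFile": open("shared_module.zip", "rb").read()},
    CompatibleRuntimes=["python3.12"],
)

# Attach the layer to an existing function:
lam.update_function_configuration(
    FunctionName="process-orders",
    Layers=[layer["LayerVersionArn"]],
)
```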
01:33:34 How can you increase the computing power allocated to your Lambda function? For this we must allocate more memory to the Lambda function. Remember that compute power for a Lambda function is allocated in proportion to the memory allocated to it, so a higher memory allocation automatically provides more CPU power. Increasing the timeout of the Lambda function doesn't help, so that's the wrong answer.
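For illustration, raising the memory setting (and therefore the CPU share) is a one-line configuration change; the function name is a placeholder:

```python
import boto3

lam = boto3.client("lambda")

# CPU scales with memory, so MemorySize is how you buy more compute;
# 1,769 MB corresponds to roughly one full vCPU.
lam.update_function_configuration(
    FunctionName="process-orders",
    MemorySize=1769,
)
```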
01:34:04 How do we set up delayed visibility for messages sent to an SQS queue, ensuring they stay hidden for a specific duration before becoming accessible for retrieval? For this we can add the DelaySeconds attribute to a message: it introduces an initial delay before the message becomes available for any consumer to retrieve. The message visibility timeout attribute controls how long a message stays invisible to other consumers after being retrieved by a consumer; this is not the correct answer because it refers to post-retrieval invisibility. Message retention in an SQS queue is the duration a message stays in the queue; after this time elapses, the message is automatically removed, ensuring queues do not retain messages indefinitely.
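A quick sketch of a per-message delay (the queue URL and body are placeholders; per-message delays apply to standard queues, while FIFO queues only support a queue-level delay):

```python
import boto3

sqs = boto3.client("sqs")

# This message stays hidden for 60 seconds before any consumer can
# receive it; DelaySeconds accepts 0-900 seconds.
sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/111122223333/orders-queue",
    MessageBody='{"order_id": 42}',
    DelaySeconds=60,
)
```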
01:35:03 How can we enable multiple consumers to efficiently read data from a shared Kinesis data stream shard without causing contention or performance issues? For this we can use the enhanced fan-out feature. In a traditional fan-out scenario, if consumers read data from a shared Kinesis shard, all consumers share the shard's read throughput, and if one consumer falls behind or is slow, it affects the others. Enhanced fan-out, on the other hand, allows each consumer to read its own copy of the data independently from the same shard, ensuring that consumers don't impact each other; therefore using enhanced fan-out is our chosen answer. Let's look at the second option, increasing the number of shards: as per our question scenario, multiple consumers need to access the same shared shard, therefore simply adding more shards won't really solve the problem, and hence this is the wrong
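A minimal sketch of registering an enhanced fan-out consumer; the stream ARN and consumer name are placeholders:

```python
import boto3

kinesis = boto3.client("kinesis")

# A registered consumer gets its own dedicated 2 MB/s of read throughput
# per shard instead of sharing the shard's throughput with others.
consumer = kinesis.register_stream_consumer(
    StreamARN="arn:aws:kinesis:us-east-1:111122223333:stream/clickstream",
    ConsumerName="analytics-service",
)
print(consumer["Consumer"]["ConsumerARN"])
# Reads then go through the SubscribeToShard push API (for example via
# the KCL 2.x) rather than plain GetRecords polling.
```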
answer. 01:36:05 Before you execute an operation, you need to find out whether you have the necessary permissions and configurations, and the potential impact of executing the operation. How can you do this? For this we can use the AWS CLI to execute the operation with the dry-run flag. It performs a simulation of the operation, reporting whether the request would have succeeded, but stops short of applying any modifications to resources. It's a great way to validate permissions, configurations, and the impact of an operation before actually performing it, so that's our correct answer. Our second option, using a test-run flag with the AWS CLI, is incorrect simply because there is no such thing as a test-run flag.
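The same dry-run behavior is available from boto3; as a sketch (the AMI ID is a placeholder), EC2 signals the simulated outcome through an error code:

```python
import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2")

# With DryRun=True the call is only simulated: DryRunOperation means the
# request would have succeeded; UnauthorizedOperation means it would not.
try:
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        DryRun=True,
    )
except ClientError as err:
    code = err.response["Error"]["Code"]
    if code == "DryRunOperation":
        print("Permissions OK; the launch would have succeeded.")
    else:
        print(f"Launch would fail: {code}")
```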
01:36:55 How can we effectively find detailed expenses by department in a company, given that each department operates its own distinct AWS accounts? For this we can organize the departments under AWS Organizations as organizational units, or OUs, and attach the department accounts to the corresponding OUs. Next, enable consolidated billing for the organization, and then use cost allocation tags to tag the resources in each departmental account. Now, using AWS Cost Explorer or reports, we can analyze costs by department based on the allocated tags; that's our solution. Our second option is to use AWS Budgets. AWS Budgets primarily focuses on setting and monitoring budget limits and sending alerts based on those thresholds; it does not provide detailed expenses by department, therefore it is not the best
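A small sketch of pulling tag-grouped costs from Cost Explorer, assuming a Department cost allocation tag has been activated in the billing console; the tag key and dates are illustrative:

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer

# Monthly unblended cost, grouped by the Department cost allocation tag.
resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-04-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "Department"}],
)
for period in resp["ResultsByTime"]:
    for group in period["Groups"]:
        amount = group["Metrics"]["UnblendedCost"]["Amount"]
        print(period["TimePeriod"]["Start"], group["Keys"][0], amount)
```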
answer. 01:38:07 How would you build a cloud-based customer support service that incorporates both voice and chat communication channels? For this we can use Amazon Connect, which is a cloud-based contact center service: it enables businesses to set up and manage customer contact centers without requiring complex infrastructure or upfront costs. Both Amazon Lex and Amazon Comprehend are incorrect answers. Amazon Lex is ideal for text-based interactions and chatbot creation, while Amazon Comprehend, which has natural language processing capabilities, can extract valuable insights from large volumes of text data.
01:38:52 Let's go over some sample questions now. So, how to tackle the questions: read the question carefully, identify the key focus of the question, read the answers carefully, eliminate the wrong answers, and then zero in on the right
answer. 01:39:18 You are tasked with fortifying the security of a high-traffic e-commerce platform hosted on AWS. The goal is to mitigate common web vulnerabilities and secure sensitive customer data. Which combination of Amazon security services would best suit the scenario? So this question focuses on mitigating common web vulnerabilities and securing sensitive customer data.
Option A: implement AWS Shield Advanced for DDoS protection and AWS Web Application Firewall to safeguard against common web exploits and SQL injection attacks. Here, AWS Shield Advanced provides protection against DDoS attacks; however, the question does not specifically refer to DDoS attacks. Using AWS Web Application Firewall is good here because it safeguards against common web exploits, but securing sensitive customer data is not directly addressed in this option; therefore this option appears to be incorrect.
Option B: utilize Amazon GuardDuty for intelligent threat detection and Amazon Inspector for continuous assessment of security vulnerabilities within the application code. Although GuardDuty offers threat detection and Amazon Inspector continuously assesses security vulnerabilities, this combination does not directly focus on mitigating common web vulnerabilities or encrypting sensitive customer data, as required for an e-commerce platform's security; therefore this is not the correct answer either.
Option C: deploy Amazon CloudFront with AWS WAF for content delivery and application-level protection, and AWS KMS for encryption of the customer payment information stored in the database. This option combines Amazon CloudFront with AWS Web Application Firewall, or WAF, offering protection at the edge for content delivery and application-level security against common web exploits; additionally, AWS KMS ensures encryption of sensitive customer payment information, meeting the requirement to secure sensitive data. So this appears to be the correct answer; however, we still have option D, so let's look at that.
Option D: employ AWS Security Hub for centralized security monitoring and AWS IAM for fine-grained access control over customer data access. While AWS Security Hub facilitates centralized security monitoring and AWS IAM controls access, this combination doesn't address specific measures for mitigating common web vulnerabilities or encrypting sensitive customer data; therefore this is not a correct answer. Hence option C is our correct answer.
01:42:22 You are designing a secure architecture for a healthcare application handling sensitive patient records on AWS. The requirement is to ensure compliance with healthcare regulations and protect patient data from unauthorized access. Which security solution would you suggest? So the focus here is on ensuring compliance and protecting patient data; let's look at the options.
Option A: utilize AWS IAM for access control and AWS Key Management Service for encrypting patient records stored in Amazon S3. Here, while AWS IAM controls access and AWS KMS encrypts data, this option does not focus on ensuring compliance with healthcare regulations; therefore it does not appear to be a good answer. Let's look at the other options.
Option B: implement Amazon Detective for threat detection and AWS CloudTrail for monitoring API activity within the application to ensure a secure application posture. This option does not focus on compliance or on active protection of sensitive patient records; therefore it is not a good answer either.
Option C: deploy AWS WAF in conjunction with AWS Shield Advanced for DDoS protection and perimeter security. These are good security measures, but again there is no focus on compliance or on protecting sensitive patient records, which is a requirement for this healthcare application; therefore this does not appear to be a good answer either.
Option D: utilize Amazon Macie for identifying and protecting sensitive data and AWS Config for continuous assessment of resource configurations against compliance rules. Amazon Macie helps us identify sensitive patient data and therefore protect it, and AWS Config allows continuous evaluation of resource configurations against predefined compliance rules, ensuring adherence to healthcare regulations. Therefore, among the options presented, option D is the best answer.
01:44:43Global content delivery for a high
01:44:45demand media streaming service on AWS
01:44:49this service streams live events to
01:44:51users worldwide requiring low latency
01:44:54access and fault tolerance which
01:44:56combination of AWS Services would best
01:44:58address the complexities of efficient
01:45:00Global content delivery for this media
01:45:02streaming
01:45:03platform so the focus here is on live
01:45:06media streaming and Global content
01:45:09delivery let's look at the options
01:45:12option A Implement AWS Elemental media
01:45:15live for live video encoding and AWS
01:45:18Elemental media package for content
01:45:20packaging utilizing Amazon cloudfront
01:45:23for Global content delivery with
01:45:27customized Edge
01:45:28caching so here ews Elemental media live
01:45:32provides us with realtime encoding and
01:45:35media package can package the content
01:45:37for delivery Cloud fronts Edge caching
01:45:41ensures low latency Global content
01:45:43delivery so overall this solution meets
01:45:47the requirements laid out in the
01:45:49question hence hence it is a good
01:45:51candidate for being a correct answer
01:45:54let's look at other options option b
01:45:57deploy Amazon Route 53 with latency
01:46:00based routing and AWS Elemental media
01:46:03store for optimized content caching and
01:46:06delivery to Global
01:46:07viewers so here while Route 53
01:46:10facilitates latency based routing AWS
01:46:13Elemental media store focuses on storage
01:46:16rather than Edge
01:46:17caching making it relatively less suit
01:46:20able for optimized content delivery in a
01:46:23live media streaming
01:46:25scenario hence this is not a good answer
01:46:29option C utilize AWS Elemental media
01:46:32connect for secure and reliable video
01:46:34transport and AWS Global accelerator for
01:46:38optimized Global routing of video
01:46:40streams to users media connect manages
01:46:44secure video transport and Global
01:46:47accelerator doesn't directly handle
01:46:49content delivery
01:46:51optimization therefore this is not a
01:46:53good option option D Implement AWS
01:46:57Direct Connect for dedicated network
01:46:59connections between regions and Amazon
01:47:01cloudfront with customized Edge
01:47:03locations for optimized content caching
01:47:05and delivery AWS Direct Connect is a
01:47:08dedicated high-speed network connection
01:47:10between an on promise Network and AWS
01:47:13Cloud this has nothing to do with live
01:47:16media streaming to a global audience
01:47:19therefore this is not the correct answer
01:47:22hence we can conclude that option A is
01:47:25our correct
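As a rough illustration of how option A's pieces fit together, the following boto3 sketch creates a MediaPackage channel and an HLS endpoint. It assumes a MediaLive channel is already pushing the live feed to MediaPackage, and the channel and endpoint IDs are hypothetical.

```python
import boto3

mediapackage = boto3.client("mediapackage")

# Channel that receives the live feed produced by AWS Elemental MediaLive.
channel = mediapackage.create_channel(Id="live-event-channel")

# HLS origin endpoint that packages the stream for playback.
endpoint = mediapackage.create_origin_endpoint(
    ChannelId=channel["Id"],
    Id="live-event-hls",
    HlsPackage={"SegmentDurationSeconds": 6},
)

# The endpoint URL would then be set as the origin of an Amazon CloudFront
# distribution so that viewers are served from nearby edge caches.
print("Use as CloudFront origin:", endpoint["Url"])
```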
01:47:28 As an architect, you need to lay out a network design for a multinational gaming company launching a real-time multiplayer game hosted on AWS. The objective is to ensure low-latency, high-throughput connections for global gamers across various regions. Which AWS services would you choose to achieve this? The focus here is on low-latency, high-throughput connections for global gamers in a real-time multiplayer game. Let's look at the options.

Option A: utilize AWS Direct Connect for dedicated private connections between the gaming company's data centers and AWS regions, paired with Amazon Route 53 for latency-based routing of gaming traffic. The question scenario does not refer to any data centers, so using AWS Direct Connect is not meaningful here. This does not appear to be a good solution. Let's look at the other options.

Option B: deploy AWS Global Accelerator for optimized global routing and AWS Transit Gateway peering for management of network traffic between the gaming servers in different AWS regions. AWS Global Accelerator optimizes global routing for low-latency access, while Transit Gateway peering facilitates efficient traffic management among gaming servers in various regions, ensuring the high-throughput connections crucial for a real-time multiplayer gaming solution. This looks like a good answer, but let's look at the other options.

Option C: implement AWS VPN for encrypted connections between the gaming company's offices and AWS regions, leveraging AWS Global Accelerator for optimized global routing of multiplayer gaming traffic. The reference to the company's offices is misleading, as the question scenario makes no mention of them. Therefore, this is not the right answer.

Option D: utilize Amazon CloudFront for content delivery and Amazon Route 53 with geolocation routing to direct gaming traffic to the nearest AWS regions hosting the gaming servers. CloudFront assists with content delivery but is not specialized for gaming traffic going to the servers, and Route 53 provides DNS-based routing, which cannot reroute traffic as quickly as a real-time multiplayer gaming solution may require. Therefore, option D is not the right answer.

So option B, where we use AWS Global Accelerator and AWS Transit Gateway peering, is our best answer.
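For a sense of what option B involves, here is a minimal boto3 sketch that creates an accelerator and requests a Transit Gateway peering attachment between two regions. The gateway IDs, account number, and regions are hypothetical placeholders.

```python
import boto3

# AWS Global Accelerator: static anycast IPs that steer players onto the
# AWS backbone at the nearest edge (the service API is served from us-west-2).
ga = boto3.client("globalaccelerator", region_name="us-west-2")
ga.create_accelerator(
    Name="game-traffic-accelerator",
    IpAddressType="IPV4",
    Enabled=True,
)

# Transit Gateway peering between two regions hosting game servers.
ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.create_transit_gateway_peering_attachment(
    TransitGatewayId="tgw-0123456789abcdef0",
    PeerTransitGatewayId="tgw-0fedcba9876543210",
    PeerAccountId="111122223333",
    PeerRegion="eu-west-1",
)
```

A listener and endpoint groups would still need to be added to the accelerator, and the peering attachment must be accepted on the peer side before routes can use it.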
01:50:15 In designing a robust network infrastructure for an expanding e-commerce platform on AWS, you aim to securely segregate the development, testing, and production environments. Which configuration of AWS VPC networking best aligns with these requirements? Here the focus of the question is on segregating the development, testing, and production environments. Let's look at the options.

Option A: set up a single VPC with distinct subnets representing the development, testing, and production environments, using security groups and network ACLs to manage inter-subnet traffic. While this appears to be a viable option, let's see if we have better ways to isolate these environments.

Option B: deploy individual VPCs for each environment, establish VPC peering connections to regulate communication, and employ security groups for traffic control. Using separate VPCs for distinct environments ensures network segregation, VPC peering facilitates secure communication between them, and security groups add an extra layer of control over inbound and outbound traffic, thereby meeting the need for secure, segregated environments. This is better than option A and probably the right answer; however, let's look at the other options.

Option C: utilize a single VPC with unique security groups for each environment, ensuring isolation across different availability zones for secure communication. Although using one VPC with distinct security groups across availability zones might provide some isolation, it does not match the security standards and isolation achieved by separate VPCs. Therefore, this is not the best answer.

Option D: create separate VPCs for each environment and establish VPN connections, leveraging AWS Transit Gateway for managing traffic flow and secure inter-VPC communication. Creating separate VPCs and using VPN connections with Transit Gateway introduces complexity and is not as straightforward as VPC peering for secure inter-environment communication. Therefore, this is not a good option.

Hence, option B, using separate VPCs for each environment and VPC peering between them along with security groups, is the right answer.
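A minimal boto3 sketch of option B follows; the CIDR ranges and environment names are hypothetical, and a production setup would also create subnets, routes, and security group rules.

```python
import boto3

ec2 = boto3.client("ec2")

# One VPC per environment; the CIDR ranges are illustrative.
cidrs = {"dev": "10.0.0.0/16", "test": "10.1.0.0/16", "prod": "10.2.0.0/16"}
vpc_ids = {
    env: ec2.create_vpc(CidrBlock=cidr)["Vpc"]["VpcId"]
    for env, cidr in cidrs.items()
}

# Peer dev with test so those environments can talk where needed; security
# groups and route-table entries would still limit traffic to approved paths.
peering = ec2.create_vpc_peering_connection(
    VpcId=vpc_ids["dev"], PeerVpcId=vpc_ids["test"]
)
ec2.accept_vpc_peering_connection(
    VpcPeeringConnectionId=peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]
)
```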
01:52:58 You are architecting the data infrastructure for a media streaming platform on AWS, aiming to efficiently manage metadata and user profiles while saving media files in a scalable, cost-efficient manner. Updates to metadata may be streamed in real time to other services for analysis. How would you fulfill these requirements while ensuring high performance and scalability? Here we focus on managing metadata and user profiles, on streaming metadata updates in real time to other services, and on saving media files in a scalable, cost-efficient manner. Let's look at the options.

Option A: utilize Amazon RDS for managing metadata and user profiles, and leverage Amazon S3 for scalable storage and retrieval of media files. RDS is effective for managing metadata and user profiles, and S3 offers scalable, cost-effective storage for media files. However, streaming changes to metadata stored in RDS is a challenge here, so this may not be the best option.

Option B: implement Amazon DynamoDB for storing metadata and user profiles, paired with Amazon S3 for storing and efficiently accessing the media files. DynamoDB offers high scalability and performance for storing metadata and user profiles, while S3 provides scalable and cost-effective storage for media files. Since the metadata is in DynamoDB, DynamoDB Streams can be used to stream data changes to other services. Therefore, this appears to be a good answer; let's look at the other options.

Option C: store metadata and user profiles in DynamoDB, and save media files to Amazon FSx. DynamoDB for metadata and user profiles is a good choice; however, Amazon FSx is not as cost-effective as S3, especially in the long term, given that S3 provides many storage classes. Therefore, this is not the best answer.

Option D: utilize Amazon Redshift for managing metadata and Amazon EFS for storing media files, allowing Redshift Spectrum to perform analytics directly on EFS. Redshift is a data warehouse and is not suitable for storing and streaming real-time data, and for this use case storing media files in S3 is better than storing them in EFS. Therefore, this is not a good option.

Hence, option B, using DynamoDB and S3, is our best answer.
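A minimal boto3 sketch of the option B table, with DynamoDB Streams turned on so metadata changes can flow to other services; the table name and key schema are hypothetical.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Metadata table with DynamoDB Streams enabled, so every insert, update,
# and delete can be consumed by other services (e.g. a Lambda function)
# in near real time.
dynamodb.create_table(
    TableName="MediaMetadata",
    AttributeDefinitions=[{"AttributeName": "MediaId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "MediaId", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",
    },
)
```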
01:56:02 A ride-sharing application on AWS aims to process and analyze incoming location data from drivers and passengers. The application requires rapid data ingestion, real-time analysis and processing, and long-term, cost-efficient storage for further analysis of ride patterns. As a solutions architect, how would you meet these requirements? The focus here is on rapid data ingestion, real-time analytics, and long-term, cost-efficient storage of the data for further analysis. Let's look at the options.

Option A: deploy Amazon Kinesis Data Streams for data ingestion and processing, then store the data in S3; use AWS Glue for transforming and analyzing the data in real time and over the long term. Using Kinesis Data Streams for rapid data ingestion is good, as is storing data in S3, which is cost-efficient and scalable. However, AWS Glue is not a good option for real-time analytics, so we need to find a better solution.

Option B: implement Amazon Kinesis Data Firehose for data ingestion and direct the data to Amazon Redshift for real-time analytics and historical data storage to predict demand patterns. Kinesis Data Firehose simplifies data delivery to Redshift; however, Redshift is not well suited to real-time analytics. We need to find a better way to do this.

Option C: ingest data into SQS and use Lambda functions to save it into S3; use Amazon Athena to query and analyze the data in S3 quickly. Amazon Athena can run ad hoc SQL queries on data in S3, but it is not good for real-time analytics. Therefore, this is not the right answer.

Option D: utilize Amazon Kinesis Data Streams for ingestion and Kinesis Data Analytics for real-time analytics; store the data in Amazon S3 for long-term analysis of ride patterns. Amazon Kinesis Data Streams offers high-throughput data ingestion, Kinesis Data Analytics provides real-time analytics, and storing the data in Amazon S3 allows for scalable, cost-effective storage that enables further in-depth analysis of ride patterns. Therefore, option D is the correct answer.
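As a small boto3 sketch of the ingestion side of option D; the stream name, shard count, and sample record are hypothetical.

```python
import json
import boto3

kinesis = boto3.client("kinesis")

# Stream for incoming location updates.
kinesis.create_stream(StreamName="ride-locations", ShardCount=2)
kinesis.get_waiter("stream_exists").wait(StreamName="ride-locations")

# A driver app would push records like this; Kinesis Data Analytics can then
# run queries over the stream, and a delivery pipeline can land it in S3.
kinesis.put_record(
    StreamName="ride-locations",
    Data=json.dumps({"driverId": "d-42", "lat": 47.61, "lon": -122.33}).encode(),
    PartitionKey="d-42",
)
```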
01:58:40 A corporation with an on-premises PostgreSQL database of 70 terabytes is planning a migration to Amazon RDS in the us-west-1 region. The corporation has an existing VPN connection from their on-premises network to the AWS cloud with restricted bandwidth, and it requires a cost-efficient migration solution. Which approach would expedite the migration process while minimizing downtime? Here the focus is on migrating 70 terabytes of data: a VPN connection exists, and we need a quick, cost-efficient migration solution for the database. Let's look at the options.

Option A: use AWS DataSync to transfer the on-premises PostgreSQL data to S3 over the existing VPN connection, then write a script to import the data from S3 into RDS. The existing VPN connection has restricted bandwidth, so it may not be feasible for transferring 70 terabytes of data quickly. Let's see if we have better options.

Option B: set up a Direct Connect link from on-premises to AWS, and use the Database Migration Service (DMS) to migrate data from the on-premises PostgreSQL database to RDS. Setting up a Direct Connect link for this one-time migration is an expensive proposition and not a cost-effective solution. Let's look at the other options.

Option C: use a Snowcone device to transfer the on-premises PostgreSQL data to S3, then import the data into RDS using AWS CLI commands. This option is not correct because Snowcone devices cannot transfer 70 terabytes of data; their capacity is much smaller.

Option D: using a Snowball Edge device, transfer the on-premises PostgreSQL data to S3, then use DMS to migrate the data from S3 to RDS. A Snowball Edge device has the necessary capacity for 70 terabytes of data, and DMS can be used to migrate the data from S3 to RDS. Therefore, option D is the correct answer.
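A minimal boto3 sketch of the DMS leg of option D, assuming the S3 source endpoint, the PostgreSQL target endpoint, and a replication instance already exist; every ARN shown is a hypothetical placeholder.

```python
import json
import boto3

dms = boto3.client("dms")

# Full-load task moving the data that the Snowball Edge landed in S3
# into the RDS PostgreSQL target.
dms.create_replication_task(
    ReplicationTaskIdentifier="s3-to-rds-postgres",
    SourceEndpointArn="arn:aws:dms:us-west-1:111122223333:endpoint:SRC",
    TargetEndpointArn="arn:aws:dms:us-west-1:111122223333:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:us-west-1:111122223333:rep:INSTANCE",
    MigrationType="full-load",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
```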
02:01:12 A corporation intends to migrate 500 on-premises servers to AWS in the future. These servers operate across multiple VMware clusters within the corporation's data center. To prepare for the migration, the corporation aims to collect comprehensive information about its virtual machines, including their configurations and associated details, and then explore the data. What solution would best cater to these specified requirements? Here the focus is on collecting the configuration and associated details of virtual machines that are part of VMware clusters. Let's look at the options.

Option A: automate data retrieval from the on-premises servers using a script, employ the AWS CLI to store server details in AWS Migration Hub, and analyze the data directly within the Migration Hub console. Here we are expected to write a custom script for data retrieval; let's see if we have better options.

Option B: export configuration details from each VM server and upload them to S3, using ETL scripts to refine and explore the data. Here we are expected to go to each VM server and export its configuration details, which seems like a lot of work. Let's see if we have better options.

Option C: set up the AWS Agentless Discovery Connector virtual appliance on the on-premises network and enable data exploration within AWS Migration Hub for further analysis. Using the Agentless Discovery Connector to gather configuration information from the VMs in an automated fashion is good, and we can explore this data within AWS Migration Hub. This appears to be the correct answer; however, let's look at option D.

Option D: use the AWS Server Migration Service to migrate all VM servers and then export their configurations into S3 for further exploration. This answer is obviously incorrect because our requirement is not to migrate the VM servers, but only to collect information from them for now.

So option C, where we use the Agentless Discovery Connector virtual appliance and AWS Migration Hub, is the correct answer.
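Once the connector has been collecting for a while, the inventory can also be explored programmatically. Here is a minimal boto3 sketch using the Application Discovery Service API, which backs the discovery data shown in Migration Hub; the region and the printed attribute keys are assumptions based on the service's documented naming.

```python
import boto3

discovery = boto3.client("discovery", region_name="us-west-2")

# List the servers the Agentless Discovery Connector has found so far.
servers = discovery.list_configurations(configurationType="SERVER")
for item in servers.get("configurations", []):
    # Each item is a dictionary of discovered attributes for one VM.
    print(item.get("server.hostName"), item.get("server.osName"))
```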
02:03:53 To properly maintain a popular social media platform that allows image uploads, a company aims to prevent the sharing of inappropriate content. They seek an efficient solution, with minimal development effort, to flag inappropriate images and alert the administrators. What approach should a solutions architect take to fulfill these criteria? Here the focus is on identifying inappropriate images. Let's look at the options.

Option A: write a custom script using a computer vision library to detect and flag inappropriate images, and delete the inappropriate images upon detection. Here the suggestion is that we should write a custom script; while this may work, let's look at the other options.

Option B: using Amazon Rekognition, find inappropriate content in images, invoking this detection on the image upload event. Amazon Rekognition employs deep learning to analyze images and videos, so it is suitable for our current scenario and is a good solution. Let's look at the other options.

Option C: batch-process images using Amazon Comprehend and delete the ones identified as inappropriate. Amazon Comprehend is used to extract insights from text data, such as sentiment analysis and key-phrase extraction; it does not work with images. Therefore, this is not the right answer.

Option D: invoke Amazon Lex on image uploads to detect inappropriate images, and segregate the flagged images into separate storage. Amazon Lex is used to create chatbots, so this is not the correct answer.

Option B, where we use Amazon Rekognition, is the best solution for this scenario.
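To show what option B might look like in code, here is a minimal boto3 sketch that checks an uploaded image with Rekognition's moderation API and notifies administrators through SNS; the topic ARN and the Lambda-style entry point are hypothetical.

```python
import boto3

rekognition = boto3.client("rekognition")
sns = boto3.client("sns")

# Intended to be called from an S3 upload event (e.g. via Lambda).
def moderate_image(bucket: str, key: str) -> None:
    result = rekognition.detect_moderation_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MinConfidence=80,
    )
    labels = result["ModerationLabels"]
    if labels:
        # Alert the administrators that a flagged image needs review.
        sns.publish(
            TopicArn="arn:aws:sns:us-east-1:111122223333:image-moderation",
            Subject="Inappropriate image flagged",
            Message=f"s3://{bucket}/{key}: {[l['Name'] for l in labels]}",
        )
```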