Oracle ERP World

26. AMI Types (EBS vs Instance Store)

AMI = Amazon Machine Image. An AMI is a template that captures the customization of an EC2 instance, so instances launched from it are ready to use. Within a custom AMI we can bake in our own software configuration, OS and monitoring tools. Boot/configuration time is faster because all the software is pre-packaged in the AMI. AMIs are built for a specific region and can be copied across regions. We can launch EC2 instances from:
i) Public AMI: AWS provided. The most popular is the ‘Amazon Linux 2 AMI’.
ii) Custom AMI: created and maintained by the user.
iii) AWS Marketplace AMI: an AMI created and sold by someone else.

AMI Process (from an EC2 instance):
i) Start an EC2 instance and customize it.
ii) Stop the instance for data integrity.
iii) Build an AMI from it – this will also create EBS snapshots.
iv) Launch new instances from the custom AMI.
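The steps above can be pictured as a tiny pure-Python simulation of which artifacts each step produces. This is a toy model, not real AWS calls; the IDs and dictionary structure are illustrative only:

```python
# Toy model of the AMI build flow described above.
# IDs and structure are illustrative, not the real EC2 API.
def build_ami(region: str) -> dict:
    artifacts = {"instances": [], "amis": [], "snapshots": []}
    # i) start and customize an EC2 instance
    artifacts["instances"].append({"id": "i-builder", "region": region, "state": "running"})
    # ii) stop it for data integrity
    artifacts["instances"][0]["state"] = "stopped"
    # iii) building the AMI also creates a backing EBS snapshot
    artifacts["snapshots"].append({"id": "snap-1", "region": region})
    artifacts["amis"].append({"id": "ami-1", "region": region, "snapshot": "snap-1"})
    # iv) launch a new instance from that custom AMI
    artifacts["instances"].append({"id": "i-new", "region": region, "state": "running", "ami": "ami-1"})
    return artifacts

result = build_ami("us-east-1")
```

The key point the model makes explicit is that the snapshot is created as a side effect of building the AMI – which is why copying an AMI to another region also brings a snapshot along (see Question 2 below).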

us-east-1a >> EC2 instance >> AMI >> Create custom AMI >> use this custom AMI in another EC2 instance in us-east-1b

EC2 Image Builder: Used to automate the creation of virtual machine or container images – that is, it automates the creation, maintenance, validation and testing of EC2 AMIs. When EC2 Image Builder runs, it creates a ‘Builder EC2 Instance’ that builds the components and applies the customized software installs. A new AMI is then created from that instance. EC2 Image Builder automatically launches a ‘Test EC2 Instance’ from the newly created AMI and runs a set of tests defined in advance. We can skip the tests if we do not want to run them, but they validate that the AMI works properly, is secure, and that the application runs correctly. Once the AMI is tested, it is distributed. Image Builder is a regional service, but it lets you distribute the AMI across regions. Image Builder can run on a schedule (e.g. weekly, or whenever packages are updated) or be run manually. The service itself is free – you only pay for the underlying resources it uses.
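The pipeline stages described above (build, optional test, distribute) can be sketched as a short sequence. The stage names here are ours, not Image Builder API terms:

```python
# Illustrative sketch of the EC2 Image Builder pipeline stages described
# above; stage names are descriptive labels, not Image Builder API terms.
def run_pipeline(skip_tests: bool = False) -> list:
    stages = ["launch Builder EC2 Instance", "install components", "create AMI"]
    if not skip_tests:
        # Tests are optional but validate the AMI before distribution.
        stages += ["launch Test EC2 Instance", "run tests"]
    # Distribution (possibly cross-region) is always the final stage.
    stages.append("distribute AMI")
    return stages
```

Calling `run_pipeline(skip_tests=True)` drops the two test stages, mirroring the note that the tests can be skipped but distribution always happens last.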

Instance Store: A high-performance hardware disk attached to the EC2 instance. EBS volumes are network drives with good but limited performance; if we want higher performance, we can use a hard disk attached directly to the EC2 instance. An EC2 instance is a virtual machine, but it runs on a real hardware server, and some of these servers have disk space directly attached with a physical connection to the server. This gives better I/O performance and good throughput. If we stop or terminate an EC2 instance that has an instance store, the storage is lost – this is why it is called ephemeral storage. Use case: good for buffer/cache/scratch data/temporary content, but not for long-term storage; for long-term storage, EBS is the better choice. If the underlying server of the EC2 instance fails, we risk data loss, as the hardware attached to the instance also fails. So if we decide to use an instance store, it is our responsibility to maintain backups and replication.
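The trade-off above can be condensed into a rule-of-thumb picker. This is purely illustrative – real storage decisions involve many more factors (cost, size, IOPS requirements):

```python
# Rule-of-thumb storage picker based on the EBS vs instance store
# trade-offs above. Illustrative only.
def pick_storage(needs_persistence: bool, needs_max_io: bool) -> str:
    if needs_persistence:
        # Instance store is ephemeral: data is lost on stop/terminate
        # or hardware failure, so durable data belongs on EBS.
        return "EBS"
    if needs_max_io:
        # A physically attached disk beats a network drive on I/O.
        return "instance store"
    return "EBS"
```

Note that persistence wins over raw I/O: even a high-I/O workload must use EBS (io1/io2) if the data cannot be lost, unless the application itself handles replication and backups.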

Question 1: A company wants some EBS volumes with the maximum possible Provisioned IOPS (PIOPS) to support high-performance database workloads on EC2 instances. The company also wants some EBS volumes that can be attached to multiple EC2 instances in the same Availability Zone. As an AWS Certified Solutions Architect Associate, which of the following options would you identify as correct for the given requirements? (Select two) Answer: a. Use io2 Block Express volumes on Nitro-based EC2 instances to achieve a maximum Provisioned IOPS of 256,000 b. Use io1/io2 volumes to enable Multi-Attach on Nitro-based EC2 instances Explanation: EBS io2 Block Express is the next generation of Amazon EBS storage server architecture. It has been built for the purpose of meeting the performance requirements of the most demanding I/O intensive applications that run on Nitro-based Amazon EC2 instances. With io2 Block Express volumes, you can provision volumes with Provisioned IOPS (PIOPS) up to 256,000, with an IOPS:GiB ratio of 1,000:1. Amazon EBS Multi-Attach enables you to attach a single Provisioned IOPS SSD (io1 or io2) volume to multiple instances that are in the same Availability Zone. You can attach multiple Multi-Attach enabled volumes to an instance or set of instances. Each instance to which the volume is attached has full read and write permission to the shared volume. Multi-Attach makes it easier for you to achieve higher application availability in clustered Linux applications that manage concurrent write operations.
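The 1,000:1 IOPS:GiB ratio mentioned in the explanation directly determines the minimum volume size needed for a given PIOPS target. A quick sketch (the function name is ours):

```python
import math

# io2 Block Express allows up to 256,000 PIOPS at an IOPS:GiB ratio
# of 1,000:1, so a volume must be at least piops/1000 GiB in size.
MAX_IOPS = 256_000
IOPS_PER_GIB = 1_000

def min_volume_gib(piops: int) -> int:
    if piops > MAX_IOPS:
        raise ValueError("exceeds io2 Block Express maximum PIOPS")
    return math.ceil(piops / IOPS_PER_GIB)
```

So hitting the 256,000 PIOPS maximum requires at least a 256 GiB volume, and 64,000 PIOPS (the regular io2 ceiling) requires at least 64 GiB.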

Question 2: The solo founder at a tech startup has just created a brand new AWS account. The founder has provisioned an EC2 instance 1A which is running in region A. Later, he takes a snapshot of the instance 1A and then creates a new AMI in region A from this snapshot. This AMI is then copied into another region B. The founder provisions an instance 1B in region B using this new AMI in region B. At this point in time, what entities exist in region B? Answer: 1 EC2 instance, 1 AMI and 1 snapshot exist in region B Explanation: An Amazon Machine Image (AMI) provides the information required to launch an instance. You must specify an AMI when you launch an instance. When the new AMI is copied from region A into region B, it automatically creates a snapshot in region B because AMIs are based on the underlying snapshots. Further, an instance is created from this AMI in region B. Hence, we have 1 EC2 instance, 1 AMI and 1 snapshot in region B.
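The entity counting in this explanation can be made explicit with a toy model (not real AWS objects; the counting only, as described above):

```python
# Counting the entities created in region B by the copy-and-launch
# flow in the question. Toy model, not real AWS objects.
def entities_after_copy() -> dict:
    region_b = {"amis": 0, "snapshots": 0, "instances": 0}
    # Copying the AMI from region A creates the AMI in region B,
    # plus its backing snapshot (AMIs are based on snapshots).
    region_b["amis"] += 1
    region_b["snapshots"] += 1
    # Launching instance 1B from that AMI adds one instance.
    region_b["instances"] += 1
    return region_b
```

The non-obvious entry is the snapshot: it is created implicitly by the AMI copy, which is why answers counting only the instance and the AMI are wrong.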

Question 3: A company’s application is running on Amazon EC2 instances in a single Region. In the event of a disaster, a solutions architect needs to ensure that the resources can also be deployed to a second Region. Which combination of actions should the solutions architect take to accomplish this? (Select TWO.) Options: A. Copy an Amazon Machine Image (AMI) of an EC2 instance and specify the second Region for the destination B. Copy an Amazon Elastic Block Store (Amazon EBS) volume from Amazon S3 and launch an EC2 instance in the second Region using that EBS volume C. Launch a new EC2 instance in the second Region and copy a volume from Amazon S3 to the new instance D. Detach a volume on an EC2 instance and copy it to an Amazon S3 bucket in the second Region E. Launch a new EC2 instance from an Amazon Machine Image (AMI) in the second Region Answer: A & E Explanation: You can copy an Amazon Machine Image (AMI) within or across AWS Regions using the AWS Management Console, the AWS Command Line Interface or SDKs, or the Amazon EC2 API, all of which support the CopyImage action. Using the copied AMI, the solutions architect would then be able to launch an instance with the same configuration in the second Region. Note: AMIs are stored on Amazon S3; however, you cannot view them in the S3 management console or work with them programmatically using the S3 API. CORRECT: “Copy an Amazon Machine Image (AMI) of an EC2 instance and specify the second Region for the destination” is a correct answer. CORRECT: “Launch a new EC2 instance from an Amazon Machine Image (AMI) in the second Region” is also a correct answer. INCORRECT: “Detach a volume on an EC2 instance and copy it to an Amazon S3 bucket in the second Region” is incorrect. You cannot copy EBS volumes directly from EBS to Amazon S3. INCORRECT: “Launch a new EC2 instance in the second Region and copy a volume from Amazon S3 to the new instance” is incorrect. You cannot create an EBS volume directly from Amazon S3.
INCORRECT: “Copy an Amazon Elastic Block Store (Amazon EBS) volume from Amazon S3 and launch an EC2 instance in the second Region using that EBS volume” is incorrect. You cannot create an EBS volume directly from Amazon S3.

Question 4: A research group needs a fleet of EC2 instances for a specialized task that must deliver high random I/O performance. Each instance in the fleet would have access to a dataset that is replicated across the instances. Because of the resilient application architecture, the specialized task would continue to be processed even if any instance goes down, as the underlying application architecture would ensure the replacement instance has access to the required dataset. Which of the following options is the MOST cost-optimal and resource-efficient solution to build this fleet of EC2 instances? Options A. Use EBS based EC2 instances B. Use EC2 instances with EFS mount points C. Use EC2 instances with access to S3 based storage D. Use Instance Store based EC2 instances Answer: D Explanation Correct option: Use Instance Store based EC2 instances An instance store provides temporary block-level storage for your instance. This storage is located on disks that are physically attached to the host computer. Instance store is ideal for the temporary storage of information that changes frequently such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers. Instance store volumes are included as part of the instance’s usage cost. As Instance Store based volumes provide high random I/O performance at low cost (as the storage is part of the instance’s usage cost) and the resilient architecture can adjust for the loss of any instance, therefore you should use Instance Store based EC2 instances for this use-case. Incorrect options: Use EBS based EC2 instances – EBS based volumes would need to use Provisioned IOPS (io1) as the storage type and that would incur additional costs. As we are looking for the most cost-optimal solution, this option is ruled out. Use EC2 instances with EFS mount points – Using EFS implies that extra resources would have to be provisioned. 
As we are looking for the most resource-efficient solution, this option is also ruled out. Use EC2 instances with access to S3 based storage – Using EC2 instances with access to S3 based storage does not deliver high random I/O performance, this option is just added as a distractor.

Question 5: The DevOps team at a multi-national company is helping its subsidiaries standardize EC2 instances by using the same Amazon Machine Image (AMI). Some of these subsidiaries are in the same AWS region but use different AWS accounts whereas others are in different AWS regions but use the same AWS account as the parent company. The DevOps team has hired you as a solutions architect for this project. Which of the following would you identify as CORRECT regarding the capabilities of AMIs? (Select three) Options: A. Copying an AMI backed by an encrypted snapshot results in an unencrypted target snapshot B. You can share an AMI with another AWS account C. You cannot share an AMI with another AWS account D. You cannot copy an AMI across AWS Regions E. Copying an AMI backed by an encrypted snapshot cannot result in an unencrypted target snapshot F. You can copy an AMI across AWS Regions Answer: B, E & F Explanation Correct options: You can copy an AMI across AWS Regions You can share an AMI with another AWS account Copying an AMI backed by an encrypted snapshot cannot result in an unencrypted target snapshot An Amazon Machine Image (AMI) provides the information required to launch an instance. An AMI includes the following: One or more EBS snapshots, or, for instance-store-backed AMIs, a template for the root volume of the instance. Launch permissions that control which AWS accounts can use the AMI to launch instances. A block device mapping that specifies the volumes to attach to the instance when it’s launched. You can copy an AMI within or across AWS Regions using the AWS Management Console, the AWS Command Line Interface or SDKs, or the Amazon EC2 API, all of which support the CopyImage action. You can copy both Amazon EBS-backed AMIs and instance-store-backed AMIs. You can copy AMIs with encrypted snapshots and also change encryption status during the copy process. Therefore, the option – “You can copy an AMI across AWS Regions” – is correct. 
Copying AMIs across regions: via – https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/CopyingAMIs.html The encryption-support table in that guide covers the various AMI-copying scenarios: while it is possible to copy an unencrypted snapshot to yield an encrypted snapshot, you cannot copy an encrypted snapshot to yield an unencrypted one. Therefore, the option – “Copying an AMI backed by an encrypted snapshot cannot result in an unencrypted target snapshot” is correct. You can share an AMI with another AWS account. To copy an AMI that was shared with you from another account, the owner of the source AMI must grant you read permissions for the storage that backs the AMI, either the associated EBS snapshot (for an Amazon EBS-backed AMI) or an associated S3 bucket (for an instance store-backed AMI). Therefore, the option – “You can share an AMI with another AWS account” – is correct. Incorrect options:
You cannot copy an AMI across AWS Regions
You cannot share an AMI with another AWS account
Copying an AMI backed by an encrypted snapshot results in an unencrypted target snapshot
These three options contradict the details provided in the explanation above.
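The encryption rule from that documentation table fits in one line of logic: encryption status may be raised during a copy, but never lowered. A small sketch (function name is ours):

```python
# Encoding the AMI-copy encryption rule: you may encrypt an
# unencrypted snapshot during a copy, but you can never decrypt
# an encrypted one. Illustrative only.
def copy_allowed(source_encrypted: bool, target_encrypted: bool) -> bool:
    # The only forbidden combination is encrypted -> unencrypted.
    return not (source_encrypted and not target_encrypted)
```

This is exactly why option E is correct and option A is not: `encrypted -> unencrypted` is the one transition the copy process refuses.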

Question 6: A data-processing application runs on an i3.large EC2 instance with a single 100 GB EBS gp2 volume. The application stores temporary data in a small database (less than 30 GB) located on the EBS root volume. The application is struggling to process the data fast enough, and a Solutions Architect has determined that the I/O speed of the temporary database is the bottleneck. What is the MOST cost-efficient way to improve the database response times? A. Put the temporary database on a new 50-GB EBS gp2 volume B. Move the temporary database onto instance storage (Correct) C. Put the temporary database on a new 50-GB EBS io1 volume with a 3000 IOPS allocation D. Enable EBS optimization on the instance and keep the temporary files on the existing volume Explanation EC2 Instance Stores are high-speed ephemeral storage that is physically attached to the EC2 instance. The i3.large instance type comes with a single 475GB NVMe SSD instance store so it would be a good way to lower cost and improve performance by using the attached instance store. As the files are temporary, it can be assumed that ephemeral storage (which means the data is lost when the instance is stopped) is sufficient. CORRECT: “Move the temporary database onto instance storage” is the correct answer. INCORRECT: “Put the temporary database on a new 50-GB EBS io1 volume with a 3000 IOPS allocation” is incorrect. Moving the DB to a new 50-GB EBS io1 volume with a 3000 IOPS allocation will improve performance but is more expensive so will not be the most cost-efficient solution. INCORRECT: “Put the temporary database on a new 50-GB EBS gp2 volume” is incorrect. Moving the DB to a new 50-GB EBS gp2 volume will not result in a performance improvement as you get IOPS allocated per GB so a smaller volume will have lower performance. INCORRECT: “Enable EBS optimization on the instance and keep the temporary files on the existing volume” is incorrect. Enabling EBS optimization will not lower cost. 
Also, EBS Optimization is a network traffic optimization, it does not change the I/O performance of the volume.
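The "IOPS allocated per GB" point for gp2 can be checked numerically: gp2 baseline performance is 3 IOPS per GiB, with a 100 IOPS floor and a 16,000 IOPS ceiling. A quick sketch (function name is ours):

```python
# gp2 baseline performance scales with volume size: 3 IOPS per GiB,
# floored at 100 IOPS and capped at 16,000 IOPS. This is why moving
# the database to a *smaller* gp2 volume cannot improve performance.
def gp2_baseline_iops(size_gib: int) -> int:
    return min(16_000, max(100, 3 * size_gib))
```

The existing 100 GB volume gets a 300 IOPS baseline, while the proposed 50 GB volume would get only 150 IOPS – confirming that option A makes the bottleneck worse, not better.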

awslagi.com

AWS Solutions Architect Associate SAA-C02 Practice Questions Part 6

  • March 18, 2022
  • iam.awslagi

AWS Solutions Architect Associate SAA-C02

Notes: AWS SAA-C02 has now been replaced by AWS SAA-C03. Please check the link below for the new version:

AWS Certified Solutions Architect Associate SAA-C03 Practice Exam

For PDF Version:
AWS Solutions Architect Associate SAA-C02 Practice Exam Part 1
AWS Solutions Architect Associate SAA-C02 Practice Exam Part 2
AWS Solutions Architect Associate SAA-C02 Practice Exam Part 3
AWS Solutions Architect Associate SAA-C02 Practice Exam Part 4
AWS Solutions Architect Associate SAA-C02 Practice Exam Part 5
AWS Solutions Architect Associate SAA-C02 Practice Exam Part 6
For Audio Version:

1. One of the biggest football leagues in Europe has granted the distribution rights for live streaming its matches in the US to a silicon valley based streaming services company. As per the terms of distribution, the company must make sure that only users from the US are able to live stream the matches on their platform. Users from other countries in the world must be denied access to these live-streamed matches. Which of the following options would allow the company to enforce these streaming restrictions? (Select two).

A. Use Route 53 based latency routing policy to restrict distribution of content to only the locations in which you have distribution rights. B. Use georestriction to prevent users in specific geographic locations from accessing content that you’re distributing through a CloudFront web distribution. C. Use Route 53 based geolocation routing policy to restrict distribution of content to only the locations in which you have distribution rights. D. Use Route 53 based failover routing policy to restrict distribution of content to only the locations in which you have distribution rights. E. Use Route 53 based weighted routing policy to restrict distribution of content to only the locations in which you have distribution rights.

2. A social gaming startup has its flagship application hosted on a fleet of EC2 servers running behind an Elastic Load Balancer. These servers are part of an Auto Scaling Group. 90% of the users start logging into the system at 6 pm every day and continue till midnight. The engineering team at the startup has observed that there is a significant performance lag during the initial hour from 6 pm to 7 pm. The application is able to function normally thereafter. As a solutions architect, which of the following steps would you recommend addressing the performance bottleneck during that initial hour of traffic spike?

A. Configure your Auto Scaling group by creating a target tracking policy. This causes the scale-out to happen even before peak traffic kicks in at 6 pm. B. Configure your Auto Scaling group by creating a scheduled action that kicks-off before 6 pm. This causes the scale-out to happen even before peak traffic kicks in at 6 pm. C. Configure your Auto Scaling group by creating a lifecycle hook that kicks-off before 6 pm. This causes the scale-out to happen even before peak traffic kicks in at 6 pm. D. Configure your Auto Scaling group by creating a step scaling policy. This causes the scale-out to happen even before peak traffic kicks in at 6 pm.

3. The engineering team at an e-commerce company wants to set up a custom domain for internal usage such as internaldomainexample.com. The team wants to use the private hosted zones feature of Route 53 to accomplish this. Which of the following settings of the VPC need to be enabled? (Select two)

A. enableDnsSupport B. enableVpcHostnames C. enableDnsHostnames D. enableDnsDomain E. enableVpcSupport

4. A research group at an ivy-league university needs a fleet of EC2 instances operating in a fault-tolerant architecture for a specialized task that must deliver high random I/O performance. Each instance in the fleet would have access to a dataset that is replicated across the instances. Because of the resilient architecture, the specialized task would continue to be processed even if any of the instances goes down as the underlying application architecture would ensure the replacement instance has access to the required dataset. Which of the following options is the MOST cost-optimal and resource-efficient solution to build this fleet of EC2 instances?

A. Use EC2 instances with access to S3 based storage. B. Use Instance Store based EC2 instances. C. Use EBS based EC2 instances. D. Use EC2 instances with EFS mount points.

5. The DevOps team at a major financial services company uses Multi-Availability Zone (Multi-AZ) deployment for its MySQL RDS database in order to automate its database replication and augment data durability. The DevOps team has scheduled a maintenance window for a database engine level upgrade for the coming weekend. Which of the following is the correct outcome during the maintenance window?

A. Any database engine level upgrade for an RDS DB instance with Multi-AZ deployment triggers both the primary and standby DB instances to be upgraded at the same time. However, this does not cause any downtime until the upgrade is complete. B. Any database engine level upgrade for an RDS DB instance with Multi-AZ deployment triggers both the primary and standby DB instances to be upgraded at the same time. This causes downtime until the upgrade is complete. C. Any database engine level upgrade for an RDS DB instance with Multi-AZ deployment triggers the primary DB instance to be upgraded which is then followed by the upgrade of the standby DB instance. This does not cause any downtime for the duration of the upgrade. D. Any database engine level upgrade for an RDS DB instance with Multi-AZ deployment triggers the standby DB instance to be upgraded which is then followed by the upgrade of the primary DB instance. This does not cause any downtime for the duration of the upgrade.

6. A streaming solutions company is building a video streaming product by using an Application Load Balancer (ALB) that routes the requests to the underlying EC2 instances. The engineering team has noticed a peculiar pattern. The ALB removes an instance whenever it is detected as unhealthy but the Auto Scaling group fails to kick-in and provision the replacement instance. What could explain this anomaly?

A. Both the Auto Scaling group and Application Load Balancer are using ALB based health check. B. The Auto Scaling group is using ALB based health check and the Application Load Balancer is using EC2 based health check. C. The Auto Scaling group is using EC2 based health check and the Application Load Balancer is using ALB based health check. D. Both the Auto Scaling group and Application Load Balancer are using EC2 based health check.

7. A social media analytics company uses a fleet of EC2 servers to manage its analytics workflow. These EC2 servers operate under an Auto Scaling group. The engineers at the company want to be able to download log files whenever an instance terminates because of a scale-in event from an auto-scaling policy. Which of the following features can be used to enable this custom action?

A. EC2 instance meta data B. EC2 instance user data C. Auto Scaling group lifecycle hook D. Auto Scaling group scheduled action

8. A US-based healthcare startup is building an interactive diagnostic tool for COVID-19 related assessments. The users would be required to capture their personal health records via this tool. As this is sensitive health information, the backup of the user data must be kept encrypted in S3. The startup does not want to provide its own encryption keys but still wants to maintain an audit trail of when an encryption key was used and by whom. Which of the following is the BEST solution for this use-case?

A. Use SSE-KMS to encrypt the user data on S3. B. Use client-side encryption with client provided keys and then upload the encrypted user data to S3. C. Use SSE-C to encrypt the user data on S3. D. Use SSE-S3 to encrypt the user data on S3.

9. A cyber security company is running a mission critical application using a single Spread placement group of EC2 instances. The company needs 15 Amazon EC2 instances for optimal performance. How many Availability Zones (AZs) will the company need to deploy these EC2 instances per the given use-case?

A. 3 B. 7 C. 15 D. 14

10. The payroll department at a company initiates several computationally intensive workloads on EC2 instances at a designated hour on the last day of every month. The payroll department has noticed a trend of severe performance lag during this hour. The engineering team has figured out a solution by using Auto Scaling Group for these EC2 instances and making sure that 10 EC2 instances are available during this peak usage hour. For normal operations only 2 EC2 instances are enough to cater to the workload. As a solutions architect, which of the following steps would you recommend to implement the solution?

A. Configure your Auto Scaling group by creating a scheduled action that kicks-off at the designated hour on the last day of the month. Set the desired capacity of instances to 10. This causes the scale-out to happen before peak traffic kicks in at the designated hour . B. Configure your Auto Scaling group by creating a scheduled action that kicks-off at the designated hour on the last day of the month. Set the min count as well as the max count of instances to 10. This causes the scale-out to happen before peak traffic kicks in at the designated hour. C. Configure your Auto Scaling group by creating a target tracking policy and setting the instance count to 10 at the designated hour. This causes the scale-out to happen before peak traffic kicks in at the designated hour. D. Configure your Auto Scaling group by creating a simple tracking policy and setting the instance count to 10 at the designated hour. This causes the scale-out to happen before peak traffic kicks in at the designated hour.

11. The engineering team at a data analytics company has observed that its flagship application functions at its peak performance when the underlying EC2 instances have a CPU utilization of about 50%. The application is built on a fleet of EC2 instances managed under an Auto Scaling group. The workflow requests are handled by an internal Application Load Balancer that routes the requests to the instances. As a solutions architect, what would you recommend so that the application runs near its peak performance state?

A. Configure the Auto Scaling group to use target tracking policy and set the CPU utilization as the target metric with a target value of 50%. B. Configure the Auto Scaling group to use simple scaling policy and set the CPU utilization as the target metric with a target value of 50%. C. Configure the Auto Scaling group to use a Cloudwatch alarm triggered on a CPU utilization threshold of 50%. D. Configure the Auto Scaling group to use step scaling policy and set the CPU utilization as the target metric with a target value of 50%.

12. A silicon valley based startup focused on the advertising technology (ad tech) space uses DynamoDB as a data store for storing various kinds of marketing data, such as user profiles, user events, clicks, and visited links. Some of these use-cases require a high request rate (millions of requests per second), low predictable latency and reliability. The startup now wants to add a caching layer to support high read volumes. As a solutions architect, which of the following AWS services would you recommend as a caching layer for this use-case? (Select two)

A. DynamoDB Accelerator (DAX) B. Elasticsearch C. Redshift D. RDS E. ElastiCache

13. An IT company has built a custom data warehousing solution for a retail organization by using Amazon Redshift. As part of the cost optimizations, the company wants to move any historical data (any data older than a year) into S3, as the daily analytical reports consume data for just the last one year. However the analysts want to retain the ability to cross-reference this historical data along with the daily reports. The company wants to develop a solution with the LEAST amount of effort and MINIMUM cost. As a solutions architect, which option would you recommend to facilitate this use-case?

A. Use Glue ETL job to load the S3 based historical data into Redshift. Once the ad-hoc queries are run for the historic data, it can be removed from Redshift. B. Use the Redshift COPY command to load the S3 based historical data into Redshift. Once the ad-hoc queries are run for the historic data, it can be removed from Redshift. C. Setup access to the historical data via Athena. The analytics team can run historical data queries on Athena and continue the daily reporting on Redshift. In case the reports need to be cross-referenced, the analytics team needs to export these in flat files and then do further analysis. D. Use Redshift Spectrum to create Redshift cluster tables pointing to the underlying historical data in S3. The analytics team can then query this historical data to cross-reference with the daily reports from Redshift.

14. The solo founder at a tech startup has just created a brand new AWS account. The founder has provisioned an EC2 instance 1A which is running in region A. Later, he takes a snapshot of the instance 1A and then creates a new AMI in region A from this snapshot. This AMI is then copied into another region B. The founder provisions an instance 1B in region B using this new AMI in region B. At this point in time, what entities exist in region B?

A. 1 EC2 instance, 1 AMI and 1 snapshot exist in region B B. 1 EC2 instance and 2 AMIs exist in region B C. 1 EC2 instance and 1 snapshot exist in region B D. 1 EC2 instance and 1 AMI exist in region B

15. The sourcing team at the US headquarters of a global e-commerce company is preparing a spreadsheet of the new product catalog. The spreadsheet is saved on an EFS file system created in us-east-1 region. The sourcing team counterparts from other AWS regions such as Asia Pacific and Europe also want to collaborate on this spreadsheet. As a solutions architect, what is your recommendation to enable this collaboration with the LEAST amount of operational overhead?

A. The spreadsheet will have to be copied in Amazon S3 which can then be accessed from any AWS region. B. The spreadsheet will have to be copied into EFS file systems of other AWS regions as EFS is a regional service and it does not allow access from other AWS regions. C. The spreadsheet on the EFS file system can be accessed from EC2 instances running in other AWS regions by using an inter-region VPC peering connection. D. The spreadsheet data will have to be moved into an RDS MySQL database which can then be accessed from any AWS region.

16. The engineering team at a leading online real estate marketplace uses Amazon MySQL RDS because it simplifies much of the time-consuming administrative tasks typically associated with databases. The team uses Multi-Availability Zone (Multi-AZ) deployment to further automate its database replication and augment data durability and also deploys read replicas. A new DevOps engineer has joined the team and wants to understand the replication capabilities for Multi-AZ as well as Read-replicas. Which of the following correctly summarizes these capabilities for the given database?

A. Multi-AZ follows asynchronous replication and spans at least two Availability Zones within a single region. Read replicas follow asynchronous replication and can be within an Availability Zone, Cross-AZ, or Cross-Region. B. Multi-AZ follows synchronous replication and spans at least two Availability Zones within a single region. Read replicas follow asynchronous replication and can be within an Availability Zone, Cross-AZ, or Cross-Region. C. Multi-AZ follows asynchronous replication and spans one Availability Zone within a single region. Read replicas follow synchronous replication and can be within an Availability Zone, Cross-AZ, or Cross-Region. D. Multi-AZ follows asynchronous replication and spans at least two Availability Zones within a single region. Read replicas follow synchronous replication and can be within an Availability Zone, Cross-AZ, or Cross-Region.

17. A digital media streaming company wants to use AWS Cloudfront to distribute its content only to its service subscribers. As a solutions architect, which of the following solutions would you suggest in order to deliver restricted content to the bona fide end users? (Select two)

A. Require HTTPS for communication between CloudFront and your custom origin. B. Use CloudFront signed cookies. C. Require HTTPS for communication between CloudFront and your S3 origin. D. Forward HTTPS requests to the origin server by using the ECDSA or RSA ciphers. E. Use CloudFront signed URLs.

18. The engineering team at an online fashion retailer uses AWS Cloud to manage its technology infrastructure. The EC2 server fleet is behind an Application Load Balancer and the fleet strength is managed by an Auto Scaling group. Based on the historical data, the team is anticipating a huge traffic spike during the upcoming Thanksgiving sale. As an AWS solutions architect, what feature of the Auto Scaling group would you leverage so that the potential surge in traffic can be preemptively addressed?

A. Auto Scaling group target tracking scaling policy. B. Auto Scaling group lifecycle hook. C. Auto Scaling group scheduled action. D. Auto Scaling group step scaling policy.

19. An IT Company wants to move all the compute components of its AWS Cloud infrastructure into serverless architecture. Their development stack comprises a mix of backend programming languages and the company would like to explore the support offered by the AWS Lambda runtime for their programming languages stack. Can you identify the programming languages supported by the Lambda runtime?

A. C#/.NET B. C C. Go D. PHP

20. Which of the following features of an Amazon S3 bucket can only be suspended once they have been enabled?

A. Requester Pays. B. Server Access Logging. C. Versioning. D. Static Website Hosting.

21. The CTO of an online home rental marketplace wants to re-engineer the caching layer of the current architecture for its relational database. He wants the caching layer to have replication and archival support built into the architecture. Which of the following AWS service offers the capabilities required for the re-engineering of the caching layer?

A. DynamoDB Accelerator (DAX). B. ElastiCache for Memcached. C. DocumentDB. D. ElastiCache for Redis.

22. A gaming company uses Amazon Aurora as its primary database service. The company has now deployed 5 multi-AZ read replicas to increase the read throughput and for use as failover targets. The replicas have been assigned the following failover priority tiers and corresponding sizes are given in parentheses: tier-1 (16TB), tier-1 (32TB), tier-10 (16TB), tier-15 (16TB), tier-15 (32TB). In the event of a failover, Amazon RDS will promote which of the following read replicas?

A. Tier-15 (32TB) B. Tier-1 (16TB) C. Tier-1 (32TB) D. Tier-10 (16TB)
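Aurora promotes the replica with the highest failover priority (lowest tier number), and breaks ties in favor of the largest replica. A small sketch of that selection rule, using the tiers and sizes from the question:

```python
def pick_failover_target(replicas):
    """replicas: list of (tier_number, size_tb) tuples.
    Lowest tier number wins; ties go to the largest size."""
    return min(replicas, key=lambda r: (r[0], -r[1]))

replicas = [(1, 16), (1, 32), (10, 16), (15, 16), (15, 32)]
# The 32TB tier-1 replica is promoted.
```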

23. A DevOps engineer at an IT company was recently added to the admin group of the company’s AWS account. The AdministratorAccess managed policy is attached to this group. Can you identify the AWS tasks that the DevOps engineer CANNOT perform even though he has full Administrator privileges (Select two)?

A. Delete an S3 bucket from the production environment. B. Configure an Amazon S3 bucket to enable MFA (Multi-Factor Authentication) delete. C. Delete the IAM user for his manager. D. Change the password for his own IAM user account. E. Close the company’s AWS account.

24. The planetary research program at an ivy-league university is assisting NASA to find potential landing sites for exploration vehicles of unmanned missions to our neighboring planets. The program uses High Performance Computing (HPC) driven application architecture to identify these landing sites. Which of the following EC2 instance topologies should this application be deployed on?

A. The EC2 instances should be deployed in a partition placement group so that distributed workloads can be handled effectively. B. The EC2 instances should be deployed in an Auto Scaling group so that application meets high availability requirements. C. The EC2 instances should be deployed in a spread placement group so that there are no correlated failures. D. The EC2 instances should be deployed in a cluster placement group so that the underlying workload can benefit from low network latency and high network throughput.

25. A major bank is using SQS to migrate several core banking applications to the cloud to ensure high availability and cost efficiency while simplifying administrative complexity and overhead. The development team at the bank expects a peak rate of about 1000 messages per second to be processed via SQS. It is important that the messages are processed in order. Which of the following options can be used to implement this system?

A. Use Amazon SQS FIFO queue in batch mode of 2 messages per operation to process the messages at the peak rate. B. Use Amazon SQS FIFO queue to process the messages. C. Use Amazon SQS FIFO queue in batch mode of 4 messages per operation to process the messages at the peak rate. D. Use Amazon SQS standard queue to process the messages.
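The throughput reasoning here is simple arithmetic: an SQS FIFO queue supports up to 300 API operations per second, and each operation can batch up to 10 messages. A sketch assuming those published limits:

```python
import math

FIFO_OPS_PER_SEC = 300  # SQS FIFO: up to 300 send/receive/delete operations per second
MAX_BATCH = 10          # each operation can batch up to 10 messages

def min_batch_size(peak_msgs_per_sec):
    """Smallest batch size per operation that sustains the peak message rate."""
    batch = math.ceil(peak_msgs_per_sec / FIFO_OPS_PER_SEC)
    if batch > MAX_BATCH:
        raise ValueError("rate exceeds FIFO capacity even with maximum batching")
    return batch

# 1000 messages/sec needs batches of ceil(1000 / 300) = 4 messages per operation.
```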

26. The audit department at one of the leading consultancy firms generates and accesses the audit reports only during the last month of a financial year. The department uses AWS Step Functions to orchestrate the report creating process with failover and retry scenarios built into the solution and the data should be available with millisecond latency. The underlying data to create these audit reports is stored on S3 and runs into hundreds of Terabytes. As a solutions architect, which is the MOST cost-effective storage class that you would recommend to be used for this use-case?

A. Amazon S3 Glacier (S3 Glacier). B. Amazon S3 Intelligent-Tiering (S3 Intelligent-Tiering). C. Amazon S3 Standard-Infrequent Access (S3 Standard-IA). D. Amazon S3 Standard.

27. A file hosting startup offers cloud storage and file synchronization services to its end users. The file-hosting service uses Amazon S3 under the hood to power its storage offerings. Currently all the customer files are uploaded directly under a single S3 bucket. The engineering team has started seeing scalability issues where customer file uploads have started failing during the peak access hours in the evening with more than 5000 requests per second. Which of the following is the MOST resource efficient and cost-optimal way of addressing this issue?

A. Change the application architecture to create customer-specific custom prefixes within the single bucket and then upload the daily files into those prefixed locations. B. Change the application architecture to create a new S3 bucket for each customer and then upload each customer’s files directly under the respective buckets. C. Change the application architecture to create a new S3 bucket for each day’s data and then upload the daily files directly under that day’s bucket. D. Change the application architecture to use EFS instead of Amazon S3 for storing the customers’ uploaded files.
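The scaling reasoning rests on S3's published per-prefix request rates: at least 3,500 write (PUT/COPY/POST/DELETE) requests per second per prefix, with the aggregate limit scaling with the number of prefixes. A sketch assuming that figure:

```python
import math

PUT_LIMIT_PER_PREFIX = 3500  # S3 supports at least 3500 write requests/sec per prefix

def prefixes_needed(peak_write_rps):
    """Minimum number of prefixes to spread uploads across at a given peak rate."""
    return math.ceil(peak_write_rps / PUT_LIMIT_PER_PREFIX)

# 5000 uploads/sec spread across ceil(5000 / 3500) = 2 or more customer prefixes
# stays within the per-prefix limit, all inside a single bucket.
```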

28. The DevOps team at an analytics company has noticed that the performance of its proprietary Machine Learning workflow has deteriorated ever since a new Auto Scaling group was deployed a few days back. Upon investigation, the team found out that the Launch Configuration selected for the Auto Scaling group is using the incorrect instance type that is not optimized to handle the Machine Learning workflow. As a solutions architect, what would you recommend to provide a long term resolution for this issue?

A. No need to modify the launch configuration. Just modify the Auto Scaling group to use the correct instance type. B. No need to modify the launch configuration. Just modify the Auto Scaling group to use more instances of the existing instance type. More instances may offset the loss of performance. C. Modify the launch configuration to use the correct instance type and continue to use the existing Auto Scaling group. D. Create a new launch configuration to use the correct instance type. Modify the Auto Scaling group to use this new launch configuration. Delete the old launch configuration as it is no longer needed.

29. A social photo-sharing company uses Amazon S3 to store the images uploaded by the users. These images are kept encrypted in S3 by using AWS KMS and the company manages its own Customer Master Key (CMK) for encryption. A member of the DevOps team accidentally deleted the CMK a day ago, thereby rendering the users’ photo data unrecoverable. You have been contacted by the company to consult them on possible solutions to this crisis. As a solutions architect, which of the following steps would you recommend to solve this issue?

A. The CMK can be recovered by the AWS root account user. B. The company should issue a notification on its web application informing the users about the loss of their data. C. As the CMK was deleted a day ago, it must be in the ‘pending deletion’ status and hence you can just cancel the CMK deletion and recover the key. D. Contact AWS support to retrieve the CMK from their backup.

30. A leading video streaming provider is migrating to AWS Cloud infrastructure for delivering its content to users across the world. The company wants to make sure that the solution supports at least a million requests per second for its EC2 server farm. As a solutions architect, which type of Elastic Load Balancer would you recommend as part of the solution stack?

A. Network Load Balancer. B. Infrastructure Load Balancer. C. Application Load Balancer. D. Classic Load Balancer.

31. A silicon valley based research group is working on a High Performance Computing (HPC) application in the area of Computational Fluid Dynamics. The application carries out simulations of the external aerodynamics around a car and needs to be deployed on EC2 instances with a requirement for high levels of inter-node communications and high network traffic between the instances. As a solutions architect, which of the following options would you recommend to the engineering team at the startup? (Select two)

A. Deploy EC2 instances with Elastic Fabric Adapter. B. Deploy EC2 instances behind a Network Load Balancer. C. Deploy EC2 instances in a partition placement group. D. Deploy EC2 instances in a cluster placement group. E. Deploy EC2 instances in a spread placement group.

32. A US-based non-profit organization develops learning methods for primary and secondary vocational education, delivered through digital learning platforms, which are hosted on AWS under a hybrid cloud setup. After experiencing stability issues with their cluster of self-managed RabbitMQ message brokers, the organization wants to explore an alternate solution on AWS. As a solutions architect, which of the following AWS services would you recommend that can provide support for quick and easy migration from RabbitMQ?

A. Amazon Simple Notification Service (Amazon SNS). B. Amazon SQS FIFO (First-In-First-Out). C. Amazon SQS Standard. D. Amazon MQ.

33. A junior scientist working with the Deep Space Research Laboratory at NASA is trying to upload a high-resolution image of a nebula into Amazon S3. The image size is approximately 3GB. The junior scientist is using S3 Transfer Acceleration (S3TA) for faster image upload. It turns out that S3TA did not result in an accelerated transfer. Given this scenario, which of the following is correct regarding the charges for this image transfer?

A. The junior scientist needs to pay both S3 transfer charges and S3TA transfer charges for the image upload. B. The junior scientist does not need to pay any transfer charges for the image upload. C. The junior scientist only needs to pay S3 transfer charges for the image upload. D. The junior scientist only needs to pay S3TA transfer charges for the image upload.

34. A global media company is using Amazon CloudFront to deliver media-rich content to its audience across the world. The Content Delivery Network (CDN) offers a multi-tier cache by default, with regional edge caches that improve latency and lower the load on the origin servers when the object is not already cached at the edge. However there are certain content types that bypass the regional edge cache, and go directly to the origin. Which of the following content types skip the regional edge cache? (Select two)

A. Proxy methods PUT/POST/PATCH/OPTIONS/DELETE go directly to the origin. B. User-generated videos. C. Static content such as style sheets, JavaScript files. D. E-commerce assets such as product photos. E. Dynamic content, as determined at request time (cache-behavior configured to forward all headers).

35. A leading carmaker would like to build a new car-as-a-sensor service by leveraging fully serverless components that are provisioned and managed automatically by AWS. The development team at the carmaker does not want an option that requires the capacity to be manually provisioned, as it does not want to respond manually to changing volumes of sensor data. Given these constraints, which of the following solutions is the BEST fit to develop this car-as-a-sensor service?

A. Ingest the sensor data in a Kinesis Data Stream, which is polled by a Lambda function in batches and the data is written into an auto-scaled DynamoDB table for downstream processing. B. Ingest the sensor data in a Kinesis Data Stream, which is polled by an application running on an EC2 instance and the data is written into an auto-scaled DynamoDB table for downstream processing. C. Ingest the sensor data in an Amazon SQS standard queue, which is polled by a Lambda function in batches and the data is written into an auto-scaled DynamoDB table for downstream processing. D. Ingest the sensor data in an Amazon SQS standard queue, which is polled by an application running on an EC2 instance and the data is written into an auto-scaled DynamoDB table for downstream processing.
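For the Kinesis-plus-Lambda option, records arrive in the Lambda event as a base64-encoded batch. A minimal handler sketch (the event shape follows the Kinesis stream event structure; the DynamoDB write at the end is omitted):

```python
import base64
import json

def handler(event, context):
    """Lambda handler invoked with a batch of Kinesis records.
    Each record's payload arrives base64-encoded; decode and collect the readings."""
    readings = []
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])
        readings.append(json.loads(payload))
        # ...write each reading to the DynamoDB table here (omitted)...
    return {"batchSize": len(readings)}
```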

36. The engineering team at an in-home fitness company is evaluating multiple in-memory data stores with the ability to power its on-demand, live leaderboard. The company’s leaderboard requires high availability, low latency. and real-time processing to deliver customizable user data for the community of users working out together virtually from the comfort of their home. As a solutions architect, which of the following solutions would you recommend? (Select two)

A. Power the on-demand, live leaderboard using DynamoDB as it meets the in-memory, high availability, low latency requirements. B. Power the on-demand, live leaderboard using ElastiCache Redis as it meets the in-memory, high availability, low latency requirements. C. Power the on-demand, live leaderboard using AWS Neptune as it meets the in-memory, high availability, low latency requirements. D. Power the on-demand, live leaderboard using DynamoDB with DynamoDB Accelerator (DAX) as it meets the in-memory, high availability, low latency requirements. E. Power the on-demand, live leaderboard using RDS Aurora as it meets the in-memory, high availability, low latency requirements.

37. A media company wants to get out of the business of owning and maintaining its own IT infrastructure. As part of this digital transformation, the media company wants to archive about 5PB of data in its on-premises data center to durable long term storage. As a solutions architect, what is your recommendation to migrate this data in the MOST cost-optimal way?

A. Set up AWS Direct Connect between the on-premises data center and AWS Cloud. Use this connection to transfer the data into AWS Glacier. B. Set up a Site-to-Site VPN connection between the on-premises data center and AWS Cloud. Use this connection to transfer the data into AWS Glacier. C. Transfer the on-premises data into multiple Snowball Edge Storage Optimized devices. Copy the Snowball Edge data into AWS Glacier. D. Transfer the on-premises data into multiple Snowball Edge Storage Optimized devices. Copy the Snowball Edge data into Amazon S3 and create a lifecycle policy to transition the data into AWS Glacier.

38. A leading video streaming service delivers billions of hours of content from Amazon S3 to customers around the world. Amazon S3 also serves as the data lake for its big data analytics solution. The data lake has a staging zone where intermediary query results are kept only for 24 hours. These results are also heavily referenced by other parts of the analytics pipeline. Which of the following is the MOST cost-effective strategy for storing this intermediary query data?

A. Store the intermediary query results in S3 One Zone-Infrequent Access storage class. B. Store the intermediary query results in S3 Standard storage class. C. Store the intermediary query results in S3 Intelligent-Tiering storage class. D. Store the intermediary query results in S3 Standard-Infrequent Access storage class.

39. A news network uses Amazon S3 to aggregate the raw video footage from its reporting teams across the US. The news network has recently expanded into new geographies in Europe and Asia. The technical teams at the overseas branch offices have reported huge delays in uploading large video files to the destination S3 bucket. Which of the following are the MOST cost-effective options to improve the file upload speed into S3? (Select two)

A. Use multipart uploads for faster file uploads into the destination S3 bucket. B. Use Amazon S3 Transfer Acceleration to enable faster file uploads into the destination S3 bucket. C. Use AWS Global Accelerator for faster file uploads into the destination S3 bucket. D. Create multiple Site-to-Site VPN connections between the AWS Cloud and branch offices in Europe and Asia. Use these VPN connections for faster file uploads into S3. E. Create multiple AWS Direct Connect connections between the AWS Cloud and branch offices in Europe and Asia. Use the Direct Connect connections for faster file uploads into S3.

40. A video analytics organization has been acquired by a leading media company. The analytics organization has 10 independent applications with an on-premises data footprint of about 70TB for each application. The media company has its IT infrastructure on the AWS Cloud. The terms of the acquisition mandate that the on-premises data should be migrated into AWS Cloud and the two organizations establish connectivity so that collaborative development efforts can be pursued. The CTO of the media company has set a timeline of one month to carry out this transition. Which of the following are the MOST cost-effective options for completing the data transfer and then establishing connectivity? (Select two)

A. Set up AWS Direct Connect to establish connectivity between the on-premises data center and AWS Cloud. B. Set up Site-to-Site VPN to establish connectivity between the on-premises data center and AWS Cloud. C. Order 10 Snowball Edge Storage Optimized devices to complete the one-time data transfer. D. Order 70 Snowball Edge Storage Optimized devices to complete the one-time data transfer. E. Order 1 Snowmobile to complete the one-time data transfer.
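The sizing arithmetic behind the Snowball option, assuming the roughly 80TB usable capacity of a Snowball Edge Storage Optimized device:

```python
import math

SNOWBALL_EDGE_TB = 80  # approximate usable capacity per Snowball Edge Storage Optimized

def devices_needed(num_apps, tb_per_app):
    # Each application's dataset is shipped independently, so round up per app.
    return num_apps * math.ceil(tb_per_app / SNOWBALL_EDGE_TB)

# 10 applications x 70TB each: one device per application, i.e. 10 devices total.
```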

41. An IT consultant is helping the owner of a medium-sized business set up an AWS account. What are the security recommendations he must follow while creating the AWS account root user? (Select two)

A. Create AWS account root user access keys and share those keys only with the business owner. B. Enable Multi Factor Authentication (MFA) for the AWS account root user account. C. Create a strong password for the AWS account root user. D. Send an email to the business owner with details of the login username and password for the AWS root user. This will help the business owner to troubleshoot any login issues in future. E. Encrypt the access keys and save them on Amazon S3.

42. A technology blogger wants to write a review on the comparative pricing for various storage types available on AWS Cloud. The blogger has created a test file of size 1GB with some random data. Next he copies this test file into AWS S3 Standard storage class, provisions an EBS volume (General Purpose SSD (gp2)) with 100GB of provisioned storage and copies the test file into the EBS volume, and lastly copies the test file into an EFS Standard Storage filesystem. At the end of the month, he analyses the bill for costs incurred on the respective storage types for the test file. What is the correct order of the storage charges incurred for the test file on these three storage types?

A. Cost of test file storage on EFS < Cost of test file storage on S3 Standard < Cost of test file storage on EBS. B. Cost of test file storage on S3 Standard < Cost of test file storage on EFS < Cost of test file storage on EBS. C. Cost of test file storage on S3 Standard < Cost of test file storage on EBS < Cost of test file storage on EFS. D. Cost of test file storage on EBS < Cost of test file storage on S3 Standard < Cost of test file storage on EFS.
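The comparison comes down to what each service bills for: S3 and EFS charge for bytes actually stored, while EBS charges for the full provisioned capacity. A sketch with illustrative per-GB monthly rates (assumptions only; check current regional pricing):

```python
# Illustrative us-east-1 ballpark rates, in $/GB-month (assumptions):
S3_STANDARD = 0.023   # billed on bytes stored
EBS_GP2 = 0.10        # billed on *provisioned* capacity, used or not
EFS_STANDARD = 0.30   # billed on bytes stored

cost_s3 = 1 * S3_STANDARD    # 1GB test file stored in S3
cost_ebs = 100 * EBS_GP2     # 100GB provisioned volume, regardless of usage
cost_efs = 1 * EFS_STANDARD  # 1GB test file stored in EFS

# With these rates: S3 Standard < EFS < EBS for the test file.
```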

43. A software engineering intern at an e-commerce company is documenting the process flow to provision EC2 instances via the Amazon EC2 API. These instances are to be used for an internal application that processes HR payroll data. He wants to highlight those volume types that cannot be used as a boot volume. Can you help the intern by identifying those storage volume types that CANNOT be used as boot volumes while creating the instances? (Select two)

44. The data engineering team at an e-commerce company has set up a workflow to ingest the clickstream data into the raw zone of the S3 data lake. The team wants to run some SQL based data sanity checks on the raw zone of the data lake. What AWS services would you recommend for this use-case such that the solution is cost-effective and easy to maintain?

A. Load the incremental raw zone data into RDS on an hourly basis and run the SQL based sanity checks. B. Load the incremental raw zone data into Redshift on an hourly basis and run the SQL based sanity checks. C. Load the incremental raw zone data into an EMR based Spark Cluster on an hourly basis and use SparkSQL to run the SQL based sanity checks. D. Use Athena to run SQL based analytics against S3 data.

45. A leading social media analytics company is contemplating moving its dockerized application stack into AWS Cloud. The company is not sure about the pricing for using Elastic Container Service (ECS) with the EC2 launch type compared to the Elastic Container Service (ECS) with the Fargate launch type. Which of the following is correct regarding the pricing for these two services?

A. Both ECS with EC2 launch type and ECS with Fargate launch type are charged based on vCPU and memory resources that the containerized application requests. B. Both ECS with EC2 launch type and ECS with Fargate launch type are charged based on EC2 instances and EBS volumes used. C. ECS with EC2 launch type is charged based on EC2 instances and EBS volumes used. ECS with Fargate launch type is charged based on vCPU and memory resources that the containerized application requests. D. Both ECS with EC2 launch type and ECS with Fargate launch type are just charged based on Elastic Container Service used per hour.

46. The DevOps team at an e-commerce company wants to perform some maintenance work on a specific EC2 instance that is part of an Auto Scaling group using a step scaling policy. The team is facing a maintenance challenge – every time the team deploys a maintenance patch, the instance health check status shows as out of service for a few minutes. This causes the Auto Scaling group to provision another replacement instance immediately. As a solutions architect, which are the MOST time/resource efficient steps that you would recommend so that the maintenance work can be completed at the earliest? (Select two)

A. Put the instance into the Standby state and then update the instance by applying the maintenance patch. Once the instance is ready, you can exit the Standby state and then return the instance to service. B. Suspend the ReplaceUnhealthy process type for the Auto Scaling group and apply the maintenance patch to the instance. Once the instance is ready, you can activate the ReplaceUnhealthy process type again. C. Suspend the ScheduledActions process type for the Auto Scaling group and apply the maintenance patch to the instance. Once the instance is ready, you can activate the ScheduledActions process type again. D. Take a snapshot of the instance, create a new AMI and then launch a new instance using this AMI. Apply the maintenance patch to this new instance and then add it back to the Auto Scaling group by using the manual scaling policy. Terminate the earlier instance that had the maintenance issue. E. Delete the Auto Scaling group and apply the maintenance fix to the given instance. Create a new Auto Scaling group and add all the instances again using the manual scaling policy.

47. Which of the following is true regarding cross-zone load balancing as seen in Application Load Balancer versus Network Load Balancer?

A. By default, cross-zone load balancing is disabled for Application Load Balancer and enabled for Network Load Balancer. B. By default, cross-zone load balancing is enabled for both Application Load Balancer and Network Load Balancer. C. By default, cross-zone load balancing is enabled for Application Load Balancer and disabled for Network Load Balancer. D. By default, cross-zone load balancing is disabled for both Application Load Balancer and Network Load Balancer.

48. The IT department at a consulting firm is conducting a training workshop for new developers. As part of an evaluation exercise on Amazon S3, the new developers were asked to identify the invalid storage class lifecycle transitions for objects stored on S3. Can you spot the INVALID lifecycle transitions from the options below? (Select two)

A. S3 Standard-IA => S3 Intelligent-Tiering. B. S3 Intelligent-Tiering => S3 Standard. C. S3 One Zone-IA => S3 Standard-IA. D. S3 Standard-IA => S3 One Zone-IA. E. S3 Standard => S3 Intelligent-Tiering.
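Lifecycle transitions only flow "down" a waterfall of storage classes, never back up to a warmer class. A simplified model of that rule (class order follows the S3 lifecycle waterfall; the Glacier tiers are collapsed for brevity):

```python
# Downward-only "waterfall" order of S3 storage classes for lifecycle transitions.
ORDER = [
    "STANDARD",
    "STANDARD_IA",
    "INTELLIGENT_TIERING",
    "ONEZONE_IA",
    "GLACIER",
    "DEEP_ARCHIVE",
]

def is_valid_transition(src, dst):
    """Lifecycle transitions may only move to a class further down the waterfall."""
    return ORDER.index(src) < ORDER.index(dst)
```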

49. A silicon valley based startup uses a fleet of EC2 servers to manage its CRM application. These EC2 servers are behind an Elastic Load Balancer (ELB). Which of the following configurations are NOT allowed for the Elastic Load Balancer?

A. Use the ELB to distribute traffic for four EC2 instances. All the four instances are deployed across two Availability Zones of us-east-1 region. B. Use the ELB to distribute traffic for four EC2 instances. Two of these instances are deployed in Availability Zone A of us-east-1 region and the other two instances are deployed in Availability Zone B of us-west-1 region. C. Use the ELB to distribute traffic for four EC2 instances. All the four instances are deployed in Availability Zone A of us-east-1 region. D. Use the ELB to distribute traffic for four EC2 instances. All the four instances are deployed in Availability Zone B of us-west-1 region.

50. A large financial institution operates an on-premises data center with hundreds of PB of data managed on Microsoft’s Distributed File System (DFS). The CTO wants the organization to transition into a hybrid cloud environment and run data-intensive analytics workloads that support DFS. Which of the following AWS services can facilitate the migration of these workloads?

A. Amazon FSx for Windows File Server. B. Amazon FSx for Lustre. C. Microsoft SQL Server on Amazon. D. AWS Managed Microsoft AD.

51. An e-commerce company wants to explore a hybrid cloud environment with AWS so that it can start leveraging AWS services for some of its data analytics workflows. The engineering team at the e-commerce company wants to establish a dedicated, encrypted, low latency, and high throughput connection between its data center and AWS Cloud. The engineering team has set aside sufficient time to account for the operational overhead of establishing this connection. As a solutions architect, which of the following solutions would you recommend to the company?

A. Use AWS Direct Connect plus VPN to establish a connection between the data center and AWS Cloud. B. Use AWS Direct Connect to establish a connection between the data center and AWS Cloud. C. Use VPC transit gateway to establish a connection between the data center and AWS Cloud. D. Use site-to-site VPN to establish a connection between the data center and AWS Cloud.

52. A silicon valley based startup wants to be the global collaboration platform for API development. The product team at the startup has figured out a market need to support both stateful and stateless client-server communications via the APIs developed using its platform. You have been hired by the startup as an AWS solutions architect to build a Proof-of-Concept to fulfill this market need using AWS API Gateway. Which of the following would you recommend to the startup?

A. API Gateway creates RESTful APIs that enable stateless client-server communication and API Gateway also creates WebSocket APIs that adhere to the WebSocket protocol, which enables stateless, full-duplex communication between client and server. B. API Gateway creates RESTful APIs that enable stateful client-server communication and API Gateway also creates WebSocket APIs that adhere to the WebSocket protocol, which enables stateful, full-duplex communication between client and server. C. API Gateway creates RESTful APIs that enable stateful client-server communication and API Gateway also creates WebSocket APIs that adhere to the WebSocket protocol, which enables stateless, full-duplex communication between client and server. D. API Gateway creates RESTful APIs that enable stateless client-server communication and API Gateway also creates WebSocket APIs that adhere to the WebSocket protocol, which enables stateful, full-duplex communication between client and server.

53. A company has multiple EC2 instances operating in a private subnet which is part of a custom VPC. These instances are running an image processing application that needs to access images stored on S3. Once each image is processed, the status of the corresponding record needs to be marked as completed in a DynamoDB table. How would you go about providing private access to these AWS resources which are not part of this custom VPC?

A. Create a gateway endpoint for DynamoDB and add it as a target in the route table of the custom VPC. Create an interface endpoint for S3 and then connect to the S3 service using the private IP address. B. Create a gateway endpoint for S3 and add it as a target in the route table of the custom VPC. Create an interface endpoint for DynamoDB and then connect to the DynamoDB service using the private IP address. C. Create a separate gateway endpoint for S3 and DynamoDB each. Add two new target entries for these two gateway endpoints in the route table of the custom VPC. D. Create a separate interface endpoint for S3 and DynamoDB each. Then connect to these services using the private IP address.

54. An organization wants to delegate access to a set of users from the development environment so that they can access some resources in the production environment which is managed under another AWS account. As a solutions architect, which of the following steps would you recommend?

A. It is not possible to access cross-account resources. B. Both IAM roles and IAM users can be used interchangeably for cross-account access. C. Create a new IAM role with the required permissions to access the resources in the production environment. The users can then assume this IAM role while accessing the resources from the production environment. D. Create new IAM user credentials for the production environment and share these credentials with the set of users from the development environment.

55. A healthcare startup needs to enforce compliance and regulatory guidelines for objects stored in Amazon S3. One of the key requirements is to provide adequate protection against accidental deletion of objects. As a solutions architect, what are your recommendations to address these guidelines? (Select two)

A. Enable versioning on the bucket. B. Enable MFA delete on the bucket. C. Create an event trigger on deleting any S3 object. The event invokes an SNS notification via email to the IT manager. D. Establish a process to get managerial approval for deleting S3 objects. E. Change the configuration on AWS S3 console so that the user needs to provide additional confirmation while deleting any S3 object.

56. The development team at an e-commerce startup has set up multiple microservices running on EC2 instances under an Application Load Balancer. The team wants to route traffic to multiple back-end services based on the URL path of the HTTP header. So it wants requests for https://www.example.com/orders to go to a specific microservice and requests for https://www.example.com/products to go to another microservice. Which of the following features of Application Load Balancers can be used for this use-case?

A. Query string parameter-based routing.
B. Host-based routing.
C. HTTP header-based routing.
D. Path-based routing.
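Path-based routing is expressed as ALB listener rules whose conditions match on `path-pattern`. A sketch of two such rules in the shape of the `elbv2 create_rule` API (the target group ARNs are truncated placeholders, not real resources):

```python
# Listener rules in the shape of elbv2 create_rule; target group ARNs
# are hypothetical placeholders.
rules = [
    {
        "Priority": 10,
        "Conditions": [{"Field": "path-pattern", "Values": ["/orders*"]}],
        "Actions": [{"Type": "forward",
                     "TargetGroupArn": "arn:aws:elasticloadbalancing:region:acct:targetgroup/orders-svc/abc123"}],
    },
    {
        "Priority": 20,
        "Conditions": [{"Field": "path-pattern", "Values": ["/products*"]}],
        "Actions": [{"Type": "forward",
                     "TargetGroupArn": "arn:aws:elasticloadbalancing:region:acct:targetgroup/products-svc/def456"}],
    },
]
```

Each microservice sits behind its own target group, and the listener evaluates rules in priority order until a path pattern matches.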

57. A new DevOps engineer has joined a large financial services company recently. As part of his onboarding, the IT department is conducting a review of the checklist for tasks related to AWS Identity and Access Management. As a solutions architect, which best practices would you recommend (Select two)?

A. Configure AWS CloudTrail to log all IAM actions.
B. Grant maximum privileges to avoid assigning privileges again.
C. Create a minimum number of accounts and share these account credentials among employees.
D. Use user credentials to provide access-specific permissions for Amazon EC2 instances.
E. Enable MFA for privileged users.
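One common way MFA is enforced for privileged users is a deny statement gated on the `aws:MultiFactorAuthPresent` condition key. A sketch of such a statement (the blanket `"*"` scope is illustrative; real policies usually carve out the actions needed to set up an MFA device):

```python
# IAM policy statement that denies everything unless the caller
# authenticated with MFA; scope shown here is deliberately broad.
require_mfa_statement = {
    "Effect": "Deny",
    "Action": "*",
    "Resource": "*",
    "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
}
```

`BoolIfExists` (rather than plain `Bool`) also catches requests where the key is absent, such as long-lived access-key calls made without MFA.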

58. A biotechnology company wants to seamlessly integrate its on-premises data center with AWS cloud-based IT systems which would be critical to manage as well as scale-up the complex planning and execution of every stage of its drug development process. As part of a pilot program, the company wants to integrate data files from its analytical instruments into AWS via an NFS interface. Which of the following AWS service is the MOST efficient solution for the given use-case?

A. AWS Storage Gateway – Tape Gateway.
B. AWS Storage Gateway – Volume Gateway.
C. AWS Storage Gateway – File Gateway.
D. AWS Site-to-Site VPN.

59. The DevOps team at an e-commerce company has deployed a fleet of EC2 instances under an Auto Scaling group (ASG). The instances under the ASG span two Availability Zones (AZ) within the us-east-1 region. All the incoming requests are handled by an Application Load Balancer (ALB) that routes the requests to the EC2 instances under the ASG. As part of a test run, two instances (instances 1 and 2, belonging to AZ A) were manually terminated by the DevOps team, causing the Availability Zones to become unbalanced. Later that day, another instance (belonging to AZ B) was detected as unhealthy by the Application Load Balancer's health check. Can you identify the correct outcomes for these events? (Select two)

A. As the Availability Zones got unbalanced, Amazon EC2 Auto Scaling will compensate by rebalancing the Availability Zones. When rebalancing, Amazon EC2 Auto Scaling terminates old instances before launching new instances, so that rebalancing does not cause extra instances to be launched.
B. Amazon EC2 Auto Scaling creates a new scaling activity for terminating the unhealthy instance and then terminates it. Later, another scaling activity launches a new instance to replace the terminated instance.
C. Amazon EC2 Auto Scaling creates a new scaling activity for launching a new instance to replace the unhealthy instance. Later, EC2 Auto Scaling creates a new scaling activity for terminating the unhealthy instance and then terminates it.
D. As the Availability Zones got unbalanced, Amazon EC2 Auto Scaling will compensate by rebalancing the Availability Zones. When rebalancing, Amazon EC2 Auto Scaling launches new instances before terminating the old ones, so that rebalancing does not compromise the performance or availability of your application.
E. Amazon EC2 Auto Scaling creates a new scaling activity to terminate the unhealthy instance and launch a new instance simultaneously.

60. A geological research agency maintains the seismological data for the last 100 years. The data arrives at a velocity of 1 GB per minute. You would like to store the data with only the most relevant attributes to build a predictive model for earthquakes. What AWS services would you use to build the most cost-effective solution with the LEAST amount of infrastructure maintenance?

A. Ingest the data in a Spark Streaming cluster on EMR and use Spark Streaming transformations before writing to S3.
B. Ingest the data in an AWS Glue job and use Spark transformations before writing to S3.
C. Ingest the data in Kinesis Data Analytics and use SQL queries to filter and transform the data before writing to S3.
D. Ingest the data in Kinesis Data Firehose and use a Lambda function to filter and transform the incoming stream before the output is dumped on S3.
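The Firehose-plus-Lambda pattern in option D uses Firehose's transformation contract: the function receives a batch of base64-encoded records and must return each one with the same `recordId`, a `result` of `Ok`/`Dropped`/`ProcessingFailed`, and re-encoded data. A sketch with hypothetical seismic field names, runnable locally against a fake event:

```python
import base64
import json

def handler(event, context):
    """Kinesis Data Firehose transformation Lambda (sketch)."""
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))
        # Keep only the attributes relevant to the predictive model
        # (field names here are hypothetical).
        slim = {k: payload[k] for k in ("timestamp", "magnitude", "depth")
                if k in payload}
        output.append({
            "recordId": record["recordId"],   # must echo the incoming id
            "result": "Ok",                   # Ok / Dropped / ProcessingFailed
            "data": base64.b64encode(json.dumps(slim).encode()).decode(),
        })
    return {"records": output}

# Local smoke test with a fake Firehose event:
sample = {"records": [{
    "recordId": "1",
    "data": base64.b64encode(
        json.dumps({"timestamp": 1, "magnitude": 4.2, "noise": "x"}).encode()
    ).decode(),
}]}
result = handler(sample, None)
```

Firehose buffers, batches, and delivers the transformed output to S3 with no clusters or servers to maintain, which is what makes this combination low-maintenance.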

61. One of the largest healthcare solutions providers in the world uses Amazon S3 to store and protect a petabyte of critical medical imaging data for its AWS-based Health Cloud service, which connects hundreds of thousands of imaging machines and other medical devices. The engineering team has observed that while some of the objects in the imaging data bucket are frequently accessed, others sit idle for a considerable span of time. As a solutions architect, what is your recommendation to build the MOST cost-effective solution?

A. Create a data monitoring application on an EC2 instance in the same region as the imaging data bucket. The application is triggered daily via CloudWatch and it changes the storage class of infrequently accessed objects to S3 Standard-IA and the frequently accessed objects are migrated to S3 Standard class.
B. Store the objects in the imaging data bucket using the S3 Intelligent-Tiering storage class.
C. Create a data monitoring application on an EC2 instance in the same region as the imaging data bucket. The application is triggered daily via CloudWatch and it changes the storage class of infrequently accessed objects to S3 One Zone-IA and the frequently accessed objects are migrated to S3 Standard class.
D. Store the objects in the imaging data bucket using the S3 Standard-IA storage class.
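Using Intelligent-Tiering amounts to setting the storage class at upload time; S3 then moves each object between frequent- and infrequent-access tiers automatically based on observed access patterns. A sketch of the upload parameters in the shape of boto3's `put_object` (bucket, key, and body are placeholders):

```python
# Upload parameters in the shape of boto3 s3.put_object; bucket/key
# names are hypothetical placeholders.
put_object_request = {
    "Bucket": "imaging-data-bucket",
    "Key": "scans/patient-0001.dcm",
    "Body": b"<image bytes>",
    "StorageClass": "INTELLIGENT_TIERING",
}
```

No monitoring application or manual class migration is needed, which is the operational contrast with the EC2-based options.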

62. The engineering team at a Spanish professional football club has built a notification system on the web platform using Amazon SNS notifications, which are then handled by a Lambda function for end-user delivery. During the off-season, the notification system needs to handle about 100 requests per second. During the peak football season, the rate touches about 5000 requests per second and a significant number of the notifications are not being delivered to the end-users on the web platform. As a solutions architect, which of the following would you suggest as the BEST possible solution to this issue?

A. Amazon SNS has hit a scalability limit, so the team needs to contact AWS support to raise the account limit.
B. Amazon SNS message deliveries to AWS Lambda have crossed the account concurrency quota for Lambda, so the team needs to contact AWS support to raise the account limit.
C. The engineering team needs to provision more servers running the Lambda service.
D. The engineering team needs to provision more servers running the SNS service.
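A quick back-of-envelope check shows how the Lambda concurrency quota comes into play: required concurrency is roughly request rate times average invocation duration. The 0.5-second duration below is an assumption for illustration, and the 1,000 default per-region account concurrency quota is the commonly cited (raisable) default:

```python
# Rough Lambda concurrency sizing; duration is an assumed value.
rate_per_second = 5000          # peak-season request rate from the question
avg_duration_seconds = 0.5      # assumed average invocation time
required_concurrency = rate_per_second * avg_duration_seconds

default_quota = 1000            # typical default account concurrency quota
quota_exceeded = required_concurrency > default_quota
print(required_concurrency, quota_exceeded)
```

Under these assumptions the peak load needs well over the default quota, so invocations get throttled and notifications go undelivered until the limit is raised.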

63. An IT company wants to review its security best-practices after an incident was reported where a new developer on the team was assigned full access to DynamoDB. The developer accidentally deleted a couple of tables from the production environment while building out a new feature. Which is the MOST effective way to address this issue so that such incidents do not recur?

A. Use permissions boundary to control the maximum permissions employees can grant to the IAM principals.
B. Remove full database access for all IAM users in the organization.
C. Only the root user should have full database access in the organization.
D. The CTO should review the permissions for each new developer's IAM user so that such incidents don't recur.
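A permissions boundary is itself just a managed policy: it caps the maximum permissions an identity can have, so even a developer granted `dynamodb:*` by an identity policy is limited to the intersection with the boundary. A sketch of a read-only DynamoDB boundary (the action list and wildcard resource are illustrative):

```python
# Managed policy used as a permissions boundary; destructive actions
# such as dynamodb:DeleteTable are deliberately absent.
permissions_boundary = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "dynamodb:GetItem",
            "dynamodb:Query",
            "dynamodb:Scan",
            "dynamodb:DescribeTable",
        ],
        "Resource": "*",
    }],
}
```

Attached as a boundary to new developers' IAM users or roles, this prevents a repeat of the accidental table deletion without per-developer manual review.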

64. A cyber forensics company runs its EC2 servers behind an Application Load Balancer along with an Auto Scaling group. The engineers at the company want to be able to install proprietary forensic tools on each instance and perform a pre-activation status check of these tools whenever an instance is provisioned because of a scale-out event from an auto-scaling policy. Which of the following options can be used to enable this custom action?

A. Use the Auto Scaling group scheduled action to put the instance in a wait state and launch a custom script that installs the proprietary forensic tools and performs a pre-activation status check.
B. Use the EC2 instance user data to put the instance in a wait state and launch a custom script that installs the proprietary forensic tools and performs a pre-activation status check.
C. Use the Auto Scaling group lifecycle hook to put the instance in a wait state and launch a custom script that installs the proprietary forensic tools and performs a pre-activation status check.
D. Use the EC2 instance metadata to put the instance in a wait state and launch a custom script that installs the proprietary forensic tools and performs a pre-activation status check.
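A launch lifecycle hook holds each new instance in the `Pending:Wait` state until the install-and-check script signals completion. A sketch of the hook configuration in the shape of the Auto Scaling `put_lifecycle_hook` API (the hook, group names, and timeout are hypothetical):

```python
# Parameters in the shape of autoscaling put_lifecycle_hook; the names
# and timeout below are illustrative placeholders.
lifecycle_hook = {
    "LifecycleHookName": "install-forensic-tools",
    "AutoScalingGroupName": "forensics-asg",
    "LifecycleTransition": "autoscaling:EC2_INSTANCE_LAUNCHING",
    "HeartbeatTimeout": 900,     # seconds allowed for install + status check
    "DefaultResult": "ABANDON",  # fail closed if the check never completes
}
```

The custom script then calls `complete-lifecycle-action` with `CONTINUE` once the tools pass their pre-activation check, releasing the instance into service.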

65. A chip design startup is running an Electronic Design Automation (EDA) application, which is a high-performance workflow used to simulate performance and failures during the design phase of silicon chip production. The application produces massive volumes of data that can be divided into two categories. The ‘hot data’ needs to be both processed and stored quickly in a parallel and distributed fashion. The ‘cold data’ needs to be kept for reference with quick access for reads and updates at a low cost. Which of the following AWS services is BEST suited to accelerate the aforementioned chip design process?

A. Amazon FSx for Windows File Server.
B. AWS Glue.
C. Amazon FSx for Lustre.
D. Amazon EMR.


National Academies Press: OpenBook

Science at Sea: Meeting Future Oceanographic Goals with a Robust Academic Research Fleet (2009)

Chapter: 1 The U.S. Academic Research Fleet


1 The U.S. Academic Research Fleet

The academic research fleet provides U.S. and international users with access to the ocean—from the nearshore coastal zones to deep, remote regions far from land. Research vessels provide oceanographers with opportunities to study issues of increasing societal relevance, including the ocean's role in climate, natural hazards, economic resources, human health, and ecosystem sustainability. A highly capable fleet of ships also provides a platform for innovative basic research in chemical, biological, and physical oceanography; marine geology and geophysics; atmospheric science; and emerging interdisciplinary areas. Reports from the U.S. Commission on Ocean Policy (USCOP) and the Joint Subcommittee on Ocean Science and Technology (JSOST) have recognized the academic fleet as an essential component of ocean research infrastructure (U.S. Commission on Ocean Policy, 2004; Joint Subcommittee on Ocean Science and Technology, 2007). At the same time, there is community concern that the fleet is in dire need of both modernization and recapitalization (i.e., U.S. Commission on Ocean Policy, 2004; Malakoff, 2008; UNOLS Fleet Improvement Committee, 2009).

BACKGROUND

The UNOLS Consortium

The U.S. academic research fleet is managed through the University-National Oceanographic Laboratory System (UNOLS; Box 1-1), a consortium that unites research institutions, federal agencies, and state and

Box 1-1
What Is UNOLS?

• The UNOLS mission is to "provide a primary forum through which the ocean research and education community, research facility operators and the supporting federal agencies work cooperatively to improve access, scheduling, operation, and capabilities of current and future academic oceanographic facilities."
• 18 UNOLS institutions operate shared-use facilities, including 22 research vessels, a National Deep Submergence Facility, a National Oceanographic Aircraft Facility, and a National Oceanographic Seismic Facility.
• UNOLS acts in an advisory role to facility operators and to supporting federal agencies, but it is not itself a funding agency or a facility operator.
• UNOLS supports community-wide efforts to provide broad access to oceanographic research facilities; continuous improvement of existing facilities; and planning for future oceanographic facilities.

Source: UNOLS website (www.unols.org) and UNOLS Fleet Improvement Committee (2009).

private interests. Although the academic fleet has existed since before World War II (history provided in Appendix A), the UNOLS management structure was not established until 1971, based on a recommendation of the Stratton Commission report Our Nation and the Sea (Commission on Marine Science, 1969; Byrne and Dinsmore, 2000; Bash, 2001). From 18 original operating institutions (Byrne and Dinsmore, 2000), by 2009 membership had grown to 61 institutions representing 26 states and Panama, Puerto Rico, and Bermuda (Appendix B). UNOLS coordinates the schedules of 22 vessels berthed in 13 states and Bermuda.

UNOLS assists federal and state agencies in performing their seagoing responsibilities. The National Science Foundation (NSF), Office of Naval Research (ONR), National Oceanic and Atmospheric Administration (NOAA), U.S. Geological Survey (USGS), Minerals Management Service (MMS), and U.S. Coast Guard (USCG) support the UNOLS consortium through a cooperative agreement. Other agencies, including the Environmental Protection Agency (EPA), National Aeronautics and Space Administration (NASA), U.S. Army Corps of Engineers (USACE), and Department of Energy (DOE), support ship time on UNOLS vessels (Annette DeSilva, personal communication, 2009). State funds and private resources are also used to support the academic fleet.

The UNOLS Fleet

The current UNOLS fleet (Table 1-1) consists of six classes of ships (Federal Oceanographic Facilities Committee, 2001; Interagency Working Group on Facilities, 2007; UNOLS Fleet Improvement Committee, 2009). Of these, the Global, Ocean, Intermediate, and Regional classes have been most likely to be built or acquired with federal funds (Interagency Working Group on Facilities, 2007).

Global class vessels are large, high-endurance ships capable of working worldwide. They are able to stay at sea for 50 or more days and can carry 30-38 scientists. Two of the six Global class ships are specialized: Atlantis is the tender for the Alvin deep submersible, and Marcus Langseth is a seismic ship. While the four other Global class vessels are general purpose, each also carries specialized equipment (e.g., long coring ability on Knorr). Intermediate class ships are medium-endurance, ocean-ranging vessels with berths for 18-20 scientists. Three of the five Intermediate vessels (Endeavor, Oceanus, and Wecoma) are approaching the end of their service lives and are projected to retire in 2010. The Ocean class was envisioned in the 2001 Federal Oceanographic Facilities Committee (FOFC) report Charting the Future for the National Academic Research Fleet as a replacement for the aging, less capable Intermediate class (Federal Oceanographic Facilities Committee, 2001). These general purpose, oceangoing vessels are designed to have ranges up to 40 days and accommodations for 25 scientists. There is currently one Ocean class vessel, Kilo Moana, with three more planned.

Regional and Regional/Coastal class vessels serve coastal oceanography needs, with 30-day endurance and capacity for up to 20 scientists. There are two main distinctions between these classes: all four of the Regional/Coastal vessels were funded through state sources, while two of the three Regional ships were acquired by NSF; and Regional class ships generally have a little more range and endurance than Regional/Coastal vessels, which would work closer to the coast and often conduct shorter cruises closer to port. Local class ships work in the nearshore environment, with an endurance of about 20 days and berthing for 15 or fewer scientists. Most Local class vessels are owned by individual institutions.

The Navy has historically been a strong supporter of academic ocean research in the United States. In addition to funding scientific research and instrument development, there is a long and well-invested portfolio of assets in the U.S. academic research fleet (see Appendix A and Table 1-1, respectively, for past and current Navy-funded UNOLS ships). The Navy currently owns five of the six Global vessels and the sole Ocean class vessel in the UNOLS fleet, and has traditionally capitalized the largest ships of the UNOLS fleet. NSF owns the Global class Marcus Langseth, three Intermediate class vessels, and several smaller ships. NSF funds the majority of ship operating days (58 percent between 2000 and 2009; Annette DeSilva, personal communication, 2008) and fleet operating costs (63 percent in 2007) (UNOLS Fleet Improvement Committee, 2009). By comparison, the Navy utilized an average of 17 percent of UNOLS ship operating days in the same time period.

Table 1-1  The 2009 UNOLS Research Fleet (adapted from www.unols.org; used with permission from UNOLS)

Class | Operating Institution | Ship | Year Built/Converted | Owner | Length (ft)
Global | Scripps Institution of Oceanography (SIO) | Melville | 1969 | Navy | 279
Global | Woods Hole Oceanographic Institution | Knorr | 1970 | Navy | 279
Global | University of Washington | Thomas G. Thompson | 1991 | Navy | 274
Global | Scripps Institution of Oceanography | Roger Revelle | 1996 | Navy | 274
Global | Woods Hole Oceanographic Institution | Atlantis | 1997 | Navy | 274
Global | Lamont-Doherty Earth Observatory | Marcus Langseth | 2008 | NSF | 235
Ocean | University of Hawaii | Kilo Moana | 2002 | Navy | 186
Intermediate | Harbor Branch Oceanographic Institute, Florida Atlantic University (FAU) | Seward Johnson | 1985 | FAU | 204
Intermediate | Oregon State University | Wecoma | 1976 | NSF | 185
Intermediate | University of Rhode Island | Endeavor | 1977 | NSF | 185
Intermediate | Woods Hole Oceanographic Institution | Oceanus | 1976 | NSF | 177
Intermediate | Scripps Institution of Oceanography | New Horizon | 1978 | SIO | 170
Regional | Bermuda Institute for Ocean Sciences (BIOS) | Atlantic Explorer | 2006 | BIOS | 168
Regional | Duke University/University of North Carolina | Cape Hatteras | 1981 | NSF | 135
Regional | Moss Landing Marine Laboratories | Point Sur | 1981 | NSF | 135
Regional/Coastal | University of Delaware (UD) | Hugh R. Sharp | 2005 | UD | 146
Regional/Coastal | Scripps Institution of Oceanography | Robert Gordon Sproul | 1981 | SIO | 125
Regional/Coastal | Louisiana Universities Marine Consortium (LUMCON) | Pelican | 1985 | LUMCON | 116
Regional/Coastal | University of Miami (UM) | Walton Smith | 2000 | UM | 96
Local | University System of Georgia/Skidaway (UG/SKIO) | Savannah | 2001 | UG/SKIO | 92
Local | University of Minnesota, Duluth (UMD) | Blue Heron | 1985 | UMD | 86
Local | University of Washington | Clifford Barnes | 1966 | NSF | 66

REPORT SCOPE

The Navy has committed to build two new Ocean class vessels, scheduled to enter service in 2014 and 2015, with ONR as the mission sponsor. Both ONR and NSF are interested in the impact of evolving science needs, rapid technological advancements, and increasing operational costs on future research fleet capabilities. They have asked the National Academies to carry out an independent and objective assessment of the scientific and technological issues that may affect the evolution of the UNOLS fleet (see Box 1-2 for Statement of Task).

Because of the long lifespan of the research fleet assets (often 30 or more years), there is a strong emphasis on adequate planning in the present to make sure the fleet remains capable of supporting future scientific research. This report investigates future vessel needs, including fleet mix, but does not address or recommend a specific number of ships needed. In the same vein, an "optimal mix" of autonomous and remote platforms, observing systems, and remote sensing is not addressed because of an inability to predict future disruptive technologies that could revolutionize the field of oceanography. This report is also not intended to impact the major design elements of the two planned Ocean class vessels, which were in development when the study was commissioned.
Primary technology drivers for this study include recent investments in ocean observing systems (e.g., NSF’s Ocean Observing Initiative [OOI] and NOAA’s investment in the Integrated Ocean Observing System [IOOS]) and associated long-duration sensor packages; growth in the use

Box 1-2
Statement of Task

In support of the need for oceanographic fleet replacement, ONR is currently in the early design process for the first of two new Ocean class ships and requires near-term advice on how the rapid advancements in ocean observing technology and the impacts of rising costs will impact the future fleet relative to Navy needs. Therefore, ONR and NSF have requested that the National Research Council (NRC) appoint an ad hoc committee to review the scientific and technological issues that may affect the evolution of the UNOLS academic fleet, including:

1. How technological advances such as autonomous underwater vehicles and ocean observing systems will affect the role and characteristics of the future UNOLS fleet with regard to accomplishing national oceanographic data collection objectives.
2. The most important factors in oceanographic research vessel design. Do specialized research needs dominate the design criteria and, if so, what are the impacts on costs and overall availability?
3. How evolving modeling and remote sensing technologies will impact the balance between various research operations such as ground-truthing, hypothesis testing, exploration, and observation.
4. How the increasing cost of ship time will affect the types of science done aboard ships.
5. The usefulness of partnering mechanisms such as UNOLS to support national oceanographic research objectives.

and maturity of remotely operated and autonomous vehicles; and increasingly sophisticated modeling and remote sensing. Evolving directions in scientific research and their expected impacts on research vessel design are also examined in the context of past experiences and present trends. The fleet is required to support a broad range of oceanographic missions, including those in physical, biological, and chemical oceanography; marine geology and geophysics; and atmospheric science. For this reason, ONR's intent with its Global and Ocean class vessels has been to provide a general purpose platform for science (Frank Herr, personal communication, 2009). The committee has identified design requirements dictated by research needs, with a discussion of the costs entailed.

Capital and life-cycle costs are also strong drivers of the academic fleet. Construction costs are dependent on shipyard labor needs and the cost of raw materials such as steel. Crew salaries and benefits costs have historically been the largest percentage of vessel operating costs, although

rising fuel prices from 2005 to 2008 contributed to increasingly higher overall operating costs (UNOLS Fleet Improvement Committee, 2009).

STUDY APPROACH AND INFORMATION NEEDS

To properly evaluate the factors and demands that may drive future fleet needs, the committee considered a number of issues. Major trends in future oceanographic research were examined as a necessary complement to technological advances. The committee studied many recent community planning documents and agency strategic plans for future ocean science directions to evaluate these needs. During the information gathering process, presentations by and discussions with representatives of federal agencies, scientists, engineers, shipboard scientific support personnel, and ship operators were used to discern trends in science usage, new technologies, and vessel needs. The committee chose not to explore quantitative analyses of recent publications or conference abstracts, because members did not feel that such analyses would provide accurate, forward-looking measures of community scientific trends or changing fleet needs. Statistics related to fleet operating costs and usage trends were obtained from the UNOLS Office and examined by the committee. Due to the minor differences between their respective classes, Regional and Regional/Coastal vessels were considered together and are often used interchangeably in this report.

The academic research fleet has been studied often. Federal advisory boards, interagency groups, and the UNOLS Fleet Improvement Committee have all expended considerable effort discussing the status of the fleet, projections into the future, and renewal plans. These prior reports are summarized below.

Past Assessments

In 1999, The Academic Research Fleet was written in response to a request from NSF's Science Advisory Board (Fleet Review Committee, 1999). The committee was asked to evaluate current and future vessel requirements for NSF oceanographic research and to report on the overall management structure for the research fleet. Among its findings and recommendations, the report recognized that the strength of the UNOLS system was in the highly trained crew and ship operators that supported seagoing research. UNOLS management and practices were also commended. The report indicated some concern about a potential decreasing trend in fleet use and called for the introduction of new technologies into the fleet and improvements in training and quality control. The report

recommended that federal agencies prepare and coordinate long-range plans for the academic fleet.

An NSF-sponsored workshop held in 2000, Assessment of Future Science Needs in the Context of the Academic Oceanographic Fleet, examined fleet needs in the context of future science research and new observational technology. Workshop participants concluded that new observational tools and systems would not reduce or replace the need for an academic research fleet. Instead, future research and tools would increase demand for ship time and for more capable ships (Cowles and Atkinson, 2000).

NSF's 2001 report Ocean Sciences at the New Millennium asserted that "maintaining a modern, well-equipped research fleet is the most basic requirement for a healthy and vigorous research program in the ocean sciences" and strongly recommended that a long-term plan for fleet renewal be enacted (National Science Foundation, 2001). That same year, FOFC, a federal interagency committee of the National Oceanographic Partnership Program (NOPP), released Charting the Future for the National Academic Research Fleet (Federal Oceanographic Facilities Committee, 2001). That report responded to data in The Academic Research Fleet by setting forth a renewal strategy for the academic research fleet, with the underlying assumption that fleet capacity would be maintained while capabilities were increased. (Fleet capacity was defined as 3,600 days, equal to the total operational days averaged over the previous 5-year interval, 1997 to 2001.) It outlined a 20-year plan for adding 10-13 additional vessels to the academic fleet, discussed planning for technology upgrades and updating ship concept designs and science mission requirements, and proposed the introduction of Ocean class vessels as replacements for aging and less capable Intermediate vessels of the fleet. The plan was to be revised at least once every 5 years to account for changing science needs.

In its 2004 report An Ocean Blueprint for the 21st Century, the U.S. Commission on Ocean Policy praised the UNOLS fleet renewal plan outlined in Charting the Future for the National Academic Research Fleet. However, the members of the commission expressed concern that at the time of their report there had been no move to implement the plan or provide funding for fleet renewal (U.S. Commission on Ocean Policy, 2004).

In 2007, the Interagency Working Group on Facilities (IWGF), a successor to FOFC established by JSOST, released the Federal Oceanographic Fleet Status Report (Interagency Working Group on Facilities, 2007). The IWGF report described the current status and planned renewal activities of federally-owned academic ships more than 40 meters in length and other federal fleet assets in the 2007-2015 time frame. Renewal plans put forth in the 2001 FOFC report either were not addressed in this report or

were scaled down, with the exception of a replacement for the seismic vessel Maurice Ewing.

The most recent assessment of the fleet was done in 2009 by the UNOLS Fleet Improvement Committee (2009). Its Fleet Improvement Plan addressed the needs of the U.S. research fleet through 2025. The report recommended that it was necessary for the academic research fleet to increase beyond the levels projected in the Federal Oceanographic Fleet Status Report and that federal agencies should proceed with existing and planned fleet renewal activities. It was noted that the current planned renewal contains fewer ships than was recommended in the 2001 Charting the Future for the National Academic Research Fleet plan.

ORGANIZATION OF THIS REPORT

This report addresses oceanographic research and technology needs that should influence the development of the U.S. academic fleet in the next 10-20 years. Chapter 2 surveys future science trends that will impact fleet usage in the near future, while Chapter 3 provides a discussion on specific technological advancements that may impact research vessel needs. Chapters 2 and 3 both address aspects of the first and third components of the Statement of Task (Box 1-2; Tasks 1 and 3). Research vessel design factors and criteria (Task 2) are outlined in Chapter 4, while fleet costs and the resulting impact on research (Task 4) are discussed in Chapter 5. Chapter 6 discusses the UNOLS partnership structure and its usefulness (Task 5). A summary and recommendations are included in Chapter 7. Relevant items from the Statement of Task are listed at the beginning of each chapter.

The U.S. academic research fleet is an essential national resource, and it is likely that scientific demands on the fleet will increase. Oceanographers are embracing a host of remote technologies that can facilitate the collection of data, but will continue to require capable, adaptable research vessels for access to the sea for the foreseeable future. Maintaining U.S. leadership in ocean research will require investing in larger and more capable general purpose Global and Regional class ships; involving the scientific community in all phases of ship design and acquisition; and improving coordination between agencies that operate research fleets.


The U.S. Academic Research Fleet

R/V Roger Revelle & R/V Sally Ride

Brief Description

The U.S. Academic Research Fleet (ARF) currently consists of 17 oceanographic vessels and various submersibles/autonomous vehicles owned by NSF, the Office of Naval Research (ONR), and U.S. universities and laboratories. All the ARF ships and vehicles are operated by research universities and laboratories. The ARF is a subset of the U.S. Federal Oceanographic Fleet, with collaboration under the Interagency Working Group on Facilities and Infrastructure (IWG-FI). Coordination to access the ARF vessels and vehicles is accomplished through collaboration with the University-National Oceanographic Laboratory System (UNOLS) organization. Universities and laboratories that operate ARF vessels are designated as UNOLS operators, and as such adhere to the UNOLS Research Vessel Safety Standards, as well as other applicable U.S. Coast Guard Code of Federal Regulations and International Maritime regulations. All ARF vessels are U.S.-flagged vessels.

Photo of the research vessel R/V Sikuliaq

Scientific Purpose

The ARF consists of technologically advanced ships and submersibles/autonomous vehicles that enable scientists to conduct research on the complex ocean, seafloor, and sub-seafloor environment, the Great Lakes, and the remote polar regions. ARF vessels collect observational data on Earth systems that provide a foundation for understanding how these systems interact and for improved modeling. Through at-sea sampling and observing, researchers have begun to understand, model, and predict the responses of marine populations to both long-term and episodic changes in ocean conditions.

Meeting Intellectual Community Needs

The National Research Council’s Committee Report, Sea Change: 2015-2025 Decadal Survey of Ocean Sciences, documented that ships provide invaluable access to the sea and are an essential component of the ocean research infrastructure. The Committee found that the ARF was a critical asset in addressing each of the eight decadal science priorities of highest importance to the Nation in the decade of 2015-2025. Users of ARF vessels collect data during a cruise both at sea and onshore via tele-presence/data presence.

Governance Structure and Partnerships

NSF Governance Structure

NSF oversight is provided by a Program Director in the Division of Ocean Sciences who works cooperatively with staff from other Divisions, BFA, the Office of the General Counsel, and the Office of Legislative and Public Affairs. Within BFA, the Large Facilities Office provides advice to program staff and assists with Agency oversight and assurance. The GEO Senior Advisor for Facilities and the Chief Officer for Research Facilities also provide high-level guidance, support, and oversight.

NSF is the Cognizant Federal Agency and oversees the ARF through awards to each ship-operating institution as well as through site visits, ship inspections, Business Systems Reviews (BSRs), and participation at UNOLS Council/Committee meetings. Additional oversight is provided by the ARF Integrated Project Team, consisting of Program Directors and staff from GEO, BFA's Large Facilities Office, and the Cooperative Services Branch in the Division of Acquisition and Contract Support, as well as representatives from the Office of Legislative and Public Affairs and the Office of the General Counsel.

External Governance Structure

The ARF is overseen through a variety of activities conducted by NSF and through coordination with stakeholders through the UNOLS Council and Committees. The UNOLS Ship Scheduling Committee is the mechanism used to develop the annual operating schedule to maximize the efficient support for the funded science. Through the UNOLS Fleet Improvement Committee, the stakeholders update documents identifying the capabilities needed by each Ship Class to support the science missions, which inform funding needs. Additionally, the material condition of the vessels, which is determined through the NSF Ship Inspection Program, helps determine future Fleet modernization needs.

Partnerships and Other Funding Sources

The ARF is supported through an interagency partnership, principally with ONR and NOAA. The Fleet’s operating costs are divided proportionally among the vessel users based on usage. NSF supports approximately 70 percent of the total usage.

Crew members on R/V THOMAS G THOMPSON

Naval Postgraduate School

Where Science Meets the Art of Warfare

NPS, U.S. Pacific Fleet Launch Nimitz Research Group

Lt. Cmdr. Ed Early, NPS Public Affairs   |  February 15, 2022

The Naval Postgraduate School (NPS) and Commander, U.S. Pacific Fleet (PACFLT) are joining forces to establish the Nimitz Research Group, an organization which will leverage NPS’ educational and research capabilities and institutional knowledge to meet the needs of the Pacific Fleet.

Created under the aegis of NPS’ Naval Warfare Studies Institute (NWSI), the Nimitz Research Group will consist of NPS faculty and students who will serve as an extension of the PACFLT staff in Hawaii, participating in fleet exercises and events and providing additional research capacity and subject matter expertise.

The launch of the Nimitz Research Group was announced on Feb. 16 by the president of NPS, retired Vice Adm. Ann E. Rondeau, and Adm. Samuel Paparo, Commander, PACFLT.

“The establishment of the Nimitz Research Group marks a further evolution in our outstanding partnership with the U.S. Pacific Fleet,” said Rondeau. “We have always seen NPS as a center of excellence and innovation, a place where our faculty and students work together to solve the operational challenges of our fleet and force. Through the Nimitz Research Group, we will be able to provide those solutions by deploying our talent and our experience in direct support of our Pacific Fleet partners.”

The Nimitz Research Group is modeled after NWSI’s Bucklew Research Group, which already provides similar support to Naval Special Warfare (NSW) through studies and research by Navy SEAL officers attending NPS on a two-year master’s degree program. Bucklew scholars serve as an extension of NSW Group commands, who in turn benefit from the SEALs’ education, research efforts, interactions with the academic community, and proximity to Silicon Valley.

During a meeting with academic and industry leaders at NPS in October 2021, Paparo – a graduate of NPS’ Systems Analysis program – expressed a desire to similarly leverage the unique capabilities of NPS to support COMPACFLT’s priorities and research needs. Paparo recognized the value of utilizing the deep expertise of NPS faculty members as well as the operational experience of NPS’ 2,500 mid-career officers, senior non-commissioned officers and civilians. 

The example set by the Bucklew Research Group proved to be an ideal model for PACFLT’s requirements. As a result, the Nimitz Research Group was conceived with the goal of providing coherence and unity of action for NPS’ support to PACFLT.

“The Nimitz Research Group links the intellectual rigor of NPS, its key location in the nation’s hub of technical innovation and the expertise of innovative warfighters in the Pacific Fleet to research, develop and implement new and dynamic combat capabilities,” said Paparo. “Together we will build critical advantages over our competitors to maximize our strengths – battlespace awareness, agility, maneuverability and collective capabilities of the joint forces.”

According to U.S. Marine Corps Col. Randy Pugh, deputy director of NWSI, the main idea behind the Nimitz Research Group is its multi-disciplinary nature. As the group consists of members from all services, warfare communities and academic programs, every problem is analyzed and solved using many different lenses and with a tremendous wealth and diversity of experiences and expertise.

“As a result, you end up with a really rich and well-informed solution to a particular problem,” Pugh said. “The general theme is that this is not just the collective activity of individuals, but rather a whole which is very much greater than the sum of its parts.” 

MEDIA CONTACT  

Office of University Communications, 1 University Circle, Monterey, CA 93943, (831) 656-1068, https://nps.edu/office-of-university-communications, [email protected]
