Free Test

Quiz

1/10
A company provides a service that helps users from around the world discover new restaurants. The
service has 50 million monthly active users. The company wants to implement a semantic search
solution across a database that contains 20 million restaurants and 200 million reviews. The company
currently stores the data in PostgreSQL.
The solution must support complex natural language queries and return results for at least 95% of
queries within 500 ms. The solution must maintain data freshness for restaurant details that update
hourly. The solution must also scale cost-effectively during peak usage periods.
Which solution will meet these requirements with the LEAST development effort?
1 correct answer
A.
Migrate the restaurant data to Amazon OpenSearch Service. Implement keyword-based search rules that use custom analyzers and relevance tuning to find restaurants based on attributes such as cuisine type, features, and location. Create Amazon API Gateway HTTP API endpoints to transform user queries into structured search parameters.
B.
Migrate the restaurant data to Amazon OpenSearch Service. Use a foundation model (FM) in Amazon Bedrock to generate vector embeddings from restaurant descriptions, reviews, and menu items. When users submit natural language queries, convert the queries to embeddings by using the same FM. Perform k-nearest neighbors (k-NN) searches to find semantically similar results.
C.
Keep the restaurant data in PostgreSQL and implement a pgvector extension. Use a foundation model (FM) in Amazon Bedrock to generate vector embeddings from restaurant data. Store the vector embeddings directly in PostgreSQL. Create an AWS Lambda function to convert natural language queries to vector representations by using the same FM. Configure the Lambda function to perform similarity searches within the database.
D.
Migrate restaurant data to an Amazon Bedrock knowledge base by using a custom ingestion pipeline. Configure the knowledge base to automatically generate embeddings from restaurant information. Use the Amazon Bedrock Retrieve API with built-in vector search capabilities to query the knowledge base directly by using natural language input.
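
As background for the embedding approach in option B: once restaurant text and user queries are converted to vectors by the same foundation model, a k-nearest neighbors search is just a similarity ranking. A minimal local sketch of that ranking step (the 3-dimensional vectors below are illustrative stand-ins for real Bedrock embeddings, which typically have hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def knn_search(query_vec, index, k=2):
    # Rank every stored embedding by cosine similarity to the query.
    scored = [(name, cosine_similarity(query_vec, vec)) for name, vec in index.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]

# Toy "embeddings" standing in for Bedrock-generated vectors.
restaurant_index = {
    "Sushi Place": [0.9, 0.1, 0.0],
    "Taco Stand": [0.1, 0.9, 0.1],
    "Ramen Bar": [0.8, 0.2, 0.1],
}
query = [0.85, 0.15, 0.05]   # pretend embedding of "fresh Japanese noodles"
print(knn_search(query, restaurant_index))
```

At production scale, a purpose-built vector store performs this ranking with approximate-nearest-neighbor indexes rather than a full scan.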

Quiz

2/10
A company is using Amazon Bedrock and Anthropic Claude 3 Haiku to develop an AI assistant. The AI
assistant normally processes 10,000 requests each hour but experiences surges of up to 30,000
requests each hour during peak usage periods. The AI assistant must respond within 2 seconds while
operating across multiple AWS Regions.
The company observes that during peak usage periods, the AI assistant experiences throughput
bottlenecks that cause increased latency and occasional request timeouts. The company must
resolve the performance issues.
Which solution will meet this requirement?
1 correct answer
A.
Purchase provisioned throughput and sufficient model units (MUs) in a single Region. Configure the application to retry failed requests with exponential backoff.
B.
Implement token batching to reduce API overhead. Use cross-Region inference profiles to automatically distribute traffic across available Regions.
C.
Set up auto scaling AWS Lambda functions in each Region. Implement client-side round-robin request distribution. Purchase one model unit (MU) of provisioned throughput as a backup.
D.
Implement batch inference for all requests by using Amazon S3 buckets across multiple Regions. Use Amazon SQS to set up an asynchronous retrieval process.
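
As background for the retry behavior that option A mentions: exponential backoff with jitter spaces out retries of throttled calls so that bursty clients do not hammer the endpoint in lockstep. A minimal sketch, with a simulated endpoint standing in for a real bedrock-runtime client (`flaky_invoke` and the `RuntimeError` throttling stand-in are illustrative):

```python
import random

def call_with_backoff(invoke, max_retries=5, base_delay=0.1, sleep=lambda s: None):
    """Retry a throttled call with exponential backoff and full jitter.

    `invoke` is any zero-argument callable (e.g. a wrapper around a
    Bedrock converse call); `sleep` is injectable so tests run instantly.
    """
    for attempt in range(max_retries + 1):
        try:
            return invoke()
        except RuntimeError:          # stand-in for a ThrottlingException
            if attempt == max_retries:
                raise
            delay = random.uniform(0, base_delay * (2 ** attempt))
            sleep(delay)

# Simulate a model endpoint that throttles twice before succeeding.
attempts = {"n": 0}
def flaky_invoke():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("throttled")
    return "ok"

print(call_with_backoff(flaky_invoke))  # succeeds on the third attempt
```

Note that backoff only smooths transient spikes; it does not add capacity, which is why sustained surges call for more throughput or traffic distribution.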

Quiz

3/10
A company uses an AI assistant application to summarize the company’s website content and
provide information to customers. The company plans to use Amazon Bedrock to give the application
access to a foundation model (FM).
The company needs to deploy the AI assistant application to a development environment and a
production environment. The solution must integrate the environments with the FM. The company
wants to test the effectiveness of various FMs in each environment. The solution must provide
product owners with the ability to easily switch between FMs for testing purposes in each
environment.
Which solution will meet these requirements?
1 correct answer
A.
Create one AWS CDK application. Create multiple pipelines in AWS CodePipeline. Configure each pipeline to have its own settings for each FM. Configure the application to invoke the Amazon Bedrock FMs by using the aws_bedrock.ProvisionedModel.fromProvisionedModelArn() method.
B.
Create a separate AWS CDK application for each environment. Configure the applications to invoke the Amazon Bedrock FMs by using the aws_bedrock.FoundationModel.fromFoundationModelId() method. Create a separate pipeline in AWS CodePipeline for each environment.
C.
Create one AWS CDK application. Configure the application to invoke the Amazon Bedrock FMs by using the aws_bedrock.FoundationModel.fromFoundationModelId() method. Create a pipeline in AWS CodePipeline that has a deployment stage for each environment that uses AWS CodeBuild deploy actions.
D.
Create one AWS CDK application for the production environment. Configure the application to invoke the Amazon Bedrock FMs by using the aws_bedrock.ProvisionedModel.fromProvisionedModelArn() method. Create a pipeline in AWS CodePipeline. Configure the pipeline to deploy to the production environment by using an AWS CodeBuild deploy action. For the development environment, manually recreate the resources by referring to the production application code.
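
As background for the CDK options: the ability to switch FMs per environment usually comes down to treating the model ID as per-stage configuration rather than hard-coded application code. A minimal sketch of that idea (`MODEL_CONFIG` and the override mechanism are hypothetical; in a CDK app such a value would typically come from context and feed `fromFoundationModelId()`):

```python
# Hypothetical per-environment model configuration; a single CDK app could
# read a value like this from context (cdk.json or --context) and pass it
# to aws_bedrock.FoundationModel.fromFoundationModelId() in each stage.
MODEL_CONFIG = {
    "dev": "anthropic.claude-3-haiku-20240307-v1:0",
    "prod": "anthropic.claude-3-5-sonnet-20240620-v1:0",
}

def resolve_model_id(environment, overrides=None):
    """Return the FM id for a deployment stage, letting product owners
    switch models via an override without touching application code."""
    overrides = overrides or {}
    if environment not in MODEL_CONFIG:
        raise KeyError(f"unknown environment: {environment}")
    return overrides.get(environment, MODEL_CONFIG[environment])

print(resolve_model_id("dev"))
print(resolve_model_id("dev", overrides={"dev": "amazon.titan-text-express-v1"}))
```

Because the model ID is just a string parameter, one codebase with one pipeline can deploy both environments and still let testers swap models per stage.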

Quiz

4/10
A company deploys multiple Amazon Bedrock–based generative AI (GenAI) applications across
multiple business units for customer service, content generation, and document analysis. Some
applications show unpredictable token consumption patterns. The company requires a
comprehensive observability solution that provides real-time visibility into token usage patterns
across multiple models. The observability solution must support custom dashboards for multiple
stakeholder groups and provide alerting capabilities for token consumption across all the foundation
models that the company’s applications use.
Which combination of solutions will meet these requirements with the LEAST operational overhead?
(Select TWO.)
2 correct answers
A.
Use Amazon CloudWatch metrics as data sources to create custom Amazon QuickSight dashboards that show token usage trends and usage patterns across FMs.
B.
Use CloudWatch Logs Insights to analyze Amazon Bedrock invocation logs for token consumption patterns and usage attribution by application. Create custom queries to identify high-usage scenarios. Add log widgets to dashboards to enable continuous monitoring.
C.
Create custom Amazon CloudWatch dashboards that combine native Amazon Bedrock token and invocation CloudWatch metrics. Set up CloudWatch alarms to monitor token usage thresholds.
D.
Create dashboards that show token usage trends and patterns across the company’s FMs by using an Amazon Bedrock zero-ETL integration with Amazon Managed Grafana.
E.
Implement Amazon EventBridge rules to capture Amazon Bedrock model invocation events. Route token usage data to Amazon OpenSearch Serverless by using Amazon Data Firehose. Use OpenSearch dashboards to analyze usage patterns.
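
As background for the monitoring options: a dashboard or alarm built on Bedrock's token metrics ultimately reduces to aggregating token counts per model and comparing them against a threshold. A minimal local sketch over assumed invocation-log records (the record fields are illustrative):

```python
def summarize_token_usage(invocation_records):
    """Aggregate per-model token totals, mirroring what a dashboard built
    on input/output token metrics would display."""
    totals = {}
    for rec in invocation_records:
        model = rec["modelId"]
        tokens = rec["inputTokens"] + rec["outputTokens"]
        totals[model] = totals.get(model, 0) + tokens
    return totals

def breached_models(totals, threshold):
    # The local equivalent of an alarm firing per model above a token threshold.
    return sorted(model for model, used in totals.items() if used > threshold)

records = [
    {"modelId": "anthropic.claude-3-haiku-20240307-v1:0", "inputTokens": 900, "outputTokens": 300},
    {"modelId": "anthropic.claude-3-haiku-20240307-v1:0", "inputTokens": 700, "outputTokens": 200},
    {"modelId": "amazon.titan-text-express-v1", "inputTokens": 100, "outputTokens": 50},
]
totals = summarize_token_usage(records)
print(totals)
print(breached_models(totals, threshold=1000))
```

The operational-overhead question is then about who does this aggregation: a managed metrics service does it natively, while log pipelines and custom event routing require you to maintain the plumbing.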

Quiz

5/10
An enterprise application uses an Amazon Bedrock foundation model (FM) to process and analyze 50
to 200 pages of technical documents. Users are experiencing inconsistent responses and receiving
truncated outputs when processing documents that exceed the FM's context window limits.
Which solution will resolve this problem?
1 correct answer
A.
Configure fixed-size chunking at 4,000 tokens for each chunk with 20% overlap. Use application-level logic to link multiple chunks sequentially until the FM's maximum context window of 200,000 tokens is reached before making inference calls.
B.
Use hierarchical chunking with parent chunks of 8,000 tokens and child chunks of 2,000 tokens. Use Amazon Bedrock Knowledge Bases built-in retrieval to automatically select relevant parent chunks based on query context. Configure overlap tokens to maintain semantic continuity.
C.
Use semantic chunking with a breakpoint percentile threshold of 95% and a buffer size of 3 sentences. Use the RetrieveAndGenerate API to dynamically select the most relevant chunks based on embedding similarity scores.
D.
Create a pre-processing AWS Lambda function that analyzes document token count by using the FM's tokenizer. Configure the Lambda function to split documents into equal segments that fit within 80% of the context window. Configure the Lambda function to process each segment independently before aggregating the results.
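
As background for the chunking options: every strategy here rests on splitting a document into windows that fit the context limit, usually with overlap so that sentences straddling a boundary are not lost. A minimal fixed-size chunker (word-level "tokens" are used for simplicity; a real pipeline would use the model's tokenizer):

```python
def chunk_tokens(tokens, chunk_size, overlap):
    """Split a token list into fixed-size chunks with overlap, so each
    chunk fits a model's context window while preserving continuity at
    the boundaries."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + chunk_size])
        if start + chunk_size >= len(tokens):
            break
    return chunks

document = [f"tok{i}" for i in range(10)]
for chunk in chunk_tokens(document, chunk_size=4, overlap=1):
    print(chunk)
```

Hierarchical and semantic chunking refine this basic scheme: instead of fixed boundaries, they choose split points by document structure or embedding similarity, then retrieve only the chunks relevant to the query.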

Quiz

6/10
A company uses AWS Lake Formation to set up a data lake that contains databases and tables for
multiple business units across multiple AWS Regions. The company wants to use a foundation model
(FM) through Amazon Bedrock to perform fraud detection. The FM must ingest sensitive financial
data from the data lake. The data includes some customer personally identifiable information (PII).
The company must design an access control solution that prevents PII from appearing in a production
environment. The FM must access only authorized data subsets that have PII redacted from specific
data columns. The company must capture audit trails for all data access.
Which solution will meet these requirements?
1 correct answer
A.
Create a separate dataset in a separate Amazon S3 bucket for each business unit and Region combination. Configure S3 bucket policies to control access based on IAM roles that are assigned to FM training instances. Use S3 access logs to track data access.
B.
Configure the FM to authenticate by using AWS Identity and Access Management roles and Lake Formation permissions based on LF-Tag expressions. Define business units and Regions as LF-Tags that are assigned to databases and tables. Use AWS CloudTrail to collect comprehensive audit trails of data access.
C.
Use direct IAM principal grants on specific databases and tables in Lake Formation. Create a custom application layer that logs access requests and further filters sensitive columns before sending data to the FM.
D.
Configure the FM to request temporary credentials from AWS Security Token Service. Access the data by using presigned S3 URLs that are generated by an API that applies business unit and Regional filters. Use AWS CloudTrail to collect comprehensive audit trails of data access.
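
As background for the access-control options: whatever mechanism enforces the policy (Lake Formation column-level permissions, data filters, or a custom application layer), the net effect is that PII columns never reach the model. A minimal local sketch of that column filtering (the column names and PII set are illustrative):

```python
PII_COLUMNS = {"customer_name", "email", "ssn"}   # columns tagged as sensitive

def redact_rows(rows, authorized_columns):
    """Return only authorized, non-PII columns, dropping PII columns
    entirely. This mimics the effect of column-level permissions being
    applied before rows ever reach the model."""
    safe_columns = authorized_columns - PII_COLUMNS
    redacted = []
    for row in rows:
        clean = {col: row[col] for col in safe_columns if col in row}
        redacted.append(clean)
    return redacted

rows = [
    {"transaction_id": "t-1", "amount": 120.5, "email": "a@example.com", "ssn": "123-45-6789"},
    {"transaction_id": "t-2", "amount": 87.0, "email": "b@example.com", "ssn": "987-65-4321"},
]
print(redact_rows(rows, authorized_columns={"transaction_id", "amount", "email"}))
```

The governance distinction in this question is where that filter lives: enforced centrally by the data catalog's permission model, or re-implemented (and re-audited) in every consuming application.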

Quiz

7/10
A company is developing a generative AI (GenAI) application that analyzes customer service calls in
real time and generates suggested responses for human customer service agents. The application
must process 500,000 concurrent calls during peak hours with less than 200 ms end-to-end latency
for each suggestion. The company uses existing architecture to transcribe customer call audio
streams. The application must not exceed a predefined monthly compute budget and must maintain
auto scaling capabilities.
Which solution will meet these requirements?
1 correct answer
A.
Deploy a large, complex reasoning model on Amazon Bedrock. Purchase provisioned throughput and optimize for batch processing.
B.
Deploy a low-latency, real-time optimized model on Amazon Bedrock. Purchase provisioned throughput and set up automatic scaling policies.
C.
Deploy a large language model (LLM) on an Amazon SageMaker real-time endpoint that uses dedicated GPU instances.
D.
Deploy a mid-sized language model on an Amazon SageMaker serverless endpoint that is optimized for batch processing.

Quiz

8/10
A company uses AWS Lambda functions to build an AI agent solution. A GenAI developer must set up
a Model Context Protocol (MCP) server that accesses user information. The GenAI developer must
also configure the AI agent to use the new MCP server. The GenAI developer must ensure that only
authorized users can access the MCP server.
Which solution will meet these requirements?
1 correct answer
A.
Use a Lambda function to host the MCP server. Grant the AI agent Lambda functions permission to invoke the Lambda function that hosts the MCP server. Configure the AI agent’s MCP client to invoke the MCP server asynchronously.
B.
Use a Lambda function to host the MCP server. Grant the AI agent Lambda functions permission to invoke the Lambda function that hosts the MCP server. Configure the AI agent to use the STDIO transport with the MCP server.
C.
Use a Lambda function to host the MCP server. Create an Amazon API Gateway HTTP API that proxies requests to the Lambda function. Configure the AI agent solution to use the Streamable HTTP transport to make requests through the HTTP API. Use Amazon Cognito to enforce OAuth 2.1.
D.
Use a Lambda layer to host the MCP server. Add the Lambda layer to the AI agent Lambda functions. Configure the agentic AI solution to use the STDIO transport to send requests to the MCP server. In the AI agent’s MCP configuration, specify the Lambda layer ARN as the command. Specify the user credentials as environment variables.
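
As background for the transport options: MCP messages are JSON-RPC 2.0, and a Streamable HTTP client sends them as authenticated POST requests. A minimal sketch that assembles such a request (the endpoint URL and token are placeholders; the OAuth access token would come from a provider such as Amazon Cognito and be validated before the request reaches the server):

```python
import json

def build_mcp_tool_call(endpoint_url, access_token, tool_name, arguments, request_id=1):
    """Assemble the pieces of a Streamable HTTP request carrying an MCP
    tools/call message (JSON-RPC 2.0 body plus a bearer-token header)."""
    body = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {access_token}",
    }
    return {"url": endpoint_url, "headers": headers, "body": json.dumps(body)}

request = build_mcp_tool_call(
    "https://api.example.com/mcp",   # hypothetical HTTP API endpoint
    "example-access-token",          # placeholder OAuth 2.1 access token
    "get_user_info",
    {"user_id": "u-42"},
)
print(request["headers"]["Authorization"])
print(request["body"])
```

By contrast, the STDIO transport assumes client and server share a process boundary, which does not fit a remote Lambda-hosted server or per-user authorization.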

Quiz

9/10
A company is building a serverless application that uses AWS Lambda functions to help students
around the world summarize notes. The application uses Anthropic Claude through Amazon Bedrock.
The company observes that most of the traffic occurs during evenings in each time zone. Users report
experiencing throttling errors during peak usage times in their time zones.
The company needs to resolve the throttling issues by ensuring continuous operation of the
application. The solution must maintain application performance quality and must not require a fixed
hourly cost during low traffic periods.
Which solution will meet these requirements?
1 correct answer
A.
Create custom Amazon CloudWatch metrics to monitor model errors. Set provisioned throughput to a value that is safely higher than the peak traffic observed.
B.
Create custom Amazon CloudWatch metrics to monitor model errors. Set up a failover mechanism to redirect invocations to a backup AWS Region when the errors exceed a specified threshold.
C.
Enable invocation logging in Amazon Bedrock. Monitor key metrics such as Invocations, InputTokenCount, OutputTokenCount, and InvocationThrottles. Distribute traffic across cross-Region inference endpoints.
D.
Enable invocation logging in Amazon Bedrock. Monitor InvocationLatency, InvocationClientErrors, and InvocationServerErrors metrics. Distribute traffic across multiple versions of the same model.
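
As background for options C and D: the decision logic amounts to deriving a throttle rate from the Invocations and InvocationThrottles metrics, then routing traffic through a cross-Region inference profile when a single Region's quota is saturated. A minimal sketch (the geographic "us." prefix convention and the 1% threshold are assumptions, not values from this question):

```python
def throttle_rate(invocations, throttles):
    """Fraction of calls throttled, computed the same way a dashboard
    would derive it from Invocations and InvocationThrottles."""
    if invocations == 0:
        return 0.0
    return throttles / invocations

def pick_model_id(base_model_id, region_prefix, throttled_fraction, threshold=0.01):
    """If single-Region throttling exceeds the threshold, switch to a
    cross-Region inference profile id (geographic prefix + model id)."""
    if throttled_fraction > threshold:
        return f"{region_prefix}.{base_model_id}"
    return base_model_id

rate = throttle_rate(invocations=10_000, throttles=450)
print(pick_model_id("anthropic.claude-3-haiku-20240307-v1:0", "us", rate))
```

Cross-Region inference is pay-per-token with no fixed hourly commitment, which is what distinguishes it from over-provisioning throughput for the peak.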

Quiz

10/10
A financial services company is creating a Retrieval Augmented Generation (RAG) application that
uses Amazon Bedrock to generate summaries of market activities. The application relies on a vector
database that stores a small proprietary dataset with a low index count. The application must
perform similarity searches. The Amazon Bedrock model’s responses must maximize accuracy and
maintain high performance.
The company needs to configure the vector database and integrate it with the application.
Which solution will meet these requirements?
1 correct answer
A.
Launch an Amazon MemoryDB cluster and configure the index by using the Flat algorithm. Configure a horizontal scaling policy based on performance metrics.
B.
Launch an Amazon MemoryDB cluster and configure the index by using the Hierarchical Navigable Small World (HNSW) algorithm. Configure a vertical scaling policy based on performance metrics.
C.
Launch an Amazon Aurora PostgreSQL cluster and configure the index by using the Inverted File with Flat Compression (IVFFlat) algorithm. Configure the instance class to scale to a larger size when the load increases.
D.
Launch an Amazon DocumentDB cluster that has an IVFFlat index and a high probe value. Configure connections to the cluster as a replica set. Distribute reads to replica instances.
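
As background for the Flat-versus-HNSW trade-off: a Flat index performs an exact scan of every stored vector, which maximizes accuracy and remains fast when the dataset and index count are small, whereas HNSW trades exactness for speed at large scale. A sketch of the RediSearch-style command that MemoryDB vector search uses to create a Flat index (the exact command shape is an assumption; verify it against the MemoryDB documentation before use):

```python
def flat_index_command(index_name, field, dim, metric="COSINE"):
    """Build the Redis-style FT.CREATE command for an exact (Flat)
    vector index; the '6' is the count of algorithm arguments that
    follow (TYPE, FLOAT32, DIM, <dim>, DISTANCE_METRIC, <metric>)."""
    return [
        "FT.CREATE", index_name,
        "ON", "HASH",
        "SCHEMA", field, "VECTOR", "FLAT", "6",
        "TYPE", "FLOAT32",
        "DIM", str(dim),
        "DISTANCE_METRIC", metric,
    ]

# Hypothetical index for the proprietary market dataset.
print(" ".join(flat_index_command("idx:market", "embedding", 1536)))
```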

Amazon AWS Certified Generative AI Developer - Professional Practice test unlocks all online simulator questions

Thank you for choosing the free version of the Amazon AWS Certified Generative AI Developer - Professional practice test! To deepen your knowledge further, unlock the full version of our Amazon AWS Certified Generative AI Developer - Professional Simulator: you will be able to take tests with over 85 constantly updated questions and easily pass your exam. 98% of people pass the exam on the first attempt after preparing with our 85 questions.


What to expect from our Amazon AWS Certified Generative AI Developer - Professional practice tests and how to prepare for any exam?

The Amazon AWS Certified Generative AI Developer - Professional Simulator practice tests are part of the Amazon database and are the best way to prepare for any Amazon AWS Certified Generative AI Developer - Professional exam. The practice tests consist of 85 questions written by experts to help you prepare to pass the exam on the first attempt. The database includes questions from previous and other exams, which means you will be able to practice with simulations of past and future questions. Preparing with the Simulator will also give you an idea of the time it takes to complete each section of the practice test. It is important to note that the Simulator does not replace the classic Amazon AWS Certified Generative AI Developer - Professional study guides; however, it provides valuable insight into what to expect and how much work is needed to prepare for the exam.


The Amazon AWS Certified Generative AI Developer - Professional practice test is therefore an excellent tool to prepare for the actual exam, together with our other Amazon practice tests. Our Simulator will help you assess your level of preparation and understand your strengths and weaknesses. Below you can read about the quiz you will find in our Simulator and how our unique database is made up of real questions:

Info quiz:

  • Quiz name: Amazon AWS Certified Generative AI Developer - Professional
  • Total number of questions: 85
  • Number of questions for the test: 50
  • Pass score: 80%

You can prepare for the Amazon AWS Certified Generative AI Developer - Professional exam with our mobile app. It is very easy to use, works offline in case of a network failure, and includes all the functions you need to study and practice with our Simulator.

Our mobile app, available for both Android and iOS devices, includes the full Simulator. You can use it anywhere; the app is free and available on all stores.

The mobile app contains all the practice tests, which consist of 85 questions, and also provides study material for the final Amazon AWS Certified Generative AI Developer - Professional exam. Our database contains hundreds of questions and Amazon tests related to the Amazon AWS Certified Generative AI Developer - Professional exam, so you can practice anywhere you want, even offline without the internet.
