AI Insights Chatbot on AWS using Bedrock, Fargate, and Aurora

Presspage implemented an AI-powered insights chatbot on AWS to provide customers with contextual analytics from newsroom and engagement data. Using Amazon Bedrock, AWS Fargate, Amazon S3, and Aurora PostgreSQL with pgvector, the platform delivers secure, tenant-isolated semantic search with minimal operational overhead.

Problem statement

Presspage provides a communications platform used by organizations worldwide to publish press releases, manage media relationships, and analyze engagement metrics. As historical content and performance data grew, customers increasingly relied on manual analysis to extract insights, identify trends, and benchmark performance.

This approach was time-consuming, inconsistent, and dependent on individual expertise, limiting the ability of customers to quickly derive strategic value from their data.

To support its product innovation roadmap, Presspage aimed to introduce an AI-powered insights capability that could automatically surface contextual insights from existing datasets. The solution needed to meet several key requirements: strict tenant isolation between customers, compliance with EU data residency and privacy regulations, scalability for growing datasets, and minimal operational overhead through the use of managed AWS services.

Proposed solution & architecture

CloudNation designed and implemented an AI-powered Insights Platform fully deployed within Presspage’s AWS environment.

The architecture follows a medallion data model consisting of Bronze, Silver, and Gold data layers to progressively refine data while enforcing privacy and tenant isolation.

Raw daily exports from the Presspage platform are ingested into Amazon S3 (Bronze layer). A containerized data processing pipeline running on AWS Fargate transforms and curates the data into privacy-safe, semantically structured records grouped by AgencyId, ClientId, and ReleaseId (Silver layer).
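The Bronze-to-Silver step can be sketched as a small transformation: group raw export records by the three tenant identifiers and keep only privacy-safe fields. This is a minimal illustration in Python; the field names beyond AgencyId, ClientId, and ReleaseId are hypothetical, as the actual Presspage export schema is not described here.

```python
from collections import defaultdict

# Hypothetical allow-list of privacy-safe fields; the real Silver-layer
# schema used by Presspage is not public.
PRIVACY_SAFE_FIELDS = {"AgencyId", "ClientId", "ReleaseId",
                       "title", "views", "published_at"}

def curate(raw_records):
    """Group raw Bronze-layer records by tenant identifiers and drop
    any field not on the privacy-safe allow-list (Silver layer)."""
    silver = defaultdict(list)
    for rec in raw_records:
        key = (rec["AgencyId"], rec["ClientId"], rec["ReleaseId"])
        silver[key].append(
            {k: v for k, v in rec.items() if k in PRIVACY_SAFE_FIELDS}
        )
    return dict(silver)
```

An allow-list (rather than a deny-list) is the safer default for privacy-by-design processing: any new field added upstream is excluded until it is explicitly reviewed.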

Only curated datasets are promoted for vectorization in the Gold layer, where embeddings are generated using Amazon Bedrock foundation models. The embeddings are stored in Amazon Aurora PostgreSQL with pgvector, enabling efficient vector similarity search.
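As a rough sketch of the Gold-layer step, the snippet below shows how an embedding request to Amazon Bedrock could be built and how the resulting vector might be inserted into a pgvector column. The model ID, table, and column names are assumptions for illustration, not Presspage's actual configuration.

```python
import json

# Assumed model; Presspage's actual Bedrock foundation model is not named.
MODEL_ID = "amazon.titan-embed-text-v2:0"

def embed_request(text: str) -> str:
    """Serialize the InvokeModel request body for one text chunk."""
    return json.dumps({"inputText": text})

def embed(bedrock_client, text: str) -> list:
    """Generate an embedding via the bedrock-runtime InvokeModel API.
    `bedrock_client` would be boto3.client("bedrock-runtime")."""
    resp = bedrock_client.invoke_model(modelId=MODEL_ID,
                                       body=embed_request(text))
    return json.loads(resp["body"].read())["embedding"]

# Illustrative pgvector insert; tenant identifiers are stored alongside
# the embedding so every later query can filter on them.
INSERT_SQL = """
INSERT INTO gold_embeddings (agency_id, client_id, release_id, chunk, embedding)
VALUES (%s, %s, %s, %s, %s::vector)
"""
```

Storing AgencyId and ClientId in the same row as the vector is what makes the metadata filtering described below possible at query time.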

A chatbot interface performs metadata-filtered semantic search across the vector database and uses Amazon Bedrock to generate contextual responses. All queries are automatically filtered by customer identifiers, ensuring strict tenant isolation and preventing cross-customer data exposure.
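The tenant-isolation mechanism can be illustrated with a metadata-filtered pgvector similarity query. This is a sketch with assumed table and column names; the key point is that the tenant filters are bound server-side from the authenticated session, never taken from user-supplied chat text.

```python
# Illustrative pgvector query: cosine distance (<=>) ordered search,
# constrained to one tenant's rows by the WHERE clause.
SEARCH_SQL = """
SELECT release_id, chunk, embedding <=> %(query_vec)s::vector AS distance
FROM gold_embeddings
WHERE agency_id = %(agency_id)s AND client_id = %(client_id)s
ORDER BY distance
LIMIT %(k)s
"""

def search_params(query_vec, agency_id, client_id, k=5):
    """Build bind parameters for SEARCH_SQL. agency_id and client_id
    must come from the authenticated session, enforcing isolation."""
    return {"query_vec": query_vec, "agency_id": agency_id,
            "client_id": client_id, "k": k}
```

The top-k chunks returned by this query are then passed to a Bedrock foundation model as context for the generated answer.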

The entire platform is provisioned using Terraform, monitored through Amazon CloudWatch, and operated under a managed platform model to minimize operational complexity.

Outcomes & success metrics

As a result of the implementation, Presspage achieved:

  • A production-ready AI insights platform capable of delivering contextual answers from historical press and engagement data

  • Strict tenant isolation enforced through metadata-based filtering and privacy-by-design data processing

  • EU-resident data storage and processing aligned with GDPR and internal governance requirements

  • Reduced manual analysis effort for customers, enabling faster insight generation and data-driven decision-making

  • A scalable architecture capable of supporting growing datasets without platform redesign

  • Reduced operational overhead through the use of managed AWS services including Bedrock, Fargate, and Aurora

  • Full observability across ingestion, processing, and retrieval pipelines using centralized monitoring and logging

TCO analysis

A cost and architecture analysis compared a traditional self-managed analytics and AI stack with an AWS-native design built on managed services.

The selected design minimizes operational overhead and infrastructure management by leveraging services such as AWS Fargate, Amazon Bedrock, and Amazon Aurora. Incremental data ingestion and scalable serverless processing further optimize compute utilization and cost efficiency.

Operational spending is monitored through AWS cost management tools and standardized tagging, enabling transparent cost tracking and governance.

Lessons learned

Building AI platforms using AWS-native managed services significantly accelerates development and reduces operational complexity. However, the success of AI-driven insights depends heavily on data quality, schema consistency, and effective governance.

Implementing strict tenant isolation and privacy controls early in the architecture simplified compliance and increased customer trust. Infrastructure-as-code deployments using Terraform enabled repeatable and scalable platform management.

Ongoing optimization of ingestion pipelines, data chunking, and retrieval strategies proved necessary to maintain high-quality AI-generated insights as datasets continued to grow.

AI-powered analytics

Explore how AI can unlock insights from your data

Book an AI insights assessment
