Let us imagine: you are working in a bank and somehow your leadership falls in love with Postgres. So they give you the task of migrating 19 000 Oracle-based applications to Postgres and other technologies (where applicable). This is exactly what we faced a bit over three years ago.
To solve this problem, we came up with an operational framework that helped us set the whole enterprise on a path towards a successful migration. Since this framework can easily be scaled up and down, it is now used by a few other customers as the main operational model for modernizing their data and application ecosystems.
In this presentation we will focus on people, processes, and culture, as well as some of the technology critical to the success of such complex and risky programs. We will share some challenges we have faced and solved, as well as ones still to be solved. We focus on the practical aspects of how to set up and structure a legacy-to-Postgres migration program, how to make people believe in it, and how to encourage them to innovate and reuse the results of those innovations at enterprise scale.
Get ready to embark on an epic journey from Oracle to PostgreSQL with my talk, “Oracle to Postgres: Happily Ever After Edition.” This engaging session dives deep into the post-go-live world as seen through the eyes of a Postgres DBA. You’ll discover practical strategies, indispensable tools, and real-world tips that ensure your migration is not just successful, but a sleep-friendly endeavour. Whether you’re a DBA, developer, or IT manager, this talk will equip you with the knowledge and confidence to transform your database landscape.
Modeling data with versioning enables data teams to track and reproduce the history of both fast- and slow-changing data over time. Even in the era of read-optimized cloud data warehouses and ELT (Extract-Load-Transform) workflows, there is still value in combining dimensional data modeling practices with functional approaches to design efficient and reproducible data models for analytics use cases that require accurate point-in-time reporting of historical dimensions. In this talk, I will explore how we can design history-preserving data models and reproducible data transformation workflows by adopting “change-aware” dimensional modeling practices in the modern data stack.
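As a rough sketch of what a “change-aware” (type 2) dimension can look like in plain SQL, independent of the specific tooling covered in the talk, each change to a row is kept as a new version with a validity interval so that facts can be joined to the dimension as it existed at any point in time. All table and column names below are hypothetical.

    -- Hypothetical type-2 dimension: every change creates a new row version
    -- with its own validity interval.
    CREATE TABLE dim_customer (
        customer_key bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY, -- surrogate key per version
        customer_id  text        NOT NULL,                            -- business key
        segment      text,
        valid_from   timestamptz NOT NULL,
        valid_to     timestamptz NOT NULL DEFAULT 'infinity'
    );

    -- Point-in-time join: pick the dimension version that was valid when
    -- the fact occurred (fact_orders is likewise hypothetical).
    SELECT f.order_id, d.segment
    FROM fact_orders f
    JOIN dim_customer d
      ON  d.customer_id = f.customer_id
      AND f.ordered_at >= d.valid_from
      AND f.ordered_at <  d.valid_to;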
Imagine harnessing the power of a supercomputer with the simplicity and affordability of Raspberry Pis. In my upcoming talk, I’ll showcase an innovative project in which a supercluster of 63 Raspberry Pi 3B+ boards is shown in live action, deploying a StackGres cluster and “making it work”.
Join us to explore:
The European Space Agency’s archives, which hold terabytes of space data, rely on open-source software for its flexibility and cost-effectiveness. However, we’ve encountered a significant hurdle: Greenplum, the distributed database we use, has shifted from open source to proprietary.
This change directly impacts our mission, which requires a scale-out database that supports specific PostgreSQL extensions like PgSphere and q3c. While open-source software offers numerous advantages, it also carries risks, including unexpected licensing changes that can disrupt vital operations.
In this presentation, we delve into the critical role of open-source software in our project, explore the challenges introduced by Greenplum’s new status, highlight the inherent risks of relying on open-source solutions, and outline our proactive strategy to transition to a new open-source database—ensuring our mission continues seamlessly.
When I was at Adjust, we replaced ElasticSearch with an in-house solution built on PostgreSQL in order to avoid the scalability limits in ElasticSearch, which we had hit at about 1 PB in size. This talk covers:
Let’s discuss the limitations and constraints of the current extension support system, various APIs and implicit contracts, and how we can work around some of the issues today.
We will dive into these topics:
In this talk we will explore the inner workings of the Postgres database from the perspective of the execution of a SQL query.
We will follow the same path as the SQL query, going through the parser, the rule system, the optimizer, the query plan, and finally the executor, which returns the result to the user through a cursor.
We will learn about the Query Tree, the different kinds of Joins and Scans that you can have and how they work internally, how the optimizer selects the right query plan, and how the executor runs that query plan.
This knowledge will help you better understand how the database works and why your queries are sometimes slow.
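To make these stages tangible, here is a small, hypothetical example (table name and data chosen purely for illustration) showing how each step of that path can be observed from a psql session:

    -- Hypothetical table for illustration.
    CREATE TABLE orders (id int PRIMARY KEY, customer text, total numeric);

    -- Dump the parse tree, the rewritten query tree, and the plan tree
    -- produced by the optimizer to the server log:
    SET debug_print_parse = on;
    SET debug_print_rewritten = on;
    SET debug_print_plan = on;

    -- EXPLAIN ANALYZE shows the plan the optimizer chose and also runs the
    -- executor, reporting actual row counts and timings per plan node:
    EXPLAIN ANALYZE
    SELECT customer, sum(total)
    FROM orders
    WHERE total > 100
    GROUP BY customer;

    -- The executor hands rows back through a portal; an explicit cursor
    -- makes that last step visible:
    BEGIN;
    DECLARE c CURSOR FOR SELECT * FROM orders ORDER BY id;
    FETCH 5 FROM c;
    COMMIT;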
A query plan needs to be computed to execute a query in PostgreSQL. The query itself only describes what the result should look like. To execute it, however, PostgreSQL needs to know which operators (e.g., filters, joins) and which input data (e.g., a table scan or an index scan) have to be used to produce the desired result. So the query plan contains the exact steps needed to execute the query.
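For example (with two hypothetical tables), EXPLAIN shows exactly which operators and scans the planner picked:

    CREATE TABLE customers (id int PRIMARY KEY, name text);
    CREATE TABLE invoices  (id int PRIMARY KEY,
                            customer_id int REFERENCES customers (id),
                            amount numeric);

    EXPLAIN
    SELECT c.name, i.amount
    FROM customers c
    JOIN invoices i ON i.customer_id = c.id
    WHERE i.amount > 500;
    -- One possible plan: a Hash Join fed by a Seq Scan on customers and a
    -- Seq Scan on invoices, with the amount filter applied directly at the
    -- invoices scan node.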
Whether a function used in a predicate can be pre-evaluated during query planning depends on its volatility classification. IMMUTABLE functions return the same result for the same input, so the result is pre-evaluated during query planning and the function invocation is replaced by a constant value in the query plan. For functions with any other volatility classification, Postgres does not apply this optimization.
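A small sketch of this behaviour; the function and table below are made up for illustration only:

    -- An IMMUTABLE function always returns the same result for the same
    -- arguments, so the planner may fold the call into a constant.
    CREATE FUNCTION shift_days(d date, n int) RETURNS date
        LANGUAGE sql IMMUTABLE
        AS $$ SELECT d + n $$;

    CREATE TABLE events (event_day date, payload text);

    EXPLAIN SELECT * FROM events
    WHERE event_day = shift_days(DATE '2024-01-01', 30);
    -- The filter shows the pre-evaluated constant:
    --   Filter: (event_day = '2024-01-31'::date)

    -- now() is STABLE, not IMMUTABLE, so it is left as a function call in
    -- the plan and evaluated at execution time instead:
    EXPLAIN SELECT * FROM events WHERE event_day = now()::date;
    --   Filter: (event_day = (now())::date)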
In this talk, we’ll present some techniques we implemented in TimescaleDB, where we hacked the planner to add and/or replace a function call with its constant result, giving the planner a chance to produce a better execution plan. This is useful because it can improve query performance on partitioned tables by allowing partitions to be pruned at planning time, significantly reducing execution time.
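The effect this aims for can be seen on any range-partitioned table (names below are illustrative): with a plan-time constant the planner prunes partitions immediately, while with a STABLE expression such as now() pruning has to be deferred to execution time:

    CREATE TABLE metrics (ts timestamptz NOT NULL, value float8)
        PARTITION BY RANGE (ts);
    CREATE TABLE metrics_2024_01 PARTITION OF metrics
        FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
    CREATE TABLE metrics_2024_02 PARTITION OF metrics
        FOR VALUES FROM ('2024-02-01') TO ('2024-03-01');

    -- Constant predicate: only metrics_2024_02 appears in the plan
    -- (plan-time pruning).
    EXPLAIN SELECT * FROM metrics
    WHERE ts >= '2024-02-10' AND ts < '2024-02-11';

    -- STABLE expression: all partitions stay in the plan and pruning
    -- happens at run time (EXPLAIN ANALYZE reports "Subplans Removed").
    EXPLAIN SELECT * FROM metrics
    WHERE ts >= now() - interval '1 day';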
As a developer, it is crucial to understand Isolation Levels and Multi-Version Concurrency Control (MVCC) to create scalable applications without compromising data integrity due to race conditions. Although the default isolation level in most databases is usually adequate, knowing and using it correctly in your application is essential.
This technical session explores the implementation of these concepts in popular SQL databases such as Oracle, MySQL, SQL Server, PostgreSQL, and YugabyteDB.
We will also examine why traditional SQL standard definitions may not be suitable for MVCC databases and why this knowledge is essential for developers.
Let’s navigate these subtleties and make informed decisions for our application design without getting too academic.
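As a minimal, hypothetical illustration of the kind of race condition referred to above, and of how a stricter isolation level surfaces it:

    -- Hypothetical table.
    CREATE TABLE accounts (id int PRIMARY KEY, balance numeric);
    INSERT INTO accounts VALUES (1, 100);

    -- Lost update under the default READ COMMITTED level: sessions A and B
    -- both read balance = 100, each subtracts 10 in application code, and
    -- each writes its own result back:
    --   A: UPDATE accounts SET balance = 90 WHERE id = 1;  COMMIT;
    --   B: UPDATE accounts SET balance = 90 WHERE id = 1;  COMMIT;
    -- The final balance is 90 instead of 80; one update is silently lost.

    -- Under REPEATABLE READ the conflict is detected instead:
    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    SELECT balance FROM accounts WHERE id = 1;   -- snapshot taken here
    -- If another session updates and commits this row in the meantime,
    -- the next statement fails with:
    --   ERROR: could not serialize access due to concurrent update
    UPDATE accounts SET balance = 90 WHERE id = 1;
    COMMIT;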
Linux and containers are similar in that they provide operating system resources such as memory and storage. These resources, along with CPU and networking, are key factors in achieving optimal performance. However, databases in general, and therefore PostgreSQL too, rely on high-performance disk IO to function properly. This is widely known.
In this session, we'll explore how memory allocation at the container or operating system level influences the performance of buffered disk IO. We’ll also examine Linux memory management details and the different ways to perform disk IO. From there, we’ll walk through a number of examples that reveal the typical dynamics of cached and non-cached IOs, as well as the differences between the two.
Understanding these dynamics is important for PostgreSQL, but really for any application that depends on disk IO and has performance requirements.
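From inside PostgreSQL, which performs buffered IO through the OS page cache, one simple way to observe these dynamics is to compare buffer counts and IO timings for a cold and a warm run; the table name below is hypothetical:

    -- Report how long read/write calls actually took.
    SET track_io_timing = on;

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*) FROM big_table;
    -- "shared hit" blocks were served from PostgreSQL's own buffer cache.
    -- "read" blocks were requested from the kernel, which serves them
    -- either from the OS page cache (fast) or from the device (slow).
    -- On a cold run (data neither in shared_buffers nor in the page cache)
    -- the "I/O Timings" line is much higher than on a warm, cached run.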
Running databases in containers has been the biggest anti-pattern of the last decade. The world, however, moves on: stateful container workloads are becoming more common, and so are databases in Kubernetes. People love the additional convenience when it comes to deployment, scalability, and operation.
With PostgreSQL on its way to becoming the world’s most beloved database, there are certainly quite a few things to keep in mind when running it on k8s. Let us evaluate the important Dos and, especially, the Don’ts.
Overview of agenda:
PGA is a fully open source Postgres distribution built around three key concepts: