
Recently, I had the chance to work on an IoT project in which I was storing sensor data in a PostgreSQL database.

Intro

While improving the MQTT broker script (an MQTT broker is a central hub that acts as messaging middleware, facilitating communication between MQTT clients), I made several optimizations, including threading and asynchronous handling of received payloads, PostgreSQL connection pooling, and so on, so that the system can handle over 200 IoT devices that are continuously sending data.

Monitoring PostgreSQL Database

As I was looking for ways to monitor the performance of my Postgres database, I found the following:

  1. Query profiling: Profiling queries can help identify bottlenecks in the database. You can use tools like pgBadger, pg_stat_statements, or the built-in EXPLAIN command to analyze queries and spot potential performance issues (a sample EXPLAIN query is shown after this list).

  2. System monitoring: Monitoring the server’s resource usage can help identify issues with CPU, memory, or disk I/O that could be impacting database performance. Tools like top, htop, or your distribution’s system monitor can help you track resource usage. I guess this is the most common way to check the performance of any application. :wink:

  3. Database statistics: PostgreSQL exposes a number of statistics that can help monitor database performance, including the number of queries per second, buffer usage, and disk I/O. They can be explored through system views like pg_stat_activity and pg_stat_database, or with a GUI tool such as pgAdmin. For example, we can use the following:

    SELECT * FROM pg_stat_activity;
    
  4. Connection pooling statistics: Since I have been using a connection pool, I can monitor the pool’s statistics to track usage, idle connections, and connection errors. Many connection pool libraries provide built-in monitoring tools or interfaces for accessing pool statistics.

    SELECT * FROM pg_stat_activity WHERE application_name = 'app-name-that-uses-pool';
    

    or, to compare the number of backends per database with the server’s connection limit (max_connections is a server setting, not a pg_stat_database column):

    SELECT datname, numbackends FROM pg_stat_database;
    SHOW max_connections;
    
  5. Log analysis: Examining the database’s log files can provide insight into slow queries, errors, and other issues that may be impacting performance. A tool like pgBadger can parse the log files and highlight potential issues.

    To use pgBadger, we can install it and run it against the PostgreSQL log file. It generates an HTML report with various statistics and graphs (by default the report is written to out.html in the current directory; you can choose another location with the -o option), which you can then open in a web browser.

    $ pgbadger /path/to/postgresql.log
    

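For the query-profiling point above, the quickest built-in option is EXPLAIN. The sketch below assumes a hypothetical sensor_data table with device_id, reading, and recorded_at columns (placeholder names, not the actual project schema):

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT device_id, avg(reading)
    FROM sensor_data                                  -- hypothetical table
    WHERE recorded_at > now() - interval '1 hour'     -- hypothetical columns
    GROUP BY device_id;

Keep in mind that the ANALYZE option actually executes the query in order to report real timings, so wrap it in a transaction (and roll back) if you point it at a data-modifying statement.
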
View more stats

We can also run ad-hoc SQL queries to pull the statistics we care about. For example, the following lists the top ten queries by total execution time (on PostgreSQL 13 and later, the column is named total_exec_time instead of total_time):

    SELECT query, calls, total_time, rows FROM pg_stat_statements ORDER BY total_time DESC LIMIT 10;
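
Note that `pg_stat_statements` ships as an extension and records nothing until it is enabled; roughly, the setup looks like this (the exact postgresql.conf location depends on your installation):

    -- 1. In postgresql.conf, then restart the server:
    --      shared_preload_libraries = 'pg_stat_statements'
    -- 2. In the database you want to inspect:
    CREATE EXTENSION IF NOT EXISTS pg_stat_statements;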

We can also use a tool like pgAdmin to monitor database performance. pgAdmin is a database management tool that provides a graphical user interface for PostgreSQL and can run as a desktop application or in the browser. To use pgAdmin, we can follow these steps:

-   Install `pgAdmin` on your machine.
-   Launch `pgAdmin` and connect to your PostgreSQL database.
-   Use the various tabs and menus to view and manage database objects and performance metrics.

Concluding Remarks

Overall, these tools can help monitor the performance of a PostgreSQL database and identify areas where we may need to optimize our queries or configuration settings.

As DevOps/SRE/backend developers, we need to properly monitor the performance of our applications. In particular, if we rely on a database, monitoring its performance is essential.

That’s it for today! Cheers!!! 😎
