pg-el
This Emacs Lisp library lets you access the PostgreSQL object-relational DBMS from Emacs, using its network-level frontend/backend protocol. The library is capable of automatic type coercions from a range of SQL types to and from the equivalent Emacs Lisp type.
This library will be useful for developers, rather than end users. If you're looking for an Emacs-based browser/editor for PostgreSQL, you may be interested in PGmacs, which uses this library to communicate with PostgreSQL or a compatible database.
The library includes support for:

- SCRAM-SHA-256 authentication (the default method since PostgreSQL version 14), as well as MD5 and password authentication.
- Encrypted (TLS) connections between Emacs and the PostgreSQL backend, including support for client certificates.
- Prepared statements using PostgreSQL's extended query message flow, which allows for parameterized queries to protect against SQL injection issues.
- The PostgreSQL COPY protocol, to copy preformatted data to PostgreSQL from an Emacs buffer.
- Asynchronous handling of LISTEN/NOTIFY notification messages from PostgreSQL, allowing the implementation of publish-subscribe type architectures (PostgreSQL as an "event broker" or "message bus" and Emacs as event publisher and consumer).
- Parsing various PostgreSQL types, including integers, floats, array types, numerical ranges, JSON and JSONB objects, into their native Emacs Lisp equivalents. The parsing support is user-extensible. Support for the HSTORE, pgvector and PostGIS extensions.
- Connections over TCP or (on Unix machines) a local Unix socket.
The code has been tested with PostgreSQL versions 17beta2, 16.3, 15.4, 13.8, 11.17, and 10.22 on Linux. It is also tested via GitHub Actions on macOS and Microsoft Windows. This library also works, to a varying extent, against other databases that implement the PostgreSQL wire protocol:
- YugabyteDB: tested against version 2.21. This database uses a lot of code from PostgreSQL 11 and is quite compatible, including with the HSTORE and pgvector extensions. However, some system tables differ from PostgreSQL, such as the pg_sequences table. It does not support the XML type. It does not support LISTEN/NOTIFY.
- CrateDB: tested with version 5.7.2. There are limitations in this database's emulation of the PostgreSQL system tables: for example, it's not possible to query the owner of a table (function pg-table-owner). It doesn't accept SQL statements that only include an SQL comment. It doesn't support setting comments on SQL tables. As documented, CrateDB does not support the TIME type without a time zone. It doesn't support casting integers to bits. It doesn't support the VARBIT type. It has no support for the COPY protocol.
- CockroachDB: tested with CockroachDB CCL v24.1. Note that this database does not implement the large object functionality, and its interpretation of SQL occasionally differs from that of PostgreSQL. It is currently reporting an internal error when we call pg-table-comment.
- ParadeDB: this ElasticSearch alternative is very PostgreSQL-compatible (more of an extension than a reimplementation). Tested with the Dockerhub instance, which is based on PostgreSQL 16.3. All tests pass.
- QuestDB: tested against version 6.5.4. This is not very PostgreSQL-compatible: it fails on the SQL query SELECT 1::integer because it doesn't recognize integer as a type. It doesn't support DELETE statements.
- ClickHouse doesn't work with pg-el. Their version 24.5 has a very basic implementation of the PostgreSQL wire protocol. It doesn't support the pg_type system table, which provides information on the OIDs associated with different PostgreSQL types. All values are returned in textual format using the pseudo-OID of 0, which means the client must parse the value. The database immediately closes the connection on any SQL error. It doesn't support configuration statements such as SET datestyle. It doesn't specify a server_name in the startup sequence, which might allow us to detect this special case and restrict functionality to the most basic aspects.
Tested with Emacs versions 30-pre-release, 29.4, 28.2, 27.2 and 26.3. Emacs versions older than 26.1 will not work against a recent PostgreSQL version (whose default configuration requires SCRAM-SHA-256 authentication), because they don't include the GnuTLS support which we use to calculate HMACs. They may however work against a database set up to allow unauthenticated local connections. Emacs versions before 28.1 will not support the extended query protocol, because the bindat package is required. We mostly test with Emacs on Linux, but the library also works fine on Microsoft Windows and macOS.
You may be interested in an alternative library emacs-libpq that enables access to PostgreSQL from Emacs by binding to the libpq library.
Licence
pg-el is free software distributed under the terms of the GNU GPL v3 or later.
Installation
You can install via the MELPA package archive, or with package-vc-install, or with use-package.
Installing via MELPA
Install via the MELPA package archive by including the following in your Emacs initialization file (.emacs.el or init.el):
(require 'package)
(add-to-list 'package-archives '("melpa" . "https://melpa.org/packages/") t)
then saying
M-x package-install RET pg
Installing with package-vc-install
With Emacs 29, you can install the library from the latest GitHub revision (this requires git to be installed) using:
(unless (package-installed-p 'pg)
(package-vc-install "https://github.com/emarsden/pg-el" nil nil 'pg))
You can later update to the latest version with M-x package-vc-upgrade RET pg RET.
Installing with use-package
If you prefer to use the use-package macro, which is built in to Emacs 29, you can use (requires git to be installed):
(use-package pg :vc (:url "https://github.com/emarsden/pg-el"))
Quickstart to using pg-el
These illustrative examples assume you have a PostgreSQL user pgeltestuser with password pgeltest who owns a database pgeltestdb. To set that up with a local PostgreSQL database, use commands similar to the following:
sudo -u postgres createuser --createdb pgeltestuser
sudo -u postgres createdb --owner=pgeltestuser pgeltestdb
sudo -u postgres psql
postgres=# alter user pgeltestuser with encrypted password 'pgeltest';
If you want to enable and test the support for the HSTORE and pgvector extensions, you will need to load them into the test database as PostgreSQL superuser (the normal user pgeltestuser we created above is not allowed to load extensions). The pgvector extension generally needs to be installed separately from PostgreSQL (for example by installing the postgresql-16-pgvector package on Debian).
sudo -u postgres psql
postgres=# CREATE EXTENSION hstore;
CREATE EXTENSION
postgres=# CREATE EXTENSION vector;
CREATE EXTENSION
Now, from your preferred Emacs Lisp shell (here M-x ielm), check that you are able to connect to and authenticate with the database from Emacs:
ELISP> (require 'pg)
pg
ELISP> (defvar *pg* (pg-connect "pgeltestdb" "pgeltestuser" "pgeltest" "localhost" 5432))
*pg*
If you don't already have PostgreSQL installed locally, it may be convenient for you to use the PostgreSQL Docker Community images, using Docker or Podman. I recommend installing Podman because it's fully free software, whereas Docker is partly commercial. Podman is also able to run containers "rootless", without special privileges, which is good for security, and doesn't require a background daemon. Podman has a docker-compatible commandline interface.
podman run -d --name pgsql \
-v /dev/log:/dev/log \
-v /var/run/postgresql:/var/run/postgresql \
--publish 5432:5432 \
-e POSTGRES_DB=pgeltestdb \
-e POSTGRES_USER=pgeltestuser \
-e POSTGRES_PASSWORD=pgeltest \
docker.io/library/postgres:latest
then connect from Emacs with
ELISP> (pg-connect "pgeltestdb" "pgeltestuser" "pgeltest" "localhost" 5432)
or connect over a local Unix socket
ELISP> (pg-connect-local "/var/run/postgresql/.s.PGSQL.5432" "pgeltestdb" "pgeltestuser" "pgeltest")
Now some simple interactions with the database:
ELISP> (pg-backend-version *pg*)
"PostgreSQL 16.1 (Debian 16.1-1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 13.2.0-6) 13.2.0, 64-bit"
ELISP> (let ((res (pg-exec *pg* "SELECT 42, 1.23")))
(pg-result res :tuple 0))
(42 1.23)
Note that the first query has returned an Emacs Lisp string, and the second query has returned a tuple (represented as a list) where the first value is an Emacs Lisp integer and the second an Emacs Lisp float. The pg-el library has ensured automatic type coercion from the SQL types to the most appropriate Emacs Lisp type.
The following example shows the output from a query that returns multiple rows. It returns a list of tuples, each tuple containing a single integer.
ELISP> (let ((res (pg-exec *pg* "SELECT * FROM generate_series(50,55)")))
(pg-result res :tuples))
((50)
(51)
(52)
(53)
(54)
(55))
An SQL query that returns no results will return the empty list.
ELISP> (let ((res (pg-exec *pg* "SELECT 3 where 1=0")))
(pg-result res :tuples))
nil
For more, see the usage information and the API documentation.
Using pg-el
The examples below illustrate various features of pg-el in conjunction with PostgreSQL. A more complete set of examples can be found in our test suite.
The examples all assume that you are using ielm as an Emacs Lisp shell (start with M-x ielm) and that you have a connection to PostgreSQL:
ELISP> (require 'cl-lib)
cl-lib
ELISP> (require 'pg)
pg
ELISP> (defvar *pg* (pg-connect "pgeltestdb" "pgeltestuser" "pgeltest" "localhost" 5432))
*pg*
The library should in principle convert correctly between Emacs's internal string encoding and the UTF-8 encoding used to communicate with PostgreSQL, including for non-ASCII and emoji characters.
ELISP> (let ((res (pg-exec *pg* "SELECT 'était ' || 'là'")))
(pg-result res :tuple 0))
("était là")
ELISP> (let ((res (pg-exec *pg* "select length('(╯°□°)╯︵ ┻━┻')")))
(pg-result res :tuple 0))
(12)
ELISP> (let ((res (pg-exec *pg* "SELECT '🐘'")))
(pg-result res :tuple 0))
("🐘")
You can create and delete tables, and query the database metainformation using pg-tables and pg-columns.
ELISP> (pg-result (pg-exec *pg* "CREATE TABLE messages(id BIGSERIAL PRIMARY KEY, msg TEXT)") :status)
"CREATE TABLE"
ELISP> (member "messages" (pg-tables *pg*))
("messages")
ELISP> (member "msg" (pg-columns *pg* "messages"))
("msg")
ELISP> (pg-result (pg-exec *pg* "DROP TABLE messages") :status)
"DROP TABLE"
ELISP> (member "messages" (pg-tables *pg*))
nil
The library has support for PostgreSQL's extended query protocol (prepared statements), which you should use to prevent SQL injection attacks; this is described in the section on prepared statements below. The following examples show how SQL casts are coerced to the corresponding Emacs Lisp types:
ELISP> (let ((res (pg-exec *pg* "SELECT pi()::int4")))
(car (pg-result res :tuple 0)))
3
ELISP> (let ((res (pg-exec *pg* "SELECT 42::text")))
(car (pg-result res :tuple 0)))
"42"
ELISP> (let ((res (pg-exec *pg* "SELECT '42'::smallint")))
(car (pg-result res :tuple 0)))
42 (#o52, #x2a, ?*)
ELISP> (let ((res (pg-exec *pg* "SELECT 'PT3H4M42S'::interval")))
(car (pg-result res :tuple 0)))
"03:04:42"
Boolean vectors are only supported in Emacs from version 27 onwards (you can check whether the function make-bool-vector is fboundp).
ELISP> (let ((res (pg-exec *pg* "SELECT '1010'::bit(4)")))
(equal (car (pg-result res :tuple 0))
(cl-coerce (vector t nil t nil) 'bool-vector)))
t
ELISP> (let ((res (pg-exec *pg* "SELECT b'1001000'")))
(equal (car (pg-result res :tuple 0))
(cl-coerce (vector t nil nil t nil nil nil) 'bool-vector)))
t
ELISP> (let ((res (pg-exec *pg* "SELECT '101111'::varbit(6)")))
(equal (car (pg-result res :tuple 0))
(cl-coerce (vector t nil t t t t) 'bool-vector)))
t
Emacs has support for bignums from version 27.1 onwards.
ELISP> (fboundp 'bignump)
t
ELISP> (let ((res (pg-exec *pg* "SELECT factorial(25)")))
(car (pg-result res :tuple 0)))
15511210043330985984000000 (#o6324500606375411017360000000, #xcd4a0619fb0907bc00000)
ELISP> (let ((res (pg-exec *pg* "SELECT 'Infinity'::float4")))
(car (pg-result res :tuple 0)))
1.0e+INF
ELISP> (let ((res (pg-exec *pg* "SELECT '-Infinity'::float8")))
(car (pg-result res :tuple 0)))
-1.0e+INF
ELISP> (let ((res (pg-exec *pg* "SELECT 'NaN'::float8")))
(car (pg-result res :tuple 0)))
0.0e+NaN
ELISP> (let ((res (pg-exec *pg* "SELECT int4range(10, 20)")))
(car (pg-result res :tuple 0)))
(:range 91 10 41 20)
;; note that 91 is the character ?\[ and 41 is the character ?\)
ELISP> (let ((res (pg-exec *pg* "SELECT int4range(10, 20)")))
(equal (car (pg-result res :tuple 0))
(list :range ?\[ 10 ?\) 20)))
t
ELISP> (let ((res (pg-exec *pg* "SELECT int4range(5,15) + int4range(10,20)")))
(equal (car (pg-result res :tuple 0))
(list :range ?\[ 5 ?\) 20)))
t
ELISP> (let ((res (pg-exec *pg* "SELECT int8range(5,15) * int8range(10,20)")))
(equal (car (pg-result res :tuple 0))
(list :range ?\[ 10 ?\) 15)))
t
ELISP> (let ((res (pg-exec *pg* "SELECT '(3,7)'::int4range")))
(equal (car (pg-result res :tuple 0))
(list :range ?\[ 4 ?\) 7)))
t
ELISP> (let ((res (pg-exec *pg* "SELECT int8range(1, 14, '(]')")))
(equal (car (pg-result res :tuple 0))
(list :range ?\[ 2 ?\) 15)))
t
ELISP> (let ((res (pg-exec *pg* "SELECT '[4,4)'::int4range")))
(equal (car (pg-result res :tuple 0))
(list :range)))
t
ELISP> (let ((res (pg-exec *pg* "SELECT numrange(33.33, 66.66)")))
(car (pg-result res :tuple 0)))
(:range 91 33.33 41 66.66)
ELISP> (let ((res (pg-exec *pg* "SELECT upper(numrange(-50.0, -40.0))")))
(car (pg-result res :tuple 0)))
-40.0
ELISP> (let ((res (pg-exec *pg* "SELECT numrange(NULL, 2.2)")))
(car (pg-result res :tuple 0)))
(:range 40 nil 41 2.2)
ELISP> (let ((res (pg-exec *pg* "SELECT numrange(NULL, NULL)")))
(car (pg-result res :tuple 0)))
(:range 40 nil 41 nil)
Working with binary data
The BYTEA type allows the storage of binary strings, i.e. sequences of octets. They can contain NUL octets (the value zero).
ELISP> (let ((res (pg-exec *pg* "SELECT '\\xDEADBEEF'::bytea")))
(equal (car (pg-result res :tuple 0))
(decode-hex-string "DEADBEEF")))
t
ELISP> (let ((res (pg-exec *pg* "SELECT '\\001\\003\\005'::bytea")))
(equal (car (pg-result res :tuple 0))
(string 1 3 5)))
t
ELISP> (let ((res (pg-exec *pg* "SELECT '\\x123456'::bytea || '\\x789a00bcde'::bytea")))
(equal (car (pg-result res :tuple 0))
(decode-hex-string "123456789a00bcde")))
t
ELISP> (let ((res (pg-exec *pg* "SELECT 'warning\\000'::bytea")))
(equal (length (car (pg-result res :tuple 0))) 8))
t
When sending binary data to PostgreSQL, either encode all potentially problematic octets, as we did for NUL above, or send base64-encoded content and decode it in PostgreSQL. There are various other useful functions for working with binary data on PostgreSQL, such as hash functions.
ELISP> (pg-result (pg-exec *pg* "CREATE TABLE bt(blob BYTEA, tag int)") :status)
"CREATE TABLE"
ELISP> (let* ((size 512)
(random-octets (make-string size 0)))
(dotimes (i size)
(setf (aref random-octets i) (random 256)))
(setf (aref random-octets 0) 0)
(pg-exec-prepared *pg*
"INSERT INTO bt VALUES (decode($1, 'base64'), 42)"
`((,(base64-encode-string random-octets) . "text")))
(equal random-octets (car (pg-result (pg-exec *pg* "SELECT blob FROM bt WHERE tag=42") :tuple 0))))
t
ELISP> (let* ((res (pg-exec *pg* "SELECT sha256('foobles'::bytea)"))
(hx (encode-hex-string (car (pg-result res :tuple 0)))))
(equal hx (secure-hash 'sha256 "foobles")))
t
ELISP> (let* ((res (pg-exec *pg* "SELECT md5('foobles')"))
(r (car (pg-result res :tuple 0))))
(equal r (md5 "foobles")))
t
ELISP> (let* ((res (pg-exec *pg* "SELECT encode('foobles', 'base64')"))
(r (car (pg-result res :tuple 0))))
(equal r (base64-encode-string "foobles")))
t
PostgreSQL arrays
(To be documented)
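Pending fuller documentation here, a brief sketch (not taken from the reference documentation): the parsing support listed above includes array types, which we expect to be deserialized into Emacs Lisp vectors.

;; A minimal sketch: array values should come back as Emacs Lisp vectors.
(let ((res (pg-exec *pg* "SELECT ARRAY[1,2,3], '{\"foo\",\"bar\"}'::text[]")))
  (pg-result res :tuple 0))
;; expected: something like ([1 2 3] ["foo" "bar"])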
Using prepared statements
pg-el has support for PostgreSQL's extended query protocol (prepared statements), which you should use to prevent SQL injection attacks.
ELISP> (pg-result (pg-exec *pg* "CREATE TABLE count_test(key INTEGER, val INTEGER)") :status)
"CREATE TABLE"
ELISP> (dotimes (i 100)
(pg-exec-prepared *pg* "INSERT INTO count_test VALUES($1, $2)"
`((,i . "int4") (,(* i i) . "int4"))))
nil
ELISP> (let ((res (pg-exec *pg* "SELECT count(*) FROM count_test")))
(car (pg-result res :tuple 0)))
100
ELISP> (defvar *multires* (pg-exec-prepared *pg* "SELECT key FROM count_test" nil :max-rows 10))
*multires*
ELISP> (pg-result *multires* :tuples)
((0)
(1)
(2)
(3)
(4)
(5)
(6)
(7)
(8)
(9))
ELISP> (pg-result *multires* :incomplete)
t
ELISP> (setq *multires* (pg-fetch *pg* *multires* :max-rows 5))
;; *multires*
ELISP> (pg-result *multires* :tuples)
((10)
(11)
(12)
(13)
(14))
ELISP> (pg-result *multires* :incomplete)
t
ELISP> (setq *multires* (pg-fetch *pg* *multires* :max-rows 100))
;; *multires*
ELISP> (length (pg-result *multires* :tuples))
85
ELISP> (pg-result *multires* :incomplete)
nil
If your application will use the same prepared statement multiple times, you can ask PostgreSQL to parse/analyze the SQL query and bind parameters once, then use the prepared statement with different variable values multiple times. This will improve performance by avoiding the overhead of reparsing and reoptimizing a query plan multiple times.
The example function below (which comes from the PGmacs browsing/editing interface for Emacs) illustrates the use of the utility function pg-ensure-prepared-statement, which either retrieves the cached prepared statement if the function has already been called (pg-el maintains a per-connection cache of prepared statements), or prepares the statement given the SQL and the argument types if the function has not yet been called in this PostgreSQL connection. The prepared statement is executed using pg-fetch-prepared, which functions in a similar way to function pg-fetch.
(defun pgmacs--table-primary-keys (con table)
"Return the columns active as PRIMARY KEY in TABLE.
Uses PostgreSQL connection CON."
(let* ((schema (if (pg-qualified-name-p table)
(pg-qualified-name-schema table)
"public"))
(tname (if (pg-qualified-name-p table)
(pg-qualified-name-name table)
table))
(sql "SELECT a.attname
FROM pg_catalog.pg_index idx
JOIN pg_catalog.pg_class c ON c.oid = idx.indrelid
JOIN pg_catalog.pg_attribute a ON a.attrelid = c.oid AND a.attnum = ANY(idx.indkey)
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE relname = $1 AND nspname = $2 AND indisprimary")
(argument-types (list "text" "text"))
(params `((,tname . "text") (,schema . "text")))
(ps-name (pg-ensure-prepared-statement con "QRY-tbl-primary-keys" sql argument-types))
(res (pg-fetch-prepared con ps-name params)))
(mapcar #'cl-first (pg-result res :tuples))))
JSON and JSONB values
PostgreSQL has quite a lot of support for storing and processing JSON and JSONB data. pg-el is able to deserialize JSON and JSONB values into Emacs Lisp structures such as hashtables (for dicts), arrays, numbers, strings and so on.
This library will parse and represent JSON/JSONB data either using the JSON support built into Emacs with libjansson (see function json-available-p, from version 28.1), or using the json.el library. There are some differences in the ways these methods handle dictionaries and specific values such as NULL, false, [] and {}. Our examples below use the builtin JSON support in Emacs.
ELISP> (defun scalar (sql) (car (pg-result (pg-exec *pg* sql) :tuple 0)))
scalar
ELISP> (let ((json (scalar "SELECT '[5,7]'::json")))
(aref json 0))
5 (#o5, #x5, ?\C-e)
ELISP> (let ((json (scalar "SELECT '[42.0,77.7]'::jsonb")))
(aref json 1))
77.7
ELISP> (scalar "SELECT '[]'::json")
[]
ELISP> (scalar "SELECT '{}'::json")
#<hash-table equal 0/1 0x1586e6cc2813>
ELISP> (let ((json (scalar "SELECT '{\"a\": 42, \"b\": \"foo\"}'::json")))
(gethash "b" json))
"foo"
ELISP> (let ((json (scalar "SELECT '{\"a\": [0,1,2,null]}'::json")))
(gethash "a" json))
[0 1 2 :null]
pg-el can also serialize Emacs Lisp structures into the PostgreSQL JSON format, for use in prepared statements.
ELISP> (let ((ht (make-hash-table)))
(puthash "biz" 45 ht)
(puthash "boz" -5.5 ht)
(puthash "comment" "good stuff" ht)
(pg-result (pg-exec-prepared *pg* "SELECT $1->'boz'" `((,ht . "json"))) :tuple 0))
(-5.5)
ELISP> (let ((ht (make-hash-table)))
(puthash "biz" 45 ht)
(puthash "boz" -5.5 ht)
(puthash "comment" "good stuff" ht)
;; the '-' jsonb operator deletes a matching key/value mapping
(let* ((res (pg-exec-prepared *pg* "SELECT $1 - 'boz'" `((,ht . "jsonb"))))
(row (pg-result res :tuple 0)))
(gethash "comment" (cl-first row) )))
"good stuff"
Support for the JSON path language (jsonpath type)
pg-el serializes and deserializes JSONPATH expressions as strings, as illustrated below. You can use them as arguments to prepared statements.
ELISP> (pg-result (pg-exec *pg* "SELECT 'true'::jsonpath") :tuple 0)
(list "true")
ELISP> (pg-result (pg-exec *pg* "SELECT '$[*] ? (@ < 1 || @ > 5)'::jsonpath") :tuple 0)
(list "$[*]?(@ < 1 || @ > 5)")
ELISP> (let* ((sql "SELECT jsonb_path_query($1, $2)")
(dict (make-hash-table :test #'equal))
(_ (puthash "h" 5.6 dict))
(params `((,dict . "jsonb") ("$.h.floor()" . "jsonpath")))
(res (pg-exec-prepared *pg* sql params))
(row (pg-result res :tuple 0)))
(cl-first row))
5
The HSTORE key-value type
There is support for the PostgreSQL HSTORE extension, which can store key/value pairs in a single PostgreSQL column. It's necessary to call pg-hstore-setup before using this functionality, to load the extension if necessary and to set up our parser support for the HSTORE type.
ELISP> (pg-hstore-setup *pg*)
ELISP> (defvar *hs* (car (pg-result (pg-exec *pg* "SELECT 'foo=>bar'::hstore") :tuple 0)))
*hs*
ELISP> (gethash "foo" *hs*)
"bar"
ELISP> (hash-table-count *hs*)
1 (#o1, #x1, ?\C-a)
;; There is no guarantee as to the value stored for the 'a' key (duplicate)
ELISP> (setq *hs* (car (pg-result (pg-exec *pg* "SELECT 'a=>1,foobles=>2,a=>66'::hstore") :tuple 0)))
#<hash-table equal 2/65 0x1574257479d9>
ELISP> (hash-table-count *hs*)
2 (#o2, #x2, ?\C-b)
ELISP> (pg-result (pg-exec *pg* "SELECT akeys('biz=>NULL,baz=>42,boz=>66'::hstore)") :tuple 0)
(["baz" "biz" "boz"])
ELISP> (pg-hstore-setup *pg*)
ELISP> (let ((ht (make-hash-table :test #'equal)))
(puthash "biz" "baz" ht)
(puthash "foo" "bar" ht)
(puthash "more" "than" ht)
(let* ((res (pg-exec-prepared *pg* "SELECT $1 ? 'foo'" `((,ht . "hstore")))))
(pg-result res :tuple 0)))
(t)
Data types used by the PostGIS extension
There is deserialization support (and trivial serialization support) for the data types used by the PostGIS extension. It's necessary to require the pg-gis library and to call pg-setup-postgis before using this functionality, to load the extension if necessary and to set up our deserialization support for the types used by PostGIS (in particular, the geometry and geography types).
(require 'pg-gis)
(pg-setup-postgis *pg*)
PostGIS sends values over the wire in HEXEWKB format (Extended Well-Known Binary encoded in hexadecimal), such as 01010000200400000000000000000000000000000000000000, which represents the well-known text (WKT) POINT (0 0).
If the variable pg-gis-use-geosop is non-nil, we parse this format using the geosop commandline utility from GEOS (often available in packages named geos-bin or similar). Otherwise, we leave it as a string (it can be parsed using PostGIS functions such as ST_AsText).
sudo apt install geos-bin
ELISP> (require 'pg-gis)
ELISP> (pg-setup-postgis *pg*)
ELISP> (pg-result (pg-exec *pg* "SELECT 'POINT(4 5)'::geometry") :tuple 0)
("POINT (4 5)")
ELISP> (pg-result (pg-exec *pg* "SELECT Box2D(ST_GeomFromText('LINESTRING(1 2, 3 4, 5 6)'))") :tuple 0)
("BOX(1 2,5 6)")
ELISP> (pg-result (pg-exec *pg* "SELECT 'MULTILINESTRING((-118.584 38.374 20,-118.583 38.5 30),(-71.05957 42.3589 75, -71.061 43 90))'::geometry") :tuple 0)
("MULTILINESTRING Z ((-118.584 38.374 20, -118.583 38.5 30), (-71.05957 42.3589 75, -71.061 43 90))")
ELISP> (pg-result (pg-exec *pg* "SELECT 'SPHEROID[\"GRS_1980\",6378137,298.2572]'::spheroid") :tuple 0)
("SPHEROID(\"GRS_1980\",6378137,298.2572)")
Collation
Case support in PostgreSQL (the lower() and upper() functions) depends on the current collation rules. A table column has a collation which is specified at creation time (with a default). To remove the dependency on the column's collation, you can specify the desired collation explicitly in the query.
Note that PostgreSQL can be compiled with or without support for libicu, as a complement to the collation support in your libc.
ELISP> (let ((res (pg-exec *pg* "SELECT lower('FÔÖÉ' COLLATE \"fr_FR\")")))
(car (pg-result res :tuple 0)))
"fôöé"
ELISP> (let ((res (pg-exec *pg* "SELECT lower('FÔ🐘💥bz' COLLATE \"fr_FR\")")))
(car (pg-result res :tuple 0)))
"fô🐘💥bz"
ELISP> (pg-result (pg-exec *pg* "CREATE COLLATION IF NOT EXISTS \"french\" (provider = icu, locale = 'fr_FR')") :status)
"CREATE COLLATION"
ELISP> (let ((res (pg-exec *pg* "SELECT lower('FÔÖÉ' COLLATE \"french\")")))
(car (pg-result res :tuple 0)))
"fôöé"
The COPY protocol
The COPY protocol can be used to send and receive large amounts of data to/from PostgreSQL. It can be used with CSV or TSV data.
From Emacs to PostgreSQL
The pg-el library allows you to COPY from an Emacs buffer into PostgreSQL using the function pg-copy-from-buffer, as illustrated below.
ELISP> (defun ascii (n) (+ ?A (mod n 26)))
ascii
ELISP> (defun random-word ()
(apply #'string (cl-loop for count to 10 collect (+ ?a (random 26)))))
random-word
ELISP> (pg-result (pg-exec *pg* "CREATE TABLE copy_tsv(a INTEGER, b CHAR, c TEXT)") :status)
"CREATE TABLE"
ELISP> (let ((buf (get-buffer-create " *pg-copy-temp-tsv*")))
(with-current-buffer buf
(dotimes (i 42)
(insert (format "%d\t%c\t%s\n" i (ascii i) (random-word)))))
(pg-result (pg-copy-from-buffer *pg* "COPY copy_tsv(a,b,c) FROM STDIN" buf) :status))
"COPY 84"
ELISP> (pg-result (pg-exec *pg* "SELECT COUNT(*) FROM copy_tsv") :tuple 0)
(84)
ELISP> (pg-result (pg-exec *pg* "SELECT * FROM copy_tsv LIMIT 5") :tuples)
((0 "A" "ufyhdnkoyfi")
(1 "B" "jpnlxbftdpm")
(2 "C" "lqvazrhesdg")
(3 "D" "epxkjdsfdpg")
(4 "E" "yjhgdwjzbvt"))
The use of CSV formatted data is very similar; you simply need to specify WITH (FORMAT CSV) in the COPY statement.
ELISP> (pg-result (pg-exec *pg* "CREATE TABLE copy_csv (a INT2, b INTEGER, c CHAR, d TEXT)") :status)
"CREATE TABLE"
ELISP> (let ((buf (get-buffer-create " *pg-copy-temp-csv*")))
(with-current-buffer buf
(dotimes (i 1000)
(insert (format "%d,%d,%c,%s\n" i (* i i) (ascii i) (random-word)))))
(pg-result (pg-copy-from-buffer *pg* "COPY copy_csv(a,b,c,d) FROM STDIN WITH (FORMAT CSV)" buf) :status))
"COPY 1000"
ELISP> (pg-result (pg-exec *pg* "SELECT * FROM copy_csv LIMIT 3") :tuples)
((0 0 "A" "ajoskqunbrx")
(1 1 "B" "pzmoyefgywu")
(2 4 "C" "blylbnhnrdb"))
From PostgreSQL to Emacs
You can copy from PostgreSQL into an Emacs buffer using the function pg-copy-to-buffer, as illustrated below.
ELISP> (let ((res (pg-copy-to-buffer *pg* "COPY copy_csv TO STDOUT WITH (FORMAT CSV, HEADER TRUE)"
(get-buffer-create "*pg-csv*"))))
(pg-result res :status))
"COPY 1000"
The following more verbose example illustrates fetching CSV data from an online source, importing it into PostgreSQL, removing some unneeded columns and querying the data.
ELISP> (with-temp-buffer
(url-insert-file-contents "https://www.data.gouv.fr/fr/datasets/r/51606633-fb13-4820-b795-9a2a575a72f1")
(pg-exec *pg* "CREATE TABLE cities(
insee_code TEXT NOT NULL,
city_code TEXT,
zip_code NUMERIC,
label TEXT NOT NULL,
latitude FLOAT,
longitude FLOAT,
department_name TEXT,
department_number VARCHAR(3),
region_name TEXT,
region_geojson_name TEXT)")
(pg-result (pg-copy-from-buffer *pg* "COPY cities FROM STDIN WITH (FORMAT CSV, DELIMITER ',', HEADER TRUE)"
(current-buffer)) :status))
"COPY 39145"
ELISP> (pg-result (pg-exec *pg* "ALTER TABLE cities DROP COLUMN region_name") :status)
"ALTER TABLE"
ELISP> (pg-result (pg-exec *pg* "ALTER TABLE cities DROP COLUMN region_geojson_name") :status)
"ALTER TABLE"
ELISP> (pg-result (pg-exec *pg* "ALTER TABLE cities DROP COLUMN label") :status)
"ALTER TABLE"
ELISP> (pg-result (pg-exec *pg* "SELECT * FROM cities WHERE city_code LIKE 'toulouse%'") :tuples)
(("39533" "toulouse le chateau" 39230 46.821901729 5.583200112 "jura" "39")
("31555" "toulouse" 31100 43.596037953 1.432094901 "haute-garonne" "31")
("31555" "toulouse" 31300 43.596037953 1.432094901 "haute-garonne" "31")
("31555" "toulouse" 31400 43.596037953 1.432094901 "haute-garonne" "31")
("31555" "toulouse" 31500 43.596037953 1.432094901 "haute-garonne" "31")
("31555" "toulouse" 31000 43.596037953 1.432094901 "haute-garonne" "31")
("31555" "toulouse" 31200 43.596037953 1.432094901 "haute-garonne" "31"))
Schema-qualified names
Introduction to PostgreSQL schemas
A schema in PostgreSQL (and in the ANSI SQL standard) is a collection of tables, views, functions, constraints, indexes and sequences. PostgreSQL allows you to define multiple schemas in the same database, and different schemas can include data structures (such as tables) with the same name. You can think of them as namespaces for tables.
Here is the hierarchy of names:
- each PostgreSQL instance can have multiple databases;
- each database can contain multiple schemas;
- each schema can contain multiple tables (and views, functions and so on).
In SQL syntax you will use qualified names including a schema, such as myschema.mytable, anywhere where you would use a "normal" unqualified name such as mytable. Objects created by the user are placed by default in the schema named public (this schema is not normally printed in query results because the current search_path includes public). Objects used internally by PostgreSQL are in system-defined schemas such as pg_catalog, pg_toast and information_schema.
The pg-el library represents schema-qualified names using pg-qualified-name objects, which are cl-defstruct objects. This means you can create them as follows:
(make-pg-qualified-name :schema "myschema" :name "mytable")
You can then use these objects anywhere you would use a normal table name, escaping special characters using pg-print-qualified-name. The pg-tables function will return normal string names for tables in the public namespace, and pg-qualified-name objects for tables in other namespaces.
ELISP> (let ((res (pg-exec *pg* "SHOW search_path")))
(pg-result res :tuple 0))
("\"$user\", public")
ELISP> (let ((res (pg-exec *pg* "CREATE SCHEMA custom")))
(pg-result res :status))
"CREATE SCHEMA"
ELISP> (let* ((qn (make-pg-qualified-name :schema "custom" :name "mytable"))
(sql (format "CREATE TABLE IF NOT EXISTS %s(id INTEGER)"
(pg-print-qualified-name qn)))
(res (pg-exec *pg* sql)))
(pg-result res :status))
"CREATE TABLE"
ELISP> (pg-tables *pg*)
("purchases" "customers" #s(pg-qualified-name :schema "custom" :name "mytable"))
;; We can use schema-qualified names as parameters for a prepared query.
ELISP> (let* ((qn (make-pg-qualified-name :schema "custom" :name "mytable"))
(pqn (pg-print-qualified-name qn))
(sql "SELECT pg_total_relation_size($1)")
(res (pg-exec-prepared *pg* sql `((,pqn . "text")))))
(pg-result res :tuple 0))
(0)
Special pg-el features
Handling parameter changes
The PostgreSQL backend informs connected clients when certain server parameters change, by sending them a special ParameterStatus message. These notifications are sent for GUC_REPORT parameters, which include client_encoding, DateStyle, TimeZone, server_encoding, in_hot_standby and is_superuser. You can register your interest in these messages by adding a handler function to pg-parameter-change-functions. Each of these handler functions will be called when such a message is received, with three arguments: the connection to PostgreSQL, the parameter name and the parameter value.
These messages are sent asynchronously.
ELISP> (defun handle-tz-change (_con name value)
(when (string= name "TimeZone")
(message "New timezone in PostgreSQL is %s" value)))
handle-tz-change
ELISP> (cl-pushnew #'handle-tz-change pg-parameter-change-functions)
(handle-tz-change pg-handle-parameter-client-encoding)
ELISP> (pg-result (pg-exec *pg* "SET SESSION TIME ZONE 'Europe/Paris'") :status)
"SET"
ELISP> (pg-result (pg-exec *pg* "SET SESSION TIME ZONE 'America/Chicago'") :status)
"SET"
You should see either one or two messages announcing a parameter change (the first statement won't generate a ParameterStatus message if the time zone was already set to Europe/Paris).
Handling asynchronous notifications
PostgreSQL has an asynchronous notification functionality based on the LISTEN and NOTIFY commands. A client can register its interest in a particular notification channel with the LISTEN command, and later stop listening with the UNLISTEN command. All clients listening on a particular channel will be notified asynchronously when a NOTIFY command with that channel name is executed by any client. A "payload" string can be passed to communicate additional data to the listeners. In pg-el you can register functions to be called when an asynchronous notification is received using pg-add-notification-handler, as shown in the example further below; each handler is called with two arguments, the channel name and the payload string.

PostgreSQL also sends NOTICE messages as a form of warning, for example for a DROP TABLE IF EXISTS command that attempts to remove a table that doesn't exist. These are handled by the functions in pg-handle-notice-functions; each handler is called with a single argument, the notice, in the form of a pgerror struct. We can register our interest in the DROP TABLE warning by locally binding the pg-handle-notice-functions variable.
ELISP> (defun deity-p (notif)
;; the notification message will be localized, but should contain the table name
(when (cl-search "deity" (pgerror-message notif))
(message "Indeed")))
ELISP> (let ((pg-handle-notice-functions (list #'deity-p)))
(pg-result (pg-exec *pg* "DROP TABLE IF EXISTS deity") :status))
"DROP TABLE"
You should see the message in the minibuffer.
This example illustrates the use of NOTIFY and LISTEN. It's obviously not very useful with a single client; real applications would involve multiple event consumers and possibly also multiple event producers. This functionality can be used to implement simple publish-subscribe communication patterns, with PostgreSQL serving as an event broker.
(cl-flet ((notification-handler (channel payload)
(message "Async notification on %s: %s" channel payload)))
(pg-add-notification-handler *pg* #'notification-handler)
(pg-exec *pg* "LISTEN yourheart")
(pg-exec *pg* "NOTIFY yourheart, 'foobles'")
(pg-exec *pg* "SELECT 'ignored'")
(pg-exec *pg* "NOTIFY yourheart, 'bazzles'")
(sleep-for 10)
(pg-exec *pg* "SELECT 'ignored'")
(pg-exec *pg* "NOTIFY yourheart")
(pg-exec *pg* "SELECT 'ignored'")
;; The function pg_notify is an alternative to the NOTIFY statement, and is more flexible if your
;; channel name is determined by a variable.
(pg-exec *pg* "SELECT pg_notify('yourheart', 'leaving')")
(pg-exec *pg* "SELECT 'ignored'")
(pg-exec *pg* "UNLISTEN yourheart")
(pg-exec *pg* "NOTIFY yourheart, 'Et redit in nihilum quod fuit ante nihil.'"))
The pg-el API
The entry points in the pg-el library are documented below.
(with-pg-connection con (dbname user [password host port]) &body body)
A macro which opens a TCP network connection to database DBNAME, executes the BODY forms, then disconnects. See function pg-connect for details of the connection arguments.
(with-pg-connection-local con (path dbname user [password]) &body body)
A macro which opens a connection to database DBNAME over a local Unix socket at PATH, executes the BODY forms, then disconnects. See function pg-connect-local for details of the connection arguments.
(with-pg-transaction con &body body)
A macro which executes the BODY forms wrapped in an SQL transaction. CON is a connection to the database. If an error occurs during the execution of the forms, a ROLLBACK instruction is executed.
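As an illustrative sketch (not from the reference documentation), the two macros can be combined so that disconnection and transaction rollback are both handled automatically; the credentials and the count_test table come from the earlier examples:

(with-pg-connection con ("pgeltestdb" "pgeltestuser" "pgeltest" "localhost" 5432)
  (with-pg-transaction con
    ;; Both inserts are committed together, or rolled back if either signals an error.
    (pg-exec con "INSERT INTO count_test VALUES (200, 40000)")
    (pg-exec con "INSERT INTO count_test VALUES (201, 40401)")))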
(pg-connect dbname user [password host port tls-options]) -> con
Connect to the database DBNAME on HOST (defaults to localhost) at PORT (defaults to 5432) via TCP/IP and authenticate as USER with PASSWORD. This library currently supports SCRAM-SHA-256 authentication (the default method from PostgreSQL version 14 onwards), MD5 authentication and cleartext password authentication. This function also sets the output date type to ISO and initializes our type parser tables.
If TLS-OPTIONS is non-NIL, attempt to establish an encrypted connection to PostgreSQL by passing TLS-OPTIONS to the Emacs function gnutls-negotiate. TLS-OPTIONS is a Common Lisp style argument list of the form
(list :priority-string "NORMAL:-MD5" :trustfiles (list "/etc/company/RootCA.crt"))
To use client certificates to authenticate the TLS connection, use a value of TLS-OPTIONS of the form
`(:keylist ((,key ,cert)))
where key is the filename of the client certificate private key and cert is the filename of the client certificate. These are passed to GnuTLS.
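For example, a TLS connection that verifies the server certificate against a CA bundle might look like the following sketch (the hostname and trust store path are placeholders to adapt to your system):

;; Hypothetical example: adjust the host name and the CA bundle path.
(pg-connect "pgeltestdb" "pgeltestuser" "pgeltest" "db.example.com" 5432
            (list :priority-string "NORMAL:-MD5"
                  :trustfiles (list "/etc/ssl/certs/ca-certificates.crt")))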
(pg-connect-local path dbname user [password]) -> con
Initiate a connection with the PostgreSQL backend over the local Unix socket PATH. Connect to the database DBNAME with the username USER, providing PASSWORD if necessary. Returns a connection to the database (as an opaque type). PASSWORD defaults to an empty string.
(pg-exec con &rest sql) -> pgresult
Concatenate the SQL strings and send them to the PostgreSQL backend over connection CON. Retrieve the information returned by the database and return it in an opaque record PGRESULT. The content of the pgresult should be accessed using the pg-result function.
(pg-exec-prepared con query typed-arguments &key (max-rows 0)) -> pgresult
Execute the SQL query QUERY, which may include numbered parameters such as $1, $2 and so on, using PostgreSQL's extended query protocol, on database connection CON. The TYPED-ARGUMENTS are a list of the form
'((42 . "int4") ("42" . "text"))
This query will return at most MAX-ROWS rows (a value of zero indicates no limit). It returns a pgresult structure (see function pg-result). This method is useful to reduce the risk of SQL injection attacks.
(pg-result pgresult what &rest args) -> info
Extract information from the PGRESULT returned by pg-exec. The WHAT keyword can be one of:

- :connection: retrieve the database connection.
- :status: a string returned by the backend to indicate the status of the command; it is something like "SELECT" for a select command, "DELETE 1" if the deletion affected a single row, etc.
- :attributes: a list of tuples providing metadata: the first component of each tuple is the attribute's name as a string, the second an integer representing its PostgreSQL type, and the third an integer representing the size of that type.
- :tuples: all the data retrieved from the database, as a list of lists, each list corresponding to one row of data returned by the backend.
- :tuple tuple-number: return a specific tuple (numbering starts at 0).
- :incomplete: determine whether the set of tuples returned by this query is incomplete, due to a suspended portal. If true, further tuples can be obtained by calling pg-fetch.
- :oid: allows you to retrieve the OID returned by the backend if the command was an insertion. The OID is a unique identifier for that row in the database (this is PostgreSQL-specific; please refer to the documentation for more details).
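As a short sketch of interrogating a pgresult (the type OIDs and sizes in the comments are only indicative; the exact values depend on your server):

(let ((res (pg-exec *pg* "SELECT 42 AS answer, 'forty-two'::text AS label")))
  (list (pg-result res :status)       ; a string such as "SELECT 1"
        (pg-result res :attributes)   ; e.g. (("answer" 23 4) ("label" 25 -1))
        (pg-result res :tuple 0)))    ; (42 "forty-two")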
(pg-fetch con result &key (max-rows 0))
Fetch pending rows from the suspended portal in RESULT on database connection CON. This query will retrieve at most MAX-ROWS rows (the default value of zero means no limit). Returns a pgresult structure (see function pg-result). When used in multiple fetch situations (with the :max-rows parameter to pg-exec-prepared, which allows you to retrieve large result sets incrementally), the same pgresult structure (initially returned by pg-exec-prepared) should be passed to each successive call to pg-fetch, because it contains column metainformation that is required to parse the incoming data. Each successive call to pg-fetch will return this pgresult structure with new tuples accessible via pg-result :tuples. When no more tuples are available, the :incomplete slot of the pgresult structure will be nil.
(pg-cancel con) -> nil
Ask the server to cancel the command currently being processed by the backend. The cancellation request concerns the command requested over database connection CON.
(pg-disconnect con) -> nil
Close the database connection CON.
(pg-for-each con select-form callback)
Calls CALLBACK on each tuple returned by SELECT-FORM. Declares a cursor for SELECT-FORM, then fetches tuples using repeated executions of FETCH 1, until no results are left. The cursor is then closed. The work is performed within a transaction. When you have a large amount of data to handle, this usage is more efficient than fetching all the tuples in one go.
If you wish to browse the results, each one in a separate buffer, you could have the callback insert each tuple into a buffer created with (generate-new-buffer "myprefix"), then use ibuffer's "/ n" to list/visit/delete all buffers whose names match myprefix.
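A minimal sketch of its use, processing a large result set row by row instead of accumulating all tuples in a single pgresult:

(let ((total 0))
  ;; Sum the first column of each row, fetched one tuple at a time via a cursor.
  (pg-for-each *pg* "SELECT * FROM generate_series(1, 100000)"
               (lambda (tuple) (cl-incf total (car tuple))))
  total)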
(pg-databases con) -> list of strings
Return a list of the databases available over PostgreSQL connection CON. A database is a set of tables; in a fresh PostgreSQL installation there is a single database named "template1".
(pg-tables con) -> list of strings
Return a list of the tables present in the database to which we are currently connected over CON. Only user tables are included; system tables are not included in this list.
(pg-columns con table) -> list of strings
Return a list of the columns (or attributes) in TABLE, which must be a table in the database to which we are connected over CON. We only include the column names; if you want more detailed information (attribute types, for example), it can be obtained from pg-result on a SELECT statement for that table.
(pg-hstore-setup con)
Prepare for the use of HSTORE datatypes over database connection CON. This function must be called before using the HSTORE extension. It loads the extension if necessary, and sets up the parsing support for HSTORE datatypes.
(pg-vector-setup con)
Prepare for the use of VECTOR datatypes from the pgvector extension over database connection CON. This function must be called before using the pgvector extension. It loads the extension if necessary, and sets up the parsing support for vector datatypes.
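A brief sketch of pgvector usage through plain SQL literals (this assumes the vector extension is installed on the server, as in the quickstart section; the table and values are purely illustrative):

(pg-vector-setup *pg*)
(pg-exec *pg* "CREATE TABLE IF NOT EXISTS items(id BIGSERIAL PRIMARY KEY, embedding vector(3))")
(pg-exec *pg* "INSERT INTO items(embedding) VALUES ('[1,2,3]'), ('[4,5,6]')")
;; pgvector's <-> operator computes L2 distance; fetch the id of the nearest row.
(pg-result (pg-exec *pg* "SELECT id FROM items ORDER BY embedding <-> '[3,1,2]' LIMIT 1") :tuple 0)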
(pg-lo-create con . args) -> oid
Create a new large object (BLOB, or binary large object in the parlance of other DBMSes) in the database to which we are connected via CON. Returns an OID (which is represented as an elisp integer) which will allow you to use the large object. Optional ARGS are a Unix-style mode string which determines the permissions of the newly created large object: one of "r" for read-only permission, "w" for write-only, or "rw" for read+write. Default is "r".
Large-object functions MUST be used within a transaction (see the macro with-pg-transaction).
(pg-lo-open con oid . args) -> fd
Open a large object whose unique identifier is OID (an elisp integer) in the database to which we are connected via CON. Optional ARGS is a Unix-style mode string as for pg-lo-create, which defaults to "r" (read-only permissions). Returns a file descriptor (an elisp integer) which can be used in other large-object functions.
(pg-lo-close con fd)
Close the file descriptor FD which was associated with a large object. Note that this does not delete the large object; use pg-lo-unlink for that.
(pg-lo-read con fd bytes) -> string
Read BYTES from the file descriptor FD which is associated with a large object. Return an elisp string which should be BYTES characters long.
(pg-lo-write con fd buf)
Write the bytes contained in the elisp string BUF to the large object associated with the file descriptor FD.
(pg-lo-lseek con fd offset whence)
Do the equivalent of an lseek(2) on the file descriptor FD which is associated with a large object; i.e. reposition the read/write file offset for that large object to OFFSET (an elisp integer). WHENCE has the same significance as in lseek(): it should be one of SEEK_SET (set the offset to the absolute position), SEEK_CUR (set the offset relative to the current offset) or SEEK_END (set the offset relative to the end of the file). WHENCE should be an elisp integer whose values can be obtained from the header file <unistd.h> (probably 0, 1 and 2 respectively).
(pg-lo-tell con oid) -> integer
Do the equivalent of an ftell(3) on the file associated with the large object whose unique identifier is OID. Returns the current position of the file offset for the object's associated file descriptor, as an elisp integer.
(pg-lo-unlink con oid)
Remove the large object whose unique identifier is OID from the system. In the current implementation of large objects in PostgreSQL, each large object is associated with an object in the filesystem.
(pg-lo-import con filename) -> oid
Create a new large object and initialize it to the data contained in the file whose name is FILENAME. Returns an OID (as an elisp integer). Note that this operation is only syntactic sugar around the basic large-object operations listed above.
(pg-lo-export con oid filename)
Create a new file named FILENAME and fill it with the contents of the large object whose unique identifier is OID. This operation is also syntactic sugar.
The variable pg-parameter-change-functions is a list of handlers to be called when the backend informs us of a parameter change, for example a change to the session time zone. Each handler is called with three arguments: the connection to the backend, the parameter name and the parameter value. It is initially set to a function that looks out for client_encoding messages and updates the value recorded in the connection.
The variable pg-handle-notice-functions is a list of handlers to be called when the backend sends us a NOTICE message. Each handler is called with one argument, the notice, as a pgerror struct.
The boolean variable pg-disable-type-coercion can be set to non-nil (before initiating a connection) to disable the library's type coercion facility. Its default value is nil.
Setting up PostgreSQL to accept TCP/IP connections has security implications; please consult the documentation for details. It is possible to use the port forwarding capabilities of ssh to establish a connection to the backend over TCP/IP, which provides both a secure authentication mechanism and encryption (and optionally compression) of data passing through the tunnel. Here's how to do it (thanks to Gene Selkov, Jr. for the description):
- Establish a tunnel to the backend machine, like this:

ssh -L 3333:backend.dom:5432 postgres@backend.dom

The first number in the -L argument, 3333, is the port number of your end of the tunnel. The second number, 5432, is the remote end of the tunnel: the port number your backend is using. The name or the address in between the port numbers belongs to the server machine, as does the last argument to ssh that also includes the optional user name. Without the user name, ssh will try the name you are currently logged on as on the client machine. You can use any user name the server machine will accept, not necessarily those related to postgres.
- Now that you have a running ssh session, you can point pg.el to the local host at the port number which you specified in step 1. For example,
(pg-connect "dbname" "user" "password" "localhost" 3333)
You can omit the port argument if you chose 5432 as the local end of the tunnel, since pg.el defaults to this value.
Your feedback
Bug reports should be filed as issues on our GitHub project page.
Pull requests are also welcome!