FreshRSS: Move from Feedly to FreshRSS

This is how I moved my Feedly news to self-hosted FreshRSS

At the beginning of my IT career, keeping up with news on specialized topics was a full-time job:

  • manually crawling through websites
  • reading specialized newspapers

But there's something that does that for you: RSS.

"RSS (RDF Site Summary or Really Simple Syndication) is a web feed that allows users and applications to access updates to websites in a standardized, computer-readable format. Subscribing to RSS feeds can allow a user to keep track of many different websites in a single news aggregator, which constantly monitors sites for new content, removing the need for the user to manually check them."

With RSS, you never have to crawl through websites manually. But an RSS client works on one device, e.g. your laptop. If you configure an RSS client on a second laptop or on your smartphone, you'll get the same feeds - but not the information about which articles you have already read.

So an internet service was needed which fetches RSS feeds and also syncs which entries have already been read. And such a service was provided by Google: Google Reader.

For some time, I was absolutely satisfied with it. But then, Google decided to discontinue it - and its users needed a successor. For me, that was Feedly - a great website and many clients for Windows, Linux, and mobile devices. For a long time, I used the Feedly app and Reeder, as both apps have their advantages.

But what to do if you don't want internet services to know anything about your interests? Just host it yourself 😄

A great service for RSS feeds is FreshRSS:

"FreshRSS is a self-hosted RSS and Atom feed aggregator.
It is lightweight, easy to work with, powerful, and customizable."

If you already read some of my blog posts, you might know that I prefer to host services using Docker.

FreshRSS consists of several services:

  • app: the FreshRSS app
  • db: a PostgreSQL database
  • read: Mozilla's Readability service
  • merc: Mercury parser, which parses websites to extract the actual content

Installation

This is my docker-compose.yml file for FreshRSS:

services:
  app:
    image: freshrss/freshrss
    hostname: freshrss
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/Berlin
    volumes:
      - ./config:/config
    restart: unless-stopped
    ports:
      - 80:80
  db:
    image: postgres:17
    restart: always
    environment:
      - POSTGRES_USER=<db user>
      - POSTGRES_PASSWORD=<password>
      - POSTGRES_DB=<db>
    volumes:
      - ./freshrss-db:/var/lib/postgresql/data
    command: postgres -c shared_preload_libraries=pg_stat_statements -c pg_stat_statements.track=all -c max_connections=200
    hostname: freshrss-db
  read:
    image: phpdockerio/readability-js-server
    hostname: freshrss-read
    restart: always
  merc:
    image: wangqiru/mercury-parser-api
    hostname: freshrss-merc
    restart: always

In this case, my database is created from within the same docker-compose.yml. If many more PostgreSQL-backed applications run on the same Docker server, that adds some overhead, as a separate PostgreSQL server runs for each application. If you prefer, you can point FreshRSS at a shared, external PostgreSQL server instead.
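A minimal sketch of that variant, assuming an existing PostgreSQL server elsewhere on the network (its host and credentials are then entered during the installer's database step, see below):

```yaml
# Sketch: FreshRSS only, using an external PostgreSQL server.
# The database host and credentials are entered later in the web installer.
services:
  app:
    image: freshrss/freshrss
    hostname: freshrss
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/Berlin
    volumes:
      - ./config:/config
    restart: unless-stopped
    ports:
      - 80:80
```

Make sure the external PostgreSQL server is reachable from the container and accepts connections from it (pg_hba.conf).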

The latest image versions are pulled like this:

# docker compose pull

Now, let's start it:

# docker compose up
db-1    | The files belonging to this database system will be owned by user "postgres".
db-1    | This user must also own the server process.
db-1    |
db-1    | The database cluster will be initialized with locale "en_US.utf8".
db-1    | The default database encoding has accordingly been set to "UTF8".
db-1    | The default text search configuration will be set to "english".
db-1    |
db-1    | Data page checksums are disabled.
db-1    |
db-1    | fixing permissions on existing directory /var/lib/postgresql/data ... ok
db-1    | creating subdirectories ... ok
db-1    | selecting dynamic shared memory implementation ... posix
db-1    | selecting default "max_connections" ... 100
db-1    | selecting default "shared_buffers" ... 128MB
db-1    | selecting default time zone ... Etc/UTC
db-1    | creating configuration files ... ok
db-1    | running bootstrap script ... ok
read-1  | 2025-07-22T07:15:07: PM2 log: Launching in no daemon mode
read-1  | 2025-07-22T07:15:07: PM2 log: App [Readability server:0] starting in -cluster mode-
read-1  | 2025-07-22T07:15:07: PM2 log: App [Readability server:0] online
read-1  | 2025-07-22T07:15:07: PM2 log: App [Readability server:1] starting in -cluster mode-
read-1  | 2025-07-22T07:15:07: PM2 log: App [Readability server:1] online
read-1  | 2025-07-22T07:15:07: PM2 log: App [Readability server:2] starting in -cluster mode-
merc-1  |
merc-1  | > mercury-parser-api@1.0.0 start
merc-1  | > node index.js
merc-1  |
read-1  | 2025-07-22T07:15:07: PM2 log: App [Readability server:2] online
read-1  | 2025-07-22T07:15:07: PM2 log: App [Readability server:3] starting in -cluster mode-
db-1    | performing post-bootstrap initialization ... ok
read-1  | 2025-07-22T07:15:07: PM2 log: App [Readability server:3] online
read-1  | 2025-07-22T07:15:07: PM2 log: App [Readability server:4] starting in -cluster mode-
read-1  | 2025-07-22T07:15:07: PM2 log: App [Readability server:4] online
db-1    | syncing data to disk ... ok
db-1    |
db-1    |
db-1    | Success. You can now start the database server using:
db-1    |
db-1    |     pg_ctl -D /var/lib/postgresql/data -l logfile start
db-1    |
db-1    | initdb: warning: enabling "trust" authentication for local connections
db-1    | initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb.
db-1    | waiting for server to start....2025-07-22 07:15:08.120 UTC [49] LOG:  starting PostgreSQL 17.5 (Debian 17.5-1.pgdg120+1) on aarch64-unknown-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
db-1    | 2025-07-22 07:15:08.125 UTC [49] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db-1    | 2025-07-22 07:15:08.137 UTC [52] LOG:  database system was shut down at 2025-07-22 07:15:07 UTC
db-1    | 2025-07-22 07:15:08.154 UTC [49] LOG:  database system is ready to accept connections
db-1    |  done
db-1    | server started
db-1    | CREATE DATABASE
db-1    |
db-1    |
db-1    | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
db-1    |
db-1    | waiting for server to shut down....2025-07-22 07:15:08.411 UTC [49] LOG:  received fast shutdown request
db-1    | 2025-07-22 07:15:08.413 UTC [49] LOG:  aborting any active transactions
db-1    | 2025-07-22 07:15:08.423 UTC [49] LOG:  background worker "logical replication launcher" (PID 55) exited with exit code 1
db-1    | 2025-07-22 07:15:08.423 UTC [50] LOG:  shutting down
db-1    | 2025-07-22 07:15:08.426 UTC [50] LOG:  checkpoint starting: shutdown immediate
db-1    | 2025-07-22 07:15:08.491 UTC [50] LOG:  checkpoint complete: wrote 921 buffers (5.6%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.046 s, sync=0.006 s, total=0.068 s; sync files=301, longest=0.003 s, average=0.001 s; distance=4238 kB, estimate=4238 kB; lsn=0/1908990, redo lsn=0/1908990
db-1    | 2025-07-22 07:15:08.508 UTC [49] LOG:  database system is shut down
db-1    |  done
db-1    | server stopped
db-1    |
db-1    | PostgreSQL init process complete; ready for start up.
db-1    |
db-1    | 2025-07-22 07:15:08.564 UTC [1] LOG:  starting PostgreSQL 17.5 (Debian 17.5-1.pgdg120+1) on aarch64-unknown-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
db-1    | 2025-07-22 07:15:08.566 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
db-1    | 2025-07-22 07:15:08.566 UTC [1] LOG:  listening on IPv6 address "::", port 5432
db-1    | 2025-07-22 07:15:08.573 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db-1    | 2025-07-22 07:15:08.585 UTC [65] LOG:  database system was shut down at 2025-07-22 07:15:08 UTC
db-1    | 2025-07-22 07:15:08.612 UTC [1] LOG:  database system is ready to accept connections
merc-1  | 🚀Mercury Parser API listens on port 3000
read-1  | [2025-07-22T07:15:09.479Z] Readability.js server v1.7.2 listening on port 3000!
read-1  | [2025-07-22T07:15:09.815Z] Readability.js server v1.7.2 listening on port 3000!
read-1  | [2025-07-22T07:15:09.849Z] Readability.js server v1.7.2 listening on port 3000!
read-1  | [2025-07-22T07:15:09.912Z] Readability.js server v1.7.2 listening on port 3000!
read-1  | [2025-07-22T07:15:09.956Z] Readability.js server v1.7.2 listening on port 3000!
app-1   | [Tue Jul 22 09:15:10.987600 2025] [mpm_prefork:notice] [pid 1:tid 1] AH00163: Apache/2.4.62 (Debian) configured -- resuming normal operations
app-1   | [Tue Jul 22 09:15:10.987765 2025] [core:notice] [pid 1:tid 1] AH00094: Command line: 'apache2 -D FOREGROUND'

After that, FreshRSS is available on port 80 (or any other port you defined in the ports section).

Configuration

When you open the FreshRSS website for the first time, you'll see something like this:

It starts the configuration of the FreshRSS system (shown here in German, as that is my browser's primary language).

In the next step, it checks if all the components are available and running the right version. As we use the default Docker image, there shouldn't be any issues:

After that, you have to configure your database. In my case, I decided to use PostgreSQL, as it is my favourite among open-source databases. Other options are:

  • SQLite: a very slim database with minimal resource needs, but large databases can become a problem
  • MySQL or MariaDB

Then, set the first user to log in:

Installation completed:

Click on Complete installation, and you're ready to start:

The first articles are already loaded, they're from the FreshRSS blog:

Now, you can start to enter your subscriptions using the Subscription management button - or import them by using an OPML file, if you moved from another feed aggregator (like Feedly).
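The OPML import can also be scripted. FreshRSS ships CLI helpers in its cli/ directory inside the container; a sketch, assuming the default compose container name freshrss-app-1, a user named admin, and an export file feedly-export.opml:

```shell
# Copy the OPML export (e.g. from Feedly) into the running container
docker cp feedly-export.opml freshrss-app-1:/tmp/feedly-export.opml

# Import all feeds from the file for one user
docker exec freshrss-app-1 \
  php ./cli/import-for-user.php --user admin --filename /tmp/feedly-export.opml
```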

Client

On my iPhone and my iPad, I'm using the Reeder app. Reeder can connect to FreshRSS instances, but this requires the API. Just enable it in the settings:

Additionally, API access needs to be enabled for every user in their profile:

Synchronisation

After some time, I found out that my feeds weren't synchronized automatically. So I added CRON_MIN to my configuration:

  app:
    image: freshrss/freshrss
    hostname: freshrss
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/Berlin
      - CRON_MIN=*/5

In my case, feeds are synchronized every 5 minutes. I'm not sure this frequency is necessary, but it works just fine for me.

Get more information here.

Duplicate feed entries

After some testing, I found that I had duplicate news entries. That happened because I recreated FreshRSS a few times without recreating the database, so every refresh inserted new entries.

But as we have a database, we can also access it directly and handle the duplicate entries:

# docker exec -u postgres -ti freshrss-db-1 psql -U freshrss freshrss
psql (17.5 (Debian 17.5-1.pgdg120+1))
Type "help" for help.

freshrss=#
UPDATE <username>_entry
SET is_read = 1
WHERE id NOT IN (
   SELECT MIN(id)
   FROM <username>_entry
   GROUP BY title
);

In fact, it does not delete duplicate entries - it just marks them as already read, so they are no longer shown when you display only unread news.
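The logic of the SQL above - keep the entry with the lowest id per title unread, mark the rest as read - can be sketched in Python for clarity (hypothetical in-memory entries, not the real FreshRSS schema):

```python
# Mark all but the oldest entry per title as read.
# Each entry is a tuple (id, title, is_read); the entry with MIN(id)
# per title keeps its read state, all other duplicates get is_read = 1.
def mark_duplicates_read(entries):
    oldest = {}  # title -> smallest id seen so far
    for entry_id, title, _ in entries:
        if title not in oldest or entry_id < oldest[title]:
            oldest[title] = entry_id
    return [
        (entry_id, title, 1 if entry_id != oldest[title] else is_read)
        for entry_id, title, is_read in entries
    ]

entries = [(1, "News A", 0), (2, "News A", 0), (3, "News B", 0)]
print(mark_duplicates_read(entries))
# [(1, 'News A', 0), (2, 'News A', 1), (3, 'News B', 0)]
```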

Database tuning

I found a page which shows how to optimize full-text search. Without changing anything in the FreshRSS code, it can be done like this:

CREATE EXTENSION pg_trgm;
CREATE INDEX gin_trgm_index_title ON <username>_entry USING gin(title gin_trgm_ops);
CREATE INDEX gin_trgm_index_content ON <username>_entry USING gin(content gin_trgm_ops);
CREATE STATISTICS freshrss_entry_stats ON title, content FROM <username>_entry;
ANALYZE <username>_entry;

What it does:

  • create the extension pg_trgm (support for text similarity, see documentation)
  • create two Generalized Inverted Indexes (GIN) on the entry table
    • column title
    • column content
  • create a new extended statistics object tracking data about the entry table
  • analyze the table, so the new statistics are collected

As FreshRSS searches with ILIKE, these trigram indexes are picked up automatically, without any code changes.
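To check whether a search actually benefits, you can run a hypothetical query through EXPLAIN; with enough rows in the table, the plan should show a Bitmap Index Scan on gin_trgm_index_title instead of a sequential scan (on very small tables the planner may still prefer a sequential scan):

```sql
EXPLAIN ANALYZE
SELECT id, title
FROM <username>_entry
WHERE title ILIKE '%docker%';
```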

If you want to learn more about FreshRSS, see here.
