Setting up PostgreSQL Full Text Search (FTS) lets you bring up a fully-fledged search engine without a steep learning curve or any new software to install. It can power anything from a small catalog website to a large news website.

You might wonder what the big deal with Full-Text Search is. After all, you can use the LIKE or ILIKE clause in any query to search a given string. The problem with the traditional LIKE clause is that it carries a very high performance penalty: it takes a long time to return results because the words inside the text are not indexed. In this blog, you will learn about PostgreSQL Full Text Search and PostgreSQL Phrase Search. Moreover, you will understand these with examples and will also learn about the Vector technique.


What is PostgreSQL Full Text Search

PostgreSQL Full Text Search refers to a technique in which you search a single computer-stored document or a collection of documents in your full-text database. It provides the capability to identify natural-language documents that satisfy a query. PostgreSQL Full Text Search allows documents to be indexed, which makes searching them fast. It also supports dictionaries that allow fine-grained control over token normalization.

  • During a PostgreSQL Full Text search, Postgres will not include “stop words” like “the” or “a” while finding a match.
  • It implements “Stemming” to find derivatives of a root word. For example, stemming will map “runs” and “running” to the root word “run”.
  • PostgreSQL will rank your search matches and will place the best matches at the top of the results.

PostgreSQL Full Text Search Entities

To understand the PostgreSQL Full Text Search, you need to be familiar with the following terms first:

  • Document: A Document is the unit of data on which your full-text search takes place. In PostgreSQL, you can build a document from a single column, multiple columns, or even multiple tables.
  • Lexemes: During processing, the document is parsed into multiple tokens, which consist of the words, numbers, etc. present in the text. These tokens are then normalized into more meaningful entities called lexemes.
  • Dictionaries: The conversion of tokens into lexemes is done using dictionaries. PostgreSQL lets you either use the built-in dictionaries or create your own. Dictionaries determine which stop words should be ignored and whether differently derived words share the same root, as the sketch below illustrates.
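
To see how the parser and dictionaries work together, you can, for instance, run the built-in ts_debug function, which shows each token and the lexeme it is mapped to. A minimal sketch (the sentence is illustrative):

SELECT alias, token, lexemes
FROM ts_debug('english', 'The quick brown foxes are jumping');

Here “The” and “are” are dropped as stop words (empty lexeme lists), while “foxes” and “jumping” are reduced to the lexemes “fox” and “jump”.
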
Simplify your Data Analysis with Hevo’s No-code Data Pipeline

Hevo Data, a No-code Data Pipeline, helps you load data from any data source such as PostgreSQL, SaaS applications, Cloud Storage, SDKs, and Streaming Services, and simplifies the ETL process. It supports 100+ data sources and loads the data onto the desired Data Warehouse, enriches the data, and transforms it into an analysis-ready form without writing a single line of code.

Its completely automated pipeline offers data to be delivered in real-time without any loss from source to destination. Its fault-tolerant and scalable architecture ensures that the data is handled in a secure, consistent manner with zero data loss and supports different forms of data. The solutions provided are consistent and work with different Business Intelligence (BI) tools as well.

Get Started with Hevo for Free

Check out why Hevo is the Best:

  • Secure: Hevo has a fault-tolerant architecture that ensures that the data is handled in a secure, consistent manner with zero data loss.
  • Schema Management: Hevo takes away the tedious task of schema management & automatically detects the schema of incoming data and maps it to the destination schema.
  • Minimal Learning: Hevo, with its simple and interactive UI, is extremely simple for new customers to work on and perform operations.
  • Hevo Is Built To Scale: As the number of sources and the volume of your data grows, Hevo scales horizontally, handling millions of records per minute with very little latency.
  • Incremental Data Load: Hevo allows the transfer of data that has been modified in real-time. This ensures efficient utilization of bandwidth on both ends.
  • Live Support: The Hevo team is available round the clock to extend exceptional support to its customers through chat, email, and support calls.
  • Live Monitoring: Hevo allows you to monitor the data flow and check where your data is at a particular point in time.
Sign up here for a 14-Day Free Trial!

Core Concepts of PostgreSQL Full Text Search

[Image: PostgreSQL Full Text Search architecture]

PostgreSQL offers several native functions that can help you in implementing PostgreSQL Full Text Search:

  • to_tsvector: This function is used to create a list of tokens of tsvector data type. ts stands for “Text Search”.
  • to_tsquery: This function can be used for querying the vector for occurrences of particular words or phrases.
  • Searching: This process uses the to_tsvector and to_tsquery functions to provide fast results.

1) to_tsvector

To create a vector for a sentence, you can use the to_tsvector function as follows:

SELECT to_tsvector('The quick brown fox jumped over the lazy dog.');

This returns a vector where every token is a lexeme (a unit of lexical meaning) with pointers. It also omits words that carry little meaning, such as conjunctions (and, or) and articles (the).

                      to_tsvector
-------------------------------------------------------
 'brown':3 'dog':9 'fox':4 'jump':5 'lazi':8 'quick':2

By default, every word is normalized into an English lexeme; for instance, “jumped” becomes “jump”. The default, however, depends on the default_text_search_config setting of your PostgreSQL installation. If you are working with a language other than English, you simply need to pass the configuration name as the first argument and PostgreSQL will handle the rest.
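
For example, you can pass the configuration name explicitly as the first argument. With 'english' this produces the same vector as above; passing 'spanish' or 'german' instead would apply those languages' stemmers and stop words (assuming those configurations are installed, as they are in a standard PostgreSQL distribution):

SELECT to_tsvector('english', 'The quick brown fox jumped over the lazy dog.');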

2) to_tsquery

This function produces a tsquery whose terms are checked against the normalized vector created with the to_tsvector() function. You can use the following Postgres functions to convert text into tsquery values: to_tsquery, plainto_tsquery, and phraseto_tsquery. Note that to_tsquery expects its terms to be joined with operators such as & (AND) and | (OR), so passing it a plain sentence raises a syntax error; for plain text, plainto_tsquery is the convenient choice:

SELECT plainto_tsquery('The quick brown fox jumped over the lazy dog');
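
Assuming the default English text search configuration, the sentence is normalized, stop words are dropped, and the remaining lexemes are joined with the & operator:

                   plainto_tsquery
------------------------------------------------------
 'quick' & 'brown' & 'fox' & 'jump' & 'lazi' & 'dog'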

3) Searching

After creating tsvector from your text data and tsquery from your search terms, you can further perform a full-text search using the @@ operator.

The @@ operator checks whether a tsquery matches a tsvector. For instance, if the word to be queried is “fox” for the above-mentioned example, then:

SELECT to_tsvector('The quick brown fox jumped over the lazy dog') @@ to_tsquery('fox');

Because matching is done on lexemes, the search matches “fox” and its derived forms (such as “foxes”), but not unrelated words that merely start with “fox”.

tsquery provides a few operators for your use, such as & (AND), | (OR), and ! (negation), which will be used in a later section of this article. PostgreSQL 9.6 also added the <-> (FOLLOWED BY) operator, which allows you to search by word distance or proximity.
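
As a small sketch against the same sentence, these operators can be combined inside a single tsquery:

-- AND plus negation: true, because 'fox' is present and 'cat' is not
SELECT to_tsvector('The quick brown fox jumped over the lazy dog') @@ to_tsquery('fox & !cat');

-- proximity: true, because the lexeme 'lazi' is immediately followed by 'dog'
SELECT to_tsvector('The quick brown fox jumped over the lazy dog') @@ to_tsquery('lazy <-> dog');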

Example: Searching Storm Event Details

This section will explain the implementation of PostgreSQL Full Text Search and the search tools involved with it.

You can create a table containing data related to storm events gathered by the U.S. National Weather Service and add a small portion of the storm data into it using the code below.

CREATE TABLE se_details (
    episode_id int,
    event_id int primary key,
    state text,
    event_type text,
    begin_date_time timestamp,
    episode_narrative text,
    event_narrative text
);

In this example, you will carry out a PostgreSQL Full Text Search on the event_narrative column. First, add a new column to the table to store the preprocessed search document (i.e., the list of lexemes):

ALTER TABLE se_details ADD COLUMN ts tsvector 
    GENERATED ALWAYS AS (to_tsvector('english', event_narrative)) STORED;

In PostgreSQL 12 and later, this generated column (“ts”) stays in sync with the source data automatically. You can then create a GIN index on the ts column to speed up searches.
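
A GIN index on the generated column might look like this (the index name is illustrative):

CREATE INDEX se_details_ts_idx ON se_details USING GIN (ts);

With the index in place, you can query against the ts column: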

SELECT state, begin_date_time, event_type, event_narrative
FROM se_details
WHERE ts @@ to_tsquery('english', 'tornado');

The other Postgres full-text search data type is tsquery, which holds search terms as normalized lexemes. You should pass the input search term through the to_tsquery function (with the same text search configuration) so it is normalized the same way as the indexed document.

This query returns records that contain the search term “tornado” somewhere in the text. It also returns records whose text contains words that normalize to the lexeme “tornado”, such as “tornado-like” and “tornadoes”:

state           | WISCONSIN
begin_date_time | 2018-08-28 15:30:00
event_type      | Thunderstorm Wind
event_narrative | A swath of widespread tree and crop damage across the southern portion of the county. Sections of trees and crops completely flattened, and some structural damage from fallen trees or due to the strong downburst winds. Various roads closed due to fallen trees. Two semi-trucks were overturned on highway 57 in Waldo. The widespread wind damage and tornadoes caused structural damage to many homes with 70 homes sustaining affected damage, 3 homes with minor damage, 2 homes with major damage, one home destroyed, and 2 businesses with minor damage.

Fuzzy Search vs Full Text Search

You can search for words in the “Full Text” of the document. Search terms can be individual words or phrases. The search terms are sought word-for-word in the text (or in a selected part of it) and are highlighted if found. You can further use Boolean operators like AND, OR, and NOT for added functionality.

An imprecise search (Fuzzy Search) is also available, which returns terms that are merely similar to the term searched. For this, you need to enable the pg_trgm Postgres extension from psql using the command below:

CREATE EXTENSION pg_trgm;

This extension will help you with fuzzy string matching.
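
As a hedged sketch reusing the se_details table from the earlier example (the misspelled search term here is purely illustrative), the % operator and the similarity() function from pg_trgm can match terms that are only approximately spelled:

-- finds event types that are merely similar to the (misspelled) search term
SELECT DISTINCT event_type,
       similarity(event_type, 'thunderstrom') AS sml
FROM se_details
WHERE event_type % 'thunderstrom'
ORDER BY sml DESC;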

Mastering PostgreSQL Tools: PostgreSQL Full Text Search and Phrase Search

PostgreSQL provides you with the following 2 key searching techniques:

1) PostgreSQL Full Text Search

With full-text search, you search for words in the “Full Text” of the document. Search terms can be individual words or phrases, are sought word-for-word in the text (or in a selected part of it), and are highlighted if found. Boolean operators like AND, OR, and NOT add further flexibility.

You will face another issue while searching for a word like “query”, which may be present in its plural form “queries”. In this situation, you won’t get matches by performing a simple pattern search using LIKE. Using a Regular Expression for this search is not an ideal solution either, as it consumes a lot of time.

A more feasible solution for this issue is using a semantic vector for all of the words present in a document. This way, when you search for a word like “query”, your results will include all instances of the word (and its other forms), even if you searched for “queried” or “queries”. Furthermore, it gives you faster results, as you don’t have to scan the full document itself.

2) PostgreSQL Phrase Search

Full text search is helpful if you wish to find a word (or a set of words) in a document. However, if you want to find an exact phrase in that document, plain full text search is not enough.
For example, if you wish to search for “walking fast” or “to walk very fast”, you can use words like walk & fast in the above-mentioned techniques. This will give you documents containing both words, but they may appear anywhere in the text and in any order. To overcome this issue, PostgreSQL 9.6 introduced the <-> (FOLLOWED BY) operator, which you can use in a tsquery to search by word proximity.

The following example will help you understand Phrase Search better. Notice that to_tsquery with the & (AND) operator makes no distinction about where the words appear in the documents:

SELECT document_id, document_text FROM documents  
WHERE document_tokens @@ to_tsquery('jump & quick');

The output of this code is shown below:

 document_id |             document_text
-------------+---------------------------------------
           3 | The five boxing wizards jump quickly.
           4 | How vexingly quick daft zebras jump!
(2 rows)

Now you can use the proximity operator for phrase search in the following manner:

SELECT document_id, document_text FROM documents  
WHERE document_tokens @@ to_tsquery('jump <-> quick');

It will generate the following output:

 document_id |             document_text
-------------+---------------------------------------
           3 | The five boxing wizards jump quickly.
(1 row)

Understanding PostgreSQL Full Text Search Setup

Here are the steps involved in setting up PostgreSQL Full Text Search:

Step 1: Preparing the Database

We will first create a table:

CREATE TABLE tv_series (
    id SERIAL PRIMARY KEY,
    title TEXT NOT NULL,
    description TEXT NOT NULL,
    creator TEXT NOT NULL
);

Next, we will insert some data to start querying:

INSERT INTO tv_series 
    (id, title, description, creator) 
VALUES 
    (1, 'Stranger Things', 'Set in the fictional town of Hawkins, Indiana in the 1980s, the first season focuses on the investigation into the disappearance of a young boy...', 'Duffer Brothers'),
    (2, 'Breaking Bad', 'BB is an American neo-western crime drama television series created and produced by Vince Gilligan. It tells the story of Walter White, a struggling high school chemistry teacher diagnosed with inoperable lung cancer...', 'Vince Gilligan'),
    (3, 'Band of Brothers', 'Band of Brothers is a 2001 American war drama miniseries based on historian Stephen E. Ambrose''s 1992 nonfiction book of the same name. The executive producers were Steven Spielberg and Tom Hanks...', 'Steven Spielberg, Tom Hanks'),
    (4, 'Rick and Morty', 'Rick and Morty is an American animated science-fiction sitcom created by Justin Roiland and Dan Harmon for Cartoon Network''s late-night programming block...', 'Justin Roiland, Dan Harmon');

Step 2: Exploring Data

We will now create a document, which simply means concatenating all the fields we want to perform a search on.

SELECT (title || ' ' || description || ' ' || creator) AS document FROM tv_series;

At this stage we just have a string. To make it searchable, we need to transform it using to_tsvector().

SELECT  (to_tsvector(title) || to_tsvector(description) || to_tsvector(creator)) AS document 
FROM tv_series;

The result is now in tsvector form, which is more suitable for performing a full-text search. At first glance it looks stranger than what we started with: the first record has reduced the word “investigation” to “investig”, and there are numbers attached to each word. What really happened is that Postgres produced a sorted list of distinct lexemes (words normalized so that different variants of the same word merge), with the numbers recording each lexeme’s positions in the document.

Let’s start querying the data. We will be using the @@ operator to perform a query and check if a tsvector matches tsquery:

SELECT to_tsvector(description) @@ 'rick' FROM tv_series WHERE id = 4;

output:

----------
t
(1 row)

Yes, it did match. Now let’s try another query.

SELECT to_tsvector(description) @@ 'morty' FROM tv_series WHERE id = 4;

output:

----------
f
(1 row)

Sadly, the result is negative. This is because the bare string on the right-hand side of @@ is simply cast to a tsquery, without any normalization, so its term does not match the lexemes in the tsvector. To solve this, we need to build a tsquery that produces the same lexemes. The following shows the difference between casting and using the to_tsquery() function.

SELECT 'morty'::tsquery, to_tsquery('morty');

output:

 tsquery | to_tsquery
---------+------------
 'morty' | 'morti'
(1 row)

But in the case of ‘rick’, the stem is equal to the word.

SELECT 'rick'::tsquery, to_tsquery('rick');

output:

 tsquery | to_tsquery
---------+------------
  'rick' | 'rick'
(1 row)

So we will have to use to_tsquery() for querying documents.

SELECT to_tsvector(description) @@ to_tsquery('morty') FROM tv_series WHERE id = 4;

output:

----------
t
(1 row)

Step 3: Improving Performance of PostgreSQL Full Text Search

In a production environment, we can’t build the tsvector at query time and expect lightning-fast results; instead, we should store it and refresh it whenever a new entry is inserted into the table. For this, we will create a new column in our table named document_vectors of data type tsvector. We will also need an index to keep queries fast.

[Image: PostgreSQL Full Text Search GIN index]

For tsvector columns, Postgres provides GiST (Generalized Search Tree) and GIN (Generalized Inverted Index) indexes. The main difference between them is that GiST indexes are faster to build but require additional row reads to discard false matches, while GIN indexes are slower to build but faster to query. So if your workload is read-intensive, GIN is usually the better choice.

ALTER TABLE tv_series ADD "document_vectors" tsvector;
CREATE INDEX idx_fts_doc_vec ON tv_series USING gin(document_vectors);

Next, update the tsvector of the existing rows:

UPDATE 
    tv_series 
SET 
    document_vectors = (to_tsvector(title) || to_tsvector(description) || to_tsvector(creator));

For more fun, we will insert a new row, this time supplying the tsvector of the document along with it.

INSERT INTO tv_series
    (id, title, description, creator, document_vectors)
VALUES
    (5, 'Better Call Saul', 'Better Call Saul is an American television crime drama series created by Vince Gilligan and Peter Gould. It is a spin-off prequel of Gilligan''s prior series Breaking Bad.', 'Vince Gilligan, Peter Gould', (to_tsvector('Better Call Saul') || to_tsvector('Better Call Saul is an American television crime drama series created by Vince Gilligan and Peter Gould. It is a spin-off prequel of Gilligan''s prior series Breaking Bad.') || to_tsvector('Vince Gilligan, Peter Gould')));

Finally, let’s search for Breaking Bad:

SELECT id, title, creator FROM tv_series WHERE document_vectors @@ to_tsquery('break');

output:

 id | title            | creator
----+------------------+-----------------------------
  2 | Breaking Bad     | Vince Gilligan
  5 | Better Call Saul | Vince Gilligan, Peter Gould
(2 rows)
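
Rather than recomputing document_vectors by hand on every insert or update, you could keep it in sync with the built-in tsvector_update_trigger function. A minimal sketch for the table above (the trigger name is illustrative, and this variant hard-codes the english configuration):

-- on PostgreSQL 10 and older, use EXECUTE PROCEDURE instead of EXECUTE FUNCTION
CREATE TRIGGER tv_series_vectors_update
    BEFORE INSERT OR UPDATE ON tv_series
    FOR EACH ROW
    EXECUTE FUNCTION tsvector_update_trigger(
        document_vectors, 'pg_catalog.english', title, description, creator
    );

On PostgreSQL 12 and later, a GENERATED ALWAYS AS (...) STORED column, as used in the storm-events example earlier, achieves the same result without a trigger.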

Implementing Postgres Full Text in Rails

Now that you have learned about the PostgreSQL Full Text Search technique, you can implement it in Ruby on Rails. For this, you’ll be using the pg_search gem, which can be used in the following two ways:

  • Search Scope: This allows you to search within a single model with greater flexibility.
  • Multi Search: This method allows you to search across multiple models and return a single array of results. Suppose you have three models: Brand, Review, and Product. You can use Multi Search to search across all of them at the same time, seeing a single set of search results. You can use this to add the federated search functionality to your app.

This section will deal with the Search Scope in detail. Here are the steps involved in this method:

  • Step 1: First, you need to add the Gem to your Gemfile to get started. The following snippet gives you an idea about exactly how you can go about it:
# Gemfile 
gem 'pg_search', '~> 2.3', '>= 2.3.2'
  • Step 2: In the next step, you need to add a module to your Job model and define your first searchable field:
class Job < ApplicationRecord
  include PgSearch::Model
  pg_search_scope :search_title, against: :title
end

This snippet adds a class-level method to the Job model that allows you to find jobs with the following call, which returns them ranked from best match to worst.

Job.search_title('Ruby on Rails')
  • Step 3: If you append to_sql to the above Ruby statement, you can see the SQL that is being generated. Since it handles both searching and ordering the results using ts_rank, it is a little messy.
SELECT "jobs".* FROM "jobs" INNER JOIN ( SELECT "jobs"."id" AS pg_search_id, (ts_rank((to_tsvector('simple', coalesce("jobs"."title"::text, ''))), (to_tsquery('simple', ''' ' || 'Ruby' || ' ''') && to_tsquery('simple', ''' ' || 'on' || ' ''') && to_tsquery('simple', ''' ' || 'Rails' || ' ''')), 0)) AS rank FROM "jobs" WHERE ((to_tsvector('simple', coalesce("jobs"."title"::text, ''))) @@ (to_tsquery('simple', ''' ' || 'Ruby' || ' ''') && to_tsquery('simple', ''' ' || 'on' || ' ''') && to_tsquery('simple', ''' ' || 'Rails' || ' ''')))) AS pg_search_5d9a17cb70b9733aadc073 ON "jobs"."id" = pg_search_5d9a17cb70b9733aadc073.pg_search_id ORDER BY pg_search_5d9a17cb70b9733aadc073.rank DESC, "jobs"."id" ASC

You can configure pg_search in numerous ways ranging from support for negation and prefixes to specifying which language dictionary to use when normalizing a document. You can also add multiple weighted columns for the same.

pg_search uses the “simple” dictionary by default, which does not carry out normalization. If you want to normalize your document using the English dictionary, searching across both the description and title, this is what it would look like:

class Job < ApplicationRecord
  include PgSearch::Model
  pg_search_scope :search_job,
                  against: %i[title description],
                  using: { tsearch: { dictionary: 'english' } }
end

To give higher precedence to the title column, you can add weighted scores to each of the columns, with possible values of A, B, C, D.

class Job < ApplicationRecord
  include PgSearch::Model
  pg_search_scope :search_job,
                  against: { title: 'A', description: 'B' },
                  using: { tsearch: { dictionary: 'english' } }
end

Searching Against the Vector

To make use of the tsvector column, the search terms need the same preprocessing as the document. You will need to build a tsquery (for example, with the to_tsquery function) if you want to search against the tsvector column.

[Image: PostgreSQL Full Text Search match result]

The @@ operator is used when you need direct matches, and Postgres also provides other operators for more sophisticated matches. The output is a boolean: true or false. You can see in the image above that the search for the word “scares” returns true; the preprocessing (stemming) is what makes this match possible.
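
As a small sketch of this behaviour (the sentence itself is illustrative), the stemmer maps both “scares” and “scare” to the same lexeme, so the match returns true:

SELECT to_tsvector('english', 'The monster scares the children')
       @@ to_tsquery('english', 'scare');

 ?column?
----------
 t
(1 row)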

Advanced PostgreSQL Full Text Search Techniques

Now that you’re familiar with searching for the existence of words, let’s discuss some of the advanced search techniques:

Advanced PostgreSQL Full Text Search: JSON Search

PostgreSQL natively supports the JSON and JSONB data types. A GIN index on a JSONB column lets you efficiently search inside entire JSON documents, for example with the @> containment operator.
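
A minimal sketch (the table and column names here are hypothetical):

-- a GIN index on a JSONB column supports containment queries with @>
CREATE TABLE api_events (id serial PRIMARY KEY, payload jsonb);
CREATE INDEX api_events_payload_idx ON api_events USING GIN (payload);

SELECT id FROM api_events WHERE payload @> '{"state": "WISCONSIN"}';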

Advanced PostgreSQL Full Text Search: Dictionaries

PostgreSQL dictionaries are used behind the scenes for many tasks, such as:

  • Stop Word Removal (a, an, the, etc.)
  • Normalization (scared -> scare as we saw above)
  • Thesaurus
  • Spelling

You can further customize these dictionaries to add more stop words, or even create your own custom dictionary.
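
As a hedged sketch, you could copy the built-in english configuration and swap in a Snowball dictionary that reads an extra stop-word file. All names here are illustrative, and the file my_english.stop must already exist in the tsearch_data directory of your installation:

CREATE TEXT SEARCH DICTIONARY my_english_dict (
    TEMPLATE  = snowball,
    Language  = english,
    StopWords = my_english
);

CREATE TEXT SEARCH CONFIGURATION my_english (COPY = english);

ALTER TEXT SEARCH CONFIGURATION my_english
    ALTER MAPPING FOR asciiword, asciihword, hword_asciipart, word, hword, hword_part
    WITH my_english_dict;

You can then pass 'my_english' as the configuration name to to_tsvector and to_tsquery.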

Advanced PostgreSQL Full Text Search: Ranking

Ranking search results ensures that the most relevant results appear higher. ts_rank considers the frequency of matching words: the more query tokens that match the text, the higher the rank.

If you run the following query, rank1 is higher than rank2 because the token “fox” is found in the sentence while “cat” is not:

SELECT
  ts_rank(
    to_tsvector('The quick brown fox jumped over the lazy dog'), 
    to_tsquery('fox')
  ) AS rank1,
  ts_rank(
    to_tsvector('The quick brown fox jumped over the lazy dog'), 
    to_tsquery('cat')
  ) AS rank2;
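
Applied to the storm-events table from earlier, the same idea can order search results by relevance. A sketch, using the columns defined above:

SELECT state, event_type,
       ts_rank(ts, to_tsquery('english', 'tornado')) AS rank
FROM se_details
WHERE ts @@ to_tsquery('english', 'tornado')
ORDER BY rank DESC
LIMIT 5;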

FAQs

1) How to do a full-text search in Postgres?

In PostgreSQL, full-text searching is done with the @@ match operator, which returns true if a tsvector (document) matches a tsquery (query). It makes no difference which type of data is written first:

SELECT 'a fat cat sat on a mat and ate a fat rat'::tsvector @@ 'cat & rat'::tsquery;
 ?column?
----------
 t

SELECT 'fat & cow'::tsquery @@ 'a fat cat sat on a mat and ate a fat rat'::tsvector;
 ?column?
----------
 f

A tsquery, like a tsvector, is not just raw text, as the above example shows. A tsquery is a collection of search terms that must be already-normalized lexemes and that can be combined using the AND, OR, and NOT operators. To turn user-written text into a valid tsquery, the functions to_tsquery and plainto_tsquery are useful, as they normalize the words found in the text. Likewise, to_tsvector is used to parse and normalize a document string. In practice, a text search match looks like this:

SELECT to_tsvector('fat cats ate fat rats') @@ to_tsquery('fat & rat');
 ?column? 
----------
 t

This match would fail if written as

SELECT 'fat cats ate fat rats'::tsvector @@ to_tsquery('fat & rat');
 ?column? 
----------
 f

This is because there is no normalization of the term “rats” here: “rats” does not match “rat” because the elements of a tsvector are lexemes, which are assumed to be already normalized.

2) How to use Postgres full-text search across more than one column?

To speed up text searches, you can establish a GIN index:

CREATE INDEX pgweb_idx ON pgweb USING GIN (to_tsvector('english', body));

It’s worth noting that to_tsvector is used here with two arguments. Only text search functions that specify a configuration name can be used in expression indexes, because the index contents must not be affected by default_text_search_config. If they were, the index contents could be inconsistent: separate entries could contain tsvectors built with different text search configurations, with no way of knowing which was which, and it would be impossible to accurately dump and restore such an index.

Because the index above uses the two-argument version of to_tsvector, only a query that uses the two-argument form of to_tsvector with the same configuration name will use that index. That is, WHERE to_tsvector('english', body) @@ 'a & b' can use the index, whereas WHERE to_tsvector(body) @@ 'a & b' cannot. This ensures that an index will only be used with the same configuration with which its entries were created.

More sophisticated expression indexes can be created by specifying the configuration name in another column, for example:

CREATE INDEX pgweb_idx ON pgweb USING GIN (to_tsvector(config_name, body));

where config_name is a column of the pgweb table. This allows mixed configurations in the same index while keeping track of which one was used for each index entry, which is handy if the document collection contains documents in multiple languages. Queries that should use the index must be written to match, e.g. WHERE to_tsvector(config_name, body) @@ 'a & b'.

Indexes can even join columns together:

CREATE INDEX pgweb_idx ON pgweb USING GIN (to_tsvector('english', title || ' ' || body));

Another option is to store the to_tsvector output in a separate tsvector column. The example below concatenates title and body, using coalesce to ensure that one field is still indexed when the other is NULL:

ALTER TABLE pgweb ADD COLUMN textsearchable_index_col tsvector;
UPDATE pgweb SET textsearchable_index_col =
     to_tsvector('english', coalesce(title,'') || ' ' || coalesce(body,''));

Then, to speed up the search, you establish a GIN index:

CREATE INDEX textsearch_idx ON pgweb USING GIN (textsearchable_index_col);

Now you are ready to perform a fast full-text search:

SELECT title
FROM pgweb
WHERE textsearchable_index_col @@ to_tsquery('create & table')
ORDER BY last_mod_date DESC
LIMIT 10;

Limitations of PostgreSQL Full Text Search

Some of the limitations of PostgreSQL Full Text Search (FTS) are listed below:

  • The length of each lexeme must be less than 2K bytes.
  • You should not have more than 256 positions per lexeme.
  • The length of tsvector should be less than 1 MB.
  • The total number of nodes in a tsquery should be less than 32,768.

As shown in the examples above, we were able to find every document mentioning “Breaking Bad” using just the search term ‘break’.

For more such posts, visit our blog page.

Conclusion

This article introduced you to PostgreSQL Full Text Search and its variants. It also presented an example showcasing the implementation of PostgreSQL Full Text Search. The article then discussed Phrase Search with an example and introduced the Vector technique. Furthermore, it listed the limitations of the PostgreSQL Full Text approach. You can also consider SQL Server and SQLite, and have a look at these in-depth PostgreSQL vs SQL Server and SQLite vs PostgreSQL guides.

Visit our Website to Explore Hevo

Now, to perform Data Analytics on your PostgreSQL data, you first need to export this data to a Data Warehouse. This will require you to custom code complex scripts to develop the ETL processes. Hevo Data can automate your data transfer process, hence allowing you to focus on other aspects of your business like Analytics, Customer Management, etc. This platform allows you to transfer data from 100+ sources like PostgreSQL to Cloud-based Data Warehouses like Amazon Redshift, Snowflake, Google BigQuery, etc. It will provide you with a hassle-free experience and make your work life much easier.

Want to take Hevo for a spin? Sign Up for a 14-day free trial and experience the feature-rich Hevo suite first hand.

Share your understanding of the PostgreSQL Full Text Search in the comments below!

Software Engineer, Hevo Data

With around a decade of experience, Sarad has designed and developed fundamental components of Hevo. His expertise lies in building lean solutions for various software problems, mentoring fellow engineers, and exploring new technologies.

No-code Data Pipeline for your Data Warehouse

Get Started with Hevo