
Is there documentation on how to configure custom tokenizer in full-text indexing?

lingvisa
Graph Fellow

The documentation says that full-text indexes "support configuring custom analyzers, including analyzers that are not included with Lucene itself." However:

https://neo4j.com/docs/operations-manual/4.2/performance/index-configuration/#index-configuration-fulltext

That page doesn't say how to configure my own tokenizer. Is there an example?

call db.index.fulltext.listAvailableAnalyzers

If I have my own tokenizer interface in Python:

def get_tokens(text):
    ...
    return tokens

How do I configure it to be used by full-text indexing?


clem
Graph Steward

I think you have to write your tokenizer in Java.
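If it helps, here is a rough sketch of what that usually looks like: you implement a Lucene Analyzer (the analyzer owns the tokenizer) and register it with Neo4j through the AnalyzerProvider extension point, then put the compiled jar in the plugins directory. This is only a sketch based on my understanding of Neo4j 4.x and Lucene 8, so treat the class names (org.neo4j.graphdb.schema.AnalyzerProvider, org.neo4j.annotations.service.ServiceProvider) and the analyzer name "whitespace-lowercase" as assumptions to check against your version:

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.neo4j.annotations.service.ServiceProvider;
import org.neo4j.graphdb.schema.AnalyzerProvider;

// A Lucene analyzer that splits on whitespace and lowercases each token.
// Your own tokenization logic would go into createComponents().
class WhitespaceLowercaseAnalyzer extends Analyzer {
    @Override
    protected TokenStreamComponents createComponents(String fieldName) {
        Tokenizer source = new WhitespaceTokenizer();
        TokenStream result = new LowerCaseFilter(source);
        return new TokenStreamComponents(source, result);
    }
}

// Registers the analyzer under a name Neo4j can discover via the service
// loader (the @ServiceProvider annotation generates the META-INF/services
// entry; adding that file by hand should also work).
@ServiceProvider
public class WhitespaceLowercaseAnalyzerProvider extends AnalyzerProvider {
    public WhitespaceLowercaseAnalyzerProvider() {
        super("whitespace-lowercase"); // example analyzer name, pick your own
    }

    @Override
    public Analyzer createAnalyzer() {
        return new WhitespaceLowercaseAnalyzer();
    }

    @Override
    public String description() {
        return "Splits on whitespace and lowercases tokens.";
    }
}

Once the jar is in the plugins directory and the database has been restarted, the new name should show up in db.index.fulltext.listAvailableAnalyzers, and you would select it when creating the index, e.g. db.index.fulltext.createNodeIndex("myIndex", ["MyLabel"], ["myProp"], {analyzer: "whitespace-lowercase"}). As far as I know there is no way to plug a Python get_tokens function in directly; the tokenization itself has to live in the Java analyzer.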

Here's more info: