
I would like to create a custom analyzer in PyLucene. Normally, in Java Lucene, you write an analyzer class that extends Lucene's Analyzer class.

However, PyLucene uses jcc, a Java-to-C++/Python compiler.

So how do I make a Python class extend a Java class through jcc? And specifically, how do I write a custom PyLucene analyzer?

Thanks.


2 Answers


Here is an example of an analyzer that wraps the EdgeNGram filter:

import lucene
class EdgeNGramAnalyzer(lucene.PythonAnalyzer):
    '''
    This is an example of a custom Analyzer (in this case an edge-n-gram analyzer)
    EdgeNGram Analyzers are good for type-ahead
    '''

    def __init__(self, side, minlength, maxlength):
        '''
        Args:
            side[enum] Can be one of lucene.EdgeNGramTokenFilter.Side.FRONT or lucene.EdgeNGramTokenFilter.Side.BACK
            minlength[int]
            maxlength[int]
        '''
        lucene.PythonAnalyzer.__init__(self)
        self.side = side
        self.minlength = minlength
        self.maxlength = maxlength

    def tokenStream(self, fieldName, reader):
        # Tokenize/lowercase, normalize, drop stop words, fold accents, then emit edge n-grams
        result = lucene.LowerCaseTokenizer(lucene.Version.LUCENE_CURRENT, reader)
        result = lucene.StandardFilter(result)
        result = lucene.StopFilter(True, result, lucene.StopAnalyzer.ENGLISH_STOP_WORDS_SET)
        result = lucene.ASCIIFoldingFilter(result)
        result = lucene.EdgeNGramTokenFilter(result, self.side, self.minlength, self.maxlength)
        return result
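
For reference, here is how you might run some text through the analyzer above and inspect the resulting n-grams. This is just a minimal sketch assuming the PyLucene 3.x flat namespace (where TermAttribute is still available); the field name and sample text are placeholders:

import lucene
lucene.initVM()

analyzer = EdgeNGramAnalyzer(lucene.EdgeNGramTokenFilter.Side.FRONT, 2, 5)

# Feed a string through the analyzer and print each front edge n-gram
stream = analyzer.tokenStream("title", lucene.StringReader("typeahead"))
term = stream.addAttribute(lucene.TermAttribute.class_)
while stream.incrementToken():
    print term.term()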

Here is another example that reimplements the PorterStemmer:

# This sample illustrates how to write an Analyzer 'extension' in Python.
# 
#   What is happening behind the scenes ?
#
# The PorterStemmerAnalyzer python class does not in fact extend Analyzer,
# it merely provides an implementation for Analyzer's abstract tokenStream()
# method. When an instance of PorterStemmerAnalyzer is passed to PyLucene,
# with a call to IndexWriter(store, PorterStemmerAnalyzer(), True) for
# example, the PyLucene SWIG-based glue code wraps it into an instance of
# PythonAnalyzer, a proper java extension of Analyzer which implements a
# native tokenStream() method whose job is to call the tokenStream() method
# on the python instance it wraps. The PythonAnalyzer instance is the
# Analyzer extension bridge to PorterStemmerAnalyzer.

'''
More explanation...
Analyzers split a chunk of text into tokens.
An analyzer is applied to the whole index (unless you use a PerFieldAnalyzerWrapper).
Analyzers are built from Tokenizers and TokenFilters: Tokenizers break a string
up into tokens, and TokenFilters turn tokens into more tokens or filter tokens out.
'''

import sys, os
from datetime import datetime
from lucene import *
from IndexFiles import IndexFiles


class PorterStemmerAnalyzer(PythonAnalyzer):

    def tokenStream(self, fieldName, reader):

        #There can only be 1 tokenizer in each Analyzer
        result = StandardTokenizer(Version.LUCENE_CURRENT, reader)
        result = StandardFilter(result)
        result = LowerCaseFilter(result)
        result = PorterStemFilter(result)
        result = StopFilter(True, result, StopAnalyzer.ENGLISH_STOP_WORDS_SET)

        return result


if __name__ == '__main__':
    if len(sys.argv) < 2:
        sys.exit("requires at least one argument: lucene-index-path")
    initVM()
    start = datetime.now()
    try:
        IndexFiles(sys.argv[1], "index", PorterStemmerAnalyzer())
        end = datetime.now()
        print end - start
    except Exception, e:
        print "Failed: ", e

Also check out perFieldAnalyzerWrapper.java and KeywordAnalyzerTest.py:

        analyzer = PerFieldAnalyzerWrapper(SimpleAnalyzer())
        analyzer.addAnalyzer("partnum", KeywordAnalyzer())

        query = QueryParser(Version.LUCENE_CURRENT, "description",
                            analyzer).parse("partnum:Q36 AND SPACE")
        scoreDocs = self.searcher.search(query, 50).scoreDocs
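
For the query above to work, the same wrapper has to be used at index time so that "partnum" is kept as a single token. A hedged sketch of the indexing side (PyLucene 3.x API, an in-memory RAMDirectory and made-up sample data):

# Hedged sketch: index with the same PerFieldAnalyzerWrapper used for querying
directory = RAMDirectory()
analyzer = PerFieldAnalyzerWrapper(SimpleAnalyzer())
analyzer.addAnalyzer("partnum", KeywordAnalyzer())

writer = IndexWriter(directory, analyzer, True,
                     IndexWriter.MaxFieldLength.LIMITED)
doc = Document()
doc.add(Field("partnum", "Q36", Field.Store.YES, Field.Index.ANALYZED))
doc.add(Field("description", "one adjustable wrench for space modulators",
              Field.Store.YES, Field.Index.ANALYZED))
writer.addDocument(doc)
writer.close()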
answered 2012-03-04T18:19:53.197