How to start the Elasticsearch engine
This article walks through how the Elasticsearch engine is started. The material is kept simple and clear, so follow along step by step and let's dig into how the engine boots.
The Engine is the part of ES closest to the almighty Lucene: a wrapper that adapts Lucene for access in a distributed environment. Its interface follows the command pattern, which makes the operation log (translog) fall out almost for free. The old implementation class was called RobinEngine; newer versions renamed it and added a few version types, but that barely affects our reading of the source. Its two main concerns are the operation log and data versioning, and the highlight is its use of locks. Rather than dumping all the code here, we'll look at it from two angles: encapsulation and concurrency.
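To see why a command-pattern interface makes the translog nearly free, here is a toy sketch. The class names are made up for illustration; the real ones are Engine.Create, Engine.Delete, Translog.Create and friends. The point is that every operation is an object, so logging it means appending that object, and recovery means re-applying the list.

// Toy illustration of "operations as objects"; not the real ES classes.
import java.util.ArrayList;
import java.util.List;

interface Operation {}

class CreateOp implements Operation {
    final String uid, source;
    CreateOp(String uid, String source) { this.uid = uid; this.source = source; }
}

class DeleteOp implements Operation {
    final String uid;
    DeleteOp(String uid) { this.uid = uid; }
}

class ToyTranslog {
    private final List<Operation> log = new ArrayList<>();
    void add(Operation op) { log.add(op); }       // every engine call is appended as-is
    List<Operation> replay() { return log; }      // recovery = re-applying the commands in order
}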
Starting the engine
Since it is an engine, let's get it started before discussing anything else. ES normally wires these instances up with Guice; to keep things clear, we'll just new them directly:
public Engine createEngine() throws IOException {
    Index index = new Index("index");
    ShardId shardId = new ShardId(index, 1);
    ThreadPool threadPool = new ThreadPool();
    CodecService cs = new CodecService(shardId.index());
    AnalysisService as = new AnalysisService(shardId.index());
    SimilarityService ss = new SimilarityService(shardId.index());
    // the translog is written to the local file system
    Translog translog = new FsTranslog(shardId, EMPTY_SETTINGS, new File("c:/fs-translog"));
    // back the store with a RAM directory so the demo leaves no index files behind
    DirectoryService directoryService = new RamDirectoryService(shardId, EMPTY_SETTINGS);
    Store store = new Store(shardId, EMPTY_SETTINGS, null, directoryService,
            new LeastUsedDistributor(directoryService));
    SnapshotDeletionPolicy sdp = new SnapshotDeletionPolicy(
            new KeepOnlyLastDeletionPolicy(shardId, EMPTY_SETTINGS));
    MergeSchedulerProvider<?> scp = new SerialMergeSchedulerProvider(shardId, EMPTY_SETTINGS, threadPool);
    MergePolicyProvider<?> mpp = new LogByteSizeMergePolicyProvider(store,
            new IndexSettingsService(index, EMPTY_SETTINGS));
    IndexSettingsService iss = new IndexSettingsService(shardId.index(), EMPTY_SETTINGS);
    ShardIndexingService sis = new ShardIndexingService(shardId, EMPTY_SETTINGS,
            new ShardSlowLogIndexingService(shardId, EMPTY_SETTINGS, iss));
    Engine engine = new RobinEngine(shardId, EMPTY_SETTINGS, threadPool, iss, sis, null, store,
            sdp, translog, mpp, scp, as, ss, cs);
    return engine;
}
Wrapping Lucene
After full encapsulation you would query with JSON syntax and get JSON back, but that isn't implemented yet at the engine layer. Recall Lucene's CRUD; now let's see what CRUD looks like in the ES engine. First, the Lucene Document is wrapped into a ParsedDocument:
private ParsedDocument createParsedDocument(String uid, String id, String type, String routing,
        long timestamp, long ttl, Analyzer analyzer, BytesReference source, boolean mappingsModified) {
    Field uidField = new Field("_uid", uid, UidFieldMapper.Defaults.FIELD_TYPE);
    Field versionField = new NumericDocValuesField("_version", 0);
    Document document = new Document();
    document.add(uidField);
    document.add(versionField);
    // the raw JSON goes under _source (a StoredField, since it is stored-only)
    document.add(new StoredField("_source", source.toBytes()));
    document.add(new TextField("name", "myname", Field.Store.NO));
    return new ParsedDocument(uidField, versionField, id, type, routing, timestamp, ttl,
            Arrays.asList(document), analyzer, source, mappingsModified);
}
Engine engine = createEngine();
engine.start();
String json = "{\"name\":\"myname\"}";
BytesReference source = new BytesArray(json.getBytes());
ParsedDocument doc = createParsedDocument("2", "myid", "mytype", null, -1, -1,
        Lucene.STANDARD_ANALYZER, source, false);
// create
Engine.Create create = new Engine.Create(null, newUid("2"), doc);
engine.create(create);
create.version(2);
create.versionType(VersionType.EXTERNAL);
engine.create(create);
// update is analogous to create, omitted here
// query
TopDocs result = null;
try {
    Query query = new MatchAllDocsQuery();
    result = engine.searcher().searcher().search(query, 10);
    System.out.println(result.totalHits);
} catch (Exception e) {
    e.printStackTrace();
}
// get (done before the delete below, otherwise the uid no longer exists)
Engine.GetResult gr = engine.get(new Engine.Get(true, newUid("2")));
System.out.println(gr.source().source.toUtf8());
// delete
Engine.Delete delete = new Engine.Delete("mytype", "myid", newUid("2"));
engine.delete(delete);
Judging from this usage, the little guy doesn't look all that impressive. That's fine; it will keep getting stronger until it eventually grows into the full-blown ES.
Comparing query and get
We already understand the query flow, so what is the logic behind this get operation that Lucene itself doesn't have? Let's take a look.
In the best case, the data comes straight back from the translog. Otherwise the engine seeks the UID term in the TermsEnum (roughly a binary search over the term dictionary), then follows the pointer into the .fdt stored-fields file to return the content.
Put differently: if you split a query into the query and fetch phases, get differs only in the first phase, and compared with a TermQuery it skips a few steps.
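Before the Lucene side, here is a toy sketch of the translog fast path. A plain map stands in for the real versionMap plus translog-file read; none of these names are the real API, only the shape of the idea:

import java.util.HashMap;
import java.util.Map;

// Toy model of realtime get: an in-memory "translog" keyed by uid.
class RealtimeGetDemo {
    private final Map<String, String> translog = new HashMap<>(); // uid -> source

    void index(String uid, String source) {
        translog.put(uid, source);             // every write is logged first
    }

    String get(String uid) {
        String hit = translog.get(uid);
        if (hit != null) {
            return hit;                        // served without touching Lucene at all
        }
        return searchLucene(uid);              // fall back to the TermsEnum path below
    }

    private String searchLucene(String uid) {
        return null;                           // stand-in for the seekExact lookup
    }

    public static void main(String[] args) {
        RealtimeGetDemo demo = new RealtimeGetDemo();
        demo.index("mytype#2", "{\"name\":\"myname\"}");
        System.out.println(demo.get("mytype#2")); // visible before any refresh
    }
}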
// simplified demo of the Lucene-side lookup
for (AtomicReaderContext context : reader.leaves()) {
    try {
        Terms terms = context.reader().terms("brand");
        TermsEnum te = terms.iterator(null);
        BytesRef br = new BytesRef("陌".getBytes());
        // seek the exact term; roughly a binary search over the term dictionary
        if (te.seekExact(br, false)) {
            DocsEnum docs = te.docs(null, null);
            for (int d = docs.nextDoc(); d != DocsEnum.NO_MORE_DOCS; d = docs.nextDoc()) {
                // follow the doc id into the stored fields (.fdt) and print _source
                System.out.println(reader.document(d).getBinaryValue("_source").utf8ToString());
            }
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}
The difference between refresh and flush
refresh simply delegates to Lucene's searcherManager.maybeRefresh(). flush, on the other hand, comes in three flavors:
static class Flush {
    public static enum Type {
        /**
         * Create a new IndexWriter.
         */
        NEW_WRITER,
        /**
         * Commit the writer.
         */
        COMMIT,
        /**
         * Commit the translog.
         */
        COMMIT_TRANSLOG
    }
}
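Requesting one of these from the engine looks roughly like the sketch below. The builder-style type(...) setter is hedged against the 0.90-era Engine.Flush API, so treat the exact calls as an approximation rather than gospel:

// Hedged sketch: pick a flush flavor and hand it to the engine started earlier.
engine.flush(new Engine.Flush().type(Engine.Flush.Type.COMMIT_TRANSLOG)); // commit + clear translog
engine.flush(new Engine.Flush().type(Engine.Flush.Type.COMMIT));          // plain Lucene commit
engine.flush(new Engine.Flush().type(Engine.Flush.Type.NEW_WRITER));      // tear down and recreate the IndexWriter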
refresh is the lighter-weight of the two, and by default it fires automatically:
@Override
public TimeValue defaultRefreshInterval() {
    return new TimeValue(1, TimeUnit.SECONDS);
}
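On the Lucene side you can watch maybeRefresh() do its job in isolation. Here is a self-contained sketch against the Lucene 4.x API used elsewhere in this article; the LUCENE_42 constant is a stand-in for whichever release you actually build against:

Directory dir = new RAMDirectory();
IndexWriter writer = new IndexWriter(dir,
        new IndexWriterConfig(Version.LUCENE_42, new StandardAnalyzer(Version.LUCENE_42)));
// true = apply deletes when refreshing, null = default SearcherFactory
SearcherManager manager = new SearcherManager(writer, true, null);

Document doc = new Document();
doc.add(new TextField("name", "myname", Field.Store.NO));
writer.addDocument(doc);

manager.maybeRefresh();                       // what engine.refresh() boils down to
IndexSearcher searcher = manager.acquire();
try {
    System.out.println(searcher.search(new MatchAllDocsQuery(), 10).totalHits); // 1
} finally {
    manager.release(searcher);
}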
Concurrency control
Every operation follows much the same logic, so let's pick create and take a look. The engine is the most lock-heavy place in all of ES, locks nested layer upon layer; if nothing ever went wrong here, that would be the real surprise.
@Override
public void create(Create create) throws EngineException {
    rwl.readLock().lock();
    try {
        IndexWriter writer = this.indexWriter;
        if (writer == null) {
            throw new EngineClosedException(shardId, failedEngine);
        }
        innerCreate(create, writer);
        dirty = true;
        possibleMergeNeeded = true;
        flushNeeded = true;
    } catch (IOException e) {
        throw new CreateFailedEngineException(shardId, create, e);
    } catch (OutOfMemoryError e) {
        failEngine(e);
        throw new CreateFailedEngineException(shardId, create, e);
    } catch (IllegalStateException e) {
        if (e.getMessage().contains("OutOfMemoryError")) {
            failEngine(e);
        }
        throw new CreateFailedEngineException(shardId, create, e);
    } finally {
        rwl.readLock().unlock();
    }
}

private void innerCreate(Create create, IndexWriter writer) throws IOException {
    synchronized (dirtyLock(create.uid())) {
        // ... version checking omitted, not covered here
        if (create.docs().size() > 1) {
            writer.addDocuments(create.docs(), create.analyzer());
        } else {
            writer.addDocument(create.docs().get(0), create.analyzer());
        }
        Translog.Location translogLocation = translog.add(new Translog.Create(create));
        // versionMap.put(versionKey, new VersionValue(updatedVersion, false,
        //         threadPool.estimatedTimeInMillis(), translogLocation));
        indexingService.postCreateUnderLock(create);
    }
}
First, the read-write lock. Most operations take the read lock; the write lock is taken only when starting or closing the engine, during recovery phase 3, and for a NEW_WRITER flush. In plain terms: while the engine is starting or closing, recovering data, or recreating the IndexWriter, nothing else is allowed to run. Next comes an object lock per UID, implemented with lock striping, the same idea behind ConcurrentHashMap, which avoids allocating a huge number of lock objects. Remember that UIDs are a high-cardinality set; locking on a per-UID String would run you out of memory in no time.
private final Object[] dirtyLocks;

this.dirtyLocks = new Object[indexConcurrency * 50]; // by default at most 8 * 50 lock objects
for (int i = 0; i < dirtyLocks.length; i++) {
    dirtyLocks[i] = new Object();
}

private Object dirtyLock(BytesRef uid) {
    int hash = DjbHashFunction.DJB_HASH(uid.bytes, uid.offset, uid.length);
    // abs returns Integer.MIN_VALUE, so we need to protect against it...
    if (hash == Integer.MIN_VALUE) {
        hash = 0;
    }
    return dirtyLocks[Math.abs(hash) % dirtyLocks.length];
}
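To make the striping concrete, here is a self-contained toy version you can actually run. DjbHashFunction is ES-internal, so this sketch inlines the classic DJB2 hash; the class and method names are made up for the demo:

public class StripedLockDemo {
    private final Object[] locks;

    StripedLockDemo(int stripes) {
        locks = new Object[stripes];
        for (int i = 0; i < stripes; i++) {
            locks[i] = new Object();
        }
    }

    static int djb2(byte[] bytes) {
        int hash = 5381;
        for (byte b : bytes) {
            hash = ((hash << 5) + hash) + b;   // hash * 33 + b
        }
        return hash;
    }

    Object lockFor(String uid) {
        int hash = djb2(uid.getBytes(java.nio.charset.StandardCharsets.UTF_8));
        if (hash == Integer.MIN_VALUE) {
            hash = 0;                          // Math.abs(MIN_VALUE) is still negative
        }
        return locks[Math.abs(hash) % locks.length];
    }

    public static void main(String[] args) {
        StripedLockDemo demo = new StripedLockDemo(8 * 50);
        // two operations on the same uid contend; different uids usually do not
        System.out.println(demo.lockFor("type#1") == demo.lockFor("type#1")); // true
        System.out.println(demo.lockFor("type#1") == demo.lockFor("type#2")); // usually false
    }
}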
Thanks for reading! That wraps up how to start the Elasticsearch engine. Hopefully the walkthrough has given you a deeper feel for how the engine boots; as always, the details are best verified by experimenting yourself.