Monitoring the CM Elasticsearch Index
Here are a few different ways to monitor and manage your new CM 9.2 Elasticsearch instance.
via Chrome
Visit the Chrome Web Store and add the ElasticSearch Head extension.
When you launch the extension, update the server address to point at your Elasticsearch instance. With the visuals I can easily see that unassigned shards are what's pushing my cluster health to yellow. Yellow means every primary shard is assigned but one or more replicas are not, which is typical of a single-node cluster, where the replica shards have nowhere to be allocated.
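If you want the raw numbers behind that picture, the cat shards API lists every shard and its state. Here's a quick sketch using PowerShell's built-in Invoke-RestMethod, assuming Elasticsearch is listening on the default port 9200:

```powershell
# List every shard and its state; unassigned replica shards are what turn a cluster yellow.
# Assumes Elasticsearch on the default http://localhost:9200.
Invoke-RestMethod -Uri 'http://localhost:9200/_cat/shards?v'
```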
You can also browse the data in the index.
via Head Stand-alone Server
This gives the same functionality as the extension shown above, except the interface runs as a stand-alone server on port 9100. You can access it from any browser at http://localhost:9100. Use this option if you can't use Chrome.
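If you go the stand-alone route, one quick way to get head running is the public mobz/elasticsearch-head Docker image (assuming Docker is available on the box). Note that Elasticsearch also needs CORS enabled so the head UI can talk to it:

```powershell
# Run the stand-alone head UI on port 9100 (assumes Docker and the public mobz/elasticsearch-head image).
docker run -d -p 9100:9100 mobz/elasticsearch-head:5

# Elasticsearch must allow cross-origin requests from the head UI.
# Add these lines to elasticsearch.yml and restart the service:
#   http.cors.enabled: true
#   http.cors.allow-origin: "*"
```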
via PowerShell
Install Elastico from an elevated PowerShell prompt.
You'll have to trust the repository in order to continue. If you're in a locked-down environment, visit the GitHub page for the source and manually install the module instead.
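The install itself is a one-liner from the PowerShell Gallery:

```powershell
# From an elevated PowerShell prompt. Answer 'Y' at the untrusted-repository
# prompt, or trust the gallery first so the prompt never appears.
Set-PSRepository -Name PSGallery -InstallationPolicy Trusted
Install-Module -Name Elastico
```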
Now I can run a command to check the overall cluster's health...
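Under the hood this is Elasticsearch's cluster health API, which you can also hit directly if you just want a quick sanity check. Here's a rough sketch using the built-in Invoke-RestMethod, assuming the default port 9200:

```powershell
# Cluster-level health: status (green/yellow/red), node count, and the unassigned shard count.
$health = Invoke-RestMethod -Uri 'http://localhost:9200/_cluster/health'
$health | Format-List status, number_of_nodes, unassigned_shards
```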
Another command checks the status of each index...
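The plain REST equivalent, if you'd rather not depend on the module, is the cat indices API; another minimal sketch against the default port:

```powershell
# One row per index: health, status, document count, and store size.
Invoke-RestMethod -Uri 'http://localhost:9200/_cat/indices?v'
```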
Note that in both of the Elastico calls above I'm using a command intended for v5 even though I'm running v6. Any version of the command seems to work, as they appear to be forward and backward compatible, but it probably still makes the most sense to run the one intended for the most recent version of Elasticsearch.
I can also perform a search. I went into the client and found the record I worked with in my last post (where I had used Kibana as a front-end to the ES index), then searched for it via PowerShell.
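For a rough idea of what that looks like without the module, here's the same kind of search done with plain Invoke-RestMethod. The hpecm_* pattern is the default CM index prefix mentioned below, and the field and value in the query are just placeholders, not the real CM mapping:

```powershell
# Search the CM content index (hpecm_* is the default CM naming prefix).
# 'title' and 'contract' are placeholder field/value names; substitute whatever your record actually contains.
$body = @{
    query = @{
        match = @{ title = 'contract' }
    }
} | ConvertTo-Json -Depth 5

$result = Invoke-RestMethod -Uri 'http://localhost:9200/hpecm_*/_search' `
                            -Method Post -ContentType 'application/json' -Body $body

# List the matching documents.
$result.hits.hits | Select-Object _index, _id, _score
```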
via Kibana
When creating your index pattern, keep in mind that the default name of the CM content index starts with "hpecm_". You can use that prefix, or just a plain asterisk, as the pattern. If you want to use the Timelion feature, you should also pick the date registered field as the time filter.
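If you're not sure what the date registered field is actually called in the index, the mapping API lists every field and its type. A quick sketch (the exact field names you'll see depend on your CM configuration):

```powershell
# Dump the field mappings for the CM content index so you can spot the date fields
# before picking one as Kibana's time filter.
$mapping = Invoke-RestMethod -Uri 'http://localhost:9200/hpecm_*/_mapping'
$mapping | ConvertTo-Json -Depth 10
```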
When you then click Discover you can explore the index.
You can pick a different time period that might show some results...
If I can now see some data, I know the system is at least partially working. Kibana doesn't give much detail about the internal workings of the cluster or the index itself, though.