Elastic is an Elasticsearch client for the Go programming language.
The master branch of Elastic currently targets Elasticsearch 2.0 or higher! See below for details.
See the wiki for additional information about Elastic.
This is the source code of the current version of Elastic, version 3.0. Elastic 3.0 targets Elasticsearch 2.0 (or higher).
If you came from an earlier version and found that you cannot update, don't worry. Earlier versions are still available. All you need to do is go-get them and change your import path. See below for details.
Elastic 2.0 targets Elasticsearch 1.x and is still actively maintained. Here's what you need to do to use Elastic 2.0:
```sh
$ go get gopkg.in/olivere/elastic.v2
```
Use the following import path:
```go
import "gopkg.in/olivere/elastic.v2"
```
Elastic 1.0 is no longer maintained. It has an outdated API and will only work with Elasticsearch 0.90 to 1.2. However, it is still available. Here's what you need to do to use Elastic 1.0:
```sh
$ go get gopkg.in/olivere/elastic.v1
```
Use the following import path:
```go
import "gopkg.in/olivere/elastic.v1"
```
We have been using Elastic in production since 2012. In our experience, Elastic is quite stable, but the API changes now and then. We strive for API compatibility. However, Elasticsearch sometimes introduces breaking changes and we sometimes have to adapt.
Having said that, there have been no major API changes that required you to rewrite large parts of your application. More often than not, it's a matter of renaming APIs and adding or removing features so that Elastic stays in sync with Elasticsearch.
Elastic has been used in production with the following Elasticsearch versions: 0.90, 1.0-1.7, 2.0. Furthermore, we use Travis CI to test Elastic with the most recent versions of Elasticsearch and Go. See the .travis.yml file for the exact matrix and Travis for the results.
Elasticsearch has quite a few features. Most of them are implemented by Elastic. I add features and APIs as required. It's straightforward to implement missing pieces. I'm accepting pull requests :-)
In any case, I hope you find the project useful.
The first thing you do is create a Client. The client connects to Elasticsearch on http://127.0.0.1:9200 by default.
You typically create one client per application. Here's a complete example of creating a client, creating an index, adding a document, executing a search, and deleting the index again.
```go
// Create a client
client, err := elastic.NewClient()
if err != nil {
	// Handle error
}

// Create an index
_, err = client.CreateIndex("twitter").Do()
if err != nil {
	// Handle error
	panic(err)
}

// Add a document to the index
tweet := Tweet{User: "olivere", Message: "Take Five"}
_, err = client.Index().
	Index("twitter").
	Type("tweet").
	Id("1").
	BodyJson(tweet).
	Do()
if err != nil {
	// Handle error
	panic(err)
}

// Search with a term query
termQuery := elastic.NewTermQuery("user", "olivere")
searchResult, err := client.Search().
	Index("twitter").   // search in index "twitter"
	Query(termQuery).   // specify the query
	Sort("user", true). // sort by "user" field, ascending
	From(0).Size(10).   // take documents 0-9
	Pretty(true).       // pretty print request and response JSON
	Do()                // execute
if err != nil {
	// Handle error
	panic(err)
}

// searchResult is of type SearchResult and returns hits, suggestions,
// and all kinds of other information from Elasticsearch.
fmt.Printf("Query took %d milliseconds\n", searchResult.TookInMillis)

// Each is a convenience function that iterates over hits in a search result.
// It makes sure you don't need to check for nil values in the response.
// However, it ignores errors in serialization. If you want full control
// over iterating the hits, see below.
var ttyp Tweet
for _, item := range searchResult.Each(reflect.TypeOf(ttyp)) {
	if t, ok := item.(Tweet); ok {
		fmt.Printf("Tweet by %s: %s\n", t.User, t.Message)
	}
}
// TotalHits is another convenience function that works even when something goes wrong.
fmt.Printf("Found a total of %d tweets\n", searchResult.TotalHits())

// Here's how you iterate through results with full control over each step.
if searchResult.Hits != nil {
	fmt.Printf("Found a total of %d tweets\n", searchResult.Hits.TotalHits)

	// Iterate through results
	for _, hit := range searchResult.Hits.Hits {
		// hit.Index contains the name of the index

		// Deserialize hit.Source into a Tweet (could also be just a map[string]interface{}).
		var t Tweet
		err := json.Unmarshal(*hit.Source, &t)
		if err != nil {
			// Deserialization failed
		}

		// Work with tweet
		fmt.Printf("Tweet by %s: %s\n", t.User, t.Message)
	}
} else {
	// No hits
	fmt.Print("Found no tweets\n")
}

// Delete the index again
_, err = client.DeleteIndex("twitter").Do()
if err != nil {
	// Handle error
	panic(err)
}
```
See the wiki for more details.
- Index API
- Get API
- Delete API
- Update API
- Multi Get API
- Bulk API
- Delete By Query API
- Term Vectors
- Multi termvectors API
- Search
- Search Template
- Search Shards API
- Suggesters
- Term Suggester
- Phrase Suggester
- Completion Suggester
- Context Suggester
- Multi Search API
- Count API
- Search Exists API
- Validate API
- Explain API
- Percolator API
- Field Stats API
- Metrics Aggregations
- Avg
- Cardinality
- Extended Stats
- Geo Bounds
- Max
- Min
- Percentiles
- Percentile Ranks
- Scripted Metric
- Stats
- Sum
- Top Hits
- Value Count
- Bucket Aggregations
- Children
- Date Histogram
- Date Range
- Filter
- Filters
- Geo Distance
- GeoHash Grid
- Global
- Histogram
- IPv4 Range
- Missing
- Nested
- Range
- Reverse Nested
- Sampler
- Significant Terms
- Terms
- Pipeline Aggregations
- Avg Bucket
- Derivative
- Max Bucket
- Min Bucket
- Sum Bucket
- Moving Average
- Cumulative Sum
- Bucket Script
- Bucket Selector
- Serial Differencing
- Aggregation Metadata
- Create Index
- Delete Index
- Get Index
- Indices Exists
- Open / Close Index
- Put Mapping
- Get Mapping
- Get Field Mapping
- Types Exists
- Index Aliases
- Update Indices Settings
- Get Settings
- Analyze
- Index Templates
- Warmers
- Indices Stats
- Indices Segments
- Indices Recovery
- Clear Cache
- Flush
- Refresh
- Optimize
- Shadow Replica Indices
- Upgrade
The cat APIs are not implemented as of now. We think they are better suited for operating Elasticsearch from the command line.
- cat aliases
- cat allocation
- cat count
- cat fielddata
- cat health
- cat indices
- cat master
- cat nodes
- cat pending tasks
- cat plugins
- cat recovery
- cat thread pool
- cat shards
- cat segments
- Cluster Health
- Cluster State
- Cluster Stats
- Pending Cluster Tasks
- Cluster Reroute
- Cluster Update Settings
- Nodes Stats
- Nodes Info
- Nodes hot_threads
- Match All Query
- Inner hits
- Full text queries
- Match Query
- Multi Match Query
- Common Terms Query
- Query String Query
- Simple Query String Query
- Term level queries
- Term Query
- Terms Query
- Range Query
- Exists Query
- Missing Query
- Prefix Query
- Wildcard Query
- Regexp Query
- Fuzzy Query
- Type Query
- Ids Query
- Compound queries
- Constant Score Query
- Bool Query
- Dis Max Query
- Function Score Query
- Boosting Query
- Indices Query
- And Query (deprecated)
- Not Query
- Or Query (deprecated)
- Filtered Query (deprecated)
- Limit Query (deprecated)
- Joining queries
- Nested Query
- Has Child Query
- Has Parent Query
- Geo queries
- GeoShape Query
- Geo Bounding Box Query
- Geo Distance Query
- Geo Distance Range Query
- Geo Polygon Query
- Geohash Cell Query
- Specialized queries
- More Like This Query
- Template Query
- Script Query
- Span queries
- Span Term Query
- Span Multi Term Query
- Span First Query
- Span Near Query
- Span Or Query
- Span Not Query
- Span Containing Query
- Span Within Query
- Snapshot and Restore
- Sort by score
- Sort by field
- Sort by geo distance
- Sort by script
Scrolling through documents (e.g. `search_type=scan`) is implemented via the Scroll and Scan services. The ClearScroll API is implemented as well.
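Both services hand back results page by page until the scroll is exhausted. As a rough, stdlib-only sketch of that general cursor pattern, here is a mock pager that signals completion with a sentinel error, much like `io.EOF` does for readers (the `pageCursor` type and `errEOS` sentinel below are illustrative inventions, not Elastic's actual API; see the wiki for the real services):

```go
package main

import (
	"errors"
	"fmt"
)

// errEOS is a sentinel "end of stream" error: the loop below treats it
// as the normal termination signal, not a failure.
var errEOS = errors.New("EOS")

// pageCursor is a hypothetical stand-in for a scroll cursor: each call
// to Next returns one page of results until the data is exhausted.
type pageCursor struct {
	pages [][]string
	pos   int
}

func (c *pageCursor) Next() ([]string, error) {
	if c.pos >= len(c.pages) {
		return nil, errEOS
	}
	page := c.pages[c.pos]
	c.pos++
	return page, nil
}

func main() {
	cursor := &pageCursor{pages: [][]string{
		{"doc1", "doc2"},
		{"doc3"},
	}}
	for {
		page, err := cursor.Next()
		if err == errEOS {
			break // all pages consumed; normal exit
		}
		if err != nil {
			panic(err) // a real error
		}
		for _, doc := range page {
			fmt.Println(doc)
		}
	}
}
```

The key design point is that "no more data" is distinguished from a genuine error by comparing against the sentinel before handling `err` generically.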
Read the contribution guidelines.
Thanks a lot to the great folks working hard on Elasticsearch and Go.
MIT License. See the LICENSE file provided in the repository for details.