
Support UTF-8 label matchers: Add new parser #3453

Merged

Conversation

grobinson-grafana
Contributor

@grobinson-grafana grobinson-grafana commented Aug 9, 2023

What this pull request does

This pull request adds the new label matchers parser proposed in #3353. Included are a number of compliance tests comparing the grammar supported by the new parser with that of the existing parser in pkg/labels. The compliance tests can be run by passing the "compliance" tag to go test.
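
For reference, a typical invocation looks like this (the package path under the "compliance" build tag is an assumption and may differ):

go test -tags=compliance ./matchers/compliance/...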

Motivation

The original motivation for writing this parser was to add support for matching label names containing . and spaces in grafana/grafana. However, around the same time I learned that the Prometheus maintainers had agreed to add support for UTF-8 labels in Alertmanager, so I decided to continue the work and see if it could be upstreamed to Alertmanager instead.

The original source code can be found at grobinson-grafana/matchers.

Supported grammar

In its current version, this LL(1) parser is not 100% compatible with the existing regular expression, although it is close and can be modified if required. The grammar can be understood as follows:

<expr>        ::= "{" <sequence> "}" | <sequence>
<sequence>    ::= <matcher> | <sequence> "," <matcher>
<matcher>     ::= <label_name> <operator> <label_value>
<label_name>  ::= <quoted> | <unquoted>
<operator>    ::= "=" | "=~" | "!=" | "!~"
<label_value> ::= <quoted> | <unquoted>
<quoted>      ::= "\"" /.*/ "\""
<unquoted>    ::= /^[^{}!=~,"'` \]+$/

Here are some examples of valid inputs:

{}
foo=bar
{foo=bar}
{foo=bar🙂}
{foo!=bar}
{foo="bar"}
{foo="bar🙂"}
{foo=~[a-zA-Z0-9]+}
{foo=~"[a-zA-Z0-9]+"}
{"foo"!~"[0-9]+"}
{ "foo with spaces" = "bar with spaces" }
{foo="bar",bar="foo 🙂","baz"!=qux,qux!="baz 🙂"}

and some examples of invalid inputs:

=
foo
{
{foo
{foo=
{foo=bar
{foo=bar,
{foo=bar,}
{foo=bar 🙂}
{foo with spaces=bar with spaces}
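
To make the grammar concrete, here is a minimal usage sketch. The import path and the parse.Matchers entry point are assumptions based on this pull request and may differ from the final API:

package main

import (
	"fmt"

	"github.com/prometheus/alertmanager/matchers/parse"
)

func main() {
	// A valid expression parses into a list of matchers.
	matchers, err := parse.Matchers(`{foo="bar",baz!~"[0-9]+"}`)
	if err != nil {
		panic(err)
	}
	fmt.Println(matchers)

	// An invalid expression (trailing comma) returns a positional error.
	if _, err := parse.Matchers(`{foo=bar,}`); err != nil {
		fmt.Println(err)
	}
}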

Breaking changes

All ^{}!=~,"'`\ and whitespace must be double quoted

It is possible to use UTF-8 on both sides of the expression. However, label names and label values that contain one or more ^{}!=~,"'` characters or whitespace must be double quoted.

Expressions must start and end with open and closing braces

All expressions must start and end with { and }, although this can be relaxed if required. For example, foo=bar is not valid; it must be {foo=bar}.

Trailing commas are not permitted

Trailing commas are not permitted. For example, {foo=bar,} is not valid; it must be {foo=bar}.

All non [a-zA-Z_:][a-zA-Z0-9_:]* values must be double quoted

The set of unquoted characters is now the same on both sides of the expression. In other words, both label names and label values without double quotes must match the regular expression [a-zA-Z_:][a-zA-Z0-9_:]*. For example, {foo=!bar} is not valid; it must be {foo="!bar"}. In current versions of Alertmanager, unquoted label values can contain all UTF-8 code points except the comma, such as {foo=!bar}.

There are two reasons for this:

1. It's no longer possible to write ambiguous matchers, which I feel is something Alertmanager should fix. For example, is {foo=~} equivalent to {foo="~"} or {foo=~""}?

2. If we restrict the =, !, ~ characters to double quotes, we can keep the grammar LL(1). Without this restriction, lookahead/backtracking is required to parse matchers such as {foo==~!=!~bar}, which are valid in current versions of Alertmanager.
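
To make the unquoted-value rule concrete, here is a small sketch using only Go's standard library; the pattern is the same one quoted above:

package main

import (
	"fmt"
	"regexp"
)

// unquoted matches values that may appear without double quotes
// under the new grammar.
var unquoted = regexp.MustCompile(`^[a-zA-Z_:][a-zA-Z0-9_:]*$`)

func main() {
	fmt.Println(unquoted.MatchString("foo"))  // true: no quoting needed
	fmt.Println(unquoted.MatchString("!bar")) // false: must be written "!bar"
}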

Errors

One of the goals of this LL(1) parser is to provide better error messages than are possible with just a regular expression. For example:

{foo
0:4: end of input: expected an operator such as '=', '!=', '=~' or '!~'

{foo=bar
0:8: end of input: expected close paren

foo=bar}
0:8: }: expected opening paren

{foo=bar,,}
9:10: unexpected ,: expected a matcher or close paren after comma

{foo=bar 🙂}
9:13: 🙂: invalid input: expected comma or closing '}'

{foo with spaces=bar with spaces}
5:9: unexpected with: expected an operator such as '=', '!=', '=~' or '!~'

Benchmarks

I've also provided a number of benchmarks of both the LL(1) parser and the regex parser that supports UTF-8. These can be found at grobinson-grafana/matchers-benchmarks. However, to run them, go.mod must be updated to use the branch https://github.com/grafana/prometheus-alertmanager/tree/yuri-tceretian/utf-8-label-names.

BenchmarkMatchersSimple, BenchmarkPrometheusSimple
{foo="bar"}

BenchmarkMatchersComplex, BenchmarkPrometheusComplex
{foo="bar",bar="foo 🙂","baz"!=qux,qux!="baz 🙂"}

BenchmarkMatchersRegexSimple, BenchmarkPrometheusRegexSimple
{foo=~"[a-zA-Z_:][a-zA-Z0-9_:]*"}

BenchmarkMatchersRegexComplex, BenchmarkPrometheusRegexComplex
{foo=~"[a-zA-Z_:][a-zA-Z0-9_:]*",bar=~"[a-zA-Z_:]","baz"!~"[a-zA-Z_:][a-zA-Z0-9_:]*",qux!~"[a-zA-Z_:]"}
go test -bench=. -benchmem
goos: darwin
goarch: arm64
pkg: github.com/grobinson-grafana/matchers-benchmarks
BenchmarkMatchersRegexSimple-8      	  488295	      2425 ns/op	    3248 B/op	      49 allocs/op
BenchmarkMatchersRegexComplex-8     	  138081	      9074 ns/op	   11448 B/op	     169 allocs/op
BenchmarkPrometheusRegexSimple-8    	  329244	      3496 ns/op	    3531 B/op	      58 allocs/op
BenchmarkPrometheusRegexComplex-8   	   95188	     12554 ns/op	   12619 B/op	     204 allocs/op
BenchmarkMatchersSimple-8           	 2888340	       414.9 ns/op	      56 B/op	       2 allocs/op
BenchmarkMatchersComplex-8          	  741590	      1628 ns/op	     248 B/op	       7 allocs/op
BenchmarkPrometheusSimple-8         	 1919209	       613.9 ns/op	     233 B/op	       8 allocs/op
BenchmarkPrometheusComplex-8        	  425430	      2803 ns/op	    1015 B/op	      31 allocs/op
PASS
ok  	github.com/grobinson-grafana/matchers-benchmarks	11.766s
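
Each benchmark is essentially a parse loop over the corresponding input listed above. A sketch of what BenchmarkMatchersSimple might look like (the package name and import path are assumptions):

package benchmarks

import (
	"testing"

	"github.com/prometheus/alertmanager/matchers/parse"
)

func BenchmarkMatchersSimple(b *testing.B) {
	for i := 0; i < b.N; i++ {
		// Parse the simple input and fail the benchmark on error.
		if _, err := parse.Matchers(`{foo="bar"}`); err != nil {
			b.Fatal(err)
		}
	}
}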

@grobinson-grafana
Copy link
Contributor Author

I forgot to sign off the commit, so I force-pushed.

@grobinson-grafana grobinson-grafana force-pushed the grobinson/label-matchers-parser branch 3 times, most recently from bb4e042 to e975ad6 on August 9, 2023 at 10:36
Member

@gotjosh gotjosh left a comment


Dropping the first half of my review - it's mostly nits. I still need to go through parse.go and its tests, but so far so good (with the caveat that I don't know much about parsers, but I hope that by the end of this feature I'm more educated on the matter 😄).

@grobinson-grafana grobinson-grafana changed the title Add label matchers parser Support UTF-8 matchers: Add new matchers parser Aug 9, 2023
@grobinson-grafana grobinson-grafana changed the title Support UTF-8 matchers: Add new matchers parser Support UTF-8 label matchers: Add new parser Aug 9, 2023
@grobinson-grafana
Contributor Author

I changed the name of the PR to start with "Support UTF-8 matchers". I will use this on all PRs for this ongoing work to make it easier to find PRs related to this epic.

Member

@gotjosh gotjosh left a comment


Great job so far, your test coverage is truly impressive - I played around modifying things as I went along and every time I changed something there were justified cases against them.

I've dropped you a set of extra nits that I'd love to discuss - I'm not done with parse.go as it's very dense but we're getting there.

// and TokenNone is not one of the accepted kinds. It is possible to use either
// Scan() or Peek() as fn depending on whether accept should consume or peek
// the next token.
func (p *Parser) accept(fn func() (Token, error), kind ...TokenKind) (bool, error) {
Member

I understand the need to try and keep some level of parity between accept and expect, but I don't see the correlation between them.

My understanding of the current usage of accept is: peek at the next token and tell me if it's the kind I expect. As such, it feels clearer to me to just inline this, as it's only relevant in the parseOpenParen function and there's no need for an extraction.

I propose the following:

// In parse.go
func (p *Parser) peekNext() (Token, error) {
	t, err := p.lexer.Peek()
	if err != nil {
		return Token{}, err
	}

	if t.Kind == TokenNone {
		return Token{}, fmt.Errorf("0:%d: %w", len(p.input), ErrEOF)
	}

	return t, nil
}
// in token.go
// IsAny reports whether the token is any of the specified TokenKinds.
func (t Token) IsAny(kinds ...TokenKind) bool {
	for _, k := range kinds {
		if t.Kind == k {
			return true
		}
	}

	return false
}

And finally, change parseOpenParen

func (p *Parser) parseOpenParen(l *Lexer) (parseFn, error) {
	// Can start with an optional open brace.
	currentToken, err := p.peekNext()
	if err != nil {
		if errors.Is(err, ErrEOF) {
			return p.parseEOF, nil
		}
		return nil, err
	}

	p.hasOpenParen = currentToken.IsAny(TokenOpenBrace)
	// If the token was an open brace it must be scanned so the token
	// following it can be peeked.
	if p.hasOpenParen {
		if _, err = l.Scan(); err != nil {
			panic("Unexpected error scanning open brace")
		}

		// If the next token is a close brace there are no matchers in the input,
		// and we can just parse the close brace.
		currentToken, err = p.peekNext()
		if err != nil {
			return nil, fmt.Errorf("%s: %w", err, ErrNoCloseBrace)
		}

		if currentToken.IsAny(TokenCloseBrace) {
			return p.parseCloseParen, nil
		}
	}

	if currentToken.IsAny(TokenCloseBrace) {
		return p.parseCloseParen, nil
	}

	return p.parseLabelMatcher, nil
}

Contributor Author

Updated. I'm wondering whether I should just update the lexer to return an ErrorEOF error along with TokenNone when there is no more input. I think doing so would remove the need for peekNext; instead we could just call l.Peek().

Member

Discussed offline; we agreed that we'll move the ErrorEOF errors into the lexer to simplify what's going on in the parser. I'll wait on that before I continue with the review.

Contributor Author

I don't think I'm going to continue with moving ErrorEOF into the lexer; it doesn't work quite as well as I had expected. Instead, I'm going to go with the other option, where the lexer makes its position available via a public method and we remove input from the parser.

@beorn7
Member

beorn7 commented Aug 15, 2023

Without having done a thorough review, just a thought:

All non [a-zA-Z_:][a-zA-Z0-9_:]* values must be double quoted

I like this, as it matches the Prometheus naming restrictions. And even with the upcoming introduction of UTF-8 strings as names, the requirement to quote the name would be exactly the same. Yay, consistency! However, the AM matchers aren't really fully consistent with PromQL anyway (e.g. single quotes and backticks aren't allowed). How about going the other way and only requiring double-quoting if a specific subset of characters is contained? That would be something like {}"=!~, and any whitespace (and maybe some others, to be prepared for future extensions – for example, I could see AM matchers supporting backticks in the future to switch off escape sequence expansion, as in PromQL).

@beorn7
Member

beorn7 commented Aug 15, 2023

Speaking of escape sequences: I see cases like foo="\\\"" in the test cases, so they seem to be supported, but I don't see that mentioned in the grammar. Am I missing something?

@grobinson-grafana
Contributor Author

However, the AM matchers aren't really fully consistent with PromQL anyway (e.g. single quotes and backticks aren't allowed).

Single quotes and backticks can be supported if needed. The strconv.Unquote function from Go's strconv package supports double-quoted, single-quoted and backtick strings - so we'd just need to update the lexer to lex strings with either delimiter.

How about going the other way and only require double-quoting if a specific subset of characters is contained? Which would be something like {}"=!~

I don't think it's that useful, because in the case of regexes some will work without double quoting and others won't, depending on the contents of the regex. For example, foo=~[a-z+] will parse but foo=~[a-z+!] won't, because there is an unquoted !. For that reason I chose to enforce double quoting of all non [a-zA-Z_][a-zA-Z0-9_]* inputs.

@beorn7
Member

beorn7 commented Aug 16, 2023

The strconv.Unquote function from Go's strconv package supports double-quoted, single-quoted and backtick strings - so we'd just need to update the lexer to lex strings with either delimiter.

And you'd also need to change whether escape sequences are honored or not. `foo\tbar` and "foo\tbar" are different strings in PromQL.
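
This difference is easy to demonstrate with strconv.Unquote itself (a minimal standard-library sketch, not code from this PR):

package main

import (
	"fmt"
	"strconv"
)

func main() {
	// Double quotes expand escape sequences: \t becomes a real tab.
	s1, _ := strconv.Unquote(`"foo\tbar"`)
	fmt.Printf("%q\n", s1) // "foo\tbar" (contains a tab character)

	// Backquotes are raw: the backslash and 't' are kept literally.
	s2, _ := strconv.Unquote("`foo\\tbar`")
	fmt.Printf("%q\n", s2) // "foo\\tbar"
}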

I don't think it's that useful because in the case of regex there will be some regexes that work without double quoting and others that don't, depending on the contents of the regex.

I think it's mostly useful because fewer current use cases would break. I leave it to you and @gotjosh to make the trade-off here.

Just saying that technically, every string is a regexp, even without any characters that have a special meaning in regexps. So even with the more restrictive character set in this PR, abc is a regexp that works without quoting and 0abc is a regexp that doesn't. While I appreciate the consistency with the Prometheus special-character definition (which is the strongest argument in favor, IMHO), for those not familiar with Prometheus it will be confusing either way when quoting is required and when it isn't. So we might as well make it as convenient as possible…

I'm not feeling strongly either way. I just want you to make an informed call. "Consistency with Prometheus" and "break as few existing use cases as possible" are arguments in different directions, and I would weigh each one much heavier than "characters with special meaning in regexps should always require quoting".

@grobinson-grafana
Contributor Author

And you also needed to change whether escape sequences are honored or not. `foo\tbar` and "foo\tbar" are different strings in PromQL.

Ah! Yes, I remember! Perhaps we can add that in the future if there is demand for it, as it won't be a breaking change.

Just saying that technically, every string is a regexp, even without any characters that have a special meaning in regexps.

The concern I have here is that I don't think we can give useful error messages to users should they change their regex from alertname=abc to alertname=abc!, because the lexer just lexes the input into a sequence of tokens and doesn't know about the structure of matchers.

For example, the error message will be something like:

unexpected end of input, expected one of '=~'

The user will then change their regex to alertname=abc!=, which will fix the lexer error, but they will then encounter a parser error because the parser has found an operator != at the end:

unexpected !=: expected a comma or close paren

It doesn't know whether the user's intention was to have a regex abc!= or whether they had made a copy/paste error and accidentally added != to the end by mistake.

What do you think @gotjosh?

@beorn7
Member

beorn7 commented Aug 16, 2023

Are you saying that the current state of this PR always requires quoting for regexps? E.g. foo=~bar would be invalid?

@grobinson-grafana
Contributor Author

Yes, that's correct! It's all mentioned under Breaking changes at the top of the PR (see "All non [a-zA-Z_:][a-zA-Z0-9_:]* values must be double quoted").

@grobinson-grafana
Contributor Author

The reasoning behind this is that the original goals of this project were 1. to eliminate parsing ambiguities in the current parser, and 2. to support UTF-8.

If we allow unquoted control characters, we have parsing ambiguities - for example, foo=~=bar. If we disallow them, we no longer have parsing ambiguities, but the parser as designed cannot provide helpful error messages, as the lexer was structured around control characters being double-quoted. It would need to be rewritten if we want to provide helpful error messages in this case.

@beorn7
Member

beorn7 commented Aug 16, 2023

But my counter-proposal does require quoting for all characters that are potentially part of a comparison operator ({}"=!~). What is kind of a pity is that even things like / or . require quoting.

@grobinson-grafana
Contributor Author

grobinson-grafana commented Aug 16, 2023

But my counter-proposal does require quoting for all characters that are potentially part of a comparison operator ({}"=!~). What is kind of a pity is that even things like / or . require quoting.

Yes, I understand (don't forget commas need to be quoted too). The concern I have is about error messages. If we relax the grammar so that just {}"=!~, has to be double quoted, the grammar is still unambiguous; the problem, however, is that it creates a weird situation where a user can start with a valid matcher such as the following:

alertname=~foo

and then can change it to an invalid matcher by adding a comparison operator such as:

alertname=~~foo

giving the error message:

11:12: ~: invalid input: expected label value

The main discussion point for me is that I'm not sure this error message is useful enough to help the user understand they need to add double quotes. But because the lexer is designed around these characters being double quoted, to facilitate better error messages for this case it would need to be rewritten to be aware of the structure of a matcher, rather than being just a dumb tokenizer.

The options I see are the following:

  1. Keep the grammar as it is, and require regexes to be double quoted
  2. Change the grammar, keep the lexer as it is, and accept the error message isn't the best for this use case
  3. Change the grammar, but rewrite the lexer to know about the structure of matchers, rather than be just a dumb tokenizer, so we can have better error messages for this use case

@beorn7
Member

beorn7 commented Aug 16, 2023

Maybe that error message isn't the worst. It says that a label value is expected, and that what it got was a character that is invalid in a label value unless the label value is quoted.

In different news, I don't think this problem is limited to regexp matching. foo=bar is valid. foo=!bar yields "5:6: !: expected one of '=~': expected label value". That's at least as confusing as the error message above. I think if we want to accept unquoted label values at all, we either need to live with sub-optimal error messages or go the extra mile and code a smarter lexer.

BTW:
foo=.bar → ".: invalid input: expected label value"
foo=foo.bar → ".: invalid input: expected a comma or close brace"

None of these says that quoting is needed. And my proposal (to be more liberal with special characters and only ban those that create ambiguities) would actually make both examples pass, which would clearly be less confusing than a confusing error message.

@grobinson-grafana
Contributor Author

grobinson-grafana commented Aug 16, 2023

You make a good point about the error messages for these cases.

When I first started this work (before the proposal to allow UTF-8 in Prometheus/Alertmanager was accepted), I started with a much stricter grammar where all text had to be double quoted. We later relaxed this to [a-zA-Z_][a-zA-Z0-9_]* to avoid breaking almost all existing use cases.

Given that the grammar has been relaxed before, I think we can relax it further, provided doing so does not add parsing ambiguities to the grammar - which this change should not. At this time I'm neutral on the question and am willing to be persuaded either way.

If we choose to relax the grammar as suggested, the remaining question is how to add this to the lexer. To answer that, I think we can do something like the following:

diff --git a/matchers/parse/lexer.go b/matchers/parse/lexer.go
index cdf25161..a7546540 100644
--- a/matchers/parse/lexer.go
+++ b/matchers/parse/lexer.go
@@ -32,6 +32,10 @@ func isNum(r rune) bool {
 	return r >= '0' && r <= '9'
 }

+func isReserved(r rune) bool {
+	return unicode.IsSpace(r) || strings.ContainsRune("{}!=~,", r)
+}
+
 // ExpectedError is returned when the next rune does not match what is expected.
 type ExpectedError struct {
 	input       string
@@ -168,7 +172,7 @@ func (l *Lexer) Scan() (Token, error) {
 			l.rewind()
 			tok, l.err = l.scanQuoted()
 			return tok, l.err
-		case r == '_' || isAlpha(r):
+		case !isReserved(r):
 			l.rewind()
 			tok, l.err = l.scanIdent()
 			return tok, l.err
@@ -191,7 +195,7 @@ func (l *Lexer) Scan() (Token, error) {

 func (l *Lexer) scanIdent() (Token, error) {
 	for r := l.next(); r != eof; r = l.next() {
-		if !isAlpha(r) && !isNum(r) && r != '_' && r != ':' {
+		if isReserved(r) {
 			l.rewind()
 			break
 		}
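
Note that the diff above also assumes "strings" and "unicode" are added to the lexer's imports. As a self-contained sketch of the relaxed rule (the double quote is not in the reserved set because quoted strings are handled by a separate case in Scan):

package main

import (
	"fmt"
	"strings"
	"unicode"
)

// isReserved reports whether r must appear inside double quotes.
func isReserved(r rune) bool {
	return unicode.IsSpace(r) || strings.ContainsRune("{}!=~,", r)
}

func main() {
	for _, r := range []rune{'a', '测', '!', ' '} {
		fmt.Printf("%q reserved: %v\n", r, isReserved(r))
	}
}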

@grobinson-grafana
Contributor Author

This would also allow unquoted matchers in non-Latin alphabets, such as:

alertname=测试

which, if Google Translate is correct, is the Chinese word for test.

Signed-off-by: George Robinson <george.robinson@grafana.com>
@gotjosh gotjosh merged commit 353c0a1 into prometheus:main Sep 5, 2023
5 checks passed
@gotjosh
Member

gotjosh commented Sep 5, 2023

Thank you very much for your contribution.

@grobinson-grafana
Contributor Author

grobinson-grafana commented Oct 25, 2023

I did some additional verification this morning and found a couple of examples we might want to fix.

  1. foo=\n is not un-escaped, as we don't support OpenMetrics escape sequences outside of double quotes. I propose that we do not allow backslashes outside of double quotes.

  2. All escape sequences supported by strconv.Unquote are supported inside double quotes, whereas the classic matchers parser supports just the OpenMetrics escape sequences. I'm not sure how much of an issue this is, but it may be something we want to restrict in the future (see the sketch after the table below).

| Input | UTF-8 | Classic |
| --- | --- | --- |
| foo=bar | {foo="bar"} | {foo="bar"} |
| foo==bar | error | {foo="=bar"} |
| foo=bar🙂 | {foo="bar🙂"} | {foo="bar🙂"} |
| foo🙂=bar | {foo🙂="bar"} | error |
| foo= | error | {foo=""} |
| foo=\n | {foo="\\n"} | {foo="\n"} |
| foo=\t | {foo="\\t"} | {foo="\\t"} |
| foo=\ | {foo="\\"} | {foo="\\"} |
| foo=\" | error | {foo="\""} |
| foo=\r | {foo="\r"} | {foo="\\r"} |
| foo=bar, | {foo="bar"} | {foo="bar"} |
| foo=bar,, | error | error |
| foo=,bar | error | error |
| foo="" | {foo=""} | {foo=""} |
| foo="\n" | {foo="\n"} | {foo="\\n"} |
| foo="\t" | {foo=" "} | {foo="\t"} |
| foo="\r" | carriage return | {foo="\\r"} |
| foo="bar," | {foo="bar,"} | {foo="bar,"} |
| foo="bar,," | {foo="bar,,"} | {foo="bar,,"} |
| "foo "=bar | {foo ="bar"} | error |
| 今日は=世界 | {今日は="世界"} | error |
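
As a sketch of point 2: strconv.Unquote accepts escape sequences, such as hex escapes, that the OpenMetrics format does not define (standard library behavior, not code from this pull request):

package main

import (
	"fmt"
	"strconv"
)

func main() {
	// \x41 is a valid Go escape sequence but not an OpenMetrics one.
	s, err := strconv.Unquote(`"\x41"`)
	fmt.Println(s, err) // A <nil>
}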

@beorn7
Member

beorn7 commented Oct 25, 2023

I propose that we do not allow backticks outside of double quotes.

Do you mean backslash?

About the OM vs. strconv.Unquote dissonance: Note that this is nothing new. PromQL uses strconv.Unquote, and the OM escaping is used in the exposition format. I would tend towards strconv.Unquote because the OM escaping is rather specific (and specifically designed for the use case of an exposition format).

@grobinson-grafana
Contributor Author

I propose that we do not allow backticks outside of double quotes.

Do you mean backslash?

About the OM vs. strconv.Unquote dissonance: Note that this is nothing new. PromQL uses strconv.Unquote, and the OM escaping is used in the exposition format. I would tend towards strconv.Unquote because the OM escaping is rather specific (and specifically designed for the use case of an exposition format).

Yes that's what I meant! 😄

@grobinson-grafana
Contributor Author

Thanks for the clarification on strconv though, this is super helpful. In that case, I'm inclined to keep it.

@grobinson-grafana
Contributor Author

For reference, here is the PR that rejects backslashes outside double quotes: #3571.
