Class CLexer#

Class Documentation#

class CLexer#

Tokenizes the output of a character reader according to the TOML v1.0 specification: https://toml.io/en/v1.0.0.

Public Types

enum class ENavigationMode#

Navigation modes supported by the lexer.

Values:

enumerator skip_comments_and_whitespace#

Skip comments and whitespace during navigation (default).

enumerator do_not_skip_anything#

Do not skip anything during navigation.

Public Functions

CLexer() = default#

Default constructor.

CLexer(const std::string &rssString, bool bValueOnly = false)#

Constructs a new CLexer object with the given input data that will be lexed.

Parameters:
  • rssString[in] The UTF-8 encoded content of a TOML source.

  • bValueOnly[in] When set, the lexer should treat the string as a value assignment.

void Feed(const std::string &rssString, bool bValueOnly = false)#

Feed the lexer with the given string; this replaces any previous lexing result.

Parameters:
  • rssString[in] UTF-8 input string.

  • bValueOnly[in] When set, the lexer should treat the string as a value assignment.
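The contract above can be sketched with a small stand-in. CInputStub is a hypothetical class, not the real CLexer: it omits tokenization entirely and models only the documented behavior that Feed() discards previously fed input and records the bValueOnly flag.

```cpp
#include <cassert>
#include <string>

// Hypothetical stand-in for the lexer's input handling. The real CLexer also
// tokenizes the string; this stub models only the documented contract that
// Feed() replaces any previously fed input.
class CInputStub {
public:
    void Feed(const std::string& rssString, bool bValueOnly = false) {
        m_sInput = rssString;       // previous lexing input is discarded
        m_bValueOnly = bValueOnly;  // treat input as a bare value assignment
    }

    const std::string& Input() const { return m_sInput; }
    bool ValueOnly() const { return m_bValueOnly; }

private:
    std::string m_sInput;
    bool m_bValueOnly = false;
};
```

A second Feed() call therefore overwrites the first, so a single lexer instance can be reused for multiple TOML sources without constructing a new object.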

void Reset()#

Reset the lexer cursor position.

ENavigationMode NavigationMode() const#

Get the current navigation mode.

Returns:

The current navigation mode.

void NavigationMode(ENavigationMode eMode)#

Set the navigation mode.

Parameters:

eMode[in] The mode to be used for navigation.
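The effect of the two navigation modes can be illustrated with a small predicate. ETokenKind and IsSkippable are illustrative names and not part of the real API; only ENavigationMode mirrors the enum documented above.

```cpp
#include <cassert>

// Sketch of how a navigation mode could decide which tokens Peek()/Consume()
// skip. In skip_comments_and_whitespace mode, whitespace and comment tokens
// are passed over; in do_not_skip_anything mode, every token is visible.
enum class ENavigationMode { skip_comments_and_whitespace, do_not_skip_anything };
enum class ETokenKind { Whitespace, Comment, Key, Value };

bool IsSkippable(ETokenKind eKind, ENavigationMode eMode) {
    if (eMode == ENavigationMode::do_not_skip_anything)
        return false;  // every token is visible to the caller
    return eKind == ETokenKind::Whitespace || eKind == ETokenKind::Comment;
}
```

The do_not_skip_anything mode is useful when the exact source layout matters, for example when round-tripping a document with its comments intact.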

const CToken &Peek(size_t nSkip = 0) const#

Gets the n-th token after the current cursor without advancing the cursor.

Remark

Whitespace and comments are skipped.

Parameters:

nSkip[in] The number of tokens to skip.

Returns:

A reference to the token in the token list, or to an empty token if none is available.

const CToken &Consume(size_t nSkip = 0)#

Gets the n-th token after the current cursor and advances the cursor past it.

Remark

Whitespace and comments are skipped.

Parameters:

nSkip[in] The number of tokens to skip.

Returns:

A reference to the token in the token list, or to an empty token if none is available.

bool IsEnd() const#

Checks if the end-token was consumed.

Returns:

Returns true if the end-token was consumed by Consume() or Consume(n), or if there are no tokens; false otherwise.
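The Peek/Consume/IsEnd trio forms a cursor over the token list. The sketch below models that cursor contract with hypothetical Token and TokenCursor types; it is not the real CLexer/CToken implementation, and it assumes Consume(n) advances the cursor past the returned token (so Consume(0) consumes the current token).

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical stand-in token; the real CToken carries TOML token data.
struct Token {
    std::string sText;
    bool bIsEnd = false;
};

// Minimal sketch of the Peek/Consume/IsEnd cursor contract over a
// pre-built token list. Navigation-mode filtering is omitted.
class TokenCursor {
public:
    explicit TokenCursor(std::vector<Token> vTokens)
        : m_vTokens(std::move(vTokens)) {}

    // n-th token after the cursor; the cursor does not move.
    const Token& Peek(std::size_t nSkip = 0) const {
        return At(m_nCursor + nSkip);
    }

    // n-th token after the cursor; the cursor advances past the returned
    // token (assumed semantics: Consume(0) consumes the current token).
    const Token& Consume(std::size_t nSkip = 0) {
        const Token& rToken = At(m_nCursor + nSkip);
        m_nCursor += nSkip + 1;
        return rToken;
    }

    // True once the end-token has been consumed or there are no tokens.
    bool IsEnd() const { return m_nCursor >= m_vTokens.size(); }

    // Reset the cursor to the first token.
    void Reset() { m_nCursor = 0; }

private:
    // Out-of-range access yields an empty end token, mirroring the
    // "empty token" fallback documented for Peek() and Consume().
    const Token& At(std::size_t nIndex) const {
        static const Token s_EndToken{"", true};
        return nIndex < m_vTokens.size() ? m_vTokens[nIndex] : s_EndToken;
    }

    std::vector<Token> m_vTokens;
    std::size_t m_nCursor = 0;
};
```

A typical driver loop is `while (!cursor.IsEnd()) { const Token& t = cursor.Consume(); /* ... */ }`, with Peek() used for lookahead decisions that must not move the cursor.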

void SmartExtendNodeRange(CNodeTokenRange &rTokenRange) const#

Check and extend the boundaries of the given token range to include whitespace and comments that clearly belong to it, and check for additional tokens when the end of the token list is reached.

Parameters:

rTokenRange[inout] Reference to the node token range being updated with extended boundaries.