src/blib2to3/pgen2/grammar.py
# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

"""This module defines the data structures used to represent a grammar.

These are a bit arcane because they are derived from the data
structures used by Python's 'pgen' parser generator.

There's also a table here mapping operators to their names in the
token module; the Python tokenize module reports all operators as the
fallback token code OP, but the parser needs the actual token code.

"""

# Python imports
import os
import pickle
import tempfile
from typing import Any, Dict, List, Optional, Tuple, TypeVar, Union

# Local imports
from . import token

_P = TypeVar("_P", bound="Grammar")
Label = Tuple[int, Optional[str]]
DFA = List[List[Tuple[int, int]]]
DFAS = Tuple[DFA, Dict[int, int]]
Path = Union[str, "os.PathLike[str]"]


class Grammar:
    """Pgen parsing tables conversion class.

    Once initialized, this class supplies the grammar tables for the
    parsing engine implemented by parse.py. The parsing engine
    accesses the instance variables directly. The class here does not
    provide initialization of the tables; several subclasses exist to
    do this (see the conv and pgen modules).

    The load() method reads the tables from a pickle file, which is
    much faster than the other ways offered by subclasses. The pickle
    file is written by calling dump() (after loading the grammar
    tables using a subclass). The report() method prints a readable
    representation of the tables to stdout, for debugging.

    The instance variables are as follows:

    symbol2number -- a dict mapping symbol names to numbers. Symbol
                     numbers are always 256 or higher, to distinguish
                     them from token numbers, which are between 0 and
                     255 (inclusive).

    number2symbol -- a dict mapping numbers to symbol names;
                     these two are each other's inverse.

    states        -- a list of DFAs, where each DFA is a list of
                     states, each state is a list of arcs, and each
                     arc is an (i, j) pair where i is a label and j is
                     a state number. The DFA number is the index into
                     this list. (This name is slightly confusing.)
                     Final states are represented by a special arc of
                     the form (0, j) where j is its own state number.

    dfas          -- a dict mapping symbol numbers to (DFA, first)
                     pairs, where DFA is an item from the states list
                     above, and first is a set of tokens that can
                     begin this grammar rule (represented by a dict
                     whose values are always 1).

    labels        -- a list of (x, y) pairs where x is either a token
                     number or a symbol number, and y is either None
                     or a string; the strings are keywords. The label
                     number is the index in this list; label numbers
                     are used to mark state transitions (arcs) in the
                     DFAs.

    start         -- the number of the grammar's start symbol.

    keywords      -- a dict mapping keyword strings to arc labels.

    tokens        -- a dict mapping token numbers to arc labels.

    """

    def __init__(self) -> None:
        self.symbol2number: Dict[str, int] = {}
        self.number2symbol: Dict[int, str] = {}
        self.states: List[DFA] = []
        self.dfas: Dict[int, DFAS] = {}
        self.labels: List[Label] = [(0, "EMPTY")]
        self.keywords: Dict[str, int] = {}
        self.soft_keywords: Dict[str, int] = {}
        self.tokens: Dict[int, int] = {}
        self.symbol2label: Dict[str, int] = {}
        self.version: Tuple[int, int] = (0, 0)
        self.start = 256
        # Python 3.7+ parses async as a keyword, not an identifier
        self.async_keywords = False

    def dump(self, filename: Path) -> None:
        """Dump the grammar tables to a pickle file."""

        # mypyc generates objects that don't have a __dict__, but they
        # do have __getstate__ methods that will return an equivalent
        # dictionary
        if hasattr(self, "__dict__"):
            d = self.__dict__
        else:
            d = self.__getstate__()  # type: ignore

        with tempfile.NamedTemporaryFile(
            dir=os.path.dirname(filename), delete=False
        ) as f:
            pickle.dump(d, f, pickle.HIGHEST_PROTOCOL)
        os.replace(f.name, filename)

    def _update(self, attrs: Dict[str, Any]) -> None:
        for k, v in attrs.items():
            setattr(self, k, v)

    def load(self, filename: Path) -> None:
        """Load the grammar tables from a pickle file."""
        with open(filename, "rb") as f:
            d = pickle.load(f)
        self._update(d)

    def loads(self, pkl: bytes) -> None:
        """Load the grammar tables from a pickle bytes object."""
        self._update(pickle.loads(pkl))

    def copy(self: _P) -> _P:
        """
        Copy the grammar.
        """
        new = self.__class__()
        for dict_attr in (
            "symbol2number",
            "number2symbol",
            "dfas",
            "keywords",
            "soft_keywords",
            "tokens",
            "symbol2label",
        ):
            setattr(new, dict_attr, getattr(self, dict_attr).copy())
        new.labels = self.labels[:]
        new.states = self.states[:]
        new.start = self.start
        new.version = self.version
        new.async_keywords = self.async_keywords
        return new

    def report(self) -> None:
        """Dump the grammar tables to standard output, for debugging."""
        from pprint import pprint

        print("s2n")
        pprint(self.symbol2number)
        print("n2s")
        pprint(self.number2symbol)
        print("states")
        pprint(self.states)
        print("dfas")
        pprint(self.dfas)
        print("labels")
        pprint(self.labels)
        print("start", self.start)


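The dump()/load() pair above is just an atomic pickle round trip of the instance `__dict__`. A minimal standalone sketch of the same pattern, using a stub `Tables` class instead of importing this module so it runs on its own:

```python
import os
import pickle
import tempfile


class Tables:
    """Stand-in for Grammar, holding a couple of representative fields."""

    def __init__(self):
        self.symbol2number = {"file_input": 256}
        self.start = 256


with tempfile.TemporaryDirectory() as tmp:
    target = os.path.join(tmp, "grammar.pickle")
    src = Tables()
    # dump(): pickle the __dict__ into a NamedTemporaryFile in the
    # destination directory, then os.replace() for an atomic rename
    with tempfile.NamedTemporaryFile(dir=tmp, delete=False) as f:
        pickle.dump(src.__dict__, f, pickle.HIGHEST_PROTOCOL)
    os.replace(f.name, target)
    # load() + _update(): unpickle the dict and setattr each entry
    dst = Tables.__new__(Tables)
    with open(target, "rb") as f:
        for k, v in pickle.load(f).items():
            setattr(dst, k, v)

assert dst.start == 256
assert dst.symbol2number == {"file_input": 256}
```

Writing to a temporary file and renaming it into place means a reader never sees a half-written pickle, which is the point of the NamedTemporaryFile/os.replace dance in dump().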
# Map from operator to number (since tokenize doesn't do this)

opmap_raw = """
( LPAR
) RPAR
[ LSQB
] RSQB
: COLON
, COMMA
; SEMI
+ PLUS
- MINUS
* STAR
/ SLASH
| VBAR
& AMPER
< LESS
> GREATER
= EQUAL
. DOT
% PERCENT
` BACKQUOTE
{ LBRACE
} RBRACE
@ AT
@= ATEQUAL
== EQEQUAL
!= NOTEQUAL
<> NOTEQUAL
<= LESSEQUAL
>= GREATEREQUAL
~ TILDE
^ CIRCUMFLEX
<< LEFTSHIFT
>> RIGHTSHIFT
** DOUBLESTAR
+= PLUSEQUAL
-= MINEQUAL
*= STAREQUAL
/= SLASHEQUAL
%= PERCENTEQUAL
&= AMPEREQUAL
|= VBAREQUAL
^= CIRCUMFLEXEQUAL
<<= LEFTSHIFTEQUAL
>>= RIGHTSHIFTEQUAL
**= DOUBLESTAREQUAL
// DOUBLESLASH
//= DOUBLESLASHEQUAL
-> RARROW
:= COLONEQUAL
"""

opmap = {}
for line in opmap_raw.splitlines():
    if line:
        op, name = line.split()
        opmap[op] = getattr(token, name)
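The loop above splits each non-empty line of the table into an (operator, token-name) pair and resolves the name against the token module. The same construction can be exercised in isolation with a stub token namespace (the numeric values below are illustrative, not the real token codes):

```python
from types import SimpleNamespace

# Hypothetical stand-in for the token module; real codes differ.
token_stub = SimpleNamespace(LPAR=1, RPAR=2, COLONEQUAL=3)

raw = """
( LPAR
) RPAR
:= COLONEQUAL
"""

opmap = {}
for line in raw.splitlines():
    if line:  # skip the blank lines produced by the leading/trailing newlines
        op, name = line.split()
        opmap[op] = getattr(token_stub, name)

assert opmap == {"(": 1, ")": 2, ":=": 3}
```

Keeping the table as a whitespace-separated string makes adding an operator a one-line change, at the cost of this small parse at import time.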