ServerSync/lib/pygments/__pycache__/lexer.cpython-314.pyc

316 lines
39 KiB

2026-02-12 02:28:23 +02:00
(Compiled CPython 3.14 bytecode; only the embedded docstrings and identifiers are recoverable and are presented below.)
pygments.lexer
~~~~~~~~~~~~~~
Base lexer classes.
:copyright: Copyright 2006-2025 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
Recovered imports:
    from pygments.filter import apply_filters, Filter
    from pygments.filters import get_filter_by_name
    from pygments.token import Error, Text, Other, Whitespace, _TokenType
    from pygments.util import get_bool_opt, get_int_opt, get_list_opt, \
        make_analysator, Future, guess_decode
    from pygments.regexopt import regex_opt

LexerMeta:
This metaclass automagically converts ``analyse_text`` methods into
static methods which always return float values.
Lexer:
Lexer for a specific language.
See also :doc:`lexerdevelopment`, a high-level guide to writing
lexers.
Lexer classes have attributes used for choosing the most appropriate
lexer based on various criteria.
.. autoattribute:: name
:no-value:
.. autoattribute:: aliases
:no-value:
.. autoattribute:: filenames
:no-value:
.. autoattribute:: alias_filenames
.. autoattribute:: mimetypes
:no-value:
.. autoattribute:: priority
Lexers included in Pygments should have two additional attributes:
.. autoattribute:: url
:no-value:
.. autoattribute:: version_added
:no-value:
Lexers included in Pygments may have additional attributes:
.. autoattribute:: _example
:no-value:
You can pass options to the constructor. The basic options recognized
by all lexers and processed by the base `Lexer` class are:
``stripnl``
Strip leading and trailing newlines from the input (default: True).
``stripall``
Strip all leading and trailing whitespace from the input
(default: False).
``ensurenl``
Make sure that the input ends with a newline (default: True). This
is required for some lexers that consume input linewise.
.. versionadded:: 1.3
``tabsize``
If given and greater than 0, expand tabs in the input (default: 0).
``encoding``
If given, must be an encoding name. This encoding will be used to
convert the input string to Unicode, if it is not already a Unicode
string (default: ``'guess'``, which uses a simple UTF-8 / Locale /
Latin1 detection). Can also be ``'chardet'`` to use the chardet
library, if it is installed.
``inencoding``
Overrides the ``encoding`` if given.
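The input-cleanup options above (``stripnl``, ``stripall``, ``ensurenl``, ``tabsize``) can be sketched without Pygments as one small function. The name `preprocess` is illustrative; inside Pygments this work happens in `Lexer._preprocess_lexer_input` together with the decoding step.

```python
def preprocess(text, stripnl=True, stripall=False, ensurenl=True, tabsize=0):
    """Apply the basic Lexer input options to an already-decoded string."""
    # normalize newlines first
    text = text.replace('\r\n', '\n').replace('\r', '\n')
    if stripall:
        text = text.strip()        # all leading/trailing whitespace
    elif stripnl:
        text = text.strip('\n')    # only leading/trailing newlines
    if tabsize > 0:
        text = text.expandtabs(tabsize)
    if ensurenl and not text.endswith('\n'):
        text += '\n'               # lexers that consume input linewise need this
    return text
```

With the defaults, `preprocess('\nx\n\n')` yields `'x\n'`: surrounding newlines stripped, final newline guaranteed.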
Lexer.__init__(**options):
This constructor takes arbitrary options as keyword arguments.
Every subclass must first process its own options and then call
the `Lexer` constructor, since it processes the basic
options like `stripnl`.
An example looks like this:
.. sourcecode:: python
def __init__(self, **options):
self.compress = options.get('compress', '')
Lexer.__init__(self, **options)
As these options must all be specifiable as strings (due to the
command line usage), there are various utility functions
available to help with that, see `Utilities`_.
Lexer.add_filter(filter_, **options):
Add a new stream filter to this lexer.
Lexer.analyse_text(text):
A static method which is called for lexer guessing.
It should analyse the text and return a float in the range
from ``0.0`` to ``1.0``. If it returns ``0.0``, the lexer
will not be selected as the most probable one; if it returns
``1.0``, it will be selected immediately. This is used by
`guess_lexer`.
The `LexerMeta` metaclass automatically wraps this function so
that it works like a static method (no ``self`` or ``cls``
parameter) and the return value is automatically converted to
`float`. If the return value is an object that is boolean `False`
it's the same as if the return value was ``0.0``.
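A hypothetical `analyse_text` obeying this contract, written for an imagined shell-script lexer (the heuristics are made up for illustration):

```python
def analyse_text(text):
    """Score how likely `text` is a shell script, in [0.0, 1.0]."""
    if text.startswith('#!/bin/sh') or text.startswith('#!/bin/bash'):
        return 1.0   # unambiguous shebang: select this lexer immediately
    if 'echo ' in text:
        return 0.2   # weak hint only
    return 0.0       # not a candidate
```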
Nr)<01>texts&rr#<00>Lexer.analyse_text<78>s<00>rc <0C>^<00>\V\4'Eg
VPR8Xd\V4wrEMVPR8Xd<>^RIpRp\ F<wrgTPT4'gKT\T4RPTR4pM TfDTPTR,4pTPTPR4;'gRR4pTpMfVPVP4pVPR 4'dV\R 4RpM%VPR 4'dV\R 4RpVPR
R 4pVPR R 4pVP'dVP4pM#VP'dVPR 4pVP ^8<>dVP#VP 4pVP$'d!VP'R 4'g
VR , pV# \
dp\ R4ThRp?ii;i) zVApply preprocessing such as decoding the input, removing BOM and normalizing newlines.r;<00>chardetNzkTo enable chardet encoding guessing, please install the chardet library from http://chardet.feedparser.org/<2F>replace:NiNr:<00>utf-8uz
<EFBFBD>
<EFBFBD> )rL<00>strr:rrS<00> ImportError<6F> _encoding_map<61>
startswith<EFBFBD>len<65>decode<64>detectr?rTr7<00>stripr6r9<00>
expandtabsr8<00>endswith) rArP<00>_rS<00>e<>decoded<65>bomr:<00>encs && r<00>_preprocess_lexer_input<75>Lexer._preprocess_lexer_input<75>s<><00><00><1A>$<24><03>$<24>$<24><13>}<7D>}<7D><07>'<27>&<26>t<EFBFBD>,<2C><07><04>a<EFBFBD><15><1D><1D>)<29>+<2B>T<01>"<22> <1F><07>%2<>M<EFBFBD>C<EFBFBD><1B><EFBFBD><EFBFBD>s<EFBFBD>+<2B>+<2B>"&<26>s<EFBFBD>3<EFBFBD>x<EFBFBD>y<EFBFBD>/<2F>"8<>"8<><18>9<EFBFBD>"M<><07><1D>&3<>
<1B>?<3F>!<21>.<2E>.<2E><14>e<EFBFBD><1B>5<>C<EFBFBD>"<22>k<EFBFBD>k<EFBFBD>#<23>'<27>'<27>*<2A>*=<3D>*H<>*H<><17>*3<>5<>G<EFBFBD><1E><04><1B>{<7B>{<7B>4<EFBFBD>=<3D>=<3D>1<><04><17>?<3F>?<3F>8<EFBFBD>,<2C>,<2C><1F><03>H<EFBFBD> <0A><0E>/<2F>D<EFBFBD><44><13><EFBFBD><EFBFBD>x<EFBFBD>(<28>(<28><1B>C<EFBFBD><08>M<EFBFBD>N<EFBFBD>+<2B><04><14>|<7C>|<7C>F<EFBFBD>D<EFBFBD>)<29><04><13>|<7C>|<7C>D<EFBFBD>$<24>'<27><04> <0F>=<3D>=<3D>=<3D><17>:<3A>:<3A><<3C>D<EFBFBD> <11>\<5C>\<5C>\<5C><17>:<3A>:<3A>d<EFBFBD>#<23>D<EFBFBD> <0F><<3C><<3C>!<21> <1B><17>?<3F>?<3F>4<EFBFBD><<3C><<3C>0<>D<EFBFBD> <0F>=<3D>=<3D>=<3D><14><1D><1D>t<EFBFBD>!4<>!4<> <10>D<EFBFBD>L<EFBFBD>D<EFBFBD><13> <0B><>I#<23>T<01>%<25>'L<01>M<01>RS<52>T<01><>T<01>s<00>
H<00> H,<03> H'<03>'H,c <0C><>aa<01>SPS4oVV3RlpV!4pV'g\VSPS4pV#)a
This method is the basic interface of a lexer. It is called by
the `highlight()` function. It must process the text and return an
iterable of ``(tokentype, value)`` pairs from `text`.
Normally, you don't need to override this method. The default
implementation processes the options recognized by all lexers
(`stripnl`, `stripall` and so on), and then yields all tokens
from `get_tokens_unprocessed()`, with the ``index`` dropped.
If `unfiltered` is set to `True`, the filtering mechanism is
bypassed even if filters are defined.
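The relationship between the two methods can be sketched with stand-in generators (toy string token types, not Pygments token objects):

```python
import re

def get_tokens_unprocessed(text):
    """Stand-in lexer body: one token per run of non-space characters,
    tagged with its start index, as the real method yields."""
    for m in re.finditer(r'\S+', text):
        yield m.start(), 'Token.Word', m.group()

def get_tokens(text):
    """The core of Lexer.get_tokens: drop the index (the real class then
    also runs the stream through the configured filters)."""
    for _index, tokentype, value in get_tokens_unprocessed(text):
        yield tokentype, value
```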
Lexer.get_tokens_unprocessed(text):
This method should process the text and return an iterable of
``(index, tokentype, value)`` tuples where ``index`` is the starting
position of the token within the input text.
It must be overridden by subclasses. It is recommended to
implement it as a generator to maximize effectiveness.
)<01>NotImplementedError)rArPs&&rrl<00>Lexer.get_tokens_unprocesseds
<00><00>"<22>!r)r:r8r=r>r7r6r9)F)r+r,r-r.r/r'<00>aliases<65> filenames<65>alias_filenames<65> mimetypes<65>priority<74>url<72> version_added<65>_examplerCrIr@r#rgrsrlr0r1r2s@rrr1st<00><><00><00>8<08>v <10>D<EFBFBD><11>G<EFBFBD>
<13>I<EFBFBD><19>O<EFBFBD><13>I<EFBFBD><11>H<EFBFBD> <0F>C<EFBFBD><19>M<EFBFBD><14>H<EFBFBD>%<25><B<01> %<25> <0C>"-<14>^<16>0 "<22> "r)<01> metaclassc<00>6a<00>]tRtRtoRt]3RltRtRtVt R#)ri!a
This lexer takes two lexers as arguments. A root lexer and
a language lexer. First everything is scanned using the language
lexer, afterwards all ``Other`` tokens are lexed using the root
lexer.
The lexers from the ``template`` lexer package use this base lexer.
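The delegation idea can be sketched with plain tuples and toy lexers. One deliberate simplification: the re-lexed ``Other`` text is appended at the end here, whereas the real class re-inserts those tokens at their original positions via `do_insertions`.

```python
import re

OTHER = 'Other'  # stand-in for pygments.token.Other

def language_lex(text):
    """Toy 'language lexer': {{...}} is template syntax; everything
    else is emitted as Other, i.e. left for the root lexer."""
    pos = 0
    for m in re.finditer(r'\{\{.*?\}\}', text):
        if m.start() > pos:
            yield pos, OTHER, text[pos:m.start()]
        yield m.start(), 'Token.Template', m.group()
        pos = m.end()
    if pos < len(text):
        yield pos, OTHER, text[pos:]

def delegating_lex(text, root_lex):
    """Collect the Other runs and re-lex them with the root lexer."""
    result, buffered = [], ''
    for _index, token, value in language_lex(text):
        if token is OTHER:
            buffered += value
        else:
            result.append((token, value))
    if buffered:
        result.extend(root_lex(buffered))  # real code: do_insertions(...)
    return result
```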
DelegatingLexer.__init__(_root_lexer, _language_lexer, _needle=Other, **options)
DelegatingLexer.get_tokens_unprocessed(text)

include (str subclass):
Indicates that a state should include rules from another state.
_inherit:
    Indicates that a state should inherit from its superclass.
combined (tuple subclass):
Indicates a state combined from multiple states.
_PseudoMatch:
A pseudo match object constructed from a string.
_PseudoMatch methods (recovered): start, end, group, groups, groupdict.

bygroups(*args):
Callback that yields multiple actions for each group in the match.
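A minimal re-implementation of the idea, using `re` directly (the function name and the string token types are illustrative, not the Pygments API):

```python
import re

def bygroups_yield(match, *actions):
    """Yield (start, action, text) for each regex group, mirroring what
    the bygroups() callback does for a matched rule."""
    for i, action in enumerate(actions):
        data = match.group(i + 1)
        if data is not None and action is not None:
            yield match.start(i + 1), action, data

m = re.match(r'(\w+)(\s*)(=)', 'answer = 42')
tokens = list(bygroups_yield(m, 'Name', 'Whitespace', 'Operator'))
```

Here the three groups become three tokens with correct start offsets: `Name` at 0, `Whitespace` at 6, `Operator` at 7.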
_This (singleton class):
Special singleton used for indicating the caller class.
Used by ``using``.
using(_other, **kwargs):
Callback that processes the match with a different lexer.
The keyword arguments are forwarded to the lexer, except `state` which
is handled separately.
`state` specifies the state that the new lexer will start in, and can
be an enumerable such as ('root', 'inline', 'string') or a simple
string which is assumed to be on top of the root state.
Note: For that to work, `_other` must not be an `ExtendedRegexLexer`.
default(state):
Indicates a state or state action (e.g. #pop) to apply.
For example default('#pop') is equivalent to ('', Token, '#pop')
Note that state tuples may be used as well.
.. versionadded:: 2.0
words(words, prefix='', suffix=''):
Indicates a list of literal words that is transformed into an optimized
regex that matches any of the words.
.. versionadded:: 2.0
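Roughly equivalent behaviour can be had with plain `re` (the real `regex_opt` builds a prefix-factored regex rather than this longest-first alternation; `words_regex` is an illustrative name):

```python
import re

def words_regex(words, prefix='', suffix=''):
    """Build one alternation regex from a literal word list."""
    # longest-first so that 'in' cannot shadow 'int'
    alts = '|'.join(re.escape(w) for w in sorted(words, key=len, reverse=True))
    return re.compile(prefix + '(?:' + alts + ')' + suffix)

kw = words_regex(['if', 'in', 'int'], prefix=r'\b', suffix=r'\b')
```

With the ``\b`` prefix/suffix, `kw` matches whole keywords only: `'int x'` matches `'int'`, while `'index'` does not match at all.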
RegexLexerMeta:
Metaclass for RegexLexer, creates the self._tokens attribute from
self.tokens on the first instantiation.
RegexLexerMeta._process_regex:
    "Preprocess the regular expression component of a token definition."
RegexLexerMeta._process_token:
    "Preprocess the token component of a token definition."
RegexLexerMeta._process_new_state:
    "Preprocess the state transition action of a token definition."
RegexLexerMeta._process_state:
    "Preprocess a single state definition."
RegexLexerMeta.process_tokendef:
    "Preprocess a dictionary of token definitions."
RegexLexerMeta.get_tokendefs:
Merge tokens from superclasses in MRO order, returning a single tokendef
dictionary.
Any state that is not defined by a subclass will be inherited
automatically. States that *are* defined by subclasses will, by
default, override that state in the superclass. If a subclass wishes to
inherit definitions from a superclass, it can use the special value
"inherit", which will cause the superclass' state definition to be
included at that point in the state.
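The merge rule can be sketched with plain dicts and a sentinel standing in for Pygments' `inherit` marker (`merge_tokendefs` is an illustrative name; the real method walks the full MRO):

```python
INHERIT = object()  # stand-in for pygments.lexer.inherit

def merge_tokendefs(base, sub):
    """Merge a subclass's token definitions over its base class's,
    splicing the base state in where INHERIT appears."""
    merged = dict(base)
    for state, rules in sub.items():
        if state not in base:
            merged[state] = rules        # new state: taken as-is
            continue
        out = []
        for rule in rules:
            if rule is INHERIT:
                out.extend(base[state])  # splice superclass rules here
            else:
                out.append(rule)
        merged[state] = out              # subclass overrides by default
    return merged
```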
RegexLexerMeta.__call__:
    "Instantiate cls after preprocessing its token definitions."

RegexLexer:
Base for simple stateful regular expression-based lexers.
Simplifies the lexing process so that you need only
provide a list of states and regular expressions.
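The matching loop such a lexer runs can be sketched in a few lines. This is a deliberate simplification: no `bygroups`, no tuple state transitions, patterns recompiled on every use, and plain strings instead of token objects.

```python
import re

# Each state is a list of (pattern, tokentype, new_state) rules;
# new_state None stays put, '#pop' pops the state stack.
STATES = {
    'root': [
        (r'"', 'String', 'string'),
        (r'\d+', 'Number', None),
        (r'\s+', 'Whitespace', None),
    ],
    'string': [
        (r'"', 'String', '#pop'),
        (r'[^"]+', 'String', None),
    ],
}

def tokenize(text, states=STATES):
    pos, stack, out = 0, ['root'], []
    while pos < len(text):
        for pattern, tokentype, new_state in states[stack[-1]]:
            m = re.compile(pattern).match(text, pos)
            if m:
                out.append((pos, tokentype, m.group()))
                pos = m.end()
                if new_state == '#pop':
                    stack.pop()
                elif new_state is not None:
                    stack.append(new_state)
                break
        else:
            out.append((pos, 'Error', text[pos]))  # no rule matched
            pos += 1
    return out
```

For example, `tokenize('12 "ab"')` enters the `string` state at the opening quote and pops back out at the closing one.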
RegexLexer.get_tokens_unprocessed(text, stack=('root',)):
Split ``text`` into (tokentype, text) pairs.
``stack`` is the initial stack (default: ``['root']``)
LexerContext:
A helper object that holds lexer position data.
ExtendedRegexLexer:
A RegexLexer that uses a context object to store its state.
ExtendedRegexLexer.get_tokens_unprocessed(text, context=None):
Split ``text`` into (tokentype, text) pairs.
If ``context`` is given, use this lexer context instead.
do_insertions(insertions, tokens):
Helper for lexers which must combine the results of several
sublexers.
``insertions`` is a list of ``(index, itokens)`` pairs.
Each ``itokens`` iterable should be inserted at position
``index`` into the token stream given by the ``tokens``
argument.
The result is a combined token stream.
TODO: clean up the code here.
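A simplified version of the merge (it assumes tokens carry their start index and, unlike the real function, does not split a token when an insertion point falls inside it; the function name is illustrative):

```python
def do_insertions_simple(insertions, tokens):
    """Merge (index, subtokens) insertions into a stream of
    (index, token, value) tuples, in position order."""
    insertions = list(insertions)
    out = []
    for index, token, value in tokens:
        # emit every insertion that is due at or before this token
        while insertions and insertions[0][0] <= index:
            out.extend(insertions.pop(0)[1])
        out.append((index, token, value))
    for _, subtokens in insertions:  # insertions past the last token
        out.extend(subtokens)
    return out
```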
ProfilingRegexLexerMeta:
    "Metaclass for ProfilingRegexLexer, collects regex timing info."

ProfilingRegexLexer:
    "Drop-in replacement for RegexLexer that does profiling of its regexes."
    (Recovered report format: "Profiling result for %s lexing %d chars in
    %.3f ms" followed by an ncalls / tottime / percall table.)
r+r,r-r.r/rQrlrlr0r1r2s@rr`r`<00>s<00><><00><00>P<><13>J<EFBFBD><18><14><19>rr`) rrrrrrr<>r<>r<>r<>rr<00>line_re))srU)s<00><>zutf-32)s<00><>zutf-32be)s<00><>zutf-16)s<00><>zutf-16be)6r/r<>r\rS<00>pygments.filterrr<00>pygments.filtersr<00>pygments.tokenrrrrr <00> pygments.utilr
r r r rr<00>pygments.regexoptr<00>__all__r<5F>r{rZ<00> staticmethod<6F>_default_analyser$r!rrrXrr<>r<>r<>r<>r<>r<>r<>r<>r<>rrr<>rrrr<>rNr`rrr<00><module>r<>sJ<00><01><04>
<EFBFBD>
<EFBFBD> <0B>1<>/<2F>E<>E<>*<2A>*<2A>'<27> *<2A><07> <0A>*<2A>*<2A>W<EFBFBD>
<1D><07>,<2C> <0A>  <20> <0A>.<2E><10> 1<><04> 1<>m"<22>i<EFBFBD>m"<22>`O<01>e<EFBFBD>O<01>N <09>c<EFBFBD> <09><19><19> <13>*<2A><07>
 <0A>u<EFBFBD>
 <0A><12><12>6<14>4<08><08>  <0A>w<EFBFBD><04>/<14>d <1B> <1B> M<01>F<EFBFBD> M<01> d1<>Y<EFBFBD>d1<>N^<1A><15>.<2E>^<1A>B L<01> L<01>E<1A><1A>E<1A>P=<12>@<1A>n<EFBFBD><1A>,<19>*<2A>0G<30>r