NYTProf Performance Profile   « line view »
For /Users/timbo/perl5/perlbrew/perls/perl-5.18.2/bin/perlcritic
  Run on Sat Mar 19 22:12:22 2016
Reported on Sat Mar 19 22:14:12 2016

Filename: /Users/timbo/perl5/perlbrew/perls/perl-5.18.2/lib/site_perl/5.18.2/PPI/Tokenizer.pm
Statements: Executed 3487328 statements in 3.64s
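For context, a line view like this one is produced by running the target program under Devel::NYTProf and then rendering the collected data as HTML. A typical invocation looks like the following (the paths are illustrative, not taken from this run):

```shell
# Profile a perlcritic run; Devel::NYTProf must be installed from CPAN.
perl -d:NYTProf "$(command -v perlcritic)" lib/

# Render the HTML report (including per-file line views) from nytprof.out.
nytprofhtml --out nytprof
```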
Subroutines

Calls   P   F   Exclusive Time  Inclusive Time  Subroutine
149609  1   1   1.30s           4.58s           PPI::Tokenizer::_process_next_char
26904   2   1   857ms           6.13s           PPI::Tokenizer::_process_next_line
94513   1   1   602ms           6.79s           PPI::Tokenizer::get_token
56533   14  7   428ms           681ms           PPI::Tokenizer::_new_token
20542   6   4   261ms           305ms           PPI::Tokenizer::_previous_significant_tokens
94513   29  16  218ms           218ms           PPI::Tokenizer::_finalize_token
27281   3   2   186ms           246ms           PPI::Tokenizer::_fill_line
144     1   1   162ms           162ms           PPI::Tokenizer::CORE:subst (opcode)
144     1   1   118ms           503ms           PPI::Tokenizer::new
27287   3   2   60.1ms          60.1ms          PPI::Tokenizer::_get_line
1866    1   1   16.1ms          34.6ms          PPI::Tokenizer::_opcontext
15534   1   1   4.15ms          4.15ms          PPI::Tokenizer::CORE:match (opcode)
144     1   1   1.34ms          1.76ms          PPI::Tokenizer::_clean_eof
52      2   1   488µs           589µs           PPI::Tokenizer::_last_significant_token
1       1   1   135µs           224µs           PPI::Tokenizer::BEGIN@88
1       1   1   12µs            23µs            PPI::Tokenizer::BEGIN@81
1       1   1   7µs             35µs            PPI::Tokenizer::BEGIN@82
1       1   1   6µs             23µs            PPI::Tokenizer::BEGIN@90
1       1   1   3µs             3µs             PPI::Tokenizer::BEGIN@83
1       1   1   3µs             3µs             PPI::Tokenizer::BEGIN@84
1       1   1   3µs             3µs             PPI::Tokenizer::BEGIN@85
1       1   1   3µs             3µs             PPI::Tokenizer::BEGIN@87
1       1   1   3µs             3µs             PPI::Tokenizer::BEGIN@86
1       1   1   3µs             3µs             PPI::Tokenizer::BEGIN@91
0       0   0   0s              0s              PPI::Tokenizer::__ANON__[:211]
0       0   0   0s              0s              PPI::Tokenizer::_char
0       0   0   0s              0s              PPI::Tokenizer::_last_token
0       0   0   0s              0s              PPI::Tokenizer::all_tokens
0       0   0   0s              0s              PPI::Tokenizer::decrement_cursor
0       0   0   0s              0s              PPI::Tokenizer::increment_cursor
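A quick way to read the table: Calls is how often the sub ran, P and F count the distinct calling places and files, exclusive time is time in the sub's own body, and inclusive time adds its callees. The figures are internally consistent, as this small check on the `_process_next_char` row shows (a sketch; the numbers are copied from the report):

```perl
use strict;
use warnings;

# Numbers from the _process_next_char row of the table above.
my $calls     = 149609;
my $exclusive = 1.30;    # seconds spent in the sub's own body
my $inclusive = 4.58;    # seconds including all callees

# Inclusive time per call matches the "avg 31µs/call" noted later
# in the report for this sub.
printf "avg %.0fus/call\n", $inclusive / $calls * 1_000_000;

# The difference is time spent inside callees; the report writes
# this as "4.58s (1.30+3.28)".
printf "%.2fs in callees\n", $inclusive - $exclusive;
```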
Call graph for these subroutines as a Graphviz dot language file.
Line  Statements  Time on line  Calls  Time in subs  Code
1package PPI::Tokenizer;
2
3=pod
4
5=head1 NAME
6
7PPI::Tokenizer - The Perl Document Tokenizer
8
9=head1 SYNOPSIS
10
11 # Create a tokenizer for a file, array or string
12 $Tokenizer = PPI::Tokenizer->new( 'filename.pl' );
13 $Tokenizer = PPI::Tokenizer->new( \@lines );
14 $Tokenizer = PPI::Tokenizer->new( \$source );
15
16 # Return all the tokens for the document
17 my $tokens = $Tokenizer->all_tokens;
18
19 # Or we can use it as an iterator
20 while ( my $Token = $Tokenizer->get_token ) {
21 print "Found token '$Token'\n";
22 }
23
24 # If we REALLY need to manually nudge the cursor, you
25 # can do that too (The lexer needs this ability to do rollbacks)
26 $is_incremented = $Tokenizer->increment_cursor;
27 $is_decremented = $Tokenizer->decrement_cursor;
28
29=head1 DESCRIPTION
30
31PPI::Tokenizer is the class that provides Tokenizer objects for use in
32breaking strings of Perl source code into Tokens.
33
34By the time you are reading this, you probably need to know a little
35about the difference between how perl parses Perl "code" and how PPI
parses Perl "documents".
37
38"perl" itself (the interpreter) uses a heavily modified lex specification
39to specify its parsing logic, maintains several types of state as it
40goes, and incrementally tokenizes, lexes AND EXECUTES at the same time.
41
42In fact, it is provably impossible to use perl's parsing method without
43simultaneously executing code. A formal mathematical proof has been
44published demonstrating the method.
45
46This is where the truism "Only perl can parse Perl" comes from.
47
48PPI uses a completely different approach by abandoning the (impossible)
49ability to parse Perl the same way that the interpreter does, and instead
50parsing the source as a document, using a document structure independently
51derived from the Perl documentation and approximating the perl interpreter
52interpretation as closely as possible.
53
54It was touch and go for a long time whether we could get it close enough,
55but in the end it turned out that it could be done.
56
57In this approach, the tokenizer C<PPI::Tokenizer> is implemented separately
58from the lexer L<PPI::Lexer>.
59
60The job of C<PPI::Tokenizer> is to take pure source as a string and break it
61up into a stream/set of tokens, and contains most of the "black magic" used
in PPI. By comparison, the lexer implements a relatively straightforward
63tree structure, and has an implementation that is uncomplicated (compared
64to the insanity in the tokenizer at least).
65
66The Tokenizer uses an immense amount of heuristics, guessing and cruft,
67supported by a very B<VERY> flexible internal API, but fortunately it was
68possible to largely encapsulate the black magic, so there is not a lot that
69gets exposed to people using the C<PPI::Tokenizer> itself.
70
71=head1 METHODS
72
73Despite the incredible complexity, the Tokenizer itself only exposes a
74relatively small number of methods, with most of the complexity implemented
75in private methods.
76
77=cut
78
79# Make sure everything we need is loaded so
80# we don't have to go and load all of PPI.
81 2 21µs 2 34µs
# spent 23µs (12+11) within PPI::Tokenizer::BEGIN@81 which was called: # once (12µs+11µs) by PPI::BEGIN@28 at line 81
use strict;
# spent 23µs making 1 call to PPI::Tokenizer::BEGIN@81 # spent 11µs making 1 call to strict::import
82 2 19µs 2 63µs
# spent 35µs (7+28) within PPI::Tokenizer::BEGIN@82 which was called: # once (7µs+28µs) by PPI::BEGIN@28 at line 82
use Params::Util qw{_INSTANCE _SCALAR0 _ARRAY0};
# spent 35µs making 1 call to PPI::Tokenizer::BEGIN@82 # spent 28µs making 1 call to Exporter::import
83 2 18µs 1 3µs
# spent 3µs within PPI::Tokenizer::BEGIN@83 which was called: # once (3µs+0s) by PPI::BEGIN@28 at line 83
use List::MoreUtils ();
# spent 3µs making 1 call to PPI::Tokenizer::BEGIN@83
84 2 15µs 1 3µs
# spent 3µs within PPI::Tokenizer::BEGIN@84 which was called: # once (3µs+0s) by PPI::BEGIN@28 at line 84
use PPI::Util ();
# spent 3µs making 1 call to PPI::Tokenizer::BEGIN@84
85 2 14µs 1 3µs
# spent 3µs within PPI::Tokenizer::BEGIN@85 which was called: # once (3µs+0s) by PPI::BEGIN@28 at line 85
use PPI::Element ();
# spent 3µs making 1 call to PPI::Tokenizer::BEGIN@85
86 2 20µs 1 3µs
# spent 3µs within PPI::Tokenizer::BEGIN@86 which was called: # once (3µs+0s) by PPI::BEGIN@28 at line 86
use PPI::Token ();
# spent 3µs making 1 call to PPI::Tokenizer::BEGIN@86
87 2 15µs 1 3µs
# spent 3µs within PPI::Tokenizer::BEGIN@87 which was called: # once (3µs+0s) by PPI::BEGIN@28 at line 87
use PPI::Exception ();
# spent 3µs making 1 call to PPI::Tokenizer::BEGIN@87
88 2 79µs 1 224µs
# spent 224µs (135+89) within PPI::Tokenizer::BEGIN@88 which was called: # once (135µs+89µs) by PPI::BEGIN@28 at line 88
use PPI::Exception::ParserRejection ();
# spent 224µs making 1 call to PPI::Tokenizer::BEGIN@88
89
90 2 22µs 2 39µs
# spent 23µs (6+16) within PPI::Tokenizer::BEGIN@90 which was called: # once (6µs+16µs) by PPI::BEGIN@28 at line 90
use vars qw{$VERSION};
# spent 23µs making 1 call to PPI::Tokenizer::BEGIN@90 # spent 16µs making 1 call to vars::import
91
# spent 3µs within PPI::Tokenizer::BEGIN@91 which was called: # once (3µs+0s) by PPI::BEGIN@28 at line 93
BEGIN {
92 1 4µs $VERSION = '1.215';
93 1 1.57ms 1 3µs }
# spent 3µs making 1 call to PPI::Tokenizer::BEGIN@91
94
- -
99#####################################################################
100# Creation and Initialization
101
102=pod
103
104=head2 new $file | \@lines | \$source
105
106The main C<new> constructor creates a new Tokenizer object. These
107objects have no configuration parameters, and can only be used once,
108to tokenize a single perl source file.
109
110It takes as argument either a normal scalar containing source code,
111a reference to a scalar containing source code, or a reference to an
112ARRAY containing newline-terminated lines of source code.
113
114Returns a new C<PPI::Tokenizer> object on success, or throws a
115L<PPI::Exception> exception on error.
116
117=cut
118
119
# spent 503ms (118+384) within PPI::Tokenizer::new which was called 144 times, avg 3.49ms/call: # 144 times (118ms+384ms) by PPI::Lexer::lex_file at line 159 of PPI/Lexer.pm, avg 3.49ms/call
sub new {
120 144 119µs my $class = ref($_[0]) || $_[0];
121
122 # Create the empty tokenizer struct
123 144 1.61ms my $self = bless {
124 # Source code
125 source => undef,
126 source_bytes => undef,
127
128 # Line buffer
129 line => undef,
130 line_length => undef,
131 line_cursor => undef,
132 line_count => 0,
133
134 # Parse state
135 token => undef,
136 class => 'PPI::Token::BOM',
137 zone => 'PPI::Token::Whitespace',
138
139 # Output token buffer
140 tokens => [],
141 token_cursor => 0,
142 token_eof => 0,
143
144 # Perl 6 blocks
145 perl6 => [],
146 }, $class;
147
148 144 208µs if ( ! defined $_[1] ) {
149 # We weren't given anything
150 PPI::Exception->throw("No source provided to Tokenizer");
151
152 } elsif ( ! ref $_[1] ) {
153 144 566µs 144 187ms my $source = PPI::Util::_slurp($_[1]);
# spent 187ms making 144 calls to PPI::Util::_slurp, avg 1.30ms/call
154 144 1.20ms if ( ref $source ) {
155 # Content returned by reference
156 $self->{source} = $$source;
157 } else {
158 # Errors returned as a string
159 return( $source );
160 }
161
162 } elsif ( _SCALAR0($_[1]) ) {
163 $self->{source} = ${$_[1]};
164
165 } elsif ( _ARRAY0($_[1]) ) {
166 $self->{source} = join '', map { "\n" } @{$_[1]};
167
168 } else {
169 # We don't support whatever this is
170 PPI::Exception->throw(ref($_[1]) . " is not supported as a source provider");
171 }
172
173 # We can't handle a null string
174 144 289µs $self->{source_bytes} = length $self->{source};
175 144 3.62ms if ( $self->{source_bytes} > 1048576 ) {
176 # Dammit! It's ALWAYS the "Perl" modules larger than a
177 # meg that seems to blow up the Tokenizer/Lexer.
178 # Nobody actually writes real programs larger than a meg
179 # Perl::Tidy (the largest) is only 800k.
180 # It is always these idiots with massive Data::Dumper
181 # structs or huge RecDescent parser.
182 PPI::Exception::ParserRejection->throw("File is too large");
183
184 } elsif ( $self->{source_bytes} ) {
185 # Split on local newlines
186 144 163ms 144 162ms $self->{source} =~ s/(?:\015{1,2}\012|\015|\012)/\n/g;
# spent 162ms making 144 calls to PPI::Tokenizer::CORE:subst, avg 1.12ms/call
187 144 107ms $self->{source} = [ split /(?<=\n)/, $self->{source} ];
188
189 } else {
190 $self->{source} = [ ];
191 }
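The two statements at source lines 186-187 are the whole of PPI's line splitting: normalize CRLF, lone CR, and lone LF to "\n", then split with a lookbehind so each line keeps its trailing newline. A standalone sketch of the same transforms, independent of the profile:

```perl
use strict;
use warnings;

# Mixed line endings: CRLF, lone CR, lone LF.
my $source = "one\015\012two\015three\012four";

# Same substitution as in new(): normalize every ending to "\n".
$source =~ s/(?:\015{1,2}\012|\015|\012)/\n/g;

# Same lookbehind split: break after each "\n" but keep it attached.
my @lines = split /(?<=\n)/, $source;
print scalar(@lines), " lines\n";   # 4 lines; only the last lacks a newline
```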
192
193 ### EVIL
194 # I'm explaining this earlier than I should so you can understand
195 # why I'm about to do something that looks very strange. There's
196 # a problem with the Tokenizer, in that tokens tend to change
197 # classes as each letter is added, but they don't get allocated
198 # their definite final class until the "end" of the token, the
199 # detection of which occurs in about a hundred different places,
200 # all through various crufty code (that triples the speed).
201 #
202 # However, in general, this does not apply to tokens in which a
203 # whitespace character is valid, such as comments, whitespace and
204 # big strings.
205 #
206 # So what we do is add a space to the end of the source. This
207 # triggers normal "end of token" functionality for all cases. Then,
208 # once the tokenizer hits end of file, it examines the last token to
209 # manually either remove the ' ' token, or chop it off the end of
210 # a longer one in which the space would be valid.
211 15678 34.2ms 15678 39.0ms if ( List::MoreUtils::any { /^__(?:DATA|END)__\s*$/ } @{$self->{source}} ) {
# spent 34.9ms making 144 calls to List::MoreUtils::any, avg 242µs/call # spent 4.15ms making 15534 calls to PPI::Tokenizer::CORE:match, avg 267ns/call
212 $self->{source_eof_chop} = '';
213 } elsif ( ! defined $self->{source}->[0] ) {
214 $self->{source_eof_chop} = '';
215 } elsif ( $self->{source}->[-1] =~ /\s$/ ) {
216 $self->{source_eof_chop} = '';
217 } else {
218 $self->{source_eof_chop} = 1;
219 $self->{source}->[-1] .= ' ';
220 }
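The decision chain above can be mimicked standalone. This is a simplified sketch (using plain grep in place of List::MoreUtils::any, and not the real object code): the sentinel space is only wanted when there is no __DATA__/__END__ section and the last line does not already end in whitespace.

```perl
use strict;
use warnings;

# Returns 1 if the sentinel space would be appended (source_eof_chop = 1).
sub would_chop {
    my @source = @_;
    return 0 if grep { /^__(?:DATA|END)__\s*$/ } @source;
    return 0 if !defined $source[0];
    return 0 if $source[-1] =~ /\s$/;
    return 1;
}

print would_chop("my \$x = 1;"), "\n";          # 1: no trailing whitespace
print would_chop("print 42;\n"), "\n";          # 0: already ends in \n
print would_chop("code;\n", "__END__\n"), "\n"; # 0: has an __END__ section
```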
221
222 144 765µs $self;
223}
224
- -
229#####################################################################
230# Main Public Methods
231
232=pod
233
234=head2 get_token
235
236When using the PPI::Tokenizer object as an iterator, the C<get_token>
237method is the primary method that is used. It increments the cursor
238and returns the next Token in the output array.
239
240The actual parsing of the file is done only as-needed, and a line at
241a time. When C<get_token> hits the end of the token array, it will
242cause the parser to pull in the next line and parse it, continuing
243as needed until there are more tokens on the output array that
244get_token can then return.
245
246This means that a number of Tokenizer objects can be created, and
247won't consume significant CPU until you actually begin to pull tokens
248from it.
249
250Return a L<PPI::Token> object on success, C<0> if the Tokenizer had
251reached the end of the file, or C<undef> on error.
252
253=cut
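The return protocol matters to callers: a (true) token object while tokens remain, C<0> at end of file, C<undef> on error. A consumer loop can therefore rely on a plain boolean test, as in this stand-in sketch (a closure over an array, not a real PPI::Tokenizer):

```perl
use strict;
use warnings;

# Stand-in for get_token: tokens while they last, then 0 for EOF.
my @stream = ('my', ' ', '$x', ';');
my $i = 0;
my $get_token = sub { $i < @stream ? $stream[$i++] : 0 };

my $count = 0;
while ( my $token = $get_token->() ) {
    $count++;
}
print "$count tokens\n";   # 4 tokens; the loop stops on the EOF 0
```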
254
255
# spent 6.79s (602ms+6.19) within PPI::Tokenizer::get_token which was called 94513 times, avg 72µs/call: # 94513 times (602ms+6.19s) by PPI::Lexer::_get_token at line 1413 of PPI/Lexer.pm, avg 72µs/call
sub get_token {
2569451317.5ms my $self = shift;
257
258 # Shortcut for EOF
2599451315.6ms if ( $self->{token_eof}
260 and $self->{token_cursor} > scalar @{$self->{tokens}}
261 ) {
262 return 0;
263 }
264
265 # Return the next token if we can
26694513298ms8238448.3ms if ( my $token = $self->{tokens}->[ $self->{token_cursor} ] ) {
# spent 48.3ms making 82384 calls to PPI::Util::TRUE, avg 587ns/call
2678238411.9ms $self->{token_cursor}++;
26882384244ms return $token;
269 }
270
271 12129 268µs my $line_rv;
272
273 # Catch exceptions and return undef, so that we
274 # can start to convert code to exception-based code.
275121294.52ms my $rv = eval {
276 # No token, we need to get some more
277 12129 14.1ms 12129 4.32s while ( $line_rv = $self->_process_next_line ) {
# spent 4.32s making 12129 calls to PPI::Tokenizer::_process_next_line, avg 356µs/call
278 # If there is something in the buffer, return it
279 # The defined() prevents a ton of calls to PPI::Util::TRUE
280 26616 31.1ms 14775 1.81s if ( defined( my $token = $self->{tokens}->[ $self->{token_cursor} ] ) ) {
# spent 1.81s making 14775 calls to PPI::Tokenizer::_process_next_line, avg 123µs/call
281118411.48ms $self->{token_cursor}++;
282118415.62ms return $token;
283 }
284 }
285 288 56µs return undef;
286 };
2871212980.8ms118418.35ms if ( $@ ) {
# spent 8.35ms making 11841 calls to PPI::Util::TRUE, avg 705ns/call
288 if ( _INSTANCE($@, 'PPI::Exception') ) {
289 $@->throw;
290 } else {
291 my $errstr = $@;
292 $errstr =~ s/^(.*) at line .+$/$1/;
293 PPI::Exception->throw( $errstr );
294 }
295 } elsif ( $rv ) {
296 return $rv;
297 }
298
299 288 63µs if ( defined $line_rv ) {
300 # End of file, but we can still return things from the buffer
301 288 181µs if ( my $token = $self->{tokens}->[ $self->{token_cursor} ] ) {
302 $self->{token_cursor}++;
303 return $token;
304 }
305
306 # Set our token end of file flag
307 288 82µs $self->{token_eof} = 1;
308 288 489µs return 0;
309 }
310
311 # Error, pass it up to our caller
312 undef;
313}
314
315=pod
316
317=head2 all_tokens
318
319When not being used as an iterator, the C<all_tokens> method tells
320the Tokenizer to parse the entire file and return all of the tokens
321in a single ARRAY reference.
322
323It should be noted that C<all_tokens> does B<NOT> interfere with the
324use of the Tokenizer object as an iterator (does not modify the token
325cursor) and use of the two different mechanisms can be mixed safely.
326
327Returns a reference to an ARRAY of L<PPI::Token> objects on success
328or throws an exception on error.
329
330=cut
331
332sub all_tokens {
333 my $self = shift;
334
335 # Catch exceptions and return undef, so that we
336 # can start to convert code to exception-based code.
337 eval {
338 # Process lines until we get EOF
339 unless ( $self->{token_eof} ) {
340 my $rv;
341 while ( $rv = $self->_process_next_line ) {}
342 unless ( defined $rv ) {
343 PPI::Exception->throw("Error while processing source");
344 }
345
346 # Clean up the end of the tokenizer
347 $self->_clean_eof;
348 }
349 };
350 if ( $@ ) {
351 my $errstr = $@;
352 $errstr =~ s/^(.*) at line .+$/$1/;
353 PPI::Exception->throw( $errstr );
354 }
355
356 # End of file, return a copy of the token array.
357 return [ @{$self->{tokens}} ];
358}
359
360=pod
361
362=head2 increment_cursor
363
364Although exposed as a public method, C<increment_cursor> is implemented
365for expert use only, when writing lexers or other components that work
366directly on token streams.
367
368It manually increments the token cursor forward through the file, in effect
369"skipping" the next token.
370
371Return true if the cursor is incremented, C<0> if already at the end of
372the file, or C<undef> on error.
373
374=cut
375
376sub increment_cursor {
377 # Do this via the get_token method, which makes sure there
378 # is actually a token there to move to.
379 $_[0]->get_token and 1;
380}
381
382=pod
383
384=head2 decrement_cursor
385
386Although exposed as a public method, C<decrement_cursor> is implemented
387for expert use only, when writing lexers or other components that work
388directly on token streams.
389
390It manually decrements the token cursor backwards through the file, in
391effect "rolling back" the token stream. And indeed that is what it is
392primarily intended for, when the component that is consuming the token
393stream needs to implement some sort of "roll back" feature in its use
394of the token stream.
395
396Return true if the cursor is decremented, C<0> if already at the
397beginning of the file, or C<undef> on error.
398
399=cut
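Together the two cursor methods support the peek-and-roll-back pattern used by callers such as the lexer. A minimal stand-in with a plain array and integer cursor (not the real object):

```perl
use strict;
use warnings;

my @tokens = ('sub', 'foo', '{');
my $cursor = 0;

# Peek: consume one token, as get_token would.
my $peek = $tokens[$cursor++];

# Not the token we were looking for, so roll back, like decrement_cursor.
$cursor-- if $peek ne '{';

print "cursor=$cursor\n";   # cursor=0: 'sub' will be delivered again
```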
400
401sub decrement_cursor {
402 my $self = shift;
403
404 # Check for the beginning of the file
405 return 0 unless $self->{token_cursor};
406
407 # Decrement the token cursor
408 $self->{token_eof} = 0;
409 --$self->{token_cursor};
410}
411
- -
416#####################################################################
417# Working With Source
418
419# Fetches the next line from the input line buffer
420# Returns undef at EOF.
421
# spent 60.1ms within PPI::Tokenizer::_get_line which was called 27287 times, avg 2µs/call: # 27281 times (60.1ms+0s) by PPI::Tokenizer::_fill_line at line 443, avg 2µs/call # 5 times (10µs+0s) by PPI::Token::HereDoc::__TOKENIZER__on_char at line 222 of PPI/Token/HereDoc.pm, avg 2µs/call # once (3µs+0s) by PPI::Token::HereDoc::__TOKENIZER__on_char at line 211 of PPI/Token/HereDoc.pm
sub _get_line {
422272873.41ms my $self = shift;
423272876.10ms return undef unless $self->{source}; # EOF hit previously
424
425 # Pull off the next line
4262714315.3ms my $line = shift @{$self->{source}};
427
428 # Flag EOF if we hit it
429271433.09ms $self->{source} = undef unless defined $line;
430
431 # Return the line (or EOF flag)
43227143113ms return $line; # string or undef
433}
434
435# Fetches the next line, ready to process
436# Returns 1 on success
437# Returns 0 on EOF
438
# spent 246ms (186+60.1) within PPI::Tokenizer::_fill_line which was called 27281 times, avg 9µs/call: # 26904 times (184ms+59.2ms) by PPI::Tokenizer::_process_next_line at line 490, avg 9µs/call # 372 times (1.89ms+884µs) by PPI::Token::_QuoteEngine::_scan_for_brace_character at line 183 of PPI/Token/_QuoteEngine.pm, avg 7µs/call # 5 times (38µs+16µs) by PPI::Token::_QuoteEngine::_scan_for_unescaped_character at line 137 of PPI/Token/_QuoteEngine.pm, avg 11µs/call
sub _fill_line {
439272813.17ms my $self = shift;
440272813.02ms my $inscan = shift;
441
442 # Get the next line
443 27281 27.1ms 27281 60.1ms my $line = $self->_get_line;
# spent 60.1ms making 27281 calls to PPI::Tokenizer::_get_line, avg 2µs/call
444272812.96ms unless ( defined $line ) {
445 # End of file
446 288 32µs unless ( $inscan ) {
447 288 199µs delete $self->{line};
448 288 52µs delete $self->{line_cursor};
449 288 46µs delete $self->{line_length};
450 288 529µs return 0;
451 }
452
453 # In the scan version, just set the cursor to the end
454 # of the line, and the rest should just cascade out.
455 $self->{line_cursor} = $self->{line_length};
456 return 0;
457 }
458
459 # Populate the appropriate variables
460269936.62ms $self->{line} = $line;
461269934.61ms $self->{line_cursor} = -1;
462269936.80ms $self->{line_length} = length $line;
463269933.62ms $self->{line_count}++;
464
4652699368.3ms 1;
466}
467
468# Get the current character
469sub _char {
470 my $self = shift;
471 substr( $self->{line}, $self->{line_cursor}, 1 );
472}
473
- -
478####################################################################
479# Per line processing methods
480
481# Processes the next line
482# Returns 1 on success completion
483# Returns 0 if EOF
484# Returns undef on error
485
# spent 6.13s (857ms+5.28) within PPI::Tokenizer::_process_next_line which was called 26904 times, avg 228µs/call: # 14775 times (272ms+1.54s) by PPI::Tokenizer::get_token at line 280, avg 123µs/call # 12129 times (586ms+3.74s) by PPI::Tokenizer::get_token at line 277, avg 356µs/call
sub _process_next_line {
486269043.78ms my $self = shift;
487
488 # Fill the line buffer
489 26904 903µs my $rv;
490 26904 23.3ms 26904 243ms unless ( $rv = $self->_fill_line ) {
# spent 243ms making 26904 calls to PPI::Tokenizer::_fill_line, avg 9µs/call
491 288 38µs return undef unless defined $rv;
492
493 # End of file, finalize last token
494 288 275µs 288 397µs $self->_finalize_token;
# spent 397µs making 288 calls to PPI::Tokenizer::_finalize_token, avg 1µs/call
495 288 450µs return 0;
496 }
497
498 # Run the __TOKENIZER__on_line_start
4992661639.3ms26616354ms $rv = $self->{class}->__TOKENIZER__on_line_start( $self );
# spent 269ms making 14943 calls to PPI::Token::Whitespace::__TOKENIZER__on_line_start, avg 18µs/call # spent 65.6ms making 9695 calls to PPI::Token::Pod::__TOKENIZER__on_line_start, avg 7µs/call # spent 14.1ms making 1834 calls to PPI::Token::End::__TOKENIZER__on_line_start, avg 8µs/call # spent 4.66ms making 144 calls to PPI::Token::BOM::__TOKENIZER__on_line_start, avg 32µs/call
500266163.26ms unless ( $rv ) {
501 # If there are no more source lines, then clean up
502169239.78ms1441.76ms if ( ref $self->{source} eq 'ARRAY' and ! @{$self->{source}} ) {
# spent 1.76ms making 144 calls to PPI::Tokenizer::_clean_eof, avg 12µs/call
503 $self->_clean_eof;
504 }
505
506 # Defined but false means next line
5071692366.4ms return 1 if defined $rv;
508 PPI::Exception->throw("Error at line $self->{line_count}");
509 }
510
511 # If we can't deal with the entire line, process char by char
5129693203ms1496094.58s while ( $rv = $self->_process_next_char ) {}
# spent 4.58s making 149609 calls to PPI::Tokenizer::_process_next_char, avg 31µs/call
51396931.15ms unless ( defined $rv ) {
514 PPI::Exception->throw("Error at line $self->{line_count}, character $self->{line_cursor}");
515 }
516
517 # Trigger any action that needs to happen at the end of a line
518969313.4ms969394.6ms $self->{class}->__TOKENIZER__on_line_end( $self );
# spent 94.4ms making 9549 calls to PPI::Token::Whitespace::__TOKENIZER__on_line_end, avg 10µs/call # spent 224µs making 144 calls to PPI::Token::__TOKENIZER__on_line_end, avg 2µs/call
519
520 # If there are no more source lines, then clean up
52196937.24ms unless ( ref($self->{source}) eq 'ARRAY' and @{$self->{source}} ) {
522 return $self->_clean_eof;
523 }
524
525969337.6ms return 1;
526}
527
- -
532#####################################################################
533# Per-character processing methods
534
535# Process on a per-character basis.
536# Note that due to the high number of times this gets
537# called, it has been fairly heavily in-lined, so the code
538# might look a bit ugly and duplicated.
539
# spent 4.58s (1.30+3.28) within PPI::Tokenizer::_process_next_char which was called 149609 times, avg 31µs/call: # 149609 times (1.30s+3.28s) by PPI::Tokenizer::_process_next_line at line 512, avg 31µs/call
sub _process_next_char {
54014960924.1ms my $self = shift;
541
542 ### FIXME - This checks for a screwed up condition that triggers
543 ### several warnings, amongst other things.
54414960948.5ms if ( ! defined $self->{line_cursor} or ! defined $self->{line_length} ) {
545 # $DB::single = 1;
546 return undef;
547 }
548
549 # Increment the counter and check for end of line
55014960957.7ms return 0 if ++$self->{line_cursor} >= $self->{line_length};
551
552 # Pass control to the token class
5531399161.69ms my $result;
554139916221ms1399162.94s unless ( $result = $self->{class}->__TOKENIZER__on_char( $self ) ) {
# spent 1.87s making 106218 calls to PPI::Token::Whitespace::__TOKENIZER__on_char, avg 18µs/call # spent 362ms making 7754 calls to PPI::Token::Symbol::__TOKENIZER__on_char, avg 47µs/call # spent 299ms making 10634 calls to PPI::Token::Operator::__TOKENIZER__on_char, avg 28µs/call # spent 201ms making 8180 calls to PPI::Token::Unknown::__TOKENIZER__on_char, avg 25µs/call # spent 90.9ms making 1688 calls to PPI::Token::_QuoteEngine::__TOKENIZER__on_char, avg 54µs/call # spent 69.1ms making 3157 calls to PPI::Token::Structure::__TOKENIZER__on_char, avg 22µs/call # spent 38.4ms making 1170 calls to PPI::Token::Number::__TOKENIZER__on_char, avg 33µs/call # spent 13.3ms making 1018 calls to PPI::Token::Number::Float::__TOKENIZER__on_char, avg 13µs/call # spent 1.61ms making 34 calls to PPI::Token::Magic::__TOKENIZER__on_char, avg 47µs/call # spent 654µs making 61 calls to PPI::Token::Cast::__TOKENIZER__on_char, avg 11µs/call # spent 69µs making 2 calls to PPI::Token::DashedWord::__TOKENIZER__on_char, avg 34µs/call
555 # undef is error. 0 is "Did stuff ourself, you don't have to do anything"
556 return defined $result ? 1 : undef;
557 }
558
559 # We will need the value of the current character
56012342054.3ms my $char = substr( $self->{line}, $self->{line_cursor}, 1 );
56112342015.8ms if ( $result eq '1' ) {
562 # If __TOKENIZER__on_char returns 1, it is signaling that it thinks that
563 # the character is part of it.
564
565 # Add the character
566124746.66ms if ( defined $self->{token} ) {
567 $self->{token}->{content} .= $char;
568 } else {
569 defined($self->{token} = $self->{class}->new($char)) or return undef;
570 }
571
5721247437.1ms return 1;
573 }
574
575 # We have been provided with the name of a class
57611094685.8ms21222254ms if ( $self->{class} ne "PPI::Token::$result" ) {
# spent 254ms making 21222 calls to PPI::Tokenizer::_new_token, avg 12µs/call
577 # New class
578 $self->_new_token( $result, $char );
579 } elsif ( defined $self->{token} ) {
580 # Same class as current
581 $self->{token}->{content} .= $char;
582 } else {
583 # Same class, but no current
5843769261.1ms3769285.7ms defined($self->{token} = $self->{class}->new($char)) or return undef;
# spent 85.7ms making 37692 calls to PPI::Token::new, avg 2µs/call
585 }
586
587110946352ms 1;
588}
589
- -
594#####################################################################
595# Altering Tokens in Tokenizer
596
597# Finish the end of a token.
598# Returns the resulting parse class as a convenience.
599
# spent 218ms within PPI::Tokenizer::_finalize_token which was called 94513 times, avg 2µs/call: # 31193 times (67.2ms+0s) by PPI::Tokenizer::_new_token at line 620, avg 2µs/call # 14291 times (35.5ms+0s) by PPI::Token::Word::__TOKENIZER__commit at line 539 of PPI/Token/Word.pm, avg 2µs/call # 13365 times (29.4ms+0s) by PPI::Token::Structure::__TOKENIZER__commit at line 76 of PPI/Token/Structure.pm, avg 2µs/call # 9549 times (20.9ms+0s) by PPI::Token::Whitespace::__TOKENIZER__on_line_end at line 417 of PPI/Token/Whitespace.pm, avg 2µs/call # 7437 times (16.8ms+0s) by PPI::Token::Operator::__TOKENIZER__on_char at line 112 of PPI/Token/Operator.pm, avg 2µs/call # 7245 times (21.2ms+0s) by PPI::Token::Symbol::__TOKENIZER__on_char at line 216 of PPI/Token/Symbol.pm, avg 3µs/call # 3157 times (6.88ms+0s) by PPI::Token::Structure::__TOKENIZER__on_char at line 70 of PPI/Token/Structure.pm, avg 2µs/call # 2743 times (7.54ms+0s) by PPI::Token::_QuoteEngine::__TOKENIZER__on_char at line 58 of PPI/Token/_QuoteEngine.pm, avg 3µs/call # 1668 times (3.76ms+0s) by PPI::Token::Whitespace::__TOKENIZER__on_line_start at line 165 of PPI/Token/Whitespace.pm, avg 2µs/call # 1252 times (2.71ms+0s) by PPI::Token::Whitespace::__TOKENIZER__on_char at line 213 of PPI/Token/Whitespace.pm, avg 2µs/call # 832 times (2.14ms+0s) by PPI::Token::Number::__TOKENIZER__on_char at line 125 of PPI/Token/Number.pm, avg 3µs/call # 509 times (1.33ms+0s) by PPI::Token::Symbol::__TOKENIZER__on_char at line 174 of PPI/Token/Symbol.pm, avg 3µs/call # 288 times (397µs+0s) by PPI::Tokenizer::_process_next_line at line 494, avg 1µs/call # 148 times (513µs+0s) by PPI::Token::Number::Float::__TOKENIZER__on_char at line 108 of PPI/Token/Number/Float.pm, avg 3µs/call # 146 times (415µs+0s) by PPI::Token::Pod::__TOKENIZER__on_line_start at line 148 of PPI/Token/Pod.pm, avg 3µs/call # 144 times (335µs+0s) by PPI::Tokenizer::_clean_eof at line 635, avg 2µs/call # 144 times (308µs+0s) by PPI::Token::Word::__TOKENIZER__commit at line 458 of PPI/Token/Word.pm, avg 2µs/call # 144 times (299µs+0s) by PPI::Token::Word::__TOKENIZER__commit at line 441 of PPI/Token/Word.pm, avg 2µs/call # 85 times (215µs+0s) by PPI::Token::Unknown::__TOKENIZER__on_char at line 179 of PPI/Token/Unknown.pm, avg 3µs/call # 61 times (125µs+0s) by PPI::Token::Cast::__TOKENIZER__on_char at line 51 of PPI/Token/Cast.pm, avg 2µs/call # 51 times (105µs+0s) by PPI::Token::Whitespace::__TOKENIZER__on_char at line 261 of PPI/Token/Whitespace.pm, avg 2µs/call # 30 times (105µs+0s) by PPI::Token::Magic::__TOKENIZER__on_char at line 228 of PPI/Token/Magic.pm, avg 4µs/call # 22 times (54µs+0s) by PPI::Token::Unknown::__TOKENIZER__on_char at line 216 of PPI/Token/Unknown.pm, avg 2µs/call # 3 times (8µs+0s) by PPI::Token::ArrayIndex::__TOKENIZER__on_char at line 56 of PPI/Token/ArrayIndex.pm, avg 3µs/call # 2 times (5µs+0s) by PPI::Token::DashedWord::__TOKENIZER__on_char at line 95 of PPI/Token/DashedWord.pm, avg 2µs/call # once (2µs+0s) by PPI::Token::Magic::__TOKENIZER__on_char at line 170 of PPI/Token/Magic.pm # once (2µs+0s) by PPI::Token::Unknown::__TOKENIZER__on_char at line 150 of PPI/Token/Unknown.pm # once (2µs+0s) by PPI::Token::HereDoc::__TOKENIZER__on_char at line 218 of PPI/Token/HereDoc.pm # once (2µs+0s) by PPI::Token::Whitespace::__TOKENIZER__on_char at line 316 of PPI/Token/Whitespace.pm
sub _finalize_token {
6009451316.2ms my $self = shift;
6019451316.8ms return $self->{class} unless defined $self->{token};
602
603 # Add the token to the token buffer
6049422534.9ms push @{ $self->{tokens} }, $self->{token};
6059422516.6ms $self->{token} = undef;
606
607 # Return the parse class to that of the zone we are in
60894225297ms $self->{class} = $self->{zone};
609}
610
611# Creates a new token and sets it in the tokenizer
612# The defined() in here prevent a ton of calls to PPI::Util::TRUE
613
# spent 681ms (428+253) within PPI::Tokenizer::_new_token which was called 56533 times, avg 12µs/call: # 21222 times (159ms+94.4ms) by PPI::Tokenizer::_process_next_char at line 576, avg 12µs/call # 14291 times (103ms+63.5ms) by PPI::Token::Word::__TOKENIZER__commit at line 533 of PPI/Token/Word.pm, avg 12µs/call # 13365 times (103ms+47.8ms) by PPI::Token::Structure::__TOKENIZER__commit at line 75 of PPI/Token/Structure.pm, avg 11µs/call # 3724 times (24.9ms+10.0ms) by PPI::Token::Whitespace::__TOKENIZER__on_line_start at line 159 of PPI/Token/Whitespace.pm, avg 9µs/call # 1668 times (19.7ms+6.52ms) by PPI::Token::Whitespace::__TOKENIZER__on_line_start at line 164 of PPI/Token/Whitespace.pm, avg 16µs/call # 1055 times (10.0ms+26.4ms) by PPI::Token::Word::__TOKENIZER__commit at line 497 of PPI/Token/Word.pm, avg 35µs/call # 288 times (1.53ms+796µs) by PPI::Token::End::__TOKENIZER__on_line_start at line 84 of PPI/Token/End.pm, avg 8µs/call # 242 times (1.71ms+1.08ms) by PPI::Token::Comment::__TOKENIZER__commit at line 93 of PPI/Token/Comment.pm, avg 12µs/call # 242 times (1.62ms+1.01ms) by PPI::Token::Comment::__TOKENIZER__commit at line 94 of PPI/Token/Comment.pm, avg 11µs/call # 144 times (1.30ms+760µs) by PPI::Token::Word::__TOKENIZER__commit at line 440 of PPI/Token/Word.pm, avg 14µs/call # 144 times (1.29ms+646µs) by PPI::Token::End::__TOKENIZER__on_line_start at line 70 of PPI/Token/End.pm, avg 13µs/call # 144 times (703µs+318µs) by PPI::Token::Word::__TOKENIZER__commit at line 454 of PPI/Token/Word.pm, avg 7µs/call # 2 times (15µs+9µs) by PPI::Token::Whitespace::__TOKENIZER__on_line_start at line 170 of PPI/Token/Whitespace.pm, avg 12µs/call # 2 times (14µs+8µs) by PPI::Token::Number::Float::__TOKENIZER__on_char at line 93 of PPI/Token/Number/Float.pm, avg 11µs/call
sub _new_token {
614565339.70ms my $self = shift;
615 # throw PPI::Exception() unless @_;
6165653331.6ms my $class = substr( $_[0], 0, 12 ) eq 'PPI::Token::'
617 ? shift : 'PPI::Token::' . shift;
618
619 # Finalize any existing token
6205653338.5ms3119367.2ms $self->_finalize_token if defined $self->{token};
# spent 67.2ms making 31193 calls to PPI::Tokenizer::_finalize_token, avg 2µs/call
621
622 # Create the new token and update the parse class
6235653396.6ms56533186ms defined($self->{token} = $class->new($_[0])) or PPI::Exception->throw;
# spent 138ms making 53790 calls to PPI::Token::new, avg 3µs/call # spent 24.2ms making 1061 calls to PPI::Token::_QuoteEngine::Full::new, avg 23µs/call # spent 23.6ms making 1682 calls to PPI::Token::_QuoteEngine::Simple::new, avg 14µs/call
6245653311.2ms $self->{class} = $class;
625
62656533165ms 1;
627}
628
629# At the end of the file, we need to clean up the results of the erroneous
630# space that we inserted at the beginning of the process.
631
# spent 1.76ms (1.34+424µs) within PPI::Tokenizer::_clean_eof which was called 144 times, avg 12µs/call: # 144 times (1.34ms+424µs) by PPI::Tokenizer::_process_next_line at line 502, avg 12µs/call
sub _clean_eof {
63214447µs my $self = shift;
633
634 # Finish any partially completed token
635144645µs288424µs $self->_finalize_token if $self->{token};
# spent 335µs making 144 calls to PPI::Tokenizer::_finalize_token, avg 2µs/call # spent 89µs making 144 calls to PPI::Util::TRUE, avg 618ns/call
636
637 # Find the last token, and if it has no content, kill it.
638 # There appears to be some evidence that such "null tokens" are
639 # somehow getting created accidentally.
640144132µs my $last_token = $self->{tokens}->[ -1 ];
64114491µs unless ( length $last_token->{content} ) {
642 pop @{$self->{tokens}};
643 }
644
645 # Now, if the last character of the last token is a space we added,
646 # chop it off, deleting the token if there's nothing else left.
64714480µs if ( $self->{source_eof_chop} ) {
648 $last_token = $self->{tokens}->[ -1 ];
649 $last_token->{content} =~ s/ $//;
650 unless ( length $last_token->{content} ) {
651 # Popping token
652 pop @{$self->{tokens}};
653 }
654
655 # The hack involving adding an extra space is now reversed, and
656 # now nobody will ever know. The perfect crime!
657 $self->{source_eof_chop} = '';
658 }
659
660144331µs 1;
661}
662
- -
667#####################################################################
668# Utility Methods
669
670# Context
671sub _last_token {
672 $_[0]->{tokens}->[-1];
673}
674
675
# spent 589µs (488+101) within PPI::Tokenizer::_last_significant_token which was called 52 times, avg 11µs/call: # 51 times (479µs+99µs) by PPI::Token::Whitespace::__TOKENIZER__on_char at line 265 of PPI/Token/Whitespace.pm, avg 11µs/call # once (10µs+2µs) by PPI::Token::Whitespace::__TOKENIZER__on_char at line 321 of PPI/Token/Whitespace.pm
sub _last_significant_token {
6765219µs my $self = shift;
6775241µs my $cursor = $#{ $self->{tokens} };
6785220µs while ( $cursor >= 0 ) {
67910445µs my $token = $self->{tokens}->[$cursor--];
680104266µs104101µs return $token if $token->significant;
# spent 54µs making 52 calls to PPI::Token::Whitespace::significant, avg 1µs/call # spent 46µs making 52 calls to PPI::Element::significant, avg 894ns/call
681 }
682
683 # Nothing...
684 PPI::Token::Whitespace->null;
685}
686
687# Get an array ref of previous significant tokens.
688# Like _last_significant_token except it gets more than just one token
689# Returns array ref on success.
690# Returns 0 on not enough tokens
691
# spent 305ms (261+43.9) within PPI::Tokenizer::_previous_significant_tokens which was called 20542 times, avg 15µs/call: # 15490 times (172ms+28.5ms) by PPI::Token::Word::__TOKENIZER__commit at line 430 of PPI/Token/Word.pm, avg 13µs/call # 3157 times (72.8ms+13.4ms) by PPI::Token::Whitespace::__TOKENIZER__on_char at line 222 of PPI/Token/Whitespace.pm, avg 27µs/call # 1866 times (16.1ms+1.91ms) by PPI::Tokenizer::_opcontext at line 741, avg 10µs/call # 25 times (469µs+119µs) by PPI::Token::Unknown::__TOKENIZER__is_an_attribute at line 305 of PPI/Token/Unknown.pm, avg 24µs/call # 2 times (17µs+3µs) by PPI::Token::Unknown::__TOKENIZER__on_char at line 57 of PPI/Token/Unknown.pm, avg 10µs/call # 2 times (11µs+2µs) by PPI::Token::Whitespace::__TOKENIZER__on_char at line 384 of PPI/Token/Whitespace.pm, avg 6µs/call
sub _previous_significant_tokens {
692205424.29ms my $self = shift;
693205422.60ms my $count = shift || 1;
694205428.90ms my $cursor = $#{ $self->{tokens} };
695
696205421.91ms my ($token, @tokens);
697205424.68ms while ( $cursor >= 0 ) {
6984218114.9ms $token = $self->{tokens}->[$cursor--];
6994218153.6ms4218140.9ms if ( $token->significant ) {
# spent 25.1ms making 26762 calls to PPI::Element::significant, avg 940ns/call # spent 13.8ms making 13592 calls to PPI::Token::Whitespace::significant, avg 1µs/call # spent 1.88ms making 1824 calls to PPI::Token::Comment::significant, avg 1µs/call # spent 3µs making 3 calls to PPI::Token::Pod::significant, avg 1µs/call
7002676210.4ms push @tokens, $token;
70126762107ms return \@tokens if scalar @tokens >= $count;
702 }
703 }
704
705 # Pad with empties
706144424µs foreach ( 1 .. ($count - scalar @tokens) ) {
707144703µs1443.03ms push @tokens, PPI::Token::Whitespace->null;
# spent 3.03ms making 144 calls to PPI::Token::Whitespace::null, avg 21µs/call
708 }
709
710144466µs \@tokens;
711}
712
71317µsmy %OBVIOUS_CLASS = (
714 'PPI::Token::Symbol' => 'operator',
715 'PPI::Token::Magic' => 'operator',
716 'PPI::Token::Number' => 'operator',
717 'PPI::Token::ArrayIndex' => 'operator',
718 'PPI::Token::Quote::Double' => 'operator',
719 'PPI::Token::Quote::Interpolate' => 'operator',
720 'PPI::Token::Quote::Literal' => 'operator',
721 'PPI::Token::Quote::Single' => 'operator',
722 'PPI::Token::QuoteLike::Backtick' => 'operator',
723 'PPI::Token::QuoteLike::Command' => 'operator',
724 'PPI::Token::QuoteLike::Readline' => 'operator',
725 'PPI::Token::QuoteLike::Regexp' => 'operator',
726 'PPI::Token::QuoteLike::Words' => 'operator',
727);
728
72912µsmy %OBVIOUS_CONTENT = (
730 '(' => 'operand',
731 '{' => 'operand',
732 '[' => 'operand',
733 ';' => 'operand',
734 '}' => 'operator',
735);
736
737# Try to determine operator/operand context, if possible.
738# Returns "operator", "operand", or "" if unknown.
739
# spent 34.6ms (16.1+18.5) within PPI::Tokenizer::_opcontext which was called 1866 times, avg 19µs/call: # 1866 times (16.1ms+18.5ms) by PPI::Token::Whitespace::__TOKENIZER__on_char at line 397 of PPI/Token/Whitespace.pm, avg 19µs/call
sub _opcontext {
7401866419µs my $self = shift;
74118662.31ms186618.0ms my $tokens = $self->_previous_significant_tokens(1);
# spent 18.0ms making 1866 calls to PPI::Tokenizer::_previous_significant_tokens, avg 10µs/call
7421866635µs my $p0 = $tokens->[0];
7431866905µs my $c0 = ref $p0;
744
745 # Map the obvious cases
74618665.32ms return $OBVIOUS_CLASS{$c0} if defined $OBVIOUS_CLASS{$c0};
747133334µs153247µs return $OBVIOUS_CONTENT{$p0} if defined $OBVIOUS_CONTENT{$p0};
# spent 247µs making 153 calls to PPI::Token::content, avg 2µs/call
748
749 # Most of the time after an operator, we are an operand
750113485µs113168µs return 'operand' if $p0->isa('PPI::Token::Operator');
# spent 168µs making 113 calls to UNIVERSAL::isa, avg 1µs/call
751
752 # If there's NOTHING, it's operand
753107149µs107140µs return 'operand' if $p0->content eq '';
# spent 140µs making 107 calls to PPI::Token::content, avg 1µs/call
754
755 # Otherwise, we don't know
756107283µs return ''
757}
758
75916µs1;
760
761=pod
762
763=head1 NOTES
764
765=head2 How the Tokenizer Works
766
767Understanding the Tokenizer is not for the faint-hearted. It is by far
768the most complex and twisty piece of perl I've ever written that is actually
769still built properly and isn't a terrible spaghetti-like mess. In fact, you
770probably want to skip this section.
771
772But if you really want to understand, well then here goes.
773
774=head2 Source Input and Clean Up
775
776The Tokenizer starts by taking source in a variety of forms, sucking it
777all in and merging it into one big string, and doing its own internal line
778split, using a "universal line separator" which allows the Tokenizer to
779take source for any platform (and even supports a few known types of
780broken newlines caused by mixed mac/pc/*nix editor screw ups).
781
782The resulting array of lines is used to feed the tokenizer, and is also
783accessed directly by the heredoc-logic to do the line-oriented part of
784here-doc support.
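That universal line split can be sketched with a single regex. This is a simplification, not PPI's actual pattern; the regex and variable names here are illustrative only. The idea is that C<\n>, C<\r\n> and bare C<\r> all count as line endings, and each separator stays attached to its line:

```perl
#!/usr/bin/perl
# Sketch (not PPI's actual code) of a "universal line separator"
# split: \n, \r\n and bare \r all terminate a line, and the
# separator is kept on the end of its line.
use strict;
use warnings;

my $source = "one\ntwo\r\nthree\rfour";

# Match the shortest run of characters up to any newline flavour
# (or end of string), repeatedly from the last match position.
my @lines = $source =~ /\G(.*?(?:\015{1,2}\012|\015|\012|\z))/sg;

print scalar(@lines), " lines\n";   # 4 lines
```

Feeding each mixed-newline flavour through the same pattern is what lets the Tokenizer accept source written on any platform.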
785
786=head2 Doing Things the Old Fashioned Way
787
788Due to the complexity of perl, and after 2 previously aborted parser
789attempts, in the end the tokenizer was fashioned around a line-buffered
790character-by-character method.
791
792That is, the Tokenizer pulls and holds a line at a time into a line buffer,
793and then iterates a cursor along it. At each cursor position, a method is
794called in whatever token class we are currently in, which will examine the
795character at the current position, and handle it.
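In miniature, that loop looks something like the following. This is a toy, not PPI's code: the real per-character handlers are the C<__TOKENIZER__on_char> methods on the C<PPI::Token::*> classes, and the two "classes" here are invented for illustration:

```perl
#!/usr/bin/perl
# Toy sketch of the cursor/dispatch loop: walk a line buffer one
# character at a time, let the "current class" decide whether the
# character extends the current token or finalizes it.
use strict;
use warnings;

my $line   = '42 cats';
my @tokens;
my $buffer = '';
my $class  = '';

for my $pos ( 0 .. length($line) - 1 ) {
    my $char = substr $line, $pos, 1;
    my $new  = $char =~ /\d/ ? 'Number' : 'Other';
    if ( $new ne $class and length $buffer ) {
        push @tokens, [ $class, $buffer ];   # "finalize" the token
        $buffer = '';
    }
    $class   = $new;
    $buffer .= $char;
}
push @tokens, [ $class, $buffer ] if length $buffer;

printf "%s(%s) ", @$_ for @tokens;   # Number(42) Other( cats)
print "\n";
```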
796
797As the handler methods in the various token classes are called, they
798build up an output token array for the source code.
799
800Various parts of the Tokenizer use look-ahead, arbitrary-distance
801look-behind (although currently the maximum is three significant tokens),
802or both, and various other heuristic guesses.
803
804I've been told it is officially termed a I<"backtracking parser
805with infinite lookaheads">.
806
807=head2 State Variables
808
809Aside from the current line and the character cursor, the Tokenizer
810maintains a number of different state variables.
811
812=over
813
814=item Current Class
815
816The Tokenizer maintains the current token class at all times. Much of the
817time it is just going to be the "Whitespace" class, which is what the base
818of a document is. As the tokenizer executes the various character handlers,
819the class changes a lot as it moves along. In fact, in some instances,
820the character handler may not handle the character directly itself, but
821rather change the "current class" and then hand off to the character
822handler for the new class.
823
824Because of this, and some other things I'll deal with later, the number of
825times the character handlers are called does not in fact have a direct
826relationship to the number of actual characters in the document.
827
828=item Current Zone
829
830Rather than create a class stack to allow for infinitely nested layers of
831classes, the Tokenizer recognises just a single layer.
832
833To put it a different way, in various parts of the file, the Tokenizer will
834recognise different "base" or "substrate" classes. When a Token such as a
835comment or a number is finalised by the tokenizer, it "falls back" to the
836base state.
837
838This allows proper tokenization of special areas such as __DATA__
839and __END__ blocks, which also contain things like comments and POD,
840without allowing the creation of any significant Tokens inside these areas.
841
842For the main part of a document we use L<PPI::Token::Whitespace> for this,
843with the idea being that code is "floating in a sea of whitespace".
844
845=item Current Token
846
847The final main state variable is the "current token". This is the Token
848that is currently being built by the Tokenizer. For certain types, it
849can be manipulated and morphed and change class quite a bit while being
850assembled, as the Tokenizer's understanding of the token content changes.
851
852When the Tokenizer is confident that it has seen the end of the Token, it
853will be "finalized", which adds it to the output token array and resets
854the current class to that of the zone that we are currently in.
855
856I should also note at this point that the "current token" variable is
857optional. The Tokenizer is capable of knowing what class it is currently
858set to, without actually having accumulated any characters in the Token.
859
860=back
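The finalize-and-fall-back step described above can be sketched as follows. This is a toy illustration; in the real code it is C<_finalize_token> that pushes the token and resets C<< $self->{class} >> to C<< $self->{zone} >> (visible in the profiled source above):

```perl
#!/usr/bin/perl
# Toy sketch of the zone fallback: finalizing a token ships it to
# the output array and resets the current class to the zone's
# class, rather than popping a class stack.
use strict;
use warnings;

my %state = ( zone => 'Whitespace', class => 'Comment', token => '# hi' );
my @tokens;

# "Finalize": record the completed token, then fall back to the zone.
push @tokens, [ $state{class}, $state{token} ];
$state{token} = undef;
$state{class} = $state{zone};

print "back to $state{class}\n";   # back to Whitespace
```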
861
862=head2 Making It Faster
863
864As I'm sure you can imagine, calling several different methods for each
865character and running regexes and other complex heuristics made the first
866fully working version of the tokenizer extremely slow.
867
868During testing, I created a metric to measure parsing speed called
869LPGC, or "lines per gigacycle". A gigacycle is simply a billion CPU
870cycles on a typical single-core CPU, and so a Tokenizer running at
871"1000 lines per gigacycle" should tokenize around 1200 lines of code
872per second when running on a 1200 MHz processor.
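As a sanity check on that arithmetic (my own restatement, not code from PPI):

```perl
#!/usr/bin/perl
# Lines per second = LPGC x clock rate in gigacycles per second.
use strict;
use warnings;

my $lpgc      = 1000;   # lines tokenized per billion CPU cycles
my $clock_ghz = 1.2;    # 1200 MHz = 1.2 gigacycles per second

my $lines_per_second = $lpgc * $clock_ghz;
print "$lines_per_second lines/second\n";   # 1200 lines/second
```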
873
874The first working version of the tokenizer ran at only 350 LPGC, so to
875tokenize a typical large module such as L<ExtUtils::MakeMaker> took
87610-15 seconds. This sluggishness made it impractical for many uses.
877
878So in the current parser, there are multiple layers of optimisation
879very carefully built into the basic design. This has brought the tokenizer
880up to a more reasonable 1000 LPGC, at the expense of making the code
881quite a bit twistier.
882
883=head2 Making It Faster - Whole Line Classification
884
885The first step in the optimisation process was to add a new handler to
886enable several of the more basic classes (whitespace, comments) to be
887parsed a line at a time. At the start of each line, a
888special optional handler (only supported by a few classes) is called to
889check and see if the entire line can be parsed in one go.
890
891This is used mainly to handle things like POD, comments, empty lines,
892and a few other minor special cases.
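A sketch of the idea follows. The real hooks are the C<__TOKENIZER__on_line_start> methods on a few C<PPI::Token::*> classes; the function name and patterns here are illustrative stand-ins, not PPI's:

```perl
#!/usr/bin/perl
# Illustrative whole-line classification: before doing any
# char-by-char work, a few cheap regexes try to consume the
# entire line in one go.
use strict;
use warnings;

sub classify_line {
    my ($line) = @_;
    return 'Whitespace' if $line =~ /^\s*$/;   # empty line
    return 'Comment'    if $line =~ /^\s*#/;   # full-line comment
    return 'Pod'        if $line =~ /^=\w+/;   # POD directive
    return undef;                              # fall back to char handlers
}

print classify_line("# a comment\n"), "\n";   # Comment
print classify_line("=head1 NAME\n"), "\n";   # Pod
```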
893
894=head2 Making It Faster - Inlining
895
896The second stage of the optimisation involved inlining a small
897number of critical methods that were repeated an extremely high number
898of times. Profiling suggested that there were about 1,000,000 individual
899method calls per gigacycle, and by cutting these by two thirds a significant
900speed improvement was gained, in the order of about 50%.
901
902You may notice that many methods in the C<PPI::Tokenizer> code look
903very nested and longhand. This is primarily due to this inlining.
904
905At around this time, some statistics code that existed in the early
906versions of the parser was also removed, as it was determined that
907it was consuming around 15% of the CPU for the entire parser, while
908making the core more complicated.
909
910A judgment call was made that with the difficulties likely to be
911encountered with future planned enhancements, and given the relatively
912high cost involved, the statistics features would be removed from the
913Tokenizer.
914
915=head2 Making It Faster - Quote Engine
916
917Once inlining had reached diminishing returns, it became obvious from
918the profiling results that a huge amount of time was being spent
919stepping a char at a time through long, simple and "syntactically boring"
920code such as comments and strings.
921
922The existing regex engine was expanded to also encompass quotes and
923other quote-like things, and a special abstract base class was added
924that provided a number of specialised parsing methods that would "scan
925ahead", looking ahead to find the end of a string, and updating
926the cursor to leave it in a valid position for the next call.
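The scan-ahead trick can be sketched like this. It is a simplification of what the Quote Engine's base class does; none of these names are PPI's, and the real methods handle multi-line strings and more delimiter types:

```perl
#!/usr/bin/perl
# Instead of stepping char by char through a string body, jump
# straight to the closing delimiter with one regex match and
# report where the cursor should resume.
use strict;
use warnings;

sub scan_for_quote_end {
    my ( $line, $pos, $delim ) = @_;
    pos($line) = $pos;
    # Consume escaped characters, or anything that is neither the
    # delimiter nor a backslash, then the closing delimiter itself.
    if ( $line =~ /\G((?:\\.|[^\Q$delim\E\\])*\Q$delim\E)/gs ) {
        return ( $1, $pos + length $1 );   # (consumed text, new cursor)
    }
    return;   # closing delimiter not on this line
}

my $code = q{my $s = 'it\'s here'; next();};
my ( $body, $cursor ) = scan_for_quote_end( $code, 9, "'" );
print "consumed <$body>, cursor at $cursor\n";
```

The escaped-character alternative is what lets the scan skip over C<\'> inside the string without mistaking it for the closing quote.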
927
928This is also the point at which the number of character handler calls began
929to greatly differ from the number of characters. But it has been done
930in a way that allows the parser to retain the power of the original
931version at the critical points, while skipping through the "boring bits"
932as needed for additional speed.
933
934The addition of this feature allowed the tokenizer to exceed 1000 LPGC
935for the first time.
936
937=head2 Making It Faster - The "Complete" Mechanism
938
939As it became evident that great speed increases were available by using
940this "skipping ahead" mechanism, a new handler method was added that
941explicitly handles the parsing of an entire token, where the structure
942of the token is relatively simple. Tokens such as symbols fit this case,
943as once we are past the initial sigil and word char, we know that we
944can skip ahead and "complete" the rest of the token much more easily.
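For instance, a symbol's tail can be completed with a single regex once the sigil has identified the token type. This is illustrative only; the function name below is invented, and PPI's real complete-handlers live in the individual token classes:

```perl
#!/usr/bin/perl
# Illustrative "complete" handler: having seen the sigil of a
# symbol, grab the whole identifier in one regex instead of
# walking it char by char.
use strict;
use warnings;

sub complete_symbol {
    my ( $line, $pos ) = @_;
    pos($line) = $pos;
    $line =~ /\G([\$\@\%][\w:]+)/g or return;
    return ( $1, $pos + length $1 );   # (token content, new cursor)
}

my ( $content, $cursor ) = complete_symbol( 'my $foo::bar = 1;', 3 );
print "$content ends at $cursor\n";   # $foo::bar ends at 12
```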
945
946A number of these have been added for most or possibly all of the common
947cases, with most of these "complete" handlers implemented using regular
948expressions.
949
950In fact, so many have been added that at this point, you could arguably
951reclassify the tokenizer as a "hybrid regex, char-by-char heuristic
952tokenizer". More tokens are now consumed in "complete" methods in a
953typical program than are handled by the normal char-by-char methods.
954
955Many of these complete-handlers were implemented during the writing
956of the Lexer, and this has allowed the full parser to maintain around
9571000 LPGC despite the increasing weight of the Lexer.
958
959=head2 Making It Faster - Porting To C (In Progress)
960
961While it would be extraordinarily difficult to port all of the Tokenizer
962to C, work has started on a L<PPI::XS> "accelerator" package which acts as
963a separate and automatically-detected add-on to the main PPI package.
964
965L<PPI::XS> implements faster versions of a variety of functions scattered
966over the entire PPI codebase, from the Tokenizer Core, Quote Engine, and
967various other places, and implements them identically in XS/C.
968
969In particular, the skip-ahead methods from the Quote Engine would appear
970to be extremely amenable to being done in C, and a number of other
971functions could be cherry-picked one at a time and implemented in C.
972
973Each method is heavily tested to ensure that the functionality is
974identical, and a versioning mechanism is included to ensure that if a
975function gets out of sync, L<PPI::XS> will degrade gracefully and just
976not replace that single method.
977
978=head1 TO DO
979
980- Add an option to reset or seek the token stream...
981
982- Implement more Tokenizer functions in L<PPI::XS>
983
984=head1 SUPPORT
985
986See the L<support section|PPI/SUPPORT> in the main module.
987
988=head1 AUTHOR
989
990Adam Kennedy E<lt>adamk@cpan.orgE<gt>
991
992=head1 COPYRIGHT
993
994Copyright 2001 - 2011 Adam Kennedy.
995
996This program is free software; you can redistribute
997it and/or modify it under the same terms as Perl itself.
998
999The full text of the license can be found in the
1000LICENSE file included with this module.
1001
1002=cut
 
# spent 4.15ms within PPI::Tokenizer::CORE:match which was called 15534 times, avg 267ns/call: # 15534 times (4.15ms+0s) by List::MoreUtils::any at line 211, avg 267ns/call
sub PPI::Tokenizer::CORE:match; # opcode
# spent 162ms within PPI::Tokenizer::CORE:subst which was called 144 times, avg 1.12ms/call: # 144 times (162ms+0s) by PPI::Tokenizer::new at line 186, avg 1.12ms/call
sub PPI::Tokenizer::CORE:subst; # opcode