Around 6 years ago, I benchmarked my own parser combinators in OCaml and found that they were ~5× slower than the parser generators available at the time. I recently revisited the subject, benchmarking Haskell's Parsec against a simple hand-rolled precedence-climbing parser written in F#, and was surprised to find the F# to be 25× faster than the Haskell.
Here's the Haskell code I used to read a large mathematical expression from a file, parse it, and evaluate it:
import Control.Applicative
import Text.Parsec hiding ((<|>))
expr = chainl1 term ((+) <$ char '+' <|> (-) <$ char '-')
term = chainl1 fact ((*) <$ char '*' <|> div <$ char '/')
fact = read <$> many1 digit <|> char '(' *> expr <* char ')'
eval :: String -> Int
eval = either (error . show) id . parse expr "" . filter (/= ' ')
main :: IO ()
main = do
  file <- readFile "expr"
  putStr $ show $ eval file
  putStr "\n"
And here's my self-contained precedence-climbing parser in F#:
let rec (|Expr|) (P(f, xs)) = Expr(loop (' ', f, xs))
and shift oop f op (P(g, xs)) =
  let h, xs = loop (op, g, xs)
  loop (oop, f h, xs)
and loop = function
  | ' ' as oop, f, ('+' | '-' as op)::P(g, xs)
  | (' ' | '+' | '-' as oop), f, ('*' | '/' as op)::P(g, xs)
  | oop, f, ('^' as op)::P(g, xs) ->
      let h, xs = loop (op, g, xs)
      let op = match op with
               | '+' -> (+) | '-' -> (-) | '*' -> (*) | '/' -> (/) | _ -> pown
      loop (oop, op f h, xs)
  | _, f, xs -> f, xs
and (|P|) = function
  | '-'::P(f, xs) ->
      let f, xs = loop ('~', f, xs)
      P(-f, xs)
  | '('::Expr(f, ')'::xs) -> P(f, xs)
  | c::xs when '0' <= c && c <= '9' -> P(int(string c), xs)
My impression is that even state-of-the-art parser combinators waste a lot of time backtracking. Is that correct? If so, is it possible to write parser combinators that generate state machines to obtain competitive performance, or is code generation necessary?
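To pin down what I mean by backtracking, here is a minimal sketch using only the standard `Text.Parsec` API: `<|>` commits to the left branch as soon as it consumes input, and `try` rewinds the consumed input on failure, re-enabling the alternative — that rewinding and re-scanning is the cost I have in mind.

```haskell
import Text.Parsec
import Text.Parsec.String (Parser)

-- Parsec's <|> is predictive: once the left branch consumes input,
-- the right branch is abandoned. On "lemma", string "let" eats "le"
-- before failing on 'm', so this parser fails outright.
committed :: Parser String
committed = string "let" <|> string "lem"

-- 'try' rewinds consumed input on failure, so the alternative is
-- attempted from the original position -- this is the backtracking.
rewound :: Parser String
rewound = try (string "let") <|> string "lem"

main :: IO ()
main = do
  print $ parse committed "" "lemma"  -- prints a Left parse error
  print $ parse rewound "" "lemma"    -- Right "lem"
```

Heavy use of `try` over long common prefixes is the classic way a combinator parser ends up re-scanning its input many times.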