
# taggy

A simple, tolerant and efficient attoparsec-based HTML/XML parser, written with HTML primarily in mind.

Currently a work in progress, but it already handles a fairly decent range of common websites; I haven't yet found a page on which the current parser chokes. Performance is quite promising.

## Using taggy

taggy has a `taggyWith` function for working on HTML à la tagsoup:

```haskell
taggyWith :: Bool -> LT.Text -> [Tag]
```

The `Bool` lets you specify whether you want to convert special HTML entities to their corresponding Unicode characters; `True` means "yes, convert them please". This function takes lazy `Text` as input.
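As a minimal sketch of the flag's effect (assuming `taggyWith` is in scope from `Text.Taggy`, and using `OverloadedStrings` for the lazy `Text` literal):

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Text.Taggy (taggyWith)

main :: IO ()
main = do
  -- True: special entities such as "&amp;" are decoded to their Unicode characters
  mapM_ print (taggyWith True  "<p>fish &amp; chips</p>")
  -- False: the raw "&amp;" text is kept as-is
  mapM_ print (taggyWith False "<p>fish &amp; chips</p>")
```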

Or you can use the raw `run` function, which returns a good old `Result` from attoparsec:

```haskell
run :: Bool -> LT.Text -> AttoLT.Result [Tag]
```

For example, if you want to read the HTML code from a file and print one tag per line, you could do:

```haskell
import Data.Attoparsec.Text.Lazy (eitherResult)
import qualified Data.Text.Lazy.IO as T
import Text.Taggy (run)

taggy :: FilePath -> IO ()
taggy fp = do
  content <- T.readFile fp
  either (\s -> putStrLn $ "couldn't parse: " ++ s)
         (mapM_ print)
         (eitherResult $ run True content)
```

But taggy also provides support for DOM-style documents, computed from the list of tags produced by `taggyWith`.

If you fire up ghci with taggy loaded:

```shell
$ cabal repl # if working with a copy of this repo
```

You can see `domify` in action:

```haskell
λ> :set -XOverloadedStrings
λ> head . domify . taggyWith False $ "<html><head></head><body>yo</body></html>"
NodeElement (Element {eltName = "html", eltAttrs = fromList [], eltChildren = [NodeElement (Element {eltName = "head", eltAttrs = fromList [], eltChildren = []}),NodeElement (Element {eltName = "body", eltAttrs = fromList [], eltChildren = [NodeContent "yo"]})]})
```

Note that the `Text.Taggy.DOM` module contains a function, `parseDOM`, that composes `domify` and `taggyWith` for you.
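The ghci session above can be written as a standalone program via `parseDOM`; this is a sketch assuming it has the type `parseDOM :: Bool -> LT.Text -> [Node]`, where the `Bool` is the same entity-conversion flag taken by `taggyWith`:

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Text.Taggy.DOM (parseDOM)

main :: IO ()
main =
  -- Parse straight to DOM nodes, skipping the explicit domify . taggyWith step
  mapM_ print (parseDOM False "<html><head></head><body>yo</body></html>")
```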

## Lenses for taggy

We (well, mostly Vikram Virma to be honest) have put up a companion taggy-lens library.

## Haddocks

I try to keep an up-to-date copy of the docs on my server: