Created May 26, 2019 20:20
Quick and dirty script to split the huge XML file containing all the Wikipedia pages into individual files. No actual XML parsing is done: the script treats the input as plain text and reads it line by line, so it needs very little memory to run.
#!/usr/bin/env python3
import os
import sys
import shutil

infile = sys.argv[1]  # path to the Wikipedia XML dump
outdir = sys.argv[2]  # directory to write the per-page files into

# Start from a clean output directory.
shutil.rmtree(outdir, ignore_errors=True)
os.mkdir(outdir)

with open(infile) as i:
    p = False  # True while inside a <page>...</page> block
    n = 0      # running page counter, used as the output file name
    l = i.readline()
    while l:
        if l.strip() == '<page>':
            # A new page starts: open the next numbered output file.
            p = True
            n += 1
            o = open(f'{outdir}{os.sep}{n:010d}.xml', 'w')
            print(f'\r{n:010d}', end='', flush=True)
        elif l.strip() == '</page>':
            # The page ends: write the closing tag and close the file.
            p = False
            o.write(l)
            o.close()
        if p:
            o.write(l)
        l = i.readline()
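
For reference, a hypothetical invocation, assuming the script is saved as split_pages.py and the dump has already been decompressed (both file names are placeholders, not part of the script):

    python3 split_pages.py enwiki-latest-pages-articles.xml pages

Each <page>...</page> block from the dump ends up in its own zero-padded .xml file under pages/, e.g. 0000000001.xml, 0000000002.xml, and so on.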