then re-add it with the same manually-added
bit set in the new spider request, otherwise
the seed url might not get spidered since it
might not match the regex.
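
a rough sketch of carrying that bit over,
assuming a SpiderRequest-style struct with a
manually-added flag (field and function names
here are illustrative, not the exact ones in
Spider.h):

    // minimal sketch: rebuild a spider request for a seed url while
    // preserving its manually-added bit
    struct SpiderRequest {
        long long m_uh48;      // 48-bit hash of the url
        long      m_firstIp;   // ip the url was first seen under
        bool      m_isAddUrl;  // set when the url was added by hand
    };

    SpiderRequest rebuildRequest ( const SpiderRequest &oldReq ) {
        SpiderRequest newReq;
        newReq.m_uh48    = oldReq.m_uh48;
        newReq.m_firstIp = oldReq.m_firstIp;
        // carry over the manually-added bit; without it the seed url
        // might fail the url-filter regex and never get spidered
        newReq.m_isAddUrl = oldReq.m_isAddUrl;
        return newReq;
    }
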
not just uh48, because using fake ips results
in the same url getting crawled twice, since
it comes in under a different "firstip". so we
should include "firstip" in the lock as well
to prevent a double round increment. see the
comment in Spider.cpp to this effect.
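
a minimal sketch of folding "firstip" into the
lock key along with uh48. the function name and
mixing formula are illustrative only, not the
actual lock-key code in Spider.cpp:

    #include <stdint.h>

    // build a lock key from both firstip and uh48 so the lock is
    // specific to the (firstip, uh48) pair rather than uh48 alone
    static int64_t makeLockKey ( int32_t firstIp , int64_t uh48 ) {
        // keep the low 48 bits of the url hash and xor in the ip
        int64_t key = uh48 & 0x0000ffffffffffffLL;
        key ^= ( (int64_t)(uint32_t)firstIp ) << 16;
        return key;
    }
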
specify the working directory for each host
entry. then we can use the exact same hosts.conf
file for each gb instance rather than having to
change the single "working-dir:" directive for
each instance, in the case where they each have
a different working directory.
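
a hypothetical sketch of what per-host working
directories in a shared hosts.conf might look
like (the column layout is an assumption, not
the current format):

    # shared hosts.conf used verbatim by every gb instance
    num-mirrors: 0

    # hostid  ip          per-host working dir
    0         10.5.0.10   /var/gigablast/host0/
    1         10.5.0.11   /var/gigablast/host1/
    2         10.5.0.12   /var/gigablast/host2/
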