

Information providers can help robots traverse a document tree without
encountering problems, and can help search tools find meaningful links.
- Browsers store URLs and titles in the hotlist.
A usable hotlist depends on having documents with meaningful titles.
Several search-engines also rank title words higher than words in a
document's body. Documents with meaningful titles are more likely to
be found.
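As a sketch (the title text is invented for illustration), a meaningful title is placed in the document's HEAD:

```
<!-- A hypothetical document head; the title describes the page
     instead of a generic word like "Homepage". -->
<HEAD>
<TITLE>Paul De Bra: Hypermedia Structures and Systems</TITLE>
</HEAD>
```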
- Search-engines that use URLs can be helped by providing meaningful
URLs. "debra.html" is better than "home.html" or "homepage.html".
- Smart robots or engines also use the link text (which browsers present
in a different color, or underlined). The link text itself should
describe the destination:
"Here is a link to the homepage of Paul De Bra"
(where "homepage of Paul De Bra" is the link text) is better than
"The homepage of Paul De Bra is located here."
(where the only link text is the meaningless word "here").
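In HTML, the two phrasings look as follows (the filename "debra.html" is taken from the earlier example; the surrounding markup is only a sketch):

```
<!-- Descriptive link text (better): -->
Here is a link to the
<A HREF="debra.html">homepage of Paul De Bra</A>.

<!-- Meaningless link text (worse): -->
The homepage of Paul De Bra is located
<A HREF="debra.html">here</A>.
```

A robot or search engine indexing the first phrase associates the words "homepage of Paul De Bra" with the target document; from the second it learns nothing.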
- Robots cannot try (all) coordinates in clickable images in order to
find out which documents can be reached through these images.
Textual alternatives are not just helpful for users with text-only browsers,
but give robots easy access to the hidden links.
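A sketch of a clickable image accompanied by a textual alternative (all filenames and the imagemap path are hypothetical):

```
<!-- The clickable image: only users with graphical browsers,
     not robots, can follow its coordinate-based links. -->
<A HREF="/cgi-bin/imagemap/nav"><IMG SRC="nav.gif" ISMAP
ALT="navigation map"></A>
<P>
<!-- The textual alternative: plain links that robots and
     text-only browsers can follow. -->
[ <A HREF="debra.html">Homepage</A> |
  <A HREF="courses.html">Courses</A> |
  <A HREF="pubs.html">Publications</A> ]
```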
- Robots cannot try to fill out forms.
Access to public information should not be hidden behind forms.
- Not all robots manage to avoid "black holes" (like the
time example).
The safest way to avoid these problems is not to generate black holes.
The second-best way is to tell robots to avoid them by means of the
/robots.txt file.
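A minimal /robots.txt, following the robots exclusion convention (the path "/time/" is an assumed location for dynamically generated pages such as the time example):

```
# Ask all robots to stay out of the subtree containing
# the dynamically generated "black hole" pages.
User-agent: *
Disallow: /time/
```

The file must be placed at the server's document root; well-behaved robots fetch it before traversing the site and skip the disallowed subtrees.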