<P> In 2001, Tim Berners-Lee participated in a discussion of the Semantic Web, in which it was suggested that intelligent software 'agents' might one day automatically trawl the Web and find, filter and correlate previously unrelated, published facts for the benefit of end users. Such agents are not commonplace even now, but some of the ideas of Web 2.0, mashups and price comparison websites may be coming close. The main difference between these web application hybrids and Berners-Lee's semantic agents is that the current aggregation and hybridisation of information is usually designed in by web developers, who already know the web locations and the API semantics of the specific data they wish to mash, compare and combine. </P>
<P> An important type of web agent that does crawl and read web pages automatically, without prior knowledge of what it might find, is the web crawler or search-engine spider. These software agents depend on the semantic clarity of the web pages they find, as they use various techniques and algorithms to read and index millions of web pages a day and provide web users with search facilities. </P>
<P> For search-engine spiders to be able to rate the significance of pieces of text they find in HTML documents, and also for those creating mashups and other hybrids, as well as for more automated agents as they are developed, the semantic structures that exist in HTML need to be widely and uniformly applied to bring out the meaning of published information. </P>
<P> While the true semantic web may depend on complex RDF ontologies and metadata, every HTML document contributes to the meaningfulness of the Web through the correct use of headings, lists, titles and other semantic markup wherever possible. This "plain" use of HTML has been called "Plain Old Semantic HTML" or POSH. The correct use of Web 2.0 'tagging' creates folksonomies that may be equally or even more meaningful to many. HTML5 introduced new semantic elements such as section, article, footer, progress, nav, aside, mark, and time. Overall, the goal of the W3C is to gradually introduce more ways for browsers, developers, and crawlers to distinguish between different types of data, allowing for benefits such as better display on browsers on different devices. </P>
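As a sketch of how several of these HTML5 elements fit together in a "Plain Old Semantic HTML" page (the page content itself is purely illustrative, not taken from any real site):

```html
<!-- Illustrative POSH sketch using the HTML5 semantic elements
     named above; headings, dates and text are hypothetical. -->
<article>
  <header>
    <h1>Example article title</h1>
    <p>Published <time datetime="2001-05-01">May 2001</time></p>
  </header>
  <nav>
    <ul>
      <li><a href="#summary">Summary</a></li>
    </ul>
  </nav>
  <section id="summary">
    <h2>Summary</h2>
    <p>A crawler can <mark>identify</mark> this as a distinct section
       from the markup alone, without relying on visual styling.</p>
  </section>
  <aside>
    <p>A related note, marked as tangential to the main content.</p>
  </aside>
  <footer>
    <p>Author and copyright information for the article.</p>
  </footer>
</article>
```

Compared with a page built entirely from generic div elements, this structure lets browsers, assistive technology and search-engine spiders distinguish navigation, main content and side matter directly from the element names.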
