At least that's the impression the Google folks gave when explaining the way they catalog the internet. The short version is:

"Even after removing...exact duplicates, we saw a trillion unique URLs, and the number of individual web pages out there is growing by several billion pages per day."

How many servers must they be running to catalog all of that?