Usenet Review



Usenet is a worldwide distributed discussion system available on computers. It was developed from the general-purpose UUCP dial-up network architecture. Tom Truscott and Jim Ellis conceived the idea in 1979, and it was established in 1980. Users read and post messages (called articles or posts, and collectively termed news) to one or more categories, known as newsgroups. Usenet resembles a bulletin board system (BBS) in many respects and is the precursor to Internet forums that are widely used today. Discussions are threaded, as with web forums and BBSs, though posts are stored on the server sequentially. The name comes from the term "users network".

One notable difference between a BBS or web forum and Usenet is the absence of a central server and dedicated administrator. Usenet is distributed among a large, constantly changing conglomeration of servers that store and forward messages to one another in so-called news feeds. Individual users may read messages from and post messages to a local server operated by a commercial usenet provider, their Internet service provider, university, employer, or their own server.

Usenet has significant cultural importance in the networked world, having given rise to, or popularized, many widely recognized concepts and terms such as "FAQ", "flame", and "spam".







Introduction

Usenet was conceived in 1979 and publicly established in 1980, at the University of North Carolina at Chapel Hill and Duke University, over a decade before the World Wide Web was developed and the general public received access to the Internet, making it one of the oldest computer network communications systems still in widespread use. It was originally built on the "poor man's ARPANET", employing UUCP as its transport protocol to offer mail and file transfers, as well as announcements through the newly developed news software such as A News. The name Usenet emphasized its creators' hope that the USENIX organization would take an active role in its operation.

The articles that users post to Usenet are organized into topical categories known as newsgroups, which are themselves logically organized into hierarchies of subjects. For instance, sci.math and sci.physics are within the sci.* hierarchy, for science. Or, talk.origins and talk.atheism are in the talk.* hierarchy. When a user subscribes to a newsgroup, the news client software keeps track of which articles that user has read.

In most newsgroups, the majority of the articles are responses to some other article. The set of articles that can be traced to one single non-reply article is called a thread. Most modern newsreaders display the articles arranged into threads and subthreads.
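
As a rough illustration of how a newsreader can arrange articles into threads, the sketch below (Python, with hypothetical article data) follows the References header, which lists the Message-IDs of a post's ancestors; an article with no matching parent starts a new thread.

    from collections import defaultdict

    def print_threads(articles):
        # articles: list of dicts with "message-id", "subject" and "references" keys
        by_id = {a["message-id"]: a for a in articles}
        children = defaultdict(list)
        roots = []
        for a in articles:
            refs = a.get("references", "").split()
            parent = refs[-1] if refs else None   # the last reference is the direct parent
            if parent in by_id:
                children[parent].append(a)
            else:
                roots.append(a)                   # non-reply article: the start of a thread
        def walk(article, depth):
            print("  " * depth + article["subject"])
            for reply in children[article["message-id"]]:
                walk(reply, depth + 1)
        for root in roots:
            walk(root, 0)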

When a user posts an article, it is initially only available on that user's news server. Each news server talks to one or more other servers (its "newsfeeds") and exchanges articles with them. In this fashion, the article is copied from server to server and should eventually reach every server in the network. The later peer-to-peer networks operate on a similar principle, but for Usenet it is normally the sender, rather than the receiver, who initiates transfers. Usenet was designed under conditions when networks were much slower and not always available. Many sites on the original Usenet network would connect only once or twice a day to batch-transfer messages in and out. This is largely because the POTS network was typically used for transfers, and phone charges were lower at night.

The format and transmission of Usenet articles is similar to that of Internet e-mail messages. The difference between the two is that Usenet articles can be read by any user whose news server carries the group to which the message was posted, as opposed to email messages, which have one or more specific recipients.
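
A hypothetical article, sketched below with made-up addresses and Message-IDs, shows the resemblance: the header block and the blank line before the body follow e-mail conventions, with Usenet-specific headers such as Newsgroups, References and Path added.

    From: jane@example.edu (Jane Doe)
    Newsgroups: sci.math,sci.physics
    Subject: Re: Convergence of Fourier series
    Date: Mon, 04 Feb 2011 12:00:00 GMT
    Message-ID: <reply-1@news.example.edu>
    References: <original-1@news.example.org>
    Path: news.example.edu!news.example.org!not-for-mail

    The body follows the blank line, exactly as in an e-mail message.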

Today, Usenet has diminished in importance with respect to Internet forums, blogs and mailing lists. Usenet differs from such media in several ways: Usenet requires no personal registration with the group concerned; information need not be stored on a remote server; archives are always available; and reading the messages requires not a mail or web client, but a news client. The groups in alt.binaries are still widely used for data transfer.




ISPs, news servers, and newsfeeds

Many Internet service providers, and many other Internet sites, operate news servers for their users to access. ISPs that do not operate their own servers directly will often offer their users an account from another provider that specifically operates newsfeeds. In early news implementations, the server and newsreader were a single program suite, running on the same system. Today, one uses separate newsreader client software, a program that resembles an email client but accesses Usenet servers instead. Some clients such as Mozilla Thunderbird and Outlook Express provide both abilities.

Not all ISPs run news servers. A news server is one of the most difficult Internet services to administer well because of the large amount of data involved, small customer base (compared to mainstream Internet services such as email and web access), and a disproportionately high volume of customer support incidents (frequently complaining of missing news articles that are not the ISP's fault). Some ISPs outsource news operation to specialist sites, which will usually appear to a user as though the ISP ran the server itself. Many sites carry a restricted newsfeed, with a limited number of newsgroups. Commonly omitted from such a newsfeed are foreign-language newsgroups and the alt.binaries hierarchy which largely carries software, music, videos and images, and accounts for over 99 percent of article data.

There are also Usenet providers that specialize in offering service to users whose ISPs do not carry news, or that carry a restricted feed.

See also news server operation for an overview of how news systems are implemented.

Newsreaders

Newsgroups are typically accessed with newsreaders: applications that allow users to read and reply to postings in newsgroups. These applications act as clients to one or more news servers. Although historically, Usenet was associated with the Unix operating system developed at AT&T, newsreaders are available for all major operating systems. Modern mail clients or "communication suites" commonly also have an integrated newsreader. Often, however, these integrated clients are of low quality, compared to standalone newsreaders, and incorrectly implement Usenet protocols, standards and conventions. Many of these integrated clients, for example the one in Microsoft's Outlook Express, are disliked by purists because of their misbehavior.
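
For a sense of what a newsreader does behind the scenes, here is a minimal reading session using Python's standard-library nntplib module (present through Python 3.12, removed in 3.13); the server name and group are placeholders for whatever a given provider carries.

    import nntplib

    # Connect on the standard NNTP port (119) and select a group.
    with nntplib.NNTP("news.example.com") as server:
        resp, count, first, last, name = server.group("comp.lang.python")
        print(f"{name}: {count} articles (numbers {first}-{last})")
        # Fetch overview data (subject, author, date, message-id) for the last ten articles.
        resp, overviews = server.over((max(first, last - 9), last))
        for number, fields in overviews:
            print(number, fields.get("subject"))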

With the rise of the World Wide Web (WWW), web front ends (web2news) have become more common. They lower the technical barrier to entry: a reader needs only a web browser, not a separate newsreader application or an NNTP server account. Numerous websites now offer web-based gateways to Usenet groups, although some users have begun filtering out messages posted through certain web interfaces for one reason or another. Google Groups is one such web-based front end, and some web browsers can access Google Groups directly via news: protocol links.

Moderated and unmoderated newsgroups

A minority of newsgroups are moderated, meaning that messages submitted by readers are not distributed directly to Usenet, but instead are emailed to the moderators of the newsgroup for approval. The moderator is to receive submitted articles, review them, and inject approved articles so that they can be properly propagated worldwide. Articles approved by a moderator must bear the Approved: header line. Moderators ensure that the messages that readers see in the newsgroup conform to the charter of the newsgroup, though they are not required to follow any such rules or guidelines. Typically, moderators are appointed in the proposal for the newsgroup, and changes of moderators follow a succession plan.
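
A hypothetical header block for an article accepted into a moderated group might look like the sketch below; the Approved: line, added by the moderator, is what allows servers to propagate the post.

    From: contributor@example.org (A. Contributor)
    Newsgroups: comp.lang.python.announce
    Subject: ANN: example-package 1.0 released
    Message-ID: <announce-42@example.org>
    Approved: moderator@example.net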

Historically, a mod.* hierarchy existed before Usenet reorganization. Now, moderated newsgroups may appear in any hierarchy, typically with .moderated added to the group name.

Usenet newsgroups in the Big-8 hierarchy are created by proposals called a Request for Discussion, or RFD. The RFD is required to have the following information: newsgroup name, checkgroups file entry, and moderated or unmoderated status. If the group is to be moderated, then at least one moderator with a valid email address must be provided. Other information which is beneficial but not required includes: a charter, a rationale, and a moderation policy if the group is to be moderated. Discussion of the new newsgroup proposal follows, and is finished with the members of the Big-8 Management Board making the decision, by vote, to either approve or disapprove the new newsgroup.
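
As a rough sketch of the required and recommended elements listed above (all names and addresses below are hypothetical, and the layout is illustrative rather than the official RFD template):

    Newsgroup:          example.hypothetical.group
    Status:             moderated
    Moderator:          moderator@example.org (hypothetical)
    Checkgroups entry:  example.hypothetical.group   Discussion of a hypothetical topic. (Moderated)
    Charter:            (recommended) what is on-topic and off-topic for the group
    Rationale:          (recommended) why existing groups do not already cover the subject
    Moderation policy:  (recommended for moderated groups) how submissions are reviewed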

Unmoderated newsgroups form the majority of Usenet newsgroups, and messages submitted by readers for unmoderated newsgroups are immediately propagated for everyone to see. The tension between minimal editorial filtering and rapid propagation is one crux of the Usenet community. One little-cited remedy against unwanted propagation is cancelling a propagated message, but few Usenet users issue cancels, and some newsreaders do not offer cancellation commands, in part because article storage expires in relatively short order anyway. Almost all unmoderated Usenet groups have become collections of spam.

Creation of moderated newsgroups often becomes a hot subject of controversy, raising issues regarding censorship and the desire of a subset of users to form an intentional community.

Technical details

Usenet is a set of protocols for generating, storing and retrieving news "articles" (which resemble Internet mail messages) and for exchanging them among a readership which is potentially widely distributed. These protocols most commonly use a flooding algorithm which propagates copies throughout a network of participating servers. Whenever a message reaches a server, that server forwards the message to all its network neighbors that haven't yet seen the article. Only one copy of a message is stored per server, and each server makes it available on demand to the (typically local) readers able to access that server. The collection of Usenet servers thus has a certain peer-to-peer character in that they share resources by exchanging them; however, the granularity of exchange is on a different scale than in a modern peer-to-peer system, and this characteristic excludes the actual users of the system, who connect to the news servers with a typical client-server application, much like an email reader.
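
The propagation logic can be sketched with a toy model in Python (server names are hypothetical); real servers negotiate transfers with NNTP commands such as IHAVE, but the essential idea is the same: keep one copy per Message-ID and offer new articles to every neighbor that has not yet seen them.

    class Server:
        def __init__(self, name):
            self.name = name
            self.peers = []    # neighboring servers (newsfeeds)
            self.store = {}    # Message-ID -> article text

        def receive(self, message_id, article):
            if message_id in self.store:
                return                     # already seen: drop the duplicate, do not re-flood
            self.store[message_id] = article
            for peer in self.peers:
                peer.receive(message_id, article)

    # Three servers peered in a triangle still end up with exactly one copy each.
    a, b, c = Server("a"), Server("b"), Server("c")
    a.peers, b.peers, c.peers = [b, c], [a, c], [a, b]
    a.receive("<post-1@example.net>", "hello, world")
    print(all("<post-1@example.net>" in s.store for s in (a, b, c)))   # True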

RFC 850 was the first formal specification of the messages exchanged by Usenet servers. It was superseded by RFC 1036 and subsequently by RFC 5536 and RFC 5537.

In cases where unsuitable content has been posted, Usenet has support for automated removal of a posting from the whole network by creating a cancel message, although due to a lack of authentication and resultant abuse, this capability is frequently disabled. Copyright holders may still request the manual deletion of infringing material using the provisions of World Intellectual Property Organization treaty implementations, such as the United States Online Copyright Infringement Liability Limitation Act, but this would require giving notice to each individual news server administrator.
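
A cancel request is itself an ordinary article carrying a Control: header, as in the hypothetical example below; servers that honor it delete the referenced post, while many simply ignore such messages because the headers are trivial to forge.

    From: original.poster@example.org
    Newsgroups: misc.test
    Subject: cmsg cancel <original-post-1@example.org>
    Control: cancel <original-post-1@example.org>
    Message-ID: <cancel-1@example.org>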

On the Internet, Usenet is transported via the Network News Transfer Protocol (NNTP) on TCP Port 119 for standard, unprotected connections and on TCP port 563 for SSL encrypted connections which is offered only by a few sites.
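
Using Python's nntplib again as a sketch, the encrypted variant changes only the class and port; the host name is a placeholder, and whether TLS is offered at all depends on the server.

    import nntplib

    # NNTP over TLS; nntplib.NNTP_SSL defaults to port 563.
    with nntplib.NNTP_SSL("news.example.com") as server:
        print(server.getwelcome())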

Organization

The major set of worldwide newsgroups is contained within nine hierarchies, eight of which are operated under consensual guidelines that govern their administration and naming. The current Big Eight are:

  • comp.* - computer-related discussions (comp.software, comp.sys.amiga)
  • humanities.* - fine arts, literature, and philosophy (humanities.classics, humanities.design.misc)
  • misc.* - miscellaneous topics (misc.education, misc.forsale, misc.kids)
  • news.* - discussions and announcements about news (meaning Usenet, not current events) (news.groups, news.admin)
  • rec.* - recreation and entertainment (rec.music, rec.arts.movies)
  • sci.* - science related discussions (sci.psychology, sci.research)
  • soc.* - social discussions (soc.college.org, soc.culture.african)
  • talk.* - talk about various controversial topics (talk.religion, talk.politics, talk.origins)

See also the Great Renaming.

The alt.* hierarchy is not subject to the procedures controlling groups in the Big Eight, and it is as a result less organized. Groups in the alt.* hierarchy tend to be more specialized or specific; for example, there might be a newsgroup under the Big Eight which contains discussions about children's books, but a group in the alt.* hierarchy may be dedicated to one specific author of children's books. Binaries are posted in alt.binaries.*, making it the largest of all the hierarchies.

Many other hierarchies of newsgroups are distributed alongside these. Regional and language-specific hierarchies such as japan.*, malta.* and ne.* serve specific countries and regions such as Japan, Malta and New England. Companies and projects administer their own hierarchies to discuss their products and offer community technical support, such as the historical gnu.* hierarchy from the Free Software Foundation. Microsoft closed its news server in June 2010 and now provides support for its products through web forums. Some users prefer to use the term "Usenet" to refer only to the Big Eight hierarchies; others include alt.* as well. The more general term "netnews" incorporates the entire medium, including private organizational news systems.

Informal sub-hierarchy conventions also exist. *.answers groups are typically moderated, cross-posted groups for FAQs: an FAQ is posted in its home group and cross-posted to the *.answers group at the head of the hierarchy, which some see as a way of distilling the information in that newsgroup. Some subgroups are recursive, to the point of some silliness in alt.*.

Binary content

Usenet was originally created to distribute text content encoded in the 7-bit ASCII character set. With the help of programs that encode 8-bit values into ASCII, it became practical to distribute binary files as content. Binary posts, due to their size and often-dubious copyright status, were in time restricted to specific newsgroups, making it easier for administrators to allow or disallow the traffic.
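
The kind of transformation involved can be illustrated with Python's standard binascii module, whose b2a_uu function implements the line encoding used by uuencode (the surrounding "begin"/"end" framing is omitted here):

    import binascii

    data = bytes(range(16))                 # some arbitrary 8-bit binary data
    line = binascii.b2a_uu(data)            # one printable, ASCII-safe uuencoded line
    print(line)
    print(binascii.a2b_uu(line) == data)    # True: decoding restores the original bytes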

The oldest widely used encoding method for binary content is uuencode, from the Unix UUCP package. In the late 1980s, Usenet articles were often limited to 60,000 characters, and larger hard limits exist today. Files are therefore commonly split into sections that require reassembly by the reader.
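
A minimal sketch of the splitting step follows (the 500,000-byte part size is an arbitrary choice for illustration; real posting tools pick their own limits and number the parts in the Subject line so readers can reassemble them in order):

    def split_file(path, part_size=500_000):
        """Read a file and return it as a list of article-sized byte chunks."""
        parts = []
        with open(path, "rb") as f:
            while chunk := f.read(part_size):
                parts.append(chunk)
        return parts

    def reassemble(parts, out_path):
        """Concatenate the chunks, in order, back into the original file."""
        with open(out_path, "wb") as f:
            for chunk in parts:
                f.write(chunk)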

MIME header extensions, together with the Base64 and Quoted-Printable encodings, enabled a new generation of binary transport. In practice, MIME has seen increased adoption in text messages, but it is avoided for most binary attachments. Some operating systems that attach metadata to files use specialized encoding formats; for Mac OS, both BinHex and special MIME types are used.

Other lesser known encoding systems that may have been used at one time were BTOA, XX encoding, BOO, and USR encoding.

In an attempt to reduce file transfer times, an informal file encoding known as yEnc was introduced in 2001. It achieves about a 30% reduction in data transferred by assuming that most 8-bit characters can safely be transferred across the network without first encoding into the 7-bit ASCII space.
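
The core of the encoding can be sketched in a few lines of Python (line wrapping, the =ybegin/=yend framing and the CRC trailer are omitted): each byte is shifted by 42 modulo 256, and only the handful of values that would disturb NNTP transport are escaped, which is why the overhead stays far below the roughly 33% added by Base64.

    CRITICAL = {0x00, 0x0A, 0x0D, 0x3D}          # NUL, LF, CR and the escape character '='

    def yenc_encode(data: bytes) -> bytes:
        out = bytearray()
        for byte in data:
            value = (byte + 42) % 256
            if value in CRITICAL:
                out.append(0x3D)                 # emit '=' and shift the escaped byte
                value = (value + 64) % 256
            out.append(value)
        return bytes(out)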

The most common method of uploading large binary posts to Usenet is to convert the files into RAR archives and create Parchive files for them. Parity files are used to recreate missing data when not every part of the files reaches a server.
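
Parchive (PAR2) recovery volumes use Reed-Solomon coding; the simplified XOR sketch below only illustrates the principle that one extra parity block of the same size can rebuild any single missing part.

    def xor_parity(parts):
        """Compute a parity block as the byte-wise XOR of equally sized parts."""
        parity = bytearray(len(parts[0]))
        for part in parts:
            for i, b in enumerate(part):
                parity[i] ^= b
        return bytes(parity)

    def recover_missing(surviving_parts, parity):
        """XOR the parity block with every surviving part to rebuild the missing one."""
        missing = bytearray(parity)
        for part in surviving_parts:
            for i, b in enumerate(part):
                missing[i] ^= b
        return bytes(missing)

    parts = [b"AAAA", b"BBBB", b"CCCC"]
    parity = xor_parity(parts)
    print(recover_missing([parts[0], parts[2]], parity) == parts[1])   # True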

Binary retention time

Each news server generally allocates a certain amount of storage space for post content in each newsgroup. When this storage has been filled, each time a new post arrives, old posts are deleted to make room for the new content. If the network bandwidth available to a server is high but the storage allocation is small, it is possible for a huge flood of incoming content to overflow the allocation and push out everything that was in the group before it. If the flood is large enough, the beginning of the flood will begin to be deleted even before the last part of the flood has been posted.

Binary newsgroups are only able to function reliably if there is sufficient storage allocated to a group to allow readers enough time to download all parts of a binary posting before it is flushed out of the group's storage allocation. Flooding was at one time used to counter the posting of undesired content: the newsgroup would be flooded with random garbage posts in sufficient quantity to push out all the content to be suppressed. This has since been countered by service providers allocating enough storage to retain everything posted each day, including such spam floods, without deleting anything.

The average length of time that posts are able to stay in the group before being deleted is commonly called the retention time. Generally the larger Usenet servers have enough capacity to archive several years of binary content even when flooded with new data at the maximum daily speed available. A good binaries service provider must not only accommodate users of fast connections (3 megabit) but also users of slow connections (256 kilobit or less) who need more time to download content over a period of several days or weeks.

Major news service providers (NSPs) have a retention time of more than four years, which amounts to more than 33 petabytes (33,000 terabytes) of storage.
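
Taking the article's own figures at face value, a quick back-of-the-envelope check shows what average daily feed that amount of storage implies:

    years = 4
    storage_tb = 33_000                    # 33 petabytes expressed in terabytes
    daily_feed_tb = storage_tb / (years * 365)
    print(round(daily_feed_tb, 1))         # ~22.6 TB of new data per day, on average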

In part because of such long retention times, as well as growing Internet upload speeds, Usenet is also used by individual users to store backup data, a practice called Usenet backup, or uBackup. While commercial providers offer easier-to-use online backup services, storing data on Usenet is free of charge (although access to Usenet itself may not be). The method requires the user to manually select, prepare and upload the data. Because anyone can potentially download the backup files, the data is typically encrypted. After the files are uploaded, the uploader has no further control over them; the files are automatically copied to all Usenet providers, so multiple copies end up spread over different geographical locations around the world, which is desirable in a backup scheme.

Legal issues

While binary newsgroups can be used to distribute completely legal user-created works, open-source software, and public domain material, some binary groups are used to illegally distribute commercial software, copyrighted media, and obscene material.

ISP-operated Usenet servers frequently block access to all alt.binaries.* groups to both reduce network traffic and to avoid related legal issues. Commercial Usenet service providers claim to operate as a telecommunications service, and assert that they are not responsible for the user-posted binary content transferred via their equipment. In the United States, Usenet providers can qualify for protection under the DMCA Safe Harbor regulations, provided that they establish a mechanism to comply with and respond to takedown notices from copyright holders.

Removal of copyrighted content from the entire Usenet network is a nearly impossible task, due to the rapid propagation between servers and the retention done by each server. Petitioning a Usenet provider for removal only removes it from that one server's retention cache, but not any others. It is possible for a special post cancellation message to be distributed to remove it from all servers, but many providers ignore cancel messages by standard policy, because they can be easily falsified and submitted by anyone. For a takedown petition to be most effective across the whole network, it would have to be issued to the origin server to which the content has been posted, before it has been propagated to other servers. Removal of the content at this early stage would prevent further propagation, but with modern high speed links, content can be propagated as fast as it arrives, allowing no time for content review and takedown issuance by copyright holders.

Establishing the identity of the person posting illegal content is equally difficult due to the trust-based design of the network. Like SMTP email, servers generally assume the header and origin information in a post is true and accurate. However, as in SMTP email, Usenet post headers are easily falsified so as to obscure the true identity and location of the message source. In this manner, Usenet is significantly different from modern P2P services; most P2P users distributing content are typically immediately identifiable to all other users by their network address, but the origin information for a Usenet posting can be completely obscured and unobtainable once it has propagated past the original server.

Also unlike modern P2P services, the identity of the downloaders is hidden from view. On P2P services a downloader is identifiable to all others by their network address. On Usenet, the downloader connects directly to a server, and only the server knows the address of who is connecting to it. Some Usenet providers do keep usage logs, but not all make this logged information casually available to outside parties such as the Recording Industry Association of America. The existence of anonymizing gateways to Usenet also complicates the tracing of a posting's true origin.




History

Newsgroup experiments first occurred in 1979. Tom Truscott and Jim Ellis of Duke University came up with the idea as a replacement for a local announcement program, and established a link with nearby University of North Carolina using Bourne shell scripts written by Steve Bellovin. The public release of news was in the form of conventional compiled software, written by Steve Daniel and Truscott. In 1980, Usenet was connected to ARPANET through UC Berkeley which had connections to both Usenet and ARPANET. Mark Horton, the graduate student who set up the connection, began "feeding mailing lists from the ARPANET into Usenet" with the "fa" ("From ARPANET") identifier. Usenet gained 50 member sites in its first year, including Reed College, University of Oklahoma, and Bell Labs, and the number of people using the network increased dramatically; however, it was still a while longer before Usenet users could contribute to ARPANET.

Network

UUCP networks spread quickly due to the lower costs involved, and the ability to use existing leased lines, X.25 links or even ARPANET connections. By 1983, thousands of people participated from more than 500 hosts, mostly universities and Bell Labs sites but also a growing number of Unix-related companies; the number of hosts nearly doubled to 940 in 1984. More than 100 newsgroups existed, more than 20 devoted to Unix and other computer-related topics, and at least a third to recreation. As the mesh of UUCP hosts rapidly expanded, it became desirable to distinguish the Usenet subset from the overall network. A vote was taken at the 1982 USENIX conference to choose a new name. The name Usenet was retained, but it was established that it only applied to news. The name UUCPNET became the common name for the overall network.

In addition to UUCP, early Usenet traffic was also exchanged with Fidonet and other dial-up BBS networks. Widespread use of Usenet by the BBS community was facilitated by the introduction of UUCP feeds made possible by MS-DOS implementations of UUCP, such as UFGATE (UUCP to FidoNet Gateway), FSUUCP and UUPC. In 1986, RFC 977 provided the Network News Transfer Protocol (NNTP) specification for distribution of Usenet articles over TCP/IP as a more flexible alternative to informal Internet transfers of UUCP traffic. Since the Internet boom of the 1990s, almost all Usenet distribution is over NNTP.

Software

Early versions of Usenet used Duke's A News software, designed for one or two articles a day. Matt Glickman and Horton at Berkeley produced an improved version called B News that could handle the rising traffic (about 50 articles a day as of late 1983). With a message format that offered compatibility with Internet mail and improved performance, it became the dominant server software. C News, developed by Geoff Collyer and Henry Spencer at the University of Toronto, was comparable to B News in features but offered considerably faster processing. In the early 1990s, InterNetNews by Rich Salz was developed to take advantage of the continuous message flow made possible by NNTP versus the batched store-and-forward design of UUCP. Since that time INN development has continued, and other news server software has also been developed.

Public venue

Usenet was the first Internet community and the place for many of the most important public developments in the pre-commercial Internet. It was the place where Tim Berners-Lee announced the launch of the World Wide Web, where Linus Torvalds announced the Linux project, and where Marc Andreessen announced the creation of the Mosaic browser and the introduction of the image tag, which revolutionized the World Wide Web by turning it into a graphical medium.

Internet jargon and history

Many jargon terms now in common use on the Internet originated or were popularized on Usenet. Likewise, many conflicts which later spread to the rest of the Internet, such as the ongoing difficulties over spamming, began on Usenet.

"Usenet is like a herd of performing elephants with diarrhea. Massive, difficult to redirect, awe-inspiring, entertaining, and a source of mind-boggling amounts of excrement when you least expect it."

Decline

Sascha Segan of PC Magazine said in 2008 "Usenet has been dying for years". Segan said that some people pointed to the Eternal September in 1993 as the beginning of Usenet's decline. Segan believes that when pornographers and software crackers began putting large files on Usenet by the late 1990s, Usenet disk space and traffic increased correspondingly. Internet service providers questioned why they needed to host space for pornography and unauthorized software. When the State of New York opened an investigation on child pornographers who used Usenet, many ISPs dropped all Usenet access or access to the alt.* hierarchy.

In response, John Biggs of TechCrunch said "As long as there are folks who think a command line is better than a mouse, the original text-only social network will live on".

AOL discontinued Usenet access in 2005. In May 2010, Duke University, whose implementation had kicked off Usenet more than 30 years earlier, decommissioned its Usenet server, citing low usage and rising costs. After 32 years, the Usenet news service link at the University of North Carolina at Chapel Hill (news.unc.edu) was retired on February 4, 2011.




Usenet traffic changes

Over time, the amount of Usenet traffic has steadily increased. As of 2010 the number of all text posts made in all Big-8 newsgroups averaged 1,800 new messages every hour, with an average of 25,000 messages per day. However, these averages are minuscule in comparison to the traffic in the binary groups. Much of this traffic increase reflects not an increase in discrete users or newsgroup discussions, but instead the combination of massive automated spamming and an increase in the use of .binaries newsgroups in which large files are often posted publicly.

In 2008, Verizon Communications, Time Warner Cable and Sprint Nextel signed an agreement with Attorney General of New York Andrew Cuomo to shut down access to sources of child pornography. Time Warner Cable stopped offering access to Usenet. Verizon reduced its access to the "Big 8" hierarchies. Sprint stopped access to the alt.* hierarchies. AT&T stopped access to the alt.binaries.* hierarchies. Cuomo never specifically named Usenet in his anti-child pornography campaign. David DeJean of PC World said that some worry that the ISPs used Cuomo's campaign as an excuse to end portions of Usenet access, as it is costly for the Internet service providers and not in high demand by customers. In 2008, AOL (which no longer offered Usenet access) and the four providers that responded to the Cuomo campaign were the five largest Internet service providers in the United States; together they held more than 50% of the U.S. ISP market share. On June 8, 2009, AT&T announced that it would no longer provide access to the Usenet service as of July 15, 2009.

AOL announced that it would discontinue its integrated Usenet service in early 2005, citing the growing popularity of weblogs, chat forums and on-line conferencing. The AOL community had a tremendous role in popularizing Usenet some 11 years earlier.

In August 2009, Verizon announced that it would discontinue access to Usenet on September 30, 2009. JANET(UK) announced it would discontinue Usenet service, effective July 31, 2010, citing Google Groups as an alternative. Microsoft announced that it would discontinue support for its public newsgroups (msnews.microsoft.com) from June 1, 2010, offering web forums as an alternative.

Primary reasons cited by general ISPs for discontinuing Usenet service include the decline in the volume of actual readers due to competition from blogs, along with cost and liability concerns over the increasing proportion of traffic devoted to file sharing and to spam on unused or discontinued groups.

Some ISPs did not cite pressure from Attorney General Cuomo's aggressive campaign against child pornography as one of their reasons for dropping Usenet feeds from their services. The ISPs Cox and Atlantic Communications resisted the 2008 trend, but both eventually dropped their respective Usenet feeds in 2010.




Archives

Public archives of Usenet articles have existed since the early days of Usenet, such as the system created by Kenneth Almquist in late 1982. Distributed archiving of Usenet posts was suggested in November 1982 by Scott Orshan, who proposed that "Every site should keep all the articles it posted, forever." Also in November of that year, Rick Adams responded to a post asking "Has anyone archived netnews, or does anyone plan to?" by stating that he was, "afraid to admit it, but I started archiving most 'useful' newsgroups as of September 18." In June 1982, Gregory G. Woodbury proposed an "automatic access to archives" system that consisted of "automatic answering of fixed-format messages to a special mail recipient on specified machines."

In 1985, two news archiving systems and one RFC were posted to the Internet. The first system, called keepnews, by Mark M. Swenson of The University of Arizona, was described as "a program that attempts to provide a sane way of extracting and keeping information that comes over Usenet." The main advantage of this system was to allow users to mark articles as worthwhile to retain. The second system, YA News Archiver by Chuq Von Rospach, was similar to keepnews, but was "designed to work with much larger archives where the wonderful quadratic search time feature of the Unix ... becomes a real problem." Von Rospach in early 1985 posted a detailed RFC for "archiving and accessing usenet articles with keyword lookup." This RFC described a program that could "generate and maintain an archive of Usenet articles and allow looking up articles based on the article-id, subject lines, or keywords pulled out of the article itself." Also included was C code for the internal data structure of the system.

The desire to have a fulltext search index of archived news articles is not new either, one such request having been made in April 1991 by Alex Martelli who sought to "build some sort of keyword index for [the news archive]." In early May, Mr. Martelli posted a summary of his responses to Usenet, noting that the "most popular suggestion award must definitely go to 'lq-text' package, by Liam Quin, recently posted in alt.sources."

The huge site http://asstr.org archives and indexes erotic and pornographic stories posted to the Usenet group alt.sex.stories.

Today, the archiving of Usenet has led to a fear of loss of privacy. An archive simplifies ways to profile people. This has partly been countered with the introduction of the X-No-Archive: Yes header, which is itself controversial.

Archives by Google Groups and DejaNews

Web-based archiving of Usenet posts began in 1995 at Deja News with a very large, searchable database. In 2001, this database was acquired by Google.

Google Groups hosts an archive of Usenet posts dating back to May 1981. The earliest posts, which date from May 1981 to June 1991, were donated to Google by the University of Western Ontario with the help of David Wiseman and others, and were originally archived by Henry Spencer at the University of Toronto's Zoology department. The archives for late 1991 through early 1995 were provided by Kent Landfield from the NetNews CD series and Jürgen Christoffel from GMD. The archive of posts from March 1995 onward was started by the company DejaNews (later Deja), which was purchased by Google in February 2001. Google began archiving Usenet posts for itself starting in the second week of August 2000.

Google has been criticized by Vice and Wired contributors as well as former employees for its stewardship of the archive and for breaking its search functionality.

Source of the article: Wikipedia




 
