Human Genome Project

Wikipedia 🌐 Human Genome Project 

ASSOCIATIONS (there are many; we add them as they are reviewed)

Saved Wikipedia (June 03, 2021) - "Human Genome Project" 

Source : [HK007C][GDrive] 

 Mentions : 

The Human Genome Project (HGP) was an international scientific research project with the goal of determining the base pairs that make up human DNA, and of identifying and mapping all of the genes of the human genome from both a physical and a functional standpoint.[1] It remains the world's largest collaborative biological project.[2] Planning started after the idea was picked up in 1984 by the US government; the project formally launched in 1990 and was declared complete on April 14, 2003.[3]

Funding came from the American government through the National Institutes of Health (NIH) as well as numerous other groups from around the world. A parallel project was conducted outside the government by the Celera Corporation, or Celera Genomics, which was formally launched in 1998. Most of the government-sponsored sequencing was performed in twenty universities and research centres in the United States, the United Kingdom, Japan, France, Germany, and China.[4]

The Human Genome Project originally aimed to map the nucleotides contained in a human haploid reference genome (more than three billion). The "genome" of any given individual is unique; mapping the "human genome" involved sequencing a small number of individuals and then assembling to get a complete sequence for each chromosome. Therefore, the finished human genome is a mosaic, not representing any one individual.

Human Genome Project

History

The Human Genome Project was a 13-year-long, publicly funded project initiated in 1990 with the objective of determining the DNA sequence of the entire euchromatic human genome within 15 years.[5]

In May 1985, [Robert Louis Sinsheimer (born 1920)] organized a workshop at the University of California, Santa Cruz, to discuss sequencing the human genome,[6] but for a number of reasons the NIH was uninterested in pursuing the proposal. The following March, the Santa Fe Workshop was organized by [Charles Peter DeLisi (born 1941)] and David Smith of the Department of Energy's Office of Health and Environmental Research (OHER).[7] At the same time [Dr. Renato Dulbecco (born 1914)] proposed whole genome sequencing in an essay in Science.[8] James Watson followed two months later with a workshop held at the Cold Spring Harbor Laboratory. Thus the idea for obtaining a reference sequence had three independent origins: [Robert Louis Sinsheimer (born 1920)], [Dr. Renato Dulbecco (born 1914)] and [Charles Peter DeLisi (born 1941)]. Ultimately it was the actions by DeLisi that launched the project.[9][10][11][12]

The fact that the Santa Fe workshop was motivated and supported by a Federal Agency opened a path, albeit a difficult and tortuous one,[13] for converting the idea into public policy in the United States. In a memo to the Assistant Secretary for Energy Research (Alvin Trivelpiece), [Charles Peter DeLisi (born 1941)], who was then Director of the OHER, outlined a broad plan for the project.[14] This started a long and complex chain of events that led to the approved reprogramming of funds, enabling the OHER to launch the Project in 1986 and to recommend the first line item for the HGP, which appeared in President Reagan's 1988 budget submission[13] and was ultimately approved by Congress. Of particular importance in Congressional approval was the advocacy of New Mexico Senator Pete Domenici, whom DeLisi had befriended.[15] Domenici chaired the Senate Committee on Energy and Natural Resources, as well as the Budget Committee, both of which were key in the DOE budget process. Congress added a comparable amount to the NIH budget, thereby beginning official funding by both agencies.

Alvin Trivelpiece sought and obtained the approval of DeLisi's proposal by Deputy Secretary William Flynn Martin. A chart[16] was used in the spring of 1986 by Trivelpiece, then Director of the Office of Energy Research in the Department of Energy, to brief Martin and Under Secretary Joseph Salgado regarding his intention to reprogram $4 million to initiate the project with the approval of Secretary Herrington. This reprogramming was followed by a line item budget of $16 million in the Reagan Administration’s 1987 budget submission to Congress.[17] It subsequently passed both Houses. The Project was planned for 15 years.[18]

Candidate technologies were already being considered for the proposed undertaking at least as early as 1979; Ronald W. Davis and colleagues of Stanford University submitted a proposal to NIH that year and it was turned down as being too ambitious.[19][20]

In 1990, the two major funding agencies, DOE and NIH, developed a memorandum of understanding in order to coordinate plans and set the clock for the initiation of the Project to 1990.[21] At that time, David Galas was Director of the renamed “Office of Biological and Environmental Research” in the U.S. Department of Energy's Office of Science and James Watson headed the NIH Genome Program. In 1993, Aristides Patrinos succeeded Galas and [Dr. Francis Sellers Collins (born 1950)] succeeded James Watson, assuming the role of overall Project Head as Director of the U.S. National Institutes of Health (NIH) National Center for Human Genome Research (which would later become the National Human Genome Research Institute). A working draft of the genome was announced in 2000 and the papers describing it were published in February 2001. A more complete draft was published in 2003, and genome "finishing" work continued for more than a decade.

The $3 billion project was formally founded in 1990 by the US Department of Energy and the National Institutes of Health, and was expected to take 15 years.[22] In addition to the United States, the international consortium comprised geneticists in the United Kingdom, France, Australia, and China, along with myriad other spontaneous collaborations.[23] The project ended up costing less than expected, at about $2.7 billion (FY 1991).[4] Adjusted for inflation, that is roughly $5 billion (FY 2018).[24][25]

Due to widespread international cooperation and advances in the field of genomics (especially in sequence analysis), as well as major advances in computing technology, a 'rough draft' of the genome was finished in 2000 (announced jointly by U.S. President Bill Clinton and British Prime Minister Tony Blair on June 26, 2000).[26] This first available rough draft assembly of the genome was completed by the Genome Bioinformatics Group at the University of California, Santa Cruz, primarily led by then-graduate student Jim Kent. Ongoing sequencing led to the announcement of the essentially complete genome on April 14, 2003, two years earlier than planned.[27][28] In May 2006, another milestone was passed on the way to completion of the project, when the sequence of the very last chromosome was published in Nature.[29]

The institutions, companies, and laboratories in the Human Genome Program are listed below, according to NIH:[4]

Additionally, beginning in 2000 and continuing for three years in Russia, the Russian Foundation for Basic Research (RFFI) (Russian: Российский фонд фундаментальных исследований (РФФИ)) provided a grant of about 500 thousand rubles to fund genome mapping of Russians (three groups: Vologda-Vyatka (Russian: Вологда-Вятка), Ilmen-Belozersk (Russian: Ильмень-Белозерск), and Valdai (Russian: Валдай)) by the Laboratory of Human Population Genetics of the Medical Genetics Center of the Russian Academy of Medical Sciences (Russian: лаборатории популяционной генетики человека медико-генетического центра Российской академии медицинских наук). Although the top Russian geneticist in 2004 was Sergei Inge-Vechtomov (Russian: Сергей Инге-Вечтомов), the research was headed by Doctor of Biological Sciences Elena Balanovskaya (Russian: Елена Балановская) at the Laboratory of Human Population Genetics in Moscow. Since 2004, Evgeny Ginter has been the scientific supervisor of the Medical Genetics Center in Moscow.[30]

State of completion

The project was not able to sequence all the DNA found in human cells. It sequenced only euchromatic regions of the genome, which make up 92.1% of the human genome. The other regions, called heterochromatic, are found in centromeres and telomeres, and were not sequenced under the project.[31]

The Human Genome Project (HGP) was declared complete in April 2003. An initial rough draft of the human genome was available in June 2000; by February 2001 a working draft had been completed and published, followed by the final sequence mapping of the human genome on April 14, 2003. Although this was reported to cover 99% of the euchromatic human genome with 99.99% accuracy, a major quality assessment of the human genome sequence, published on May 27, 2004, indicated that over 92% of the sampling exceeded 99.99% accuracy, which was within the intended goal.[32]

In March 2009, the Genome Reference Consortium (GRC) released a more accurate version of the human genome, but that still left more than 300 gaps,[33] while 160 such gaps remained in 2015.[34]

In May 2020, the GRC reported 79 "unresolved" gaps,[35] accounting for as much as 5% of the human genome.[36] Months later, the application of new long-range sequencing techniques and of a homozygous cell line in which both copies of each chromosome are identical led to the first telomere-to-telomere, truly complete sequence of a human chromosome, the X chromosome.[37] Work to complete the remaining chromosomes using the same approach is ongoing.[36]

In 2021 it was reported that the Telomere-to-Telomere (T2T) consortium had filled in all of the remaining gaps, producing the first complete human genome with no gaps.[38]

Applications and proposed benefits

The sequencing of the human genome holds benefits for many fields, from molecular medicine to human evolution. The Human Genome Project, through its sequencing of the DNA, can advance medicine and many other areas, including: genotyping of specific viruses to direct appropriate treatment; identification of mutations linked to different forms of cancer; the design of medications and more accurate prediction of their effects; advances in forensic applied sciences; biofuels and other energy applications; agriculture, animal husbandry, and bioprocessing; risk assessment; and bioarcheology, anthropology, and evolution. Another proposed benefit is the commercial development of genomics research related to DNA-based products, a multibillion-dollar industry.

The sequence of the DNA is stored in databases available to anyone on the Internet. The U.S. National Center for Biotechnology Information (and sister organizations in Europe and Japan) house the gene sequence in a database known as GenBank, along with sequences of known and hypothetical genes and proteins. Other organizations, such as the UCSC Genome Browser at the University of California, Santa Cruz,[39] and Ensembl[40] present additional data and annotation and powerful tools for visualizing and searching it. Computer programs have been developed to analyze the data because the data itself is difficult to interpret without such programs. Generally speaking, advances in genome sequencing technology have followed Moore's Law, a concept from computer science which states that integrated circuits can increase in complexity at an exponential rate.[41] This means that the speeds at which whole genomes can be sequenced can increase at a similar rate, as was seen during the development of the above-mentioned Human Genome Project.
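As an illustration of this public availability, here is a minimal sketch that retrieves one FASTA record from GenBank through NCBI's E-utilities efetch endpoint using only the Python standard library. The accession used (NM_007294, a BRCA1 mRNA RefSeq entry) is just an example; the snippet assumes network access and is a sketch rather than a production client.

```python
# Minimal sketch: fetch one FASTA record from GenBank via NCBI E-utilities.
from urllib.parse import urlencode
from urllib.request import urlopen

EFETCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

def fetch_fasta(accession: str) -> str:
    """Return the FASTA text for a nucleotide accession from GenBank."""
    params = urlencode({
        "db": "nucleotide",
        "id": accession,
        "rettype": "fasta",
        "retmode": "text",
    })
    with urlopen(f"{EFETCH}?{params}") as response:
        return response.read().decode("utf-8")

if __name__ == "__main__":
    record = fetch_fasta("NM_007294")  # example accession (BRCA1 mRNA)
    header, *seq_lines = record.splitlines()
    print(header)
    print("sequence length:", sum(len(line) for line in seq_lines))
```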

Techniques and analysis

The process of identifying the boundaries between genes and other features in a raw DNA sequence is called genome annotation and is in the domain of bioinformatics. While expert biologists make the best annotators, their work proceeds slowly, and computer programs are increasingly used to meet the high-throughput demands of genome sequencing projects. Beginning in 2008, a new technology known as RNA-seq was introduced that allowed scientists to directly sequence the messenger RNA in cells. This replaced previous methods of annotation, which relied on the inherent properties of the DNA sequence, with direct measurement, which was much more accurate. Today, annotation of the human genome and other genomes relies primarily on deep sequencing of the transcripts in every human tissue using RNA-seq. These experiments have revealed that over 90% of genes contain at least one and usually several alternative splice variants, in which the exons are combined in different ways to produce 2 or more gene products from the same locus.[42]
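The effect of alternative splicing can be made concrete with a toy sketch (the exon sequences and exon chains below are invented, not real annotation data): the same locus yields two or more mRNA products depending on which exons are retained in each transcript.

```python
# Toy illustration of alternative splicing: one locus, one set of exons,
# several distinct mRNA products depending on which exons are retained.
exons = {
    1: "ATGGCC",   # invented exon sequences
    2: "GTTACA",
    3: "CCGGAT",
    4: "TAA",
}

# Exon chains as they might be inferred from RNA-seq read alignments.
observed_isoforms = [
    (1, 2, 3, 4),  # full-length transcript
    (1, 3, 4),     # exon 2 skipped
    (1, 2, 4),     # exon 3 skipped
]

for chain in observed_isoforms:
    mrna = "".join(exons[i] for i in chain)
    print(chain, "->", mrna)

print("splice variants at this locus:", len(set(observed_isoforms)))
```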

The genome published by the HGP does not represent the sequence of every individual's genome. It is the combined mosaic of a small number of anonymous donors, all of European origin. The HGP genome is a scaffold for future work in identifying differences among individuals. Subsequent projects sequenced the genomes of multiple distinct ethnic groups, though as of today there is still only one "reference genome."[43]

Findings

Key findings of the draft (2001) and complete (2004) genome sequences include:

Accomplishments

The first printout of the human genome to be presented as a series of books, displayed at the Wellcome Collection, London

The human genome has approximately 3.1 billion base pairs.[49] The Human Genome Project was started in 1990 with the goal of sequencing and identifying all base pairs in the human genetic instruction set, finding the genetic roots of disease and then developing treatments. It is considered a megaproject.

The genome was broken into smaller pieces, approximately 150,000 base pairs in length.[50] These pieces were then ligated into a type of vector known as "bacterial artificial chromosomes", or BACs, which are derived from bacterial chromosomes that have been genetically engineered. The vectors containing the genes can be inserted into bacteria, where they are copied by the bacterial DNA replication machinery. Each of these pieces was then sequenced separately as a small "shotgun" project and then assembled. The assembled 150,000-base-pair pieces were then joined to recreate the chromosomes. This is known as the "hierarchical shotgun" approach, because the genome is first broken into relatively large chunks, which are then mapped to chromosomes before being selected for sequencing.[51][52]
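A minimal sketch of the idea behind this hierarchical approach follows, with an invented miniature "BAC chunk" and a naive greedy overlap step standing in for real assembly software: the chunk is shredded into short overlapping reads, which are then stitched back together by their overlaps. Real assemblers must additionally handle sequencing errors, repeats, and coverage gaps.

```python
import random

def shotgun_reads(chunk: str, read_len: int = 15, coverage: int = 10) -> list[str]:
    """Shred a chunk into randomly placed, overlapping 'reads' (toy stand-in for sequencing)."""
    n_reads = max(1, coverage * len(chunk) // read_len)
    starts = [random.randrange(0, len(chunk) - read_len + 1) for _ in range(n_reads)]
    return [chunk[s:s + read_len] for s in starts]

def greedy_assemble(reads: list[str], min_overlap: int = 8) -> str:
    """Naive greedy overlap step: repeatedly merge the pair of contigs with the longest overlap."""
    def overlap(a: str, b: str) -> int:
        for k in range(min(len(a), len(b)), min_overlap - 1, -1):
            if a.endswith(b[:k]):
                return k
        return 0

    contigs = list(set(reads))
    while len(contigs) > 1:
        best_k, best_i, best_j = 0, None, None
        for i, a in enumerate(contigs):
            for j, b in enumerate(contigs):
                if i != j and overlap(a, b) > best_k:
                    best_k, best_i, best_j = overlap(a, b), i, j
        if best_i is None:
            break  # no remaining overlaps: the draft keeps a gap, as real drafts did
        merged = contigs[best_i] + contigs[best_j][best_k:]
        contigs = [c for idx, c in enumerate(contigs) if idx not in (best_i, best_j)] + [merged]
    return max(contigs, key=len)

random.seed(1)
bac_chunk = "".join(random.choice("ACGT") for _ in range(80))  # stand-in for a ~150 kb BAC insert
reads = shotgun_reads(bac_chunk)
contig = greedy_assemble(reads)
print("original :", bac_chunk)
print("assembled:", contig)  # close to the original; gaps appear where coverage or overlaps run out
```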

Funding came from the US government through the National Institutes of Health in the United States, and a UK charity organization, the Wellcome Trust, as well as numerous other groups from around the world. The funding supported a number of large sequencing centers including those at Whitehead Institute, the Wellcome Sanger Institute (then called The Sanger Centre) based at the Wellcome Genome Campus, Washington University in St. Louis, and Baylor College of Medicine.[22][53]

The United Nations Educational, Scientific and Cultural Organization (UNESCO) served as an important channel for the involvement of developing countries in the Human Genome Project.[54]

Public versus private approaches

In 1998, a similar, privately funded quest was launched by the American researcher Craig Venter, and his firm Celera Genomics. Venter was a scientist at the NIH during the early 1990s when the project was initiated. The $300m Celera effort was intended to proceed at a faster pace and at a fraction of the cost of the roughly $3 billion publicly funded project. The Celera approach was able to proceed at a much more rapid rate, and at a lower cost, than the public project in part because it used data made available by the publicly funded project.[45]

Celera used a technique called whole genome shotgun sequencing, employing pairwise end sequencing,[55] which had been used to sequence bacterial genomes of up to six million base pairs in length, but not for anything nearly as large as the three billion base pair human genome.
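To see what the "pairwise end" part adds, here is a toy sketch (invented sequence and coordinates, not Celera's actual pipeline): each fragment yields one read from each end, a roughly known distance apart, so a read pair whose two ends fall in different contigs links those contigs into an ordered, oriented scaffold even though the contigs themselves never overlap.

```python
import random

random.seed(0)
genome = "".join(random.choice("ACGT") for _ in range(60))  # invented stand-in genome

insert_size = 30   # approximate fragment length, known from how the library was made
read_len = 8

def paired_end_reads(start: int) -> tuple[str, str]:
    """Sequence read_len bases inward from both ends of one fragment."""
    fragment = genome[start:start + insert_size]
    return fragment[:read_len], fragment[-read_len:]

# Suppose shotgun assembly produced two contigs with a small unsequenced gap between them:
contig_a = genome[:20]
contig_b = genome[24:]

left, right = paired_end_reads(2)       # one fragment happens to span the gap
print("left read lies in contig A :", left in contig_a)
print("right read lies in contig B:", right in contig_b)
# Because the two reads of a pair are ~insert_size apart, contig A and contig B
# can be ordered, oriented, and separated by an estimated gap (scaffolded)
# even though their sequences never overlap.
```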

Celera initially announced that it would seek patent protection on "only 200–300" genes, but later amended this to seeking "intellectual property protection" on "fully-characterized important structures" amounting to 100–300 targets. The firm eventually filed preliminary ("place-holder") patent applications on 6,500 whole or partial genes. Celera also promised to publish their findings in accordance with the terms of the 1996 "Bermuda Statement", by releasing new data annually (the HGP released its new data daily), although, unlike the publicly funded project, they would not permit free redistribution or scientific use of the data. The publicly funded competitors were compelled to release the first draft of the human genome before Celera for this reason. On July 7, 2000, the UCSC Genome Bioinformatics Group released a first working draft on the web. The scientific community downloaded about 500 GB of information from the UCSC genome server in the first 24 hours of free and unrestricted access.[56]

In March 2000, President Clinton, along with Prime Minister Tony Blair in a dual statement, urged that the genome sequence should have "unencumbered access" to all researchers who wished to research the sequence.[57] The statement sent Celera's stock plummeting and dragged down the biotechnology-heavy Nasdaq. The biotechnology sector lost about $50 billion in market capitalization in two days.

Although the working draft was announced in June 2000, it was not until February 2001 that Celera and the HGP scientists published details of their drafts. Special issues of Nature (which published the publicly funded project's scientific paper)[45] described the methods used to produce the draft sequence and offered analysis of the sequence. These drafts covered about 83% of the genome (90% of the euchromatic regions, with 150,000 gaps and the order and orientation of many segments not yet established). In February 2001, at the time of the joint publications, press releases announced that the project had been completed by both groups. Improved drafts announced in 2003 and 2005 filled in approximately 92% of the sequence.

Genome donors

In the IHGSC international public-sector HGP, researchers collected blood (female) or sperm (male) samples from a large number of donors. Only a few of many collected samples were processed as DNA resources. Thus the donor identities were protected so that neither donors nor scientists could know whose DNA was sequenced. DNA clones from many different libraries were used in the overall project, with most of those libraries being created by Pieter J. de Jong's lab. Much of the sequence (>70%) of the reference genome produced by the public HGP came from a single anonymous male donor from Buffalo, New York (code name RP11; the "RP" refers to Roswell Park Comprehensive Cancer Center).[58][59]

HGP scientists used white blood cells from the blood of two male and two female donors (randomly selected from 20 of each) – each donor yielding a separate DNA library. One of these libraries (RP11) was used considerably more than others, due to quality considerations. One minor technical issue is that male samples contain just over half as much DNA from the sex chromosomes (one X chromosome and one Y chromosome) compared to female samples (which contain two X chromosomes). The other 22 chromosomes (the autosomes) are the same for both sexes.

Although the main sequencing phase of the HGP has been completed, studies of DNA variation continued in the International HapMap Project, whose goal was to identify patterns of single-nucleotide polymorphism (SNP) groups (called haplotypes, or “haps”). The DNA samples for the HapMap came from a total of 270 individuals; Yoruba people in Ibadan, Nigeria; Japanese people in Tokyo; Han Chinese in Beijing; and the French Centre d’Etude du Polymorphisme Humain (CEPH) resource, which consisted of residents of the United States having ancestry from Western and Northern Europe.

In the Celera Genomics private-sector project, DNA from five different individuals was used for sequencing. The lead scientist of Celera Genomics at that time, Craig Venter, later acknowledged (in a public letter to the journal Science) that his DNA was one of 21 samples in the pool, five of which were selected for use.[60][61]

In 2007, a team led by Jonathan Rothberg published James Watson's entire genome, unveiling the six-billion-nucleotide genome of a single individual for the first time.[62]

Developments

With the sequence in hand, the next step was to identify the genetic variants that increase the risk for common diseases like cancer and diabetes.[21][50]

It is anticipated that detailed knowledge of the human genome will provide new avenues for advances in medicine and biotechnology. Clear practical results of the project emerged even before the work was finished. For example, a number of companies, such as Myriad Genetics, started offering easy ways to administer genetic tests that can show predisposition to a variety of illnesses, including breast cancer, hemostasis disorders, cystic fibrosis, liver diseases and many others. Also, the etiologies for cancers, Alzheimer's disease and other areas of clinical interest are considered likely to benefit from genome information and possibly may lead in the long term to significant advances in their management.[63][64]

There are also many tangible benefits for biologists. For example, a researcher investigating a certain form of cancer may have narrowed down their search to a particular gene. By visiting the human genome database on the World Wide Web, this researcher can examine what other scientists have written about this gene, including (potentially) the three-dimensional structure of its product, its function(s), its evolutionary relationships to other human genes, or to genes in mice or yeast or fruit flies, possible detrimental mutations, interactions with other genes, body tissues in which this gene is activated, and diseases associated with this gene, among other data. Further, a deeper understanding of the disease processes at the level of molecular biology may suggest new therapeutic procedures. Given the established importance of DNA in molecular biology and its central role in determining the fundamental operation of cellular processes, it is likely that expanded knowledge in this area will facilitate medical advances in numerous areas of clinical interest that may not have been possible without it.[65]

The analysis of similarities between DNA sequences from different organisms is also opening new avenues in the study of evolution. In many cases, evolutionary questions can now be framed in terms of molecular biology; indeed, many major evolutionary milestones (the emergence of the ribosome and organelles, the development of embryos with body plans, the vertebrate immune system) can be related to the molecular level. Many questions about the similarities and differences between humans and our closest relatives (the primates, and indeed the other mammals) are expected to be illuminated by the data in this project.[63][66]

The project inspired and paved the way for genomic work in other fields, such as agriculture. For example, by studying the genetic composition of Triticum aestivum, the world's most commonly used bread wheat, great insight has been gained into the ways that domestication has impacted the evolution of the plant.[67] Researchers are investigating which loci are most susceptible to manipulation and how this plays out in evolutionary terms. Genetic sequencing has allowed these questions to be addressed for the first time, as specific loci can be compared in wild and domesticated strains of the plant. This will allow for advances in genetic modification in the future, which could yield healthier, disease-resistant wheat crops, among other benefits.

Ethical, legal and social issues

At the onset of the Human Genome Project, several ethical, legal, and social concerns were raised in regard to how increased knowledge of the human genome could be used to discriminate against people. One of the main concerns of most individuals was the fear that both employers and health insurance companies would refuse to hire individuals or refuse to provide insurance to people because of a health concern indicated by someone's genes.[68] In 1996 the United States passed the Health Insurance Portability and Accountability Act (HIPAA) which protects against the unauthorized and non-consensual release of individually identifiable health information to any entity not actively engaged in the provision of healthcare services to a patient.[69] Other nations passed no such protections[citation needed].

Along with identifying all of the approximately 20,000–25,000 genes in the human genome (estimated at between 80,000 and 140,000 at the start of the project), the Human Genome Project also sought to address the ethical, legal, and social issues that were created by the onset of the project.[70] For that, the Ethical, Legal, and Social Implications (ELSI) program was founded in 1990. Five percent of the annual budget was allocated to address the ELSI arising from the project.[22][71] This budget started at approximately $1.57 million in the year 1990, but increased to approximately $18 million in the year 2014.[72]

Whilst the project may offer significant benefits to medicine and scientific research, some authors have emphasized the need to address the potential social consequences of mapping the human genome. "Molecularising disease and their possible cure will have a profound impact on what patients expect from medical help and the new generation of doctors' perception of illness."[73]


Press / Evidence Timeline

2000 (July 12) - MIT News : "Whitehead scientists enjoy genome sequence milestone"

Seema Kumar, Whitehead Institute  /   Saved as PDF : [HE00AM][GDrive] 

Mentioned : Eric Steven Lander (born 1957)  /  Kevin Judd McKernan (born 1973)  / Human Genome Project  /  Celera Genomics Corporation  /  Dr. John Craig Venter (born 1946)  /

Image of saved PDF : [HE00AN][GDrive] 

The Whitehead/MIT Center for Genome Research enjoyed much more than 15 minutes of fame in late June, as the [Human Genome Project] and [Celera Genomics Corporation] announced their first assemblies of the human genome, the genetic blueprint for a human being.

Whitehead was the single largest contributor to the [Human Genome Project], providing roughly a third of all the sequence assembled by the international consortium of 16 laboratories involved.

Whitehead also laid much of the groundwork needed for the project, by scaling up 20-fold and launching the project's final phase -- sequencing the three billion base pairs that make up the human genome. Over the past year or so, Whitehead's sequencing center produced more than one billion base pairs or DNA letters that went toward assembling the "book of life" announced on June 26.

BETTER, FASTER THAN EXPECTED

Production of genome sequence has skyrocketed over the past year, with more than 60 percent of the sequence having been produced in the past six months alone. During this time, the project consortium has been producing 1,000 bases per second of raw sequence -- seven days a week, 24 hours a day.

The consortium's goal for spring 2000 was to produce a "working draft" version of the human sequence, an assembly containing overlapping fragments that cover approximately 90 percent of the genome and that are sequenced in "working draft" form, i.e., with some gaps and ambiguities. The consortium's ultimate goal is to produce a completely "finished" sequence, i.e. one with no gaps and 99.99 percent accuracy. The target date for this ultimate goal had been 2003, but the final, stand-the-test-of-time sequence will likely be produced considerably ahead of that schedule.

The Human Genome Project consortium centers in six countries have produced far more sequence data than expected (more than 22.1 billion bases of raw sequence data, comprising overlapping fragments totaling 3.9 billion bases and providing seven-fold sequence coverage of the human genome). As a result, the working draft is substantially closer to the ultimate finished form than the consortium expected at this stage.

Although the working draft is useful for most biomedical research, a highly accurate sequence that's as close to perfect as possible is critical for obtaining all the information there is to get from human sequence data. This has already been achieved for chromosomes 21 and 22, as well as for 24 percent of the entire genome.

In a related announcement, Celera Genomics announced that it completed its own first assembly of the human genome DNA sequence.

The public and private projects use similar automation and sequencing technology, but different approaches to sequencing the human genome. The public project uses a "hierarchical shotgun" approach in which individual large DNA fragments of known position are subjected to shotgun sequencing (i.e., shredded into small fragments that are sequenced, and then reassembled on the basis of sequence overlaps). The Celera project uses a "whole genome shotgun" approach, in which the entire genome is shredded into small fragments that are sequenced and put back together on the basis of sequence overlaps.

TRIUMPHANT FEELINGS

Behind all the publicity hoopla was the personal triumph and exhilaration felt by every Whitehead person involved with the project. In fact, for most of them, including the eight representatives from the Genome Center who went to a White House ceremony in Washington, the pride and excitement about a job well done far surpassed any appearance on the "Today" show.

[Eric Steven Lander (born 1957)], professor of biology and director of the Whitehead Genome Center, and Lauren Linton, co-director of its sequencing center, as well as sequencing center team leaders were in the White House East Room as President Clinton and Britain's Prime Minister Tony Blair made the historic announcement -- that the "book of life" had been decoded. The room was electric with anticipation as the band played "Hail to the Chief" and announced the President's entrance.

Remarks by President Clinton, Francis Collins (director of the National Human Genome Research Institute) and [Dr. John Craig Venter (born 1946)] (president of [Celera Genomics Corporation]) recognized the work of the thousands of scientists who helped the world reach this milestone.

"We are incredibly happy and feeling a sense of triumph. This is an exciting day, and the credit goes to all the people who worked day and night at a feverish pace both to create the sequencing center and to sequence every last bit of DNA to achieve the goals that we had set for this milestone," said Dr. Linton.

"It's very exciting to be here, to stand here in the White House and be recognized for our accomplishments. It was impressive and overwhelming and totally thrilling," said Nicole Stange-Thomann, leader of the clone preparation and library construction team.

She and several team leaders from Whitehead, including  [Kevin Judd McKernan (born 1973)], Mike Zody, Lisa Kann, Jim Meldrim, Ken Dewar, Will Fitzhugh and Paul McEwan, attended the White House event and the press conference that followed at the Capital Hilton.

MEDIA BLITZ

Back in Cambridge, sequencing center assistant directors Bruce Birren and Chad Nusbaum rallied the troops for a celebration at the Whitehead Genome Center. They also faced huge and unprecedented media interest in the topic, handling dozens of interviews and television broadcasts that followed the announcement. WHDH-TV Channel 7 (the Boston affiliate of NBC) broadcast live from the Whitehead party, and Channels 4 and 56 also descended on the Whitehead sequencing center.

CNN, ABC, NBC, CBS, the Discovery Channel and many other national and international TV stations had prepared in advance, taking footage of the sequencing center and conducting interviews in the past several months, and were ready with stories featuring Whitehead soon after the June 26 announcement.

Whitehead was also featured in the New York Times, the Boston Globe, the Boston Herald, the Washington Post, the Los Angeles Times, Newsday, USA Today, the Wall Street Journal, the Dallas Morning News, Time, the Associated Press and many other newspapers and magazines.

While the media attention focused mostly on the sequencing center, some of it also trickled down to the Genome Center's Functional Genomics Group and the main Whitehead Institute on questions regarding functional genomics and other applications of the genome sequence. Media calls came at a frenzied pace as news outlets frantically tried to get Whitehead scientists to appear on shows on short notice.

MIT Professor of Biology and Whitehead member Richard A. Young appeared on MSNBC; Professor and Whitehead director Gerald Fink was on Greater Boston with Emily Rooney; and [Kevin Judd McKernan (born 1973)] (a team leader at the sequencing center) and David Altshuler (a research scientist at the Genome Center and Harvard endocrinologist) were on the Geraldo Rivera show on CNBC. All this happened within the span of just one day (June 26). Media calls continued to pour in all week as reporters did follow-up stories about the Genome Center's accomplishments.

"We deserve to be proud of our accomplishments and bask in this glory as the world's attention focuses on us. The credit goes to all the individuals at the Whitehead Genome Center who have worked hard to make us the flagship center of the Human Genome Project Consortium. Everyone associated with this project should feel proud," said Professor [Eric Steven Lander (born 1957)],.

Books

The Human Genome Project: The Formation of Federal Policies in the United States, 1986-1990

Robert Mullan Cook-Deegan

The human genome project began to take shape in 1985 and 1986 at various meetings and in the rumor mills of science. By the beginning of the federal government's fiscal year 1988, there were formal line items for genome research in the budgets of both the National Institutes of Health (NIH) and the Department of Energy (DOE). Genome research budgets have grown considerably in 1989 and 1990, and organizational structures have been in flux, but the allocation of funds through line-item budgets was a pivotal event, in this case signaling the rapid adoption of a science policy initiative. This paper focuses on how those dedicated budgets were created.

  https://www.nap.edu/read/1793/chapter/5#105



https://www.baltimoresun.com/news/bs-xpm-1999-11-17-9911170234-story.html

Lander, the scientist said, sees himself as the "Henry Kissinger" of a potential detente between J. Craig Venter, founder of Celera, and Dr. Francis Collins, chief of the NIH's Human Genome Project. For a while, Venter and Collins belittled each other's scientific strategies in public.


1994 book: "The Gene Wars", by Robert Cook-Deegan

2013 book - "Life Out of Sequence: A Data-Driven History of Bioinformatics"

Chapters 1 and 2

2013-life-out-of-sequence-a-data-driven-history-hallam-stevens-screen-recording.mp4

https://drive.google.com/file/d/1tXgTnDGUo7HpcPYQNmXAky0ptyxwMUrh/view?usp=sharing 

2013-life-out-of-sequence-a-data-driven-history-hallam-stevens-img-cover.jpg

https://drive.google.com/file/d/1fP91svIJ3nYTxvIpVZU2f6MqB1b2JY_z/view?usp=sharing 

Front Cover

Hallam Stevens

University of Chicago Press, Nov 4, 2013 - Science - 272 pages

Review : "Thirty years ago, the most likely place to find a biologist was standing at a laboratory bench, peering down a microscope, surrounded by flasks of chemicals and petri dishes full of bacteria. Today, you are just as likely to find him or her in a room that looks more like an office, poring over lines of code on computer screens. The use of computers in biology has radically transformed who biologists are, what they do, and how they understand life. In Life Out of Sequence, Hallam Stevens looks inside this new landscape of digital scientific work. Stevens chronicles the emergence of bioinformatics—the mode of working across and between biology, computing, mathematics, and statistics—from the 1960s to the present, seeking to understand how knowledge about life is made in and through virtual spaces. He shows how scientific data moves from living organisms into DNA sequencing machines, through software, and into databases, images, and scientific publications. What he reveals is a biology very different from the one of predigital days: a biology that includes not only biologists but also highly interdisciplinary teams of managers and workers; a biology that is more centered on DNA sequencing, but one that understands sequence in terms of dynamic cascades and highly interconnected networks. Life Out of Sequence thus offers the computational biology community welcome context for their own work while also giving the public a frontline perspective of what is going on in this rapidly changing field. "

https://books.google.com/books?id=4ZKvAAAAQBAJ&dq=Carl+W.+Anderson,+Robert+Pollack,+and+Norton+Zinder+march+1979&source=gbs_navlinks_s 

We purchased it on Google Books (for 44 dollars)

Copy of text (for chapter 1) placed into this text file : 2013-life-out-of-sequence-a-data-driven-history-hallam-stevens-copied-text-ch-1.txt

https://drive.google.com/file/d/1kTHJl4vcVGjbE0SaGvVTYi49ayR1L4KM/view?usp=sharing 

Chapter one: Building Computers

Before we can understand the effects of computers on biology, we need to understand what sorts of things computers are. Electronic computers were being used in biology even in the 1950s, but before 1980 they remained on the margins of biology—only a handful of biologists considered them important to their work. Now most biologists would find their work impossible without using a computer in some way. It seems obvious—to biologists as well as laypeople—that computers, databases, algorithms, and networks are appropriate tools for biological work. How and why did this change take place?

Perhaps it was computers that changed. As computers got better, a standard argument goes, they were able to handle more and more data and increasingly complex calculations, and they gradually became suitable for biological problems. This chapter argues that it was, in fact, the other way around: it was biology that changed to become a computerized and computerizable discipline. At the center of this change were data, especially sequence data. Computers are data processors: data storage, data management, and data analysis machines. During the 1980s, biologists began to produce large amounts of sequence data. These data needed to be collected, stored, maintained, and analyzed. Computers—data processing machines—provided a ready-made tool.

Our everyday familiarity with computers suggests that they are universal machines: we can use them to do the supermarket shopping, run a business, or watch a movie. But understanding the effects of computers—on biology at least—requires us to see these machines in a different light. The early history of computers suggests that they were not universal machines, but designed and adapted for particular kinds of data-driven problems. When computers came to be deployed in biology on a large scale, it was because these same kinds of problems became important in biology. Modes of thinking and working embedded in computational hardware were carried over from one discipline to another.

The use of computers in biology—at least since the 1980s—has entailed a shift toward problems involving statistics, probability, simulation, and stochastic methods. Using computers has meant focusing on the kinds of problems that computers are designed to solve. DNA, RNA, and protein sequences proved particularly amenable to these kinds of computations. The long strings of letters could be easily rendered as data and managed and manipulated as such. Sequences could be treated as patterns or codes that could be subjected to statistical and probabilistic analyses. They became objects ideally suited to the sorts of tools that computers offered. Bioinformatics is not just using computers to solve the same old biological problems; it marks a new way of thinking about and doing biology in which large volumes of data play the central role. Data-driven biology emerged because of the computer’s history as a data instrument.
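In that spirit, here is a minimal sketch of treating a sequence as data (the sequence itself is invented): two of the simplest statistics computed over a DNA string, GC content and k-mer frequencies, using only the Python standard library.

```python
from collections import Counter

def gc_content(seq: str) -> float:
    """Fraction of G and C bases: a simple whole-sequence statistic."""
    return sum(base in "GC" for base in seq) / len(seq)

def kmer_counts(seq: str, k: int = 3) -> Counter:
    """Frequency table of all overlapping k-length substrings (k-mers)."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

seq = "ATGCGCGTATATGCGCATATTAGCGCGC"  # invented example sequence
print(f"GC content: {gc_content(seq):.2f}")
print("most common 3-mers:", kmer_counts(seq).most_common(3))
```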

The first part of this chapter provides a history of early electronic computers and their applications to biological problems before the 1980s. It pays special attention to the purposes for which computers were built and the uses to which they were put: solving differential equations, stochastic problems, and data management. These problems influenced the design of the machines. Joseph November argues that between roughly 1955 and 1965, biology went from being an “exemplar of systems that computers could not describe to exemplars of systems that computers could indeed describe.” 1 The introduction of computers into the life sciences borrowed heavily from operations research. It involved mathematizing aspects of biology in order to frame problems in modeling and data management terms—the terms that computers worked in. 2 Despite these adaptations, at the end of the 1970s, the computer still lay largely outside mainstream biological research. For the most part, it was an instrument ill-adapted to the practices and norms of the biological laboratory. 3

The invention of DNA sequencing in the late 1970s did much to change both the direction of biological research and the relationship of biology with computing. Since the early 1980s, the amount of sequence data has continued to grow at an exponential rate. The computer was a perfect tool with which to cope with the overwhelming flow of data. The second and third parts of this chapter consist of two case studies: the first of Walter Goad, a physicist who turned his computational skills toward biology in the 1960s; and the second of James Ostell, a computationally minded PhD student in biology at Harvard University in the 1980s. These examples show how the practices of computer use were imported from physics into biology and struggled to establish themselves there. These practices became established as a distinct subdiscipline of biology—bioinformatics—during the 1990s.

What Is a Computer?

The computer was an object designed and constructed to solve particular sorts of problems, first for the military and, soon afterward, for Big Physics. Computers were (and are) good at solving certain types of problems: numerical simulations, differential equations, stochastic and statistical problems, and problems involving the management of large amounts of data. 4

The modern electronic computer was born in World War II. Almost all the early attempts to build mechanical calculating devices were associated with weapons or the war effort. Paul Edwards argues that “for two decades, from the early 1940s until the early 1960s, the armed forces of the United States were the single most important driver of digital computer development.” 5 Alan Turing’s eponymous machine was conceived to solve a problem in pure mathematics, but its first physical realization at Bletchley Park was as a device to break German ciphers. 6 Howard Aiken’s Mark I, built by IBM between 1937 and 1943, was used by the US Navy’s Bureau of Ships to compute mathematical tables. 7 The computers designed at the Moore School of Electrical Engineering at the University of Pennsylvania in the late 1930s were purpose-built for ballistics computations at the Aberdeen Proving Ground in Maryland. 8 A large part of the design and the institutional impetus for the Electronic Numerical Integrator and Computer (ENIAC), also developed at the Moore School, came from John von Neumann. As part of the Manhattan Project, von Neumann was interested in using computers to solve problems in the mathematics of implosion. Although the ENIAC did not become functional until after the end of the war, its design—the kinds of problems it was supposed to solve—reflected wartime priorities.

With the emergence of the Cold War, military support for computers would continue to be of paramount importance. The first problem programmed onto the ENIAC (in November 1945) was a mathematical model of the hydrogen bomb. 9 As the conflict deepened, the military found uses for computers in aiming and operating weapons, weapons engineering, radar control, and the coordination of military operations. Computers like MIT’s Whirlwind (1951) and SAGE (Semi-Automatic Ground Environment, 1959) were the first to be applied to what became known as C3I: command, control, communications, and intelligence. 10

What implications did the military involvement have for computer design? Most early computers were designed to solve problems involving large sets of numbers. Firing tables are the most obvious example. Other problems, like implosion, also involved the numerical solution of differential equations. 11 A large set of numbers—representing an approximate solution—would be entered into the computer; a series of computations on these numbers would yield a new, better approximation. A solution could be approached iteratively. Problems such as radar control also involved (real-time) updating of large amounts of data fed in from remote military installations. Storing and iteratively updating large tables of data was the exemplary computational problem.
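A minimal sketch of that iterative style of computation is given below, using Jacobi relaxation on a one-dimensional steady-state heat (Laplace) problem; the grid size and boundary values are arbitrary. A table of numbers is repeatedly replaced by a slightly better approximation until it stops changing.

```python
# Jacobi relaxation for u'' = 0 on a 1-D grid with fixed boundary values:
# each sweep replaces every interior value by the average of its neighbors,
# iteratively refining the table of numbers toward the solution.
n = 11
u = [0.0] * n
u[0], u[-1] = 100.0, 0.0          # boundary conditions (arbitrary example values)

for sweep in range(10_000):
    new_u = u[:]
    for i in range(1, n - 1):
        new_u[i] = 0.5 * (u[i - 1] + u[i + 1])
    if max(abs(a - b) for a, b in zip(u, new_u)) < 1e-9:
        break
    u = new_u

print(f"converged after {sweep} sweeps")
print([round(x, 2) for x in u])   # linear profile from 100 down to 0
```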

Another field that quickly took up the use of digital electronic computers was physics, particularly the disciplines of nuclear and particle physics. The military problems described above belonged strictly to the domain of physics. Differential equations and systems of linear algebraic equations can describe a wide range of physical phenomena such as fluid flow, diffusion, heat transfer, electromagnetic waves, and radioactive decay. In some cases, techniques of military computing were applied directly to physics problems. For instance, missile telemetry involved problems of real-time, multichannel communication that were also useful for controlling bubble chambers. 12 A few years later, other physicists realized that computers could be used to great effect in “logic” machines: spark chambers and wire chambers that used electrical detectors rather than photographs to capture subatomic events. Bubble chambers and spark chambers were complicated machines that required careful coordination and monitoring so that the best conditions for recording events could be maintained by the experimenters. By building computers into the detectors, physicists were able to retain real-time control over their experimental machines. 13

But computers could be used for data reduction as well as control. From the early 1950s, computers were used to sort and analyze bubble chamber film and render the data into a useful form. One of the main problems for many particle physics experiments was the sorting of the signal from the noise: for many kinds of subatomic events, a certain “background” could be anticipated. Figuring out just how many background events should be expected inside the volume of a spark chamber was often a difficult problem that could not be solved analytically. Again following the lead of the military, physicists turned to simulations using computers. Starting with random numbers, physicists used stochastic methods that mimicked physical processes to arrive at “predictions” of the expected background. These “Monte Carlo” processes evolved from early computer simulations of atomic bombs on the ENIAC to sophisticated background calculations for bubble chambers. The computer itself became a particular kind of object: that is, a simulation machine.
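A tiny sketch of the Monte Carlo idea follows, with invented numbers rather than any real detector model: random sampling of a simple background process stands in for a calculation that would be hard to do analytically.

```python
import random

random.seed(42)

# Invented toy model: background events occur independently at a known mean
# rate per exposure; we estimate the distribution of counts by simulation
# rather than by solving it analytically.
mean_rate = 3.2          # expected background events per exposure (made up)
n_trials = 100_000

def simulate_counts(rate: float) -> int:
    """Draw one Poisson-distributed count by summing exponential waiting times."""
    t, count = 0.0, 0
    while True:
        t += random.expovariate(rate)
        if t > 1.0:
            return count
        count += 1

counts = [simulate_counts(mean_rate) for _ in range(n_trials)]
print("estimated mean background:", sum(counts) / n_trials)
print("P(>= 6 events) ~", sum(c >= 6 for c in counts) / n_trials)
```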

The other significant use of computers that evolved between 1945 and 1955 was in the management of data. In many ways, this was a straightforward extension of the ENIAC’s ability to work with large sets of numbers. The Moore School engineers J. Presper Eckert and John Mauchly quickly saw how their design for the Electronic Discrete Variable Advanced Calculator (EDVAC) could be adapted into a machine that could rapidly sort data—precisely the need of commercial work. This insight inspired the inventors to incorporate the Eckert-Mauchly Computer Corporation in December 1948 with the aim of selling electronic computers to businesses. The first computer they produced—the UNIVAC (Universal Automatic Computer)—was sold to the US Census Bureau in March 1951. By 1954, they had sold almost twenty machines to military (the US Air Force, US Army Map Service, Atomic Energy Commission) and nonmilitary customers (General Electric, US Steel, DuPont, Metropolitan Life, Consolidated Edison). Customers used these machines for inventory and logistics. The most important feature of the computer was its ability to “scan through a reel of tape, find the correct record or set of records, perform some process in it, and return the results again to tape.” 14 It was an “automatic” information processing system. The UNIVAC was successful because it was able to store, operate on, and manipulate large tables of numbers—the only difference was that these numbers now represented inventory or revenue figures rather than purely mathematical expressions.

Between the end of World War II and the early 1960s, computers were also extensively used by the military in operations research (OR). OR and the related field of systems analysis were devoted to the systematic analysis of logistical problems in order to find optimally efficient solutions. 15 OR involved problems of game theory, probability, and statistics. These logical and numerical problems were understood as exactly the sorts of problems computers were good at solving. 16 The use of computers in OR and systems analysis not only continued to couple them to the military, but also continued their association with particular sorts of problems: namely, problems with large numbers of well-defined variables that would yield to numerical and logical calculations. 17

What were the consequences of all this for the application of computers to biology? Despite their touted “universality,” digital computers were not equally good at solving all problems. The ways in which early computers were used established standards and practices that influenced later uses. 18 The design of early computers placed certain constraints on where and how they would and could be applied to biological problems. The use of computers in biology was successful only where biological problems could be reduced to problems of data analysis and management. Bringing computers to the life sciences meant following specific patterns of use that were modeled on approaches in OR and physics and which reproduced modes of practice and patronage from those fields. 19

In the late 1950s, there were two alternative notions of how computers might be applied to the life sciences. The first was that biology and biologists had to mathematize, becoming more like the physical sciences. The second was that computers could be used for accounting purposes, creating “a biology oriented toward the collation of statistical analysis of large volumes of quantitative data.” 20 Both notions involved making biological problems amenable to computers’ data processing power. Robert Ledley—one of the strongest advocates of the application of computers in biology and medicine—envisioned the transformation of biologists’ research and practices along the lines of Big Science. 21

In 1965, Ledley published Use of Computers in Biology and Medicine. The foreword (by Lee Lusted of the National Institutes of Health) acknowledged that computer use required large-scale funding and cooperation similar to that seen in physics. 22 Ledley echoed these views in his preface:

Physics served as the paradigm of such organization. But the physical sciences also provided the model for the kinds of problems that computers were supposed to solve: those involving “large masses of data and many complicated interrelating factors.” Many of the biomedical applications of computers that Ledley’s volume explored treated biological systems according to their physical and chemical bases. The examples Ledley describes in his introduction include the numerical solution of differential equations describing biological systems (including protein structures, nerve fiber conduction, muscle fiber excitability, diffusion through semipermeable membranes, metabolic reactions, blood flow), simulations (Monte Carlo simulation of chemical reactions, enzyme systems, cell division, genetics, self-organizing neural nets), statistical analyses (medical records, experimental data, evaluation of new drugs, data from electrocardiograms and electroencephalograms, photomicrographic analysis); real-time experimental and clinical control (automatic respirators, analysis of electrophoresis, diffusion, and ultracentrifuge patterns, and counting of bacterial cultures) and medical diagnosis (including medical records and distribution and communication of medical knowledge). 24 Almost all the applications were either borrowed directly from the physical sciences or depended on problems involving statistics or large volumes of information. 25

For the most part, the mathematization and rationalization of biology that Ledley and others believed was necessary for the “computerization” of the life sciences did not eventuate. 26 By the late 1960s, however, the invention of minicomputers and the general reduction in the costs of computers allowed more biologists to experiment with their use. 27 At Stanford University, a small group of computer scientists and biologists led by Edward Feigenbaum and Joshua Lederberg began to take advantage of these changes. After applying computers to the problem of determining the structure of organic molecules, this group began to extend their work into molecular biology. 28

In 1975, they created MOLGEN, or “Applications of Symbolic Computation and Artificial Intelligence to Molecular Biology.” The aim of this project was to combine expertise in molecular biology with techniques from artificial intelligence to create “automated methods for experimental assistance,” including the design of complicated experimental plans and the analysis of nucleic acid sequences. 29

Lederberg and Feigenbaum initially conceived MOLGEN as an artificial intelligence (AI) project for molecular biology. MOLGEN included a “knowledge base” compiled by expert molecular biologists and containing “declarative and procedural information about structures, laboratory conditions, [and] laboratory techniques.” 30 They hoped that MOLGEN, once provided with sufficient information, would be able to emulate the reasoning processes of a working molecular biologist. Biologists did not readily take up these AI tools, and their use remained limited. What did begin to catch on, however, were the simple tools created as part of the MOLGEN project for entering, editing, comparing, and analyzing protein and nucleic acid sequences. In other words, biologists used MOLGEN for data management, rather than for the more complex tasks for which it was intended. By the end of the 1970s, computers had not yet exerted a wide influence on the knowledge and practice of biology. Since about 1975, however, computers have changed what it means to do biology: they have “computerized” the biologist’s laboratory.

By the early 1980s, and especially after the advent of the first personal computers, biologists began to use computers in a variety of ways. These applications included the collection, display, and analysis of data (e.g., electron micrographs, gel electrophoresis), simulations of molecular dynamics (e.g., binding of enzymes), simulations of evolution, and especially the study of the structure and folding of proteins (reconstructing data from X-ray crystallography, visualization, simulation and prediction of folding). 31 However, biologists saw the greatest potential of computers in dealing with sequences. In 1984, for instance, Martin Bishop wrote a review of software for molecular biology; out of fifty-three packages listed, thirty were for sequence analysis, a further nine for “recombinant DNA strategy,” and another seven for database retrieval and management. 32 The analysis of sequence data was becoming the exemplar for computing in biology. 33

As data processing machines, computers could be used in biology only in ways that aligned with their uses in the military and in physics. The early design and use of computers influenced the ways in which they could and would be applied in the life sciences. In the 1970s, the computer began to bring new kinds of problems (and techniques for solving them) to the fore in biology—simulation, statistics, and large-volume data management and analysis were the problems computers could solve quickly. We will see how these methods had to struggle to establish themselves within and alongside more familiar ways of knowing and doing in the life sciences.

Walter Goad and the Origins of GenBank

The next two sections provide two examples of ultimately successful attempts to introduce computers into biology. What these case studies suggest is that success depended not on adapting the computer to biological problems, but on adapting biology to problems that computers could readily solve. In particular, they demonstrate the central roles that data management, statistics, and sequences came to play in these new kinds of computationally driven biology. Together, these case studies also show that the application of computers to biology was not obvious or straightforward—Goad was able to use computers only because of his special position at Los Alamos, while Ostell had to struggle for many years to show the relevance and importance of his work. Ultimately, the acceptance of computers by biologists required a redefinition of the kinds of problems that biology addressed.

Walter Goad (1925–2000) came to Los Alamos Scientific Laboratories as a graduate student in 1951, in the midst of President Truman’s crash program to construct a hydrogen bomb. He quickly proved himself an able contributor to that project, gaining key insights into problems of neutron flux inside supercritical uranium. There is a clear continuity between some of Goad’s earlier (physics) and later (biological) work: both used numerical and statistical methods to solve data-intensive problems. Digital electronic computers were Goad’s most important tool. As a consequence, Goad’s work imported specific ways of doing and thinking from physics into biology. In particular, he brought ways of using computers as data management machines. Goad’s position as a senior scientist in one of the United States’ most prestigious scientific research institutions imparted a special prestige to these modes of practice. Ultimately, the physics-born computing that Goad introduced played a crucial role in redefining the types of problems that biologists addressed; the reorganization of biology that has accompanied the genomic era can be understood in part as a consequence of the modes of thinking and doing that the computer carried from Los Alamos.

We can reconstruct an idea of the kinds of physics problems that Goad was tackling by examining both some of his published work from the 1950s and his thesis on cosmic ray scattering. 34 This work had three crucial features. First, it depended on modeling systems (like neutrons) as fluids using differential or difference equations. Second, such systems involved many particles, so their properties could only be treated statistically. Third, insight was gained from the models by using numerical or statistical methods, often with the help of a digital electronic computer. During the 1950s, Los Alamos scientists pioneered new ways of problem solving using these machines.

Electronic computers were not available when Goad first came to Los Alamos in 1951 (although Los Alamos had had access to computers elsewhere since the war). By 1952, however, the laboratory had the MANIAC (Mathematical Analyzer, Numerical Integrator, and Computer), which had been constructed under the direction of Nicholas Metropolis. Between 1952 and 1954, Metropolis worked with Enrico Fermi, Stanislaw Ulam, George Gamow, and others on refining Monte Carlo and other numerical methods for use on the new machine. They applied these methods to problems in phase-shift analysis, nonlinear-coupled oscillators, two-dimensional hydrodynamics, and nuclear cascades. 35 Los Alamos also played a crucial role in convincing IBM to turn its efforts to manufacturing digital computers in the early 1950s. It was the first institution to receive IBM’s “Defense Calculator,” the IBM 701, in March 1953. 36

When attempting to understand the motion of neutrons inside a hydrogen bomb, it is not possible to write down (let alone solve) the equations of motion for all the neutrons (there are far too many). Instead, it is necessary to find ways of summarizing the vast amounts of data contained in the system. Goad played a central role in Los Alamos’ work on this problem. By treating the motion of neutrons like the flow of a fluid, Goad could describe it using well-known differential equations. These equations could be solved by “numerical methods”—that is, by finding approximate solutions through intensive calculation. 37 In other cases, Goad worked by using Monte Carlo methods—that is, by simulating the motion of neutrons as a series of random moves. 38 In this kind of work, Goad used electronic computers to perform the calculations: the computer acted to keep track of and manage the vast amounts of data involved. The important result was not the motion of any given neutron, but the overall pattern of motion, as determined from the statistical properties of the system.
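
The Monte Carlo style of calculation described here can be illustrated with a small sketch. The following Python fragment is only a toy written for this page under simplifying assumptions (one-dimensional, forward-only motion and a fixed absorption probability); it is not a reconstruction of any Los Alamos code, but it shows the general pattern: simulate many random particle histories, then report only statistical summaries of the ensemble.

import random
import statistics

def simulate_particle(mean_free_path=1.0, absorption_prob=0.3, rng=random):
    """Depth at which one particle is absorbed (toy 1-D, forward-only model)."""
    depth = 0.0
    while True:
        # Distance to the next collision, drawn from an exponential distribution.
        depth += rng.expovariate(1.0 / mean_free_path)
        # At each collision the particle is absorbed with a fixed probability.
        if rng.random() < absorption_prob:
            return depth

def mean_penetration_depth(n_particles=100_000, **kwargs):
    depths = [simulate_particle(**kwargs) for _ in range(n_particles)]
    # The result of interest is not any single history but the ensemble statistics.
    return statistics.mean(depths), statistics.stdev(depths)

if __name__ == "__main__":
    mean_depth, spread = mean_penetration_depth()
    print(f"mean depth ~ {mean_depth:.2f}, spread ~ {spread:.2f}")

For this toy model the mean depth can also be computed directly (mean free path divided by absorption probability, here about 3.3), which is the kind of cross-check between analytic and simulated results that Monte Carlo work of this sort relies on.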

When Goad returned to his thesis at the end of 1952, his work on cosmic rays proceeded similarly. He was attempting to produce a model of how cosmic rays would propagate through the atmosphere. Since a shower of cosmic rays involved many particles, once again it was not possible to track all of them individually. Instead, Goad attempted to develop a set of equations that would yield the statistical distribution of particles in the shower in space and time. These equations were solved numerically based on theoretical predictions about the production of mesons in the upper atmosphere. 39 In both his work on the hydrogen bomb and his thesis, Goad’s theoretical contributions centered on using numerical methods to understand the statistics of transport and flow. By the 1960s, Goad had become increasingly interested in some problems in biology. While visiting the University of Colorado Medical Center, Goad collaborated extensively with the physical chemist John R. Cann, examining transport processes in biological systems. First with electrophoresis gels, and then extending their work to ultracentrifugation, chromatography, and gel filtration, Goad and Cann developed models for understanding how biological
molecules moved through complex environments. 40 The general approach to such problems was to write down a set of differential or difference equations that could then be solved using numerical methods on a computer. This work was done on the IBM-704 and IBM-7094 machines at Los Alamos. These kinds of transport problems are remarkably similar to the kinds of physics that Goad had contributed to the hydrogen bomb: instead of neutrons moving through a supercritical plasma, the equations now had to represent macromolecules moving through a space filled with other molecules. 41 Here too, it was not the motion of any particular molecule that was of interest, but the statistical or average motion of an ensemble of molecules. Such work often proceeded by treating the motion of the molecule as a random walk and then simulating the overall motion computationally using Monte Carlo methods. Goad himself saw some clear continuities between his work in physics and his work in biology. Reporting his professional interests in 1974, he wrote, “Statistics and statistical mechanics, transport processes, and fluid mechanics, especially as applied to biological and chemical phenomena.” 42 By 1974, Goad was devoting most of his time to biological problems, but “statistical mechanics, transport processes, and fluid mechanics” well described his work in theoretical physics too. Likewise, in a Los Alamos memo from 1972, Goad argued that the work in biology should not be split off from the Theoretical Division’s other activities: “Nearly all of the problems that engage [Los Alamos] have a common core: . . . the focus is on the behavior of macroelements of the system, the behavior of microelements being averaged over—as in an equation of state—or otherwise statistically
characterized.” 43 Goad’s work may have dealt with proteins instead of nucleons, but his modes of thinking and working were very similar. His biology drew on familiar tools, particularly the computer, to solve problems by deducing the statistical properties of complex systems. The computer was the vital tool here because it could keep track of and summarize the vast amounts of data present in these models. Los Alamos provided a uniquely suitable context for this work. The laboratory’s long-standing interest in biology and medicine—and particularly in molecular genetics—provided some context for Goad’s forays. Few biologists were trained in the quantitative, statistical, and numerical methods that Goad could deploy; even fewer had access to expensive, powerful computers. Mathematical biology remained an extremely isolated subdiscipline. 44 Those few who used computers for biology were marginalized as theoreticians in an experiment-dominated discipline. Margaret Dayhoff at the National Biomedical Research Foundation (NBRF), for instance, struggled to gain acceptance among the wider biological community. 45 Goad’s position at Los Alamos was such that he did not require the plaudits of biologists—the prestige of the laboratory itself, as well as its open-ended mission, allowed him the freedom to pursue a novel kind of cross-disciplinary work. In 1974, the Theoretical Division’s commitment to the life sciences was formalized by the formation of a new subdivision: T-10, Theoretical Biology and Biophysics. The group was formally headed by George I. Bell, but by this time Goad too was devoting almost all his time to biological problems. The group worked on problems in immunology, radiation damage to nucleotides, transport of macromolecules, and human genetics. The senior scientists saw their
role as complementary to that of the experimenters, building and analyzing mathematical models of biological systems that could then be tested. 46 It was around this time that Goad and the small group of physicists working with him began to devote more attention to nucleic acid sequences. For biologists, both protein and nucleotide sequences were the keys to understanding evolution. Just as morphologists compared the shapes of the bones or limbs of different species, comparing the sequence of dog hemoglobin with that of cow hemoglobin, for instance, allowed inferences about the relatedness and evolutionary trajectories of dogs and cows. The more sequence that was available, and the more sensitively it could be compared, the greater insight into evolution could be gained. In other words, sequence comparison allowed biologists to study evolutionary dynamics very precisely at the molecular level. 47 Sequences were an appealing subject of research for physicists for several reasons. First, they were understood to be the fundamental building blocks of biology—studying their structure and function was equivalent in some sense to studying electrons and quarks in physics. Second, their discrete code seemed susceptible to the quantitative and computational tools that physicists had at their disposal. Computers were useful for processing the large quantities of numerical data from physics experiments and simulations; the growth of nucleotide sequence data offered similar possibilities for deploying the computer in biology. 48 The T-10 group immediately attempted to formulate sequence analysis as a set of mathematical problems. Ulam realized quickly that the problem of comparing sequences with one another was really a problem of finding a “metric space of sequences.” 49
A group of physicists—including Temple Smith, Michael Waterman, Myron Stein, William A. Beyer, and Minoru Kanehisa—began to work on these problems of sequence comparison and analysis, making important advances both mathematically and in software. 50 T-10 fostered a culture of intense intellectual activity; its members realized that they were pursuing a unique approach to biology with skills and resources available to few others. 51 Within the group, sequence analysis was considered a problem of pattern matching and detection: within the confusing blur of As, Gs, Ts, and Cs in a DNA sequence lay hidden patterns that coded for genes or acted as protein-specific binding sites. Even the relatively short (by contemporary standards) nucleotide sequences available in the mid-1970s contained hundreds of base pairs—far more than could be made sense of by eye. As a tool for dealing with large amounts of data and for performing statistical analysis, the computer was ideal for sequence analysis. 52 Goad’s earlier work in physics and biology had used computers to search for statistical patterns in the motion of neutrons or macromolecules; here, also by keeping track of large amounts of data, computerized stochastic techniques (e.g., Monte Carlo methods) could be used for finding statistical patterns hidden in the sequences. As the Los Alamos News Bulletin said of Goad’s work on DNA in 1982, “Pattern-recognition research and the preparation of computer systems and codes to simplify the process are part of a long-standing effort at Los Alamos—in part the progeny of the weapons development program here.” 53 Goad’s work used many of the same tools and techniques that had been developed at the laboratory since its beginnings, applying them now to biology instead of bombs.
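
Ulam’s “metric space of sequences” can be made concrete with the most familiar textbook example of a sequence distance: the edit (Levenshtein) distance, computed by dynamic programming. The sketch below is a generic formulation given only as an illustration; it is not the specific mathematics developed by the T-10 group, although the alignment algorithms later associated with Smith and Waterman belong to the same dynamic-programming family.

def edit_distance(a: str, b: str) -> int:
    """Minimum number of substitutions, insertions, and deletions turning a into b."""
    prev = list(range(len(b) + 1))                # distances from "" to each prefix of b
    for i, ca in enumerate(a, start=1):
        curr = [i]                                # distance from a[:i] to ""
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # delete ca
                            curr[j - 1] + 1,      # insert cb
                            prev[j - 1] + cost))  # substitute (or match)
        prev = curr
    return prev[-1]

print(edit_distance("GATTACA", "GACTATA"))  # 2: two substitutions suffice

Because this distance is symmetric, is zero only for identical strings, and obeys the triangle inequality, it really does turn the set of sequences into a metric space, which is what makes systematic comparison across a whole collection mathematically well posed.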
The sequence database that became GenBank evolved from these computational efforts. For Goad, the collection of nucleotide sequences went hand in hand with their analysis: collection was necessary in order to have the richest possible resource for analytical work, but without continuously evolving analytical tools, a collection would be just a useless jumble of base pairs. In 1979, Goad began a pilot project with the aim of collecting, storing, analyzing, and distributing nucleic acid sequences. This databasing effort was almost coextensive with the analytical work of the T-10 group: both involved using the computer for organizing large sets of data. The techniques of large-scale data analysis required for sequence comparison were very similar to the methods required for tracking and organizing sequence in a database. “These activities have in common,” Bell wrote, “enhancing our understanding of the burgeoning data of molecular genetics both by relatively straightforward organization and analysis of the data and by the development of new tools for recognizing important features of the data.” 54 Databasing meant knowing how to use a computer for organizing and keeping track of large volumes of data. In other words, data management—the organization of sequence data into a bank—depended deeply on the kinds of computer-based approaches that Goad had been using for decades in both physics and biology. Goad’s experience with computers led him (and the T-10 group) to understand and frame biological problems in terms of pattern matching and data management—these were problems that they possessed the tools to solve. In so doing, these physicists brought not only new tools to biology, but new kinds of problems and practices. In 1979, Goad submitted an unsolicited proposal to the National
Institutes of Health (NIH) in the hope that he might receive funding to expand his database. After some hesitation, a competitive request for proposals was issued by the NIH in 1981. Goad was not the only person attempting to collect sequences and organize them using computers. Elvin Kabat had begun a collection of sequences of immunoglobulins at the NIH, while Kurt StĂŒber in Germany, Richard Grantham in France, and Douglas Brutlag at Stanford also had their own sequence collections. Dayhoff used computer analysis to compile the Atlas of Protein Sequence and Structure from her collection of (mostly protein) sequences at the NBRF. However, this kind of collection and analysis was not considered high-prestige work by biologists, and Dayhoff struggled to find funding for her work. 55 Goad’s position as a physicist at a prestigious laboratory afforded him independence from such concerns: he could pursue sequence collection and comparison just because he thought it was valuable scientific work. Ultimately, the $2 million, five-year contract for the publicly funded sequence database was awarded to Goad’s group in June 1982. Both the origins and the subsequent success of GenBank have been detailed elsewhere. 56 Goad’s scientific biography, however, suggests that GenBank was partly a product of his background in physics, as he imported a statistical and data management style of science into biology via the computer. Goad’s position as a physicist at a world-renowned laboratory allowed him to import ways of working into biology from his own discipline. Goad’s techniques and tools—particularly the computer—carried their prestige from his work in physics and had a credibility that did not depend on norms of biological work. These circumstances allowed the introduction not
only of a new tool (the computer), but also of specific ways of thinking centered on statistics, pattern recognition, and data management. Goad’s background meant that the computer came to biology not as a machine for solving biological problems, but rather as a technology that imported ready-made ways of thinking, doing, and organizing from physics.

From Sequence to Software: James Ostell

It is important not to exaggerate the extent of computer use in molecular biology in the early 1980s. One MOLGEN report from September 1980 provides a list of just fifty-one users who had logged into the system. 57 Although interest and use were growing rapidly, computers still remained esoteric tools for most molecular biologists. This certainly appeared to be true for Jim Ostell when he began his doctoral studies in the laboratory of Fotis Kafatos in Harvard’s Department of Cellular and Developmental Biology in 1979. 58 Although Ostell had a background in zoology (he wrote a master’s thesis on the anatomy of the male cricket at the University of Massachusetts), he was attracted to the exciting field of molecular biology, which seemed to be advancing rapidly due to the new techniques of DNA sequencing and cDNA cloning. Swept up in the excitement, Ostell did some cloning and sequencing of eggshell proteins. Once he had the sequence, however, he had no idea what to do next, or how to make any sense of it. Somebody suggested that he use a computer. Before coming to graduate school, Ostell had taken one computer class, using the FORTRAN programming language on a Cyber 70 mainframe with punched cards. 59 The Kafatos lab had a 300-baud modem that connected an ASCII terminal to the MOLGEN
project running at Stanford. It also had an 8-bit CP/M microcomputer with an Intel CPU, 48 kilobytes of memory, and an 8-inch floppy disk drive. 60 The secretary had priority for the use of the computer for word processing, but the students were free to use it after hours. Ostell relates how he came to use the machine: This computer was always breaking down, so the repair people were often there. I had been a ham radio operator and interested in electronics, so Fotis [Kafatos] found me one day looking interestedly in the top as it was under repair and asked if I knew anything about computers. When I replied “A little,” he smiled and said “Great! You are in charge of the computer.” 61 Ostell had begun to experiment with the MOLGEN system for analyzing his sequences. He found the tools it provided unsatisfactory for his purposes. As a result, he began to write his own sequence analysis software in FORTRAN, using a compiler that had apparently come with the computer. In his dissertation, written some seven years later, Ostell outlined two sorts of differences between the MOLGEN programs and his own. First, the MOLGEN software was designed to run on a mainframe, “supported by substantial government grants.” By contrast, the system that Ostell was using was “mainly the province of computer buffs and cost about $10,000. . . . It was a radical idea that comparable performance could be attained from a (relatively) inexpensive desktop computer.” 62 Second, Ostell’s programs were user-friendly: The software attempted to converse with the scientist in an immediately understandable way. Instead of questions like “>MAXDIST?_,” such as one would encounter on the Molgen system, this package would ask things like “What is the maximum distance to analyze (in base pairs)?” The other aspect of “doing biology” was the way the analyses were done. For example, the Molgen software would give the positions of restriction enzyme recognition sites in a
sequence. But why would a biologist want to do a restriction search in the first place? Probably to plan a real experiment. So my package would give the cut site for the enzyme, not the recognition site. . . . I feel mine provided more immediately useful information to the scientist. 63 Ostell’s colleagues, first in his own lab, then all over the Harvard BioLabs, soon began asking to use his programs. When he published a description of the programs in Nucleic Acids Research in 1982, offering free copies to anyone who wanted them, he was overwhelmed with requests. 64 Ostell’s programs constituted one of the most complete software packages available for molecular biology and the only one that would function on a microcomputer. In addition to making his programs microcomputer-friendly, Ostell made sure that they could be compiled and used on multiple platforms. Roger Staden’s similar package suffered from the fact that it used unusual FORTRAN commands and made occasional PDP-11 system calls (that is, it was designed for a minicomputer rather than a microcomputer). 65 Over the next few years, Ostell occupied himself with making the package available to as wide a range of collaborators as possible, adapting it for different systems and adding additional features. In a description of an updated version of his programs published in 1984, Ostell made a bold claim for the value of his work: Adequate understanding of the extensive DNA and protein sequence derived by current techniques requires the use of computers. Thus, properly designed sequence analysis programs are as important to the molecular biologist as are experimental techniques. 66 Not everyone shared his view, including some members of Ostell’s PhD committee at Harvard. “It wasn’t something that biologists should be doing,” according to Ostell’s recollection of the reaction of some members of his committee. 67 Despite Kafatos’s support, Ostell
was not permitted to graduate on the basis of his computational work. Ostell’s programs had made a direct contribution to the solution of many biological problems, but the software itself was not understood to be “doing biology.” Even in Ostell’s own writing about his work, he describes the functions of his programs as “data management” and “data analysis,” rather than biology proper. 68 Ostell could not get his PhD, but Kafatos agreed to allow him to stay on as a graduate student provided that he could support himself. He got permission to teach a class called “Computer Anatomy and Physiology” to undergraduates. This class was an introduction to computer hardware that analyzed the machine as if it were a living organism. In 1984, Ostell was approached by International Biotechnologies, Inc. (IBI, a company that had been selling restriction enzymes and laboratory equipment), which wanted to license his software and develop it into a product. Since Ostell had done the work while a graduate student at Harvard, the university had a legal claim to the intellectual property rights. But it saw no commercial value in Ostell’s work and agreed to sign over all rights. Turning the software into a commercial product was a formidable task. In particular, the software had to be carefully reengineered for reliability, compatibility, and interoperability. The IBI/Pustell Sequence Analysis Package was released in August 1984, ready to use on an IBM personal computer, at a cost of $800 for academic and commercial users. Still unable to graduate, Ostell followed his wife’s medical career to Vermont, where he lived in a nineteenth-century farmhouse and adapted his programs for use, first on MS-DOS and Unix machines and then on the new Apple Macintosh computers (the latter version eventually became the MacVector software).
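
The cut-site-versus-recognition-site distinction that Ostell draws in the dissertation passage quoted above is easy to illustrate. The sketch below is only a rough illustration written for this page, not Ostell’s original FORTRAN; the two enzymes and their cut offsets are the standard published ones (EcoRI cuts G^AATTC, SmaI cuts CCC^GGG), though anyone using such values should confirm them against an enzyme catalogue.

# Illustrative only: report where each enzyme actually cuts, not just where
# its recognition sequence begins.
ENZYMES = {
    # name: (recognition sequence, cut offset within that sequence, top strand)
    "EcoRI": ("GAATTC", 1),   # G^AATTC
    "SmaI":  ("CCCGGG", 3),   # CCC^GGG (blunt)
}

def find_cut_sites(seq, enzymes=ENZYMES):
    """Return (enzyme, recognition_start, cut_position) tuples, 0-based coordinates."""
    seq = seq.upper()
    hits = []
    for name, (site, offset) in enzymes.items():
        start = seq.find(site)
        while start != -1:
            hits.append((name, start, start + offset))
            start = seq.find(site, start + 1)
    return sorted(hits, key=lambda hit: hit[2])

dna = "TTGAATTCAGCCCGGGTT"
for enzyme, rec_start, cut in find_cut_sites(dna):
    print(f"{enzyme}: recognition site at {rec_start}, top strand cut before position {cut}")

Reporting the cut position rather than the recognition site is the experiment-planning view Ostell describes: it tells the biologist what fragments a digest would actually produce.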
In an attempt to convince his committee and others of the value of his work, Ostell also embarked on applying his software to various biological problems, collaborating with others in the Harvard BioLabs. This effort resulted in significant success, particularly in using his programs to analyze conservation patterns and codon bias to determine protein-coding regions and exon boundaries in Drosophila and broad bean ( Vicia faba ) genes. 69 These sorts of problems have two significant features. First, they require the manipulation and management of large amounts of data. Analysis of conservation patterns, for instance, requires organizing sequences from many organisms according to homology before performing comparisons. Second, analyzing codon bias and finding protein-coding regions are statistical problems. They treat sequences as a stochastic space, where the problem is one of finding a “signal” (a protein-coding region) amid the “noise” of bases. Consider this excerpt from Ostell’s thesis in which he explains how codon bias is calculated: Each sequence used to make the table is then compared to every other sequence in the table by Pearson product moment correlation coefficient. This is, the bias is calculated for each codon in each of two sequences being compared. The correlation coefficient is then calculated comparing the bias for every codon between the two sequences. The correlation coefficient gives a sense of the “goodness of fit” between the two tables. A correlation coefficient is also calculated between the sequence and aggregate table. Finally a C statistic, with and without strand adjustment, is calculated for the sequence on both its correct and incorrect strands. These calculations give an idea how well every sequence fits the aggregate data, as well as revealing relationships between pairs of sequences. 70 Countless similar examples could be taken from the text: the basis of
Ostell’s programs was the use of the computer as a tool for managing and performing statistical analysis on sequences. The story has a happy ending. By 1987, Ostell’s committee allowed him to submit his thesis. As David Lipman began to assemble a team for the new National Center for Biotechnology Information (NCBI), he realized that he had to employ Ostell—there was no one else in the world who had such a deep understanding of the informational needs of biologists. Selling the rights to his software so as not to create a conflict of interest, Ostell began work at the NCBI in November 1988 as the chief of information engineering (a position in which he remains in 2012). 71 In particular, Lipman must have been impressed by Ostell’s vision for integrating and standardizing biological information. As Ostell’s work and thinking evolved, it became clear to him that the way data were stored and managed had fundamental significance for biological practice and knowledge. Developing a new set of tools that he called a “cyborg software environment,” Ostell attempted to allow the user to interface directly with the sequence, placing the DNA molecule at the center of his representation of biological information. Computer images of DNA sequences have been strongly influenced by this vision of an isolated object. We sequence a piece of DNA and read a series of bases as a linear series of bands on a gel. On paper we represent DNA as a linear series of letters across a page. Virtually every computer program which operates on DNA sequences represents a DNA sequence as a linear series of bytes in memory, just as its representation on a printed page. However, a typical publication which contains such a linear series of letters describing a particular DNA always contains much more information. . . . Most computer programs do not include any of this information. 72 Ostell proposed a new way of representing this extra information that
“tie[d] all annotations to the simple coordinate system of the sequence itself.” 73 The computer now provided a way to order biology and think about biological problems in which sequences played the central role. By 1987, as he was finishing his thesis, Ostell realized that he had been involved in “the beginnings of a scientific field.” 74 Problems that had been impossibly difficult in 1980 were now being solved as a matter of routine. More importantly, the problems had changed: whereas Ostell had begun by building individual programs to analyze particular sequences, by the late 1980s he was tackling the design of “software environments” that allowed integrated development of tools and large-scale data sharing that would transcend particular machines and file formats. For the epigraph to the introduction to his thesis, Ostell quoted Einstein: “Opinions about obviousness are to a certain extent a function of time.” Given the difficulties Ostell faced in completing his degree, it is hard not to read this quotation as a comment on the discipline he had helped to create: the application of computers to biology had gone from the “unobvious” to the “obvious.” But why? What does Ostell’s story suggest about the transition? First, it makes clear the importance of sequence and sequencing: the growth of sequence created a glut of data that had to be managed. The computerization of biology was closely associated with the proliferation of sequences; sequences were the kinds of objects that could be manipulated and interrogated using computers. Second, the growing importance of sequences increased the need for data management. The design of computers was suited to knowledge making through the management and analysis of large data sets.
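
The codon-bias comparison quoted from Ostell’s thesis a few paragraphs above is, at bottom, a correlation between codon-usage vectors. The following sketch, written here under simplifying assumptions, tabulates codon frequencies for two coding sequences and compares them with a Pearson product-moment correlation coefficient; Ostell’s actual procedure also involved aggregate tables, strand adjustment, and a “C statistic,” none of which are reproduced. The example sequences are made up.

from itertools import product
from statistics import correlation   # Pearson correlation; requires Python 3.10+

CODONS = ["".join(c) for c in product("ACGT", repeat=3)]   # all 64 codons

def codon_frequencies(cds):
    """Relative frequency of each of the 64 codons in an in-frame coding sequence."""
    cds = cds.upper()
    counts = {codon: 0 for codon in CODONS}
    for i in range(0, len(cds) - len(cds) % 3, 3):
        codon = cds[i:i + 3]
        if codon in counts:                    # skip codons containing N, gaps, etc.
            counts[codon] += 1
    total = sum(counts.values()) or 1
    return [counts[codon] / total for codon in CODONS]

def codon_usage_similarity(cds_a, cds_b):
    """Pearson correlation between the codon-usage vectors of two sequences."""
    return correlation(codon_frequencies(cds_a), codon_frequencies(cds_b))

# Two short, made-up coding sequences (ATG ... stop), for illustration only.
seq_a = "ATGGCTGCTGAAGAAGCTAAATAA"
seq_b = "ATGGCAGCTGAAGAGGCTAAGTAA"
print(f"codon usage correlation: {codon_usage_similarity(seq_a, seq_b):.2f}")

A correlation near 1 means the two sequences prefer the same codons; comparing each sequence against an aggregate table built from known genes is what turns this into a statistical test for protein-coding regions, exactly the “signal” amid “noise” framing described above.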
Disciplinary Origins of Bioinformatics

In the 1980s, due to the perseverance of Ostell and others like him, computers began to become more prevalent in biological work. The individuals who used these machines brought with them commitments to statistical and data management approaches originating in physics. During the 1990s, the use of computers in biology grew into a recognized and specialized set of skills for managing and analyzing large volumes of data. The Oxford English Dictionary now attributes the first use of the term “bioinformatics” to Paulien Hogeweg in 1978, but there is a strong case to be made that the discipline, as a recognized subfield of biology, did not come into existence until the early 1990s. 75 Searching PubMed—a comprehensive online citation database for the biomedical sciences—for the keyword “bioinformatics” from 1982 to 2008 suggests that the field did not grow substantially until about 1992 (figure 1.1). Although this is not a perfect indicator, it suffices to show the general trend. 76 From a trickle of publications in the late 1980s, the early 1990s saw an increase to several hundred papers per year (about one paper per day). This number remained relatively steady from 1992 to 1998, when the field underwent another period of rapid growth, up to about ten thousand papers per year (about twenty-seven papers per day) in 2005. The late 1980s and early 1990s also saw the founding of several key institutions, including the NCBI in 1988 and the European Bioinformatics Institute in 1993. In 1990 and 1991, the Spring Symposia on Artificial Intelligence and Molecular Biology were held at Stanford. 77 Lawrence E. Hunter, a programmer at the National
Library of Medicine (NLM), was one of the organizers: “It was really hard to find people who did this work in either computer science or molecular biology. No one cared about bioinformatics or had any idea of what it was or how to find people who did it.” 78 In 1992, Hunter used publications and conference mailing lists to generate a database of researchers interested in artificial intelligence and molecular biology. The conference that Hunter organized around his list became the first Intelligent Systems for Molecular Biology (ISMB) meeting, held in Washington, DC, in 1993 and jointly sponsored by the NLM and the National Science Foundation. 79
It was also during the early 1990s that the first moves were made toward establishing specially designed bioinformatics courses at the undergraduate and graduate levels. In 1993, undergraduate and doctoral programs were established in bioinformatics at Baylor College of Medicine, Rice University, and the University of Houston. These were followed in 1996 by programs at Northwestern University, Rutgers University, and the University of Washington. By 1999, according to one report, there were twenty-one bioinformatics programs in the United States. 80 What caused this institutionalization of bioinformatics? It would be possible to tell this story as part of a history of the Human Genome Project (HGP)—many of the sequence data on which computers went to work were generated as part of human genome mapping and sequencing efforts. However, the story might just as easily be told the other way around: the HGP became a plausible and thinkable project only because methods of managing large amounts of data were already coming into existence in the 1980s. Computers had begun to be used for managing sequence data before the HGP’s beginnings. The HGP, while crystallizing the institutional development of bioinformatics, depended on the prior existence of bioinformatic practices of data management. It was the possibility of being able to store and manage the 3 billion base pairs of the human genome on a computer that made the project make sense. Similar problems of data management had arisen before the HGP. Almost since GenBank’s inception in 1982, its managers and advisors were under constant pressure to keep up with the increases in the publication of sequence data. By 1985, Los Alamos (along with Bolt, Beranek and Newman, which shared the responsibility for
running GenBank) was complaining of the difficulties of keeping up with the rate of sequence production: “The average amount of new information [each month] . . . is fully half the size of the first GenBank release in October 1982.” 81 The cause of their concern was clear: a fixed budget and a fixed number of staff had to cope with an exponentially growing set of sequences. Coping with this problem was the major challenge for GenBank throughout the 1980s—the need for timely entry and completeness of data constantly animated GenBank staff, its advisory board, and its NIH overseers. But the problem extended beyond a single database. In 1985, an editorial in the first issue of the journal Computer Applications in the Biosciences: CABIOS warned, “The information ‘explosion’ cannot continue indefinitely, but has already reached unmanageable proportions.” 82 An announcement for a seminar in October 1987 expressed a similarly dire sentiment: “The body of experimental data in the biological sciences is immense and growing rapidly. Its volume is so extensive that computer methods, possibly straining the limits of current technology will be necessary to organize the data.” 83 By the time the HGP had begun in earnest, the problem was even worse: “Data collection is outstripping current capabilities to annotate, store, retrieve, and analyze maps and sequences.” 84 The amounts of data seemed to be constantly overwhelming biologists’ abilities to analyze and understand them. However, computers seemed to present a ready-made solution: they were designed to handle exactly the kinds of data management problems that biology now presented. Their roots in Big Science made them suitable tools for controlling the growing number of base pairs. In describing their crisis, biologists often used the metaphor of a “data flood.” Biological data, like water, are generally a good thing, necessary for the growth of biological understanding. But in large quantities (and moving at high speed), data, like water, can be dangerous—they can submerge the structures on which knowledge is built. 85 Of particular concern to biologists was the fact that as molecular biological data were generated, many of them flowed in an ad hoc fashion—into isolated databases, in nonstandard formats. Already by the early 1990s, finding all the available information about a particular gene or sequence was becoming nearly impossible. Using the computer as a day-to-day laboratory tool was one thing, but organizing large-scale coordination and collaboration required specialized knowledge and skills. The diluvian metaphor contributed to the sense that controlling the flow of data, filtering them and channeling them to the appropriate places, was an activity distinct from, but just as important as, the eventual use of the data. In other words, the metaphor created an epistemic space for data management in biology. By the time the HGP came on the scene, databases such as GenBank had already turned to computers and bioinformatic techniques for data management. Efforts to create institutional spaces for data work also arose independently of the HGP. 86 The bill calling for the creation of NCBI, first brought before Congress in 1986, reasoned that “knowledge in the field of biotechnology is accumulating faster than it can reasonably be assimilated” and that advances in computation were the solution.
87 In particular, the bill recognized that the design, development, implementation, and management of biological information constituted a set of distinct skills that required a distinct institutional home. Likewise, in Europe,
the formation of the European Bioinformatics Institute (EBI) as a quasi-independent outstation of the European Molecular Biology Laboratory (EMBL) was justified on the grounds that the application of computing tools to biology constituted a necessary and distinct skill set. Sophisticated interacting information resources must be built both from EMBL Data Library products and in collaboration with groups throughout Europe. Support and training in the use and development of such resources must be provided. Research necessary to keep them state-of-the-art must be carried out. Close links with all constituents must be maintained. These include research scientists, biotechnologists, software and computer vendors, scientific publishers, and government agencies. . . . Increased understanding of biological processes at the molecular level and the powerful technologies of computer and information science will combine to allow bioinformatics to transcend its hitherto largely service role and make fundamentally innovative contributions to research and technology. 88 The computer was no longer to be considered a mere lab tool, but rather the basis of a discipline that could make “innovative contributions” to biomedicine. The knowledge and skills that were needed to manage this data crisis were those that been associated with computing and computers since the 1950s. In particular, molecular biology needed data management and statistics. From the mid-1990s onward, new journals, new conferences, and new training programs (and textbooks) began to appear to support this new domain of knowledge. In 1994, the new Journal of Computational Biology announced that computational biology is emerging as a discipline in its own right, in much the same way molecular biology did in the late 1950s and early 1960s. . . . Biology, regardless of the sub-specialty, is overwhelmed with large amounts of very complex data. . . . Thus all areas of the biological sciences have urgent needs for
the organized and accessible storage of biological data, powerful tools for analysis of those data, and robust, mathematically based, data models. Collaborations between computer scientists and biologists are necessary to develop information platforms that accommodate the need for variation in the representation of biological data, the distributed nature of the data acquisition system, the variable demands placed on different data sets, and the absence of adequate algorithms for data comparison, which forms the basis of biological science. 89 In 1995, Michael Waterman published Introduction to Computational Biology: Maps, Sequences, and Genomes, arguably the first bioinformatics textbook. 90 Waterman’s book was grounded in the treatment of biology as a set of statistical problems: sequence alignment, database searching, genome mapping, sequence assembly, RNA secondary structure, and evolution are all treated as statistical problems. 91 In 1998, the fourteen-year-old journal Computer Applications in the Biosciences changed its name to Bioinformatics. In one of the first issues, Russ Altman outlined the need for a specialized curriculum for bioinformatics. Altman stressed that “bioinformatics is not simply a proper subset of biology or computer science, but has a growing and independent base of tenets that requires specific training not appropriate for either biology or computer science alone.” 92 Apart from the obvious foundational courses in molecular biology and computer science, Altman recommended training for future bioinformaticians in statistics (including probability theory, experimental statistical design and analysis, and stochastic processes) as well as several specialized domains of computer science: optimization (expectation maximization, Monte Carlo, simulated annealing, gradient-based methods), dynamic programming,
bounded search algorithms, cluster analysis, classification, neural networks, genetic algorithms, Bayesian inference, and stochastic context-free grammars. 93 Almost all of these methods trace their origins to statistics or physics or both. 94 Altman’s editorial was inspired by the fact that solid training in bioinformatics was hard to come by. The kinds of skills he pointed to were in demand, and bioinformatics was the next hot career. Under the headline “Bioinformatics: Jobs Galore,” Science Careers reported in 2000 that “everyone is struggling to find people with the bioinformatics skills they need.” 95 The period 1997–2004 coincides with the second period of rapid growth of bioinformatics publications (see figure 1.1 ). Between 1999 and 2004, the number of universities offering academic programs in bioinformatics more than tripled, from 21 to 74. 96 The acceleration and rapid completion of the HGP made it clear that, as one Nature editor put it, “like it or not, big biology is here to stay.” 97 Data would continue to be produced apace, and bioinformatics would continue to be in demand. A shift towards an information-oriented systems view of biology, which grasps both mathematically and biologically the many elements of a system, and the relationships among them that allows the construction of an organism, is underway. But the social change required to make this shift painlessly should not be underestimated. 98 Indeed, the publication of the human genome in 2001 escalated the sense of crisis among biologists to the extent that some feared that traditional biological training would soon become redundant. If biologists do not adapt to the computational tools needed to exploit huge data sets . . . they could find themselves floundering in the wake of advances in genomics. . . . Those who learn to conduct high-throughput genomic analyses, and who can master the computational tools needed to exploit biological
databases, will have an enormous competitive advantage. . . . Many biologists risk being “disenfranchised.” 99 Again, the “wave” of data created a sense that the old tools were just not up to the job and that a radical re-skilling of biology was necessary. A sense of desperation began to grip some biologists: David Roos no doubt spoke for many biologists when, in the issue of Science announcing the human genome draft sequence, he fretted, “We are swimming in a rapidly rising sea of data . . . how do we keep from drowning?” 100 “Don’t worry if you feel like an idiot,” Ewan Birney (one of the new breed of computationally savvy biologists) consoled his colleagues, “because everyone does when they first start.” 101 This discomfort suggests that bioinformatics marked a radical break with previous forms of biological practice. It was not merely that the HGP used computers to “scale up” the old biology. Rather, what allowed bioinformatics to coalesce as a discipline was that the production and management of data demanded a tool that imported new methods of doing and thinking into biological work. In other words, the com puter brought with it epistemic and institutional reorganizations that became known as bioinformatics. The sorts of problems and methods that came to the fore were those that had been associated with computing since the 1950s: statistics, simulation, and data management. As computing and bioinformatics grew in importance, physicists, mathematicians, and computer scientists saw opportunities for deploying their own skills in biology. Physicists, in particular, perceived how biological knowledge could be reshaped by methods from physics:

The essence of physics is to simplify, whereas molecular biology strives to tease out the smallest details. . . . The two cultures might have continued to drift apart, were it not for the revolution in genomics. But thanks to a proliferation of high-throughput techniques, molecular biologists now find themselves wading through more DNA sequences and profiles of gene expression and protein production than they know what to do with. . . . Physicists believe that they can help, bringing a strong background in theory and the modeling of complexity to nudge the study of molecules and cells in a fresh direction. 102 Where biology suddenly had to deal with large amounts of data, physicists saw their opportunity. Physicists, mathematicians, and computer scientists found myriad opportunities in biology because they had the skills in statistics and data management that bioinformatics required. Bioinformatics and the HGP entailed each other. Each drove the other by enabling the production and synthesis of more and more data: the production of bioinformatic tools to store and manage data allowed more data to be produced more rapidly, driving bioinformatics to produce bigger and better tools. The HGP was Big Science and computers were tools appropriate for such a job—their design was ideal for the data management problems presented by the growth of sequences. Computers were already understood as suitable for solving the kinds of problems the genome presented. Within this context of intensive data production, bioinformatics became a special set of techniques and skills for doing biology. Between the early 1980s and the early 2000s, the management of biological data emerged as a distinct set of problems with a distinct set of solutions. Computers became plausible tools for doing biology because they changed the questions that biologists were asking. They brought with them new forms of knowledge production, many of them associated
with physics, that were explicitly suited to reducing and managing large data sets and large volumes of information. The use of computers as tools of data reduction carried Big Science into biology—the machines themselves entailed ways of working and knowing that were radically unfamiliar to biologists. The institutionalization of bioinformatics was a response to the immediate data problems posed by the HGP, but the techniques used for solving these problems had a heritage independent of the HGP and would continue to influence biological work beyond it.

Conclusions

The sorts of data analysis and data management problems for which the computer had been designed left their mark on the machine. Understanding the role of computers in biology necessitates understanding their history as data processing machines. From the 1950s onward, some biologists used computers for collecting, storing, and analyzing data. As first minicomputers and later personal computers became widely available, all sorts of biologists made increasing use of these devices to assist with their data work. But these uses can mostly be characterized in terms of a speeding up or scaling up: using computers, more data could be gathered, they could be stored more easily, and they could be analyzed more efficiently. Computers succeeded in biology when applied to these data-driven tasks. But biologists often dealt with small amounts of data—or data that were not easily computerizable—so the effects of the computer were limited. The emergence of DNA, RNA, and protein sequences in molecular biology provided the opportunity for the computer to make more
fundamental transformations of biological work. Now biological objects (sequences) could be managed as data. The application of computers to the management and analysis of these objects did not entail merely the introduction of a new tool and its integration into traditional biological practice. Rather, it involved a reorientation of institutions and practices, a restructuring of the ways in which biological knowledge is made. Specifically, the computer brought with it ways of investigating and understanding the world that were deeply embedded in its origins. Using computers meant using them for analyzing and managing large sets of data, for statistics, and for simulation. Earlier attempts to introduce the computer into biology largely failed because they attempted to shape the computer to biological problems. When the computer began to be used to solve the kinds of problems for which it had been originally designed, it met with greater success. In so doing, however, it introduced new modes of working and redefined the kinds of problems that were considered relevant to biology. In other words, biology adapted itself to the computer, not the computer to biology. Ultimately, biologists came to use computers because they came to trust these new ways of knowing and doing. This reorientation was never obvious: in the 1970s, it was not clear how computers could be successfully applied to molecular biology, and in the 1980s, Ostell and a handful of others struggled to earn legitimacy for computational techniques. Only in the 1990s did bioinformatics begin to crystallize around a distinct set of knowledge and practices. The gathering of sequence data—especially in the HGP—had much to do with this shift. Sequences were highly “computable” objects—their one-
dimensionality and their pattern of symbols meant that they were susceptible to storage, management, and analysis with the sorts of statistical and numerical tools that computers provided. The computability of sequences made the HGP thinkable and possible; the proliferation of sequence data that emerged from the project necessitated a computerization that solidified the new bioinformatic ways of doing and knowing in biology. I am not arguing here for a kind of technological determinism—it was not the computers themselves that brought new practices and modes of working into biology. Rather, it was individuals like Goad, who came from physics into biology, who imported with them the specific ways of using their computational tools. The computer engendered specific patterns of use and ways of generating knowledge that were brought into biology from physics via the computer. Chapter 2 will characterize these new forms of knowledge making in contemporary biology in more detail, showing how they have led to radically different modes of scientific inquiry.


https://citizensciences.net/bruno-strasser/ 

https://books.google.com/books?id=__aVDwAAQBAJ&dq=%22lederberg%22+%2B+%22goad%22&source=gbs_navlinks_s 

Collecting Experiments: Making Big Data Biology

Bruno J. Strasser

University of Chicago Press, Jun 7, 2019 - Science - 392 pages

Databases have revolutionized nearly every aspect of our lives. Information of all sorts is being collected on a massive scale, from Google to Facebook and well beyond. But as the amount of information in databases explodes, we are forced to reassess our ideas about what knowledge is, how it is produced, to whom it belongs, and who can be credited for producing it.

Every scientist working today draws on databases to produce scientific knowledge. Databases have become more common than microscopes, voltmeters, and test tubes, and the increasing amount of data has led to major changes in research practices and profound reflections on the proper professional roles of data producers, collectors, curators, and analysts.

Collecting Experiments traces the development and use of data collections, especially in the experimental life sciences, from the early twentieth century to the present. It shows that the current revolution is best understood as the coming together of two older ways of knowing—collecting and experimenting, the museum and the laboratory. Ultimately, Bruno J. Strasser argues that by serving as knowledge repositories, as well as indispensable tools for producing new knowledge, these databases function as digital museums for the twenty-first century.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1592577/

2006-10-american-journal-of-human-genetics-vol-79-origins-of-human-genome-project-berg-pmc1592577.pdf : https://drive.google.com/file/d/1ZyQU611FL3akAtrDQczx_PcMJUJi8CKO/view?usp=sharing

2006-10-american-journal-of-human-genetics-vol-79-origins-of-human-genome-project-berg-pmc1592577-pg-1.jpg : https://drive.google.com/file/d/1rVd_-2Ryp5TWS32IzqimEbbYaSovYAg8/view?usp=sharing

Am J Hum Genet. 2006 Oct; 79(4): 603–605. doi: 10.1086/507688. PMCID: PMC1592577. PMID: 16960796. "Origins of the Human Genome Project: Why Sequence the Human Genome When 96% of It Is Junk?" by Paul Berg.

Origins of the Human Genome Project: Why Sequence the Human Genome When 96% of It Is Junk?

I was not much involved in the discussion and debate about initiating a program to determine the base-pair sequence of the human genome, until the idea surfaced publicly. As I recall the genesis of the Human Genome Project, the idea for sequencing the human genome was initiated independently and nearly simultaneously by [Robert Louis Sinsheimer (born 1920)], then Chancellor of the University of California–Santa Cruz (UCSC), and [Charles Peter DeLisi (born 1941)] of the United States Department of Energy. Each had his own purpose in promoting such an audacious undertaking, but the goals of their ambitious plans are best left for them to tell. The proposal was initially aired at a meeting of a small group of scientists convened by Sinsheimer at UCSC in May 1985 and received the backing of those who attended. I became aware of the project through an editorial or op-ed–style piece by Renato Dulbecco in Science, March 1986. Dulbecco’s enthusiasm for the project was based on his conviction that only by having the complete human genome sequence could we hope to identify the many oncogenes, tumor suppressors, and their modifiers. Although that particular goal seemed problematic, I was enthusiastic about the likelihood that the sequence would reveal important organizational, structural, and functional features of mammalian genes.

That conviction stemmed from having seen, firsthand, the tremendous advantages of knowing the sequence of SV40 (in 1978) and adenovirus genomic DNAs (in 1979–1980), particularly for deciphering their biological properties. In each of these instances, as well as for the longer and more complex genomic DNAs of the herpes virus and cytomegalovirus, knowing the sequences was critical for accurately mapping their mRNAs, identifying the introns, and making pretty good guesses about the transcriptional regulatory elements. Even more significant was the ability to engineer precisely targeted modifications to their genomes (e.g., base changes, deletions and additions, sequence rearrangements, and substitutions of defined segments with nonviral DNA). One could easily imagine that knowing the human DNA sequence would enable us to manipulate the sequences of specific genes for a variety of hitherto-undoable experiments.

Aware of the upcoming 1986 Cold Spring Harbor (CSH) Symposium on the “Molecular Biology of Homo sapiens,” I suggested to Jim Watson that it might be interesting to convene a small group of interested people to discuss the proposal’s feasibility. I thought that such a rump session might attract people who would be engaged by the proposal, and Watson agreed to set aside some time during the first free afternoon. As the attendees assembled, it was clear that the project was on the minds of many, and almost everyone who attended the symposium showed up for the session at the newly dedicated Grace Auditorium. Wally Gilbert and I were assigned the task of guiding the discussion. Needless to say, what followed was highly contentious; the reactions ranged from outrage to moderate enthusiasm—the former outnumbering the latter by about five to one.

Gilbert began the discussion by outlining his favored approach: fragment the entire genome’s DNA into a collection of overlapping fragments, clone the individual fragments, sequence the cloned segments with the then existing sequencing technology, and assemble their original order with appropriate computer software. In his most self-assured manner, Gilbert estimated that such a project could be completed in ~10–20 years at a net cost of ~$1 per base, or ~$3 billion. Even before he finished, one could hear the rumblings of discontent and the audience’s gathering outrage. It was not just his matter-of-fact manner and self-assurance about his projections that got the discussion off on the wrong foot, for there was also the rumor (which may well have been planted by Gilbert) that a company he was contemplating starting would undertake the project on its own, copyright the sequence, and market its content to interested parties.
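
Gilbert's outline (fragment, clone, sequence, assemble) and his back-of-the-envelope arithmetic (~$1 per base across ~3 billion bases, hence ~$3 billion) can be caricatured in a few lines of code. The sketch below is purely illustrative of the overlap-and-merge idea, not the software the genome centers actually used; the read lengths, thresholds, greedy strategy, and function names are all hypothetical choices.

# Toy sketch of the shotgun idea: break a sequence into overlapping
# fragments, then stitch them back together by suffix/prefix overlaps.
# Illustrative only; not the HGP's actual methods.
import random

def shotgun_reads(genome, read_len=20, step=8):
    """Tile the genome with overlapping fragments, then shuffle them so the
    assembler has to rediscover their order."""
    reads = [genome[i:i + read_len] for i in range(0, len(genome) - read_len + 1, step)]
    reads.append(genome[-read_len:])           # make sure the tail is covered
    random.shuffle(reads)
    return reads

def overlap(a, b, min_len=6):
    """Length of the longest suffix of a that equals a prefix of b."""
    for k in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_assemble(reads):
    """Repeatedly merge the pair of fragments with the largest overlap."""
    contigs = list(dict.fromkeys(reads))       # drop exact duplicates
    while len(contigs) > 1:
        best_k, best_i, best_j = 0, None, None
        for i, a in enumerate(contigs):
            for j, b in enumerate(contigs):
                if i != j:
                    k = overlap(a, b)
                    if k > best_k:
                        best_k, best_i, best_j = k, i, j
        if best_k == 0:                        # nothing overlaps any more
            break
        merged = contigs[best_i] + contigs[best_j][best_k:]
        contigs = [c for n, c in enumerate(contigs) if n not in (best_i, best_j)]
        contigs.append(merged)
    return max(contigs, key=len)

random.seed(0)
genome = "".join(random.choice("ACGT") for _ in range(300))
contig = greedy_assemble(shotgun_reads(genome))
print("reassembled correctly:", contig == genome)

# Gilbert's estimate: ~$1 per base over ~3 billion bases.
print(f"projected cost: ${1 * 3_000_000_000:,}")

At genome scale the same idea collides with repeats, sequencing errors, and exactly the cost Gilbert estimated, which is why assembly software and mapping strategies became research problems in their own right.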

One could sense the fury of many in the audience, and there was a rush to speak out in protest. Among the more vociferous comments, three points stood out:

The fury of the reactions of some of our most respected molecular geneticists startled me. Several of the speakers argued that certain areas of research, usually their own specialty, were far more valuable than the sequence of the human genome. I was particularly irked by the claims that there was no need to sequence the entire 3 billion base pairs and that knowing the sequences of only the genes would suffice. Frankly, I was shocked by what seemed to me to be a display of what I termed an “arrogance of ignorance.” Why, I asked, should we foreclose on the likelihood that noncoding regions within and surrounding genes contain signals that we have not yet recognized or learned to assay? Furthermore, wasn’t it conceivable that there are DNA sequences for functions other than encoding proteins and RNAs? For example, the DNA sequence might serve for other organismal functions (e.g., chromosomal replication, packaging of the DNA into highly condensed chromatin, or control of development). It seemed surprising and disconcerting to hear that many were prepared to discard, a priori, a potential source of such information, and it was even more surprising that this myopic view persisted both throughout the meeting and for some time afterward.

During the session, I tried to steer the discussion away from the cost issue and the fuzzy arguments about Little Science versus Big Science. Perhaps it was better, I thought, to tempt the creative minds in the audience. After all, this was a scientific meeting with some of the most creative minds sitting in the audience. What if, I said, some philanthropic source descended into our midst and offered $3 billion to produce the sequence of the human genome at the end of 10 years? And, I suggested, assume that we were assured that there would be no impact on existing sources of funding. Would the project be worth doing? If so, how should we proceed with it? Gilbert had offered his approach, but, I asked, are there better ways?

To get that discussion started, I proposed that we might consider sequencing only cloned cDNAs from a variety of libraries made from different tissues and conditions. Knowledge of the expressed sequences would enable us to bootstrap our way to cloning the genomic versions of the cDNAs and, thereby, enable us to identify the introns and the likely promoters. Such an approach, I argued, would very likely yield valuable and interesting cloned material for many investigators to work on long before we knew the entire sequence. The premise was that the effort would identify the chromosomal versions of the expressed sequences and, with some cleverness, their flanking sequences.
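
To make the logic of that cDNA-first "bootstrap" concrete, here is a small, purely hypothetical sketch (the sequences, function names, and parameters are invented for illustration; this is not code Berg proposed): given an expressed cDNA, anchor its pieces in a genomic clone to expose the exon/intron layout and the flanking DNA where a promoter would be sought.

# Hypothetical sketch: use an expressed (cDNA) sequence to locate its exons
# in a genomic clone, which in turn exposes the introns and the flanking DNA
# where regulatory signals would be hunted. All sequences are invented.

def map_cdna_to_genome(cdna, genome, seed_len=8):
    """Greedily anchor successive chunks of the cDNA in the genome and extend
    each anchor while the two sequences keep matching.
    Returns candidate exons as (genome_start, genome_end) pairs."""
    exons = []
    g_pos = c_pos = 0
    while c_pos < len(cdna):
        seed = cdna[c_pos:c_pos + seed_len]
        start = genome.find(seed, g_pos)
        if start == -1:                 # remainder could not be placed
            break
        end, c_end = start + len(seed), c_pos + len(seed)
        while c_end < len(cdna) and end < len(genome) and cdna[c_end] == genome[end]:
            c_end += 1
            end += 1
        exons.append((start, end))
        g_pos, c_pos = end, c_end
    return exons

# Two invented exons separated by an intron, with flanking genomic DNA.
exon1, intron, exon2 = "ATGGCCATTGTA", "GTAAGTTTTCCAG", "CACTGGCAATAA"
genome = "CCGCTATAAAAGGC" + exon1 + intron + exon2 + "GGCCTTAAGC"
cdna = exon1 + exon2                    # the spliced, expressed sequence

for i, (s, e) in enumerate(map_cdna_to_genome(cdna, genome), start=1):
    print(f"candidate exon {i}: genome[{s}:{e}] = {genome[s:e]}")
# The gap between mapped exons is the candidate intron; the sequence upstream
# of the first exon is where one would look for the promoter.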

However, try as I might, I could not engage the audience in that exercise. Their concerns were about the price that would be paid by traditional ways of doing science and that many more-interesting and important problems would be abandoned or neglected. The meeting ended with most people unconvinced of the value of proceeding with a project to sequence the human genome.

At the end of the meeting, I flew to Basel, Switzerland, where I was part of an advisory group to the Basel Institute of Immunology. At the hotel, I found a group of American and European colleagues perched on the veranda overlooking the fast-flowing Rhine River. They were clearly aware of the discussion at CSH and my participation in it. I again had to defend my support for the sequencing project against arguments that were a repetition of those expressed at CSH.

Soon thereafter, the National Academy of Sciences convened a blue-ribbon committee, many members of which had been among the critical voices at CSH. Their report recast the scope and direction of the project in a more constructive way; the principal change was the proposal to proceed in phases: determine the genetic map by use of principally polymorphic markers, create a physical map consisting of linked cloned cosmids, and focus on developing more cost- and time-efficient means of sequencing DNA. The most important recommendation, in my view, was to include in the project the sequencing of the then-favorite model organisms: Escherichia coli, Saccharomyces cerevisiae, Drosophila melanogaster, Caenorhabditis elegans, and the mouse. It was clear that the new formulation did not threaten research support for those who worked on prokaryotes and lower eukaryotes. More likely, the additional funding would energize research on these organisms. It also provided a livelihood for those interested in mapping their favorite organism and for those committed to cloning and mapping large segments of DNA. In the end, people were mollified by the realization that they would not be left out of the project’s funding. Also, the proposal had a logic for how to proceed and the acceptance that useful information would be generated long before the project was completed.

Sometime after the project was under way, Watson became the director of the project and set the agenda for how the project would proceed. He was committed to a razor-like focus on the development of genetic and physical maps, discouraging and even dismissing proposals that focused on making the work relevant to the biology. Indeed, that strategy was enforced by the study sections that reviewed genome-project grant proposals; proposals involving methods that would further the two mapping projects received preference, whereas those that hinted at deviation from that goal went unfunded. There is little question that Watson’s forceful and committed leadership ensured the project’s success.

It is interesting, in retrospect, that the course Gilbert had proposed for obtaining the human genome sequence—shotgun cloning, sequencing, and assembly of completed bits into the whole—was what carried the day. Also, people who had dismissed the necessity of knowing the sequence of the junk now readily admit that the junk may very well be the crown jewels, the stuff that orchestrates the coding sequences in biologically meaningful activities.

The past few years have revealed unexpected findings regarding noncoding genomic sequences, giving assurance that there is much more to discover in the genome sequences. Moreover, understanding the function of the noncoding genome sequences is very likely to accelerate, as the tools for mining the sequence and the application of robust and large-scale methods for detecting transcription become more refined.

PAUL BERG

Stanford University Medical Center

Stanford

https://www.tcracs.org/tcrwp/1about/1biosketch/1sumex-aim/


LEDERBERG :
https://royalsocietypublishing.org/doi/pdf/10.1098/rsbm.2010.0024
https://profiles.nlm.nih.gov/spotlight/bb/catalog/nlm:nlmuid-101584906X13631-doc
https://collections.nlm.nih.gov/ext/document/101584906X13631/PDF/101584906X13631.pdf
http://abfe.issuelab.org/resources/10969/10969.pdf
Lederberg wrote a bio for Tatum : https://www.annualreviews.org/doi/pdf/10.1146/annurev.ge.13.120179.000245
Nelson Rockefeller killed (died) Jan 1979 - from Encyclopaedia Britannica, "Nelson Rockefeller, vice president of United States" :

Nelson Rockefeller, in full Nelson Aldrich Rockefeller, (born July 8, 1908, Bar Harbor, Maine, U.S.—died January 26, 1979, New York City), 41st vice president of the United States (1974–77) in the Republican administration of Pres. Gerald Ford, four-term governor of New York (1959–73), and leader of the liberal wing of the Republican Party. He unsuccessfully sought the presidential nomination of his party three times.
the rainbow (Google Books, search: "march 1979 rockefeller university genetics") : https://books.google.com/books?id=5B0k-LUDjVEC&pg=PA285&lpg=PA285&dq=march+1979+rockefeller+university+genetics&source=bl&ots=SH0-dBtCGW&sig=ACfU3U3WnSyUunD0LJGcKHURrGGw00xFXw&hl=en&sa=X&ved=2ahUKEwjdr7rT8ffwAhUjKVkFHVSPAeQQ6AEwCXoECAoQAw#v=onepage&q=march%201979%20rockefeller%20university%20genetics&f=false




ALSO Interesting : https://www.encyclopedia.com/science/science-magazines/human-genome-project 

https://www.nature.com/scitable/topicpage/genomic-data-resources-challenges-and-promises-743721/

"Genomic Data Resources: Challenges and Promises"
By: Warren C. Lathe III (OpenHelix), Jennifer M. Williams (OpenHelix), Mary E. Mangan (OpenHelix) & Donna Karolchik (University of California, Santa Cruz Genome Bioinformatics Group) © 2008 Nature Education
Citation: Lathe, W., Williams, J., Mangan, M. & Karolchik, D. (2008) Genomic Data Resources: Challenges and Promises. Nature Education 1(3):2

Computer databases are an increasingly necessary tool for organizing the vast amounts of biological data currently available and for making it easier for researchers to locate relevant information. In 1979, the Los Alamos Sequence Database was established as a repository for biological sequences. In 1982, this database was renamed GenBank and, later the same year, moved to the newly instituted National Center for Biotechnology Information (NCBI), where it lives today. By the end of 1983, more than 2,000 sequences were stored in GenBank, with a total of just under 1 million base pairs (Cooper & Patterson, 2008).
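
As a present-day aside, the GenBank that grew out of that Los Alamos repository can now be queried programmatically. A minimal sketch, using NCBI's public E-utilities "efetch" endpoint as documented by NCBI; the accession chosen here (NC_001422, bacteriophage phiX174) is just a small, convenient example.

# Minimal sketch of fetching a GenBank nucleotide record through NCBI's
# public E-utilities "efetch" endpoint. Endpoint and parameters per NCBI's
# documentation; the accession is only an illustrative choice.
from urllib.parse import urlencode
from urllib.request import urlopen

params = urlencode({
    "db": "nucleotide",   # GenBank's nucleotide database
    "id": "NC_001422",    # phiX174, a classic small genome
    "rettype": "fasta",   # return plain FASTA text
    "retmode": "text",
})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?" + params

with urlopen(url) as response:
    fasta = response.read().decode()

header, _, sequence = fasta.partition("\n")
print(header)
print("sequence length:", len(sequence.replace("\n", "")), "bases")
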
LEDERBERG - "In 1979, he became a member of the U.S. Defense Science Board and the chairman of President Jimmy Carter's President's Cancer Panel."


https://web.stanford.edu/dept/HPS/TimLenoir/IdealAcademy.htm

Prior to the startup of BIONET, GENET was not the only resource for DNA sequences. Several researchers were making their databases available. Margaret Dayhoff had created a database of DNA sequences and some software for sequence analysis for the National Biomedical Research Foundation that was marketed commercially. Walter Goad, a physicist at Los Alamos National Laboratory, collected DNA sequences from the published literature and made them freely available to researchers. But by the late 1970s the number of bases sequenced was already approaching 3 million and expected to double soon. Some form of easy communication between labs and effective data handling was considered a major priority in the biological community. While experiments were going on with GENET a number of nationally prominent molecular biologists had been pressing to start a NIH-sponsored central repository for DNA sequences. An early meeting organized by Joshua Lederberg was held in 1979 at Rockefeller University. The proposed NIH initiative was originally supposed to be coordinated with a similar effort at the European Molecular Biology Laboratory (EMBL) in Heidelberg, but the Europeans became dissatisfied with the lack of progress on the American end and decided to go ahead with their own databank. EMBL announced the availability of its Nucleotide Sequence Data Library in April 1982, several months before the American project was funded. Finally, in August, 1982 the NIH awarded a contract for $3 million over 5 years to the Boston-based firm of Bolt, Berenek, and Newman (BB&N) to set up the national database known as GenBank in collaboration with Los Alamos National Laboratory.


https://www.cnn.com/2019/05/12/health/stanford-geneticist-chronic-fatigue-syndrome-trnd/index.html

But Davis kept innovating, eventually accumulating more than 30 patents for technology he developed. Finally, the world caught up to his vision. The $3.8 billion Human Genome Project began in 1990, with Davis' gene-sequencing technologies at its core. Completed in 2003, it launched a revolution in science. Handing researchers that foundational blueprint for human life gave biologists and doctors what up to that point was an unimagined power to diagnose, treat and ultimately prevent the full gamut of human disease. Davis was shortlisted by The Atlantic, along with SpaceX founder Elon Musk and Amazon founder Jeff Bezos, as someone tomorrow's historians will consider today's greatest inventors. The same prescient mind that dreamed up the Human Genome Project now devotes days to what Davis calls "the last great disease to conquer." He may need all his brilliance to save his son.

https://en.wikipedia.org/wiki/Ronald_W._Davis

LEDERBERG VIDEOS !!! FINALLY !!! https://profiles.nlm.nih.gov/spotlight/bb/catalog?search_field=all_fields