Copying DNA

This is my first post on copying DNA. Today I describe how copies of DNA are made, both in the laboratory and at genome scale.

• Polymerase Chain Reaction (also called PCR)
• A method of making many copies of a piece of DNA

Steps in Copying DNA
• A DNA molecule is placed in a small test tube
• A DNA polymerase that can work at high temperatures (such as heat-stable Taq polymerase) is added
• The DNA is heated to separate the two strands
• Primers, short pieces of DNA complementary to the ends of the molecule to be copied, are added
• The tube is cooled so the primers bind, and DNA polymerase adds new bases to complete each separated strand
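The cycle above can be sketched as a toy Python model. This is illustrative only: the function names and the example sequence are ours, not a standard bioinformatics API.

```python
# Toy model of the PCR steps above (illustrative names and sequence).

def reverse_complement(seq):
    """Return the reverse complement of a DNA strand."""
    pairs = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(pairs[base] for base in reversed(seq))

def pcr_copies(initial_copies, cycles):
    """Each heat/cool cycle roughly doubles the number of copies."""
    return initial_copies * 2 ** cycles

template = "ATGCGTACGTTAGC"
# Primers are short sequences complementary to the ends of the template.
forward_primer = template[:5]
reverse_primer = reverse_complement(template[-5:])

print(forward_primer, reverse_primer)
print(pcr_copies(1, 30))  # 30 cycles starting from a single molecule
```

Because copy number roughly doubles each cycle, 30 cycles turn a single molecule into about a billion copies, which is why PCR can amplify even trace amounts of DNA.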

Human Genome Project

• Started in 1990
• Research effort to sequence all of our DNA (46 chromosomes)
• Over 3.3 billion nucleotides
• Mapping every gene location (loci)
• Conducted by scientists around the world

HGP Insights
• Only 2% of the human genome codes for proteins (exons)
• The other 98% (introns and intergenic regions) is non-coding
• Only about 20,000 to 25,000 genes (expected 100,000)
• Proteome – organism’s complete set of proteins
• About 8 million single nucleotide polymorphisms (SNP) – places where humans differ by a single nucleotide
• About ½ of the genome comes from transposons (pieces of DNA that move to different locations on chromosomes)
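The idea of a SNP from the list above can be illustrated in a few lines of Python. The sequences are toy examples and the helper name is ours:

```python
# A SNP is a position where two otherwise matching sequences differ
# by a single nucleotide. (Toy sequences for illustration.)

def snp_positions(seq_a, seq_b):
    """Return positions where two aligned sequences differ."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned"
    return [i for i, (a, b) in enumerate(zip(seq_a, seq_b)) if a != b]

person_1 = "ATGCGTACGT"
person_2 = "ATGCATACGT"
print(snp_positions(person_1, person_2))  # → [4], a single-nucleotide difference
```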

Benefits of the Human Genome Project
• Improvements in medical prevention of disease, gene therapies, diagnostic techniques …
• Production of useful protein products for medicine, agriculture, bioremediation, and the pharmaceutical industry
• Improved bioinformatics – using computers to help in DNA sequencing

The Human Genome Project was started in 1990 with the goal of sequencing and identifying all three billion chemical units in the human genetic instruction set, finding the genetic roots of disease, and then developing treatments. With the sequence in hand, the next step was to identify the genetic variants that increase the risk for common diseases like cancer and diabetes.
It was far too expensive at that time to think of sequencing patients’ whole genomes. So the National Institutes of Health embraced the idea for a "shortcut", which was to look just at sites on the genome where many people have a variant DNA unit. The theory behind the shortcut was that since the major diseases are common, so too would be the genetic variants that caused them. Natural selection keeps the human genome free of variants that damage health before children are grown, the theory held, but fails against variants that strike later in life, allowing them to become quite common. (In 2002 the National Institutes of Health started a $138 million project called the HapMap to catalog the common variants in European, East Asian and African genomes.)
The genome was broken into smaller pieces, approximately 150,000 base pairs in length. These pieces were ligated into a type of vector known as a "bacterial artificial chromosome", or BAC, which is derived from a genetically engineered bacterial chromosome. The vectors containing the fragments were inserted into bacteria, where they were copied by the bacterial DNA replication machinery. Each piece was then sequenced separately as a small "shotgun" project and assembled, and the assembled 150,000 base-pair pieces were joined to reconstruct whole chromosomes. This is known as the "hierarchical shotgun" approach, because the genome is first broken into relatively large chunks, which are mapped to chromosomes before being selected for sequencing.
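The first step of this hierarchical shotgun approach, cutting the genome into BAC-sized pieces, can be sketched as follows (the function name is ours, for illustration):

```python
# Sketch of the fragmentation step: the genome is cut into large,
# roughly 150,000 bp pieces, each cloned into a BAC and sequenced separately.

def fragment_genome(genome, piece_size=150_000):
    """Split a sequence into consecutive pieces of at most piece_size bases."""
    return [genome[i:i + piece_size] for i in range(0, len(genome), piece_size)]

# A 3.3-billion-base genome yields roughly 22,000 BAC-sized pieces:
print(3_300_000_000 // 150_000)  # → 22000
```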
Funding came from the US government through the National Institutes of Health, from the Wellcome Trust, a UK charity, and from numerous other groups around the world. The funding supported a number of large sequencing centers, including those at the Whitehead Institute, the Sanger Centre, Washington University in St. Louis, and Baylor College of Medicine.
The Human Genome Project is considered a Mega Project because the human genome has approximately 3.3 billion base-pairs.
If the sequence were stored in book form, with 1,000 base pairs recorded per page and 1,000 pages per book, 3,300 such books would be needed to store the complete genome. Expressed in units of computer data storage, 3.3 billion base pairs recorded at 2 bits per pair comes to about 786 megabytes of raw data. This is comparable to a fully loaded data CD.
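The arithmetic above can be checked directly; a quick sketch in Python, using the figures quoted in the text:

```python
# Checking the storage estimates for a 3.3-billion-base-pair genome.
base_pairs = 3_300_000_000

# Book form: 1,000 bp per page, 1,000 pages per book.
books = base_pairs // (1_000 * 1_000)
print(books)  # → 3300

# Computer form: 2 bits suffice to encode each of A, C, G, T.
bits = base_pairs * 2
mebibytes = bits / 8 / 1024 / 1024
print(int(mebibytes))  # → 786
```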

Genetic Engineering and Fish

What is a genetically engineered fish?

Genetically engineered (also called transgenic) fish are those that carry and transmit one or more copies of a recombinant DNA sequence (i.e., a DNA sequence produced in a laboratory using in vitro techniques). Because genetic engineering is defined by the technology used to create and transfer the DNA sequence, and not by the source species of the donor DNA, even fish that are engineered with DNA derived entirely from fish species are considered genetically engineered. Currently, no genetically engineered fish has been approved for food production in the United States. To date only one company, AquaBounty, has publicly announced that it has requested FDA approval to market a genetically engineered food animal: a growth-enhanced Atlantic salmon capable of growing 4 to 6 times faster (but not larger) than standard salmon grown under the same conditions.

What are the science-based concerns associated with genetically engineered fish?

The greatest science-based concerns associated with genetically engineered fish are those related to their inadvertent release or escape. Concerns range from interbreeding with native fish populations to ecosystem effects resulting from heightened competition for food and prey species. There is, in principle, no difference between the types of concerns associated with the escape of genetically engineered fish and those related to the escape of fish that differ from native populations in some other way, such as captively bred populations (Lynch and O'Hely 2001). Ecological risk assessment requires an evaluation of the fitness of the genetically engineered fish relative to non–genetically engineered fish in the receiving population in order to determine the probability that the transgene will spread into the native population. Ecological impacts are the result of the characteristics of the organism, regardless of whether the organism acquired those characteristics through natural selection, artificial selection, or genetic engineering. The presence of genetically engineered fish does not a priori have a negative effect on native populations. If genetically engineered fish are ill-suited to an environment or are physically unable to survive outside of containment, they may pose little risk to native ecosystems. Regulators apply a scientifically derived, risk-based framework to assess the ecological risks involved with each transgene, species, and receiving-ecosystem combination on a case-by-case basis. Risks will be quite specific to the gene, species, and site in question, and simple generalizations concerning the risks (and benefits) of genetically engineered fish are not scientifically meaningful.

Commercialization of genetically engineered fish will likely depend on the development of effective containment strategies. If genetically engineered fish are adequately contained, they pose little risk to native populations. The NRC recommended the simultaneous use of multiple containment strategies for genetically engineered fish (National Research Council 2004). Physical containment is an obvious first line of defense to prevent the escape of genetically engineered fish. Such measures may include building facilities on land or in locations removed from native populations, or ensuring that water chemistry (temperature, pH, salinity, and concentrations of certain chemicals) is lethal to one or more life stages of the genetically engineered fish, for example by treating effluent water to prevent the release of viable gametes or fry. Biological containment or bioconfinement approaches, such as sterilization, are also being developed.