29 July 2007

Jogging in Cambridge, UK (CamUK4Jogs)

Here are four slow jogs in Cambridge, plotted on Google Maps, radiating outwards from Trinity in four directions (in chronological order): CamWest (53:57, 5.1 km), CamNorth (34:30, 3.8 km), CamSouth (59:32, 6.2 km), CamShort (33:35, 3.7 km)

24 July 2007

Cycling in Queens (BikeQueens)

Into Queens over Queensboro, then Roosevelt Island and Triboro to Randall's Island
Route taken on Google Maps (kml)
Summary: approx. 11AM-5:30PM, 31 miles

Total Time (h:m:s)   6:40:21       12:53 pace
Moving Time (h:m:s)  4:06:06       7:55 pace
Distance (mi)        31.07
Moving Speed (mph)   7.6 avg.      231.3 max.
Temperature (°F)     53.6 avg.     55.4 high
Wind Speed (mph)     SSE 3.8 avg.  SSE 6.9 max.
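
For readers curious how the pace column relates to the time and distance rows:
pace is just elapsed time divided by distance. Below is a minimal sketch in
Python (my own helper, not part of the original GPS export) that reproduces the
two pace figures in the table above; the other tables on this page agree to
within a second or two, presumably because the displayed distances are rounded.

def pace(hms, miles):
    """Convert an h:m:s duration over a distance into a min:sec-per-mile pace."""
    h, m, s = (int(x) for x in hms.split(":"))
    sec_per_mile = (h * 3600 + m * 60 + s) / miles  # seconds per mile
    mm, ss = divmod(round(sec_per_mile), 60)
    return "%d:%02d" % (mm, ss)

print(pace("6:40:21", 31.07))  # -> 12:53, the Total Time pace above
print(pace("4:06:06", 31.07))  # -> 7:55, the Moving Time pace above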

22 July 2007

let. to ed. re. "Sowing Seeds Of Cures" -- C&EN

Here's a letter that wasn't published:
I read with great interest the recent article on venture philanthropy. I think
this is an interesting trend allowing philanthropic contributions to energize
the commercial process towards a good end. However, an important aspect that was
not emphasized is the significant potential for conflicts of interest to arise.
Non-profits, such as medical charities, are given special status in the United
States by the tax code. However, in the scenario described in the article where
a philanthropist contributes money to make a biotech investment opportunity
more favorable for a venture capital fund, he is essentially using charitable,
untaxed money towards a profitable end. This raises obvious conflicts: One could
imagine a person contributing money sheltered from taxes to a charity and then
having the charity redirect the funds to a commercial endeavor from which he
would directly benefit. Clearly, safeguards need to be developed to prevent this.


Letter in response to:
http://pubs.acs.org/cen/coverstory/85/8519cover.html
May 7, 2007
Volume 85, Number 19
pp. 19-26
Sowing Seeds Of Cures
As venture capitalists' priorities shift, venture philanthropists fill the gap
in funding of drug discovery by biotechs
Chemical & Engineering News (C&EN)
Lisa M. Jarvis
IN 1989, when scientists found the defective gene that causes cystic fibrosis,
it seemed that a cure, or at least an array of better treatment options, was
just around the corner. Research efforts, largely funded by the Cystic Fibrosis
Foundation (CFF), gained momentum, and by the mid-1990s, scientists had pieced
together much of the complex biology behind this debilitating and eventually
deadly disease...

let. to ed. re. "Biology's Big Bang" -- Economist

Here's a letter to the Economist that wasn't published:
I read with great interest the recent cover article describing biology's big
bang. I agree wholeheartedly with the thrust of this piece. The article
makes the point that there is a revolution in biology akin to that in early
20th-century physics. It also compares the genome to a computer operating
system. One can take these comparisons even further. The revolution in biology
is fundamentally about how a discipline once preoccupied with descriptions of
anatomy and taxonomy is now increasingly concerned with digital information
processing. We are, in fact, witnessing the fusion of parts of biology and
computer science. The new roles found for RNA are so important because of its
central place in cellular information processing.


Letter in response to:
http://www.economist.com/opinion/displaystory.cfm?story_id=9339752
The RNA revolution
Biology's Big Bang
Jun 14th 2007
From The Economist print edition
What physics was to the 20th century, biology will be to the 21st—and RNA will
be a vital part of it.
NATURE is full of surprises. When atoms were first proved to exist (and that was
a mere century ago), they were thought to be made only of electrons and protons.
That explained a lot, but it did not quite square with other observations. Then,
in 1932, James Chadwick discovered the neutron. Suddenly everything made
sense—so much sense that it took only another 13 years to build an atomic bomb....

21 July 2007

let. to ed. re. "A Challenge to Gene Theory, a Tougher Look at Biotech" -- NY Times

Here's a letter to the Times (in response to their one mention of ENCODE) that wasn't published:
As a participating scientist in the consortium cited in the July 1st Sunday
edition, I was excited that the Times covered some of the findings of our
project (ENCODE). The article discussed how the consortium's work is changing
the definition of a gene, and it was fascinating to see how scientific findings
ripple out into commercial and legal contexts. One of the interesting things
about genes is how plastic their definition has been over time. The current
definition, which is being recast by the ENCODE project's findings, derives from
the cracking of the genetic code in the 1960s. However, before that, a gene had
a more abstract definition as a unit of heredity, divorced from the physical
molecules actually encoding it. One of the amazing things about successive
redefinitions of a gene is that they have all been "backwards compatible" in a
scientific sense, still allowing old findings to apply to the current
definitions, with a bit of mental gymnastics. However, maybe we will find that
this backwards compatibility only applies in a scientific sphere and that a
redefinition of the gene will require substantial changes outside of it, in our
notions of commercially viable entities.
(Also, you might note that this subject is closely related to some recent publications, viz.:
http://papers.gersteinlab.org/papers/grgenerev/
http://papers.gersteinlab.org/papers/whatisgene )


Letter in response to:
http://www.nytimes.com/2007/07/01/business/yourmoney/01frame.html
A Challenge to Gene Theory, a Tougher Look at Biotech - New York Times
July 1, 2007
Re:framing
A Challenge to Gene Theory, a Tougher Look at Biotech
By DENISE CARUSO
THE $73.5 billion global biotech business may soon have to grapple with a
discovery that calls into question the scientific principles on which it was
founded. Last month, a consortium of scientists published findings that
challenge the traditional view of how genes function. The exhaustive four-year
effort was organized by the United States National Human Genome Research
Institute and carried out by 35 groups from 80 organizations around the world.
To their surprise, researchers found that the human genome might not be a "tidy
collection of independent genes" after all, with each sequence of DNA linked to
a single function, such as a predisposition to diabetes or heart disease.
Instead, genes appear to operate in a complex network, and interact and overlap
with one another and with other components in ways not yet fully understood.
According to the institute, these findings will challenge scientists "to rethink
some long-held views about what genes are and what they do."...

20 July 2007

Cycling in Brooklyn (Aborted5boro07 + BikeBeltPkwy)

Aborted 5 Boro, following the course route to the Brooklyn Bridge and then improvising to the Verrazano after a long lunch
Route taken on Google Maps (kml)
Summary: approx. 6:30AM-5PM, 46 miles

Total Time (h:m:s)   10:08:40      13:07 pace
Moving Time (h:m:s)  6:39:03       8:35 pace
Distance (mi)        46.4
Moving Speed (mph)   7.0 avg.      67.8 max.
Temperature (°F)     55.6 avg.     62.6 high
Wind Speed (mph)     NE 8.5 avg.   NE 11.5 max.

Central Park to Prospect Park and then onto the Belt Parkway Bikeway
Route taken on Google Maps (kml)
Summary: approx. 8:30AM-5PM, 49 miles, with stops, including lunch
Total Time (h:m:s)   9:47:41       12:05 pace
Moving Time (h:m:s)  6:29:44       8:01 pace
Distance (mi)        48.55
Moving Speed (mph)   7.5 avg.      51.0 max.
Temperature (°F)     67.3 avg.     71.6 high
Wind Speed (mph)     SE 5.3 avg.   SE 12.6 max.

17 July 2007

Cycling on Northern Part of Farmington Valley Greenway (BikingCTGreenwayN)

Actuals
Route taken on Google Maps (kml)
Summary Data (approx. 11:30AM-6:30PM, with stops, 46 miles)

Total Time (h:m:s)   7:05:31       9:19 pace
Moving Time (h:m:s)  5:08:33       6:45 pace
Distance (mi)        45.67
Moving Speed (mph)   8.9 avg.      59.8 max.
Temperature (°F)     82.1 avg.     84.2 high
Wind Speed (mph)     W 9.2 avg.    W 11.5 max.

Planning
Useful links: http://del.icio.us/mbgmbg/FunBikingCTGreenwayN

01 July 2007

let. to ed. re. "A Smarter Web" -- Tech Review

Here's a letter to Technology Review that was published:
We read with interest John Borland's piece on the Semantic Web ("A Smarter Web," March/April 2007). We agree that this is an exciting time in the Semantic Web's development, yet we want to point out that its great degree of structure has drawbacks. As the article noted, Semantic Web users must learn complex ontology languages and structure their information and data using them. This difficulty inhibits the growth of the Semantic Web. It is thus arguable whether the Semantic Web can approach the scale of the standard Web, where anyone can easily create and publish content.
Ideally, we should combine the strengths of the Semantic Web and the normal Web. Search would be a good place to start. Today, global free-text search is the primary means of querying the whole Web, but it provides only coarse-grained access to documents. In contrast, the Semantic Web allows much more precise queries across multiple information sources (say, querying for a particular attribute, such as "street address"). However, it is on a much smaller scale, involving far fewer documents. We could imagine combining normal and Semantic Web queries--for instance, to search the free text of all real-estate Web pages written by women in Boston during the last week for the word "Jacuzzi." Taking this further, the few structured relationships currently in the Semantic Web could be used to refine the results of mainstream search engines.
Finally, as so much activity in the life sciences is focused on large-scale interoperation on the Web (as found in drug discovery), we feel that biological research could serve as a useful guide and driving force for the development of Web 3.0.
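
To make the hybrid-query idea in the second paragraph concrete, here is a
minimal sketch in Python using rdflib (my choice of tools; the letter names
none). The ex: vocabulary and the listings.rdf file are invented for
illustration. The structured half runs as a SPARQL query over semantic-web
data; the free-text half is an ordinary keyword match over the page bodies
the query returns:

from rdflib import Graph

g = Graph()
g.parse("listings.rdf", format="xml")  # hypothetical structured listing data

# Structured (semantic-web) half: typed attributes plus a date constraint.
HYBRID_QUERY = """
PREFIX ex:  <http://example.org/realestate#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
SELECT ?page ?text
WHERE {
    ?page ex:city         "Boston" ;
          ex:authorGender "female" ;
          ex:postedDate   ?date ;
          ex:bodyText     ?text .
    FILTER (?date >= "2007-06-24"^^xsd:date)
}
"""

# Free-text half: plain keyword search over the bodies the query returned.
for row in g.query(HYBRID_QUERY):
    if "jacuzzi" in str(row.text).lower():
        print(row.page)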


Citation of Letter
http://www.technologyreview.com/Infotech/18851/page2/
The Semantic Web
July/August Issue of Technology Review
Mark Gerstein and Andrew Smith
Computational Biology and Bioinformatics Program
Yale University
New Haven, CT


Letter in response to:
http://www.technologyreview.com/Infotech/18395/
Monday, March 19, 2007
Part I: A Smarter Web
New technologies will make online search more intelligent--and may even lead to a "Web 3.0."
By John Borland
Last year, Eric Miller, an MIT-affiliated computer scientist, stood on a beach in southern France, watching the sun set, studying a document he'd printed earlier that afternoon. A March rain had begun to fall, and the ink was beginning to smear....

Original Letter Text (before edit by magazine)

We read with great interest John Borland's March/April 2007 article "A Smarter
Web." We agree that this is an exciting time in the development of the semantic
web (or Web 3.0), and that it is on the cusp of more widespread acceptance and
use. A problem with the semantic web, however, is that it is not as flexible as
the free-text publishing supported by the standard web. As the article noted,
users must learn the semantic web's ontology languages and structure their
information and data using them. This presents a learning curve to users, acting
to inhibit the growth and spread of semantic web data. It is thus arguable
whether the semantic web can approach the huge size of the standard web where
almost anyone can easily create and publish web pages. The standard web will
likely still be the primary web most users see and use for the foreseeable
future, while the semantic web could remain a niche.

We thus feel that a practical direction is to investigate ways that the semantic
web and standard web can work together and leverage each other in a kind of
symbiosis. Keyword-based web search, à la Google, is the primary way of mining the
web for information today, but it provides only coarse-grained topical access to
documents, and there are many kinds of information requests it cannot handle. For
example, queries that combine general relational information about pages (such as
that provided by the semantic web) with keyword-based searches are not supported.
Furthermore, one wants to develop ways of leveraging small amounts of highly
structured information (as in the semantic web) as "training sets" to better
enable querying and clustering of the large bodies of unstructured, free-text
information on the web; i.e., the small amount of highly structured information
could bootstrap, through data mining, the automated organization of the much
larger body of unstructured information in support of better querying.
Since searching is widely perceived to be a crucial web application, the
semantic web's ability to improve it could be of high practical value and an
important driving force to help more fully realize the vision of the semantic
web. An important part of Web 3.0 should thus be to enumerate the kinds of
information requests that could be fruitfully made, and the kinds of information
infrastructure and data mining techniques needed to fulfill them. Finally, there
is much activity and excitement within biological research towards the goal of
truly large-scale integration and interoperation of its vast data, e.g. to aid
in more efficient drug discovery. The life sciences could thus be a useful
guide, test case, and driving force for Web 3.0.
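
As a concrete illustration of the "training sets" idea in the second paragraph,
here is a minimal sketch using scikit-learn (my choice; the letter names no
tools, and every string below is invented). A couple of documents whose topics
are known from structured markup train a classifier that then organizes
unlabeled free text:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny seed set; in the scenario above, these labels would come from
# semantic-web markup rather than manual annotation.
seed_docs   = ["kinase inhibitor clinical trial results",
               "sunny two-bedroom house for rent"]
seed_labels = ["drug-discovery", "real-estate"]

# Larger unstructured corpus to organize.
web_docs = ["new kinase target reported in a screening assay",
            "sunny loft with jacuzzi near the park"]

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(seed_docs), seed_labels)
print(list(clf.predict(vec.transform(web_docs))))
# likely -> ['drug-discovery', 'real-estate']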