http://ling.auf.net/lingBuzz/000411
Andrew Nevins (Harvard)
David Pesetsky (MIT)
Cilene Rodrigues (Universidade Estadual de Campinas)
March 2007
Everett (2005) has claimed that the grammar of Pirahã is exceptional in displaying "inexplicable gaps", that these gaps follow from an alleged cultural principle restricting communication to "immediate experience", and that this principle has "severe" consequences for work on Universal Grammar. We argue against each of these claims. Relying on the available documentation and descriptions of the language (especially the rich material in Everett (1986; 1987b)), we argue that many of the exceptional grammatical "gaps" supposedly characteristic of Pirahã are misanalyzed by Everett (2005) and are neither gaps nor exceptional among the world's languages. We find no evidence, for example, that Pirahã lacks embedded clauses, and in fact find strong syntactic and semantic evidence in favor of their existence in Pirahã. Likewise, we find no evidence that Pirahã lacks quantifiers, as claimed by Everett (2005). Furthermore, most of the actual properties of the Pirahã constructions discussed by Everett (for example, the ban on prenominal possessor recursion and the behavior of wh-constructions) are familiar from languages whose speakers lack the cultural restrictions attributed to the Pirahã. Finally, following mostly Gonçalves (1993; 2000; 2001), we also question some of the empirical claims about Pirahã culture advanced by Everett in primary support of the "immediate experience" restriction. We are left with no evidence of a causal relation between culture and grammatical structure. Pirahã grammar contributes to ongoing research into the nature of Universal Grammar, but presents no unusual challenge, much less a "severe" one.
Reference:
lingBuzz/000411 (please use that when you cite this article, unless you want to cite the full url: http://ling.auf.net/lingBuzz/000411)
Sunday, September 21, 2008
The tribe that can count to two
http://archiviostorico.corriere.it/2007/aprile/03/tribu_che_contare_fino_due_co_9_070403121.shtml
An isolated Amazonian population ignites the debate over the existence of an innate universal grammar
Language: does it depend on ways of life, or does a mental matrix exist?
In recent years, the language, culture and beliefs of an isolated Amazonian population, the Pirahã, about three hundred individuals spread across eight villages along the banks of the Maici river, seemed to have held linguistics and anthropology in check. The affair soon spilled well beyond academic boundaries, finding a wide echo in the press. The reason for all the stir is quickly told. The American linguist Daniel Everett, now a professor at the University of Illinois, after living for many years in contact with the Pirahã, had reported some astonishing data in his doctoral dissertation and then in various specialist articles. According to Everett, the Pirahã language has the most restricted repertoire of speech sounds ever recorded (just ten phonemes), only two words for colours (light and dark), no words for numbers beyond one and two (and even these with only an approximate meaning), a single word for father and mother, and no way to express a sentence containing a subordinate clause, such as "I told you that the child is hungry." The list of these radical linguistic poverties is long. Adult Pirahã are strictly monolingual and incapable of learning any other language. But there is much more. The Pirahã do not bother to trace kinship relations beyond those with their own brothers and half-brothers; they have no conception that the world existed before the oldest people in the village were born, nor that a canoe and its occupants continue to exist after rounding a bend in the river and disappearing from view. According to Everett, the narrow confines of immediate, direct experience enclose their entire mental world.
EVERY EVENING - Everett also recounts that, together with his wife Keren, herself a linguist, he patiently tried, every evening for months on end and at the explicit request of the Pirahã, to teach them the numbers from one to nine in Brazilian Portuguese, since their language has no numbers. After months of this voluntary evening school, the adult Pirahã reportedly declared, with great regret: "Our heads are too hard." Pirahã children manage to learn numbers, but the adults do not. In their barter with occasional Brazilian traders, the Pirahã apply fickle criteria: the same individual sometimes demands a great deal of merchandise in exchange and sometimes settles for much less, for identical goods. The Pirahã have the distinct feeling that the traders take advantage of them, and they would like to learn to do arithmetic, but they have resigned themselves to not managing it. These accounts by the Everetts are in sharp disagreement with a great deal of data from other linguists and anthropologists on populations that also speak languages lacking a number system ("one, two, three, many" is the typical case). The late MIT linguist Kenneth Hale, an unquestioned expert on Australian Aboriginal languages, reported instead that speakers of those languages have no difficulty learning a number system borrowed from other languages, and afterwards can do arithmetic like the rest of us.
MONTHS - The psychologist Peter Gordon of Columbia University, after spending a few months with the Pirahã and probing their limited capacity to estimate numerical quantities, published an article in Science in 2004 entitled "Life without numbers".
Gordon states that, like pigeons and very young children, adult Pirahã cannot count beyond three and only roughly estimate the difference between large and small groups of objects. Their language, moreover, according to the Everetts and to Gordon, does not even have words to express comparatives (as much as, more than, less than). In a recent interview with New Scientist, Everett did not mince words: "The Pirahã language is the final nail in the coffin of the Chomskyan theory that an innate universal grammar exists." Despite the immense following won by Chomsky's theories, by which he says he himself was inspired in the past, Everett presents the Pirahã as living proof that language and thought are entirely shaped by culture, by the experience of the senses and by ways of life. A sharp reaction to these theses was not long in coming. In recent days, a distinguished MIT linguist, David Pesetsky (holder of the chair previously occupied by Noam Chomsky, who is still very active but officially retired), a talented young Harvard phonologist, Andrew Nevins, and a Brazilian linguist, Cilene Rodrigues, an expert in comparative syntax, have made available on the Internet a text of 60 dense pages in which they refute Everett's conclusions point by point (http://ling.auf.net/lingBuzz/000411). Sifting through Everett's own, often contradictory, data, these scholars show that some of the alleged limitations of the Pirahã language turn out to be purely illusory, while others are real but are also found in languages very distant from Pirahã, and distant from one another, such as German, Chinese, Hebrew, Bengali, the language of the Wappo Indians of California and that spoken by the Circassians of the Caucasus. Since these are peoples with very different cultures and ways of life, such shared linguistic peculiarities cannot, with all due respect to Everett, have been shaped by environmental and social factors. No nail and no coffin, then, but a careful new vindication of the hypothesis that variation among human languages reflects variation within a common, deep mental matrix, which obviously interfaces with culture but is not shaped by it.
A SEDUCTIVE IDEA - Pesetsky, Nevins and Rodrigues rightly insist on a central lesson: what is universal and common to all languages, Pirahã included, is not one particular linguistic form or another, but a fixed menu of alternative linguistic forms, a menu from which each language chooses as it pleases. Nevins, in particular, insists on one point: "Our analysis confirms the great interest of the Pirahã case; it certainly does not diminish it. Many people find intuitively seductive the idea that languages are shaped by culture and by ways of life. It is all the more interesting to show, once again, precisely with a language as unusual and as remote from us as Pirahã, that deep syntactic similarities exist between the languages of very different cultures." The Brazilian anthropologist Marco Antonio Gonçalves has collected various elaborate narratives among the Pirahã. Here is one, in summary: the demiurge Igagai regenerated their world after a flood and then gave women fire for cooking. The world has many levels; it has always existed, yet it is also rebuilt every day. Perhaps these Pirahã stories are not myths in the strict sense; perhaps they are simple tales.
But how can one fail to draw parallels with Noah, Sisyphus, Prometheus and Heraclitus? Perhaps for myths, too, there is a fixed menu from which all of humanity, over time, chooses what (as Claude Lévi-Strauss said) "is good to think with".
Industrial Metabolism
http://www.unu.edu/unupress/unupbooks/80841e/80841E00.htm
Edited by Robert U. Ayres and Udo E. Simonis
©The United Nations University, 1994
The views expressed in this publication are those of the authors and do not necessarily reflect the views of the United Nations University.
United Nations University Press
The United Nations University
53-70 Jingumae 5-chome, Shibuya-ku
Tokyo 150, Japan
Tel.: (03) 3499-2811. Fax: (03) 3406-7345.
Telex: J25442. Cable: UNATUNIV TOKYO.
Typeset by Asco Trade Typesetting Limited, Hong Kong
Printed by Permanent Typesetting and Printing Co., Ltd., Hong Kong
Cover design by Apex Production, Hong Kong
UNUP-841
ISBN 92-808-0841-9
United Nations Sales No. E.93.III.A.3
03500 P
Contents
Note to the reader from the UNU
Acknowledgements
Introduction
Part 1: General implications
1. Industrial metabolism: Theory and policy
What is industrial metabolism?
The materials cycle
Measures of industrial metabolism
Policy implications of the industrial metabolism perspective
References
2. Ecosystem and the biosphere: Metaphors for human-induced material flows
Introduction
The ecosystem analogue
The environmental spheres analogue: Atmosphere, hydrosphere, lithosphere, and biosphere
Summary and conclusions
References
3. Industrial restructuring in industrial countries
Introduction
Identifying indicators of environmentally relevant structural change
Structural change as environmental relief
Environmentally relevant structural change: Empirical analysis
Typology of environmentally relevant structural change
Specific conclusions
General conclusions
4. Industrial restructuring in developing countries: The case of India
Industrial metabolism and sustainable development
Industry and sustainable development
Resource utilization
Energy efficiency: An overview
Energy use in Indian industry: A case-study
Conclusions
References
5. Evolution, sustainability, and industrial metabolism
Introduction
Technical progress and reductionism
The mechanical paradigm
The evolution of ecological structure
Discussion
Part 2: Case-studies
6. Industrial metabolism at the national level: A case-study on chromium and lead pollution in Sweden, 1880-1980
Introduction
The use of chromium and lead in Sweden
Calculation of emissions
The development of emissions over time
The emerging immission landscape
Conclusions
References
7. Industrial metabolism at the regional level: The Rhine Basin
Introduction
Geographic features of the Rhine basin
Methodology
The example of cadmium
Conclusions
References
8. Industrial metabolism at the regional and local level: A case-study on a Swiss region
Introduction
Methodology
Results
Conclusions
References
9. A historical reconstruction of carbon monoxide and methane emissions in the United States, 1880-1980
Introduction
Carbon monoxide (CO)
Methane (CH4)
References
10. Sulphur and nitrogen emission trends for the United States: An application of the materials flow approach
Introduction
Sulphur emissions
Nitrogen oxides emissions
Conclusion
References
11. Consumptive uses and losses of toxic heavy metals in the United States, 1880-1980
Introduction
Production-related heavy metal emissions
Emissions coefficients for production
Consumption-related heavy metal emissions
Emissions coefficient for consumption
Historical usage patterns
Conclusions
References
Appendix
Part 3: Further implications
12. The precaution principle in environmental management
Introduction
Precaution and "industrial metabolism"
Precaution: A case-study
History of the precaution principle
The precaution principle in international agreements
Precaution on the European stage
Precaution as a science-politics game
Precaution on the global stage
References
13. Transfer of clean(er) technologies to developing countries
Introduction
Sustainable development
Environmentally sound technology, clean(er) technology
Industrial metabolism
Knowledge and technology transfer
Endogenous capacity
Crucial elements of endogenous capacity-building
International cooperation for clean(er) technologies
Conclusions
Two case-studies
References
Bibliography
14. A plethora of paradigms: Outlining an information system on physical exchanges between the economy and nature
Introduction
Distinguishing between "harmful" and "harmless" characteristics of socio-economic metabolism with its natural environment
Outline of an information system for the metabolism of the socio-economic system with its natural environment
An empirical example for ESIs: Material balances and intensities for the Austrian economy
Purposive interventions into life processes (PILs)
Conclusions
References
Bibliography
Contributors
Saturday, September 20, 2008
Recognizing a nose from a few pixels: software recreates hidden faces
http://www.repubblica.it/2008/09/sezioni/scienza_e_tecnologia/software-volti-nascosti/software-volti-nascosti/software-volti-nascosti.html
An algorithm translates the pixels by drawing on a database of human faces. Possible applications include identifying criminals and missing persons
At the airport, at the supermarket, while we drive: cameras and webcams are everywhere, recording our movements just as Orwell foresaw in his masterpiece 1984. But perhaps not even the visionary English writer would have imagined that one day we would manage to identify a face starting from a frame of just a few pixels. The team of Pablo Hennings-Yeomans, a researcher at Carnegie Mellon University, Pennsylvania, has brought that imagined future into the present by developing software capable of reconstructing facial features from very low-resolution images. It is an application that could be used in countless ways, from identifying criminals and missing persons to retrieving videos and photos on the web. Beyond the practical implications, what the researchers proudly announced at the International Conference on Biometrics 2008 is that facial identification has taken a real step forward. "The systems used today," explains Hennings-Yeomans, "take into account lighting, the angle of the face and the type of camera used, but all they do is transform a low-resolution image into another one that does not exist. From a blurry photo, the computer reconstructs a face that is recognizable to the human eye but often different from the person being sought." The software designed by the Pennsylvania researchers instead uses an algorithm that translates the low-resolution pixels into real images, drawing the necessary information from a database of 300 human faces. From each face, the system "extracts" linear features and encodes them, creating an immediate association between the low-resolution digital image and the facial features.
Together with computer engineer B. Vijaya Kumar and Microsoft Research scientist Simon Baker, Hennings-Yeomans has written software that combines the precision of a high-resolution algorithm with the breadth of information of a second one, designed to catalogue facial features. "In this way," the researcher continues, "we avoid distortions that can be dangerous, especially in the forensic field." The project, "Recognition of low-resolution faces using multiple still images and multiple cameras," has already been tested successfully and works even better when images from several different cameras are used. Naturally, the authors caution, the software can still be improved. "But soon you will search for and find things on Google by entering a fragment of an image instead of text. Soon even the most unrecognizable images will hold no secrets," concludes Hennings-Yeomans. Beware, then: the big eye not only watches us, it can also recognize us from afar. Much to the relief of lawyers, prosecutors and the police. Even if the intimacy of the heart, as Orwell wrote, remains unpredictable.
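As a very rough illustration of the approach described above (and not the CMU team's actual algorithm), the Python sketch below learns a linear mapping from low-resolution pixel vectors straight into an identity-feature space built from a gallery of known faces, then matches a new low-resolution probe in that space. The image sizes, the ridge parameter and the random "gallery" are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "gallery": 300 face images of 32x32 pixels, flattened (stand-in for a real face database).
n_faces, hi_side, lo_side = 300, 32, 8
gallery_hi = rng.random((n_faces, hi_side * hi_side))

def downsample(img_flat, hi=32, lo=8):
    """Average-pool a flattened hi x hi image down to lo x lo (simulates a low-res camera frame)."""
    img = img_flat.reshape(hi, hi)
    f = hi // lo
    return img.reshape(lo, f, lo, f).mean(axis=(1, 3)).ravel()

gallery_lo = np.array([downsample(x, hi_side, lo_side) for x in gallery_hi])

# Identity features: project high-res faces onto their leading principal components ("linear features").
mean_hi = gallery_hi.mean(axis=0)
U, S, Vt = np.linalg.svd(gallery_hi - mean_hi, full_matrices=False)
components = Vt[:40]                                  # 40 leading eigenfaces
gallery_feat = (gallery_hi - mean_hi) @ components.T

# Learn a ridge-regularised linear map from low-res pixels directly into feature space,
# instead of first hallucinating a high-res image and recognising that.
lam = 1e-3
A = np.linalg.solve(gallery_lo.T @ gallery_lo + lam * np.eye(gallery_lo.shape[1]),
                    gallery_lo.T @ gallery_feat)

def identify(probe_lo):
    """Map a low-res probe into feature space and return the index of the closest gallery face."""
    feat = probe_lo @ A
    d = np.linalg.norm(gallery_feat - feat, axis=1)
    return int(np.argmin(d))

# Usage: a noisy low-res frame of gallery face 17 should (usually) match back to index 17.
probe = downsample(gallery_hi[17], hi_side, lo_side) + 0.01 * rng.standard_normal(lo_side * lo_side)
print(identify(probe))
```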
Labels:
algorithm,
identification,
low res,
Mellon University,
Pablo Hennings-Yeomans,
pixel,
recognition
Thursday, September 18, 2008
BlackBox Project in an abandoned coal mine
Sun and a consortium of other businesses are going to lower Blackbox self-contained computing facilities into a Japanese coal mine to set up an underground datacentre, using up to 50 percent less power than a ground-level datacentre.
The coolant will be ground water and the site's temperature is a constant 15 degrees Celsius (59 degrees Fahrenheit) all year, meaning no air-conditioning will be needed outside the containers. This reduces the energy required for the water chillers, used with surface-level Blackbox containers.
The group estimates that up to $9 million of electricity costs could be saved annually if the centre were to run 30,000 server cores.
Sun is working with eleven other companies, including the ISP Internet Initiative Japan, BearingPoint, Itochu Techno-Solutions and NS Solutions. They will form a joint venture with Sun. NTT Communications and Chuo University are also involved.
The disused coal mine is located in the Chubu region on Japan's Honshu island. Sun will build 30 Blackbox self-contained datacentres containing a total of 10,000 servers (cores). This can be increased to 30,000 cores if there is the demand for it.
The containers will be lowered 100m into the mine and linked to power, water cooling and network lines via external connectors.
Sun has been developing its Blackbox concept for three years and a typical one has 250 servers mounted in seven racks inside a standard 20-foot shipping container. Sun says that with T-series processors, a single Blackbox can hold up to 2,000 cores, providing 8,000 simultaneous processing threads.
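A quick back-of-envelope check of the figures quoted above (the $9 million annual saving and the 30,000-core build-out) might look like the sketch below; the electricity price is an assumed value, not one given in the article.

```python
# Rough sanity check of the quoted savings; the price and the 50% interpretation are assumptions.
cores = 30_000
annual_saving_usd = 9_000_000
assumed_price_per_kwh = 0.12          # assumed industrial electricity price, USD/kWh

saved_kwh_per_year = annual_saving_usd / assumed_price_per_kwh
saved_avg_kw = saved_kwh_per_year / (365 * 24)
print(f"Implied average power saving: {saved_avg_kw:,.0f} kW")            # roughly 8,600 kW

# If that saving really is about 50% of a surface datacentre's draw, the underground
# facility itself would draw a similar amount, i.e. roughly this much per core:
watts_per_core_underground = saved_avg_kw * 1000 / cores
print(f"Implied draw per core underground: {watts_per_core_underground:,.0f} W")  # roughly 290 W
```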
Such a subterranean datacentre will be easier to secure against unauthorised entry and terrorist attacks. The Blackbox containers are robust enough to withstand earthquakes, being capable of withstanding a quake of magnitude 6.7 on the Richter scale. The Nihonkai-Chubu earthquake shook the region in 1983.
The project has been initially costed at $405 million and the site should start offering datacentre services to public and private sector customers in April, 2010.
Wednesday, September 17, 2008
Google Eyes Offshore, Wave-Powered Data Centers
Google data centers may someday float on the ocean. The search giant recently filed a patent for a “water-based data center,” which uses ocean surface waves to power and cool the facility. The patent also confirms Google’s development of “crane-removable modules,” a container-based data center, writes Rich Miller on Data Center Knowledge.
According to the patent, these floating data centers will be located 3 to 7 miles off-shore and reside in 50 to 70 meters of water. The data centers will incorporate Pelamis Wave Energy Converter units that can turn ocean surface waves into electricity and can be combined to form “wave farms.”
“If perfected, this approach could be used to build 40 megawatt data centers that don’t require real estate or property taxes,” writes Miller. But he questions which laws would govern the consumer data managed from the offshore location.
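To put Miller's 40-megawatt figure in perspective, a purely illustrative sizing calculation for such a "wave farm" could look like this; first-generation Pelamis machines were rated at about 750 kW, while the 30 percent capacity factor is an assumption, not a number from the patent.

```python
import math

# Illustrative sizing only; the capacity factor is an assumed average output vs. nameplate rating.
target_mw = 40
pelamis_rated_kw = 750          # nominal rating of a first-generation Pelamis unit
capacity_factor = 0.30          # waves are intermittent; assumed long-run average

units_by_nameplate = math.ceil(target_mw * 1000 / pelamis_rated_kw)
units_by_average = math.ceil(target_mw * 1000 / (pelamis_rated_kw * capacity_factor))

print(units_by_nameplate)   # about 54 units to reach 40 MW of nameplate capacity
print(units_by_average)     # about 178 units if 40 MW must be delivered on average
```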
Back in January, according to Miller, International Data Security (IDS) said it was planning to build up to 50 data centers on cargo ships moored at piers, with data center space below-deck and container-based data centers being housed above deck.
In August, IBM announced it’s planning to build a $360 million data center. Although it will not be floating in the sea, the data center will also take a modular approach to construction, which the company says can defer significant capital costs and slash energy use by 50 percent.
Sun has also been working in the modular data center space. It has launched Project BlackBox, an energy efficient modular data center with eight racks in a shipping container.
Google search finds seafaring solution
http://technology.timesonline.co.uk/tol/news/tech_and_web/the_web/article4753389.ece
Google may take its battle for global domination to the high seas with the launch of its own “computer navy”.
The company is considering deploying the supercomputers necessary to operate its internet search engines on barges anchored up to seven miles (11km) offshore.
The “water-based data centres” would use wave energy to power and cool their computers, reducing Google’s costs. Their offshore status would also mean the company would no longer have to pay property taxes on its data centres, which are sited across the world, including in Britain.
In the patent application seen by The Times, Google writes: “Computing centres are located on a ship or ships, anchored in a water body from which energy from natural motion of the water may be captured, and turned into electricity and/or pumping power for cooling pumps to carry heat away.”
The increasing number of data centres necessary to cope with the massive information flows generated on popular websites has prompted companies to look at radical ideas to reduce their running costs.
The supercomputers housed in the data centres, which can be the size of football pitches, use massive amounts of electricity to ensure they do not overheat. As a result the internet is not very green.
Data centres consumed 1 per cent of the world’s electricity in 2005. By 2020 the carbon footprint of the computers that run the internet will be larger than that of air travel, a recent study by McKinsey, a consultancy firm, and the Uptime Institute, a think tank, predicted.
In an attempt to address the problem, Microsoft has investigated building a data centre in the cold climes of Siberia, while in Japan the technology firm Sun Microsystems plans to send its computers down an abandoned coal mine, using water from the ground as a coolant. Sun said it could save $9 million (£5 million) of electricity costs a year and use half the power the data centre would have required if it was at ground level.
Technology experts said Google’s “computer navy” was an unexpected but clever solution. Rich Miller, the author of the datacentreknowledge.com blog, said: “It’s really innovative, outside-the-box thinking.”
Google refused to say how soon its barges could set sail. The company said: “We file patent applications on a variety of ideas. Some of those ideas later mature into real products, services or infrastructure, some don’t.”
Concerns have been raised about whether the barges could withstand an event such as a hurricane. Mr Miller said: “The huge question raised by this proposal is how to keep the barges safe.”
Labels:
energy,
extraterritoriality,
google,
offshore,
wave,
pelamis,
server,
Uptime Institute
Monday, September 15, 2008
Is wind the new ethanol?
http://www.theatlantic.com/doc/200810/world-in-numbers
Blowback
These are boom times for wind power. T. Boone Pickens, the wildcatter turned oil baron, is building the world’s biggest wind farm, in the dry scrub of the Texas Panhandle—a $10 billion bet on wind’s future. Twenty-eight states have set ambitious mandates for renewable energy, with wind power shouldering most of the load; many compel electric utilities to get at least 20 percent of their supply from wind and other renewable sources between 2015 and 2025.
Those requirements, along with a generous federal subsidy (20 percent of wind energy’s costs), have fostered a turbine-building frenzy. Overall capacity grew by 45 percent last year alone. Several wind-power companies have been snapped up in recent years in a string of multibillion-dollar deals. In May, Jim Cramer talked up wind stocks on Mad Money while assembling a model turbine in the studio.
And why not? Wind power seems to promise zero emissions and an endless supply of cheap power.
Still, it’s hard to ignore the parallels to the recent ethanol boom, which was also fueled by mandates and subsidies, and which is now viewed almost universally as a disaster. Wind power is unlikely to cause a global food crisis. But heedless investment in it may provoke blowback of a different sort.
Though wind advocates say that we can reliably and economically use wind for 20 percent of our power needs, the experience of Texas, which leads the nation in wind power—2.9 percent of its electricity comes from wind—highlights two big problems: transmission and variability.
Pickens’s windmills (like most of Texas’s) will be in the west, where the wind blows the most. The big cities are in the east. This problem plagues wind power nationally: people typically don’t live where the wind blows hardest, so you have to send power from, say, upstate to downstate New York, or from the Dakotas to the cities of the Midwest.
Texas expects to max out its east-west transmission lines by the end of the year. More wind power means new transmission lines, which will cost between $3 billion and $6.4 billion. Accommodating wind power on the scale foreseen nationally may require 12,000 to 19,000 miles of new high-power lines crisscrossing the country (by way of comparison, the interstate highway system runs 46,837 miles), plunging large parts of America into NIMBY hell.
Wind variability presents a more fundamental problem. Texas’s experience, at less than 3 percent wind power, is again instructive. In February, an unexpected cold front calmed the state’s wind farms. As power ran out and backup generation proved inadequate, grid operators were forced to call on large industrial and commercial users to power down.
Wind farms tend to produce the most energy when it’s not needed—at night and in the spring and fall, when demand is low. The hottest, highest-demand days of the year are the days when wind’s contribution is likely to be near zero. So wind, if it is to meet demand reliably, must be backed up, typically by (emissions-spewing) natural-gas plants that can ramp up and down quickly.
Powering plants up and down is inefficient, and when backup power is included, wind energy costs 10 to 30 percent more than fossil-fuel energy, even without factoring in the cost of new power lines. (Wind-energy costs have risen, not fallen, in recent years.) And once you include backup power, the cost of averting carbon-dioxide emissions by building a wind plant rises to $67 a ton, according to Cambridge Energy Research Associates. Less sexy emissions-reduction strategies, such as increasing efficiency at current electrical plants, cost between $10 and $30 a ton.
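The cost comparison above can be made concrete with a small worked example; the baseline fossil-fuel price below is an assumed figure, while the 10 to 30 percent premium and the per-tonne abatement costs are the ones quoted in the article.

```python
# Illustrative comparison of the costs quoted above; the baseline electricity price is assumed.
baseline_cents_per_kwh = 9.0                 # assumed fossil-fuel electricity cost
wind_premium = (0.10, 0.30)                  # 10-30% premium once backup power is included

wind_cents_per_kwh = [baseline_cents_per_kwh * (1 + p) for p in wind_premium]
print(f"Wind incl. backup: {wind_cents_per_kwh[0]:.1f}-{wind_cents_per_kwh[1]:.1f} cents/kWh "
      f"vs {baseline_cents_per_kwh:.1f} for fossil fuel")

# Cost of avoiding one tonne of CO2, as quoted from Cambridge Energy Research Associates:
wind_abatement_usd_per_tonne = 67
efficiency_abatement_usd_per_tonne = (10, 30)
ratio = wind_abatement_usd_per_tonne / efficiency_abatement_usd_per_tonne[1]
print(f"Wind abatement costs at least {ratio:.1f}x as much as plant-efficiency measures")
```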
Wind is indisputably a promising source of renewable energy—today, in fact, it looks like the most promising and practical source. But many kinks remain to be worked out. It would be a tragedy if wind power were killed in the cradle by overeager requirements that bring hidden costs, unreliable operations, and higher energy prices, inviting a backlash.
The way to address our greenhouse-gas problems is not to champion wind or any other “silver bullet.” It’s to pass a national carbon tax or a cap-and-trade system, and let the market find the most efficient way to cut emissions and reduce our dependence on oil.
Matthew Quirk is an Atlantic staff editor
Labels:
alternative,
energy,
hybrid plant,
infrastructure,
subsidy,
T. Boone Pickens,
Texas,
transmission,
variability,
wind
Thursday, September 11, 2008
Human geography is mapped in the genes
The genes of a European person can be enough to pinpoint their ancestry down to their home country, claim two new studies.
By reading single-letter DNA differences in the genomes of thousands of Europeans, researchers can tell a Finn from a Dane and a German from a Brit. In fact a visual genetic map mirrors the geopolitical map of the continent, right down to Italy's boot.
"It tells us that geography matters," says John Novembre, a population geneticist at the University of California, Los Angeles, who led one of the studies. Despite language, immigration and intermarriage, genetic differences between Europeans are almost entirely related to where they were born.
This, however, does not mean that the citizens of each European nation represent miniature races. "The genetic diversity in Europe is very low. There isn't really much," says Manfred Kayser, a geneticist at Erasmus University Rotterdam in the Netherlands, who led the other study.
One-letter differences
Kayser's and Novembre's teams uncovered the gene-geography pattern only by analysing hundreds of thousands of common gene variants called single nucleotide polymorphisms (SNPs) across the genomes of people from about two dozen countries. SNPs are places in the genome where one person's DNA might read A, while another's T.
Though the teams worked independently, they used some of the same DNA samples, which were gathered by the pharmaceutical company GlaxoSmithKline to help hunt for genes linked to drug side effects. The researchers recorded the results alongside the country of origin for each subject as well as that of their parents and grandparents when possible.
For each subject, the researchers decoded half a million SNPs. However, to get an overall assessment of the difference between any two genomes, the researchers used a mathematical trick that scrunched the hundreds of thousands of SNPs into two coordinates, with each person's genome represented by a point. The greater the distance between two points, the greater the difference in their genomes.
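The "mathematical trick" described here is, in essence, principal component analysis (or a closely related projection) applied to a standardised genotype matrix. A minimal sketch of that step, using synthetic placeholder data rather than the studies' genotypes, and assuming the usual 0/1/2 minor-allele coding:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Synthetic stand-in for real data: 1,000 individuals x 5,000 SNPs, each coded as
# 0, 1 or 2 copies of the minor allele (the real studies used roughly 500,000 SNPs).
n_individuals, n_snps = 1_000, 5_000
genotypes = rng.integers(0, 3, size=(n_individuals, n_snps)).astype(float)

# Standardise each SNP (centre by its mean, scale by its binomial standard deviation),
# the usual preprocessing before running PCA on genotype data.
freq = genotypes.mean(axis=0) / 2.0
std = np.sqrt(2.0 * freq * (1.0 - freq))
std[std == 0] = 1.0
scaled = (genotypes - 2.0 * freq) / std

# Project every genome onto the two leading principal components: each person becomes
# a single point, and distances between points summarise genome-wide similarity.
pca = PCA(n_components=2)
coords = pca.fit_transform(scaled)
print(coords.shape)                      # (1000, 2): one (x, y) point per individual
print(pca.explained_variance_ratio_)     # share of variance captured by PC1 and PC2
```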
When both teams plotted thousands of genomes on a single graph along with their country of origin, a striking map of Europe emerged. Spanish and Portuguese genomes clustered "south-west" of French genomes, while Italian genomes jutted "south-east" of Swiss.
These cardinal directions are artificial, but the spatial relationships between genomes are not. In general, the closer together two people live, the more similar their DNA. The same is known to be true of animals.
Predicting origins
The map was so accurate that when Novembre's team placed a geopolitical map over their genetic "map", half of the genomes landed within 310 kilometres of their country of origin, while 90% fell within 700 km.
Both teams found that southern Europeans boast more overall genetic diversity than Scandinavians, British and Irish.
"That makes perfect sense with the major migration waves that went into Europe," says Kayser, noting Homo sapien's European debut 35,000 years ago, post-ice age expansions 20,000 years ago, and movements propelled by the advent of farming 10,000 years ago. In each case, members of established southern populations struck north.
"A pattern in which genes mirror geography is essentially what you would expect from a history in which people moved slowly and mated mainly with their close neighbours," says Noah Rosenberg, a geneticist at the University of Michigan in Ann Arbor.
Labels:
Erasmus University Medical Center,
genetics,
Kayser,
Manfred,
map,
Netherlands,
Oscar Lao,
population,
variation
Correlation between Genetic and Geographic Structure in Europe
http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6VRT-4T5BRBK-2&_user=10&_rdoc=1&_fmt=&_orig=search&_sort=d&view=c&_version=1&_urlVersion=0&_userid=10&md5=abb22e14103043204d5356530350cfd9
Understanding the genetic structure of the European population is important, not only from a historical perspective, but also for the appropriate design and interpretation of genetic epidemiological studies. Previous population genetic analyses with autosomal markers in Europe either had a wide geographic but narrow genomic coverage [1] and [2], or vice versa [3], [4], [5] and [6]. We therefore investigated Affymetrix GeneChip 500K genotype data from 2,514 individuals belonging to 23 different subpopulations, widely spread over Europe. Although we found only a low level of genetic differentiation between subpopulations, the existing differences were characterized by a strong continent-wide correlation between geographic and genetic distance. Furthermore, mean heterozygosity was larger, and mean linkage disequilibrium smaller, in southern as compared to northern Europe. Both parameters clearly showed a clinal distribution that provided evidence for a spatial continuity of genetic diversity in Europe. Our comprehensive genetic data are thus compatible with expectations based upon European population history, including the hypotheses of a south-north expansion and/or a larger effective population size in southern than in northern Europe. By including the widely used CEPH from Utah (CEU) samples into our analysis, we could show that these individuals represent northern and western Europeans reasonably well, thereby confirming their assumed regional ancestry.
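The mean heterozygosity statistic mentioned in the abstract is just the average of 2p(1-p) over SNPs, where p is the allele frequency. A small illustrative computation on synthetic genotype data (not the study's data) shows how a south-versus-north comparison of this kind could be made:

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_expected_heterozygosity(genotypes):
    """Mean 2p(1-p) across SNPs, with genotypes coded as 0/1/2 copies of the minor allele."""
    p = genotypes.mean(axis=0) / 2.0
    return float(np.mean(2.0 * p * (1.0 - p)))

# Synthetic example: a "southern" subpopulation drawn from more intermediate allele
# frequencies than a "northern" one, mimicking the south > north gradient reported above.
south_freqs = rng.uniform(0.2, 0.5, size=10_000)
north_freqs = rng.uniform(0.05, 0.5, size=10_000)
south = rng.binomial(2, south_freqs, size=(500, 10_000)).astype(float)
north = rng.binomial(2, north_freqs, size=(500, 10_000)).astype(float)

print(round(mean_expected_heterozygosity(south), 3))   # higher
print(round(mean_expected_heterozygosity(north), 3))   # lower
```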
The Genetic Map of Europe
Biologists have constructed a genetic map of Europe showing the degree of relatedness between its various populations.
All the populations are quite similar, but the differences are sufficient that it should be possible to devise a forensic test to tell which country in Europe an individual probably comes from, said Manfred Kayser, a geneticist at the Erasmus University Medical Center in the Netherlands.
The map shows, at right, the location in Europe where each of the sampled populations lives and, at left, the genetic relationship between these 23 populations. The map was constructed by Dr. Kayser, Dr. Oscar Lao and others, and appears in an article in Current Biology published online on August 7.
The genetic map of Europe bears a clear structural similarity to the geographic map. The major genetic differences are between populations of the north and south (the vertical axis of the map shows north-south differences, the horizontal axis those of east-west). The area assigned to each population reflects the amount of genetic variation in it.
Europe has been colonized three times in the distant past, always from the south. Some 45,000 years ago the first modern humans entered Europe from the south. The glaciers returned around 20,000 years ago and the second colonization occurred about 17,000 years ago by people returning from southern refuges. The third invasion was that of farmers bringing the new agricultural technology from the Near East around 10,000 years ago.
The pattern of genetic differences among present day Europeans probably reflects the impact of these three ancient migrations, Dr. Kayser said.
The map also identifies the existence of two genetic barriers within Europe. One is between the Finns (light blue, upper right) and other Europeans. It arose because the Finnish population was at one time very small and then expanded, bearing the atypical genetics of its few founders.
The other is between Italians (yellow, bottom center) and the rest. This may reflect the role of the Alps in impeding free flow of people between Italy and the rest of Europe.
Data for the map were generated by gene chips programmed to test and analyze 500,000 sites of common variation on the human genome, although only the 300,000 most reliable sites were used for the map. Dr. Kayser's team tested almost 2,500 people and analyzed the data by correlating the genetic variations in all the subjects. The genetic map is based on the two strongest of these sets of correlations.
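"Correlating the genetic variations in all the subjects" and keeping "the two strongest of these sets of correlations" in effect describes principal component analysis: each individual is plotted on the first two principal components of the genotype matrix. A minimal sketch, assuming a hypothetical samples-by-SNPs matrix of 0/1/2 minor-allele counts rather than the study's real data:

import numpy as np

def top_two_components(genotypes):
    # Project samples onto the first two principal components of a
    # samples x SNPs genotype matrix (entries 0/1/2 = minor-allele counts).
    centered = genotypes - genotypes.mean(axis=0)     # remove per-SNP mean
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    return u[:, :2] * s[:2]                           # (PC1, PC2) per sample

# Hypothetical example: six individuals typed at four SNPs.
g = np.array([[0, 1, 2, 0],
              [0, 1, 2, 1],
              [1, 1, 1, 1],
              [1, 2, 1, 1],
              [2, 2, 0, 2],
              [2, 2, 0, 2]])
print(top_two_components(g))    # plotting these points gives the "map"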
The gene chips require large amounts of DNA, more than is available in most forensic samples. Dr. Kayser hopes to identify the sites on the human genome which are most diagnostic for European origin. These sites, if reasonably few in number, could be tested for in hair and blood samples, Dr. Kayser said.
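How such diagnostic sites might be shortlisted is not spelled out here; one simple, purely illustrative approach is to rank SNPs by how much their allele frequencies differ between two regional groups. A minimal sketch under that assumption, with invented genotypes:

import numpy as np

def rank_informative_snps(genotypes_a, genotypes_b, top_k=3):
    # Rank SNPs by absolute allele-frequency difference between two groups.
    # Each input is a samples x SNPs array of 0/1/2 minor-allele counts.
    freq_a = genotypes_a.mean(axis=0) / 2.0           # allele frequency, group A
    freq_b = genotypes_b.mean(axis=0) / 2.0           # allele frequency, group B
    delta = np.abs(freq_a - freq_b)
    order = np.argsort(delta)[::-1]                   # most differentiated first
    return order[:top_k], delta[order[:top_k]]

# Invented toy data: four "northern" and four "southern" samples, five SNPs.
north = np.array([[2, 1, 0, 1, 0], [2, 1, 1, 1, 0],
                  [2, 0, 0, 1, 1], [1, 1, 0, 1, 0]])
south = np.array([[0, 1, 0, 1, 2], [0, 0, 1, 1, 2],
                  [0, 1, 0, 1, 2], [1, 1, 0, 1, 1]])
idx, diffs = rank_informative_snps(north, south)
print(idx, diffs)    # SNP indices most diagnostic of group membership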
Genomic sites that carry the strongest signal of variation among populations may be those influenced by evolutionary change, Dr. Kayser said. Of the 100 strongest sites, 17 are found in the region of the genome that confers lactose tolerance, an adaptation that arose among a cattle herding culture in northern Europe some 5,000 years ago. Most people switch off the lactose digesting gene after weaning, but the cattle herders evidently gained a great survival advantage by keeping the gene switched on through adulthood.
Labels: Erasmus University Medical Center, genetics, Kayser, Manfred, map, Netherlands, Oscar Lao, population, variation
Thursday 4 September 2008
Met Police launch electronic crime mapping trial
http://maps.met.police.uk/
The Metropolitan police force has introduced its first trial crime map showing burglary, robbery and vehicle crime for the whole of London.
The Met online crime mapping project, which uses data up to the end of June this year, is an initiative launched by the London mayor, Boris Johnson.
The crime mapping project uses Google Maps technology combined with Met Police crime data, highlighting London boroughs with above- or below-average crime rates and comparing rates for different months and years.
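The borough-level highlighting amounts to aggregating incident counts per borough and comparing each total with the London-wide average. A minimal sketch with invented figures (the real Met data are not reproduced here):

from collections import Counter

def classify_boroughs(incidents):
    # Given (borough, count) records, flag each borough as above or below
    # the all-London average, mirroring the map's colour coding.
    totals = Counter()
    for borough, count in incidents:
        totals[borough] += count
    average = sum(totals.values()) / len(totals)
    labels = {b: ("above average" if c > average else "below average")
              for b, c in totals.items()}
    return labels, average

# Invented example figures for three boroughs (not real Met statistics):
june_2008 = [("Southwark", 950), ("Westminster", 900), ("Richmond", 400)]
labels, avg = classify_boroughs(june_2008)
print(avg, labels)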
Southwark scores worst on crime, with more than 100 additional incidents recorded between May and June this year.
Another four of London's 32 boroughs, including Westminster and Hackney, were above the overall average crime rate across the capital in June.
Seven outlying boroughs, including Richmond, all saw below-average rates of crime for London in June.
Users can zoom in on the map to see specific rates for their neighbourhood, or search by postcode.
"The Mayor made crime mapping a key manifesto commitment and it is an integral part of our strategy to make London safer, " said Kit Malthouse, deputy mayor for policing.
"It is a proven technique for increasing public safety and putting extra resources into crime hotspots where they are most needed."
A Met spokesman emphasised that this version of the map is a test phase and will be subject to a technical review.
"The software development will enhance the service that we currently provide regarding the number, rate and geographical location of defined crime types within the capital," the spokesman said.
"The electronic crime maps will sit alongside the crime statistics that are published monthly on a ward, borough and pan-London basis."
He added that the initial version will be limited to burglary, robbery and vehicle crime data and that the software will be enhanced before a formal launch in September.
Malthouse said the home secretary, Jacqui Smith, had "recently converted" to the crime mapping programme following the work by the mayor's office, and had subsequently announced a project to introduce maps for police forces around the country.
Police forces in Hampshire, Lancashire, the West Midlands and West Yorkshire are all conducting trials of their own crime maps. The government hopes the initiative will increase public confidence in the police and keep the public better informed about local crime problems.
Labels: Boris Johnson, crime, Google, London, map, Metropolitan Police, violence, visualization