Crowded: Digital Piecework and the Politics of Platform Responsibility in Precarious Times looks at crowdsourcing as a focal point for many of the issues raised by the structure of our current information economy: economic value, cultural meaning, and ethics.
Labs: New England
A project focused on the patterns of road development in tropical forests.
Big Sky is a web service for exploratory data analysis.
Distribution Modeller (a temporary name) is CEES' end-to-end browser tool that lets researchers rapidly import data, supplement it with environmental information from FetchClimate, specify an arbitrary model by point-and-click or in code, parameterize the model against the data using Filzbach, and make and visualize predictions with full propagation of parameter uncertainty – then package and share everything in a way that is inspectable, repeatable, and modifiable.
Identifying and Visualizing Viral Content
Labs: New York
SNAP is a new sequence aligner that is 10-100x faster and simultaneously more accurate than existing tools like BWA, Bowtie2 and SOAP2. It runs on commodity x86 processors, and supports a rich error model that lets it cheaply match reads with more differences from the reference than other tools. SNAP was developed by a team from the UC Berkeley AMP Lab, Microsoft, and UCSF. Binaries are available at http://github.com/downloads/amplab/snap/
University College London and the University of Oxford have recently received funding from the EPSRC Cross-Disciplinary Interfaces Programme (2020 Science: Mathematical and Computational Modelling of Complex Natural Systems) to collaborate with Microsoft Research Cambridge on a programme of research that will involve up to 17 post-doctoral Research Associates over a five-year period.
We develop and accelerate better, predictive conservation science, tools, and technologies in areas of societal importance. We aim to provide scientific support for effective environmental solutions to key decision makers, from boardrooms to governments. We are committed to leveraging the unique position our group occupies to influence how individuals and nations approach and tackle issues such as natural resource scarcity and biodiversity loss.
These are two simple formulae to wrap latitude and longitude back to their proper ranges.
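The entry above does not spell the formulae out, so the following is an illustrative sketch rather than the project's own code. One common pair of formulae wraps longitude into [-180, 180) with a modulo, and folds latitude back into [-90, 90] by reflecting values that pass over a pole (note this simple version ignores the accompanying 180° longitude flip at a pole):

```python
def wrap_longitude(lon):
    """Wrap a longitude in degrees into [-180, 180)."""
    return ((lon + 180.0) % 360.0) - 180.0

def wrap_latitude(lat):
    """Fold a latitude in degrees back into [-90, 90]."""
    return abs(((lat - 90.0) % 360.0) - 180.0) - 90.0
```

For example, `wrap_longitude(190)` gives -170, and `wrap_latitude(100)` folds past the north pole to 80.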
While Amazon has already made accessible (via S3) the genomes in the 1000 Genomes Project, there is no accompanying abstraction that lets a biologist or doctor interactively pick whichever portion of the vast data (250 GB per sequence) they wish across the network. We would like to do something similar in a storage platform such as Azure, but where access can be done through what we call a Genome Query Language (developed with colleagues at UCSD).
Data sharing, management, and curation have become critical to scientists as well as private and public agencies that support their work. DataUp makes it easy for scientists and researchers to integrate the archiving, sharing, and publishing of tabular data into scientific workflows.
Species Distribution Modelling (SDM) aims to explain why species occur where they do, and why they do not occur anywhere else. For instance, why does an oak tree not occur further south, in hotter and drier regions, and why does it not occur further north, in colder and wetter regions? This sort of information is amongst the most fundamental of all ecological knowledge and is of great societal importance. Distribution data can feed into almost all other biodiversity models, and can inform adaptive management.
Science, Policy, and Tools & Technology drive the Conservation@Microsoft Research Unit. This unique project combines those pieces to provide fresh insight into the relationship between species, their environment and the impact that human activity has on them.
A Systems and Software Perspective
Environmental Informatics Framework (EIF) is a strategy for using cutting-edge Microsoft technologies to advance environmental data discoverability, accessibility, and consumability.
A cloud-based user experience, Microsoft Layerscape makes it easy for the Earth-sciences community to visualize and analyze large, complex datasets and so facilitates the discovery of new environmental insights. By building on powerful, everyday tools like Microsoft Excel, Layerscape enables users to explore new ways of looking at Earth and ocean data, and to build predictive models in areas such as climate change, health epidemics, and oceanic shifts.
In recent years, computational challenges have become increasingly important for inferring biologically relevant information from the vast amounts of experimental data available to systems biologists. Building on work originally done by Marc Mézard and Riccardo Zecchina in the context of random instances of satisfiability, we are developing computationally efficient algorithms for problems in systems biology.
Labs: New England
SMT-based Analysis of Biological Computation
Filzbach is a flexible, fast, robust parameter-estimation engine that allows you to parameterize arbitrary non-linear models, of the kind necessary in the biological sciences, against multiple heterogeneous data sets. Filzbach allows for Bayesian parameter estimation, maximum likelihood analysis, priors, latents, hierarchies, error propagation, and model selection, often with just a few lines of code.
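Filzbach itself is a C library, and its actual API is not shown here. Purely as an illustration of the style of Bayesian parameter estimation it performs, here is a minimal Metropolis sampler in Python that fits the mean and (log) standard deviation of a normal model to data; all names are hypothetical:

```python
import math
import random

random.seed(0)

# Synthetic data: 200 draws from a normal with mean 3.0, sd 1.5.
data = [random.gauss(3.0, 1.5) for _ in range(200)]

def log_likelihood(mu, log_sd):
    """Log-likelihood of the data under a normal(mu, exp(log_sd)) model."""
    sd = math.exp(log_sd)
    return sum(-0.5 * ((x - mu) / sd) ** 2 - math.log(sd) for x in data)

def metropolis(n_steps=5000, step=0.1):
    """Random-walk Metropolis over (mu, log_sd) with flat priors."""
    mu, log_sd = 0.0, 0.0
    ll = log_likelihood(mu, log_sd)
    samples = []
    for _ in range(n_steps):
        mu_prop = mu + random.gauss(0.0, step)
        ls_prop = log_sd + random.gauss(0.0, step)
        ll_prop = log_likelihood(mu_prop, ls_prop)
        # Accept with probability min(1, exp(ll_prop - ll)).
        if math.log(random.random()) < ll_prop - ll:
            mu, log_sd, ll = mu_prop, ls_prop, ll_prop
        samples.append((mu, log_sd))
    return samples

samples = metropolis()
burned = samples[2000:]                      # discard burn-in
post_mu = sum(m for m, _ in burned) / len(burned)
```

The posterior mean of `mu` lands near the true value of 3.0. Filzbach wraps this kind of machinery (plus priors, hierarchies, and model selection) so the user supplies only the likelihood and parameter definitions.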
Visualize your data over the web: add complex dynamic graphs and maps to your web application.
Retrieve global environmental data with the click of a button or a few lines of code. FetchClimate is a fast, free, intelligent environmental-information retrieval service that operates over the cloud to return only the environmental data you need. FetchClimate can be accessed either through a simple web interface (http://fetchclimate2.cloudapp.net) or via a few lines of code inside any .NET program.
Unify Biological Hypotheses with Models and Experiments
NCBI BLAST on Windows Azure is a cloud-based implementation of the Basic Local Alignment Search Tool (BLAST) that enables researchers to take advantage of the scalability of the Windows Azure platform to analyze vast proteomic and genomic data in the cloud.
We describe the new symbolic differentiation feature in HLSL. We provide details of the compiler implementation, along with information relevant to a shader writer wanting to use the feature.