Open Up Government Data

From Wired How-To Wiki


Barack Obama rode into office with a high-tech, open source campaign that digitized the book on campaigning.

Now, with his selection of a celebrated open data advocate as his Chief Information Officer, Obama appears serious about bringing those same principles to the executive branch's treasure trove of data.

Vivek Kundra, the new CIO, comes to the White House from a similar role as the CTO of Washington, D.C., where he garnered kudos for his clear-headed approach to making data feeds from dozens of city agencies accessible.

"I'm going to be working very closely with all Federal CIOs in terms of at the agency level to make sure they are advancing an agenda that embraces open government, an agenda that looks at how we could fundamentally revolutionize technology in the public sector," Kundra said.

If you're a fan of free data flow into and out of the government, Vivek Kundra seems like an ally. But we can't rest on our laurels. Now is exactly the time when lobbying for particular data and documents to be made accessible could be most effective.

Data.gov is coming: Let's help build it.


The Problem

More than 100 government agencies collect statistics and data. Though some agencies have done a great job of getting their data and documents online, the accessibility and usability of government data overall can be improved.

The Federal government is a big beast, with lots of agencies collecting and publishing data, some of it stretching back decades. Much of the data about the workings of our country is stranded in PDFs, Excel spreadsheets and other less-than-ideal formats. The net effect is that scientists, lawmakers, journalists and citizens can't access key decision-making information.

While green tech, the banking crisis and war dominate headlines, data quietly underpins decision making in all of those areas.

The numbers — about how much corn we grow, what the universe looks like from Hubble, how much coal we have, and how well drugs work — are the results from the grand experiment of this country. We'll only know how to proceed, making refinements to our politics, policies and science, if we know what's happening in the world around us.

Vivek Kundra, Barack Obama's new appointee, is well aware of the problem and wants to spark a revolution in the way government deals with data. But he's going to need your help to steer the (big) government boat.

The Solution: You

We've established this wiki to help focus attention on valuable data resources that need to be made more accessible or usable. Do you know of a legacy dataset in danger of being lost? How about a set of Excel (or — shudder — Lotus 1-2-3) spreadsheets that would work better in another format? Data locked up in PDFs?

This is your place to report where government data is locked up by design, neglect or misapplication of technology. We want you to point out the government data that you need or would like to have. Get involved!

Based on what you contribute here, we'll follow up with government agencies to see what their plans are for that data — and track the results of the emerging era of Data.gov.

With your help, we can combine the best of new social media and old-school journalism to get more of the data we've already paid for in our hands.

How to Get Involved

Just jump in and edit the wiki. Add links to data that's out of date or in danger of being forgotten or that comes stored in a less-than-ideal format. Help define how Data.gov gets built by making sure that the data you need is included.

We're not writing a policy paper here. We're trying to highlight datasets and sources of knowledge that the new Administration — and its open-data-friendly CIO — could make more widely available and accessible with small, concrete actions.

If you're not comfortable with the MediaWiki formatting language, feel free to get in touch with Wired.com staff writer Alexis Madrigal, either by e-mail (alexis.madrigal[at]gmail.com) or on Twitter: @alexismadrigal.

Government Datasets: The Good, the Bad and the Ugly

Our government generates tons of data. It's a data-making machine. There are three types of data the government tends to produce: information about internal government functioning, statistics like those provided by the U.S. Department of Agriculture, and scientific data generated by the nation's scientific agencies.

Legends like Carl Malamud and The Sunlight Foundation have focused most of their efforts on exposing details about how the government works. Scientists tend to push for the release of scientific data, and they're doing an increasingly good job of making datasets available.

Those statistics, though, don't receive as much attention, even though they bear most directly on economic and political decision-making.

We can judge government efforts on two criteria. First, how accessible is the data? Is it a) online, b) as raw as possible, c) "feedable," and d) fully downloadable? Resource.org hosts an expanded list of 8 Principles of Government Data that seem reasonable and smart.

Second, how easy is it to use? Is the data well-described? Are there summaries available for less technical users? (Or, alternatively, is the data structured so outside people are likely to build applications with it, exposing more people to the data?)

Using these criteria, help us find out which valuable datasets we can campaign to make better.

Action Item: Require data to be available in at least one public or open-source format.

In addition to providing data in closed, proprietary, or semi-proprietary formats such as Excel, Word, or PDF, require that at least one open-source or public format be given. This allows government data to be offered in convenient forms for use in commercial applications and also protects the data from being antiquated if those commercial applications cease to exist.

An example of a "dataset" that has generally been "locked up" in PDF is the set of mission, vision, values, goals, and objectives statements documented in agency strategic plans, which are required by the Government Performance and Results Act (GPRA). The authoritative sources of such data should be XML documents conforming to a voluntary consensus standard like Strategy Markup Language (StratML).

U.S. Legal System

New laws and motions that are being put before the Senate should be available as an RSS feed.
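As a sketch of what such a feed would enable, here's how a subscriber could pull new items out of a hypothetical bills feed using only the Python standard library. The feed contents, bill names and URLs below are invented for illustration; they are not a real Senate endpoint:

```python
import xml.etree.ElementTree as ET

# A tiny sample of what a hypothetical "new bills" RSS 2.0 feed might look like.
SAMPLE_FEED = """<rss version="2.0">
  <channel>
    <title>New Senate Bills (hypothetical)</title>
    <item>
      <title>S.123 - Example Data Transparency Act</title>
      <link>http://example.gov/bills/s123</link>
      <pubDate>Mon, 16 Mar 2009 10:00:00 GMT</pubDate>
    </item>
    <item>
      <title>S.124 - Example Energy Research Act</title>
      <link>http://example.gov/bills/s124</link>
      <pubDate>Tue, 17 Mar 2009 10:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>"""

def list_new_bills(feed_xml):
    """Return (title, link) pairs for every item in an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in list_new_bills(SAMPLE_FEED):
    print(title, "->", link)
```

Any feed reader, alert service or mashup site could consume the same XML, which is the point: publish one feed, and many unanticipated uses follow.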

U.S. Department of Agriculture

Economic Research Service

The USDA's Economic Research Service is an excellent example of an agency that does a great job collecting data and making it available online. However, the usability of its data is limited. For example, statistics on crop harvests and fertilizer applications are stored in the form of individual Excel spreadsheets and PDFs. That frustrates researchers like Pamela, a geochemist at the University of Chicago who investigates the nation's food system. She wants to more accurately determine the energy inputs that go into various types of food so she can evaluate the carbon and energy intensity of different types of diets.

Economic data is held separately from the chemical application data, so it's tough to look at how fertilizer and pesticides have impacted yields. It would be easy if the USDA provided raw data, but they don't. They cut and massage it into reports, which means she and her team have to try to work backward from the aggregated stats to the data they want.

"They package it in a way that's useful for them, but what really would be useful is if all that data were available in nearly raw form," said. "I'd want a single clearinghouse for their data."

Action Item: Turn current spreadsheet data on crops, livestock, etc., into XML feeds to enable easier and more dynamic reuse.

Action Item: Turn current data on nutrition facts formatted as PDFs, ASCII format, and as a Microsoft Access 2000 database file into XML feeds and other open source file formats to enable easier and more dynamic reuse.

SR21, the USDA's National Nutrient Database (Standard Reference, Release 21), has great potential for development by new media entrepreneurs, social entrepreneurs, and startups. It would be far more useful if it were thoroughly integrated into the social and mobile web: both the social graph and ubiquitous 3G mobile computing hold untapped potential for novel applications of SR21. Add the obvious public health stakes, and this becomes a valuable dataset, one that could help manage obesity and diabetes, support nutritional research, and enable creative ideas that need only a nudge. This data shouldn't be hiding. "Raw data now!" - Tim Berners-Lee
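The conversion these action items call for is not exotic. Here's a minimal sketch that turns spreadsheet-style rows into an XML feed with only the Python standard library; the column names and figures below are made up for illustration, not actual USDA data:

```python
import csv, io
import xml.etree.ElementTree as ET

# Illustrative spreadsheet-style data; not real USDA figures.
CSV_DATA = """crop,year,acres_harvested,yield_per_acre
corn,2007,86500000,151
soybeans,2007,64100000,41
"""

def csv_to_xml(csv_text, root_tag="crops", row_tag="record"):
    """Convert CSV text into an XML document, one element per row."""
    root = ET.Element(root_tag)
    for row in csv.DictReader(io.StringIO(csv_text)):
        rec = ET.SubElement(root, row_tag)
        for field, value in row.items():
            ET.SubElement(rec, field).text = value
    return ET.tostring(root, encoding="unicode")

xml_feed = csv_to_xml(CSV_DATA)
print(xml_feed)
```

The same few lines generalize to livestock, fertilizer or nutrition tables; the hard part is institutional, not technical.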

Department of Education

National Center for Education Statistics

The NCES maintains a series of databases containing information on public and private schools around the nation. But as Twitter user @facej notes, it's trapped inside web forms.

Data can be downloaded as Excel spreadsheets once a query is made, but it can't be downloaded in bulk.

Make NCES school data available for bulk download.

Department of Energy

Energy Citations Database

If ever there were a perfect opportunity for making data more available, one would think it would be in the energy arena. The Energy Citations Database is a repository of 2.3 million bibliographic records stretching back to 1943. It's a fantastic research resource, but there's an enormous amount of data trapped there. Of the 2.3 million citations, only 197,000 are available as electronic documents. The database is growing — in June 2007, there were only 140,000 online docs — but that growth rate isn't going to make a dent in that pile of citations without a lot more effort.

Action Item: Scan the 2.1 million paper documents in the Energy Citations Database.

Geothermal Technologies Legacy Collection

The DOE, in conjunction with the Office of Scientific and Technical Information (OSTI), has made a herculean effort to preserve publications and documents related to the $1.5 billion worth of research into geothermal technology.

Going back through the literature, they came up with 15,000 documents related to geothermal technology, all of which have been entered into the OSTI Energy Citations database. But of those, only 6,000 have been scanned and are fully-searchable through the OSTI website.

Three points seem relevant here. First, this could be a model for how to do an information-preservation search for a specific, high-value technology; the OSTI spelled out its step-by-step approach back in 2006. Second, all of those documents should be available online. Given that geothermal technology is receiving an exploding amount of attention from the government and private companies as a possible major source of energy, it's difficult to stomach (though easy to understand) that the majority of the documents generated by DOE funds remain offline. Third, what happened to all the data generated in DOE experiments? Is it available online?

Action Item: Scan all remaining paper documents on geothermal power.

Food and Drug Administration

ClinicalTrials.gov

Clinicaltrials.gov is a registry for clinical trials that began in 1999. Registries let you track which trials were planned and see whether a trial was published, but they don't tell you how the trial turned out. Starting in late 2008, because of a new law (FDAAA), ClinicalTrials.gov began posting study results. This is great for future drugs, but it doesn't help us with all the drugs on the market today.

FDA website (Drugs@FDA)

The FDA is sitting on data on essentially all drugs currently marketed in the US. They have results not only on the clinical trials that got published, but also on lots of trials that haven't seen the light of day. (And sometimes the FDA's version of the trial results tells a different story from what the journal articles say.) Starting in 1997, the FDA began posting its reviews of drugs it has approved. Unfortunately, this is a hit-or-miss proposition: sometimes a drug review is there, and sometimes it's not. The other problem is that, if you want a review of a drug approved before 1997, you have to file an old-fashioned paper Freedom of Information Act (FOIA) request. Then you get to wait and wait, and maybe in a year or two you'll get it. When the FDA receives the next request for the same drug, it reinvents the wheel all over again and makes that person wait and wait.

Drugs@FDA: http://www.accessdata.fda.gov/scripts/cder/drugsatfda/index.cfm
Article about this issue: http://dx.doi.org/10.1371/journal.pmed.0010060

Action Items

• Make the FDA's clinical trial data from before 1997 available online.
• Cover all drugs, not just some of them.

Department of the Interior

U.S. Geological Survey

USGS data about the mineral resources in our country is some of the most respected data in the world. Some of this data is available online through the Minerals Resource Program. Could it be improved? What's missing?

Some USGS data in SDTS format can be downloaded from here; and more data in SDTS and other formats can be found here.

USGS manages the Geospatial One-Stop Portal that contains over 200,000 references to geospatial resources ranging from datasets and web services to applications. The content of this site is contributed from Federal, State, and Local Government agencies as well as commercial data providers.

National Aeronautics and Space Administration

Orbital Debris Data

Space junk is an increasingly severe problem for vehicles and satellites in Earth's orbit. NASA has kept detailed records of the impacts of orbital debris and meteoroids on the Shuttle, International Space Station, and other vehicles, but little of this data has been made public. The risk assessments made for previous Shuttle missions also remain outside the public record, even though they are crucial for understanding how orbital debris risks to the Shuttle, Hubble, and ISS have changed through time.

COMPLETED ITEM: Make Shuttle Hypervelocity Impact Database freely available.

Wired Science released a partial dataset of orbital debris impacts on March 13, 2009. The dataset was found outside the password-protected area on a NASA server and posted on WiSci. Key data tables were also posted to Google Documents.

Since the publication of the story, NASA has removed the data from the public domain. It used to be available at this link: http://hitf.jsc.nasa.gov/hitfpub/shuttle/Reports/ShuttleImpactDB.xls

The data we made public only runs through 2006, leaving out the last nine Shuttle missions. Wired Science has requested updated data from NASA's Hypervelocity Impact Testing Facility.

Action Item: Acquire the most recent orbital debris impact data and make it freely available.

Action Item: Make historical orbital debris risk assessments of Shuttle missions public.

Wired Science has requested that NASA make Shuttle risk assessments public, but has not received documents from the agency.

Technical Reports Server

The NASA technical reports server is a treasure trove of reports and documents on topics from Shuttle launches to wind turbines. There are 175,631 full-text PDFs on the server, but none of them have been OCR'd. Only the abstracts are searchable.

147,560 other documents have searchable abstracts but are not available online. They have to be ordered through the Center for AeroSpace Information, which costs nontrivial amounts of money. This is true even for documents labeled "Unclassified, No Copyright, Unlimited, Publicly available."

Action Item: OCR all technical reports server documents.

This is a simple first step that would make a tremendous amount of knowledge more freely accessible.
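To illustrate the payoff: once text has been extracted by OCR, even a few lines of standard-library Python can make a document collection keyword-searchable. The report IDs and snippets below are invented stand-ins for OCR'd full text:

```python
from collections import defaultdict

# Invented report snippets standing in for OCR'd full text.
REPORTS = {
    "NASA-TR-0001": "hypervelocity impact testing of shuttle window glass",
    "NASA-TR-0002": "wind turbine blade fatigue under cyclic loading",
    "NASA-TR-0003": "orbital debris impact risk for shuttle missions",
}

def build_index(docs):
    """Map each word to the set of document IDs containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def search(index, word):
    """Return a sorted list of document IDs containing the word."""
    return sorted(index.get(word.lower(), set()))

index = build_index(REPORTS)
print(search(index, "impact"))  # → ['NASA-TR-0001', 'NASA-TR-0003']
```

Real search engines do far more, but the gap between "scanned image" and "findable knowledge" really is this small once the text exists.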

Action Item: Put all publicly available documents online.

In today's world, "publicly available" has to mean that the documents are available online.

Action Item: Attempt to identify data buried in documents.

Behind these documents lie vast datasets that are probably stored on outdated media. From scanned reports, we can identify what the datasets underlying NASA's reports are.

Models for Government Data Release, Transparency

Washington, D.C. Data Catalog

The city's Data Catalog is simple: It just provides downloadable data and web-accessible feeds on all kinds of city information from juvenile arrests to completed construction projects to roadkill pickups. Building on that platform, the city even sponsored an Apps for Democracy contest, which saw independent developers create 47 mashups from DC's data streams.

Human Genome Project: Genome.gov

As Vivek Kundra mentioned in his CIO acceptance press conference, the public release of the human genome has led to the creation of 500 new drugs that are now in the FDA approval pipeline. This is widely considered one of the best examples of data sharing.

Help us out. What made this effort so successful?

NOAA's Climate Database Modernization Program

The CDMP has digitized 53 million images and more than 7 terabytes of data, making it available through a special web-based software interface. They are saving priceless, often handwritten climate data.

On the other hand, access to the data is restricted to "U.S. government employees and their contractors, educational institutions doing environmental research, and other researchers associated with NOAA projects." What prevents this data from being opened up to the public?

Earth Science Data and Information System Project at Goddard: ESDIS

The ESDIS program is currently saving crucial datasets from NASA's early missions. These include data from the Nimbus weather research program, the Heat Capacity Mapping Mission and the Earth Radiation Budget Experiment.

What are the best practices for finding high-value historical datasets? How has the ESDIS program done it?

Hubble Space Telescope Data

Fox, a computer scientist at Rensselaer Polytechnic Institute, noted that the Hubble Space Telescope archive has been a smashing success. "Six times the amount of data has been taken out of that archive than has gone into it," he said. "The data has been used 6X more than you paid for."

TriMet and BART

While most public transit agencies have resisted giving out even simple digital equivalents of their paper schedules, Portland, Oregon's TriMet and the San Francisco Bay Area's BART systems have led the way in developer-friendliness. Not only do they provide their full timetables and geographic information in GTFS format, but both also provide real-time GPS information about where their buses and trains are at any given moment. Both also maintain dedicated developer areas on their website and actively reach out to the transit developer community.
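Part of what makes GTFS so developer-friendly is that it's just a set of plain CSV text files (stops.txt, routes.txt, stop_times.txt and so on). Here's a minimal sketch of reading a stops.txt fragment with the Python standard library; the stop names and coordinates are invented, not TriMet's or BART's actual data:

```python
import csv, io

# An invented fragment in the shape of a GTFS stops.txt file.
STOPS_TXT = """stop_id,stop_name,stop_lat,stop_lon
1001,Main St & 1st Ave,45.5231,-122.6765
1002,Main St & 5th Ave,45.5242,-122.6821
"""

def load_stops(text):
    """Parse GTFS stops.txt rows into dicts with float coordinates."""
    stops = []
    for row in csv.DictReader(io.StringIO(text)):
        row["stop_lat"] = float(row["stop_lat"])
        row["stop_lon"] = float(row["stop_lon"])
        stops.append(row)
    return stops

stops = load_stops(STOPS_TXT)
print(stops[0]["stop_name"])  # → Main St & 1st Ave
```

Because the format is this simple, hobbyist developers can build trip planners and map mashups without any special tooling from the agency.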

Government-Wide Changes

Create Data Catalog of Every Agency's Data Streams

We agree with the Sunlight Foundation's Greg Elin that the single most important thing any government agency could do to make itself more transparent would be to create a data catalog of all its data streams.

"If there was one thing I could do, it would simply be creating a data catalog at every agency at every department that has data," said Greg Elin of the Sunlight Foundation, a group that promotes government transparency. "Every website has an About Us. Every website has a Frequently Asked Questions. Every website should have a data catalog."

FedStats.gov is the current attempt to do this, but it clearly doesn't rise to the level of a true data catalog.
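A data catalog doesn't have to be elaborate to be useful. As a sketch (the field names and entries here are invented for illustration, not any agency's or standard's actual schema), each catalog entry could be a small machine-readable record that tools can filter and aggregate:

```python
import json

# Illustrative catalog entries; field names are invented for this sketch.
CATALOG = [
    {
        "title": "Crop harvest statistics",
        "agency": "USDA Economic Research Service",
        "formats": ["csv", "xml"],
        "url": "http://example.gov/data/crops.csv",
        "updated": "2009-03-01",
    },
    {
        "title": "Energy citations",
        "agency": "DOE Office of Scientific and Technical Information",
        "formats": ["xml"],
        "url": "http://example.gov/data/citations.xml",
        "updated": "2009-02-15",
    },
]

def by_format(catalog, fmt):
    """Return titles of datasets available in the given format."""
    return [entry["title"] for entry in catalog if fmt in entry["formats"]]

print(json.dumps(CATALOG[0], indent=2))
print(by_format(CATALOG, "csv"))
```

Once every agency published even this much, a government-wide Data.gov could be little more than an aggregator of the agencies' own catalogs.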

Under the previous administration, the Federal Enterprise Architecture (FEA) Data Reference Model (DRM) was supposed to have become the governmentwide data catalog. An XML schema (XSD) was drafted to facilitate the sharing of DRM data (e.g., in readily searchable data catalogs). However, agencies pushed back against the thought of rendering their data descriptions in XML format, so the draft XSD was not finalized and implemented. Under the new administration, the status of the FEA models, including the DRM, is uncertain.

Make Data Release the Rule, Rather Than the Exception

Jessy Cowan-Sharp, who works on a team at NASA Ames looking at data-accessibility issues, had this simple suggestion. "If we changed policy to have automatic time frames within which data became publicly releasable, or even had certain categories of data that were by default publicly releasable, I think we would overcome one of the major hurdles to accessibility — paperwork."

View Data Release From the User's Point of View, not the Agency's

The EPA's web manager, Levy, has made a trenchant argument about this issue via Twitter: "Why should ppl have to know which agency governs nat'l parks to find info? or fixes potholes? or explains env. issues?"

In short, why should user-citizens have to know which government silo handles which problem to get answers? Data.gov creates the possibility that at least in data, that won't be the case. But one key will be designing the site to think about data use-cases, not agency needs.

Reward Making Datasets Publicly Available Through Grantmaking Bodies like NSF

Andy Maffei of the Woods Hole Oceanographic Institute speaks for many scientists when he notes that when they make their data available, they aren't rewarded with promotions and recognition. "They don't get much attribution in terms of how much their data is used by other people," he said.

How can we encourage the NSF and other scientific grantmaking bodies to reward data release?

Fund Data Reanalysis Projects

Andy Fox at RPI told us that only between three and ten percent of scientific data is actually analyzed, which means that virtually no data is reanalyzed. He argues — and we agree — that mining the data we already have could be a low-cost, high-value means of scientific investigation.

The problem is, that doesn't sound like cutting-edge scientific research.

How do we convince funding agencies that data analysis can be valuable science, even if you do no more measurement?

Action Item: Crowdsource the problem by encouraging high school and college level science classes to add government data reanalysis to their curriculum

It might be good to get students involved with the scientific process by reanalyzing actual scientific data. It may be possible for the classes to get into contact with the agency/team/scientist that generated the data, and the students would get a chance to learn data analysis techniques that are valuable later in their scientific career (statistics, mathematical modeling, etc). If students or instructors wanted to tackle large data sets, the students could be exposed to different programming techniques as well (something that many professional scientists aren't good at themselves).

Prioritize Open Government Investments and the Right to Know

Government should not only make its data available for third-party use; it must also make the information it places online more accessible and relevant in a timely, personalized manner. The detailed report Moving Toward a 21st Century Right-to-Know Agenda: Recommendations to President-elect Obama and Congress and the short article Ten practical online steps for government support of democracy provide useful context.

How do we ensure that e-government investments couched in terms of one-way service delivery transactions are required in part to promote government accountability, transparency, accessibility and engagement with the public?

Adopt global metadata standards and related technologies

The existence of data by itself is insufficient to guarantee that it can be discovered, accessed or used for research or analytical purposes. Data must be surrounded by enough information to ensure its quality and usefulness. Such documentation, or "metadata," should be compiled in a harmonized fashion among agencies to facilitate the data's preservation, dissemination and exploitation, and to capture derived research outputs. Combining the adoption of global metadata specifications such as the Data Documentation Initiative (DDI) and the Statistical Data and Metadata Exchange standard (SDMX) with industry-standard XML technologies and service-oriented architecture would address these issues and foster the harmonization of the federal statistical system. International organizations like IASSIST, standard-setting groups such as the DDI Alliance and the SDMX sponsors, and initiatives like the Open Data Foundation and the International Household Survey Network are promoting and supporting such best practices at the national and international levels.

Models for Opening and Using Government Data

USGovXML.com

USGovXML is an index of publicly available web services and XML data sources provided by the US government. It includes detailed descriptions of the data sources and their operations, along with links to the host systems for documentation, tech support and so on. Source code snippets are provided to help developers better understand how to use the data sources. Web-based applets for mobile devices are also available at USGovXML:Mobile.

Infochimps.org

Infochimps is dedicated to finding and hosting free, redistributable datasets. It's a simple but absolutely enormous mission. So far, they've got thousands of datasets waiting for you to use.

public.resource.org

Public.resource.org is the home of Carl Malamud's many-tentacled government data extraction program. From public safety codes to California's entire Code of Regulations to hundreds of Federally-produced movies, the website provides a tantalizing peek at the incredible extent of the government's information warehouses.

Sunlight Labs' Apps for America

Sunlight Labs is an organization dedicated to "turning government data into useful information." They are currently hosting an Apps for America contest to design web services that promote transparency in Congress.

GTFS Data Exchange

GTFS Data Exchange is a site that aggregates public transportation schedule data in GTFS format. Both officially-released and hand-entered/scraped schedules are listed.

Trendrr.com

Trendrr's API graphs and trends time-series data. You can mash it up, annotate it and export it in a variety of open formats, including XML and SGS (Simple Graph Syndication, an XML graph-syndication schema published under a Creative Commons license). It offers a robust, simple API.

Proteomecommons.org

ProteomeCommons.org is a public proteomics database for annotations and other information, linked to the Tranche data repository and to other resources. It provides public access to free, open-source proteomics tools and data. Researchers can manage their projects, data and annotations, and the site tracks usage statistics: how many people have downloaded your code, where in the world it is being used, and how often. Next time you're writing a grant, wouldn't it be nice to say exactly that? The site offers Google-like search over a project's files and documentation, tuned to work well with proteomics terms, data and code. All projects are clearly labeled, so users know exactly who did the work and whom to contact with questions.

Ted.Com

TED.com "Ideas Worth Spreading" offers two really good references for the best ways to use and SEE data. These are great for the kind of results we want to get EASY access that is useful and very visible. (1) Tim Berners-Lee: The next Web of open, linked data (2) Hans Rosling: Debunking third-world myths with the best stats you've ever seen.

This page was last modified 23:02, 19 March 2009 by alexismadrigal. Based on work by mainebob, theodorewheeland, rloftin, chumphrey, mediaeater, mhogeweg, odaf, tessarakt, mcoletti, betsymason, jesspepper, jamesturk, jwyg, essentialbeatfinger, headway, turnere, owenambur, amirebrahimi and netclift and others.

All text and artwork shared under a Creative Commons License.


http://howto.wired.com/wiki/Open_Up_Government_Data

"Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius -- and a lot of courage -- to move in the opposite direction."

- Albert Einstein
