
CHI T-Shirt Competition

Posted by admin in Uncategorized - (Comments Off)

This data is pulled from a contest run to pick MIT’s CHI 2009 T-Shirt design. Click on a bar in the bar chart to view the design.

Veggie Guide to Glasgow

Posted by admin in Uncategorized - (Comments Off)

Here is a vegetarian’s guide to the Glasgow area, shown in a DataPress post. The data was imported from this Exhibit on the web: we simply referenced that URL and projected it onto a map.

This post shows off DataPress’s ability to easily import data from elsewhere and embed it inline in a blog post. One missing feature it highlights is the need to let the user set a default map latitude/longitude and zoom level.


    Exhibit 3.0 Publishing Framework

    Posted by admin in Uncategorized - (Comments Off)

    Exhibit 3.0:

    Publishing Framework for Large-Scale Data-Rich Interactive Web Pages

    What’s New?

    Exhibit 3.0 is available for community input. Highlights include:

    Exhibit 3.0 Scripted (client-side)

    • New HTML5 configuration language, with backward compatibility for prior configuration language
    • New localization system (no more 404s)
    • New history system (no more __history__.html)
    • Updated libraries, including jQuery
    • Removal of external or unconfigurable service dependencies within the core
    • Most core views and facets and some popular extension views implemented
    • More developer-friendly, including a new event-driven API, basic tests, and documentation
    • Persistence: Pick up where you left off browsing an Exhibit
    • Bookmarks: Share exactly what you see with others

    General – Exhibit 2.2.0 & Exhibit 3.0

    The expression language remains the same.

    The Exhibit attribute-based configuration has changed for HTML5; a compatibility mode remains for Exhibits in XHTML files. Because HTML5 does not support XML namespaces, Exhibit now uses custom data attributes in their stead. Moving from Exhibit 2.2.0 in XHTML to Exhibit 3.0 in HTML5 requires changing every attribute prefixed with ex: to be prefixed with data-ex- instead. In addition, each capital letter within an attribute name should be converted to a hyphen followed by the lowercase letter, e.g., ex:itemTypes becomes data-ex-item-types. The change to hyphenation is necessary because the HTML5 data attribute API treats capitalization differently during document processing than during attribute access.

    Values used in a rel attribute now use the - character instead of the / character. The / form is deprecated; it still works for now but will not in the future. This mostly pertains to <link rel="exhibit-data" ...>.

    Exhibit now uses the updated, native browser JSON libraries, which enforce the JSON specification much more strictly. A one-off extension for upgrading JSON has been added to the selection of Exhibit extensions; otherwise, you may need to run your existing Exhibit JSON through JSONLint to make it valid for use with Exhibit 3.0.

    Loading extensions should now be done using a <link> element instead of a <script> to allow Exhibit to load them in the right order. For example, loading maps should be done with <link rel="exhibit-extension" href="path/to/map/map-extension.js">.

    A new component, the control panel, was added to Exhibit 3.0 to contain Exhibit-wide widgets, like the new bookmarking widget.

    You no longer need a __history__.html file to accompany your Exhibit files.

    The following table compares feature gaps between Exhibit 3.0 and the previous release, Exhibit 2.2.0.

    Views

    Exhibit 2.2.0 feature    Exhibit 3.0 status
    Tile                     Fully implemented
    Tabular                  Fully implemented
    Thumbnail                Fully implemented
    Timeline Extension       Fully implemented
    Map Extension            Partially implemented [1, 2]
    Chart Extension          Not yet implemented
    Timeplot Extension       Not yet implemented
    Calendar Extension       Not yet implemented
    Editing                  Hooks provided; view not yet implemented

     

    The Map Extension no longer includes the OpenLayers or VirtualEarth (now Bing Maps) providers. Interested parties should be able to review the Google Maps v2 and v3 provider scripts to see how to include a mapping provider. Consider using MapQuest as a default tile service for OpenLayers.

    Facets

    Exhibit 2.2.0 feature    Exhibit 3.0 status
    List                     Fully implemented
    Cloud                    Fully implemented
    Text Search              Fully implemented
    Numeric Range            Fully implemented
    Alphabetic Range         Fully implemented
    Hierarchical             Fully implemented
    Slider                   See below [1]
    Image                    No plans to implement

    Widgets

    Exhibit 2.2.0 feature           Exhibit 3.0 status
    Toolbox                         Fully implemented
    Exhibit Logo                    Fully implemented
    Resizable Element               Fully implemented
    Options                         Fully implemented
    Collection Summary              Fully implemented
    Legend                          Fully implemented
    Legend Gradient                 Fully implemented [1]
    Bookmark                        New to Exhibit 3.0
    History Reset Development Tool  New to Exhibit 3.0

    Data-Rich Interactive Web Page Examples:

    This website uses data-rich interactive web pages built with the Exhibit 3.0 publishing framework.

    «Universo de emociones» (“Universe of Emotions”), a data visualization by PalauGea

    Time travel in movies, a data visualization by mr-dalliard

    Background:

    The Exhibit 3 project was supported by the Library of Congress. It is a partnership among MIT Libraries, MIT CSAIL and Zepheira, including personnel from the original SIMILE project. See the Exhibit 3.0 launch press release (PDF).

    Freebase: Then & Now

    Posted by admin in Uncategorized - (Comments Off)


    Freebase is an open database of the world’s information, built by a global community and free for anyone to query, contribute to, and build applications on top of.

    In order to enable that, Freebase provides a set of HTTP web services that your applications can interact with. While you don’t need any special software to perform the required HTTP requests, it is often useful to have language-specific libraries that try to simplify the effort required to connect to such web services.

    This project provides one such library for the Java language.

    JSON

    One of the characteristics of the Freebase APIs is that they heavily rely on JSON, both for input and for output. Even the query language used to query information against Freebase (what is known as MQL and pronounced so that it rhymes with “mickel”) is a specific form of JSON.

    Unfortunately, Java’s strongly typed nature makes it very unnatural to use JSON as a data format with the API, unlike languages such as JavaScript and Python, where JSON is a first-class citizen.

    This is why this library is built around the concept of a telescopic JSON API that is designed to make it easier not only to parse and serialize data in JSON formats, but also to construct, modify and manipulate JSON data directly inside code with as little syntax overhead as possible.

    We suggest you familiarize yourself with our telescopic JSON API before reading the examples below.
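
    For orientation, here is a minimal sketch of the style of construction and navigation the telescopic JSON API enables, using only the o(), a(), get() and string() calls that appear in the examples below (the class name and sample values are ours, purely for illustration):

    import com.freebase.json.JSON;

    import static com.freebase.json.JSON.o;
    import static com.freebase.json.JSON.a;

    public class TelescopicJsonSketch {
      public static void main(String[] args) {
        // Build a nested JSON structure in a single expression:
        // o(key, value, ...) creates objects, a(item, ...) creates arrays.
        JSON film = o(
          "name", "Blade Runner",
          "directed_by", a(o("name", "Ridley Scott"))
        );

        // Navigate back into it by chaining get() calls and
        // unwrap the leaf value with string().
        String director = film.get("directed_by").get(0).get("name").string();
        System.out.println(director); // prints: Ridley Scott
      }
    }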

    MQL

    The most important tool at our disposal when dealing with Freebase is the MQL language. To learn more about it, point your browser at the very comprehensive documentation on the Freebase web site and follow up from there. You are also highly encouraged to try it out in the MQL Query Editor, a highly interactive page on the Freebase web site that helps you familiarize yourself with the query language and with the schemas used in the various data domains that Freebase contains.

    Reading Data from Freebase

    Let’s start by asking who directed the movie “Blade Runner”. This is achieved with the following MQL query:

    {
      "id":   null,
      "type": "/film/film",
      "name": "Blade Runner",
      "directed_by": [{
        "id":   null,
        "name": null
      }]
    }

    How do we execute that query from Java with this library? First, you need to make the jar available to your app. This is beyond the scope of this document, but if you use Maven or Ant+Ivy, we strongly suggest reading how to directly integrate this library with your POM.

    Once the library is made available to your classloader, we need to import the required classes

    import com.freebase.api.Freebase;
    import com.freebase.json.JSON;
    
    import static com.freebase.json.JSON.o;
    import static com.freebase.json.JSON.a;

    The first import is the class that contains all the Freebase APIs, while the second is the class that represents the telescopic JSON API.

    The two following static imports don’t change any behavior but add useful syntactic sugar: they allow us to use the shorthand forms o() and a() for object and array creation, instead of JSON.o() and JSON.a(), saving a lot of typing and making our hand-crafted JSON easier to read.

    Now that we have all the tools we need, the first action is to retrieve a Freebase instance to work on. This is achieved by calling a static constructor on the Freebase class, like this:

         Freebase freebase = Freebase.getFreebase();

    This connects you with the main Freebase site at http://www.freebase.com/. Alternatively, you can ask to get access to the Freebase Sandbox, a scratchpad replica of the entire Freebase system (along with all the data and its underlying infrastructure) that is very useful to test writes and data loads. That is done like this:

          Freebase sandbox = Freebase.getFreebaseSandbox();

    Both the main and sandbox instances present the exact same APIs and react in the exact same way; the only difference is that in the sandbox you should not be concerned about writing something bogus, because it is wiped periodically. If you’re doing read-only activity, it’s probably better to use Freebase, since it’s a faster system, but if you’re doing read-write activity and you’re not 100% sure of the results, we strongly suggest using the sandbox.

    Now we need to come up with the MQL query. There are two ways to do this: by parsing a string that encodes JSON (which puts the burden of keeping it well formed on you) or by using the telescopic JSON API.

    Here is how the first approach looks:

    String query_str = ("{" +
      "'id':   null," +
      "'type': '/film/film'," +
      "'name': 'Blade Runner'," +
      "'directed_by': [{" +
        "'id':   null," +
        "'name': null" +
      "}]" +
    "}").replace('\'', '"');
    JSON query = JSON.parse(query_str);

    Note how both Java and JSON use double quotes; instead of escaping quotes all over the place and heavily sacrificing readability, we prefer the runtime approach of the final replace() call. This is by no means required (nor safe if the string contains literal single quotes), but it makes life a lot easier in practice.

    Still, the above is cumbersome and error prone, mostly because it’s very easy to destroy the syntactic integrity of the JSON object while building it piecemeal from concatenated strings. A better approach is to use the JSON API directly, like this:

    JSON query = o(
      "id", null,
      "type", "/film/film",
      "name", "Blade Runner",
      "directed_by", a(o(
        "id", null,
        "name", null
      ))
    );

    where o() builds a JSON object, a() builds a JSON array, and all the various casting issues are eliminated by the use of a single polymorphic class that can represent any JSON content.

    Now that we have the query, we need to execute it against Freebase. This is done by calling the “mqlread” service, like this:

    JSON result = freebase.mqlread(query);

    Like most Freebase web services, mqlread is JSON-in/JSON-out and wraps the result in an envelope that provides additional information, beyond the HTTP protocol, about how the request was executed. To get to the meat of our result, we need to ask for the “result” object and then dig our way into the JSON to find the name of the director, like this:

    String director = result.get("result").get("directed_by").get(0).get("name").string();

    NOTE: since movies can have more than one director (even if that’s not the case here), we have wrapped the ‘directed_by’ clause in an array. This is why we need the extra get(0) to obtain the first director.
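
    Putting the pieces together, a complete read looks roughly like this (a minimal sketch assembled from the snippets above; the class name is ours and error handling is omitted):

    import com.freebase.api.Freebase;
    import com.freebase.json.JSON;

    import static com.freebase.json.JSON.o;
    import static com.freebase.json.JSON.a;

    public class WhoDirectedBladeRunner {
      public static void main(String[] args) {
        // Connect to the main Freebase site.
        Freebase freebase = Freebase.getFreebase();

        // Build the MQL query with the telescopic JSON API.
        JSON query = o(
          "id", null,
          "type", "/film/film",
          "name", "Blade Runner",
          "directed_by", a(o(
            "id", null,
            "name", null
          ))
        );

        // Execute it against the mqlread service and unwrap the envelope.
        JSON result = freebase.mqlread(query);
        String director = result.get("result").get("directed_by").get(0).get("name").string();
        System.out.println(director);
      }
    }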

    Writing Data to Freebase

    The other side of the equation is obviously how to write data to Freebase or modify data that is already there. To do this, you first need a valid Freebase account, so go get one before continuing.

    After that the only difference between reading and writing (besides the shape of the query, of course) is that you have to authenticate yourself against the site. This is done by invoking sign-on methods on the Freebase class like this:

    Freebase sandbox = Freebase.getFreebaseSandbox();
    sandbox.sign_in(username, password);

    This fills your Freebase instance with authorization credentials that will be used later when invoking the services that require authorization: “mqlwrite” (used to add relational data) and “upload” (used to add binary and textual data).
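
    As a rough illustration, and assuming the library exposes mqlwrite analogously to mqlread (check the library documentation for the exact signature), a small write against the sandbox might look like this; the topic name and credentials are placeholders:

    import com.freebase.api.Freebase;
    import com.freebase.json.JSON;

    import static com.freebase.json.JSON.o;

    public class WriteSketch {
      public static void main(String[] args) {
        // Always experiment with writes against the sandbox first.
        Freebase sandbox = Freebase.getFreebaseSandbox();
        sandbox.sign_in("username", "password"); // placeholder credentials

        // MQL write query: "create": "unless_exists" asks Freebase to
        // mint a new topic of the given type and name if none exists yet.
        JSON query = o(
          "create", "unless_exists",
          "type", "/film/film",
          "name", "My Test Film"
        );

        // Assumed to mirror mqlread: JSON in, enveloped JSON out.
        JSON result = sandbox.mqlwrite(query);
        System.out.println(result.get("result"));
      }
    }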

    When we publicly launched Freebase back in 2007, we thought of it as a “Wikipedia for structured data.” So it shouldn’t be surprising that we’ve been closely watching the Wikimedia Foundation’s project Wikidata[1] since it launched about two years ago. We believe strongly in a robust community-driven effort to collect and curate structured knowledge about the world, but we now think we can serve that goal best by supporting Wikidata — they’re growing fast, have an active community, and are better-suited to lead an open collaborative knowledge base.

    So we’ve decided to help transfer the data in Freebase to Wikidata, and in mid-2015 we’ll wind down the Freebase service as a standalone project. Freebase has also supported developer access to the data, so before we retire it, we’ll launch a new API for entity search powered by Google’s Knowledge Graph.

    Loading Freebase into Wikidata as-is wouldn’t meet the Wikidata community’s guidelines for citation and sourcing of facts — while a significant portion of the facts in Freebase came from Wikipedia itself, those facts were attributed to Wikipedia and not the actual original non-Wikipedia sources. So we’ll be launching a tool for Wikidata community members to match Freebase assertions to potential citations from either Google Search or our Knowledge Vault[2], so these individual facts can then be properly loaded to Wikidata. We believe this is the best first step we can take toward becoming a constructive participant in the Wikidata community, but we’ll look to continually evolve our role to support the goal of a comprehensive open database of common knowledge that anyone can use.

    Here are the important dates to know:

    Before the end of March 2015
    - We’ll launch a Wikidata import review tool
    - We’ll announce a transition plan for the Freebase Search API & Suggest Widget to a Knowledge Graph-based solution

    March 31, 2015
    - Freebase as a service will become read-only
    - The website will no longer accept edits
    - We’ll retire the MQL write API

    June 30, 2015
    - We’ll retire the Freebase website and APIs[3]
    - The last Freebase data dump will remain available, but developers should check out the Wikidata dump[4]

    The Knowledge Graph team at Google

    [1] http://wikidata.org
    [2] http://www.cs.cmu.edu/~nlao/publication/2014.kdd.pdf
    [3] https://developers.google.com/freebase/v1/
    [4] http://dumps.wikimedia.org/wikidatawiki/


