Gadgets Instead of Portlets?

Via CMSWatch, an interesting facet of the Apache Shindig proposal has surfaced. Shindig is Apache’s server-side container for OpenSocial, and OpenSocial, in turn, is (among other things) based on Google Gadgets.

Here’s the interesting bit:

A social application, in this context, is an application run by a third party provider and embedded in a web page, or web application, which consumes services provided by the container and by the application host. This is very similar to Portal/Portlet technology, but is based on client-side compositing, rather than server.

I was aware, of course, that you can put a Google gadget on a web page, but I hadn’t thought about it explicitly in terms of a portlet alternative before now.

I’m curious to hear from more technically savvy readers–especially those associated with LMS development: Have you thought about building a gadget server into your LMS? What are the pros and cons? How does this option compare to other display “standards”? How hard is it to make gadgets accessible? And could this work as, say, part of the IMS Learning Tools Interoperability standard?


About Michael Feldstein

Michael Feldstein is co-Publisher of e-Literate, co-Producer of e-Literate TV, and Partner in MindWires Consulting. For more information, see his profile page.
This entry was posted in LMOS, Notable Posts, Tools, Toys, and Technology (Oh my!). Bookmark the permalink.

3 Responses to Gadgets Instead of Portlets?

  1. In terms of our discussion at Newport Beach (not to alienate any non-Sakai readers of this blog), a “gadget” is simply a “helper” in our new model. I think what we want to see is a general move towards a more RESTful, markup-based strategy for aggregation, in line with John Norman’s comment that we would like to adopt a strategy capable of aggregating “widgets” (or whatever) from outside Sakai entirely, as well as those dispatched internally.

    I think this generally reflects a shift in thinking in which the client-side is increasingly seen as a relatively capable and reliable environment, as well as changes in recommendations from the accessibility community which no longer insist that the essential definition of an accessible component is one which will function with Javascript turned off (not to say that we entirely want to turn our back on strategies which do something sensible when Javascript is turned off).

    Talking technically, the difference between “portals” and “gadgets” is simply which end of the pipe the aggregation occurs at. I believe that, to be truly reasonable, we need to keep the door open in any particular situation to having aggregation performed either at the server end (for a “portal”-like strategy) or at the client (for a “gadget”-like strategy) – if only for efficiency and user experience. When rendering a complex page where much of the content *indeed* comes from the same server, we don’t really want to incur the overhead of multiple requests from the client, or disturb the user by having their content start up “raggedly”.

    This is what we hope to facilitate with the new “helper” interfaces within the Entity Broker. Talking *very* technically :), this API:

    public interface HttpServletAccessProvider {
        public void handleAccess(HttpServletRequest req, HttpServletResponse res, EntityReference ref);
    }

    refers to an aggregation which is considered essentially “client-based”, since markup rewriting is performed on the client, whereas this one:

    public interface PortletAccessProvider {
        public void handleAccess(PortletRequest req, PortletResponse res, EntityReference ref);
    }

    refers to an essentially server-based strategy, which is intended to render a more familiar (old-fashioned) “flat” portal view with markup rewritten on the server.

    With suitable presentation technology support, it should become transparent, and even dynamically switchable, which of these “internal” aggregation strategies is used for a given view or user. Only the first corresponds to a “Web 2.0”, mashup-like strategy.
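    To make the distinction concrete, here is a minimal sketch of the two strategies in plain Java. The real Sakai EntityBroker types are replaced by simplified stand-ins (`EntityReference` here is a hypothetical one-field record, and `data-src` is an invented placeholder attribute, not part of any actual Sakai or gadget API):

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of the two aggregation strategies discussed above.
public class AggregationSketch {
    // Simplified stand-in for Sakai's EntityReference.
    public record EntityReference(String id) {}

    // "Portal"-like: the server renders every fragment and stitches the
    // final markup itself, so the client makes a single request.
    public static String serverAggregate(List<EntityReference> refs) {
        return refs.stream()
                .map(r -> "<div class=\"portlet\">" + render(r) + "</div>")
                .collect(Collectors.joining("\n"));
    }

    // "Gadget"-like: the server emits placeholders pointing at an access
    // URL; the browser fetches and composites each fragment client-side.
    public static String clientAggregate(List<EntityReference> refs) {
        return refs.stream()
                .map(r -> "<div class=\"gadget\" data-src=\"/access/" + r.id() + "\"></div>")
                .collect(Collectors.joining("\n"));
    }

    // Hypothetical server-side renderer for a single fragment.
    private static String render(EntityReference ref) {
        return "rendered content of " + ref.id();
    }

    public static void main(String[] args) {
        List<EntityReference> refs =
                List.of(new EntityReference("announcements"), new EntityReference("calendar"));
        System.out.println(serverAggregate(refs));
        System.out.println(clientAggregate(refs));
    }
}
```

    The server-side path produces one finished page in one round trip; the client-side path trades extra requests for the ability to pull fragments from anywhere, including outside the LMS entirely.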

  2. Michael – Excellent opening statement. My response is less technical than Antranig’s – I treat “gadget” as a Google gadget, not a generic gadget.

    I have been working with Java Portlets (JSR-168) and WSRP (Web Services for Remote Portlets) for a long time now – trying to use them, then finding them not well suited for many applications, and then trying to get involved in improving them.

    I think that architecturally Google Gadgets, Facebook applications, and remote portlets are *exactly the same thing*. Effectively they all describe a way to build a UI out of rectangles where the rectangles come from various sources and servers and can be written in very different languages.

    The only thing that Facebook and Google have added to the discussion is that they have built their standards for integrating “rectangles” in such a way that many people can implement them. The formal standards like JSR-168 and WSRP are aimed at “Enterprise Developers” who are being paid to do this stuff – if it takes a month to figure out how to do a “hello world” application, there is no harm – because everyone is getting paid.

    For Google and Facebook – they need people to use their stuff and quickly – because few will be paid to build gadgets or Facebook applications – mostly it will be people who have some cool idea and want to join the cloud. This means that Google and Facebook give us a rich but simple interface focused on using REST – and in particular focused on making the first few steps really really easy.
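    To give a sense of how easy those first steps are: a complete legacy Google Gadget is just one small XML file. A minimal hello-world spec (of the kind Google’s gadget tutorials open with) looks roughly like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Module>
  <ModulePrefs title="Hello World" />
  <Content type="html">
    <![CDATA[
      Hello, world!
    ]]>
  </Content>
</Module>
```

    Host that file at a public URL and any gadget container can render it – compare that with the scaffolding required for a JSR-168 “hello world” portlet.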

    Easy and loosely coupled also has a cost – performance tuning can be harder and you lose control of your data. You lose control of the service level agreements – you lose control of privacy – you lose control over a lot of things – but you gain vast richness.

    So to summarize the first part of my comments – I see Google Gadgets and Facebook applications solving a problem that portlets and WSRP could not solve in a way that is satisfactory to the market. Google and FB step into a vacuum – provide a crude but effective solution and we all take off running and having fun – and doing things we wished were possible a long time ago.

    The second part of this is about IMS Tool Interoperability. Eventually we will need a way of mashing up functionality from many sources without compromising performance, functionality, privacy, or quality. IMS Tool Interoperability *wants* to solve this problem and we are making slow progress on it – I wish I had five programmers I could just tell what to do next on IMS TI – then we would make some progress.

    The problem is that this is not as simple as mashing up RSS feeds for our own newspaper. The simple example (moving towards Antranig’s comments above) is when I want to outsource both my content file storage and my testing engine – i.e. neither run in my enterprise LMS.

    Let’s say, for example, I am using Microsoft SharePoint for my file hosting, Sakai for my enterprise LMS and grading, and QuestionMark for my assessment hosting. I am authoring a test question and want to add an image to the question – I click a button in my question-authoring environment, and the application needs to bring up a list of files from SharePoint for me to choose from within the QuestionMark UI. Then, when the students take the test, the grades need to end up in Sakai. And I am just a simple faculty member – I want this to just *happen*.

    The sad thing is that this is all pretty technically feasible. But no standards exist for it and no one is willing to be the guinea pig for the application and no one seems willing to work with unfinished standards and iteratively evolve them to get to the point where real work can be done.

    The good news is that organizations with money to burn and well-funded teams looking at how to make more money in the future are looking at this (i.e. Google and Facebook) – we (the rest of the market) are not part of their thinking and don’t get much influence in where they are going. But for me – at least there is *some* investment in what will be the future service oriented architecture for learning.

    I am sitting back and watching with some glee as Google and FB beat each other up trying to out-innovate each other for the next few years – my feeling is that they will end up with an 80% solution and then for teaching and learning applications, we can build on their technical design and user interaction patterns.

    I think that a company could make a *lot* of money if they decided to get ahead of this and add T/L innovations to the FB/Gadget world. It would not be that hard – Google and FB are solving all of the provisioning, markup, authentication, and sand-boxing problems – we just need to add some standards for content and authorization.

    But the road ahead is neither simple nor trivial – there will need to be a several-year investment in solutions that will need to be thrown away. Given that teaching and learning cannot affect Google or Facebook’s direction, we are at their mercy.

    But this is how innovators work – they take risks trying to find their way into a market that does not yet exist, and when it initially does exist, it will not be too lucrative. But sometime later it will replace the old market and become the only market – then all of the former market leaders kick themselves and ask, “Damn, why didn’t we think of that?”

    Of course, the reality is that those former market leaders were probably told repeatedly by members of their organizations where the future was going – but these future thinkers in the market-leading organizations are told, “This all sounds interesting, but how will your idea increase next quarter’s revenues by 10%? Unless we can find a way to affect revenue next quarter, we will sit back and wait until someone else figures this out and then jump on as a late adopter.”

    Readers should Google “The Innovator’s Dilemma” by Clayton Christensen and read it cover to cover – it is one of my favourite books.

  3. Pingback: My Sakai Widget at e-Literate