Dienstag, 6. November 2012

OSGi DevCon 2013 coming soon

OSGi DevCon 2013 is once again co-located with EclipseCon 2013, taking place in Boston, Massachusetts, from March 25 to 28. See the OSGi Alliance announcement for full details.

The call for papers is still open until November 19, so hurry up and submit a talk. As a teaser, an early bird selection has already been made: Modularity in the Cloud: a Case Study by Paul Bakker and Marcel Offermans from Luminis.

Hope to see you next March in Boston.

Montag, 18. April 2011

Apache Sling 6 Released

Bringing Back the Fun

Apache Sling brings back the fun to Java developers and makes the life of a web developer much easier. It combines current state-of-the-art technologies and methods such as OSGi, REST, scripting, and JCR.

Sling's main focus is the important task of bringing your content to the web and providing a platform to manage and update that content in a REST style.

Sling is built as OSGi bundles and therefore benefits from all the advantages of OSGi. On the development side, a scripting layer (leveraging the Java Scripting API) allows you to use any scripting language with Sling (of course you can use plain old Java, too). And on top of this, Sling helps you develop an application in a RESTful way.

As the first web framework dedicated to JSR-283 Java Content Repositories, Sling makes it very simple to implement simple applications, while providing an enterprise-level framework for more complex applications. Underneath the covers Apache Jackrabbit is used for the repository implementation.
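To give a flavor of what that looks like in code, here is a minimal sketch of a servlet written against the Sling API. The class name is made up, and the registration of the servlet for a resource type (normally done via OSGi service properties) is omitted:

    import java.io.IOException;

    import javax.servlet.ServletException;

    import org.apache.sling.api.SlingHttpServletRequest;
    import org.apache.sling.api.SlingHttpServletResponse;
    import org.apache.sling.api.resource.Resource;
    import org.apache.sling.api.servlets.SlingSafeMethodsServlet;

    // Hypothetical example: Sling resolves the request URL to a (JCR-backed)
    // Resource before calling the servlet, so the servlet only renders it.
    public class HelloResourceServlet extends SlingSafeMethodsServlet {

        @Override
        protected void doGet(SlingHttpServletRequest request,
                SlingHttpServletResponse response)
                throws ServletException, IOException {
            Resource resource = request.getResource();
            response.setContentType("text/plain");
            response.getWriter().println("You requested " + resource.getPath());
        }
    }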

Download the new release, Apache Sling 6, today and give it a try!

Apache Sling currently comes in four flavors:
  • A standalone Java application (a jar containing everything to get started with Sling)
  • A web application (just drop this into your favorite web container)
  • The full source package (interested in reading the source?)
  • Maven Artifacts (available from the Central Maven Repository)
For more information, please visit the Apache Sling web site or go directly to the download site.

For those interested in numbers: Since the Apache Sling 5 announcement ...
  • 22 months have passed
  • 7 committers have been added to Apache Sling
  • 3 members have been added to the Apache Sling PMC
  • 158 releases have been cut and voted on
  • ~2800 commits have been sent to the SVN repository
I would like to thank every user, contributor, committer and member of the PMC for their hard work and, at times, patience in making Apache Sling 6 a reality.

Montag, 11. April 2011

To embed or to inline ?

At times you want (or need) to include third party libraries in your bundles. You basically have two options to do that:
  • Embed the complete library
  • Inline the required classes/packages (or everything)
Until recently I was in the camp of embedding the complete libraries and setting the Bundle-ClassPath manifest header accordingly. The Embed-Dependency directive of the Apache Felix Maven Bundle Plugin makes this extremely easy to do.

Lately, though, this has been questioned by Karl Pauls' pojosr project. This project brings the OSGi service registry to regular applications. The nice thing here is that it really is a stripped-down OSGi framework, with the modularity pillar basically removed. The drawback is that, as a consequence, the Bundle-ClassPath manifest header and embedded libraries are not supported.

Thus, to run your regular bundle inside pojosr, you have to inline all third party libraries instead of embedding them. The upside is that this really works: for example, the Apache Felix Declarative Services implementation works flawlessly after inlining the KXml library.

So this sparked a discussion about whether embedding or inlining third party libraries is preferable. And contrary to my former belief (that embedding is preferable), it turns out that inlining is probably the better choice for a number of reasons:
  • You can include just those classes that you really need. In fact, recent builds of Peter Kriens' fantastic BND tool support an experimental Conditional-Package header which allows packages to be inlined conditionally: "Works as private package but will only include the packages when they are imported. When this header is used, bnd will recursively add packages that match the patterns until there are no more additions" (quoted from http://www.aqute.biz/Bnd/Format). This leads to smaller bundles.
  • The OSGi framework can more easily create the class loader for the bundle, because everything is contained directly within the JAR file. Embedded libraries, in contrast, have to be unpacked, placed on the filesystem, and added to the class loader.
  • Performance will probably increase because everything is in a single JAR file and does not have to be searched for across multiple JAR files.
  • The memory footprint will also likely be reduced because only a single JAR file is accessed.
  • Chances are that even the number of consumed file handles will be reduced.
There is a drawback, though: if you happen to include a signed third party library, or a library which you are not allowed to modify (contractually or by license), you must embed it completely.

Overall, this discussion changed my mind and moved me to the inlining camp...

So, you might expect a few of the Apache Felix and Apache Sling bundles I am working on to be modified to inline third party libraries instead of embedding them. In fact, the Apache Felix Declarative Services and Web Console projects have already been modified to inline their third party libraries...

Freitag, 10. Dezember 2010

Class.forName ? Probably not ...

After subscribing to the OSGi Planet feed I felt like reading some old blog posts and stumbled upon a series of posts by BJ Hargrave about the issues with the Eclipse ContextFinder caused by the Class.forName methods.

For the full story please go and read Class.forName caches defined class in the initiating class loader (and follow the links!).

So these posts made me take a look at how we behave in Apache Sling ... and of course I hoped we would be clean.

Well, hmm, turns out we are not ... I found nine classes using Class.forName.

So we probably have to clean this up. These uses may or may not be the cause of some strange failures we have had over time. I cannot really tell, but I cannot exclude the possibility either.

BTW, this is what I did to find the classes:
$ find . -name "*.java" -exec fgrep -l Class.forName {} \;
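
For reference, the fix those posts suggest is to ask a class loader directly instead of going through Class.forName(String). A minimal sketch (the DynamicLoader class is made up for illustration):

    /**
     * Illustration only: loads a class by name without using
     * Class.forName(String), which caches the loaded class in the initiating
     * class loader and can thereby pin classes and bundles in OSGi.
     */
    public class DynamicLoader {

        public static Class<?> load(String className, ClassLoader loader)
                throws ClassNotFoundException {
            // Problematic in OSGi:
            //   return Class.forName(className);
            // Preferred: ask the class loader that is supposed to see the class
            return loader.loadClass(className);
        }
    }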

Donnerstag, 15. Oktober 2009

On Version Numbers

I have been thinking about version numbers lately while working on some API extensions of the Sling Engine bundle. So here is what I think versions are all about, and why we should all be very careful when changing code and assigning versions to it.

On a high level versions have various aspects:
Syntax
There is no global agreement on the correct syntax of versions. I tend to like the OSGi syntax specification: the version has four parts separated by dots. The first three parts are numbers, called the major, minor and micro version. The fourth part is a plain (reduced character set) string which may be used to describe a particular version. Version numbers are compared as you would expect, except that the fourth part is compared using case-sensitive string comparison of the actual Unicode code points of the characters.
Semantics
The semantics of a version define what it means to increment each place of a version. In the world of software development there is even less agreement on the semantics of version numbers than there is agreement on the syntax. The OSGi specification just defines suggested semantics.
Expectations
When seeing product version numbers people tend to have expectations towards the products. For example, when Firefox went from 2.x to 3.0 we expected a major change. Likewise, when Day upgraded the version number to 5 for the newest version of Communiqué, the expectation that it is a major new version of the product is correct. In fact, we completely rewrote Communiqué for the 5.0 release.
Version Items
When it comes to applying version numbers, there are quite a number of things in a single product which may be numbered. Take for example Day Communiqué 5. There is the product - the thing you take out of the box and install on your server. Then there are OSGi bundles. Finally, there are Java packages shared between the bundles and used by the application scripts.
So here are my definitions of the version number aspects laid out above.

Syntax

IMHO the syntax for version numbers as defined in the OSGi Core specification (Section 3.2.4, Version) is good enough and clear for most uses. The nice thing about this specification is that Section 3.2.5, Version Ranges, defines a syntax for ranges of versions. Such ranges are of great use when depending on other items, most importantly of course the list of imported Java packages.
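
To make the comparison rules concrete, here is a small sketch using the org.osgi.framework.Version class from the OSGi core API (the version strings are arbitrary examples):

    import org.osgi.framework.Version;

    public class VersionSyntaxDemo {

        public static void main(String[] args) {
            Version v120  = Version.parseVersion("1.2.0");
            Version v120b = Version.parseVersion("1.2.0.beta");
            Version v1100 = Version.parseVersion("1.10.0");

            // major, minor and micro are compared numerically: 1.10.0 > 1.2.0
            System.out.println(v1100.compareTo(v120) > 0);   // true

            // the qualifier is compared as a case-sensitive string, and any
            // qualifier sorts above the same numeric version without one
            System.out.println(v120b.compareTo(v120) > 0);   // true
        }
    }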

Semantics

As for the semantics, the main problem comes from the fact that not all versioned items understand version numbers in the same way. For example, on the product level, cf. Day Communiqué, the version number of a release is generally defined by marketing and/or product management.

I will not dive into how product version numbers are to be defined. This is outside of my working knowledge and beyond my abilities ;-)

On the OSGi bundle level, on the other hand, and even more so on the Java package level (for OSGi package exports), the version number is more the developer's call. Version numbers on this level are intended to convey to other developers something about the evolution of the bundle and/or package.

Let's start with exported Java packages. I tend to attribute the following semantics to the parts of a version number (a small illustration follows the list):
  • Increasing the major version number means the API has been modified in an incompatible way. Mostly this means public classes, interfaces, methods, or fields have been removed or renamed. As a consequence, code using or implementing the API will break and has to be modified.
  • Increasing the minor version number means the API has merely been enhanced in a way that is compatible for users of the API. Code implementing the API, though, might have to be modified to comply with the added API, such as the definition of new methods.
  • Increasing the micro version number means that there have been some bug fixes. Generally, a pure API consisting of just interfaces has little room for bugs which do not amount to a minor or even major version number increase. If the exported packages of a bundle happen to contain concrete or abstract classes with implementation code, bugs cannot be excluded. As such, it is conceivable that the micro version number of an exported package might be increased.
  • As for the qualifier part, as the fourth part of a version number is called by the OSGi specification, the meaning of this part is completely free. On the package export level I would go as far as to say it should generally not be used. The qualifier part may be interesting on the OSGi bundle level to create inter-release builds.
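
As a small illustration of these rules, consider the following hypothetical exported package (the Greeter interface is made up):

    package com.example.greeting;  // hypothetical exported API package

    import java.util.Locale;

    /**
     * Version 1.0.0 of this package declared only greet(String).
     *
     * Version 1.1.0 (shown here) adds greet(String, Locale): code calling the
     * API keeps working unchanged, so a minor increase is sufficient for
     * users, but every class implementing Greeter has to be adapted.
     *
     * Renaming or removing greet(String) would break users as well and would
     * require a major increase to 2.0.0.
     */
    public interface Greeter {

        String greet(String name);

        String greet(String name, Locale locale);
    }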


Expectations

People's expectations when it comes to version numbers are not easy to pin down. Most people expect different things. But I think one thing is common to all: if the version number increases, something must have changed.

So I think it is important for us developers to understand that we only increase the version number of an item if there is a change -- I am not sure whether a fixed spelling error in some Java comment is change enough. Again, your mileage may vary if you happen to be the product manager of a product to be sold ....

Recommendations

Based on how I understand the version number parts in terms of exported packages, here are my recommendations for package imports and bundle versions.
  • If you implement the exported API of another bundle, import the API package using a version range of the form [x.y,x.y+1). This means accept any increment in the micro and qualifier parts. But as soon as the minor version number changes, consider this an incompatibility.
  • If you use an exported API, import the API package using a version range of the form [x.y,x+1). This means accept any version starting with a minimum number up to the next breaking API change identified by a new major version number (see the sketch after this list).
  • Don't increase the version number of an API package if nothing in that package has changed at all.
  • Bundles should be versioned following the versioning of their exported packages. So if at least one of the exported packages has a major version number increase, the bundle's version should also have a major version number increase. Likewise for the minor number. The use of qualifiers is optional and sometimes helpful.
  • Apart from being driven by versioning of exported packages, bundle versions may also be increased depending on the extent of changes in the bundle. For example in the case of a pure implementation bundle, greatly increasing the functionality might give rise to a major version number increase of the bundle.
  • If you are using Maven to build your projects, always depend on the lowest version of a dependent module which has the API functionality you need.
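
The sketch below illustrates the two import ranges from the first two recommendations, using the org.osgi.framework.VersionRange class (available in OSGi Core Release 4.3 and later; older frameworks perform the same check internally when wiring imports). The version values are arbitrary examples:

    import org.osgi.framework.Version;
    import org.osgi.framework.VersionRange;

    public class ImportRangeDemo {

        public static void main(String[] args) {
            // range for a *user* of an API package currently at version 1.2
            VersionRange user        = new VersionRange("[1.2,2)");
            // range for an *implementer* of the same API package
            VersionRange implementer = new VersionRange("[1.2,1.3)");

            Version bugFix   = Version.parseVersion("1.2.7");  // micro increase
            Version addition = Version.parseVersion("1.3.0");  // minor increase
            Version breaking = Version.parseVersion("2.0.0");  // major increase

            System.out.println(user.includes(bugFix));          // true
            System.out.println(user.includes(addition));        // true - still usable
            System.out.println(user.includes(breaking));        // false - incompatible

            System.out.println(implementer.includes(bugFix));   // true
            System.out.println(implementer.includes(addition)); // false - new API to implement
        }
    }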


Link

The Eclipse site contains a very interesting and IMHO very practice-proven text about the versioning of products, bundles and packages: Version Numbering

Update 2012/10/11

The OSGi Alliance released an excellent white paper on Semantic Versioning which pretty much aligns with what I was talking about above. 

Freitag, 17. April 2009

Ready to serve requests ...

In the Apache Sling project we have an interesting problem: Knowing when the application has finished its startup.

Coming from a background of traditional applications, you know when the system has finished its startup. For example, a servlet container knows it has finished its startup when all web applications have been started.

In Apache Sling, the situation is a bit different: Apache Sling is an extensible system, where extensions may simply be added by adding more bundles. "Easy", you say, "just wait for all bundles to have been started and you know when the application is ready". True, but there is a catch.

To extend Apache Sling, you register services with the OSGi service registry. "Still easy", you might say. Right, if the services are all started by bundle activators, we can still depend on having all bundles started for the system to be ready.

Again, this is only part of the story: Some services depend on other services. So the dependent services may only be started when the dependencies get resolved. This is where the trouble starts.

To help solve the dependency issues in a simple way, we employ OSGi Declarative Services. A great thing: you define components and services, have the dependency requirements enforced, get dependency injection and configuration support, and ... much more.

"What does it cost?", you say. Well, we buy this functionality with a lot of asynchronicity: When all bundles have been started, not all components may have been activated and not all services may have been registered.

Now, when is the application ready ? I cannot easily tell.

One approach could be to have a special service watch a configurable list of services. When all of those services are available and the framework has started, the watcher signals Application Ready. As soon as one of the services goes away, the watcher might signal Application Not Ready.
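
A minimal sketch of such a watcher, built on the standard ServiceTracker utility (the class and method names are made up; a real implementation would also have to emit the Ready/Not Ready signals as services come and go):

    import org.osgi.framework.BundleContext;
    import org.osgi.util.tracker.ServiceTracker;

    // Hypothetical readiness watcher: tracks a configurable list of service
    // interface names and reports ready only while all of them are present.
    public class ReadinessWatcher {

        private final ServiceTracker[] trackers;

        public ReadinessWatcher(BundleContext context, String[] requiredServices) {
            this.trackers = new ServiceTracker[requiredServices.length];
            for (int i = 0; i < requiredServices.length; i++) {
                this.trackers[i] = new ServiceTracker(context, requiredServices[i], null);
                this.trackers[i].open();
            }
        }

        /** True only if every required service is currently registered. */
        public boolean isReady() {
            for (ServiceTracker tracker : trackers) {
                if (tracker.getService() == null) {
                    return false;
                }
            }
            return true;
        }

        public void dispose() {
            for (ServiceTracker tracker : trackers) {
                tracker.close();
            }
        }
    }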

The real question arising now is: What services are required for the application to be considered ready ? Can we come up with such a list ? How do we manage this list in light of more services to come, which might be considered vital ?

Any input would be appreciated ;-)

Dependency Injection in OSGi

The OSGi framework and its compendium services provide a whole lot of fun for building applications. Defining bundles is a cool way to cut the big job into pieces and enjoy the coolness of separation of concerns, just like the old Romans said: Divide et Impera !

One interesting compendium specification is the Declarative Services Specification. This specification tries, and IMHO succeeds very well, to bring some of the cool stuff of Spring, namely dependency injection, into the OSGi world. Just like the application descriptors in Spring, you have component descriptors in Declarative Services.

Using a component descriptor, you define the following properties of a component:


  • The name of the component and whether it is activated immediately or not

  • Whether the component is a service and the service interfaces to register the component with

  • Which other services are used by the component. These services may be injected (bound, in OSGi speak) or may be looked up. There is also the notion of mandatory and optional services, which makes it possible to delay the component's activation until a mandatory service becomes available.

  • Configuration properties. Some properties may be injected by the descriptor itself. But at the same time, configuration properties may also be overwritten by configuration from the Configuration Admin Service. Thus the configuration of components may even be very dynamic.



The good news for the XML-haters like me: over in the Apache Felix project we have a Maven 2 plugin which takes annotations (JavaDoc tags or Java 5 annotations) from your component classes and builds the descriptors on your behalf.
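
To give an idea of what this looks like, here is a minimal sketch of a component. It uses the standard OSGi Declarative Services annotations (org.osgi.service.component.annotations) that later standardized this approach; the Felix SCR annotations mentioned above look very similar. The HeartBeat class and its use of the LogService are made up for illustration:

    import org.osgi.service.component.annotations.Activate;
    import org.osgi.service.component.annotations.Component;
    import org.osgi.service.component.annotations.Reference;
    import org.osgi.service.log.LogService;

    // Registered as a java.lang.Runnable service; activated immediately and
    // only once the mandatory LogService reference has been bound.
    @Component(service = Runnable.class, immediate = true)
    public class HeartBeat implements Runnable {

        private volatile LogService log;

        @Reference
        protected void bindLog(LogService log) {
            this.log = log;
        }

        protected void unbindLog(LogService log) {
            this.log = null;
        }

        @Activate
        protected void activate() {
            log.log(LogService.LOG_INFO, "HeartBeat component activated");
        }

        public void run() {
            log.log(LogService.LOG_INFO, "beat");
        }
    }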

So the next time, you are looking for dependency injection, you might want to consider OSGi and Declarative Services ;-)

Just for completeness, here is a list of other frameworks providing some sort of dependency injection:



In the end they all work more or less the same, in that they provide some abstraction layer on top of the basic OSGi framework functionality: the service registry. This is really the greatest thing of all and IMHO shows the cleverness of the OSGi Framework specification: with just three basic layers (modularity, lifecycle and the service registry), you get the whole world in your hands to build flexible, modular and extensible applications.