Wednesday, June 2, 2010

JBoss specific packaging

I am posting this article to help anyone who may one day find themselves in my situation: packaging a war with Maven for JBoss.

The challenge is to keep the ability to package the war with the default configuration, so that the JBoss-specific files are not shipped to other servers such as Tomcat.

I started by isolating my webapp's classloader so that JBoss only loads the jars located in my war's /lib directory.
To do so, I define a "jboss-classloading.xml" in a directory of my project dedicated to the JBoss files:
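The layout looks roughly like this (only the file names mentioned in this article are shown):

    src/main/webapp/WEB-INF/web.xml        <- default web.xml, kept for Tomcat and the other servers
    src/main/jboss/web.xml                 <- JBoss variant, with the Spring logger listener commented out
    src/main/jboss/jboss-classloading.xml  <- JBoss classloader isolation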


Then I define the classloader isolation of my webapp in jboss-classloading.xml:
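A sketch of what such a configuration typically looks like on JBoss AS 5; the domain name and the attribute values below are indicative only, not the project's actual file:

    <classloading xmlns="urn:jboss:classloading:1.0"
                  domain="petalsview.war"
                  export-all="NON_EMPTY"
                  import-all="false"
                  parent-first="false">
    </classloading>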
Because Petals View uses Spring, the JBoss and Spring loggers conflict, so I decided to create a second web.xml (located in src/main/jboss/web.xml) with the Spring logger listener commented out, like this:
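Something along these lines; the exact listener class is an assumption (Log4jConfigListener being the usual suspect):

    <!--
      Spring logger listener disabled in the JBoss variant of web.xml,
      because the JBoss and Spring loggers conflict.

      <listener>
          <listener-class>org.springframework.web.util.Log4jConfigListener</listener-class>
      </listener>
    -->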
The web.xml under src/main/webapp/WEB-INF/web.xml keeps this listener uncommented, because it works correctly on the other servers.

Now that I have all the files needed for the deployment to work correctly under JBoss AS, I just have to create a JBoss-specific profile in my pom.xml.

Explanation: the JBoss-specific profile, named "jboss-packaging", packages the war with the JBoss-specific files found under /src/main/jboss (jboss-classloading.xml and web.xml).

With this profile, the Maven command mvn -P jboss-packaging generates a war dedicated to JBoss deployment, while the default packaging for the other servers is kept with the plain mvn package command. The default war packaging does not include jboss-classloading.xml under /WEB-INF/ and uses the web.xml located in /src/main/webapp/WEB-INF/ instead of the src/main/jboss/web.xml file.

The JBoss-specific profile declaration in pom.xml:
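A sketch of what such a profile can look like, using the maven-war-plugin; this is an illustration, not the project's actual pom.xml:

    <profiles>
      <profile>
        <id>jboss-packaging</id>
        <build>
          <plugins>
            <plugin>
              <groupId>org.apache.maven.plugins</groupId>
              <artifactId>maven-war-plugin</artifactId>
              <configuration>
                <!-- Use the JBoss-specific web.xml instead of the default one -->
                <webXml>src/main/jboss/web.xml</webXml>
                <webResources>
                  <resource>
                    <!-- Copy jboss-classloading.xml into WEB-INF -->
                    <directory>src/main/jboss</directory>
                    <targetPath>WEB-INF</targetPath>
                    <includes>
                      <include>jboss-classloading.xml</include>
                    </includes>
                  </resource>
                </webResources>
              </configuration>
            </plugin>
          </plugins>
        </build>
      </profile>
    </profiles>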

I can now package two different wars: on the one hand a JBoss-specific war with the mvn -P jboss-packaging command, and on the other hand a default war for other server deployments with the mvn package command.


Thanks for reading.

Best regards,

Adrien

Friday, May 14, 2010

New Petals Webconsole feature


Well, I waited quite some time before posting a new article because I wanted to present the new feature I added to the Petals Webconsole: the technical monitoring that had been expected for quite some time in the new webconsole version.


 

This new version of the monitoring functionality is more robust, flexible and intuitive than the old one. It allows you to declare several filters, each of which can monitor exchanges based on the following parameters:

  • Interface name
  • Service name
  • Endpoint name
  • Operation name
All these parameters are optional: you can declare a filter with null/null/null/null parameters, and such a filter then monitors every exchange passing through Petals ESB.

You can also declare a filter on an endpoint name only, which will monitor all exchanges on the specified endpoint.


 

For this demonstration, I assume that the following components are already installed and started on the ESB:

  • petals-se-rmi, in order to send a message.
  • petals-sample-clock, in order to provide a « clock service ».



 

The first step of this demonstration is to create a new filter that will monitor all exchanges performed on the « clock service » with the operation name « time ». On the following screenshot you can observe that the four parameters (service, interface, operation, endpoint) are checked, which means that only exchanges matching the following parameters will be monitored:

  • Interface name : {http://petals.ow2.org}Clock
  • Service name : {http://petals.ow2.org}ClockService
  • Endpoint name : 47141270665691
  • Operation name : {http://petals.ow2.org}time
The « store » parameter will be used to persist the monitored exchange information.



After creating the filter you can see it in the filter list of the current server:



You can enable it by clicking on « Enable » in the monitoring column. After that the filter is active and begins to monitor the targeted exchanges.


 


 

For this demonstration, go to the « Test » tab in order to send a message to the ClockService.

In the « Test » form, you need to select:


 

  • The clock endpoint : 47141270665691
  • The interface name : {http://petals.ow2.org}Clock
  • The service name : {http://petals.ow2.org}ClockService
  • The operation name : {http://petals.ow2.org}time

 

Keep in mind that these parameters must match the filter parameters.

Finally, select « InOut », the MEP accepted by the « time » operation, and check « send a DONE » message. You normally don't need to change the « timeout », and the message content is generated automatically.



Then click on « SUBMIT » in order to send the message to the « ClockService ». The service should normally send you the following response:



When the response is received, the previous exchange should have been monitored, thanks to the monitoring filter activated on the endpoint/interface/service/operation. In order to display this monitored exchange, go to the « Filter Monitoring » tab:



You can observe in the previous screenshot that the exchange was correctly monitored. If by bad luck the exchange is not displayed, you can « SUBMIT » a suitable start/end date in order to refresh the view and display the right slice of monitored exchanges. Once you have found the right exchange, click on « More details ».


 

The following view will be displayed :




 

This huge view displays all the exchange information; the big advantage is that the user doesn't need to click anywhere to see the data. I dare say this view is perfect :) because the attachments, properties and content of each message are detailed, as well as a representative image of the exchange pattern (here « InOut »), the consumer component, the provider component and all the messages sent during the exchange. The duration, the exchange properties and the related information (endpoint, service, interface, operation) are also displayed, and especially the status of the exchange! Here it is « done », which tells you that the exchange finished correctly.


 


 

To conclude, we developed this monitoring filter mechanism because its declarative aspect makes it very lightweight and flexible to use. If I need to monitor one endpoint/interface/service/operation, I just have to declare a monitoring filter with those parameters, and once it is activated all exchanges matching the filter parameters will be persisted. The best benefit is that not all messages are persisted, only the ones I want, the minimum needed…


 

After reading this and trying it out, feel free to give me your opinion and to report any exceptions you find; any comment on this new feature will be welcome.

Wednesday, April 7, 2010

A good perf to start the year

I'm very happy: on Monday I finally chained "Watrou Watrou", the 7C+ route at Bonrepaux!


I'm very happy because this route took me several days of work to perfectly memorize all the moves that were hard for me...

This performance raises my motivation for this year :-)




Adryen

Friday, April 2, 2010

An idea takes shape in my head

While reading Martin Fowler's article on InfoQ:

http://www.infoq.com/news/2010/03/RESTLevels

a good idea came to my mind, related to the topic I wanted to tackle for my master's thesis.

My goal was to write my thesis on the W3C WCAG, but that is not allowed for the moment...


But what is WCAG? Web Content Accessibility Guidelines.

Is it a requirement, a specification...?

No! It is just a guidelines document which explains, given today's available web technologies like HTML, CSS, ..., how to make Web content more accessible to disabled people.

It is a subject I really care about, deep in my heart, because bringing more accessible Web content to deaf-blind people and to people with visual or physical disabilities is a very beautiful topic.

In his article, Martin Fowler performs the following request on a given doctor resource:

     POST /doctors/mjones HTTP/1.1

     <openSlotRequest date = "2010-01-04"/>

The returned response provides each available slot as a directly addressable resource:

     HTTP/1.1 200 OK
     <openSlotList>
       <slot id = "1234" doctor = "mjones" start = "1400" end = "1450"/>
       <slot id = "5678" doctor = "mjones" start = "1600" end = "1650"/>
     </openSlotList>

In order to provide better accessibility to blind people, people with visual deficiencies, ..., I suggest adding XML attributes such as "description". But why?


For a nice purpose: just as a talking web browser allows blind people to interact with the World Wide Web, a "description" attribute would make the REST service describable. Such a describable REST service could describe the returned response to a blind person.

     HTTP/1.1 200 OK
     <openSlotList>
       <slot id = "1234" doctor = "mjones" start = "1400" end = "1450"
          description="Available appointment with MJones doctor
          with the slot 2PM to 2:50PM"/>
       <slot id = "5678" doctor = "mjones" start = "1600" end = "1650"
          description="Available appointment with MJones doctor
          with the slot 4PM to 4:50PM"/>
     </openSlotList>

Isn't it beautiful for a blind person? All it needs is an engine that reads this REST service description.




Adrien

Saturday, January 30, 2010

Provide a Service thanks to OSGi

Hi everybody !

In this article I will show you how to create and deploy a service thanks to OSGi.

But for the moment, let's just focus on the OSGi acronym and some explanations of the subject.

OSGi : Open Services Gateway initiative

OSGi is considered a framework; it's a modular system for Java that implements a complete and dynamic component model, something that unfortunately does not exist in standalone Java/VM environments.

OSGi can be represented as a set of stacking bricks, applications or components. In the OSGi context these are called "bundles"; the main purpose of OSGi is to manage the bundle life cycle of your Java packages and classes provided as "services". The life cycle management can be done remotely and covers five states (installed, started, stopped, updated, uninstalled), but the real challenge for OSGi is to perform these state changes without requiring a reboot.




For this article I chose Felix, the OSGi implementation from The Apache Software Foundation, but several other implementations are also available, such as Equinox from Eclipse, Knopflerfish OSGi, Oscar...


http://felix.apache.org


In this article, I create a bundle that implements and provides a People Access Service. First we need to define the service interface:

This interface defines a function that allows checking whether a person exists in the world, given a first name and a last name.
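A minimal sketch of what this interface might look like; only the package name comes from this article, the interface and method names are assumptions:

    package aruffie.osgi.tutorial.service;

    // Shared contract of the People Access Service.
    public interface PeopleAccessService {

        // Checks whether a person with this first name and last name exists.
        boolean personExists(String firstName, String lastName);
    }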

Pay attention to the package aruffie.osgi.tutorial.service, because it will be really important afterwards. I do this because I need to share the service interface with other bundles: it is better to separate the interfaces that need to be shared from the code that doesn't. This OSGi approach provides a strong separation between interface and service implementation.

In order to respect this approach I create the service implementation in another package, aruffie.osgi.tutorial.implementation:
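Again a sketch, with an assumed class name and a placeholder body:

    package aruffie.osgi.tutorial.implementation;

    import aruffie.osgi.tutorial.service.PeopleAccessService;

    // Implementation kept outside the shared package.
    public class PeopleAccessServiceImpl implements PeopleAccessService {

        public boolean personExists(String firstName, String lastName) {
            // A real implementation would query a directory or a database.
            return firstName != null && lastName != null;
        }
    }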


Now the Person.java model class:
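A possible shape for this model class; the package and the fields are assumptions, the article only names the file Person.java:

    package aruffie.osgi.tutorial.service;

    // Simple immutable model of a person.
    public class Person {

        private final String firstName;
        private final String lastName;

        public Person(String firstName, String lastName) {
            this.firstName = firstName;
            this.lastName = lastName;
        }

        public String getFirstName() { return firstName; }

        public String getLastName() { return lastName; }
    }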



Now I need to create the bundle activator class for my service. You can find the source code in ServiceActivator.java:
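A sketch of such an activator, using the standard org.osgi.framework API and the class names assumed above:

    package aruffie.osgi.tutorial.implementation;

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.ServiceRegistration;

    import aruffie.osgi.tutorial.service.PeopleAccessService;

    // Registers the People Access Service when the bundle starts.
    public class ServiceActivator implements BundleActivator {

        private ServiceRegistration registration;

        public void start(BundleContext context) throws Exception {
            // Publish the implementation under the shared interface name.
            registration = context.registerService(
                    PeopleAccessService.class.getName(),
                    new PeopleAccessServiceImpl(),
                    null);
        }

        public void stop(BundleContext context) throws Exception {
            // Withdraw the service when the bundle is stopped.
            registration.unregister();
        }
    }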




The main purpose of this activator is, as its name indicates, to register my People Access Service in the Felix OSGi context.


Finally I need to provide the associated manifest.mf that contains the bundle metadata for its packaging, deployment... (it must be shipped with the service in the jar archive).
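A sketch of this manifest; the Bundle-Activator, Export-Package and Import-Package entries match what is described below, while the symbolic name, bundle name and version are assumptions:

    Manifest-Version: 1.0
    Bundle-ManifestVersion: 2
    Bundle-Name: People Access Service
    Bundle-SymbolicName: aruffie.osgi.tutorial
    Bundle-Version: 1.0.0
    Bundle-Activator: aruffie.osgi.tutorial.implementation.ServiceActivator
    Export-Package: aruffie.osgi.tutorial.service
    Import-Package: org.osgi.framework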




I declare the activator of my bundle thanks to "Bundle-Activator", and specify the shared package with "Export-Package".
My bundle also needs to import the org.osgi.framework package. Why? Because I use an activator to activate my service in the OSGi context (it needs org.osgi.framework.BundleActivator).


Finally I package the jar with all the classes and the manifest file.
The next step is to launch Felix from my Felix environment with the command "java -jar bin\felix.jar":


When Felix is running, deploy the People Access Service packaged in "osgitutorial.jar" with the following command: "start file:[your jar path]".

Note that "start" both installs and starts the bundle... (but you can also use the install command, followed by start).

Now your People Access Service is running in the Felix OSGi context...


Adrien

Thursday, January 28, 2010

OpenSUIT Maven Dependencies

If you want to use OpenSUIT as the presentation framework of your project, and your project is "mavenised", please use version 1.0, which is now available!


To use the OpenSUIT core, just add these three dependencies:
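As a sketch only: the core artifactId below is assumed from the naming pattern of the chart and spring modules shown further down, and the two XMLMap dependencies are not reproduced here:

    <dependency>
      <groupId>org.ow2.opensuit</groupId>
      <artifactId>opensuit-core</artifactId> <!-- assumed artifactId -->
      <version>1.0</version>
    </dependency>
    <!-- ...plus the two XMLMap dependencies (XML binding) required by the core;
         their exact coordinates are not listed in this sketch. -->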





The last two dependencies are required because the OpenSUIT core relies on XMLMap annotations for binding XML to HTML components.




If you need to display charts in your application, please use the OpenSUIT chart module!
Based on JFreeChart, it is available by adding this dependency to your pom.xml:


artifactId: opensuit-chart
groupId: org.ow2.opensuit 
version: 1.0


For Spring users, you can easily integrate Spring and OpenSUIT thanks to this dependency:


artifactId: opensuit-spring
groupId: org.ow2.opensuit 
version: 1.0
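In pom.xml form, these two module dependencies look like this:

    <dependency>
      <groupId>org.ow2.opensuit</groupId>
      <artifactId>opensuit-chart</artifactId>
      <version>1.0</version>
    </dependency>
    <dependency>
      <groupId>org.ow2.opensuit</groupId>
      <artifactId>opensuit-spring</artifactId>
      <version>1.0</version>
    </dependency>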


Adrien

Tuesday, January 26, 2010

OpenSUIT 1.0 released!

Today Pierre and I completed version 1.0 of the OpenSUIT framework!


We are really happy, and we already offer several tutorials on the website for the different OpenSUIT modules.


We hope that you will find OpenSUIT easier to use than Struts, JSF, Echo...

For information:

SUIT stands for "Simple UI Toolkit". OpenSUIT targets the rapid development of presentation layers, especially dedicated to SOA.

If you are interested in getting started, go to http://opensuit.ow2.org/

More articles on the OpenSUIT framework coming soon here!


Adrien

Thursday, January 21, 2010

WSIT between Java & .Net

In this article I would just like to inform you of the existence of WSIT, the "Web Services Interoperability Technologies".

The WSIT specification was designed to handle and ensure web service interoperability in the scope of enterprise technologies. Sun, Microsoft and other technology players worked together on several subjects, such as message optimization, reliable messaging, security...
WSIT groups the many topics it covers into high-level categories, for example:

- SOAP, MTOM and WS-Addressing are aggregated under "Optimization".
- WS-ReliableMessaging, WS-Coordination and WS-AtomicTransactions are grouped under "Reliability".
- The "Bootstrapping" category contains the WSDL, WS-Policy and WS-MetadataExchange topics.
- "Security" covers WS-SecurityPolicy, WS-Security, WS-Trust and WS-SecureConversation.


Thanks to DotNet France, I have published a simple course about ".NET WCF and Java interoperability", available here:

http://www.dotnet-france.com/Documents/WCF/WCF%20et%20int%C3%A9ropabilit%C3%A9%20avec%20JAVA.pdf


Julien Dollon had asked me for a presentation about these communication technologies. The goal was not to start a controversy or a fight, but to really show how easily they can communicate.

In that course I use WCF (Windows Communication Foundation) on .NET 3.5 and Metro on the Java side. Since these two technologies are based on standards, interoperability was really easy to implement, but in order to add quality of service and .NET interoperability to Metro, I recommend taking a look at the WSIT specification.


Why? Because even though these two web services stacks (WCF for .NET and Metro for Java) are well built, various interoperability problems can appear; for example, numeric types such as long and double are not always mapped identically between Java and .NET.

Saturday, January 16, 2010

SSC "Simple Security Checking"

I've had enough of all these articles telling you how this or that technology works, or how to install a web server on a network... it gets really boring after a while!


In this article I will show you that, although this is a rudimentary subject, it can become a point of failure in a critical architecture!

Because nowadays developers don't pay attention to these basic algorithmic notions, I'm rubbing salt into the wound, mouhaha!


But what is it? Simply the validation of input parameters...


I think many developers can make this type of error, and that can bring down an entire system!




You don't believe it? Let me introduce a critical context, for example a trajectory calculation system embedded in a space shuttle or spaceship...


One method used to calculate the trajectory could be the following:
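The original snippet is not reproduced here; a sketch of what such a method could look like, with an assumed field and formula, is:

    public class TrajectoryComputer {

        private double trajectory;

        // No validation of the input parameters: the division by "distortion"
        // below silently produces Infinity/NaN when distortion is 0
        // (or an ArithmeticException with integer types).
        public void updateTarget(double distance, double distortion) {
            double correction = distance / distortion;
            this.trajectory += correction;
        }
    }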







Imagine if the "updateTarget" method calculates the new trajectory with the "distortion" parameter set to 0.

Imagine the beautiful error or exception that can cause! In this context, a simple malfunction of the trajectory calculation system can lead to the "destruction, alteration, damage, deterioration, mischief, loss, devastation..." of the space shuttle.




In this article I am just trying to remind you that simple little things must be taken seriously in a critical environment, because the security of the architecture can be impacted...


Here is how it should have been done: check the input parameters :)
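A sketch of the same hypothetical method, this time with the input parameters validated before use:

    public class TrajectoryComputer {

        private double trajectory;

        public void updateTarget(double distance, double distortion) {
            // Validate the input parameters before doing anything with them.
            if (distortion == 0.0 || Double.isNaN(distortion)) {
                throw new IllegalArgumentException("distortion must be a non-zero number");
            }
            if (distance < 0.0 || Double.isNaN(distance)) {
                throw new IllegalArgumentException("distance must be a positive number");
            }
            double correction = distance / distortion;
            this.trajectory += correction;
        }
    }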

First Draft Architecture

For one of my projects, I designed this architectural scheme. I love taking my pencil and thinking on paper...


In order to provide packaged software for an association, I started thinking about the application's architecture on paper.

The following drawing is my first idea and my view of the deployment on the association's network (click on the picture to display it fully):




The first part of my drawing is Server 1, which hosts the database, several Hibernate entities, and the core of my architecture: the JMX server.

The JMX server allows exposing several services over the network as Managed Beans. I will explain this choice later!


On Server 2, a web server like Tomcat, Jetty, ... is deployed. But why not GlassFish, JBoss, WebSphere...? Because I don't need a heavy application server (for example to use EJB, JMS...).


On the web server we simply find my web application, along with a JMX connector that allows the web app to use the different services deployed on the first server.


The web server will expose the association's web app on the World Wide Web.






Now, back to the point! Why did I use JMX Managed Beans instead of EJBs, RMI objects, ...?


I simply preferred to publish all services on the network as Managed Beans rather than EJBs, because my services are wrapped in a JMX layer that allows me to manage, monitor and dynamically change their behavior!
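As a rough sketch of the idea (all names here are hypothetical, this is not the association's actual code): on Server 1 a service is registered as a standard MBean, and the web app on Server 2 reaches it through a JMX connector.

    import java.lang.management.ManagementFactory;

    import javax.management.JMX;
    import javax.management.MBeanServer;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class JmxSketch {

        // Management interface of a hypothetical association service.
        public interface MemberServiceMBean {
            int countMembers();
        }

        // Its implementation, registered on Server 1 as a standard MBean.
        public static class MemberService implements MemberServiceMBean {
            public int countMembers() { return 42; } // placeholder
        }

        // Server 1 side: expose the service as a Managed Bean.
        public static void registerOnServer1() throws Exception {
            MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
            mbs.registerMBean(new MemberService(),
                    new ObjectName("association:type=MemberService"));
            // The platform MBean server can then be reached remotely, e.g. by
            // starting the JVM with the com.sun.management.jmxremote.* options.
        }

        // Server 2 side (web app): reach the service through a JMX connector.
        public static int countMembersFromWebApp() throws Exception {
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://server1:9999/jmxrmi"); // hypothetical host/port
            JMXConnector connector = JMXConnectorFactory.connect(url);
            try {
                MBeanServerConnection connection = connector.getMBeanServerConnection();
                MemberServiceMBean proxy = JMX.newMBeanProxy(connection,
                        new ObjectName("association:type=MemberService"),
                        MemberServiceMBean.class);
                return proxy.countMembers();
            } finally {
                connector.close();
            }
        }
    }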


Because this application may evolve quickly, it's a good thing to be able to change, remove and add components at runtime.


Wrapping my services in managed beans also allows me to generate statistics on each service and to be notified of malfunctions.


Another interesting point is that JMX can expose your MBeans not only over RMI but also over HTML thanks to the HTML adaptor, over IIOP, and via several other protocols with the MX4J adaptors.

Tuesday, January 12, 2010

Package Architecture

I don't claim that my vision is the best, but I think it's a good practice.


At first, many programmers and developers structure their packages as follows:
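Something like this, sketched with hypothetical class names, where the DAO interface and its Hibernate implementation live side by side:

    com.myapp.dao
        PersonDao.java            (DAO interface)
        HibernatePersonDao.java   (Hibernate implementation, mixed with the interface)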









In order to isolate the different parts of your package architecture, I propose the following hierarchy:
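A sketch of the proposed layout, still with hypothetical class names, where the implementation moves into a dedicated impl sub-package:

    com.myapp.dao
        PersonDao.java            (interface only, no Hibernate import)
    com.myapp.dao.impl
        HibernatePersonDao.java   (Hibernate implementation, hidden behind the interface)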


But why this hierarchy?

Because I think this package hierarchy is more suitable and more representative of a really good architecture.

The Hibernate implementation of your DAO is now hidden in the impl package, and the dao package itself is free of Hibernate dependencies.

The dao package, with the DAO interfaces, can now be extracted and used without dragging in explicit Hibernate dependencies.


These Hibernate dependencies can cause problems in several projects, but with this package hierarchy things are now clearer and more flexible.


Huge packages can also be split into sub-packages, so their contents become more consistent and more understandable.

Wednesday, January 6, 2010

How to integrate Spring & Struts2 into your web application

In order to integrate these two frameworks, you need to declare each of them in the web.xml file.

The content of your web descriptor file should be:
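A sketch of the relevant part of web.xml; the filter, filter-mapping and listener elements are exactly the ones detailed in the rest of this article, and the surrounding web-app element is standard boilerplate:

    <?xml version="1.0" encoding="UTF-8"?>
    <web-app xmlns="http://java.sun.com/xml/ns/javaee" version="2.5">

        <!-- Struts 2 filter -->
        <filter>
            <filter-name>struts2</filter-name>
            <filter-class>org.apache.struts2.dispatcher.FilterDispatcher</filter-class>
        </filter>

        <!-- Map the Struts 2 filter to every incoming request -->
        <filter-mapping>
            <filter-name>struts2</filter-name>
            <url-pattern>/*</url-pattern>
        </filter-mapping>

        <!-- Spring listener that loads and manages the web application context -->
        <listener>
            <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
        </listener>

    </web-app>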



The first filter declaration is the Struts 2 filter. It is required if you decide to use Struts 2 in your web application architecture:

org.apache.struts2.dispatcher.FilterDispatcher




The filter mapping of your Struts 2 filter maps it to all requests coming into your web application, but you can provide a more specific URL pattern such as /GUI.


In this example:

struts2 --> /*
  

All requests coming into your web application with the .action extension will then be processed as Struts actions.




After that, you need to declare the Spring listener which handles and manages your Spring web application context. The ContextLoaderListener loads the Spring applicationContext.xml file and manages each call to its context. It also enables the Spring object factory and the wiring interceptors.



org.springframework.web.context.ContextLoaderListener