Yesterday the first Hackcamp Wolfsburg took place at the Schiller 40. Steven Schwenke did a great job organizing and preparing the event. Despite the fact that only five people, including myself, attended, the event was a great success. Besides my coworkers Kevin and Niko from NeosIT and Steven from MSG, Carsten from Eves IT was involved.
Steven started with his JavaFX workshop, which gave us a good introduction to what can be done with JavaFX. It was my first time working with JavaFX, and I realized that the framework is cool but has some drawbacks: inconsistent APIs, no easy option to import your own components into Scene Builder, a missing marketplace for FX components, no reflection of bean properties inside Scene Builder, and so on. Nevertheless, it gave me a really good start. During lunch we ordered pizza @Joeys and watched some very funny videos of Joko and Klaas. Pigged out, I started my Xtext talk with a small presentation I had hacked together during lunch. It was not the first time I held a presentation on this topic, and the reactions are almost always the same if the listeners are developers without experience in code generation or modelling: “WTF is he talking about?”. Sooner or later I must prepare an easy introductory presentation, but it is not at all easy to give a methodically valuable start.
After the presentation, with a lot of meta in it, we implemented a small project I had prototyped together with a customer. As we went along with the coding, the attendees got the idea of code generation and modelling and realized the possibilities of what can be done with Xtext. I *really* liked the moment when I pointed Steven towards XtendFX, and we talked a lot about possible applications. I was really surprised that the session took longer than I expected. Thanks for listening and for the very interesting discussion!
At the end, Carsten showed us JBoss Forge. At first glance it looked like Spring Roo. I will have to take a closer look in the next few days.
After eight hours of hacking and talking I was really wasted but in a good way. I am looking forward to the next Hackcamp and want to thank all participants for making it such a good event!
Our current project uses JSF and CDI for the presentation layer. The business logic is encapsulated inside EJBs with a no-interface view, as proposed by Adam Bien and others. I evaluated different alternatives for integration testing and ended up with Arquillian. For JSF/CDI-based applications, Arquillian is the best fit.
As I dug a little deeper into Arquillian, one big problem occurred: the usage of no-interface EJBs did not allow me to inject CDI alternatives through the @Alternative annotation. @Alternative expects an interface, which I did not have. In addition, I had to use the @EJB annotation in the JSF backing bean because our target application server was WebSphere. Since the EJB container – for example JBoss during integration testing – expects all fields annotated with @EJB to be resolved and deployed, I would have had to deploy the EJB with all its dependencies. In the end, the whole application would have had to be deployed, including database access, and without being able to manipulate the results of the EJB methods.
Our data access layer uses JPA/Hibernate but cannot make use of “plain” JPQL because we had to access legacy stored procedures of an already existing Oracle database – in-memory testing with H2 or Derby was not possible. Another problem would have been the total duration of the integration tests. Our application has a certain complexity, and as the project progressed the integration tests could no longer have been executed in an acceptable time span.
The only option would have been to switch from no-interface EJBs back to traditional @Local EJBs/interfaces. In the integration tests I would define a stub which implements the interface and deploy the stub with Arquillian. Nevertheless, dynamically controlling the behavior of such a stub is not directly possible, and I would have had to write a lot of stubs.
The whole situation did not make me happy. Should doing integration tests with Arquillian force me to change the architecture and introduce more complexity? This was an option I was unwilling to choose, so I searched for alternatives. Surprisingly, Google did not provide any solution. I thought about the problem again and had an idea: I could modify the Java bytecode of the EJB class before it is deployed. The modified EJB would only act as a facade and delegate every method call to an inner mock which has the same class methods as the facade.
After doing some research, Javassist seemed to be the best tool for the bytecode manipulation. During the implementation of the bytecode modifier I struggled with some odd behavior of the application container, but in the end I succeeded.
EjbMocker allows you to deploy a bytecode-modified version of your EJB, which Arquillian injects into your application server. You can completely control the behavior of the EJB with the help of Mockito. Every method of the EJB is forwarded to an internal mocked instance with the same class signature.
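The facade/delegation idea can be sketched with a plain JDK dynamic proxy. The names below are illustrative; the real EjbMocker has to rewrite bytecode with Javassist precisely because no-interface EJBs offer no interface to proxy, and it delegates to a Mockito mock rather than a hand-written stub:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class FacadeSketch {
    // Hypothetical business interface. A no-interface EJB has no such
    // interface, which is why EjbMocker needs bytecode manipulation.
    interface GreetingService {
        String greet(String name);
    }

    // Facade: every method call is forwarded to an inner, controllable delegate.
    static GreetingService facadeFor(GreetingService delegate) {
        InvocationHandler forwarder = (proxy, method, methodArgs) ->
                method.invoke(delegate, methodArgs);
        return (GreetingService) Proxy.newProxyInstance(
                GreetingService.class.getClassLoader(),
                new Class<?>[] { GreetingService.class },
                forwarder);
    }

    public static void main(String[] args) {
        // The "mock": a stub whose behavior the test fully controls.
        GreetingService mock = name -> "stubbed: " + name;
        GreetingService facade = facadeFor(mock);
        System.out.println(facade.greet("world")); // prints "stubbed: world"
    }
}
```

The deployed artifact only contains the facade; the test controls the inner delegate, so the EJB's dependencies never need to be deployed.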
An example project can be found at https://github.com/schakko/arquillian-warp-mocked-ejb. The EjbMocker contains usage instructions, so I won’t repeat them here.
A few weeks ago I had a talk with one of my co-workers in which he said he had not evolved technically or personally in the past months. Aside from the fact that I had a different view, it made me think about my own current situation. Which insights did I gain in the last few months? Take this post as a personal retrospective.
I had to take a break from sports for a few days, which made me feel really uncomfortable. On the one hand I felt really anxious; on the other hand I had the feeling that I had not accomplished anything. Doing sports almost every day and then having to take a break is a situation I can hardly deal with. SpOn wrote that doing sports regularly pushes you into an addiction. Having smoked cigarettes for years, I can confirm that sports has replaced cigarettes as my drug of choice.
I have always been the person who uses his clothes and tools until they fall apart. This attitude started in my teenage years. Listening a lot to punk rock music and going to concerts could be one reason for my behavior; I am not really sure about this. During this time I had a really good friend who supported me mentally and financially by giving to me without ever wanting anything in return. She always said that giving a gift to other people makes her feel better than buying something for herself. Twelve years later I remembered her words and realized that buying myself something new just for the sake of having something new, without any real use, does not make me happy. Instead I want to pass on the attitude of giving useful things to people who are really happy about donations. I started my project this month by donating some money to Stratum0 and sponsoring Mike Pfingsten for his visit to the Entwicklerstammtisch Wolfsburg. My plan is to make one or two smaller donations every month to people or projects inside the IT community.
Planning projects or designing processes is really fun, but sometimes I am in a situation in which a topic is talked to death. You know it from roasting a steak: at some point the steak is so tough that you could break someone’s bones with it, but you can no longer eat it. If you have reached this point, take the initiative and act, or nothing will change.
To be honest, in my personal life I follow this rule a little too often. This often results in “rollbacks” which cost everyone’s time. This is definitely one of my weak points, and I am trying to reduce it.
I must admit that after finishing my studies at the beginning of last year I got really lazy with my personal projects, e.g. publishing Nostradamus/prophetr. It is a pity not to deliver my own product, but I feel much better playing some games, reading books, or doing sports. Eventually your own state of mind is all that matters, and currently I am feeling really balanced.
At some point in my work life I realized that I can’t handle everything myself and had to delegate tasks. In the first few weeks it was very hard; I had the feeling I had lost control. I had to learn that delegating work means having trust in somebody else’s capabilities.
Making rules for others means you have to follow your own rules, too. Bending your own rules results in a loss of your authenticity and reliability.
Until a few years ago I was confident that people can change. Sadly, I must admit, this perspective has changed in the last years. I had to accept the fact that most people can’t or don’t want to change, even if this would be the better option.
Having a lot of intrinsic motivation is really powerful. Apart from a few exceptions, I achieve most of my goals. Although I can infect other people with my motivation, I cannot expect them to have the same level of motivation I have.
Every team member should have a defined role. This role should be written down where everybody can look it up. I am of the opinion that having a defined role makes you stronger. You have a work identity and the responsibility to push your role’s topic forward.
Making a personal retrospective means admitting my own faults, too.
Eh… well… I set up our company domain with the .local TLD two years ago. In retrospect, this decision has led to many problems, like not being able to resolve DNS names because the .local TLD interferes with the Bonjour protocol. Switching the domain name inside a Windows network is not easy. I regret the decision every day.
I knew that this was probably a bad idea, but I had the slight hope that it would work. To make things short: it did not work. Never, ever store your IDE settings in the repository if you are working with other people.
Xtext is one of the strongest tools in my developer toolbox. It helped me in various projects and saved a lot of time and money.
Some of our customers have full access to their designated JIRA project instances. Being transparent means tracking work times in JIRA. For accounting we use another internal application, so we had to track our times in two different systems. I hacked together a PHP script in a few hours which simply copies all JIRA entries to our internal tracking system. Small script, huge time saver.
We are using Spring Integration in one project for collecting XML files from multiple servers. Spring Integration fits our requirements exactly and saved the customer a lot of money.
I am still not sure if I really like CDI or if I am suffering from Stockholm syndrome. Using dependency injection without a third-party library like Spring or Guice is nice. One drawback is that the different CDI implementations (Weld vs. OpenWebBeans) do not necessarily produce the same results.
I worked with HTML, jQuery, knockout.js, and other frontend technologies like GWT for years, and I liked them because they are transparent in their behavior. Switching from those techniques to JSF is hard. Hard is an understatement; awful seems a better fit. JSF adds an additional layer of abstraction where the interaction between browser and server is no longer easy to understand. Add to that the fact that different JSF implementations like Mojarra and MyFaces produce different output.
From an administrative view, WebSphere may be the holy grail. But why the hell does it take so long to deploy a simple web application? And why does enabling security make the whole WebSphere administration view so slow?
I am currently playing around with ASP.NET and its Entity Framework. At some point I wanted to execute all my migrations against a new local SQL Express database. After I had dropped the database in SQL Management Studio, the Update-Database command of the Entity Framework failed with the error ”Database ‘$path.mdf’ already exists. Choose a different database name. Cannot attach the file ‘$path.mdf’ as database” (German translation: “Die ‘$path.mdf’-Datenbank ist bereits vorhanden. Wählen Sie einen anderen Datenbanknamen aus. Die Datei ‘$path.mdf’ kann nicht als ‘$path’-Datenbank angefügt werden.”).
SQL Server Management Studio no longer showed the database as present. A manually executed DROP DATABASE statement only reported that there was no such database. After checking some other possible error sources (machine.config, web.config, and so on), I ended up downloading sseutil from http://www.microsoft.com/download/en/details.aspx?DisplayLang=en&id=3990 and dropping the database by hand:
> sseutil -s .\SQLEXPRESS -l # shows all databases in the local SQL Express instance
1. master
2. tempdb
3. model
4. msdb
5. $failed_database
> sseutil -s .\SQLEXPRESS -d name=$failed_database
Failed to detach '$failed_database'
Although I received the detachment error, the database was no longer registered and I was able to execute the Update-Database statement without any problems.
Have you ever tried to develop a Java EE 6 application on different application servers? In production we are forced to use WebSphere AS. I like the configuration interface, but that’s all. WAS is not usable during development because the deployment cycles are way too long. Because of this, we use JBoss AS 7.1.1 in our development environment. Our application uses Java EE 6 features in the service (EJB, CDI) and presentation (JSF, CDI) layers but still uses a DAO layer which is managed by Spring. The DAOs get injected by SpringBeanAutowiringInterceptor. For consistency, I had planned to port the Spring DAO layer to Java EE 6.
First of all, our Spring configuration selects the correct database connection settings (Hibernate transaction manager, JNDI name) via a simple environment switch which can be set in the application server. By default, the production configuration for WebSphere is used. If you populate an environment key jndi-jboss, the JBoss settings are loaded on startup. This approach introduced new architectural complexity but fits our needs exactly. Using JPA’s persistence.xml and reaching the same goal should be doable, right?
Well… no. First of all, the JPA configuration is simply not designed to handle different environments. This would not be a problem if an application-server-specific file like jboss-persistence.xml or ibm-persistence.xml were used by JBoss and WebSphere respectively. On application startup the application server would load the designated persistence.xml and everything would be fine.
My approach was to write a simple parser for properties inside persistence.xml which can be evaluated against system properties, like:
<!-- Remove the hibernate.transaction.manager_lookup_class setting -->
<property name="?(applicationserver.runtime=jboss)hibernate.transaction.manager_lookup_class" value="" />
<!-- Overwrite the setting -->
<property name="?(applicationserver.runtime=jboss)jta.UserTransaction" value="java:comp/JBossUserTransaction" />
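A minimal sketch of how such conditional property names could be evaluated against an environment map (class and method names here are my own, not the actual parser):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ConditionalProperties {
    // Matches names like "?(applicationserver.runtime=jboss)jta.UserTransaction"
    private static final Pattern CONDITION =
            Pattern.compile("^\\?\\(([^=]+)=([^)]*)\\)(.+)$");

    /**
     * Evaluates conditional property names against the given environment
     * (e.g. System.getProperties()). Unconditional entries pass through;
     * conditional entries are kept (with the prefix stripped) only when
     * the referenced key has the expected value.
     */
    public static Map<String, String> evaluate(Map<String, String> raw,
                                               Map<String, String> env) {
        Map<String, String> result = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : raw.entrySet()) {
            Matcher m = CONDITION.matcher(e.getKey());
            if (!m.matches()) {
                result.put(e.getKey(), e.getValue());
            } else if (m.group(2).equals(env.get(m.group(1)))) {
                result.put(m.group(3), e.getValue()); // condition holds
            } // else: drop the property entirely
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> raw = new LinkedHashMap<>();
        raw.put("?(applicationserver.runtime=jboss)jta.UserTransaction",
                "java:comp/JBossUserTransaction");
        Map<String, String> env = new LinkedHashMap<>();
        env.put("applicationserver.runtime", "jboss");
        System.out.println(evaluate(raw, env));
    }
}
```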
Writing and testing the parser was an easy task, so I tried to integrate it into the startup process. JPA has no InitializePersistenceContext handler or anything else which is executed on startup. The only possibility was to extend the Hibernate persistence provider and define my own provider inside the persistence.xml. The idea seemed good but did not work: persistence providers must be deployed in the application server, and JBoss only threw a “Persistence Provider not found” exception. While I was searching for an easier solution (which probably does not exist – you must deploy the provider inside the application server, at least in JBoss), I came upon a blog post which said that it is not possible to reference a data source in persistence.xml through a res-ref-name resource. Huh? I do need this; otherwise I am not able to specify the JNDI entry independently from my application server. My jboss-web.xml contains a reference to the JBoss data source while the original web.xml holds the reference to the WebSphere data source.
I reached the point where I decided not to go with a pure Java EE 6 implementation and to keep the Spring backend. The only solution is adjusting our Maven build process and creating different EAR artifacts for JBoss and WebSphere, which still does not solve the problem of easily deploying the application through Eclipse into one of the application servers.
Update (2014-01-29): I opened a feature request in JBoss’ JIRA (https://issues.jboss.org/browse/WFLY-2816) and put a message on the mailing list. A solution for this problem would be a check for a jboss-persistence.xml which takes precedence over the original persistence.xml.
Recently I stumbled upon the same problem this guy described. Our Oracle database instance contains multiple schemas with almost the same structure. Every developer has his own schema for unit and integration tests. On application startup the Hibernate schema validator calls DatabaseMetaData.getTables() for every mapped entity. The method returns the first table which can be found in any schema; the returned tables are ordered by schema name by default. Side note: I would expect the home schema of the current user to be preferred. This leads to the situation that the validation sometimes fails: a user has already migrated his own schema (schema name app_user_unittest), but the schema for the build server (schema name app_build_unittest) still has the old schema version.
Overriding the DatabaseMetaData.getTables() method is not possible as it resides in the Oracle JDBC driver. Instead, you can use the configuration property hibernate.default_schema, which points to the preferred schema. Depending on your development environment, the property could be set during application startup by the application itself or through a system property in your application server.
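Set statically, this could look like the following persistence.xml sketch (the unit name and schema name are placeholders, not taken from the actual project):

```xml
<persistence-unit name="testUnit">
  <properties>
    <!-- prefer the developer's own schema during Hibernate's validation -->
    <property name="hibernate.default_schema" value="app_user_unittest" />
  </properties>
</persistence-unit>
```

For per-developer values, the property would instead be passed programmatically when building the EntityManagerFactory or injected via the application server, so each machine can point at its own schema.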
And another year has passed – and what a year it was!
In the first two months I still had plenty to do for my bachelor’s thesis. Luckily everything worked out, so I can look back on my studies as a success. In the weeks that followed I did practically nothing and watched one TV series after another. That was fine, since I was totally burned out. Unfortunately, this computer lethargy still persists in my private life. Let’s see if that gets better in 2014.
As a reward for finishing my studies, in March I finally bought myself a new bike and the Nexus 4. At the end of March I immediately went on a tour in the Elm. At sub-zero temperatures and in snow, that was quite … exciting.
From April on we were looking for new trainees for the company. This gave us insight into some interesting personalities. Luckily we got great trainees again. I am curious which trainees will start with us in 2014.
In June I started keeping my weekly journal. It always annoys me that I can no longer remember when things happened. During the summer months I also did a lot of sports. I went (and still go) running and bouldering regularly. The running highlight was certainly the 15 km run during a summer rain. I made good progress in bouldering. After a nasty migraine attack right after training, I skipped swimming for a few months. I will pick it up again next year.
Also sporty was our participation in the foosball tournament in Braunschweig. Unfortunately we only finished fifth. Next year that has to get better. Since shortly before Christmas we were invited to join the table football team of VfL Wolfsburg – apparently such a thing really exists – I see good training opportunities there.
Besides various bike tours, among others in the Fallstein hills, I went climbing with my colleagues at Monkey Man at the Allersee. That was fun. Two weeks ago I asked whether they plan to open a bouldering gym here in Wolfsburg. They have been thinking about it for a while.
Two weeks after having my bone marrow typed, I asked my father-in-law for his daughter’s hand. Since my now brother-in-law was in the middle of his own wedding preparations at the time, I postponed the proposal by a few weeks. My brother-in-law’s stag party and wedding took place in mid/late July in Hanover. Those were two terrific weekends at which I had *really* a lot of fun.
Thanks to the great weather I got an extreme tan this year. Good for me that during my lunch break I can simply ride my bike to the Allersee for an hour or two and lie in the sun there. Also in great weather, I proposed to Jenny at the beginning of August. The planning for the pre-wedding party at the rowing club and the civil wedding in November went excellently, so everything went off smoothly and we had lots of fun (and tears of joy in our eyes).
After a very warm Christmas with almost 15°C, New Year’s Eve was celebrated at the office. Contrary to my fears, it was one of the three best New Year’s Eve parties (alongside the house party and the party at the indoor pool). The fact that we danced (!) until shortly before eight in the morning clearly speaks for itself.
The year was really great. Only in retrospect do I notice how much my studies stressed me physically. My stomach problems have decreased significantly, and people tell me I seem noticeably more relaxed.
Comparing against last year’s resolutions:
All in all it may not look like much, but a lot happened this year. For 2014 the plan is:
Picture the following scenario: you have an Enterprise Application Archive (EAR) which contains an EJB module and a WAR file. The web application uses a Spring application context, and the same application context must be – for some reason – shared with your EJBs. Using a beanRefContext.xml which points to the applicationContext.xml means that you will instantiate a new application context and have no access to the Spring environment of the web instance.
I used the following method:
Our new project uses Maven as its build management tool. Eclipse (STS edition) is used for development. A part of the project consists of a transformation process which converts XML files to Java POJOs. Because of the given XML structure, we used JAXB in combination with EclipseLink MOXy for this.
After a few weeks of initial development, mainly architectural decisions, I prepared our TeamCity instance. The first TeamCity build failed because some of the unit tests threw unexpected errors. I must admit that until then I had only executed the unit tests through Eclipse, and every test case had passed without any problem. My local command-line Maven builds were triggered with -DskipTests=true and succeeded, too.
The failed build in TeamCity was caused by the following JAXB error:
com.sun.xml.internal.bind.v2.runtime.IllegalAnnotationsException: 1 counts of IllegalAnnotationExceptions
org.springframework.integration.Message is an interface, and JAXB can't handle interfaces.
    this problem is related to the following location:
        at org.springframework.integration.Message
        at public org.springframework.integration.Message
...
I repeated the test suite on my local machine (mvn test) and the first time it ran, it succeeded. Eclipse passed the unit tests, too. At first I suspected different Java/JDK versions on my local machine and the build server, but the versions were the same. So I started with a fresh mvn clean test on my machine, and the build failed, too. WTF? Running the Maven-compiled unit tests from Eclipse now also resulted in the error above. Re-compiling the code with Eclipse fixed the errors. Eclipse uses the Eclipse Compiler for Java (ECJ) during compilation and not the JDK’s javac. Could it be a compiler bug? The bytecode of both .class files (Maven-compiled vs. Eclipse-compiled) was more or less the same, so this was not the answer.
While debugging the Maven-compiled artifacts I noticed that the MOXy context factory was never hit; instead the default implementation was used. Could it be that the jaxb.properties file was not copied to the classpath? jaxb.properties is read by JAXB for initializing/overwriting the default XML context factory. And indeed, the jaxb.properties was missing: ECJ copied the .properties file to the target directory, but Maven ignored the file.
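Assuming the jaxb.properties file lived next to the domain classes in src/main/java (the usual JAXB convention), one way to make Maven pick it up – a sketch, not necessarily the actual fix applied in the project – is to declare that directory as an additional resource in the pom.xml:

```xml
<build>
  <resources>
    <!-- keep the default resource directory -->
    <resource>
      <directory>src/main/resources</directory>
    </resource>
    <!-- additionally copy jaxb.properties files that live next to the classes -->
    <resource>
      <directory>src/main/java</directory>
      <includes>
        <include>**/jaxb.properties</include>
      </includes>
    </resource>
  </resources>
</build>
```

Maven only copies src/main/resources by default, which explains why ECJ (building in place) found the file while the Maven build did not.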
What did I learn from that?
I received the message “Segmentation fault” while running apt-get install. My syslog contained the following lines:
Aug 11 11:34:19 srv kernel: [65729.407484] check-new-relea: segfault at 7f2dfd94746c ip 00007f2dfc0becd8 sp 00007fffd0671d20 error 4 in libapt-pkg.so.4.12.0[7f2dfc069000+11c000]
Aug 11 11:35:52 srv kernel: [65822.603384] apt-get: segfault at 7f7d256e346c ip 00007f7d24252cd8 sp 00007fffdbb89140 error 4 in libapt-pkg.so.4.12.0[7f7d241fd000+11c000]
I fixed it with