Tuesday, January 14, 2014

Getting started with Apache Camel (24-25 / 01)

Talk in English
Hi devs !!!

We start this new year 2014 with amazing meetups!!!

Now it's the turn of an introduction and a workshop on Apache Camel, led by one of the main committers of this Apache project and co-author of the book "Camel in Action"..... Mr. Claus Ibsen.


Claus Ibsen has worked on Apache Camel for years and he shares a great deal of his expertise as a co-author of Manning's Camel in Action book.

He is a Principal Software Engineer working for Red Hat, specializing in the enterprise integration space. Claus is the most active contributor to Apache Camel and is very active in the Camel community. He hangs out on the Camel mailing lists and IRC room and often blogs about Camel.
Prior to joining Red Hat, Claus worked on all sorts of integration for the last decade.


"Apache Camel is a versatile open-source integration framework based on known Enterprise Integration Patterns. Camel empowers you to define routing and mediation rules in a variety of domain-specific languages, including a Java-based Fluent API, Spring or Blueprint XML Configuration files, and a Scala DSL. This means you get smart completion of routing rules in your IDE, whether in a Java, Scala or XML editor."

Session 1: Presentation (Friday 24):


This session will teach you how to get a good start with Apache Camel. We will introduce you to Apache Camel, how Camel is related to Enterprise Integration Patterns, and how you go about using these patterns in Camel routes, written in Java code or XML files.
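As a small taste of what such a route looks like, here is a minimal sketch in Spring XML (the endpoint URIs `file:inbox` and `jms:queue:orders` are placeholder examples, not anything from the talk); the equivalent Java fluent DSL would be `from("file:inbox").to("jms:queue:orders")`:

```xml
<!-- A minimal Camel route in Spring XML: poll files from an "inbox"
     folder and send each one to a JMS queue. -->
<camelContext xmlns="http://camel.apache.org/schema/spring">
  <route>
    <from uri="file:inbox"/>
    <to uri="jms:queue:orders"/>
  </route>
</camelContext>
```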


We will then discuss how you can get started developing with Camel, and how to set up new projects from scratch using Maven and Eclipse tooling.


This session includes live demos that show how to build Camel applications in Java, Spring, OSGi Blueprint and alternative languages such as Scala and Groovy. You will also hear what other features Camel provides out of the box, which can make integration much easier for you.


We also take a moment to look at web console tooling that gives you insight into your running Apache Camel applications and offers, among other features, visual route diagrams with tracing/debugging and profiling capabilities.


Before opening up for Q&A, we will share useful links where you can dive into learning more about Camel.

Reserve your Ticket !!!!


Session 2: Workshop (Saturday 25):


We will start by playing with a few of the out-of-the-box Apache Camel examples, getting them into your IDE of choice, editing the source code a bit, and seeing the changes in action.

Then we will move on to creating a new project from scratch, perhaps something with REST or the like, to build a mini app.

If there are people in the audience who have tried OSGi, we will try out OSGi on Karaf / ServiceMix / Fuse.

Requirements:
Hope to see you there, and don't forget to RSVP on our MeetUps for the talk on Friday 24th and/or the workshop on Saturday 25th (there are only 30 seats available, hurry up)!!!!


Monday, January 13, 2014

Summary: jBPM & Drools workshop (10/12)


Hi devs!

We closed 2013 with a jBPM and Drools workshop on December 10th, led by our colleagues from Red Hat: Mauricio Salatino, Pere Fernandez and Walter Medvedeo.

We started with a quick introductory presentation covering the basic concepts, to put all attendees in context: what business process engines and rule engines are, what they are used for and what their advantages are, plus an introduction to jBPM and Drools.

Next they presented the integration of jBPM and Drools into the new KIE (Knowledge Is Everything) platform, its main features and advantages, and the many supporting tools the KIE platform provides. They quickly walked us through the different tools and how the KIE platform works. We couldn't go very deep, in order to leave time for the hands-on part, but the slides contain all the details, and any comment or question will be welcome, as always ;)

In the hands-on part, every attendee was able to bring up the KIE platform on their laptop using the files that had been distributed beforehand for the workshop, which you can find at the end of this post. We did a quick guided tour of the platform: we looked at a project we had created (structure, processes, human tasks and forms), built and deployed the project on the platform, started the process, executed the tasks, and explored the monitoring tools. A very complete tour, albeit a fast one due to lack of time, but it served as a good introduction.

Below you will find the slides from the presentation and the files needed for the workshop. At the end of the presentation you can find the workshop script, in case anyone missed the workshop and wants to try it; it's worth it :)



At the end of the workshop there was a round of interesting questions and an exchange of experiences, and interest arose in a talk or workshop focused more on the Drools rule engine. The speakers liked the idea and are open to finding dates and discussing it. What do you think? We need your feedback ;) If we see that there is interest in the community in Drools or jBPM topics, we will look into organizing more talks and workshops.

Let’s go community!

Monday, January 6, 2014

Introduction to Hadoop (9/1)

Hi!

Have you behaved well? Did the Three Wise Men bring you any gadgets? We hope so, but if not, don't worry: here is a little present from us. We invite you to our next event this Thursday the 9th, starting at 7pm, at La Fontana. This time we will focus on that great /*un?*/known called Hadoop, a framework designed for the distributed processing of large data sets using simple programming models.


The talk, led by Ferran Galí, will consist of an introduction to the Hadoop distributed computing framework. We will discover which needs drove the emergence of this new paradigm and learn the basics of the two pillars that make up the technology: HDFS for storage and MapReduce for processing. We will continue with a few brushstrokes on some of the most important projects in the Hadoop ecosystem, and finish by looking at the technology from a more pragmatic perspective, discussing some projects that cover real needs.
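As a rough idea of the MapReduce model the talk will cover, here is a conceptual word-count sketch in plain Java (not the actual Hadoop API): the "map" phase emits a pair per word, and the "reduce" phase groups by key and sums the counts.

```java
import java.util.*;
import java.util.stream.*;

public class WordCountSketch {
    public static Map<String, Long> wordCount(List<String> lines) {
        return lines.stream()
                // "Map" phase: split each line into individual words.
                .flatMap(line -> Arrays.stream(line.toLowerCase().split("\\s+")))
                .filter(w -> !w.isEmpty())
                // "Shuffle + reduce" phase: group by word and count occurrences.
                .collect(Collectors.groupingBy(w -> w, Collectors.counting()));
    }

    public static void main(String[] args) {
        List<String> lines = Arrays.asList("hello hadoop", "hello world");
        System.out.println(wordCount(lines)); // "hello" appears twice
    }
}
```

In real Hadoop, the same two phases run distributed over HDFS blocks across many machines, which is what makes the model scale.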

Ferran Galí i Reniu holds a degree in computer engineering from the Universitat Politècnica de Catalunya and currently works as a backend developer at Trovit, solving problems that require massive data processing, from simple analyses to the complex generation of search indexes.

We hope to see you this coming Thursday the 9th, to talk it all over and share experiences and ideas for this year we have just started.

Best regards,

Saturday, January 4, 2014

Summary: Introduction to Graph Databases and Neo4j (28/11)

Hi devs! How's the new year going? We hope everything goes well ;-)

As you remember, last 28th of November we received a visit from Neotechnology, the creators of Neo4j, who gave our community a talk about this NoSQL database plus an introduction to graph databases. Here is the summary of that talk by Stefan Armbruster. Of course, we would like to thank Stefan and Dirk for coming from Munich to talk about Neo4j in our community; it was a real pleasure to host their talk.


Stefan started his talk speaking about current trends and what terms like Big Data and NoSQL are all about: we need to rethink how data is growing in size really fast, how almost everything in our IT world is connected, and how it is becoming ever more connected (graphs are everywhere).

Next, the talk continued with a description of what NoSQL is and why it is not a magic key that solves every problem related to querying and storing data. When comparing a relational database with a non-relational one, we must first think about which kind of NoSQL database to choose, and there is a bunch of different types: key/value, column-oriented, document, graph, etc. Depending on the problems you are trying to solve, one type or another will fit the nature of your data better (e.g. sheer data size vs. the complexity of its relationships).



When you move to a graph database, you must rethink your model as connected nodes and relationships, both of which can carry properties (key/value pairs). Of course you can use indexes and, in the case of Neo4j, other features such as labels. Using the example of a social network, Stefan presented numbers comparing the performance of Neo4j against a relational database, and the results were really impressive: the RDBMS was much slower (around 1000x), and increasing the number of records to a million didn't change the response time of the graph query; in fact, that is the key to the power of a graph database.
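A hedged sketch of why that holds: in a graph store each node keeps direct references to its neighbours, so a friends-of-friends query walks only the local links instead of scanning ever-growing join tables. A toy illustration in plain Java (this is just the idea, not the Neo4j API):

```java
import java.util.*;

public class GraphSketch {
    // Each node stores direct references to its neighbours, so traversal
    // cost depends on the size of the neighbourhood, not on the total
    // number of records in the store.
    static final Map<String, List<String>> FRIENDS = new HashMap<>();

    static void addFriend(String a, String b) {
        FRIENDS.computeIfAbsent(a, k -> new ArrayList<>()).add(b);
        FRIENDS.computeIfAbsent(b, k -> new ArrayList<>()).add(a);
    }

    // Friends-of-friends: follow the links two hops out from one node.
    static Set<String> friendsOfFriends(String person) {
        Set<String> result = new TreeSet<>();
        for (String friend : FRIENDS.getOrDefault(person, List.of())) {
            result.addAll(FRIENDS.getOrDefault(friend, List.of()));
        }
        result.remove(person);                                    // exclude self
        result.removeAll(FRIENDS.getOrDefault(person, List.of())); // exclude direct friends
        return result;
    }

    public static void main(String[] args) {
        addFriend("Alice", "Bob");
        addFriend("Bob", "Carol");
        addFriend("Carol", "Dave");
        System.out.println(friendsOfFriends("Alice")); // [Carol]
    }
}
```

Adding a million unrelated nodes to `FRIENDS` would not slow `friendsOfFriends("Alice")` down at all, which mirrors the benchmark behaviour Stefan described.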


Stefan then showed us the strengths of a graph database: for example, how expressive a model you can build compared with a relational database, how fast it is, and how easy it is to query; as well as its constraints, including the learning curve and the conceptual shift you have to make. The presentation went on to the benefits of using Neo4j and the many features it provides: it can be embedded and used from many languages, provides ACID transactions, integrates indexes and a REST API, offers high availability, is highly scalable, etc. Then Stefan showed us live how to play with Neo4j, how pleasant it is to query from the browser or from Java code, and how easy it is to work with this schema-free database.
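To give a flavour of the browser-based querying, here is a minimal Cypher query in the style shown with Neo4j 2.0 (the `Person` label, `FRIEND` relationship type and `name` property are made-up examples, not taken from the talk):

```cypher
// Find the names of Alice's friends-of-friends
MATCH (a:Person {name: 'Alice'})-[:FRIEND]-()-[:FRIEND]-(fof)
WHERE fof <> a
RETURN DISTINCT fof.name
```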

The talk ended with a sample of the many customers Neotechnology is working with and how satisfied they are using Neo4j. To close, there was a raffle of a few books about graph databases, a nice way to round off this great talk.

Ah! Here you can find the video on our YouTube channel as well as the presentation on our SlideShare account. We hope you enjoy it as much as we did ;-)



Finally, from the BarcelonaJUG crew, we wish you all the best for this new year... and happy coding!