14:40 to 14:55
14:15 to 14:20
14:30 to 15:05
Embedding support for expressions into Xtext-based languages has become easy when Xbase is chosen as the base language. However, deriving a language from Xbase implies the use of a Java-based type system with dependencies on JDT. Language implementations that need to be independent of Java, or that should have a different type system, have to embed their own expression language.
This session will explain the typical pattern for grammars with expression support. Attendees will gain some insight into the related topics of AST rewriting, left factoring, rule precedence and associativity, the use of syntactic predicates, and type system implementation.
The talk is a revised version of the talk given at Xtext Summit 2017 in Toulouse.
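Rule precedence and associativity can be illustrated outside of any grammar formalism. The following is a minimal precedence-climbing expression parser in plain Java; it is an illustrative sketch only, with invented class and method names, and is not taken from the session material.

```java
// A minimal precedence-climbing expression parser in plain Java.
// Illustrative sketch only; the names ExprParser, parseExpr, parseAtom
// are invented and not part of the session material.
public class ExprParser {
    private final String src;
    private int pos;

    public ExprParser(String src) { this.src = src; }

    public int parse() { return parseExpr(0); }

    // '*' and '/' bind tighter than '+' and '-'; left associativity
    // falls out of the loop structure combined with "prec + 1" below.
    private int parseExpr(int minPrec) {
        int left = parseAtom();
        while (pos < src.length()) {
            char op = src.charAt(pos);
            int prec = precedenceOf(op);
            if (prec < minPrec) break;           // also stops at ')'
            pos++;                               // consume the operator
            int right = parseExpr(prec + 1);     // +1 => left associative
            if (op == '+')      left = left + right;
            else if (op == '-') left = left - right;
            else if (op == '*') left = left * right;
            else                left = left / right;
        }
        return left;
    }

    private static int precedenceOf(char op) {
        if (op == '+' || op == '-') return 1;
        if (op == '*' || op == '/') return 2;
        return -1;                               // not an operator
    }

    private int parseAtom() {
        if (src.charAt(pos) == '(') {            // parenthesized subexpression
            pos++;                               // consume '('
            int value = parseExpr(0);
            pos++;                               // consume ')'
            return value;
        }
        int start = pos;
        while (pos < src.length() && Character.isDigit(src.charAt(pos))) pos++;
        return Integer.parseInt(src.substring(start, pos));
    }
}
```

The same layering of precedence levels is what an expression grammar encodes through its rule hierarchy.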
15:55 to 16:00
16:05 to 16:10
16:15 to 16:50
In several projects, we have been building automotive toolchains based on EMF models and model transformations with Xtend. In this talk, we will present our lessons learned from projects in which we integrate automotive engineering data from different sources (relational databases, specific configuration files, EMF) into a consolidated model for product lines of electronic control units (ECUs) for cars, and then transform it to AUTOSAR. These models easily exceed a size of 1 million elements. Topics include:
After attending the talk, the participants will have a bag of new ideas and methods to improve the design and implementation of model-to-model transformations in their projects.
17:00 to 17:35
Java concurrency has evolved a lot from Java 1 to Java 9. Very sophisticated tools have become part of the JDK, offering developers a wide range of design options.
Still, many of these tools and the underlying concepts are unknown to many of us.
In this talk, I will give a brief overview of the evolution of the concurrency tools and concepts found in the JDK, explain some scenarios for the tools I recommend, and present the new Reactive Streams concept arriving with Java 9.
The talk is intended for developers interested in an overview of the concurrency options found in the JDK, as well as everyone interested in the upcoming Reactive Streams concept.
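As a small taste of the Reactive Streams support mentioned above, here is a self-contained sketch using the Java 9 Flow API (java.util.concurrent.Flow and SubmissionPublisher); the helper class and method names are invented for illustration.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

// Sketch of the Java 9 Flow API: a publisher pushes items to a
// subscriber that requests them one at a time (back-pressure).
// The class and method names FlowDemo/collect are invented here.
public class FlowDemo {
    public static List<Integer> collect(int... values) {
        List<Integer> received = new CopyOnWriteArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<Integer>() {
                private Flow.Subscription subscription;
                @Override public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1);                // ask for the first item only
                }
                @Override public void onNext(Integer item) {
                    received.add(item);
                    subscription.request(1);     // ask for the next item
                }
                @Override public void onError(Throwable t) { done.countDown(); }
                @Override public void onComplete()         { done.countDown(); }
            });
            for (int v : values) publisher.submit(v);
        }                                        // close() triggers onComplete
        try {
            done.await();                        // wait for asynchronous delivery
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return received;
    }
}
```

The explicit `request(1)` calls are what distinguishes Reactive Streams from plain listener callbacks: the subscriber controls the pace of delivery.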
17:00 to 17:35
As an Eclipse RCP developer, you have different options for creating prototypes for your customers quickly. In this talk, I will share my experience in building RCP applications using a simple, efficient, and extensible technology stack based on:
With the help of a case study from the banking domain, you will see all the steps needed to create a complete prototype that you can easily re-use and transform into a real-world application. During the presentation, we'll examine the key points of creating the prototype, and you will see a few demos on how to use the E4 Editor and WindowBuilder efficiently. At the end of the talk, you will be able to repeat the process, and to understand and adapt the provided source code to develop your own applications quickly. If you want to prototype Eclipse RCP applications with XML/DB persistence and a relatively complex UI, you should attend this talk.
10:05 to 10:10
10:10 to 10:15
10:20 to 10:25
10:30 to 11:05
The Eclipse Modeling infrastructure and tools have evolved to be a de facto standard in the context of modeling in general: lots of important and interesting systems have been built with Eclipse technologies such as EMF, Xtext, and GEF.
However, over the last few years, two trends have developed. The first one concerns the web. Modeling tools, like all other tools, will have to run in the browser and the cloud. The viability of rich clients is in rapid decline. Second, the notion of “modeling” itself has evolved. In particular, more and more “modeling” tools are targeted at non-technical users. We have used modeling technologies successfully to develop business DSLs for insurance contracts, medical algorithms, or banking. Features near and dear to developers’ hearts, such as powerful IDEs, flexible version control, and the edit-build-run cycle, are usually not valued by the non-technical users of these systems. The current Eclipse Modeling ecosystem offers no tools or technologies to convincingly address these two trends.
In the first half of this talk we will elaborate on these challenges. While the first one is probably undisputed, the second one needs more convincing: we will show examples from a couple of non-technical modeling systems to substantiate the claim.
The second half of the talk will outline a vision of where we think modeling needs to go. This vision will include technical aspects, such as: how can existing Eclipse Modeling technologies be ported to the browser, and what can we learn from non-Eclipse tools such as MPS. Just porting technologies to the browser is not enough though -- we will thus also outline a set of architectural and usability considerations for tools that will allow modeling to move forward over the next decade.
12:00 to 12:35
Creating good Oomph setups is not trivial, but users can learn a lot about some of Oomph's advanced features from the existing setups in its default catalogue. In this session, I will show several examples from available Oomph setups that can be used when defining your own setups. Attendees will learn about multi-project setups, collecting useful workspace preferences, dynamic working sets, launching initial builds after project import, managing modular target platforms, and other useful features.
14:45 to 15:20
Code formatting is an opinionated beast. It always has been a matter of taste, and it always will be. This is the reason why professional formatting tools, such as Eclipse JDT, offer a gazillion options. Which is still not sufficient: after all, you can override them inline with tag comments to make the formatter shut up. Can't we do better than that? What if we could use machine learning techniques to detect the preferred code style used in a codebase so far? Turns out, we can.
The ANTLR Codebuff project offers a generic formatter for pretty much any given language. As long as a grammar file exists, existing source can be analyzed to learn the rules that were applied while writing the code. Those can then be used to pretty-print newly written code. No configuration required. And existing sources will stay as nicely formatted as they are. In the end, the primary purpose of code formatting is not to re-arrange all the keywords, but to make the source layout consistent.
In this talk, we will demonstrate how the Codebuff project can be used to format the sources of your repository in a consistent way. We'll also show some other gems we discovered while toying around with the technology.
15:25 to 15:30
15:35 to 15:40
16:30 to 17:05
Are you tired of null pointer exceptions, unwanted side effects, SQL injections, broken regular expressions, concurrency errors, mistaken equality tests, and other runtime errors that appear during testing or in the field? Do you wonder why every production code base needs its own implementation of money and currency types, physical units, or string processing? Aren’t all these simply indicators of missing features in Java’s type system? Turns out they are. And even better, there is a standardized way to fix it: annotation processing to the rescue! Annotation processors allow developers to tweak and enhance the Java compiler. Especially with Java 8 type annotations, they offer the means to improve the type system and detect many of the problems at compile time that would usually only surface at runtime.
In this session, I want to introduce the Checker Framework. It offers plenty of sensible annotations that greatly enhance Java's type system to make it more powerful and useful. The framework lets software developers detect and prevent errors in their Java programs at compile time, independent of the IDE, compiler, or build tool used. This helps to find bugs or to verify their absence. Out of the box, it ships with checkers for various bug patterns in the areas of:
There are many more, and it’s even possible to implement custom checkers tailored to project specific requirements and coding habits.
If you want to learn about advanced use cases for Java annotations and to detect bugs while you code, come to this session and get a jumpstart on the Checker Framework.
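To illustrate the idea of pluggable type checking, consider the following self-contained sketch. It declares a stand-in @NonNull annotation so the snippet compiles without the checker-qual dependency; the real annotation lives in org.checkerframework.checker.nullness.qual.

```java
// Self-contained sketch of the idea behind pluggable type checking.
// @NonNull is declared as a stand-in here so the snippet compiles
// without the checker-qual dependency; the real annotation lives in
// org.checkerframework.checker.nullness.qual.
public class NullnessSketch {
    @interface NonNull {}                        // stand-in annotation

    // With the Nullness Checker on the annotation-processor path,
    // passing a possibly-null value here becomes a compile-time error
    // instead of a NullPointerException at runtime.
    static int length(@NonNull String s) {
        return s.length();
    }

    public static void main(String[] args) {
        System.out.println(length("hello"));     // fine: provably non-null
        // length(null);  // the Nullness Checker would reject this call
    }
}
```

The plain Java compiler accepts the commented-out call at compile time and fails at runtime; running javac with the Nullness Checker as an annotation processor moves that failure to compile time.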
09:50 to 09:55
10:05 to 10:10
10:15 to 10:50
In this talk we will demonstrate how ThyssenKrupp Steel’s Manufacturing Execution System, which targets the production planning and control of steel plants, incorporates GEF-based views in its Eclipse-based development environment. We will start with a short introduction to the application domain, then demonstrate the relevant parts of the development environment’s user interface, focusing on the diagrammatic views that integrate automatic layout, image export, as well as JSON-based persistence. We will then give some insight into the underlying architecture, sketching how GEF Zest and Graphviz have been combined to realize the demonstrated views.
The talk is intended for adopters and might be of special interest to those who aim at realizing diagram editors or views with GEF 5.0.0.
11:00 to 11:35
Executing automated tests of every reasonable scope as a fixed step of every build job is mandatory. Some test-worthy situations can’t be simulated with a mocked environment; therefore, integration tests have to be part of the build step as well. In this talk I will demonstrate how to use Docker and Gradle to integrate integration tests and the necessary setups into the build process, and how to use these setups for debugging and publishing of build results. The talk is intended for developers familiar with the basics of Gradle and Docker and curious about the opportunities of using their advantages in test and build processes.
11:00 to 11:35
We at Fiducia & GAD IT AG have been using code generation as a tool to develop our banking software "agree BAP" for about 15 years. As the codebase grew to about 80 million LOC, the modeling tools in use changed over time: from proprietary XML-based formats to UML models with MID Innovator and IBM RSA. On top of that, in-house developed Xtext DSLs are used for several purposes.
In an effort to ease maintenance and to consolidate our tool landscape, we are currently in the process of migrating about 500 UML models and 620 Java projects to an Xtext-based DSL. The tools have to be developed, and the migration has to be coordinated, by a small team of developers.
In our talk we'll put the focus on three topics:
Our goal is to show you how Xtext DSLs can be introduced in a company, especially in a big company with a mature codebase and a small team. We'll talk about the technical challenges we encountered and how we solved them. Of course, we'll also give you a glimpse of our future plans.
11:45 to 11:50
12:00 to 12:05
13:00 to 13:35
Since the first graduation of the next generation code base (a.k.a. GEF4) in June 2016, we have worked intensively on making GEF even more robust and concise. And we have added some nice end-user features that make GEF applications fun to use. In this talk I will give an overview from an end-user's perspective, especially pointing out what has been added during the 5.0.0 (Oxygen) release timeframe. I will also give a short outlook on our plans for Eclipse Photon.
13:45 to 14:20
While creating languages and IDEs with Xtext is a breeze, it may become a little bumpy when you want to provide headless tools. Even though there exists decent support to generate and compile Java code from DSLs with Gradle or Maven, build systems for other target languages are still uncharted waters. Navigating through them depends a lot on your own technological decisions and of course on the target language of your choice.