Performance and runtime at JPoint 2018

We all come to conferences with certain expectations: usually we go to a fairly specific group of talks on fairly specific subjects. The set of topics differs from platform to platform. Here is what Java people are interested in:
 
 
- Performance
- Virtual machines and runtime features
- JDK 9/10
- Frameworks
- Architecture
- Enterprise
- Big data and machine learning
- Databases
- JVM languages (including Kotlin)
- DevOps
- Various smaller topics
 
 
The conference program is put together so that at least one good talk is selected for each of these topics. JPoint runs for two days with about forty talks, so all the main questions will be covered one way or another.
 
In this small post I'll tell you about the talks I liked, as someone who mostly goes to talks on performance and runtime.
 
Scaling, clusters and all that won't be covered here; suffice it to say that it is represented too (Christopher Batey from Lightbend will talk about Akka, Viktor Gamov from Confluent about Kafka, and so on).
 
 
Disclaimer: this article is based on my impressions of the program published on the official site. Everything written below reflects my own thoughts, not quotes from the talks. There may be (and certainly are) incorrect assumptions and inaccuracies in the text.
 
Performance
 
Remember the comic article "Java with assembly inserts"? In the comments apangin said he would give a talk about VMStructs. Said and done, here it is: "VMStructs: why an application would want to know about JVM internals". The talk is devoted to VMStructs, a special API of the HotSpot virtual machine that lets you learn about internal JVM structures, including the TLAB, Code Cache, Constant Pool, Method, Symbol, etc. Despite its "hacker" nature, this API can come in handy for a regular program. Andrei will show examples of how VMStructs helps in the development of real tools (the ones used at Odnoklassniki).
 
The second talk, "Hardware Transactional Memory in Java", is given by Nikita Koval, a research engineer in the dxLab research group at Devexperts. If you were at JBreak earlier this month, you may have noticed that he spoke about quite different things there (writing a fast concurrent hash table using the power of modern multi-core architectures and specialized algorithms). This talk is about transactional memory, which is gradually appearing in modern processors but which an ordinary developer still doesn't quite know how to use. Nikita should cover the ways to use it, the optimizations that already exist in OpenJDK, and how to execute transactions directly from Java code.
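For context, HotSpot already has an experimental flag that tries to execute contended `synchronized` blocks as hardware transactions on Intel CPUs with TSX. Below is a minimal sketch (the class name and numbers are my own illustration, not from the talk): the Java code is an ordinary synchronized counter, and the speculation, if any, happens entirely inside the JVM.

```java
// Run with hardware lock elision enabled (Intel CPU with TSX required):
//   java -XX:+UnlockExperimentalVMOptions -XX:+UseRTMLocking RtmCounterDemo
// The Java code does not change at all; on transaction abort the JVM
// falls back to a regular lock, so the result is always correct.
public class RtmCounterDemo {
    private long count = 0;

    private synchronized void increment() {
        count++; // candidate for lock elision via RTM
    }

    static long run(int threads, int perThread) {
        RtmCounterDemo demo = new RtmCounterDemo();
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) demo.increment();
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            try {
                t.join();
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
        }
        return demo.count;
    }

    public static void main(String[] args) {
        System.out.println(run(4, 1_000_000)); // prints 4000000
    }
}
```

Whether RTM actually kicks in depends on the hardware and contention pattern, which is exactly the kind of thing the talk should clarify.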
 
And finally, "Enterprise without brakes". Where would we be without the bloody enterprise? Sergey Tsypanov works on performance issues at Luxoft, in the Deutsche Bank domain. The talk deals with patterns that kill the performance of your applications: easy enough to spot in code review, yet subtle enough that the IDE doesn't underline them in red. All the examples are based on code from applications running in production.
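A classic illustration of such a pattern (my own example; the actual cases from the talk may differ) is string concatenation in a loop: it compiles fine and passes review easily, but does quadratic work because each `+=` copies the whole accumulated string.

```java
import java.util.List;

public class ConcatDemo {
    // Anti-pattern: String is immutable, so each += allocates a new
    // string and copies everything accumulated so far, O(n^2) overall.
    static String joinSlow(List<String> parts) {
        String result = "";
        for (String p : parts) {
            result += p;
        }
        return result;
    }

    // Fix: a single StringBuilder appends each part in amortized O(1).
    static String joinFast(List<String> parts) {
        StringBuilder sb = new StringBuilder();
        for (String p : parts) {
            sb.append(p);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        List<String> parts = List.of("a", "b", "c");
        System.out.println(joinSlow(parts).equals(joinFast(parts))); // true
    }
}
```

Both versions produce the same string, which is exactly why the slow one survives code review so often.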
 
Profiling
 
Three talks on profiling caught my eye. The first is by Sasha Goldstein, "Linux container performance tools for JVM applications". Sasha is a serial producer of performance hardcore. Last year at JPoint he gave an excellent talk on using the Berkeley Packet Filter with the JVM (I strongly recommend watching the recording on YouTube), and it was only a matter of time before he got to a detailed analysis of containerization. The world is moving to clouds and Docker, which in turn brings us plenty of new problems. As it turns out, most low-level debugging and profiling tools, when applied to containers, acquire all sorts of quirks and rough edges. Sasha will walk through the main scenarios (CPU load, IO responsiveness, access to shared databases, etc.) through the prism of modern tools on the GNU/Linux platform, including BCC and perf.
 
"Profiling with microsecond and CPU-instruction accuracy" is the second profiling talk, given by Sergei Melnikov from Raiffeisenbank. Interestingly, before working on low-latency Java code he was a performance engineer for the Intel C/C++/FORTRAN compiler. This talk will also feature perf! :-) There will also be hardware features of Intel processors and the Intel Processor Trace technology, which lets you take the next step in profiling accuracy and reconstruct the execution of a section of a program. There are quite a few talks on this subject out there (for example, Andi Kleen's talk at Tracing Summit 2015), but they usually leave a lot of questions and aren't very practical in the context of Java. Here we have not just a person who has lived in both worlds (both Intel and Java at a bank); you can also find him in the discussion area and ask uncomfortable questions.
 
The third talk is "Universal profilers and where they live", given by Ivan Ugljansky, one of the developers of Excelsior JET (a certified Java SE implementation based on optimizing AOT compilation), who works on the runtime: GC, class loading, multithreading support, profiling, etc. The gist of the talk: they recently needed to collect profiles of applications built with Excelsior JET, on all supported systems and architectures, without recompiling the application, and with acceptable overhead. It turned out that the usual profiling approaches don't satisfy all of these requirements at once, so they had to invent something of their own. Ivan will explain which profiling methods are suitable for AOT, what you can afford when profiling code from inside the JVM, and what you have to pay for a profiler's universality.
 
Non-standard runtimes
 
A runtime, in short, is the thing that takes your high-level code in a JVM language, turns it into low-level code (machine code, for example) and manages the execution process. Usually it includes some sort of assembler, compiler, interpreter, virtual machine. The characteristics of the runtime determine how application workloads actually execute.
 
The first thing that stands out in the program is the talk by Alibaba about their JDK. Who hasn't dreamed of making their own JDK, with blackjack and coroutines? But everyone knows it's hellish labor, pain and suffering. Yet Alibaba pulled it off. Here's what they have:
 
 
- A mechanism for allocating objects in dedicated regions with no GC overhead;
- Lightweight threads (coroutines) built right into the JVM, needed for asynchronous programming;
- Live profiling;
- Various nice little extras.
 
 
Yes, we (the general public using OpenJDK) will eventually get Project Loom. But there is a nuance: in Loom, coroutines are secondary to the main goal, fibers. Fibers require delimited continuations, but it is not guaranteed that those will ever appear in the public API. At Alibaba, it seems, all of this has already been built in-house.
 
As far as I understand, this is not a talk from the "use our proprietary private JDK" category, but a guide for people who plan to develop similar features themselves, or to cope with their absence in OpenJDK. For example, profiling tools depend on the areas being profiled and on the product teams: each product will have its own. The speaker from Alibaba will talk not so much about his own tools as about how to steer the people developing such tools in the right direction.
 
By the way, since we are talking about coroutines: in Kotlin they appeared starting with version 1.1 (in experimental status), and Roman Elizarov from JetBrains will give a talk about them. Roman will talk about the evolution of approaches to asynchronous programming, about their differences and similarities. Plus we'll hear the official position on why what Kotlin has now is better than the familiar async/await.
 
And Alibaba JDK is not the only representative of unusual ecosystems. Of course, there is a talk about Azul Zing, and two about OpenJ9 (one, two).
 
All talks about the insides of Azul products carry, for me, a certain shade of sadness: never in my life have I been in the circle of the chosen who use their excellent but by no means cheap solutions. So for me their latest talk is mostly of theoretical interest, as a source of information about technologies competing with our native OpenJDK. AOT is now an active topic in OpenJDK: JDK 9 already shipped built-in AOT (only for 64-bit Linux), there is SubstrateVM, and things will only get better, up to the implementation of Project Metropolis. Unfortunately, AOT in Java is not that simple; it interacts rather painfully with parts of the modern infrastructure (remember Nikita Lipsky's epic talk about the crookedly designed OSGi?). Azul already has a ready-made AOT solution called ReadyNow, built into Zing, which tries to combine the best qualities of JIT and AOT; that's what this talk will be about.
 
OpenJ9, on the other hand, you can download right now. Since IBM open-sourced its virtual machine under the Eclipse Foundation, a lot of hype has built up around it. In the mass consciousness there is a certain set of ideas and claims: that it can replace HotSpot, that class libraries from OpenJDK can be reused easily, that memory consumption should drop, that something can even be offloaded to the GPU, and so on. (By the way, GPU stuff generally seems like black magic; fortunately, at the last Joker Dmitry Aleksandrov gave a great talk, "Java and GPU: where are we now?". No video yet, but the slides are available.)
 
The first talk, "The Eclipse OpenJ9 JVM: a deep dive!", is given by Tobi Ajila, a J9 developer at IBM working on Valhalla and Panama, with a solid track record including improvements to the interpreter, JVMTI and lambdas. Apparently, there will be a description of the technical features of OpenJ9 thanks to which you can speed up your cloud solutions and other performance-critical things. The second talk, "Deep dive into the Eclipse OpenJ9 GC technologies", is by the architect of the OpenJ9 garbage collector, also from IBM: a very pragmatic story about the four garbage collection policies, where each should be used, and how it all works under the hood. I hope that after these talks the magical aura around OpenJ9 will dissipate a little.
 
Conclusion
 
During the two days you can attend 12 talks. Of these, 3 keynotes are common to everyone, so you have to make a choice 9 times. If you pick talks only from this list, you can make 7 of those 9 decisions. The other two are up to your taste (how about broadening your horizons with some "universal" topics?). Some talks overlap (the hardest choice on the first day is between Sasha Goldstein's container profiling, Nikita Koval's hardware transactional memory, and Roman Elizarov's coroutines). All in all, from the point of view of someone interested in performance and runtime, the program is put together well enough to be interesting from start to finish. See you at the conference!
 
A reminder that less than a month remains until JPoint 2018. Tickets can still be purchased on the official website.