"For us, there is no point in using Retrofit": about Android-development in Sberbank Online
How many Russian applications on Google Play have "???+ installs"? Every such case is a unique story with its own specifics, so it is interesting to talk with the developers. And when such an app also holds a rating of 4.?, the interest only grows.
Vladimir Tebloev is one of the people working on the Sberbank Online Android application. In the spring, when Sberbank-Technologies took part in our Mobius conference, he gave a talk there, and now we decided to ask Vladimir about the specifics of his work.
source code of their Android applications is not easy to understand. We are trying to follow our own path, especially since the interaction with the server is different: in the same Telegram it is MTProto, while we use ordinary WebSockets.
- I'm a lazy interviewer, so I decided to simply take a list of the things you work on and ask about each item: "tell me exactly how things stand with this."

The first item: you work on the "architecture of application modules". It has already been said that the application is divided into modules, but what else can you say about the architecture?
- It is evolving iteratively and is versioned; we have now reached the 17th version.

In the 16th, Clean Architecture was introduced. We agreed on who is responsible for what (the presentation, domain, and data layers), which entities should be used where, where converters belong: in general, we spelled out all the architectural questions and implemented them.
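The layer split described above can be illustrated with a minimal, framework-free sketch. All names here are hypothetical and chosen for illustration only; the actual Sberbank Online entities are not public.

```java
// Hypothetical sketch of a Clean Architecture slice: the domain layer
// depends only on an abstraction, while the data layer supplies it.

// Domain entity (plain, no framework dependencies).
final class Account {
    final String id;
    final long balance; // balance in kopecks, for illustration

    Account(String id, long balance) {
        this.id = id;
        this.balance = balance;
    }
}

// Data-layer boundary: the domain sees only this interface;
// the real implementation (network, database) lives in the data layer.
interface AccountRepository {
    Account loadAccount(String id);
}

// Domain-layer use case: pure business logic, trivially unit-testable.
final class GetAccountBalance {
    private final AccountRepository repository;

    GetAccountBalance(AccountRepository repository) {
        this.repository = repository;
    }

    long execute(String accountId) {
        return repository.loadAccount(accountId).balance;
    }
}
```

Because the use case depends only on the `AccountRepository` interface, it can be exercised with a fake repository in unit tests, without any Android classes.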
It was implemented as follows: all new features had to be written on the new architecture. If something in a pull request deviates from the agreed standard, that pull request is sent back for revision. But we did not rush to rewrite all the old functionality at once, because that could cause a lot of problems.
For the presentation layer we chose standard MVP, but some teams use MVVM; in the presentation layer we do not restrict anyone. For example, we moved our chat to MVI, or rather to our own interesting MVI implementation, which is radically different from what the author of Mosby described.
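For readers less familiar with MVP, a minimal framework-free contract of the kind implied above might look as follows. The names and the formatting rule are invented for the example, not taken from the actual project.

```java
// Hypothetical minimal MVP contract. The View is passive and only renders
// what the Presenter tells it; the Presenter knows nothing about widgets.

interface BalanceView {
    void showBalance(String formatted);
}

final class BalancePresenter {
    private final BalanceView view;

    BalancePresenter(BalanceView view) {
        this.view = view;
    }

    // Presentation logic lives here: convert raw kopecks to display text.
    void onBalanceLoaded(long kopecks) {
        view.showBalance(String.format("%d.%02d RUB", kopecks / 100, kopecks % 100));
    }
}
```

Since the presenter talks to an interface, presentation logic can be unit-tested by passing a fake view, which is exactly what makes MVP attractive on a large codebase.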
Then we moved to the 17th version of the architecture and introduced RxJava, which entailed architectural changes. Strictly speaking, our architecture is now hexagonal; we "branched off" from Clean. But they are similar in that both rest on the SOLID principles, so one flows smoothly into the other. That is what we are working with now.

In future versions of the architecture we want to abandon the Moxy framework used to implement MVP, because it causes certain difficulties. The project is large, Moxy relies on annotation processing, and when the "lower level" modules are modified, the build takes longer. And we are trying to make life easier for our developers.
- The second item is "optimization of work and memory consumption." How acute is this issue, do you have to think constantly about users with old devices?
- This issue is in the focus of the platform teams, they are developing the tools used by the feature teams. The need to deal with this, rather, arises from the need of one of the teams. For example, in our team of "Dialogs" in the early stages of development, the chat worked very slowly. Then I had to roll up my sleeves, start with the profiler, see where the bottlenecks are in the application, understand the reasons for their occurrence.
As for optimization: for example, we abandoned PNG and are gradually cleaning it out of the project in favor of vector graphics only. This year we plan to optimize the Dagger dependency graph to speed up the application's cold start.
- Let's move on to testing: how does it work?

- I can only speak for our team; in others the process may be organized differently.

We initially had one tester on the team. Over time, plain manual testing became boring for him, and he started asking us to help him write unit tests. We showed him how to test the database, the entities, the parsing, and so he offloaded us, taking some of the work off our hands. This is good for everyone: it is interesting for him and helpful for us.
Over time we came to the need to automate regression testing, which means writing UI tests. At first a colleague and I worked on the UI tests, and later we were joined by the quality department: our testers who had previously tested the backend. They know Java, and now they are connected to our project to automate the whole regression suite. Together we sat down and examined the available solutions: Appium, Espresso, Selenium.

We settled on Espresso and began developing approaches together. To make testing easier, we developed our own framework, something like Kakao. We started this work in early 201?, and now we have a large framework, and most tests are assembled like a construction set, because many matchers and actions are already written for different situations.
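The "construction set" idea can be shown with a toy example. Real frameworks like Kakao wrap Espresso's matchers and actions; here, purely for illustration, the "screen" is just a map of widget ids to visible text, and all names are invented.

```java
import java.util.Map;
import java.util.function.Predicate;

// Toy illustration of reusable test building blocks: a fluent entry point
// plus chainable "matchers" that fail loudly, like Espresso's check().

final class Screen {
    private final Map<String, String> widgets; // widget id -> visible text

    Screen(Map<String, String> widgets) {
        this.widgets = widgets;
    }

    OnWidget onWidget(String id) {
        return new OnWidget(widgets.get(id));
    }
}

final class OnWidget {
    private final String text;

    OnWidget(String text) {
        this.text = text;
    }

    // Reusable matcher step; returns `this` so steps chain into readable tests.
    OnWidget checkText(Predicate<String> matcher) {
        if (text == null || !matcher.test(text)) {
            throw new AssertionError("Matcher failed for text: " + text);
        }
        return this;
    }
}
```

Once a library of such matchers and actions exists, each new UI test is mostly assembled from ready-made pieces, which is what makes it practical to hand test-writing over to QA engineers.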
Now our testers are actively asking us to teach them to write UI tests, because it is easier to write a test once than to "poke through" the same actions on five devices. But, of course, not everything is automated, and some cases still have to be checked manually.

As for the developers, our team holds a retrospective every two weeks. At one of them we concluded that developers should at least alpha-test a feature after writing it, so that no absolutely basic bugs like "the application crashes at startup" slip through. Thus the developers also got involved in testing. When we are preparing a major release and need to test a feature quickly, everyone sits down on regression and runs the regression tests together. When a bug is found, a developer drops out of regression, fixes it quickly, and then gets back to it.
- The next item: code review. Is there some specificity here, or is it "like everyone else's"?

- There is a specificity caused by the number of developers. When a company has ten mobile developers, two or three people can review everything. But how do you check the code of hundreds of people? We developed a "reviewer matrix." We selected 20 to 30 people who we know for sure can review well, leave good feedback, and resolve disputed points in the comments. We took these people and divided all the teams between them.
Why a matrix? It is done so that all reviewers carry the same load. How does the review go? We need at least three approvals. The first is from someone on the team. The second is from someone outside, from a team that does not handle this functionality. And the third is from someone on an adjacent team. In our case there are several adjacent teams, and they all look at our code. And, accordingly, all builds must pass: unit tests and UI tests should run without problems. That is how our code review works.
- The next item is refactoring legacy code. How systematic is it: strictly planned tasks, or "we had to change the old code, so we refactored it along the way"?

- In general, we have a kind of "scout principle": if you touched something old, be so kind as to do it properly; you are now a co-author. But planned refactoring also happens. For example, for "Dialogs" we needed refactoring in two directions: the contact book we use and the transfers. The contact book was extracted and cleaned up, the entire database was rewritten on Room, and it was placed in a separate module. As for payments, they had long been written with the help of RoboSpice, if you still remember it, and that hurt us. I must say that sawing it out turned out to be an unpleasant task, because there were many ties to it. And we had to cut carefully so as not to break the rest of the functionality.
- You are also involved in training programmers at Sbertech. What does training look like inside the company?

- Starting in September, we plan to run a program of retraining internal employees. We have already defined the range of topics for Java and Android. For example, we have JavaScript programmers, Java developers, and analysts who want to retrain for Android. For them a school will be organized with purposeful study, a schedule, and lectures.
And we regularly hold meetups now. The choice of topics for them differs from conferences, where you need something new and hyped. For example, if we know that developers have problems with something, it is important to talk about it. A recent example: one of our developers gave a talk on vector graphics. It was not just about a specific library that draws vectors beautifully on Android; he started with how vector graphics works in general and then went into the details. There have also been talks about Room, about Java concurrency, which many developers have problems with, and about Dagger 2.

Last year we ran an Android development school and hired those who passed it successfully. Such people should not be connected to projects immediately and left to stew on their own. Therefore, a mentor is attached to each newcomer, even a junior, to guide, develop, and help them. That is our internal training.
- Interviews: do they go "like everyone else's", or is there some specificity?

- I used to think "like everyone else's", but it turns out they are still a bit special. In my experience, three common approaches can be identified on the market. The first: ask about three or four topics and evaluate solely on them. For example, I come to a company as an Android developer, and they evaluate me as someone who must know algorithms and Java synchronization well, without assessing how good I am with the key libraries. This may be because the company needs a person who knows some narrow part of a framework or language perfectly. The second: a half-hearted interview, almost a casual chat for 30-40 minutes. Here it is rather a matter of the interviewer's competence and experience. The third: they describe the company's problems and try to get a solution on the spot. The downside of this approach is that the solution may not coincide with the opinion of the person asking the question. In my estimate, such approaches occur in about half of the cases.
As for us, we have worked out a method of examining the candidate in four broad areas: OOP, OOD (object-oriented design, architecture), Java Core, and the Android SDK. We methodically go through all the topics. If the candidate answers a topic confidently overall, we gradually go deeper and ask more specific questions. Figuratively, it looks like a tree: there is a root from which we enter each topic, and we can go five to seven steps deep. The candidate is then evaluated in aggregate across all the topics covered. If the interview goes quickly, we start asking about libraries, for example, Dagger and RxJava, and if there is still time left, about Kotlin. Thus the candidate is assessed as a whole. If a person does not understand one topic but knows another well, it does not mean they are a bad programmer. It means that within a certain period they should catch up on that topic.
- Another of your work tasks is "research and review of new technologies." Here I would like to ask for a concrete example of a technology that was reviewed.

- The last major library was RxJava; we looked at how it could affect our project. We tested it in local branches, then introduced it in one non-critical module to see how it would behave in production. After all that, we adopted it as the standard and decided that everyone would write new functionality with it.
Among the unsuccessful examples there was Retrofit, which I already mentioned: a good library that solves its problems, but its time has passed for our project. Introducing it so that we would end up with several different ways to access the network would be bad practice.

We also considered the TinyMachine library for implementing a state machine. The library is simple but not extensible: it satisfies one team, but does not fit others. So it was abandoned, because if you "drag in" a library, it should be one that suits everyone. In the end we decided to write our own state machine; fortunately, it is not rocket science and not hard to implement.
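A hand-rolled state machine of the kind mentioned is indeed small. Below is a minimal sketch; the string-based states and the screen-loading transitions are invented for the example (a real implementation would likely use enums and attach side effects to transitions).

```java
import java.util.HashMap;
import java.util.Map;

// Minimal table-driven finite state machine: transitions are registered as
// (state, event) -> next state; unknown events leave the state unchanged.
final class StateMachine {
    private final Map<String, Map<String, String>> transitions = new HashMap<>();
    private String state;

    StateMachine(String initialState) {
        this.state = initialState;
    }

    // Register: in state `from`, event `event` moves the machine to `to`.
    StateMachine addTransition(String from, String event, String to) {
        transitions.computeIfAbsent(from, k -> new HashMap<>()).put(event, to);
        return this;
    }

    // Fire an event and return the resulting state.
    String fire(String event) {
        String next = transitions.getOrDefault(state, Map.of()).get(event);
        if (next != null) {
            state = next;
        }
        return state;
    }

    String state() {
        return state;
    }
}
```

The whole thing fits in a few dozen lines, and unlike a fixed third-party library it can be extended (transition listeners, guards, enum states) as each team requires.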
- And the last thing: maintaining documentation. In your case, with so many developers, when it is impossible to keep everything in mind, you can't get anywhere without accurate documentation?

- Yes. The first kind of documentation is Javadoc. We have a local meme, the "Prilutsky check": no Javadoc, the pull request does not pass ("Prilutsky" is in honor of one of the leads on the Android team: he wrote so often on pull requests that the code must be documented, and that without this the code would not get into the common branch, that the meme was born). Now developers already understand that every public method, every class, every constructor must be described with Javadoc. All code must be covered by docs, even tests, so that it is clear why a given test was written and questions like "What payment is this? Is it a payment from the messenger or from the payments section? What is paymentTest?" do not arise.
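As an illustration of the convention described, a documented class might look like the sketch below. The class, its behavior, and the kopeck/ruble rule are all hypothetical examples, not code from the project.

```java
/**
 * Converts raw payment amounts coming from the messenger (chat) flow,
 * as opposed to payments started from the "Payments" section.
 * (Hypothetical example of the Javadoc convention, not real project code.)
 */
final class MessengerPaymentAmount {

    /**
     * Returns the whole-ruble part of an amount given in kopecks.
     *
     * @param kopecks the amount in kopecks, must be non-negative
     * @return the amount in whole rubles, truncated
     * @throws IllegalArgumentException if {@code kopecks} is negative
     */
    long rubles(long kopecks) {
        if (kopecks < 0) {
            throw new IllegalArgumentException("negative amount: " + kopecks);
        }
        return kopecks / 100;
    }
}
```

With every public member documented like this, a reviewer (or a newcomer reading the test suite) can tell at a glance which flow a class or test belongs to.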
In addition, we have documentation in Confluence. When I came here, the product manuals were kept in the cloud, and there were a couple of articles about how we work. Now all global things that affect everyone are necessarily described in Confluence. For example, we need to install certificates to access the repository, and the person who did this writes an article so that people do not have to explain a million times in chat what to do when a certificate stops working. Another example: when it was decided to introduce RxJava, the best practices were described in Confluence: how to do things well, how not to do them, with a link to a sample. The simplest example is how to order methods in a class so that everything is standard.

These articles are written gradually but regularly. Our Confluence has now grown to 200 articles on various issues. Such a tool helps even newcomers: they study Confluence, get an idea of the internal kitchen of the development process, and when questions arise they can figure things out and make decisions on their own, without always pulling in their mentor.