How to replace HR with a robot? The technical part

We have already told you about the Robot Vera project from the business point of view. Now it is time to learn more about the inside of the service. Today you will learn about the practically unlimited computing resources, the semantics, and the technologies used in this project. Under the cut is a transcript of the video.
 
 
Contents of the video:

1. Beginning
2. Block in the bank
3. Teaching the machine to understand human speech
4. Machine learning and chocolates
5. Loading
 
 
A series of interviews with Dmitry Zavalishin on the DZ Online channel:

1. Alexander Lozhechkin of Microsoft: Will developers be needed in the future?

2. Alexei Kostarev of Robot Vera: How to replace HR with a robot?

3. Fedor Ovchinnikov of Dodo Pizza: How to replace a restaurant director with a robot?

4. Andrew Golub of ELSE Corp Srl: How to stop wasting so much time on shopping?

5. Vladimir Sveshnikov of Robot Vera: How to replace HR with a robot? The technical part.
 

Who is being interviewed?


 
Dmitry Zavalishin is a Russian programmer, author of the Phantom OS concept, organizer and program committee member of the OS Day conference, and founder of the DZ Systems group of companies. In 1990-2000 he took an active part in creating the Russian segments of the Internet (Relcom) and Fidonet; in particular, he ensured transparent interaction between these networks. In 2000-200? he was responsible for the design and development of the Yandex portal and created the Yandex.Guru service (later Yandex.Market). More can be read on Wikipedia.
 
 
Vladimir Sveshnikov came to St. Petersburg in 2006 from the Far East. He received a law degree from the University of Economics and Finance. In 200? he and his partners organized the company First Street, which recruited unskilled personnel from the CIS. Later the company moved into personnel outsourcing, and by 2012 it was working with two large customers: the store chains Healthy Baby and Dixie. The annual turnover of First Street amounted to 30 million rubles, and in 2013 to 50 million. But soon Vladimir realized that he did not see himself as an outsourcer and wanted to build a technology startup.
 
 

Interview


 
Hello! This is DZ Online Tech, and our first guest is Vladimir Sveshnikov, co-founder and CEO of Robot Vera, a company that does recruiting with artificial intelligence. It is probably one of the first startups that really brings artificial intelligence to people. We have already met to discuss the business side of this area and learned what it is for and why it is good. Today Vladimir will tell us a little about how it is all built and what problems arose on the way to the current solution. Vladimir, hello!
 
 
Yes, hello! I'm happy to talk about how we started. Just a year ago there were only three of us on the team. Now more than 50 people work at the company. But when there were three of us, I was fully responsible for the whole technical part. Initially we started with simple things: we just took the recruiter's process and repeated it. We searched resumes for him, called candidates for him, sent e-mails with the job description. And I have a certain technical background...
 
 
Although I'm a lawyer by training, I retrained as a programmer. I realized that these processes are very routine and monotonous, and they can be automated. I remember the first thing we automated was the resume search. It took about half a day: half a day searching resumes, half a day making calls. We did it on the SuperJob, Rabota and Zarplata.ru sites. Then I looked at their APIs and realized that what took us half a day could be done in one minute. My partner and I split the work: he spent a whole day searching resumes somehow, while I did it all in one minute and went to drink tea. He comes over and says: "Why are you drinking tea?" I say: "I have already fulfilled my quota."
 
 
You had fulfilled your quota.
 
 
And that, in fact, was the first push. We realized that technology could be used to automate processes that are not automated in HR at all. Then we actively took on the calls. We hired call center operators and automated the calling: we made a button so they could dial right from the browser. Everything was made as simple as possible, so that an operator could sit down, put on headphones, and essentially the only thing he did was synthesize and recognize speech. Then we realized that these technologies were already on the market, showed a fairly good level of quality, and could be used.
 
 
So at that point a human was doing the speech synthesis and recognition? And at that moment you realized he was just a replaceable part of the machine... Until then everything was simple: pull a list of vacancies through the API, filter them by some keyword. Although there are subtleties even there, by and large. But never mind, we will probably return to them later. At some point you started working with voice: synthesizing and recognizing speech. Synthesis is understandable: there were scripts, and they are more or less fixed for a given selection. But recognition... After all, did you start with very simple questions and answers?
 
 
Yes.
 
 
Was it because recognition worked badly?
 
 
Yes, certainly. There are several points. First of all, we spent a long time looking for an approach: how to make people understand that they are talking to a robot, how to build the dialogue. At first people are in shock, they do not understand what to say or how to answer (especially when we call the regions). In Moscow and St. Petersburg it is still more or less normal, but in the regions people are genuinely surprised: what robot? (You can hear all sorts of vocabulary at that moment.)
 
 
So we made it so that she introduces herself and then sets a certain standard format for communication. She says: "I am able to recognize the answers 'yes' or 'no'. Answer my questions 'yes' or 'no'." Then people begin to understand how to communicate, because before that they have a dissonance: a robot calling? Robots aren't supposed to exist yet. What is this, a call from the future? So yes, it was precisely about speech recognition. It now works well enough to recognize completely different words: we now have scripts where candidates choose vacancies and ask questions. But the "yes" or "no" format was there so that people understood how to talk to the robot. That was the main point.
 
 
So you could have done more from the start?
 
 
Yes.
 
 
Or not? Because that is where semantics begins.
 
 
Yes, well, we have added semantics since then. A few months ago we added answers to candidates' questions. Recognizing what a person said we could do long ago. We even had an item in the script: if the candidate says no, the vacancy is not interesting to him, we ask "Why is it not interesting?". And he answers why.
 
 
 
 
But is that just a recording? Do you simply store the answer without trying to analyze it?
 
 
We recognize it.
 
 
Do you recognize?
 
 
Yes, and we show it in the client's account as the answer.
 
 
In the form of text?
 
 
Yes.
 
 
Were there any problems at that point? What you describe seems rather banal. It seems that almost anyone on the planet could now take a pack of libraries and throw all this together on their knees in two days.
 
 
There are problems, of course. The main problem, if we talk about the technological aspect, was that we had a rather complicated product. Calls alone, recognition alone, speech synthesis alone: each is a separate story, big and complex. For example, we use external speech recognition: Google, Yandex. First, there is no ready benchmark. You need to look at your own tasks: exactly how each service recognizes your texts, your audio recordings. So the first thing we did was this analysis: we looked at which one works best. Then we realized that even though one of the companies works better and shows better results, its response time can at some moments be longer; it can answer more slowly. So we began sending a recording to several speech recognition systems at once: Microsoft, Google, Amazon, Yandex. We sent it to all four simultaneously and took whichever quick response came back.
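The fan-out Vladimir describes, sending one recording to several recognition services and taking whichever answers first, can be sketched roughly like this. The recognizers below are simulated stubs with artificial delays, not real vendor clients:

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def make_stub(name, delay, text):
    # Simulated recognizer: a real one would be a network call to an ASR API.
    def recognize(audio):
        time.sleep(delay)  # simulated network/processing latency
        return name, text
    return recognize

RECOGNIZERS = [
    make_stub("fast-asr", 0.01, "yes"),
    make_stub("slow-asr", 0.2, "yes"),
]

def recognize_first(audio):
    # Submit the same audio to every service and return the first result.
    with ThreadPoolExecutor(max_workers=len(RECOGNIZERS)) as pool:
        futures = [pool.submit(r, audio) for r in RECOGNIZERS]
        for fut in as_completed(futures):
            return fut.result()  # later responses are simply discarded

print(recognize_first(b"...audio bytes..."))  # ('fast-asr', 'yes')
```

In production one would also cancel or time out the slower requests instead of letting them finish.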
 
 
Now we use two or three systems at most, during peak hours. And the main difficulty was that besides that we needed to run the resume search first; the robot does everything herself. She searches, then after she finds candidates she calls them herself, and then she sends e-mails to those who have not yet answered "yes".
 
 
You don't have to look after the robot?
 
 
Well, we do look after it: a monitoring system. We have been through all of that. At first, since I did it all alone, I did it not quite right; I did it quickly, in a hurry. I had a single Docker container with the database inside it. I did not break it into microservices, as is now the accepted practice and as we have since done. It was all one container, one image, in which everything ran on the same virtual machine. For each new client we spun up a new virtual machine from that image. And since there was no monitoring system, under load everything would often fall over. One of the stories was when a major customer came to us. We ran a pilot for two or three days, and then at some point he decided to upload his stop lists: he loaded several thousand candidates. Of course there was a memory leak and everything went down, and since it was one container, nothing was saved. I spent almost the whole night restoring it all, through the telephony and everything else, so that they did not lose their calls. So yes, there were problems like that, but if we talk about...
 
 
Well, these are typical problems, in fact. Quite banal. They are not really connected with high tech or recognition. This would probably have happened in any startup, because in all startups the first person, the technology ideologist, initially builds everything single-handedly. But what about problems specifically with the quality of recognition?
 
 
With the quality of recognition there are, of course, problems. They are solved in different ways. For example, if we are recognizing an address, we tell the recognition system that this is an address; then it gives better quality. If we are recognizing a question, we mark that it is a question. But in general the quality is now good enough, provided we have normal audio recordings, no extraneous noise, and the person speaks normally, without speech defects.
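The hinting described here, telling the recognizer whether to expect an address or a free-form question, amounts to attaching context phrases to the recognition request. The sketch below builds such a request payload; the field names (`phrase_hints`, `audio_uri`) and hint lists are hypothetical stand-ins, not any specific vendor's API:

```python
# Per-kind vocabulary used to bias the recognizer toward expected words.
# Both the hint lists and the payload layout are illustrative assumptions.
HINTS = {
    "address": ["street", "building", "apartment", "city"],
    "question": ["salary", "schedule", "vacancy", "interview"],
}

def build_request(audio_uri, expected_kind):
    # Assemble a recognition request with context hints for the given kind.
    return {
        "audio_uri": audio_uri,
        "config": {
            "language": "ru-RU",
            "phrase_hints": HINTS.get(expected_kind, []),
        },
    }

req = build_request("storage://calls/candidate-42.wav", "address")
print(req["config"]["phrase_hints"])  # ['street', 'building', 'apartment', 'city']
```

Major cloud ASR services expose a similar "speech context" or "phrase list" mechanism, which is presumably what is being used here.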
 
 
That, by the way, is an interesting point. You know, I have talked with Mos.ru, the Moscow city services portal. They are also actively working on similar technologies, and they have understandable, quite massive tasks. There is an absolutely mundane one: collecting water meter readings. You can call in and dictate the readings by voice, and the robot recognizes them. And they say the opposite: that imperfect speech, or speech with strong accents, is actually covered quite well by the algorithm, even better than by live operators. Do you have the reverse situation, if I hear you right?
 
 
Well, to be honest, we have not had that problem: there were not many people with accents. In general, everyone speaks more or less standardly. It probably also depends on how you ask. If we ask the person to choose an answer, for example a vacancy, she says: choose "sales manager", "loader" or "storekeeper".
 
 
So you try to narrow the answers down to a small set?
 
 
Yes, certainly. We had a case where we collected questionnaires and resumes, where we asked a person to tell us about himself and his experience. And there, of course, there were all kinds of interesting, very funny stories: how they told about themselves and how it was all recognized. The error rate there is still high. The robot certainly does not fully recognize free-form speech yet.
 
 
How do you measure the error? After all, if a person simply talked about himself, you cannot build an obvious metric to compare against the correct text.
 
 
Only by hand for now. We have dedicated account managers who listen to some of the recordings and then check the answers.
 
 
Verified manually, spot checks? That is, a more or less ordinary sampling-based quality check?
 
 
Yes. When we were choosing which recognition system to take as the basic one, we took a little less than 1000 audio recordings that we had, recognized the text from each one, and checked the recognized text against the recording. Several of us sat and did it.
 
 
But that was for choosing the system? That is, there are N systems, you have a corpus of texts for which the correct answer is known, you run them through and compare. An obvious mechanism. By the way, which of the four recognized best in the end?
 
 
Well, each has its own strengths. The market now generally considers Google the best. But for us, for example, Microsoft returned results faster. You can look at it in different ways. It is impossible to single out one system to take as the basis, so we always use two or three now.
 
 
So Yandex ended up an outsider? Does it recognize worse and answer more slowly?
 
 
Yandex recognizes addresses very well. It is probably the best at that. Now, if we have an address, we take Yandex without even thinking, because it is the best option.
 
 
But this is probably just because they have a good address database?
 
 
Yes, yes, Yandex.Navigator. And of course there is Yandex.Taxi. They have a lot of voice samples of drivers naming addresses, so it is very well worked out. We do not even try any other system for addresses; of course we tried them in the general analysis, but Yandex is much better.
 
 
Any trivial tricks, such as feeding the recognizer's output through a grammatical analyzer that checks it? Is something like that used as a kind of recognition metric?
 
 
Yes. If we talk about what we are doing now, about measuring automatically, there are certainly benchmarks; we look at the international ones. Recently Mozilla open-sourced its own speech recognition, which showed accuracy roughly on a par with Google.
 
 
Including Russian?
 
 
No, it is English only so far. They may train it for Russian later. But we are also looking at the international market, so for us... We now have our first partner in Dubai, and for us that fits.
 
 
And in Dubai everything is in English anyway?
 
 
Yes, it is entirely English there. All their job sites are in English. There is a translation into Arabic, but the English pages get far more traffic.
 
 
Going back to the problems. If I understand correctly (I looked at your articles on Habr about what is going on there), you have semantics.
 
 
Yes, now I'll tell you in more detail. The second set of tasks we began to solve are the problems the business brings to us in development. The business tells us from time to time: we need to make the product better, we need to go further along the path of replacing the recruiter. We need the robot Vera to be able to invite a candidate to an interview and fully negotiate the time and place. And we started to build that. In principle it is all simple enough: we added a script for inviting to an interview, and if the proposed time does not suit the candidate, another date is offered. There is synchronization with the calendar. A fairly simple task, but we had not even finished it when we realized there was another very important problem: the candidates do not receive feedback.
 
 
From the robot?
 
 
Yes. They follow the employer's script. The employer decided to ask three questions; the candidate heard these three questions and answered them, but the candidate cannot ask his own questions. It is a one-sided system. We are a B2B business, but at the same time we have a large B2C part: a lot of candidates. We have already conducted more than one and a half million interviews, which means one and a half million people who talked to the robot and who would potentially like to ask their own questions and hear some answers. So we began to solve this problem. And we realized that, for example, a simple question about wages may sound very different. It cannot be programmed as a simple hard-coded word list. The question about salary can sound like "so what about the money?" or like "what is the financial component?". We get both.
 
 
And accordingly we were failing to answer: we did not respond to the question, because we had hard-coded "income", "salary". Then we began to look for options and came to machine learning. I have been studying it for a long time, and we have people on the team who are actively engaged in it. And we remembered that there is the Word2vec library, based on neural networks. Google uses this kind of model for ranking pages: the queries people type into Google are about the same as the questions our candidates ask about a vacancy, and Google decides which document to show higher with the help of this approach. Document ranking. How does it work? All words are transformed into vectors; they are expressed in a vector space.
 
 
How many dimensions?
 
 
Offhand I cannot say. But these parameters can be tuned; they can be changed, and the quality of the model depends on them. We took the standard Word2vec model and trained it on a corpus of about 150 GB. That is millions of books. It includes the Wikipedia corpus: all articles of the Russian Wikipedia, converted to plain text, and the model is trained on this text. How does it learn? It runs through the text and looks. For example, there is the sentence "I called on the phone" and the sentence "I called on the mobile". Since "I called" is the same context in both cases, "phone" and "mobile"...
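The distributional idea described here, that words sharing contexts should end up close together, can be illustrated with a deliberately tiny toy. This is not real Word2vec training (no neural network, no learned vectors); it just measures context overlap directly, with made-up sentences:

```python
from collections import defaultdict

# Toy corpus echoing the "phone"/"mobile" example from the interview.
sentences = [
    "i called on the phone",
    "i called on the mobile",
    "i ate an apple",
]

window = 2
contexts = defaultdict(set)
for sent in sentences:
    words = sent.split()
    for i, w in enumerate(words):
        # Collect the words within `window` positions of w.
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if j != i:
                contexts[w].add(words[j])

def jaccard(a, b):
    # Context-set overlap as a crude proxy for semantic similarity.
    return len(contexts[a] & contexts[b]) / len(contexts[a] | contexts[b])

# "phone" and "mobile" share their contexts; "apple" does not.
print(jaccard("phone", "mobile") > jaccard("phone", "apple"))  # True
```

Word2vec achieves the same effect at scale by adjusting dense vectors so that words with shared contexts end up near each other in the vector space.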
 
 
...it assumes they are close.
 
 
Yes. It brings them closer together: it first places the words randomly, then pulls the points for words with similar contexts closer. So we get a certain mapping of our words into a vector space.
 
 
A metric of semantic proximity between words.
 
 
Yes. And then we compute the cosine distance, or the Euclidean distance. We tinkered with this a little, because we first used the cosine distance, but it turned out that Euclidean gave us a 10% gain in quality. Cosine is better when there is a lot of text, a big text, when you need to compare documents. And since our questions are all short and about the same length, the Euclidean metric is simpler and works better. So we decided to implement all this. The quality at first came out at about 70%, which is quite low.
 
 
70% of what?
 
 
Right answers.
 
 
Matching the questioner's expectations?
 
 
Yes. We made a data set of question-answer pairs: a question and the category to which we assign it. We have several categories: salary, address, company. There is a category "did not understand the question", where everything that did not fall into other categories lands. There are also categories "about the vacancy" and "job duties"; a number of categories like that. In each category we write some questions, usually 5-10. What happens then? We get the candidate's question, loop over all the categories, compute the distance to every question (the categories hold the same number of questions), and return the nearest category. If the degree of proximity is below a certain threshold, we say: we did not understand, please rephrase the question.
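The matching loop just described can be sketched as a nearest-reference classifier with a rejection threshold. The 2-D vectors and the threshold value here are hypothetical stand-ins for real sentence embeddings:

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(question_vec, categories, threshold=1.0):
    # Find the single nearest reference question across all categories.
    best_cat, best_dist = None, float("inf")
    for name, refs in categories.items():
        for ref in refs:
            d = euclidean(question_vec, ref)
            if d < best_dist:
                best_cat, best_dist = name, d
    # Too far from everything: ask the candidate to rephrase.
    if best_dist > threshold:
        return "did not understand"
    return best_cat

categories = {
    "salary":  [(1.0, 0.0), (0.9, 0.1)],
    "address": [(0.0, 1.0), (0.1, 0.9)],
}
print(classify((0.95, 0.05), categories))  # salary
print(classify((5.0, 5.0), categories))    # did not understand
```

In the real system the answer attached to the winning category is then read out to the candidate, and rejected questions are routed to the recruiter.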
 
 
And this is exactly the problem we are trying to solve as well as possible. First we took Word2vec; then there is the company OpenAI in the States. They study artificial intelligence for non-commercial purposes; Elon Musk and Sam Altman are among the backers. And they are probably among those at the forefront of these technologies. Recently they released their research on sentiment analysis. They trained a model on 80 million Amazon product reviews. The main task was to teach the model to generate such reviews, and it does generate them. But apart from that, it learned to do good sentiment analysis: it learned to distinguish good reviews from bad ones. We now use this too.
 
 
By the way, that is a very interesting question. You are communicating with the candidate, and from his answers you can try to extract some predisposition, an inclination, positive or negative, toward this vacancy. Do you?
 
 
This is already done. The model we are using now allows it to be done quite easily. In fact it solves the same problem: it maps words into a vector space, and then... the most important thing...
 
 
Then you take some reference points, positive and negative, and see how close the answers are to the points that characterize them as positive or negative? Do I understand correctly?
 
 
Yes.
 
 
Can you give an example of an incorrect answer to a question, where she answered wrongly or did not understand? In which direction do the errors go?
 
 
There were many. As I say, 30 percent: after the first such test we had 30 percent errors. We tried it on our own managers first.
 
 
So you tested it on yourselves?
 
 
We went straight to live production. Well, we knew it would be around 70%, but we shipped it. By the way, we had 70% on our tests, and in live it was 56%, even lower. We realized it needed more work.
 
 
Fifty-something is very nearly...

 
 
In fact, that is within statistical error: practically 50/50. In the end we have already got it close to 80. Those were the very first tests.
 
 
What happened? How did you move forward?
 
 
For example, we replaced Word2vec with the sentiment neuron, which showed 5% better quality. That immediately added 5%; then we tweaked it on our tests and the quality grew further. Then we replaced the cosine distance with Euclidean, which gave about another 10% in quality. And then we simply increased the number of questions: we had about 5 questions in each category and made it 15, a threefold increase. Thanks to these three changes we got to about 80.
 
 
When increasing them, you did not add different questions but other wordings of the same question? In that sense, what you are saying means it is precisely the semantic model that does not work well. You added variability to the options the system considers, and it began to hit the right option more reliably. Apparently you took the errors, analyzed them, and from them generated variants of the reference questions, each closing off a certain semantic cloud around itself.
 
 
So how does it work now? How do we position this thing? It is an artificial intelligence, a chat bot, that simply helps the recruiter answer all these questions. All the questions where she did not hit a category are shown to the recruiter. The recruiter can create a new category at any time. Why five questions initially? Because each category should contain questions of roughly the same length, and the same number of them; otherwise the Euclidean distance is computed badly. So at first we wanted to do it all ourselves. Then we realized that every client's questions are different. For example, a company that sells cell phones in stores may get the question "how much do you deduct for damage if I break an iPhone X?". That question does not exist for, say, programmers or storekeepers.
 
 
Although some would like.
 
 
And accordingly, all these questions go to the recruiter. The recruiter can assemble a category from them, add questions, remove questions.
 
 
Is there a corpus of typical questions that can be reused?
 
 
As a standard we ship about 12 categories now. But the recruiter can add and create categories and add questions to each one. In theory he can make many more.
 
 
And the answer to each question is fixed?
 
 
The recruiter sets the answer to the question.
 
 
In fact, at this point he could work backwards from the answers? First select the answers that interest him.
 
 
And that works. Look: at the input there is a vacancy with a description. We form the answers automatically from it. It works half and half: filling in all the questions by hand is slow and long, so we pull about half of them up automatically, and that helps the recruiter. We take the job description, break it into sentences, and for each sentence compute the distance to each category; then for each category we take the best-fitting sentence. If the description has sections for duties and requirements, they are usually marked out, and we can pull them from the text directly. Some questions, for example about a lunch break or whether higher education is required, we pull out of the general text.
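The auto-fill step just described, splitting the description into sentences and picking the one nearest to each category, can be sketched as follows. The real system uses vector distances; simple word overlap stands in for them here, and the sample description and reference phrases are made up:

```python
def tokenize(text):
    # Crude normalization: lowercase, strip basic punctuation, split on spaces.
    return set(text.lower().replace(",", "").replace(".", "").split())

def best_sentence(description, reference_phrases):
    # Pick the description sentence closest to the category's references.
    sentences = [s.strip() for s in description.split(".") if s.strip()]
    ref_words = set()
    for phrase in reference_phrases:
        ref_words |= tokenize(phrase)
    # Score each sentence by word overlap with the category vocabulary.
    return max(sentences, key=lambda s: len(tokenize(s) & ref_words))

description = ("We are hiring a storekeeper. "
               "Salary is 40000 rubles per month. "
               "The warehouse is located at Lenina street 5.")
print(best_sentence(description, ["salary", "income", "pay per month"]))
# Salary is 40000 rubles per month
```

The extracted sentence becomes the draft answer for the "salary" category, which the recruiter can then edit.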
 
 
You are telling us all this in such detail. Are you not afraid that people are sitting behind the camera right now, taking careful notes, who will be competing with you in a month?
 
 
Well, my approach is this. I believe this story with text and natural language recognition is just beginning to evolve, and the fields of application are huge. If someone builds something similar, I will only be glad; we will use their experience. I have more of an open-source attitude. That is closer to me.
 
 
But are you unique today? Do you have competitors in the world?
 
 
No.
 
 
So you are really a unique company across the entire globe?
 
 
Well actually.
 
 
Nobody here does that?
 
 
There are similar products. The technology is developing fast. Certainly somewhere someone is better at something. For example, in the States there is the chat bot Mia; maybe it communicates in text better than we do now. We do not rule that out.
 
 
You are solving a very specific task.
 
 
If we talk about voice, no. There are analogues in the States. There is one company where, while with us you can just configure a script in the browser, with them you have to call a phone number and record the broadcast: here is the number, call it, record what you want said to the candidates. They record that audio and then play back your voice. Somehow not quite the same.
 
 
It would seem you are technologically ahead.
 
 
It would seem this is all simple. But as for replacing or copying it: for example, we trained this sentiment neuron for more than two months. That was pure training time, on the Amazon corpus, following OpenAI.
 
 
But that is a question of compute power, isn't it?
 
 
Yes.
 
 
Could it be done in 3 days if you just bought more virtual machines?
 
 
No. We had already bought them.
 
 
You have already bought everything up; there is nothing left to buy.
 
 
Yes. Well, we are in partnership with Microsoft, so we have practically unlimited resources for this compute. We take big machines; we use probably the best that is now on the market. The quality at AWS, I should say, is no different. And we train on them. Even OpenAI, with its own resources, trained this model for a month. So the tasks here are complex. First, the training itself takes a month; and the second task, which is even harder, is finding the data. The question is what else you need: we do not need just any text.
 
 
 
 
Did you take a less banal text corpus? Wikipedia is a common resource; everyone has it.
 
 
Wikipedia alone is not enough. We did not just take that corpus: we also have, probably, several hundred thousand vacancies already published on the service. We took those texts, plus 2 million resumes.
 
 
But it is your competitors who have the vacancies.
 
 
Yes, certainly. We took resumes: about 2 million resumes of ours. We also collected vacancies through the job sites' APIs. In total we have probably 10-15 GB of HR-related texts: resumes and vacancies. We took all of that. Without it there would probably have been little change: we roughly compared a model trained only on the Russian corpus against the Russian corpus plus our texts. The difference is about 2-3%. Small, but it is still there, so why not do it?
 
 
And do 3% matter? Look: when I talked to your colleague, at that moment the robot, as I recall, only took "yes" and "no". That was the level. And already at that level it was very clear that... I am reminded of a good story about how TPP broke an encryption algorithm. The beauty was that they built a hardware farm that reliably distinguishes a definitely bad key from an unknown one. Not bad from good, but bad from unknown.
 
 
On that you could shave off two orders of magnitude of volume, clear the space, and brute-force the remaining keys by ordinary enumeration. Did you do the same? You have, in effect, built a tool that distinguishes precisely the bad candidate from the unknown one. And by the same token you took a significant amount of meaningless, stupid work off the recruiter and raised efficiency by, what, an order of magnitude, probably? Somewhere around that level? And that is an essential driver. Everything else you do is valuable and very interesting, but it seems to me that from the business point of view it does not bring a jump like that. Or am I wrong?

 
 
Well, these 2-3% in this particular task, they certainly...
 
 
...don't make much of a difference?
 
 
On their own they don't make much of a difference, but we keep fighting for them — we're working toward that figure of 80% quality. We consider that, probably, the baseline after which we can roll it out into production for all customers and say that we did it.
 
 
And right now you're experimenting with clients who agreed to it?
 
 
Yes, several clients are testing it.
 
 
Who are willing to take the risk.

 
 
Yes, they're using this technology. For now we're keeping it somewhat closed — we don't really talk about it.
 
 
You just did.
 
 
Yes. And accordingly, each piece here is a small improvement. Here we changed the distance metric from cosine to Euclidean; there we changed the model. Then we tweaked that model a bit more, tried a different classification method on another set, and it got a little better — another 2-3%. And that's how we creep toward the 80%. It's a process like that: each component affects the whole thing.
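The metric swap mentioned here can be sketched in a few lines. This is not the project's actual code — the vectors below are made up — it only shows how cosine and Euclidean distance can rank the same candidate vectors differently against a vacancy vector:

```python
import numpy as np

def cosine_distance(a, b):
    """1 - cosine similarity: ignores vector magnitude, only direction."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def euclidean_distance(a, b):
    """Straight-line distance: sensitive to both direction and magnitude."""
    return float(np.linalg.norm(a - b))

vacancy = np.array([1.0, 2.0, 0.5])
candidates = {
    "A": np.array([2.0, 4.0, 1.0]),   # same direction, twice the magnitude
    "B": np.array([1.1, 1.9, 0.6]),   # close in absolute position
}

for name, vec in candidates.items():
    print(name,
          round(cosine_distance(vacancy, vec), 3),
          round(euclidean_distance(vacancy, vec), 3))
```

Cosine treats candidate A (a scaled copy of the vacancy vector) as a perfect match, while Euclidean prefers B, which sits closer in absolute position. Which metric works better for a given embedding is an empirical question — here, as the speaker says, the swap was worth a couple of percent.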
 
 
On the other hand, your competitors are now two orders of magnitude behind you, right? And it's not hard for them to close those two orders. After all, at its core it's a fairly banal system with a fairly banal recognizer of the words "yes" and "no". And you built it all, if I remember correctly, in three months? Well, maybe...

 
 
A little more. But those were quite the months — basically without sleep.
 
 
Well, you didn't yet know it was possible. They already do. So, in principle, for the people sitting behind the camera, somewhere behind the screen from us, it's probably a matter of a month to build on your experience and success and make an analogue that also delivers those two orders of magnitude. In other words, your competitors can catch up with you pretty quickly.
 
 
Yes, of course. Moreover, there are already companies offering similar services — in principle, it's already happening. We had the option of polishing our existing service, saying we're the most stable, the best, the very first, and so on. But we decided to go to the next stage. That is, we decided to go where there are no competitors. "Competition is for losers," as the Valley investor Peter Thiel put it. So we're trying to do things that don't exist yet. For example, there is no live dialogue yet. All these little details I keep describing serve one big task: creating that live dialogue.
 
 
Right now, in testing, we've built it so that the system recognizes these categories of questions — and the answers "yes" and "no" are, in fact, categories too. How does it work now? Previously you had to create a call script, and that was a fairly dull process, honestly. We provide some default, standard script, but every company is still different. One company addresses candidates with the informal "you", another with the formal one. One keeps an official tone, another doesn't. One says "hello", another "good day". And previously there was a fixed point in the script where the candidate could ask questions — she would prompt: "What questions do you have?" Now we want the candidate to be able to ask a question at any moment. She asks him, "Are you interested in the job?", he says, "Who are you?" — and she answers him.
 
 
And she answers.
 
 
Yes.
 
 
So it supports a real live dialogue.
 
 
Yes. And after she has answered, he says, "Yes, it's interesting," and she understands that this is the answer to her previous question. That's what we're working on now. It will certainly be a big leap in this kind of communication in our product, because the candidate won't just receive information about the vacancy — he'll simply communicate the way he's used to. And that, of course, will raise conversion a lot.
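The dialogue behaviour described in this exchange — treating "yes"/"no" as categories, answering a counter-question, then attaching a later "yes" to the still-pending question — might be sketched like this. All category names, the FAQ entries, and the matching rules below are invented for illustration; the real system classifies utterances with a trained model, not string lookups:

```python
FAQ = {  # hypothetical canned answers to candidate counter-questions
    "who are you": "I am Vera, a recruiting robot calling about a vacancy.",
}

def classify(utterance):
    """Toy classifier: 'yes'/'no' are categories, like candidate questions."""
    text = utterance.lower().strip("?!. ")
    if text in ("yes", "interesting", "yes, interesting"):
        return "answer_yes"
    if text in ("no", "not interested"):
        return "answer_no"
    if text in FAQ:
        return "question"
    return "unknown"

class Dialogue:
    def __init__(self):
        self.pending = None  # the robot's question awaiting an answer

    def ask(self, question):
        self.pending = question
        return question

    def hear(self, utterance):
        category = classify(utterance)
        if category == "question":
            # Answer the candidate, but keep the pending question open.
            return FAQ[utterance.lower().strip("?!. ")]
        if category in ("answer_yes", "answer_no") and self.pending:
            answered, self.pending = self.pending, None
            return f"Recorded {category} to: {answered}"
        return "Could you repeat that, please?"

d = Dialogue()
d.ask("Are you interested in the job?")
print(d.hear("Who are you?"))      # side question answered first...
print(d.hear("Yes, interesting"))  # ...then "yes" binds to the pending question
```

The key design point is the `pending` slot: the robot's open question survives the detour through the candidate's counter-question, which is exactly what turns a rigid script into something resembling live dialogue.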
 
 
 
 
Well, you're not the first. There are teams in Russia that have done something similar. Do you communicate with them, discuss things?
 
 
Yes, there are — both here and around the world. If we talk about such systems in general, where natural language is recognized and they try to build a live dialogue, then the really big task, of course, is to create a system that communicates like the Jarvis Mark Zuckerberg showed recently. But that's a huge story — the giants work on it: Facebook, Amazon, Google, Microsoft and so on. At the same time there are smaller startup companies tackling some narrowly specialized task. Technical support, for example: very often it's automated, very often people are replaced with bots using these technologies. There's a lot of progress there.
 
 
We certainly talk with the companies, with the guys who do this. We, for example, specialize in HR: we have our own field, our own subject area; we need our own texts for model training, certain specific patterns, our own categories. That's what we're doing here. There are companies doing the same. In the States there's Mia and a number of others. They work on this too, and in a narrowly specialized format it does somehow work out. Some reach 80% and maybe even more.
 
 
So it's segmented: you take a subset of communication types and lock the system into it. Thanks to that, the semantic model, constrained to a narrow class of semantics, probably operates more precisely. But look, again: what you're describing is of course nice and very interesting, a delicious thing to be proud of — "we have a robot that can talk." But does it have business value?
 
 
Yes — that's exactly our story: we periodically want to go off into segmented analysis or somewhere else, but we go from the business, from its requests. From what I started with: the business came and said, "We need the robot to start answering the candidate's questions, so that it's not one-sided, not a one-way game."
 
 
And the metric? You've implemented some of this. How do you measure the effect in business terms?
 
 
It's simple enough. We have a metric. We keep talking about calls, about making calls. There's a certain timeline to a recruiter's conversation with a candidate. For a typical managerial hire, a sales manager say, it's probably 10-15 minutes. The first part is, of course, the three questions we ask: "Are you looking for a job? Is it interesting to you? We have such-and-such conditions — will you come or not?" The last part takes a couple of minutes at most — that's scheduling the interview. Maybe even a minute: "Tomorrow at three." "I can't tomorrow." "Okay, the day after tomorrow." She looks at the calendar, writes it down. That's also a simple story. And in the middle sits all this communication.
 
 
And our metric is exactly about that. When we were hiring ourselves and running this test, we measured how much time our recruiters spent communicating with candidates — before this question-answering feature and after. It turned out we were able to cut their communication time in half: only the questions the robot hadn't covered were left for them. And that's not even counting that candidates used to ask things like "What kind of robot is this?" — which also wasted time. So yes, certainly. Our story is about reducing the time the recruiter spends; whatever functions we can take over to help him, we try to. So there is definitely business value there.
 
 
So the metric is not conversion but a reduction in the cost of a live person's time?
 
 
Yes, for now that's the metric we're fighting over. Conversion is the next stage. We track it too, of course. In fact, it hardly changes — we didn't see a big difference. For candidates who get answers to their questions it probably even drops a little in the end. There are slightly fewer of them, because some get answers that don't suit them.
 
 
Answers they don't like, so they drop off. That, by the way, is a separate topic we could also discuss, but probably not this time. Our meeting today is coming to an end, and you know, talking to you just now I remembered an episode from my childhood. Back in my day there were records printed in a magazine on plastic: a plastic page was inserted into the magazine, you had to cut it out, and that was the record. And sometimes they put absolutely enchanting, interesting things on them. There was, for example, a story about how scientists had built an artificial mechanical larynx. It spun somehow, contracted, and the recording was of a sound pronounced by this mechanical robot. Back then that wasn't just fantasy — it was something beyond. It even seemed it would never happen. And today we're discussing a situation where speech synthesis itself isn't even a topic of conversation. It simply exists and already works, and even recognition as such works, on the whole.
 
 
And we're discussing with you not the recognition of words but the recognition of the meaning of what's said — practically communication at the level of, well, some kind of living, acting mind. Probably a child of two or three years old; it's somewhere around there. So the question of reaching a real human conversation — that will probably happen within our lifetime?

 
 
Yes, I think so. I'm still an optimist about technology. We face plenty of difficulties, no doubt — those 20% of answers we don't recognize are still quite a lot. When we go through those 20%, we still don't understand how to add those percentage points. We don't even have ideas yet. And we understand that this is with the area being very narrow: it's only HR, only about a vacancy, only about that. And we're not even counting the questions — maybe another 10-15% — that are about things in general. Sometimes they ask her, "What's the weather in Moscow?" Those can be handled too, of course.
 
 
She should say: "Hold on, Alisa will call you back."
 
 
Exactly. When we look at these mistakes, we understand that the technologies are not yet developed enough to outright replace a person. And it's often presented very simply — I see articles claiming someone has been completely replaced. That certainly fuels a kind of super-hype in this area, but in fact, no.
 
 
Still a long way off.
 
 
Yes, it's still a long way off. Although within our lifetime — yes, of course, I think we'll get close to it. Probably not 100%, but very close. In general, though, it's a story about treating this technology as one that helps: simple tasks, simple problems are already being solved for people. But it doesn't replace people completely, of course.
 
 
Well then — perhaps we'll wait for the day when your system passes the Turing test, and then we'll call you again?
 
 
Yes, yes, of course.