Recognition of eco-labels using Azure Custom Vision from the mobile application
In this article, I want to talk about using the Custom Vision service to recognize photos of eco-labels from a mobile application.
The Custom Vision service is part of the cloud-based Cognitive Services on the Azure platform.
Below I describe which technologies had to be learned, how to work with Custom Vision, what it is, and what it allows you to achieve.
The training results can be viewed both for all classes at once and for each class separately.
This is how the project looks with the images already uploaded and run through training.
On the portal, you can test recognition of a single image, loaded from disk or by URL, using Quick Test.
On the Predictions tab, you can see the results of all recent recognitions: the probability for each tag is displayed directly on the picture.
Reviewing all the recognition results and adding them to the training set takes literally a couple of mouse clicks and can be done by anyone, without any knowledge of AI or programming.
Using the API
The Custom Vision service has a very simple and intuitive REST API for training and prediction.
Our application uses only the prediction API, so that is what I will describe.
The URL for prediction looks like this:
https://southcentralus.api.cognitive.microsoft.com/customvision/v2.0/Prediction/{Your project GUID}/image
where
southcentralus - the name of the Azure region where the service is hosted. At the moment, the service is available only in the South Central US region. This does not mean you can only use it there! The service simply lives in that region; you can call it from anywhere with Internet access.
{Your project GUID} - the identifier of your project. You can see it on the customvision.ai portal.
For recognition, you need to send the image via POST. You can also send the URL of a publicly accessible image, and the service will download it itself.
In addition, you need to add a "Prediction-Key" header containing one of the access keys issued during registration; they are available both on the customvision.ai portal and on the Azure portal.
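For the URL variant, a minimal sketch (assuming the /url endpoint that parallels /image, with the image link passed as a JSON Url property; the example.com address is hypothetical):
// Requires: System.Net.Http, System.Text
// Sketch: recognize a publicly accessible image by URL instead of uploading bytes.
var client = new HttpClient();
client.DefaultRequestHeaders.Add("Prediction-Key", "{Access key}");
string url = "https://southcentralus.api.cognitive.microsoft.com/customvision/v2.0/Prediction/{Your project GUID}/url";
// The request body carries the image link as JSON.
var body = new StringContent("{\"Url\":\"https://example.com/eco-label.jpg\"}", Encoding.UTF8, "application/json");
HttpResponseMessage response = await client.PostAsync(url, body);
The response has the same shape as for the /image endpoint described below.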
The response contains a predictions field:
"predictions":[
{"tagId":"35ac2ad0-e3ef-4e60-b81f-052a1057a1ca","tagName":"dog","probability":0.102716163},
{"tagId":"28e1a872-3776-434c-8cf0-b612dd1a953c","tagName":"cat","probability":0.02037274}
]
where probability indicates the probability that the image belongs to the specified tag (class).
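By the way, instead of dynamic deserialization (used in the code below), the response can be mapped to typed models. This is my own sketch, with illustrative class names, matching the JSON field names shown above:
// Requires: Newtonsoft.Json, System.Collections.Generic
// Sketch of typed response models; class names are illustrative, not from the original project.
public class PredictionModel
{
    [JsonProperty("tagId")] public string TagId { get; set; }
    [JsonProperty("tagName")] public string TagName { get; set; }
    [JsonProperty("probability")] public double Probability { get; set; }
}

public class PredictionResponse
{
    [JsonProperty("predictions")] public List<PredictionModel> Predictions { get; set; }
}

// Usage: var parsed = JsonConvert.DeserializeObject<PredictionResponse>(strRes);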
In C#, it looks like this:
// Requires: System.Net.Http, System.Net.Http.Headers, System.Collections.Generic, System.Diagnostics, Newtonsoft.Json
var client = new HttpClient();
client.DefaultRequestHeaders.Add("Prediction-Key", "{Access key}");
string url = "https://southcentralus.api.cognitive.microsoft.com/customvision/v2.0/Prediction/{Your project GUID}/image";
HttpResponseMessage response;
List<RecognitionResult> recognitions = new List<RecognitionResult>();
using (var content = new ByteArrayContent(imageBytes))
{
    // The image is sent as a raw byte stream.
    content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
    response = await client.PostAsync(url, content);
    if (response.IsSuccessStatusCode)
    {
        string strRes = await response.Content.ReadAsStringAsync();
        dynamic res = JsonConvert.DeserializeObject(strRes);
        // Collect the tag name and probability of every prediction.
        foreach (var pr in res.predictions)
        {
            recognitions.Add(
                new RecognitionResult()
                { Tag = pr.tagName, RecognPercent = pr.probability });
        }
    }
    else
    {
        Debug.WriteLine("Non successful response. " + response.ToString());
    }
}
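The RecognitionResult class used above is just a simple container; the original code does not show its definition, so here is a minimal sketch consistent with the usage:
// Sketch: minimal container matching the usage above (definition assumed, not from the original project).
public class RecognitionResult
{
    public string Tag { get; set; }           // tag (class) name returned by the service
    public double RecognPercent { get; set; } // probability of belonging to the tag
}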
As you can see, there is nothing complicated: all the magic happens on the service side.
The application and some selected parameters
The application is fairly simple and consists of a list of eco-labels, information about what eco-labels are and how they are classified, and the scanner itself.
The main part is written in Xamarin.Forms, but the scanner window works with the camera, so it had to be implemented as custom renderers, separately for each platform.
The threshold at which the application decides that an eco-label has been recognized is >= 90%; with it, almost all images are recognized correctly, as long as they are of more or less acceptable quality and there are no other symbols in the picture.
This number was derived empirically: we started with a lower threshold but realized that 90 reduces false positives. And those happen quite often: many labels are similar, contain similar elements, and have a color palette shifted toward green.
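In code, applying this threshold can be as simple as the following sketch (using the recognitions list from the example above; the 0.90 value is the threshold just mentioned):
// Requires: System.Linq, System.Diagnostics
// Take the most probable prediction and accept it only above the 90% threshold.
var best = recognitions.OrderByDescending(r => r.RecognPercent).FirstOrDefault();
if (best != null && best.RecognPercent >= 0.90)
{
    Debug.WriteLine($"Recognized eco-label: {best.Tag} ({best.RecognPercent:P0})");
}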
For example, this far-from-perfect image is recognized correctly with a probability of 91%, even though the class was trained on only 45 images.
I hope the article was useful and will encourage interested readers to take a look at these new AI and ML tools.