07 March 2014

Face detection is not only for serious stuff

Why can't technology be used just for fun? Of course it can! And it is.
Perhaps you have already heard of the interactive Sing it kitty commercial? If not, give it a try.
Under the hood it uses the SkyBiometry face detection service to find a person's face in a photo, along with the necessary feature points (we return 68 of them!), which are later used to align the face with the video and synchronize it with the song. Check out the Stinkdigital page for more details.
If the commercial page is not available in your country or region, check out some of the YouTube videos that have been posted.
Do you have an interesting or just plain fun project using our technology that you want to share with the world? Do not hesitate to contact us!

01 October 2013

Happy? Surprised? Emotion recognition available now!

Emotion (sentiment or mood) recognition was the top-voted feature on our user voice site. That should come as no surprise, as analysis of a person's mood is a killer feature for marketing or just plain fun applications. And we are happy to announce that it is now available to all our customers!
Currently, for each face in a photo we return one of the following values (along with a confidence) for the mood attribute: neutral, surprised, happy, sad, disgusted, scared and angry. If you wish to perform some custom emotion analysis, like "how happy does the person look in the photo?", we also return a confidence for each of the basic emotions separately.
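As a quick illustration, here is a minimal C# sketch that reads the mood attribute from a faces/detect response over plain HTTP; the endpoint URL, the attributes=all parameter and the exact JSON field names are assumptions made for the example, so please consult our documentation for the authoritative request and response format:

using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

class MoodDemo
{
    static async Task Main()
    {
        using var http = new HttpClient();

        // Illustrative request; substitute your own credentials and photo URL.
        var url = "https://api.skybiometry.com/fc/faces/detect.json" +
                  "?api_key=KEY&api_secret=SECRET" +
                  "&urls=http://example.com/photo.jpg&attributes=all";

        using var doc = JsonDocument.Parse(await http.GetStringAsync(url));

        foreach (var photo in doc.RootElement.GetProperty("photos").EnumerateArray())
            foreach (var tag in photo.GetProperty("tags").EnumerateArray())
            {
                // mood carries the most likely emotion and its confidence (0-100%)
                var mood = tag.GetProperty("attributes").GetProperty("mood");
                Console.WriteLine($"{mood.GetProperty("value").GetString()} " +
                                  $"({mood.GetProperty("confidence").GetInt32()}%)");
            }
    }
}
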
You can check out the functionality in our demo right now. And as always your feedback is more than welcome.

25 July 2013

More attributes, more points - more value

Starting this week, our Face Detection and Recognition service supports two more attributes: lips and eyes. These attributes enable even more scenarios, such as checking that a person's eyes are open or that the mouth is shut when filtering submitted photos (see the sketch after the table below). The quality of existing attribute determination has also been significantly improved. All attribute values are returned along with a confidence in the range 0-100%. If an attribute value cannot be reliably determined, it is not returned at all, as we do not want to confuse you with noisy results.
All currently supported attributes, along with their possible values, are summarized in the table below:

Attribute      Values
gender         male, female
glasses        true, false
dark_glasses   true, false
smiling        true, false
lips           sealed, parted
eyes           open, closed

And more are coming in the future. Please use our user voice page to influence which ones come first!
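To illustrate the photo filtering scenario mentioned above, here is a minimal sketch of an eyes-open check; it assumes attribute objects carry value and confidence fields as in our other JSON responses, so treat the field names as illustrative rather than authoritative:

using System.Text.Json;

static class PhotoFilter
{
    // Returns true only when the eyes attribute was returned and is "open"
    // with reasonable confidence; faces with no reliable value are rejected.
    public static bool HasOpenEyes(JsonElement tag, int minConfidence = 50)
    {
        if (!tag.TryGetProperty("attributes", out var attrs) ||
            !attrs.TryGetProperty("eyes", out var eyes))
            return false; // attribute omitted: could not be reliably determined

        return eyes.GetProperty("value").GetString() == "open" &&
               eyes.GetProperty("confidence").GetInt32() >= minConfidence;
    }
}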

Another addition to the service is the detect_all_feature_points parameter. If it is set to true when calling faces/detect or another method, the response will contain up to 68 points in addition to the always-returned left eye center, right eye center, nose tip and mouth center points. Each point has an id and a confidence specified in the range 0-100%. For the additional points, the id has the following format: 0x03NN, where NN is the point number. Each number is linked to a specific point on the face, as you can see in our demo. Just check the "Detect all feature points" checkbox before clicking Submit or one of the pictures.
If you are using our C# wrapper, you specify the parameter in the additionalArgs argument:

FCResult result = await client.Faces.DetectAsync(
    new string[] { url }, null,
    Detector.Default, Attributes.Default,
    // additionalArgs: extra parameters passed through to faces/detect
    new KeyValuePair<string, object>[] {
        new KeyValuePair<string, object>(
            "detect_all_feature_points", true) });

We have also greatly updated our documentation to include information about the returned response object fields and possible error codes.

12 June 2013

Congratulations to the hackathon winners

We would like to dedicate this blog entry to the new members of our growing community of developers who use computer vision in their products. Thanks to the cloud API hub Mashape, we had an opportunity to open our face detection and recognition API for free usage during two recent hackathons: AngelhackNYC and APIDays Mediterranea.

AngelhackNYC winner

During AngelhackNYC, the “OrderCandy” team (Vache Asatryan, Pasha Riger, Antonio Pellegrino and others) built an ad system that used our face detection API to start interacting with a person automatically when they are detected in front of the ad screen. A short welcome message is followed by an offer to place an order from the local coffee shop menu. The order is made using hand gestures on a LeapMotion. Our API is also used to capture a picture of the client's face so the order can be handed out to the right person. Additionally, expression recognition is used to unlock an “Easter egg” (discounts, etc.) with a smile.

APIDays Mediterranea winner

During APIDays Mediterranea, the “LoveHere” team (Javier Abadia, Jaime Nieves Martinez, and Francesc Puigvert Pell) built, in just 7 hours, a dating platform that allows users to connect with nearby people without compromising personal information. After taking a picture of themselves, the user is registered in the system. The photo is analyzed to estimate age and gender (that’s where our API was really useful). The app also uses the ArcGIS Online hosted map service to store the user’s location and find other nearby users. Additionally, the user can make phone calls to nearby potential dates without revealing their personal phone number. Another great API, Twilio, was used for that. Check out the demo video here.

We wish many fun and productive days to the teams, Mashape, and all of you.

23 April 2013

Face recognition in action

Last week we rolled out some updates to our face recognition functionality. First of all, the demo on our website was updated with a recognition page.
There you can pick two photos from the provided set, or use your own images, to see the recognition results between them. By default, the two most similar faces from the photos are selected, but you can click or tap around to see the similarity score of one face (marked with a dotted rectangle) with all the others.

Faces/group improvements

The demo uses new functionality we added to the faces/group method: a similarity matrix. The method now takes an optional return_similarities parameter; if it has a value of true, then for each detected face tag in the response we return similarity scores with all other tags in all photos in the request (except for zero scores, as we want to keep the response size reasonably small). With this new functionality you now have 3 options for grouping people in photos: fully automatic; automatic with your own threshold value for grouping (instead of the default value of 70); and manual, by examining the similarity matrix and adding your own logic on top for the greatest flexibility. The third option also enables the image verification scenario, when you need an answer to a question like "How similar are the two persons in these two photos?".
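For the manual option, a verification check can be a few lines of code. In this sketch the similarities field name and its tag-id-to-score layout are assumptions, so check the faces/group documentation for the exact response shape:

using System.Text.Json;

static class Verification
{
    // Answers "how similar are these two faces?" from a faces/group response
    // requested with return_similarities=true. Returns 0 when the pair was
    // omitted (zero scores are left out to keep the response size small).
    public static int Similarity(JsonElement tag, string otherTagId)
    {
        return tag.TryGetProperty("similarities", out var sims) &&
               sims.TryGetProperty(otherTagId, out var score)
            ? score.GetInt32()
            : 0;
    }
}

A score above your chosen threshold (70 is the default used for automatic grouping) would then count as a positive verification.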
Faces/group works great when there is a small, finite set of photos to compare. But if you need to compare a person in a photo against a database of two million people, or group people across several hundred or thousands of photos, then faces/recognize is your choice.

The recognition workflow

The recognition scenario has a slightly more complicated workflow. People have to be enrolled into a database first: their faces have to be saved by means of the tags/save method, and the user has to be prepared for recognition using the faces/train method. Additional face impressions can be added, or some of them removed, later using the tags/save or tags/remove methods; faces/train then has to be re-run on the user to update their information for recognition. Note that the more impressions of a user you add (with different head rotations, expressions, lighting conditions, with and without glasses, a beard, etc.), the more likely they are to be reliably recognized later. Also note that if you want to remove a user from the database, you have to remove all of the user's tags from the system (call tags/remove with the tag list obtained from tags/get for the user) and then call faces/train for the same user to remove their information from the recognition system.
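A rough sketch of that enrollment loop over plain HTTP might look as follows; the endpoint URL and the tids and uids parameter names are assumptions for the example, so please refer to the API documentation:

using System.Net.Http;
using System.Threading.Tasks;

static class Enrollment
{
    const string Api = "https://api.skybiometry.com/fc";
    const string Auth = "api_key=KEY&api_secret=SECRET"; // your credentials here

    // Attaches detected tag ids to a user, then (re)trains that user.
    // Re-run the train call after every tags/save or tags/remove for the user.
    public static async Task EnrollAsync(HttpClient http, string tagIds, string userId)
    {
        await http.GetStringAsync($"{Api}/tags/save.json?{Auth}&tids={tagIds}&uid={userId}");
        await http.GetStringAsync($"{Api}/faces/train.json?{Auth}&uids={userId}");
    }
}
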
Recognition results for each tag in the response from the faces/recognize method contain a list of value pairs: a user id the face was matched with and a confidence value for the match (a similarity score). You should compare the confidence with the tag's threshold value to decide whether the match is reliable. The threshold value depends on the quality of the face itself and on the size of the database it was matched against.
That is the second improvement we have rolled out: a dynamic tag threshold value, along with a more natural (for a human) matching similarity score normalization. You may notice that the confidence values you receive in the response now feel (and are) more accurate.
Because faces/recognize (and faces/group, if requested) can be used to match a person against very large databases, a lot of matched users could potentially be returned. You can now use the limit parameter to return at most the specified number of top matching results. If it is not specified, a value of 100 is used.
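Putting those pieces together, reliable matches can be filtered like this (the uids and threshold field names are assumptions based on the conventions used in our other sketches):

using System.Collections.Generic;
using System.Text.Json;

static class Recognition
{
    // Yields only the matches whose confidence clears the tag's dynamic threshold.
    public static IEnumerable<(string Uid, int Confidence)> ReliableMatches(JsonElement tag)
    {
        int threshold = tag.GetProperty("threshold").GetInt32();
        foreach (var match in tag.GetProperty("uids").EnumerateArray())
        {
            int confidence = match.GetProperty("confidence").GetInt32();
            if (confidence >= threshold)
                yield return (match.GetProperty("uid").GetString()!, confidence);
        }
    }
}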

The meaning of label

We see from the support requests that there is a bit of misunderstanding about the label field value in the response. The label returned with a tag in the results is simply the label (if any) that was used during tags/save or tags/add for the same tag in the same photo. It is not a recognition result: it is the same label faces/detect may return for that tag in that photo. It is saved for the tag, not for the user. If you need to store some additional information for a user and retrieve it along with recognition results, you have to store it in an information system of your own and pull it from there after obtaining the recognized user ids.
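As a purely illustrative sketch (the names and ids here are hypothetical), such a lookup can be as simple as a dictionary keyed by the user ids you enrolled with:

using System;
using System.Collections.Generic;

static class UserDirectory
{
    // Your own store, keyed by enrolled user ids; the service itself
    // keeps no per-user metadata beyond the id.
    static readonly Dictionary<string, string> Info = new()
    {
        ["john@myapp"] = "John Smith, customer #1024", // example entry
    };

    static void PrintMatch(string recognizedUid)
    {
        if (Info.TryGetValue(recognizedUid, out var info))
            Console.WriteLine(info);
    }
}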

18 December 2012

Smile and glasses: the new attributes

Today we introduce a welcome addition to the detected attributes family in our Face Detection and Recognition API: smiling and glasses. These attributes enable new scenarios for automatic photo processing, such as selecting photos with smiling faces, filtering out photos with sunglasses during upload, offering glasses only to customers who already wear them, and many more.
When you ask for the glasses attribute we actually return two: glasses and dark_glasses. The second one additionally allows you to differentiate between clear and dark glasses.
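For instance, a minimal upload filter for the sunglasses scenario might look like this; it reuses the assumed JSON layout from our earlier sketches, with boolean attribute values serialized as the strings "true" and "false":

using System.Text.Json;

static class UploadFilter
{
    // Rejects a face when dark glasses were confidently detected.
    public static bool WearsSunglasses(JsonElement tag, int minConfidence = 50)
    {
        return tag.TryGetProperty("attributes", out var attrs) &&
               attrs.TryGetProperty("dark_glasses", out var dark) &&
               dark.GetProperty("value").GetString() == "true" &&
               dark.GetProperty("confidence").GetInt32() >= minConfidence;
    }
}
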
Check out the demo and see attribute detection in action! The algorithms behind these attributes are quite new, and your feedback on them would be very welcome.
If you are using our NuGet package, please update it to the latest version.
If you think that other facial attributes could add value to your applications, let us know.

12 December 2012

Paid subscription plans are available

After we released our Face Detection and Recognition API about two months ago, we received a lot of feedback on the service. The most popular requests were: "How can we perform more than 5000 API calls per month?", "How can we pay for additional API usage?", "Are there any paid subscription plans available?" and so on. We understand that you need to use our service in a production environment, and the free plan's feature set may be insufficient for your application. So today we are proud to present our paid subscription plans:

                           P2               P1               FREE
Subscription               €100/mo          €50/mo           €0/mo
Calls per month            100000           40000            5000
Each additional call       €0.01            €0.0125          -
Calls per day              1-100000         1-40000          5000
Calls per hour             1-100000         1-40000          100
Trained tags per account   1000             1000             1000
Support                    2 business days  2 business days  -

You can upgrade to a higher-level subscription plan at any time as your application grows. And if your application's needs outgrow our existing subscription plans, feel free to contact us.
Regardless of the availability of the paid plans, the free one is not going away. We plan to continue providing it for the foreseeable future for evaluation purposes and small-scale applications.