About Me

I am a Ph.D. student in the Computer Science Department at Ben-Gurion University (BGU), and a member of the Interdisciplinary Computational Vision Laboratory (iCVL) led by Prof. Ohad Ben-Shahar. I also coordinate our weekly seminar, so if you wish to present your work, please let me know.

I conduct research in computer vision, focusing on object detection in 2D and 3D: detection with and without context, pose estimation, symmetry detection, and how these elements complement each other.


Please check out the research tab.

Contact Information

Ehud Barnea
Ben-Gurion University of the Negev,
Beer-Sheva, Israel.
Office: Alon building, Vision Lab (37/-103)


Contextual object detection

The incorporation of context has been shown time and again to improve detection results when an object's local appearance is ambiguous. While many kinds of context can be employed, we focus on the spatial relations between objects, which can be highly predictive in many cases. In this line of research we study the high-order relations between different objects with the aim of developing stronger models of context.

Barnea E. and Ben-Shahar O., High-Order Contextual Object Detection with a Few Relevant Neighbors (paper in review) (presentation).

3D object detection exploiting symmetry

Knowing the poses of objects before their detection or classification has been shown to improve the results of object detectors. However, robust and fine estimation of object pose is still challenging. To address this, we propose employing the mirror symmetry of objects, which provides part of the pose information. In our ECCV14 paper we show how the symmetry of objects in 3D can be robustly detected (providing fine but partial pose information) and used to construct a partially pose-invariant representation of objects' shape, allowing state-of-the-art object detection.

Barnea E. and Ben-Shahar O., Depth Based Object Detection from Partial Pose Estimation of Symmetric Objects, ECCV 2014 (pdf).

A fully automatic fruit harvesting robot (cRops)

In the European Union's “cRops” project, we seek to automate the fruit harvesting process with a robot that does everything from the visual detection of fruit to precision picking with specially designed grippers. This robot is the result of a collaboration between several research groups. Our lab (BGU iCVL) was responsible for the visual detection of fruit, and within it I developed methods for the 3D detection of fruit and acted as group liaison. For more information about the project, visit the official website.

Barnea E., Mairon R., and Ben-Shahar O., Colour-agnostic shape-based 3D fruit detection for crop harvesting robots, Biosystems Engineering 2016 (pdf).
Kapach K., Barnea E.*, Mairon R.*, Edan Y., and Ben-Shahar O., Computer vision for fruit harvesting robots – state of the art and challenges ahead, IJCVR 2012 (pdf).


Anki Furigana Hint

If you are learning Japanese with the Anki SRS and wish to configure a hint field that adds furigana to the currently visible kanji, check out this page.

Group Seminar

Our group holds weekly seminars on the subject of computational vision and cognition. Most of our speakers are graduate students and professors presenting their latest research, and more general subjects are presented as well. If you have interesting work to show, or you wish to be added to the newsletter, please contact me.

Upcoming Lectures:


Configuring An Anki Hint To Display Furigana

An Anki hint is like a link that reveals additional information on a card when clicked. This feature is quite convenient, but it does not allow you to change data that is already displayed (it also doesn't support something like {{hint::furigana::Japanese}}, which forces us to include a separate hiragana field; see the sad-face example below). However, as a Japanese learner I did want to change the displayed data: when I read text with kanji and furigana on top, I always seem to disregard the kanji and read the furigana first. To solve this I initially displayed only the kanji and added a hint to show the hiragana reading, but this isn't elegant; what I would really like is to display the furigana above the kanji. This turns out to be possible using links (see the smiley-face example at the bottom of the page).
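For comparison, the standard approach (the sad-face option shown below) uses Anki's built-in {{hint::...}} filter with a separate reading field. A minimal front template might look like the following; the "Hiragana" field name is illustrative, and it assumes "Japanese" holds the plain kanji text while "Hiragana" holds the reading:

Front template (built-in hint field):
<span style="font-family: Mincho; font-size: 50px; ">{{Japanese}}</span>
{{hint::Hiragana}}

Clicking the hint reveals the reading below the kanji rather than above it, which is exactly the limitation discussed here.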

The current option using a hint field. Translation by EDICT.

Since Anki allows links to be added to cards, we can create a link that behaves like a hint. It seems the displayed text itself cannot be altered, so we configure this link to perform two actions on click: hide the visible text (kanji only) and then display the hidden text (kanji with furigana). To do so with a deck containing two fields named "English" and "Japanese" (where "Japanese" contains text like "木[こ] 漏[も]れ 日[び]"), you may simply copy the front and back templates below:

Front template:
<span id="before" style="font-family: Mincho; font-size: 50px; ">
<a href="#"
onclick="document.getElementById('after').style.display='block';document.getElementById('before').style.display='none';return false;">
<!-- the kanji-only text acts as the clickable hint; {{kanji:...}} is a filter from the Japanese Support add-on -->
{{kanji:Japanese}}
</a>
</span>
<span id="after" style="font-family: Mincho; font-size: 50px; display: none">
<!-- kanji with furigana on top, revealed on click; {{furigana:...}} is also from the add-on -->
{{furigana:Japanese}}
</span>
Back template:
<span style="font-family: Mincho; font-size: 50px; ">
{{furigana:Japanese}}
</span>
<hr id=answer>
{{English}}

As you can see, the front template is made of two "span" tags: one (with id="before") containing the link (our hint), and another (with id="after"). The "after" tag has the property "display: none", which means it starts off hidden (while the "before" tag lacks this property and is visible). The link's action (specified in the onclick attribute) does two things: it sets the display value of the "after" tag to "block" (rendering it visible), and it sets that of the "before" tag to "none" (rendering it hidden). The link disappears as well (as it is inside the "before" tag), and we are left only with the kanji and the furigana on top. The back template had to be updated as well, so that it won't show the link and the rest of the machinery from the front template.
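To see why this works, the toggle logic of the onclick handler can be sketched in plain JavaScript, with a small mock object standing in for the two spans (the mock and the function name are illustrative, not part of the templates):

```javascript
// A stand-in for the two spans, so the toggle logic can be checked
// outside Anki (ids and initial styles mirror the templates).
const elements = {
  before: { style: { display: 'block' } }, // kanji-only span, initially visible
  after:  { style: { display: 'none' } },  // furigana span, initially hidden
};
const getElementById = (id) => elements[id];

// The link's onclick handler from the front template:
function onHintClick() {
  getElementById('after').style.display = 'block'; // reveal kanji + furigana
  getElementById('before').style.display = 'none'; // hide kanji-only text (and the link itself)
  return false; // stop the browser from following href="#"
}

onHintClick();
console.log(elements.before.style.display); // "none"
console.log(elements.after.style.display);  // "block"
```

Returning false from the handler is what prevents the page from jumping to the "#" anchor when the link is clicked.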

If you know of a better way to do so, please let me know.

The hint adds furigana to the kanji. Translation by EDICT.