Showing 1–4 of 4 results for author: Tabibian, B

  1. arXiv:1712.01856 [pdf, other] stat.ML

    Optimizing Human Learning

    Authors: Behzad Tabibian, Utkarsh Upadhyay, Abir De, Ali Zarezade, Bernhard Schoelkopf, Manuel Gomez-Rodriguez

    Abstract: Spaced repetition is a technique for efficient memorization which uses repeated, spaced review of content to improve long-term retention. Can we find the optimal reviewing schedule to maximize the benefits of spaced repetition? In this paper, we introduce a novel, flexible representation of spaced repetition using the framework of marked temporal point processes and then address the above question…

    Submitted 10 March, 2018; v1 submitted 5 December, 2017; originally announced December 2017.
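
    Illustration (not from the paper): the scheduling question above can be made concrete with a toy memory model. The sketch below assumes an exponential forgetting curve whose rate drops with each review; the function name, rate values, and schedules are illustrative assumptions, not the paper's marked temporal point process formulation.

```python
# Minimal illustrative sketch (not the paper's model): recall under an
# exponential forgetting curve, where each review lowers the forgetting
# rate, so the same number of reviews spread out over time leaves the
# memory fresher at a later test.
import math

def recall_probability(review_times, test_time, initial_rate=1.0, decay=0.5):
    """Probability of recalling an item at test_time.

    review_times : sorted times (in days) at which the item was reviewed.
    initial_rate : forgetting rate before any review (assumed value).
    decay        : multiplicative reduction of the rate per review (assumed).
    """
    rate = initial_rate
    last = 0.0
    for t in review_times:
        if t > test_time:
            break
        rate *= decay          # each review strengthens the memory
        last = t
    return math.exp(-rate * (test_time - last))

if __name__ == "__main__":
    massed = [0.0, 0.1, 0.2, 0.3]   # four reviews crammed into one morning
    spaced = [0.0, 1.0, 3.0, 7.0]   # four reviews spread over a week
    for name, schedule in [("massed", massed), ("spaced", spaced)]:
        print(name, round(recall_probability(schedule, test_time=14.0), 3))
```

    In this toy model the spaced schedule yields the higher recall probability at the day-14 test, which is the intuition behind searching for an optimal reviewing schedule.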

  2. arXiv:1711.09918 [pdf, other] cs.SI

    Leveraging the Crowd to Detect and Reduce the Spread of Fake News and Misinformation

    Authors: Jooyeon Kim, Behzad Tabibian, Alice Oh, Bernhard Schoelkopf, Manuel Gomez-Rodriguez

    Abstract: Online social networking sites are experimenting with the following crowd-powered procedure to reduce the spread of fake news and misinformation: whenever a user is exposed to a story through her feed, she can flag the story as misinformation and, if the story receives enough flags, it is sent to a trusted third party for fact checking. If this party identifies the story as misinformation, it is m…

    Submitted 27 November, 2017; originally announced November 2017.

    Comments: To appear at the 11th ACM International Conference on Web Search and Data Mining (WSDM 2018)
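
    Illustration (not from the paper): the crowd-powered procedure summarized in the abstract can be sketched as an exposure/flag/threshold loop. The class name, threshold, and flagging probability below are assumptions for illustration only.

```python
# Minimal sketch of the crowd-flagging procedure described in the abstract:
# each exposed user may flag a story; once the flag count reaches a fixed
# threshold (an assumed value), the story is sent to a trusted fact checker.
import random

class Story:
    def __init__(self, story_id, flag_threshold=5):
        self.story_id = story_id
        self.flag_threshold = flag_threshold
        self.flags = 0
        self.sent_for_fact_check = False

    def expose(self, user_flags_probability):
        """One user sees the story in her feed and flags it with some probability."""
        if self.sent_for_fact_check:
            return
        if random.random() < user_flags_probability:
            self.flags += 1
        if self.flags >= self.flag_threshold:
            self.sent_for_fact_check = True
            print(f"Story {self.story_id}: sent to fact checker "
                  f"after {self.flags} flags")

if __name__ == "__main__":
    random.seed(0)
    story = Story("example-story")
    for _ in range(200):               # 200 users exposed via their feeds
        story.expose(user_flags_probability=0.05)
```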

  3. arXiv:1708.09794 [pdf, other] cs.DL

    Design and Analysis of the NIPS 2016 Review Process

    Authors: Nihar B. Shah, Behzad Tabibian, Krikamol Muandet, Isabelle Guyon, Ulrike von Luxburg

    Abstract: Neural Information Processing Systems (NIPS) is a top-tier annual conference in machine learning. The 2016 edition of the conference comprised more than 2,400 paper submissions, 3,000 reviewers, and 8,000 attendees. This represents a growth of nearly 40% in terms of submissions, 96% in terms of reviewers, and over 100% in terms of attendees as compared to the previous year. The massive scale as we…

    Submitted 23 April, 2018; v1 submitted 31 August, 2017; originally announced August 2017.

  4. Distilling Information Reliability and Source Trustworthiness from Digital Traces

    Authors: Behzad Tabibian, Isabel Valera, Mehrdad Farajtabar, Le Song, Bernhard Schölkopf, Manuel Gomez-Rodriguez

    Abstract: Online knowledge repositories typically rely on their users or dedicated editors to evaluate the reliability of their content. These evaluations can be viewed as noisy measurements of both information reliability and information source trustworthiness. Can we leverage these noisy evaluations, often biased, to distill a robust, unbiased and interpretable measure of both notions? In this paper, we…

    Submitted 2 April, 2017; v1 submitted 24 October, 2016; originally announced October 2016.

    Comments: Accepted at the 26th World Wide Web Conference (WWW-17)
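
    Illustration (not from the paper): one generic way to pose the distillation question in the abstract is an alternating, truth-discovery style estimate of item reliability and evaluator trustworthiness from noisy binary evaluations. The toy data, names, and update rules below are assumptions for illustration only, not the authors' method.

```python
# Generic illustration of the distillation problem posed in the abstract
# (not the paper's method): jointly estimate item reliability and evaluator
# trustworthiness from noisy 0/1 evaluations by alternating weighted
# averaging, in the spirit of simple truth-discovery schemes.
from collections import defaultdict

# evaluations[(evaluator, item)] = 1 if the evaluator marked the item reliable
evaluations = {
    ("alice", "claim1"): 1, ("alice", "claim2"): 0,
    ("bob",   "claim1"): 1, ("bob",   "claim2"): 1,
    ("carol", "claim1"): 0, ("carol", "claim2"): 0,
}

def distill(evaluations, iterations=20):
    trust = defaultdict(lambda: 0.5)        # evaluator trustworthiness
    reliability = defaultdict(lambda: 0.5)  # item reliability
    for _ in range(iterations):
        # Re-estimate each item's reliability as a trust-weighted vote.
        for item in {i for _, i in evaluations}:
            votes = [(trust[e], v) for (e, i), v in evaluations.items() if i == item]
            total = sum(w for w, _ in votes)
            reliability[item] = sum(w * v for w, v in votes) / total if total else 0.5
        # Re-estimate each evaluator's trust as agreement with current reliability.
        for evaluator in {e for e, _ in evaluations}:
            pairs = [(reliability[i], v) for (e, i), v in evaluations.items() if e == evaluator]
            trust[evaluator] = sum(1 - abs(r - v) for r, v in pairs) / len(pairs)
    return dict(trust), dict(reliability)

if __name__ == "__main__":
    trust, reliability = distill(evaluations)
    print("trust:", {k: round(v, 2) for k, v in trust.items()})
    print("reliability:", {k: round(v, 2) for k, v in reliability.items()})
```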