From alejandro.bellogin at uam.es Mon Jul 14 09:39:12 2014
From: alejandro.bellogin at uam.es (Alejandro Bellogin Kouki)
Date: Mon, 14 Jul 2014 15:39:12 +0200
Subject: [Sigkm-l] Final CFP: REDD 2014 - ACM RecSys Workshop on Recommender Systems Evaluation: Dimensions and Design
Message-ID: <53C3DD80.2040003@uam.es>

------------------ Final Call for Papers - REDD 2014 ------------------
International ACM RecSys Workshop on Recommender Systems Evaluation: Dimensions and Design - REDD 2014
Foster City, Silicon Valley, CA, USA, October 2014
http://ir.ii.uam.es/redd2014
---------------------------------------------------------------------
* Submission deadline: 21 July 2014 *

Scope
-----
Evaluation is a cardinal issue in recommender systems. As in almost any other technical discipline, it highlights to a large extent the problems that the field needs to solve and, hence, leads the way for algorithmic research and development in the community. Yet in recommender systems there is still considerable disparity in evaluation methods, metrics, and experimental designs, as well as a significant mismatch between evaluation methods in the lab and what constitutes an effective recommendation for real users and businesses. This workshop aims to provide an informal forum to tackle such issues and to move towards better understood and commonly agreed evaluation methodologies, allowing the efforts and workforce of the academic community to be channeled in directions that are meaningful and relevant to real-world developments.

REDD 2014 places a specific focus, on the one hand, on the identification and measurement of recommendation quality dimensions that go beyond the monolithic concept of simply matching user preferences. Novelty and diversity, for instance, have been recognized as key components of the utility of recommendations for users in real-world scenarios, with a direct positive effect on business performance. From the business perspective, performance metrics related to sales, revenue, and user engagement along the recommendation funnel should also be used. Additionally, from an engineering point of view, aspects such as efficiency, scalability, robustness, and user interface design are typically major concerns, often prioritized over the effectiveness of the internal algorithms at the core of the system. On the other hand, once a relevant target quality has been defined, a clear evaluation protocol should be specified in detail and agreed upon, allowing different authors to compare, replicate, and reproduce results and experiments, and enabling incremental contributions.

REDD 2014 aims to gather researchers and practitioners interested in better understanding the unmet needs of the field in terms of evaluation methodologies and experimental practices. The main goal of this workshop is to provide an informal setting for discussing and exchanging ideas, experiences, and viewpoints. REDD seeks to identify and better understand the current gaps in recommender system evaluation methodologies, help lay out directions for progress in addressing them, and foster the consolidation and convergence of experimental methods and practices.
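As an illustration of the quality dimensions in scope, the following minimal Python sketch computes two of the measures commonly used offline: popularity-based novelty (mean self-information of the recommended items) and intra-list diversity (mean pairwise cosine dissimilarity). The toy interaction data, the item feature vectors, and all function names are assumptions made for this example; they are not part of the call or any prescribed metric definition.

    import math
    from collections import Counter
    from itertools import combinations

    def novelty(recommended, interactions, num_users):
        # Mean self-information, -log2(popularity), of the recommended
        # items; popularity is the fraction of users who interacted with
        # the item. Items with no recorded interactions are skipped,
        # since their self-information is unbounded.
        pop = Counter(item for items in interactions.values() for item in items)
        scores = [-math.log2(pop[i] / num_users) for i in recommended if pop[i] > 0]
        return sum(scores) / len(scores) if scores else 0.0

    def intra_list_diversity(recommended, item_features):
        # Mean pairwise dissimilarity (1 - cosine similarity) over all
        # item pairs in the recommendation list.
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(y * y for y in b))
            return dot / (na * nb) if na and nb else 0.0
        pairs = list(combinations(recommended, 2))
        return (sum(1 - cosine(item_features[i], item_features[j])
                    for i, j in pairs) / len(pairs)) if pairs else 0.0

    # Toy data, purely hypothetical: user -> items consumed, item -> features.
    interactions = {"u1": {"a", "b"}, "u2": {"a"}, "u3": {"a", "c"}}
    item_features = {"b": [1.0, 0.0], "c": [0.7, 0.7], "d": [0.0, 1.0]}
    rec_list = ["b", "c", "d"]
    print("novelty:", round(novelty(rec_list, interactions, num_users=3), 3))
    print("ILD:", round(intra_list_diversity(rec_list, item_features), 3))

Both measures reward lists that go beyond raw preference matching: novelty penalizes recommending only blockbuster items, while intra-list diversity penalizes near-duplicate lists.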
Topics
------
We invite the submission of papers reporting original research, studies, advances, or experiences that focus on recommender system utility evaluation. The topics the workshop seeks to address include, though are not limited to, the following:

* Recommendation quality dimensions
  - Effective accuracy, ranking quality
  - Novelty, diversity, unexpectedness, serendipity
  - Utility, gain, cost, risk, benefit
  - Robustness, confidence, coverage, ease of use, persuasiveness, etc.
* Matching metrics to tasks, needs, and goals
  - User satisfaction, user perception, human factors
  - Business-oriented evaluation
  - Multiple objective optimization, user engagement
  - Quality of service, quality of experience
* Evaluation methodology and experimental design
  - Definition and evaluation of new metrics, studies of existing ones
  - Adaptation of methodologies from related fields: IR, machine learning, HCI, etc.
  - Evaluation theory
* Practical aspects of evaluation
  - Offline and online experimental approaches (see the sketch after this list)
  - Simulation-based evaluation
  - Datasets and benchmarks
  - Validation of metrics
  - Efficiency and scalability
  - Open evaluation platforms and infrastructures
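Following up on the practical-evaluation topics above, the sketch below illustrates the kind of fully specified offline protocol the call advocates: a per-user temporal split, an explicit precision@k measure, deterministic tie-breaking, and a fixed random seed, so that another author could replicate the numbers exactly. The split rule, the popularity baseline, and the synthetic log are illustrative assumptions, not a prescribed benchmark.

    import random

    def temporal_split(events, holdout=1):
        # Per-user temporal split: each user's `holdout` most recent
        # events form the test set, the rest the training set.
        # events: iterable of (user, item, timestamp) tuples.
        by_user = {}
        for user, item, ts in events:
            by_user.setdefault(user, []).append((ts, item))
        train, test = {}, {}
        for user, hist in by_user.items():
            hist.sort()  # chronological; ties broken deterministically by item id
            items = [item for _, item in hist]
            train[user] = set(items[:-holdout])
            # Drop test items also seen in training, to avoid leakage.
            test[user] = set(items[-holdout:]) - train[user]
        return train, test

    def precision_at_k(recommend, train, test, k):
        # Mean precision@k over users with a non-empty test set.
        scores = []
        for user, relevant in test.items():
            if relevant:
                recs = recommend(user, train, k)[:k]
                scores.append(sum(1 for item in recs if item in relevant) / k)
        return sum(scores) / len(scores) if scores else 0.0

    def most_popular(user, train, k):
        # Baseline: globally most popular items the user has not yet
        # seen, with deterministic tie-breaking by item id.
        counts = {}
        for items in train.values():
            for item in items:
                counts[item] = counts.get(item, 0) + 1
        ranked = sorted(counts, key=lambda i: (-counts[i], i))
        return [i for i in ranked if i not in train[user]][:k]

    # Synthetic interaction log (hypothetical); the fixed seed makes
    # the log, and hence the reported number, exactly replicable.
    random.seed(42)
    events = [("u%d" % u, "i%d" % random.randrange(8), t)
              for u in range(5) for t in range(4)]
    train, test = temporal_split(events)
    print("precision@5:", round(precision_at_k(most_popular, train, test, k=5), 3))

The point is less the particular metric than the discipline: stating the holdout rule, the cutoff k, the tie-breaking, and the seed alongside the results is what makes offline comparisons across papers replicable.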
Submission
----------
Two submission types are accepted: technical papers up to 6 pages long, and position papers up to 3 pages. Each paper will be evaluated by at least two reviewers from the Programme Committee. Papers will be evaluated for their originality, significance of contribution, soundness, clarity, and overall quality. Subject to a required quality standard, position papers will be evaluated on the new perspectives and insights they present and on their potential to provoke thought and stimulate discussion.

All submissions shall adhere to the standard ACM SIG proceedings format: http://www.acm.org/sigs/publications/proceedings-templates. Accepted papers will be published in the CEUR proceedings series. Submissions shall be sent as a PDF file through the online submission system, now open at: http://www.easychair.org/conferences/?conf=redd2014.

Important dates
---------------
Paper submission deadline: 21 July 2014
Author notification: 21 August 2014
Camera-ready version due: 5 September 2014
REDD 2014 workshop: October 2014

Programme Committee
-------------------
Linas Baltrunas, Telefonica Research, Spain
Marcel Blattner, Univ. of Applied Sciences, Switzerland
Iván Cantador, Universidad Autónoma de Madrid, Spain
Charles Clarke, University of Waterloo, Canada
Juan Manuel Fernández, Universidad de Granada, Spain
Zeno Gantner, Nokia, Germany
Ido Guy, IBM Haifa Research Lab, Israel
Juan Huete, Universidad de Granada, Spain
Kris Jack, Mendeley, UK
Dietmar Jannach, University of Dortmund, Germany
Jaap Kamps, University of Amsterdam, Netherlands
Alexandros Karatzoglou, Telefonica Research, Spain
Bart Knijnenburg, University of California, Irvine, USA
Till Plumbaum, TU Berlin, Germany
Filip Radlinski, Microsoft, Canada
Alan Said, TU Delft, Netherlands
Yue Shi, Yahoo! Labs, USA
Fabrizio Silvestri, Yahoo!, Spain
David Vallet, Google Inc., Australia
Arjen de Vries, CWI, The Netherlands
Jun Wang, University College London, UK
Xiang-Jun Wang, Netflix
Xiaoxue Zhao, University College London, UK

Organizers
----------
Panagiotis Adamopoulos, New York University, USA
Alejandro Bellogín, Universidad Autónoma de Madrid, Spain
Pablo Castells, Universidad Autónoma de Madrid, Spain
Paolo Cremonesi, Politecnico di Milano, Italy
Harald Steck, Netflix, USA

Contact email: redd2014 at easychair.org
More info at: http://ir.ii.uam.es/redd2014

From alejandro.bellogin at uam.es Mon Jul 21 03:28:03 2014
From: alejandro.bellogin at uam.es (Alejandro Bellogin Kouki)
Date: Mon, 21 Jul 2014 09:28:03 +0200
Subject: [Sigkm-l] Extended deadline: REDD 2014 - ACM RecSys Workshop on Recommender Systems Evaluation: Dimensions and Design
Message-ID: <53CCC103.9020207@uam.es>

------------------ Final Call for Papers - REDD 2014 ------------------
International ACM RecSys Workshop on Recommender Systems Evaluation: Dimensions and Design - REDD 2014
Foster City, Silicon Valley, CA, USA, October 2014
http://ir.ii.uam.es/redd2014
---------------------------------------------------------------------
* Extended deadline: 28 July 2014 *

The scope, topics, and submission instructions are as in the call for papers above; the updated dates are:

Important dates
---------------
Paper submission deadline (extended): 28 July 2014
Author notification: 21 August 2014
Camera-ready version due: 5 September 2014
REDD 2014 workshop: October 2014
Contact email: redd2014 at easychair.org
More info at: http://ir.ii.uam.es/redd2014

From hdp at cs.nmsu.edu Sun Jul 27 13:55:53 2014
From: hdp at cs.nmsu.edu (Heather D. Pfeiffer)
Date: Sun, 27 Jul 2014 11:55:53 -0600
Subject: [Sigkm-l] When should we have our AM at ASIS&T AM 2014?
Message-ID:

These are the possible time slots:

November 3
8-9
9:05-10:05
1:45-2:45
2:50-3:50
3:55-4:55

November 4
8-9
9:05-10:05
10:10-11:10
2:00-3:00
3:05-4:05
4:10-5:10

What do you like?

-Heather (Treasurer)
heather at pfeifferfamily.net

From pbradshaw1 at yahoo.com Sun Jul 27 14:35:42 2014
From: pbradshaw1 at yahoo.com (Tricia Bradshaw)
Date: Sun, 27 Jul 2014 11:35:42 -0700
Subject: [Sigkm-l] When should we have our AM at ASIS&T AM 2014?
Message-ID: <1406486142.52476.YahooMailNeo@web164004.mail.gq1.yahoo.com>

10:10-11:10 on the 4th sounds good to me.

Tricia Bradshaw (Chair)

From thornbug at oclc.org Sun Jul 27 14:56:27 2014
From: thornbug at oclc.org (Thornburg, Gail)
Date: Sun, 27 Jul 2014 18:56:27 +0000
Subject: [Sigkm-l] When should we have our AM at ASIS&T AM 2014?
Message-ID: <1406487387179.53534@oclc.org>

How about 3:55-4:55 on November 3rd?