[Infovis] Call for Papers: 1st International Workshop on Disinformation and Toxic Content Analysis (DiTox 2023), September 13th, 2023
Nazemi, Kawa, Prof. Dr.
kawa.nazemi at h-da.de
Tue Feb 14 21:07:37 CET 2023
Call for Papers: 1st International Workshop on Disinformation and Toxic Content Analysis (DiTox 2023), September 13th, 2023
https://ditox.ait.ac.at/
In conjunction with the 4th biennial conference on Language, Data and Knowledge (LDK 2023) to be held in Vienna, Austria.
The spread of misinformation and disinformation not only affects people's perceptions and beliefs, but can also have a direct impact on democratic institutions, critical infrastructure, and individual lives and families. Most critically, it raises the more fundamental question of which sources of information can be trusted at all, potentially undermining our relationship of trust with traditional media. Because of these profoundly harmful effects, disinformation is regarded as one of the most pressing problems of our time.
The loosely defined nature of disinformation analysis and detection as a research task, together with the enormous heterogeneity and multimodality of the data involved, makes this an exceptionally challenging field of research. The complexity ranges from media tampering detection to text content analysis to large-scale information fusion for analyzing disinformation trends, and maintaining a comprehensive overview is equally difficult.
The overall goal of this workshop is therefore to provide insights into how approaches from different domains can be used to address disinformation at a technical level, including AI/ML-based methods, visual analytics, and visualization approaches, as well as interdisciplinary approaches inspired by the social sciences (i.e., computational social science). To this end, we invite task-specific contributions as well as large-scale integration approaches, demo and project presentations, to provide a comprehensive overview of the current state of the art in countering disinformation.
Topics:
Full Paper Submissions:
- Machine and deep learning methods for disinformation (e.g., analysis, detection)
- Visual analytics and visualization approaches for disinformation
- Audio tampering detection
- Social network analysis (e.g., key actors, distribution patterns) including visualization approaches
- Graph algorithms for disinformation identification
- Natural language processing methods (e.g., content evaluation, toxicity, radicalization)
- AI-supported fact checking and detection of disinformation campaigns
- Identification of fabricated and manipulated content (e.g., deep fakes, audio, generated text)
- Community detection and characterization in social networks (e.g., conspiracy theories, echo chambers)
- Bot characterization and detection
- Multimodal fake content detection
- Recommendation systems and disinformation
- AI uses, practices and tools in fact-checking journalism
- Qualitative and quantitative studies on disinformation
- Ethics and law in disinformation
Demo and Project Presentation (Short Paper Track, Poster Presentation):
- Demo presentations (e.g., fact checking tools, disinformation detection tools)
- Project platform presentations
- Project presentations
Important Dates:
- Paper submission: May 21st, 2023
- Notification: June 20th, 2023
- Camera-ready submission deadline: July 9th, 2023
- DiTox workshop: September 13th, 2023
Submission:
Submissions can be either long papers (9-12 pages) or short papers (4-6 pages); all page limits include references. Accepted submissions will be published by ACL in an open-access conference proceedings volume, free of charge for authors. The reviewing process is single-blind; submissions should not be anonymised. The workshop will be hybrid (face-to-face and remote). At least one author of each accepted paper must register to present the paper at the workshop (either remotely or on-site). No registration fee will be charged for participating in LDK 2023. Papers should be submitted via OpenReview at the following address: https://openreview.net/group?id=LDK/2023/Conference
Prof. Dr. Kawa Nazemi
Head of Human-Computer Interaction & Visual Analytics
Tel +49.6151.533-639393
kawa.nazemi at h-da.de
Hochschule Darmstadt
University of Applied Sciences
Schöfferstraße 3
64295 Darmstadt
www.vis.h-da.de
www.univ-tech.eu