---------- Forwarded message ----------
From: Steven Clift
See CFP below ...
A comment ... as someone who monitors all sorts of political online groups on
Facebook, the conspiratorial tone is a return of what I saw on political
USENET newsgroups back in the 1990s. They were essentially anonymous and
either totally chaotic or, if moderated, totally controlled (to keep politics
out of groups). It was terrible stuff.
Crafting another way led to E-Democracy's real-name, accountability-based
forums with strong civility and active human facilitation. We did this in
1994, a decade before Facebook brought it to the masses.
Again and again, folks hope for the magic bullet of cheap technology to
police abuse at low cost. Look at news site commenting online. It doesn't
work on its own; at best it can supplement the essential role of active
facilitators who are supported by good rules and a culture of respect for
their leadership. Facilitators need to be real, named people and not the
mystery man behind the curtain.
While real names work well on Facebook and encourage accountability and
self-control/censorship among "friends" and on public posts, the
introduction of posts that are less visible (to your friends and relatives)
in closed and secret groups has unleashed the beast inside many of us.
Further, with hundreds of millions of people online in the US, for example,
you only need a very small percentage of people who feel they have nothing
to lose by being as nasty as can be, or worse, when they comment on the
White House Facebook Page or a local news story about Somali immigrants in
Minnesota.
On top of this, with Twitter and Facebook profiles, the masses now have
public calling cards on the Internet, which makes it 100x easier for anyone
to be privately contacted or publicly shamed. Previously, only a small
percentage of individuals had websites or blogs.
So back to crafting another way ... that is what I am seeing in the best
Facebook Groups: active facilitators, participants who respect those
leaders, and removal of the abusers, spammers, and those unwilling or unable
(sometimes tied to severe antisocial behavioral medical conditions) to
follow the rules or get along with others. So to those looking to connect
non-friends online: be bold and build another way. To the organizations that
sponsor online exchange: start investing in what really works - people as
active leaders and facilitators.
Steven Clift
E-Democracy
P.S. For orgs looking for great professional online facilitators who can
handle politics, I can hook you up.
From: "Andrew Whitacre" <awhit@mit.edu>
Date: Jan 19, 2017 8:52 AM
Subject: [civicmedia-researchers] CFP: Abusive Language Online
To: <cmsw-all@mit.edu>, "civicmedia-researchers" <
civicmedia-researchers@mit.edu>
Cc:
Part of the annual meeting of the Association for Computational Linguistics
(ACL) 2017 (Vancouver), August 3 or 4, 2017
https://www.hastac.org/opportunities/cfp-1st-workshop-abusive-language-online
Snippet:
Overview
The last few years have seen a surge in abusive online behavior, with
governments, social media platforms, and individuals struggling to cope
with the consequences. Online forums, comment sections, and social media
interaction in general have become a playground for bullying, scapegoating,
and hate speech. These forms of online aggression not only poison the
social climate of the communities that experience them, but also lower the
inhibition for direct physical violence, and increasingly even result in it.
As researchers in the field that works directly with computing over
language, Natural Language Processing (NLP) researchers are in a unique
position to develop automated methods to analyse, detect, and filter
abusive language.
Additionally, we recognize that addressing abusive language is not solely
the purview of NLP approaches but is a truly multi-disciplinary problem and
thus requires knowledge from other fields, including but not limited to:
psychology, sociology, law, gender studies, digital communication, and
critical race theory.
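To make the detection task concrete: a minimal sketch of what an automated
abusive-language filter looks like at its very simplest - a lexicon-based
scorer that flags comments for human review. The lexicon, threshold, and
function names here are illustrative assumptions, not anything from the
workshop; real systems use trained classifiers rather than word lists, and
as Clift notes above, flagging should feed human facilitators, not replace
them.

```python
import re

# Hypothetical toy lexicon; real systems learn features from annotated data.
ABUSE_LEXICON = {"idiot", "moron", "scum"}

def abuse_score(text: str) -> float:
    """Return the fraction of tokens that appear in the abuse lexicon."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for token in tokens if token in ABUSE_LEXICON)
    return hits / len(tokens)

def flag_for_review(text: str, threshold: float = 0.1) -> bool:
    """Flag a comment for a human moderator when the score crosses the threshold."""
    return abuse_score(text) >= threshold
```

Even this toy version shows the core trade-off the CFP's panel topics raise:
a low threshold produces false positives (a privacy and ethics concern),
while a high one misses abuse that word lists cannot capture, such as
sarcasm or coded language.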
In this one day workshop, we aim to provide a space for researchers of
various disciplines to meet and discuss approaches to abusive language. The
workshop will include invited speakers and panelists from fields outside of
NLP, as well as solicit papers from researchers across all areas. In
addition, the workshop will host an "unshared task".
Paper Topics
We invite long and short papers on any of the following general topics:
- NLP models and methods for abusive language detection
- Application of NLP tools to analyze social media content and other large data sets
- NLP models for cross-lingual abusive language detection
- The social and personal consequences of being the target of abusive language and targeting others with abusive language
- Assessment of current non-NLP methods of addressing abusive language
- Legal ramifications of measures taken against abusive language use
- Best practices for using NLP techniques in watchdog settings
- Development of corpora and annotation guidelines
Panel Discussion Topics
Potential panel discussion topics reflect the relevance of the problem to
both industry and individuals:
- Responsibility of companies and governments in monitoring speech
- Privacy and ethical implications of abusive language detection (false positives)
- Follow-up: what to do when a community experiences abusive language
- Personal experiences from individuals who have been threatened online
- Best methods for cross-pollination of ideas between fields
Andrew Whitacre
Communications Director
Comparative Media Studies/Writing
Massachusetts Institute of Technology (snips) | cmsw.mit.edu