June 3 2021 GM

Glitter Meetups

Glitter Meetup is the weekly town hall of the Internet Freedom community at the IFF Square on the IFF Mattermost, at 9am EDT / 1pm UTC. Do you need an invite? Learn how to get one here.

  • Date: Thursday, June 3
  • Time: 9am EDT / 1pm UTC
  • Topic: Algorithmic Diversity: Zero Exclusion, AI & Ethics (AI & Human Rights)
  • Featured Guest: Yonah Welker

Join us for Glitter Meetup with featured guest Yonah Welker, where we will dive into the meaning and application of algorithmic diversity in technology, including cases such as neurodiversity. Using the latest experiences, cases, and research, we will analyze the current state and problems of inclusive innovation and technology: the problems of representation and criteria, inclusive research and design thinking, the building of inclusive products (AI-driven platforms, devices, apps, social and emotional robotics), ethical considerations and concerns (the "black box" and "double-check" problems, transparency, explainability, fairness, surveillance), and the shortcomings of current technology ecosystems, policies, and human rights frameworks. Other topics we will address:

  • Inclusive and neurodiverse solutions in the hiring, learning, and wellbeing fields
  • Collaborative AI, Data & NLP for open-source neuro solutions
  • Ethics, policies and human rights
  • Neuro representation in data science and AI teams

Bio: Yonah Welker has been working at the intersection of tech and society since 2005, when they became a tech explorer and launched a hardware think tank. Over their journey, they have founded and co-created tech startups and labs, helped facilitate tech ecosystems from North America to APAC, MENA, Africa, and Europe, screened over 2,000 teams, and contributed to projects in ethics (AI, tech), deep tech, and sustainability.

Notes


Community Updates

  • A county in the U.S. (the first to do so) has banned government use of facial recognition
  • Pluggable transports support censorship circumvention.
  • RightsCon will happen next week and folks are sharing their sessions.
    • "User insights without user surveillance: the clean insights approach to privacy-respecting analytics" on Friday, June 11 at 9:45am Eastern
    • "Misinformation, Disinformation, and the Pandemic in MENA" on Friday, June 11, 2021 at 12:15 pm EST

Topic of Discussion: Algorithmic Diversity: Zero Exclusion, AI & Ethics (AI & Human Rights)

Yonah Welker (@yonahwelker on Mattermost) has been working in tech since 2005, when they became a tech journalist, partly because they love it and partly because they were forced to abandon formal education to make a living due to their disability. Later, they started building startups (and, simultaneously, art projects).

We talked about neurodiversity earlier - what is neurodiversity?

  • Sometimes we have invisible differences in our brains, which are not visible on an MRI or typical scans. These lead to conditions such as autism, ADHD, and dyslexia.
  • It's different from mental health challenges (though often comorbid with them), because we are born this way.
  • And we should learn how to use it effectively: not just how to fit into society, but how to create our own ecosystems to truly realize our potential and creativity.

How is neurodiversity linked to developing and using technology?

  • There are a few examples:
    • Some people learn through listening,
    • Some through reading,
    • Some people have fluctuations in mood,
    • Some people are not efficient in communication,
    • Some people are not efficient in writing.
  • We can use machine learning and similar algorithms to study and learn more about particular individuals and provide personalized interfaces and experiences (a rough sketch follows below).
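
To make this concrete, here is a minimal, hypothetical Python sketch of how interaction signals could drive a personalized interface. Every feature name, value, and label here is invented purely for illustration (and scikit-learn is assumed to be available); a real system would need far richer data and careful privacy safeguards.

  # Hypothetical sketch: learn a user's preferred content modality (audio vs.
  # text) from simple interaction signals, then adapt the interface to it.
  from sklearn.tree import DecisionTreeClassifier

  # Each row: [audio_completion_rate, reading_time_per_page_sec, replays]
  X = [
      [0.9, 120, 3],   # finishes audio, slow reader, replays often
      [0.8, 150, 2],
      [0.2,  40, 0],   # skips audio, fast reader
      [0.3,  35, 1],
  ]
  y = ["audio", "audio", "text", "text"]  # observed preference (invented)

  model = DecisionTreeClassifier(max_depth=2).fit(X, y)

  def choose_interface(signals):
      """Return which modality to foreground for this user."""
      return model.predict([signals])[0]

  print(choose_interface([0.85, 130, 2]))  # likely -> "audio"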

What do you mean by algorithmic diversity?

  • Algorithmic diversity is a philosophy that thinks about inclusion in a broader way - as a zero exclusion principle:
    • Why did we exclude someone initially?
    • How to build ecosystems / classrooms / workplaces that don't exclude.
  • In the real world, it's a combination of niche solutions, assistive technology, social robots, smart glasses, flipped classroom learning technologies and many other things that help to build micro-ecosystems.
  • But more importantly:
    • It's the ethics and policies behind it;
    • Human rights guidelines and frameworks: an accessible moral vocabulary and an understanding of bioethics across the teams and organizations behind these technologies.

What do you think is the current state of inclusive innovation and technology? There's potential, but are there some things we should be cautious about too?

  • There is good news and bad news.
  • On the one hand:
    • Microsoft & Amazon have integrated accessibility and disability programs.
    • Google Glass collaborates with an autism-focused startup.
    • Tech companies are trying neurodiverse hiring platforms.
    • Schools are experimenting with social robots for ADHD and AI trainers for dyslexia.
    • Angie C (a Canadian artist) uses biofeedback and imagination to control a modular synthesizer, so we can use such technology in so many cases!
  • But at the same time:
    • 90% of people with autism are not employed.
    • Only 1 in 10 people who need inclusive technology have access to it.
    • Representation behind tech is poor (women make up only 10-15% of data science, and disabled and neurodivergent people an even smaller share).
    • There is a disconnection between technology, social science, and gender and cultural studies.
    • Accountability: we have learned that bias exists, but we still don't know how to make technology (and the teams behind it) accountable and transparent to different stakeholders.
    • Frameworks for children, women, and disabled people are very recent: the first framework for AI and disability was only created in 2019, and UNICEF issued guidance on AI and children in 2020.
  • We are also still far from making this movement truly open source.

So it sounds like these technologies aren't always developed with the people who use them. What is inclusive research? How do you best practice it when designing things like algorithms or robots? I.e., less about the UX of an app and more about the actual backbone of the technology?

  • There is a great phrase: AI can be the solution to any problem in the world, or a source of apocalypse, depending on who asks the questions and how. (That's why representation matters so much!)
  • Recently I worked with several teams related to neurodiversity:
    • AI for dyslexia (a Denmark-based startup). They used eye tracking to identify the most effective reading patterns and provide recommendations and personalized experiences (see the sketch after this list);
    • Data analytics for the development of autistic children (from Canada). It tracks various parameters of how children grow and deal with different challenges, and the platform serves as an ecosystem for other tools, like robots, glasses, and educational methodologies;
    • During my podcasts I also dealt with many social-robot startups (Robokind in the US, LuxAI in Luxembourg).
  • As for research, it includes several levels:
    • Academic: typically the founders come from universities and research facilities focused on human-machine interaction and related approaches;
    • Stakeholder research: parents, educators, schools, medical professionals;
    • Impact measurement: protocols, questionnaires, interviews.
  • We run many experiments across the neurodiverse quadrant to measure the key metrics and turn them into product tasks.
  • I'm not an engineer, so I can't answer in more depth in terms of the technology stack.
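
As an illustration of the eye-tracking idea mentioned above, here is a rough Python sketch of turning raw gaze samples into simple reading-difficulty signals (fixation time and the rate of backward jumps). This is a guess at the general approach, not the startup's actual pipeline, and the gaze trace is fake.

  import numpy as np

  def reading_metrics(gaze_x, timestamps_ms, move_threshold=15):
      """Crude reading signals from horizontal gaze positions (pixels)."""
      gaze_x = np.asarray(gaze_x, dtype=float)
      dx = np.diff(gaze_x)
      dt = np.diff(np.asarray(timestamps_ms, dtype=float))

      # Samples with little horizontal movement count as fixation time.
      fixation = np.abs(dx) < move_threshold
      # Leftward jumps are regressions, i.e. re-reading earlier text.
      regressions = int((dx < -move_threshold).sum())
      saccades = int((np.abs(dx) >= move_threshold).sum())

      return {
          "fixation_time_ms": float(dt[fixation].sum()),
          "regression_rate": regressions / max(saccades, 1),
      }

  # Fake gaze trace: mostly rightward reading with one jump back.
  x = [10, 12, 14, 60, 62, 30, 32, 80, 82]
  t = [0, 50, 100, 150, 200, 250, 300, 350, 400]
  print(reading_metrics(x, t))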

There's a lot of data in these projects, and privacy is a massive issue for neurodiverse people and people with disabilities. What ethical considerations and concerns should we all be aware of in coming years?

  • Typically there are several ethical challenges:
    • Algorithmic bias (when technology such as computer vision can't correctly recognize particular social groups, genders, or ethnic groups);
    • The black box (when we can't turn machine learning predictions and data into valuable insights or understand the reasons behind a particular decision; see the sketch below);
    • Supremacy of algorithms (when we treat technology as a "subject" of the law rather than an object. Instead, people are always accountable for these tools, so we need a "double-check" principle in medicine, nursing, and education to recheck predictions and decisions);
    • Filter bubble & echo chamber (when we completely delegate the learning process to an algorithm, creating silos/vacuums of content and approaches);
    • Technical fixes (when we fix any problem with another technology fix / update / feature instead of reconsidering social practice or the representation of fundamental issues);
    • Privacy (as you mentioned. Hopefully, the European Union is working in this direction);
    • Autonomous agents (policies for autonomous robots, cameras, computer vision, etc.).
  • In other words, there are many challenges we should fix, and privacy is unfortunately not the only one.
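
One standard way to probe the "black box" problem is to ask which inputs actually drive a model's decisions. Here is a minimal sketch using permutation importance (assuming Python with scikit-learn, and synthetic data invented purely for illustration); it is a starting point for explainability, not a complete answer.

  import numpy as np
  from sklearn.ensemble import RandomForestClassifier
  from sklearn.inspection import permutation_importance

  rng = np.random.default_rng(0)
  X = rng.normal(size=(500, 3))                  # three anonymous features
  y = (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)  # feature 0 drives the label

  model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

  # Shuffle one feature at a time and measure how much accuracy drops.
  result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
  for i, imp in enumerate(result.importances_mean):
      print(f"feature {i}: importance {imp:.3f}")
  # Feature 0 should dominate, exposing what the model actually relies on.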

You mentioned that representation matters, and that neurodiverse people face high rates of unemployment. What are some of the best practices for inclusive and neurodiverse solutions in hiring?

  • There are only rare cases where employers actually help.
  • Moreover, we even ran experiments where we sent out the same CV twice, and the version mentioning a "disability" had a dramatically lower response rate (a simple way to test such results statistically is sketched below).
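
For context, here is a small self-contained Python sketch of how the results of such a CV audit could be tested statistically, using a two-proportion z-test. The callback counts are invented for illustration; they are not the numbers from the actual experiment.

  from math import erf, sqrt

  def two_proportion_ztest(hits_a, n_a, hits_b, n_b):
      """z statistic and two-sided p-value for H0: equal response rates."""
      p_a, p_b = hits_a / n_a, hits_b / n_b
      pooled = (hits_a + hits_b) / (n_a + n_b)
      se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
      z = (p_a - p_b) / se
      p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal tails
      return z, p_value

  # Invented counts: 30/200 callbacks without disclosure, 12/200 with it.
  z, p = two_proportion_ztest(30, 200, 12, 200)
  print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests a real difference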

Retention is always difficult. Once folks are hired, what are some things employers should be doing to make sure they are meeting the needs of a team that is neurodiverse?

  • So, as an employer, you should learn that you help neurodiverse people not because "you are good" or because "you help", but because they have the same rights and talents as you. So if they want to work remotely, OK. If they want to avoid meetings, OK. If they don't like Zoom video calls, OK. Measure them through results, not through your subjective vision of "normality".