By: Kim Polley, Head of UK Corporate, Instinctif Partners 

Collective cognition is the ability of a group of people to work together to solve problems and make decisions that are more than the sum of their parts. In its early days, the internet seemed to answer the question of how we might harness connectivity for collective cognition. However, its developers failed to consider – and mitigate – how ubiquitous access and human ambition would turn it into an unchecked platform for anyone with an agenda, positive or negative.

With a new world of ‘intelligent computing’ at our fingertips, could AI be the answer to how we recalibrate this conscienceless connectivity? Imagine harnessing the power of AI for collective cognition, driving human progress through an engine that evaluates, validates and categorises the insights that move the world forward. AI rules and applied learning would ensure that the information being connected has inherent value and is purposeful, factual and credible.

AI could collect and organise information from varied sources – academic papers, government reports, social media posts – then analyse it to identify patterns and trends, surfacing the most valuable insights from the vast amounts of data generated daily. These insights could be categorised so they can be easily shared and reused. The resulting optimised collective cognition could inform decision-making at every level – government, business and individual citizens alike – ensuring that the knowledge and expertise of the global community is used to its fullest potential.
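To make the idea of an engine that scores and categorises insights concrete, here is a minimal, purely illustrative sketch. The topic keywords, snippets and function names are all hypothetical assumptions for the example – a real engine would use trained classifiers over large corpora, not hand-picked keyword lists.

```python
from collections import Counter

# Hypothetical topic keywords for a toy categoriser – a real system
# would use trained classifiers, not keyword lists.
TOPICS = {
    "health": {"vaccine", "clinical", "patients"},
    "climate": {"emissions", "warming", "carbon"},
    "economy": {"inflation", "markets", "trade"},
}

def categorise(text: str) -> str:
    """Assign a snippet to the topic whose keywords it matches most."""
    words = set(text.lower().split())
    scores = {topic: len(words & kws) for topic, kws in TOPICS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "uncategorised"

def top_topics(snippets):
    """Count how many snippets fall into each category, so the most
    active areas of discussion can be surfaced and shared."""
    return Counter(categorise(s) for s in snippets)

snippets = [
    "New clinical trial shows vaccine efficacy in elderly patients",
    "Carbon emissions fell as warming targets tightened",
    "Markets rallied despite inflation fears",
]
print(top_topics(snippets))
```

Even this toy version shows the shape of the pipeline the article describes: collect, categorise, then aggregate so the most valuable themes can be shared and reused.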

Approaching Risk with Purpose and Foresight

Of course, there are also potential risks associated with using AI for collective cognition. AI systems can be biased, reflecting the biases in the data they learn from and leading to unfair or discriminatory decisions against certain groups of people. AI systems can also be hacked or manipulated – for instance, to spread misinformation or propaganda – with severe consequences for the integrity of collective cognition.

It is essential to be aware of these risks and take steps to mitigate them. To safeguard the use of AI-enabled collective cognition for good, thereby limiting potential negative impact on people or society, considerations should include:

  • Starting with a clear understanding of the problem you are trying to solve. What are the goals of the collective cognition system? What kind of information do you need to collect or reject to protect the integrity of the collective cognition engine? Who are the system users, and what purpose drives their usage?
  • Involving a diverse group of people in the design and development of the system. This helps ensure the collective cognition system is inclusive and considers the needs of different users, whether individuals, businesses or administrations. It also ensures the engine approaches data enquiries from various perspectives, helping to surface unconscious bias.
  • Using transparent and ethical AI practices. This includes ensuring the collective cognition system is unbiased and respects users’ privacy. Making the system transparent about how it works helps users understand how decisions are reached, and allows their feedback to shape how the collective cognition process learns to identify potential biases.
  • Using a diverse dataset and monitoring for bias: The dataset used to train the collective cognition AI engine significantly shapes its outputs. It is critical to use a diverse dataset that reflects the different perspectives and experiences of the people using the system. On deployment, it is equally critical to monitor the system for signs of bias, tracking its outputs for patterns that suggest unfairness.
  • Using fairness algorithms: Several fairness algorithms can be used to reduce bias in AI systems, and these should be evaluated and incorporated into the collective cognition system. These algorithms help ensure the system treats all users fairly, regardless of race, gender, or other characteristics.
  • Using clear and unambiguous language in the collective cognition system’s code and documentation helps ensure data is interpreted and used per the system’s intended purpose.
  • Allowing users to flag and challenge any decisions they believe are biased or discriminatory will help identify and address potential biases in the collective cognition system.
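The bias-monitoring and fairness points above can be sketched as a simple demographic-parity check over logged decisions. Everything here is an illustrative assumption – the log format, the group labels and the acceptable gap are hypothetical, and real fairness work involves many metrics beyond this one.

```python
def selection_rates(decisions):
    """Approval rate per group, from (group, approved) pairs.
    Assumes group labels are self-reported and consented."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest difference in approval rates between any two groups.
    Values near 0 suggest parity; large gaps warrant investigation."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical decision log: group A is approved twice as often as B.
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]
gap = demographic_parity_gap(log)
```

A monitor like this would run continuously over the system’s outputs, flagging gaps above an agreed threshold for human review – which is also where the user flagging mechanism described above feeds back in.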

The Future of Collective Cognition

The future of collective cognition is as yet untapped. Still, as AI technology develops, we can expect to see even more innovative and effective applications of AI for human progress. These systems can significantly impact the world, helping us solve complex problems, make better decisions, and collaborate more effectively. However, it is vital to consider the risks and to develop safeguards to mitigate them.

With careful planning and informed execution, we can create systems for collective cognition that are both effective and ethical. But it’s critical to remember that while collective cognition could well be the future, without collective accountability, that future may not be the one we wanted.