Finally! After more than a year of preparation and planning, MI4People was recently founded as a non-profit research institution. We are very excited and cannot wait to explore how modern Machine Intelligence (MI) technologies such as advanced analytics, artificial intelligence, machine learning, deep learning, robotic process automation, and process mining can be applied to tackle the humanitarian and ecological problems of this world.
We are currently ramping up our operations, preparing our first set of projects, and resolving the final bureaucratic details. We are also building up our presence on social media platforms like Facebook and LinkedIn to promote the idea of Machine Intelligence for Public Good, and we are preparing our first donation campaign, which will start in the next few weeks.
But most importantly, we are going to kick off our first project in the last week of October! Together with our first volunteers, Kira and Adil, we will start work on an MI-based Soil Quality Evaluation System that uses various data inputs, such as satellite imagery and infrared spectral measurements, to predict the most important soil quality indicators. These predictions will be made accessible and understandable for farmers, especially in developing countries, and will enable them to better understand their soil and to make intelligent choices about which crops to plant, how best to fertilize, and how to protect the soil from pests in a more suitable, sustainable, and environmentally friendly manner. This should lead to better crop yields through eco-friendly farming, more stable food supply chains, and less famine and undernutrition.
Stay with us – we will keep you up to date about our progress.
Together, we can build a better world, for all of us!
Your MI4People Team
From the Commercial Sector
Pressure on Facebook is intensifying after former Facebook data scientist Frances Haugen, who was part of a team that combated expressions of hate, violence, and misinformation, leaked internal research showing that the company has known its efforts to engage users have, in some instances, harmed individuals and society at large. These accusations are not new, but the leak provides more concrete evidence.
Facebook has already developed powerful AI systems to identify hateful posts, memes, and misinformation. However, it also seems to torpedo any such effort if it has a negative impact on user engagement or, described in technical terms, on the performance of its recommendation systems, which are themselves AI-driven. In fact, Facebook's different AI systems – those that aim to increase profit and those that safeguard the Public Good – seem to conflict with each other. Currently, the profit-oriented systems have the upper hand.
Looking through a lens of MI for Public Good: While using recommendation systems to generate profit is a reasonable application of MI/AI, one should ensure that the net impact of such a system improves society, or at a minimum does not harm it – especially if one has as much power and reach as Facebook. In fact, scandals such as the recent Facebook leak might undermine public trust in AI and create broad public support for laws that limit recommendation systems or AI in general. That would not only affect the commercial use of AI but also hinder the application of AI (and the use of MI in general) for the Public Good.
Michelle Bachelet, the UN High Commissioner for Human Rights, appealed to the UN's member states for a moratorium on the sale and use of AI systems that pose serious risks to human rights until adequate safety protocols are established. Her call is accompanied by a new report from the UN Human Rights Council analyzing how AI applications can negatively affect people's right to privacy and freedom of movement, and limit access to healthcare and education.
Bachelet stressed that “the power of AI to serve people is undeniable, but so is AI’s ability to feed human rights violations at an enormous scale with virtually no visibility. Action is needed now to put human rights guardrails on the use of AI, for the good of all of us.”
Looking through a lens of MI for Public Good: While statements and reports like these are important for bringing attention to the topic and for increasing public pressure on companies and authorities, they are no substitute for concrete, practicable industry standards for safeguards and legal limits. Such standards and limits, however, should be designed so that they do not indirectly "forbid" the application of AI technologies in general, but rather help ensure the acceptance of AI among the broad population and direct the potential of AI technologies toward human flourishing.
Researchers at the Max Planck Institute for Solar System Research (MPS) in Germany are using machine learning to map permanently shadowed craters on the Moon at far higher resolutions than ever before.
Capturing images of the Moon's shadowed craters is not a trivial task because of the absence of direct light and the motion of the spacecraft. As a result, such images are usually full of noise. To overcome this problem, MPS researchers have used machine learning to create HORUS (Hyper-effective nOise Removal U-net Software), software that is able to remove the heavy noise inherent in low-light imagery.
Using HORUS, the researchers can achieve a resolution of about 1-2 meters per pixel, five to ten times higher than that of all previously available images. Such capabilities are important for planning future lunar missions and are of particular significance due to the believed presence of frozen water within many of these permanently shadowed craters.
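For readers curious why low-light imagery is so noisy and how combining information can recover a faint signal, here is a toy NumPy sketch. It does not reproduce HORUS (which is a trained U-net deep-learning denoiser applied to real spacecraft imagery); it only illustrates the basic statistical idea that sensor noise can be beaten down by pooling many noisy observations of the same scene. All numbers are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "true" brightness of a dimly lit 64x64 patch of terrain
# (very low signal, as inside a shadowed crater).
truth = np.full((64, 64), 0.05)

# Each raw exposure is dominated by sensor noise at such low light levels.
noise_std = 0.2
frames = [truth + rng.normal(0.0, noise_std, truth.shape) for _ in range(25)]

# A single frame is almost pure noise...
single_err = np.abs(frames[0] - truth).mean()

# ...but averaging 25 frames shrinks the noise by a factor of sqrt(25) = 5.
stacked = np.mean(frames, axis=0)
stacked_err = np.abs(stacked - truth).mean()

print(f"mean error, single frame:   {single_err:.3f}")
print(f"mean error, 25-frame stack: {stacked_err:.3f}")
```

Simple frame averaging like this needs many registered exposures of the same spot; a learned denoiser such as HORUS instead exploits patterns it has learned from data, which is what makes the much higher effective resolution possible.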
Looking through a lens of MI for Public Good: If humankind expands to the Moon, naturally occurring water will be an extremely important resource, and knowledge of its location will be critical. Beyond exploring possible habitation of the Moon, developments like HORUS can also be applied to many other humanitarian projects where high-fidelity image analysis is beneficial, and they demonstrate how important and critical MI technologies will become for humankind in the near future.