It’s an exciting time to watch technology grow and advance. Yet, as President Theodore Roosevelt once said in a letter addressing his reason for not pursuing a third presidential term, “I believe in power; but I believe that responsibility should go with power…”
Each new technological advancement is weighed for its benefits along the way. But many believe the potential negative impacts on the very people it's supposed to benefit should be weighed as well.
While Google just announced a new artificial intelligence (AI) ethics panel to oversee its AI projects, little is known about the original AI ethics board established during its acquisition of DeepMind in 2014.
Just Because You Can, Should You?
AI ethics was the subject of much discussion at the Big Data World event held in London earlier in March. Experts debated the ethical applications of AI, with many agreeing that companies developing AI projects should strongly consider not only their reasons for building them but also the possible consequences for end users.
Experts also discussed a public ethics board, which many believe would be an effective way to keep AI development and data gathering focused on the right objectives. Just as large corporations have executive boards that oversee decisions, a public ethics panel could help companies developing AI and advancing technology make decisions in the best interests of the public.
Such an ethics board would need to consist of a diverse group of people from a variety of backgrounds and with differing experiences. The board would also, where possible, publish its findings to engage the public in debate about how big data should be used. Most importantly, companies should continually review their ethics rather than become complacent: what passes for an acceptable data policy at one point in time may not remain viable or ethical, and should be revisited.
Ensuring The Unthinkable Doesn’t Happen
Measures must also be taken to ensure any such ethics panel is free from the influence of outside entities such as legislators, shareholders, or special interest groups. Governance is also vital in assessing the impact of AI, yet at this time there is no established global standard for acceptable AI and data use.
Progress toward such standards, however, is being made. The Centre for Data Ethics and Innovation (CDEI), which analyzes the risks and opportunities of data-driven technology, recently published its 19/20 Work Programme and a two-year strategy. The report outlines the need for prioritization and a defined working structure for its first year.
Many are hopeful that if a framework and set of standards are put in place, they will be flexible enough to keep pace with both the technology's development and the public's perception of it.