AI in Petrochemical Plants: Ethical Dilemmas & Training

  • Published: Sep 8, 2024
  • AI-Based Autonomous Petrochemical Plants, Character, Ethical Decision Making, and AI Training
    ENEOS, one of the largest companies in the world, is operating an AI-based autonomous petrochemical plant. What training has this autonomous AI system received in ethical decision-making and character? We asked ChatGPT about the possible consequences of an AI-based autonomous petrochemical plant that has not been trained in ethical decision-making and character, and the answers were catastrophic.
    Among the potential risks: the AI may not understand the importance of minimizing environmental impact; it may not prioritize safety protocols, resulting in accidents or explosions; and it may lack the ability to consider the broader implications of its actions.
    In 1984, one of the most devastating petrochemical plant accidents in history, the Bhopal disaster, occurred at the Union Carbide pesticide plant in Bhopal, India. A leak of methyl isocyanate (MIC) released toxic gas into the surrounding area. The incident caused the immediate deaths of thousands of people and long-term health issues for many more.
    There are more than 300 petrochemical plants in the United States. These plants produce a wide range of petrochemical products, including plastics, synthetic fibers, rubber, and various chemical intermediates. The petrochemical industry plays a vital role in the U.S. economy, providing jobs and contributing to domestic manufacturing and exports.
    Training GenAIs in ethical decision-making and character is a complicated and continuous endeavor that requires ethical guidelines, continuous monitoring, human oversight, collaboration, and public input. Moreover, different countries have their own definitions of ethics and culture. For example, American and Japanese cultures place different priorities on collectivism versus individualism, hierarchy and respect, and long-term goals. Different is not necessarily good or bad, but it can make a tremendous difference in the decision-making output of GenAIs.
    Imagine this scenario. In a petrochemical plant, there is a critical issue with one of the main processing units. If the unit is shut down immediately for repairs, it will disrupt the production process and lead to significant financial losses for the company. However, delaying the shutdown could potentially result in a hazardous situation that may cause harm to a few workers directly involved in the unit's operation. This is a case of profits vs. workers’ safety. How would an autonomous AI recommend dealing with this issue?
    The Bhopal gas tragedy serves as a tragic reminder of the consequences that can arise when the interests of the few are sacrificed for the benefit of the majority. It underscores the need for strict adherence to safety regulations, proper maintenance of equipment, and transparent decision-making processes in industrial operations to prevent such catastrophic incidents from occurring in the future.
    The problem of ethical decision-making and character in GenAIs has grown dramatically because Google, Meta, Amazon, OpenAI, and Microsoft have all made cuts to their AI ethics teams. The solution is a multi-faceted approach involving increasing (instead of decreasing) the visibility of AI ethics teams, continuous monitoring of the training data used in GenAIs and how it is weighted, and computer simulations using game theory involving different scenarios.
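    The game-theory simulation idea can be sketched very simply for the shutdown dilemma described earlier. The following is a purely illustrative toy model, not anything ENEOS or any vendor actually uses: every number, probability, and name is hypothetical, and it only shows how the weight assigned to worker harm flips the AI's recommendation.

```python
# Toy expected-cost model of the "shutdown now vs. delay repair" dilemma.
# All payoffs and probabilities below are invented for illustration only.

ACTIONS = {
    # financial_cost: production loss (arbitrary units)
    # harm_probability: chance the hazard injures workers if we take this action
    "shutdown_now": {"financial_cost": 5.0, "harm_probability": 0.0},
    "delay_repair": {"financial_cost": 1.0, "harm_probability": 0.3},
}

def expected_cost(action: str, harm_cost: float) -> float:
    """Expected total cost = financial loss + probability-weighted harm cost."""
    a = ACTIONS[action]
    return a["financial_cost"] + a["harm_probability"] * harm_cost

def recommend(harm_cost: float) -> str:
    """Return the action with the lowest expected cost.

    harm_cost encodes the ethical weighting: how much cost the system
    assigns to harming a worker, relative to financial losses.
    """
    return min(ACTIONS, key=lambda name: expected_cost(name, harm_cost))

if __name__ == "__main__":
    # A system that weights worker harm heavily shuts down immediately;
    # a profit-only system (harm_cost = 0) delays the repair.
    print(recommend(harm_cost=100.0))  # shutdown_now
    print(recommend(harm_cost=0.0))    # delay_repair
```

    The point of running many such scenarios in simulation is that the recommendation is entirely determined by how harm is weighted in the training and reward setup, which is exactly the kind of choice an AI ethics team exists to scrutinize.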
    This is a podcast from CHIPS Sparks. We are your hosts, Carmen and Alberto. Subscribe to our RUclips channel, and join our Artificial Intelligence and Business Analytics group on LinkedIn. Let us revolutionize the way we think using artificial intelligence.
