Search Results (170)

Search Parameters:
Journal = AI

Review
Explainable Image Classification: The Journey So Far and the Road Ahead
AI 2023, 4(3), 620-651; https://doi.org/10.3390/ai4030033 - 01 Aug 2023
Abstract
Explainable Artificial Intelligence (XAI) has emerged as a crucial research area to address the interpretability challenges posed by complex machine learning models. In this survey paper, we provide a comprehensive analysis of existing approaches in the field of XAI, focusing on the tradeoff between model accuracy and interpretability. Motivated by the need to address this tradeoff, we conduct an extensive review of the literature, presenting a multi-view taxonomy that offers a new perspective on XAI methodologies. We analyze various sub-categories of XAI methods, considering their strengths, weaknesses, and practical challenges. Moreover, we explore causal relationships in model explanations and discuss approaches dedicated to explaining cross-domain classifiers. The latter is particularly important in scenarios where training and test data are sampled from different distributions. Drawing insights from our analysis, we propose future research directions, including exploring explainable allied learning paradigms, developing evaluation metrics for both traditionally trained and allied learning-based classifiers, and applying neural architectural search techniques to minimize the accuracy–interpretability tradeoff. This survey paper provides a comprehensive overview of the state-of-the-art in XAI, serving as a valuable resource for researchers and practitioners interested in understanding and advancing the field. Full article
(This article belongs to the Special Issue Interpretable and Explainable AI Applications)

Article
Evaluating Deep Learning Techniques for Blind Image Super-Resolution within a High-Scale Multi-Domain Perspective
AI 2023, 4(3), 598-619; https://doi.org/10.3390/ai4030032 - 01 Aug 2023
Abstract
Although several solutions and experiments addressing image super-resolution (SR), boosted by deep learning (DL), have been conducted recently, they do not usually design evaluations with high scaling factors. Moreover, the datasets are generally benchmarks that do not truly encompass significant diversity of domains to properly evaluate the techniques. It is also worth remarking that blind SR is attractive for real-world scenarios, since it is based on the idea that the degradation process is unknown and, hence, techniques in this context rely basically on low-resolution (LR) images. In this article, we present a high-scale (8×) experiment which evaluates five recent DL techniques tailored for blind image SR: Adaptive Pseudo Augmentation (APA), Blind Image SR with Spatially Variant Degradations (BlindSR), Deep Alternating Network (DAN), FastGAN, and Mixture of Experts Super-Resolution (MoESR). We consider 14 datasets from five different broader domains (Aerial, Fauna, Flora, Medical, and Satellite); another remark is that some of the DL approaches were designed for single-image SR while others were not. Based on two no-reference metrics, NIQE and the transformer-based MANIQA score, MoESR can be regarded as the best solution, although the perceptual quality of the created high-resolution (HR) images of all the techniques still needs to improve. Full article
(This article belongs to the Topic Applied Computing and Machine Intelligence (ACMI))

Article
Applying Few-Shot Learning for In-the-Wild Camera-Trap Species Classification
AI 2023, 4(3), 574-597; https://doi.org/10.3390/ai4030031 - 31 Jul 2023
Abstract
Few-shot learning (FSL) describes the challenge of learning a new task using a minimum amount of labeled data, and we have observed significant progress made in this area. In this paper, we explore the effectiveness of the FSL theory by considering a real-world problem where labels are hard to obtain. To assist a large study on chimpanzee hunting activities, we aim to classify various animal species that appear in our in-the-wild camera traps located in Senegal. Using the philosophy of FSL, we aim to train an FSL network to learn to separate animal species using large public datasets and implement the network on our data with its novel species/classes and unseen environments, needing only to label a few images per new species. Here, we first discuss constraints and challenges caused by having in-the-wild uncurated data, which are often not addressed in benchmark FSL datasets. Considering these new challenges, we create two experiments and corresponding evaluation metrics to determine a network’s usefulness in a real-world implementation scenario. We then compare results from various FSL networks, and describe how factors may affect a network’s potential real-world usefulness. We consider network design factors such as distance metrics or extra pre-training, and examine their roles in a real-world implementation setting. We also consider additional factors such as support set selection and ease of implementation, which are usually ignored when a benchmark dataset has been established. Full article
(This article belongs to the Special Issue Feature Papers for AI)
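The distance-metric design factor discussed in the abstract can be illustrated with a minimal nearest-prototype classifier in the style of Prototypical Networks; the 2-D "embeddings" and the tiny 2-way 2-shot episode below are invented for illustration and are not the paper's networks or camera-trap data:

```python
import numpy as np

def prototype_classify(support_emb, support_labels, query_emb):
    """Nearest-prototype few-shot classification (ProtoNet-style)."""
    classes = np.unique(support_labels)
    # One prototype per class: the mean of its support embeddings.
    protos = np.stack([support_emb[support_labels == c].mean(axis=0)
                       for c in classes])
    # Assign each query to the class with the closest prototype (Euclidean).
    dists = np.linalg.norm(query_emb[:, None, :] - protos[None, :, :], axis=-1)
    return classes[np.argmin(dists, axis=1)]

# Toy 2-way 2-shot episode with 2-D "embeddings".
support = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.8]])
labels = np.array([0, 0, 1, 1])
queries = np.array([[0.1, 0.2], [4.9, 5.2]])
print(prototype_classify(support, labels, queries))  # → [0 1]
```

Only a few labeled support images per novel species are needed at deployment time, which is exactly the property the paper exploits; swapping the distance metric (e.g., cosine instead of Euclidean) is one of the design factors the authors compare.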

Article
Improving Alzheimer’s Disease and Brain Tumor Detection Using Deep Learning with Particle Swarm Optimization
AI 2023, 4(3), 551-573; https://doi.org/10.3390/ai4030030 - 28 Jul 2023
Abstract
Convolutional Neural Networks (CNNs) have exhibited remarkable potential in effectively tackling the intricate task of classifying MRI images, specifically in Alzheimer's disease detection and brain tumor identification. While CNNs optimize their parameters automatically through training, finding the optimal values for their hyper-parameters remains challenging due to the complexity of the search space and the potential for suboptimal results; researchers therefore often rely on trial-and-error methods, iterative experimentation, and expert judgment to fine-tune these parameters and maximize CNN performance. This poses a significant obstacle in developing real-world applications that leverage CNNs for MRI image analysis. This paper presents a new hybrid model that combines the Particle Swarm Optimization (PSO) algorithm with CNNs to enhance detection and classification capabilities. Our method utilizes the PSO algorithm to determine the optimal configuration of CNN hyper-parameters; these optimized parameters are then applied to the CNN architectures for classification. As a result, our hybrid model exhibits improved prediction accuracy for brain diseases while reducing the loss function value. To evaluate the performance of our proposed model, we conducted experiments using three benchmark datasets: two for Alzheimer's disease, namely the Alzheimer's Disease Neuroimaging Initiative (ADNI) and an international dataset from Kaggle, and a third focused on brain tumors. The experimental assessment demonstrated the superiority of our proposed model, achieving accuracy rates of 98.50%, 98.83%, and 97.12% on these datasets, respectively. Full article
(This article belongs to the Special Issue Feature Papers for AI)
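As a rough illustration of using PSO to search a CNN hyper-parameter space, the sketch below runs a generic particle swarm over a stand-in objective; the quadratic `val_loss` merely imitates "train the CNN and return validation loss" over (log10 learning rate, dropout), and all constants are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def pso(objective, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5,
        seed=0):
    """Minimal particle swarm: returns the best position and value found."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    pos = rng.uniform(lo, hi, (n_particles, len(lo)))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    g = pbest[np.argmin(pbest_val)].copy()           # global best
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        # Velocity update: inertia + pull toward personal and global bests.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Stand-in for "train the CNN and return validation loss"; the true optimum
# of this toy surface is (-3.0, 0.5).
val_loss = lambda p: (p[0] + 3.0) ** 2 + (p[1] - 0.5) ** 2
bounds = np.array([[-5.0, -1.0], [0.0, 0.9]])        # (log10 lr, dropout)
best, loss = pso(val_loss, bounds)
print(np.round(best, 2))  # close to [-3.0, 0.5]
```

In the paper's setting, each objective evaluation would be a (costly) CNN training run; the swarm simply treats that run as a black box, which is why PSO suits hyper-parameter search.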

Article
High-Performance and Lightweight AI Model for Robot Vacuum Cleaners with Low Bitwidth Strong Non-Uniform Quantization
AI 2023, 4(3), 531-550; https://doi.org/10.3390/ai4030029 - 27 Jul 2023
Abstract
Artificial intelligence (AI) plays a critical role in the operation of robot vacuum cleaners, enabling them to intelligently navigate to clean and avoid indoor obstacles. Due to limited computational resources, manufacturers must balance performance and cost, which necessitates the development of lightweight AI models that can still achieve high performance. Traditional uniform weight quantization assigns the same number of levels to all weights, regardless of their distribution or importance; this lack of adaptability may lead to sub-optimal quantization results, as the quantization levels do not align with the statistical properties of the weights. To address this challenge, in this work we propose a new technique called low bitwidth strong non-uniform quantization, which largely reduces the memory footprint of AI models while maintaining high accuracy. In contrast to traditional uniform quantization, our non-uniform quantization method aligns the quantization levels with the actual weight distribution of well-trained neural network models, leveraging the observed weight distribution characteristics to enhance the efficiency of neural network implementations. Additionally, we adjust the input image size to reduce the computational and memory demands of AI models, aiming to identify an appropriate image size and corresponding AI models that can be used in resource-constrained robot vacuum cleaners while still achieving acceptable accuracy on the object classification task. Experimental results indicate that, compared to state-of-the-art AI models in the literature, the proposed AI model achieves a 2-fold decrease in memory usage, from 15.51 MB down to 7.68 MB, while maintaining the same accuracy of around 93%. In addition, the proposed non-uniform quantization model reduces memory usage by 20 times (from 15.51 MB down to 0.78 MB) with a slight accuracy drop of 3.11% (the classification accuracy is still above 90%). Thus, our proposed high-performance and lightweight AI model strikes an excellent balance between model complexity, classification accuracy, and computational resources for robot vacuum cleaners. Full article
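The intuition that quantization levels should follow the weight distribution can be sketched with a 1-D k-means (Lloyd-Max) codebook on synthetic Gaussian "weights"; this is a generic non-uniform scheme for illustration, not the paper's specific strong non-uniform quantizer:

```python
import numpy as np

def quantize(w, levels):
    """Snap each weight to its nearest codebook level."""
    return levels[np.argmin(np.abs(w[:, None] - levels[None, :]), axis=1)]

def lloyd_levels(w, n_levels, iters=30):
    """1-D k-means (Lloyd-Max): a non-uniform codebook that tracks the
    weight distribution, dense where weights are concentrated."""
    levels = np.percentile(w, np.linspace(0, 100, n_levels))  # warm start
    for _ in range(iters):
        idx = np.argmin(np.abs(w[:, None] - levels[None, :]), axis=1)
        for k in range(n_levels):
            if np.any(idx == k):
                levels[k] = w[idx == k].mean()   # move level to its centroid
    return levels

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, 20_000)           # bell-shaped, like trained weights
uniform = np.linspace(w.min(), w.max(), 8)   # 3-bit uniform codebook
nonuniform = lloyd_levels(w, 8)              # 3-bit non-uniform codebook
err_u = np.mean((w - quantize(w, uniform)) ** 2)
err_n = np.mean((w - quantize(w, nonuniform)) ** 2)
print(f"3-bit uniform MSE:     {err_u:.2e}")
print(f"3-bit non-uniform MSE: {err_n:.2e}")
```

On a bell-shaped weight histogram the non-uniform codebook yields a visibly lower reconstruction error at the same bitwidth, which is the effect the abstract exploits to shrink the model without losing accuracy.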

Article
Federated Learning for IoT Intrusion Detection
AI 2023, 4(3), 509-530; https://doi.org/10.3390/ai4030028 - 24 Jul 2023
Abstract
The number of Internet of Things (IoT) devices has increased considerably in the past few years, resulting in a large growth of cyber attacks on IoT infrastructure. As part of a defense in depth approach to cybersecurity, intrusion detection systems (IDSs) have acquired a key role in attempting to detect malicious activities efficiently. Most modern approaches to IDS in IoT are based on machine learning (ML) techniques. The majority of these are centralized, which implies the sharing of data from source devices to a central server for classification. This presents potentially crucial issues related to privacy of user data as well as challenges in data transfers due to their volumes. In this article, we evaluate the use of federated learning (FL) as a method to implement intrusion detection in IoT environments. FL is an alternative, distributed method to centralized ML models, which has seen a surge of interest in IoT intrusion detection recently. In our implementation, we evaluate FL using a shallow artificial neural network (ANN) as the shared model and federated averaging (FedAvg) as the aggregation algorithm. The experiments are completed on the ToN_IoT and CICIDS2017 datasets in binary and multiclass classification. Classification is performed by the distributed devices using their own data. No sharing of data occurs among participants, maintaining data privacy. When compared against a centralized approach, results have shown that a collaborative FL IDS can be an efficient alternative, in terms of accuracy, precision, recall and F1-score, making it a viable option as an IoT IDS. Additionally, with these results as baseline, we have evaluated alternative aggregation algorithms, namely FedAvgM, FedAdam and FedAdagrad, in the same setting by using the Flower FL framework. The results from the evaluation show that, in our scenario, FedAvg and FedAvgM tend to perform better compared to the two adaptive algorithms, FedAdam and FedAdagrad. Full article
(This article belongs to the Special Issue Feature Papers for AI)
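The FedAvg aggregation step named in the abstract is, at its core, a dataset-size-weighted average of client model parameters; a minimal sketch with toy parameter lists (plain numpy, not the Flower framework or the paper's ANN):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg: average client parameters weighted by local dataset size."""
    total = sum(client_sizes)
    return [
        sum(n / total * w[k] for w, n in zip(client_weights, client_sizes))
        for k in range(len(client_weights[0]))
    ]

# Two clients, each holding one weight matrix and one bias vector locally;
# raw data never leaves a client, only these parameters are shared.
w_a = [np.array([[1.0, 2.0]]), np.array([0.0])]
w_b = [np.array([[3.0, 4.0]]), np.array([1.0])]
global_w = fedavg([w_a, w_b], client_sizes=[100, 300])
print(global_w[0], global_w[1])  # [[2.5 3.5]] [0.75]
```

Variants such as FedAvgM, FedAdam, and FedAdagrad keep this weighted-average update but add server-side momentum or adaptive steps, which is the axis the paper's second set of experiments compares.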

Article
Training Artificial Neural Networks Using a Global Optimization Method That Utilizes Neural Networks
AI 2023, 4(3), 491-508; https://doi.org/10.3390/ai4030027 - 20 Jul 2023
Abstract
Perhaps one of the best-known machine learning models is the artificial neural network, where a number of parameters must be adjusted to learn a wide range of practical problems from areas such as physics, chemistry, and medicine. Such problems, whether classification or regression, can be reduced to pattern recognition problems and then modeled with artificial neural networks. To achieve their goal, neural networks must be trained by appropriately adjusting their parameters using some global optimization method. In this work, the application of a recent global minimization technique is suggested for the adjustment of neural network parameters. In this technique, an approximation of the objective function to be minimized is created using artificial neural networks, and sampling is then performed from the approximation function rather than the original one. Therefore, in the present work, the parameters of artificial neural networks are learned using other neural networks. The new training method was tested on a series of well-known problems, and a comparative study was conducted against other neural network parameter tuning techniques, with more than promising results: across both classification and regression datasets, the proposed technique showed a significant difference in performance, ranging from 30% on classification datasets up to 50% on regression problems. However, because the proposed technique relies on global optimization involving artificial neural networks, it may require significantly more execution time than other techniques. Full article
(This article belongs to the Special Issue Feature Papers for AI)
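The core idea, building a cheap approximation of the objective and sampling from the approximation rather than the original, can be sketched with an RBF interpolant standing in for the paper's neural-network surrogate; the 1-D toy objective and all settings below are illustrative assumptions:

```python
import numpy as np

def rbf_surrogate(X, y, gamma=1.0):
    """Fit an RBF interpolant as a cheap stand-in for a neural surrogate."""
    K = np.exp(-gamma * (X[:, None] - X[None, :]) ** 2)
    coef = np.linalg.solve(K + 1e-6 * np.eye(len(X)), y)  # ridge for stability
    return lambda x: np.exp(-gamma * (x - X) ** 2) @ coef

def surrogate_minimize(f, lo, hi, n_init=8, rounds=10, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, n_init)
    y = np.array([f(x) for x in X])
    for _ in range(rounds):
        s = rbf_surrogate(X, y)
        cand = rng.uniform(lo, hi, 200)        # sample the surrogate, not f
        x_new = cand[np.argmin([s(c) for c in cand])]
        X = np.append(X, x_new)                # evaluate f only at the winner
        y = np.append(y, f(x_new))
    return X[np.argmin(y)], y.min()

f = lambda x: (x - 2.0) ** 2 + 1.0             # toy objective, minimum at x=2
x_best, f_best = surrogate_minimize(f, 0.0, 5.0)
print(round(x_best, 2), round(f_best, 3))
```

The expensive objective (in the paper, the neural network's training error as a function of its parameters) is queried sparingly, while the bulk of the search happens on the cheap approximation.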

Commentary
Predictive Analytics with a Transdisciplinary Framework in Promoting Patient-Centric Care of Polychronic Conditions: Trends, Challenges, and Solutions
AI 2023, 4(3), 482-490; https://doi.org/10.3390/ai4030026 - 13 Jul 2023
Abstract
Context. This commentary is based on an innovative approach to the development of predictive analytics. It is centered on the development of predictive models for varying stages of chronic disease through integrating all types of datasets, adds various new features to a theoretically driven data warehouse, creates purpose-specific prediction models, and integrates multi-criteria predictions of chronic disease progression based on a biomedical evolutionary learning platform. After merging across-center databases based on the risk factors identified from modeling the predictors of chronic disease progression, the collaborative investigators could conduct multi-center verification of the predictive model and further develop a clinical decision support system coupled with visualization of a shared decision-making feature for patient care. The Study Problem. The success of health services management research is dependent upon the stability of pattern detection and the usefulness of nosological classification formulated from big-data-to-knowledge research on chronic conditions. However, longitudinal observations with multiple waves of predictors and outcomes are needed to capture the evolution of polychronic conditions. Motivation. The transitional probabilities could be estimated from big-data analysis with further verification. Simulation or predictive models could then generate a useful explanatory pathogenesis of the end-stage disorder or outcomes. Hence, the clinical decision support system for patient-centered interventions could be systematically designed and executed. Methodology. A customized algorithm for polychronic conditions coupled with constraints-oriented reasoning approaches is suggested. Based on theoretical specifications of causal inquiries, we could mitigate the effects of multiple confounding factors in conducting evaluation research on the determinants of patient care outcomes. This is what we consider the mechanism for avoiding black-box expression in the formulation of predictive analytics. The remaining task is to gather new data to verify the practical utility of the proposed and validated predictive equation(s). More specifically, this includes two approaches guiding future research on chronic disease and care management: (1) to develop a biomedical evolutionary learning platform to predict the risk of polychronic conditions at various stages, especially for predicting the micro- and macro-cardiovascular complications experienced by patients with Type 2 diabetes for multidisciplinary care; and (2) to formulate appropriate prescriptive intervention services, such as patient-centered care management interventions for a high-risk group of patients with polychronic conditions. Conclusions. The commentary has identified trends, challenges, and solutions in conducting innovative AI-based healthcare research that can improve understanding of disease-state transitions from diabetes to other polychronic conditions. Hence, better predictive models could be further formulated to expand from inductive (problem-solving) to deductive (theory-based and hypothesis-testing) inquiries in care management research. Full article

Article
A Robust Vehicle Detection Model for LiDAR Sensor Using Simulation Data and Transfer Learning Methods
AI 2023, 4(2), 461-481; https://doi.org/10.3390/ai4020025 - 01 Jun 2023
Abstract
Vehicle detection in parking areas provides the spatial and temporal utilisation of parking spaces. Parking observations are typically performed manually, limiting the temporal resolution due to the high labour cost. This paper uses simulated data and transfer learning to build a robust real-world model for vehicle detection and classification from single-beam LiDAR in a roadside parking scenario. It presents a synthetically augmented transfer learning approach for LiDAR-based vehicle detection together with the implementation of synthetic LiDAR data. The synthetically augmented transfer learning method was used to supplement the small real-world data set and allow the development of data-handling techniques; in addition, it increases the robustness and overall accuracy of the model. Experiments show that the method can be used for fast deployment of the model for vehicle detection using a LiDAR sensor. Full article
(This article belongs to the Section AI Systems: Theory and Applications)

Review
Machine-Learning-Based Prediction Modelling in Primary Care: State-of-the-Art Review
AI 2023, 4(2), 437-460; https://doi.org/10.3390/ai4020024 - 23 May 2023
Abstract
Primary care has the potential to be transformed by artificial intelligence (AI) and, in particular, machine learning (ML). This review summarizes the potential of ML and its subsets in influencing two domains of primary care: preoperative care and screening. ML can be utilized in preoperative treatment to forecast postoperative results and assist physicians in selecting surgical interventions. Clinicians can modify their strategy to reduce risk and enhance outcomes by using ML algorithms to examine patient data and discover factors that increase the risk of worsened health outcomes. ML can also enhance the precision and effectiveness of screening tests: healthcare professionals can identify diseases at an early and curable stage by using ML models to examine medical images and other diagnostic modalities and to spot patterns that may suggest disease or anomalies. Before the onset of symptoms, ML can be used to identify people at an increased risk of developing specific disorders or diseases, with algorithms assessing patient data such as medical history, genetics, and lifestyle factors to identify those at higher risk. This enables targeted interventions such as lifestyle adjustments or early screening. In general, using ML in primary care offers the potential to enhance patient outcomes, reduce healthcare costs, and boost productivity. Full article

Article
An Empirical Comparison of Interpretable Models to Post-Hoc Explanations
AI 2023, 4(2), 426-436; https://doi.org/10.3390/ai4020023 - 19 May 2023
Abstract
Recently, considerable effort has gone into explaining opaque black-box models, such as deep neural networks or random forests. So-called model-agnostic methods typically approximate the prediction of the black-box model with an interpretable model, without considering any specifics of the black-box model itself. It is a valid question whether directly learning interpretable white-box models should not be preferred over post-hoc approximations of black-box models. In this paper, we report the results of an empirical study which compares post-hoc explanations and interpretable models on several datasets, for both rule-based and feature-based interpretable models. The results seem to underline that directly learned interpretable models often approximate the black-box models at least as well as their post-hoc surrogates, even though the former do not have direct access to the black-box model. Full article
(This article belongs to the Special Issue Interpretable and Explainable AI Applications)
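The comparison can be mimicked on a toy problem: fit the same simple rule (here a single-feature decision stump, invented for illustration and not one of the paper's models) once directly on the true labels and once on the outputs of a noisy "black box", then score both against the ground truth:

```python
import numpy as np

def fit_stump(X, y):
    """Exhaustively pick the single-feature threshold rule with best accuracy."""
    best_acc, best_rule = 0.0, None
    for j in range(X.shape[1]):
        for t in X[:, j]:
            for flip in (False, True):
                pred = (X[:, j] > t) ^ flip
                acc = (pred == y).mean()
                if acc > best_acc:
                    best_acc, best_rule = acc, (j, t, flip)
    j, t, flip = best_rule
    return lambda Z: (Z[:, j] > t) ^ flip

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (400, 3))
y = X[:, 1] > 0.1                      # ground truth depends on feature 1 only
# A "black box" that is right except for noise near the decision boundary.
black_box = lambda Z: Z[:, 1] + 0.05 * rng.standard_normal(len(Z)) > 0.1

direct = fit_stump(X, y)               # interpretable model from the labels
posthoc = fit_stump(X, black_box(X))   # surrogate from black-box outputs
print("direct acc:  ", (direct(X) == y).mean())
print("post-hoc acc:", (posthoc(X) == y).mean())
```

The directly learned rule recovers the true threshold exactly, while the post-hoc surrogate inherits the black box's boundary noise, matching the tendency the abstract reports.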

Article
AI in Energy: Overcoming Unforeseen Obstacles
AI 2023, 4(2), 406-425; https://doi.org/10.3390/ai4020022 - 12 May 2023
Abstract
As in many other sectors, artificial intelligence (AI) will drive the transformation of the energy sector, offering new approaches to optimize the operation and reliability of energy systems and ensuring techno-economic advantages. However, integrating AI into the energy sector is associated with unforeseen obstacles that might temper optimistic approaches to AI integration. From a multidimensional perspective, these challenges are identified, categorized based on common dependency attributes, and evaluated to align with viable recommendations. A multidisciplinary approach is employed through an exhaustive literature review to assess the main challenges facing the integration of AI into the energy sector. This study also provides insights and recommendations on overcoming these obstacles and highlights the potential benefits of successful integration. The findings suggest the need for a coordinated approach to overcome unforeseen obstacles and can serve as a valuable resource for policymakers, energy practitioners, and researchers looking to unlock the potential of AI in the energy sector. Full article

Communication
Challenges and Limitations of ChatGPT and Artificial Intelligence for Scientific Research: A Perspective from Organic Materials
AI 2023, 4(2), 401-405; https://doi.org/10.3390/ai4020021 - 04 May 2023
Abstract
Artificial Intelligence (AI) has emerged as a transformative technology in the scientific community, with the potential to accelerate and enhance research in various fields. ChatGPT, a popular language model, is one such AI-based system that is increasingly being discussed and adopted in scientific research. However, as with any technology, there are challenges and limitations that need to be addressed. This paper focuses on the challenges and limitations that ChatGPT faces in the domain of organic materials research, taking organic materials as examples of its use. Overall, this paper aims to provide insights into these challenges and limitations for researchers working in the field of organic materials. Full article
Article
CAA-PPI: A Computational Feature Design to Predict Protein–Protein Interactions Using Different Encoding Strategies
AI 2023, 4(2), 385-400; https://doi.org/10.3390/ai4020020 - 28 Apr 2023
Abstract
Protein–protein interactions (PPIs) are involved in an extensive variety of biological procedures, including cell-to-cell interactions, and metabolic and developmental control. PPIs are becoming one of the most important aims of systems biology, and they play a fundamental part in predicting the protein function of the target protein and the druggability of molecules. An abundance of work has been performed to develop methods to computationally predict PPIs, as this supplements laboratory trials and offers a cost-effective way of predicting the most likely set of interactions at the entire proteome scale. This article presents an innovative feature representation method (CAA-PPI) to extract features from protein sequences using two different encoding strategies followed by an ensemble learning method. The random forest method was used as a classifier for PPI prediction. CAA-PPI considers the role of the trigram and bond of a given amino acid with its nearby ones. The proposed PPI model achieved more than a 98% prediction accuracy with one encoding scheme and more than a 95% prediction accuracy with the other for the two diverse PPI datasets, i.e., H. pylori and Yeast. Further, investigations were performed to compare the CAA-PPI approach with existing sequence-based methods and revealed the proficiency of the proposed method with both encoding strategies. To further assess the practical prediction competence, a blind test was implemented on five other species' datasets independent of the training set, and the obtained results ascertained the productivity of CAA-PPI with both encoding schemes. Full article
(This article belongs to the Special Issue Feature Papers for AI)
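The role of trigrams over grouped amino acids can be sketched with a conjoint-triad-style frequency encoding; the seven-class grouping below is a common reduction used purely for illustration and may differ from the paper's exact CAA scheme:

```python
from collections import Counter
from itertools import product

# Conjoint-triad-style grouping: the 20 amino acids collapsed into 7 classes
# (a common reduction; the paper's exact CAA encoding may differ).
GROUPS = {aa: g for g, aas in enumerate(
    ["AGV", "ILFP", "YMTS", "HNQW", "RK", "DE", "C"]) for aa in aas}

def trigram_features(seq):
    """343-dim frequency vector of class trigrams in a protein sequence."""
    classes = [GROUPS[aa] for aa in seq if aa in GROUPS]
    counts = Counter(zip(classes, classes[1:], classes[2:]))
    total = max(sum(counts.values()), 1)
    # One slot per ordered triple of the 7 classes: 7**3 = 343 features.
    return [counts[t] / total for t in product(range(7), repeat=3)]

vec = trigram_features("MKTAYIAKQR")     # a made-up short peptide
print(len(vec), round(sum(vec), 6))      # 343-dim vector; entries sum to 1
```

Feature vectors of this shape for each protein in a candidate pair would then be concatenated and fed to a classifier such as the random forest the paper uses.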

Commentary
Marketing with ChatGPT: Navigating the Ethical Terrain of GPT-Based Chatbot Technology
AI 2023, 4(2), 375-384; https://doi.org/10.3390/ai4020019 - 10 Apr 2023
Abstract
ChatGPT is an AI-powered chatbot platform that enables human users to converse with machines. It utilizes natural language processing and machine learning algorithms, transforming how people interact with AI technology. ChatGPT offers significant advantages over previous similar tools, and its potential for application in various fields has generated attention and anticipation. However, some experts are wary of ChatGPT, citing ethical implications. This paper therefore shows that ChatGPT has significant potential to transform marketing and shape its future, provided certain ethical considerations are taken into account. First, we argue that ChatGPT-based tools can help marketers create content faster, potentially with quality similar to that of human content creators. They can also assist marketers in conducting more efficient research, understanding customers better, automating customer service, and improving efficiency. We then discuss the ethical implications and potential risks for marketers, consumers, and other stakeholders that are essential to consider in ChatGPT-based marketing; attending to them can help revolutionize marketing while avoiding potential harm to stakeholders. Full article
(This article belongs to the Special Issue Standards and Ethics in AI)
