Advancing the Battle Against Fake News: My AI Journey

My journey into the realm of AI commenced with humble beginnings. Armed with determination and curiosity, I delved into the world of machine learning, starting with basic models from the Python library scikit-learn. I embarked on creating a rudimentary model that analyzed text data from both fake and reputable news sources. The outcome was a classification system that provided a clear verdict: 0 for fake, 1 for true, and 2 for an uncertain result. During this phase, I experimented with various classification algorithms, including K-Nearest Neighbors (KNN). This initial foray was an essential stepping stone, providing me with valuable insights and the motivation to push the boundaries of what my AI could achieve. Later, I moved up to the big leagues, working with ChatGPT and advanced large language models (LLMs).
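To give a flavour of that first model, here is a minimal sketch of the approach: TF-IDF features feeding a KNN classifier, with a confidence threshold producing the third "uncertain" verdict. The toy training texts, the threshold value, and the `classify` helper are my illustrative choices, not the project's actual data or code.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

# Toy training data (illustrative only): 1 = true, 0 = fake.
train_texts = [
    "Scientists confirm water is essential for human health",
    "Local council approves new library budget after public vote",
    "Aliens secretly control all world governments, insider claims",
    "Miracle fruit cures every disease overnight, doctors stunned",
]
train_labels = [1, 1, 0, 0]

# Turn raw text into TF-IDF feature vectors.
vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train_texts)

# K-Nearest Neighbors over the TF-IDF space.
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, train_labels)

def classify(text: str, threshold: float = 0.7) -> int:
    """Return 0 (fake), 1 (true), or 2 (uncertain)."""
    X = vectorizer.transform([text])
    probs = knn.predict_proba(X)[0]
    if probs.max() < threshold:
        return 2  # neighbors disagree: not confident enough either way
    return int(probs.argmax())
```

The threshold is what turns a binary classifier into the three-way verdict described above: when the nearest neighbors split, the model admits uncertainty instead of guessing.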

Global Leadership Program

As fate would have it, my high school, Pickering College, has a unique program called the Global Leadership Program (GLP). This program empowers students to explore and take action on global issues. For my mission, I chose to combat the proliferation of fake news and misinformation.

The foundational AI model I had developed became the cornerstone of my action plan. However, it quickly became apparent that if I wanted to make a substantial impact and drive real change, my AI needed to evolve beyond its black-and-white classification.

The Shift to Deep Neural Networks

To enhance the capabilities of my AI, I transitioned to using TensorFlow, a powerful tool for building machine learning models, particularly deep neural networks. This shift allowed me to develop a more intricate and nuanced system for fake news detection.

My advanced algorithm went beyond the binary classification and provided users with comprehensive information about the text it analyzed. It distinguished between opinion-based and fact-based content and assessed the veracity of claims by examining supporting evidence. This multifaceted approach serves to educate and assist readers in critically evaluating news articles, distinguishing between factual evidence and biased claims.
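One way to structure such a system in TensorFlow is a model with two output heads: one estimating whether text is opinion- or fact-based, the other scoring claim veracity. The sketch below shows this shape; the layer sizes, vocabulary size, and adaptation texts are illustrative assumptions, not the project's actual architecture.

```python
import tensorflow as tf

# Learn a vocabulary and map raw strings to multi-hot vectors.
vectorize = tf.keras.layers.TextVectorization(
    max_tokens=20_000, output_mode="multi_hot")
vectorize.adapt([
    "example fact-based article with cited evidence",
    "example opinion piece with loaded language",
])

# Shared trunk, then two task-specific heads.
inputs = tf.keras.Input(shape=(1,), dtype=tf.string)
x = vectorize(inputs)
x = tf.keras.layers.Dense(64, activation="relu")(x)
x = tf.keras.layers.Dense(32, activation="relu")(x)
opinion_head = tf.keras.layers.Dense(1, activation="sigmoid", name="opinion")(x)
veracity_head = tf.keras.layers.Dense(1, activation="sigmoid", name="veracity")(x)

model = tf.keras.Model(inputs, [opinion_head, veracity_head])
model.compile(
    optimizer="adam",
    loss={"opinion": "binary_crossentropy",
          "veracity": "binary_crossentropy"},
)
```

Sharing the lower layers lets both judgments draw on the same learned text features, while each head can be trained on its own labels.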

The Big Leagues

To further expand my capabilities, I began utilizing the ChatGPT API, which let me apply the ChatGPT model to text analysis far more effectively. By employing prompt engineering techniques, I refined my interactions with the API to achieve more accurate and tailored results. This involved crafting specific, context-aware prompts that guided the AI to generate responses aligned with the goals of my analysis, such as identifying nuanced biases or providing detailed breakdowns.

The ChatGPT API served as the backbone for processing and analyzing text, enabling me to tap into advanced natural language understanding without having to build the underlying model myself. Initially, I experimented with simple queries to understand the API’s capabilities and limitations. Through iterative testing, I discovered that the quality of the input prompts significantly influenced the output. This led me to delve deeper into prompt engineering, designing and fine-tuning prompts to achieve specific outcomes. For instance, I would include contextual details, clarify the purpose of the analysis, and provide examples within the prompt to guide the AI more effectively.
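The prompt pattern described above, stating the task, constraining the output format, and including a worked example, can be sketched as follows. The exact prompt wording, the JSON schema, and the model name are my assumptions for illustration; they are not the project's actual prompts.

```python
def build_prompt(article_text: str) -> str:
    """Compose a context-aware prompt: state the task, constrain the
    output format, and include one worked example (illustrative wording)."""
    return (
        "You are a fact-checking assistant. Classify the article below.\n"
        'Respond with JSON: {"type": "opinion"|"fact", "bias_notes": "..."}.\n\n'
        "Example:\n"
        "Article: 'The mayor's plan is a disgrace and will ruin the city.'\n"
        'Output: {"type": "opinion", "bias_notes": "loaded language"}\n\n'
        f"Article: '{article_text}'\n"
        "Output:"
    )

def analyze(article_text: str, model: str = "gpt-4o-mini") -> str:
    """Send the prompt via the OpenAI SDK. Requires OPENAI_API_KEY in the
    environment; the default model name is an assumption."""
    from openai import OpenAI  # imported lazily so build_prompt works without the SDK
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": build_prompt(article_text)}],
    )
    return response.choices[0].message.content
```

Pinning the output to a JSON schema inside the prompt is what makes the responses machine-parseable downstream, rather than free-form prose.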

Reaching this stage required significant groundwork. I studied the API’s documentation to understand its parameters and capabilities fully. Then, I integrated the API into a backend system, writing scripts to automate text submissions and parse the AI’s responses. Along the way, I tackled challenges such as optimizing API calls for speed and efficiency, handling edge cases in text input, and ensuring the outputs were interpretable and actionable. By systematically testing and refining these processes, I developed a reliable framework for utilizing the API in my projects.
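Two pieces of that backend groundwork, retrying transient failures and parsing responses defensively, can be sketched like this. The function names and fallback record are hypothetical; the real scripts are not shown in this post.

```python
import json
import time

def parse_analysis(raw: str) -> dict:
    """Parse the model's JSON reply; fall back to a flagged record when
    the output is not valid JSON (a common edge case with LLM output)."""
    try:
        data = json.loads(raw)
    except (json.JSONDecodeError, TypeError):
        return {"type": "unknown", "bias_notes": "", "parse_error": True}
    data.setdefault("parse_error", False)
    return data

def submit_with_retry(call, text: str, retries: int = 3, delay: float = 1.0):
    """Automate submissions: reject empty input, retry transient failures
    with exponential backoff. `call` stands in for the API request function."""
    if not text.strip():
        raise ValueError("empty input text")  # edge case: nothing to analyze
    for attempt in range(retries):
        try:
            return call(text)
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(delay * (2 ** attempt))  # back off before retrying
```

Wrapping every API call this way keeps one malformed reply or dropped connection from halting a whole batch of articles.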

The Path Forward: From Idea to Reality

To bring this creation to life, I applied to and was admitted into a new, more advanced subprogram of the GLP called the Incubator Program. Here, I worked tirelessly to transform my idea into a fully functional AI model with an intuitive user interface. With the tools and funding from the Incubator Program, I created a tool that empowers users to make informed decisions when consuming news and information, contributing to a world with less misinformation and more critical thinking.

Together, we can stand against the tide of fake news and work towards a more informed and resilient society.

Thank you for joining me on this transformative endeavor.