Children are heading back to school this week, and there’s more to think about than just ChatGPT.
Kids, teachers, and even parents have gotten a crash course in artificial intelligence over the past year, thanks to the wildly popular AI chatbot ChatGPT.
Some schools, including New York City’s public schools, banned the tool in a knee-jerk reaction, only to reverse course months later. Now that many adults are familiar with the technology, schools have started exploring how AI systems might help teach younger generations vital lessons in critical thinking.
But children aren’t encountering AI chatbots only at school. AI is becoming pervasive in daily life: Netflix uses it to recommend shows, Alexa uses it to answer our questions, your favorite interactive Snapchat filters run on it, and you unlock your smartphone with it.
Understanding the basics of how these systems work is becoming an essential form of literacy, and something every high school graduate should have, says Regina Barzilay, a professor at MIT and the faculty lead for AI at the MIT Jameel Clinic, even though some students will inevitably be more interested in AI than others. The clinic recently wrapped up a summer program for 51 high school students interested in how AI is used in medicine.
Parents, she says, should encourage their children to take an interest in the systems that are playing an ever-larger role in our lives. “Moving forward, it could create enormous disparities if the only people who go to university and study data science and computer science understand how it works,” she adds.
1. Remember: AI isn’t your buddy.
Chatbots are built to do exactly what their name suggests: chat. The warm, conversational tone ChatGPT adopts when answering questions can make it easy for students to forget that they’re interacting with an AI system rather than a trusted friend or confidante. That could make them more likely to believe what these chatbots say instead of taking their suggestions with a grain of salt. Although chatbots are very good at sounding like a compassionate human, they are merely mimicking human speech from data scraped off the internet, says Helen Crompton, a professor at Old Dominion University who specializes in digital innovation in education.
“We need to remind children not to give systems like ChatGPT sensitive personal information, because it’s all going to go into a large database,” she says. Once your information is in the database, it’s extremely difficult to remove. Technology companies could use it to make money without your knowledge or consent, or hackers could steal it.
2. AI models cannot replace search engines.
Large language models are only as accurate as the data they’re trained on. That means that although chatbots are skilled at confidently answering questions with convincing-sounding language, not everything they say will be accurate or reliable. AI language models are also known to present falsehoods as facts. And depending on where their data was collected, they can perpetuate bias that is inaccurate or even harmful. Students should treat chatbots’ answers the way they would any other information they encounter on the internet: critically.
Because what these tools tell us depends on what they were trained on, their output doesn’t reflect everyone. “Not everybody is on the internet, so they won’t be reflected,” says Victor Lee, an associate professor at the Stanford Graduate School of Education who has created free AI resources for high school curriculums. Students should pause and reflect before they click, share, or repost anything, he says, and be more skeptical of what they’re seeing and believing, because a lot of it might be fake.
It may be tempting to rely on chatbots to answer questions, but they are not a replacement for Google or other search engines, says David Smith, a professor of bioscience education at Sheffield Hallam University in the UK, who has been preparing to help his students navigate AI’s uses in their own learning. Students shouldn’t accept anything large language models say as indisputable fact, he says: “Whatever answer it gives you, you’re going to have to check it.”
3. Teachers might accuse you of using AI when you haven’t.
One of the biggest challenges the widespread availability of generative AI poses for educators is working out whether students have used AI to write their assignments. Although plenty of companies have launched products that promise to detect whether text was written by a human or a machine, AI text detectors are deeply unreliable and easy to fool. There have been many cases of teachers wrongly assuming an essay was AI-generated when it wasn’t.
One of the most important steps, says Lee, is for parents to familiarize themselves with the school’s AI policies or AI disclosure protocols (if it has any) and to remind their children of the importance of following them. If your child is falsely accused of using AI in an assignment, Crompton advises staying calm. Don’t be afraid to challenge the decision and ask how it was reached, she says, and feel free to point to the record ChatGPT keeps of a user’s conversations if you need to prove your child didn’t lift material directly.
4. Recommender systems hook you and might show you terrible stuff.
It’s crucial to understand how recommendation algorithms work and to explain their workings to children, says Teemu Roos, a computer science professor at the University of Helsinki who is building an AI curriculum for Finnish schools. Tech companies make money when people view ads on their platforms. That’s why they have built powerful AI algorithms that recommend content, such as videos on YouTube or TikTok, in the hope that users will get hooked and stay on the platform as long as possible. The algorithms track and closely measure what kinds of videos people watch, then recommend similar ones. The more cat videos you watch, for example, the more likely the algorithm is to conclude that you want to see even more cat videos.
These services have a tendency to steer users toward harmful content like misinformation, Roos says. That’s because of how the human brain works: people linger on anything weird or shocking, such as health misinformation or extreme political views. It’s easy to fall down a rabbit hole or get stuck in a loop, so it’s best not to believe everything you see online and to verify the material against a variety of other reliable sources.
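The feedback loop described above can be sketched in a few lines of code. This is a toy illustration only, not any platform’s actual algorithm: the categories, titles, and ranking rule are invented for the example, and real recommender systems use far richer signals than a simple category count.

```python
# Toy sketch of a recommendation feedback loop: rank unseen videos by how
# often the user has already watched videos in the same category.
# All data here is made up for illustration.

from collections import Counter

def recommend(watch_history, catalog, k=2):
    """Return the k unseen catalog items from the user's most-watched categories."""
    category_counts = Counter(video["category"] for video in watch_history)
    unseen = [v for v in catalog if v not in watch_history]
    # The loop: the more cat videos you watch, the higher cat videos rank.
    ranked = sorted(unseen,
                    key=lambda v: category_counts[v["category"]],
                    reverse=True)
    return ranked[:k]

history = [{"title": "Funny cats", "category": "cats"},
           {"title": "Cat fails", "category": "cats"},
           {"title": "News recap", "category": "news"}]
catalog = [{"title": "More cats", "category": "cats"},
           {"title": "Kittens 101", "category": "cats"},
           {"title": "Cooking pasta", "category": "cooking"}]

print([v["title"] for v in recommend(history, catalog)])
# → ['More cats', 'Kittens 101']
```

Note that nothing the user hasn’t already shown interest in ever ranks highly, which is exactly how the loop narrows what people see over time.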
5. Use AI responsibly and safely.
AI can generate more than just text: plenty of free deepfake apps and web tools can superimpose one person’s face onto another person’s body in seconds. Today’s students have likely been warned about the risks of posting intimate images online, but they should be just as careful about uploading their peers’ faces into questionable apps, especially because doing so could have legal consequences. For example, courts have found minors guilty of distributing child pornography for sending explicit material of other teens, or even of themselves.
“We have conversations with kids about responsible online behavior, both for their own safety and also to not harass, doxx, or catfish anyone else, but we should also remind them of their own responsibilities,” says Lee. Just as malicious rumors spread, so can a fake image once someone starts circulating it.
Concrete examples, rather than abstract principles or laws, are the most effective way to talk to kids and teens about the privacy and legal risks they face online, Lee says. For instance, walking them through how AI face-editing tools could retain the photos they upload, or pointing them to news stories about platforms being hacked, makes a stronger impression than blanket warnings to “be careful about your privacy,” he says.
6. Don’t overlook AI’s strengths.
It’s not all bad news, though. Although many of the earliest conversations about AI in the classroom focused on its potential as a cheating aid, AI can be a genuinely helpful tool when used sensibly. Students struggling with a complex topic could ask ChatGPT to break it down step by step, to rewrite it as a rap, or to take on the persona of an expert biology teacher so they can test their own understanding. It’s also remarkably good at quickly drawing up detailed tables comparing, say, the pros and cons of particular universities, which would otherwise take hours of research to compile manually.
Other helpful uses, says Crompton, include asking a chatbot for glossaries of difficult terms, practicing history questions before a quiz, or having it help a student evaluate answers after writing them. “If a student is using it in the right way, that’s great,” she says, “as long as you remember the bias, the tendency toward hallucinations and inaccuracies, and the significance of digital literacy.”