Lesson #2: Create a Pattern-Finder AI Agent

Objectives

In this lesson you will learn:

  • Agents use sensors to gather information about their environment.
  • AI agents use electronic sensors to gather information.

To complete this lesson you will need:

  • A device with access to the Internet
  • AI Inventors notebook
  • Smartphone (optional)

Over the course of this lesson you will:

  • Identify sensors on your smartphone
  • Create an AI agent

Sensors gather information

  • Agents sense their environment so that they can take actions.
  • Agents have different types of sensors that they use to gather information about their environment.

Stop and answer

What are your human sensors? How do your sensors help you gather information about the environment?

AI agents don’t have ears or eyes, but humans have invented electronic sensors that help them gather information about their environment.

Image: a black Amazon Echo Dot on a table
Voice assistants like Siri or Alexa have a sensor called a microphone to gather information.

Challenge 1

Find sensors on your phone that gather information similar to your human sensors

  • Touch: a fingerprint sensor to unlock the phone, and a pressure sensor on the screen to understand when you are texting
  • Eyes: a camera
  • Ears: a microphone

Challenge 2

Can you find apps with sensors that gather different kinds of information?

  • Tell you where you are: Location apps (like maps) use sensors like GPS to understand where you are, and a compass app uses a magnetometer sensor to measure magnetic fields and understand your phone’s orientation.
  • Measure movement: An accelerometer sensor gathers information about how many steps you’ve taken (for health apps) or how fast you are driving (for maps).
  • Measure light: An ambient light sensor measures how dark or light a place is to adjust your screen’s brightness or camera flash.

What can AI agents do with the gathered information?

The sensors in your phone can be used to gather information for AI agents.

Each sensor, the information it gathers, and what AI agents can do with that information:

  • Camera: gathers images of its surroundings. AI agents can identify items in the viewfinder or translate text in the image from one language to another.
  • Location: identifies where your phone is in the world. AI agents can update traffic routes and directions based on where you’re going and the surrounding traffic.
  • Microphone: captures spoken words and sounds. AI agents can understand and react to a human speaking to a voice assistant (like “Hey Siri!” or “OK Google!”).

Agents can use sensors to help understand their environment.
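
To make this concrete, here is a tiny, hypothetical Python sketch (not part of Cognimates or any phone’s real software) of the sense-then-decide loop, using the ambient light example from Challenge 2; the lux thresholds and readings are made-up values.

```python
# A toy sketch of an agent's sense -> decide loop (hypothetical values,
# not real phone software): read the ambient light sensor, then decide
# how bright the screen should be.

def decide_brightness(ambient_light_lux: float) -> str:
    """Turn an ambient light reading (in lux) into a brightness decision."""
    if ambient_light_lux < 50:       # dark room
        return "low brightness"
    elif ambient_light_lux < 1000:   # normal indoor lighting
        return "medium brightness"
    else:                            # bright daylight
        return "high brightness"

# Pretend these readings came from a phone's ambient light sensor.
for reading in [10, 300, 5000]:
    print(reading, "lux ->", decide_brightness(reading))
```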

AI inventions using sensors

There are many types of sensors an AI agent could use to gather information.

Some examples of AI inventions

This person made an AI-powered cat door that uses a camera to tell when his cat approaches the door with a critter in its mouth. If it’s just the cat, the door opens! If he’s carrying a rat or a bird? The cat door locks.

Remember agents?

AI agents use sensors to find patterns in environments and make decisions towards goals.

Some AI agents use many types of sensors to understand their environment, find patterns and make decisions towards goals.

This prototype is a large screen that takes up most of a wall and senses human characteristics such as age, gender, and even heart rate. It uses facial recognition technology that was originally developed for cyber security. The AI agent uses camera sensors to detect minor changes in the cheeks around the nose in order to determine heart rate. In the future, smart homes will probably be outfitted with this technology.
This AI agent assists people in wheelchairs using facial recognition software. The Wheelie is the first AI-driven program that uses facial gestures to help wheelchair users gain more control and independence, and it can be customized to each user’s needs. The Wheelie uses Intel RealSense camera sensors to capture a 3D map of a user’s face, then uses Intel-powered deep learning to process more than ten different facial expressions. The Wheelie then turns smiles, kisses, and raised eyebrows into real-time chair movements, giving wheelchair users a newfound sense of mobility without lifting a finger.
A self-driving car is a car that performs all the driving tasks for you, allowing you to go from point A to point B without having to engage. Self-driving cars sense everything around them using sensors: cameras, a laser-based lidar sensor, a radar sensor, and GPS. The cameras let the car see, the lidar sensor tells the car how far away objects are, and GPS tells the car where it is. These sensors give the car vision all around, so that if something like a dog is standing in the road, the car can see the dog and stop itself. AI allows the car’s supercomputer to multitask and process the data from all the sensors simultaneously. Self-driving cars can also share information with each other, so the more self-driving cars there are on the road, the smarter they can be.
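
As a rough illustration only, here is a toy Python sketch of how a car’s computer might combine readings from its camera, lidar, and speedometer at one moment to decide whether to brake; the function, thresholds, and values are invented for this example and are far simpler than real self-driving software.

```python
# A toy sketch of combining sensor readings in a self-driving car
# (invented values, not real autonomous-driving code): merge camera,
# lidar, and speed readings into a single driving decision.

def decide_action(camera_sees_obstacle: bool,
                  lidar_distance_m: float,
                  speed_kmh: float) -> str:
    """Decide what the car should do based on several sensors at once."""
    if camera_sees_obstacle and lidar_distance_m < 20:
        return "brake"         # something close ahead: stop now
    if camera_sees_obstacle and lidar_distance_m < 60:
        return "slow down"     # something ahead, but farther away
    return f"keep driving at {speed_kmh} km/h"

# Example: the camera spots a dog and the lidar says it is 12 metres away.
print(decide_action(camera_sees_obstacle=True, lidar_distance_m=12, speed_kmh=40))
```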

Stop and answer

What information about the environment are the sensors gathering? What patterns does the AI look for? What decisions are the inventions making?

Make an AI agent

Your challenge is to follow these steps to create an AI agent that finds patterns in animal images to decide what animal it’s looking at. You will design your AI agent to distinguish between two things. For example, your agent could be programmed to determine whether an image contains a narwhal or a unicorn. They both have a spike on their head, but unicorns look like horses and narwhals look like whales. You will train your AI agent to identify the differences between them by supplying it with images of unicorns and narwhals. After your AI agent has been trained with enough information, it will be able to decide whether an image is a narwhal or a unicorn.
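
Cognimates and Clarifai do this with deep learning on real images. Just to show the general “label examples, train, then predict” idea in code, here is a toy Python sketch that stands in for each image with two made-up numbers (how whale-like and how horse-like it looks) and uses a simple nearest-neighbor classifier; these features and the classifier choice are illustrative assumptions, not how those tools actually work.

```python
# A minimal sketch of the "train, then predict" idea behind your AI agent.
# Each image is replaced by two made-up numbers (how whale-like and how
# horse-like it looks) so the example stays tiny and runnable; real tools
# learn their own features from the image pixels.
from sklearn.neighbors import KNeighborsClassifier

# Training examples: [whale_likeness, horse_likeness] plus the correct label.
features = [
    [0.90, 0.10], [0.80, 0.20], [0.95, 0.05],   # narwhal pictures
    [0.10, 0.90], [0.20, 0.80], [0.05, 0.95],   # unicorn pictures
]
labels = ["narwhal", "narwhal", "narwhal", "unicorn", "unicorn", "unicorn"]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(features, labels)                     # the "train model" step

# A new, unseen picture summarized the same way.
print(model.predict([[0.85, 0.15]])[0])         # expected output: narwhal
```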

Cognimates Tutorial

Video tutorial for creating the AI agent

Creating the API key

  1. Create an account by following this link to Clarifai: https://portal.clarifai.com/signup
  2. Then log in using your email and password at the Clarifai login page: https://portal.clarifai.com/login
  3. Once you are logged in select “create new application”
  4. Create a name for your application and then select “create app” 
  5. Click on the application name to see the application details
  6. Then select the “API keys” section of the application details to see the API key
  7. Finally, click “copy to clipboard” and continue to register with Cognimates

Start a project on Cognimates

To begin training your AI agent you must also set up your project on Cognimates. Start by following this link to Cognimates: http://cognimates.me/home/. Then click the button to train a model, select “train vision,” and name your project.

After naming your project, paste your Clarifai API key into the Clarifai key field and select “set key.”

Train your model

Next, input the categories of objects that your AI agent will be identifying. For example, you could enter “narwhal” and then select “add category.” Then you would enter “unicorn” and select “add category.” Now your agent knows that it is distinguishing between narwhals and unicorns, or you could do cats and dogs, or shoes and hats. 

After you have chosen the categories of objects, you will upload 10 images of each object. 

When you are done adding images of the objects, click “train model.”

Test your model

Finally, now that your model is trained you can test it by uploading a new image that belongs to one of your categories, like narwhals or unicorns. Then select “predict” to discover if your agent can correctly identify your image.
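
To connect this back to the toy sketch from “Make an AI agent” (the training data is repeated here so the snippet runs on its own), testing means handing the trained model new examples it has never seen and counting how many it gets right; all numbers and labels below are made up for illustration and are not how Cognimates evaluates a model internally.

```python
# Testing the toy model from the earlier sketch on new, unseen examples
# (made-up numbers; only an illustration of the "test your model" step).
from sklearn.neighbors import KNeighborsClassifier

train_features = [[0.90, 0.10], [0.80, 0.20], [0.95, 0.05],   # narwhal pictures
                  [0.10, 0.90], [0.20, 0.80], [0.05, 0.95]]   # unicorn pictures
train_labels = ["narwhal"] * 3 + ["unicorn"] * 3

model = KNeighborsClassifier(n_neighbors=3)
model.fit(train_features, train_labels)

# New pictures the model has never seen, plus what they really are.
test_features = [[0.88, 0.12], [0.15, 0.85]]
true_labels = ["narwhal", "unicorn"]

predictions = model.predict(test_features)
correct = sum(p == t for p, t in zip(predictions, true_labels))
print(f"{correct} out of {len(true_labels)} test images identified correctly")
```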

Reflect

AI Inventors notebook

Do you have an idea for an AI invention that would use sensors to gather information and find patterns?

Can you think of a way that an AI agent could use images to solve a community problem?

Big Takeaways

Agents can use sensors to understand their environment.

AI agents can use electronic sensors to gather information and find patterns.

You built an AI agent that finds patterns in animal pictures!