Data-security researchers at Indiana University, probing the hacking vulnerabilities of an Internet-connected Crock-Pot, easily took control of the device and other devices that share its Wi-Fi router—but they have found no risk of an attacker remotely setting it on fire. That’s the good news.

Now, the bad news: Internet connectivity is designed into many more devices than consumers realize, including standby kitchen gadgets, home-entertainment systems, and children’s toys. To optimize energy consumption and simplify their lives, consumers have adopted a chattering swarm of robots that generate, gather, and share data via the Internet of Things (IoT).

It’s reaching the point where purchasers cannot reasonably know whether they are generating data, who has access to that data, or what that means for privacy, security, and even physical safety. And it’s becoming increasingly difficult to opt out.

Researchers at Indiana University’s Security & Privacy in Informatics, Computing, and Engineering center (SPICE) and its IoT Research Center in Bloomington, Indiana, have begun sounding the alarm.

The IoT Research Center has established a living lab called the IoT House, funded by a five-year National Science Foundation grant. It’s literally a house filled with IoT-connected smoke alarms, thermostats, lighting systems, toys, appliances, and other gadgets.

“We asked the university for a network in which a lot of the defense mechanisms are reduced, to mimic a typical home,” says Joshua Streiff, project manager at the IoT House. That environment is carefully walled off from the university’s network to avoid exposing nearby academic users to the wide-open Internet traffic that flows through a typical suburban house. The researchers submit their findings to industry and academia when they discover product-security gaps.

Streiff picks up a colorful stuffed unicorn toy called a CloudPet. It allows a child to send and receive voice messages through a Bluetooth low-energy (BLE) module embedded in its plush body and paired with a mobile app that connects the data to a cloud server. It’s adorable yet infamous.

“Researchers raised the alarm about the CloudPet unicorn for years,” Streiff says. “It was designed with essentially no security. Mozilla eventually convinced Amazon and eBay to pull it, and subsequently, the brand was discontinued, but the toy is still out there. Manufacturers sometimes stop selling IoT devices like this, but the devices are virtually never recalled.”

The toy can be used to track a child’s location, and hackers can send bogus messages to it. “If I had the wrong intentions, I could drive through a neighborhood and pinpoint where the children live,” Streiff says. “I could find the unicorn’s BLE module and send it voice messages to convince the child that I’m his parent and persuade him to come outside.”

In December, the Washington Post reported that parents found a hacker shouting obscenities and threats through their Nest Cam, used as a baby monitor. “Nest wasn’t hacked,” says Behnood Momenzadeh, a doctoral candidate who works at the IoT House. “People’s email accounts were hacked, and the hackers then were able to access their Nest accounts and change permissions. So a combination of technologies put that family at risk.”

The Fisher-Price Smart Toy Bear offers another object lesson. The toy is designed to “talk, listen, and learn” and has a video camera installed in its nose—not recording, but always on. “The teddy bear poses a risk because the designers assumed nobody would ever open this thing up and look inside,” Streiff says.

The bear contains essentially a complete Android tablet. “It can do anything a tablet can do, including video and email,” Streiff says. “It has a hardware data port. You can communicate with it using a remote keyboard. An actual attack on the device requires me to open the bear up. But after three minutes, I own the bear, from anywhere in the world.”

Backdoor intrusions happen with ostensibly secure sites, too. Last year, security officers learned of a data theft in which cybercriminals stole a casino’s high-roller list from its network, gaining access through a security hole in the IoT-connected thermostat in the casino lobby’s fish tank. Virtually no home has the kind of professional data security that a casino has.

Consider again the Crock-Pot—it’s made by Belkin and incorporates Wemo, Belkin’s technology for wireless IoT connectivity. The problem, Streiff says, is the system’s built-in design assumption that the individual who sets up the Crock-Pot must also own everything else on the network. The software is designed to look for other compatible wireless devices to connect to and give the installer full control and access over those devices, as well. Additionally, that control can be set for remote access, giving control from any location in the world.

“It may be cool to be able to control the Crock-Pot from my phone, but that also means I can control everything else that doesn’t have its own separate password—and so can anyone else who can get into the Crock-Pot,” Streiff says. “If the thermostat and the refrigerator can communicate, total power usage in the house can be hugely improved. The problem is, a hacker could deliberately run up the house’s electric bill—a lot—through something like the Crock-Pot, without ever setting foot in the house.”

Designers make trade-offs. Engineers want to keep customers safe and guard their privacy; marketers just want people to enjoy using the device. The result is a compromise between safety and ease of use. Manufacturers also want their devices to connect to everyone else’s—for the same reason one universal television remote control is better than three. “The user wants one app to rule them all,” Streiff says.

Devices with video cameras particularly worry researchers, but there are similar concerns about audio-recording devices—especially the increasingly popular voice-activated, interactive speakers such as Amazon’s Echo and its competitors. There have been multiple media reports of “smart speakers” recording private conversations and then sending those recordings to other people. (“Creepy smart-speaker stories” are now a staple on message boards such as Reddit.)

Can vulnerabilities like these be designed out of IoT household devices? And in the meantime, what can consumers do to avoid problems?

The SPICE research team offers a few suggestions. The first is to change default passwords. Many device exploits start with bot software like the infamous Mirai that crawls the Internet looking for specific devices and then polls those devices to see whether owners have changed default passwords—which are known to the bot’s owner. If not, the hacker quickly owns that device.
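That first line of defense is easy to audit yourself. The sketch below shows the idea in miniature, assuming a hypothetical device and a short list of the kind of publicly documented factory defaults that bots like Mirai try; the `try_login` callable stands in for whatever login mechanism (HTTP Basic auth, telnet) your device actually exposes.

```python
# Sketch: audit your own device for factory-default credentials.
# The credential list and the mock device below are illustrative
# stand-ins, not a real bot's dictionary or a real product's API.

DEFAULT_CREDENTIALS = [
    ("admin", "admin"),
    ("root", "root"),
    ("admin", "password"),
    ("user", "user"),
]

def find_accepted_defaults(try_login, credentials=DEFAULT_CREDENTIALS):
    """Return every default (user, password) pair the device accepts.

    `try_login` is a callable(user, password) -> bool supplied by the
    auditor -- for example, an HTTP Basic-auth request against the
    device's admin page. An empty result means the defaults were
    changed, which is exactly what a Mirai-style scan hopes not to find.
    """
    return [pair for pair in credentials if try_login(*pair)]

# Example: a stand-in device that still uses "admin"/"admin".
mock_device = lambda user, pw: (user, pw) == ("admin", "admin")
print(find_accepted_defaults(mock_device))  # [('admin', 'admin')]
```

If the function returns anything at all, the device is discoverable by the same automated crawl the researchers describe; changing the password empties the list.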

The IoT House team is developing and testing a more advanced system that uses the Manufacturer Usage Description (MUD) specification to describe the network communications an IoT device is expected to make. If an IoT device begins to communicate outside of that expected set, the system stops the communication and alerts the owner. That way, if a teddy bear that should communicate with a known cloud server instead contacts an unknown server in Eastern Europe, the system protects the house and its owners. This approach puts the onus on the manufacturer to set expectations.
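The enforcement logic can be sketched in a few lines. This is a simplified illustration of the MUD idea (standardized in RFC 8520), not the real MUD file format: the device names, hostnames, and profile table below are hypothetical, and a real gateway would match on IP flows rather than hostnames.

```python
# Minimal sketch of MUD-style enforcement: the manufacturer publishes
# the endpoints a device should talk to, and the home gateway blocks
# and reports everything else. Profiles and destinations are
# illustrative stand-ins for the actual RFC 8520 machinery.

ALLOWED = {
    "teddy-bear": {"telemetry.toymaker.example", "voice.toymaker.example"},
}

def check_flow(device, destination, profiles=ALLOWED):
    """Return (permit, alert_message) for one outbound connection."""
    allowed_hosts = profiles.get(device)
    if allowed_hosts is None:
        # No manufacturer profile on file: default-deny is the safe choice.
        return False, f"{device}: no MUD profile on file; blocking by default"
    if destination in allowed_hosts:
        return True, None
    return False, f"{device}: unexpected destination {destination}; blocked"

ok, alert = check_flow("teddy-bear", "voice.toymaker.example")
# ok is True, alert is None: traffic the manufacturer declared is permitted
ok, alert = check_flow("teddy-bear", "unknown-host.example")
# ok is False: the undeclared destination is blocked and the owner alerted
```

The key design point is the default-deny branch: a device with no declared profile gets no network access, which is what shifts the burden onto the manufacturer.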

While the industry develops new risk-mitigating standards, users should educate themselves: research an IoT device before buying or installing it. A quick Google search will likely turn up known issues. Even if a device has no known issues, a company’s history of security failures should raise a flag—one telling signal is whether the company uses secure, encrypted communications.

Either way, users should think carefully about mitigating device use. Should a family use a toy with a camera only in certain rooms? Should they cover the camera with opaque tape when not in use? Should the toy be turned off most of the time?

Ultimately, real change on the manufacturer’s side is unlikely until consumers develop higher levels of concern about the devices they bring into their lives. Most people find it difficult to imagine themselves as the targets of cyberattacks: A “why me” bias handicaps critical thinking. The question is, who’s willing to pay to know about you? “A lot of people are,” Streiff says. “Companies are gobbling up information about individuals without even knowing yet how it can or will be used.”

This article originally appeared on Autodesk’s Redshift, a publication dedicated to inspiring designers, engineers, builders, and makers.