I hope you’re enjoying this piece in print. If not, then there’s a good chance you’re reading this on a cell phone.

Take a moment to think about that. Bytes of data are sitting in a server somewhere, getting sent through “a series of tubes,” and finally appearing as crisp text within 10 seconds. All of this is done with what we consider conventional computing, where computers use transistors as switches to step through one computation at a time.

According to what’s called Moore’s law, every 18 months the number of transistors we can fit on a microchip of the same size doubles, while, as though through radioactive decay, the cost per transistor continually halves. Although the law started as only an observation, it’s become a trend that industry professionals treat as a quasi-fact when they make new products. And it’s allowed us to do amazing things. You can video chat with a significant other on the other side of the world; compute the trajectory a rover takes to land on Mars; or watch all of “House of Cards” without having to leave the couch. All of this is done with what is, fundamentally, a beefed-up calculator.
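To get a feel for how quickly that doubling compounds, here is a minimal sketch of the arithmetic. The 1971 starting point of roughly 2,300 transistors (the Intel 4004) is an illustrative assumption, not a figure from this piece.

```python
# Illustrative only: compound a transistor count that doubles every 18 months.
# The 1971 baseline of ~2,300 transistors (Intel 4004) is an assumed example.

start_year = 1971
start_count = 2_300           # assumed baseline transistor count
doubling_period_years = 1.5   # "every 18 months"

for year in range(start_year, 2017, 9):  # sample every 9 years (6 doublings each)
    doublings = (year - start_year) / doubling_period_years
    count = start_count * 2 ** doublings
    print(f"{year}: ~{count:,.0f} transistors")
```

Thirty doublings over that 45-year span multiply the starting count by roughly a billion, which is what makes the trend feel like a law.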

But conventional computers have their drawbacks. For one, we’re quickly approaching the limit of Moore’s law: As transistors get smaller and smaller, they’ll cease to be reliable carriers of information and will generate too much heat to function properly. Computational boosts won’t come from simply squeezing more transistors onto a chip. Second, as my 10 percent battery warning reminds me, conventional computers use a lot of power. And last, they’re sometimes just plain bad at solving what often seem like simple problems. Compute pi to a million digits: easy as pie; have Siri identify a type of pie based solely on a photo: a scary computational problem.

All the while, humans have long known about and used a different type of computer. This one performs tasks such as face recognition and natural language recognition “much better and more accurately” than anything we’ve been able to make a computer do so far, according to John Harris, Ph.D., chair of the Electrical and Computer Engineering (ECE) Department at the University of Florida.

If you haven’t guessed by now, this computer is the brain.

With just 20 watts of power — about how much your laptop consumes — your brain can navigate you through Turlington without hitting a pedestrian on your bike. With just a tiny electrical nudge, you can pick out your friend’s voice from across a loud room and immediately gather from his tone that he wants to leave.

Yet despite everyone having one, the brain remains a mystery, Harris said. In fact, we haven’t been able to construct computers that work as well as it does.

“It is embarrassing,” Harris said, “that we can’t do as well as a rat running around.”

But a field coming into its own, neuromorphic engineering, seeks to use the brain as a biological inspiration for new computers. In his lab, Harris used neuron-like circuits to design a camera that works with less power and provides a higher dynamic range than even high-end models.

Another approach to this field, taken by Karim Oweiss, Ph.D., a professor in the ECE department, is to understand the brain not by modeling it, but by working on it directly.

Oweiss’ team focuses on developing Brain-Machine Interfaces (BMI). Prior to the development of modern neurobiology, we could only learn about the brain by observing neural input and the resulting action. For example: Touch a hot pan, and your hand snaps back instantly. In Oweiss’ lab, BMIs are used as a window through which we can read the individual firings of neurons in a brain, showing us what the brain does between touching the pan and snatching your hand away.

This has allowed the Oweiss lab to see how neurons adapt themselves when they’re damaged or needed for other tasks — a property known as “neural plasticity.”

In a current project, Oweiss places a sensor in the brains of mice that detects when neurons fire. He then trains the mice to move robotic arms simply by thinking about which direction they would like to move their bodies.

Oweiss said this work could provide relief to paralyzed patients who are unable to move. By applying a similar BMI to those patients’ brains, it would be possible to train them to move a robotic arm independently.

But training requires mental gymnastics, with patients learning to think through very specific movements. Try it out yourself: To work the arm, you’d need to walk through every tiny motion required to, say, flip this page (or scroll your phone; I’m still not going to assume). It’s not easy.

For this reason, Oweiss’ former Ph.D. student has attempted to train mice to move the robotic arm with higher-level, “goal-oriented” signals. Instead of the mouse thinking through all the necessary in-between steps (arm up, arm down, squeeze), Oweiss’ team believes it has been able to detect and act on the goal signal itself. Now, they believe, the robotic arm is simply acting on the mouse’s urge to get food.

In other words: Oweiss’ team may be reading the mice’s thoughts.

This is why neuromorphic computing is so exciting. By trying to answer engineering questions, we are building up the necessary tools to answer what were once considered philosophical questions. And as scientists and engineers continue to explore neurobiology, we see more and more that the mind-melting innovation of science fiction could become an everyday reality.

And Harris pointed out the profound possibilities best: “There is seemingly no roadblock to getting consciousness into a computer.”