The “Wow” of Web support for Adafruit TinyUSB


Emily Twines
7 August 2019

So remember about 2 months ago when the Adafruit TinyUSB library made it possible to drag and drop files directly onto your Arduino as if it were a disk drive?

Instead of plugging away, changing your files in the Arduino IDE on a line-by-line basis, this library made your device simply appear as an external drive onto which you could subsequently drag the files of your choice.

Well, on Monday, an update to the library came out. We can still easily change files on the Arduino, as the original library allowed, but now we can also interact with the Arduino’s output data without touching the code, and that has significant implications for performance.

As many of you will know, WebUSB tech promises big things in software development: it offers a standard JavaScript API for web-to-device (and vice versa) exchanges. This means that developers can easily build cross-platform interfaces using the web frameworks they’re already familiar with (React Native, Semantic UI, etc.), and consequently they can respond faster to community needs and opportunities.

But it’s not just on the development side that WebUSB offers progress. In academic and business settings, WebUSB support can lower software barriers: no more pop-up “updates necessary” messages, because WebUSB-enabled devices can pull the right information directly from the web, with no middleman software required. This makes WebUSB devices faster and easier to set up than traditional options, and more accessible in school or conference-room settings where the tech may need to run across several different machines, all with different specs.

And what does this mean for Arduino?

The Adafruit TinyUSB library update uses WebUSB tech to make user input easier and more flexible. Even the example code is interesting: on your computer or mobile device, you move your mouse/finger to green, and, if it’s set up to do lights, the Arduino produces a green light. Aqua to aqua. Vermillion to I bet you can’t guess what… And this update has huge implications for wearables, for example, or for other uses of Arduino in performance.

Take this example of the color picker. Say, I’m a performer, and I want to wear an interactive LED dress for my solo. Previously, I would have had to stand at the computer changing the RGB values throughout the whole performance. To be sure, this has a certain charm, but it may not be what I’m going for in this performance. With this Adafruit update, without buying or downloading any additional software, I can maneuver intuitively through colors with just the swipe of my finger, and that’s a whole lot easier to work into a dance routine than the clacking of a keyboard.

But that’s not even the best part. Perhaps you want someone not familiar with Arduino or coding at all, say an audience member, to use your creation. You can set up the UI of your choice for them to see, using your favorite web user interface framework, and they never even have to know the code exists. That way, you also control what the user sees, so that they affect only the things you want them to change. It’s a new level of accessibility for Arduino, with performance (and business) potential to spare.

So that’s my personal “wow” factor for Adafruit TinyUSB Library update 0.6.0 — it’s easy to use, easy to share, and opens up new doors for performance. Maybe it’ll inspire you as well.

Check out the WebUSB support and the rest of the updates to the library here!

Happy Building!

AI for the Art World: Neural Nets and Artistic Application


By: Emily Twines
AI specialist: Jeff Jacobs, Columbia University
6 August 2019

In 500 words or less, please describe your biggest historical influences:

“Who are my biggest influences?” It’s a dreaded prompt. Inquiries like these imply deep soul-searching evenings and endless grant-application rewrites… Yes, I know I like the Expressionists, but do they influence my work? Do I think of them when I paint? What does it even mean to be influenced?


But what if you could just — get an objective response? No talk of process or who you admire or who you grew up with plastered on your wall — just a quantifiable, numerical answer?

You’re in luck if you know a computer scientist, because advances in a leading AI technique, neural networks (“neural nets” for short), can provide just this solution.

Named for its vague resemblance to the networks of neural pathways inside the human brain, a neural network is a program that learns relationships between things from examples rather than from hand-written rules.

Remember those contemporary, everyday pictures that were reproduced in the style of famous artists? “Starry Starry My Trip to Vegas,” or Delacroix’s famous “La Bathroom Selfie”? These are both examples of neural style transfer.

How do they do it? I’m glad you asked.

First, you feed the computer a “content image,” or the basis from which future extrapolations are made. So, in our first example, your Van Gogh.

Second, the computer learns, through repeated trial and correction, a custom compression algorithm for your image, similar to the generic codecs you might use to shrink a picture and save space on your drive. In this self-compression, or “autoencoding,” the computer effectively chooses certain traits it considers definitive for the art piece. Because each piece has a different set of qualities that a computer might recognize as integral, the autoencoding process is unique to each content image.

Third, you feed the computer your new data (your newest Instagram pic). It analyzes the picture using the same autoencoder, so that the painting it spits out is, content-wise, still the second piece, but encoded according to the stylistic qualities of the first. Bingo-bango, you’ve got your image stylized according to one of the greats: your own personal classic.
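The three steps above can be sketched with a toy stand-in. Real style transfer uses deep convolutional networks; here a low-rank SVD compression plays the role of the autoencoder, and random arrays play the role of images. Every name below is illustrative, not part of any real library.

```python
import numpy as np

def fit_encoder(image, k=4):
    """Step 2: learn a compression (a small basis) from one image."""
    _, _, vt = np.linalg.svd(image, full_matrices=False)
    return vt[:k]  # the traits this "encoder" considers definitive

def encode(image, basis):
    return image @ basis.T   # compressed representation

def decode(code, basis):
    return code @ basis      # reconstruction from the compressed form

rng = np.random.default_rng(0)
van_gogh = rng.random((32, 32))  # step 1: the "content image"
selfie = rng.random((32, 32))    # step 3: your newest Instagram pic

basis = fit_encoder(van_gogh)                    # compression learned from image one
stylized = decode(encode(selfie, basis), basis)  # image two through image one's encoder
print(stylized.shape)  # (32, 32)
```

Swapping the SVD for a trained convolutional autoencoder gives something like the pipeline the article describes; the data flow is the same.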

And you say this works with video too?

Yep. While doing this work by hand would take a ton of time, the computer doesn’t really care if you add a dimension. There are ways of applying a given picture’s style to a temporal art object (i.e., video); you just track the boundaries and relative densities identified in an initial image from frame to frame. This means that you can, in fact, apply style to an unrelated video, and with shockingly, dare I say it, artistic effects.
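A hedged sketch of the “add a dimension” point: the same per-image operation, applied frame by frame. Real video methods add temporal-consistency constraints (see the paper under Further reading); this toy version, with random arrays standing in for frames and an SVD basis standing in for the learned encoder, only shows that the extension is a loop.

```python
import numpy as np

def stylize_frame(frame, basis):
    # Encode the frame with the style image's basis, then decode it.
    return (frame @ basis.T) @ basis

rng = np.random.default_rng(1)
style_image = rng.random((16, 16))
video = [rng.random((16, 16)) for _ in range(3)]  # three toy frames

_, _, vt = np.linalg.svd(style_image, full_matrices=False)
basis = vt[:4]  # compression learned from the style image

stylized_video = [stylize_frame(f, basis) for f in video]
print(len(stylized_video), stylized_video[0].shape)  # 3 (16, 16)
```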

“So, this is all well and good,” you might say, “but how can neural nets help me answer the question: ‘Who are my biggest influences?’”

Start with one of your art pieces (just one, to keep things simple). Autoencode it. Then choose a paradigmatic piece (say, a Degas) and autoencode it using the same algorithm. Finally, ask the computer to compare two matrices: “How does the original Degas compare with its compressed representation?” The response answers the telling question: “Mathematically, how close is my style to theirs?”

Now, repeat this process. Like, a bunch of times. For Kahlo, Klimt, Munch, Basquiat… And see how each set of matrices compares.
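The comparison loop might look like this toy sketch, again with an SVD compression standing in for the autoencoder and random arrays standing in for artworks: learn the compression from your own piece, run each artist’s piece through it, and record how much is lost (a smaller distance means a closer style).

```python
import numpy as np

rng = np.random.default_rng(2)
my_piece = rng.random((24, 24))

# Learn the compression (basis) from my own piece, as the text describes.
_, _, vt = np.linalg.svd(my_piece, full_matrices=False)
basis = vt[:4]

def style_distance(artist_piece, basis):
    """Run an artist's piece through my encoder and measure what is lost."""
    reconstruction = (artist_piece @ basis.T) @ basis
    return float(np.linalg.norm(artist_piece - reconstruction))

artists = {name: rng.random((24, 24))
           for name in ["Degas", "Kahlo", "Klimt", "Munch", "Basquiat"]}
distances = {name: style_distance(img, basis) for name, img in artists.items()}
for name, d in sorted(distances.items(), key=lambda kv: kv[1]):
    print(f"{name}: {d:.3f}")
```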

So now you have your own work, and a whole lot of numbers, yeah?

Last step. You remember that band Joy Division? Remember that album cover with all the little mountain bumps? There’s a plot named after that image (formally, a ridgeline plot, or “joyplot”) where you can compare groupings of points with each other. The groups make up the mountains, and when you compare several groups, you get the whole Joy Division plot.

So group all your artists together by movement (your Romantics, your Cubists, your Futurists, and so on), as each movement will likely return similar difference sums. Insert them into the plot and, with your work as point zero for each line, compare visually which groups sit furthest left, in other words, which are closest to zero. The closer a mountain sits to zero, the more stylistic similarity the neural network found between that group and your own piece, and the more likely it is that this movement of artists influenced your work.
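The grouping step itself is just bookkeeping. With made-up distance numbers (in practice they come from the matrix comparisons described above), a sketch of finding the movement whose mountain sits closest to zero:

```python
# Hypothetical per-artist distances, grouped by movement; values are invented.
movements = {
    "Romantics": [4.1, 3.8, 4.5],
    "Cubists":   [2.0, 2.4, 1.9],
    "Futurists": [3.2, 2.9, 3.5],
}

# Mean distance per movement: the "mountain" nearest zero on the joyplot.
means = {m: sum(d) / len(d) for m, d in movements.items()}
closest = min(means, key=means.get)
print(closest)  # Cubists
```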

So there you have it — a plot of historical works that visualizes which ones your own piece most resembles, thereby offering good candidates for your biggest influences — all quantified and categorized.

And the best part?

There’s no essay required.

Further reading:

For apps that do style transfer, see here.
For a gif of style transfer with video in action, see here.
For the math behind video style transfer, check out this paper here.
For further further reading, including the original Gatys et al. paper, check out this extensive GitHub here.

Photo and video edited with Artisto, because it’s a free app that does style transfers for both photo and video. Note, however, that it does not do them specifically between your photos and famous artworks, but between yours and other contemporary (though historically influenced) works.