It's a long way to Carnegie Hall, but we bet that Google researchers are already thinking of the day when they can send a robot or AI to play an interesting, improvised piano performance in a major venue.

While that's not the stated end goal of Magenta, a new project from the Google Brain team, it's certainly a possibility. The entire premise of Magenta is built around two simple questions: Can machines make art? And can machines make music? And, dare we say it, there's also an unstated third question: Can machines make either art or music that's any good?

We'll let you judge the last one. Here's the first piece of music from Google's machine-learning system. It's only 90 seconds long, but it's at least an early demonstration of Magenta's capabilities.

"To start, Magenta is being developed by a small team of researchers from the Google Brain team. If you're a researcher or a coder, you can check out our alpha-version code. Once we have a stable set of tools and models, we'll invite external contributors to check in code to our GitHub. If you're a musician or an artist (or aspire to be one—it's easier than you might think!), we hope you'll try using these tools to make some noise or images or videos… or whatever you like," reads a blog post from Google.

"Our goal is to build a community where the right people are there to help out. If the Magenta tools don't work for you, let us know. We encourage you to join our discussion list and shape how Magenta evolves. We'd love to know what you think of our work—as an artist, musician, researcher, coder, or just an aficionado. You can follow our progress and check out some of the music and art Magenta helps create right here on this blog. As we begin accepting code from community contributors, the blog will also be open to posts from these contributors, not just Google Brain team members."

The Magenta project runs on top of Google's open-source AI engine, TensorFlow. And while it might seem a little odd at first that Google would open such complex source code to anyone who wants to use it, the company is betting that open-sourcing its AI engine will let the technology grow far faster, and spread far wider, than it would if Google kept it under wraps.

"Research in this area is global and growing fast, but lacks standard tools. By sharing what we believe to be one of the best machine learning toolboxes in the world, we hope to create an open standard for exchanging research ideas and putting machine learning in products. Google engineers really do use TensorFlow in user-facing products and services, and our research group intends to share TensorFlow implementations along side many of our research publications," Google writes.

As Billboard reports, Google's Magenta built its first tune from just a four-note prompt. Drum tracks were added afterward to give the song a little more zest. And that, as the researchers note, is the trickiest part of Magenta's challenge: not making a song, but making a song people actually want to listen to. (Welcome to songwriting 101, Google.)
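To make the idea concrete, here's a minimal sketch of how a sequence model extends a short primer: it repeatedly asks the model for the next note and appends it to the melody. The names here (predict_next, SCALE) are placeholder illustrations, not Magenta's actual API, and the toy "model" samples at random where Magenta's trained networks would predict.

```python
import numpy as np

SCALE = [60, 62, 64, 67, 69]  # C major pentatonic, as MIDI pitches

def predict_next(history, rng):
    """Placeholder for a learned next-note distribution (here: uniform)."""
    return int(rng.choice(SCALE))

def generate(primer, num_steps=28, seed=0):
    """Extend a primer melody one note at a time."""
    rng = np.random.default_rng(seed)
    melody = list(primer)
    for _ in range(num_steps):
        melody.append(predict_next(melody, rng))
    return melody

# A four-note primer, roughly analogous to the prompt described above.
print(generate([60, 62, 64, 67]))
```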

"The design of models that learn to construct long narrative arcs is important not only for music and art generation, but also areas like language modeling, where it remains a challenge to carry meaning even across a long paragraph, much less whole stories. Attention models like the Show, Attend and Tell point to one promising direction, but this remains a very challenging task," reads Google's blog post.