…aan zee

(Video posted by Anthony Martin on Monday, November 9, 2015)

Radical Networks was rad

Radical Networks

Last weekend, the Radical Networks conference was held in Brooklyn, in the chilly space of the NYU Polytechnic School of Engineering. It was organized by Sarah Grant, Amelia Marzec and Erica Kermani. This was the first edition, but its mood and quality really made it seem like the event had been rolling for years. The talks and workshops on offer over the weekend shared these goals:

  • Understand how technology can be used as a method of control, and how to subvert that.
  • Teach people how to use networking technology for themselves.
  • Encourage creative and social exploration with computer networks.

The event was sold out, but thanks to the Internet Society NY you could watch a live stream, and you can still access the recordings. So I’ll just point you to a couple of talks that I really enjoyed, although you could just watch all of them, as they bring interesting approaches and points of view on these questions.

Seeing the Internet


Do you have any idea what the cloud looks like? Well, Shuli Hallak has been photographing it for years, so the doubting Thomases of “I have nothing to hide” can’t say they did not know.

In The Final Days Of The WWW


A portfolio of digital art projects by Dennis de Bel and Roel Roscam Abbing, former students of the Piet Zwart Institute. They played around with and hacked the notions of networking in goofy, clever ways, bringing a breath of fresh air and some good laughs in the middle of all the “serious” talks we had over the weekend. A must-watch in terms of creativity and exploration.

NYC Mesh, a community owned Wi-Fi network


And if you want to hear me speak about why you should talk to your neighbor, or how citizens can reclaim these hidden networks Shuli Hallak talked about, hop on the NYCmesh train with Brian and Dan.

See you next year, RadNets.

(Cover illustration uses a photo by Shuli Hallak, licensed CC-BY-SA)

Github, why u no show more media files?

Breakdown of media files on Github

Maybe you’ve noticed: it’s impossible to search for media files on Github. Searching Github is for code only. You might find references to media files in code, but no more. This is pretty annoying, although understandable, for two reasons:

  1. Github targets developers and, as such, focuses on tools that are relevant to them.
  2. The open source licenses that Github promotes for its public projects are maybe not always the most relevant or friendly ones when applied to media content. So, it’s just a supposition, but by preventing searches for media files, Github avoids getting in trouble for actually hosting content that stands in the gray area of open source licensing.

Anyway, since I’m very interested in how designers are using Github for their projects, I conducted my own study and started indexing as many projects as I could, mainly storing references to the media files they contained. And after more than 2 weeks of constantly querying their API − with a little help from my friend Olm − I managed to store information from ~500,000 original public projects. That’s a little more than 1% of all the projects that exist on Github so far (44,444,444 at the time I’m writing this blog post).

1% is a pretty small number, but the API is limited to 5,000 calls per hour. It would take me years to get the whole dataset, and certainly more as Github’s growth seems to be accelerating. But for the purpose of this study, it should be plenty. The goal is to get a sense of what’s popular. These 500,000 projects are also what I call “original”, meaning they are not “forks” of other projects. So overall they might represent more projects than this 1%.

Another disclaimer before getting into the data: when I say media files, I actually searched for files with certain extensions. I used a list of 210 popular and not-so-popular media file extensions, compiled with the help of Wikipedia and others. Again, a trade-off here due to time and space constraints: I could have missed some big ones that I have never heard of, although I hope that’s unlikely.
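
For those curious about the plumbing, here is a minimal sketch of the kind of crawl I ran against the public GitHub REST API. It is illustrative only: the token and the tiny extension list are placeholders, not my actual code, and the real crawl stored full file references in a database. But it shows the loop: list public repositories, skip the forks, walk each file tree and tally media extensions while respecting the rate limit.

    # Minimal sketch: list public repositories via the GitHub REST API v3,
    # skip forks, walk each repository's file tree and count media files by
    # extension. Placeholder token and a tiny excerpt of the extension list.
    import time
    import requests

    API = "https://api.github.com"
    HEADERS = {"Authorization": "token YOUR_GITHUB_TOKEN"}  # 5,000 calls/hour when authenticated

    MEDIA_EXTENSIONS = {"png", "jpg", "gif", "svg", "pdf", "ttf", "woff",
                        "psd", "ogg", "mp3", "wav", "stl", "obj"}

    def api_get(url, **params):
        """GET a GitHub API URL, waiting and retrying when the rate limit runs out."""
        while True:
            r = requests.get(url, headers=HEADERS, params=params)
            if r.status_code == 403 and r.headers.get("X-RateLimit-Remaining") == "0":
                reset = int(r.headers.get("X-RateLimit-Reset", time.time() + 60))
                time.sleep(max(reset - time.time(), 0) + 1)
                continue
            r.raise_for_status()
            return r.json()

    def count_media(full_name, branch, counts):
        """Tally media file extensions found in one repository's file tree."""
        tree = api_get("%s/repos/%s/git/trees/%s" % (API, full_name, branch),
                       recursive=1)
        for entry in tree.get("tree", []):
            ext = entry["path"].rsplit(".", 1)[-1].lower()
            if entry["type"] == "blob" and ext in MEDIA_EXTENSIONS:
                counts[ext] = counts.get(ext, 0) + 1

    counts, since = {}, 0
    for _ in range(10):  # only a few pages of public repositories, for demonstration
        for summary in api_get(API + "/repositories", since=since):
            since = summary["id"]
            if summary["fork"]:             # keep only "original" projects
                continue
            repo = api_get(summary["url"])  # full record, to know the default branch
            try:
                count_media(repo["full_name"], repo["default_branch"], counts)
            except requests.HTTPError:
                pass                        # empty or inaccessible repository
    print(counts)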

Ok, so with 1% of Github in my hands, it starts to be interesting to make assumptions about the big picture.

Out of the 546,574 projects, only 52,564 have been forked at least once. That’s barely 10%. But those 10% have produced 276,118 forks. So maybe overall 30% of Github is forks and 6% is original projects that have been forked. Yeah, open source is hard. The rest is empty projects (20% of the originals I downloaded), deleted ones and the occasional spam.
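
For what it’s worth, here is the back-of-the-envelope arithmetic behind those percentages, applied to the sampled numbers (a rough estimate, not an exact measure of Github as a whole):

    # Rough estimate of the fork share, based on the sampled numbers.
    originals = 546574   # original (non-fork) projects in the sample
    forked = 52564       # originals that have been forked at least once
    forks = 276118       # forks produced by those originals

    total = originals + forks
    print(100.0 * forks / total)    # roughly a third of the sample would be forks
    print(100.0 * forked / total)   # ~6% would be originals that have been forked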

Surprisingly, Github gets spammed, a little. And the not-super-smart spammers just fill the descriptions of projects with their trash content, which makes it easy for Github to spot, I guess. Why those spammy repositories are still available through the API is a wonder to me.

550,000 projects represent a total of 130,000,000 files, of which 12% are media files. Extrapolate this and Github might host more than 1.5 billion media files. Quite a resource, if we could only search through it. Anyway, as expected, the most popular media formats are PNG, JPG, GIF and SVG.
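
The extrapolation itself is straightforward, taking the sample as roughly 1% of Github:

    # Extrapolating the ~1% sample to the whole of Github.
    files_in_sample = 130000000   # files across the ~550,000 sampled projects
    media_share = 0.12            # 12% of those files are media files
    sample_fraction = 0.01        # the sample covers roughly 1% of Github

    print(files_in_sample * media_share / sample_fraction)  # ~1.56e9, i.e. more than 1.5 billion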

This is understandable, as Github is the go-to place if you’re into web design, whether it’s JavaScript libraries, CSS frameworks or icon sets. Github also offers static website hosting, which attracts a lot of people. But let’s have a deeper look at the “others”. What’s popular and how does it break down?

What’s interesting to see here is that after PDF, which Github allows you to view in the web interface, come two font formats (TTF and WOFF) that are also very popular with web designers but that, for some reason, Github does not display. Actually, the next format that Github offers a preview of comes in 11th position in this graph: the famous PSD. In between, we have many formats that could easily be previewed in a browser, but Github does not seem to care.

The little surprise here for me is the number of OGG, MP3 and WAV files available. I certainly did not expect that. Seeing also that the ASSET file type is quite popular (a format used in game design with Unity), and considering that game development tools overlap with web development tools these days, all of this starts to make sense. Sound is an important part of any interactive experience, be it a web/app interface or a game. Again, these sound files could easily be previewed in a browser.

Lastly, let’s consider STL, the last file format displayed here (and 30th in position). It’s the common exchange format for object files used in 3D printing. Github has a preview for it and even shows some form of “3D diff” between commits. Great, but in 13th position we have OBJ, also an open 3D format, which counts 5 times more files on Github than STL. To my knowledge, it’s not more complicated to display an OBJ file in the browser than an STL one. So what’s the logic here?

To wrap this up, Github could do so much more, with not so much effort, to allow in-browser previews of media file formats that matter to designers. Maybe the “licensing” trouble described at the beginning is not a bad supposition after all. I’d certainly be happy to hear Github’s take on this. If you know anyone working there, thanks for forwarding these questions. And if anyone there is listening, I’d be pleased to dig deeper into your data to understand how designers (could) use your product.

Send us a picture of your laptop with stickers

laptopstickers

For a research project with Dorothy Howard, we ask you to send us a picture of your laptop cover with stickers. There can be one or many stickers, just as long as you agree to license the picture under a Creative Commons Attribution-ShareAlike license (or equivalent; Public Domain is fine too).

Write your name (how you want to be credited) in the image filename and drop it on https://balloon.io/laptopstickers.

You can also send it by email to julien [a] xuv.be or tweet it with the hashtag #laptopstickers.

Thanks for spreading the word in your network and beyond.

Teasing : the most popular media file formats on Github

In my process of studying collaborative tools for designers, I took a deeper look at Github to find out how many media files are hosted there, of which types, etc. I’m just using the API provided by Github, no magic trick here, although it’s a long process due to the API call limitations. There are 43,000,000 projects on Github, but I’m close to having gone over 1% of them, which is the lower limit I was aiming for before making any assumptions. So below is just a small infographic to tease you and make you impatient for the larger study I plan to release in a couple of days. Enjoy.

I also took the opportunity to test Infogr.am. Still not sure if I’ll use their service for the upcoming article. Any suggestions or remarks?

Collaborative tools for designers − part 6 : the Githosters

The Githosters

For part 6 of this series of posts around my quest for tools that would encourage or facilitate collaboration with and between designers, I went over what I’m calling the Githosters, also known as Github, Gitlab and Bitbucket.

Github is so popular these days that it might become a verb one day, and for some there is still confusion between Git and Github. But since its beginning in 2008, coders have embraced it; it’s their social network. And they’ve been followed by a growing population that also sometimes works with coders: journalists, jurists, typographers, icon designers, cartographers, etc. So we’ve actually come a long way since SourceForge − once the “only” place to get open source software − and, I guess, for the better. Github also helped popularize Git and became a central place to experience “code based” development as a community. With a major web actor such as this one comes a list of alternatives: Bitbucket, born at pretty much the same time (using only Mercurial at the beginning, but later also Git), and more recently Gitlab, the open source equivalent that you can host on your own servers.

The crowd that has followed the coders is the one that’s the most interesting to me. And I’m guessing the future of these tools depends on it. The winner in the coming years will be the one that manages to keep its crowd of developers while providing relevant tools for all the other creators that surround them. And we can see that some of these players have started to understand this.

Github visual diffing

Let’s start with Github, since they are leading the pack. Github offers previews and version comparison for certain files, which means you can see how some of your graphic files have evolved through their commit history. They do this for psd, jpg, png, gif, svg and stl files. They also support rendering of pdf (but no diffing). It’s worth noting that Github also renders IPython/Jupyter notebooks, csv tables and geojson map files (the latter with diffing), appealing there to data scientists and the like.

All of these features are great and can encourage non-coders to use the service. But it’s limited to viewing a file, just one, and in some cases comparing 2 versions of that file. The rest of the interface is focused on code only. Coders might know what to do with a file just by seeing its name, but designers like previews and thumbnails. And none of this is available so far, resulting in many clicks to view the content of a folder and finally see the file you were looking for. I do understand that this is a tool for coders, but if Github wants to please another crowd, they might have to propose a different interface. I’m guessing they could do this through a desktop client − by improving the one they already propose, for example − or by providing a different web interface depending on your role. The issues and comments could also be enhanced to enable some form of “image annotation”.

Bitbucket psd diff preview broken

Bitbucket, from the point of view of a developer, does not seem to be lacking features compared to Github. It boils down maybe to some interface choices, but overall they look the same to me (although I’ve not used Bitbucket extensively, just for this test). When it comes to graphic file handling though, I had the feeling Bitbucket is still buggy or unpolished around the edges. They seem to go in the same direction as Github, but if psd files are supposed to be rendered, all I got was broken/empty previews. The rendering and diffing works for svg, jpg, png and gif. Good. But there is no pdf rendering, and most “binary” file formats spill their content source when viewing commits. I know it’s the default behavior of Git, but this creates pages and pages of gibberish content in the web browser that is totally unnecessary.

Like Github, Bitbucket proposes an in-browser file editor, in case you quickly wish to correct a typo somewhere and commit immediately. But why offer this feature on image files? At first, I thought: “wow, could I edit my image files right in the browser with Bitbucket?“ The answer is no: you’ll see a series of characters (Base64? or ciphered exif data?) that you’d better not touch if you don’t want to make your files definitely unreadable.
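
My guess is that the editor was simply showing the raw image bytes as Base64 text. If you want to see what that looks like (and why hand-editing it wrecks the file), here is a quick illustration; the file name is just an example:

    # Show an image's raw bytes as Base64 text, the kind of wall of characters
    # an in-browser code editor would display for a binary file.
    import base64

    with open("logo.png", "rb") as f:        # any image file will do
        encoded = base64.b64encode(f.read())

    print(encoded[:60])   # a PNG starts with something like b'iVBORw0KGgoAAAANSUhEUg...'
    # Change a single character in there and the decoded image is likely corrupted.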

Bitbucket also offers a desktop client, but I didn’t take the time to install it, as it again looked more like a Soyuz cockpit and seemed to focus only on coders.

Gitlab file viewer

Lastly, let’s go over Gitlab CE, perceived as the open source clone of Github − but I guess that’s what most of us expect. Their Community Edition (CE) is a full-featured “Github clone” that you can install on your own server, which is pretty convenient if you care about secrets or the indieweb. Unfortunately, they support fewer features regarding image handling. You’ll get a preview of most web file formats (jpg, png, gif, but not svg) and visual diffing for those. But no pdf viewer (yet) or other fancy stuff. Since Gitlab is open source, we could drive it in the direction we’d like, or at least make propositions or fork it to bring it into a more designer-friendly direction. Although forking is maybe too far-fetched, as they seem to be very open to new propositions.

Even if these 3 solutions are a little poor relative to what we’ve seen in previous posts about collaborative tools for designers, I’m definitely more in favor of a Git-based solution than a Dropbox (or similar) solution. What Dropbox misses totally is the collaborative work between designers and developers. Devs will never use Dropbox to version or share their code, while designers, provided with the right interface, could totally switch to Git. In that sense, for those looking for an easy way to collaborate with developers on Git repos without learning any Git command, I recommend SparkleShare. It’s kind of like Dropbox, but with a Githoster backend. Though you’d better warn your devs that you’re going to use it, because your Git commits are going to be pretty dull and voiceless. For the rest, it’ll be as simple as saving your files from your preferred graphical application into the right folder. Everything else, such as pulling, committing and pushing, is handled by SparkleShare, running in the background.
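
To give an idea of what such a tool does behind the scenes, here is a rough sketch of that pull/commit/push cycle written with GitPython. This is not SparkleShare’s actual code, just the gist of what it automates, and the folder path is a placeholder:

    # Rough sketch of the pull/commit/push cycle a tool like SparkleShare automates.
    # Requires GitPython (pip install GitPython) and an already cloned repository.
    import time
    from git import Repo

    repo = Repo("/path/to/shared/design/folder")   # your cloned Git repository
    origin = repo.remote(name="origin")

    while True:
        origin.pull()                              # grab your collaborators' changes
        if repo.is_dirty(untracked_files=True):    # anything saved or added locally?
            repo.git.add(A=True)                   # stage everything, new files included
            repo.index.commit("Auto-commit")       # the "dull and voiceless" commit message
            origin.push()                          # publish it
        time.sleep(30)                             # check again in 30 seconds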

That’s it for this part. I know there are some other Githosters out there. I did not want to go over them all, just the most popular ones. But if I missed one that has some great features or that is much more promising for designers, don’t hesitate to drop a line in the comments or contact me.

Rendering WordWars on radiators

Qarnot Blender Interface

TL;DR: Testing the private beta Python API from Qarnot

About a month ago, while keeping an eye on the Libre Software Meeting (RMLL) in Beauvais and watching the news about WordWars spread across some blogs and magazines, I stumbled on a video presentation about open source software tools for animators and Blender 3D rendering using Qarnot (in French). And I was struck.

Qarnot Computing is a high performance computing service provider, which means they can do a lot of complex calculations, super fast, using a large number of computers. This kind of service, of course, is not new. For example, 3D content creators often use such external services, also called “render farms”. Some 3D renderings require a lot of computing power: complex scenes, long movies or highly detailed realistic simulations. By externalizing that work to a large amount of processing power, it can be done in a fraction of the time it would take on a single machine.

Where Qarnot stands out from its competitors is that instead of spending a lot of money on cooling and infrastructure, they decided to use the excess heat produced by those computers to maintain temperatures in private houses and buildings. So, instead of one giant warehouse with thousands of computers that needs to be constantly refrigerated, they have installed a network of “radiators”, made of the same computers, in hundreds of private interiors. No extra energy is required to cool down the computers, the heat is used to make a home comfortable, and there are no more horrible window-less farms either: the infrastructure is distributed across a city. Using this approach, they claim to have reduced the carbon footprint of such services by 78%. And I could not be more happy to test this on my own projects.

The next cool thing about Qarnot is that they provide rendering services for Blender. The Blender Foundation uses them for their latest open movie projects. They offer some free credits to test their services. And they are cheap too. All of these were good reasons to give them a try.

My WordWars project uses Blender to render the daily war news from The New York Times into a movie that looks like the intro scene from Star Wars. So every day, a new Blender file is generated and rendered. On my home machine, it takes about 1h30 to render, but on my dedicated server, it takes up to 8 hours. And I’m running this project from the server because I want to keep my home computer for other things, and it’s also not constantly up or connected. The server, well, is up 24/7. So I decided to contact Qarnot.

I needed to get in touch with them because, so far, the only way to launch a rendering on their system is via a web interface. That’s really practical for a one-time project, but a file that automatically updates every day needs a different approach, and I do want to keep things automated. So I asked if they had any API, any way to interact with the rendering farm via scripts. And as a response, I was happy to get invited to test their Python API, still in private beta.

They provided me with the necessary Python files, documentation, and a small, but feature complete, example that I could start with. It took me a while to figure out how to operate it correctly though. Maybe because I never interacted with a render farm before or maybe also because I got a bit confused with some of the parameters and the limitations of the private beta.

One thing that I found confusing was the ‘framecount’ parameter. The ‘advanced_range’ indicates which part of the movie you want to render, but I did not understand, at first, how this related (or not) to ‘framecount’. ‘framecount’ is also named ‘frame_nbr’ in another function (‘create_task’), and that also puzzled me for a while as to what this variable really represented. After testing different approaches, I understood ‘framecount’ as being the maximum number of frames that you want to process at the same time (the whole point of speeding up the rendering process). I say maximum, because it “feels” different depending on the amount of processing power available for the task. And in the end, the whole range of frames you want to render will be rendered; it might just take more time.
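
To make that concrete, here is roughly what my test script looked like. Keep in mind this is a private beta: the module name, method signatures and extra parameters below are partly reconstructed from memory and partly hypothetical, so treat it as a sketch of the workflow rather than a reference for their API.

    # Hypothetical sketch of driving the Qarnot render farm from Python.
    # The module name, class and method signatures are guesses; only
    # 'create_task', 'frame_nbr'/'framecount' and 'advanced_range' come from
    # the private beta documentation, the rest is made up for illustration.
    import qarnot  # hypothetical module name

    connection = qarnot.Connection(api_token="MY_QARNOT_TOKEN")  # hypothetical signature

    # One task per daily Blender file; frame_nbr ("framecount") is the maximum
    # number of frames processed at the same time, not how many get rendered.
    task = connection.create_task(name="wordwars-daily", frame_nbr=300)
    task.advanced_range = "1-300"                  # the portion of the clip to render
    task.blend_file = "wordwars-daily.blend"       # hypothetical way to attach the scene

    task.submit()                       # hand the job over to the radiators
    task.wait()                         # block until the rendering is done
    task.download_results("renders/")   # hypothetical helper to fetch the frames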

As this is still private beta, I could only get access to some of their processing power and was limited in storage space for the resulting renderings (limitations that you don’t have normally). So in the end, I could only run portions of my project − 300 frames per task − and that would take around 10 minutes to process (rendering the whole clip would take a little more than an hour using this method). So I’ll keep all these as tests for now and wait for the public release, which they should announce by the end of this year. Testing the same render (full power) using their web interface, the whole clip was rendered in less than 10 minutes. And the cost for the service would be around 0.75€ for that particular case.

So, using Qarnot, I could be delivering a WordWars video clip 10 minutes after the news has come out, at an affordable cost of less than a euro, and without feeling guilty about double-heating the planet.

I’m also guessing that, since the Python API they provided runs under Python 3.4, we could expect a Qarnot plugin that integrates directly into Blender’s interface. And if they are not the ones to create it, someone will, for sure.

I want to thank them for allowing me to test this and for their patience in responding to my many questions. I also wish them success, as this is truly an inspiring way to build a computing company.