I ♥ Phaune Radio

Phaune Radio homepage

I don't listen to the radio anymore. You know, the kind that still broadcasts over the airwaves. Well, I do listen to it sometimes, when I rent a van to move house. Which tells you it's not that often; I'm a developer, not a mover. Anyway, I don't listen online either. I'll catch a podcast now and then, when someone sends me a link, but I don't tune in every week or every day to the national or regional or international stations. And yet they're all there, on the web, streaming or podcasting… Nope. I don't listen. And the main problem, I think, is the format. Ad breaks, jingles, world news, programme announcements, the presenter's radio voice… it all feels too much like a media product. It's formatted. Even when the sauce changes, the packaging looks mass-produced.

That's harsh, what I just said. I don't generally listen, so I really can't generalize. But I don't have the patience to tune in to a programme at a set time so as not to miss a show I'd enjoy. Nor do I have the patience to find a podcast whose episodes I'd want to hear every week. I've tried; there are some very good programmes on certain stations. But I don't have the right tools, or just not the motivation.

Except… except since someone turned me on to LeDjamRadio. Now that's radio for me. I can listen at any hour, the programming suits me 90% of the time. There are no ads (or almost none, once a day at most), no smooth-voiced presenter, no rrrrrepepepetititive jingle blasting your ears, and the music is a good mix of originality, diversity and revisited classics. In short, it's ideal for working.

Well, after two years of intensive listening and sharing − yeah, I binge − I did get a little tired of it. I've lifted plenty of tracks from it for my "Cover Tuesday" column (I owe them that confession). But I also don't like the new interface, where you have to sign up with your email and all that. It breaks the flow.

That's when my unwitting dealer posted a link to Phaune Radio, the smart radio. And the joy is back. A breath of fresh air in my ear canals, a much-needed ear cleaning, and a visit to a "perpetually moving cabinet of sonic curiosities". In short, look no further, tune in. It's good.


Github, why u no show more media files?

Breakdown of media files on Github

Maybe you've noticed: it's impossible to search for media files on Github. Github's search is for code only. You might find references to media files in code, but nothing more. This is pretty annoying, although understandable, for two reasons:

  1. Github targets developers and, as such, focuses on tools that are relevant to them.
  2. The open source licenses that Github promotes for its public projects are maybe not always the most relevant or friendly ones when applied to media content. So it's just a supposition, but by preventing searches for media files, Github avoids getting in trouble for actually hosting content that stands in the gray area of open source licensing.

Anyway, since I'm very interested in how designers are using Github for their projects, I conducted my own study and started indexing as many projects as I could, mainly storing references to the media files they contained. And after more than 2 weeks of constantly querying their API − with a little help from my friend Olm– − I managed to store information from ~500,000 original public projects. That's a little more than 1% of all the projects that exist on Github so far (44,444,444 at the time I'm writing this blog post).

1% is a pretty small number, but the API is limited to 5,000 calls per hour. It would take me years to get the whole dataset, and certainly more as Github's growth seems to be accelerating. But for the purpose of this study, it should be plenty. The goal is to get a sense of what's popular. These 500,000 projects are also what I call "original", meaning they are not "forks" of other projects. So overall they might represent more projects than this 1%.
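The arithmetic behind that "years" claim is simple enough to sketch. This is a back-of-the-envelope estimate derived only from the figures in this post (500,000 projects in 2 weeks), so the throughput is approximate:

```python
# Rough estimate of how long indexing all of Github would take
# at the pace observed in this study (API rate limits included).
projects_indexed = 500_000        # original projects stored
weeks_spent = 2                   # of constant querying
total_projects = 44_444_444      # Github total at the time of writing

projects_per_hour = projects_indexed / (weeks_spent * 7 * 24)
hours_for_all = total_projects / projects_per_hour
years_for_all = hours_for_all / (24 * 365)

print(round(projects_per_hour))   # ~1488 projects/hour
print(round(years_for_all, 1))    # ~3.4 years, before accounting for growth
```

And that's assuming Github stands still, which it clearly doesn't.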

Another disclaimer before getting into the data: when I say media files, I actually searched for files with certain extensions. I used a list of 210 popular and not-so-popular media file extensions, compiled with the help of Wikipedia and others. Again, a trade-off here due to time and space constraints. I could have missed some big ones that I never heard of, although I hope that's unlikely.
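The classification itself is trivial; here is a minimal sketch of the extension-based approach (the list below is a tiny illustrative subset, not the actual 210-entry list I used):

```python
# Classify files as "media" by extension, the way this study did.
# Illustrative subset only; the real list had 210 extensions.
MEDIA_EXTENSIONS = {
    "png", "jpg", "jpeg", "gif", "svg", "psd", "pdf",
    "ttf", "woff", "ogg", "mp3", "wav", "obj", "stl",
}

def is_media_file(path):
    # Lowercase the extension so "Logo.PNG" counts too.
    ext = path.rsplit(".", 1)[-1].lower() if "." in path else ""
    return ext in MEDIA_EXTENSIONS

files = ["src/app.js", "assets/Logo.PNG", "fonts/body.woff", "README"]
print([f for f in files if is_media_file(f)])  # ['assets/Logo.PNG', 'fonts/body.woff']
```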

Ok, so with 1% of Github in my hands, it starts to be interesting to make assumptions about the big picture.

Out of the 546,574 projects, only 52,564 have been forked at least once. That’s barely 10%. But those 10% have produced 276,118 forks. So maybe overall 30% of Github is forks and 6% is original projects that have been forked. Yeah, open source is hard. The rest is empty projects (20% of the originals I downloaded), deleted ones and the occasional spam.
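For the curious, those percentages come out of straightforward arithmetic on the sample (the post rounds them a bit):

```python
# Fork statistics from the 1% sample.
originals = 546_574
forked_originals = 52_564   # originals forked at least once
forks_produced = 276_118    # forks those originals generated

total = originals + forks_produced  # originals plus their forks

print(round(100 * forked_originals / originals, 1))  # 9.6% of originals got forked
print(round(100 * forks_produced / total, 1))        # ~33.6% of everything is forks
print(round(100 * forked_originals / total, 1))      # ~6.4% is forked originals
```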

Surprisingly, Github gets spammed, a little. And the not-super-smart spammers just fill the descriptions of their projects with trash content, which makes it easy for Github to spot, I guess. Why those spammy repositories are still available from the API is a mystery to me.

550,000 projects represent a total of 130,000,000 files, of which 12% are media files. Extrapolate this and Github might host more than 1.5 billion media files. Quite a resource, if only we could search through it. Anyway, as expected, the most popular media formats are PNG, JPG, GIF and SVG.
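The extrapolation is a naive linear scaling, sketched below. Note it leaves forks out, and forks duplicate many of these files, which is presumably what pushes the real figure past the 1.5 billion mark:

```python
# Extrapolating the media file count from the 1% sample.
files_in_sample = 130_000_000
media_share = 0.12
total_projects = 44_444_444
sample_projects = 546_574

scale = total_projects / sample_projects       # ~81x
media_estimate = files_in_sample * media_share * scale
print(f"{media_estimate:.2e}")  # on the order of a billion-plus, before counting forks
```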

This is understandable, as Github is the go-to place if you're into web design, whether it's JavaScript libraries, CSS frameworks or icon sets. Github also offers static website hosting that attracts a lot of people. But let's have a deeper look at the "others". What's popular and how does it break down?

What's interesting to see here is that after PDF, which Github allows you to view in the web interface, come two font formats (TTF and WOFF) that are also very popular with web designers but that, for some reason, Github does not display. Actually, the next format that Github offers a preview of comes in 11th position in this graph: the famous PSD. In between, we have many formats that could easily be previewed in a browser, but Github does not seem to care.

The little surprise here for me is the amount of OGG, MP3 and WAV files available. I certainly did not expect that. Seeing also that the ASSET file type is quite popular (a file format used in game design with Unity) and considering that game development tools overlap with web development tools these days, all of this starts to make sense. Sound is an important part of any interactive experience, be it a web/app interface or a game. Again, these sound files could easily be previewed in a browser.

Lastly, let's consider STL, the last file format displayed here (in 30th position). It's the common file format for exchanging object files used in 3D printing. Github has a preview for it and even shows some form of "3D diff" between commits. Great, but in 13th position we have OBJ, also an open 3D format, with 5 times more files on Github than STL. To my knowledge, it's not more complicated to display an OBJ file in the browser than an STL one. So what's the logic here?

To wrap this up, Github could do so much more, with not so much effort, to allow in-browser previews of some media file formats that matter to designers. Maybe the "licensing" trouble described at the beginning is not a bad supposition after all. I'd certainly be happy to hear Github's take on this. If you know anyone working there, thanks for forwarding these questions, and if anyone there is listening, I'd be pleased to dig more deeply into your data to better understand how designers (could) use your product.

Send us a picture of your laptop with stickers


For a research project with Dorothy Howard, we ask you to send us a picture of your laptop cover with stickers. There can be one or many stickers, as long as you agree to license the picture under a Creative Commons Attribution-ShareAlike license (or equivalent; Public Domain is fine too).

Write your name (how you want to be credited) in the image filename and drop it on https://balloon.io/laptopstickers.

You can also send it by mail to julien [a] xuv.be or tweet it with the hashtag #laptopstickers.

Thanks for spreading the word in your network and beyond.

Teasing: the most popular media file formats on Github

In my process of studying collaborative tools for designers, I took a deeper look at Github to find out how many media files were hosted there, of which types, etc. I'm just using the API provided by Github. No magic trick here, although it's a long process due to the API call limitations. There are 43,000,000 projects on Github, but I'm close to having gone over 1%, which is the lower limit I was aiming for before making any assumptions. So below is just a small infographic to tease you and make you impatient for the larger study I plan to release in a couple of days. Enjoy.



I also took the opportunity to test Infogr.am. Still not sure if I'll use their service for the upcoming article. Any suggestions or remarks?

Collaborative tools for designers − part 6 : the Githosters

The Githosters

In part 6 of this series of posts about my quest for tools that would encourage or facilitate collaboration with and between designers, I went over what I'm calling the Githosters, also known as Github, Gitlab or Bitbucket.

Github is so popular these days it might become a verb one day, and for some, there is still confusion between Git and Github. But since its beginnings in 2008, coders have embraced it. It's their social network. And they've been followed by a growing population that also sometimes works with coders: journalists, jurists, typographers, icon designers, cartographers, etc. So we've actually come a long way since SourceForge − once the "only" place to get open source software − and, I guess, for the better. Github also helped popularize Git and became a central place to experience "code based" development as a community. With a major web actor such as this one comes a list of alternatives, such as Bitbucket, born at pretty much the same time (using only Mercurial at the beginning, but later Git too) and, more lately, Gitlab, the open source equivalent that you can host on your own servers.

The crowd that has followed the coders is the one that interests me most. And I'm guessing the future of these tools depends on it. The winner of the next few years will be the one that manages to keep its crowd of developers while providing relevant tools for all the other creators that surround them. And we can see that some of those players have started to understand this.

Github visual diffing

Let's start with Github, since they are leading the pack. Github offers previews and version comparison for certain files, which means you can see how some of your graphic files have evolved through their commit history. They do this for psd, jpg, png, gif, svg and stl files. They also support rendering of pdf (but no diffing). It's worth noting that Github also renders IPython/Jupyter notebooks, csv tables and geojson map files (the latter with diffing), appealing there to data scientists and the like.

All of these features are great and can encourage non-coders to use the service. But it's limited to viewing a file, just one, and in some cases comparing 2 versions of that file. The rest of the interface is pretty much focused on code only. Coders might know what to do with a file just by seeing its name, but designers like previews and thumbnails. And none of this is available so far, resulting in many clicks to view the content of a folder and finally see the file you were looking for. I do understand that this is a tool for coders, but if Github wants to please another crowd, they might have to propose a different interface. I'm guessing they could do this through a desktop client − by improving the one they already propose, for example − or by providing a different web interface depending on your role. The issues and comments could also be enhanced to enable some form of "image annotation".

Bitbucket psd diff preview broken

Bitbucket, from the point of view of a developer, does not seem to lack features compared to Github. It maybe boils down to some interface choices, but overall they look the same to me (although I've not used Bitbucket extensively, just for this test). With graphic file handling, though, I had the feeling Bitbucket is still buggy or unpolished around the edges. They seem to go in the same direction as Github, but while psd files are supposed to be rendered, all I got was broken/empty previews. The rendering and diffing work for svg, jpg, png and gif. Good. But there is no pdf rendering, and most "binary" file formats spill their raw content when viewing commits. I know it's the default behavior of Git, but this creates pages and pages of gibberish content in the web browser that is totally unnecessary.

Like Github, Bitbucket proposes an in-browser file editor, if you quickly wish to correct a typo somewhere and commit immediately. But why offer this feature on image files? At first, I thought: "wow, could I edit my image files right in the browser with Bitbucket?". The answer is no: you'll see a series of characters (Base64? or ciphered EXIF data?) that you'd better not touch if you don't want to make your files permanently unreadable.

Bitbucket also offers a desktop client, but I didn't take the time to install it, as it looked like a Soyuz cockpit again and also seemed to focus only on coders.

Gitlab file viewer

Lastly, let's go over Gitlab CE, perceived as the open source clone of Github − but I guess that's what most of us expect. Their Community Edition (CE) is a full-featured "Github clone" that you can install on your own server. So it's pretty convenient if you care about secrets or the indieweb. Unfortunately, they support fewer features regarding image handling. You'll get a preview of most web file formats (jpg, png, gif, but not svg) and visual diffing for those. But no pdf viewer (yet) or other fancy stuff. Since Gitlab is open source, we could drive it in the direction we'd like, or at least make propositions, or fork it to bring it in a more designer-friendly direction. Although forking is maybe too far-fetched, as they seem to be very open to new propositions.

Even if these 3 solutions look a little poor relative to what we've seen in previous posts about collaborative tools for designers, I'm definitely more in favor of a Git-based solution than a Dropbox (or similar) one. What Dropbox misses totally is the collaborative work between designers and developers. And devs will never use Dropbox to version or share their code, while designers, provided with the right interface, could totally switch to Git. In that sense, for those looking for an easy way to collaborate with developers on Git repos without learning any Git commands, I recommend SparkleShare. It's kind of like Dropbox, but with a Githoster backend. Though you'd better warn your devs that you're going to use it, because your Git commits are going to be pretty dull and voiceless. For the rest, it'll be as simple as saving your files from your preferred graphical application into the right folder. Everything else, such as pulling, committing and pushing, is handled by SparkleShare, running in the background.
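For the curious, the kind of loop such a client runs in the background can be sketched in a few lines of Python. This is my own illustration of the idea, not SparkleShare's actual code; the commit-message format in particular is invented (and shows why those auto-generated commits read so dull):

```python
# Sketch of a Dropbox-style auto-sync loop over a Git repository,
# in the spirit of what a tool like SparkleShare does in the background.
import subprocess
import time

def auto_commit_message(changed_files):
    # Machine-generated, "voiceless" commit message: just list the files.
    return "Updated " + ", ".join(sorted(changed_files))

def sync_once(repo_path):
    # Stage everything, commit if anything changed, then pull and push.
    subprocess.run(["git", "-C", repo_path, "add", "--all"], check=True)
    status = subprocess.run(["git", "-C", repo_path, "status", "--porcelain"],
                            capture_output=True, text=True, check=True)
    changed = [line[3:] for line in status.stdout.splitlines()]
    if changed:
        subprocess.run(["git", "-C", repo_path, "commit", "-m",
                        auto_commit_message(changed)], check=True)
    subprocess.run(["git", "-C", repo_path, "pull", "--rebase"], check=True)
    subprocess.run(["git", "-C", repo_path, "push"], check=True)

def watch(repo_path, interval=30):
    # Poll forever, like the background client does.
    while True:
        sync_once(repo_path)
        time.sleep(interval)
```

The designer just saves files into the folder; the loop turns every save into a commit and pushes it to whichever Githoster backs the repo.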

That’s it for this part. I know there are some other Githosters out there. I did not want to go over them all, just the most popular ones. But if I missed one that has some great features or that is much more promising for designers, don’t hesitate to drop a line in the comments or contact me.

Rendering WordWars on radiators

Qarnot Blender Interface

TL;DR: Testing the private beta Python API from Qarnot

About a month ago, while keeping an eye on the Libre Software Meeting (RMLL) in Beauvais and watching the news about WordWars spread on some blogs and magazines, I stumbled on a video presentation about open source software tools for animators and Blender 3D rendering using Qarnot (in French). And I was struck.

Qarnot Computing is a high-performance computing service provider, which means they can do a lot of complex calculations, super fast, using a large number of computers. This service, of course, is not new. For example, 3D content creators often use such external services, also called "render farms". Some 3D renderings require a lot of computing power: complex scenes, long movies or highly detailed realistic simulations. By externalizing that work to a large amount of processing power, it can be done in a fraction of the time it would require on a single machine.

Where Qarnot stands out from its competitors is that instead of spending a lot of money on cooling and infrastructure, they decided to use the excess heat produced by those computers to heat private houses and buildings. So instead of one giant warehouse with thousands of computers that need to be constantly refrigerated, they have installed a network of "radiators", made of the same computers, in hundreds of private interiors. No more energy is required to cool down the computers, the heat is used to make a home comfortable, and no more horrible window-less farms either: the infrastructure is distributed across a city. Using this approach, they claim to have reduced the carbon footprint of such services by 78%. And I could not be happier to test this on my own projects.

The next cool thing about Qarnot is that they provide rendering services for Blender. The Blender Foundation uses them for their latest open movie projects. They offer some free credits to test their services. And they are cheap too. All of these were good reasons to give them a try.

My WordWars project uses Blender to render the daily war news from The New York Times into a movie that looks like the intro scene from Star Wars. So every day, a new Blender file is generated and rendered. On my home machine, it takes about 1h30 to render, but on my dedicated server, it takes up to 8 hours. And I'm running this project from the server because I want to keep my home computer for other things; it's also not constantly up or connected. The server, well, is up 24/7. So I decided to contact Qarnot.

I needed to get in touch with them because, so far, the only way to launch a rendering using their system is via a web interface. That's really practical for a one-time project, but a file that automatically updates every day needs a different approach. I do want to keep things automated. So I asked if they had any API, any way to interact with the rendering farm via scripts. And as a response, I was happy to get invited to test their Python API, still in private beta.

They provided me with the necessary Python files, documentation, and a small but feature-complete example that I could start with. It took me a while to figure out how to operate it correctly, though. Maybe because I had never interacted with a render farm before, or maybe because I got a bit confused by some of the parameters and the limitations of the private beta.

One thing I found confusing was the 'framecount' parameter. The 'advanced_range' indicates which part of the movie you want to render, but I did not understand, at first, how this related (or not) to 'framecount'. 'framecount' is also named 'frame_nbr' in another function ('create_task'), and that also puzzled me for a while as to what this variable really represented. After testing different approaches, I understood 'framecount' as the maximum number of frames you want to process at the same time (the whole point of speeding up the rendering process). I say maximum because it "feels" different depending on the amount of processing power available for the task. In the end, the whole range of frames you asked for will be rendered; it might just take more time.
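To make that concrete, here is my mental model of it, sketched in plain Python. The parameter name mirrors the beta's, but this is my own illustration of the behavior as I understood it, not Qarnot's implementation:

```python
# My mental model of 'framecount': the whole range still gets rendered,
# but at most 'framecount' frames are dispatched to the farm at once.
def plan_batches(first_frame, last_frame, framecount):
    # Split an inclusive frame range into batches of at most 'framecount'.
    batches = []
    start = first_frame
    while start <= last_frame:
        end = min(start + framecount - 1, last_frame)
        batches.append((start, end))
        start = end + 1
    return batches

# A 900-frame clip with framecount=300 becomes three batches.
print(plan_batches(1, 900, 300))  # [(1, 300), (301, 600), (601, 900)]
```

With more processing power, more of those batches run in parallel; with less, they queue up, which is why the same settings can "feel" faster or slower.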

As this is still a private beta, I could only get access to some of their processing power and was limited in storage space for the resulting renderings (limitations that you normally don't have). So in the end, I could only run portions of my project − 300 frames per task − and that would take around 10 minutes to process (rendering the whole clip would take a little more than an hour using this method). So I'll keep all these as tests for now and wait for the public release, which they should announce by the end of this year. Testing the same render (full power) using their web interface, the whole clip was rendered in less than 10 minutes. And the cost of the service would be around 0.75€ for that particular case.

So using Qarnot, I could be delivering a WordWars video clip 10 minutes after the news has come out, at an affordable cost of less than a euro and without feeling guilty about double-heating the planet.

I'm also guessing that, since the Python API they provided runs under Python 3.4, you could expect a Qarnot plugin that integrates directly into Blender's interface. And if they are not the ones to create it, someone will, for sure.

I want to thank them for allowing me to test this and for the patience with which they responded to my many questions. I also wish them success, as this is truly an inspiring way to build a computing company.

My collection of stickered laptops

An Instagram of laptops with stickers

Among all the odds and ends I keep feeding on the web, there is, among other things, this Instagram account, started in Leipzig a little over a year ago. There, as my wanderings allow, I collect laptop covers covered in stickers, a fairly widespread practice among computer tinkerers.

Mine has already been compared to a skateboard deck. I mostly see in it the urge to personalize a mass-produced object whose color rarely strays from the shimmering palette of a flat-country sky. Maybe it's also a form of tribal tattoo, a rite of demystifying the machine, or the need to leave your tag, your mark, that of your favorites, to throw out a slogan and give the thing a look that might discourage a thief.

A big finger in the eye of Instagram, kingdom of the pretty picture, for forcing me to create an account when all I wanted was access to its data.

And as Wily puts it:

Collaborative tools for designers − part 5 : Adobe Creative Cloud

Adobe Creative Cloud App

Episode 5 already. Damn, so many collaborative tools for designers. Yes. And this post is about the one everybody talks about, even though what they are actually referring to is something else: Adobe Creative Cloud. Let me rephrase that. When you tell people you are trying out Adobe's cloud, they think you are using Photoshop or something. They don't picture that what you are actually studying is the file hosting and syncing with Adobe's servers. You know, the thing that works like Dropbox. Well, unless I've searched badly in the giant list of Adobe's applications, the only tool they advertise as doing collaboration and versioning is Adobe Creative Cloud. Yes, the syncing app with the "real" cloud. No, not Photoshop. Your files, on their computers. You get it.

And don't tell me it's called Adobe Version Cue. Has anyone ever used that? I tried. A long time ago. And I did not manage to make versions of my files with it. So I gave up. And so did Adobe.

As I drifted away from Adobe products after the first Creative Suite versions, I really thought that people were actually using the "cloud" services from Adobe, so I was surprised to hear that none of my friends were actually using the 2GB of "free" hosting offered with their subscription. So to complete this review, I asked some friends if they wanted to test it. Sébastien Monnoye, teacher and designer, who had also never pushed files to the Creative Cloud before, answered the call, and we started sharing folders together and exploring the tool.

AdobeCC Hello World

One thing is for sure, and you guessed it: it treats Adobe's file formats very well. The rest, not so much. You can, of course, see a preview of .psd and .ai files in the browser. But you can also break down that preview by turning the individual layers of those files on and off. Neat. With the click of a button you can extract certain elements from a .psd or a vector file and add them to a library of assets. These libraries are accessible right from inside Photoshop or Illustrator, so you can just drag and drop from them into a new design. It really speeds up the reuse of elements you might have made in previous projects. (They also propose buying and downloading existing libraries of assets, but I did not look into that too much, or into how you could submit your own designs.) And lastly, for every element in a design that you can select, they show you the CSS rules you would need to write to make that same element behave the same way in a web page. Kind of nice. Especially for positioning.

Positioning from Adobe Creative Cloud

So, for every (Adobe) file you upload, you can see some metadata, quickly access the major colors present in the design, leave comments for collaborators, extract assets, share the design on Behance and see the previous versions of the file. But don't expect a reliable versioning system here. After 10 days, old versions are trashed.

After 10 days, we delete the past versions from the servers. If you know a version is going to be important, we suggest you make a copy of the current file you’re working on and save it with a new name. Then restore the original file back to the revision you care about. This way both copies become their own unique file that won’t be deleted automatically.

10 days is… ridiculous. And the solution proposed here kills all the benefits of using a versioning system. So let's call this a short-term memory backup, shall we? Nothing more. Why would you want to come back to a previous version of a design after 10 days, right?

Anyway, now on to the weird parts.

You know Unix's philosophy: everything is a file. Well, in the Adobe cloud, everything is an image. And if it's not, it's a blob. So yes, any .doc or plain .txt file is rendered as an image… making it − if not unreadable, at least − unusable. HTML, CSS and JavaScript files are blobs. No way to see what's inside them from the web interface. No versioning either (sorry, short-term memory backup) for those. I'm not making this up. This is super strange. Adobe has a long list of tools related to web design, but displaying code in a web interface and versioning it, why would you want that, right?

AdobeCC file viewer

In the little application you install on your desktop, you get real-time notifications when a collaborator leaves a comment, but they don't tell you which file has changed or who made the changes. So apart from the comments, the app is just there to do the syncing and let you access the Adobe market for apps, fonts, stock photos and asset libraries. You can also see a preview of what's trending on Behance. (Mmmh, actually, there is a way to see which files have changed, but good luck finding where that is. Tip: it's not in the notification window.) So to sum it up, this is more like a marketplace for Adobe products than the tool to communicate and collaborate I thought it could be.

I've only tested graphic files. More tests could be made with InDesign, Premiere, After Effects or Flash files. But I'm kind of focusing on a web designer's point of view. So I also tested some other file types: .odf (LibreOffice), .xcf (Gimp) and .py (Python), and unsurprisingly, they are all treated as blobs. SVG is a special case. SVG files are rendered correctly, but the features you get with them are very different depending on whether you saved them from Illustrator or from Inkscape, for example. The latter getting fewer features, of course.

To wrap up, I'd say that the asset extraction tool is a really nice productivity improvement, but the syncing and collaborative features are behind what Dropbox is proposing. And since Dropbox just acquired Pixelapse, they really aim at offering an even better experience to designers, if they are not the tool of choice in that field already. Adobe is forcing its tool on everybody, of course, because you need that little app to install Photoshop and the others, but they need to polish it a lot more to get designers to give them their files. And I don't think that the "Publish on Behance" button is really a key here. But let's see how that evolves over time.

(Next episode: the Githosters)

Collaborative tools for designers: part 4

TL;DR: Tools I will not review and why.

Three weeks ago, I started a series of publications about tools that encourage collaborative practices for designers and that use some form of versioning at their core. You can read the previous chapters, and especially the first one, to understand what I'm looking for (links to part 2 and part 3). In this fourth episode, I'm just going to list tools I will not review thoroughly but that I felt I should mention because they exist (or existed) for certain reasons. If you have experience with some of them, I'd certainly be happy to hear about it.


Perforce Helix

Used by blockbuster media companies (such as Rockstar Games or Disney, for example), it's a complete project, team, assets and source control management solution aimed at very large studios with hundreds or thousands of employees, and it's apparently great at handling large (sets of) media files (at least, that's how it was once presented to me). The licensing fees are in line with that description. It's worth noting that the latest presentation of Perforce Helix mentions Git integration for the developers on the team. Also worth watching is this video showing how their image diff tool works.


Although, like Pixelapse, I had them on my watch list, I never took the time to dig deep into their product. And unfortunately, as I understand it from the description on their website, the company went bankrupt 6 months ago. Too bad: when you see Kelly Sutton present their product just 2 years ago, it looked very interesting. Their "wormhole" tool (at 6:00) is really a nice approach to seeing "what changed and when" in an image.


I hesitated for a while: should I review this tool or not? They call themselves "The world's leading prototyping, collaboration & workflow platform", and looking at their client portfolio, they can surely claim that title, I guess. But they focus mainly on screen-based media and interactivity, or so it seems, and their level of version control is no better than Dropbox or similar. So the designers targeted here are "application" designers, UX/UI designers,… and I'm looking for a less opinionated tool. Anyway, if you want a quick glance at what this tool does, here's a short introductory video. Certainly a great tool to review designs, manage projects & teams and test user experience. Just not what I'm looking for atm.

Visual Culture

Visual Culture is a server software developed by Open Source Publishing to visualize Git repositories. Since their design practice involves doing things openly and in the open, their website runs on it and lets visitors browse their latest projects and see how those are made. Their intention was to turn it into a general tool for designers, and they opened a crowdfunding campaign to achieve that goal, but it did not receive enough support. I did try to install it to see how I could use it on my own projects. With help from Colm, Stéphanie and Eric, I worked through a couple of bugs and misunderstandings on my part. And I could see a recap' page of what I had in my folders. Unfortunately, that's as far as I managed to go. I lack the Django knowledge to adapt the necessary bits so it would behave nicely with my projects as, at this stage, Visual Culture is still very closely knitted to (how) OSP(‘s) work.

visual culture on my repos

Come back next week for another chapter.