s2e20: Dreaming of writing; Just some notes 

by danhon

0.0 Sitrep

I spent the weekend looking at tractors, or, rather, chasing after my son who wanted to go look at tractors. I’ve seen a lot of tractors now, lots of old ones, a few new ones, and even some tank-chairs, which are like wheelchairs but with treads and a way for you to mount your rifle to the armrest so you can go out hunting and enjoy sports if you’re a disabled veteran. I also spent the weekend wrestling with Amazon’s AWS. But more on that…

1.0 Deep Writing

I have a late 2013 13in Retina MacBook Pro, which means that instead of a discrete graphics card (i.e. a separate one, from a company like nVidia or AMD/ATI) it has one that’s integrated on the mainboard, made by Intel. Traditionally, these integrated graphics cards haven’t been any good or worth writing home about – all the progress and power in 3D acceleration has been in those discrete cards. But slowly, Intel’s been getting better, to the extent that most Macs now ship with an Intel built-in graphics processing unit (a GPU). The one in my laptop is an Intel Iris, the second generation of which Apple saw fit to put in a Retina-class 13in laptop. (The 15in and bigger laptops normally have discrete GPUs as well as built-in ones, because you can fit more battery in a bigger laptop.)

All this is to say that while the GPU in my laptop is good for, say, looking at webpages, it’s not *great* at doing 3D graphics stuff. Incidentally, there was a time when “3D graphics stuff” meant computer games like Quake and, more recently, Call of Duty or Assassin’s Creed, but nowadays the term’s starting to get a bit confusing because we’ve got *actual virtual reality headsets now*, and those headsets simulate 3D by rendering one image for your left eye and another for your right eye, so the “3D graphics stuff” requirements are at least doubled. Anyway: the thing you need to know about all this 3D graphics stuff and the GPUs that do it is that 3D graphics is a long list of calculations on a bunch of numbers – matrix transformations and vectors. You take one bunch of numbers and do the same thing to all of them. And then again. And again. Hopefully at least 30 times a second. It turns out that *that* kind of maths – vector maths or matrix maths – also happens to be the kind of stuff that neural networks do, or can do. Which is why nVidia, a company that started out helping people shoot things in games, is now the main provider of the processing power behind deep-learning and artificial intelligence efforts.
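To make that concrete, here’s a tiny Python sketch – nothing to do with the actual tools involved, just an illustration – showing that rotating a 3D point and evaluating one layer of a neural network really are the same operation underneath: a matrix multiplied by a vector. (The weight numbers are made up.)

```python
# Both 3D graphics and neural networks boil down to the same thing:
# multiply a matrix by a vector, over and over and over.

import math

def matvec(m, v):
    """Multiply a matrix (a list of rows) by a vector."""
    return [sum(a * b for a, b in zip(row, v)) for row in m]

# Graphics: rotate a 3D point 90 degrees around the z-axis.
theta = math.pi / 2
rotation = [
    [math.cos(theta), -math.sin(theta), 0],
    [math.sin(theta),  math.cos(theta), 0],
    [0,                0,               1],
]
point = [1.0, 0.0, 0.0]
rotated = matvec(rotation, point)  # ends up at roughly (0, 1, 0)

# Neural nets: one layer is the same matrix-vector product,
# followed by a squashing function (here, tanh).
weights = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]]  # made-up numbers
activations = [math.tanh(x) for x in matvec(weights, point)]
```

The GPU’s whole job is doing thousands of those `matvec` calls in parallel, which is why it doesn’t care whether the numbers represent polygons or neurons.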

Fortunately, you don’t need to go out and build or buy a computer with one of these nVidia GPUs in it. Amazon has got a bunch in their cloud that you can use, and they cost about $0.70 an hour to play with. So I spent most of the weekend trying to set up a computer in the cloud to dream about my newsletter instead of getting my laptop to dream about my newsletter because my laptop literally requires *the entire night* to dream, which is appropriate, I suppose, for sufficiently-complex-dreaming.

But anyway, just so you know: I completely failed. I’m back to trying to do all this stuff on my laptop. The reason why is:

– first, you have to get a Linux instance up and running on EC2 (Amazon’s cloud), which is fine and easy
– then you have to log in through the terminal over SSH, which is also fine and easy and also cool-looking, because you can pretend you’re Crash Override from the movie Hackers.
– anyway, now you need to install the nVidia GPU drivers and  ¯\_(ツ)_/¯
– and then you need to test them so  ¯\_(ツ)_/¯
– and now you need to make sure you’re using Ubuntu because the easy Torch installer requires Ubuntu
– so you start again, this time with Ubuntu
– and then you install the nVidia GPU drivers again which means looking up a whole bunch of documentation
– oh wait you need to install the build tools
– and now you need to disable some modules that conflict with the GPU drivers
– and now you need to install Torch again
– wait, but that didn’t work
– probably because of some other dependency

See, if I’d managed to get this all working, at some point I would’ve just clicked a button in the management console and selected “make an image out of this” and I’d be able to link to that image and tell you all: hey, just stick some text in this file over here and then wait and then you’ll get some dreamed text. But I couldn’t, so you can’t.

Instead, you get this, which is a bunch of stuff that my laptop dreamed up:

The internet of things that I think it’s been a signal that I think I’m strugged in a corporation system in the first place. On the other hand, the day that I was at the end of a plane, and I don’t spend me that a bit like a sort of second-order terms of a different connectivity to be sure, but when you have a feeling of a startup is a bit about the internet of things that are super excited about how much I said to be a company, and I’ll just be a ballo serendipity of a company to the internet for the other day to be continuing to the opportunity in the internet, it’s a serious about the world.

which on first glance looks like exactly the kind of thing I’d write. That’s a bit disturbing, because it was generated by a 400-neuron, 3-layer recurrent net that taught itself English from scratch, only from reading my newsletter, and the thought that I could be replaced by it is frankly a little unsettling. Look, here’s some more:

The fact that I was thinking about this is the last ten years. If you’re a lot more, but only because it doesn’t matter that I’d eat, those ships or even holiday, and while someone does what I probably be somewhat concept of the world that we have the state of the state of a consumer physical program. But it’s not like this is the product of the stages of focusing on an almost-criticalism in my hands because the work can be presented as interesting and solved with a project that *does* it matters to be accomplished and that most of the time again is you can look at the productivity game device has a platform and then all the stuff like cheaper who has the prompts that I liked when we were talking about when you want to know what they’re doing a great as an author and then explicit and satisfying on their days in my head. I keep me to spend the last twenty years of the next thing on the internet. And it’s not like that’s a supply-chain it doesn’t see something that can be more effice to across a bad way – and the internet could be replying to me that more complicated and products and services and making robots might fit in it and show differently, more consumer games and service in what they’re doing. And then one of the first things on both shit’s along them in a bit of a master, intercont and one of the more powering and entertainment or attention. In a way, that solved a new game arda of a number of people at the moment, rapped minimum viable cars (and just make a different management” or even disappeared in which shows a new organial infrastructure that makes sense to rely on using a sort of steveorable to garger happening into a service that infrastructure to have tracking an agency, waiting to make sense of products and settings and then makes sense to left a bit shit. It turned out that the city is being spent to play a distributed tradition.

I may as well give up at this point.

Anyway, there’s a whole bunch of this stuff and I put a whole load more into a Twitter account called @r_danhon[0] that tweets out snippets of duly-dreamed text that you can follow, should you wish.

[0] dan hon is dreaming (@r_danhon) | Twitter
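For the curious: the “dreaming” loop is conceptually simple – the network learns, for each context, a probability distribution over what character comes next, and generating text is just predict, sample, feed the sample back in, repeat. The real thing is a Torch/Lua LSTM; here’s a toy Python stand-in that uses simple bigram counts (an assumption on my part, vastly cruder than the actual net) just so the sampling loop is visible end to end:

```python
# Toy character-level text "dreaming": learn next-character
# probabilities from a corpus, then sample in a feedback loop.
# A real recurrent net conditions on much more than one character;
# this bigram version only shows the shape of the generation loop.

import random
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each character, what characters follow it."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(text, text[1:]):
        counts[prev][nxt] += 1
    return counts

def dream(counts, seed, length, rng=random):
    """Predict a distribution, sample a character, feed it back."""
    out = [seed]
    for _ in range(length):
        options = counts.get(out[-1])
        if not options:
            break  # dead end: nothing ever followed this character
        chars, weights = zip(*options.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

corpus = "the internet of things that i think about the internet "
model = train_bigrams(corpus)
generated = dream(model, "t", 40)
```

Each run produces different almost-English; swap the bigram table for a trained recurrent net and you get the kind of output quoted above.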

2.0 Just some notes

Some things from my pile of notes to myself, and pile of notes from you:

– I bookmarked, and Deb Chachra sent me, Natalie Kane’s must-read piece about “means-well” technology[0], which is a much better-written piece than I would’ve done about what happens when things are built and put into the world without considering their repercussions or actions more widely than in a narrow scope. “Means Well” is a polite name for this kind of stuff because I suspect if you’ve been on the receiving end of that type of well-intentioned-but-actually-shit, half-assed technology, then you’re just going to get angry.

– Also, Deb Chachra with a note about Sara M. Watson’s article from 2014[1] which predates my reckon from last week about what happens when fitness trackers encounter someone who isn’t trying to do more. This ties in with the whole genre of means-well technology, that kind of ill-thought-through “yeah! crushed it!” that you receive when you hit a goal when really the designers never anticipated that a tracker might be used by someone who needs to adjust their activity downward. Or that maybe they just did too much. Have you seen an activity tracker say “Hey, looks like you’re overdoing it a bit. That was great, but maybe take it a bit easier tomorrow?” No: they’re all in the mode of increasing performance over time. There is no such thing as management. Of the steady state.

– I’ve had Dan Hill’s Dark Matter and Trojan Horses[2] on my Kindle for a while and still haven’t gotten around to finishing it, but needless to say the bit that I’m currently on (after Hill reminded me that I should read it, which I should) was about how hard it is even just to figure out what the problem is, where it lives, and where we might do something about it, given the ever-complicated (and true) systems we’re embedded in. Take the fitness tracker example: there’s clearly no incentive for anyone to have designed a fitness tracker that doesn’t push you to some sort of personal achievement nirvana, right? Despite lots of people wanting that? Where is that all coming from?

[0] “Means Well” Technology and the Internet of Good Intentions — Thingclash
[1] Stepping Down: Rethinking the Fitness Tracker – The Atlantic
[2] Dark Matter and Trojan Horses – Dan Hill

10:04pm and I’m teaching my network again.