There are a load of generic tv boxes on ebay with an Intel x5-Z8350
processor.
The x5 SOC (Cherry Trail, also known as Cherry View) is the same family
as the SOC in my beloved GPD Pocket. I was having real trouble with the
i2c hardware in the GPD Pocket and wanted something I could take apart
and poke with an oscilloscope. I looked first at the UP Board, an
x5-Z8350 in a raspberry pi form factor, but not only was it much more
expensive than this tv box, it also has a CPLD between the SOC io and
the pin header.
Installing FreeBSD
First I needed to get the board to boot from USB. The listing I bought
came with both android and windows 10 (I guess that is what dual os
means), and both include a handy reboot-to-other-os application. From
installing on the GPD Pocket I suspected that the bios boot menu key
would be F7, so I used that. Windows 10 also includes a handy
reboot-to-uefi-config option which makes it easy to get into the bios
menu; I used it to disable quiet boot and set the boot delay to a more
sensible number.
With those changes I rebooted, got a familiar AMI bios boot screen, hit
F7 and chose my usb stick from the menu. The FreeBSD loader menu came up
and continued into a boot from the usb stick, but it hung probing ppc0.
I found a solution in a freebsd forum post about the UP Board, which
suggested running:
OK unset hint.uart.1.at
at the loader prompt. With that I could boot and do an install.
Before you reboot, make that change permanent by commenting out the uart
line in /boot/device.hints :
...
hint.sc.0.flags="0x100"
#hint.uart.0.at="isa" # comment this line out
hint.uart.0.port="0x3F8"
hint.uart.0.flags="0x10"
...
Now reboot.
Setup
I set up the drm-next-kmod driver, but the machine froze during boot.
Next I tried the scfb frame buffer driver, which requires the following
config in /usr/local/etc/X11/xorg.conf.d/driver-scfb.conf :
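The usual scfb configuration is a minimal Device section (this is the
standard shape of the file; the identifier is just an example):

```
Section "Device"
    Identifier "Card0"
    Driver "scfb"
EndSection
```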
Pressing the reset button caused an instant power cycle.
- 4 usb ports
  - 1 USB 3
  - 2 external USB 2
  - 1 internal USB 2
- sd card reader (doesn't seem to be hotpluggable)
- ethernet
- hdmi
The x5 box also has bluetooth and wifi, but neither currently have FreeBSD
drivers.
Internally there are a whole bunch of unpopulated things that might be
interesting.
On the top left there is an unpopulated 2.54mm pin header slot next to
the led; the silkscreen on the board has a 1 and a 7 on either end.
Probing around with a multimeter suggested that P7 was ground.
I spent quite a while poking the board with a multimeter and oscilloscope to see
if any gpio or buses were exposed on the headers or the board. I did find that
if you connect pin 1 to gnd (or pin 7) the red led comes on and the board goes
off.
I did not find any useful or even really interesting signals.
On the bottom right there is an unpopulated 15 pin header; all but two
of these were connected to ground.
MEAT
Some more gory insides:
Copyright (c) 1992-2018 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
The Regents of the University of California. All rights reserved.
FreeBSD is a registered trademark of The FreeBSD Foundation.
FreeBSD 12.0-CURRENT #0 r328126: Thu Jan 18 15:25:44 UTC 2018
root@releng3.nyi.freebsd.org:/usr/obj/usr/src/amd64.amd64/sys/GENERIC amd64
FreeBSD clang version 6.0.0 (branches/release_60 321788) (based on LLVM 6.0.0)
WARNING: WITNESS option enabled, expect reduced performance.
VT(efifb): resolution 1920x1080
CPU: Intel(R) Atom(TM) x5-Z8350 CPU @ 1.44GHz (1440.00-MHz K8-class CPU)
Origin="GenuineIntel" Id=0x406c4 Family=0x6 Model=0x4c Stepping=4
Features=0xbfebfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,DTS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE>
Features2=0x43d8e3bf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,SSE4.1,SSE4.2,MOVBE,POPCNT,TSCDLT,AESNI,RDRAND>
AMD Features=0x28100800<SYSCALL,NX,RDTSCP,LM>
AMD Features2=0x101<LAHF,Prefetch>
Structured Extended Features=0x2282<TSCADJ,SMEP,ERMS,NFPUSG>
VT-x: PAT,HLT,MTF,PAUSE,EPT,UG,VPID
TSC: P-state invariant, performance statistics
real memory = 2147483648 (2048 MB)
avail memory = 1946144768 (1855 MB)
Event timer "LAPIC" quality 600
ACPI APIC Table: <ALASKA A M I >
WARNING: L1 data cache covers fewer APIC IDs than a core (0 < 1)
FreeBSD/SMP: Multiprocessor System Detected: 4 CPUs
FreeBSD/SMP: 1 package(s) x 4 core(s)
random: unblocking device.
ioapic0 <Version 2.0> irqs 0-114 on motherboard
SMP: AP CPU #2 Launched!
SMP: AP CPU #1 Launched!
SMP: AP CPU #3 Launched!
Timecounter "TSC" frequency 1440001458 Hz quality 1000
random: entropy device external interface
netmap: loaded module
[ath_hal] loaded
module_register_init: MOD_LOAD (vesa, 0xffffffff80ff8620, 0) error 19
random: registering fast source Intel Secure Key RNG
random: fast provider: "Intel Secure Key RNG"
kbd1 at kbdmux0
nexus0
cryptosoft0: <software crypto> on motherboard
acpi0: <ALASKA A M I > on motherboard
Firmware Error (ACPI): Failure creating [BDLI], AE_ALREADY_EXISTS (20180105/dswload-498)
ACPI Error: AE_ALREADY_EXISTS, During name lookup/catalog (20180105/psobject-371)
ACPI Error: AE_ALREADY_EXISTS, (SSDT: DptfTab) while loading table (20180105/tbxfload-355)
ACPI Error: 1 table load failures, 8 successful (20180105/tbxfload-378)
acpi0: Power Button (fixed)
unknown: I/O range not supported
cpu0: <ACPI CPU> on acpi0
cpu1: <ACPI CPU> on acpi0
cpu2: <ACPI CPU> on acpi0
cpu3: <ACPI CPU> on acpi0
attimer0: <AT timer> port 0x40-0x43,0x50-0x53 irq 0 on acpi0
Timecounter "i8254" frequency 1193182 Hz quality 0
Event timer "i8254" frequency 1193182 Hz quality 100
atrtc0: <AT realtime clock> port 0x70-0x77 on acpi0
atrtc0: Warning: Couldn't map I/O.
atrtc0: registered as a time-of-day clock, resolution 1.000000s
Event timer "RTC" frequency 32768 Hz quality 0
hpet0: <High Precision Event Timer> iomem 0xfed00000-0xfed003ff irq 8 on acpi0
Timecounter "HPET" frequency 14318180 Hz quality 950
Event timer "HPET" frequency 14318180 Hz quality 450
Event timer "HPET1" frequency 14318180 Hz quality 440
Event timer "HPET2" frequency 14318180 Hz quality 440
Timecounter "ACPI-safe" frequency 3579545 Hz quality 850
acpi_timer0: <24-bit timer at 3.579545MHz> port 0x408-0x40b on acpi0
pcib0: <ACPI Host-PCI bridge> port 0xcf8-0xcff on acpi0
pci0: <ACPI PCI bus> on pcib0
vgapci0: <VGA-compatible display> port 0xf000-0xf03f mem 0x90000000-0x90ffffff,0x80000000-0x8fffffff at device 2.0 on pci0
vgapci0: Boot video device
xhci0: <Intel Braswell USB 3.0 controller> mem 0x91700000-0x9170ffff at device 20.0 on pci0
xhci0: 32 bytes context size, 64-bit DMA
usbus0 on xhci0
usbus0: 5.0Gbps Super Speed USB v3.0
pci0: <serial bus, USB> at device 22.0 (no driver attached)
pci0: <encrypt/decrypt> at device 26.0 (no driver attached)
pcib1: <ACPI PCI-PCI bridge> at device 28.0 on pci0
pci1: <ACPI PCI bus> on pcib1
re0: <RealTek 8168/8111 B/C/CP/D/DP/E/F/G PCIe Gigabit Ethernet> port 0xe000-0xe0ff mem 0x91604000-0x91604fff,0x91600000-0x91603fff at device 0.0 on pci1
re0: Using 1 MSI-X message
re0: turning off MSI enable bit.
re0: Chip rev. 0x4c000000
re0: MAC rev. 0x00000000
miibus0: <MII bus> on re0
rgephy0: <RTL8251/8153 1000BASE-T media interface> PHY 1 on miibus0
rgephy0: none, 10baseT, 10baseT-FDX, 10baseT-FDX-flow, 100baseTX, 100baseTX-FDX, 100baseTX-FDX-flow, 1000baseT-FDX, 1000baseT-FDX-master, 1000baseT-FDX-flow, 1000baseT-FDX-flow-master, auto, auto-flow
re0: Using defaults for TSO: 65518/35/2048
re0: Ethernet address: 84:39:be:65:0d:60
re0: netmap queues/slots: TX 1/256, RX 1/256
isab0: <PCI-ISA bridge> at device 31.0 on pci0
isa0: <ISA bus> on isab0
acpi_button0: <Power Button> on acpi0
acpi_tz0: <Thermal Zone> on acpi0
sdhci_acpi0: <Intel Bay Trail/Braswell eMMC 4.5/4.5.1 Controller> iomem 0x9173c000-0x9173cfff irq 45 on acpi0
mmc0: <MMC/SD bus> on sdhci_acpi0
sdhci_acpi1: <Intel Bay Trail/Braswell SDXC Controller> iomem 0x91738000-0x91738fff irq 47 on acpi0
mmc1: <MMC/SD bus> on sdhci_acpi1
uart0: <16550 or compatible> port 0x3f8-0x3ff irq 4 flags 0x10 on acpi0
atkbdc0: <Keyboard controller (i8042)> at port 0x60,0x64 on isa0
atkbd0: <AT Keyboard> irq 1 on atkbdc0
kbd0 at atkbd0
atkbd0: [GIANT-LOCKED]
atkbdc0: non-PNP ISA device will be removed from GENERIC in FreeBSD 12.
est0: <Enhanced SpeedStep Frequency Control> on cpu0
est1: <Enhanced SpeedStep Frequency Control> on cpu1
est2: <Enhanced SpeedStep Frequency Control> on cpu2
est3: <Enhanced SpeedStep Frequency Control> on cpu3
Timecounters tick every 1.000 msec
ugen0.1: <0x8086 XHCI root HUB> at usbus0
uhub0: <0x8086 XHCI root HUB, class 9/0, rev 3.00/1.00, addr 1> on usbus0
mmcsd0: 31GB <MMCHC NCard 4.5 SN 6E7E9160 MFG 06/2017 by 136 0x0003> at mmc0 200.0MHz/8bit/8192-block
mmcsd0boot0: 4MB partion 1 at mmcsd0
mmcsd0boot1: 4MB partion 2 at mmcsd0
mmcsd0rpmb: 4MB partion 3 at mmcsd0
mmc1: No compatible cards found on bus
WARNING: WITNESS option enabled, expect reduced performance.
Trying to mount root from ufs:/dev/mmcsd0p2 [rw]...
uhub0: 13 ports with 13 removable, self powered
lock order reversal:
1st 0xfffff8000417e240 ufs (ufs) @ /usr/src/sys/kern/vfs_subr.c:2607
2nd 0xfffffe0000e46500 bufwait (bufwait) @ /usr/src/sys/ufs/ffs/ffs_vnops.c:282
3rd 0xfffff800042a09a0 ufs (ufs) @ /usr/src/sys/kern/vfs_subr.c:2607
stack backtrace:
#0 0xffffffff80b2bba3 at witness_debugger+0x73
#1 0xffffffff80b2ba24 at witness_checkorder+0xe34
#2 0xffffffff80a9cbeb at __lockmgr_args+0x88b
#3 0xffffffff80dc2565 at ffs_lock+0xa5
#4 0xffffffff810f7af9 at VOP_LOCK1_APV+0xd9
#5 0xffffffff80ba7006 at _vn_lock+0x66
#6 0xffffffff80b9599f at vget+0x7f
#7 0xffffffff80b87891 at vfs_hash_get+0xd1
#8 0xffffffff80dbe25f at ffs_vgetf+0x3f
#9 0xffffffff80db4886 at softdep_sync_buf+0xd16
#10 0xffffffff80dc3354 at ffs_syncvnode+0x294
#11 0xffffffff80d999ff at ffs_truncate+0x6df
#12 0xffffffff80dca7f1 at ufs_direnter+0x641
#13 0xffffffff80dd393c at ufs_makeinode+0x61c
#14 0xffffffff80dcf5b4 at ufs_create+0x34
#15 0xffffffff810f51d3 at VOP_CREATE_APV+0xd3
#16 0xffffffff80ba6908 at vn_open_cred+0x2a8
#17 0xffffffff80b9f14c at kern_openat+0x20c
ugen0.2: <Dell Dell USB Entry Keyboard> at usbus0
ukbd0 on uhub0
ukbd0: <Dell Dell USB Entry Keyboard, class 0/0, rev 1.10/1.15, addr 1> on usbus0
kbd2 at ukbd0
ugen0.3: <SanDisk Cruzer Fit> at usbus0
umass0 on uhub0
umass0: <SanDisk Cruzer Fit, class 0/0, rev 2.00/2.01, addr 2> on usbus0
umass0: SCSI over Bulk-Only; quirks = 0x8100
umass0:0:0: Attached to scbus0
da0 at umass-sim0 bus 0 scbus0 target 0 lun 0
da0: <SanDisk Cruzer Fit 2.01> Fixed Direct Access SPC-4 SCSI device
da0: Serial Number 4C530302741216116074
da0: 40.000MB/s transfers
da0: 3819MB (7821312 512 byte sectors)
da0: quirks=0x2<NO_6_BYTE>
re0: link state changed to DOWN
GEOM_PART: integrity check failed (da0s4, BSD)
GEOM_PART: integrity check failed (ufsid/5a1180062a826673, BSD)
GEOM_PART: integrity check failed (diskid/DISK-4C530302741216116074s4, BSD)
In the distant past before smart phones became identical black rectangles there
was a category of devices called palmtops. Palmtops were a class of PDA PC
thing that fit in the palm of your hand. Today the Psion 5 series of
devices most often captures people's attention. Not only are they small
and awesome, but they have something like a real keyboard.
This form factor is so popular that there are projects trying to update
Psion 5 devices with new internals. The Psion 5 is the sort of device I
have long complained isn't made any more; at some point I picked one up
on ebay with the intention of running the NetBSD port on it.
Earlier this year the world caught up and two big crowd funding projects
appeared for modern Psion-like palmtop devices. Neither the Gemini nor the
GPD Pocket campaigns convinced me that real hardware would ever appear. In
May reviews of the GPD Pocket started to appear and I became aware of people
that had backed and received their earlier campaign for the GPD WIN.
With a quirk in indiegogo allowing me to still back the campaign I jumped on
board and ordered a tiny little laptop computer.
FreeBSD
FreeBSD is the only choice of OS for a pc computer. Support is good enough
that I could boot and install without any real issues, but there was enough
hardware support missing that I wanted to fix things before writing a blog post
about it.
Some things don't work out of the box; others will need drivers before
they will work:
- Display rotation
- WiFi (broadcom 4356)
- Bluetooth (broadcom BCM2045A0)
- Audio (cherry trail audio chrt54...)
- Graphics
- Nipple
- USB C
- Keyboard vanishes sometimes
- Battery
- Suspend
- Touch Screen (goodix)
- fan (there is some pwm hardware)
- backlight
- I2C
- gpio
Display
The most obvious issue is the display panel: the panel itself reports as
being a high resolution portrait device. This problem exists in the bios
menus, and the windows boot splash is rotated most of the time.
Of course the FreeBSD bootsplash and framebuffer are also rotated, but a
little neck turning makes the installer usable. Once installed we can
address the rotated panel in X; accelerated graphics are probably in the
future for this device, but the X framebuffer driver is good enough for
FreeBSD hacking.
The screen resolution is still super high and there doesn't seem to be
any way to do DPI hinting with the framebuffer driver (or in i3 at all),
but I can make
terminals usable by cranking up the font size.
Keyboard and touchpoint
A keyboard is vital for a usable computer. Out of the box the keyboard
works, but the touch point does not; worse, touching the touch point
caused the built in USB keyboard to die.
After some faffing trying to debug the problem with gavin@ at BSDCam we
got both keyboard and mouse working. For some reason my planck keyboard
presents as a mouse among other things; plugging in a mouse and power
cycling the USB device caused ums(4) to correctly probe and attach.
Manually loading ums(4) at boot got the touch point working correctly.
In fact, ig4(4) also attaches when manually loaded.
Add these lines to /boot/loader.conf
ums_load="YES"
ig4_load="YES"
The dmesg shows some problems with ACPI probing; this is probably the
source of some of the device problems.
Other devices
Wifi, bluetooth and graphics are bigger problems that will hopefully be
caught up in others' work and made to work soon. The touchscreen
controller needs a driver, as does the Cherry View GPIO; there are
datasheets for both and I am working on them.
No battery level indicator makes the GPD Pocket annoying to use out and
about. Without a driver the charge controller uses a really low current
to recharge the battery. Datasheets are quite readily available for
these devices and I am writing drivers now.
GPD Pocket
The Pocket is a great little device, I think its 'cuteness' makes everyone fall
in love with it on first sight. I am really looking forward to getting the
final things working and using this as a daily device.
I like keyboards, I have been using an OLKB Planck as my daily driver for
18 months now. I saw a really nice ortholinear 30% keyboard go by on
mastodon and I had to have one.
The keyboard I saw was actually the excellent gherkin by di0ib. di0ib
has worked in the true spirit of open source and provided all of the design
files and firmware for the gherkin. Beyond that they have included child proof
instructions to order PCBs.
I tricked some friends into agreeing to build boards if I got a run of PCBs and
set off. Amazingly easyeda.com was offering 5 more boards (10 vs 5) for just $2
extra. I managed to get 10 sets (board, key plate and base) of the PCBs for
about £80.
Build
The build was really easy to do, there is some advice for the socket
on 40 percent club, but if you test fit everything as you go it should be
straightforward. A build probably takes around 2 hours depending on proficiency.
With the board built and programmed (first try) it is time to figure out how to
use it. It took a couple of months of daily use to get used to using the
planck, it will be the same with the gherkin. To help learn I have printed out
the keyboard layout and the combination of layers.
I modified the default layout a little to make it more similar to how I
normally type. I moved the space bar to my left hand, made 'X' a
repeatable key (gotta be able to delete chars in vim) and added a 'CMD'
key. I have a fork
of the repo with my layout and Makefile changes.
The layer system is easy to use: if you hold any of the keys on the base
layer it will either enable the alternate function (for a meta key) or
switch to another layer (for a layer key).
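The idea can be sketched in a few lines of Python (a toy illustration of
layer fall-through with a made-up keymap, not how the QMK firmware is
actually implemented):

```python
# Toy model of keyboard layers: each layer maps a key position to a
# keycode, and None marks a transparent key that falls through to the
# layer below (the keymap here is made up for illustration).
BASE = {0: "Q", 1: "W", 2: "E", 3: "LOWER"}  # "LOWER" is a layer key
LOWER = {0: "1", 1: "2", 2: None, 3: None}   # None = transparent

layers = [BASE, LOWER]

def resolve(position, active):
    """Walk the active layers top-down, returning the first opaque keycode."""
    for layer_index in reversed(active):
        code = layers[layer_index].get(position)
        if code is not None:
            return code
    return None

print(resolve(2, [0]))     # only the base layer active -> E
print(resolve(0, [0, 1]))  # holding the layer key -> 1
print(resolve(2, [0, 1]))  # transparent on LOWER, falls through -> E
```

Holding a layer key pushes its layer onto the active list and releasing
it pops it again, so a transparent key always does whatever the layer
below says.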
I did more bread, but at batch 8 this is no longer really interesting to anyone
other than me.
People have been complaining that my tweets are marked as offensive material,
which is really funny, as I only really tweet about bread and
technology. I looked at my settings and the 'mark as offensive' option
was enabled on my output.
I'm sure I accidentally enabled it, but the twitter documentation does say they
will add it to accounts that have flagged posts.
I have no love for twitter, if literally anything else had the communities I
want to pay attention to posting I would move away. Ideally something
federated, but that is only a pipe dream.
Yes my phone autocompleted flour to four, you can't edit twitter posts and
phones are the worst thing ever.
Pebble the company is dead, I can still get replacement hardware from amazon or
ebay and I suspect it will be generally available at reasonable prices for a
year or two.
I used my pebble for 3 things
It's a smart watch, so I used it as a watch for time and date
The vibrate function is amazing for notifications. My phone hasn't been
off silent since I got the pebble; notifications for calls and messages
are awesome. Better, I can forward notifications from a service bus app
like pushover and generate them based on things I want.
I can just wear a watch to deal with 1, for 2 I am probably going to use the
awesome forecast.io app and not rely on being able to casually check the
temperature.
For 3 I am really at a loss what to do. I could just replace the pebble, but
really I think I want a smart band with a vibration motor for notifications.
If what I want doesn't already exist, it is probably too niche to ever become a
thing.
Reading: The Moon is a Harsh Mistress, The Difference Engine
release(7) documents a set of shell scripts for creating FreeBSD release
files in the same manner as the release engineering team. The script
creates a new chroot environment, checks out a fresh tree and does the
release builds in a clean environment.
That might be what you want.
I want to write some scripts that take in a specified network, some git commit
ids and generates a set of virtual machine images running in bhyve to
reproduce a test environment. Building in a clean environment isn't what
I need.
The Makefiles in release expect to be run from a tree that already has a
built kernel and world. They make building the VM images really easy,
but apart from comments in the files they aren't documented.
I am going to use a directory for all of the stuff:
freebsd/
-> src # freebsd src tree
-> obj # object directory
-> destdir # freebsd destination directory
$ cd freebsd
$ git clone https://github.com/freebsd/freebsd.git src
$ cd src
Build the kernel and world, setting the object directory to the one in our tree.
$ env MAKEOBJDIRPREFIX=/home/user/freebsd/obj time make -j4 -DKERNFAST buildkernel
$ env MAKEOBJDIRPREFIX=/home/user/freebsd/obj make -j4 buildworld -DWITH_META_MODE=yes -DWITH_CCACHE_BUILD -DNO_CLEAN
Move to the release directory to build our VM images:
$ cd release
# env MAKEOBJDIRPREFIX=/home/user/freebsd/obj make vm-release -j4 DESTDIR=/home/user/freebsd/destdir WITH_VMIMAGES=yes VMFORMATS=raw NOPKG=yes NOPORTS=yes NOSRC=yes
# env MAKEOBJDIRPREFIX=/home/user/freebsd/obj make vm-install -j4 DESTDIR=/home/user/freebsd/destdir WITH_VMIMAGES=yes VMFORMATS=raw NOPKG=yes NOPORTS=yes NOSRC=yes
I exclude packages, ports and the src distribution from the images.
As a test launch a bhyve VM with our created disk image:
# sh /usr/share/examples/bhyve/vmrun.sh -c 4 -m 1024M -t tap0 -d ../../destdir/vmimages/FreeBSD-12.0-CURRENT-amd64.raw test
Back in January I wrote about a small tool I had thrown together to do
some internet measurements. Back then we decided not to take the next step and
attempt to roll the tool out to a large audience.
We have decided we need the network edge data after all and I need your help.
In short: We need measurements from as many network edges as possible.
Places where people connect are almost always near the edges of the internet.
Your home, office, the pub or a park with WiFi is probably near the edge. We
need your help by running our tool from these sorts of places. The more the
better.
In full: Packets on the internet are given a Best Effort service by
default, everything is treated the same. The packets for your video call are
treated the same way as a large download, but that means there is more latency
when queues grow and packets in your file transfer are dropped when there is
network pressure. With Quality of Service and Active Queue Management we can
build networks that allow latency sensitive packets through the queue quicker
while also stopping packets that shouldn't be dropped from being dropped.
The DSCP bits in the IP header are used to give different IP packets
different Quality of Service classes. Right now, no one is really sure
how these marks are treated; are they removed? Changed in some way? Or,
much worse, does the presence of these marks lead to packets being
dropped?
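To make "the DSCP bits" concrete, here is a minimal Python sketch (an
illustration of the mechanism, not our measurement tool) that marks
every datagram sent on a UDP socket with the EF (Expedited Forwarding)
code point through the legacy IP_TOS socket option:

```python
import socket

# DSCP occupies the upper six bits of the old IPv4 TOS byte, so a code
# point is shifted left by two when written through IP_TOS.
EF = 46           # Expedited Forwarding code point
tos = EF << 2     # 0xB8 on the wire

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)

# The kernel reports the TOS byte it will now put in outgoing headers.
print(hex(s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)))
s.close()
```

What the survey measures is whether that byte survives, is rewritten, or
gets the packet dropped somewhere along the path.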
To find this out we need to perform a survey, we can (and have) bought time on
virtual machines in data centers, but that only measures things that are close
to the network core. We also need to measure how these marks are treated at the
edge, on connections that real people use.
There isn't any way to easily perform these measurements without asking a whole
lot of people for help. This is where you come in.
We need you to download and run our tool. If you can do it from home, the bus
or the train that is excellent. Every run of the tool helps us build up more
data about what is happening in the internet.
Thank you for helping make the internet better.
Paper deadline was today, I have to set up a large survey this week, but I am
starting to surface again from this insane series of deadlines. There is a lot
of FreeBSD Kernel work coming up, hopefully both at work and at home.
I have already been poking at an implementation of UDP Options, and
there is also the possibility of me being given a TCP ABE implementation
to port. For this work, unlike the stuff I did before for NewCWV, I am
going to provide a solid set of
tests in the form of VM images. To do that I will need to figure out generation
of images from just a git commit id.
I have finally, after nearly a year, started setting up data stores with
git annex. I am going to try it out with my stash of datasheets,
documents and books for a while. If it holds up to what I expect I will
use it for the rest of my static binary media: video, audio and images.
I have also been revisiting the infuriating torture of learning haskell,
with the Real World Haskell book. I did a haskell course at uni and it
was horrible; so far the Real World Haskell book has been equally
unenjoyable and slow.
git annex is written in haskell so the two things sort of tie together. Not
that I plan to hack on git annex.
At the end of this month they will stop running buses to where I live, it seems
basic services aren't available to those that aren't quite rural enough.
Preempting the hard switch over I started cycling to work this week.
Work is not close (hence the whole bus thing), at 20km a day commuting I have
done the first 100km week of what will probably be many. Week one has
seen two punctures from a hole in my tyre; hopefully I will have better
luck next week.
# kldload vmm
# ifconfig tap0 create
# sh /usr/share/examples/bhyve/vmrun.sh -c 4 -m 1024M -t tap0 -d FreeBSD-11.0-RELEASE-amd64.raw test
Of course that misses out loads of stuff; the network won't work, for
one. Real instructions are in the handbook. Following in Hiren
Panchasara's footsteps I am going to use bhyve to test and develop some
network modifications in FreeBSD.
I might try and automate the deployment a bit, so I can run a single command
and have fresh vms on a configured network up and running. I suspect I will
have to make some changes that involve rebuilding the whole world tree, if that
is the case I will be trying to figure out how to get builds much much faster.
Reading: Gun Machine, The Difference Engine
Aberdeen, Scotland: 12°C, Drizzle starting in the afternoon, continuing until evening.
I am working on an implementation of the UDP Options draft at work; this
morning I got the udp_input side of processing building. This needs to
be tested and working before moving on, and before setting up some VMs
to test it I need a way to generate packets with UDP Option data
appended.
This seemed like a great occasion to use go a little more. There is the
gopacket library from google that provides raw packet stuff.
I tried for ages to put together a send example that didn't depend on
linux. Eventually I got to the point where I could form crazy malformed
arp packets and generate the above traces in wireshark. For some reason
go was sticking 16 bytes into the address fields and creating madness;
you will note in the above arp packet that the length is much longer
than it should be, because go is appending some extra data for shits and
giggles.
Giving up on go I had a look at the python libraries for generating packets,
they are all about the same level of insanity. The pathspider project has
some test probes for UDP Options using scapy.
pathspider is a lot of stuff to pull in just to generate UDP datagrams,
so I extracted the relevant parts to use with scapy directly:
from scapy.all import IP, UDP, send

if __name__ == "__main__":
    ip = IP(src="139.133.204.4", dst="139.133.204.64")
    udp = UDP(sport=2600, dport=2600)
    pkt = ip/udp/"Hello world"
    pkt.getlayer(1).len = len(pkt.getlayer(1))  # force the UDP len field
    send(pkt/"\x01\x01\x01\x00")
You can add the numbers together to find that the extra option space is
included; you can also see the 01 01 01 00 bytes at the end of the
packet, which are the options I added.
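As a quick check of that arithmetic, using the same payload and option
bytes as the snippet above:

```python
# The UDP length field covers the 8 byte header plus the payload; the
# option bytes ride after the datagram proper and are not counted.
udp_header = 8
payload = len(b"Hello world")        # 11 bytes
options = len(b"\x01\x01\x01\x00")   # 4 option bytes

udp_len = udp_header + payload       # the value the len field is forced to
on_wire = udp_len + options          # bytes actually carried

print(udp_len, on_wire, on_wire - udp_len)  # 19 23 4
```

A receiver that trusts the len field sees 4 surplus bytes after the
datagram, which is exactly the UDP Options trailer.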
There is this pdf ebook that I want to read, but it has a really annoying
'DRAFT' watermark on every page. I looked for an automatic way to remove
the watermark and found a really handy superuser answer that completely
covers it.
Before I ran that I tried grepping through the pdf for the string
'DRAFT'; the pdf was compressed, so I didn't find anything. I wanted to
make sure the watermark was just a string, so I extracted just the first
page with pdfseparate.
I opened out.pdf with inkscape and played with editing the watermark, which
was indeed just text. I then round tripped the pdf through pdftk and
generated a watermark free pdf.
With installing my new desktop I am also going to move my 4G modem. I wanted to
get some signal strength numbers so I could be sure I wasn't completely ruining
things for myself. My router has a handy status page that, among
sensitive private information, has signal strength, SNR and noise
numbers.
On that page there is also a CELL_ID field. The field is the unique network
id of the base station you are connected to. This is apparently useful for
location lookups, the wikipedia page has a list of databases that use this
field.
I tried to feed my CELL_ID into some of these databases, but they all wanted
more information. MCC and MNC are pretty easy to find, there is a big
table on the wikipedia page. I was not able to resolve down a LAC from
anywhere. There are apps I could try, but I don't really want to install any of
them on my phone.
I am finally starting to make a dent in the pile of things I could be using,
but aren't. A friend gave me a motherboard, case, graphics card and power
supply over about 18 months; in the past fortnight I finally put it all
together and had a working computer.
The machine came up no problem, one of the drives I recycled from another
machine and it already had FreeBSD on it. It turns out the motherboard I was
given doesn't want to boot from USB at all.
We tried all the different configurations and eventually fell back to
using PXE. There is an excellent graphical PXE boot environment
available from netboot.xyz; there was a FreeBSD entry in the OS boot
menu, but this is not a supported boot method for FreeBSD.
netboot.xyz uses a mfsboot FreeBSD image to launch a live system over
PXE. The image is created with a set of scripts available on github.
FreeBSD supports booting from a bundled memory image configured with the kernel
config, it looks like that is the feature that makes all of this possible.
With the tool I was only able to hit an A rating on the ssllabs testing
site; the A+ rating was annoyingly elusive. I am using nginx as a vhost
in front of a go web service, and for HSTS a header has to be appended
to the response. The config from Mozilla does this for nginx like this:
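The key line is the Strict-Transport-Security header; in nginx it looks
something like this (the max-age value here is an example, not
necessarily the generator's exact output):

```nginx
# inside the server block for the vhost
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains" always;
```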
Attempt number 3 at making bread, I was much closer to the recipe this time. I
will be able to tell in the morning. I wanted to make buns this time, but I
chickened out. I looked at the size of the dough ball and it looked like it
would surely burn.
The rest of the weekend I spent drinking, which is only interesting if you were
both there and drunk.
On a 'high speed train' which means it is a directish train that barely stops.
The wifi is running through the GSM network which in Scotland means it is
really spotty. Of course it is unsecured with a captive portal. It is
surprising that there aren't more terrible people around. It would be pretty
easy to run your own bridging access point.
I saw an article this week that claimed something along the lines of "50% of
mobile traffic is facebook". That makes sense, if most of the traffic is just a
few sites (it is) and those sites have strong protections like certificate
pinning, then it doesn't really matter if on hop 1 all of your traffic is
exposed.
It doesn't stop people fucking with you, but the apps should offer some
protection.
Today was insanity with a lab going haywire and then revisions on a paper. I am
heading south tomorrow which means pictures of trains and complaining about
trains.
I asked my father how I should start, how I should approach making bread for
the first time. He told me just to follow the recipe on the packet of flour.
Doing so seems to have worked out okay.
Due to a misunderstanding about yeast I ended up making two loaves.
Here is an article that explains in clear simple terms why the CIA
Vault7 leaks are not the end of the world. If you consider yourself
technical (which you do, you are reading a blog after all) you really
have to help constrain the insanity in the face of leaks.
Just because you can read 'breaking signal by attacking the device' it does not
mean that signal is broken. You have a responsibility to your friends and
family, if they panic when they read the news and fall back to SMS because
whatsapp is broken, the world is not becoming a better place.
Read what trusted security people say, validate their comments, help your
family.
The space was sent a cool puzzle box as part of a secret santa. One of the
puzzles involves getting the output of a flashing light into an LDR on the
other side of the box. I played with this with another member last week, we
decided to try and decode the light output.
I am sure we don't need to do this, but I wanted to try out my idea. He tried
to use FinalCut to process a video of the output, but this didn't work. I
suggested we try breaking the video down to frames, running a brightness
threshold over them, then generating a plot of the output.
We can use convert from imagemagick to threshold an image, here we
convert its colour space to hue, saturation and brightness and resize the
image down to 1x1. Asking convert for txt: output gives a text description of
the final pixel, so we don't end up with loads of temporary images.
# for each extracted frame, reduce it to one pixel and record its brightness
for filename in frames/*png; do
echo "$filename"
convert "$filename" -colorspace hsb -resize 1x1 txt:- | tail -n 1 | awk '{ print $4 }' | awk -F "," '{ print $3 }' | sed -e "s/%)//" >> outfile
done
I ran the outputfile through matplotlib to generate a nice plot of the light
values.
import matplotlib.pyplot as plt

values = []
with open("outfile", "r") as f:
    for line in f.readlines():
        value = int(line)
        # threshold the brightness into clear on/off levels
        if value > 40:
            value = 100
        else:
            value = 10
        values.append(value)

plt.figure(figsize=(30, 2))
plt.plot(values)
plt.ylabel('brightness')
plt.savefig("plot.png")
The hardest part of this was getting the video off my phone, the most time
consuming was installing matplotlib.
You can't because it is a terrible idea, and yet this post explains a
really reasonable way forward. My first thought was hopefully the same as
yours, "That is absolute insanity, won't someone think of the DDOS?". Glenn
Fiedler is responsible for the most pervasive game networking tutorial,
it is the beej net guide for games.
This isn't a general interface for UDP, that is of course insane. Instead it is
a networking library specifically for real time games. It uses http
authentication to generate a security token then offers a frame locked secure
datagram API for moving real time data. The proposed API has hard timeouts,
latency and bandwidth expectations, really not useful for anything other than
games.
Right now there is a C library available on github. It will be interesting
to see a prototype javascript interface.
When I wrote the last post I really wanted to use curl to turn rfc
numbers and drafts into bibtex entries. I did have a look, but I had other
things to do that seemed urgent and I didn't follow it through.
That was lazy of me, the page will generate an error message with a url when
given an rfc or draft that doesn't exist. I looked at this url with a valid
rfc, but it wasn't clear how to turn the returned info into a bibtex entry.
Stripping that div off the page makes the url visible:
Failed to read RFC or Internet-Draft resource at http://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.9999.xml
Using that url format with a valid rfc number (Our beloved RFC768) spits
out this xml document:
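That fetch is easy to script; a sketch, assuming the file names are zero padded to four digits the way the error URL above suggests:

```shell
# build the bibxml url for a given rfc number
rfc=768
url="http://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.$(printf '%04d' "$rfc").xml"
echo "$url"
# fetch it, ignoring failure if the host is unreachable
curl -s -m 10 "$url" || true
```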
I am writing stuff up right now so I am not really doing anything that
interesting. Users of latex for academic work will know the nightmare that is
generating bibtex entries. Thankfully for ietf rfcs and drafts entries are
automatically generated. I found this blog post that includes a tool for
looking up an entry and getting its bibtex.
We are trying to put up a 70cm amateur radio packet network, I started this by
hacking together a connector last week. I really need to figure out where
I can reach on 70cm.
Hibby and I tried a voice contact between my home and his, but were unable to
hear each other. That isn't so bad, we really want to be able to get into my
office where a friend can host a packet repeater.
I have remote access to work, so if I can get audio out of a radio onto the
internet I will be able to do a remote check. The plan was to set up a pi with
an rtl-sdr and rtl_fm doing demodulation, ssh in and stream the audio
back to where ever I am.
rtl_fm is an excellent tool, it can connect to an rtl-sdr stick and provide
samples from the sdr. It can give you raw iq output or demodulate audio. This
rtl_fm command can be used with play (from the sox package) to play
broadcast fm.
$ rtl_fm -M wbfm -f 90.9M | play -r 32k -t raw -e s -b 16 -c 1 -V1 -
Playing a file over ssh works the same way:
$ ssh -C sdr@192.168.1.181 "cat test.flac"| play -q -
rtl_fm will happily dump samples to stdout, a test with broadcast fm over ssh
is:
$ ssh -C localhost "rtl_fm -M wbfm -f 90.9M " | play -r 32k -t raw -e s -b 16 -c 1 -V1 -
Broadcast FM is an excellent way to figure out if your demodulation pipeline is
working, it is always running and it shits out a fuckton of power.
The command to demod narrow fm looks like:
$ rtl_fm -M fm -s 1000000 -f 145.800M -r 48k | play -r 32k -t raw -e s -b 16 -c 1 -V1 -
And it can be run over ssh:
$ ssh -C localhost "rtl_fm -M fm -s 1000000 -f 433.550M -r 48k "| play -r 48k -t raw -e s -b 16 -c 1 -V1 -
Hibby and I tried some calls to this station from his, but I wasn't able to
hear anything. Probably something to do with both of us being in buildings
facing away from each other. I will give this a try from mine and see what
happens.
Reading: Normal
Of course before trying to do all this from my desktop I faffed about for two
hours getting a pi up and running with FreeBSD. The pi wasn't able to handle
the sdr without really choppy audio. Eventually while writing this up I noticed
that the cable from the antenna was long enough to reach my desktop. Oh well.
In the past I required a custom script to mosh into my desktop, I was
updating the script I use for this last night when I discovered it was no
longer needed. I figured I might as well post the script here in case it helps
someone else.
#!/bin/sh
# ########################################################## #
# wrapper for mosh to work with ssh's proxycommand directive #
# this only makes sense if the machine is directly reachable #
# from the internet using udp. #
# ########################################################## #
THISSCRIPT="`basename \"$0\"`"
REMOTE="$1"
REMOTEIP="$2"
NUM=`od -An -N1 -i /dev/random`
PORT=$((60000+NUM))
debug() {
echo "[$THISSCRIPT] $@" >&2
}
usage() {
debug "use me like this: $THISSCRIPT host [ip]"
}
# some default values
if [ -z "$REMOTEIP" ]; then
if [ -z "$REMOTE" ]; then
usage
exit 1
fi
# does the remote have a hostname listed in .ssh/config
REMOTEHOST="`grep -E -C1 \"^Host ([a-zA-Z0-9 ]+ )?$REMOTE( [a-zA-Z0-9 ]+)?$\" ~/.ssh/config | tail -n1 | gsed -r 's/\W*Hostname\W+//'`"
if [ -z "$REMOTEHOST" ]; then REMOTEHOST="$REMOTE"; fi
# resolve hostname
REMOTEIP=`host -4 -t a $REMOTEHOST | head -n 1 | awk '{print $4}'`
if [ -z "$REMOTEIP" ]; then
debug "could not resolve hostname $REMOTE"
exit 1
fi
fi
debug "starting mosh-server on remote server $REMOTE $REMOTEIP:$PORT"
MOSHDATA="`ssh -t \"$REMOTE\" mosh-server new -i $REMOTEIP -p $PORT | grep '^MOSH' | tr -d \"\r\n\"`"
if [ -z "$MOSHDATA" ]; then
debug "mosh-server could not be started"
exit 1
fi
PORT="`echo -n \"$MOSHDATA\" | awk '{print \$4}'`"
MKEY="`echo -n \"$MOSHDATA\" | awk '{print \$5}'`"
if [ -z "$PORT" -o -z "$MKEY" ]; then
debug "got no parseable answer"
exit 1
fi
debug "starting local mosh-client to $REMOTEIP $PORT"
MOSH_KEY="$MKEY" exec mosh-client "$REMOTEIP" "$PORT"
We finally managed to get the programming software, radios and cables in the
same room for the Motorola GM340s we have. The radios are for doing 70cm
packet data in Aberdeen, with them finally programmed and tested we need to
make a computer radio interface.
The GM340 has an accessory connector on the rear, rather than hacking
together something using the microphone jack we can use this. The connector is
0.1" pitch so we can just connect female pin headers to it.
Our connector needs to hook into audio in, audio out, PTT and ground. I
soldered these connections on a strip of female pin header and connected them
to two TRRS jacks that were lying around. I pulled off the PTT and a ground
line to another block of female header so we could connect that to a GPIO on a
pi.
With this hacked together connector we were able to do some packet between a pi
and another radio.
I have added some parsing data for the andes core firmware format I wrote
about before. kaitai is quite nice to write binary formats with, and the
result is reasonably understandable:
The power of kaitai comes from its integration into languages, there is a
compiler output to dot that you can play with online. Using that compiler
I could generate a png from the dot file like so:
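The rendering step is plain graphviz; a sketch with a stand-in dot file (the file names are placeholders):

```shell
# a minimal dot file standing in for the kaitai compiler output
echo 'digraph fw { ilm -> dlm; }' > firmware.dot
# render it to a png with graphviz
dot -Tpng firmware.dot -o firmware.png || echo "graphviz not installed"
```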
I have done a lot today, but accomplished very little. I think days like this
happen more often than not. Here is a cool talk about archivist activities around
games:
I finally had the need to dynamically change my font size in urxvt. If you
search you will find keybindings to do this, such as the ones recommended in
this thread. With my teeny planck keyboard, i3, and tmux I don't really
have room in my head for learning other keybindings.
In that thread there is also a printf command that changes the font size via
a terminal escape code.
alias biggest="printf '\33]50;%s\007' \"xft:Source Code Pro:pixelsize=30\""
alias big="printf '\33]50;%s\007' \"xft:Source Code Pro:pixelsize=20\""
alias small="printf '\33]50;%s\007' \"xft:Source Code Pro:pixelsize=10\""
alias teeny="printf '\33]50;%s\007' \"xft:Source Code Pro:pixelsize=8\""
alias normal="printf '\33]50;%s\007' \"xft:Source Code Pro:pixelsize=12\""
Some stuff from playing with the stupidly named CHIP.
Battery
There is a script shipped with the CHIP images that will dump some information
from the battery controller, which is sort of useful I guess.
[chip@chip] $ sudo battery.sh
BAT_STATUS=0
CHARG_IND=1
BAT_EXIST=1
CHARGE_CTL=0xc9
CHARGE_CTL2=0x45
Battery voltage = 3930.3mV
Battery discharge current = 0mA
Battery charge current = 882.5mA
Internal temperature = 51.9c
LEDs
There are two leds on board, a pink one that is directly wired into power and a
status led connected over i2c. The led can be controlled directly over i2c with
the i2cset command.
On my CHIP image the led is showing some sort of heartbeat that isn't stopped
when I manually intervene. On their forums the i2cset method is recommended
to control the led, but the heartbeat made this impossible.
After a ton of poking and searching, trying to see if you can get strace to
log processes that access a path (doesn't look like you can) I came across
ledtrig-cpu in the dmesg.
[ 2.315000] ledtrig-cpu: registered to indicate activity on CPUs
ledtrig-cpu is a kernel module for showing event status on built in leds,
there is some inscrutable BBB documentation that somewhat shows how to
control it.
In /sys/class there is an entry for each of the leds on the board, listed with
their colour. We can have a play with the led by looking at the following:
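A sketch of the sysfs poking; the led name here is an assumption, ls /sys/class/leds/ shows the real one:

```shell
# pick the status led entry out of sysfs (the name is a guess)
led=/sys/class/leds/chip:white:status
# list the available triggers, the active one is shown in brackets
cat "$led/trigger" || true
# stop the heartbeat trigger, then drive the brightness by hand
echo none > "$led/trigger" || true
echo 255 > "$led/brightness" || true
```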
The Internet Archive's Wayback Machine has an 'archive this page now'
button, which I know I was aware of, but I hadn't ever used it. Watching this
chat about archiving I thought I would have a look at it.
My preferred option would be to add a hook to my post script that triggers the
IA to scrape any new pages. Poking around the FAQ and the scant API
page didn't reveal any recommendations for how to use the tools. With no
advice I tried curl with the URL they used:
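The request is just a GET on the Wayback Machine's save endpoint; a sketch (the page URL is a placeholder):

```shell
# page to capture
page="https://example.com/post.html"
# ask the wayback machine to capture it right now
curl -s -m 10 "https://web.archive.org/save/$page" > /dev/null || true
```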
That URL worked great, the history page now has a new capture
from today. Having done that I started looking to see how well covered my site
was, it turns out very few pages have been captured into the global history.
The Internet Archive is my favourite thing on the internet, it is much
much more than just the wayback machine. It is a massive archive of human
culture, it might be one of the most important things being created right now.
There is much more information being added to the IA now than any one person
could process themselves. But it isn't that hard for individuals to pick an
upload or a topic and process through the files and provide some sort of best
of list.
The video below is a chat about archiving, I think the most important take away
is that we really need people to review the material and make sections
accessible.
Paraphrased:
I am waiting for someone to go through 200 floppy discs and write a blogpost:
"I looked at all this junk, these 7 are great."
I am working on a modification to hostapd and I really have to run this on
linux. The pi I was planning to use seems to have finally given up the
ghost after 4 years of use. Never fear, I failed over to using the chip
that came with my pocketchip, flashed with the headless firmware.
The chip will use the microusb connection as a serial port by default,
similar to the way the BBB uses the usb port to do ethernet. With a serial
connection, I needed to figure out how to get the chip onto wifi.
There is a command line tool for interacting with network manager listed in the
chip documentation. nmcli is a pretty great tool for network access.
$ nmcli device wifi list
* SSID MODE CHAN RATE SIGNAL BARS SECURITY
* HameNetwork Infra 11 54 Mbit/s 100 ▂▄▆█ WPA2
NetGear-AWQ4D8 Infra 1 54 Mbit/s 69 ▂▄▆_ WPA1 WPA2
BTWifi-X Infra 6 54 Mbit/s 30 ▂___ WPA2 802.1X
BTWifi-with-FON Infra 6 54 Mbit/s 30 ▂___ --
Better yet, there is a nmtui interface that presents a nice ncurses way to
configure the network.
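Actually joining a network from the command line looks something like this (the ssid and password are placeholders):

```shell
# connect to a wpa2 network, the connection is stored for next boot
ssid="HameNetwork"
pass="not-my-real-password"
nmcli device wifi connect "$ssid" password "$pass" || echo "nmcli not available here"
```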
Awesome news, OONI probe now has an android tool. OONI probe uses tor to map
out censorship on the internet. If you want to make the internet a much better
place I really suggest running this app.
Unrelated, I read an awesome write up on reverse engineering a book
cover. It is a shame I don't read Polish, the author including something
interesting in the cover is a real draw to the book for me.
FOSDEM is split between devrooms and booths, there are booths for a load of
different projects. The Olimex DIY Laptop was announced the week before,
they were showing it off at a stand at FOSDEM.
The hardware is quite nice, the case feels a little cheap, but what can you
expect for that price? The keyboard they have on it is horrible. It would be
much better served with much fewer keys on the layout, something more like a
chromebook would be good (I would settle for 40%, but the market is probably
small).
I unplugged the assembled model they had on display and they got upset and it
was quickly plugged back in. The hardware is still quite early, it looks like a
really solid start.
I don't think I would pick up the first generation of hardware, if they
continue with the project it will be really promising.
I got this TShirt at FOSDEM, I was told that it means something.
First here is all of the text from the tshirt, which should make the page more
discoverable:
Xen
Project
@FOSDEM 2017
Forging the
DNA of the Cloud
AGTCTACTCAGCCGTACTTCATCGACTTAGCGTTGTCTCAGC
GCTTGTCACGTCCTGCTCGTCAGCTAGCGTCTAAGCTACCTC
ACTAGCTGTGCTAGCTAGCGTCTAAGCTCAATCTGTGATTAC
These three strings are a bit unwieldy to work with, let's break them down into
groups of three.
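fold does the grouping for us:

```shell
# break the first line of the tshirt sequence into 3-letter codons
echo "AGTCTACTCAGCCGTACTTCATCGACTTAGCGTTGTCTCAGC" | fold -w3
```

The other two lines split the same way, 14 codons each.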
Now to wikipedia to get a DNA Codon Table to refer to:
Amino acid Codons Compressed
Ala/A GCT GCC GCA GCG GCN
Arg/R CGT CGC CGA CGG AGA AGG CGN MGR
Asn/N AAT AAC AAY
Asp/D GAT GAC GAY
Cys/C TGT TGC TGY
Gln/Q CAA CAG CAR
Glu/E GAA GAG GAR
Gly/G GGT GGC GGA GGG GGN
His/H CAT CAC CAY
Ile/I ATT ATC ATA ATH
START ATG
Leu/L TTA TTG CTT CTC CTA CTG YTR CTN
Lys/K AAA AAG AAR
Met/M ATG
Phe/F TTT TTC TTY
Pro/P CCT CCC CCA CCG CCN
Ser/S TCT TCC TCA TCG AGT AGC TCN AGY
Thr/T ACT ACC ACA ACG ACN
Trp/W TGG
Tyr/Y TAT TAC TAY
Val/V GTT GTC GTA GTG GTN
STOP TAA TGA TAG TAR TRA
I poked at this for ages, wondering if I needed to take the codons as they are
written, a column at a time. But even looking at that, there was
nothing obvious about the codon sequence.
After a while I started searching for subsequences from the string. This didn't
help at all, but one page on google had a related search for 'dna message
translator'. Annoyingly I fed in this string:
FOSDEM is done! I want to start writing about the things I saw, but my phone
still won't speak to my laptop. Tomorrow should be more differenter, once the
travel is done I will have access to sane computing.
I did my talks today, there is video coming, but I am told the audio is out of
sync. My hotel doesn't seem to be up to doing streaming video so I haven't even
tried.
Tomorrow, I am presenting at FOSDEM. Somehow I have gotten roped into
doing two talks at almost the same time.
At 1555, I will present QoS Challenges for Real Time Traffic in the Real
Time Communications devroom. I am pretty certain I am going to explain what the
hell QoS in the internet is, some of the benefit you might see and how to do
any of this with NEAT. The recent measurements we have been doing suggest that
QoS shouldn't be a deal breaker for any connection, great news, but it doesn't
make much of a discussion.
At 1630, I will present Transport Evolution on top of the BSD's in the BSD
devroom. The two talks are in the same building, but 15 minutes is a bit
tighter than I would have liked. This talk will be about the problems faced in
making the internet better and how NEAT will help.
I know the BSD devroom will be recorded, but not streamed. I haven't heard
anything about the RTC rooms video setup. If you are around for FOSDEM you
should drop by and ask me difficult questions. If not I will take them over
email.
I wandered down to the Louvre last night and took some pictures, but my phone
is refusing to speak to my laptop, so here is the Seidenstrasse instead. I am
hacking from the amazing Mozilla Paris offices this week, for some reason
this is the project's favourite place to meet.
Reading: Babylon's Ashes, Cryptonomicon
Paris, France: 10°C, Light rain in the morning and evening.
Well no, nothing as cool. I have meetings and a hackathon in Paris this week,
then on Friday I am jetting (on a train) up to Brussels for FOSDEM. Travelling
gives me a chance to run my network tracing tool from lots of strange places,
hopefully strange places reveal strange networks.
Now I am going to have a wander around this city for a bit, before a great day
of coding tomorrow.
Something for work meant I had to throw together a DSCP probing tool at
the last minute. It still needs lots of work, but I needed to get out today and
test on some real networks (just work and my house don't count). If I have
to spend a lot of time in cafes, pubs and chains doing measurements from edges
I might as well enjoy it.
If I can manage more than a couple of shots like this I can put together an
excellent slide when presenting this work.
I wonder if a CCC style lounge would work as a real club, I guess the status of
DNA Lounge indicates it isn't a great proposition. Hacker bars for console
jockeys are really appealing, the properties that would make them a nice place
to be; not too crowded, room to hack, exclusive, don't really translate to a
successful business.
"The Stockings Were Hung by the Chimney with Care"
The ARPA Computer Network is susceptible to security violations for at least
the three following reasons:
(1) Individual sites, used to physical limitations on machine access, have
not yet taken sufficient precautions toward securing their systems
against unauthorized remote use. For example, many people still use
passwords which are easy to guess: their first names, their initials,
their host name spelled backwards, a string of characters which are
easy to type in sequence (e.g. ZXCVBNM).
(2) The TIP allows access to the ARPANET to a much wider audience than
is thought or intended. TIP phone numbers are posted, like those
scribbled hastily on the walls of phone booths and men's rooms. The
TIP required no user identification before giving service. Thus,
many people, including those who used to spend their time ripping off
Ma Bell, get access to our stockings in a most anonymous way.
(3) There is lingering affection for the challenge of breaking
someone's system. This affection lingers despite the fact that
everyone knows that it's easy to break systems, even easier to
crash them.
All of this would be quite humorous and cause for raucous eye
winking and elbow nudging, if it weren't for the fact that in
recent weeks at least two major serving hosts were crashed
under suspicious circumstances by people who knew what they
were risking; on yet a third system, the system wheel password
was compromised -- by two high school students in Los Angeles
no less.
We suspect that the number of dangerous security violations is
larger than any of us know and is growing. You are advised
not to sit "in hope that Saint Nicholas would soon be there".
I have gotten to the point with the MT76x0U driver where I need to
load the firmware image onto the MCU. Unlike the older hardware on which the
driver is based, the firmware image is much more complicated. The old images
are 4KB and can be directly DMA'd across to the MCU, the newer image is around
80KB, contains 2 sections and the reference driver does a complicated dance to
copy them across.
There is quite a lot of pointer magic in setting up the DMA buffers in the
reference driver, I need to understand the firmware layout to know what this is
trying to accomplish.
I know from the reference driver that there are two images shipped in the
firmware, called ILM and DLM (according to this Instruction and Data Local
Memory).
Currently I think the reference driver skips some further data in the ILM, I
need to see if there is documentation for the format so I can make an informed
guess.
Modern IDEs have a load of functionality to help trace function call and data
accesses through large code bases. cscope is an interactive command line tool
that helps with searching codebases based on C symbols. With cscope you can
find all the callers of a function, every function a function calls, or search
by C symbol type.
$ cd code/repo
$ cscope -R
Working on this wireless driver I have spent a lot of time digging down
the callgraph with cscope, manually figuring out how things tie together.
Today I looked to see if there was a tool I could use to generate a callgraph
from the cscope database files.
I found a stackoverflow thread with the recommendation of a shell script
that could generate a dot file with the callgraph. The script is
unfortunately very basic, rather than something to run against a code base it
is a set of bash functions.
The graph below (generated from this repo) took about a minute to
generate on my reasonably fast laptop. You will need to install the graphviz
tools to generate the png.
I had to make some modifications to get the script to run, here is my version:
#!/bin/bash
echo "loading calltree.sh functions"
#use cscope to build reference files (./cscope.out by default, use set_graphdb to override name or location)
set_graphdb() { export GRAPHDB=$1; }
unset_graphdb() { unset GRAPHDB; }
build_graphdb() { cscope -bkRu ${GRAPHDB:+-f $GRAPHDB} && echo Created ${GRAPHDB:-cscope.out}...; }
# cscope queries
lsyms() { cscope -R ${GRAPHDB:+-f $GRAPHDB} -L0 $1 | grep -v "<global>" | grep "="; }
fdefine() { cscope -R ${GRAPHDB:+-f $GRAPHDB} -L1 $1; }
callees() { cscope -R ${GRAPHDB:+-f $GRAPHDB} -L2 $1; }
callers() { cscope -R ${GRAPHDB:+-f $GRAPHDB} -L3 $1; }
# show which functions refer to a set of symbols
filter_syms() { local sym cscope_line
while read -a sym; do
lsyms $sym | while read -a cscope_line; do
printf "${cscope_line[1]}\n"
done
done
}
# given a set of function names, find out how they're related
filter_edges() { local sym cscope_line
while read -a sym; do
fdefine $sym | while read -a cscope_line; do
grep -wq ${cscope_line[1]} ${1:-<(echo)} &&
printf "${cscope_line[1]}\t[href=\"${cscope_line[0]}:${cscope_line[2]}\"]\t/*fdefine*/\n"
done
callees $sym | while read -a cscope_line; do
grep -wq ${cscope_line[1]} ${1:-<(echo)} &&
printf "$sym->${cscope_line[1]}\t[label=\"${cscope_line[0]}:${cscope_line[2]}\"]\t/*callee*/\n"
done
callers $sym | while read -a cscope_line; do
grep -wq ${cscope_line[1]} ${1:-<(echo)} &&
printf "${cscope_line[1]}->$sym\t[label=\"${cscope_line[0]}:${cscope_line[2]}\"]\t/*caller*/\n"
done
done
}
# dump args one-per-line
largs() { for a; do echo $a; done; }
toargs() { local symbol
while read -a symbol; do
printf "%s " $symbol
done
echo
}
# present list of symbols to filter_syms properly
refs() { local tfile=/tmp/refs.$RANDOM
cat ${1:+<(largs $@)} > $tfile
filter_syms $tfile <$tfile | sort -u
rm $tfile
}
# present list of function names to filter_edges properly
edges() { local tfile=/tmp/edges.$RANDOM
cat ${1:+<(largs $@)} > $tfile
filter_edges $tfile <$tfile
rm $tfile
}
# append unknown symbol names out of lines of cscope output
filter_cscope_lines() { local cscope_line
while read -a cscope_line; do
grep -wq ${cscope_line[1]} ${1:-/dev/null} || echo ${cscope_line[1]}
done
}
# given a set of function names piped in, help spit out all their callers or callees that aren't already in the set
descend() { local symbol
while read -a symbol; do
$1 $symbol | filter_cscope_lines $2
done
}
# discover functions upstream of initial set
all_callers() { local tfile=/tmp/all_callers.$RANDOM
cat ${1:+<(largs $@)} > $tfile
descend callers $tfile <$tfile >>$tfile
cat $tfile; rm $tfile
}
# discover functions downstream of initial set
all_callees() { local tfile=/tmp/all_callees.$RANDOM
cat ${1:+<(largs $@)} > $tfile
descend callees $tfile <$tfile >>$tfile
cat $tfile; rm $tfile
}
# all the ways to get from (a,b,...z) to (a,b,...z), i.e. intersect all_callers and all_callees of initial set
call_graph() { local tfile=/tmp/subgraph.$RANDOM; local args=/tmp/subgraph_args.$RANDOM
cat ${1:+<(largs $@)} > $args
cat $args | all_callers | sort -u > $tfile
comm -12 $tfile <(cat $args | all_callees | sort -u)
rm $tfile $args
}
# all functions downstream of callers of argument
all_callerees() { callers $1 | filter_cscope_lines | all_callees; }
# odd experimental set of calls that might help spot potential memory leaks
call_leaks() { local tfile=/tmp/graph_filter.$RANDOM
all_callerees $1 | sort -u > $tfile
comm -2 $tfile <(all_callers $2 | sort -u)
rm $tfile
}
# wrap dot-format node and edge info with dot-format whole-graph description
graph() { printf "digraph iftree {\ngraph [rankdir=LR, ratio=compress, concentrate=true];\nnode [shape=record, style=filled]\nedge [color="navy"];\n"; cat | sort -u; printf "}\n"; }
# filter out unwanted (as specified in "~/calltree.deny") and/or unnecessary edges
graph_filter() { local tfile=/tmp/graph_filter.$RANDOM
cat > $tfile
grep fdefine $tfile
grep $1 $tfile | grep -v ~/calltree.deny | cut -f1,3
rm $tfile
}
# how to invoke zgrviewer as a viewer
zgrviewer() { ~/bin/zgrviewer -Pdot $@; }
# how to invoke xfig as a viewer
figviewer() { xfig <(dot -Tfig $@); }
# how to create and view a png image
pngviewer() { dot -Tpng $@ -o /tmp/ct.png && gqview -t /tmp/ct.png; }
# specify a viewer
ctviewer() { pngviewer $@; }
# add color to specified nodes
colornodes() { (cat; for x in $@; do echo "$x [color=red]"; done;) }
# generate dot files
_upstream() { all_callers $1 | edges | graph_filter ${2:-caller} | colornodes $1 | graph; }
_downstream() { all_callees $1 | edges | graph_filter ${2:-callee} | colornodes $1 | graph; }
_upndown() { (all_callers $1; all_callees $1) | edges | graph_filter ${2:-callee} | colornodes $1 | graph; }
_relate() { call_graph $@ | edges | graph_filter callee | colornodes $@ | graph; }
_leaks() { call_leaks $1 $2 | edges | graph_filter ${3:-callee} | colornodes $1 $2 | graph; }
# generate dot files and invoke ctviewer
upstream() { _upstream $@ > /tmp/tfile; ctviewer /tmp/tfile; rm -f /tmp/tfile; }
downstream() { _downstream $@ > /tmp/tfile; ctviewer /tmp/tfile; rm -f /tmp/tfile; }
upndown() { _upndown $@ > /tmp/tfile; ctviewer /tmp/tfile; rm -f /tmp/tfile; }
relate() { _relate $@ > /tmp/tfile; ctviewer /tmp/tfile; rm -f /tmp/tfile; }
leaks() { _leaks $@ > /tmp/tfile; ctviewer /tmp/tfile; rm -f /tmp/tfile; }
# dot file conversions
dot2png() { dot -s36 -Tpng -o $1; }
dot2jpg() { dot -Tjpg -o $1; }
dot2html() { dot -Tpng -o $1.png -Tcmapx -o $1.map; (echo "<IMG SRC="$1.png" USEMAP="#iftree" />"; cat $1.map) > $1.html; }
Poking at the mt7620 wifi driver today, but I don't have a FreeBSD source tree
in /usr/src. Trying to build spits out this message:
$ make
make: "/usr/share/mk/bsd.kmod.mk" line 12: Unable to locate the kernel source tree. Set SYSDIR to override.
Searching around, I could find others with this problem, mostly they had
forgotten to check out a source tree into /usr/src. With a source tree in
/home/user/code/freebsd I needed to set SYSDIR.
SYSDIR must point to the sys subdir in the FreeBSD source rather than the
location of the whole tree (i.e. /usr/src). I modified my module Makefile to
point there.
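Setting the variable on the make command line works too; a sketch using the checkout path from above (the invocation, rather than editing the Makefile, is just the quick version):

```shell
# point the module build at the sys/ subdir of the source checkout
make SYSDIR=/home/user/code/freebsd/sys || echo "no module Makefile in this directory"
```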
Interference is a bitch, you should heed the warnings about cheap Chinese video
systems; they can make a lot of noise. This weather sat didn't stand a chance.
Here is an article explaining what is going on.
Pretty sure I am not, I lost the git branch with a new feature on it and it
took an hour to find. I didn't delete it, I just couldn't remember where it was.
I read this cool article on trying to get the Purism laptop booting
with coreboot instead of the proprietary bios. Quite a lot of people
have been trying to open up the Intel hardware ecosystem in the past few
years, all those closed bits make it very hard to say that hardware is secure.
It would be nice if we could leave the Intel world and use ARM or MIPS
processors, but I think the graphics situation holds us back.
I was given a Teenage Engineering PO-14 for Christmas and took it with me
to congress for entertainment on the way. The pocket operator has a load of
functions hidden behind very few buttons, I had a lot of fun playing with it on
the flight. I have still to really figure out everything this board can do.
Watching some OP-1 videos (their much bigger synth), TE manage to pack a ton of
functionality into hardly any keys.
I thought we had hit all of the peaks on Bennachie, but looking at stuff
later it seems there are about 7 'summits' to hit. That's annoying, living in
Aberdeen I have done the Mither Tap walk loads of times. Today was my
first time taking the trek over to Oxencraig.
That was most of today, I poked some wireless driver stuff, but it is all
initial steps.
Reading: Babylon's Ashes
Aberdeen, Scotland: -2°C, Mixed precipitation in the morning and evening.
Rudy Giuliani was nominated Cyber Tzar or something yesterday, the hacker
community suddenly became very interested in his credentials. This morning
twitter was filled with the results of intelligence gathering exercises.
The domain now points to localhost, someone clearly got a late night phone
call. It is strange that only now is noise being made about this, Rudy isn't
exactly a popular figure in America. He made a lot of mistakes in high profile
positions. The big scary guys in the Int agencies will have pursued all these
leads a long time ago.
Of course, that is assuming the site wasn't a honeypot.
Reading: Babylon's Ashes
Aberdeen, Scotland: -3°C, Mixed precipitation in the morning and overnight and windy in the morning.
I use Wireshark all the time. I was lucky to get a copy
of Hacking: The Art of Exploitation when I was a teenager, the book gave
me an excellent introduction to using tcpdump to perform network analysis.
tcpdump is the first tool I reach for when I wonder where the packets are
going, but for anything higher level (breaking down http, checking wlan flags)
I use wireshark, I am always impressed.
At 33c3 there was a wireshark introductory self organised session run by
kirils. I did not go to this session, but the slides I found look to
be an excellent introduction to using wireshark.
My head is pretty full writing slides for FOSDEM. Here is an interview with
William Binney, if you don't know of Binney this interview is a great
introduction. Binney is credited by Snowden as one of the motivators behind his
set of leaks.
Binney also gave the keynote at Hope 9, which is a great watch.
I reinstalled or upgraded my c720 or something and things are a bit all over
the place. Tonight I started firefox in the hackerspace and noticed my
trackpad wasn't working, it needs to be explicitly set up. This is mentioned on
the comprehensive FreeBSD c720 guide, but there have been some
updates to the driver that aren't reflected on the page. You now need to
load the chromebook_platform driver manually.
The cyapa driver offers all the features you would want from a trackpad:
two finger dragging, thresholds for taps and a three button mouse emulation
mode.
# sysctl debug.cyapa_enable_tapclick=3
Which gives me the following awesome mouse button layout on the trackpad.
Physical access is pretty much always game over, apart from the iPhone there are
not many devices that can stand up to attack. Intel seem to want to make
physical access even easier and are now offering JTAG access over USB.
JTAG is a hardware debugging protocol normally seen on embedded systems or
accessed through a special adapter on the motherboard. You can use JTAG to
pause a processor, step through the instructions being executed and read and
write memory. With JTAG access you have full control of the machine.
One of the speakers asks the audience early on 'Do you think Internet
Censorship should be allowed?' and gets about half the crowd showing hands. I
really cannot understand that sort of response, clearly there are things we
don't want people to see, but I can't support a blanket censorship system to
block that content.
If there was a way to block really dangerous material, without risking blocking
completely reasonable material I am sure that is what we would be implementing.
It was a change from sitting inside, the view was really nice. On the way up I
was thinking about photography and finding the right equipment. It is pretty
clear my J1 with a 10mm pancake lens isn't ideal for landscape photography, but
I am not really sure how to get a set of gear to make the photos I want to take
possible.
Sitting down with books and reviews are the obvious way to figure this out, but
maybe there is a more 'fun' solution. Here's an idea for free:
We take in the camera equipment you already have.
You go through flickr, 500px or something else and tag photos you wish you had taken.
We parse out the lens/camera used
We recommend the gear to help take the photos you want
What do you do when you find a USB stick on the ground?
Clearly you take it to work, plug it into a computer with network admin
privileges to make sure there is nothing funny about it.
I guess something could go wrong, I saw a documentary once where criminals
dropped a load of USB sticks on the ground which an unsuspecting prison guard
used in a computer. They probably put some malware on that USB stick and all,
not cool.
Anyway, at congress I saw this sign, sans stick. I hope there was both
something horrible on it and something that made it worth the hassle.
The CCC put together an excellent track of talks about Science and Space
technology. I chewed through a lot of them yesterday, they set a really great
tone and are aimed really well at their audience.
I have been thinking recently about organising events locally that have much
more technical content than the current things that happen. Up here there isn't
the density of expertise required to run a monthly or even quarterly event
without running out of fresh speakers very quickly.
Techmeetup Aberdeen really struggles to bring speakers in, many times
falling back to a set of 'known good' speakers from the local
hackerspace.
Sessions by experts in a field with technical content, aimed at Non Cyber
Muggles from other fields (similar pitching as the space track talks) could
work very well. I will have to play with this idea and see if people from other
fields are interested in taking part.
bunnie has a long history of doing really cool things in hardware hacking,
his book Hacking the Xbox is a great read (he has another book in the
works too). bunnie and xobs presented a complete tear down and reverse
engineering of sd cards at 30c3, at 33c3 they were back talking about
their education project chibitronics.
bunnie's talk is about the project itself, its technical design and motivations;
if the front matter of the talk turns you off, believe me when I say it is worth
powering through and watching the whole thing.
xobs presents an excellent session of bit banging out usb from a low power
Cortex-M0+ microcontroller. This talk is a great introduction to the low
level details of the usb protocol.
God damn it! I won't be downloading all the 33c3 talks this year to watch
offline, instead I will stream them from the excellent media.ccc.de. No
good reason, I am only doing this because when making a list to feed to wget I
did:
I didn't really have the disk space spare to store 100GB or so of talks anyway.
I will stream the videos in my browser instead. I don't really have a set
approach to watching the CCC talks. I normally work through the list watching
things that others have said were good, or talks whose titles catch my eye.
Chaos is an important part of CCC, most of the best things that happen are
pranks that only a small number of people experience. The Fnord News Show has a
large German speaking audience; I am pretty sure this awesome 'event'
is unknown outside of the German crowd.
14 hours of sleep, I feel like I have woken up in another dimension.
There is a ton of congress stuff floating around twitter. Here is a list
of talks ranked by the number of tweets and retweets mentioning them. If you
are only going to watch a couple of sessions from 33c3 that list is probably
great.
I caught an awesome article classifying the MacOS malware found in 2016. I
am glad there is so little malware aimed at this platform.
I was hoping to write the first blog post of this year from the airport, but
time conspired against me. With BA moving my flight 24 hours I had to decide
between having a quiet New Year and a fun one.
I certainly did not go to 'There is No Party' which was not held somewhere in
HH. The music was excellent, the crew that did the lighting and audio did
amazing work. It makes me wonder what could happen here if there was space
where people could play.
From the party I headed back to the apartment, packed and set off for the
airport. I turned a long day with a weird sense of time into an adventure
across Hamburg at New Year and through the Airport.
By the time I made Heathrow I was on about 4 hours sleep in a 36 hour window, I
opted to nap instead of writing.
An abundance of particles in the air forced BA to cancel my flights home. They
knocked my flight back 24 hours without any choice in the timing of my new
flight. This means I get to spend an extra night in Hamburg and I get to move
all of my new year plans to somewhere in the future.
Congress is done for another year, it has been an amazing event. There really
is nothing like it on the planet, attempts at describing the event and
conveying that different world always seem to fail.
The Chaos Communication Congress really is a place that must be seen, with the
CCH being knocked down next year the event you go to will certainly be
materially different that the one I have attended for the past three years.
I cannot wait to see how the deal with loosing the CCH and where CCC ends up in
the future.
Presented without comment are the books I read in 2016, ordered with the most
recently read first:
* All Tomorrow's Parties
* Idoru
* Virtual Light
* Excession
* Cibola Burn
* Reamde
* Abaddon's Gate
* Seveneves
* ELEKTROGRAD
* Ashes of Victory
* Little Brother
* Transmetropolitan Book Vol 3
* Transmetropolitan Book Vol 2
* Transmetropolitan Book Vol 1
* Transmetropolitan Book Vol 10
* Transmetropolitan Book Vol 9
* Transmetropolitan Book Vol 8
* Transmetropolitan Book Vol 7
* Transmetropolitan Book Vol 6
* Transmetropolitan Book Vol 5
* Transmetropolitan Book Vol 4
* The Cuckoo's Egg
* Networks of New York
* Hydrogen Sonata
* Surface Detail
* Matter
* Look to Windward
* Inversions
* Use of Weapons
* The Player of Games
* Consider Phlebas
* The Man in the High Castle
* Overtime
* Equoid
* Down on the Farm
* Neuromancer
* The Soul of a New Machine
* The Wise Man's Fear
* Name of the Wind
* Zero History
* Spook Country
* The Atrocity Archives
* The Fuller Memorandum
* The Jennifer Morgue
* The Apocalypse Codex
* The Annihilation Score
* The Rhesus Chart
* The Long Dark Tea Time of the Soul
* Dirk Gently's Holistic Detective Agency
* Titus Groan
* Next Stop Execution
* Mona Lisa Overdrive
* Pattern Recognition
* Gormenghast
* Cunning Plans
* Tubes
Day 4 is the sad day, at a certain time this evening a switch will flip,
everyone will stop what they are doing and start packing up. Right away the
hackcenter will go from being another world back to a boring hall.
Blinkenlights are really big in hacker culture, the hackcenter where our
table is located is completely full of led strips, installations,
projectors and a ton of other things that glow, flash or blink.
ATTENTION
This room is fullfilled mit special electronische equippment.
Fingergrabbing and pressing the cnoeppkes from the computers is allowed for die experts only!
So all the “lefthanders” stay away and do not disturben the brainstorming von here working intelligencies.
Otherwise you will be out thrown and kicked anderswhere!
Also: please keep still and only watchen astaunished the blinkenlights.
The obvious thing to do with blinkenlights is to get them onto the network; my
udp panel continues this glorious tradition. There are loads of awesome blinkenlights on the network:
There is a group of hackers sat in front of the flipdot sign day and night
playing with it. The flipdots are small electromechanical modules that can be
either white or black, the modules take a fraction of a second to switch and
make an awesome sound as they do.
Congress is a great place to try out apps or networking things that require a
lot of people involved. All over the building there are posters up with apps,
network services, political action, calls for poets, puzzles, manifestos.
Following up on these ideas could fill your time at congress.
One poster that caught my eye was a call to use a decentralised mesh networked
micro blogging service. The poster links to an app called Rumble that is
available for Android and iOS. There are enough people willing to try things
out at congress that a meshed messaging app could be great fun to use.
Unfortunately it seems the app can't handle the network conditions at congress.
The meshing can work over wifi or via bluetooth. I suspect the mesh over the
wifi uses something like mdns for neighbour discovery. We found when we
tried the SlowTV that multicast is blocked on the wifi for performance reasons.
The bluetooth option for the app seems unable to find any neighbours in the
hackcenter either. It might be that the rf conditions are making this nearly
impossible.
I will keep trying to play with the app after the event, but it would have been
awesome if it had been usable at congress.
Hackaday are covering 33c3, mostly talks so far, but there may also be
articles about all the amazing projects that fill up the CCH. There are so
many awesome internet controlled projects around here that it is probably
impossible to see all of them. The contents of the rooms in the hack center are
changing all the time as well.
I think today I am going to see how many network blinkenlights projects I can
find and make a little catalog. A metablinkenlights controller would be
awesome to build out.
Day 1 became Day 2 with the industry standard partying all night transition.
This morning was a very slow start, with my lightning talk somewhere in there;
the talk came out okay I think.
Keeping track of time in here is really difficult, the leds sort of merge
everything together, and windows would ruin the atmosphere so they aren't
available as a measure of time. We know that daylight hurts hackers' brains.
So far things have been a flop on the project front. The congress network
doesn't support multicast on the wireless, the wired segment is fine, but it
has caused us to run out of steam. Multicast traffic on the wifi has to be sent
at the lowest rate connected clients support, this burns a lot of airtime
leaving multicast blocked on wifi access points.
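A back-of-the-envelope sketch of why (illustrative numbers only, ignoring preamble, ACKs and contention):

```python
def airtime_ms(frame_bytes, rate_mbps):
    # transmission time for the payload alone, in milliseconds
    return frame_bytes * 8 / (rate_mbps * 1e6) * 1000

# a 1500 byte frame at the 1 Mb/s base rate vs a 54 Mb/s unicast rate
print(airtime_ms(1500, 1))   # 12.0 ms
print(airtime_ms(1500, 54))  # roughly 0.22 ms
```

One multicast frame at the base rate burns as much airtime as roughly fifty unicast frames, which is why access points often just drop it.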
The UDP panel hasn't been set up yet, the projector didn't make the trip across.
The first day and there are some excellent sessions lined up. All of the talks
are recorded, I normally catch up with the talks that catch my interest after
the event.
At congress it is best to hit the self organised sessions, they aren't recorded
and are almost always excellent. Because they are excellent they are really
hard to attend, loads of other people show up. The good sessions are standing
room only with the corridor completely full too.
- [Are_decentralized_services_unable_to_innovate][1]
- [Mechanical_Keyboard_Meetup Tryout][2]
- [We Fix the Net][3]
In the past I lined up a busy schedule for the congress and ended up not going
to any of the sessions. This year I am going to try and drop in and out of
events, the sessions above are more of an intent than a plan.
The We Fix the Net session is really interesting, instead of a single event
they have an afternoon of panels lined up. They are more focused on security
aspects than transport, it should be a highlight of the event.
Flight worked out well, the delay I had setting off from Aberdeen shortened my
transfer, but it didn't hold anything up. I made it into the congress center
around half three and was too early to get a ticket.
Rest of the day went into setting up blinkenlights and other important
projects. Tomorrow I will explore the place and see what is going on.
I think the guy that brought my breakfast told me off for the table I was
using. He said something, but thankfully my amazing Shure SE215 isolating earphones
blocked all of the sound out. I have a reason to be ignorant. I am sitting in a
bar place, but I bought food so I could sit at a sensible height and type.
I figure the main purpose of airport seating is to stop you complaining and to
keep you awake. The lack of somewhere to sit your laptop and type like a normal
human is infuriating.
This coffee isn't even good.
ABZ->LHR->HAM
Reading: Nemesis Games, Or Nothing
And by nothing I mean I don't think I will get any more reading done this year.
Dyce, Scotland: 1°C, Snow (1–2 cm.) in the morning and windy starting in the afternoon, continuing until evening.
Weather is getting cold, the storms have been beating the house.
Time to leave for somewhere, well somewhere not warmer, just different. Packing
for congress was made much harder by ridiculous shipping regulations, an extra
couple of bottles in my bag are worth it for the fun of a buckfast party night.
The pico projector was one of the first victims in the packing war, having
spent more time with this projector I don't think it is going to be a big loss.
I still have the udp panel and the slowtv project, with the lightning talk I
probably have enough details to worry about at congress.
For development of the RSS IRC bot I need an RSS feed that updates frequently,
I could subscribe to a HN feed or something from reddit. I did a quick search
for something high speed and found an SO thread with exactly what I need.
The linked heroku app can provide a feed updating at any interval you
want. Turns out this is great for soak testing code over night.
For a long while I have wanted a bot to sit in irc channels spitting out
updates on rss feeds. I think I have finally found all the pieces I need to
write a bot in the way I like.
I have written up a really simple IRC client on top of asyncio, but it needs
much more testing than a day of dev. I want to have ssl running before I try
and run the bot against anything.
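For what it is worth, asyncio makes the TLS part fairly painless; a minimal sketch (irc_connect and the default port are my own placeholder names, not the bot's actual code):

```python
import asyncio
import ssl

async def irc_connect(host, port=6697):
    # hypothetical helper: open a TLS connection to an IRC server;
    # asyncio.open_connection does the handshake when ssl= is set
    ctx = ssl.create_default_context()
    reader, writer = await asyncio.open_connection(host, port, ssl=ctx)
    return reader, writer
```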
The RSS code is much simpler:
import aiohttp
import asyncio
import async_timeout
import feedparser

INTERVAL = 60

async def fetch(session, url):
    with async_timeout.timeout(10):
        async with session.get(url) as response:
            return await response.text()

async def fetchfeeds(loop, feedurls, ircsock):
    feeds = []
    for url in feedurls:
        feeds.append({'url': url, 'last': ""})
    while True:
        for feed in feeds:
            async with aiohttp.ClientSession(loop=loop) as session:
                html = await fetch(session, feed['url'])
            rss = feedparser.parse(html)
            if feed['last']:
                if (feed['last']['title'] != rss['entries'][0]['title'] and
                        feed['last']['link'] != rss['entries'][0]['link']):
                    print("new entry")
                    feed['last'] = rss['entries'][0]
                    print("MSG {}".format(feed['last']['title']))
                    print("MSG {}".format(feed['last']['link']))
            else:
                feed['last'] = rss['entries'][0]
        await asyncio.sleep(INTERVAL)

loop = asyncio.get_event_loop()
loop.run_until_complete(fetchfeeds(loop, ['https://n-o-d-e.net/rss/rss.xml',
                                          "http://localhost:8000/rss.xml"], None))
This is really only a proof of concept, there needs to be much more error
handling before I would expect this to run for long.
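Retrying failed fetches is the obvious first piece of that error handling; a sketch of the shape it could take (fetch_with_retry is a hypothetical helper, not part of the bot above):

```python
import asyncio

async def fetch_with_retry(fetch, url, retries=3, base_delay=1.0):
    # hypothetical wrapper: call any fetch coroutine, backing off
    # exponentially between failed attempts
    for attempt in range(retries):
        try:
            return await fetch(url)
        except Exception as exc:
            print("fetch of {} failed ({}), retrying".format(url, exc))
            await asyncio.sleep(base_delay * (2 ** attempt))
    return None
```

A dead feed then costs a few delayed attempts instead of killing the whole loop.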
Checking the FOSDEM instance of the terrible pentabarf submission system I see
that my second talk proposal to a devroom has been accepted. I think, they
haven't timetabled the room yet so I can't link to a timetable slot for the
talk.
I have three talks coming up:
33C3 Lightning Talk
FOSDEM BSD devroom talk
FOSDEM Real Time Communications devroom Talk
All three of these talks are going to tell the same story, laid out in
different ways: The Internet is Broken, Fixes are hard to deploy, Developers
won't use new protocols, We have a solution.
The changes that are happening in the internet right now are really
interesting, but it is really hard to get over the knowledge curve required for
the solutions to make sense. It is really common to hear, "The Internet works
fine, why are you trying to fix it", from people that really should know
better.
If you want to find out the what and why of internet transport evolution
you should find one of these talks and watch it. Hopefully they will all be
recorded and online after the events.
If you want to know more you can email or track me down in IRC.
I played with the wifi camera last night, but I couldn't get my phone to
connect to it when my laptop was in monitor mode. That was perplexing enough to
hold up anything I was trying to do. I might try again tonight with something
that isn't a mac, verifying I can intercept phone traffic is step 1 in this
project.
This is my last day in the 'office' this year, apparently I am too late to wish
folk a good new year as I am the only person in today.
Okay, one CCC project done. The panel now accepts data via UDP, if you send
enough it will reset the whole panel to something. It doesn't do what I want,
but what it does right now is much much cooler than what I planned to do.
If I get time during congress I will do something more I guess. Here is all of
the code so you can make your own and play along at home.
import machine, neopixel, time, socket, uos

LEDCOUNT = 64

skull = [
    0,0,1,1,1,1,1,0,
    0,1,1,1,1,1,1,1,
    1,0,0,1,0,0,1,1,
    1,0,0,1,0,0,1,1,
    0,1,1,0,1,1,1,0,
    0,0,0,1,1,1,0,0,
    0,0,0,1,0,1,0,0,
    0,0,0,0,0,0,0,0,]

def chunks(l, n):
    n = max(1, n)
    return [l[i:i + n] for i in range(0, len(l), n)]

if __name__ == "__main__":
    addr = "0.0.0.0"
    port = 6969
    pin = machine.Pin(14, machine.Pin.OUT)
    np = neopixel.NeoPixel(pin, LEDCOUNT)
    print("receiving from {} {}".format(addr, port))
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, True)
    sock.settimeout(1.0)
    sock.bind((addr, port))
    while True:
        try:
            pkt, addr = sock.recvfrom(1024)  # blocks for up to a second
            print("addr {}".format(addr))
        except OSError:
            pkt = b""  # receive timed out
        colours = chunks(pkt, 3)
        if len(pkt) == 3 * LEDCOUNT:
            # a full 8x8 RGB frame, copy it straight onto the panel
            for x in range(LEDCOUNT):
                np[x] = tuple(colours[x])
        else:
            if not len(colours) % 3:
                # nothing (or nothing useful) arrived: pick a random colour
                colour = tuple(uos.urandom(3))
            else:
                colour = tuple(colours[0])
            for x in range(len(skull)):
                if skull[x]:
                    np[x] = colour
        np.write()
        time.sleep(0.16)
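Driving the panel from a laptop is then a single sendto(); a sketch of a client (build_frame and the panel address are my own assumptions, use whatever IP the NodeMCU picks up):

```python
import socket

def build_frame(colour, ledcount=64):
    # pack a single (r, g, b) colour into a full panel frame,
    # three bytes per pixel, 192 bytes for the 8x8 panel
    return bytes(colour) * ledcount

def send_frame(frame, addr=("192.168.0.50", 6969)):
    # the address is a placeholder for wherever the panel ends up
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(frame, addr)

# e.g. send_frame(build_frame((32, 0, 0))) paints the whole panel dim red
```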
SlowTV is going slowly (lol), the project is all set up. I need to figure out
how to get the pi to output at the teeny resolution it supports.
This weekend has to be considered a weekend off. I played with a wifi camera
yesterday, but I only made a dent in the project. Today was a final(ish) run at
Christmas shopping, in all I did almost nothing all weekend. It feels great.
Next week will be a hectic run to get work done, and projects ready for
congress.
For a thing, I want to dump the wlan traffic between an Android app and a
wifi camera. It isn't hard to grab network traffic from Android, if you have a
rooted device you can just run tcpdump. tcpdump on Android is annoying, you
have to manage the pcap files and it isn't clear what you are capturing.
Thankfully, wireshark can be fed WPA and WEP keys, making snooping as a
third party an absolute breeze. The key options are in the protocol preferences
for IEEE 802.11, they look something like this:
The protocol preferences dialog doesn't seem to do any validation of the keys;
instead I had to restart wireshark to get a super unhelpful error message.
The wireshark guide mentions the wireless toolbar, but this wasn't available
on my platform and I didn't need it. With just the key, WEP traffic can be
decrypted. WPA traffic requires that you capture an EAPOL handshake first. The
easiest way to do that is observe the device keying, for testing I just had my
phone join the network.
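For reference, the decryption key table under the IEEE 802.11 preferences takes entries of these shapes (the example values are made up):

```
wep:0102030405            a 40/104-bit WEP key as hex digits
wpa-pwd:passphrase:SSID   a WPA passphrase plus the network SSID
wpa-psk:<64 hex digits>   the pre-computed 256-bit PSK
```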
Today was a very slow start, staying in bed for an extra hour really didn't
help me out at all today. Normally the end of the year is quite calm, all of
the deadlines seem to have concentrated themselves at the start of next year.
Time to work on interesting but not pressing problems probably won't exist
next year, so as much as possible has to happen in the next week.
That does make preparation for congress very interesting.
Unable to remember the name of the Wellington Suspension Bridge I came
across this awesome website that documents the 'Doric Columns'. The site
is full of history about Aberdeen and the local area, including old
photographs, paintings and etchings of the local infrastructure. This
Etching of the Brig o' Balgownie gives a real impression of the extent of
the land reclaimed from the Sea in Aberdeen.
I got the confirmation email today, I will be presenting a lightning talk about
internet transport at congress. There are about one hundred billion lightning
talks at each congress spread over three days, the bar for entry is much lower
than a real track talk. I am happy to be included with the likes of the hacker
yoga guy from camp. The lightning talk reveals the secret fourth planned item
for my trip to Hamburg.
With my FOSDEM talks and congress I have been preparing 'external'
facing presentations a lot this month. I am now sure that there isn't any fixed
length of talk that really works. 55 minutes is a lot of time to speak for,
writing a coherent story that will come across in that amount of time is hard.
And yet, a 5 minute lightning talk slot is a horrible thing! There isn't much
time to speak, which means there is almost no time at all to get your problems
out and your solutions in order.
I am quite sure the lightning talks are live streamed, they are certainly
recorded. I will post a link to the timeslot once I know when it is, the live
stream just before it happens and the video once it is posted.
I have the panel built and some initial code running. I still want to connect the panel to the network and do some other cool effects, but this is a good start.
Slow TV
hibby and I spent a couple of hours playing with VLC and python on Sunday. We have a script that can send out a multicast video stream which we can pick up in vlc. We need to get separate audio working and video playlists going before we can say this is ready.
Some sort of display showing:
I am going to set up a pi today, it will boot into rainbowstream running with image on terminal. I will connect that to the cheap pico projector I have and then I will call it 'done'.
It seems I will be speaking at FOSDEM next year in the bsd devroom. I will be
presenting "Transport Evolution on top of the BSD's", whatever that means
:D. I also have another talk in a similar vein submitted to the Real Time
Communications devroom, it looks like my first FOSDEM will be a busy one.
Yesterday I finally got sick of windows 10 and installed Ubuntu 16.10 on the HP
Stream 7. There are loads of instructions out there to install earlier versions,
but nothing up to date.
Intel created a world of pain when it introduced 32bit UEFI for 64bit machines,
the Stream 7 is one of many Baytrail devices that have this ridiculous boot
firmware configuration. Linux distros have completely ignored this boot
combination and, 3 years after these devices started appearing, still haven't
started to ship the bootia32.efi files on the install media.
Thankfully there are people spinning builds with the correct media and a whole
load of drivers to help support silly little Intel convertible tablet things. I
used Linuxium's Ubuntu 16.10 build. I flashed it to a USB stick and used
my awesome little otg usb hub to connect the install media. I did have to
disable secure boot in the bios to get the media to boot.
I needed a separate keyboard for the install, but the touch screen and WiFi
worked out of the box. The hardware support is okay, there is no screen
brightness control out of the box and suspend is missing. I spent a while
setting up the on screen keyboard onboard, it looks like I am going to
have to create my own layout to get a split keyboard.
The install was painless, much much easier than the Baytrail device I tried to
use ubuntu on in 2014. It does still baffle me that distros don't have boot
support for these devices on the installers.
The whole point of sharing my plans yesterday was to make sure I actually
do them. Easiest on the list is to set up the UDP controlled blinkenlights
panel. I am going to attach this to my bag or something, there must be cool
blinkenlights everywhere I go.
The panel is an 8x8 addressable RGB pixel array made from the super popular
WS2812. Control is a NodeMCU board running micropython; the NodeMCU is an
ESP8266 broken out in a sensible way, which means I can get this cool little
project on the network.
I have some pieces of clear acrylic and foam sandwiched together to diffuse the
light, all held together with some bolts. Here is the small test
script I have put together so far:
import machine
import neopixel
import time
import uos

pin = machine.Pin(14, machine.Pin.OUT)
np = neopixel.NeoPixel(pin, 64)

skull = [
    0,0,1,1,1,1,1,0,
    0,1,1,1,1,1,1,1,
    1,0,0,1,0,0,1,1,
    1,0,0,1,0,0,1,1,
    0,1,1,0,1,1,1,0,
    0,0,0,1,1,1,0,0,
    0,0,0,1,0,1,0,0,
    0,0,0,0,0,0,0,0,]

while True:
    colour = tuple(uos.urandom(3))
    for x in range(len(skull)):
        if skull[x]:
            np[x] = colour
    np.write()
    time.sleep(1)
I am going to accept any 8x8 RGB frame (any 192 byte packet) that is sent and
take any other shorter packet and use it to set the colour on the skull. I will
include a timeout to change the colour so it isn't just a static panel.
If I find a load of spare time between the sofa cushions I will throw together
a web interface.
The most important twitter account counts down the remaining days to
congress. Today there are just 19 days left until a hole opens in the universe
and excellent people appear to keep the base going.
Ignoring the large amount of realwork™ I have to do, there is a lot of
important stuff to be prepared for congress. I have tried to avoid committing to
doing anything on my holiday, but I plan to bring the following three ideas:
RGB Pixel Display
I have a 8x8 neopixel display with a small case. I am planning to make it controllable via UDP packets and leave it on the open network for other people to find and play with.
Slow TV
Multicast video feed showing relaxing slow paced video. I have a lot of driving footage from my trip to Iceland. I will include some other feeds I have picked up in the last few months. We might include a radio station alongside, but that bit hasn't been figured out yet.
I looked up reverse geocoding with openstreetmap and found a keyless api.
Reverse geocoding is the process of turning a location as a latitude and
longitude into a place name. This is handy for creating my daily post footer, I
want to have a script that will take in a lat/lng pair and output the full
location name and weather with a map link.
I can use the kindly provided nominatim reverse geocoding URI and a bit of
python. I guess openstreetmap thinks I am in a weird parallel UK that is made
up of states, that is easy to deal with thankfully.
base_url = "http://nominatim.openstreetmap.org/reverse?format=json&lat={}&lon={}&zoom=18&addressdetails=1"
uri = base_url.format(lat, lng)
fp = urllib.request.urlopen(uri)
response = fp.read()
location = json.loads(response.decode("utf8"))
fp.close()
city = location['address']['city']
country = location['address']['country']
if country == "UK" or country == "US":
    country = location['address']['state']
return {'country': country, 'city': city}
I end up with a single script for generating the location/weather block. The
script will default to my 'work' location or it will try to format a lat/lng out
of any arguments passed in.
#!/usr/bin/env python3.5
import forecastio
import urllib.request
import json
import sys

api_key = "yer_key_here_bawbag"
lat = 57.168
lng = -2.1055

def forwardweather(lat, lng):
    forecast = forecastio.load_forecast(api_key, lat, lng)
    weather = forecast.daily().data[0]
    temperatureMax = int(weather.apparentTemperatureMax)
    summary = weather.summary
    return {'temperature': temperatureMax, 'summary': summary}

def reversegeocode(lat, lng):
    base_url = "http://nominatim.openstreetmap.org/reverse?format=json&lat={}&lon={}&zoom=18&addressdetails=1"
    uri = base_url.format(lat, lng)
    fp = urllib.request.urlopen(uri)
    response = fp.read()
    location = json.loads(response.decode("utf8"))
    fp.close()
    city = location['address']['city']
    country = location['address']['country']
    if country == "UK" or country == "US":
        country = location['address']['state']
    return {'country': country, 'city': city}

if __name__ == "__main__":
    if len(sys.argv) == 2:
        loc = sys.argv[1].split(',')
        if len(loc) != 2:
            exit()
        lat = float(loc[0])
        lng = float(loc[1])
    if len(sys.argv) == 3:
        lat = float(sys.argv[1])
        lng = float(sys.argv[2])
    print("Getting weather for: {}, {}\n\n".format(lat, lng))
    weather = forwardweather(lat, lng)
    location = reversegeocode(lat, lng)
    base_url = "http://www.openstreetmap.org/search?query={}%2C%20{}"
    uri = base_url.format(lat, lng)
    print("[{}, {}][0]: {}°C, {}".format(location['city'], location['country'],
                                         weather['temperature'], weather['summary']))
    print("\n[0]: {}".format(uri))
I am still playing with other fields to stick onto the daily post. So far I
have been sticking on a reading field that can sort of track how I am
progressing with books. I want to include a fuzzy location and the state of the
weather around me. Obviously I know where I am, but looking back it will be
interesting to have a record of where I was when I posted.
I have tried with outside, reality, being and a load of other
vague terms; writing this out, those all look ridiculous. Now I have tried just
letting the info hang there instead: my current lat/long converted to a place
name with a link to a map, followed by the weather.
Union Terrace Gardens has some excellent pieces that were put up as part of a
street art festival. Adding culture to the city is great, but there is
something about 'sanctioned creativity' that really annoys me. I know the
residents around here would be up in arms if someone did a giant mural
overnight.
Reading: Cibola Burn, Virtual Light
Location: 57.1578,-2.2143
Weather: 2°C Partly cloudy starting in the evening.
I think the weather stuff I played with yesterday is going to be an input
to a quantified self dashboard I have been toying with building for a long
time.
I have wanted to put together a dash for years, but I have always struggled to
find technologies that I want to work with. For a demo at work I have had
to put together a simple dash, all it does is show interface throughput for two
interfaces, but it has given me a chance to play with the front end UI and
backend webserving components that I want to use.
I am lurking in a coffee shop now, which is a great time to have a first whack
at the idea.
My good friend Warren Ellis (well, complete stranger, but I read his newsletter
so that is pretty much the same thing) tweets pictures of where he is with the
weather info overlaid. I am sure he is using some sort of newfangled social
media filter to provide the info. I want something similar for the footnotes on
my daily post, but social media stuff is no good for me, I need an API to use.
Now, as hard as I try I cannot find a weather service that will just spit some
data at me. I really want to do curl weathersite.internet | jq... and end up
with a nice summary for a location. The web is closing up and locking down,
which means an API key is required.
After putting this off for a while, this morning I remembered I have previously
registered for a weather service. A steaming cup of coffee later and I found
the python bindings to the excellent forecast.io already installed.
There isn't anything to this. If I could find an API that didn't require a key
I probably wouldn't even use python. But madness makes more madness, so here we
are.
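There is nothing to the script beyond string formatting; here is a minimal sketch of the footnote builder. Both dicts are hypothetical stand-ins for what a geocoder and the forecast.io bindings would return, and the field names are assumptions, not the real API's shapes:

```python
# Hypothetical stand-in data: real values would come from a geocoder
# and from the forecast.io python bindings.
location = {'city': 'Aberdeen', 'country': 'UK'}
weather = {'temperature': 4, 'summary': 'Partly cloudy throughout the day.'}

def footnote(location, weather):
    # Same shape as the weather footnote on the daily posts:
    # place, temperature, then the forecast summary.
    return "[{}, {}]: {}°C, {}".format(
        location['city'], location['country'],
        weather['temperature'], weather['summary'])

print(footnote(location, weather))
```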
Reading: Cibola Burn, Virtual Light
Location: 57.168, -2.1055
Weather: 4°C Partly cloudy throughout the day.
Warren Ellis's morning.computer was the main driver for me to start
blogging every day. I like to think I am being influenced by someone super
productive, rather than blatantly copying him.
I have felt terrible all week and haven't had energy to really do anything.
Having an image to post every day turned out to be an excellent idea. I do need
to go through the archives to top up the reserve of images at some point.
This plane sits in a glacial outwash plain in the South of Iceland.
The area around it is barren and devoid of life. We arrived in a fog bank,
there was nothing to see in any direction save for the well worn path out to
the wreckage.
Walking out was like being in a dream, we could see through the haze the bright
clothing of other visitors to the plane.
The fog lifted for our return journey, but the landscape didn't improve. The area
is almost completely flat, with small undulating banks of aggregate. The entire
place looked like the surface of Mars rendered in black.
I use taskwarrior to manage tasks, well sort of. Every so often I fill it
with highish level tasks and leave it completely forgotten for a few weeks. On
a similar frequency (though out of phase) I look through my task list and prune
out the things I have done. This isn't great; I have had a lot of trouble
refining down tasks, figuring out what to do, then doing it.
Last night I thought I would try to start generating a set of tasks to do
TOMORROW, then when I got to work the next day I could ask taskwarrior
what I was to do that day. Taskwarrior makes that sort of easy with virtual
tags, and the virtual tags can only be generated from due dates.
$ task add due:tomorrow proj:life get milk
Created task 1
Will generate a task due tomorrow, but come tomorrow it will be tagged
with TODAY. Makes sense, right? We can then easily search for all tasks
matching the TODAY tag:
$ task +TODAY list
ID Age P Project Due        Description Urg
 1 1m  L life    2016-11-30 get milk      1
1 task
Taskwarrior's output looks awesome on the command line, but it doesn't come
out of my thermal printer very well. Taskwarrior will output json with the
export flag, but json isn't very fun on the command line. Thankfully there is the
jq tool. jq claims to be like sed for json, which explains its near
inscrutability.
With these bits we can generate a snappy list of things we have to do
today:
figlet -f small TODAY:; cat tmp.json | jq -r '.[] | .project, .description, ""'
Something like:
_____ ___ ___ ___ ___
|_ _/ _ \| \ /_\ \ / (_)
| || (_) | |) / _ \ V / _
|_| \___/|___/_/ \_\_| (_)
schemes.crime.bank
Order drawings of bank
schemes.crime.bank
Enroll on plasma cutting course
schemes.crime.botnet
Establish control channel for bots on freenode
schemes.crime.botnet
Register spam address
life
get milk
life
put bins out
Which is really easy to spit out to my thermal printer:
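The same transformation can be sketched in Python instead of jq; this assumes the JSON came from task +TODAY export (project and description are Taskwarrior's export field names):

```python
import json

def printer_lines(export_json):
    """Turn `task +TODAY export` JSON into plain lines for a thermal printer."""
    lines = []
    for task in json.loads(export_json):
        lines.append(task.get('project', ''))
        lines.append(task.get('description', ''))
        lines.append('')  # blank line between tasks, like the jq version
    return '\n'.join(lines)

sample = '[{"project": "life", "description": "get milk"}]'
print(printer_lines(sample))
```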
I wonder if there is some way to get xscreensaver to run a script when I log
in? I could use that hook to tidy away undone tasks and do the print out on my
first log in of the day.
python-libtrace comes highly recommended over scapy. Scapy always
feels a bit alien to me, I think because of the custom repl front end aimed at
'security people' (whatever that means). I am sure it is there to make things
simple, but for me it just makes it harder to write programs with.
python-libtrace certainly isn't easy to install; all of the documentation is
left to the libtrace project. Once I figured out the magic words I was able to
throw together a dscp mark classifier really quickly. For live capture on your
system you will probably have to change the bpf:em0 to something like
pcapint:eth0.
import sys
import time
import plt

trace = plt.trace('bpf:em0')
trace.start()

INTERVAL = 1
dscp = {}
start = time.time()
try:
    for pkt in trace:
        ip = pkt.ip
        if not ip:
            continue
        dscpvalue = ip.traffic_class >> 2
        if dscpvalue in dscp:
            dscp[dscpvalue] = dscp[dscpvalue] + 1
        else:
            dscp[dscpvalue] = 1
        done = time.time()
        if done - start > INTERVAL:
            print("marks {}:".format(len(dscp)), end="")
            for mark, count in dscp.items():
                print(" {}:{},".format(mark, count), end="")
            print("")
            dscp = {}
            start = done
except KeyboardInterrupt:
    trace.close()
    sys.exit()
This can be tested with netcat quite easily, though the options seem to be
different everywhere.
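As an alternative to chasing netcat's inconsistent flags, a socket can be given a DSCP mark directly with the IP_TOS socket option; a sketch, using EF (46) as an example value:

```python
import socket

EF = 46       # DSCP value for Expedited Forwarding, just an example
tos = EF << 2  # DSCP occupies the top six bits of the TOS byte

# Any traffic sent on this socket will carry the mark.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
s.sendto(b"hello", ("127.0.0.1", 9999))
s.close()
```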
I wrote up a script yesterday to grab the most recent file from the super
awesome toshiba flashair wifi sd card. I had suggested the card to someone
in the hackerspace, he planned on using it to help align a camera trap
(not that model, but you get the idea).
Once you put the trap up a tree, it is a real hassle to figure out if it is
really pointing the way you want it to. So use the wifi sd card to grab the
latest image and confirm the aim.
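My script isn't reproduced here, but the interesting half can be sketched. The FlashAir card exposes an HTTP CGI for listing files; assuming the documented comma-separated file-list format (directory, name, size, attribute, then FAT-encoded date and time integers, where bigger means newer), picking the newest file looks something like:

```python
def latest_file(filelist_text):
    """Pick the most recently written file from a FlashAir file list.

    Assumes the comma-separated format: a WLANSD_FILELIST header line,
    then one line per file: directory,name,size,attribute,date,time.
    """
    newest = None
    for line in filelist_text.splitlines():
        parts = line.rsplit(",", 4)  # rsplit in case the name holds commas
        if len(parts) != 5:
            continue  # skips the header line
        head, size, attr, date, time_ = parts
        directory, _, name = head.rpartition(",")
        key = (int(date), int(time_))
        if newest is None or key > newest[0]:
            newest = (key, directory + "/" + name)
    return newest[1] if newest else None
```

The real script would fetch the list with an HTTP GET against the card and then GET the returned path.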
After writing the script I tried for a while to get my laptop connected, but it
seems that the camera trap doesn't keep the card powered on for nearly long
enough. I might be able to get it to work if I can make my laptop
overzealous in connecting to the wifi.
Apparently there isn't a simple API to turn a lat/lon into the weather. I have
no idea why web services all seem to insist on having an API key for all
requests. It is just annoying.
It seems I am submitting a lightning talk to CCC. Lightning talks are short 5
minute presentations. The format is really popular for adding a load of content
to a conference, giving many more people a chance to talk.
I have watched the congress and camp lightning talk sessions before, but I
can't really remember any jumping out at me. Searching today for 'best
lightning talks eva' didn't turn up useful results. Well, wat came up;
wat is an excellent talk.
I guess I will watch some lightning talks from previous congresses and see
what they were like.
$ iftop
interface: em0
IP address is: 192.168.204.4
MAC address is: ffffffec:ffffffb1:ffffffd7:34:ffffffa3:ffffffa1
pcap_open_live(em0): em0: You don't have permission to capture on that device ((cannot open device) /dev/bpf: Permission denied)
Getting a look at network rates is really easy on FreeBSD; the systat tool's
ifstat mode ships with the base system. But if you want to do this
programmatically there isn't a lot of information out there, I had to read
source code to figure out how to do it.
The initial iftop error message indicates it is doing a capture of all the
traffic on all interfaces and working this stuff out on its own. That
requires root and I really don't want the hassle of doing it; surely the OS is
already collecting these stats from the network stack?
There may actually be other interfaces for Linux, but I don't think it is worth
digging any further.
On FreeBSD you can do what systat does and use a sysctl call to populate a
struct. The bwm-ng man page has a heap of methods for finding these numbers on
different platforms, for the BSD's and MacOS it suggests the getifaddrs
interface.
For portable code not written in C I will probably set up a thread running
bwm-ng outputting csv data.
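A sketch of that thread, with a stand-in shell command in place of bwm-ng so it runs anywhere (swap the command for something like bwm-ng with its csv output option, and parse each line as it arrives):

```python
import queue
import subprocess
import threading

def line_reader(cmd, out_queue):
    """Run cmd and push each stdout line onto a queue as it arrives."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        out_queue.put(line.rstrip("\n"))
    proc.wait()

# Stand-in command producing two lines; the real thing would be bwm-ng.
lines = queue.Queue()
t = threading.Thread(target=line_reader,
                     args=(["sh", "-c", "echo 1; echo 2"], lines),
                     daemon=True)
t.start()
t.join()
print(lines.get())   # → 1
```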
The Myth of Something Easy was a good talk, you should watch it. I would
have just embedded it and left it at that today, but I had already picked out a
picture.
The panorama came from my Android phone (nexus something), if you zoom in, the
cuts between frames are really jarring. It will be interesting to see how I get
on with hugin, the images I have to stitch are much larger (and maybe higher
quality) than anything my crappy phone can do. Some of the shots are out of
focus, the stitching will be really interesting to do, whenever I get around to
it.
This Killscreen article, The people trying to save programming, which I
found via lobste.rs really caught my attention. The article is about some
people who are trying to fix the way games are made; they think that software
development is too impersonal and they long for the good old days of the Apple II.
Commercial game engines are the problem.
The article is worth a read. Digging into the community around handmade
hero is interesting too, but I don't really think either of the developers
mentioned are starting a movement. To me it feels like an appeal to the desire
everyone has to understand everything, actioned by inventing the
universe first.
That is fine and all, but far too many new people get stuck in the trap of
trying to build a world before they can walk (I did). The best tools for a
beginner are the ones that let them succeed as quickly as possible. The hard
nitty gritty details can be learnt later on.
Again it is cold; the previous few years there really hasn't been any
substantial 'winter'. This year is different.
I did some work on the wireless driver yesterday, but it was entirely
refactoring. I do think I am at a point to start crashing things. I am very
happy with this sort of progress, even if it isn't really interesting. The
small steps are required for the big steps to work.
It is cold and I am hiding inside; today clearly thinks yesterday was far
too warm. The morning temperature was -5, which is nothing compared to the arctic,
but cold for somewhere people live. If I set up temperature sensors I could
make some plots, but that seems like a lot of hassle.
This article on the use of bots on github made me think of a different use
of the github api.
The first pieces of python code I pushed to github on my own account were in my
tiny-artnet micropython artnet implementation. Soon after committing that
code I started getting emails from recruiters looking to hire python
developers. They would say something along the lines of 'based on your github
activity we think you would be perfect for a job doing django'.
At first these were hilarious, micropython is nothing like python, if they had
looked at my github profile they would have seen the large C projects I work
on.
But after a few of these I started to get annoyed, clearly these people were
finding my email from code I had written or from commit logs. Why weren't they
trying a little bit harder? To me, github is the technical recruiters wet
dream, but whoever was generating the leads here clearly wasn't doing a good
job.
I don't think cold lead generation is a good way to sell anything, let alone a
job opportunity, but this is how I would use github(bitbucket, gitlab and
everything else too) to do it.
1. Search projects that have the correct language keywords (python, go, c)
2. Find any email addresses at all, sort by most recent
3. Attempt to resolve email addresses into real people
4. a) Find personal site for email address or b) (worse) find social media pages for address
5. Send generated lead info to recruiter
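The email-gathering step is the only mechanical part worth sketching. Assuming you already have text to mine (say, git log output, where newest commits come first), collecting addresses in most-recent-first order is a few lines:

```python
import re

# Deliberately loose pattern; good enough for mining commit logs.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def emails_by_recency(log_text):
    """Collect unique email addresses in first-seen order.

    git log prints newest commits first, so first-seen == most recent.
    """
    seen = []
    for match in EMAIL.finditer(log_text):
        addr = match.group(0).lower()
        if addr not in seen:
            seen.append(addr)
    return seen

log = "commit 1\nAuthor: A <a@example.com>\ncommit 2\nAuthor: B <b@example.org>\n"
print(emails_by_recency(log))   # → ['a@example.com', 'b@example.org']
```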
The human at the end needs to be able to do a final set of filters, but
anywhere that sees that as too high a cost isn't going to use the lead well
anyway. I am sure a 100 line script written along those lines would
generate substantially better leads than cold contacting any email address.
Winter is here, stepping out this morning it was -2, hopefully the start of
some nice seasonal weather with a showering of snow and not the minimum
temperature for the year.
The twitters tell me that Bunnie Huang of Hacking the Xbox, Breaking
SD Cards, The Essential Guide to Electronics in Shenzhen and a ton of
other cool things has a new book in the works. I read Hacking the Xbox
when it was released for Free after Aaron Swartz's death, the book is an
excellent read and gave me a ton of insights about electronics and breaking
physical things. The new book is in early access, which means you can read it
if you think reading tiny bits of a book is a good idea.
While on the No Starch site I looked at another early access book, Attacking
Network Protocols. The cover, which at a glance looks like a Tardigrade (it
isn't), drew me in, and the awesome title didn't hurt.
Hopefully the internet will come alive and tell me when these two books are
finished and available.
Reading: Reamde
Of course that snowy picture was taken up a mountain, but it was only about 4
degrees up there. Warmer than it seems it is going to get today.
Last night at the hacker space I finally got around to building the hardware
for my emfcamp badge powered satellite tracker. Most of the time
was spent hot gluing together foam board to make a stand for the servos. I
integrated the control code with the TCP server and the whole thing is
controllable from gpredict now.
When testing servos, knives are the recommended indicator devices.
Today I've got nothing. At my desk there are a load of started and unfinished
projects, parts for other things, kits from boldport club to be made.
Nothing that is interesting even in its started state, components to make cool
things, coolness sold separately.
At the hackerspace tonight I will try to finish my sat tracker, but
even that is a fallback project. The projects I want to have completed have
such a high bar to entry.
My idea to use the hugin stitching software to make a panorama from
some images I found on my camera seems to have hit a snag. I am convinced I
didn't have a tripod with me and took the panorama in a haphazard fashion; I
remember the area by the glacier being much, much colder than the campsite we
were staying in and I was pushed to leave.
I opened up the 8 images I had to try and stitch together and while they sort
of fall out in a reasonable order, I think it is going to take some time with the
software to get them together. Unless I find the more magic button.
I tried to use hugin to stitch together a panorama I took of a glacier,
but the binaries they offer will only run on the next version of MacOS. Really
annoying. I will give it a try tomorrow on FreeBSD, if not I will have to try
some of the gimp plugins.
ffmpeg by default aims for the lowest bitrate it can manage for a video when
encoding webm. I have been happy with this so far, but the video I grabbed of a
waterfall today does not look good in this mode. I tried changing the bitrate
options as discussed on the ffmpeg wiki; I thought I would show what you
can expect with a couple of different rates.
The original mov file generated from my camera was 21MB.
$ ffmpeg -i INPUT.mov -an output-default.webm
The ultra low bitrate (443kb/s) file that ffmpeg generates by default is 369KB.
I had a look through some of the pictures I have from my Iceland trip in
August, but it was really painful. My network drive seems to be struggling
to deliver large files over sshfs; it probably doesn't help that they are 25MB
raws.
I used darktable to crop the image, everything else I had on my machine
choked on the CR2 raw files.
As I have said already, I am trying to get control of my photo
collection. I want to have an image on almost every blog post, but before I can
do that I need to sort out the mess that is my collection. Currently I have
raws and jpegs in a directory structure, an iPhoto library and some almost
structured files.
I want to have the directory layout:
year/month/day/[raw|jpeg]
For today it would be:
2016/11/11/raw
2016/11/11/jpeg
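Computing the destination path is trivial; a sketch, using whichever date is available (EXIF capture date or file mtime):

```python
from datetime import date

def photo_dir(d, kind):
    """Map a capture date and image kind onto the year/month/day/[raw|jpeg] layout."""
    return "{:04d}/{:02d}/{:02d}/{}".format(d.year, d.month, d.day, kind)

print(photo_dir(date(2016, 11, 11), "raw"))   # → 2016/11/11/raw
```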
Before I can do that I need to extract images from iPhoto and collate
everything together. Unfortunately iPhoto on my laptop does not want to start
up at all and I suspect the App Store will want me to upgrade my OS too. I am a
hacker so this isn't a problem.
Some searching turned up exportiphoto a python program that will extract
images from your iPhoto library. Download, run:
Running this script there was some crunching, some promising output and then it
was done super fast, awesome! I sshfsed out to the storage box and started
looking around for my photos. Instead I found a bunch of empty directories, I
must have done something wrong.
Instead of poking at the script I thought I would have a look at the iPhoto app
bundle. Apps on a Mac are made up of a bundle; the bundle is just a directory
which the Finder treats in a special way. Looking into the bundle I found a
Masters directory. The Masters directory was 40GB of photos in a raw
format, most of the pictures that will be in the library.
The Masters directory has the photos stored in the correct directory
structure, so I copied that out to use as the basis for my tidy.
MacOS has lots of cool security features; by default the OS will only run
signed code. Great security has trade offs, and tonight I was hit by MacOS
restricting permissions. gdb needs to be signed before it will be allowed to
debug other programs. It manifests like this:
$ gdb -q neat-streamer
Reading symbols from neat-streamer...done.
(gdb) r
Starting program: /Users/jones/code/neat-streamer/neat-streamer
Unable to find Mach task port for process-id 13334: (os/kern) protection failure (0x2).
(please check gdb is codesigned - see taskgated(8))
Learning lldb seems like far too much work, this needs fixed. Searching
brings up stackoverflow questions, with a pointer to this guide that
explains the entire process. In general you need to create a code signing key,
sign the gdb binary and then restart the enforcement service taskgated.
The restart commands were a little harder to track down.
There are also start and stop commands, but these didn't work for me. The
troubleshooting on the guide was of no help. I even went as far as trying a
reboot, but no luck. Maybe I will try figuring out lldb.
If anyone has any idea how to get this working, I would love some help.
Politically the last few years have been really hard for me; 2014,
2015, and 2016 saw votes go completely against my expectations. This
week was also a surprise. It is easy to think I hold fringe views, that I am
all alone surrounded by fascists, but the numbers show that only about half the
electorate disagree in each of these cases.
The problem in almost all of these votes is not the right, but the inability of
the left to draw people out. The fascists have it easy, they can hold a
deplorable ideal, get rid of the immigrants and their supporters can galvanise
around the idea. The left only seems to offer the status quo.
There are two courses of action at a time like this.
1. Get some supplies and a gun, go up a hill and disconnect from the world. (I can recommend some hills)
2. Get involved and try to advance the causes you really care about.
Today, I really just want to climb a hill and start living in a cave. But that
is the easy way out, instead I am going to start helping make the world a
better place.
The ticket sales for Congress this year have been ultra fast; it has
been entertaining to watch friends fight against server crashes and load while
trying to get tickets. This year I was lucky enough to avoid that ordeal, but
it has made me think about writing bots to buy tickets. I think I would be
trying to do so if I was going through the public sale.
I have previously watched a defcon talk about buying cars using a set of
bots, I do wonder if there is a set of literature on doing this and dealing
with mitigations.
Ormiret from the hacker space created a tool to encourage him to
head out and take more photos. His random theme generator is built up from some
photo theme lists. I have been wanting to have a picture on every blog
post, for that to be feasible I have to take many more photos.
The theme tool is an awesome idea, I thought it would be more powerful if there
were exemplar images alongside the theme. There are quite a few sites that have
attracted a large number of photographers and make an excellent place to search
for images matching a theme.
I looked at both flickr and 500px, but neither of these sites have an API that
allows unauthenticated access. I really don't want to create account on these
sites just for a throwaway image search. I did spend some time looking at their
APIs but neither looked like much fun.
500px has a public search page that doesn't require auth; by using Firefox to
grab the request headers it was easy to put together a command
line search tool in an hour or so.
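The tool itself isn't reproduced here, but its shape is simple: replay the browser's headers against the search endpoint, then render whatever image URLs come back. The JSON field names below are assumptions pieced together from watching the browser, not a documented API; only the render step is sketched:

```python
def render_page(photos):
    """Build a bare-bones HTML results page from a list of photo dicts.

    The 'name' and 'image_url' fields are assumptions based on what the
    500px search endpoint appeared to return in the browser.
    """
    imgs = "\n".join(
        '<p>{}<br><img src="{}"></p>'.format(p["name"], p["image_url"])
        for p in photos)
    return "<html><body>\n{}\n</body></html>".format(imgs)

sample = [{"name": "winter", "image_url": "https://example.com/1.jpg"}]
print(render_page(sample))
```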
This script paired with the request headers exported from firefox allows you to
search for a string on 500px and generates a page with the first set of
results. Of course I had to pull out the search and send Ormiret a pull request
to add this functionality to his theme generator.
Yesterday I came across the awesome CC0 license stock image site
unsplash; both today and yesterday I have used other people's images from
that site. The images aren't of anything I have been writing about, but images
make blog posts look a ton better.
I think I am going to continue to try and have an image on every blog post, even
if it is just to give them some colour. There really is nothing stopping me
from using my own pictures of awesome places.
This awesome picture of the Skogarfoss Waterfall (which I got from
unsplash) is really strange to me. I was in Iceland in August and I visited
that exact waterfall and while I have used someone else's stock photo there is
nothing stopping me from using my own picture of the exact same feature.
I will attack my photo collection in the next few days and try to build up a
bank of images to use. I think I want to have images sorted so I can match them
against the tags on blog posts. My approach to tagging is very haphazard, I
will probably make groups like:
This has been a hard morning, the weather is extra foul and heading into a real
winter, and I sense a cold coming on. On HN today there is the curated CC0
image site unsplash, I have co-opted a giant night sky image from there for
today's post.
I have been thinking for a while that every post really should have an image,
ideally I would take a load of great photos and add a sort of relevant one to
each post as I go. In lieu of that happening I might integrate something like
unsplash and one of their great collections.
https://unsplash.it/200/300/?random
They do have a service which provides random images with a url, others have
used it to make 404 pages more interesting.
The changes to the keyboard, awkwardness of touch screens and position of the
board are common points of complaint. I think keyboards have too many keys; I
type on a planck which only has 40 keys. I love this keyboard. I think if
you look at a layout map of the planck you will realise that you can get used
to very strange layouts.
An awesome, long screen like that is a great addition to the standard laptop
layout. Having a way to access context while not obstructing the main display
is awesome. The hardware as implemented by Apple is a problem; if it really is
running WatchOS then no other OS will ever work with it, but that doesn't stop
other manufacturers doing it with a sensible hardware link.
I love the idea of having a secondary display built into my laptop. Look at the
awesomeness @jcs managed with the RGB bar on the chromebook pixel, that bar is
only RGB, a full colour display could do so much more.
My OpenBSD driver for the Chrome EC supports userland access, so now the lightbar can blink red whenever pf blocks a packet. pic.twitter.com/1wnwGOFaPq
Screens on keyboards aren't new; there have been gaming keyboards with
screens for a long time. Apple might not be the first to try this on a laptop,
but will probably be the first to succeed with the idea. I suspect Apple will
have the first implementation that sees real adoption in the secondary screen
peripheral space.
The Starbucks I am sat in right now is the model of the modern internet cafe.
There is coffee, free WiFi, chairs(!), they are happy for you to sit there all
day if you order an over priced drink every so often. And other than me there
are people in here using laptops, they might even be working.
In the 90's an internet cafe was a different thing; there might have been
coffee and drinks, but the main feature that drew people in was the rows and
rows of computers. Laptops had weedy specs and were really over priced. Many
people probably visited just to use the computers, it might have been their
only way to get online.
Internet cafes did not last in the west; the pc market had to make laptops
affordable to live. With disposable income and infrastructure that had to appear
to be world leading, it quickly became expected to have a computer at home.
There is an impression in the western mindset, driven by the media, that
internet cafes are still a big thing in poorer parts of the world. If you show
a user in India or China using a computer from an internet cafe no one will bat
an eye. Both For the Win and Reamde feature Gold Farmers playing
MMO's from internet cafes.
Unfortunately Internet cafes aren't a myth, there are still many places you
can find desktop computers set up for general public access. University
computer rooms, public libraries, airports and hotel lobbies are some common
culprits. As in the 90's and 2000's public machines are a security nightmare.
You can never be safe using someone else's computer, that is why the cloud is
such a joke. General public machines are a potential goldmine to a malicious
actor and maybe worse, are a breeding ground for malware that will be around
even when the host isn't actively malicious.
== Can We Build An Internet Cafe in 2016? ==
People are going to use them no matter what; can we build something that is
reasonably safe for a user? I think we first have to assume that the machines
we are going
to use are not actively malicious, there is very little we can do to stop
someone that is actively coming after you. Active attacks are rare, most people
are only targeted when they stand out from the crowd.
I think there are two ways we can do this:
1. User provides the computing and storage
In this case the user has their own computing power, but they need access to a
larger screen and more capable peripherals. The venue operator just has to
provide a standard interface, let's say HDMI ports on large monitors, and the
keyboard and mouse.
You could carry some sort of HDMI stick pc, a raspberry pi, or something
else. This idea is the basis of Ubuntu's Convergence computing; the phone
you carry around all day is already a capable enough computer. With a little
hardware to connect a screen, keyboard and mouse, the convergence device goes
from phone OS to full desktop OS.
The convergence idea is really interesting, but Ubuntu is starting it up very
slowly. One day soon, hopefully.
2. User provides storage
The second idea is that the venue provides normal desktop computers of some
sort we would expect, but they don't have a hard drive or operating system
installed.
Instead the user brings a bootable USB stick with a proactively secure
operating system like tails installed. The user is able to take the USB
stick wherever they go and maintain a session between boots.
This is possible now.
Reading: Abaddon's Gate, Reamde
The subtitle text for Neal Stevenson's website is excellent
We don't have candles lying around the lab, but I wasn't going to let that stop
me. I made one using an arduino mega, the single ws2812 neopixel led I could
find and some diffuser that was lying around. It was really hard to capture on
my phone, but the flicker effect I found on github works really well.
gpredict is a piece of software for tracking things in orbit; sometimes you
want to automatically point things at stuff in orbit. To get things pointed at
stuff in orbit we can use a rotator controller, and gpredict, as a piece of radio
software, has an antenna rotator controller built in. The gpredict rotator
controller expects to speak to something over TCP.
I have not been able to find documentation for the protocol (I didn't look very
hard), so I thought it would be fun to reverse engineer the protocol and write a
simple daemon. Earlier I took some first steps to see what gpredict was
doing on the network.
If you want to play along at home this is what I am going to do:
1. set up a dummy daemon using netcat (nc -l localhost 4533)
2. use tcpdump with -XX to watch all traffic (e.g. tcpdump -XX -ilo0 tcp and port 4533)
3. send data from gpredict to the daemon (hit the 'engage' button on the antenna control screen)
4. play with responses (type into the console running nc)
When I press the 'engage' button, gpredict sends a single lower case 'p'. If
I press enter, sending a blank line, gpredict responds with a capital 'P' and
two numbers. To me these numbers look like an Az El pair; they correspond to
the values on the antenna control screen in gpredict. No need for tcpdump
this time.
We have source available
With only one half of the network protocol to look at, we can't get very far.
gpredict is open source and there is a github mirror where we can browse
the source tree. The file names in the 'src' directory show some promising
results:
The pref and conf files are probably configuration stuff; I have no idea what
is in the knob file, but the gtk-rot-ctrl set of files is what we want. I
confirmed this by picking a string in the UI of the relevant screen and
grepping through the code for it. This can be troublesome if the software is
heavily localised, but in this case I could track down the 'Engage' button to a
comment in the code.
There are two functions used for network traffic, send is used to send data
into a tcp connection, recv is used to receive data from a TCP connection. If
we can find these in the code, we find where the software is generating network
traffic. Normally only a starting point, it is very common to wrap these two
functions into other convenience functions.
A grep through the code brings up a send call in
send_rotctld_command. More grepping and we find that
send_rotctld_command is called from two places: the get_pos function
(which I have to guess asks for the rotator's position) and the set_pos
function (which must try to set the rotator's position).
The get_pos function fills a format string with "p\x0a" and uses
send_rotctld_command to send it. Looking up 0x0A in an ascii table shows it is
Line Feed (LF), also known as a newline on unix systems. It splits buffback on
newlines using g_strsplit, looking to find two floating point numbers to
use as azimuth and elevation, one on each line.
This piece of code shows up something really important, gpredict is using a
single function to both send a command and gather the response from the remote
end. If we look at send_rotctld_command, recv is called right
after a send. Here we can see that gpredict only does a single recv to
gather responses; it is expecting a reply that fits into a single read. This is
a bug, but probably not one that really matters.
The set_pos function fills up a format string with a capital 'P', and two
floating point numbers. It doesn't do any parsing of the response, only looking
at the error code from the socket call.
With this little bit of analysis we have enough to write an antenna control
daemon that gpredict can speak to. The rotator control protocol has two
simple commands: a position query which expects the current az/el across
separate lines, and a position setter, which expects no response.
#!/usr/bin/env python
import socket

TCP_IP = '127.0.0.1'
TCP_PORT = 4533
BUFFER_SIZE = 100

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((TCP_IP, TCP_PORT))
s.listen(1)

conn, addr = s.accept()
print('Connection address:', addr)

az = 0.0
el = 0.0
while True:
    data = conn.recv(BUFFER_SIZE).decode('ascii')
    if not data:
        break
    print("received data:", data)
    if data == "p\n":
        print("pos query at az:{} el:{}".format(az, el))
        response = "{}\n{}\n".format(az, el)
        print("responding with:\n{}".format(response))
        conn.send(response.encode('ascii'))
    elif data.startswith("P "):
        values = data.split(" ")
        print(values)
        az = float(values[1])
        el = float(values[2])
        print("moving to az:{} el:{}".format(az, el))
        conn.send(b" ")
    elif data == "q\n":
        print("close command, shutting down")
        conn.close()
        exit()
    else:
        print("unknown command, closing socket")
        conn.close()
        exit()
Using the python TCP server example as a starting point it is easy to put
together a daemon that will listen to the rotator controller. The code should
be pretty straightforward to read; we process the commands documented earlier.
There is one addition that I didn't see in the code at first: a quit
command that does not use the normal wrapper and instead uses send directly.
This command was easy to handle.
This is how I approach network problems, whether in code I have written or code
that is completely new to me. Hopefully if you have been following along at
home the example above is straightforward to read.
gpredict is able to signal a rotator controller over TCP. This is awesome, I
want to track satellites and I am not going to pay for a rotator controller. I
am going to build something to get my antenna pointed, using servos and a wifi
microcontroller board.
I have tried searching a few times, but like everything in amateur radio, hard
facts are hard to come by; the scam artists and windows developers protecting
their sacred lore abound. I was at a bit of a loss until I thought to try seeing
what gpredict spits out over the network.
First I created a test rotator in the gpredict settings:
Then I dug around until I found the rotator control panel, named antenna
control. In this panel there is a 'track' button and an 'engage' button;
figuring engage was the option to manually set the rotator I hit that.
After a short pause a helpful 'ERROR' popped up under the Az/El settings. Good
progress. Next I started up nc pretending to be a listening rotator
controller so I could see what gpredict was sending.
$ nc -l 0.0.0.0 4533
p
P 180.00 45.00
p
P 180.00 45.00
p
P 180.00 45.00
p
P 180.00 45.00
p
P 180.00 45.00
This output is great, nc is just outputting the bytes sent down the tcp
connection. It seems that gpredict sends a letter 'p'; I replied with a blank
line by hitting enter, which resulted in a capital 'P' and an Az and El. Some
guess work interpretation suggests gpredict is asking for our position with
'p', then giving us a position to move to with 'P'.
This is a great start, next I will have a look through the gpredict source
to see what it is doing. I will start with the 'engage' button from
gtk-rot-ctrl.c.
Yesterday I posted a stupidly large gif of some pumpkins flashing
different colours. Then a couple of minutes later after suffering through the
upload I replaced it with a teeny webm.
ffmpeg now has excellent webm support, for me it will happily pick it up with
automatic detection. To make videos like the one above I use ffmpeg and let it
choose its own options. I drop the audio from the video to give it a nicer gif
like effect, I think from today I will tell browsers to autoplay the video
snippets so they look like awesome gifs.
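The conversion is a one-liner; -an drops the audio track and everything else is left to ffmpeg's defaults (the file names here are examples, not the real ones):

```shell
# Convert a clip to webm, dropping the audio track; ffmpeg picks the codec
ffmpeg -i pumpkins.mp4 -an pumpkins.webm
```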
On Sunday I thought I would try to extract images from a TCP Flow with
scapy. Initial searches showed how to do this with wireshark, but the
idea is to do this programmatically and try to learn something about scapy. At
the hackerspace tonight, not wanting to work on anything I said I would, I
thought I would have a play with scapy.
A little bit on TCP
A TCP Flow is how we refer to the stream of packets that make up what you might
call a TCP connection. The connection bit is just the start. A Flow is defined in
IP by 5 numbers:
* protocol (TCP or UDP for the most part)
* source
- address (ip address of the initiator of the connection)
- port (normally a randomly chosen ephemeral port number)
* destination
- address (ip address of the host)
- port (normally a well known service number, http is 80)
A lot of the time we call this the 5 tuple, or 4 tuple if we know the protocol. At
any one point in time a given 5-tuple defines the connection (Flow).
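Sorting captured packets into flows is then just a dictionary keyed on the 5-tuple. A sketch, using a plain dict as a stand-in for a real packet object:

```python
from collections import defaultdict

def flow_key(pkt):
    # The 5-tuple: protocol, source address/port, destination address/port
    return (pkt["proto"], pkt["src"], pkt["sport"], pkt["dst"], pkt["dport"])

def group_flows(packets):
    # Bucket every packet under its flow's 5-tuple
    flows = defaultdict(list)
    for pkt in packets:
        flows[flow_key(pkt)].append(pkt)
    return flows
```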
To a programmer TCP presents a reliable byte-oriented stream interface. This
means any bytes we write into our TCP socket will come out of the other end in
order, and they are guaranteed to arrive (or an error is generated).
Data written into a TCP socket is broken into chunks the network can support
(normally, without fragmentation); we call these chunks of data segments. Each
segment has a sequence number, which tells the remote end where it is in the
stream. There can be a large number of these segments in the air at a time, the
flight size.
Segments can get lost in the network (well, dropped by routers), reordered or
delayed.
Extracting Images
To extract images from a network capture we need to separate out the packets
into flows; reassemble each flow's segments into a coherent byte stream, taking
loss and reordering into account; then search the byte stream for image headers
and try to extract them.
This is a non-trivial amount of work for a Tuesday night.
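The reassembly step can at least be sketched in a few lines: order segments by sequence number and let retransmits overwrite what is already there. A toy version (ignores sequence number wrap-around, leaves lost segments as zero bytes):

```python
def reassemble(segments, isn=0):
    # segments: iterable of (seq, payload) pairs, possibly out of order
    # or duplicated; isn is the initial sequence number of the stream.
    stream = bytearray()
    for seq, payload in sorted(segments):
        offset = seq - isn
        end = offset + len(payload)
        if end > len(stream):
            stream.extend(b"\x00" * (end - len(stream)))  # gap = lost segment
        stream[offset:end] = payload  # retransmissions simply overwrite
    return bytes(stream)
```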
Before writing any code I did some searching: scapy might have support for flow
reconstruction (nope). I came across some references to a tool called tcpflow;
tcpflow claims to be a tool for extracting TCP Flows from either a live
capture interface or a pcap file.
That looked great to me, I would grab a pcap with tcpdump, process out the
flows with tcpflow and then drop that into scapy and start looking for some
images.
Reading the tcpflow man page I instead found a single option that would do
all the work for me.
Images with tcpflow
It is really easy to extract images from a http TCP Flow using tcpflow, you
can do this live, but I used a pcap file.
# tcpdump -w webimage.pcap host adventurist.me and port 80
tcpflow also seems to spit out a report.xml describing
what it has just done. I imagine that is super useful when running tcpflow
against a live capture. I haven't managed to get very far using scapy to pull
images out of flows; I am starting to wonder if there is really any point when
all these tools are available.
Last night was All Hallows' Eve, I wanted to do something cool with the
decorations. I repurposed an rgb neopixel board driven by a nodemcu board and
gave one of our pumpkins a network controlled candle instead of the old analog
kind.
I also spent some time building out a motion sensor, but I wasn't able to
integrate that with the network code in time to use it. In the end the weather
seems to have kept everyone at home and we didn't have any visitors.
I am going to try and get everything together tonight at the hackerspace,
if I do I will write up what all the parts are.
While researching extracting images with scapy I found a page describing
image extraction with Wireshark, I am not sure why I didn't think to try
this first. Of course Wireshark can do this super useful network task, their
mission is to make the ultimate network diagnostic tool.
The information on that page seems to be a little out of date, on my Wireshark
build the PDU tracing and http follow options were already selected.
Grab a dump of a http session, then feed it into Wireshark:
# tcpdump -w webimage.pcap host adventurist.me and port 80
I visited this page, which I know has an image on it, in Firefox's porn mode.
http.response.code==200
In Wireshark I used a http 200 response code to find all of the assets in the
stream. This left only three items, the page itself, the css style sheet and
the image. Expand out the TCP block in Wireshark, right click on the JPEG block
and choose 'Export Packet Bytes'. I saved this as .bin, renamed it to .jpeg and
was able to open the image.
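A quick way to sanity check the exported bytes before renaming anything is the JPEG magic number. A little helper of my own, not part of Wireshark:

```python
def looks_like_jpeg(path):
    # JPEG files begin with the SOI marker FF D8 followed by FF
    with open(path, "rb") as f:
        return f.read(3) == b"\xff\xd8\xff"
```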
Packet capture tools are oscilloscopes to network programmers; I couldn't get
anything done without near continual use of tcpdump and wireshark. In a
pinch tcpdump can be used instead of writing server code.
Wireshark has support for a load of protocols and can really help with
debugging. Recently I added dtls support to NEAT. DTLS is a protocol
enhancement to TLS to support datagram traffic; when it is working all of the
traffic is basically random noise.
I had trouble getting server certs to work correctly with DTLS, thankfully
Wireshark can reassemble the datagrams into a coherent certificate and
export the data out to a file. I can use this to manually check the cert is
being sent correctly.
The process is something like this:
1. Import pcap
2. Find the full reassembled server hello
3. Expand the DTLS body
4. Expand the DTLS Record, Certificate (Reassembled)
5. Right click on 'Handshake Protocol: Certificate(Reassembled)'
6. Select Export Packet Bytes
After that I had a TLS Cert in DER format; DER is just the raw cert bytes.
With this I could then verify using openssl that the cert chain was valid.
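The inspection step looks something like this. The file names are made up, and I generate a throwaway self-signed cert here to stand in for the bytes exported from Wireshark:

```shell
# Throwaway self-signed cert standing in for the one Wireshark exported
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout /tmp/key.pem -out /tmp/cert.pem -subj "/CN=dtls.test"
# "Export Packet Bytes" gives DER, so convert the PEM to DER to match
openssl x509 -in /tmp/cert.pem -outform DER -out /tmp/cert.der
# Read the raw DER bytes back and print the subject
openssl x509 -inform DER -in /tmp/cert.der -noout -subject
```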
Went to a friend's and carved some pumpkins last night, which means I didn't
manage to do anything interesting yesterday. Weekends are when I make
coffee; Sunday is filtering day, which looks something like this:
I have to run out to meet someone for lunch, tonight I am going to have a play
with Scapy. I think I will try to pull an image out of a http stream, that
seems like a small enough task to be doable.
Maybe because there is an election on or maybe just because I wanted a use for
my new stream 7 tablet thing, I read through all of Transmetropolitan.
Transmet (as I am told the cool kids call it) is a Cyberpunk comic book series
written by Warren Ellis, featuring a Gonzo journalist reporting on an Election
from 'The City'.
I am a huge fan of Gonzo as written by Hunter S. Thompson, but Hunter is long
dead and this has limited his journalistic output severely. So here I have a
problem: I would be very happy to read more high quality pieces in the Gonzo
style, but finding such writing has been an absolute nightmare.
Here is a list of people I know writing great stuff:
Barrett Brown has been writing from prison, his stay is nearly up, it seems they want to kick him out for some reason.
I might have to look harder.
Reading: ELEKTROGRAD
I couldn't finish Little Brother, it became too YA and it just annoyed me.
I did read all of it when it came out so I am not that bothered.
I finished Hacktoberfest last night!!!! Hacktoberfest is a month-long
hackathon thing run by DigitalOcean and github; in exchange for some open
source Pull Requests DO will send you stickers and a tshirt. I tried to do this
last year, but found it is really hard to do small commits against projects. I
only managed 1 commit, but DO still sent me a sticker.
This year I was determined to manage the 4 commits required to get a tshirt;
silly me, I thought that working on an open source github hosted project
for $work would make that easy. Instead I really struggled to manage the
four PRs, and only got two via the work project; small commits are hard things
to find.
For the other two pull requests I looked at open source software.
gpredict is a cross platform open source satellite tracker, I have used it
for following amateur satellites. gpredict has always been super buggy for me;
the current packaged build for FreeBSD dumped core when I tried to open the
'sat info' screen. Firing up gpredict with debug symbols within gdb made
it really easy to find the use after free that was the culprit.
There were a pile of issues like this; I ran the build through llvm's
scan-build tool and it showed up a bunch of potential bugs. They too went
into the PR for gpredict.
Last night an email came from DO stating there was still time to get the
necessary PRs in. Dern, I had only managed three of the four pull requests so
far.
Kaitai Struct is an awesome project for generating code from binary
formats, it is a compiler, a visualizer and a declarative language. There is a
set of example formats of images, games, media, compression and network
packets. I noticed that UDP was missing from the network set and shamelessly
added it.
Osmocom can do 3G voice! Look at this awesome article about the new
support, it builds on this equally awesome article that gives a status
update on the 3G stack. This is excellent news, as we move through LTE into
whatever the 5G tech will be called, the open source community is starting
to catch up with commercial hardware.
I cannot draw lines like that, I can draw lines like this:
+--------+
> |
-/| |
/ | |
/ +--------+
-/
-----/
DrawIt, the vim plugin I use for ascii boxes and lines, just can't do those
amazing curved lines. I bet it is an emacs plugin or something else I can't use
making those awesome lines. Man am I jealous.
For my silly little tablet I got this awesome usb otg hub thing. It has 3 usb
ports, a microusb hole and an otg cable; you can use it to connect 3 devices to
your phone or tablet and power them all at the same time.
I got this thing so I could install something other than windows on my stream
7, to do that I need power, usb storage, usb networking and io stuff all at
once.
It also comes with the most mental instructions I have seen. I am trying to
figure out what it says, but man, who knows. I think there was a deal on
3-position switches and they put one in instead of a 2-position one.
Last night, I drilled some holes in a book case and bricked a pi. That isn't so
interesting, unless you really like holes in wood, and it leaves me at a loss
what to write about.
Okay, fine. I have this awesome eink screen for the pi, I got it to do
something like this tide clock. I don't want single purpose things lying
around; the same pi is going to be running mpd, my music player of choice.
It will be using the screen to show cool effects (like the thing on it now) and
probably stats about things.
What things? I have no idea. Maybe:
* bus times
* output from the house sensors
* whats playing
* network uptime
See, it isn't really fleshed out yet. I do have all the code to write stuff to
the screen, it took ages to get working using python, cairo and pango. Now I
have holes drilled and audio cables routed through the book case, I need to get
the pi up and doing music.
Yesterday featured a massive DDoS attack against Dyn DNS.
For me, in the north of Scotland, this meant an entire shutdown of the web. ssh
and mosh connections stayed up, but everything from twitter to google was
unreachable.
Name discovery in decentralised networks is a really hard problem, I am not
aware of any really solid solutions. There is probably a large capitalist
factor involved here, you really can't centralise profits from a decentralised
system.
I spent some time reading about name systems for adhoc mesh networks, before I
gave up on trying to build this out. It is hard and would require a load of
other people to test.
mdns is probably already running on your local network, it won't scale well and
certainly not to internet sizes. namecoin is something I am just sort of aware
of, I think worry of blockchain buzzword bingo has stopped me looking too hard.
I would love to know about more interesting and diverse systems, if you know of
any drop me a line.
I have to ssh proxy to get to my main machine, everything is filtered on the
network my machine is on, apart from the ssh access box. This makes using mosh
a little troublesome.
dev can only be reached via an ssh proxy, but thankfully there is an open UDP
port range that works. Mosh seems to have trouble figuring out the correct
ip/port pair to select in this setup, but mosh is quite simple so it is easy to
deal with.
Host dev
Hostname dev.domain.tld
User tj
ProxyCommand ssh -q gateway.domain.tld nc -w 30 %h 22
The mosh command is just a shell script; it sshs to the remote machine and
runs mosh-server. mosh-server generates an AES session key and starts the
server process on the machine. mosh-client takes the session key via an
environment variable, plus the ip address and port the server is listening on.
With that we can run mosh by hand:
[laptop] $ ssh dev
[dev] $ mosh-server
setsockopt( IP_RECVTOS ): Invalid argument
MOSH CONNECT 40001 pv2jeN0MJ1N4gCd1V0i21g
mosh-server (mosh 1.2.5) [build mosh 1.2.5]
Copyright 2012 Keith Winstein <mosh-devel@mit.edu>
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
[mosh-server detached, pid = 19100]
Warning: termios IUTF8 flag not defined.
Character-erase of multibyte character sequence
probably does not work properly on this platform.
[dev] $ exit
[laptop] $ export MOSH_KEY="pv2jeN0MJ1N4gCd1V0i21g"
[laptop] $ mosh-client 143.100.67.5 40001
Once you know how to do mosh by hand there are other things we can try. I don't
think it would be impossible to work around certain types of NAT using nc. It
requires a third party box, but a lot of STUN can be done with just UDP
packets.
Reading: Little Brother, Transmetropolitan
I am sure I have written this down before, but google couldn't find it.
I spent last night working on the mt7610 driver, and by that I mean I was
reading the open linux source trying to work through its general insanity.
The register access stuff I found isn't really meaty enough to write about.
@Famicoman is attempting to create a full archive of the Cypherpunks
mailing list. I tried to read the mailing list last year and made my own
copy of an archive. My copy has been added to the github repo that is trying
to capture this.
I am struggling for something to write today. I spent last night working on the
second stage of a reverse engineering project, but I haven't made much
progress yet and there isn't anything to show. Windows tools seem determined to
be as alien as possible to use.
I had a look through my browser tabs; I still have what I consider the
canonical bytebeat reference open. bytebeat is a sort of code golf
based algorithmic music generation; the tiny snippets of code can manage to
create some awesome sounds.
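The idea fits in a few lines: a sample counter t goes in, the low eight bits of some expression come out as audio. A sketch of mine; the formula is an arbitrary example, not a known piece:

```python
def bytebeat(formula, n_samples=8000):
    # One byte of 8kHz unsigned audio per tick of t
    return bytes(formula(t) & 0xFF for t in range(n_samples))

# An arbitrary expression in t; tweak the shifts and masks to taste
samples = bytebeat(lambda t: t * ((t >> 12 | t >> 8) & 63 & t >> 4))
```

The raw bytes can be played back with something like sox: play -t raw -r 8000 -e unsigned-integer -b 1 -c 1 beat.raw.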
There are quite a few people working on audio from crazy systems. Captain
Credible's excellent album Dead-Cats is generated with an attiny85. I
have a Blooper Eel mini synth kit from him that I have toyed with a ton at
my desk.
And this is just the start of the rabbit hole, if you want to go up a level you
should read the excellent noisepedals blog.
I seem to have a knack for finding the hardest problems to start with. Anyway I
thought I would have a look at doing some android reverse engineering on a
local transit app.
First you will need to get the apk application bundle for the app you want to
have a look at. If you have the app installed on your phone this is really easy
to do with adb.
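With a device plugged in, the pull goes something like this (the package name here is a placeholder, not the real app):

```shell
# Find the package name of the app (requires a connected device)
adb shell pm list packages | grep transit
# Ask the package manager where the apk lives
adb shell pm path com.example.transitapp
# Pull it down; the path comes from the previous command
adb pull /data/app/com.example.transitapp-1/base.apk
```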
Now you will have the app's apk as base.apk; feed it to jadx. jadx is
a dex to java decompiler with a pretty gui and the ability to deobfuscate code.
When you fire up jadx with the apk you will get a complete break down of the
apk bundle and decompiled classes.
At this point you should see the decompiled classes, but as I said I am great at
picking hard targets. There is some decompiled java here, but there are also
mono packages and a load of dlls shipped in the assemblies directory.
To get further with this I shall have to find a c# decompiler; they seem to be
quite common.
Today has been a very slow start; most of yesterday was spent drinking, watching
shows and playing with radios. I wanted to post a gif from twitter, but my brain
isn't working well enough to figure out how on earth to get hold of one:
I can embed a tweet here using the code twitter gives me, but the media preview
doesn't seem to work. There aren't any errors in the console or in the network
debugger in firefox.
Weather is horrible again; looks like we are getting the tail end of some
dramatic weather.
Hibby and I planned to try some more line of sight microwave, but neither
of us fancied climbing a hill in this storm. Instead we did a bit of hf from my
QTH. The radio power meter looks mental when doing Hell.
It is raining so hard I can hear it over my music and the rumble of the bus. It
is raining in the book I am reading. Completely unconnected events, but humans
have this thing for making patterns where they don't exist.
In this book over-centralisation leads to a complete media blackout.
Decentralisation is a core ethical tenet of mine, so of course I enjoy the
collapse of the media in the story.
But what can you do about centralisation?
Until the singularity you are going to be stuck as a centralised human being. I
know it sucks, but one day we will be able to move past this.
The indieweb movement has great advice for getting started. The
biggest single step you can take to decentralise yourself on the internet is to
have another machine to represent you.
Once you have a VPS running somewhere in the internet, you have access to a
constantly running, near permanent version of yourself.
I was pretty much dead yesterday, I didn't do anything interesting.
I signed up for an Offensive Security Newsletter from Phobos Group.
I don't normally take corporate output directly, but the people behind Phobos
have a track record of doing awesome things. The first issue appeared today;
certainly worth a read.
I have been thinking about adding more automation into my...I dunno, life? This
morning I was thinking about using post tags to automatically cross post to
reddit. I think that might work well for hacking, and radio
definitely has a home in the ham subreddits.
I will automate everything to go out the twitter hole, and I would like to do
the tag thing for irc channels too. That might be a bit insane and self
promotional though.
Damn, today has been a hard fucking start up sequence (slow starts punk
brother). TCP jokes are the best, if you don't get them we can keep
retrying until you do.
Possibly the most unbelievable thing about Star Trek is how different alien
civilizations maintain cross-compatible video calling software.
It's a funny joke. Current humans are still competing in the name of
capitalism; there is little to no incentive to build interoperable systems when
you can control a market sector. Of course no one actually can, but that
hasn't made facetime available on android.
Rants aside; We are going to solve this set of problems with automation,
machine learning and AI. Here is a great talk on transport layer
improvements, it talks about machine learning approaches to optimise
delay/bandwidth for live streaming video connections.
It is entirely feasible that we could run similar approaches to coordinate
video communication, especially if we are a civilisation that spends all of its
time exploring and finding new people to speak to. Automate the boring stuff,
you know?
Reading: Little Brother, Transmet
The BBC have an excellent rendition of Burning Chrome by William Gibson. I
am sure a neighbour will help you out if you are geographically impaired.
On Sunday I set up some quick and dirty temperature monitoring. At that
point I didn't have any server code lying around to receive the readings from
the sensors, so I set up tcpdump on a fileserver to capture the packets. tcpdump
has the benefit of logging a timestamp with each packet, helping me get around
limitations of the nodemcu hardware.
A day later I had to try and process the pcap files.
The -A flag for tcpdump will show me the packet payload as ascii; I was pushing
json from the sensors so this is rather easy to see. I could use some shell
magic to pull this out, but I wanted to play with scapy.
Scapy is a python library for dealing with packets; it does everything
tcpdump does, with packet injection to boot. Scapy will happily take in the
pcap files.
#!/usr/bin/env python
from scapy.all import rdpcap
import json

if __name__ == "__main__":
    pcapfiles = ["temperaturevalues.pcap-1", "temperaturevalues.pcap-2"]
    for filename in pcapfiles:
        pkts = rdpcap(filename)
        for p in pkts:
            time = p.time
            readings = json.loads(p.load)
            print("%s,%s,%s,%s,%s,%s,%s" % (
                time,
                readings[0]["sensor"], readings[0]["temp"], readings[0]["humidity"],
                readings[1]["sensor"], readings[1]["temp"], readings[1]["humidity"],
            ))
Running
$ python process.py > readings.csv
Gives me a csv file with the temperature and humidity data from the sensors.
Feeding this to gnuplot with something like the below results in a nice (albeit
noisy) plot of the temperature from the two sensors.
set datafile sep ','
set timefmt "%s"
set format x "%m/%d/%Y %H:%M:%S"
set xdata time
set terminal png size 3000,500
set output 'data.png'
plot 'readings.csv' using 1:3 with lines, 'readings.csv' using 1:6 with lines
Fresh off a great weekend at the RSGBConvention, my good friend hibby was
talking about doing point to point line of sight links at 400MHz and up. He
is super eager to do giant 50Km links and was suggesting hills to climb at the
weekend.
I thought maybe we could try something a little easier to debug when it doesn't
work. We settled on trying point to point between my house and something on the
other side of the valley.
We did some local tests and I was able to hear clear audio out to about 500m. At
that distance we ran out of road to walk down. I can see the Newhills Parish
Church from a rear window of my house; it is probably a little under a mile
away line of sight.
Hibby headed out there while I set up the yagi; we used 70cm as a return channel
as the portapack can't transmit with the current firmware.
We ended up using the rad1o badge from cccamp last year as a 2.4GHz
transmitter and a wifi yagi I had lying around. We played with settings for a
while and eventually figured out the right combination to do WFM
voice!
Next we need to find a pair of points with los that are far enough apart to
test range.
I ordered a handful of the cheapest nodemcu boards I could find from ebay. A
couple of weeks later I got a nodemcu 'like' board from a company called
AI-THINKER. The boards have the following instructions written on the back of them:
1. Install CH340G driver.
2. Use 9600bps baud rate.
3. Connect to WiFi.
I tried playing with two of the boards; powering them up and searching for wifi
networks showed networks with names like:
AI-THINKER_238810
AI-THINKER_23A9BF
Connecting to the wifi was fine, but I didn't really know what they expected me
to do. nmap'ing the device gave no results and an hour of googling didn't really
show up anything. Connecting over serial resulted in some noise then nothing.
I was going to flash micropython anyway, so let's do that.
Flash micropython
Connecting to the nodemcu board over serial spits out some gibberish no matter
the baud rate I pick.
$ sudo cu -l /dev/ttyU1 -s 76800
Connected
Sd3²ì{£P:ýCê
ets Jan 8 2013,rst cause:2, boot mode:(3,6)
load 0x40100000, len 1856, room 16
tail 0
chksum 0x63
load 0x3ffe8000, len 776, room 8
tail 0
chksum 0x02
load 0x3ffe8310, len 552, room 8
tail 0
chksum 0x79
csum 0x79
2nd boot version : 1.5
SPI Speed : 40MHz
SPI Mode : DIO
SPI Flash Size & Map: 8Mbit(512KB+512KB)
jump to run user1 @ 1000
êñ+Pr-r+§(r
SD«¢hJëÙ-$xùÊkPx\)§k ¢ÀjtNü
Some time with a scope reveals the board is starting up at one baud rate then
switching to another. The rate switch means esptool is unable to do
automatic baud rate detection.
With that we can flash the boards:
erase the flash
esptool.py --port /dev/tty.wchusbserial1420 erase_flash
flash the image
esptool.py --port /dev/tty.wchusbserial1420 --baud 76800 write_flash --flash_size=8m 0 esp8266-2016-05-03-v1.8.bin
reset the board
cu -l /dev/tty.wchusbserial1420 -s 115200
MicroPython v1.8.2 on 2016-08-05; ESP Module with ESP8266
Type "help()" for more information.
>>>
I read this excellent post by Simone Margaritelli on hacking a
network connected coffee machine. Simone reverse engineered the Android app
that controls the coffee machine and wrote a command line tool for getting the
machine going.
Simone took a completely different angle to solving the problem than I would.
Being a network person I would have gone straight to tcpdump, grabbed some
traces from the app/coffee machine and worked from that.
Instead Simone used a tool to dump a disassembly of the Android apk. I haven't
done that before, I don't think it would be my first thought when I had to take
something apart. From this post I think I might give it a shot on the local bus
app.
The coffee machine looks awesome, you might not want an internet connected
coffee machine, but I think it is an awesome idea. Coffee is a great reward for
solving a problem, the machine could automate teaching people how to reverse
network protocols.
The tortoise needs an improved heating setup; it now has a 'night time' bulb that
just puts out heat. Before I change anything I want to have numbers so I can
try and quantify the change.
I knocked up a micropython script and ran it on a nodemcu board with a couple
of dht11's. It looks like this:
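The script itself didn't make it into the post; here is a rough MicroPython sketch of the idea. The pin numbers, server address, port and two-sensor layout are my guesses, not the original setup:

```python
# MicroPython: read two DHT11s and push JSON readings over UDP
import dht
import json
import machine
import socket
import time

SERVER = ("192.168.1.10", 6969)  # wherever tcpdump is listening
sensors = [dht.DHT11(machine.Pin(4)), dht.DHT11(machine.Pin(5))]
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

while True:
    readings = []
    for n, s in enumerate(sensors):
        s.measure()  # trigger a fresh reading on the sensor
        readings.append({"sensor": n, "temp": s.temperature(),
                         "humidity": s.humidity()})
    sock.sendto(json.dumps(readings).encode(), SERVER)
    time.sleep(30)
```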
It doesn't have to live for long, just a day or two.
The always on machine on my network doesn't seem to have anything useful
installed, and without internet at home that wasn't going to be a simple fix.
Instead I used tcpdump to capture the json packets.
Tcpdump works really well in this situation: the micropython board doesn't have
a RTC, but the pcap from tcpdump will have accurate timestamps for each packet. I
did something like:
$ tcpdump -w tempreadings.pcap udp and port 6969
Later I can process this out with a shell script or scapy or something.
128g of coarse ground coffee (I guess 125g is okay, if you aren't cool)
1L Vessel (I use a nalgene)
1L of potable water
Fridge
v60
Jug
Method:
Put the ground coffee in the vessel.
Fill the vessel with cold water
Place vessel in fridge
I use tap water because I live in a place with excellent drinking water. If
that isn't the case for you, you will have to figure something else out. Make
sure the grounds are well soaked, they will swell. I give it a good shake then
add a little more water to make sure the nalgene is good and full.
After about a day take the nalgene out of the fridge.
Pour the coffee/concentrate blend into the jug.
Clean the nalgene.
Using the v60, filter the concentrate back into the nalgene.
I normally end up with about 700ml of concentrated coffee. I mix it with
boiling water to drink, about 120ml of concentrate to 200ml of water.
To win this bet I have with Ed I need a WiFi adapter that can do 802.11n in the
5GHz band. There aren't a lot of these around, and the mass of 2.4GHz-only n
adapters makes it hard to find ones with the right support.
I got a pair of AC600 generic adapters on ebay for about a tenner; a quick look
showed promising Linux support. This indicated I could use one for the bet
without too much hassle.
I got a second so I could work on a wireless driver for FreeBSD, what else am I
to do with my time?
The adapter is a MediaTek MT7610U device, there is a whole load of
information about it on Wikidevi and there are a family of forks of
the vendor code on github.
Wikidevi says the MT7610U is similar to the RT28xx series, which are
supported by the run driver in FreeBSD. I started last night by taking the run
driver, getting it to build as a module, then turning everything off apart from
probe, attach and detach.
This is the first time I have tried to port a driver, to help I collated
everything I could find written about doing it.
I had an argument with some Germans about the pronunciation of WiFi,
apparently it is WeeFii using the sounds of wireless and fidelity. They
also pronounced HiFi incorrectly, English is a strange language.
Recently StarShipSofa has been delivering podcast files to me that contain
3rd party ads. It is their hosting provider that is inserting the ads, but both
times I have been asked if my client is to blame.
Maybe there is something in the file that would indicate who did the encoding?
play (from the sox package)
$ play starshipsofa-454-ads.mp3
starshipsofa-454-ads.mp3:
File Size: 33.7M Bit Rate: 64.0k
Encoding: MPEG audio
Channels: 1 @ 16-bit
Samplerate: 44100Hz Album: StarShipSofa
Replaygain: off Artist: StarShipSofa
Duration: 01:10:10.78 Title: StarShipSofa No 454 Alex Shvartsman and Stephen S. Power
In:0.05% 00:00:02.04 [01:10:08.74] Out:90.1k [ -===|===- ] Clip:0
Just the file name and year, let's try ffprobe from the ffmpeg tools:
ffprobe
$ ffprobe starshipsofa-454-ads.mp3
[mp3 @ 0x809691000] Skipping 0 bytes of junk at 159.
[mp3 @ 0x809691000] Estimating duration from bitrate, this may be inaccurate
Input #0, mp3, from 'starshipsofa-454-ads.mp3':
Metadata:
title : StarShipSofa No 454 Alex Shvartsman and Stephen S. Power
album : StarShipSofa
artist : StarShipSofa
date : 2016
Duration: 01:10:10.39, start: 0.000000, bitrate: 64 kb/s
Stream #0:0: Audio: mp3, 44100 Hz, mono, s16p, 64 kb/s
Nothing more there, a google says there is something called mp3info:
mp3info
$ mp3info starshipsofa-454-ads.mp3
starshipsofa-454-ads.mp3 does not have an ID3 1.x tag.
Well that was no good at all.
I don't have a ton of time to dig into where the mp3 metadata might be, and
none of these tools show anything. I guess that means I can be happy I am not
leaking info when I encode an mp3, or that I just can't find it with normal tools.
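One more thing worth a try (an assumption on my part, not something any of the tools above report): most mp3 encoders stamp a LAME, Xing or Info tag into the first frame, which strings can dig out:

```
strings
$ strings -n 4 starshipsofa-454-ads.mp3 | grep -m1 -E 'LAME|Xing|Info'
```

If nothing comes back the encoder either didn't leave a tag or the tag got stripped.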
As an aside, from talking about the Electron Gnomes on the latest Embedded
FM podcast, Elecia and Christopher implored us to tell everyone we know about
their awesome podcast.
So, go and listen to the Embedded FM Podcast featuring excellent
interviews, professional advice and something about Electron Gnomes.
Okay it isn't, that attack is awesome, but it is a social one rather than a
break of WPA. I bet it would work in a load of environments, I would be
surprised if pentesters didn't already have it in their toolkits.
Really the OS should be doing much more to protect users from this class of
attacks. WPA written today would not be vulnerable to this class of attack at
all.
There are reports of Malware in the PS Vita Piracy scene. When you have to
pursue shady enterprises to use the hardware you own this is always the risk
you take. Consoles have the coolest security hardware, but it is aimed at
stopping piracy rather than protecting users.
The grey area jailbreak tools live in makes it really hard for users to find
the real tools. Instead they end up with malware.
Here is 50 minutes on why this was going to happen.
I managed my first SO-50 pass yesterday. Using a tape measure 70cm yagi I made
last year and my baofeng I was able to hear chatter on the repeater for about
30 seconds.
As mediocre as it is I am really happy with success on my first try, I did
attempt to listen to another pass of SO-50, but only heard a two second chirp.
I used gpredict on my stream 7 for satellite tracking, and a cheap compass app
on android to verify which way the building pointed.
Next I am going to try using an sdr for the downlink capture. I am hoping it
will be a little easier to get the yagi pointed the correct way and give me a
chance of finding the signal mid pass.
The excellent newsbeuter says I have 80 rss feeds that I pay occasional
attention to. There are habit sites I visit like reddit and hackernews, but I
fall back on the rss feeds when I want to focus and read.
I put the rss feeds from people's blogs in my reader, normally when I read an
awesome article via HN or reddit. People don't normally post more than 3 times
a month. This means there isn't so much that I can't keep up, but just enough
that I can process it when I want to.
Due to a Chatham House report on the latest dangers of Satellite hacking
uhf_satcom was on this week's Risky Business talking about Satellite
pirates and exploit possibilities on the birds.
Not the Satellite Pirates of the 90s trying to access free TV and not
arrgg Pirates out at sea(though maybe), but people taking advantage of the
great accessible repeater in the sky.
A terrestrial repeater takes in a signal on an input frequency and rebroadcasts
it on an output frequency. The repeater normally has a better antenna system and
is situated in a physical position to give the best area coverage.
A satellite repeater does the same thing, from its vantage point in space it
can cover a much larger area. There are amateur radio satellites that provide
this functionality, but from low earth orbit.
The pirates on Risky Business are probably using a satellite in geostationary
orbit and taking advantage of it being a dumb pipe pointing back at earth.
Listening to this week's ATP on the bus, they spoke about the latest Mac
OS release, SomethingCali. It reminded me how little I really care for software
updates. Of course I want things to get faster, more secure and less buggy so I
have to endure updates. Most updates don't just bring clear improvements instead
they bring feature updates.
I write software for fun and for a living and for a while I even wrote products
that people used. I even provided training for our users on product updates. I
saw first hand how annoying changes can be.
Most of the changes we delivered were customer driven (in fact, they were all
paid for by individual customers). When we trained a customer's users on the
new software there were normally a whole bunch of changes to off path
functionality that someone else had asked for.
They were happy that bugs had been fixed and UI had gotten a little cleaner,
they loved that the software was better on the crappy machine IT or we supplied
them. But they didn't want change for change's sake.
I have been using Puzzle Alarm Clock to make me get up. It is great it can
make you solve puzzles, quizzes, or it can use the NFC reader or camera to scan
a QR code to turn the alarm off. Puzzle Alarm Clock updated this week. The UI
was improved or something, all I can tell is that it is white instead of black
now. But they also removed features, making the app much worse.
The news on this week's Risky Business Podcast mentioned the record breaking
DDOS against Krebs. 665 Gigabits of traffic per second is a lot of
traffic, but that is probably only the start of such massive attacks.
While wondering how these attacks manifest an article about the slowloris
attack popped up. This is a different sort of denial of service to the network
traffic sent to Krebs and one that should be rather easy to mitigate against at
the protocol layer.
The Krebs attack is the first I am aware of with a large IoT component. I think
we have all been waiting for the hordes of vulnerable devices to appear in
abuse logs. Maybe we can move to ipv6 and leave the Internet of Shit on a
blackholed v4 Internet.
Yesterday I wrote about the Ex Machina soundtrack, but linked to an hour
long loop of one of its tracks. Whoops. The whole soundtrack is equally great,
go find it. Similar stuff on youtube led to 9980 by CONNECT.OHM.
The Science Fiction podcast magazine I listen to, StarShipSofa, has had
some great CyberPunk stories recently.
Humans are going to become augmented, this is an inevitability, we won't be
able to resist making ourselves better by merging computers and machinery
into our body. "Must Supply Own Workboots" considers what happens when
our jobs rely on expensive augmentations, but the augmentations become out
of date.
I think I really enjoyed Ex Machina, it has a great mixture of near scifi
and technology. There is enough mystery and conspiracy in the film to keep me
engaged, I am glad my world doesn't have so much intrigue. If it did I would
probably be in some Billionaires dungeon for following the wrong lead.
The Ex Machina Soundtrack is even better than the film. It reminds me of
the ambient music that plays in GTAV when you wander around with the radio off.
A podcast with similar drones and loops would be an excellent thing to add to
my work music mix.
Reading: The Puzzle Palace, 802.11 Wireless Networks 2nd Edition, MOONCOP!
Writing blog posts and getting them out takes far too much effort. With a
streamlined publishing system the author still has to manage to write something
down.
I do not have a streamlined publishing system. Instead the tools I use sit in a
balance between the ideal thing I want and the hacked together scripts I have.
It has been 4 months since my last post, so you can join me on a refresher.
The web side of the software is written in nodejs using express (and python
with flask, but that isn't finished). The node program starts up and parses
a configured directory containing the blogposts.
$ cd blogposts
$ git pull
Already up-to-date.
The blog posts live in git and are written in markdown. Images for the posts
are kept in the images subdir. The blog posts themselves live in year folders
(2014, 2015, etc). The year folders are provided as configuration to the node
web process as well, which implies there is work to do when the calendar flips
around.
Blogposts have an id, which is used to sort and sequence them and for
the post url. It was needed in earlier pieces of software I wrote and I would
like it to go away. Until I move to something else I have a helper script to
tell me what the next id is.
$ sh ./newid
last post id: 0089
next post id: 0090
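Something along these lines would do the job (a sketch, not the exact script I use; it scans the Permalink headers in the year folders for the highest id):

```shell
# Sketch of a newid helper: find the highest Permalink id across the year
# folders and print it along with the next free one.
newid() {
    last=$(grep -rh '^Permalink:' 20*/ 2>/dev/null |
        awk '{print $2}' | sort -n | tail -1 | sed 's/^0*//')
    [ -n "$last" ] || last=0            # no posts yet
    printf 'last post id: %04d\n' "$last"
    printf 'next post id: %04d\n' "$((last + 1))"
}
```

The sed strips the leading zeros so the shell arithmetic doesn't treat the id as octal.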
Blogposts use an email style header; each line is a key value pair separated by
the first colon on the line. The header block is terminated with two newlines
'\n\n'. I can type out the header, but normally I copy it from a blogpost.
That's the sort of lazy person I am.
$ cp 2016/somepost.md 2016/newpost.md
$ vim 2016/newpost.md
Title: Some post
Tags: meta
Date: 2016-01-01
Preview: Some post
Permalink: 0001
Hurr durr I am a blogpost
I am totally inciteful and full of useful information, like how nat punch
through works and the secret to everlasting life.
Now we have to edit all of the fields in the header and write the content for
the body of the blogpost. This is a great time to add the correct post id value
we got way up top.
Title: Writing this takes a little too much effort
Tags: blog
Date: 2016-09-26
Preview: Writing this takes a little too much effort
Permalink: 0090
Writing blogposts takes far too much effort...
Okay, we have now written the blogpost, maybe even spell checked, we can upload
it to the web server.
On the remote web server we need to pull from the master blogposts branch to get
the new article we wrote.
$ ssh webserver
$ cd sites/blogposts
$ git pull
Now we have the updates we have to restart the node process. There is code to
reload dynamically, but I could never get nodejs to behave here. I would like
to use kqueue to watch posts dir, but when I last looked this wasn't supported
on the platform.
$ cd ../register
$ forever restart server.js
Phew, there we go.
We are serving up the new blogpost from the site. This seems like a lot of
work, but I think most of the component stages would be required with a static
site generator.
I want to write some tools to help with scheduling posts. At the moment I can
write a post for future release, but I have to specify the date for release.
Reading: The Puzzle Palace, 802.11 Wireless Networks 2nd Edition.
The first step to getting my devices working for me is to set up a consistent
network for them to use. To do this I am going to use a small pocket sized
router that can be run from a usb battery to act as a hot spot for my devices,
but also as a bridge to an internet connected wifi network.
The network I want to set up: my devices connect to the pocket router's access
point, and the router joins an internet connected wifi network as a client.
I struggled to find a network configuration like this in the OpenWRT wiki. I
wondered for a while if it was because the network was impossible (seemed
unlikely) or so obvious as to not be worth documenting.
Eventually google turned up a bitbucket page with a config that worked
perfectly.
I need to find a method which makes it straightforward to configure a new
outgoing network. I think at the moment I am going to have to edit the wifi
config files to make any changes. On the road that will be less than ideal.
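For reference, the shape of the config is something like this (a sketch along the lines of the one I found, with made up ssids and keys): one sta interface joins the upstream network, one ap interface serves my devices, both on the same radio.

```
# /etc/config/wireless (sketch)
config wifi-iface
        option device     'radio0'
        option mode       'sta'
        option network    'wwan'
        option ssid       'upstream-wifi'
        option encryption 'psk2'
        option key        'changeme'

config wifi-iface
        option device     'radio0'
        option mode       'ap'
        option network    'lan'
        option ssid       'travel-net'
        option encryption 'psk2'
        option key        'changeme'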
We will see the comment /* XXX */ throughout Net/3. It is a warning to the
reader that the code is obscure, contains nonobvious side effects, or is
a quick solution to a more difficult problem.
The second volume of that series might be one of the best networking books ever
written. Not because it is a good tome to learn networking from, it is instead
a guide into the heart of a real system. It is close enough today to use as a
starting point for finding out where things are and a step to finding out why
they are.
It is where I go when I want to find out how my current machines get bytes from
an application to packets on the wire.
For about as long as I have been using terminals I have had them set to a dark
theme, for a while at the beginning I had my terminal set up to be green text
on a black background. I might have been the l33test mutherfucker around.
I moved to Zenburn from the unreasonably popular solarized at some
point last year. When I moved I went through a few themes trying things out. I
found that .Xresources supports cpp macros making it easy to swap themes.
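The trick is simple: xrdb runs the C preprocessor over the file, so a theme can live in its own file and be swapped by changing a single include (the path here is made up):

```
! ~/.Xresources - xrdb runs cpp over this file, so a theme is just an include
#include "/home/me/.Xresources.d/zenburn"
```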
I tried to work outside quite a few times last year. Dark themes for terminals
are really hard to see in bright sunlight. For some reason yeahconsole doesn't
like the way I include themes and stayed with a default light theme. The light
theme is really easy to see in sunlight. The contrast in readability made me
really question using a dark theme at all.
At camp last year most of the daylight hours I spent on my laptop were inside
our super tent, direct sunlight made my screen hard to read and the sun seems
to melt my skin. My dark theme was okay to read in the tent, but it was no good
outside, with the sun melting I just hid away until it was night time.
The sun being bad, I am grateful I live in Scotland, we don't have to put up
with the sun very often(this is a joke, it is always fucking sunny, I want a
refund).
When the sun isn't around I do quite like to sit in darkened rooms when I am
hacking. If you too are a vampire you might have noticed the eyeball explosion
that happens when you switch from your friendly terminal to the light explosion
that is a web browser. Pretty much every web page has a light theme, I
actually dislike dark themed web pages, I always think the designer is being up
front with how much of a douchebag they are.
I dropped my android phone at congress. I was trying to figure out which
direction to walk in and the compass was spinning wildly, being absolutely no
help. I slid the phone towards my pocket, missed, and the screen cracked on
contact with the ground. The phone had a terminal diagnosis then, but I
soldiered on for as long as I could.
Eventually the nexus 5 dropped the cellular network and I travelled 40 minutes
to a vet when I didn't need to. The next day I jumped on to the bq site and
ordered my ubuntu edition E5.
The BQ sales process was just terrible, it didn't help that my email
provider decided I didn't need to receive email that weekend. The BQ site is
really badly laid out, confusing and proactively difficult to login to and view
your orders. They don't offer a tracking number, so I don't know where the extra
€10 'international' shipping went.
The ubuntu touch os out of the box is as annoying as any other smartphone, you
are run through the standard dialog that asks mostly pointless questions. The
version of ubuntu touch that ships on the E5 is "really old" (with really old
being may 2015, 9 months prior). I was unable to install anything from the
ubuntu app store (well I installed a hex colour picker, but that doesn't count).
I tried to check for updates, but the phone was happy believing it was all up
to date. I searched for a while, but any search term with 'ubuntu touch'
results in ubuntu users having issues with their touch screen laptops.
Eventually I fell into the #ubuntu-touch irc channel and asked.
And I waited... three hours later, on my third time asking, someone suggested I
try updating. There isn't wifi at work, so I wondered if the ubuntu touch os is
so broken it won't acknowledge user settings and will only look for updates on
wifi. I tried at the hackerspace later that night and I managed to update to
OTA9.1. This is the latest release version.
I don't know who this operating system is for. I have been using free unix
desktops for a decade and I can't see anything in ubuntu touch that I want. The
default system shows a collection of scopes; scopes are an advertiser's dream.
There are default scopes like weather, music, video and news, and the scopes
are populated by scope apps that create a feed for each theme. There are loads
of different input services for them as well, so if you use facebook,
instagram, flickr or 500px you will have a nice full feed. If you use one, or
none of them, you won't have anything. I saw the scopes and thought of windows
crapware. I turned off all of the scopes as soon as I could.
Here is a short list of issues I encountered:
Out of the box
No updates were shown as available on 3G
Updates became available to download on wifi
Unable to download apps from the ubuntu app store
Unable to run any software other than browser
No way to find out the phone was an entire naming scheme behind without wifi
The UK (and English) support site doesn't mention the Ubuntu Editions at all. The Austrian one does.
General System issues
All advice assumes you are running ubuntu on the desktop
Phones always seems to be low on memory
You are stuck looking at the apps scope
No way to have a phone wallpaper, just apps
If you turn off all of the scopes there is no way to add them back in
No way to import contacts into the phonebook, yes on a phone
There isn't a calendar
There is no count down alarm timer (make noise in 20 minutes)
Bluetooth volume is massively reduced (unable to hear anything outside)
Removing wired headphones doesn't pause playback
Volume transfers when plugging/unplugging head phones. Play music full volume on speakers, plug in headphones, go deaf.
Headphone buttons don't work
It can take still running paged out apps 10 seconds to become active. This was seen on the alarm clock app
No security
No encrypted storage
No way to hide notification bodies on lock screen
Default lock screen leaks data (how many calls, how many messages)
UI Bugs everywhere
There is gravity scrolling sometimes, I can't tell you when though
The alarm time picker has gravity scroll when you don't want it and doesn't when you want it.
When you switch to paged out apps, sometimes they will be out of focus. This is your only indicator the app isn't actually running. Don't worry, it will restart entirely within 10 seconds
There are edge swipe gestures, but they only seem to work at the most annoying times
The left swipe menu is useless when the default screen is a page of apps
In the browser you access multiple tabs with a bottom up gesture, scroll in landscape mode is nearly impossible.
General App usage
The apps are either written by Canonical or some random person, these apps wants my login credentials
There are loads of apps available, as long as you want a news app from a minor regional newspaper
Nobody wants scopes
Built in apps (twitter, youtube, etc,) are just web views and they are terrible.
Can't tweet photos from twitter app
Can't send photos on telegram app
The Canonical provided apps have led to no good alternatives for apps (youtube, twitter, etc)
Specific Apps
Podbird loses progress in podcasts
podbird redownloads podcasts on new wifi networks
podbird doesn't have any play back indicator for podcasts other than the currently playing
podbird can't handle many rss feed urls
podbird can't bulk import podcast urls
Ureadit can't show the first item in the list in portrait
ureadit has no touch area for gifs
No way to refresh feed without a long scroll up
On out of memory feed/message thread is reset or cleared away
I am making my email setup better. I have moved hosts to Fastmail(referrer
link) and I am setting up mutt to work with folders correctly. There is a patch
for mutt called mutt-sidebar which gives a list of folders in a side pane
in mutt, much like the interface would be on the web.
Looking at Freshports it looks like the sidebar patch is part of the
FreeBSD port, but it isn't enabled by default. I wanted to check this and looked
for the option in pkg to list the build options. There is the pkg query
command that will show information about installed packages, but it is a little
mental to use.
I found how to use pkg query to list the build options for a pkg in the
wiki.
$ pkg query "%n is compiled with option %Ok set to %Ov" mutt
mutt is compiled with option ASPELL set to off
mutt is compiled with option COMPRESSED_FOLDERS set to on
mutt is compiled with option DEBUG set to off
...
mutt is compiled with option SIDEBAR_PATCH set to off
...
That tells me that I need to build mutt from the port and turn the sidebar patch on.
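Enabling it means a trip through the ports tree, roughly like this (assuming a checked out ports tree; make config is where SIDEBAR_PATCH gets ticked in the options dialog):

```
# cd /usr/ports/mail/mutt
# make config
# make install clean
```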
I was hit pretty hard the last few weeks with the standard issue winter cold. I
was quite surprised to see how little energy I had for working on anything
after work. I certainly see the value in taking time off work even if it just
means more energy for side projects.
Yakamo and I have continued to push out our podcast, with another
episode appearing today. Our audio is getting better though you won't hear
it in the most recent show as I did all of the editing with my backup audio.
Editing the backup audio showed some of the issues with mumble's recorder. It
was a lot of trouble to keep my levels balanced; there seems to be gain
correction on the mumble feed so my starting audio is super loud and it just
tapers off.
This episode should now be missing large chunks of silence and features a much
deeper sexier voice for me. Turns out there were some config issues with my
microphone. There were also some observation issues, I didn't actually listen
to the whole show or check the waveform before doing the renders.
Should all be fixed now and it will sound silky smooth.
It has been mentioned by a friend that my voice in the recent unreasonable
podcast episodes is much higher than it is in reality.
Of course for the first few episodes he just said 'your audio is fucked' which
didn't help me resolve the issue at all.
With the detail that pitch was off I knew where to start looking, the pitch
issue was present on both the audacity recording and the mumble back up. The
audio rate for the microphone and audacity were both the same, 44.1kHz.
The last thing to check was the audio sub system on FreeBSD. I read the
snd man page and it pointed me to a few sysctl knobs that I might be able
to tweak. I also checked the man page for usb audio and found this little
notice in the bugs section:
BUGS
The PCM framework in FreeBSD only supports synchronous device detach.
That means all mixer and DSP character devices belonging to a given USB
audio device must be closed when receiving an error on a DSP read, a DSP
write or a DSP IOCTL request. Else the USB audio driver will wait for
this to happen, preventing enumeration of new devices on the parenting
USB controller.
Some USB audio devices might refuse to work properly unless the sample
rate is configured the same for both recording and playback, even if only
simplex is used. See the dev.pcm.%d.[play|rec].vchanrate sysctls.
The PCM framework in FreeBSD currently doesn't support the full set of
USB audio mixer controls. Some mixer controls are only available as
dev.pcm.%d.mixer sysctls.
vchanrate is a per device sample rate that can be controlled by a sysctl,
toggling the value showed me the problem. With the rate at the correct 44100 my
deep voice poured out of my microphone and into a file.
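For reference, the knob looks something like this (assuming the usb microphone shows up as pcm1; check /dev/sndstat for the real unit number):

```
$ cat /dev/sndstat
...
pcm1: <USB audio> (play/rec) default
$ sysctl dev.pcm.1.rec.vchanrate=44100
dev.pcm.1.rec.vchanrate: 48000 -> 44100
```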
Recording for the show has been okay so far, we have been using mumble to run
our call while producing local recordings with audacity and a backup recording
with mumble itself.
The back up recording has already been useful, I selected the wrong channel in
audacity and didn't notice the flat waveform coming out of my mic. There has
been a lot of trouble with the audio streams going out of sync making it
bothersome to edit in audacity. I hope that is related to my own rate issues on
my local recordings. I will see when I get to editing 0x03.
I think the show will probably get closer in structure to other shows as we go,
I have already given in and you will hear intro noise in episode 0x02. You
will probably also hear us saying the name of the podcast a lot, that should
remind people what they are listening to.
JCS was interviewed on the latest episode of Garbage, where he spoke
about his app pushover. Pushover is an Android, iOS and mac app that works
with a service backend. You can send notifications to pushover via a simple
api (you can just use curl) and the notifications are delivered to your devices.
This is awesome, I can set up pushover on my phone, a client on my build
machine and get alerts when builds are complete. No more checking while a build
finishes, instead I can get notifications directly on my pebble via pushover
and the pebble app.
Looking through the app directory I found the command line tool ntfy.
ntfy is really easy to set up and use, for pushover you need a simple
.ntfy.json (with a real user_key) like:
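Something like this (the key names are from memory, so double check against the ntfy README; the user_key value is a placeholder):

```json
{
    "backends": ["pushover"],
    "pushover": {"user_key": "YOUR_USER_KEY"}
}
```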
You can then send messages with ntfy or send the result of a command:
$ ntfy -t "Test" send "This is a test message"
$ ntfy done false
By default ntfy will set the message title to the user@host, but the -t flag
can override this. ntfy supports other backend services and a 'linux' backend.
I thought the linux backend would tie into the same thing as notify-send, but
that wasn't the case. I need to figure out how those tie in.
I have to run FreeBSD builds with root privileges, and I didn't want to give a
tool like ntfy root access. I wrote a small alias to send the result of the previous
command.
alias buildres='if [ $? -eq 0 ]; then ntfy send "Build passed"; else ntfy send "Build failed"; fi'
Via the ReverseEngineering subreddit I found that vim's built in :X
encryption mode can be pretty easily broken. I didn't know that vim had
anything built in to encrypt files, in hindsight I should have expected some
functionality.
Looking into the vim documentation on Encryption shows that most of
these methods aren't recommended for use. It also looks really easy to
accidentally destroy a file using vim. If you do not decrypt the file correctly
you get a vim buffer filled with encrypted noise, and if you save that buffer
you destroy the original file.
I have been using vimwiki in a git repo since August last year. Vimwiki is
a really simple markdown style wiki, the features are really limited. There is
some markup, links and that is all. It has been filling all of my needs
perfectly. I would like to be able to encrypt the wiki files so I could have a
little more peace of mind, but with a little searching I haven't found anything
that has the utility I need.
I could write something myself that worked well with both git and vimwiki, but
I don't really want to subject my personal files to my own bugs. If you know of
a solution for encrypting files in a git repo or integrating with vimwiki that
would be really helpful.
For some reason I have been recording a lot of audio on my desktop
recently. I also saw a conversation in irc about how to simply record audio
from a microphone on FreeBSD.
I hoped I was going to find a super simple OpenBSD style solution to
capturing samples, but I wasn't able to dig anything out. I did play with cat
for a little while, but nothing useful came from it.
Audacity is the tool I have been using most to record long sessions.
Audacity is now probably the foss standard for audio editing/production
and it has been really stable for me. On FreeBSD it has been rock solid so far,
if a little heavyweight.
ffmpeg is an audio and video swiss army knife and can be used to capture
video from webcams and audio from capture devices. The only issue I have had
with ffmpeg on FreeBSD is that lame support is not built into the default
packages.
ffmpeg can be used to capture audio from a source:
ffmpeg -f oss -i /dev/dsp -vn -ab 128k test.wav
Sox is the ultimate tool for handling audio; along with the two front
ends play and rec you can do most operations on an audio stream. Sox can be
built with codec support for a ton of formats. It is quite simple to use sox to
convert between different bit formats of sdr capture files.
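For example, rtl-sdr style captures are unsigned 8 bit IQ pairs, and sox can turn them into signed 16 bit samples for tools that want those. A sketch with assumed file names and sample rate (the dd line just fabricates an input file so there is something to convert):

```shell
# Fabricate a dummy capture file: unsigned 8 bit, 2 channels (I and Q).
dd if=/dev/zero of=capture.cu8 bs=512 count=1 2>/dev/null
# Convert the raw unsigned 8 bit IQ stream to signed 16 bit.
sox -t raw -e unsigned-integer -b 8 -r 2048000 -c 2 capture.cu8 \
    -t raw -e signed-integer -b 16 capture.cs16
```

The output inherits the rate and channel count from the input, so only the encoding and bit depth need restating.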
Yakamo and I have started a podcast, the first episode was
released yesterday. The website is still very simple and I don't think there is
an rss feed setup yet. But, we have managed to put out the first episode and
the second episode is lined up to be released on Monday.
Give it a listen if you like podcasts, any feed back should be directed to
stuff@yakamo.org or /dev/null. That's where I send your emails anyway.
Great news today about David Miranda's Case, but I can't help but feel
down with the direction of the country. I can see British law being deemed
incompatible with the ECHR being used to strengthen arguments against being a
signatory to ECHR and part of the EU.
At home we have GCHQ dismantling secure communications at every turn. The
low price of oil is causing a downturn up here and it doesn't look like there
is a bright future. Sometimes it is hard to stay positive when you let the real
world seep in.
While I sit numbly at my desk I like to restlessly fumble with anything at
hand. This week it has been this awesome mind bending deck of cards. I
have already had many visitors complain my cards are misprinted and hurt their
head, this real world glitch is doing well. The glitch_art sub reddit
contains many more examples of images like these. None quite as satisfying as
holding these 'broken' playing cards.
The Raspberry Pi page on the FreeBSD Wiki links to a blogpost about
setting up xorg on the Pi. That post was written back in 2013 and most of the
information there seems to be out of date.
I set up X on a Pi at the end of December 2015, this information is up to date
for r292413. pkg is now available on arm images so there is no need to build
everything from ports, considering tools like tmux could take 6 hours to build
on the pi itself this is a huge improvement. I installed the following packages
to get X up and running on the Pi:
# pkg install xorg xf86-video-scfb i3
The Pi isn't able to auto detect the X configuration, so I looked for a while
for a config that would work. Eventually I dug the following one out of a
mailing list post. Place the following into /etc/xorg.conf:
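The important part is pointing the device section at the scfb driver; a minimal sketch (the identifier name is arbitrary):

```
Section "Device"
        Identifier "Card0"
        Driver     "scfb"
EndSection
```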
ffmpeg can now make gifs in a single step, no longer do you have to generate
frames then pass them into ImageMagick. For most of the videos I have tried the
initial gif from ffmpeg hasn't been very good.
I found a stackoverflow post that describes a two step process for
generating gifs with ffmpeg that has great results. The first step generates a
palette from the source video, then this palette is used as a filter when
converting the video into a gif.
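The two steps boil down to a pair of commands (file names are stand-ins here; the testsrc line just fabricates an input video so the commands have something to work on):

```shell
# Fabricate a short test input video.
ffmpeg -y -f lavfi -i testsrc=duration=1:size=64x64:rate=5 input.mp4
# Step 1: generate a 256 colour palette from the source video.
ffmpeg -y -i input.mp4 -vf palettegen palette.png
# Step 2: convert to gif, feeding the palette in as the second input.
ffmpeg -y -i input.mp4 -i palette.png -lavfi paletteuse output.gif
```

The stackoverflow recipe also slips fps and scale filters in front of palettegen and paletteuse; the bare pair above is the minimum that works.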
The improvement is more evident if you click and watch the full size gifs side
by side. The stackoverflow post links a blog post with even more information on
generating high quality gifs from video.
I wrote a post about how to survive congress, but didn't publish it. It
contained a list a little like this:
All of the talks are recorded, streamed and put online at media.ccc.de.
The self organised sessions are not recorded.
The most interesting things are happening at the assemblies.
These points hold true; my original suggestion was that because the talks are
available after the fact there isn't much point sitting in the lectures. At
32c3 I didn't
attend any of the talks, this was a mistake. I really regret not going to any
talks.
Going to the talks gives you something to talk about with the people at CCC.
The self organised sessions I went to were great and hanging out with people at
their assemblies and at the Scottish Consulate was great. If I had been in a
lecture instead of at our table I definitely would have missed
#toiletparty. But I think if I had gone to some of the talks early each
day I would have gotten much more out of the event.
Next CCC I will head to the event with more of a plan. I don't think there is a
right way to do congress, it is just too insane, but I will try to go to each
one in a different way.
Around the December holidays I received three sets of the jyetech lcd
scope kit. This cheap kit (~£10) builds a small low frequency (1Msps)
oscilloscope.
In all it took me about 2 hours to solder everything together, that includes me
misplacing a resistor and a capacitor. I wish I had sorted the resistors with
an auto-ranging multimeter before starting.
I am planning to build these kits into some audio projects later in the year,
getting three of them was great luck. The kit was really straightforward to
build and didn't take too long, there are serial logging features on the board
as well. This kit could be built into a portable work bench without much
thought.
We have two old Black and White CRT monitors in the hackerspace, they look
really cool. I put together a card and gif for the holidays:
I used a raspberry pi running FreeBSD for each monitor. The bottom monitor is
running aafire which gives a nice fireplace effect. I did some big text in
figlet for the message.
I also put together a paper card using the gcard package in LaTeX. This relies
on double sided printing to get the message inside the cards. It was a bit of
trouble (I had to trim the cards down), but the cards came out quite well for
an hour's work.
In the last post I showed an animated gif of the post source run
through sent.
This gif was super easy to make manually, I ran sent on the post source file,
then I ran my screenshot tool from dmenu on each slide. I stepped through
each slide manually.
For a long presentation, or if I might do this more often I would probably
automate this in some way.
I was left with a directory of files called 1.png, 2.png and so on, one for
each of the slides.
I used the convert tool from imagemagick to turn these into an animated gif.
$ convert -delay 100 -loop 0 *.png sent.gif
Animated gifs can be played with the animate tool from ImageMagick to see how
the delay is working.
This weekend's In Other BSDs section had a link to a nycbug
thread about presentation software. That was strangely apropos: last week I
made slides for a lightning talk using my own template and beamer just
exploded. I fixed the issue with beamer, but I was upset enough to try looking
for other software to use when I can.
At the start of the nycbug thread suckless sent is mentioned. sent is a
really simple presentation tool: it takes some input files and shows them
as a slideshow. No pdf output, no templates, just a presentation.
sent isn't packaged in FreeBSD, but suckless make it rather easy to build their
tools (you normally edit a header and rebuild to configure them) so I grabbed
the source and built it.
I had to add some search paths to get it to build:
For the past month or two the uboot on the FreeBSD RPI-B images has been unable
to boot on most sd cards. This weekend a new version of uboot was released and
new images were created. The new images boot no problem and I am finally able
to try this cheap 5 inch screen I got on ebay.
This weekend was the first 57N Stupid Shit No One Needs Hackathon. I spent
the weekend trying to perform serial comms over a cup and string using the
msp430 based TI Launchpad.
I had the tone generation working really quickly and then spent 15 hours trying
to demod the tones and recover a byte stream using a microphone. I had no
chance, it didn't work at all.
I was able to transmit the tone along the string across 3 metres of room, so
the core idea does work. I think I will try this project again after reading
some more DSP.
Hibby and I were happy to announce the first 57N Stupid Shit No One Needs
Hackathon this week. It isn't often that you come across a strange
link in your search history and it turns into an awesome event, we seem
to have beaten the odds.
The first Stupid hackathon I read about produced some of the coolest ideas for
pointless things I have ever seen. The best example to make the idea clear has
to be endless.horse.
So what are you going to do Tom?
Of the many terrible ideas I have each day, only a few are worth spending 48
hours polishing to death. This coming weekend I have decided to take two
technologies I have been gradually learning, microcontrollers and DSP, and
build the most terrifyingly bad things I can think of.
So tomorrow prepare yourself to see the start of a paper cup and string
telegraph being forged in 57North Hacklab.
Just before I left work yesterday I built one of the gimme boards I got earlier
this week and connected it up to a goodfet. I had to do a little source editing
to let the goodfet run and connect to the correct serial port. If you need to
change the serial port from the default, it is a quick grep through the source
tree to find the literal string "/dev/ttyU0" and change it.
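That search and replace can be wrapped up in one go. The replacement device node /dev/cuaU0 below is only an example of what I would reach for on FreeBSD:

```shell
# Sketch of the edit as a one-shot helper. sed -i.bak leaves .bak backups
# behind and works with both GNU and BSD sed.
fix_serial_port() {
    # $1: source tree, $2: new serial device node
    grep -rl '/dev/ttyU0' "$1" | while read -r f; do
        sed -i.bak "s|/dev/ttyU0|$2|g" "$f"
    done
}
```

$ fix_serial_port . /dev/cuaU0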
I followed the instructions on the git repo for the specan code. The first
time I ran the flasher the IM-ME booted into the stock firmware again. I
erased the flash, tried again and it all worked. I am not sure how long the
flashing took, but if you will be holding the gimme expect it to be a few
minutes.
To flash the IM-ME I did:
$ goodfet.cc erase
$ goodfet.cc flash specan.hex
This turned out to be a lot easier than I expected, everything seems to be well
documented. If you can get an IM-ME and want to flash it with a goodfet and a
gimme, send me an email and I will send you one of my spare (partially
assembled) boards.
My Yardstick One appeared yesterday, time to set up RFCat.
RFCat has not yet been packaged on FreeBSD so I had to install it manually. I
pulled the RFCat source from bitbucket which includes both the firmware
and the client tools. To play with the stock firmware on the YSO I just had to
install the client tools.
The client tools depend on libusb-1.0, which ships with FreeBSD, and on
pyusb. Pyusb is offered by the py27-usb port.
$ sudo pkg install py27-usb
Then I built the rfcat client tools:
$ cd code
$ hg clone ssh://hg@bitbucket.org/atlas0fd00m/rfcat
$ cd rfcat
$ sudo python setup.py install
I had to set up devfs rules to access the usb devices, with my account in the
usb group I have the following:
# /etc/devfs.rules
[localrules=10]
add path 'usb/*' mode 0660 group usb
#/etc/rc.conf
devfs_system_ruleset="localrules"
devd_enable="YES"
With that all set up I can now try the rfcat tools:
$ rfcat -r
'RfCat, the greatest thing since Frequency Hopping!'
Research Mode: enjoy the raw power of rflib
currently your environment has an object called "d" for dongle. this is how
you interact with the rfcat dongle:
>>> d.ping()
>>> d.setFreq(433000000)
>>> d.setMdmModulation(MOD_ASK_OOK)
>>> d.makePktFLEN(250)
>>> d.RFxmit("HALLO")
>>> d.RFrecv()
>>> print d.reprRadioConfig()
The -r flag tells the client to throw me into the research prompt and I get
left in something that looks sufficiently like ipython. To test that everything
was working I decided to transmit some bytes in a loop in the 433MHz ISM band.
In [1]: d.setFreq(433920000)
In [2]: d.setMdmModulation(MOD_ASK_OOK)
In [3]: d.makePktFLEN(4)
In [4]: d.setMdmDRate(4800)
In [5]: for i in range(0,15):d.RFxmit('\xDE\xAD\xBE\xEF');
In [6]: for i in range(0,15):d.RFxmit('\xDE\xAD\xBE\xEF');
In [7]: quit()
I used an rtlsdr dongle and sdrtouch on my phone to get a quick demod of
the spectrum and to see a waterfall. I tried this a few times, but I wasn't
seeing the expected signal. Right at the far right edge of the screen I was
seeing a jump in strength, and tuning around a bit while transmitting I
eventually caught my burst packet. It seems that my rtl dongle is about 400KHz
off the actual frequency.
With the launch of the yardstick one I remembered the im-me I bought
earlier this year. Not wanting to risk destroying one of the last available
im-me's in the world I decided to get pcbs made of Michael Ossmann's
gimme.
I found a link to the OSH Park board page and ordered a small batch (3
boards) for less than £10. They came in about 3 weeks and seem to be reasonable
quality, I will try them when my goodfet appears this week.
My main laptop is a Lenovo x220 Tablet with an awesome swivel screen. The
screen on the laptop is a touch screen and wacom tablet which uses a pen that
hides in the side of the laptop.
I had quite a bit of trouble getting this all set up. Wacom touch and pen
devices are supported by webcamd in FreeBSD. I set up webcamd as
documented elsewhere on the internet and while I could see webcamd grabbing the
input devices the touch screen or pen didn't work at all under X.
Eventually I figured out the problem was xorg not detecting the hid device
nodes. To solve this I had to manually create an xorg.conf with the following
sections.
Section "ServerLayout"
Identifier "X.org Configured"
Screen 0 "Screen0" 0 0
Screen 1 "Screen1" RightOf "Screen0"
InputDevice "Mouse0" "CorePointer"
InputDevice "Keyboard0" "CoreKeyboard"
InputDevice "stylus" "SendCoreEvents"
InputDevice "touch" "SendCoreEvents"
EndSection
...
Section "InputDevice"
Driver "wacom"
Identifier "stylus"
Option "Device" "/dev/input/event0"
Option "Type" "stylus"
Option "USB" "on" # USB ONLY
Option "Mode" "Absolute" # other option: "Relative"
Option "Vendor" "WACOM"
Option "tilt" "off" # add this if your tablet supports tilt
Option "Threshold" "5" # the official linuxwacom howto advises this line
EndSection
Section "InputDevice"
Driver "wacom"
Identifier "touch"
Option "Device" "/dev/input/event1"
Option "Type" "touch"
Option "USB" "on" # USB ONLY
Option "Mode" "Absolute" # other option: "Relative"
Option "Vendor" "WACOM"
Option "tilt" "off" # add this if your tablet supports tilt
Option "Threshold" "5" # the official linuxwacom howto advises this line
EndSection
With the new xorg.conf dropped into /etc I could restart the server and
boom, touch screen and tablet working quite well.
When I swivelled the screen I wanted to be able to rotate my display and
input devices to the correct orientation. I wrote a little shell script that
can either advance the screen rotation by 90 degrees or set it back to the
default orientation.
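A sketch of what such a script can look like. The state file location and the orientation names handed to xrandr are my assumptions, and this only covers the display half, not the wacom input rotation; XRANDR is overridable so the cycling logic can be tried out on its own:

```shell
# Cycle the display through xrandr orientations, remembering the last one
# in a state file. "rotate normal" resets, a bare "rotate" advances 90
# degrees.
XRANDR="${XRANDR:-xrandr}"
STATE="${STATE:-$HOME/.rotation}"

rotate() {
    if [ "$1" = "normal" ]; then
        cur=normal
    else
        # advance 90 degrees from whatever we last set
        last=$(cat "$STATE" 2>/dev/null || echo normal)
        case "$last" in
            normal)   cur=right ;;
            right)    cur=inverted ;;
            inverted) cur=left ;;
            *)        cur=normal ;;
        esac
    fi
    echo "$cur" > "$STATE"
    $XRANDR -o "$cur"
}
```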
I bound the script in my .i3/config to the two screen rotation buttons on the
front of the bezel. I found the keycodes by using xev.
bindcode 198 exec rotate normal
bindcode 204 exec rotate
Overall the touch screen and tablet work quite well. When webcamd starts it
doesn't always detect both the touch screen and tablet, and sometimes it
places them at different event nodes. If I could figure out a way to make these
predictable or if a later xorg detects the input devices correctly then this
setup would be perfect.
There are a number of services out there that allow you to take a screenshot
and upload it to a website. All of the tools I have seen (I didn't look very
hard) involve a proprietary service and uploading your images to someone
else's hosting.
That isn't good enough for me, I needed an open tool I could use anywhere
(FreeBSD support) with the ability to drop the resulting png into a directory
on a webserver I control.
Here is my tool to solve this problem, screenshot. Screenshot can capture
either the entire screen or offer a picker to grab a certain area. I used
import from ImageMagick to handle the capturing and some glue to upload the
image. There is another option to open the image with feh if required.
$ screenshot open
$ screenshot upload
$ screenshot pick upload
The script also dumps file names and url into a log file, this makes it easy to
track down the last taken screen shots. I have some awk magic to pull out the
last url and throw it onto my clipboard.
#!/bin/sh
shotdir=$HOME/screenshots
site="mysite.me"
uploaddir="webdir/screenshots/"
if [ ! -d $shotdir ]; then
mkdir $shotdir
fi
one=`word`
two=`word`
word=$one-$two.png
file=$shotdir/$word
name=`basename $file`
url=$site/screenshots/$name
pick=false
open=false
upload=false
for var in "$@"
do
if [ "$var" = "pick" ]; then
pick=true
continue;
fi
if [ "$var" = "upload" ]; then
upload=true
continue;
fi
if [ "$var" = "open" ]; then
open=true
continue;
fi
file=$var
done
echo "File:" $file
echo $file "http://"$url >> $shotdir/screenshot.log
if $pick; then
import $file;
else
import -window root $file;
fi
if $upload; then
scp $file $site:$uploaddir/$name
fi
if $open; then
feh $file
fi
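The awk part is tiny. Given the one-pair-per-line log format written by the script above, something like this pulls out the last url; piping it into xsel is my choice, any clipboard tool works:

```shell
# Print the url field of the final log line. The log format, from the
# script above, is "<file> <url>", one pair per line.
lasturl() {
    awk 'END { print $2 }' "${1:-$HOME/screenshots/screenshot.log}"
}
```

$ lasturl | xsel -b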
I use another shell script to generate a random word. This script uses my local
system dictionary, /dev/random and some glue. It reads three bytes from
/dev/random and formats them with od into something useful, then uses sed to
seek to that line in the dictionary to get the word.
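A sketch of that word script, with the dictionary path made overridable; the byte-to-line arithmetic is my reconstruction of the glue:

```shell
# Pick a random line from the dictionary using three bytes of /dev/random.
word() {
    dict="${1:-/usr/share/dict/words}"
    lines=$(wc -l < "$dict")
    # three bytes from /dev/random, formatted into decimal by od
    set -- $(od -An -N3 -t u1 /dev/random)
    # glue the bytes into one number and map it onto a dictionary line
    n=$(( ($1 * 65536 + $2 * 256 + $3) % lines + 1 ))
    # seek to that line with sed to get the word
    sed -n "${n}p" "$dict"
}
```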
Planning a radio field day was all it took to ruin a week of perfect weather.
Instead of the glorious sunshine and high temperatures of the previous days the
North Sea took revenge and summoned a mighty Haar to punish us for our
hubris.
We hit the beach with a bbq, food and a couple of radios. Hibby had his new
toy, a Clansman set including a 5m mast. The mast was light to carry, easy to
slot together and actually really easy to put up. I think with some practice it
could be erected by one person by pegging in the guy lines first.
The bands were relatively quiet considering it was a Friday afternoon, but
Hibby and Derecho had a couple of good contacts from across Europe. The radio
nonsense wasn't really what grabbed me though, I was more interested in playing
in the dune system.
Paris loves the theatre, they have world renowned plays enjoyed by douchy
teenage girls the world around. They love no theatre more than Security
Theatre. To transit through Paris CDG and make it to the departure lounge you
need to show your passport twice and your boarding pass at least six times.
In fact one agent of the airport was enjoying her role more than anyone I have
seen at work. She scanned my boarding pass and scrutinized my passport before
sending me through the metal discoverer.
At the other side I waited and waited expecting my bag. Instead I heard a
shriek! 'You did not show me your pass'. I was dragged back through the
magnetic arch to show my passport once again. All this with the agent shouting
as if I had stripped half naked.
Oh fun.
BHX is a strange airport; security are trying to stay in business and keep
staffing up. They manage this by directing transfer passengers back through
security to redo the dance, though I did get sent down the priority track.
Of course, with a flight's worth of passengers transferring this wasn't quick.
Past security a wormhole takes you to a mall in the center of the city. A
shopping horror exists until you overcome the forces of capitalism and resign
yourself to sitting in the uncomfortable lounge.
The Mega Charity Mozilla keeps offices for their staff in many major
cities. I think most of their staff work from home, but some must visit offices
and they require space to hold meetings. Hopefully Mozilla Space Paris is
the most decadent of them all.
The space has all the trappings you would expect from a hip and trendy startup,
which mozilla sort of is. They have a big airy space, a fancy catering kitchen
and
the most insane meeting room I have ever seen. You can see from the pictures
why the French Revolution started.
I should probably apologize to anyone that has donated to mozilla in the past.
I made full use of their stocked kitchen and, to avoid the ridiculous parisian
beer prices, drank more than my share of mozilla beer. Yum yum.
The best way to get around Paris is to use the metro, if you are coming into
CDG you can take the train to Gare du Nord then hop onto the metro from there.
Metro stations seem to be dense enough that there will be one near to your
destination, I didn't see more than a 10 minute walk.
Using the metro fulfilled every Parisian stereotype I had, lovers kissing,
gypsies begging, men busking with accordions. The metro was a brilliant way to
get around and very entertaining.
Just as entertaining for me (though some might not enjoy it) was my walk
across Paris to reach my hotel, before I knew about the metro. On the map
before traveling the walk didn't look very long. I didn't have any frame of
reference for Paris, but a similar distance along the Thames in London would
be a reasonable walk. Well, reasonable to people that like to walk through
cities.
In the 30°C heat at 1800 a 6Km walk through the city was probably a little
much. But the walk was very fortuitous: if I had been down in the
metro I wouldn't have seen the stunning sights of Paris, large buildings,
street gangs, passed out tramps that have pissed themselves and the myriad of
cheap suit shops. Shiny silver suits are a steal at 50€.
After a couple of bouts of despair I reached my hotel in one piece,
only losing about 5 kilos in water.
Like last year, here are the videos from BSDCan that have stood out to me. I
don't
think all of the videos have been posted yet so there are probably some gems
left to watch. All of the videos are here
I have a navspark gps microcontroller board I backed on indiegogo last
year. The board has been sat in my desk for a year so I decided to just use it
as a dumb gps and not bother with the microcontroller part of the board.
The default firmware sends nmea strings over a usb serial controller at 115200
baud, this was easy to test with cu. I wanted to use gpsd with the gps, I am
planning to integrate it into a wardriving box in the next few weeks.
$ cu -l /dev/ttyU0 -s 115200
gpsd is unable to accept baud rate changes, instead there is a workaround in
the faq. The faq is probably wildly out of date, as I couldn't get stty to
change the baud rate on FreeBSD. I found that FreeBSD offers .init files for
each of the serial devices, and these should be used for configuring the
serial device.
Using the following command worked for me and allowed gpsd to speak to the
navspark.
# stty -f /dev/ttyU0.init speed 115200
# gpsd
I could then connect to gpsd and make sure it was working with cgps.
$ cgps -s -u m
I am not really happy with the navspark, the indiegogo made the board look
really cool, but so far no community has formed around the board. This has led
to a lack of approachable documentation and an ide only available as Linux and
Windows builds.
I would love to find a cheap gps that emits data over serial. The closest thing
is the Adafruit Ultimate gps, but it is far too expensive for what it is.
I have a pair of U-Blox PCI GPS cards, so far I haven't been able to get
them working with anything.
Using a TP-Link WR703N and a RTLSDR I decided to make a small
dedicated SDR box. Using rtl_tcp I can set up the box and a suitable
antenna and use it to receive IQ values over a wifi or ethernet link. Using the
wifi means I can do this without plugging a ton of crap into my laptop.
WR703N
The WR703N has been really well documented, with a full section of mods on its
wiki page. I have added serial console headers and an rp-sma antenna
connector to the box I used for this project.
These were fun to do, the serial connector makes debricking the WR703N a lot
easier, the rp-sma connector allows different antennas to be used with the
router. With some more gain behind it, I should be able to place the sdr box
somewhere high and out of the way and still be able to connect to it.
For getting OpenWRT onto the WR703N you can follow the generic flashing
instructions. Make sure to install Barrier Breaker or later, BB has a
prebuilt package for rtl_sdr.
I installed the rtl_sdr software via the web interface, but it can be
done from the command line with something like the following.
# opkg update
# opkg install rtl_sdr
rtl_tcp
Once you have the rtl_sdr packages installed, connect your rtl_sdr dongle to
the usb port then run the following.
$ rtl_tcp -a 192.168.1.1 -n 8 -b 8
This command will start rtl_tcp and have it listen on the 192.168.1.1 address
for external connections, without this it will only listen on localhost. If you
have configured your network differently you will want to change the listen
address.
I had a lot of trouble running rtl_tcp for more than a few seconds with a
client connected, this was fixed by configuring the buffer options. The -n
option configures the number of linked lists rtl_tcp will use, -b configures
the number of buffers it will use. I have had a quick look at the rtl_tcp
source, but I couldn't really figure out why this helped so much.
Viewing the data
The last thing to do to test this is connect a client. The rtl_sdr tools can't
connect to a rtl_tcp source, but we can connect and grab some data using
netcat.
$ nc 192.168.1.1 1234 > capture.iq
This might be enough if you have a process for dealing with iq data, but I like
to look at things. The GrOsmoSDR package comes with a couple of tools for
viewing ffts and waterfalls using GNURadio.
$ osmocom_fft -W -s 2000000 -f 144000000 -a 'rtl_tcp=192.168.1.1:1234'
Without any of the following flags it will show a plain fft:
-W Show a waterfall
-S Show a scope
-F Show the cool fosphor display
It took me a while to find a screenshot tool as useful as the built in
screenshotting tool in OS X. I looked again today and found the import tool
that comes as part of ImageMagick.
$ import screenshot.png
You can use it to capture an area on the screen with the above command or you
can capture a whole window.
$ import -window root screenshot.png
I will probably throw this into a script and bind it to a key for ease of use.
I have been working on stuff in latex recently and wanted something to trigger
regeneration of pdfs without manual intervention. At first I thought about
doing so in a loop every so often, but it didn't seem like the best approach.
Knowing about the cool kqueue framework I looked to see if there was a
utility to watch files for me. I found the wait_on command via a FreeBSD
forums post.
The wait_on command will wait until the watched file has been changed and then
exit. It is meant to be used within a loop. I threw this together in zsh. Now
when I :wq in vim, wait_on exits and my document rebuilds; evince picks up on
this and refreshes the display.
$ while true; do wait_on pres.tex; xelatex pres.tex; sleep 5; done
I have the sleep at the end to stop the document being built too often.
I have just finished reading Cryptonomicon, without any spoilers I can say
that at a certain point a character controls the LEDs on his keyboard. This
morning I found the kbdled utility via a forum post. It looks like a nice
simple way to make the keyboard do something useful.
As I write this there are not any packages available for FreeBSD arm. That
means on the Raspberry Pi I have to build the things I need from ports. Ports
gives a lot of control about how the software is built, but right now I just
want the tools I need installed.
It is hard to set up ports to install a collection of tools in a oner, but we
can use portmaster to do this for us. The Pi is quite slow and will take a long
time to build a small number of tools and their dependencies.
Normally portmaster will prompt for configuration for each package as it
builds, it will then build a list of packages and prompt again to install these.
The following line will install the listed tools without any prompting.
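Such a line looks something like this; the port names are my examples, and -y, -G and --no-confirm are the portmaster flags for answering yes to prompts, skipping make config and skipping the final confirmation. PM is overridable so the wrapper can be dry-run without portmaster installed:

```shell
# Non-interactive portmaster run over a list of ports.
PM="${PM:-portmaster}"

install_ports() {
    $PM -y -G --no-confirm "$@"
}
```

$ install_ports shells/zsh editors/vim sysutils/tmux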
I finally got all the components to use the bag of electret microphones I got
last year. Using an LM386 audio amp I set up a simple circuit to read values
from the microphone and set LEDs corresponding to the value read.
I had trouble finding a solid example of doing analog reads in a loop for the
msp430. This program sets up LEDs (on P1.0 and P1.6), sets up the ADC, then
sits in a loop. If the value on the ADC is greater than 512 the LED will go red.
I tested this by shouting at the micro controller, something I have done many
times before, but never with such satisfying results.
The cool thing we get with the msp430 and the Launchpad is on chip debugging.
This goes a long way to pave over the warts of writing straight C for the 430.
We will load up the blink program from last time, run it with the debugger
and pause execution.
Start up mspdebug as before, this time we pass a command for mspdebug to run
directly. The "gdb" command will cause mspdebug to listen on port 2000, gdb can
then connect and control the debugger.
$ mspdebug rf2500 "gdb"
Next we are going to start up msp430-gdb, load the program from before and
start it running.
$ msp430-gdb
(gdb) target remote localhost:2000
(gdb) file led.elf
(gdb) load led.elf
(gdb) continue
^C
(gdb) break main.c:14
(gdb) c # we can shorthand commands
Now between each continue we will see the LEDs toggle, red then green.
(gdb) continue
From gdb we can send commands to mspdebug directly with the monitor command.
In Aberdeen we have digital displays mounted in most of the bus stops, in fact
most major cities in the world probably have similar signs. The signs get their
data via radio broadcasts, these broadcasts have in fact been captured
before and reverse engineered.
For a long time I have been thinking about doing a similar thing and figuring
out the bus information that is in the air. I am sure I will get to it one day.
Well this morning as I headed off to work I caught a technician in the act of
debugging one of these signs. I grabbed a quick picture of the guy working, but
I didn't want to bother him.
It looked like the tech was using a serial cable from his laptop up to the
display. The antenna on the bus shelter looked much larger than the normal
ones.
A couple of years ago the TI Launchpad made quite a splash when TI released the
boards for just $5 each. The Launchpad uses the msp430 low power
microcontroller from TI; these microcontrollers don't have the same pretty face
as the avr microcontrollers in the Arduino world.
This means the code is a little harder to read and write. It is a lot closer to
assembly language than the high level Arduino C/C++. While it looks worse I
find it more fun to write and it will leave you with a much better
understanding of how the controller is working.
So let's load up a simple blinking light program (no more of that sketch
nonsense), fire up the debugger and stop the code as it is executing. Grab the
following code and Makefile and compile the main.elf target. If you are
using a different microcontroller then you should change the g2553 to the
microcontroller you are using.
Blink Program
#include <msp430g2553.h>
volatile int i = 0; //volatile stops the delay loop being optimised away
int j = 0;
int
main(void)
{
WDTCTL = WDTPW + WDTHOLD; //Stop the watchdog timer
P1DIR |= 0x41; //Set P1.0 and P1.6 (0x41) as outputs
P1OUT = 0x40; //Set P1.6 high, P1.0 low
for(;;) {
P1OUT ^= 0x41; //Toggle both P1.0 and P1.6
for(i = 0; i < 20000;i++){ //Loop for a while to block
nop();
}
}
return 0;
}
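Building and flashing can be sketched as a couple of commands; the file names follow the main.elf target mentioned above, and CC/MSPDEBUG are overridable so the steps can be dry-run without the toolchain:

```shell
# Compile for the g2553 and flash over the Launchpad's rf2500 interface.
# mspdebug's "prog" command writes the elf to the chip.
CC="${CC:-msp430-gcc}"
MSPDEBUG="${MSPDEBUG:-mspdebug}"

build_and_flash() {
    $CC -mmcu=msp430g2553 -o main.elf main.c &&
    $MSPDEBUG rf2500 "prog main.elf"
}
```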
I have been working on applications using the SPI user space api. One of the
devices I have been playing with is a PN532 NFC reader. The reader is
supported by libnfc and can communicate using serial, SPI, i2c or usb
depending on device support.
I wanted to get the nfc reader working over i2c, to form a baseline to compare
it against SPI. I couldn't get the nfc-list to show any NFC devices connected
to the Pi. I then tried to use the i2c -s command to scan the i2c bus, but
instead of device detection the command threw an error.
It turns out that the iic driver for the Pi only supports one ioctl,
I2CRDWR. That neuters most of the FreeBSD i2c tools as they use other
ioctls and error out on failure.
Learning that, I looked at the Makefile for libnfc, this time realising that
the i2c and SPI device options are both commented out. I missed this the first
time I looked at the FreeBSD port; there is still the option to use a serial
device once I dig out a usb serial adapter.
It is looking a lot harder than I thought to get devices working with user
space SPI on FreeBSD. It doesn't help that Linux has been the only operating
system with user space SPI support for quite a long time.
Last year I started working on a project in the space that I thought would be
pretty cool, a Mystery Box. I made a box out of foam board, mounted a servo as
a catch. Inside I wired up an Arduino to control the servo. The Arduino
connected to a Raspberry Pi over SPI; I used SPI because it was a nice simple
protocol to implement on both the Arduino and the Raspberry Pi.
The Mystery Box was going to run a BBS which had control over the servo. My
idea was to use the box as a simple CTF target, with the servo giving instant
and substantial feedback for success.
Around this time I had been asked to port NewCWV to FreeBSD, we wanted to
have more than one implementation of the proposed standard available. Doing
some development in the FreeBSD kernel made me want to look at using the
operating system in other places.
Up to this point I had been using Linux on the Pi in the Mystery box, but I
thought it would be fun to try FreeBSD. FreeBSD on the Pi was in a reasonable
state, I didn't have trouble getting the Pi to boot. When it came to controlling
the Arduino over SPI I hit a snag.
There wasn't (and still isn't) user space SPI support in FreeBSD. This means
that I can control devices connected to SPI from a kernel driver, but I can't
do so from user space. Kernel code is harder to write, not portable and means I
can't reuse Linux code. User space code is easier to write and if the interface
is similar to existing ones I can reuse a lot of other code.
I was enjoying writing the NewCWV port and I thought to myself: "I should make
the world a better place and write a user space SPI layer". And that is exactly
what I set out to do over the next few months.
This week, well over 9 months later, I finally have a working SPI layer. I can
issue read(2), write(2) and ioctl(2) commands and see the bus burst into life
with data flowing across.
On the other end of the bus I have a Trinket Pro (an Adafruit arduino clone).
The Trinket acts as a SPI slave; when there is activity on the bus it spits out
the values from the master over UART and writes the SPI values back onto the
bus with 10 added.
The Arduino seems to struggle at the default bus speed of 500KHz, but runs fine
when I lower the speed with a sysctl down to 50KHz.
Speaking to the Arduino is okay, but it doesn't make a very exciting demo. I
had a couple of devices that can be controlled over SPI, an SSD1306 OLED
Screen and a PN532 NFC Reader.
The NFC Reader is supported by libnfc, a quick look shows that libnfc is
available in the FreeBSD ports tree. I looked at the libnfc code while writing
the user space layer and it seemed pretty straight forward. libnfc opens the SPI
device and calls the SPI_IOC_MESSAGE ioctl to send spi_ioc_transfer structs to
the driver.
The interface I have written is a little different, using an object to describe
the transfer similar to the way iic(4) works. To add FreeBSD support I need to
create the struct and swap the ioctl(2) call, this should be straight forward.
Using the OLED is a little different. Adafruit have provided a python
library to speak to the screen using either i2c or SPI. The python library
imports a module to speak to SPI and seems to mostly use read(2) to control the
screen. This is going to be harder to port across and get working.
There is also an Adafruit Arduino library for the SSD1306 and Arduino compatible
boards. This is C++ that has been written to run on an Arduino rather than on a
unix machine. This code could probably be slimmed down, with the calls to
fastspiwrite swapped out to calls to the kernel interface.
My implementation is still a little rough around the edges, the code needs to
be tidied up, moved into the kernel directly and tested on the latest head. I
think that getting user space code working with it will show up any bugs, it
will certainly make for more meaningful test cases.
My next step is to get the NFC reader working with libnfc and the raspberry pi,
then I can start work on the screen. Once I have some SPI examples working I
might even get back to setting up the Mystery Box for a CTF in the space.
31c3 finished 21 days ago, probably enough for me to get past the conference
high,
but also enough time for me to catch up on almost all of the talks. All of the
talks from 31c3 and most of the other congresses have been put online at
media.ccc.de.
I took notes during Congress, but I had a hard time turning them into a post.
If anyone wants to see my notes I am sure they could appear. Instead here are
some thoughts.
Congress is Europe's largest temporary art installation. The CCH is a simply
massive building; after 5 days of wandering around it I am still not sure I
saw everything. It wasn't until day 4 of Congress that I realised the floor
numbers were not floor numbers, but actually the Saal you were closest to. The
ones and twos I could see around me were not helpful directions.
The building was augmented by the CCC to make it a home for hackers. The lights
were dimmed, there were blinkenlights in every corner and just to make
things even better there seemed to be a pop up interactive installation at
random intervals. A pneumatic tube system was run around the ground floor of
the building.
Of course there were talks 12 hours of the day, but the talks were streamed
and available later. There were too many things that could only be found in the
4 days of Congress to spend them sitting in a full lecture theatre.
Instead we held court at our table in the international hackerspace village.
We hung around with the crazy cooks in the Food Hacking Base, argued politics
in Noisy Square and wandered the cavern in a daze.
At night (due to the lighting it was hard to tell when that was) we would be at
our table hacking on something super cool, or hiding in the amazing nightclub.
It is probably impossible to describe Congress; there is just too much
happening. It is probably unfair to try and give someone else a picture; the
only real way to know what it is like is to experience it.
Walking home from the hackerspace last night I came across this interesting
mast behind a car parked on the pavement. I had to grab a picture of this
strange thing on Union Street.
The guy operating the mast spotted me taking the picture and came down for a
chat. This mast was acting as a 4G base station, there was a second vehicle
driving around the city, listening for this mast to map 4G propagation. It
turns out that Aberdeen city council are planning to roll out 4G across the
entire city, with free access. The council want to use this for fleet
management and I think it is probably part of their initiative to improve
bandwidth in the city.
According to the operator of this mast the 4G won't just cover the city
centre; they have been mapping industrial estates in Altens and out towards the
edges of the Bridge of Don.
This weekend I got FreeBSD on my Chromebook Snow in a usable state. Getting
wifi going was a bit of a bother. I have an Edimax wifi adapter, but the
default kernel config leaves out support for wifi and the urtwn device driver.
The Beaglebone Black page on the FreeBSD wiki has a kernel config that
includes the drivers I need. I took the wifi config lines and added them to a
CHROMEBOOK-WIFI config so I could build a kernel for the Chromebook with wifi
support.
#USB WiFi
# Wireless NIC cards
device wlan # 802.11 support
options IEEE80211_DEBUG
device wlan_wep # 802.11 WEP support
device wlan_ccmp # 802.11 CCMP support
device wlan_tkip # 802.11 TKIP support
device wlan_xauth
device firmware # Required to load firmware
device urtwnfw # Firmware for RTL driver below
device urtwn # Realtek RTL8188CU/RTL8192CU
After building the new kernel and moving it over to the USB stick I use for the
Chromebook, I needed to tell FreeBSD to accept the license terms for the wifi
firmware.
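The Realtek firmware module asks for this acknowledgement via a loader tunable; assuming the urtwnfw firmware above, the line goes in /boot/loader.conf:
# /boot/loader.conf
legal.realtek.license_ack=1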
I found it quite difficult to get GPGME working with Mutt on OS X; I
was using Homebrew to install mutt. I could see the option in the brew
formula to use GPGME, but it was set as an optional dependency. I fought
with it for a while then jumped across to #homebrew on freenode to get an
answer.
I had to force brew to build mutt from source to get the dependency included.
You will have to uninstall mutt if you have already installed it.
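A sketch of the reinstall; the option names are assumptions based on the Homebrew mutt formula of the time and may differ in your version:

```shell
# Option names are from the old Homebrew mutt formula; check `brew options mutt`.
brew uninstall mutt
brew install mutt --build-from-source --with-gpgme
```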
I found a tool called pv via a Hacker News thread. pv, or pipe viewer,
lets you watch data as it flows through a Unix pipe. This is really
helpful when dealing with long running commands. I used it today to check on
the progress of encrypting a large tar archive.
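That use looks something like this sketch; the filenames are hypothetical, and pv simply sits in the middle of the pipe reporting bytes transferred, rate and elapsed time:

```shell
# Filenames are examples; pv shows progress while gpg encrypts the stream.
tar cf - ./photos | pv | gpg --symmetric -o photos.tar.gpg
```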
One of the facilities at campGND is going to be a wireless network. The
hope is to have the network running for the majority of the time. I have built
a wireless network at a campsite before, that was made easier by having
guaranteed bandwidth from a satellite terminal.
The plan is to have a wireless network for the campsite served by a
MikroTik, using a wireless bridge to reach the farmhouse. The
farmhouse is out of sight of the fields we are planning to use, so instead of
having wifi do the full jump I am going to run ethernet as far as possible.
At campGND we are depending on a few things that could be fickle.
BT Home Broadband
A long run of ethernet
Solar Cells and a battery for network power.
Our final backhaul is the BT network; the site is pretty off the grid for phone
reception so we are stuck with BT. We have to be able to make a long ethernet
hop from the farmhouse before we can do a wireless link down to the site. The
solar cells will provide enough to run the wireless access points during the
day. I think at night we might be a little too drunk to care.
I still need to do some testing of the wireless hardware but the plan is to use
the following.
campGND is coming up and it is time to start talking about my projects for
the weekend. With our remote location I thought it would be fun to play with
something flaming and dangerous.
Rockets were the first thing that came to mind, I haven't done much with
rockets beyond launching fireworks a couple of times. Doing my first launches
at campGND would probably slow everything down somewhat. I got myself a starter
kit from Model Rocket Shop and some extra motors, for a bigger bang.
Iain and myself went out to Balmedie Beach to have a test run with my new
toy. We got a couple of videos of the rockets going up, excuse the portrait
slow-mo.
On the first launch the recovery canopy got slightly melted by the rocket
motor. This meant we didn't really have any recovery mechanism for the rest of
the launches. The beach was pretty deserted in the dunes so this wasn't a big
deal. At campGND losing recovery could make things a little tricky.
For campGND I am planning on adding some telemetry to the rockets, using
an Arduino and some sensors. I also want to try adding a camera to the nose
cone on a rocket.
For my new business cards I wanted to impose data from an experiment onto the
background of an image. For the best results I wanted to render the plot of data
onto a transparent png without any axes, values or the standard box.
For campGND we need to extend a wireless network
about 500m from the farm down to the site. We have been trying to salvage
some equipment but were having trouble getting control of a pair of Senao
wireless bridges (Senao Long Range Multi-Client Bridge).
The devices had previously been configured by someone else to bridge a network
between two buildings; the problem being we had no idea how these boxes had
been set up. Looking online there was nothing helpful about factory resetting these
boxes unless you already had access.
I decided to put a box on our ethernet and use tcpdump to scan for any traffic
coming from the MAC Address on the bottom of the bridge.
# tcpdump -e -i en0 ether src 00:02:6F:45:C9:83
After a reboot of the bridge the following appeared in my terminal.
Bingo, exactly what I was looking for. That arp request tells us where the
bridge thinks it is: 10.0.2.1.
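To reach that address my machine needed a leg on the same subnet. One way to do that (a sketch; the alias address 10.0.2.2 is an assumption, any free address on that /24 will do) is an interface alias:

```shell
# Put en0 on the bridge's subnet without disturbing its main address.
sudo ifconfig en0 alias 10.0.2.2 netmask 255.255.255.0
```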
Now I could navigate to the bridges web interface, but I was still locked out.
I read through the manufacturer's guide for the bridge, but I still couldn't see
anything that looked like a factory reset. The guide did mention that the
default ip for the bridge was 192.168.1.1 and that it used admin:admin as the
login.
I decided to try powering on the bridge with the hardware button held down. I
left tcpdump running so I would see any change on the bridge's interface. I held
down the reset switch and powered the bridge on, counting out 30 seconds. I then
toggled the power and finally saw
For some reason we change the time zone throughout the year. It doesn't make
any sense to me and it makes it hard when I am using applications on my vps.
To avoid confusion it is nice to have irssi running at local time.
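One way to do that (a sketch; Europe/London is an example zone for a UK user) is to set TZ just for the irssi process, overriding whatever the vps default is:

```shell
# Run irssi with a fixed timezone, regardless of the system default.
TZ="Europe/London" irssi

# The same mechanism demonstrated with date:
TZ="UTC" date +%Z   # prints UTC
```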
This is the worst day of the year for the internet. A terrible internet
holiday: you can't really trust anything you read and most of it is really
just quite annoying. Apart from Google Pokemon, that was cool.
There is a lot of misinformed dialogue about TCP and UDP for gaming. The 1024
Monkey's article covers a lot of real issues with using TCP and doesn't fall
back on the usual argument of "Well I am pretty much reimplementing TCP". If that
were the case you probably wouldn't get much of your game done.
Computers are now very complex; that AnandTech article really blew my mind.
Mike Ash's article on 64-bit ARM has the same insane sort of detail that the
AnandTech article does.
Computers are cool.
I am a big fan of Stripe, recently using them for 57North's MakeIt-Glo
workshop. The payment process was smooth and easy to use, and we didn't hear
about any issues with Stripe from any of the attendees either.
Bitcoin I am still unsure of. I would love to make £1000s from idle
speculation, but I haven't been able to buy it from anywhere other than
people in the real world.
Being able to use both together can only be seen as a good thing; the more
services that start to take bitcoin the better. Services like Stripe that
are genuinely legitimate and have good standing go a long way to removing a
lot of the alarmism around the currency.
Tarsnap is one of my favourite services on the
internet. If you are looking for secure small scale off site backup I can't
think of anything better.
The decapping of the 3DS chip is the sort of reverse engineering that just
amazes me. The author mentions Bunnie's Book
as a source of inspiration and I have to agree. Bunnie's book operates well
above the level of chip decapping, but it gives you a window into an entire
world of engineering that is usually hidden. Last year Bunnie released his
book for free in memory of Aaron Swartz.
Bunnie is a really cool guy and a hero of hackers around the world. Among
other projects he is making the ultimate engineer's laptop.
On Saturday the 8th of March 2014 we did a run through of the MakeIt-Glo
workshop. Afterwards I went to the pub, leaving my bag (laptop and camera)
in the space. Ed and Calum stayed in the space.
Charlene texted me and woke the hangover at 0500 on the 9th. Unable to sleep,
I headed into the space to get my laptop and bag. The time lock
was disabled at 0657 when I came in and the main door was open for the
world.
I came up the stairs and saw that the door to the kitchen area had been
pried open and damaged all round. I saw a guy that I thought was a
locksmith (a hungover head is optimistic); he pointed at our door and said
something like "It is locked". I unlocked the door and walked up to him, I
fumbled questions about his name and what was going on. He went to leave,
but I saw my (United Pixel Workers) laptop sticker sticking out of the bag.
I said it was my laptop, he put the bag down and I grabbed it, my camera and
laptop charger. He placed both the bags he was holding on the floor.
I walked across the lab and put my stuff in my bag then
pulled out my phone. He said "I've called the police already" my witty
retort was "Well I'm doing it again". As the police call center answered he
disappeared down the stairs.
Police came and took immediate details and put out a bulletin. Another
robbery happened on King St while the officers were talking with me; both the
officers and I tied the two together. A crime scene officer
came and fingerprinted the broken door and the items we were sure he had touched.
This bloke didn't wear gloves, tried to break through an unlocked door and
didn't manage to grab our beer money jar. He broke into a dentist's; I have no
idea what he was expecting to steal. The
bastard tried to steal our drinks cupboard.
Years ago I got a copy of
Designing BSD RootKits by
Joseph Kong. A combination of lack of hardware and probably my own ability
has stopped me from working through the book so far.
But now with 57 North up and running and an influx of free
machines I have everything I need.
The machine I have been given is part of an old biomed cluster and is really
overpowered for what I need. As a 2U server it doesn't have a floppy or CD
drive to easily install an OS, but it does have the ability to boot off of a
USB stick.
The first thing I tried to get a FreeBSD installer running was burning an
ISO image to a USB stick with UNetBootin. I think the project might actually
be dead as the newest version of FreeBSD it supports is 8.0. UNetBootin
takes forever to set up the USB stick and after the second failed attempt I
couldn't stomach another.
I dug around the FreeBSD install guides for a while and then found something
that should have been really obvious. FreeBSD supports installation from USB
and provides a pre-packaged .img file to dd onto a USB stick.
All the information is
here
with the USB stuff near the bottom. FreeBSD is nice enough to include simple
instructions that work even from Windows. This meant I could test the new
media from work and all seems good.
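The write itself is a one-liner; this is a sketch where the image filename and /dev/da0 are examples, so double check the device name before running it:

```shell
# Write the memstick image to a USB stick.
# WARNING: of= is destructive; make sure /dev/da0 really is the stick.
dd if=FreeBSD-10.0-RELEASE-amd64-memstick.img of=/dev/da0 bs=1M conv=sync
```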
Dealing with a horrible database this last week, I found the need to
combine things in a reasonable way. It took a lot of searching to find out
how to query on multiple sets, so I thought I would put it here.
var roles = (from x in userRoles
             from y in editUser.UserRoles
             where y.SOXRole && y.Id == x.RoleId
             select x).ToList<DBModel.UserToRoles>();