S3 Direct Form Posting and Go

Introduction

I recently ran into a situation that required files to be posted directly to S3 without an intermediate machine acting as a middleman.  This brought some obvious challenges, including (but not limited to) showing a progress bar or spinner during upload.  Posting directly to S3 also initially meant that any errors displayed to the user would be the ugly, rather abrupt XML messages S3 provides.

So is there a better way?  Of course there is, and here’s how I did it.

Getting Started

First and foremost, before anything gets posted to S3 you need to ensure that you can create an S3 signature properly.  I found this to be the most tiresome and time-consuming part of the process.  Reading through outdated docs and non-working samples is enough to make any seasoned programmer want to rage quit after a time.  However, through perseverance and a lot of trial and error, I figured it out.  If you're using Go, the following examples will be perfect for you; if you're using a different language, there's probably already a working sample out there for you to use.  Just in case, however, I will precede each with pseudo code so that it may be ported to other languages where needed.

Encryption

The first thing we are going to want to do is create a function for handling HMAC SHA 256.  Strictly speaking this is a keyed hash (a MAC) rather than encryption, but for the purposes of this post I'll keep the encrypt/encode naming.  We are going to be using this function over and over again throughout this post.

Pseudo Code

function:
    encrypt(key, value)
        return hmacSha256(key, value)

In Go, I’ve defined the function for the sake of this post as the following.

Go Code

import (
    "crypto/hmac"
    "crypto/sha256"
)

func AwsEncode(key, value []byte) []byte {
    mac := hmac.New(sha256.New, key)
    mac.Write(value)
    return mac.Sum(nil)
}

In understanding the above, let's walk through it line by line.  First we define the function as accepting key and value as byte slices.  The value is what will be hashed; the key is the secret used to key the hash.  The function returns the computed MAC as a byte slice.  Inside, we create a new HMAC specifying sha256 as our hash scheme and initialize it with the key.  Next, we write the value into it and return the resulting sum.
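As a quick sanity check, here's a complete, runnable sketch using the function above; the key and value strings are arbitrary, and the hex encoding is only for display (the signing chain later uses the raw bytes):

package main

import (
    "crypto/hmac"
    "crypto/sha256"
    "encoding/hex"
    "fmt"
)

func AwsEncode(key, value []byte) []byte {
    mac := hmac.New(sha256.New, key)
    mac.Write(value)
    return mac.Sum(nil)
}

func main() {
    //hex encoding here is only for display purposes
    fmt.Println(hex.EncodeToString(AwsEncode([]byte("my-key"), []byte("my-value"))))
}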

Signing

Next we're going to begin the process of signing the payload, which is required for S3 form posting to work.  Once again, let's start with pseudo code and then show the actual implementation.

For this we're going to create a series of HMAC calls, each based on the previous value and feeding into the next.  It sounds confusing at first, but it's not that bad; here's how it works.

Basically, we get the current date in YYYYMMDD format and HMAC SHA 256 encode it using your AWS secret as the key.  For the sake of AWS V4 authentication we'll need to add a prefix to your AWS secret.  Do not make the mistake of hex or base64 encoding the result here; leave it in its raw byte state.

Pseudo Code

yyyymmdd = Now.format("yyyymmdd")
encrypt("AWS4" + your_aws_secret, yyyymmdd)

Go Code

import (
    "fmt"
    "crypto/hmac"
    "crypto/sha256"
    "time"
)

now := time.Now()
yyyymmdd := fmt.Sprintf("%d%02d%02d", now.Year(), now.Month(), now.Day())
dateKey := AwsEncode([]byte("AWS4" + your_aws_secret), []byte(yyyymmdd))

In true fashion, let's go line by line here.  First, we create a yyyymmdd value, ensuring the month and day values are left padded with 0.  Once the yyyymmdd string is created we move on to creating the encrypted dateKey, which is composed of your AWS secret prefixed with the string "AWS4", used as the key to encode yyyymmdd.  Lastly we ensure these values are passed in as byte slices and store the returned dateKey value.

We will use this pattern to create the entire signature.  The only variation comes at the very end, when we return the final signed value hex encoded. Also note, the value cited as your_aws_region should contain the string name of the region, such as "us-east-1" for example.

Pseudo Code

function:
    sign (string_to_sign)
        dateKey = encrypt("AWS4" + your_aws_secret, yyyymmdd)
        dateRegionKey = encrypt(dateKey, your_aws_region)
        dateRegionServiceKey = encrypt(dateRegionKey, "s3")
        signingKey = encrypt(dateRegionServiceKey, "aws4_request")
        signature = encrypt(signingKey, string_to_sign)
        return hex(signature)

Go Code

import (
    "fmt"
    "crypto/hmac"
    "crypto/sha256"
    "encoding/hex"
    "time"
)

func AwsSign(stringToSign string) string {
    your_aws_secret := "SOMEAWSSECRETKEYSTOREDINYOURIAMSETTINGS"
    your_aws_region := "your-region-1"

    now := time.Now()
    yyyymmdd := fmt.Sprintf("%d%02d%02d", now.Year(), now.Month(), now.Day())

    dateKey := AwsEncode([]byte("AWS4" + your_aws_secret), []byte(yyyymmdd))
    dateRegionKey := AwsEncode(dateKey, []byte(your_aws_region))
    dateRegionServiceKey := AwsEncode(dateRegionKey, []byte("s3"))
    signingKey := AwsEncode(dateRegionServiceKey, []byte("aws4_request"))
    signature := AwsEncode(signingKey, []byte(stringToSign))
    return hex.EncodeToString(signature)
}

Now is probably a good time to note that the above code is laid out for learning purposes; variable names such as your_aws_secret are there solely to draw attention.  Your final implementation should pull the secret and region from a configuration file, or accept them as function parameters, depending on your requirements.

That being said, let's go line by line.  First we've set up a function called AwsSign which takes a stringToSign parameter; this will be the final content to be signed.  In the case of direct form posting, this variable will contain the base64-encoded JSON policy definition which we will describe later.  The AwsSign function returns a string which will be the final signature value.  The next two lines, in a real implementation, will contain your actual AWS secret and AWS region name respectively.

We then generate the yyyymmdd string as previously described and derive the dateKey from the "AWS4" prefix, your AWS secret, and the yyyymmdd value.  We take that value and feed it back into the AwsEncode function along with the AWS region.  We then take the dateRegionKey and feed it back in again along with the service we wish to use, in this case "s3".  We feed that value back in yet again along with the "aws4_request" string, telling AWS that we are using a version 4 request scheme.  Finally, we pass that value into AwsEncode one last time along with the stringToSign value, completing the chain of encoding.  The last step is to hex encode the byte slice into a string and return it as the final signature to be used with direct form posting to S3.
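To make the call concrete, here's a quick sketch of how AwsSign gets used; policyJSON is a stand-in for the policy document we'll build in the next section:

//policyJSON stands in for the policy document described below
policyJSON := []byte(`{"expiration": "...", "conditions": ["..."]}`)

//sign the base64-encoded policy; the result goes into the x-amz-signature form field
b64Policy := base64.StdEncoding.EncodeToString(policyJSON)
signature := AwsSign(b64Policy)
fmt.Println(signature) //64 hex characters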

I know that was a bit of a gallop and hopefully it made sense.  If not, please feel free to contact me or leave a comment below and I'll do my best to describe what's going on here.  For reference, the following might provide a more visual flow of the above.

https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html (scroll to the bottom of the page)

Policy

Now we start getting into the guts of the operation: the policy.  The way the form post works is that you define a policy, that policy gets base64 encoded and added as a form field named "policy" (go figure), and then that same base64-encoded policy gets signed using the AWS signature function we defined above, with the returned signature value going into a form field called "x-amz-signature".  One of the caveats to understand with the policy is that everything defined in it has to match the form.  We'll discuss this in a little more detail later, but for now, let's define a simple working policy.

Example

{
    "expiration": "2019-02-13T12:00:00.000Z",
    "conditions": [
        {"bucket": "your_bucket_name"},
        {"success_action_redirect": "https://yourserver.com/your_landing_page"},
        ["starts-with", "$key", "your_root_folder/perhaps_another_folder/"],
        {"acl": "public-read"},
        ["starts-with", "$content-type", ""],
        {"x-amz-meta-uuid": "14365123651274"},
        {"x-amz-server-side-encryption": "AES256"},
        {"x-amz-credential": "your_aws_id/yyyymmdd/your_aws_region/s3/aws4_request"},
        {"x-amz-algorithm": "AWS4-HMAC-SHA256"},
        {"x-amz-date": "20190212T000000Z"}
    ]
}

So now the fun, let’s walk through exactly what in tarhooty is going on above.

expiration: The first thing you'll probably notice is the expiration field.  This field describes how long the transaction should live.  An easy way to handle this, for example, would be to take the current date-time and add one day to it, allowing a period of twenty-four hours to upload the file.  For a more tightened approach, depending on your needs, you could scope it to seconds, minutes or hours.
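As a sketch in Go, a twenty-four hour expiration in the format S3 expects could be generated like this (forcing UTC keeps the timestamp unambiguous):

//expire the transaction 24 hours from now
expiration := time.Now().UTC().Add(24 * time.Hour).Format("2006-01-02T15:04:05.000Z")
//e.g. "2019-02-13T12:00:00.000Z"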

conditions: Next, let’s look at the conditions field.  This is an array which describes the conditions that must be met in order for the post upload to succeed.  Anything not met will automatically cause a failure by the AWS S3 API.

bucket: The bucket field is the top most name of your S3 folder structure.  This field is required to ensure the upload goes to the correct bucket.

success_action_redirect: The success_action_redirect field specifies where you would like S3 to redirect the browser after a successful upload.

starts-with: The starts-with condition allows one to define a restriction for a field in the post upload form.  For example, the "starts-with" condition on "$key" requires that the "key" value in the HTML form must in fact start with whatever is specified in the last parameter (ex. "uploads/public/").  When the last parameter in this condition is an empty string "", anything goes, and the condition will be satisfied as long as the field is present.

acl: The acl (Access Control List) field determines the visibility of the uploaded object.

x-amz-meta-uuid: This field is a bit of a bizarre one, as far as I can tell this field is required and it’s required to have the following value of “14365123651274”. No clue as to why, wouldn’t work without it…when in Rome.

x-amz-server-side-encryption: This field specifies the server-side encryption S3 should apply to the stored object, in this case AES256.

x-amz-credential: The credential field is composed of several values and takes the form of {your_aws_id}/{yyyymmdd}/{your_aws_region}/s3/aws4_request; note the date in the middle, which must match the yyyymmdd value used in the signing chain. A working example using the values from our AwsSign function would take the following form: "AKIAYOURAWSID/20190212/us-east-1/s3/aws4_request".
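In Go, the credential scope could be assembled like so; awsID, yyyymmdd and awsRegion are assumed to hold your values:

//credential scope: access key / date / region / service / terminator
credential := fmt.Sprintf("%s/%s/%s/s3/aws4_request", awsID, yyyymmdd, awsRegion)
//e.g. "AKIAYOURAWSID/20190212/us-east-1/s3/aws4_request"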

x-amz-algorithm: The algorithm field tells the S3 api which algorithm was used for signing.

x-amz-date: The date field is today's date.  For simplicity's sake, I zeroed out the hours, minutes and seconds above; however, you could and should use a complete date taking the form YYYYMMDDThhmmssZ.
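In Go, a complete x-amz-date value could be produced with a sketch like this:

//x-amz-date must be in UTC, formatted as YYYYMMDDThhmmssZ
amzDate := time.Now().UTC().Format("20060102T150405Z")
//e.g. "20190212T143015Z"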

Constructing The Form

Now that we have a general understanding of all the moving parts involved in making the post request work, let’s bring them all together.  First a walk through of what we’re going to do.

We’re going to need to build a policy with dynamic values populated at request time.

When building the form, we’re going to need to create our policy and base64 encode it and pass it into the form as “policy”.

We’ll also need to AwsSign our policy and pass it into the form as the “signature” field.

Finally, we’ll need to ensure all the values we’ve specified in our policy are met by our form.

Here's an example HTML form for use with our policy described above.  Please note, when developing your form every field referenced by the policy must be present, and the file field must be the last element in the form; S3 ignores anything that follows it, and the post will fail if a policy condition goes unmet.

<form method="post" action="https://your_bucket_name.s3.amazonaws.com/" enctype="multipart/form-data">
<input type="hidden" name="key" value="your_root_folder/perhaps_another_folder/your_file_to_upload.txt" />
<input type="hidden" name="acl" value="public-read" />
<input type="hidden" name="success_action_redirect" value="https://yourserver.com/your_landing_page" />
<input type="hidden" name="content-type" "" />
<input type="hidden" name="x-amz-meta-uuid" value="14365123651274" />
<input type="hidden" name="x-amz-server-side-encryption" value="AES256" />
<input type="hidden" name="x-amz-credential" value="your_aws_id/your_aws_region/s3/aws4_request" />
<input type="hidden" name="x-amz-algorithm" value="AWS4-HMAC-SHA256" />
<input type="hidden" name="x-amz-date" value="20190212T000000Z" />
<input type="hidden" name="policy" value="{replace-with-actual-base64-encoded-policy}" />
<input type="hidden" name="x-amz-signature" value="{replace-with-actual-aws-sign}" />
<input type="file" name="file" />
<input type="submit" value="Upload" />
</form>

As you can see above, all of the fields specified in the policy are present in the form.  The "policy" field contains the direct result of taking the policy JSON as a string and running it through a base64 encode.  The "x-amz-signature" field contains the result of running that base64-encoded policy through the AwsSign function.  Finally, we added a file input and submit button to select the file and submit the form; because the hidden "key" field is fixed here, the object will be stored as "your_file_to_upload.txt" regardless of the local file's name.

Putting It All Together

Server Side

import (
    "crypto/hmac"
    "crypto/sha256"
    "encoding/base64"
    "encoding/hex"
    "fmt"
    "time"
)

func AwsEncode(key, value []byte) []byte {
    mac := hmac.New(sha256.New, key)
    mac.Write(value)
    return mac.Sum(nil)
}

func AwsSign(stringToSign string) string {
    your_aws_secret := "SOMEAWSSECRETKEYSTOREDINYOURIAMSETTINGS"
    your_aws_region := "your-region-1"

    now := time.Now()
    yyyymmdd := fmt.Sprintf("%d%02d%02d", now.Year(), now.Month(), now.Day())

    dateKey := AwsEncode([]byte("AWS4" + your_aws_secret), []byte(yyyymmdd))
    dateRegionKey := AwsEncode(dateKey, []byte(your_aws_region))
    dateRegionServiceKey := AwsEncode(dateRegionKey, []byte("s3"))
    signingKey := AwsEncode(dateRegionServiceKey, []byte("aws4_request"))
    signature := AwsEncode(signingKey, []byte(stringToSign))
    return hex.EncodeToString(signature)
}

//define a policy (in a real application, generate the date values dynamically)
policy := []byte(`{
    "expiration": "2019-02-13T12:00:00.000Z",
    "conditions": [
        {"bucket": "your_bucket_name"},
        {"success_action_redirect": "https://yourserver.com/your_landing_page"},
        ["starts-with", "$key", "your_root_folder/perhaps_another_folder/"],
        {"acl": "public-read"},
        ["starts-with", "$content-type", ""],
        {"x-amz-meta-uuid": "14365123651274"},
        {"x-amz-server-side-encryption": "AES256"},
        {"x-amz-credential": "your_aws_id/yyyymmdd/your_aws_region/s3/aws4_request"},
        {"x-amz-algorithm": "AWS4-HMAC-SHA256"},
        {"x-amz-date": "20190212T000000Z"}
    ]
}`)

//base64 encode the policy, then sign the encoded string
b64Policy := base64.StdEncoding.EncodeToString(policy)
signature := AwsSign(b64Policy)

Client Side

<form method="post" action="https://your_bucket_name.s3.amazonaws.com/" enctype="multipart/form-data">
<input type="hidden" name="key" value="your_root_folder/perhaps_another_folder/your_file_to_upload.txt" />
<input type="hidden" name="acl" value="public-read" />
<input type="hidden" name="success_action_redirect" value="https://yourserver.com/your_landing_page" />
<input type="hidden" name="content-type" value="" />
<input type="hidden" name="x-amz-meta-uuid" value="14365123651274" />
<input type="hidden" name="x-amz-server-side-encryption" value="AES256" />
<input type="hidden" name="x-amz-credential" value="your_aws_id/yyyymmdd/your_aws_region/s3/aws4_request" />
<input type="hidden" name="x-amz-algorithm" value="AWS4-HMAC-SHA256" />
<input type="hidden" name="x-amz-date" value="20190212T000000Z" />
<input type="hidden" name="policy" value="{{.b64Policy}}" />
<input type="hidden" name="x-amz-signature" value="{{.signature}}" />
<input type="file" name="file" />
<input type="submit" value="Upload" />
</form>

The above client side and server side code examples are obviously not meant to be implemented verbatim; however, they should be enough to get you started and heading in the right direction.
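If you render the form server side, Go's html/template package is one option for injecting the computed values.  The following is a minimal sketch under that assumption; upload.html, uploadForm, and the placeholder strings are illustrative, not part of any library:

package main

import (
    "html/template"
    "net/http"
)

//values computed server side as shown above
type uploadForm struct {
    B64Policy string //base64-encoded policy JSON
    Signature string //AwsSign(B64Policy)
}

//uploadPage renders upload.html, which holds the form markup with
//{{.B64Policy}} and {{.Signature}} placeholders
func uploadPage(policy, signature string) http.HandlerFunc {
    tmpl := template.Must(template.ParseFiles("upload.html"))
    return func(w http.ResponseWriter, r *http.Request) {
        tmpl.Execute(w, uploadForm{B64Policy: policy, Signature: signature})
    }
}

func main() {
    //in a real app these values come from the signing code above
    http.HandleFunc("/upload", uploadPage("BASE64_POLICY", "SIGNATURE"))
    http.ListenAndServe(":8080", nil)
}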

Moving Beyond Post – Ajax

A few things may have jumped out at you with the approach above.  The required values need to either be populated dynamically or rendered into the form by the server beforehand; both can work.  The main issue I had with both of these approaches is that they require the key name to be known before the form is constructed, which could be problematic depending on your requirements.  The other is that there's no spinner or loading indicator for the user; the user is instead just required to wait and hope all went well.  For me, that's just not good enough, so I devised a way to use AJAX (XHR) to upload the file without refreshing the page. If you're interested, then read on.

So how could we get this solution working in a more elegant manner?  How about this workflow?

The user sees a file button, clicks the file button and selects the file to be uploaded.

The onchange event is triggered for the file input and in turn sends an AJAX (XHR) request to the server to generate all the fields required to populate the client form.  The returned values could contain the full key using the selected filename, the signature, everything.
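On the server, that endpoint could look something like the following sketch in Go.  The route, struct, and placeholder values are illustrative (wire in the policy-building and AwsSign code from earlier); the JSON field names match the json.AwsKey / json.AwsSignature accessors used in the snippet below:

package main

import (
    "encoding/json"
    "net/http"
)

//awsFields mirrors the form fields the client needs; the names here are
//illustrative, not a fixed contract
type awsFields struct {
    AwsKey       string `json:"AwsKey"`
    AwsAcl       string `json:"AwsAcl"`
    AwsPolicy    string `json:"AwsPolicy"`    //base64-encoded policy
    AwsSignature string `json:"AwsSignature"` //AwsSign(base64 policy)
}

func awsFieldsHandler(w http.ResponseWriter, r *http.Request) {
    filename := r.URL.Query().Get("filename")

    fields := awsFields{
        AwsKey: "your_root_folder/perhaps_another_folder/" + filename,
        AwsAcl: "public-read",
        //build the policy with this exact key, base64 encode it, then sign it
        AwsPolicy:    "BASE64_POLICY",
        AwsSignature: "SIGNATURE",
    }

    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(fields)
}

func main() {
    http.HandleFunc("/aws-fields", awsFieldsHandler)
    http.ListenAndServe(":8080", nil)
}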

When the AJAX (XHR) request returns with all the required form values, the client constructs a FormData object and appends the data as form fields.

//assuming json contains server side AWS values and
//using jQuery to communicate with the file input element

var formData = new FormData();
formData.append("key", json.AwsKey);
formData.append("acl", json.AwsAcl);
...
formData.append("policy", json.AwsPolicy);
formData.append("signature", json.AwsSignature);
formData.append("file", fileInput.prop("files")[0], input.attr("name));
formData.append("upload_file", true);

Finally, once the FormData object has been fully populated with all of the required AWS field values, the form can be sent to the AWS endpoint specified earlier.

//assuming jQuery is being used to post the XHR request to AWS

//show spinner

$.ajax({
    url: "https://your_bucket_name.s3.amazonaws.com",
    type: "post",
    data: formData,
    contentType: false,
    processData: false,
    success: function(json){
        //do something
    },
    error: function(xhr, status, message){
        //show error
    },
    complete: function(){
        //hide spinner
    }
});

This approach would allow you to upload asynchronously, show a spinner, handle errors and overall be more elegant.

Two important things to note here, however.

First – We don't want to use success_action_redirect in our policy anymore.  If we're using AJAX, then we want to change our policy to use "success_action_status" with a value of "200":

{
    "expiration": "2019-02-13T12:00:00.000Z",
    "conditions": [
        {"bucket": "your_bucket_name"},
        {"success_action_status": "200"},
        ["starts-with", "$key", "your_root_folder/perhaps_another_folder/"],
        {"acl": "public-read"},
        ["starts-with", "$content-type", ""],
        {"x-amz-meta-uuid": "14365123651274"},
        {"x-amz-server-side-encryption": "AES256"},
        {"x-amz-credential": "your_aws_id/yyyymmdd/your_aws_region/s3/aws4_request"},
        {"x-amz-algorithm": "AWS4-HMAC-SHA256"},
        {"x-amz-date": "20190212T000000Z"}
    ]
}

Second – We're going to run into a CORS pre-flight limitation that will need to be addressed.  The fix is to go into your S3 settings on AWS, click on the bucket name, then click on the "Permissions" tab and finally click the "CORS configuration" button.

Once there add the following xml and save.

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
 <AllowedOrigin>*</AllowedOrigin>
 <AllowedMethod>POST</AllowedMethod>
 <MaxAgeSeconds>3000</MaxAgeSeconds>
 <AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>

Much better.  Now, instead of being redirected, we have full control over how we show error messages, be they alerts, growls, or something else.  Furthermore, we've added the correct CORS configuration and should be good moving forward.

Conclusion

Hopefully this helped some of you out there trying to get AWS direct post working with Go. If I’ve left something out, made a mistake or if you’d just like a bit more information or clarification on something, please leave me a comment and I’ll do my best to answer.  Thanks for stopping by.

Arch Linux Running OSX High Sierra In QEMU

I'm not a fan of Mac or OSX by any means; however, there comes a time when I need to test using their native Safari browser or some other native Mac environment. To solve this, I set out to install OSX into QEMU. It wasn't very straightforward, but I figured it out, so hopefully you won't have to go through the same issues I did.  Please make sure to follow along exactly with the steps provided; any missed or skipped step could easily lead to a failed install.

Prerequisites

Make sure you already have QEMU installed on your system.

Also ensure you have cfdisk installed on your system, or something comparable that you're comfortable using.  We'll be using this later to create a GPT partition.

You'll also need to get a hold of an OSX High Sierra ISO.  At the time of this publication I was able to locate a download of OSX High Sierra 10.13.1 at High Sierra 10.13.1. If for some reason the provided link is unavailable, simply do a search or use an actual OSX install disc.

I found the best luck using the enoch_rev2902_boot BIOS.  At the time of this publication I was able to locate a download at enoch_rev2902_boot.  If this specific version is no longer available, a later version may work as well.

Create The OSX QEMU Image

First we need to create the OSX QEMU image.  Issue the following command:

qemu-img create -f raw osx 32g

Replace the 32g with the actual hard disk size you want to use (ie. 64g, 128g, 256g, etc.).

Next, let’s create a GPT partition on the “osx” image.  Issue the following command:

cfdisk osx

cfdisk gpt

You should be presented with the above image.  When you are hit enter to select “gpt” as the label type.

cfdisk new partition

Next, select “New” and hit enter.  At this point you can select the size of the partition or partitions you would like to create.  I created a single partition.  Hit enter when you’ve chosen your size or just hit enter for the default max size.

cfdisk write

After the partition has been created, we’ll need to write it back to disk and quit.  Select the “Write” option and hit enter, then when prompted enter “yes” to confirm the writing of the partition to disk and hit enter.  Next, select “Quit” and hit enter.

At this point we’ve created the osx disk image which we’ll be using with QEMU, next we’ll need to configure a boot script for QEMU to load the installer ISO.

Install Mac OSX High Sierra

To make life easy on ourselves, let's create a start-mac.sh script so that we don't have to type out a ton of information each and every time we want to launch the QEMU image.  Create a start-mac.sh script and add the following to it:

#!/bin/sh

qemu-system-x86_64 \
	-enable-kvm \
	-boot c \
	-m 8192 \
	-machine pc-q35-2.4 \
	-usb \
	-device usb-kbd \
	-device usb-tablet \
	-device isa-applesmc,osk="ourhardworkbythesewordsguardedpleasedontsteal(c)AppleComputerInc" \
	-kernel ./enoch_rev2902_boot \
	-smbios type=2 \
	-device ich9-intel-hda \
	-device hda-duplex \
	-device ide-drive,bus=ide.2,drive=MacHDD \
	-drive format=raw,id=MacHDD,if=none,file=./osx \
	-cpu Penryn,vendor=GenuineIntel \
	-smp 4,cores=2 \
	-netdev user,id=eno1 -device e1000-82545em,netdev=eno1,id=vnet0 \
	-device ide-drive,bus=ide.0,drive=MacDVD \
	-drive id=MacDVD,if=none,snapshot=on,file=./macOS_High_Sierra_10_13_1_Official.iso \
	-monitor stdio

Take note of a few things here.  If your network device differs from mine, which is "eno1", then rename anything in the -netdev line from "eno1" to whatever your network interface is called.  Next, if you named your QEMU disk image anything other than "osx", make sure you change the -drive line to reflect that.  Lastly, before we move on and spin up the image, make sure you have your High Sierra install ISO and the "enoch_rev2902_boot" BIOS in the same directory, or at least referenced correctly in the start-up script.

Now, let’s set the execute bit on the script and spin it up:

chmod +x start-mac.sh

./start-mac.sh

As the image starts up you should see something like the following:

QEMU BIOS

Hit enter to start the OSX installer.

NOTE: from this point forward if your mouse gets stuck in the QEMU window, use the key combination “CTRL + ALT + G” to release the mouse.

Next you’ll see the following if everything went well:

Apple Start

This will take a while and you may see no progress for quite some time.  Don't shut the image down, just wait.  Seriously, it could take up to 30 minutes on just that screen alone.  Eventually, however, you will be presented with the black loading screen:

Apple Black Start

This screen will take a little while at first too; once again, just be patient, it will eventually load.  Eventually you will be presented with the language selection screen.  Go ahead and select your choice of language and hit the next arrow.

DO NOT SELECT “Install macOS” YET, OR IT WILL FAIL!

First we'll need to drop down into a terminal and format the disk image "osx" we created to the proper Apple type.  All the work we did up until this point was just getting it so the Apple installer would recognize it.  Also, don't try using the graphical Disk Utility either; it will fail and complain about the disk, saying:

Operation failed...

Running operation 1 of 1: Erase "MacOS"...
Unmounting disk
MediaKit reports not enough space on device for requested operation.
Operation failed...

Re-partition The OSX Disk

Once you reach the macOS Utilities screen, click on the Utilities drop down at the top of the screen and select the “Terminal”.

macOS Utilities Terminal

Once the “Terminal” app has launched we’ll need to re-partition the disk and format it using a Mac readable format.

Mac Terminal

By default, the disk we’re looking to format should be labelled as “disk2”.  If for some reason you’ve deviated from the above steps your label may be different.  To ensure you’re partitioning the correct drive issue the following command:

diskutil list

Once you’ve identified the correct drive, we’ll need to partition it.  Issue the following command to re-partition the drive:

diskutil partitionDisk /dev/disk2 GPT JHFS+ New 0b

Once complete you should see the following:

diskutil complete

Finally, close the open terminal application and select Terminal -> Quit Terminal from the top drop down.

Now, we can actually install macOS to our drive.  Select the “Install macOS” option and click continue.

macOS Install

Follow the next few screens and agreements until you get to the disk selection screen.

macOS Install

macOS Install Disk Selection

Now you can actually select the installation drive we labelled "New".  If you chose a different label, select that drive and continue the installation.

Shortly after the installer finishes, you may be presented with a black screen and an error which reads:

ParseTagDate unimplemented

ParseTagDate Unimplemented Error

Don’t panic! Hit F12 and the boot BIOS screen will be presented.  Select the “New” drive “hd(1,2) () New”, hit enter and complete the macOS installation.

macOS Install BIOS

At this point you’ll be presented with the gray Apple loading screen.  Once again, this may take quite a while.  Don’t terminate the instance, let it complete fully.

Apple Start

Eventually you’ll be presented with the black Apple loading screen again.

Apple Black Start

Finally, you’ll see the actual install progressing.

macOS Installing

Hooray! After the install completes you’ll see the BIOS boot screen once more.  This time it will contain your Installer, your installed macOS image disk and a recovery disk.  Select your macOS image disk “hd(1,2) (10.13) New”, finish up the welcome installer and enjoy your new OSX QEMU instance.

macOS Install Complete BIOS

Post Install Tweaks

The following are some final tweaks to make the overall experience a little better.

Screen Resolution

If your OSX resolution is stuck at something you don't like, say 1024×768, there's a simple fix that can be applied to change this.  Do the following:

First, apply any Apple updates to the OS currently outstanding or available.

Next, open your “Finder” application, select “Applications” and go to the root of your drive by using Apple/Metakey + Option/Alt + Up arrow.

macOS Drive Root

Create a directory called Extra

macOS Extra Folder

Next, copy the file /Library/Preferences/SystemConfiguration/com.apple.Boot.plist into the /Extra directory you just created.

Now, edit the copied file and change the following from:

<key>Kernel Flags</key>
<string></string>

org.chameleon.Boot.plist from

To:

<key>Kernel Flags</key>
<string></string>

<key>Graphics Mode</key>
<string>1920x1080x32</string>

org.chameleon.Boot.plist after

Or change it to whatever graphics resolution you would like; the format is {width}x{height}x{color depth}.

Next, shut down the OS completely by selecting "(Apple Icon) -> Shutdown".  Once the instance has completely shut down, relaunch manually by issuing:

./start-mac.sh

You should see an error in the QEMU window:

Booting with plist

Press any key and you should see the window resized to the width and height you specified above.  Next, to get rid of the boot error, rename the com.apple.Boot.plist file in the /Extra directory to org.chameleon.Boot.plist.

org.chameleon.Boot.plist

Finally, shut down fully using "(Apple Icon) -> Shutdown" and start the instance again using:

./start-mac.sh

Now your resolution should be set and you’re good to go.

Remove the ISO image from the start script

Edit the start-mac.sh script and comment out the last 2 -device and -drive lines:

#!/bin/sh

qemu-system-x86_64 \
	-enable-kvm \
	-boot c \
	-m 8192 \
	-machine pc-q35-2.4 \
	-usb \
	-device usb-kbd \
	-device usb-tablet \
	-device isa-applesmc,osk="ourhardworkbythesewordsguardedpleasedontsteal(c)AppleComputerInc" \
	-kernel ./enoch_rev2902_boot \
	-smbios type=2 \
	-device ich9-intel-hda \
	-device hda-duplex \
	-device ide-drive,bus=ide.2,drive=MacHDD \
	-drive format=raw,id=MacHDD,if=none,file=./osx \
	-cpu Penryn,vendor=GenuineIntel \
	-smp 4,cores=2 \
	-netdev user,id=eno1 -device e1000-82545em,netdev=eno1,id=vnet0 \
	-monitor stdio
	#-device ide-drive,bus=ide.0,drive=MacDVD \
	#-drive id=MacDVD,if=none,snapshot=on,file=./macOS_High_Sierra_10_13_1_Official.iso

This will remove the macOS Installer selection from the boot BIOS defaulting the selected item to the “New” drive.  Next time you boot, just hit enter and you’re good to go.

Bartop Arcade Cabinet Build

I’ve always wanted to build a game cabinet from scratch. In this build, I create a bartop arcade game cabinet using some pre-cut CNC templates, Happ and Seimitsu controls and an HP all in one monitor.

Build Components

The following is a breakdown of all the parts used in my build.

  • CNC bartop templates by LEP1 Customs, LLC on eBay
  • 1 can of spray shellac from Home Depot
  • Rough and fine grain sand paper from Home Depot
  • 3 cans of black semi gloss spray paint from Home Depot
  • HP all in one monitor from eBay
  • AK81 Mini Wireless Keyboard Mouse Combo from Tmark.com
  • Happ control joysticks and buttons from BSA Gaming on eBay
  • LED Happ style player and coin buttons from Magic Stone on eBay
  • Seimitsu buttons for pinball paddle controls from Focus Attack
  • Kinter 2 channel mini amp from Nargos on Amazon
  • Lanmu USB DC 5V to DC power converters from Lanmu direct on Amazon
  • BOSS Audio BRS540 50 watt 4 inch speakers from Speece, Inc on Amazon
  • Joystick mounting from amazingtreatsure123 on eBay
  • Electrical wire terminal crimp connectors from eBay
  • Rubber non-slip adhesive feet from Home Depot
  • 2 sheets of 12×24 x .093 plexiglass from Home Depot
  • 1 1/8″ drill bit from Home Depot
  • Small gauge wiring and screws from eBay

6 GPU Mining Build With MSI RX 570 Gaming X Video Cards

Building a mining machine can be a daunting task when trying to figure out the right balance of investment versus return.  This build is a minimum investment build which yields a high return, both in terms of hashrate/solves as well as electrical usage.

VBIOS Modding

After the initial setup and build I began the process of modifying the VBIOS on the cards as well as tuning the settings on the cards to get the highest performance while maintaining a low electrical footprint.

The VBIOS I used was originally modified by Shados and was the 1500-strap modified BIOS. I used the ATIFlash utility to flash all of the cards and eventually put together a batch file to better facilitate this. Please be aware that flashing your VBIOS can render your card unusable if you don't know what you're doing; take care to ensure you're using the correct flashing utility for your respective card.

The ATIFlash utility is available here: Download ATIFlash Utility

The VBIOS used was originally downloaded from anorak.tech and all the credit for this goes to Shados.  The video cards I used all contained Elpida memory, therefore I used the msirx570gamingX_memshift_1500Mem2000.rom.  If your card has a different type of memory, make sure you either modify the VBIOS yourself or find one that matches your chipset.

Performance Tuning

I found that for the series of GPUs I was using, namely the 6x MSI RX Radeon 570 Gaming X cards, the following settings worked best using Claymore’s Dual Miner while mining Ethereum.

-cclock 1075
-mclock 1900
-cvddc 900
-mvddc 900

Build Components

The following is a breakdown of all the parts used in my build.

  • ASUS Z170 PRO GAMING LGA 1151 Intel Z170 HDMI SATA 6Gb/s USB 3.1 ATX Intel Motherboard
  • 6x MSI Radeon RX 570 DirectX 12 RX 570 GAMING X 4G 4GB 256-Bit GDDR5 PCI Express 3.0 HDCP Ready CrossFireX Support ATX Video Card
  • 6-Pack USB RISER PCIe 6-Pin PCI-E 16x to 1x Powered Adapter Card w/ 60cm USB 3.0 Cable & PCI-E to SATA Power – GPU Mining Ethereum ETH Zcash ZEC
  • Intel Celeron G3930 Kaby Lake Dual-Core 2.9 GHz LGA 1151 51W BX80677G3930 Desktop Processor Intel HD Graphics 610
  • Rosewill Quark Series 1200W Full Modular Gaming Power Supply with LED Indicator, 80Plus Platinum Certified, Single +12V Rail, Intel 4th Gen CPU Ready, SLI & Crossfire Ready – Quark-1200
  • Biwin® 120GB MLC m.2 80mm Sync NAND flash SSD Solid State Drive, Read: 561MB/s Write: 450MB/s
  • Steel Crypto Coin Open Air Mining Frame Rig Case For 6 GPU ETH BTC Ethereum
  • G.SKILL NT Series 8GB (2 x 4GB) 288-Pin DDR4 SDRAM DDR4 2133 (PC4 17000) Intel Z170 Platform / Intel X99 Platform Desktop Memory Model F4-2133C15D-8GNT

Idea: Building A Quantum Interface Card (QIC)

A few years ago, an idea came to me regarding quantum entanglement and networking/communication technology.  After some time and research into trying to patent the idea, I found that others had had similar ideas but were not at all interested in acting on their development.  To this end, I thought I might share my idea with the world, freely, in the hopes that someone, someday, might find it useful in furthering not only how we communicate digitally, but how we share information as a whole.  The idea is as follows and is described using the simplest terms possible so as not to lose readers not currently versed in quantum theory.  This post will also contain a simple entanglement primer.

Understanding Quantum Entanglement

To start, one must understand some simple quantum theory observations.  First, particles entangled at the quantum level retain their entangled state regardless of the distance between them.  These particles share an interesting state which can be described by the following.

Imagine for a moment that in front of you sat a quantum blender.  This magical device would take two particles, blend them up and spit out entangled resulting particles.  Now, imagine that the quantum particles were actually two tennis balls, one orange and one green.  Once the tennis balls have been placed in the blender and it has been turned on, the resulting particles could be observed as having the following characteristics:

The first ball, which was orange, becomes half orange and half green.
The second ball, which was green, becomes half green and half orange, in that order.  Yes, the order does matter, and here's why.

The state of the first tennis ball will always be the exact opposite state of the second and vice versa.  If the first tennis ball is imagined as being rotated 180 degrees, the second tennis ball could be visualized as rotating the opposite direction and have the opposite state.  This is where the importance of quantum communication begins.

If the state of the first tennis ball, with its orange half facing you, could be thought of as 1, then the opposite state could be expressed as 0.  At the same time that the first ball's state is 0, the second ball's state would be 1.  This is important because synchronization of the two states requires that one end of the entangled pair become inverted, or not-ed.  This would render a synchronous state, so that when the first tennis ball faces with a state of 1, the second would also have a state of 1, and once again when changed, vice versa (ie. Orange:1 = Green:1, Orange:0 = Green:0).

Hopefully all of the above helps in understanding of the quantum theory being employed here.  If not, please feel free to ask questions in the comments and I will do my best to answer them.  Also, please keep in mind the idea above is a simplified example to only be used as a reference or mental model for understanding what’s to be explained in the development of a QIC.

Building A Quantum Interface Card (QIC)

In quantum physics it is understood that the sheer act of observing a quantum system changes the state of the system and this must be taken into account.  Currently two large hurdles exist at the time of this writing, the first of which is stabilizing a quantum particle or particles for a long duration of time.  The second, is creating a non-disruptive observance of a quantum particle or at the very least, a system of observance by which the state can be obtained from a quantum system before/during disruption as to obtain the correct state of the particle at the point in time of observance.

The Client Interface

In this model, the client interface could be thought of as a handheld device such as a phone, or another user machine such as a desktop/laptop computer, tablet, etc.

The Service Interface

In this model, the service interface could be thought of as the side which has direct or near-direct communication with the service provider, or an access point into a provider infrastructure such as a data center.

Implementation

The implementation for the QIC requires one or more particles to be entangled at the quantum level and stabilized, with client particle(s) residing in the device used by the end user (ie. phone, tablet, computer, etc.) and the service particles remaining at the data center or provider’s data hub.  Information exchanged from the service end will be instantaneously observed by the client and vice versa.  A protocol will need to be established whereby client communication is recognized and service communication is recognized and handled to avoid collisions.  The simplest approach that comes to mind is to have a series of observed particles dedicated as client to service, and another dedicated as service to client.  This would allow for truly bidirectional and parallel communication.  Information exchanged in this manner could initially be interpreted using simple states such as 1 and 0.  By determining an initial quantum state, any state changes thereafter would determine the transmission state of either 1 or 0 (1 -> 0, 0 -> 1).  Current compression and optimization techniques used for optimizing and encrypting network communication could still be used although, being that this implementation takes place at the quantum level, one might argue that for the time being at least, such techniques may not be necessary for quite some time, if at all.

Realization

Once realized, quantum communication opens up an entirely new avenue of real time information sharing easily measured in the gigabyte, terabyte and potentially in the petabyte per second range.  This caliber of wireless data sharing would allow any device supporting a QIC to download/upload information far beyond what current wired/wireless technology allows, thus putting new bottlenecks on the side of the service provider and the speed in which the client can actually process and persist the data as received.

Other applications for QIC communication go beyond terrestrial technology and into that of deep space communication, allowing instantaneous communication with deep space vehicles and probes.

Conclusion

The above idea expressed reflects my vision for the future of wireless communication in the hopes that it be put into practical use.  If you use this idea, please mention me as one of the authors and I wish you the very best of luck.  Please feel free to ask any questions regarding any of the information listed above.  Thanks.

Windows 10 Tips And Tricks: Keyboard Shortcuts

Windows 10 is chock-full of very helpful keyboard shortcuts, seemingly more so than any of its predecessors. The following is a list of keyboard shortcuts and combinations that can really help accelerate your use of applications and features.

Window Commands
Minimize All: Win + M
Toggle Desktop: Win + D
Peek Desktop: Win + ,
Maximize Window: Win + Up
Minimize Window: Win + Down
Pin Window Left: Win + Left
Pin Window Right: Win + Right
Next Window: Alt + Tab
Previous Window: Shift + Alt + Tab

System Commands
System Shutdown: Win + X, U, U
System Signout: Win + X, U, I
System Restart: Win + X, U, R
Lock System: Win + L
Control Panel: Win + X, P
System Information: Win + Pause
Device Manager: Win + X, M
Task Manager: Ctrl + Shift + Esc
Run Dialog: Win + R
File Explorer: Win + E
Search: Win + S
Action Center: Win + A
Project: Win + P
System Settings: Win + I
Windows Ink: Win + W
Share: Win + H
Game Bar: Win + G
Feedback Hub: Win + F
Magnifier Zoom In: Win + +
Magnifier Zoom Out: Win + -
View All Windows: Win + Tab
OS Screenshot: Win + Print Screen

Taskbar Items
Items that have been pinned to the taskbar can be launched using:

Win + {position number}

Where {position number} is the numbered position from left to right.  For example, to launch the first application pinned to your taskbar you would use the following command:

Win + 1

How To: Run Arch Linux on Windows 10

Windows 10 has the ability to run a Linux subsystem.  However, you can now run Arch Linux instead of the default Debian, thanks to some great work by "turbo" at github.com.  Here's the skinny on getting this set up.

First, download the alwsl.bat file from: https://github.com/alwsl/alwsl

Next, open a Windows command prompt and execute the following:

alwsl install

Follow the steps provided by the batch file and voila! Arch Linux on Windows.

Lastly, just open ArchLinux from the Windows menu and enjoy.

I’ve noticed using Arch Linux instead of Debian on Windows appears to have some performance gains, especially around the usage of Vim and other console editors.

How To: Play Oculus Rift Games On HTC Vive

If you don't have a VR headset yet, you may be wondering which to get. There seems to be ample support for both, but which one is the best?  Now you don't have to worry; thanks to some dedicated developers, the answer is easy.  Get the HTC Vive.  Why?  Because the HTC Vive has the best versatility of the two.  You can play either sitting or standing (stationary) or within a specific set of bounds (room scale).  Now you can also play Oculus Rift only games on the HTC Vive, and here's how.

First, if you haven't already, install SteamVR.  You've more than likely already done this, but it's worth mentioning just in case.

Next, install the Oculus Rift Setup software:  http://www.oculus.com/setup

Finally, install the latest version of Revive: https://github.com/LibreVR/Revive/releases

Once those steps are done, launch SteamVR and you'll have a new button at the bottom of the Steam overlay called "Revive".

Now that you have Revive installed, there are several free games from Oculus that you can play, Farlands among them.  Personally, I suggest Farlands; it's a very cool Viva Pinata meets discovery game.  Enjoy the new set of games in your HTC Vive library.

Adding Custom CSS Properties Using gtkmm

Adding custom CSS properties in gtkmm is actually pretty simple once you know how.  It took me quite a bit of searching, along with some trial and error, before I came up with a pretty simple pattern for doing so.  What I was looking for was a way to use custom CSS without having to develop a whole bunch of custom classes extending Gtk base classes.

The first thing we need to do is create a class that extends Gtk::Widget.  We don't necessarily have to use the class directly as a visual object; however, it does need to extend Gtk::Widget for this to work.  The following is my example for creating simple custom CSS properties.  Let's start off by adding a single custom property called "width".


class CustomCss : public Gtk::Widget {

    public:
        Gtk::StyleProperty<int> width_property;
        int width;

        CustomCss() :
            Glib::ObjectBase("customcss"),
            Gtk::Widget(),

            //-gtkmm__CustomObject_customcss-width
            width_property(*this, "width", 100),
            width(100)
            {}
};

Let’s examine the above code to find out what’s really going on here.


class CustomCss : public Gtk::Widget

First we need to create a class that extends the Gtk::Widget class.


public:
    Gtk::StyleProperty<int> width_property;
    int width;

Next, we need to add a public style property that defines how the width value will be read from CSS, and a plain member of the type we want to use, in this case an int.


CustomCss() :
    Glib::ObjectBase("customcss"),
    Gtk::Widget(),
 
    //-gtkmm__CustomObject_customcss-width
    width_property(*this, "width", 100),
    width(100)
    {}

Finally, we implement a default constructor where we set the name we will use in our CSS to reference this class ("customcss"), call the Gtk::Widget() constructor, and then initialize our custom properties.  For this, we initialize width_property, give it a CSS property name of "width" and set its default value to 100.  Then we do the same for the plain member by calling width(100).

Now, if we want to use the custom CSS class and properties we'll need to add them to our CSS file.  Note that in our CSS file the "width" property actually needs to be specified as "-gtkmm__CustomObject_customcss-width".  The first part, "-gtkmm__CustomObject_", is the default prefix used by gtkmm when accessing the custom property; the last two parts, "customcss" and "width", we control by setting Glib::ObjectBase("customcss") and width_property(*this, "width", 100) respectively.


/* somefile.css */
 
* {
    -gtkmm__CustomObject_customcss-width: 500;
}

To enable this custom property, we define an all selector "*" containing our CSS rule "-gtkmm__CustomObject_customcss-width: 500;".  Now, let's use it in our C++ application.


#include <gtkmm.h>
 
int main(int argc, char ** argv){
 
    auto application = Gtk::Application::create(argc, argv);
    Gtk::Window main_window;
 
    //setup our css provider
    Glib::ustring cssFile = "/path/to/somefile.css";
    Glib::RefPtr<Gtk::CssProvider> css_provider = Gtk::CssProvider::create();
 
    //load our css file, wherever that may be hiding
    if(css_provider->load_from_path(cssFile)){
        Glib::RefPtr<Gdk::Screen> screen = main_window.get_screen();
        Gtk::StyleContext::add_provider_for_screen(screen, css_provider, GTK_STYLE_PROVIDER_PRIORITY_USER);
    }
 
    //instantiate our custom css class
    CustomCss css;
 
    //get the width
    int width = css.width_property.get_value();
 
    //do something useful with the width we retrieved from css
 
    //run the application
    return application->run(main_window);
}

As you can see from the above, we've created a CssProvider, loaded our CSS file using load_from_path, grabbed the main window's screen, and finally added the provider by calling add_provider_for_screen.  Once that's all done we can use our CustomCss class.  Simply instantiate it and call get_value() on the property we want to use.  It's that simple.  If we want to add more than one property, we just add them to the constructor's initializer list like so.


class CustomCss : public Gtk::Widget {
    public:
 
    Gtk::StyleProperty<int> width_property;
    int width;
 
    Gtk::StyleProperty<int> height_property;
    int height;
 
    //etc...
 
    CustomCss() :
        Glib::ObjectBase("customcss"),
        Gtk::Widget(),
 
        //-gtkmm__CustomObject_customcss-width
        width_property(*this, "width", 100),
        width(100),
 
        //-gtkmm__CustomObject_customcss-height
        height_property(*this, "height", 50),
        height(50)
 
        //etc...
        {}
};


Once again, we add any new properties to our CSS file.

/* somefile.css */
 
* {
    -gtkmm__CustomObject_customcss-width: 500;
    -gtkmm__CustomObject_customcss-height: 100;
    /* etc... */
}


Then we access them the same way as before in our C++ code.

//instantiate our custom css class
CustomCss css;
 
//get the width
int width = css.width_property.get_value();
 
//get the height
int height = css.height_property.get_value();
 
//get etc...


I hope this helps.  I found this a simple and clean way of introducing custom CSS properties without having to create a bunch of custom Gtk::Widget classes.  As always, thanks for stopping by, and let me know if there's anything I can do to help.



How To: Xfinity TV and Arch Linux

Hey everyone, I've figured out how to get tv.xfinity.com working on Arch Linux. Here's what I did in case anyone else needs help:

First install pipelight:

user@computer:$ yaourt -S pipelight

Next, if you don’t already have it install firefox:

user@computer:$ sudo pacman -S firefox

Next, update and configure pipelight:

user@computer:$ sudo mkdir /root/.gnupg
user@computer:$ sudo pipelight-plugin --update

Next, enable the flash plugin:

user@computer:$ sudo pipelight-plugin --enable flash

Next, add the plugins to firefox:

user@computer:$ sudo pipelight-plugin --create-mozilla-plugins

Lastly, make sure you have hal installed:

user@computer:$ yaourt -S hal-flash-git
user@computer:$ yaourt -S hal-info

Once you've completed the steps above, completely kill off firefox (Super+Q), then restart it and browse to tv.xfinity.com; you should be able to watch with no problems.