Category Archives: Code

S3 Direct Form Posting and Go

Introduction

I recently ran into a situation that required files to be posted directly to S3 without using an intermediate machine as a middleman.  This brought about some obvious challenges, including but not limited to showing a progress bar or spinner during upload.  The fact that the file needed to be posted directly to S3 also initially meant that any errors displayed to the user would be the ugly (and rather abrupt) XML messages S3 provides.

So is there a better way?  Of course there is, and here’s how I did it.

Getting Started

First and foremost, before anything gets posted to S3 you need to ensure that you can create an S3 signature properly.  I found this to be the most tiresome and time consuming part of the process.  Reading through outdated docs and non-working samples is enough to make any seasoned programmer want to rage quit after a time.  However, through perseverance and a lot of trial and error, I figured it out.  If you’re using Go, the following examples are going to be perfect for you; if you’re using a different language, there’s probably already a working sample out there for you to use.  Just in case, however, I will precede each example with pseudo code so that it may be ported to other languages where needed.

Encryption

The first thing we are going to want to do is create a function for handling HMAC-SHA256 hashing (strictly speaking HMAC is keyed hashing rather than encryption, but I’ll use the terms loosely here).  We are going to be using this function over and over again throughout this post.

Pseudo Code

function:
    encrypt(key, value)
        return hmacSha256(key, value)

In Go, I’ve defined the function for the sake of this post as the following.

Go Code

import (
    "crypto/hmac"
    "crypto/sha256"
)

func AwsEncode(key, value []byte) []byte {
    mac := hmac.New(sha256.New, key)
    mac.Write(value)
    return mac.Sum(nil)
}

In understanding the above, let’s walk through it line by line. First we define the function as accepting key and value as byte slices.  The value is what will be hashed; the key is what the HMAC is keyed with. The function returns the resulting MAC as a byte slice. Inside, we create a new HMAC specifying SHA-256 as our hashing scheme and initialize it with the key. Next, we write the value into it and return the sum.
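If it helps, here’s a quick self-contained sanity check of that helper.  The key and value strings are made up purely for illustration; the hex encoding is only there to make the raw bytes printable.

package main

import (
    "crypto/hmac"
    "crypto/sha256"
    "encoding/hex"
    "fmt"
)

func AwsEncode(key, value []byte) []byte {
    mac := hmac.New(sha256.New, key)
    mac.Write(value)
    return mac.Sum(nil)
}

func main() {
    //compute the MAC of an illustrative value with an illustrative key
    digest := AwsEncode([]byte("some-key"), []byte("some-value"))

    //print as hex purely so we can see it; the signing chain later keeps raw bytes
    fmt.Println(hex.EncodeToString(digest))
}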

Signing

Next we’re going to want to begin the process of signing the payload which is required for S3 form posting to work.  Once again, let’s start by pseudo coding then showing the actual implementation.

For this we’re going to want to create a series of encryption calls, each based on the previous value and feeding into the next.  It sounds confusing at first, but it’s not that bad; here’s how it works.

Basically what we’re going to want is to get the current date in YYYYMMDD format, which will get HMAC-SHA256 encoded using your AWS secret.  For the sake of AWS V4 authentication we’ll need to add the prefix “AWS4” to your AWS secret.  Do not make the mistake of hex or base64 encoding the result; leave it in its raw state.

Pseudo Code

yyyymmdd = Now.format("yyyymmdd")
encrypt("AWS4" + your_aws_secret, yyyymmdd)

Go Code

import (
    "fmt"
    "crypto/hmac"
    "crypto/sha256"
    "time"
)

//SigV4 expects dates in UTC
now := time.Now().UTC()
yyyymmdd := fmt.Sprintf("%d%02d%02d", now.Year(), now.Month(), now.Day())
dateKey := AwsEncode([]byte("AWS4" + your_aws_secret), []byte(yyyymmdd))

In true fashion, let’s go line by line here.  First, we create a yyyymmdd value (in UTC, which V4 signing expects), ensuring the month and day values are left padded with zeros.  Once the yyyymmdd string is created we move on to creating the encrypted dateKey, which is the result of encrypting the yyyymmdd value with your AWS secret prefixed by the string “AWS4”.  Lastly we ensure these values are passed in as byte slices and store the returned dateKey value.

We will use this pattern to create the entire signature.  The only variation comes at the very end, when we return the final signed value hex encoded. Also note, the value cited as your_aws_region should contain the string name of the region, such as “us-east-1” for example.

Pseudo Code

function:
    sign (string_to_sign)
        dateKey = encrypt("AWS4" + your_aws_secret, yyyymmdd)
        dateRegionKey = encrypt(dateKey, your_aws_region)
        dateRegionServiceKey = encrypt(dateRegionKey, "s3")
        signingKey = encrypt(dateRegionServiceKey, "aws4_request")
        signature = encrypt(signingKey, string_to_sign)
        return hex(signature)

Go Code

import (
    "fmt"
    "crypto/hmac"
    "crypto/sha256"
    "encoding/hex"
    "time"
)

func AwsSign(stringToSign string) string {
    your_aws_secret := "SOMEAWSSECRETKEYSTOREDINYOURIAMSETTINGS"
    your_aws_region := "your-region-1"

    now := time.Now().UTC()
    yyyymmdd := fmt.Sprintf("%d%02d%02d", now.Year(), now.Month(), now.Day())

    dateKey := AwsEncode([]byte("AWS4" + your_aws_secret), []byte(yyyymmdd))
    dateRegionKey := AwsEncode(dateKey, []byte(your_aws_region))
    dateRegionServiceKey := AwsEncode(dateRegionKey, []byte("s3"))
    signingKey := AwsEncode(dateRegionServiceKey, []byte("aws4_request"))
    signature := AwsEncode(signingKey, []byte(stringToSign))
    return hex.EncodeToString(signature)
}

Now is probably a good time to note that the above code is really laid out for learning purposes; variable notation such as your_aws_secret is there solely to draw attention.  Your final implementation should either pull the secret and region from a configuration file or accept them as function parameters, depending on your requirements.

That being said, let’s go line by line.  First we’ve set up a function called AwsSign which takes a stringToSign variable.  This variable is the final content to be signed; in the case of direct form posting it will contain the base64 encoded JSON policy definition which we will describe later. The AwsSign function returns a string which will be the final signature value.  The next two lines, in a true implementation, will contain your actual AWS secret and AWS region name respectively.

We then generate the yyyymmdd string as previously described and generate the dateKey based on the “AWS4” prefix, your AWS secret and the yyyymmdd variable.  We then take that value and feed it back into the AwsEncode function along with the AWS region.  We then take the dateRegionKey and feed it back into AwsEncode again along with the service we wish to use, in this case “s3”. That value goes back into AwsEncode yet again along with the “aws4_request” string, telling AWS that we are using a version 4 request schema. Finally we pass that value into AwsEncode one last time along with the stringToSign value, completing the chain of encryption.  The last step is to hex encode the byte slice into a string and return it as the final signature to be used with direct form posting to S3.

I know that was a bit of a gallop and hopefully it made sense.  If not, please feel free to contact me or leave a comment below and I’ll do my best to describe what’s going on here.  For reference, the following might provide a more visual flow of the above.

https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html (scroll to the bottom of the page)

Policy

Now we start getting into the guts of the operation: the policy.  The way the form post works is that you define a policy, that policy gets base64 encoded and added as a form variable named “policy” (go figure), and then that same base64 encoded policy string is signed using the AWS signature function we defined above, with the returned value going into a form field called, you guessed it, “x-amz-signature”.  One of the caveats to understand with the policy is that everything defined in the policy has to match the form.  We’ll discuss this a little more in detail later, but for now let’s define a simple working policy.

Example

{
    "expiration": "2019-02-13T12:00:00.000Z",
    "conditions": [
        {"bucket": "your_bucket_name"},
        {"success_action_redirect": "https://yourserver.com/your_landing_page"},
        ["starts-with", "$key", "your_root_folder/perhaps_another_folder/"],
        {"acl": "public-read"},
        ["starts-with", "$content-type", ""],
        {"x-amz-meta-uuid": "14365123651274"},
        {"x-amz-server-side-encryption": "AES256"},
        {"x-amz-credential": "your_aws_id/yyyymmdd/your_aws_region/s3/aws4_request"},
        {"x-amz-algorithm": "AWS4-HMAC-SHA256"},
        {"x-amz-date": "20190212T000000Z"}
    ]
}

So now the fun part: let’s walk through exactly what in tarhooty is going on above.

expiration: The first thing you’ll probably notice is the expiration field.  This field describes how long the transaction should live.  An easy way to handle this would be to take the current date time and add one day to it, allowing a period of twenty four hours to upload the file.  For a more tightened approach, depending on your needs, you could set it to seconds, minutes or hours instead.  (A Go sketch for generating this and the other date based values follows this list.)

conditions: Next, let’s look at the conditions field.  This is an array which describes the conditions that must be met in order for the post upload to succeed.  Anything not met will automatically cause a failure by the AWS S3 API.

bucket: The bucket field is the top most name of your S3 folder structure.  This field is required to ensure the upload goes to the correct bucket.

success_action_redirect: This field specifies where you would like S3 to redirect the browser after a successful upload.

starts-with: The starts-with condition allows one to define a restriction for a field in the post upload form.  For example, the “starts-with” “$key” condition requires that the “key” value in the HTML form must in fact start with whatever is specified in the last parameter (ex. “uploads/public/”).  When the last parameter in this condition is an empty string “”, anything goes and the condition will be satisfied as long as the field is present.

acl: The acl (Access Control List) field determines the visibility of the uploaded object.

x-amz-meta-uuid: This field is a bit of a bizarre one; as far as I can tell this field is required, and it’s required to have the value of “14365123651274”. No clue as to why, it wouldn’t work without it…when in Rome.

x-amz-server-side-encryption: This field tells S3 which server-side encryption algorithm to use when storing the uploaded object, in this case AES256.

x-amz-credential: The credential field is composed of several values and takes the form of {your_aws_id}/{yyyymmdd}/{your_aws_region}/s3/aws4_request. Note that the date portion must match the yyyymmdd value used when deriving the signing key. A working example, if we were to use the values from our AwsSign function, would take on the following form: “AKIAYOURAWSID/20190212/us-east-1/s3/aws4_request”.

x-amz-algorithm: The algorithm field tells the S3 api which algorithm was used for signing.

x-amz-date: The date field is today’s date.  For simplicity’s sake, I zeroed out the hours, minutes and seconds above; however, you could and should use a complete UTC timestamp taking the form of YYYYMMDDThhmmssZ, as shown in the sketch below.
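Since several of these values are date based, here is a minimal Go sketch for generating the expiration, x-amz-date and x-amz-credential values at request time.  The id and region values are the same placeholders used above, and the format strings are Go reference-time layouts.

package main

import (
    "fmt"
    "time"
)

func main() {
    your_aws_id := "AKIAYOURAWSID"
    your_aws_region := "us-east-1"

    //SigV4 expects UTC throughout
    now := time.Now().UTC()

    //expiration: give the client twenty four hours to complete the upload
    expiration := now.Add(24 * time.Hour).Format("2006-01-02T15:04:05.000Z")

    //x-amz-date: complete UTC timestamp in YYYYMMDDThhmmssZ form
    amzDate := now.Format("20060102T150405Z")

    //x-amz-credential: {id}/{yyyymmdd}/{region}/s3/aws4_request
    credential := fmt.Sprintf("%s/%s/%s/s3/aws4_request",
        your_aws_id, now.Format("20060102"), your_aws_region)

    fmt.Println(expiration)
    fmt.Println(amzDate)
    fmt.Println(credential)
}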

Constructing The Form

Now that we have a general understanding of all the moving parts involved in making the post request work, let’s bring them all together.  First a walk through of what we’re going to do.

We’re going to need to build a policy with dynamic values populated at request time.

When building the form, we’re going to need to create our policy and base64 encode it and pass it into the form as “policy”.

We’ll also need to AwsSign our policy and pass it into the form as the “signature” field.

Finally, we’ll need to ensure all the values we’ve specified in our policy are met by our form.

Here’s an example HTML form for use with our policy described above.  Please note, when developing your form the order of the elements matters: in particular, the file field must be the last field in the form, otherwise the post will fail.

<form method="post" action="https://your_bucket_name.s3.amazonaws.com/" enctype="multpart/form-data">
<input type="hidden" name="key" value="your_root_folder/perhaps_another_folder/your_file_to_upload.txt" />
<input type="hidden" name="acl" value="public-read" />
<input type="hidden" name="success_action_redirect" value="https://yourserver.com/your_landing_page" />
<input type="hidden" name="content-type" "" />
<input type="hidden" name="x-amz-meta-uuid" value="14365123651274" />
<input type="hidden" name="x-amz-server-side-encryption" value="AES256" />
<input type="hidden" name="x-amz-credential" value="your_aws_id/your_aws_region/s3/aws4_request" />
<input type="hidden" name="x-amz-algorithm" value="AWS4-HMAC-SHA256" />
<input type="hidden" name="x-amz-date" value="20190212T000000Z" />
<input type="hidden" name="policy" value="{replace-with-actual-base64-encoded-policy}" />
<input type="hidden" name="x-amz-signature" value="{replace-with-actual-aws-sign}" />
<input type="file" name="file" />
<input type="submit" value="Upload" />
</form>

As you can see above, all of the fields specified in the policy are present in the form.  The “policy” field will contain the direct result of taking the policy JSON as a string and running it through a base64 encode.  The “x-amz-signature” field will contain the result of running that base64 encoded policy through the AwsSign function.  Finally, we added a file input and a submit button to select the file (which in this case must be named “your_file_to_upload.txt” to satisfy the key field) and to submit the form.

Putting It All Together

Server Side

import (
    "crypto/hmac"
    "crypto/sha256"
    "encoding/base64"
    "encoding/hex"
    "fmt"
    "time"
)

func AwsEncode(key, value []byte) []byte {
    mac := hmac.New(sha256.New, key)
    mac.Write(value)
    return mac.Sum(nil)
}

func AwsSign(stringToSign string) string {
    your_aws_secret := "SOMEAWSSECRETKEYSTOREDINYOURIAMSETTINGS"
    your_aws_region := "your-region-1"

    now := time.Now().UTC()
    yyyymmdd := fmt.Sprintf("%d%02d%02d", now.Year(), now.Month(), now.Day())

    dateKey := AwsEncode([]byte("AWS4" + your_aws_secret), []byte(yyyymmdd))
    dateRegionKey := AwsEncode(dateKey, []byte(your_aws_region))
    dateRegionServiceKey := AwsEncode(dateRegionKey, []byte("s3"))
    signingKey := AwsEncode(dateRegionServiceKey, []byte("aws4_request"))
    signature := AwsEncode(signingKey, []byte(stringToSign))
    return hex.EncodeToString(signature)
}

func main() {
    //define a policy
    policy := []byte(`{
        "expiration": "2019-02-13T12:00:00.000Z",
        "conditions": [
            {"bucket": "your_bucket_name"},
            {"success_action_redirect": "https://yourserver.com/your_landing_page"},
            ["starts-with", "$key", "your_root_folder/perhaps_another_folder/"],
            {"acl": "public-read"},
            ["starts-with", "$content-type", ""],
            {"x-amz-meta-uuid": "14365123651274"},
            {"x-amz-server-side-encryption": "AES256"},
            {"x-amz-credential": "your_aws_id/yyyymmdd/your_aws_region/s3/aws4_request"},
            {"x-amz-algorithm": "AWS4-HMAC-SHA256"},
            {"x-amz-date": "20190212T000000Z"}
        ]
    }`)

    //base64 encode the policy, then sign the encoded string
    b64Policy := base64.StdEncoding.EncodeToString(policy)
    signature := AwsSign(b64Policy)

    //hand b64Policy and signature off to your form rendering
    fmt.Println(b64Policy, signature)
}
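To get b64Policy and signature into the page, one reasonable approach is Go’s html/template package.  The following is only a sketch; the route, the upload.html template file and the placeholder values are assumptions.

package main

import (
    "html/template"
    "net/http"
)

//hypothetical handler that renders the upload form with the computed values
func uploadForm(w http.ResponseWriter, r *http.Request) {
    //b64Policy and signature would be computed exactly as shown above
    data := map[string]string{
        "b64Policy": "replace-with-computed-base64-policy",
        "signature": "replace-with-computed-signature",
    }

    t := template.Must(template.ParseFiles("upload.html"))
    t.Execute(w, data)
}

func main() {
    http.HandleFunc("/upload", uploadForm)
    http.ListenAndServe(":8080", nil)
}

The {{.b64Policy}} and {{.signature}} placeholders in the client side form below line up with the keys in that map.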

Client Side

<form method="post" action="https://your_bucket_name.s3.amazonaws.com/" enctype="multpart/form-data">
<input type="hidden" name="key" value="your_root_folder/perhaps_another_folder/your_file_to_upload.txt" />
<input type="hidden" name="acl" value="public-read" />
<input type="hidden" name="success_action_redirect" value="https://yourserver.com/your_landing_page" />
<input type="hidden" name="content-type" "" />
<input type="hidden" name="x-amz-meta-uuid" value="14365123651274" />
<input type="hidden" name="x-amz-server-side-encryption" value="AES256" />
<input type="hidden" name="x-amz-credential" value="your_aws_id/your_aws_region/s3/aws4_request" />
<input type="hidden" name="x-amz-algorithm" value="AWS4-HMAC-SHA256" />
<input type="hidden" name="x-amz-date" value="20190212T000000Z" />
<input type="hidden" name="policy" value="{{.hexPolicy}}" />
<input type="hidden" name="x-amz-signature" value="{{.signature}}" />
<input type="file" name="file" />
<input type="submit" value="Upload" />
</form>

The above client side and server side code examples are obviously not meant to be implemented verbatim; however, they should be enough to get you started and heading in the right direction.

Moving Beyond Post – Ajax

A few things may have jumped out at you with the approach above.  The values required will need to either be populated dynamically or be populated from the server before the form is rendered; both could work.  The main issue I had with these approaches is that they required the key name to be known before the form was constructed, which could be problematic depending on your requirements.  The other issue is that there’s no spinner or loading indicator for the user; the user is instead just required to wait and hope all went well.  For me, that’s just not good enough, so I devised a way to use AJAX (or XHTTP) to upload the file without refreshing the page. If you’re interested then read on.

So how could we get this solution to work in a more elegant manner?  How about this workflow?

User sees a file button, clicks the file button and selects the file to be used.

The onchange event is triggered for the file button and in turn sends an AJAX (XHTTP) request to the server to generate all the fields required to populate the client form.  The values could contain the full key using the selected filename, the signature, everything.

When the ajax (XHTTP) request returns with all the required form values, the client constructs a FormData object and appends the data as form attributes.
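Server side, the endpoint answering that AJAX request could be as simple as the following Go sketch.  The route and the field names (AwsKey, AwsAcl, AwsPolicy, AwsSignature) are assumptions that just need to line up with what the client snippet below expects; the remaining policy fields are elided here just like the client’s “...” below.

package main

import (
    "encoding/json"
    "net/http"
)

//hypothetical response shape consumed by the client snippet below
type awsFormValues struct {
    AwsKey       string `json:"AwsKey"`
    AwsAcl       string `json:"AwsAcl"`
    AwsPolicy    string `json:"AwsPolicy"`
    AwsSignature string `json:"AwsSignature"`
}

func awsFields(w http.ResponseWriter, r *http.Request) {
    //build the full key from the filename the client selected,
    //then compute the policy and signature exactly as before
    filename := r.URL.Query().Get("filename")

    values := awsFormValues{
        AwsKey:       "your_root_folder/perhaps_another_folder/" + filename,
        AwsAcl:       "public-read",
        AwsPolicy:    "replace-with-base64-encoded-policy",
        AwsSignature: "replace-with-AwsSign-result",
    }

    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(values)
}

func main() {
    http.HandleFunc("/aws-fields", awsFields)
    http.ListenAndServe(":8080", nil)
}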

//assuming json contains server side AWS values and
//using jQuery to communicate with the file input element

var formData = new FormData();
formData.append("key", json.AwsKey);
formData.append("acl", json.AwsAcl);
...
formData.append("policy", json.AwsPolicy);
formData.append("signature", json.AwsSignature);
formData.append("file", fileInput.prop("files")[0], input.attr("name));
formData.append("upload_file", true);

Finally, once the formData object has been fully populated, in the correct order, with all of the required AWS field values, the form can be sent to the AWS end point specified earlier in the form.

//assuming jQuery being used to post XHTTP request to AWS

//show spinner

$.ajax({
    url: "https://your_bucket_name.s3.amazonaws.com",
    type: "post",
    data: formData,
    contentType: false,
    processData: false,
    success: function(json){
        //do something
    },
    error: function(xhr, status, message){
        //show error
    },
    complete: function(){
        //hide spinner
    }
});

This approach would allow you to upload asynchronously, show a spinner, handle errors and overall be more elegant.

Two important things of note here however.

First – We don’t want to use success_action_redirect in our policy anymore.  If we’re using AJAX then we want to change our policy to use “success_action_status” with a value of “200”:

{
 "expiration": "2019-02-13T12:00:00.000Z",
 "conditions": [
 {"bucket": "your_bucket_name"},
 {"success_action_status": "200"},
 ["starts-with", "$key", "your_root_folder/perhaps_another_folder/"],
 {"acl": "public-read"},
 ["starts-with", "$content-type", ""],
 {"x-amz-meta-uuid": "14365123651274"},
 {"x-amz-server-side-encryption": "AES256"},
 {"x-amz-credential": "your_aws_id/yyyymmdd/your_aws_region/s3/aws4_request"},
 {"x-amz-algorithm": "AWS4-HMAC-SHA256"},
 {"x-amz-date": "20190212T000000Z"}
 ]
}

Second – We’re going to run into a CORS pre-flight limitation that will need to be addressed.  The fix is to go into your S3 settings on AWS, click on the bucket name, then click on the “Permissions” tab and finally click the “CORS configuration” button.

Once there add the following xml and save.

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
 <AllowedOrigin>*</AllowedOrigin>
 <AllowedMethod>POST</AllowedMethod>
 <MaxAgeSeconds>3000</MaxAgeSeconds>
 <AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>

Much better.  Now instead of being redirected we have full control over how we show error messages, be they alerts, growls, or some other mechanism.  Furthermore, we’ve added the correct CORS configuration and should be good moving forward.

Conclusion

Hopefully this helped some of you out there trying to get AWS direct post working with Go. If I’ve left something out, made a mistake or if you’d just like a bit more information or clarification on something, please leave me a comment and I’ll do my best to answer.  Thanks for stopping by.

How To: Run Arch Linux on Windows 10

Windows 10 has the ability to run a Linux subsystem.  However, you can now run Arch Linux instead of the default Debian thanks to some great work by “turbo” at github.com.  Here’s the skinny on getting this setup.

First, download the alwsl.bat file from: https://github.com/alwsl/alwsl

Next, open a Windows command prompt and execute the following:

alwsl install

Follow the steps provided by the batch file and voila! Arch Linux on Windows.

Lastly, just open ArchLinux from the Windows menu and enjoy.

I’ve noticed using Arch Linux instead of Debian on Windows appears to have some performance gains, especially around the usage of Vim and other console editors.

Adding Custom CSS Properties Using gtkmm

Adding custom CSS properties in gtkmm is actually pretty simple once you know how.  It took me quite a bit of searching along with some trial and error before I came up with a pretty simple pattern for doing so.  What I was looking for was a way to use custom CSS without having to develop a whole bunch of custom classes which extended Gtk base classes.

First thing we need to do is create a class that extends Gtk::Widget.  Now, we don’t necessarily have to use the class directly as a visual object; however, it does need to extend the Gtk::Widget class for this to work.  The following is my example for creating simple custom CSS properties.  Let’s start off by adding a single custom property called “width”.

 

class CustomCSS : public Gtk::Widget {
 
    public:
        Gtk::StyleProperty<int> width_property;
        int width;
 
        CustomCSS() :
            Glib::ObjectBase("customcss"),
            Gtk::Widget(),
 
            //-gtkmm__CustomObject_customcss-width
            width_property(*this, "width", 100),
            width(100)
            {}
};

Let’s examine the above code to find out what’s really going on here.

 

class CustomCSS : public Gtk::Widget

First we need to create a class that extends the Gtk::Widget class.

 

public:
    Gtk::StyleProperty<int> width_property;
    int width;

Next, we need to add a public style property that will be used to read the width value, along with a simple member of the type we’ll want to use, in this case an int.

 

CustomCSS() :
    Glib::ObjectBase("customcss"),
    Gtk::Widget(),
 
    //-gtkmm__CustomObject_customcss-width
    width_property(*this, "width", 100),
    width(100)
    {}

Finally, we’ll want to implement a default constructor where we set the name we will be using in our CSS to reference this class (“customcss”), pass in the Gtk::Widget() constructor, and then begin initializing our custom properties to be used in CSS.  For this, we initialize the width_property, give it a CSS property name of “width” and set the default value to 100.  Then we do the same for the actual member by initializing width(100).

Now, if we want to use the custom CSS class and properties we’ll need to add them to our CSS file.  As noted above, in our CSS file the “width” property actually needs to be specified as “-gtkmm__CustomObject_customcss-width”.  The first part, “-gtkmm__CustomObject_”, is the default prefix used by gtkmm when accessing the custom property; the last two parts, “customcss” and “width”, we control by setting Glib::ObjectBase(“customcss”) and width_property(*this, “width”, 100) respectively.

 

/* somefile.css */
 
* {
    -gtkmm__CustomObject_customcss-width: 500;
}

To enable this custom property to be used, we first need to define an all selector “*” followed by our CSS implementation which contains “-gtkmm__CustomObject_customcss-width: 500;”.  Now, let’s use it in our C++ application.

 

#include <gtkmm.h>
 
int main(int argc, char ** argv){
 
    //your code doing something useful
 
    //setup our css provider and style context
    Glib::ustring cssFile = "/path/to/somefile.css";
    Glib::RefPtr<Gtk::CssProvider> css_provider = Gtk::CssProvider::create();
    Glib::RefPtr<Gtk::StyleContext> style_context = Gtk::StyleContext::create();
 
    //load our css file, wherever that may be hiding
    //(window here is your previously created application window)
    if(css_provider->load_from_path(cssFile)){
        Glib::RefPtr<Gdk::Screen> screen = window->get_screen();
        style_context->add_provider_for_screen(screen, css_provider, GTK_STYLE_PROVIDER_PRIORITY_USER);
    }
 
    //instantiate our custom css class
    CustomCSS css;
 
    //get the width
    int width = css.width_property.get_value();
 
    //do something useful with the width we retrieved from css
 
    //run the application (application and main_window come from your setup code)
    application->run(main_window);
 
    return EXIT_SUCCESS;
}

As you can see from the above, we’ve created both a CssProvider and a StyleContext, then we load our CSS file using load_from_path, grab the main application window’s screen, and finally add the provider by calling add_provider_for_screen.  Once that’s all done we can use our CustomCSS class: simply instantiate it and call get_value() on the property we want to use.  It’s that simple.  If we wanted to add more than one property, we’d just add them to the constructor’s initializer list like so.

 

class CustomCSS : public Gtk::Widget {
    public:
 
    Gtk::StyleProperty<int> width_property;
    int width;
 
    Gtk::StyleProperty<int> height_property;
    int height;
 
    //etc...
 
    CustomCSS() :
        Glib::ObjectBase("customcss"),
        Gtk::Widget(),
 
        //-gtkmm__CustomObject_customcss-width
        width_property(*this, "width", 100),
        width(100),
 
        //-gtkmm__CustomObject_customcss-height
        height_property(*this, "height", 50),
        height(50)
 
        //etc...
        {}
};

 

Once again, we add the new properties to our CSS file.

/* somefile.css */
 
* {
    -gtkmm__CustomObject_customcss-width: 500;
    -gtkmm__CustomObject_customcss-height: 100;
    /* etc... */
}

 

Then we access them the same way as before in our C++ code.

//instantiate our custom css class
CustomCSS css;
 
//get the width
int width = css.width_property.get_value();
 
//get the height
int height = css.height_property.get_value();
 
//get etc...

 

I hope this helps.  I found this a simple and clean way of introducing custom CSS properties without having to create a bunch of Gtk::Widget classes.  As always, thanks for stopping by and let me know if there’s anything I can do to help.

 

 

Windows 10 BASH: Tips and Tricks

I’ve been using the Windows 10 BASH shell for quite a while now, forcing myself to get used to it. Along the way I’ve figured out a few workarounds for features such as sshfs, as well as simple copy paste commands. First, the copy paste. When I first started using the Windows BASH shell I found it very frustrating trying to copy and paste using the CTRL-C and CTRL-V commands. As any Linux user will tell you, these are kill commands and it doesn’t really make sense to use them. However, I did find another set of commands which work much better; here they are.

COPY: Select text, hit ENTER.
PASTE: Right click

So much easier than trying to fiddle with CTRL commands which sometimes work and sometimes don’t.

Now let’s talk about sshfs. At the time of this writing, FUSE support was not part of the Windows 10 BASH shell. There is/was however a petition to get it introduced; if it’s still not included and you would like to help, vote it up using the link below.

Windows 10 BASH FUSE Support

So, how did I get something similar to sshfs working?  Simple: I put together a couple of shell scripts that utilize rsync, and here’s the gist.  Basically what we want is a way to pull all the files we want to modify locally, use an editor to modify the files, and when the files are modified, push them back up to the server.  To do this, I created 3 BASH shell scripts named “pull”, “push” and “watcher”.  First the pull script; this script performs an incremental rsync, pulling the remote files into a local directory of your choosing.  The “pull” file looks like this:

#!/bin/sh
 
rsync -au --progress user@someserver:/path/to/remote/directory /path/to/local/directory

 

The above requires that you change “user@someserver” to your actual username and server name.  Also, the “/path/to/remote/directory” and “/path/to/local/directory” need to be changed to your specific paths.  For my specific needs, I used something similar to the following:

root@myserver:/var/www/html/zf /mnt/c/Users/Jason/Documents/zf

You’ll notice I have the end point of the rsync specified under the standard Windows file system; this is so I can use a native Windows editor (LightTable in my case) to edit the files.

Next, let’s look at the “push” file.  The “push” file looks like this:

#!/bin/sh
 
rsync -au --progress /path/to/local/directory user@someserver:/path/to/remote/directory

 

The above is very similar to the “pull” file, just in reverse. This will do an incremental push only for files that have changed.

Lastly, let’s look at the “watcher” file:

#!/bin/sh
 
while ./pull && ./push;do : ;done

 

The above file continually runs the pull and push scripts until you kill it (Ctrl-C). After you’ve customized the above files to your liking, all you need to do is execute a pull, then execute the watcher, and you’ll have yourself a poor man’s sshfs.

user@computer:$ ./watcher

 

As an optimization approach, you can also add --exclude directives to the “pull” file which will ignore files and directories, for example:

#!/bin/sh
 
rsync -au --progress --exclude .svn --exclude .git user@someserver:/path/to/remote/directory /path/to/local/directory

Zenhance NodeJS MVC Framework Is Alive

A Zend Framework like MVC approach to building NodeJS enterprise applications in an easy to install NodeJS module.

The new Zenhance NodeJS MVC Framework is up and running and available for install.  The entire framework is comprised of a single NPM module and can be installed and running in less than 5 minutes once you have NodeJS installed.  To begin, simply run:

user@computer:$ npm install zenhance

This installs the Zenhance module and all of its dependencies.

Next, create a server.js file with a single line of code

require('zenhance').run()

Then execute the server.js file with node

user@computer:$ node server.js

That’s all there is to it.  Full documentation and more information is available on Github and on collaboradev at Zenhance.

Deploying Go Revel Apps On Heroku

I’ve spent quite a bit of time figuring out how to successfully deploy Go Revel applications to Heroku; luckily for you, I’ve worked out the kinks.  Let’s go through the steps one by one and I’ll show you how I did it.

Install Go

First install Go if you haven’t already, this should be pretty self explanatory.

Install Revel

Next, install the Revel framework.  Change to the directory where you have your GOPATH configured, for example:

user@computer:$ cd $GOPATH

Then execute the following to install Revel:

user@computer:$ go get github.com/revel/revel
user@computer:$ go get github.com/revel/cmd/revel

Update your path to contain the Revel binary:

user@computer:$ export PATH=$PATH:$GOPATH/bin

Now change directory to the src directory:

user@computer:$ cd $GOPATH/src

Create the app

If you haven’t already done so, create a basic vanilla app using revel by executing the following command (replacing “myapp” with your application name of course):

user@computer:$ revel new myapp
user@computer:$ cd myapp

Next, let’s make sure the app runs by executing the run command:

user@computer:$ revel run myapp

Open a browser to http://localhost:9000 (or whatever port you have your application configured to run on) and verify that everything is working as expected.

Install Heroku Toolbelt

Download and install the Heroku Toolbelt: https://toolbelt.heroku.com/

Next login to Heroku using the toolbelt:

user@computer:$ heroku login
Enter your Heroku credentials.
Email: adam@example.com
Password (typing will be hidden):
Authentication successful.

Once that’s done you’ll need to create the application.  If you’ve already done so by way of the Heroku web GUI, you can skip this step; if not, execute the following (once again replacing “myapp” with your actual app name):

user@computer:$ heroku create -b https://github.com/revel/heroku-buildpack-go-revel.git myapp

This will ensure that the revel go buildpack is being used.

If you’ve already created the application using the Heroku web GUI, then go under settings and set the buildpack to use “https://github.com/revel/heroku-buildpack-go-revel.git”


Initialize Git

Next ensure that you are still under your application directory “src/myapp” and execute the following:

user@computer:$ git init

Now that your git is initialized, let’s link it to the Heroku online app repo by executing the following (replace myapp with your actual app name):

user@computer:$ heroku git:remote -a myapp

Install Godep

Now, we need to install Godep for our project to work correctly on Heroku, install by executing the following:

user@computer:$ go get github.com/tools/godep

Next, let’s automatically update all of our dependencies using godep, execute the following under your application directory:

user@computer:$ godep save ./...

If this ran correctly, you should see a “Godeps” and a “vendor” folder under your application directory. Note: pay special attention to the “./...” that needs to be there verbatim.

 

Test Heroku Local

To ensure everything is going to work in the Heroku environment, we should first test the application locally by executing the following:

user@computer:$ heroku local

[WARN] ENOENT: no such file or directory, open 'Procfile'
[FAIL] No Procfile and no package.json file found in Current Directory - See heroku local --help

You probably noticed the errors there; this isn’t really an issue, we just need to complete a few things.  First let’s create a file called “.env” and add an environment variable to it that tells Heroku to use a Go version later than 1.5. Execute the following:

user@computer:$ echo "GO15VENDOREXPERIMENT=1" > .env

This file should be located at the root of your application directory (“myapp/.env”).

Next, let’s create a simple Procfile to execute our revel run command.  Execute the following:

user@computer:$ echo "web: revel run myapp prod 9000" > Procfile

Finally, let’s add our application name to the “.godir” file.  Execute the following:

user@computer:$ echo "myapp" > .godir

Now, let’s run “heroku local” again, you should see something similar to the following:

user@computer:$ heroku local

[OKAY] Loaded ENV .env File as KEY=VALUE Format
3:37:24 PM web.1 | ~
3:37:24 PM web.1 | ~ revel! http://revel.github.io
3:37:24 PM web.1 | ~
3:37:24 PM web.1 |
3:37:24 PM web.1 | INFO 2016/06/29 15:37:24 revel.go:365: Loaded module static
3:37:24 PM web.1 | INFO 2016/06/29 15:37:24 revel.go:365: Loaded module testrunner
3:37:24 PM web.1 | INFO 2016/06/29 15:37:24 revel.go:230: Initialized Revel v0.13.1 (2016-06-06) for >= go1.4
3:37:24 PM web.1 | INFO 2016/06/29 15:37:24 run.go:57: Running myapp (myapp) in dev mode
3:37:24 PM web.1 | INFO 2016/06/29 15:37:24 harness.go:170: Listening on :9000
3:37:24 PM web.1 |

Now open your browser to http://localhost:9000 again and verify that it runs correctly in the heroku environment.

If everything runs well locally, then we’re ready to deploy.

Deploy To Heroku

First let’s add all of our changes to git, execute the following:

user@computer:$ git add .

Next, let’s commit:

user@computer:$ git commit -m "Initial commit"

Now let’s push the changes to Heroku

user@computer:$ git push heroku master

Counting objects: 377, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (352/352), done.
Writing objects: 100% (377/377), 845.31 KiB | 0 bytes/s, done.
Total 377 (delta 96), reused 0 (delta 0)
remote: Compressing source files... done.
remote: Building source:
remote:
remote: -----> Fetching set buildpack https://github.com/revel/heroku-buildpack-go-revel.git... done
remote: -----> Revel app detected
remote: -----> Installing go1.6... done
remote: Installing Virtualenv... done
remote: Installing Mercurial... done
remote: Installing Bazaar... done
remote: -----> Running: godep restore
remote:
remote: -----> Discovering process types
remote: Default types for buildpack -> web
remote:
remote: -----> Compressing...
remote: Done: 127.8M
remote: -----> Launching...
remote: Released v4
remote: https://obscure-beyond-93544.herokuapp.com/ deployed to Heroku
remote:
remote: Verifying deploy... done.
To https://git.heroku.com/obscure-beyond-93544.git

Once completed you should be ready to go.  Next run the following to test:

user@computer:$ heroku open

If you received errors during deployment or testing, you may be able to help resolve your issue by viewing the heroku logs:

user@computer:$ heroku logs

 

Hopefully everything turned out well for you.  If it didn’t, drop me a line and I’ll see if I can help you out.

Drawing on Textures in ThreeJS

If you’re an avid user of ThreeJS, you have more than likely at some point run into a situation where you need to either create a texture from scratch for the purpose of coloring it, or have had the need to update or draw on a texture for one reason or another.  The following will walk you through what is needed to accomplish this.

First, start by creating an in memory canvas object that we will use to draw on.  This will eventually be transformed into a texture object.

//create in memory canvas
var canvas = document.createElement("canvas");
canvas.width = 100;
canvas.height = 100;
var context = canvas.getContext("2d");

 

For this example, I’m creating a simple 100×100 pixel image and retrieving the 2d context, which we will use next to perform the actual modification of the image.

//paint the image red
context.fillStyle = "#ff0000";
context.fillRect(0, 0, 100, 100);

 

Here we’re setting the fill color to red and painting the entire image.  Next, let’s convert our in memory canvas object into an image.

//create image from canvas
var image = new Image();
image.src = canvas.toDataURL();

 

Finally, let’s take our new image and create a texture object which can be applied to a material.

//create new texture
var texture = new THREE.Texture(image);
texture.anisotropy = 4;
texture.needsUpdate = true;
 
//apply texture to a material, go forth and be merry

 

That’s really all there is to it; you can make the context draw lines, rects, etc. onto the in memory canvas to your heart’s content.  For convenience, the following is the entire code example wrapped up into one.

//create in memory canvas
var canvas = document.createElement("canvas");
canvas.width = 100;
canvas.height = 100;
var context = canvas.getContext("2d");
 
//paint the image red
context.fillStyle = "#ff0000";
context.fillRect(0, 0, 100, 100);
 
//create image from canvas
var image = new Image();
image.src = canvas.toDataURL();
 
//create new texture
var texture = new THREE.Texture(image);
texture.anisotropy = 4;
texture.needsUpdate = true;
 
//apply texture to a material, go forth and be merry

New Rubik’s Cube jQuery ThreeJS WebGL Plugin

I have previously released a jQuery Rubik’s cube plugin comprised of only HTML, JavaScript and CSS.  Although pretty nifty, there were a few issues with regards to support in some browsers, most notably Firefox and Internet Explorer, so I decided to recreate the plugin using WebGL.  The result is my jquery.cube.threejs.js plugin, as seen below.

The plugin code is available here: https://github.com/godlikemouse/jquery.cube.threejs.js

See the Pen jQuery ThreeJS Rubik’s Cube by Jason Graves (@godlikemouse) on CodePen.

This plugin utilizes standard cube notation, both modern and old to execute moves. For example, the moves executed by the 3×3 cube above are utilizing the following notation:

x (R’ U R’) D2 (R U’ R’) D2 (R U R’) D2 (R U’ R’) D2 (R U R’) D2 (R U’ R’) D2 R2 x’

This algorithm is being passed to the cube plugin by way of the execute() method or data-moves attribute.

This plugin currently supports a number of configuration options that can be passed at instantiation time, including cube face colors and animation delay time, with more to come shortly.

This incarnation of the plugin also supports cube sizes from 2×2 up to 9×9 with no issues.  Cubes larger than 9×9 may see some degradation in frame rates depending on the speed of the client device.

For more information on this plugin or to view the source please visit: https://github.com/godlikemouse/jquery.cube.threejs.js

New Rubik’s Cube jQuery Plugin

I consider myself an avid speed cuber. Not world class, but fast enough and not too obsessed with it. That being said all the sites I’ve seen in the past with references to algorithms all seem to use really clunky client side Java or Flash plugins to handle the demonstration of moves.  I wanted to create a simple to implement jQuery plugin that utilizes the browser’s built in features to do the same thing.  The result is my jquery.cube.js plugin. As seen below.

The plugin code is available here: https://github.com/godlikemouse/jquery.cube.js

 

See the Pen jQuery and CSS Implemented Rubik’s Cube by Jason Graves (@godlikemouse) on CodePen.

This plugin utilizes standard cube notation, both modern and old to execute moves. For example, the moves executed above are utilizing the following notation:

x (R’ U R’) D2 (R U’ R’) D2 (R U R’) D2 (R U’ R’) D2 (R U R’) D2 (R U’ R’) D2 R2 x’

This algorithm is being passed to the cube plugin by way of the execute() method.

This plugin currently supports a number of configuration options that can be passed at instantiation time, including cube face colors and animation delay time, with more to come shortly.

For more information on this plugin or to view the source please visit: https://github.com/godlikemouse/jquery.cube.js

Responsive CSS Truncate and Ellipsis

We’ve all run into situations where we need to truncate text due to length constraints.  Most of the time this simply requires a truncation function which determines the maximum length of text and, if the string exceeds that length, truncates it and adds an ellipsis (“…”).  But what about when we are developing responsive web applications that require text to be truncated according to the current device screen or browser size?  Luckily CSS3 supports a text-overflow property value called “ellipsis”; this however also requires that the bounding box be defined with an overflow and a definite width and height.  Or does it…?

I’ve found an interesting way to implement CSS text truncation in a responsive setting that can be used with responsive layouts such as Pure, Bootstrap, etc.

The following code creates a single line responsive truncate and ellipsis behavior.

.truncate-ellipsis {
    display: table;
    table-layout: fixed;
    width: 100%;
    white-space: nowrap;
}
 
.truncate-ellipsis > * {
    display: table-cell;
    overflow: hidden;
    text-overflow: ellipsis;
}

 

<div class="truncate-ellipsis">
    <span>
        Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nullam porta tortor vitae nisl tincidunt varius. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Ut vel ullamcorper tortor. Nullam vel turpis a augue tempor posuere vel quis nibh. Nam ultrices felis turpis, at commodo ipsum tristique non. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; Suspendisse in accumsan dui, finibus pharetra est. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Quisque vitae velit eu dui rutrum pellentesque vel imperdiet sem. Morbi ut lacinia lacus, in commodo nibh. Sed cursus ante ut nunc molestie viverra.
    </span>
</div>

 

This works by taking a parent element, in this case a “div” with the class “truncate-ellipsis”, and turning it into a display type table with a fixed table layout. Its child element, in this case a “span”, is transformed into a table cell with overflow set to hidden and the text-overflow property set to ellipsis.  This of course can be modified to suit specific needs of height and container size and should act as a pretty good starting point for a basic implementation.