Idea: Building A Quantum Interface Card (QIC)

A few years ago, an idea came to me regarding quantum entanglement and networking/communication technology.  After spending some time researching how to patent the idea, I found that others had had similar ideas but were not interested in developing them.  To this end, I thought I might share my idea with the world, freely, in the hope that someone, someday, might find it useful in furthering not only how we communicate digitally, but how we share information as a whole.  The idea is described below in the simplest terms possible, so as not to lose readers not currently versed in quantum theory.  This post also contains a simple entanglement primer.

Understanding Quantum Entanglement

To start, one must understand a few basic observations from quantum theory.  First, particles entangled at the quantum level retain their entangled state regardless of the distance between them.  These particles share an interesting relationship, which can be described as follows.

Imagine for a moment that a quantum blender sat in front of you.  This magical device would take two particles, blend them up and spit out two entangled particles.  Now, imagine that the quantum particles were actually two tennis balls, one orange and one green.  Once the tennis balls have been placed in the blender and it is turned on, the resulting particles could be observed as having the following characteristics:

The first ball, which was orange, becomes half orange and half green.
The second ball, which was green, becomes half green and half orange, in that order.  Yes, the order matters, and here’s why.

The state of the first tennis ball will always be the exact opposite of the state of the second, and vice versa.  If the first tennis ball is imagined as rotating 180 degrees, the second tennis ball can be visualized as rotating in the opposite direction and taking on the opposite state.  This is where the importance of quantum communication begins.

If the state of the first tennis ball with its orange side facing you is thought of as 1, then the opposite state can be expressed as 0.  At the same time that the first ball’s state is 0, the second ball’s state would be 1.  This is important because synchronizing the two states requires that one end of the entangled pair be inverted, or NOT-ed.  This renders a synchronous state: when the first tennis ball faces you with a state of 1, the second also reads 1, and vice versa (i.e. Orange:1 = Green:1, Orange:0 = Green:0).

Hopefully all of the above helps in understanding the quantum theory being employed here.  If not, please feel free to ask questions in the comments and I will do my best to answer them.  Also, please keep in mind that the example above is simplified, intended only as a reference or mental model for understanding what’s to be explained in the development of a QIC.

Building A Quantum Interface Card (QIC)

In quantum physics it is understood that the sheer act of observing a quantum system changes the state of the system, and this must be taken into account.  Two large hurdles exist at the time of this writing.  The first is stabilizing a quantum particle or particles for a long duration of time.  The second is creating a non-disruptive way of observing a quantum particle, or at the very least, a system of observation by which the state can be read from a quantum system before or during disruption, so as to obtain the correct state of the particle at the point in time of observation.

The Client Interface

In this model, the client interface could be thought of as a handheld device such as a phone, or another user machine such as a desktop/laptop computer, tablet, etc.

The Service Interface

In this model, the service interface could be thought of as the side which has direct or near-direct communication with the service provider, or an access point into a provider’s infrastructure such as a data center.

Implementation

The implementation of the QIC requires one or more particle pairs to be entangled at the quantum level and stabilized, with the client particle(s) residing in the end user’s device (i.e. phone, tablet, computer, etc.) and the service particle(s) remaining at the data center or provider’s data hub.  Information exchanged from the service end would be instantaneously observed by the client and vice versa.  A protocol would need to be established whereby client communication and service communication are each recognized and handled so as to avoid collisions.  The simplest approach that comes to mind is to dedicate one series of observed particles to client-to-service traffic, and another to service-to-client traffic.  This would allow for truly bidirectional and parallel communication.  Information exchanged in this manner could initially be interpreted using simple states such as 1 and 0.  By agreeing on an initial quantum state, any state change thereafter would determine the transmitted bit, either 1 or 0 (1 -> 0, 0 -> 1).  Current compression and encryption techniques used for network communication could still be applied although, since this implementation takes place at the quantum level, one might argue that such techniques may not be necessary for quite some time, if at all.

Realization

Once realized, quantum communication opens up an entirely new avenue of real-time information sharing, easily measured in the gigabyte, terabyte and potentially petabyte-per-second range.  This caliber of wireless data sharing would allow any device supporting a QIC to download/upload information far beyond what current wired/wireless technology allows, moving the bottlenecks to the service provider and to the speed at which the client can actually process and persist the data it receives.

Other applications for QIC communication go beyond terrestrial technology and into that of deep space communication, allowing instantaneous communication with deep space vehicles and probes.

Conclusion

The idea expressed above reflects my vision for the future of wireless communication, in the hope that it will be put into practical use.  If you use this idea, please mention me as one of the authors, and I wish you the very best of luck.  Please feel free to ask any questions regarding any of the information listed above.  Thanks.

Windows 10 Tips And Tricks: Keyboard Shortcuts

Windows 10 is chock-full of very helpful keyboard shortcuts, seemingly more so than any of its predecessors. The following is a list of keyboard shortcuts and combinations that can really help to accelerate your use of applications and features.

Window Commands
Minimize All: Win + M
Toggle Desktop: Win + D
Peek Desktop: Win + ,
Maximize Window: Win + Up
Minimize Window: Win + Down
Pin Window Left: Win + Left
Pin Window Right: Win + Right
Next Window: Alt + Tab
Previous Window: Shift + Alt + Tab

System Commands
System Shutdown: Win + X, U, U
System Signout: Win + X, U, I
System Restart: Win + X, U, R
Lock System: Win + L
Control Panel: Win + X, P
System Information: Win + Pause
Device Manager: Win + X, M
Task Manager: Ctrl + Shift + Esc
Run Dialog: Win + R
File Explorer: Win + E
Search: Win + S
Action Center: Win + A
Project: Win + P
System Settings: Win + I
Windows Ink: Win + W
Share: Win + H
Game Bar: Win + G
Feedback Hub: Win + F
Magnifier Zoom In: Win + +
Magnifier Zoom Out: Win + -
View All Windows: Win + Tab
OS Screenshot: Win + Print Screen

Taskbar Items
Items that have been pinned to the taskbar can be launched using:

Win + {position number}

Where {position number} is the numbered position from left to right.  For example, to launch the first application pinned to your taskbar you would use the following command:

Win + 1

How To: Run Arch Linux on Windows 10

Windows 10 has the ability to run a Linux subsystem.  However, you can now run Arch Linux instead of the default Debian, thanks to some great work by “turbo” at github.com.  Here’s the skinny on getting this set up.

First, download the alwsl.bat file from: https://github.com/alwsl/alwsl

Next, open a Windows command prompt and execute the following:

alwsl install

Follow the steps provided by the batch file and voila! Arch Linux on Windows.

Lastly, just open ArchLinux from the Windows menu and enjoy.

I’ve noticed using Arch Linux instead of Debian on Windows appears to have some performance gains, especially around the usage of Vim and other console editors.

How To: Play Oculus Rift Games On HTC Vive

If you don’t have a VR headset yet, you may be wondering which to get. There seems to be ample support for both, but which one is the best?  Now you don’t have to worry; thanks to some dedicated developers, the answer is easy.  Get the HTC Vive.  Why?  Because the HTC Vive is the more versatile of the two.  You can play either sitting or standing (stationary) or within a specific set of bounds (room scale).  Now you can also play Oculus Rift-only games on the HTC Vive, and here’s how.

First, if you haven’t already, install SteamVR.  You’ve more than likely already done this, but it’s worth mentioning just in case.

Next, install the Oculus Rift Setup software:  http://www.oculus.com/setup

Finally, install the latest version of Revive: https://github.com/LibreVR/Revive/releases

Once those steps are done, launch SteamVR and you’ll have a new button at the bottom of the Steam overlay called “Revive”.

Now that you have Revive installed, there are several free Oculus games that you can play.  Personally, I suggest Farlands; it’s a very cool Viva Pinata-meets-discovery game.  Enjoy the new set of games in your HTC Vive library.

Adding Custom CSS Properties Using gtkmm

Adding custom CSS properties in gtkmm is actually pretty simple once you know how.  It took me quite a bit of searching along with some trial and error before I came up with a pretty simple pattern for doing so.  What I was looking for was a way to use custom CSS without having to develop a whole bunch of custom classes which extended Gtk base classes.

The first thing we need to do is create a class that extends Gtk::Widget.  We don’t necessarily have to use the class directly as a visual object; however, it does need to extend the Gtk::Widget class for this to work.  The following is my example of creating simple custom CSS properties.  Let’s start by adding a single custom property called “width”.

 

class CustomCSS : public Gtk::Widget {
 
    public:
        Gtk::StyleProperty<int> width_property;
        int width;
 
        CustomCSS() :
            Glib::ObjectBase("customcss"),
            Gtk::Widget(),
 
            //-gtkmm__CustomObject_customcss-width
            width_property(*this, "width", 100),
            width(100)
            {}
};

Let’s examine the above code to find out what’s really going on here.

 

class CustomCSS : public Gtk::Widget

First, we need to create a class that extends the Gtk::Widget class.

 

public:
    Gtk::StyleProperty<int> width_property;
    int width;

Next, we need to add a public style property that will read the width value, along with a simple member of the type we want to use, in this case an int.

 

CustomCSS() :
    Glib::ObjectBase("customcss"),
    Gtk::Widget(),
 
    //-gtkmm__CustomObject_customcss-width
    width_property(*this, "width", 100),
    width(100)
    {}

Finally, we implement a default constructor where we set the name we will use in our CSS to reference this class (“customcss”), call the Gtk::Widget() constructor, and then begin initializing the custom properties to be used in CSS.  For this, we initialize width_property, give it a CSS property name of “width” and set the default value to 100.  Then we do the same for the backing member by calling width(100).

Now, if we want to use the custom CSS class and properties, we’ll need to add them to our CSS file.  As noted above, in the CSS file the “width” property actually needs to be specified as “-gtkmm__CustomObject_customcss-width”.  The first part, “-gtkmm__CustomObject_”, is the prefix gtkmm uses when registering the custom property; the last two parts, “customcss” and “width”, we control by setting Glib::ObjectBase("customcss") and width_property(*this, "width", 100) respectively.

 

/* somefile.css */
 
* {
    -gtkmm__CustomObject_customcss-width: 500;
}

To enable this custom property, we first define an all selector (“*”) followed by our CSS declaration, “-gtkmm__CustomObject_customcss-width: 500;”.  Now, let’s use it in our C++ application.

 

#include <gtkmm.h>
 
int main(int argc, char ** argv){
 
    //create the application and main window
    Glib::RefPtr<Gtk::Application> application = Gtk::Application::create(argc, argv);
    Gtk::Window main_window;
 
    //setup our css provider and style context
    Glib::ustring cssFile = "/path/to/somefile.css";
    Glib::RefPtr<Gtk::CssProvider> css_provider = Gtk::CssProvider::create();
    Glib::RefPtr<Gtk::StyleContext> style_context = Gtk::StyleContext::create();
 
    //load our css file, wherever that may be hiding
    if(css_provider->load_from_path(cssFile)){
        Glib::RefPtr<Gdk::Screen> screen = main_window.get_screen();
        style_context->add_provider_for_screen(screen, css_provider, GTK_STYLE_PROVIDER_PRIORITY_USER);
    }
 
    //instantiate our custom css class
    CustomCSS css;
 
    //get the width
    int width = css.width_property.get_value();
 
    //do something useful with the width we retrieved from css
 
    //run the application
    application->run(main_window);
 
    return EXIT_SUCCESS;
}

As you can see from the above, we’ve created both a CssProvider and a StyleContext; we load our CSS file using load_from_path, get the main application window’s screen, and finally add the provider by calling add_provider_for_screen.  Once that’s all done we can use our CustomCSS class: simply instantiate it and call get_value() on the property we want to use.  It’s that simple.  If we want more than one property, we just add them to the constructor’s initializer list like so.

 

class CustomCSS : public Gtk::Widget {
    public:
 
    Gtk::StyleProperty<int> width_property;
    int width;
 
    Gtk::StyleProperty<int> height_property;
    int height;
 
    //etc...
 
    CustomCSS() :
        Glib::ObjectBase("customcss"),
        Gtk::Widget(),
 
        //-gtkmm__CustomObject_customcss-width
        width_property(*this, "width", 100),
        width(100),
 
        //-gtkmm__CustomObject_customcss-height
        height_property(*this, "height", 50),
        height(50)
 
        //etc...
        {}
};

 

Once again, we add the new properties to our CSS file.

/* somefile.css */
 
* {
    -gtkmm__CustomObject_customcss-width: 500;
    -gtkmm__CustomObject_customcss-height: 100;
    /* etc... */
}

 

Then we access them the same way as before in our C++ code.

//instantiate our custom css class
CustomCSS css;
 
//get the width
int width = css.width_property.get_value();
 
//get the height
int height = css.height_property.get_value();
 
//get etc...

 

I hope this helps.  I found this a simple and clean way of introducing custom CSS properties without having to create a bunch of Gtk::Widget classes.  As always, thanks for stopping by and let me know if there’s anything I can do to help.

 

 

How To: Xfinity TV and Arch Linux

Hey everyone, I’ve figured out how to get tv.xfinity.com working on Arch Linux today. Here’s what I did in case anyone else needs help:

First install pipelight:

user@computer:$ yaourt -S pipelight

Next, if you don’t already have it, install Firefox:

user@computer:$ sudo pacman -S firefox

Next, update and configure pipelight:

user@computer:$ sudo mkdir /root/.gnupg
user@computer:$ sudo pipelight-plugin --update

Next, enable the flash plugin:

user@computer:$ sudo pipelight-plugin --enable flash

Next, add the plugins to firefox:

user@computer:$ sudo pipelight-plugin --create-mozilla-plugins

Lastly, make sure you have HAL installed:

user@computer:$ yaourt -S hal-flash-git
user@computer:$ yaourt -S hal-info

Once you’ve completed the steps above, completely kill off Firefox (Super+Q), then restart it and browse to tv.xfinity.com; you should be able to watch with no problems.

Arch Linux Hydra Build

For a long time now I’ve been wanting to build my ideal computer. A powerful Linux machine with surrounding monitors. Recently I had the opportunity to do so. The following is a time lapse video of the entire build as well as the configuration files and list of components and where to buy them. This way, if anyone else wants to construct either the same or something along the same lines, they’ll have some footsteps to follow if they need the help.

During the trial and error process there were little milestones along the way.

Finally I got everything working correctly and the Hydra came to life.

Build Components

The following is a list of all the components used in this build along with links to purchase.

Photo (Jason Graves, @godlikemouse): #archlinux #hydra the calm before the storm. #Linux

Photo: On to part three of the Linux Hydra setup, the heads. 24” monitors all around.

Photo: Part 2 wire maintenance of the Linux Hydra complete.

Physical Configuration

I went with dual Nvidia Quadro K620 video cards because they were very inexpensive and can drive up to 4 monitors per card via daisy chaining through DisplayPort 1.2.  The inexpensive monitors I found didn’t support direct daisy chaining, however, so I purchased 2 multi-monitor display adapters which essentially perform the daisy chaining.  Physically, this is the setup.

GPU 1 → DisplayPort Splitter 1 → Monitors 1, 2, 3, 4
GPU 2 → DisplayPort Splitter 2 → Monitors 5, 6, 7, 8

 

Xorg Configuration

My chosen distribution for this build was Arch Linux; yours may be different.  However, the Xorg configuration should be similar if not identical if you’re using the same components.  If not, then perhaps this might still help you along the way.

My first setup was using the nvidia proprietary driver, installed by the following:

user@computer:$ yaourt nvidia-beta-all

This driver set didn’t have Base Mosaic or Xinerama working.  Every time either was turned on, the screen would black out and I would have to hard reset the machine and manually remove the settings.

Visually, what I was setting up was 2 screens, one for the first GPU and another for the second.  The use of nvidia-settings made this very painless; I suggest you use it as well if you’re using Nvidia cards.

user@computer:$ sudo pacman -S nvidia-settings
user@computer:$ sudo nvidia-settings

Once you save the changes to the Xorg configuration, you should see the file located at /etc/X11/xorg.conf.

Here’s a visual representation of my screen Xorg setup.

Screen0: Monitors 1 and 2 on top, Monitors 3 and 4 on the bottom.
Screen1: Monitors 5 and 6 on top, Monitors 7 and 8 on the bottom.

Hopefully this helps in understanding what I was trying to achieve. This gave great results and allowed me to get all monitors working however, there were two issues with this setup.

The first: getting a graphical environment to support eight monitors was a bit troublesome. I started off using Gnome 3, tried KDE, etc., yet none of them supported anything other than a dual monitor setup. xrandr did nothing to help here and even reported that the second GPU wasn’t available, which was absolutely not the case. The solution was to install a different desktop which supported multiple monitors; I went with XFCE4.

The second: windows could only be moved within the screen in which they were created. In other words, a window created in Screen0 could only move through monitors 1-4. This became more of a problem over time and was resolved later. Here’s my first working xorg.conf file.

# nvidia-settings: X configuration file generated by nvidia-settings
# nvidia-settings: version 370.28 (buildmeister@swio-display-x64-rhel04-17) Thu Sep 1 20:21:47 PDT 2016
 
Section "ServerLayout"
 Identifier "Layout0"
 Screen 0 "Screen0" 0 0
 Screen 1 "Screen1" 3840 0
 InputDevice "Keyboard0" "CoreKeyboard"
 InputDevice "Mouse0" "CorePointer"
 Option "Xinerama" "0"
EndSection
 
Section "Files"
EndSection
 
Section "InputDevice"
 # generated from default
 Identifier "Mouse0"
 Driver "mouse"
 Option "Protocol" "auto"
 Option "Device" "/dev/psaux"
 Option "Emulate3Buttons" "no"
 Option "ZAxisMapping" "4 5"
EndSection
 
Section "InputDevice"
 # generated from default
 Identifier "Keyboard0"
 Driver "kbd"
EndSection
 
Section "Monitor"
 # HorizSync source: edid, VertRefresh source: edid
 Identifier "Monitor0"
 VendorName "Unknown"
 ModelName "Ancor Communications Inc VE248"
 HorizSync 30.0 - 83.0
 VertRefresh 50.0 - 76.0
 Option "DPMS"
EndSection
 
Section "Monitor"
 # HorizSync source: edid, VertRefresh source: edid
 Identifier "Monitor1"
 VendorName "Unknown"
 ModelName "Ancor Communications Inc VE248"
 HorizSync 30.0 - 83.0
 VertRefresh 50.0 - 76.0
 Option "DPMS"
EndSection
 
Section "Device"
 Identifier "Device0"
 Driver "nvidia"
 VendorName "NVIDIA Corporation"
 BoardName "Quadro K620"
 BusID "PCI:1:0:0"
EndSection
 
Section "Device"
 Identifier "Device1"
 Driver "nvidia"
 VendorName "NVIDIA Corporation"
 BoardName "Quadro K620"
 BusID "PCI:2:0:0"
EndSection
 
Section "Screen"
 Identifier "Screen0"
 Device "Device0"
 Monitor "Monitor0"
 DefaultDepth 24
 Option "Stereo" "0"
 Option "nvidiaXineramaInfoOrder" "DFP-2.2.1"
 Option "metamodes" "DP-1.2.1: nvidia-auto-select +1920+1080, DP-1.2.2: nvidia-auto-select +0+1080, DP-1.1.1: nvidia-auto-select +1920+0, DP-1.1.2: nvidia-auto-select +0+0"
 Option "SLI" "Off"
 Option "MultiGPU" "Off"
 Option "BaseMosaic" "off"
 SubSection "Display"
 Depth 24
 EndSubSection
EndSection
 
Section "Screen"
 Identifier "Screen1"
 Device "Device1"
 Monitor "Monitor1"
 DefaultDepth 24
 Option "Stereo" "0"
 Option "metamodes" "DP-1.2.1: nvidia-auto-select +0+0, DP-1.2.2: nvidia-auto-select +1920+0, DP-1.1.1: nvidia-auto-select +0+1080, DP-1.1.2: nvidia-auto-select +1920+1080"
 Option "SLI" "Off"
 Option "MultiGPU" "Off"
 Option "BaseMosaic" "off"
 SubSection "Display"
 Depth 24
 EndSubSection
EndSection

The above configuration eventually became a bit problematic, and I reached out to Nvidia to find out if there was anything that could be done to get Base Mosaic and Xinerama working in their driver.  The response I received was to try the following driver:

http://www.nvidia.com/download/driverResults.aspx/106780/en-us

This turned out to be very helpful.  Once the driver was installed, I once again turned on Base Mosaic and, to my surprise, the entire panel of screens was utilized under a single Screen0 entry.  This made it possible to drag windows from Monitor 1 all the way across to Monitor 8.

Screen0: Monitors 1, 2, 5 and 6 on top; Monitors 3, 4, 7 and 8 on the bottom.

This final Xorg.conf file is what I decided to keep and use going forward.

# nvidia-settings: X configuration file generated by nvidia-settings
# nvidia-settings: version 367.44 (buildmeister@swio-display-x86-rhel47-01) Wed Aug 17 22:53:32 PDT 2016
 
Section "ServerLayout"
 Identifier "Layout0"
 Screen 0 "Screen0" 0 0
 InputDevice "Keyboard0" "CoreKeyboard"
 InputDevice "Mouse0" "CorePointer"
 Option "Xinerama" "0"
EndSection
 
Section "Files"
EndSection
 
Section "InputDevice"
 # generated from default
 Identifier "Mouse0"
 Driver "mouse"
 Option "Protocol" "auto"
 Option "Device" "/dev/psaux"
 Option "Emulate3Buttons" "no"
 Option "ZAxisMapping" "4 5"
EndSection
 
Section "InputDevice"
 # generated from default
 Identifier "Keyboard0"
 Driver "kbd"
EndSection
 
Section "Monitor"
 # HorizSync source: edid, VertRefresh source: edid
 Identifier "Monitor0"
 VendorName "Unknown"
 ModelName "Ancor Communications Inc VE248"
 HorizSync 30.0 - 83.0
 VertRefresh 50.0 - 76.0
 Option "DPMS"
EndSection
 
Section "Device"
 Identifier "Device0"
 Driver "nvidia"
 VendorName "NVIDIA Corporation"
 BoardName "Quadro K620"
 BusID "PCI:1:0:0"
EndSection
 
Section "Screen"
 Identifier "Screen0"
 Device "Device0"
 Monitor "Monitor0"
 DefaultDepth 24
 Option "Stereo" "0"
 Option "nvidiaXineramaInfoOrder" "DFP-2.2.1"
 Option "metamodes" "GPU-05f6316a-a480-b2c4-7618-d19e2bd555aa.GPU-0.DP-1.2.1: nvidia-auto-select +1920+1080, GPU-05f6316a-a480-b2c4-7618-d19e2bd555aa.GPU-0.DP-1.2.2: nvidia-auto-select +0+1080, GPU-05f6316a-a480-b2c4-7618-d19e2bd555aa.GPU-0.DP-1.1.1: nvidia-auto-select +1920+0, GPU-05f6316a-a480-b2c4-7618-d19e2bd555aa.GPU-0.DP-1.1.2: nvidia-auto-select +0+0, GPU-0cc1f3f6-5d22-fe69-aa54-cb7c8c051daa.GPU-1.DP-1.2.1: nvidia-auto-select +3840+0, GPU-0cc1f3f6-5d22-fe69-aa54-cb7c8c051daa.GPU-1.DP-1.2.2: nvidia-auto-select +5760+0, GPU-0cc1f3f6-5d22-fe69-aa54-cb7c8c051daa.GPU-1.DP-1.1.1: nvidia-auto-select +3840+1080, GPU-0cc1f3f6-5d22-fe69-aa54-cb7c8c051daa.GPU-1.DP-1.1.2: nvidia-auto-select +5760+1080"
 Option "MultiGPU" "Off"
 Option "SLI" "off"
 Option "BaseMosaic" "on"
 SubSection "Display"
 Depth 24
 EndSubSection
EndSection

I hope this helps anyone who gets stuck trying to get multiple monitors working under Linux using Nvidia video cards.  Please feel free to drop me a line if you get stuck and I’ll see what I can do to help.

Running Graboid on Linux

Want to run Graboid on Linux?  Cool, here’s how you do it.  First you’ll need to install wine to run Graboid.

user@computer:$ sudo pacman -S wine

Great, next we’ll need to install Graboid.  The good news is I’ve made it very easy for you, simply download my premade version containing everything you’ll need.

Download: Graboid for Linux

Next unpack the tgz file.

user@computer:$ tar xpf graboid.tgz

Under the Graboid directory you’ll notice two files.  The first is graboid.png, which you can copy to your shared icons directory (on Arch Linux this is /usr/share/pixmaps).  The second is a simple bash script which you can use to launch Graboid from the command line.  Feel free to modify this file to your heart’s content.  Also, if for some reason you can’t execute the graboid file, be sure to change its permissions.

user@computer:$ chmod +x graboid

Lastly, run Graboid.

user@computer:$ ./graboid

I hope this helps you get up and running with Graboid on Linux.  Enjoy!

Windows 10 BASH: Tips and Tricks

I’ve been using the Windows 10 BASH shell for quite a while now, forcing myself to get used to it. Along the way I’ve figured out a few workarounds for features such as sshfs, as well as simple copy/paste commands. First, copy and paste. When I started using the Windows BASH shell I found it very frustrating to copy and paste using the CTRL-C and CTRL-V commands. As any Linux user will tell you, these are kill commands and it doesn’t really make sense to use them. However, I did find another set of commands which work much better; here they are.

COPY: Select text, hit ENTER.
PASTE: Right click

So much easier than trying to fiddle with CTRL commands which are sometimes working and sometimes not.

Now let’s talk about sshfs. At the time of this writing, FUSE support was not part of the Windows 10 BASH shell. There is/was, however, a petition to get it introduced; if it’s still not included and you would like to help, vote it up using the link below.

Windows 10 BASH FUSE Support

So, how did I get something similar to sshfs working?  Simple: I put together a couple of shell scripts that utilize rsync, and here’s the gist.  Basically, what we want is a way to pull the files we want to modify locally, use an editor to modify them, and push them back up to the server when they change.  To do this, I created 3 BASH shell scripts named “pull”, “push” and “watcher”.  First, the pull script: it performs an incremental rsync, pulling the remote files into a local directory of your choosing.  The “pull” file looks like this:

#!/bin/sh
 
rsync -au --progress user@someserver:/path/to/remote/directory /path/to/local/directory

 

The above requires that you change “user@someserver” to your actual username and server name.  Also, “/path/to/remote/directory” and “/path/to/local/directory” need to be changed to your specific paths.  For my specific needs, I used something similar to the following:

root@myserver:/var/www/html/zf /mnt/c/Users/Jason/Documents/zf

You’ll notice I have the endpoint of the rsync under the standard Windows file system; this is so I can use a native Windows editor (LightTable in my case) to edit the files.

Next, let’s look at the “push” file.  The “push” file looks like this:

#!/bin/sh
 
rsync -au --progress /path/to/local/directory user@someserver:/path/to/remote/directory

 

The above is very similar to the “pull” file, just in reverse. It will do an incremental push of only the files that have changed.

Lastly, let’s look at the “watcher” file:

#!/bin/sh
 
while ./pull && ./push;do : ;done

 

The above file continually runs an incremental pull followed by a push, over and over, until you kill it (Ctrl-C). After you’ve customized the above files to your liking, all you need to do is execute a pull and then execute the watcher, and you’ll have yourself a poor man’s sshfs.

user@computer:$ ./watcher

 

As an optimization, you can also add --exclude directives to the “pull” file to ignore files and directories, for example:

#!/bin/sh
 
rsync -au --progress --exclude .svn --exclude .git user@someserver:/path/to/remote/directory /path/to/local/directory

Zenhance NodeJS MVC Framework Is Alive

A Zend Framework like MVC approach to building NodeJS enterprise applications in an easy to install NodeJS module.

The new Zenhance NodeJS MVC Framework is up and running and available for install.  The entire framework is comprised of a single NPM module and can be installed and running in less than 5 minutes once you have NodeJS installed.  To begin, simply run:

user@computer:$ npm install zenhance

This installs the Zenhance module and all of its dependencies.

Next, create a server.js file with a single line of code

require('zenhance').run()

Then execute the server.js file with node

user@computer:$ node server.js

That’s all there is to it.  Full documentation and more information is available on Github and on collaboradev at Zenhance.