I’ve entered the world of digital photography with a DSLR.  A good friend of mine said I have “entered the money pit”.  This has proven true.  Suddenly, things like aperture, exposure, and shutter speed have entered my vocabulary.  I’ve spent way too much time on Ken Rockwell’s website, researching lenses, flashes, and shot composition, and I’ve opened the firehose of information.

I bought a Nikon D40.  After researching, reading, debating, and re-reading, I found it to be the best option for me right now, as opposed to overspending on a body.  By purchasing an older body with a kit lens, I can really test the waters of how far I want to go with this hobby.  If it turns out I take crummy pictures, I’m not out as much money.  If I fully dive in, it’s still a great body to learn on, and by learning its faults, I’ll be better prepared to upgrade.  The D40 itself is a 6.1MP camera.  Many people, including myself up until about 2 weeks ago, think that a higher megapixel count automatically translates into better pictures.  Not necessarily true.  You can take a blurry 15MP shot just as easily as a blurry 6.1MP shot.  Granted, getting that awesome shot with 15 megapixels gives you the flexibility to crop more, but I’d like to hope I can grow to a point where the shots I take don’t require much editing.  Plus, by staying with the Nikon line, I know any lens I buy will be compatible with the next body I decide (if ever) to buy.

I’m still learning about the camera, but I think the next items on my list are the following:

Nikon SB-400 Flash

Nikon 35mm f1.8 AF-S

– or –

Tamron 70-300mm f/4-5.6

Here are some sample shots I’ve taken with the D40 and the 18-55mm kit lens:

(Photo: Nikon D40, Nikon 18-55mm at f/5.6)

(Photo: Nikon D40, Nikon 18-55mm at f/5.6)

(Photo: Nikon D40, Nikon 18-55mm at f/5; taken before my white-balance lesson, and with just a bit of blur)

You’re likely aware that the TSA has been putting new scanning machines in airports around the country, as well as implementing policies for a more ‘invasive’ pat-down procedure should an ‘anomaly’ show up on the scan, or should the passenger choose to opt out of the scan.  This post isn’t about the potential health concerns of the scanning, nor about likening the pat-down procedure to sexual assault.  This post is largely about the effectiveness of the TSA as an organization when it comes to the safety of Americans.

9/11 was obviously one of the major tragedies in American history.  It was a horrific event that caused thousands of people to lose their lives prematurely, carried out by a group of men who took over the airplanes using box-cutters and fear.  What happened?  Summarizing what I understand to be the driving factor behind the creation of the Department of Homeland Security: a lack of communication and intelligence sharing among government agencies.  Could the events that unfolded that tragic morning have been prevented had proper communication channels been available?  We’ll never know.  Regardless, the American public needed to know it was safe to fly in the days, weeks, months, even years that followed.  Steps were taken.  DHS was created.  Cockpit doors were reinforced.  Additional training and ticketing procedures were put in place to raise flags.

Then came the shoe bomber.  His attempt to detonate explosives hidden in his shoes failed, but it gave the TSA another reactionary element to add to its screening procedures.  We now take our shoes off to be scanned.

Enter the underwear bomber of Christmas 2009.  On an international flight to Detroit, he attempted to detonate a device hidden in his underwear.  The attempt to cause destruction ultimately failed.

Now we have scanners that see through clothes.  Do you see the pattern?  Everything the TSA has done to this point has been reactionary.  In the years following 9/11, were there attempted hijackings of domestic flights?  None that I’m aware of.  Were there any cases of explosive devices found in the shoes we all removed to have scanned?  Again, none that I’m aware of.  It has only been a year, but no other individual has attempted to ignite his underwear, that I’m aware of, on a domestic flight within the US, let alone on another international flight.  Why does the government assume that terrorist organizations will only use air travel as a means of destruction?  It happened once.  Yes, it was tragic, horrible, and inexplicably terrible.  But does that mean we should now treat everyone who flies as a suspect?  It doesn’t make sense.

Why is security so paramount for air travel, to the extent it is today?  The metal detectors and baggage scans appear to be working just fine.  Statistically speaking, I’m putting myself in MORE danger by choosing to drive than by flying in a plane full of unscanned individuals.

As a corollary, does anyone fear entering a federal building on the chance that someone could park a Ryder van packed with explosives outside?  No.  Measures were taken in the months following, I believe, to ensure a vehicle could not be parked that close to the building, but no rights are violated, and no one is searched beyond a metal detector on the way in.

Logically, it just doesn’t make sense that this much security is required to partake in air travel.  More people die in DUI-related accidents each year than were killed on 9/11, yet there are no checkpoints upon entering the interstate, and no alcohol detection is required to start a vehicle.

I’ve had a problem with the local Tires Plus location (#244242) I recently visited.

I took my 2005 Toyota 4Runner in for a full set of new tires (all 4 replaced).  A couple of hours later, I was called and told by the technician that they had replaced a TPMS sensor.  They did not state that it was broken prior to my bringing it in, nor did they state that the technician broke it while replacing the tire.  They did state that there would be no charge to me (I assume this is an acceptance of fault).

I later picked up my vehicle and received a receipt showing the replaced TPMS sensor at no charge.  Within 36 hours of having this work done, the TPMS light illuminated and began flashing on the dashboard.  Upon consulting my owner’s manual, I found that this indicates a ‘malfunction’ of the TPMS system, not a pressure problem with the tires.  I checked the pressure to confirm this.

I brought the vehicle back to Tires Plus to have the light evaluated.  I was told they were booked and to bring it back Sunday.  I did so, and was again told they were booked; could I come back Monday?  I did.

On Monday, I was told that they ran a test and the light was signaling a problem with the spare tire sensor, which had NOT been worked on.  I was told they’d have to replace it, but it wouldn’t be free, since it was a sensor they hadn’t worked on.  They reset the indicator, and within 24–36 hours it came on again.

This is extremely suspect.

a)  I was not told why they needed to replace one of the TPMS sensors in the first place.

b)  If I was given the original replacement for free, with no admitted fault on the technician’s part, why not give me another for the spare?

c)  There was no indication of a problem prior to bringing my vehicle to Tires Plus, yet within 36 hours a sensor dies?

What really ticks me off about the whole process is that they attempt to up-sell you at every step, yet when it comes down to basic tire replacement (well, what I consider basic for a TIRES store), they can’t do the job 100% correctly.  If they can’t replace tires correctly, why on God’s green earth would I trust them to clean my fuel system or check my brakes?

I’ve sent a communication via the Tires Plus website.

UPDATE (11/23/2010 10:00 AM): After speaking on the phone with a very defensive district manager, Tires Plus will be replacing the sensor on the spare tire.  If that does not resolve the issue, they will wash their hands of the situation.  He asked why I’m so distrustful, and my only response was “Because I had no issue, had tires replaced, and now have an issue”.  I do not think my reasoning is unreasonable.  Stay tuned.

UPDATE (11/24/2010 11:00 AM): The spare tire sensor has been replaced and all sensors have been ‘re-learned’ by the system.  According to the service manager, all sensors are now reading correctly; if the light comes back on at this point, I will need to take the vehicle to the dealer, as the problem is likely the computer.

Sony decided to disable a key feature on some of its laptops: the VT-x virtualization extensions.  My laptop is one such laptop.  Using various resources available on the internet (FreeDOS, symcmos.exe, and a list of firmware codes), I was able to enable the VT-x extensions.  Awesome.

For those Googling, my laptop model is the Sony Vaio VGN-FZ140N, firmware model R0050J7.  I modified the value '02D3' to '0001' using the symcmos tool, and all is well.

Recently, a friend and co-worker of mine launched a side project of his.  Gootimer is a service you can use to manage your time-tracking tasks.  James is an excellent developer and very meticulous about tracking the time he spends on various tasks.  It’s a quality I envy greatly, as it significantly adds to overall organization and to estimation power for later tasks.  Likewise, it answers the question “Where did last week go?!” in great detail.

Gootimer works like this: you already use Google Calendar to manage meetings and whatnot, right?  RIGHT?  Well, by simply adding an additional calendar for each project you’re currently tasked with, you can track your time as easily as adding a meeting.  Because each event is tied to a project calendar, Gootimer can aggregate the hours spent on each project and give you a nice breakdown of where your time went.  Amazing, right?  There’s more!

In today’s online world, privacy has become king.  We need to protect our data, and access to that data.  At the same time, we want to be able to access that data from wherever possible, whenever we need it.  Gootimer doesn’t store your data – none of it.  You authorize it to access your data (kindly protected by Google), and the data is used in real time.

That’s pretty neat!

Read James’ announcement here: http://www.rodenkirch.com/2010/10/task-based-time-tracking-redux/

I recently encountered a situation where I had a query built with quite a few joins (~8), and I found that the query was taking a bit longer than I expected, especially given the number of rows it had to look at and return.  With each join, you add more complexity for MySQL to handle in deciding how best to utilize indexes, etc.

RoweWare Solutions, LLC is proud to announce its first software offering! Name-O Bingo Cards is a simple application that makes creating custom bingo cards an easier task for anyone who uses them.  Launch the application, edit your word lists, hit Print, and you have fresh, hot Bingo cards!

As this is our first product, as well as its initial release, we’re offering Name-O Bingo Cards at a great discount: $10 for LIFE!  Buy once and you’re entitled to all the updates we release for the product, forever.  Purchasing a license also entitles you to expedited support.  We accept bug reports and feature requests from everyone, but those from licensed users receive priority.  Licensed users are also given priority notice of upcoming releases.  Buy today; it’s an easy $10 to save you the time of creating these cards manually yourself.

We’d love to hear your thoughts!  Comment here or use the contact us form on the Name-O site.

Thanks!

-Dave

I use KDE as my main desktop environment. Recently I was rebuilding an installation and saw that my clock was set to the 24-hour style instead of the American style of 12-hour with AM/PM. Clicking through the settings on the clock widget itself, I found no setting to change it back. I always find the right place eventually, but not for a couple of hours. So, to post this here so that I’ll find it next time: you simply go into System Settings -> Regional & Language -> Time & Dates (tab) and select pH:MM:SS AMPM from the select box.

You may need to log out and back in to restart the clock and make the setting take effect.

Note to ArchLinux users: if you decide to rebuild an installation, and of course you’re going to use yaourt for community-built packages, remember to install ‘base-devel’. If you don’t install ‘base-devel’, you may receive vague messages like “Unable to read PKGBUILD for “. For me, the solution was a simple ‘pacman -S base-devel’.

If you ask someone for an export of data and you know the data is coming from SQL Server, be sure to clarify what encoding you’d like the export in (if they can configure it). I spent a bit of time trying to figure out why I couldn’t reliably read a file, and by using a hex editor, I found the leading bytes were the culprit. Comparing them to a listing on Wikipedia, I found the file was in UTF-16 when I was expecting simple UTF-8 or ASCII. It’s an easy fix, though, if you’re on a *nix machine:

iconv -f UTF-16 -t UTF-8 input_file > output_file

And you’re done! Easy as pie…when you know what the problem is.
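
If you’d rather do the conversion from PHP instead of shelling out to iconv, something along these lines should work. This is just a quick sketch; the file names are placeholders matching the command above.

<?php
// Quick sketch: check for a UTF-16 byte-order mark and convert to UTF-8.
// 0xFF 0xFE is the UTF-16LE BOM; 0xFE 0xFF is the UTF-16BE BOM.
$data = file_get_contents('input_file');
$bom  = substr($data, 0, 2);

if ($bom === "\xFF\xFE" || $bom === "\xFE\xFF") {
    // PHP's iconv uses the BOM to work out the endianness for 'UTF-16'.
    $data = iconv('UTF-16', 'UTF-8', $data);
}

file_put_contents('output_file', $data);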

For a project, we need to create charts dynamically from data.  In another project, we’ve used ChartDirector for this.  It has worked great there, so we pulled it into this project as well.  The type of chart I was working with in particular is a stacked percentage chart, which is kind of like a mash-up between a pie chart and a traditional bar chart.  An example:

(Image: example of a stacked percentage bar chart)

Now, with dynamic data, you can’t predict what your data will look like, and your code needs to be flexible enough to handle any situation without causing headaches for the user.  With ChartDirector, you pass in each dataset as an array that spans the chart, so the datasets for the chart above would be created by:

$data0 = array(100, 125, 245, 147, 67);
$data1 = array(85, 156, 179, 211, 123);
$data2 = array(97, 87, 56, 267, 157);

The numbers line up (vertically) across the arrays according to how they correspond to the bars in the resulting chart. With my test data, I found I had a situation like the following:

$data0 = array(100, 125, 0, 147, 67);
$data1 = array(85, 156, 0, 211, 123);
$data2 = array(97, 87, 0, 267, 157);

So, the bar for the 3rd item in the chart would be distributed equally as 33.33% per segment when, in fact, there was no data. I had assumed the chart would display a blank spot for that bar. After searching the less-than-optimal support forums (through no fault of the creators/maintainers), I found I needed to instead use a constant defined in the ChartDirector code, ‘NoValue’, where I had…wait for it….no value. Putting a small check in my code to replace zeros with ‘NoValue’ produced the results I was after.
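
For what it’s worth, the “small check” is nothing fancy. Here’s a minimal sketch, assuming the standard phpchartdir.php include (which is where the NoValue constant is defined):

<?php
require_once 'phpchartdir.php';  // ChartDirector for PHP; defines NoValue

// Replace zeros with NoValue so empty bars render as gaps
// instead of being split into equal 33.33% segments.
function zerosToNoValue(array $data)
{
    foreach ($data as $i => $value) {
        if ($value == 0) {
            $data[$i] = NoValue;
        }
    }
    return $data;
}

$data0 = zerosToNoValue(array(100, 125, 0, 147, 67));
$data1 = zerosToNoValue(array(85, 156, 0, 211, 123));
$data2 = zerosToNoValue(array(97, 87, 0, 267, 157));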

Note to fellow developers: if you’re posting in a forum for assistance, please be more verbose in your subject line. It really helps with searching if the subject provides some context about what the problem is.

FYI – if you’re using jQuery in your application and you’re trying to submit a form programmatically (i.e., $('#myform').submit();), you’ll want to make sure you don’t have a button with an ID or name of ‘submit’. If you do, the call fails silently, with no indication of why. This is something that caught me today, and it was somewhat frustrating since it was such a basic task: getting a form to fire.

Reference: http://api.jquery.com/submit/#comment-30950448 – Thanks Scotty!

You can now sign in to Facebook Chat using your favorite XMPP/Jabber client (Pidgin, Adium, Kopete, etc.). If you’re on Linux (Arch, specifically), you’ll need to install the cyrus-sasl package.

I love SSH tunnels. I use them as a cheap VPN solution when traveling, and when I need access to an internal web server on the inside of a network (assuming the network isn’t segmented). As an example, I have 2 computers at home which I use daily for development. When traveling, I have a laptop that I use. I use VirtualBox at home, since the computers there have plenty of RAM to support it, but my laptop isn’t as VM-friendly (it’s old, but it has served me well and will continue to do so until it croaks), so I needed a way to access the applications running on the VM while on the go. Enter SSH tunnels. An SSH tunnel works by opening a local port over which traffic flows to the remote location. Using ‘dynamic’ port forwarding, you get a SOCKS proxy.

You create SSH tunnels using:

ssh -D 8080 username@remote_server

This opens port 8080 on the local machine. Then you can configure your browser of choice to use a SOCKS v5 proxy at 127.0.0.1:8080. In Firefox specifically, make sure that none of your other proxy settings are set.

Your Firefox connection settings should have only the SOCKS host filled in, pointing at 127.0.0.1 on port 8080.

Now, you can check the IP address for your connection by visiting a site like: http://www.whatsmyip.org/

A quick shout-out to a great product: Concrete5 is an excellent CMS. With easy theming and even easier setup, it’s a snap solution for even the most particular of tastes.

It’s open source, which I really like, but the ease of getting it set up and the polished look and feel just make me happy to use it.

Great work guys!

At work, we’re developing an application that uses LDAP for authentication. Specifically, we’re using OpenLDAP. We use a VM for development, which allows each developer to have a copy of the ‘standard’ environment, ensuring we’re all on the same versions of libraries, compilers, databases, etc. As part of managing the VMs, we write maintenance scripts to keep everyone’s VM in line with the others. I wrote a script to install a baseline installation of OpenLDAP. I thought I’d covered my bases with permissions, but upon startup OpenLDAP created a new file owned by root with 0600 permissions, which meant no one but root could read or write it. I had configured OpenLDAP to run as ‘openldap’, so of course it couldn’t read the file. Unfortunately, the error message is less than helpful:

'0x50 (Other (e.g., implementation specific) error): updating: <my DN, etc etc>'

Checking the file permissions under /var/lib/ldap, I saw a file, objectClass.bdb, owned by root. I changed its ownership to openldap:openldap, and all is well.

Moral of the story: Always check file permissions. Especially after starting up the server.

Recently, I was tasked with creating a single sign-on solution for phpBB: the user logs in to our application, and when clicking a link to the support forum, they’re already logged in there. phpBB isn’t known for having a great API to integrate with, but the code works, and the product works. The authentication works on the premise of providing credentials and logging in, which creates a session in the phpBB database. A cookie value is set, which ties the user to the server-side session. With two disparate systems, the domains might be different but live under the same parent domain. This means that if we can get the session ID and set a cookie one domain level up, we can be logged in.

The implementation is fairly simple: upon login, we use cURL (or something similar) to generate a POST request with the username and password. The remote script grabs the session ID and user ID and returns the values to the originating server. We then set the cookie values.

Now, the interesting bit: phpBB has multiple layers by which it validates the session. First, since our server is originating the request, we don’t have the same IP as the user. Second, phpBB uses the browser’s User-Agent string to validate the session, and with cURL, we don’t have a browser. With cURL you can set various options (the User-Agent string, an X-Forwarded-For header, etc.), but if you’d rather not depend on that, you can simply un-check those validation settings in phpBB.

Of course, I’d recommend using the cURL options, but un-checking the phpBB settings is fine to get you started and ensure the connectivity is working.
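
For reference, here’s a rough sketch of what the originating application’s side might look like. The endpoint URL, form field names, response format, and cookie names are all hypothetical; adjust them for your own phpBB setup.

<?php
// Hypothetical sketch: POST the credentials to a helper script on the
// forum server, which logs the user in via phpBB and echoes back
// "session_id:user_id".
$username = 'someuser';   // placeholders: in reality these come from our login flow
$password = 'secret';

$ch = curl_init('http://forum.example.com/remote_login.php');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, array(
    'username' => $username,
    'password' => $password,
));
// Mimic the user's browser so phpBB's User-Agent check passes.
curl_setopt($ch, CURLOPT_USERAGENT, $_SERVER['HTTP_USER_AGENT']);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);

list($sessionId, $userId) = explode(':', trim($response));

// Set the phpBB cookies one domain level up so both apps can see them.
// 'phpbb3_xxx' stands in for your board's cookie name prefix.
setcookie('phpbb3_xxx_sid', $sessionId, 0, '/', '.example.com');
setcookie('phpbb3_xxx_u',   $userId,    0, '/', '.example.com');

If you go the header route instead of un-checking the phpBB settings, the X-Forwarded-For header mentioned above can be added via CURLOPT_HTTPHEADER.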

I’m evaluating a DataGrid for use in a project that is using the Zend Framework, and I came across the ZFDataGrid project.  Fantastic work, and the grid works wonderfully.  It enables you to filter your data and export it in various formats (PDF, Doc, Docx, OpenOffice, etc).  The sample on the site works exactly like this.  The only issue is that the manual doesn’t exactly explain how to enable the export functionality.  It doesn’t ‘just work’, but it was reasonably easy to find, since the code for the sample site is on Google Code.  It just isn’t in an intuitive place like the manual or the site itself.  So, to hopefully save someone else some time, I’ll post the code here – it is from the sample SiteController and was not originally written by me.

$export = $this->getRequest ()->getParam ( 'export' );
 
switch ($export)
{
    case 'odt' :
        $grid = "Bvb_Grid_Deploy_Odt";
        break;
    case 'ods' :
        $grid = "Bvb_Grid_Deploy_Ods";
        break;
    case 'xml' :
        $grid = "Bvb_Grid_Deploy_Xml";
        break;
    case 'csv' :
        $grid = "Bvb_Grid_Deploy_Csv";
        break;
    case 'excel' :
        $grid = "Bvb_Grid_Deploy_Excel";
        break;
    case 'word' :
        $grid = "Bvb_Grid_Deploy_Word";
        break;
    case 'wordx' :
        $grid = "Bvb_Grid_Deploy_Wordx";
        break;
    case 'pdf' :
        $grid = "Bvb_Grid_Deploy_Pdf";
        break;
    case 'print' :
        $grid = "Bvb_Grid_Deploy_Print";
        break;
    default :
        $grid = "Bvb_Grid_Deploy_Table";
        break;
}
 
$grid = new $grid (false, 'DataGrid Example', '/tmp', array('download'));
$grid->setDataFromCsv(dirname(__FILE__).'/Detail_Limited.csv');
$grid->imagesUrl = '/images/';
 
$this->view->grid = $grid->deploy();

The code at the end is mine; it basically tells the DataGrid where to render/save the exported file, which is then immediately sent for download. I’m also not using the Zend_Db stuff for the data. As a proof of concept, I’m using a simple CSV dataset, which works amazingly well. The filters, sorting, and pagination still work with CSV.

I’m thinking about writing an adapter for Doctrine, such that one could construct a Doctrine query object, pass it into the DataGrid, and everything would work, as it does with the Zend_Db counterparts.

In my previous post, I used a key style that is open to debate, and has been for many years amongst DB folks: the idea of every table having a surrogate key, regardless of the purpose of the table. This means that for any record in the table, I have a single column that acts as the primary key. Given a many-to-many relationship, using a surrogate key on the linking table allows me to describe the relationship in terms of objects and how they’re represented. Picture a users table, a roles table, and a user_role linking table: each user may have many user_role records, each of which is tied to a single role. This makes the lives of ORMs much easier, since you can create an object for the linking table, which has a simple key to reference.

The ORM then has User, UserRole, and Role objects to use in accessing these tables and adding or removing relationships with ease, since it only needs to worry about the single surrogate ‘id’ key on each table.  In the linking table (as a design concern), one should place a unique index on the user_id/role_id column combination.

The other option is a composite key (I may have the specific terminology wrong).  The idea is that instead of a single surrogate key identifying a unique record in the linking table, the columns that are foreign keys to the respective tables are combined to form the primary key.  The combination of the columns creates a unique identifying key.  The difficulty emerges when ORMs attempt to create objects out of this design and to correctly generate the SQL required for updates, deletes, etc., using each member of the composite key.

Personally speaking, I’m a fan of the surrogate key approach, but I’ve worked with both.  I won’t discuss the performance impacts of either design, since I don’t have nearly the research base to describe them accurately.  But with simple integer-based keys, the difference should be small.

MySQL provides cascading updates and deletes on foreign key relationships, but I tend not to use them, specifically because I want to control just how far those updates and deletes cascade!  But consider a situation where I have a design similar to this:

(Diagram: a foo table with related bar, baz, and zap tables)

I would like to be able to remove a single Foo without having to first remove all the associated data from the other 3 tables.  Or rather, I know the ID of the Foo I want to remove, so instead of running multiple queries to find the associated rows, let’s just knock it out with a single multi-table delete!

-- Multi-table delete: remove a foo and its related bar, baz, and zap rows
-- in one statement (column names assumed from the relationships above).
DELETE zp, bz, br, f
FROM
  db.zap AS zp,
  db.baz AS bz,
  db.bar AS br,
  db.foo AS f
WHERE
  zp.baz_id = bz.id AND
  zp.bar_id = br.id AND
  bz.foo_id = f.id AND
  br.foo_id = f.id AND
  f.id = ?

This will then remove the rows associated with the single Foo record I’ve referenced, in one fell swoop.