Notification of Failed Instances

I’ve recently seen questions from several different sources where individuals wanted to understand their options for being notified of failed BPD instances in IBM BPM.  It turns out this feature has existed in BPDs since they were first introduced in Teamworks 5.0, but it seems many people don’t know how to leverage it.  Let’s fix that.

Step One – Task Notification

Likely you already know how to get notified when a task is created and assigned to either you or a group you belong to.  However, let’s review and make sure we are all on the same page.  To make this work, do the following – (note: this assumes you are on IBM BPM 8.x, but the only thing that changes for other environments is the location of these items)

  1. Log in to the Portal for the BPM instance you are targeting.
  2. Click on your name in the upper right-hand corner.
  3. Select “User Profile”.
  4. Make sure you have an e-mail address set.  If you do not, just click on the field to edit it.
  5. Check the checkboxes for task notification.
  6. Click Update.

Now send yourself a task.  Hopefully within a few seconds you get an e-mail notification as well.  If so, go to Step Three; otherwise, check out Step Two.

Step Two – Troubleshoot SMTP server

If you got an e-mail, go on to Step Three.  In my experience, most non-production environments don’t have their SMTP server configured properly, and therefore no e-mail is sent.  Let’s fix that.  Please do the following –

  1. Determine the address of an SMTP server you can use that does not require a login.  (This may be a blocker for some people, but there are any number of SMTP servers you can run locally on Linux or Windows.)
  2. Create a file called something like 105SMTP.xml and put the text below into the file (sorry, I couldn’t get this inline and keep the numbering).
  3. Upload this file to your server and put it in the directory <bpm-install>/profiles/<target profile>/config/cells/<target cell>/nodes/<target-node>/servers/<target-server>/process-center/config
  4. Restart the server.
  5. Retest the task e-mail scenario.

Text for file (replace “localhost” with your SMTP Server) –

<properties>
 <server merge="mergeChildren">
 <email merge="mergeChildren">
 <smtp-server merge="replace">localhost</smtp-server>
 </email>
 </server>
</properties>

Notes:

  • You can see your current setting for smtp-server in the TeamworksConfiguration.running.xml file.
  • If you are in an ND install I have likely told you to do the wrong thing, and you will need to modify the Deployment Manager and synchronize the nodes, but I’ll leave that to the WAS experts.

Step Three – Setup Notification User

Now that your server can send users e-mails, you are ready to tell it to send you the task/e-mail if an instance fails.  Realistically you likely want some sort of generic user that IT staff in general can be added to, but let’s assume here you are a small shop and doing this just for yourself.  We want to create another XML file.  Let’s call this one 110FailureUser.xml.  The content will be –

<properties>
 <event-manager merge="mergeChildren">
   <notify-error merge="replace">apaier</notify-error>
 </event-manager>
</properties>

Obviously, replace “apaier” with the correct user name.  If you had to do Step Two above, the next steps will be familiar (and the same notes apply here).

  1. Upload this file to your server and put it in the directory <bpm-install>/profiles/<target profile>/config/cells/<target cell>/nodes/<target-node>/servers/<target-server>/process-center/config
  2. Restart the server.

Step Four – Test failure scenario

This setting, as you may have determined from the context, creates a task for a specific user when anything fails in BPM’s event manager.  This means tasks will be created for any unhandled exceptions that are raised to a BPD, or for any UCA failures.

The easiest way to test this is to simply create a BPD that assigns the first task to you. In that task I usually put a very simple coach (so I can be sure the task exists) with an OK button that goes to a server script that attempts to assign a value to a variable that doesn’t exist –

tw.local.nonExistentVariable = 'Bob';

This way, when I hit “OK”, an exception gets thrown.  Now run an instance of this BPD and hit “OK”.  Shortly you should get a new failure notification in your inbox.  I also tested this with the “Error” end point in a service, and that works as well.

Conclusion

Hopefully you are now able to be significantly more aware of BPD instances that fail due to things like technical outages, where fixing the underlying problem and restarting the BPD instance will allow things to continue forward properly.  Of course, you will also be notified of failures due to bad code.  Hopefully that is a very rare occurrence for you.


Joins and IBM BPM

In answering some questions on the IBM BPM developerWorks forum, I realized that not everyone has fully grokked the join options available in IBM BPM. I hope in this post to clarify that for some of you.

(Caveat – what I’m writing below is based on my experience delivering BPM solutions using Lombardi Teamworks and IBM BPM.  There are items here that may not be true when this subject is approached as an academic exercise in understanding the BPMN spec.  I can’t help that.  I’ve not memorized the spec.)

Joins Defined

Joins are any place in your BPMN flow where multiple lines point to the same box.  In a more rational world this is always done with a join icon (usually a diamond), but it turns out you can also do it by drawing multiple lines to any activity, which just creates an implied join (see the end of this post).

Whenever you have a join, it tells your business process “You cannot move forward from this point until the join rules are met.”  This typically follows a split where you decided to allow multiple activities to run in parallel.

Join Types

In IBM BPM there are 2 basic join types, although they have had different names over the years.  There is an “AND” join (now called “Parallel Gateway”) and an “OR” join (now called “Inclusive Gateway”).  I have no idea which is the technically correct definition, but I will continue to think of them as “AND” and “OR” since that is how I learned them.

It is important to understand the difference between these 2 joins or you will likely get confused in your more complex BPM flows.  Here is my basic guideline.  If not fully correct, it seems to cover the vast majority of cases I’ve encountered.

  • AND Join – this join essentially means “There is a token on every inbound line.”  So if I have 5 lines pointed at me, I need a token on each of the 5 lines before I move forward.  Until that happens, the token is going to wait here.
  • OR Join – This one is more subtle.  The rule here is “Every token that can reach me has reached me.”
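The two rules can be sketched in plain JavaScript.  This is my own toy model for illustration – not how the product’s engine is implemented – with the function and parameter names invented for the sketch:

```javascript
// AND join: fires once every inbound line holds at least one token.
function andJoinReady(tokensPerLine) {
  return tokensPerLine.every(function (n) { return n > 0; });
}

// OR join: fires once at least one token has arrived AND no token remains
// on any upstream path that could still reach this join.
function orJoinReady(tokensPerLine, upstreamTokenCount) {
  var arrived = tokensPerLine.some(function (n) { return n > 0; });
  return arrived && upstreamTokenCount === 0;
}

// Three inbound lines: A and B are done, C's token is still upstream.
console.log(andJoinReady([1, 1, 0]));    // → false: AND keeps waiting on C
console.log(orJoinReady([1, 1, 0], 0));  // → true: no token can still arrive
```

Note that the OR join needs the extra piece of information (are there tokens anywhere upstream that could still reach me?), which is exactly the expensive part discussed further down.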

What’s the big deal?

Well, if you use the wrong join, your process won’t do what you expect and you will wind up with problems.  Let’s first look at an AND join. This is pretty easy –

[Screen shot: activities A, B, and C running in parallel and converging on an AND join before activity D]

After this process starts, activities A, B, and C will all be sent out, and D won’t happen until A, B, and C are all complete.  If you are looking in the Inspector, as each activity completes you will see a token move to the join.  When there is a token on each line, they will collapse to a single token and a single instance of activity D will be sent out.

Now, here is an example for an OR join –

[Screen shot: the same flow, but with an OR (inclusive) join before D and a decision after C that can route either to the join or to activity E]

This process starts out the same, and, yes, the token won’t move past the join until A, B, and C are all complete.  But there are some differences here.  Compared to the AND join, this allows all of the following to be true.

  • When C is complete, depending on the data, we can immediately issue E no matter what the status is of A or B.
  • If C completes and flows to the Join, D will still wait for A and B (like in the AND scenario).
  • If A and B are complete and C routes to E, D will be issued in parallel to E.

That last scenario is why we need the OR (inclusive) join.  If we had used an AND join and the path to E was followed, the tokens from A and B would be stuck on the AND join forever, as the line from C would never get a token.

Why Ever Use AND Joins?

If you are following along you might ask the question above.  And you are correct: an OR join will act as an AND join if dropped into the first diagram.  However, OR joins are computationally expensive, especially in complex real-world processes.  (The computation is easy in simple educational exercises.)  The AND join is a very simple computation: “Hey, is there a token on every line?  Cool, move on.”  The OR join requires walking backwards through the diagram to each split (which may have occurred in a higher-level diagram) and determining if there are any tokens on the lines that could reach you.  And by the way, don’t forget to check for loop-backs, which will send your computation into an infinite loop if you don’t handle them correctly.

So you could use OR joins everywhere and likely be safe; however, this will cause your system to work harder than it needs to.

Did you say “implied joins”?

Yes.  It is important to realize that just because you didn’t drag a join in from the palette doesn’t mean you aren’t joining.  In our first diagram, if you were to remove the join and reconnect its inbound lines to the activity “D” box, in my experience you won’t wind up with 3 D’s issued as each item completes; you get 1 “D” issued when A, B, and C are complete.  Functionally you wind up with the same behavior as with the AND join, but without the clarity of the join for users who don’t know that is going to happen.

Correction on September 9th, 2013 – Based on some user feedback I went and tested the above, and on IBM BPM 8.0.1 this is not an implied AND; you do wind up with 3 D activities issued as A, B, and C complete.  I’m pretty sure this behavior has changed over time though, so I would not count on it.  I don’t think it would be the right way to model the process, and it could potentially have migration impacts if you did model it this way.

More Complex Scenarios

There are ways to handle more complex scenarios in IBM BPM, but I will cover those on another post.  They usually call for a different approach than AND/OR joins can handle.


How can this keep happening

Being an IT professional, I’m amazed that this keeps happening.  The Register reported today on yet another big government IT failure.  Some of the highlights include an initial scope of $AUS 14 million and 1 year blowing out to $AUS 50 million and 3 years, with the end result being a failure.

Among the findings the article mentions –

“… high turnover of SI consultants saw a total of eleven Fujitsu project managers during the three years from the start of the project until Phase 1 go-live in April 2012.” One of those project managers seems to have worked on the project for eight hours.

I’m amazed that all of these large consulting firms seem to be able to source these sorts of projects given their failure rate.  I’m equally amazed that clients are willing to accept big bang waterfall approaches to tough problems even though most real technology firms abandoned them 7-8 years ago.

Our team has encountered some of these projects first hand.  We try to set them straight when possible, but in some cases they have already committed to the big consulting firm and cannot extricate themselves.  It is interesting to watch the dance that comes down to “If you try to change now and fail, it will be your fault.  If you follow our plan for another 2-3 years and it fails, then you can blame us.”

For anyone on the business side reading this, if you get a proposal for a project that is to last more than 18 months, there are several questions I think you should ask the provider (whether internal or external) –

  • Can I speak to several references for projects of this size you delivered successfully?
  • Have you ever had a project of this size fail?
  • Have you successfully delivered a project using the technologies you are recommending?
  • What is your (our) team’s turnover rate?  Rate of project reassignment?
  • Can I talk to some team members that have stayed on the same project for 18+ months?
  • How do I know you aren’t going to pad your resume for 12 months on this project and then move on to another project/company with your “new” credentials?

This may seem adversarial, but better you make them convince you up front than thinking about these things after you are dissatisfied.

(Pro-tip:  The answer to the 2nd bullet point should not be “No, we’ve never failed”.  It is either “Yes, here is where we failed and this is what we learned from it.” or “We’ve had several items that were headed towards failure, but this is how we detected that and the course corrections we made in order to succeed.”)


IBM BPM and Log4J

Contrary to common wisdom, it turns out that while the logging mechanism for IBM BPM was moved from log4j to IBM’s standard logging, the log4j jar file itself was not removed from the shipping binaries.  This means you can leverage log4j in your code base by making some fairly easy changes to your BPM WAS instance.

There is an article on the wiki (found here) that has some recommendations and samples for using log4j, but I see a number of problems with this implementation.

  1. The configuration JS appears to depend on a global JS variable.  This is generally considered a bad idea.
  2. This implementation depends on a log4j jar contained as a managed asset.  It is not clear to me how the class loader will behave when the same classes are present in different areas of the product.
  3. The initialization of log4j in this solution is done in JavaScript.  It is unclear to me what would happen if 2 different Process Apps tried to initialize log4j in this case, especially if the values conflicted.  Ideally each would have its own log4j writing to its own files, but I’m not sure whether the behavior would be by design or by happenstance of what a random developer did.
  4. All of the above becomes even more confusing and concerning if you are working in a clustered configuration.

So, the question becomes “If I want to use Log4j to be able to log to specific files either from Javascript or my Java integrations how can I do it?”

Well, as we said, log4j is present in the shipping binaries through the IBM BPM 8.0.1.1 release, and in 8.5 as well.  Even if you have done nothing to configure it, you can confirm this for yourself by opening the Process Designer and doing the following in a JS block –

var myLog = Packages.org.apache.log4j.Logger.getLogger('Custom');

Were log4j not available in the WAS class path, this would cause a “class not found” exception when you attempted to run the code.  If you don’t see such an exception, then log4j.jar must be available.

So now the problem becomes one of how to configure log4j to write to the files you want. There are numerous resources that tell you how to create a configuration file for log4j.  I’ll attach a sample here later.  The final step can be found in this answer on Stack Overflow: essentially, you configure WAS to make the location of the log4j configuration file available to log4j by passing an argument to the JVM (for log4j 1.x this is typically a system property along the lines of -Dlog4j.configuration=file:///path/to/log4j.xml, where the path is just an illustration). You will need to restart WAS, but this does work. Make sure the directory where you are logging the files exists; while log4j will create the file if it is not present, it will not create the full path.

Finally, if you find you need to change log4j’s configuration on the fly, you can use the log4j class org.apache.log4j.xml.DOMConfigurator to re-parse the configuration file by calling its configure() method with the name of the configuration file as an argument. I don’t recommend this in a cluster, for the same concerns as above, but in a development environment this would allow you to easily update the log4j configuration. You might even want to be nice and write a service for use in development that allows people to upload/download the log4j configuration file and makes this call on an upload…
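As a sketch, the server script for such a dev-only reload service might look like the following.  The path is hypothetical (use wherever your configuration file actually lives), and the Packages bridge to Java exists only inside the BPM server’s JavaScript engine, hence the guard:

```javascript
// Dev-only sketch: re-parse the log4j XML configuration file on the fly.
// The configPath argument is hypothetical; point it at your real file.
function reloadLog4jConfig(configPath) {
  if (typeof Packages === 'undefined') {
    // Not running inside the BPM server's JS engine; nothing to do.
    return false;
  }
  // Re-read the named configuration file and apply it to log4j.
  Packages.org.apache.log4j.xml.DOMConfigurator.configure(configPath);
  return true;
}

reloadLog4jConfig('/opt/ibm/bpm/config/log4j.xml');  // hypothetical path
```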

UPDATE 8/20/2013: Further experimentation has shown that if you call the DOMConfigurator, the results are additive if there is no overlap between the old configuration and the new. This means multiple configuration files will only affect one another where either the logger or appender entries have the same name.


Event Correlation

(this post was inspired by this question on the developer works forum)

One of the important concepts to grasp in creating complex Business Processes in IBM BPM is the idea of event correlation.  I’m going to try to help with some of the complexity here.

Definition

When your business user is talking to you about their process and they indicate that the process needs to either wait for something external to the process to occur or react to something that happens outside of the process, you will likely need to model this as an Intermediate Message Event (IME).

For example if you were working on a corporate relocation process the business user might say “Of course, at any time in this process the person relocating may call up and say that they changed their mind and are canceling the relocation and the process needs to then stop everything and we need to follow a flow to undo the work thus far.”

The customer canceling their relocation can be modeled as an IME in your process.  (I’ll deal with the details of modeling and triggering this in a later post).

Correlation Defined

The next thing to figure out when one of these events triggers is “Which BPD instance needs to react to this?”  This is handled in the product through a “correlation” value.  Your Undercover Agent (UCA) service that triggers the IME will need to have at least 1 data output associated with it.  In your IME on the BPD you will tell the system to compare this output value to the value of a variable on your BPD and, if they match (and if your IME is listening), trigger the IME.

Your UCA/IME is not limited to mapping only the correlation value.  You can also tell it to map other output values to variables in the BPD, and this mapping will occur when the correlation values match.  This allows you to update BPD values when the IME triggers.  In our relocation example this might be the reason the person cancelled their relocation, or other important details.

Complex Correlation

Sometimes it is difficult to use a single value for the correlation in your IME.  In that case there are 2 approaches you can follow: one that is explicitly supported by the product, and one that has seen the most use in the field.

Product supported option

In the IME UI there is a “Condition” value that can be used to examine variable values on the BPD Instance and determine if the process should move the token from the IME to the next step.  While this sounds great in theory, the reality falls a bit short.

Ideally what you would want is to check a value on the BPD against a value in the message, see if they meet some criteria, and, if they do, move forward.  However, in my testing it seems the logic can only check the current values on the BPD; there is no way to access the data in the IME payload, even if you have it mapped, until after the correlation is successful.  This means the condition is only really a good idea if you want to check that the BPD data is in a certain state before you allow correlation to occur.

Real World Option

So if you do need to use multiple variables to determine if your IME should correlate, what do people do?  One easy answer is using a compound key.  Let’s say you have a business process that kicks off a Multi-Instance Loop (MIL).  For example, let’s say it is approval for a number of things, and each of the items requires an approval.  Now let’s say the user wants to cancel one of the items from their request.  Your IME is inside the loop, but if you correlate on, say, the request ID alone, all of the requests will be cancelled.  Compound keys to the rescue!

In our scenario there is one request ID with any number of linked instances kicked off in a MIL, and we want to cancel one of them.  One option would be to have the IME that is listening for the cancel use a compound key for its correlation value.  This could be, for example, the request ID and the MIL index.  We would create a string that has those 2 values separated by a delimiter (I typically use | ), so if our request ID is 1654 then the 3rd MIL instance would have a correlation value of 1654|2 (the MIL list is zero-based, so the 3rd item has a value of 2 for its index).

Now, as long as the sender can determine this data (and MILs do spin up in a predictable order when handed a list, first to last), you can send the cancel signal to a specific IME even though many were listening for that event.
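As a sketch, building and unpacking such a compound key in a server script might look like this (the function names are my own, invented for illustration):

```javascript
// Build the compound correlation key: "<requestId>|<zero-based MIL index>".
function buildCorrelationKey(requestId, milIndex) {
  return requestId + '|' + milIndex;
}

// Split the key back apart on the receiving side.
function parseCorrelationKey(key) {
  var parts = key.split('|');
  return { requestId: parts[0], milIndex: parseInt(parts[1], 10) };
}

console.log(buildCorrelationKey(1654, 2));            // → "1654|2"
console.log(parseCorrelationKey('1654|2').milIndex);  // → 2
```

The sender sets the UCA’s correlation output to buildCorrelationKey(...), and each MIL instance correlates on the same string built from its own request ID and index.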

Other examples

To be honest, I know that I’ve used this technique many times in the field, but when I sat down to write this post, I couldn’t come up with any real world scenarios better than the one mentioned above.  I hope that this might aid you in the future with complex event correlation scenarios.

 


IBM BPM and Taskless Services

I haven’t taken the time to write much on this blog. Full stop. More so, I’ve not taken the time to write anything about my career, which is working for BP3, a BPM consulting company. I’m going to try to change that and use this as a place to write up some of the longer-winded responses that I post to the BPM forum on developerWorks.

This post has to do with “Taskless” Services.

I mentioned this in passing in one thread and it seemed to cause a good bit of confusion.  To be specific, what we are concerned with here are human services that are not part of a process.  These are frequently used for things like administrative screens or other simple UIs required to support a process solution.

Like many things in life, these are not harmful if used in moderation, but when abused I have seen them cause significant problems, including system crashes and outages.  So first let’s get some clarity on what we are talking about, then let’s discuss how this causes a problem, and finally let’s look at the options to avoid it.

What we are talking about

When you create a human service in IBM BPM you have the option to expose that service in the Overview tab of the service –

[Screen shot: the human service’s Overview tab exposure settings]

If you select any option other than “Startable Service” you have allowed the creation of a “taskless” human service.  That is, the human service will be executed without an underlying task the user can come back to if they want to continue with that service’s execution.

The Problem

If the thing the user is doing with this service is very lightweight, or isn’t done very often, this will not cause you a problem.  However, if this is a service that contains a lot of data and is likely to be executed very frequently, we have a problem.  Each time the user clicks the link to run the service, they spin up a new instance of that service in the server’s memory.  If they leave the service in some way (close the window, go to another URL, click on some other link in the Portal), the server has no way of knowing it should “retire” that service from memory because it is no longer being used.

This is not as large a problem with task-based human services, for 2 reasons.  First, the user will eventually come back and run the same task, allowing the server to either use or reclaim the memory allocated to that service.  Second, tasks will always eventually route to an end, allowing the server to retire the service’s execution.

This problem happens most frequently with customers that create their own service to act as the user’s inbox.  These services are designed to be run very often by many users and typically contain a lot of data.  Additionally, since they are generally used to redirect the browser to run tasks, many of them are designed so that the user’s main usage winds up causing the browser to redirect to another URL, meaning the service never reaches an end point.

Possible Solutions

There are a few possible solutions to this problem.  The first is to simply design your service so that it does execute to an end point (or postpone point) so that the server can reclaim its memory.  The easiest way to do this would be to put an alert on the “onbeforeunload” event so that your user will be told “Please use button X to save your data and close this window.”  This will help reduce the number of orphaned service executions.
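A minimal sketch of that warning is below.  This would live in custom HTML within the coach; the button name in the message is hypothetical, and window only exists in a browser, so the handler is registered behind a guard:

```javascript
// Message shown when the user tries to leave without using the proper exit.
function beforeUnloadWarning() {
  return 'Please use the "Save and Close" button so your work is saved ' +
         'and the server can retire this service.';
}

// Register the handler only where a browser window actually exists.
if (typeof window !== 'undefined') {
  window.onbeforeunload = beforeUnloadWarning;
}
```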

The above does require that you design your service with logical exit or postpone points (note that a truly taskless execution will not be able to be postponed).  Sometimes this is not doable.

Another solution is to deliver this piece of the UI in a different technology that is capable of handling the use case in question.  A JSP, for example, will not cause a bunch of memory to be allocated on the server to retain the user’s state.  So if you were creating a replacement inbox you might think about creating a small web app that uses the REST API to display the information your users need to see.

Future options

I have some of our team looking into other approaches that may be able to avoid these problems for IBM BPM developers without requiring too much planning or creation of UIs external to the Process Designer.  As we make headway on these initiatives I hope to find ways to roll these solutions out to the community.

 

 


OpenLDAP on EC2

So today I tried to get OpenLDAP running on EC2. However, none of the instructions I found were adequate to get it working. I’m sure the OpenLDAP gurus out there will look at my comments and say “Of course that is the way to figure it out”, but I’m posting this in the hope it will help out other people lacking such knowledge at some point in the future.

First off, the base article that helped me figure this out is the Ubuntu documentation.  However, every article I read told me that I could just add my FQDN to the /etc/hosts file, and the right values would simply be created for the base dc entries.  So I walked through this article and did everything mentioned, and when I tried to query for my admin user, the query didn’t work.

I also tried any number of other people’s walk-throughs.  When I did those, I wound up failing at the add-user step with a credential problem (error 43) and no idea what was wrong.  The above article finally gave me a clue.  If you look at the file in

/etc/ldap/slapd.d/cn=config/olcDatabase={1}hdb.ldif

you will see the base dc entries, and mine wound up being dc=compute-1,dc=com

What happened?

Well if you do a –

hostname -f

You will see that the Fully Qualified Domain Name (FQDN) of your server is not the value you expect.  If you want it to be your expected value, you will need to change your host name.  You can do this temporarily with a


sudo hostname yourmachine.yourdomain.com

But be aware this will not survive an OS reboot.  If you want to change it permanently google the answer for your linux distro.

After making this change, and making sure you updated /etc/hosts, you should see that your hostname query returns the expected value.  Now if you follow the linked example, you will wind up with a working OpenLDAP install.

 


Break it to fix it

In my new work developing solutions that will either enhance BP3’s consulting offerings or perhaps become new products, I’ve been experimenting with Ruby on Rails.

Over the last few days I was struggling with attempting to model a many-to-many relationship in the ActiveRecord construct.  No matter what I did, I didn’t seem to be able to construct the relationship I really wanted.  The results were always somehow cross-ways to the relationship I wanted.

I was finally able to break through my problem by breaking the code even worse than it was when I was getting incorrect results.  When I was getting the incorrect results, I couldn’t really determine what the underlying system was doing.  It was “working”, just working wrong.  I finally changed the configuration to one that I knew was fundamentally broken.  This resulted in Rails spitting out a SQL statement that allowed me to understand the complexity it was hiding from me.  From there it was a short bit of work to get everything configured correctly.

What is interesting to me here is that it seems that sometimes, in order to actually fix something that is working incorrectly you may need to fully break it so that you have the opportunity to create the correct fix.


Cheating is Okay

“Cheating is Okay.”  As a parent, I won’t attempt to explain this to my children until they are ready for college.  However, in the real world, done well, cheating is okay.  What do I mean by cheating?  I mean taking data that is external to the current problem you are trying to solve and using it to make the problem easier to solve.

Many times I see solutions where someone took a very hard problem and spent what appears to be a great deal of time trying to solve it given the rules as they see them.  I usually look at these solutions and am amazed at the dedication those involved have shown, the sophistication of their solution, the purity of their vision.  I also tend to wind up feeling the end result isn’t worth it.  They should have cheated instead.

What do I mean by cheating?  Let’s look at Apple’s iPhoto for an example.

I love the ability to have faces detected in my photos and spend a large amount of time making sure I tag the people in them.  It makes it easy to look for a good photo of my daughter for something, or to show her herself when she was a small baby (an activity she never tires of).

The thing is that whatever heuristic Apple is using, they need to cheat more.  From what I can tell, Apple’s approach to every face is to detect the face and then compare it to all the other faces that have ever been named.  It seems to me that this is making the problem significantly harder than it needs to be, and is leaving numerous possible “cheats” out of the equation.  Below are the cheats I would like to see them add to their system.

Cheat 1 – Frequency

We are a family of 4.  The vast majority of the photos I take that have people in them have some combination of my wife, son, and daughter in them.  I’m betting that if, a significant percentage of the time, it simply guessed 1 of these 3 people (4 if you want to add me in), the hit rate would be significantly higher than I currently see.  As it is, since my children are young, their faces have changed significantly, so I think the software has a hard time recognizing them at all.  If it just randomly guessed these people in my photos, I’d know it was cheating, but it would also probably be correct more often than it is today.

Cheat 2 – Time awareness – Large

We all have people come into and out of our lives.  Yes, iPhoto, I know that to you the face I just took a picture of looks a lot like the boy Linus my son played with 4 years ago.  However, you may have noticed that since that time, we don’t have a single new picture of him in iPhoto.  Anywhere.  Ever.  He’s moved back to Germany, so I don’t think I’ll have any soon.  Take him off your guessing list, please.  The odds of you seeing him again are very small.

Cheat 3 – Time Awareness – Small

I now have a camera that can take ~4 pictures a second.  I love it and take a lot of pictures.  Now, if 2 pictures are less than 1 second apart and have the same number of faces in the same relative positions, what are the odds they are different people?  Since I have a digital camera (and young squirmy children) I will take the same photo many times in an attempt to get one that is a keeper.  When I tag one and move to the next, how about using that tagged photo as the basis of your guess?  You could probably even expand the time frame out to 10 seconds and still make the results better than today.

Cheat 4 – Events

Look, iPhoto, I just imported a bunch of photos and put them in the same event.  I then work my way through to put names on the people.  If it really is an event, odds are the same people can be seen throughout the event.  Use that information.  Weight those people more heavily in your guesses.  Heck, if that is too hard, let me tell you who was at the event, then you go and try to guess.  Or give me an option when you guess to not only correct the guess but say “No, Bob wasn’t at this event.  If you are guessing him, you are wrong; go through the photos and get rid of that guess.”
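To be clear about how simple these cheats could be, here is a minimal sketch of the kind of scoring I have in mind.  Everything here is hypothetical (the function name, the weights, the data shape); it just shows that frequency, time awareness, burst proximity, and event co-occurrence can be layered on top of face matching as cheap priors, with no computer vision required.

```python
from collections import Counter
from datetime import datetime, timedelta

def rank_candidates(tagged_faces, new_photo_time, event_id=None):
    """Rank name guesses for a newly detected face using cheap priors.
    tagged_faces is a list of (name, timestamp, event) tuples taken
    from faces the user has already tagged.  Weights are arbitrary."""
    scores = Counter()
    for name, when, event in tagged_faces:
        scores[name] += 1.0                       # Cheat 1: raw frequency
        age = new_photo_time - when
        if age > timedelta(days=365):             # Cheat 2: fade out people
            scores[name] -= 0.5                   # not seen in a long time
        if abs(age) < timedelta(seconds=10):      # Cheat 3: burst-mode shots
            scores[name] += 5.0                   # are almost surely the same person
        if event_id is not None and event == event_id:
            scores[name] += 2.0                   # Cheat 4: same-event boost
    return [name for name, _ in scores.most_common()]
```

With a library dominated by a few family members, the frequent and recent names float to the top of the guess list before any pixel comparison happens, which is exactly the kind of cheating I am asking for.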

My Point

I understand that there are some problems that are simply fun to try and solve.  And when you look at them, you think “Man, if I can solve this problem there are so many things we can do with it.”  But when you move on to implement a solution to a specific problem, come down out of your ivory tower and get dirty with the rest of us.  Cheat a little if the cheats work.  I really don’t need facial recognition software that can meet the TSA’s goals of naming people on the fly.  I just want iPhoto to make tagging faces easier.  Go ahead and cheat.  I promise to look the other way.

Posted in Uncategorized | Comments Off on Cheating is Okay

Watching NFL Games

For the last 2 years we have not had cable TV at our house.  This was a conscious decision on our part.  With 2 young children and being very busy at work, I believe we (the adults) watch a total of ~2-3 hours of TV a week most weeks.  We still have cable for broadband, but don’t have a TV package.

The one thing I’ve missed is our DIRECTV (I checked.  Apparently that is the right way to write their name) subscription to the NFL.  I’m not someone who really wants to see every game, but having grown up in Philadelphia, I really want to see the Eagles games.  I did not mind paying DIRECTV $200-300 for that right.  The 2 things I did mind were the inability to buy just the games I couldn’t otherwise get, and having to pay for all the other DIRECTV stuff in order to get the right to pay them for access.

Last year DIRECTV added an internet streaming option to their top package.  While it is my understanding that DIRECTV will allow you to buy the streaming capability from them if you pay the full price, there is no clear way to do that on their site, and I didn’t feel like trying to talk to a person to do it.  I gave a friend who bought full access some $$$ to use his login.  This worked okay, however it still isn’t perfect because you cannot see the Monday Night or Thursday Night game with this option.  Since I don’t have cable, I still couldn’t see these games.

An interesting thing though – when I was out of the country for business, I could buy a subscription to see the NFL games.  They offered subscriptions under the following options –

  • Full Season – every game $250
  • Follow your team – every game for a given team $100
  • 1 Month – every game that month $70
  • 1 Week – every game that week $25

Even better – when you sign up you get the ability to watch all previous games that season with a DVR type interface.  So if you aren’t going to be available for kick off, no problem, just start it when you can, and watch the game from the beginning.

So – how does this help me?  Easy – pay for a VPN in Europe (I used www.hidemyass.com) and route all your traffic through it.  Then log in to the NFL site and buy your subscription.  Then watch your games.

Now, at first this caused some issues.  Bouncing the traffic off of a server in Europe made the feed a bit poor.  But, it turns out, the NFL only checks where you are coming from when you log in.  So after you log in, and before you launch their app, simply drop the VPN connection.  You now get your full bandwidth.  It is a true HD feed as far as I can see.

You can also watch multiple games at a time if you wish.

I still have my friend’s DIRECTV login, however it seems to me that either their servers are overloaded, or they are purposefully degrading their feed to push you to the satellite feed.  The DIRECTV feed running on the same computer and same connection is significantly worse than the one from the NFL.

So, if you want to watch the NFL how you want to watch it, there is a very viable option available: simply pay ~$10/month for the VPN connection and pay the NFL for their product (something I do not mind doing).

Posted in NFL, TV | 4 Comments