Hours Worked Indicates Project Health

As you may or may not know, I love data and track just about everything.  Most of the time I don't know what I'll do with the data while I'm tracking it, but figuring it out after the fact is half the fun.  Over the last 7 years I've religiously tracked every hour I've worked, what I was doing, if I was interrupted, and a variety of other data points.  Here is 7 years of data summed up in one graph:

Great... so what was the point?  Other than seeing I work WAY too much, I can correlate this data with other data to see what we can learn, but before I do that let me state my hypothesis: I think an employee's time worked will directly correlate with the health of a project.  Okay, with that out of the way, let's isolate a project so we can focus on that:

Ah yes, my current project; 2 years and counting.  The format of this graph is great for comparing hours across years, but it's not the right format to analyze a specific project.  I also think there are better ways to slice this data if I'm aiming to predict project health.

Salaried employees who care tend to work extra to hit deadlines; they personally absorb project overrun and render many issues invisible to anyone not attuned to the state of the project.  As a result, I think the best indicator will be a combination of a rolling 7-day total of hours worked and the number of contiguous days worked; together these should highlight an employee absorbing more work than the project timeline allows.
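Both indicators are easy to compute from a daily log.  Here's a minimal sketch, assuming dailyHours is an array of hours worked per calendar day (0 for a day off), oldest first:

//Rolling 7-day total of hours ending on each day
function rollingSevenDayTotals(dailyHours) {
  return dailyHours.map(function (hours, i) {
    return dailyHours
      .slice(Math.max(0, i - 6), i + 1)
      .reduce(function (sum, h) { return sum + h; }, 0);
  });
}

//Length of the streak of consecutive days worked, as of each day
function contiguousDaysWorked(dailyHours) {
  var streak = 0;
  return dailyHours.map(function (hours) {
    streak = hours > 0 ? streak + 1 : 0;
    return streak;
  });
}

With those two series in hand, let's look at this project through that lens: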

Okay, what are we looking at?  Well, apparently I worked for 26 days straight and topped out at 111.5 hours for a 7-day period in October 2015.  I had raised the red flag several weeks before this and we ended up hiring a dedicated BA to help with the project as a result.  On the surface, it looks like this graph is highlighting the issues I want to see.  Let's get rid of anything that looks like normal activity and layer in more data, like project milestones to see if all of these spikes line up:

Looks about right; the spikes in contiguous days worked lead directly into important dates.  I'll skip the detailed breakdown since this is one of the most complicated projects I've ever encountered; instead, let me layer on the project phases:

Green is requirements/design, blue is build, and red is training/UAT.

After the first phase of this project you can see we learned a few things, namely that delivering monthly milestones was not appropriate for the audience we were dealing with.  Not only did these milestones introduce scope that we were forced to absorb without moving the delivery date, but the audience also had trouble consuming the milestones and didn't raise the biggest scope item until the very end, forcing a second phase of development.  For that second phase we switched to a more traditional waterfall approach since the audience was more comfortable with that style.  This resulted in a more manageable draw on our resources and kept scope better contained, but it still used far more of our resources than it should have due to external pressures on the timeline.

That was a lot of data, but what was the point?

Since I was running this project and playing an active role on the team executing it, I was aware of all the extra hours going into it.  I was working extra and could see everyone else doing the same.  I was very aware of all the problems and could either manage them early or raise them to someone who could, but many projects aren't run this way.  Many people manage projects from afar, relying on status updates that may not reflect the real situation.  In my experience, that's either because the person reporting is blind to the problem or because they're scrambling to catch up, knowing they're behind and not wanting to get in trouble; in either case, the project is in trouble and the person managing it should be aware of the problem.

I know it's a huge pain, especially if you are a salaried employee, but please track your time accurately using whatever time tracking tool your company uses.  It can provide an early warning to your manager that a project is about to go sideways and your manager can do something about it before it becomes a bigger problem.  If they don't know until it's too late, everyone loses.

Data is Beautiful: Timesheet 2016

I love data.  I collect it all the time, not knowing if or when I might use it.

One thing that I have tracked consistently over the last 7 years is how many hours I've worked.  Not only do I keep track of the hours, but I also have data about what I was working on, the time of day I worked on it, if I was interrupted, and a variety of other data points.  One day I may sift through it all, but there is one graph I produce annually to reflect on the year that has passed.

Paid hours worked over the last 7 years.  The green line represents the project I've been working on for the better part of 2 years.

You can see spikes of work effort throughout the graph, likely coinciding with project deadlines or something I'm passionate about (which usually has me losing track of time late into the night).  Let's drop everything except the project I'm currently working on, which is really what I want to see.

My current project.

The first year of this project was a bit rough because the project was understaffed; you can see my workload steadily going up and peaking at some crazy number of hours twice in 2015.  By the end of 2015 we had hired an additional body and my workload was halved.  The first half of 2016 was reasonable, with a large milestone mid-year, and another that has just passed.

Hours worked is a good indicator of project health.  Overall the project is in a better place than it was this time last year and my hours reflect that.  Once this project launches in early 2017, I'm looking forward to some vacation.

Azure Pricing

I've been experimenting with Azure on and off for several years, but since Microsoft opened their first Canadian regions earlier this year I've been using it more and more.  One of the first things I did when they became available was move my VMs over to Canada Central (Toronto).  Other than faster response times due to the close proximity, the experience was exactly the same as being in any other Azure region, which is kind of the point of the whole cloud thing.

To my surprise, a short while after switching I noticed that I had almost burned through my entire monthly credit.  OK, so what happened?  Well, when I switched to Canada Central I also switched to SSD storage and made the incorrect assumption that the "estimated cost" for a VM with SSD storage included the price of the SSD.  Spoiler alert: it doesn't.  After researching, I learned that if you use SSDs, you're billed for the entire amount that has been allocated even if you're only using a fraction of it.  After thinking about it, this makes complete sense since you're likely being provisioned a set of disks that are entirely yours, which is also why some VM sizes with SSDs are unavailable in certain regions.  I don't need SSDs at that price, especially since these VMs are not production boxes.  Let's call that a $200 lesson learned and switch back to basic storage, shall we?

This little fiasco highlighted something: I don't think I understand Azure's pricing at the granular level I should.  As I started to drill into my bill I noticed something else: Canada Central was more expensive than other regions for almost everything.  Of course I knew prices would vary by region for all sorts of reasons, but it never really clicked until I saw my bill.  I think part of the problem is that nowhere in the Azure portal have I seen the region-based pricing differences presented in a visual way that puts the regions side-by-side.

Alright, so how much more expensive is Canada Central compared to East US 2?  What about Canada Central compared to Canada East?  To answer this I started flipping back and forth between pricing pages, but I needed to see it all in one place.  Time to break out the trusty spreadsheet!

In this sheet you'll see the discounted MSDN rate estimates for 1 month.  I priced out the common VMs I use, as well as SSD and non-SSD storage.  The coloring compares each VM across regions horizontally, not within a region vertically.

For completeness, here are the non-discounted Windows and Linux rates respectively (storage prices are the same).

The most surprising part of this isn't that prices vary by region, it's that prices don't vary equally across regions (i.e. region X isn't always 10% more expensive for everything compared to region Y).  For some reason East US 2 and South Central US are cheap for everything except A1-A4 VMs.  I triple checked this because I thought I made an error; trust me, it's correct.  My best guess is that the hardware is purchased for specific classes of VMs and the cost of that hardware (at the time a specific region is built) is used to determine the running costs of the VMs running on it.  The best evidence I have of this is that the brand new F-class VMs are cheaper than the A-class VMs for the MSDN and Linux rates.

Despite all of this pricing confusion between regions, there is one rule that appears to be true: Canada Central is f***ing expensive.

JavaScript: Web Servers of the Future

I posted an article about the History of Data Validation and I think that was only half of the full idea that was rattling around in my head.  Recently I've been doing a lot of work on single-page applications and the technologies that make them possible, but what's really interesting in this world is how to reuse server assets on the client and vice versa.  At the end of my previous post I was talking about Node.js and sharing server code with the client.  I believe that is how all web applications will be written in the future; the specific technology might vary, but conceptually that's how they will be built.

Don't believe me?

Microsoft has been developing TypeScript for several years now and many view it as a competitor to languages like CoffeeScript.  I don't think that is the case at all.  TypeScript's primary goal is to allow developers to write enterprise-scale JavaScript applications in a manageable, reliable, scalable, and structured way; they're trying to eliminate the fragility inherent in JavaScript.  TypeScript accomplishes this by making it easy to write modular code and by layering in a type system that gives you compile-time checking (which also gives you IntelliSense).
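To make that concrete, here's a minimal TypeScript sketch of that compile-time checking; the Person interface and greet function are just illustrative:

interface Person {
  firstName: string;
  lastName: string;
}

function greet(person: Person): string {
  return 'Hello, ' + person.firstName + ' ' + person.lastName;
}

greet({ firstName: 'Ada', lastName: 'Lovelace' }); //OK
//greet({ firstName: 'Ada' }); //compile-time error: 'lastName' is missing

It's all still JavaScript underneath; the compiler simply refuses to build when you break your own contracts, and that same type information is what drives IntelliSense.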

On top of that, Microsoft has continued to advance the Windows Runtime for some time now, which allows you to write Windows applications in JavaScript.  That JavaScript code loaded in the Windows Runtime is then executed in a hosted instance of Chakra, the JavaScript engine powering Internet Explorer 11 (it started in IE9).  This is very similar to Google's V8 JavaScript engine that is used to power Node.js.  Not too long ago, Microsoft actually released an API for Chakra that allows you to host the JavaScript engine inside your own application, which means that someone could build a Node.js-like web server that runs on Chakra if they really wanted to.

I think JavaScript is quite powerful and the benefit of using the same technology on the server and the client has an incredible amount of potential that's on the cusp of being tapped.  As soon as you have a truly shared code base between the client and server, I think the traditional division of where functionality belongs will get fuzzy.  This means that you're free to shift more of your application to the server or push more of it into the client with very little effort; it's entirely up to you and the requirements of the application you're building.

Still need proof of the impending shift?  It's already happening in places you might not expect.

Since Microsoft has been putting such a huge investment into TypeScript, I can't help but think they saw this coming a long time ago.  In fact, Microsoft's project "Monaco" (a Visual Studio IDE in the browser) is 100% TypeScript on the client and on the server, and it runs entirely on Node.js (not IIS like you might have expected).  TypeScript has allowed them to build a massive JavaScript application with complete confidence in their code base.  I saw a really interesting talk at BUILD about this very topic.

If you're a server-side developer and don't know anything about JavaScript, I encourage you to learn it now while you're still ahead of the game.

SignalR: Tame Your AJAX

SignalR is a pretty interesting technology.  Its primary use-case is push notifications from your server to your clients.  This can be leveraged to do all sorts of cool things like message broadcasts and returning results to the client in a truly asynchronous manner.  What I propose, however, is that SignalR can be used in place of your traditional AJAX calls.

I've spent quite a bit of time trying to figure out the best way to deal with data transport to and from the browser in a manageable, traceable, and consistent way.  I looked at a variety of technologies to help out here:

WebAPI: This could also be standard MVC controllers, but essentially you control how your endpoints are structured, how things are serialized, and a variety of other things.  This is really the standard technology in the Microsoft world for standing up RESTful endpoints.  It lives entirely on the server though, so you need to look elsewhere if you want help managing things on the client.

JSend: This is a standardized structure that you wrap all of your AJAX responses in to give you a consistent way to handle results and error messages (see the example after this list).

Vanilla JavaScript: You can make AJAX calls on your client to any server endpoint.  It's up to you to manage everything.

jQuery: This provides some syntactic sugar for your AJAX calls, but you're still on your own to manage the endpoints themselves.

Amplify: This is a library built on top of jQuery that allows you to define all of your endpoints in a centralized place.  This is a huge step forward since it minimizes the maintenance associated with URL management, query strings, and more.
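For reference, here are the three JSend envelope shapes (the payloads are illustrative):

//Everything worked; data carries the response payload
var success = { status: 'success', data: { person: { id: 1 } } };

//The request was invalid; data explains which inputs failed
var fail = { status: 'fail', data: { personId: 'personId is required' } };

//The server blew up; message describes the error
var error = { status: 'error', message: 'Unable to connect to the database' };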

Before I decided to try SignalR, I was using WebAPI and Amplify with JSend wrappers.  This was good, but there was still more manual upkeep than I'd prefer.
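To make that concrete, here's a minimal sketch of that stack; the getPerson resource, URL, and response shape are illustrative assumptions:

//Define the endpoint once, in a central place
amplify.request.define('getPerson', 'ajax', {
  url: '/api/person/{personId}',
  dataType: 'json',
  type: 'GET'
});

//Call it from anywhere; the response body is a JSend envelope
amplify.request('getPerson', { personId: 1 }, function (response) {
  if (response.status === 'success') {
    var person = response.data.person;
  }
  //Handle 'fail' and 'error' envelopes here
});

It works, but notice that I still own the URL templates, the serialization settings, and the envelope handling on every call; that's the manual upkeep I'm talking about.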

Now, take a moment and imagine a world where:

  • you can define RESTful (or at least RESTful-esque) endpoints on your server
  • your client magically knows about all of your server endpoints
  • your client-server programming model is more like remote procedure calls
  • your JavaScript files have complete IntelliSense for your server endpoints
  • you never have to maintain any URLs for your server calls
  • you never have to deal with (de)serialization on the client or the server
  • your server can push notifications to your client
  • your client can make synchronous or asynchronous calls to your server
  • your client and server will automatically figure out the best way to talk to each other

Sound good?  That's what SignalR can do for you; if you only think about SignalR as a persistent connection, then you're missing out on a ton of the benefits.

What does all of this look like?  If you're just using it as a replacement for WebAPI, it's actually pretty close; you could easily grab your WebAPI code and tweak it into a SignalR Hub.  The client is where you really see the difference.  I'll leave you with a code snippet:

var person;

//SignalR (with full IntelliSense via the generated hub proxy)
//Server methods return a promise; this assumes the connection has been started
$.connection.personHub.server.getPerson(1).done(function (result) {
  person = result;
});

//jQuery
$.ajax({
  url: '/api/person',
  type: 'GET',
  data: { personId: 1 }, //serialized into the query string for a GET
  success: function (data, textStatus, jqXHR) {
    person = data;
  },
  error: function (jqXHR, textStatus, errorThrown) {
    //Handle error
  }
});

On a final note, please keep in mind that I'm using this to build a large business application that requires users to log in before they can do anything.  I certainly wouldn't recommend SignalR for a public site with millions of concurrent visitors, but for a business application that only deals with thousands of users, it's a technology choice that can give your developers a significant productivity boost.

JavaScript MV*: Knockout & Durandal

I've been doing a lot of work with JavaScript MV* frameworks lately and I thought it would be a good idea to write down some of my experiences.  This isn't going to be about which framework is better, this is just my opinion about several frameworks based on my experiences with them.  I've got quite a bit of experience using Knockout on several projects, so I thought that would be a good place to start.

Knockout

This isn't an MV* framework in the fullest sense.  It's a really good data binding library that also happens to render small HTML templates (really just snippets).  There are a lot of comparisons made between Knockout and MV* frameworks, but I think anyone that tries to make those comparisons either doesn't understand what Knockout is trying to accomplish or doesn't understand what the other frameworks were designed for.  I don't blame them though.  It took me quite some time to get my head around the complex landscape that is client-side MV* frameworks; for the longest time I thought Knockout versus Angular was a valid comparison, but now I know it's not.

In my opinion, Knockout has two use-cases that it's really good at:

  1. Adding a layer of rich client-side interactivity to a server-side MV* framework.
  2. Serving as the data binding component in a client-side MV* framework you assemble yourself.

In either case, Knockout is only one component of an MV* framework; it isn't a complete MV* framework on its own.  In the first scenario, I've successfully used it in the past to add rich client-side interactivity to a traditional ASP.NET application.  In the second scenario, you could build your own MV* framework using various specialized libraries (e.g. data binding, routing, etc.) or you could use a framework that someone else has assembled using the same methodology.  In this particular instance I'm referring to Durandal (which I'll talk about shortly).

If you decide to use Knockout for either scenario I would recommend a few things to get you started in the right direction:

  1. Don't mix your JavaScript and your HTML; keep them completely separate.  I've covered this specific topic before and the quick summary is to use the classBindingProvider plugin to keep your HTML and your logic separate.
  2. Spend some time thinking about how you should structure your JavaScript models and logic.  Knockout doesn't care how you structure your code, but if you don't come up with a standard structure it will quickly get out of hand (especially if several developers are working on it at the same time).
  3. If the models coming from your server are big and complicated, the default mapping plugin might not cut it.  I've had good experience with the viewModel mapping plugin and I'd recommend it to others if you have to deal with these more complex scenarios.
  4. If you're only targeting browsers that are ECMAScript 5 compliant (IE9+), then I'd recommend using the Knockout ES5 plugin.  This plugin makes use of ECMAScript 5 getter/setter pairs, which means you don't have to remember all of the parentheses that Knockout traditionally demands (see the sketch after this list).
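Here's a minimal sketch of point #4; the view model is illustrative and assumes knockout.js and the knockout-es5 plugin are both loaded:

function PersonViewModel() {
  this.firstName = 'Ada';
  this.lastName = 'Lovelace';
  ko.track(this); //converts these plain properties into observables

  //Computed property, read later as plain vm.fullName (no parentheses)
  ko.defineProperty(this, 'fullName', function () {
    return this.firstName + ' ' + this.lastName;
  });
}

ko.applyBindings(new PersonViewModel());

Bindings and dependency tracking still work exactly as before; you just read and write vm.firstName instead of vm.firstName().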

Durandal

This is the framework that should be used when comparing Knockout with the various MV* frameworks.  Durandal is a fully-featured MV* framework that uses Knockout as its data binding mechanism, along with a few other libraries and some custom code to make everything work well together.  If you're a big fan of Knockout or have a lot of experience with it, using this framework is an easy way to take your first step into the client-side MV* world.

I don't have a ton of experience with this framework compared to a lot of the others, but I did spend a full week building various demos to give this framework a fair chance.  Based on my limited experience, Durandal does one thing very well:

  1. It makes it very easy to create a client-side MV* application if you have experience with Knockout and the standard ASP.NET stack.

It's great for that and I found it very easy to use given that background; however, I believe Durandal has some weaknesses:

  1. It has a very small community compared to the other major players in this space.  This is fine for small applications, but when you need to build bigger applications that you need to support for years, a large community becomes a significant resource that must be considered.
  2. By definition, the framework is an assembly of several other libraries.  For some this could be an asset since you can easily swap out components.  For me, if I'm reaching for a large framework like this to start with, I'd prefer something that was designed end-to-end with that intent.
  3. The project is maintained largely by a single developer.  Rob Eisenberg has done great work and I'm honestly impressed with everything he's done, but having a larger number of people backing the project and driving it forward gives me more confidence that it will be around for a long time (I know that's largely an illusion, but it still makes me feel better).
  4. There was recently a failed Kickstarter to fund the next phase of development.  I know that doesn't signal the death of the project, but it doesn't inspire confidence that this is something that the community will get behind for years to come.

Again, I think it's a great framework for what it does well, but it just didn't match the criteria for the project I was evaluating it for.

History of Data Validation

This is an interesting subject to me because the way that we validate data has quite a large impact on our web applications.  At the end of the day we want a few things out of our validation code:

  1. Only define it once
    • DRY (Don't Repeat Yourself)
    • Single Source of Truth
  2. Run it on the client
    • Provides a good user experience
  3. Run it on the server
    • Provides security (never trust your client)
  4. Define your rules in a uniform way
    • Use the same technology for every rule (e.g. blocks of code vs. model attributes)

As technology and user expectations have evolved, we've always sacrificed one or more of these requirements in order to provide a better (richer) user experience.  At a high-level, the progression over time has looked something like this:

  1. Server-Side Only: This is where we started a long time ago.  This is essentially Web Forms, where the client is forced to round-trip to the server for every action.  It's great for everything except user experience.
    • DRY
    • Poor User Experience
    • Secure
    • Uniform Definition of Validation Rules
  2. Server-Side / Handwritten Client: As JavaScript started to become popular, developers began to selectively add manually written validation rules on the client.  These duplicated rules already defined on the server, but the cost of maintaining two copies of each rule was justified by the improvements to user experience.
    • Not DRY
    • Good User Experience
    • Secure
    • Varied Definition of Validation Rules
  3. Server-Side / Generated Client: As time marched on we found ways to define basic (field-level) rules on the server and generate their client-side counterparts automatically.  This only covers the most basic validation scenarios, and complex rules still need to be implemented using one of the previous methods.
    • DRY
    • Good User Experience
    • Secure
    • Varied Definition of Validation Rules
  4. Client-Side Only: With the rising popularity of single-page applications, some advocates in that community say that you should trust your client and push everything into the browser.  The server essentially becomes your client's data layer and doesn't re-validate any of the data from the client.  I disagree with this approach.
    • DRY
    • Great User Experience
    • Insecure
    • Uniform Definition of Validation Rules
  5. Client-Side / Server-Side (shared): Node.js is adding some incredible value here.  Since your entire server is written in JavaScript you can execute all of your business logic on the server, but you can also ship the exact same code to your client and execute those rules immediately in the browser (see the sketch after this list).  The server is really just double-checking everything.
    • DRY
    • Great User Experience
    • Secure
    • Uniform Definition of Validation Rules
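To illustrate that last option, here's a minimal sketch of a validation module that runs unchanged on both sides; the module name and rule are illustrative assumptions:

//personRules.js - loaded via require() on the server, via a script tag in the browser
(function (exports) {
  exports.isValidAge = function (age) {
    return typeof age === 'number' && age >= 0 && age <= 150;
  };
})(typeof module !== 'undefined' ? module.exports : (window.personRules = {}));

The browser calls personRules.isValidAge as the user types to give immediate feedback, and the Node.js server calls the exact same function before persisting anything; one definition, two execution environments.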

The last option here might seem a little crazy today, but my prediction is that this is where all web applications end up in the next few years.  Our web applications live on a continuum somewhere between completely server-side and completely client-side, and I really think we're about to experience a big shift from the middle-ground we currently occupy to much more client-side applications.

JavaScript MV*: Library vs Framework

The landscape of JavaScript MV* frameworks is complicated for anyone just tuning in; when I first tried to get caught up on everything, it was certainly confusing and I had no idea where to start.  Once I had it all figured out, I then had the challenge of explaining it to everyone around me, something I couldn't articulate well until I saw this video of Tom Dale and Yehuda Katz.  After that, everything I was trying to explain finally clicked and it was just a matter of plotting all of the frameworks on a graph:

[Image: library_vs_framework.png, the libraries and frameworks plotted on a library-to-framework spectrum]

The graph is less about each framework's exact position and more about illustrating an idea.  That idea is simple:

In order to build any large application you need to use a rigid, opinionated framework.

Now, I'm certainly not saying that if you want to build a large application you have to use something like Ember or Chaplin.  What I'm saying is that you have the choice between using a framework that already exists or you'll have to write your own by using one of the low-level libraries and then build up your own opinions on top of that.  However you slice it, you effectively end up at the same place before you're able to build a large-scale application.

I've read countless articles about developers using Backbone (or similar low-level library) and most of them sound like this:

"Backbone was rough at the start, but now it's the best!  It took us about a year to figure out all of the patterns and structure we wanted, but now that all of the developers are on the same page it's going pretty well."

I'm sorry, but did you say a year?  I'm not joking; I actually read an article where it took the team a full year before they didn't hate working with it.  What took so long?  That was how long it took them to figure out how to structure their code, and define their internal patterns and best practices in a way that made sense for all of the developers on the team.

Wow.

Sure, that's Backbone though, and everyone knows that Backbone is only a library.  Well, I know lots of people love Angular and believe it's the best thing out there, but I really think it sits midway between a library and a framework.  It gives you more tools and higher-level abstractions than something like Backbone or Knockout, but it completely abandons you when it comes to application structure and best practices.  Watching one of the teams at Google talk about their experiences building a large Angular application just reinforced my view; a lot of what they talk about is their struggle to find a structured way to build the application.  That's right, Google is struggling to find structure with their own tool.

I'm not saying that any specific library or framework is good or bad.  All I'm saying is that you should pick the right tool for the right job.  You shouldn't waste your time figuring out best practices and code structure if there's already something out there that's close enough.  Of course, if you're only building a small application or there's only one developer, the benefit of the larger frameworks isn't as clear and it's more likely to just get in your way.  In the end, you should be spending your time solving problems unique to the application you're building and not fiddling around with the tooling underneath it.

Best vs. Popular

The most popular technology isn't always the best technology.  We see this all the time and one of the classic examples I always hear is VHS vs. Betamax.  Betamax was arguably the better technology (I'm told), but VHS won the popularity contest.

How is this relevant?  Well, I find this particularly interesting when it applies to development tools, libraries, and frameworks.  Should you pick what you think is the best tool for the job, or the most popular tool even if it's not the best?  I don't think this is a big deal for small projects since the tool isn't going to just disappear in the span of 6 months, but what about a project that will take 2 years to build and that you'll need to maintain for 5 - 10 years once it's running in production?

A simple example that I'm currently struggling with is Bootstrap 3 vs Foundation 5.  I know I said in a post a few months ago that Bootstrap 2.3.2 was the best framework available, but with the latest versions of both frameworks I'm now torn between the two.  Here is how the two frameworks stack up:

Foundation 5

  • Out-of-the-box this meets 95% of our requirements
  • Roadmap is deliberate and consistent
  • Seems to be geared toward business applications (e.g. comes with validation plugins)
  • Small community
  • Small pool of 3rd-party plugins/customizations
  • Primarily backed by a company
  • Paid support is available

Bootstrap 3

  • Out-of-the-box this meets 80% of our requirements
  • Roadmap is driven by community pressure
  • Seems to be geared toward small websites
  • Large community
  • Large pool of 3rd-party plugins/customizations
  • Primarily backed by 2 developers at Twitter and the community
  • No paid support

In this scenario, I want to pick Foundation since it's so close to everything that we want in a framework, but Bootstrap's popularity and everything that comes with it is a huge asset.  It's such an asset that I feel compelled to use Bootstrap and take the time to make it what we need it to be just so we can leverage all of the resources surrounding it in the future.

Should I use the best tool for the job today or the most popular tool in order to make development easier at some point in the future?  I don't have an answer yet.  I honestly have both integrated into my project at the moment and I'm trying to find a critical fault with one of them to make my decision easier, but so far nothing's managed to sway me further in either direction.

Entity Framework: Use with Caution

I've talked about the Entity Framework before and I'm certainly an advocate for it when using it makes sense, but for me it tends to be unsuitable more often than not.  When does it make sense to me?

  1. A large project with a single developer that will be responsible for everything
  2. A small project with 1 - 3 developers
  3. A large project with a team of developers that will develop the application in horizontal slices (i.e. there will be at least one developer dedicated to the data layer)

I think #3 is really the Entity Framework's sweet spot, but outside of that I'd argue that you're better off using something like Massive or Dapper.  Again, this all comes down to using the right tool for the problem at hand and how your development team operates, but that's getting a little off topic.

I recently used the Entity Framework for a project that matches the description of #2 above.  I thought it would be a good idea to capture all of the issues that came up while using it:

  1. SQL Server 2005 support is fragile unless the EDMX file is generated off of a SQL Server 2005 database instance from the start, and for all subsequent updates.  If you don't, you'll find yourself manually updating the EDMX file's XML a lot.
  2. Explicit use of TransactionScope escalates to a Distributed Transaction on SQL Server 2005.  Upgrading to Entity Framework 6.0 allowed a workaround using a Transaction object.
  3. UDFs are treated as query-language enhancements; manual code needs to be written before you can use them in your C# query statements.
  4. The database needs to have proper Primary and Foreign Keys.  This doesn't sound like an issue, but it is if you need it to work with legacy systems that you don't control.
  5. A true understanding of what’s happening (i.e. when queries are run and what queries they are) is not apparent without advanced knowledge or tracing.  This means that unless a developer really knows what's going on, they could be triggering lots of database calls as they dot into objects.
  6. Views with no defined Primary Key return the same row over and over again.  The correct query gets run, unique records are transferred over the network, but the framework is trying to be "smart" about what data it hands your code... which ends up being the wrong data.
  7. Maintaining a customized EDMX file is a huge pain.  Merging is a nightmare when multiple developers are making concurrent changes.

For the average developer the Entity Framework is a black box that magically gets you your data.  This leads to two problems:

  1. When and where database interactions happen are not clear to the developer, which can lead to lots of odd performance problems
  2. If there is a problem with how the "magic" works, it involves a lot of troubleshooting and you may discover that there's no good way to fix the problem and be forced to work around it

When working on a team of full-stack developers who are also the database experts, the hoops the Entity Framework forces them to jump through to do something that should be simple just don't make sense.  I tend to favor simple tools that are very clear about how they operate; this results in far fewer surprises midway through a project.