Create an Angular Azure web site with angular-cli and azure-cli

This is on Windows 10, running angular-cli in a command prompt in Administrator mode.


npm i -g angular-cli       *if you haven’t already*
ng new myapp
cd myapp
*optionally edit your app here*
ng serve -prod  *create the prod version*

At that point we have a deployable Angular2 app!  Thank you Angular team, and I am very happy that the `-prod` command is working on Windows.

And then run azure-cli, with an Azure subscription set up:
npm i -g azure-cli
I had to modify the angular-cli project with these steps for it to work with Kudu on Azure:
  1. Modify package.json: in "scripts", change "start": "ng serve" to "start": "ng serve -prod",
     if you want the prod version, which more than likely you do.
  2. Create a Kudu .deployment file. Add a file at the root of the project (same level as package.json) called ".deployment". This file tells Kudu where our project is. The contents of .deployment are:
    project = dist
Then you can use this azure-cli command:
azure site create --git <yoursitename>

Then run git as you normally would:

git add .
git commit -m "initial"
git push azure master

git push azure master tells Kudu to deploy the site. Wait a few minutes and voilà!

That’s really, really cool!  Spin up an Angular2 site on Azure in about 10 minutes.
Some other cool Kudu commands:
Kudu Control Panel (SCM): add scm in between your site name and the rest of the domain (e.g. yoursite.scm.azurewebsites.net)
TFS: /dev/wwwroot
Environment variables: add /Env.cshtml to the end of the SCM URL

Azure: “You may say I’m a dreamer, but I’m not the only one.” – John Lennon

What do I think I am doing here? Well, it all started when I was a kid on the floor surrounded by a pile of Legos. I’d put them together in huge cars, trucks, castles and spaceships. Then I would take them apart (using my teeth, my first set of which was worn near to the gums because I thought God had given me a set of pliers for a mouth.) Ah, the endless cycle of creation and discovery…

Now it’s like being a carpenter. I have some tools, some supplies and a plan. I want to build something. When I had my own company I built houses. Now that I’ve joined Cardinal Solutions we build skyscrapers!

I love this job, yet, as a developer I hate to repeat myself. I’ll take a few hours out of my schedule to make a LinqPad script to consume a .CS or .SQL file and generate the code for me, rather than copy and paste things manually. The idea of learning machines that generate scripting code is possibly the highest thing in my digital imagination.

Another metaphor I have for development is making a restaurant. Over the last 1,000 years, the general design of a restaurant hasn’t changed much. There are tables & chairs, plates and lights and a kitchen with a flame. When I develop code I notice patterns, there are containers and transformations, services and presentation layers. After a while I still feel like I am repeating myself and what I want is to move to the next level.

Software is a logical progression from binary electrical circuits to machine code to high-level languages like C and C#. I still consider this “programming”. The next level of development is “configuration”. In Iron Man 2, his computer is a holographic projector where he swipes components into each other to configure massive circuits of logic, yet he didn’t write any code.

Speaking of massive circuits of logic, have you seen the Azure Marketplace recently?

My dream now is that the Azure components will be floating around me in my HoloLens. When someone wants a database I swipe it into existence (like Mickey Mouse in Fantasia). Then I configure it. Then I select some App or Api from the Azure Marketplace and place it next to my database. Here’s where the real magic comes into play (I believe they call it AppFabric): my database is now connected to my Api layer. Instead of all that code I was about to write, I just draw some configuration matrices with my fingers in the air. Now I just need a presentation layer. Or, how about 3 different presentation layers! Why not (if I can fit the monthly tab)? I swipe them onto the immaterial plane of software pre-existence. Then I bring them together with a crescendo of excellence, knowing that most of my code will not need to be unit or performance tested.

But wait! I’ve made an error and selected the wrong presentation layer! Egad, I refactor my entire library with a few hand movements (and save my teeth).

Now I just have to select some branding and theming from a library, add in a few widgets and away we go! Oh, but I should perform tests before I release my newest creation.

(Like Scotty from Star Trek) “Computer? Please create a Windows Server VM for me in Azure. Install LoadRunner and run a performance test on my App with 10,000 Users every ten minutes for the next hour. Simulate real traffic and log the results using Lexicon Based Sentiment Analysis API built with Azure Machine Learning.”

Then I smile. Thems some cool Legos!


Self Proofing Business Communication

Many times a day I have to make a professional statement in writing, be it an email, a forum post or something like Yammer. What I’ve noticed about myself is that my first draft is rarely perfect, rarely what I would call the A+ statement. Knowing this, I will proof the statement myself and edit it. I’ll switch roles, pretending to be the receiver and thinking what they would think about it (audience targeting). I’ll re-read it again, usually 3 or 4 times, before I press Send or Publish. The important part to note is that all of this happens at the initial time of writing the statement.

Then, a few hours to a few days later, I am finding myself wanting to edit the statement again. Many times I’ll go back to a forum or blog post the next day and make some changes, tidy things up a bit and get my message as clear as possible. Delivering what I then feel to be the A+ statement.

On these mission critical statements, instead of posting them to the live world, I should sandbox them, maybe emailing them to myself first. Then I should switch tasks completely and allow for my internal review. Go for a walk, sleep overnight, whatever it takes and let that statement sit as if I had sent it out to the real world. Relax my desire to Post right now and force myself onto some other task. It’s funny that my internal editorial staff will chime in hours later when I am thinking about something completely different, but if I learn to work with that, I can get my revision count (and my version history) down closer to one.

I just had an idea. I love the SharePoint feature of being able to send people a Share link to a document instead of the document as an attachment. Why doesn’t email follow this model? Instead of sending an email, I send a link to an email. That would centralize the data and give me time to modify it after sending.


Angular JS “dataType.js” – or, “Where do I put my mess?”

After working with Angular JS for a few months now I got a chance to make my own Angular JS project from scratch. But I still want “types”, and I want my code to be as neatly organized as it can be.

app.js – application setup and a config object which stores application “globals”.

common\common.js – commonly re-used functions. Common.js is a factory that provides some objects such as $q and $http. By routing all requests to these objects through Common, it can serve as another dependency injector.
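The post doesn’t show common.js itself, but the pattern it describes might look something like this minimal, framework-free sketch (the factory function name and the `get` helper are my assumptions, not the actual project code; in AngularJS this would be registered via `angular.module(...).factory('common', ['$q', '$http', commonFactory])`):

```javascript
// Sketch of the common.js idea: one factory receives the framework
// services and exposes them (plus helpers), so the rest of the app
// depends on "common" instead of $q/$http directly. That makes it
// act as another dependency-injection point.
function commonFactory($q, $http) {
  return {
    $q: $q,        // expose the originals for callers that need them
    $http: $http,
    // a helper that routes all GET requests through one place
    get: function (url) {
      return $http.get(url);
    }
  };
}

// Usage with stand-in stubs (real code would receive AngularJS's services):
var fakeHttp = { get: function (url) { return 'GET ' + url; } };
var common = commonFactory({}, fakeHttp);
console.log(common.get('/api/items')); // GET /api/items
```

Because every request funnels through `common`, swapping or instrumenting `$http` later means changing one file.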

services\log.srv.js – the logging service. Directing all logging requests through here makes changes to logging easy.

services\data.srv.js – once a quagmire of functions and oData syntax, this is now just one public function “getOData(oDataInfo)”. See more about oDataInfo below.
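The internals of getOData aren’t shown in the post; presumably it translates the oDataInfo fields into an OData query string before handing the URL to $http. A rough, framework-free sketch (the helper name and URL shape are assumptions):

```javascript
// Sketch: turn an oDataInfo-style object into an OData query string.
// Inside data.srv.js, the resulting URL would be passed to $http.get().
function buildODataQuery(info) {
  var parts = [];
  if (info.fields && info.fields.length) {
    parts.push('$select=' + info.fields.join(','));
  }
  if (info.filter)  { parts.push('$filter=' + info.filter); }
  if (info.expand)  { parts.push('$expand=' + info.expand); }
  if (info.orderBy) { parts.push('$orderby=' + info.orderBy); }
  return info.listName + '?' + parts.join('&');
}

var query = buildODataQuery({
  listName: 'Headlines',
  fields: ['Title', 'Link'],
  filter: 'IsActive eq true'
});
console.log(query); // Headlines?$select=Title,Link&$filter=IsActive eq true
```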

controllers...js – all the controllers. Controllers are now as simple as possible. They get data from the data.srv.js based upon a dataType and then perform some formatting on it before passing it off. Other things such as creating tabs with scope functions can occur in controllers, but they remain relatively simple.

dataTypes.js – where I put my mess!

The first thing in dataTypes.js is the oDataInfo class:

ODataInfo: function (listName, fields, filter, expand, orderBy) {
    this.listName = listName;
    this.fields = fields;
    this.filter = filter;
    this.expand = expand;
    this.orderBy = orderBy;
}
This is created for each oData source. It’s simple, but could be made more complex. The important part is that the oDataInfo is related to the objects that use it below.

I am a fan of strongly typed languages like C#. In dataTypes.js I create some JavaScript “classes” like this:

headline: function (title, link, author, authorLink, backgroundImageUrl, sortOrder) {
    this.title = title;
    this.link = link;
    this.author = author;
    this.authorLink = authorLink;
    this.backgroundImageUrl = backgroundImageUrl;
    this.sortOrder = sortOrder;
}

Now I “know” what a headline is in the rest of my site. The entire listing for a headline, including the oDataInfo is:

// Headlines
headlinesOData: new baseTypes.ODataInfo(
    "Headlines",
    ["Title", "Link", "Author", "AuthorLink", "BackgroundImageURL", "SortOrder"],
    "IsActive eq true"),
headline: function (title, link, author, authorLink, backgroundImageUrl, sortOrder) {
    this.title = title;
    this.link = link;
    this.author = author;
    this.authorLink = authorLink;
    this.backgroundImageUrl = backgroundImageUrl;
    this.sortOrder = sortOrder;
},
getHeadlines: function (jsonResult) {
    var items = [];
    if (jsonResult && jsonResult.data && jsonResult.data.results) {
        angular.forEach(jsonResult.data.results, function (item) {
            items.push(new dataTypes.headline(
                item.Title,
                item.Link,
                item.Author,
                item.AuthorLink,
                item.BackgroundImageURL,
                item.SortOrder));
        });
    }
    return items;
}
The idea behind this goes back to C# classes.

headlinesOData – contains the oData info for the headlines object. If something changes in the SharePoint list which controls it, then it’s easy to make the change here and below in the headline “class”.

headline – very simple class-like structure. But the important part is that it’s a structure, not just some amorphous JavaScript object.

getHeadlines – does all the finicky work of turning JSON from $http into a list of headline objects.

All of this in one place!
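Stripped of the AngularJS plumbing, the transformation getHeadlines performs can be sketched like this (the payload shape and the two-field headline are simplifications of the listing above, and `Array.prototype.map` stands in for `angular.forEach`):

```javascript
// Sketch: convert a raw JSON result into typed headline objects.
// Only two of the six headline fields are shown, to keep it short.
function headline(title, link) {
  this.title = title;
  this.link = link;
}

function getHeadlines(jsonResult) {
  if (!(jsonResult && jsonResult.data && jsonResult.data.results)) {
    return []; // a bad payload yields an empty, safe-to-iterate list
  }
  return jsonResult.data.results.map(function (item) {
    return new headline(item.Title, item.Link);
  });
}

var result = getHeadlines({
  data: { results: [{ Title: 'Hello', Link: '/hello' }] }
});
console.log(result[0].title); // Hello
```

The payoff is that controllers only ever see `headline` objects, never raw JSON.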

Then in the controller, it ends up looking like this:

dataService.getOData(dataTypes.headlinesOData).then(function (results) {
    vm.headlines = dataTypes.getHeadlines(results);
});


The goal of adding this “dataTypes.js” (or someone may come up with a better name for it) is to keep all the oData to JS data relationships in one place and make the rest of the project easy to read.


CSOM is winning me over.

As a web applications programmer, I’ve been tied to server side technology for a very long time. The benefits of deploying pre-compiled code on a server are:

1) It runs fast.

2) It is easy to secure a server.

3) It’s easy to keep track of with logging, etc.

It’s a very “HERE & MINE (period).” way of doing things. As if the server side application is telling us, “You can not run this code, I do not trust you, only I can run this!” And for a long time, I agreed with this approach.

Slowly but surely though, CSOM (Client Side Object Model) libraries like Angular and Bootstrap are winning me over, liberating both the development and application lifecycles.

The biggest realization I had relates to what any server dev will tell you: their number one concerns when allocating resources are hardware related. How much disk space will be needed? How much RAM and processor power?

Well, we’re not going to get around disk space, but since that’s moving to the cloud, what about processor power and RAM? In 2014 we can begin to estimate that the CSOM will be running on a processor with at least 2 GHz of processing power. That’s a lot! Most CPUs are idling during web browser time and we can easily make use of it. I bought a medium-market laptop recently that came with 8 gigs of RAM! That’s phenomenal.

That being said, our business logic web application software will begin to actually run faster in CSOM than on privatized server clusters! The primary factor there is, of course, internet download speeds, which, like processors, spend most of their time either blazingly fast or in an idle or low-capacity state.

Here are some interesting points, some of them borrowed from an article by John Munsch:

1) In a Web Application, you are going to have to go back and forth between the server eventually. In the old days, we assumed the worst for CSOM, that it would have little or no capability to store data (M), process it (C) or make it look nice (V). The round-trip of sending data back and forth from the server to the client (and verifying the packets, oh, and verifying that we verified the packets, and of course prioritizing our packets in a router queue, and…) made every interaction expensive.

Nowadays we can send the data to the CSOM application and let the user do what they want with it. At that point of the “data transaction”, the end-user’s computer is probably better suited to running the MVC application itself, rather than relying on the internet and bouncing packets back and forth to the server.

This frees the server to do what it does best, safely serving data, and allows the tailored user experience of an MVC Web Application to occur using the client’s resources.

2) Development and deployment are faster. With new tools for creating and running JavaScript or jQuery libraries, we can make changes to our application without having to go through the arduous tasks of shutting down the server, deploying our software, re-starting the server and testing. The code-to-test cycle is blazingly fast, allowing us to create our applications and debug them at the same time.

What happens when you have to move your app, say from a server to the cloud? If most of your app is CSOM, this becomes a non-issue. That’s huge!

You can use a simple tool like WebStorm and Google Chrome for almost the entire development and debug process of your application’s lifecycle. And deploying it is as easy as copying files.

3) Flexibility. How much time have we spent as developers making interfaces to code injection libraries just so that we can change our code without having to shut down the application and suffer the loss of downtime? The “What if this … changes?” question is a killer to web application development that relies on so many different factors. We have to jump through eleven dozen different hoops just because of Justin. You know, Just-in-case, not Justin Bieber.

With CSOM if you want to release a new version, you release a new version! No injection or interfaces to scalable libraries needed. Plain and simple.

4) We can do more, working together. Since most of the software packages are open source by nature, such as Twitter’s Bootstrap or Google’s Angular, they are opened up to the wide world of web developers. Add-ons, forks and customizations are becoming the way to go as the true spirit of the internet, sharing information, flourishes.

This is exciting.

Instead of “My computer is this big: > <.” we now are using the internet as a sum total, meaning all of the computers together. Our clunky siloed computer with limited, pre-allocated resources is evaporating into a watery, cloud-like atmosphere!


CSOM and JavaScript – where is my application going?

A few days ago I watched a video from SPC14.

I was thinking about this some more today: the more we move towards CSOM to keep from installing code on the server, the more we are likely to rely on the big JavaScript libraries (AngularJS, dataJS, Breeze, Knockout). My training as a web developer is now taking a 180-degree turn. Before it was, “If you can do it in compiled code, then take it out of JavaScript.” Now the trend seems to be the other way around. This is a little frightening to me, having a large chunk of my application be exposed and compiled at run time by browsers.

From what I understand of that, and it makes perfect sense, is that the SharePoint team wants the SharePoint server to remain as pristine as possible. Then, using OData and JavaScript, I can create my App to run on top without really deploying anything but a web page and some JavaScript. Perhaps what I need to know is that:
1. Performance is not an issue.
2. We trust that our application, running inside the browser via JavaScript, will produce the correct results.
3. The additional places we open up for injections, web services, etc. will not pose security threats.

Do you think it may become viable to have JavaScript libraries run on some type of Application Server? Then we as developers could know our code is being run properly and safely. Almost an extension of the MVVM idea. The MV is server side, but the VM is client side. To me, that’s scary! If we could have the native SharePoint MV remain, but somehow get the VM to run on a safe JavaScript Application Server, I would feel better about it and would be able to fully agree with this new CSOM approach towards App Development.


Welcome to my blog!

Hi, my name is Jonathan Matthew Beck, born on December 10th, 1975 to Jonathan Paul Beck and Sharon McInerney in the Druid Hills of Atlanta, Ga.

I grew up on both coasts, primarily in New England and California. My early love for computer science brought me to Nintendo’s video game programming school, DigiPen, then in Vancouver, B.C.

After graduation I took the first job I could find as a web developer for an early Monster/Dice competitor called BridgePath. I learned ASP, Visual Basic and SQL Server in 1999 while working for U.C. Berkeley. For the next 10 years I worked as a web developer in the Bay Area.

In 2010 I returned to DigiPen with hopes to complete my Bachelor’s Degree and obtain a career in gaming education. Unfortunately the coursework had changed so much in 13 years that by the time year two rolled around I was still a sophomore, but a modernized one.

I decided to take the training in Visual Studio, C++ Programming, SCRUM (from some ex-Microsoft employees) and Project Management and apply it to my vast web development experience. I took an MCPD course in ASP.NET and then worked as a SharePoint Consultant for 2 years.

Hopefully I’ll be settling down soon. I miss my dog and want to get another, or maybe some cats instead. In my free time I dance, play music, write poetry and short stories and spend a lot of time outdoors.

I am also an avid gamer and impressed by how gaming is now becoming a “sport” with teams, tournaments and TV shows.