AzCopy, Azure


AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account. This article helps you download AzCopy, connect to your storage account, and then transfer files.

Microsoft Documentation

Download here:

The download is a zip archive containing the azcopy executable. Download it, extract it, move the executable to a folder on your system, and add that location to the system path for convenience.

Depending on what you need to do, you will probably require some metadata from your Azure portal. In this article I’ll show examples of how to:

  • Copy files from the local file system to a blob container in Azure
  • Copy files from one container to another

You can use the following PowerShell commands to get some of this information:

  1. Log in to your Azure account: Connect-AzureRmAccount. This launches a window for you to provide your Azure credentials. If successful, the response includes the following information:
    • Account
    • Subscription Name
    • Subscription Id
    • Tenant Id
    • Environment
  2. Get the list of storage accounts in your account: Get-AzureRmStorageAccount
    • Or create a new storage account using New-AzureRmStorageAccount

Using AzCopy

  1. To use AzCopy, first log in to Azure. Run the following command:
    • azcopy login --tenant-id "<Tenant Id from the Connect-AzureRmAccount output>"
  2. Next, you need to provision a blob container in your storage account to hold the files you’re copying. You can create a container using the following command:
    • azcopy make "https://<storage account name>.blob.core.windows.net/<container name>"
  3. Copy files (from local):
    • azcopy copy "<local path to folder containing files>\*" "https://<storage account name>.blob.core.windows.net/<container name>/"
    • If you would like to copy all subfolders and the files under them, add the --recursive flag.
  4. Copy files (from container):
    • When you copy files from one container to another, you need to pass a Shared Access Signature (SAS) token for the source container.
    • A SAS token can be created from Storage Account -> Shared Access Signature.
    • azcopy copy "https://<source storage account>.blob.core.windows.net/<source container>/?<SAS token>" "https://<target storage account>.blob.core.windows.net/<target container>/" --recursive
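Putting the steps together, here is a sketch as a shell script. The account, container, and folder names are made-up placeholders, and the commands are echoed as a dry run; remove the leading echo to actually execute them.

```shell
#!/bin/sh
# Hypothetical names -- replace with your own storage account, container, and folder.
ACCOUNT="mystorageacct"
CONTAINER="backups"
SRC_DIR="$HOME/reports"

# Destination URL in the form azcopy expects.
DEST_URL="https://${ACCOUNT}.blob.core.windows.net/${CONTAINER}"

# Echoed as a dry run; drop the leading 'echo' to run for real.
echo azcopy login --tenant-id "<your tenant id>"
echo azcopy make "$DEST_URL"
echo azcopy copy "${SRC_DIR}/*" "${DEST_URL}/" --recursive
```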


Angular, SPA

Angular 9 is out – What’s in it for me?

Angular officially released version 9 a few days ago. The most notable change in Angular 9 is Ivy, a new view rendering engine. It brings the following improvements:

  • Smaller bundle size
  • Faster Rendering
  • Better Debugging
  • Improved Type Checking

Ivy – A new view rendering engine replacing older VE

The rendering engine is responsible for taking your component template and converting it into instructions that the browser executes to build the DOM. The faster this logic runs, the faster the app renders.

Since all these changes are in the framework (under the hood), no effort is required from developers to take advantage of them. There is no change in the way you develop components.

This is basically free with the Angular 9 upgrade.

Smaller Bundles

Ivy is also responsible for building the app into much smaller bundles. This means the application is quicker to download; especially on mobile devices, the start-up time is much shorter.


AOT Compilation

Angular 9 moves AOT compilation into the development build, which means errors that previously surfaced only when you built the app with the --prod flag are now seen during development.

Strict Type-Checking

Now you can specify how strictly you’d like Angular to check for issues in your component templates.

In addition to the full mode behavior, Angular version 9:

  • Verifies that component/directive bindings are assignable to their @Input()s.
  • Obeys TypeScript’s strictNullChecks flag when validating the above.
  • Infers the correct type of components/directives, including generics.
  • Infers template context types where configured (for example, allowing correct type-checking of NgFor).
  • Infers the correct type of $event in component/directive, DOM, and animation event bindings.
  • Infers the correct type of local references to DOM elements, based on the tag name (for example, the type that document.createElement would return for that tag).
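The strictness level is controlled from the TypeScript configuration. As a sketch, enabling the strictest template checking in Angular 9 looks like the following; the surrounding compiler options are illustrative, the key flag is strictTemplates:

```json
{
  "compilerOptions": {
    "strict": true,
    "strictNullChecks": true
  },
  "angularCompilerOptions": {
    "strictTemplates": true
  }
}
```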

Upgrade to Angular 9

You can run the following command to upgrade to the latest version.

ng update @angular/cli @angular/core

However, it is recommended to upgrade first to version 8 and then to the latest version. For more information on how to upgrade, visit the Angular documentation here. There is also an interactive guide to upgrading to version 9 that might be worth trying.
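The recommended two-step upgrade can be sketched as follows; the commands are echoed as a dry run, so remove the leading echo to run the real ng commands in your project:

```shell
#!/bin/sh
# Two-step upgrade sketch: first to the latest v8, then to v9.
V8_CMD="ng update @angular/cli@8 @angular/core@8"
echo "$V8_CMD"
echo ng update @angular/cli @angular/core
```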


Local Development Server – Salesforce

With some experience working with a Salesforce development team, I noticed that local development works differently for Salesforce. Teams use their favorite code editors such as Eclipse or VS Code, and every time a developer changes the code and wants to test it, the code needs to be pushed to a Salesforce org. VS Code is the recommended IDE; it provides a great integrated development experience and an easy mechanism to sync the code with the org.

From my personal experience with the Salesforce development environment, this can easily turn into a frustrating experience and slow things down. Since there is no instant feedback on your code change, the coding experience feels prehistoric. Having previously worked on lots of Angular applications, by contrast, developer feedback there was instant (thanks to the live-reload feature).

In 2019, Salesforce introduced a brand new feature to solve this problem, called the ‘Local Development Server’.

What is a Local development server?

“The Local Development Server is a Salesforce CLI plug-in that configures and runs a Lightning Web Components-enabled server on your computer. You can develop Lightning web components and see live changes without publishing the components to an org. “

Salesforce documentation

Wow, that sounds promising!

So now if you’re developing components, you can easily test them locally without deploying them to the org, and you get instant feedback on errors. Fix an error and the local development server instantly refreshes with your changes.

The development server aims to improve productivity in three main areas:

  1. Provide a component rendering experience as close to a real environment as possible. If your component works locally, it will work in the org.
  2. Make it easier and faster to debug, find, and resolve errors. The local server shows you the error in the browser with the exact error message, file, and line number where the error occurs.
  3. Integrate with real org data. Any request for data from your Salesforce org gets proxied to the org and returned to your local component. You can do CRUD operations and call Apex controllers.

How do you set it up?

  1. Install vscode
  2. Install Salesforce CLI
  3. Install Salesforce Extension Pack
  4. Local Development will eventually ship with the Salesforce CLI out of the box, but for now you need to install it by running the following command:
    1. sfdx plugins:install @salesforce/lwc-dev-server
    2. Next, create your project in VS Code and authorize an org.
  5. Now that the plugin is installed and the project is created, you can launch the local development server using one of these commands:
    1. sfdx force:lightning:lwc:start, or
    2. Use the VS Code command palette and choose ‘SFDX: Open Local Development Server’. If the local development server isn’t running, this starts the server.
  6. Open localhost:3333 in your web browser and this is what you will see.
Landing page on my local development server showing all lwc components from my project
Clicking on c-hello-world component produces this page which shows the output from the component.

At this point, any change you make to the component will refresh this browser window and reflect your changes.
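The setup steps above can be sketched as a short script. The commands are echoed as a dry run; remove the leading echo to run them against a real Salesforce CLI install.

```shell
#!/bin/sh
# Dry-run sketch of the local development server setup.
PLUGIN="@salesforce/lwc-dev-server"
echo sfdx plugins:install "$PLUGIN"
echo sfdx force:lightning:lwc:start
# The server listens on port 3333, as shown above.
echo "open http://localhost:3333"
```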

Makes life a little easier!


What is a “Scratch Org”

As per definition from Salesforce developer portal:

The scratch org is a source-driven and disposable deployment of Salesforce code and metadata. A scratch org is fully configurable, allowing developers to emulate different Salesforce editions with different features and preferences. You can share the scratch org configuration file with other team members, so you all have the same basic org in which to do your development.

As the definition states, scratch orgs are disposable deployments, which means they are not permanent and are great for temporary use. This can boost developer productivity and make team collaboration easier, especially for automating tests and deployments. Scratch orgs are typically disposed of automatically after 7 days; the maximum you can keep one is 30 days.

Developers can share the scratch org with other members of the team and develop together.

To be able to work with scratch orgs, you first need to enable Dev Hub in your business/production org.

Steps to create scratch org:


  1. Open VsCode
  2. Open Command Palette (Ctrl + Shift + P) and choose: SFDX: Create Project with Manifest
  3. Choose ‘Standard’ as project template.
  4. Enter name for the project.
  5. When asked, select a folder on the disk to save your project to.
    1. VsCode would run the CLI command to create a basic structure for your project.
  6. Open the Command Palette and choose ‘SFDX: Open Default Org’. This launches a new browser window and logs you in to the new scratch org’s setup section.
  7. At this point you can start creating code artifacts. For example, to create an Apex class, open the Command Palette and choose ‘SFDX: Create Apex Class’, specify the file name, and press Enter. This creates a new Apex class for you.
List of commands to generate various other artifacts in scratchorg

Push & Pull Code

  1. To push your code to the scratch org, open Command Palette and choose ‘SFDX: Push Source to Default Scratch Org’.
  2. If everything ran successfully, you should see your custom Apex class in the scratch org (Setup -> Custom Code -> Apex Classes).
  3. This way you can build various other code artifacts such as Triggers, Lightning Components, Lightning Apps, and Visualforce Pages, deploy them to the scratch org, and test.
  4. Like push, you can also pull code from the scratch org. From the Command Palette, choose ‘SFDX: Pull Source from Default Scratch Org’. This pulls all the code from the org to your local system.
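The same create/push/pull flow can also be driven from the terminal. Here is a dry-run sketch with echoed sfdx commands; the org alias and the scratch-org definition file path are made-up examples:

```shell
#!/bin/sh
# Dry-run sketch: create a scratch org, push source, then pull changes back.
# "MyScratch" and the config path are illustrative.
ORG_ALIAS="MyScratch"
echo sfdx force:org:create -f config/project-scratch-def.json -a "$ORG_ALIAS"
echo sfdx force:source:push -u "$ORG_ALIAS"
echo sfdx force:source:pull -u "$ORG_ALIAS"
```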

For more information follow the salesforce documentation here.


Scratch Org

Code generation, WebStorm

Live Templates – WebStorm

Live Templates are another great WebStorm feature that can make your development much more pleasant. I personally find them very useful. You can use this feature to inject your favorite or frequently used code snippets and make your life a little easier. 🙂 You can use live templates to inject any kind of code, be it HTML, JavaScript, etc.

Let me give you some examples:

A live template can be created from the Live Templates page in the File -> Settings dialog (Ctrl + Alt + S).

When the Settings dialog opens, search for ‘live templates’ by typing in the search field:


You will see the live templates box as below. Notice that there are tons of templates already predefined by WebStorm for you.


The highlighted template example shows that by typing ‘ngdl’ followed by the Tab key, the code snippet defined in the template text is injected into your code at the position where you typed ‘ngdl’.

Creating a new template

Now let’s understand this further by creating a simple template. Click the ‘+’ icon at the top right of the dialog and choose Template Group. It is better to put your custom templates under your own custom group. Give your group a name and press OK.


Next, click the ‘+’ icon again and this time select the first option to create a live template. Type your desired abbreviation (the shortcut for your template) and provide a meaningful description. In the template text box, enter the code template/snippet that you would like to inject. In this example I have specified a template that generates an Angular controller.



Please make sure to choose a valid context for your template, as below. Since the given template is JavaScript code, I’d choose the JavaScript option here.


Now let’s try to use it. Go to your project, create a new JavaScript file, and open it. Start typing the ‘myctrl’ abbreviation you chose in the previous step. You will notice that WebStorm hints that it found your template.


Now press Tab and you will see your code snippet injected into the productsCtrl.js file.

(function() {
    'use strict';
    // 'app' is a placeholder module name
    angular.module('app')
        .controller('myCtrl', function() {
            var vm = this;
            Object.defineProperties(vm, {
                ctrlProperty: {
                    value: 1, writable: true
                }
            });
        });
})();

@w3s0m3! 🙂

But! I still want to enhance this template to make it more dynamic. Currently it injects the controller code with the name myCtrl every time, which isn’t much help. So let’s make use of variables, another feature live templates provide. Go back to the live templates settings dialog and modify your template as below:


Notice that there are a bunch of words surrounded with ‘$’ signs. Live templates use this syntax to indicate a variable in the template, which you can reuse or use to format and populate other variables. Click the ‘Edit variables’ button and ensure you have the following settings:


Basically there are four variables:

  • moduleName – takes whatever you type.
  • ctrlName – takes whatever you type.
  • friendlyCtrlName – notice that it has a custom expression. It calls predefined functions to format the string it receives from the ctrlName variable. Since the value of this variable is derived from within the template, check the ‘Skip if defined’ box.
  • END – a predefined live template variable that indicates where the cursor should be once your template is fully expanded.
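Since the screenshots are not reproduced here, this is a sketch of what the template text might look like. The exact comment wording and the module call are assumptions based on the variables described above:

```
(function() {
    'use strict';
    // $friendlyCtrlName$ controller
    angular.module('$moduleName$')
        .controller('$ctrlName$', function() {
            var vm = this;
            $END$
        });
})();
```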

Now go back to your js file, delete the entire content, type ‘myctrl’ again, and press the Tab key.

Notice that your keyboard caret is positioned where your first variable ($moduleName$) was defined.


Type my-app and press Tab. The focus moves to the controller name. Start typing your controller name; notice that while you type, the comment gets created automatically. Also, because of the capitalize and spaceSeparated functions we used, ‘myProduct’ got converted into ‘My Product’ for us.


Press Tab and the cursor moves to where you defined $END$ in the template. I set it at this position so that if you would like to define more properties on the controller, you are in the right place to do so:


Another Template

Similarly, I have defined a useful template to generate the code for defining object properties. Here is the template:


And here is the output when I type ‘prval’ followed by the Tab key. The property name and the assigned value are both used to generate a user-friendly comment.


Similarly, you can define various such templates to generate your HTML and JavaScript code components; this enhances your code quality and consistency.

I hope you find this article useful!




To “Gulp” or to “Grunt”

Well, the answer is: whatever you prefer! 🙂

Both of these are great tools to automate tasks for your application. Both have great plugin ecosystems and allow developers to really speed up their development cycle by integrating plugins to automate the process: for instance code analysis, linting, minification, and many more. But the question is, which would you choose?

Here are some high level differences:

Grunt:

  • More support and a bigger community; more than 5,000 plugins.
  • File based: each task’s input/output is written to the hard drive, so multiple tasks require multiple disk reads/writes.
  • Might perform slower when multiple tasks are configured.
  • Configuration over code: tasks are configured using configuration files, which requires the developer to know the options for each task.

Gulp:

  • Relatively newer, with fewer plugins.
  • Stream based: Gulp uses in-memory streams to run multiple tasks sequentially, so only the final output results in disk activity.
  • Relatively faster due to in-memory processing.
  • Code over configuration: developers may find this more comfortable, as it is easier to understand and debug.

Also here are the sample files:

Sample Gulp file

var gulp = require('gulp');
var args = require('yargs').argv;
var $ = require('gulp-load-plugins')({lazy: true});
var config = require('./gulp.config')();
var del = require('del');

gulp.task('vet', function(){
    return gulp
        .src(config.alljs)   // file globs come from gulp.config.js
        .pipe($.if(args.verbose, $.print()))
        .pipe($.jshint())
        .pipe($.jshint.reporter('jshint-stylish', {verbose: true}));
});

gulp.task('styles'/*, ['clean-styles']*/, function(){
    return gulp
        .src(config.less)    // path comes from gulp.config.js
//        .on('error', errorLogger)
        .pipe($.less())
        .pipe($.autoprefixer({browsers: ['last 2 version', '> 5%']}))
        .pipe(gulp.dest(config.temp));
});

//----- MORE TASKS


Sample Grunt file

module.exports = function(grunt) {
  // Task configuration goes inside initConfig.
  grunt.initConfig({
    jshint: {
      files: ['Gruntfile.js', 'src/**/*.js', 'test/**/*.js'],
      options: {
        globals: {
          jQuery: true
        }
      }
    },
    watch: {
      files: ['<%= jshint.files %>'],
      tasks: ['jshint']
    }
  });

  grunt.loadNpmTasks('grunt-contrib-jshint');
  grunt.loadNpmTasks('grunt-contrib-watch');

  grunt.registerTask('default', ['jshint']);
};

Most likely, if you are just starting a new project and don’t have much prior experience with these tools, you’ll lean toward choosing Gulp as your task automation tool because of its ease of use and performance. But again, I will leave it up to you to decide.





Quick intro to Git – Part I


Hello friends!

Lately I was challenged with an assignment to consume some source code hosted on GitHub and contribute to it by performing a few given tasks. Being pretty new to Git, I spent a few hours on the internet trying to figure out how this thing works. I had no prior knowledge of it, and given that my past experience had been with Microsoft technologies, I was actually a little scared. 🙂

But, let me tell you, I felt much better when I completed the challenge. Although I must admit that despite being simple, it was still daunting to me.

Anyway, I felt it would be nice to share my experience and maybe help others understand Git.

What is Git?

Git is an open source project with great community support and a solid user base. If you are an open source developer, then Git is probably a first-class requirement for you…

View original post 562 more words


Publishing a node package

Publishing your node package to the npm registry is a pretty easy task. To publish to the npm registry you need valid user credentials. You can either create a user on the npm website or create one using the following commands:

    λ npm adduser
    Username: User12345
    npm WARN Name must be lowercase
    Username: user_test
    Password: *******
    Email: (this IS public)
    Logged in as user_test on

If you already have a user, you can simply use the npm login command to log in to the npm registry. Once you are logged in, run the npm whoami command to confirm.

Publishing a package

You can publish any folder that has a valid package.json file. To publish, cd to the directory containing the package.json file and then run the npm publish command.

Note: Your package name must be unique, so ensure it is not already in use by someone else. To confirm this, search the npm registry for your package name before publishing.

If you get the following error while publishing, most likely the package name already exists in the npm registry:

npm ERR! you do not have permission to publish "my-package". 
Are you logged in as the correct user? : my-package

If the publish is successful, you should see the following:


Publish successful!

Now switch over to the browser and search the npm registry for your package to see that it is now visible:


Your published package found in search

And voila !

Updating package

Since you’ll keep adding functionality to your package, you will also want to update it in the npm registry. To do this:

  1. npm version <major|minor|patch> – this increments the version in package.json.
  2. npm publish

When both commands succeed, you can see your changes reflected in the npm registry.
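The update flow can be sketched as a short script. The commands are echoed as a dry run; as an extra safety net, npm publish also supports a --dry-run flag that shows what would be uploaded without actually publishing:

```shell
#!/bin/sh
# Dry-run sketch of the update-and-republish flow.
BUMP="patch"
echo npm version "$BUMP"
echo npm publish --dry-run
echo npm publish
```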