More Effective Meetings with Google G Suite

There are plenty of spirited articles that outline techniques for more effective and efficient meetings. But assuming you’ve done the basics that the theorists recommend, how can modern software tools help you squeeze that little bit of extra time and effort out of your meetings?

In this blog post, I’ll show you some simple tips on how to use Google’s G Suite (formerly known as Google Docs) to reduce the running time of your meetings and to identify and assign actions more efficiently, as well as ways for attendees to get more value from each meeting and track the actions assigned to them (at this and other meetings they attend).

And please don’t be tempted to give up after reading the first 1-2 items, thinking you know this stuff already. Trust me, the rest of them will be worth it.

Google G Suite

Originally named Google Docs, G Suite is the current name of Google’s web-hosted productivity software offering. Along with the usual Email (Gmail) and File Sharing (Drive) services, it also comes with a variety of “office” software products, including (but not limited to) Docs, Sheets and Slides, each of which supports a wide range of very neat collaboration features.

There are, of course, similar offerings from other vendors, but I’ve not used those as much as G Suite. But enough about that; let’s get your efficiency up!

1) Sharing

Because Google Docs are stored in the Google Cloud (and not on your local laptop), more than one person can access them at the same time. And in terms of what these people (a.k.a. collaborators) can do, there are three levels of access:

  • Can View – people can see/read the contents of the document but cannot change it.
  • Can Comment – people can see the contents of the document and make comments on that content, but cannot change the document directly; any changes they suggest need to be reviewed (and accepted) by the document owner.
  • Can Edit – people can edit the document directly themselves, or make comments on content created by others.

So assuming that you’ve already/recently created a Google Doc to track your meeting(s), and assuming you’ve outlined a very basic agenda therein, the first thing you should do is share that document with the other attendees, giving them Edit access.

That way, they can add their comments/updates ahead of time and give only a brief verbal update during the meeting. Not only will this be a more engaging experience for them and others (allowing for a more focused discussion), but it will also save the chairperson the time of having to minute their verbal update, which will keep the meeting moving along. It could also give the chairperson just enough time to record any actions arising from those updates there and then, saving further time after the meeting.

Once the meeting has concluded and you’ve made any final adjustments to the notes/actions, you should then share the document with the final, wider audience, giving them Can Comment access. This will automatically alert them that the meeting notes are available for review, and also allow them to ask any follow-up questions they might have without consuming the time of all the original attendees – just that of the document owner.

2) Comments

Anyone with Can Edit or Can Comment access to a Google Doc can select sections of text and make a comment about them. These comments are then recorded in the document for others to see (or respond to). The document owner is also alerted (by email or mobile alert) when a comment is made in one of their documents.

It’s also possible to reference another collaborator when making a comment in a document (assuming they have access to the document). This can be done by referencing their email address (with a plus symbol before it) in the comment body. In this case, that collaborator will also receive an alert (as will the owner).

Once a comment thread (or discussion) has concluded (i.e. the question has been answered), the document owner can Resolve the comment, after which it will no longer be visible in the document itself. It will still be recorded in the document history, but only visible to the document owner thereafter.

3) Introducing Action Items

This is where it begins to get really interesting, so thanks for sticking with me this far.

In more recent updates to G Suite, Google enhanced the commenting functionality so that when referencing another collaborator you have the option to Assign the comment as an Action Item to them. The difference between this and an ordinary comment may not be entirely obvious yet, but keep reading and you’ll see the value shortly.

4) Auto-Assignment of Action Items

To create a regular comment (or Action Item) in a document, you first need to select some text, choose the Insert, Comment menu option, address the intended collaborator and tick the option to Assign as Task. That’s a lot of typing and clicking when you’re otherwise trying to listen to meeting attendees giving verbal updates and transcribe those into appropriate notes and actions (for them or others).

Fortunately, G Suite has a very clever feature that can help (subject to certain conditions). If the document owner (or another collaborator with Edit access) phrases an update to the document in a certain way (e.g. “John to follow up with the Sales team”) and the document has been explicitly shared with someone called John, then G Suite will automatically attempt to assign that piece of text as an Action Item to John (prompting you first of course).

This is not only another excellent time saver but another reason to share the document ahead of time (to the right people). It’s also a strong incentive to be more prescriptive and succinct in your narrative as the meeting chairperson.

5) Revealing and Reviewing Your Action Items

So you and your colleagues are a few weeks into your new G Suite regime and you’ve personally chaired a good few meetings and attended several others. And in doing so, you know you’ve amassed a sizeable number of action items but have no idea which documents they’re in or how to find them (since your last browser restart did not preserve your open tabs).

So give this a try instead:

  1. Go to your Google Drive home page.
  2. In the Search box at the top, enter the criteria followup:actionitems (or click Search options, scroll to the bottom and choose Action items only from the Follow up drop-down menu).
  3. Voila! You now have a list of all Google Docs where there’s an action on you (including ones not owned by you).

Note that this does not work for ordinary comments – you need to be sure that the original comments were Assigned as Action Items in their respective documents.

You’re Welcome!

Reflections of Red Hat Summit 2017

I returned from the annual Red Hat Summit last week, which was as exciting and inspiring as ever. This year the summit returned to Boston and was held at the highly impressive Boston Convention & Exhibition Center (BCEC), which is located in the Seaport District of Boston.

The summit took place over 3 days and we were treated to an astonishing array of sessions from Red Hat developers, staff, customers and partners, each one better than the last.

It was also an excellent opportunity to see a series of presentations from several members of the Red Hat Mobile team, including two of my own: a Lightning Talk in the main DevZone entitled Application Health Monitoring from the Inside Out, and a breakout session on Cloud Solutions for Enterprise Mobility.

There were numerous other highlights but some of my personal favourites were:

  1. A new partnership between Red Hat and Amazon Web Services;
  2. The launch of OpenShift.io, a hosted developer environment for creating and deploying hybrid cloud services;
  3. The launch of the industry’s first Health Index for container images;
  4. The story of Easier AG (a Swiss medical company) and how Red Hat’s Innovation Labs helped to make their ideas a reality, which all started in our home office in Waterford;
  5. Attending a baseball game at Fenway Park to see the Boston Red Sox in action, live!

I’m already setting my sights on Summit 2018 and am very much looking forward to the year ahead preparing for it.

A case for more Open Source at Apple

Open Source Context

I’ve been involved in the software industry for almost 30 years and have long been an admirer of open source software, both in terms of what it stands for and in terms of the inherent value it provides to the communities that create it and support it.

I’m even old enough to remember reading about the creation of the Free Software Foundation itself by Richard Stallman in 1985, and couldn’t be happier to now find myself working at the kingpin of open source, Red Hat (which I joined through the acquisition of FeedHenry in 2014).

And while in recent years it’s been reassuring to see more and more companies adopt an open source strategy for some of their products, including the likes of Apple and Microsoft, it’s been equally soul-destroying having to live with the continued closed source nature of some of their other products. Take Apple’s Photos app for iOS as a case in point.

Apple iPhoto

Some time around 2011, I took the decision to switch to Apple’s excellent iPhoto app for managing my personal photo collection, principally due to the facial recognition and geolocation features but also because of the exceptional and seamless user experience across the multitude of Apple devices I was amassing.

Then, in late 2012, I undertook a very lengthy personal project (spanning 9 months or more) to convert my extended family’s vintage photo collection to digital format, importing it all into iPhoto and going the extra mile to complete the facial and location tagging as well.

The resultant experience was incredible, particularly when synced onto my iPad of the time (running iOS 6). Hours at a time were spent perusing the memories it evoked, with brief interludes of tears and laughter along the way. What was particularly astonishing was how the older generations embraced the iPad experience within minutes of holding the device for the very first time. This was the very essence of what Steve Jobs worked his entire life for, and for this I am eternally grateful to the genius he clearly was.

Apple Photos

However, since then, with the launch of subsequent releases of iOS I have never been able to recreate the same experience, for two reasons.

Firstly, the user interface of the iPhoto app kept changing (becoming less intuitive each time, as evidenced by the fading magic experienced by the same generation that had previously loved it so much), and secondly, it was eventually replaced outright by the Photos app which, incredibly, has one simple but inexcusable bug – it cannot sort!

Yes, quite incredibly, the Photos app for iOS cannot sort my photos when using the Faces view. If you don’t believe me, just Google the phrase “apple photos app sort faces” and take your pick of the articles lamenting such a rudimentary failing.

A Case for Open Source

“So what does this have to do with open source?”, I hear you ask.

Well, trawling through the countless support articles on Apple’s user forums, it seems that this bug has been confirmed by hundreds of users but, several years later, it is still not fixed. If this was an open source project, it would have been long since fixed by any one of a number of members of the community I’m sure would form around it, and potentially even by me!

So c’mon Apple, let’s have some more open source and let’s make your products better, together.

A Simple Model for Managing Change Windows

One of the more common things we do in the Cloud Operations team at Red Hat Mobile is facilitate changes to environments hosted on the Red Hat Mobile Application Platform, either on behalf of our customers or for our own internal operational purposes.

These are normally done within what is commonly known as a “Change Window”, which is a predetermined period of time during which specific changes are allowed to be made to a system, in the knowledge that fewer people will be using the system or where some level of service impact (or diminished performance) has been deemed acceptable by the business owner.

We have used a number of different models for managing Change Windows over the years, but one of our favourite approaches (that adapts equally well to both simple and complex changes and that is easy for our customers and internal stakeholders to understand) is this 5-phase model.

Planning

The planning phase is basically about identifying (and documenting) a solid plan that will serve as a rule book for the other phases in this model (below). In addition to specifying the (technical) steps required to make (and validate) the necessary changes, your plan should also include additional (non-technical) information that you will most likely need to share externally so as to set the appropriate expectations with the affected users. This includes specifying:

  • What changes are you planning to make?
  • When are you proposing to make them?
  • How long will they take to complete?
  • What will the impact (if any) be on the users of the system before, during and after the changes are made?
  • Is there anything your customers/users need to do beforehand or afterwards?
  • Why are you making these changes?

Your planning phase should also include a provision for formally communicating the key elements of your plan (above) with those interested in (or affected by) it.

Commencement

The commencement phase is about executing on the elements of your plan that can be done ahead of time (i.e. in the hours or minutes before the Change Window formally opens) but that do not involve any actual changes.

Examples include:

  1. Capturing the current state of the system (before it is changed) so that you can verify the system has returned to this state afterwards.
  2. Issuing a final communication notice to your users, confirming that the Change Window is still going ahead.
  3. Configuring any monitoring dashboards so that the progress (and impact) of the changes can be analysed in real time once they commence.

The commencement phase can be a very effective way to maximise the time available during the formal Change Window itself, giving you extra time to test your changes or handle any unexpected issues that arise.
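
As a minimal, illustrative sketch of item 1 above (capturing the current state of the system), something along the lines of the following could be run just before the window opens; the exact commands will of course depend on your own stack:

$ mkdir -p pre-change
$ uname -a > pre-change/kernel.txt        # OS and kernel details
$ df -h > pre-change/disk-usage.txt       # disk usage before the change
$ uptime > pre-change/load.txt            # load averages before the change
$ ps aux > pre-change/processes.txt       # running processes before the change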

Execution

The execution phase is where the planned changes actually take place. Ideally, this will involve iterating through a predefined set of commands (or steps) in accordance with your plan.

One important mantra that has stood us in good stead over the years is “stick to the plan”. By this we mean, within reason, try not to get distracted by minor variations in system responses; these distractions consume valuable time, to the point where you can run out of it and have to abandon (or roll back) your changes.

It’s also strongly recommended that the inputs to (and outputs from) all commands/steps are recorded for reference. This data can be invaluable later on if there is a delayed impact on the system and steps need to be retraced.
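
One simple way of doing this on a Unix-like system (assuming the standard script utility is available) is to wrap the whole session in script, which records everything typed and printed to a log file for later reference:

$ script -a change-window-$(date +%Y-%m-%d).log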

Validation

Again, this phase should be about iterating through a predefined set of verification steps, which may include examining various monitoring dashboards or running automated acceptance/regression test tooling, all in accordance with two very basic principles:

  1. Have the changes achieved what they were designed to (i.e. does the new functionality work)?
  2. Have there been any unintended consequences of the changes (i.e. does all the old functionality still work, or have you broken something)?

Again, it’s very important to capture evidence of the outcomes from the validation phase, both to confirm that the changes have been completed successfully and to show that the system has returned to its original state.
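
As a trivial, hypothetical illustration, even recording the HTTP status of a key endpoint before and after the change gives you one such piece of evidence to compare (the URL here is purely illustrative):

$ curl -s -o /dev/null -w '%{http_code}\n' https://api.example.com/health   # expect 200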

All Clear

This phase is very closely linked to the validation phase but is slightly more abstract (and usually less technical) in nature. Its primary purpose is to act as a higher-level checklist of tasks that must be completed before the final, formal communication can be sent to the customer (or users), confirming that the work has been completed and verified successfully.

 

A new era begins for Red Hat in Waterford

As the year that was 2016 draws to a close, we embrace and celebrate the dawn of a new era for Red Hat in Ireland with the opening of our brand new offices in my home city of Waterford on Monday, 12 December 2016.

This is an immensely proud moment for the entire Red Hat team in Waterford, especially so for those involved in the FeedHenry acquisition of October 2014, which has led us to this wonderful occasion.

It is also fitting that the new offices are the first to feature the trademark We Are Red Hat internal branding in the Irish language, which translates as “Is Sinne Red Hat”.

So to the management, staff, families and friends of the growing Red Hat community in Waterford, take a bow and enjoy the celebration and delights that this day will bring.

It is everything that we have worked for and is no more than our wonderful city deserves.

Muhammad Ali and My Grandfather

Edmund (Neddy) Mernin

One of my most prized possessions is some very old VHS footage of my Grandfather, Edmund (Neddy) Mernin (1893-1983), being interviewed for an Irish history documentary in 1969. It really is such a privilege to be able to share this tiny slice of family history with my own children, where they can see real-life footage of their Great Grandfather.

The documentary, entitled Gift of a Church, tells the unusual story of how (in 1965) the church in his home village of Villierstown, Co. Waterford, had been donated by the Church of Ireland to the Catholic people in the village so that they would not have to walk several miles to the nearest village to celebrate Mass on Sunday. The church was in need of some repair and was seemingly no longer needed as the Protestant population had moved away.

The documentary was first broadcast on 30 October 1969 as part of an RTÉ programme called Newsbeat and was reported by the renowned Irish history documentary maker and TV presenter, Cathal O’Shannon (well known for his distinctive voice). And with huge thanks to the team at RTÉ Archives, an excerpt (showing Neddy speaking) is now available online.

Muhammad Ali

So what’s the connection with Muhammad Ali, I hear you say? Well, it turns out that the very same broadcaster who interviewed my Grandfather in 1969 also went on to conduct a famous interview with boxing legend Muhammad Ali when he visited Ireland (to fight Alvin Lewis in Croke Park) in July 1972.

That visit was also chronicled in another Irish documentary from 2012 by Ross Whitaker, entitled When Ali Came to Ireland.

Two Legends, One Story

So it turns out that Muhammad Ali was not the only legendary world figure that Cathal O’Shannon had the privilege of interviewing. He also interviewed Neddy Mernin!

 

How to exhaust the processing power of 128 CPUs

Amazon Web Services launched another first earlier this year in the form of a virtual server (or EC2 instance as they call it) with a staggering 128 virtual CPUs and over 1.9 Terabytes of memory.

The instance, which is an x1.32xlarge in their naming scheme, could cost you as much as $115,000 per year to operate, but you could certainly reduce that figure significantly (e.g. to around $79,000) if you knew ahead of time that you would be running it 24×7 for the full year.

In any case, during a recent experiment using one of these instances, we set about trying to find some novel ways to max out the processing power and memory, and here are the two techniques we settled on (with evidence of each of them in action).

CPU Exhaustion

This was, strangely, a lot easier than we expected and simply involved using the Unix yes command which, it seems, draws an excessive amount of processing power when used in isolation from its normal purpose.

So for our x1.32xlarge instance, with its 128 vCPUs, we used the command below to spawn 127 processes, each running the yes command, and we then monitored its impact using the htop command.

$ for i in {1..127}; do yes>/dev/null & done

And here it is in action:

The reason for spawning just 127 processes (instead of the full 128) was to ensure that the htop monitoring utility itself would have enough resources to function, which can be seen clearly above.
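
Incidentally, if you’d rather not hard-code the process count, a small variation (assuming GNU coreutils’ nproc is available) derives it from the number of vCPUs on the instance, again leaving one spare for htop:

$ for i in $(seq 1 $(($(nproc) - 1))); do yes > /dev/null & done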

Memory Exhaustion

Exhausting the memory was a little harder (to do quickly), but one of our more hard-core Unix guys came up with this old-school beauty, which combines the processor-hungry yes command with a character replacement that strips out every newline, plus a search for a character that will never be found. Because the input contains no newlines, grep has to buffer a single, ever-growing “line” in memory while it searches, and that is what drives the memory usage up.

$ for i in `seq 1 40`; do cat <(yes | tr \\n x | head -c $((10240*1024*4096))) <(sleep 18000) | grep n &  done

And here it is in action too; note the actual memory usage in the bottom left:

Note also that the CPU usage, while almost at the limit, is not as clear-cut as before, and all processors are being utilised roughly equally. Note too the Load Average of 235 (bottom right of centre), which supports the theory that Unix systems can sustain load averages of up to twice the number of processors before encountering performance issues. Some folks believe that figure to be closer to one times the number of processors, but the results above suggest otherwise.
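
And when you’ve finished experimenting, the simplest way to tidy up (assuming the processes were launched from your current shell) is to kill the background jobs that shell is still tracking:

$ kill $(jobs -p)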

Amazon Web Services X1

The original announcement of the X1 instance type is available at:

How to store 1.8 trillion photos on AWS

During a recent evaluation of Amazon’s Elastic File System service, we were astounded to discover that it is backed by what can only be described as a gargantuan storage volume, spanning a whopping 9 Exabytes in size.

For those of you familiar with the Unix operating system, here is a screen shot showing the 9 Exabytes in action (note the sheer number of digits in the Available space column).

To put this mammoth number into perspective, assume that:

  1. The average size of a photo taken with a decent smartphone these days is around 5 Megabytes (MB).
  2. It’s quite common to see many such smartphones with a capacity of 16 Gigabytes (GB), which is 16,000 Megabytes, which is around 3,200 photos.
  3. Several laptop models now come with as much as 1 Terabyte (TB) of storage, which is the equivalent of 1,000 Gigabytes (1,000,000 Megabytes), which is enough for well over 200,000 photos.

But to get from there to an Exabyte, you’d need to eat your way through a further 1,000 Terabytes just to reach what’s known as a Petabyte, and then a further 1,000 of those to finally arrive at an Exabyte. And remembering that our EFS volume is 9 of those, that’s the equivalent of 1,800 billion (or 1.8 trillion) photos!
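
For the sceptics, here’s that back-of-the-envelope calculation expressed as simple shell arithmetic, using decimal units and assuming 5 Megabytes per photo:

$ echo $(( 9 * 1000 * 1000 * 1000 * 1000 / 5 ))   # 9 Exabytes in Megabytes, divided by 5 MB per photo
1800000000000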

And fascinatingly, when using the more helpful variant of the above Unix command (df -h), which shows the used space in percentage terms, you would have to copy an astonishing 90 Petabytes of information (or 18 billion photos) onto this volume just to get the Used column to move off 0%.

Amazon Elastic File System

The main selling points of EFS are that it:

  1. Elastically grows (and shrinks) to meet your storage needs;
  2. Runs across multiple data centres (Availability Zones);
  3. Can be attached to more than one server at a time (by virtue of the fact that it’s powered by NFS);
  4. Charges only for the storage you consume.

For more, see http://docs.aws.amazon.com/efs/latest/ug/whatisefs.html.

Why you should always evaluate each new AWS region

Amazon Web Services launched another new hosting region last month, this time in Mumbai, India. The official press release is available at:

Based on our experiences from previous region launches (e.g. Sydney), where we discovered that not all of the services we use/require are available at the time of launch, we decided to compile a list of those features and put together a set of evaluation criteria for determining if/when we might be able to launch some new services from Mumbai.

And while it may seem excessive or wasteful to formally evaluate whether all of the services you use are available (plus, the press release normally contains some information about this), we actually discovered that a number of Instance Types (that we were still using in other regions) were in fact not being made available in Mumbai at all.

Findings

The items below are what we discovered were not as we expected in the Mumbai region, but the list could, of course, be different for you or for the next region launch. So there is still value in compiling a list of the services your organisation uses/requires as well.

EC2 Instance Types

Not all instance types were available.
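
For what it’s worth, more recent versions of the AWS CLI make this kind of check straightforward; for example, something along these lines (the instance type shown is purely illustrative) will tell you whether a given type is offered in the Mumbai region:

$ aws ec2 describe-instance-type-offerings --region ap-south-1 --filters Name=instance-type,Values=m3.large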

Elastic File System

At the time of the Mumbai launch, EFS was not available there; in fact, it was not yet available in the Sydney region either.

EC2 API Version

It was actually when first evaluating the Frankfurt region (eu-central-1) that we discovered that AWS would not be supporting V1 of their APIs there (so we had to update some of our tooling). It was the same in Mumbai.

Availability Zones

The Mumbai region had only two Availability Zones at initial launch, which was insufficient for a Disaster Recovery (DR) deployment of MongoDB. This is because MongoDB requires a minimum of 3 members to form a valid Replica Set. And while you can still form a 3-member Replica Set using two AZs, one of those AZs must then host two members, so a single AZ failure could take out two of the three members at once and leave the set without a majority.
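
Similarly, you can check the number of Availability Zones in any region with a one-liner like this (the JMESPath query assumes a reasonably recent AWS CLI):

$ aws ec2 describe-availability-zones --region ap-south-1 --query 'length(AvailabilityZones)'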

Summary

None of these issues were insurmountable for us but learning about these differences ahead of time did enable us to adjust our delivery timelines and manage customer expectations accordingly (e.g. to allow extra time for new AMI generation), which is a very valuable part of any planning process.

Top Tips for AWS Certificate Manager service

On foot of the recent launch of the AWS Certificate Manager service, we decided to check it out. Here are some of our highlights along with some noteworthy items you may find helpful.

Highlights

  1. The acronym for the new service is ACM (AWS Certificate Manager).
  2. You can programmatically generate certificates, using either the AWS command-line tools or via their APIs (see below).
  3. Certificates generated via ACM are free of charge.
  4. The certificates will automatically renew each year.
  5. Wildcard certificates are also fully supported.

Important to Note

  1. You can only use the certificates within AWS and so cannot extract them to use with externally hosted web servers.
  2. Even though you can programmatically generate certificates, there is still a manual validation process that needs to be completed.
  3. This validation process will be triggered as part of the automatic annual renewal of certificates.
  4. When generating wildcard certificates (e.g. *.acme.example.com), you must also ensure that you include the non-wildcard (base) address as a Subject Alternative Name so that visitors to the site using only that base address (e.g. https://acme.example.com) will avoid security warnings.
  5. You do not appear to have control over the name/id of the generated certificate, so if you had devised some tooling around a naming convention for your previous certs (imported from another provider), the ACM certs may not work with this.

Examples

This command will generate a wildcard certificate in your default region using your default AWS profile (i.e. account):

$ aws acm request-certificate --domain-name '*.acme.example.com' --subject-alternative-names acme.example.com

This command shows how to specify the region and profile to be used for the new certificate:

$ aws acm request-certificate --profile default --region us-east-1 --domain-name '*.acme.example.com' --subject-alternative-names acme.example.com
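
And once a request has been submitted, you can keep an eye on it (for example, to see whether the manual validation step is still pending) using the list and describe commands; the certificate ARN below is just a placeholder:

$ aws acm list-certificates
$ aws acm describe-certificate --certificate-arn arn:aws:acm:us-east-1:123456789012:certificate/placeholder-id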

For more details about the AWS Certificate Manager service, visit https://aws.amazon.com/certificate-manager.