
How to plan a successful enrollment fair for the adoption of a new technology solution

30 Mar

If you are just joining us, be sure to check out Part 1, “Why hosting an enrollment fair is crucial to the adoption of a new technology solution”.

What goes into planning, executing, and ensuring a successful enrollment fair? There are a few key resources and processes that we need for this.

  1. Location
  2. Equipment
  3. Volunteers
  4. Schedule and Communication
  5. Script, Elevator speech, and tracker

A location must be secured for the enrollment fair. Ideally this is a public area where you can interact freely with your end users. The tables and volunteers should be highly visible.

You’ll need some equipment to make this all work. Obviously, you need tables to work from, chairs for the volunteers to rest, and banners and signs for wayfinding. Most importantly, you’ll need the proper devices or technology to demonstrate the solution. If the solution requires users to input their data to be enrolled, make sure the equipment supports this; otherwise you’ll have issues!

Volunteers drive this fair. Where do the volunteers come from? The organization itself! Since this is most likely a project, the project team members should volunteer, and they should recruit others from their respective teams and departments to help. The executive sponsors of the project will be highly visible and should be encouraged to participate.

You need to develop a schedule for the enrollment fairs that coincides with your go-live or deployment schedule. The enrollment fair comes first, then a deployment, then a fair, then another deployment, and so on. This plan needs to be communicated clearly and often!

Finally, your volunteers need a process to follow. This may be as simple as a script or an elevator speech. They will also need a way to track who comes to the enrollment fair; this is useful for gauging the impact of the fair and tracking how many people have been educated about the technology solution.
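As a minimal sketch of what that tracker could look like in practice, here is one possible approach using a simple CSV file. The field names (name, department, enrolled) are illustrative assumptions, not a prescribed format; capture whatever fields make sense for your fair.

```python
import csv
from pathlib import Path

def log_attendee(path, name, department, enrolled):
    """Append one enrollment-fair visitor to a CSV tracker file."""
    file = Path(path)
    is_new = not file.exists()
    with file.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            # Write the header row the first time the tracker is created.
            writer.writerow(["name", "department", "enrolled"])
        writer.writerow([name, department, "yes" if enrolled else "no"])

def summarize(path):
    """Return (total visitors, enrollments) to gauge the fair's impact."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    return len(rows), sum(1 for r in rows if r["enrolled"] == "yes")
```

Even a spreadsheet works; the point is that the count of visitors versus completed enrollments gives you a concrete measure to report back to your sponsors.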

If you’ve made it this far you probably have two opinions:

  • “This is a great idea and I agree that we should do this!”
  • “I don’t think this applies to me, this sounds like a lot of work, the end users will adapt anyways.”

If you have the first opinion then congratulations – you get it! You understand the importance of connecting with your end users. You understand how critical it is to align with your business and partner for results. If you have the second opinion… I’m sorry, but your project may be doomed to failure. However, if you do hold this second opinion and would like to discuss it, give me a call – I would be happy to talk through why this applies to your specific situation and why you need to do this to be successful.


Sr. Enterprise Virtualization Consultant


Richard Maloley is a Senior Consultant within the Enterprise Technology Services group at Open Systems Technologies. In this role, Richard focuses on managing large-scale transformational projects with a focus on end user computing technologies. Richard has been a consultant at OST for four years and worked in industry prior. With a passion for people, Richard approaches customers and projects with a people-first attitude in order to positively change and improve the relationships within an organization.


IT as a Competitive Advantage – The Real Imperative for Managed Services

3 Feb

A number of years ago I was interviewing for the Information Technology Manager position at a commercial furniture company in the Grand Rapids, Michigan area. At first glance, the role looked like a great fit for me: a solid company that wanted to grow and do great things in a niche market, and an immature information technology strategy.

As I was interviewing with the President of the company, he and I were sitting in his office discussing the role and our respective approaches to technology and leadership. They were a year or so into (and digging out from) the implementation of an Enterprise Resource Planning (ERP) system which had not gone well at all. They were struggling with communication to their market and independent rep sales force and they were experiencing consistent outages and failures of technology and systems. They really needed help from someone who could come in and get their arms around the situation and have fast, meaningful and positive impact.

We were having a great discussion until I asked him to whom the IT Manager would report, and his answer was the Controller.

At this news I told him that with all due respect in this case I did not feel like this would be a good position for me, and I did not want to waste any more of his time or that of the organization.

As you might guess, this declaration took him by surprise. To his credit (and my relief), he was intrigued and asked me to explain my position.

I told him that the only reason to implement information technology within an organization was to drive competitive advantage. The investment and mindshare demanded by a mature information technology strategy could only be justified if it drove and realized an improvement in competitive advantage for the organization. Some examples:

  • The ability to make “widgets” faster in order to get them to market sooner.
  • The ability to make “widgets” at a higher quality in order to increase customer satisfaction and drive market sales.
  • The ability to make “widgets” cheaper and therefore drive higher margins, leaving more capital to invest in the organization.
  • The ability to deliver accurate invoices quickly in order to realize improved cash flow and lower accounts receivable, thus yielding the opportunity for better financing and faster response to markets.
  • The ability to drive external branding, bringing more clients to the door and improving sales and results.
  • The ability to manage order-to-cash, allowing for efficiencies and higher profits and thereby further investment in the organization.
  • The ability to communicate quickly, efficiently and accurately with all links along the supply chain, from supplier through customer.

I went on to explain that the person tasked with leading the part of the organization charged with driving competitive advantage needed to clearly understand the strategies and goals of the organization at the highest level. That person needed to be part of the leadership team of the organization, reporting to the person tasked with developing and driving the strategy, with visibility into the goals and strategies. That person needed to be focused on and integrated with the entire organization, and should be viewed as part of the strategic leadership of the company.

Having the IT Manager report to the Controller was a sure way to stifle the creative leveraging of information technology for competitive advantage. The viewpoint of the Controller’s team is centered around cost efficiency, low risk projects, back office operations and financial pressures. Investment designed to drive competitive advantage could be stymied as being too expensive or risky before it ever reached the eyes of the staff or president. A focus on cost reduction would run counter to enhancing strategic direction.

We spent some time exploring this idea, and he promised to give it some thought. In the end, he offered me the position, reporting to him. I accepted and spent twelve great years providing leadership in various roles from IT Manager through Vice President of Technology and Customer Satisfaction.

Why do I tell this story? Because it sets the basis for a further discussion around focus within an IT organization and how that focus can be enhanced.

If we accept the premise that the purpose of information technology within an organization is to drive competitive advantage, how does that stance influence the structure and direction of the information technology team? My contention is that it should influence the team to focus on those things which provide the greatest competitive advantage and value to the business, and should offload those things which do not.

Consider the following graphic as an illustration of my point:

[Figure: IT initiatives plotted by domain business knowledge (vertical axis) and competitive advantage (horizontal axis)]

Plotting IT initiatives (projects or services) into the different quadrants, based upon the amount of domain business knowledge (vertical axis) required to successfully complete or implement the initiative and the amount of competitive advantage (horizontal axis) that the initiative will drive, yields a very clear indication of the initiatives on which the information technology team should be focused.

  1. Lower Left Quadrant – Requires a low level of knowledge of the business and does not drive competitive advantage.
  2. Upper Left Quadrant – Requires a high level of knowledge of the business and does not drive competitive advantage.
  3. Lower Right Quadrant – Does not require a high level of knowledge of the business and does drive competitive advantage.
  4. Upper Right Quadrant – Requires a high level of knowledge of the business and drives a great deal of competitive advantage.

Clearly the information technology team should be focusing on those initiatives which fall into the upper right quadrant. The amount of domain business knowledge required indicates that we need people who are already within and understand the business, and the high level of competitive advantage yielded shows the importance of the effort. Outside influencers to initiatives within this quadrant should be selected to fill gaps in the existing team and bring specialized knowledge and capabilities.

What of the other quadrants?

At the upper left we have initiatives which require a great deal of business domain knowledge yet yield little competitive advantage. These initiatives should be examined and questioned. Why are we doing this? Is there a different way that we should do this? If it is determined that an initiative should be pursued in spite of the low competitive advantage yield, then how can we do it without impacting our team of highly business knowledgeable people? In other words, with whom can we partner to make these projects happen?

At the lower right we have projects which require little in the way of domain business knowledge yet yield great competitive advantage. We will absolutely pursue these projects, so how can we do so without impacting our team of highly business knowledgeable people? These initiatives are ripe for being handed off completely to a trusted partner with only minimal involvement by existing IT staff, so who can we employ to make sure these initiatives get done right and in a timely manner?

And that brings us to the lower left quadrant: initiatives which require very little domain business knowledge and bring little competitive advantage. When initiatives fall into this quadrant yet we know they must be done, it only makes sense to pass them off completely to someone else. Why would we ever impact the time and efforts of our business-knowledgeable people when we can simply pay someone else to take care of things? This quadrant lends itself very clearly to the idea of outsourcing projects or services.
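As a purely illustrative sketch, the quadrant mapping above could be expressed as a small decision function. The 0–10 scoring scale and the midpoint threshold of 5 are assumptions made for the example, not part of the model itself:

```python
def classify_initiative(business_knowledge, competitive_advantage):
    """Map an IT initiative onto the quadrant model.

    Both inputs are scores from 0-10 (assumed scale); 5 is taken as
    the midpoint of each axis. Returns the quadrant and the sourcing
    approach suggested for it.
    """
    high_knowledge = business_knowledge >= 5
    high_advantage = competitive_advantage >= 5
    if high_knowledge and high_advantage:
        return "upper right: focus the internal team here"
    if high_knowledge:
        return "upper left: question the initiative; partner if it must be done"
    if high_advantage:
        return "lower right: hand off to a trusted partner, minimal IT involvement"
    return "lower left: outsource entirely (managed services)"
```

For example, an ERP customization scored 8 on business knowledge and 9 on competitive advantage lands in the upper right, while routine patch management scored low on both lands in the lower left.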

Gartner has consistently used the terms “Run”, “Grow” and “Transform” in recent years to try to help IT leaders to focus on what is important to their organization and teams. If we layer those terms over the graphic above, we get a representation of how those terms map very nicely onto our initiatives.

[Figure: Gartner’s “Run”, “Grow” and “Transform” layered over the quadrants]

“Outsourcing? We don’t need no stinkin’ outsourcing!”

Okay… so let’s address the elephant in the room, “outsourcing.” The term outsourcing brings a bad connotation to many people. They associate it with layoffs and people losing their jobs. They associate it with loss of control and endless fighting with suppliers to get things done. It is too bad that the term carries this impression, but it does – and rather than fight that uphill battle we will instead refer to the activities we ask a partner to perform and lead as Managed Services.

Managed Services can take many forms, but in the end it all comes down to paying another organization to take care of tasks in areas where we deem that we have insufficient capabilities or insufficient time to focus on them. In other words, we pay someone else to take on services, freeing our skilled, knowledgeable people to focus on things which bring competitive advantage. One advantage of having someone else perform the services which do not require a great deal of domain business knowledge is that the attention they can afford to pay to the details often lets them perform the services more efficiently and with higher quality than we can ourselves. This means they will actually do a better job, and in the long run it will cost us less to accomplish the initiatives.

There are other advantages to using Managed Services. Some are financial: certain organizations are better served by spending on operational expenses (OPEX) rather than capital expenses (CAPEX). Some are technical: a team which manages many different clients sees many different situations and learns how to handle and prevent them more efficiently. In other words, the partner providing the services gets to learn on other clients’ environments and bring those advantages to us. From a management perspective, it is also fairly easy to set clear, documented and specific goals for a managed service provider, then let them go do the work and meet those goals while IT leadership focuses on the initiatives in the upper right quadrant.

Here is the call to action for today’s IT leadership: focus your teams of talented, business-knowledgeable technologists on initiatives which bring great competitive advantage and require their skills and capabilities. Identify those initiatives which do not require your talented, business-knowledgeable technologists and hand them off in whole to someone else. Your organization, your team and you will all be the better for it!

_ _ _

John Vancil


John Vancil is a twenty-eight-year veteran of the Information Technology field, currently holding the position of Director of Professional Services for Open Systems Technologies (OST) in Grand Rapids, Michigan. During his career, John has held numerous development, support, management and staff-level positions with companies ranging from the enterprise (Electronic Data Systems, Baan) to the SMB space (Nucraft Furniture, OST). Today John is responsible for a $29 million services operation which encompasses Data Center Solutions, Application Development, Data Analytics, Design, ERP and Advisory Services, Security, and Managed Services. John shares his life with wife Amy, daughter Catherine and Lambeau, the world’s most exuberant Golden Retriever. When he is not serving the OST team, John likes to golf, fly-fish, compose and perform music, and hang out with the family.

Legacy Systems: The (Often) Necessary Evil

15 Jun

Many IT environments include one or more systems that are really far behind the times – ones you might be surprised haven’t died of natural causes yet. You know the one I’m talking about: tucked deep away in the back of a closet, both the file system and the original installer now gathering cobwebs. Generally, we at OST see the most legacy systems in healthcare networks – but they can be anywhere.

From a security standpoint, these kinds of hosts are a nightmare. Often running an unsupported operating system (already a big red mark), these machines are generally not well patched and may be running easily exploitable applications. A thorn in the network administrator’s side, they consistently show up in the results of IT security assessments as high risk and requiring attention.

If you don’t work in IT, you might be wondering why anyone would allow such an obvious security hole to exist in an environment. The answer is quite simple. Most administrators and IT managers are aware of these systems, but their hands are tied – usually for one of two reasons:

  1. It’s a vendor-managed host (for example, a system that operates radiology equipment) and the vendor refuses to upgrade or patch, for a variety of reasons.
  2. It’s an internally managed host, but runs a mission-critical application or service that requires an unsupported OS.

You can see how environments that rely heavily on third party equipment and applications could easily find themselves unable to secure the hardware that sits on their network.

This can lead to serious problems if the organization ever finds itself on the receiving end of a malicious hacking attack. Due to security flaws noted above, legacy systems are often easily taken advantage of and leveraged during an attack. Depending on how the system is positioned in the environment and what kind of data it contains, this can lead to the loss of sensitive data (such as social security numbers, medical records, or other personally identifiable information) or perhaps a full breach of the organization’s domain.

To help illustrate, consider this analogy: if you lived in a bad neighborhood, you’d most certainly want all of your doors locked as often as possible. Well, for starters – the Internet is most definitely a bad neighborhood, and having insecure legacy systems in your environment would be akin to leaving a side window not only unlocked, but also open. Did we mention that your in-laws own the window, and have lodged it open so that it will not shut? Yeah, it’s a lot like that!

How should this predicament be approached? Here are our suggestions on how to mitigate the risk of having insecure legacy systems in a computing environment.

  1. Re-assess if you actually need the system. Is it truly critical to your operations? Sometimes the simplest solution to a security problem is to just power off and retire the offending host. If one person uses it once per quarter to run a single report, it’s probably not mission critical and the risk it poses is greater than the service it offers. If you re-assess and do find that you need it, keep reading.
  2. If it’s a vendor-managed host, reach out to the vendor directly and ask for a resolution. If you’re trying to meet compliance standards (e.g., GLBA, HIPAA), be sure to get an official response from the vendor for your records.
  3. If it’s a dated application and your organization simply hasn’t ponied up the dough to purchase the new, secure version, build that into the budget now. If this is the situation you find yourself in, you’re gravely underestimating the cost of a data breach. Go ahead, Google the average cost of a data breach. We dare you.
  4. If you find yourself in the worst-case scenario, where the vendor will not budge, the application has no patched version and the system genuinely is mission critical, all is not lost. Here’s our recommended plan of attack:

Isolate, isolate, isolate.

Going back to the open-window analogy, the next logical step (if you can’t shut it) is to lock the room the window is in and take all the important stuff out of there. Make that system invisible to everyone on the network except those who need it. Put it on its own VLAN and permit access with great discretion. Harden the system with firewalling and an endpoint protection solution that has IPS/IDS modules. Remove ALL unnecessary applications and services from the host. Patch it as much as you can. We’ve advised clients, when it made sense, to literally unplug the computer from the network and require that it be accessed only physically.

Build a Plan for Moving Forward

Get it into your FY plan to retire that system, if you can. If you can’t, make sure that the Board of Directors for your organization is aware of the risk and chooses to accept that risk.
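The triage steps above (re-assess, go to the vendor, budget for the upgrade, then isolate as a last resort) can be sketched as a simple decision helper. The parameter names and the wording of the recommendations are illustrative assumptions, not a formal OST methodology:

```python
def triage_legacy_host(mission_critical, vendor_managed,
                       vendor_will_patch, upgrade_available):
    """Recommend an action for an insecure legacy host, following the
    re-assess -> vendor -> budget -> isolate decision order."""
    if not mission_critical:
        # Step 1: the simplest fix is to retire the host entirely.
        return "retire: power off and remove the host"
    if vendor_managed:
        if vendor_will_patch:
            # Step 2: get the fix, and an official response for compliance records.
            return "remediate: obtain the vendor fix and document it"
        # Vendor won't budge: fall through to isolation.
    elif upgrade_available:
        # Step 3: a supported version exists; budget for it now.
        return "budget: purchase the supported version"
    # Step 4: worst case - isolate and document the accepted risk.
    return "isolate: own VLAN, firewall, IPS/IDS, strip services, board sign-off"
```

Running this for each flagged host in an assessment produces a consistent, defensible disposition for every legacy system on the network.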

A comprehensive IT security assessment is the best way to find insecure legacy systems. As part of an assessment, a penetration test is generally conducted to help identify the severity of security holes that may exist on these machines.

The OST Security Practice has conducted over 1,000 security assessments for clients in a wide variety of industries and is exceedingly proficient at identifying vulnerabilities on legacy systems and helping organizations mitigate the risks they pose.

For more information on IT security assessment services, please contact dkilpatrick@ostusa.com.


W. Scott Montgomery


W. Scott Montgomery joined OST in the spring of 2009 as the Manager of the OST Security Practice. Scott joined OST with over 30 years of IT and IT Security related experience. Scott has personally performed more than 1,000 Security Assessments for several hundred organizations. Using a proprietary and unique assessment approach, developed by Scott and used since 1998, the OST Security Team has the ability to gather, analyze and assess the security of any organization.


Data can help us know what not to do…

9 Feb

So much of analytics and healthcare IT becomes focused on understanding what we should do that we miss that data and analysis may also identify things we should not do.

At my physical a few days ago, my doctor spent a fair amount of time talking about the routine tests that I did not need to get. He cited how certain tests and “standard procedures” may actually cause more harm than good.

We tend to think of tests as innocuous, but based upon my genomic information (which I shared with my doctor), my risk factors for some conditions were so low that certain tests were not deemed necessary. In other areas, such as my 4.8x risk factor for certain heart conditions, I will likely be more focused on aspects of heart health (and yes, I just finished doing my cardio today).

The value of personalized medicine, clinical informatics and data-driven medicine is that we will have the facts we need to apply the tools of modern medicine in a more focused manner, and in doing so fulfill the ancient tradition of “first, do no harm”.

This article (via The New York Times) does a great job of describing the pros and cons of getting treated, and illuminates how we would approach our treatment decisions differently if we thought about the combined benefits and risks.


Jim VanderMey, Chief Innovation Officer at OST

Jim VanderMey has served as VP of Technical Operations, CTO and now Chief Innovation Officer for OST. Jim has provided the technical leadership and product strategic planning for the organization since the very beginning. Jim is a technology visionary who sets the long and short-term direction for OST. He specializes in seeing the “big picture” of technology, industry trends and the business objectives supported by IT. As OST has gained an international reputation, Jim has taught and spoken at conferences in Europe, Japan, and throughout the US. Lastly, we must confess that some of OST’s peculiar culture is a direct derivation of Jim’s unorthodox style.