September 4, 2012 By Dan Lohrmann
I was surfing the web a few weeks back looking for the best airfare deals from Michigan to various California airports. I checked a variety of travel websites as well as various airport combinations and different days of the week to find the right combination. After more than a week of comparing airfares, airlines and a host of other factors, I finally made my decision and purchased tickets.
But along the way, I noticed ads showing up all over the place asking me to come back to those travel websites. Whether I was checking baseball scores at ESPN, doing a Google Maps search for driving directions or researching a cybersecurity article at various tech websites, my browser beckoned me to return and buy plane tickets. The advertisements came in different colors, shapes and sizes, but they were all trying to get my attention with pointed questions and pitches like:
“Do you still want to go to San Diego?”
“Looking for airfares to Orange County?”
“Special deal, today only, on ‘xyz’ airline!”
“Airfares cut to California.”
It was a bit weird. Were they trying to read my mind or just watching my actions (via cookies)?
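For the technically curious, the mechanism is less mind reading than bookkeeping. Here is a minimal sketch in Python – with invented names, not any real ad network’s code – of how a third-party cookie lets an ad network recognize the same browser across unrelated sites:

```python
# Toy simulation of cookie-based ad retargeting. All names are invented.

class AdNetwork:
    """A third-party network whose tracking pixel is embedded on many sites."""

    def __init__(self):
        self.profiles = {}  # cookie_id -> list of pages seen with that cookie

    def tracking_pixel(self, cookie_id, page_context):
        # Fires every time a page embedding the network's pixel loads.
        self.profiles.setdefault(cookie_id, []).append(page_context)

    def choose_ad(self, cookie_id):
        # Pick an ad based on what this cookie was seen doing elsewhere.
        interests = self.profiles.get(cookie_id, [])
        if any("airfare" in page for page in interests):
            return "Do you still want to go to San Diego?"
        return "generic banner ad"

network = AdNetwork()
cookie = "browser-cookie-1234"  # the same cookie follows the browser around

# The user searches airfares on a travel site carrying the network's pixel...
network.tracking_pixel(cookie, "airfare search: Michigan to San Diego")

# ...then checks baseball scores on an unrelated site carrying the same pixel.
print(network.choose_ad(cookie))  # -> "Do you still want to go to San Diego?"
```

Multiply that by every site carrying the same network’s ads, and you get banners that seem to read your mind.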
Perhaps you are thinking – so what? Amazon has been doing this for years.
But the tracking and targeting of online users is growing. I’ve been watching this trend closely for the past four-plus years, since I wrote the book Virtual Integrity, in which I describe various coming cyberspace scenarios in chapters nine and ten. I find myself agreeing with the privacy community that this trend of “tempting the click” is evolving in ways that raise many concerns.
New Twitter Targeted Ads
The Wall Street Journal recently reported that Twitter will now allow targeted ads based upon users’ interests. Here’s an excerpt of that article:
“Twitter said it will let advertisers push their marketing messages to users based on indications of what they like. A company that sells a sports drink, for example, could elect to show paid ads to Twitter users who are fans of professional football.
Twitter said it will identify a football fan, say, by analyzing whether the Twitter user ‘follows’ many Twitter accounts from football players or commentators, or because the user recirculates Twitter messages from those accounts. Twitter said it doesn't target ads based on the text of a user's Twitter posts, known as tweets.”
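To make the idea concrete, here is a toy sketch in Python of the kind of inference the article describes – classifying a user as a football fan from whom they follow and retweet, not from the text of their tweets. The account names and threshold are my own invented examples, not Twitter’s actual algorithm:

```python
# Invented example of interest inference from the follow graph.

FOOTBALL_ACCOUNTS = {"@nfl_player", "@football_pundit", "@gridiron_team"}

def is_football_fan(follows, retweeted_from, threshold=2):
    """Count follow/retweet signals that point at football accounts."""
    signals = (follows | retweeted_from) & FOOTBALL_ACCOUNTS
    return len(signals) >= threshold

user_follows = {"@nfl_player", "@gridiron_team", "@cooking_show"}
user_retweets = {"@football_pundit"}

if is_football_fan(user_follows, user_retweets):
    print("Eligible for the sports-drink ad.")  # tweet text never examined
```

The interest signal comes entirely from the follow graph, which is why no tweet text ever needs to be read.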
Twitter aside, cookies remain the workhorse of this kind of targeting. In my search for plane tickets, and in other purchases I have researched online, the websites required that cookies be enabled to work through the entire process.
While this type of ad targeting does not bother me enough to stop using various online services, I know it bothers quite a few of my friends and relatives much more. They see this as “big brother,” especially if they are not specifically asked whether they are OK with the practice. And yes, I’d rather be asked than just tracked. Where is this going next?
One friend said he doesn’t mind if Amazon suggests books he may like when he connects to its portal, since the suggestions stay within that website and he trusts Amazon with certain information anyway – like his “wish list.” However, he does not like this targeting of ads when it spreads across the Internet to companies and websites with which he has no relationship.
Are Government Sponsored Targeted Ads Next?
This brings me to the main point of this blog. Where is this headed within government technology circles? Will federal, state and local governments start to target ads to users in the same way that private companies do now? Will they use this new Twitter feature to get you to renew your driver’s license? How about an ad to come back and enjoy the same national park that you visited last year – as you check football scores?
Advocates of this approach will no doubt say that people already get emails that do this very same thing, if they “opt in” to government listservs on various topics. This may be true, but will the targeting be “opt-in” or “opt-out”? Will the new functionality and the ever-growing list of Google AdWords services become too much for governments to resist?
There are certainly some wonderful opportunities to help consumers with this new capability, but others will see this as government “Big Brother.”
Will Big Data Lead to More Ads?
Back in March 2012, Government Computer News ran an important article entitled “Big data’s target: Users.” Here’s an excerpt:
“Government is amassing data at an accelerating rate, and it even has tools available to process and analyze it. But to get real value from all that information, the government must put it into the hands of users, the General Services Administration’s Dave McClure told a Washington audience.”
The question becomes, how will this government data be used, assuming good intentions?
Will we be surfing government portals five years from now and receiving targeted ads about driving schools if we have a bad driving record? Will late taxes prompt an ad from H&R Block? Or… what?
I don’t know the answers, but I’d love to hear your perspectives. There is no doubt that the technology to target users is now available. The private sector is “all-in” when it comes to using this technology.
Will governments be next? I just don’t know how this can be done without many complaints. In fact, I think it should be an opt-in approach and not the default. There could also be incentives for opting in (like cheaper campground reservations).
What are your thoughts?
August 4, 2012 By Dan Lohrmann
Suddenly, without warning, no power. The blackout spreads. The grid goes down. Six hundred million residents – nearly one-tenth of the world’s population – left in the dark in the summer heat for a second day. That was India this past week.
Here’s one news report:
“The first power grid collapse, on Monday, was the country's worst blackout in a decade. It affected seven states in northern India that are home to more than 350 million people.
But Tuesday's failure was even larger, hitting eastern and northeastern areas as well.
Both blackouts cut power in the Indian capital, New Delhi.”
In addition to the many inconveniences faced by millions of families, the emergency had ripple effects. The Washington Post reported:
“Tuesday’s blackout, which hit the northern and eastern parts of the country, brought more than 500 trains screeching to a halt, left thousands of passengers stuck for nearly an hour inside the capital’s Metro line and trapped more than 200 miners underground.
There was gridlock on many streets in the capital as traffic lights stopped working. Bank ATMs also failed, but airports and major industries were unaffected, switching instantly to backup generators in a country used to power outages.”
Remember the Past
“Fifty years ago, most Indians had no electricity. Power belonged — politically and financially — to the rich. As the young wife of a modestly paid U.S. civil servant, I, most improbably, was rich in comparison to the bulk of the Indian population.
And yet, most evenings, standing around at receptions or visiting friends, we'd hear a low groan, as the lights and air conditioning flickered, then died. We just chatted away to the faint clink of ice melting in our drinks. Chatting, with trickles of sweat running down our faces and fancy silk dresses and saris. We joked about the shoebox-shaped power station in town. How the guy in charge of it probably had to walk from one end to the other to flick some switch to keep the power on….”
Beyond Cause – A New Crisis?
Whenever an emergency like this occurs, the finger-pointing almost always begins and everyone wants to know the specific cause(s) of the incident.
And yet, many infrastructure experts are calling this emergency a potential defining moment or possible turning point for India. Business Week described the problems this way:
“The proximate cause of a power outage on this scale almost always seems trivial,” said Michael Parker, an analyst in Hong Kong with Sanford C. Bernstein & Co. “The blackout highlights the big underlying issue India faces in terms of infrastructure quality. To keep the lights on, India needs to add power capacity, build robust transmission and distribution systems, ensure fuel supply and transport and reform power pricing. Most of that is expensive.”
The article goes on to list the many underlying causes of the power problems and the unique challenges that India faces. Many experts say that a similar outage in the USA is very unlikely:
“Since the Northeast blackout in 2003 — the largest in the U.S., which affected 55 million — 16,000 miles of new transmission lines have been added to the grid.
And even though some lines in the Northeast are more than 70 years old, Wellinghoff said that the chances of a blackout like India’s were very low.”
However, the same article goes on to quote Richard Clarke, a former national security adviser, who said that the US grid is still vulnerable to a cyberattack.
“The U.S. power grid is extremely vulnerable to cyberattack. The government is aware of that. Recently the government held a White House level cyberexercise in which the scenario was a cyberterrorist attack that took down the power grid.”
Lessons for Us?
The Christian Science Monitor wasted no time in proclaiming the moral of the Indian Power Outage story for countries around the world:
“Countries that rely on a national infrastructure for basic needs – from electricity to roads to food – can take a lesson from India’s massive power outages. A big, complex system must have resiliency built into it from scratch – all the better to absorb disruption and recover from hardship…
India is still probing the cause of its blackouts – perhaps not enough water in hydroelectric dams or state governments taking more than their share of power. Whatever the reason, such a disruption in a nation’s basic supplies would occur less often if such systems were decentralized into networks of local, smaller suppliers – similar to how the Internet is structured.
Resiliency in systems is much like in people. The ability to bounce back depends on having deep connections to others, such as family, friends, or those who hold similar moral views. Collaboration with a diversity of sources creates a better capacity to absorb shocks….”
The article goes further and suggests that smaller banks might have prevented the recent financial crisis in the USA.
Other articles describing lessons from the energy crisis suggested that “India’s deep dependence on coal” was a big part of the problem. Climate groups called for new investments in renewable, clean energy. And others pointed to the recent storms in the Mid-Atlantic USA and the power outages in India as proof that natural causes are more likely than cyberattacks to cause major disruptions.
Never Say Never
What lessons do I take from this series of incidents? Here are a few thoughts:
1) Outages will occur, so hope for the best and (truly) prepare for the worst. I never expected the 2003 Northeast power outage to hit Michigan – but it did. I spent four 18-hour days at our State Emergency Operations Center (SEOC) recovering from the outage, and there were many unexpected situations.
2) We need to be doing scenario-based emergency planning for “all hazards” (whether cybersecurity issues, weather-related storms or system failures). Ask: what if?
3) Think about the dominoes that may fall. When the power goes off, what happens next? Do you have generators? How will other sectors be impacted? As the article above describes, are critical infrastructures resilient?
4) Understand your partnerships and the keys to your government’s restoration – no matter what the problem or scope of the outage.
5) Never, Never Say Never – I get wary when anyone says, “It can’t happen to us!” These global incidents always remind me that “it” absolutely can happen here. Pride comes before a fall. No doubt, the odds may be lower, the conditions different, the controls better and the infrastructure more dependable. Nevertheless, we need to be ready, just in case. Are you really prepared?
What are your thoughts on India’s recent power outages?
July 14, 2012 By Dan Lohrmann
e-Discovery, information management and the legal aspects associated with enterprise data are hot topics for technology leaders to address with their business customers. But what information governance strategies are legally defensible? What compliance approaches work best in the long run? How can enterprises reduce risk when they save or delete data?
To answer these questions, along with several related security topics, I recently interviewed Jim McGann, VP of Marketing for Index Engines, a leading electronic discovery provider based in New Jersey.
Dan: Can you briefly describe your background and overall experiences dealing with e-Discovery?
Jim: I have spent more than 20 years specializing in information management, frequently writing and speaking on topics that affect the legal and compliance handling of corporate data, and I have seen some paradigm changes in the way that organizations regulate and manage their data. In the last five years I have seen organizations shift toward cleaning up the “data lake” they have generated and becoming more proactive in managing their data assets. It is important to defensibly delete data that no longer has business value and to archive what is needed for legal purposes.
Within the first 15 years of my career, I worked with organizations on deploying technology aimed at generating information faster and storing it in large volumes. Back then, organizations could save anything and easily hide content that could become a liability, but that won’t work these days. Lawyers and judges are more tech savvy, and they won’t accept excuses about complexity and cost anymore.
Dan: What is Defensible Deletion and why is it important?
Jim: Defensible Deletion is a process within an overall information governance strategy that applies value-based decisions to an organization’s content. It aims to separate content that is useful to the agency from content that is not. This methodology guides the disposal of valueless content to meet business, legal and regulatory requirements.
Dan: How does Defensible Deletion control long term risk and liabilities?
Jim: Implementing a defensible deletion strategy and methodology not only mitigates long-term risks and liabilities related to enterprise data assets, but also saves time and expense in supporting ongoing litigation and discovery efforts, while reducing the budget spent storing and managing content that is no longer useful. A large volume of “unknown” data – such as files and email from employees who left the organization years ago, or aged data that is no longer managed by the user who owns it – can easily be purged with no legal or regulatory implications.
Dan: How does Defensible Deletion help with always changing regulatory and compliance policies?
Jim: Government agencies are now facing new and complex information management challenges. Not only legal issues but also regulatory requirements such as the Federal Records Act (FRA), the Federal Data Center Consolidation Initiative (FDCCI) and the Freedom of Information Act (FOIA) are causing issues for every information management executive in the industry. Managing these regulations while also supporting legal requirements is complex, especially when the bulk of the data sits on networks or is hidden in legacy backup tape archives, which are expensive and time consuming to rummage through.
Managing data according to ever-changing regulatory and compliance policies is difficult. Enormous volumes of sensitive files and email are scattered about every organization. This data flows through massive networks and is cloistered away in proprietary repositories and archives, which makes access even more of a challenge. As a result, information management strategies are nearly impossible to design and deploy. Understanding and profiling this data is essential and will drive efficient management of the content.
Dan: What are the most common and high risk types of content repositories?
Jim: Breaking down the corporate content environment by repository type simplifies the plan of attack for a defensible deletion methodology. Data repositories can be desktops, network servers, email servers and even legacy backup tapes. Managing each of these repositories presents a significant challenge, especially if you need to manage all of them at once. However, by segmenting the enterprise content environment and prioritizing the data that represents the most risk and liability to the company, the organization can create tiered classifications based on storage capacity and presumed risk. The highest-risk data environments are typically email servers and legacy backup tapes. Email is the most common source of evidence produced for litigation and regulatory requests, and legacy backup tapes are a snapshot of everything, including email and files. This approach can make a monumental task much more manageable.
Dan: What is Data Mapping and how can governments use it for tiered storage via data classification?
Jim: Creating a data map of content will provide a greater understanding of what data exists and where it is located. A data map can provide information such as the age of the data, last accessed or modified date, owner, location, email sender/receiver and even sensitive keywords. A data map will deliver the knowledge required to make “keep or delete” decisions for files and email. An actionable data map can then help you execute on those decisions: defensibly delete what is no longer required and archive what must be kept. Data mapping can also be utilized to determine how best to store and manage data assets. For example, a data map can serve as a cloud on-ramp, helping you find content according to policy and migrate it to cloud storage.
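To make the concept concrete, here is a minimal sketch in Python of what building a basic data map might look like – one metadata record per file, capturing the kinds of fields Jim mentions. The file share path is a made-up example, and this is my illustration, not Index Engines’ product:

```python
import os
from datetime import datetime

def build_data_map(root):
    """Walk a file share and collect keep-or-delete decision metadata."""
    data_map = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            info = os.stat(path)
            data_map.append({
                "path": path,                              # location
                "owner_uid": info.st_uid,                  # owner (numeric UID)
                "size_bytes": info.st_size,
                "last_accessed": datetime.fromtimestamp(info.st_atime),
                "last_modified": datetime.fromtimestamp(info.st_mtime),
            })
    return data_map

for record in build_data_map("/shared/departmental-files"):  # hypothetical share
    print(record["last_modified"].year, record["path"])
```

A real e-discovery product would add content indexing and email-level fields, but even this file-system view is enough to start sorting data by age and owner.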
Dan: What one action can CIOs and CISOs take that would reduce enterprise risk in this area?
Jim: One action a CIO or CISO can take to reduce enterprise risk is to develop a plan that is achievable and measurable. The plan should have small-scale, incrementally applied projects that allow the organization to get started. The biggest risk information governance programs face is getting overwhelmed with the process and methodology. Once the organization has developed a strong understanding of what information it has and where that information is stored, it can then develop an overall information governance strategy that defines what a reasonable deletion methodology should look like.
My advice is to start small and work up to a master plan. A place to start could be purging ex-employee data, or determining what data has not been accessed in five years and could be migrated to less expensive storage such as the cloud, or eventually purged. Getting started is the biggest challenge in a defensible deletion program; however, even a small start positively impacts the organization’s risk and expenses.
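To illustrate Jim’s “start small” advice, here is a hypothetical first pass in Python over a data map like the one sketched above – flagging ex-employee data for purging and stale data for cheaper storage. The records, usernames and paths are all invented; this is my sketch, not an Index Engines workflow:

```python
from datetime import datetime, timedelta

# Invented sample records; a real data map would come from a scan.
data_map = [
    {"path": "/shared/reports/fy2005-summary.xls", "owner": "jsmith",
     "last_accessed": datetime.now() - timedelta(days=6 * 365)},
    {"path": "/shared/projects/current-plan.doc", "owner": "mlee",
     "last_accessed": datetime.now() - timedelta(days=30)},
]

departed_employees = {"jsmith"}                      # invented username
stale_cutoff = datetime.now() - timedelta(days=5 * 365)

for record in data_map:
    if record["owner"] in departed_employees:
        print("purge candidate (ex-employee):", record["path"])
    elif record["last_accessed"] < stale_cutoff:
        print("migrate to cheaper storage:", record["path"])
    else:
        print("keep in place:", record["path"])
```

The point is that the first project can be one simple, measurable rule – not an enterprise-wide governance rollout.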
Dan: Thanks, Jim, for sharing your insights related to managing enterprise data. For more information, you can contact Jim at: firstname.lastname@example.org. Or feel free to leave a question or comment below.
April 14, 2012 By Dan Lohrmann
Everyone is talking about the sinking of the Titanic – and they should be. The people, the stories, the technology, and especially the tragic ending, are legendary. It has been one hundred years since she sank. Books have been written, movies made – and remade in 3D. But somehow, we can’t seem to forget what happened or miss a chance to hear the remarkable, mysterious story again.
Numerous theories still abound analyzing the never-ending question: “Why did it happen?” The very word “Titanic” has become synonymous with words like enormous, monumental, gigantic, massive, huge and immense. But most of us aren’t picturing a monumental home run or an enormous successful product launch. No, the word Titanic has also been seared into our brains as a massive failure.
Here are some Titanic facts: It took three years to build her, she would cost about $400 million in today’s dollars, and she was thought to be unsinkable. For the next week, you can see the passenger list free of charge at Findmypast.com. This list records the names, port of departure, occupation, nationality, age, class of travel, destination and country of intended residence of those who sailed from Southampton, England, and Queenstown, Ireland, on April 10 and 11, 1912.
Before her maiden voyage, people called her a crowning achievement of human ingenuity: a living, breathing example of man conquering nature, a model of how things could be done and perhaps the finest high-tech marvel of the (relatively new) 20th century. The ship inspired hope and awe. And yet, somehow, everything went horribly wrong.
While it may seem abrupt to jump straight to “lessons learned,” I believe it is important for everyone living one hundred years later to ponder the question: Are we susceptible to the same problems that led to the sinking of the Titanic? I think the answer is yes.
Other tragic events, such as the Challenger disaster in 1986 (part of NASA’s space shuttle program), have most of the same scary characteristics as the Titanic disaster. Every time I watch the actual video of the Challenger disaster on CNN, I somehow hope for a different ending.
The horrible events that took place on September 11, 2001, also have many of these same elements. Yes, terrorists deliberately caused those planes to fly into the World Trade Center towers, and there are other differences. And yet, these historic events must cause us to stop and rethink our technological and even security approaches, or we will certainly fall prey to the same mistakes again.
No, these five pragmatic lessons are not new. In fact, several go back to Biblical times. But we humans constantly seem to forget them. Yes, these are also relevant in lesser situations that may not reach today’s global news networks.
Please understand that I am an optimist; I’ve even been called a technophile by critics. Nevertheless, we need to learn and apply these lessons for small, medium and large technology projects at work and at home.
Five Lessons for Technology and Security Professionals from the Sinking of the Titanic:
1) Pride Comes Before a Fall – Numerous experts start with overconfidence when they discuss the “unsinkable” Titanic. One author describes Titanic Arrogance. Here’s an excerpt:
The first few years of the 20th century, when the Titanic was built, were full of brash optimism based on remarkable advances in science and technology. It was a time of peace, progress and endless promise. Things were getting bigger, better and faster—the age more opulent and prosperous. “What could possibly stop the engines of progress or the captains of industry at their controls?” the book’s prologue asks.
The Titanic thus embodied a spirit of invulnerability characteristic of the times. In fact, when at the beginning of her maiden voyage one of the deck hands was asked whether the ship really was unsinkable, he replied, “God Himself could not sink this ship!”
Wow – sounds like today! Our immediate reaction should be “watch out.” Whether in sports, in politics or in technology adoption, we need to be wary of claims of invincibility. Things can, and certainly will, go wrong. The 9/11 attacks used rather simple means to overcome complex technology defenses. We need to hope for the best and plan for the worst. As described in other blogs, humility needs to be at the top of the list of lessons learned for security pros.
2) Don’t forget the people and the process – We have heard it hundreds of times: successful projects require well-thought-out plans for the people, process and technology areas. And yet, we always focus on the technology and underestimate the people and process aspects of the situation.
In the case of the Titanic, numerous sources insist that the real mistakes were made by the crew after the Titanic hit the iceberg. In fact, one author says the Titanic sank because of a steering mistake. We make the same mistakes today, focusing the majority of our efforts on “new black boxes” while ignoring or downplaying the people and process side of the equation.
3) Thinking Our Invention is “Too Big to Fail” – As noted before, experts felt the Titanic was unstoppable – but they underestimated the power of an iceberg. Here’s a quote from Michael Kaplan:
The 1,517 people who drowned in the Titanic disaster did not die in vain. In inquiries on both sides of the Atlantic and new international agreements for maritime safety, we began to make the rules necessary for a bigger and better-connected world. We now admit that scaling up size increases complexity; the larger systems become, the greater the likelihood of unseen contingencies. Every project risks its iceberg. Nothing is too big to fail; instead, the bigger it is, the more insidious and thus devastating its modes of failure must be.
Recently, analysts have even been using this “too big to fail” warning to describe our perspective on the US or world economies. I’ve also heard experts discuss similar questions related to the Internet, cloud computing, certain companies or certain local projects that seemed foolproof. Buyer beware.
4) Health and Safety Comes First – I find it interesting to contrast the beginning versus the ending of the Titanic movie. One cannot fail to be impressed with the beauty and wealth displayed on the ship at the start of the voyage, but none of that mattered when the ship was sinking.
While the list of passengers and their stories is fascinating, the lessons for us revolve around the battle for hearts and minds of the people during emergencies. How well have we planned for various scenarios? What is most important if (and when) things go wrong? Is the focus of our product on the bells and whistles or on what truly matters? How do we communicate? Bottom line: are we prepared?
5) True stories are always the most intriguing, relevant and effective for our customers – Experts are divided on why we are so fascinated by the Titanic stories, but one thing is clear: it really happened. We long to hear about the families, the fortunes lost, those who helped and those who didn’t. There were survivors, and brave men and women gave up their lives for others.
As we try to get the attention of our customers, stakeholders and executives today, we need to ask more questions and learn more about what has really happened in our fields of technology and security expertise. What “real life” experiences have others had? How do we benchmark against others?
Even when we ask “what if” a particular cyberattack occurred, we tend to talk about the scenario in terms that people understand. For example, we say there is a coming “cyber Pearl Harbor” or an “electronic 9/11.” Why? Because Pearl Harbor and 9/11 really happened. People can relate to those historic events.
As a security professional, I find that most customers want to hear true stories from other places and how those facts relate to them. We can learn from a historic event that happened 100 years ago; compelling stories that are true can last more than a lifetime. But have we forgotten what the survivors learned?
October 22, 2011 By Dan Lohrmann
According to Politico and other sources, Mark Weatherford has been named as the new deputy undersecretary for cybersecurity at the Department of Homeland Security (DHS). Mark will fill the role formerly held by Philip Reitinger, who resigned in May.
Politico wrote: “Weatherford will manage the department’s cybersecurity operations, which include overseeing the agency's partnership with the private sector and security of the dot-gov network. The Obama administration gave DHS an elevated role in managing the federal government’s cyber defenses in its legislative proposal released this spring, making Weatherford a key player for the government.”
Weatherford is currently the Chief Security Officer at the North American Electric Reliability Corporation (NERC) and will begin his new role with DHS in mid-November.
Mark is well known among state and local government leaders for many reasons. He was the Chief Information Security Officer (CISO) in both Colorado and California. Mark was also a regular security blogger and columnist for Government Technology and Public CIO magazines. Some of his posts can be found here.
Mark has been a leader in the wider security community for years, with a wealth of knowledge and expertise. He was active in several cross-government organizations, including the Multi-State Information Sharing & Analysis Center (MS-ISAC) and the National Association of State Chief Information Officers (NASCIO).
In my opinion, Mark is an excellent choice by DHS. I think he will do a great job and be a respected friend and colleague to state and local government technology and security leaders around the country. He understands our needs and vulnerabilities, and Mark also grasps the global cybersecurity problems facing America.
More than that, Mark is a thoughtful executive who has both military service and hands-on experience dealing with every aspect of our cyber ecosystem. He “gets it” when it comes to addressing the vast task in front of him, including the training needs and culture change that is required for governments, private sector businesses and even families to succeed online. Mark’s an enabler who wants to get meaningful projects done to protect our sensitive data and critical infrastructure from attacks.
While this endorsement may sound too positive with no negatives, I have no hesitation in backing Mark Weatherford. No doubt, he has a very tough road ahead. There will be new threats, politics and unexpected challenges from all over the place. How long he stays in this role may be determined by events beyond his control. Nevertheless, I can think of no one better to help the cyber community right now.
I'm glad Mark took the job, and I wish him all the best. I am confident that he is the right person for this job as we head into 2012.
What are your thoughts on this appointment?