Intelligence, Artificial and Existential

"Not to Be or Not to Be?" artwork by Shuwit, http://shuwit.deviantart.com/

I just published a short piece over at SIGNAL Magazine on an increasingly public debate over artificial intelligence, which the editor gave a great Shakespearean title echoing Hamlet’s timeless question “To be, or not to be”:

AI or Not AI?

Caution tempers opportunity as experts ponder artificial intelligence

May 6, 2015 – Artificial intelligence, or AI, has been on my mind recently—and yes, that’s something of a sideways pun. But it’s worth exploring the phrase from another double-entendre standpoint by asking whether the nation’s intelligence professionals are paying enough attention to AI.

In the past week I have seen two brand-new movies with AI at their center: the big-budget sequel Avengers: Age of Ultron (I give it one star, for CGI alone), and the more artistically minded Ex Machina (three stars, for its lyrical dialogue expressed in a long-running Turing Test of sorts). With Hollywood’s efforts, the uptick in public attention to AI is mimicking the increasing capabilities of real-world AI systems. And the dystopian plot elements of both Ultron and Ex Machina also are mirroring a heightened sense of impending danger or doom among many of the world’s most advanced thinkers….

…continues at “AI or Not AI?”

Besides the Hollywood attention, mainstream publications are exploring the topic. On a flight from London yesterday I read The Economist’s new cover story, “Rise of the Machines: Artificial intelligence scares people—excessively so,” and recommend it as an up-to-the-moment backgrounder on the economic and social questions being posed by the increasing levels of AI investment and research from the likes of Google, Facebook, Microsoft, and Baidu.

My interests in the topic include the applicability, potential benefits, and any unanticipated risks of AI use in national security, including defense and intelligence systems. Last month I helped lead the National Reconnaissance Office’s 2015 Industry Day, which laid out in a classified setting the mission architectures and implementation of advanced research efforts. While those briefings were classified, NRO Director Betty Sapp has been quoted describing NRO’s Sentient Project:

[Director Sapp] cites an experiment now in limited operations known as Sentient. It is demonstrating the power of using the full architecture against a problem set by doing automated tipping and cueing from one sensor to another—acting at machine speeds, not at the pace of humans. “I can see the strength of that [complete ground system approach] when I look at Sentient in even the way it is behaving in operations,” Sapp states. Saying Sentient is doing a very good job of getting new capability out of existing assets, she allows that more people from the defense and intelligence communities have come to the NRO to view the system’s demonstrations than for any other capability since the beginning of the organization’s history. “It is demonstrating the capabilities we want throughout our Future Ground Architecture,” she offers, adding that these capabilities probably will become operational in the year 2020 or beyond.

If the overall AI topic tickles your fancy, as I point out in the SIGNAL piece there are only a few seats left for the May 20/21 Spring Intelligence Symposium, where I’ll be exploring the topic with Elon Musk as part of a broader discussion of the future of Research & Development. If you have a TS clearance, please join me and register here.

Meet the Future-Makers

Question: Why did Elon Musk just change his Twitter profile photo? He now seems to be evoking James Bond or Dr. Evil:

twitter photos, Elon v Elon

I’m not certain, but I think I know the answer. Read on…

________________________________________

“Prediction is very difficult, especially about the future.”

      – Niels Bohr, winner of the 1922 Nobel Prize for Physics

“History will be kind to me for I intend to write it.”

      – Winston Churchill

If you take those two quotations to heart, you might decide to forego the difficulty of predicting the future, instead aiming to bend the future’s story arc yourself. In a nutshell, that’s what R&D is all about: making the future.

Who makes the future for the intelligence community? Who has more influence on the future technologies which intelligence professionals will use: government R&D specialists, or private-sector industry?

On the one hand, commercial industry’s R&D efforts are pulled by billions of invisible consumer hands around the globe, driving rapid innovation and ensuring that bold bets can be rewarded in the marketplace. Recent examples include Web search, mobile phones and tablets, and SpaceX launches.

To be fair, though, the US IC and DoD have the ability to focus intently on specific needs, with billions of dollars if necessary, and to drive exotic game-changing R&D for esoteric mission use. During my time in government I saw great recent successes which are of course classified, but they exist.

If you want to explore both sides and you have a Top Secret clearance, you’re in luck, because you can attend what I expect will be an extraordinary gathering of Future-Makers from inside and outside the IC, at next month’s AFCEA Spring Intelligence Symposium.

Last fall, the organizing committee for this annual classified Symposium began planning topics and participants. We decided that this year’s overall theme had to be “IC Research & Development” – and, departing from tradition, we brought together an unprecedented array of senior leaders from inside and outside the IC to explore the path forward for IC innovation and change.

The May 20-21 Symposium, held at NGA’s Headquarters, will be a one-of-a-kind event designed to set the tone and agenda for billions of dollars in IC investment.  On the government front, attendees will witness the roll-out of the new (classified) Science & Technology 2015-2019 Roadmap; see this article for some background on that. Attendees will also meet and hear R&D leaders from all major IC agencies, including:

  • Dr. David Honey, Director of Science and Technology, ODNI
  • Dr. Peter Highnam, Director, Intelligence Advanced Research Projects Activity (IARPA)
  • Glenn Gaffney, Deputy Director for Science & Technology, CIA
  • Stephanie O’Sullivan, Principal Deputy Director of National Intelligence
  • Dr. Greg Treverton, Chairman, National Intelligence Council
  • The IC’s functional managers for SIGINT, MASINT, GEOINT, HUMINT, OSINT, and Space

Meanwhile from the private sector, we’ll have:

  • Elon Musk, CEO/CTO of SpaceX, CEO/Chief Product Architect of Tesla Motors, CEO of SolarCity, Co-founder of PayPal
  • Gilman Louie, Partner at Alsop Louie Venture Capital, former CEO of In-Q-Tel
  • Bill Kiczuk, Raytheon VP, CTO, and Senior Principal Engineering Fellow
  • Zach Lemnios, IBM VP for Research Strategy and Worldwide Operations
  • Pres Winter, Oracle VP, National Security Group

When I first proposed that we invite an array of “outside” future-makers to balance the government discussion with a different perspective, I said to my colleagues on the planning committee, “Wouldn’t it be awesome to get someone like Elon Musk…”

Well, we did, and next month I’ll be welcoming him on stage.

These are dark and challenging times in international security, but for scientists, technologists, and engineers, there’s never been a more exciting time – and like them, intelligence professionals should stretch their horizons.

I’m looking forward to the conference… and here’s your link to register to join us.

PS: Just to whet your appetite: new video of this week’s revolutionary SpaceX Falcon 9 first-stage landing attempt on a drone barge at sea – it nearly made it, very exciting:

Insider’s Guide to the New Holographic Computing

In my seven happy years at Microsoft before leaving a couple of months ago, I was never happier than when I was involved in a cool “secret project.”

Last year my team and I contributed for many months to a revolutionary secret project – Holographic Computing – which is being revealed today at Microsoft headquarters.  I’ve been blogging for years about a variety of research efforts which additively culminated in today’s announcements: HoloLens, HoloStudio for 3D holographic building, and a series of apps (e.g. HoloSkype, HoloMinecraft) for this new platform on Windows 10.

For my readers in government, or who care about the government they pay for, PAY CLOSE ATTENTION.

It’s real. I’ve worn it, used it, designed 3D models with it, explored the real surface of Mars, played and laughed and marveled with it. This isn’t Einstein’s “spooky action at a distance.” Everything in this video works today:

These new inventions represent a major step-change in the technology industry. That’s not hyperbole. The approach offers the best benefit of any technology: empowering people by simplifying complexity, and by extension delivering new and unexpected capabilities to meet government requirements.

Holographic computing, in all the forms it will take, is comparable to the Personal Computing revolution of the 1980s (which democratized computing), the Web revolution of the ’90s (which universalized computing), and the Mobility revolution of the past eight years, which is still uprooting the world from its foundation.

One important point I care deeply about: Government missed each of those three revolutions. By and large, government agencies at all levels were late or slow (or glacial) to recognize and adopt those revolutionary capabilities. That miss was understandable in the developing world and yet indefensible in the United States, particularly at the federal level.

I worked at the Pentagon in the summer of 1985, having left my own state-of-the-art PC at home at Stanford University, but my assigned “analytical tool” was a typewriter. In the early 2000s, I worked at an intelligence agency trying to fight a war against global terror networks when most analysts weren’t allowed to use the World Wide Web at work. Even today, government agencies are lagging well behind in deploying modern smartphones and tablets for their yearning-to-be-mobile workforce.

This laggard behavior must change. Government can’t afford (for the sake of the citizens it serves) to fall behind again, and understanding how to adapt to the holographic revolution is a great place to start, for local, national, and transnational agencies.

Now some background…

Programmatic Context for HoloLens

An enduring aspect of working on new technologies is stealthiness. It isn’t always the right approach – sometimes open collaboration beyond company borders has superior value to a quiet insular team. I learned the distinction well when I was in government (Intellipedia and A-Space were among our results) and in the startup culture in Silicon Valley before that.

But stealth has an electric appeal, the spark of conspiracy. At Microsoft and some other companies, the terminology is “Tented” – you have no clue about the work if you’re outside the tent, enforced as rigorously as in a SCIF.

Last March my Microsoft Institute team was quietly invited to add our efforts to a startling tented project in Redmond – one which was already gaining steam based on its revolutionary promise and technical wizardry, but which would require extraordinary stealth in development, for a variety of reasons.  I won’t share anything proprietary of course, but will say that our secrecy was Apple-esque, to use a Valley term of high praise.

That project is being announced to the world today, as HoloLens. I couldn’t be prouder of my (erstwhile) colleagues at Microsoft who are launching a revolutionary platform.  The praise is already rolling in.  WIRED‘s story is “Our Exclusive Hands-On With Microsoft’s Unbelievable New Holographic Goggles,” while TechCrunch quickly assesses: “Augmented reality has had some false starts on mobile, but in this context, it seems more viable, and thus more credible than it ever has before.”

Next, let’s look at some background on the technical area which HoloLens now bestrides like a colossus, with Oculus Rift and Google Glass among the also-rans. Then below I’ll sketch some initial observations on the relevance for government uses and the world at large.

Technology Context: Ambient Computing

I’ve been writing about virtual reality and augmented reality (the VR/AR split) for a decade, first inside government and over the past seven years on this blog.  The term I prefer for the overall approach is “Ambient Computing” – combining advanced projection, immersion, machine vision, and environmental sensing.

Ambient computing devices are embedded all around in the environment, are addressable or usable via traditional human senses like sight, hearing, and touch/gestures, and can understand people’s intent, and even operate on their behalf.

Previous ShepherdsPi posts on Virtual Reality/Augmented Reality:

2008 War is Virtual Hell on emergent defense thinking about virtual reality

2008 Stretching collaboration with Embodied Social Proxies on robotics and VR

2009 Immersed in Augmented Reality, with the concept of “Instrumenting the World” as an important foundation for what is now called the Internet of Things.

2010 The promise of mobile augmented reality

2010 Air Everything

2011 Kinecting Communities

By 2010 I could look ahead (“Playing with virtual data in spatial reality“) and see clearly where we are heading based on trends:

We’re further along in this area than I thought we’d be five years ago, and I suspect we’ll be similarly surprised by 2015. In particular, there is great interest (both in and out of the government circles I travel in) in the “device-less” or environmental potential of new AR technologies. Not everyone will have a fancy smartphone on them at all times, or want to stare at a wall-monitor while also wearing glasses or holding a cellphone in front of them in order to access other planes of information. The really exciting premise of these new approaches is the fully immersive aspect of “spatial AR,” and the promise of controlling a live 3D environment of realtime data.

That vision begins to become “virtually real” with today’s HoloLens announcement.

Competitive Context

I’ll leave it to analysts, and to the holiday market later this year and next, to judge where the competing technologies lie on the “hype curve” of reality and utility.  I can list the efforts I’m paying closest attention to, and why:

Samsung’s Gear VR and Project Beyond: The Gear VR headset hasn’t lit expectations very brightly among analysts or the tech media, but it does now have alongside it the recently announced “Project Beyond,” a 360-degree panopticon camera module which is planned to capture a gigapixel of surrounding 3D footage every second, and stream that footage back to someone wearing a Gear VR headset, “essentially transporting them into that world.” Unlike HoloLens, it’s not a full computer.

Google Glass: The granddaddy of widely available AR experiments. Withdrawn from the public last week, but not before inspiring a raft of venture-funded lookalikes which are now also-rans themselves. Google undoubtedly learned a great deal by dipping its giant toe into the virtual realm so enthusiastically with its Explorers program, but most of my friends who participated developed a “ho-hum” attitude about the device, which now gathers dust on shelves across the world.

Magic Leap: Google’s withdrawal of Glass can be seen in the context of the revelation that the search/advertising giant has instead plowed a large amount of cash into this start-up, followed by several A-list Silicon Valley VC funds. Magic Leap has now raised an astonishing $542 million in Series B funding – yes, that’s half a billion – with no product or launch date in sight, but a long list of developer openings on its website. (But don’t worry, the company just hired a novelist as its Chief Futurist.)

Oculus VR and the Rift (or its follow-ons): Oculus Rift has to be considered the leading rival to Microsoft’s HoloLens, so much so that Facebook acquired its parent startup company for an eye-opening $2 billion, ten months ago. Mark Zuckerberg at the time indicated patience and the long-view in his strategy, but industry watchers don’t expect a device release until late 2015 or 2016. And Rift, as of its descriptions to date, isn’t a full computing experience, merely a virtual-reality immersion.  There’s also no see-through aspect to its headset (unlike the visible real-world context of HoloLens), which has led to widely-reported nausea problems among Rift prototype users.

These all feel a bit laggard now, particularly because the companies involved (with the exception of Google) don’t have the experience of Microsoft in launching global computing platforms on which communities of developers can make magic.  Most importantly, none of these efforts are audacious enough to incorporate a full computing device (CPU, GPU, wirelessly connected) into a comfortably wearable device.

Bottom Line for Government…

Ambient computational resources are driving a new revolution, which the private sector is exploiting rapidly. That industrial and consumer revolution is in useful parallel with a virtuous cycle of ubiquitous sensing (Internet of Things) producing zettabytes of Big Data, being manipulated and mined by pioneering Machine Learning techniques for so-called “soft AI” (see IBM’s Irving Wladawsky-Berger in last week’s Wall Street Journal, “Soft Artificial Intelligence is Suddenly Everywhere“).

We humans, we of soft tissue, need all the help we can get to preside over those new and accelerating forces of technological change. The real magic is when our tools give us such powerful command in a simple and fun way. That is the promise of Holographic Computing, and HoloLens.

There are inevitably challenges. There’ll be devious uses of Holographic Computing, of course. Already we see the deceptive capabilities of regular screen-based “virtual reality,” and one can only imagine the perils of these techniques in the wrong hands, rendered in full 3D immersion; check out these examples from the Emmy-winning special-video-effects (VFX) team behind HBO’s “Boardwalk Empire”:

We can’t allow government to waddle slowly behind, as real people live their lives increasingly affected by immersive technologies used for good or ill.

Governments exist to answer the needs of their citizens; government agencies and personnel should be using up-to-date tools capable of keeping up with what individual citizens are using, if only to avoid embarrassment and dinosauric irrelevance!

Holographic Computing offers government agencies real benefits:

  • Unique and insanely powerful mission applications; the company has been working on training, modeling & simulation, event forensics, gesture-driven immersive big-data visualization, and distance learning, and you can easily imagine uses in widely varied fields like remote logistics management, geospatial analytics, and telemedicine – in short, anything that runs on personal computing software today;
  • A government workforce and workplace transformed by the collaborative capabilities already evident in early applications like HoloSkype;
  • Awareness of and contemporaneous familiarity with the technological changes affecting society, through consumer and entertainment channels.

I’ll end with the newly-released overall video on Microsoft’s Holographic Computing; note the NASA/Jet Propulsion Lab scenes studying the surface of Mars first-hand. Note the 3D modeling from HoloStudio and its infinite shelf of parts. Note the HoloSkype example of real-time step-by-step advice on technical repair, from someone remote yet as near as by your side.

Imagine what you could do with HoloLens….

Let me know your ideas.

Bullshit Detector Prototype Goes Live

I like writing about cool applications of technology so pregnant with the promise of the future that they have to be seen to be believed, and here’s another one that’s almost ready for prime time.

Truth Teller prototype

The Washington Post today launched an exciting prototype harnessing powerful new technologies for journalism and democratic accountability in politics and government. As you can see from the screenshot (left), it runs an automated fact-checking algorithm against the streaming video of politicians or other talking heads and displays in real time a “True” or “False” label as they’re speaking.

Called “Truth Teller,” the system uses technologies from Microsoft Research and Windows Azure cloud-computing services (I have included some of the technical details below).

But first, a digression on motivation. Back in the late 1970s I was living in Europe and was very taken with punk rock. Among my favorite bands were the UK’s anarcho-punk collective Crass, and in 1980 I bought their compilation LP “Bullshit Detector,” whose title certainly appealed to me because of my equally avid interest in politics :)

Today, my driving interests are in the use of novel or increasingly powerful technologies for the public good, by government agencies or in the effort to improve the performance of government functions. Because of my Jeffersonian tendencies (I did after all take a degree in Government at Mr. Jefferson’s University of Virginia), I am even more interested in improving government accountability and popular control over the political process itself, and I’ve written or spoken often about the “Government 2.0” movement.

In an interview with GovFresh several years ago, I was asked: “What’s the killer app that will make Gov 2.0 the norm instead of the exception?”

My answer then looked to systems that might “maintain the representative aspect (the elected official, exercising his or her judgment) while incorporating real-time, structured, unfiltered but managed visualizations of popular opinion and advice… I’m also a big proponent of semantic computing – called Web 3.0 by some – and that should lead the worlds of crowdsourcing, prediction markets, and open government data movements to unfold in dramatic, previously unexpected ways. We’re working on cool stuff like that.”

The Truth Teller prototype is an attempt to construct a rudimentary automated “Political Bullshit Detector,” and it addresses each of the factors I mentioned in GovFresh – recognizing the importance of political leadership and its public communication, incorporating iterative aspects of public opinion and crowd wisdom, all while imbuing automated systems with semantic sense-making technology to operate at the speed of today’s real world.

Real-time politics? Real-time truth detection.  Or at least that’s the goal; this is just a budding prototype, built in three months.

Cory Haik, who is the Post’s Executive Producer for Digital News, says it “aims to fact-check speeches in as close to real time as possible,” whether delivered live, in TV ads, or in interviews. Here’s how it works:

The Truth Teller prototype was built and runs with a combination of several technologies — some new, some very familiar. We’ve combined video and audio extraction with a speech-to-text technology to search a database of facts and fact checks. We are effectively taking in video, converting the audio to text (the rough transcript below the video), matching that text to our database, and then displaying, in real time, what’s true and what’s false.

We are transcribing videos using Microsoft Audio Video indexing service (MAVIS) technology. MAVIS is a Windows Azure application which uses State of the Art of Deep Neural Net (DNN) based speech recognition technology to convert audio signals into words. Using this service, we are extracting audio from videos and saving the information in our Lucene search index as a transcript. We are then looking for the facts in the transcription. Finding distinct phrases to match is difficult. That’s why we are focusing on patterns instead.

We are using approximate string matching or a fuzzy string searching algorithm. We are implementing a modified version Rabin-Karp using Levenshtein distance algorithm as our first implementation. This will be modified to recognize paraphrasing, negative connotations in the future.

What you see in the prototype is actual live fact checking — each time the video is played the fact checking starts anew.

 – Washington Post, “Debuting Truth Teller”
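The fuzzy-matching step the Post describes is easy to sketch in miniature. Everything below is illustrative guesswork, not the Post’s actual code: the toy fact database, the function names, and the 0.25 distance-ratio cutoff are all my own assumptions.

```python
# Illustrative sketch only -- the fact database, names, and thresholds
# here are invented for demonstration, not taken from Truth Teller.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

# Toy stand-in for the Post's Lucene index of fact-checked claims.
FACT_CHECKS = {
    "the deficit has been cut in half": "True",
    "unemployment is at a record high": "False",
}

def match_claim(phrase: str, max_ratio: float = 0.25):
    """Return (ratio, pattern, verdict) for the closest fact-check,
    or None if no pattern is within the distance-ratio cutoff."""
    phrase = phrase.lower().strip()
    best = None
    for pattern, verdict in FACT_CHECKS.items():
        ratio = levenshtein(phrase, pattern) / max(len(phrase), len(pattern))
        if ratio <= max_ratio and (best is None or ratio < best[0]):
            best = (ratio, pattern, verdict)
    return best

# Tolerates the speech-to-text misspelling "defict":
print(match_claim("the defict has been cut in half"))
```

A production system would, as the Post notes, replace the exhaustive scan with indexed search and a rolling-hash (Rabin-Karp-style) prefilter; the edit-distance ratio is just one plausible way to absorb speech-to-text errors.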

The prototype was built with funding from a Knight Foundation Prototype Fund grant. You can read more about the motivation and future plans over on the Knight Blog, and see TechCrunch discussing some of the political ramifications of the prototype, in light of the fact-checking movement in recent campaigns.

Even better, you can actually give Truth Teller a try here, in its infancy.

What other uses could be made of semantic “truth detection” or fact-checking, in other aspects of the relationship between the government and the governed?

Could the justice system use something like Truth Teller, or will human judges and juries always have a preeminent role in determining the veracity of testimony? Will police officers and detectives be able to use cloud-based mobile services like Truth Teller in real time during criminal investigations as they’re evaluating witness accounts? Should the Intelligence Community be running intercepts of foreign terrorist suspects’ communications through a massive look-up system like Truth Teller?

Perhaps, and time will tell how valuable – or error-prone – these systems can be. But in the next couple of years we will be developing (and be able to assess the adoption of) increasingly powerful semantic systems against big-data collections, using faster and faster cloud-based computing architectures.

In the meantime, watch for further refinements and innovation from The Washington Post’s prototyping efforts; after all, we just had a big national U.S. election, but congressional elections in 2014 and the presidential race in 2016 are just around the corner. Like my fellow citizens, I will be grateful for any help in keeping candidates accountable to something resembling “the truth.”

2012 Year in Review for Microsoft Research

The year draws to a close… and while the banality and divisiveness of politics and government have been on full display around the world during the past twelve months, the year has been rewarding for me personally whenever I could retreat into the world of research. Fortunately there’s a great deal of it going on among my colleagues.

2012 has been a great year for Microsoft Research, and I thought I’d link you to a quick set of year-in-review summaries of some of the exciting work that’s been performed and the advances made:

Microsoft Research 2012 Year in Review

The work ranges from our Silicon Valley lab work in “erasure code” to social-media research at the New England lab in Cambridge, MA; from “transcending the architecture of quantum computers” at our Station Q in Santa Barbara, to work on cloud data systems and analytics by the eXtreme Computing Group (XCG) in Redmond itself.

Across global boundaries we have seen “work towards a formal proof of the Feit-Thompson Theorem” at Microsoft Research Cambridge (UK), and improvements for Bing search in Arab countries made at our Advanced Technology Labs in Cairo, Egypt.

All in all, an impressive array of research advances, benefiting from an increasing amount of collaboration with academic and other researchers as well. The record is one more fitting tribute to our just-departing Chief Research and Strategy Officer Craig Mundie, who is turning over his reins, including MSR oversight, to Eric Rudder (see his bio here), while Craig focuses for the next two years on special work reporting to CEO Steve Ballmer. Eric’s a great guy and a savvy technologist, and has been a supporter of our Microsoft Institute’s work as well … I did say he’s savvy :)

There’s a lot of hard work already going on in projects that should pay off in 2013, and the New Year promises to be a great one for technologists and scientists everywhere – with the possible exception of any remaining Mayan-apocalypse/ancient-alien-astronaut-theorists. But even to them, and perhaps most poignantly to them, I say Happy New Year!

MSR gets wired, WIRED gets MSR

MS Research in natural-user-interaction technologies

WIRED Magazine’s online site ran a great long profile of Microsoft Research late yesterday, with interviews and project features: “How Microsoft Researchers Might Invent a Holodeck.”

I have written about or mentioned all of the individual projects or technologies on my blog before, but the writing at WIRED is so much better than my own – and the photographs so cool – that I thought I should post a link to the story.

Virtual recipe stirs in Apple iPad, Microsoft Kinect

Who says Apple and Microsoft can’t work together?  They certainly do, at least when it involves the ingenuity of their users, the more inventive of whom use technologies from both companies (and others).

Here’s a neat example, “a just-for-fun experiment from the guys at Laan Labs” where they whip up a neat Augmented Reality recipe: take one iPad, one Kinect, and stir:

Some technical detail from the Brothers Laan, the engineers who did the work:

“We used the String Augmented Reality SDK to display real-time 3d video+audio recorded from the Kinect. Libfreenect from http://openkinect.org/ project was used for recording the data coming from the Kinect. A textured mesh was created from the calibrated depth+rgb data for each frame and played back in real-time. A simple depth cutoff allowed us isolate the person in the video from the walls and other objects. Using the String SDK, we projected it back onto a printed image marker in the real world.” – source, Laan Labs blog.
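That “simple depth cutoff” is the easiest piece of their pipeline to picture: keep each RGB pixel only if its Kinect depth reading falls within a near-field band, discarding walls and background. The sketch below is my own illustration of the idea, with invented array shapes and threshold values, not the Laan Labs code:

```python
def depth_cutoff(depth, rgb, near=500, far=1500):
    """Keep RGB pixels whose depth (in mm) lies in [near, far];
    mask everything else out as background (None)."""
    return [[pix if near <= d <= far else None
             for d, pix in zip(drow, crow)]
            for drow, crow in zip(depth, rgb)]

# Toy 2x3 frame: a person at ~1000 mm standing before a wall at ~3000 mm.
depth = [[3000, 1000, 1000],
         [3000, 1000, 3000]]
rgb   = [["w", "p", "p"],
         ["w", "p", "w"]]

print(depth_cutoff(depth, rgb))
# person pixels ("p") survive; wall pixels are masked to None
```

In the real pipeline the depth map comes from libfreenect, calibrated against the RGB camera, and the surviving pixels feed a textured 3D mesh rather than a nested list.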

As always, check out http://www.kinecthacks.com/ for the latest and greatest Kinect hacks – or more accurately now, the latest cool uses of the openly released free Kinect SDK, available here.

There are several quiet projects underway around the DC Beltway to make use of the SDK, testing non-commercial but government-relevant deployments – more detail and examples at the appropriate time. We will eventually release a commercial SDK with even more functionality and higher-level programming controls, which will directly benefit government early adopters.

In the meantime, I may report on some of the new advances being made by our research group on Computational User Experiences, who “apply expertise in machine learning, visualization, mobile computing, sensors and devices, and quantitative and qualitative evaluation techniques to improve the state of the art in physiological computing, healthcare, home technologies, computer-assisted creativity, and entertainment.” That’s a rich agenda, and the group is in the very forefront of defining how Natural User Interaction (NUI) will enhance our personal and professional lives….
