To a newborn every joke is new, so I have specialized in old jokes – I keep telling them over and over again.

Floorshrink diaries

TLDR nr.3: The cathedral IN the bazaar

2019. június 14. - Floorshrink

TLDR: Open Source SW development has changed. What started as a protest against closed source SW with questionable price-performance characteristics, initiated by amazing fellows in dorm rooms like Linus Torvalds, is today driven by a handful of mega-scale SaaS players to support their own services and used as a tractor beam to pull traditional enterprise IT closer to their core offerings. As a byproduct, this change is going to turn the entire Open Source movement upside down and will render some of the top HW players largely irrelevant in the next ten years. The cathedral walked into the bazaar and transformed it.

CATB statements with question marks - something has changed

Eric S. Raymond published his essay The Cathedral and the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary in 1999 (CATB for short). He used Linux as an example (besides his own experience with Fetchmail) and came to the conclusion that Open Source SW development has a fair chance to produce better products than its traditional closed source counterparts. I am sailing into stormy waters with the observations below, but I risk them anyway. (Parts in italic are quotes from CATB.)

  • “The development of the Linux operating system by a loose confederation of thousands of programmers -- without central project management or control -- turns on its head everything we thought we knew about software project management.” Today the most impactful OSS projects (see Appendix 1 and 2) start their lives as in-house development efforts targeting technical challenges related to the core business of SaaS giants. Later – once the problem is addressed – they are shared with the broader world as open sourced technologies. In fact, most contributors to the Linux kernel today come from IBM (Red Hat), Intel, Samsung, Facebook and Huawei. Even Raymond acknowledges this: “It’s fairly clear that one cannot code from the ground up in bazaar style.” Linux itself started on top of Minix, Andrew Tanenbaum’s Unix-like teaching OS for i386 machines.
  • ‘‘Given enough eyeballs, all bugs are shallow.’’ Take a look at Heartbleed or the CVE vulnerability lists, and you will find that some Linux distros have more distinct vulnerabilities than the once despised Windows. The really bad thing is that the governing bodies (PMCs) of certain Open Source projects make decisions that break contracts (the APIs other projects rely on), causing the end products built on the component to fail. Raymond’s argument works WITHIN a project but falls victim to Conway’s law when it comes to inter-component cooperation.
  • “Plan to throw one away; you will, anyhow.” (Fred Brooks, The Mythical Man-Month, Chapter 11) Or, to put it another way, you often don’t really understand the problem until after the first time you implement a solution. The second time, maybe you know enough to do it right. True, with a caveat: for each building block you will end up with 2-3 functionally overlapping, but slightly different and surely incompatible implementations. When someone thinks that innovation in a given project has stalled, she forks the project, thinking little about compatibility or a smooth migration path from the old to the new. Tough luck, Mr. Customer.
  • “Hackerdom has long explicitly recognized "egoboo" (ego-boosting, or the enhancement of one's reputation among other fans) as the basic drive behind volunteer activity.” This phenomenon is still at work, albeit with major problems: when the egos of developers working on different OSS projects collide, the product and eventually the customer suffers. It is even more painful when developers ignore the voice of the customer coming through product management. The mirror image of this sentiment is when product management reports INTO Engineering. Ouch…

The root cause of these changes lies neither in technology nor in process, but in the business model. We have to differentiate between value creation and revenue creation, since the two are not identical in this case.

  • The most important OSS projects are infrastructure building blocks (operating systems, containers, databases, SW development frameworks, scheduling, provisioning and monitoring tools) BUT NOT core business applications.
  • The revenue streams of the greatest contributors do not come from selling these innovations – even though they could not create their mega-scale core services without these technologies. Facebook, Netflix, Twitter, Lyft, Uber etc. are not “de jure” SW companies; even Google, the nr. 1 OSS contributor, makes the bulk of its revenue from advertising, not from SW subscriptions. “De facto” all of them are.

 The original value creation model

The following chart is from the presentation titled Open Source Projects and Product Management - Need, Pain or Useless? by Patrick Maier and Holger Dyroff. Here the OSS project is driven by enthusiastic individuals striving for freedom; in most instances the contributors to the code are users of the same code (in sync with CATB). The product uses the project as a source of innovation and gives back stability (bugfixes) in return. The revenue comes from customers of the product (it is not detailed whether this is from support or subscription). The investment flows back as improvements to the code base (anything that the community may not find interesting, and therefore would not touch, while customers do care about it).

old_model.png

 The new value creation model

new_model.png

In the new model SaaS and public cloud providers create a SaaS offering that is the core of their business. They are after millions of consumers, hence they run into technical problems that no enterprise IT shop has ever experienced (their scale dwarfs even the largest enterprise IT shops). The new challenges require new solutions, hence scale-out architectures, containers, new development frameworks and big data platforms emerge. The mega players fund the creation of these platform components from their core business and pass them on to the open source community and specialists. Enterprise IT shops, which have been engaged in a never-ending struggle to justify infrastructure investments, jump on the free cool stuff and bake it into their environments, thus paving the way that will eventually lead to moving their payloads to the very same cloud providers.

As always, I will appreciate any feedback or comment.

_____________________________________________________________________________________________ 

Suggested reading: The original essay: The Cathedral and the Bazaar

 

Appendix 1: Major contributions to the top-level OSS projects by original vendor:

  • Facebook: React, React Native, Presto, RocksDB, Cassandra, Torch, PyTorch
  • Google: Android (an ARM based Linux fork for smartphones), Chromium (web browser), TensorFlow (math library for machine learning), Kubernetes (container orchestration system), Angular (development framework), Polymer (JS library), js (JS framework), BorgMon (the inspiration for Prometheus, the Kubernetes monitoring and alerting tool)
  • Yahoo: Hadoop (based on the MapReduce white paper from Google), Pulsar (pub-sub messaging system)
  • Red Hat (now IBM): OpenShift (container management platform)
  • Microsoft: Visual Studio Code (source code editor), .Net Core (developer platform)
  • Netflix: Chaos Monkey (service resiliency checking tool)
  • Airbnb: Airflow (workflow scheduler)
  • Oracle/Sun: MySQL (RDBMS), Hudson, which was forked into Jenkins (build automation server)
  • Lyft: Envoy (distributed services proxy for applications)
  • Twitter: Zipkin (distributed tracing for latency problems in microservices)
  • Uber: Jaeger (distributed tracing for microservices based applications)
  • University of California, Berkeley: PostgreSQL (RDBMS)

 

Appendix 2: The digging I did before I dared to state anything

There are multiple ways to answer the question of who contributes the most to the open source movement. One way is to measure the number of contributions (commits on GitHub) or the number of employees contributing. Another way is to track the finalists of the BOSSIE awards and reconcile them with several lists compiled by various organizations like CNCF and Datamation.

BOSSIE Awards (Best of Open Source Software Award) from Infoworld:

 

Rankings from Felipe Hoffa and Fil Maj based on the number of employees of a given firm contributing to OSS repos vs. the total number of contributions.

 

A handful of subjective lists:

 

A few articles and talks on the OSS business model

 

Examples to prove that OS SW is not exempt from bugs:

 

TLDR nr. 2: When the music’s over

Summary: Cloud computing is expensive. The recognition of this fact will lead to a small-scale return to on-premises computing while importing the cloud operating model. More importantly, it will trigger a change in how IT and its internal customers think about and use computing resources. Computing is a commodity, but one with a price tag.

 A few months ago, the business plan published for the Lyft IPO shed some light on the scale of computing cost a cloud-only IT shop has to live with (a 300 million USD payment obligation to Amazon over the next 3 years). Cloud computing is not cheap; that is only the sales pitch.

 For the record, cloud providers are not greedier than their commercial clients; the problem is NOT with the pricing. AWS runs with a 30% gross margin while it uses all the levers economy of scale can offer, like a 20+% discount on parts (eg. from Intel) and 30% or more on electricity, not to mention the efficiency gains in operating cost and in the plant itself. Bottom line: you are unlikely to be cheaper in a 1-kilogram AWS CPU vs. 1-kilogram on-prem CPU shootout.
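
To make the "one kilogram of CPU" shootout tangible, here is a back-of-the-envelope sketch in Python. The discount and margin percentages are the ones quoted above; every absolute dollar figure and the operations-efficiency factor are made-up placeholders, so read it as an illustration of the reasoning, not as real pricing data.

```python
# Back-of-the-envelope comparison of a unit of on-prem vs. AWS compute.
# Discounts and gross margin come from the paragraph above; the absolute
# dollar amounts and the ops-efficiency factor are hypothetical placeholders.

onprem_parts = 10_000   # hypothetical HW cost of a unit of compute (USD)
onprem_power = 4_000    # hypothetical electricity cost over its lifetime (USD)
onprem_ops   = 10_000   # hypothetical operations / facility cost (USD)

PARTS_DISCOUNT = 0.20   # 20+% discount on parts at AWS scale
POWER_DISCOUNT = 0.30   # 30% or more on electricity
OPS_EFFICIENCY = 0.60   # assumed efficiency gain in operations (hypothetical)
GROSS_MARGIN   = 0.30   # AWS gross margin cited above

onprem_total = onprem_parts + onprem_power + onprem_ops

aws_cost = (onprem_parts * (1 - PARTS_DISCOUNT)
            + onprem_power * (1 - POWER_DISCOUNT)
            + onprem_ops * (1 - OPS_EFFICIENCY))
aws_price = aws_cost / (1 - GROSS_MARGIN)   # price after the provider's margin

print(f"on-prem unit cost : {onprem_total:,.0f} USD")
print(f"AWS unit price    : {aws_price:,.0f} USD")
```

With these (made-up) inputs the scale discounts more than offset the provider's margin, which is exactly the point of the paragraph above.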

There are two forces at play here:

  • Compute and storage workloads moved to a metered environment while the consumption patterns stayed unchanged from the hunky-dory “it’s a flat fee we already paid” times. Cloud providers simply exposed the uncomfortable truth about enterprise consumption habits. (It is like my kids listening to music from YouTube videos over a metered 4G internet connection; see the back-of-the-envelope sketch after this list.)
  • The other factor is the “Kids in the candy shop” syndrome: If you give IT guys instant access to a virtually unlimited computing and storage capacity without a strong cost control, a surge in consumption is inevitable.
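
A purely illustrative calculation of what the unchanged "flat fee" consumption habit costs in a metered environment; the hourly rate below is a hypothetical placeholder, not an actual provider price.

```python
# What "leaving the lights on" costs in a metered cloud, for a single instance.
# The hourly rate is a made-up placeholder; plug in your provider's real price.

HOURLY_RATE = 0.40                  # USD per instance-hour (hypothetical)
HOURS_PER_MONTH = 24 * 30           # always-on
BUSINESS_HOURS_PER_MONTH = 10 * 21  # ~10 hours a day, ~21 working days

always_on = HOURLY_RATE * HOURS_PER_MONTH
office_hours_only = HOURLY_RATE * BUSINESS_HOURS_PER_MONTH

print(f"always-on              : {always_on:7.2f} USD / month")
print(f"business hours only    : {office_hours_only:7.2f} USD / month")
print(f"money left on the table: {always_on - office_hours_only:7.2f} USD / month")
```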

Why do enterprise folks leave the lights on in their traditional on-prem DC environments? I think there is a combination of technical, accounting and emotional factors at play.

  • The technical driver is straightforward: it would be difficult to keep in mind which component depends on which other component; moreover, the existing application layer is unable to start an external resource it needs, since it was written with the assumption that those resources are just there. If they are not there at the very moment they are needed, the app throws an alert and stops. There are nasty traps ahead for those who do shut down their servers. I remember a company that shut down its AS/400s for maintenance after several years of no downtime. Half of their disks did not spin up after the bearings cooled down between Christmas and New Year’s Eve. Simply put, shutting down a DC is a project in itself.
  • The accounting aspect is less obvious, but equally powerful: most enterprise IT acts as a cost center, and the finance folks demand that they distribute every dollar IT spends to their internal customers. This allows no tangible unassigned compute/storage capacity within the IT department, therefore when a new demand arrives, they cannot serve it right away. They need to procure it. This is when procurement (the “Vogons”) arrives.
  • The “my precious” syndrome: back in my childhood, when a baby was born in Eastern Germany, the parents shelled out the initial payment for a Trabant (for those who missed it: an automotive gem with a two-stroke, 22 HP engine, a fiber reinforced paper body and a minus 2 stars NCAP evaluation) to make sure that by the time the kid turned 18, the car would be there. You can imagine how valuable a second-hand Trabi was… The same thing is at play in the modern enterprise: the lead time (approval, procurement, “unforeseen” dependencies with the telco guys) between requesting a new server and actually deploying a workload on that box takes 5+ months. It is no surprise that the business unit – once it gets its precious new HW – acts like Gollum: it will not give it up (or even shut it down) even if the given server has been idle for months. This behavior is the key driver behind server utilization figures in the range of a steam engine.

 What can we do about this problem? As always there are two approaches: fight or flight.

 FIGHT:

  • The low hanging fruit is monitoring and reporting any cost to the business unit that is behind that cost. Link the cost to the payload and pair it with the utilization: “this cluster runs a credit app that you retired last year, so it did nothing in the last 4 months but cost you 17k USD.” The thing works best when you break down cost to the employee where you can (eg. dev machines); see the sketch after this list. I guess cloud cost optimization will become a separate business in its own right soon.
  • In the case of databases, show the number of transactions on that DB in the last few billing periods. Introduce storage tiering in sync with the data lifecycle along the capacity-IOPS-cost dimensions and come up with saving suggestions like moving the workload to a cheaper availability zone, or to reserved instances, or – where the workload permits – to spot instances.
  • Autoscaling, while it seems like the best option, is a double-edged sword: enterprises care more about the predictability of their cost than about a potential reduction to it. It is okay to spend a lot as long as you stay within budget. Not to mention planning practices based on last year’s actuals: “if you underspend your budget, we will shrink it for you.”
  • Needless to say, the whole thing will work only if there is a strong incentive for the business to care about these reports. Otherwise they just archive them.
  • The tough part is changing the mindset of both IT and their internal customers: you need to make them realize that “their precious” is actually a disposable resource, a commodity, but with a price tag. In order to get there, you need to make sure that provisioning and tearing down compute capacity is absolutely painless and fast. For the record: any compute-storage-networking environment is disposable only when the payload has no direct references to a given resource. Static IP addresses hard-wired into the interface code, anyone?
  • Make the internal IT cost breakdown as similar to the cloud providers’ bill as possible. Let talented folks work on the cross-charge mechanisms and make sure they read the works of behavioral economists first.
  • A word on vendor lock-in: I have lingering doubts about cloud brokers due to the added complexity, but I am sure most large enterprises will end up with 2 if not 3 public cloud providers in the long run. The vendor lock-in will be created not by the cloud providers directly, but by the payload: ISVs writing their apps hooked into specific cloud offerings (eg. “my stuff works only with S3” – déjà vu of Windows in the 90’s…).
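
Here is a minimal sketch of the kind of idle-spend report the first bullet describes. The input format, the sample payloads and the 10% utilization threshold are assumptions made for this example; in practice the records would come from your provider's billing export and your monitoring stack.

```python
# Flag payloads that burn money while doing (almost) nothing.
# The data structure and the idle threshold are assumptions for this sketch.

from dataclasses import dataclass

@dataclass
class Payload:
    name: str
    owner: str                   # business unit that pays for it
    monthly_cost_usd: float
    avg_cpu_utilization: float   # 0.0 .. 1.0 over the billing period

IDLE_THRESHOLD = 0.10            # below this, ask whether the payload is needed at all

payloads = [
    Payload("credit-app-cluster", "Retail Banking", 17_000, 0.02),
    Payload("fraud-scoring",      "Risk",            9_500, 0.55),
    Payload("dev-sandbox-42",     "IT",              1_200, 0.01),
]

# Most expensive offenders first, so the report starts with the biggest savings.
for p in sorted(payloads, key=lambda x: x.monthly_cost_usd, reverse=True):
    if p.avg_cpu_utilization < IDLE_THRESHOLD:
        print(f"{p.owner}: '{p.name}' cost {p.monthly_cost_usd:,.0f} USD this period "
              f"at {p.avg_cpu_utilization:.0%} utilization - retire or shut it down?")
```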

 FLIGHT:

  • In the case of predictable, static workloads it might make sense to bring them back on-prem. (One might ask why you moved them to the cloud in the first place, but it surely happened during the reign of the previous CIO.)
  • If you do bring back data and workloads from a public cloud to your own DC, make sure you do not fall back into the old habits, ie. you implement the cloud operating model instead of the ancien régime: automated provisioning and auto-scaling, config and build artifacts in JIRA just like code, and all the other goodies the big guys invented. And yes, convince the CFO that you are an internal cloud provider who has to have spare capacity.

 Let me finish this blog post with my favorite song from the Doors: “When the music’s over, turn out the lights!” As always, I appreciate any feedback and comments.

----------------------------------- addition based on feedback ----------------------------------------------------------------

I received a fair amount of feedback that prompted me to follow up on it. So here is the add on to the original:

So here is my theory: compute platform development in the last 15-20 years can be viewed as an improvement in area approximation, aka an area integral, where f(x) is the computing workload over time and the step width is the time required to bring up or tear down a new compute instance. I am talking about this:

 areal_integral.png

 approximation.png

  • Back in the mainframe days you bought the compute capacity that your wallet could afford and prayed that the iron could handle whatever the business threw at it. These computers ran the core banking or billing app, ie. the backbone of your business, with a few hundred interfaces implemented in the traditional spaghetti fashion. Replacing a mainframe during its lease was expensive, therefore x was measured in years (see the graph on the left). Even if you had another mainframe (eg. for DR purposes), splitting the peak workload between the two was difficult. Applications on these beasts were designed with scale-up in mind, aka “buy a bigger machine if you need more juice”.
  • Fast forward many years: your core app was running on a VM, therefore in theory you could move the workload from a small VM to a large VM in a few hours (still talking about scale-up) to respond to an increase in demand. x was measured in hours (see the graph in the middle).
  • In the future – running your application payload on containers – you can add or remove compute capacity very fast, IF you have completely rewritten your app from the old scale-up behavior to scale-out. x will be measured in seconds (see the graph on the right and the small numerical sketch after this list).
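
A small numerical illustration of the area-approximation argument: the narrower the provisioning step (the faster you can add and remove capacity), the closer the provisioned "staircase" hugs the workload curve, and the less idle capacity you pay for. The synthetic demand curve and the step widths below are made up for the sake of the example.

```python
# Over-provisioned ("wasted") capacity as a function of provisioning step width.
# Capacity is provisioned as a staircase that must cover the peak of each step;
# the synthetic daily workload curve is made up for illustration.

import math

def workload(t_hours: float) -> float:
    """Synthetic daily demand curve (arbitrary capacity units)."""
    return 50 + 40 * math.sin(math.pi * t_hours / 24) ** 2

HORIZON_H = 24 * 30   # one month, in hours
DT = 0.05             # sampling resolution: 3 minutes

def wasted_capacity(step_hours: float) -> float:
    """Area between the provisioned staircase and the actual workload curve."""
    n = int(HORIZON_H / DT)
    demand = [workload(i * DT) for i in range(n)]
    per_step = max(1, int(step_hours / DT))   # samples per provisioning window
    waste, i = 0.0, 0
    while i < n:
        window = demand[i:i + per_step]
        provisioned = max(window)             # you size for the peak of the window
        waste += sum(provisioned - d for d in window) * DT
        i += per_step
    return waste

for label, step_h in [("mainframe era (re-size ~ yearly)", 24 * 365),
                      ("VM era (re-size in hours)", 4),
                      ("container era (re-size in seconds)", 1 / 120)]:
    print(f"{label:35s} idle area ~ {wasted_capacity(step_h):8,.0f} unit-hours / month")
```

The idle area shrinks dramatically as the step width shrinks, which is the whole point of the three graphs above.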

My forecast about the pendulum swinging back a bit is valid only for IaaS workloads that were migrated to a public cloud provider as-is, where user and IT behavior did not adjust to the metered environment, and where the CFO then dinged the CIO for the increasing cloud spend. As Gabor Varga put it: "Think about Cloud as a continuum rather than a binary classification of deployment models. That mental frame would help everyone understand that IaaS has more in common with onprem than with true PaaS or SaaS which are higher abstraction levels of IT resources." Yes, this is why one can walk backward sometimes. The new billing schemes introduced by serverless offerings (like this: https://cloud.google.com/functions/pricing) might stir the pot and will surely demand even more scrutiny in the cost evaluation. (Thanks to Zoltan Szalontay for bringing it up and to Sandor Murakozi for helping me with real-life examples.)

 

Sources used for this article: 

 

 

TLDR nr. 1: “It’s Difficult to Make Predictions, Especially About the Future”

I am not quite sure if the observation above is from Niels Bohr or Yogi Berra, but I wholeheartedly believe in it. Some companies create predictions for a living, and sometimes they might be wrong. A few months ago, Dave Cappuccio from Gartner made a prediction: “By 2025, 80% of enterprises will have shut down their traditional data center, versus 10% today.” While I am convinced that for many workloads and many companies moving to the public cloud is the only reasonable option, I think this prediction is false. This article attempts to summarize why.

What customers actually want

 If you ask CEOs what they need from their IT department they are likely to answer with some of these demands:

  • Increased speed to achieve economic return on their investments
  • The ability to respond quickly to any market change
  • Increased productivity of any group in their firm

If you ask the same question from CIOs of these firms, you are likely to get answers like these:

  • Increased productivity of their developers
  • To run any workload at an optimal location (own DC, public cloud, edge)
  • A two-way street between their on prem DCs and the cloud (the ability to switch from one provider to another or back to their own DC without major cost or time implications)
  • IT infrastructure elasticity: optimal utilization of their existing assets
  • Consuming new functionality without the risk and hassle of upgrading

Summarizing these wishes with Wikibon: customers want a cloud experience where their data lives. This is not necessarily a public cloud provider though. One thing is for sure: most of the prevailing myths about the risks related to a public cloud (eg. data security, reliability) are simply false; the top tier cloud players spend more energy and resources on cyber security and disaster recovery capabilities than even the largest enterprise clients. Simply put, they can afford it and will be the best in these areas soon. If you add elasticity, scalability and the chance to avoid upfront CAPEX expenditure, you get an impressive combination that is compelling to any CEO. This has created a tremendous “cloud first” pressure on enterprise CIOs.

Smoking a pipe while playing the flute

Customers want to make a safe bet while fearing being milked due to vendor lock-in. Betting on a single vendor can be good, since you need to spend less on integrating your infrastructure components. “Nobody gets fired for buying IBM”, as the adage from the 1960s goes. On the other hand, when a well-established RDBMS vendor started twisting the arm of its customers by increasing the support fees for its products, other vendors could ramp up their budding RDBMS businesses on the resentment toward the previous vendor.

Cloud providers also want to lock you in: eg. they let you ingest your data for free, but extracting the same data may cost you your proverbial shirt. As a result, most enterprise customers are likely to end up with three, potentially mismatched ecosystems on their hands: their own data center and two public clouds, and they may not dare use the differentiating features of any single vendor.

Where the market is moving

The following list is far from complete, but may give an overview of what is happening in the public cloud arena:

  • Getting closer to the customer’s data: Azure Stack, AWS Outpost and GKE On-Prem are all acknowledgements of the fact that - while customers love the cloud experience – they stick to their on prem DC investments (mostly due to the stickiness of the application layer), and integrating the two realms – making cloud bursting real - is a daunting task. These new offerings are likely to make this integration easier. The AWS – VMWare cooperation tackles the same problem the other way around, providing the existing VMWare experience in a public cloud. There is an ongoing investment in this area by the giants, eg. the acquisition of Avere by MSFT.  
  • Fighting latency and legal issues: several years ago I pinged a host in the Amsterdam DC of Azure from Johannesburg. The 400 ms round trip was a showstopper for most UI intensive apps. It is no surprise that MSFT placed its first African Azure DCs in Johannesburg and Cape Town. The following quote on the legal side is from a friend of mine: "the European Union is determined to remove obstacles to move data from one member to another, as evidenced by a recent regulation requiring member states to remove data residency barriers (framework for the free flow of non-personal data in the European Union, Regulation (EU) 2018/1807). Additionally, the latest European Banking Authority recommendation has some soft statements for preference for data location in EEA and no requirements to limit data location to national boundaries."
  • Moving to containers and microservices: this is probably the most important change, with implications bigger than what virtual machines brought 10+ years ago. The caveat is that microservices require the rebuild of your entire application stack. The promise is – a déjà vu of the JVM promise from some twenty years earlier – write once, run anywhere. This time it may work.
  • Custom chipsets for specific workloads – albeit a bit controversial, since general-purpose chips have ruled the industry in the last 15 years – the largest players buy chips in quantities that allow them to request custom designs. Some of these new chips will target specific workloads (eg. artificial intelligence, where the battle is already on between GPUs and CPUs), some might be just slightly more efficient in a generic task or will consume a bit less energy, but at AWS/Azure scale this might make a difference.
  • Consolidation: if you miss the boat in development, you may catch up with an acquisition; this is what IBM did by buying Red Hat. The Dell – EMC merger a few years ago falls into a similar bucket: staying competitive together. The recent acquisition of Mellanox by NVIDIA is also related to the cloud; it will help NVIDIA realize its plans to become a top HPC provider, competing with Intel head to head for AI/ML specific workloads. Amazon’s close partnership with VMWare – albeit not a takeover – also shows the clear signs of consolidation.

What a local datacenter provider can do to stay competitive

The good news: the hyper-scale cloud providers (Amazon, Microsoft, Google, IBM and Alibaba) are not likely to establish a major DC footprint in Hungary soon; we are just too small for that. Beyond this, many Hungarian SMB customers do not need the worldwide coverage (eg. servers in Hong Kong with disaster recovery in California), and they may not want a CDN (Content Delivery Network) or chipsets built for AI, but they do need to comply with local and EU data regulations and like the proximity of the DC to their business. It seems that the Hungarian local regulation is stricter than the equivalent EU rules. Side note: even the otherwise tough GDPR regulation does not create any restriction on non-personal data storage (within the EU boundaries).

The bad news: colocation, server hotels and leased physical servers and VMs do not cut it; local DC providers have to provide more if they plan to survive. They also need to keep in mind that the big boys will eventually go beyond the current DC services and will offer specialization, layers of additional security, management and monitoring services and 3rd party integrated SW functionality (or their own productivity suites, using their directory services and storage) that will be difficult to copy. Some players (eg. Rackspace) have already adopted a dual strategy: they still offer OpenStack (their own stuff) while they provide services on top of all major public cloud providers. I gave it a try and listed what local DC providers could do to stay relevant vs. the giants in the next ten years.

  1. The immediate homework: automated provisioning, patching and monitoring of their server offerings, installing common workloads from a canned image (RDBMS, NoSQL, application and web servers, frameworks)
  2. Giving more: load balancing, alerting, log analysis, security scans, archiving, telco neutrality (same latency from all major local telco networks)
  3. Going beyond VMs: containers and microservices will become commonplace within a few years; local offerings have to be compatible with them, ie. will have to provide the services listed in points 1 and 2 for them, not just for VMs.

In a nutshell: copy most things big boys do, at a local scale and potentially with some local flavor.

Where we will be two years from now

The warning in the title applies, these are my bets, will be happy to evaluate later on:

  • The top tier public cloud providers will widen their lead vs. the bulk of the industry regarding their core strengths: scalability, elasticity, worldwide footprint and added services.
  • Their market share will force new applications to be compatible with them out of the box, potentially sold in their marketplaces first.
  • As my favorite fridge magnet from the Computer History Museum puts it: “Software makes hardware happen.” To put it a bit more elaborately, SW-defined networks will rule the DCs soon (servers with fixed IP addresses maintained in a spreadsheet should go away asap). HW manufacturers “mimic” cloud providers: hyperconverged infrastructures (eg. offerings from the Dell/EMC – VMWare duo) will become a compelling choice for any enterprise or DC provider revamping their data centers.
  • The hurdle that will slow down cloud supremacy is the stickiness of the application layer. Any large enterprise has accumulated a portfolio of applications over the years, where the sheer effort of porting these apps to the cloud kills the business case (not to mention the long-term lease agreements on the DC space).
  • The cloud “operating model” will become dominant: regional players, telcos and enterprises alike will adopt it.
  • This one is from Wikibon: “The long-term industry trend will not be to move all data to the public cloud, but to move the cloud experience to the data.”

The final word

While Mark Twain was on a worldwide speaking tour, local newspapers reported on his serious health issues. One of them even issued his obituary. Twain’s answer seems appropriate to dispel the Gartner prediction on the demise of the on-prem DC: “Reports of my death are greatly exaggerated.” For sure it will require a lot of change and investment by the DC guys, but this game is not quite over yet.

Sources used for this blog post

 

A digest of Gábor Filippov's article "A hibrid ellenforradalom kora" (The age of the hybrid counter-revolution)

This piece is a digest of Gábor Filippov's article "A hibrid ellenforradalom kora" (The age of the hybrid counter-revolution). My goal is to distill the essence of this otherwise excellent political science essay and pass it on to those who would probably not read the 14+ page original. I marked my own remarks in italics. The article goes beyond the political knowledge – and appetite for such knowledge – of the average Hungarian, yet this very lack of knowledge is one of the reasons the NER (the "System of National Cooperation") can operate. (When an acquaintance of ours – a hard-working, absolutely decent woman over seventy – declared that she was afraid of the migrants and that Viktor Orbán would protect her, so she voted for him, it was already plain that we were in trouble.) This is what I am trying to fight with my wooden sword. Enjoy the read.

What has happened in Hungary over the past decade (the dismantling of the rule of law) is not an isolated sequence of events. It is not a Hungarian disease, not even an Eastern European one, but the symptom of a global trend. It is not the appearance of a "mutated fascism", nor "the return of the fifties", but a global phenomenon with its own logic and characteristics.

Outdated comparisons (this is not a reincarnation of the Horthy regime) and identifications based on half-truths distort not only our understanding but also our reactions, which can lead to the entrenchment of the misunderstood phenomena. (Read: the boys stay put for a generation or two, and only one of the cataclysms so familiar from Hungarian history will remove them.)

The nineties were a decade of optimism about democracy. The process followed a similar pattern everywhere: the (self-)limitation of state power; the dismantling of single-party structures and the separation of state and society; the institutionalization of competitive multi-party systems and regularly held elections; the building of the rule of law and of independent constitutional institutions; the flourishing of civil organizations and a free press; altogether the decentralization of politics as the discussion and management of common affairs, politics becoming a "societal matter".

The backdrop to democracy's triumphal march was the collapse of the Soviet Union. (For details see Miklós Németh's memoir "Mert ez az ország érdeke".) On the one hand, the military power ready to crush democratic aspirations within its sphere of influence disappeared (see 1956). On the other hand, the communist dictatorship could no longer prop up, with financial support, the autocratic regimes outside its immediate sphere of interest (see "Száz év szorongás").

The autocrats' new insight: democratic institutions do not necessarily have to be destroyed – they can also be "hacked". Why bother besieging the fortress of democracy when it is far more cost-effective and sustainable to occupy it from within? This is the process that played out in Hungary.

The halt of democracy optimism: the article "The Rise of Illiberal Democracy" was published 21 years ago. Its author shows that multi-party elections are not, in themselves, indicators of liberal democracy. In fact, they often serve precisely to mask the fact that those in power systematically violate the rule of law. These are the "illiberal democracies": regimes that formally run multi-party elections but disable the "liberal" components of democracy – civil rights, the rule of law, the neutrality of the state, and the limitation of power.

"Illiberal democracies" are mixtures of democratic institutions and antidemocratic exercise of power, transitional forms between democracy and pure dictatorship. In these countries every questionable step is justified by reference to the electoral majority that voted the rulers in – that is, to the majoritarian principle of democracy. They do not throw the constitution in the bin; they tailor it to their own image. They do not ban the opposition; they merely create electoral rules and a media environment within which the incumbents can only be unseated at the cost of enormous effort. If that becomes a real threat, they do not execute their opponents; they drag them to court with tax-authority harassment and criminal charges, or discourage them from making trouble by ruining them economically. In the worst case there is always bribery, buying out the opposition, or even creating their own opposition, for which budget resources are available without limit.

As a consequence, no "awareness of illness" develops in the given society, since formally everything is democratic and lawful, and blood does not run in the streets either. (Society falls asleep.)

The new autocrats violate the spirit and the rules of the rule of law and constantly exploit loopholes, but as long as they can they insist on formal legality and avoid open repression. Whenever possible, instead of blood and gunpowder they choose legislation, economic power and targeted administrative means.

Democracy is necessarily liberal even when ultraconservative or socialist parties are in government, since it is built on constitutionalism, the separation of powers, and the broad guarantee of human and civil rights. The hybrid regime, by contrast, is a form of autocracy that is novel in its essence, yet distinguished by fundamental traits from openly repressive dictatorships such as Saudi Arabia, North Korea or Belarus. (Are we there yet? Thank God, not yet…)

Hybrid regimes hold free elections, i.e. power can in principle be replaced peacefully, but the institutional environment gives the incumbents an almost unassailable advantage. The partisan occupation of nominally neutral state institutions, the discriminative application of the law, unequal access to state resources and the incumbents' media dominance distort democratic competition to such an extent that outright electoral fraud or open state violence becomes unnecessary. (The administrative obstruction of voting by Hungarians living abroad, versus the postal votes of dual citizens who have never paid taxes in Hungary, already belongs here: 500k votes that could not be cast against Fidesz binned, 300k extra votes cast for Fidesz – that is manipulation above 10%.) The elections cannot be called fair, but they are relatively free – just free enough that it is worth running in them even in the near-certain knowledge of defeat.

The counter-attack of autocracy

While the nineties were the defensive era of autocracy, after the turn of the millennium it is no exaggeration to speak of an authoritarian revival and counter-offensive. Its international context is defined by the breaking of the hegemony of the Western democratic bloc. If the third wave of democratization was characterized by the sole dominance of the United States, the international engine of the authoritarian counter-attack is the defining role of four regional great powers: China, Russia, Saudi Arabia and Iran. They were successful not only at home in disciplining the opposition and civil society and in controlling the public sphere, but they also extended their influence beyond their borders. These states turned the fine-tuned legal and ideological tools of repression into a know-how export.

The authoritarian core states offer ideological and legislative templates to the new autocrats and mutually legitimize one another; they even influence the activity of organizations such as the UN Human Rights Council. Whenever democratic and human-rights norms are invoked against them, the slogans of "sovereignty", "national and civilizational particularities" or "traditional values" are always at hand. The once-authoritative "concern for democracy" of the consolidated democracies is by now just one opinion among many.

The hybrid franchise

The foundation of today's hybrid regimes (including the Hungarian one), and the key to their durability, lies in the lessons they drew from the failures of the past century's dictatorships:

  1. Open repression and systemic violence are too costly. Banning and persecuting opposition parties and groups, maintaining a repressive apparatus and curtailing the free press are expensive not only in money and effort. They also weaken the regime's social acceptance, i.e. its legitimacy, and thereby encourage social resistance. A restriction of "just the necessary degree" is always more efficient and more sustainable than a total ban.

Instead of abolishing the constitutional courts, for example, they restrict their powers by law, or simply enlarge them and fill the new seats with reliable cadres. With a sufficient legislative majority, the other key control institutions are occupied or hollowed out in the same way: the prosecution service and the judiciary, the media authority, the electoral bodies, the central bank, the state audit office, and so on.

Instead of legally abolishing the free press, the state's information monopoly can also be built with economic tools, with the help of oligarchs supporting the regime: a structure of the public sphere in which opposition voices do not reach a critical mass of the population. All in all, the main resource of hybrid regimes is the absence of a critical level of societal "awareness of illness".

  2. The democratic institutional system is not an obstacle but a resource. Why do the new autocrats hold elections they could even lose? Simply because not taking that risk is more costly and, in the long run, riskier. Open repression pushes citizens toward self-censorship and false conformism, which in turn deprives the rulers of reliable knowledge of the real public mood in the country. Relatively free elections are also the most representative opinion poll. Seemingly democratic party competition gives strong legitimacy to a power elected by the people again and again, and the electoral fight also closes the ranks of a ruling elite possibly burdened by internal conflicts in defense of the regime. At the same time, the leaders can gauge the regime's real support and the degree of mobilization its opponents are capable of, which also allows for self-correction. (See for example the quick U-turn on the internet tax and its substitution with motorway tolls.)
  3. The slogans and institutions of democracy are the most effective weapons against democracy. The new autocrats use democracy's own language as a weapon. Against the independent institutions meant to prevent the "dictatorship of the majority" and the excesses of whoever happens to win – constitutional courts, the judiciary, international human-rights organizations – they routinely invoke state sovereignty and the popular mandate: "bureaucrats elected by no one cannot overrule the will of a parliamentary majority elected by a landslide".
  4. Violence can be privatized and outsourced. The unmistakable distinguishing mark of a dictatorship is the comprehensive violence the state exercises against its own citizens. Representatives of the state (or of the state party) – the Gestapo, the NKVD, the ÁVO, etc. – act against opposition voices. It is they who, in the name of the state, imprison or murder journalists, and make civilians disappear into the state's prisons or camps through show trials, without court verdicts.

In a hybrid regime operating behind democratic scenery, by contrast, it is not the police or party militias that terrorize the opposition, but "civil" security services and youth organizations formally independent of the government. Responsibility for politically motivated violence can thus be deflected from the state, which, posing as a bystander, can label the repression it has itself instigated – and which serves its own interests – as a grassroots social conflict. In Hungary, the consequence-free action of the group referred to in the press only as "the skinheads", who physically obstructed the socialists' referendum initiative in 2016, shows clearly that, when needed, there are private actors at home too who will do the dirty work in place of the state.

  5. Civil organizations do not have to be banned; it is enough to break them in. In democracies, political discourse and decision-making are not the privilege of the state but are open to a much broader range of social forces. Civil groups organized around particular topics push decision-makers to deal with important but neglected problems, or perform tasks the state does not perform, or does not perform well enough. Besides the free press, independent social organizations watch and check every step of state power and call attention to possible abuses and violations of the law. The political activity of civil society is not a democratic defect, as the government claims, but one of the cornerstones of consolidated democracies. (This is what I feel to be Viktor Orbán's real sin: that he obstructs the development of Hungarian society, i.e. he prevents the genuine emergence of a civic middle class – the very thing he himself stood for 30 years ago.) The autocrats came to see independent civil society as the fifth column of the liberal democratic West; this is the real motive behind "Stop Soros".

Laws restricting the operation of civil organizations seek to bring NGOs under control and paralyze them: they make their operation subject to prior government approval, stigmatize their members with "foreign agent" laws, restrict or tax their foreign funding by invoking anti-money-laundering and anti-terrorism legislation, harass the activists with libel suits, and try to channel foreign donors' contributions into friendly pseudo-civil organizations. These measures push those who remain in the civil sphere at all toward self-censorship or complete depoliticization. Noteworthy is the appearance of "zombie NGOs" or GONGOs (government-organized non-governmental organizations), i.e. "in-house" pseudo-civil organizations maintained by the regime. Their primary function is to represent the kind of civil activity the rulers deem "correct". Like the government's propaganda media, their purpose is to discredit expert criticism independent of the government and to present the government's communication panels as professional opinion.

Although we tend to reach for historical parallels when trying to understand the present, it is important to recognize that hybrid regimes resemble the "classic" twentieth-century dictatorships – above all the most frequently cited examples, Nazi Germany and Stalin's Soviet Union – in very few respects.

This is also why Hungarian public discourse between the opposing camps (for simplicity: Fidesz and non-Fidesz) is dominated by mutually exclusive parallel realities. One side rightly points to the severe erosion of democracy and the rule of law, the spread of institutionalized authoritarian practices, and above all the grossly distorted structure of the public sphere and of party competition. Meanwhile the other side is also right when it calls the dictatorship talk and the comparisons with fascism or Stalinism exaggerated, as these simply ignore the significance of systemic violence (which, fortunately, does not exist in Hungary) and the existence of relatively free (though no longer fair) elections. Recognizing the hybridizing character of the NER is crucial, because unjustified kinship with genocidal regimes discredits all substantive criticism. Let us drop the Nazi parallels; they do harm.

The hybridization of Hungary

Democracy, dictatorship and the "grey zone" between them are not a three-compartment space but rather three different ranges of a wide scale. It is hard to determine from what point a country's political system can no longer be called a democracy. In Hungary's case, too, it is uncertain exactly when the separation of powers, the rule of law and the fairness of party competition eroded to a degree incompatible with democracy. Only one thing is certain: by now the nominally neutral state institutions and the dominant part of the public sphere have come under party control in Hungary, and the governing side is able to manipulate the electoral environment with legislative and administrative tools to such an extent that the country now meets the definition of a competitive authoritarian regime. Hungary is a civilian regime in which formal democratic institutions exist and are regarded as the primary means of gaining power, but in which the incumbents gain a significant advantage from the abuse of state institutions (…), and the playing field is tilted heavily in their favor. Competition is thus real, but not clean: a single party (let us not be polite about the KDNP now) created a new electoral-law environment whose every element – from the drawing of constituencies and the abolition of the second round, through the campaign-finance system and the regulation of television and billboard advertising, to winner compensation and the composition of the bodies responsible for elections – was tailored to the governing party's current interests.

This advantage is compounded by the party-directed steering of the public media and of the commercial media portfolio acquired by loyal, publicly funded oligarchs; the discriminative application of the law by the prosecution service, the State Audit Office and the Constitutional Court; entire industries being brought under Fidesz control; and the total existential defenselessness of the poorest strata, who depend on the state's favor. Moreover, the regime has so far felt secure enough not even to play all its cards. The State Audit Office's action against Jobbik and other opposition parties in the middle of the campaign gave only a foretaste of the further administrative options available to it should it ever feel genuinely threatened. This is what I truly fear: a cornered dictator discards all moral restraint in order to keep power and to avoid being held to account.

Make no mistake: none of this is an excuse for the incompetence and spectacular negative selection of the opposition parties, or for the total absence of effective political alternative-building. The Hungarian institutional opposition is of poor quality and clumsy. Viktor Orbán remains the most talented power politician on offer in Hungary, but on a level playing field he could be beaten. (See the case of Péter Medgyessy. BTW: from a decent economist I would not have expected a rampage committed under the name of "welfare regime change".) The blush-inducing corruption of the socialist-free-democrat coalitions rightly undermined citizens' trust in them (and led to the first two-thirds majority, which made it possible to set the NER in concrete), but even today's level of corruption does not justify the third two-thirds.

Democracy is by its nature a vulnerable system, built on the fragile balance between the majoritarian principle (the ever-changing "will of the people") and the constitutionalism that keeps the former in check (the checks and balances). This balance can easily tip in either direction. The elitist character of representative systems can detach "rule by the people" from the people. (It has detached it…) At the same time, democracy is by its very essence also exposed to the danger that the current majority destroys constitutionalism and, ultimately, replaces the system itself. The risk is especially great in our country, where divisions built on attitudes toward the party-state past split the political community and the two halves of the country regard each other as enemies. Such a political atmosphere sooner or later creates the opportunity for the extreme overreach of power: this follows not from the personal depravity of individual politicians but from the logic of power. That is: Orbán is NOT a singular villain, but a politician who understands the logic of power and professionally exploits the opposition's fumbling. Of course, that still does not make what he is doing OK.

It does not take any particular wickedness to become an autocrat; rather, it takes exceptional statesmanlike wisdom and self-restraint not to become one when one could.

The trick of hybrid regimes is that, because of the rule-of-law scenery, often even a significant part of their operators and supporters are not necessarily aware of their undemocratic and anti-freedom character. It is important to see that – contrary to the popular concept of the "post-communist mafia state" – Hungarian democracy is not being kept in suspended animation by a handful of power-hungry criminals whose exclusive goal is to drain national and EU resources as thoroughly as possible. Corruption, and public money losing its public character, is not the goal but the means. It is not public power that is subordinated to rent-seeking, but the other way around: institutionalized corruption is one of the preconditions of keeping power. This is important: the siphoned-off money is needed to feed the clientele and to keep the brainwashing media alive. It is no accident that no circulation or viewership figures are published for these outlets.

The good news is that the main strength of hybrid regimes is also their main weakness. As long as they hold elections, they can also be replaced peacefully. And if they tried to prevent this by banning or spectacularly rigging the elections, they would undermine the basis of their own legitimacy. True, an opposition victory does not necessarily mean the restoration of democracy: defeated authoritarian governments are often followed by similar autocrats.

If the defenders of democracy want to bring down the system through elections, they "simply" need a democratic majority. True, creating that majority – precisely because of the nature of the system – requires far more creativity and effort than in consolidated democracies. However unpromising it may sound, in a clean and balanced party competition even such an opposition would eventually have to be able to win. But while in a consolidated democracy even a dismal opposition may be capable of winning, in a hybrid regime only an opposition capable of above-average ideological and communication innovation, and of exceptional unity, can win. It is obvious that today's Hungarian opposition is not like that. Indeed, many signs suggest that its dominant forces set off long ago down the path of co-optation: that is, under blackmail or bribery they have become part of the system's scenery, as in so many other hybrid regimes.

Two things are clearly visible even today: first, that technical wizardry (joining forces or coordination, an electoral party or civil candidates) will not solve the problem by itself; it only diverts energy from the real task: producing a new majority. Against a voter base of nearly three million, any combination of the existing opposition's base is at best enough to prevent another two-thirds, but by no means enough for a democratic restoration. To achieve that, the opposition parties must adjust not only their long-term political strategy but also their relations with one another, their self-cleansing, and their new means of reaching voters to the fact of the system's ever-deepening hybridization.

Today opposition politicians, while talking about dictatorship, do politics as if they were competing in a consolidated democracy, although neither is true: they hold press conferences, submit bills, read out interpellations, and in the morning and evening they express outrage or analyze the chances of joining forces in the opposition media, or perhaps try to win a five-minute debate on public television against the propaganda pouring onto the viewer all day long.

The second axiom is that, despite all of the above, winning politics back is the task of politics. Civil organizations, the independent press and the "opinion-forming intelligentsia" have important norm-setting and watchdog functions. But none of them can constitute an institutional political alternative. The same is true of the European Union, which was not created to battle authoritarian regimes and, by all appearances, is not capable of doing so in its current state. All of this is the task of the parties.  http://ketfarkukutya.com/

The memoirs of Kilgore Trout nr. 5: Humpty Dumpty

 “Humpty Dumpty sat on the wall, Humpty Dumpty had a great fall. All the king’s horses and all the king’s men couldn’t put Humpty together again.” 

humpty-dumpty.png

A few weeks ago, I got into a discussion with a friend of mine who doubted the validity of traditional employee satisfaction (ES) surveys. We agreed that Performance = Ability x Motivation x Opportunity to perform. In order to get there you need to let intrinsic motivation do its job. Employee satisfaction surveys attempt to track the latter two ingredients. The following blog post is focused on the question: does the old-style way of measuring ES make any sense, and if not, should we borrow a page from the book on customer loyalty measurement instead?

 The known aspects of the IT labor market:

  • the demand – supply equilibrium will remain a dream for the foreseeable future (one might argue that in Hungary this stands true not only for this industry).
  • The lack of balance will keep increasing the cost of developers until:
    1. the margin you can make on a SW engineer will diminish (can a developer earn more than the CEO? :-))
    2. an untapped supply of skilled resources shows up (eg. you move your R&D to India, not as simple as it sounds)
    3. growing abstraction reduces the complexity of the job, ie. a less qualified person can do it; hence the new undergraduate degree below the traditional BSc and the cross-training institutions that turn a chemistry teacher into a Java guy in a few months. (BTW: who will teach chemistry in 2030?)
    4. an economic downturn makes the demand shrink. (as per Forrest Gump: shit happens).

 A word on compensation before we talk about ES: until the balance is achieved, regularly adjusting compensation is unavoidable; otherwise your attrition goes through the roof, and you will hire the backfills at market price (+ a risk premium) anyway. (See Herzberg's motivation-hygiene theory for the details.) Painful, no doubt, but not doing it will make your existing employees pay the “loyalty tax”, ie. the longer they remain at their current company, the more likely they are paid under market, so they jump.

 If you pay them well already then tracking employee satisfaction becomes key since you want to know where to act. This is where companies run into the HR version of the Heisenberg uncertainty principle: the measurement itself will have an effect on the measured quantity. 

  • Exit interviews: this is an autopsy; they may discover the reasons why folks leave, but too late. (a trailing indicator)
  • Traditional employee satisfaction surveys are weird:
    1. HR wants to know EVERYTHING, so they come up with a 70+ question survey (considered a book by Gen Y) – the developer gets bored after the 10th question and starts clicking her choices in a diagonal pattern because it looks cool, or selects the super happy answer for every question just to get it done fast. (It is fun to create pivot tables on the results, but the fun goes away quickly when you realise that the questions have changed vs. last year, not to mention that a large portion of the responders have changed as well.)
    2. After asking all these questions, Excel macros collapse the whole thing into a single number, and this is the only thing management will remember. (As we know from Douglas Adams: the meaning of life is 42.)
    3. Management is measured on the response rate, so employees get dozens of mails from various big shots instructing (begging) them to crank up this metric.
    4. They run it once a year (some firms every second year), create a committee (per location plus the HQ) to evaluate the results, work hard for 6 months and come up with a plan that they execute in the next 18 months. By this time 15-25% of the responders already left the original org unit (or the firm).
    5. They compare the data between their international sites (thousands of miles apart, with different cultures) while skipping the comparison to the local labor market participants (called the competition) because “this is expensive” (roughly the recruitment cost of two backfills).
    6. They DO NOT share the results with the responders; only a stripped-down version (in pdf format) is presented to mid management. (Uuh-ooh, the favourable response for question 67 went down by 4%, how embarrassing.)
    7. The management may not be genuinely interested in the responses; they do it because they have to. I recall a place in my previous life where OHI (Organisational Health Index) directly impacted every manager’s compensation. In retrospect it was a good thing, but it could also be used to undermine your manager's chances for a bigger job.
    8. IF the management did care, they arranged small group survey evaluation sessions with their staff, where the results did become actionable, but at the expense of losing anonymity. 

 Here comes the ugly part: at another place in my previous life we managed to increase ES year over year while our attrition also increased (the opposite of the negative correlation you would expect). Something is not right with the traditional approach. From this point on I will talk about ideas from the discussion that I am happy to debate.

 In our view employee satisfaction surveys miss the goal  if:

  1. There is no action based on the feedback (the definition of insanity is doing the same thing over and over again and expecting a different outcome) - BAD
    2. The questions are wrong - I would argue that they are reasonable, just way too many (the absolute winner from my previous life went up to 300 questions, see my comment above on responding in autopilot mode).
  3. The way of asking is wrong – the rest of the post will focus on this one.

 And now for something completely different: eNPS

 Net Promoter Score is not new; online retailers have been using it for a long time to assess customer loyalty. In a nutshell NPS is a single question: “On a scale from 0 to 10, how likely is it that you would recommend this product to a friend?” Answers 9-10 are promoters, 7-8 are passives and 0-6 are detractors. NPS = (% of promoters - % of detractors), a number between -100 and +100. The focus is not only on the result but on the trend over time. Employee NPS is the same thing, but with the twist of treating employees as if they were consumers of the “product called employment”. So it looks like this: "On a scale of 0 to 10, how likely would you be to recommend this company to a friend or colleague as a place to work?"
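
To make the arithmetic concrete, here is a minimal sketch (in Python, with made-up responses) of how an eNPS value falls out of raw 0-10 scores; the function name and the sample data are illustrative only, not part of any real survey tool.

def enps(scores):
    """eNPS from raw 0-10 answers: % promoters (9-10) minus % detractors (0-6)."""
    if not scores:
        raise ValueError("no responses yet")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# 4 promoters, 3 passives, 3 detractors out of 10 responses -> eNPS = +10
print(enps([10, 9, 9, 9, 8, 7, 7, 6, 5, 3]))

On a team of 30, a single promoter turning detractor moves the score by roughly 7 points, which is exactly why the trend matters more than any single reading.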

 If you google the term eNPS, you will find hundreds of articles on it. The interesting question is why companies are not already using it all over the place.

  • Anonymity: The broad belief is that anonymity is key to getting honest answers, otherwise responders would sugar-coat their feedback to avoid retaliation from their management. The counter argument is that anonymity is the hotbed of whining (“Management will not know who answered, I don’t expect any change, so I ventilate a bit.”). One way to ease the concerns about retaliation is an opt-in, ie. the responder can disclose her identity if she wants (only a fraction of responders will choose it), OR even better the management builds a level of trust where employees KNOW that there will be no retribution. (For the record: any decent SW developer gets a LinkedIn in-mail from a recruiter every week, so she will be gone in a month if the manager is stupid enough to retaliate.) The good news: without anonymity you can contact the responder and follow up on her reasons for a low score. Personal discussion and acting upon the feedback do miracles.
  • Frequency: originally NPS was collected after every transaction. This might be overkill for eNPS, but once per quarter, potentially as part of the All hands meeting or in small chunks on a sub-unit level, makes sense. It is a bit like CI/CD (Continuous Integration/Continuous Delivery) for HR.
  • Complexity – the number of questions: there are multiple schools in this matter: some ask a clarifying question if a response is under 7, some also ask the same question about the products of the company the employee works for.

 FYI: it does not matter how you measure employee satisfaction if the basics are not right: ie. if the work is mundane, if employees cannot see how their work contributes to the greater goal (no intrinsic motivation), if there is no clear connection between achievement and appreciation, if there is no well-articulated company vision (start with the why), or if the line managers are a pain in the neck.

 The final word: traditional employee satisfaction surveys may no longer be enough. If you want to keep your hand on the pulse of the organisation, you need something shorter, more actionable and more frequent. A viable option is eNPS. Give it a try.

The floorshrink diaries #6 - Finding Herbie

A few weeks ago, in a customer care related discussion, I proposed a mini customer satisfaction survey, something like the one on the right. The idea was a copycat: I saw this simple customer feedback device at Heathrow airport.

heathrow.JPG

I figured it was great since it allowed the customers to express their opinion in a second, while you could log the time when the feedback arrived, ie. you would know which shift generated the feedback. Guess what the answer to my suggestion was: “We don’t have the resources to do anything with that feedback anyway, so better not to rattle the cage in the first place…” Ouch. As my father used to say, “anger without power is folly”, so I wrote this blog post instead.
According to Mary Poppendieck there are three easy ways to piss off a client:

1. reduce the quality of service (eg. your telco sells you an 8 Mbit ADSL that in fact is 3.5 Mbit)
2. overcharge for the service (your telco still charges as if their service was indeed 8 Mbit)
3. keep the client in anxiety (your telco provides LTE in your area, but with a small data limit and without a Wi-Fi router (with a few Ethernet ports and a built-in LTE modem), and it does not let you off the hook with your existing contract).

The result: you buy an LTE based internet service from ANOTHER telco and show the middle finger to the first provider (and you feel good for a moment). Learning: the client is a human being and humans are predictably irrational if you push them over the limit.

In the following I will refer to examples from the imaginary ACME Corp. Disclaimer: any resemblance to actual firms is purely coincidental.
At ACME the Jira queues of IT are graveyards of dead wishes; some items are 2+ years old, and you can bet that even the requestors forgot about them, let alone whether they are still with ACME. As Mary puts it, anything beyond the output capacity of a service component is wishful thinking: the surplus will never get executed.
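
A back-of-the-envelope sketch (Python, with made-up rates) of why such a queue can only grow: whenever the arrival rate exceeds the service capacity, the difference accumulates week after week.

# Hypothetical rates: 25 new Jira tickets arrive per week, the team can close 20.
arrivals_per_week, capacity_per_week = 25, 20
backlog = 0
for week in range(52):
    backlog += arrivals_per_week - capacity_per_week  # the surplus never gets executed
print(backlog)  # 260 open items after one year, and still climbing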

theory_of_constraints.JPG

 On the other hand this setup guarantees customer anxiety: unpredictable delivery times (“when will my service request be served?”), the “who shouts loudest gets served first” method as the selection criterion for what to work on, topped with employee frustration (“ever changing priorities, clients bypassing the queue with ad-hoc requests, no sense of self-accomplishment”). This setup at ACME is not new; after several years it made its way into the genes of the organization: top performers are those who are good at fighting fires, but not so good at resource planning. Okay, we have dissatisfied clients and dissatisfied staff (and these folks vote with their feet), so what shall we do about it? (Relax, this is at ACME.) Let’s have a look at what others have suggested to solve a similar problem in manufacturing.

This blog post is based upon the book by Eliyahu Goldratt, The Goal. I am not going to summarize this brilliant novel here (I suggest you read it for yourself); I just name a few concepts necessary to carry on. Productivity is accomplishing your goal: to make money. You can play with the following three levers to make it happen:

  • Throughput – the rate at which your company generates money through selling products or services.
  • Inventory – the money your company has invested in purchasing things which it intends to sell.
  • Operational Expenses – the money your company spends in order to turn inventory into throughput.

Now imagine a hiking trip with a bunch of Boy Scouts, one of them being Herbie. During the hike you notice that the line of hikers is stretching longer and longer as time passes. You recognize that one particular scout, Herbie, is the slowest hiker in the group. Herbie holds up all the scouts behind him. One would think that even though these hikers all walk at different speeds, their average rate of progress should be estimable, and that this average should become the nominal rate of progress for the entire team. Instead the troop is completing the hike at the rate of its slowest member, Herbie, who falls farther and farther behind because he cannot go any faster. Herbie is a CONSTRAINT. If you want to reach your goal - in this case the entire team arriving at the camp site before sunset - you need to eliminate the constraint - in this case by redistributing the stuff in Herbie’s backpack to the other kids and by placing Herbie at the front of the troop. It may sound a bit extreme, but the same principles are at work in the Boy Scout hike, in the operation of a manufacturing plant and in running an IT shop. Learning: constraints define the overall throughput of your system. Identifying the bottlenecks in your operation (finding Herbie) is the first step in improving the throughput of your business, regardless of whether this is a hiking trip, a manufacturing plant or an IT department.
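
The same point in code form, as a minimal sketch with invented stage capacities: the chain delivers at the pace of its slowest link, and speeding up anything other than the bottleneck changes nothing.

def pipeline_throughput(capacities):
    """Items per day a chain of dependent steps can deliver: the slowest step wins."""
    return min(capacities.values())

stages = {"analysis": 12, "development": 8, "testing": 5, "deployment": 20}
print(pipeline_throughput(stages))  # 5 - testing is Herbie
stages["development"] = 16          # improving a non-bottleneck
print(pipeline_throughput(stages))  # still 5, nothing gained
stages["testing"] = 9               # lightening Herbie's backpack
print(pipeline_throughput(stages))  # 9 - the whole chain speeds up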

Viewing an organization from the operating expense perspective causes one to believe that an organization is composed of independent variables. Viewing it from the throughput perspective will make you realize that the organization is a collection of dependent variables, where an action targeting one of these variables can have a significant effect on the throughput of the entire system. An example: in the Hungarian medical system someone, in his infinite wisdom, figured he could save money by banning the repair of resectoscopes (they are expensive), which were used to carry out TURP (transurethral resection of the prostate) to cure BPH (benign prostatic hyperplasia). This decision forced the docs to go back to traditional open surgery, with ten times the hospital treatment cost, healing time and chance of complications. Cost saving mission accomplished, right? WRONG.
Learning: partial efficiencies are the number one enemy of overall productivity. Managing the parts of an organization as if they were independent is not the right thing to do if your goal is to increase the throughput of the org. If you want to increase throughput you must define the real goal of your entire group, and then you must find and eliminate (by increasing the performance of) the constraints in the value stream that produces that throughput.

We learned the following so far:

1. Keeping the client in anxiety is a guaranteed way to piss them off.
2. A properly upset client will react, so unless you are an unchallengeable monopoly, you risk your business. (Ignoring this fact triggered the whole post.)
3. Maintaining work item queues beyond the capacity of a given service provider is a proven method to annoy your clients (and your employees).
4. Constraints define the overall throughput of your system.
5. Going after local efficiencies will undermine your overall productivity.

The rest of this post is just speculation on what I would do if I worked for ACME. I realize this is pure theory (although supported by another book, The Phoenix Project). So here we go:

suggested_action.JPG

I dreamed enough for today. If you happen to have the appetite to test the above suggestions in your daily life, pls. let me know. I would be happy to assist you. As always I appreciate any feedback on this blog post.

References:

  • https://www.slideshare.net/AgileOnTheBeach/value-stream-mapping-9358435 
  • http://www.poppendieck.com/workshop.html 
  • https://www.amazon.com/Goal-Process-Ongoing-Improvement/dp/0884271951 

 

The memoires of Kilgore Trout nr.4 - Dr. Strangelove

A few days ago, I found myself in a discussion with a group of university students. The topic eventually revolved around enterprise architectures. This post is the summary of that conversation. OR: it is about how I stopped worrying about Enterprise Architectures and learned to care about teams instead.

The age of creative destruction – and the smoking gun is information technology

The term was coined by Joseph Schumpeter in 1942 and it goes like this: the "process of industrial mutation that incessantly revolutionizes the economic structure from within, incessantly destroying the old one, incessantly creating a new one." I used the average lifespan of companies on the S&P 500 index as an example. It has been dropping since the 1980s, that is, a new member can get into this prestigious club faster than ever before, but can also get out of it at a similar speed. (Footnote: those who drop out usually don’t stop falling at this point.)

average_company_lifespan.JPG

A collision of views: on the one hand IT is becoming a commodity (see Nick Carr’s IT Doesn’t Matter), on the other hand it is the force (accompanied by new business models) that shakes up industries within a few years (see Marc Andreessen’s Why Software Is Eating the World). I remember the BlackBerry CEO’s famous last words at the 2007 iPhone launch: 'We'll be fine'.

There are two consequences of this shift: the business demands an IT that can respond to challenges within days rather than months, as it did before. This requires an underlying infrastructure that can grow or shrink within hours and can scale to “infinity”. The trend is clear, just have a look at the chart below on IT infrastructure spend (source: Mary Meeker’s keynote at the INTERNET TRENDS 2017 conference, slide 181). Chances are you will have some flavour and combination of IaaS/PaaS, partially or entirely at a public provider, within 5 years.

it_infrastructure_spend.jpg

The other no-brainer is the front end: Many years ago, my team was tasked to protect the market share of Internet Explorer in Hungary. (a bummer in retrospect.) We used http://ranking.gemius.com/hu to track progress and we could not avoid noticing that it was not just IE under fire, but Windows as well. Fast forward 6 years and we arrived at another chart that was unthinkable just a decade earlier.

internet_usage_ww.jpg

The picture may not be this decisive if we factor in the time spent on the web, but either way your stuff has to have a mobile frontend; you are even likely to follow a mobile first, desktop second approach.

The last step in my less than sophisticated Enterprise Architecture pitch was the stuff between the servers and the mobile front ends: containers and microservices. I ended up with something like this:

an_enterprise_infrastructure_bet.jpg

Since the topic was Enterprise Architectures, I needed “a chart” on the subject. I was thinking about a TOGAF chart, then it dawned on me that this thing was already hanging on the wall (printed in A1+ size) of any decent enterprise architect. The other certainty was that the system envisioned on this picture was never built. I had another picture in my mind: the elephant in the room, namely that these frameworks forget about the real thing: the people who design and implement the architecture.

the_elephant_in_the_room.jpg

Conway’s law

There are forces that shape your enterprise architecture that have nothing to do with technology. One of them is Conway’s law: “organizations which design systems (in the broad sense used here) are constrained to produce designs which are copies of the communication structures of these organizations.” The law is based on a sociological observation. It is a consequence of the fact that two software modules A and B cannot interface correctly with each other unless the designer and implementer of A communicates with the designer and implementer of B. Thus the interface structure of a software system will necessarily show a congruence with the social structure of the organization that produced it. Now get this: “the very act of organizing a design team means that certain design decisions have already been made, explicitly or otherwise.” In the case of ACME (a strictly hypothetical company) I have seen cases where fragments of a dev team were spread over 3 continents (say two guys in each time zone, due to tactical hiring practices driven by speed and sprinkled with cost considerations). It is no surprise that this setup produced scary designs and very high attrition.

Fred Brooks made this observation in the Mythical Man-Month in 1975: “Because the design that occurs first is almost never the best possible, the prevailing system concept may need to change. Therefore, flexibility of organization is important to effective design.” So have a look at the structure of the key stakeholder org units and the dev team itself before you start your system design.

So how can we escape the technical design doomsday with non-technical measures?

  • Don’t use technologies that require scarcely available specialists! LinkedIn is a good starting point: if you find fewer than 300 people in Hungary who refer to a given technology in their profiles, it is a safe bet to drop it; half of them are sales or PMs who dealt with people who knew the technology, the other half will not move. (Try it with keywords like Erlang.)
  • Avoid prima donnas! I had a case when two of these folks started to disparage each other’s design in front of the CLIENT! (Right after pointing out that the client’s operations team members had the brains of a midget.)
  • Create small teams with multi-faceted skillsets and the shortest possible communication paths. Make sure that the organization is compatible with the product architecture!
  • Collocate your testers with the dev team: a remote testing team in India – while certainly the cheapest – may not be the optimal solution.
  • Be prepared for homomorphism: organizations with long-lived systems will adopt a structure modelled on the system. (Particularly true for mainframe based systems, a state within the state; the secret of the longevity of these beasts is that it is even more expensive to get rid of them than to keep them.)
  • Go with short release cycles - give something to your client soon, don’t let them change their mind or sponsor.
  • Keep in mind the theory of constraints: constraints define the overall throughput of your dev team. Identifying the bottlenecks is the first step in improving the throughput. Use KANBAN (a minimal sketch of the WIP-limit idea follows this list).
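
A minimal sketch of the WIP-limit mechanic behind Kanban (the board layout and the limits below are hypothetical): a column that is full stops accepting new work, so the bottleneck surfaces instead of hiding behind ever-growing queues.

# Hypothetical per-column work-in-progress limits (None = unlimited)
WIP_LIMITS = {"todo": 10, "in_progress": 3, "review": 2, "done": None}

def can_pull(board, column):
    """An item may enter a column only while that column is below its WIP limit."""
    limit = WIP_LIMITS[column]
    return limit is None or len(board[column]) < limit

board = {"todo": ["A", "B", "C"], "in_progress": ["D", "E", "F"], "review": ["G"], "done": []}
print(can_pull(board, "in_progress"))  # False - finish something before starting more
print(can_pull(board, "review"))       # True  - review still has capacity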

All the best Guys, it was great talking with you! If anything sticks from all of this above, be it the movie:-).

References:

Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb (1964)  

http://www.econlib.org/library/Enc/CreativeDestruction.html

https://www.innosight.com/insight/creative-destruction/

https://a16z.com/2016/08/20/why-software-is-eating-the-world/

https://hbr.org/2003/05/it-doesnt-matter

BlackBerry's Famous Last Words At 2007 iPhone Launch: 'We'll Be Fine'

http://dq756f9pzlyr3.cloudfront.net/file/Internet+Trends+2017+Report.pdf

http://gs.statcounter.com/press/mobile-and-tablet-internet-usage-exceeds-desktop-for-first-time-worldwide

Using the TOGAF® 9.1 Framework with the  ArchiMate® 3.0 Modeling Language

Melvin Conway: How Do Committees Invent?

http://www.melconway.com/Home/Conways_Law.html

Once upon a time there was a gas factory

For years my office had a view onto the water tower of the old Gas Factory in Óbuda. From time to time I lamented about sneaking in (the complex was guarded against trespassers due to its seriously deteriorated condition) and taking a few pictures, but for some reason it was always dark by the time I got out of the office, so my dream did not materialize.

One day index.hu published a post from Urbanista that featured images of the gas factory. I was pissed off for two reasons: first, someone got there earlier; second, there was much more in these amazing buildings than this guy managed to show.

There was a small photographer community around me; some of them I had infected with photography during my MSFT services years, some of them were bitten by this bug earlier, but for sure they were more talented than me. On the other hand I knew something that was essential to pull this off: I could cut a deal with the owners of both sections of the area and arranged a legal and free entry for two whole days. We had two fantastic and a bit risky days, eg. I climbed up to the top of the water tower (not easy for a limbless guy); by the time I got to the top my friends downstairs had started to divide up my lenses, since they figured I would surely break my neck in the process. Find my shoes on the picture below :-)

 

Later I ran the selection process (artists can be amazingly difficult when it comes to evaluating each other's work), did the DTP work and published the result on blurb.com. I fell in love with the project, so I even rented a helicopter and flew over the area to take a shot from above with all 4 buildings on it. (This was before the drone age, folks.)

Here is the outcome of the project: http://www.blurb.com/b/4429187-volt-egyszer-egy-g-zgy-r 

 

Computer quiz

What can an IT guy do if he happens to be a history buff? He goes for the overlapping area, that is, the history of computer science. The quiz below is from the Summer closing party of 2016. Participants were allowed to use their mobile phones to solve it. Then a quite amazing thing happened: out of 170 participants, a group of 3 HR ladies + 1 engineer won the competition with a fair margin. Their secret weapon: they split the task into four subsets and googled very efficiently.

So here we go, happy solving. The cheat sheet comes right after the questions, so do not scroll down too fast if you want to have fun!

Achievements
The inventor of the punched card loom - 1801
The designer of the Analytical Engine - 1837
The daughter of a famous English poet who wrote the algorithm for the Analytical Engine to compute Bernoulli numbers - 1842
The inventor of the Boolean algebra - 1854
The inventor of the punched card tabulator - 1889, the founder of The Tabulating Machine Company (IBM from 1924) - 1911
The most famous codebreaker - deciphering the ENIGMA - 1941
The designer of the Colossus – 1944 (the world's first programmable, electronic, digital computer, although it was programmed by switches and plugs and not by a stored program.)
The inventor of the differential analyzer, the author of the seminal paper “As we may think” - 1945
A Hungarian polymath who created the stored-program concept and wrote the First Draft of a Report on the EDVAC - 1945
The original conceptual designer behind IBM’s - Harvard Mark I – 1944 (a general purpose electromechanical computer)
The designers of ENIAC – the first Turing-complete digital all-purpose computer – 1946 and of the EDVAC - 1949
The inventors of the transistor - 1947
The dean of Stanford University, the father of Silicon Valley - 1956
Inventor of the first compiler, the grandmother of COBOL - 1959
The founder of Digital Equipment Corp - the company that created the first minicomputer, the PDP-1 - 1959
The inventor of the microchip at Texas Instruments - 1959
The inventor of the microchip at Fairchild Semiconductor – 1959, co-founder of Intel - 1968
The creator of the mathematical theory of packet networks - 1961
Author of the concept of packet switched networks United Kingdom - 1967
Author of the concept of packet switched networks - USA – 1964 (later ARPANET - 1969)
The guy who managed the development of IBM's System/360 family of computers and the OS/360 – 1965, the author of “The Mythical Man-Month”
The designer of the on-board flight software for the Apollo space program - 1968
The author of Moore’s law – 1965, Co-founder of Intel - 1968
Hungarian born CEO of Intel, the author of the book “Only the paranoid survive”
A pioneer in Concurrent programming, the mutual exclusion and Distributed computing - 1965
The inventor of the mouse, a pioneer of human-computer interaction - 1968
The creator of PASCAL - 1970
the inventors of TCP/IP - 1972
The founders of Microsoft – 1975
Designer of several PDP machines, then overseeing the creation of the VAX - 1977
The idea man behind the Dynabook –1972, the inventor of OOP, GUI, the designer of Smalltalk at XEROX PARC
The designer of XEROX Alto – 1973, the inspiration for the Macintosh
The founder of MITS, the designer of MITS Altair - 1975
The designer of Apple I – 1976, and Apple II – 1977
The authors of the original Unix and the C language - 1978
The guy behind the first BSD Unix - 1977, the creator of NFS, one of the co-founders of SUN Microsystems - 1982
The inventors of the Ethernet at XEROX PARC- 1980
The initiator of the GNU project - 1983 and the Free Software Foundation - 1985
The father of the Web, the creator of HTTP - 1989
The creator of Linux - 1991
A key OS designer, who created RSX, VMS and later at MSFT designed Windows NT 3.1 - 1993
The father of Java - 1995

 

 This space is left blank intentionally - do not scroll down unless you are done with your guesswork! 

 

Cheat sheet

Names - Achievements - References


Joseph Marie Jacquard - The inventor of the punched card loom - 1801
https://en.wikipedia.org/wiki/Joseph_Marie_Jacquard
Charles Babbage - The designer of the Analytical Engine - 1837
https://en.wikipedia.org/wiki/Charles_Babbage
Ada Lovelace - The daughter of a famous English poet who wrote the algorithm for the Analytical Engine to compute Bernoulli numbers - 1842
https://en.wikipedia.org/wiki/Ada_Lovelace
George Boole - The inventor of the Boolean algebra - 1854
https://en.wikipedia.org/wiki/George_Boole
Herman Hollerith - The inventor of the punched card tabulator - 1889, the founder of The Tabulating Machine Company (IBM from 1924) - 1911
https://en.wikipedia.org/wiki/Herman_Hollerith
Alan Turing - The most famous codebreaker - deciphering the ENIGMA - 1941
https://en.wikipedia.org/wiki/Alan_Turing
Tommy Flowers - The designer of the Colossus – 1944 (the world's first programmable, electronic, digital computer, although it was programmed by switches and plugs and not by a stored program.)
https://en.wikipedia.org/wiki/Tommy_Flowers
Vannevar Bush - The inventor of the differential analyzer, the author of the seminal paper “As we may think” - 1945 https://en.wikipedia.org/wiki/Vannevar_Bush
John von Neumann - A Hungarian polymath who created the stored-program concept and wrote the First Draft of a Report on the EDVAC - 1945 https://en.wikipedia.org/wiki/John_von_Neumann
Howard H. Aiken - The original conceptual designer behind IBM’s - Harvard Mark I – 1944 (a general purpose electromechanical computer) https://en.wikipedia.org/wiki/Howard_H._Aiken
John Mauchly and J. Presper Eckert - The designers of ENIAC – the first Turing-complete digital all-purpose computer – 1946 and of the EDVAC - 1949
https://en.wikipedia.org/wiki/John_Mauchly
https://en.wikipedia.org/wiki/J._Presper_Eckert
William Shockley - John Bardeen - Walter Houser Brattain - The inventors of the transistor - 1947
https://en.wikipedia.org/wiki/William_Shockley
https://en.wikipedia.org/wiki/John_Bardeen
https://en.wikipedia.org/wiki/Walter_Houser_Brattain
Frederick Terman - The dean of Stanford University, the father of Silicon Valley - 1956
https://en.wikipedia.org/wiki/Frederick_Terman
Grace Hopper - Inventor of the first compiler, the grandmother of COBOL - 1959
https://en.wikipedia.org/wiki/Grace_Hopper
Ken Olsen - The founder of Digital Equipment Corp - the company that created the first minicomputer, the PDP-1 - 1959 https://en.wikipedia.org/wiki/Ken_Olsen
Jack Kilby - The inventor of the microchip at Texas Instruments - 1959
https://en.wikipedia.org/wiki/Jack_Kilby
Robert Norton Noyce - The inventor of the microchip at Fairchild Semiconductor – 1959, co-founder of Intel - 1968
https://en.wikipedia.org/wiki/Robert_Noyce
Leonard Kleinrock - The creator of the mathematical theory of packet networks - 1961
https://en.wikipedia.org/wiki/Leonard_Kleinrock
Donald Davies - Author of the concept of packet switched networks United Kingdom - 1967
https://en.wikipedia.org/wiki/Donald_Davies
Paul Baran - Author of the concept of packet switched networks - USA – 1964 (later ARPANET - 1969)
https://en.wikipedia.org/wiki/Paul_Baran
Fred Brooks - The guy who managed the development of IBM's System/360 family of computers and the OS/360 – 1965, the author of “The Mythical Man-Month”
https://en.wikipedia.org/wiki/Fred_Brooks

Margaret Hamilton  - The designer of the on-board flight software for the Apollo space program - 1968
https://en.wikipedia.org/wiki/Margaret_Hamilton
Gordon Moore - The author of Moore’s law – 1965, Co-founder of Intel - 1968
https://en.wikipedia.org/wiki/Gordon_Moore
Andy Grove - Hungarian born CEO of Intel, the author of the book “Only the paranoid survive”
https://en.wikipedia.org/wiki/Andrew_Grove
Edsger W. Dijkstra - A pioneer in Concurrent programming, the mutual exclusion and Distributed computing - 1965
https://en.wikipedia.org/wiki/Edsger_W._Dijkstra
Douglas Engelbart - The inventor of the mouse, a pioneer of human-computer interaction - 1968
https://en.wikipedia.org/wiki/Douglas_Engelbart
Niklaus Wirth - The creator of PASCAL - 1970
https://en.wikipedia.org/wiki/Niklaus_Wirth
Vinton Cerf and Bob Kahn - the inventors of TCP/IP - 1972
https://en.wikipedia.org/wiki/Vint_Cerf
https://en.wikipedia.org/wiki/Bob_Kahn
Paul Allen and Bill Gates - The founders of Microsoft – 1975
https://en.wikipedia.org/wiki/Paul_Allen
https://en.wikipedia.org/wiki/Bill_Gates
Gordon Bell - Designer of several PDP machines, then overseeing the creation of the VAX - 1977
https://en.wikipedia.org/wiki/Gordon_Bell
Alan Kay - The idea man behind the Dynabook –1972, the inventor of OOP, GUI, the designer of Smalltalk at XEROX PARC
https://en.wikipedia.org/wiki/Alan_Kay
Charles P. Thacker - The designer of XEROX Alto – 1973, the inspiration for the Macintosh
https://en.wikipedia.org/wiki/Charles_P._Thacker
Ed Roberts - The founder of MITS, the designer of MITS Altair - 1975
https://en.wikipedia.org/wiki/Ed_Roberts
Steve Wozniak - The designer of Apple I – 1976, and Apple II – 1977
https://en.wikipedia.org/wiki/Steve_Wozniak
Dennis Ritchie and Ken Thompson - The authors of the original Unix and the C language - 1978
https://en.wikipedia.org/wiki/Dennis_Ritchie
https://en.wikipedia.org/wiki/Ken_Thompson
Bill Joy - The guy behind the first BSD Unix - 1977, the creator of NFS, one of the co-founders of SUN Microsystems - 1982
https://en.wikipedia.org/wiki/Bill_Joy
Robert Metcalfe and David Boggs - The inventors of the Ethernet at XEROX PARC- 1980
https://en.wikipedia.org/wiki/Robert_Metcalfe
https://en.wikipedia.org/wiki/David_Boggs
Richard Stallman - The initiator of the GNU project - 1983 and the Free Software Foundation - 1985
https://en.wikipedia.org/wiki/Richard_Stallman
Tim Berners-Lee - The father of the Web, the creator of HTTP - 1989
https://en.wikipedia.org/wiki/Tim_Berners-Lee
Linus Torvalds - The creator of Linux - 1991
https://en.wikipedia.org/wiki/Linus_Torvalds
Dave Cutler - A key OS designer, who created RSX, VMS and later at MSFT designed Windows NT 3.1 - 1993
https://en.wikipedia.org/wiki/Dave_Cutler
James Gosling - The father of Java - 1995
https://en.wikipedia.org/wiki/James_Gosling

The bookworm academy

In the last couple of years I started to arrange the books I have read into a list that I could give to my students if I become some sort of a teacher one day. (The age of retirement will be around 68-70 by the time I get there, so I have plenty of time to accomplish this.)

This list is far from complete. I figured I would solve this Tom Sawyer style, ie. if you send me suggestions, I will add them to the catalog. (In fact some of these books came as recommendations from colleagues at my workplace.)

So here we go, the first cut of the titles Electrical Engineering undergraduates at the Tech University might want to read to broaden their views about the people they work with and the professions they may not respect yet. I know that electrical engineers (developers in particular) are seated at the right hand of the Lord (it was the same 30 years ago, see on the left), but I think learning the basics of other areas of life – psychology in particular – will help you reach your goals faster.

If you get into the situation that you need to lead other people (nope, you are not a naturally born leader…)

Elliot Aronson

The social animal

The book offers an introduction to social psychology. It probes the patterns and motives of human behavior, covering such diverse topics as conformity, obedience, politics, race relations, interpersonal attraction, and many others.

Patrick Lencioni

Death by Meeting

The book is centered around a cure for the most painful yet underestimated problem of modern business: bad meetings. 

Eric Berne

Games People Play

The foundation of transactional analysis: The book is a clear catalogue of the psychological theatricals that human beings play over and over again.

Jim Collins

Good to Great

Collins takes up a challenge in the book: identifying and evaluating the factors and variables that allow a small fraction of companies to make the transition from merely good to truly great.

The Arbinger Institute

Leadership and Self-Deception

The authors expose the fascinating ways that we can blind ourselves to our true motivations and unwittingly sabotage the effectiveness of our own efforts to achieve success and increase happiness.

Cynthia Shapiro

Corporate Confidential 

A world of insider information and insights that can save your career!

Laszlo Bock

Work Rules! 

Learn from your best employees - and your worst; hire only people who are smarter than you are; pay unfairly (it's more fair!); don't trust your gut: use data to predict and shape the future; default to open - be transparent and welcome feedback.

Malcolm Gladwell

Outliers: The Story of Success 

Malcolm Gladwell takes us on an intellectual journey through the world of "outliers"--the best and the brightest, the most famous and the most successful. He asks the question: what makes high-achievers different?

Sun Tzu

The Art of War

“If you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not the enemy, for every victory gained you will also suffer a defeat. If you know neither the enemy nor yourself, you will succumb in every battle.”

Stephen Covey

The 7 Habits of Highly Effective People 

At the start of every week, write a two-by-two matrix on a blank sheet of paper where one side of the matrix says “urgent” and “not urgent” and the other side of the matrix says “important” and “not important.” Then, write all the things you want to do that week.

Ken Blanchard

Leadership and the One Minute Manager

You’ll learn why tailoring management styles to individual employees is so important; why knowing when to delegate, support, or direct is critical; and how to identify the leadership style suited to a particular person.

Roger Fisher

William Ury

Bruce Patton

Getting to yes

Negotiating Agreement Without Giving In - Getting to Yes offers a proven, step-by-step strategy for coming to mutually acceptable agreements in every sort of conflict.

Eliyahu M. Goldratt  

The Goal: A Process of Ongoing Improvement

A harried plant manager working ever more desperately to try improve performance. His factory is rapidly heading for disaster. So is his marriage. He has ninety days to save his plant - or it will be closed by corporate HQ. He takes a chance meeting with a professor from student days - Jonah - to help him break out of conventional ways of thinking to see what needs to be done.

Gene Kim

Kevin Behr
George Spafford

The Phoenix Project

Three luminaries of the DevOps movement deliver a story that anyone who works in IT will recognize. Readers will not only learn how to improve their own IT organizations, they'll never view IT the same way again.

 

If you need to fight the Killer Rabbit and the Holy Hand Grenade of Antioch is not around

Thomas Friedman 

The World Is Flat

Friedman explains how the flattening of the world (globalization) happened at the dawn of the twenty-first century; what it means to countries, companies, communities, and you; and how governments and societies must adapt.

Nicholas Carr

The Big Switch

Computing is turning into a utility, and the effects of this transition will ultimately change society as completely as the advent of cheap electricity did. 

Clayton Christensen

The Innovator's Dilemma

Have you ever wondered why Wang, DEC or Compaq vanished? Great companies can fail precisely because they do everything right.

Peter Schwartz

Inevitable Surprises

The rapid advance of technology forces constant reevaluation of our society. With so many powerful forces at work and seemingly unpredictable events occurring, Schwartz argues that the future is foreseeable, and that by examining the dynamics at work today we can predict the “inevitable surprises” of tomorrow.

Geoffrey Moore

Crossing the Chasm

If you care about technology marketing, this is a must read book that focuses on the specifics of marketing high tech products during the early startup period. 

If you are interested in the history (and the future) of IT

Andrew S. Grove

Only the Paranoid Survive

The nightmare for every leader - when massive change occurs and a company must adapt or fall. Grove calls such a moment a Strategic Inflection Point, which can be set off by almost anything: mega-competition, a change in regulations, or a seemingly modest change in technology. When a Strategic Inflection Point hits, the ordinary rules of business go out the window.

Walter Isaacson

The Innovators

 

The Innovators is a saga of collaborative genius destined to be the standard history of the digital revolution—and an indispensable guide to how innovation really happens. 

Martyn Burke

Pirates of Silicon Valley

The history of Apple and Microsoft - a movie, not a book, but a must see for any IT guy.

Paul Allen

Idea Man

You may not like MSFT, but this is still around. This is how it started.

Louis Gerstner

Who says elephants can’t dance?

The book tells the story of IBM's competitive and cultural transformation. Gerstner offers an account of his campaign to rebuild the leadership team and give the workforce a renewed sense of purpose. In the process, Gerstner defined a strategy for the computing giant and remade the ossified culture bred by the company's own success.

Akio Morita

Made in Japan

The story of SONY from the very beginning.

Bob Lutz

Car Guys vs. Bean Counters

This is actually about cars, but I left it in for those who wonder why product management matters.

Paul Ceruzzi

A History of Modern Computing 

This history covers modern computing from the development of the first electronic digital computer through the dot-com crash.

Martin Ford

Rise of the Robots

Technology and the Threat of a Jobless Future - “Computers can only do what they are programmed to do.” Well, not any more. Computers have long outgrown this quaint summation. Instead, they can now work things out for themselves.

 

If you are interested in the financial industry

Patricia Beard

Blue Blood and Mutiny

The Fight for the Soul of Morgan Stanley

Michael Lewis

The Big Short

Inside the Doomsday Machine - the bond and real estate markets where greed invented derivative securities to profit from the shortsightedness of lower-class Americans who could not pay their debts, and in return made the world economy collapse.

Michael Lewis

Flash boys

If you ever wondered why those microseconds in a trade matter.

Jonathan Knee

The Accidental Investment Banker

Inside the Decade That Transformed Wall Street - The author witnessed the lavish deal-making of the freewheeling nineties, when bankers rode the wave of the Internet economy, often by devil-may-care means. By the turn of the twenty-first century, the bubble burst and the industry was in free fall. What happened? You can learn it from this book.

Michael Lewis

Liar's Poker

This insider’s account of 1980s Wall Street excess transformed Michael Lewis from a disillusioned bond salesman to the best-selling literary icon he is today. Together, the three books - Flash Boys, The Big Short and this one - cover thirty years of endemic global corruption, perhaps the defining problem of our age, which has never been so hilariously skewered as in Liar's Poker.

Andrew Ross Sorkin

Too big to fail

The Inside Story of How Wall Street and Washington Fought to Save the Financial System--and Themselves.

Maintaining your edge – technical subjects

Simon Singh

The code book

The Science of Secrecy from Ancient Egypt to Quantum Cryptography - the first sweeping history of encryption, tracing its evolution and revealing the dramatic effects codes have had on wars, nations, and individual lives. From Mary, Queen of Scots, trapped by her own code, to the Navajo Code Talkers who helped the Allies win World War II, to the incredible (and incredibly simple) logistical breakthrough that made Internet commerce secure.

Samuel Greengard

The Internet of Things

The Internet of Things is a networked world of connected devices, objects, and people. In this book, Samuel Greengard offers a guided tour through this emerging world and how it will change the way we live and work.

David Anderson

Kanban

This book answers the questions: What is Kanban? Why would I want to use Kanban? How do I go about implementing Kanban? How do I recognize improvement opportunities and what should I do about them?

Frederick P. Brooks Jr.

The Mythical Man-Month

 

With a blend of software engineering facts and thought-provoking opinions, Fred Brooks offers insight for anyone managing complex projects. These essays draw from his experience as project manager for the IBM System/360 computer family and then for OS/360.
