Recently I was looking to add OAuth2 login to my WordPress site.
Side note: when you see a 'sign in with Google/GitHub/...' icon on a site, please use it rather than creating a new account. The underlying protocol is called OAuth2. It's not (usually) giving that new site permission to read your Gmail; rather, it lets you federate and maintain fewer logins.
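To make that concrete, here is a minimal sketch of the first leg of that flow: the site redirects your browser to the provider's consent page. The endpoint, client id, and redirect URI below are made-up placeholders, not any real provider's values:

```python
# Sketch: building the authorization URL for the OAuth2
# authorization-code flow. All values here are placeholders.
from urllib.parse import urlencode


def build_authorize_url(base, client_id, redirect_uri, scope, state):
    """Return the URL the user's browser is redirected to for consent."""
    params = {
        "response_type": "code",  # ask for an authorization code back
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,           # anti-CSRF token, echoed back to us
    }
    return base + "?" + urlencode(params)


url = build_authorize_url(
    "https://provider.example.com/oauth2/authorize",
    "my-client-id",
    "https://myblog.example.com/callback",
    "openid email",
    "opaque-random-state",
)
```

The provider authenticates the user and redirects back to the callback with a short-lived code, which the site exchanges server-side for tokens; your password never touches the third-party site.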
Back to the story. I was looking to add OAuth2 to my WordPress site, and I wanted to handle a few different providers (Google, GitHub, and LinkedIn at minimum). This is not overly complex, so I was not expecting trouble. A couple of hours of looking later, I had found a bewildering array of plugins, all of which had two or more of these properties:
Some freemium approach where the part you want is not available, the yearly cost is high, and it's not obvious the vendor will stay in business
There is a SaaS component you are delegating to that now owns your site
Has not been updated in 4 years
Sigh. So I picked one (miniOrange). It worked, and I was ready to hit 'buy now' on $400/yr, when suddenly they removed it from the WordPress marketplace. No reason given. This one used a SaaS component, so I was instantly broken.
Enraged, I wrote the attached, on GitHub. It's certainly not great. It's not even good. But it works, and I've made it free (as in speech, and also as in beer). It uses only your site; no 3rd-party site is involved in the OAuth2 dance with your credentials. And, more importantly, I can now improve it. And so can you: pull requests welcome.
Every year in mid-March people start to get antsy. There's still some snow, but it's nearing spring; maybe I should swap my snow tires out.
For those unfortunate enough to live where they don't understand the concept of snow tires, it's simple. When the ground is cold, rubber is harder and gets less traction. So tires are made with a super-soft compound and marketed as 'snow', 'ice', 'snow and ice', etc. The upshot is that they stay softer at lower temperatures, thus getting better traction.
However, these super-soft gumball tires also don't have a long life. So you want to coax another year out of them by removing them as soon as feasible, taking the chance on 'one last big snowfall'.
Yesterday I swapped the snow tires off the bike. You may recall they were metal-spiked as well as being soft-compound and very wide. I gotta say, they were fantastic; I'm not sure I would be alive today if it were not for those tires. But, progress: it's spring, it's time to remove them.
It's very weird riding now with nearly 1" less width and a whole lot less noise. It's not that the summer tires are quiet per se, but compared to the metal-spiked monsters they are silent!
Many years ago there was a great short story, "The Midas Plague". You can actually read it online at that link.
In a nutshell, advances in productivity, energy, and automation meant that more and more things were getting created, and people had fewer and fewer hours to work. The economy started to struggle. So the poorer people were forced to work harder to consume: if you were rich you could have a small amount of things; if you were poor you needed a lot. People had to keep feeding the machine as it went into production/consumerism overdrive. It's a classic '50s sci-fi, totally worth the time to read.
It strikes me that today we are heading down the same path. Not just on the physical production side (GM makes more cars than ever, but employs far fewer people), but also on the information side.
How many of you have seen or read some content that seemed suspiciously machine-generated and click-baitey? Those YouTube videos that are a script read by a machine? A breathless title ending in a question mark that leads to an article that is a bit... um... lacking?
That is all machine-generated content, designed to vacuum up advertising dollars.
But wait, machines are also now indexing and learning. The Google engine is crawling all of this, as are untold millions of other bots and things.
So we literally have machines creating and consuming content. Soon only the poor will read Facebook threads; if you are wealthy enough, you can turn the internet off for a bit.
A funny problem exists that you may not be aware of. If you like being blissfully unaware, perhaps head over to kittenwar for a bit. It involves the words 'first' and 'only'.
You see, in a cloud-native world, there is a continuum. There is no 'first' or 'only', only the many. It's kind of like the Borg. You have a whole bunch of things running already, and there was no start time. There was no bootstrap, no initial creation, no 'let there be light' moment. But you may have some prerequisite, some thing that must be done exactly once before the universe is ready to go online.
Perhaps it's installing the schema into your database, or upgrading it. If you have a Deployment with n replicas and n>1, they will all come up and try to install this schema, non-transactionally, badly.
How can you solve this dilemma? You could read the long issue #1171 here. It's all going in the right direction: ReplicaSet lifecycle hooks, etc. And then it falls off a cliff. Perhaps all the people involved in it were beamed up by aliens? It seems the most likely answer.
But while you are waiting, I have another answer for you. Let's say you have a Django or Flask (or Quart, you asyncio lover!) application. It uses SQLAlchemy. The schema upgrades are bulletproof and beautiful. If only you had a place to run them in Kubernetes.
You could make a Job. It will run once, but only once, not on upgrade. You could make an initContainer, but it runs with each Pod in the replica set (here a Deployment). So, let's use a database transaction to serialise safely.
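Before the real thing, here is a toy demonstration of the idea, with SQLite and threads standing in for Postgres and Pods (the names are illustrative only): two workers race, the exclusive transaction serialises them, and the one-time init runs exactly once.

```python
# Toy demo: two 'Pods' (threads) race to perform one-time init; an
# exclusive database transaction serialises them. SQLite stands in
# for Postgres here; all names are illustrative.
import os
import sqlite3
import tempfile
import threading

db_path = os.path.join(tempfile.mkdtemp(), "demo.db")
init_runs = []  # records which worker actually performed the init


def pod(name):
    # isolation_level=None: we manage the transaction ourselves
    conn = sqlite3.connect(db_path, timeout=30, isolation_level=None)
    conn.execute("BEGIN IMMEDIATE")  # exclusive writer; others block here
    exists = conn.execute(
        "SELECT count(*) FROM sqlite_master WHERE name = 'install_locks'"
    ).fetchone()[0]
    if not exists:
        # the one-time 'schema install'
        conn.execute("CREATE TABLE install_locks (lock INTEGER)")
        init_runs.append(name)
    conn.execute("COMMIT")  # releases the lock
    conn.close()


workers = [threading.Thread(target=pod, args=("pod-%d" % i,))
           for i in range(2)]
for w in workers:
    w.start()
for w in workers:
    w.join()
```

Whichever worker grabs the lock second finds the table already present and does nothing, which is exactly the behaviour we want from the second Pod.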
Now, last chance to head to kittenwar before this gets a bit complex. OK, still here? Well, uh, Python time.
In a nutshell:
start a session and take an exclusive lock on a sentinel table
run the external commands
commit, releasing the lock
Easy, right? I chose the external-commands method, rather than calling (here Flask-Migrate's) upgrade directly, to allow the technique to work for other things.
This exists to solve a simple problem. We have a Deployment with >1
Pods. Each Pod requires that the database be up to date with the
right schema for itself. The schema install is non-transactional:
if we start 2 Pods in parallel and each tries to upgrade the schema,
they will race and may corrupt it. But if we don't upgrade the
schema, we can't go online until someone does it by hand.
Instead we create an 'install_locks' table in the database. A wrapper
Python script takes an exclusive transaction lock on this table,
and then goes on w/ the initial setup / upgrade of the schema.
This will serialise: one Pod will do the work while the other waits;
the 2nd will then have no work to do.
Whenever the imageTag is changed, this Deployment will update
and the process will repeat.
The initContainer doing this must run the same software as the main container.
Note: we could have done this *not* as an initContainer, but in the main
container's entrypoint.
See kubernetes/community#1171 for a longer discussion
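As a sketch, the wiring described above might look like this in the Deployment manifest. The app name, image tag, command path, and port here are made-up placeholders; the one load-bearing detail is that the initContainer runs the same image as the main container, so the migration code always matches the app:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      initContainers:
        - name: schema-upgrade
          image: myapp:1.2.3        # same image as the main container
          command: ["python", "/app/upgrade_lock.py"]
      containers:
        - name: myapp
          image: myapp:1.2.3
          ports:
            - containerPort: 8000
```

Bumping the image tag rolls the Deployment; every new Pod runs the initContainer first, and the lock ensures only one of them actually performs the upgrade.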
Could have just run this:
from flask_migrate import upgrade as _upgrade
db = SQLAlchemy(app)
migrate = Migrate(app, db)
_upgrade()
but I want this to be generic for other db operations,
so I call os.system instead.
import os

import environ  # django-environ
from sqlalchemy import create_engine, inspect, text
from sqlalchemy import Table, Column, Integer, MetaData
from sqlalchemy.orm import sessionmaker

env = environ.Env(DEBUG=(bool, False))
# env var name assumed to match the setting name
SQLALCHEMY_DATABASE_URI = env('SQLALCHEMY_DATABASE_URI')
print("USE DB %s" % SQLALCHEMY_DATABASE_URI)
db = create_engine(SQLALCHEMY_DATABASE_URI)

# Note: there is a race here: we check for the table, then
# create it. If the create fails, it was likely created by
# another instance, which is harmless.
if not inspect(db).has_table('install_locks'):
    metadata = MetaData()
    Table('install_locks', metadata, Column('lock', Integer))
    metadata.create_all(db)

Session = sessionmaker(bind=db)
session = Session()
# Exclusive lock: every other Pod running this script blocks
# here until we commit.
session.execute(text('LOCK TABLE install_locks IN ACCESS EXCLUSIVE MODE'))
os.system("/usr/local/bin/superset db upgrade")
# ... other init commands ...
session.commit()  # releases the lock; waiters then find nothing to do