Two weeks ago, Robert Rowley did a detailed post-mortem analysis for Patchstack of a severe vulnerability in Ninja Forms. Chloe Chamberland at Wordfence had identified it and described it in detail the day before. Robert, Chloe, and Sarah Gooding at WPTavern all noted how this vulnerability was quietly patched with a backported update from Ninja Forms — which WordPress.org then pushed out as a forced update to make sure it reached users who didn’t update on their own.
In the end, Robert wrote, “The coordinated efforts of the plugin developer and the WordPress.org volunteers pushing this forced update helped secure these [1 million+] websites.”
Good news! So how exactly do those vital coordinated efforts work?
Are there unofficial channels and processes, like a decision tree, for escalating to a forced update?
We’ve seen forced updates become increasingly common and less controversial over time. (Developers can still prevent them from running if they wish.) But who decides, and how is that decision made?
Inquiring minds want to know!
Robert, who is a Security Advocate at Patchstack, answered the one question I had about forced updates in the #security channel on Post Status Slack:
Are there unofficial channels and processes, like a decision tree, for escalating to a forced update? I assumed yes, and the answer is indeed “yes.”
In Robert’s view, there is a process in place that’s been good and has only gotten better as more people in the WordPress product space have become aware of it and involved with it.
You’ve got to talk about (security bug) fight club some of the time…
This is a strange topic to write about — open secrets in open source. It’s a bit of a paradox. In a culture where “transparency” is praised to an extreme in often simplistic and even fundamentalist ways, we don’t (and can’t) have it absolutely across the board.
Communicating openly about defects in our software — personally or as an open-source ecosystem — is fraught with challenges we’ve identified previously this year as important ones to address better.
On the one hand, there are and always have been good reasons and practices for carefully managed disclosure of in/security information.
On the other hand, those processes can’t work unless “the right people” — an ever-changing, fuzzily defined group — know about them.
For example, how does the WordPress ecosystem induct new developers, product owners, and security researchers into unofficial, semi-secretive, yet collaborative and trust-based open-source processes?
What can be codified and documented?
Even an old hand in these circles like John Jacoby (speaking for himself, not for core security) might like to know more, as he noted when he joined the #security conversation on Post Status Slack:
I would like to better understand what could be codified into policy/writing – in a way that satisfies the desire to be open to everyone, while also maintaining a level of security through the (current) obscurity of not having “forced updates” be made public before the force is finished.
John also outlined what he believes is already public knowledge about the current process for collaborating on security issues significant enough to require a forced update — and he asked others to reach out if they would like to talk about what might be more publicly documented regarding forced updates.
If that’s you, please do connect with John.
From where I’m standing, there should never be an update, or any other change to a program, that the user can’t at least opt out of. Major updates that aren’t critical security updates must always be opt-in, and while the rest may default to on to keep the average user safe, there needs to be a setting somewhere that stops them all. No exceptions, no excuses.
Well, these are critical security updates, and anyone can opt out if they have enough technical knowledge to understand how. That’s probably a good place to leave it. A global setting whose functioning (or nonfunctioning) provides no clear feedback is way too easy to mistakenly leave on (or off) and forget about.
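For the technically curious, here is a minimal sketch of what that opt-out looks like in practice — not official guidance, just the commonly known mechanism: forced plugin updates from WordPress.org are delivered through the same automatic updater as ordinary background plugin updates, so the standard `auto_update_plugin` filter (or the `AUTOMATIC_UPDATER_DISABLED` constant in wp-config.php) blocks them along with everything else.

```php
<?php
/**
 * Plugin Name: Disable Automatic Plugin Updates
 *
 * A minimal sketch of the opt-out, assuming a standard WordPress install:
 * drop this file into wp-content/mu-plugins/ as a must-use plugin.
 */

// Forced security updates travel through WP_Automatic_Updater like any other
// background plugin update, so returning false here blocks both kinds.
add_filter( 'auto_update_plugin', '__return_false' );

// Alternatively, the whole automatic updater can be switched off in wp-config.php:
// define( 'AUTOMATIC_UPDATER_DISABLED', true );
```

Which is exactly the point: the switch exists, but it’s a deliberate, technical choice rather than a checkbox the average site owner might flip and forget.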