alcinnz boosted

A vulnerability in a security chip present in over 100 Dell laptop models could allow attackers to steal sensitive data and monitor some computer activities.

https://mobilesyrup.com/2025/08/05/dell-fixed-security-chip-vulnerability-that-left-millions-open-to-attack/
- - -
A vulnerability in a security chip present in over 100 Dell laptop models could allow attackers to steal sensitive data and monitor some computer activities.

// Article in English //

#Dell #InfoSec #InformationSecurity #Cybersécurité #IT #TI


Cybersecurity, risk management, long post, brainstorming

Hey folks, I'm currently working on a thing for a company, and I need a brainstorm buddy as my team went on a corporate retreat.

It has to do with risk management.

Let's say we have a qualitatively assessed risk that was initially based mostly on vibes rather than solid data.

Now let's say we have an incident that stems from this specific risk. At the end of the incident, we need to re-assess the risk based on the data we collected.

Now, the requirement is a risk model that accommodates a shift from qualitative to quantitative assessment, starting from a single occurrence.

Does anyone know of any papers on the topic, or has anyone dealt with something similar? In my experience, quantitative risk in cybersec is mostly bullshit anyway: everyone just kind of makes up numbers, especially for probability/frequency, so they can get a bigger budget approved, which kind of goes against the spirit of risk management in my eyes.

My current train of thought is the following:
The risk model should calculate the risk not with the traditional impact * probability formula, but with something more detailed: a weighted score of the threat characteristics, multiplied by asset value, divided by current defence capability, and multiplied by a real-world statistics factor.
Based on the incident, we first adjust our threat model (possibly tweaking some numbers), then take a critical look at our defence capability and adjust it based on the results of the root cause analysis, and finally introduce a statistical multiplier with a default value of 1.
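
To make that concrete, a rough sketch in Python (the threat characteristics and the weights are placeholders I made up, not a real model):

def risk_score(threat, asset_value, defence_capability, stat_multiplier=1.0):
    # Weighted score over threat characteristics; weights are placeholders
    weights = {"capability": 0.4, "intent": 0.3, "opportunity": 0.3}
    threat_score = sum(w * threat[k] for k, w in weights.items())
    # Higher defence capability pushes the score down; the statistical
    # multiplier reflects how often the risk has actually materialised
    return threat_score * asset_value / defence_capability * stat_multiplier

# e.g. risk_score({"capability": 4, "intent": 5, "opportunity": 3},
#                 asset_value=8, defence_capability=3)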

Then, for every incident within the same year, we multiply the statistical multiplier by 2, and for every year without this risk being triggered, we divide it by 2.
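
In code that update could be as simple as (again, just a sketch):

def update_stat_multiplier(multiplier, incidents_this_year=0, quiet_years=0):
    # Double for every incident observed within the same year,
    # halve for every full year the risk was not triggered
    return multiplier * (2 ** incidents_this_year) / (2 ** quiet_years)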

Also, every year the threat model gets reviewed against OSINT and updated, and risks get recalculated.

Also also, every year the independent audit cycle happens: controls get assessed, maturity scores get updated, and risks get recalculated.

At that point the risk team only needs to collect threat modelling reports, audit reports, and new asset inventories, and to interview asset owners to verify there were no changes in asset value.
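
So the yearly recalculation is basically just re-running the formula with the refreshed inputs, something like this (the data structures are made up, and it reuses the risk_score sketch from above):

def recalculate_risks(risks, threat_models, maturity_scores, asset_values):
    # threat_models: refreshed from the yearly OSINT review
    # maturity_scores: refreshed from the independent audit, standing in for defence capability
    # asset_values: confirmed with the asset owners
    for risk in risks:
        risk["score"] = risk_score(
            threat_models[risk["threat"]],
            asset_values[risk["asset"]],
            maturity_scores[risk["control"]],
            risk["stat_multiplier"],
        )
    return risks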

Thoughts?

#infosec #infosecurity #informationsecurity #cyber #cybersec #cybersecurity #riskmanagement

Investigated an urgent security incident.

A colleague had hit F12 and activated the browser devtools window.

Really happy they call in this kind of thing instead of closing the window and just hoping everything will be alright.

Does anyone have a good idea for a little token of appreciation I could hand out on such occasions?

#informationsecurity #infosec