The Hydrogen Truck Problem Isn't the Truck
https://www.mikeayles.com/blog/hydrogen-refuelling-road-freight/
#HackerNews #HydrogenTruck #Problem #Truck #Freight #Future #SustainableTransport
GPU Rack Power Density, 2015–2025
https://syaala.com/blog/gpu-rack-density-timeline-2026
#HackerNews #GPU #Power #Density #Trends #DataCenter #Technology #Future #Computing
Anthropic Education: The AI Fluency Index
https://www.anthropic.com/research/AI-fluency-index
#HackerNews #Anthropic #Education #AIFluencyIndex #AILiteracy #MachineLearning #FutureOfAI
How close are we to a vision for 2010?
https://shkspr.mobi/blog/2026/02/how-close-are-we-to-a-vision-for-2010/
Twenty-five years ago today, the EU's IST advisory group published a paper about the future of "Ambient Intelligence". Way before the world got distracted with cryptoscams and AI slop, we genuinely thought that computers would be so pervasive and well-integrated that the dream of "Ubiquitous Computing" would become a reality.
The ISTAG published an optimistic paper called "Scenarios for ambient intelligence in 2010". It's a brilliant look at what the future might have been. Let's go through some of the scenarios and see how close 2026 is to 2000's vision of 2010.
Scenario 1: ‘Maria’ – Road Warrior (close-term future)
Our titular heroine steps off a long haul flight into a foreign country.
she knows that she can travel much lighter than less than a decade ago, when she had to carry a collection of different so-called personal computing devices (laptop PC, mobile phone, electronic organisers and sometimes beamers and printers). Her computing system for this trip is reduced to one highly personalised communications device, her ‘P–Com’ that she wears on her wrist.
Well… OK! Not a bad start. You probably wouldn't want everything controlled by your smart watch - but the mobile is a good substitute. Although wireless video casting works, you'd probably want a trusty USB-C just to make sure.
she is able to stroll through immigration without stopping because her P-Comm is dealing with the ID checks as she walks.
We're getting closer to digital ID. But outside of a few experiments, there's no international consensus. However, every modern passport has an NFC chip which can be read by most airports. You still need to hold your passport on the reader, but it's usually quicker than queuing for a human.
Maria heads to her rented car:
The car opens as she approaches. It starts at the press of a button: she doesn’t need a key. She still has to drive the car but she is supported in her journey downtown to the conference centre-hotel by the traffic guidance system that had been launched by the city government as part of the ‘AmI-Nation’ initiative two years earlier.
Lots of cars now have wireless entry and are button controlled. Rental cars often have mobile app unlocking.
The traffic guidance is not provided by local governments. A mixture of international satellites provide positioning information, and a bunch of private companies provide traffic guidance.
Downtown traffic has been a legendary nightmare in this city for many years, and draconian steps were taken to limit access to the city centre. But Maria has priority access rights into the central cordon because she has a reservation in the car park of the hotel. Central access however comes at a premium price, in Maria’s case it is embedded in a deal negotiated between her personal agent and the transaction agents of the car-rental and hotel chains
Ah! The dream of personal agents. Not even close.
In the car Maria’s teenage daughter comes through on the audio system. Amanda has detected from ‘En Casa’ system at home that her mother is in a place that supports direct voice contact.
Hurrah for Bluetooth! Every car supports that now. Presence and location sensing is also common. Although the idea of a teenager willingly making a voice call is, sadly, a fantasy.
Her room adopts her ‘personality’ as she enters. The room temperature, default lighting and a range of video and music choices are displayed on the video wall.
Pffft! Nope. But do people really want this? The music and video are stored on her phone, so there's no need to transmit private data to a hotel.
Using voice commands she adjusts the light levels and commands a bath. Then she calls up her daughter on the video wall, while talking she uses a traditional remote control system to browse through a set of webcast local news bulletins from back home that her daughter tells her about. They watch them together.
Do you want an always-on Alexa in your hotel room? We have the technology, but we seem to shun it outside of specific scenarios.
We still have traditional remotes for browsing, and how lovely that they predicted the rise of simultaneous viewing!
Later on she ‘localises’ her presentation with the help of an agent that is specialised in advising on local preferences (colour schemes, the use of language).
I'd say we're there with a mixture of templates and LLMs. Translation and localisation is good enough.
She stores the presentation on the secure server at headquarters back in Europe. In the hotel’s seminar room where the sales pitch is take place, she will be able to call down an encrypted version of the presentation and give it a post presentation decrypt life of 1.5 minutes
Yup! Most things live in the cloud. Access controls are a thing. Whether people can be bothered to use them is another matter!
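The "post presentation decrypt life of 1.5 minutes" maps onto time-limited decryption, which we can do today. Here's a minimal stdlib sketch of the idea - the XOR keystream is a toy stand-in for a real authenticated cipher (e.g. AES-GCM), and all names are invented for illustration:

```python
# Toy sketch of a "decrypt life" wrapper: the ciphertext carries an
# absolute expiry, and decryption is refused once the window lapses.
# The XOR keystream is illustrative only, not real cryptography.
import hashlib
import time


def _keystream(key: bytes, length: int) -> bytes:
    # Expand the key into a pseudo-random byte stream via SHA-256 counters.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]


def encrypt(key: bytes, plaintext: bytes, lifetime_s: float) -> tuple[bytes, float]:
    """Return (ciphertext, expiry) where expiry is an absolute deadline."""
    stream = _keystream(key, len(plaintext))
    ciphertext = bytes(p ^ s for p, s in zip(plaintext, stream))
    return ciphertext, time.time() + lifetime_s


def decrypt(key: bytes, ciphertext: bytes, expiry: float) -> bytes:
    """Decrypt only while the window is open; afterwards, refuse."""
    if time.time() > expiry:
        raise PermissionError("decrypt window has expired")
    stream = _keystream(key, len(ciphertext))
    return bytes(c ^ s for c, s in zip(ciphertext, stream))


ct, deadline = encrypt(b"secret-key", b"sales pitch", lifetime_s=90)  # 1.5 minutes
print(decrypt(b"secret-key", ct, deadline))  # b'sales pitch' while still valid
```

In practice the enforcement would live on the server (a client can always keep a copy of what it decrypted), which is exactly the caveat about whether people can be bothered.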
As she enters the meeting she raises communications access thresholds to block out anything but red-level ‘emergency’ messages
Do-Not-Disturb is a feature on every modern phone.
Coming out of the meeting she lowers the communication barriers again and picks up a number of amber level communications including one from her cardio-monitor warning her to take some rest now.
Ah! The constant chastising FitBit!
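Maria's raise/lower thresholds and amber/red levels are essentially modern Do-Not-Disturb with priority classes. A minimal sketch of that model - the level names come from the scenario, the API is invented:

```python
# Messages carry a priority level; the user raises or lowers a cut-off.
# Anything below the threshold is held back and delivered on exit.
NORMAL, AMBER, RED = 0, 1, 2


class MessageGate:
    def __init__(self) -> None:
        self.threshold = NORMAL                    # everything through by default
        self.held: list[tuple[int, str]] = []

    def raise_threshold(self, level: int) -> None:
        self.threshold = level                     # e.g. RED = emergencies only

    def receive(self, level: int, msg: str) -> None:
        if level >= self.threshold:
            print(f"[now, level {level}] {msg}")   # delivered immediately
        else:
            self.held.append((level, msg))         # queued for later

    def lower_threshold(self) -> None:
        self.threshold = NORMAL
        for level, msg in self.held:               # deliver what was held back
            print(f"[held, level {level}] {msg}")
        self.held.clear()


gate = MessageGate()
gate.raise_threshold(RED)                          # entering the meeting
gate.receive(NORMAL, "newsletter")                 # held
gate.receive(AMBER, "cardio-monitor: take some rest")  # held
gate.receive(RED, "emergency from HQ")             # delivered immediately
gate.lower_threshold()                             # coming out of the meeting
```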
Scenario 2: ‘Dimitrios’ and the Digital Me’ (D-Me) (near-term future)
Dimitrios is the sort of self-facilitating media node you would never get tired of slapping.
Dimitrios is wearing, embedded in his clothes (or in his own body), a voice activated ‘gateway’ or digital avatar of himself, familiarly known as ‘D-Me’ or ‘Digital Me’. […] He feels quite confident with his D-Me and relies upon its ‘intelligent‘ reactions.
Nope! Oh, sure, your phone can auto-suggest some stock phrases to reply to emails. But we are nowhere close to having a physically embedded system which learns from us and can be trusted to respond.
Dimitrios receives calls which are:
answered formally but smoothly in corresponding languages by Dimitrios’ D-Me with a nice reproduction of Dimitrios’ voice and typical accent,
Vocal cloning is here. It is almost out of the uncanny valley. But I think most people would prefer to send a quick text or voice-note rather than use an AI.
a call from his wife is further analysed by his D-Me. In a first attempt, Dimitrios’ ‘avatar-like’ voice runs a brief conversation with his wife, with the intention of negotiating a delay while explaining his current environment.
She's going to leave him.
Dimitrios’ D-Me has caught a message from an older person’s D-Me, located in the nearby metro station. This senior has left his home without his medicine and would feel at ease knowing where and how to access similar drugs in an easy way. He has addressed his query in natural speech to his D-Me.
This is weird. Yes, we have smart-agents which are just about good enough to recognise speech and understand it. Why is it being sent to Dimitrios?
Dimitrios happens to suffer from similar heart problems and uses the same drugs. Dimitrios’ D-Me processes the available data as to offer information to the senior. It ‘decides’ neither to reveal Dimitrios’ identity (privacy level), nor to offer Dimitrios’ direct help (lack of availability), but to list the closest drug shops, the alternative drugs, offer a potential contact with the self-help group. This information is shared with the senior’s D-Me, not with the senior himself as to avoid useless information overload
We're nowhere close to this. At most, you might be able to post on social media and hope someone could help. I like the idea of a local social network, and there's a good understanding of privacy. But this seems needlessly convoluted - why wouldn't the senior's D-Me just look up the information online?
Meanwhile, his wife’s call is now interpreted by his D-Me as sufficiently pressing to mobilise Dimitrios. It ‘rings’ him using a pre-arranged call tone. Dimitrios takes up the call with one of the available Displayphones of the cafeteria. Since the growing penetration of D-Me, few people still bother to run around with mobile terminals: these functions are sufficiently available in most public and private spaces and your D-Me can always point at the closest…functioning one!
A hit and a miss! They predicted the rise of personalised ringtones - which have now all but vanished - but no one wants to use a pay-phone when they have their own mobile!
While doing his homework their 9 year-old son is meant to offer some insights on everyday life in Egypt. In a brief 3-way telephone conference, Dimitrios offers to pass over the query to the D-Me to search for an available direct contact with a child in Egypt. Ten minutes later, his son is videoconferencing at home with a girl of his own age, and recording this real-time translated conversation as part of his homework.
ChatRoulette for kids! What could possibly go wrong!
Ignoring that aspect, it's relatively common for kids to videocall each other - especially for language learning. Real-time translation is also possible.
Scenario 3 - Carmen: traffic, sustainability & commerce (further-term future)
Carmen is a modern, 21st century woman. Let's see how technology helps her:
She wants to leave for work in half an hour and asks AmI, by means of a voice command, to find a vehicle to share with somebody on her route to work.
Voice commands work - although usually only if you know the correct invocation.
AmI starts searching the trip database and, after checking the willingness of the driver, finds someone that will pass by in 40 minutes. The in-vehicle biosensor has recognised that this driver is a non-smoker – one of Carmen requirements for trip sharing. From that moment on, Carmen and her driver are in permanent contact if wanted (e.g. to allow the driver to alert Carmen if he/she will be late). Both wear their personal area networks (PAN) allowing seamless and intuitive contacts.
The aim of "ride-sharing" was originally this sort of thing. A driver would give a lift to someone if they happened to be travelling that route. Nowadays that model is over - it's all professional drivers.
Ubiquitous geo-tracking now means you can see if your driver is late, and they can see if you've moved streets. We have too many privacy concerns to allow PANs to share much more.
She would like also to cook a cake and the e-fridge flashes the recipe. It highlights the ingredients that are missing milk and eggs. She completes the shopping on the e-fridge screen and asks for it to be delivered to the closest distribution point in her neighbourhood.
Oh! The Internet-Connected Fridge! Beloved by technologists and spurned by users! While there are a few fridges with built-in web-browsers, most people do their shopping from their phone.
Home delivery is now seamless and cheap. The "Amazon Locker" is also a reality.
All goods are smart tagged, so that Carmen can check the progress of her virtual shopping expedition, from any enabled device at home, the office or from a kiosk in the street
Do you care whether the eggs have been packed yet? I can see that it would be useful to the store to have realtime info on stock levels (and they mostly do for online shopping) but why expose that to the user?
Would you bother using a public terminal?
When Carmen gets into the car, the VAN system (Vehicle Area Network) registers her and by doing that she sanctions the payment systems to start counting. A micro-payment system will automatically transfer the amount into the e-purse of the driver when she gets out of the car.
I don't think Uber's app uses Bluetooth to detect whether driver and passenger are in proximity. Maybe it should?
Cryptocurrencies still can't do instantaneous micro-transactions. But credit-cards work pretty well.
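The VAN idea - entry starts the meter, exit settles a micro-payment into the driver's e-purse - is simple enough to sketch. The rate and all names below are invented for illustration:

```python
# Sketch of the scenario's in-vehicle fare meter: registering on entry
# starts the clock; registering on exit credits the driver's e-purse.
class FareMeter:
    RATE_PER_MIN = 0.12  # hypothetical euros per minute

    def __init__(self) -> None:
        self.entered_at: float | None = None

    def register_entry(self, t: float) -> None:
        # Carmen's presence "sanctions the payment systems to start counting".
        self.entered_at = t

    def register_exit(self, t: float, driver_purse: dict) -> float:
        # Settle the micro-payment when she gets out of the car.
        minutes = (t - self.entered_at) / 60.0
        fare = round(minutes * self.RATE_PER_MIN, 2)
        driver_purse["balance"] = driver_purse.get("balance", 0.0) + fare
        self.entered_at = None
        return fare


purse = {"balance": 0.0}
meter = FareMeter()
meter.register_entry(t=0.0)
fare = meter.register_exit(t=1500.0, driver_purse=purse)  # a 25-minute trip
print(fare, purse)  # 3.0 {'balance': 3.0}
```

The hard part was never the arithmetic - it's the trusted proximity detection and the instant, fee-free transfer.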
Carmen is alerted by her PAN that a Chardonnay wine that she has previously identified as a preferred choice is on promotion. She adds it to her shopping order
Personal Agents always working for the user! Again, a fantasy which has yet to emerge. The reality is more like a push notification from the shop.
On the way home the shared car system senses a bike on a dedicated lane approaching an intersection on their route. The driver is alerted […] so a potential accident is avoided.
Tesla's crappy implementation notwithstanding, modern cars are relatively good about detecting bikes, pedestrians, and other vehicles.
the traffic density has caused pollution levels to rise above a control threshold. The city-wide engine control systems automatically lower the maximum speeds (for all motorised vehicles) and when the car enters a specific urban ring toll will be deducted via the Automatic Debiting System (ADS)
Half-and-half. No one is allowing their car to be remotely controlled, although plenty of roads have dynamic speed limits. Most modern metros have Automatic Number Plate Recognition and can bill drivers who enter congestion zones.
Carmen arrives at the local distribution node (actually her neighbourhood corner shop) where she picks up her goods. The shop has already closed but the goods await Carmen in a smart delivery box. By getting them out, the system registers payment
This is pretty much how the Amazon Locker works!
Scenario 4 – Annette and Solomon in the Ambient for Social Learning (far-term future)
Let's now go to an environmental study group meeting at a learning space.
Some are scheduled to work together in real time and space and thus were requested to be present together (the ambient accesses their agendas to do the scheduling).
Ah! Sadly not. At best we have shared calendars where people can look up suitable times, or Doodle polls where people can suggest their preferred times. Some integrated systems like Office365 will make a basic attempt to suggest meeting times - but it is a closed and proprietary system.
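The core of "the ambient accesses their agendas to do the scheduling" is just interval arithmetic, and it's worth noting how small that core is. A sketch, assuming each agenda is a list of busy intervals in hours:

```python
# Given each participant's busy intervals (start, end) in hours, find
# the earliest slot of a given length that is free for everyone.
def earliest_common_slot(busy_by_person, length, day_start=9.0, day_end=17.0):
    # Merge everyone's busy intervals into one sorted list and walk a
    # cursor forward, looking for a gap big enough for the meeting.
    merged = sorted(iv for person in busy_by_person for iv in person)
    cursor = day_start
    for start, end in merged:
        if start - cursor >= length:      # gap before this busy block fits
            return cursor, cursor + length
        cursor = max(cursor, end)         # skip past the busy block
    if day_end - cursor >= length:        # room left at the end of the day
        return cursor, cursor + length
    return None                           # no common slot today


annette = [(9.0, 10.5), (13.0, 14.0)]
solomon = [(9.5, 11.0), (15.0, 16.0)]
mentor = [(12.0, 13.5)]
print(earliest_common_slot([annette, solomon, mentor], length=1.0))
# → (11.0, 12.0): the first one-hour gap free for all three
```

The algorithm is trivial; what's missing in 2026 is an open protocol for every agenda to expose its free/busy data to a scheduler the user trusts.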
Here's Annette:
Annette is an active and advanced student so the ambient says it might be useful if Annette spends some time today trying to pin down the problem with the model using enhanced interactive simulation and projection facilities. It then asks if Annette would give a brief presentation to the group. The ambient goes briefly through its understanding of Annette’s availability and preferences for the day’s work.
A demo of that today would wow people. LLMs can convincingly do some of these tasks, but they're not integrated into anything sufficiently complex.
Here's Solomon, a new participant:
The ambient establishes Solomon’s identity; asks Solomon for the name of an ambient that ‘knows’ Solomon; gets permission from Solomon to acquire information about Solomon’s background and experience in Environmental Studies. The ambient then suggests Solomon to join the meeting and to introduce himself to the group.
Again, we barely have coherent online identities. We certainly don't have trusted ambient intelligences who can claim to know us. I do like the fact that it asks for permission. Not always a given today!
In these private conversations the mental states of the group are synchronised with the ambient, individual and collective work plans are agreed and in most cases checked with the mentor through the ambient.
Nope!
During the presentation the mentor is feeding observations and questions to the ambient, together with William, an expert who was asked to join the meeting. William, although several thousand miles away, joins to make a comment and answer some questions.
Telepresence is a reality today. Video-calling experts is a natural and expected part of life here in 2026.
During the day the mentor and ambient converse frequently, establishing where the mentor might most usefully spend his time, and in some cases altering the schedule. The ambient and the mentor will spend some time negotiating shared experiences with other ambients – for example mounting a single musical concert with players from two or more distant sites.
I feel we're still about 25 years away from this future!
Key technological requirements for Ambient Intelligence (AmI)
The above scenarios are designed to be provocative thought experiments. If that's the future that people want, how would we get there?
The researchers suggest five technological requirements:
- Very unobtrusive hardware
- A seamless mobile/fixed communications infrastructure
- Dynamic and massively distributed device networks
- Natural feeling human interfaces
- Dependability and security
I think they're bang on the money there.
Hardware is getting unobtrusive. Wearables are limited at the moment to wrist-mounted sensors, some medical devices, and video glasses. The hardware in our environment is even better at being unobtrusive. Presence sensors, cameras, and microphones are embedded all around us. We're unfortunately limited by short-life batteries.
While the promise of 5G hasn't quite materialised, it is increasingly rare to be offline. WiFi is in every building, urban areas are flooded with mobile signals, and satellite comms are becoming cheaper. OK, IPv6 still isn't widespread, but moving a device between radio technologies is mostly seamless.
Distributed device networks are still yet to emerge. The current crop of monopolist technology providers want everything to go through their systems. There's very little standardisation.
Humane interfaces are getting there. Voice-to-text mostly works - but it does rely on training humans sufficiently well. Lots of things are still monolingual.
Security and privacy are constant thorns in the side of progress. Everything would be easier if we didn't need to worry about keeping people safe and secure. Dependability is the crux of any system - every time you experience a failure, you're less likely to return.
What Have We Learned?
The whole paper is worth reading, especially the longer versions of each scenario which dive into some of the socio-political issues.
Some of the visions for 2010 are here! We have GPS, ride-sharing, and video-calls with real-time translations. Our groceries and other items can be delivered to smart-lockers, locks are opened with digital keys, and voice cloning mostly works.
We don't have public pay-phones (not even video enabled ones!) and cars aren't centrally controlled. For all the promises of AI, it still isn't even close to providing a seamless experience.
What strikes me most about the possible futures discussed isn't their optimism nor their missteps - it's that most of these things could be possible today if there were sufficient open standards which the public and private sector adopted.
Anyone who has read "The Entrepreneurial State" knows that these things take significant public investment. We've reached a point where the private sector has generated wealth from previous public research, but seems unwilling to invest in any long-term research itself. That's short-changing our future.
How close are we to a vision for 2010?
https://shkspr.mobi/blog/2026/02/how-close-are-we-to-a-vision-for-2010/Twenty five years ago today, the EU's IST advisory group published a paper about the future of "Ambient Intelligence". Way before the world got distracted with cryptoscams and AI slop, we genuinely thought that computers would be so pervasive and well-integrated that the dream of "Ubiquitous Computing" would become a reality.
The ISTAG published an optimistic paper called "Scenarios for ambient intelligence in 2010". It's a brilliant look at what the future might have been. Let's go through some of the scenarios and see how close 2026 is to 2000's vision of 2010.
Scenario 1: ‘Maria’ – Road Warrior (close-term future)
Our titular heroine steps off a long haul flight into a foreign country.
she knows that she can travel much lighter than less than a decade ago, when she had to carry a collection of different so-called personal computing devices (laptop PC, mobile phone, electronic organisers and sometimes beamers and printers). Her computing system for this trip is reduced to one highly personalised communications device, her ‘P–Com’ that she wears on her wrist.
Well… OK! Not a bad start. You probably wouldn't want everything controlled by your smart watch - but the mobile is a good substitute. Although wireless video casting works, you'd probably want a trusty USB-C just to make sure.
she is able to stroll through immigration without stopping because her P-Comm is dealing with the ID checks as she walks.
We're getting closer to digital ID. But outside of a few experiments, there's no international consensus. However, every modern passport has an NFC chip which can be read by most airports. You still need to hold your passport on the reader, but it's usually quicker than queuing for a human.
Maria heads to her rented car:
The car opens as she approaches. It starts at the press of a button: she doesn’t need a key. She still has to drive the car but she is supported in her journey downtown to the conference centre-hotel by the traffic guidance system that had been launched by the city government as part of the ‘AmI-Nation’ initiative two years earlier.
Lots of cars now have wireless entry and are button controlled. Rental cars often have mobile app unlocking.
The traffic guidance is not provided by local governments. A mixture of international satellites provide positioning information, and a bunch of private companies provide traffic guidance.
Downtown traffic has been a legendary nightmare in this city for many years, and draconian steps were taken to limit access to the city centre. But Maria has priority access rights into the central cordon because she has a reservation in the car park of the hotel. Central access however comes at a premium price, in Maria’s case it is embedded in a deal negotiated between her personal agent and the transaction agents of the car-rental and hotel chains
Ah! The dream of personal agents. Not even close.
In the car Maria’s teenage daughter comes through on the audio system. Amanda has detected from ‘En Casa’ system at home that her mother is in a place that supports direct voice contact.
Hurrah for Bluetooth! Every car supports that now. Presence and location sensing is also common. Although the idea of a teenager willingly making a voice call is, sadly, a fantasy.
Her room adopts her ‘personality’ as she enters. The room temperature, default lighting and a range of video and music choices are displayed on the video wall.
Pffft! Nope. But do people really want this? The music and video are stored on her phone, so there's no need to transmit private data to a hotel.
Using voice commands she adjusts the light levels and commands a bath. Then she calls up her daughter on the video wall, while talking she uses a traditional remote control system to browse through a set of webcast local news bulletins from back home that her daughter tells her about. They watch them together.
Do you want an always-on Alexa in your hotel room? We have the technology, but we seem to shun in outside of specific scenarios.
We still have traditional remotes for browsing, and how lovely that they predicted the rise of simultaneous viewing!
Later on she ‘localises’ her presentation with the help of an agent that is specialised in advising on local preferences (colour schemes, the use of language).
I'd say we're there with a mixture of templates and LLMs. Translation and localisation is good enough.
She stores the presentation on the secure server at headquarters back in Europe. In the hotel’s seminar room where the sales pitch is take place, she will be able to call down an encrypted version of the presentation and give it a post presentation decrypt life of 1.5 minutes
Yup! Most things live in the cloud. Access controls are a thing. Whether people can be bothered to use them is another matter!
As she enters the meeting she raises communications access thresholds to block out anything but red-level ‘emergency’ messages
Do-Not-Disturb is a feature on every modern phone.
Coming out of the meeting she lowers the communication barriers again and picks up a number of amber level communications including one from her cardio-monitor warning her to take some rest now.
Ah! The constant chastising FitBit!
Scenario 2: ‘Dimitrios’ and the Digital Me’ (D-Me) (near-term future)
Dimitrios is the sort of self-facilitating media node you would never get tired of slapping.
Dimitrios is wearing, embedded in his clothes (or in his own body), a voice activated ‘gateway’ or digital avatar of himself, familiarly known as ‘D-Me’ or ‘Digital Me’. […] He feels quite confident with his D-Me and relies upon its ‘intelligent‘ reactions.
Nope! Oh, sure, your phone can auto-suggest some stock phrases to reply to emails. But we are nowhere close to having a physically embedded system which learns from us and can be trusted to respond.
Dimitrios receives calls which are:
answered formally but smoothly in corresponding languages by Dimitrios’ D-Me with a nice reproduction of Dimitrios’ voice and typical accent,
Vocal cloning is here. It is almost out of the uncanny valley. But I think most people would prefer to send a quick text or voice-note rather than use an AI.
a call from his wife is further analysed by his D-Me. In a first attempt, Dimitrios’ ‘avatar-like’ voice runs a brief conversation with his wife, with the intention of negotiating a delay while explaining his current environment.
She's going to leave him.
Dimitrios’ D-Me has caught a message from an older person’s D-Me, located in the nearby metro station. This senior has left his home without his medicine and would feel at ease knowing where and how to access similar drugs in an easy way. He has addressed his query in natural speech to his D-Me.
This is weird. Yes, we have smart-agents which are just about good enough to recognise speech and understand it. Why is it being sent to Dimitrios?
Dimitrios happens to suffer from similar heart problems and uses the same drugs. Dimitrios’ D-Me processes the available data as to offer information to the senior. It ‘decides’ neither to reveal Dimitrios’ identity (privacy level), nor to offer Dimitrios’ direct help (lack of availability), but to list the closest drug shops, the alternative drugs, offer a potential contact with the self-help group. This information is shared with the senior’s D-Me, not with the senior himself as to avoid useless information overload
We're nowhere close to this. At most, you might be able to post on social media and hope someone could help. I like the idea of a local social network, and there's a good understanding of privacy. But this seems needlessly convoluted - why wouldn't the senior's D-Me just look up the information online?
Meanwhile, his wife’s call is now interpreted by his D-Me as sufficiently pressing to mobilise Dimitrios. It ‘rings’ him using a pre-arranged call tone. Dimitrios takes up the call with one of the available Displayphones of the cafeteria. Since the growing penetration of D-Me, few people still bother to run around with mobile terminals: these functions are sufficiently available in most public and private spaces and your D-Me can always point at the closest…functioning one!
A hit and a miss! They predicted the rise of personalised ringtones - which have now all but vanished - but no one wants to use a pay-phone when they have their own mobile!
While doing his homework their 9 year-old son is meant to offer some insights on everyday life in Egypt. In a brief 3-way telephone conference, Dimitrios offers to pass over the query to the D-Me to search for an available direct contact with a child in Egypt. Ten minutes later, his son is videoconferencing at home with a girl of his own age, and recording this real-time translated conversation as part of his homework.
ChatRoulette for kids! What could possibly go wrong!
Ignoring that aspect, it's relatively common for kids to videocall each other - especially for language learning. Real-time translation is also possible.
Scenario 3 - Carmen: traffic, sustainability & commerce (further-term future)
Carmen is a modern, 21st century woman. Let's see how technology helps her:
She wants to leave for work in half an hour and asks AmI, by means of a voice command, to find a vehicle to share with somebody on her route to work.
Voice commands work - although usually only if you know the correct invocation.
AmI starts searching the trip database and, after checking the willingness of the driver, finds someone that will pass by in 40 minutes. The in-vehicle biosensor has recognised that this driver is a non-smoker – one of Carmen requirements for trip sharing. From that moment on, Carmen and her driver are in permanent contact if wanted (e.g. to allow the driver to alert Carmen if he/she will be late). Both wear their personal area networks (PAN) allowing seamless and intuitive contacts.
The aim of "ride-sharing" was originally this sort of thing. A driver would give a lift to someone if they happened to be travelling that route. Nowadays that model is over - it's all professional drivers.
Ubiquitous geo-tracking now means you can see if your driver is late, and they can see if you've moved street. We have too many privacy concerts to allow PANs to share much more.
She would like also to cook a cake and the e-fridge flashes the recipe. It highlights the ingredients that are missing milk and eggs. She completes the shopping on the e-fridge screen and asks for it to be delivered to the closest distribution point in her neighbourhood.
Oh! The Internet-Connected Fridge! Beloved by technologists and spurned by users! While there are a few fridges with build-in web-browsers, most people do their shopping from their phone.
Home delivery is now seamless and cheap. The "Amazon Locker" is also a reality.
All goods are smart tagged, so that Carmen can check the progress of her virtual shopping expedition, from any enabled device at home, the office or from a kiosk in the street
Do you care whether the eggs have been packed yet? I can see that it would be useful to the store to have realtime info on stock levels (and they mostly do for online shopping) but why expose that to the user?
Would you bother using a public terminal?
When Carmen gets into the car, the VAN system (Vehicle Area Network) registers her and by doing that she sanctions the payment systems to start counting. A micro-payment system will automatically transfer the amount into the e-purse of the driver when she gets out of the car.
I don't think Uber's app uses Bluetooth to detect whether driver and passenger are in proximity. Maybe it should?
Cryptocurrencies still can't do instantaneous micro-transactions. But credit-cards work pretty well.
Carmen is alerted by her PAN that a Chardonnay wine that she has previously identified as a preferred choice is on promotion. She adds it to her shopping order
Personal Agents always working for the user! Again, a fantasy which has yet to emerge. The reality is more like a push notification from the shop.
On the way home the shared car system senses a bike on a dedicated lane approaching an intersection on their route. The driver is alerted […] so a potential accident is avoided.
Tesla's crappy implementation notwithstanding, modern cars are relatively good about detecting bikes, pedestrians, and other vehicles.
the traffic density has caused pollution levels to rise above a control threshold. The city-wide engine control systems automatically lower the maximum speeds (for all motorised vehicles) and when the car enters a specific urban ring toll will be deducted via the Automatic Debiting System (ADS)
Half-and-half. No one is allowing their car to be remotely controlled, although plenty of roads have dynamic speed limits. Most modern metros have Automatic Number Plate Recognition and can bill drivers who enter congestion zones.
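The ANPR billing side boils down to: a camera reads a plate, the system checks whether charging hours apply and whether this vehicle has already been billed today, then debits the registered account. A toy sketch, with an invented flat charge and hours:

```python
from datetime import datetime

CONGESTION_CHARGE = 15.00      # hypothetical flat daily charge, in pounds
CHARGING_HOURS = range(7, 18)  # hypothetical: 07:00–17:59

def charge_for_entry(plate: str, entered_at: datetime,
                     already_billed: set[tuple[str, str]]) -> float:
    """Return the amount to debit for this zone entry.
    A plate is billed at most once per day, and only during charging hours."""
    day_key = (plate, entered_at.strftime("%Y-%m-%d"))
    if entered_at.hour not in CHARGING_HOURS or day_key in already_billed:
        return 0.0
    already_billed.add(day_key)
    return CONGESTION_CHARGE

billed: set[tuple[str, str]] = set()
print(charge_for_entry("AB12 CDE", datetime(2026, 2, 1, 8, 30), billed))  # 15.0
print(charge_for_entry("AB12 CDE", datetime(2026, 2, 1, 16, 0), billed))  # 0.0 (already billed today)
print(charge_for_entry("AB12 CDE", datetime(2026, 2, 1, 22, 0), billed))  # 0.0 (outside charging hours)
```

Note this charges per entry into the zone rather than remotely governing the engine - which is exactly the half the paper got right.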
Carmen arrives at the local distribution node (actually her neighbourhood corner shop) where she picks up her goods. The shop has already closed but the goods await Carmen in a smart delivery box. By getting them out, the system registers payment
This is pretty much how the Amazon Locker works!
Scenario 4 – Annette and Solomon in the Ambient for Social Learning (far-term future)
Let's now go to an environmental study group meeting at a learning space.
Some are scheduled to work together in real time and space and thus were requested to be present together (the ambient accesses their agendas to do the scheduling).
Ah! Sadly not. At best we have shared calendars where people can look up suitable times, or Doodle polls where people can suggest their preferred times. Some integrated systems like Office365 will make a basic attempt to suggest meeting times - but it is a closed and proprietary system.
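At its core, the calendar lookup such schedulers attempt is just interval arithmetic: merge every participant's busy intervals, then scan for a gap long enough for the meeting. A minimal sketch, with hours as numbers and all names made up:

```python
def find_free_slot(busy_by_person, day_start, day_end, duration):
    """Merge every participant's busy intervals, then return the first
    gap of at least `duration` hours, or None if no slot fits."""
    intervals = sorted(iv for person in busy_by_person for iv in person)
    cursor = day_start
    for start, end in intervals:
        if start - cursor >= duration:        # gap before this busy block
            return (cursor, cursor + duration)
        cursor = max(cursor, end)
    if day_end - cursor >= duration:          # gap after the last busy block
        return (cursor, cursor + duration)
    return None

# Usage: two attendees, looking for a 1-hour slot between 09:00 and 17:00
annette = [(9, 10.5), (13, 14)]
solomon = [(9.5, 11), (15, 16)]
print(find_free_slot([annette, solomon], 9, 17, 1))  # → (11, 12)
```

The algorithm is trivial; the unsolved part is the ISTAG vision of ambients doing this across organisational boundaries, which is exactly where today's closed calendar silos stop.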
Here's Annette:
Annette is an active and advanced student so the ambient says it might be useful if Annette spends some time today trying to pin down the problem with the model using enhanced interactive simulation and projection facilities. It then asks if Annette would give a brief presentation to the group. The ambient goes briefly through its understanding of Annette’s availability and preferences for the day’s work.
A demo of that today would wow people. LLMs can convincingly do some of these tasks, but they're not integrated into anything sufficiently complex.
Here's Solomon, a new participant:
The ambient establishes Solomon’s identity; asks Solomon for the name of an ambient that ‘knows’ Solomon; gets permission from Solomon to acquire information about Solomon’s background and experience in Environmental Studies. The ambient then suggests Solomon to join the meeting and to introduce himself to the group.
Again, we barely have coherent online identities. We certainly don't have trusted ambient intelligences who can claim to know us. I do like the fact that it asks for permission. Not always a given today!
In these private conversations the mental states of the group are synchronised with the ambient, individual and collective work plans are agreed and in most cases checked with the mentor through the ambient.
Nope!
During the presentation the mentor is feeding observations and questions to the ambient, together with William, an expert who was asked to join the meeting. William, although several thousand miles away, joins to make a comment and answer some questions.
Telepresence is a reality today. Video-calling experts is a natural and expected part of life here in 2026.
During the day the mentor and ambient converse frequently, establishing where the mentor might most usefully spend his time, and in some cases altering the schedule. The ambient and the mentor will spend some time negotiating shared experiences with other ambients – for example mounting a single musical concert with players from two or more distant sites.
I feel we're still about 25 years away from this future!
Key technological requirements for Ambient Intelligence (AmI)
The above scenarios are designed to be provocative thought experiments. If that's the future that people want, how would we get there?
The researchers suggest five technological requirements:
- Very unobtrusive hardware
- A seamless mobile/fixed communications infrastructure
- Dynamic and massively distributed device networks
- Natural feeling human interfaces
- Dependability and security
I think they're bang on the money there.
Hardware is getting unobtrusive. Wearables are limited at the moment to wrist-mounted sensors, some medical devices, and video glasses. The hardware in our environment is even better at being unobtrusive. Presence sensors, cameras, and microphones are embedded all around us. We're unfortunately limited by short-life batteries.
While the promise of 5G hasn't quite materialised, it is increasingly rare to be offline. WiFi is in every building, urban areas are flooded with mobile signals, and satellite comms are becoming cheaper. OK, IPv6 still isn't widespread, but it is mostly seamless when a device moves between radio technologies.
Dynamic, massively distributed device networks have yet to emerge. The current crop of monopolist technology providers want everything to go through their own systems. There's very little standardisation.
Natural-feeling human interfaces are getting there. Voice-to-text mostly works - but it does rely on training humans sufficiently well. Lots of things are still monolingual.
Security and privacy are constant thorns in the side of progress. Everything would be easier if we didn't need to worry about keeping people safe and secure. Dependability is the crux of any system - every time you experience a failure, you're less likely to return.
What Have We Learned?
The whole paper is worth reading, especially the longer versions of each scenario which dive into some of the socio-political issues.
Some of the visions for 2010 are here! We have GPS, ride-sharing, and video-calls with real-time translations. Our groceries and other items can be delivered to smart-lockers, locks are opened with digital keys, and voice cloning mostly works.
We don't have public pay-phones (not even video enabled ones!) and cars aren't centrally controlled. For all the promises of AI, it still isn't even close to providing a seamless experience.
What strikes me most about the possible futures discussed isn't their optimism nor their missteps - it's that most of these things could be possible today if there were sufficient open standards which the public and private sector adopted.
Anyone who has read "The Entrepreneurial State" knows that these things take significant public investment. We've reached a point where the private sector has generated wealth from previous public research, but seems unwilling to invest in any long-term research itself. That's short-changing our future.
#AI #future #predictions
Over 80% of 16 to 24-year-olds would vote to rejoin the EU
#HackerNews #EU #Youth #Vote #Rejoin #Poll #YoungVoters #Democracy #Future #Europe
#OpenAI has deleted the word ‘safely’ from its #mission – and its new structure is a test for whether #AI serves #society or shareholders
see: theconversation.com/openai-has…
#economy #technology #news #security #humanity #future #finance #money #capitalism #ethics #problem #software #profit #politics
I'm Islam from Gaza, and these are my students... They used to come every morning full of enthusiasm and hope. 📚✏️ Today, all that remains of the center is rubble, but their eyes still hold dreams.
Help us reopen the center, to give them a safe space to learn and live again. These young faces deserve a new chance... With your support, we will reopen the center that was their home and their hope.
#Education #future #success
#gaza #writing #palestine #genocide #writer
https://chuffed.org/project/154105-islam-and-family-in-gaza-rebuilding-hope-after-war
#Media organizations hold #power.
With power comes #duty. That duty is simple: show #reality as accurately as possible. Not only what is comfortable. Not only what feels safe. Also what is uncertain, complex, or uncomfortable.
If a highly powerful political leader repeatedly confuses countries, shows sudden speech disruptions, or displays noticeable physical changes, this is not #gossip. It can be an indicator of #cognitive decline. In medicine, patterns matter more than isolated events. One mistake can be random. Many similar mistakes over time are a signal. #Science works exactly this way: it looks for repeated deviations from normal baselines.
This is where the #moral #responsibility of the media begins. Media do not need to #diagnose. But they should not completely ignore signals that are being seriously discussed by qualified experts. Because omission also shapes public reality. In #psychology, this is known as #normalization through #exposure. If something is ignored often enough, it starts to appear normal.
Pop culture illustrates this clearly. Imagine a sci-fi series where a spaceship crew keeps ignoring warning lights. Every episode they say, “Probably just a sensor glitch.” Eventually the warp core explodes. Not because nobody could see the problem. But because nobody wanted to say it out loud.
Or think about multiplayer games. When one player is clearly lagging, the team adapts strategy. They do not pretend everyone is perfectly synchronized. Recognizing reality is the basis for good decision-making. Politics works the same way. Voters and institutions need an accurate situation assessment.
The concept often described as “sanewashing” reflects a broader social tendency. Humans and institutions prefer stability. The #brain likes predictability. That is why deviations are often downplayed. But science shows that early #warning signs are crucial. In #medicine. In #climate #research. In #engineering safety. And in political #analysis.
A common counterargument is: without a confirmed diagnosis, nothing should be discussed. This sounds careful. But it is only partially scientific. Science often operates with probabilities, not absolute certainty. Evidence-based expert hypotheses are not taboo. They are part of #knowledge formation. The key is #transparency: What is proven? What is #hypothesis? Who says it? Based on which methods?
If media avoid all relevant expert hypotheses out of #fear of being wrong, an #information vacuum forms. That vacuum will be filled by uncontrolled sources. Social media rumors. Extreme narratives. This weakens #trust in professional #journalism.
Honor in journalism does not mean #perfection. Honor means #responsibility. Moral #integrity does not mean zero risk. It means pursuing #truth despite risk. The principle “tell it as it is” remains correct. But “what is” constantly changes. Reality is not a frozen picture. It is closer to a live stream. Showing only old frames is no longer truth.
In pop culture terms: Reality is not a finished movie. It is an open-world game. New events constantly appear. If media only show the starting map, they mislead the players.
Therefore, media institutions should evolve their #tools and standards. They need clear, ethical frameworks for discussing medical or cognitive warning signals in extremely powerful public figures. Not sensationalist. Not #disrespectful. But also not #blind.
Because ultimately, this is not about individual politicians. It is about systems. About #democracy. About informed citizens. Truth is rarely comfortable. But ignoring it does not make it less real.
Moral journalism means this: Do not only report what appears stable. Also report when #stability itself may be in #question. That is where real responsibility begins.
#press #economy #mainstream #health #politics #world #globalization #future #humanity #wisdom #ethics #compass #system #matrix #trump #biden #usa #whitehouse #government #humanrights #law #justice #epstein #epsteinfiles #epsteingate #conspiracy #sanewashing
Elon Musk wants to build an ice cream castle in the Horsehead Nebula
Elon Musk wants to ride a pink pegasus to the prom with his best gal Sydney Sweeney
Elon Musk wants to win a threepeat threepeat for the Chicago Bulls
Elon Musk wants to piss champagne and fart love potion
Elon Musk wants.
To many of us, the moon is precious and sacrosanct.
If that self possessed idiot gets to interfere with the moon, anything could happen.
Without the moon, we lose our strange tidal systems. Life on earth would change completely and it would be a complete catastrophe.
The moon also could be used as a global threat, against all states.
Use of the moon MUST be regulated internationally and not for a few greedy unthinking nut-cases.
A web & app suggestion to help you get off of iNaturalist - especially for Europeans and some Caribbean nations! (Read my pinned post to learn about why, as a scientist who used to love it, I now recommend leaving iNat.) https://observation.org/
Observation.org is a site older than iNaturalist, with the same basic concept, but some huge key differences. Like iNat, they use traditional machine learning on their database to help identify creatures from photos; sadly that feature is only available for Europe and a few other countries. However, anyone anywhere can contribute! Observations that are confirmed are pushed to GBIF, a database that scientists use, similar to how "research grade" observations on iNat are pushed to GBIF.
How does an observation become verified? Well, volunteer experts (from academia and the community alike) are tasked with this validation. This is a bit different from iNat, which is built on "anyone can confirm an ID". Validation is thus slower, but arguably more accurate, since someone cannot just go through and auto-agree with a computer-suggested ID without actual human knowledge - which happens a lot on iNat. Some of my colleagues in Europe, when they use GBIF, don't even bother with iNat and filter out those observations. Can you volunteer as an expert? Certainly! There is an application process.
You own all your own data on observation.org - You retain all intellectual property rights to your media, and you have full control over how your data is shared and the licensing of your media files. This includes deleting your data whenever you wish. You can find more information on this here: https://observation.org/tos/
I emailed and asked specifically about their current and future AI use plans, especially regarding things like GenAI. They clearly understood the differences and harms, and to quote a part of the reply, "We currently have no plans to incorporate LLMs or generative AI. Our focus is on using and developing AI to assist with species identification, and we do this in close collaboration with scientific institutions such as Naturalis Biodiversity Centre. We always adhere to EU law and directives."
I have been using it a bit, and I find it lacks a lot of our cave invertebrates, as it is much more European-focused. This is going to be a downside for many, but the good news is that you can get species added - there is a process for that. And most of our critters and plants and such do seem to be in it. Either way, even my rare findings can still be pushed to GBIF this way, it's just slower is all.
If you want the gamification and fast pace of iNat, you won't like observation.org.
If you thirst for careful identification, sharpening your skills outside of using computer assist, and still being able to contribute to GBIF, and knowing that the site will not be enshittified...well you will probably enjoy it! My guess is it did not take off because it is 'Less Accessible' due to limited ID help and slower confirmation and feedback. But, I think the people sick and tired of iNat will be the kind of people who enjoy Observation.org!
#iNat #iNaturalist #science #nature #community #communityScience #butterfly #bloomscrolling #hope #future #naturalist #biology #ecology #enshittification #climate #DivestDecember
Holy cow ... This video might seem innocent, but I implore you ... watch it up to the VERY end! https://www.youtube.com/watch?v=KtQ9nt2ZeGM He always made great videos, but this one might be his most important one to date, and maybe ever. No content warning. Nobody should be warned from learning the truth. #technology #solar #energy #future #resist
Polluting Earth isn't enough. The shit show expands into space… 😖
SpaceX Eyes 1 Million Satellites For Orbital Data Center Push
Not just thousands. In an FCC filing, the company mentions deploying up to a staggering 'one million satellites' in orbits ranging from 500 kilometers to 2,000km.
https://www.pcmag.com/news/spacex-eyes-1-million-satellites-for-orbital-data-center-push
#space #ElonMusk #SpaceX #satellites #environment #pollution #future #Earth #world #humanity #science #data #info #information
🧠✨ Researchers at #Cornell have developed a "MOTE" – a tiny wireless sensor that can record #brain activity while sitting on a grain of salt.
Powered by infrared light rather than bulky #batteries, these microscopic devices are small enough to monitor neural #health for years without irritating sensitive brain tissue.
#technology #tech #medicine #science #engineering #innovation #neuroscience #future #discovery