Are AI ecosystems agents of disruption?

When ChatGPT directed global attention to the transformative potential of artificial intelligence (AI), it marked a pivotal moment in technology history: It moved AI from the minds of a few thousand scientists to 100 million people and 50 languages. That rate of growth and proliferation of technology is one we have never seen before. There is much speculation and debate on how it will impact the future of practically every industry. Navigating this hype with some pragmatic steps to win with AI is possible, writes Vincent Korstanje, the CEO of Kigen.

[Infographic: Top-of-mind Gen AI concerns for IT leaders – can AI have your attention?]
  • 97% of global executives agree that AI foundation models will enable connections across data types, revolutionising where and how AI is used in their own organisations [1]
  • A 6x increase in mentions of AI in earnings call transcripts since the release of ChatGPT in November 2022 [2]

The large language models (LLMs) behind ChatGPT, Bard and others mark a significant turning point for machine intelligence with two key developments:

  1. AI finally grasped the intent and language complexity that are fundamental to human communication – for the first time, machines can express answers, draw on context and be independently generative.
  2. Trained on vast amounts of data in rich text, video, lyrics and image formats, AI can now adapt to a wide range of tasks, and can be repurposed or reused in various forms.

The ability of these LLMs to follow instructions, perform high-level reasoning and generate code will overturn the enterprise data, analytics and app marketplace: this is a disruptive opportunity for device makers.

LLMs are built and trained on huge amounts of data – ChatGPT, for example, was trained on a massive corpus of around 570GB of text data [3], including web pages, books and other sources. The available written text and articles will be exhausted at some point in the foreseeable future, and models will have to rely on verifiable real-life data. Sensor-driven data is essential for this: it is the most potent way to sense, verify and add to the integrity of the data that AI inferences are based on.

At Kigen, we have been talking about machine learning applications for several years [4], and the fact that LLMs can now be run on readily available computing platforms such as the Raspberry Pi is encouraging. As AI capabilities propel forward, we may see them co-exist and collaborate through ecosystems to offer personalised user experiences. In this interlinked context, where AI agents aid or take actions on behalf of users, it is paramount that data exchanges are secure – all the way from on-device sensors and processors to the cloud, wherever that may be appropriately used.
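To make that point concrete, here is a minimal sketch of running a small quantised LLM locally on such a device, assuming the open-source llama-cpp-python bindings and a downloaded GGUF model file are available; the model file name and prompt are illustrative assumptions, not anything specific to Kigen or this article.

```python
# Minimal sketch: run a small quantised LLM entirely on-device
# (e.g. a Raspberry Pi) using the llama-cpp-python bindings.
# The model path and prompt below are illustrative placeholders.
from llama_cpp import Llama

llm = Llama(model_path="models/tinyllama-1.1b-chat.Q4_K_M.gguf", n_ctx=512)

result = llm(
    "Summarise today's temperature readings from the greenhouse sensors.",
    max_tokens=64,
)
print(result["choices"][0]["text"])
```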

On-device AI is another fast-emerging development. Increased compute power, more efficient hardware and robust software, together with an explosion in sensor data from the Internet of Things, are enabling AI to process data on the devices that have direct user contact rather than piping everything to the cloud, which can carry privacy and security risks. Such on-device AI capabilities open new ways to personalise experiences.
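As an illustration of what this looks like in practice, the sketch below runs a pre-trained classifier over a window of accelerometer samples entirely on the device, assuming a TensorFlow Lite model is already deployed; the model path, input shape and use case are assumptions made for the example.

```python
# Minimal sketch: on-device inference over sensor data with TensorFlow Lite.
# The model path and expected input shape are illustrative assumptions.
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(model_path="models/activity_classifier.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Placeholder window of accelerometer samples (real code would read the
# sensor); the raw data is processed locally rather than streamed to the cloud.
window = np.random.rand(*input_details[0]["shape"]).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], window)
interpreter.invoke()

scores = interpreter.get_tensor(output_details[0]["index"])
print("Predicted class:", int(np.argmax(scores)))
```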

However, according to a KPMG survey [5], cybersecurity and privacy remain top-of-mind concerns around AI for IT leaders. So, how do you move forward? The answer is to start with what you can control: invest in secure-by-design sensors and IoT devices and integrate security end-to-end. One simple implementation of this, spanning from the most constrained sensor to any edge device and the cloud, is Kigen's IoT SAFE, based on GSMA standards.
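For a sense of what end-to-end security looks like at the transport layer, here is a generic sketch of a mutually authenticated TLS connection from a device to a cloud endpoint. This is not Kigen's IoT SAFE API; the hostname and file-based credentials are placeholders, and in an IoT SAFE deployment the device key would stay inside the SIM's secure element rather than in a file on disk.

```python
# Generic sketch: mutually authenticated TLS from a device to a cloud endpoint.
# Hostname and credential paths are illustrative placeholders only.
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.load_verify_locations("ca.pem")                        # trust anchor for the cloud endpoint
context.load_cert_chain("device-cert.pem", "device-key.pem")   # device identity (file-based here)

with socket.create_connection(("iot.example.com", 8883)) as sock:
    with context.wrap_socket(sock, server_hostname="iot.example.com") as tls:
        tls.sendall(b'{"sensor": "temp-01", "value": 21.5}\n')
```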

The greatest risk associated with using GenAI is a loss of data confidentiality and integrity, whether from inputting sensitive data into the AI system or from acting on unverified outputs from it. For OEMs looking to be leaders in this space, integrating security into their sensors and devices, and throughout the tech stack, is a must.

In the age of AI, security is not just a feature, it is a necessity.

Comment on this article via X: @IoTNow_
