Last updated: 30 Oct 2023 10:00
Only 36% of businesses across the globe say security concerns are a priority as they integrate generative AI into their operations.
US cybersecurity company ExtraHop surveyed 1,200 IT and cybersecurity professionals from around the world and found that inaccurate responses were the top concern for businesses using GenAI tools, cited by 40% of respondents. Only 36% cited security issues, including the loss of employee and customer data, as their main concern.
Despite many companies prohibiting the use of GenAI tools such as ChatGPT, ExtraHop’s study found that these bans have been largely ineffective.
While a third of the businesses that took part in the study had banned GenAI tools, only 5% of respondents said their employees never use these tools at work.
ExtraHop’s survey also found that less than half (46%) of businesses had AI governance policies in place, and just over two in five (42%) trained employees on how to use AI tools.
In a separate survey, data analytics firm GlobalData found that just 17% of firms have so far integrated AI into their businesses. It added that training employees on proper AI use and managing the risks of generative AI should be top priorities for businesses eager to adopt the technology.
GlobalData’s principal analyst, Steven Schuchart, said that while AI cannot be “stuffed back into the bottle”, its rapid growth remains largely positive for businesses so long as they maintain good ‘cyber hygiene’ and ethical practices.
“There is a serious need for robust laws about how AI can be used,” Schuchart said. “This must include comprehensive strengthening of individual privacy and personal data sovereignty.” He also called for AI to be auditable and transparent.