News

Generative AI a Top Emerging Risk for Organizations: Gartner Survey

By digitechlifestyle · 11 August 2023 · 3 Mins Read



In a recent survey of risk executives at 249 organizations conducted by American tech research and consulting firm Gartner, generative AI models like OpenAI’s ChatGPT were named the second greatest emerging risk to enterprise.

According to a Tuesday blog post on the survey by Gartner, experts at the consultancy identified three pressing points that need to be addressed to mitigate risk from large language models (LLMs) like ChatGPT.

Two of the concerns—intellectual property rights and data privacy—are compromised by the current ambiguity around how ChatGPT uses its dataset to generate its outputs.

If, for instance, a company’s intellectual property or sensitive data are inputted as prompts into ChatGPT while chat history is not disabled, they could be outputted as unsourced responses to users and organizations outside the enterprise.
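One common mitigation for the exposure described above is scrubbing sensitive material from prompts before they leave the enterprise. A minimal sketch in Python, assuming some hypothetical sensitivity patterns (the patterns, ID format, and function name are illustrative, not a Gartner recommendation; real deployments would rely on a dedicated data-loss-prevention tool):

```python
import re

# Hypothetical patterns an organization might treat as sensitive;
# a real deployment would use a proper DLP tool, not ad-hoc regexes.
SENSITIVE_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),      # US SSN-style numbers
    (re.compile(r"\b[A-Z]{2}\d{6}\b"), "[REDACTED-ID]"),           # made-up internal ID format
    (re.compile(r"(?i)project\s+\w+"), "[REDACTED-PROJECT]"),      # internal project names
]

def redact(prompt: str) -> str:
    """Strip sensitive substrings from a prompt before it is sent to an external LLM."""
    for pattern, replacement in SENSITIVE_PATTERNS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

print(redact("Summarize Project Falcon status for employee 123-45-6789."))
```

Redaction of this kind only addresses the outbound-prompt path; it does not resolve the underlying ambiguity about how a model's training data is reused in its outputs.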

Cybersecurity is the third area of concern. Hackers have recently been able to trick ChatGPT into generating malware and ransomware code, leading to what Gartner calls the “industrialization of advanced phishing attacks.”

In an earlier blog post, Gartner also flagged generative AI’s sometimes fabricated or inaccurate answers, known as “hallucinations,” along with its potential to undermine consumer trust (for example, when consumers don’t realize they’re chatting with a machine rather than a live customer support agent) and its output biases: in one case, an Asian-American student’s LinkedIn profile picture was rendered white when she used generative AI to edit it.

Decrypt reached out to Gartner to ask whether organizations are taking timely and practical action to respond to the perceived risk, but did not receive an immediate response.

The road to legislation

To advocates, AI is a tool that will lighten our workloads, improve our designs and potentially usher in a new era in health, learning, work, recreation, creativity and just about every other human endeavor.

However, a growing number of tech experts and luminaries are increasingly vocal about the need to globally regulate the development of advanced machine learning systems to prepare for the advent of human-level artificial cognition, aka Artificial General Intelligence (AGI).

In April, OpenAI developer advocate Logan Kilpatrick assured his followers on Twitter that work had not commenced on GPT-5 and would not “for some time.”

The announcement came soon after a petition calling for a halt to the development of systems more powerful than GPT-4 attracted well over a thousand signatures from prominent technologists and researchers, including Tesla CEO Elon Musk and Apple co-founder Steve Wozniak.

The following month, executives from Microsoft, Google, and ChatGPT progenitor OpenAI sounded a stark warning to governments about the failure to adequately prepare for advanced systems.

In June, the European Parliament, the legislative body of the European Union, voted overwhelmingly in favor of a draft of the Artificial Intelligence Act, a comprehensive piece of legislation that aims to set the global standard. It categorizes AI risks as “unacceptable,” “high,” or “limited.”
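The Act’s tiered approach can be pictured as a simple lookup from use case to risk tier. A toy sketch (the example use cases are commonly cited illustrations, not text from the Act or the survey, and the function name is invented for this sketch):

```python
# Toy mapping of example AI use cases to the draft AI Act's three risk tiers.
# The use cases here are illustrative, not a legal reading of the Act.
RISK_TIERS = {
    "social scoring by public authorities": "unacceptable",
    "CV screening for recruitment": "high",
    "customer-service chatbot": "limited",
}

def classify(use_case: str) -> str:
    """Return the draft Act's risk tier for a use case, or 'unclassified' if unknown."""
    return RISK_TIERS.get(use_case, "unclassified")

print(classify("CV screening for recruitment"))
```

Which tier a system lands in matters because the obligations scale with it, which is precisely what the lobbying described below is about.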

Meanwhile, ChatGPT creator OpenAI has been lobbying European lawmakers not to classify its systems as “high risk,” which would subject them to stringent legal requirements. OpenAI argues that, should high-risk uses suddenly emerge, the added bureaucratic layers would significantly delay the company’s ability to respond.
