5 things about AI you may have missed today: Chatbots spreading racist medical ideas, new AI investment tool, and more


AI Roundup: Cryptocurrency exchange platform Bitget announced the launch of its newest AI tool, called Future Quant, which leverages AI technology and sophisticated algorithms to provide users with data to make informed investment decisions. In a separate development, the Philippine military has been instructed to stop using AI apps due to potential security risks. All this, and more, in today's AI roundup.

1. Bitget introduces AI-powered tool

Cryptocurrency exchange platform Bitget announced the launch of its newest AI tool on Friday. As per a release, the AI tool, called Future Quant, leverages AI technology and sophisticated algorithms to provide users with premium portfolios and informed investment decisions. Bitget says that Future Quant does not require any human input and can use AI to automatically adjust settings in response to market dynamics.

2. Curbs on AI chips may help Huawei, analysts say

Amid the ongoing US curbs on the export of AI chips, Huawei Technologies may be able to expand its market in its home country, China, Reuters reported on Friday. Although Nvidia holds an almost 90 percent market share in China, the ongoing restrictions could help Chinese tech companies in the race to become the top AI chip supplier. Jiang Yifan, chief market analyst at brokerage Guotai Junan Securities, posted on his Weibo account, "This U.S. move, in my opinion, is actually giving Huawei's Ascend chips a huge gift."

3. Philippine military ordered to stop using AI apps

While the whole world is adopting AI, the Philippine military has been ordered to stop using AI apps, AP reported on Friday. The order came from Philippine Defense Secretary Gilberto Teodoro Jr. due to the security risks posed by apps that require users to submit several photos of themselves to create an AI likeness. "This seemingly harmless and amusing AI-powered application can be maliciously used to create fake profiles that can lead to identity theft, social engineering, phishing attacks and other malicious activities," Teodoro said.

4. AI chatbots are propagating racist medical ideas, research says

A new study led by the Stanford School of Medicine, released on Friday, found that while AI chatbots have the potential to help patients by summarizing doctors' notes and checking health records, they are also spreading racist medical ideas that have already been debunked. The research, published in a Nature journal, involved asking four AI chatbots, including ChatGPT and Google's Bard, medical questions related to kidney function and lung capacity. Instead of providing medically accurate answers, the chatbots responded with "incorrect beliefs about the differences between white patients and Black patients on matters such as skin thickness, pain tolerance, and brain size."

5. AI used to identify patients with spine fractures

The NHS ADOPT study has begun identifying patients with spine fractures using AI, a release issued by the University of Oxford said on Friday. The AI program, called Nanox.AI, analyzes computed tomography (CT) scans to detect spine fractures and alerts the specialist team for rapid treatment. The program was developed by the University of Oxford in collaboration with Addenbrooke's Hospital, Cambridge; medical imaging technology company Nanox.AI; and the Royal Osteoporosis Society.
