As to "unannotated" I gave two recent and very well known noteworthy examples.
The black Nazi Storm tropper was produced by Google's much vaunted Gemini artificial intelligence model: based apparently on the idea that history was up for woke racial diversity. Interesting to note how it even bastardised the swastika and other Nazi military symbols: clearly in case anyone found them " offensive"...
The other is, of course, Israel's "Lavender", used to help identify bombing targets in Gaza.
This "AI-based tool" collects information on “almost everybody” in the Gaza Strip, and then supposedly identifies "... how probable it is that the targeted individuals are connected to Hamas or Islamic Jihad military wings."
" Underestimating something you don’t like seems a bit complacent to me, and that is just putting it mildly. "
What's being "underestimated" are the dangers inherent in these systems. The above examples highlight (rather concisely, I think) the problems with trusting such AI systems when, as can be seen, there's clearly a systemic issue: they are merely supplying the kinds of "answers" the owners of the system want to see. That is where the "complacency" lies.
Perhaps it also needs to be underlined again that others "overestimate" the value of these systems to the extent of allowing them to target human beings for murder.
SISO: shit in = shit out, because in both cases the systems have been loaded with material that is (probably deliberately) skewed with "confirmation bias". It's not an auspicious start.