TikTok Expands AI-Powered Content Moderation:
TikTok is making strides to improve its platform’s safety by incorporating artificial intelligence (AI) to identify and remove violative content. The company has announced that automated systems will detect and remove certain types of violative uploads, supplementing the content removed manually by its Safety Team.
According to Trusted Reviews, TikTok’s automated system will initially target violations concerning minors’ safety, adult nudity and sexual activities, violent and graphic content, and illegal activities and regulated goods. The company will gradually improve the system’s accuracy to avoid incorrect removals, and creators can appeal a video’s removal directly if they believe it was removed unfairly.
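The workflow described above, automated removal by policy category plus a creator-initiated appeal, can be sketched roughly as follows. This is an illustrative model only: the category names, thresholds, and function names are assumptions, not TikTok's actual system.

```python
from dataclasses import dataclass

# Hypothetical policy categories mirroring those named in the article;
# the confidence thresholds are illustrative, not TikTok's real values.
THRESHOLDS = {
    "minor_safety": 0.80,
    "adult_nudity": 0.90,
    "violent_graphic": 0.85,
    "illegal_regulated_goods": 0.85,
}

@dataclass
class Video:
    video_id: str
    scores: dict          # category -> model confidence in [0, 1]
    removed: bool = False
    appealed: bool = False

def auto_moderate(video: Video) -> bool:
    """Remove the video if any category score meets its threshold."""
    for category, threshold in THRESHOLDS.items():
        if video.scores.get(category, 0.0) >= threshold:
            video.removed = True
            return True
    return False

def appeal(video: Video) -> None:
    """Creators can flag an automated removal for human re-review."""
    if video.removed:
        video.appealed = True  # queued for the Safety Team

clip = Video("v1", {"adult_nudity": 0.95})
auto_moderate(clip)   # removed: 0.95 >= the 0.90 threshold
appeal(clip)          # creator disputes the automated decision
```

In this sketch, appeals route removed videos back to human moderators, which is how such systems typically balance automation against incorrect removals.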
Paul Bischoff, a Privacy Advocate at Comparitech, explained to Trusted Reviews that the AI system would be more efficient at removing content than human moderators, since a human bottleneck can allow violative content to spread before removal, or to escape removal entirely. The automated system works faster and can remove violative content 24/7 without being subject to human error or fatigue.
However, Chris Hauk, a Consumer Privacy Champion at Pixel Privacy, noted that the new system might unintentionally remove non-violative content, and that it could take some time to refine the algorithm and reduce false positives.
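The tension Hauk describes is the classic threshold tradeoff: setting the removal threshold lower catches more violative content but wrongly removes more legitimate videos, while setting it higher does the reverse. A minimal sketch on synthetic scores (the numbers are invented for illustration):

```python
# Illustrative only (not TikTok's data): how the removal threshold trades
# false positives (benign videos removed) against false negatives
# (violative videos missed), on made-up confidence scores.
benign_scores    = [0.10, 0.30, 0.55, 0.70]   # non-violative videos
violative_scores = [0.60, 0.75, 0.85, 0.95]   # genuinely violative videos

def error_rates(threshold):
    false_positives = sum(s >= threshold for s in benign_scores)
    false_negatives = sum(s < threshold for s in violative_scores)
    return false_positives, false_negatives

for t in (0.5, 0.7, 0.9):
    fp, fn = error_rates(t)
    print(f"threshold={t}: {fp} wrongly removed, {fn} missed")
# threshold=0.5: 2 wrongly removed, 0 missed
# threshold=0.9: 0 wrongly removed, 3 missed
```

"Refining the algorithm" in practice means improving the classifier so the two score distributions overlap less, which lets a platform lower false positives without missing more violations.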
Tom Gaffney, a Security Consultant for F-Secure, expressed his concerns about how content is classified in different regions since social norms differ globally. While TikTok will aim to minimize false positives to a reasonable degree, Gaffney acknowledged that some mistakes are inevitable.
Overall, TikTok is taking steps to make its platform safer for its one billion users across more than 150 countries. While the introduction of automated systems may present some challenges, the incorporation of AI marks a significant improvement in identifying and removing violative content, helping create a safer environment for users.