Princeton should proceed with caution in its “AI Hub” partnership (On the Campus, March issue) so that the enterprise does not come at the expense of careful deliberation about how best to address the grave risks of accelerating, unregulated AI deployment. PAW notes that the AI Hub’s goals include building “a thriving regional AI economy” and that companies with business interests in AI are among its partners. That’s all fine and dandy, as AI promises real benefits to humanity, but Princeton must pursue such an enterprise consistent with its primary responsibility as a nonprofit educational organization. That means meaningful partnership with scholars and public-interest organizations working to identify and mitigate the recognized harms AI may foist upon us: the proliferation of disinformation, ease of abuse by bad actors and “bots,” wrongful harvesting of copyrighted art and content without the consent of artists and authors, and potentially tragic, massive unemployment in sectors susceptible to rapid AI deployment. When it comes to AI, Princeton should prioritize an academic approach grounded in critical thinking about AI’s potential harms to society, or we risk weakening the educational model that is the foundation of the University’s true greatness.