This dissertation examines how the rising deployment of artificial intelligence (AI) agents across service settings reshapes the social expectations placed on these technologies. As AI systems move beyond narrowly scripted tasks and begin to function as subordinates, coworkers, teachers, and leaders, they are no longer confined to technical or backstage work but increasingly occupy socially embedded roles that demand interaction abilities, collaboration skills, and leadership. Recognizing that AI agents inherit the social demands tied to the roles they assume, the dissertation shifts the conversation from what AI agents can do technically to how they must behave socially to be accepted and effective. Chapter 2 establishes the behavioral foundation of the dissertation by introducing emotional communication as a core capability, showing that even in the most limited service roles, AI agents must be able to enact simple, socially appropriate signals to support smooth interactions. Chapter 3 provides initial empirical evidence that although human-like appearance can elicit rapport in simple server interactions, these cues break down when service failures occur, underscoring the limits of appearance-based socialness. Building on this, Chapter 4 demonstrates that once AI agents enter peer-like coworker roles, successful collaboration depends on their ability to demonstrably honor core social norms—most notably reciprocity—because people are otherwise less willing to help them and more likely to harm them. Chapter 5 moves to high-authority teaching roles and shows that AI tutors must shape the social climate—for example, by fostering psychological safety through confidentiality—to support student engagement. Finally, Chapter 6 examines AI leaders and shows that their ability to exert social influence—for instance, through charismatic signaling—substantially enhances their motivational impact and their subordinates' performance.
The dissertation also introduces ResearchChatAI, a flexible, open-source experimental platform that enables researchers to script, deploy, and systematically test social behaviors in dynamic human–AI interactions, thereby opening new possibilities for studying enacted social behavior across roles. Together, these chapters advance three theoretical contributions: they foreground enacted behavior, rather than appearance, as the basis of social legitimacy; they refine the Computers Are Social Actors (CASA) paradigm by demonstrating that users attribute socialness selectively rather than uniformly; and they develop a contingency perspective that treats social behavior as beneficial only when it aligns with role-specific expectations. Practically, the dissertation highlights the need to design role-calibrated behavioral capabilities, tailor socialness to task demands, and prepare employees for socially competent human–AI collaboration.