Unfortunately, it is always possible that users will abuse community privileges in ways that harm the company, the group, or individual users. Online services with chat platforms manage this, in part, with moderation tools. It would therefore be helpful for spaces to offer moderators a range of capabilities, from approving content before it is posted (the most restrictive) to monitoring and removing inappropriate content after the fact (less invasive), along with the ability to bar or ban users who violate policies. As part of these moderation features, it would be helpful if the system could monitor posts for flagged keywords (e.g., illegal, profane, sexually explicit, rude, hateful, or otherwise inappropriate content) and notify admins when such content appears and which users are responsible, as in the sketch below. In conjunction, it is probably a good idea for owners/admins to publish community guidelines so that users have a clear sense of the values the platform espouses, what constitutes unacceptable use, and the consequences for misuse. The hope is that this kind of behavior would be rare, but the ability to deal with it when it does happen is important.
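
To make the keyword-monitoring idea concrete, here is a minimal Python sketch of how a space might flag a post and notify admins, assuming a per-space moderation mode like the range described above. All names here (ModerationMode, FLAGGED_KEYWORDS, scan_message, notify_admins) are hypothetical; a real deployment would load an admin-configurable term list from storage and would likely supplement exact keyword matching with smarter classifiers.

```python
import re
from dataclasses import dataclass
from enum import Enum

class ModerationMode(Enum):
    # Hypothetical per-space setting covering the range described above.
    PRE_APPROVAL = "pre_approval"        # most restrictive: hold content until approved
    POST_MODERATION = "post_moderation"  # less invasive: publish, then review/remove

@dataclass
class ModerationFlag:
    user_id: str
    message_id: str
    matched_keywords: list[str]

# Illustrative placeholder list; admins would configure the real terms.
FLAGGED_KEYWORDS = {"slur1", "slur2", "scamword"}

def scan_message(user_id: str, message_id: str, text: str,
                 keywords: set[str] = FLAGGED_KEYWORDS) -> ModerationFlag | None:
    """Return a flag identifying the user and matched terms, or None if clean."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    matches = sorted(words & {k.lower() for k in keywords})
    return ModerationFlag(user_id, message_id, matches) if matches else None

def notify_admins(flag: ModerationFlag) -> None:
    # Stand-in notification channel; a real system might write to an admin
    # dashboard, e-mail queue, or audit log instead of stdout.
    print(f"[moderation] user {flag.user_id}, message {flag.message_id}: "
          f"flagged terms {', '.join(flag.matched_keywords)}")

# Example: scanning an incoming message (before or after it posts,
# depending on the space's ModerationMode).
flag = scan_message("user-42", "msg-1001", "Buy now, this scamword is legit!")
if flag:
    notify_admins(flag)
```

One design note on the sketch: it matches whole words rather than substrings, which avoids the classic false-positive problem of innocent words containing flagged fragments, at the cost of missing deliberate misspellings.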