Everyone knows that getting your app on Apple’s App Store is a must for app success. What people may not know, however, is how strict Apple’s requirements are when it comes to moderating your app’s content. Even an app with seemingly innocent content can be rejected if the right checks and balances are not in place. Moderation is especially important if your app allows user-generated content (UGC), where you often cannot anticipate what people, even kids, will post, which is why the App Store takes these precautions.
We have had numerous clients come to WebPurify for help after being rejected by Apple. Here is some essential information to help you navigate Apple’s stringent app requirements around moderation and safety.
Apple’s App Moderation Requirements
Under its guidelines, the app giant says, “The guiding principle of the App Store is simple—we want to provide a safe experience for users to get apps and a great opportunity for all developers to be successful.” It addresses safety even before it addresses performance, business, design, and legal issues. Where online media are concerned, a big part of safety today is the ability to filter objectionable content that is user generated. Here’s what Apple says about UGC:
“Apps with user-generated content present particular challenges, ranging from intellectual property infringement to anonymous bullying. To prevent abuse, apps with user-generated content or social networking services must include:
- A method for filtering objectionable material from being posted to the app
- A mechanism to report offensive content and timely responses to concerns
- The ability to block abusive users from the service
- Published contact information so users can easily reach you
Apps with user-generated content or services that end up being used primarily for pornographic content, objectification of real people (e.g. “hot-or-not” voting), making physical threats, or bullying do not belong on the App Store and may be removed without notice. If your app includes user-generated content from a web-based service, it may display incidental mature “NSFW” content, provided that the content is hidden by default and only displayed when the user turns it on via your website.”
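The four requirements above map naturally onto a small set of service operations. The following is a minimal sketch, not a production implementation: every class, function, and wordlist here is hypothetical, and a real filter (such as WebPurify’s) is far more sophisticated than a static term list.

```python
# Hypothetical sketch of the four capabilities Apple requires of UGC apps:
# (1) filter objectionable posts, (2) accept abuse reports for timely review,
# (3) block abusive users, (4) publish contact information.
from dataclasses import dataclass, field

BLOCKED_TERMS = {"badword"}              # placeholder wordlist; real filters are far broader
SUPPORT_CONTACT = "support@example.com"  # published contact info (requirement 4)

@dataclass
class UGCService:
    reports: list = field(default_factory=list)      # pending abuse reports (requirement 2)
    blocked_users: set = field(default_factory=set)  # blocked accounts (requirement 3)

    def filter_post(self, user: str, text: str) -> bool:
        """Reject posts from blocked users or containing blocked terms (requirement 1)."""
        if user in self.blocked_users:
            return False
        return not any(term in text.lower() for term in BLOCKED_TERMS)

    def report_content(self, post_id: str, reason: str) -> None:
        """Queue a report so a human can respond in a timely way (requirement 2)."""
        self.reports.append({"post_id": post_id, "reason": reason})

    def block_user(self, user: str) -> None:
        """Remove an abusive user from the service (requirement 3)."""
        self.blocked_users.add(user)

svc = UGCService()
svc.block_user("troll42")
print(svc.filter_post("troll42", "hello"))  # False: posts from blocked users never appear
print(svc.filter_post("alice", "hello"))    # True: clean post from an unblocked user
```

The point of the sketch is that all four requirements are app-level behaviors Apple can test for during review, so each one needs a visible hook in your product, not just a policy document.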
Using Instagram on Your App
If your app pulls in content from Instagram, you may run into a common dilemma. The App Store requires a way to flag and remove inappropriate content and offensive users, but Instagram’s API does not support flagging content. Even though an app that uses InstagramKit to pull in this content does not technically “generate” it, Apple has been known to reject such apps anyway. Using solutions like WebPurify’s algorithms and live teams to identify images and text comments that may be unsuitable for Apple is a key step toward getting your app approved.
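One practical pattern is to pre-screen every piece of pulled-in content before it reaches your users, hiding anything that fails moderation by default. The sketch below illustrates the flow only: `moderate_image` is a hypothetical stub standing in for a real moderation service, and the deny-list URL is invented for the example.

```python
# Hypothetical sketch: pre-screen third-party (e.g. Instagram-sourced) images
# before display. `moderate_image` stands in for a real moderation call (for
# instance to an image-moderation API); here it is stubbed with a deny list.
FLAGGED_URLS = {"https://example.com/nsfw.jpg"}  # placeholder moderation verdicts

def moderate_image(url: str) -> bool:
    """Return True if the image is safe to display (stubbed verdict)."""
    return url not in FLAGGED_URLS

def displayable_feed(urls: list[str]) -> list[str]:
    """Keep only images that pass moderation; everything else stays hidden by default."""
    return [u for u in urls if moderate_image(u)]

feed = ["https://example.com/cat.jpg", "https://example.com/nsfw.jpg"]
print(displayable_feed(feed))  # only the safe image survives
```

Because the screening happens on your side of the pipeline, it works even though Instagram’s API itself offers no flagging mechanism, which is exactly the gap Apple expects you to close.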
Having an End User License Agreement (EULA)
For Apple to approve an app that allows UGC, the app must also have a EULA in place. This is an agreement users accept when they sign up for your app. It states that there is a zero-tolerance policy for objectionable content, spells out what that includes, and makes clear that the app will moderate all content and decide whether it is appropriate to remain in the app.
Additional Reasons for App Store Rejection
There are more reasons an app could be rejected under Apple’s “Safety” requirements, including content that is inappropriate for kids, behavior that risks physical harm, and failure to include developer contact information. Read more here.
How WebPurify Can Help Get Your App Approval-Ready
Through our Profanity Filter and our live and automated Image and Video Moderation solutions, WebPurify is your app’s method for filtering objectionable content and your behind-the-scenes moderation team. We are also your consultants on this matter, helping you determine which of our services is the best fit for your unique app.
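To give a feel for how a text check slots into an app backend, here is a sketch that only constructs the request URL for a profanity check; it sends nothing. The endpoint and parameter names reflect our reading of WebPurify’s public REST interface and should be verified against the current API documentation before use.

```python
# Sketch: build the URL for a WebPurify profanity-check REST call. The endpoint
# and parameter names are assumptions based on WebPurify's public REST API and
# should be checked against current docs. No network request is made here.
from urllib.parse import urlencode

API_ENDPOINT = "https://api1.webpurify.com/services/rest/"

def build_check_url(api_key: str, text: str) -> str:
    """Assemble the query string for a text-moderation check (JSON response)."""
    params = {
        "api_key": api_key,              # your WebPurify API key
        "method": "webpurify.live.check" , # assumed method name for a profanity check
        "text": text,                    # the user-generated text to screen
        "format": "json",
    }
    return API_ENDPOINT + "?" + urlencode(params)

url = build_check_url("YOUR_API_KEY", "some user comment")
print(url)
```

In a real integration you would issue this request server-side before accepting a post, and reject or queue for review any text the service flags.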
Contact us to discuss your app and how to filter objectionable content.