We process a large volume of data, including campaign contribution records from the Federal Election Commission, for apprecs.com and our apps. Since we already have this FEC data, we figured we'd get sidetracked a bit and see what other uses it might have. It turns out to be pretty interesting.
Donalds, for example, have tended to be Republicans (3981 Democrats vs. 6419 Republicans). Jessicas have tended to be Democrats (1241 vs. 574).
We looked at all 20 million campaign contributions since 1996 and broke them out by name and by whether they went to the Democratic or Republican party. We flagged individuals who primarily made Republican contributions as Republicans, and likewise for Democrats. Then, for each name, we computed the ratio of apparent Democrats to apparent Republicans.
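Our actual pipeline works against the raw FEC files and is more involved, but the core of the method can be sketched in a few lines of Python. The field names and sample records below are made up for illustration:

```python
from collections import Counter, defaultdict

# Hypothetical input: one record per contribution, with a contributor id,
# the contributor's first name, and the party of the recipient committee.
contributions = [
    {"contributor_id": "A001", "first_name": "DONALD", "party": "REP"},
    {"contributor_id": "A001", "first_name": "DONALD", "party": "REP"},
    {"contributor_id": "B002", "first_name": "JESSICA", "party": "DEM"},
    {"contributor_id": "B002", "first_name": "JESSICA", "party": "REP"},
    {"contributor_id": "B002", "first_name": "JESSICA", "party": "DEM"},
]

# Step 1: flag each individual by the party they primarily contribute to.
party_counts = defaultdict(Counter)
names = {}
for c in contributions:
    party_counts[c["contributor_id"]][c["party"]] += 1
    names[c["contributor_id"]] = c["first_name"]

leanings = {cid: counts.most_common(1)[0][0] for cid, counts in party_counts.items()}

# Step 2: for each first name, tally apparent Democrats vs. Republicans.
by_name = defaultdict(Counter)
for cid, party in leanings.items():
    by_name[names[cid]][party] += 1

for name, counts in by_name.items():
    dem, rep = counts["DEM"], counts["REP"]
    print(f"{name}: {dem} Democrats vs. {rep} Republicans "
          f"({dem / (dem + rep):.1%} Democratic)")
```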
A quick search reveals those who donate as "Ted" tend to be in Republican territory (53.4%). What about Rafael, as in Rafael Edward "Ted" Cruz? Democrat (63.1%).
We went a bit further and looked at names vs. individual candidates for the 2016 presidential election. Who tends to contribute to Hillary vs. Bernie? Trump vs. Cruz? Finally, we looked at how much people contribute at a time, possibly an indicator of wealth. Who spends the most? Who spends the least?
One caveat here, much like in our previous Democratic vs. Republican Occupations chart, is that campaign contributors might not represent a perfect cross section of society. What if, say, Democrats are more likely to make campaign contributions than Republicans? That would skew all the names a bit more toward the left. That said, it seems unlikely that such behavior varies at the level of individual names, so a uniform skew to the left or right shouldn't be an issue when comparing names to each other (e.g., Donald is further to the right than Jessica).
Pretty fascinating stuff. Now, back to our regular programming.
UPDATE - I just ran the data against the current members of the US Congress, and 65.5% of members of the House of Representatives and Senate have party affiliations that match their name leanings. So their names, or at least what they choose to go by, predict their affiliations about two thirds of the time. Interesting!
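For the curious, that check amounts to a simple join between the name leanings and a roster of current members. A minimal sketch, with made-up data standing in for the real leanings table and congressional roster:

```python
# Hypothetical leanings table: first name -> party its donors lean toward.
name_leaning = {"TED": "REP", "JESSICA": "DEM", "DONALD": "REP"}

# Hypothetical roster of current House and Senate members.
members = [
    {"first_name": "TED", "party": "REP"},
    {"first_name": "JESSICA", "party": "REP"},
]

# Count members whose actual party matches their name's leaning.
matches = sum(1 for m in members if name_leaning.get(m["first_name"]) == m["party"])
print(f"{matches / len(members):.1%} of members match their name's leaning")
```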
We've been slogging away here at apprecs.com and are proud to announce some cool new features.
Got comments on the new features? Want something else? We'd love to hear from you!

We usually skim reviews before buying or downloading apps from the App Store™, but getting an accurate picture can be difficult, if not impossible. Many app makers manipulate their reviews and ratings in one way or another: they bribe users, purchase positive reviews, or use subtler tactics. How do you really know which reviews are genuine and which you should ignore?
In our efforts to offload this labor to apprecs.com, we've analyzed millions of reviews and developed a classification scheme.
At the top tier are the most trustworthy, and at the bottom are the least. Let's start with the ones we consider to be generally untrustworthy, pictured in red.
Aggressively Requested. These are reviews that result from the app excessively nagging the user for positive reviews. Users will often post a review to get the app to stop hounding them and leave them alone. Such a practice becomes obvious when the reviewer points it out, though a high frequency of very short reviews can also be a clue.

Filtered / Cherry-Picked. At the next level are filtered reviews. For such reviews, the app asks for users' opinions and then asks only those users that responded positively to post reviews. The other users — those who expressed negative opinions — are not asked for reviews. This system serves to cherry-pick reviews and skew the app's overall rating in a positive direction.

Network-Sourced. Below that level dwell the reviews from a network of some sort. This could be the app developer's personal network of friends and family. It could be an online community where the developer has requested reviews to boost the app's ranking. Or it could be a review exchange network, providing tit-for-tat reviews ("I'll rate yours if you rate mine"). On the surface, such reviews can be virtually indistinguishable from organic reviews, but many can be detected by looking for similar review patterns across multiple reviewers, as sketched after the tiers below.
Reward-Driven. One level down lie the incentivized reviews. These are the ones where the app users expect to receive rewards of some sort for posting their reviews. Apps may promise to unlock a feature, dole out free coins or gems, or give some other bonus. You're not exactly given cash to post a review, but you're given something else of value, and you might otherwise have had to spend money to get it.

Paid. Way down at the bottom, in the bowels of review purgatory, are the paid reviews. Be they from a review mill in China or from, say, a fiverr freelancer, they're driven by a pure cash motive, most certainly not a selfless desire to spread truth.
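We won't go into how our classifier actually works, but to give a flavor of the "similar review patterns" signal mentioned under Network-Sourced, here's a toy sketch: compare how much two reviewers' review histories overlap. The data and threshold are invented for illustration, and a real detector would weigh many more signals:

```python
from itertools import combinations

# Hypothetical data: the set of apps each reviewer has reviewed.
reviews_by_user = {
    "alice": {"app1", "app2", "app3", "app9"},
    "bob":   {"app1", "app2", "app3", "app8"},
    "carol": {"app5"},
}

def jaccard(a, b):
    """Fraction of overlap between two sets of reviewed apps."""
    return len(a & b) / len(a | b)

# Flag pairs of reviewers whose histories overlap suspiciously.
SUSPICIOUS_OVERLAP = 0.5
for (u1, apps1), (u2, apps2) in combinations(reviews_by_user.items(), 2):
    score = jaccard(apps1, apps2)
    if score >= SUSPICIOUS_OVERLAP:
        print(f"{u1} and {u2} reviewed many of the same apps ({score:.0%} overlap)")
```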

Apple's policy (App Store Review Guideline 3.10) dictates that developers not attempt to manipulate reviews, but it's clear that many do so anyway.
The most trustworthy reviews are the ones that users write spontaneously, without any incentive ("organic" reviews). Unfortunately, they can be very hard for apps to acquire, and reviews or the lack thereof can make or break an app.
To cope with this, apps often employ an occasional request for an objective review. Though reviews triggered that way might not be quite as unbiased as spontaneous ones, this practice appears to be allowed within the App Store, and it can serve to counteract the phenomenon where users with negative feelings are more vocal than others.
Reviews are vital to an app's success, and it's very difficult for honest app developers to gain a foothold in the app market when they must compete with other app developers that violate that 3.10 guideline. Furthermore, biased reviews make it difficult for users to discover the absolute best apps.
Have you ever been suspicious of an app with too many glowing reviews? Ever wondered if some of those reviews might not be entirely trustworthy?
We have, and it got us thinking. What if there were a way to identify and filter out those reviews? If we could somehow do that, how would the overall rating for each app change? Would some apps drop from, say, 4 stars to 2 stars?
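As a toy illustration of what we mean (not our actual scoring), here's how an app's average rating can shift once reviews flagged as untrustworthy are set aside; the ratings and trust flags below are made up:

```python
from statistics import mean

# Toy reviews for one app: a star rating plus a trust flag from some classifier.
reviews = [
    {"stars": 5, "trusted": False},  # e.g. classified as paid or incentivized
    {"stars": 5, "trusted": False},
    {"stars": 2, "trusted": True},
    {"stars": 3, "trusted": True},
]

raw_rating = mean(r["stars"] for r in reviews)
adjusted_rating = mean(r["stars"] for r in reviews if r["trusted"])
print(f"raw: {raw_rating:.1f} stars, adjusted: {adjusted_rating:.1f} stars")
```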
And... if we could get more accurate ratings, could we use that information for other things?
How about improving the app search engine? We'd want a search feature that filters apps by the more accurate rating. While we're at it, let's filter by other useful things, such as how long it's been since the app was last updated. Let's also use it to make really good app recommendations.
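To make the idea concrete, here's a rough sketch of that kind of filtering, assuming each search result already carries an adjusted rating and a last-updated date. The apps, rating threshold, and staleness cutoff below are invented for illustration:

```python
from datetime import date

# Hypothetical search results, annotated with an adjusted rating
# and the date of the app's last update.
apps = [
    {"name": "App A", "adjusted_rating": 4.6, "last_updated": date(2016, 3, 1)},
    {"name": "App B", "adjusted_rating": 4.8, "last_updated": date(2013, 7, 15)},
    {"name": "App C", "adjusted_rating": 2.9, "last_updated": date(2016, 1, 20)},
]

AS_OF = date(2016, 6, 1)          # reference date for the staleness check
MIN_RATING = 4.0                  # minimum adjusted rating to keep
STALE_AFTER_DAYS = 365 * 2        # treat apps untouched for two years as stale

results = [
    a for a in apps
    if a["adjusted_rating"] >= MIN_RATING
    and (AS_OF - a["last_updated"]).days <= STALE_AFTER_DAYS
]
results.sort(key=lambda a: a["adjusted_rating"], reverse=True)
print([a["name"] for a in results])
```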
You guessed it -- we built all of this. It's called apprecs.com. Think of it as an upgraded interface to the App Store®.
The website's fast and simple, but powerful. Here's why we think it's awesome:
AppRecs is completely free, and we strive to be as transparent and straightforward as possible. We don't sell favorable placements, we don't sell data, and we don't even show ads. We receive income solely from Apple's affiliate program, which sends us a small percentage of their revenue if you end up buying any apps via referral.
We're just getting started. Android support is next on the roadmap, as are enhancements to review classification and search result ordering. If you have any comments or questions, we'd love to hear from you.