Unfortunately, you can't circumvent this restriction, but I can help you model the data in a slightly different way.
First off, Bigtable is suited to very fast reads from large databases, the kind you need when a million people are hitting your app at the same time. What you're trying to do here is a report on historical data. While I would recommend moving the reporting to an RDBMS, there is a way you can do it on Bigtable.
First, override the put() method on your item model to split the date into its components before saving. Something like this:
    def put(self):
        # Denormalize the date so each component can later be
        # filtered with a plain equality.
        self.manufacture_day = self.manufacture_date.day
        self.manufacture_month = self.manufacture_date.month
        self.manufacture_year = self.manufacture_date.year
        # Name the class explicitly; super(self.__class__, self)
        # recurses forever if the model is ever subclassed.
        return super(Item, self).put()
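For reference, this assumes the extra fields are declared on the model alongside the original date. A minimal sketch, where Item and the property names are just illustrative:

    from google.appengine.ext import db

    class Item(db.Model):
        manufacture_date = db.DateTimeProperty()
        # Denormalized copies of the date parts, queryable by equality.
        manufacture_day = db.IntegerProperty()
        manufacture_month = db.IntegerProperty()
        manufacture_year = db.IntegerProperty()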
You can do this to any level of granularity you want, even hours, minutes, seconds, whatever.
You can apply this retroactively to your data by simply loading and re-saving your item entities; the mapper API is very convenient for this.
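If you don't want to pull in the mapper, here's a minimal cursor-based sketch of the same backfill (Item and the batch size are assumptions, adapt to your model). One gotcha: the module-level db.put(list) bypasses overridden instance methods, so each entity has to be saved individually:

    def backfill_dates(batch_size=100):
        # Walk the whole Item table in batches, re-saving each entity
        # so the overridden put() fills in the new date fields.
        query = Item.all()
        items = query.fetch(batch_size)
        while items:
            for item in items:
                item.put()  # db.put(items) would skip the override
            query.with_cursor(query.cursor())
            items = query.fetch(batch_size)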
Then change your query to use the inequality only on the item count, and select the days / months / years you want using ordinary equality filters. You can do ranges by either firing multiple queries or using the IN operator (which does the same thing under the hood anyway).
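For example, a query for items manufactured in March 2009, with the one allowed inequality on a hypothetical item_count property, might look like this:

    # Inequality on a single property, equalities on the date parts.
    q = Item.all()
    q.filter('item_count >=', 10)
    q.filter('manufacture_year =', 2009)
    q.filter('manufacture_month =', 3)

    # A range of months via IN, which the datastore expands into
    # one sub-query per value behind the scenes.
    q2 = Item.all()
    q2.filter('item_count >=', 10)
    q2.filter('manufacture_year =', 2009)
    q2.filter('manufacture_month IN', [3, 4, 5])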
This may seem contrived, but keep in mind that if you do it this way your reports will run almost instantaneously, even when millions of people try to run them at the same time. You might not need that kind of scale, but well... that's what you get :D