From your description it sounds like your data already fits on a single machine, so sharding may not even be necessary. You can create a clustered index on your date-time column. Building that index could itself take a significant amount of time, but once it exists, selecting the 16M rows you need to process should be fairly quick.
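As a rough sketch of the effect, here is a SQLite-based illustration (table and column names are hypothetical; SQLite has no clustered indexes, so a plain secondary index on the date-time column is the closest analogue, and a true clustered index in SQL Server or InnoDB would additionally order the rows physically):

```python
import sqlite3
from datetime import datetime, timedelta

# Hypothetical schema: an "events" table with an "event_time" column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (event_time TEXT, payload INTEGER)")
base = datetime(2023, 1, 1)
rows = [((base + timedelta(minutes=i)).isoformat(), i) for i in range(10_000)]
conn.executemany("INSERT INTO events VALUES (?, ?)", rows)

# Index the date-time column so range scans don't read the whole table.
conn.execute("CREATE INDEX idx_events_time ON events(event_time)")

# A date-range query over the indexed column now uses the index instead
# of a full table scan, which is what makes the 16M-row selection fast.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events "
    "WHERE event_time BETWEEN ? AND ?",
    (datetime(2023, 1, 2).isoformat(), datetime(2023, 1, 3).isoformat()),
).fetchall()
print(plan)
```

The query plan's detail string should mention the index being used for the search, confirming the range predicate is served by the index.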
Does the processing of the data take a long time once you've found the 16M rows you need? You may want to insert the raw 16M rows (without processing) into a staging table, then create additional indexes there to aid the processing. If you can give more detail on the processing step, I can offer more specific suggestions.
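The staging-table idea looks roughly like this (again in SQLite for illustration; `events`, `staging`, and the column names are hypothetical stand-ins for your actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events (event_time TEXT, customer_id INTEGER, amount REAL)"
)
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [(f"2023-01-{d:02d}T00:00:00", d % 5, float(d)) for d in range(1, 29)],
)

# 1. Copy the raw rows for the window you care about, unprocessed.
conn.execute(
    "CREATE TABLE staging AS SELECT * FROM events "
    "WHERE event_time >= '2023-01-10' AND event_time < '2023-01-20'"
)

# 2. Index the columns the processing step will join/group/filter on.
conn.execute("CREATE INDEX idx_staging_customer ON staging(customer_id)")

# 3. Processing now runs against the small, purpose-indexed table
#    instead of the full history.
totals = conn.execute(
    "SELECT customer_id, SUM(amount) FROM staging "
    "GROUP BY customer_id ORDER BY customer_id"
).fetchall()
print(totals)
```

The win is that the expensive processing queries touch a table sized to the job, with indexes chosen for those queries rather than for the main table's write workload.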
If the database continues to grow, traditional time-based sharding may be effective too. You create a new database for each month of data, and in your application layer determine which database(s) you need to query and merge the results. This lets you purge old data by simply dropping whole databases instead of selectively deleting massive numbers of rows from existing tables, which can degrade performance for other queries running concurrently on a live system.
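The application-layer routing can be a small helper like the following sketch (the `events_YYYY_MM` naming scheme and the function name are assumptions, not anything from your setup):

```python
from datetime import date

def shards_for_range(start: date, end: date) -> list[str]:
    """Return the month-shard database names covering [start, end]."""
    shards = []
    year, month = start.year, start.month
    while (year, month) <= (end.year, end.month):
        shards.append(f"events_{year:04d}_{month:02d}")
        month += 1
        if month > 12:
            year, month = year + 1, 1
    return shards

# The application queries each returned shard and merges the results;
# purging old data is then just dropping the oldest database(s).
print(shards_for_range(date(2022, 11, 15), date(2023, 2, 3)))
```

A query spanning mid-November 2022 to early February 2023 would fan out to four monthly databases, and retention becomes "drop `events_2022_11`" rather than a long-running `DELETE`.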