I'm using PyTables 2.4.0 with Python 2.7. I have a database containing the following typical table:
/anc/asc_wind_speed (Table(87591,), shuffle, blosc(3)) 'Wind speed'
description := {
"value_seconds": Time64Col(shape=(), dflt=0.0, pos=0),
"update_seconds": Time64Col(shape=(), dflt=0.0, pos=1),
"status": UInt8Col(shape=(), dflt=0, pos=2),
"value": Float64Col(shape=(), dflt=0.0, pos=3)}
byteorder := 'little'
chunkshape := (2621,)
autoIndex := True
colindexes := {
"update_seconds": Index(9, full, shuffle, zlib(1)).is_CSI=True,
"value": Index(9, full, shuffle, zlib(1)).is_CSI=True}
I populate the timestamp columns using float seconds.
The data looks OK in my IPython session:
array([(1343779432.2160001, 1343779431.8529999, 0, 5.2975000000000003),
(1343779433.2190001, 1343779432.9430001, 0, 5.7474999999999996),
(1343779434.217, 1343779433.9809999, 0, 5.8600000000000003), ...,
(1343866301.934, 1343866301.5139999, 0, 3.8424999999999998),
(1343866302.934, 1343866302.5799999, 0, 4.0599999999999996),
(1343866303.934, 1343866303.642, 0, 3.7825000000000002)],
dtype=[('value_seconds', '<f8'), ('update_seconds', '<f8'), ('status', '|u1'), ('value', '<f8')])
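As a sanity check, the same comparison done in plain numpy on the in-memory array behaves as expected (here with a couple of representative rows copied from the dump above into a structured array with the same dtype):

```python
import numpy as np

# Same dtype as the table above, with two representative rows
# copied from the array dump.
dt = np.dtype([('value_seconds', '<f8'), ('update_seconds', '<f8'),
               ('status', '|u1'), ('value', '<f8')])
arr = np.array([(1343779432.216, 1343779431.853, 0, 5.2975),
                (1343866303.934, 1343866303.642, 0, 3.7825)], dtype=dt)

# The plain-numpy equivalent of the in-kernel condition:
mask = arr['update_seconds'] <= 1343866303.642
print(mask.sum())  # 2 -- both rows match, as expected
```

So the float values themselves are fine; it's only the in-kernel query that misbehaves.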
...but when I run an in-kernel search on the indexed column 'update_seconds', everything goes pear-shaped:
len(wstable.readWhere('(update_seconds <= 1343866303.642)'))
0
That is, I get 0 rows back when I was expecting all 87591 of them. Occasionally a '>=' query does return some rows, but the timestamp columns then come back as huge floats (~10^79). It seems some implicit type conversion is causing the Time64Col values to be misinterpreted. Can someone spot my mistake? Or should I give up on Time64Col and convert those columns to Float64 (and if so, how do I do that)?