Can this SQLite query be made much faster?

I have a database representing security camera NVR metadata. There's a 26-byte recording row for every 1-minute segment of video. (Design doc in progress here, if you're curious.) My design limits are 8 cameras, 1 year (about 4 million rows, half a million per camera). I've faked up some data to test performance. This query is slower than I expected:

select
  recording.start_time_90k,
  recording.duration_90k,
  recording.video_samples,
  recording.sample_file_bytes,
  recording.video_sample_entry_id
from
  recording
where
  camera_id = ?
order by
  recording.start_time_90k;

This just scans all the data for one camera, using the index to filter out the other cameras and to order the rows. The index looks like this:

create index recording_camera_start on recording (camera_id, start_time_90k);

explain query plan looks as desired:

0|0|0|SEARCH TABLE recording USING INDEX recording_camera_start (camera_id=?)

The rows are quite small.

$ sqlite3_analyzer duplicated.db
...

*** Table RECORDING w/o any indices *******************************************

Percentage of total database......................  66.3%
Number of entries................................. 4225560
Bytes of storage consumed......................... 143418368
Bytes of payload.................................. 109333605   76.2%
B-tree depth...................................... 4
Average payload per entry......................... 25.87
Average unused bytes per entry.................... 0.99
Average fanout.................................... 94.00
Non-sequential pages.............................. 1            0.0%
Maximum payload per entry......................... 26
Entries that use overflow......................... 0            0.0%
Index pages used.................................. 1488
Primary pages used................................ 138569
Overflow pages used............................... 0
Total pages used.................................. 140057
Unused bytes on index pages....................... 188317      12.4%
Unused bytes on primary pages..................... 3987216      2.8%
Unused bytes on overflow pages.................... 0
Unused bytes on all pages......................... 4175533      2.9%

*** Index RECORDING_CAMERA_START of table RECORDING ***************************

Percentage of total database......................  33.7%
Number of entries................................. 4155718
Bytes of storage consumed......................... 73003008
Bytes of payload.................................. 58596767    80.3%
B-tree depth...................................... 4
Average payload per entry......................... 14.10
Average unused bytes per entry.................... 0.21
Average fanout.................................... 49.00
Non-sequential pages.............................. 1            0.001%
Maximum payload per entry......................... 14
Entries that use overflow......................... 0            0.0%
Index pages used.................................. 1449
Primary pages used................................ 69843
Overflow pages used............................... 0
Total pages used.................................. 71292
Unused bytes on index pages....................... 8463         0.57%
Unused bytes on primary pages..................... 865598       1.2%
Unused bytes on overflow pages.................... 0
Unused bytes on all pages......................... 874061       1.2%

...

I expect to run something like this every time a particular web page is hit (perhaps for just a month at a time rather than the full year), so I want it to be quite fast. But on my laptop it takes most of a second, and on the Raspberry Pi 2 I'd like to support, it's far too slow. Times below (in seconds); it's CPU-bound (user+sys time ~= real time):

laptop$ time ./bench-profiled
trial 0: time 0.633 sec
trial 1: time 0.636 sec
trial 2: time 0.639 sec
trial 3: time 0.679 sec
trial 4: time 0.649 sec
trial 5: time 0.642 sec
trial 6: time 0.609 sec
trial 7: time 0.640 sec
trial 8: time 0.666 sec
trial 9: time 0.715 sec
...
PROFILE: interrupts/evictions/bytes = 1974/489/72648

real    0m20.546s
user    0m16.564s
sys     0m3.976s
(This is Ubuntu 15.10, SQLITE_VERSION says "3.8.11.1")

raspberrypi2$ time ./bench-profiled
trial 0: time 6.334 sec
trial 1: time 6.216 sec
trial 2: time 6.364 sec
trial 3: time 6.412 sec
trial 4: time 6.398 sec
trial 5: time 6.389 sec
trial 6: time 6.395 sec
trial 7: time 6.424 sec
trial 8: time 6.391 sec
trial 9: time 6.396 sec
...
PROFILE: interrupts/evictions/bytes = 19066/2585/43124

real    3m20.083s
user    2m47.120s
sys 0m30.620s
(This is Raspbian Jessie; SQLITE_VERSION says "3.8.7.1")

I'll probably end up with some sort of denormalized data, but first I'd like to see if I can get this simple query to perform as well as possible. My benchmark is pretty simple; it prepares the statement ahead of time and then loops over it:

void Trial(sqlite3_stmt *stmt) {
  int ret;
  while ((ret = sqlite3_step(stmt)) == SQLITE_ROW) ;
  if (ret != SQLITE_DONE) {
    errx(1, "sqlite3_step: %d (%s)", ret, sqlite3_errstr(ret));
  }
  ret = sqlite3_reset(stmt);
  if (ret != SQLITE_OK) {
    errx(1, "sqlite3_reset: %d (%s)", ret, sqlite3_errstr(ret));
  }
}
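
For context, here is roughly how the statement gets prepared and bound before the trials. This is a sketch rather than the exact code in bench-profiled; the error-handling style and the bound camera id of 1 are illustrative.

#include <err.h>
#include <sqlite3.h>

sqlite3_stmt *Prepare(sqlite3 *db) {
  static const char kSql[] =
      "select recording.start_time_90k, recording.duration_90k,"
      "       recording.video_samples, recording.sample_file_bytes,"
      "       recording.video_sample_entry_id"
      " from recording"
      " where camera_id = ?"
      " order by recording.start_time_90k;";
  sqlite3_stmt *stmt = NULL;
  int ret = sqlite3_prepare_v2(db, kSql, -1, &stmt, NULL);
  if (ret != SQLITE_OK) {
    errx(1, "sqlite3_prepare_v2: %d (%s)", ret, sqlite3_errstr(ret));
  }
  ret = sqlite3_bind_int64(stmt, 1, 1);  /* camera_id = 1 (illustrative) */
  if (ret != SQLITE_OK) {
    errx(1, "sqlite3_bind_int64: %d (%s)", ret, sqlite3_errstr(ret));
  }
  return stmt;
}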

I made a CPU profile with gperftools:

$ google-pprof bench-profiled timing.pprof
Using local file bench-profiled.
Using local file timing.pprof.
Welcome to pprof!  For help, type 'help'.
(pprof) top 10
Total: 593 samples
     154  26.0%  26.0%      377  63.6% sqlite3_randomness
     134  22.6%  48.6%      557  93.9% sqlite3_reset
      83  14.0%  62.6%       83  14.0% __read_nocancel
      61  10.3%  72.8%       61  10.3% sqlite3_strnicmp
      41   6.9%  79.8%       46   7.8% sqlite3_free_table
      26   4.4%  84.1%       26   4.4% sqlite3_uri_parameter
      25   4.2%  88.4%       25   4.2% llseek
      13   2.2%  90.6%      121  20.4% sqlite3_db_config
      12   2.0%  92.6%       12   2.0% __pthread_mutex_unlock_usercnt (inline)
      10   1.7%  94.3%       10   1.7% __GI___pthread_mutex_lock

That looks strange, and I hope it can be improved. Maybe I'm doing something dumb. I'm particularly suspicious of the sqlite3_randomness and sqlite3_strnicmp calls.
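
For reference, the profile above was collected with gperftools' CPU profiler. A minimal sketch of one way to hook it in follows; the actual bench-profiled harness may differ (for instance, it may just link -lprofiler and set the CPUPROFILE environment variable instead of calling the API directly).

/* Minimal gperftools CPU-profiling harness: a sketch, not the exact code in
 * bench-profiled. Build with -lprofiler. */
#include <gperftools/profiler.h>
#include <sqlite3.h>

void Trial(sqlite3_stmt *stmt);        /* defined above */

void RunTrials(sqlite3_stmt *stmt, int n) {
  ProfilerStart("timing.pprof");       /* begin sampling this process's CPU time */
  for (int i = 0; i < n; i++) {
    Trial(stmt);
  }
  ProfilerStop();                      /* flush the samples to timing.pprof */
}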

Schema:

-- Each row represents a single recorded segment of video.
-- Segments are typically ~60 seconds; never more than 5 minutes.
-- Each row should have a matching recording_detail row.
create table recording (
  id integer primary key,
  camera_id integer references camera (id) not null,

  sample_file_bytes integer not null check (sample_file_bytes > 0),

  -- The starting time of the recording, in 90 kHz units since
  -- 1970-01-01 00:00:00 UTC.
  start_time_90k integer not null check (start_time_90k >= 0),

  -- The duration of the recording, in 90 kHz units.
  duration_90k integer not null
      check (duration_90k >= 0 and duration_90k < 5*60*90000),

  video_samples integer not null check (video_samples > 0),
  video_sync_samples integer not null check (video_sync_samples > 0),
  video_sample_entry_id integer references video_sample_entry (id)
);

I've tarred up my test data and test program; you can download it here.


Edit 1:

Ahh, poking through the SQLite code, I see a clue:

int sqlite3_step(sqlite3_stmt *pStmt){
  int rc = SQLITE_OK;      /* Result from sqlite3Step() */
  int rc2 = SQLITE_OK;     /* Result from sqlite3Reprepare() */
  Vdbe *v = (Vdbe*)pStmt;  /* the prepared statement */
  int cnt = 0;             /* Counter to prevent infinite loop of reprepares */
  sqlite3 *db;             /* The database connection */

  if( vdbeSafetyNotNull(v) ){
    return SQLITE_MISUSE_BKPT;
  }
  db = v->db;
  sqlite3_mutex_enter(db->mutex);
  v->doingRerun = 0;
  while( (rc = sqlite3Step(v))==SQLITE_SCHEMA
         && cnt++ < SQLITE_MAX_SCHEMA_RETRY ){
    int savedPc = v->pc;
    rc2 = rc = sqlite3Reprepare(v);
    if( rc!=SQLITE_OK) break;
    sqlite3_reset(pStmt);
    if( savedPc>=0 ) v->doingRerun = 1;
    assert( v->expired==0 );
  }

It looks like sqlite3_step calls sqlite3_reset on a schema change (FAQ entry), but I don't know why there would be a schema change here, since my statement was prepared ahead of time...
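
One way to sanity-check that (a sketch, not part of the original benchmark): read the schema cookie via PRAGMA schema_version before and after the trials. If it never changes, the SQLITE_SCHEMA/reset path above should not be taken at all.

/* Returns the database's schema cookie, or -1 on error. Comparing the value
 * before and after the trials shows whether the schema actually changed. */
#include <sqlite3.h>

static int SchemaVersion(sqlite3 *db) {
  sqlite3_stmt *stmt = NULL;
  int version = -1;
  if (sqlite3_prepare_v2(db, "pragma schema_version;", -1, &stmt, NULL) == SQLITE_OK &&
      sqlite3_step(stmt) == SQLITE_ROW) {
    version = sqlite3_column_int(stmt, 0);
  }
  sqlite3_finalize(stmt);
  return version;
}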


Edit 2:

I downloaded the SQLite 3.10.1 "amalgamation" and compiled it with debugging symbols. I now get a rather different profile that doesn't look as weird, but it isn't any faster. Maybe the odd results I was seeing before were due to identical code folding or something.


Edit 3:

Trying Ben's clustered index solution below, it's about 3.6X faster. I think that's about the best I'm going to do with this query. SQLite's CPU performance comes out to roughly 700 MB/s on my laptop; short of rewriting it to use a JIT compiler for its virtual machine or some such, I'm not going to do any better. In particular, I don't think the strange calls I saw in my first profile were actually happening; gcc must have emitted misleading debug info due to optimizations or something.

Even if the CPU performance improved, that throughput is already more than my storage can manage on a cold read now, and I think the same is true on the Pi (which has a limited USB 2.0 bus for the SD card).

$ time ./bench
sqlite3 version: 3.10.1
trial 0: realtime 0.172 sec cputime 0.172 sec
trial 1: realtime 0.172 sec cputime 0.172 sec
trial 2: realtime 0.175 sec cputime 0.175 sec
trial 3: realtime 0.173 sec cputime 0.173 sec
trial 4: realtime 0.182 sec cputime 0.182 sec
trial 5: realtime 0.187 sec cputime 0.187 sec
trial 6: realtime 0.173 sec cputime 0.173 sec
trial 7: realtime 0.185 sec cputime 0.185 sec
trial 8: realtime 0.190 sec cputime 0.190 sec
trial 9: realtime 0.192 sec cputime 0.192 sec
trial 10: realtime 0.191 sec cputime 0.191 sec
trial 11: realtime 0.188 sec cputime 0.188 sec
trial 12: realtime 0.186 sec cputime 0.186 sec
trial 13: realtime 0.179 sec cputime 0.179 sec
trial 14: realtime 0.179 sec cputime 0.179 sec
trial 15: realtime 0.188 sec cputime 0.188 sec
trial 16: realtime 0.178 sec cputime 0.178 sec
trial 17: realtime 0.175 sec cputime 0.175 sec
trial 18: realtime 0.182 sec cputime 0.182 sec
trial 19: realtime 0.178 sec cputime 0.178 sec
trial 20: realtime 0.189 sec cputime 0.189 sec
trial 21: realtime 0.191 sec cputime 0.191 sec
trial 22: realtime 0.179 sec cputime 0.179 sec
trial 23: realtime 0.185 sec cputime 0.185 sec
trial 24: realtime 0.190 sec cputime 0.190 sec
trial 25: realtime 0.189 sec cputime 0.189 sec
trial 26: realtime 0.182 sec cputime 0.182 sec
trial 27: realtime 0.176 sec cputime 0.176 sec
trial 28: realtime 0.173 sec cputime 0.173 sec
trial 29: realtime 0.181 sec cputime 0.181 sec
PROFILE: interrupts/evictions/bytes = 547/178/24592

real    0m5.651s
user    0m5.292s
sys     0m0.356s

I'll probably need to keep some denormalized data around. Fortunately, I think I can keep it in my application's RAM, given that it won't be too large, startup doesn't have to be amazingly fast, and only the one process ever writes to the database.
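
To illustrate what I mean, something like a per-camera, per-day aggregate kept in RAM by the single writer process. This is purely a sketch; the struct name, fields, and per-day bucketing are hypothetical, not a settled design.

/* Hypothetical in-RAM denormalized summary, one entry per (camera, day).
 * Loaded once at startup with an aggregate query over recording and updated
 * in place as new rows are inserted. */
#include <stdint.h>

struct CameraDaySummary {
  int64_t camera_id;
  int64_t day;                      /* start_time_90k / (90000LL * 60 * 60 * 24) */
  int64_t total_duration_90k;       /* sum(duration_90k) for that day */
  int64_t total_sample_file_bytes;  /* sum(sample_file_bytes) for that day */
  int64_t row_count;                /* number of recording rows that day */
};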

You need a clustered index, or, if you are using a version of SQLite which doesn't support one, a covering index.

SQLite 3.8.2 and above

In SQLite 3.8.2 and above, use this:

create table recording (
  camera_id integer references camera (id) not null,

  sample_file_bytes integer not null check (sample_file_bytes > 0),

  -- The starting time of the recording, in 90 kHz units since
  -- 1970-01-01 00:00:00 UTC.
  start_time_90k integer not null check (start_time_90k >= 0),

  -- The duration of the recording, in 90 kHz units.
  duration_90k integer not null
      check (duration_90k >= 0 and duration_90k < 5*60*90000),

  video_samples integer not null check (video_samples > 0),
  video_sync_samples integer not null check (video_sync_samples > 0),
  video_sample_entry_id integer references video_sample_entry (id),

  --- here is the magic
  primary key (camera_id, start_time_90k)
) WITHOUT ROWID;
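
For an existing database, a possible migration looks roughly like the following. This is a sketch: it assumes the table above was created under the temporary name recording_new (a hypothetical name), and the surrounding error handling is illustrative. Inserting in primary-key order keeps the new WITHOUT ROWID b-tree densely packed.

#include <sqlite3.h>
#include <stdio.h>

/* Copies all rows into the clustered table, then swaps it into place. */
static int MigrateToClustered(sqlite3 *db) {
  static const char kSql[] =
      "begin;"
      "insert into recording_new (camera_id, sample_file_bytes, start_time_90k,"
      "    duration_90k, video_samples, video_sync_samples, video_sample_entry_id)"
      "  select camera_id, sample_file_bytes, start_time_90k, duration_90k,"
      "         video_samples, video_sync_samples, video_sample_entry_id"
      "  from recording"
      "  order by camera_id, start_time_90k;"
      "drop table recording;"
      "alter table recording_new rename to recording;"
      "commit;";
  char *err = NULL;
  if (sqlite3_exec(db, kSql, NULL, NULL, &err) != SQLITE_OK) {
    fprintf(stderr, "migration failed: %s\n", err);
    sqlite3_free(err);
    return 1;
  }
  return 0;
}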

Earlier versions

In earlier versions of SQLite, you can use this sort of thing to create a covering index. This should allow SQLite to pull the data values from the index, avoiding fetching a separate page for each row:

create index recording_camera_start on recording (
     camera_id, start_time_90k,
     sample_file_bytes, duration_90k, video_samples, video_sync_samples, video_sample_entry_id
 );
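
With the covering index in place, it is worth confirming that the planner actually uses it to cover the query; explain query plan should then report something like "SEARCH TABLE recording USING COVERING INDEX recording_camera_start (camera_id=?)" rather than the plain "USING INDEX" shown in the question. A sketch of checking that from the C API (the function name is illustrative):

#include <sqlite3.h>
#include <stdio.h>
#include <string.h>

/* Returns 1 if the plan for the given query mentions a covering index.
 * The plan detail is the last (fourth) result column of explain query plan. */
static int UsesCoveringIndex(sqlite3 *db, const char *query) {
  char sql[1024];
  snprintf(sql, sizeof(sql), "explain query plan %s", query);
  sqlite3_stmt *stmt = NULL;
  int covering = 0;
  if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK) return 0;
  while (sqlite3_step(stmt) == SQLITE_ROW) {
    const char *detail = (const char *)sqlite3_column_text(stmt, 3);
    if (detail != NULL && strstr(detail, "COVERING INDEX") != NULL) covering = 1;
  }
  sqlite3_finalize(stmt);
  return covering;
}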

Discussion

The cost is likely to be IO (regardless of your saying that it isn't), because remember that IO requires CPU, as data must be copied to and from the bus.

Without a clustered index, rows are inserted with a rowid and probably aren't in any sensible order. That means for each 26-byte row you request, the system may have to fetch a 4KB page from the SD card, which is a lot of overhead.

With a limit of 8 cameras, a simple clustered index on id, ensuring rows appear on disk in insertion order, would help by making each fetched page contain the next 10-20 rows that are going to be required.

A clustered index on camera and time should ensure that each fetched page contains 100 or more rows.