How to improve performance of file reading by multiple threads?

I need to read a single file from multiple threads under Linux. The operations are read-only; there is no writing. A read does not need the whole file each time: it reads one or more portions of the file, and I store the offset of each portion beforehand. The file is too large to fit in main memory.

The scenario: many users want to read this file, and I use a thread or a process per user request to do the reading. What happens under Linux? Will all the read operations be queued, with the OS completing them one by one? Is it possible to improve the performance of such operations?

Background: I'm implementing a simple inverted index for information retrieval. The dictionary lives in memory and the posting lists live in files; each file contains a segment of the index. Each dictionary entry stores an offset pointing to the position of that word's posting list. When, say, 100 users search within one second, they submit different queries, so each read touches a different part of the file.
