Fast matching is a crucial task in many computer vision applications because of its computational overhead, especially in high-dimensional feature spaces. Promising techniques to address this problem have been investigated in the literature, such as product quantization and hierarchical clustering decomposition. In these approaches, a distance metric must be learned to support the re-ranking step that shortlists the best candidates. Nonetheless, computing the distances is a computationally intensive task that is often done during the online search phase and, as a result, degrades search performance. In this work, we conduct a study on parameter tuning to make the computation of distances efficient. Different search strategies are also investigated to assess the impact of coding quality on search performance. Experiments have been conducted in a standard product quantization framework and show interesting results in terms of both coding quality and search efficiency.
question would be concerned with search efficiency when applied to the ANN search task. In the following discussion, we show the performance of our method on this task. Figure 3 presents the operating points of search speedup as a function of precision for all the HPQ versions in our study. For the SIFT dataset, the performance gap among the HPQ versions is small. In detail, HPQ64 performs best in this case, although it is only slightly superior to HPQ128. This observation does not fully carry over to the GIST dataset, as shown in Figure 3(b). First, the performance gap is more noticeable: for instance, HPQ128 and HPQ32 reach speedups of 920× and 732×, respectively, at a precision of 80%. Second, HPQ192 tends to be close to the winner (HPQ128), especially at very high search precisions (> 90%). These findings can be explained by the high-dimensional space of GIST features, in which coding quality plays a central role in
search efficiency. As already noted in Figure 1(b), HPQ128 is virtually identical to HPQ192 in terms of coding quality, whereas HPQ128 incurs less computational overhead than HPQ192 does. As a result, HPQ128 gives the best search speedups in the studied experiments.

Figure 3. ANN search performance of the system (HPQ) with varying numbers of codewords: (a) SIFT dataset (128D): 10K queries and 1M data points; (b) GIST dataset (960D): 1K queries and 1M data points. Both panels plot speedup over sequential search against precision (%) for HPQ32, HPQ64, HPQ128, and HPQ192.

Figure 4. ANN search performance of our system and other methods: (a) SIFT and (b) GIST features. Both panels plot speedup over sequential search against precision (%) for the best HPQ version, OEPQ, EPQ, and best-FLANN.
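Each operating point in Figures 3 and 4 pairs a search precision with a speedup over sequential scan. As a minimal sketch of how such points can be measured (this is a generic harness, not the paper's implementation; `ann_search` here is a deliberately naive stand-in for any candidate-generating index):

```python
import random
import time

def sequential_search(query, data):
    # Exhaustive scan: the baseline against which speedups are measured.
    return min(range(len(data)),
               key=lambda i: sum((q - x) ** 2 for q, x in zip(query, data[i])))

def ann_search(query, data, fraction=0.1):
    # Stand-in for an ANN index: scans a random subset of the points.
    # A real index (e.g., a product quantizer) would pick candidates smartly.
    idx = random.sample(range(len(data)), max(1, int(fraction * len(data))))
    return min(idx,
               key=lambda i: sum((q - x) ** 2 for q, x in zip(query, data[i])))

def operating_point(queries, data):
    # Precision: fraction of queries whose true nearest neighbor is returned.
    # Speedup: wall-clock time of sequential scan divided by ANN search time.
    t0 = time.perf_counter()
    truth = [sequential_search(q, data) for q in queries]
    t1 = time.perf_counter()
    found = [ann_search(q, data) for q in queries]
    t2 = time.perf_counter()
    precision = sum(f == t for f, t in zip(found, truth)) / len(queries)
    speedup = (t1 - t0) / max(t2 - t1, 1e-9)
    return precision, speedup
```

Sweeping an index parameter (e.g., the number of candidates visited) and recording one such pair per setting traces out curves like those in the figures.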
The last experiment, shown in Figure 4, compares the search efficiency of the best HPQ version (i.e., HPQ64 for SIFT and HPQ128 for GIST features), Optimized EPQ (OEPQ), EPQ, and the best method of FLANN (i.e., best-FLANN). It is worth highlighting that OEPQ is the state-of-the-art method for ANN search on the SIFT and GIST datasets [9], [16]. One can see that HPQ64 reaches the same level of search efficiency as OEPQ on the SIFT dataset. Noticeably, using HPQ with 128 codewords provides substantial improvements for the GIST features. For instance, it gives a speedup of 921× over sequential scan at a search precision of 80%. All these results confirm the superiority of our method in terms of both coding quality and ANN search efficiency, especially when working in high-dimensional spaces.
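The trade-off behind the codeword counts studied above (HPQ32 through HPQ192) can be illustrated with a toy sub-quantizer: more codewords lower the quantization error but raise the per-query cost. This is a hedged sketch using plain k-means, not the paper's training procedure; all names and parameters are illustrative:

```python
import random

def kmeans(points, k, iters=10):
    # Plain Lloyd's k-means; returns a codebook of k centroids.
    centers = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[j].append(p)
        for j, g in enumerate(groups):
            if g:  # keep the old centroid if its cluster emptied out
                centers[j] = [sum(col) / len(g) for col in zip(*g)]
    return centers

def quantization_error(points, centers):
    # Mean squared distance from each point to its nearest codeword:
    # the coding-quality measure that shrinks as k grows.
    total = 0.0
    for p in points:
        total += min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers)
    return total / len(points)
```

On random data, growing k from 32 to 128 typically shrinks this error, mirroring the coding-quality ordering observed for the HPQ versions, while the nested `min` over k centers shows where the extra per-query cost comes from.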
IV. CONCLUSIONS
In this work, a deep analysis and study of hierarchical product quantization has been conducted to examine its performance in terms of quantization quality and ANN search efficiency. Our proposal is grounded in the observation that a finer space decomposition is essential for achieving this dual objective. Through extensive experiments in comparison with other methods, it was shown that our method provides significant improvements on various datasets and even tends to perform well as the space dimensionality increases. An interesting remark derived from our study is that a decent product quantizer can be constructed even without a high number of codewords: as shown in our experiments, as few as 32 codewords already yield satisfactory performance. Although the obtained results are promising, we plan to investigate the inclusion of the ADC distance as well as other deep-learning-based encoders to further optimize the method in follow-up work.
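The ADC distance mentioned above (asymmetric distance computation, introduced for product quantization in [1]) can be sketched as follows: for each query, one distance table per sub-quantizer is precomputed, and every database code is then scored by table lookups alone. This is a minimal illustration, not the authors' implementation; names are assumptions:

```python
def adc_tables(query, codebooks):
    # One table per sub-quantizer: squared distance from the query
    # sub-vector to every codeword of that sub-space.
    m = len(codebooks)              # number of sub-quantizers
    d_sub = len(query) // m         # dimensions per sub-vector
    tables = []
    for j, cb in enumerate(codebooks):
        sub = query[j * d_sub:(j + 1) * d_sub]
        tables.append([sum((a - b) ** 2 for a, b in zip(sub, c)) for c in cb])
    return tables

def adc_distance(code, tables):
    # A database point is stored as one codeword index per sub-space;
    # its approximate distance to the query is a sum of table lookups.
    return sum(tables[j][code[j]] for j in range(len(code)))
```

Because each database vector then costs only m lookups instead of a full D-dimensional distance, ADC shifts most of the distance cost into a small per-query precomputation, which is why deferring it to the online phase is the bottleneck this paper targets.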
Dinh-Nghiep Le, Van-Thi Hoang, Duc-Toan Nguyen, and The-Anh Pham
REFERENCES
[1] H. Jegou, M. Douze, and C. Schmid, “Product Quantization for Nearest Neighbor Search,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 1, pp. 117–128, 2011.
[2] A. Babenko and V. Lempitsky, “The inverted multi-index,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 37, no. 6, pp. 1247–1260, 2015.
[3] T. Ge, K. He, Q. Ke, and J. Sun, “Optimized product quantization,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 36, no. 4, pp. 744–755, 2014.
[4] M. Norouzi and D. J. Fleet, “Cartesian k-means,” in Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, ser. CVPR ’13, 2013, pp. 3017–3024.
[5] Y. Kalantidis and Y. Avrithis, “Locally optimized product quantization for approximate nearest neighbor search,” in Proceedings of International Conference on Computer Vision and Pattern Recognition (CVPR 2014), Columbus, Ohio, June 2014, pp. 2329–2336.
[6] A. Babenko and V. Lempitsky, “Additive quantization for extreme vector compression,” in 2014 IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 931–938.
[7] ——, “Tree quantization for large-scale similarity search and classification,” in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 4240–4248.
[8] T.-A. Pham and N.-T. Do, “Embedding hierarchical clustering in product quantization for feature indexing,” Multimedia Tools and Applications, vol. 78, no. 1, pp. 9991–10012, 2018.
[9] V.-H. Le, T.-A. Pham, and D.-N. Le, “Hierarchical product quantization for effective feature indexing,” in IEEE 26th International Conference on Telecommunications (ICT 2019), 2019, pp. 385–389.
[10] T.-A. Pham, D.-N. Le, and T.-L.-P. Nguyen, “Product sub-vector quantization for feature indexing,” Journal of Computer Science and Cybernetics, vol. 35, no. 1, pp. 1–15, 2018.
[11] D. Nister and H. Stewenius, “Scalable recognition with a vocabulary tree,” in Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Volume 2, ser. CVPR’06, 2006, pp. 2161–2168.
[12] M. Muja and D. G. Lowe, “Fast approximate nearest neighbors with automatic algorithm configuration,” in Proceedings of International Conference on Computer Vision Theory and Applications, ser. VISAPP’09, 2009, pp. 331–340.
[13] ——, “Scalable nearest neighbor algorithms for high dimensional data,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, pp. 2227–2240, 2014.
[14] J. McNames, “A fast nearest-neighbor algorithm based on a principal axis search tree,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 9, pp. 964–976, 2001.
[15] T.-A. Pham, S. Barrat, M. Delalandre, and J.-Y. Ramel, “An efficient tree structure for indexing feature vectors,” Pattern Recognition Letters, vol. 55, no. 1, pp. 42–50, 2015.
[16] T.-A. Pham, “Improved embedding product quantization,” Machine Vision and Applications, vol. 30, no. 3, pp. 447–459, 2019.
[17] ——, “Pair-wisely optimized clustering tree for feature indexing,” Computer Vision and Image Understanding, vol. 154, no. 1, pp. 35–47, 2017.
A STUDY ON THE INFLUENCE OF PARAMETERS IN INDEX OPTIMIZATION FOR LARGE-SCALE DATABASES
Abstract: Fast matching is one of the key problems in computer vision applications owing to its large computational complexity, especially in high-dimensional feature spaces. Promising techniques for this problem have been studied and proposed previously, such as Product Quantization and Hierarchical Clustering Decomposition. In these techniques, a distance function is used to build a list of candidate points closest to the query object. However, computing this distance function usually has high computational complexity and is performed during the online search phase, thereby affecting search performance. In this paper, we study the parameters influencing the indexing process and optimize the computation of the distance function. In addition, different search strategies are evaluated to assess the quality of the quantization process. Experiments have been conducted and show notable results in terms of both quantization quality and search performance.
Keywords: Indexing, fast approximate search, product quantization.
Dinh-Nghiep Le has been working at Hong Duc University as a lecturer and permanent researcher since 2012. His research interests include feature extraction and indexing, image detection and recognition, and computer vision.
Van-Thi Hoang received his PhD in 2006 from Hanoi National University of Education (Vietnam). He was a lecturer at Hong Duc University until 2017 and has since been working at the Department of Education and Training, Thanh Hoa city.
Duc-Toan Nguyen received his Master's degree from the University of Wollongong, Australia, in 2014. He has worked for the Department of Industry and Trade, Thanh Hoa province, since 2014. His research interests include data mining, computer vision, and machine learning.
The-Anh Pham has been working at Hong Duc University as a permanent researcher since 2004. He received his PhD in 2013 from Francois Rabelais University in France. From June 2014 to November 2015, he worked as a full research fellow at Polytech Tours, France. His research interests include document image analysis, image compression, feature extraction and indexing, and shape analysis and representation.