Similar-Image Detection: A Survey

Wu Jun 2020-01-09 12:43:49

A survey of the field shows that the most widely used approaches to judging image similarity are "perceptual hash" algorithms (Google, 360), "histogram" algorithms (Alibaba), and "feature extraction" algorithms (Baidu). These algorithms are tested and compared below; the implementations likely still leave room for improvement.

The test images are attached as Appendix 1.
The PostgreSQL imgsmlr extension, which I was unable to install earlier, is briefly introduced in Appendix 2.

So far only the "perceptual hash" and "histogram" algorithms have been compared; the quantitative comparison of the "feature extraction" algorithms is still pending.

Comparison at a Glance

| Feature | Description | Value form | Similarity metric |
| --- | --- | --- | --- |
| aHash | RGB average hash | simhash | Hamming distance |
| dHash | RGB difference hash | simhash | Hamming distance |
| pHash | RGB perceptual hash | simhash | Hamming distance |
| Histogram | color-distribution vector | integer array | cosine similarity, Euclidean distance, etc. |
| SIFT | local features | 2-D array, 64 wide, variable length | number of matched points |
| SURF | improved SIFT | same as SIFT | same as SIFT |

(Compute speed, match speed, and accuracy are reported per algorithm in the test data below.)

I. Perceptual Hashing

The perceptual-hash family of algorithms was popularized by Neal Krawetz. Rather than computing a hash in the strict cryptographic sense, it examines differences between an image's neighboring pixels to generate a fingerprint (a bit string) for the image. Two images are compared by computing the Hamming distance between their fingerprints, which gives a measure of similarity and makes it possible to retrieve similar images.

Each image is reduced to a 64-bit simhash, and hashes are compared by Hamming distance. A distance of at most 5 means very similar; a distance above 10 means the images are essentially different.
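The distance-and-similarity bookkeeping is simple enough to sketch directly (class and method names here are illustrative, not part of any library):

```java
public class HammingDemo {
    // Hamming distance between two 64-bit hashes: count the differing bits of the XOR.
    public static int hamming(long h1, long h2) {
        return Long.bitCount(h1 ^ h2);
    }

    // Similarity as reported in the tables below: fraction of the 64 bits that agree.
    public static double similarity(long h1, long h2) {
        return 100.0 * (64 - hamming(h1, h2)) / 64;
    }

    public static void main(String[] args) {
        long a = 0xFFFFFFFFFFFFFFFFL;
        long b = 0xFFFFFFFFFFFFFFF0L; // differs in the lowest 4 bits
        System.out.println(hamming(a, b));    // 4
        System.out.println(similarity(a, b)); // 93.75
    }
}
```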

Three variants are common: average hash (aHash), perceptual hash (pHash), and difference hash (dHash).

1. Average Hash (aHash)

1) Algorithm steps
  1. Shrink: resize to 8×8 pixels.
  2. Simplify color: convert the image to grayscale.
  3. Average: compute the mean of all 64 gray values.
  4. Binarize: pixels at or above the mean become 1; below the mean, 0.
  5. Build the hash: concatenate the comparison results into a 64-bit 0/1 string.
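Steps 3–5 can be sketched in plain Java without OpenCV (the 8×8 grayscale block from steps 1–2 is assumed to be available already; names are illustrative):

```java
public class AHashSketch {
    // Steps 3-5: average the 64 gray values, threshold each pixel against
    // the mean, and pack the resulting bits into one 64-bit hash.
    public static long aHash(int[][] gray8x8) {
        double sum = 0;
        for (int r = 0; r < 8; r++)
            for (int c = 0; c < 8; c++)
                sum += gray8x8[r][c];
        double mean = sum / 64;

        long hash = 0;
        for (int r = 0; r < 8; r++)
            for (int c = 0; c < 8; c++)
                if (gray8x8[r][c] >= mean)
                    hash |= 1L << (63 - r * 8 - c);
        return hash;
    }
}
```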
2) Test data

| Compared with Origin | Filter | Copyright | LowQuality | Thumbnail | SemiSimilar | Different |
| --- | --- | --- | --- | --- | --- | --- |
| Hamming distance | 5 | 5 | 1 | 6 | 34 | 44 |
| Similarity (%) | 92.19 | 92.19 | 98.44 | 90.63 | 46.88 | 31.25 |
| 1 load + 1 match (ms) | 38 | 29 | 27 | 40 | 58 | 25 |
| 100 matches after load (ms) | 4-9 | 5-15 | 7-10 | 5-11 | 4-8 | 10-18 |
3) Summary

Simple and fast, with reasonable accuracy.

2. Difference Hash (dHash)

1) Algorithm steps
  1. Shrink: resize to 9×8 pixels.
  2. Simplify color: convert the image to grayscale.
  3. Compute differences: each row of 9 pixels yields 8 adjacent-pixel comparisons, over 8 rows; a bit is 1 if the left pixel is greater than its right neighbor, else 0, giving an 8×8 difference matrix.
  4. Build the hash (as for aHash).
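The difference step can be sketched the same way (plain Java; the 9×8 grayscale block from steps 1–2 is assumed to be available; names are illustrative):

```java
public class DHashSketch {
    // Step 3-4: compare each pixel with its right neighbor (9 columns give
    // 8 comparisons per row, over 8 rows), packing the 64 bits into a long.
    public static long dHash(int[][] gray8x9) {
        long hash = 0;
        for (int r = 0; r < 8; r++)
            for (int c = 0; c < 8; c++)
                if (gray8x9[r][c] > gray8x9[r][c + 1])
                    hash |= 1L << (63 - r * 8 - c);
        return hash;
    }
}
```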
2) Test data

| Compared with Origin | Filter | Copyright | LowQuality | Thumbnail | SemiSimilar | Different |
| --- | --- | --- | --- | --- | --- | --- |
| Hamming distance | 5 | 3 | 7 | 11 | 29 | 33 |
| Similarity (%) | 92.19 | 95.31 | 89.06 | 82.81 | 54.69 | 48.44 |
| 1 load + 1 match (ms) | 38 | 25 | 35 | 25 | 30 | 39 |
| 100 matches after load (ms) | 5-12 | 8-14 | 5-9 | 8-13 | 5-9 | 7-13 |
3) Summary

Comparable to aHash in efficiency.

3. Perceptual Hash (pHash)

An official implementation is available at phash.org.

1) Algorithm steps
  1. Shrink: resize to 32×32 pixels.
  2. Simplify color: convert the image to grayscale.
  3. Compute the DCT: apply the discrete cosine transform, yielding a 32×32 coefficient matrix.
  4. Crop the DCT: keep the top-left 8×8 block (the image's low-frequency content).
  5. Average and binarize against the mean (as for aHash).
  6. Build the hash (as for aHash).
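The DCT is the only step not shared with aHash. A self-contained sketch follows, using a naive O(N⁴) DCT-II, which is fine for a 32×32 block; note that some pHash variants exclude the DC coefficient from the mean, which this sketch does not:

```java
public class PHashSketch {
    // Step 3: naive 2-D DCT-II of an N x N block (no scaling constants;
    // they do not affect the comparison against the mean).
    static double[][] dct2d(double[][] px) {
        int n = px.length;
        double[][] out = new double[n][n];
        for (int u = 0; u < n; u++)
            for (int v = 0; v < n; v++) {
                double s = 0;
                for (int x = 0; x < n; x++)
                    for (int y = 0; y < n; y++)
                        s += px[x][y]
                           * Math.cos((2 * x + 1) * u * Math.PI / (2.0 * n))
                           * Math.cos((2 * y + 1) * v * Math.PI / (2.0 * n));
                out[u][v] = s;
            }
        return out;
    }

    // Steps 4-6: keep the top-left 8x8 low-frequency coefficients,
    // threshold against their mean, and pack into a 64-bit hash.
    public static long pHash(double[][] gray32x32) {
        double[][] dct = dct2d(gray32x32);
        double mean = 0;
        for (int i = 0; i < 8; i++)
            for (int j = 0; j < 8; j++)
                mean += dct[i][j];
        mean /= 64;

        long hash = 0;
        for (int i = 0; i < 8; i++)
            for (int j = 0; j < 8; j++)
                if (dct[i][j] >= mean)
                    hash |= 1L << (63 - i * 8 - j);
        return hash;
    }
}
```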
2) Test data

| Compared with Origin | Filter | Copyright | LowQuality | Thumbnail | SemiSimilar | Different |
| --- | --- | --- | --- | --- | --- | --- |
| Hamming distance | 0 | 1 | 0 | 0 | 14 | 18 |
| Similarity (%) | 100 | 98.44 | 100 | 100 | 78.13 | 71.88 |
| 1 load + 1 match (ms) | 50 | 44 | 38 | 32 | 36 | 48 |
| 100 matches after load (ms) | 448-528 | 439-558 | 462-599 | 244-298 | 641-804 | 463-520 |
3) Summary

pHash is much slower than aHash and dHash but considerably more robust: resizing, changing resolution, or applying a filter does not change the hash.

II. Histogram

1) How it works

Every image yields a histogram of its color distribution; if two images' histograms are close, the images can be considered similar.

With three primary colors (red, green, blue) at 256 levels each, the full space is too large to work with. A simplification is used instead: divide each channel's 0-255 range into four bins, giving 4³ = 64 possible combinations.

Counting the pixels that fall into each combination yields a 64-dimensional color-distribution vector, which serves as the image's feature vector.

Finding similar images then reduces to finding the most similar vectors, which can be computed with the Pearson correlation coefficient or cosine similarity.
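A minimal sketch of the 64-bin quantization and cosine comparison described above (plain Java; pixels are assumed packed as 0xRRGGBB ints, and names are illustrative):

```java
public class HistSketch {
    // Quantize each channel into 4 bins (0-255 -> 0..3); 4^3 = 64 combinations.
    public static double[] histogram(int[] rgbPixels) {
        double[] h = new double[64];
        for (int p : rgbPixels) {
            int r = ((p >> 16) & 0xFF) / 64;
            int g = ((p >> 8) & 0xFF) / 64;
            int b = (p & 0xFF) / 64;
            h[r * 16 + g * 4 + b]++;
        }
        return h;
    }

    // Cosine similarity between two 64-dimensional feature vectors.
    public static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < 64; i++) {
            dot += a[i] * b[i];
            na  += a[i] * a[i];
            nb  += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }
}
```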

2) Test data

A similarity above 0.9 is treated as a match.

| Compared with Origin | Filter | Copyright | LowQuality | Thumbnail | SemiSimilar | Different |
| --- | --- | --- | --- | --- | --- | --- |
| Similarity (%) | 89.40 | 99.92 | 85.93 | 99.54 | 79.56 | 63.34 |
| 1 load + 1 match (ms) | 41 | 28 | 26 | 38 | 58 | 40 |
| 100 matches after load (ms) | 684-857 | 631-770 | 715-919 | 390-433 | 1103-1679 | 632-762 |
3) Summary

The histogram has an inherent limitation: it only records how many pixels take each gray value and says nothing about texture or structure, so the method clearly produces many misjudgments.

The filtered image (Filter), whose colors shifted, scores too low against the original, while SemiSimilar and Different, which merely have similar colors, score too high.

Overall it is less effective than pHash.

III. Feature Extraction

1. SIFT

SIFT (Scale-Invariant Feature Transform), proposed by David Lowe in 1999, has since been applied widely across computer vision: image recognition, image retrieval, 3-D reconstruction, and more.

1) Algorithm steps

Lowe decomposes the algorithm into four stages:

  1. Scale-space extrema detection: search the image over all scales, using a difference-of-Gaussian function to identify interest points that are invariant to scale and orientation.
  2. Keypoint localization: at each candidate location, fit a detailed model to determine position and scale; keypoints are selected based on their stability.
  3. Orientation assignment: assign one or more orientations to each keypoint based on local image gradient directions; all subsequent operations are performed relative to the assigned orientation, scale, and position, making the features invariant to these transformations.
  4. Keypoint description: measure the local image gradients at the selected scale in a neighborhood around each keypoint, and transform them into a representation that tolerates significant local shape deformation and changes in illumination.
2) SIFT strengths
  1. Very stable across different images
  2. Distinctive, information-rich features
  3. Extensible: a more capable single-purpose algorithm can easily be plugged in
3) SIFT weaknesses
  1. Slow to compute
  2. The keypoint count drops sharply on small source images
  3. Keypoints on smooth-edged targets cannot be extracted reliably; circles in particular defeat it completely

2. SURF

SURF (Speeded-Up Robust Features) is an improvement on SIFT that raises execution efficiency. Its overall pipeline resembles SIFT's, with optimizations in scale-space construction and keypoint description.

Both SIFT and SURF are patented and require licensing fees.

IV. References

V. Appendix

Appendix 1. Test images

1) Origin.jpg, 429 KB

2) Filter.jpg, 165 KB

3) Copyright.jpg, 459 KB

4) LowQuality.jpg, 59.7 KB

5) Thumbnail.jpg, 28.2 KB

6) SemiSimilar.jpg, 747 KB

7) Different.jpg, 63.2 KB

Appendix 2. How PostgreSQL's imgsmlr extension works

The PostgreSQL image-search extension imgsmlr stores images after applying a Haar wavelet transform to them.

ImgSmlr provides two data types: pattern and signature.

| Datatype | Storage length | Description |
| --- | --- | --- |
| pattern | 16388 bytes | the image's Haar wavelet transform |
| signature | 64 bytes | a short representation of the pattern, enabling fast search via a GiST index |
1) Indexing steps
  1. Decompress the image.
  2. Convert the image to grayscale.
  3. Resize the image to 64×64 pixels.
  4. Apply the Haar wavelet transform to the image.
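Step 4 can be illustrated with a single level of the 2-D Haar transform; this is a sketch of the general technique, not imgsmlr's actual code:

```java
public class HaarSketch {
    // One level of the 2-D Haar wavelet transform on an n x n image (n even):
    // pairwise averages go to the first half, differences to the second half,
    // applied first along rows and then along columns.
    public static double[][] haarLevel(double[][] img) {
        int n = img.length;
        double[][] rows = new double[n][n];
        for (int r = 0; r < n; r++)
            for (int c = 0; c < n / 2; c++) {
                rows[r][c]         = (img[r][2 * c] + img[r][2 * c + 1]) / 2;
                rows[r][c + n / 2] = (img[r][2 * c] - img[r][2 * c + 1]) / 2;
            }
        double[][] out = new double[n][n];
        for (int c = 0; c < n; c++)
            for (int r = 0; r < n / 2; r++) {
                out[r][c]         = (rows[2 * r][c] + rows[2 * r + 1][c]) / 2;
                out[r + n / 2][c] = (rows[2 * r][c] - rows[2 * r + 1][c]) / 2;
            }
        return out;
    }
}
```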
2) Comparison

Patterns are compared directly for the final ranking; the compact signature, backed by a GiST index, serves as a fast pre-filter of candidates.

3) Example query
SELECT
	id,
	smlr
FROM
(
	SELECT
		id,
		pattern <-> (SELECT pattern FROM pat WHERE id = :id) AS smlr
	FROM pat
	WHERE id <> :id
	ORDER BY
		signature <-> (SELECT signature FROM pat WHERE id = :id)
	LIMIT 100
) x
ORDER BY x.smlr ASC 
LIMIT 10

Appendix 3. Calling OpenCV from Java

1) Maven dependencies
<!-- https://mvnrepository.com/artifact/org.openpnp/opencv -->
<dependency>
    <groupId>org.openpnp</groupId>
    <artifactId>opencv</artifactId>
    <version>2.4.13-0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/commons-io/commons-io -->
<dependency>
    <groupId>commons-io</groupId>
    <artifactId>commons-io</artifactId>
    <version>2.6</version>
</dependency>
2) Perceptual hashing
public class PHashUtils {
    /**
     * The OpenCV native library must be loaded before the class is used.
     */
    static {
        nu.pattern.OpenCV.loadShared();
    }
    /**
     * Test entry point.
     */
    public static void main(String[] args) {
        String strUrl1 = "http://testUrl1";
        String strUrl2 = "http://testUrl2";
        int distance = getDistance(strUrl1, strUrl2);
    }

    /**
     * Compute the Hamming distance between the pHashes of two image URLs;
     * returns 64 (the maximum) when either image cannot be processed.
     */
    public static int getDistance(String strUrl1, String strUrl2) {
        if (null != strUrl1 && null != strUrl2) {
            try {
                Mat source1 = getMatByUrl(strUrl1);
                long hash1 = getPHash(source1);
                Mat source2 = getMatByUrl(strUrl2);
                long hash2 = getPHash(source2);
                return Long.bitCount(hash1 ^ hash2);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
        return 64;
    }

    /**
     * Build a Mat from a URL; for GIFs, only the first frame is used.
     */
    private static Mat getMatByUrl(String strUrl) {
        byte[] bytes = null;
        try {
            URL url = new URL(strUrl);
            DataInputStream input = new DataInputStream(url.openStream());
            ByteArrayOutputStream output = new ByteArrayOutputStream();
            if (strUrl.toLowerCase().endsWith(".gif")) {
                MemoryCacheImageInputStream in = new MemoryCacheImageInputStream(input);
                GIFImageReader gifReader = new GIFImageReader(new GIFImageReaderSpi());
                gifReader.setInput(in);
                int num = gifReader.getNumImages(true);
                if (num > 0) {
                    BufferedImage read = gifReader.read(0);
                    ImageIO.write(read, "jpg", output);
                }
            } else {
                IOUtils.copy(input, output);
            }
            bytes = output.toByteArray();
            output.close();
            input.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
        if (null == bytes) {
            return null;
        }
        return getMatByBytes(bytes);
    }

    /**
     * Build a Mat by decoding raw image bytes.
     */
    private static Mat getMatByBytes(byte[] bytes) {
        Mat encoded = new Mat(1, bytes.length, CvType.CV_8U);
        encoded.put(0, 0, bytes);
        // Decode from the in-memory buffer and return a Mat
        Mat decoded = Highgui.imdecode(encoded, -1);
        encoded.release();
        return decoded;
    }

    /**
     * Build a Mat from a local file path.
     */
    private static Mat getMatByPath(String path) {
        return Highgui.imread(path);
    }

    /**
     * Compute the pHash (32x32 grayscale, DCT, top-left 8x8, mean threshold).
     */
    private static long getPHash(Mat image) {
        Mat grayImage = new Mat();
        Imgproc.cvtColor(image, grayImage, Imgproc.COLOR_BGR2GRAY);

        Size size = new Size(32, 32);
        double fx = size.width / grayImage.cols();
        double fy = size.height / grayImage.rows();
        Mat newImage = new Mat();
        Imgproc.resize(grayImage, newImage, size, fx, fy, Imgproc.INTER_AREA);

        newImage.convertTo(newImage, CvType.CV_32FC1);

        Mat dst = new Mat(newImage.size(), newImage.type());
        Core.dct(newImage, dst);

        double mean = 0;
        for (int i = 0; i < 8; i++) {
            for (int j = 0; j < 8; j++) {
                mean = mean + dst.get(i, j)[0];
            }
        }
        mean = mean / 64;

        long l = 0;
        long x = 1;
        for (int i = 0; i < 8; i++) {
            for (int j = 0; j < 8; j++) {
                if (dst.get(i, j)[0] >= mean) {
                    l |= x << (63 - i * 8 - j);
                }
            }
        }
        return l;
    }


    /**
     * Compute the aHash (8x8 grayscale, mean threshold).
     */
    private static long getAHash(Mat source) {
        Mat resizeMat = new Mat();
        Imgproc.resize(source, resizeMat, new Size(8, 8));
        Mat BGRMat = new Mat();
        Imgproc.cvtColor(resizeMat, BGRMat, Imgproc.COLOR_BGR2GRAY);

        double sum = 0;
        for (int row = 0; row < BGRMat.height(); row++) {
            for (int cols = 0; cols < BGRMat.width(); cols++) {
                sum += (int) BGRMat.get(row, cols)[0];
            }
        }
        double avg = sum / (BGRMat.height() * BGRMat.width());

        long l = 0;
        long x = 1;
        for (int row = 0; row < BGRMat.height(); row++) {
            for (int cols = 0; cols < BGRMat.width(); cols++) {
                if (BGRMat.get(row, cols)[0] >= avg) {
                    l |= x << (63 - row * 8 - cols);
                }
            }
        }
        return l;
    }

    /**
     * Compute the dHash (9x8 grayscale, adjacent-pixel comparison).
     */
    private static long getDHash(Mat source) {
        Mat resizeMat = new Mat();
        Imgproc.resize(source, resizeMat, new Size(9, 8));
        Mat grayMat = new Mat();
        Imgproc.cvtColor(resizeMat, grayMat, Imgproc.COLOR_BGR2GRAY);

        long l = 0;
        long x = 1;
        for (int row = 0; row < grayMat.height(); row++) {
            for (int cols = 0; cols < grayMat.width() - 1; cols++) {
                if((grayMat.get(row, cols)[0]) > grayMat.get(row, cols + 1)[0]){
                    l |= x << (63 - row * 8 - cols);
                }
            }
        }
        return l;
    }

}
3) Histogram
/**
 * Compare two images by histogram correlation (1.0 = identical distribution).
 */
private static double getHist(Mat source1, Mat source2) {
    Mat histImg1 = calcHist(source1);
    Mat histImg2 = calcHist(source2);
    return Imgproc.compareHist(histImg1, histImg2, Imgproc.CV_COMP_CORREL);
}

/**
 * Compute an image's histogram.
 */
private static Mat calcHist(Mat src) {
    // 1. Convert to HSV
    Mat hsv = new Mat();
    Imgproc.cvtColor(src, hsv, Imgproc.COLOR_BGR2HSV);
    // 2. Compute the histogram over the hue channel
    List<Mat> matList = new LinkedList<>();
    matList.add(hsv);
    Mat histogram = new Mat();
    Imgproc.calcHist(matList, new MatOfInt(0), new Mat(), histogram, new MatOfInt(255), new MatOfFloat(0, 256));
    // 3. Normalize
    Core.normalize(histogram, histogram, 1, histogram.rows(), Core.NORM_MINMAX, -1, new Mat());

    return histogram;
}
4) Feature extraction

SIFT

public class SiftDemo {

    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        Mat img = Highgui.imread("...\\burj.png");

        // Convert to grayscale before feature detection
        Mat gray = new Mat();
        Imgproc.cvtColor(img, gray, Imgproc.COLOR_BGR2GRAY);

        // Detect SIFT keypoints (in OpenCV 2.4, SIFT lives in the nonfree module)
        FeatureDetector fd = FeatureDetector.create(FeatureDetector.SIFT);
        MatOfKeyPoint regions = new MatOfKeyPoint();
        fd.detect(gray, regions);

        // Draw the keypoints and write the result to disk
        Mat output = new Mat();
        Features2d.drawKeypoints(gray, regions, output);
        Highgui.imwrite("out.png", output);
    }
}

SURF

public class SURFDetector {

    public static void main(String[] args) {
        nu.pattern.OpenCV.loadShared();

        String source = "...\\1.jpeg";
        String target = "...\\2.jpeg";

        Mat objectImage = Highgui.imread(source, Highgui.CV_LOAD_IMAGE_COLOR);
        Mat sceneImage = Highgui.imread(target, Highgui.CV_LOAD_IMAGE_COLOR);

        MatOfKeyPoint objectKeyPoints = new MatOfKeyPoint();
        FeatureDetector featureDetector = FeatureDetector.create(FeatureDetector.SURF);
        System.out.println("Detecting key points...");
        featureDetector.detect(objectImage, objectKeyPoints);
        KeyPoint[] keypoints = objectKeyPoints.toArray();
        System.out.println("object key points: " + keypoints.length);

        MatOfKeyPoint objectDescriptors = new MatOfKeyPoint();
        DescriptorExtractor descriptorExtractor = DescriptorExtractor.create(DescriptorExtractor.SURF);
        System.out.println("Computing descriptors...");
        descriptorExtractor.compute(objectImage, objectKeyPoints, objectDescriptors);

        // Create the matrix for output image.
        Mat outputImage = new Mat(objectImage.rows(), objectImage.cols(), Highgui.CV_LOAD_IMAGE_COLOR);
        Scalar newKeypointColor = new Scalar(255, 0, 0);

        System.out.println("Drawing key points on object image...");
        Features2d.drawKeypoints(objectImage, objectKeyPoints, outputImage, newKeypointColor, 0);

        // Match object image with the scene image
        MatOfKeyPoint sceneKeyPoints = new MatOfKeyPoint();
        MatOfKeyPoint sceneDescriptors = new MatOfKeyPoint();
        System.out.println("Detecting key points in background image...");
        featureDetector.detect(sceneImage, sceneKeyPoints);
        System.out.println("Computing descriptors in background image...");
        descriptorExtractor.compute(sceneImage, sceneKeyPoints, sceneDescriptors);

        Mat matchoutput = new Mat(sceneImage.rows() * 2, sceneImage.cols() * 2, Highgui.CV_LOAD_IMAGE_COLOR);
        Scalar matchestColor = new Scalar(0, 255, 0);

        List<MatOfDMatch> matches = new LinkedList<MatOfDMatch>();
        DescriptorMatcher descriptorMatcher = DescriptorMatcher.create(DescriptorMatcher.FLANNBASED);
        System.out.println("Matching object and scene images...");
        descriptorMatcher.knnMatch(objectDescriptors, sceneDescriptors, matches, 2);

        System.out.println("Calculating good match list...");
        LinkedList<DMatch> goodMatchesList = new LinkedList<DMatch>();

        float nndrRatio = 0.7f;

        for (int i = 0; i < matches.size(); i++) {
            MatOfDMatch matofDMatch = matches.get(i);
            DMatch[] dmatcharray = matofDMatch.toArray();
            DMatch m1 = dmatcharray[0];
            DMatch m2 = dmatcharray[1];

            if (m1.distance <= m2.distance * nndrRatio) {
                goodMatchesList.addLast(m1);

            }
        }

        System.out.println("goodMatches: "+goodMatchesList.size()+" / " +matches.size());

        // Round-trip the object key points through JSON to sanity-check the serialization helpers
        String json = keypointsToJson(objectKeyPoints);
        MatOfKeyPoint restored = keypointsFromJson(json);
        System.out.println("keypoints restored from JSON: " + restored.toArray().length
                + " / " + objectKeyPoints.toArray().length);

        if (goodMatchesList.size() >= 7) {
            System.out.println("Object Found!!!");

            List<KeyPoint> objKeypointlist = objectKeyPoints.toList();
            List<KeyPoint> scnKeypointlist = sceneKeyPoints.toList();

            LinkedList<Point> objectPoints = new LinkedList<>();
            LinkedList<Point> scenePoints = new LinkedList<>();

            for (int i = 0; i < goodMatchesList.size(); i++) {
                objectPoints.addLast(objKeypointlist.get(goodMatchesList.get(i).queryIdx).pt);
                scenePoints.addLast(scnKeypointlist.get(goodMatchesList.get(i).trainIdx).pt);
            }

            MatOfPoint2f objMatOfPoint2f = new MatOfPoint2f();
            objMatOfPoint2f.fromList(objectPoints);
            MatOfPoint2f scnMatOfPoint2f = new MatOfPoint2f();
            scnMatOfPoint2f.fromList(scenePoints);

            Mat homography = Calib3d.findHomography(objMatOfPoint2f, scnMatOfPoint2f, Calib3d.RANSAC, 3);

            Mat obj_corners = new Mat(4, 1, CvType.CV_32FC2);
            Mat scene_corners = new Mat(4, 1, CvType.CV_32FC2);

            obj_corners.put(0, 0, new double[]{0, 0});
            obj_corners.put(1, 0, new double[]{objectImage.cols(), 0});
            obj_corners.put(2, 0, new double[]{objectImage.cols(), objectImage.rows()});
            obj_corners.put(3, 0, new double[]{0, objectImage.rows()});

            System.out.println("Transforming object corners to scene corners...");
            Core.perspectiveTransform(obj_corners, scene_corners, homography);

            Mat img = Highgui.imread(target, Highgui.CV_LOAD_IMAGE_COLOR);

            Core.line(img, new Point(scene_corners.get(0, 0)), new Point(scene_corners.get(1, 0)), new Scalar(0, 255, 0), 4);
            Core.line(img, new Point(scene_corners.get(1, 0)), new Point(scene_corners.get(2, 0)), new Scalar(0, 255, 0), 4);
            Core.line(img, new Point(scene_corners.get(2, 0)), new Point(scene_corners.get(3, 0)), new Scalar(0, 255, 0), 4);
            Core.line(img, new Point(scene_corners.get(3, 0)), new Point(scene_corners.get(0, 0)), new Scalar(0, 255, 0), 4);

            System.out.println("Drawing matches image...");
            MatOfDMatch goodMatches = new MatOfDMatch();
            goodMatches.fromList(goodMatchesList);

            Features2d.drawMatches(objectImage, objectKeyPoints, sceneImage, sceneKeyPoints, goodMatches, matchoutput, matchestColor, newKeypointColor, new MatOfByte(), 2);

            Highgui.imwrite("...\\output\\outputImage.jpg", outputImage);
            Highgui.imwrite("...\\output\\matchoutput.jpg", matchoutput);
            Highgui.imwrite("...\\output\\img.jpg", img);
        } else {
            System.out.println("Object Not Found");
        }

        System.out.println("Ended....");
    }

    public static MatOfKeyPoint serialize(MatOfKeyPoint mat) {
        // Round-trip: Java-serialize the keypoint floats, then rebuild the Mat from them
        byte[] bytes = serializeMat(mat);

        MatOfKeyPoint matOfKeyPoint = new MatOfKeyPoint();
        matOfKeyPoint.create(mat.rows(), mat.cols(), mat.type());
        try {
            ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes));
            float[] data = (float[]) in.readObject();
            in.close();
            matOfKeyPoint.put(0, 0, data);
        } catch (IOException | ClassNotFoundException e) {
            e.printStackTrace();
        }
        return matOfKeyPoint;
    }
    public static byte[] serializeMat( MatOfKeyPoint mat) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try {
            float[] data = new float[(int) mat.total() * mat.channels()];
            mat.get(0, 0, data);
            ObjectOutput out = new ObjectOutputStream(bos);
            out.writeObject(data);
            out.close();
            // Get the bytes of the serialized object
            byte[] buf = bos.toByteArray();
            return buf;
        } catch (IOException ioe) {
            ioe.printStackTrace();
            return null;
        }
    }
    public static String keypointsToJson(MatOfKeyPoint mat){
        if(mat!=null && !mat.empty()){
            Gson gson = new Gson();

            JsonArray jsonArr = new JsonArray();

            KeyPoint[] array = mat.toArray();
            for(int i=0; i<array.length; i++){
                KeyPoint kp = array[i];

                JsonObject obj = new JsonObject();

                obj.addProperty("c", kp.class_id);
                obj.addProperty("x",        kp.pt.x);
                obj.addProperty("y",        kp.pt.y);
                obj.addProperty("s",     kp.size);
                obj.addProperty("a",    kp.angle);
                obj.addProperty("o",   kp.octave);
                obj.addProperty("r", kp.response);

                jsonArr.add(obj);
            }

            String json = gson.toJson(jsonArr);

            return json;
        }
        return "{}";
    }

    public static MatOfKeyPoint keypointsFromJson(String json){
        MatOfKeyPoint result = new MatOfKeyPoint();

        JsonParser parser = new JsonParser();
        JsonArray jsonArr = parser.parse(json).getAsJsonArray();

        int size = jsonArr.size();

        KeyPoint[] kpArray = new KeyPoint[size];

        for(int i=0; i<size; i++){
            KeyPoint kp = new KeyPoint();

            JsonObject obj = (JsonObject) jsonArr.get(i);

            Point point = new Point(
                    obj.get("x").getAsDouble(),
                    obj.get("y").getAsDouble()
            );

            kp.pt       = point;
            kp.class_id = obj.get("c").getAsInt();
            kp.size     =     obj.get("s").getAsFloat();
            kp.angle    =    obj.get("a").getAsFloat();
            kp.octave   =   obj.get("o").getAsInt();
            kp.response = obj.get("r").getAsFloat();

            kpArray[i] = kp;
        }

        result.fromArray(kpArray);

        return result;
    }
}