Java - Get a pixel array from an image

ryy*_*yst 111 java bufferedimage javax.imageio

I am looking for the fastest way to get pixel data (in the form int[][]) from a BufferedImage. My goal is to be able to address pixel (x, y) of the image as int[x][y]. All the methods I have found do not do this (most of them return int[]s).

Mot*_*sim 168

I was just playing around with this same subject, which is the fastest way to access the pixels. I currently know of two ways of doing this:

  1. Using the getRGB() method of BufferedImage, as described in @tskuzzy's answer.
  2. By accessing the pixel array directly:

    byte[] pixels = ((DataBufferByte) bufferedImage.getRaster().getDataBuffer()).getData();
    

If you are working with large images and performance is an issue, the first method is absolutely not the way to go. The getRGB() method combines the alpha, red, green and blue values into one int and then returns the result, which in most cases you will reverse again to get the individual values back out.
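
For reference, unpacking one of those packed ints looks roughly like this (a minimal sketch; the variable names are illustrative and not part of the original answer):

int argb = image.getRGB(x, y);        // packed as 0xAARRGGBB
int alpha = (argb >> 24) & 0xff;
int red   = (argb >> 16) & 0xff;
int green = (argb >> 8)  & 0xff;
int blue  =  argb        & 0xff;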

The second method returns the red, green and blue values directly for each pixel, and if there is an alpha channel it adds the alpha value as well. Using this method is harder in terms of calculating the indices, but it is much faster than the first approach.
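
The index arithmetic boils down to something like this (a sketch assuming a byte-backed image with no alpha channel and BGR byte order, i.e. 3 bytes per pixel; x, y and width are illustrative):

int pixelLength = 3;                      // bytes per pixel when there is no alpha channel
int pos = (y * width + x) * pixelLength;  // offset of pixel (x, y) in the byte array
int blue  = pixels[pos]     & 0xff;
int green = pixels[pos + 1] & 0xff;
int red   = pixels[pos + 2] & 0xff;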

In my application I was able to reduce the time spent processing pixels by more than 90% just by switching from the first approach to the second!

Here is the comparison I set up to compare the two approaches:

import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.io.IOException;
import javax.imageio.ImageIO;

public class PerformanceTest {

   public static void main(String[] args) throws IOException {

      BufferedImage hugeImage = ImageIO.read(PerformanceTest.class.getResource("12000X12000.jpg"));

      System.out.println("Testing convertTo2DUsingGetRGB:");
      for (int i = 0; i < 10; i++) {
         long startTime = System.nanoTime();
         int[][] result = convertTo2DUsingGetRGB(hugeImage);
         long endTime = System.nanoTime();
         System.out.println(String.format("%-2d: %s", (i + 1), toString(endTime - startTime)));
      }

      System.out.println("");

      System.out.println("Testing convertTo2DWithoutUsingGetRGB:");
      for (int i = 0; i < 10; i++) {
         long startTime = System.nanoTime();
         int[][] result = convertTo2DWithoutUsingGetRGB(hugeImage);
         long endTime = System.nanoTime();
         System.out.println(String.format("%-2d: %s", (i + 1), toString(endTime - startTime)));
      }
   }

   private static int[][] convertTo2DUsingGetRGB(BufferedImage image) {
      int width = image.getWidth();
      int height = image.getHeight();
      int[][] result = new int[height][width];

      for (int row = 0; row < height; row++) {
         for (int col = 0; col < width; col++) {
            result[row][col] = image.getRGB(col, row);
         }
      }

      return result;
   }

   private static int[][] convertTo2DWithoutUsingGetRGB(BufferedImage image) {

      final byte[] pixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
      final int width = image.getWidth();
      final int height = image.getHeight();
      final boolean hasAlphaChannel = image.getAlphaRaster() != null;

      int[][] result = new int[height][width];
      if (hasAlphaChannel) {
         final int pixelLength = 4;
         for (int pixel = 0, row = 0, col = 0; pixel + 3 < pixels.length; pixel += pixelLength) {
            int argb = 0;
            argb += (((int) pixels[pixel] & 0xff) << 24); // alpha
            argb += ((int) pixels[pixel + 1] & 0xff); // blue
            argb += (((int) pixels[pixel + 2] & 0xff) << 8); // green
            argb += (((int) pixels[pixel + 3] & 0xff) << 16); // red
            result[row][col] = argb;
            col++;
            if (col == width) {
               col = 0;
               row++;
            }
         }
      } else {
         final int pixelLength = 3;
         for (int pixel = 0, row = 0, col = 0; pixel + 2 < pixels.length; pixel += pixelLength) {
            int argb = 0;
            argb += -16777216; // 255 alpha
            argb += ((int) pixels[pixel] & 0xff); // blue
            argb += (((int) pixels[pixel + 1] & 0xff) << 8); // green
            argb += (((int) pixels[pixel + 2] & 0xff) << 16); // red
            result[row][col] = argb;
            col++;
            if (col == width) {
               col = 0;
               row++;
            }
         }
      }

      return result;
   }

   private static String toString(long nanoSecs) {
      int minutes    = (int) (nanoSecs / 60000000000.0);
      int seconds    = (int) (nanoSecs / 1000000000.0)  - (minutes * 60);
      int millisecs  = (int) ( ((nanoSecs / 1000000000.0) - (seconds + minutes * 60)) * 1000);


      if (minutes == 0 && seconds == 0)
         return millisecs + "ms";
      else if (minutes == 0 && millisecs == 0)
         return seconds + "s";
      else if (seconds == 0 && millisecs == 0)
         return minutes + "min";
      else if (minutes == 0)
         return seconds + "s " + millisecs + "ms";
      else if (seconds == 0)
         return minutes + "min " + millisecs + "ms";
      else if (millisecs == 0)
         return minutes + "min " + seconds + "s";

      return minutes + "min " + seconds + "s " + millisecs + "ms";
   }
}

Can you guess the output? ;)

Testing convertTo2DUsingGetRGB:
1 : 16s 911ms
2 : 16s 730ms
3 : 16s 512ms
4 : 16s 476ms
5 : 16s 503ms
6 : 16s 683ms
7 : 16s 477ms
8 : 16s 373ms
9 : 16s 367ms
10: 16s 446ms

Testing convertTo2DWithoutUsingGetRGB:
1 : 1s 487ms
2 : 1s 940ms
3 : 1s 785ms
4 : 1s 848ms
5 : 1s 624ms
6 : 2s 13ms
7 : 1s 968ms
8 : 1s 864ms
9 : 1s 673ms
10: 2s 86ms

BUILD SUCCESSFUL (total time: 3 minutes 10 seconds)

  • For those too lazy to read the code: there are two tests, `convertTo2DUsingGetRGB` and `convertTo2DWithoutUsingGetRGB`. The first test takes 16 seconds on average. The second takes 1.5 seconds on average. At first I thought "s" and "ms" were two different columns. @Mota, great reference. (8 upvotes)
  • For anyone noticing color differences and/or an incorrect byte order: @Mota's code assumes *BGR* ordering. You should check the `type` of the incoming `BufferedImage`, e.g. `TYPE_INT_RGB` or `TYPE_3BYTE_BGR`, and handle it accordingly. That is one of the things `getRGB()` does for you, and it is part of what makes it slower :-( (6 upvotes)
  • @Mota in convertTo2DUsingGetRGB, why do you write result[row][col] = image.getRGB(col, row); rather than result[row][col] = image.getRGB(row, col);? (4 upvotes)
  • It seems this no longer works. Using it gives "java.lang.ClassCastException: java.awt.image.DataBufferInt cannot be cast to java.awt.image.DataBufferByte". Using `DataBufferInt` gives an array of rgb[-16777216, -16777216, -16777216]. (See the sketch after these comments.) (3 upvotes)
  • Correct me if I'm wrong, but wouldn't it be more efficient to use `|=` instead of `+=` to combine the values in method 2? (2 upvotes)
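
Picking up on the byte-order and `ClassCastException` comments above: whether the backing buffer is a `DataBufferByte` or a `DataBufferInt` depends on the image type, so a defensive variant could branch on it before touching the raw array. This is only a minimal sketch of that idea (not part of the original answer) and it covers just the two most common layouts:

import java.awt.image.BufferedImage;
import java.awt.image.DataBuffer;
import java.awt.image.DataBufferByte;
import java.awt.image.DataBufferInt;

public class PixelAccess {

   // Returns the pixel at (x, y) as a packed int, branching on the underlying buffer type.
   static int pixelAt(BufferedImage image, int x, int y) {
      DataBuffer buffer = image.getRaster().getDataBuffer();
      int width = image.getWidth();

      if (buffer instanceof DataBufferInt) {
         // e.g. TYPE_INT_RGB / TYPE_INT_ARGB: one int per pixel, already packed
         // (for TYPE_INT_RGB the alpha byte is simply 0)
         int[] pixels = ((DataBufferInt) buffer).getData();
         return pixels[y * width + x];
      }

      if (buffer instanceof DataBufferByte && image.getType() == BufferedImage.TYPE_3BYTE_BGR) {
         // three bytes per pixel, stored in B, G, R order
         byte[] pixels = ((DataBufferByte) buffer).getData();
         int pos = (y * width + x) * 3;
         return 0xff000000
               | ((pixels[pos + 2] & 0xff) << 16)  // red
               | ((pixels[pos + 1] & 0xff) << 8)   // green
               |  (pixels[pos]     & 0xff);        // blue
      }

      // Any other layout: fall back to the slower but always-correct path.
      return image.getRGB(x, y);
   }
}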

tsk*_*zzy 22

Something like this?

int[][] pixels = new int[w][h];

for( int i = 0; i < w; i++ )
    for( int j = 0; j < h; j++ )
        pixels[i][j] = img.getRGB( i, j );

  • Isn't this very inefficient? Wouldn't `BufferedImage` store the pixels in a 2D int array internally anyway? (9 upvotes)
  • @ryyst if you want all the pixels in an array, this is about as efficient as it gets (4 upvotes)
  • I'm fairly sure the image is stored internally as a one-dimensional data structure, so the operation will take O(W*H) no matter what you do. You could avoid the method-call overhead by first storing it into a 1D array and then converting that 1D array into a 2D array. (2 upvotes)
  • @tskuzzy this approach is slower. Check Mota's approach, which is faster than this conventional one. (2 upvotes)

Rob*_*ton 17

I found Mota's answer gave me a 10x speed increase - so thanks Mota.

I've wrapped the code up in a convenient class which takes a BufferedImage in the constructor and exposes an equivalent getRGB(x, y) method, so code that uses BufferedImage.getRGB(x, y) can switch to it as a drop-in replacement.

import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;

public class FastRGB
{

    private int width;
    private int height;
    private boolean hasAlphaChannel;
    private int pixelLength;
    private byte[] pixels;

    FastRGB(BufferedImage image)
    {

        pixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
        width = image.getWidth();
        height = image.getHeight();
        hasAlphaChannel = image.getAlphaRaster() != null;
        pixelLength = 3;
        if (hasAlphaChannel)
        {
            pixelLength = 4;
        }

    }

    int getRGB(int x, int y)
    {
        int pos = (y * pixelLength * width) + (x * pixelLength);

        int argb = -16777216; // 255 alpha
        if (hasAlphaChannel)
        {
            argb = (((int) pixels[pos++] & 0xff) << 24); // alpha
        }

        argb += ((int) pixels[pos++] & 0xff); // blue
        argb += (((int) pixels[pos++] & 0xff) << 8); // green
        argb += (((int) pixels[pos++] & 0xff) << 16); // red
        return argb;
    }
}
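
Usage is then along these lines (a quick illustrative sketch; the file name is hypothetical, and the image must be backed by a byte buffer as discussed in the comments above):

BufferedImage image = ImageIO.read(new File("input.jpg")); // hypothetical input file
FastRGB fastRGB = new FastRGB(image);
int argb = fastRGB.getRGB(10, 20); // same packed format as BufferedImage.getRGB(10, 20)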


小智 9

Mota's answer is great, unless your BufferedImage came from a Monochrome Bitmap. A Monochrome Bitmap has only 2 possible values for its pixels (for example 0 = black and 1 = white). When a Monochrome Bitmap is used, then the

final byte[] pixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();

call returns the raw pixel array data in such a fashion that each byte contains more than one pixel.

So when you create your BufferedImage object from a Monochrome Bitmap image, this is the algorithm you want to use:

/**
 * This returns a true bitmap where each element in the grid is either a 0
 * or a 1. A 1 means the pixel is white and a 0 means the pixel is black.
 * 
 * If the incoming image doesn't have any pixels in it then this method
 * returns null;
 * 
 * @param image
 * @return
 */
public static int[][] convertToArray(BufferedImage image)
{

    if (image == null || image.getWidth() == 0 || image.getHeight() == 0)
        return null;

    // This returns bytes of data starting from the top left of the bitmap
    // image and goes down.
    // Top to bottom. Left to right.
    final byte[] pixels = ((DataBufferByte) image.getRaster()
            .getDataBuffer()).getData();

    final int width = image.getWidth();
    final int height = image.getHeight();

    int[][] result = new int[height][width];

    boolean done = false;
    boolean alreadyWentToNextByte = false;
    int byteIndex = 0;
    int row = 0;
    int col = 0;
    int numBits = 0;
    byte currentByte = pixels[byteIndex];
    while (!done)
    {
        alreadyWentToNextByte = false;

        result[row][col] = (currentByte & 0x80) >> 7;
        currentByte = (byte) (((int) currentByte) << 1);
        numBits++;

        if ((row == height - 1) && (col == width - 1))
        {
            done = true;
        }
        else
        {
            col++;

            if (numBits == 8)
            {
                currentByte = pixels[++byteIndex];
                numBits = 0;
                alreadyWentToNextByte = true;
            }

            if (col == width)
            {
                row++;
                col = 0;

                if (!alreadyWentToNextByte)
                {
                    currentByte = pixels[++byteIndex];
                    numBits = 0;
                }
            }
        }
    }

    return result;
}