Search Results

Search found 19850 results on 794 pages for 'image zoom'.

Page 9 of 794

  • Scale an image which is stored as a byte[] in Java

    - by Sergio del Amo
    I upload a file with a Struts form. I have the image as a byte[] and I would like to scale it:

        FormFile file = (FormFile) dynaform.get("file");
        byte[] fileData = file.getFileData();
        fileData = scale(fileData, 200, 200);

        public byte[] scale(byte[] fileData, int width, int height) {
            // TODO
        }

    Does anyone know an easy function to do this?

        public byte[] scale(byte[] fileData, int width, int height) {
            ByteArrayInputStream in = new ByteArrayInputStream(fileData);
            try {
                BufferedImage img = ImageIO.read(in);
                if (height == 0) {
                    height = (width * img.getHeight()) / img.getWidth();
                }
                if (width == 0) {
                    width = (height * img.getWidth()) / img.getHeight();
                }
                Image scaledImage = img.getScaledInstance(width, height, Image.SCALE_SMOOTH);
                BufferedImage imageBuff = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
                imageBuff.getGraphics().drawImage(scaledImage, 0, 0, new Color(0, 0, 0), null);
                ByteArrayOutputStream buffer = new ByteArrayOutputStream();
                ImageIO.write(imageBuff, "jpg", buffer);
                return buffer.toByteArray();
            } catch (IOException e) {
                throw new ApplicationException("IOException in scale");
            }
        }

    If you run out of Java heap space in Tomcat, as I did, increase the heap space Tomcat uses. If you use the Tomcat plugin for Eclipse, the following should apply: in Eclipse, choose Window > Preferences > Tomcat > JVM Settings and add the following to the JVM Parameters section: -Xms256m -Xmx512m
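
    An alternative minimal sketch (not from the original post): the same scaling done with Graphics2D and a bilinear interpolation hint instead of Image.getScaledInstance(), which avoids the intermediate toolkit Image; the class name ImageScaler is only for illustration.

        import java.awt.Graphics2D;
        import java.awt.RenderingHints;
        import java.awt.image.BufferedImage;
        import java.io.ByteArrayInputStream;
        import java.io.ByteArrayOutputStream;
        import java.io.IOException;
        import javax.imageio.ImageIO;

        public class ImageScaler {
            // Scales JPEG bytes to the given width/height with Graphics2D and bilinear filtering.
            public static byte[] scale(byte[] fileData, int width, int height) throws IOException {
                BufferedImage src = ImageIO.read(new ByteArrayInputStream(fileData));
                if (height == 0) height = width * src.getHeight() / src.getWidth();
                if (width == 0) width = height * src.getWidth() / src.getHeight();
                BufferedImage dst = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
                Graphics2D g = dst.createGraphics();
                g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                                   RenderingHints.VALUE_INTERPOLATION_BILINEAR);
                g.drawImage(src, 0, 0, width, height, null);   // scale while drawing
                g.dispose();
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                ImageIO.write(dst, "jpg", out);
                return out.toByteArray();
            }
        }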

    Read the article

  • How to set the output image size when using com.android.camera.action.CROP

    - by adi.zean
    I have code to crop an image, like this:

        public void doCrop() {
            Intent intent = new Intent("com.android.camera.action.CROP");
            intent.setType("image/*");
            List<ResolveInfo> list = getPackageManager().queryIntentActivities(intent, 0);
            int size = list.size();
            if (size == 0) {
                Toast.makeText(this, "Can't find crop app", Toast.LENGTH_SHORT).show();
                return;
            } else {
                intent.setData(selectImageUri);
                intent.putExtra("outputX", 300);
                intent.putExtra("outputY", 300);
                intent.putExtra("aspectX", 1);
                intent.putExtra("aspectY", 1);
                intent.putExtra("scale", true);
                intent.putExtra("return-data", true);
                if (size == 1) {
                    Intent i = new Intent(intent);
                    ResolveInfo res = list.get(0);
                    i.setComponent(new ComponentName(res.activityInfo.packageName, res.activityInfo.name));
                    startActivityForResult(i, CROP_RESULT);
                }
            }
        }

        public void onActivityResult(int requestCode, int resultCode, Intent data) {
            if (resultCode == RESULT_OK) {
                if (requestCode == CROP_RESULT) {
                    Bundle extras = data.getExtras();
                    if (extras != null) {
                        bmp = extras.getParcelable("data");
                    }
                    File f = new File(selectImageUri.getPath());
                    if (f.exists()) f.delete();
                    Intent inten3 = new Intent(this, tabActivity.class);
                    startActivity(inten3);
                }
            }
        }

    From what I have read, intent.putExtra("outputX", 300); and intent.putExtra("outputY", 300); set the resolution of the crop result, but why can't I get a result image resolution higher than 300x300? When I set intent.putExtra("outputX", 800); and intent.putExtra("outputY", 800); the crop function returns no result or crashes. Any ideas for this situation? The logcat says "! ! ! FAILED BINDER TRANSACTION ! ! !".
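
    A hedged sketch of the usual workaround (not from the original question): a bitmap much larger than roughly 300x300 is too big to return through the intent extras, which is what triggers the FAILED BINDER TRANSACTION; asking the crop activity to write its output to a file instead avoids that limit. The target file and helper name below are illustrative assumptions; selectImageUri and CROP_RESULT are the fields from the question.

        import java.io.File;
        import android.content.Intent;
        import android.net.Uri;
        import android.provider.MediaStore;

        // Inside the same Activity as doCrop(); cropFile is a hypothetical output file.
        private void doCropToFile(File cropFile) {
            Intent intent = new Intent("com.android.camera.action.CROP");
            intent.setDataAndType(selectImageUri, "image/*");
            intent.putExtra("outputX", 800);
            intent.putExtra("outputY", 800);
            intent.putExtra("aspectX", 1);
            intent.putExtra("aspectY", 1);
            intent.putExtra("scale", true);
            intent.putExtra("return-data", false);                            // do not inline the bitmap
            intent.putExtra(MediaStore.EXTRA_OUTPUT, Uri.fromFile(cropFile)); // write it to disk instead
            startActivityForResult(intent, CROP_RESULT);
        }

        // In onActivityResult, read the full-size result back from the file:
        // Bitmap bmp = BitmapFactory.decodeFile(cropFile.getPath());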

    Read the article

  • iPhone: zoom to user location with MapKit

    - by satyam
    I'm using MapKit and showing the user's location with "showsUserLocation". I'm using the following code to zoom to the user's location, but it does not zoom there. It zooms to some location in Africa, even though the user's location is shown correctly on the map.

        MKCoordinateRegion newRegion;
        MKUserLocation* usrLocation = mapView.userLocation;
        newRegion.center.latitude = usrLocation.location.coordinate.latitude;
        newRegion.center.longitude = usrLocation.location.coordinate.longitude;
        newRegion.span.latitudeDelta = 20.0;
        newRegion.span.longitudeDelta = 28.0;
        [self.mapView setRegion:newRegion animated:YES];

    Why is the user's location shown correctly while the zoom goes to the wrong place? Can someone correct me, please?

    Read the article

  • Scaling an Image in GWT

    - by Daniel
    Changing the size of an Image widget in GWT changes the size of the image element, but does not rescale the image on the screen. Therefore, the following will not work:

        Image image = new Image(myImageResource);
        image.setHeight(newHeight);
        image.setWidth(newWidth);
        image.setPixelSize(newWidth, newHeight);

    This is because GWT implements its Image widget by setting the image as the CSS background-image of the HTML <img .../> element rather than as its content. How does one get the actual image to resize?

    Read the article

  • Cross-Browser jQuery text-zoom implementation

    - by JMC Creative
    I've got some code to increase/decrease the font size. This is giving me a headache because each browser seems to implement $.css('font-size') differently (see the jQuery bug). The part that's really killing me, though, is that Firefox scales up fine, but when I use the scale-down function below, it scales up as well. WebKit and IE both work as expected. Any ideas on that? I put this in a fiddle here: http://jsfiddle.net/srQ3P/1/ where you can see it working as expected, and you can see it broken in Firefox on the project page: http://cumberlandme.info/residents MAJOR EDIT: Sorry, the issue is not the code; it's buggy Firefox behavior. After I zoom in or out with the browser controls (Ctrl + plus or Ctrl + minus) the JS goes haywire. This doesn't happen in other browsers. This is the real issue. Any advice on that?

    Read the article

  • Where will the image be saved? [closed]

    - by Dummy Derp
        import java.awt.Dimension;
        import java.awt.Rectangle;
        import java.awt.Robot;
        import java.awt.Toolkit;
        import java.awt.image.BufferedImage;
        import javax.imageio.ImageIO;
        import java.io.File;
        ...
        public void captureScreen(String fileName) throws Exception {
            Dimension screenSize = Toolkit.getDefaultToolkit().getScreenSize();
            Rectangle screenRectangle = new Rectangle(screenSize);
            Robot robot = new Robot();
            BufferedImage image = robot.createScreenCapture(screenRectangle);
            ImageIO.write(image, "png", new File(fileName));
        }
        ...

    I'm going over this code, which I found on the internet. I understand everything except the part where the file is created. What format should the file name be in? Should it be "C:/myFolder/myImage.png" or just "myImage.png", and where will it be saved? Here is what the docs say:

    File
    public File(String pathname)
    Creates a new File instance by converting the given pathname string into an abstract pathname. If the given string is the empty string, then the result is the empty abstract pathname.
    Parameters: pathname - A pathname string
    Throws: NullPointerException - If the pathname argument is null
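
    A small illustrative sketch (an assumption, not from the original post): a bare name like "myImage.png" is resolved against the JVM's working directory, while "C:/myFolder/myImage.png" is absolute; printing the resolved paths shows where the capture will actually land.

        import java.io.File;

        public class WhereDoesItGo {
            public static void main(String[] args) {
                File relative = new File("myImage.png");              // resolved against user.dir
                File absolute = new File("C:/myFolder/myImage.png");  // already absolute
                System.out.println(System.getProperty("user.dir"));   // the JVM's working directory
                System.out.println(relative.getAbsolutePath());       // where the relative name ends up
                System.out.println(absolute.getAbsolutePath());       // unchanged
            }
        }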

    Read the article

  • Image mapping using lookup tables [on hold]

    - by jblasius
    I have an optimization problem. I'm using a look-up table to map a pixel in an image:

        for (uint32_t index = 0u; index < imgSize; index++) {
            img[ lt[ index ] ] = val;
        }

    Is there a faster way to do this, perhaps using a reinterpret_cast or something like that? I am accessing two different memory addresses, so what is the compiler doing? One solution is to do a set of reads that access adjacent memory addresses:

        struct mblock {
            uint32_t buf[10u];
        };

        mblock mb;
        for (uint32_t index = 0u; index < imgSize; index += 10u) {
            mb = *reinterpret_cast<mblock*>(lt + index);
            for (uint8_t i = 0u; i < 10u; i++) {
                mb.buf[i] += img;
            }
            for (uint8_t i = 0u; i < 10u; i++) {
                *( mb.buf[i] ) = val;
            }
        }

    This speeds up the code because I'm separating the image access from the table look-up; the positions in the look-up table are adjacent. I still get the image access problem, as it is accessing random address positions.

    Read the article

  • On-the-fly lossless image compression

    - by geschema
    I have an embedded application where an image scanner sends out a stream of 16-bit pixels that are later assembled to a grayscale image. As I need to both save this data locally and forward it to a network interface, I'd like to compress the data stream to reduce the required storage space and network bandwidth. Is there a simple algorithm that I can use to losslessly compress the pixel data? I first thought of computing the difference between two consecutive pixels and then encoding this difference with a Huffman code. Unfortunately, the pixels are unsigned 16-bit quantities so the difference can be anywhere in the range -65535 .. +65535 which leads to potentially huge codeword lengths. If a few really long codewords occur in a row, I'll run into buffer overflow problems.
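
    An illustrative sketch of the delta idea (an assumption, shown in Java only for illustration, not from the original post): zig-zag mapping the pixel-to-pixel difference folds the sign into the low bit, so small differences become small unsigned values and the entropy coder (Huffman, Rice/Golomb) rarely needs long codewords, while the full -65535..+65535 range stays representable.

        // Delta + zig-zag preprocessing for unsigned 16-bit pixels.
        public static int[] deltaZigZag(int[] pixels) {        // each entry holds 0..65535
            int[] out = new int[pixels.length];
            int prev = 0;
            for (int i = 0; i < pixels.length; i++) {
                int diff = pixels[i] - prev;                   // range -65535 .. +65535
                out[i] = (diff << 1) ^ (diff >> 31);           // 0,-1,1,-2,2,... -> 0,1,2,3,4,...
                prev = pixels[i];
            }
            return out;                                        // feed these to the entropy coder
        }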

    Read the article

  • Cropping image with ImageScience

    - by fl00r
    ImageScience is cool and light. I am using it in my Sinatra app, but I can't understand how to crop an image that is not square, or how to make a thumbnail constrained by two dimensions. This is what I found on the ImageScience site:

        ImageScience.with_image(file) do |img|
          img.cropped_thumbnail(100) do |thumb|
            thumb.save "#{file}_cropped.png"
          end
          img.thumbnail(100) do |thumb|
            thumb.save "#{file}_thumb.png"
          end
          img.resize(100, 150) do |img2|
            img2.save "#{file}_resize.png"
          end
        end

    I can crop and resize a thumbnail with only ONE dimension, but I want to use two, as in RMagick. For example, I want to crop a 100x200px box from the image, or make a thumbnail whose width is no bigger than 300 pixels and whose height is no bigger than 500 pixels.

    Read the article

  • Scaling Image to multiple sizes for Deep Zoom

    - by AnthonyWJones
    Let's assume I have a bitmap with a square aspect ratio and a width of 2048 pixels. In order to create the set of files needed by Silverlight's DeepZoomImageTileSource, I need to scale this bitmap to 1024, then 512, then 256, and so on down to a 1-pixel image. There are two, I suspect naive, approaches: 1. For each size required, scale the original full-size image down to that size; however, it seems excessive to scale the full image down to the very small sizes. 2. Having scaled from one level to the next, discard the original image and use each successive scaled image as the source of the next smaller one; however, I suspect this would generate images in the 256-64 range with poorer fidelity than option 1. Note that, unlike the Deep Zoom Composer, this tool is expected to act in an on-demand fashion, so it needs to complete in a reasonable timeframe (30 seconds tops). On the plus side, I'm only creating a single multiscale image, not a pyramid of multiple high-res images. I am outside my comfort zone here; any graphics experts got any advice? Am I wrong about point 2? Is point 1 reasonably performant and I'm worrying about nothing? Option 3?
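
    An illustrative sketch of approach 2 (an assumption, shown in Java rather than the Silverlight/.NET stack of the question): build the pyramid by repeatedly halving the previous level. Because each step is an exact 2:1 reduction, a bilinear filter effectively averages 2x2 blocks, so the fidelity loss relative to always rescaling the original tends to be small.

        import java.awt.Graphics2D;
        import java.awt.RenderingHints;
        import java.awt.image.BufferedImage;
        import java.util.ArrayList;
        import java.util.List;

        // Builds all pyramid levels down to 1 pixel by halving the previous level each time.
        public static List<BufferedImage> buildPyramid(BufferedImage full) {
            List<BufferedImage> levels = new ArrayList<BufferedImage>();
            BufferedImage current = full;
            levels.add(current);
            while (current.getWidth() > 1) {
                int w = Math.max(1, current.getWidth() / 2);
                int h = Math.max(1, current.getHeight() / 2);
                BufferedImage half = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
                Graphics2D g = half.createGraphics();
                g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                                   RenderingHints.VALUE_INTERPOLATION_BILINEAR);
                g.drawImage(current, 0, 0, w, h, null);   // exact 2:1 reduction of the previous level
                g.dispose();
                levels.add(half);
                current = half;
            }
            return levels;
        }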

    Read the article

  • IE6 image scaling with bicubic filter

    - by thehuby
    I have a project where I have to resize some images on the browser side. IE8, FF3 et al. all apply a filter to smooth the resizing of the image, so in these browsers everything looks good. In IE7 I have applied the following fix, which works great: -ms-interpolation-mode:bicubic; In IE6, however, I can only find references to the AlphaImageLoader filter (the same one used to enable alpha transparency on PNG files). However, I can't find an example of how to use it, nor have I been able to get it working myself. Can anyone provide me with an example? Preferably applied to actual img tags, though I could use background images if required. MSDN link (for what it's worth): http://msdn.microsoft.com/en-us/library/ms532969%28VS.85%29.aspx The code I am using in my CSS is applied to the img, though I've tried applying it to the img container as well (with no effect):

        #provider-list li img {
            filter: progid:DXImageTransform.Microsoft.AlphaImageLoader(src="/image.gif", sizingMethod="scale");
        }

    A thousand thank you's in advance :) Rick

    Read the article

  • PHP - Resize an image and fill gaps of proportions with a color

    - by Kerry
    I am uploading logos to my system, and they need to fit in a 60x60 pixel box. I have all the code to resize an image proportionally, and that's not a problem: my 454x292px image becomes 60x38. The thing is, I need the picture to be 60x60, meaning I want to pad the top and bottom with white (I can fill the rectangle with the color). The theory is that I create a white 60x60 rectangle, then copy the image, resize it to 60x38, and place it in the white rectangle starting 11px from the top (which adds up to the 22px of total padding that I need). I would post my code, but it's fairly long, though I can if requested. Does anyone know how to do this, or can you point me to code or a tutorial that does it?

    Read the article

  • Image loader can't load my live image URL

    - by Bindhu
    In my application i need to load the images in list view, when using locale(ip ported url) then no problem all images are loading properly, But when using live url then the images are not loading, My image loader class: public class ImageLoader { MemoryCache memoryCache = new MemoryCache(); FileCache fileCache; private Map<ImageView, String> imageViews = Collections .synchronizedMap(new WeakHashMap<ImageView, String>()); ExecutorService executorService; public ImageLoader(Context context) { fileCache = new FileCache(context); executorService = Executors.newFixedThreadPool(5); } final int stub_id = R.drawable.appointeesample; public void DisplayImage(String url, ImageView imageView) { imageViews.put(imageView, url); Bitmap bitmap = memoryCache.get(url); if (bitmap != null) imageView.setImageBitmap(bitmap); else { Log.d("stub", "stub" + stub_id); queuePhoto(url, imageView); imageView.setImageResource(stub_id); } } private void queuePhoto(String url, ImageView imageView) { PhotoToLoad p = new PhotoToLoad(url, imageView); executorService.submit(new PhotosLoader(p)); } private Bitmap getBitmap(String url) { File f = fileCache.getFile(url); // from SD cache Bitmap b = decodeFile(f); if (b != null) return b; // from web try { Bitmap bitmap = null; URL imageUrl = new URL(url); HttpURLConnection conn = (HttpURLConnection) imageUrl .openConnection(); conn.setConnectTimeout(30000); conn.setReadTimeout(30000); conn.setInstanceFollowRedirects(true); InputStream is = conn.getInputStream(); BufferedInputStream bis = new BufferedInputStream(is, 81960); BitmapFactory.Options opts = new BitmapFactory.Options(); opts.inJustDecodeBounds = true; OutputStream os = new FileOutputStream(f); Utils.CopyStream(bis, os); os.close(); bitmap = decodeFile(f); Log.d("bitmap", "Bit map" + bitmap); return bitmap; } catch (Exception ex) { ex.printStackTrace(); return null; } } // decodes image and scales it to reduce memory consumption private Bitmap decodeFile(File f) { try { try { BitmapFactory.Options o = new BitmapFactory.Options(); o.inJustDecodeBounds = true; BitmapFactory.decodeStream(new FileInputStream(f), null, o); final int REQUIRED_SIZE = 200; int scale = 1; while (o.outWidth / scale / 2 >= REQUIRED_SIZE && o.outHeight / scale / 2 >= REQUIRED_SIZE) scale *= 2; BitmapFactory.Options o2 = new BitmapFactory.Options(); o2.inSampleSize = scale; return BitmapFactory.decodeStream(new FileInputStream(f), null, o2); } catch (FileNotFoundException e) { } finally { System.gc(); } return null; } catch (Exception e) { } return null; } // Task for the queue private class PhotoToLoad { public String url; public ImageView imageView; public PhotoToLoad(String u, ImageView i) { url = u; imageView = i; } } class PhotosLoader implements Runnable { PhotoToLoad photoToLoad; PhotosLoader(PhotoToLoad photoToLoad) { this.photoToLoad = photoToLoad; } @Override public void run() { if (imageViewReused(photoToLoad)) return; Bitmap bmp = getBitmap(photoToLoad.url); memoryCache.put(photoToLoad.url, bmp); if (imageViewReused(photoToLoad)) return; BitmapDisplayer bd = new BitmapDisplayer(bmp, photoToLoad); Activity a = (Activity) photoToLoad.imageView.getContext(); a.runOnUiThread(bd); } } boolean imageViewReused(PhotoToLoad photoToLoad) { String tag = imageViews.get(photoToLoad.imageView); if (tag == null || !tag.equals(photoToLoad.url)) return true; return false; } // Used to display bitmap in the UI thread class BitmapDisplayer implements Runnable { Bitmap bitmap; PhotoToLoad photoToLoad; public BitmapDisplayer(Bitmap b, PhotoToLoad p) { 
bitmap = b; photoToLoad = p; } public void run() { if (imageViewReused(photoToLoad)) return; if (bitmap != null) photoToLoad.imageView.setImageBitmap(bitmap); else photoToLoad.imageView.setImageResource(stub_id); } } public void clearCache() { memoryCache.clear(); fileCache.clear(); } My live image URL, for example: https://goappointed.com/images_upload/3330Torana_Logo.JPG I have searched Google, but no solution works. Thanks a lot in advance.
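
    A hedged sketch of one thing worth checking (an assumption, not from the original post): HttpURLConnection does not follow a redirect that switches protocol (for example http -> https), and many live hosts redirect image URLs that way, so getInputStream() can return an error page or nothing even though the browser shows the image. The fragment below would slot into getBitmap() in place of the plain openConnection() call; the classes used are the ones the class already imports.

        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setConnectTimeout(30000);
        conn.setReadTimeout(30000);
        conn.setInstanceFollowRedirects(true);
        int code = conn.getResponseCode();                       // forces the request, exposes redirects
        if (code == HttpURLConnection.HTTP_MOVED_PERM || code == HttpURLConnection.HTTP_MOVED_TEMP) {
            String location = conn.getHeaderField("Location");   // follow the cross-protocol redirect by hand
            conn.disconnect();
            conn = (HttpURLConnection) new URL(location).openConnection();
            conn.setConnectTimeout(30000);
            conn.setReadTimeout(30000);
        }
        InputStream is = conn.getInputStream();                  // then stream to the cache file as before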

    Read the article

  • How to prevent the floating layout from wrapping when Firefox zoom is reduced

    - by Dmitri Zhuchkov
    Given the following HTML, which displays two columns, #left and #right. Both are fixed width and have 1px borders; together the widths and borders equal the width of the outer container, #wrap. When I zoom out in Firefox 3.5.2 by pressing Ctrl+-, the columns wrap (demo). How do I prevent this?

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
        <head>
            <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
            <title>Test</title>
            <style type="text/css">
                div {float:left}
                #wrap {width:960px}
                #left, #right {width:478px;border:1px solid}
            </style>
        </head>
        <body>
            <div id="wrap">
                <div id="left">
                    left
                </div>
                <div id="right">
                    right
                </div>
            </div>
        </body>
        </html>

    Read the article

  • output image is not displayed

    - by gerry chocolatos
    I'm doing image detection where I have to process the red, green, and blue elements separately to get an edge map for each, and then combine them into one image to show the output. But it doesn't show my output image. Would anyone please be kind enough to help me? Here is my code so far:

        //get the red element
        process_red = new int[width * height];
        counter = 0;
        for(int i = 0; i < 256; i++) {
            for(int j = 0; j < 256; j++) {
                int clr = buff_red.getRGB(j, i);
                int red = (clr & 0x00ff0000) >> 16;
                red = (0xFF<<24)|(red<<16)|(red<<8)|red;
                process_red[counter] = red;
                counter++;
            }
        }

        //set threshold value for red element
        int threshold = 100;
        for (int x = 0; x < width; x++) {
            for (int y = 0; y < height; y++) {
                int bin = (buff_red.getRGB(x, y) & 0x000000ff);
                if (bin < threshold) bin = 0;
                else bin = 255;
                buff_red.setRGB(x, y, 0xff000000 | bin << 16 | bin << 8 | bin);
            }
        }

    I do the same for the green and blue elements, and then I combine the three like this:

        //combine the three elements
        process_combine = new int[width * height];
        counter = 0;
        for(int i = 0; i < 256; i++) {
            for(int j = 0; j < 256; j++) {
                int clr_a = buff_red.getRGB(j, i);
                int ar = clr_a & 0x000000ff;
                int clr_b = buff_green.getRGB(j, i);
                int bg = clr_b & 0x000000ff;
                int clr_c = buff_blue.getRGB(j, i);
                int cb = clr_b & 0x000000ff;
                int alpha = 0xff000000;
                int combine = alpha|(ar<<16)|(bg<<8)|cb;
                process_combine[counter] = combine;
                counter++;
            }
        }

        buff_rgb = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        Graphics rgb;
        rgb = buff_rgb.getGraphics();
        rgb.drawImage(output_rgb, 0, 0, null);
        rgb.dispose();
        repaint();

    To show the output from the combining process, I use a draw method: g.drawImage(buff_rgb, 800, 100, this); but it still doesn't show the image. Can anyone please help me? Your help is really appreciated. Thanks.
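
    A hedged sketch (an assumption, not from the original post) of filling the combined image directly from the three thresholded buffers with setRGB, so the output no longer depends on a separately drawn object; note also that in the combine loop above, cb is taken from clr_b (green) rather than clr_c (blue).

        BufferedImage buffRgb = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int red   = buff_red.getRGB(x, y)   & 0xff;   // thresholded channels, 0 or 255
                int green = buff_green.getRGB(x, y) & 0xff;
                int blue  = buff_blue.getRGB(x, y)  & 0xff;
                buffRgb.setRGB(x, y, 0xff000000 | (red << 16) | (green << 8) | blue);
            }
        }
        // then paint it from the draw method: g.drawImage(buffRgb, 800, 100, this);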

    Read the article

  • Greyscale Image from YUV420p data

    - by fergs
    From what I have read on the internet, the Y value is the luminance value and can be used to create a grey scale image. The following link, http://www.bobpowell.net/grayscale.htm, has some C# code for working out the luminance of a bitmap image:

        public Bitmap ConvertToGrayscale(Bitmap source)
        {
            Bitmap bm = new Bitmap(source.Width, source.Height);
            for (int y = 0; y < bm.Height; y++)
            {
                for (int x = 0; x < bm.Width; x++)
                {
                    Color c = source.GetPixel(x, y);
                    int luma = (int)(c.R * 0.3 + c.G * 0.59 + c.B * 0.11);
                    bm.SetPixel(x, y, Color.FromArgb(luma, luma, luma));
                }
            }
            return bm;
        }

    I have a method that returns the YUV values, and I have the Y data in a byte array. I have the following piece of code, and it fails on Marshal.Copy with "attempted to read or write protected memory":

        public Bitmap ConvertToGrayscale2(byte[] yuvData, int width, int height)
        {
            Bitmap bmp;
            IntPtr blue = IntPtr.Zero;
            int inputOffSet = 0;
            long[] pixels = new long[width * height];
            try
            {
                for (int y = 0; y < height; y++)
                {
                    int outputOffSet = y * width;
                    for (int x = 0; x < width; x++)
                    {
                        int grey = yuvData[inputOffSet + x] & 0xff;
                        unchecked
                        {
                            pixels[outputOffSet + x] = UINT_Constant | (grey * INT_Constant);
                        }
                    }
                    inputOffSet += width;
                }
                blue = Marshal.AllocCoTaskMem(pixels.Length);
                Marshal.Copy(pixels, 0, blue, pixels.Length); // fails here: attempted to read or write protected memory
                bmp = new Bitmap(width, height, width, PixelFormat.Format24bppRgb, blue);
            }
            catch (Exception)
            {
                throw;
            }
            finally
            {
                if (blue != IntPtr.Zero)
                {
                    Marshal.FreeHGlobal(blue);
                    blue = IntPtr.Zero;
                }
            }
            return bmp;
        }

    Any help would be appreciated.
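
    An editorial note, hedged as an observation rather than a verified fix: in the failing snippet the unmanaged buffer is allocated as pixels.Length bytes while Marshal.Copy writes pixels.Length eight-byte longs, which by itself would overrun the buffer (and AllocCoTaskMem is paired with FreeHGlobal rather than FreeCoTaskMem). The Y-plane-to-greyscale idea itself is simple; a minimal sketch of it, shown in Java purely for illustration:

        import java.awt.image.BufferedImage;
        import java.awt.image.WritableRaster;

        // Build a greyscale image straight from the Y plane of YUV420p data:
        // the first width*height bytes are the luma samples, one per pixel.
        public static BufferedImage yPlaneToGreyscale(byte[] yuvData, int width, int height) {
            BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
            WritableRaster raster = img.getRaster();
            int i = 0;
            for (int y = 0; y < height; y++) {
                for (int x = 0; x < width; x++) {
                    raster.setSample(x, y, 0, yuvData[i++] & 0xff);  // band 0 = the single grey band
                }
            }
            return img;
        }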

    Read the article

  • Python Image Library, Close method

    - by DNN
    Hello, I have been using PIL for the first time today. I want to resize an image (if it is larger than 800x600) and also create a thumbnail of it. I can do either of these tasks separately, but not together in one method (I am writing a custom save method in the Django admin). I get a "cannot identify image file" error on the line image = Image.open(self.photo) that follows the comment "#if image size is greater than 800 x 600 then resize image". I thought this might be because the image is already open, but if I remove the line I still get problems. So I thought I could try closing the image after creating the thumbnail and then reopening it, but I couldn't find a close method. This is my code:

        def save(self):
            #create thumbnail
            Thumb_Size = (75, 75)
            image = Image.open(self.photo)
            if image.mode not in ('L', 'RGB'):
                image = image.convert('RGB')
            image.thumbnail(Thumb_Size, Image.ANTIALIAS)
            temp_handle = StringIO()
            image.save(temp_handle, 'jpeg')
            temp_handle.seek(0)
            suf = SimpleUploadedFile(os.path.split(self.photo.name)[-1], temp_handle.read(), content_type='image/jpg')
            self.thumbnail.save(suf.name+'.jpg', suf, save=False)
            #if image size is greater than 800 x 600 then resize image.
            image = Image.open(self.photo)
            if image.size[0] > 800:
                if image.size[1] > 600:
                    Max_Size = (800, 600)
                    if image.mode not in ('L', 'RGB'):
                        image = image.convert('RGB')
                    image.thumbnail(Max_Size, Image.ANTIALIAS)
                    temp_handle = StringIO()
                    image.save(temp_handle, 'jpeg')
                    temp_handle.seek(0)
                    suf = SimpleUploadedFile(os.path.split(self.photo.name)[-1], temp_handle.read(), content_type='image/jpg')
                    self.photo.save(suf.name+'.jpg', suf, save=False)
            #enter info to database
            super(Photo, self).save()

    Read the article

  • UIScrollView strange zoom behavior when content is a UIView subclass

    - by sigsegv
    Hi, I'm experiencing the following: I created a UIView subclass with a CATiledLayer as its backing layer by overriding the layerClass method. The layer properties (delegate, tileSize, etc.) are set in the subclass's initWithFrame: method.

        +(Class)layerClass {
            return [CATiledLayer class];
        }

        -(id)initWithFrame:(CGRect)frame {
            if(self = [super initWithFrame:frame]) {
                renderer = [[MFPDFRenderer alloc] init];
                tiledLayer = (CATiledLayer *)[self layer];
                [tiledLayer setFrame:frame];
                [tiledLayer setLevelsOfDetail:2];
                [tiledLayer setLevelsOfDetailBias:3];
                [tiledLayer setTileSize:CGSizeMake(512, 512)];
                [tiledLayer setDelegate:renderer];
            }
            return self;
        }

    Then I add an instance of this class as the content of a UIScrollView, set the UIScrollView properties, and implement the required delegate methods. Everything works fine, but when zooming, the scroll view keeps repositioning itself on its center. It's hardly noticeable when zooming at the center of the content, but unbearable otherwise. The same scroll view works fine when I use any other view as the (zoomable) content, such as a UIImageView, or even a plain UIView with a CATiledLayer added as a sublayer with the same properties and delegate as in the subclass implementation. When I check the layer bounds and frame in the delegate's drawLayer:inContext: method, I get the following results as the zoom increases.

    UIView with CATiledLayer as sublayer:
        2010-04-03 21:05:33.499 Renderer[89293:4903] Layer: (0.000, 0.000) 320.000 x 460.000
        2010-04-03 21:05:33.500 Renderer[89293:4903] Bounds: (0.000, 0.000) 320.000 x 460.000
        2010-04-03 21:05:33.529 Renderer[89293:4903] Layer: (0.000, 0.000) 320.000 x 460.000
        2010-04-03 21:05:33.534 Renderer[89293:4903] Bounds: (0.000, 0.000) 320.000 x 460.000

    Custom subclass:
        2010-04-03 21:04:15.969 Renderer[88957:4903] Layer: (0.000, 0.000) 657.910 x 945.746
        2010-04-03 21:04:15.970 Renderer[88957:4903] Bounds: (0.000, 0.000) 320.000 x 460.000
        2010-04-03 21:04:17.428 Renderer[88957:4903] Layer: (-0.000, 0.000) 766.964 x 1102.510
        2010-04-03 21:04:17.429 Renderer[88957:4903] Bounds: (0.000, 0.000) 320.000 x 460.000
        [...]
        2010-04-03 21:19:10.388 Renderer[92573:4903] Layer: (-0.000, 0.000) 905.680 x 1301.916
        2010-04-03 21:19:10.388 Renderer[92573:4903] Bounds: (0.000, 0.000) 320.000 x 460.000

    I suppose that's the culprit, or at least another symptom. I can add that I get the same erratic behavior if my subclass is built over a standard CALayer with the same renderer. Any suggestion will be appreciated!

    Read the article

  • What image processing library should I use?

    - by Swippen
    I have been reading "What is the best image manipulation library?", have tried a few libraries, and am now looking for input on which is best for our needs. I will start by describing our current setup and problems. We have a system that needs to resize and crop a large number of images from big original images. We handle 50,000+ images every day on 2 powerful servers. Today we use ImageGlue from WebSupergoo, but we don't like it at all: it is slow and hangs the service now and then (that's in another unanswered Stack Overflow question). We have a threaded Windows service that uses the Microsoft ThreadPool to resize as much as possible on the 8-core machines. I have tried AForge and it went very well: it was much faster and never crashed. But I had quality problems on a few images. That is of course due to the algorithms I used, so it can be tweaked, but I want to look around to see whether that's the right way to go. So: it needs to be C#/.NET and run in a Windows service (since we won't change the rest of the service, only the image handling). It needs to handle a threaded environment well. It needs to be fast, since today it is too slow. But we also want good quality and small file sizes, since the images are later displayed on a web page with many visitors. So we have strong demands on getting good quality at a fast pace, and secondarily on keeping file sizes low, even if that can be adjusted somewhat with compression. Any comments or suggestions on which library to use?

    Read the article

  • How to scale JPEG images with a non-standard sampling factor in Java?

    - by HRJ
    I am using Java AWT to scale JPEG images in order to create thumbnails. The code works fine when the image has a normal sampling factor (2x2, 1x1, 1x1). However, an image with a sampling factor of (1x1, 1x1, 1x1) causes problems when scaled: the colors get corrupted, though the features are still recognizable (see the original and the thumbnail). The code I am using is roughly equivalent to:

        static BufferedImage awtScaleImage(BufferedImage image, int maxSize, int hint) {
            // We use AWT Image scaling because it has far superior quality
            // compared to JAI scaling. It also performs better (speed)!
            System.out.println("AWT Scaling image to: " + maxSize);
            int w = image.getWidth();
            int h = image.getHeight();
            float scaleFactor = 1.0f;
            if (w > h)
                scaleFactor = ((float) maxSize / (float) w);
            else
                scaleFactor = ((float) maxSize / (float) h);
            w = (int)(w * scaleFactor);
            h = (int)(h * scaleFactor);
            // since this code can run both headless and in a graphics context
            // we will just create a standard rgb image here and take the
            // performance hit in a non-compatible image format if any
            Image i = image.getScaledInstance(w, h, hint);
            image = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
            Graphics2D g = image.createGraphics();
            g.drawImage(i, null, null);
            g.dispose();
            i.flush();
            return image;
        }

    (Code courtesy of this page.) Is there a better way to do this? Here's a test image with a sampling factor of (1x1, 1x1, 1x1).
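
    One hedged idea (an assumption, not a verified fix for this particular decoder issue): copy the freshly decoded JPEG into a plain TYPE_INT_RGB image before calling getScaledInstance(), so any unusual color model the decoder produced for the 1x1, 1x1, 1x1 file is normalised first. The helper name is illustrative.

        import java.awt.Graphics2D;
        import java.awt.image.BufferedImage;

        // Normalise a freshly decoded JPEG to a plain RGB image before scaling it.
        static BufferedImage toPlainRgb(BufferedImage decoded) {
            BufferedImage rgb = new BufferedImage(decoded.getWidth(), decoded.getHeight(),
                                                  BufferedImage.TYPE_INT_RGB);
            Graphics2D g = rgb.createGraphics();
            g.drawImage(decoded, 0, 0, null);   // colour conversion happens in this draw
            g.dispose();
            return rgb;
        }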

    Read the article

  • Boot ISO image from GRUB4DOS on EFI machines

    - by Vladimir Tikhomirov
    I failed to load an ISO image (non-distro) from GRUB2 from a USB stick, but found a way to boot GRUB4DOS and then load the image from there. However, it doesn't always work, and the question is WHY it doesn't. Environment and loading process: we need an EFI machine, a USB stick, the bootable ISO, GRUB2 and GRUB4DOS (the last three on the USB stick). Boot chain: USB - EFI loader - GRUB2 - GRUB4DOS - ISO image. Configuration files: to boot GRUB4DOS I use this from grub.cfg:

        menuentry "image.iso" {
            linux /syslinux/grub.exe --config-file="/menu.lst"
        }

    My menu.lst is here:

        timeout 20
        default 0

        title image.iso
        find --set-root --ignore-floppies --ignore-cd //image.iso
        map --heads=0 --sectors-per-track=0 //image.iso (hd32)
        map --hook
        chainloader (hd32)

    This works perfectly on legacy (BIOS) machines. However, when I get to GRUB4DOS, I don't see the menu with image.iso; I see only the GRUB command line. That means my menu.lst didn't load. Why is that? Background and ideas: I suspect that GRUB4DOS doesn't recognize my USB stick as a device. I tried the find command and got (hd0,0), (hd0,1), (hd0,2), (rd). When I set root to any of these devices I don't see a FAT file system, as I did on legacy machines. The root device is (hd0,0), which has an NTFS file system and should be the Windows partition. EFI machines support only GRUB2, so I can't boot GRUB4DOS straight away. Please don't suggest anything like the following, because my image doesn't have a kernel (imagine loading HDAT2 or Hiren's Boot CD, for example):

        menuentry "Blancco Blancco5.iso" {
            set isofile="/image.iso"
            loopback loop $isofile
            set root=(loop)
            linux /isolinux/vmlinuz isofile=$isofile splash quiet
            initrd /isolinux/initrd
        }

    Read the article

  • Fastest light-weight image viewer over forwarded x11 session (linux)

    - by Matthew
    I have a slow network connection over which I'm forwarding X11 via ssh. I want to view images on the remote host (Ubuntu) quickly and efficiently. I'm looking for an image viewer that takes the viewer window's resolution into account and downsizes the image before sending it over the network, instead of sending the full-size image. The images I want to view will be around 5 MB, and I only need to browse through tiny thumbnails of them to identify the one I'm looking for. It is not necessary to see more than one image at a time. Highest speed over the slow network connection is the priority. Thanks! Matthew EDIT: It's possible that, the way X11 forwarding works, only the image at display resolution is transferred anyway. If that's true, please confirm; the question then still stands as to which image viewer is fastest over a slow connection.

    Read the article

  • Can an image based backup potentially corrupt data?

    - by ServerAdminGuy45
    I'm considering doing image-based backups (Acronis) of production Windows systems during off-peak hours. I'm just wondering whether they can potentially lead to application data corruption. Let's say I have a database that is getting hit pretty hard. Could the beginning blocks of the database be committed to the image, then data be inserted into the DB (changing the beginning blocks of the DB on the server but not in the image), and then the later blocks be committed to the image, leading to an inconsistent state? Here's an example of what I'm trying to illustrate. Imagine a simple data structure that has a number at the front representing the number of "a"s in a file; the number and the data are delimited by a "-". For example: 4-ajjjjjjjajuuuuuuuaoffffa If an "a" is changed, the data structure resets the number at the beginning of the file, for example: 3-ajjjjjjjajuuuuuuuboffffa I assume Acronis writes block by block, it being a straight-up image, so here is what I'm envisioning happening with my database:

        t0: 4-ajjjjjjjajuuuuuuuaoffffa
            ^ pointer is here
        t1: 4-ajjjjjjjajuuuuuuuaoffffa
            ^ pointer is here (all data before this is committed to the image)
        t2: 4-ajjjjjjjajuuuuuuuboffffa
            ^ pointer is here (all data before this is committed to the image)
            (notice how one of the "a"s changed to a "b"; there are only 3 "a"s now)
        t3: 4-ajjjjjjjajuuuuuuuboffffa
            ^ pointer is here (all data before this is committed to the image)

    The final image now reads "4-ajjjjjjjajuuuuuuuboffffa", while the true data is "3-ajjjjjjjajuuuuuuuboffffa", leading to a corrupt "database". Basically, changes further down the chain of blocks can be reflected in the image while the important header and synchronization blocks have already been committed, so the out-of-date header information doesn't accurately reflect the structure of the blocks that follow.

    Read the article

  • data maintenance/migrations in image-based systems

    - by User
    Web applications usually have a database; the code and the database work hand in hand. That is why frameworks like Ruby on Rails and Django create migration files. There are also servers written in Self or Smalltalk or other image-based systems that face the same problem: code is not written on the server but in the programmer's own image. How do these systems deal with a changing schema and changing classes/prototypes? Which way do the migrations go? Example: what is the process for a new attribute going from the programmer's idea to the server code and all existing objects? I found chapter 8 of the GemStone/S manual, but it does not really talk about the process of shipping code to the server.

    Read the article
