Search Results

Search found 25888 results on 1036 pages for 'image map'.


  • Need to produce an animated texture of Water where each image tiles in all directions

    - by ProfVersaggi
    I need to produce a 2D animated texture of water for a game, in which each image tiles in all directions, much like the output of the Caustics Generator, but with the power and flexibility of something like Blender. The final result from the Caustics Generator is 32 images animated such that, when the full 32 images are played in a loop, they seamlessly loop forever. They not only loop in time, but each image also tiles in all directions. This is nice, but it comes in only one flavor, so to speak. I'd like to accomplish the same thing with a Blender-type tool, and I have actually gotten to the point where I generate the X images, but they neither tile in all directions nor animate properly. I've tried Blender texture animations using offsets, but with only limited success. Does anyone know how (or of a tool) to animate textures such that they tile in all four directions? Many thanks in advance.
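    One way to get both properties is to build the frames from noise that is periodic in every dimension: wrap the noise lattice modulo its size in x, y and t, and the result tiles in space and loops in time. Below is a minimal, untested Java sketch of that idea (single-octave value noise written out as grayscale PNG frames; the class name, sizes and output filenames are invented for illustration, and a believable water texture would layer several octaves and add some distortion):

    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.io.IOException;
    import java.util.Random;
    import javax.imageio.ImageIO;

    // Generates FRAMES grayscale images that tile seamlessly in x and y and loop in time.
    // Technique: value noise on a 3D lattice whose indices are wrapped modulo the lattice
    // period in every dimension, so the result is periodic in space *and* time.
    public class TileableWaterNoise {
        static final int SIZE = 256;      // output resolution (pixels)
        static final int FRAMES = 32;     // frames in the loop
        static final int LATTICE = 8;     // lattice cells across one tile (spatial frequency)
        static final int TLATTICE = 8;    // lattice cells across one loop (temporal frequency)

        public static void main(String[] args) throws IOException {
            double[][][] lattice = new double[LATTICE][LATTICE][TLATTICE];
            Random rng = new Random(42);
            for (int x = 0; x < LATTICE; x++)
                for (int y = 0; y < LATTICE; y++)
                    for (int t = 0; t < TLATTICE; t++)
                        lattice[x][y][t] = rng.nextDouble();

            for (int f = 0; f < FRAMES; f++) {
                BufferedImage img = new BufferedImage(SIZE, SIZE, BufferedImage.TYPE_INT_RGB);
                double tt = (double) f / FRAMES * TLATTICE;        // time in lattice units
                for (int py = 0; py < SIZE; py++) {
                    for (int px = 0; px < SIZE; px++) {
                        double xx = (double) px / SIZE * LATTICE;  // position in lattice units
                        double yy = (double) py / SIZE * LATTICE;
                        int v = (int) (noise(lattice, xx, yy, tt) * 255);
                        img.setRGB(px, py, (v << 16) | (v << 8) | v);
                    }
                }
                ImageIO.write(img, "png", new File(String.format("water_%02d.png", f)));
            }
        }

        // Trilinear interpolation of the wrapped lattice with a smoothstep fade.
        static double noise(double[][][] l, double x, double y, double t) {
            int x0 = (int) x, y0 = (int) y, t0 = (int) t;
            double fx = fade(x - x0), fy = fade(y - y0), ft = fade(t - t0);
            double v = 0;
            for (int i = 0; i <= 1; i++)
                for (int j = 0; j <= 1; j++)
                    for (int k = 0; k <= 1; k++) {
                        double w = (i == 0 ? 1 - fx : fx) * (j == 0 ? 1 - fy : fy) * (k == 0 ? 1 - ft : ft);
                        v += w * l[(x0 + i) % LATTICE][(y0 + j) % LATTICE][(t0 + k) % TLATTICE];
                    }
            return v;
        }

        static double fade(double u) { return u * u * (3 - 2 * u); }
    }

    Playing the resulting 32 PNGs in a loop should then wrap seamlessly in both space and time.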

    Read the article

  • Need a good quality bitmap rotation algorithm for Android

    - by Lumis
    I am creating a kaleidoscopic effect on an Android tablet. I am using the code below to rotate a slice of an image, but as you can see in the image, rotating a bitmap by 60 degrees distorts it quite a lot (red rectangles) – it smudges the image! I have set the dither and anti-alias flags but it does not help much. I think it is just not a very sophisticated bitmap rotation algorithm.

    canvas.save();
    canvas.rotate(angle, screenW/2, screenH/2);
    canvas.drawBitmap(picSlice, screenW/2, screenH/2, pOutput);
    canvas.restore();

    So I wonder if you can help me find a better way to rotate a bitmap. It does not have to be fast, because I intend to use the high-quality rotation only when I want to save the screen to the SD card; I would redraw the screen in memory before saving. Do you know any comprehensible or replicable bitmap rotation algorithm that I could implement, or use as a library? Or any other suggestion? EDIT: The answers below made me wonder whether the Android OS had a bilinear or bicubic interpolation option, and after some searching I found that it does have its own version of it, called FilterBitmap. After applying it to my paint with pOutput.setFilterBitmap(true); I get a much better result.
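    For comparison, here is a small, untested sketch of doing the rotation off-screen with a Matrix and bilinear filtering, which is roughly what setFilterBitmap(true) enables on the draw path; the class and method names are invented for illustration:

    import android.graphics.Bitmap;
    import android.graphics.Canvas;
    import android.graphics.Matrix;
    import android.graphics.Paint;

    // Off-screen, filtered rotation: rotate the slice with a Matrix and bilinear
    // filtering instead of rotating the Canvas directly.
    public final class SliceRotator {

        // Returns a new bitmap containing 'slice' rotated by 'degrees' around its centre.
        public static Bitmap rotateFiltered(Bitmap slice, float degrees) {
            Matrix m = new Matrix();
            m.postRotate(degrees, slice.getWidth() / 2f, slice.getHeight() / 2f);
            // The final 'true' requests bilinear filtering of the transformed pixels.
            return Bitmap.createBitmap(slice, 0, 0, slice.getWidth(), slice.getHeight(), m, true);
        }

        // Draws the rotated slice with a filtering, anti-aliased Paint.
        public static void drawRotated(Canvas canvas, Bitmap slice, float degrees, float x, float y) {
            Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG | Paint.FILTER_BITMAP_FLAG);
            canvas.drawBitmap(rotateFiltered(slice, degrees), x, y, paint);
        }
    }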

    Read the article

  • How does the new google maps make buildings and cityscapes 3D?

    - by Aerovistae
    Anyone who's seen the new Google Maps has no doubt taken note of the incredible amount of three-dimensional detail in select American cities such as Boston, New York, Chicago, and San Francisco. They've even modeled the trees, the bridges and some of the boats in the harbor! Minor architectural details are present. It's crazy. Looking at it up close, I've found there's a rectangular area around each of those cities, and anything within it is 3Dified, but it cuts off hard and fast at the edge, even if that falls in the middle of a building. The edge of the rectangle is where the 3D stops. This leads me to think it's being done algorithmically (which would make sense, given the scale of the project and how many trees, buildings and details there are), and yet I can't imagine how that's possible. How could an algorithm model all these things without extensive data on their shapes and contours? How could it model the individual wires of a bridge, or the statues in a park? It must be done by hand, and yet how could it be, for so much detail? Does anyone have any insight into this?

    Read the article

  • Using a Higher Precision (than 8-bit unsigned integer) Buffered Image for Heightmaps in Java

    - by pl12
    I am generating a heightmap for every quad in my quadtree in OpenCL. The way I was creating the image is as follows:

    DataBufferInt dataBuffer = (DataBufferInt) img.getRaster().getDataBuffer();
    int data[] = dataBuffer.getData();  // img is a BufferedImage
    inputImageMem = CL.clCreateImage2D(
            context, CL_MEM_READ_WRITE | CL_MEM_USE_HOST_PTR,
            new cl_image_format[]{imageFormat}, size, size,
            size * Sizeof.cl_uint, Pointer.to(data), null);

    This works OK, but the major issue is that as the quads get smaller and smaller, the 8-bit format of the buffered image starts to cause intolerable "stepping" issues as seen below: I was wondering if there was an alternate way I could go about doing this? Thanks for the time.
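    If the OpenCL kernel can work with floats (read_imagef/write_imagef), one alternative is to bypass the 8-bit BufferedImage entirely and back the CL image with a float buffer. A rough, untested JOCL-style sketch; the single-channel (CL_R) height value per texel and all names here are assumptions for illustration:

    import static org.jocl.CL.*;

    import org.jocl.Pointer;
    import org.jocl.Sizeof;
    import org.jocl.cl_context;
    import org.jocl.cl_image_format;
    import org.jocl.cl_mem;

    public final class FloatHeightmapImage {

        // Creates a size x size single-channel float image and uploads 'heights' into it.
        // Assumes the kernel reads/writes float values rather than 8-bit channels.
        public static cl_mem create(cl_context context, int size, float[] heights) {
            cl_image_format fmt = new cl_image_format();
            fmt.image_channel_order = CL_R;          // one channel per texel (the height)
            fmt.image_channel_data_type = CL_FLOAT;  // 32-bit float, no 8-bit quantisation

            return clCreateImage2D(
                    context,
                    CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,  // copy the host data at creation time
                    new cl_image_format[]{fmt},
                    size, size,
                    (long) size * Sizeof.cl_float,             // row pitch in bytes
                    Pointer.to(heights),
                    null);
        }
    }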

    Read the article

  • Tooltip Support For ASP.NET Image Controls v2010 vol 1

    A Tooltip property has been added to the DevExpress ASP.NET image controls! Starting with DXperience v2010.1, all DevExpress ASP.NET image controls, like the ASPxImage, have a new property called Tooltip. What's a tooltip? Wikipedia defines it as: "The tooltip is a common graphical user interface element. It is used in conjunction with a cursor, usually a mouse pointer. The user hovers the cursor over an item, without clicking it, and a tooltip may appear: a small 'hover box' with information..."

    Read the article

  • Best container to store this information

    - by user2368481
    I'm trying to write a smallish system as a homework exercise. I don't have much experience with containers and I'm not sure what the best way of storing this data would be: an Incident Record object holds instances of Incident Report. Report is a superclass with 3 subclasses: Police, Fire and Medical. Record must record which of these types apply, and which response teams are to be involved. So Record has to keep track of the Report objects, the type of each report (Police, Fire or Medical) and the teams involved in the reports. I was initially thinking of an array, but that wouldn't be sufficient to hold all the info. Record<>---------Report<|----------Police, Fire or Medical
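    For illustration only, here is one possible shape for this, sketched in Java (the class and method names are invented, not taken from the assignment): let each Report subclass report its own type, and give the Record a list of reports plus a set of teams.

    import java.util.ArrayList;
    import java.util.EnumSet;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    enum ReportType { POLICE, FIRE, MEDICAL }

    // Superclass: common behaviour of all incident reports.
    abstract class Report {
        abstract ReportType getType();
    }

    class PoliceReport extends Report  { ReportType getType() { return ReportType.POLICE; } }
    class FireReport extends Report    { ReportType getType() { return ReportType.FIRE; } }
    class MedicalReport extends Report { ReportType getType() { return ReportType.MEDICAL; } }

    // The incident record owns its reports and the response teams involved.
    class IncidentRecord {
        private final List<Report> reports = new ArrayList<>();
        private final Set<String> teams = new HashSet<>();

        void addReport(Report r)  { reports.add(r); }
        void addTeam(String team) { teams.add(team); }

        // The report types that apply fall out of the reports themselves.
        EnumSet<ReportType> typesInvolved() {
            EnumSet<ReportType> types = EnumSet.noneOf(ReportType.class);
            for (Report r : reports) types.add(r.getType());
            return types;
        }

        List<Report> getReports() { return reports; }
        Set<String> getTeams()    { return teams; }
    }

    A plain List plus a Set is usually enough here; a bare array mainly becomes awkward because the number of reports and teams is not fixed in advance.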

    Read the article

  • How to display consistent background image

    - by Tofu_Craving_Redish_BlueDragon
    Drawing a large background is relatively slow in PyGame. In order to avoid drawing the background every frame, you could draw it once, then do nothing. However, if something is drawn onto the surface and keeps moving, you will need to redraw the background in order to "erase" the color pixels left by the moving object; otherwise, you will have "traces" of the moving object. I have a moving object in my PyGame program. However, I do not want to "clear the color buffer" by redrawing the whole background image, because redrawing the background image every frame is slow. My solution: I will "clear" only the required portions of the "buffer" (where the "traces" of the moving object are left) by redrawing those portions of the background. Is there any better way to have a consistent background?
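    That dirty-rectangle approach is the usual answer, and the idea is API-agnostic: remember where the moving object was last frame and blit back only that patch of the pre-rendered background. A minimal, untested sketch of the bookkeeping, written in Java2D purely to make the idea concrete (names invented; the PyGame equivalent is a screen.blit of the same sub-rectangle):

    import java.awt.Graphics2D;
    import java.awt.Rectangle;
    import java.awt.image.BufferedImage;

    // Dirty-rectangle background restore: instead of redrawing the whole background,
    // copy back only the patch the moving object covered last frame.
    final class DirtyRectRenderer {
        private final BufferedImage background;   // pre-rendered once
        private Rectangle lastSpriteBounds;       // where the sprite was drawn last frame

        DirtyRectRenderer(BufferedImage background) {
            this.background = background;
        }

        void drawFrame(Graphics2D screen, BufferedImage sprite, int x, int y) {
            // 1. Erase the sprite's previous position by blitting just that patch of background.
            if (lastSpriteBounds != null) {
                Rectangle r = lastSpriteBounds;
                screen.drawImage(background,
                        r.x, r.y, r.x + r.width, r.y + r.height,   // destination patch
                        r.x, r.y, r.x + r.width, r.y + r.height,   // same patch of the source
                        null);
            }
            // 2. Draw the sprite at its new position and remember where it went.
            screen.drawImage(sprite, x, y, null);
            lastSpriteBounds = new Rectangle(x, y, sprite.getWidth(), sprite.getHeight());
        }
    }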

    Read the article

  • When and How is an image cached for an ASPX with ContentType = image/jpeg?

    - by Aamir Hasan
    In ASP.NET you can cache your page. You can vary the output cache by the following:
    - The query string in an initial request (HTTP GET).
    - Control values passed on postback (HTTP POST values).
    - The HTTP headers passed with a request.
    - The major version number of the browser making the request.
    - A custom string in the page. In that case, you create custom code in the Global.asax file to specify the page's caching behavior.
    Link: http://msdn2.microsoft.com/en-us/library/xadzbzd6(VS.80).aspx
    You can set output caching for your GetImage.aspx so that you don't have to re-query the database on every image request, but you must use VaryByParam, so that you have a cached version for every arrangement of parameters. Set the output cache for your page like this, at the top of the ASPX page:
    <%@ OutputCache Duration="600" VaryByParam="ID,Height,Width" %>
    The VaryByParam attribute allows you to vary the cached output depending on the query string. Adding this will cache your images for 600 seconds, so that if the image is requested within this period, the cached version is returned.

    Read the article

  • Size of an image imported with FreeImage

    - by KaiserJohaan
    I'm having a bit of a brainfart and I can't quite grasp what I'm doing wrong. It's quite simple: I am importing an image with FreeImage (http://freeimage.sourceforge.net/), which has a method FreeImage_GetBits that returns a pointer to the first byte of the image data. I then try to load all the data into memory using (bitsPerPixel / 8) * pixelWidth * pixelHeight, like this:

    uint32_t bitsPerPixel = FreeImage_GetBPP(bitmap);      // resolves to 24
    uint32_t widthInPixels = FreeImage_GetWidth(bitmap);   // resolves to 1024
    uint32_t heightInPixels = FreeImage_GetHeight(bitmap); // resolves to 1024

    // container is a std::vector<uint8_t>
    pkgMaterial.mTextureData.insert(pkgMaterial.mTextureData.begin(),
                                    FreeImage_GetBits(bitmap),
                                    FreeImage_GetBits(bitmap) + ((bitsPerPixel / 8) * widthInPixels * heightInPixels));

    I have a JPG which is 31 kilobytes in size on disc. Yet when I load it using the above formula, I see the vector is then filled with 3145728 bytes, which is approx 3145 kilobytes. What am I doing wrong?

    Read the article

  • Split Texture2D Across Line

    - by Simie
    Background: I'm using a texture as a damage map, represented by an array of floats, for displaying damage effects on a space ship in a process. An upside of this is being able to 'slice' ships in half, using an input damage map with a line down the middle, as shown in the diagram below.

    Question: However, I'm encountering problems separating this split ship into two different entities. I would like to be able to split the image down the 'damage line', in a process similar to this: I don't have much idea where to start detecting the red line above. Any advice would be welcome.
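    Since the damage map is already an array of floats, one possible way to get the two halves as separate entities is to treat sufficiently damaged texels as the cut and run a connected-component labelling pass over the remaining texels; each surviving component becomes one entity. A rough, untested sketch in Java (the threshold, names and flat row-major layout are assumptions for illustration; the actual project appears to be C#/XNA, where the same loop translates directly):

    import java.util.ArrayDeque;

    // Labels connected regions of "intact" texels in a damage map.
    // damage[y * width + x] >= threshold is treated as destroyed (the cut line).
    final class DamageMapSplitter {

        static int[] label(float[] damage, int width, int height, float threshold) {
            int[] labels = new int[width * height];   // 0 = unlabelled or destroyed
            int next = 1;
            ArrayDeque<Integer> stack = new ArrayDeque<>();

            for (int start = 0; start < damage.length; start++) {
                if (labels[start] != 0 || damage[start] >= threshold) continue;
                labels[start] = next;
                stack.push(start);
                while (!stack.isEmpty()) {            // flood-fill one component
                    int i = stack.pop();
                    int x = i % width, y = i / width;
                    int[][] nbrs = { {x - 1, y}, {x + 1, y}, {x, y - 1}, {x, y + 1} };
                    for (int[] n : nbrs) {
                        if (n[0] < 0 || n[0] >= width || n[1] < 0 || n[1] >= height) continue;
                        int j = n[1] * width + n[0];
                        if (labels[j] == 0 && damage[j] < threshold) {
                            labels[j] = next;
                            stack.push(j);
                        }
                    }
                }
                next++;
            }
            return labels;   // two labels => the ship has split into two pieces
        }
    }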

    Read the article

  • Free or cheap Media(image) hosting for web application

    - by Asish Bhattarai
    I am building a small social web application. I am developing it with Rails. I want some cheap or free image hosting that I can use as the image repository for my website, because I can't afford Amazon cloud storage or something like that. Can I use Flickr, ImageShack or a service like that? Do they allow me to store images for my website? Suppose I want to use pictures for a blog post: I would extract the pictures from their API and show them on my blog post. Is that possible? Sorry, I'm a beginner.

    Read the article

  • LWJGL: Camera distance from image plane?

    - by Rogem
    Let me paste some code before I ask the question...

    public static void createWindow(int[] args) {
        try {
            Display.setFullscreen(false);
            DisplayMode d[] = Display.getAvailableDisplayModes();
            for (int i = 0; i < d.length; i++) {
                if (d[i].getWidth() == args[0]
                        && d[i].getHeight() == args[1]
                        && d[i].getBitsPerPixel() == 32) {
                    displayMode = d[i];
                    break;
                }
            }
            Display.setDisplayMode(displayMode);
            Display.create();
        } catch (Exception e) {
            e.printStackTrace();
            System.exit(0);
        }
    }

    public static void initGL() {
        GL11.glEnable(GL11.GL_TEXTURE_2D);
        GL11.glShadeModel(GL11.GL_SMOOTH);
        GL11.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
        GL11.glClearDepth(1.0);
        GL11.glEnable(GL11.GL_DEPTH_TEST);
        GL11.glDepthFunc(GL11.GL_LEQUAL);
        GL11.glMatrixMode(GL11.GL_PROJECTION);
        GL11.glLoadIdentity();
        GLU.gluPerspective(45.0f,
                (float) displayMode.getWidth() / (float) displayMode.getHeight(),
                0.1f, 100.0f);
        GL11.glMatrixMode(GL11.GL_MODELVIEW);
        GL11.glHint(GL11.GL_PERSPECTIVE_CORRECTION_HINT, GL11.GL_NICEST);
    }

    So, with the camera and screen setup out of the way, I can now ask the actual question: How do I know what the camera's distance from the image plane is? I would also like to know the angle between the image plane's center normal and a line drawn from the middle of one of the edges to the camera position. This will then be used to draw a vector from the camera's position through the player's click coordinates, to determine the world coordinates they clicked (or could have clicked). Also, when I set the camera coordinates, do I set the coordinates of the camera or the coordinates of the image plane? Thank you for your help.

    EDIT: So, I managed to solve how to calculate the distance of the camera... Here's the relevant code...

    private static float getScreenFOV(int dim) {
        if (dim == 0) {
            float dist = (float) Math.tan((Math.PI / 2 - Math.toRadians(FOV_Y)) / 2) * 0.5f;
            float FOV_X = 2 * (float) Math.atan(getScreenRatio() * 0.5f / dist);
            return FOV_X;
        } else if (dim == 1) {
            return FOV_Y;
        }
        return 0;
    }

    FOV_Y is the field of view that one defines in gluPerspective (float fovy in the javadoc). This seems to be (and would logically be) for the height of the screen. Now I just need to figure out how to calculate that vector.
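    On the last point (turning a click into a world-space vector), one common route in LWJGL 2 is to let GLU.gluUnProject do the matrix work instead of deriving the angles by hand: unproject the mouse position once at the near plane and once at the far plane, and the two points define the pick ray from the camera through the click. A rough, untested sketch (names invented; it assumes the matrices set up in initGL are current when it runs):

    import java.nio.FloatBuffer;
    import java.nio.IntBuffer;

    import org.lwjgl.BufferUtils;
    import org.lwjgl.opengl.GL11;
    import org.lwjgl.util.glu.GLU;

    final class MousePicking {

        // winX/winY are window coordinates with the origin at the bottom-left
        // (org.lwjgl.input.Mouse already reports them that way);
        // winZ = 0 unprojects onto the near plane, winZ = 1 onto the far plane.
        static float[] unproject(float winX, float winY, float winZ) {
            FloatBuffer modelview = BufferUtils.createFloatBuffer(16);
            FloatBuffer projection = BufferUtils.createFloatBuffer(16);
            IntBuffer viewport = BufferUtils.createIntBuffer(16);
            FloatBuffer result = BufferUtils.createFloatBuffer(3);

            GL11.glGetFloat(GL11.GL_MODELVIEW_MATRIX, modelview);
            GL11.glGetFloat(GL11.GL_PROJECTION_MATRIX, projection);
            GL11.glGetInteger(GL11.GL_VIEWPORT, viewport);

            GLU.gluUnProject(winX, winY, winZ, modelview, projection, viewport, result);
            return new float[] { result.get(0), result.get(1), result.get(2) };
        }

        // The pick ray runs from the unprojected near-plane point towards the far-plane point.
        static float[][] pickRay(float winX, float winY) {
            return new float[][] { unproject(winX, winY, 0.0f), unproject(winX, winY, 1.0f) };
        }
    }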

    Read the article

  • Which should I use for mouse over tooltip for image (alt, longdesc, title)

    - by Virtual Jasper
    Currently, my webpage images use the alt attribute only. Users complain that IE8 does not show the "tooltip" bubble when they mouse over an image. Microsoft's What's New in Internet Explorer 8 page says: "The alt attribute is no longer displayed as the image tooltip when the browser is running in IE8 Standards mode. Instead, the target of the longDesc attribute is used as the tooltip if present; otherwise, the title is displayed. The alt attribute is still used as the Microsoft Active Accessibility name, and the title attribute is used as the fallback name only if alt is not present." I have also found that many say title should be used. Which should I use to meet the industry standard: alt, longdesc or title?

    Read the article

  • What's the best way to fill or paint around an image in Java?

    - by wsorenson
    I have a set of images that I'm combining into a single image mosaic using JAI's MosaicDescriptor. Most of the images are the same size, but some are smaller. I'd like to fill in the missing space with white; by default, the MosaicDescriptor uses black. I tried setting the double[] background parameter to { 255 }, and that fills in the missing space with white, but it also introduces some discoloration in some of the other full-sized images. I'm open to any method; there are probably many ways to do this, but the documentation is difficult to navigate. I am considering converting any smaller images to a BufferedImage and calling setRGB() on the empty areas (though I am unsure what to use for the scansize on the batch setRGB() method). My question is essentially: What is the best way to take an image (in JAI, or BufferedImage) and fill / add padding to a certain size? Is there a way to accomplish this in the MosaicDescriptor call without side effects? For reference, here is the code that creates the mosaic:

    for (int i = 0; i < images.length; i++) {
        images[i] = JPEGDescriptor.create(new ByteArraySeekableStream(images[i]), null);
        if (i != 0) {
            images[i] = TranslateDescriptor.create(image, (float) (width * i), null, null, null);
        }
    }
    RenderedOp finalImage = MosaicDescriptor.create(ops, MosaicDescriptor.MOSAIC_TYPE_OVERLAY,
            null, null, null, null, null);
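    On the BufferedImage route mentioned above, padding a small image out to the mosaic cell size with white can be done with a Graphics2D fill-and-draw instead of per-pixel setRGB() calls, which also sidesteps the scansize question. A small, untested sketch (names invented for illustration):

    import java.awt.Color;
    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;

    final class ImagePadding {

        // Returns an image of exactly targetWidth x targetHeight: 'source' copied into the
        // top-left corner and the remaining area filled with white.
        static BufferedImage padToSize(BufferedImage source, int targetWidth, int targetHeight) {
            BufferedImage padded =
                    new BufferedImage(targetWidth, targetHeight, BufferedImage.TYPE_INT_RGB);
            Graphics2D g = padded.createGraphics();
            try {
                g.setColor(Color.WHITE);
                g.fillRect(0, 0, targetWidth, targetHeight);  // white background
                g.drawImage(source, 0, 0, null);              // original pixels on top
            } finally {
                g.dispose();
            }
            return padded;
        }
    }

    (For the batch setRGB() overload, scansize is simply the scanline stride of the int[] being written, i.e. the width of the region when the array is tightly packed.)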

    Read the article

  • What file format can represent an uncompressed raster image at 48 or 64 bits per pixel?

    - by finnw
    I am creating screenshots under Windows and using the LockBits function from GDI+ to extract the pixel data, which will then be written to a file. To maximise performance I am also:
    - Using the same PixelFormat as the source bitmap, to avoid format conversion
    - Using the ImageLockModeUserInputBuf flag to extract the pixel data into a pre-allocated buffer
    This pre-allocated buffer (pointed to by BitmapData::Scan0) is part of a memory-mapped file (to avoid copying the pixel data again). I will also be writing the code that reads the file, so I can use (or invent) any format I wish. However, I would prefer to use a well-known format that existing programs (ideally web browsers) are able to read, because that means I can visually confirm that the images are correct before writing the code for the other program (that reads the image). I have implemented this successfully for the PixelFormat32bppRGB format, which matches the format of a 32bpp BMP file, so if I extract the pixel data directly into the memory-mapped BMP file and prefix it with a BMP header, I get a valid BMP image file that can be opened in Paint and most browsers. Unfortunately, one of the machines I am testing on returns pixels in PixelFormat64bppPARGB format (presumably this is influenced by the video adapter driver) and there is no corresponding BMP pixel format for this. Converting to a 16, 24 or 32bpp BMP format slows the program down considerably (as well as being lossy), so I am looking for a file format that can use this pixel format without conversion, so I can extract directly into the memory-mapped file as I have done with the 32bpp format. What raster image file formats support 48bpp and/or 64bpp?

    Read the article

  • Load Images in WPF application

    - by LLS
    I am not familiar with WPF, and I just feel quite confused. I am trying to create a little computer game, and there are elements I want to display dynamically. I use the Image class and add the images to a Canvas, but I'm not sure whether that's good practice; adding controls to a canvas seems a little weird. I'm also a little concerned about performance, because I may need many images. The real problem is that I can't load the images from the image files. I saw an example in a textbook like this (XAML):

    <Image Panel.ZIndex="0" Margin="0,0,0,0" Name="image1">
        <Image.Source>
            <BitmapImage UriSource="Bell.gif" />
        </Image.Source>
    </Image>

    where Bell.gif is added to the project. I tried to copy this in code to create an image:

    new Image { Source = new BitmapImage(new Uri("Blockade.bmp")) }

    But I got an invalid URI exception. After some searching on the Internet, I found that loading resources dynamically seems to be difficult. Can I load the files added to the project dynamically? If I use an absolute path then it's OK, but I can't expect every computer to put the files in the same location. Can I use a relative path? I tried

    new Image { Source = new BitmapImage(new Uri(@"Pictures\Blank Land.bmp", UriKind.Relative)) }

    But it doesn't work. (The image is blank.) Thanks.

    Read the article

  • Silverlight - how to render image from non-visible data bound user control?

    - by MagicMax
    Hello! I have the following situation: I'd like to build a timeline control. So I have a UserControl with an ItemsControl on it (every row represents a person). The ItemsControl contains another ItemsControl as its ItemsControl.ItemTemplate; it shows, e.g., the events for that person arranged by the event's date. So it looks like a kind of grid with dates as column headers and, e.g., people as row headers.

    ........................|.2010.01.01.....2010.01.02.....2010.01.03
    Adam Smith....|......[some event#1].....[some event#2]......
    John Dow.......|...[some event#3].....[some event#4].........

    I can have a lot of people (ItemsControl #1: 100-200 items) and a lot of events on a given day (1-30 events per person per day). The problem is that when the user scrolls ItemsControl #1/#2 it is far too slow, because a lot of elements have to be rendered at once (I have, e.g., several text boxes and other elements in the description of each event). Question #1: how can I improve this? Maybe somebody knows a better way to build such a user control? I have to mention that I'm using a custom virtual panel, based on a custom virtual panel implementation found somewhere on the internet... Question #2: I'd like to make an image with the help of WriteableBitmap, render the data-bound control to the image, and show the image instead of a lot of elements. The problem is that I'm trying to render an invisible data-bound control (created in code-behind) and it has ActualWidth/ActualHeight equal to zero (so nothing is rendered). How can I solve this? Thank you very much for your help!

    Read the article

  • Recommendations for a free GIS library supporting raster images

    - by gspr
    Hi. I'm quite new to the whole field of GIS, and I'm about to make a small program that essentially overlays GPS tracks on a map, together with some other annotations. I primarily need to allow scanned (thus raster) maps (although it would be nice to support proper map formats and something like OpenStreetMap in the long run). My first exploratory program uses Qt's graphics view framework and overlays the GPS points by simply projecting them onto the tangent plane to the WGS84 ellipsoid at a calibration point. This gives half-decent accuracy, and actually looks good. But then I started wondering. To get the accuracy I need (i.e. remove the "half" in "half-decent"), I have to correct for the map projection. While the math is not a problem in itself, supporting many map projections feels like needless work. Even though a few projections would probably be enough, I started thinking about just using something like the PROJ.4 library to do my projections. But then, why not take it all the way? Perhaps I might as well use a full-blown map library such as Mapnik (edit: Quantum GIS also looks very nice), which will probably pay off when I start to want even fancier annotations or some other symptom of featuritis. So, finally, to the question: What would you do? Would you use a full-blown map library? If so, which one? Again, it's important that it supports using (and zooming in and out of) raster maps and has pretty overlay features. Or would you just keep it simple and go with Qt's own graphics view framework together with something like PROJ.4 to handle the map projections? I appreciate any feedback! Some technicalities: I'm writing in C++ with a Qt-based GUI, so I'd prefer something that plays relatively nicely with those. Also, the library must be free software (as in FOSS) and at least decently cross-platform (GNU/Linux, Windows and Mac, at least). Edit: OK, it seems I didn't do quite enough research before asking this question. Both Quantum GIS and Mapnik seem very well suited to my purpose. The former especially so, since it's based on Qt.

    Read the article
