Search Results

Search found 25888 results on 1036 pages for 'image map'.

Page 37/1036 | < Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >

  • libgdx - collision detection with tiled map java

    - by user2875021
    Currently, I am working on a 2D RPG similar to Final Fantasy 1-4. I can load a tiled map and the sprite can walk freely on it. However, I would like to create walls that stop the sprite from walking through them. I created three tile layers (Background, Collision, Overhead) and one Collision object layer with rectangles only. How do I handle collisions with the object layer in the tiled map? Do I have to create every single rectangle that is in the object layer in code, with Rectangle rectangle = new Rectangle() and rectangle.set(x, y, width, height)? Thank you very much in advance. Any help is greatly appreciated!
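
    A minimal libGDX sketch of the alternative: read the rectangles straight out of the Tiled object layer instead of building each one by hand. The layer name "Collision" and the map path are assumptions, not taken from the question.

        // Sketch only: pull every rectangle from a Tiled object layer named "Collision".
        import com.badlogic.gdx.maps.MapObjects;
        import com.badlogic.gdx.maps.objects.RectangleMapObject;
        import com.badlogic.gdx.maps.tiled.TiledMap;
        import com.badlogic.gdx.maps.tiled.TmxMapLoader;
        import com.badlogic.gdx.math.Rectangle;
        import com.badlogic.gdx.utils.Array;

        public class CollisionLoader {
            public static Array<Rectangle> loadWalls(String mapPath) {
                TiledMap map = new TmxMapLoader().load(mapPath);        // e.g. "maps/level1.tmx" (assumed path)
                MapObjects objects = map.getLayers().get("Collision").getObjects();
                Array<Rectangle> walls = new Array<Rectangle>();
                for (RectangleMapObject obj : objects.getByType(RectangleMapObject.class)) {
                    walls.add(obj.getRectangle());                      // rectangle already carries x, y, width, height
                }
                return walls;
            }

            // Reject any movement step whose bounding box overlaps one of the walls.
            public static boolean blocked(Rectangle playerBounds, Array<Rectangle> walls) {
                for (Rectangle wall : walls) {
                    if (playerBounds.overlaps(wall)) return true;
                }
                return false;
            }
        }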

    Read the article

  • How can I make an image opaque to some level?

    - by Nikki
    Hello everyone. I would like to know whether I can make an image partially opaque when it is set in an ImageView or used as the background of a RelativeLayout. How can I set the image and its opacity dynamically through an ImageView or a RelativeLayout, or is there another option? I also want the same image to rotate in both directions and to zoom in and out to any level. Thanks in advance.
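
    A minimal Android sketch of one way to do this. The layout, view IDs and drawable names are hypothetical; the alpha, rotation and scale calls are standard View/ImageView/Drawable APIs (setImageAlpha needs API 16+).

        import android.app.Activity;
        import android.os.Bundle;
        import android.widget.ImageView;
        import android.widget.RelativeLayout;

        public class OpacityDemoActivity extends Activity {
            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.demo);                                    // hypothetical layout

                ImageView imageView = (ImageView) findViewById(R.id.my_image);    // hypothetical IDs
                imageView.setImageResource(R.drawable.my_picture);                // set the image dynamically
                imageView.setImageAlpha(128);           // 0 = fully transparent, 255 = fully opaque

                RelativeLayout layout = (RelativeLayout) findViewById(R.id.my_layout);
                layout.setBackgroundResource(R.drawable.my_picture);
                layout.getBackground().setAlpha(128);   // same idea for a background drawable

                imageView.setRotation(90f);             // negative values rotate the other way
                imageView.setScaleX(2f);                // > 1 zooms in, < 1 zooms out
                imageView.setScaleY(2f);
            }
        }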

    Read the article

  • Why do I get completely different results when saving a BitmapSource to bmp, jpeg, and png in WPF

    - by DanM
    I wrote a little utility class that saves BitmapSource objects to image files. The image files can be either bmp, jpeg, or png. Here is the code:

        public class BitmapProcessor {
            public void SaveAsBmp(BitmapSource bitmapSource, string path) {
                Save(bitmapSource, path, new BmpBitmapEncoder());
            }

            public void SaveAsJpg(BitmapSource bitmapSource, string path) {
                Save(bitmapSource, path, new JpegBitmapEncoder());
            }

            public void SaveAsPng(BitmapSource bitmapSource, string path) {
                Save(bitmapSource, path, new PngBitmapEncoder());
            }

            private void Save(BitmapSource bitmapSource, string path, BitmapEncoder encoder) {
                using (var stream = new FileStream(path, FileMode.Create)) {
                    encoder.Frames.Add(BitmapFrame.Create(bitmapSource));
                    encoder.Save(stream);
                }
            }
        }

    Each of the three Save methods works, but I get unexpected results with bmp and jpeg. Png is the only format that produces an exact reproduction of what I see if I show the BitmapSource on screen using a WPF Image control. Here are the results:

        BMP  - too dark
        JPEG - too saturated
        PNG  - correct

    Why am I getting completely different results for different file types? I should note that the BitmapSource in my example uses an alpha value of 0.1 (which is why it appears very desaturated), but it should be possible to show the resulting colors in any image format. I know that if I take a screen capture using something like HyperSnap, it will look correct regardless of what file type I save to. Here's a HyperSnap screen capture saved as a bmp: as you can see, this isn't a problem, so there's definitely something strange about WPF's image encoders. Do I have a setting wrong? Am I missing something?

    Read the article

  • Gradient Mapping in .NET

    - by Otaku
    Is there a way in .NET to perform the same technique Photoshop uses for Gradient Mapping (Image - Adjustments - Gradient Map [Gradient Editor])? Any ideas, links, code, etc. would be welcome.
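
    The gradient-map technique itself is language-agnostic: build a lookup table along the chosen gradient and index it with each pixel's luminance. A rough sketch of that idea, written in Java purely for illustration (the endpoint colors are assumptions; this is not a .NET answer):

        import java.awt.Color;
        import java.awt.image.BufferedImage;

        public class GradientMap {
            // Map each pixel's luminance onto a 256-entry gradient between two colors.
            public static BufferedImage apply(BufferedImage src, Color dark, Color light) {
                int[] lut = new int[256];
                for (int i = 0; i < 256; i++) {
                    float t = i / 255f;                                   // position along the gradient
                    int r = Math.round(dark.getRed()   + t * (light.getRed()   - dark.getRed()));
                    int g = Math.round(dark.getGreen() + t * (light.getGreen() - dark.getGreen()));
                    int b = Math.round(dark.getBlue()  + t * (light.getBlue()  - dark.getBlue()));
                    lut[i] = (r << 16) | (g << 8) | b;
                }
                BufferedImage out = new BufferedImage(src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_RGB);
                for (int y = 0; y < src.getHeight(); y++) {
                    for (int x = 0; x < src.getWidth(); x++) {
                        int rgb = src.getRGB(x, y);
                        int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
                        int lum = Math.round(0.299f * r + 0.587f * g + 0.114f * b);   // Rec. 601 luma
                        out.setRGB(x, y, lut[lum]);
                    }
                }
                return out;
            }
        }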

    Read the article

  • What is a good way of Enhancing contrast of color images?

    - by erjik
    I split a color image into its 3 channels and applied contrast enhancement to each channel, then merged them back together. I like the resulting image, but it has different colors: black objects became yellow, and so on. EDIT: The algorithm I used calculates the 5th percentile and the 95th percentile as the min and max values, and then stretches the image values so that min and max become 0 and 255. If there is a better approach, please tell me.
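
    One commonly suggested alternative is to stretch only the brightness channel so each pixel's hue is preserved. A minimal sketch of that idea in Java (percentile estimation is left out for brevity; low and high would be the 5th/95th-percentile brightness values from the question, in the range 0..1):

        import java.awt.Color;
        import java.awt.image.BufferedImage;

        public class LuminanceStretch {
            public static BufferedImage stretch(BufferedImage src, float low, float high) {
                BufferedImage out = new BufferedImage(src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_RGB);
                for (int y = 0; y < src.getHeight(); y++) {
                    for (int x = 0; x < src.getWidth(); x++) {
                        int rgb = src.getRGB(x, y);
                        float[] hsb = Color.RGBtoHSB((rgb >> 16) & 0xFF, (rgb >> 8) & 0xFF, rgb & 0xFF, null);
                        float b = (hsb[2] - low) / (high - low);     // stretch brightness only
                        hsb[2] = Math.max(0f, Math.min(1f, b));      // clamp to [0, 1]
                        out.setRGB(x, y, Color.HSBtoRGB(hsb[0], hsb[1], hsb[2]));
                    }
                }
                return out;
            }
        }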

    Read the article

  • How should images be stored when multiple sizes are needed?

    - by Josh Curren
    What is the best way to store images? Currently, when an image is uploaded, I resize it to 3 different sizes (a thumbnail, a normal size, and a large size). I save a description of the image and its format in a database, and use the id number from the database as the image name. Each image size has its own directory. Should I be storing the images in the database? Should I store only the largest size and generate the thumbnail as needed? Or do you have any other ideas?
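
    For the "generate the thumbnail as needed" option, here is a minimal sketch (written in Java; the target width and caching strategy are assumptions, not part of the question) of scaling the stored full-size file down on demand:

        import java.awt.Graphics2D;
        import java.awt.RenderingHints;
        import java.awt.image.BufferedImage;
        import java.io.File;
        import java.io.IOException;
        import javax.imageio.ImageIO;

        public class Thumbnailer {
            public static BufferedImage thumbnail(File largeImage, int maxWidth) throws IOException {
                BufferedImage src = ImageIO.read(largeImage);
                int w = maxWidth;
                int h = src.getHeight() * maxWidth / src.getWidth();    // keep the aspect ratio
                BufferedImage thumb = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
                Graphics2D g = thumb.createGraphics();
                g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                                   RenderingHints.VALUE_INTERPOLATION_BILINEAR);
                g.drawImage(src, 0, 0, w, h, null);
                g.dispose();
                return thumb;    // cache this (or write it to disk) instead of storing three copies up front
            }
        }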

    Read the article

  • jQuery update Google Map

    - by Beardy
    I am trying to update a Google Map (v3) with jQuery. At the moment it loads the map, but when .preview is clicked the map is resized to the given width and height and then goes grey.

        $('.preview').click(function(){
            var width = $('#width').val();
            var height = $('#height').val();
            $('#map').css({ 'width':width, 'height':height });
            var mapElement = document.getElementById('map');
            var updateOptions = {
                zoom: 6
            }
            var map = new google.maps.Map(mapElement, updateOptions);
        });

    Read the article

  • Images won't load completely if they are large

    - by Fahim Parkar
    I have created a web application using JSF 2.0 and MySQL. I am storing images in the DB as MEDIUMBLOB. When I try to load an image, I am able to see it. However, if the image is big (1 MB or more), I see only half or 3/4 of the image in the browser. Any idea how to overcome this issue? Do I need to set any variable in JSF or MySQL? I know I should have saved the images on disk instead of in the DB, but this was a client requirement: the client wants to back up the data and provide it to someone else, and does not want to back up the DB and the images separately.
    Edit 1: Do I need to set any variables on MySQL, like query_cache?
    Edit 2: When I download the same image and use the code below, it works perfectly.

        <h:graphicImage value="images/myImage4.png" width="50%" />

    Edit 3: The code is as below.

        <h:graphicImage value="DisplayImage?mainID=drawing" />

    DisplayImage.java:

        String imgLen = rs1.getString(1);
        int len = imgLen.length();
        byte[] rb = new byte[len];
        InputStream readImg = rs1.getBinaryStream(1);
        InputStream inputStream = readImg;
        int index = readImg.read(rb, 0, len);
        response.reset();
        response.setHeader("Content-Length", String.valueOf(len));
        response.setHeader("Content-disposition", "inline;filename=/file.png");
        response.setContentType("image/png");
        response.getOutputStream().write(rb, 0, len);
        response.getOutputStream().flush();

    When I print len I get the value len=1548432.
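
    A hedged sketch of the usual suspect here (not the asker's code): a single InputStream.read() call is not guaranteed to fill the whole buffer, so large blobs can come out truncated; copying the stream in a loop inside the same DisplayImage servlet (reusing its rs1 and response variables) avoids that.

        java.io.InputStream in = rs1.getBinaryStream(1);
        java.io.OutputStream out = response.getOutputStream();
        byte[] buffer = new byte[8192];
        int n;
        while ((n = in.read(buffer)) != -1) {   // keep reading until the blob is exhausted
            out.write(buffer, 0, n);
        }
        out.flush();
        in.close();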

    Read the article

  • JavaScript or PHP based WYSIWYG vector based image editor

    - by Jeroen Pluimers
    For a PHP based site of a client, I'm looking for a vector based image editor that allows:
        - end user creation of vectored images consisting of objects
        - upload of bitmap images to be used as objects inside the vector image
        - adding text objects to add to the vector image, and change properties (font name, font style, font size) of the text objects
        - preferably layering or grouping of objects inside the vector image
        - integration with a PHP based site (so a PHP or JavaScript library is preferred)
        - storing the vector image in SVG, EPS or PDF
    Both commercial and FOSS solutions are OK. Any idea where to find such a library? --jeroen

    Read the article

  • OpenGL/GLSL: Render to cube map?

    - by BobDole
    I'm trying to figure out how to render my scene to a cube map. I've been stuck on this for a bit and figured I would ask you guys for some help. I'm new to OpenGL and this is the first time I'm using an FBO. I currently have a working example of using a cubemap bmp file, and the samplerCube sampler type in the fragment shader is attached to GL_TEXTURE1. I'm not changing the shader code at all. I'm only changing the fact that I won't be calling the function that was loading the cubemap bmp file, and I am trying to use the code below to render to a cubemap instead. You can see below that I'm also attaching the texture again to GL_TEXTURE1. This is so that when I set the uniform glUniform1i(getUniLoc(myProg, "Cubemap"), 1); it can be accessed in my fragment shader via uniform samplerCube Cubemap. I'm calling the function below like so: cubeMapTexture = renderToCubeMap(150, GL_RGBA8, GL_RGBA, GL_UNSIGNED_BYTE); Now, I realize in the draw loop below that I'm not changing the view direction to look down the +x, -x, +y, -y, +z, -z axes. I really just wanted to see something working first before implementing that. I figured I should at least see something on my object the way the code is now, but I'm not seeing anything, just straight black. I've made my background white; still the object is black. I've removed lighting and coloring to just sample the cubemap texture, and it's still black. I'm thinking the problem might be the format types when setting my texture, which is GL_RGB8, GL_RGBA, but I've also tried GL_RGBA, GL_RGBA and GL_RGB, GL_RGB. I thought this would be standard since we are rendering to a texture attached to a framebuffer, but I've seen different examples that use different enum values. I've also tried binding the cube map texture in every draw call in which I want to use the cube map: glBindTexture(GL_TEXTURE_CUBE_MAP, cubeMapTexture); Also, I'm not creating a depth buffer for the FBO, which I saw in most examples, because I only want the color buffer for my cube map. I actually added one to see if that was the problem and still got the same results; I could have fudged that up when I tried. Any help that can point me in the right direction would be appreciated.
        GLuint renderToCubeMap(int size, GLenum InternalFormat, GLenum Format, GLenum Type)
        {
            // color cube map
            GLuint textureObject;
            int face;
            GLenum status;

            //glEnable(GL_TEXTURE_2D);
            glActiveTexture(GL_TEXTURE1);
            glGenTextures(1, &textureObject);
            glBindTexture(GL_TEXTURE_CUBE_MAP, textureObject);
            glTexParameterf(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameterf(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameterf(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
            glTexParameterf(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
            glTexParameterf(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
            for (face = 0; face < 6; face++) {
                glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, InternalFormat, size, size, 0, Format, Type, NULL);
            }

            // framebuffer object
            glGenFramebuffers(1, &fbo);
            glBindFramebuffer(GL_FRAMEBUFFER, fbo);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_CUBE_MAP_POSITIVE_X, textureObject, 0);
            status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
            printf("%d\"\n", status);
            printf("%d\n", GL_FRAMEBUFFER_COMPLETE);

            glViewport(0, 0, size, size);
            for (face = 1; face < 6; face++) {
                drawSpheres();
                glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, textureObject, 0);
            }

            //Bind 0, which means render to back buffer, as a result, fb is unbound
            glBindFramebuffer(GL_FRAMEBUFFER, 0);
            return textureObject;
        }

    Read the article

  • convert pixels into image

    - by Zeta Op
    What I am trying to do is convert each pixel from a video camera into an image. To explain it better, imagine a 3D model: the pixels would be the polygons, and I want to convert each polygon into an image. What I have so far is this:

        import processing.video.*;
        PImage hoja;
        Capture cam;
        boolean uno, dos, tres, cuatro;
        import ddf.minim.*;
        Minim minim;
        AudioPlayer audio;
        float set;

        void setup() {
          //audio
          minim = new Minim(this);
          // audio = minim.loadFile("audio");
          // audio.loop();
          //
          uno=false;
          dos=false;
          tres=false;
          cuatro=true;
          size(640, 480);
          hoja=loadImage("hoja.gif");
          cam = new Capture(this, width, height);
          cam.start();
        }

        void draw() {
          if (cam.available() == true) {
            cam.read();
            if (uno==true) {
              filtroUno();
              image(cam, 0, 0, 640, 480);
            }
            if (dos==true) {
              filtroDos();
            }
            if(tres==true){
              filtroTres();
            }
            if(cuatro==true){
              filtroCuatro();
              image(cam, set, 0,640,480);
            }
          }
          // The following does the same, and is faster when just drawing the image
          // without any additional resizing, transformations, or tint.
          //set(0, 0, cam);
        }

        void filtroUno() {
          cam.loadPixels();
          hoja.loadPixels();
          for (int i=0;i<cam.pixels.length;i++) {
            if (brightness(cam.pixels[i])>110) {
              cam.pixels[i]=color(0, 255, 255);
            } else {
              cam.pixels[i]=color(255, 0, 0);
            }
          }
          for (int i=0;i<cam.width;i+=10) {
            for (int j=0;j<cam.height;j+=10) {
              int loc=i+(j*cam.width);
              if (cam.pixels[loc]==color(255, 0, 0)) {
                for (int x=i;x<i+10;x++) {
                  for (int y=j;y<j+10;y++) {
                    // println("bla");
                    int locDos=i+(j*cam.width);
                    cam.pixels[locDos]=hoja.get(x, y);
                  }
                }
              }
            }
          }
          cam.updatePixels();
        }

    The problem is that each pixel is creating a matrix, so it is not recreating what I want it to do. I had an earlier version of filtroUno, but it wasn't displaying correctly either; this was it:

        void filtroUno() {
          cam.loadPixels();
          hoja.loadPixels();
          for (int i=0;i<cam.pixels.length;i++) {
            if (brightness(cam.pixels[i])>110) {
              cam.pixels[i]=color(0, 255, 255);
            } else {
              cam.pixels[i]=color(255, 0, 0);
            }
          }
          for (int i=0;i<cam.width;i+=10) {
            for (int j=0;j<cam.height;j+=10) {
              int loc=i+j*hoja.width*10;
              if (cam.pixels[loc]==color(255, 0, 0)) {
                for (int x=i;x<i+10;x++) {
                  for (int y=j;y<j+10;y++) {
                    // println("bla");
                    int locDos=x+y*hoja.height*10;
                    cam.pixels[locDos]=hoja.get(x, y);
                  }
                }
              }
            }
          }
          cam.updatePixels();
        }

    I hope you can help me, thanks. Note: each red pixel should become the gif image; the image size is 10x10.
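
    A hedged sketch of the block-copy index math only, in the sketch's own Processing code (not a full fix, and it assumes, as the question states, that the gif is 10x10): the destination index has to move with both the block origin (i, j) and the offset inside the block, using the camera frame's width, while the gif is sampled by the offset alone. The variables i, j, cam and hoja are the ones from filtroUno above.

        for (int x = 0; x < 10; x++) {
          for (int y = 0; y < 10; y++) {
            // pixel (i + x, j + y) of the camera frame, as a 1D index into cam.pixels
            int locDos = (i + x) + (j + y) * cam.width;
            cam.pixels[locDos] = hoja.get(x, y);   // sample the 10x10 gif at (x, y)
          }
        }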

    Read the article

  • How to upload Image on Android?

    - by Mattiah85
    I have to upload an image from my SD card to a PHP server. I have read a lot of articles and topics, but I still have some problems. First I used this code:

        HttpURLConnection connection = null;
        DataOutputStream outputStream = null;
        //DataInputStream inputStream = null;
        String urlServer = hostName+"Upload";
        String lineEnd = "\r\n";
        String twoHyphens = "--";
        String boundary = "*****";
        String serverResponseMessage;
        //int serverResponseCode;
        int bytesRead, bytesAvailable, bufferSize;
        byte[] buffer;
        int maxBufferSize = 1*1024*1024;

        try {
            showLog("uploading file: " + file);
            FileInputStream fileInputStream = new FileInputStream(new File(pictureFileDir+"/"+file) );
            URL url = new URL(urlServer);
            connection = (HttpURLConnection) url.openConnection();

            // Allow Inputs & Outputs.
            connection.setDoInput(true);
            connection.setDoOutput(true);
            connection.setUseCaches(false);

            // Set HTTP method to POST.
            connection.setRequestMethod("POST");
            connection.setRequestProperty("Connection", "Keep-Alive");
            connection.setRequestProperty("Content-Type", "multipart/form-data;boundary="+boundary);

            outputStream = new DataOutputStream( connection.getOutputStream() );
            outputStream.writeBytes(twoHyphens + boundary + lineEnd);
            outputStream.writeBytes("Content-Disposition: form-data; name=\"uploaded_file\";filename=\"" + file +"\"" + lineEnd);
            outputStream.writeBytes(lineEnd);

            bytesAvailable = fileInputStream.available();
            bufferSize = Math.min(bytesAvailable, maxBufferSize);
            buffer = new byte[bufferSize];

            // Read file
            bytesRead = fileInputStream.read(buffer, 0, bufferSize);
            while (bytesRead > 0) {
                outputStream.write(buffer, 0, bufferSize);
                bytesAvailable = fileInputStream.available();
                bufferSize = Math.min(bytesAvailable, maxBufferSize);
                bytesRead = fileInputStream.read(buffer, 0, bufferSize);
            }

            outputStream.writeBytes(lineEnd);
            outputStream.writeBytes(twoHyphens + boundary + twoHyphens + lineEnd);

            // Responses from the server (code and message)
            //serverResponseCode = connection.getResponseCode();
            serverResponseMessage = connection.getResponseMessage();
            showLog("server response: " + serverResponseMessage);

            fileInputStream.close();
            outputStream.flush();
            outputStream.close();
        } catch (Exception ex) {
            ex.printStackTrace();
        }

    but the server responded 200/OK and no file was on the destination server. Then I read about Multipart:

        try {
            HttpParams params = new BasicHttpParams();
            params.setParameter(CoreProtocolPNames.PROTOCOL_VERSION, HttpVersion.HTTP_1_1);
            DefaultHttpClient mHttpClient = new DefaultHttpClient(params);
            File image = new File(pictureFileDir + "/" + filename);
            HttpPost httppost = new HttpPost(hostName+"Upload");
            MultipartEntity multipartEntity = new MultipartEntity(HttpMultipartMode.BROWSER_COMPATIBLE);
            multipartEntity.addPart("Image", new FileBody(image));
            httppost.setEntity(multipartEntity);
            mHttpClient.execute(httppost, new PhotoUploadResponseHandler());
        } catch (Exception e) {
            e.printStackTrace();
        }

    but then I only get this in LogCat and nothing else:

        06-04 06:50:52.277: D/dalvikvm(1584): DexOpt: couldn't find static field Lorg/apache/http/message/BasicHeaderValueParser;.INSTANCE
        06-04 06:50:52.277: W/dalvikvm(1584): VFY: unable to resolve static field 6688 (INSTANCE) in Lorg/apache/http/message/BasicHeaderValueParser;
        06-04 06:50:52.277: D/dalvikvm(1584): VFY: replacing opcode 0x62 at 0x001b

    Server-side script:

        $target_path = "uploads";
        $target_path = $target_path . basename( $_FILES['Image']);
        if(move_uploaded_file($_FILES['tmp_name'], $file_path)) {
            echo "success";
        } else{
            echo "fail";
        }

    Why? What is the simplest way to upload an image?

    Read the article

  • Swap image with jquery and show zoom image

    - by Neil Bradley
    Hi there, on my site I have 4 thumbnail product images that, when clicked, swap the main image. This part is working okay. However, on the main image I'm also trying to use the jQZoom script. The zoom script works for the most part, except that the zoomed image always displays the zoom of the first image, rather than the one selected. This can be seen in action here: http://www.wearecapital.com/productdetails-new.asp?id=6626 I was wondering if someone might be able to suggest a solution? My code for the page is here:

        <%
        if session("qstring") = "" then session("qstring") = "&amp;rf=latest"
        maxProducts = 6
        prodID = request("id")
        if prodID = "" or not isnumeric(prodid) then
            response.Redirect("listproducts.asp?err=1" & session("qstring"))
        else
            prodId = cint(prodId)
        end if

        SQL = "Select * from products,subcategories,labels where subcat_id = prod_subcategory and label_id = prod_label and prod_id = " & prodID
        set conn = server.CreateObject("ADODB.connection")
        conn.Open(Application("DATABASE"))
        set rs = conn.Execute(SQL)
        if rs.eof then
            ' product is not valid
            name = "Error - product id " & prodID & " is not available"
        else
            image1 = rs.fields("prod_image1")
            image1Desc = rs.fields("prod_image1Desc")
            icon = rs.fields("prod_icon")
            subcat = rs.fields("prod_subcategory")
            image2 = rs.fields("prod_image2")
            image2Desc = rs.fields("prod_image2Desc")
            image3 = rs.fields("prod_image3")
            image3Desc = rs.fields("prod_image3Desc")
            image4 = rs.fields("prod_image4")
            image4Desc = rs.fields("prod_image4Desc")
            zoomimg = rs.Fields("prod_zoomimg")
            zoomimg2 = rs.Fields("prod_zoomimg2")
            zoomimg3 = rs.Fields("prod_zoomimg3")
            zoomimg4 = rs.Fields("prod_zoomimg4")
            thumb1 = rs.fields("prod_preview1").value
            thumb2 = rs.fields("prod_preview2").value
            thumb3 = rs.fields("prod_preview3").value
            thumb4 = rs.fields("prod_preview4").value
        end if
        set rs = nothing
        conn.Close
        set conn = nothing
        %>
        <!-- #include virtual="/includes/head-product.asp" -->
        <body id="detail">
        <!-- #include virtual="/includes/header.asp" -->
        <script type="text/javascript" language="javascript">
        function switchImg(imgName) {
            var ImgX = document.getElementById("mainimg");
            ImgX.src="/images/products/" + imgName;
        }
        </script>
        <script type="text/javascript">
        $(document).ready(function(){
            var options = {
                zoomWidth: 466,
                zoomHeight: 260,
                xOffset: 34,
                yOffset: 0,
                title: false,
                position: "right"
                //and MORE OPTIONS
            };
            $(".MYCLASS").jqzoom(options);
        });
        </script>
        <!-- #include virtual="/includes/nav.asp" -->
        <div id="column-left">
            <div id="main-image">
                <% if oldie = false then %><a href="/images/products/<%=zoomimg%>" class="MYCLASS" title="MYTITLE"><img src="/images/products/<%=image1%>" title="IMAGE TITLE" name="mainimg" id="mainimg" style="width:425px; height:638px;" ></a><% end if %>
            </div>
        </div>
        <div id="column-right">
            <div id="altviews">
                <h3 class="altviews">Alternative Views</h3>
                <ul>
                    <%
                    if oldie = false then
                        writeThumb thumb1,image1,zoomimg,image1desc
                        writeThumb thumb2,image2,zoomimg2,image2desc
                        writeThumb thumb3,image3,zoomimg3,image3desc
                        writeThumb thumb4,image4,zoomimg4,image4desc
                    end if
                    %>
                </ul>
            </div>
        </div>
        <!-- #include virtual="/includes/footer-test.asp" -->
        <%
        sub writeThumb(thumbfile, imgfile, zoomfile, thumbdesc)
            response.Write "<li>"
            if thumbfile <> "65/default_preview.jpg" and thumbfile <> "" and not isnull(thumbfile) then
                if imgFile <> "" and not isnull(imgfile) then rimgfile = replace(imgfile,"/","//") else rimgfile = ""
                if thumbdesc <> "" and not isnull(thumbdesc) then rDescription = replace(thumbdesc,"""","&quot;") else rDescription = ""
                response.write "<img src=""/images/products/"& thumbfile &""" style=""cursor: pointer"" border=""0"" style=""width:65px; height:98px;"" title="""& rDescription &""" onclick=""switchImg('" & rimgfile & "')"" />" & vbcrlf
            else
                response.write "<img src=""/images/products/65/default_preview.jpg"" alt="""" />" & vbCrLF
            end if
            response.write "</li>" & vbCrLF
        end sub
        %>

    Read the article

  • Storing an Image with php?

    - by Chris
    I'm trying to store an image on my website so I can use it easily, but I found this PHP code (from here) and I can't quite make sense of it. I'm just starting PHP and I don't quite know what to change and what to keep. I'd greatly appreciate it if you could explain this a little better for me, thanks.

        <?php
        $allowedExts = array("jpg", "jpeg", "gif", "png");
        $extension = end(explode(".", $_FILES["file"]["name"]));
        // Accept only gif/jpeg/png/pjpeg uploads smaller than 20000 bytes with an allowed extension
        if ((($_FILES["file"]["type"] == "image/gif")
          || ($_FILES["file"]["type"] == "image/jpeg")
          || ($_FILES["file"]["type"] == "image/png")
          || ($_FILES["file"]["type"] == "image/pjpeg"))
          && ($_FILES["file"]["size"] < 20000)
          && in_array($extension, $allowedExts)) {
            if ($_FILES["file"]["error"] > 0) {
                // The upload itself failed; report PHP's error code
                echo "Return Code: " . $_FILES["file"]["error"] . "<br />";
            } else {
                echo "Upload: " . $_FILES["file"]["name"] . "<br />";
                echo "Type: " . $_FILES["file"]["type"] . "<br />";
                echo "Size: " . ($_FILES["file"]["size"] / 1024) . " Kb<br />";
                echo "Temp file: " . $_FILES["file"]["tmp_name"] . "<br />";
                if (file_exists("upload/" . $_FILES["file"]["name"])) {
                    echo $_FILES["file"]["name"] . " already exists. ";
                } else {
                    // Move the temporary upload into the upload/ directory under its original name
                    move_uploaded_file($_FILES["file"]["tmp_name"], "upload/" . $_FILES["file"]["name"]);
                    echo "Stored in: " . "upload/" . $_FILES["file"]["name"];
                }
            }
        } else {
            echo "Invalid file";
        }
        ?>

    Read the article

  • Create Advanced Panoramas with Microsoft Image Composite Editor

    - by Matthew Guay
    Do you enjoy making panoramas with your pictures, but want more features than tools like Live Photo Gallery offer?  Here’s how you can create amazing panoramas for free with the Microsoft Image Composite Editor. Yesterday we took a look at creating panoramic photos in Windows Live Photo Gallery. Today we take a look at a free tool from Microsoft that will give you more advanced features to create your own masterpiece. Getting Started Download Microsoft Image Composite Editor from Microsoft Research (link below), and install as normal.  Note that there are separate version for 32 & 64-bit editions of Windows, so make sure to download the correct one for your computer. Once it’s installed, you can proceed to create awesome panoramas and extremely large image combinations with it.  Microsoft Image Composite Editor integrates with Live Photo Gallery, so you can create more advanced panoramic pictures directly.  Select the pictures you want to combine, click Extras in the menu bar, and select Create Image Composite. You can also create a photo stitch directly from Explorer.  Select the pictures you want to combine, right-click, and select Stitch Images… Or, simply launch the Image Composite Editor itself and drag your pictures into its editor.  Either way you start a image composition, the program will automatically analyze and combine your images.  This application is optimized for multiple cores, and we found it much faster than other panorama tools such as Live Photo Gallery. Within seconds, you’ll see your panorama in the top preview pane. From the bottom of the window, you can choose a different camera motion which will change how the program stitches the pictures together.  You can also quickly crop the picture to the size you want, or use Automatic Crop to have the program select the maximum area with a continuous picture.   Here’s how our panorama looked when we switched the Camera Motion to Planar Motion 2. But, the real tweaking comes in when you adjust the panorama’s projection and orientation.  Click the box button at the top to change these settings. The panorama is now overlaid with a grid, and you can drag the corners and edges of the panorama to change its shape. Or, from the Projection button at the top, you can choose different projection modes. Here we’ve chosen Cylinder (Vertical), which entirely removed the warp on the walls in the image.  You can pan around the image, and get the part you find most important in the center.  Click the Apply button on the top when you’re finished making changes, or click Revert if you want to switch to the default view settings. Once you’ve finished your masterpiece, you can export it easily to common photo formats from the Export panel on the bottom.  You can choose to scale the image or set it to a maximum width and height as well.  Click Export to disk to save the photo to your computer, or select Publish to Photosynth to post your panorama online. Alternately, from the File menu you can choose to save the panorama as .spj file.  This preserves all of your settings in the Image Composite Editor so you can edit it more in the future if you wish.   Conclusion Whether you’re trying to capture the inside of a building or a tall tree, the extra tools in Microsoft Image Composite Editor let you make nicer panoramas than you ever thought possible.  We found the final results surprisingly accurate to the real buildings and objects, especially after tweaking the projection modes.  
    This tool can be both fun and useful, so give it a try and let us know what you've found it useful for. Works with 32 & 64-bit versions of XP, Vista, and Windows 7. Link: Download Microsoft Image Composite Editor

    Read the article

  • Image SEO - always repeat main keyword in alt text?

    - by Marcus Edensky
    I'm working on an Easter Island website and I'm currently redesigning my image system. Virtually all my photos are of Easter Island. My question is: should I always include the keywords "Easter Island" so that Google can more easily understand that my photos are from Easter Island, or is it sufficient that the "Easter Island" keywords are in the domain as well as on all other pages of the site? For example, Alt text 1: "Moai statues at volcano Rano Raraku at Easter Island (Rapa Nui)" or Alt text 2: "Moai statues at volcano Rano Raraku". Would example 1 be considered keyword stuffing by Google?

    Read the article

  • Image Line Trace Math Help Hard To Explain

    - by Ozzy
    Hi all, sorry for the confusing title; it's really hard for me to explain what I want. So I created this image :) OK, the two RED dots are points on an image. The distance between them isn't important. What I want to do is, using the coordinates of the two dots, work out the angle of the space between them (as shown by the black line between the red dots). Then, once the angle is found, create two points at the last red dot which cross the angle of the first line. From that, scan a half semicircle and get the coordinates of every pixel of the image that the orange line passes through. I don't know if this makes any sense to you lot, so I drew another picture: as you can see in the second picture, my idea is applied to a line drawn on a black canvas. The two red dots are the starting coordinates; then, at the end of the two dots, a less-than-half semicircle is created. The part that is orange shows the pixels of the image that should be recorded. I have no clue how to start this, so if anyone has any ideas on how I can, or on what I need to do, any help is much appreciated :)
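
    A minimal geometry sketch of the two steps described above (coordinates and radius are assumptions): the angle of the black line comes from atan2, and the pixels under the arc are collected by sweeping half a circle around the second dot at increasing radii.

        import java.awt.Point;
        import java.util.ArrayList;
        import java.util.List;

        public class ArcScan {
            public static List<Point> scan(int x1, int y1, int x2, int y2, int maxRadius) {
                double baseAngle = Math.atan2(y2 - y1, x2 - x1);   // direction of the line between the dots
                List<Point> hits = new ArrayList<Point>();
                for (int r = 1; r <= maxRadius; r++) {
                    // sweep from -90 to +90 degrees around the line's direction: a half circle
                    // (a step of 1/r radians gives roughly one-pixel spacing along the arc)
                    for (double a = baseAngle - Math.PI / 2; a <= baseAngle + Math.PI / 2; a += 1.0 / r) {
                        int px = (int) Math.round(x2 + r * Math.cos(a));
                        int py = (int) Math.round(y2 + r * Math.sin(a));
                        hits.add(new Point(px, py));               // read the image pixel at (px, py) here
                    }
                }
                return hits;
            }
        }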

    Read the article

  • Add a marker to an image in javascript?

    - by Richard
    Hi, does anyone know how I can add a marker to an image (not a map) in JavaScript? Ideally I'd like a handler that behaves much like adding a marker to a map - i.e. onclick causes a marker to be displayed at the point that was clicked, and returns the x/y pixel coordinates of that point. Is this possible? Cheers, Richard

    Read the article

  • send Image from J2ME to SERVLET

    - by Akash
    Hi, I want to send an image from J2ME to a SERVLET. I am able to convert the image into a byte array and send it by HTTP POST. I have coded it as follows. From the mobile:

        conn = (HttpConnection)Connector.open(url,Connector.READ_WRITE,true);
        conn.setRequestMethod(HttpConnection.POST);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        os.write(bytes, 0, bytes.length); //bytes = byte array of image

    At the servlet:

        String line;
        BufferedReader r1 = new BufferedReader(new InputStreamReader(in));
        while ((line = r1.readLine()) != null) {
            System.out.println("line=" + line);
            buf.append(line);
        }
        String s = buf.toString();
        byte[] img_byte = s.getBytes();

    Now the problem I found is that when I send the bytes from the mobile app, some bytes are LOST, namely those whose value is 0A or 0D hex - carriage return (CR) and line feed (LF). It means the POST method or readLine() is not able to accept the 0A and 0D values, and so the lost bytes are the occurrences of 0A and 0D in the image's byte array. Does anyone have any idea how to do this, or which other method to use? Thanks -Akash
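
    A hedged sketch of the usual fix for this symptom (not the asker's code): read the request body as raw bytes from the servlet's InputStream instead of through a BufferedReader, so 0x0A/0x0D bytes are never interpreted as line breaks. It assumes an HttpServletRequest named request, in place of the reader-based loop above.

        java.io.InputStream in = request.getInputStream();
        java.io.ByteArrayOutputStream buf = new java.io.ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        int n;
        while ((n = in.read(chunk)) != -1) {   // copy raw bytes; nothing is dropped or decoded
            buf.write(chunk, 0, n);
        }
        byte[] img_byte = buf.toByteArray();   // the complete image, CR/LF bytes intact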

    Read the article

  • Flex: FileReference and Image unhandled IOErrorEvent

    - by deux11
    The following code shows a button that allows you to select a file (it should be an image) and display it in an image component. When I select an invalid image (e.g. a Word document), I get the following error: "Error #2044: Unhandled IOErrorEvent:. text=Error #2124: Loaded file is an unknown type." I know I can pass a FileFilter to the FileReference:browse call, but that's beside the point. My question is: I want to handle the IOErrorEvent myself, so what event listener am I missing?

        private var file:FileReference = new FileReference();

        private function onBrowse():void {
            file.browse(null);
            file.addEventListener(Event.SELECT, handleFileSelect);
            file.addEventListener(Event.COMPLETE, handleFileComplete);
        }

        private function handleFileSelect(event:Event):void {
            file.load();
        }

        private function handleFileComplete(event:Event):void {
            myImage.source = file.data;
        }

        private function handleImageIoError(evt:IOErrorEvent):void {
            Alert.show("IOErrorEvent");
        }

        <mx:Button click="onBrowse()" label="Browse"/>
        <mx:Image id="myImage" width="100" height="100" ioError="handleImageIoError(event)"/>

    Read the article

  • Alternative to google map api, so that I can use it on an HTTPS/SSL encrypted website.

    - by Zeeshan Rang
    I have a question regarding map APIs. I was using the Google Maps API on my website before, but since I encrypted the site with HTTPS/SSL support, the Google Maps API stopped working. I checked online and realised that only Google's Premier account would allow me to use the maps API over HTTPS, and it costs $10,000 per year. I do not have this kind of money. So, can you suggest any other alternative for having a map API on my website? Anything that could give me driving directions would be fine. Regards, Zeeshan

    Read the article

  • merging UIImagePickerController image with cameraOverlayView

    - by GameDev
    I'm really in need of some help and advice. I've spent the last week on this and have become frustrated because I can't get it to work! Basically, I'm trying to merge two images into one image to display/save. First the user picks an image from the album; it goes to the edit image screen where the user can move and scale the image. On this screen is an overlay image (320x480) for the person to align their eyes in. Once aligned, I want to save this image (edited plus overlay) as one and pass it on to my next screen. It works fine when the image fills the edit/crop box, but when the image is widescreen, with the top and bottom not filling the box, the coordinates of the overlay do not get saved correctly! Here is my code; I've tried various ways of doing this but have failed at every attempt :(

        - (void) imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
        {
            // Access the cropped image from info dictionary
            UIImage *image = [info objectForKey:@"UIImagePickerControllerEditedImage"];

            // Combine image with overlay before saving!!
            image = [self addOverlayToImage:image];
            overlayGraphicView.image = nil;

            // Take the picture image to the post picture view controller
            postPictureView = [[PostPictureViewController alloc] init:image Company:companyName withLink:buyButtonLink];
            [picker pushViewController:postPictureView animated:YES];
            [picker release],picker = nil;
        }

    The problem is that the picked image (originalImage) could be of any height; my overlayImage, however, is always 320x480. It is almost all transparent, with just two eye images in the center which I want to save over the original image's eyes!

        - (UIImage*) addOverlayToImage:(UIImage*)originalImage
        {
            CGRect cgRect = [[UIScreen mainScreen] bounds];
            CGSize size = cgRect.size;
            UIGraphicsBeginImageContext(size);
            [originalImage drawInRect:CGRectMake(0, 0, size.width, size.height)];
            UIImage* overlayImage = [UIImage imageNamed:overlayGraphicName];
            [(UIImage *)overlayImage drawInRect:CGRectMake(0, 0, size.width, size.height)];
            UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
            [finalImage retain];
            UIGraphicsEndImageContext();
            return finalImage;
        }

    I wish there was just an easy way to take a screenshot of whatever is in the edit crop box :( Please, could someone help me with this ASAP, as I need to finish it in 1-2 days' time! Thank you.

    EDIT: I should also mention that with this I get the correct center of the screen and placement of the overlay on my next screen:

        [(UIImage *)overlayImage drawInRect:CGRectMake(0, 0, size.width, size.height)];

    However, I am unable to work out the correct position of the main image, especially as the height is different for every image that is not fullscreen! I tried this to center it in the correct position, but it doesn't work:

        [originalImage drawInRect:CGRectMake(0,(size.height/2 - originalImage.size.height/2), originalImage.size.width, originalImage.size.height)];

    Read the article

  • What container type provides better (average) performance than std::map?

    - by Truncheon
    In the following example a std::map structure is filled with 26 values from A - Z (for key) and 0 - 26 for value. The time taken (on my system) to look up the last entry (10000000 times) is roughly 250 ms for the vector, and 125 ms for the map. (I compiled using release mode, with the O3 option turned on for g++ 4.4.) But if for some odd reason I wanted better performance than the std::map, what data structures and functions would I need to consider using? I apologize if the answer seems obvious to you, but I haven't had much experience in the performance-critical aspects of C++ programming. UPDATE: This example is rather trivial and hides the true complexity of what I'm trying to achieve. My real-world project is a simple scripting language that uses a parser, data tree, and interpreter (instead of a VM stack system). I need to use some kind of data structure (perhaps a map) to store the variable names created by script programmers. These are likely to be pretty randomly named, so I need a lookup method that can quickly find a particular key within a (probably) fairly large list of names.

        #include <ctime>
        #include <map>
        #include <vector>
        #include <iostream>

        struct mystruct
        {
            char key;
            int value;
            mystruct(char k = 0, int v = 0) : key(k), value(v) { }
        };

        int find(const std::vector<mystruct>& ref, char key)
        {
            for (std::vector<mystruct>::const_iterator i = ref.begin(); i != ref.end(); ++i)
                if (i->key == key)
                    return i->value;
            return -1;
        }

        int main()
        {
            std::map<char, int> mymap;
            std::vector<mystruct> myvec;

            for (int i = 'a'; i < 'a' + 26; ++i)
            {
                mymap[i] = i - 'a';
                myvec.push_back(mystruct(i, i - 'a'));
            }

            int pre = clock();
            for (int i = 0; i < 10000000; ++i)
            {
                find(myvec, 'z');
            }
            std::cout << "linear scan: milli " << clock() - pre << "\n";

            pre = clock();
            for (int i = 0; i < 10000000; ++i)
            {
                mymap['z'];
            }
            std::cout << "map scan: milli " << clock() - pre << "\n";

            return 0;
        }

    Read the article
