Search Results

Search found 25888 results on 1036 pages for 'image map'.

Page 418/1036 | < Previous Page | 414 415 416 417 418 419 420 421 422 423 424 425  | Next Page >

  • UV Atlas Generation and Seam Removal

    - by P. Avery
    I'm generating light maps for scene mesh objects using DirectX's UV Atlas tool ( D3DXUVAtlasCreate() ). I've succeeded in generating an atlas; however, when I render the mesh object using the atlas, the seams are visible on the mesh. Below are images of a lightmap generated for a cube. Here is the code I use to generate a UV atlas for a cube:

        struct sVertexPosNormTex
        {
            D3DXVECTOR3 vPos, vNorm;
            D3DXVECTOR2 vUV;
            sVertexPosNormTex(){}
            sVertexPosNormTex( D3DXVECTOR3 v, D3DXVECTOR3 n, D3DXVECTOR2 uv )
            { vPos = v; vNorm = n; vUV = uv; }
            ~sVertexPosNormTex() { }
        };

        // create a light map texture to fill programmatically
        hr = D3DXCreateTexture( pd3dDevice, 128, 128, 1, 0, D3DFMT_A8R8G8B8,
                                D3DPOOL_MANAGED, &pLightmap );
        if( FAILED( hr ) )
        {
            DebugStringDX( "Main", "Failed to D3DXCreateTexture( lightmap )", __LINE__, hr );
            return hr;
        }

        // get the zero level surface from the texture
        IDirect3DSurface9 *pS = NULL;
        pLightmap->GetSurfaceLevel( 0, &pS );

        // clear surface
        pd3dDevice->ColorFill( pS, NULL, D3DCOLOR_XRGB( 0, 0, 0 ) );

        // load a sample mesh
        DWORD dwcMaterials = 0;
        LPD3DXBUFFER pMaterialBuffer = NULL;
        V_RETURN( D3DXLoadMeshFromX( L"cube3.x", D3DXMESH_MANAGED, pd3dDevice,
                                     &pAdjacency, &pMaterialBuffer, NULL,
                                     &dwcMaterials, &g_pMesh ) );

        // generate adjacency
        DWORD *pdwAdjacency = new DWORD[ 3 * g_pMesh->GetNumFaces() ];
        g_pMesh->GenerateAdjacency( 1e-6f, pdwAdjacency );

        // create light map coordinates
        LPD3DXMESH pMesh = NULL;
        LPD3DXBUFFER pFacePartitioning = NULL, pVertexRemapArray = NULL;
        FLOAT resultStretch = 0;
        UINT numCharts = 0;
        hr = D3DXUVAtlasCreate( g_pMesh, 0, 0, 128, 128, 3.5f, 0, pdwAdjacency,
                                NULL, NULL, NULL, NULL, NULL, 0, &pMesh,
                                &pFacePartitioning, &pVertexRemapArray,
                                &resultStretch, &numCharts );
        if( SUCCEEDED( hr ) )
        {
            // release and set mesh
            SAFE_RELEASE( g_pMesh );
            g_pMesh = pMesh;

            // write mesh to file
            hr = D3DXSaveMeshToX( L"cube4.x", g_pMesh, 0,
                                  ( const D3DXMATERIAL* )pMaterialBuffer->GetBufferPointer(),
                                  NULL, dwcMaterials, D3DXF_FILEFORMAT_TEXT );
            if( FAILED( hr ) )
            {
                DebugStringDX( "Main", "Failed to D3DXSaveMeshToX() at OnD3D9CreateDevice()", __LINE__, hr );
            }

            // fill the light map
            hr = BuildLightmap( pS, g_pMesh );
            if( FAILED( hr ) )
            {
                DebugStringDX( "Main", "Failed to BuildLightmap()", __LINE__, hr );
            }
        }
        else
        {
            DebugStringDX( "Main", "Failed to D3DXUVAtlasCreate() at OnD3D9CreateDevice()", __LINE__, hr );
        }
        SAFE_RELEASE( pS );
        SAFE_DELETE_ARRAY( pdwAdjacency );
        SAFE_RELEASE( pFacePartitioning );
        SAFE_RELEASE( pVertexRemapArray );
        SAFE_RELEASE( pMaterialBuffer );

    Here is the code that fills the lightmap texture:

        HRESULT BuildLightmap( IDirect3DSurface9 *pS, LPD3DXMESH pMesh )
        {
            HRESULT hr = S_OK;

            // validate lightmap texture surface and mesh
            if( !pS || !pMesh )
                return E_POINTER;

            // lock the mesh vertex buffer
            sVertexPosNormTex *pV = NULL;
            pMesh->LockVertexBuffer( D3DLOCK_READONLY, ( void** )&pV );

            // lock the mesh index buffer
            WORD *pI = NULL;
            pMesh->LockIndexBuffer( D3DLOCK_READONLY, ( void** )&pI );

            // get the lightmap texture surface description
            D3DSURFACE_DESC desc;
            pS->GetDesc( &desc );

            // lock the surface rect to fill with color data
            D3DLOCKED_RECT rct;
            hr = pS->LockRect( &rct, NULL, 0 );
            if( FAILED( hr ) )
            {
                DebugStringDX( "main.cpp:", "Failed to IDirect3DTexture9::LockRect()", __LINE__, hr );
                return hr;
            }

            // iterate the pixels of the lightmap texture
            // check each pixel to see if it lies between the uv coordinates of a cube face
            BYTE *pBuffer = ( BYTE* )rct.pBits;
            for( UINT y = 0; y < desc.Height; ++y )
            {
                BYTE* pBufferRow = ( BYTE* )pBuffer;
                for( UINT x = 0; x < desc.Width * 4; x += 4 )
                {
                    // determine the pixel's uv coordinate
                    D3DXVECTOR2 p( ( ( float )x / 4.0f ) / ( float )desc.Width + 0.5f / 128.0f,
                                   y / ( float )desc.Height + 0.5f / 128.0f );

                    // for each face of the mesh
                    // check to see if the pixel lies within the face's uv coordinates
                    for( UINT i = 0; i < 3 * pMesh->GetNumFaces(); i += 3 )
                    {
                        sVertexPosNormTex v[ 3 ];
                        v[ 0 ] = pV[ pI[ i + 0 ] ];
                        v[ 1 ] = pV[ pI[ i + 1 ] ];
                        v[ 2 ] = pV[ pI[ i + 2 ] ];

                        if( TexcoordIsWithinBounds( v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p ) )
                        {
                            // the pixel lies between the uv coordinates of a cube face
                            // light contribution functions aren't needed yet
                            //D3DXVECTOR3 vPos = TexcoordToPos( v[ 0 ].vPos, v[ 1 ].vPos, v[ 2 ].vPos,
                            //                                  v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p );
                            //D3DXVECTOR3 vNormal = v[ 0 ].vNorm;

                            // set the color of this pixel red( for demo )
                            BYTE ba[] = { 0, 0, 255, 255, };
                            //ComputeContribution( vPos, vNormal, g_sLight, ba );

                            // copy the byte array into the light map texture
                            memcpy( ( void* )&pBufferRow[ x ], ( void* )ba, 4 * sizeof( BYTE ) );
                        }
                    }
                }

                // go to next line of the texture
                pBuffer += rct.Pitch;
            }

            // unlock the surface rect
            pS->UnlockRect();

            // unlock mesh vertex and index buffers
            pMesh->UnlockIndexBuffer();
            pMesh->UnlockVertexBuffer();

            // write the surface to file
            hr = D3DXSaveSurfaceToFile( L"LightMap.jpg", D3DXIFF_JPG, pS, NULL, NULL );
            if( FAILED( hr ) )
                DebugStringDX( "Main.cpp", "Failed to D3DXSaveSurfaceToFile()", __LINE__, hr );

            return hr;
        }

        bool TexcoordIsWithinBounds( const D3DXVECTOR2 &t0, const D3DXVECTOR2 &t1,
                                     const D3DXVECTOR2 &t2, const D3DXVECTOR2 &p )
        {
            // compute vectors
            D3DXVECTOR2 v0 = t1 - t0, v1 = t2 - t0, v2 = p - t0;

            float f00 = D3DXVec2Dot( &v0, &v0 );
            float f01 = D3DXVec2Dot( &v0, &v1 );
            float f02 = D3DXVec2Dot( &v0, &v2 );
            float f11 = D3DXVec2Dot( &v1, &v1 );
            float f12 = D3DXVec2Dot( &v1, &v2 );

            // Compute barycentric coordinates
            float invDenom = 1 / ( f00 * f11 - f01 * f01 );
            float fU = ( f11 * f02 - f01 * f12 ) * invDenom;
            float fV = ( f00 * f12 - f01 * f02 ) * invDenom;

            // Check if point is in triangle
            if( ( fU >= 0 ) && ( fV >= 0 ) && ( fU + fV < 1 ) )
                return true;

            return false;
        }

    Screenshot and Lightmap images are in the original post. I believe the problem comes from the difference between the lightmap UV coordinates and the pixel-center coordinates. For example, here are the lightmap UV coordinates (generated by D3DXUVAtlasCreate()) for one face (tri) of the mesh; keep in mind that I'm using the mesh UV coordinates to write the pixels of the texture:

        v[ 0 ].uv = D3DXVECTOR2( 0.003581, 0.295631 );
        v[ 1 ].uv = D3DXVECTOR2( 0.003581, 0.003581 );
        v[ 2 ].uv = D3DXVECTOR2( 0.295631, 0.003581 );

    The lightmap texture size is 128 x 128 pixels, so the upper-left pixel's center coordinates are:

        float halfPixel = 0.5 / 128 = 0.00390625;
        D3DXVECTOR2 pixelCenter = D3DXVECTOR2( halfPixel, halfPixel );

    Will the mapping and sampling of the lightmap texture require that an offset be taken into account, or that the UV coordinates be snapped to the pixel centers? Any ideas on the best way to approach this situation would be appreciated. What are the common practices?
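    The usual Direct3D 9 convention is that texel ( x, y ) is sampled at UV ( ( x + 0.5 ) / width, ( y + 0.5 ) / height ), so the rasterization loop above and the sampler need to agree on that half-texel offset; remaining seams are commonly hidden by dilating (padding) each chart's border texels into the gutter rather than by snapping the UVs. Below is a minimal Python sketch of that texel-center mapping and an inclusive point-in-triangle test; it illustrates the idea only, is not the original C++, and the epsilon tolerance is an assumption, not something D3DXUVAtlasCreate() requires.

        def texel_center_uv(x, y, width, height):
            # Texel (x, y) covers [x/width, (x+1)/width); its center sits half a
            # texel in, which is the +0.5 offset discussed above.
            return ((x + 0.5) / width, (y + 0.5) / height)

        def point_in_triangle(t0, t1, t2, p, eps=0.0):
            # Same barycentric test as TexcoordIsWithinBounds(); a small positive
            # eps lets texels that straddle a triangle edge pass, which is one way
            # to avoid leaving single-texel gaps along chart edges.
            dot = lambda a, b: a[0] * b[0] + a[1] * b[1]
            v0 = (t1[0] - t0[0], t1[1] - t0[1])
            v1 = (t2[0] - t0[0], t2[1] - t0[1])
            v2 = (p[0] - t0[0], p[1] - t0[1])
            f00, f01, f02 = dot(v0, v0), dot(v0, v1), dot(v0, v2)
            f11, f12 = dot(v1, v1), dot(v1, v2)
            inv = 1.0 / (f00 * f11 - f01 * f01)
            u = (f11 * f02 - f01 * f12) * inv
            v = (f00 * f12 - f01 * f02) * inv
            return u >= -eps and v >= -eps and u + v <= 1.0 + eps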

    Read the article

  • Re-factoring a CURL request to Ruby's RestClient

    - by user94154
    I'm having trouble translating this curl request into Ruby using RestClient:

        system("curl --digest -u #{@user}:#{@pass} '#{@endpoint}/#{id}' --form image_file=@'#{path}' -X PUT")

    I keep getting 400 Bad Request errors. As far as I can tell, the request does get properly authenticated, but it trips up on the file-upload part. Here are my best attempts, all of which get me those 400 errors:

        resource = RestClient::Resource.new "#{@endpoint}/#{id}", @user, @pass

        # attempt 1
        resource.put :image_file => File.new(path, 'rb'), :content_type => 'image/jpg'

        # attempt 2
        resource.put File.read(path), :content_type => 'image/jpg'

        # attempt 3
        resource.put File.open(path) {|f| f.read}, :content_type => 'image/jpg'
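    Note that curl's --form flag builds a multipart/form-data body containing a part named image_file, while attempts 2 and 3 send the raw file bytes as the request body; attempt 1 is closest, since RestClient builds a multipart body when a parameter value is a File (worth double-checking for the installed gem version), and it is also worth confirming that the client is really performing digest auth, since the curl call relies on --digest. For illustration only, here is the shape of the request curl sends, sketched with Python's requests library rather than the asker's Ruby stack:

        import requests
        from requests.auth import HTTPDigestAuth

        def put_image(endpoint, record_id, user, password, path):
            # PUT a multipart/form-data body with one file part named image_file,
            # authenticated with HTTP digest auth, mirroring the curl command.
            url = "%s/%s" % (endpoint, record_id)
            with open(path, "rb") as f:
                resp = requests.put(url,
                                    auth=HTTPDigestAuth(user, password),
                                    files={"image_file": f})
            resp.raise_for_status()
            return resp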

    Read the article

  • KeyListener problem

    - by rgksugan
    In my application I am using a JPanel to which I want to add a key listener. I did that, but it does not work. Is it because I am using a SwingWorker to update the contents of the panel every second? Here is my code to update the panel:

        RenderedImage image = ImageIO.read(new ByteArrayInputStream((byte[]) get()));
        Graphics graphics = remote.rdpanel.getGraphics();
        if (graphics != null) {
            Image readyImage = new ImageIcon(UtilityFunctions.convertRenderedImage(image)).getImage();
            graphics.drawImage(readyImage, 0, 0, remote.rdpanel.getWidth(), remote.rdpanel.getHeight(), null);
        }

    Read the article

  • image_to_function in Rails

    - by FCastellanos
    I have this helper in Rails so that I can have an image call a JavaScript function:

        def image_to_function(name, function, html_options = {})
          html_options.symbolize_keys!
          tag(:input, html_options.merge({
            :type => "image", :src => image_path(name),
            :onclick => (html_options[:onclick] ? "#{html_options[:onclick]}; " : "") + "#{function};"
          }))
        end

    I grabbed this code from the application helper of the Redmine source code. The problem I'm having is that when I click on the image it sends a POST; does anyone know how I can stop that? This is how I'm using it:

        <%= image_to_function "eliminar-icon.png", "mark_for_destroy(this, '.task')" %>

    Thanks a lot!

    Read the article

  • iPhone: UIImageView not showing images

    - by George
    Hello, I've got a UIImageView drawn in my nib file and it's connected to an imageView IBOutlet. I can load single pictures, which show up very nicely, but when it comes to drawing many images separately, like an animation, the images won't show. I've got a drawImage() function which takes NSData objects (image data) and draws them to the screen (imageView). The main function has a for loop which iterates 300 times as quickly as it can, and each time it calls that drawImage function and passes different image data to it. Sometimes when I execute this code the last picture from that "animation" shows up, sometimes nothing at all. Maybe I need to schedule enough time for the imageView so that the image can be shown? Hope someone has some clues. Thanks in advance!

    Read the article

  • Base64 Android encode to PHP decode causes errors

    - by studio lambda
    I'm a French guy, so I'm sorry for my English... I'm developing an Android app which communicates with a PHP REST service. I try to encode an image file into Base64 like this:

        InputStream fileInputStream = context.getContentResolver().openInputStream(uri);
        BufferedInputStream in = new BufferedInputStream(fileInputStream);
        StringWriter out = new StringWriter();
        int b;
        while ((b = in.read()) != -1)
            out.write(b);
        out.flush();
        out.close();
        in.close();
        String encoded = new String(android.util.Base64.encode(out.toString()
                .getBytes(), android.util.Base64.DEFAULT));

    On the server side, I do:

        $data = base64_decode(chunk_split($base64BinaryData));

    The result is that my image file is corrupted! INFO: the image is made by an Intent to the android.provider.MediaStore.ACTION_IMAGE_CAPTURE activity in the emulator (AVD 5554). I've already read lots of discussions about similar problems but nothing fixes my bug. Thanks for the help. Regards,
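    The StringWriter round trip is the suspicious part: each byte is written as a character, and out.toString().getBytes() then re-encodes those characters with the platform charset, which can mangle any byte outside ASCII (this is an inference from the snippet, not a confirmed diagnosis). The general rule, sketched below in Python purely for neutrality, is to hand the raw bytes straight to the Base64 encoder and decode straight back to bytes:

        import base64

        def encode_image(path):
            with open(path, "rb") as f:
                raw = f.read()              # raw bytes, never converted to text
            return base64.b64encode(raw)    # encode the bytes directly

        def decode_image(encoded, out_path):
            with open(out_path, "wb") as f:
                f.write(base64.b64decode(encoded))  # no chunk_split step needed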

    Read the article

  • UIImage Rotation

    - by Kamchatka
    I display an image in a UIImageView (within a UIScrollView), and the image is also stored in Core Data. In the interface, I want the user to be able to rotate the picture by 90 degrees, and I also want the rotation to be saved to Core Data. What should I rotate in the display: the scroll view, the UIImageView, or the image itself? (If possible I would like the rotation to be animated.) But then I also have to save the picture to Core Data. I thought about changing the image orientation, but that property is read-only.

    Read the article

  • Java paint speed relative to color model

    - by Jon
    I have a BufferedImage with an IndexColorModel. I need to paint that image onto the screen, but I've noticed that this is slow when using an IndexColorModel. However, if I run the BufferedImage through an identity affine transform, it creates an image with a DirectColorModel and the painting is significantly faster. Here's the code I'm using:

        AffineTransformOp identityOp = new AffineTransformOp(new AffineTransform(),
                AffineTransformOp.TYPE_BILINEAR);
        displayImage = identityOp.filter(displayImage, null);

    I have three questions: 1. Why is painting slower with an IndexColorModel? 2. Is there any way to speed up the painting of an IndexColorModel? 3. If the answer to 2 is no, is this the most efficient way to convert from an IndexColorModel to a DirectColorModel? I've noticed that the conversion time depends on the size of the image, and I'd like to remove that dependency. Thanks for the help.
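    On question 3, one common pattern is to pay the palette-to-direct-color conversion once, when the image is loaded, and then paint only the converted copy, so the per-frame cost no longer depends on the color model. The same idea expressed with Pillow rather than Java2D (an illustrative swap, not the asker's stack, with a hypothetical file name):

        from PIL import Image

        indexed = Image.open("indexed.png")   # hypothetical path; mode "P" = palette/indexed image
        direct = indexed.convert("RGB")       # one-time conversion to a direct-color image
        direct.save("direct.png")             # draw/paint this converted copy from now on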

    Read the article

  • Help needed with MySQL query to join data spanning multiple tables with data used as column names

    - by gurun8
    I need a little help putting together a SQL query that will give me the result sets shown in the images of the original post. The data model is also shown there as an image. The tricky part for me is that the columns to the right of "Product" in the result set aren't really columns in the database, but rather key/value pairs spanned across the data model. The table data is likewise shown as images. My apologies in advance for the image-heavy question and the image quality; this just seemed like the easiest way to convey the information. It'll probably take someone less time to write the query statement to achieve the results than it took me to assemble this question. By the way, the "product_option" table image is truncated, but it illustrates the general idea of the data structure. The MySQL server version is 5.1.45.
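    The underlying task is a pivot: turning key/value option rows into one column per option name. Since the actual tables are only visible as images in the original post, every name below is hypothetical; the sketch uses pandas rather than MySQL purely to show the shape of the transformation (in MySQL this is typically done with conditional aggregation over the joined key/value rows).

        import pandas as pd

        # Hypothetical key/value rows, standing in for the joined option tables.
        options = pd.DataFrame({
            "product":      ["Widget", "Widget", "Gadget"],
            "option_name":  ["Color",  "Size",   "Color"],
            "option_value": ["Red",    "Large",  "Blue"],
        })

        # Pivot: one row per product, one column per option name.
        wide = options.pivot_table(index="product",
                                   columns="option_name",
                                   values="option_value",
                                   aggfunc="first").reset_index()
        print(wide)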

    Read the article

  • starting subactivity for the second time causes java.lang.OutOfMemoryError

    - by Zacherl
    Hi there, I am developing a simple app which does a little bit of image processing. It's divided into two activities: the main one, with some display elements, and a second one which is used to capture images with the phone's camera. To describe my problem: I start the app, capture an image (by starting a new Intent with the sub-activity), and all data is displayed correctly. If I capture another image after this, I run into a java.lang.OutOfMemoryError: bitmap size exceeds VM budget. I don't store the captured bitmap; in the second activity I just extract some data from it, pass it to the main activity, and finish() the sub-activity afterwards. I really don't know what I can do about it. Thanks in advance! Greetings, Zacherl. PS: This is my first approach to Android, so I apologize for any stupid beginner errors; if someone needs any further information, I would be happy to provide it.

    Read the article

  • How to know preferred icon size for MenuItem?

    - by barmaley
    Hi folks, in my application I have one large PNG file containing a hi-res image. Depending on the situation I would like to use this image either as an icon or as a placeholder for an ImageView. For a MenuItem this image is too large, so I need to scale it down to a suitable size. I mean, if it has to be displayed on a large enough device like a Samsung Galaxy Tab I need to use one scale, on small ones another, etc. I just noticed that on small devices the MenuItem icon is not scaled, just cropped, which is ugly. So the question is: how should I detect the preferred size?

    Read the article

  • Find the closest vector

    - by Alexey Lebedev
    Hello! Recently I wrote an algorithm to quantize an RGB image. Every pixel is represented by an (R,G,B) vector, and the quantization codebook is a couple of 3-dimensional vectors. Every pixel of the image needs to be mapped to (say, "replaced by") the codebook pixel closest in terms of Euclidean distance (more exactly, squared Euclidean). I did it as follows:

        class EuclideanMetric(DistanceMetric):
            def __call__(self, x, y):
                d = x - y
                return sqrt(sum(d * d, -1))

        class Quantizer(object):
            def __init__(self, codebook, distanceMetric = EuclideanMetric()):
                self._codebook = codebook
                self._distMetric = distanceMetric

            def quantize(self, imageArray):
                quantizedRaster = zeros(imageArray.shape)
                X = quantizedRaster.shape[0]
                Y = quantizedRaster.shape[1]
                for i in xrange(0, X):
                    print i
                    for j in xrange(0, Y):
                        dist = self._distMetric(imageArray[i,j], self._codebook)
                        code = argmin(dist)
                        quantizedRaster[i,j] = self._codebook[code]
                return quantizedRaster

    ...and it works awfully: almost 800 seconds on my Pentium Core Duo 2.2 GHz with 4 GB of memory, for an image of 2600*2700 pixels :( Is there a way to optimize this somewhat? Maybe another algorithm, or some Python-specific optimizations?
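    The per-pixel Python loops (and the per-pixel sqrt) are what dominate here. Below is a sketch of a vectorized variant, assuming numpy and a codebook small enough that a chunk-by-codebook distance matrix fits in memory; scipy.cluster.vq.vq performs the same nearest-codeword lookup if SciPy is an option.

        import numpy as np

        def quantize_vectorized(image_array, codebook, chunk=65536):
            # image_array: (H, W, 3) array; codebook: (K, 3) array-like.
            h, w, channels = image_array.shape
            pixels = image_array.reshape(-1, channels).astype(np.float64)
            book = np.asarray(codebook, dtype=np.float64)
            codes = np.empty(pixels.shape[0], dtype=np.intp)
            # Process pixels in chunks so the (chunk, K) distance matrix stays small
            # even for a 2600x2700 image. The sqrt is skipped: the argmin of the
            # squared distance selects the same codeword as the argmin of the distance.
            for start in range(0, pixels.shape[0], chunk):
                block = pixels[start:start + chunk]                              # (n, 3)
                d2 = ((block[:, None, :] - book[None, :, :]) ** 2).sum(axis=2)   # (n, K)
                codes[start:start + chunk] = d2.argmin(axis=1)
            return book[codes].reshape(h, w, channels)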

    Read the article

  • Accessing Textboxes in Repeater Control

    - by CccTrash
    All the ways I can think of to do this seem very hackish. What is the right way to do this, or at least the most common? I am retrieving a set of images from a LINQ-to-SQL query and databinding it, along with some other data, to a repeater. I need to add a textbox to each item in the repeater that will let the user change the title of each image, very similar to Flickr. How do I access the textboxes in the repeater control and know which image each textbox belongs to? Here is what the repeater control would look like, with a submit button which would update all the image rows in LINQ-to-SQL:

    Read the article

  • Uploading a picture to an album using the Graph API

    - by kielie
    Hi guys, I am trying to upload an image to an album, but it's not working. Here is the code I am using:

        $uid = $facebook->getUser();
        $args = array('message' => $uid);
        $file_path = "http://www.site.com/path/to/file.jpg";
        $album_id = '1234';
        $args['name'] = '@' . realpath($file_path);
        $data = $facebook->api('/'. $album_id . '/photos', 'post', $args);
        print_r($data);

    This code is in a function.php file that gets called when a user clicks a button inside a Flash file embedded on my canvas. Basically, when the Flash takes a screenshot and passes the variable "image" to the function, it should upload $_GET['image'] to the album. How could I go about doing this? Thanks in advance!

    Read the article

  • FBConnect - uploading images to Wall

    - by SteveU
    Hi all, I've configured FBConnect and it uploads to the wall, but what I want to do is take a screenshot and then upload that screenshot to Facebook. In the code below there is an image, but only as a URL. Can I intercept this and put in my screenshot image?

        FBStreamDialog *dialog = [[[FBStreamDialog alloc] init] autorelease];
        dialog.userMessagePrompt = @"Tell your friends about Fridgit!!:";
        dialog.attachment = [NSString stringWithFormat:@"{\"name\":\"Facebook Connect for iPhone\",\"href\":\"http://developers.facebook.com/connect.phptab=iphone\",\"caption\":\"Caption\",\"description\":\"Description\",\"media\":[{\"type\":\"image\",\"src\":\"screenShot\",\"href\":\"http://developers.facebook.com/connect.php?tab=iphone/\"}],\"properties\":{\"another link\":{\"text\":\"Facebook home page\",\"href\":\"http://www.facebook.com\"}}}", self.screenShot];
        [dialog show];
        }

    I know the code is all default; I haven't edited it yet in case I can't do what I want to do. Cheers

    Read the article

  • Looking for an extended version of Lightbox

    - by itsandy
    Hi all, I am not sure how to put this; maybe I'm just not able to search for it properly because I'm not sure what it is called, but I am looking for a script which is a kind of extended version of the Lightbox script. I want to place some images on my website which, when clicked, open in a lightbox and can go next and previous, but the trick is that the next images have to be sub-pictures of the picture currently displayed. So let's say I have "a", "b", "c", ... images shown on my website, but when someone clicks "a", the image "a" opens, and then when he clicks the next image (with the help of the lightbox script) he goes to "a.1", "a.2", ... and so on for image "b" ... Can anyone help me find this script? I have seen it somewhere but am not sure of the search term. Many thanks.

    Read the article

  • WPF Dispatcher {"The calling thread cannot access this object because a different thread owns it."}

    - by user359446
    First I need to say that I'm a noob with WPF and C#. Application: create a Mandelbrot image (GUI). My dispatcher works perfectly in this case:

        private void progressBarRefresh(){
            while ((con.Progress) < 99)
            {
                progressBar1.Dispatcher.Invoke(DispatcherPriority.Send,
                    new Action(delegate
                    {
                        progressBar1.Value = con.Progress;
                    }));
            }
        }

    I get the message in the title when trying to do this with the code below:

        bmp = BitmapSource.Create(width, height, 96, 96, pf, null, rawImage, stride);
        this.Dispatcher.Invoke(DispatcherPriority.Send,
            new Action(delegate
            {
                img.Source = bmp;
                ViewBox.Child = img; // maybe at the end
            }));

    I will try to explain how my program works. I created a new thread (because the GUI doesn't respond otherwise) for the calculation of the pixels and the colors. In this thread (method) I'm using the Dispatcher to refresh my image in the ViewBox after the calculations are done. When I don't put the calculation in a separate thread, I can refresh and build my image.

    Read the article

  • Outputting a resized animated GIF to the browser using Imagick

    - by Freeman
    I intend to resize an animated GIF and output it to the browser on the fly. My problem is that when I save the resized image it is of good quality, but if I echo it to the browser it is of poor quality and the animation is removed. Here is the code:

        header("Content-type:image/gif");
        try {
            /* Read in the animated gif */
            $animation = new Imagick("images/nikks.gif");

            /*** Loop through the frames ***/
            foreach ($animation as $frame) {
                /*** Thumbnail each frame ***/
                $frame->thumbnailImage(200, 200);

                /*** Set virtual canvas size to 100x100 ***/
                $frame->setImagePage(200, 200, 0, 0);
            }

            /*** Write image to disk. Notice writeImages instead of writeImage ***/
            //$animation->writeImages("images/nikkyo1.gif",true);

            echo $animation;
        } catch(Exception $e) {
            echo $e->getMessage();
        }

    Read the article

  • Application crashes

    - by user338322
    Below is my crash Report. 0 0x326712f8 in prepareForMethodLookup () 1 0x3266cf5c in lookUpMethod () 2 0x32668f28 in objc_msgSend_uncached () 3 0x33f70996 in NSPopAutoreleasePool () 4 0x33f82a6c in -[NSAutoreleasePool drain] () 5 0x00003d3e in -[CameraViewcontroller save:] (self=0x811400, _cmd=0x319c00d4, number=0x11e210) at /Users/hardikrathore/Desktop/LiveVideoRecording/Classes/CameraViewcontroller.m:266 6 0x33f36f8a in __NSFireDelayedPerform () 7 0x32da44c2 in CFRunLoopRunSpecific () 8 0x32da3c1e in CFRunLoopRunInMode () 9 0x31bb9374 in GSEventRunModal () 10 0x30bf3c30 in -[UIApplication _run] () 11 0x30bf2230 in UIApplicationMain () 12 0x00002650 in main (argc=1, argv=0x2ffff474) at /Users/hardikrathore/Desktop/LiveVideoRecording/main.m:14 And this is the code. lines, where I am getting the error. -(void)save:(id)number { NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; j =[number intValue]; while(screens[j] != NULL){ NSLog(@" image made : %d",j); UIImage * image = [UIImage imageWithCGImage:screens[j]]; image=[self imageByCropping:image toRect:CGRectMake(0, 0, 320, 240)]; NSData *imgdata = UIImageJPEGRepresentation(image,0.3); [image release]; CGImageRelease(screens[j]); screens[j] = NULL; UIImage * image1 = [UIImage imageWithCGImage:screens[j+1]]; image1=[self imageByCropping:image1 toRect:CGRectMake(0, 0, 320, 240)]; NSData *imgdata1 = UIImageJPEGRepresentation(image1,0.3); [image1 release]; CGImageRelease(screens[j+1]); screens[j+1] = NULL; NSString *urlString=@"http://www.test.itmate4.com/iPhoneToServerTwice.php"; // setting up the request object now NSMutableURLRequest *request = [[NSMutableURLRequest alloc]init]; [request setURL:[NSURL URLWithString:urlString]]; [request setHTTPMethod:@"POST"]; NSString *fileName=[VideoID stringByAppendingString:@"_"]; fileName=[fileName stringByAppendingString:[NSString stringWithFormat:@"%d",k]]; NSString *fileName2=[VideoID stringByAppendingString:@"_"]; fileName2=[fileName2 stringByAppendingString:[NSString stringWithFormat:@"%d",k+1]]; /* add some header info now we always need a boundary when we post a file also we need to set the content type You might want to generate a random boundary.. 
this is just the same as my output from wireshark on a valid html post */ NSString *boundary = [NSString stringWithString:@"---------------------------14737809831466499882746641449"]; NSString *contentType = [NSString stringWithFormat:@"multipart/form-data; boundary=%@",boundary]; [request addValue:contentType forHTTPHeaderField: @"Content-Type"]; /* now lets create the body of the post */ //NSString *count=[NSString stringWithFormat:@"%d",front];; NSMutableData *body = [NSMutableData data]; [body appendData:[[NSString stringWithFormat:@"\r\n--%@\r\n",boundary] dataUsingEncoding:NSUTF8StringEncoding]]; //[body appendData:[[NSString stringWithFormat:@"Content-Disposition: form-data; name=\"userfile\"; count=\"@\"";filename=\"%@.jpg\"\r\n",count,fileName] dataUsingEncoding:NSUTF8StringEncoding]]; [body appendData:[[NSString stringWithFormat:@"Content-Disposition: form-data; name=\"userfile\"; filename=\"%@.jpg\"\r\n",fileName] dataUsingEncoding:NSUTF8StringEncoding]]; [body appendData:[[NSString stringWithString:@"Content-Type: application/octet-stream\r\n\r\n"] dataUsingEncoding:NSUTF8StringEncoding]]; [body appendData:[NSData dataWithData:imgdata]]; [body appendData:[[NSString stringWithFormat:@"\r\n--%@--\r\n",boundary] dataUsingEncoding:NSUTF8StringEncoding]]; //second boundary NSString *string1 = [[NSString alloc] initWithFormat:@"\r\n--%@\r\n",boundary]; NSString *string2 =[[NSString alloc] initWithFormat:@"Content-Disposition: form-data; name=\"userfile2\"; filename=\"%@.jpg\"\r\n",fileName2]; NSString *string3 =[[NSString alloc] initWithFormat:@"\r\n--%@--\r\n",boundary]; [body appendData:[string1 dataUsingEncoding:NSUTF8StringEncoding]]; [body appendData:[string2 dataUsingEncoding:NSUTF8StringEncoding]]; //experiment //[body appendData:[[NSString stringWithFormat:@"Content-Disposition: form-data; name=\"userfile2\"; filename=\"%@.jpg\"\r\n",fileName2] dataUsingEncoding:NSUTF8StringEncoding]]; [body appendData:[[NSString stringWithString:@"Content-Type: application/octet-stream\r\n\r\n"] dataUsingEncoding:NSUTF8StringEncoding]]; [body appendData:[NSData dataWithData:imgdata1]]; //[body appendData:[[NSString stringWithFormat:@"\r\n--%@--\r\n",boundary] dataUsingEncoding:NSUTF8StringEncoding]]; [body appendData:[string3 dataUsingEncoding:NSUTF8StringEncoding]]; // setting the body of the post to the reqeust [request setHTTPBody:body]; // now lets make the connection to the web NSData *returnData = [NSURLConnection sendSynchronousRequest:request returningResponse:nil error:nil]; NSString *returnString = [[NSString alloc] initWithData:returnData encoding:NSUTF8StringEncoding]; if([returnString isEqualToString:@"SUCCESS"]) { NSLog(returnString); k=k+2; j=j+2; [self performSelectorInBackground:@selector(save:) withObject:(id)[NSNumber numberWithInt:j]]; } //k=k+2; [imgdata release]; [imgdata1 release]; [NSThread sleepForTimeInterval:.01]; } [pool drain]; <-------------Line 266 } As you can see in log report. I am getting the error, Line 266. Some autorelease problem Any help !!!? coz I am not getting why its happening.

    Read the article

  • Swapping UIImages causing 'unrecognized selector sent to instance' ?

    - by user158103
    Error: [__NSCFDate drawAtPoint:]: unrecognized selector sent to instance 0xd251e0. Terminating app due to uncaught exception 'NSInvalidArgumentException'. Scenario: for the most part this works, but I notice this error, even on the simulator, when I swap UIImages, slowly but consistently. For example, I have a retained reference to a UIImage that I'm drawing. By the click of a picker control I am changing the face image (this occurs in another view controller). I can consistently recreate this error by continuously changing the faces. It usually crashes at about the 4th swap or later. My theory: it's not loading the image, and therefore the image reference is nil. I know I've read a bit about UIImage being cached, so I wouldn't think I'm running out of memory. Any ideas? Thanks!

    Read the article

  • extend web server to serve static files

    - by Turtle
    Hello, I want to extend a web server which currently only handles RPC. The web server is written in C#. It provides an abstract handler function like the following:

        public string owsHandler(string request, string path, string param,
                                 OSHttpRequest httpRequest, OSHttpResponse httpResponse)

    And I wrote the following code to handle image files:

        Bitmap queryImg = new Bitmap(path);
        System.IO.MemoryStream stream = new System.IO.MemoryStream();
        queryImg.Save(stream, System.Drawing.Imaging.ImageFormat.Bmp);
        queryImg.Dispose();
        byte[] byteImage = stream.ToArray();
        stream.Dispose();
        return Convert.ToBase64String(byteImage);

    When I test it in the browser, the image is returned but the image dimension info is missing. Should I add something more to the code? Or is there a general way to serve static files? I do not want to serve it from an ASP.NET server. Thanks.
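    Part of the problem is that the handler re-encodes the file through Bitmap (so a JPEG or PNG comes back as BMP data) and then returns Base64 text rather than the raw bytes, so the browser never receives the original file. The usual static-file recipe is to send the file's bytes unchanged with a matching Content-Type header; a Python sketch of that recipe is below, for illustration rather than the asker's C# API. On the C# side this presumably means writing the original byte array to the response with the right content type instead of returning a Base64 string.

        import mimetypes

        def serve_static(path):
            # Guess the MIME type from the extension and return the file's bytes
            # exactly as they are on disk; no Base64, no decode/re-encode.
            content_type, _ = mimetypes.guess_type(path)
            with open(path, "rb") as f:
                body = f.read()
            headers = {
                "Content-Type": content_type or "application/octet-stream",
                "Content-Length": str(len(body)),
            }
            return headers, body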

    Read the article

  • jquery form select (if exists in $_POST)

    - by SoulieBaby
    Hi all, I'm trying to redo this in jQuery:

        <script language="JavaScript" type="text/javascript">
            var thisIsSelected = '<?php echo $_POST['image']; ?>';
            var sel1 = document.getElementById('image');
            sel1.value = thisIsSelected;
        </script>

    But I seem to keep breaking it, lol. Basically I want jQuery to check whether $_POST['image'] exists and, if so, make it selected on the form. I'm assuming that if it's possible with JavaScript, jQuery can do it more easily ;)

    Read the article

  • CSS [custom?] attributes

    - by Michael
        radio[pane] {
            list-style-image: url("jar:resource:///chrome/classic.jar!/skin/classic/browser/preferences/Options.png");
        }

        radio[pane="prefpane-appearance"] {
            -moz-image-region: rect(0px, 32px, 32px, 0px);
        }

        radio[pane="prefpane-appearance"]:hover,
        radio[pane="prefpane-appearance"][selected="true"] {
            -moz-image-region: rect(32px, 32px, 64px, 0px);
        }

    Can anyone explain the syntax of this CSS, particularly what pane is? I couldn't find such an attribute for the radio element in the context of XUL, so I guess it's some custom attribute? If it is, how does it evolve through these lines: first a declaration, then several assignments? It also has selected, which means an element can have multiple custom attributes? How can those attributes be used later?

    Read the article

  • ImageChops.duplicate - python

    - by ariel
    Hi, I am trying to use the function ImageChops.duplicate from the PIL module and I get an error I don't understand. This is the code:

        import PIL
        import Image
        import ImageChops
        import os

        PathDemo4a='C:/Documents and Settings/Ariel/My Documents/My Dropbox/lecture/demo4a'
        PathDemo4b='C:/Documents and Settings/Ariel/My Documents/My Dropbox/lecture/demo4b'
        PathDemo4c='C:/Documents and Settings/Ariel/My Documents/My Dropbox/lecture/demo4c'
        PathBlackBoard='C:/Documents and Settings/Ariel/My Documents/My Dropbox/lecture/BlackBoard.bmp'

        Slides=os.listdir(PathDemo4a)

        for slide in Slides:
            #BB=Image.open(PathBlackBoard)
            BB=ImageChops.duplicate(PathBlackBoard)
            #BB=BlackBoard

    And this is the error:

        Traceback (most recent call last):
          File "", line 1, in
            ImageChops.duplicate('c:/1.BMP')
          File "C:\Python26\lib\site-packages\PIL\ImageChops.py", line 57, in duplicate
            return image.copy()
        AttributeError: 'str' object has no attribute 'copy'

    Any help would be much appreciated. Ariel
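    The traceback itself shows what ImageChops.duplicate does: it calls image.copy(), so it expects an Image object, not a path string. A minimal sketch of the presumably intended usage (open the blackboard once, then copy the Image object for each slide), reusing the paths from the question:

        import os
        from PIL import Image, ImageChops   # modern Pillow layout; the older flat
                                            # "import Image, ImageChops" works the same way

        PathDemo4a = 'C:/Documents and Settings/Ariel/My Documents/My Dropbox/lecture/demo4a'
        PathBlackBoard = 'C:/Documents and Settings/Ariel/My Documents/My Dropbox/lecture/BlackBoard.bmp'

        blackboard = Image.open(PathBlackBoard)        # open the file once...
        for slide in os.listdir(PathDemo4a):
            BB = ImageChops.duplicate(blackboard)      # ...and copy the Image object per slide
            # BB is an independent copy; blackboard.copy() would do the same thing.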

    Read the article

  • artsexylightbox problem when using IE8

    - by Daniel
    I'm using the Art Sexy Lightbox for my picture presentations and also for HTML content in Joomla. I'm using Chrome and it works fine, displaying everything as it should. The problem starts when I switch to IE8: when I click on an image to expand it in the lightbox, the image displays in the center of the page while the whole frame of the picture sits to the left of the image. I've tried playing with the artsexylightbox CSS file but couldn't get it to work in both browsers. Can anyone say why there is a difference? I suspect that the browsers treat absolute/relative positioning differently. Please help :(

    Read the article
