Search Results

Search found 7580 results on 304 pages for 'coordinate systems'.


  • Is Ksplice production-ready?

    - by faultyserver
    I would be interested to hear the serverfault community's experiences with Ksplice in production. Quick blurb from Wikipedia: Ksplice is a free and open source extension of the Linux kernel which allows system administrators to apply security patches to a running kernel without having to reboot the operating system. Ksplice can, without restarting the kernel, apply any source code patch that only needs to modify the kernel code. Unlike other hot update systems, Ksplice takes as input only a unified diff and the original kernel source code, and it updates the running kernel correctly, with no further human assistance required. Additionally, taking advantage of Ksplice does not require any preparation before the system is originally booted (the running kernel does not need to have been specially compiled, for example). In order to generate an update, Ksplice must determine what code within the kernel has been changed by the source code patch. So a few questions: How has the stability been? Any odd issues that you have encountered with its 'rebootless live patching' of the kernel? Kernel panics or horror stories? I have been running it on a few test systems and so far it's been working as advertised, but I am interested in what other sysadmins' experiences have been with Ksplice before going 'all in' and deploying it on our production servers. So, is anybody using Ksplice in production? Update: hmm, not seeing any real activity on this question after a couple of hours (besides some kind upvotes and favs). Maybe to spark some activity I'll also ask a few more questions and see if we can get this discussion going... "If you are aware of Ksplice, is there a reason you are not using it?" "Do you feel it's still too bleeding-edge, unproven or untested?" "Does Ksplice not fit well within your current patch-management system?" "Do you hate having systems that have long (and secure) uptimes?" ;-)

    Read the article

  • Windows 95 virtualization and CPU idle

    - by brtlvrs
    We are in the situation that we need to migrate our Windows 95 systems (I know, that is so last century). There is no hardware available at a reasonable price to run our Windows 95 systems, so we are extending their overdue lifetime by making them virtual. Now we see that VMware reports 100% CPU utilization for a Windows 95 VM. This is because Windows 95 doesn't know how to idle a CPU. For this reason software like Rain, Waterfall, or CPUCool was introduced: these tools send a HLT instruction to the CPU, which causes it to halt and wait until there is new work to do. I've tested the mentioned programs in the VM, but they don't work and only generate errors. Does anyone have a valid workaround or solution? Btw, I know that the best solution is to replace Windows 95 with Windows XP, but in our situation that will take at least 5 years. Our Windows 95 systems are running factory process control software.....

    Read the article

  • Is Ubuntu a viable replacement of Windows XP for small enterprise environments?

    - by Alex. S.
    Hi all, I'm a newbie systems administrator, so any advice would be great. I would like to set up Ubuntu 8.04 LTS in a small management-consulting office (around 50 workstations) instead of Windows XP. I would install MS Office 2007 via WINE (*). It would be a fresh installation, so the migration would be less of a pain. The new setup would also include a small server as a document repository and a backup server for now. Later, I would install other goodies like an IM server, a document management solution, and whatever other collaborative tools. What do you advise in this scenario? Do you think it is viable? Should I try to convince my managers this is a good idea? I consider myself a fairly experienced user of both systems, and I'm the only guy in charge of everything. I need to cut costs down, and I think that antivirus and antimalware software are a waste of money and time. Is this a good idea, or should I resign myself to locking down the Windows systems and installing AV software? Is there anything else in this setup I'm not foreseeing? (*) The only catch on my test machine so far has been that Office SmartArt doesn't work properly; the rest of Office 2007 seems OK.

    Read the article

  • Over gigabit connection, Teracopy does 31MB/s, but Windows 8 does it at ~109MB per second?

    - by Gaurang
    I got my brain-melting first taste of gigabit networking today, between my 2011 Mac Mini and Windows 8 Pro desktop, connected via Cat 5e to a Linksys WRT320N (sporting DD-WRT). After making sure that the line speed on both systems showed 1 Gbps, I proceeded to copy a 2.4GB MP4 from the Mini to the Win 8 desktop (SMB sharing). Although satisfied with the 30-34 MB/s that Teracopy was showing (a proper step up for me from 10 MB/s), I was still curious about this massive difference between the advertised and real-world speed. Two hours of Google had me believing that other factors reduce the speed, SMB being one. So just for the sake of doing it, I iPerf'd both systems, and guess what that showed: around 875 Mbps on both systems! I then stumbled upon this little piece of info, after which I turned off Teracopy and copied the same file through Windows 8's regular copier. 109 MB/s. Molten brains :) What exactly is causing this? And can I enable such speeds via Teracopy? I really dig the extra features that Teracopy has, and will surely miss them now :D
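
    (A sanity check on the numbers, my own arithmetic rather than anything from the post: 875 Mbit/s as measured by iPerf, divided by 8 bits per byte, is roughly 109 MB/s, so the stock Windows 8 copy is essentially saturating the gigabit link, while Teracopy's ~31 MB/s is only about a quarter of what the wire can carry.)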

    Read the article

  • VM automatic provisioning advice

    - by jdgregson
    In my lab we have 24 workstations, each with five technician-maintained virtual machines set up in VMware Workstation. These create a lot of management overhead, as we have to update them as well as the host operating systems every three months (at the start of the next quarter), which adds up to 144 systems to update instead of just 24. Whenever we need to reimage the hosts, the VMs add another 130GB to each image, which is over 3TB of extra data to send over the network and a lot more time to apply each image, and then we still have to boot all 120 VMs and assign each a unique IP address and host name. We would like to get the VMs off the hosts and onto a server, but after looking around for a few days, I still don't know where to begin looking for a solution. There may be a better way to do this, but in my mind, the ideal solution would be to replace the VMs on the host machines with five thin-client operating systems, each configured to connect to a server and be sent, or connected to, a unique virtual machine. We can't have 120 VMs running on the server all the time, so the server would have to create a copy of the VM from a template whenever a student tries to boot one, and destroy the VM after the student is finished with it. If another client application has to be installed on the hosts, that would be fine; the only reason I'd like to keep them in VMware Workstation is that students already know to look there for the VMs when they need to use them. What, if any, virtualization software will allow this? Is there some other solution I'm not seeing?

    Read the article

  • How can I change mouse keymapping

    - by zuberuber
    I have a Razer DeathAdder (left-handed edition) and an A4Tech wireless mouse. My problem is I don't know how to change the wireless mouse's key mapping (swapping left/right click). Can somebody guide me how to do such a thing? List of my devices:
        ⎡ Virtual core pointer                          id=2   [master pointer (3)]
        ⎜   ↳ Virtual core XTEST pointer                id=4   [slave pointer (2)]
        ⎜   ↳ Logitech Unifying Device. Wireless PID:4004  id=8   [slave pointer (2)]
        ⎜   ↳ Razer Razer DeathAdder                    id=11  [slave pointer (2)]
        ⎜   ↳ A4TECH USB Device                         id=12  [slave pointer (2)]
        ⎜   ↳ A4TECH USB Device                         id=13  [slave pointer (2)]
        ⎣ Virtual core keyboard                         id=3   [master keyboard (2)]
            ↳ Virtual core XTEST keyboard               id=5   [slave keyboard (3)]
            ↳ Power Button                              id=6   [slave keyboard (3)]
            ↳ Power Button                              id=7   [slave keyboard (3)]
            ↳ Logitech USB Keyboard                     id=9   [slave keyboard (3)]
            ↳ Logitech USB Keyboard                     id=10  [slave keyboard (3)]
    This is my Razer xinput:
        Device 'Razer Razer DeathAdder':
        Device Enabled (121): 1
        Coordinate Transformation Matrix (123): 1.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 1.000000
        Device Accel Profile (246): 0
        Device Accel Constant Deceleration (247): 5.000000
        Device Accel Adaptive Deceleration (248): 1.000000
        Device Accel Velocity Scaling (249): 10.000000
        Device Product ID (240): 5426, 22
        Device Node (241): "/dev/input/event4"
        Evdev Axis Inversion (250): 0, 0
        Evdev Axes Swap (252): 0
        Axis Labels (253): "Rel X" (131), "Rel Y" (132), "Rel Vert Wheel" (274)
        Button Labels (254): "Button Left" (124), "Button Middle" (125), "Button Right" (126), "Button Wheel Up" (127), "Button Wheel Down" (128), "Button Horiz Wheel Left" (129), "Button Horiz Wheel Right" (130), "Button Side" (269), "Button Extra" (270), "Button Forward" (271), "Button Back" (272), "Button Task" (273), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243)
        Evdev Middle Button Emulation (255): 0
        Evdev Middle Button Timeout (256): 50
        Evdev Third Button Emulation (257): 0
        Evdev Third Button Emulation Timeout (258): 1000
        Evdev Third Button Emulation Button (259): 3
        Evdev Third Button Emulation Threshold (260): 20
        Evdev Wheel Emulation (261): 0
        Evdev Wheel Emulation Axes (262): 0, 0, 4, 5
        Evdev Wheel Emulation Inertia (263): 10
        Evdev Wheel Emulation Timeout (264): 200
        Evdev Wheel Emulation Button (265): 4
        Evdev Drag Lock Buttons (266): 0
    And this is my wireless mouse xinput:
        Device 'A4TECH USB Device':
        Device Enabled (121): 1
        Coordinate Transformation Matrix (123): 1.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 1.000000
        Device Accel Profile (246): 0
        Device Accel Constant Deceleration (247): 1.000000
        Device Accel Adaptive Deceleration (248): 1.000000
        Device Accel Velocity Scaling (249): 10.000000
        Device Product ID (240): 2522, 1359
        Device Node (241): "/dev/input/event16"
        Evdev Axis Inversion (250): 0, 0
        Evdev Axes Swap (252): 0
        Axis Labels (253): "Rel X" (131), "Rel Y" (132), "Rel Horiz Wheel" (245), "Rel Vert Wheel" (274)
        Button Labels (254): "Button Left" (124), "Button Middle" (125), "Button Right" (126), "Button Wheel Up" (127), "Button Wheel Down" (128), "Button Horiz Wheel Left" (129), "Button Horiz Wheel Right" (130), "Button Side" (269), "Button Extra" (270), "Button Forward" (271), "Button Back" (272), "Button Task" (273), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243)
        Evdev Middle Button Emulation (255): 0
        Evdev Middle Button Timeout (256): 50
        Evdev Third Button Emulation (257): 0
        Evdev Third Button Emulation Timeout (258): 1000
        Evdev Third Button Emulation Button (259): 3
        Evdev Third Button Emulation Threshold (260): 20
        Evdev Wheel Emulation (261): 0
        Evdev Wheel Emulation Axes (262): 0, 0, 4, 5
        Evdev Wheel Emulation Inertia (263): 10
        Evdev Wheel Emulation Timeout (264): 200
        Evdev Wheel Emulation Button (265): 4
        Evdev Drag Lock Buttons (266): 0

    Read the article

  • GLSL subroutine not being used

    - by amoffat
    I'm using a gaussian blur fragment shader. In it, I thought it would be concise to include 2 subroutines: one for selecting the horizontal texture coordinate offsets, and another for the vertical texture coordinate offsets. This way, I just have one gaussian blur shader to manage. Here is the code for my shader. The {{NAME}} bits are template placeholders that I substitute in at shader compile time: #version 420 subroutine vec2 sample_coord_type(int i); subroutine uniform sample_coord_type sample_coord; in vec2 texcoord; out vec3 color; uniform sampler2D tex; uniform int texture_size; const float offsets[{{NUM_SAMPLES}}] = float[]({{SAMPLE_OFFSETS}}); const float weights[{{NUM_SAMPLES}}] = float[]({{SAMPLE_WEIGHTS}}); subroutine(sample_coord_type) vec2 vertical_coord(int i) { return vec2(0.0, offsets[i] / texture_size); } subroutine(sample_coord_type) vec2 horizontal_coord(int i) { //return vec2(offsets[i] / texture_size, 0.0); return vec2(0.0, 0.0); // just for testing if this subroutine gets used } void main(void) { color = vec3(0.0); for (int i=0; i<{{NUM_SAMPLES}}; i++) { color += texture(tex, texcoord + sample_coord(i)).rgb * weights[i]; color += texture(tex, texcoord - sample_coord(i)).rgb * weights[i]; } } Here is my code for selecting the subroutine: blur_program->start(); blur_program->set_subroutine("sample_coord", "vertical_coord", GL_FRAGMENT_SHADER); blur_program->set_int("texture_size", width); blur_program->set_texture("tex", *deferred_output); blur_program->draw(); // draws a quad for the fragment shader to run on and: void ShaderProgram::set_subroutine(constr name, constr routine, GLenum target) { GLuint routine_index = glGetSubroutineIndex(id, target, routine.c_str()); GLuint uniform_index = glGetSubroutineUniformLocation(id, target, name.c_str()); glUniformSubroutinesuiv(target, 1, &routine_index); // debugging int num_subs; glGetActiveSubroutineUniformiv(id, target, uniform_index, GL_NUM_COMPATIBLE_SUBROUTINES, &num_subs); std::cout << uniform_index << " " << routine_index << " " << num_subs << "\n"; } I've checked for errors, and there are none. When I pass in vertical_coord as the routine to use, my scene is blurred vertically, as it should be. The routine_index variable is also 1 (which is weird, because vertical_coord subroutine is the first listed in the shader code...but no matter, maybe the compiler is switching things around) However, when I pass in horizontal_coord, my scene is STILL blurred vertically, even though the value of routine_index is 0, suggesting that a different subroutine is being used. Yet the horizontal_coord subroutine explicitly does not blur. What's more is, whichever subroutine comes first in the shader, is the subroutine that the shader uses permanently. Right now, vertical_coord comes first, so the shader blurs vertically always. If I put horizontal_coord first, the scene is unblurred, as expected, but then I cannot select the vertical_coord subroutine! :) Also, the value of num_subs is 2, suggesting that there are 2 subroutines compatible with my sample_coord subroutine uniform. Just to re-iterate, all of my return values are fine, and there are no glGetError() errors happening. Any ideas?

    Read the article

  • First-Time GLSL Shadow Mapping Problems

    - by Locke
    I'm working on building out a 2.5D engine and having massive problems getting my shadows working. I'm at a point where I'm VERY close. So, let's see a picture to see what I have: As you can see above, the image has lighting -- but the shadow map is displaying incorrectly. The shadow map is shown in the bottom left hand side of the screen as a normal 2D texture, so we can see what it looks like at any given time. If you notice, it appears that the shadows are generating backwards in the wrong direction -- I think. But the problem is a little more deep -- I'm just plotting the shadow onto the screen, which I know is wrong -- I'm ignoring the actual test to see if we NEED to show a shadow. The incoming parameters all appear to be correct -- so there has to be something wrong with my shader code somewhere. Here's what my code looks like: VERTEX: uniform mat4 LightModelViewProjectionMatrix; varying vec3 Normal; // The eye-space normal of the current vertex. varying vec4 LightCoordinate; // The texture coordinate of the light of the current vertex. varying vec3 LightDirection; // The eye-space direction of the light. void main() { Normal = normalize(gl_NormalMatrix * gl_Normal); LightDirection = normalize(gl_NormalMatrix * gl_LightSource[0].position.xyz); LightCoordinate = LightModelViewProjectionMatrix * gl_Vertex; LightCoordinate.xy = ( LightCoordinate.xy * 0.5 ) + 0.5; gl_Position = ftransform(); gl_TexCoord[0] = gl_MultiTexCoord0; } FRAGMENT: uniform sampler2D DiffuseMap; uniform sampler2D ShadowMap; varying vec3 Normal; // The eye-space normal of the current vertex. varying vec4 LightCoordinate; // The texture coordinate of the light of the current vertex. varying vec3 LightDirection; // The eye-space direction of the light. void main() { vec4 Texel = texture2D(DiffuseMap, vec2(gl_TexCoord[0])); // Directional lighting //Build ambient lighting vec4 AmbientElement = gl_LightSource[0].ambient; //Build diffuse lighting float Lambert = max(dot(Normal, LightDirection), 0.0); //max(abs(dot(Normal, LightDirection)), 0.0); vec4 DiffuseElement = ( gl_LightSource[0].diffuse * Lambert ); vec4 LightingColor = ( DiffuseElement + AmbientElement ); LightingColor.r = min(LightingColor.r, 1.0); LightingColor.g = min(LightingColor.g, 1.0); LightingColor.b = min(LightingColor.b, 1.0); LightingColor.a = min(LightingColor.a, 1.0); LightingColor *= Texel; //Everything up to this point is PERFECT // Shadow mapping // ------------------------------ vec4 ShadowCoordinate = LightCoordinate / LightCoordinate.w; float DistanceFromLight = texture2D( ShadowMap, ShadowCoordinate.st ).z; float DepthBias = 0.001; float ShadowFactor = 1.0; if( LightCoordinate.w > 0.0 ) { ShadowFactor = DistanceFromLight < ( ShadowCoordinate.z + DepthBias ) ? 0.5 : 1.0; } LightingColor.rgb *= ShadowFactor; //gl_FragColor = LightingColor; //Yes, I know this is wrong, but the line above (gl_FragColor = LightingColor;) produces the wrong effect gl_FragColor = LightingColor * texture2D( ShadowMap, ShadowCoordinate.st ); } I wanted to make sure the coordinates were correct for the shadow map -- so that's why you see it applied to the image as it is below. But the depth for each point seems to be wrong -- the shadows SHOULD be opposite (look at how the image is -- the shaded areas from normal lighting are facing the opposite direction of the shadows). Maybe my matrices are bad or something going in? They're isolated and appear to be correct -- nothing else is going in unusual. 
When I view from the light's view and get the MVP matrices for it, they're correct. EDIT: Added an image so you can see what happens when I do the correct command at the end of the GLSL: That's the image when the last line is just glFragColor = LightingColor; Maybe someone has some idea of what I screwed up?

    Read the article

  • Problems implementing a screen space shadow ray tracing shader

    - by Grieverheart
    Here I previously asked for the possibility of ray tracing shadows in screen space in a deferred shader. Several problems were pointed out. One of the most important problem is that only visible objects can cast shadows and objects between the camera and the shadow caster can interfere. Still I thought it'd be a fun experiment. The idea is to calculate the view coordinates of pixels and cast a ray to the light. The ray is then traced pixel by pixel to the light and its depth is compared with the depth at the pixel. If a pixel is in front of the ray, a shadow is casted at the original pixel. At first I thought that I could use the DDA algorithm in 2D to calculate the distance 't' (in p = o + t d, where o origin, d direction) to the next pixel and use it in the 3D ray equation to find the ray's z coordinate at that pixel's position. For the 2D ray, I would use the projected and biased 3D ray direction and origin. The idea was that 't' would be the same in both 2D and 3D equations. Unfortunately, this is not the case since the projection matrix is 4D. Thus, some tweak needs to be done to make this work this way. I would like to ask if someone knows of a way to do what I described above, i.e. from a 2D ray in texture coordinate space to get the 3D ray in screen space. I did implement a simple version of the idea which you can see in the following video: video here Shadows may seem a bit pixelated, but that's mostly because of the size of the step in 't' I chose. And here is the shader: #version 330 core uniform sampler2D DepthMap; uniform vec2 projAB; uniform mat4 projectionMatrix; const vec3 light_p = vec3(-30.0, 30.0, -10.0); noperspective in vec2 pass_TexCoord; smooth in vec3 viewRay; layout(location = 0) out float out_AO; vec3 CalcPosition(void){ float depth = texture(DepthMap, pass_TexCoord).r; float linearDepth = projAB.y / (depth - projAB.x); vec3 ray = normalize(viewRay); ray = ray / ray.z; return linearDepth * ray; } void main(void){ vec3 origin = CalcPosition(); if(origin.z < -60) discard; vec2 pixOrigin = pass_TexCoord; //tex coords vec3 dir = normalize(light_p - origin); vec2 texel_size = vec2(1.0 / 600.0); float t = 0.1; ivec2 pixIndex = ivec2(pixOrigin / texel_size); out_AO = 1.0; while(true){ vec3 ray = origin + t * dir; vec4 temp = projectionMatrix * vec4(ray, 1.0); vec2 texCoord = (temp.xy / temp.w) * 0.5 + 0.5; ivec2 newIndex = ivec2(texCoord / texel_size); if(newIndex != pixIndex){ float depth = texture(DepthMap, texCoord).r; float linearDepth = projAB.y / (depth - projAB.x); if(linearDepth > ray.z + 0.1){ out_AO = 0.2; break; } pixIndex = newIndex; } t += 0.5; if(texCoord.x < 0 || texCoord.x > 1.0 || texCoord.y < 0 || texCoord.y > 1.0) break; } } As you can see, here I just increment 't' by some arbitrary factor, calculate the 3D ray and project it to get the pixel coordinates, which is not really optimal. Hopefully, I would like to optimize the code as much as possible and compare it with shadow mapping and how it scales with the number of lights. PS: Keep in mind that I reconstruct position from depth by interpolating rays through a full screen quad.

    Read the article

  • 64-bit Windows CE 5.0 Emulator

    - by FreshCode
    When attempting to run the Windows CE 5.0 emulator on my 64-bit system, I get the following error: "Emulator for Windows CE is incompatible with the host operating system. Emulator for CE will not run on 64-bit host operating systems." How can I run the Windows CE 5.0 Emulator under 64-bit Windows 7 without resorting to a VM?

    Read the article

  • Interpolation and Morphing of an image in LabVIEW and/or OpenCV

    - by Marc
    I am working on an image manipulation problem. I have an overhead projector that projects onto a screen, and I have a camera that takes pictures of that. I can establish a 1:1 correspondence between a subset of projector coordinates and a subset of camera pixels by projecting dots on the screen and finding the centers of mass of the resulting regions on the camera. I thus have a map proj_x, proj_y <-- cam_x, cam_y for scattered point pairs My original plan was to regularize this map using the Mathscript function griddata. This would work fine in MATLAB, as follows [pgridx, pgridy] = meshgrid(allprojxpts, allprojypts) fitcx = griddata (proj_x, proj_y, cam_x, pgridx, pgridy); fitcy = griddata (proj_x, proj_y, cam_y, pgridx, pgridy); and the reverse for the camera to projector mapping Unfortunately, this code causes Labview to run out of memory on the meshgrid step (the camera is 5 megapixels, which apparently is too much for labview to handle) I then started looking through openCV, and found the cvRemap function. Unfortunately, this function takes as its starting point a regularized pixel-pixel map like the one I was trying to generate above. However, it made me hope that functions for creating such a map might be available in openCV. I couldn't find it in the openCV 1.0 API (I am stuck with 1.0 for legacy reasons), but I was hoping it's there or that someone has an easy trick. So my question is one of the following 1) How can I interpolate from scattered points to a grid in openCV; (i.e., given z = f(x,y) for scattered values of x and y, how to fill an image with f(im_x, im_y) ? 2) How can I perform an image transform that maps image 1 to image 2, given that I know a scattered mapping of points in coordinate system 1 to coordinate system 2. This could be implemented either in Labview or OpenCV. Note: I am tagging this post delaunay, because that's one method of doing a scattered interpolation, but the better tag would be "scattered interpolation"

    Read the article

  • Ray-box Intersection Theory

    - by Myx
    Hello: I wish to determine the intersection point between a ray and a box. The box is defined by its min 3D coordinate and max 3D coordinate and the ray is defined by its origin and the direction to which it points. Currently, I am forming a plane for each face of the box and I'm intersecting the ray with the plane. If the ray intersects the plane, then I check whether or not the intersection point is actually on the surface of the box. If so, I check whether it is the closest intersection for this ray and I return the closest intersection. The way I check whether the plane-intersection point is on the box surface itself is through a function bool PointOnBoxFace(R3Point point, R3Point corner1, R3Point corner2) { double min_x = min(corner1.X(), corner2.X()); double max_x = max(corner1.X(), corner2.X()); double min_y = min(corner1.Y(), corner2.Y()); double max_y = max(corner1.Y(), corner2.Y()); double min_z = min(corner1.Z(), corner2.Z()); double max_z = max(corner1.Z(), corner2.Z()); if(point.X() >= min_x && point.X() <= max_x && point.Y() >= min_y && point.Y() <= max_y && point.Z() >= min_z && point.Z() <= max_z) return true; return false; } where corner1 is one corner of the rectangle for that box face and corner2 is the opposite corner. My implementation works most of the time but sometimes it gives me the wrong intersection. I was wondering if the way I'm checking whether the intersection point is on the box is correct or if I should use some other algorithm. Thanks.
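
    Not part of the question, but for comparison, the usual alternative to per-face plane tests is the "slab" method, which clips the ray against the three axis-aligned slab intervals and avoids the point-on-face tolerance issues entirely. A minimal sketch in plain C (names and layout are my own, not from the post):

        /* Hypothetical slab-test helper: o is the ray origin, d its direction,
           bmin/bmax the box corners, and *t_hit receives the entry distance
           along the ray when there is a hit. */
        #include <math.h>

        int ray_box_slab(const double o[3], const double d[3],
                         const double bmin[3], const double bmax[3], double *t_hit)
        {
            double tmin = 0.0;          /* only look forward along the ray */
            double tmax = INFINITY;
            for (int i = 0; i < 3; i++) {
                if (fabs(d[i]) < 1e-12) {
                    /* Ray parallel to this pair of faces: miss unless the origin lies between them. */
                    if (o[i] < bmin[i] || o[i] > bmax[i]) return 0;
                } else {
                    double t1 = (bmin[i] - o[i]) / d[i];
                    double t2 = (bmax[i] - o[i]) / d[i];
                    if (t1 > t2) { double tmp = t1; t1 = t2; t2 = tmp; }
                    if (t1 > tmin) tmin = t1;
                    if (t2 < tmax) tmax = t2;
                    if (tmin > tmax) return 0;   /* the slab intervals no longer overlap */
                }
            }
            *t_hit = tmin;   /* closest intersection in front of the origin */
            return 1;
        }

    If tmin comes out as 0 the origin is inside the box, a case the per-face approach also has to treat specially.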

    Read the article

  • CGContext - PDF margin

    - by Manoj Khaylia
    Hi all, I am showing PDF content in a view using this code from the Quartz sample:
        // PDF page drawing expects a lower-left coordinate system, so we flip the coordinate system
        // before we start drawing.
        CGContextTranslateCTM(context, 0.0, self.bounds.size.height);
        CGContextScaleCTM(context, 1.0, -1.0);
        // Grab the first PDF page
        CGPDFPageRef page = CGPDFDocumentGetPage(pdf, pageNo);
        // We're about to modify the context CTM to draw the PDF page where we want it, so save the graphics state in case we want to do more drawing
        CGContextSaveGState(context);
        // CGPDFPageGetDrawingTransform provides an easy way to get the transform for a PDF page. It will scale down to fit, including any
        // base rotations necessary to display the PDF page correctly.
        CGAffineTransform pdfTransform = CGPDFPageGetDrawingTransform(page, kCGPDFCropBox, self.bounds, 0, true);
        // And apply the transform.
        CGContextConcatCTM(context, pdfTransform);
        // Finally, we draw the page and restore the graphics state for further manipulations!
        CGContextDrawPDFPage(context, page);
        CGContextRestoreGState(context);
    All of this works fine. I want to set the margin for the PDF context; by default it shows a 50 px margin on every side. I have tried CGContext methods but have not found the appropriate one. Can anybody help me with this? Thanks, Monaj
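
    One knob that may be worth experimenting with (my assumption, not something stated in the post) is the target rect handed to CGPDFPageGetDrawingTransform, since the page is scaled to fit that rect: insetting it adds a margin of your own choosing, and outsetting it (a negative inset) scales the page up so that its built-in whitespace falls outside the view. A sketch:

        // Fit the page inside a custom 20 pt margin instead of fitting it to self.bounds directly
        CGRect target = CGRectInset(self.bounds, 20.0, 20.0);
        CGAffineTransform pdfTransform = CGPDFPageGetDrawingTransform(page, kCGPDFCropBox, target, 0, true);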

    Read the article

  • MIPS return address in main

    - by Alexander
    I am confused why in the code below I need to decrement the stack pointer and store the return address again. If I don't do that... then PCSpim keeps on looping.. Why is that? ######################################################################################################################## ### main ######################################################################################################################## .text .globl main main: addi $sp, $sp, -4 # Make space on stack sw $ra, 0($sp) # Save return address # Start test 1 ############################################################ la $a0, asize1 # 1st parameter: address of asize1[0] la $a1, frame1 # 2nd parameter: address of frame1[0] la $a2, window1 # 3rd parameter: address of window1[0] jal vbsme # call function # Printing $v0 add $a0, $v0, $zero # Load $v0 for printing li $v0, 1 # Load the system call numbers syscall # Print newline. la $a0, newline # Load value for printing li $v0, 4 # Load the system call numbers syscall # Printing $v1 add $a0, $v1, $zero # Load $v1 for printing li $v0, 1 # Load the system call numbers syscall # Print newline. la $a0, newline # Load value for printing li $v0, 4 # Load the system call numbers syscall # Print newline. la $a0, newline # Load value for printing li $v0, 4 # Load the system call numbers syscall ############################################################ # End of test 1 lw $ra, 0($sp) # Restore return address addi $sp, $sp, 4 # Restore stack pointer jr $ra # Return ######################################################################################################################## ### vbsme ######################################################################################################################## #.text .globl vbsme vbsme: addi $sp, $sp, -4 # create space on the stack pointer sw $ra, 0($sp) # save return address exit: add $v1, $t5, $zero # (v1) x coordinate of the block in the frame with the minimum SAD add $v0, $t4, $zero # (v0) y coordinate of the block in the frame with the minimum SAD lw $ra, 0($sp) # restore return address addi $sp, $sp, 4 # restore stack pointer jr $ra # return If I delete: addi $sp, $sp, -4 # create space on the stack pointer sw $ra, 0($sp) # save return address and lw $ra, 0($sp) # restore return address addi $sp, $sp, 4 # restore stack pointer on vbsme: PCSpim keeps on running... Why??? I shouldn't have to increment/decrement the stack pointer on vbsme and then do the jr again right? The jal in main is supposed to handle that

    Read the article

  • Objective C / iPhone comparing 2 CLLocations /GPS coordinates

    - by user289503
    I have an app that finds your GPS location successfully, but I need to be able to compare that fix with a list of GPS locations; if both are the same, then you get a bonus. I thought I had it working, but it seems not. I have 'newLocation' as the location where you are. I think the problem is that I need to be able to separate the longitude and latitude data of newLocation. So far I've tried this: Code:
        NSString *latitudeVar = [[NSString alloc] initWithFormat:@"%g°", newLocation.coordinate.latitude];
        NSString *longitudeVar = [[NSString alloc] initWithFormat:@"%g°", newLocation.coordinate.longitude];
    An example of the list of GPS locations: Code:
        location:(CLLocation*)newLocation;
        CLLocationCoordinate2D bonusOne;
        bonusOne.latitude = 37.331689;
        bonusOne.longitude = -122.030731;
    and then Code:
        if (latitudeVar == bonusOne.latitude && longitudeVar == bonusOne.longitude) {
            UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"infinite loop firday" message:@"infloop" delegate:nil cancelButtonTitle:@"Stinky" otherButtonTitles:nil];
            [alert show];
            [alert release];
    This comes up with the error 'invalid operands to binary == have struct NSString and CLLocationDegrees'. Any thoughts?
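
    As a side note on the comparison itself (my own sketch, written in plain C rather than the poster's Objective-C): coordinate.latitude and coordinate.longitude are doubles, so the check wants to be numeric with a tolerance rather than a == between a formatted string and a number, along the lines of:

        #include <math.h>

        /* Treat two fixes as "the same place" if they are within ~50 m of each other
           (0.0005 degrees of latitude); the threshold here is an arbitrary choice. */
        int sameSpot(double lat1, double lon1, double lat2, double lon2) {
            const double eps = 0.0005;
            return fabs(lat1 - lat2) < eps && fabs(lon1 - lon2) < eps;
        }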

    Read the article

  • How to parse a PDF document on iPhone or iPad?

    - by JohnWhite
    I have implemented a PDF parsing application for iPad, but the content of the PDF does not display clearly. I don't know why this is happening. Can you help me with this problem? Edit: I have used the code below for PDF parsing, but the text and images are not clear.
        - (id)initWithFrame:(CGRect)frame {
            if (self = [super initWithFrame:frame]) {
                // Initialization code
                /* Demo PDF printed directly from Wikipedia without permission; all content (c) respective owners */
                CFURLRef pdfURL = CFBundleCopyResourceURL(CFBundleGetMainBundle(), CFSTR("2008_11_mp.pdf"), NULL, NULL);
                pdf = CGPDFDocumentCreateWithURL((CFURLRef)pdfURL);
                CFRelease(pdfURL);
                counts += 1;
                self.pageNumber = counts;
                self.backgroundColor = nil;
                self.opaque = NO;
                self.userInteractionEnabled = NO;
            }
            return self;
        }
        - (void)drawRect:(CGRect)rect {
            // Drawing code
            CGContextRef context = UIGraphicsGetCurrentContext();
            // PDF page drawing expects a lower-left coordinate system, so we flip the coordinate system
            // before we start drawing.
            // Grab the first PDF page
            CGPDFPageRef page = CGPDFDocumentGetPage(pdf, pageNumber);
            CGContextTranslateCTM(context, 0, self.bounds.size.height);
            CGContextScaleCTM(context, 1, -1);
            // We're about to modify the context CTM to draw the PDF page where we want it, so save the graphics state in case we want to do more drawing
            CGContextSaveGState(context);
            // CGPDFPageGetDrawingTransform provides an easy way to get the transform for a PDF page. It will scale down to fit, including any
            // base rotations necessary to display the PDF page correctly.
            CGAffineTransform pdfTransform = CGPDFPageGetDrawingTransform(page, kCGPDFMediaBox, self.bounds, 0, true);
            // And apply the transform.
            CGContextConcatCTM(context, pdfTransform);
            // Finally, we draw the page and restore the graphics state for further manipulations!
            CGContextDrawPDFPage(context, page);
            CGContextRestoreGState(context);
        }
    Please help me solve this problem.

    Read the article

  • What exactly is NoSQL?

    - by bodacydo
    What exactly is NoSQL? Is it database systems that only work with {key:value} pairs? As far as I know, MemCache is one such system; am I right? What other popular NoSQL databases are there, and where exactly are they useful? Thanks, Boda Cydo.

    Read the article

  • Positioning SVG Elements

    - by Rob Wilkerson
    In the course of toying with SVG for the first time (using the Raphael library), I've run into a problem positioning dynamic elements on the canvas in such a way that they're completely contained within the canvas. What I'm trying to do is randomly position n words/short phrases. Since the text is variable, its position needs to be variable as well so what I'm doing is: Initially creating the text at point 0,0 with no opacity. Checking the width of the drawn text element using text.getBBox().width. Setting a new x coordinate as Math.random() * (canvas_width - ( text_width/2 ) - pad). Altering the x coordinate of the text to the newly set value (text.attr( 'x', x ) ). Setting the opacity attribute of the text to 1. I'll be the first to admit that my math acumen is limited, but this seems pretty straightforward. Somehow, I still end up with text running off beyond the right edge of my canvas. For simplicity above, I removed the bit that also sets a minimum x value by adding it to the Math.random() result. It is there, though, and I see the same problem on the leading edge of the canvas. My understanding (such as it is), is that the Math.random() bits would generate a number between 0 and 1 which could then be multiplied by some number (in my case, the canvas width - half of the text width - some arbitrary padding) to get the outer bound. I'm dividing the width of the text in half because its position on the grid is set at its center. I hope I've just been staring at this for too long, but is my math that rusty or am I misunderstanding something about the behavior of Math.random(), SVG, text or anything else that's under the hood of this solution?
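
    For what it's worth (my own back-of-the-envelope, not from the question): with a center-anchored element of width w and padding pad on both sides, the center must land in [pad + w/2, canvas_width - pad - w/2], i.e. x = pad + w/2 + Math.random() * (canvas_width - w - 2*pad). If a minimum offset is simply added on top of Math.random() * (canvas_width - text_width/2 - pad) without shrinking the multiplier by the same amount, the upper end overshoots by exactly that minimum, which matches the symptom of text spilling past the right edge.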

    Read the article

  • How does _mm_mwait work?

    - by osgx
    Hello. How does _mm_mwait from pmmintrin.h work? (I mean not the asm for it, but the action, and how this action is carried out on NUMA systems. Store monitoring is easy to implement only on bus-based SMP systems with bus snooping.) Which processors implement it? Is it used in some spinlocks?
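
    Not an answer from the thread, but a minimal sketch of the usual MONITOR/MWAIT idiom may make the intent clearer (assuming code that is actually allowed to execute these instructions; on most processors they are privileged, which is why they show up in OS idle loops rather than in user-space spinlocks):

        #include <pmmintrin.h>

        void wait_for_store(volatile int *flag) {
            while (*flag == 0) {
                _mm_monitor((const void *)flag, 0, 0); /* arm monitoring of the flag's cache line */
                if (*flag != 0)                        /* re-check: a store may have landed already */
                    break;
                _mm_mwait(0, 0);                       /* stop until that line is written (or an interrupt) */
            }
        }

    The monitoring rides on the cache-coherence protocol (a write by another core invalidates the monitored line and wakes the waiter), so it does not depend on a shared snooped bus; as far as I understand it, that is also why it works on cache-coherent NUMA machines, though that last point is my reading rather than something stated in the question.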

    Read the article

  • Cocos2D: Problem Rotating a CCMenu

    - by srikanth rongali
    I am trying to run actions over menu items, but the actions are not running as expected. I think the code below should make the menu item rotate by 90 degrees, but when I run it, the menu item translates from its coordinates to another coordinate and then returns to its original coordinate. The complete translation takes 3 seconds. What I need is for the menu item to rotate by 90 degrees in place over a 3-second duration. Please explain where I have gone wrong.
        CCMenuItemImage *targetE; // Globally declared
        CCMenu *menu;             // Globally declared
        -(id)init {
            if( (self = [super init]) ) {
                isTouchEnabled = YES;
                CGSize windowSize = [[CCDirector sharedDirector] winSize];
                targetE = [CCMenuItemImage itemFromNormalImage:@"grossinis_sister1.png" selectedImage:@"grossinis_sister1.png" target:self selector:@selector(touch:)];
                menu = [CCMenu menuWithItems:targetE, nil];
                id action4 = [CCRotateBy actionWithDuration:3.0 angle:90];
                [menu runAction:[CCSequence actions:action4, nil]];
                menu.position = ccp(windowSize.width/2 + 200, windowSize.height/2);
                [self addChild:menu z:10];
            }
            return self;
        }
        @end
    Thank you.

    Read the article

  • Distance between long/lat coordinates using SQLite

    - by munchine
    I've got an SQLite db with the longitude and latitude of shops, and I want to find the closest 5 shops. The following code works fine:
        if (sqlite3_prepare_v2(db, sqlStatement, -1, &compiledStatement, NULL) == SQLITE_OK) {
            while (sqlite3_step(compiledStatement) == SQLITE_ROW) {
                NSString *branchStr = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 0)];
                NSNumber *fLat = [NSNumber numberWithFloat:(float)sqlite3_column_double(compiledStatement, 1)];
                NSNumber *fLong = [NSNumber numberWithFloat:(float)sqlite3_column_double(compiledStatement, 2)];
                NSLog(@"Address %@, Lat = %@, Long = %@", branchStr, fLat, fLong);
                CLLocation *location1 = [[CLLocation alloc] initWithLatitude:currentLocation.coordinate.latitude longitude:currentLocation.coordinate.longitude];
                CLLocation *location2 = [[CLLocation alloc] initWithLatitude:[fLat floatValue] longitude:[fLong floatValue]];
                NSLog(@"Distance in meters: %f", [location1 getDistanceFrom:location2]);
                [location1 release];
                [location2 release];
            }
        }
    I know the distance from where I am to each shop. My question is: is it better to put the distance back into the SQLite row? I have the row while I step through the database. How do I do that? Do I use the UPDATE statement? Does someone have a piece of code to help me? Alternatively, I can read the rows into an array and then sort the array. Do you recommend this over the above approach? Is it more efficient? Finally, if someone has a better way to get the closest 5 shops, I'd love to hear it.
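
    If the write-back route is chosen, the statement the post asks about would look something like this (a sketch under assumptions of my own: the table is named shops, it already has a REAL column named distance, and the SELECT also fetched the rowid):

        const char *updateSql = "UPDATE shops SET distance = ? WHERE rowid = ?;";
        sqlite3_stmt *updateStmt;
        if (sqlite3_prepare_v2(db, updateSql, -1, &updateStmt, NULL) == SQLITE_OK) {
            sqlite3_bind_double(updateStmt, 1, meters);   /* value obtained from getDistanceFrom: */
            sqlite3_bind_int64(updateStmt, 2, rowid);
            sqlite3_step(updateStmt);
            sqlite3_finalize(updateStmt);
        }

    That said, the distance changes every time the user moves, so many apps skip persisting it and simply sort the in-memory array (or feed a computed expression to ORDER BY) and take the first five rows.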

    Read the article

  • How to export bind and keyframe bone poses from Blender for use in OpenGL

    - by SaldaVonSchwartz
    EDIT: I decided to reformulate the question in much simpler terms to see if someone can give me a hand with this. Basically, I'm exporting meshes, skeletons and actions from blender into an engine of sorts that I'm working on. But I'm getting the animations wrong. I can tell the basic motion paths are being followed but there's always an axis of translation or rotation which is wrong. I think the problem is most likely not in my engine code (OpenGL-based) but rather in either my misunderstanding of some part of the theory behind skeletal animation / skinning or the way I am exporting the appropriate joint matrices from blender in my exporter script. I'll explain the theory, the engine animation system and my blender export script, hoping someone might catch the error in either or all of these. The theory: (I'm using column-major ordering since that's what I use in the engine cause it's OpenGL-based) Assume I have a mesh made up of a single vertex v, along with a transformation matrix M which takes the vertex v from the mesh's local space to world space. That is, if I was to render the mesh without a skeleton, the final position would be gl_Position = ProjectionMatrix * M * v. Now assume I have a skeleton with a single joint j in bind / rest pose. j is actually another matrix. A transform from j's local space to its parent space which I'll denote Bj. if j was part of a joint hierarchy in the skeleton, Bj would take from j space to j-1 space (that is to its parent space). However, in this example j is the only joint, so Bj takes from j space to world space, like M does for v. Now further assume I have a a set of frames, each with a second transform Cj, which works the same as Bj only that for a different, arbitrary spatial configuration of join j. Cj still takes vertices from j space to world space but j is rotated and/or translated and/or scaled. Given the above, in order to skin vertex v at keyframe n. I need to: take v from world space to joint j space modify j (while v stays fixed in j space and is thus taken along in the transformation) take v back from the modified j space to world space So the mathematical implementation of the above would be: v' = Cj * Bj^-1 * v. Actually, I have one doubt here.. I said the mesh to which v belongs has a transform M which takes from model space to world space. And I've also read in a couple textbooks that it needs to be transformed from model space to joint space. But I also said in 1 that v needs to be transformed from world to joint space. So basically I'm not sure if I need to do v' = Cj * Bj^-1 * v or v' = Cj * Bj^-1 * M * v. Right now my implementation multiples v' by M and not v. But I've tried changing this and it just screws things up in a different way cause there's something else wrong. Finally, If we wanted to skin a vertex to a joint j1 which in turn is a child of a joint j0, Bj1 would be Bj0 * Bj1 and Cj1 would be Cj0 * Cj1. But Since skinning is defined as v' = Cj * Bj^-1 * v , Bj1^-1 would be the reverse concatenation of the inverses making up the original product. That is, v' = Cj0 * Cj1 * Bj1^-1 * Bj0^-1 * v Now on to the implementation (Blender side): Assume the following mesh made up of 1 cube, whose vertices are bound to a single joint in a single-joint skeleton: Assume also there's a 60-frame, 3-keyframe animation at 60 fps. The animation essentially is: keyframe 0: the joint is in bind / rest pose (the way you see it in the image). 
keyframe 30: the joint translates up (+z in blender) some amount and at the same time rotates pi/4 rad clockwise. keyframe 59: the joint goes back to the same configuration it was in keyframe 0. My first source of confusion on the blender side is its coordinate system (as opposed to OpenGL's default) and the different matrices accessible through the python api. Right now, this is what my export script does about translating blender's coordinate system to OpenGL's standard system: # World transform: Blender -> OpenGL worldTransform = Matrix().Identity(4) worldTransform *= Matrix.Scale(-1, 4, (0,0,1)) worldTransform *= Matrix.Rotation(radians(90), 4, "X") # Mesh (local) transform matrix file.write('Mesh Transform:\n') localTransform = mesh.matrix_local.copy() localTransform = worldTransform * localTransform for col in localTransform.col: file.write('{:9f} {:9f} {:9f} {:9f}\n'.format(col[0], col[1], col[2], col[3])) file.write('\n') So if you will, my "world" matrix is basically the act of changing blenders coordinate system to the default GL one with +y up, +x right and -z into the viewing volume. Then I also premultiply (in the sense that it's done by the time we reach the engine, not in the sense of post or pre in terms of matrix multiplication order) the mesh matrix M so that I don't need to multiply it again once per draw call in the engine. About the possible matrices to extract from Blender joints (bones in Blender parlance), I'm doing the following: For joint bind poses: def DFSJointTraversal(file, skeleton, jointList): for joint in jointList: bindPoseJoint = skeleton.data.bones[joint.name] bindPoseTransform = bindPoseJoint.matrix_local.inverted() file.write('Joint ' + joint.name + ' Transform {\n') translationV = bindPoseTransform.to_translation() rotationQ = bindPoseTransform.to_3x3().to_quaternion() scaleV = bindPoseTransform.to_scale() file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2])) file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0])) file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2])) DFSJointTraversal(file, skeleton, joint.children) file.write('}\n') Note that I'm actually grabbing the inverse of what I think is the bind pose transform Bj. This is so I don't need to invert it in the engine. Also note I went for matrix_local, assuming this is Bj. The other option is plain "matrix", which as far as I can tell is the same only that not homogeneous. For joint current / keyframe poses: for kfIndex in keyframes: bpy.context.scene.frame_set(kfIndex) file.write('keyframe: {:d}\n'.format(int(kfIndex))) for i in range(0, len(skeleton.data.bones)): file.write('joint: {:d}\n'.format(i)) currentPoseJoint = skeleton.pose.bones[i] currentPoseTransform = currentPoseJoint.matrix translationV = currentPoseTransform.to_translation() rotationQ = currentPoseTransform.to_3x3().to_quaternion() scaleV = currentPoseTransform.to_scale() file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2])) file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0])) file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2])) file.write('\n') Note that here I go for skeleton.pose.bones instead of data.bones and that I have a choice of 3 matrices: matrix, matrix_basis and matrix_channel. 
From the descriptions in the python API docs I'm not super clear which one I should choose, though I think it's the plain matrix. Also note I do not invert the matrix in this case. The implementation (Engine / OpenGL side): My animation subsystem does the following on each update (I'm omitting parts of the update loop where it's figured out which objects need update and time is hardcoded here for simplicity): static double time = 0; time = fmod((time + elapsedTime),1.); uint16_t LERPKeyframeNumber = 60 * time; uint16_t lkeyframeNumber = 0; uint16_t lkeyframeIndex = 0; uint16_t rkeyframeNumber = 0; uint16_t rkeyframeIndex = 0; for (int i = 0; i < aClip.keyframesCount; i++) { uint16_t keyframeNumber = aClip.keyframes[i].number; if (keyframeNumber <= LERPKeyframeNumber) { lkeyframeIndex = i; lkeyframeNumber = keyframeNumber; } else { rkeyframeIndex = i; rkeyframeNumber = keyframeNumber; break; } } double lTime = lkeyframeNumber / 60.; double rTime = rkeyframeNumber / 60.; double blendFactor = (time - lTime) / (rTime - lTime); GLKMatrix4 bindPosePalette[aSkeleton.jointsCount]; GLKMatrix4 currentPosePalette[aSkeleton.jointsCount]; for (int i = 0; i < aSkeleton.jointsCount; i++) { F3DETQSType& lPose = aClip.keyframes[lkeyframeIndex].skeletonPose.joints[i]; F3DETQSType& rPose = aClip.keyframes[rkeyframeIndex].skeletonPose.joints[i]; GLKVector3 LERPTranslation = GLKVector3Lerp(lPose.t, rPose.t, blendFactor); GLKQuaternion SLERPRotation = GLKQuaternionSlerp(lPose.q, rPose.q, blendFactor); GLKVector3 LERPScaling = GLKVector3Lerp(lPose.s, rPose.s, blendFactor); GLKMatrix4 currentTransform = GLKMatrix4MakeWithQuaternion(SLERPRotation); currentTransform = GLKMatrix4TranslateWithVector3(currentTransform, LERPTranslation); currentTransform = GLKMatrix4ScaleWithVector3(currentTransform, LERPScaling); GLKMatrix4 inverseBindTransform = GLKMatrix4MakeWithQuaternion(aSkeleton.joints[i].inverseBindTransform.q); inverseBindTransform = GLKMatrix4TranslateWithVector3(inverseBindTransform, aSkeleton.joints[i].inverseBindTransform.t); inverseBindTransform = GLKMatrix4ScaleWithVector3(inverseBindTransform, aSkeleton.joints[i].inverseBindTransform.s); if (aSkeleton.joints[i].parentIndex == -1) { bindPosePalette[i] = inverseBindTransform; currentPosePalette[i] = currentTransform; } else { bindPosePalette[i] = GLKMatrix4Multiply(inverseBindTransform, bindPosePalette[aSkeleton.joints[i].parentIndex]); currentPosePalette[i] = GLKMatrix4Multiply(currentPosePalette[aSkeleton.joints[i].parentIndex], currentTransform); } aSkeleton.skinningPalette[i] = GLKMatrix4Multiply(currentPosePalette[i], bindPosePalette[i]); } Finally, this is my vertex shader: #version 100 uniform mat4 modelMatrix; uniform mat3 normalMatrix; uniform mat4 projectionMatrix; uniform mat4 skinningPalette[6]; uniform lowp float skinningEnabled; attribute vec4 position; attribute vec3 normal; attribute vec2 tCoordinates; attribute vec4 jointsWeights; attribute vec4 jointsIndices; varying highp vec2 tCoordinatesVarying; varying highp float lIntensity; void main() { tCoordinatesVarying = tCoordinates; vec4 skinnedVertexPosition = vec4(0.); for (int i = 0; i < 4; i++) { skinnedVertexPosition += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * position; } vec4 skinnedNormal = vec4(0.); for (int i = 0; i < 4; i++) { skinnedNormal += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * vec4(normal, 0.); } vec4 finalPosition = mix(position, skinnedVertexPosition, skinningEnabled); vec4 finalNormal = mix(vec4(normal, 0.), skinnedNormal, 
skinningEnabled); vec3 eyeNormal = normalize(normalMatrix * finalNormal.xyz); vec3 lightPosition = vec3(0., 0., 2.); lIntensity = max(0.0, dot(eyeNormal, normalize(lightPosition))); gl_Position = projectionMatrix * modelMatrix * finalPosition; } The result is that the animation displays wrong in terms of orientation. That is, instead of bobbing up and down it bobs in and out (along what I think is the Z axis according to my transform in the export clip). And the rotation angle is counterclockwise instead of clockwise. If I try with a more than one joint, then it's almost as if the second joint rotates in it's own different coordinate space and does not follow 100% its parent's transform. Which I assume it should from my animation subsystem which I assume in turn follows the theory I explained for the case of more than one joint. Any thoughts?

    Read the article

  • What is the relation between database books by Ullman et al.?

    - by macias
    A First Course in Database Systems by Jeffrey D. Ullman, Jennifer Widom (Amazon links)
    Database System Implementation by Hector Garcia-Molina, Jeffrey D. Ullman, Jennifer D. Widom
    Database Systems: The Complete Book by Hector Garcia-Molina, Jeffrey D. Ullman, Jennifer Widom
    As far as I know the second one is the second "part" of the first one. But what about the third one -- is it just the first and second published in one volume? I would like to buy them, but I don't want to get redundant reading. Thank you in advance for clarification.

    Read the article

  • iPhone: addAnnotation not working when called from another view

    - by Nic Hubbard
    I have two views. The first view has an MKMapView on it named ridesMap. The second view is just a view with a UITableView in it. When you click the save button in the second view, it calls a method from the first view:
        // Get my first view's class
        MyRidesMapViewController *rideMapView = [[MyRidesMapViewController alloc] init];
        // Call the method from my first view's class that removes an annotation
        [rideMapView addAnno:newRidePlacemark.coordinate withTitle:rideTitle.text withSubTitle:address];
    This correctly calls the addAnno method, which looks like:
        - (void)addAnno:(CLLocationCoordinate2D)anno withTitle:(NSString *)annoTitle withSubTitle:(NSString *)subTitle {
            Annotation *ano = [[[Annotation alloc] init] autorelease];
            ano.coordinate = anno;
            ano.title = annoTitle;
            ano.subtitle = subTitle;
            if ([ano conformsToProtocol:@protocol(MKAnnotation)]) {
                NSLog(@"YES IT DOES!!!");
            }
            [ridesMap addAnnotation:ano];
        } // end addAnno
    This method creates an annotation which does conform to MKAnnotation, and it is supposed to add that annotation to the map using the addAnnotation method. But the annotation never gets added. I NEVER get any errors when the annotation does not get added, but it never appears when the method is called. Why would this be? It seems that I have done everything correctly, and that I am passing a correct MKAnnotation to the addAnnotation method. So I don't get why it never drops a pin. Could it be because I am calling this method from another view? Why would that matter?

    Read the article

  • Longitude and latitude for current location returned from CLLocationManager in UK region is not correct

    - by bond
    Hi, I am getting the latitude and longitude of the current location from the CLLocationManager delegate method. It works fine in some regions but gives problems in the UK: when used there, the current-location longitude and latitude returned from CLLocationManager are not correct. Thanks. Here's part of the logic I am using:
        -(void)viewWillAppear:(BOOL)animated {
            [super viewWillAppear:animated];
            self.locationManager = [[CLLocationManager alloc] init];
            if (self.locationManager.locationServicesEnabled) {
                self.locationManager.delegate = self;
                self.locationManager.distanceFilter = kCLDistanceFilterNone;
                self.locationManager.desiredAccuracy = kCLLocationAccuracyBest;
            }
        }
        -(void)locationManager:(CLLocationManager *)manager didUpdateToLocation:(CLLocation *)newLocation fromLocation:(CLLocation *)oldLocation {
            NSLog(@"Updating");
            // if the time interval returned from core location is more than 15 seconds
            // we ignore it because it might be from an old session
            if (abs([newLocation.timestamp timeIntervalSinceDate:[NSDate date]]) < 15) {
                self.latitude = newLocation.coordinate.latitude;
                self.longitude = newLocation.coordinate.longitude;
                NSLog(@"longitude=%f-- latitude=%f--", self.longitude, self.latitude);
                [self.locationManager stopUpdatingLocation];
                [self removeActivityIndicator];
                [[self locationSaved] setHidden:NO];
                [[self viewLocationInMap] setEnabled:YES];
            }
        }

    Read the article
