Search Results

Search found 779 results on 32 pages for 'coordinate'.


  • Announcing SonicAgile – An Agile Project Management Solution

    - by Stephen.Walther
    I’m happy to announce the public release of SonicAgile – an online tool for managing software projects. You can register for SonicAgile at www.SonicAgile.com and start using it with your team today. SonicAgile is an agile project management solution designed to help teams of developers coordinate their work on software projects. SonicAgile supports creating backlogs, scrumboards, and burndown charts. It includes support for acceptance criteria, story estimation, calculating team velocity, and email integration. In short, SonicAgile includes all of the tools that you need to coordinate work on a software project, get stuff done, and build great software. Let me discuss each of the features of SonicAgile in more detail.

    SonicAgile Backlog
    You use the backlog to create a prioritized list of user stories such as features, bugs, and change requests. Basically, all future work planned for a product should be captured in the backlog. We focused our attention on designing the user interface for the backlog. Because the main function of the backlog is to prioritize stories, we made it easy to prioritize a story by simply dragging and dropping the story from one location to another. We also wanted to make it easy to add stories from the product backlog to a sprint backlog. A sprint backlog contains the stories that you plan to complete during a particular sprint. To add a story to a sprint, you just drag the story from the product backlog to the sprint backlog. Finally, we made it easy to track team velocity – the average amount of work that your team completes in each sprint. Your team's average velocity is displayed in the backlog, and when you add too many stories to a sprint (in other words, when you attempt to take on too much work) you are warned automatically.

    SonicAgile Scrumboard
    Every workday, your team meets to have their daily scrum. During the daily scrum, you can use the SonicAgile Scrumboard to see, at a glance, what everyone on the team is working on. For example, a scrumboard might show that Stephen is working on the Fix Gravatar Bug story while Pete and Jane have finished working on the Product Details Page story. Every story can be broken into tasks. For example, to create the Product Details Page, you might need to create database objects, do page design, and create an MVC controller. You can use the Scrumboard to track the state of each task. A story can also have acceptance criteria, which clarify the requirements for the story to be done. You cannot close a story, and remove it from the list of active stories on the scrumboard, until all tasks and acceptance criteria associated with the story are done.

    SonicAgile Burndown Charts
    You can use burndown charts to track your team's progress. SonicAgile supports Release Burndown, Sprint Burndown by Task Estimates, and Sprint Burndown by Story Points charts. In a Sprint Burndown by Story Points chart, the downward slope shows the progress of the team when closing stories: the vertical axis represents story points and the horizontal axis represents time.

    Email Integration
    SonicAgile was designed to improve your team's communication and collaboration. Most stories and tasks require discussion to nail down exactly what work needs to be done. The most natural way to discuss stories and tasks is through email. However, you don't want these discussions to get lost. When you use SonicAgile, all email discussions concerning a story or a task (including all email attachments) are captured automatically. At any time in the future, you can view all of the email discussion concerning a story or a task by opening the Story Details dialog.

    Why We Built SonicAgile
    We built SonicAgile because we needed it for our team. Our consulting company, Superexpert, builds websites for financial services, startups, and large corporations. We have multiple teams working on multiple projects. Keeping on top of all of the work that needs to be done to complete a software project is challenging. You need a good sense of what needs to be done, who is doing it, and when the work will be done. We built SonicAgile because we wanted a lightweight project management tool that we could use to coordinate the work that our team performs on software projects.

    How We Built SonicAgile
    We wanted SonicAgile to be easy to use, highly scalable, and have a highly interactive client interface. SonicAgile is very close to being a pure Ajax application. We built SonicAgile using ASP.NET MVC 3, jQuery, and Knockout. We would not have been able to build such a complex Ajax application without these technologies. Almost all of our MVC controller actions return JSON results (while developing SonicAgile, I would have given my left arm to be able to use the new ASP.NET Web API). The controller actions are invoked from jQuery Ajax calls from the browser. We built SonicAgile on Windows Azure. We are taking advantage of SQL Azure, Table Storage, and Blob Storage. Windows Azure enables us to scale very quickly to handle whatever demand is thrown at us.

    Summary
    I hope that you will try SonicAgile. You can register at www.SonicAgile.com (there's a free 30-day trial). The goal of SonicAgile is to make it easier for teams to get more stuff done, work better together, and build amazing software. Let us know what you think!
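
    To make the architecture described above concrete, here is a minimal sketch of the JSON-over-Ajax pattern: an ASP.NET MVC 3 action plus the matching jQuery call. The controller, action, URL, and helper names are hypothetical, not SonicAgile's actual code.

        // C#: an MVC 3 controller action returning JSON.
        public class StoriesController : Controller
        {
            public ActionResult Backlog(int projectId)
            {
                var stories = GetStoriesFor(projectId); // hypothetical data-access helper
                return Json(stories, JsonRequestBehavior.AllowGet);
            }
        }

        // JavaScript: the jQuery Ajax call from the browser.
        $.getJSON("/Stories/Backlog", { projectId: 42 }, function (stories) {
            viewModel.stories(stories); // e.g. push into a Knockout observable array
        });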

    Read the article

  • Render To Texture Using OpenGL is not working but normal rendering works just fine

    - by Franky Rivera
    Things I initialize at the beginning of the program (I realize not all of these pertain to my issue; I just copied and pasted what I had):

        //overall initialization
        //things openGL related I initialize earlier on in the project
        glClearColor( 0.0f, 0.0f, 0.0f, 1.0f );
        glClearDepth( 1.0f );
        glEnable(GL_ALPHA_TEST);
        glEnable( GL_STENCIL_TEST );
        glEnable(GL_DEPTH_TEST);
        glDepthFunc( GL_LEQUAL );
        glEnable(GL_CULL_FACE);
        glFrontFace( GL_CCW );
        glEnable(GL_COLOR_MATERIAL);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glHint( GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST );

        //we also initialize our shader programs
        //(I added some shader program functions for definitions)
        //this enum list is elsewhere in the code; I figured it would help show you more about my
        //shader compile/creation function right under this enum list
        /*enum eSHADER_ATTRIB_LOCATION
        {
            VERTEX_ATTRIB = 0,
            NORMAL_ATTRIB = 2,
            COLOR_ATTRIB,
            COLOR2_ATTRIB,
            FOG_COORD,
            TEXTURE_COORD_ATTRIB0 = 8,
            TEXTURE_COORD_ATTRIB1,
            TEXTURE_COORD_ATTRIB2,
            TEXTURE_COORD_ATTRIB3,
            TEXTURE_COORD_ATTRIB4,
            TEXTURE_COORD_ATTRIB5,
            TEXTURE_COORD_ATTRIB6,
            TEXTURE_COORD_ATTRIB7
        }; */

        //if we fail making our shader, leave
        if( !testShader.CreateShader( "SimpleShader.vp", "SimpleShader.fp", 3,
                VERTEX_ATTRIB, "vVertexPos",
                NORMAL_ATTRIB, "vNormal",
                TEXTURE_COORD_ATTRIB0, "vTexCoord" ) )
            return false;

        if( !testScreenShader.CreateShader( "ScreenShader.vp", "ScreenShader.fp", 3,
                VERTEX_ATTRIB, "vVertexPos",
                NORMAL_ATTRIB, "vNormal",
                TEXTURE_COORD_ATTRIB0, "vTexCoord" ) )
            return false;

    SHADER PROGRAM FUNCTIONS

        bool CShaderProgram::CreateShader( const char* szVertexShaderName, const char* szFragmentShaderName, ... )
        {
            //here are our handles for the openGL shaders
            int iGLVertexShaderHandle = -1, iGLFragmentShaderHandle = -1;

            //get our shader data
            char *vData = 0, *fData = 0;
            int vLength = 0, fLength = 0;
            LoadShaderFile( szVertexShaderName, &vData, &vLength );
            LoadShaderFile( szFragmentShaderName, &fData, &fLength );

            //data
            if( !vData )
                return false;
            //data
            if( !fData )
            {
                delete[] vData;
                return false;
            }

            //create both our shader objects
            iGLVertexShaderHandle = glCreateShader( GL_VERTEX_SHADER );
            iGLFragmentShaderHandle = glCreateShader( GL_FRAGMENT_SHADER );

            //well, we got this far, so we have dynamic data to clean up
            //load vertex shader
            glShaderSource( iGLVertexShaderHandle, 1, (const char**)(&vData), &vLength );
            //load fragment shader
            glShaderSource( iGLFragmentShaderHandle, 1, (const char**)(&fData), &fLength );

            //we are done with our data, delete it
            delete[] vData;
            delete[] fData;

            //compile them both
            glCompileShader( iGLVertexShaderHandle );

            //get shader status
            int iShaderOk;
            glGetShaderiv( iGLVertexShaderHandle, GL_COMPILE_STATUS, &iShaderOk );
            if( iShaderOk == GL_FALSE )
            {
                char* buffer;
                //get what happened with our shader
                glGetShaderiv( iGLVertexShaderHandle, GL_INFO_LOG_LENGTH, &iShaderOk );
                buffer = new char[iShaderOk];
                glGetShaderInfoLog( iGLVertexShaderHandle, iShaderOk, NULL, buffer );
                //sprintf_s( buffer, "Failure Our Object For %s was not created", szFileName );
                MessageBoxA( NULL, buffer, szVertexShaderName, MB_OK );
                //delete our dynamic data
                free( buffer );
                glDeleteShader(iGLVertexShaderHandle);
                return false;
            }

            glCompileShader( iGLFragmentShaderHandle );
            //get shader status
            glGetShaderiv( iGLFragmentShaderHandle, GL_COMPILE_STATUS, &iShaderOk );
            if( iShaderOk == GL_FALSE )
            {
                char* buffer;
                //get what happened with our shader
                glGetShaderiv( iGLFragmentShaderHandle, GL_INFO_LOG_LENGTH, &iShaderOk );
                buffer = new char[iShaderOk];
                glGetShaderInfoLog( iGLFragmentShaderHandle, iShaderOk, NULL, buffer );
                //sprintf_s( buffer, "Failure Our Object For %s was not created", szFileName );
                MessageBoxA( NULL, buffer, szFragmentShaderName, MB_OK );
                //delete our dynamic data
                free( buffer );
                glDeleteShader(iGLFragmentShaderHandle);
                return false;
            }

            //let's check to see if the vertex shader compiled
            int iCompiled = 0;
            glGetShaderiv( iGLVertexShaderHandle, GL_COMPILE_STATUS, &iCompiled );
            if( !iCompiled )
            {
                //this shader did not compile, leave
                return false;
            }

            //let's check to see if the fragment shader compiled
            glGetShaderiv( iGLFragmentShaderHandle, GL_COMPILE_STATUS, &iCompiled );
            if( !iCompiled )
            {
                char* buffer;
                //get what happened with our shader
                glGetShaderiv( iGLFragmentShaderHandle, GL_INFO_LOG_LENGTH, &iShaderOk );
                buffer = new char[iShaderOk];
                glGetShaderInfoLog( iGLFragmentShaderHandle, iShaderOk, NULL, buffer );
                //sprintf_s( buffer, "Failure Our Object For %s was not created", szFileName );
                MessageBoxA( NULL, buffer, szFragmentShaderName, MB_OK );
                //delete our dynamic data
                free( buffer );
                glDeleteShader(iGLFragmentShaderHandle);
                return false;
            }

            //make our new shader program
            m_iShaderProgramHandle = glCreateProgram();
            glAttachShader( m_iShaderProgramHandle, iGLVertexShaderHandle );
            glAttachShader( m_iShaderProgramHandle, iGLFragmentShaderHandle );
            glLinkProgram( m_iShaderProgramHandle );

            int iLinked = 0;
            glGetProgramiv( m_iShaderProgramHandle, GL_LINK_STATUS, &iLinked );
            if( !iLinked )
            {
                //we didn't link
                return false;
            }

            //NOW LET'S CREATE ALL OUR HANDLES TO OUR PROPER LIKING
            //start from this parameter
            va_list parseList;
            va_start( parseList, szFragmentShaderName );

            //read in number of variables, if any
            unsigned uiNum = 0;
            uiNum = va_arg( parseList, unsigned );

            //loop through our attribute pairs
            int enumType = 0;
            for( unsigned x = 0; x < uiNum; ++x )
            {
                //specify our attribute locations
                enumType = va_arg( parseList, int );
                char* name = va_arg( parseList, char* );
                glBindAttribLocation( m_iShaderProgramHandle, enumType, name );
            }
            //end our list parsing
            va_end( parseList );

            //relink: we have custom-specified our attribute locations
            glLinkProgram( m_iShaderProgramHandle );

            //fill our handles
            InitializeHandles( );

            //everything went great
            return true;
        }

        void CShaderProgram::InitializeHandles( void )
        {
            m_uihMVP = glGetUniformLocation( m_iShaderProgramHandle, "mMVP" );
            m_uihWorld = glGetUniformLocation( m_iShaderProgramHandle, "mWorld" );
            m_uihView = glGetUniformLocation( m_iShaderProgramHandle, "mView" );
            m_uihProjection = glGetUniformLocation( m_iShaderProgramHandle, "mProjection" );

            //texture handles
            m_uihDiffuseMap = glGetUniformLocation( m_iShaderProgramHandle, "diffuseMap" );
            if( m_uihDiffuseMap != -1 )
            {
                //store what texture index this handle will be in the shader (0)
                glUniform1i( m_uihDiffuseMap, RM_DIFFUSE+GL_TEXTURE0 );
            }
            m_uihNormalMap = glGetUniformLocation( m_iShaderProgramHandle, "normalMap" );
            if( m_uihNormalMap != -1 )
            {
                //store what texture index this handle will be in the shader (1)
                glUniform1i( m_uihNormalMap, RM_NORMAL+GL_TEXTURE0 );
            }
        }

        void CShaderProgram::SetDiffuseMap( const unsigned& uihDiffuseMap )
        {
            //(0)
            glActiveTexture( RM_DIFFUSE+GL_TEXTURE0 );
            glBindTexture( GL_TEXTURE_2D, uihDiffuseMap );
        }

        void CShaderProgram::SetNormalMap( const unsigned& uihNormalMap )
        {
            //(1)
            glActiveTexture( RM_NORMAL+GL_TEXTURE0 );
            glBindTexture( GL_TEXTURE_2D, uihNormalMap );
        }

    MY 2 TEST SHADERS (also, my math order is correct; it pertains to my matrix ordering in my math library. Once again, I've tested basic rendering, and rendering to the screen works fine.)

    ----------------------------------------SIMPLE SHADER-------------------------------------

        //vertex shader looks like this
        #version 330
        in vec3 vVertexPos;
        in vec3 vNormal;
        in vec2 vTexCoord;

        uniform mat4 mWorld;      // Model Matrix
        uniform mat4 mView;       // Camera View Matrix
        uniform mat4 mProjection; // Camera Projection Matrix

        out vec2 vTexCoordVary;   // Texture coord to the fragment program
        out vec3 vNormalColor;

        void main( void )
        {
            //pass the texture coordinate
            vTexCoordVary = vTexCoord;
            vNormalColor = vNormal;
            //calculate our model view projection matrix
            mat4 mMVP = (( mWorld * mView ) * mProjection );
            //result our position
            gl_Position = vec4( vVertexPos, 1 ) * mMVP;
        }

        //fragment shader looks like this
        #version 330
        in vec2 vTexCoordVary;
        in vec3 vNormalColor;

        uniform sampler2D diffuseMap;
        uniform sampler2D normalMap;

        out vec4 fragColor[2];

        void main( void )
        {
            //CORRECT
            fragColor[0] = texture( normalMap, vTexCoordVary );
            fragColor[1] = vec4( vNormalColor, 1.0 );
        }

    ----------------------------------------SCREEN SHADER-------------------------------------

        //vertex shader looks like this
        #version 330
        in vec3 vVertexPos;     // This is the position of the vertex coming in
        in vec2 vTexCoord;      // This is the texture coordinate....

        out vec2 vTexCoordVary; // Texture coord to the fragment program

        void main( void )
        {
            vTexCoordVary = vTexCoord;
            //set our position
            gl_Position = vec4( vVertexPos.xyz, 1.0f );
        }

        //fragment shader looks like this
        #version 330
        in vec2 vTexCoordVary;        // Incoming "varying" texture coordinate

        uniform sampler2D diffuseMap; //the tile detail texture
        uniform sampler2D normalMap;  //the normal map from earlier

        out vec4 vTheColorOfThePixel;

        void main( void )
        {
            //CORRECT
            vTheColorOfThePixel = texture( normalMap, vTexCoordVary );
        }

    Class RenderTarget main functions:

        //here is my render target's create function
        bool CRenderTarget::Create( const unsigned uiNumTextures, unsigned uiWidth, unsigned uiHeight,
                                    int iInternalFormat, bool bDepthWanted )
        {
            if( uiNumTextures <= 0 )
                return false;

            //generate our variables
            glGenFramebuffers(1, &m_uifboHandle);

            // Initialize FBO
            glBindFramebuffer(GL_FRAMEBUFFER, m_uifboHandle);

            m_uiNumTextures = uiNumTextures;
            if( bDepthWanted )
                m_uiNumTextures += 1;

            m_uiTextureHandle = new unsigned int[uiNumTextures];
            glGenTextures( uiNumTextures, m_uiTextureHandle );

            for( unsigned x = 0; x < uiNumTextures-1; ++x )
            {
                glBindTexture( GL_TEXTURE_2D, m_uiTextureHandle[x]);
                // Reserve space for our 2D render target
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
                glTexImage2D(GL_TEXTURE_2D, 0, iInternalFormat, uiWidth, uiHeight, 0,
                             GL_RGB, GL_UNSIGNED_BYTE, NULL);
                glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + x,
                                       GL_TEXTURE_2D, m_uiTextureHandle[x], 0);
            }

            //if we need one for depth testing
            if( bDepthWanted )
            {
                /*glFramebufferTexture2D(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT,
                                       GL_TEXTURE_2D, m_uiTextureHandle[uiNumTextures-1], 0);
                glFramebufferTexture2D(GL_FRAMEBUFFER_EXT, GL_STENCIL_ATTACHMENT,
                                       GL_TEXTURE_2D, m_uiTextureHandle[uiNumTextures-1], 0);*/

                // Must attach texture to framebuffer. Has stencil and depth
                glBindRenderbuffer(GL_RENDERBUFFER, m_uiTextureHandle[uiNumTextures-1]);
                glRenderbufferStorage(GL_RENDERBUFFER, /*GL_DEPTH_STENCIL*/GL_DEPTH24_STENCIL8,
                                      TEXTURE_WIDTH, TEXTURE_HEIGHT );
                glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                          GL_RENDERBUFFER, m_uiTextureHandle[uiNumTextures-1]);
                glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT,
                                          GL_RENDERBUFFER, m_uiTextureHandle[uiNumTextures-1]);
            }

            glBindFramebuffer(GL_FRAMEBUFFER, 0);
            //everything went fine
            return true;
        }

        void CRenderTarget::Bind( const int& iTargetAttachmentLoc, const unsigned& uiWhichTexture,
                                  const bool bBindFrameBuffer )
        {
            if( bBindFrameBuffer )
                glBindFramebuffer( GL_FRAMEBUFFER, m_uifboHandle );

            if( uiWhichTexture < m_uiNumTextures )
                glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + iTargetAttachmentLoc,
                                     m_uiTextureHandle[uiWhichTexture], 0);
        }

        void CRenderTarget::UnBind( void )
        {
            //default our binding
            glBindFramebuffer( GL_FRAMEBUFFER, 0 );
        }

    This is all in a test project, so here's my straightforward rendering function. It does basic rendering steps. Keep in mind I have already tested my textures and the box that's being rendered; all basic rendering works fine. It's just when I try to render to a texture and then display it on a render surface that it does not work. Also, I have tested my render surface: it is bound exactly to the screen coordinate space.

        void TestRenderSteps( void )
        {
            //Clear the color and the depth
            glClearColor( 0.0f, 0.0f, 0.0f, 1.0f );
            glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

            //bind the shader program
            glUseProgram( testShader.m_iShaderProgramHandle );

            //1) grab the vertex buffer related to our rendering
            glBindBuffer( GL_ARRAY_BUFFER,
                CVertexBufferManager::GetInstance()->GetPositionNormalTexBuffer().GetBufferHandle() );

            //2) how our stream will be split here ( 4 bytes position, ...etc. )
            CVertexBufferManager::GetInstance()->GetPositionNormalTexBuffer().MapVertexStride();

            //3) set the index buffer if needed
            glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, CIndexBuffer::GetInstance()->GetBufferHandle() );

            //send the needed information into the shader
            testShader.SetWorldMatrix( boxPosition );
            testShader.SetViewMatrix( Static_Camera.GetView( ) );
            testShader.SetProjectionMatrix( Static_Camera.GetProjection( ) );
            testShader.SetDiffuseMap( iTextureID );
            testShader.SetNormalMap( iTextureID2 );

            GLenum buffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
            glDrawBuffers(2, buffers);

            //bind to our render target
            //RM_DIFFUSE, RM_NORMAL are enums (0 && 1)
            renderTarget.Bind( RM_DIFFUSE, 1, true );
            renderTarget.Bind( RM_NORMAL, 1, false); //false because buffer is already bound

            //I clear here just to reset the texture to a default value of white;
            //by doing this I can see whether what's on my screen is just drawing to the screen
            //or my render target's default
            glClearColor( 1.0f, 1.0f, 1.0f, 1.0f );
            glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

            //I have this box object which I draw
            testBox.Draw();
            //the draw call looks like this
            //(my normal rendering works just fine, so I know this draw is fine)
            // glDrawElementsBaseVertex( m_sides[x].GetPrimitiveType(),
            //                           m_sides[x].GetPrimitiveCount() * 3,
            //                           GL_UNSIGNED_INT,
            //                           BUFFER_OFFSET(sizeof(unsigned int) * m_sides[x].GetStartIndex()),
            //                           m_sides[x].GetStartVertex( ) );

            //we unbind the target back to default
            renderTarget.UnBind();

            //I stop mapping my vertex format
            CVertexBufferManager::GetInstance()->GetPositionNormalTexBuffer().UnMapVertexStride();

            //I go back to using no shader program
            glUseProgram( 0 );

            //now that everything is drawn to the textures,
            //let's draw our screen surface and pass it our 2 filled-out textures
            //NOW RENDER THE TEXTURES WE COLLECTED TO THE SCREEN QUAD

            //bind the shader program
            glUseProgram( testScreenShader.m_iShaderProgramHandle );

            //1) grab the vertex buffer related to our rendering
            glBindBuffer( GL_ARRAY_BUFFER,
                CVertexBufferManager::GetInstance()->GetPositionTexBuffer().GetBufferHandle() );

            //2) how our stream will be split here
            CVertexBufferManager::GetInstance()->GetPositionTexBuffer().MapVertexStride();

            //3) set the index buffer if needed
            glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, CIndexBuffer::GetInstance()->GetBufferHandle() );

            //pass our 2 filled-out textures (in the shader I'm just using the diffuse;
            //I wanted to see if I was rendering anything before getting into other techniques)
            testScreenShader.SetDiffuseMap( renderTarget.GetTextureHandle(0) ); //SetDiffuseMap definition in shader program class
            testScreenShader.SetNormalMap( renderTarget.GetTextureHandle(1) );  //SetNormalMap definition in shader program class

            //do the draw call drawing our screen rectangle
            glDrawElementsBaseVertex( m_ScreenRect.GetPrimitiveType(),
                                      m_ScreenRect.GetPrimitiveCount() * 3,
                                      GL_UNSIGNED_INT,
                                      BUFFER_OFFSET(sizeof(unsigned int) * m_ScreenRect.GetStartIndex()),
                                      m_ScreenRect.GetStartVertex( ) );

            //unbind our vertex mapping
            CVertexBufferManager::GetInstance()->GetPositionTexBuffer().UnMapVertexStride();

            //default to no shader program
            glUseProgram( 0 );
        }

    Last words:
    1) I can render my box just fine.
    2) I can render my screen rect just fine.
    3) I cannot render my box into a texture and then display it on my screen rect.
    4) This entire project is just a test project I made to try different rendering practices, so excuse any ugly-ish, unclean code. It was thrown together on the fly while I was trying new test cases.
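
    Two hedged observations on the code above, plus a sketch. First, glDrawBuffers sets per-framebuffer state, so calling it before renderTarget.Bind() applies it to the default framebuffer rather than the FBO. Second, the depth/stencil path hands glBindRenderbuffer a name that came from glGenTextures, but renderbuffer names must come from glGenRenderbuffers; it is also worth verifying completeness before use. A minimal sketch of that portion (the local variable name is mine, not from the post):

        //generate a real renderbuffer name instead of reusing a texture name
        GLuint uiDepthStencilRB = 0;
        glGenRenderbuffers(1, &uiDepthStencilRB);
        glBindRenderbuffer(GL_RENDERBUFFER, uiDepthStencilRB);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, uiWidth, uiHeight);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                                  GL_RENDERBUFFER, uiDepthStencilRB);

        //always verify the FBO before using it
        if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
            return false;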

    Read the article

  • Java Simple WGS84 Lat Lon to Pixel X, Y

    - by Cnich
    I've read a multitude of information regarding map projection today. The amount of information available is overwhelming. I am attempting to simply convert lat/long values into screen X, Y coordinates without using any map. I do not need the values projected onto a map, just onto the window. The window itself represents an area of approximately 1500x1500 meters. The lat/long accuracy needed is to a 1/10th of a second. What are some simpler ways of converting a lat/long representation to the screen? I've read several articles and posts regarding translation onto images, but nothing related to the natural Java coordinate system. Thanks for any insight.
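
    For an area this small, a local equirectangular approximation is usually sufficient: treat latitude/longitude as planar meters around a fixed origin and scale to pixels. A sketch follows; the class, the origin, and the window parameters are assumptions, and the only constants used are the well-known ~111,320 m per degree of latitude (with longitude shrinking by cos(latitude)).

        // A minimal local flat-earth projection for a ~1500 m window.
        public final class LocalProjection {
            private static final double METERS_PER_DEG_LAT = 111320.0; // approximate

            private final double originLat, originLon;   // lower-left corner of the area
            private final double pixelsPerMeter;

            public LocalProjection(double originLat, double originLon, int windowPixels, double areaMeters) {
                this.originLat = originLat;
                this.originLon = originLon;
                this.pixelsPerMeter = windowPixels / areaMeters;
            }

            public int[] toPixel(double lat, double lon, int windowHeightPx) {
                double metersPerDegLon = METERS_PER_DEG_LAT * Math.cos(Math.toRadians(originLat));
                double xMeters = (lon - originLon) * metersPerDegLon;
                double yMeters = (lat - originLat) * METERS_PER_DEG_LAT;
                int x = (int) Math.round(xMeters * pixelsPerMeter);
                int y = windowHeightPx - (int) Math.round(yMeters * pixelsPerMeter); // Java's y grows downward
                return new int[] { x, y };
            }
        }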

    Read the article

  • Why is the CLLocationManager delegate not getting called in iPhone SDK 4.0?

    - by Biranchi
    Hi, this is the code that I have in my AppDelegate class:

        - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
        {
            CLLocationManager *locationManager = [[CLLocationManager alloc] init];
            locationManager.delegate = self;
            locationManager.distanceFilter = 1000; // 1 km
            locationManager.desiredAccuracy = kCLLocationAccuracyKilometer;
            [locationManager startUpdatingLocation];
        }

    And this is the delegate method I have in my AppDelegate class:

        //This is the delegate method for Core Location
        - (void)locationManager:(CLLocationManager *)manager
            didUpdateToLocation:(CLLocation *)newLocation
                   fromLocation:(CLLocation *)oldLocation
        {
            //printf("AppDelegate latitude %+.6f, longitude %+.6f\n",
            //       newLocation.coordinate.latitude, newLocation.coordinate.longitude);
        }

    It's working in 3.0, 3.1, and 3.1.3, but it's not working in 4.0, on both the simulator and the device. What is the reason? I am getting frustrated... Thanks.
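
    For what it's worth, a common first suspect when this delegate goes silent is lifetime and availability: keep the manager in a property instead of a method local, and check that location services are enabled before starting. A hedged sketch (the property name is an assumption; non-ARC style to match the post):

        if ([CLLocationManager locationServicesEnabled]) {   // class method as of iOS 4.0
            self.locationManager = [[[CLLocationManager alloc] init] autorelease];
            self.locationManager.delegate = self;
            self.locationManager.distanceFilter = 1000;      // 1 km
            self.locationManager.desiredAccuracy = kCLLocationAccuracyKilometer;
            [self.locationManager startUpdatingLocation];
        }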

    Read the article

  • iphone zoom to user location on mapkit

    - by satyam
    I'm using Map Kit and showing the user's location using "showsUserLocation". I'm using the following code to zoom to the user's location, but it's not zooming: it zooms to some location in Africa, even though the user location on the map shows correctly.

        MKCoordinateRegion newRegion;
        MKUserLocation* usrLocation = mapView.userLocation;
        newRegion.center.latitude = usrLocation.location.coordinate.latitude;
        newRegion.center.longitude = usrLocation.location.coordinate.longitude;
        newRegion.span.latitudeDelta = 20.0;
        newRegion.span.longitudeDelta = 28.0;
        [self.mapView setRegion:newRegion animated:YES];

    Why is the user's location showing correctly but the zoom landing in the wrong place? Can someone correct me please?
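
    A sketch of an alternative that avoids the likely race: before the first GPS fix, userLocation.location still reports latitude/longitude 0,0, which is a point in the ocean off the west coast of Africa. Zooming when the map reports the update, via the MKMapViewDelegate method below (iOS 4.0+, and the map view's delegate must be set), sidesteps that:

        - (void)mapView:(MKMapView *)aMapView didUpdateUserLocation:(MKUserLocation *)userLocation
        {
            MKCoordinateRegion region =
                MKCoordinateRegionMakeWithDistance(userLocation.coordinate, 2000, 2000); // 2 km span, an assumption
            [aMapView setRegion:region animated:YES];
        }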

    Read the article

  • Trying to convert OpenGL to MFC coordinates and having problems with gluProject

    - by Erez
    Hi, I'm trying to find the answer on the web and can't find a full solution that I can use and that will work... I'm developing an MFC project with a static picture as the canvas for an OpenGL class that draws the graphics for my game. On mouse down, I need to retrieve a shape coordinate from the OpenGL class. I'm looking for a way to convert the OpenGL coordinates to MFC coordinates, but no matter what I try I get junk after using gluProject or gluUnProject (I've tried both directions but neither is working):

        GLdouble modelMatrix[16];
        glGetDoublev(GL_MODELVIEW_MATRIX, modelMatrix);
        GLdouble projMatrix[16];
        glGetDoublev(GL_PROJECTION_MATRIX, projMatrix);
        int viewport[4];
        glGetIntegerv(GL_VIEWPORT, viewport);

        POINT mouse;                   // Stores the X and Y coords for the current mouse position
        GetCursorPos(&mouse);          // Gets the current cursor coordinates (mouse coordinates)
        ScreenToClient(hWnd, &mouse);

        GLdouble winX, winY, winZ;     // Holds our X, Y and Z coordinates
        winX = (float)point.x;         // Holds the mouse X coordinate
        winY = (float)point.y;         // Holds the mouse Y coordinate
        winY = (float)viewport[3] - winY;
        glReadPixels(winX, winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);

        GLdouble posX = s1->getPosX(), posY = s1->getPosY(), posZ = s1->getPosZ(); // Hold the final values
        gluUnProject(winX, winY, winZ, modelMatrix, projMatrix, viewport, &posX, &posY, &posZ);
        gluProject(posX, posY, posZ, modelMatrix, projMatrix, viewport, &winX, &winY, &winZ);

    This is just part of the code I've tried. Of course I didn't use gluProject and gluUnProject together; I just have them both here to show... and I know there is lots of junk in there, it's from some of my attempts... P.S. I've tried many more combinations and examples from the web and nothing seems to work in my case.... Can anyone show me the right way to do the transformation? Thanks

    Read the article

  • How to flip the origin.y of a CGRect?

    - by mystify
    I'm trying to draw a UIImageView, but the frame's origin is wrong when I flip the coordinate system so the drawing isn't upside-down.

        CGRect imgRect = imgView.frame;
        imgRect.origin.y += 10.0f;
        CGContextTranslateCTM(context, 0.0f, imgRect.size.height);
        CGContextScaleCTM(context, 1.0f, -1.0f);
        CGContextDrawImage(context, imgRect, imgView.image.CGImage);

    As you can see, I move the image down by 10, but instead of going down by 10, it goes UP by 10. That's completely illogical, since I had actually inverted the wrong coordinate system to look exactly like the one in UIView, right? What to do about it?
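
    One thing the snippet above glosses over: after the translate/scale, the context's y-axis points up, so any rect expressed in UIKit's top-down coordinates has to be converted before CGContextDrawImage. A sketch, where boundsHeight (the height of the surface being drawn into) is an assumption; note too that the translate above uses the image rect's height, where the full drawing surface's height is usually intended:

        CGRect flipped = imgRect;
        flipped.origin.y = boundsHeight - imgRect.origin.y - imgRect.size.height;
        CGContextDrawImage(context, flipped, imgView.image.CGImage);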

    Read the article

  • Java: moving ball with angle?

    - by Meko
    Hi all. I have started to learn game physics and I am trying to move a ball at an angle, but it does not change its angle. The Java coordinate system is a little different, and I think my problem is there. Here are my codes. This is for calculating the x and y speed scales:

        scale_X = Math.sin(angle);
        scale_Y = Math.cos(angle);
        velosity_X = (speed * scale_X);
        velosity_Y = (speed * scale_Y);

    This is for moving the ball in the run() function:

        ball.posX = ball.posX + (int) velosity_X;
        ball.posY = ball.posY + (int) velosity_Y;

    I used (int) velosity_X and (int) velosity_Y because in the ball class I draw the object with

        g.drawOval(posX, posX, width, height);

    and g.drawOval requires ints. I don't know whether that's the problem or not. Also, if I use angle 30 it goes +X and +Y, but if I use angle 35 it goes -X and -Y. I haven't figured out how the coordinate system works in Java.
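
    Two likely culprits, offered as guesses: Math.sin and Math.cos take radians, not degrees (which would explain why 30 and 35 behave so differently), and casting the velocity to int every frame truncates any fractional movement to zero. Note also that drawOval is being passed posX twice. A sketch using the post's variable names:

        // keep the position in doubles and only round when drawing;
        // in Java's 2D coordinate system y grows downward, so negate to move "up"
        double posX = 100.0, posY = 100.0;
        double angle = Math.toRadians(35);             // convert degrees to radians first
        double velocityX = speed * Math.cos(angle);
        double velocityY = -speed * Math.sin(angle);   // minus: screen y points down

        posX += velocityX;
        posY += velocityY;
        g.drawOval((int) Math.round(posX), (int) Math.round(posY), width, height);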

    Read the article

  • MATLAB: Convert two array to a sparse matrix

    - by CziX
    I'm looking for a command or trick to convert two arrays to a sparse matrix. The two arrays contain x-values and y-values, which give coordinates in the Cartesian coordinate system. I want to group the coordinates whose values lie between given bounds on the x-axis and the y-axis.

        % MATLAB
        x_i = find(x > 0.1 & x < 0.9);
        y_i = find(y > 0.4 & y < 0.8);
        % Then I want to find indices which are located in both x_i and y_i

    Is there an easy way to do this little trick?
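
    A sketch of two ways to get the shared indices, plus one way to bin the selected points into a sparse matrix. The grid size N, and the assumption that the values lie in (0, 1], are mine:

        % indices present in both x_i and y_i
        idx = intersect(x_i, y_i);

        % equivalent, in one logical step
        idx = find(x > 0.1 & x < 0.9 & y > 0.4 & y < 0.8);

        % optional: accumulate the selected coordinates into an N-by-N sparse count matrix
        N = 100;
        S = sparse(ceil(y(idx) * N), ceil(x(idx) * N), 1, N, N);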

    Read the article

  • MKMapView not centered on pin

    - by Val Smith
    Hi all, I have an MKMapView that I'm currently adding pins to, but for some reason when I call

        [mapView setRegion:[detailItem coordinateRegion] animated:YES];

    the pin is off-centered (toward the right side of the screen) on the map. Here is the code for [detailItem coordinateRegion]:

        - (MKCoordinateRegion)coordinateRegion
        {
            MKCoordinateRegion region = { { 0.0, 0.0 }, { 0.0, 0.0 } };
            region.center = self.coordinate;
            region.span.longitudeDelta = 0.0075f;
            region.span.latitudeDelta = 0.0075f;
            return (region);
        }

    I'm setting the coordinateRegion's center to the object's x,y coordinate, so why is it off-center on the map? I feel like there's something I'm missing here... ::Val::
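
    One thing worth trying, offered as a guess: MKMapView adjusts any requested region to the nearest region it can actually display, and that adjustment can shift the center. Asking the map to fit the region first makes the adjustment explicit:

        MKCoordinateRegion fitted = [mapView regionThatFits:[detailItem coordinateRegion]];
        [mapView setRegion:fitted animated:YES];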

    Read the article

  • Erlang list comprehension, traversing two lists and excluding values

    - by ErJab
    I need to generate a set of coordinates in Erlang. Given one coordinate, say (X,Y), I need to generate (X-1, Y-1), (X-1, Y), (X-1, Y+1), (X, Y-1), (X, Y+1), (X+1, Y-1), (X+1, Y), (X+1, Y+1): basically all surrounding coordinates EXCEPT the middle coordinate (X,Y). To generate all nine coordinates, I currently do this:

        [{X,Y} || X <- lists:seq(X-1, X+1), Y <- lists:seq(Y-1, Y+1)]

    But this generates all the values, including (X,Y). How do I exclude (X,Y) from the list using filters in the list comprehension?
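
    A sketch of one way to do it. Note that the generators need fresh variable names: reusing the already-bound X and Y as generator patterns would only match elements equal to them. With fresh names, a plain filter drops the centre point:

        %% all eight neighbours of {X, Y}, excluding {X, Y} itself
        neighbours(X, Y) ->
            [{NX, NY} || NX <- lists:seq(X - 1, X + 1),
                         NY <- lists:seq(Y - 1, Y + 1),
                         {NX, NY} =/= {X, Y}].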

    Read the article

  • Zoom on userLocation

    - by Marco
    Hello, how can I zoom the map to my userLocation automatically in my app? I have the following code to zoom the map, but I need it to zoom to the userLocation and instead it always zooms to Africa:

        MKCoordinateRegion zoomIn = mapView.region;
        zoomIn.span.latitudeDelta *= 0.2;
        zoomIn.span.longitudeDelta *= 0.2;
        zoomIn.center.latitude = mapView.userLocation.location.coordinate.latitude;
        zoomIn.center.longitude = mapView.userLocation.location.coordinate.longitude;
        [mapView setRegion:zoomIn animated:YES];
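
    The zoom to Africa is the giveaway: before the first fix arrives, the user location still reads 0,0 (a point in the ocean off Africa's west coast). A sketch that guards against that (the 1000 m span is an assumption):

        if (mapView.userLocation.location != nil) {
            MKCoordinateRegion zoomIn =
                MKCoordinateRegionMakeWithDistance(mapView.userLocation.location.coordinate, 1000, 1000);
            [mapView setRegion:zoomIn animated:YES];
        }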

    Read the article

  • How to fix rotations in a Rubik's Cube?

    - by Eindbaas
    I'm trying to create a Rubik's Cube in Flash & Papervision and I'm really stuck here. I'm up to the point where I can rotate any plane of cubes once, but after that... it's messed up, because all the local coordinate systems are messy. I don't really know where to go from here. Can anybody give any advice on what to do? I'm not looking for "read about transformation matrices"; I know I should (and I am doing that), but I'm not really sure what to look for. My idea is that, after each rotation, I should fix each cube's coordinate system again, but I have no idea how. Any hints on what I want to achieve (in words), and why, are much appreciated. http://dl.dropbox.com/u/250155/rubik/main.html (you can press the K key once ;) )

    Read the article

  • TomTom integration does not work anymore?

    - by viezel
    Six months back I did an integration with TomTom navigation via its URL scheme. This worked out great, but then TomTom updated their iPhone app to version 1.10, and since then the integration does not work. Has anyone tried the same? My iOS integration looks like this (Titanium Appcelerator):

        var lat = dbDataArray[chosenRoute]["lat"].toFixed(6); // geo coordinate
        var lng = dbDataArray[chosenRoute]["lng"].toFixed(6); // geo coordinate

        // open navigation
        var url = "tomtomhome://geo:action=navigateto&lat=" + lat + "&long=" + lng + "&name=name";
        // show map
        // var url = "tomtomhome://geo:action=show&lat=" + lat + "&long=" + lng + "&name=name";

        if (Ti.Platform.canOpenURL(url)) {
            Ti.Platform.openURL(url);
        } else {
            alert('Please install TomTom Navigation app');
        }

    Note: I cannot get TomTom to tell me what has changed.

    Read the article

  • I've got my 2D/3D conversion working perfectly, how to do perspective

    - by user346992
    Although the context of this question is making a 2D/3D game, the problem I have boils down to some math. Although it's a 2.5D world, let's pretend it's just 2D for this question.

        // xa: x-accent, the x coordinate of the projection
        // mapP: a coordinate on a map which needs to be projected
        // _Dist_ values are constants for the projection; choosing them correctly
        // will result in e.g. an isometric projection
        xa = mapP.x * xDistX + mapP.y * xDistY;
        ya = mapP.x * yDistX + mapP.y * yDistY;

    xDistX and yDistX determine the angle of the x-axis, and xDistY and yDistY determine the angle of the y-axis on the projection (and also the size of the grid, but let's assume this is 1 pixel for simplicity).

        x-axis-angle = atan(yDistX / xDistX)
        y-axis-angle = atan(yDistY / xDistY)

    A "normal" coordinate system like this

        --------------- x
        |
        |
        |
        |
        |
        y

    has values like this:

        xDistX = 1; yDistX = 0;
        xDistY = 0; yDistY = 1;

    So every step in the x direction will result, on the projection, in 1 pixel to the right and 0 pixels down. Every step in the y direction of the projection will result in 0 steps to the right and 1 pixel down. By choosing the correct xDistX, yDistX, xDistY, yDistY, you can project any trimetric or dimetric system (which is why I chose this). So far so good; when this is drawn, everything turns out okay. If "my system" and mindset are clear, let's move on to perspective. I wanted to add some perspective to this grid, so I added some extras like this:

        camera = new MapPoint(60, 60);
        dx = mapP.x - camera.x; // delta x
        dy = mapP.y - camera.y; // delta y
        dist = Math.sqrt(dx * dx + dy * dy); // dist is the distance to the camera, Pythagoras etc.
        // all objects must be in front of the camera
        fac = 1 - dist / 100; // this formula determines the amount of perspective
        xa = fac * (mapP.x * xDistX + mapP.y * xDistY);
        ya = fac * (mapP.x * yDistX + mapP.y * yDistY);

    Now the really hard part... what if you have an (xa, ya) point on the projection and want to calculate the original point (x, y)? For the first case (without perspective) I did find the inverse function, but how can this be done for the formula with the perspective? My math skills are not quite up to the challenge of solving this. (I vaguely remember from a long time ago that Mathematica could create inverse functions for some special cases... could it solve this problem? Could someone maybe try?)
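
    For what it's worth, the perspective step does not break invertibility; it only adds one scalar unknown. Writing the linear part as a matrix, the inverse reduces to a quartic in that scalar. A sketch of the algebra, using the names from the post, with c the camera and P = (xa, ya):

        \[
        P = f\,Mp, \qquad
        M = \begin{pmatrix} xDistX & xDistY \\ yDistX & yDistY \end{pmatrix}, \qquad
        f = 1 - \frac{\lVert p - c \rVert}{100}.
        \]
        Let \( q = M^{-1}P \). Then \( q = f\,p \), so \( p = q/f \), and substituting
        into the definition of \( f \) (with \( f > 0 \)):
        \[
        100\,f\,(1 - f) = \lVert q - f\,c \rVert
        \;\Longrightarrow\;
        10000\,f^{2}(1 - f)^{2} = \lVert q \rVert^{2} - 2f\,(q \cdot c) + f^{2}\lVert c \rVert^{2},
        \]
        a quartic in the single unknown \( f \): solve it numerically (e.g. with
        Newton's method), keep the root in \( (0, 1) \), and recover \( p = q/f \).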

    Read the article

  • Rotation towards an object in 3d space

    - by retoucher
    Hello, I have two coordinates on a 2D plane in 3D space, and am trying to rotate one coordinate (a vector) to face the other coordinate. My vertical axis is the y-axis, so if both of the coordinates are located flat on the 2D plane, they would both have a y value of 0, and their x and z coordinates determine their position length/width-wise on the plane. Right now, I'm calculating the angle like so (language agnostic):

        angle = atan2(z2 - z1, x2 - x1);

    and am rotating/translating in space like so:

        pushMatrix();
        rotateY(angle);
        popMatrix();

    This doesn't seem to be working, though. Are my calculations/process correct?
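
    One observation on the snippet above: rotating and then immediately popping the matrix has no visible effect, since nothing is drawn between the two calls. The rotation has to wrap the draw call, and the object usually needs to be translated to its own position first. A Processing-style sketch (drawObject is a hypothetical draw call, and the sign of the angle may need flipping for your handedness):

        float angle = atan2(z2 - z1, x2 - x1);   // heading in the XZ plane
        pushMatrix();
        translate(x1, 0, z1);                    // move to the vector's position
        rotateY(-angle);                         // sign depends on your convention
        drawObject();
        popMatrix();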

    Read the article

  • Find top "n" nearby coordinates.

    - by John Hamelink
    I have a coordinate. I want to find the top "n" (n being a variable value) nearest coordinates out of several thousand rows stored in a MySQL database. I also want to be able to define maximum and minimum distances between the coordinate in question and the coordinates in the database. How best am I to go about this? Would it be bonkers to use PHP, as I understand its syntax much better than MySQL's? If I use a MySQL function, how do I move it between databases if I choose to switch servers? How is it stored? Lastly, what is the most efficient method of getting through all these coordinates accurately? The coordinates are all relatively close to one another. Thanks for your time, John.
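
    A sketch of the standard great-circle (haversine-style) approach, done directly in SQL so it ports with an ordinary dump rather than a stored function. The table and column names, and the :lat/:lng/:min_km/:max_km placeholders, are assumptions:

        SELECT id, lat, lng,
               (6371 * ACOS(COS(RADIANS(:lat)) * COS(RADIANS(lat))
                     * COS(RADIANS(lng) - RADIANS(:lng))
                     + SIN(RADIANS(:lat)) * SIN(RADIANS(lat)))) AS distance_km
        FROM coordinates
        HAVING distance_km BETWEEN :min_km AND :max_km
        ORDER BY distance_km
        LIMIT 10;   -- the "n" nearest

    Since all the points are close together, a cheaper flat-earth approximation would also do; either way, plain SQL like this needs nothing server-specific to migrate.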

    Read the article

  • Can't access annotation property of subclassed UIButton - edited

    - by Tzur Gazit
    Below is my original question. I kept investigating and found out that the type of the button I allocate is UIButton instead of the subclassed type CustomButton. The snippet below is the allocation of the button and the connection to the target. I break immediately after the allocation and check the button type (po rightButton in the debugger console). It turns out that the type is UIButton instead of CustomButton.

        CustomButton* rightButton = [CustomButton buttonWithType:UIButtonTypeDetailDisclosure];
        [rightButton addTarget:self
                        action:@selector(showDetails:)
              forControlEvents:UIControlEventTouchUpInside];

    I have a mapView to which I add annotations. The pins' callouts have a button (rightCalloutAccessoryView). In order to be able to display various information when the button is pushed, I've subclassed UIButton and added a class called "Annotation".

        @interface CustomButton : UIButton
        {
            NSIndexPath *indexPath;
            Annotation *mAnnotation;
        }
        @property (nonatomic, retain) NSIndexPath *indexPath;
        @property (nonatomic, copy) Annotation *mAnnotation;
        - (id) setAnnotation2:(Annotation *)annotation;
        @end

    Here is "Annotation":

        @interface Annotation : NSObject <MKAnnotation>
        {
            CLLocationCoordinate2D coordinate;
            NSString *mPhotoID;
            NSString *mPhotoUrl;
            NSString *mPhotoName;
            NSString *mOwner;
            NSString *mAddress;
        }
        @property (nonatomic, assign) CLLocationCoordinate2D coordinate;
        @property (nonatomic, copy) NSString *mPhotoID;
        @property (nonatomic, copy) NSString *mPhotoUrl;
        @property (nonatomic, copy) NSString *mPhotoName;
        @property (nonatomic, copy) NSString *mOwner;
        @property (nonatomic, copy) NSString *mAddress;
        - (id) initWithCoordinates:(CLLocationCoordinate2D)coordinate;
        - (id) setPhotoId:(NSString *)id url:(NSString *)url owner:(NSString *)owner address:(NSString *)address andName:(NSString *)name;
        @end

    I want to set the annotation property of the UIButton in - (MKAnnotationView *)mapView:(MKMapView *)pMapView viewForAnnotation:(id<MKAnnotation>)annotation, in order to refer to it in the button push handler (- (IBAction)showDetails:(id)sender). The problem is that I can't set the annotation property of the button. I get the following message at run time:

        2010-04-27 08:15:11.781 HotLocations[487:207] *** -[UIButton setMAnnotation:]: unrecognized selector sent to instance 0x5063400
        2010-04-27 08:15:11.781 HotLocations[487:207] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[UIButton setMAnnotation:]: unrecognized selector sent to instance 0x5063400'
        2010-04-27 08:15:11.781 HotLocations[487:207] Stack: ( 32080987, 2472563977, 32462907, 32032374, 31884994, 55885, 30695992, 30679095, 30662137, 30514190, 30553882, 30481385, 30479684, 30496027, 30588515, 63333386, 31865536, 31861832, 40171029, 40171226, 2846639 )

    I appreciate the help. Tzur.
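
    The behavior is expected: buttonWithType: is a factory method on UIButton and returns a plain UIButton regardless of which subclass receives the message, so the CustomButton properties never exist on the object. A sketch that sidesteps the subclass entirely: MKMapViewDelegate already hands you the tapped annotation if the button is wired as the callout accessory (drop the custom target/action and implement this delegate method instead):

        - (void)mapView:(MKMapView *)mapView
         annotationView:(MKAnnotationView *)view
        calloutAccessoryControlTapped:(UIControl *)control
        {
            Annotation *annotation = (Annotation *)view.annotation;
            // use annotation.mPhotoID, annotation.mPhotoUrl, etc. to show the details
        }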

    Read the article

  • Triangular bounding volumes

    - by Cheery
    I've come up with an alternative to Béziers that might be easier to ray-trace, perhaps even through a plain vertex shader. There's a piece missing, though: I need to find the parametric surface equation from the surface normals I have for the edge vertices. I also have to know its peak and valley so I can constrain the depth of my bounding triangle. The image explained the overall idea: I build a bounding volume from a control triangle, then apply a function to each parametric coordinate of the triangle (s + t + u = 1, where s, t, u >= 0) to get the height coordinate for that point. Simply put, it produces a procedurally generated height map for the triangle's surface. I just need to find a function that generates the height map so I can make it work.
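
    One standard starting point, offered as an assumption rather than the missing function itself: interpolate position and normal barycentrically over the control triangle and displace along the interpolated normal, with the bounding prism's depth constrained by the extremes of the height function:

        \[
        p(s,t,u) = sA + tB + uC, \qquad
        n(s,t,u) = \frac{s\,n_A + t\,n_B + u\,n_C}{\lVert s\,n_A + t\,n_B + u\,n_C \rVert},
        \qquad s + t + u = 1,\; s, t, u \ge 0,
        \]
        \[
        S(s,t,u) = p(s,t,u) + h(s,t,u)\,n(s,t,u),
        \]
        where \( h \) is the height function and the bounding depth spans
        \( [\min h, \max h] \) over the triangle.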

    Read the article

  • Python: Repeat elements in a list comprehension?

    - by User
    I have the following list comprehension, which returns a list of coordinate objects for each location:

        coordinate_list = [Coordinates(location.latitude, location.longitude) for location in locations]

    This works. Now suppose the location object has a number_of_times member. I want a list comprehension to generate n Coordinate objects, where n is the number_of_times for the particular location. So if a location has number_of_times = 5, then the coordinates for that location will be repeated 5 times in the list. (Maybe this is a case for a for-loop, but I'm curious if it can be done via list comprehensions.)
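
    It can: a list comprehension accepts multiple for clauses, and the inner one can exist purely to repeat the expression. A sketch:

        coordinate_list = [
            Coordinates(location.latitude, location.longitude)
            for location in locations
            for _ in range(location.number_of_times)
        ]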

    Read the article

  • draw line with php using coordinates from txt file

    - by netmajor
    I have a file A2.txt with coordinates x1, y1, x2, y2 in every line, like below:

        204 13 225 59
        225 59 226 84
        226 84 219 111
        219 111 244 192
        244 192 236 209
        236 209 254 223
        254 223 276 258
        276 258 237 337

    In my PHP file I have the code below. It should take every line and draw a line with the coordinates from that line, but something is wrong because nothing is drawn:

        <?php
        $plik = fopen("A2.txt", 'r') or die("blad otarcia");
        while(!feof($plik))
        {
            $l = fgets($plik, 20);
            $k = explode(' ', $l);
            imageline( $mapa, $k[0], $k[1], $k[2], $k[3], $kolor );
        }
        imagejpeg($mapa);
        imagedestroy($mapa);
        fclose($plik);
        ?>

    If I use imagejpeg and imagedestroy inside the while loop, only the first line is drawn. What should I do to draw every line? Please help :)
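
    One likely cause, offered as a guess: $mapa and $kolor are never created, so imageline has nothing to draw on. A sketch that allocates the image and color first and outputs the JPEG once, after the loop (the canvas size and the Content-Type header are assumptions):

        <?php
        $mapa  = imagecreatetruecolor(400, 400);            // canvas size: adjust to your data
        $kolor = imagecolorallocate($mapa, 255, 0, 0);      // red lines

        $plik = fopen("A2.txt", 'r') or die("blad otwarcia");
        while (($l = fgets($plik, 20)) !== false) {
            $k = explode(' ', trim($l));
            if (count($k) >= 4) {
                imageline($mapa, $k[0], $k[1], $k[2], $k[3], $kolor);
            }
        }
        fclose($plik);

        header('Content-Type: image/jpeg');                 // before any other output
        imagejpeg($mapa);                                   // output once, after all lines are drawn
        imagedestroy($mapa);
        ?>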

    Read the article

  • Draw and move a point over an image in python

    - by frx08
    Hi all, I have to write a little script in Python. In this script I have a variable (that represents a coordinate) that is continuously updated to a new value. So I have to draw a red point over an image and update the point's position every time the variable that contains the coordinate is updated. I tried to explain what I need by doing something like this, but obviously it doesn't work:

        import Tkinter, Image, ImageDraw, ImageTk

        i = 0
        root = Tkinter.Tk()
        im = Image.open("img.jpg")
        root.geometry("%dx%d" % (im.size[0], im.size[1]))
        while True:
            draw = ImageDraw.Draw(im)
            draw.ellipse((i, 0, 10, 10), fill=(255, 0, 0))
            pi = ImageTk.PhotoImage(im)
            label = Tkinter.Label(root, image=pi)
            label.place(x=0, y=0, width=im.size[0], height=im.size[1])
            i += 1

    Can someone help me please? Thanks very much!
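
    A sketch of the usual Tkinter pattern: draw the image once on a Canvas, create the dot as a canvas item, and reschedule updates with after() instead of a blocking while loop (which never lets mainloop run). The file name matches the post; the timings and movement are assumptions:

        import Tkinter, Image, ImageTk

        root = Tkinter.Tk()
        im = Image.open("img.jpg")
        canvas = Tkinter.Canvas(root, width=im.size[0], height=im.size[1])
        canvas.pack()

        photo = ImageTk.PhotoImage(im)        # keep a reference or it is garbage-collected
        canvas.create_image(0, 0, image=photo, anchor=Tkinter.NW)
        dot = canvas.create_oval(0, 0, 10, 10, fill="red", outline="red")

        def step():
            canvas.move(dot, 1, 0)            # or canvas.coords(dot, x, y, x+10, y+10) for absolute moves
            root.after(50, step)              # re-schedule instead of blocking with while True

        step()
        root.mainloop()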

    Read the article

  • Searching a 2D array for a range of values in java

    - by Paige O
    I have a 2^n-size int array and I want to check whether an element exists that is greater than 0. If the element exists, I want to divide the array into 4 and check whether the coordinates of the found element are in the 1st, 2nd, 3rd or 4th quadrant of the array. For example, logically, if the element exists in the first quadrant it would look something like this: if array[row][col] > 0 && the row of that coordinate is in the range 0-(grid.length/2-1) && the column of that coordinate is in the range 0-(grid.length/2-1), then do something. I'm really not sure how to check the row and column index of the found element and store those coordinates to use in my if statement. Help!
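
    A sketch in Java; the quadrant numbering here (1 = top-left, 2 = top-right, 3 = bottom-left, 4 = bottom-right) is an assumption:

        int half = grid.length / 2;
        for (int row = 0; row < grid.length; row++) {
            for (int col = 0; col < grid[row].length; col++) {
                if (grid[row][col] > 0) {
                    int quadrant = (row < half)
                            ? (col < half ? 1 : 2)
                            : (col < half ? 3 : 4);
                    // row, col, and quadrant are now in scope for the "do something" step
                    System.out.println("found " + grid[row][col]
                            + " at (" + row + ", " + col + ") in quadrant " + quadrant);
                }
            }
        }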

    Read the article

  • How does one search/poll all modal info in a frame automatically?

    - by user310631
    As you'll no doubt be able to tell momentarily, I have little knowledge of the programming world. That being said, here goes... In this scenario, there's a Java-based game that has a map of the game world oriented in an X, Y coordinate tile system. Some of the grid tiles are player cities; some are non-player locations. The game runs inside a frame in the browser. The X, Y coordinate map feature is one optional view, and the entire map is not available to view at any one time. Each grid tile has an "onclick" event and an "onmouseover" event. The mouseover event is a tooltip; the click event is something called a "modal" that has information specific to that tile. What I'd like to find out is: how can I poll all the grid tiles' "modal" information using some kind of script or other auto-running polling feature?

    Read the article
