Search Results

Search found 17148 results on 686 pages for 'screen coordinates'.


  • Are large include files like iostream efficient? (C++)

    - by Keand64
    iostream, together with all of the files it includes, the files those include, and so on, adds up to about 3000 lines. Consider the hello world program, which needs no more functionality than to print something to the screen:

        #include <iostream> // +3000 lines right there

        int main() {
            std::cout << "Hello, World!";
            return 0;
        }

    This should be a very simple piece of code, but iostream adds 3000+ lines to a marginal program. So, are those 3000+ lines of code really needed simply to display a single line on the screen, and if not, do they produce a less efficient program than if I copied only the relevant lines into my code?

    Read the article

  • How to debug and detect hang issue

    - by igor
    I am testing my application (Windows 7, WinForms, Infragistics controls, C#, .NET 3.5). I have two monitors, and my application saves and restores form positions on the first or second monitor. I want to verify that my application can restore window positions (for windows whose positions were saved on the second monitor) to the first one, so I physically switched off the second monitor and disabled it under Screen Resolution in the Windows display settings. When I pressed Detect to apply the hardware change, Windows switched off the first monitor for a few seconds to apply the new settings. When the first monitor came back, my application had become unresponsive. The application was launched in debug mode, so I paused it and navigated through the stack and threads in Visual Studio 2008, but found nothing that helped me understand why it is not responding. Could somebody tell me how to track down the source of this issue?

    Read the article

  • Animation in Quartz 2D

    - by coure06
    I want to create an app which, every second, shows 4-5 words on screen, with the last word zooming out/in. I can easily draw the static words, but to animate the last word I currently have to redraw the static text again and again. How can I create 2 separate layers, so that the static text is on one layer (which I will refill every second) and the last, animated word is on another layer? How do I create 2 separate layers attached to the same screen while handling their drawRect methods separately?

    Read the article

  • Android - OpenGL cube is not displayed

    - by Marc Ortiz
    I'm trying to display a square on my screen and I can't. What's my problem? How can I display it in the center of the screen? My code is below. Here's my renderer class:

        public class GLRenderEx implements Renderer {
            private GLCube cube;
            Context c;
            GLCube quad; // ( NEW )

            // Constructor
            public GLRenderEx(Context context) {
                // Set up the data-array buffers for these shapes ( NEW )
                quad = new GLCube(); // ( NEW )
            }

            // Call back when the surface is first created or re-created.
            @Override
            public void onSurfaceCreated(GL10 gl, EGLConfig config) {
                // NO CHANGE - SKIP
            }

            // Call back after onSurfaceCreated() or whenever the window's size changes.
            @Override
            public void onSurfaceChanged(GL10 gl, int width, int height) {
                // NO CHANGE - SKIP
            }

            // Call back to draw the current frame.
            @Override
            public void onDrawFrame(GL10 gl) {
                // Clear color and depth buffers using clear-values set earlier
                gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);

                gl.glLoadIdentity(); // Reset model-view matrix ( NEW )
                gl.glTranslatef(-1.5f, 0.0f, -6.0f); // Translate left and into the screen ( NEW )

                // Translate right, relative to the previous translation ( NEW )
                gl.glTranslatef(3.0f, 0.0f, 0.0f);
                quad.draw(gl); // Draw quad ( NEW )
            }
        }

    And here is my square class:

        public class GLCube {
            private FloatBuffer vertexBuffer; // Buffer for vertex-array
            private float[] vertices = { // Vertices for the square
                -1.0f, -1.0f, 0.0f, // 0. left-bottom
                 1.0f, -1.0f, 0.0f, // 1. right-bottom
                -1.0f,  1.0f, 0.0f, // 2. left-top
                 1.0f,  1.0f, 0.0f  // 3. right-top
            };

            // Constructor - Setup the vertex buffer
            public GLCube() {
                // Setup vertex array buffer. Vertices in float. A float has 4 bytes
                ByteBuffer vbb = ByteBuffer.allocateDirect(vertices.length * 4);
                vbb.order(ByteOrder.nativeOrder()); // Use native byte order
                vertexBuffer = vbb.asFloatBuffer(); // Convert from byte to float
                vertexBuffer.put(vertices); // Copy data into buffer
                vertexBuffer.position(0); // Rewind
            }

            // Render the shape
            public void draw(GL10 gl) {
                // Enable vertex-array and define its buffer
                gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
                gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
                // Draw the primitives from the vertex-array directly
                gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3);
                gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
            }
        }

    Thanks!!
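
    One thing worth checking: both surface callbacks above are left empty ("NO CHANGE - SKIP"), so no viewport or projection matrix is ever set. With an identity projection the visible volume only spans -1 to 1 on every axis, so a quad translated to z = -6 is simply clipped away. Below is a minimal sketch (my assumption of what is missing, not code from the question) of what onSurfaceChanged typically sets up for the fixed-function GL10 pipeline used here; it drops into the renderer class and needs android.opengl.GLU in addition to the existing imports.

        // Sketch of a typical onSurfaceChanged for the GL10 fixed-function pipeline.
        @Override
        public void onSurfaceChanged(GL10 gl, int width, int height) {
            if (height == 0) height = 1;                     // avoid divide by zero
            float aspect = (float) width / height;

            gl.glViewport(0, 0, width, height);              // map GL output to the whole surface

            gl.glMatrixMode(GL10.GL_PROJECTION);             // set up a perspective projection
            gl.glLoadIdentity();
            GLU.gluPerspective(gl, 45.0f, aspect, 0.1f, 100.0f);

            gl.glMatrixMode(GL10.GL_MODELVIEW);              // back to model-view for drawing
            gl.glLoadIdentity();
        }

    With a projection like this in place, the two translations in onDrawFrame leave the quad at (1.5, 0, -6), which falls inside a 45-degree frustum and should show up (though not centered, because of the two translations).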

    Read the article

  • iPhone + alternative to drop downs

    - by pratik
    Hello, I am developing a native iPhone app. I have a requirement for 5 drop-downs on one screen: the user selects a value in each, and a graph is generated on screen based on those selections. The drop-downs are also cascading, meaning the values available in a lower drop-down depend on the value selected in the drop-down above it. If this were a web app I would have no problem, since drop-downs are available there, but this is a native iPhone app and there is no drop-down control in the iPhone SDK. Please suggest an alternative way to achieve the same thing. Regards, Pratik

    Read the article

  • Which containers / graphics components to use in a simple Java Swing game?

    - by rize
    I'm creating a simple labyrinth game with Java + Swing. The game draws a randomized labyrinth on the screen, places a figure in the middle, and the player is then supposed to find the way out by moving the figure with the arrow keys. For now, I'm using a plain background and drawing the walls of the labyrinth with Graphics.drawLine(). I have a custom picture of the figure in a .gif file, which I load as a BufferedImage object. However, I want the player to see only part of the labyrinth at a time, and the view should follow the figure as the player moves around. I'm planning to do this by creating an Image object of the whole labyrinth when it is generated, then "cutting" a square around the current position of the figure and displaying it with Graphics.drawImage(). I'm new to Swing, though, and I can't figure out how to draw the figure at different positions "above" the labyrinth without redrawing the whole thing. Which container/component should I use for the labyrinth, and which for the figure, to achieve this?
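
    One way to realize the approach described above with a single component: one JPanel can paint the visible window of the pre-rendered labyrinth image and then the figure on top of it, so only that panel is repainted when the figure moves and the full labyrinth is never redrawn, only copied from. The sketch below is illustrative only; the names (MazeView, moveFigure, VIEW) and the fixed 300-pixel window are my own assumptions, not from the question.

        import java.awt.*;
        import java.awt.image.BufferedImage;
        import javax.swing.JPanel;

        // Paints a VIEW x VIEW window of the maze image centred on the figure,
        // then the figure sprite on top. Assumes the maze image is at least VIEW x VIEW.
        class MazeView extends JPanel {
            private final BufferedImage maze;    // full labyrinth, rendered once
            private final BufferedImage figure;  // the .gif loaded as a BufferedImage
            private int figX, figY;              // figure position in maze pixel coordinates
            private static final int VIEW = 300; // size of the visible square, in pixels

            MazeView(BufferedImage maze, BufferedImage figure) {
                this.maze = maze;
                this.figure = figure;
                setPreferredSize(new Dimension(VIEW, VIEW));
            }

            void moveFigure(int dx, int dy) {    // call this from the arrow-key handler
                figX += dx;
                figY += dy;
                repaint();                       // repaints only this panel
            }

            @Override
            protected void paintComponent(Graphics g) {
                super.paintComponent(g);
                // top-left corner of the visible window, clamped to the maze bounds
                int sx = Math.max(0, Math.min(figX - VIEW / 2, maze.getWidth() - VIEW));
                int sy = Math.max(0, Math.min(figY - VIEW / 2, maze.getHeight() - VIEW));
                // copy the VIEW x VIEW source region onto the panel
                g.drawImage(maze, 0, 0, VIEW, VIEW, sx, sy, sx + VIEW, sy + VIEW, null);
                // draw the figure relative to the visible window
                g.drawImage(figure, figX - sx, figY - sy, null);
            }
        }

    Swing's double buffering makes repainting this one panel on every key press cheap, so no separate "layer" component is needed for the figure.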

    Read the article

  • Why can't I implement this simple CSS?

    - by nXqd
        <!DOCTYPE html>
        <html lang="en">
        <head>
          <title>Enjoy BluePrint</title>
          <link rel="stylesheet" href="css/blueprint/screen.css" type="text/css" media="screen, projection">
          <link rel="stylesheet" href="css/blueprint/print.css" type="text/css" media="print">
          <!--[if lt IE 8]><link rel="stylesheet" href="css/blueprint/ie.css" type="text/css" media="screen, projection"><![endif]-->
          <!-- <link rel="stylesheet" href="global.css" type="text/css" media="screen"> -->
          <script type="text/css">
            h1.logo {
              width: 181px;
              height: 181px;
              background: url("img/logo.png");
              text-indent: -9999px;
            }
          </script>
        </head>
        <body>
          <div class="container">
            <!-- Header -->
            <div id="header" class="span-24">
              <div id="logo" class="span-6">
                <h1 class="logo">This is my site</h1>
              </div>
              <div id="script" class="span-10">
                <p>Frank Chimero is a graphic designer, illustrator, teacher, maker, writer, thinker-at-large in Portland, Oregon.</p>
              </div>
              <div id="contact" class="span-8 last">
                contact
              </div>
            </div>
            <!-- Content -->
            <div id="main-content" class="span-12">
              <h3>DISCOVERY</h3>
              <p>My fascination with the creative process, curiosity, and visual experience informs all of my work in some way. Each piece is the part of an exploration in finding wit, surprise, honesty, and joy in the world around us, then, trying to document those things with all deliberate speed before they vanish.</p><br/>
              <p>Our creative output can have a myriad intended outcomes: to inform, to persuade or sell, or delight. There are many other creative people who do well in servicing the needs to inform or persuade, but there are not many out there who have taken up the mantle of delighting people. I’ll try my best.</p><br/>
              <p>It’s not about pretty; it is about beauty. Beauty in form, sure, but also beauty in the fit of a bespoke idea that transcends not only the tasks outlined, but also fulfilling the objectives that caused the work to be produced in the first place.</p><br/>
              <p>The best creative work connects us by speaking to what we share. From that, we hope to make things that will last. Work made without staying power and lasting relevance leads to audiences that are fickle, strung along on a diet of crumbs.</p><br/>
              <p>The work should be nourishing in some way, both while a creative person is making it, but also while someone consumes it. When I think of all my favorite books, movies, art and albums, they all make me a little less alone and a little more sentient. Perhaps that is what making is for: to document the things that make us feel most alive.</p>
            </div>
            <!-- Side -->
            <div id="award" class="span-4">
              Awards
            </div>
            <div id="right-sidebar" class="span-8 last">
              Right sidebar
            </div>
          </div>
        </body>
        </html>

    I'm 100% sure the code is correct, but I can't replace the image at h1.logo. When I try it in a live CSS-editing tool it works fine. Thanks for reading :)

    Read the article

  • ViewController vs. View

    - by James
    Trying to wrap my head around the Apple design scheme. I have a UIViewController and the corresponding XIB file that contains the main screen of my application. I want to have a button on this XIB that displays another "form" (this is my disconnect) in the foreground, where the user selects from a myriad of choices, and then hides that "form" and goes back to the first one. I'm completely lost here. Initially I thought I'd just add another view, set the self.view of my controller to the new view, add another IBAction and call it a day, but I can't seem to make that work. For the sake of argument, say I want to "gray out" the current form and have a modal-type window that takes up roughly 60% of the screen and requires you to select an option; then it hides itself and we go back to normal. What is the standard approach here? Thanks

    Read the article

  • Pros / Cons displaying list of users at login page

    - by Radu094
    We seem to have a lot of clients asking us to change the login screen in this manner:

    1. Display a list of all available users (thumbnail picture + name)
    2. The user selects a username from the list
    3. A password prompt appears near the username
    4. The user enters the password and presses Enter

    This sounds remarkably similar to the Windows XP login, which is probably where they got the idea in the first place. There are only about 4-5 different users that can log in at any given station, so implementing that list on one screen is feasible. So I was wondering if there are any usability experts with some word on this method of login. As far as I can tell, MS dropped this behaviour in Vista/Win7, didn't they?

    Read the article

  • Visual Editor vs Manual code

    - by Albinoswordfish
    I'm not sure how it is with other frameworks, but this question is strictly about Java Swing. Is it better to use a visual editor to place components, or to code their placement onto the frame manually (layout managers or null layouts)? In my experience I've had a lot of trouble with visual editors when it comes to different screen resolutions or changing the window size. When I place components in code I've found that my GUIs behave much better with regard to screen size. However, when I want to change a small part of my GUI it takes a lot more work than with a visual editor. Just wondering what people's thoughts are on this.
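
    For what it's worth, the hand-coded, layout-manager style the question refers to usually looks something like the sketch below: nothing is given pixel coordinates, so the managers recompute positions whenever the window is resized or shown at a different resolution. This is purely an illustrative example, not code from the question.

        import java.awt.BorderLayout;
        import java.awt.FlowLayout;
        import javax.swing.*;

        // Minimal hand-coded Swing layout: a text area that grows with the window
        // and a button row that keeps its preferred height at the bottom.
        public class ManualLayoutDemo {
            public static void main(String[] args) {
                SwingUtilities.invokeLater(() -> {
                    JFrame frame = new JFrame("Manual layout");
                    frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                    frame.setLayout(new BorderLayout(8, 8));

                    frame.add(new JScrollPane(new JTextArea(10, 40)), BorderLayout.CENTER);

                    JPanel buttons = new JPanel(new FlowLayout(FlowLayout.RIGHT));
                    buttons.add(new JButton("OK"));
                    buttons.add(new JButton("Cancel"));
                    frame.add(buttons, BorderLayout.SOUTH);

                    frame.pack();          // size comes from preferred sizes, not fixed pixels
                    frame.setVisible(true);
                });
            }
        }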

    Read the article

  • Is there a way to change the maximum width of a window without using the WM_GETMINMAXINFO message?

    - by David
    I want to change the imposed Windows maximum width that a window can be resized to, for an external application's window (not my C#/WinForms program's window). The documentation of GetSystemMetrics for SM_CXMAXTRACK says: "The default maximum width of a window that has a caption and sizing borders, in pixels. This metric refers to the entire desktop. The user cannot drag the window frame to a size larger than these dimensions. A window can override this value by processing the WM_GETMINMAXINFO message." Is there a way to modify this SM_CXMAXTRACK value (either system wide or for one particular window), without processing the WM_GETMINMAXINFO message? Maybe an undocumented function, a registry setting, etc.? (Or: The documentation for MINMAXINFO.ptMaxTrackSize says: "This value is based on the size of the virtual screen and can be obtained programmatically from the system metrics SM_CXMAXTRACK and SM_CYMAXTRACK." Maybe there is a way to change the size of the virtual screen?) Thank you

    Read the article

  • iPhone: Save with validation on back navigation

    - by iPhone beginner
    In my iPhone application I have a navigation controller, a main screen and some edit screens. On an edit screen the user enters some input that has to be validated before I can save it. Ideally I would like to update the data automatically on back navigation, without an additional "Done" button. Can I validate and save on back navigation (i.e. when the user taps the standard back button) in a way that lets me stop the navigation and show an error message if something is wrong? I see several other possibilities:

    1. Create my own custom left button and make it look like the standard back button. (Why didn't Apple put this button style into the public API?)
    2. Add a "Done" button and save the data only if the user taps it.

    But I like both of these choices much less, so if there is a way to achieve what I want, I'd like to use it.

    Read the article

  • Positioning elements outside an Activity on Android

    - by Aleksander Kmetec
    Is there a way to absolutely position a UI element on Android so that it is located outside an Activity? For example, can you create a fullscreen ImageView simply by moving/resizing an ImageView inside an existing regular Activity, instead of creating a new fullscreen activity? EDIT: Re-reading my question, I see I wasn't very clear about what I'm trying to accomplish. I'd like to temporarily extend an element to cover the notification bar at the top of the screen. I need to create a semi-translucent fullscreen overlay, but since translucent activities cannot cover the notification bar, I'm trying to find out whether it's possible for an element to break out of the activity's bounds and resize itself to fill the whole screen, top to bottom.
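
    One workaround that is often suggested for this situation (my assumption, not something stated in the question) is not to break out of the activity at all, but to hide the status bar for as long as the overlay is visible by toggling the window's FLAG_FULLSCREEN flag; the overlay view can then fill the screen top to bottom. A rough sketch, with illustrative helper names:

        import android.app.Activity;
        import android.view.View;
        import android.view.WindowManager;

        // Illustrative helper: hide the notification/status bar while the overlay
        // is showing, then restore it when the overlay is dismissed.
        final class OverlayHelper {
            static void showOverlay(Activity activity, View overlay) {
                activity.getWindow().addFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN); // status bar hidden
                overlay.setVisibility(View.VISIBLE);  // overlay can now cover the full height
            }

            static void hideOverlay(Activity activity, View overlay) {
                overlay.setVisibility(View.GONE);
                activity.getWindow().clearFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN); // status bar restored
            }
        }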

    Read the article

  • Deferred Rendering with Diffuse, Specular, and Normal Maps

    - by John
    I have been reading up on deferred rendering and I am trying to implement a renderer using the Sponza atrium model (which can be found here) as my sandbox. Note that I am also using OpenGL 3.3 and GLSL. I am loading the model from a Wavefront OBJ file using Assimp and extracting all geometry information, including tangents and bitangents. For each aiMaterial I extract the following information, which essentially comes from the sponza.mtl file: the ambient/diffuse/specular/emissive reflectivity coefficients (Ka, Kd, Ks, Ke), the shininess, and the diffuse, specular and normal maps. I understand that I must render vertex attributes such as position, normals and texture coordinates to textures, as well as depth, for the second render pass. A lot of resources mention putting colour information into a g-buffer in the initial render pass, but don't you need the diffuse, specular and normal maps, and therefore the lights, to determine the fragment colour? I know that doesn't make sense, because lighting should be done in the second render pass. In terms of normal mapping, do you essentially just pass the tangents, bitangents and normals into g-buffers, then construct the tangent matrix and apply it to the normal sampled from the normal map? Ultimately, I would like to know how to incorporate this material information into my deferred renderer.

    Read the article

  • Ways to dynamically render a real world 3d environment in Unity3D

    - by Jake M
    Using Unity3D and C#, I am attempting to display a 3D version of a real-world location. Inside my Unity3D app, the user will specify the GPS coordinates of a location, and my app will have to generate a 3D plane (it doesn't have to be a plane) of that location. The plane will show a 500 metre by 500 metre 3D snapshot of that location. How would you suggest I achieve this in Unity3D? What methodology would you use? NOTE: I understand that this is a very difficult endeavour (rendering real-world locations dynamically in Unity3D), so I expect to perform many steps to achieve it. I just don't know all the technologies out there and which would be best for my needs. For example:

    Suggested methodology 1:

    1. Prompt the user to specify GPS coords
    2. Use the Google Earth API and HTTP to programmatically obtain a .khm file describing that location (not sure if Google Earth provides that capability - does it?)
    3. Unzip the .khm so I have the .dae file
    4. Convert that file to a .3ds file using ??? third-party converter (does such a converter exist?)
    5. Import the .3ds into Unity3D at runtime as a plane (is this possible?)

    Suggested methodology 2:

    1. Prompt the user to specify GPS coords
    2. Use the Google Earth API and HTTP to programmatically obtain a .khm file describing that location (not sure if Google Earth provides that capability - does it?)
    3. Unzip the .khm so I have the .dae file
    4. Parse the .dae file using my own C# parser that I will write (do you think it's possible to write a .dae parser that turns the .dae into an array of Vector3 describing the height map of that location?)
    5. Dynamically create a plane in Unity3D and populate it with my array/list of Vector3 points (is it possible to create a plane this way? Maybe I am meant to create a mesh instead of a plane?)

    Can you think of any other ways I could render a real-world 3D environment in Unity3D?

    Read the article

  • How to dismiss modal view controller from UITabBarController

    - by user563697
    Currently I'm developing an iPhone game. When the app loads, a login page is shown. After logging in, the login view controller presents a welcome-screen view controller with a tab bar (a UITabBarController ivar declared inside and connected to the tab bar controller in Interface Builder), using presentModalViewController. The first tab deals with the account and is loaded from the accountController nib and view controller; inside it there is a logout button. When it is tapped, I need to go back to the login page under the login view controller. Inside the logout button's action method I coded this: [self dismissModalViewControllerAnimated:NO]; but nothing happens on the button click. So: the parent is the login view controller and the child is the welcome-screen view controller. From inside the welcome screen, in the account tab, on the logout button click, how can I dismiss the modal view controller presented above? Can anyone give me a solution as soon as possible - it's urgent.

    Read the article

  • Composite Views and View Controllers

    - by BillyK
    Hi, I'm somewhat new to Android and am in the process of designing an application with a couple fairly complex views. One of the views is intended to involve a complex view displaying information associated with model objects and segregated into several different views; the navigation of which is meant to be achieved using sliding effects (i.e. sliding one's finger across the screen to traverse from one screen to the next, etc). The view itself will be used to host multiple sets of views for varying types of model objects, but with a general structure that is reused between them all. As a rough example, the view might come up to display information about a person (the model object), displaying several details views: a view for general information, a view displaying a list of hobbies, and a view displaying a list of other individuals associated with their social network. The same general view, when given a model object representing a particular car would give several different views: A general view with details, A view containing photo images for that vehicle, a view representing locations which it could be purchased from, and a view providing a list of related cars. (NOTE: This is not the real data involved, but is representative of the general intent for the view). The subviews will NOT cover the entire screen real-estate and other features within the view should be both visible and able to be interacted with by the user. The idea here is that there is a general view structure that is reusable and which will manage a set of subviews dynamically generated based upon the type of model object handed to the view. I'm trying to determine the appropriate way to leverage the Android framework in order to best achieve this without violating the integrity of the framework. Basically, I'm trying to determine how to componentize this larger view into reusable units (i.e. general view, model-specific sub-view controllers, and individual detail views). To be more specific, I'm trying to determine if this view is best designed as a composite of several custom View classes or a composite of several Activity classes. I've seen several examples of custom composite views, but they typically are used to compose simple views without complex controllers and without attention to the general Activity lifecycle (i.e. storing and retrieving data related to the model objects, as appropriate). On the other hand, the only real example I've seen regarding a view comprised of a composite of Activities is the TabActivity itself, and that isn't customizable in the fashion that would be necessary for this. Does anyone have any suggestions as to the appropriate way to structure my view to achieve the application framework I'm looking for? Views? Activities? Something else? Any guidance would be appreciated. Thanks in advance.
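
    To make the "composite of several custom View classes" option concrete, here is a very rough sketch of a reusable compound view that rebuilds its sub-views from whatever model it is handed. The names (DetailSection, DetailPager, bind) are hypothetical, not from the question, and the sliding navigation and lifecycle handling (saving/restoring model data) would still live in the single host Activity.

        import java.util.List;
        import android.content.Context;
        import android.util.AttributeSet;
        import android.view.View;
        import android.widget.LinearLayout;
        import android.widget.TextView;

        // Each model type supplies its own list of sections (general info, hobbies,
        // photos, ...); the compound view only knows how to lay them out.
        interface DetailSection {
            String getTitle();
            View createContentView(Context context);
        }

        // Reusable container: bind() tears down the old sub-views and builds new
        // ones for the model object currently being displayed.
        public class DetailPager extends LinearLayout {
            public DetailPager(Context context, AttributeSet attrs) {
                super(context, attrs);
                setOrientation(VERTICAL);
            }

            public void bind(List<DetailSection> sections) {
                removeAllViews();
                for (DetailSection section : sections) {
                    TextView header = new TextView(getContext());
                    header.setText(section.getTitle());
                    addView(header);
                    addView(section.createContentView(getContext()));
                }
            }
        }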

    Read the article

  • Output all the page's media queries in a list

    - by alecrust
    Using JavaScript, what would be the best way to output a list of all the media queries being applied to the current page? I assume this would need filtering to find media queries embedded in link elements, i.e.

        <link rel="stylesheet" media="only screen and (min-width: 30em)" href="/css/30em.css">

    as well as media queries located in CSS files, i.e.

        @media only screen and (min-width: 320px) {}

    An example of the output I'm looking for:

        <p>There are 3 media queries loaded on this page</p>
        <ol>
          <li>30em</li>
          <li>40em</li>
          <li>960px</li>
        </ol>

    Read the article

  • OpenGL sprites and point size limitation

    - by Srdan
    I'm developing a simple particle system that should perform well on mobile devices (iOS, Android). My plan was to use the GL_POINT_SPRITE/GL_PROGRAM_POINT_SIZE method because of its efficiency (GL_POINTS are enough), but after some experimenting I found myself in trouble: sprite size is limited (usually to 64 pixels). I'm calculating the size with the formula gl_PointSize = in_point_size * some_factor / distance_to_camera to make particle sizes proportional to the distance from the camera, but at some point, when the camera gets close enough, the size limitation kicks in and the whole system starts looking unrealistic. Is there a way to avoid this problem? If not, what's the alternative? I was thinking of manually generating a billboard quad for each particle, and I have some questions about that approach. I guess the minimum geometry data would be four vertices per particle and an index array to make quads from those vertices (with GL_TRIANGLE_STRIP). Additionally, for each vertex I need a color and a texture coordinate, and I would put all of that in an interleaved vertex array. But as you can see, there is a lot of redundancy: all vertices of the same particle share the same color value, and the four texture coordinates are the same for every particle. Because of how glDrawArrays/glDrawElements work, I see no way to optimise this. Do you know a better way to organise per-particle data? Should I use buffer objects or client-side vertex arrays, or is there no difference, since I have to update all the particles' data every frame anyway? And where should the particle simulation run, on the CPU or in the vertex processor? Something tells me a mobile CPU would do it faster than its vertex unit (at least today, in 2012 :). So, any advice on how to make a simple and efficient particle system without the particle size limitation, for mobile devices, would be appreciated. (The animation of the camera passing through the particles should look realistic.)
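
    On the geometry-redundancy point, one common arrangement (an assumption on my part, not from the post) is to keep a single static index buffer that never changes and draw the quads as GL_TRIANGLES with six indices per particle, so each frame only the interleaved vertex data has to be re-uploaded; joining separate quads into one GL_TRIANGLE_STRIP would instead require degenerate triangles between particles. A Java/GLES-style sketch of building that index array (good for up to roughly 16k particles with 16-bit indices):

        // Builds 6 indices per particle over its 4 corner vertices, ordered
        // bottom-left, bottom-right, top-left, top-right. Upload once to a
        // GL_ELEMENT_ARRAY_BUFFER and reuse it every frame.
        short[] buildQuadIndices(int maxParticles) {
            short[] indices = new short[maxParticles * 6];
            for (int i = 0; i < maxParticles; i++) {
                int v = i * 4;   // first corner vertex of this particle
                int k = i * 6;   // first index slot for this particle
                indices[k]     = (short)  v;
                indices[k + 1] = (short) (v + 1);
                indices[k + 2] = (short) (v + 2);
                indices[k + 3] = (short) (v + 2);
                indices[k + 4] = (short) (v + 1);
                indices[k + 5] = (short) (v + 3);
            }
            return indices;
        }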

    Read the article

  • How do I make a child control re-anchor to its parent Form when it has been cut off on a small resolution?

    - by Paul Fedory
    I have a Windows Form with a default size of 1100 x 400, and I have a DataGridView control on it anchored to Top, Left, Bottom, Right. Resizing the form on a screen with resolution higher than 1100 x 400 works fine, and the anchoring works well, resizing the DataGridView control as expected. When I launch the form on a screen with resolution 800x600, the form is cut off, and made to fit the 800 x 600. The DataGridView is cut off, and cannot be seen entirely - it bleeds off the form to the right, so it's not respecting the right anchor. Resizing the form in this situation doesn't respect the anchoring settings for some reason: the DataGridView control does not resize when the form is resized. Is there a way programmatically (on a resize event or something) to force the child DataGridView control to anchor to the sides of the form? I've already tried calling a PerformLayout and Refresh in the Form's resize event but it's rather redundant, isn't it?

    Read the article

  • Circle collision detection and Vector math: HELP?

    - by Griffin
    Hey, so I'm currently going through the Wildbunny blog to learn about collision detection, but I'm a bit confused about how the vectors he's talking about come into play. Quoting the blog:

        p = ||A-B|| - (r1+r2)

    The two spheres are penetrating by distance p. We would also like the penetration vector so that we can correct the penetration once we discover it. This is the vector that moves both circles to the point where they just touch, correcting the penetration. Importantly it is not only just a vector that does this, it is the only vector which corrects the penetration by moving the minimum amount. This is important because we only want to correct the error, not introduce more by moving too much when we correct, or too little.

        N = (A-B) / ||A-B||
        P = N*p

    Here we have calculated the normalised vector N between the two centres and the penetration vector P by multiplying our unit direction by the penetration distance.

    OK, so I understand that p is the distance by which the circles penetrate each other, but I don't get what exactly N and P are. It seems to me that N is just the coordinates of the third point of the right triangle formed by points A and B (A-B), divided by the hypotenuse of that triangle, i.e. the distance between A and B (||A-B||). What's the significance of this? Also, what is the penetration vector used for? It seems to me like a movement that one of the circles would perform to get un-penetrated.
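
    For reference, the quoted formulas written out in one place (this is just a restatement of the blog's math, with d standing for the distance between the centres):

        % d is the centre distance, r_1 and r_2 the radii
        \begin{align*}
          d &= \lVert A - B \rVert, & p &= d - (r_1 + r_2), \\
          N &= \frac{A - B}{d},     & P &= p\,N.
        \end{align*}

    Read this way, N is not a point at all: it is the unit-length direction from B's centre towards A's centre, and P is a displacement along that line of centres whose length is |p|, i.e. the smallest move that makes the circles just touch (the circles overlap exactly when p < 0).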

    Read the article

  • Question about a sketching website on the iPad, dragging and touching

    - by pfunc
    I've been testing out the new functionality of HTML5 and JS to create a sketching site. I've been looking into this for a possible client who wants their site to be iPad accessible, but also to have drawing features on it. So I created a rough experiment where you can drag your mouse across the screen to draw lines. I went to test it on an iPad and realized this doesn't work. Why? Because dragging on an iPad is reserved for actually dragging (scrolling) the screen around. Is there something you can do to get around this? I'm sure this could be done in a native app, but what about a normal website?

    Read the article

  • WindowStartupLocation.CenterScreen is opening my window on the wrong display

    - by chaiguy
    According to the documentation, CenterScreen should open the window in the center of the screen that contains the mouse cursor, however this is not happening. I have a dual monitor setup, with my secondary monitor appearing to the left of my primary monitor (the one with the task bar). When the user clicks a link in my interface on the primary (right-most) monitor, and I open a new window with WindowStartupLocation set to CenterScreen, the window appears in the center of the screen of the left monitor. This is not where the mouse is and I am lost as to why it is picking that display. Has anyone experienced this before or have any ideas for a workaround? If I can't get the proper behavior using WindowStartupLocation, I will just manually move the window to the proper display (I still have to figure that out though).

    Read the article
