Search Results

Search found 555 results on 23 pages for 'viewport'.


  • Android browser scaling?

    - by Joren
    I'm trying to create a mobile website for Android. When I set the width of the body to 480px (the width of the screen), the result is about 50% larger than I expect. It seems that Android is scaling what it draws and breaking all my layouts. Does anyone know how to disable this, or work around it? I'm already using this: <meta name="viewport" content="width=device-width, height=device-height, user-scalable=no, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0" />
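    A likely cause (offered here as a hedged note, not from the original thread): Android's default density scaling. A 480px-wide hdpi screen is treated as 320 CSS pixels, so everything is drawn 1.5x larger. On Android 2.x-era browsers this could be switched off with the Android-specific target-densitydpi viewport property, roughly like this:
        <!-- assumption: an Android WebKit browser that still honours target-densitydpi -->
        <meta name="viewport" content="width=device-width, target-densitydpi=device-dpi, initial-scale=1.0, maximum-scale=1.0, user-scalable=no" />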

    Read the article

  • Recording custom overlay on iPhone

    - by Marc
    Hi all, I'm interested in recording a video with a custom overlay that would end up in the video itself. The overlay could be a UIImage or, even better, an OpenGL viewport. Is such a thing even possible right now on any iPhone device/SDK? Thanks

    Read the article

  • change elements' css immediately on scroll event

    - by jedierikb
    I want to change the background color of in-viewport elements (inside a container using overflow: scroll). Here was my first attempt: http://jsfiddle.net/2YeZG/ As you can see, there is a brief flicker of the previous color before the new color is painted. Others have had similar problems. Following the HTML5 Rocks instructions, I tried to introduce requestAnimationFrame to fix this problem, to no avail: http://jsfiddle.net/RETbF/ What am I doing wrong here?

    Read the article

  • Can ggplot2 work with R's canvas backend?

    - by Casbon
    Having installed canvas from here http://www.rforge.net/canvas/files/ I try to plot:
        > canvas('test.js')
        > qplot(rnorm(100), geom='histogram')
        stat_bin: binwidth defaulted to range/30. Use 'binwidth = x' to adjust this.
        Error in grid.Call.graphics("L_setviewport", pvp, TRUE) :
          Non-finite location and/or size for viewport
        >

    Read the article

  • How do I efficiently display items on a Google map within the user's viewing range?

    - by Cory
    When a user moves the Google map, I would like to automatically display items in the user's viewing range. How can I display the items efficiently and quickly? I have a basic understanding of calling getBounds() every time the user moves the map, but I am not sure how to efficiently query my database for the items within the lat/lng bounds of the current viewport. Is there an easier and faster way of doing this?

    Read the article

  • XmlReader: how to catch syntax errors in the XML file?

    - by mishal153
    Hi, I have an XML file with syntax errors, e.g. <Viewport thisisbad Left="0" Top="0" Width="1280" Height="720" >. When I create an XmlReader it does not throw any errors. Is there a way to do syntax checking automatically, like XmlDocument does? I have tried setting various XmlReaderSettings flags but found nothing useful.
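    A likely explanation: XmlReader is a forward-only, pull-based parser, so XmlReader.Create only wraps the input; a well-formedness error is not reported until Read() actually reaches the offending markup, whereas XmlDocument.Load parses the whole file up front. A minimal sketch (the file name is mine) that forces the reader through the document and catches the error:
        using System;
        using System.Xml;

        class XmlSyntaxCheck
        {
            static void Main()
            {
                try
                {
                    using (XmlReader reader = XmlReader.Create("viewports.xml"))
                    {
                        while (reader.Read())
                        {
                            // nothing to do: reading to the end is what surfaces syntax errors
                        }
                    }
                    Console.WriteLine("Document is well-formed.");
                }
                catch (XmlException ex)
                {
                    Console.WriteLine("Syntax error at line {0}, position {1}: {2}",
                        ex.LineNumber, ex.LinePosition, ex.Message);
                }
            }
        }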

    Read the article

  • Google Maps API v3 show local businesses/points of interest

    - by julio
    I know at some point I'd come across a Google service built into their API that would provide local information as markers on the viewport (schools, hospitals, etc.)-- I believe this was a simple control that could be added just like the normal map controls, to allow users to turn it on or off with a checkbox or button. I can't seem to find this feature documented anywhere in the v3.0 API docs. Can anyone provide some information on this, or let me know if it was deprecated? Thanks

    Read the article

  • XNA: Rotating Bones

    - by MLM
    XNA 4.0 I am trying to learn how to rotate bones on a very simple tank model I made in Cinema 4D. It is rigged by 3 bones, Root - Main - Turret - Barrel I have binded all of the objects to the bones so that all translations/rotations work as planned in C4D. I exported it as .fbx I based my test project after: http://create.msdn.com/en-US/education/catalog/sample/simple_animation I can build successfully with no errors but all the rotations I try to do to my bones have no effect. I can transform my Root successfully using below but the bone transforms have no effect: myModel.Root.Transform = world; Matrix turretRotation = Matrix.CreateRotationY(MathHelper.ToRadians(37)); Matrix barrelRotation = Matrix.CreateRotationX(barrelRotationValue); MainBone.Transform = MainTransform; TurretBone.Transform = turretRotation * TurretTransform; BarrelBone.Transform = barrelRotation * BarrelTransform; I am wondering if my model is just not right or something important I am missing in the code. Here is my Game1.cs using System; using System.Collections.Generic; using System.Linq; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Audio; using Microsoft.Xna.Framework.Content; using Microsoft.Xna.Framework.GamerServices; using Microsoft.Xna.Framework.Graphics; using Microsoft.Xna.Framework.Input; using Microsoft.Xna.Framework.Media; namespace ModelTesting { /// <summary> /// This is the main type for your game /// </summary> public class Game1 : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; SpriteBatch spriteBatch; float aspectRatio; Tank myModel; public Game1() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; } /// <summary> /// Allows the game to perform any initialization it needs to before starting to run. /// This is where it can query for any required services and load any non-graphic /// related content. Calling base.Initialize will enumerate through any components /// and initialize them as well. /// </summary> protected override void Initialize() { // TODO: Add your initialization logic here myModel = new Tank(); base.Initialize(); } /// <summary> /// LoadContent will be called once per game and is the place to load /// all of your content. /// </summary> protected override void LoadContent() { // Create a new SpriteBatch, which can be used to draw textures. spriteBatch = new SpriteBatch(GraphicsDevice); // TODO: use this.Content to load your game content here myModel.Load(Content); aspectRatio = graphics.GraphicsDevice.Viewport.AspectRatio; } /// <summary> /// UnloadContent will be called once per game and is the place to unload /// all content. /// </summary> protected override void UnloadContent() { // TODO: Unload any non ContentManager content here } /// <summary> /// Allows the game to run logic such as updating the world, /// checking for collisions, gathering input, and playing audio. /// </summary> /// <param name="gameTime">Provides a snapshot of timing values.</param> protected override void Update(GameTime gameTime) { // Allows the game to exit if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed) this.Exit(); // TODO: Add your update logic here float time = (float)gameTime.TotalGameTime.TotalSeconds; // Move the pieces /* myModel.TurretRotation = (float)Math.Sin(time * 0.333f) * 1.25f; myModel.BarrelRotation = (float)Math.Sin(time * 0.25f) * 0.333f - 0.333f; */ base.Update(gameTime); } /// <summary> /// This is called when the game should draw itself. 
/// </summary> /// <param name="gameTime">Provides a snapshot of timing values.</param> protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.CornflowerBlue); // Calculate the camera matrices. float time = (float)gameTime.TotalGameTime.TotalSeconds; Matrix rotation = Matrix.CreateRotationY(MathHelper.ToRadians(45)); Matrix view = Matrix.CreateLookAt(new Vector3(2000, 500, 0), new Vector3(0, 150, 0), Vector3.Up); Matrix projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, graphics.GraphicsDevice.Viewport.AspectRatio, 10, 10000); // TODO: Add your drawing code here myModel.Draw(rotation, view, projection); base.Draw(gameTime); } } } And here is my tank class: using System; using System.Collections.Generic; using System.Linq; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Audio; using Microsoft.Xna.Framework.Content; using Microsoft.Xna.Framework.GamerServices; using Microsoft.Xna.Framework.Graphics; using Microsoft.Xna.Framework.Input; using Microsoft.Xna.Framework.Media; namespace ModelTesting { public class Tank { Model myModel; // Array holding all the bone transform matrices for the entire model. // We could just allocate this locally inside the Draw method, but it // is more efficient to reuse a single array, as this avoids creating // unnecessary garbage. public Matrix[] boneTransforms; // Shortcut references to the bones that we are going to animate. // We could just look these up inside the Draw method, but it is more // efficient to do the lookups while loading and cache the results. ModelBone MainBone; ModelBone TurretBone; ModelBone BarrelBone; // Store the original transform matrix for each animating bone. Matrix MainTransform; Matrix TurretTransform; Matrix BarrelTransform; // current animation positions float turretRotationValue; float barrelRotationValue; /// <summary> /// Gets or sets the turret rotation amount. /// </summary> public float TurretRotation { get { return turretRotationValue; } set { turretRotationValue = value; } } /// <summary> /// Gets or sets the barrel rotation amount. /// </summary> public float BarrelRotation { get { return barrelRotationValue; } set { barrelRotationValue = value; } } /// <summary> /// Load the model /// </summary> public void Load(ContentManager Content) { // TODO: use this.Content to load your game content here myModel = Content.Load<Model>("Models\\simple_tank02"); MainBone = myModel.Bones["Main"]; TurretBone = myModel.Bones["Turret"]; BarrelBone = myModel.Bones["Barrel"]; MainTransform = MainBone.Transform; TurretTransform = TurretBone.Transform; BarrelTransform = BarrelBone.Transform; // Allocate the transform matrix array. 
boneTransforms = new Matrix[myModel.Bones.Count]; } public void Draw(Matrix world, Matrix view, Matrix projection) { myModel.Root.Transform = world; Matrix turretRotation = Matrix.CreateRotationY(MathHelper.ToRadians(37)); Matrix barrelRotation = Matrix.CreateRotationX(barrelRotationValue); MainBone.Transform = MainTransform; TurretBone.Transform = turretRotation * TurretTransform; BarrelBone.Transform = barrelRotation * BarrelTransform; myModel.CopyAbsoluteBoneTransformsTo(boneTransforms); // Draw the model, a model can have multiple meshes, so loop foreach (ModelMesh mesh in myModel.Meshes) { // This is where the mesh orientation is set foreach (BasicEffect effect in mesh.Effects) { effect.World = boneTransforms[mesh.ParentBone.Index]; effect.View = view; effect.Projection = projection; effect.EnableDefaultLighting(); } // Draw the mesh, will use the effects set above mesh.Draw(); } } } }

    Read the article

  • SpriteBatch being drawn outside of Stage?

    - by pyko
    Currently working on my first game, though running into some problems with libGDX and screen aspect ratio. What I have is a Stage which contains things like menu buttons etc., and the rest of the game is pretty much sprites being drawn via SpriteBatch. To avoid having multiple SpriteBatches and cameras, I have re-used the ones that are created when the Stage is created:
        stage = new Stage(WIDTH, HEIGHT, true); // keep aspect ratio
        batch = stage.getSpriteBatch();
        camera = (OrthographicCamera) stage.getCamera();
        // move camera so 'active' screen is centred
        stage.getCamera().translate(-stage.getGutterWidth(), -stage.getGutterHeight(), 0);
    Anything that is Stage/Actor related is drawn fine: it all stays within the aspect-ratio-adjusted boundaries. The problem I'm having is that anything drawn via SpriteBatch seems to ignore the viewport defined by the Stage and can be visible outside of the Stage area:
        batch.begin();
        ...
        sirWuffles.draw(batch);
        ...
        batch.end();
    For example, in the above, if Sir Wuffles is positioned outside of the defined WIDTH/HEIGHT he might still appear in the "gutters" of the screen. I tried to explain it in the screenshot below: it's an exaggerated screen ratio to make the gutters large, and I've covered most of the gutter area with a blue/cyan rectangle so they are obvious. Does anyone know what is happening, and how to fix it? Currently, my "fix" is to use ShapeRenderer to draw rectangles that correspond to the gutters on top of the sprites...

    Read the article

  • Screen resolution of Googlebot mobile?

    - by Baumr
    Does Googlebot-Mobile have a viewport resolution it sends across? If so, what is it? It's a general question with broad relevance, but I am asking with reference to responsive design: particularly when serving different image resolutions to different viewports via JavaScript. While Googlebot has its issues with JavaScript, it will become better with time. Thus, it would be good to know which version of the same image would be crawled (since most responsive image JS solutions base their logic on resolution).
    Feature phone Googlebot-Mobile user agents:
        SAMSUNG-SGH-E250/1.0 Profile/MIDP-2.0 Configuration/CLDC-1.1 UP.Browser/6.2.3.3.c.1.101 (GUI) MMP/2.0 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)
        DoCoMo/2.0 N905i(c100;TB;W24H16) (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)
    Smartphone Googlebot-Mobile user agent:
        Mozilla/5.0 (iPhone; U; CPU iPhone OS 4_1 like Mac OS X; en-us) AppleWebKit/532.9 (KHTML, like Gecko) Version/4.0.5 Mobile/8B117 Safari/6531.22.7 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)

    Read the article

  • What is the Xbox360's D3DRS_VIEWPORTENABLE equivalent on WinXP D3D9?

    - by Jim Buck
    I posted this on Stack Overflow, but of course it should be posted here. I am maintaining a multiplatform codebase for Xbox360 and WinXP. I am seeing an issue on the XP side that appears to be related to D3DRS_VIEWPORTENABLE on the Xbox360 version not having an equivalent on WinXP D3D9. This article had an interesting idea, but the only way to construct an identity matrix would be to supply negative numbers to D3DVIEWPORT9::X and D3DVIEWPORT9::Height, and those are unsigned. (I tried to put in negative numbers anyway, but nothing interesting happened.) So, how does one emulate the behavior of D3DRS_VIEWPORTENABLE under WinXP/D3D9? (For clarity, the result I'm seeing is that a 2d screen-aligned quad works fine on Xbox360 but is offset/stretched on WinXP. In fact, the (0, 0) starts in the center of the screen on WinXP instead of in the lower-left corner like on the Xbox360 as a result of applying the viewport transform.) Update: I didn't have an Xbox360 devkit at the time I wrote up this question, but I've since gotten one. I commented out the disabling of the D3DRS_VIEWPORTENABLE state, and the exact same behavior resulted on the Xbox360 as on the WinXP build. So, there must be some DirectX magic to bridge the gap here for emulating D3DRS_VIEWPORTENABLE being turned off on WinXP.

    Read the article

  • Inverted textures

    - by brainydexter
    I'm trying to draw textures aligned with a physics body whose coordinate system's origin is at the center of the screen. (XNA) SpriteBatch has its default origin at the top-left corner. I got the textures positioned correctly, but I noticed my textures are vertically inverted. That is, an arrow texture pointing up, when rendered, points down. I'm not sure where I am going wrong with the math. My approach is to convert everything into the physics engine's meter units and draw accordingly.
        Matrix proj = Matrix.CreateOrthographic(scale * graphics.GraphicsDevice.Viewport.AspectRatio, scale, 0, 1);
        Matrix view = Matrix.Identity;
        effect.World = Matrix.Identity;
        effect.View = view;
        effect.Projection = proj;
        effect.TextureEnabled = true;
        effect.VertexColorEnabled = true;
        effect.Techniques[0].Passes[0].Apply();
        SpriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend, null,
            DepthStencilState.Default, RasterizerState.CullNone, effect);
        m_Paddles[1].Draw(gameTime);
        SpriteBatch.End();
    where Paddle::Draw looks like:
        SpriteBatch.Draw(paddleTexture, mBody.Position, null, Color.White, 0f,
            new Vector2(16f, 16f), // origin of the texture
            // width of box is 3*2 = 6 meters; texture is 32 pixels wide,
            // so to make it 6 meters wide in world space: 6/32 = 0.1875f
            0.1875f,
            SpriteEffects.None,
            0);
    The orthographic projection matrix seems fine to me, but I am obviously doing something wrong somewhere! Can someone please help me figure out what I am doing wrong here? Thanks
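    A hedged guess at the cause: SpriteBatch builds its quads for a y-down coordinate system, while Matrix.CreateOrthographic produces a y-up projection, so each texture ends up mirrored vertically even though its position is right. If the positions are already correct, the least invasive fix is to flip only the sprite:
        // sketch: keep the y-up projection, but flip the texture in the draw call
        SpriteBatch.Draw(paddleTexture, mBody.Position, null, Color.White, 0f,
            new Vector2(16f, 16f),
            0.1875f,
            SpriteEffects.FlipVertically,   // instead of SpriteEffects.None
            0);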

    Read the article

  • Converting world space coordinate to screen space coordinate and getting incorrect range of values

    - by user1423893
    I'm attempting to convert from world space coordinates to screen space coordinates. I have the following code to transform my object position Vector3 screenSpacePoint = Vector3.Transform(object.WorldPosition, camera.ViewProjectionMatrix); The value does not appear to be in screen space coordinates and is not limited to a [-1, 1] range. What step have I missed out in the conversion process? EDIT: Projection Matrix Perspective(game.GraphicsDevice.Viewport.AspectRatio, nearClipPlaneZ, farClipPlaneZ); private void Perspective(float aspect_Ratio, float z_NearClipPlane, float z_FarClipPlane) { nearClipPlaneZ = z_NearClipPlane; farClipPlaneZ = z_FarClipPlane; float yZoom = 1f / (float)Math.Tan(fov * 0.5f); float xZoom = yZoom / aspect_Ratio; matrix_Projection.M11 = xZoom; matrix_Projection.M12 = 0f; matrix_Projection.M13 = 0f; matrix_Projection.M14 = 0f; matrix_Projection.M21 = 0f; matrix_Projection.M22 = yZoom; matrix_Projection.M23 = 0f; matrix_Projection.M24 = 0f; matrix_Projection.M31 = 0f; matrix_Projection.M32 = 0f; matrix_Projection.M33 = z_FarClipPlane / (nearClipPlaneZ - farClipPlaneZ); matrix_Projection.M34 = -1f; matrix_Projection.M41 = 0f; matrix_Projection.M42 = 0f; matrix_Projection.M43 = (nearClipPlaneZ * farClipPlaneZ) / (nearClipPlaneZ - farClipPlaneZ); matrix_Projection.M44 = 0f; } View Matrix // Make our view matrix Matrix.CreateFromQuaternion(ref orientation, out matrix_View); matrix_View.M41 = -Vector3.Dot(Right, position); matrix_View.M42 = -Vector3.Dot(Up, position); matrix_View.M43 = Vector3.Dot(Forward, position); matrix_View.M44 = 1f; // Create the combined view-projection matrix Matrix.Multiply(ref matrix_View, ref matrix_Projection, out matrix_ViewProj); // Update the bounding frustum boundingFrustum.SetMatrix(matrix_ViewProj);
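    The step that appears to be missing is the perspective divide: Vector3.Transform applies the matrix but never divides by W, so the result is in clip space rather than the [-1, 1] NDC range. A sketch of the remaining steps (the local names are mine; XNA's Viewport.Project can also do the whole conversion in one call):
        // transform with W = 1, then divide by the resulting W component
        Vector4 clip = Vector4.Transform(
            new Vector4(obj.WorldPosition, 1f), camera.ViewProjectionMatrix);
        Vector3 ndc = new Vector3(clip.X, clip.Y, clip.Z) / clip.W;   // now in [-1, 1] if on screen

        // map NDC to pixel coordinates (Y flipped because screen space is y-down)
        Viewport vp = game.GraphicsDevice.Viewport;
        float screenX = (ndc.X * 0.5f + 0.5f) * vp.Width;
        float screenY = (1f - (ndc.Y * 0.5f + 0.5f)) * vp.Height;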

    Read the article

  • Order of operations to render a VBO to an FBO texture and then render the FBO texture to a full-screen quad

    - by cyberdemon
    I've just started using OpenGL with C# via the OpenTK library. I've managed to successfully render my game world using VBOs. I now want to create a pixellated affect by rendering the frame to an offscreen FBO with a size half of my GameWindow size and then render that FBO to a full screen quad. I've been looking at the OpenTK example here: http://www.opentk.com/doc/graphics/frame-buffer-objects ...but the result is a black form. I'm not sure which parts of the example code belongs in the OnLoad event and OnRenderFrame. Can someone please tell me if the below code shows the correct order of operations? OnLoad { // VBO. // DataArrayBuffer GenBuffers/BindBuffer/BufferData // ElementArrayBuffer GenBuffers/BindBuffer/BufferData // ColourArrayBuffer GenBuffers/BindBuffer/BufferData // FBO. // ColourTexture GenTextures/BindTexture/TexParameterx4/TexImage2D // Create FBO. // Textures Ext.GenFramebuffers/Ext.BindFramebuffer/Ext.FramebufferTexture2D/Ext.FramebufferRenderbuffer } OnRenderFrame { // Use FBO buffer. Ext.BindFramebuffer(FBO) GL.Clear // Set viewport to FBO dimensions. GL.DrawBuffer((DrawBufferMode)FramebufferAttachment.ColorAttachment0Ext) // Bind VBO arrays. GL.BindBuffer(ColourArrayBuffer) GL.ColorPointer GL.EnableClientState(ColorArray) GL.BindBuffer(DataArrayBuffer) // If world changed GL.BufferData(DataArrayBuffer) GL.VertexPointer GL.EnableClientState(VertexArray) GL.BindBuffer(ElementArrayBuffer) // Render VBO. GL.DrawElements // Bind visible buffer. GL.Ext.BindFramebuffer(0) GL.DrawBuffer(Back) GL.Clear // Set camera to view texture. GL.BindTexture(ColourTexture) // Render FBO texture GL.Begin(Quads) // Draw texture on quad // TexCoord2/Vertex2 GL.End SwapBuffers }
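    The ordering above looks broadly right. Two details that are easy to drop and that often produce exactly this black frame are checking FBO completeness once after setup and switching GL.Viewport whenever the bound framebuffer changes. A hedged sketch using the same EXT-style OpenTK calls (names such as fboHandle and the sizes are mine):
        // OnLoad, after attaching the colour texture to the FBO:
        var status = GL.Ext.CheckFramebufferStatus(FramebufferTarget.FramebufferExt);
        if (status != FramebufferErrorCode.FramebufferCompleteExt)
            throw new InvalidOperationException("FBO incomplete: " + status);

        // OnRenderFrame, before drawing the world into the offscreen target:
        GL.Ext.BindFramebuffer(FramebufferTarget.FramebufferExt, fboHandle);
        GL.Viewport(0, 0, fboWidth, fboHeight);              // the half-size FBO dimensions
        // ... clear, bind VBOs, GL.DrawElements ...

        // back to the window before drawing the full-screen quad:
        GL.Ext.BindFramebuffer(FramebufferTarget.FramebufferExt, 0);
        GL.Viewport(0, 0, ClientSize.Width, ClientSize.Height);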

    Read the article

  • Need to sanity-check my .htaccess, especially the Limit GET POST line that may be repelling Google

    - by jose
    I need a sanity check on this .htaccess (from a WordPress site) I inherited from a 5 month+ old site. What's the symptom? Google + Bing crawl, but don't index any of the pages. Let me be clear: I'm not mad about "not ranking high." I think something is (accidentally) rejecting search engine indexing. I am not an expert on .htaccess, but one part especially looked funny, the Limit GET POST line. Is it not weird to have both Allow and Deny all, with no parameters? Also, I've ruled out robots.txt, but if I were you I'd want to see it, so here it is: User-agent: * Crawl-delay: 30 And here's the more suspect .htaccess: # temp redirect wordpress content feeds to feedburner <IfModule mod_rewrite.c> RewriteEngine on RewriteCond %{HTTP_USER_AGENT} !FeedBurner [NC] RewriteCond %{HTTP_USER_AGENT} !FeedValidator [NC] RewriteRule ^feed/?([_0-9a-z-]+)?/?$ http://feeds.feedburner.com/anonymousblog [R=302,NC,L] </IfModule> # temp redirect wordpress comment feeds to feedburner <IfModule mod_rewrite.c> RewriteEngine on RewriteCond %{HTTP_USER_AGENT} !FeedBurner [NC] RewriteCond %{HTTP_USER_AGENT} !FeedValidator [NC] RewriteRule ^comments/feed/?([_0-9a-z-]+)?/?$ http://feeds.feedburner.com/anonymous_comments [R=302,NC,L] </IfModule> <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.php [L] </IfModule> IndexIgnore .htaccess */.??* *~ *# */HEADER* */README* */_vti* <Limit GET POST> order deny,allow deny from all allow from all </Limit> <Limit PUT DELETE> order deny,allow deny from all </Limit> php_value memory_limit 32M Adding header by request: <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <meta name="robots" content="noindex,nofollow" /> <meta name="description" content="buncha junk i've deleted." /> <meta name="keywords" content="keywords i've deleted" /> <meta name="viewport" content="width=device-width" />

    Read the article

  • Input handling between game loops

    - by user48023
    This may be obvious and trivial for you, but as I am a newbie in programming I come with a specific question. I have three loops in my game engine: an input loop, an update loop and a render loop. The update loop runs at 10 ticks per second with a fixed timestep, the render loop is capped at around 60 fps, and the input loop runs as fast as possible. I am using one of the JavaScript frameworks which provide such things, but that doesn't really matter. Let's say I am rendering a tile map, and which elements are in view depends on camera-like movement variables that are modified while keys are pressed. This is only about the camera/viewport and rendering; no game physics is involved. Now, how should I handle input events among these loops to keep the engine's reaction consistent? Am I supposed to read the input-modified variable and do the needed calculations in the update loop, then share the result so it can be interpolated in the render loop? Or read the input directly inside the render loop and do the calculations there? I thought interpreting user input inside an update loop with such a low tick rate would feel inaccurate and unresponsive, even when the final view is rendered with interpolation. How is this done properly in games in general?
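    For the camera-only case described here, the usual pattern is: sample input and integrate the camera at the fixed update rate, keep the previous tick's state, and blend the two states in the render loop so the low tick rate never shows. A rough sketch of the idea (C# purely for illustration; every name is mine and the input/draw helpers are hypothetical):
        float prevCamX, currCamX;                    // camera state from the last two ticks

        void UpdateTick(float dt)                    // runs 10 times per second
        {
            prevCamX = currCamX;
            float dir = IsKeyDown(Key.Right) ? 1f : IsKeyDown(Key.Left) ? -1f : 0f;  // hypothetical input helper
            currCamX += dir * cameraSpeed * dt;
        }

        void RenderFrame(float alpha)                // alpha = accumulator / fixedDt, in [0, 1]
        {
            float viewCamX = prevCamX + (currCamX - prevCamX) * alpha;
            DrawTileMap(viewCamX);                   // hypothetical draw call
        }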

    Read the article

  • OpenGL camera moves faster than player

    - by opiop65
    I have a side scroller game made in OpenGL, and I'm trying to center the player in the viewport when he moves. I know how to do it:
        cameraX = Width / 2 / TileSize - playerPosX
        cameraY = Height / 2 / TileSize - playerPosY
    However, I have a problem. The player and "camera" move, but the player moves faster than the "camera" scrolls. So, the player can actually move out of the screen. Some code; this is how I translate the camera:
        public Camera(){
        }
        public void update(Player p){
            glTranslatef(-p.getPos().x - Main.WIDTH / 64 / 2, -p.getPos().y - Main.HEIGHT / 64 / 2, 1);
        }
    Here's how I move the player:
        public void update(){
            if(Keyboard.isKeyDown(Keyboard.KEY_D)){
                this.move(MOVESPEED, 0);
            }
            if(Keyboard.isKeyDown(Keyboard.KEY_A)){
                this.move(-MOVESPEED, 0);
            }
        }
    The move method:
        public void move(float x, float y){
            this.getPos().set(this.getPos().x + x, this.getPos().y + y);
        }
    And then after I move the player, I update the player's geometry, which shouldn't matter. What am I doing wrong here? This seems like such a simple problem, yet it doesn't work!

    Read the article

  • How can OpenGL graphics be displayed remotely using VNC?

    - by Jared Brown
    I am attempting to run a program that uses OpenGL to render a model in a viewport through VNC unsuccessfully. The error message I receive is - Xlib: extension "GLX" missing on display ":1.0". It was my understanding that VNC can be configured to render all graphics remotely and send a compressed screen grab from the display buffer to the local client. This would seem to negate the need for GLX extensions on the local client. Can VNC be configured this way and could you briefly describe how? Remote host: vncserver on RHEL 5 Local client: UltraVNC on Windows XP

    Read the article

  • images within noscript

    - by Guilherme Nascimento
    Note: My question is not about JavaScript. Note: My question is about how to make the HTML accessible to search engines. Note: My question is not about hiding text; it is about blocking the loading of images in order to use LazyLoad.
    I tested various techniques for blocking the loading of images so I can apply a LazyLoad effect (I'm developing it in JavaScript), and the only efficient one was <noscript>. The HTML structure would look like the following; with LazyLoad, the loading of images is triggered via the viewport (the visible area of the website in the browser).
        <p>Lorem ipsum dolor sit amet,
            <span class="lazyload">
                <noscript><img src="foto-m0101.jpg" alt="image description"></noscript>
            </span>
            consectetur adipiscing elit.
        </p>
        <p>Lorem ipsum dolor sit amet,
            <span class="lazyload">
                <noscript><img src="foto-m0201.jpg" alt="image description"></noscript>
            </span>
            consectetur adipiscing elit.
        </p>
        <p>Lorem ipsum dolor sit amet,
            <span class="lazyload">
                <noscript><img src="foto-m0301.jpg" alt="image description"></noscript>
            </span>
            consectetur adipiscing elit.
        </p>
    Is this a bad practice for search engines? If it is, could you show an example of good practice? If there is any other issue with images inside noscript, forgive me. Note: I did not find any existing questions about noscript with images.

    Read the article

  • Black stripe appears to the right of the screen, impossible to get rid of

    - by Gabriele Cirulli
    After clicking the "Auto Config" button on my Acer AL2216w screen a stripe appeared on the right of the screen where the screen doesn't "exist" and I can't seem to take the screen viewport back even by using the OCD setting and moving it to the right. The left part of the screen is also hidden and I'm not able to see what's going on there. The PC is connected to the screen through a DVI adapter and a VGA cable. I also use multiple monitors and this is the second monitor. Anyway this seems not to be a related issue, as this used to happen even when I only had a single monitor. I managed to fix this issue once but it was more than two years ago and I can't remember what I did, and out of all of the things I've tried so far (connecting the screen to another PC and performing auto adjustment, switching the cables, etc.) none worked. Here's how it looks: Can anyone help me fix this?

    Read the article

  • Developing Ext JS Charts in NetBeans IDE

    - by Geertjan
    I took my first tentative steps into the world of Ext JS charts today, in NetBeans IDE 7.4. Click to enlarge the image. I will make a screencast soon showing how charts such as the above can be created with NetBeans IDE and Ext JS. Setting up Ext JS is easy in NetBeans IDE because there's a JavaScript library browser, by means of which I can browse for the Ext JS libraries that I need and then NetBeans IDE sets up the project for me. The JavaScript code shown above comes directly from here: http://www.quizzpot.com/courses/learning-ext-js-3/articles/chart-series The index.html is as follows: <html> <head> <title>TODO supply a title</title> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width"> <link rel="stylesheet" href="js/libs/extjs/resources/css/ext-all.css"/> <script src="js/libs/ext-core/ext-core.js"></script> <script src="js/libs/extjs/adapter/ext/ext-base-debug.js"></script> <script src="js/libs/extjs/ext-all-debug.js"></script> <script src="app.js"></script> </head> <body> </body> </html> More info on Ext JS: http://docs.sencha.com/extjs/4.1.3/ By the way, quite a few other articles are out there on Ext JS and NetBeans IDE, such as these, which I will be learning from during the coming days: http://netbeans.dzone.com/extjs-rest-netbeans http://netbeans.dzone.com/articles/create-your-first-extjs-4 http://netbeans.dzone.com/articles/mixing-extjs-json-p-and-java

    Read the article

  • Dual screen in Ubuntu 12.04?

    - by johan
    I am using ubuntu 12.04 and my video card is ATI Radeon 5000. I cannot use double screen (extended version). I get this error The selected configuration for displays could not be applied requested position/size for CRTC 148 is outside the allowed limit: position=(1280, 0), size=(1280, 768), maximum=(1440, 1440) I tried all display settings but it does not work. Some outputs from the system settings: root@ubuntu:~# lshw -C display *-display description: VGA compatible controller product: Madison [Radeon HD 5000M Series] vendor: Hynix Semiconductor (Hyundai Electronics) physical id: 0 bus info: pci@0000:01:00.0 version: 00 width: 64 bits clock: 33MHz capabilities: pm pciexpress msi vga_controller bus_master cap_list rom configuration: driver=fglrx_pci latency=0 resources: irq:46 memory:e0000000-efffffff memory:f0020000-f003ffff ioport:d000(size=256) memory:f0000000-f001ffff root@ubuntu:~# aticonfig --initial Uninitialised file found, configuring. Using /etc/X11/xorg.conf Saving back-up to /etc/X11/xorg.conf.original-0 root@ubuntu:~# cat /etc/X11/xorg.conf Section "ServerLayout" Identifier "aticonfig Layout" Screen 0 "aticonfig-Screen[0]-0" 0 0 EndSection Section "Module" Load "glx" EndSection Section "Monitor" Identifier "aticonfig-Monitor[0]-0" Option "VendorName" "ATI Proprietary Driver" Option "ModelName" "Generic Autodetecting Monitor" Option "DPMS" "true" EndSection Section "Device" Identifier "aticonfig-Device[0]-0" Driver "fglrx" BusID "PCI:1:0:0" EndSection Section "Screen" Identifier "Default Screen" DefaultDepth 24 EndSection Section "Screen" Identifier "aticonfig-Screen[0]-0" Device "aticonfig-Device[0]-0" Monitor "aticonfig-Monitor[0]-0" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 EndSubSection EndSection I would appreciate any suggestions how to solve the problem. Thank you
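    The error itself says the driver's virtual screen is capped at 1440x1440, which is too small for two 1280-wide displays side by side (2560 pixels are needed). One commonly suggested change, offered here only as a hedged sketch (fglrx setups may additionally need aticonfig's dual-head options), is to declare a larger virtual desktop in the Display subsection of the active Screen section in /etc/X11/xorg.conf and then restart X:
        SubSection "Display"
            Viewport 0 0
            Depth    24
            Virtual  2560 1024    # room for both 1280-wide panels side by side
        EndSubSection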

    Read the article

  • Static "LoD" hack opinions

    - by David Lively
    I've been playing with implementing dynamic level of detail for rendering a very large mesh in XNA. It occurred to me that (duh) the whole point of this is to generate small triangles close to the camera, and larger ones far away. Given that, rather than constantly modifying or swapping index buffers based on a feature's rendered size or distance from the camera, it would be a lot easier (and potentially quite a bit faster), to render a single "fan" or flat wedge/frustum-shaped planar mesh that is tessellated into small triangles close to the near or small end of the frustum and larger ones at the far end, sort of like this (overhead view) (Pardon the gap in the middle - I drew one side and mirrored it) The triangle sizes are chosen so that all are approximately the same size when projected. Then, that mesh would be transformed to track the camera so that the Z axis (center vertical in this image) is always aligned with the view direction projected into the XZ plane. The vertex shader would then read terrain heights from a height texture and adjust the Y coordinate of the mesh to match a height field that defines the terrain. This eliminates the need for culling (since the mesh is generated to match the viewport dimensions) and the need to modify the index and/or vertex buffers when drawing the terrain. Obviously this doesn't address terrain with overhangs, etc, but that could be handled to a certain extent by including a second mesh that defines a sort of "ceiling" via a different texture. The other LoD schemes I've seen aren't particularly difficult to implement and, in some cases, are a lot more flexible, but this seemed like a decent quick-and-dirty way to handle height map-based terrain without getting into geometry manipulation. Has anyone tried this? Opinions?

    Read the article

  • Incomplete mesh using DrawIndexedPrimitives after rotating mesh

    - by user1278255
    With help from this site I was able to draw the triangles of an unrotated, unscaled, untransformed mesh created in Blender and exported to OBJ, accurately imported through Assimp and rendered with XNA Graphics. However, after applying a rotation on a single axis (Z) in Blender and adding materials (I wanted to test loading materials through Assimp), the same mesh appears incomplete. Is something wrong with my view matrix, or is it something else? This is what the unrotated mesh looks like: http://www.4shared.com/photo/qXNUSvxtba/okcube.html Here is the rotated mesh: http://www.4shared.com/photo/HAys2rWvba/badcube.html Camera, view and projection are defined as follows:
        cameraPos = new Vector3(0, 5, 9);
        viewMatrix = Matrix.CreateLookAt(cameraPos, new Vector3(0, 0, 1), new Vector3(0, 1, 0));
        projectionMatrix = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, device.Viewport.AspectRatio, 1.0f, 200.0f);
    Rendering is done through this code:
        device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.DarkSlateBlue, 1.0f, 0);
        effect = new BasicEffect(GraphicsDevice);
        effect.VertexColorEnabled = true;
        effect.View = viewMatrix;
        effect.Projection = projectionMatrix;
        effect.World = Matrix.Identity;
        foreach (EffectPass pass in effect.CurrentTechnique.Passes)
        {
            pass.Apply();
            device.SetVertexBuffer(vertexBuffer);
            device.Indices = indexBuffer;
            device.DrawIndexedPrimitives(Microsoft.Xna.Framework.Graphics.PrimitiveType.TriangleList,
                0, 0, oScene.Meshes[0].VertexCount, 0, mMesh.FaceCount);
        }
        base.Draw(gameTime);
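    One hedged guess: in XNA 4 the last argument of DrawIndexedPrimitives is the primitive count, i.e. the number of triangles addressed by the index buffer. Assimp's FaceCount only equals that if every face is a triangle, and a mesh edited in Blender after the first export can easily pick up quads, which would make the draw call stop short of the full mesh. Deriving the count from the index data (and/or asking Assimp to triangulate on import) avoids the mismatch; indices below stands in for whatever array was used to fill indexBuffer:
        // primitiveCount must be the triangle count, not Assimp's face count
        int primitiveCount = indices.Length / 3;
        device.DrawIndexedPrimitives(Microsoft.Xna.Framework.Graphics.PrimitiveType.TriangleList,
            0, 0, oScene.Meshes[0].VertexCount, 0, primitiveCount);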

    Read the article

  • Transform coordinates from 3D to 2D without matrices or built-in methods

    - by Thomas
    Not too long ago I started to create a small 3D engine in JavaScript to combine with an HTML5 canvas. One of the issues I ran into is how to transform 3D to 2D coordinates. Since I cannot use matrices or built-in transformation methods, I need another way. I've tried implementing the following explanation + pseudocode: http://freespace.virgin.net/hugo.elias/routines/3d_to_2d.htm Unfortunately, no luck there. I've replaced all the input variables with data from my own camera and object classes. I have the following data: an object with a rotation vector, a position vector and an array of four 3D coordinates (it's just a plane); a camera with a position and rotation vector; and the viewport, a square 600 x 600 surface. The example uses a zoom factor, which I've set to 1. Most hits on Google either use matrix calculations or don't implement camera rotation. The basic transformation should be like this:
        screen.x = x / z * zoom
        screen.y = y / z * zoom
    Can anyone point me in the right direction or explain how to achieve this? Edit: thanks for all your posts; I haven't been able to apply all this to my project yet, but I hope to do so soon.
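    For reference, here is the same world-to-screen pipeline written out with plain trigonometry instead of matrices: translate the point into camera space, undo the camera rotation one axis at a time, then apply the perspective divide and centre the result in the viewport. This is only a sketch (C# rather than JavaScript, and the names plus the Y-X-Z rotation order are my own choices):
        using System;

        static class Project3D
        {
            // Returns false if the point ends up behind the camera.
            public static bool WorldToScreen(
                double px, double py, double pz,          // point in world space
                double camX, double camY, double camZ,    // camera position
                double rotX, double rotY, double rotZ,    // camera rotation in radians
                double zoom, double viewW, double viewH,
                out double sx, out double sy)
            {
                // 1. make the point relative to the camera
                double x = px - camX, y = py - camY, z = pz - camZ;

                // 2. undo the camera rotation, one axis at a time (Y, then X, then Z)
                double cosY = Math.Cos(-rotY), sinY = Math.Sin(-rotY);
                double cosX = Math.Cos(-rotX), sinX = Math.Sin(-rotX);
                double cosZ = Math.Cos(-rotZ), sinZ = Math.Sin(-rotZ);

                double x1 = cosY * x + sinY * z;
                double z1 = -sinY * x + cosY * z;         // rotated about Y
                double y1 = cosX * y - sinX * z1;
                double z2 = sinX * y + cosX * z1;         // rotated about X
                double x2 = cosZ * x1 - sinZ * y1;
                double y2 = sinZ * x1 + cosZ * y1;        // rotated about Z

                if (z2 <= 0) { sx = sy = 0; return false; }

                // 3. perspective divide, then map to the centre of the viewport
                sx = viewW / 2 + (x2 / z2) * zoom;
                sy = viewH / 2 - (y2 / z2) * zoom;        // minus: screen Y grows downward
                return true;
            }
        }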

    Read the article
