Search Results

Search found 17870 results on 715 pages for 'screen resolution'.

Page 563/715 | < Previous Page | 559 560 561 562 563 564 565 566 567 568 569 570  | Next Page >

  • jQuery .val Enigma between two input boxes

    - by Matt
    I'm trying to get it so that if I move a red div-square around the screen using jQuery UI and jQuery, then an input field updates with the position of the div. I got that working with a simple .val(). It's hard to explain, but I also need to make it so that when I move the square, it updates my input box, and when the input box value is changed, another input box reflects the new value of the old input box. Do I make any sense? I'm confusing myself :). I made a jsfiddle, so perhaps it'll make more sense there. If you move the red square, then the input box directly above it updates, but the input box above that does not, even though it is programmed to reflect the value of the input box below itself. P.S. Is this specific to jQuery, or is this problem present in all of JavaScript? Thanks! http://jsfiddle.net/xmCsq/27/
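
    The usual culprit here is that setting a value from code with .val() does not fire the change event, so the second box's handler never runs; this is not jQuery-specific, since assigning to input.value in plain JavaScript doesn't dispatch a change event either. A minimal sketch of the idea (the element IDs are hypothetical, not the ones in the fiddle):

        // Keep #mirror in sync with #position, whether the user types or code updates it
        $('#position').change(function () {
            $('#mirror').val($(this).val());
        });

        $('#square').draggable({
            drag: function (event, ui) {
                // .val() alone does not raise 'change', so trigger it explicitly
                $('#position')
                    .val(ui.position.left + ', ' + ui.position.top)
                    .trigger('change');
            }
        });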

    Read the article

  • Getting differences between WVGA800 and WVGA854 in Android Emulator

    - by Droidy
    So far I just have one XML layout file and one drawable directory, drawable-hdpi. I first want to target the high-density screens. I added a bunch of ImageButtons to a RelativeLayout, and everything looks perfect in the WVGA800 emulator. The problem arises when I view it in the WVGA854 emulator. Not only do the ImageButtons not position the same, but the images are blurry. I used dip for the layout margins on the ImageButtons, even though it shouldn't matter in this case because WVGA800 and 854 are both high density. What is the problem? Why would it look totally different on emulators that have the same density and almost exactly the same screen dimensions? Thanks!

    Read the article

  • Printing to STDOUT and log file while removing ANSI color codes

    - by Arrieta
    I have the following functions for colorizing my screen messages:

        def error(string):
            return '\033[31;1m' + string + '\033[0m'

        def standout(string):
            return '\033[34;1m' + string + '\033[0m'

    I use them as follows:

        print error('There was a problem with the program')
        print "This is normal " + standout("and this stands out")

    I want to log the output to a file (in addition to STDOUT) WITHOUT the ANSI color codes, hopefully without having to add a second "logging" line to each print statement. The reason is that if you simply python program.py > out then the file out will have the ANSI color codes, which look terrible if you open in a plain text editor. Any advice?
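
    A minimal sketch of one way to do this, assuming you are willing to route prints through a small helper: print the colorized string to STDOUT as before, and strip the escape sequences with a regular expression before writing to the log file (the log-file name is a placeholder):

        import re

        ANSI_ESCAPE = re.compile(r'\x1b\[[0-9;]*m')   # matches the codes produced above
        log_file = open('out.log', 'a')

        def emit(message):
            """Show the colorized message on screen and log a plain-text copy."""
            print message                                   # keeps the ANSI colors
            log_file.write(ANSI_ESCAPE.sub('', message) + '\n')

        emit(error('There was a problem with the program'))
        emit("This is normal " + standout("and this stands out"))

    Another option along the same lines is to replace sys.stdout with a small tee object whose write() strips the codes for the file copy, which leaves the existing print statements untouched.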

    Read the article

  • Uploading a picture to a album using the graph api

    - by kielie
    Hi guys, I am trying to upload an image to an album, but it's not working. Here is the code I am using:

        $uid = $facebook->getUser();
        $args = array('message' => $uid);
        $file_path = "http://www.site.com/path/to/file.jpg";
        $album_id = '1234';
        $args['name'] = '@' . realpath($file_path);
        $data = $facebook->api('/'. $album_id . '/photos', 'post', $args);
        print_r($data);

    This code is in a function.php file that gets called when a user clicks a button inside a Flash file that is embedded on my canvas. So basically what I want it to do is: when the Flash takes a screenshot and passes the variable "image" to the function, it should upload $_GET['image'] to the album. How could I go about doing this? Thanx in advance!
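
    One hedged observation: in the PHP SDK the file itself usually goes into a 'source' parameter prefixed with '@' and pointing at a local path (not a URL), and file uploads have to be switched on with setFileUploadSupport() first. A sketch along those lines, with the album id and paths as placeholders:

        $facebook->setFileUploadSupport(true);          // required for '@file' style uploads

        $album_id  = '1234';
        $file_path = '/local/path/to/screenshot.jpg';   // a file on your server, not a URL

        $args = array(
            'message' => 'Screenshot from the app',
            'source'  => '@' . realpath($file_path),
        );

        $data = $facebook->api('/' . $album_id . '/photos', 'post', $args);
        print_r($data);

    If the image only arrives as request data from the Flash movie, write it to a temporary file first (for example with file_put_contents()) and pass that path as $file_path.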

    Read the article

  • Simple vector program error

    - by Codeguru
    Hi, I am new to C++ and I am trying out this vector program, and I am getting the following error:

        error: conversion from `test*' to non-scalar type `test' requested

    Here is the code:

        #include <iostream>
        #include <fstream>
        #include <string>
        #include <vector>
        using namespace std;

        class test {
            string s;
            vector<string> v;
        public:
            void read() {
                ifstream in("c://test.txt");
                while (getline(in, s)) {
                    v.push_back(s);
                }
                for (int i = 0; i < v.size(); i++) {
                    cout << v[i] << "\n";
                }
            }
        };

        int main() {
            cout << "Opening the file to read and displaying on the screen" << endl;
        }
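
    The posted main() is cut off, but this particular message almost always means the result of new (which is a test*) is being assigned to a plain test object somewhere, e.g. test t = new test();. A minimal sketch of a main that avoids the conversion:

        int main()
        {
            cout << "Opening the file to read and displaying on the screen" << endl;

            test t;        // automatic object; no new needed
            t.read();

            // or, if dynamic allocation is really wanted:
            // test* p = new test();
            // p->read();
            // delete p;

            return 0;
        }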

    Read the article

  • Object reference error even when object is not null

    - by Shrewd Demon
    Hi, I have an application wherein I have incorporated a "Remember Me" feature for the login screen. I do this by creating a cookie when the user logs in for the first time, so the next time the user visits the site I get the cookie and load the user information. I have written the code for loading user information in a common class in the App_Code folder, and all my pages inherit from this class. The code for loading the user info is as follows:

        public static void LoadUserDetails(string emailId)
        {
            UsersEnt currentUser = UsersBL.LoadUserInfo(emailId);
            if (currentUser != null)
                HttpContext.Current.Session["CurrentUser"] = currentUser;
        }

    Now the problem is I get an "Object reference" error when I try to store the currentUser object in the session variable (even though the currentUser object is not null). However, the password property in the currentUser object is null. Am I getting the error because of this, or is there some other reason? Thank you.
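
    One hedged suspicion: since LoadUserDetails is a static method in App_Code, the "object reference" error is more likely to come from HttpContext.Current or HttpContext.Current.Session being null at the moment the cookie is processed (for example, before session state is available in the pipeline) than from currentUser itself; a null password property on its own would not throw here. A defensive sketch:

        public static void LoadUserDetails(string emailId)
        {
            UsersEnt currentUser = UsersBL.LoadUserInfo(emailId);
            if (currentUser == null)
                return;

            HttpContext context = HttpContext.Current;

            // Session is null outside a request or before AcquireRequestState,
            // e.g. in Application_Start or in a handler without session state.
            if (context != null && context.Session != null)
            {
                context.Session["CurrentUser"] = currentUser;
            }
        }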

    Read the article

  • Class to manage e-mail from iPhone

    - by Scott Pendleton
    I'm working on an iPhone app that offers the user the opportunity to send an e-mail in 3 different places in the app, and for 3 different purposes. Rather than put the same code for showing the e-mail composer in 3 different view controllers, shouldn't I develop a separate E-mail class, create an instance, and then set properties such as To, CC, BCC, Body, HTML_Or_Not, and so on? Also, if I create an instance of such a class, and it brings up the e-mail composer, is it OK to release the class even before the e-mail composer has left the screen?

    Read the article

  • In a WPF Application, What happens after my code in the main window constructor is executed?

    - by KyleGobel
    I'm wondering what happens after the constructor is done executing my code, because the constructor is taking like 10 seconds to run on a cold start-up, but according to the profiler, my code is done executing in like 2 seconds. Also, stepping through the code in the debugger, after the last line of my constructor I sit there and wait for 7-8 seconds before the window appears. Why is this? If the window is loading content or something, why isn't it displayed on the screen, done loading or not, after the constructor finishes its job? What's the hold-up? (Or how do I figure that out?)
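
    What happens after the constructor returns is everything WPF still has to do before the first frame: apply templates, measure and arrange the visual tree, evaluate bindings, run Loaded handlers, and render, none of which the profiler attributes to your code. A small sketch (assuming a code-behind named MainWindow) that timestamps those later stages so you can see which one is eating the 7-8 seconds:

        using System.Diagnostics;
        using System.Windows;

        public partial class MainWindow : Window
        {
            private readonly Stopwatch _sw = Stopwatch.StartNew();

            public MainWindow()
            {
                InitializeComponent();
                Debug.WriteLine("Constructor done: " + _sw.ElapsedMilliseconds + " ms");

                Loaded += (s, e) =>
                    Debug.WriteLine("Loaded: " + _sw.ElapsedMilliseconds + " ms");

                // Fires once the window has actually been drawn on screen
                ContentRendered += (s, e) =>
                    Debug.WriteLine("ContentRendered: " + _sw.ElapsedMilliseconds + " ms");
            }
        }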

    Read the article

  • 3D Graphics with XNA Game Studio 4.0 bug in light map?

    - by Eibis
    i'm following the tutorials on 3D Graphics with XNA Game Studio 4.0 and I came up with an horrible effect when I tried to implement the Light Map http://i.stack.imgur.com/BUWvU.jpg this effect shows up when I look towards the center of the house (and it moves with me). it has this shape because I'm using a sphere to represent light; using other light shapes gives different results. I'm using a class PreLightingRenderer: using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; using Dhpoware; using Microsoft.Xna.Framework.Content; namespace XNAFirstPersonCamera { public class PrelightingRenderer { // Normal, depth, and light map render targets RenderTarget2D depthTarg; RenderTarget2D normalTarg; RenderTarget2D lightTarg; // Depth/normal effect and light mapping effect Effect depthNormalEffect; Effect lightingEffect; // Point light (sphere) mesh Model lightMesh; // List of models, lights, and the camera public List<CModel> Models { get; set; } public List<PPPointLight> Lights { get; set; } public FirstPersonCamera Camera { get; set; } GraphicsDevice graphicsDevice; int viewWidth = 0, viewHeight = 0; public PrelightingRenderer(GraphicsDevice GraphicsDevice, ContentManager Content) { viewWidth = GraphicsDevice.Viewport.Width; viewHeight = GraphicsDevice.Viewport.Height; // Create the three render targets depthTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Single, DepthFormat.Depth24); normalTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24); lightTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24); // Load effects depthNormalEffect = Content.Load<Effect>(@"Effects\PPDepthNormal"); lightingEffect = Content.Load<Effect>(@"Effects\PPLight"); // Set effect parameters to light mapping effect lightingEffect.Parameters["viewportWidth"].SetValue(viewWidth); lightingEffect.Parameters["viewportHeight"].SetValue(viewHeight); // Load point light mesh and set light mapping effect to it lightMesh = Content.Load<Model>(@"Models\PPLightMesh"); lightMesh.Meshes[0].MeshParts[0].Effect = lightingEffect; this.graphicsDevice = GraphicsDevice; } public void Draw() { drawDepthNormalMap(); drawLightMap(); prepareMainPass(); } void drawDepthNormalMap() { // Set the render targets to 'slots' 1 and 2 graphicsDevice.SetRenderTargets(normalTarg, depthTarg); // Clear the render target to 1 (infinite depth) graphicsDevice.Clear(Color.White); // Draw each model with the PPDepthNormal effect foreach (CModel model in Models) { model.CacheEffects(); model.SetModelEffect(depthNormalEffect, false); model.Draw(Camera.ViewMatrix, Camera.ProjectionMatrix, Camera.Position); model.RestoreEffects(); } // Un-set the render targets graphicsDevice.SetRenderTargets(null); } void drawLightMap() { // Set the depth and normal map info to the effect lightingEffect.Parameters["DepthTexture"].SetValue(depthTarg); lightingEffect.Parameters["NormalTexture"].SetValue(normalTarg); // Calculate the view * projection matrix Matrix viewProjection = Camera.ViewMatrix * Camera.ProjectionMatrix; // Set the inverse of the view * projection matrix to the effect Matrix invViewProjection = Matrix.Invert(viewProjection); lightingEffect.Parameters["InvViewProjection"].SetValue(invViewProjection); // Set the render target to the graphics device graphicsDevice.SetRenderTarget(lightTarg); // Clear the 
render target to black (no light) graphicsDevice.Clear(Color.Black); // Set render states to additive (lights will add their influences) graphicsDevice.BlendState = BlendState.Additive; graphicsDevice.DepthStencilState = DepthStencilState.None; foreach (PPPointLight light in Lights) { // Set the light's parameters to the effect light.SetEffectParameters(lightingEffect); // Calculate the world * view * projection matrix and set it to // the effect Matrix wvp = (Matrix.CreateScale(light.Attenuation) * Matrix.CreateTranslation(light.Position)) * viewProjection; lightingEffect.Parameters["WorldViewProjection"].SetValue(wvp); // Determine the distance between the light and camera float dist = Vector3.Distance(Camera.Position, light.Position); // If the camera is inside the light-sphere, invert the cull mode // to draw the inside of the sphere instead of the outside if (dist < light.Attenuation) graphicsDevice.RasterizerState = RasterizerState.CullClockwise; // Draw the point-light-sphere lightMesh.Meshes[0].Draw(); // Revert the cull mode graphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise; } // Revert the blending and depth render states graphicsDevice.BlendState = BlendState.Opaque; graphicsDevice.DepthStencilState = DepthStencilState.Default; // Un-set the render target graphicsDevice.SetRenderTarget(null); } void prepareMainPass() { foreach (CModel model in Models) foreach (ModelMesh mesh in model.Model.Meshes) foreach (ModelMeshPart part in mesh.MeshParts) { // Set the light map and viewport parameters to each model's effect if (part.Effect.Parameters["LightTexture"] != null) part.Effect.Parameters["LightTexture"].SetValue(lightTarg); if (part.Effect.Parameters["viewportWidth"] != null) part.Effect.Parameters["viewportWidth"].SetValue(viewWidth); if (part.Effect.Parameters["viewportHeight"] != null) part.Effect.Parameters["viewportHeight"].SetValue(viewHeight); } } } } that uses three effect: PPDepthNormal.fx float4x4 World; float4x4 View; float4x4 Projection; struct VertexShaderInput { float4 Position : POSITION0; float3 Normal : NORMAL0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 Depth : TEXCOORD0; float3 Normal : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4x4 viewProjection = mul(View, Projection); float4x4 worldViewProjection = mul(World, viewProjection); output.Position = mul(input.Position, worldViewProjection); output.Normal = mul(input.Normal, World); // Position's z and w components correspond to the distance // from camera and distance of the far plane respectively output.Depth.xy = output.Position.zw; return output; } // We render to two targets simultaneously, so we can't // simply return a float4 from the pixel shader struct PixelShaderOutput { float4 Normal : COLOR0; float4 Depth : COLOR1; }; PixelShaderOutput PixelShaderFunction(VertexShaderOutput input) { PixelShaderOutput output; // Depth is stored as distance from camera / far plane distance // to get value between 0 and 1 output.Depth = input.Depth.x / input.Depth.y; // Normal map simply stores X, Y and Z components of normal // shifted from (-1 to 1) range to (0 to 1) range output.Normal.xyz = (normalize(input.Normal).xyz / 2) + .5; // Other components must be initialized to compile output.Depth.a = 1; output.Normal.a = 1; return output; } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } PPLight.fx float4x4 
WorldViewProjection; float4x4 InvViewProjection; texture2D DepthTexture; texture2D NormalTexture; sampler2D depthSampler = sampler_state { texture = ; minfilter = point; magfilter = point; mipfilter = point; }; sampler2D normalSampler = sampler_state { texture = ; minfilter = point; magfilter = point; mipfilter = point; }; float3 LightColor; float3 LightPosition; float LightAttenuation; // Include shared functions #include "PPShared.vsi" struct VertexShaderInput { float4 Position : POSITION0; }; struct VertexShaderOutput { float4 Position : POSITION0; float4 LightPosition : TEXCOORD0; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; output.Position = mul(input.Position, WorldViewProjection); output.LightPosition = output.Position; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // Find the pixel coordinates of the input position in the depth // and normal textures float2 texCoord = postProjToScreen(input.LightPosition) + halfPixel(); // Extract the depth for this pixel from the depth map float4 depth = tex2D(depthSampler, texCoord); // Recreate the position with the UV coordinates and depth value float4 position; position.x = texCoord.x * 2 - 1; position.y = (1 - texCoord.y) * 2 - 1; position.z = depth.r; position.w = 1.0f; // Transform position from screen space to world space position = mul(position, InvViewProjection); position.xyz /= position.w; // Extract the normal from the normal map and move from // 0 to 1 range to -1 to 1 range float4 normal = (tex2D(normalSampler, texCoord) - .5) * 2; // Perform the lighting calculations for a point light float3 lightDirection = normalize(LightPosition - position); float lighting = clamp(dot(normal, lightDirection), 0, 1); // Attenuate the light to simulate a point light float d = distance(LightPosition, position); float att = 1 - pow(d / LightAttenuation, 6); return float4(LightColor * lighting * att, 1); } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } PPShared.vsi has some common functions: float viewportWidth; float viewportHeight; // Calculate the 2D screen position of a 3D position float2 postProjToScreen(float4 position) { float2 screenPos = position.xy / position.w; return 0.5f * (float2(screenPos.x, -screenPos.y) + 1); } // Calculate the size of one half of a pixel, to convert // between texels and pixels float2 halfPixel() { return 0.5f / float2(viewportWidth, viewportHeight); } and finally from the Game class I set up in LoadContent with: effect = Content.Load(@"Effects\PPModel"); models[0] = new CModel(Content.Load(@"Models\teapot"), new Vector3(-50, 80, 0), new Vector3(0, 0, 0), 1f, Content.Load(@"Textures\prova_texture_autocad"), GraphicsDevice); house = new CModel(Content.Load(@"Models\house"), new Vector3(0, 0, 0), new Vector3((float)-Math.PI / 2, 0, 0), 35.0f, Content.Load(@"Textures\prova_texture_autocad"), GraphicsDevice); models[0].SetModelEffect(effect, true); house.SetModelEffect(effect, true); renderer = new PrelightingRenderer(GraphicsDevice, Content); renderer.Models = new List(); renderer.Models.Add(house); renderer.Models.Add(models[0]); renderer.Lights = new List() { new PPPointLight(new Vector3(0, 120, 0), Color.White * .85f, 2000) }; where PPModel.fx is: float4x4 World; float4x4 View; float4x4 Projection; texture2D BasicTexture; sampler2D basicTextureSampler = sampler_state { texture = ; addressU = wrap; addressV = wrap; minfilter = 
anisotropic; magfilter = anisotropic; mipfilter = linear; }; bool TextureEnabled = true; texture2D LightTexture; sampler2D lightSampler = sampler_state { texture = ; minfilter = point; magfilter = point; mipfilter = point; }; float3 AmbientColor = float3(0.15, 0.15, 0.15); float3 DiffuseColor; #include "PPShared.vsi" struct VertexShaderInput { float4 Position : POSITION0; float2 UV : TEXCOORD0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 UV : TEXCOORD0; float4 PositionCopy : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4x4 worldViewProjection = mul(World, mul(View, Projection)); output.Position = mul(input.Position, worldViewProjection); output.PositionCopy = output.Position; output.UV = input.UV; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // Sample model's texture float3 basicTexture = tex2D(basicTextureSampler, input.UV); if (!TextureEnabled) basicTexture = float4(1, 1, 1, 1); // Extract lighting value from light map float2 texCoord = postProjToScreen(input.PositionCopy) + halfPixel(); float3 light = tex2D(lightSampler, texCoord); light += AmbientColor; return float4(basicTexture * DiffuseColor * light, 1); } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } I don't have any idea on what's wrong... googling the web I found that this tutorial may have some bug but I don't know if it's the LightModel fault (the sphere) or in a shader or in the class PrelightingRenderer. Any help is very appreciated, thank you for reading!

    Read the article

  • ASP.NET: Turning on errors

    - by JamesBrownIsDead
    This is what I see when I visit my web site. How do I instead get the Yellow Screen of Death so I know what the error is? I have GoDaddy shared hosting and I think the problem is that I don't have the correct MVC binaries in the /bin folder. My web.config shows this: <add assembly="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add assembly="System.Web.Abstractions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add assembly="System.Web.Routing, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> But I'm not positive I copied the right .DLL files into /bin. I've got like 8 of each file--which version is which?!
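
    To see the actual exception instead of the generic error page, the usual switch is customErrors in web.config (assuming the host lets the setting through); remember to turn it back off once the assembly problem is found:

        <configuration>
          <system.web>
            <!-- "Off" shows full error details even to remote visitors;
                 switch back to "RemoteOnly" after diagnosing the problem -->
            <customErrors mode="Off" />
            <compilation debug="true" />
          </system.web>
        </configuration>

    On the assembly question: for ASP.NET MVC 2, System.Web.Mvc should be the 2.0.0.0 build, while System.Web.Routing and System.Web.Abstractions are the 3.5.0.0 builds referenced above; the version of each DLL in /bin is visible in the file's properties (Details tab) if the copies are in doubt.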

    Read the article

  • Java Swing: How to make the JComboxBox drop down list taller?

    - by NoozNooz42
    How to make the "dropdown" (or "popup", I don't know how it's called) of a JComboBox taller on the screen? By default, when I open my JComboBox I see, say, 7 out of 29 items, then I need to scroll. What should I do so that I can see, say, 15 out of these 32 items? (or if the dropdown is, say, 150 pixels tall, how can I make it 300 pixels tall?) I've read the Sun tutorial on JComboBox and the JavaDoc but I must have overlooked the method(s) to call.
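
    The method being looked for is JComboBox.setMaximumRowCount(int), which controls how many rows the popup shows before it starts scrolling (and therefore its height). A minimal sketch:

        import javax.swing.JComboBox;

        public class TallComboDemo {
            public static void main(String[] args) {
                String[] items = new String[29];
                for (int i = 0; i < items.length; i++) {
                    items[i] = "Item " + (i + 1);
                }

                JComboBox combo = new JComboBox(items);
                combo.setMaximumRowCount(15);   // popup now shows 15 rows before scrolling
                // ...add 'combo' to a JFrame/JPanel as usual...
            }
        }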

    Read the article

  • What should I grab as a development platform, an iPod or an iPad?

    - by mmr
    Hey all, I've recently gotten into the world of contract programming, and two of my clients have indicated that they'd like to do something 'trendy', like ipod touch/iphone/ipad development. I have a mac laptop (first gen macbook pro) that I'll have to upgrade to snow leopard to do the development for any of them, from what I've read. So that's already a bit of a commitment, given all the stuff I have on that laptop I'll have to make sure is recoverable from backup. My budget is limited, but I think I need to learn this skill. Which device should I get to learn this kind of development, an iPod touch or an iPad? I don't have the money for an iPhone. I think that the iPhone/iPad SDK has an emulator mode, but I like to have the device I'm going to roll out on available to make sure that everything works as I'd expect, ie, what's easily readable on a laptop screen is still readable on the touch, etc.

    Read the article

  • Compare date fields in SQL server

    - by huslayer
    Hi all, I've a flat file that I cleaned the data out of using SSIS; the output looks like this:

        MEDICAL REC NO | ADMIT DATE | PATIENT NUMBER | PATIENT NAME      | DATE OF DISCHARGE | DX Code | DRG #
        123613         | 02/16/09   | 12413209       | MORIBALDI ,GEMMA  | 02/19/09          | 428.20  | 988
        130897         | 01/23/09   | 12407193       | TINLEY ,PATRICIA  | 01/23/09          | 535.10  | 392
        139367         | 02/27/09   | 36262509       | THARPE ,GLORIA    | 03/05/09          | 562.10  | 392
        141954         | 02/25/09   | 72779499       | SHUMATE ,VALERIA  | 02/25/09          | 112.84  | 370
        141954         | 03/07/09   | 36271732       | SHUMATE ,VALERIA  | 03/10/09          | 493.92  | 203
        145299         | 01/21/09   | 12406294       | BAUGH ,MARIA      | 01/21/09          | 366.17  | 117

    The report (final results) is attached in the screenshot from the final Excel report. So what's happening is: IF the same name or the same account number is duplicated, that means the patient has entered the hospital again and needs to be included in the report. What I need to do is eliminate any rows that are NOT duplicated (not everybody in this file has been admitted again) and compare the dates to get the ReAdmitdate and ReDischargedate. I dumped the data into a SQL table and am trying to compare the dates to figure out "ReAdmitdate" and "ReDischargedate". Any help is appreciated. Thanks
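
    Once the rows are in a SQL table, one way to keep only the patients with more than one stay and line each stay up with the following one is a windowed self-join on the patient number; this is a sketch, and the table and column names are guesses based on the flat-file layout:

        WITH Stays AS (
            SELECT  PatientNumber,
                    PatientName,
                    AdmitDate,
                    DischargeDate,
                    ROW_NUMBER() OVER (PARTITION BY PatientNumber
                                       ORDER BY AdmitDate)          AS StayNo,
                    COUNT(*)     OVER (PARTITION BY PatientNumber)  AS StayCount
            FROM    dbo.Admissions          -- table loaded by the SSIS package
        )
        SELECT  cur.PatientNumber,
                cur.PatientName,
                cur.AdmitDate,
                cur.DischargeDate,
                nxt.AdmitDate      AS ReAdmitDate,
                nxt.DischargeDate  AS ReDischargeDate
        FROM    Stays AS cur
        JOIN    Stays AS nxt
          ON    nxt.PatientNumber = cur.PatientNumber
         AND    nxt.StayNo        = cur.StayNo + 1
        WHERE   cur.StayCount > 1;      -- drop patients admitted only once

    If the admit/discharge columns were loaded as text ('02/16/09'), convert them to DATE/DATETIME first so the ORDER BY and any date arithmetic behave correctly.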

    Read the article

  • Showing edittext obliquely in android

    - by Chandra Sekhar
    I have an EditText, which generally shows parallel to the screen's X-axis. I want to show it obliquely (around 45 degrees to the horizontal axis). Is it possible to do this in Android? Please guide me in a direction so that I can try it. After getting the two links in the answer by pawelzeiba, I proceeded a little bit in solving this, but got stuck again, so I put another question on this; here is the link. So please help me solve this.
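
    For what it's worth, on API level 11 and later a view can simply be rotated, either in XML or with setRotation(); on the older versions current when this was asked, the workarounds are a RotateAnimation with fillAfter="true" or a custom container that rotates the canvas before drawing the child. A sketch of the simple case (the value is degrees, rotating around the view's centre):

        <!-- API 11+: rotate the whole EditText 45 degrees -->
        <EditText
            android:id="@+id/oblique_edit"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:rotation="45" />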

    Read the article

  • WPF TreeViewItem deselected item still lightly highlighted

    - by Patric Hua
    Hello WPF fellows, I have multiple Expander controls with a TreeView control within each Expander. When I select a TreeViewItem from one TreeView and then select another TreeViewItem from another TreeView, the newly selected TreeViewItem is highlighted in dark blue, but the last selected item is now highlighted in a very light shade of blue. Please look at www.zunjaa.com/public/images/screen.jpg to see what I'm talking about. How do I make it so that the no-longer-active item does not show the lighter blue? Thanks.
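
    That light shade is the brush a TreeViewItem uses for its selection once its TreeView loses focus. A commonly used fix (a sketch that assumes the default theme template, where that brush comes from SystemColors) is to override the system brush key in each TreeView's resources:

        <TreeView.Resources>
            <!-- brush painted behind a selected item when the TreeView is not focused -->
            <SolidColorBrush x:Key="{x:Static SystemColors.ControlBrushKey}"
                             Color="Transparent" />
        </TreeView.Resources>

    Alternatively, re-template or style TreeViewItem and drive the highlight yourself from IsSelected, which is less dependent on the current theme.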

    Read the article

  • Shoes and Gems and how to get them working.

    - by Pselus
    I have seen this question asked all over the internet and answered in many different ways. None of them seem to be working for me. I am trying to get Gems to work in Shoes (specifically the gem Mechanize). Whenever I use the code:

        Shoes.setup do
          gem 'mechanize'
        end
        require 'mechanize'

    It gives me the popup that says it is installing native extensions and sits at that screen for 30 minutes and longer (I've only ever waited as long as 30 minutes). I have seen people say that you should be putting the .gem files in ~/.shoes/+gem/gem (on OS X) but that hasn't worked for me. Neither has putting the source code for the gem there. On another odd note, both the gems RedCloth and Nokogiri come with Shoes...but using the above code for them gets me No such file to load errors for both of them. Anyone have any expertise in this area and can help me out?

    Read the article

  • Is there any way to put extras to Intent from preferences ?

    - by Alex Volovoy
    Hi, I'm launching an activity from a preferences screen. The activity is shared among three preferences. I wonder if I can set extras for this activity in the XML:

        <Preference
            android:key="action_1"
            android:title="@string/action_1_title" >
            <intent android:action="com.package.SHAREDACTION" >
            </intent>
        </Preference>

    i.e. I wonder if I can do something like:

        <extras>
            <item android:name="" android:value=""/>
        </extras>

    All I need to do is pass an integer, really. I could use different actions and check the action instead of extras.
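
    As far as I know the preference inflater accepts child <extra> elements inside <intent> (the same syntax used elsewhere in framework XML), so something like the sketch below should hand the integer to the shared activity as an extra; worth verifying on your target platform version, since it isn't well documented:

        <Preference
            android:key="action_1"
            android:title="@string/action_1_title" >
            <intent android:action="com.package.SHAREDACTION" >
                <!-- read in the Activity with getIntent().getStringExtra("action_id") -->
                <extra android:name="action_id" android:value="1" />
            </intent>
        </Preference>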

    Read the article

  • map integration in iphone application

    - by Filthy Night
    Hi guys, I want to integrate maps using Map Kit on the iPhone, and I am successful at that, but the problem I am now facing is that I have two location coordinates, Location1 and Location2, and I want those two points to be shown on the map so that they both appear on the screen at the same time. That means if they are very far apart, the zoom level should back out enough to show both points on the map, and if they are near each other, the zoom level should show them from close up. Now, I know that using the longitude delta and latitude delta I can fix this problem, but I can't find a way to make it dynamic, so that I don't have to hard-code the delta values. Any help appreciated. Thanks
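
    The deltas don't have to be hard-coded: they can be derived from the two coordinates themselves, centring the region on their midpoint and using the absolute latitude/longitude differences (plus some padding) as the span. A sketch of the arithmetic in Swift (the original code would be Objective-C, but the idea is identical):

        import MapKit

        func region(fitting a: CLLocationCoordinate2D,
                    _ b: CLLocationCoordinate2D,
                    padding: Double = 1.4) -> MKCoordinateRegion {
            let center = CLLocationCoordinate2D(
                latitude: (a.latitude + b.latitude) / 2,
                longitude: (a.longitude + b.longitude) / 2)
            let span = MKCoordinateSpan(
                latitudeDelta: abs(a.latitude - b.latitude) * padding,
                longitudeDelta: abs(a.longitude - b.longitude) * padding)
            return MKCoordinateRegion(center: center, span: span)
        }

        // mapView.setRegion(region(fitting: location1, location2), animated: true)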

    Read the article

  • How to run java code using Java code?

    - by Nitz
    Hey guys, I want to do basically two things: 1) I want to know whether there is any way that I can run Java code using some Java code. 2) If it is possible, then whatever the output is [maybe output or error or exception], it should capture that output and show it on my screen, so I need to get that also. I know this is possible because one of my seniors has done it, but I don't know how. Maybe by using Java's built-in classes? Note: the user will write the code in some text file, and then I will store that file's content in some variable and then maybe run that code.
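
    Yes, this is possible with the JDK's built-in compiler API (javax.tools): compile the user's file at runtime, then run the resulting class in a separate JVM and capture everything it prints, including errors and stack traces. A hedged sketch, assuming the user's code was saved as UserCode.java with a main method and that a full JDK (not just a JRE) is installed:

        import javax.tools.JavaCompiler;
        import javax.tools.ToolProvider;
        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        public class RunUserCode {
            public static void main(String[] args) throws Exception {
                // 1. Compile the file the user wrote
                JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
                int status = compiler.run(null, System.out, System.err, "UserCode.java");
                if (status != 0) {
                    System.out.println("Compilation failed - see messages above");
                    return;
                }

                // 2. Run it in a separate JVM and capture stdout + stderr together
                ProcessBuilder pb = new ProcessBuilder("java", "UserCode");
                pb.redirectErrorStream(true);
                Process p = pb.start();

                BufferedReader reader =
                        new BufferedReader(new InputStreamReader(p.getInputStream()));
                String line;
                while ((line = reader.readLine()) != null) {
                    System.out.println(line);   // the user's output, errors, exceptions
                }
                p.waitFor();
            }
        }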

    Read the article

  • Resizing page format on iReport

    - by pringlesinn
    I've been trying to print a PDF made with iReport on less than an A4 page; it's about half the A4 page height. I'm using a line matrix printer; it doesn't matter which one. So, when I try to print two files at the same time, it should print everything in the right place, but only the first file is printed correctly. The second one is based on an A4 page format and only starts printing after the A4 page height is over, skipping a big blank. Where can I set the page size in iReport? The only thing I could do was set the size of what is shown on screen while I edit the file. I tried my best to explain the situation; if you have any doubts, ask me and I'll try even harder.
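
    The page size lives in the report itself, not only in the designer view: in iReport it is under the report properties (Page width / Page height), which map to the pageWidth/pageHeight attributes of the jasperReport element in the .jrxml. A sketch for a half-height A4 in points (A4 is 595 x 842, so roughly 595 x 421), with the name and margins as placeholders:

        <jasperReport xmlns="http://jasperreports.sourceforge.net/jasperreports"
                      name="half_a4_report"
                      pageWidth="595"
                      pageHeight="421"
                      leftMargin="20" rightMargin="20"
                      topMargin="10" bottomMargin="10">
            <!-- bands... -->
        </jasperReport>

    With the page height set this way, the second document should start right after the first instead of waiting out a full A4 page.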

    Read the article

  • Consuming all touchEvents not preventing scrolling on the blackberry storm

    - by Some guy with a headache
    Hello, I am trying to make a custom control for the BlackBerry Storm using SDK v5.0. This control needs to disable scrolling while the user is dragging elements within a field. The problem is that even if my control consumes every single touch event sent to it, when the user lifts their finger off the screen it still flings up or down as if it's finishing a scroll action. Does anyone know of a way to prevent this from happening, or what I might be doing wrong? Thank you.

    Read the article

  • Java paint speed relative to color model

    - by Jon
    I have a BufferedImage with an IndexColorModel. I need to paint that image onto the screen, but I've noticed that this is slow when using an IndexColorModel. However, if I run the BufferedImage through an identity affine transform, it creates an image with a DirectColorModel and the painting is significantly faster. Here's the code I'm using:

        AffineTransformOp identityOp = new AffineTransformOp(new AffineTransform(), AffineTransformOp.TYPE_BILINEAR);
        displayImage = identityOp.filter(displayImage, null);

    I have three questions: 1. Why is painting slower with an IndexColorModel? 2. Is there any way to speed up the painting of an IndexColorModel? 3. If the answer to 2 is no, is this the most efficient way to convert from an IndexColorModel to a DirectColorModel? I've noticed that this conversion depends on the size of the image, and I'd like to remove that dependency. Thanks for the help.
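
    On question 1: painting an IndexColorModel image to a typical display forces a per-pixel palette lookup and conversion into the screen's pixel format on every paint, whereas a DirectColorModel image can usually be blitted almost as-is. On question 3: rather than an identity AffineTransformOp, one commonly used alternative (a sketch, not guaranteed faster for the one-off conversion) is to copy the indexed image once into a screen-compatible image via GraphicsConfiguration.createCompatibleImage and cache the result; the conversion cost still scales with image size, which is hard to avoid, but it is paid only once rather than on every paint.

        import java.awt.Graphics2D;
        import java.awt.GraphicsConfiguration;
        import java.awt.GraphicsEnvironment;
        import java.awt.Transparency;
        import java.awt.image.BufferedImage;

        public final class Images {
            /** Copy 'src' into an image whose color model matches the default screen. */
            public static BufferedImage toScreenCompatible(BufferedImage src) {
                GraphicsConfiguration gc = GraphicsEnvironment
                        .getLocalGraphicsEnvironment()
                        .getDefaultScreenDevice()
                        .getDefaultConfiguration();

                BufferedImage dst = gc.createCompatibleImage(
                        src.getWidth(), src.getHeight(), Transparency.OPAQUE);

                Graphics2D g = dst.createGraphics();
                g.drawImage(src, 0, 0, null);
                g.dispose();
                return dst;
            }
        }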

    Read the article

  • CSS Relative sized header/footer

    - by superexsl
    Hi, I'm not a CSS expert, so I might be missing something obvious. I'm trying to set a relatively sized header and footer (so that on larger monitors the header and footer are larger than on smaller screens). To do this, I'm using a percentage height. However, this only works if I set the position to absolute. The problem is, setting it to absolute means that it overlaps the main part of the screen (in between the header and footer). Setting it to relative doesn't work, since it relies on items being inside the header/footer. This is what my header looks like:

        .header {
            position: absolute;
            top: 0px;
            background-color: white;
            width: 100%;
            height: 30%;
        }

    The ASPX page simply contains <div class="header"></div>. Is there a way to get relatively proportioned headers and footers? Thanks
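
    One way to keep the percentage heights without the overlap (a sketch; the .content and .footer class names are placeholders) is to position all three regions absolutely and give the middle area a top and bottom that match the header and footer heights:

        html, body { height: 100%; margin: 0; }

        .header  { position: absolute; top: 0;   left: 0; width: 100%; height: 30%; background-color: white; }
        .content { position: absolute; top: 30%; bottom: 10%; left: 0; width: 100%; overflow: auto; }
        .footer  { position: absolute; bottom: 0; left: 0; width: 100%; height: 10%; }

    The content div then scrolls on its own if it outgrows the space between the two bars.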

    Read the article

  • How to save search, choose a recent search and then populate the form using jQuery and/or asp.net

    - by user187870
    I want to do something similar to what Priceline does. It saves the recent searches in a dropdown menu; when you pick one of the recent searches, the form is populated accordingly. (See the screenshot: http://yfrog.com/5fscreenshot20100501at105p) This is what I am thinking: (1) save the searches into an array in a cookie; (2) when a recent search item is chosen, retrieve the corresponding array element from the cookie and then populate the form. What do you think is the best way to implement this? I especially want to know how to save the form entries into the cookie and how to populate the form.
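
    A sketch of the cookie half on the client side, assuming the form's fields have name attributes, a <select id="recent"> lists the saved searches, and the browser has native JSON support (otherwise include json2.js); it serializes the form into an array kept in one cookie and re-populates the form when an entry is picked:

        function readSearches() {
            var m = document.cookie.match(/(?:^|; )recentSearches=([^;]*)/);
            return m ? JSON.parse(decodeURIComponent(m[1])) : [];
        }

        function saveSearch(form) {
            var data = {};
            $(form).find(':input[name]').each(function () { data[this.name] = $(this).val(); });

            var searches = readSearches();
            searches.unshift(data);                          // newest first, keep at most 5
            var value = encodeURIComponent(JSON.stringify(searches.slice(0, 5)));
            var expires = new Date(new Date().getTime() + 30 * 24 * 3600 * 1000).toUTCString();
            document.cookie = 'recentSearches=' + value + '; path=/; expires=' + expires;
        }

        // when a recent search is picked, restore the field values (index stored in the option value)
        $('#recent').change(function () {
            var data = readSearches()[parseInt(this.value, 10)] || {};
            $.each(data, function (name, val) {
                $('form [name="' + name + '"]').val(val);
            });
        });

    Checkboxes and radio buttons need a little extra handling (set .checked instead of .val()), and the same array could instead be posted to the server and stored per user if the saved searches should survive across browsers.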

    Read the article

  • Super Cam iphone app how do they make it possible?

    - by Silent
    There is an iPhone app called SuperCam, and you can get it through the App Store for free. This app features a way to connect to your webcam or DV cam that is connected to the internet: you set up the IP address and enter the data in the app, and it connects to your online camera. The thing is that they have the video stream, and it looks like they embedded the video in a UIView or web view; at the bottom they have buttons to choose from all the cameras you have set up. So this is different from other video streaming apps, because it does not play the video in the full-screen mode (the MPMediaPlayer API). Would there be any tutorials about this, or could I somehow reverse engineer it?

    Read the article
