Search Results

Search found 12827 results on 514 pages for 'camera tool'.


  • Why does Ubuntu have a separate package for unison version 2.27.57?

    - by intuited
    The current Ubuntu repo contains an extra set of packages for version 2.27.57 of the unison file-synchronization utility:

        $ aptitude search unison
        p   unison            - A file-synchronization tool for Unix and W
        p   unison-gtk        - A file-synchronization tool for Unix and W
        p   unison2.27.57     - A file-synchronization tool for Unix and W
        p   unison2.27.57-gtk - A file-synchronization tool for Unix and W

        $ aptitude show '~nunison[^-]*$' | grep 'Package\|Version'
        Package: unison
        Version: 2.32.52-1ubuntu2
        Package: unison2.27.57
        Version: 2.27.57-2

    What is the reason for this? Are there backwards incompatibilities in more recent versions of unison?

  • Away3D & Directional Light w/ Rotating Meshes

    - by seethru
    This is likely a stupid error but I can't seem to find what I've done wrong. I've got a simple scene with 10 cylinders rotating at a default speed. If I grab one of these cylinders I can rotate it in the opposite direction or at a greater speed. I have a single directional light in the scene. It would appear that the directional light is only calculated at initialization and not on further frames. The shadow created by the light rotates with the cylinder, giving the impression that the light is rotating when it isn't.

    Camera & Light Initialization:

        _view = new View3D();
        addChild(_view);
        _view.antiAlias = 4;
        _view.backgroundColor = 0xFFFFFF;
        _view.camera.z = -850;
        _view.camera.y = 0;
        _view.camera.x = 0;
        _view.camera.lookAt(new Vector3D());
        _view.camera.lens = new PerspectiveLens(15);
        _view.mousePicker = PickingType.RAYCAST_BEST_HIT;

        _light = new DirectionalLight();
        _light.z = -850;
        _light.direction = new Vector3D(1, 1, 1);
        _light.color = 0xFFFFFF;
        _light.ambient = 0.1;
        _light.diffuse = 0.7;
        _view.scene.addChild(_light);

    Mesh and Material creation:

        var material:TextureMaterial = new TextureMaterial(createPow2Texture(sprite, _colors[i]), true, false, true);
        material.animateUVs = true;
        material.lightPicker = _lightPicker;
        cylinder = new Mesh(new CylinderGeometry(radius, radius, 13, 70, 1, true, true), material);
        cylinder.subMeshes[0].scaleU = spriteWidth / sprite.width;
        cylinder.y = y;
        cylinder.mouseEnabled = true;
        cylinder.pickingCollider = PickingColliderType.AS3_BEST_HIT;
        cylinder.addEventListener(MouseEvent3D.MOUSE_OVER, onMouseOverMesh);
        cylinder.addEventListener(MouseEvent3D.MOUSE_MOVE, onMouseOverMesh);
        cylinder.addEventListener(MouseEvent3D.MOUSE_OUT, onMouseOutMesh);
        _cylinders.push(cylinder);

    Frame:

        private function onEnterFrame(event:Event):void {
            for each (var mesh:Mesh in _cylinders) {
                if (mesh == _mouseOverMesh)
                    continue;
                mesh.rotationY += 0.25;
            }
            _view.render();
        }

  • AndEngine doesn't fill an image correctly on my device

    - by Guille
    I'm learning a little bit about AndEngine and trying to follow a tutorial, but I can't get the background image to fill the screen correctly; it just appears on one side of my screen. My device is a Galaxy Nexus (1270x768 I think...). The image is 800x480. The code is:

        public EngineOptions onCreateEngineOptions() {
            camera = new Camera(0, 0, 800, 480);
            EngineOptions engineOptions = new EngineOptions(true, ScreenOrientation.LANDSCAPE_FIXED,
                    new FillResolutionPolicy(), this.camera);
            engineOptions.getAudioOptions().setNeedsMusic(true).setNeedsSound(true);
            engineOptions.getRenderOptions().setMultiSampling(true); //.getConfigChooserOptions().setRequestedMultiSampling(true);
            engineOptions.setWakeLockOptions(WakeLockOptions.SCREEN_ON);
            return engineOptions;
        }

    I have been trying several values in the camera, but it doesn't fill the whole screen. Why?
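    A minimal sketch of one common fix, under the assumption that the activity extends AndEngine's SimpleBaseGameActivity and that backgroundRegion is the loaded ITextureRegion for the 800x480 bitmap (both names are hypothetical here): size the background sprite to the camera's coordinate space rather than to the bitmap, so the FillResolutionPolicy scales one consistent 800x480 space to cover the screen.

        @Override
        protected Scene onCreateScene() {
            Scene scene = new Scene();
            // Stretch the background to the camera's 800x480 space instead of
            // the bitmap's pixel size; FillResolutionPolicy then scales that
            // space to the whole physical screen.
            Sprite background = new Sprite(0, 0, 800, 480,
                    backgroundRegion, this.getVertexBufferObjectManager());
            scene.setBackground(new SpriteBackground(background));
            return scene;
        }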

  • Fitting a rectangle into screen with XNA

    - by alecnash
    I am drawing a rectangle with primitives in XNA. The width is:

        width = GraphicsDevice.Viewport.Width

    and the height is:

        height = GraphicsDevice.Viewport.Height

    I am trying to fit this rectangle to the screen (using different screens and devices), but I am not sure where to put the camera on the Z-axis. Sometimes the camera is too close and sometimes too far. This is what I am using to get the camera distance:

        // Height of pyramid
        float alpha = 0;
        float beta = 0;
        float gamma = 0;
        alpha = (float)Math.Sqrt((width / 2 * width / 2) + (height / 2 * height / 2));
        beta = height / ((float)Math.Cos(MathHelper.ToRadians(67.5f)) * 2);
        gamma = (float)Math.Sqrt(beta * beta - alpha * alpha);
        position = new Vector3(0, 0, gamma);

    Any idea where to put the camera on the Z-axis?
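    For reference, a short derivation under the usual assumptions (a symmetric perspective projection with vertical field of view theta, and the rectangle centred on and facing the camera): half the rectangle's height must subtend half the vertical view angle, so

        \tan\frac{\theta}{2} = \frac{h/2}{z}
        \quad\Longrightarrow\quad
        z = \frac{h}{2\,\tan(\theta/2)}

    Because the projection's aspect ratio is normally built from the same viewport (width/height), the z that fits the height also fits the width, so no separate diagonal term is needed.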

  • Light following me around the room. Something is wrong with my shader!

    - by Robinson
    I'm trying to do a spot (Blinn) light, with falloff and attenuation. It seems to be working OK except I have a bit of a space problem: whenever I move the camera, the light maintains the same position relative to the camera rather than staying fixed in the scene. This results in the light moving around, i.e. not always falling on the same surfaces. It's as if there's a flashlight attached to the camera. I'm transforming the lights beforehand into view space, so Light_Position and Light_Direction are already in eye space (I hope!). I made a little movie of what it looks like: my camera rotating around a point inside a box, with the light fixed at the centre top and its "look at" point in a fixed position in front of it. As the camera rotates around the origin (always looking at the centre), the lighting follows it around; the box itself is not rotating.

    To start, some code. This is how I'm transforming the light into view space (it gets passed into the shader already in view space):

        // Compute eye-space light position.
        Math::Vector3d eyeSpacePosition = MyCamera->ViewMatrix() * MyLightPosition;
        MyShaderVariables->Set(MyLightPositionIndex, eyeSpacePosition);

        // Compute eye-space light direction vector.
        Math::Vector3d eyeSpaceDirection = Math::Unit(MyLightLookAt - MyLightPosition);
        MyCamera->ViewMatrixInverseTranspose().TransformNormal(eyeSpaceDirection);
        MyShaderVariables->Set(MyLightDirectionIndex, eyeSpaceDirection);

    Can anyone give me a clue as to what I'm doing wrong here? I think the light should remain looking at a fixed point on the box, regardless of the camera orientation. Here is the vertex shader:

        ///////////////////////////////////////////////////
        // Vertex Shader
        ///////////////////////////////////////////////////

        #version 420

        // Camera.
        layout (std140) uniform Camera
        {
            mat4 Camera_View;
            mat4 Camera_ViewInverseTranspose;
            mat4 Camera_Projection;
        };

        // Matrices per model.
        layout (std140) uniform Model
        {
            mat4 Model_World;
            mat4 Model_WorldView;
            mat4 Model_WorldViewInverseTranspose;
            mat4 Model_WorldViewProjection;
        };

        // Spotlight.
        layout (std140) uniform OmniLight
        {
            float Light_Intensity;
            vec3  Light_Position;
            vec3  Light_Direction;
            vec4  Light_Ambient_Colour;
            vec4  Light_Diffuse_Colour;
            vec4  Light_Specular_Colour;
            float Light_Attenuation_Min;
            float Light_Attenuation_Max;
            float Light_Cone_Min;
            float Light_Cone_Max;
        };

        // Streams (per vertex).
        layout(location = 0) in vec3 attrib_Position;
        layout(location = 1) in vec3 attrib_Normal;
        layout(location = 2) in vec3 attrib_Tangent;
        layout(location = 3) in vec3 attrib_BiNormal;
        layout(location = 4) in vec2 attrib_Texture;

        // Output streams (per vertex).
        out vec3 attrib_Fragment_Normal;
        out vec4 attrib_Fragment_Position;
        out vec2 attrib_Fragment_Texture;
        out vec3 attrib_Fragment_Light;
        out vec3 attrib_Fragment_Eye;

        void main()
        {
            // Transform normal into eye space.
            attrib_Fragment_Normal = (Model_WorldViewInverseTranspose * vec4(attrib_Normal, 0.0)).xyz;

            // Transform vertex into eye space (world * view * vertex = eye).
            vec4 position = Model_WorldView * vec4(attrib_Position, 1.0);

            // Compute vector from eye-space vertex to light (light is in eye space already).
            attrib_Fragment_Light = Light_Position - position.xyz;

            // Compute vector from the vertex to the eye (which is now at the origin).
            attrib_Fragment_Eye = -position.xyz;

            // Output texture coord.
            attrib_Fragment_Texture = attrib_Texture;

            // Compute vertex position by applying camera projection.
            gl_Position = Camera_Projection * position;
        }

    and the pixel shader:

        ///////////////////////////////////////////////////
        // Pixel Shader
        ///////////////////////////////////////////////////

        #version 420

        // Samplers.
        uniform sampler2D Map_Diffuse;

        // Material.
        layout (std140) uniform Material
        {
            vec4  Material_Ambient_Colour;
            vec4  Material_Diffuse_Colour;
            vec4  Material_Specular_Colour;
            vec4  Material_Emissive_Colour;
            float Material_Shininess;
            float Material_Strength;
        };

        // Spotlight.
        layout (std140) uniform OmniLight
        {
            float Light_Intensity;
            vec3  Light_Position;
            vec3  Light_Direction;
            vec4  Light_Ambient_Colour;
            vec4  Light_Diffuse_Colour;
            vec4  Light_Specular_Colour;
            float Light_Attenuation_Min;
            float Light_Attenuation_Max;
            float Light_Cone_Min;
            float Light_Cone_Max;
        };

        // Input streams (per vertex).
        in vec3 attrib_Fragment_Normal;
        in vec3 attrib_Fragment_Position;
        in vec2 attrib_Fragment_Texture;
        in vec3 attrib_Fragment_Light;
        in vec3 attrib_Fragment_Eye;

        // Result.
        out vec4 Out_Colour;

        void main(void)
        {
            // Compute N dot L.
            vec3 N = normalize(attrib_Fragment_Normal);
            vec3 L = normalize(attrib_Fragment_Light);
            vec3 E = normalize(attrib_Fragment_Eye);
            vec3 H = normalize(L + E);
            float NdotL = clamp(dot(L, N), 0.0, 1.0);
            float NdotH = clamp(dot(N, H), 0.0, 1.0);

            // Compute ambient term.
            vec4 ambient = Material_Ambient_Colour * Light_Ambient_Colour;

            // Diffuse.
            vec4 diffuse = texture2D(Map_Diffuse, attrib_Fragment_Texture)
                         * Light_Diffuse_Colour * Material_Diffuse_Colour * NdotL;

            // Specular.
            float specularIntensity = pow(NdotH, Material_Shininess) * Material_Strength;
            vec4 specular = Light_Specular_Colour * Material_Specular_Colour * specularIntensity;

            // Light attenuation (so we don't have to use 1 - x, we step between Max and Min).
            float d = length(-attrib_Fragment_Light);
            float attenuation = smoothstep(Light_Attenuation_Max, Light_Attenuation_Min, d);

            // Adjust attenuation based on light cone.
            float LdotS = dot(-L, Light_Direction), CosI = Light_Cone_Min - Light_Cone_Max;
            attenuation *= clamp((LdotS - Light_Cone_Max) / CosI, 0.0, 1.0);

            // Final colour.
            Out_Colour = (ambient + diffuse + specular) * Light_Intensity * attenuation;
        }
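    A note on the space conversion this question hinges on (a general identity, not a diagnosis of this particular code): for a view matrix V, positions and directions transform differently, and both must be recomputed and re-uploaded on every frame in which the camera moves. If either is computed only once at startup, the light is effectively frozen in eye space, which looks exactly like a flashlight glued to the camera.

        \mathbf{p}_{eye} = V \begin{pmatrix} \mathbf{p}_{world} \\ 1 \end{pmatrix},
        \qquad
        \mathbf{d}_{eye} = \left(V^{-1}\right)^{T}_{3\times 3}\,\mathbf{d}_{world}
                         = V_{3\times 3}\,\mathbf{d}_{world}

    (The last equality holds when V is rigid, i.e. rotation plus translation with no scaling.)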

  • How do I create a third-person view using DXUTCamera in DX10?

    - by David
    I am creating a 3D flying game and using DXUTCamera for my view. I can get the camera to take on the character's position, but I would like to view my character in the third person. Here is my code for the first-person view:

        // Put the camera on the object.
        D3DXVECTOR3 viewerPos;
        D3DXVECTOR3 lookAtThis;
        D3DXVECTOR3 up( 5.0f, 1.0f, 0.0f );
        D3DXVECTOR3 newUp;
        D3DXMATRIX matView;

        // Set the viewer's position to the position of the thing.
        viewerPos.x = character->x;
        viewerPos.y = character->y;
        viewerPos.z = character->z;

        // Create a new vector for the direction for the viewer to look.
        character->setUpWorldMatrix();
        D3DXVECTOR3 newDir, lookAtPoint;
        D3DXVec3TransformCoord(&newDir, &character->initVecDir, &character->matAllRotations);

        // Set look-at point.
        D3DXVec3Normalize(&lookAtPoint, &newDir);
        lookAtPoint.x += viewerPos.x;
        lookAtPoint.y += viewerPos.y;
        lookAtPoint.z += viewerPos.z;

        g_Camera.SetViewParams(&viewerPos, &lookAtPoint);

    So does anyone have ideas on how I can move the camera to a third-person view? Preferably timed, so there is a smooth action in the camera movement. (I'm hoping I can just edit this code instead of bringing in another camera class.)
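    For what it's worth, the usual third-person arrangement (generic camera math, not DXUT-specific) places the eye behind and above the character along its facing direction, then looks at the character; smooth following comes from blending the camera toward that target each frame rather than snapping to it:

        \mathbf{p}_{cam} = \mathbf{p}_{char} - d\,\hat{\mathbf{f}} + h\,\hat{\mathbf{u}},
        \qquad
        \mathbf{p}^{(t+1)} = \operatorname{lerp}\!\left(\mathbf{p}^{(t)},\; \mathbf{p}_{cam},\; 1 - e^{-\lambda\,\Delta t}\right)

    where f-hat is the character's facing direction (newDir above), u-hat is up, d is the follow distance, h the height offset, and lambda the smoothing rate. In this code that would mean passing the blended position as the first argument of SetViewParams and the character's position as the second.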

  • How can I improve my isometric tile-picking algorithm?

    - by Cypher
    I've spent the last few days researching isometric tile-picking algorithms (converting screen coordinates to tile coordinates), and have obviously found a lot of the math beyond my grasp. I have come fairly close and what I have is workable, but I would like to improve on this algorithm, as it's a little off and seems to pick down and to the right of the mouse pointer. I've uploaded a video to help visualize the current implementation: http://youtu.be/EqwWcq1zuaM

    My isometric rendering algorithm is based on what is found at this stackoverflow question's answer, with the exception that my x and y axes are inverted (x increases down-right, while y increases up-right). Here is where I am converting from screen to tiles:

        // These next few lines convert the mouse pointer position from screen
        // coordinates to tile-grid coordinates. cameraOffset captures the current
        // mouse location and takes into consideration the camera's position on screen.
        System.Drawing.Point cameraOffset = new System.Drawing.Point( 0, 0 );
        cameraOffset.X = mouseLocation.X + (int)camera.Left;
        cameraOffset.Y = mouseLocation.Y + (int)camera.Top;

        // The camera-aware mouse coordinates are then further converted in an attempt
        // to select only the "tile" portion of the grid tiles, instead of the entire
        // rectangle. This algorithm gets close, but could use improvement.
        mouseTileLocation.X = ( cameraOffset.X + 2 * cameraOffset.Y ) / Global.TileWidth;
        mouseTileLocation.Y = -( ( 2 * cameraOffset.Y - cameraOffset.X ) / Global.TileWidth );

    Things to make note of:

    - mouseLocation is a System.Drawing.Point that represents the screen coordinates of the mouse pointer.
    - cameraOffset is the screen position of the mouse pointer that includes the position of the game camera.
    - mouseTileLocation is a System.Drawing.Point that is supposed to represent the tile coordinates of the mouse pointer.

    If you check out the above link to youtube, you'll notice that the picking algorithm is off a bit. How can I improve on this?
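    For reference, the exact inverse of the common diamond-isometric mapping, under the usual convention (tile width w, tile height h = w/2, and screen coordinates measured from the top vertex of tile (0,0); swap the roles of the axes for the inverted convention described above). The forward mapping is:

        s_x = (t_x - t_y)\,\frac{w}{2},
        \qquad
        s_y = (t_x + t_y)\,\frac{h}{2}

    and inverting the pair gives:

        t_x = \frac{1}{2}\left(\frac{s_x}{w/2} + \frac{s_y}{h/2}\right),
        \qquad
        t_y = \frac{1}{2}\left(\frac{s_y}{h/2} - \frac{s_x}{w/2}\right)

    One practical detail: s_x and s_y must be measured from the diamond's top vertex, which sits half a tile width to the right of the sprite's top-left drawing position, so the map origin (plus that w/2 offset) has to be subtracted before applying the inverse. Omitting such an offset shifts every pick by a constant amount down and to the right, which would match the bias described here.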

  • OpenGL ES object picking

    - by lacas
    I've seen a lot of OpenGL ES picking code, but nothing has worked. Can someone tell me what I am missing? My code is (from tutorials/forums):

        Vec3 far = Camera.getPosition();
        Vec3 near = Shared.opengl().getPickingRay(ev.getX(), ev.getY(), 0);
        Vec3 direction = far.sub(near);
        direction.normalize();
        Log.e("direction", direction.x + " " + direction.y + " " + direction.z);

        Ray mouseRay = new Ray(near, direction);
        for (int n = 0; n < ObjectFactory.objects.size(); n++) {
            if (ObjectFactory.objects.get(n) != null) {
                IObject obj = ObjectFactory.objects.get(n);
                float discriminant, b;
                float radius = 0.1f;
                b = -mouseRay.getOrigin().dot(mouseRay.getDirection());
                discriminant = b * b - mouseRay.getOrigin().dot(mouseRay.getOrigin()) + radius * radius;
                discriminant = FloatMath.sqrt(discriminant);
                double x1 = b - discriminant;
                double x2 = b + discriminant;
                Log.e("asd", obj.getName() + " " + discriminant + " " + x1 + " " + x2);
            }
        }

    My camera vectors:

        // cam
        Vec3 position  = new Vec3(-obj.getPosX() + x, obj.getPosZ() - 0.3f, obj.getPosY() + z);
        Vec3 direction = new Vec3(-obj.getPosX(), obj.getPosZ(), obj.getPosY());
        Vec3 up        = new Vec3(0.0f, -1.0f, 0.0f);
        Camera.set(position, direction, up);

    And my picking code:

        public Vec3 getPickingRay(float mouseX, float mouseY, float mouseZ) {
            int[] viewport = getViewport();
            float[] modelview = getModelView();
            float[] projection = getProjection();
            float winX, winY;
            float[] position = new float[4];
            winX = (float) mouseX;
            winY = (float) Shared.screen.width - (float) mouseY;
            GLU.gluUnProject(winX, winY, mouseZ, modelview, 0, projection, 0, viewport, 0, position, 0);
            return new Vec3(position[0], position[1], position[2]);
        }

    My camera moves all the time in 3D space, and my actors/models move too. The camera follows one actor/model, and the user can move the camera on a circle around this model. How can I change the above code to make it work?
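    For reference, the ray-sphere test this loop is built on assumes the sphere is centred at the origin. For a sphere centred at c with radius r, and a ray with origin o and unit direction d-hat, the standard derivation substitutes m = o - c, so each object's position enters the test:

        \mathbf{m} = \mathbf{o} - \mathbf{c},
        \qquad
        b = -\,\mathbf{m}\cdot\hat{\mathbf{d}},
        \qquad
        \Delta = b^2 - \mathbf{m}\cdot\mathbf{m} + r^2,
        \qquad
        t_{1,2} = b \mp \sqrt{\Delta}

    The ray hits only if Delta >= 0 (taking the square root of a negative Delta yields NaN, so test it before calling FloatMath.sqrt), and the nearest visible intersection is the smallest positive t.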

  • Mesh with Alpha Texture doesn't blend properly

    - by faulty
    I've followed examples from various places on setting the OutputMerger's BlendState to enable alpha/transparent textures on a mesh. The setup is as follows:

        var transParentOp = new BlendStateDescription
        {
            SourceBlend = BlendOption.SourceAlpha,
            DestinationBlend = BlendOption.InverseDestinationAlpha,
            BlendOperation = BlendOperation.Add,
            SourceAlphaBlend = BlendOption.Zero,
            DestinationAlphaBlend = BlendOption.Zero,
            AlphaBlendOperation = BlendOperation.Add,
        };

    I've made a sample that displays 3 meshes A, B and C, each overlapping the next. They are drawn sequentially, A to C, ordered by distance from the camera with A nearest and C furthest. The expected output is that A is see-through, showing parts of B and C behind it, and B is see-through, showing part of C. But what I get is that none of them are see-through in that order. If I move C closer to the camera, it becomes semi-transparent and shows A and B through it; if I move B closer to the camera, it shows A but not C. The reverse, in other words. So it seems I need to draw them in reverse order, furthest from the camera first and nearest last. Is it supposed to be done this way, or can I configure the blend state so it works no matter what order I draw them in? Thanks
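    For context, the standard "over" blend -- usually configured as SourceAlpha / InverseSourceAlpha; note the InverseDestinationAlpha above differs from this -- composites each draw against whatever is already in the framebuffer:

        C_{out} = \alpha_{src}\,C_{src} + (1 - \alpha_{src})\,C_{dst}

    This operation is not commutative, so the result genuinely depends on draw order: the conventional approach is to draw opaque geometry first, then transparent geometry sorted back to front. Order-independent transparency exists (additive blending, depth peeling, per-pixel linked lists), but those are different techniques, not a blend-state setting.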

  • How do I transfer videos from a DV camera to DivX?

    - by Ward
    I have a video camera that uses mini-DV tapes. In the past, I've transferred the files and made DVDs, but that was time- and disk-space-consuming. I wanted to find new tools and to figure out how to convert the videos to something smaller like DivX, but I didn't know enough about all the different formats to answer a previous question. Well, now I've done a bunch of research, I understand some of the details of video encoding, and in the process I wrote up some notes on the different formats involved in going from a DV videocam to DivX or H.264. They're a bit rambling, but in case they're of any use, I'm going to post them as an answer. I'd be very interested in anyone else's answer as well.

  • What's wrong with my lookAt and move-forward code?

    - by alaslipknot
    I'm still in the process of getting familiar with libGDX, and one of the fun things I love to do is write basic methods for reuse in future projects. For now I'm stuck on getting a Sprite to rotate toward a target (Vector2) and then move forward based on that rotation. The code I'm using is this:

        // set angle
        public void lookAt(Vector2 target) {
            float angle = (float) Math.atan2(target.y - this.position.y, target.x - this.position.x);
            angle = (float) (angle * (180 / Math.PI));
            setAngle(angle);
        }

        // move forward
        public void moveForward() {
            this.position.x += Math.cos(getAngle()) * this.speed;
            this.position.y += Math.sin(getAngle()) * this.speed;
        }

    And this is my render method:

        @Override
        public void render(float delta) {
            // TODO Auto-generated method stub
            Gdx.gl.glClearColor(0, 0, 0.0f, 1);
            Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
            // groupUpdate();

            Vector3 mousePos = new Vector3(Gdx.input.getX(), Gdx.input.getY(), 0);
            camera.unproject(mousePos);
            ball.lookAt(new Vector2(mousePos.x, mousePos.y));
            //
            if (Gdx.input.isTouched()) {
                ball.moveForward();
            }

            batch.begin();
            batch.draw(ball.getSprite(), ball.getPos().x, ball.getPos().y,
                    ball.getSprite().getOriginX(), ball.getSprite().getOriginY(),
                    ball.getSprite().getWidth(), ball.getSprite().getHeight(),
                    .5f, .5f, ball.getAngle());
            batch.end();
        }

    The goal is to make the ball always look at the mouse cursor, and then move forward when I click. I'm also using this camera:

        // create the camera and the SpriteBatch
        camera = new OrthographicCamera();
        camera.setToOrtho(false, 800, 480);

    ...and the result was creepy. Thank you
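    One likely culprit (a sketch, assuming setAngle/getAngle store degrees, which is what the 180 / Math.PI conversion in lookAt produces): Math.cos and Math.sin expect radians, so feeding them a degree value makes the movement direction spin wildly. libGDX ships degree-based helpers in MathUtils that would keep the rest of the class unchanged:

        import com.badlogic.gdx.math.MathUtils;

        // Move along the stored facing angle. MathUtils.cosDeg/sinDeg take
        // degrees directly, matching the degree value produced by lookAt().
        public void moveForward() {
            this.position.x += MathUtils.cosDeg(getAngle()) * this.speed;
            this.position.y += MathUtils.sinDeg(getAngle()) * this.speed;
        }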

  • C++ OpenGL trouble trapping cursor in window

    - by ezio160324
    I am using OpenGL, and I'm trying to trap my cursor inside my game window (using both SetCursorPos and ClipCursor). But these conflict with my camera-rotation code, as my camera is rotated with the mouse. If there is a way to do it, please let me know. If possible, I would be willing to make it so that when the cursor reaches an edge of the screen, it jumps to the opposite edge (though I fear that would also conflict with my camera code).

  • Is there a tool for managing redundant pages across a website?

    - by dmanexe
    I am in charge of constructing a website with a '2-dimensional' site map, as explained later. I am looking for a tool (preferably a Wordpress plugin, as the site is built in Wordpress already) that would make managing thousands of pages a lot easier. To explain further, let me describe my situation. I am building a website for a construction company, and they have several key cities and several key services. They want a parent page for each service, another unique page for the child sub-service, and finally a grandchild page for the city they are performing the service in. For example, if they were doing Concrete Construction in Los Angeles, the URL would look like:

        /concrete/construction/los-angeles

    The content on /los-angeles would be the same as on /malibu or /burbank. However, there would be a different set of content for /concrete/design/los-angeles, though the entire page content (apart from a few variables with city names) would be the same. Is there a way to manage or automate 'matrixing' this information on the site? I am looking for a tool that would allow me to easily add a 'city' with the same content across all grandchildren, per the child's content requirements. All of the grandchild pages will have redundant content across them. Should something like this not exist, how difficult would it be to create as a freelance side project? I need a tool like this because I am approaching ~500 cities and 50 services (Concrete Construction, Concrete Design, Concrete Engineering, etc.).

  • Any diff/merge tool that provides a report (metrics) of conflicts?

    - by cad
    CONTEXT: I am preparing a big C# merge using Visual Studio 2008 and TFS. I need to create a report with the files and the number of collisions (total changes and conflicts) for each file (and in total, of course).

    PROBLEM: I cannot do it, for two reasons (the first one is solved):

    1. Using TFS merge I can see the file comparison, but I cannot export the list of conflicting files; I can only try to resolve the conflicts. (I have solved this using Beyond Compare, which allows me to export the file list.)
    2. Using TFS merge I can only get the number of conflicts manually, file by file, but I have more than 800 files (and will probably have to repeat this in the near future, so doing it manually is not an option).

    There are dozens of file comparison tools (http://en.wikipedia.org/wiki/Comparison_of_file_comparison_tools), but I am not sure which one (if any) could give me these metrics. I have also read several forums and questions here, but they are more general (which diff tool is better), and I am looking for a very specific report. So my questions are:

    - Is Visual Studio 2010 (still using TFS 2008) capable of producing such reports/exports?
    - Is there any tool that provides this kind of metrics? (For now I am trying Beyond Compare.)

  • Is there a free tool which can help visualize the logic of a stored procedure in SQL Server 2008 R2?

    - by Hamish Grubijan
    I would like to be able to plot a call graph of a stored procedure. I am not interested in every detail, and I am not concerned with dynamic SQL (although it would be cool to detect it and skip it, or mark it as such). I would like the tool to generate a tree for me, given the server name, database name, and stored procedure name -- a "call tree" which includes:

    - The parent stored procedure.
    - Every other stored procedure that is being called as a child of its caller.
    - Every table that is being modified (updated or deleted from) as a child of the stored procedure which does it.

    Hopefully it is clear what I am after; if not, please do ask. If there is no tool that can do this, then I would like to try to write one myself. Python 2.6 is my language of choice, and I would like to use standard libraries as much as possible. Any suggestions?

    EDIT (for the purposes of the bounty): SQL syntax is COMPLEX. I need something that can parse all kinds of SQL 2008, even if it looks stupid. No corner cases barred :)

    EDIT 2: I would be OK if all I am missing is graphics.

  • Unable to capture a picture using the camera in J2ME Polish

    - by SIVAKUMAR.J
    I'm developing a mobile app in J2ME and am now converting it to J2ME Polish. In my app I capture a picture using the phone's camera. It works fine in plain J2ME, but not in J2ME Polish, and I cannot resolve it. The code snippet is given below:

        public class VideoCanvas extends Canvas {
            // private VideoMIDlet midlet;
            Form frm = null;

            public VideoCanvas(VideoControl videoControl) {
                int width = getWidth();
                int height = getHeight();
                // this.midlet = midlet;
                videoControl.initDisplayMode(VideoControl.USE_DIRECT_VIDEO, this);
                try {
                    videoControl.setDisplayLocation(2, 2);
                    videoControl.setDisplaySize(width - 4, height - 4);
                } catch (MediaException me) {}
                videoControl.setVisible(true);
            }

            public VideoCanvas(VideoControl videoControl, Form ff) {
                frm = ff;
                int width = getWidth();
                int height = getHeight();
                Ticker ticker = new Ticker("B4 video controll init");
                frm.setTicker(ticker);
                videoControl.initDisplayMode(VideoControl.USE_DIRECT_VIDEO, this);
                ticker = new Ticker("after video controll init");
                frm.setTicker(ticker);
                try {
                    videoControl.setDisplayLocation(2, 2);
                    videoControl.setDisplaySize(width - 4, height - 4);
                } catch (MediaException me) {}
                videoControl.setVisible(true);
                ticker = new Ticker("Device not supported");
                frm.setTicker(ticker);
            }

            public void paint(Graphics g) {
                int width = getWidth();
                int height = getHeight();
                g.setColor(0x00ff00);
                g.drawRect(0, 0, width - 1, height - 1);
                g.drawRect(1, 1, width - 3, height - 3);
            }
        }

    In plain J2ME the above code works correctly. But in J2ME Polish, videoControl.initDisplayMode(VideoControl.USE_DIRECT_VIDEO, this) -- where this refers to the VideoCanvas (which extends javax.microedition.lcdui.Canvas) -- throws an IllegalArgumentException ("container should be canvas"). How can I solve this issue?
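    A sketch of one possible workaround, under the assumption (not confirmed by the question) that the J2ME Polish build is substituting its own de.enough.polish.ui.Canvas for the imported Canvas, so the object handed to initDisplayMode is no longer a real LCDUI Canvas: extend the fully qualified class so the preprocessor's substitution cannot apply.

        // Hedged sketch: by naming javax.microedition.lcdui.Canvas explicitly
        // (no import for Polish to rewrite), `this` stays an LCDUI Canvas,
        // which is what USE_DIRECT_VIDEO requires as its container argument.
        public class VideoCanvas extends javax.microedition.lcdui.Canvas {

            public VideoCanvas(javax.microedition.media.control.VideoControl videoControl) {
                try {
                    videoControl.initDisplayMode(
                            javax.microedition.media.control.VideoControl.USE_DIRECT_VIDEO, this);
                    videoControl.setDisplayLocation(2, 2);
                    videoControl.setDisplaySize(getWidth() - 4, getHeight() - 4);
                    videoControl.setVisible(true);
                } catch (javax.microedition.media.MediaException me) {
                    // direct video not supported on this device
                }
            }

            public void paint(javax.microedition.lcdui.Graphics g) {
                g.setColor(0x00ff00);
                g.drawRect(0, 0, getWidth() - 1, getHeight() - 1);
            }
        }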

  • Ogre material scripts; how do I give a technique multiple lod_indexes?

    - by BlueNovember
    I have an Ogre material script that defines 4 rendering techniques: one using GLSL shaders, then 3 others that just use textures of different resolutions. I want to use the GLSL shader unconditionally if the graphics card supports it, and the other 3 textures depending on camera distance. At the moment my script is:

        material foo
        {
            lod_distances 1600 2000

            technique shaders
            {
                lod_index 0
                lod_index 1
                lod_index 2
                // various passes here
            }

            technique high_res
            {
                lod_index 0
                // various passes here
            }

            technique medium_res
            {
                lod_index 1
                // various passes here
            }

            technique low_res
            {
                lod_index 2
                // various passes here
            }
        }

    Extra information: the Ogre manual says:

        Increasing indexes denote lower levels of detail.

        You can (and often will) assign more than one technique to the same LOD index;
        what this means is that OGRE will pick the best technique of the ones listed
        at the same LOD index. OGRE determines which one is 'best' by which one is
        listed first.

    Currently, on a machine supporting the GLSL version I am using, the script behaves as follows:

        Camera > 2000        : Shader technique
        Camera 1600 <= 2000  : Medium
        Camera <= 1600       : High

    If I change the lod_index order in the shader technique to:

        lod_index 2
        lod_index 1
        lod_index 0

    the behaviour becomes:

        Camera > 2000        : Low
        Camera 1600 <= 2000  : Medium
        Camera <= 1600       : Shader

    implying only the last lod_index is used. If I change it to:

        lod_index 0 1 2

    it shouts at me:

        Compiler error: fewer parameters expected in foo.material(#): lod_index only supports 1 argument

    So how do I specify a technique to have 3 lod_indexes? Duplication works:

        technique shaders
        {
            lod_index 0
            // various passes here
        }

        technique shaders1
        {
            lod_index 1
            // passes repeated here
        }

        technique shaders2
        {
            lod_index 2
            // passes repeated here
        }

    ...but it's ugly.

  • How do I present only the image picker view in landscape mode while the whole application is in portrait mode?

    - by Wolvorin
    I have tried almost all the answers provided by Google and SO during the last two days, but no luck :( What I want is for my whole application to be in portrait mode only, and that works fine on iOS 6+ (the only support required at the moment). The problem is that I need to launch a UIImagePickerController with image source type camera in landscape mode only. What I've tried so far: I created a category on UIImagePickerController for orientation, like this:

        - (BOOL)shouldAutorotate {
            return NO;
        }

        - (NSUInteger)supportedInterfaceOrientations {
            return UIInterfaceOrientationMaskLandscape;
        }

        - (UIInterfaceOrientation)preferredInterfaceOrientationForPresentation {
            return UIInterfaceOrientationLandscapeLeft;
        }

    But the camera view is not properly aligned: it just follows the orientation of the device with some +/- 90 degree angle, not what I need. Even the camera-control buttons shown by the camera view follow it, i.e. the view is rotated 90 degrees anticlockwise and stays that way. Is there any way to use the camera with proper alignment, or do I have to use another framework to work with it? Please help me; I've been stuck on this for the last two days.

  • What tool can I use to merge wsdl and xsd file?

    - by LukLed
    I have two files: one with a web service description (WSDL), and a second with the data structures used by the web service (XSD). I have nothing more; the web service doesn't work yet. I need to merge them into one file, because the Delphi 7 WSDL Importer doesn't handle included XSD files too well. Where can I find a tool to do this?
