Search Results

Search found 5035 results on 202 pages for '3d printing'.

Page 15 of 202

  • WPF: Best method for printing paginated datagrids

    - by FauxReal
    Boy, did I get an education looking into this. I guess I've been spoiled by PowerBuilder, which has fantastic functionality for this out of the box. Does anyone seriously write custom DocumentPaginator objects to handle reporting needs for their LOB apps? I want to be able to print "for free" and not have to code like crazy just to take what's on the screen and throw it on paper. How are people doing this? Does anyone have a recommended third-party library for printing largish datagrids? Thanks
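
    For reference, a minimal sketch of the custom-DocumentPaginator route mentioned above, printing a fixed number of grid rows per page as plain text. The RowsPerPage constant, the US Letter page size, and the ToString row rendering are simplifying assumptions; a real report would measure fonts and draw column borders:

    using System;
    using System.Globalization;
    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Documents;
    using System.Windows.Media;

    public class GridPaginator : DocumentPaginator
    {
        private readonly DataGrid grid;              // the on-screen grid to print
        private const int RowsPerPage = 40;          // assumes roughly fixed row height
        private readonly Size pageSize = new Size(816, 1056); // US Letter at 96 DPI

        public GridPaginator(DataGrid grid) { this.grid = grid; }

        public override bool IsPageCountValid { get { return true; } }

        public override int PageCount
        {
            get { return (int)Math.Ceiling(grid.Items.Count / (double)RowsPerPage); }
        }

        public override Size PageSize
        {
            get { return pageSize; }
            set { throw new NotSupportedException(); }
        }

        public override IDocumentPaginatorSource Source { get { return null; } }

        public override DocumentPage GetPage(int pageNumber)
        {
            // Draw one page's worth of rows as plain text; a real report
            // would measure columns and draw grid lines too.
            DrawingVisual visual = new DrawingVisual();
            using (DrawingContext dc = visual.RenderOpen())
            {
                double y = 48;
                int first = pageNumber * RowsPerPage;
                int last = Math.Min(first + RowsPerPage, grid.Items.Count);
                for (int i = first; i < last; i++)
                {
                    FormattedText text = new FormattedText(
                        grid.Items[i].ToString(), CultureInfo.CurrentCulture,
                        FlowDirection.LeftToRight, new Typeface("Segoe UI"),
                        12, Brushes.Black);
                    dc.DrawText(text, new Point(48, y));
                    y += text.Height + 2;
                }
            }
            return new DocumentPage(visual, pageSize, new Rect(pageSize), new Rect(pageSize));
        }
    }

    A PrintDialog would then consume it via new PrintDialog().PrintDocument(new GridPaginator(myGrid), "Grid report").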

    Read the article

  • Windows Service Printing Behaviour

    - by Andre
    Alright, I was tasked to develop a Windows service that listens to a directory for files that are dropped in it, reads them, deletes them and prints out a report. I installed the service on my work laptop (Win 7 x86) and a test machine (XP x86) under a User account at first. It would do everything as it should, except print the report. No errors, nothing. Then I made it run under Local System and it produced a "No printers found" exception. Converting the app to a console application and running it on these machines gave the desired result. OK, so now I was assuming there was some security issue involved. Then I installed the service on a Server 2008 x64 machine (under Local System) and it just worked. Can anybody explain to me why this is happening? Why does the service allow printing from a server OS but not from a desktop OS, or am I missing something very obvious?
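
    A quick way to narrow this down is to log which printers the service account can actually see: per-user network printer connections live in a user's profile, so Local System typically only enumerates machine-wide local printers. A minimal diagnostic sketch (the log path is an assumption):

    using System;
    using System.Drawing.Printing;
    using System.IO;

    class PrinterProbe
    {
        static void Main()
        {
            // Append to a location the service account can write to.
            using (StreamWriter log = File.AppendText(@"C:\Temp\printers.log"))
            {
                log.WriteLine("Probe at " + DateTime.Now);

                // Lists only the printers visible to the *current* account.
                foreach (string name in PrinterSettings.InstalledPrinters)
                    log.WriteLine("  visible printer: " + name);

                PrinterSettings settings = new PrinterSettings();
                log.WriteLine(settings.IsValid
                    ? "Default printer: " + settings.PrinterName
                    : "No valid default printer for this account");
            }
        }
    }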

    Read the article

  • Deferred printing in Java

    - by Bober02
    I have a specific issue with general console printing and I was wondering whether anyone has a solution for it. I am trying to print a data table which would look something like this:

    Table
    ----------------------
    Name     |Surname     |
    ----------------------
    Mike     |Mikhailowish|
    Rafaello |Mirena      |

    and so on. In order to print the border of the table I need to know the maximum length of each column's values. I don't want to go through the whole database to find that out and then go through it again to print. I would rather do something like:

    System.out.printLater(s); // here, just leave a pointer to a StringBuilder you will build
    ...
    s.append("--------");
    ...
    System.out.printAllDeferred();

    I understand the above is probably, in 99.99999% of cases, impossible, but perhaps you guys have a clever way of achieving it?
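
    The usual workaround is two passes over buffered rows rather than truly deferred printing: collect the rows first, compute each column's maximum width, then emit the border and the rows. A minimal sketch of the idea using the question's sample rows (written in C#; the same two-pass structure ports directly to Java with StringBuilder and String.format):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class DeferredTable
    {
        static void Main()
        {
            List<string[]> rows = new List<string[]>
            {
                new[] { "Name", "Surname" },
                new[] { "Mike", "Mikhailowish" },
                new[] { "Rafaello", "Mirena" },
            };

            // Pass 1: find the widest cell in each column.
            int cols = rows[0].Length;
            int[] width = new int[cols];
            foreach (string[] row in rows)
                for (int c = 0; c < cols; c++)
                    width[c] = Math.Max(width[c], row[c].Length);

            // Pass 2: the border length is now known, so print everything.
            string border = new string('-', width.Sum() + cols + 1);
            Console.WriteLine(border);
            foreach (string[] row in rows)
                Console.WriteLine("|" + string.Join("|",
                    row.Select((cell, c) => cell.PadRight(width[c])).ToArray()) + "|");
            Console.WriteLine(border);
        }
    }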

    Read the article

  • Printing python tkinter output

    - by Eric
    I am trying to print the contents of a python tkinter canvas. I have tried using the postscript method of canvas to create a postscript file, but I get a blank page. I know this is because I have embedded widgets, and these do not get rendered by the postscript method. Before I rewrite my program to create a more printer-friendly layout, can someone suggest a way to approach this problem? All of the programming books I have ever read approach the problem of sending output to a printer with a bit of hand-waving, something along the lines of: "It's a difficult problem that depends on interacting with the operating system." I also have a hard time finding resources about this because of all the pages related to printing to the screen. I am using Python 2.6, on Ubuntu 9.04.

    Read the article

  • Printing contents of a dynamically created iframe from parent window

    - by certainlyakey
    I have a page with a list of links and a div which serves as a placeholder for the pages the links lead to. Every time the user clicks on a link, an iframe is created in the div and its src attribute gets the value of the link's href attribute - simple and clear, a sort of gallery of external web pages. What I do have a problem with is printing the contents of the iframe. To do this I use this code:

    function PrintIframe() {
        frames["name"].focus();
        frames["name"].print();
    }

    The issue seems to be that the iframe is created dynamically by jQuery - when I insert an iframe right into the HTML code, the browser prints the external page all right. But with the same code injected by JavaScript, nothing happens. I tried to use jQuery 1.3's 'live' event on the "Print" link, with no success. Is there a way to overcome this?

    Read the article

  • Mac OS X duplex printing problem: one- vs. multi-paged documents

    - by Christian Lindig
    I like to print on pre-printed stationery using Preview.app and a duplex-capable HP Color LaserJet 4700 (PostScript) printer. The print dialog handles one- and two-page documents differently: the paper needs to be placed differently into the tray when the document contains one page versus when it contains two. This is not obvious when printing on plain paper, but becomes obvious when the front and reverse sides of the sheets are marked; otherwise the first page would end up on the reverse side of the first sheet. I believe the problem is caused by the printer driver setting duplex printing to false (using the PostScript setpagedevice operator) when emitting a single-page document, versus keeping it set to true when emitting multi-page documents - all this despite duplex printing always being specified in the print dialog. When printing a single-page document, duplex=true and duplex=false seem to make a difference with respect to which side of a sheet gets printed on. It would also be helpful if others could confirm the problem actually exists; I suspect it is not limited to specific printers. I'm on OS X 10.6 and I checked two different HP printers.

    Read the article

  • How do I keep a 3D model on the screen in OpenGL?

    - by NoobScratcher
    I'm trying to keep a 3D model on the screen by placing my glDrawElements calls inside the draw function, with the declarations at the top of the .cpp file. When I render the model, the model attaches itself to the current vertex buffer object. This is because my whole graphical user interface is in 2D quads, except the window frame. Is there a way to avoid this happening, or any common causes of this? Creating the file object:

    int index = IndexAssigner(1, 1);
    // make a file object and store the list and the index of that list in a C string
    ifstream file(list[index].c_str());
    // Make another string
    // string line;
    points.push_back(Point());
    Point p;
    int face[4];

    Model rendering code:

    int numfloats = 4;
    float* point = reinterpret_cast<float*>(&points[0]);
    int num_bytes = numfloats * sizeof(float);
    cout << "Size Of Point" << sizeof(Point) << endl;
    GLuint vertexbuffer;
    glGenVertexArrays(1, &vao[3]);
    glGenBuffers(1, &vertexbuffer);
    glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
    glBufferData(GL_ARRAY_BUFFER, points.size()*sizeof(points), points.data(), GL_STATIC_DRAW);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, num_bytes, &points[0]);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, points.size(), &points[0]);
    glEnableClientState(GL_INDEX_ARRAY);
    glIndexPointer(GL_FLOAT, faces.size(), faces.data());
    glEnableVertexAttribArray(0);
    glDrawElements(GL_QUADS, points.size(), GL_UNSIGNED_INT, points.data());
    glDrawElements(GL_QUADS, faces.size(), GL_UNSIGNED_INT, faces.data());

    Read the article

  • How does the new Google Maps make buildings and cityscapes 3D?

    - by Aerovistae
    Anyone who's seen the new Google Maps has no doubt taken note of the incredible amount of three-dimensional detail in select American cities such as Boston, New York, Chicago, and San Francisco. They've even modeled the trees, bridges and some of the boats in the harbor! Minor architectural details are present. It's crazy. Looking at it up close, I've found there's a rectangular area around each of those cities, and anything within it is 3Dified, but the effect cuts off hard and fast at the boundary, even if that's in the middle of a building; the edge of the rectangle is where the 3D stops. This leads me to think it's being done algorithmically (which would make sense, given the scale of the project and how many trees, buildings and details there are), and yet I can't imagine how that's possible. How could an algorithm model all these things without extensive data on their shapes and contours? How could it model the individual wires of a bridge, or the statues in a park? It must be done by hand - and yet how could it be, for so much detail? Does anyone have any insight on this?

    Read the article

  • 3D transformations in WPF & DirectX/Direct3D or OpenGL

    - by user2723417
    I need your help with 3D transformations. I have a sphere and I want to deform it on a mouse click or mouse move: to make a furrow in, or bite a piece off, the sphere without any breaks in the 3D material. This is possible in WPF, but if there are more than 25,000 3D points it causes freezes in dynamic mode (the animation stutters), because the MeshGeometry3D object has to be reconstructed every time to prevent breaks in the 3D material. Can you advise on tools for this task? Maybe it can be done with the help of DirectX/Direct3D or OpenGL? I am a newcomer to these APIs, but I would like to study them. I need to integrate the transformation process into a WPF application.
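
    Before leaving WPF entirely, one known mitigation may be worth trying: detach the Positions collection from the MeshGeometry3D before a bulk edit and reattach it afterwards, so the mesh is re-tessellated once rather than once per vertex change. A minimal sketch; the spherical falloff region here is purely illustrative:

    using System.Windows.Media.Media3D;

    static class MeshDeformer
    {
        public static void Deform(MeshGeometry3D mesh, Point3D center,
                                  double radius, Vector3D push)
        {
            Point3DCollection positions = mesh.Positions;
            mesh.Positions = null;                // detach: no per-change notifications

            for (int i = 0; i < positions.Count; i++)
            {
                Point3D p = positions[i];
                if ((p - center).Length < radius) // simple spherical falloff region
                    positions[i] = p + push;
            }

            mesh.Positions = positions;           // reattach: one re-tessellation pass
        }
    }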

    Read the article

  • This Week in Geek History: Zelda Turns 25, Birth of the Printing Press, and the Unveiling of ENIAC

    - by Jason Fitzpatrick
    Every week we bring you interesting highlights from the history of geekdom. This week we take a look at The Legend of Zelda’s 25th anniversary, the Gutenberg press, and the unveiling of the primitive supercomputer ENIAC.

    Read the article

  • What is the status of 3D technology for TVs? [closed]

    - by Eduardo Molteni
    Maybe it is off-topic for Programmers.SE, but I don't know where else to ask, and I trust my fellow programmers :) I've recently been loosely following new 3D technologies, and I thought that glasses-free 3D TV was about to take over the scene, since it makes much more sense for TVs (I can't imagine having glasses for the whole family, kids breaking them, etc.). But it seems that I'm wrong, since LG, Sony and Samsung keep fighting over glasses (active vs. passive). What is the current and future status of 3D technology for TVs?

    Read the article

  • Calculating 3D camera positions from a video

    - by Geotarget
    I need to calculate the 3D camera position and rotation for each frame in a given video. This is typically used for motion-tracking, and to insert 3D objects into a video. I'm currently using VideoTrace to calculate this for me, and I'm getting the data exported as a 3DS Maxscript file. However when I try to use the 3D camera rotations, I'm getting strange errors in my 3D calculations, as if there is an error with the 3x3 rotation matrices. Can you spot any error with the data itself? Or is it my other calculations that are erroneous?

    frame 1
        rotation = (matrix3 [-0.011938, 0.756018, -0.654442] [-0.382040, -0.608284, -0.695727] [-0.924068, 0.241718, 0.296091] [0, 0, 0]).rotationpart
        position = [-0.767177, 0.308723, -0.232722]
        fov = 57.352135

    frame 2
        rotation = (matrix3 [-0.460922, -0.726580, -0.509541] [-0.200163, 0.644491, -0.737947] [0.864572, -0.238145, -0.442495] [0, 0, 0]).rotationpart
        position = [-0.856630, 0.198654, -0.243853]
        fov = 57.352135
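
    One sanity check that can be run on such an export: a valid rotation matrix must be orthonormal (each row of unit length, distinct rows dotting to 0) with determinant +1; a determinant of -1 means a reflection, i.e. a handedness flip, a common source of exactly this kind of error when moving camera data between packages. A minimal sketch using frame 1's rows (row-major layout assumed):

    using System;

    class RotationCheck
    {
        static void Main()
        {
            // Frame 1's matrix3 rows from the export (row-major).
            double[,] r =
            {
                { -0.011938,  0.756018, -0.654442 },
                { -0.382040, -0.608284, -0.695727 },
                { -0.924068,  0.241718,  0.296091 },
            };

            // R * R^T should be the identity: 1s on the diagonal, 0s elsewhere.
            for (int i = 0; i < 3; i++)
                for (int j = 0; j < 3; j++)
                {
                    double dot = 0;
                    for (int k = 0; k < 3; k++)
                        dot += r[i, k] * r[j, k];
                    Console.WriteLine(string.Format("row{0} . row{1} = {2:F6}", i, j, dot));
                }

            // The determinant should be +1; -1 indicates a handedness flip.
            double det = r[0, 0] * (r[1, 1] * r[2, 2] - r[1, 2] * r[2, 1])
                       - r[0, 1] * (r[1, 0] * r[2, 2] - r[1, 2] * r[2, 0])
                       + r[0, 2] * (r[1, 0] * r[2, 1] - r[1, 1] * r[2, 0]);
            Console.WriteLine(string.Format("det = {0:F6}", det));
        }
    }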

    Read the article

  • Java Applet or Unity3D for Cross-Platform 3D Surveying App

    - by Jake M
    Do you think a Java applet or a Unity3D application is the best option to make a cross-browser 3D web app? I intend to make a web application that displays 3D environments that can be navigated by dragging (with a finger or mouse, depending on the platform). The web app will render 3D environments of development sites, including contours, water pipeline locations, buildings, etc. The application must work on Windows desktop, Android, iOS and Windows Phone, which is why I am tending towards a web app as opposed to a cross-platform smartphone library (like MoSync or Marmalade). The 3D environments will be navigable (by dragging around) and contain simple (not detailed) 3D objects like buildings, mountains, pipelines, etc. One thing I know is that WebGL is out, because it doesn't work on IE and has limited support on smartphones (am I correct to completely disregard WebGL?). Will future smartphone browsers continue to support Java applets? Also, is it really true I can write ONE application/game in Unity3D and simply compile it to run on Windows, Mac, Xbox 360, PlayStation 3, Wii, iPad, iPhone and Android? Would you suggest the Unity3D application path or the Unity3D Web Player path? Concerning Unity3D, there's one thing I am unsure about: do all Unity3D features work on iOS and Android?

    Read the article

  • 3D Vector "End Point" Calculation for procedural Vector Graphics

    - by FrostFlame64
    Alright, so I need some help with some vector math. I've been developing some game engines that use procedural fractal generation for graphics, such as Lindenmayer systems for generating trees and plants. L-systems are drawn using turtle graphics, which is a form of vector graphics. I first created a system to draw in 2D, which works perfectly fine. But now I want to make a 3D equivalent, and I've run into an issue. For my 2D version, I created a method for quickly determining the "end point" of a vector-like movement. Given a starting point (X, Y), a direction (between 0 and 360 degrees), and a distance, the end point is calculated by these formulas:

    newX = startX + distance * Sin((PI * direction) / 180)
    newY = startY + distance * Cos((PI * direction) / 180)

    Now I need something similar for performing this calculation in 3D, but I haven't been able to Google anything that could show me how to do it. I'm flexible enough to provide whatever information is needed for this calculation, in any reasonable form (Vector3, quaternion, etc.). To summarize: given a starting point/vector position in 3D space (X, Y, Z), a direction in 3D space (Vector3, quaternion, etc.), and a distance, I need to find the "end point" in 3D space. Thank you for your time and help.
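
    A minimal sketch of one common 3D analogue, using spherical coordinates: keep the 2D compass direction as a yaw angle and add a pitch (elevation) angle. With pitch = 0 this reduces exactly to the 2D formulas above. The yaw/pitch convention is only one of several possible ones; if the direction arrives as a vector instead, end = start + distance * normalize(direction) does the same job:

    using System;

    static class Turtle3D
    {
        // Returns the end point as { x, y, z }.
        public static double[] EndPoint(double x, double y, double z,
            double yawDeg, double pitchDeg, double distance)
        {
            double yaw = Math.PI * yawDeg / 180.0;
            double pitch = Math.PI * pitchDeg / 180.0;

            // The horizontal component shrinks by cos(pitch) as the line
            // tilts upward; the vertical component is sin(pitch).
            double horizontal = distance * Math.Cos(pitch);
            return new double[]
            {
                x + horizontal * Math.Sin(yaw),
                y + horizontal * Math.Cos(yaw),
                z + distance * Math.Sin(pitch)
            };
        }
    }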

    Read the article

  • Efficiently rendering to 3D texture

    - by TravisG
    I have an existing depth texture and some other color textures, and want to process the information in them by rendering to a 3D texture (based on the depth contained in the depth texture, i.e. a point at (x, y) in the depth texture will be rendered to (x, y, texture(depth, uv)) in the 3D texture). Simply doing one manual draw call for each slice of the 3D texture (via glFramebufferTextureLayer) is terribly slow, since I don't know beforehand to which slice of the 3D texture a given texel from one of the color textures or the depth texture belongs. This means the entire process is effectively:

    for each slice
        for each texel in depth texture
            process color textures and render to slice

    So I have to sample the depth texture completely per slice, and I also have to go through the processing (at least until the discard;) for all texels in it. It would be much faster if I could rearrange the process to:

    for each texel in depth texture
        figure out what slice it should end up in
        process color textures and render to slice

    Is this possible? If so, how? What I'm actually trying to do: the color textures contain lighting information (as seen from the light's view; it's a reflective shadow map). I want to accumulate that information in the 3D texture and then later use it to light the scene. More specifically, I'm trying to implement Crytek's Light Propagation Volumes algorithm.

    Read the article

  • How can I set 'Print to File' as my default printing option?

    - by edm
    At the moment, when I print, my DeskJet 3050 is selected as the default printer. I would like 'Print to File' to be the default 'printer', without using cups-pdf. I specifically do not want to use cups-pdf because of the way it renders text (see below). I am not entirely sure what it is doing, but it seems as though it renders the text as bitmaps and embeds them in the PDF, since I am not able to highlight/copy/search the embedded text the way I can with a standard 'Print to File' PDF. N.B. this is not a dupe of: Can I make PDF the default for 'print to file'

    Read the article

  • Creating a 2D perspective in 3D game

    - by Accatyyc
    I'm new to XNA and 3D game development in general. I'm creating a puzzle game kind of similar to Tetris, built with blocks. I decided to build the game in 3D since I can do some cool animations and transitions when using 3D blocks with physics etc. However, I really do want the game to look "2D". My blocks are made up of 3D models, but I don't want that to be visible when they're not animating. I have followed some XNA tutorials and set up my scene like this:

    this.view = Matrix.CreateLookAt(cameraPosition, Vector3.Zero, Vector3.Up);
    this.aspectRatio = graphics.GraphicsDevice.Viewport.AspectRatio;
    this.projection = Matrix.CreatePerspectiveFieldOfView(
        MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f);

    ... and it gives me a very 3D-ish look. For example, the blocks in the center of the screen look exactly how I want them, but closer to the edges of the screen I can see the rotation and sides of them. My guess is that I'm not after a perspective field of view, but any help on which field of view/settings to use to get a "flat" look when the blocks aren't rotated would be great!
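
    A minimal sketch of the usual fix: swap the perspective projection for an orthographic one, which keeps parallel edges parallel so blocks look identical anywhere on screen. The 20-unit view width is an assumption to be tuned to the scene:

    using Microsoft.Xna.Framework;

    static class FlatCamera
    {
        public static Matrix CreateFlatProjection(float aspectRatio)
        {
            const float viewWidth = 20f;            // world units visible across the screen
            return Matrix.CreateOrthographic(
                viewWidth, viewWidth / aspectRatio, // preserve the viewport's aspect ratio
                1.0f, 10000.0f);                    // same near/far planes as above
        }
    }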

    Read the article

  • Unity: Render 2D textures on a 3D object's face

    - by www.Sillitoy.com
    I am not familiar with 3D graphics, and I'd like to know the right way to render some 2D figures on different points of a wider face of a 3D object. My 3D object is just a cube representing a poker table. I have 2D PNGs for player placeholders, and I'd like to render these figures on the 3D object where needed. An alternative solution would be to render the whole face with one big picture containing all the placeholder figures; however, that would be a waste of memory and thus less efficient. What do you suggest?
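
    One common approach, sketched minimally below: rather than baking the placeholders into the table texture, spawn a small textured quad per seat just above the table's top face. The seat positions, material and offsets here are assumptions:

    using UnityEngine;

    public class PlaceholderSpawner : MonoBehaviour
    {
        public Texture2D placeholderTexture;  // the 2D PNG
        public Vector3[] seatPositions;       // points on the table's top face

        void Start()
        {
            foreach (Vector3 pos in seatPositions)
            {
                GameObject quad = GameObject.CreatePrimitive(PrimitiveType.Quad);
                quad.transform.position = pos + Vector3.up * 0.01f; // tiny lift to avoid z-fighting
                quad.transform.rotation = Quaternion.Euler(90f, 0f, 0f); // lie flat; flip to -90 if it faces away
                Material mat = new Material(Shader.Find("Unlit/Transparent"));
                mat.mainTexture = placeholderTexture;
                quad.GetComponent<Renderer>().material = mat;
            }
        }
    }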

    Read the article

  • 3D Graphics with XNA Game Studio 4.0 bug in light map?

    - by Eibis
    I'm following the tutorials in 3D Graphics with XNA Game Studio 4.0, and I came up with a horrible effect when I tried to implement the light map: http://i.stack.imgur.com/BUWvU.jpg. This effect shows up when I look towards the center of the house (and it moves with me). It has this shape because I'm using a sphere to represent the light; using other light shapes gives different results. I'm using a PrelightingRenderer class:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;
    using Dhpoware;
    using Microsoft.Xna.Framework.Content;

    namespace XNAFirstPersonCamera
    {
        public class PrelightingRenderer
        {
            // Normal, depth, and light map render targets
            RenderTarget2D depthTarg;
            RenderTarget2D normalTarg;
            RenderTarget2D lightTarg;

            // Depth/normal effect and light mapping effect
            Effect depthNormalEffect;
            Effect lightingEffect;

            // Point light (sphere) mesh
            Model lightMesh;

            // List of models, lights, and the camera
            public List<CModel> Models { get; set; }
            public List<PPPointLight> Lights { get; set; }
            public FirstPersonCamera Camera { get; set; }

            GraphicsDevice graphicsDevice;
            int viewWidth = 0, viewHeight = 0;

            public PrelightingRenderer(GraphicsDevice GraphicsDevice, ContentManager Content)
            {
                viewWidth = GraphicsDevice.Viewport.Width;
                viewHeight = GraphicsDevice.Viewport.Height;

                // Create the three render targets
                depthTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Single, DepthFormat.Depth24);
                normalTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24);
                lightTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24);

                // Load effects
                depthNormalEffect = Content.Load<Effect>(@"Effects\PPDepthNormal");
                lightingEffect = Content.Load<Effect>(@"Effects\PPLight");

                // Set effect parameters to light mapping effect
                lightingEffect.Parameters["viewportWidth"].SetValue(viewWidth);
                lightingEffect.Parameters["viewportHeight"].SetValue(viewHeight);

                // Load point light mesh and set light mapping effect to it
                lightMesh = Content.Load<Model>(@"Models\PPLightMesh");
                lightMesh.Meshes[0].MeshParts[0].Effect = lightingEffect;

                this.graphicsDevice = GraphicsDevice;
            }

            public void Draw()
            {
                drawDepthNormalMap();
                drawLightMap();
                prepareMainPass();
            }

            void drawDepthNormalMap()
            {
                // Set the render targets to 'slots' 1 and 2
                graphicsDevice.SetRenderTargets(normalTarg, depthTarg);

                // Clear the render target to 1 (infinite depth)
                graphicsDevice.Clear(Color.White);

                // Draw each model with the PPDepthNormal effect
                foreach (CModel model in Models)
                {
                    model.CacheEffects();
                    model.SetModelEffect(depthNormalEffect, false);
                    model.Draw(Camera.ViewMatrix, Camera.ProjectionMatrix, Camera.Position);
                    model.RestoreEffects();
                }

                // Un-set the render targets
                graphicsDevice.SetRenderTargets(null);
            }

            void drawLightMap()
            {
                // Set the depth and normal map info to the effect
                lightingEffect.Parameters["DepthTexture"].SetValue(depthTarg);
                lightingEffect.Parameters["NormalTexture"].SetValue(normalTarg);

                // Calculate the view * projection matrix
                Matrix viewProjection = Camera.ViewMatrix * Camera.ProjectionMatrix;

                // Set the inverse of the view * projection matrix to the effect
                Matrix invViewProjection = Matrix.Invert(viewProjection);
                lightingEffect.Parameters["InvViewProjection"].SetValue(invViewProjection);

                // Set the render target to the graphics device
                graphicsDevice.SetRenderTarget(lightTarg);

                // Clear the render target to black (no light)
                graphicsDevice.Clear(Color.Black);

                // Set render states to additive (lights will add their influences)
                graphicsDevice.BlendState = BlendState.Additive;
                graphicsDevice.DepthStencilState = DepthStencilState.None;

                foreach (PPPointLight light in Lights)
                {
                    // Set the light's parameters to the effect
                    light.SetEffectParameters(lightingEffect);

                    // Calculate the world * view * projection matrix and set it to the effect
                    Matrix wvp = (Matrix.CreateScale(light.Attenuation) * Matrix.CreateTranslation(light.Position)) * viewProjection;
                    lightingEffect.Parameters["WorldViewProjection"].SetValue(wvp);

                    // Determine the distance between the light and camera
                    float dist = Vector3.Distance(Camera.Position, light.Position);

                    // If the camera is inside the light-sphere, invert the cull mode
                    // to draw the inside of the sphere instead of the outside
                    if (dist < light.Attenuation)
                        graphicsDevice.RasterizerState = RasterizerState.CullClockwise;

                    // Draw the point-light-sphere
                    lightMesh.Meshes[0].Draw();

                    // Revert the cull mode
                    graphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;
                }

                // Revert the blending and depth render states
                graphicsDevice.BlendState = BlendState.Opaque;
                graphicsDevice.DepthStencilState = DepthStencilState.Default;

                // Un-set the render target
                graphicsDevice.SetRenderTarget(null);
            }

            void prepareMainPass()
            {
                foreach (CModel model in Models)
                    foreach (ModelMesh mesh in model.Model.Meshes)
                        foreach (ModelMeshPart part in mesh.MeshParts)
                        {
                            // Set the light map and viewport parameters to each model's effect
                            if (part.Effect.Parameters["LightTexture"] != null)
                                part.Effect.Parameters["LightTexture"].SetValue(lightTarg);
                            if (part.Effect.Parameters["viewportWidth"] != null)
                                part.Effect.Parameters["viewportWidth"].SetValue(viewWidth);
                            if (part.Effect.Parameters["viewportHeight"] != null)
                                part.Effect.Parameters["viewportHeight"].SetValue(viewHeight);
                        }
            }
        }
    }

    It uses three effects. PPDepthNormal.fx:

    float4x4 World;
    float4x4 View;
    float4x4 Projection;

    struct VertexShaderInput
    {
        float4 Position : POSITION0;
        float3 Normal : NORMAL0;
    };

    struct VertexShaderOutput
    {
        float4 Position : POSITION0;
        float2 Depth : TEXCOORD0;
        float3 Normal : TEXCOORD1;
    };

    VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
    {
        VertexShaderOutput output;
        float4x4 viewProjection = mul(View, Projection);
        float4x4 worldViewProjection = mul(World, viewProjection);
        output.Position = mul(input.Position, worldViewProjection);
        output.Normal = mul(input.Normal, World);
        // Position's z and w components correspond to the distance
        // from camera and distance of the far plane respectively
        output.Depth.xy = output.Position.zw;
        return output;
    }

    // We render to two targets simultaneously, so we can't
    // simply return a float4 from the pixel shader
    struct PixelShaderOutput
    {
        float4 Normal : COLOR0;
        float4 Depth : COLOR1;
    };

    PixelShaderOutput PixelShaderFunction(VertexShaderOutput input)
    {
        PixelShaderOutput output;
        // Depth is stored as distance from camera / far plane distance
        // to get value between 0 and 1
        output.Depth = input.Depth.x / input.Depth.y;
        // Normal map simply stores X, Y and Z components of normal
        // shifted from (-1 to 1) range to (0 to 1) range
        output.Normal.xyz = (normalize(input.Normal).xyz / 2) + .5;
        // Other components must be initialized to compile
        output.Depth.a = 1;
        output.Normal.a = 1;
        return output;
    }

    technique Technique1
    {
        pass Pass1
        {
            VertexShader = compile vs_1_1 VertexShaderFunction();
            PixelShader = compile ps_2_0 PixelShaderFunction();
        }
    }

    PPLight.fx (the stripped sampler texture references have been restored, e.g. texture = <DepthTexture>;):

    float4x4 WorldViewProjection;
    float4x4 InvViewProjection;

    texture2D DepthTexture;
    texture2D NormalTexture;

    sampler2D depthSampler = sampler_state
    {
        texture = <DepthTexture>;
        minfilter = point;
        magfilter = point;
        mipfilter = point;
    };

    sampler2D normalSampler = sampler_state
    {
        texture = <NormalTexture>;
        minfilter = point;
        magfilter = point;
        mipfilter = point;
    };

    float3 LightColor;
    float3 LightPosition;
    float LightAttenuation;

    // Include shared functions
    #include "PPShared.vsi"

    struct VertexShaderInput
    {
        float4 Position : POSITION0;
    };

    struct VertexShaderOutput
    {
        float4 Position : POSITION0;
        float4 LightPosition : TEXCOORD0;
    };

    VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
    {
        VertexShaderOutput output;
        output.Position = mul(input.Position, WorldViewProjection);
        output.LightPosition = output.Position;
        return output;
    }

    float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
    {
        // Find the pixel coordinates of the input position in the depth
        // and normal textures
        float2 texCoord = postProjToScreen(input.LightPosition) + halfPixel();

        // Extract the depth for this pixel from the depth map
        float4 depth = tex2D(depthSampler, texCoord);

        // Recreate the position with the UV coordinates and depth value
        float4 position;
        position.x = texCoord.x * 2 - 1;
        position.y = (1 - texCoord.y) * 2 - 1;
        position.z = depth.r;
        position.w = 1.0f;

        // Transform position from screen space to world space
        position = mul(position, InvViewProjection);
        position.xyz /= position.w;

        // Extract the normal from the normal map and move from
        // 0 to 1 range to -1 to 1 range
        float4 normal = (tex2D(normalSampler, texCoord) - .5) * 2;

        // Perform the lighting calculations for a point light
        float3 lightDirection = normalize(LightPosition - position);
        float lighting = clamp(dot(normal, lightDirection), 0, 1);

        // Attenuate the light to simulate a point light
        float d = distance(LightPosition, position);
        float att = 1 - pow(d / LightAttenuation, 6);

        return float4(LightColor * lighting * att, 1);
    }

    technique Technique1
    {
        pass Pass1
        {
            VertexShader = compile vs_1_1 VertexShaderFunction();
            PixelShader = compile ps_2_0 PixelShaderFunction();
        }
    }

    PPShared.vsi has some common functions:

    float viewportWidth;
    float viewportHeight;

    // Calculate the 2D screen position of a 3D position
    float2 postProjToScreen(float4 position)
    {
        float2 screenPos = position.xy / position.w;
        return 0.5f * (float2(screenPos.x, -screenPos.y) + 1);
    }

    // Calculate the size of one half of a pixel, to convert
    // between texels and pixels
    float2 halfPixel()
    {
        return 0.5f / float2(viewportWidth, viewportHeight);
    }

    And finally, from the Game class, I set things up in LoadContent with (the stripped generic type arguments have been restored):

    effect = Content.Load<Effect>(@"Effects\PPModel");
    models[0] = new CModel(Content.Load<Model>(@"Models\teapot"),
        new Vector3(-50, 80, 0), new Vector3(0, 0, 0), 1f,
        Content.Load<Texture2D>(@"Textures\prova_texture_autocad"), GraphicsDevice);
    house = new CModel(Content.Load<Model>(@"Models\house"),
        new Vector3(0, 0, 0), new Vector3((float)-Math.PI / 2, 0, 0), 35.0f,
        Content.Load<Texture2D>(@"Textures\prova_texture_autocad"), GraphicsDevice);
    models[0].SetModelEffect(effect, true);
    house.SetModelEffect(effect, true);
    renderer = new PrelightingRenderer(GraphicsDevice, Content);
    renderer.Models = new List<CModel>();
    renderer.Models.Add(house);
    renderer.Models.Add(models[0]);
    renderer.Lights = new List<PPPointLight>()
    {
        new PPPointLight(new Vector3(0, 120, 0), Color.White * .85f, 2000)
    };

    where PPModel.fx is:

    float4x4 World;
    float4x4 View;
    float4x4 Projection;

    texture2D BasicTexture;
    sampler2D basicTextureSampler = sampler_state
    {
        texture = <BasicTexture>;
        addressU = wrap;
        addressV = wrap;
        minfilter = anisotropic;
        magfilter = anisotropic;
        mipfilter = linear;
    };

    bool TextureEnabled = true;

    texture2D LightTexture;
    sampler2D lightSampler = sampler_state
    {
        texture = <LightTexture>;
        minfilter = point;
        magfilter = point;
        mipfilter = point;
    };

    float3 AmbientColor = float3(0.15, 0.15, 0.15);
    float3 DiffuseColor;

    #include "PPShared.vsi"

    struct VertexShaderInput
    {
        float4 Position : POSITION0;
        float2 UV : TEXCOORD0;
    };

    struct VertexShaderOutput
    {
        float4 Position : POSITION0;
        float2 UV : TEXCOORD0;
        float4 PositionCopy : TEXCOORD1;
    };

    VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
    {
        VertexShaderOutput output;
        float4x4 worldViewProjection = mul(World, mul(View, Projection));
        output.Position = mul(input.Position, worldViewProjection);
        output.PositionCopy = output.Position;
        output.UV = input.UV;
        return output;
    }

    float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
    {
        // Sample model's texture
        float3 basicTexture = tex2D(basicTextureSampler, input.UV);
        if (!TextureEnabled)
            basicTexture = float4(1, 1, 1, 1);

        // Extract lighting value from light map
        float2 texCoord = postProjToScreen(input.PositionCopy) + halfPixel();
        float3 light = tex2D(lightSampler, texCoord);
        light += AmbientColor;

        return float4(basicTexture * DiffuseColor * light, 1);
    }

    technique Technique1
    {
        pass Pass1
        {
            VertexShader = compile vs_1_1 VertexShaderFunction();
            PixelShader = compile ps_2_0 PixelShaderFunction();
        }
    }

    I don't have any idea what's wrong... Googling the web, I found that this tutorial may have some bugs, but I don't know if the fault is in the light model (the sphere), in a shader, or in the PrelightingRenderer class. Any help is very appreciated, thank you for reading!

    Read the article

  • How Do I Import a 3D Object into Adobe Flex?

    - by Kim
    Hi everyone, I wanted to know if anyone here knows how to import a 3D object (e.g. a Maya 3D model) into an Adobe Flex application. I need to create a simple Flex application which will allow me to rotate the 3D object by dragging, but I cannot seem to get started because I'm having a hard time figuring out how to import my 3D model into Flex. This is exactly what I want to do: 3D Object in Flex. I hope someone can help me. Thanks a lot :)

    Read the article

  • Silverlight 4, Out of browser, Printing, Automatic updates

    - by minal
    I have a very critical business application presently running using WinForms. The application is a very thin UI shell: it accepts input data, calls a web service on my server to do the computation, displays the results in the WinForms app, and finally sends a print stream to the printer. Presently the application is deployed using ClickOnce. Moving forward, I am trying to decide whether I should move the application to Silverlight. A couple of reasons I am considering Silverlight: It gives clients the feel of a cloud-based solution, and it can be accessed from any PC. While the ClickOnce app is able to do this as well, users have to install an app, and when updates are available they have to click "Yes" to update. The application presently has a drop-down list of customers; this list has grown to over 3000 records, and scrolling through it is very painful. With Silverlight I am thinking of the autocomplete ability. Out of browser will be handy for those users who use the app daily. I haven't used Silverlight previously, hence I'm looking for some expert advice on a few things. Printing: does Silverlight allow sending raw print data to the printer? The application prints to a Zebra thermal label printer; I have to send raw bytes to the printer with the commands. Can this be done with Silverlight, or will it always prompt the "Print" dialog? Out of browser: when Silverlight apps are installed out of browser, how do updates come through? Does the app update automatically, or is the user prompted to opt in to the update?
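
    On the printing question, a minimal sketch of what Silverlight 4's printing API looks like: a PrintDocument raises PrintPage once per page and is handed a visual tree to rasterize. Note that Silverlight 4 always shows the system print dialog and cannot send raw bytes to a printer, so raw Zebra commands would have to go through a server-side service instead:

    using System.Windows;
    using System.Windows.Printing;

    public static class LabelPrinter
    {
        public static void PrintLabel(UIElement labelRoot)
        {
            PrintDocument doc = new PrintDocument();
            doc.PrintPage += (s, e) =>
            {
                e.PageVisual = labelRoot;  // visual tree to rasterize on this page
                e.HasMorePages = false;    // single-page job
            };
            doc.Print("Shipping label");   // the user still confirms in the dialog
        }
    }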

    Read the article

  • Printing an invisible NSView

    - by Rodger Wilson
    Initially I created a simple program with a custom NSView. I drew a picture (a certificate) and printed it. Beautiful! Everything worked great! I then moved my custom NSView to an existing application. My hope was that when a user hits print, it would print this certificate. Simple enough. I figured I could have an NSView pointer in my controller code, populate the pointer at initialization, and then when someone wanted to print the certificate it would print. The problem is that all of my drawing code is in the "drawRect" method. This method doesn't get called, because this view is never displayed in a window. I have heard that others use non-visible NSView objects just for printing. What do I need to do? I really don't want to show this view on the screen. Rodger

    Read the article

  • Formatting data for printing automatically

    - by 0bytes
    I have a requirement to retrieve data, format it to mimic an old request format we've used for years, and then send it via IP address to any of a number of printers. Gathering the data and selecting the printer is no problem; I need to format the output for the printer, and I'm just not sure what's best. The requirement is that the end users not have any interaction with the print request that's generated; only the intended recipients of the request job will know of the printed (and, in a future release, e-mailed) request. We also have a requirement, for a future update, to incorporate into this solution an option to change the config in the DB so that each recipient site can choose printing or e-mail notification, so I expect that I'll need to keep the control in a console app or web service. I have the data, and I can send it to the printers or generate an HTML-formatted e-mail easily; I just need to format it for the automatic printing. I don't think SSRS will help, because I don't know of a way to designate a printer when running a report. Thanks for all help & suggestions.
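
    For the "send it via IP address" part, most network printers accept a raw, pre-formatted job on the standard JetDirect/raw port 9100, which keeps the formatting entirely under the application's control. A minimal sketch; the address, port and payload are assumptions to adapt:

    using System.Net.Sockets;
    using System.Text;

    static class RawPrinter
    {
        public static void Send(string printerIp, string formattedRequest)
        {
            // Port 9100 is the de facto raw/JetDirect printing port.
            using (TcpClient client = new TcpClient(printerIp, 9100))
            using (NetworkStream stream = client.GetStream())
            {
                byte[] bytes = Encoding.ASCII.GetBytes(formattedRequest);
                stream.Write(bytes, 0, bytes.Length);  // fire-and-forget print job
            }
        }
    }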

    Read the article
