Search Results

Search found 18741 results on 750 pages for 'screen sharing'.


  • How to scroll and zoom in/out large images on iPhone?

    - by Horace Ho
    I have a large image, around 30000 (w) x 6000 (h) pixels; think of it as a big map. I assume I need to crop it up into smaller tiles. Questions: which are the right view controllers to use? (link) What is the tiling strategy? (I put this in another question, as it's not iPhone specific.) Requirements: the whole image (though tiled) can be scrolled up/down/left/right by swipes; zoom in (up to pixel-to-pixel) and out (down to screen-fit-by-height) with the two-finger pinch gesture; memory efficiency by lazy-loading tiles. Bonus requirement: automatic scrolling, say from left to right, slowly and smoothly. Thanks!
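
    Not part of the question, but a minimal sketch of the usual tiling approach on iOS: back the zoomable view with a CATiledLayer inside a UIScrollView, so only the visible tiles are loaded (lazy loading) and zoom levels come largely for free. The tile size and the tile_col_row.png naming scheme below are assumptions for illustration:

        // MapTileView — a UIView drawn by a CATiledLayer, added as the UIScrollView's zoomable subview
        #import <UIKit/UIKit.h>
        #import <QuartzCore/QuartzCore.h>

        @interface MapTileView : UIView
        @end

        @implementation MapTileView

        + (Class)layerClass {
            return [CATiledLayer class];      // the layer requests tiles instead of one huge bitmap
        }

        - (id)initWithFrame:(CGRect)frame {
            if ((self = [super initWithFrame:frame])) {
                CATiledLayer *tiled = (CATiledLayer *)self.layer;
                tiled.tileSize = CGSizeMake(256, 256);
                tiled.levelsOfDetail = 4;         // zoomed-out levels down towards screen-fit
                tiled.levelsOfDetailBias = 2;     // zoomed-in levels up to pixel-to-pixel
            }
            return self;
        }

        // Called once per visible tile; load only that tile's image from disk
        - (void)drawRect:(CGRect)rect {
            int col = (int)(rect.origin.x / 256.0);
            int row = (int)(rect.origin.y / 256.0);
            NSString *name = [NSString stringWithFormat:@"tile_%d_%d.png", col, row];  // hypothetical naming
            UIImage *tile = [UIImage imageNamed:name];   // a real app would also account for the zoom scale
            [tile drawInRect:rect];
        }

        @end

    The scroll view then gets minimumZoomScale/maximumZoomScale set and returns this view from viewForZoomingInScrollView:, which covers the swipe-to-scroll and pinch-to-zoom requirements; the automatic left-to-right scroll can be an animated change of the scroll view's contentOffset.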

    Read the article

  • HTC Hero Mouse Ball click not working on Custom ListView?

    - by UMMA
    Dear friends, I have created a custom ListView using the following adapter class: class EfficientAdapter extends BaseAdapter { private LayoutInflater mInflater; private Context context; public EfficientAdapter(Context context) { mInflater = LayoutInflater.from(context); this.context = context; } public View getView(final int position, View convertView, ViewGroup parent) { ViewHolder holder; convertView = mInflater.inflate(R.layout.adaptor_content, null); convertView.setOnClickListener(new OnClickListener() { @Override public void onClick(View v) { } }); return convertView; } // ... and the other necessary methods } When I tap a list item on the touch screen, the item's OnClickListener is called. But when I use the mouse ball / trackball (the phone hardware) to click on a list item, the OnClickListener is not called. Can anyone tell me whether this is a phone bug or my fault? Any help would be appreciated.
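
    Not an answer from the post, but a common workaround worth sketching: a trackball click is delivered to the ListView's item-click handling rather than to the row view's own OnClickListener, so handling clicks at the list level covers both input methods (the listView field name below is assumed):

        // import android.view.View; import android.widget.AdapterView;
        listView.setOnItemClickListener(new AdapterView.OnItemClickListener() {
            @Override
            public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
                // Do here whatever the row's onClick() did; this fires for touch and trackball clicks alike
            }
        });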

    Read the article

  • Consuming all touchEvents not preventing scrolling on the blackberry storm

    - by Some guy with a headache
    Hello, I am trying to make a custom control for the BlackBerry Storm using SDK v5.0. This control needs to disable scrolling while the user is dragging elements within a field. The problem is that even if my control consumes every single touch event sent to it, when the user lifts their finger off the screen the view still flings up or down as if it were finishing a scroll action. Does anyone know of a way to prevent this from happening, or what I might be doing wrong? Thank you.

    Read the article

  • Update image in ASP.NET page from memory without refreshing the page.

    - by ZeeMan
    I have a command-line C# server and an ASP.NET client that communicate via .NET Remoting. The server sends the client still images grabbed from a webcam at roughly 2 frames per second. I can pass the images down to the client via a remote method call, and each image ends up in this client method: public void Update(System.Drawing.Image currentFrame) { // I need to display currentFrame on my page. } How do I display the image on the page without saving it to disk? The Update method fires at least twice a second; if the client were a WinForms app I could simply set picturebox.Image = currentFrame and the frame on screen would update automatically. How do I achieve the same effect with ASP.NET? Does the whole page need to be reloaded?
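
    Not part of the question, but one common pattern, sketched here with assumed class and URL names: keep the latest frame in server memory, expose it through an HTTP handler that writes the raw bytes, and have the page swap the <img> element's src on a timer so only the image is re-fetched, not the page.

        // C# sketch: a handler that serves whatever frame Update() last stored in a static holder
        using System.Drawing;
        using System.Drawing.Imaging;
        using System.Web;

        public static class FrameStore
        {
            public static volatile Image Current;   // assign this from your Update() callback
        }

        public class LiveFrameHandler : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                Image frame = FrameStore.Current;
                if (frame == null) { context.Response.StatusCode = 404; return; }
                context.Response.ContentType = "image/jpeg";
                // GDI+ images are not thread-safe; a real implementation would copy or lock the frame
                frame.Save(context.Response.OutputStream, ImageFormat.Jpeg);
            }

            public bool IsReusable { get { return true; } }
        }

    Registered at, say, LiveFrame.ashx, the page can poll it with something like setInterval(function () { img.src = 'LiveFrame.ashx?t=' + new Date().getTime(); }, 500); so no postback or page reload is needed.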

    Read the article

  • iphone: custom UITableViewCell taller than the default cell view.

    - by Luc
    Hello, I'm creating an application that consists of a table view with 5 rows: the 1st one containing a graph, and the 2nd to 5th rows containing some data with the same formatting. I have created 2 classes: GraphCustomViewCell and DataCustomViewCell. Depending upon the position of the cell I load the correct custom cell with: NSArray *topLevelObjects = [[NSBundle mainBundle] loadNibNamed:@"GraphCustomViewCell" owner:nil options:nil]; for(id currentObject in topLevelObjects) { if([currentObject isKindOfClass:[GraphCustomViewCell class]]) { cell = (GraphCustomViewCell *)currentObject; break; } } That works fine, except that the first row, the one corresponding to the graph, is bigger (in height) than the 4 other rows, and as a result it hides the first 3 of the other rows. Is there any option in the table view that lets a custom cell be taller than the default cell height? I'd like the 5 rows (graph + 4 data rows) to fit the entire screen (480 minus the tab bar's height). Thanks a lot for your help. Regards, Luc
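
    Not from the post, but a short sketch of the usual hook: the table view asks its delegate for each row's height through tableView:heightForRowAtIndexPath:, so the graph row can report a larger height than the data rows (the 200-point figure below is a placeholder):

        // In the UITableViewDelegate (often the same view controller)
        - (CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath {
            if (indexPath.row == 0) {
                return 200.0f;                                            // taller graph cell
            }
            return (tableView.bounds.size.height - 200.0f) / 4.0f;        // split the rest among the 4 data rows
        }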

    Read the article

  • Uploading a picture to an album using the Graph API

    - by kielie
    Hi guys, I am trying to upload an image to an album, but it's not working. Here is the code I am using: $uid = $facebook->getUser(); $args = array('message' => $uid); $file_path = "http://www.site.com/path/to/file.jpg"; $album_id = '1234'; $args['name'] = '@' . realpath($file_path); $data = $facebook->api('/'. $album_id . '/photos', 'post', $args); print_r($data); This code is in a function.php file that gets called when a user clicks a button inside a Flash file embedded on my canvas. Basically, when the Flash takes a screenshot and passes the variable "image" to the function, it should upload $_GET['image'] to the album. How could I go about doing this? Thanks in advance!
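
    Not from the question — a hedged sketch of the two routes that usually work with the old official PHP SDK. Note that realpath() on an http:// URL returns false, so the original name argument never points at an actual file. The message text and paths below are placeholders:

        // Option 1: let Facebook fetch the image itself from a publicly reachable URL
        $args = array(
            'message' => 'Screenshot from my app',
            'url'     => 'http://www.site.com/path/to/file.jpg',
        );
        $data = $facebook->api('/' . $album_id . '/photos', 'post', $args);

        // Option 2: upload a file that exists on this server's disk
        $facebook->setFileUploadSupport(true);                 // required by the old PHP SDK for @file uploads
        $args = array(
            'message' => 'Screenshot from my app',
            'source'  => '@' . realpath('/local/path/to/file.jpg'),   // must be a local path, not a URL
        );
        $data = $facebook->api('/' . $album_id . '/photos', 'post', $args);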

    Read the article

  • Show javascript execution progress

    - by Midhat
    I have some JavaScript functions that take about 1 to 3 seconds (some loops or mooML templating code). During this time, the browser is just frozen. I tried showing a "loading" animation (a gif image) before starting the operation and hiding it afterwards, but it just doesn't work: the browser freezes before it can render the image and hides it immediately when the function ends. Is there anything I can do to tell the browser to update the screen before going into JavaScript execution, something like Application.DoEvents or background worker threads? So, any comments/suggestions about how to show JavaScript execution progress? My primary target browser is IE6, but it should also work on all the latest browsers.
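
    Not from the question, but the standard workaround, sketched with hypothetical showLoading/hideLoading helpers: defer the heavy work with setTimeout so the browser gets a chance to paint the loading gif first, and break long loops into chunks so the screen (and a progress indicator) can be updated between them. This works in IE6 as well as current browsers:

        // items: the array to process; processItem: the per-item work (both supplied by the caller)
        function runWithProgress(items, processItem, onDone) {
            showLoading();                                   // hypothetical: show the gif / progress bar
            var i = 0;
            function chunk() {
                var stop = Math.min(i + 50, items.length);   // 50 items per slice; tune to taste
                for (; i < stop; i++) {
                    processItem(items[i]);
                }
                if (i < items.length) {
                    setTimeout(chunk, 0);                    // yield so the browser can repaint
                } else {
                    hideLoading();                           // hypothetical
                    if (onDone) { onDone(); }
                }
            }
            setTimeout(chunk, 0);                            // let the gif render before the first slice runs
        }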

    Read the article

  • QScrollBar Snap to value

    - by j3frea
    Hi all, I want to implement a scroll bar that snaps to particular values (like windows can snap to the edge of a screen). The idea is that as I drag the scrollbar down, it snaps the bar to values as it approaches them. My scenario is displaying 3 chapters of text, and I would like to be able to snap to the beginning or end of a chapter. Of course, to go to the start of the first chapter one can just scroll to the top, and likewise for the end of the third chapter. So I'd like to draw two lines on the scrollbar to represent the start of the second and third chapters, and then have the top and bottom of the scrollbar snap to those lines. I actually want to use this within a QTextBrowser, but I could control a QTextBrowser with a "QSnapScrollBar"; I just don't really know where to start. Any help would be greatly appreciated.
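
    Not from the question — one possible starting point, sketched under the assumption that snapping when the user releases the slider is acceptable: watch QAbstractSlider::sliderReleased() and, if the value ends up near a chapter boundary, pull it onto that boundary with setValue(). The snap positions and threshold below are placeholders:

        // C++/Qt sketch (Qt 5 connect syntax)
        #include <QScrollBar>
        #include <QVector>
        #include <cmath>

        void installSnap(QScrollBar *bar, const QVector<int> &snapValues, int threshold = 20)
        {
            QObject::connect(bar, &QScrollBar::sliderReleased, [bar, snapValues, threshold]() {
                const int value = bar->value();
                for (int snap : snapValues) {
                    if (std::abs(value - snap) <= threshold) {
                        bar->setValue(snap);        // snap onto the chapter boundary
                        break;
                    }
                }
            });
        }

        // Usage, with made-up chapter offsets:
        // installSnap(textBrowser->verticalScrollBar(), QVector<int>() << 0 << 1200 << 2400);

    Drawing the two marker lines would then be a matter of subclassing QScrollBar and overriding paintEvent().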

    Read the article

  • Feedback on meeting of the Linux User Group of Mauritius

    Once upon a time in a country far far away... Okay, actually it's not that bad, but it has been a while since the last meeting of the Linux User Group of Mauritius (LUGM). There have been plans in the past but they never really happened. Finally, Selven took the opportunity and organised a new meetup with low administrative overhead, proper scheduling on alternative dates and a small attendees' survey on the preferred option. All the pre-work was nicely executed. At first I wasn't sure whether it would be possible to attend. Luckily I got some additional information, like that children could come too, and I was sold on this community gathering. According to other long-term members of the LUGM it was the first time 'ever' that a gathering was organised outside of Quatre Bornes, and I have to admit it was great!

    LUGM - user group meeting on the 15.06.2013 in L'Escalier

    Quick overview of Linux & the LUGM

    With a little bit of delay the LUGM meeting officially started with a quick overview of and introduction to Linux presented by Avinash. During the session he told the audience that there had been quite some activity around the island some years ago, but unfortunately it had been quiet in recent times. Of course, we also spoke about the acknowledged world dominance of Linux - thanks to Android - and the interesting possibilities for countries like Mauritius. It is known that a couple of public institutions have their back-end infrastructure running on Red Hat Linux systems, but the presence on the desktop is still very low. Users are simply hanging on to Windows XP and older versions of Microsoft Office. Following the introduction of the LUGM, Ajay joined the session and it quickly turned into a panel discussion with lots of interesting questions and answers, sharing of first-hand experience either on the job or in private use of Linux, and a couple of ideas about how the LUGM could promote Linux a bit more in Mauritius. It was great to get an insight into other attendees' opinions and activities, especially considering that I've been using Linux since around 1996/97. Frankly speaking, I bought a SuSE 4.x distribution back in those days because I couldn't achieve certain tasks on Windows NT 4.0 without spending a fortune.

    OpenELEC Mediacenter

    Next, Selven gave us a decent introduction to OpenELEC: Open Embedded Linux Entertainment Center (OpenELEC) is a small Linux distribution built from scratch as a platform to turn your computer into an XBMC media center. OpenELEC is designed to make your system boot fast, and the install is so easy that anyone can turn a blank PC into a media machine in less than 15 minutes. I didn't know about it until this presentation. In the past, I was mainly attached to Video Disk Recorder (VDR), as it allows the use of satellite receiver cards very easily. Hm, somehow I'm still missing my precious HTPC that I had to leave back in Germany years ago. It was a great piece of hardware and software: a self-built PC in a standard HiFi-sized (43cm) black desktop casing with 2 full-featured Hauppauge DVB-S cards, an old-fashioned Voodoo graphics card, a WiFi card, a Pioneer slot-in DVD drive, and fully remote controlled via infra-red thanks to Debian, VDR and LIRC. With an EPG, scheduled recordings and general multimedia centre features it offered all the necessary comfort in the living room, besides a Nintendo game console; actually a GameCube at that time... But I have to admit that putting OpenELEC on a Raspberry Pi would be a cool DIY project in the near future.

    LUGM - our next generation of Linux users (15.06.2013)

    Project Evil Genius (PEG)

    Don't be scared of the paragraph header. Ish gave us a cool explanation of why he named it PEG - Project Evil Genius; it's because of the time of day when he was scripting down his ideas to be able to build, package and provide software applications to various Linux distributions. The main influence came from openSuSE, but the platform didn't cater for his needs and ideas, so he started to work out something on his own. During his passionate session he also talked about the amazing experiences he has had thanks to other Linux users from all over the world. Ish promised to put his script on GitHub during the next couple of days... Looking forward to that. Check out Ish's personal blog over at hacklog.in. Highly recommended reading. Why India? Simply because the annual registration fee for an Indian domain is approximately 20 times less than for a Mauritian domain (.mu).

    Exploring the beach of L'Escalier after the meeting

    'After-party' at the beach of L'Escalier

    Phew, after such interesting sessions, ideas around Linux and good conversation during the breaks and over lunch, it was time for a little break-out. Selven suggested that we should all head down to the beach of L'Escalier and get some impressions of nature down here in the south of the island. Talking about 'beach' ;-) - it's absolutely not comparable to the white-sanded ones here in Flic en Flac... There are no lagoons down at the south coast of Mauritius, and watching the breaking waves is a different experience and joy after all. Unfortunately, I was a little bit worried about the thoughtless littering at such a remote location. You have to drive on natural paths through the sugar cane fields, and I was really shocked by the amount of rubbish lying around almost everywhere. Sad, really sad, and it concurs with Yasir's recent article on the same topic.

    Resumé & outlook

    It was a great event. I met new people, had some good conversations, and even my children enjoyed themselves the whole day. The location was well chosen, with enough space for everyone, parking spaces and even a playground for the children. Also, a big "Thank You" to Selven and his helpers for the organisation and the preparation of lunch. I'm kind of sure that this was an exceptional meeting of the LUGM, and I'm really looking forward to the next gathering of Linux geeks. Hopefully soon. All images are courtesy of Avinash Meetoo. More pictures are available on Flickr.

    Read the article

  • What would make offsetParent null?

    - by Brian Ramsay
    I am trying to do positioning in JavaScript. I am using a cumulative position function based on the classic quirksmode function that sums offsetTop and offsetLeft for each offsetParent until the top node. However, I am running into an issue where the element I'm interested in has no offsetParent in Firefox. In IE offsetParent exists, but offsetTop and offsetLeft all sum up to 0, so it has the same problem in effect as in Firefox. What would cause an element that is clearly visible and usable on the screen to not have an offsetParent? Or, more practically, how can I find the position of this element in order to place a drop-down beneath it?
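
    Not part of the question, but a couple of grounded notes plus a sketch: offsetParent is typically null while the element (or an ancestor) is display:none, for position:fixed elements, and before the element is attached to the document. A way to avoid walking offsetParent entirely is getBoundingClientRect, which old IE also supports:

        // Position of an element relative to the document, independent of offsetParent
        function getDocumentPosition(el) {
            var rect = el.getBoundingClientRect();
            var scrollLeft = window.pageXOffset || document.documentElement.scrollLeft || document.body.scrollLeft;
            var scrollTop  = window.pageYOffset || document.documentElement.scrollTop  || document.body.scrollTop;
            return { left: rect.left + scrollLeft, top: rect.top + scrollTop };
        }

        // Placing a drop-down beneath the element (names are illustrative):
        // var pos = getDocumentPosition(trigger);
        // dropdown.style.left = pos.left + 'px';
        // dropdown.style.top  = (pos.top + trigger.offsetHeight) + 'px';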

    Read the article

  • UILabel: Using the userInteractionEnabled property on a label

    - by Kevin Bomberry
    Hello again. I am wondering if anyone has used the userInteractionEnabled property on a UILabel to allow the label to act like a button (or just to fire off a method). Any help would be greatly appreciated. Cheers! Update (4/30/09 @1:07pm) Clarification: I have a standard info button and next to it I want to place a label with the text "settings", and I would like the label to function like the button (which flips over to a settings screen). So, basically I need to tie the already defined showSettinsView to the "infoLabel" label: a user taps the infoButton or the infoLabel and the method fires. The infoButton is already working and uses an IBAction to trigger the method. I would like to know how to wire up the label to invoke the same method. That is all. Cheers!
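
    Not from the post — a hedged sketch of one way to wire this up. UITapGestureRecognizer needs iOS 3.2 or later; on the SDKs around this question's date the usual trick was instead a custom UIButton with a clear background placed over the label. The selector call below assumes showSettinsView takes a sender argument; adjust to the actual signature:

        // In the view controller's setup code (e.g. viewDidLoad)
        infoLabel.userInteractionEnabled = YES;   // UILabel has this disabled by default

        UITapGestureRecognizer *tap =
            [[UITapGestureRecognizer alloc] initWithTarget:self
                                                    action:@selector(labelTapped:)];
        [infoLabel addGestureRecognizer:tap];
        [tap release];                            // omit under ARC

        // Elsewhere in the same view controller
        - (void)labelTapped:(UITapGestureRecognizer *)recognizer {
            [self showSettinsView:nil];           // the method the info button's IBAction already triggers
        }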

    Read the article

  • pyplot: really slow creating heatmaps

    - by cvondrick
    I have a loop that executes its body about 200 times. In each iteration it does a sophisticated calculation, and then, for debugging, I want to produce a heatmap of an NxM matrix. But generating this heatmap is unbearably slow and significantly slows down an already slow algorithm. My code is along these lines:

        import numpy
        import matplotlib.pyplot as plt

        for i in range(200):
            matrix = complex_calculation()
            plt.set_cmap("gray")
            plt.imshow(matrix)
            plt.savefig("frame{0}.png".format(i))

    The matrix, from numpy, is not huge --- 300 x 600 doubles. Even if I do not save the figure and instead update an on-screen plot, it's even slower. Surely I must be abusing pyplot. (Matlab can do this, no problem.) How do I speed this up?
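
    Not from the question, but the usual fix, sketched with the poster's complex_calculation left as-is: create the figure and the image artist once, then update the data in place with set_data and save, instead of building a new figure on every iteration:

        import matplotlib
        matplotlib.use("Agg")                  # non-interactive backend: no window, faster savefig
        import matplotlib.pyplot as plt

        fig, ax = plt.subplots()
        matrix = complex_calculation()         # poster's function, assumed to return a 300x600 array
        im = ax.imshow(matrix, cmap="gray")
        fig.savefig("frame0.png")

        for i in range(1, 200):
            matrix = complex_calculation()
            im.set_data(matrix)                # reuse the existing image instead of calling imshow again
            # im.set_clim(matrix.min(), matrix.max())  # only needed if the value range changes per frame
            fig.savefig("frame{0}.png".format(i))

        plt.close(fig)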

    Read the article

  • How to get the size of a NSString

    - by camilo
    Hi. A "quicky": how can I get the size (width) of a NSString? I'm trying to see if the string width of a string to see if it is bigger than a given width of screen, case in which I have to "crop" it and append it with "...", getting the usual behavior of a UILabel. string.length won't do the trick since AAAAAAAA and iiiiii have the same length but different sizes (for example). I'm kind of stuck. Thanks a lot.

    Read the article

  • Why aren't Admob click callback delegate methods getting called?

    - by executor21
    I'm integrating the latest version of the admob sdk (version 20100412) into my app. The ads get displayed, but I need the app to make some changes when an ad is clicked and admob displays a full-screen browser. However, none of the callback methods (willPresentFullScreenModal, didPresentFullScreenModal, willDismissFullScreenModal, and didDismissFullScreenModal) are called, even though other delegate methods are. Why aren't these callbacks being made? They were in the previous versions of the SDK, and the sample app doesn't use them, so it's no help. EDIT: removed the double negative from the question title

    Read the article

  • Wordpress Site: Can't logout or post comment

    - by Chloé
    I need help with my site http://VelvetArt.net. I can't log out or post a comment, and when I put index.php after the site address it does not work either; it just displays a white screen. I also have this theme on my test site http://velvetart.lnb.sk, and everything works there: logout, index, comments. Maybe the problem is with the default WordPress files (index.php, wp-blog-header.php, wp-comments-post.php), but I don't know how to solve this issue. Can anyone help me with this, please?

    Read the article

  • tabbarcontroller as a subview

    - by Boaz
    Hi, I have a main view with a navigation controller in my app, containing two UIButtons. When I press one button, a subview should be pushed; this I know how to do. The problem is that I want this pushed view to contain a tab bar controller. I know how to set up a tab bar on the main views (within the app delegate), but I'm having a really hard time doing this as a subview. How should I implement this? Thanks.

    Read the article

  • Question about how to read the Safari/Chrome developer tool result

    - by richard
    Hi, I am using the developer tools in Chrome (I think they are the same as Safari's). I recorded a timeline while loading www.yahoo.com. I attached the screenshot: http://yfrog.com/4jpicture2yyp You see:

        * Send Request (http://www.yahoo.com)
        * Receive Response (http://www.yahoo.com)
        * Receive Response (http://www.yahoo.com)
        * Event (unload)
        * Function Call
        * Recalculate Style
        * Recalculate Style
        * Recalculate Style
        * Parse

    What I don't understand is why 'Parse' happens AFTER 'Function Call' and 'Recalculate Style'. Shouldn't it need to parse the HTML source first, before it parses the CSS file (which I assume triggers 'Recalculate Style') and the JavaScript file (which I assume triggers 'Function Call')?

    Read the article

  • Embed multiple YouTube videos in a chromeless player queue

    - by Quaze
    In the project I've been working on, I use the YouTube API to add a video to a chromeless player (custom buttons, everything works, no problems). It loads the YouTube ID it gets from the database (CodeIgniter, PHP). But instead of loading one video, I'd like to add all the videos I get from the database to the queue of that one player. So only one player: the first video retrieved from the database plays first, and when it's done the second gets loaded, preferably also looped. Is there any way I can achieve this? My first guess would be to save the array of YouTube IDs somewhere and, on state change (when the 'video ended' event fires), load the next ID from the array. I haven't tried this yet, because I'd prefer the queue to be filled at init, so it doesn't have to load anything after a video has ended. Is this possible?
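
    Not from the post, but the state-change approach the poster guesses at is the usual way with the old chromeless JavaScript player, sketched here with placeholder video IDs and an assumed player element id of "myytplayer":

        var videoIds = ["VIDEO_ID_1", "VIDEO_ID_2", "VIDEO_ID_3"];   // really loaded from the database
        var current = 0;

        function onYouTubePlayerReady(playerId) {
            var ytplayer = document.getElementById("myytplayer");
            ytplayer.addEventListener("onStateChange", "onPlayerStateChange");
            ytplayer.loadVideoById(videoIds[current]);
        }

        function onPlayerStateChange(newState) {
            if (newState === 0) {                                    // 0 = ended
                current = (current + 1) % videoIds.length;           // wrap around to loop the whole queue
                document.getElementById("myytplayer").loadVideoById(videoIds[current]);
            }
        }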

    Read the article

  • How to draw a transparent stroke (or anyway clear part of an image) on the iPhone

    - by devguy
    I have a small app that allows the user to draw on the screen with a finger. Yes, nothing original, but it's part of something larger :) I have a UIImageView where the user draws, by creating a CGContextRef and using the various CG draw functions. I primarily draw strokes/lines with the function CGContextAddLineToPoint. Now the problem is this: the user can draw lines of various colors, and I want to give him the ability to use a "rubber" tool to erase part of the image drawn so far with the finger. I initially did this by using a white color for the stroke (set with the CGContextSetRGBStrokeColor function), but it didn't work out, because I discovered later that the UIImage in the UIImageView actually had a transparent background, not a white one... so I would end up with a transparent image with white lines on it! Is there any way to set a "transparent" stroke color, or is there any other way to clear the content of the CGContextRef under the user's finger as he moves it? Thanks
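
    Not part of the question, but a short sketch of the usual eraser trick: switch the context's blend mode to kCGBlendModeClear so the stroke punches transparent pixels instead of painting a colour, then switch back to normal for the pen. The line width and the lastPoint/currentPoint variables are assumptions:

        // While the "rubber" tool is active (context is the CGContextRef already used for drawing)
        CGContextSetBlendMode(context, kCGBlendModeClear);   // strokes now erase to transparent
        CGContextSetLineWidth(context, 20.0f);               // eraser width (placeholder)
        CGContextMoveToPoint(context, lastPoint.x, lastPoint.y);
        CGContextAddLineToPoint(context, currentPoint.x, currentPoint.y);
        CGContextStrokePath(context);

        // When switching back to the pen tool
        CGContextSetBlendMode(context, kCGBlendModeNormal);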

    Read the article

  • Recommendations for an in memory database vs thread safe data structures

    - by yx
    TLDR: What are the pros/cons of using an in-memory database vs locks and concurrent data structures? I am currently working on an application that has many (possibly remote) displays that collect live data from multiple data sources and render it on screen in real time. One of the other developers has suggested using an in-memory database instead of doing it the standard way our other systems behave, which is to use concurrent hashmaps, queues, arrays, and other objects to store the graphical objects and handle them safely with locks where necessary. His argument is that the DB will lessen the need to worry about concurrency, since it will handle read/write locks automatically, and also that the DB will offer an easier way to structure the data into as many tables as we need, instead of having to create hashmaps of hashmaps of lists, etc. and keeping track of it all. I do not have much DB experience myself, so I am asking fellow SO users what experiences they have had and what the pros and cons of introducing an in-memory DB into the system are.

    Read the article

  • 3D Graphics with XNA Game Studio 4.0 bug in light map?

    - by Eibis
    i'm following the tutorials on 3D Graphics with XNA Game Studio 4.0 and I came up with an horrible effect when I tried to implement the Light Map http://i.stack.imgur.com/BUWvU.jpg this effect shows up when I look towards the center of the house (and it moves with me). it has this shape because I'm using a sphere to represent light; using other light shapes gives different results. I'm using a class PreLightingRenderer: using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; using Dhpoware; using Microsoft.Xna.Framework.Content; namespace XNAFirstPersonCamera { public class PrelightingRenderer { // Normal, depth, and light map render targets RenderTarget2D depthTarg; RenderTarget2D normalTarg; RenderTarget2D lightTarg; // Depth/normal effect and light mapping effect Effect depthNormalEffect; Effect lightingEffect; // Point light (sphere) mesh Model lightMesh; // List of models, lights, and the camera public List<CModel> Models { get; set; } public List<PPPointLight> Lights { get; set; } public FirstPersonCamera Camera { get; set; } GraphicsDevice graphicsDevice; int viewWidth = 0, viewHeight = 0; public PrelightingRenderer(GraphicsDevice GraphicsDevice, ContentManager Content) { viewWidth = GraphicsDevice.Viewport.Width; viewHeight = GraphicsDevice.Viewport.Height; // Create the three render targets depthTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Single, DepthFormat.Depth24); normalTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24); lightTarg = new RenderTarget2D(GraphicsDevice, viewWidth, viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24); // Load effects depthNormalEffect = Content.Load<Effect>(@"Effects\PPDepthNormal"); lightingEffect = Content.Load<Effect>(@"Effects\PPLight"); // Set effect parameters to light mapping effect lightingEffect.Parameters["viewportWidth"].SetValue(viewWidth); lightingEffect.Parameters["viewportHeight"].SetValue(viewHeight); // Load point light mesh and set light mapping effect to it lightMesh = Content.Load<Model>(@"Models\PPLightMesh"); lightMesh.Meshes[0].MeshParts[0].Effect = lightingEffect; this.graphicsDevice = GraphicsDevice; } public void Draw() { drawDepthNormalMap(); drawLightMap(); prepareMainPass(); } void drawDepthNormalMap() { // Set the render targets to 'slots' 1 and 2 graphicsDevice.SetRenderTargets(normalTarg, depthTarg); // Clear the render target to 1 (infinite depth) graphicsDevice.Clear(Color.White); // Draw each model with the PPDepthNormal effect foreach (CModel model in Models) { model.CacheEffects(); model.SetModelEffect(depthNormalEffect, false); model.Draw(Camera.ViewMatrix, Camera.ProjectionMatrix, Camera.Position); model.RestoreEffects(); } // Un-set the render targets graphicsDevice.SetRenderTargets(null); } void drawLightMap() { // Set the depth and normal map info to the effect lightingEffect.Parameters["DepthTexture"].SetValue(depthTarg); lightingEffect.Parameters["NormalTexture"].SetValue(normalTarg); // Calculate the view * projection matrix Matrix viewProjection = Camera.ViewMatrix * Camera.ProjectionMatrix; // Set the inverse of the view * projection matrix to the effect Matrix invViewProjection = Matrix.Invert(viewProjection); lightingEffect.Parameters["InvViewProjection"].SetValue(invViewProjection); // Set the render target to the graphics device graphicsDevice.SetRenderTarget(lightTarg); // Clear the 
render target to black (no light) graphicsDevice.Clear(Color.Black); // Set render states to additive (lights will add their influences) graphicsDevice.BlendState = BlendState.Additive; graphicsDevice.DepthStencilState = DepthStencilState.None; foreach (PPPointLight light in Lights) { // Set the light's parameters to the effect light.SetEffectParameters(lightingEffect); // Calculate the world * view * projection matrix and set it to // the effect Matrix wvp = (Matrix.CreateScale(light.Attenuation) * Matrix.CreateTranslation(light.Position)) * viewProjection; lightingEffect.Parameters["WorldViewProjection"].SetValue(wvp); // Determine the distance between the light and camera float dist = Vector3.Distance(Camera.Position, light.Position); // If the camera is inside the light-sphere, invert the cull mode // to draw the inside of the sphere instead of the outside if (dist < light.Attenuation) graphicsDevice.RasterizerState = RasterizerState.CullClockwise; // Draw the point-light-sphere lightMesh.Meshes[0].Draw(); // Revert the cull mode graphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise; } // Revert the blending and depth render states graphicsDevice.BlendState = BlendState.Opaque; graphicsDevice.DepthStencilState = DepthStencilState.Default; // Un-set the render target graphicsDevice.SetRenderTarget(null); } void prepareMainPass() { foreach (CModel model in Models) foreach (ModelMesh mesh in model.Model.Meshes) foreach (ModelMeshPart part in mesh.MeshParts) { // Set the light map and viewport parameters to each model's effect if (part.Effect.Parameters["LightTexture"] != null) part.Effect.Parameters["LightTexture"].SetValue(lightTarg); if (part.Effect.Parameters["viewportWidth"] != null) part.Effect.Parameters["viewportWidth"].SetValue(viewWidth); if (part.Effect.Parameters["viewportHeight"] != null) part.Effect.Parameters["viewportHeight"].SetValue(viewHeight); } } } } that uses three effect: PPDepthNormal.fx float4x4 World; float4x4 View; float4x4 Projection; struct VertexShaderInput { float4 Position : POSITION0; float3 Normal : NORMAL0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 Depth : TEXCOORD0; float3 Normal : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4x4 viewProjection = mul(View, Projection); float4x4 worldViewProjection = mul(World, viewProjection); output.Position = mul(input.Position, worldViewProjection); output.Normal = mul(input.Normal, World); // Position's z and w components correspond to the distance // from camera and distance of the far plane respectively output.Depth.xy = output.Position.zw; return output; } // We render to two targets simultaneously, so we can't // simply return a float4 from the pixel shader struct PixelShaderOutput { float4 Normal : COLOR0; float4 Depth : COLOR1; }; PixelShaderOutput PixelShaderFunction(VertexShaderOutput input) { PixelShaderOutput output; // Depth is stored as distance from camera / far plane distance // to get value between 0 and 1 output.Depth = input.Depth.x / input.Depth.y; // Normal map simply stores X, Y and Z components of normal // shifted from (-1 to 1) range to (0 to 1) range output.Normal.xyz = (normalize(input.Normal).xyz / 2) + .5; // Other components must be initialized to compile output.Depth.a = 1; output.Normal.a = 1; return output; } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } PPLight.fx float4x4 
WorldViewProjection; float4x4 InvViewProjection; texture2D DepthTexture; texture2D NormalTexture; sampler2D depthSampler = sampler_state { texture = ; minfilter = point; magfilter = point; mipfilter = point; }; sampler2D normalSampler = sampler_state { texture = ; minfilter = point; magfilter = point; mipfilter = point; }; float3 LightColor; float3 LightPosition; float LightAttenuation; // Include shared functions #include "PPShared.vsi" struct VertexShaderInput { float4 Position : POSITION0; }; struct VertexShaderOutput { float4 Position : POSITION0; float4 LightPosition : TEXCOORD0; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; output.Position = mul(input.Position, WorldViewProjection); output.LightPosition = output.Position; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // Find the pixel coordinates of the input position in the depth // and normal textures float2 texCoord = postProjToScreen(input.LightPosition) + halfPixel(); // Extract the depth for this pixel from the depth map float4 depth = tex2D(depthSampler, texCoord); // Recreate the position with the UV coordinates and depth value float4 position; position.x = texCoord.x * 2 - 1; position.y = (1 - texCoord.y) * 2 - 1; position.z = depth.r; position.w = 1.0f; // Transform position from screen space to world space position = mul(position, InvViewProjection); position.xyz /= position.w; // Extract the normal from the normal map and move from // 0 to 1 range to -1 to 1 range float4 normal = (tex2D(normalSampler, texCoord) - .5) * 2; // Perform the lighting calculations for a point light float3 lightDirection = normalize(LightPosition - position); float lighting = clamp(dot(normal, lightDirection), 0, 1); // Attenuate the light to simulate a point light float d = distance(LightPosition, position); float att = 1 - pow(d / LightAttenuation, 6); return float4(LightColor * lighting * att, 1); } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } PPShared.vsi has some common functions: float viewportWidth; float viewportHeight; // Calculate the 2D screen position of a 3D position float2 postProjToScreen(float4 position) { float2 screenPos = position.xy / position.w; return 0.5f * (float2(screenPos.x, -screenPos.y) + 1); } // Calculate the size of one half of a pixel, to convert // between texels and pixels float2 halfPixel() { return 0.5f / float2(viewportWidth, viewportHeight); } and finally from the Game class I set up in LoadContent with: effect = Content.Load(@"Effects\PPModel"); models[0] = new CModel(Content.Load(@"Models\teapot"), new Vector3(-50, 80, 0), new Vector3(0, 0, 0), 1f, Content.Load(@"Textures\prova_texture_autocad"), GraphicsDevice); house = new CModel(Content.Load(@"Models\house"), new Vector3(0, 0, 0), new Vector3((float)-Math.PI / 2, 0, 0), 35.0f, Content.Load(@"Textures\prova_texture_autocad"), GraphicsDevice); models[0].SetModelEffect(effect, true); house.SetModelEffect(effect, true); renderer = new PrelightingRenderer(GraphicsDevice, Content); renderer.Models = new List(); renderer.Models.Add(house); renderer.Models.Add(models[0]); renderer.Lights = new List() { new PPPointLight(new Vector3(0, 120, 0), Color.White * .85f, 2000) }; where PPModel.fx is: float4x4 World; float4x4 View; float4x4 Projection; texture2D BasicTexture; sampler2D basicTextureSampler = sampler_state { texture = ; addressU = wrap; addressV = wrap; minfilter = 
anisotropic; magfilter = anisotropic; mipfilter = linear; }; bool TextureEnabled = true; texture2D LightTexture; sampler2D lightSampler = sampler_state { texture = ; minfilter = point; magfilter = point; mipfilter = point; }; float3 AmbientColor = float3(0.15, 0.15, 0.15); float3 DiffuseColor; #include "PPShared.vsi" struct VertexShaderInput { float4 Position : POSITION0; float2 UV : TEXCOORD0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 UV : TEXCOORD0; float4 PositionCopy : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4x4 worldViewProjection = mul(World, mul(View, Projection)); output.Position = mul(input.Position, worldViewProjection); output.PositionCopy = output.Position; output.UV = input.UV; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // Sample model's texture float3 basicTexture = tex2D(basicTextureSampler, input.UV); if (!TextureEnabled) basicTexture = float4(1, 1, 1, 1); // Extract lighting value from light map float2 texCoord = postProjToScreen(input.PositionCopy) + halfPixel(); float3 light = tex2D(lightSampler, texCoord); light += AmbientColor; return float4(basicTexture * DiffuseColor * light, 1); } technique Technique1 { pass Pass1 { VertexShader = compile vs_1_1 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } I don't have any idea on what's wrong... googling the web I found that this tutorial may have some bug but I don't know if it's the LightModel fault (the sphere) or in a shader or in the class PrelightingRenderer. Any help is very appreciated, thank you for reading!

    Read the article

  • Flex Builder 3 Unclosable Editor Windows

    - by Sam Goldman
    See: http://i.imgur.com/pQNQh.png Somehow I made this happen and I don't know how to undo it. The image linked above shows my Flex Builder session. The largest section of the window is the editor. Initially, there was a blank window on the screen so I tried closing it, but I couldn't. Then I tried dragging it and realized I could drag it into a corner of itself, hence all the nested windows. I have no idea how to close these windows or simply reset the view. I went to the preferences under General Perspectives, but the "reset" button was disabled for every available perspective. Help?

    Read the article

  • cross-user C# mutex

    - by Martin
    My app is forced to use a 3rd-party module which will blue-screen Windows if two instances are started at the same time on the same machine. To work around the issue, my C# app has a mutex: static Mutex mutex = new Mutex(true, "{MyApp_b9d19f99-b83e-4755-9b11-d204dbd6d096}"); And I check whether another instance already holds it - and if so I show an error message and close the app: bool IsAnotherInstanceRunning() { return !mutex.WaitOne(TimeSpan.Zero, true); } The problem is that two users can log in and open the application at the same time, and IsAnotherInstanceRunning() will return false for both. How do I get around this?
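
    Not from the question, but the standard fix for the multi-user case, sketched here: a named mutex without a prefix is created in the current session's Local\ namespace, so each logged-in user gets a separate one; prefixing the name with Global\ makes it machine-wide across all sessions. (On locked-down systems you may additionally need a MutexSecurity that grants other users synchronize rights.)

        using System;
        using System.Threading;

        static class SingleInstance
        {
            // "Global\" places the mutex in the machine-wide namespace shared by every session
            static readonly Mutex mutex = new Mutex(
                false, @"Global\{MyApp_b9d19f99-b83e-4755-9b11-d204dbd6d096}");

            public static bool IsAnotherInstanceRunning()
            {
                try
                {
                    // Acquired => we are the only instance; keep holding it for the app's lifetime
                    return !mutex.WaitOne(TimeSpan.Zero, true);
                }
                catch (AbandonedMutexException)
                {
                    // A previous instance exited without releasing the mutex; we own it now
                    return false;
                }
            }
        }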

    Read the article

  • How to transform (rotate) an already existing CALayer/animation?

    - by Flocked
    Hello, I have added a CATransition animation to my app's UIView layer: CATransition *animation = [CATransition animation]; [animation setDelegate:self]; [animation setDuration:0.35]; [animation setTimingFunction:[CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut]]; animation.type = @"pageCurl"; animation.fillMode = kCAFillModeForwards; animation.endProgress = 0.58; [animation setRemovedOnCompletion:NO]; [[self.view layer] addAnimation:animation forKey:@"pageCurlAnimation"]; Now when the user rotates the device, the layer always stays in the same position on the screen (aligned with the home button). Is it possible to rotate the animation/layer? I tried self.view.layer.transform = CGAffineTransformMakeRotation (angle); but this code just rotates the UIView and not the animation/layer I set.

    Read the article

  • alt-tab sort of web design

    - by sureshgl
    I am thinking of designing a web site with multiple related services. For every user action in one service, there will be some computation going on in each of the other services. I want to display the service in use (chosen by the user) enlarged in the middle of the page, and the rest of the services as small, shrunk versions around it. The services shown in shrunk form should still show what is happening in them at run time, as if each were the chosen service. The closest match to this behaviour that I know of is Reliance BigTV, where small previews of all the channels keep playing and you can choose the one you want to watch; after choosing, that one image becomes big and occupies the screen. Please let me know whether I can do something like this using CSS, HTML, Ajax and PHP.

    Read the article
