Search Results

Search found 8831 results on 354 pages for 'intel graphics'.


  • Turning off antialiasing in Löve2D

    - by cjanssen
    I'm using Löve2D for writing a small game. Löve2D is an open source game engine for Lua. The problem I'm encountering is that some antialias filter is automatically applied to your sprites when you draw them at non-integer positions:

        love.graphics.draw( sprite, x, y )

    So when x or y is not round (for example, x = 100.24), the sprite appears blurred. The same happens when the sprite size is not even, because (x, y) points to the center of the sprite. For example, a sprite that is 31x30 pixels will appear blurred again, because its pixels are painted at non-integer positions. Since I am using pixel art, I want to avoid this entirely, otherwise the art is destroyed by this effect. The workaround I am using so far is to force the coordinates to be round by littering the code with calls to math.floor(), and to force all the sprites to have even sizes by adding a row or column of transparent pixels in the paint program where needed. Is there some command I can call at program startup to deactivate the antialiasing?
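    If the goal is just a startup switch, a hedged sketch: newer Löve versions (0.9 and later) expose love.graphics.setDefaultFilter, and older ones offer the same control per image through Image:setFilter; switching from the default "linear" to "nearest" turns off the bilinear sampling that causes the blur.

        function love.load()
            -- All images created after this call sample nearest-neighbour (Löve 0.9+).
            love.graphics.setDefaultFilter("nearest", "nearest")

            sprite = love.graphics.newImage("sprite.png")
            -- On older Löve versions, set it per image instead:
            -- sprite:setFilter("nearest", "nearest")
        end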


  • Scaling Image to multiple sizes for Deep Zoom

    - by AnthonyWJones
    Let's assume I have a bitmap with a square aspect ratio and a width of 2048 pixels. In order to create the set of files needed by Silverlight's DeepZoomImageTileSource, I need to scale this bitmap to 1024, then to 512, then to 256, and so on down to a 1-pixel image. There are two, I suspect naive, approaches:

    1. For each image required, scale the original full-size image to the required size. However, it seems excessive to scale the full image down to the very small sizes.
    2. Having scaled from one level to the next, discard the original image and use each successive scaled image as the source of the next smaller image. However, I suspect this would generate images in the 256-64 range with poorer fidelity than option 1.

    Note that unlike the Deep Zoom Composer, this tool is expected to act in an on-demand fashion, so it needs to complete in a reasonable timeframe (30 seconds tops). On the plus side, I'm only creating a single multiscale image, not a pyramid of multiple high-res images. I am outside my comfort zone here; have any graphics experts got any advice? Am I wrong about point 2? Is point 1 reasonably performant and I'm worrying about nothing? Option 3?
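    A minimal sketch of option 2 in C# (assuming System.Drawing is available on the generating side; Silverlight itself has no Bitmap class): each level is built by halving the previous one with high-quality interpolation, which is essentially how mipmap chains are generated, so the fidelity loss in the small levels is usually smaller than feared.

        using System;
        using System.Collections.Generic;
        using System.Drawing;
        using System.Drawing.Drawing2D;

        static class Pyramid
        {
            // Yields 1024, 512, ..., 1 from a 2048px source by successive halving.
            public static IEnumerable<Bitmap> Build(Bitmap source)
            {
                Bitmap current = source;
                while (current.Width > 1)
                {
                    int w = Math.Max(1, current.Width / 2);
                    int h = Math.Max(1, current.Height / 2);
                    var next = new Bitmap(w, h);
                    using (var g = Graphics.FromImage(next))
                    {
                        g.InterpolationMode = InterpolationMode.HighQualityBicubic;
                        g.DrawImage(current, new Rectangle(0, 0, w, h));
                    }
                    yield return next;   // caller saves and disposes each level
                    current = next;
                }
            }
        }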


  • Need guidelines for optimizing WebGL performance by minimizing shader changes

    - by brainjam
    I'm trying to get an idea of the practicality of WebGL for rendering large architectural interior scenes, consisting of hundreds of thousands of triangles. These triangles are distributed over many objects, and there are many materials in the scene. On the other hand, there are no moving parts, and the materials tend to be fairly simple, mostly based on texture maps. There is a lot of texture-map sharing; for example, all the chairs in a scene will share a common map. There is also some multitexturing: up to three textures overlaid in a material. I've been doing a little experimentation and reading, and gather that frequently switching materials during a rendering pass will slow things down. For example, a scene with 200K triangles will show significant performance differences depending on whether there are 10 or 1000 objects, assuming that each time an object is displayed a new material is set up. So it seems that if performance is important, the scene should be sorted by materials so as to minimize material switching. What I'm looking for is guidelines on how to think about the overhead of various state changes, and where I get the biggest bang for the buck. For example:

    - What are the relative performance costs of, say, gl.useProgram(), gl.uniformMatrix4fv(), and gl.drawElements()?
    - Should I try to write ubershaders to minimize shader switching?
    - Should I try to aggregate geometry to minimize the number of gl.drawElements() calls?

    I realize that mileage may vary depending on browser, OS, and graphics hardware. And I'm also not looking for heroic measures, just some guidelines from people who have already had some experience in making scenes fast. I'll add that while I've had some experience with fixed-pipeline OpenGL programming in the past, I'm rather new to the WebGL/OpenGL ES 2.0 way of doing things.
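    As a hedged rule of thumb from the OpenGL ES world: gl.useProgram() is usually the most expensive switch, texture binds come next, uniform uploads such as gl.uniformMatrix4fv() are comparatively cheap, and every gl.drawElements() call carries a fixed CPU cost, which argues for both state sorting and geometry aggregation. A sketch of the sort, with hypothetical per-object fields:

        // Order draw calls by program, then by texture (field names are assumptions).
        function sortDrawList(objects) {
          return objects.slice().sort(function (a, b) {
            return (a.programId - b.programId) || (a.textureId - b.textureId);
          });
        }

        function drawScene(gl, objects) {
          var lastProgram = null, lastTexture = null;
          sortDrawList(objects).forEach(function (obj) {
            if (obj.program !== lastProgram) {        // costliest switch
              gl.useProgram(obj.program);
              lastProgram = obj.program;
            }
            if (obj.texture !== lastTexture) {        // next costliest
              gl.bindTexture(gl.TEXTURE_2D, obj.texture);
              lastTexture = obj.texture;
            }
            gl.uniformMatrix4fv(obj.mvpLocation, false, obj.mvpMatrix);  // cheap
            gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, obj.indexBuffer);
            gl.drawElements(gl.TRIANGLES, obj.indexCount, gl.UNSIGNED_SHORT, 0);
          });
        }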


  • Using the contents of an array to set individual pixels in a Quartz bitmap context

    - by Magic Bullet Dave
    I have an array that contains the RGB colour values for each pixel in a 320 x 180 display. I would like to be able to set individual pixel values in a bitmap context of the same size offscreen, then display the bitmap context in a view. It appears that I have to create 1x1 rects and either put a stroke on them or draw a line of length 1 at the point in question. Is that correct? I'm looking for a very efficient way of getting the array data onto the graphics context; as you can imagine, this is going to be an image buffer that cycles at 25 frames per second, and drawing in this way seems inefficient. I guess the other question is: should I use OpenGL ES instead? Thoughts/best practice would be much appreciated. Regards, Dave

    OK, I have come a short way, but I can't make the final hurdle and I am not sure why this isn't working:

        - (void)displayContentsOfArray1UsingBitmap:(CGContextRef)context {
            long bitmapData[WIDTH * HEIGHT];
            // Build bitmap
            int i, j, h;
            for (i = 0; i < WIDTH; i++) {
                for (j = 0; j < HEIGHT; j++) {
                    h = frameBuffer01[i][j];
                    bitmapData[i * j] = h;
                }
            }
            // Blit the bitmap to the context
            CGDataProviderRef providerRef = CGDataProviderCreateWithData(NULL, bitmapData, 4 * WIDTH * HEIGHT, NULL);
            CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
            CGImageRef imageRef = CGImageCreate(WIDTH, HEIGHT, 8, 32, WIDTH * 4, colorSpaceRef, kCGImageAlphaFirst, providerRef, NULL, YES, kCGRenderingIntentDefault);
            CGContextDrawImage(context, CGRectMake(0.0, HEIGHT, WIDTH, HEIGHT), imageRef);
            CGImageRelease(imageRef);
            CGColorSpaceRelease(colorSpaceRef);
            CGDataProviderRelease(providerRef);
        }
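    One likely culprit, hedged: the index expression bitmapData[i * j] collapses most pixels onto shared slots (the whole first row and first column all land on index 0); row-major addressing needs j * WIDTH + i. The y-origin of HEIGHT in the draw rect also looks suspicious unless the CTM is deliberately flipped. A sketch of the fix:

        /* Row-major indexing: pixel (i, j) lives at j * WIDTH + i. */
        for (j = 0; j < HEIGHT; j++) {
            for (i = 0; i < WIDTH; i++) {
                bitmapData[j * WIDTH + i] = frameBuffer01[i][j];
            }
        }

        /* Draw at the origin unless the context has been flipped on purpose. */
        CGContextDrawImage(context, CGRectMake(0.0, 0.0, WIDTH, HEIGHT), imageRef);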


  • Ogre material scripts; how do I give a technique multiple lod_indexes?

    - by BlueNovember
    I have an Ogre material script that defines 4 rendering techniques: 1 using GLSL shaders, then 3 others that just use textures of different resolutions. I want to use the GLSL shader unconditionally if the graphics card supports it, and the other 3 textures depending on camera distance. At the moment my script is:

        material foo
        {
            lod_distances 1600 2000
            technique shaders
            {
                lod_index 0
                lod_index 1
                lod_index 2
                //various passes here
            }
            technique high_res
            {
                lod_index 0
                //various passes here
            }
            technique medium_res
            {
                lod_index 1
                //various passes here
            }
            technique low_res
            {
                lod_index 2
                //various passes here
            }
        }

    Extra information: the Ogre manual says:

        Increasing indexes denote lower levels of detail. You can (and often will) assign more than one technique to the same LOD index; what this means is that OGRE will pick the best technique of the ones listed at the same LOD index. OGRE determines which one is 'best' by which one is listed first.

    Currently, on a machine supporting the GLSL version I am using, the script behaves as follows:

        Camera > 2000: Shader technique
        1600 < Camera <= 2000: Medium
        Camera <= 1600: High

    If I change the lod_index order in the shaders technique to

        lod_index 2
        lod_index 1
        lod_index 0

    the behaviour becomes:

        Camera > 2000: Low
        1600 < Camera <= 2000: Medium
        Camera <= 1600: Shader

    implying only the last lod_index is used. If I change it to

        lod_index 0 1 2

    it shouts at me:

        Compiler error: fewer parameters expected in foo.material(#): lod_index only supports 1 argument

    So how do I specify a technique to have 3 lod_indexes? Duplication works:

        technique shaders
        {
            lod_index 0
            //various passes here
        }
        technique shaders1
        {
            lod_index 1
            //passes repeated here
        }
        technique shaders2
        {
            lod_index 2
            //passes repeated here
        }

    ...but it's ugly.


  • C# / Silverlight / WPF / Fast rendering lots of circles

    - by Walt W
    I want to render a lot of circles or small graphics within either Silverlight or WPF (around 1,000-10,000) as fast and as frequently as possible. If I have to go to DX or OGL, that's fine, but I'm wondering about doing this within either of those two frameworks first (read: it's OK if an answer is WPF-only or Silverlight-only). Also, if there is a way to access DX through WPF and render on a surface that way, I would be interested in that as well. So, what's the fastest way to draw a load of circles? They can be as plain as necessary, but they do need to have a radius. Currently I'm using a DrawingVisual and a DrawingContext.DrawEllipse() call for each circle, then rendering the visual to a RenderTargetBitmap, but it becomes very slow as the number of circles rises. By the way, these circles move every frame, so caching isn't really an option unless you're going to suggest caching the individual circles... but their sizes are dynamic, so I'm not sure that's a great approach.
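    One route that often beats retained visuals at these counts, sketched under assumptions (WPF, modern C#, a hypothetical Circle type with X/Y/R fields): rasterize the circles yourself into a single WriteableBitmap each frame and show that via an Image.Source. This is not the only option (D3DImage gives real DirectX interop); it is just the simplest to benchmark.

        // Sketch: one WriteableBitmap re-filled per frame; sizes hardcoded for brevity.
        const int W = 800, H = 600;
        WriteableBitmap bmp = new WriteableBitmap(W, H, 96, 96, PixelFormats.Bgra32, null);
        int[] pixels = new int[W * H];

        void DrawFrame(List<Circle> circles)
        {
            Array.Clear(pixels, 0, pixels.Length);
            foreach (Circle c in circles)
                for (int dy = -c.R; dy <= c.R; dy++)
                    for (int dx = -c.R; dx <= c.R; dx++)
                        if (dx * dx + dy * dy <= c.R * c.R)   // inside the circle
                        {
                            int x = c.X + dx, y = c.Y + dy;
                            if (x >= 0 && x < W && y >= 0 && y < H)
                                pixels[y * W + x] = unchecked((int)0xFF3399FF); // opaque blue
                        }
            bmp.WritePixels(new Int32Rect(0, 0, W, H), pixels, W * 4, 0);
        }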


  • Can I access views when drawn in the drawRect method?

    - by GianPac
    When creating the content view of a tableViewCell, I used drawInRect:withFont:, drawAtPoint:... and others inside drawRect:, since all I needed was to lay out some text. It turns out that part of the drawn text needs to be clickable URLs, so I decided to create a UIWebView and insert it in my drawRect:. Everything seems to be laid out fine; the problem is that interaction with the UIWebView is not happening. I tried enabling the interaction; it did not work. I want to know: since with drawRect: I am drawing to the current graphics context, is there anything I can do to have my subviews interact with the user? Here is some of the code I use inside the UITableViewCell class:

        - (void)drawRect:(CGRect)rect {
            NSString *comment = @"something";
            [comment drawAtPoint:point forWidth:195 withFont:someFont lineBreakMode:UILineBreakModeMiddleTruncation];

            NSString *anotherCommentWithURL = @"this comment has a URL http://twtr.us";
            UIWebView *webView = [[UIWebView alloc] initWithFrame:someFrame];
            webView.delegate = self;
            webView.text = anotherCommentWithURL;
            [self addSubView:webView];
            [webView release];
        }

    As I said, the view draws fine, but there is no interaction with the webView from the outside. The URL gets embedded into HTML and should be clickable. It is not. I got it working on another view, but on that one I lay out the views. Any ideas?


  • OpenGL fast texture drawing with vertex buffer objects. Is this the way to do it?

    - by Matthew Mitchell
    Hello. I am making a 2D game with OpenGL, and I would like to speed up my texture drawing by using VBOs; currently I am using immediate mode. I generate my own coordinates when I rotate and scale a texture. I also have the functionality of rounding the corners of a texture, using the polygon primitive to draw those. I was thinking: would it be fastest to make a VBO with vertices for the sides of the texture, with no offset included, so I can then use glViewport, glScale (or glTranslate? What is the difference, and which is most suitable here?) and glRotate to move the drawing position for my texture? Then I could use the same VBO, unchanged, to draw the texture each time, and only change the VBO when I need to add coordinates for the rounded corners. Is that the best way to do this? What should I look out for while doing it? Is it really faster to use GL_TRIANGLES instead of GL_QUADS on modern graphics cards? Thank you for any answer.
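    On the embedded question: glTranslate moves the origin of subsequent drawing, glScale multiplies coordinates (stretching the quad), and glViewport remaps the whole framebuffer region, so translate/rotate/scale on the modelview matrix is the usual tool here, not glViewport. A hedged fixed-pipeline sketch reusing one unit-quad VBO (error checks and texture setup omitted):

        /* One-time setup: a unit quad, interleaved x,y,u,v. */
        GLuint vbo;
        const GLfloat quad[] = {
            0, 0, 0, 0,
            1, 0, 1, 0,
            1, 1, 1, 1,
            0, 1, 0, 1,
        };
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);

        /* Per sprite: position/rotate/scale the same VBO via the modelview matrix. */
        void draw_sprite(GLuint tex, float x, float y, float w, float h, float angle)
        {
            glBindTexture(GL_TEXTURE_2D, tex);
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glVertexPointer(2, GL_FLOAT, 4 * sizeof(GLfloat), (void *)0);
            glTexCoordPointer(2, GL_FLOAT, 4 * sizeof(GLfloat), (void *)(2 * sizeof(GLfloat)));
            glPushMatrix();
            glTranslatef(x, y, 0);          /* where the sprite goes            */
            glRotatef(angle, 0, 0, 1);      /* spin around its origin           */
            glScalef(w, h, 1);              /* stretch unit quad to sprite size */
            glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
            glPopMatrix();
        }

    As for GL_QUADS: drivers decompose quads into triangles anyway, so triangle fans/strips cost nothing extra and stay portable to ES, where GL_QUADS does not exist.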


  • Using CGContextDrawTiledImage at different zooms causes massive memory growth

    - by Jacques
    I'm working on an app where there's a view in a zoomable UIScrollView. When the user zooms in or out, I redraw the view that's in the UIScrollView so it is nice and sharp. That view has a background image that I draw with CGContextDrawTiledImage. I noticed that memory usage grows every time I switch to a new zoom level. It looks like CGContextDrawTiledImage keeps a cache somewhere of the image scaled to different sizes. So, if I go from 1.0 to 1.1x zoom, memory use grows. Going back to 1.0 doesn't cause it to grow, but then going to 1.05 and then 1.2 causes it to grow twice. Back to 1.1, and no growth. Of course, the zoom level is under user control, so I don't have control over how many zoom levels occur. Right now my background image is kind of massive (512x512), so this causes memory usage to grow very quickly. It doesn't show up as a memory leak in Instruments, just additional allocations that never get freed. I've tried to find a way to free the cache that appears to be created, but no luck. It doesn't seem to respond to low-memory warnings, for example. I also tried setting the view's backgroundColor to a UIColor created with colorWithPatternImage, but that doesn't work because I'm doing the scaling by changing the graphics context's CTM, not by setting the view's transform. Any ideas on how to keep memory usage from blowing up?


  • Pythagoras tree with g2d

    - by owca
    I'm trying to build my first fractal (Pythagoras Tree) in Java using Graphics2D. Here's what I have now:

        import java.awt.*;
        import java.awt.geom.*;
        import javax.swing.*;
        import java.util.Scanner;

        public class Main {
            public static void main(String[] args) {
                int i = 0;
                Scanner scanner = new Scanner(System.in);
                System.out.println("Give amount of steps: ");
                i = scanner.nextInt();
                new Pitagoras(i);
            }
        }

        class Pitagoras extends JFrame {
            private int powt, counter;

            public Pitagoras(int i) {
                super("Pythagoras Tree.");
                setSize(1000, 1000);
                setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                setVisible(true);
                powt = i;
            }

            private void paintIt(Graphics2D g) {
                double p1 = 450, p2 = 800, size = 200;
                for (int i = 0; i < powt; i++) {
                    if (i == 0) {
                        g.drawRect((int)p1, (int)p2, (int)size, (int)size);
                        counter++;
                    } else {
                        if (i % 2 == 0) {
                            // here I must draw two squares
                        } else {
                            // here I must draw right triangle
                        }
                    }
                }
            }

            @Override
            public void paint(Graphics graph) {
                Graphics2D g = (Graphics2D)graph;
                paintIt(g);
            }
        }

    So basically I set the number of steps, and then draw the first square (p1, p2 and size). Then if the step is odd I need to build a right triangle on top of the square, and if the step is even I need to build two squares on the free sides of the triangle. What method should I choose for drawing both the triangle and the squares? I was thinking about drawing the triangle with simple lines and transforming them with AffineTransform, but I'm not sure if it's doable, and it doesn't solve drawing the squares.
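    A hedged sketch of the usual recursive approach (45-degree tree; method and field names beyond the question's own are hypothetical): instead of computing square and triangle corners by hand, let each recursive call draw one square and compose a translate + rotate onto a copy of the Graphics2D for its two children; the triangle is implicit in the two 45-degree pivots.

        import java.awt.Graphics2D;
        import java.awt.geom.Rectangle2D;

        // Origin of each call = bottom-left corner of the current square; y grows down.
        private void drawTree(Graphics2D g, double size, int depth) {
            if (depth == 0) return;
            g.draw(new Rectangle2D.Double(0, -size, size, size)); // square above origin

            // Left child: pivot on the square's top-left corner, lean 45 degrees left.
            Graphics2D left = (Graphics2D) g.create();
            left.translate(0, -size);
            left.rotate(-Math.PI / 4);
            drawTree(left, size / Math.sqrt(2), depth - 1);
            left.dispose();

            // Right child: pivot on the apex of the roof triangle, lean 45 degrees right.
            Graphics2D right = (Graphics2D) g.create();
            right.translate(size / 2, -1.5 * size);
            right.rotate(Math.PI / 4);
            drawTree(right, size / Math.sqrt(2), depth - 1);
            right.dispose();
        }

        // Usage inside paint(): position the trunk, then recurse.
        //   Graphics2D g2 = (Graphics2D) graph;
        //   g2.translate(450, 800);
        //   drawTree(g2, 200, powt);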


  • How to implement a grapher in C#

    - by iansinke
    So I'm writing a graphing calculator. So far I have a semi-functional grapher; however, I'm having a hard time finding a good balance between accurate graphs and smooth-looking curves. The current implementation (semi-pseudo-code) looks something like this:

        for (float i = GraphXMin; i <= GraphXMax; i++)
        {
            PointF P = new PointF(i, EvaluateFunction(Function, i));
            ListOfPoints.Add(P);
        }
        Graphics.DrawCurve(ListOfPoints);

    The problem with this is that since it only adds a point at every integer value, graphs end up distorted when their turning points don't fall on integers (e.g. sin(x)^2). I tried incrementing i by something smaller (like 0.1), which works, but the graph looks very rough. I am using C# and GDI+. I have SmoothingMode set to AntiAlias, so that's not the problem, as you can see from the first graph. Is there some sort of issue with drawing curves with a lot of points? Should the points perhaps be positioned exactly on pixels? I'm sure some of you have worked on something very similar before, so any suggestions? While you're at it, do you have any suggestions for graphing functions with asymptotes, e.g. 1/x^2? P.S. I'm not looking for a library that does all this; I want to write it myself.
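    A common approach, hedged as one of several: sample once per pixel column (converting the column back to a world x), draw with DrawLines rather than DrawCurve (spline fitting over dense samples is what causes the wobble), and break the polyline wherever the value is non-finite or jumps by more than the viewport height, which handles asymptotes like 1/x^2. PlotWidth, GraphYMin/GraphYMax and the world-to-screen transform are assumptions here:

        var segment = new List<PointF>();
        for (int px = 0; px < PlotWidth; px++)
        {
            float x = GraphXMin + px * (GraphXMax - GraphXMin) / PlotWidth;
            float y = EvaluateFunction(Function, x);
            bool jump = segment.Count > 0 &&
                        Math.Abs(y - segment[segment.Count - 1].Y) > (GraphYMax - GraphYMin);
            if (float.IsNaN(y) || float.IsInfinity(y) || jump)
            {
                if (segment.Count > 1) Graphics.DrawLines(Pens.Black, segment.ToArray());
                segment.Clear();                       // start a new branch of the curve
                if (float.IsNaN(y) || float.IsInfinity(y)) continue;
            }
            segment.Add(new PointF(x, y));             // world coords; map to screen as needed
        }
        if (segment.Count > 1) Graphics.DrawLines(Pens.Black, segment.ToArray());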


  • Quality questionnaire php mysql graphics

    - by Marcelo
    Hi, I'm making a questionnaire about service quality; it contains the options (poor, regular, good, very good). It has 6 questions (radio buttons) and a suggestion box (textbox). In the table of the database I created 6 columns for the questions, 1 for the suggestion and 1 for the date (a friend of mine told me to use this, but I didn't get why).

    q1) I'm going to assign a value from 1 to 4 to the radio-button options, and I'd like to sum every answer for each question, then divide by the number of users that answered that question to get the mean. How am I supposed to do that? I'd also like to generate monthly and yearly reports.
    q2) Not only for the questionnaire but for registration too: I need all the fields to be completed, no blank options; if the user doesn't complete all fields, the form should not be submitted and a warning message should be shown.
    q3) About the field type, I'd like input to match the column type declared in the database; I'm having a "problem". E.g. Name (varchar) vs. 1234 (int): in the 'name' field of the table, 1234 will be accepted as a name, and I don't want this; I want only the type I declared when creating the table.
    q4) I'd also like to know if it's possible to create pie charts ("pizza" graphics) showing the percentage for each question. Is this possible?
    q5) I'm using phpMyAdmin and some of my IDs are auto_increment, but because of my tests they are at a high number. I'd like to restart the ID numbering. Is this possible?

    Thanks for the attention.
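    Hedged SQL sketches for q1 and q5, against an assumed table answers(q1..q6, suggestion, created_at). The date column is presumably what your friend had in mind: monthly and yearly reports need it. Once the averages come back, any PHP charting library can render them as a pie chart for q4.

        -- q1: per-question mean for a given month (AVG ignores NULLs).
        SELECT AVG(q1) AS mean_q1, AVG(q2) AS mean_q2, AVG(q3) AS mean_q3,
               AVG(q4) AS mean_q4, AVG(q5) AS mean_q5, AVG(q6) AS mean_q6
        FROM answers
        WHERE YEAR(created_at) = 2010 AND MONTH(created_at) = 5;

        -- q5: reset the auto_increment counter (TRUNCATE also empties the table).
        ALTER TABLE answers AUTO_INCREMENT = 1;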


  • Is there a way to capture a bitmap from a WPF window using native C++?

    - by Mike Caron
    Imagine a document window in an MDI application which contains a child WPF window, say a sidebar for example. How can one get a bitmap containing both the WPF pixels AND the GDI (non-WPF) pixels? I've discovered that when making my thumbnail preview for the Win7 taskbar app-icon hover, I get black in the parts of the preview where the WPF pixels should be. My current method simply grabs a bitmap capture of the document window. Then I get a DC for the preview, make a memory DC from it, and select my bitmap into it. Then I do some size adjustments and BitBlt the memory DC to the real DC. I'm guessing that the BitBlt operation doesn't take into account the fact that the WPF pixels are hardware accelerated and therefore need to be grabbed from the graphics hardware. All the stuff in GDI is managed just fine, though, and when there are no WPF child windows, the preview image looks fine. I'm wondering if it's at all possible to grab a bitmap of the WPF window from native C++. Then I can blt that onto the black area of the previous preview.
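    One workaround, hedged: since WPF renders through DirectX, a memory-DC copy of the window's own DC misses those pixels, but copying from the screen DC at the window's rectangle picks up whatever is actually composited, including the WPF area, as long as the window is visible and unobscured. A minimal Win32 sketch:

        // Capture a window's on-screen pixels, including WPF/DirectX content,
        // by copying from the screen DC (window must be visible and unoccluded).
        HBITMAP CaptureWindowFromScreen(HWND hwnd)
        {
            RECT rc;
            GetWindowRect(hwnd, &rc);
            int w = rc.right - rc.left, h = rc.bottom - rc.top;

            HDC screenDC = GetDC(NULL);
            HDC memDC = CreateCompatibleDC(screenDC);
            HBITMAP bmp = CreateCompatibleBitmap(screenDC, w, h);
            HGDIOBJ old = SelectObject(memDC, bmp);

            BitBlt(memDC, 0, 0, w, h, screenDC, rc.left, rc.top, SRCCOPY);

            SelectObject(memDC, old);
            DeleteDC(memDC);
            ReleaseDC(NULL, screenDC);
            return bmp;   // caller owns the HBITMAP
        }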


  • java.awt -- when java outputs an image to my monitor (screen), where is the file that is output to the monitor card?

    - by user1405870
    Suppose that I am drawing a set of images using Java graphics objects, and that Java is outputting these images to my monitor. Where is the file or files that are sent to the monitor card (the graphical representation files)? How can I take this file and save it to disk, or write it to an array, or combine the results of several such outputs (to the monitor) into a single file for saving? I don't want to use a screen-shot feature; I want to be able to redirect (or also capture) the output going to the monitor into some sort of byte stream. I note that monitors are much better than semaphores when you are talking about display capabilities; I don't need a counterexample. I might not be asking the correct question. It might be that I want to capture the file while it is still in user space, before it is put into 'device space'. I would like to try to capture the byte stream so that I can convert it to MPEG-4 format. I either need a streaming output from the MPEG-4 converter, coming from the streaming input, or else I need to take static images at discrete times and convert those. What format will the output from user space be in? What format will the device-space output be in? Try to keep speculation to a minimum. http://docs.oracle.com/javame/config/cdc/opt-pkgs/api/jsr927/index.html I guess that Java has made a means of displaying AWT objects on a television screen. Thank you. Ryan Zoerner
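    For context: there is no interceptable file; Java hands pixels to the OS windowing system, not to a document. The usual way to get the byte stream the question is after, sketched below, is to run the same AWT drawing code against an offscreen BufferedImage and encode that per frame (e.g. feeding an external MPEG-4 encoder):

        import java.awt.Component;
        import java.awt.Graphics2D;
        import java.awt.image.BufferedImage;
        import java.io.File;
        import java.io.IOException;
        import javax.imageio.ImageIO;

        // Render a component's AWT drawing into a capturable pixel buffer.
        static BufferedImage captureFrame(Component comp) throws IOException {
            BufferedImage frame = new BufferedImage(comp.getWidth(), comp.getHeight(),
                                                    BufferedImage.TYPE_INT_RGB);
            Graphics2D g = frame.createGraphics();
            comp.paint(g);          // same code path that draws to the screen
            g.dispose();
            ImageIO.write(frame, "png", new File("frame0001.png"));
            return frame;           // or hand frame's raster bytes to an encoder
        }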


  • Laptop Asus P50IJ with Intel 4500M GMA output going to a Dell 1907FP external monitor will not allow

    - by ProfessionalAmateur
    Hello - I just purchased an Asus P50IJ-X2 laptop, which has an Intel GMA 4500M video card, running Windows 7. At work I output this laptop to a Dell 1907FP LCD, which has a maximum resolution of 1280x1024. No matter what I do, Windows will not allow the laptop to set a resolution higher than 1024x768 on this LCD monitor. I've even gone to the extent of downloading PowerStrip (I'd post a link but I'm new and can only enter 1 URL; if you google for PowerStrip it's the first option) to create a custom driver for my monitor, thinking Windows was having a hard time seeing the available resolutions it would accept. However, PowerStrip read the registry and properly sees the monitor and what it's capable of, so I'm now at a complete loss as to why Windows 7 will not allow me to set/use a 1280x1024 resolution for this external monitor (as my last laptop did running Vista). The Intel documentation (http://software.intel.com/en-us/articles/quick-reference-guide-to-intel-integrated-graphics/) indicates that the GMA 4500M should be able to run up to a 2560x1600 max resolution. The Dell 1907FP specification states it can run up to 1280x1024. But no matter what, the computer will not allow me to set anything higher than 1024x768. I'm completely baffled, but I would really like to be able to output this laptop at a reasonable resolution; 1024x768 makes me feel like I'm using my mom's computer. Any help would be greatly appreciated! Here are some attached images (I apologize for the links; being new I cannot post images) that should help explain this better: Image 1 - This image is from PowerStrip, which shows the monitor's max accepted resolution and, at the top right, the max res my PC currently allows. (http://imgur.com/agrno.png) Image 2 - This shows my Windows 7 resolution picker. (http://imgur.com/3nv6q.png) Image 3 - The 'List all modes' option taken from Screen Resolution > Advanced Settings > List All Modes. (http://imgur.com/AMREh.png) Image 4 - Monitor information from the registry read by PowerStrip; this shows the laptop is able to read the necessary info from the LCD monitor. (http://imgur.com/hUX4D.png)


  • Microphone not capturing sound on 12.04 Lenovo G580

    - by Yam Marcovic
    In both Skype and the Sound Recorder application, I am not capturing any audio from my built-in microphone. I'm not sure why. Otherwise, sound output is working well. I have tried running gstreamer-properties and setting the Default Input plugin to PulseAUdio as well (to match the output), and it didn't help. I have tried running alsamixer -V all and I only get 2 input-related entries: Capture(L R) which is on 100 and not muted (can't be either), and Analog Mic Boost which is on 20db. Extra info: Camera (video) is working well on Skype and Kamerka. Can you please help me get my microphone to work? lspci: 00:00.0 Host bridge: Intel Corporation Ivy Bridge DRAM Controller (rev 09) Subsystem: Lenovo Device 3977 Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ >SERR- <PERR- INTx- Latency: 0 Capabilities: <access denied> Kernel driver in use: agpgart-intel 00:02.0 VGA compatible controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09) (prog-if 00 [VGA controller]) Subsystem: Lenovo Device 3977 Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+ Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin A routed to IRQ 42 Region 0: Memory at e0000000 (64-bit, non-prefetchable) [size=4M] Region 2: Memory at d0000000 (64-bit, prefetchable) [size=256M] Region 4: I/O ports at 3000 [size=64] Expansion ROM at <unassigned> [disabled] Capabilities: <access denied> Kernel driver in use: i915 Kernel modules: i915 00:14.0 USB controller: Intel Corporation Panther Point USB xHCI Host Controller (rev 04) (prog-if 30 [XHCI]) Subsystem: Lenovo Device 3977 Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+ Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin A routed to IRQ 41 Region 0: Memory at e0600000 (64-bit, non-prefetchable) [size=64K] Capabilities: <access denied> Kernel driver in use: xhci_hcd 00:16.0 Communication controller: Intel Corporation Panther Point MEI Controller #1 (rev 04) Subsystem: Lenovo Device 3977 Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+ Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin A routed to IRQ 43 Region 0: Memory at e0614000 (64-bit, non-prefetchable) [size=16] Capabilities: <access denied> Kernel driver in use: mei Kernel modules: mei 00:1a.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #2 (rev 04) (prog-if 20 [EHCI]) Subsystem: Lenovo Device 3977 Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin A routed to IRQ 16 Region 0: Memory at e0619000 (32-bit, non-prefetchable) [size=1K] Capabilities: <access denied> Kernel driver in use: ehci_hcd 00:1b.0 Audio device: Intel Corporation Panther Point High Definition Audio Controller (rev 04) Subsystem: Lenovo Device 3977 Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+ Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- 
>SERR- <PERR- INTx- Latency: 0, Cache Line Size: 64 bytes Interrupt: pin A routed to IRQ 44 Region 0: Memory at e0610000 (64-bit, non-prefetchable) [size=16K] Capabilities: <access denied> Kernel driver in use: snd_hda_intel Kernel modules: snd-hda-intel 00:1c.0 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 1 (rev c4) (prog-if 00 [Normal decode]) Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0, Cache Line Size: 64 bytes Bus: primary=00, secondary=01, subordinate=01, sec-latency=0 I/O behind bridge: 00002000-00002fff Memory behind bridge: e0500000-e05fffff Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR- BridgeCtl: Parity- SERR- NoISA- VGA- MAbort- >Reset- FastB2B- PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn- Capabilities: <access denied> Kernel driver in use: pcieport Kernel modules: shpchp 00:1c.1 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 2 (rev c4) (prog-if 00 [Normal decode]) Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0, Cache Line Size: 64 bytes Bus: primary=00, secondary=02, subordinate=02, sec-latency=0 Memory behind bridge: e0400000-e04fffff Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR- BridgeCtl: Parity- SERR- NoISA- VGA- MAbort- >Reset- FastB2B- PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn- Capabilities: <access denied> Kernel driver in use: pcieport Kernel modules: shpchp 00:1d.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #1 (rev 04) (prog-if 20 [EHCI]) Subsystem: Lenovo Device 3977 Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin A routed to IRQ 23 Region 0: Memory at e0618000 (32-bit, non-prefetchable) [size=1K] Capabilities: <access denied> Kernel driver in use: ehci_hcd 00:1f.0 ISA bridge: Intel Corporation Panther Point LPC Controller (rev 04) Subsystem: Lenovo Device 3977 Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Capabilities: <access denied> Kernel modules: iTCO_wdt 00:1f.2 SATA controller: Intel Corporation Panther Point 6 port SATA Controller [AHCI mode] (rev 04) (prog-if 01 [AHCI 1.0]) Subsystem: Lenovo Device 3977 Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+ Status: Cap+ 66MHz+ UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin B routed to IRQ 40 Region 0: I/O ports at 3088 [size=8] Region 1: I/O ports at 3094 [size=4] Region 2: I/O ports at 3080 [size=8] Region 3: I/O ports at 3090 [size=4] Region 4: I/O ports at 3060 [size=32] Region 5: Memory at e0617000 (32-bit, non-prefetchable) [size=2K] Capabilities: <access denied> Kernel driver in use: ahci 00:1f.3 SMBus: Intel Corporation Panther Point SMBus Controller (rev 04) Subsystem: Lenovo Device 3977 Control: I/O+ Mem+ BusMaster- SpecCycle- 
MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Interrupt: pin C routed to IRQ 10 Region 0: Memory at e0615000 (64-bit, non-prefetchable) [size=256] Region 4: I/O ports at 3040 [size=32] Kernel modules: i2c-i801 01:00.0 Ethernet controller: Atheros Communications Inc. AR8162 Fast Ethernet (rev 08) Subsystem: Lenovo Device 3979 Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0, Cache Line Size: 64 bytes Interrupt: pin A routed to IRQ 11 Region 0: Memory at e0500000 (64-bit, non-prefetchable) [size=256K] Region 2: I/O ports at 2000 [size=128] Capabilities: <access denied> 02:00.0 Network controller: Atheros Communications Inc. AR9285 Wireless Network Adapter (PCI-Express) (rev 01) Subsystem: Lenovo Device 31a1 Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0, Cache Line Size: 64 bytes Interrupt: pin A routed to IRQ 17 Region 0: Memory at e0400000 (64-bit, non-prefetchable) [size=64K] Capabilities: <access denied> Kernel driver in use: ath9k Kernel modules: ath9k aplay -l **** List of PLAYBACK Hardware Devices **** card 0: PCH [HDA Intel PCH], device 0: CONEXANT Analog [CONEXANT Analog] Subdevices: 1/1 Subdevice #0: subdevice #0 card 0: PCH [HDA Intel PCH], device 3: HDMI 0 [HDMI 0] Subdevices: 1/1 Subdevice #0: subdevice #0


  • How do I make a jumping dolphin rotate realistically?

    - by Johnny
    I want to program a dolphin that jumps and rotates like a real dolphin. Jumping is not the problem, but I don't know how to make the rotation. At the moment, my dolphin rotates a little weirdly, but I want it to rotate the way a real dolphin does. How can I improve the rotation?

        public class Game1 : Microsoft.Xna.Framework.Game {
            GraphicsDeviceManager graphics;
            SpriteBatch spriteBatch;
            Texture2D image, water;
            float Gravity = 5.0F;
            float Acceleration = 20.0F;
            Vector2 Position = new Vector2(1200, 720);
            Vector2 Velocity;
            float rotation = 0;
            SpriteEffects flip;
            Vector2 Speed = new Vector2(0, 0);

            public Game1() {
                graphics = new GraphicsDeviceManager(this);
                Content.RootDirectory = "Content";
                graphics.PreferredBackBufferWidth = 1280;
                graphics.PreferredBackBufferHeight = 720;
            }

            protected override void Initialize() {
                base.Initialize();
            }

            protected override void LoadContent() {
                spriteBatch = new SpriteBatch(GraphicsDevice);
                image = Content.Load<Texture2D>("cartoondolphin");
                water = Content.Load<Texture2D>("background");
                flip = SpriteEffects.None;
            }

            protected override void Update(GameTime gameTime) {
                float VelocityX = 0f;
                float VelocityY = 0f;
                float time = (float)gameTime.ElapsedGameTime.TotalSeconds;
                KeyboardState kbState = Keyboard.GetState();

                if (kbState.IsKeyDown(Keys.Left)) {
                    rotation = 0;
                    flip = SpriteEffects.None;
                    VelocityX += -5f;
                }
                if (kbState.IsKeyDown(Keys.Right)) {
                    rotation = 0;
                    flip = SpriteEffects.FlipHorizontally;
                    VelocityX += 5f;
                }

                // jump if the dolphin is under water
                if (Position.Y >= 670) {
                    if (kbState.IsKeyDown(Keys.A)) {
                        if (flip == SpriteEffects.None) {
                            rotation += 0.01f;
                            VelocityY += 40f;
                        } else {
                            rotation -= 0.01f;
                            VelocityY += 40f;
                        }
                    }
                } else {
                    if (flip == SpriteEffects.None) {
                        rotation -= 0.01f;
                        VelocityY += -10f;
                    } else {
                        rotation += 0.01f;
                        VelocityY += -10f;
                    }
                }

                float deltaY = 0;
                float deltaX = 0;
                deltaY = Gravity * (float)gameTime.ElapsedGameTime.TotalSeconds;
                deltaX += VelocityX * (float)gameTime.ElapsedGameTime.TotalSeconds * Acceleration;
                deltaY += -VelocityY * (float)gameTime.ElapsedGameTime.TotalSeconds * Acceleration;
                Speed = new Vector2(Speed.X + deltaX, Speed.Y + deltaY);
                Position += Speed * (float)gameTime.ElapsedGameTime.TotalSeconds;
                Velocity.X = 0;

                if (Position.Y + image.Height/2 > graphics.PreferredBackBufferHeight)
                    Position.Y = graphics.PreferredBackBufferHeight - image.Height/2;

                base.Update(gameTime);
            }

            protected override void Draw(GameTime gameTime) {
                GraphicsDevice.Clear(Color.CornflowerBlue);
                spriteBatch.Begin();
                spriteBatch.Draw(water, new Rectangle(0, graphics.PreferredBackBufferHeight - 100, graphics.PreferredBackBufferWidth, 100), Color.White);
                spriteBatch.Draw(image, Position, null, Color.White, rotation, new Vector2(image.Width / 2, image.Height / 2), 1, flip, 1);
                spriteBatch.End();
                base.Draw(gameTime);
            }
        }

    I changed my code a little, but I still have some trouble with the rotation. Here's the entire code. The dolphin looks in the wrong direction if I press the left or right key; for example, it looks down if I press the left key. What is wrong with the rotation? At the beginning the dolphin looks to the left side, but after I press a key it just looks down or up. I deleted the "rotation += 0.01f;" lines in the code. Is that correct?
        public class Game1 : Microsoft.Xna.Framework.Game {
            GraphicsDeviceManager graphics;
            SpriteBatch spriteBatch;
            Texture2D image, water;
            float Gravity = 5.0F;
            float Acceleration = 20.0F;
            Vector2 Position = new Vector2(1200, 720);
            Vector2 Velocity;
            float rotation = 0;
            SpriteEffects flip;
            Vector2 Speed = new Vector2(0, 0);
            Vector2 prevPos;

            public Game1() {
                graphics = new GraphicsDeviceManager(this);
                Content.RootDirectory = "Content";
                graphics.PreferredBackBufferWidth = 1280;
                graphics.PreferredBackBufferHeight = 720;
            }

            protected override void Initialize() {
                base.Initialize();
            }

            protected override void LoadContent() {
                spriteBatch = new SpriteBatch(GraphicsDevice);
                image = Content.Load<Texture2D>("cartoondolphin");
                water = Content.Load<Texture2D>("background");
                flip = SpriteEffects.None;
            }

            protected override void Update(GameTime gameTime) {
                float VelocityX = 0f;
                float VelocityY = 0f;
                float time = (float)gameTime.ElapsedGameTime.TotalSeconds;
                KeyboardState kbState = Keyboard.GetState();

                if (kbState.IsKeyDown(Keys.Left)) {
                    flip = SpriteEffects.None;
                    VelocityX += -5f;
                }
                if (kbState.IsKeyDown(Keys.Right)) {
                    flip = SpriteEffects.FlipHorizontally;
                    VelocityX += 5f;
                }

                rotation = (float)Math.Atan2(Position.X - prevPos.X, Position.Y - prevPos.Y);
                prevPos = Position;

                // jump if the dolphin is under water
                if (Position.Y >= 670) {
                    if (kbState.IsKeyDown(Keys.A)) {
                        if (flip == SpriteEffects.None) {
                            VelocityY += 40f;
                        } else {
                            VelocityY += 40f;
                        }
                    }
                } else {
                    if (flip == SpriteEffects.None) {
                        VelocityY += -10f;
                    } else {
                        VelocityY += -10f;
                    }
                }

                float deltaY = 0;
                float deltaX = 0;
                deltaY = Gravity * (float)gameTime.ElapsedGameTime.TotalSeconds;
                deltaX += VelocityX * (float)gameTime.ElapsedGameTime.TotalSeconds * Acceleration;
                deltaY += -VelocityY * (float)gameTime.ElapsedGameTime.TotalSeconds * Acceleration;
                Speed = new Vector2(Speed.X + deltaX, Speed.Y + deltaY);
                Position += Speed * (float)gameTime.ElapsedGameTime.TotalSeconds;
                Velocity.X = 0;

                if (Position.Y + image.Height/2 > graphics.PreferredBackBufferHeight)
                    Position.Y = graphics.PreferredBackBufferHeight - image.Height/2;

                base.Update(gameTime);
            }

            protected override void Draw(GameTime gameTime) {
                GraphicsDevice.Clear(Color.CornflowerBlue);
                spriteBatch.Begin();
                spriteBatch.Draw(water, new Rectangle(0, graphics.PreferredBackBufferHeight - 100, graphics.PreferredBackBufferWidth, 100), Color.White);
                spriteBatch.Draw(image, Position, null, Color.White, rotation, new Vector2(image.Width / 2, image.Height / 2), 1, flip, 1);
                spriteBatch.End();
                base.Draw(gameTime);
            }
        }
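    One likely culprit, hedged: Math.Atan2 takes (y, x), while the revised code passes the delta's X first, and the result is only meaningful when the dolphin has actually moved. A sketch, assuming the artwork faces left at a rotation of 0:

        // Face along the movement direction; note the (y, x) argument order.
        Vector2 delta = Position - prevPos;
        if (delta.LengthSquared() > 0.0001f)
        {
            rotation = (float)Math.Atan2(delta.Y, delta.X);
            if (flip == SpriteEffects.None)
                rotation += MathHelper.Pi;   // assumption: sprite drawn facing left
        }
        prevPos = Position;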


  • How to better create stacked bar graphs with multiple variables from ggplot2?

    - by deoksu
    I often have to make stacked barplots to compare variables, and because I do all my stats in R, I prefer to do all my graphics in R with ggplot2. I would like to learn how to do two things. First, I would like to be able to add proper percentage tick marks for each variable rather than tick marks by count. Counts would be confusing, which is why I take out the axis labels completely. Second, there must be a simpler way to reorganize my data to make this happen. It seems like the sort of thing I should be able to do natively in ggplot2 with plyR, but the documentation for plyR is not very clear (and I have read both the ggplot2 book and the online plyR documentation). My best graph looks like this; the R code I use to get it is the following:

        library(epicalc)

        ### recode the variables to factors ###
        recode(c(int_newcoun, int_newneigh, int_neweur, int_newusa, int_neweco, int_newit,
                 int_newen, int_newsp, int_newhr, int_newlit, int_newent, int_newrel,
                 int_newhth, int_bapo, int_wopo, int_eupo, int_educ),
               c(1,2,3,4,5,6,7,8,9, NA),
               c('Very Interested','Somewhat Interested','Not Very Interested',
                 'Not At All interested',NA,NA,NA,NA,NA,NA))

        ### Combine recoded variables to a common vector
        Interest1 <- c(int_newcoun, int_newneigh, int_neweur, int_newusa, int_neweco, int_newit,
                       int_newen, int_newsp, int_newhr, int_newlit, int_newent, int_newrel,
                       int_newhth, int_bapo, int_wopo, int_eupo, int_educ)

        ### Create a second vector to label the first vector by original variable ###
        a1 <- rep("News about Bangladesh", length(int_newcoun))
        a2 <- rep("Neighboring Countries", length(int_newneigh))
        [...]
        a17 <- rep("Education", length(int_educ))
        Interest2 <- c(a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14, a15, a16, a17)

        ### Create a Weighting vector of the proper length ###
        Interest.weight <- rep(weight, 17)

        ### Make and save a new data frame from the three vectors ###
        Interest.df <- cbind(Interest1, Interest2, Interest.weight)
        Interest.df <- as.data.frame(Interest.df)
        write.csv(Interest.df, 'C:\\Documents and Settings\\[name]\\Desktop\\Sweave\\InterestBangladesh.csv')

        ### Sort the factor levels to display properly ###
        Interest.df$Interest1 <- relevel(Interest$Interest1, ref='Not Very Interested')
        Interest.df$Interest1 <- relevel(Interest$Interest1, ref='Somewhat Interested')
        Interest.df$Interest1 <- relevel(Interest$Interest1, ref='Very Interested')
        Interest.df$Interest2 <- relevel(Interest$Interest2, ref='News about Bangladesh')
        Interest.df$Interest2 <- relevel(Interest$Interest2, ref='Education')
        [...]
        Interest.df$Interest2 <- relevel(Interest$Interest2, ref='European Politics')
        detach(Interest)
        attach(Interest)

        ### Finally create the graph in ggplot2 ###
        library(ggplot2)
        p <- ggplot(Interest, aes(Interest2, ..count..))
        p <- p + geom_bar((aes(weight=Interest.weight, fill=Interest1)))
        p <- p + coord_flip()
        p <- p + scale_y_continuous("", breaks=NA)
        p <- p + scale_fill_manual(value = rev(brewer.pal(5, "Purples")))
        p
        update_labels(p, list(fill='', x='', y=''))

    I'd very much appreciate any tips, tricks or hints. Thanks.
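    A hedged sketch of the usual shortcut (it ignores the survey weights for brevity, and assumes a wide data frame named survey with one column per question): reshape with reshape2's melt instead of hand-building vectors, and let geom_bar(position = "fill") normalise each bar to 1 so the axis can carry percentage labels.

        library(reshape2)
        library(ggplot2)
        library(scales)

        long <- melt(survey, measure.vars = interest_columns,
                     variable.name = "Question", value.name = "Interest")

        ggplot(long, aes(x = Question, fill = Interest)) +
          geom_bar(position = "fill") +           # each bar sums to 1
          scale_y_continuous(labels = percent) +  # 0%..100% tick marks
          coord_flip()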


  • Java Hardware Acceleration

    - by Freezerburn
    I have been spending some time looking into the hardware acceleration features of Java, and I am still a bit confused, as none of the sites I found online directly and clearly answered some of the questions I have. So here are my questions about hardware acceleration in Java:

    1) In Eclipse version 3.6.0, with the most recent Java update for Mac OS X (1.6u10, I think), is hardware acceleration enabled by default? I read somewhere that someCanvas.getGraphicsConfiguration().getBufferCapabilities().isPageFlipping() is supposed to indicate whether hardware acceleration is enabled, and my program reports back true when that is run on my main Canvas instance for drawing. If my hardware acceleration is not enabled now, or by default, what would I have to do to enable it?

    2) I have seen a couple of articles here and there about the difference between BufferedImage and VolatileImage, mainly saying that VolatileImage is the hardware-accelerated image and is stored in VRAM for fast copy-from operations. However, I have also found some instances where BufferedImage is said to be hardware accelerated as well. Is BufferedImage hardware accelerated in my environment too? What would be the advantage of using a VolatileImage if both types are hardware accelerated? My main assumption for the advantage of a VolatileImage, if both are accelerated, is that a VolatileImage is able to detect when its VRAM has been dumped. But if BufferedImage also supports acceleration now, would it not have the same kind of detection built in as well, just hidden from the user, in case the memory is dumped?

    3) Is there any advantage to using someGraphicsConfiguration.getCompatibleImage/getCompatibleVolatileImage() as opposed to ImageIO.read()? In a tutorial I have been reading about setting up the rendering window properly (tutorial), it uses the getCompatibleImage method, which I believe returns a BufferedImage, to get its "hardware accelerated" images for fast drawing, which ties into question 2 about whether it is hardware accelerated.

    4) This is less about hardware acceleration, but something I have been curious about: do I need to order which graphics get drawn? I know that when using OpenGL via C/C++ it is best to make sure the same graphic is drawn in all the locations it needs to be drawn at once, to reduce the number of times the current texture needs to be switched. From what I have read, it seems as if Java will take care of this for me and make sure things are drawn in the most optimal fashion, but again, nothing has ever said anything like this clearly.

    5) What AWT/Swing classes support hardware acceleration, and which ones should be used? I am currently using a class that extends JFrame to create a window, and adding a Canvas to it from which I create a BufferStrategy. Is this good practice, or is there some other way I should be implementing this?

    Thank you very much for your time, and I hope I provided clear questions and enough information for you to answer my several questions.
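    On question 2, a hedged sketch of the one thing VolatileImage demands that BufferedImage hides: explicitly checking for and recovering from surface loss around every use. This is the canonical validate/contentsLost loop from the java.awt.image API:

        GraphicsConfiguration gc = canvas.getGraphicsConfiguration();
        VolatileImage img = gc.createCompatibleVolatileImage(width, height);

        do {
            // Recreate the surface if its VRAM was reclaimed (e.g. display mode change).
            if (img.validate(gc) == VolatileImage.IMAGE_INCOMPATIBLE) {
                img = gc.createCompatibleVolatileImage(width, height);
            }
            Graphics2D g = img.createGraphics();
            try {
                render(g);               // draw the frame (hypothetical method)
            } finally {
                g.dispose();
            }
            screenGraphics.drawImage(img, 0, 0, null);
        } while (img.contentsLost());    // redo the frame if contents were lost mid-draw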


  • How do I destruct data associated with an object after the object no longer exists?

    - by Phineas
    I'm creating a class (say, C) that associates data (say, D) with an object (say, O). When O is destructed, O will notify C that it will soon no longer exist :( ... Later, when C feels it is the right time, C will let go of what belonged to O, namely D. If D can be any type of object, what's the best way for C to be able to execute "delete D;"? And what if D is an array of objects? My solution is to have D derive from a base class that C has knowledge of. When the time comes, C calls delete on a pointer to the base class. I've also considered storing void pointers and calling delete, but I found out that's undefined behavior and doesn't call D's destructor. I considered that templates could be a novel solution, but I couldn't work that idea out. Here's what I have so far for C, minus some details:

        // This class is C in the above description. There may be many instances of C.
        class Context {
        public:
            // D will inherit from this class
            class Data {
            public:
                virtual ~Data() {}
            };

            Context();
            ~Context();

            // Associates an owner (O) with its data (D)
            void add(const void* owner, Data* data);

            // O calls this when he knows it's the end (O's destructor).
            // All instances of C are now aware that O is gone and it's time to get rid
            // of all associated instances of D.
            static void purge(const void* owner);

            // This is called periodically in the application. It checks whether
            // O has called purge, and calls "delete D;"
            void refresh();

            // Side note: sometimes O needs access to D
            Data* get(const void* owner);

        private:
            // Used for mapping owners (O) to data (D)
            std::map<const void*, Data*> _data;
        };

        // Here's an example of O
        class Mesh {
        public:
            ~Mesh() { Context::purge(this); }

            void init(Context& c) const {
                Data* data = new Data;
                // GL initialization here
                c.add(this, new Data);
            }

            void render(Context& c) const {
                Data* data = c.get(this);
            }

        private:
            // And here's an example of D
            struct Data : public Context::Data {
                ~Data() {
                    glDeleteBuffers(1, &vbo);
                    glDeleteTextures(1, &texture);
                }
                GLint vbo;
                GLint texture;
            };
        };

    P.S. If you're familiar with computer graphics and VR, I'm creating a class that separates an object's per-context data (e.g. OpenGL VBO IDs) from its per-application data (e.g. an array of vertices) and frees the per-context data at the appropriate time (when the matching rendering context is current).
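    The virtual-base approach in the question is sound; a hedged alternative that needs no base class is type erasure: capture the concrete type in a templated add() and store a deleter alongside the pointer, so C can run the right destructor without ever knowing D's type. A sketch with C++11 lambdas (purge made an instance method for brevity):

        #include <functional>
        #include <map>

        class Context {
        public:
            template <typename D>
            void add(const void* owner, D* data) {
                // The lambda remembers D's concrete type, so the correct destructor runs.
                _data[owner] = Entry{data, [data]() { delete data; }};
            }

            void purge(const void* owner) {
                auto it = _data.find(owner);
                if (it != _data.end()) {
                    it->second.deleter();
                    _data.erase(it);
                }
            }

            void* get(const void* owner) {
                auto it = _data.find(owner);
                return it == _data.end() ? nullptr : it->second.ptr;
            }

        private:
            struct Entry {
                void* ptr;
                std::function<void()> deleter;
            };
            std::map<const void*, Entry> _data;
        };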


  • Spinning a circle in J2ME using a Canvas.

    - by JohnQPublic
    Hello all! I have a problem where I need to make a multi-colored wheel spin using a Canvas in J2ME. What I need to do is have the user increase the speed of the spin or slow the spin of the wheel. I have it mostly worked out (I think) but can't think of a way for the wheel to spin without causing my cellphone to crash. Here is what I have so far; it's close but not exactly what I need.

        class MyCanvas extends Canvas {
            // wedgeOne/Two/Three define where this particular section of circle
            // begins to be drawn from
            int wedgeOne;
            int wedgeTwo;
            int wedgeThree;
            int spinSpeed;

            MyCanvas() {
                wedgeOne = 0;
                wedgeTwo = 120;
                wedgeThree = 240;
                spinSpeed = 0;
            }

            // Using the paint method to
            public void paint(Graphics g) {
                // Redraw the circle with the current wedge series.
                g.setColor(255, 0, 0);
                g.fillArc(getWidth()/2, getHeight()/2, 100, 100, wedgeOne, 120);
                g.setColor(0, 255, 0);
                g.fillArc(getWidth()/2, getHeight()/2, 100, 100, wedgeTwo, 120);
                g.setColor(0, 0, 255);
                g.fillArc(getWidth()/2, getHeight()/2, 100, 100, wedgeThree, 120);
            }

            protected void keyPressed(int keyCode) {
                switch (keyCode) {
                // When the 6 button is pressed, the wheel spins forward 5 degrees.
                case KEY_NUM6:
                    wedgeOne += 5;
                    wedgeTwo += 5;
                    wedgeThree += 5;
                    repaint();
                    break;
                // When the 4 button is pressed, the wheel spins backwards 5 degrees.
                case KEY_NUM4:
                    wedgeOne -= 5;
                    wedgeTwo -= 5;
                    wedgeThree -= 5;
                    repaint();
                }
            }
        }

    I have tried using a redraw() method that adds spinSpeed to each of the wedge values while (spinSpeed > 0) and calls repaint() after the addition, but it causes a crash and lockup (I assume due to an infinite loop). Does anyone have any tips or ideas for how I could automate the spin so you do not have to press the button every time you want it to spin? (P.S. I have been lurking for a while, but this is my first post. If it's too general or asking for too much info (sorry if it is), I'll either remove it or fix it. Thank you!)
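    A hedged sketch of the usual J2ME pattern: never loop inside a key handler (that starves the UI thread, which is the likely lockup); instead let a java.util.Timer, available on CLDC 1.1/MIDP 2.0, advance the angle and request a repaint at a fixed rate, while the keys only adjust spinSpeed. MIDP's repaint() just queues an asynchronous paint request, so calling it from the timer thread should be safe.

        import java.util.Timer;
        import java.util.TimerTask;

        // Inside MyCanvas: advance the wheel roughly 20 times per second.
        Timer timer = new Timer();
        timer.scheduleAtFixedRate(new TimerTask() {
            public void run() {
                if (spinSpeed != 0) {
                    wedgeOne += spinSpeed;
                    wedgeTwo += spinSpeed;
                    wedgeThree += spinSpeed;
                    repaint();
                }
            }
        }, 0, 50);

        // Key handling then just nudges the speed:
        //   KEY_NUM6 -> spinSpeed++;   KEY_NUM4 -> spinSpeed--;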


  • Create a Color Picker, similar to Photoshop's, using Javascript and HTML Canvas

    - by André Alçada Padez
    I am not at all versed in computer graphics and need to create a color picker as a JavaScript tool to embed in an HTML page. First, looking at Photoshop's one, I thought of the RGB palette as a three-dimensional matrix. My first attempt involved:

        <script type="text/javascript">
            var rgCanvas = document.createElement('canvas');
            rgCanvas.width = 256;
            rgCanvas.height = 256;
            rgCanvas.style.border = '3px solid black';

            for (g = 0; g < 256; g++){
                for (r = 0; r < 256; r++){
                    var context = rgCanvas.getContext('2d');
                    context.beginPath();
                    context.moveTo(r, g);
                    context.strokeStyle = 'rgb(' + r + ', ' + g + ', 0)';
                    context.lineTo(r + 1, g + 1);
                    context.stroke();
                    context.closePath();
                }
            }

            var bCanvas = document.createElement('canvas');
            bCanvas.width = 20;
            bCanvas.height = 256;
            bCanvas.style.border = '3px solid black';

            for (b = 0; b < 256; b++){
                var context = bCanvas.getContext('2d');
                context.beginPath();
                context.moveTo(0, b);
                context.strokeStyle = 'rgb(' + 0 + ', ' + 0 + ', ' + b + ')';
                context.lineTo(20, b);
                context.stroke();
                context.closePath();
            }

            document.body.appendChild(rgCanvas);
            document.body.appendChild(bCanvas);
        </script>

    This results in something like (image omitted). My thought is this is too linear compared to the pickers I see in Photoshop and on the web. I would like to know the logic behind the color mapping in a picker like this (image omitted). I don't really need the algorithms themselves; I'm mainly trying to understand the logic. Thanks
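    The logic behind the Photoshop-style picker is HSV rather than raw RGB: the narrow strip sweeps hue, and the big square holds hue fixed while x sweeps saturation and y sweeps value (brightness). A hedged sketch of the square's mapping, using the standard HSV-to-RGB conversion (currentHue is an assumption):

        // Convert HSV to RGB: h in [0,360), s and v in [0,1].
        function hsvToRgb(h, s, v) {
          var c = v * s;
          var x = c * (1 - Math.abs((h / 60) % 2 - 1));
          var m = v - c;
          var rgb = h < 60 ? [c, x, 0] : h < 120 ? [x, c, 0] : h < 180 ? [0, c, x]
                  : h < 240 ? [0, x, c] : h < 300 ? [x, 0, c] : [c, 0, x];
          return rgb.map(function (ch) { return Math.round((ch + m) * 255); });
        }

        // Fill the 256x256 square: x = saturation, y = value, hue held fixed.
        for (var y = 0; y < 256; y++) {
          for (var x = 0; x < 256; x++) {
            var rgb = hsvToRgb(currentHue, x / 255, 1 - y / 255);
            context.fillStyle = 'rgb(' + rgb[0] + ',' + rgb[1] + ',' + rgb[2] + ')';
            context.fillRect(x, y, 1, 1);
          }
        }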


  • William C. Lowe, the creator of the IBM PC, has died; his machine helped fuel the rise of Microsoft and Intel

    The creator of the PC has died: a device that helped fuel the rise of Microsoft and Intel. William C. Lowe, the father of the PC, passed away on October 19, 2013, following a heart attack. What this great man is remembered for is that he was the one who gave IBM its very first personal computer aimed at the general public. In the 1970s, Big Blue held a near monopoly on building mainframes for large companies and governments, but a market seemed...


  • Why did the old 3D games have "jittery" graphics?

    - by dreta
    I've been playing MediEvil lately and it got me wondering: what causes some of the old 3D games to have "flowing" graphics when moving? It's present in games like Final Fantasy VII and MediEvil, and I remember Dungeon Keeper 2 having the same thing in zoom mode; however, Quake 2, for example, didn't have this "issue", and it's just as old. The resolution doesn't seem to be the problem: everything is rendered perfectly fine when you stand still. So is the game refreshing slowly, or is it something to do with buffering?


  • Intel launches an app-development contest for its RealSense 3D camera, with more than one million US dollars to be shared

    Intel launches an app-development contest for its RealSense 3D camera, with more than one million US dollars to be shared. Update of 18/08/2014: Intel announces the 2014 edition of its Perceptual Computing App Challenge, built around its RealSense technology. As a reminder, its RealSense 3D camera was presented as the company's first "perceptual computing" product and includes, among other things, facial and voice recognition and background removal,...

