Search Results

Search found 8165 results on 327 pages for '3d graphics'.

Page 101/327 | < Previous Page | 97 98 99 100 101 102 103 104 105 106 107 108  | Next Page >

  • Screenshot of the Nexus One from adb?

    - by Marcus
    My goal is to be able to type a one-word command and get a screenshot from a rooted Nexus One attached by USB. So far, I can get the framebuffer, which I believe is a 32-bit xRGB888 raw image, by pulling it like this:

      adb pull /dev/graphics/fb0 fb0

    From there, though, I'm having a hard time getting it converted to a PNG. I'm trying with ffmpeg like this:

      ffmpeg -vframes 1 -vcodec rawvideo -f rawvideo -pix_fmt rgb8888 -s 480x800 -i fb0 -f image2 -vcodec png image.png

    That creates a lovely purple image that has parts that vaguely resemble the screen, but it's by no means a clean screenshot.
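
    A possible fix, sketched below: rgb8888 is not a pixel format ffmpeg recognizes, which by itself can produce the purple garbage. The exact 32-bit byte order varies by device and ROM, so the formats in the comments are candidates to cycle through rather than a guaranteed answer.

      # pull the raw framebuffer, then decode it as generic 32-bit RGB
      adb pull /dev/graphics/fb0 fb0
      ffmpeg -f rawvideo -pix_fmt rgb32 -s 480x800 -i fb0 -vframes 1 image.png
      # if the colors come out swapped, try -pix_fmt bgra or rgba instead;
      # for a 16-bit framebuffer, try -pix_fmt rgb565le with the same command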

    Read the article

  • How do I implement AABB ray cast hit checking for OpenGL ES on the iPhone?

    - by Big Fizzy
    Basically, I draw a 3D cube and I can spin it around, but I want to be able to touch it and know where on the cube's surface the user touched. The code I'm using for setting up, generating and spinning is based on the Molecules code and NeHe tutorial #5. Any help, links, tutorials and code would be greatly appreciated. I have lots of development experience, but nothing much in the way of OpenGL and 3D.

      //
      //  GLViewController.h
      //  NeHe Lesson 05
      //
      //  Created by Jeff LaMarche on 12/12/08.
      //  Copyright Jeff LaMarche Consulting 2008. All rights reserved.
      //
      #import "GLViewController.h"
      #import "GLView.h"

      @implementation GLViewController

      - (void)drawBox
      {
          static const GLfloat cubeVertices[] = {
              -1.0f,  1.0f,  1.0f,
               1.0f,  1.0f,  1.0f,
               1.0f, -1.0f,  1.0f,
              -1.0f, -1.0f,  1.0f,
              -1.0f,  1.0f, -1.0f,
               1.0f,  1.0f, -1.0f,
               1.0f, -1.0f, -1.0f,
              -1.0f, -1.0f, -1.0f
          };

          static const GLubyte cubeNumberOfIndices = 36;

          const GLubyte cubeVertexFaces[] = {
              0, 1, 5,    // Half of top face
              0, 5, 4,    // Other half of top face
              4, 6, 5,    // Half of front face
              4, 6, 7,    // Other half of front face
              0, 1, 2,    // Half of back face
              0, 3, 2,    // Other half of back face
              1, 2, 5,    // Half of right face
              2, 5, 6,    // Other half of right face
              0, 3, 4,    // Half of left face
              7, 4, 3,    // Other half of left face
              3, 6, 2,    // Half of bottom face
              6, 7, 3,    // Other half of bottom face
          };

          const GLubyte cubeFaceColors[] = {
              0,   255, 0,   255,
              255, 125, 0,   255,
              255, 0,   0,   255,
              255, 255, 0,   255,
              0,   0,   255, 255,
              255, 0,   255, 255
          };

          glEnableClientState(GL_VERTEX_ARRAY);
          glVertexPointer(3, GL_FLOAT, 0, cubeVertices);

          int colorIndex = 0;
          for (int i = 0; i < cubeNumberOfIndices; i += 3)
          {
              glColor4ub(cubeFaceColors[colorIndex], cubeFaceColors[colorIndex+1],
                         cubeFaceColors[colorIndex+2], cubeFaceColors[colorIndex+3]);
              int face = (i / 3.0);
              if (face % 2 != 0.0)
                  colorIndex += 4;
              glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_BYTE, &cubeVertexFaces[i]);
          }

          glDisableClientState(GL_VERTEX_ARRAY);
      }

      // move this to a data model later!
      - (GLfixed)floatToFixed:(GLfloat)aValue;
      {
          return (GLfixed) (aValue * 65536.0f);
      }

      - (void)drawViewByRotatingAroundX:(float)xRotation
                        rotatingAroundY:(float)yRotation
                                scaling:(float)scaleFactor
                         translationInX:(float)xTranslation
                         translationInY:(float)yTranslation
                                   view:(GLView*)view;
      {
          glMatrixMode(GL_MODELVIEW);

          GLfixed currentModelViewMatrix[16] = {
               45146, 47441,   2485, 0,
              -25149, 26775, -54274, 0,
              -40303, 36435,  36650, 0,
                   0,     0,      0, 65536
          };
          /*
          GLfixed currentModelViewMatrix[16] = {
              0, 0, 0, 0,
              0, 0, 0, 0,
              0, 0, 0, 0,
              0, 0, 0, 65536
          };
          */

          //glLoadIdentity();
          //glOrthof(-1.0f, 1.0f, -1.5f, 1.5f, -10.0f, 4.0f);

          // Reset rotation system
          if (isFirstDrawing)
          {
              //glLoadIdentity();
              glMultMatrixx(currentModelViewMatrix);
              [self configureLighting];
              isFirstDrawing = NO;
          }

          // Scale the view to fit current multitouch scaling
          GLfixed fixedPointScaleFactor = [self floatToFixed:scaleFactor];
          glScalex(fixedPointScaleFactor, fixedPointScaleFactor, fixedPointScaleFactor);

          // Perform incremental rotation based on current angles in X and Y
          glGetFixedv(GL_MODELVIEW_MATRIX, currentModelViewMatrix);
          GLfloat totalRotation = sqrt(xRotation*xRotation + yRotation*yRotation);
          glRotatex([self floatToFixed:totalRotation],
                    (GLfixed)((xRotation/totalRotation) * (GLfloat)currentModelViewMatrix[1] +
                              (yRotation/totalRotation) * (GLfloat)currentModelViewMatrix[0]),
                    (GLfixed)((xRotation/totalRotation) * (GLfloat)currentModelViewMatrix[5] +
                              (yRotation/totalRotation) * (GLfloat)currentModelViewMatrix[4]),
                    (GLfixed)((xRotation/totalRotation) * (GLfloat)currentModelViewMatrix[9] +
                              (yRotation/totalRotation) * (GLfloat)currentModelViewMatrix[8]));

          // Translate the model by the accumulated amount
          glGetFixedv(GL_MODELVIEW_MATRIX, currentModelViewMatrix);
          float currentScaleFactor = sqrt(pow((GLfloat)currentModelViewMatrix[0] / 65536.0f, 2.0f) +
                                          pow((GLfloat)currentModelViewMatrix[1] / 65536.0f, 2.0f) +
                                          pow((GLfloat)currentModelViewMatrix[2] / 65536.0f, 2.0f));
          xTranslation = xTranslation / (currentScaleFactor * currentScaleFactor);
          yTranslation = yTranslation / (currentScaleFactor * currentScaleFactor);

          // Grab the current model matrix, and use the (0,4,8) components to figure
          // the eye's X axis in the model coordinate system, translate along that
          glTranslatef(xTranslation * (GLfloat)currentModelViewMatrix[0] / 65536.0f,
                       xTranslation * (GLfloat)currentModelViewMatrix[4] / 65536.0f,
                       xTranslation * (GLfloat)currentModelViewMatrix[8] / 65536.0f);

          // Grab the current model matrix, and use the (1,5,9) components to figure
          // the eye's Y axis in the model coordinate system, translate along that
          glTranslatef(yTranslation * (GLfloat)currentModelViewMatrix[1] / 65536.0f,
                       yTranslation * (GLfloat)currentModelViewMatrix[5] / 65536.0f,
                       yTranslation * (GLfloat)currentModelViewMatrix[9] / 65536.0f);

          // Black background, with depth buffer enabled
          glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
          glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

          [self drawBox];
      }

      - (void)configureLighting;
      {
          const GLfixed lightAmbient[]  = {13107, 13107, 13107, 65535};
          const GLfixed lightDiffuse[]  = {65535, 65535, 65535, 65535};
          const GLfixed matAmbient[]    = {65535, 65535, 65535, 65535};
          const GLfixed matDiffuse[]    = {65535, 65535, 65535, 65535};
          const GLfixed lightPosition[] = {30535, -30535, 0, 0};
          const GLfixed lightShininess  = 20;

          glEnable(GL_LIGHTING);
          glEnable(GL_LIGHT0);
          glEnable(GL_COLOR_MATERIAL);
          glMaterialxv(GL_FRONT_AND_BACK, GL_AMBIENT, matAmbient);
          glMaterialxv(GL_FRONT_AND_BACK, GL_DIFFUSE, matDiffuse);
          glMaterialx(GL_FRONT_AND_BACK, GL_SHININESS, lightShininess);
          glLightxv(GL_LIGHT0, GL_AMBIENT, lightAmbient);
          glLightxv(GL_LIGHT0, GL_DIFFUSE, lightDiffuse);
          glLightxv(GL_LIGHT0, GL_POSITION, lightPosition);
          glEnable(GL_DEPTH_TEST);
          glShadeModel(GL_SMOOTH);
          glEnable(GL_NORMALIZE);
      }

      - (void)setupView:(GLView*)view
      {
          const GLfloat zNear = 0.1, zFar = 1000.0, fieldOfView = 60.0;
          GLfloat size;

          glMatrixMode(GL_PROJECTION);
          glEnable(GL_DEPTH_TEST);
          size = zNear * tanf(DEGREES_TO_RADIANS(fieldOfView) / 2.0);
          CGRect rect = view.bounds;
          glFrustumf(-size, size,
                     -size / (rect.size.width / rect.size.height),
                      size / (rect.size.width / rect.size.height),
                     zNear, zFar);
          glViewport(0, 0, rect.size.width, rect.size.height);
          glScissor(0, 0, rect.size.width, rect.size.height);
          glMatrixMode(GL_MODELVIEW);
          glLoadIdentity();
          glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
          glTranslatef(0.0f, 0.0f, -6.0f);
          isFirstDrawing = YES;
      }

      - (void)didReceiveMemoryWarning
      {
          [super didReceiveMemoryWarning];
      }

      - (void)dealloc
      {
          [super dealloc];
      }

      @end
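
    A minimal sketch of the standard ray/AABB "slab" test, assuming the touch point has already been unprojected into model space (via the inverse of the modelview-projection transform) to get a ray origin and direction. For the unit cube above, boxMin/boxMax would be (-1,-1,-1) and (1,1,1).

      #include <math.h>
      #include <stdbool.h>

      typedef struct { float x, y, z; } Vec3;

      // Returns true if the ray origin + t*dir hits the box; *tHit gets the
      // entry distance (it can be negative if the origin is inside the box).
      bool rayIntersectsAABB(Vec3 origin, Vec3 dir, Vec3 boxMin, Vec3 boxMax, float *tHit)
      {
          float o[3]  = { origin.x, origin.y, origin.z };
          float d[3]  = { dir.x, dir.y, dir.z };
          float lo[3] = { boxMin.x, boxMin.y, boxMin.z };
          float hi[3] = { boxMax.x, boxMax.y, boxMax.z };
          float tMin = -INFINITY, tMax = INFINITY;

          for (int axis = 0; axis < 3; axis++) {
              if (fabsf(d[axis]) < 1e-8f) {
                  // Ray parallel to this slab: miss unless origin lies inside it.
                  if (o[axis] < lo[axis] || o[axis] > hi[axis]) return false;
              } else {
                  float t1 = (lo[axis] - o[axis]) / d[axis];
                  float t2 = (hi[axis] - o[axis]) / d[axis];
                  if (t1 > t2) { float tmp = t1; t1 = t2; t2 = tmp; }
                  if (t1 > tMin) tMin = t1;
                  if (t2 < tMax) tMax = t2;
                  if (tMin > tMax) return false;   // slab intervals no longer overlap
              }
          }
          if (tHit) *tHit = tMin;
          return true;
      }

    The hit point is origin + tMin*dir, which gives the location on the cube's surface; comparing its coordinates against the box faces tells you which face was touched.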

    Read the article

  • iPhone - Drawing 2D Shapes

    - by Hawdon
    Hi guys! I have an array of 2D points which makes an irregular polygon. What I want to do is draw its border and then fill it with a color. I am using Cocos2d to code the game, but I have not found a fill function in Cocos2d, only ccDrawLine and such. Is there a simple way to draw filled shapes in Cocos2d? I have also noted that Core Graphics would work beautifully for this purpose, but I am not able to integrate it with Cocos2d. I put this into the draw function of my CCLayer:

      CGContextRef ctx = UIGraphicsGetCurrentContext();
      CGContextClearRect(ctx, [[UIScreen mainScreen] bounds]);

    And every time I run it I get this error:

      <Error>: CGContextClearRect: invalid context

    I really need to get this working... Any ideas?
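
    One hedged workaround: the invalid-context error happens because -draw runs with no Core Graphics context at all; Cocos2d 1.x renders through OpenGL ES 1.1, so a convex polygon can be filled directly with a triangle fan. Here verts and vertCount are assumed to be the layer's own vertex data (GLfloat x,y pairs).

      // inside the CCLayer's -draw method
      glDisable(GL_TEXTURE_2D);
      glDisableClientState(GL_TEXTURE_COORD_ARRAY);
      glDisableClientState(GL_COLOR_ARRAY);
      glColor4f(0.5f, 0.8f, 0.2f, 1.0f);            // fill color
      glVertexPointer(2, GL_FLOAT, 0, verts);       // GLfloat pairs: x0,y0, x1,y1, ...
      glDrawArrays(GL_TRIANGLE_FAN, 0, vertCount);  // correct only for convex shapes
      // restore cocos2d's default GL state
      glEnableClientState(GL_COLOR_ARRAY);
      glEnableClientState(GL_TEXTURE_COORD_ARRAY);
      glEnable(GL_TEXTURE_2D);

    An irregular (possibly concave) polygon needs to be triangulated first, e.g. by ear clipping, before the fan approach applies.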

    Read the article

  • .NET pie chart: how to add text to slices and rotate chart

    - by Sajee
    The code below creates a 24-slice pie chart. How do I: (1) add text labels to each slice, a la "Wheel of Fortune", and (2) rotate the pie chart? I want it to spin like "Wheel of Fortune".

      private void DrawPieChart()
      {
          Graphics g = this.CreateGraphics();
          g.Clear(this.BackColor);
          Rectangle rect = new Rectangle(0, 0, 300, 300);
          float angle = 0;
          Random random = new Random();
          int sectors = 24;
          int sweep = 360 / sectors;
          for (int i = 0; i < 24; i++)
          {
              Color clr = Color.FromArgb(random.Next(0, 255), random.Next(0, 255), random.Next(0, 255));
              g.FillPie(new SolidBrush(clr), rect, angle, sweep);
              angle += sweep;
          }
          g.Dispose();
      }
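
    A hedged sketch of both pieces: each label sits at its slice's mid-angle, and spinning is just a rotation of the Graphics transform about the pie's center, redrawn from a timer. The 0.35f label radius is an arbitrary choice.

      // label one slice; angle/sweep are in degrees as in DrawPieChart above
      private void DrawSliceLabel(Graphics g, Rectangle rect, float angle, float sweep, string text)
      {
          double mid = (angle + sweep / 2.0) * Math.PI / 180.0;
          float cx = rect.Left + rect.Width / 2f;
          float cy = rect.Top + rect.Height / 2f;
          float r = rect.Width * 0.35f;              // how far out the label sits
          SizeF size = g.MeasureString(text, this.Font);
          g.DrawString(text, this.Font, Brushes.Black,
                       cx + r * (float)Math.Cos(mid) - size.Width / 2f,
                       cy + r * (float)Math.Sin(mid) - size.Height / 2f);
      }

      // to spin: before drawing the pie, rotate the whole surface around the
      // center by an angle that a Timer increments each tick, then repaint:
      //   g.TranslateTransform(cx, cy);
      //   g.RotateTransform(spinDegrees);
      //   g.TranslateTransform(-cx, -cy);

    Drawing in a Paint handler (rather than via CreateGraphics) and invalidating from the timer avoids flicker and lost frames.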

    Read the article

  • Flipping issue when interpolating Rotations using Quaternions

    - by uhuu
    I use slerp to interpolate between two quaternions representing rotations. The resulting rotation is then extracted as Euler angles to be fed into a graphics lib. This kind of works, but I have the following problem: when rotating around two axes (one axis works just fine) in the direction of the green arrow, as shown in the left frame here, the rotation soon jumps around to rotate from the opposite side, toward the opposite visual direction, as indicated by the red arrow in the right frame. This may be logical from a mathematical perspective (although not to me), but it is undesired. How could I achieve an interpolation with no visual flipping and changing of directions when rotating around more than one axis, following the green arrow at all times until the interpolation is complete? Thanks in advance.
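
    This is the classic double-cover artifact: q and -q encode the same rotation, so slerp between quaternions in opposite hemispheres takes the long way around and appears to flip. A minimal sketch of the usual fix, negating one input when the dot product is negative (plain C, not tied to any particular math library):

      #include <math.h>

      typedef struct { float w, x, y, z; } Quat;

      Quat slerpShortest(Quat a, Quat b, float t)
      {
          float dot = a.w*b.w + a.x*b.x + a.y*b.y + a.z*b.z;
          if (dot < 0.0f) {                 // -b is the same rotation; using it
              b.w = -b.w; b.x = -b.x;       // keeps the interpolation on the
              b.y = -b.y; b.z = -b.z;       // shorter arc
              dot = -dot;
          }
          if (dot > 0.9995f) {              // nearly parallel: lerp + renormalize
              Quat r = { a.w + t*(b.w - a.w), a.x + t*(b.x - a.x),
                         a.y + t*(b.y - a.y), a.z + t*(b.z - a.z) };
              float n = sqrtf(r.w*r.w + r.x*r.x + r.y*r.y + r.z*r.z);
              r.w /= n; r.x /= n; r.y /= n; r.z /= n;
              return r;
          }
          float theta = acosf(dot);
          float wa = sinf((1.0f - t) * theta) / sinf(theta);
          float wb = sinf(t * theta) / sinf(theta);
          Quat r = { wa*a.w + wb*b.w, wa*a.x + wb*b.x,
                     wa*a.y + wb*b.y, wa*a.z + wb*b.z };
          return r;
      }

    If flipping persists after this, the remaining suspect is the Euler extraction, which has its own discontinuities (gimbal lock); feeding the graphics lib a matrix or quaternion directly avoids that conversion entirely.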

    Read the article

  • Crystal Reports and WMF image

    - by Eve
    I have a problem with inserting a vector graphic (WMF) into a report (Crystal Reports v10.5 from Visual Studio 2008). The image is static, inserted by choosing "Insert Picture" during report design in VS. The problem is that it displays differently (size and aspect ratio) on machines with different operating systems and screen resolutions. Converting to a bitmap isn't possible because the loss of print quality isn't acceptable. I thought about loading the image dynamically, but in this version of CR I don't see a way to set a dynamic graphic location in the picture properties. Is there any way to solve this problem?

    Read the article

  • Direct video card access

    - by icemanind
    Guys, I am trying to write a class in C# that can be used as a direct replacement for the C# Bitmap class. What I want to do instead, though, is perform all graphic operations done on the bitmap using the power of the video card. From what I understand, functions such as DrawLine, DrawArc or DrawText are primitive functions that use simple CPU math algorithms to do the job. I, instead, want to use the graphics card's GPU and memory to do these and other advanced operations, such as skinning a bitmap (applying a texture) and true transparency. My problem is: in C#, how do I access the video hardware directly? Is there a library or something I need?

    Read the article

  • The easiest way to draw an image?

    - by Benno
    Assume you want to read an image file in a common format from the hard drive, change the color of one pixel, and display the resulting image on the screen, in C++. Which (open-source) libraries would you recommend to accomplish this with the least amount of code? Alternatively, which libraries would do it in the most elegant way possible? A bit of background: I have been reading a lot of computer graphics literature recently, and there are lots of relatively easy, pixel-based algorithms which I'd like to implement. While the algorithms themselves would usually be straightforward, the amount of framework needed to manipulate an image on a per-pixel basis and display the result has stopped me from doing it.
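
    A sketch of one low-friction combination (not the only reasonable one): stb_image and stb_image_write, single-file public-domain headers, handle loading, per-pixel access, and saving; a windowing library such as SDL2 can then put the result on screen. The file names here are placeholders.

      #define STB_IMAGE_IMPLEMENTATION
      #include "stb_image.h"
      #define STB_IMAGE_WRITE_IMPLEMENTATION
      #include "stb_image_write.h"

      int main()
      {
          int w, h, n;
          unsigned char *pixels = stbi_load("in.png", &w, &h, &n, 3); // force RGB
          if (!pixels) return 1;

          int x = 10, y = 20;                     // the pixel to recolor
          unsigned char *p = pixels + 3 * (y * w + x);
          p[0] = 255; p[1] = 0; p[2] = 0;         // set it to pure red

          stbi_write_png("out.png", w, h, 3, pixels, 3 * w);
          stbi_image_free(pixels);
          return 0;
      }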

    Read the article

  • Does Quartz2D test intersection of a rect by a line before drawing it?

    - by ddnv
    I'm drawing a big scheme that consists of a lot of lines. I do it in the drawRect: method of a UIView. The scheme is larger than the view's layer, so I check each line and draw it only if it intersects the visible rect. But at one moment I thought: should I be doing this? Maybe Quartz is already doing this test? So the question is: when I use the function CGContextAddLineToPoint(), does Core Graphics test this line for intersection with the layer rect, or does it just draw it anyway?

    Read the article

  • How to create plots in multiple windows and keep them separate in R

    - by PaulHurleyuk
    Hello SOers, I'm sure this is an easy problem, but my google / help foo has failed me, so it's up to you. I have an R script that generates several plots, and I want to view all the plots on screen at once (in separate windows), but I can't work out how to open multiple graphics windows. I'm using ggplot2, but I feel this is a more basic problem, so I'm just using base graphics for this simple example:

      x <- c(1:10)
      y <- sin(x)
      z <- cos(x)
      dev.new()
      plot(y=y, x=x)
      dev.off()
      dev.new()
      plot(x=x, y=z)

    But this doesn't work. I'm on Windows if this matters (Windows + Eclipse + StatEt).
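
    A sketch of what is probably going wrong: dev.off() closes the device that was just drawn to, so only the last window survives. Opening a fresh device per plot, with no dev.off() in between, keeps them all on screen:

      x <- 1:10
      dev.new(); plot(x, sin(x))    # first window stays open
      dev.new(); plot(x, cos(x))    # second window opens alongside it
      # on Windows, windows() can be used instead of dev.new();
      # ggplot2 objects must be print()ed: dev.new(); print(p)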

    Read the article

  • How do I re-set a BMP file's resolution (DPI) indicator?

    - by Joshua Fox
    I have a BMP tagged with a resolution of 299 DPI, and I'd like to change that to 99 DPI. Importantly, the DPI marker in a BMP has no structural meaning: an image has a certain width and height in pixels, and the displaying application can show it at any width in inches, so the DPI is just a hint. However, I am dealing with some third-party software which behaves differently depending on this marker, so I need to re-set it. I would appreciate suggestions on how to do this programmatically (especially in Java) as well as in GUI graphics tools (e.g. GIMP).
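
    A hedged Java sketch: in a BITMAPINFOHEADER-style BMP, the resolution hint is stored as pixels-per-meter in two little-endian 32-bit fields, biXPelsPerMeter at byte offset 38 and biYPelsPerMeter at byte offset 42 from the start of the file, so it can be patched in place without re-encoding the image. (In GUI tools, look for a "print size"/resolution dialog rather than a resize.)

      import java.io.IOException;
      import java.io.RandomAccessFile;

      public class BmpDpi {
          public static void setDpi(String path, int dpi) throws IOException {
              int ppm = (int) Math.round(dpi / 0.0254);   // dots/inch -> pixels/meter
              byte[] le = { (byte) ppm, (byte) (ppm >> 8),
                            (byte) (ppm >> 16), (byte) (ppm >> 24) };
              try (RandomAccessFile f = new RandomAccessFile(path, "rw")) {
                  f.seek(38); f.write(le);                // biXPelsPerMeter
                  f.seek(42); f.write(le);                // biYPelsPerMeter
              }
          }

          public static void main(String[] args) throws IOException {
              setDpi(args[0], 99);    // e.g. retag the file as 99 DPI
          }
      }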

    Read the article

  • How to "flick" a UIImageView?

    - by Meltemi
    I've got some UIViews that I'd like the user to be able to "flick" across the screen. They're not scroll views; they simply contain a raster image (PNG). Can anyone point me to some sample code, etc., to help get me started? Something a little more heavyweight than "MoveMe" that helps detect a "flick" (vs. a "nudge" or a drag-and-drop) and then carries the view off in the direction of the flick? OpenGL is probably overkill; if possible I'd like to stay within the realm of Core Graphics/Animation.
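
    A hedged Objective-C sketch of the detection half, inside the UIView subclass: treat the last inter-event delta as a velocity estimate and only "flick" when it is fast enough, then let Core Animation carry the view along that vector. The speed threshold and the 20x multiplier are arbitrary tuning values. (On later SDKs a UIPanGestureRecognizer's velocityInView: does the measuring for you.)

      - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
          UITouch *touch = [touches anyObject];
          CGPoint end   = [touch locationInView:self.superview];
          CGPoint start = [touch previousLocationInView:self.superview];
          CGFloat dx = end.x - start.x, dy = end.y - start.y;
          CGFloat speed = sqrtf(dx*dx + dy*dy);   // rough points-per-event
          if (speed < 8.0f) return;               // too slow: a nudge, not a flick

          [UIView beginAnimations:@"flick" context:NULL];
          [UIView setAnimationDuration:0.5];
          [UIView setAnimationCurve:UIViewAnimationCurveEaseOut];
          self.center = CGPointMake(self.center.x + dx * 20, self.center.y + dy * 20);
          [UIView commitAnimations];
      }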

    Read the article

  • Efficiently draw a grid in Windows Forms

    - by Joel
    I'm writing an implementation of Conway's Game of Life in C#. This is the code I'm using to draw the grid; it's in my panel_Paint event, and g is the graphics context.

      for (int y = 0; y < numOfCells * cellSize; y += cellSize)
      {
          for (int x = 0; x < numOfCells * cellSize; x += cellSize)
          {
              g.DrawLine(p, x, 0, x, y + numOfCells * cellSize);
              g.DrawLine(p, 0, x, y + size * drawnGrid, x);
          }
      }

    When I run my program, it is unresponsive until it finishes drawing the grid, which takes a few seconds at numOfCells = 100 and cellSize = 10. Removing all the multiplication makes it faster, but not by very much. Is there a better/more efficient way to draw my grid? Thanks
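
    A sketch of the main fix: the nested loop draws every line numOfCells times over, so an N-cell grid issues on the order of N^2 DrawLine calls instead of the 2(N+1) it needs. One pass of verticals and one of horizontals suffices:

      int extent = numOfCells * cellSize;
      for (int i = 0; i <= extent; i += cellSize)
      {
          g.DrawLine(p, i, 0, i, extent);   // vertical line at x = i
          g.DrawLine(p, 0, i, extent, i);   // horizontal line at y = i
      }

    For repeated repaints, drawing the grid once into a cached Bitmap and blitting it in panel_Paint, plus enabling double buffering (SetStyle with ControlStyles.OptimizedDoubleBuffer), keeps the UI responsive.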

    Read the article

  • Java: VolatileImage slower than BufferedImage

    - by Norswap
    I'm making a game in Java and used BufferedImages to render content to the screen. I had performance issues on the low-end machines the game is supposed to run on, so I switched to VolatileImages, which are normally faster. Except they actually slow the whole thing down. The images are created with GraphicsConfiguration.createCompatibleVolatileImage(...) and are drawn to the screen with Graphics.drawImage(...) (follow the link to see which one specifically). They are drawn upon a Canvas using double buffering. Does someone have an idea of what is going wrong here?
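
    One hedged suspect: a VolatileImage only pays off if its surface stays resident on the GPU and the validate/contentsLost protocol is followed every frame; skipping it, or letting the image go incompatible, can make it slower than a managed BufferedImage from createCompatibleImage(). A minimal render loop for comparison:

      import java.awt.*;
      import java.awt.image.VolatileImage;

      VolatileImage render(Graphics screen, GraphicsConfiguration gc,
                           VolatileImage img, int w, int h) {
          do {
              if (img.validate(gc) == VolatileImage.IMAGE_INCOMPATIBLE) {
                  img = gc.createCompatibleVolatileImage(w, h);  // recreate once
              }
              Graphics2D g = img.createGraphics();
              // ... draw the frame into g here ...
              g.dispose();
              screen.drawImage(img, 0, 0, null);
          } while (img.contentsLost());   // redraw if the surface was evicted
          return img;                     // caller keeps the (possibly new) surface
      }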

    Read the article

  • Looking for a mobile platform to view vector data and use it like a simple map

    - by Orchestrator
    I would like to develop, or use an existing, platform that will allow me to view custom vector data and use it as a map on mobile phones such as Android/iPhone (maybe even WP7). I'm hoping there's already a good infrastructure for what I need, so I would not have to develop a whole infrastructure by myself. In conclusion: is there any existing platform that may answer my needs? If not, how would you suggest I begin? How should I save my vector data? How could I read it? Should I view it with a graphics engine like OpenGL? Is there any chance this solution could be cross-platform? I know that it's possible, since it was already done with apps like Waze, and it works the same on iOS and Android. Thanks!

    Read the article

  • Java SWT - dissolve (fade) from one image to the next.

    - by carillonator
    I'm pretty new to Java and SWT, hoping to have one image dissolve into the next. I have one image now in a Label (relevant code):

      Device dev = shell.getDisplay();
      try {
          Image photo = new Image(dev, "photo.jpg");
      } catch (Exception e) { }
      Label label = new Label(shell, SWT.IMAGE_JPEG);
      label.setImage(photo);

    Now I'd like photo to fade into a different image at a speed that I specify. Is this possible as set up here, or do I need to delve more into the org.eclipse.swt.graphics API? Also, this will be a slide show that may contain hundreds of photos, only ever moving forward (never back to a previous image). Considering this, is there something I explicitly need to do in order to remove the old images from memory? Thanks!!
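
    SWT has no built-in cross-fade, but GC.setAlpha() (advanced graphics) allows blending the next photo over the current one on a Canvas instead of a Label. A hedged fragment, assuming currentPhoto, nextPhoto and alpha are fields of the slideshow class, with alpha stepped from 0 to 255 by a Display.timerExec() loop whose delay sets the fade speed:

      canvas.addPaintListener(e -> {
          e.gc.drawImage(currentPhoto, 0, 0);
          e.gc.setAlpha(alpha);              // 0..255, advanced per timer tick
          e.gc.drawImage(nextPhoto, 0, 0);
          e.gc.setAlpha(255);
      });

    On the memory question: SWT Images wrap native resources, so once a photo has scrolled past for good, call dispose() on it (and drop the reference); hundreds of undisposed JPEGs will exhaust native memory long before the Java heap complains.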

    Read the article

  • Why do we need a normalized coordinate system?

    - by jcyang
    Hi, I have trouble understanding the following sentences in my textbook, Computer Graphics with OpenGL: "To make the viewing process independent of the requirements of any output device, graphics systems convert object descriptions to normalized coordinates and apply the clipping routines." Why do normalized coordinates make the viewing process independent of the requirements of any output device? Aren't the projection coordinates already independent of the output device? We only need to first scale and then translate the projection coordinates to get device coordinates. So why do we first need to convert the projection coordinates to normalized coordinates? "Clipping is usually performed in normalized coordinates. This allows us to reduce computations by first concatenating the various transformation matrices." Why is clipping usually performed in normalized coordinates? What kinds of transformation matrices are concatenated? Thanks, jcyang.

    Read the article

  • My Apache access log contains weird GET and POST requests, what can I do?

    - by Konstantin
    My Apache access log contains weird GET and POST requests; is it possible to examine which of these are harmful? For example:

      114.232.151.185 - - [11/Jun/2014:20:11:33 +0200] "GET http://hotel.qunar.com/render/hoteldiv.jsp?&__jscallback=XQScript_4 HTTP/1.1" 404 1167
      103.30.175.10 - - [12/Jun/2014:08:35:17 +0200] "GET /vtigercrm/ HTTP/1.1" 404 1034
      69.174.245.163 - - [14/Jun/2014:01:22:38 +0200] "GET /w00tw00t.at.blackhats.romanian.anti-sec:) HTTP/1.1" 404 1034
      69.174.245.163 - - [14/Jun/2014:01:22:38 +0200] "GET /phpMyAdmin/scripts/setup.php HTTP/1.1" 404 1034
      94.74.229.110 - - [16/Jun/2014:18:46:43 +0200] "GET http://www.msftncsi.com/ncsi.txt HTTP/1.1" 404 1037
      80.73.11.164 - - [20/Jun/2014:01:52:14 +0200] "POST /cgi-bin/php?%2D%64+%61%6C%6C%6F%77%5F%75%72%6C%5F%69%6E%63%6C%75%64%65%3D%6F%6E+%2D%64+%73%61%66%65%5F%6D%6F%64%65%3D%6F%66%66+%2D%64+%73%75%68%6F%73%69%6E%2E%73%69%6D%75%6C%61%74%69%6F%6E%3D%6F%6E+%2D%64+%64%69%73%61%62%6C%65%5F%66%75%6E%63%74%69%6F%6E%73%3D%22%22+%2D%64+%6F%70%65%6E%5F%62%61%73%65%64%69%72%3D%6E%6F%6E%65+%2D%64+%61%75%74%6F%5F%70%72%65%70%65%6E%64%5F%66%69%6C%65%3D%70%68%70%3A%2F%2F%69%6E%70%75%74+%2D%64+%63%67%69%2E%66%6F%72%63%65%5F%72%65%64%69%72%65%63%74%3D%30+%2D%64+%63%67%69%2E%72%65%64%69%72%65%63%74%5F%73%74%61%74%75%73%5F%65%6E%76%3D%30+%2D%6E HTTP/1.1" 404 1034
      162.253.66.76 - - [24/Jun/2014:23:54:30 +0200] "GET /rutorrent HTTP/1.1" 400 226
      122.226.223.69 - - [25/Jun/2014:01:14:27 +0200] "GET http://todd0738.gotoip4.com//hello.html HTTP/1.1" 404 1041

    My full Apache access log file: http://pastebin.com/2x0naQBK
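
    For orientation: every request above matches a classic automated scan pattern — phpMyAdmin setup probes, the w00tw00t scanner signature, open-proxy tests that GET an absolute URL, and a percent-encoded payload that appears to be a php-cgi argument-injection attempt (CVE-2012-1823 family). All of them got 404/400 responses, so nothing here indicates success. A quick shell sketch to rank the noisiest clients, assuming the common/combined log format:

      # the status code is field 9 in the common/combined log format
      awk '$9 == 404 {print $1}' access.log | sort | uniq -c | sort -rn | head
      # persistent offenders can then be blocked, e.g. automatically via fail2ban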

    Read the article

  • Will the Driver Support for Intel HD Graphics be Improved in 12.10?

    - by Hiranya
    I recently installed Ubuntu 12.04 on an HP Pavilion dv4 laptop. This is a Core i7 machine with Intel HD graphics and also a separate nVidia VGA card. I had a lot of issues getting Ubuntu 12.04 working on this system. First there were issues booting the live CD for installation; I worked around that by using the 'nomodeset' option. I continued to have similar issues after installation completed, so I had to permanently add nomodeset to my GRUB boot configuration. At the moment I have a working installation, but there are many issues:

    - The Ubuntu GUI is a bit flaky at times: the mouse pointer flickers on and off when hovering over certain icons, and certain things don't get rendered properly on the screen.
    - I can't access any of the tty consoles. Hitting Ctrl+Alt+F[1-6] gives me a blank screen, and once that happens I can't even come back to the UI by hitting Ctrl+Alt+F7. I've realized the tty consoles are actually working; I just can't see the text. If I type a command like 'sudo reboot' into the empty screen, the machine reboots.
    - I can't get external displays (monitors, projectors etc.) working, though I think this is because the VGA out is wired to the nVidia card, which is not being used by Linux.
    - The colord program crashes every now and then, triggering a popup message.

    So my main question is: will the support for Intel HD graphics be improved in the next release, or will I have to keep using the nomodeset option there too? I'd also appreciate it if anybody could shed some light on any of the issues listed above. Thanks in advance.

    Read the article

  • What to do about "system running in low-graphics mode"?

    - by ubuntubabe
    My Dell, which was 5 years old, suddenly karked it, and I had the "low graphics" black screen and useless dialogue box. As I believed it was a dead graphics card, I went out and bought a brand new machine. I put the new machine aside and tried again, in vain, to revive the Dell. I eventually got to the command line via Ctrl+Alt+F1. I logged into my account from there and simply ran a series of sudo apt-get remove commands for various software I knew was installed on my PC (software without any great consequence, like Google Earth, tweak tools, Skype etc.). Lo and behold, after a sudo reboot my computer was fine again! So now I have 2 computers. BUT one week after buying the other one and installing 12.04 (because I love Ubuntu), the SAME PROBLEM arrived! I once again deleted Google Earth and Skype, did a sudo reboot, and everything worked as before. I think there is a bug or something in 12.04, as this problem has never arisen with any other version of Ubuntu.

    Read the article

  • Is it possible for a WPF control to have an ActualWidth and ActualHeight if it has never been rendered?

    - by DanM
    I need a Viewport3D for the sole purpose of doing geometric calculations using Petzold.Media3D.ViewportInfo. I do not want to place it in a Window. I'm creating a Viewport3D using the following code:

      private Viewport3D CreateViewport(MainSettings settings)
      {
          var cameraPosition = new Point3D(0, 0, settings.CameraHeight);
          var cameraLookDirection = new Vector3D(0, 0, -1);
          var cameraUpDirection = new Vector3D(0, 1, 0);

          var camera = new PerspectiveCamera
          {
              Position = cameraPosition,
              LookDirection = cameraLookDirection,
              UpDirection = cameraUpDirection
          };

          var viewport = new Viewport3D
          {
              Camera = camera,
              Width = settings.ViewportWidth,
              Height = settings.ViewportHeight
          };

          return viewport;
      }

    Later, I'm attempting to use this viewport to convert the mouse location to a 3D location using this method:

      public Point3D? Point2dToPoint3d(Point point)
      {
          var range = new LineRange();
          var isValid = ViewportInfo.Point2DtoPoint3D(_viewport, point, out range);
          if (isValid)
              return range.PointFromZ(0);
          else
              return null;
      }

    Unfortunately, it's not working. I think the reason is that the ActualWidth and ActualHeight of the viewport are both zero (and these are read-only properties, so I can't set them manually). (I have tested the exact same code with an actually rendered Viewport3D, so I know the issue is not with my converter method.) Any idea how I can get WPF to assign the ActualWidth and ActualHeight based on my Width and Height settings? I tried setting the HorizontalAlignment and VerticalAlignment to Left and Top, respectively, and I also messed with the MinWidth and MinHeight, but none of these properties had any effect on the ActualWidth or ActualHeight.
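
    A commonly used fix, sketched here reusing the settings above: layout never runs for an element that is not in a visual tree, so ActualWidth/ActualHeight stay at zero until a measure/arrange pass is forced by hand:

      var size = new Size(settings.ViewportWidth, settings.ViewportHeight);
      viewport.Measure(size);
      viewport.Arrange(new Rect(size));
      viewport.UpdateLayout();   // ActualWidth/ActualHeight are now non-zero

    After this, the viewport behaves for hit-testing math like ViewportInfo's as if it had been rendered at that size.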

    Read the article

  • How to speed up marching cubes?

    - by Dan Vinton
    I'm using this marching cubes algorithm to draw 3D isosurfaces (ported into C#, outputting MeshGeometry3Ds, but otherwise the same). The resulting surfaces look great, but they are taking a long time to calculate. Are there any ways to speed up marching cubes? The most obvious one is to simply reduce the spatial sampling rate, but this reduces the quality of the resulting mesh, and I'd like to avoid that. I'm considering a two-pass system, where the first pass samples space much more coarsely, eliminating volumes where the field strength is well below my isolevel. Is this wise? What are the pitfalls? Edit: the code has been profiled, and the bulk of the CPU time is split between the marching cubes routine itself and the field strength calculation for each grid cell corner. The field calculations are beyond my control, so speeding up the cubes routine is my only option... I'm still drawn to the idea of trying to eliminate dead space, since this would reduce the number of calls to both systems considerably.
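
    A sketch of the coarse-pass idea under discussion, in the question's C#: sample the eight corners of each coarse block and only fine-march blocks whose sampled range brackets the isolevel. The margin parameter guards against the scheme's main pitfall — a thin feature sliding between coarse samples and being culled away — and must be tuned to the field's gradient.

      // field is the (expensive) scalar field being polygonized
      static bool BlockMightCrossIsolevel(Func<double, double, double, double> field,
                                          double x, double y, double z,
                                          double blockSize, double isolevel, double margin)
      {
          double min = double.MaxValue, max = double.MinValue;
          for (int i = 0; i <= 1; i++)
          for (int j = 0; j <= 1; j++)
          for (int k = 0; k <= 1; k++)
          {
              double v = field(x + i * blockSize, y + j * blockSize, z + k * blockSize);
              if (v < min) min = v;
              if (v > max) max = v;
          }
          // fine-sample this block only if the isolevel falls inside the
          // sampled range, widened by the safety margin
          return min - margin <= isolevel && isolevel <= max + margin;
      }

    The corner samples can be shared with the fine pass (they lie on the fine grid whenever blockSize is a multiple of the cell size), so the coarse pass adds few extra field evaluations.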

    Read the article

  • Are we DELPHI, VCL or Pascal programmers?

    - by José Eduardo
    I've been a Delphi database programmer since D2. Now I'm facing some digital imaging and 3D challenges that are making me start to study OpenGL, DirectX, color spaces and so on. I'm really trying, but nobody seems to use Delphi for this kind of stuff, just the good-old-paycheck database programming. OK, I know that we have some very smart guys behind some clever components, some of them open-source. Is there any Photoshop, Blender, Maya, Office, Sonar, StarCraft, Call of Duty written in Delphi? Do I have to learn C++ to have access to zillions of books about that kind of stuff? What is the fuss/hype behind this: int *varName = &anotherThing? Why do pointers seem to be the holy grail for these apps? I've downloaded MSVC++ Express and started to learn some WPF and Qt integration, and I think: "Man, Delphi does this kind of stuff, with less code and fewer headaches, and has since the wheel was invented." This leads my mind to the following... Have you ever tried to write a simple notepad program using just Notepad and dcc32 in Pascal/Delphi? If so, Embarcadero could make our beloved Pascal compiler free and sell just the IDE, the VCL, the customer support... and back to the question: are we Delphi, VCL or Pascal programmers?

    Read the article

  • C# Hook Forms / Windows / Dialogs etc. (via HWND?) to Capture Video Buffer (D3D Device?)

    - by Drax
    I am looking to create a very simple C# application which runs full-screen in Direct3D and is able to grab the desktop "scene", mapping each window from the desktop to a textured polygon in my D3D scene. I'm hoping to create a simplistic "3D desktop" type of application as an experiment, and I'm wondering if there is a specific method for doing something like the following:

    1) Get a list of all the windows open on the desktop (a list of HWNDs?).
    2) Grab the X,Y position of each window, as well as its width and height.
    3) Grab the rendered image of each window (magic happens here — see the sketch below).
    4) Create a new texture/surface in D3D using the width and height of each window, and apply the image we grabbed as a texture.

    Is there an efficient "best practice" for acquiring the actual image(s) being rendered to the desktop? Is there also a "best practice" for "extending the desktop" to a virtual second, third, etc. "desktop", swapping between them, and creating a unique instance of the taskbar for each virtual desktop? Thanks a million for any suggestions!
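
    For step 3, one hedged route is the Win32 PrintWindow call via P/Invoke, which asks each window to paint itself into a GDI+ bitmap that can then be uploaded as a D3D texture (EnumWindows supplies the HWND list for step 1; on Vista and later, the DWM thumbnail API is the alternative):

      using System;
      using System.Drawing;
      using System.Runtime.InteropServices;

      static class WindowCapture
      {
          [DllImport("user32.dll")]
          static extern bool PrintWindow(IntPtr hWnd, IntPtr hdcBlt, uint nFlags);

          [DllImport("user32.dll")]
          static extern bool GetWindowRect(IntPtr hWnd, out RECT rect);

          [StructLayout(LayoutKind.Sequential)]
          struct RECT { public int Left, Top, Right, Bottom; }

          // returns a GDI+ snapshot of the window, ready to become a texture
          public static Bitmap Capture(IntPtr hWnd)
          {
              GetWindowRect(hWnd, out RECT r);
              var bmp = new Bitmap(Math.Max(1, r.Right - r.Left),
                                   Math.Max(1, r.Bottom - r.Top));
              using (var g = Graphics.FromImage(bmp))
              {
                  IntPtr hdc = g.GetHdc();
                  PrintWindow(hWnd, hdc, 0);   // window renders itself into our DC
                  g.ReleaseHdc(hdc);
              }
              return bmp;
          }
      }

    Note that PrintWindow depends on each application handling WM_PRINT correctly, so some windows (notably those drawn with hardware overlays) may come back blank.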

    Read the article

  • How to get results efficiently out of an Octree/Quadtree?

    - by Reveazure
    I am working on a piece of 3D software that sometimes has to perform intersections between massive numbers of curves (sometimes ~100,000). The most natural way to do this is an N^2 bounding-box check, after which the curves whose bounding boxes overlap get intersected. I heard good things about octrees, so I decided to try implementing one to see if I would get improved performance. Here's my design: each octree node is implemented as a class with a list of subnodes and an ordered list of object indices. When an object is added, it's added to the lowest node that entirely contains the object, or to some of that node's children if the object doesn't fill all of them. Now, what I want to do is retrieve all objects that share a tree node with a given object. To do this, I traverse all tree nodes, and if they contain the given index, I add all of their other indices to an ordered list. This is efficient because the indices within each node are already ordered, so finding out whether each index is already in the list is fast. However, the list ends up having to be resized repeatedly, and this takes up most of the time in the algorithm. So what I need is some kind of tree-like data structure that will allow me to add ordered data efficiently while also being efficient in memory. Any suggestions?
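
    A sketch of one way to sidestep the resizing cost: since each node's index list is already sorted, collect the per-node lists first and combine them in a single pass at the end (concatenate, sort once, de-duplicate in place), instead of growing one ordered list with per-element membership checks:

      using System;
      using System.Collections.Generic;

      static class IndexMerge
      {
          public static List<int> MergeSortedUnique(List<List<int>> perNodeLists)
          {
              var merged = new List<int>();
              foreach (var list in perNodeLists)
                  merged.AddRange(list);     // cheap appends, no mid-list inserts
              merged.Sort();                 // one O(m log m) sort total

              int write = 0;                 // collapse duplicates in place
              for (int read = 0; read < merged.Count; read++)
                  if (write == 0 || merged[write - 1] != merged[read])
                      merged[write++] = merged[read];
              merged.RemoveRange(write, merged.Count - write);
              return merged;
          }
      }

    A HashSet<int> filled during traversal is the even simpler alternative when the result only needs to be sorted once at the end rather than kept ordered throughout.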

    Read the article
