Search Results

Search found 891 results on 36 pages for 'scaling out'.

Page 13/36 | < Previous Page | 9 10 11 12 13 14 15 16 17 18 19 20  | Next Page >

  • Sprites, primitives and logic entities as structs

    - by Jeffrey
    I'm wondering whether this would be considered acceptable: the Window class is responsible for drawing, so it has the methods

        Window::draw(const Sprite&);
        Window::draw(const Rect&);
        Window::draw(const Triangle&);
        Window::draw(const Circle&);

    and all those primitives plus sprites are just public structs. For example, Sprite:

        struct Sprite {
            float x, y;                // center
            float origin_x, origin_y;
            float width, height;
            float rotation;
            float scaling;
            GLuint texture;

            Sprite(float w, float h);
            Sprite(float w, float h, float a, float b);
            void useTexture(std::string file);
            void setOrigin(float a, float b);
            void move(float a, float b);    // relative move
            void moveTo(float a, float b);  // absolute move
            void rotate(float a);           // relative rotation
            void rotateTo(float a);         // absolute rotation
            void rotationReset();
            void scale(float a);            // relative scaling
            void scaleTo(float a);          // absolute scaling
            void scaleReset();
        };

    So instead of having each primitive draw itself, which is a bit outside its responsibility, I let the Window class handle all the OpenGL work and manipulate the primitives as simple objects that get drawn later on. Is this pattern used? Does it have any cons compared to the primitives-draw-themselves pattern? Are there any other related patterns?
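
    A hedged follow-on sketch (my addition, not part of the question): one way to store mixed primitives for deferred drawing with the overload-based Window::draw design is std::variant plus std::visit, which picks the right overload while the structs stay free of drawing logic:

        // Primitives stay plain data; Window owns all drawing (C++17).
        #include <variant>
        #include <vector>

        struct Rect   { float x, y, w, h; };
        struct Circle { float x, y, r; };

        struct Window {
            void draw(const Rect&)   { /* GL calls for a quad would go here */ }
            void draw(const Circle&) { /* GL calls for a triangle fan would go here */ }
        };

        using Primitive = std::variant<Rect, Circle>;

        int main() {
            Window win;
            std::vector<Primitive> scene{ Rect{0, 0, 4, 2}, Circle{1, 1, 3} };
            for (const auto& p : scene)
                std::visit([&](const auto& shape) { win.draw(shape); }, p);
        }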

    Read the article

  • OpenGL: Move camera regardless of rotation

    - by Markus
    For a 2D board game I'd like to move and rotate an orthographic camera using coordinates given in a reference system (window space), but I simply can't get it to work. The idea is that the user can drag the camera over a surface, rotate it, and scale it. Rotation and scaling should always be around the center of the current viewport. The camera is set up as:

        gl.glMatrixMode(GL2.GL_PROJECTION);
        gl.glLoadIdentity();
        gl.glOrtho(-width/2, width/2, -height/2, height/2, nearPlane, farPlane);

    where width and height are equal to the viewport's width and height, so that 1 unit is one pixel when no zoom is applied. Since these transformations usually mean (scaling and) translating the world, then rotating it, the implementation is:

        gl.glMatrixMode(GL2.GL_MODELVIEW);
        gl.glLoadIdentity();
        gl.glRotatef(rotation, 0, 0, 1);                  // e.g. 45°
        gl.glTranslatef(x, y, 0);                         // e.g. +10 for 10px right, -2 for 2px down
        gl.glScalef(zoomFactor, zoomFactor, zoomFactor);  // e.g. scale by 1.5

    That, however, has the nasty side effect that the translation is transformed as well, i.e. applied in world coordinates: if I rotate by 90° and translate again, the X and Y axes are swapped. If I reorder the transformations so they read

        gl.glTranslatef(x, y, 0);
        gl.glScalef(zoomFactor, zoomFactor, zoomFactor);
        gl.glRotatef(rotation, 0, 0, 1);

    the translation is applied correctly (in reference space, so a translation along x always visually moves the camera sideways), but rotation and scaling are now performed around the origin. It shouldn't be too hard, so what am I missing?
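
    One possible way out (a sketch of my own, not from the question): keep the rotate-translate-scale order, which pivots around the viewport center, and instead convert each screen-space drag vector into the rotated frame before accumulating it into x and y. With the modelview M = R·T·S, a translation t shows up on screen as R·t, so feeding the camera R(-rotation)·d makes it follow the cursor exactly:

        #include <cmath>

        // dx, dy: mouse drag in window pixels; x, y: accumulated camera offset
        void applyDrag(float dx, float dy, float rotationDeg, float& x, float& y) {
            float rad = rotationDeg * 3.14159265f / 180.0f;
            x +=  dx * std::cos(rad) + dy * std::sin(rad);  // R(-rotation) * (dx, dy)
            y += -dx * std::sin(rad) + dy * std::cos(rad);
        }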

    Read the article

  • Exploring TCP throughput with DTrace

    - by user12820842
    One key measure to use when assessing TCP throughput is the amount of unacknowledged data in the pipe. This is sometimes termed the Bandwidth Delay Product (BDP) (note that BDP is often used more generally as the product of the link capacity and the end-to-end delay). In DTrace terms, the amount of unacknowledged data in bytes for the connection is the difference between the next sequence number to send and the lowest unacknowledged sequence number (tcps_snxt - tcps_suna). According to the theory, when the number of unacknowledged bytes for the connection is less than the receive window of the peer, the path bandwidth is the limiting factor for throughput. In other words, if we can fill the pipe without the peer TCP complaining (by virtue of its window size reaching 0), we are purely bandwidth-limited. If the peer's receive window is too small, however, the sending TCP has to wait for acknowledgements before it can send more data, and the round-trip time (RTT) limits throughput. In such cases the effective throughput limit is the window size divided by the RTT; e.g. if the window size is 64K and the RTT is 0.5 sec, the throughput is 128K/s. So a neat way to visually determine if the receive window of clients may be too small is to compare the distribution of BDP values for the server against the client's advertised receive window. If the BDP distribution overlaps the send window distribution such that it is to the right (or lower down in DTrace, since quantizations are displayed vertically), it indicates that the amount of unacknowledged data regularly exceeds the client's receive window, so it is possible that the sender has more data to send but is blocked by a zero window on the client side. In the following example, we compare the distribution of BDP values to the receive window advertised by the receiver (10.175.96.92) for a large file download via HTTP.

        # dtrace -s tcp_tput.d
        ^C
          BDP(bytes)                            10.175.96.92            80
                 value  ------------- Distribution ------------- count
                    -1 |                                         0
                     0 |                                         6
                     1 |                                         0
                     2 |                                         0
                     4 |                                         0
                     8 |                                         0
                    16 |                                         0
                    32 |                                         0
                    64 |                                         0
                   128 |                                         0
                   256 |                                         3
                   512 |                                         0
                  1024 |                                         0
                  2048 |                                         9
                  4096 |                                         14
                  8192 |                                         27
                 16384 |                                         67
                 32768 |@@                                       1464
                 65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@   32396
                131072 |                                         0

          SWND(bytes)                           10.175.96.92            80
                 value  ------------- Distribution ------------- count
                 16384 |                                         0
                 32768 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 17067
                 65536 |                                         0

    Here we have a puzzle. We can see that the receiver's advertised window is in the 32768-65535 range, while the amount of unacknowledged data in the pipe is largely in the 65536-131071 range. What's going on here? Surely in a case like this we should see zero-window events, since the amount of data in the pipe regularly exceeds the window size of the receiver. Yet we see no zero-window events: the SWND distribution displays no 0 values - it stays within the 32768-65535 range. The explanation is straightforward enough. TCP window scaling is in operation for this connection - the Window Scale TCP option is used on connection setup to allow a connection to advertise (and have advertised to it) a window greater than 65536 bytes. In this case the scaling shift is 1, which explains why the SWND values are clustered in the 32768-65535 range rather than the 65536-131071 range - the SWND value needs to be multiplied by two, since the receiver is scaling its window by a shift factor of 1. Here's the simple script that compares BDP and SWND distributions, fixed to take account of window scaling.
        #!/usr/sbin/dtrace -s

        #pragma D option quiet

        tcp:::send
        / (args[4]->tcp_flags & (TH_SYN|TH_RST|TH_FIN)) == 0 /
        {
                @bdp["BDP(bytes)", args[2]->ip_daddr, args[4]->tcp_sport] =
                    quantize(args[3]->tcps_snxt - args[3]->tcps_suna);
        }

        tcp:::receive
        / (args[4]->tcp_flags & (TH_SYN|TH_RST|TH_FIN)) == 0 /
        {
                @swnd["SWND(bytes)", args[2]->ip_saddr, args[4]->tcp_dport] =
                    quantize((args[4]->tcp_window)*(1 << args[3]->tcps_snd_ws));
        }

    And here's the fixed output.

        # dtrace -s tcp_tput_scaled.d
        ^C
          BDP(bytes)                            10.175.96.92            80
                 value  ------------- Distribution ------------- count
                    -1 |                                         0
                     0 |                                         39
                     1 |                                         0
                     2 |                                         0
                     4 |                                         0
                     8 |                                         0
                    16 |                                         0
                    32 |                                         0
                    64 |                                         0
                   128 |                                         0
                   256 |                                         3
                   512 |                                         0
                  1024 |                                         0
                  2048 |                                         4
                  4096 |                                         9
                  8192 |                                         22
                 16384 |                                         37
                 32768 |@                                        99
                 65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@   3858
                131072 |                                         0

          SWND(bytes)                           10.175.96.92            80
                 value  ------------- Distribution ------------- count
                   512 |                                         0
                  1024 |                                         1
                  2048 |                                         0
                  4096 |                                         2
                  8192 |                                         4
                 16384 |                                         7
                 32768 |                                         14
                 65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 1956
                131072 |                                         0
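
    As an aside, a quick numeric check (my addition) of the window-divided-by-RTT rule quoted above:

        #include <cstdio>

        int main() {
            double window_bytes = 64.0 * 1024;  // 64K receive window
            double rtt_sec      = 0.5;          // 500 ms round trip
            // throughput ceiling = window / RTT
            std::printf("max throughput: %.0f KB/s\n",
                        window_bytes / rtt_sec / 1024.0);  // prints 128 KB/s
        }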

    Read the article

  • What's wrong with this video embed code?

    - by jamietelin
    The following embed code is from http://hd.se/landskrona/2010/04/09/kunglig-glans-pa-idrottsgalan/ but it doesn't work in Internet Explorer 8 (Firefox has no problems). Any recommendations for improvements? Thanks for your time!

        <object width="480px" height="294px" id="_36313041"
                data="http://hd.se/static/media/html/flash/video-3/flowplayer.swf"
                type="application/x-shockwave-flash">
          <param name="movie" value="http://hd.se/static/media/html/flash/video-3/flowplayer.swf" />
          <param name="allowfullscreen" value="true" />
          <param name="allowscriptaccess" value="always" />
          <param name="flashvars" value='config={"key":"$3fff7448b28a8cffc85","contextMenu":["hd.se videospelare 1.0"],"plugins":{"rtmp":{"url":"http://hd.se/static/media/html/flash/video-3/flowplayer.rtmp.swf"},"controls":{"height":24,"opacity":1,"all":false,"play":true,"time":true,"scrubber":true,"playlist":false,"mute":true,"volume":true,"fullscreen":true,"backgroundColor":"#222222","backgroundGradient":"none","buttonColor":"#7c7c7c","buttonOverColor":"#36558b","progressColor":"#7c7c7c","bufferColor":"#7c7c7c","timeColor":"#ffffff","durationColor":"#ffffff","timeBgColor":"#222222","scrubberHeightRatio":0.5,"scrubberBarHeightRatio":0.5,"volumeSliderHeightRatio":0.5,"volumeBarHeightRatio":0.5,"autoHide":"fullscreen","hideDelay":1800,"tooltips":{"buttons":true,"play":"Spela","pause":"Paus","next":"Nästa","previous":"Föregående","mute":"Ljud av","unmute":"Ljud på","fullscreen":"Fullskärmsläge","fullscreenExit":"Lämna fullskärmsläge"},"tooltipColor":"#153872","tooltipTextColor":"#ffffff"},"contentIntro":{"url":"http://hd.se/static/media/html/flash/video-3/flowplayer.content.swf","top":0,"width":736,"border":"none","backgroundColor":"#202020","backgroundGradient":"none","borderRadius":"none","opacity":"85pct","display":"none","closeButton":true}},"canvas":{"backgroundColor":"#000000","backgroundGradient":"none"},"play":{"replayLabel":"Spela igen"},"screen":{"bottom":24},"clip":{"scaling":"fit","autoPlay":true},"playlist":[{"provider":"rtmp","netConnectionUrl":"rtmp://fl0.c06062.cdn.qbrick.com/06062","url":"ncode/hdstart","autoPlay":false,"scaling":"fit"},{"url":"http://hd.se/multimedia/archive/00425/_kunligglans_HD_VP6_425359a.flv","scaling":"fit","autoPlay":true},{"provider":"rtmp","netConnectionUrl":"rtmp://fl0.c06062.cdn.qbrick.com/06062","url":"ncode/hdstopp","autoPlay":true,"scaling":"fit"}]}' />
        </object>

    Read the article

  • What do you use RightScale for?

    - by npt
    I'm currently evaluating whether to use RightScale to manage a production environment on EC2. I intend to use Puppet for configuration management either way (the declarative approach seems far better than running scripts), am running a somewhat nonstandard stack (e.g. MongoDB), and am uncertain how much value RightScale would add relative to Puppet + Amazon's auto-scaling + another hosted monitoring system. For those of you who use RightScale: what features do you find important? Is its auto-scaling support (including keeping single instances running) more powerful than Amazon's?

    Read the article

  • 3D World to Local transformation

    - by Bill Kotsias
    Hello. I am having a real headache trying to set a node's local position to match a given world position. I was given a solution but, as far as I can see, it only takes into account orientation and position but NOT scaling:

        node_new_local_position = node_parent.derivedOrientation().Inverse() *
            (world_position_to_match - node_parent.derivedPosition());

    The node in question is a child of node_parent; node_parent's local and derived properties (orientation, position and scaling) are known, as well as its full matrix transform. All the positions are 3D vectors; the orientation is a quaternion; the full transform is a 4x4 matrix. Could someone please help me modify the solution to support scaling in the node hierarchy? Many thanks in advance, Bill
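
    A sketch of the scaling-aware inverse (my own, with minimal stand-in types mirroring the question's API): since the parent's full transform is T * R * S, its inverse is S^-1 * R^-1 * T^-1 - un-translate, un-rotate, then divide component-wise by the derived scale:

        struct Vec3 { float x, y, z; };

        struct Quat {                               // unit quaternion
            float w, x, y, z;
            Quat inverse() const { return { w, -x, -y, -z }; }
            Vec3 rotate(const Vec3& v) const {      // q * v * q^-1, expanded
                Vec3 t{ 2*(y*v.z - z*v.y), 2*(z*v.x - x*v.z), 2*(x*v.y - y*v.x) };
                return { v.x + w*t.x + (y*t.z - z*t.y),
                         v.y + w*t.y + (z*t.x - x*t.z),
                         v.z + w*t.z + (x*t.y - y*t.x) };
            }
        };

        Vec3 worldToLocal(const Quat& parentOrient, const Vec3& parentPos,
                          const Vec3& parentScale, const Vec3& world) {
            Vec3 p{ world.x - parentPos.x, world.y - parentPos.y, world.z - parentPos.z };
            p = parentOrient.inverse().rotate(p);                  // undo rotation
            return { p.x / parentScale.x, p.y / parentScale.y,     // undo scale
                     p.z / parentScale.z };
        }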

    Read the article

  • Why is mesh baking causing huge performance spikes?

    - by jellyfication
    A couple of seconds into gameplay on my Android device, I see huge performance spikes caused by "Mesh.Bake Scaled Mesh PhysX CollisionData". In my game, a whole level is a parent object containing multiple rigidbodies with mesh colliders. Every FixedUpdate(), my parent object rotates around the player, and rotating the world causes mesh scaling. Here is the code that handles world rotation:

        private void Update() {
            input.update();
            Vector3 currentInput = input.GetDirection();
            worldParent.rotation = initialRotation;
            worldParent.DetachChildren();
            worldParent.position = transform.position;
            world.parent = worldParent;
            worldParent.Rotate(Vector3.right, currentInput.x * 50f);
            worldParent.Rotate(Vector3.forward, currentInput.z * 50f);
        }

    How can I get rid of the mesh scaling? The Mesh.Bake PhysX step seems to take effect only after some time - is it possible to disable it? The profiler looks like this: the bottom-left panel shows data before the spikes, the right one after.

    Read the article

  • How to control CPU frequency

    - by Tim
    I am using CPU Frequency Scaling Monitor 2.30.0 on the panel to show and control the CPU frequency. By default my CPU frequency changes according to load, but I want the CPU to run at the lowest frequency, so I choose 800 MHz in CPU Frequency Scaling Monitor. After a few seconds, however, it automatically changes back to the Powersave or Performance mode, which adjusts the CPU frequency automatically. How can I actually make the CPU stay at the lowest frequency? Thanks!

    Read the article

  • Resolution independent physics

    - by user46877
    I'm making a game like Doodle Jump but don't know how to make the physics scale across multiple resolutions; I also can't find anything related to this on Google. Right now I'm scaling the game using letterboxing, and I tested scaling the jump height with this code:

        gravity = graphics.getHeight() * 0.001f;
        jumpVel = graphics.getHeight() * -0.04f;
        ...
        velY += gravity;
        y += velY;

    But if I test this on my smartphone or an emulator with a different resolution, I always get a slightly different jump height. I know that Farseer is resolution-independent. How can I replicate this in my game? Thanks in advance.
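
    One common remedy (a sketch under my own assumptions, not from the post): run the simulation in a fixed virtual coordinate space and scale only at render time, so the physics is identical on every resolution. Note that per-frame integration like velY += gravity also makes jump height depend on frame rate, so a fixed timestep helps too:

        #include <cstdio>

        const float VIRTUAL_H = 800.0f;        // assumed virtual screen height
        float gravity = VIRTUAL_H * 0.001f;    // the post's constants, now
        float jumpVel = VIRTUAL_H * -0.04f;    // independent of the real screen

        int main() {
            int   screenH = 1280;                  // actual device height
            float scale   = screenH / VIRTUAL_H;   // applied only when drawing
            float y = 0.0f, velY = jumpVel;
            for (int i = 0; i < 3; ++i) { velY += gravity; y += velY; }
            std::printf("virtual y=%.2f -> screen y=%.2f\n", y, y * scale);
        }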

    Read the article

  • How do I reconfigure my GLES frame buffer after a rotation?

    - by Panda Pajama
    I am implementing interface rotation for my GLES-based game for iOS, written in Xamarin.iOS with OpenTK. I detect the rotation by overriding WillRotate in my UIViewController, and I correctly re-set up all of my projection matrices. However, when drawing a sprite, the image looks a bit blurrier in the landscape version than in the portrait version, as you can see in the following closeups magnified 10x: portrait (before rotating) vs. landscape (after rotating). In both cases I'm using the same texture with the same sampler, the same shader, and the same GL state. I just changed the order of the parameters in the projection matrix, so the resulting sizes should be exactly the same pixelwise. Since this could be thought of as a window resize, I suppose the framebuffer has to be recreated at the new size. When working on desktop apps with Direct3D11 (SharpDX), I would call swapChain.ResizeBuffers() to do this. I have tried setting AutoResize = true in my iPhoneOSGameView, but then the framebuffer gets clipped as I rotate the interface, and everything disappears when rotating the interface again. I'm not doing anything strange; my framebuffer initialization is pretty vanilla:

        int scaling = (int)UIScreen.MainScreen.Scale;
        DeviceWidth = (int)UIScreen.MainScreen.Bounds.Width * scaling;
        DeviceHeight = (int)UIScreen.MainScreen.Bounds.Height * scaling;
        Size = new System.Drawing.Size((int)(DeviceWidth), (int)(DeviceHeight));
        Bounds = new System.Drawing.RectangleF(0, 0, DeviceWidth, DeviceHeight);
        Frame = new System.Drawing.RectangleF(0, 0, DeviceWidth, DeviceHeight);
        ContextRenderingApi = EAGLRenderingAPI.OpenGLES2;
        AutoResize = true;
        LayerRetainsBacking = true;
        LayerColorFormat = EAGLColorFormat.RGBA8;

    I get inconsistent results when changing Size, Bounds and Frame in my CreateFrameBuffer override, but since the documentation is so incomplete (it has nothing on Bounds and Frame), I have resorted to randomly changing things without really knowing what is going on. There is a similar question which has no answers, but I don't know if they're experiencing the same problem I am. Is my supposition correct that the framebuffer needs to be recreated? If so, does anybody know how to do it correctly in OpenTK for Xamarin.iOS?
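
    For what it's worth, the generic GLES-side step is to reallocate renderbuffer storage at the new pixel size and reset the viewport (a hedged sketch; on iOS the color buffer's storage actually comes from the CAEAGLLayer, so only buffers you allocated yourself are resized like this):

        #include <GLES2/gl2.h>

        // Reallocate a depth renderbuffer after a rotation/resize.
        void resizeDepthBuffer(GLuint depthRb, int newW, int newH) {
            glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
            glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, newW, newH);
            glViewport(0, 0, newW, newH);
        }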

    Read the article

  • Is there a good way to get pixel-perfect collision detection in XNA?

    - by ashes999
    Is there a well-known way (or perhaps a reusable bit of code) for pixel-perfect collision detection in XNA? I assume this would also use polygons (boxes/triangles/circles) for a quick first-pass test, and if that test indicated a collision, it would then search for a per-pixel collision. This can be complicated, because we have to account for scale, rotation, and transparency. WARNING: if you're using the sample code linked from the answer below, be aware that the scaling of the matrix is commented out for good reason. You don't need to uncomment it to get scaling to work.
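
    For illustration, an engine-agnostic sketch (my own) of the per-pixel pass described above, for the unrotated/unscaled case: after the cheap bounding test, walk the overlap region and report a hit where both alpha masks are non-zero:

        #include <algorithm>
        #include <cstdint>

        struct Sprite { int x, y, w, h; const uint8_t* alpha; };  // row-major alpha mask

        bool pixelCollide(const Sprite& a, const Sprite& b) {
            int x0 = std::max(a.x, b.x), x1 = std::min(a.x + a.w, b.x + b.w);
            int y0 = std::max(a.y, b.y), y1 = std::min(a.y + a.h, b.y + b.h);
            for (int y = y0; y < y1; ++y)
                for (int x = x0; x < x1; ++x)
                    if (a.alpha[(y - a.y) * a.w + (x - a.x)] &&
                        b.alpha[(y - b.y) * b.w + (x - b.x)])
                        return true;
            return false;  // scale/rotation would need an inverse transform per pixel
        }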

    Read the article

  • Which one scales better, ASP or PHP?

    - by Marin
    Let's say the website is doing fine (forums, pictures, Ajax) and it needs scaling up/scaling out. I feel more comfortable with PHP, but I have worked with ASP.NET as well. Would you say ASP.NET is much more powerful and robust, and thus easier to scale out? What would be the pros and cons of converting the website to ASP.NET, in regards to scalability and performance, versus keeping it written in PHP? Examples of personal experience in making such a conversion would be a plus. Thank you.

    Read the article

  • Oracle NoSQL Database Exceeds 1 Million Mixed YCSB Ops/Sec

    - by Charles Lamb
    We ran a set of YCSB performance tests on Oracle NoSQL Database using SSD cards and Intel Xeon E5-2690 CPUs, with the goal of achieving 1M mixed ops/sec on a 95% read / 5% update workload. We used the standard YCSB parameters: 13-byte keys and 1KB data size (1,102 bytes after serialization). The maximum database size was 2 billion records, or approximately 2 TB of data. We sized the shards to ensure that this was not an "in-memory" test (i.e. the data portion of the B-Trees did not fit into memory). All updates were durable and used the "simple majority" replica ack policy, effectively 'committing to the network'. All read operations used the Consistency.NONE_REQUIRED parameter, allowing reads to be performed on any replica. In the past we have achieved 100K ops/sec using SSD cards on a single-shard cluster (replication factor 3), so for this test we used 10 shards on 15 Storage Nodes, with each SN carrying 2 Rep Nodes and each RN assigned to its own SSD card. After correcting a scaling problem in YCSB, we blew past the 1M ops/sec mark with 8 shards and proceeded to hit 1.2M ops/sec with 10 shards. Hardware Configuration: We used 15 servers, each configured with two 335 GB SSD cards. We did not have homogeneous CPUs across all 15 servers available to us, so 12 of the 15 were Xeon E5-2690 (2.9 GHz, 2 sockets, 32 threads, 193 GB RAM) and the other 3 were Xeon E5-2680 (2.7 GHz, 2 sockets, 32 threads, 193 GB RAM). There might have been some upside in having all 15 machines configured with the faster CPU, but since CPU was not the limiting factor we don't believe the improvement would be significant. The client machines were Xeon X5670 (2.93 GHz, 2 sockets, 24 threads, 96 GB RAM). Although the clients had 96 GB of RAM, neither the NoSQL Database nor the YCSB clients require anywhere near that amount of memory, and the test could just as easily have been run with much less. Networking was all 10GigE. YCSB Scaling Problem: We made three modifications to the YCSB benchmark. The first was to allow the test to accommodate more than 2 billion records (effectively ints vs. longs). To keep the key size constant, we changed the code to use base 32 for the user ids. The second change involved the way we run the YCSB client, in order to make the test itself horizontally scalable. The basic problem has to do with the way the YCSB test creates its Zipfian distribution of keys, which is intended to model "real" loads by generating clusters of key collisions. Unfortunately, the percentage of collisions on the most contentious keys remains the same even as the number of keys in the database increases. As we scale up the load, the number of collisions on those keys increases as well, eventually exceeding the capacity of the single server used for a given key. This is not a workload that is realistic or amenable to horizontal scaling. YCSB does provide alternate key distribution algorithms, so this is not a shortcoming of YCSB in general. We decided that a better model would be for the key collisions to be limited to a given YCSB client process. That way, as additional YCSB client processes (i.e. additional load) are added, they each maintain the same number of collisions they encounter themselves, but do not increase the number of collisions on a single key in the entire store. We added client processes proportionally to the number of records in the database (and therefore the number of shards).
    This change to the use of YCSB better models a use case where new groups of users are likely to access either just their own entries, or entries within their own subgroups, rather than all users showing the same interest in a single global collection of keys. If an application finds every user with the same likelihood of wanting to modify a single global key, that application has no real hope of horizontal scaling. Finally, we used read/modify/write (also known as "compare and set") style updates during the mixed phase. This uses versioned operations to make sure that no updates are lost. This mode of operation provides better application behavior than the way we have typically run YCSB in the past, and is only practical at scale because we eliminated the shared-key collision hotspots. It is also a more realistic testing scenario. To reiterate, all updates used a simple majority replica ack policy, making them durable. Scalability Results: In the table below, the "KVS Size" column is the number of records, with the number of shards and the replication factor in parentheses. Hence, the first row indicates 400m total records in the NoSQL Database (KV Store), 2 shards, and a replication factor of 3. The "Clients" column indicates the number of YCSB client processes. "Threads" is the number of threads per process, with the total number of threads in parentheses: hence, 90 threads per YCSB process for a total of 360 threads. The client processes were distributed across 10 client machines.

        Shards  KVS Size (records)  Clients  Threads    Mixed Overall         Read Latency     Write Latency
                                                         Throughput (ops/sec)  av/95%/99% (ms)  av/95%/99% (ms)
        2       400m (2x3)          4        90 (360)   302,152               0.76/1/3         3.08/8/35
        4       800m (4x3)          8        90 (720)   558,569               0.79/1/4         3.82/16/45
        8       1600m (8x3)         16       90 (1440)  1,028,868             0.85/2/5         4.29/21/51
        10      2000m (10x3)        20       90 (1800)  1,244,550             0.88/2/6         4.47/23/53
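
    To illustrate the key-distribution change described above (a sketch under my own assumptions; the actual YCSB patch may differ): give each client process its own disjoint key range, so its Zipfian hotspots stay local and adding clients adds load without concentrating collisions on one global key:

        #include <cstdint>

        // zipfSample is drawn from a per-client Zipfian in [0, recordsPerClient)
        uint64_t clientKey(uint64_t clientId, uint64_t recordsPerClient,
                           uint64_t zipfSample) {
            return clientId * recordsPerClient + zipfSample;  // disjoint ranges
        }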

    Read the article

  • Black bars around screen? Catalyst Control Center problem?

    - by Josh B
    I just did a fresh install of Ubuntu 12.04, and I'm running an HDMI cable from my computer to my ASUS monitor. In Windows 7 I did not have these black-bar issues running at 1080p, but now in Ubuntu I have black bars around the screen. I installed the ATI Catalyst Control Center and went in to fix the scaling, but it is grayed out. As you can see, even with the override box checked I still cannot set the scaling. I also set the monitor to a lower resolution in the hope of fixing it, but that did not work either. Does anyone know how to fix this? Thanks.

    Read the article

  • How to adjust DPI in 14.04

    - by jake
    I asked about fixing DPI in 12.04. The 14.04 release notes list "Support for High-DPI screens and desktop scaling." Post-upgrade, it seems that nothing has changed; similar symptoms from my previous post persist: the 1" square here is closer to 1/2", and despite the line xserver-command=X -dpi 170 in /etc/lightdm/lightdm.conf, xdpyinfo reports 96x96 dpi. I did find that I was able to use the "Scale for menu and title bars" slider in System Settings > Displays to fix the title bar text size, instead of setting org.gnome.desktop.interface text-scaling-factor as described here. The last post also mentions that in GNOME 3, DPI is hard-coded to 96. Is this a limitation in 14.04? (I am somewhat ignorant of the distinction between GNOME and Unity.) Can I do anything to properly set my DPI?

    Read the article

  • Anyone have real world experience with Rackspace Cloud Sites at high scale?

    - by Allara
    I have a pure web service application layer using .NET. I was originally planning to use Amazon EC2, but rolling my own autoscaling procedures is a bit intimidating, and the scaling isn't very granular from a cost perspective. If the app is successful, we could be looking at relatively high scale (millions of requests per month). The app uses Amazon SimpleDB as the database layer. As a test, I have the app running successfully on Rackspace Cloud Sites. Performance seems to be equal to (if not better than) a standard EC2 instance, even with the added latency of the SimpleDB requests travelling to the Rackspace network. However, testing at this stage is at a very low scale. My question is this: has anyone had real-world experience running a high-scale application on Rackspace Cloud Sites? Moreover, once you pass the "included" 10,000 compute cycles per month, does the overall cost end up lower than rolling lots of EC2 instances? My assumption would be that with completely smooth scaling (i.e. only adding compute resources as needed), the cost could be lower on average. However, their stated goal of calibrating 10,000 CCs as a single 1.2 GHz CPU seems on average to be much more expensive than EC2. I like the idea of no-touch scaling, but is it too good to be true?

    Read the article

  • Java Imaging Framework

    - by Prabhakar
    Are there any open-source or commercial Java frameworks for image operations such as converting images from one format to another, scaling them, and so on? There should be no installation: a set of JARs on the classpath should do the job. I have looked into the java-image-scaling library, but it has issues. Thanks in advance.

    Read the article

  • iPhone: scale UIView about a specific point

    - by Greg Maletic
    I want to animate scaling down a UIView, but not about its center: about a different point. As a shot in the dark, I tried translating the view, scaling, then translating back, using a series of CGAffineTransforms. But it doesn't work: it still scales about the center. Does anyone know how to do this? Thanks very much.
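
    The underlying math (a sketch of my own): scaling about an arbitrary point p is the single concatenated transform M = T(p) * S * T(-p); applying the three steps as separate transforms (as tried above) doesn't compose them. Worked out for a 2D affine matrix:

        #include <cstdio>

        struct Affine { double a, b, c, d, tx, ty; };  // x' = a*x + c*y + tx, etc.

        Affine scaleAbout(double s, double px, double py) {
            // T(p) * S * T(-p) reduces to: scale by s, translate by p*(1 - s)
            return { s, 0, 0, s, px * (1 - s), py * (1 - s) };
        }

        int main() {
            Affine m = scaleAbout(0.5, 100, 100);
            double x = 100, y = 100;                       // the pivot point
            std::printf("%.1f %.1f\n", m.a * x + m.c * y + m.tx,
                                       m.b * x + m.d * y + m.ty);  // stays (100, 100)
        }

    On iOS specifically, setting the view layer's anchorPoint before animating the scale achieves the same effect without hand-building the transform.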

    Read the article

  • Subband decomposition using Daubechies filter

    - by misha
    I have the following two 8-tap filters:

        h0: -0.010597  0.032883  0.030841 -0.187035 -0.027984  0.630881  0.714847  0.230378
        h1: -0.230378  0.714847 -0.630881 -0.027984  0.187035  0.030841 -0.032883 -0.010597

    Here they are on a graph: [graph of the h0 and h1 filter taps]. I'm using them to obtain the approximation (the lower subband) of an image. This is a(m,n) in the following diagram: [subband decomposition diagram]. I got the coefficients and the diagram from the book Digital Image Processing, 3rd Edition, so I trust that they are correct. The star symbol denotes one-dimensional convolution (either over rows or over columns). The down arrow denotes downsampling in one dimension (either over rows or columns). My problem is that the filter coefficients of h0 and h1 sum to more than 1 (approximately 1.4, or sqrt(2) to be exact). Naturally, if I convolve any image with such a filter, the image will get brighter. Indeed, here's what I get (expected result on the right). Can somebody suggest what the problem is here? Why should it work if the convolution filter coefficients sum to more than 1? I have the source code, but it's quite long, so I'm hoping to avoid posting it here. If it's absolutely necessary, I'll put it up later. EDIT: What I'm doing is:
    1. Decompose into subbands
    2. Filter one of the subbands
    3. Recompose the subbands into the original image
    Note that the point isn't just to have a displayable subband-decomposed image - I have to be able to perfectly reconstruct the original image from the subbands as well. So if I scale the filtered image to compensate for my decomposition filter making the image brighter, this is what I will have to do:
    1. Decompose into subbands
    2. Apply intensity scaling
    3. Filter one of the subbands
    4. Apply inverse intensity scaling
    5. Recompose the subbands into the original image
    Step 2 performs the scaling. This is what @Benjamin is suggesting. The problem is that step 4 then becomes necessary, or the original image will not be properly reconstructed. This longer method will work. However, the textbook explicitly says that no scaling is performed on the approximation subband. Of course, it's possible that the textbook is wrong. What's more likely, though, is that I'm misunderstanding something about the way this all works - which is why I'm asking this question.
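
    A quick numeric check (my addition) of the property at the heart of this: orthonormal wavelet analysis filters have unit energy but a DC gain of sqrt(2), and that per-level gain is undone by the synthesis filters on reconstruction, which is why the textbook applies no extra scaling to the approximation band:

        #include <cmath>
        #include <cstdio>

        int main() {
            double h0[8] = { -0.010597, 0.032883, 0.030841, -0.187035,
                             -0.027984, 0.630881, 0.714847, 0.230378 };
            double sum = 0.0, energy = 0.0;
            for (double c : h0) { sum += c; energy += c * c; }
            std::printf("sum = %f (sqrt(2) = %f), energy = %f\n",
                        sum, std::sqrt(2.0), energy);  // ~1.414214 and ~1.0
        }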

    Read the article

  • How to maintain encapsulation with composition in C++?

    - by iFreilicht
    I am designing a class Master that is composed of multiple other classes: A, Base, C and D. These four classes have absolutely no use outside of Master and are meant to split up its functionality into manageable, logically divided packages. They also provide extensible functionality, as in the case of Base, which clients can inherit from. But how do I maintain encapsulation of Master with this design? So far I've got two approaches, both far from perfect:

    1. Replicate all accessors: Just write accessor methods for all accessor methods of all the classes Master is composed of. This gives perfect encapsulation, because no implementation detail of Master is visible, but it is extremely tedious and makes the class definition monstrous - which is exactly what the composition was supposed to prevent. Also, adding functionality to one of the composees (is that even a word?) would require rewriting all those methods in Master. An additional problem is that inheritors of Base could only alter, not add, functionality.

    2. Use non-assignable, non-copyable member accessors: Have a class accessor<T> that cannot be copied, moved or assigned to, but overrides operator-> to access an underlying shared_ptr, so that calls like Master->A()->niceFunction(); become possible. My problem with this is that it somewhat breaks encapsulation, as I would now be unable to change my implementation of Master to use a different class for the functionality of niceFunction(). Still, it is the closest I've gotten without using the ugly first approach. It also fixes the inheritance issue quite nicely. A small side question would be whether such a class already exists in std or Boost. (A sketch of what I mean appears after the code below.)

    EDIT: Wall of code. I will now post the code of the header files of the classes discussed. It may be a bit hard to understand, but I'll do my best to explain all of it.

    1. GameTree.h - The foundation of it all. This is basically a doubly-linked tree holding GameObject instances, which we'll get to later. It also has its own custom iterator, GTIterator, but I left that out for brevity. WResult is an enum with the values SUCCESS and FAILED, but it's not really important.

    class GameTree { public: //Static methods for the root. Only one root is allowed to exist at a time!
static void ConstructRoot(seed_type seed, unsigned int depth); inline static bool rootExists(){ return static_cast<bool>(rootObject_); } inline static weak_ptr<GameTree> root(){ return rootObject_; } //delta is in ms, this is used for velocity, collision and such void tick(unsigned int delta); //Interaction with the tree inline weak_ptr<GameTree> parent() const { return parent_; } inline unsigned int numChildren() const{ return static_cast<unsigned int>(children_.size()); } weak_ptr<GameTree> getChild(unsigned int index) const; template<typename GOType> weak_ptr<GameTree> addChild(seed_type seed, unsigned int depth = 9001){ GOType object{ new GOType(seed) }; return addChildObject(unique_ptr<GameTree>(new GameTree(std::move(object), depth))); } WResult moveTo(weak_ptr<GameTree> newParent); WResult erase(); //Iterators for for( : ) loop GTIterator& begin(){ return *(beginIter_ = std::move(make_unique<GTIterator>(children_.begin()))); } GTIterator& end(){ return *(endIter_ = std::move(make_unique<GTIterator>(children_.end()))); } //unloading should be used when objects are far away WResult unloadChildren(unsigned int newDepth = 0); WResult loadChildren(unsigned int newDepth = 1); inline const RenderObject& renderObject() const{ return gameObject_->renderObject(); } //Getter for the underlying GameObject (I have not tested the template version) weak_ptr<GameObject> gameObject(){ return gameObject_; } template<typename GOType> weak_ptr<GOType> gameObject(){ return dynamic_cast<weak_ptr<GOType>>(gameObject_); } weak_ptr<PhysicsObject> physicsObject() { return gameObject_->physicsObject(); } private: GameTree(const GameTree&); //copying is only allowed internally GameTree(shared_ptr<GameObject> object, unsigned int depth = 9001); //pointer to root static shared_ptr<GameTree> rootObject_; //internal management of a child weak_ptr<GameTree> addChildObject(shared_ptr<GameTree>); WResult removeChild(unsigned int index); //private members shared_ptr<GameObject> gameObject_; shared_ptr<GTIterator> beginIter_; shared_ptr<GTIterator> endIter_; //tree stuff vector<shared_ptr<GameTree>> children_; weak_ptr<GameTree> parent_; unsigned int selfIndex_; //used for deletion, this isn't necessary void initChildren(unsigned int depth); //constructs children }; 2. GameObject.h This is a bit hard to grasp, but GameObject basically works like this: When constructing a GameObject, you construct its basic attributes and a CResult-instance, which contains a vector<unique_ptr<Construction>>. The Construction-struct contains all information that is needed to construct a GameObject, which is a seed and a function-object that is applied at construction by a factory. This enables dynamic loading and unloading of GameObjects as done by GameTree. It also means that you have to define that factory if you inherit GameObject. This inheritance is also the reason why GameTree has a template-function gameObject<GOType>. GameObject can contain a RenderObject and a PhysicsObject, which we'll later get to. Anyway, here's the code. 
class GameObject; typedef unsigned long seed_type; //this declaration magic means that all GameObjectFactorys inherit from GameObjectFactory<GameObject> template<typename GOType> struct GameObjectFactory; template<> struct GameObjectFactory<GameObject>{ virtual unique_ptr<GameObject> construct(seed_type seed) const = 0; }; template<typename GOType> struct GameObjectFactory : GameObjectFactory<GameObject>{ GameObjectFactory() : GameObjectFactory<GameObject>(){} unique_ptr<GameObject> construct(seed_type seed) const{ return unique_ptr<GOType>(new GOType(seed)); } }; //same as with the factories. this is important for storing them in vectors template<typename GOType> struct Construction; template<> struct Construction<GameObject>{ virtual unique_ptr<GameObject> construct() const = 0; }; template<typename GOType> struct Construction : Construction<GameObject>{ Construction(seed_type seed, function<void(GOType*)> func = [](GOType* null){}) : Construction<GameObject>(), seed_(seed), func_(func) {} unique_ptr<GameObject> construct() const{ unique_ptr<GameObject> gameObject{ GOType::factory.construct(seed_) }; func_(dynamic_cast<GOType*>(gameObject.get())); return std::move(gameObject); } seed_type seed_; function<void(GOType*)> func_; }; typedef struct CResult { CResult() : constructions{} {} CResult(CResult && o) : constructions(std::move(o.constructions)) {} CResult& operator= (CResult& other){ if (this != &other){ for (unique_ptr<Construction<GameObject>>& child : other.constructions){ constructions.push_back(std::move(child)); } } return *this; } template<typename GOType> void push_back(seed_type seed, function<void(GOType*)> func = [](GOType* null){}){ constructions.push_back(make_unique<Construction<GOType>>(seed, func)); } vector<unique_ptr<Construction<GameObject>>> constructions; } CResult; //finally, the GameObject class GameObject { public: GameObject(seed_type seed); GameObject(const GameObject&); virtual void tick(unsigned int delta); inline Matrix4f trafoMatrix(){ return physicsObject_->transformationMatrix(); } //getter inline seed_type seed() const{ return seed_; } inline CResult& properties(){ return properties_; } inline const RenderObject& renderObject() const{ return *renderObject_; } inline weak_ptr<PhysicsObject> physicsObject() { return physicsObject_; } protected: virtual CResult construct_(seed_type seed) = 0; CResult properties_; shared_ptr<RenderObject> renderObject_; shared_ptr<PhysicsObject> physicsObject_; seed_type seed_; }; 3. PhysicsObject That's a bit easier. It is responsible for position, velocity and acceleration. It will also handle collisions in the future. It contains three Transformation objects, two of which are optional. I'm not going to include the accessors on the PhysicsObject class because I tried my first approach on it and it's just pure madness (way over 30 functions). Also missing: the named constructors that construct PhysicsObjects with different behaviour. 
class Transformation{ Vector3f translation_; Vector3f rotation_; Vector3f scaling_; public: Transformation() : translation_{ 0, 0, 0 }, rotation_{ 0, 0, 0 }, scaling_{ 1, 1, 1 } {}; Transformation(Vector3f translation, Vector3f rotation, Vector3f scaling); inline Vector3f translation(){ return translation_; } inline void translation(float x, float y, float z){ translation(Vector3f(x, y, z)); } inline void translation(Vector3f newTranslation){ translation_ = newTranslation; } inline void translate(float x, float y, float z){ translate(Vector3f(x, y, z)); } inline void translate(Vector3f summand){ translation_ += summand; } inline Vector3f rotation(){ return rotation_; } inline void rotation(float pitch, float yaw, float roll){ rotation(Vector3f(pitch, yaw, roll)); } inline void rotation(Vector3f newRotation){ rotation_ = newRotation; } inline void rotate(float pitch, float yaw, float roll){ rotate(Vector3f(pitch, yaw, roll)); } inline void rotate(Vector3f summand){ rotation_ += summand; } inline Vector3f scaling(){ return scaling_; } inline void scaling(float x, float y, float z){ scaling(Vector3f(x, y, z)); } inline void scaling(Vector3f newScaling){ scaling_ = newScaling; } inline void scale(float x, float y, float z){ scale(Vector3f(x, y, z)); } void scale(Vector3f factor){ scaling_(0) *= factor(0); scaling_(1) *= factor(1); scaling_(2) *= factor(2); } Matrix4f matrix(){ return WMatrix::Translation(translation_) * WMatrix::Rotation(rotation_) * WMatrix::Scale(scaling_); } }; class PhysicsObject; typedef void tickFunction(PhysicsObject& self, unsigned int delta); class PhysicsObject{ PhysicsObject(const Transformation& trafo) : transformation_(trafo), transformationVelocity_(nullptr), transformationAcceleration_(nullptr), tick_(nullptr) {} PhysicsObject(PhysicsObject&& other) : transformation_(other.transformation_), transformationVelocity_(std::move(other.transformationVelocity_)), transformationAcceleration_(std::move(other.transformationAcceleration_)), tick_(other.tick_) {} Transformation transformation_; unique_ptr<Transformation> transformationVelocity_; unique_ptr<Transformation> transformationAcceleration_; tickFunction* tick_; public: void tick(unsigned int delta){ tick_ ? tick_(*this, delta) : 0; } inline Matrix4f transformationMatrix(){ return transformation_.matrix(); } } 4. RenderObject RenderObject is a base class for different types of things that could be rendered, i.e. Meshes, Light Sources or Sprites. DISCLAIMER: I did not write this code, I'm working on this project with someone else. class RenderObject { public: RenderObject(float renderDistance); virtual ~RenderObject(); float renderDistance() const { return renderDistance_; } void setRenderDistance(float rD) { renderDistance_ = rD; } protected: float renderDistance_; }; struct NullRenderObject : public RenderObject{ NullRenderObject() : RenderObject(0.f){}; }; class Light : public RenderObject{ public: Light() : RenderObject(30.f){}; }; class Mesh : public RenderObject{ public: Mesh(unsigned int seed) : RenderObject(20.f) { meshID_ = 0; textureID_ = 0; if (seed == 1) meshID_ = Model::getMeshID("EM-208_heavy"); else meshID_ = Model::getMeshID("cube"); }; unsigned int getMeshID() const { return meshID_; } unsigned int getTextureID() const { return textureID_; } private: unsigned int meshID_; unsigned int textureID_; }; I guess this shows my issue quite nicely: You see a few accessors in GameObject which return weak_ptrs to access members of members, but that is not really what I want. 
Also please keep in mind that this is NOT, by any means, finished or production code! It is merely a prototype and there may be inconsistencies, unnecessary public parts of classes and such.
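
    A minimal sketch of the accessor<T> described in approach 2 (the class name and members are my own, not from the post): non-copyable, non-movable, forwarding operator-> to the underlying shared_ptr so Master can expose sub-objects without replicating every accessor:

        #include <memory>

        template <typename T>
        class accessor {
            std::shared_ptr<T> ptr_;
        public:
            explicit accessor(std::shared_ptr<T> p) : ptr_(std::move(p)) {}
            accessor(const accessor&) = delete;             // not copyable
            accessor& operator=(const accessor&) = delete;  // not assignable
            accessor(accessor&&) = delete;                  // not movable
            T* operator->() const { return ptr_.get(); }    // master->A()->niceFunction()
        };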

    Read the article

  • CalculiX data visualizer using Qt

    - by Ann
    include "final1.h" include "ui_final1.h" include include include ifndef GL_MULTISAMPLE define GL_MULTISAMPLE 0x809D endif define numred 100 define numgrn 10 define numblu 6 final1::final1(QWidget *parent) : QGLWidget(parent) { setFormat(QGLFormat(QGL::SampleBuffers)); rotationX = -38.0; rotationY = -58.0; rotationZ = 0.0; scaling = .05; // glPolygonMode(GL_FRONT_AND_BACK,GL_FILL); //createGradient(); createGLObject(); } final1::~final1() { makeCurrent(); glDeleteLists(glObject, 1); } void final1::paintEvent(QPaintEvent * /* event */) { QPainter painter(this); draw(); } void final1::mousePressEvent(QMouseEvent *event) { lastPos = event-pos(); } void final1::mouseMoveEvent(QMouseEvent *event) { GLfloat dx = GLfloat(event-x() - lastPos.x()) / width(); GLfloat dy = GLfloat(event-y() - lastPos.y()) / height(); if (event->buttons() & Qt::LeftButton) { rotationX += 180 * dy; rotationY += 180 * dx; update(); } else if (event->buttons() & Qt::RightButton) { rotationX += 180 * dy; rotationZ += 180 * dx; update(); } lastPos = event->pos(); } void final1::createGLObject() { makeCurrent(); GLfloat f1[150],f2[150],f3[150],length=0; qreal size=2; int k=1,a,b,c,d,e,f,g,h,element_node_no=0; GLfloat x,y,z; QString str1,str2,str3,str4,str5,str6,str7,str8; int red,green,blue,index=1,displacement; int LUT[1000][3]; for(red=100;red glShadeModel(GL_SMOOTH); glObject = glGenLists(1); glNewList(glObject, GL_COMPILE); // qglColor(QColor(255, 239, 191)); glLineWidth(1.0); QLinearGradient linearGradient(0, 0, 100, 100); linearGradient.setColorAt(0.0, Qt::red); linearGradient.setColorAt(0.2, Qt::green); linearGradient.setColorAt(1.0, Qt::black); //renderArea->setBrush(linearGradient); //glColor3f(1,0,0);pow((f1[e]-f1[a]),2) QFile file("/home/41407/input1.txt"); if (!file.open(QIODevice::ReadOnly | QIODevice::Text)) return; QTextStream in(&file); while (!in.atEnd()) { QString line = in.readLine(); if(k<=125) { str1= line.section(',', 1, 1); str2=line.section(',', 2, 2); str3=line.section(',', 3, 3); x=str1.toFloat(); y=str2.toFloat(); z=str3.toFloat(); f1[k]=x; f2[k]=y; f3[k]=z; /* glBegin(GL_TRIANGLES); // glColor3f(LUT[k][0],LUT[k][1],LUT[k][2]); //QColorAt();//setPointSize(size); glVertex3f(x,y,z); glEnd();*/ } else if(k>125) { element_node_no=0; qCount(line.begin(),line.end(),',',element_node_no); // printf("\n%d",element_node_no); str1= line.section(',', 1, 1); str2=line.section(',', 2, 2); str3=line.section(',', 3, 3); str4= line.section(',', 4, 4); str5=line.section(',', 5, 5); str6=line.section(',', 6, 6); str7= line.section(',', 7, 7); str8=line.section(',', 8, 8); a=str1.toInt(); b=str2.toInt(); c=str3.toInt(); d=str4.toInt(); e=str5.toInt(); f=str6.toInt(); g=str7.toInt(); h=str8.toInt(); glBegin(GL_POLYGON); glPolygonMode(GL_FRONT_AND_BACK,GL_FILL); //brush.setColor(Qt::black);//setColor(QColor::black()); glColor3f(LUT[k][0],LUT[k][1],LUT[k++][2]); // pmp.setBrush(gradient); glVertex3f(f1[a],f2[a] ,f3[a]); glColor3f(LUT[k][0],LUT[k][1],LUT[k++][2]); glVertex3f(f1[b],f2[b] ,f3[b]); glColor3f(LUT[k][0],LUT[k][1],LUT[k++][2]); glVertex3f(f1[c],f2[c] ,f3[c]); glColor3f(LUT[k][0],LUT[k][1],LUT[k++][2]); glVertex3f(f1[d],f2[d] ,f3[d]); glColor3f(LUT[k][0],LUT[k][1],LUT[k++][2]); glVertex3f(f1[a],f2[a] ,f3[a]); glColor3f(LUT[k][0],LUT[k][1],LUT[k++][2]); //glEnd(); //glBegin(GL_LINE_LOOP); glVertex3f(f1[e],f2[e] ,f3[e]); glColor3f(LUT[k][0],LUT[k][1],LUT[k++][2]); glVertex3f(f1[f],f2[f] ,f3[f]); glColor3f(LUT[k][0],LUT[k][1],LUT[k++][2]); glVertex3f(f1[g],f2[g], f3[g]); 
glColor3f(LUT[k][0],LUT[k][1],LUT[k++][2]); glVertex3f(f1[h],f2[h], f3[h]); glColor3f(LUT[k][0],LUT[k][1],LUT[k++][2]); glVertex3f(f1[d],f2[d] ,f3[d]); glColor3f(LUT[k][0],LUT[k][1],LUT[k++][2]); glVertex3f(f1[a],f2[a] ,f3[a]); glColor3f(LUT[k][0],LUT[k][1],LUT[k++][2]); glEnd(); glBegin(GL_POLYGON); //glVertex3f(f1[a],f2[a] ,f3[a]); glVertex3f(f1[e],f2[e] ,f3[e]); glColor3f(LUT[k][0],LUT[k][1],LUT[k++][2]); glVertex3f(f1[h],f2[h], f3[h]); glColor3f(LUT[k][0],LUT[k][1],LUT[k++][2]); //glVertex3f(f1[d],f2[d] ,f3[d]); glVertex3f(f1[g],f2[g], f3[g]); glColor3f(LUT[k][0],LUT[k][1],LUT[k++][2]); glVertex3f(f1[c],f2[c] ,f3[c]); glColor3f(LUT[k][0],LUT[k][1],LUT[k++][2]); glVertex3f(f1[f],f2[f] ,f3[f]); glColor3f(LUT[k][0],LUT[k][1],LUT[k++][2]); glVertex3f(f1[b],f2[b] ,f3[b]); glColor3f(LUT[k][0],LUT[k][1],LUT[k++][2]); glEnd(); /*length=sqrt(pow((f1[e]-f1[a]),2)+pow((f2[e]-f2[a]),2)+pow((f3[e]-f3[a]),2)); printf("\n%d",length);*/ } k++; } glEndList(); file.close(); k=1; QFile file1("/home/41407/op.txt"); if (!file1.open(QIODevice::ReadOnly | QIODevice::Text)) return; QTextStream in1(&file1); k=1; while (!in1.atEnd()) { QString line = in1.readLine(); // if(k<=125) { str1= line.section(' ', 1, 1); x=str1.toFloat(); str2=line.section(' ', 2, 2); y=str2.toFloat(); str3=line.section(' ', 3, 3); z=str3.toFloat(); displacement=sqrt(pow( (x-f1[k]),2)+pow((y-f2[k]),2)+pow((z-f3[k]),2)); //printf("\n %d : %d",k,displacement); glBegin(GL_POLYGON); //glColor3f(LUT[displacement][0],LUT[displacement][1],LUT[displacement][2]); glVertex3f(f1[k],f2[k],f3[k]); glEnd(); a1[k]=x+f1[k]; a2[k]=y+f2[k]; a3[k]=z+f3[k]; //printf("\nc: %f %f %f",x,y,z); //printf("\nf: %f %f %f",f1[k],f2[k],f3[k]); //printf("\na: %f %f %f",a1[k],a2[k],a3[k]); } k++; glEndList(); } } void final1::draw() { glPushAttrib(GL_ALL_ATTRIB_BITS); glMatrixMode(GL_PROJECTION); glPushMatrix(); glLoadIdentity(); GLfloat x = 3.0 * GLfloat(width()) / height(); glOrtho(-x, +x, -3.0, +3.0, 4.0, 15.0); glMatrixMode(GL_MODELVIEW); glPushMatrix(); glLoadIdentity(); glTranslatef(0.0, 0.0, -10.0); glScalef(scaling, scaling, scaling); glRotatef(rotationX, 1.0, 0.0, 0.0); glRotatef(rotationY, 0.0, 1.0, 0.0); glRotatef(rotationZ, 0.0, 0.0, 1.0); glEnable(GL_MULTISAMPLE); glCallList(glObject); glMatrixMode(GL_MODELVIEW); glPopMatrix(); glMatrixMode(GL_PROJECTION); glPopMatrix(); glPopAttrib(); } /*uint final1::colorAt(int x) { generateShade(); QPolygonF pts = m_hoverPoints->points(); for (int i=1; i < pts.size(); ++i) { if (pts.at(i-1).x() <= x && pts.at(i).x() >= x) { QLineF l(pts.at(i-1), pts.at(i)); l.setLength(l.length() * ((x - l.x1()) / l.dx())); return m_shade.pixel(qRound(qMin(l.x2(), (qreal(m_shade.width() - 1)))), qRound(qMin(l.y2(), qreal(m_shade.height() - 1)))); } } return 0;*/ //final1:: //} /*void final1::createGLObject() { makeCurrent(); //QPainter painter; QPixmap pm(20, 20); QPainter pmp(&pm); pmp.fillRect(0, 0, 10, 10, Qt::blue); pmp.fillRect(10, 10, 10, 10, Qt::lightGray); pmp.fillRect(0, 10, 10, 10, Qt::darkGray); pmp.fillRect(10, 0, 10, 10, Qt::darkGray); pmp.end(); QPalette pal = palette(); pal.setBrush(backgroundRole(), QBrush(pm)); //setAutoFillBackground(true); setPalette(pal); //GLfloat f1[150],f2[150],f3[150],a1[150],a2[150],a3[150]; int k=1,a,b,c,d,e,f,g,h; //int p=0; GLfloat x,y,z; int displacement; QString str1,str2,str3,str4,str5,str6,str7,str8; int red,green,blue,index=1; int LUT[8000][3]; for(red=0;red //glShadeModel(GL_LINE); glObject = glGenLists(1); glNewList(glObject, GL_COMPILE); //qglColor(QColor(120,255,210)); 
glLineWidth(1.0); //glColor3f(1,0,0); QFile file("/home/41407/input.txt"); if (!file.open(QIODevice::ReadOnly | QIODevice::Text)) return; QTextStream in(&file); while (!in.atEnd()) { //glColor3f(LUT[k][0],LUT[k][1],LUT[k][2]); QString line = in.readLine(); if(k<=125) { //printf("\nline :%c",line); str1= line.section(',', 1, 1); str2=line.section(',', 2, 2); str3=line.section(',', 3, 3); x=str1.toFloat(); y=str2.toFloat(); z=str3.toFloat(); f1[k]=x; f2[k]=y; f3[k]=z; //printf("\nf: %f %f %f",f1[k],f2[k],f3[k]); } else if(k125) //for(p=0;p<6;p++) { //glColor3f(LUT[k][0],LUT[k][1],LUT[k][2]); update(); str1= line.section(',', 1, 1); str2=line.section(',', 2, 2); str3=line.section(',', 3, 3); str4= line.section(',', 4, 4); str5=line.section(',', 5, 5); str6=line.section(',', 6, 6); str7= line.section(',', 7, 7); str8=line.section(',', 8, 8); a=str1.toInt(); b=str2.toInt(); c=str3.toInt(); d=str4.toInt(); e=str5.toInt(); f=str6.toInt(); g=str7.toInt(); h=str8.toInt(); //for (p = 0; p < 6; p++) { // glBegin(GL_LINE_WIDTH); //glColor3f(LUT[126][0],LUT[126][1],LUT[126][2]); //update(); //glNormal3fv(&n[p][0]); //glVertex3f(f1[i],f2[i],f3[i]); glVertex3fv(&v[faces[i][1]][0]); glVertex3fv(&v[faces[i][2]][0]); glVertex3fv(&v[faces[i][3]][0]); //glEnd(); //} glBegin(GL_LINE_LOOP); //glColor3f(p*20,p*20,p); glColor3f(1,0,0); glVertex3f(f1[a],f2[a] ,f3[a]); //painter.fillRect(QRectF(f1[a],f2[a] ,f3[a], 2), Qt::magenta); glVertex3f(f1[b],f2[b] ,f3[b]); glVertex3f(f1[c],f2[c] ,f3[c]); glVertex3f(f1[d],f2[d] ,f3[d]); glVertex3f(f1[a],f2[a] ,f3[a]); glVertex3f(f1[e],f2[e] ,f3[e]); glVertex3f(f1[f],f2[f] ,f3[f]); glVertex3f(f1[g],f2[g], f3[g]); glVertex3f(f1[h],f2[h], f3[h]); glVertex3f(f1[d],f2[d] ,f3[d]); glVertex3f(f1[a],f2[a] ,f3[a]); //glColor3f(1,0,0); //QLinearGradient ( f1[a], f2[a], f1[b], f2[b] ); glEnd(); glBegin(GL_LINES); //glNormal3fv(&n[p][0]); //glColor3f(LUT[k][0],LUT[k][1],LUT[k][2]); glVertex3f(f1[e],f2[e] ,f3[e]); glVertex3f(f1[h],f2[h], f3[h]); glVertex3f(f1[g],f2[g], f3[g]); glVertex3f(f1[c],f2[c] ,f3[c]); glVertex3f(f1[f],f2[f] ,f3[f]); glVertex3f(f1[b],f2[b] ,f3[b]); glEnd(); } } k++; } glEndList(); qglColor(QColor(239, 255, 191)); glLineWidth(1.0); glColor3f(0,1,0); k=1; QFile file1("/home/41407/op.txt"); if (!file1.open(QIODevice::ReadOnly | QIODevice::Text)) return; QTextStream in1(&file1); k=1; while (!in1.atEnd()) { QString line = in1.readLine(); // if(k<=125) { str1= line.section(' ', 1, 1); x=str1.toFloat(); str2=line.section(' ', 2, 2); y=str2.toFloat(); str3=line.section(' ', 3, 3); z=str3.toFloat(); displacement=sqrt(pow( (x-f1[k]),2)+pow((y-f2[k]),2)+pow((z-f3[k]),2)); printf("\n %d : %d",k,displacement); glBegin(GL_POINT); glColor3f(LUT[displacement][0],LUT[displacement][1],LUT[displacement][2]); glVertex3f(x,y,z); glLoadIdentity(); glEnd(); a1[k]=x+f1[k]; a2[k]=y+f2[k]; a3[k]=z+f3[k]; //printf("\nc: %f %f %f",x,y,z); //printf("\nf: %f %f %f",f1[k],f2[k],f3[k]); //printf("\na: %f %f %f",a1[k],a2[k],a3[k]); } k++; glEndList(); } }*/ /*void final1::draw() { glPushAttrib(GL_ALL_ATTRIB_BITS); glMatrixMode(GL_PROJECTION); glPushMatrix(); glLoadIdentity(); GLfloat x = 3.0 * GLfloat(width()) / height(); glOrtho(-x, +x, -3.0, +3.0, 4.0, 15.0); glMatrixMode(GL_MODELVIEW); glPushMatrix(); glLoadIdentity(); glTranslatef(0.0, 0.0, -10.0); glScalef(scaling, scaling, scaling); glRotatef(rotationX, 1.0, 0.0, 0.0); glRotatef(rotationY, 0.0, 1.0, 0.0); glRotatef(rotationZ, 0.0, 0.0, 1.0); glEnable(GL_MULTISAMPLE); glCallList(glObject); glMatrixMode(GL_MODELVIEW); glPopMatrix(); 
glMatrixMode(GL_PROJECTION); glPopMatrix(); glPopAttrib(); }*/ I need to change the color of the portion of the beam where pressure is applied, but I am not able to color the front and back faces.

    Read the article

  • Parallelism in .NET – Introduction

    - by Reed
    Parallel programming is something every professional developer should understand, but it is rarely discussed or taught in detail in a formal manner. Software users are no longer content with applications that lock up the user interface regularly, or take large amounts of time to process data unnecessarily. Modern development requires the use of parallelism. There are no longer any excuses for us as developers. Learning to write parallel software is challenging. It requires more than reading that one chapter on parallelism in our programming language book of choice… Today's systems are no longer getting faster with each generation; in many cases, newer computers are actually slower than previous-generation systems. Modern hardware is shifting towards conservation of power, with processing scalability coming from having multiple computer cores, not faster and faster CPUs. Our CPU frequencies no longer double on a regular basis, but Moore's Law is still holding strong. Now, however, instead of scaling transistors to make processors faster, hardware manufacturers are scaling the transistors to add more discrete hardware processing threads to the system. This changes how we should think about software. In order to take advantage of modern systems, we need to redesign and rewrite our algorithms to work in parallel. As with any design domain, it helps tremendously to have a common language, as well as a common set of patterns and tools. For .NET developers, this is an exciting time for parallel programming. Version 4 of the .NET Framework is adding the Task Parallel Library. This has been back-ported to .NET 3.5 SP1 as part of the Reactive Extensions for .NET, and is available for use today in both .NET 3.5 and the .NET 4.0 beta. In order to fully utilize the Task Parallel Library and parallelism, both in .NET 4 and previous versions, we need to understand the proper terminology. For this series, I will provide an introduction to some of the basic concepts of parallelism and relate them to the tools available in .NET.

    Read the article

< Previous Page | 9 10 11 12 13 14 15 16 17 18 19 20  | Next Page >