Search Results

Search found 1379 results on 56 pages for 'fragment shader'.

  • Getting Unity 3D working on legacy Nvidia card

    - by user69545
    I installed the latest nVIDIA drivers for my FX5500 card. I understand that the X server version does not officially support this driver or card, but I was wondering what I can do to get compiz running. I have researched this issue for hours but cannot come up with an answer myself. I might be doing all this for nothing, but I wanted to at least try. Here is the output of my test:

        mike@mike-linux-box:~$ /usr/lib/nux/unity_support_test -p
        OpenGL vendor string:   NVIDIA Corporation
        OpenGL renderer string: GeForce FX 5500/AGP/SSE2
        OpenGL version string:  2.1.2 NVIDIA 173.14.35
        Not software rendered:    yes
        Not blacklisted:          no
        GLX fbconfig:             yes
        GLX texture from pixmap:  yes
        GL npot or rect textures: yes
        GL vertex program:        yes
        GL fragment program:      yes
        GL vertex buffer object:  yes
        GL framebuffer object:    yes
        GL version is 1.4+:       yes
        Unity 3D supported:       no

    So I was wondering: what is the "Not blacklisted" test? Is this the Nouveau blacklisting? The nVIDIA driver did that automatically. Does this need to be removed? Any help would be appreciated. I just want to run compiz effects. Thanks.

    Read the article

  • Transition from 2D to 3D Game development [closed]

    - by jakebird451
    I have been working in the 2D world for a long time, from manual blitting in Windows to SDL to Python (pygame, pyopengl) and a bunch in between. Needless to say, I have been programming for a while. A while ago I started to program in OpenGL via C++ on my Mac, and after a while my work got fairly intricate (3D models with skeleton structure and terrain development). After a long time of tinkering I stopped, because all that heavy work had only yielded a low-level understanding of how OpenGL works. Still interested in graphics and game development, I went searching for a stable game engine with some features to grow on:

        Licence requirement: anything other than GPL (LGPL will do)
        OS requirement: Mac & Windows
        Shader: GLSL or CG (GLSL preferred due to experience)
        Models: any model structure with rigging (bone) support & animation

    I am currently looking at http://www.ogre3d.org/ and starting to meddle with some examples. However, I am a little reluctant to spend a lot of time on it only for it to turn out to be another dead end. So instead of falling down a spiraling black pit, I am posting my question to you to lead me in the right direction based on my requirements. How was your experience with the engine you recommend? Is it well documented? Does it have well-documented examples? Any library requirements (Boost, libpng, etc.)?

    Read the article

  • Per-vertex animation with VBOs: Stream each frame or use index offset per frame?

    - by charstar
    Scenario: Meshes are animated using either skeletons (skinned animation) or some form of morph targets (i.e. per-vertex key frames). In either case, the animations are known in full at load time; there is no physics, IK solving, or any other form of in-game pose solving. The number of character actions (animations) will be limited but rich (hand-animated). There may be multiple characters using each mesh and its animations simultaneously in-game (they will be at different poses/keyframes at the same time). Assume color and texture coordinate buffers are static.

    Goal: To leverage the richness of well-vetted animation tools such as Blender to do the heavy lifting for a small but rich set of animations. I am aware of additive pose blending like that from Naughty Dog and similar techniques, but I would prefer to expend a little RAM/VRAM to avoid implementing a thesis-ready pose solver. I would also like to avoid implementing a key-frame + interpolation curve solver (reinventing Blender vertex groups and IPOs).

    Current considerations:

        1. Much like a non-shader-powered pose solver, create a VBO for each character and copy vertex and normal data to each VBO on each frame (VBO in STREAMING).
        2. Create one VBO for each animation where each frame (interleaved vertex and normal data) is concatenated onto the VBO. Each character then simply has a buffer pointer offset based on its current animation frame, e.g. pointer offset = (numVertices + numNormals) * frameNumber. (VBO in STATIC; see the sketch below.)

    Known trade-offs: In 1 above, each VBO would be small, but there would be many VBOs and therefore lots of buffer binding and vertex copying each frame; both client and pipeline intensive. In 2 above, there would be few VBOs, therefore insignificant buffer binding and no vertex data getting jammed down the pipe each frame, but each VBO would be quite large. Are there any pitfalls to number 2 (aside from finite memory)? Are there other methods that I am missing?
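
    A minimal sketch of option 2, under assumed names (vbo, ibo, vertsPerFrame, indexCount, and drawCharacterFrame are illustrative, not from the post): one large static VBO holds every key frame as interleaved position + normal data, and each character selects its frame purely through a byte offset when setting the pointers.

        #include <GL/gl.h>
        // Note: glBindBuffer and friends are OpenGL 1.5; on some platforms an
        // extension loader is needed to get their declarations.

        // pos (3 floats) + normal (3 floats) per vertex, interleaved
        static const GLsizei kStride = 6 * sizeof(GLfloat);

        void drawCharacterFrame(GLuint vbo, GLuint ibo, GLsizei vertsPerFrame,
                                GLsizei indexCount, int frameNumber)
        {
            // Byte offset of this character's current key frame in the big VBO.
            const GLsizeiptr frameOffset =
                (GLsizeiptr)frameNumber * vertsPerFrame * kStride;

            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_NORMAL_ARRAY);

            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glVertexPointer(3, GL_FLOAT, kStride, (const GLvoid*)frameOffset);
            glNormalPointer(GL_FLOAT, kStride,
                            (const GLvoid*)(frameOffset + 3 * sizeof(GLfloat)));

            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
            glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);

            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
            glBindBuffer(GL_ARRAY_BUFFER, 0);
            glDisableClientState(GL_NORMAL_ARRAY);
            glDisableClientState(GL_VERTEX_ARRAY);
        }

    Two characters sharing the same mesh then differ only in the frameNumber they pass, so no per-frame vertex copying takes place.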

    Read the article

  • Architecture for a central renderer rather than self-rendering

    - by The Communist Duck
    For the architectural side of rendering, there are two main approaches: having each object render itself, or having a single renderer that renders everything. I'm currently aiming for the second, for the following reasons:

        - The list can be sorted so each shader is bound only once. Otherwise each object would have to bind its shader, because it can't be sure it's active. The objects could also be sorted and grouped.
        - Easier to swap APIs. With a few macro lines, it can be easy to swap between a DirectX renderer and an OpenGL renderer (not a reason for my project, but still a good point).
        - Easier to manage rendering code.

    Of course, if anyone has strong recommendations for the first method, I will listen to them. But I was wondering how to make this work.

    First idea: The renderer has a list of pointers to the renderable components of each entity, which register themselves on RenderComponent creation. However, I'm worried that this may end up as a lot of extra pointer weight. But I can sort the list of pointers every so often.

    Second idea: The entire list of entities is passed to the renderer each render call. The renderer then sorts the list (each call, or maybe once?) and gets what it wants. That's a lot of passing and/or sorting, however.

    Other ideas: ??? PROFIT. Anyone got ideas? Thank you.
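
    A minimal sketch of the first idea, with assumed names (Shader, RenderComponent, Renderer are illustrative): components register themselves with a central renderer, which keeps the list sorted by shader so each shader is bound at most once per run.

        #include <algorithm>
        #include <vector>

        struct Shader {
            void bind() { /* glUseProgram(...) or API equivalent */ }
        };

        struct RenderComponent {
            Shader* shader = nullptr;
            void draw() { /* issue the actual draw call */ }
        };

        class Renderer {
        public:
            void add(RenderComponent* rc) { components.push_back(rc); dirty = true; }

            void renderAll() {
                if (dirty) {  // re-sort only when the component set has changed
                    std::sort(components.begin(), components.end(),
                              [](const RenderComponent* a, const RenderComponent* b) {
                                  return a->shader < b->shader;
                              });
                    dirty = false;
                }
                Shader* bound = nullptr;
                for (RenderComponent* rc : components) {
                    if (rc->shader != bound) {  // bind each shader once per batch
                        rc->shader->bind();
                        bound = rc->shader;
                    }
                    rc->draw();
                }
            }
        private:
            std::vector<RenderComponent*> components;
            bool dirty = false;
        };

    The pointer overhead is one pointer per renderable, and the sort cost is only paid when the set changes, which matches the "sort every so often" idea.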

    Read the article

  • There's IP but can't reach gateway

    - by icky
    I have just installed Ubuntu 12.04 on my new laptop and brought it back home, but I found the wireless network does not work. Strangely, it has the correct IP, but can't connect to the gateway. ifconfig gives IP 192.168.64.36, with broadcast 192.168.79.255 and mask 255.255.240.0; these are all correct. The gateway is at 192.168.64.1.

        cat /etc/resolv.conf
        nameserver 192.168.64.1
        nameserver 127.0.0.1

    which I think is also right. But when I ping 192.168.64.1, all packets are lost. Please help me with this; I really do not know what happened to my network settings. Huckle, thank you for your reply:

        ifconfig
        wlan0  Link encap:Ethernet  HWaddr 88:f9:af:2a:ca:1b
               inet addr:192.168.64.36  Bcast:192.168.79.255  Mask:255.255.240.0
               inet6 addr: fe80::8a9f:faff:fea2/64 Scope:Link
               UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
               RX packets:27 errors:0 dropped:0 overruns:0 frame:0
               TX packets:376 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:1000
               RX bytes:3950  TX bytes:60288

        iwconfig
        wlan0  IEEE 802.11bgn  ESSID:"Chiono"
               Mode:Managed  Frequency:2.417 GHz  Access Point: 82:54:99:94:6D:43
               Bit Rate=13.5 Mb/s  Tx-Power=13 dBm
               Retry long limit:7  RTS thr:off  Fragment thr:off
               Encryption key:off
               Power Management:on
               Link Quality=70/70  Signal level=-32 dBm
               Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
               Tx excessive retries:9  Invalid misc:10  Missed beacon:0

        route
        Kernel IP routing table
        Destination   Gateway       Genmask          Flags Metric Ref Use Iface
        default       192.168.64.1  0.0.0.0          UG    0      0   0   wlan0
        link-local    *             255.255.0.0      U     1000   0   0   wlan0
        192.168.64.0  *             255.255.240.0    U     2      0   0   wlan0

    Thank you very much

    Read the article

  • How do I change until the next underscore in Vim?

    - by Nathan Long
    If I have this text in Vim, and my cursor is at the first character:

        www.foo.com

    I know that I can do:

        cw to change up to the first period, because a word (lowercase w) ends at any punctuation OR white space
        cW to change the whole address, because a Word (uppercase w) ends only at whitespace

    Now, what if I have this:

        stupid_method_name

    and want to change it to this?

        awesome_method_name

    Both cw and cW change the whole thing, but I just want to change the fragment before the underscore. My fallback technique is c/_, meaning 'change until you hit the next underscore in a search,' but for me that also causes all underscores to be highlighted as search terms, which is slightly annoying. Is there a specifier like w or W that doesn't include underscores?

    Read the article

  • New video card? [closed]

    - by TutorialPoint
    I ran into some problems with my ATI Radeon X1200. I want it to support vertex shader 3.0, but it only does 2.0. This matters because Call of Duty: Modern Warfare 2 only works with 3.0. So I want a new video card. Can someone help me get a clearer picture of what to buy? I bet if I just stuck with some seller, I would end up with a video card that does not support what I want, or is too expensive. I really do not want it to be above $75, if possible. Some info about my PC:

        Manufacturer: XXODD
        Processor: AMD Athlon64 X2 Dual Core 4000+ 2 GHz (but currently running a 32-bit OS)
        ATI Radeon X1200 video card (the problem)
        1 GB RAM DDR2
        MS-7367 motherboard
        Windows 7 Ultimate 32-bit, Build 7600 RTM

    Read the article

  • Internet Timeouts with TP-Link TL-WN821N v2 wireless usb stick

    - by user1622959
    A short time after accessing the internet, the browser/download times out. Before the timeout the internet works OK briefly; afterwards, the wireless is still connected with a strong signal, but every internet access results in a timeout. When I leave the PC for a while, the internet is back, just to time out again as soon as I start using it. The same happens when I reconnect to the router. Also, when I surf the internet it takes a couple of minutes until the timeout, but when I download something it times out in a matter of seconds. The wireless adapter works just fine in Windows, and internet via ethernet cable works just fine in Ubuntu. Does anyone have the same problem or know a solution? I use Ubuntu 12.10 x64. The problem has occurred since I installed Ubuntu (which was a few days ago). Here is some output that might be useful:

        serus@serus-Ubuntu-PC:~$ lsusb
        Bus 002 Device 002: ID 0cf3:1002 Atheros Communications, Inc.
            TP-Link TL-WN821N v2 802.11n [Atheros AR9170]

        serus@serus-Ubuntu-PC:~$ lsmod
        Module     Size   Used by
        carl9170   82083  0

        serus@serus-Ubuntu-PC:~$ modinfo carl9170
        filename:    /lib/modules/3.5.0-21-generic/kernel/drivers/net/wireless/ath/carl9170/carl9170.ko
        alias:       arusb_lnx
        alias:       ar9170usb
        firmware:    carl9170-1.fw
        description: Atheros AR9170 802.11n USB wireless

        serus@serus-Ubuntu-PC:~$ iwconfig
        wlan0  IEEE 802.11bgn  ESSID:"virginmedia0137463"
               Mode:Managed  Frequency:2.462 GHz  Access Point: A0:21:B7:F8:29:B6
               Bit Rate=240 Mb/s  Tx-Power=20 dBm
               Retry long limit:7  RTS thr:off  Fragment thr:off
               Power Management:off
               Link Quality=66/70  Signal level=-44 dBm
               Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
               Tx excessive retries:1399  Invalid misc:18  Missed beacon:0

        serus@serus-Ubuntu-PC:~$ sudo lshw -C network
        *-network
            description: Wireless interface
            physical id: 1
            bus info: usb@2:2
            logical name: wlan0
            serial: 00:27:19:bb:00:19
            capabilities: ethernet physical wireless
            configuration: broadcast=yes driver=carl9170 driverversion=3.5.0-21-generic
                firmware=1.9.4 ip=192.168.0.6 link=yes multicast=yes wireless=IEEE 802.11bgn

    Read the article

  • How can I avoid a 302 for Fetch as Bot?

    - by CookieMonster
    I originally posted this on Stack Overflow, but I believe here is a better place to ask. My web application is very similar to notepad.cc, which redirects to a randomly generated URL upon access, e.g. http://myapp.com/roTr94h4Gd. (Please note that notepad.cc is not my site.) Probably because of this redirect feature, when I do "fetch as Google" or "fetch as Bingbot", I get a 302 and no HTML content, not even an <html></html> tag.

        HTTP/1.1 302 Moved Temporarily
        Server: nginx/1.4.1
        Date: Tue, 01 Oct 2013 04:37:37 GMT
        Content-Type: text/html
        Transfer-Encoding: chunked
        Connection: keep-alive
        X-Powered-By: PHP/5.4.17-1~dotdeb.1
        Set-Cookie: PHPSESSID=vp99q5e5t5810e3bnnnvi6sfo2; expires=Thu, 03-Oct-2013 04:37:37 GMT; path=/
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Location: /roTr94h4Gd

    How should I avoid the 302 in this case? I suppose I could modify my site to prevent the redirect, but generating a random URL on each access is a necessary feature of my web app. I added a <meta name="fragment" content="!"> tag to my index page and set it to return a static snapshot of the page when the flag is set, but this still returns a 302. I also added a header to return 200 before redirecting, but this had no effect either. Could someone suggest a good way to solve this problem?

    Read the article

  • How to use mount points in MilkShape models?

    - by vividos
    I have bought the Warriors & Commoners model pack from Frogames. The pack contains (among other formats) two animated models and several non-animated objects (axe, shield, pilosities, etc.) in MilkShape3D format. I looked at the official "MilkShape 3D Viewer v2.0" (msViewer2.zip at http://www.chumba.ch/chumbalum-soft/ms3d/download.html) source code and implemented loading the model and calculating the joint matrices, and everything looks fine. In the model there are several joints that are designated as the "mount points" for the static objects like the axe and shield. I now want to "put" the axe into the hand of the animated model, and I couldn't quite figure out how. I put the animated vertices in a VBO that gets updated every frame (I know I should do this with a shader, but I haven't had time for that yet). I put the static vertices in another VBO that I want to keep static and not update every frame. I then tried to render the animated vertices first, then use the joint matrix of the "mount joint" to calculate the location of the static object. I tried many things, and what appears to be right is to transpose the joint matrix, then use glMultMatrix() to transform the modelview matrix. For some objects like the axe this works, but not for others, e.g. the pilosities. Now my question: how is this generally implemented when using bone/joint models, and especially with MilkShape3D models? Am I on the right track?
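
    For reference, a sketch of the usual approach, under stated assumptions: jointAbsolute is the mount joint's absolute matrix for the current frame, stored row-major, the model's own modelview transform is already on the stack, and drawAxe() stands in for rendering the static VBO.

        #include <GL/gl.h>

        void drawAxe();  // placeholder: renders the static attachment VBO

        void drawMountedObject(const GLfloat jointAbsolute[16])
        {
            // Fixed-function OpenGL expects column-major matrices, so a
            // row-major joint matrix can be applied with the transpose
            // variant (OpenGL 1.3+) instead of transposing by hand:
            glPushMatrix();
            glMultTransposeMatrixf(jointAbsolute);
            drawAxe();
            glPopMatrix();
        }

    When some attachments come out right and others (like the pilosities) don't, a common culprit is that those meshes were exported relative to a different space than the joint, so they also need their inverse bind-pose transform applied before the joint matrix.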

    Read the article

  • My internet connection slows or dies unexpectedly

    - by genesis
    I installed Ubuntu 10.04 once again and I'm having some problems which I had before, but I have no idea how I solved them. On Windows everything works fine and I had no problems with this. My problem is that sometimes, when browsing the internet, webpages start to load really slowly, sometimes nothing loads at all (Error 118 (net::ERR_CONNECTION_TIMED_OUT): The operation timed out.), and it starts to work again after a few minutes. My IPv4 settings are automatic (DHCP), and IPv6 settings are Ignored/Disabled. I think my previous problems had something to do with IPv6, but I'm not sure. Is there a fix for this?

        iwconfig
        lo     no wireless extensions.
        eth0   no wireless extensions.
        wlan0  IEEE 802.11bgn  ESSID:"Fsite1"
               Mode:Managed  Frequency:2.442 GHz  Access Point: C8:3A:35:40:43:68
               Bit Rate=0 kb/s  Tx-Power=20 dBm
               Retry long limit:7  RTS thr:off  Fragment thr:off
               Encryption key:off
               Power Management:on
               Link Quality=43/70  Signal level=-67 dBm
               Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
               Tx excessive retries:0  Invalid misc:0  Missed beacon:0

    Read the article

  • Flickering problem with world matrix

    - by gnomgrol
    I have a pretty weird problem today. As soon as I try to change the translation or rotation matrix for an object to something other than (0,0,0), the object starts to flicker (scaling works fine). It rapidly and randomly switches between the spot it should be in and a crippled something. I first thought the problem was z-fighting, but now I'm pretty sure it isn't. I have no clue what it could be; here are two screenshots of the two states the plant switches between. I already used PIX, but couldn't find anything of use (I'm not a very good debugger anyway). I would appreciate any help, thanks a lot! Important code:

        D3DXMatrixIdentity(&World);
        D3DXVECTOR3 rotaxisX = D3DXVECTOR3(1.0f, 0.0f, 0.0f);
        D3DXVECTOR3 rotaxisY = D3DXVECTOR3(0.0f, 1.0f, 0.0f);
        D3DXVECTOR3 rotaxisZ = D3DXVECTOR3(0.0f, 0.0f, 1.0f);
        D3DXMATRIX temprot1, temprot2, temprot3;
        D3DXMatrixRotationAxis(&temprot1, &rotaxisX, 0);
        D3DXMatrixRotationAxis(&temprot2, &rotaxisY, 0);
        D3DXMatrixRotationAxis(&temprot3, &rotaxisZ, 0);
        Rotation = temprot1 * temprot2 * temprot3;
        D3DXMatrixTranslation(&Translation, 0.0f, 10.0f, 0.0f);
        D3DXMatrixScaling(&Scale, 0.02f, 0.02f, 0.02f);
        // Set obj's world space using the transformations
        World = Translation * Rotation * Scale;

    Shader:

        cbuffer cbPerObject
        {
            matrix worldMatrix;
            matrix viewMatrix;
            matrix projectionMatrix;
        };

        // Change the position vector to be 4 units for proper matrix calculations.
        input.position.w = 1.0f;

        // Calculate the position of the vertex against the world, view, and projection matrices.
        output.position = mul(input.position, worldMatrix);
        output.position = mul(output.position, viewMatrix);
        output.position = mul(output.position, projectionMatrix);
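
    An aside, not from the post: with D3DX's row-vector convention (the D3DXMatrix* helpers build matrices that row vectors multiply on the left, matching mul(input.position, worldMatrix) in the shader), transforms compose left to right, so the conventional composition scales first, then rotates, then translates:

        // Hypothetical reordering, reusing the names from the snippet above:
        D3DXMatrixScaling(&Scale, 0.02f, 0.02f, 0.02f);
        D3DXMatrixTranslation(&Translation, 0.0f, 10.0f, 0.0f);
        World = Scale * Rotation * Translation;  // not Translation * Rotation * Scale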

    Read the article

  • Could someone explain why my world reconstructed from depth position is incorrect?

    - by yuumei
    I am attempting to reconstruct the world position in the fragment shader from a depth texture. I pass in the 8 frustum points in world space, interpolate them across fragments, and then interpolate from near to far by the depth:

        highp float depth = (2.0 * CameraPlanes.x) /
            (CameraPlanes.y + CameraPlanes.x -
             texture( depthTexture, textureCoord ).x * (CameraPlanes.y - CameraPlanes.x));

        // Reconstruct the world position from the linear depth
        highp vec3 world = mix( nearWorldPos, farWorldPos, depth );

    CameraPlanes.x is the near plane, CameraPlanes.y the far. Assuming that my frustum positions are correct and my depth looks correct, why is my world position wrong? (My depth texture is of format GL_DEPTH_COMPONENT32F, if that matters.) Thanks! :D

    Update: screenshot of world position: http://imgur.com/sSlHd. So you can see it looks nearly correct. However, as the camera moves, the colours (positions) change, which they shouldn't. I can get this to work if I write this into the depth attachment in the previous pass:

        gl_FragDepth = gl_FragCoord.z / gl_FragCoord.w / CameraPlanes.y;

    and then read the depth texture like so:

        depth = texture( depthTexture, textureCoord ).x

    However, this will kill the hardware z-buffer optimizations.
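
    For completeness, a CPU-side sketch (assumed GLM-style code, not from the post) of deriving the eight world-space frustum corners by unprojecting the NDC cube through the inverse view-projection matrix; these are the values that get interpolated as nearWorldPos/farWorldPos above:

        #include <glm/glm.hpp>

        // Fills corners[0..3] with the near-plane corners and corners[4..7]
        // with the far-plane corners, in world space.
        void frustumCornersWorld(const glm::mat4& view, const glm::mat4& proj,
                                 glm::vec3 corners[8])
        {
            const glm::mat4 invVP = glm::inverse(proj * view);
            int i = 0;
            for (int z = 0; z < 2; ++z)          // 0 = near, 1 = far
                for (int y = 0; y < 2; ++y)
                    for (int x = 0; x < 2; ++x) {
                        // NDC cube corner in [-1,1]^3, unprojected to world space
                        glm::vec4 p = invVP * glm::vec4(2.0f * x - 1.0f,
                                                        2.0f * y - 1.0f,
                                                        2.0f * z - 1.0f, 1.0f);
                        corners[i++] = glm::vec3(p) / p.w;  // perspective divide
                    }
        }

    If the interpolated positions drift as the camera moves, it is worth double-checking that these corners are rebuilt from the same view matrix that was current when the depth pass was rendered.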

    Read the article

  • LOD in modern games

    - by Firas Assaad
    I'm currently working on my master's thesis about LOD and mesh simplification, and I've been reading many academic papers and articles on the subject. However, I can't find enough information about how LOD is being used in modern games. I know many games use some sort of dynamic LOD for terrain, but what about elsewhere? "Level of Detail for 3D Graphics", for example, points out that discrete LOD (where artists prepare several models in advance) is widely used because of the performance overhead of continuous LOD. That book was published in 2002, however, and I'm wondering if things are different now. There has been some research into performing dynamic LOD using the geometry shader (this paper for example, with its implementation in ShaderX6); would that be used in a modern game? To summarize, my question is about the state of LOD in modern video games: what algorithms are used, and why? In particular, is view-dependent continuous simplification used, or does the runtime overhead make discrete models with proper blending and impostors a more attractive solution? If discrete models are used, is an algorithm (e.g. vertex clustering) used to generate them offline, do artists manually create the models, or is a combination of both methods used?

    Read the article

  • HLSL: What do you get when you subtract world position from InvertViewProjection.Transform?

    - by cubrman
    In one of NVIDIA's vertex shaders (the metal one) I found the following code:

        // transform object normals, tangents, & binormals to world-space:
        float4x4 WorldITXf : WorldInverseTranspose < string UIWidget="None"; >;

        // provide transform from "view" or "eye" coords back to world-space:
        float4x4 ViewIXf : ViewInverse < string UIWidget="None"; >;

        ...

        float4 Po = float4(IN.Position.xyz,1);  // homogeneous location coordinates
        float4 Pw = mul(Po,WorldXf);            // convert to "world" space
        OUT.WorldView = normalize(ViewIXf[3].xyz - Pw.xyz);

    The term OUT.WorldView is subsequently used in a pixel shader to compute lighting:

        float3 Ln = normalize(IN.LightVec.xyz);
        float3 Nn = normalize(IN.WorldNormal);
        float3 Vn = normalize(IN.WorldView);
        float3 Hn = normalize(Vn + Ln);
        float4 litV = lit(dot(Ln,Nn),dot(Hn,Nn),SpecExpon);
        DiffuseContrib = litV.y * Kd * LightColor + AmbiColor;
        SpecularContrib = litV.z * LightColor;

    Can anyone tell me what exactly WorldView is here? And why do they add it to the normal?

    Read the article

  • Low Graphics and laggy screen after update to 13.10 from 13.04

    - by Wh0RU
    After updating from 13.04 to 13.10, much of the Unity 3D stuff has stopped working. I am using a Samsung Series 3 (NP350V5X) laptop which has switchable Intel and AMD Radeon HD 7670M graphics.

        - xserver-xorg-video-ati works, but with NO 3D support and very low graphics. [I am currently using this]
        - fglrx & fglrx-updates show a blank screen after login.
        - The Intel graphics installer doesn't work either (dependency error).

    Output of sudo lshw -c video:

        *-display
            description: VGA compatible controller
            product: Thames [Radeon HD 7500M/7600M Series]
            vendor: Advanced Micro Devices, Inc. [AMD/ATI]
            physical id: 0
            bus info: pci@0000:01:00.0
            version: 00
            width: 64 bits
            clock: 33MHz
            capabilities: pm pciexpress msi vga_controller bus_master cap_list rom
            configuration: driver=radeon latency=0
            resources: irq:45 memory:e0000000-efffffff memory:c0120000-c013ffff ioport:3000(size=256) memory:c0100000-c011ffff
        *-display
            description: VGA compatible controller
            product: 3rd Gen Core processor Graphics Controller
            vendor: Intel Corporation
            physical id: 2
            bus info: pci@0000:00:02.0
            version: 09
            width: 64 bits
            clock: 33MHz
            capabilities: msi pm vga_controller bus_master cap_list rom
            configuration: driver=i915 latency=0
            resources: irq:46 memory:bfc00000-bfffffff memory:d0000000-dfffffff ioport:4000(size=64)

    Similarly:

        $ /usr/lib/nux/unity_support_test -p
        OpenGL vendor string:   VMware, Inc.
        OpenGL renderer string: Gallium 0.4 on llvmpipe (LLVM 3.3, 256 bits)
        OpenGL version string:  2.1 Mesa 9.2.1
        Not software rendered:    no
        Not blacklisted:          yes
        GLX fbconfig:             yes
        GLX texture from pixmap:  yes
        GL npot or rect textures: yes
        GL vertex program:        yes
        GL fragment program:      yes
        GL vertex buffer object:  yes
        GL framebuffer object:    yes
        GL version is 1.4+:       yes
        Unity 3D supported:       no

    This shows no 3D support. Can anyone please guide me on how to make things work again?

    Read the article

  • Views : ViewControllers, many to one, or one to one?

    - by conor
    I have developed an Android application where, typically, each view (layout.xml) displayed on the screen has its own corresponding fragment (for the purpose of this question I may refer to this as a ViewController). These views and fragments/ViewControllers are appropriately named to reflect what they display. This has the effect of allowing the programmer to easily pinpoint the files associated with what they see on any given screen. The above refers to the one-to-one part of my question. Please note that there are a few exceptions where very similar content is displayed on two views, so one ViewController is used for both (using a simple switch (type) to determine which layout.xml file to load). On the flip side, I am currently working on the iOS version of the same app, which I didn't develop. It seems to adopt more of a one-to-many (ViewController:View) approach: there is one ViewController that handles the display logic for many different types of views. In the ViewController are an assortment of boolean flags and arrays of data (to be displayed) that are used to determine which view to load and how to display it. This seems very cumbersome to me, and coupled with no comments and ambiguous variable names, I am finding it very difficult to implement changes to the project. What do you think of the two approaches? Which would you prefer? I'm seriously considering putting in some extra time at work to refactor the iOS app into a more 1:1-oriented approach. My reasoning for 1:1 over M:1 is modularity and legibility. After all, don't some people measure the quality of code by how easy it is for another developer to pick up the reins, or how easy it is to pull a piece of code and use it somewhere else?

    Read the article

  • Why does my VertexDeclaration apparently not contain Position0?

    - by Phil
    I'm trying to get my code from calling each individual draw call down to using at least a VertexBuffer, and preferably an IndexBuffer, but now that I'm attempting to test my code, I'm getting the error:

        The current vertex declaration does not include all the elements required
        by the current vertex shader. Position0 is missing.

    This makes absolutely no sense to me, as my VertexDeclaration is:

        public readonly static VertexDeclaration VertexDeclaration = new VertexDeclaration(
            new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
            new VertexElement(sizeof(float) * 3, VertexElementFormat.Color, VertexElementUsage.Color, 0),
            new VertexElement(sizeof(float) * 3 + 4, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0)
        );

    which clearly contains the information. I am attempting to draw with the following lines:

        VertexBuffer vb = new VertexBuffer(GraphicsDevice, VertexPositionColorNormal.VertexDeclaration,
            c.VertexList.Count, BufferUsage.WriteOnly);
        IndexBuffer ib = new IndexBuffer(GraphicsDevice, typeof(int), c.IndexList.Count, BufferUsage.WriteOnly);
        vb.SetData<VertexPositionColorNormal>(c.VertexList.ToArray());
        ib.SetData<int>(c.IndexList.ToArray());
        GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, vb.VertexCount, 0, c.IndexList.Count/3);

    where c is a Chunk class containing an 8x8x8 array of boxes. Full code is available at https://github.com/mrbaggins/Box/tree/ProperMeshing/box/box. Relevant locations are Chunk.cs (contains the VertexDeclaration) and Game1.cs (Draw() is in lines 230-250). Not much else of relevance to this problem anywhere else. Note that large commented sections are from an old version of drawing.

    Read the article

  • Problems when rendering code on Nvidia GPU

    - by 2am
    I am following the OpenGL GLSL Cookbook 4.0. I have rendered a tessellated quad, as you see in the screenshot below, and I am moving the Y coordinate of every vertex using a time-based sin function, as given in the code in the book. This program, as you see from the text in the image, runs perfectly on the built-in Intel HD graphics of my processor, but I have Nvidia GT 555M graphics in my laptop (which, by the way, has switchable graphics). When I run the program on the graphics card, the OpenGL shader compilation fails on the following instruction:

        pos.y = sin.waveAmp * sin(u);

    giving the error:

        Error C1105 : Cannot call a non-function

    I know this error is coming from the sin(u) function which you see in the instruction. I am not able to understand why. When I removed sin(u) from the code, the program ran fine on the Nvidia card. It runs fine with sin(u) on Intel HD 3000 graphics. Also, if you notice, the program is almost unusable with Intel HD 3000 graphics; I am getting only 9 FPS, which is not enough. It's too much load for the Intel HD 3000. So, is the sin(x) function not defined in the GLSL implementation shipped with Nvidia's drivers, or is it something else?

    Read the article

  • AME : How to Diagnose Issues With the Default Approver List in Purchasing When Using Approvals Management

    - by Oracle_EBS
    Do you need help understanding the concepts or setting up the Approval Management Engine (AME) for requisition approvals? See the new diagnostic Note 1437183.1, 'AME: How to Diagnose Issues With the Default Approver List in Purchasing When Using Approvals Management'. AME is designed to generate the approval list according to the conditions and rules you define in the setup. This troubleshooting guide will help you understand how AME builds the default approval list for Purchasing and help users find solutions for scenarios where the approval list fails to be generated. Follow along with the logical steps for troubleshooting. The note first reviews how to generate the AME Setup report. For example, in the note we see a fragment of the setup report; notice it has different sections for each of the setup categories, including attributes, conditions, rules, action types, approval groups, etc. How the default approval list is built in AME is then reviewed, followed by the logical steps for diagnosing issues. The diagnostic steps include how to run the Test Workbench, as well as how to obtain valuable debug and exception information. Then follow the steps to build a simple test case to sharpen your understanding.

    Read the article

  • wireless blocked after installing ubuntu 12.04

    - by Cornelia Frank
    I am using a Lenovo S10-3 IdeaPad; I had no problems with earlier versions of Ubuntu, only since installing 12.04. I have looked through many of the questions on the same issue and tried potential solutions, but cannot seem to solve my problem. The hardware switch is in the 'on' position and the wireless light comes on very briefly (2-3 sec) when the laptop starts up, but then goes off and stays off. Pressing Fn+F5 does nothing at all. I'd be grateful for any assistance. Cornelia. I have received the following responses in the terminal:

        cf@cf-Lenovo:~$ rfkill list all
        0: ideapad_wlan: Wireless LAN
            Soft blocked: no
            Hard blocked: no
        1: ideapad_bluetooth: Bluetooth
            Soft blocked: no
            Hard blocked: no
        2: phy0: Wireless LAN
            Soft blocked: no
            Hard blocked: yes

        cf@cf-Lenovo:~$ iwconfig
        lo     no wireless extensions.
        wlan0  IEEE 802.11bgn  ESSID:off/any
               Mode:Managed  Access Point: Not-Associated  Tx-Power=off
               Retry long limit:7  RTS thr:off  Fragment thr:off
               Power Management:off
        eth0   no wireless extensions.

        cf@cf-Lenovo:~$ lshw -C network
        WARNING: you should run this program as super-user.
        *-network
            description: Ethernet interface
            product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller
            vendor: Realtek Semiconductor Co., Ltd.
            physical id: 0
            bus info: pci@0000:05:00.0
            logical name: eth0
            version: 02
            serial: 00:26:9e:ee:7f:4c
            size: 100Mbit/s
            capacity: 100Mbit/s
            width: 64 bits
            clock: 33MHz
            capabilities: bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
            configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=N/A ip=10.0.1.8 latency=0 multicast=yes port=MII speed=100Mbit/s
            resources: irq:43 ioport:2000(size=256) memory:f0520000-f0520fff memory:f0510000-f051ffff memory:f0540000-f055ffff
        *-network DISABLED
            description: Wireless interface
            product: AR9285 Wireless Network Adapter (PCI-Express)
            vendor: Atheros Communications Inc.
            physical id: 0
            bus info: pci@0000:09:00.0
            logical name: wlan0
            version: 01
            serial: c4:17:fe:f8:bc:d7
            width: 64 bits
            clock: 33MHz
            capabilities: bus_master cap_list ethernet physical wireless
            configuration: broadcast=yes driver=ath9k driverversion=3.2.0-31-generic-pae firmware=N/A latency=0 multicast=yes wireless=IEEE 802.11bgn
            resources: irq:18 memory:f0100000-f010ffff
        WARNING: output may be incomplete or inaccurate, you should run this program as super-user.

    Read the article

  • Adding root bone in 3DS Max?

    - by carlturtle
    My animation artist has made me a nice first-person pair of arms; he animated it, textured it, and gave it to me. Then he went on vacation. I am programming my animations, and I am trying to test the model he has given me. Building my project gives me a warning:

        Multiple skeletons were found in the file. The first skeleton, named "frame l upperarm"
        has been moved to be a child of the scene root. The other, "frame r upperarm", will be
        ignored. Fragment identifier "frame r upperarm".

    Then an error:

        Vertex is bound to bone "frame l forearm", but this bone is not present in the skeleton.

    I realize this means there are two skeletons, as described in this question: Importing 3d model with multiple skeletons. I have 3DS Max, but I have no idea how to use it, and Google/CGTalk/Polycount turn up nothing relevant on how to add a root bone or combine skeletons. If anyone knows how, it would help me out greatly. Thanks.

    Read the article

  • OpenGL Application displays only 1 frame

    - by Avi
    EDIT: I have verified that the problem is not the VBO class or the vertex array class, but rather something else. I have a problem where my vertex buffer class works the first time it's called, but displays nothing any other time it's called. I don't know why this is, and it's the same in my vertex array class. I'm calling the functions in this order to set up the buffers:

        1. enable client states
        2. bind buffers
        3. set buffer / array data
        4. unbind buffers
        5. disable client states

    Then in the draw function, which is called every frame:

        1. enable client states
        2. bind buffers
        3. set pointers
        4. unbind buffers
        5. bind index buffer
        6. draw elements
        7. unbind index buffer
        8. disable client states

    Is there something wrong with the order in which I'm calling the functions, or is it a more specific code error? A sketch of the intended sequence follows below. EDIT: here's some of the code. Code for setting pointers:

        // element is the vertex attribute being drawn (e.g. normals, colors, etc.)
        static void makeElementPointer(VertexBufferElements::VBOElement element,
                                       Shader *shade, void *elementLocation)
        {
            // elementLocation is BUFFER_OFFSET(n) if a buffer is bound
            switch (element)
            {
                ....
                glVertexPointer(3, GL_FLOAT, 0, elementLocation);  // changes based on element
                ....                                               // but I'm only dealing with
            }                                                      // vertices for now
        }

    And that's basically all the code that isn't just a straight OpenGL function call.
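
    For comparison, a minimal sketch of that sequence in plain C++/OpenGL (names are illustrative; vbo, ibo, and indexCount stand in for the classes in the post):

        #include <GL/gl.h>
        // Note: glGenBuffers/glBindBuffer are OpenGL 1.5; on some platforms an
        // extension loader is needed to get their declarations.

        GLuint  vbo, ibo;
        GLsizei indexCount;

        // Setup, done once: upload the data, then unbind so no state leaks.
        void setupBuffers(const GLfloat* verts, GLsizeiptr vertBytes,
                          const GLuint* indices, GLsizeiptr indexBytes)
        {
            glGenBuffers(1, &vbo);
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER, vertBytes, verts, GL_STATIC_DRAW);
            glBindBuffer(GL_ARRAY_BUFFER, 0);

            glGenBuffers(1, &ibo);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
            glBufferData(GL_ELEMENT_ARRAY_BUFFER, indexBytes, indices, GL_STATIC_DRAW);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
        }

        // Draw, done every frame.
        void drawFrame()
        {
            glEnableClientState(GL_VERTEX_ARRAY);

            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glVertexPointer(3, GL_FLOAT, 0, 0);  // pointer captured while VBO is bound

            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
            glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);

            glBindBuffer(GL_ARRAY_BUFFER, 0);
            glDisableClientState(GL_VERTEX_ARRAY);
        }

    One detail worth checking against the list above: glVertexPointer captures whichever VBO is bound at the moment it is called, so unbinding afterwards is harmless, but calling it after the buffer has already been unbound makes GL treat the offset as a client-memory address.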

    Read the article

  • Unity on Ubuntu 11.10 - The Dash Home button brings up the panel, but is empty

    - by David M. Coe
    The Dash home button brings up a panel that is greyed out, but totally empty. It seems to be the very same issue as this: Dash home button brings up blank window, which is unanswered.

        /usr/lib/nux/unity_support_test -p
        OpenGL vendor string:   X.Org R300 Project
        OpenGL renderer string: Gallium 0.4 on ATI RV370
        OpenGL version string:  2.1 Mesa 7.11
        Not software rendered:    yes
        Not blacklisted:          yes
        GLX fbconfig:             yes
        GLX texture from pixmap:  yes
        GL npot or rect textures: yes
        GL vertex program:        yes
        GL fragment program:      yes
        GL vertex buffer object:  yes
        GL framebuffer object:    yes
        GL version is 1.4+:       yes
        Unity 3D supported:       yes

    I've tried a unity --reset, but that doesn't seem to work. Unity seems to reset, but I get the following warning over and over:

        cs space validation failed unity

    What should I do next to try to fix this?

    Edit: attempted fixes:

        - I've reformatted; did not work.
        - I've done apt-get remove unity, then apt-get update, then apt-get install unity; did not work.
        - I've switched to Unity 2D and this seems to work.

    How can I get regular Unity working, or at least find the error?

    Read the article
