Search Results

Search found 3754 results on 151 pages for 'vertex buffer'.

Page 69/151

  • MySQL service does not launch

    - by ted
    Sorry for my English; I was trying to create the db with rake in a RoR application that had been configured for MySQL (gem installed, settings changed). After that attempt mysql-server broke: d@calister:~$ mysql ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) mysqld is not running at all: d@calister:~$ ps aux | grep mysql d 3769 0.0 0.0 4368 832 pts/0 S+ 18:03 0:00 grep --color=auto mysql And it doesn't seem to want to run, either: d@calister:~$ sudo service mysql start start: Job failed to start Any suggestions? Thanks EDIT: d@calister:~$ sudo -u mysql mysqld 120520 18:45:11 [Note] Plugin 'FEDERATED' is disabled. mysqld: Table 'mysql.plugin' doesn't exist 120520 18:45:11 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it. 120520 18:45:11 InnoDB: The InnoDB memory heap is disabled 120520 18:45:11 InnoDB: Mutexes and rw_locks use GCC atomic builtins 120520 18:45:11 InnoDB: Compressed tables use zlib 1.2.3.4 120520 18:45:11 InnoDB: Initializing buffer pool, size = 128.0M 120520 18:45:11 InnoDB: Completed initialization of buffer pool 120520 18:45:11 InnoDB: highest supported file format is Barracuda. 120520 18:45:12 InnoDB: Waiting for the background threads to start 120520 18:45:13 InnoDB: 1.1.8 started; log sequence number 1589459 120520 18:45:13 [ERROR] Fatal error: Can't open and lock privilege tables: Table 'mysql.host' doesn't exist

    Read the article

  • iOS OpenGL transparency performance issue

    - by user346443
    I have built a game in Unity that uses OpenGL ES 1.1 for iOS. I have a nice constant frame rate of 30 until I place a semi-transparent texture over the top of my entire scene. I expect the drop in frames is due to the blending overhead and the sorting in the frame buffer. On the 4S and 3GS the frames stay at 30, but on the iPhone 4 the frame rate drops to 15-20, probably due to the extra pixels of the Retina display compared to the 3GS and the smaller CPU/GPU compared to the 4S. I would like to know if there is anything I can do to try to increase the frame rate when a transparent texture is rendered on top of the entire scene. Please note that the transparent texture overlay is a core part of the game and I can't disable anything else in the scene to speed things up. If it's guaranteed to make a difference I guess I can switch to OpenGL ES 2.0 and write the shaders, but I would prefer not to as I need to target older devices. I should add that the depth buffer is disabled and I'm blending using SrcAlpha One. Any advice would be highly appreciated. Cheers

    Read the article

  • XNA: Required information to represent a 2D sprite graphically

    - by Fire-Dragon-DoL
    I was thinking about dividing my game engine into 2 threads: a render thread and an update thread (I can't figure out how to split the update thread from a physics thread at the moment). That said, I have to duplicate all sprite information, so what do I really need to represent a 2D sprite graphically? Here are my ideas (I'll mark with ? the things I'm not sure about): Vector2 Position, float Rotation ?, Vector2 Pivot ?, Rectangle TextureRectangle, Texture2D Texture, Vector2 ImageOrigin ? (is it tracked somewhere else?). If you have any suggestion about using different types for these data, it's appreciated. Last part of the question: isn't this a lot of data to copy into a buffer? What should I really copy into the buffer? I'm following this tutorial: http://www.sgtconker.com/2009/11/article-multi-threading-your-xna/3/ Thanks UPDATE 1: Newer values at the moment: Vector2 Position, float Rotation, Vector2 Pivot, Rectangle TextureRectangle, Texture2D Texture, Color Color, byte Facing (can be left or right, I'll do it with an enum). I re-read the tutorial; what I was doing wrong is that I don't need to pass all those values, I only need to pass the changed values as messages (see the sketch below). UPDATE 2: Vector2 Position, float Rotation, Vector2 Pivot, Rectangle TextureRectangle, Texture2D Texture, Color Color, bool Flip, uint DrawOrder, Vector2 Scale, bool Visible ?. Mhhh, should Visible be included?
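    The delta-message idea from UPDATE 1 is language-agnostic; below is a minimal C++ sketch of it (the names SpriteDelta and RenderQueue are hypothetical, not from the linked tutorial): the update thread pushes only the fields that changed, and the render thread applies them to its own copy of the sprite list. Heavy resources such as the texture itself are best referenced by an id in the message rather than copied.

        #include <cstdint>
        #include <mutex>
        #include <queue>
        #include <vector>

        struct SpriteDelta {            // one message per changed sprite
            uint32_t spriteId;
            uint32_t changedMask;       // bit flags saying which fields below are valid
            float    posX, posY;
            float    rotation;
            float    scaleX, scaleY;
            bool     flip;
            bool     visible;
        };

        class RenderQueue {
        public:
            void Push(const SpriteDelta& d) {            // called by the update thread
                std::lock_guard<std::mutex> lock(m_);
                pending_.push(d);
            }
            void Drain(std::vector<SpriteDelta>& out) {  // called by the render thread
                std::lock_guard<std::mutex> lock(m_);
                while (!pending_.empty()) { out.push_back(pending_.front()); pending_.pop(); }
            }
        private:
            std::mutex m_;
            std::queue<SpriteDelta> pending_;
        };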

    Read the article

  • JMS: Specifying the Message Paging Directory on a WebLogic Server

    - by adejuanc
    There are two ways to configure or modify the paging directory; here are the examples: 1.- Via the config.xml file: <paging-directory>C:\temp</paging-directory> <jms-server> <name>JMSServerMS1</name> <target>MS1</target> <persistent-store xsi:nil="true"></persistent-store> <hosting-temporary-destinations>true</hosting-temporary-destinations> <temporary-template-resource xsi:nil="true"></temporary-template-resource> <temporary-template-name xsi:nil="true"></temporary-template-name> <message-buffer-size>-1</message-buffer-size> <paging-directory>C:\temp</paging-directory> <paging-file-locking-enabled>true</paging-file-locking-enabled> <expiration-scan-interval>30</expiration-scan-interval> </jms-server> ------------------------------------------------------- 2.- Via WLST (WebLogic Scripting Tool): startEdit() cd('/Deployments/JMSServerMS1') cmo.setPagingDirectory('C:\\temp') activate()

    Read the article

  • Unable to install MySQL in Ubuntu

    - by Anand
    I purged my existing mysql-server from Ubuntu and re-installed it. This was because there was an upgrade of my Ubuntu and I was unable to start my MySQL server. After cleaning up and taking a backup of my data, I re-installed MySQL. During installation I received the following popup message: Unable to set password for the MySQL "root" user An error occurred while setting the password for the MySQL administrative user. This may have happened because the account already has a password, or because of a communication problem with the MySQL server. You should check the account's password after the package installation. Please read the /usr/share/doc/mysql-server-5.5/README.Debian file for more information. After the installation failed, the following errors were received: 121109 20:36:18 InnoDB: Initializing buffer pool, size = 128.0M 121109 20:36:18 InnoDB: Completed initialization of buffer pool 121109 20:36:18 InnoDB: highest supported file format is Barracuda. 121109 20:36:18 InnoDB: Waiting for the background threads to start 121109 20:36:19 InnoDB: 1.1.8 started; log sequence number 2395841 ERROR: 130 Incorrect file format 'user' 121109 20:36:19 [ERROR] Aborting 121109 20:36:19 InnoDB: Starting shutdown... 121109 20:36:20 InnoDB: Shutdown completed; log sequence number 2395841 121109 20:36:20 [Note] /usr/sbin/mysqld: Shutdown complete start: Job failed to start invoke-rc.d: initscript mysql, action "start" failed. dpkg: error processing mysql-server-5.5 (--configure): subprocess installed post-installation script returned error exit status 1 dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-5.5; however: Package mysql-server-5.5 is not configured yet. dpkg: error processing mysql-server (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: mysql-server-5.5 mysql-server E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Simultaneous AI in turn based games

    - by Eduard Strehlau
    I want to hack together a roguelike. Thinking about entity and world representation, I ran into a quite big problem. If you want all the AI to act simultaneously, you would normally (in cellular automata, for example) just copy the cell buffer and let all actions of individual cells depend on the copy. Actions which are no longer valid, because some cell processed earlier changed the original environment (blocking the path, say), are just ignored or reapplied against the "current" (between turns) environment. After all cells have acted you copy the current map to the buffer again. For an environment with complex AI and big (data-wise) entities, that copying would take too long. So I thought you could put every action an entity makes into a queue (making no changes to the environment) and execute the whole queue after everyone has taken their move. Every interaction on this queue is between really interacting entities, so if an entity tries to attack another entity it sends a message to it, and the consequences of the attack become visible next turn, either by just examining the entity or by asking the entity for data. This would remove problems like what happens if an entity dies in the middle of the queue but still has actions queued or is messaged later on (all messages to it would go to null, and the messages from it would either just be sent or deleted; I haven't decided yet). But what would happen if a monster spawns a fireball which tracks the player by itself (in the same turn)? Should I add the fireball to the environment beforehand, i.e. make a change to the environment before executing the action list, or just add the ball to the "needs update" list as a special case, so it doesn't exist in the environment yet but still operates on it, spawning after the action list has been evaluated? Are there any solutions or papers on this subject which I can take a look at? EDIT: I don't need information on writing a roguelike; I need information on turn-based AI with respect to a complex environment.
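    A minimal C++ sketch of the action-queue idea described above (the names are hypothetical): entities only record intents during the turn, and the queue is resolved afterwards, so nobody sees the environment change mid-turn. Spawned objects such as the fireball can be collected in a separate "newly spawned" list during resolution and only inserted into the world once the pass is finished.

        #include <functional>
        #include <vector>

        struct World;                               // the shared environment
        struct Entity { int id; bool alive = true; };

        struct Action {
            int actorId;                            // who queued it
            std::function<void(World&)> apply;      // deferred effect
        };

        struct World {
            std::vector<Entity> entities;
            std::vector<Action> queued;

            void QueueAction(const Action& a) { queued.push_back(a); }

            bool EntityAlive(int id) const {
                for (const Entity& e : entities)
                    if (e.id == id) return e.alive;
                return false;
            }

            void ResolveTurn() {
                // Everyone has already decided; now apply the effects in order.
                for (const Action& a : queued) {
                    // Skip actions whose actor died earlier in this resolution pass.
                    if (!EntityAlive(a.actorId)) continue;
                    a.apply(*this);                 // may damage entities, queue spawns, ...
                }
                queued.clear();
            }
        };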

    Read the article

  • Multiple vulnerabilities in Firefox

    - by RitwikGhoshal
    CVE | Description | CVSSv2 Base Score | Component | Product and Resolution
    CVE-2012-1960 | Information Exposure vulnerability | 5.0 | Firefox | Solaris 10 SPARC: 145080-12, X86: 145081-11
    CVE-2012-1970 | Denial of Service (DoS) vulnerability | 10.0
    CVE-2012-1971 | Denial of Service (DoS) vulnerability | 9.3
    CVE-2012-1972 | Resource Management Errors vulnerability | 10.0
    CVE-2012-1973 | Resource Management Errors vulnerability | 10.0
    CVE-2012-1974 | Resource Management Errors vulnerability | 10.0
    CVE-2012-1975 | Resource Management Errors vulnerability | 10.0
    CVE-2012-1976 | Resource Management Errors vulnerability | 10.0
    CVE-2012-3956 | Resource Management Errors vulnerability | 10.0
    CVE-2012-3957 | Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability | 10.0
    CVE-2012-3958 | Resource Management Errors vulnerability | 10.0
    CVE-2012-3959 | Resource Management Errors vulnerability | 10.0
    CVE-2012-3960 | Resource Management Errors vulnerability | 10.0
    CVE-2012-3961 | Resource Management Errors vulnerability | 10.0
    CVE-2012-3962 | Arbitrary code execution vulnerability | 9.3
    CVE-2012-3963 | Resource Management Errors vulnerability | 10.0
    CVE-2012-3964 | Resource Management Errors vulnerability | 10.0
    CVE-2012-3966 | Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability | 10.0
    CVE-2012-3967 | Arbitrary code execution vulnerability | 6.8
    CVE-2012-3968 | Resource Management Errors vulnerability | 10.0
    CVE-2012-3969 | Numeric Errors vulnerability | 9.3
    CVE-2012-3970 | Resource Management Errors vulnerability | 10.0
    CVE-2012-3972 | Information Exposure vulnerability | 5.0
    CVE-2012-3974 | Resource Management Errors vulnerability | 6.9
    CVE-2012-3976 | Denial of Service (DoS) vulnerability | 5.8
    CVE-2012-3978 | Permissions, Privileges, and Access Controls vulnerability | 6.8
    CVE-2012-3980 | Improper Control of Generation of Code ('Code Injection') vulnerability | 9.3
    This notification describes vulnerabilities fixed in third-party components that are included in Oracle's product distributions. Information about vulnerabilities affecting Oracle products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • Bad DMA/do_IRQ errors on suspend/resume, with occasional freezing

    - by Steve Kroon
    Every time I suspend or resume my laptop (Dell Latitude E6520, bought this year), I get 2 messages of the form displayed on the console just before shutting down/starting up: [ 407.107610] ehci_hcd 0000:00:1d.0: dma_pool_free buffer-128, f6f18000/36f18000 (bad dma) On occasion, I get a message of the form: [ 3753.979066] do_IRQ: 0.177 No irq handler for vector (irq -1) On occasion, my machine freezes with a flashing Caps Lock button when suspending, after which I need to do a hard shutdown. This never happened before the messages started appearing (a while back), and I think it never happens without a do_IRQ message appearing (although I'm not sure about that). [There's nothing in the owner's manual on a flashing Caps Lock button; apparently it may be a kernel panic if the scroll lock also flashes, but the laptop doesn't have a scroll lock light, and there's no message on the console saying kernel panic.] Are these bad DMA/do IRQ messages serious, and what can I do to investigate/troubleshoot them and the freezing? Edit: I've also now received the following error messages a few times: [246943.023908] JBD: I/O error detected when updating journal superblock for sdb1. [246943.023958] Buffer I/O error on device sdb1, logical block 0 [246943.023996] EXT3-fs (sdb1): I/O error while writing superblock Edit: Output of dmesg at http://pastebin.com/ra7MTQEj ; contents of /var/log/kern.log at http://pastebin.com/i6jf0Md9 Edit: the output of some smartctl (-a, -x, --log=error, --log=xerror) instructions is available at http://paste.ubuntu.com/1088488/ . Edit (31/8/2012): Output of dmesg|grep -i ehci available at http://paste.ubuntu.com/1177246/ .

    Read the article

  • OpenGL ES Loading

    - by kuroutadori
    I want to know what the norm is for loading rendering code. Take a button. When the application is loaded, a texture is loaded which has the image of the button on it. When the button is tapped, a loader is added to a queue, which is processed on the render thread. It then loads up an array buffer with vertices and tex coords when render is called. It then adds itself to a render tree. Then it renders. The render function looks like this: void render() { update(); mBaseRenderer->render(); } update() is where the queue is checked to see if anything needs loading. mBaseRenderer->render() is the render tree. What I am asking, then, is: should I even have the update() there at all, and instead have everything preloaded before it renders? If I can have things loaded when needed, for instance when there is a tap, then how can it be done? (My current code causes a dequeueing buffer error (Unknown error: -75), which I assume has to do with OpenGL ES and the context.)
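    For reference, a minimal C++ sketch (hypothetical names) of the pattern the update() call implements: any thread may enqueue a load request, but only the render thread, with the GL context current, actually runs it at the start of the frame, so every GL call stays on the thread that owns the context.

        #include <functional>
        #include <mutex>
        #include <queue>

        class GlLoadQueue {
        public:
            // Any thread: request that some GL resource be created later.
            void Enqueue(std::function<void()> loader) {
                std::lock_guard<std::mutex> lock(m_);
                jobs_.push(std::move(loader));
            }
            // Render thread only: run pending loaders while the context is current.
            void ProcessPending() {
                std::lock_guard<std::mutex> lock(m_);
                while (!jobs_.empty()) { jobs_.front()(); jobs_.pop(); }
            }
        private:
            std::mutex m_;
            std::queue<std::function<void()>> jobs_;
        };

        // void render() { loadQueue.ProcessPending(); mBaseRenderer->render(); }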

    Read the article

  • Strange and erratic transformations when using OpenGL VBOs to render scene

    - by janoside
    I have an existing iOS game with fairly simple scenes (all textured quads) and I'm using Apple's "Texture2D" class. I'm trying to convert this class to use VBOs, since the vertices of my objects basically never change, so I may as well not re-create them for every object every frame. I have the scene rendering using VBOs, but the sizes and orientations of all rendered objects are strange and erratic - though locations seem generally correct. I've been toying with this code for a few days now, and I've found something odd: if I re-create all of my VBOs each frame, everything looks correct, even though I'm almost certain my vertices are not changing. Other notes: I'm basing my work on this tutorial, and therefore am also using "IBOs". I create my buffers before rendering begins. My buffers include vertex and texture data. I'm using OpenGL ES 1.1. Fearing some strange effect of the current matrix GL state at the time of buffer creation, I've also tried wrapping my buffer-setup code in a "pushMatrix-loadIdentity-popMatrix" block, which (as expected) had no effect. I'm aware that various articles have been published demonstrating that VBOs may not help performance, but I want to understand this problem and at least have the option to use them. I realize this is a shot in the dark, but has anyone else experienced this type of strange behavior? What might I be doing to result in this behavior? It's rather difficult for me to isolate the problem since I'm working in an existing, moderately complex project, so suggestions about how to approach the problem are also quite welcome.
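    Not a diagnosis of the code in question (it isn't shown), but for reference a minimal OpenGL ES 1.1 C++ sketch of the static-VBO pattern; interleavedData and vertexCount are placeholders. A classic cause of "it only works if I rebuild the VBO every frame" is passing a client-side pointer to glVertexPointer/glTexCoordPointer while a buffer is bound, instead of a byte offset into the bound VBO.

        // (assumes the ES 1.1 headers, e.g. <OpenGLES/ES1/gl.h> on iOS)
        // One-time setup: upload interleaved position (2 floats) + texcoord (2 floats).
        GLuint vbo = 0;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) * 4 * vertexCount,
                     interleavedData, GL_STATIC_DRAW);
        glBindBuffer(GL_ARRAY_BUFFER, 0);

        // Per frame: while the VBO is bound, the last pointer argument is an OFFSET.
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        const GLsizei stride = sizeof(GLfloat) * 4;
        glVertexPointer(2, GL_FLOAT, stride, (const GLvoid*)0);
        glTexCoordPointer(2, GL_FLOAT, stride, (const GLvoid*)(sizeof(GLfloat) * 2));
        glDrawArrays(GL_TRIANGLE_STRIP, 0, vertexCount);
        glBindBuffer(GL_ARRAY_BUFFER, 0);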

    Read the article

  • How to properly implement alpha blending in a complex 3D scene

    - by Gajet
    I know this question might sound a bit easy to answer, but it's driving me crazy. There are too many possible situations that a good alpha blending mechanism should handle, and for each algorithm I can think of there is something missing. These are the methods I've thought about so far: First off I thought about object sorting by depth. This simply fails because objects are not simple shapes; they might have curves and might loop inside each other, so I can't always tell which one is closer to the camera. Then I thought about sorting triangles, but that also might fail, though I'm not sure how to implement it; there is a rare case that might again cause a problem, in which two triangles pass through each other. Again no one can tell which one is nearer. The next thing was using the depth buffer; at least the main reason we have a depth buffer is because of the problems with sorting that I mentioned, but now we get another problem. Since objects might be transparent, in a single pixel there might be more than one object visible. So for which object should I store the pixel depth? I then thought maybe I could store only the front-most object's depth, and use that to determine how I should blend the next draw calls at that pixel. But again there was a problem: think about 2 semi-transparent planes with a solid plane in the middle of them. If I render the solid plane at the end, one can still see the most distant plane. Note that I was going to merge every two planes until there is only one color left for that pixel. Obviously I can't rely on sorting methods here either, for the same reasons I've explained above. Finally the only thing I can imagine working is to render all objects into different render targets and then sort those layers and display the final output. But this time I don't know how I can implement this algorithm.
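    A minimal C++ sketch of the most common compromise (it does not solve the intersecting-geometry cases described above): draw opaque geometry first with depth writes enabled, then sort whole transparent objects back to front by view-space depth and draw them with blending on and depth writes off. Order-independent techniques such as depth peeling or per-pixel linked lists handle the cases this cannot, at a higher cost.

        #include <algorithm>
        #include <vector>

        struct Drawable {
            bool  transparent;
            float viewDepth;                       // distance from the camera this frame
            void (*draw)(const Drawable&);         // however the engine issues the call
        };

        void RenderScene(std::vector<Drawable>& items) {
            // 1. Opaque pass: depth test + depth write, no blending.
            for (const Drawable& d : items)
                if (!d.transparent) d.draw(d);

            // 2. Transparent pass: farthest first, depth test on, depth write off.
            std::vector<const Drawable*> transparent;
            for (const Drawable& d : items)
                if (d.transparent) transparent.push_back(&d);
            std::sort(transparent.begin(), transparent.end(),
                      [](const Drawable* a, const Drawable* b) {
                          return a->viewDepth > b->viewDepth;   // back to front
                      });
            for (const Drawable* d : transparent) d->draw(*d);
        }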

    Read the article

  • Best way to mask 2D sprites in XNA?

    - by electroflame
    I am currently trying to mask some sprites. Rather than explaining it in words, I've made up some example pictures: the area to mask (in white); the red sprite that needs to be cropped; and the final result. Now, I'm aware that in XNA you can do two things to accomplish this: Use the Stencil Buffer. Use a Pixel Shader. I have tried to do a pixel shader, which essentially did this: float4 main(float2 texCoord : TEXCOORD0) : COLOR0 { float4 tex = tex2D(BaseTexture, texCoord); float4 bitMask = tex2D(MaskTexture, texCoord); if (bitMask.a > 0) { return float4(tex.r, tex.g, tex.b, tex.a); } else { return float4(0, 0, 0, 0); } } This seems to crop the images (albeit not correctly once the image starts to move), but my problem is that the images are constantly moving (they aren't static), so this cropping needs to be dynamic. Is there a way I could alter the shader code to take its position into account? Alternatively, I've read about using the Stencil Buffer, but most of the samples seem to hinge on using a rendertarget, which I really don't want to do. (I'm already using 3 or 4 for the rest of the game, and adding another one on top of it seems overkill.) The only tutorial I've found that doesn't use rendertargets is one from Shawn Hargreaves' blog over here. The issue with that one, though, is that it's for XNA 3.1 and doesn't seem to translate well to XNA 4.0. It seems to me that the pixel shader is the way to go, but I'm unsure of how to get the positioning correct. I believe I would have to change my onscreen coordinates (something like 500, 500) to be between 0 and 1 for the shader coordinates. My only problem is trying to work out how to correctly use the transformed coordinates. Thanks in advance for any help!

    Read the article

  • USB flash module giving errors

    - by vshenoy
    Hi, I have a SATA USB flash module which was earlier running a 2.4 Linux kernel (2.4.36.6) and on which I am now trying to install Ubuntu Server 10.04.1 LTS. I have two such USB flash modules; on one of them the installation process itself gives these errors: sd 4:0:0:0 [sda] Device not ready sd 4:0:0:0 [sda] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE sd 4:0:0:0 [sda] Sense Key : Not Ready [current] sd 4:0:0:0 [sda] Add. Sense: Medium not present sd 4:0:0:0 [sda] CDB: Write(10): 2a 00 00 05 48 02 00 00 04 00 end_request: I/O error, dev sda, sector 46114 usb 1-1: reset high speed USB device using ehci_hcd and address 2 Buffer I/O error on device sda1, logical block 172033 lost page write due to I/O error on sda1 Buffer I/O error on device sda1, logical block 172034 lost page write due to I/O error on sda1 On the other the installation is successful, but after a day or two of running, the machine hangs because the kernel spews these messages: Remounting filesystem read-only EXT2-fs error (device sda1): read_block_bitmap: Cannot read block [bitmap - block_group = 105, block_bitmap = 860161] EXT2-fs error (device sda1): ext2_get_inode: unable to read inode block - inode=13083, block=24683 ext2_free_inode: bit already cleared for inode 83966 and the machine needs to be hard rebooted. On both systems SCSI emulation with the usb_storage driver is used to detect the module. Here is the output of /proc/scsi/scsi on 2.4: # cat /proc/scsi/scsi Attached devices: Host: scsi0 Channel: 00 Id: 00 Lun: 00 Vendor: TS Model: UFM Rev: 1100 Type: Direct-Access ANSI SCSI revision: 02 and on 2.6: # cat /proc/scsi/scsi Attached devices: Host: scsi6 Channel: 00 Id: 00 Lun: 00 Vendor: TS Model: UFM Rev: 1100 Type: Direct-Access ANSI SCSI revision: 00 i.e. only 'ANSI SCSI revision:' is shown as different, although I am not sure if this can cause any problem. I would really appreciate it if someone could point out how to debug this issue, or any mailing list where I can ask further questions about this.

    Read the article

  • Failing Screen Resize Method

    - by StrongJoshua
    So I want my game to draw to a specific "optimal" size and then be stretched to fit screens that are a different size. I'm using LibGDX and figured that I could just draw everything to a FrameBuffer and then resize that buffer to the appropriate size when drawing it to the actual display. However, my method does not work; it just results in a black screen with the top-right quarter of the screen white. intermediary is the FBO, interMatrix is a Matrix4 object, and camera is an OrthographicCamera. @Override public void render() { // update actors currentStage.act(); //render to intermediary buffer batch.setProjectionMatrix(interMatrix); intermediary.begin(); batch.begin(); currentStage.draw(); batch.flush(); intermediary.end(); //resize to actual width and height Sprite s = new Sprite(intermediary.getColorBufferTexture()); s.flip(true, false); batch.setProjectionMatrix(camera.combined); batch.draw(s.getTexture(), 0, 0, width, height); batch.end(); } These are the constructors for the above-mentioned objects (GAME_WIDTH and GAME_HEIGHT are the "optimal" settings; width and height are the actual sizes, which are the same when running on desktop): intermediary = new FrameBuffer(Format.RGBA8888, GAME_WIDTH, GAME_HEIGHT, false); interMatrix = new Matrix4(); camera = new OrthographicCamera(width, height); interMatrix.setToOrtho2D(0, 0, GAME_WIDTH, GAME_HEIGHT); Is there a better way of doing this, or is this a viable option, and how do I fix what I have?

    Read the article

  • OpenGL render to texture causing edge artifacts

    - by mysticalOso
    This is my first post here, so any help would be massively appreciated :) I'm using C++ with SDL and OpenGL 3.3. When rendering directly to the screen I get the first result, and when I render to a texture I get the second, with edge artifacts. Anti-aliasing is turned off for both. I'm guessing this has something to do with depth buffer accuracy, but I've tried a lot of different methods to improve the result with no success :( I'm currently using the following code to set up my FBO: GLuint frameBufferID; glGenFramebuffers(1, &frameBufferID); glBindFramebuffer(GL_FRAMEBUFFER, frameBufferID); glGenTextures(1, &coloursTextureID); glBindTexture(GL_TEXTURE_2D, coloursTextureID); glTexImage2D(GL_TEXTURE_2D,0,GL_RGB,SCREEN_WIDTH,SCREEN_HEIGHT,0,GL_RGB,GL_UNSIGNED_BYTE,NULL); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); //Depth buffer setup GLuint depthrenderbuffer; glGenRenderbuffers(1, &depthrenderbuffer); glBindRenderbuffer(GL_RENDERBUFFER, depthrenderbuffer); glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, SCREEN_WIDTH,SCREEN_HEIGHT); glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthrenderbuffer); glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, coloursTextureID, 0); GLenum DrawBuffers[1] = {GL_COLOR_ATTACHMENT0}; glDrawBuffers(1, DrawBuffers); // if(glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) return false; Thank you so much for any help :)
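    Since anti-aliasing is off in both paths, one thing commonly suggested for edge quality when rendering to a texture (not necessarily the cause here) is to render into a multisampled FBO and resolve it into the texture FBO afterwards. A minimal desktop GL 3.3 C++ sketch, reusing frameBufferID and the SCREEN_* constants from the code above:

        // (assumes a GL 3.3 context and function loader, as in the original code)
        GLuint msaaFbo = 0, msaaColor = 0, msaaDepth = 0;
        const GLsizei samples = 4;

        glGenFramebuffers(1, &msaaFbo);
        glBindFramebuffer(GL_FRAMEBUFFER, msaaFbo);

        glGenRenderbuffers(1, &msaaColor);
        glBindRenderbuffer(GL_RENDERBUFFER, msaaColor);
        glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples, GL_RGB8,
                                         SCREEN_WIDTH, SCREEN_HEIGHT);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                  GL_RENDERBUFFER, msaaColor);

        glGenRenderbuffers(1, &msaaDepth);
        glBindRenderbuffer(GL_RENDERBUFFER, msaaDepth);
        glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples, GL_DEPTH_COMPONENT24,
                                         SCREEN_WIDTH, SCREEN_HEIGHT);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                  GL_RENDERBUFFER, msaaDepth);

        // Render the scene into msaaFbo, then resolve into the texture FBO:
        glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, frameBufferID);
        glBlitFramebuffer(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT,
                          0, 0, SCREEN_WIDTH, SCREEN_HEIGHT,
                          GL_COLOR_BUFFER_BIT, GL_NEAREST);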

    Read the article

  • deb package building

    - by newcode
    While trying to build a package, I gave the following commands in the terminal: cd Downloads/src/ cd unity-5.10.0/ dpkg-buildpackage -rfakeroot -uc -b Then it gives the output: dpkg-buildpackage: export CFLAGS from dpkg-buildflags (origin: vendor): -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security dpkg-buildpackage: export CPPFLAGS from dpkg-buildflags (origin: vendor): -D_FORTIFY_SOURCE=2 dpkg-buildpackage: export CXXFLAGS from dpkg-buildflags (origin: vendor): -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security dpkg-buildpackage: export FFLAGS from dpkg-buildflags (origin: vendor): -g -O2 dpkg-buildpackage: export LDFLAGS from dpkg-buildflags (origin: vendor): -Wl,-Bsymbolic-functions -Wl,-z,relro dpkg-buildpackage: source package unity dpkg-buildpackage: source version 5.10.0-0ubuntu6 dpkg-buildpackage: source changed by Didier Roche <[email protected]> dpkg-buildpackage: host architecture i386 dpkg-source --before-build unity-5.10.0 dpkg-checkbuilddeps: Unmet build dependencies: libutouch-grail-dev (>= 1.0.20) libutouch-geis-dev (>= 2.0.10) dpkg-buildpackage: warning: Build dependencies/conflicts unsatisfied; aborting. dpkg-buildpackage: warning: (Use -d flag to override.) Then I tried to install the package using: cd.. sudo dpkg -i *deb And it gives: [sudo] password for harshnarang8: dpkg: error processing *deb (--install): cannot access archive: No such file or directory Errors were encountered while processing: *deb What exactly is causing the problem and how do I fix it?

    Read the article

  • Why isn't one of the constant buffers being loaded inside the shader?

    - by Paul Ske
    I did get the model to load under tessellation; the only problem is that one of the constant buffers isn't actually updating the shader's tessellation factor inside the hull shader. I created a message box at the rendering point, so I know for sure the tessellation factor is assigned to the dynamic constant buffer. Inside the shader code, where it says .Edges[1] = tessellationAmount;, the tessellationAmount is supposed to be sent from the dynamic buffer to the shader. Otherwise it's just a plain box. To explain better: there are a matrixBuffer, a cameraBuffer and a TessellationBuffer as constant buffers. There's a multiBuffer array that holds the matrix, camera and tessellation buffers. So when I set the hull shader, pixel shader, vertex shader and domain shader, the buffers get assigned from the multiBuffer, e.g. devcon->HSSetConstantBuffers(0, 3, multiBuffer); The only way around the whole thing would be to go into the shader and hard-code how much the edges tessellate, and the inside as well, with the same number. My question is: why wouldn't the TessellationBuffer work in the shader?
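    Hard to say without the buffer-creation code, but for reference a minimal C++ sketch of the usual D3D11 path for a dynamic constant buffer (device, devcon, matrixBuffer and cameraBuffer are assumed to exist as in the question). Two frequent reasons a value never reaches the hull shader are a ByteWidth that is not a multiple of 16 (CreateBuffer then fails) and a register slot in the HLSL cbuffer declaration that does not match the slot order passed to HSSetConstantBuffers.

        #include <d3d11.h>

        struct TessellationBuffer          // padded to 16 bytes, as constant buffers require
        {
            float tessellationAmount;
            float padding[3];
        };

        // Creation (once): dynamic usage so the CPU can rewrite it every frame.
        D3D11_BUFFER_DESC desc = {};
        desc.Usage          = D3D11_USAGE_DYNAMIC;
        desc.ByteWidth      = sizeof(TessellationBuffer);   // multiple of 16
        desc.BindFlags      = D3D11_BIND_CONSTANT_BUFFER;
        desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
        ID3D11Buffer* tessBuffer = nullptr;
        HRESULT hr = device->CreateBuffer(&desc, nullptr, &tessBuffer);

        // Per frame: map, write, unmap, then bind to the hull-shader stage.
        D3D11_MAPPED_SUBRESOURCE mapped = {};
        hr = devcon->Map(tessBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
        if (SUCCEEDED(hr))
        {
            TessellationBuffer* data = static_cast<TessellationBuffer*>(mapped.pData);
            data->tessellationAmount = 12.0f;                // example value
            devcon->Unmap(tessBuffer, 0);
        }
        ID3D11Buffer* buffers[] = { matrixBuffer, cameraBuffer, tessBuffer };
        devcon->HSSetConstantBuffers(0, 3, buffers);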

    Read the article

  • How to configure a background image to sit at the bottom layer in OpenGL on Android

    - by Maxim Shoustin
    I have class that draws white line: public class Line { //private FloatBuffer vertexBuffer; private FloatBuffer frameVertices; ByteBuffer diagIndices; float[] vertices = { -0.5f, -0.5f, 0.0f, -0.5f, 0.5f, 0.0f }; public Line(GL10 gl) { // a float has 4 bytes so we allocate for each coordinate 4 bytes ByteBuffer vertexByteBuffer = ByteBuffer.allocateDirect(vertices.length * 4); vertexByteBuffer.order(ByteOrder.nativeOrder()); // allocates the memory from the byte buffer frameVertices = vertexByteBuffer.asFloatBuffer(); // fill the vertexBuffer with the vertices frameVertices.put(vertices); // set the cursor position to the beginning of the buffer frameVertices.position(0); } /** The draw method for the triangle with the GL context */ public void draw(GL10 gl) { gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glVertexPointer(2, GL10.GL_FLOAT, 0, frameVertices); gl.glColor4f(1.0f, 1.0f, 1.0f, 1f); gl.glDrawArrays(GL10.GL_LINE_LOOP , 0, vertices.length / 3); gl.glLineWidth(5.0f); gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); } } It works fine. The problem is: When I add BG image, I don't see the line glView = new GLSurfaceView(this); // Allocate a GLSurfaceView glView.setEGLConfigChooser(8, 8, 8, 8, 16, 0); glView.setRenderer(new mainRenderer(this)); // Use a custom renderer glView.setBackgroundResource(R.drawable.bg_day); // <- BG glView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY); glView.getHolder().setFormat(PixelFormat.TRANSLUCENT); How to get rid of that?

    Read the article

  • What problem does double or triple buffering solve in modern games?

    - by krokvskrok
    I want to check if my understanding of the reasons for using double (or triple) buffering is correct: A 60 Hz monitor refreshes its display 60 times per second. When the monitor refreshes the display, it updates it pixel by pixel and line by line, requesting the color values for the pixels from video memory. If I now run a game, that game is constantly manipulating this video memory. If the game doesn't use a buffering strategy (double buffering etc.), the following problem can happen: the monitor is refreshing its display and at this moment has already refreshed the first half of it. At the same time, the game writes new data into the video memory. The monitor then reads this newly written data for the second half of the display. The result can be tearing or flickering. Is my understanding of the reasons for using a buffering strategy correct? Are there other reasons?
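    A minimal C++ sketch of the idea (purely conceptual, not tied to any API): the game only ever draws into the back buffer, and the finished image is handed to the display in a single swap, ideally synchronized with the refresh, so the monitor never reads a half-drawn frame. Triple buffering adds a second back buffer so the game can start the next frame immediately instead of waiting for the swap.

        #include <cstdint>
        #include <utility>
        #include <vector>

        struct FrameBuffer { std::vector<uint32_t> pixels; };

        struct DoubleBuffer {
            FrameBuffer front;   // what the monitor reads from
            FrameBuffer back;    // what the game renders into

            FrameBuffer& RenderTarget() { return back; }

            // Called once per frame, ideally at the vertical blank (vsync),
            // so the display never sees a half-finished image.
            void Present() { std::swap(front, back); }
        };

        // while (running) {
        //     DrawScene(buffers.RenderTarget());   // may take any amount of time
        //     buffers.Present();                   // atomic from the display's point of view
        // }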

    Read the article

  • What's wrong with this OpenGL ES 2.0 shader?

    - by Project Dumbo Dev
    I just can't understand this. The code works perfectly on the emulator (which is supposed to give more problems than phones…), but when I try it on an LG-E610 it doesn't compile the vertex shader. This is my error log (which contains the shader code as well): EDITED Shader: uniform mat4 u_Matrix; uniform int u_XSpritePos; uniform int u_YSpritePos; uniform float u_XDisplacement; uniform float u_YDisplacement; attribute vec4 a_Position; attribute vec2 a_TextureCoordinates; varying vec2 v_TextureCoordinates; void main(){ v_TextureCoordinates.x= (a_TextureCoordinates.x + u_XSpritePos) * u_XDisplacement; v_TextureCoordinates.y= (a_TextureCoordinates.y + u_YSpritePos) * u_YDisplacement; gl_Position = u_Matrix * a_Position; } The log reports this before loading/compiling the shader: 11-05 18:46:25.579: D/memalloc(1649): /dev/pmem: Mapped buffer base:0x51984000 size:5570560 offset:4956160 fd:46 11-05 18:46:25.629: D/memalloc(1649): /dev/pmem: Mapped buffer base:0x5218d000 size:5836800 offset:5570560 fd:49 Maybe it has something to do with that memalloc? The phone also gives a constant error while plugged in: ERROR FBIOGET_ESDCHECKLOOP fail, from msm7627a.gralloc Edited: "InfoLog:" refers to glGetShaderInfoLog, and it's returning nothing. Since I removed the log in a previous edit I will just say I'm looking for feedback on compiling shaders. Solution + more questions: OK, the problem seems to be that either ints are not working (generally speaking) or that you can't mix floats with ints. That brings me to the question: why on earth is glGetShaderInfoLog returning nothing? Shouldn't it tell me something is wrong on those lines? It surely does when I misspell something. I solved it by turning everything into floats, but if someone can shed some light on this, it would be appreciated. Thanks.
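    On the glGetShaderInfoLog part: some mobile drivers return an empty log even when compilation fails, so GL_COMPILE_STATUS is the only reliable signal; checking it together with GL_INFO_LOG_LENGTH at least distinguishes "no error" from "error with no log". Below is a minimal C++ sketch of that check (the shader is assumed to have gone through glShaderSource/glCompileShader already). As for the int issue: GLSL ES 1.00 has no implicit int-to-float conversion, so an expression like a_TextureCoordinates.x + u_XSpritePos is invalid on strict compilers even though desktop-derived emulators often accept it; turning the uniforms into floats, or wrapping them in float(), is the usual fix.

        #include <GLES2/gl2.h>
        #include <cstdio>
        #include <vector>

        bool CheckShaderCompiled(GLuint shader) {
            GLint compiled = GL_FALSE;
            glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
            if (compiled == GL_TRUE) return true;

            GLint logLength = 0;
            glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &logLength);
            if (logLength > 1) {
                std::vector<char> log(logLength);
                glGetShaderInfoLog(shader, logLength, nullptr, log.data());
                std::printf("Shader compile failed:\n%s\n", log.data());
            } else {
                // The driver reported failure but gave no log; the source has to
                // be bisected by hand (e.g. the int uniforms above).
                std::printf("Shader compile failed with an empty info log\n");
            }
            return false;
        }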

    Read the article

  • Three common scenarios that make an Oracle database slow or appear to hang

    - by Feng
    Summary: this post looks at three scenarios that commonly make an Oracle database slow or appear to hang, along with suggested countermeasures for each. 1. OS swapping/paging hurting concurrency inside Oracle. Oracle protects its shared structures with latches and mutexes; once the OS starts swapping or paging out that memory, the processes holding a latch/mutex stall, other sessions queue up behind them, and the whole instance can look hung or slow. Suggested measures: lock the SGA in physical memory so the memory holding the latches/mutexes cannot be swapped out, and for a large SGA use large pages (hugepages), which also reduces latch/mutex overhead. 2. Frequent SGA resize operations. With AMM/ASMM, memory is shuffled between components such as the shared pool and the buffer cache (for example to relieve ORA-4031). While a resize is in progress the related latches/mutexes are held, so frequent shared pool resizes can themselves cause heavy latch/mutex contention. Suggested measures: 1) set explicit minimum sizes for the buffer cache and the shared pool; 2) lengthen the resize evaluation interval, e.g. alter system set "_memory_broker_stat_interval"=999; (roughly 16 minutes); 3) disable AMM/ASMM and size the components manually, accepting that ORA-4031 then has to be managed by hand. 3. DDL during peak hours invalidating cursors. DDL against hot objects (a grant, for example) invalidates the cursors of every SQL statement referencing those objects; when those statements run again they must all be hard parsed at once. The result is a "hard parse storm": heavy latch/mutex contention plus library cache lock/row cache lock waits, and the database becomes slow or appears hung. Suggested measure: avoid running such DDL against hot objects during busy periods.

    Read the article

  • Issues with HLSL and lighting

    - by numerical25
    I am trying figure out whats going on with my HLSL code but I have no way of debugging it cause C++ gives off no errors. The application just closes when I run it. I am trying to add lighting to a 3d plane I made. below is my HLSL. The problem consist when my Pixel shader method returns the struct "outColor" . If I change the return value back to the struct "psInput" , everything goes back to working again. My light vectors and colors are at the top of the fx file // PS_INPUT - input variables to the pixel shader // This struct is created and fill in by the // vertex shader cbuffer Variables { matrix Projection; matrix World; float TimeStep; }; struct PS_INPUT { float4 Pos : SV_POSITION; float4 Color : COLOR0; float3 Normal : TEXCOORD0; float3 ViewVector : TEXCOORD1; }; float specpower = 80.0f; float3 camPos = float3(0.0f, 9.0, -256.0f); float3 DirectLightColor = float3(1.0f, 1.0f, 1.0f); float3 DirectLightVector = float3(0.0f, 0.602f, 0.70f); float3 AmbientLightColor = float3(1.0f, 1.0f, 1.0f); /*************************************** * Lighting functions ***************************************/ /********************************* * CalculateAmbient - * inputs - * vKa material's reflective color * lightColor - the ambient color of the lightsource * output - ambient color *********************************/ float3 CalculateAmbient(float3 vKa, float3 lightColor) { float3 vAmbient = vKa * lightColor; return vAmbient; } /********************************* * CalculateDiffuse - * inputs - * material color * The color of the direct light * the local normal * the vector of the direct light * output - difuse color *********************************/ float3 CalculateDiffuse(float3 baseColor, float3 lightColor, float3 normal, float3 lightVector) { float3 vDiffuse = baseColor * lightColor * saturate(dot(normal, lightVector)); return vDiffuse; } /********************************* * CalculateSpecular - * inputs - * viewVector * the direct light vector * the normal * output - specular highlight *********************************/ float CalculateSpecular(float3 viewVector, float3 lightVector, float3 normal) { float3 vReflect = reflect(lightVector, normal); float fSpecular = saturate(dot(vReflect, viewVector)); fSpecular = pow(fSpecular, specpower); return fSpecular; } /********************************* * LightingCombine - * inputs - * ambient component * diffuse component * specualr component * output - phong color color *********************************/ float3 LightingCombine(float3 vAmbient, float3 vDiffuse, float fSpecular) { float3 vCombined = vAmbient + vDiffuse + fSpecular.xxx; return vCombined; } //////////////////////////////////////////////// // Vertex Shader - Main Function /////////////////////////////////////////////// PS_INPUT VS(float4 Pos : POSITION, float4 Color : COLOR, float3 Normal : NORMAL) { PS_INPUT psInput; float4 newPosition; newPosition = Pos; newPosition.y = sin((newPosition.x * TimeStep) + (newPosition.z / 3.0f)) * 5.0f; // Pass through both the position and the color psInput.Pos = mul(newPosition , Projection ); psInput.Color = Color; psInput.ViewVector = normalize(camPos - psInput.Pos); return psInput; } /////////////////////////////////////////////// // Pixel Shader /////////////////////////////////////////////// //Anthony!!!!!!!!!!! 
Find out how color works when multiplying them float4 PS(PS_INPUT psInput) : SV_Target { float3 normal = -normalize(psInput.Normal); float3 vAmbient = CalculateAmbient(psInput.Color, AmbientLightColor); float3 vDiffuse = CalculateDiffuse(psInput.Color, DirectLightColor, normal, DirectLightVector); float fSpecular = CalculateSpecular(psInput.ViewVector, DirectLightVector, normal); float4 outColor; outColor.rgb = LightingCombine(vAmbient, vDiffuse, fSpecular); outColor.a = 1.0f; //Below is where the error begins return outColor; } // Define the technique technique10 Render { pass P0 { SetVertexShader( CompileShader( vs_4_0, VS() ) ); SetGeometryShader( NULL ); SetPixelShader( CompileShader( ps_4_0, PS() ) ); } } Below is some of my c++ code. Reason I am showing this is because it is pretty much what creates the surface normals for my shaders to evaluate. for the lighting for(int z=0; z < NUM_ROWS; ++z) { for(int x = 0; x < NUM_COLS; ++x) { int curVertex = x + (z * NUM_VERTSX); indices[curIndex] = curVertex; indices[curIndex + 1] = curVertex + NUM_VERTSX; indices[curIndex + 2] = curVertex + 1; D3DXVECTOR3 v0 = vertices[indices[curIndex]].pos; D3DXVECTOR3 v1 = vertices[indices[curIndex + 1]].pos; D3DXVECTOR3 v2 = vertices[indices[curIndex + 2]].pos; D3DXVECTOR3 normal; D3DXVECTOR3 cross; D3DXVec3Cross(&cross, &D3DXVECTOR3(v2 - v0),&D3DXVECTOR3(v1 - v0)); D3DXVec3Normalize(&normal, &cross); vertices[indices[curIndex]].normal = normal; vertices[indices[curIndex + 1]].normal = normal; vertices[indices[curIndex + 2]].normal = normal; indices[curIndex + 3] = curVertex + 1; indices[curIndex + 4] = curVertex + NUM_VERTSX; indices[curIndex + 5] = curVertex + NUM_VERTSX + 1; v0 = vertices[indices[curIndex + 3]].pos; v1 = vertices[indices[curIndex + 4]].pos; v2 = vertices[indices[curIndex + 5]].pos; D3DXVec3Cross(&cross, &D3DXVECTOR3(v2 - v0),&D3DXVECTOR3(v1 - v0)); D3DXVec3Normalize(&normal, &cross); vertices[indices[curIndex + 3]].normal = normal; vertices[indices[curIndex + 4]].normal = normal; vertices[indices[curIndex + 5]].normal = normal; curIndex += 6; } } and below is my c++ code, in it's entirety. showing the drawing and also calling on the passes #include "MyGame.h" //#include "CubeVector.h" /* This code sets a projection and shows a turning cube. What has been added is the project, rotation and a rasterizer to change the rasterization of the cube. 
The issue that was going on was something with the effect file which was causing the vertices not to be rendered correctly.*/ typedef struct { ID3D10Effect* pEffect; ID3D10EffectTechnique* pTechnique; //vertex information ID3D10Buffer* pVertexBuffer; ID3D10Buffer* pIndicesBuffer; ID3D10InputLayout* pVertexLayout; UINT numVertices; UINT numIndices; }ModelObject; ModelObject modelObject; // World Matrix D3DXMATRIX WorldMatrix; // View Matrix D3DXMATRIX ViewMatrix; // Projection Matrix D3DXMATRIX ProjectionMatrix; ID3D10EffectMatrixVariable* pProjectionMatrixVariable = NULL; //grid information #define NUM_COLS 16 #define NUM_ROWS 16 #define CELL_WIDTH 32 #define CELL_HEIGHT 32 #define NUM_VERTSX (NUM_COLS + 1) #define NUM_VERTSY (NUM_ROWS + 1) // timer variables LARGE_INTEGER timeStart; LARGE_INTEGER timeEnd; LARGE_INTEGER timerFreq; double currentTime; float anim_rate; // Variable to hold how long since last frame change float lastElaspedFrame = 0; // How long should the frames last float frameDuration = 0.5; bool MyGame::InitDirect3D() { if(!DX3dApp::InitDirect3D()) { return false; } // Get the timer frequency QueryPerformanceFrequency(&timerFreq); float freqSeconds = 1.0f / timerFreq.QuadPart; lastElaspedFrame = 0; D3D10_RASTERIZER_DESC rastDesc; rastDesc.FillMode = D3D10_FILL_WIREFRAME; rastDesc.CullMode = D3D10_CULL_FRONT; rastDesc.FrontCounterClockwise = true; rastDesc.DepthBias = false; rastDesc.DepthBiasClamp = 0; rastDesc.SlopeScaledDepthBias = 0; rastDesc.DepthClipEnable = false; rastDesc.ScissorEnable = false; rastDesc.MultisampleEnable = false; rastDesc.AntialiasedLineEnable = false; ID3D10RasterizerState *g_pRasterizerState; mpD3DDevice->CreateRasterizerState(&rastDesc, &g_pRasterizerState); mpD3DDevice->RSSetState(g_pRasterizerState); // Set up the World Matrix D3DXMatrixIdentity(&WorldMatrix); D3DXMatrixLookAtLH(&ViewMatrix, new D3DXVECTOR3(200.0f, 60.0f, -20.0f), new D3DXVECTOR3(200.0f, 50.0f, 0.0f), new D3DXVECTOR3(0.0f, 1.0f, 0.0f)); // Set up the projection matrix D3DXMatrixPerspectiveFovLH(&ProjectionMatrix, (float)D3DX_PI * 0.5f, (float)mWidth/(float)mHeight, 0.1f, 100.0f); pTimeVariable = NULL; if(!CreateObject()) { return false; } return true; } //These are actions that take place after the clearing of the buffer and before the present void MyGame::GameDraw() { static float rotationAngle = 0.0f; // create the rotation matrix using the rotation angle D3DXMatrixRotationY(&WorldMatrix, rotationAngle); rotationAngle += (float)D3DX_PI * 0.0f; // Set the input layout mpD3DDevice->IASetInputLayout(modelObject.pVertexLayout); // Set vertex buffer UINT stride = sizeof(VertexPos); UINT offset = 0; mpD3DDevice->IASetVertexBuffers(0, 1, &modelObject.pVertexBuffer, &stride, &offset); mpD3DDevice->IASetIndexBuffer(modelObject.pIndicesBuffer, DXGI_FORMAT_R32_UINT, 0); pTimeVariable->SetFloat((float)currentTime); // Set primitive topology mpD3DDevice->IASetPrimitiveTopology(D3D10_PRIMITIVE_TOPOLOGY_TRIANGLELIST); // Combine and send the final matrix to the shader D3DXMATRIX finalMatrix = (WorldMatrix * ViewMatrix * ProjectionMatrix); pProjectionMatrixVariable->SetMatrix((float*)&finalMatrix); // make sure modelObject is valid // Render a model object D3D10_TECHNIQUE_DESC techniqueDescription; modelObject.pTechnique->GetDesc(&techniqueDescription); // Loop through the technique passes for(UINT p=0; p < techniqueDescription.Passes; ++p) { modelObject.pTechnique->GetPassByIndex(p)->Apply(0); // draw the cube using all 36 vertices and 12 triangles 
mpD3DDevice->DrawIndexed(modelObject.numIndices,0,0); } } //Render actually incapsulates Gamedraw, so you can call data before you actually clear the buffer or after you //present data void MyGame::Render() { // Get the start timer count QueryPerformanceCounter(&timeStart); currentTime += anim_rate; DX3dApp::Render(); QueryPerformanceCounter(&timeEnd); anim_rate = ( (float)timeEnd.QuadPart - (float)timeStart.QuadPart ) / timerFreq.QuadPart; } bool MyGame::CreateObject() { VertexPos vertices[NUM_VERTSX * NUM_VERTSY]; for(int z=0; z < NUM_VERTSY; ++z) { for(int x = 0; x < NUM_VERTSX; ++x) { vertices[x + z * NUM_VERTSX].pos.x = (float)x * CELL_WIDTH; vertices[x + z * NUM_VERTSX].pos.z = (float)z * CELL_HEIGHT; vertices[x + z * NUM_VERTSX].pos.y = (float)(rand() % CELL_HEIGHT); vertices[x + z * NUM_VERTSX].color = D3DXVECTOR4(1.0, 0.0f, 0.0f, 0.0f); } } DWORD indices[NUM_VERTSX * NUM_VERTSY * 6]; int curIndex = 0; for(int z=0; z < NUM_ROWS; ++z) { for(int x = 0; x < NUM_COLS; ++x) { int curVertex = x + (z * NUM_VERTSX); indices[curIndex] = curVertex; indices[curIndex + 1] = curVertex + NUM_VERTSX; indices[curIndex + 2] = curVertex + 1; D3DXVECTOR3 v0 = vertices[indices[curIndex]].pos; D3DXVECTOR3 v1 = vertices[indices[curIndex + 1]].pos; D3DXVECTOR3 v2 = vertices[indices[curIndex + 2]].pos; D3DXVECTOR3 normal; D3DXVECTOR3 cross; D3DXVec3Cross(&cross, &D3DXVECTOR3(v2 - v0),&D3DXVECTOR3(v1 - v0)); D3DXVec3Normalize(&normal, &cross); vertices[indices[curIndex]].normal = normal; vertices[indices[curIndex + 1]].normal = normal; vertices[indices[curIndex + 2]].normal = normal; indices[curIndex + 3] = curVertex + 1; indices[curIndex + 4] = curVertex + NUM_VERTSX; indices[curIndex + 5] = curVertex + NUM_VERTSX + 1; v0 = vertices[indices[curIndex + 3]].pos; v1 = vertices[indices[curIndex + 4]].pos; v2 = vertices[indices[curIndex + 5]].pos; D3DXVec3Cross(&cross, &D3DXVECTOR3(v2 - v0),&D3DXVECTOR3(v1 - v0)); D3DXVec3Normalize(&normal, &cross); vertices[indices[curIndex + 3]].normal = normal; vertices[indices[curIndex + 4]].normal = normal; vertices[indices[curIndex + 5]].normal = normal; curIndex += 6; } } //Create Layout D3D10_INPUT_ELEMENT_DESC layout[] = { {"POSITION",0,DXGI_FORMAT_R32G32B32_FLOAT, 0 , 0, D3D10_INPUT_PER_VERTEX_DATA, 0}, {"COLOR",0,DXGI_FORMAT_R32G32B32A32_FLOAT, 0 , 12, D3D10_INPUT_PER_VERTEX_DATA, 0}, {"NORMAL",0,DXGI_FORMAT_R32G32B32A32_FLOAT, 0 , 28, D3D10_INPUT_PER_VERTEX_DATA, 0} }; UINT numElements = (sizeof(layout)/sizeof(layout[0])); modelObject.numVertices = sizeof(vertices)/sizeof(VertexPos); //Create buffer desc D3D10_BUFFER_DESC bufferDesc; bufferDesc.Usage = D3D10_USAGE_DEFAULT; bufferDesc.ByteWidth = sizeof(VertexPos) * modelObject.numVertices; bufferDesc.BindFlags = D3D10_BIND_VERTEX_BUFFER; bufferDesc.CPUAccessFlags = 0; bufferDesc.MiscFlags = 0; D3D10_SUBRESOURCE_DATA initData; initData.pSysMem = vertices; //Create the buffer HRESULT hr = mpD3DDevice->CreateBuffer(&bufferDesc, &initData, &modelObject.pVertexBuffer); if(FAILED(hr)) return false; modelObject.numIndices = sizeof(indices)/sizeof(DWORD); bufferDesc.ByteWidth = sizeof(DWORD) * modelObject.numIndices; bufferDesc.BindFlags = D3D10_BIND_INDEX_BUFFER; initData.pSysMem = indices; hr = mpD3DDevice->CreateBuffer(&bufferDesc, &initData, &modelObject.pIndicesBuffer); if(FAILED(hr)) return false; ///////////////////////////////////////////////////////////////////////////// //Set up fx files LPCWSTR effectFilename = L"effect.fx"; modelObject.pEffect = NULL; hr = D3DX10CreateEffectFromFile(effectFilename, NULL, NULL, 
"fx_4_0", D3D10_SHADER_ENABLE_STRICTNESS, 0, mpD3DDevice, NULL, NULL, &modelObject.pEffect, NULL, NULL); if(FAILED(hr)) return false; pProjectionMatrixVariable = modelObject.pEffect->GetVariableByName("Projection")->AsMatrix(); pTimeVariable = modelObject.pEffect->GetVariableByName("TimeStep")->AsScalar(); //Dont sweat the technique. Get it! LPCSTR effectTechniqueName = "Render"; modelObject.pTechnique = modelObject.pEffect->GetTechniqueByName(effectTechniqueName); if(modelObject.pTechnique == NULL) return false; //Create Vertex layout D3D10_PASS_DESC passDesc; modelObject.pTechnique->GetPassByIndex(0)->GetDesc(&passDesc); hr = mpD3DDevice->CreateInputLayout(layout, numElements, passDesc.pIAInputSignature, passDesc.IAInputSignatureSize, &modelObject.pVertexLayout); if(FAILED(hr)) return false; return true; }

    Read the article

  • Autocorrelation returns random results with mic input (using a high pass filter)

    - by Niall
    Hello, Sorry to ask a similar question to the one i asked before (FFT Problem (Returns random results)), but i've looked up pitch detection and autocorrelation and have found some code for pitch detection using autocorrelation. Im trying to do pitch detection of a users singing. Problem is, it keeps returning random results. I've got some code from http://code.google.com/p/yaalp/ which i've converted to C++ and modified (below). My sample rate is 2048, and data size is 1024. I'm detecting pitch of both a sine wave and mic input. The frequency of the sine wave is 726.0, and its detecting it to be 722.950820 (which im ok with), but its detecting the pitch of the mic as a random number from around 100 to around 1050. I'm now using a High pass filter to remove the DC offset, but it's not working. Am i doing it right, and if so, what else can i do to fix it? Any help would be greatly appreciated! double* doHighPassFilter(short *buffer) { // Do FFT: int bufferLength = 1024; float *real = malloc(bufferLength*sizeof(float)); float *real2 = malloc(bufferLength*sizeof(float)); for(int x=0;x<bufferLength;x++) { real[x] = buffer[x]; } fft(real, bufferLength); for(int x=0;x<bufferLength;x+=2) { real2[x] = real[x]; } for (int i=0; i < 30; i++) //Set freqs lower than 30hz to zero to attenuate the low frequencies real2[i] = 0; // Do inverse FFT: inversefft(real2,bufferLength); double* real3 = (double*)real2; return real3; } double DetectPitch(short* data) { int sampleRate = 2048; //Create sine wave double *buffer = malloc(1024*sizeof(short)); double amplitude = 0.25 * 32768; //0.25 * max length of short double frequency = 726.0; for (int n = 0; n < 1024; n++) { buffer[n] = (short)(amplitude * sin((2 * 3.14159265 * n * frequency) / sampleRate)); } doHighPassFilter(data); printf("Pitch from sine wave: %f\n",detectPitchCalculation(buffer, 50.0, 1000.0, 1, 1)); printf("Pitch from mic: %f\n",detectPitchCalculation(data, 50.0, 1000.0, 1, 1)); return 0; } // These work by shifting the signal until it seems to correlate with itself. // In other words if the signal looks very similar to (signal shifted 200 data) than the fundamental period is probably 200 data // Note that the algorithm only works well when there's only one prominent fundamental. // This could be optimized by looking at the rate of change to determine a maximum without testing all periods. double detectPitchCalculation(double* data, double minHz, double maxHz, int nCandidates, int nResolution) { //-------------------------1-------------------------// // note that higher frequency means lower period int nLowPeriodInSamples = hzToPeriodInSamples(maxHz, 2048); int nHiPeriodInSamples = hzToPeriodInSamples(minHz, 2048); if (nHiPeriodInSamples <= nLowPeriodInSamples) printf("Bad range for pitch detection."); if (1024 < nHiPeriodInSamples) printf("Not enough data."); double *results = new double[nHiPeriodInSamples - nLowPeriodInSamples]; //-------------------------2-------------------------// for (int period = nLowPeriodInSamples; period < nHiPeriodInSamples; period += nResolution) { double sum = 0; // for each sample, find correlation. 
(If they are far apart, small) for (int i = 0; i < 1024 - period; i++) sum += data[i] * data[i + period]; double mean = sum / 1024.0; results[period - nLowPeriodInSamples] = mean; } //-------------------------3-------------------------// // find the best indices int *bestIndices = findBestCandidates(nCandidates, results, nHiPeriodInSamples - nLowPeriodInSamples - 1); //note findBestCandidates modifies parameter // convert back to Hz double *res = new double[nCandidates]; for (int i=0; i < nCandidates;i++) res[i] = periodInSamplesToHz(bestIndices[i]+nLowPeriodInSamples, 2048); double pitch2 = res[0]; free(res); free(results); return pitch2; } /// Finds n "best" values from an array. Returns the indices of the best parts. /// (One way to do this would be to sort the array, but that could take too long. /// Warning: Changes the contents of the array!!! Do not use result array afterwards. int* findBestCandidates(int n, double* inputs,int length) { //int length = inputs.Length; if (length < n) printf("Length of inputs is not long enough."); int *res = new int[n]; double minValue = 0; for (int c = 0; c < n; c++) { // find the highest. double fBestValue = minValue; int nBestIndex = -1; for (int i = 0; i < length; i++) { if (inputs[i] > fBestValue) { nBestIndex = i; fBestValue = inputs[i]; } } // record this highest value res[c] = nBestIndex; // now blank out that index. if(nBestIndex!=-1) inputs[nBestIndex] = minValue; } return res; } int hzToPeriodInSamples(double hz, int sampleRate) { return (int)(1 / (hz / (double)sampleRate)); } double periodInSamplesToHz(int period, int sampleRate) { return 1 / (period / (double)sampleRate); } Thanks, Niall. Edit: Changed the code to implement a high pass filter with a cutoff of 30hz (from What Are High-Pass and Low-Pass Filters?, can anyone tell me how to convert the low-pass filter using convolution to a high-pass one?) but it's still returning random results. Plugging it into a VST host and using VST plugins to compare spectrums isn't an option to me unfortunately.
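    On the "convert the low-pass to a high-pass" question: one simple route (a sketch, not the only way, and not from the linked answer) is to keep a moving-average low-pass and subtract its output from the original signal; what remains is the high-frequency content, which removes the DC offset the question is fighting. The cutoff is only loosely controlled by windowSize, so a proper one-pole or biquad high-pass is the next step if a 30 Hz corner specifically matters.

        #include <vector>

        // high[i] = x[i] - movingAverage(x)[i]
        std::vector<double> HighPass(const std::vector<double>& x, int windowSize)
        {
            const int n = static_cast<int>(x.size());
            std::vector<double> out(n, 0.0);
            double runningSum = 0.0;
            for (int i = 0; i < n; ++i) {
                runningSum += x[i];
                if (i >= windowSize) runningSum -= x[i - windowSize];
                const int count = (i < windowSize) ? (i + 1) : windowSize;
                const double lowPassed = runningSum / count;   // moving-average low-pass
                out[i] = x[i] - lowPassed;                     // keep the high frequencies
            }
            return out;
        }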

    Read the article

  • Marshalling to a native library in C#

    - by Daniel Baulig
    I'm having trouble calling functions of a native library from within managed C# code. I am developing for the 3.5 compact framework (Windows Mobile 6.x) just in case this would make any difference. I am working with the waveIn* functions from coredll.dll (these are in winmm.dll in regular Windows I believe). This is what I came up with: // namespace winmm; class winmm [StructLayout(LayoutKind.Sequential)] public struct WAVEFORMAT { public ushort wFormatTag; public ushort nChannels; public uint nSamplesPerSec; public uint nAvgBytesPerSec; public ushort nBlockAlign; public ushort wBitsPerSample; public ushort cbSize; } [StructLayout(LayoutKind.Sequential)] public struct WAVEHDR { public IntPtr lpData; public uint dwBufferLength; public uint dwBytesRecorded; public IntPtr dwUser; public uint dwFlags; public uint dwLoops; public IntPtr lpNext; public IntPtr reserved; } public delegate void AudioRecordingDelegate(IntPtr deviceHandle, uint message, IntPtr instance, ref WAVEHDR wavehdr, IntPtr reserved2); [DllImport("coredll.dll")] public static extern int waveInAddBuffer(IntPtr hWaveIn, ref WAVEHDR lpWaveHdr, uint cWaveHdrSize); [DllImport("coredll.dll")] public static extern int waveInPrepareHeader(IntPtr hWaveIn, ref WAVEHDR lpWaveHdr, uint Size); [DllImport("coredll.dll")] public static extern int waveInStart(IntPtr hWaveIn); // some other class private WinMM.WinMM.AudioRecordingDelegate waveIn; private IntPtr handle; private uint bufferLength; private void setupBuffer() { byte[] buffer = new byte[bufferLength]; GCHandle bufferPin = GCHandle.Alloc(buffer, GCHandleType.Pinned); WinMM.WinMM.WAVEHDR hdr = new WinMM.WinMM.WAVEHDR(); hdr.lpData = bufferPin.AddrOfPinnedObject(); hdr.dwBufferLength = this.bufferLength; hdr.dwFlags = 0; int i = WinMM.WinMM.waveInPrepareHeader(this.handle, ref hdr, Convert.ToUInt32(Marshal.SizeOf(hdr))); if (i != WinMM.WinMM.MMSYSERR_NOERROR) { this.Text = "Error: waveInPrepare"; return; } i = WinMM.WinMM.waveInAddBuffer(this.handle, ref hdr, Convert.ToUInt32(Marshal.SizeOf(hdr))); if (i != WinMM.WinMM.MMSYSERR_NOERROR) { this.Text = "Error: waveInAddrBuffer"; return; } } private void setupWaveIn() { WinMM.WinMM.WAVEFORMAT format = new WinMM.WinMM.WAVEFORMAT(); format.wFormatTag = WinMM.WinMM.WAVE_FORMAT_PCM; format.nChannels = 1; format.nSamplesPerSec = 8000; format.wBitsPerSample = 8; format.nBlockAlign = Convert.ToUInt16(format.nChannels * format.wBitsPerSample); format.nAvgBytesPerSec = format.nSamplesPerSec * format.nBlockAlign; this.bufferLength = format.nAvgBytesPerSec; format.cbSize = 0; int i = WinMM.WinMM.waveInOpen(out this.handle, WinMM.WinMM.WAVE_MAPPER, ref format, Marshal.GetFunctionPointerForDelegate(waveIn), 0, WinMM.WinMM.CALLBACK_FUNCTION); if (i != WinMM.WinMM.MMSYSERR_NOERROR) { this.Text = "Error: waveInOpen"; return; } setupBuffer(); WinMM.WinMM.waveInStart(this.handle); } I read alot about marshalling the last few days, nevertheless I do not get this code working. When my callback function is called (waveIn) when the buffer is full, the hdr structure passed back in wavehdr is obviously corrupted. Here is an examlpe of how the structure looks like at that point: - wavehdr {WinMM.WinMM.WAVEHDR} WinMM.WinMM.WAVEHDR dwBufferLength 0x19904c00 uint dwBytesRecorded 0x0000fa00 uint dwFlags 0x00000003 uint dwLoops 0x1990f6a4 uint + dwUser 0x00000000 System.IntPtr + lpData 0x00000000 System.IntPtr + lpNext 0x00000000 System.IntPtr + reserved 0x7c07c9a0 System.IntPtr This obiously is not what I expected to get passed. 
I am clearly concerned about the order of the fields in the view. I do not know if Visual Studio .NET cares about actual memory order when displaying the record in the "local"-view, but they are obviously not displayed in the order I speciefied in the struct. Then theres no data pointer and the bufferLength field is far to high. Interestingly the bytesRecorded field is exactly 64000 - bufferLength and bytesRecorded I'd expect both to be 64000 though. I do not know what exactly is going wrong, maybe someone can help me out on this. I'm an absolute noob to managed code programming and marshalling so please don't be too harsh to me for all the stupid things I've propably done. Oh here's the C code definition for WAVEHDR which I found here, I believe I might have done something wrong in the C# struct definition: /* wave data block header */ typedef struct wavehdr_tag { LPSTR lpData; /* pointer to locked data buffer */ DWORD dwBufferLength; /* length of data buffer */ DWORD dwBytesRecorded; /* used for input only */ DWORD_PTR dwUser; /* for client's use */ DWORD dwFlags; /* assorted flags (see defines) */ DWORD dwLoops; /* loop control counter */ struct wavehdr_tag FAR *lpNext; /* reserved for driver */ DWORD_PTR reserved; /* reserved for driver */ } WAVEHDR, *PWAVEHDR, NEAR *NPWAVEHDR, FAR *LPWAVEHDR; If you are used to work with all those low level tools like pointer-arithmetic, casts, etc starting writing managed code is a pain in the ass. It's like trying to learn how to swim with your hands tied on your back. Some things I tried (to no effect): .NET compact framework does not seem to support the Pack = 2^x directive in [StructLayout]. I tried [StructLayout(LayoutKind.Explicit)] and used 4 bytes and 8 bytes alignment. 4 bytes alignmentgave me the same result as the above code and 8 bytes alignment only made things worse - but that's what I expected. Interestingly if I move the code from setupBuffer into the setupWaveIn and do not declare the GCHandle in the context of the class but in a local context of setupWaveIn the struct returned by the callback function does not seem to be corrupted. I am not sure however why this is the case and how I can use this knowledge to fix my code. I'd really appreciate any good links on marshalling, calling unmanaged code from C#, etc. Then I'd be very happy if someone could point out my mistakes. What am I doing wrong? Why do I not get what I'd expect.

    Read the article

  • Run-Time Check Failure #2 - Stack around the variable 'indices' was corrupted.

    - by numerical25
    Well, I think I know what the problem is; I am just having a hard time debugging it. I am working with the DirectX API and I am trying to generate a plane along the x and z axes according to a book I have. The problem occurs when I am creating my indices: I think I am setting values outside the bounds of the indices array, but I am having a hard time figuring out what I did wrong. I am unfamiliar with this method of generating a plane, so it's a little difficult for me. Below is my code; pay particular attention to the indices loop.

    #include "MyGame.h"
    //#include "CubeVector.h"

    /* This code sets a projection and shows a turning cube. What has been added is the project,
       rotation and a rasterizer to change the rasterization of the cube. The issue that was going on
       was something with the effect file which was causing the vertices not to be rendered correctly. */

    typedef struct
    {
        ID3D10Effect* pEffect;
        ID3D10EffectTechnique* pTechnique;

        //vertex information
        ID3D10Buffer* pVertexBuffer;
        ID3D10Buffer* pIndicesBuffer;
        ID3D10InputLayout* pVertexLayout;

        UINT numVertices;
        UINT numIndices;
    } ModelObject;

    ModelObject modelObject;

    // World Matrix
    D3DXMATRIX WorldMatrix;
    // View Matrix
    D3DXMATRIX ViewMatrix;
    // Projection Matrix
    D3DXMATRIX ProjectionMatrix;
    ID3D10EffectMatrixVariable* pProjectionMatrixVariable = NULL;

    //grid information
    #define NUM_COLS 16
    #define NUM_ROWS 16

    #define CELL_WIDTH 32
    #define CELL_HEIGHT 32

    #define NUM_VERTSX (NUM_COLS + 1)
    #define NUM_VERTSY (NUM_ROWS + 1)

    bool MyGame::InitDirect3D()
    {
        if(!DX3dApp::InitDirect3D())
        {
            return false;
        }

        D3D10_RASTERIZER_DESC rastDesc;
        rastDesc.FillMode = D3D10_FILL_WIREFRAME;
        rastDesc.CullMode = D3D10_CULL_FRONT;
        rastDesc.FrontCounterClockwise = true;
        rastDesc.DepthBias = false;
        rastDesc.DepthBiasClamp = 0;
        rastDesc.SlopeScaledDepthBias = 0;
        rastDesc.DepthClipEnable = false;
        rastDesc.ScissorEnable = false;
        rastDesc.MultisampleEnable = false;
        rastDesc.AntialiasedLineEnable = false;

        ID3D10RasterizerState *g_pRasterizerState;
        mpD3DDevice->CreateRasterizerState(&rastDesc, &g_pRasterizerState);
        mpD3DDevice->RSSetState(g_pRasterizerState);

        // Set up the World Matrix
        D3DXMatrixIdentity(&WorldMatrix);
        D3DXMatrixLookAtLH(&ViewMatrix,
                           new D3DXVECTOR3(0.0f, 10.0f, -20.0f),
                           new D3DXVECTOR3(0.0f, 0.0f, 0.0f),
                           new D3DXVECTOR3(0.0f, 1.0f, 0.0f));

        // Set up the projection matrix
        D3DXMatrixPerspectiveFovLH(&ProjectionMatrix, (float)D3DX_PI * 0.5f, (float)mWidth/(float)mHeight, 0.1f, 100.0f);

        if(!CreateObject())
        {
            return false;
        }

        return true;
    }

    //These are actions that take place after the clearing of the buffer and before the present
    void MyGame::GameDraw()
    {
        static float rotationAngle = 0.0f;

        // create the rotation matrix using the rotation angle
        D3DXMatrixRotationY(&WorldMatrix, rotationAngle);
        rotationAngle += (float)D3DX_PI * 0.0f;

        // Set the input layout
        mpD3DDevice->IASetInputLayout(modelObject.pVertexLayout);

        // Set vertex buffer
        UINT stride = sizeof(VertexPos);
        UINT offset = 0;
        mpD3DDevice->IASetVertexBuffers(0, 1, &modelObject.pVertexBuffer, &stride, &offset);
        mpD3DDevice->IASetIndexBuffer(modelObject.pIndicesBuffer, DXGI_FORMAT_R32_UINT, 0);

        // Set primitive topology
        mpD3DDevice->IASetPrimitiveTopology(D3D10_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

        // Combine and send the final matrix to the shader
        D3DXMATRIX finalMatrix = (WorldMatrix * ViewMatrix * ProjectionMatrix);
        pProjectionMatrixVariable->SetMatrix((float*)&finalMatrix);

        // make sure modelObject is valid

        // Render a model object
        D3D10_TECHNIQUE_DESC techniqueDescription;
        modelObject.pTechnique->GetDesc(&techniqueDescription);

        // Loop through the technique passes
        for(UINT p=0; p < techniqueDescription.Passes; ++p)
        {
            modelObject.pTechnique->GetPassByIndex(p)->Apply(0);

            // draw the cube using all 36 vertices and 12 triangles
            mpD3DDevice->DrawIndexed(modelObject.numIndices,0,0);
        }
    }

    //Render actually incapsulates Gamedraw, so you can call data before you actually clear the buffer
    //or after you present data
    void MyGame::Render()
    {
        DX3dApp::Render();
    }

    bool MyGame::CreateObject()
    {
        VertexPos vertices[NUM_VERTSX * NUM_VERTSY];
        for(int z=0; z < NUM_VERTSY; ++z)
        {
            for(int x = 0; x < NUM_VERTSX; ++x)
            {
                vertices[x + z * NUM_VERTSX].pos.x = (float)x * CELL_WIDTH;
                vertices[x + z * NUM_VERTSX].pos.z = (float)z * CELL_HEIGHT;
                vertices[x + z * NUM_VERTSX].pos.y = 0.0f;
                vertices[x + z * NUM_VERTSX].color = D3DXVECTOR4(1.0, 0.0f, 0.0f, 0.0f);
            }
        }

        DWORD indices[NUM_VERTSX * NUM_VERTSY];
        int curIndex = 0;
        for(int z=0; z < NUM_ROWS; ++z)
        {
            for(int x = 0; x < NUM_COLS; ++x)
            {
                int curVertex = x + (z * NUM_VERTSX);
                indices[curIndex] = curVertex;
                indices[curIndex + 1] = curVertex + NUM_VERTSX;
                indices[curIndex + 2] = curVertex + 1;

                indices[curIndex + 3] = curVertex + 1;
                indices[curIndex + 4] = curVertex + NUM_VERTSX;
                indices[curIndex + 5] = curVertex + NUM_VERTSX + 1;
                curIndex += 6;
            }
        }

        //Create Layout
        D3D10_INPUT_ELEMENT_DESC layout[] = {
            {"POSITION",0,DXGI_FORMAT_R32G32B32_FLOAT, 0 , 0, D3D10_INPUT_PER_VERTEX_DATA, 0},
            {"COLOR",0,DXGI_FORMAT_R32G32B32A32_FLOAT, 0 , 12, D3D10_INPUT_PER_VERTEX_DATA, 0}
        };

        UINT numElements = (sizeof(layout)/sizeof(layout[0]));
        modelObject.numVertices = sizeof(vertices)/sizeof(VertexPos);

        //Create buffer desc
        D3D10_BUFFER_DESC bufferDesc;
        bufferDesc.Usage = D3D10_USAGE_DEFAULT;
        bufferDesc.ByteWidth = sizeof(VertexPos) * modelObject.numVertices;
        bufferDesc.BindFlags = D3D10_BIND_VERTEX_BUFFER;
        bufferDesc.CPUAccessFlags = 0;
        bufferDesc.MiscFlags = 0;

        D3D10_SUBRESOURCE_DATA initData;
        initData.pSysMem = vertices;

        //Create the buffer
        HRESULT hr = mpD3DDevice->CreateBuffer(&bufferDesc, &initData, &modelObject.pVertexBuffer);
        if(FAILED(hr))
            return false;

        modelObject.numIndices = sizeof(indices)/sizeof(DWORD);
        bufferDesc.ByteWidth = sizeof(DWORD) * modelObject.numIndices;
        bufferDesc.BindFlags = D3D10_BIND_INDEX_BUFFER;
        initData.pSysMem = indices;

        hr = mpD3DDevice->CreateBuffer(&bufferDesc, &initData, &modelObject.pIndicesBuffer);
        if(FAILED(hr))
            return false;

        /////////////////////////////////////////////////////////////////////////////
        //Set up fx files
        LPCWSTR effectFilename = L"effect.fx";
        modelObject.pEffect = NULL;

        hr = D3DX10CreateEffectFromFile(effectFilename,
                                        NULL,
                                        NULL,
                                        "fx_4_0",
                                        D3D10_SHADER_ENABLE_STRICTNESS,
                                        0,
                                        mpD3DDevice,
                                        NULL,
                                        NULL,
                                        &modelObject.pEffect,
                                        NULL,
                                        NULL);
        if(FAILED(hr))
            return false;

        pProjectionMatrixVariable = modelObject.pEffect->GetVariableByName("Projection")->AsMatrix();

        //Dont sweat the technique. Get it!
        LPCSTR effectTechniqueName = "Render";
        modelObject.pTechnique = modelObject.pEffect->GetTechniqueByName(effectTechniqueName);
        if(modelObject.pTechnique == NULL)
            return false;

        //Create Vertex layout
        D3D10_PASS_DESC passDesc;
        modelObject.pTechnique->GetPassByIndex(0)->GetDesc(&passDesc);

        hr = mpD3DDevice->CreateInputLayout(layout, numElements,
                                            passDesc.pIAInputSignature,
                                            passDesc.IAInputSignatureSize,
                                            &modelObject.pVertexLayout);
        if(FAILED(hr))
            return false;

        return true;
    }
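
    For reference, the declared size of the indices array does look like the likely source of the run-time check failure: it holds NUM_VERTSX * NUM_VERTSY = 17 * 17 = 289 DWORDs, while the loop writes NUM_ROWS * NUM_COLS * 6 = 16 * 16 * 6 = 1536 of them, six per grid cell. Below is a minimal sketch of a consistent sizing, assuming the surrounding definitions in CreateObject stay unchanged; the NUM_INDICES macro is an addition for illustration only.

    // Each of the NUM_ROWS x NUM_COLS cells produces two triangles = 6 indices.
    #define NUM_INDICES (NUM_ROWS * NUM_COLS * 6)

    DWORD indices[NUM_INDICES];   // 1536 entries instead of NUM_VERTSX * NUM_VERTSY (289)
    int curIndex = 0;
    for(int z = 0; z < NUM_ROWS; ++z)
    {
        for(int x = 0; x < NUM_COLS; ++x)
        {
            int curVertex = x + (z * NUM_VERTSX);               // vertex at (x, z)
            indices[curIndex]     = curVertex;                  // (x,     z)
            indices[curIndex + 1] = curVertex + NUM_VERTSX;     // (x,     z + 1)
            indices[curIndex + 2] = curVertex + 1;              // (x + 1, z)
            indices[curIndex + 3] = curVertex + 1;              // (x + 1, z)
            indices[curIndex + 4] = curVertex + NUM_VERTSX;     // (x,     z + 1)
            indices[curIndex + 5] = curVertex + NUM_VERTSX + 1; // (x + 1, z + 1)
            curIndex += 6;
        }
    }
    // sizeof(indices)/sizeof(DWORD) now matches what the loop actually fills in.

    With that change, modelObject.numIndices and the index buffer's ByteWidth are computed from the same count the loop writes, which is what the DrawIndexed call expects.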

    Read the article
