Search Results

Search found 3754 results on 151 pages for 'vertex buffer'.


  • Parenting OpenGL with Groups in LibGDX

    - by Rudy_TM
    I am trying to make an object a child of a Group, but this object has a draw method that calls OpenGL to draw on the screen. Its class is this: public class OpenGLSquare extends Actor { private static final ImmediateModeRenderer renderer = new ImmediateModeRenderer10(); private static Matrix4 matrix = null; private static Vector2 temp = new Vector2(); public static void setMatrix4(Matrix4 mat) { matrix = mat; } @Override public void draw(SpriteBatch batch, float arg1) { // TODO Auto-generated method stub renderer.begin(matrix, GL10.GL_TRIANGLES); renderer.color(color.r, color.g, color.b, color.a); renderer.vertex(x0, y0, 0f); renderer.color(color.r, color.g, color.b, color.a); renderer.vertex(x0, y1, 0f); renderer.color(color.r, color.g, color.b, color.a); renderer.vertex(x1, y1, 0f); renderer.color(color.r, color.g, color.b, color.a); renderer.vertex(x1, y1, 0f); renderer.color(color.r, color.g, color.b, color.a); renderer.vertex(x1, y0, 0f); renderer.color(color.r, color.g, color.b, color.a); renderer.vertex(x0, y0, 0f); renderer.end(); } } In my screen class I have this; I call it in the constructor: MyGroupClass spriteLab = new MyGroupClass(spriteSheetLab); OpenGLSquare square = new OpenGLSquare(); square.setX0(100); square.setY0(200); square.setX1(400); square.setY1(280); square.color.set(Color.BLUE); square.setSize(); //spriteLab.addActorAt(0, clock); spriteLab.addActor(square); stage.addActor(spriteLab); And in the screen's render I have: @Override public void render(float arg0) { this.gl.glClear(GL10.GL_COLOR_BUFFER_BIT |GL10.GL_DEPTH_BUFFER_BIT); stage.draw(); stage.act(Gdx.graphics.getDeltaTime()); } The problem is that when I use OpenGL with a parent, it resets all the other children to position 0,0, and the OpenGL renderer paints the square at the exact position on the screen rather than relative to the parent. I tried using batch.enableBlending() and batch.disableBlending(), which fixes the position problem of the other children, but not the relative position of the OpenGL drawing, and it also applies alpha to the GL drawing. What am I doing wrong? :/

    Read the article

  • Triple buffering causes input lag?

    - by user782220
    Consider some time in between two vsyncs. Suppose the first display buffer is being used to display the current image, and suppose the game was really fast and computed and rendered the next image to the second display buffer and the one after that to the third display buffer. That is, the rendering to the second and third display buffers happens so fast that it occurs before the next vsync. Suppose input from the user comes in now. What you would like is for the results of the input to show up on the next vsync or (probably more typically) the vsync after that. However, with the third display buffer already rendered, the input can only affect the image after that, meaning the input will take effect, at best, 3 vsyncs later. I wish I had an image to show the exact timings of what I mean.

    Read the article

  • What utility is like Ten Clips, providing an enumerated clipboard?

    - by Aaron Newton
    A very useful (Windows) utility I use is TenClips - http://www.paludour.net/TenClips.html It allows you to create enumerated clipboards/emacs-like buffers easily using ctrl + f1, ctrl + f2, ctrl + f3, etc.: copy to the clipboard in the first buffer, switch to the second buffer, copy without losing the first buffer, switch back to the first buffer and paste, switch to the second buffer and paste, and so forth. Does something like this exist for Ubuntu? The closest post I could find was Looking for an application that saves clipboard history, which recommended Parcellite (http://parcellite.sourceforge.net/?page_id=2) - which keeps the history - but this is not quite what I'm after. If not, I might make this a pet project :D

    Read the article

  • How does OpenGL ES 2 assemble primitives?

    - by stephelton
    Two things I'm quite confused about. 1) OpenGL ES 2.0 creates primitives before the vertex shader is invoked. Why, then, does it not automatically provide the vertex shader the position of the vertex? 2) OpenGL ES 2.0 supports glDrawElements(), but it does not support glEnableClientState() or GL_VERTEX_ARRAY, so how can this call possibly be used to construct primitives? NOTE: this is OpenGL ES 2.0, NOT normal OpenGL! Thanks!
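
    For reference, a minimal sketch (not the asker's code) of how glDrawElements is fed in ES 2.0 without any client state: generic vertex attributes replace GL_VERTEX_ARRAY, and the attribute name "a_position" below is an assumed name that must match whatever the vertex shader declares.

        #include <GLES2/gl2.h>

        // Assumes 'program' is a linked program whose vertex shader declares:
        //     attribute vec3 a_position;
        void drawTriangles(GLuint program,
                           const GLfloat* vertices,
                           const GLushort* indices, GLsizei indexCount)
        {
            glUseProgram(program);

            // ES 2.0 has no glEnableClientState/GL_VERTEX_ARRAY; instead the
            // generic attribute is looked up by name and enabled explicitly.
            GLint posLoc = glGetAttribLocation(program, "a_position");
            glEnableVertexAttribArray(posLoc);

            // Point the attribute at the vertex data (a client-side array here;
            // a VBO bound to GL_ARRAY_BUFFER works the same way with an offset).
            glVertexAttribPointer(posLoc, 3, GL_FLOAT, GL_FALSE, 0, vertices);

            // glDrawElements assembles triangles from the index list; each vertex
            // reaches the shader through a_position rather than a built-in.
            glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, indices);

            glDisableVertexAttribArray(posLoc);
        }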

    Read the article

  • Why does Unity obj import flip my x coordinate?

    - by milkplus
    When I import my Wavefront obj model into Unity and then draw lines over it with the same coordinates as in the obj file, the x coordinate is negated. I don't see any option in the importer that might be doing that, and I'm using the same localToWorldMatrix and the same coordinate data as in the .obj file. Hmmm GL.PushMatrix(); GL.MultMatrix(transform.localToWorldMatrix); CreateMaterial(); lineMaterial.SetPass(0); GL.Color(new Color(0, 1, 0)); GL.Begin(GL.LINES); GL.Vertex(p1); GL.Vertex(p2); GL.Vertex(p2); GL.Vertex(p3); //... GL.End(); GL.PopMatrix();

    Read the article

  • How to make other semantics behave like SV_Position?

    - by object
    I'm having a lot of trouble with shadow mapping, and I believe I've found the problem. When passing vectors from the vertex shader to the pixel shader, does the hardware automatically change any of the values based on the semantic? I've compiled a barebones pair of shaders which should illustrate the problem. Vertex shader: struct Vertex { float3 position : POSITION; }; struct Pixel { float4 position : SV_Position; float4 light_position : POSITION; }; cbuffer Matrices { matrix projection; }; Pixel RenderVertexShader(Vertex input) { Pixel output; output.position = mul(float4(input.position, 1.0f), projection); output.light_position = output.position; // We simply pass the same vector in screenspace through different semantics. return output; } And a simple pixel shader to go along with it: struct Pixel { float4 position : SV_Position; float4 light_position : POSITION; }; float4 RenderPixelShader(Pixel input) : SV_Target { // At this point, (input.position.z / input.position.w) is a normal depth value. // However, (input.light_position.z / input.light_position.w) is 0.999f or similar. // If the primitive is touching the near plane, it very quickly goes to 0. return (0.0f).rrrr; } How is it possible to make the hardware treat light_position the same way position is treated between the vertex and pixel shaders? EDIT: Aha! (input.position.z) without dividing by W is the same as (input.light_position.z / input.light_position.w). Not sure why this is.

    Read the article

  • What is the minimum shader I need to run a basic calculation on the GPU?

    - by Jinxi
    I read that the Hull Shader, Domain Shader, Geometry Shader and Pixel Shader can be used optionally. So, is the Vertex Shader optional too? If not: what does a basic Vertex Shader look like? Just a simple pass-through? Is the Vertex Shader necessary to tell what kind of data structure (Van Stripes or Meshes) is used? What can I do with just the vertex shader? Do the fixed functions work without any help from programming a programmable stage?

    Read the article

  • Implementing a robust async stream reader

    - by Jon
    I recently provided an answer to this question: C# - Realtime console output redirection. As often happens, explaining stuff (here "stuff" was how I tackled a similar problem) leads you to greater understanding and/or, as is the case here, "oops" moments. I realized that my solution, as implemented, has a bug. The bug has little practical importance, but it has an extremely large importance to me as a developer: I can't rest easy knowing that my code has the potential to blow up. Squashing the bug is the purpose of this question. I apologize for the long intro, so let's get dirty. I wanted to build a class that allows me to receive input from a Stream in an event-based manner. The stream, in my scenario, is guaranteed to be a FileStream and there is also an associated StreamReader already present to leverage. The public interface of the class is this: public class MyStreamManager { public event EventHandler<ConsoleOutputReadEventArgs> StandardOutputRead; public void StartSendingEvents(); public void StopSendingEvents(); } Obviously this specific scenario has to do with a console's standard output, but that is a detail and does not play an important role. StartSendingEvents and StopSendingEvents do what they advertise; for the purposes of this discussion, we can assume that events are always being sent without loss of generality. The class uses these two fields internally: protected readonly StringBuilder inputAccumulator = new StringBuilder(); protected readonly byte[] buffer = new byte[256]; The functionality of the class is implemented in the methods below. To get the ball rolling: public void StartSendingEvents(); { this.stopAutomation = false; this.BeginReadAsync(); } To read data out of the Stream without blocking, and also without requiring a carriage return char, BeginRead is called: protected void BeginReadAsync() { if (!this.stopAutomation) { this.StandardOutput.BaseStream.BeginRead( this.buffer, 0, this.buffer.Length, this.ReadHappened, null); } } The challenging part: BeginRead requires using a buffer. This means that when reading from the stream, it is possible that the bytes available to read ("incoming chunk") are larger than the buffer. Since we are only handing off data from the stream to a consumer, and that consumer may well have inside knowledge about the size and/or format of these chunks, I want to call event subscribers exactly once for each chunk. Otherwise the abstraction breaks down and the subscribers have to buffer the incoming data and reconstruct the chunks themselves using said knowledge. This is much less convenient to the calling code, and detracts from the usefulness of my class. To this end, if the buffer is full after EndRead, we don't send its contents to subscribers immediately but instead append them to a StringBuilder. The contents of the StringBuilder are only sent back whenever there is no more to read from the stream (thus preserving the chunks). 
private void ReadHappened(IAsyncResult asyncResult) { var bytesRead = this.StandardOutput.BaseStream.EndRead(asyncResult); if (bytesRead == 0) { this.OnAutomationStopped(); return; } var input = this.StandardOutput.CurrentEncoding.GetString( this.buffer, 0, bytesRead); this.inputAccumulator.Append(input); if (bytesRead < this.buffer.Length) { this.OnInputRead(); // only send back if we 're sure we got it all } this.BeginReadAsync(); // continue "looping" with BeginRead } After any read which is not enough to fill the buffer, all accumulated data is sent to the subscribers: private void OnInputRead() { var handler = this.StandardOutputRead; if (handler == null) { return; } handler(this, new ConsoleOutputReadEventArgs(this.inputAccumulator.ToString())); this.inputAccumulator.Clear(); } (I know that as long as there are no subscribers the data gets accumulated forever. This is a deliberate decision). The good This scheme works almost perfectly: Async functionality without spawning any threads Very convenient to the calling code (just subscribe to an event) Maintains the "chunkiness" of the data; this allows the calling code to use inside knowledge of the data without doing any extra work Is almost agnostic to the buffer size (it will work correctly with any size buffer irrespective of the data being read) The bad That last almost is a very big one. Consider what happens when there is an incoming chunk with length exactly equal to the size of the buffer. The chunk will be read and buffered, but the event will not be triggered. This will be followed up by a BeginRead that expects to find more data belonging to the current chunk in order to send it back all in one piece, but... there will be no more data in the stream. In fact, as long as data is put into the stream in chunks with length exactly equal to the buffer size, the data will be buffered and the event will never be triggered. This scenario may be highly unlikely to occur in practice, especially since we can pick any number for the buffer size, but the problem is there. Solution? Unfortunately, after checking the available methods on FileStream and StreamReader, I can't find anything which lets me peek into the stream while also allowing async methods to be used on it. One "solution" would be to have a thread wait on a ManualResetEvent after the "buffer filled" condition is detected. If the event is not signaled (by the async callback) in a small amount of time, then more data from the stream will not be forthcoming and the data accumulated so far should be sent to subscribers. However, this introduces the need for another thread, requires thread synchronization, and is plain inelegant. Specifying a timeout for BeginRead would also suffice (call back into my code every now and then so I can check if there's data to be sent back; most of the time there will not be anything to do, so I expect the performance hit to be negligible). But it looks like timeouts are not supported in FileStream. Since I imagine that async calls with timeouts are an option in bare Win32, another approach might be to PInvoke the hell out of the problem. But this is also undesirable as it will introduce complexity and simply be a pain to code. Is there an elegant way to get around the problem? Thanks for being patient enough to read all of this.

    Read the article

  • Thread Local Memory: Using std::string's internal buffer for C-style Scratch Memory

    - by Hassan Syed
    I am using Protocol Buffers and OpenSSL to generate HMACs and then CBC-encrypt the two fields to obfuscate the session cookies -- similar to Kerberos tokens. Protocol Buffers' API communicates with std::strings and has a buffer caching mechanism; I exploit the caching mechanism, for successive calls in the same thread, by placing it in thread local memory; additionally, the OpenSSL HMAC and EVP CTXs are also placed in the same thread local memory structure (see this question for some detail on why I use thread local memory and the massive amount of speedup it enables even with a single thread). The generation and deserialization -- "my algorithms" -- of these cookie strings use intermediary void*s and std::strings, and since Protocol Buffers has an internal memory retention mechanism I want these characteristics for "my algorithms". So how do I implement a common scratch memory? I don't know much about the rdbuf (streambuf - stringbuf??) of the std::string object. I would presumably need to grow it to the lowest common size ever encountered during the execution of "my algorithms". Thoughts? My question, I guess, would be: "is the internal buffer of a string re-usable, and if so, how?" Edit: See comments to Vlad's answer please.
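
    In case it helps frame the question, a minimal sketch of the kind of reuse being described, assuming a contiguous std::string (guaranteed in C++11, true in practice for earlier implementations) and a compiler with thread_local; the helper name is made up:

        #include <string>

        // One scratch string per thread; its capacity survives between calls,
        // so once it has grown to the largest size seen, later calls stop allocating.
        static thread_local std::string scratch;

        // Hypothetical helper: hand out writable storage of at least n bytes.
        inline char* scratch_buffer(std::size_t n)
        {
            if (scratch.size() < n)
                scratch.resize(n);   // grows once per new maximum, then is reused
            return &scratch[0];      // contiguous, writable up to scratch.size()
        }

        // Usage sketch (protobuf-style API assumed, not checked against a version):
        //     int len = message.ByteSize();
        //     message.SerializeToArray(scratch_buffer(len), len);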

    Read the article

  • Implementing a robust async stream reader for a console

    - by Jon
    I recently provided an answer to this question: C# - Realtime console output redirection. As often happens, explaining stuff (here "stuff" was how I tackled a similar problem) leads you to greater understanding and/or, as is the case here, "oops" moments. I realized that my solution, as implemented, has a bug. The bug has little practical importance, but it has an extremely large importance to me as a developer: I can't rest easy knowing that my code has the potential to blow up. Squashing the bug is the purpose of this question. I apologize for the long intro, so let's get dirty. I wanted to build a class that allows me to receive input from a Stream in an event-based manner. The stream, in my scenario, is guaranteed to be a FileStream and there is also an associated StreamReader already present to leverage. The public interface of the class is this: public class MyStreamManager { public event EventHandler<ConsoleOutputReadEventArgs> StandardOutputRead; public void StartSendingEvents(); public void StopSendingEvents(); } Obviously this specific scenario has to do with a console's standard output. StartSendingEvents and StopSendingEvents do what they advertise; for the purposes of this discussion, we can assume that events are always being sent without loss of generality. The class uses these two fields internally: protected readonly StringBuilder inputAccumulator = new StringBuilder(); protected readonly byte[] buffer = new byte[256]; The functionality of the class is implemented in the methods below. To get the ball rolling: public void StartSendingEvents(); { this.stopAutomation = false; this.BeginReadAsync(); } To read data out of the Stream without blocking, and also without requiring a carriage return char, BeginRead is called: protected void BeginReadAsync() { if (!this.stopAutomation) { this.StandardOutput.BaseStream.BeginRead( this.buffer, 0, this.buffer.Length, this.ReadHappened, null); } } The challenging part: BeginRead requires using a buffer. This means that when reading from the stream, it is possible that the bytes available to read ("incoming chunk") are larger than the buffer. Since we are only handing off data from the stream to a consumer, and that consumer may well have inside knowledge about the size and/or format of these chunks, I want to call event subscribers exactly once for each chunk. Otherwise the abstraction breaks down and the subscribers have to buffer the incoming data and reconstruct the chunks themselves using said knowledge. This is much less convenient to the calling code, and detracts from the usefulness of my class. Edit: There are comments below correctly stating that since the data is coming from a stream, there is absolutely nothing that the receiver can infer about the structure of the data unless it is fully prepared to parse it. What I am trying to do here is leverage the "flush the output" "structure" that the owner of the console imparts while writing on it. I am prepared to assume (better: allow my caller to have the option to assume) that the OS will pass me the data written between two flushes of the stream in exactly one piece. To this end, if the buffer is full after EndRead, we don't send its contents to subscribers immediately but instead append them to a StringBuilder. The contents of the StringBuilder are only sent back whenever there is no more to read from the stream (thus preserving the chunks). 
private void ReadHappened(IAsyncResult asyncResult) { var bytesRead = this.StandardOutput.BaseStream.EndRead(asyncResult); if (bytesRead == 0) { this.OnAutomationStopped(); return; } var input = this.StandardOutput.CurrentEncoding.GetString( this.buffer, 0, bytesRead); this.inputAccumulator.Append(input); if (bytesRead < this.buffer.Length) { this.OnInputRead(); // only send back if we 're sure we got it all } this.BeginReadAsync(); // continue "looping" with BeginRead } After any read which is not enough to fill the buffer, all accumulated data is sent to the subscribers: private void OnInputRead() { var handler = this.StandardOutputRead; if (handler == null) { return; } handler(this, new ConsoleOutputReadEventArgs(this.inputAccumulator.ToString())); this.inputAccumulator.Clear(); } (I know that as long as there are no subscribers the data gets accumulated forever. This is a deliberate decision). The good This scheme works almost perfectly: Async functionality without spawning any threads Very convenient to the calling code (just subscribe to an event) Maintains the "chunkiness" of the data; this allows the calling code to use inside knowledge of the data without doing any extra work Is almost agnostic to the buffer size (it will work correctly with any size buffer irrespective of the data being read) The bad That last almost is a very big one. Consider what happens when there is an incoming chunk with length exactly equal to the size of the buffer. The chunk will be read and buffered, but the event will not be triggered. This will be followed up by a BeginRead that expects to find more data belonging to the current chunk in order to send it back all in one piece, but... there will be no more data in the stream. In fact, as long as data is put into the stream in chunks with length exactly equal to the buffer size, the data will be buffered and the event will never be triggered. This scenario may be highly unlikely to occur in practice, especially since we can pick any number for the buffer size, but the problem is there. Solution? Unfortunately, after checking the available methods on FileStream and StreamReader, I can't find anything which lets me peek into the stream while also allowing async methods to be used on it. One "solution" would be to have a thread wait on a ManualResetEvent after the "buffer filled" condition is detected. If the event is not signaled (by the async callback) in a small amount of time, then more data from the stream will not be forthcoming and the data accumulated so far should be sent to subscribers. However, this introduces the need for another thread, requires thread synchronization, and is plain inelegant. Specifying a timeout for BeginRead would also suffice (call back into my code every now and then so I can check if there's data to be sent back; most of the time there will not be anything to do, so I expect the performance hit to be negligible). But it looks like timeouts are not supported in FileStream. Since I imagine that async calls with timeouts are an option in bare Win32, another approach might be to PInvoke the hell out of the problem. But this is also undesirable as it will introduce complexity and simply be a pain to code. Is there an elegant way to get around the problem? Thanks for being patient enough to read all of this.

    Read the article

  • What would it take to get auto-revert-mode to actually work in my dired buffer?

    - by Cheeso
    Apparently auto-revert-mode is supposed to work in dired buffers. I had never heard of this, but the doc says it works. Then I read a little more and found some fine print: Auto-reverting Dired buffers currently works on GNU or Unix style operating systems. It may not work satisfactorily on some other systems. ...and... [dired buffers] do not auto-revert when information about a particular file changes (e.g. when the size changes) or when inserted subdirectories change. To be sure that all listed information is up to date, you have to manually revert using g, even if auto-reverting is enabled in the Dired buffer. source Well, uh, gee.... That doesn't sound like autorevert to me. What would it take to get auto-revert for dired to actually work? Even on (gasp) non-Unix operating systems. Could I just modify auto-revert-handler to call revert-buffer on dired buffers?

    Read the article

  • How to buffer an IPoint or IGeometry? (How to do buffered intersection checks on an IPoint?)

    - by Quigrim
    How would I buffer an IPoint to do an intersection check using IRelationalOperator? I have, for argument's sake: IPoint p1 = xxx; IPoint p2 = yyy; IRelationalOperator rel1 = (IRelationalOperator)p1; if (rel.Intersects (p2)) // Do something But now I want to add a tolerance to my check, so I assume the right way to do that is by buffering either p1 or p2. Right? How do I add such a buffer? Note: the Intersects method I am using is an extension method I wrote to simplify my code. Here it is: /// <summary> /// Returns true if the IGeometry is intersected. /// This method negates the Disjoint method. /// </summary> /// <param name="relOp">The rel op.</param> /// <param name="other">The other.</param> /// <returns></returns> public static bool Intersects ( this IRelationalOperator relOp, IGeometry other) { return (!relOp.Disjoint (other)); }

    Read the article

  • XNA Reach profile with VMWare - Vertex Buffers not working?

    - by Nektarios
    Running an XNA app, using the Reach profile, in VMWare Fusion; host OS is Mac OS X, VM is Windows XP SP3 (my dual-boot OS). Running on a MacBook Pro w/NVidia 320M graphics card. When I am booted into XP natively, my code works. The code is drawing cubes that are set up using vertex buffers. When another friend runs this same code on Windows 7, it also works for him just fine. When I am running my code in the VM, it doesn't work. I have billboarding sprites running in a shader program and this part displays fine. I get no crashing or errors, the geometry just doesn't appear. I tried Debug and Release. This is a very basic operation, so I'm thinking VMWare isn't the problem, but my code is.... My init code: var vertexArray = verts.ToArray(); var indexArray = indices.ToArray(); indexBuffer = new IndexBuffer(GraphicsDevice, typeof(Int16), indexArray.Length, BufferUsage.WriteOnly); indexBuffer.SetData(indexArray); vertexBuffer = new VertexBuffer(GraphicsDevice, typeof(VertexPositionColor), vertexArray.Length, BufferUsage.WriteOnly); vertexBuffer.SetData(vertexArray); My Draw code: // problem isn't here, tried no cull GraphicsDevice.RasterizerState = RasterizerState.CullClockwise; GraphicsDevice.BlendState = BlendState.AlphaBlend; GraphicsDevice.DepthStencilState = new DepthStencilState() { DepthBufferEnable = true }; // Update View and Projection TileEffect.View = ((Game1)Game).Camera.View; TileEffect.Projection = ((Game1)Game).Camera.Projection; TileEffect.CurrentTechnique.Passes[0].Apply(); GraphicsDevice.SetVertexBuffer(vertexBuffer); GraphicsDevice.Indices = indexBuffer; GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, indices.Count, 0, indices.Count / 3); For LoadContent: TileEffect = new BasicEffect(GraphicsDevice) { World = Matrix.Identity, View = ((Game1)Game).Camera.View, Projection = ((Game1)Game).Camera.Projection, VertexColorEnabled = true };

    Read the article

  • question about fgets

    - by user105033
    Is this safe to do? (Does fgets terminate the buffer with a null?) Or should I be setting the 20th byte to null after the call to fgets, before I call clean? // strip new lines void clean(char *data) { while (*data) { if (*data == '\n' || *data == '\r') *data = '\0'; data++; } } // for this, assume that the file contains 1 line no longer than 19 bytes // buffer is freed elsewhere char *load_latest_info(char *file) { FILE *f; char *buffer = (char*) malloc(20); if (f = fopen(file, "r")) if (fgets(buffer, 20, f)) { clean(buffer); return buffer; } free(buffer); return NULL; }
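
    For what it's worth, fgets reads at most size-1 characters and, when it succeeds, always stores a terminating '\0', so the manual 20th-byte write shouldn't be needed as long as the return value is checked. A small sketch of the same loader leaning on that (and closing the file, which the original leaks):

        #include <stdio.h>
        #include <stdlib.h>

        char *load_latest_info(const char *file)
        {
            FILE *f = fopen(file, "r");
            if (!f)
                return NULL;

            char *buffer = (char *)malloc(20);
            if (buffer && fgets(buffer, 20, f)) {  // reads <= 19 chars, then writes '\0'
                fclose(f);
                return buffer;                     // already null-terminated here
            }

            free(buffer);
            fclose(f);
            return NULL;
        }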

    Read the article

  • Why does this code fail in D2010, but not D7?

    - by Tom1952
    Why does this code get an access error on the Result := Buffer line in D2010, but not D7? Something, I'd guess, involving Unicode, but the compiler doesn't generate any warnings. Any suggestions on an elegant workaround? function GetTempPathAndFileName( const aExtension: string): string; var Buffer: array[0..MAX_PATH] of Char; begin repeat GetTempPath(SizeOf(Buffer) - 1, Buffer); GetTempFileName(Buffer, '~', 0, Buffer); Result := Buffer; // <--- crashes on this line, Result := ChangeFileExt(Result, aExtension); until not FileExists(Result); end; { GetTempPathAndFileName }

    Read the article

  • Pointer problem in C for char*

    - by egebilmuh
    Hi guys, I use pointers for holding the name and research lab properties. But when I print an existing Vertex, I can't see the so-called attributes properly. For example, though the real value of name is "lancelot", I see something wrong such as "asdasdasdasd". struct vertex { int value; char*name; char* researchLab; struct vertex *next; struct edge *list; }; void GRAPHinsertV(Graph G, int value,char*name,char*researchLab) { //create new Vertex. Vertex newV = malloc(sizeof newV); // set value of new variable to which belongs the person. newV->value = value; newV->name=name; newV->researchLab=researchLab; newV->next = G->head; newV->list = NULL; G->head = newV; G->V++; } /*** The method creates new person. **/ void createNewPerson(Graph G) { int id; char name[30]; char researchLab[30]; // get requeired variables. printf("Enter id of the person to be added.\n"); scanf("%d",&id); printf("Enter name of the person to be added.\n"); scanf("%s",name); printf("Enter researc lab of the person to be added\n"); scanf("%s",researchLab); // insert the people to the social network. GRAPHinsertV(G,id,name,researchLab); } void ListAllPeople(Graph G) { Vertex tmp; Edge list; for(tmp = G->head;tmp!=NULL;tmp=tmp->next) { fprintf(stdout,"V:%d\t%s\t%s\n",tmp->value,tmp->name,tmp->researchLab); } system("pause"); }
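
    Two things in that snippet would produce exactly this kind of garbage: malloc(sizeof newV) allocates only the size of a pointer rather than of the struct, and GRAPHinsertV stores the name/researchLab pointers themselves, which still point at createNewPerson's local char[30] arrays after that function returns. A hedged sketch of the usual fix, assuming Vertex is a typedef for struct vertex * and reusing the question's Graph type (strdup can be replaced by malloc+strcpy where it is unavailable):

        #include <stdlib.h>
        #include <string.h>

        void GRAPHinsertV(Graph G, int value, const char *name, const char *researchLab)
        {
            Vertex newV = (Vertex)malloc(sizeof *newV); /* size of the struct, not the pointer */
            newV->value = value;
            newV->name = strdup(name);                  /* own copies, so they outlive the    */
            newV->researchLab = strdup(researchLab);    /* caller's local arrays              */
            newV->next = G->head;
            newV->list = NULL;
            G->head = newV;
            G->V++;
        }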

    Read the article

  • When do I need to deallocate memory?

    - by extintor
    I am using this code inside a class to make a webbrowser control visit a website: void myClass::visitWeb(const char *url) { WCHAR buffer[MAX_LEN]; ZeroMemory(buffer, sizeof(buffer)); MultiByteToWideChar(CP_ACP, MB_ERR_INVALID_CHARS, url, strlen(url), buffer, sizeof(buffer)-1); VARIANT vURL; vURL.vt = VT_BSTR; vURL.bstrVal = SysAllocString(buffer); // webbrowser navigate code... VariantClear(&vURL); } I call visitWeb from another void function that gets called on the handlemessage() for the app. Do I need to do some memory deallocation here? I see vURL is being deallocated by VariantClear, but should I deallocate memory for buffer? I've been told that in another bool I have in the same app I shouldn't deallocate anything because everything is cleared out when the bool returns true/false, but what happens in this void?
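
    A short annotated sketch of where each allocation in that function ends up, assuming vURL really holds a BSTR as shown (this is the question's own function with ownership comments, not new logic):

        void myClass::visitWeb(const char *url)
        {
            WCHAR buffer[MAX_LEN];              // stack array: released automatically
            ZeroMemory(buffer, sizeof(buffer)); //   when the function returns, no free needed
            MultiByteToWideChar(CP_ACP, MB_ERR_INVALID_CHARS, url, strlen(url),
                                buffer, sizeof(buffer) - 1);
            // note: that last argument is a count of WCHARs, so sizeof(buffer) - 1
            // overstates the capacity; MAX_LEN - 1 would be the safer value

            VARIANT vURL;
            vURL.vt = VT_BSTR;
            vURL.bstrVal = SysAllocString(buffer);  // heap-allocated BSTR: must be released...

            // webbrowser navigate code...

            VariantClear(&vURL);                    // ...and VariantClear does exactly that for
                                                    // VT_BSTR (it calls SysFreeString), so no
                                                    // extra delete/free belongs in this method
        }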

    Read the article

  • emacs list-buffers behavior

    - by Stephen
    In GNU emacs, every time I hit Ctrl-x Ctrl-b to see all of my buffers, the window is split to show the buffer list, or if I have my window already split in 2 (for instance, I will have a shell running in the lower window), the buffer list appears in the other window. My desired behavior is for the buffer list to appear in my active window so that I can select the buffer I want and continue to working in the same window, rather than having to Ctrl-x Ctrl-o to the other buffer, selecting the buffer (with enter) and editing that buffer in the other window... I've googled for it but it doesn't seem to be a common desire? I wonder if anyone has an elispy (or other) solution?

    Read the article

  • MPFR Rounding 0.9999 to 1?

    - by Silmaersti
    I'm attempting to store the value 0.9999 in an mpfr_t variable, but 0.9999 is rounded to 1 (or some other value != 0.9999) during storage, no matter the rounding mode (GMP_RNDD, GMP_RNDU, GMP_RNDN, GMP_RNDZ). So what's the best method to store 0.9999 in an mpfr_t variable? Is it possible? Here is my test program; it prints "buffer is: 1" instead of the wanted "buffer is: 0.9999": int main() { size_t precision = 4; mpfr_t mpfrValue; mpfr_init2(mpfrValue, precision); mpfr_set_str(mpfrValue, "0.9999", 10, GMP_RNDN); char *buffer = (char*)malloc((sizeof(char) * precision) + 3); mp_exp_t exponent; mpfr_get_str(buffer, &exponent, 10, precision, mpfrValue, GMP_RNDN); printf("buffer is: %s\n", buffer); free(buffer); mpfr_clear(mpfrValue); return 0; } Thanks for any help!
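
    One likely culprit, sketched below: mpfr_init2 takes the precision in bits, not decimal digits, so a 4-bit mantissa simply cannot hold 0.9999 and, under round-to-nearest, the closest representable value is 1. With a few dozen bits (and a buffer comfortably larger than the digit count) the four digits survive; this sketch reuses the same old-style GMP_RNDN / mp_exp_t names as the question:

        #include <stdio.h>
        #include <mpfr.h>

        int main(void)
        {
            mpfr_t v;
            mpfr_init2(v, 64);                    /* 64 *bits* of mantissa, not 4 digits */
            mpfr_set_str(v, "0.9999", 10, GMP_RNDN);

            char buffer[32];
            mp_exp_t exponent;
            /* 4 significant decimal digits; the exponent comes back separately,
               so "9999" with exponent 0 means 0.9999 */
            mpfr_get_str(buffer, &exponent, 10, 4, v, GMP_RNDN);

            printf("buffer is: %s (exponent %ld)\n", buffer, (long)exponent);
            mpfr_clear(v);
            return 0;
        }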

    Read the article

  • When do I need to deallocate memory? C++

    - by extintor
    I am using this code inside a class to make a webbrowser control visit a website: void myClass::visitWeb(const char *url) { WCHAR buffer[MAX_LEN]; ZeroMemory(buffer, sizeof(buffer)); MultiByteToWideChar(CP_ACP, MB_ERR_INVALID_CHARS, url, strlen(url), buffer, sizeof(buffer)-1); VARIANT vURL; vURL.vt = VT_BSTR; vURL.bstrVal = SysAllocString(buffer); // webbrowser navigate code... VariantClear(&vURL); } Do I need to do some memory deallocation here? I see vURL is being deallocated by VariantClear, but should I deallocate memory for buffer? I've been told that in another bool I have in the same app I shouldn't deallocate anything because everything is cleared out when the bool returns true/false, but what happens in this void?

    Read the article

  • elisp: call command on current file

    - by Jasie
    Hello, I want to set a key in emacs to perform a shell command on the file in the buffer, and revert the buffer without prompting. The shell command is: p4 edit 'currentfilename.ext' (global-set-key [\C-E] (funcall 'revert-buffer 1 1 1)) ;; my attempt above to call revert-buffer with a non-nil ;; argument (ignoring the shell command for now) -- get an init error: ;; Error in init file: error: "Buffer does not seem to be associated with any file" Completely new to elisp. From the emacs manual, here is the definition of revert-buffer: Command: revert-buffer &optional ignore-auto noconfirm preserve-modes Thanks!

    Read the article

  • Need help making this code more efficient

    - by Rendicahya
    I always use this method to easily read the content of a file. Is it efficient enough? Is 1024 good for the buffer size? public static String read(File file) { FileInputStream stream = null; StringBuilder str = new StringBuilder(); try { stream = new FileInputStream(file); } catch (FileNotFoundException e) { } FileChannel channel = stream.getChannel(); ByteBuffer buffer = ByteBuffer.allocate(1024); try { while (channel.read(buffer) != -1) { buffer.flip(); while (buffer.hasRemaining()) { str.append((char) buffer.get()); } buffer.rewind(); } } catch (IOException e) { } finally { try { channel.close(); stream.close(); } catch (IOException e) { } } return str.toString(); }

    Read the article

  • To store char* from a function return value

    - by samprat
    Hi folks, I am trying to implement a function which reads from a serial port (Linux) and returns a char*. The function works fine, but how would I store the return value from the function? An example of the function is: char *ReadToSerialPort() { char *bufptr; char buffer[256]; // Input buffer //char *bufptr; // Current char in buffer int nbytes; // Number of bytes read bufptr = buffer; while ((nbytes = read(fd, bufptr, buffer+sizeof(buffer)-bufptr -1 )) > 0) { bufptr += nbytes; // if (bufptr[-1] == '\n' || bufptr[-1] == '\r') /*if ( bufptr[sizeof(buffer) -1] == '*' && bufptr[0] == '$' ) { break; }*/ } // while ends if ( nbytes ) return bufptr; else return 0; *bufptr = '\0'; } // end ReadAdrPort //In main int main( int argc , char *argv[]) { char *letter; if(strcpy(letter, ReadToSerialPort()) >0 ) { printf("Response is %s\n",letter); } }
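
    One note before the "how do I store it" part: the pointer being returned points into the local buffer array, which no longer exists once ReadToSerialPort returns, and the strcpy in main copies into an uninitialized char* letter. A hedged sketch of a common shape for this, with the caller owning the storage (fd assumed already open as in the question, framing/error handling trimmed):

        #include <stdio.h>
        #include <unistd.h>

        /* Caller supplies the buffer, so nothing dangles after return.
           Returns the number of bytes read; 0 on end-of-stream or error. */
        size_t ReadToSerialPort(int fd, char *out, size_t outsize)
        {
            size_t used = 0;
            ssize_t nbytes;

            while (used + 1 < outsize &&
                   (nbytes = read(fd, out + used, outsize - 1 - used)) > 0)
            {
                used += (size_t)nbytes;
                if (out[used - 1] == '\n' || out[used - 1] == '\r')  /* or '$'...'*' framing */
                    break;
            }
            out[used] = '\0';
            return used;
        }

        /* Usage sketch in main:
               char letter[256];
               if (ReadToSerialPort(fd, letter, sizeof letter) > 0)
                   printf("Response is %s\n", letter);
        */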

    Read the article

  • I am using winsock2.h in C; the following errors are not understandable, help required

    - by moon
    I am going to paste my code and errors here: #include "stdio.h" #include "winsock2.h" #define SIO_RCVALL _WSAIOW(IOC_VENDOR,1) //this removes the need of mstcpip.h void StartSniffing (SOCKET Sock); //This will sniff here and there void ProcessPacket (unsigned char* , int); //This will decide how to digest void PrintIpHeader (unsigned char* , int); void PrintUdpPacket (unsigned char* , int); void ConvertToHex (unsigned char* , unsigned int); void PrintData (unsigned char* , int); //IP Header Structure typedef struct ip_hdr { unsigned char ip_header_len:4; // 4-bit header length (in 32-bit words) normally=5 (Means 20 Bytes may be 24 also) unsigned char ip_version :4; // 4-bit IPv4 version unsigned char ip_tos; // IP type of service unsigned short ip_total_length; // Total length unsigned short ip_id; // Unique identifier unsigned char ip_frag_offset :5; // Fragment offset field unsigned char ip_more_fragment :1; unsigned char ip_dont_fragment :1; unsigned char ip_reserved_zero :1; unsigned char ip_frag_offset1; //fragment offset unsigned char ip_ttl; // Time to live unsigned char ip_protocol; // Protocol(TCP,UDP etc) unsigned short ip_checksum; // IP checksum unsigned int ip_srcaddr; // Source address unsigned int ip_destaddr; // Source address } IPV4_HDR; //UDP Header Structure typedef struct udp_hdr { unsigned short source_port; // Source port no. unsigned short dest_port; // Dest. port no. unsigned short udp_length; // Udp packet length unsigned short udp_checksum; // Udp checksum (optional) } UDP_HDR; //ICMP Header Structure typedef struct icmp_hdr { BYTE type; // ICMP Error type BYTE code; // Type sub code USHORT checksum; USHORT id; USHORT seq; } ICMP_HDR; FILE *logfile; int tcp=0,udp=0,icmp=0,others=0,igmp=0,total=0,i,j; struct sockaddr_in source,dest; char hex[2]; //Its free!
IPV4_HDR *iphdr; UDP_HDR *udpheader; int main() { SOCKET sniffer; struct in_addr addr; int in; char hostname[100]; struct hostent *local; WSADATA wsa; //logfile=fopen("log.txt","w"); //if(logfile==NULL) printf("Unable to create file."); //Initialise Winsock printf("\nInitialising Winsock..."); if (WSAStartup(MAKEWORD(2,2), &wsa) != 0) { printf("WSAStartup() failed.\n"); return 1; } printf("Initialised"); //Create a RAW Socket printf("\nCreating RAW Socket..."); sniffer = socket(AF_INET, SOCK_RAW, IPPROTO_IP); if (sniffer == INVALID_SOCKET) { printf("Failed to create raw socket.\n"); return 1; } printf("Created."); //Retrive the local hostname if (gethostname(hostname, sizeof(hostname)) == SOCKET_ERROR) { printf("Error : %d",WSAGetLastError()); return 1; } printf("\nHost name : %s \n",hostname); //Retrive the available IPs of the local host local = gethostbyname(hostname); printf("\nAvailable Network Interfaces : \n"); if (local == NULL) { printf("Error : %d.\n",WSAGetLastError()); return 1; } for (i = 0; local->h_addr_list[i] != 0; ++i) { memcpy(&addr, local->h_addr_list[i], sizeof(struct in_addr)); printf("Interface Number : %d Address : %s\n",i,inet_ntoa(addr)); } printf("Enter the interface number you would like to sniff : "); scanf("%d",&in); memset(&dest, 0, sizeof(dest)); memcpy(&dest.sin_addr.s_addr,local->h_addr_list[in],sizeof(dest.sin_addr.s_addr)); dest.sin_family = AF_INET; dest.sin_port = 0; printf("\nBinding socket to local system and port 0 ..."); if (bind(sniffer,(struct sockaddr *)&dest,sizeof(dest)) == SOCKET_ERROR) { printf("bind(%s) failed.\n", inet_ntoa(addr)); return 1; } printf("Binding successful"); //Enable this socket with the power to sniff : SIO_RCVALL is the key Receive ALL ;) j=1; printf("\nSetting socket to sniff..."); if (WSAIoctl(sniffer, SIO_RCVALL,&j, sizeof(j), 0, 0,(LPDWORD)&in,0, 0) == SOCKET_ERROR) { printf("WSAIoctl() failed.\n"); return 1; } printf("Socket set."); //Begin printf("\nStarted Sniffing\n"); printf("Packet Capture Statistics...\n"); StartSniffing(sniffer); //Happy Sniffing //End closesocket(sniffer); WSACleanup(); return 0; } void StartSniffing(SOCKET sniffer) { unsigned char *Buffer = ( unsigned char *)malloc(65536); //Its Big! int mangobyte; if (Buffer == NULL) { printf("malloc() failed.\n"); return; } do { mangobyte = recvfrom(sniffer,(char *)Buffer,65536,0,0,0); //Eat as much as u can if(mangobyte > 0) ProcessPacket(Buffer, mangobyte); else printf( "recvfrom() failed.\n"); } while (mangobyte > 0); free(Buffer); } void ProcessPacket(unsigned char* Buffer, int Size) { iphdr = (IPV4_HDR *)Buffer; ++total; switch (iphdr->ip_protocol) //Check the Protocol and do accordingly... { case 1: //ICMP Protocol ++icmp; //PrintIcmpPacket(Buffer,Size); break; case 2: //IGMP Protocol ++igmp; break; case 6: //TCP Protocol ++tcp; //PrintTcpPacket(Buffer,Size); break; case 17: //UDP Protocol ++udp; PrintUdpPacket(Buffer,Size); break; default: //Some Other Protocol like ARP etc. 
++others; break; } printf("TCP : %d UDP : %d ICMP : %d IGMP : %d Others : %d Total : %d\r",tcp,udp,icmp,igmp,others,total); } void PrintIpHeader (unsigned char* Buffer, int Size) { unsigned short iphdrlen; iphdr = (IPV4_HDR *)Buffer; iphdrlen = iphdr->ip_header_len*4; memset(&source, 0, sizeof(source)); source.sin_addr.s_addr = iphdr->ip_srcaddr; memset(&dest, 0, sizeof(dest)); dest.sin_addr.s_addr = iphdr->ip_destaddr; fprintf(logfile,"\n"); fprintf(logfile,"IP Header\n"); fprintf(logfile," |-IP Version : %d\n",(unsigned int)iphdr->ip_version); fprintf(logfile," |-IP Header Length : %d DWORDS or %d Bytes\n",(unsigned int)iphdr->ip_header_len); fprintf(logfile," |-Type Of Service : %d\n",(unsigned int)iphdr->ip_tos); fprintf(logfile," |-IP Total Length : %d Bytes(Size of Packet)\n",ntohs(iphdr->ip_total_length)); fprintf(logfile," |-Identification : %d\n",ntohs(iphdr->ip_id)); fprintf(logfile," |-Reserved ZERO Field : %d\n",(unsigned int)iphdr->ip_reserved_zero); fprintf(logfile," |-Dont Fragment Field : %d\n",(unsigned int)iphdr->ip_dont_fragment); fprintf(logfile," |-More Fragment Field : %d\n",(unsigned int)iphdr->ip_more_fragment); fprintf(logfile," |-TTL : %d\n",(unsigned int)iphdr->ip_ttl); fprintf(logfile," |-Protocol : %d\n",(unsigned int)iphdr->ip_protocol); fprintf(logfile," |-Checksum : %d\n",ntohs(iphdr->ip_checksum)); fprintf(logfile," |-Source IP : %s\n",inet_ntoa(source.sin_addr)); fprintf(logfile," |-Destination IP : %s\n",inet_ntoa(dest.sin_addr)); } void PrintUdpPacket(unsigned char *Buffer,int Size) { unsigned short iphdrlen; iphdr = (IPV4_HDR *)Buffer; iphdrlen = iphdr->ip_header_len*4; udpheader = (UDP_HDR *)(Buffer + iphdrlen); fprintf(logfile,"\n\n***********************UDP Packet*************************\n"); PrintIpHeader(Buffer,Size); fprintf(logfile,"\nUDP Header\n"); fprintf(logfile," |-Source Port : %d\n",ntohs(udpheader->source_port)); fprintf(logfile," |-Destination Port : %d\n",ntohs(udpheader->dest_port)); fprintf(logfile," |-UDP Length : %d\n",ntohs(udpheader->udp_length)); fprintf(logfile," |-UDP Checksum : %d\n",ntohs(udpheader->udp_checksum)); fprintf(logfile,"\n"); fprintf(logfile,"IP Header\n"); PrintData(Buffer,iphdrlen); fprintf(logfile,"UDP Header\n"); PrintData(Buffer+iphdrlen,sizeof(UDP_HDR)); fprintf(logfile,"Data Payload\n"); PrintData(Buffer+iphdrlen+sizeof(UDP_HDR) ,(Size - sizeof(UDP_HDR) - iphdr->ip_header_len*4)); fprintf(logfile,"\n###########################################################"); } void PrintData (unsigned char* data , int Size) { for(i=0 ; i < Size ; i++) { if( i!=0 && i%16==0) //if one line of hex printing is complete... 
{ fprintf(logfile," "); for(j=i-16 ; j<i ; j++) { if(data[j]>=32 && data[j]<=128) fprintf(logfile,"%c",(unsigned char)data[j]); //if its a number or alphabet else fprintf(logfile,"."); //otherwise print a dot } fprintf(logfile,"\n"); } if(i%16==0) fprintf(logfile," "); fprintf(logfile," %02X",(unsigned int)data[i]); if( i==Size-1) //print the last spaces { for(j=0;j<15-i%16;j++) fprintf(logfile," "); //extra spaces fprintf(logfile," "); for(j=i-i%16 ; j<=i ; j++) { if(data[j]>=32 && data[j]<=128) fprintf(logfile,"%c",(unsigned char)data[j]); else fprintf(logfile,"."); } fprintf(logfile,"\n"); } } } following are the errors Error 1 error LNK2019: unresolved external symbol __imp__WSACleanup@0 referenced in function _main sniffer.obj sniffer test Error 2 error LNK2019: unresolved external symbol __imp__closesocket@4 referenced in function _main sniffer.obj sniffer test Error 3 error LNK2019: unresolved external symbol __imp__WSAIoctl@36 referenced in function _main sniffer.obj sniffer test Error 4 error LNK2019: unresolved external symbol __imp__bind@12 referenced in function _main sniffer.obj sniffer test Error 5 error LNK2019: unresolved external symbol __imp__inet_ntoa@4 referenced in function _main sniffer.obj sniffer test Error 6 error LNK2019: unresolved external symbol __imp__gethostbyname@4 referenced in function _main sniffer.obj sniffer test Error 7 error LNK2019: unresolved external symbol __imp__WSAGetLastError@0 referenced in function _main sniffer.obj sniffer test Error 8 error LNK2019: unresolved external symbol __imp__gethostname@8 referenced in function _main sniffer.obj sniffer test Error 9 error LNK2019: unresolved external symbol __imp__socket@12 referenced in function _main sniffer.obj sniffer test Error 10 error LNK2019: unresolved external symbol __imp__WSAStartup@8 referenced in function _main sniffer.obj sniffer test Error 11 error LNK2019: unresolved external symbol __imp__recvfrom@24 referenced in function "void __cdecl StartSniffing(unsigned int)" (?StartSniffing@@YAXI@Z) sniffer.obj sniffer test Error 12 error LNK2019: unresolved external symbol __imp__ntohs@4 referenced in function "void __cdecl PrintIpHeader(unsigned char *,int)" (?PrintIpHeader@@YAXPAEH@Z) sniffer.obj sniffer test Error 13 fatal error LNK1120: 12 unresolved externals E:\CWM\sniffer test\Debug\sniffer test.exe sniffer test
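
    All of those LNK2019 "unresolved external symbol __imp__..." entries are Winsock functions (WSAStartup, socket, bind, recvfrom, ...), which usually just means the Winsock import library is not being linked rather than anything being wrong in the code. Assuming Visual C++, either add ws2_32.lib to the project's linker dependencies or request it from the source:

        #include <winsock2.h>

        // Tells the MSVC linker to pull in the Winsock import library, which
        // resolves __imp__WSAStartup@8, __imp__socket@12, __imp__bind@12, etc.
        #pragma comment(lib, "ws2_32.lib")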

    Read the article

  • How to install ac-R mode in emacs?

    - by David
    I have recently added the file ac-R.el to /usr/local/share/emacs/site-lisp, along with (require 'ac-R) to ~/.emacs Now, when I open emacs with --debug-init, I get the error Debugger entered--Lisp error: (void-variable ac-modes) add-to-list(ac-modes ess-mode) eval-buffer(#<buffer *load*<2>> nil "/usr/local/share/emacs/site-lisp/ac-R.el" nil t) ; Reading at buffer position 7191 load-with-code-conversion("/usr/local/share/emacs/site-lisp/ac-R.el" "/usr/local/share/emacs/site-lisp/ac-R.el" nil t) require(ac-R) eval-buffer(#<buffer *load*> nil "/home/dlebauer/.emacs" nil t) ; Reading at buffer position 3548 load-with-code-conversion("/home/dlebauer/.emacs" "/home/dlebauer/.emacs" t t) load("~/.emacs" t t) #[nil "\205\264 and when clicking on load-with-code-conversion, it says Can't find library /usr/share/emacs/23.1.50/lisp/international/mule.el even though I have installed mule via synaptic (I am using Ubuntu 10.04) How can I get the mule library in the right place?

    Read the article
