Search Results

Search found 3136 results on 126 pages for 'buffer overrun'.

Page 37/126 | < Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >

  • Is this a dual monitor reset bug?

    - by Tentresh
    My two displays are: Intel GMA x4500 Laptop (1280x800 native resolution of the built-in display) External display (1920x1080) A few minutes after I login to my dual monitor setup, it gets reset to mirror screens. If I restore the settings via the displays application, everything is fine. On each reset, the following messages are written into /var/log/Xorg.0.log: [ 60.852] (II) PM Event received: Capability Changed [ 60.852] I830PMEvent: Capability change [ 132.920] (II) intel(0): EDID vendor "SEC", prod id 12869 [ 132.920] (II) intel(0): Printing DDC gathered Modelines: [ 132.920] (II) intel(0): Modeline "1280x800"x0.0 68.94 1280 1296 1344 1408 800 801 804 816 -hsync -vsync (49.0 kHz) [ 134.228] (II) intel(0): Allocated new frame buffer 1280x800 stride 5120, tiled Whereas right on startup or manual resolution reset, /var/log/Xorg.0.log reports the expected frame buffer allocation: [ 1562.382] (II) intel(0): EDID vendor "SEC", prod id 12869 [ 1562.382] (II) intel(0): Printing DDC gathered Modelines: [ 1562.382] (II) intel(0): Modeline "1280x800"x0.0 68.94 1280 1296 1344 1408 800 801 804 816 -hsync -vsync (49.0 kHz) [ 1576.740] (II) intel(0): Allocated new frame buffer 3200x1080 stride 12800, tiled Is Ubuntu 12.04 not compatible with my video card? Can this be solved within Ubuntu? I like its interface, but manually fiddling with resolution on every login is not bearable.

    Read the article

  • Per-vertex animation with VBOs: Stream each frame or use index offset per frame?

    - by charstar
    Scenario: Meshes are animated using either skeletons (skinned animation) or some form of morph targets (i.e. per-vertex key frames). However, in either case, the animations are known in full at load time; that is, there is no physics, IK solving, or any other form of in-game pose solving. The number of character actions (animations) will be limited but rich (hand-animated). There may be multiple characters using each mesh and its animations simultaneously in-game (they will be at different poses/keyframes at the same time). Assume color and texture coordinate buffers are static. Goal: To leverage the richness of well-vetted animation tools such as Blender to do the heavy lifting for a small but rich set of animations. I am aware of additive pose blending like that from Naughty Dog and similar techniques, but I would prefer to expend a little RAM/VRAM to avoid implementing a thesis-ready pose solver. I would also like to avoid implementing a key-frame + interpolation curve solver (reinventing Blender vertex groups and IPOs). Current Considerations: 1. Much like a non-shader-powered pose solver, create a VBO for each character and copy vertex and normal data to each VBO on each frame (VBO in STREAMING). 2. Create one VBO for each animation where each frame (interleaved vertex and normal data) is concatenated onto the VBO. Each character then simply has a buffer pointer offset based on its current animation frame (e.g. pointer offset = (numVertices+numNormals)*frameNumber). (VBO in STATIC) Known Trade-Offs: In 1 above, each VBO would be small, but there would be many VBOs and therefore lots of buffer binding and vertex copying each frame; both client and pipeline intensive. In 2 above, there would be few VBOs and therefore insignificant buffer binding and no vertex data getting jammed down the pipe each frame, but each VBO would be quite large. Are there any pitfalls to number 2 (aside from finite memory)? Are there other methods that I am missing?
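
    As an illustration of option 2 only (this is not the asker's code; Java with LWJGL 2 is assumed since no platform is named, and FrameOffsetAnimator and its sizes are hypothetical), each character can draw from its own byte offset into one static, interleaved VBO:

        import org.lwjgl.opengl.GL11;
        import org.lwjgl.opengl.GL15;

        class FrameOffsetAnimator {
            static final int FLOATS_PER_VERTEX = 6;              // 3 position + 3 normal, interleaved
            static final int BYTES_PER_VERTEX  = FLOATS_PER_VERTEX * 4;

            final int vboId;                                      // holds every key frame, back to back
            final int verticesPerFrame;

            FrameOffsetAnimator(int vboId, int verticesPerFrame) {
                this.vboId = vboId;
                this.verticesPerFrame = verticesPerFrame;
            }

            // Draw one character at a given key frame by offsetting into the static VBO.
            void draw(int frame) {
                long byteOffset = (long) frame * verticesPerFrame * BYTES_PER_VERTEX;
                GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vboId);
                GL11.glEnableClientState(GL11.GL_VERTEX_ARRAY);
                GL11.glEnableClientState(GL11.GL_NORMAL_ARRAY);
                GL11.glVertexPointer(3, GL11.GL_FLOAT, BYTES_PER_VERTEX, byteOffset);
                GL11.glNormalPointer(GL11.GL_FLOAT, BYTES_PER_VERTEX, byteOffset + 3 * 4);
                GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, verticesPerFrame);
                GL11.glDisableClientState(GL11.GL_NORMAL_ARRAY);
                GL11.glDisableClientState(GL11.GL_VERTEX_ARRAY);
            }
        }

    The bind-once, offset-per-character pattern is what keeps option 2 cheap: the only per-character cost is the pointer setup and the draw call.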

    Read the article

  • Draw contour around object in OpenGL

    - by Maciekp
    I need to draw a contour around 2D objects in 3D space. I tried drawing lines around the object (plus points to fill the gaps), but due to the line width, part of it (~50%) was covering the object. I tried to use the stencil buffer to eliminate this problem, but I got something like this (the contour is green): http://goo.gl/OI5uc (sorry, I can't post images due to my reputation). You can see (where the arrow points) that some parts of the line are behind the object and some are in front. This changes when I move the camera, but there is always some part covering it. Here is the code I use for drawing the object: glColorMask(1,1,1,1); std::list<CObjectOnScene*>::iterator objIter=ptr->objects.begin(),objEnd=ptr->objects.end(); int countStencilBit=1; while(objIter!=objEnd) { glColorMask(1,1,1,1); glStencilFunc(GL_ALWAYS,countStencilBit,countStencilBit); glStencilOp(GL_REPLACE,GL_KEEP,GL_REPLACE ); (*objIter)->DrawYourVertices(); glStencilFunc(GL_NOTEQUAL,countStencilBit,countStencilBit); glStencilOp(GL_KEEP,GL_KEEP,GL_REPLACE); (*objIter)->DrawYourBorder(); ++objIter; ++countStencilBit; } I've tried different settings of the stencil buffer, but I always got something like that. My questions: 1. Am I setting the stencil buffer up wrong? 2. Are there any other simple ways to create a contour on such objects? Thanks in advance. EDIT: 1. I don't have the objects' normals. 2. The objects can be concave. 3. I can't use shaders (see below why).

    Read the article

  • Is using a dedicated thread just for sending gpu commands a good idea?

    - by tigrou
    The most basic game loop is like this: while(1) { update(); draw(); swapbuffers(); } This is very simple but has a problem: some drawing commands can block, and the CPU will wait when it could be doing other things (like processing the next update() call). Another possible solution I have in mind would be to use two threads: one for updating and preparing commands to be sent to the GPU, and one for sending these commands to the GPU: //first thread while(1) { update(); render(); // use gamestate to generate all needed triangles and commands for gpu // put them in a buffer, no command is sent to gpu // two buffers will be used, see below pulse(); //signal the other thread data is ready } //second thread while(1) { wait(); // wait for the first thread's data to arrive send_data_togpu(); // send prepared commands from buffer to graphic card swapbuffers(); } Also, two buffers would be used, so one buffer can be filled with GPU commands while the other is being processed by the GPU. Do you think such a solution would be effective? What would be the advantages and disadvantages of such a solution, especially compared to a simpler one (e.g. single-threaded with triple buffering enabled)?
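
    As an illustration of the hand-off between the two loops (the post's pseudocode is language-neutral; Java is assumed here, and submitToGpu()/swapBuffers() are hypothetical stand-ins for the real graphics calls), java.util.concurrent.Exchanger gives exactly the two-buffer swap described: the update thread fills one command buffer while the submit thread drains the other, and they trade buffers once per frame.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.Exchanger;

        public class TwoThreadLoop {
            private final Exchanger<List<String>> exchanger = new Exchanger<>();

            void updateThread() throws InterruptedException {
                List<String> commands = new ArrayList<>();
                while (true) {
                    commands.clear();
                    commands.add("draw triangle batch");          // update() + render(): fill this frame's commands
                    commands = exchanger.exchange(commands);      // swap the full buffer for the drained one
                }
            }

            void submitThread() throws InterruptedException {
                List<String> commands = new ArrayList<>();
                while (true) {
                    commands = exchanger.exchange(commands);      // block until a frame is ready, then swap
                    for (String cmd : commands) {
                        submitToGpu(cmd);                         // stand-in for the real GPU submission
                    }
                    swapBuffers();                                // stand-in for the real present call
                    commands.clear();
                }
            }

            private void submitToGpu(String cmd) { /* placeholder */ }
            private void swapBuffers() { /* placeholder */ }
        }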

    Read the article

  • Matrix loading problems with jbullet and lwjgl

    - by Quintin
    The following code does not load the matrix correctly from jbullet. //box is a RigidBody Transform trans = new Transform(); trans = box.getMotionState().getWorldTransform(trans); float[] matrix = new float[16]; trans.getOpenGLMatrix(matrix); // pass that matrix to OpenGL and render the cube FloatBuffer buffer = ByteBuffer.allocateDirect(4*16).asFloatBuffer().put(matrix); buffer.rewind(); glPushMatrix(); glMultMatrix(buffer); glBegin(GL_POINTS); glVertex3f(0,0,0); glEnd(); glPopMatrix(); the jbullet is configured as so: CollisionConfiguration = new DefaultCollisionConfiguration(); dispatcher = new CollisionDispatcher(collisionConfiguration); Vector3f worldAabbMin = new Vector3f(-10000,-10000,-10000); Vector3f worldAabbMax = new Vector3f(10000,10000,10000); AxisSweep3 overlappingPairCache = new AxisSweep3(worldAabbMin, worldAabbMax); SequentialImpulseConstraintSolver solver = new SequentialImpulseConstraintSolver(); dynamicWorld = new DiscreteDynamicsWorld(dispatcher, overlappingPairCache, solver, collisionConfiguration); dynamicWorld.setGravity(new Vector3f(0,-10,0)); dynamicWorld.getDispatchInfo().allowedCcdPenetration = 0f; CollisionShape groundShape = new BoxShape(new Vector3f(1000.f, 50.f, 1000.f)); Transform groundTransform = new Transform(); groundTransform.setIdentity(); groundTransform.origin.set(new Vector3f(0.f, -60.f, 0.f)); float mass = 0f; Vector3f localInertia = new Vector3f(0, 0, 0); DefaultMotionState myMotionState = new DefaultMotionState(groundTransform); RigidBodyConstructionInfo rbInfo = new RigidBodyConstructionInfo(mass, myMotionState, groundShape, localInertia); RigidBody body = new RigidBody(rbInfo); dynamicWorld.addRigidBody(body); dynamicWorld.clearForces(); Nothing is rendered on the screen. What am I doing wrong?
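
    Two things commonly cause symptoms like this, although neither can be confirmed from the excerpt alone: ByteBuffer.allocateDirect() defaults to big-endian byte order while LWJGL expects native order (BufferUtils.createFloatBuffer(16) takes care of that), and dynamicWorld.stepSimulation(...) has to be called each frame or the motion state never changes. A minimal sketch of the buffer part, with a hypothetical helper name:

        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;
        import java.nio.FloatBuffer;

        public final class MatrixBuffers {
            // Wrap a 4x4 column-major matrix in a direct, native-order FloatBuffer for glMultMatrix.
            public static FloatBuffer toDirectBuffer(float[] matrix) {
                FloatBuffer buffer = ByteBuffer.allocateDirect(4 * 16)
                        .order(ByteOrder.nativeOrder())           // the line missing from the snippet above
                        .asFloatBuffer();
                buffer.put(matrix);
                buffer.flip();                                    // same effect as rewind() for an exact-size buffer
                return buffer;
            }
        }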

    Read the article

  • Ubuntu 12.04 dual monitor reset bug

    - by Tentresh
    My two displays are: Intel GMA x4500 Laptop (1280x800 native resolution of the built-in display) External display (1920x1080) A few minutes after I log in to my dual monitor setup, it gets reset to mirrored screens. If I restore the settings via the Displays application, everything is fine. On each reset, the following messages are written into /var/log/Xorg.0.log: [ 60.852] (II) PM Event received: Capability Changed [ 60.852] I830PMEvent: Capability change [ 132.920] (II) intel(0): EDID vendor "SEC", prod id 12869 [ 132.920] (II) intel(0): Printing DDC gathered Modelines: [ 132.920] (II) intel(0): Modeline "1280x800"x0.0 68.94 1280 1296 1344 1408 800 801 804 816 -hsync -vsync (49.0 kHz) [ 134.228] (II) intel(0): Allocated new frame buffer 1280x800 stride 5120, tiled Whereas right on startup or after a manual resolution reset, /var/log/Xorg.0.log reports the expected frame buffer allocation: [ 1562.382] (II) intel(0): EDID vendor "SEC", prod id 12869 [ 1562.382] (II) intel(0): Printing DDC gathered Modelines: [ 1562.382] (II) intel(0): Modeline "1280x800"x0.0 68.94 1280 1296 1344 1408 800 801 804 816 -hsync -vsync (49.0 kHz) [ 1576.740] (II) intel(0): Allocated new frame buffer 3200x1080 stride 12800, tiled Is Ubuntu 12.04 not compatible with my video card? Can this be solved within Ubuntu? I like its interface, but manually fiddling with the resolution on every login is not bearable.

    Read the article

  • Sprite batching seems slow

    - by Dekowta
    I have implemented a sprite batching system in OpenGL which batches sprites based on their texture. However, when I'm rendering ~5000 sprites all using the same texture I'm getting roughly 30fps. The process is as follows: create the sprite batch, which also creates a VBO with a set size and creates the shaders; call begin and initialise the render mode (at the moment just enabling alpha); call Draw with a sprite, which checks whether the sprite's texture has already been loaded, and if so just gets a pointer to that batch item and adds the new sprite coords, otherwise creates a new batch item, adds the sprite coords to it, and adds the batch item to the main batch; if the max sprite count is reached, render is called; call end, which calls render to draw the leftover sprites in the batch and also resets the buffer offset. Render loops through each item in the batch, binds the texture of the batch item, maps the data to the buffer, and then draws the array; the buffer is then offset by the number of sprites drawn. I have a feeling it could be the method I'm using to store the batched sprites, or it could be something else I'm missing, but I still can't work it out. The cpp and h files are here: http://pastebin.com/ZAytErGB http://pastebin.com/iCB608tA On top of this I'm also getting a weird issue where, when two sprites are batched one after the other, the second sprite uses the same coordinates as the first; when only one is drawn afterwards it is fine. I can't seem to find what is causing this issue. Any help would be appreciated; I've been sitting here trying to work this out for a while now and can't put my finger on what's causing it.

    Read the article

  • reading the file name from user input in MIPS assembly

    - by Hassan Al-Jeshi
    I'm writing a MIPS assembly code that will ask the user for the file name and it will produce some statistics about the content of the file. However, when I hard code the file name into a variable from the beginning it works just fine, but when I ask the user to input the file name it does not work. after some debugging, I have discovered that the program adds 0x00 char and 0x0a char (check asciitable.com) at the end of user input in the memory and that's why it does not open the file based on the user input. anyone has any idea about how to get rid of those extra chars, or how to open the file after getting its name from the user?? here is my complete code (it is working fine except for the file name from user thing, and anybody is free to use it for any purpose he/she wants to): .data fin: .ascii "" # filename for input msg0: .asciiz "aaaa" msg1: .asciiz "Please enter the input file name:" msg2: .asciiz "Number of Uppercase Char: " msg3: .asciiz "Number of Lowercase Char: " msg4: .asciiz "Number of Decimal Char: " msg5: .asciiz "Number of Words: " nline: .asciiz "\n" buffer: .asciiz "" .text #----------------------- li $v0, 4 la $a0, msg1 syscall li $v0, 8 la $a0, fin li $a1, 21 syscall jal fileRead #read from file move $s1, $v0 #$t0 = total number of bytes li $t0, 0 # Loop counter li $t1, 0 # Uppercase counter li $t2, 0 # Lowercase counter li $t3, 0 # Decimal counter li $t4, 0 # Words counter loop: bge $t0, $s1, end #if end of file reached OR if there is an error in the file lb $t5, buffer($t0) #load next byte from file jal checkUpper #check for upper case jal checkLower #check for lower case jal checkDecimal #check for decimal jal checkWord #check for words addi $t0, $t0, 1 #increment loop counter j loop end: jal output jal fileClose li $v0, 10 syscall fileRead: # Open file for reading li $v0, 13 # system call for open file la $a0, fin # input file name li $a1, 0 # flag for reading li $a2, 0 # mode is ignored syscall # open a file move $s0, $v0 # save the file descriptor # reading from file just opened li $v0, 14 # system call for reading from file move $a0, $s0 # file descriptor la $a1, buffer # address of buffer from which to read li $a2, 100000 # hardcoded buffer length syscall # read from file jr $ra output: li $v0, 4 la $a0, msg2 syscall li $v0, 1 move $a0, $t1 syscall li $v0, 4 la $a0, nline syscall li $v0, 4 la $a0, msg3 syscall li $v0, 1 move $a0, $t2 syscall li $v0, 4 la $a0, nline syscall li $v0, 4 la $a0, msg4 syscall li $v0, 1 move $a0, $t3 syscall li $v0, 4 la $a0, nline syscall li $v0, 4 la $a0, msg5 syscall addi $t4, $t4, 1 li $v0, 1 move $a0, $t4 syscall jr $ra checkUpper: blt $t5, 0x41, L1 #branch if less than 'A' bgt $t5, 0x5a, L1 #branch if greater than 'Z' addi $t1, $t1, 1 #increment Uppercase counter L1: jr $ra checkLower: blt $t5, 0x61, L2 #branch if less than 'a' bgt $t5, 0x7a, L2 #branch if greater than 'z' addi $t2, $t2, 1 #increment Lowercase counter L2: jr $ra checkDecimal: blt $t5, 0x30, L3 #branch if less than '0' bgt $t5, 0x39, L3 #branch if greater than '9' addi $t3, $t3, 1 #increment Decimal counter L3: jr $ra checkWord: bne $t5, 0x20, L4 #branch if 'space' addi $t4, $t4, 1 #increment words counter L4: jr $ra fileClose: # Close the file li $v0, 16 # system call for close file move $a0, $s0 # file descriptor to close syscall # close file jr $ra Note: I'm using MARS Simulator, if that makes any different

    Read the article

  • Access Violation Using memcpy or Assignment to an Array in a Struct

    - by Synetech inc.
    Hi, I wrote a program last night that worked just fine but when I refactored it today to make it more extensible, I ended up with a problem. The original version had a hard-coded array of bytes. After some processing, some bytes were written into the array and then some more processing was done. To avoid hard-coding the pattern, I put the array in a structure so that I could add some related data and create an array of them. However now, I cannot write to the array in the structure. Here’s a pseudo-code example: main() { char pattern[]="\x32\x33\x12\x13\xba\xbb"; PrintData(pattern); pattern[2]='\x65'; PrintData(pattern); } That one works but this one does not: struct ENTRY { char* pattern; int somenum; }; main() { ENTRY Entries[] = { {"\x32\x33\x12\x13\xba\xbb\x9a\xbc", 44} , {"\x12\x34\x56\x78", 555} }; PrintData(Entries[0].pattern); Entries[0].pattern[2]='\x65'; //0xC0000005 exception!!! :( PrintData(Entries[0].pattern); } The second version causes an access violation exception on the assignment. I’m sure it’s because the second version allocates memory differently, but I’m starting to get a headache trying to figure out what’s what or how to get fix this. (I’m currently working around it by dynamically allocating a buffer of the same size as the pattern array, copying the pattern to the new buffer, making the changes to the buffer, using the buffer in the place of the pattern array, and then trying to remember to free the—temporary—buffer.) (Specifically, the original version cast the pattern array—+offset—to a DWORD* and assigned a DWORD constant to it to overwrite the four target bytes. The new version cannot do that since the length of the source is unknown—may not be four bytes—so it uses memcpy instead. I’ve checked and re-checked and have made sure that the pointers to memcpy are correct, but I still get an access violation. I use memcpy instead of str(n)cpy because I am using plain chars (as an array of bytes), not Unicode chars and ignoring the null-terminator. Using an assignment as above causes the same problem.) Any ideas? Thanks a lot.

    Read the article

  • Android - Getting audio to play through earpiece

    - by Donal Rafferty
    I currently have code that reads a recording in from the devices mic using the AudioRecord class and then playing it back out using the AudioTrack class. My problem is that when I play it out it plays vis the speaker phone. I want it to play out via the ear piece on the device. Here is my code: public class LoopProg extends Activity { boolean isRecording; //currently not used AudioManager am; int count = 0; /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); am = (AudioManager) getSystemService(Context.AUDIO_SERVICE); am.setMicrophoneMute(true); while(count <= 1000000){ Record record = new Record(); record.run(); count ++; Log.d("COUNT", "Count is : " + count); } } public class Record extends Thread { static final int bufferSize = 200000; final short[] buffer = new short[bufferSize]; short[] readBuffer = new short[bufferSize]; public void run() { isRecording = true; android.os.Process.setThreadPriority (android.os.Process.THREAD_PRIORITY_URGENT_AUDIO); int buffersize = AudioRecord.getMinBufferSize(11025, AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT); AudioRecord arec = new AudioRecord(MediaRecorder.AudioSource.MIC, 11025, AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT, buffersize); AudioTrack atrack = new AudioTrack(AudioManager.STREAM_MUSIC, 11025, AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT, buffersize, AudioTrack.MODE_STREAM); am.setRouting(AudioManager.MODE_NORMAL,1, AudioManager.STREAM_MUSIC); int ok = am.getRouting(AudioManager.ROUTE_EARPIECE); Log.d("ROUTING", "getRouting = " + ok); setVolumeControlStream(AudioManager.STREAM_VOICE_CALL); //am.setSpeakerphoneOn(true); Log.d("SPEAKERPHONE", "Is speakerphone on? : " + am.isSpeakerphoneOn()); am.setSpeakerphoneOn(false); Log.d("SPEAKERPHONE", "Is speakerphone on? : " + am.isSpeakerphoneOn()); atrack.setPlaybackRate(11025); byte[] buffer = new byte[buffersize]; arec.startRecording(); atrack.play(); while(isRecording) { arec.read(buffer, 0, buffersize); atrack.write(buffer, 0, buffer.length); } arec.stop(); atrack.stop(); isRecording = false; } } } As you can see if the code I have tried using the AudioManager class and its methods including the deprecated setRouting method and nothing works, the setSpeatPoneOn method seems to have no effect at all, neither does the routing method. Has anyone got any ideas on how to get it to play via the earpiece instead of the spaker phone?
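
    For devices of that era, the routing that usually ends up on the earpiece is an in-call audio mode with the speakerphone off and an AudioTrack built on STREAM_VOICE_CALL; the deprecated setRouting() call is generally ignored. A sketch under those assumptions (not a verified fix, and setMode() normally needs the MODIFY_AUDIO_SETTINGS permission):

        import android.app.Activity;
        import android.content.Context;
        import android.media.AudioFormat;
        import android.media.AudioManager;
        import android.media.AudioTrack;

        public class EarpiecePlayback extends Activity {
            // Build an AudioTrack that should route to the earpiece rather than the loudspeaker.
            AudioTrack createEarpieceTrack() {
                AudioManager am = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
                am.setMode(AudioManager.MODE_IN_CALL);            // in-call mode: the earpiece is the default route
                am.setSpeakerphoneOn(false);                      // keep the loudspeaker off

                int bufferSize = AudioTrack.getMinBufferSize(11025,
                        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);

                return new AudioTrack(
                        AudioManager.STREAM_VOICE_CALL,           // voice-call stream follows the in-call routing
                        11025,
                        AudioFormat.CHANNEL_OUT_MONO,
                        AudioFormat.ENCODING_PCM_16BIT,
                        bufferSize,
                        AudioTrack.MODE_STREAM);
            }
        }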

    Read the article

  • How to send audio stream via UDP in java?

    - by Nob Venoda
    Hi to all :) I have a problem, i have set MediaLocator to microphone input, and then created Player. I need to grab that sound from the microphone, encode it to some lower quality stream, and send it as a datagram packet via UDP. Here's the code, i found most of it online and adapted it to my app: public class AudioSender extends Thread { private MediaLocator ml = new MediaLocator("javasound://44100"); private DatagramSocket socket; private boolean transmitting; private Player player; TargetDataLine mic; byte[] buffer; private AudioFormat format; private DatagramSocket datagramSocket(){ try { return new DatagramSocket(); } catch (SocketException ex) { return null; } } private void startMic() { try { format = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED, 8000.0F, 16, 2, 4, 8000.0F, true); DataLine.Info info = new DataLine.Info(TargetDataLine.class, format); mic = (TargetDataLine) AudioSystem.getLine(info); mic.open(format); mic.start(); buffer = new byte[1024]; } catch (LineUnavailableException ex) { Logger.getLogger(AudioSender.class.getName()).log(Level.SEVERE, null, ex); } } private Player createPlayer() { try { return Manager.createRealizedPlayer(ml); } catch (IOException ex) { return null; } catch (NoPlayerException ex) { return null; } catch (CannotRealizeException ex) { return null; } } private void send() { try { mic.read(buffer, 0, 1024); DatagramPacket packet = new DatagramPacket( buffer, buffer.length, InetAddress.getByName(Util.getRemoteIP()), 91); socket.send(packet); } catch (IOException ex) { Logger.getLogger(AudioSender.class.getName()).log(Level.SEVERE, null, ex); } } @Override public void run() { player = createPlayer(); player.start(); socket = datagramSocket(); transmitting = true; startMic(); while (transmitting) { send(); } } public static void main(String[] args) { AudioSender as = new AudioSender(); as.start(); } } And only thing that happens when I run the receiver class, is me hearing this Player from the sender class. And I cant seem to see the connection between TargetDataLine and Player. Basically, I need to get the sound form player, and somehow convert it to bytes[], therefore I can sent it as datagram. Any ideas? Everything is acceptable, as long as it works :)
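
    One way to send microphone audio over UDP without involving the JMF Player at all is to read raw PCM straight from the TargetDataLine and wrap it in datagrams; the Player is only needed for local playback. A minimal sketch (the address, port and format are placeholders, not taken from the post):

        import java.net.DatagramPacket;
        import java.net.DatagramSocket;
        import java.net.InetAddress;
        import javax.sound.sampled.AudioFormat;
        import javax.sound.sampled.AudioSystem;
        import javax.sound.sampled.DataLine;
        import javax.sound.sampled.TargetDataLine;

        public class MicSender {
            public static void main(String[] args) throws Exception {
                // 8 kHz, 16-bit, mono, little-endian keeps each packet small.
                AudioFormat format = new AudioFormat(8000.0f, 16, 1, true, false);
                DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
                TargetDataLine mic = (TargetDataLine) AudioSystem.getLine(info);
                mic.open(format);
                mic.start();

                DatagramSocket socket = new DatagramSocket();
                InetAddress receiver = InetAddress.getByName("192.168.0.10");       // placeholder address
                byte[] buffer = new byte[1024];

                while (true) {
                    int read = mic.read(buffer, 0, buffer.length);                  // raw PCM from the mic
                    if (read <= 0) break;
                    socket.send(new DatagramPacket(buffer, read, receiver, 9100));  // placeholder port
                }
                mic.close();
                socket.close();
            }
        }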

    Read the article

  • Synchronization requirements for FileStream.(Begin/End)(Read/Write)

    - by Doug McClean
    Is the following pattern of multi-threaded calls acceptable to a .Net FileStream? Several threads calling a method like this: ulong offset = whatever; // different for each thread byte[] buffer = new byte[8192]; object state = someState; // unique for each call, hence also for each thread lock(theFile) { theFile.Seek(whatever, SeekOrigin.Begin); IAsyncResult result = theFile.BeginRead(buffer, 0, 8192, AcceptResults, state); } if(result.CompletedSynchronously) { // is it required for us to call AcceptResults ourselves in this case? // or did BeginRead already call it for us, on this thread or another? } Where AcceptResults is: void AcceptResults(IAsyncResult result) { lock(theFile) { int bytesRead = theFile.EndRead(result); // if we guarantee that the offset of the original call was at least 8192 bytes from // the end of the file, and thus all 8192 bytes exist, can the FileStream read still // actually read fewer bytes than that? // either: if(bytesRead != 8192) { Panic("Page read borked"); } // or: // issue a new call to begin read, moving the offsets into the FileStream and // the buffer, and decreasing the requested size of the read to whatever remains of the buffer } } I'm confused because the documentation seems unclear to me. For example, the FileStream class says: Any public static members of this type are thread safe. Any instance members are not guaranteed to be thread safe. But the documentation for BeginRead seems to contemplate having multiple read requests in flight: Multiple simultaneous asynchronous requests render the request completion order uncertain. Are multiple reads permitted to be in flight or not? Writes? Is this the appropriate way to secure the location of the Position of the stream between the call to Seek and the call to BeginRead? Or does that lock need to be held all the way to EndRead, hence only one read or write in flight at a time? I understand that the callback will occur on a different thread, and my handling of state, buffer handle that in a way that would permit multiple in flight reads. Further, does anyone know where in the documentation to find the answers to these questions? Or an article written by someone in the know? I've been searching and can't find anything. Relevant documentation: FileStream class Seek method BeginRead method EndRead IAsyncResult interface

    Read the article

  • C# SerialPort - Problems mixing ports with different baud rates.

    - by GrandAdmiral
    Greetings, I have two devices that I would like to connect over a serial interface, but they have incompatible connections. To get around this problem, I connected them both to my PC and I'm working on a C# program that will route traffic on COM port X to COM port Y and vice versa. The program connects to two COM ports. In the data received event handler, I read in incoming data and write it to the other COM port. To do this, I have the following code: private void HandleDataReceived(SerialPort inPort, SerialPort outPort) { byte[] data = new byte[1]; while (inPort.BytesToRead > 0) { // Read the data data[0] = (byte)inPort.ReadByte(); // Write the data if (outPort.IsOpen) { outPort.Write(data, 0, 1); } } } That code worked fine as long as the outgoing COM port operated at a higher baud rate than the incoming COM port. If the incoming COM port was faster than the outgoing COM port, I started missing data. I had to correct the code like this: private void HandleDataReceived(SerialPort inPort, SerialPort outPort) { byte[] data = new byte[1]; while (inPort.BytesToRead > 0) { // Read the data data[0] = (byte)inPort.ReadByte(); // Write the data if (outPort.IsOpen) { outPort.Write(data, 0, 1); while (outPort.BytesToWrite > 0); //<-- Change to fix problem } } } I don't understand why I need that fix. I'm new to C# (this is my first program), so I'm wondering if there is something I am missing. The SerialPort defaults to a 2048 byte write buffer and my commands are less than ten bytes. The write buffer should have the ability to buffer the data until it can be written to a slower COM port. In summary, I'm receiving data on COM X and writing the data to COM Y. COM X is connected at a faster baud rate than COM Y. Why doesn't the buffering in the write buffer handle this difference? Why does it seem that I need to wait for the write buffer to drain to avoid losing data? Thanks!

    Read the article

  • Some questions about writing on ASP.NET response stream

    - by vtortola
    Hi, I'm making tests with ASP.NET HttpHandler for download a file writting directly on the response stream, and I'm not pretty sure about the way I'm doing it. This is a example method, in the future the file could be stored in a BLOB in the database: public void GetFile(HttpResponse response) { String fileName = "example.iso"; response.ClearHeaders(); response.ClearContent(); response.ContentType = "application/octet-stream"; response.AppendHeader("Content-Disposition", "attachment; filename=" + fileName); using (FileStream fs = new FileStream(Path.Combine(HttpContext.Current.Server.MapPath("~/App_Data"), fileName), FileMode.Open)) { Byte[] buffer = new Byte[4096]; Int32 readed = 0; while ((readed = fs.Read(buffer, 0, buffer.Length)) > 0) { response.OutputStream.Write(buffer, 0, readed); response.Flush(); } } } But, I'm not sure if this is correct or there is a better way to do it. My questions are: When I open the url with the browser, appears the "Save File" dialog... but it seems like the server has started already to push data into the stream before I click "Save", is that normal? If I remove the line"response.Flush()", when I open the url with the browser, ... I see how the web server is pushing data but the "Save File" dialog doesn't come up, (or at least not in a reasonable time fashion) why? When I open the url with a WebRequest object, I see that the HttpResponse.ContentLength is "-1", although I can read the stream and get the file. What is the meaning of -1? When is HttpResponse.ContentLength going to show the length of the response? For example, I have a method that retrieves a big xml compresed with deflate as a binary stream, but in that case... when I access it with a WebRequest, in the HttpResponse I can actually see the ContentLength with the length of the stream, why? What is the optimal length for the Byte[] array that I use as buffer for optimal performance in a web server? I've read that is between 4K and 8K... but which factors should I consider to make the correct decision. Does this method bloat the IIS or client memory usage? or is it actually buffering the transference correctly? Sorry for so many questions, I'm pretty new in web development :P Cheers.

    Read the article

  • How to quickly acquire and process real time screen output

    - by Akusete
    I am trying to write a program to play a full screen PC game for fun (as an experiment in Computer Vision and Artificial Intelligence). For this experiment I am assuming the game has no underlying API for AI players (nor is the source available) so I intend to process the visual information rendered by the game on the screen. The game runs in full screen mode on a win32 system (direct-X I assume). Currently I am using the win32 functions #include <windows.h> #include <cvaux.h> class Screen { public: HWND windowHandle; HDC windowContext; HBITMAP buffer; HDC bufferContext; CvSize size; uchar* bytes; int channels; Screen () { windowHandle = GetDesktopWindow(); windowContext = GetWindowDC (windowHandle); size = cvSize (GetDeviceCaps (windowContext, HORZRES), GetDeviceCaps (windowContext, VERTRES)); buffer = CreateCompatibleBitmap (windowContext, size.width, size.height); bufferContext = CreateCompatibleDC (windowContext); SelectObject (bufferContext, buffer); channels = 4; bytes = new uchar[size.width * size.height * channels]; } ~Screen () { ReleaseDC(windowHandle, windowContext); DeleteDC(bufferContext); DeleteObject(buffer); delete[] bytes; } void CaptureScreen (IplImage* img) { BitBlt(bufferContext, 0, 0, size.width, size.height, windowContext, 0, 0, SRCCOPY); int n = size.width * size.height; int imgChannels = img->nChannels; GetBitmapBits (buffer, n * channels, bytes); uchar* src = bytes; uchar* dest = (uchar*) img->imageData; uchar* end = dest + n * imgChannels; while (dest < end) { dest[0] = src[0]; dest[1] = src[1]; dest[2] = src[2]; dest += imgChannels; src += channels; } } The rate at which I can process frames using this approach is much to slow. Is there a better way to acquire screen frames?

    Read the article

  • Workflows not starting after fresh install

    - by Greg McGuffey
    I just installed Dynamics CRM 4.0. It is working nicely except for workflows. They won't start. I turned on tracing and it appears that there is an IO error. The server is setup with IFD and SSL. No issues accessing it internally or externally. Here is the trace: # CRM Tracing Version 2.0 # LocalTime: 2010-06-08 11:34:58.2 # Categories: # CallStackOn: No # ComputerName: FOX-CRM1 # CRMVersion: 4.0.7333.2741 # DeploymentType: OnPremise # ScaleGroup: # ServerRole: AppServer, AsyncService, DiscoveryService, WebService, ApiServer, HelpServer, DeploymentService [2010-06-08 11:34:58.2] Process:CrmAsyncService |Organization:821a137e-7191-49a4-86cc-69101e2b6d20 |Thread: 24 |Category: Platform.Async |User: 00000000-0000-0000-0000-000000000000 |Level: Error | AsyncOperationCommand.Execute >Exception while trying to execute AsyncOperationId: {DF68F483-2C73-DF11-9A34-18A9053B7B38} AsyncOperationType: 1 - System.Net.WebException: The underlying connection was closed: An unexpected error occurred on a send. ---> System.IO.IOException: The handshake failed due to an unexpected packet format. at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.ForceAuthentication(Boolean receiveFirst, Byte[] buffer, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.ProcessAuthentication(LazyAsyncResult lazyResult) at System.Net.TlsStream.CallProcessAuthentication(Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Net.TlsStream.ProcessAuthentication(LazyAsyncResult result) at System.Net.TlsStream.Write(Byte[] buffer, Int32 offset, Int32 size) at System.Net.PooledStream.Write(Byte[] buffer, Int32 offset, Int32 size) at System.Net.ConnectStream.WriteHeaders(Boolean async) --- End of inner exception stack trace --- at System.Web.Services.Protocols.WebClientProtocol.GetWebResponse(WebRequest request) at System.Web.Services.Protocols.HttpWebClientProtocol.GetWebResponse(WebRequest request) at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) at Microsoft.Crm.SdkTypeProxy.CrmService.Retrieve(String entityName, Guid id, ColumnSetBase columnSet) at Microsoft.Crm.Asynchronous.SdkTypeProxyCrmServiceWrapper.Retrieve(String entityName, Guid id, ColumnSetBase columnSet) at Microsoft.Crm.Asynchronous.SdkPluginDescriptionProvider.GetPluginTypeDescription(Guid pluginTypeId, IOrganizationContext context) at Microsoft.Crm.Caching.PluginTypeCacheLoader.LoadCacheData(Guid key, IOrganizationContext context) at Microsoft.Crm.Caching.CrmMultiOrgCache`2.CreateEntry(TKey key, IOrganizationContext context) at Microsoft.Crm.Caching.CrmSharedMultiOrgCache`2.LookupEntry(TKey key, IOrganizationContext context) at Microsoft.Crm.Caching.PluginTypeCache.LookupEntry(Guid pluginTypeId, IOrganizationContext context) at Microsoft.Crm.Asynchronous.AsyncOperationCommand.GetPluginType(Guid pluginTypeId) at Microsoft.Crm.Asynchronous.EventOperation.InternalExecute(AsyncEvent asyncEvent) at Microsoft.Crm.Asynchronous.AsyncOperationCommand.Execute(AsyncEvent asyncEvent) The only thing I've 
tried is updating the AsyncSdkRootDomain row in the Deployment table to match the ADSdkRootDomain and ADApplicationRootDomain values. It was blank. That didn't appear to work. After some more research, I think this might be caused by the async service being unable to access the SDK web services over SSL. If this is correct, how would one configure a CRM server for secure access, internal and external (IFD), and still allow the async service to reach the web site? Thanks for your help!

    Read the article

  • Windows Service HTTPListener Memory Issue

    - by crawshaws
    Hi all, Im a complete novice to the "best practices" etc of writing in any code. I tend to just write it an if it works, why fix it. Well, this way of working is landing me in some hot water. I am writing a simple windows service to server a single webpage. (This service will be incorperated in to another project which monitors the services and some folders on a group of servers.) My problem is that whenever a request is recieved, the memory usage jumps up by a few K per request and keeps qoing up on every request. Now ive found that by putting GC.Collect in the mix it stops at a certain number but im sure its not meant to be used this way. I was wondering if i am missing something or not doing something i should to free up memory. Here is the code: Public Class SimpleWebService : Inherits ServiceBase 'Set the values for the different event log types. Public Const EVENT_ERROR As Integer = 1 Public Const EVENT_WARNING As Integer = 2 Public Const EVENT_INFORMATION As Integer = 4 Public listenerThread As Thread Dim HTTPListner As HttpListener Dim blnKeepAlive As Boolean = True Shared Sub Main() Dim ServicesToRun As ServiceBase() ServicesToRun = New ServiceBase() {New SimpleWebService()} ServiceBase.Run(ServicesToRun) End Sub Protected Overrides Sub OnStart(ByVal args As String()) If Not HttpListener.IsSupported Then CreateEventLogEntry("Windows XP SP2, Server 2003, or higher is required to " & "use the HttpListener class.") Me.Stop() End If Try listenerThread = New Thread(AddressOf ListenForConnections) listenerThread.Start() Catch ex As Exception CreateEventLogEntry(ex.Message) End Try End Sub Protected Overrides Sub OnStop() blnKeepAlive = False End Sub Private Sub CreateEventLogEntry(ByRef strEventContent As String) Dim sSource As String Dim sLog As String sSource = "Service1" sLog = "Application" If Not EventLog.SourceExists(sSource) Then EventLog.CreateEventSource(sSource, sLog) End If Dim ELog As New EventLog(sLog, ".", sSource) ELog.WriteEntry(strEventContent) End Sub Public Sub ListenForConnections() HTTPListner = New HttpListener HTTPListner.Prefixes.Add("http://*:1986/") HTTPListner.Start() Do While blnKeepAlive Dim ctx As HttpListenerContext = HTTPListner.GetContext() Dim HandlerThread As Thread = New Thread(AddressOf ProcessRequest) HandlerThread.Start(ctx) HandlerThread = Nothing Loop HTTPListner.Stop() End Sub Private Sub ProcessRequest(ByVal ctx As HttpListenerContext) Dim sb As StringBuilder = New StringBuilder sb.Append("<html><body><h1>Test My Service</h1>") sb.Append("</body></html>") Dim buffer() As Byte = Encoding.UTF8.GetBytes(sb.ToString) ctx.Response.ContentLength64 = buffer.Length ctx.Response.OutputStream.Write(buffer, 0, buffer.Length) ctx.Response.OutputStream.Close() ctx.Response.Close() sb = Nothing buffer = Nothing ctx = Nothing 'This line seems to keep the mem leak down 'System.GC.Collect() End Sub End Class Please feel free to critisise and tear the code apart but please BE KIND. I have admitted I dont tend to follow the best practice when it comes to coding.

    Read the article

  • .NET C# Filestream writing to file and reading the bfile

    - by pythonrg7
    I have a web service that checks a dictionary to see if a file exists and then if it does exist it reads the file, otherwise it saves to the file. This is from a web app. I wonder what is the best way to do this because I occasionally get a FileNotFoundException exception if the same file is accessed at the same time. Here's the relevant parts of the code: String signature; signature = "FILE," + value1 + "," + value2 + "," + value3 + "," + value4; // this is going to be the filename string result; MultipleRecordset mrSummary = new MultipleRecordset(); // MultipleRecordset is an object that retrieves data from a sql server database if (mrSummary.existsFile(signature)) { result = mrSummary.retrieveFile(signature); } else { result = mrSummary.getMultipleRecordsets(System.Configuration.ConfigurationManager.ConnectionStrings["MyConnectionString"].ConnectionString.ToString(), value1, value2, value3, value4); mrSummary.saveFile(signature, result); } Here's the code to see if the file already exists: private static Dictionary dict = new Dictionary(); public bool existsFile(string signature) { if (dict.ContainsKey(signature)) { return true; } else { return false; } } Here's what I use to retrieve if it already exists: try { byte[] buffer; FileStream fileStream = new FileStream(@System.Configuration.ConfigurationManager.AppSettings["CACHEPATH"] + filename, FileMode.Open, FileAccess.Read, FileShare.Read); try { int length = 0x8000; // get file length buffer = new byte[length]; // create buffer int count; // actual number of bytes read JSONstring = ""; while ((count = fileStream.Read(buffer, 0, length)) > 0) { JSONstring += System.Text.ASCIIEncoding.ASCII.GetString(buffer, 0, count); } } finally { fileStream.Close(); } } catch (Exception e) { JSONstring = "{\"error\":\"" + e.ToString() + "\"}"; } If the file doesn't previously exist it saves the JSON to the file: try { if (dict.ContainsKey(filename) == false) { dict.Add(filename, true); } else { this.retrieveFile(filename, ipaddress); } } catch { } try { TextWriter tw = new StreamWriter(@System.Configuration.ConfigurationManager.AppSettings["CACHEPATH"] + filename); tw.WriteLine(JSONstring); tw.Close(); } catch { } Here are the details to the exception I sometimes get from running the above code: System.IO.FileNotFoundException: Could not find file 'E:\inetpub\wwwroot\cache\FILE,36,36.25,14.5,14.75'. File name: 'E:\inetpub\wwwroot\cache\FILE,36,36.25,14.5,14.75' at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath) at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy) at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share) at com.myname.business.MultipleRecordset.retrieveFile(String filename, String ipaddress)

    Read the article

  • OpenGL suppresses exceptions in MFC dialog-based application

    - by Mikhail
    Hello. I have an MFC-driven dialog-based application created with MSVS2005. Here is my problem step by step. I have button on my dialog and corresponding click-handler with code like this: int* i = 0; *i = 3; I'm running debug version of program and when I click on the button, Visual Studio catches focus and alerts "Access violation writing location" exception, program cannot recover from the error and all I can do is to stop debugging. And this is the right behavior. Now I add some OpenGL initialization code in the OnInitDialog() method: HDC DC = GetDC(GetSafeHwnd()); static PIXELFORMATDESCRIPTOR pfd = { sizeof(PIXELFORMATDESCRIPTOR), // size of this pfd 1, // version number PFD_DRAW_TO_WINDOW | // support window PFD_SUPPORT_OPENGL | // support OpenGL PFD_DOUBLEBUFFER, // double buffered PFD_TYPE_RGBA, // RGBA type 24, // 24-bit color depth 0, 0, 0, 0, 0, 0, // color bits ignored 0, // no alpha buffer 0, // shift bit ignored 0, // no accumulation buffer 0, 0, 0, 0, // accum bits ignored 32, // 32-bit z-buffer 0, // no stencil buffer 0, // no auxiliary buffer PFD_MAIN_PLANE, // main layer 0, // reserved 0, 0, 0 // layer masks ignored }; int pixelformat = ChoosePixelFormat(DC, &pfd); SetPixelFormat(DC, pixelformat, &pfd); HGLRC hrc = wglCreateContext(DC); ASSERT(hrc != NULL); wglMakeCurrent(DC, hrc); Of course this is not exactly what I do, it is the simplified version of my code. Well now the strange things begin to happen: all initialization is fine, there are no errors in OnInitDialog(), but when I click the button... no exception is thrown. Nothing happens. At all. If I set a break-point at the *i = 3; and press F11 on it, the handler-function halts immediately and focus is returned to the application, which continue to work well. I can click button again and the same thing will happen. It seems like someone had handled occurred exception of access violation and silently returned execution into main application message-receiving cycle. If I comment the line wglMakeCurrent(DC, hrc);, all works fine as before, exception is thrown and Visual Studio catches it and shows window with error message and program must be terminated afterwards. I experience this problem under Windows 7 64-bit, NVIDIA GeForce 8800 with latest drivers (of 11.01.2010) available at website installed. My colleague has Windows Vista 32-bit and has no such problem - exception is thrown and application crashes in both cases. Well, hope good guys will help me :) PS The problem originally where posted under this topic.

    Read the article

  • Android - OPENGL cube is NOT in the display

    - by Marc Ortiz
    I'm trying to display a square on my display and i can't. Whats my problem? How can I display it on the screen (center of the screen)? I let my code below! Here's my render class: public class GLRenderEx implements Renderer { private GLCube cube; Context c; GLCube quad; // ( NEW ) // Constructor public GLRenderEx(Context context) { // Set up the data-array buffers for these shapes ( NEW ) quad = new GLCube(); // ( NEW ) } // Call back when the surface is first created or re-created. @Override public void onSurfaceCreated(GL10 gl, EGLConfig config) { // NO CHANGE - SKIP } // Call back after onSurfaceCreated() or whenever the window's size changes. @Override public void onSurfaceChanged(GL10 gl, int width, int height) { // NO CHANGE - SKIP } // Call back to draw the current frame. @Override public void onDrawFrame(GL10 gl) { // Clear color and depth buffers using clear-values set earlier gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT); gl.glLoadIdentity(); // Reset model-view matrix ( NEW ) gl.glTranslatef(-1.5f, 0.0f, -6.0f); // Translate left and into the // screen ( NEW ) // Translate right, relative to the previous translation ( NEW ) gl.glTranslatef(3.0f, 0.0f, 0.0f); quad.draw(gl); // Draw quad ( NEW ) } } And here is my square class: public class GLCube { private FloatBuffer vertexBuffer; // Buffer for vertex-array private float[] vertices = { // Vertices for the square -1.0f, -1.0f, 0.0f, // 0. left-bottom 1.0f, -1.0f, 0.0f, // 1. right-bottom -1.0f, 1.0f, 0.0f, // 2. left-top 1.0f, 1.0f, 0.0f // 3. right-top }; // Constructor - Setup the vertex buffer public GLCube() { // Setup vertex array buffer. Vertices in float. A float has 4 bytes ByteBuffer vbb = ByteBuffer.allocateDirect(vertices.length * 4); vbb.order(ByteOrder.nativeOrder()); // Use native byte order vertexBuffer = vbb.asFloatBuffer(); // Convert from byte to float vertexBuffer.put(vertices); // Copy data into buffer vertexBuffer.position(0); // Rewind } // Render the shape public void draw(GL10 gl) { // Enable vertex-array and define its buffer gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer); // Draw the primitives from the vertex-array directly gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3); gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); } } Thanks!!
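
    A likely culprit, although it cannot be confirmed from the excerpt alone, is that onSurfaceCreated() and onSurfaceChanged() are left empty, so no viewport or projection is ever set and geometry translated to z = -6 falls outside the default clip volume. A sketch of what those two callbacks might look like inside GLRenderEx (imports for GL10, EGLConfig and android.opengl.GLU assumed; the numbers are illustrative):

        @Override
        public void onSurfaceCreated(GL10 gl, EGLConfig config) {
            gl.glClearColor(0.0f, 0.0f, 0.0f, 1.0f);   // clear values used by glClear() in onDrawFrame
            gl.glClearDepthf(1.0f);
            gl.glEnable(GL10.GL_DEPTH_TEST);
        }

        @Override
        public void onSurfaceChanged(GL10 gl, int width, int height) {
            gl.glViewport(0, 0, width, height);        // map normalized device coordinates to the surface
            gl.glMatrixMode(GL10.GL_PROJECTION);
            gl.glLoadIdentity();
            float aspect = (float) width / Math.max(height, 1);
            GLU.gluPerspective(gl, 45.0f, aspect, 0.1f, 100.0f);
            gl.glMatrixMode(GL10.GL_MODELVIEW);
            gl.glLoadIdentity();
        }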

    Read the article

  • Inserting newlines into a GtkTextView widget (GTK+ programming)

    - by Mark Roberts
    I've got a button which when clicked copies and appends the text from a GtkEntry widget into a GtkTextView widget. (This code is a modified version of an example found in the "The Text View Widget" chapter of Foundations of GTK+ Development.) I'm looking to insert a newline character before the text which gets copied and appended, such that each line of text will be on its own line in the GtkTextView widget. How would I do this? I'm brand new to GTK+. Here's the code sample: #include <gtk/gtk.h> typedef struct { GtkWidget *entry, *textview; } Widgets; static void insert_text (GtkButton*, Widgets*); int main (int argc, char *argv[]) { GtkWidget *window, *scrolled_win, *hbox, *vbox, *insert; Widgets *w = g_slice_new (Widgets); gtk_init (&argc, &argv); window = gtk_window_new (GTK_WINDOW_TOPLEVEL); gtk_window_set_title (GTK_WINDOW (window), "Text Iterators"); gtk_container_set_border_width (GTK_CONTAINER (window), 10); gtk_widget_set_size_request (window, -1, 200); w->textview = gtk_text_view_new (); w->entry = gtk_entry_new (); insert = gtk_button_new_with_label ("Insert Text"); g_signal_connect (G_OBJECT (insert), "clicked", G_CALLBACK (insert_text), (gpointer) w); scrolled_win = gtk_scrolled_window_new (NULL, NULL); gtk_container_add (GTK_CONTAINER (scrolled_win), w->textview); hbox = gtk_hbox_new (FALSE, 5); gtk_box_pack_start_defaults (GTK_BOX (hbox), w->entry); gtk_box_pack_start_defaults (GTK_BOX (hbox), insert); vbox = gtk_vbox_new (FALSE, 5); gtk_box_pack_start (GTK_BOX (vbox), scrolled_win, TRUE, TRUE, 0); gtk_box_pack_start (GTK_BOX (vbox), hbox, FALSE, TRUE, 0); gtk_container_add (GTK_CONTAINER (window), vbox); gtk_widget_show_all (window); gtk_main(); return 0; } /* Insert the text from the GtkEntry into the GtkTextView. */ static void insert_text (GtkButton *button, Widgets *w) { GtkTextBuffer *buffer; GtkTextMark *mark; GtkTextIter iter; const gchar *text; buffer = gtk_text_view_get_buffer (GTK_TEXT_VIEW (w->textview)); text = gtk_entry_get_text (GTK_ENTRY (w->entry)); mark = gtk_text_buffer_get_insert (buffer); gtk_text_buffer_get_iter_at_mark (buffer, &iter, mark); gtk_text_buffer_insert (buffer, &iter, text, -1); } You can compile this command (assuming the file is named file.c): gcc file.c -o file `pkg-config --cflags --libs gtk+-2.0` Thanks everybody!

    Read the article

  • Switch between speakerphone and headset on Android

    - by user210504
    Hi! I would like to know whether there is a way to switch between the speakerphone and the headset dynamically in an Android application. I am using this sample code, which I found online, for my experiments: final float frequency = 440; float increment = (float)(2*Math.PI) * frequency / 44100; // angular increment for each sample float angle = 0; AndroidAudioDevice device = new AndroidAudioDevice( ); AudioManager am = (AudioManager)getSystemService(AUDIO_SERVICE); am.setMode(AudioManager.MODE_IN_CALL); float samples[] = new float[1024]; int count = 0; while( count < 10 ) { count++; for( int i = 0; i < samples.length; i++ ) { samples[i] = (float)Math.sin( angle ) ; angle += increment; } device.writeSamples( samples ); } device.stop(); am.setMode(AudioManager.MODE_NORMAL); ---- next class public class AndroidAudioDevice { AudioTrack track; short[] buffer = new short[1024]; public AndroidAudioDevice( ) { int minSize =AudioTrack.getMinBufferSize( 44100, AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT ); track = new AudioTrack( AudioManager.STREAM_VOICE_CALL, 44100, AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT, minSize, AudioTrack.MODE_STREAM); track.play(); } public void writeSamples(float[] samples) { fillBuffer( samples ); track.write( buffer, 0, samples.length ); } private void fillBuffer( float[] samples ) { if( buffer.length < samples.length ) buffer = new short[samples.length]; for( int i = 0; i < samples.length; i++ ) buffer[i] = (short)(samples[i] * Short.MAX_VALUE); } public void stop() { track.stop(); } } As per my understanding this should play audio on the headset, because we have not enabled the speakerphone. However, the audio is playing on the speakerphone. 1. Am I doing something wrong here? 2. What would be a way to switch between the internal speaker and the speakerphone dynamically with the same piece of code? Any help will be appreciated.
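
    A common way to flip between the loudspeaker and the earpiece/headset at runtime is simply to toggle AudioManager.setSpeakerphoneOn() while the track uses STREAM_VOICE_CALL and the mode is MODE_IN_CALL, which is what the sample above already does for the non-speaker case. A hedged sketch of such a toggle helper (behaviour varies by device and Android version, and the MODIFY_AUDIO_SETTINGS permission is normally required):

        import android.content.Context;
        import android.media.AudioManager;

        // Sketch of a routing toggle for a STREAM_VOICE_CALL AudioTrack.
        public final class AudioRouteToggle {
            private final AudioManager am;

            public AudioRouteToggle(Context context) {
                am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
            }

            public void useSpeakerphone() {            // route playback to the loudspeaker
                am.setMode(AudioManager.MODE_IN_CALL);
                am.setSpeakerphoneOn(true);
            }

            public void useEarpieceOrHeadset() {       // route back to the earpiece or wired headset
                am.setMode(AudioManager.MODE_IN_CALL);
                am.setSpeakerphoneOn(false);
            }

            public void reset() {                      // restore normal media routing when done
                am.setMode(AudioManager.MODE_NORMAL);
            }
        }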

    Read the article

  • HttpWebResponse get mixed up when used inside multiple threads

    - by Holli
    In my Application I have a few threads who will get data from a web service. Basically I just open an URL and get an XML output. I have a few threads who do this continuously but with different URLs. Sometimes the results are mixed up. The XML output doesn't belong to the URL of a thread but to the URL of another thread. In each thread I create an instance of the class GetWebPage and call the method Get from this instance. The method is very simple and based mostly on code from the MSDN documentation. (See below. I removed my error handling here!) public string Get(string userAgent, string url, string user, string pass, int timeout, int readwriteTimeout, WebHeaderCollection whc) { string buffer = string.Empty; HttpWebRequest myWebRequest = (HttpWebRequest)WebRequest.Create(url); if (!string.IsNullOrEmpty(userAgent)) myWebRequest.UserAgent = userAgent; myWebRequest.Timeout = timeout; myWebRequest.ReadWriteTimeout = readwriteTimeout; myWebRequest.Credentials = new NetworkCredential(user, pass); string[] headers = whc.AllKeys; foreach (string s in headers) { myWebRequest.Headers.Add(s, whc.Get(s)); } using (HttpWebResponse myWebResponse = (HttpWebResponse)myWebRequest.GetResponse()) { using (Stream ReceiveStream = myWebResponse.GetResponseStream()) { Encoding encode = Encoding.GetEncoding("utf-8"); StreamReader readStream = new StreamReader(ReceiveStream, encode); // Read 1024 characters at a time. Char[] read = new Char[1024]; int count = readStream.Read(read, 0, 1024); int break_counter = 0; while (count > 0 && break_counter < 10000) { String str = new String(read, 0, count); buffer += str; count = readStream.Read(read, 0, 1024); break_counter++; } } } return buffer; As you can see I have no public properties or any other shared resources. At least I don't see any. The url is the service I call in the internet and buffer is the XML Output from the server. Like I said I have multiple instances of this class/method in a few threads (10 to 12) and sometimes buffer does not belong the the url of the same thread but another thread.

    Read the article

  • java double buffering problem

    - by russell
    What's wrong with my applet code? It does not render double buffering correctly. I have been trying and trying but failed to find a solution. Please, can someone tell me what's wrong with my code? import java.applet.* ; import java.awt.* ; import java.awt.event.* ; public class Ball extends Applet implements Runnable { // initialise the variables int x_pos = 10; // x position of the ball int y_pos = 100; // y position of the ball int radius = 20; // radius of the ball Image buffer=null; //Graphics graphic=null; int w,h; public void init() { Dimension d=getSize(); w=d.width; h=d.height; buffer=createImage(w,h); //graphic=buffer.getGraphics(); setBackground (Color.black); } public void start () { // create a new thread in which the game runs Thread th = new Thread (this); // start the thread th.start (); } public void stop() { } public void destroy() { } public void run () { // lower the thread priority to make drawing easier Thread.currentThread().setPriority(Thread.MIN_PRIORITY); // while this is true the thread keeps running while (true) { // change the coordinates repaint(); x_pos++; y_pos++; //x2--; //y2--; // repaint the applet if(x_pos>410) x_pos=20; if(y_pos>410) y_pos=20; try { Thread.sleep (30); } catch (InterruptedException ex) { // do nothing } Thread.currentThread().setPriority(Thread.MAX_PRIORITY); } } public void paint (Graphics g) { Graphics screen=null; screen=g; g=buffer.getGraphics(); g.setColor(Color.red); g.fillOval(x_pos - radius, y_pos - radius, 2 * radius, 2 * radius); g.setColor(Color.green); screen.drawImage(buffer,0,0,this); } public void update(Graphics g) { paint(g); } } What change should I make? When the offscreen image is drawn, the previous image also remains on screen. How can I erase the previous image from the screen?
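
    The standard fix for trails like this, though it is not taken from the post itself, is to clear the offscreen image before drawing each frame. A sketch of paint() and update() as they might look inside the Ball class above:

        public void paint(Graphics g) {
            if (buffer == null) return;                 // applet not fully initialised yet
            Graphics off = buffer.getGraphics();
            off.setColor(getBackground());
            off.fillRect(0, 0, w, h);                   // erase the previous frame from the offscreen buffer
            off.setColor(Color.red);
            off.fillOval(x_pos - radius, y_pos - radius, 2 * radius, 2 * radius);
            off.dispose();
            g.drawImage(buffer, 0, 0, this);            // copy the finished frame to the screen
        }

        public void update(Graphics g) {
            paint(g);                                   // skip AWT's default background clear to avoid flicker
        }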

    Read the article
