Search Results

Search found 3754 results on 151 pages for 'vertex buffer'.

Page 51/151 | < Previous Page | 47 48 49 50 51 52 53 54 55 56 57 58  | Next Page >

  • How to Capture a live stream from Windows Media Server 2008

    - by Hummad Hassan
    I want to capture the live stream from Windows Media Server to the filesystem on my PC. I have tried with my own media server using the following code, but when I checked the output file I found this in it. FileStream fs = null; try { HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://mywmsserver/test"); CookieContainer ci = new CookieContainer(1000); req.Timeout = 60000; req.Method = "Get"; req.KeepAlive = true; req.MaximumAutomaticRedirections = 99; req.UseDefaultCredentials = true; req.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3"; req.ReadWriteTimeout = 90000000; req.CookieContainer = ci; //req.MediaType = "video/x-ms-asf"; req.AllowWriteStreamBuffering = true; HttpWebResponse resp = (HttpWebResponse)req.GetResponse(); Stream resps = resp.GetResponseStream(); fs = new FileStream("d:\\dump.wmv", FileMode.Create, FileAccess.ReadWrite); byte[] buffer = new byte[1024]; int bytesRead = 0; while ((bytesRead = resps.Read(buffer, 0, buffer.Length)) > 0) { fs.Write(buffer, 0, bytesRead); } } catch (Exception ex) { } finally { if (fs != null) fs.Close(); }
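
    A side note on the copy loop itself: a minimal C# sketch, with the same URL and output path assumed, that puts the response, the stream and the file in using blocks so they are disposed even when the request fails, instead of swallowing the exception in an empty catch.

      using System;
      using System.IO;
      using System.Net;

      class StreamDump
      {
          static void Main()
          {
              HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://mywmsserver/test");
              req.Timeout = 60000;
              // using blocks guarantee disposal of response, stream and file
              using (HttpWebResponse resp = (HttpWebResponse)req.GetResponse())
              using (Stream resps = resp.GetResponseStream())
              using (FileStream fs = new FileStream("d:\\dump.wmv", FileMode.Create, FileAccess.Write))
              {
                  byte[] buffer = new byte[8192];
                  int bytesRead;
                  while ((bytesRead = resps.Read(buffer, 0, buffer.Length)) > 0)
                  {
                      fs.Write(buffer, 0, bytesRead);
                  }
              }
          }
      }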

    Read the article

  • How to inherit from a non-prototype object

    - by Andres Jaan Tack
    The node-binary binary parser builds its object with the following pattern: exports.parse = function parse (buffer) { var self = {...} self.tap = function (cb) {...}; self.into = function (key, cb) {...}; ... return self; }; How do I inherit my own, enlightened parser from this? Is this pattern designed intentionally to make inheritance awkward? My only successful attempt thus far at inheriting all the methods of binary.parse(<something>) is to use _.extend as: var clever_parser = function(buffer) { if (this instanceof clever_parser) { this.parser = binary.parse(buffer); // I guess this is super.constructor(...) _.extend(this.parser, this); // Really? return this.parser; } else { return new clever_parser(buffer); } } This has failed my smell test, and that of others. Is there anything about this that makes it dangerous?
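
    A minimal sketch, assuming composition/delegation is acceptable instead of classical inheritance: the wrapper's property lookups fall through to the object returned by binary.parse(). The shout() method is a made-up example, and one caveat is that the library's own chainable methods still return the base object, so a chain started on an inherited method loses the added methods.

      var binary = require('binary');

      function cleverParser(buffer) {
          var base = binary.parse(buffer);   // the library's factory object
          var self = Object.create(base);    // unknown properties delegate to base
          self.shout = function (msg) {      // hypothetical extra method
              console.log('[clever] ' + msg);
              return self;
          };
          return self;
      }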

    Read the article

  • How to Capture a live stream from Windows Media Server 2008 using c#.net

    - by Hummad Hassan
    I want to capture the live stream from Windows Media Server to the filesystem on my PC. I have tried with my own media server using the following code, but when I checked the output file I found this in it. Please help me with this. Thanks. [Reference] Ref1=http://mywindowsmediaserver/test?MSWMExt=.asf Ref2=http://mywindowsmediaserver/test?MSWMExt=.asf FileStream fs = null; try { HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://mywmsserver/test"); CookieContainer ci = new CookieContainer(1000); req.Timeout = 60000; req.Method = "Get"; req.KeepAlive = true; req.MaximumAutomaticRedirections = 99; req.UseDefaultCredentials = true; req.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3"; req.ReadWriteTimeout = 90000000; req.CookieContainer = ci; //req.MediaType = "video/x-ms-asf"; req.AllowWriteStreamBuffering = true; HttpWebResponse resp = (HttpWebResponse)req.GetResponse(); Stream resps = resp.GetResponseStream(); fs = new FileStream("d:\\dump.wmv", FileMode.Create, FileAccess.ReadWrite); byte[] buffer = new byte[1024]; int bytesRead = 0; while ((bytesRead = resps.Read(buffer, 0, buffer.Length)) > 0) { fs.Write(buffer, 0, bytesRead); } } catch (Exception ex) { } finally { if (fs != null) fs.Close(); }
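
    The [Reference] block above is a Windows Media metafile (a redirector), not the stream itself. A minimal C# sketch of one possible follow-up, assuming the server allows plain HTTP streaming once the Ref1 URL is requested directly; the NSPlayer user agent and the output paths are assumptions, and some servers will still insist on the MS-WMSP streaming protocol, so treat this as a starting point rather than a complete answer.

      using System;
      using System.IO;
      using System.Net;

      class RedirectorFollow
      {
          static void Main()
          {
              // Extract the Ref1 URL from the metafile that was saved as "dump.wmv".
              string metafile = File.ReadAllText("d:\\dump.wmv");
              string refUrl = null;
              foreach (string line in metafile.Split(new[] { '\r', '\n' }, StringSplitOptions.RemoveEmptyEntries))
              {
                  if (line.StartsWith("Ref1=", StringComparison.OrdinalIgnoreCase))
                  {
                      refUrl = line.Substring("Ref1=".Length);
                      break;
                  }
              }
              if (refUrl == null) return;

              HttpWebRequest req = (HttpWebRequest)WebRequest.Create(refUrl);
              req.UserAgent = "NSPlayer/12.0";   // assumption: some servers only stream to player-style agents
              using (HttpWebResponse resp = (HttpWebResponse)req.GetResponse())
              using (Stream input = resp.GetResponseStream())
              using (FileStream output = new FileStream("d:\\dump2.wmv", FileMode.Create, FileAccess.Write))
              {
                  byte[] buffer = new byte[8192];
                  int read;
                  while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
                  {
                      output.Write(buffer, 0, read);
                  }
              }
          }
      }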

    Read the article

  • exec() in BeanShell macro causes jEdit to hang when it returns non-zero exit code

    - by rossmeissl
    I have a jEdit BeanShell macro that runs my Markdown files through Maruku when I save them: if (buffer.getMode().toString().equals("markdown")) { cmd = "C:\\Ruby\\bin\\maruku.bat -o " + buffer.getDirectory() + buffer.getName().replaceAll("markdown$", "html") + " " + buffer.getPath(); exec(cmd); } This works great when the Markdown file is valid. But if I've made a mistake, jEdit just waits around forever for the exec() call to "succeed," which it never will. When this happens, I have to kill jEdit's javaw.exe process and run Maruku manually from the command line to discover the error, e.g.: E:\bp\plan\supply_chain>maruku business_plan.markdown ___________________________________________________________________________ | Maruku tells you: +--------------------------------------------------------------------------- | Could not find ref_id = "17" for md_link(["17"],"17") | Available refs are [] +--------------------------------------------------------------------------- !C:/Ruby/lib/ruby/gems/1.8/gems/maruku-0.6.0/lib/maruku/errors_management.rb:49:in `maruku_error' !C:/Ruby/lib/ruby/gems/1.8/gems/maruku-0.6.0/lib/maruku/output/to_html.rb:716:in `to_html_link' !C:/Ruby/lib/ruby/gems/1.8/gems/maruku-0.6.0/lib/maruku/output/to_html.rb:970:in `send' !C:/Ruby/lib/ruby/gems/1.8/gems/maruku-0.6.0/lib/maruku/output/to_html.rb:970:in `array_to_html' !C:/Ruby/lib/ruby/gems/1.8/gems/maruku-0.6.0/lib/maruku/output/to_html.rb:961:in `each' \___________________________________________________________________________ Not creating a link for ref_id = "17". Then I restart jEdit, fix the error, and re-save the file, at which point the macro succeeds. How can I make my macro more resilient to either die helpfully (display Maruku's error output) or, at the very least, die silently so I don't have to kill jEdit?
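
    One possible direction, sketched in BeanShell under the assumption that jEdit's Macros.message() helper and the predefined view and buffer variables are available in the macro context: swap the built-in exec() for Runtime.exec() so the exit code and Maruku's stderr can be inspected. Draining only stderr can still block if Maruku writes a lot to stdout, so this is a starting point rather than a finished macro.

      // Run Maruku through cmd /c so the .bat file launches reliably, collect stderr,
      // and report a non-zero exit code instead of hanging.
      if (buffer.getMode().toString().equals("markdown")) {
          String target = buffer.getDirectory() + buffer.getName().replaceAll("markdown$", "html");
          String[] cmd = new String[] { "cmd", "/c", "C:\\Ruby\\bin\\maruku.bat", "-o", target, buffer.getPath() };
          Process p = Runtime.getRuntime().exec(cmd);
          BufferedReader err = new BufferedReader(new InputStreamReader(p.getErrorStream()));
          StringBuffer msg = new StringBuffer();
          String line;
          while ((line = err.readLine()) != null) msg.append(line).append("\n");
          int code = p.waitFor();
          if (code != 0) Macros.message(view, "Maruku failed (" + code + "):\n" + msg);
      }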

    Read the article

  • how to send binary data within an xml string

    - by daemonkid
    I want to send a binary file to a .NET C# component in the following XML format <BinaryFileString fileType='pdf'> <!--binary file data string here--> </BinaryFileString> In the component that is called, I will use the above XML string and convert the binary string received within the BinaryFileString tag into a file as specified by the fileType='' attribute. The file type could be doc/pdf/xls/rtf. I have the code in the calling application to get the bytes out of the file to be sent. How do I prepare it to be sent with XML tags wrapped around it? I want the application to send out a string to the component and not a byte stream. This is because there is no way I can decipher the file type [pdf/doc/xls] by just looking at the byte stream. Hence the XML string with the fileType attribute. Any ideas on this? The method for extracting the bytes is below: FileStream fs = new FileStream(_filePath, FileMode.Open, FileAccess.Read); using (Stream input = fs) { byte[] buffer = new byte[8192]; int bytesRead; while ((bytesRead = input.Read(buffer, 0, buffer.Length)) > 0) { } } return buffer; Thanks.
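
    A minimal C# sketch (assuming .NET 3.5+ for System.Xml.Linq): Base64 is a text-safe encoding for the binary payload, so the bytes survive inside the XML element while the fileType attribute still tells the component what kind of file to write out. The class and method names are illustrative.

      using System;
      using System.IO;
      using System.Xml.Linq;

      public static class BinaryXml
      {
          // Wraps the file's bytes as Base64 text inside the BinaryFileString element.
          public static string Wrap(string path, string fileType)
          {
              byte[] bytes = File.ReadAllBytes(path);
              var element = new XElement("BinaryFileString",
                  new XAttribute("fileType", fileType),
                  Convert.ToBase64String(bytes));
              return element.ToString();
          }

          // Decodes the element's text back into bytes and writes the file.
          public static void Unwrap(string xml, string targetPath)
          {
              var element = XElement.Parse(xml);
              byte[] bytes = Convert.FromBase64String(element.Value);
              File.WriteAllBytes(targetPath, bytes);   // extension chosen from fileType by the caller
          }
      }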

    Read the article

  • Where does a process's memory space start and where does it end?

    - by nhaa123
    Hi, I'm trying to dump memory from my application where the variables lie. Here's the function: void MyDump(const void *m, unsigned int n) { const unsigned char *p = reinterpret_cast<const unsigned char *>(m); char buffer[16]; unsigned int mod = 0; for (unsigned int i = 0; i < n; ++i, ++mod) { if (mod % 16 == 0) { mod = 0; std::cout << " | "; for (unsigned short j = 0; j < 16; ++j) { switch (buffer[j]) { case 0xa: case 0xb: case 0xd: case 0xe: case 0xf: std::cout << " "; break; default: std::cout << buffer[j]; } } std::cout << "\n0x" << std::setfill('0') << std::setw(8) << std::hex << (long)i << " | "; } buffer[i % 16] = p[i]; std::cout << std::setw(2) << std::hex << static_cast<unsigned int>(p[i]) << " "; if (i % 4 == 0 && i != 1) std::cout << " "; } } Now, how can I know at which address my process's memory space starts, where all the variables are stored? And how do I know how long the area is? For instance: MyDump(0x0000 /* <-- Starts from here? */, 0x1000 /* <-- This much? */); Best regards, nhaa123
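
    A minimal Win32/C++ sketch, assuming the goal is to find the readable regions of the calling process rather than one fixed start and end address: a process address space is not contiguous, so VirtualQuery is used to walk it region by region between the limits reported by GetSystemInfo.

      #include <windows.h>
      #include <iostream>

      int main()
      {
          SYSTEM_INFO si;
          GetSystemInfo(&si);
          const unsigned char *p = static_cast<const unsigned char *>(si.lpMinimumApplicationAddress);
          const unsigned char *end = static_cast<const unsigned char *>(si.lpMaximumApplicationAddress);
          MEMORY_BASIC_INFORMATION mbi;

          while (p < end && VirtualQuery(p, &mbi, sizeof(mbi)) == sizeof(mbi)) {
              // Only committed, accessible pages are safe to dump.
              if (mbi.State == MEM_COMMIT && !(mbi.Protect & (PAGE_NOACCESS | PAGE_GUARD))) {
                  std::cout << "committed region at " << mbi.BaseAddress
                            << ", " << mbi.RegionSize << " bytes\n";
                  // e.g. MyDump(mbi.BaseAddress, static_cast<unsigned int>(mbi.RegionSize));
              }
              p = static_cast<const unsigned char *>(mbi.BaseAddress) + mbi.RegionSize;
          }
          return 0;
      }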

    Read the article

  • Facebook app request in java not working

    - by Arpit Solanki
    I am trying to send a facebook app request to a user through the code below. But it gives an IOException and HTTP status code 400 when run. I don't see any app request being sent to the user when I run this. StringBuffer buffer = new StringBuffer(); buffer.append("access_token").append('=').append(this.app_access_token); buffer.append('&').append("message=").append("sent an app request!"); String content = buffer.toString(); try{ URLConnection connection = new URL("https://graph.facebook.com/me/apprequests").openConnection(); connection.setDoOutput(true); connection.setRequestProperty("Content-Type","application/x-www-form-urlencoded"); connection.setRequestProperty("Content-Length",Integer.toString(content.length())); DataOutputStream outs = new DataOutputStream(connection.getOutputStream()); outs.writeBytes(content); outs.flush(); outs.close(); BufferedReader in = new BufferedReader(new InputStreamReader(connection.getInputStream())); String inputLine; while ((inputLine = in.readLine()) != null) { System.out.println(inputLine); } in.close(); } catch(Exception e){ System.out.println(e); }
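
    A minimal Java sketch, assuming the first step is to see what the Graph API is actually complaining about: a 400 response still carries a JSON error body, but it is only reachable through getErrorStream(), since getInputStream() throws. (Also worth checking, as far as I recall: with an app access token, /me cannot be resolved, so app requests are normally posted to /<USER_ID>/apprequests instead.) To use this with the code above, open the connection as an HttpURLConnection.

      import java.io.BufferedReader;
      import java.io.InputStream;
      import java.io.InputStreamReader;
      import java.net.HttpURLConnection;

      public final class GraphErrorDump {
          // Prints the status code plus the JSON body, even for 4xx responses.
          public static void dumpResponse(HttpURLConnection connection) throws Exception {
              int status = connection.getResponseCode();
              InputStream stream = status >= 400 ? connection.getErrorStream()
                                                 : connection.getInputStream();
              BufferedReader in = new BufferedReader(new InputStreamReader(stream, "UTF-8"));
              System.out.println("HTTP " + status);
              String line;
              while ((line = in.readLine()) != null) {
                  System.out.println(line);   // e.g. {"error":{"message":"...","code":...}}
              }
              in.close();
          }
      }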

    Read the article

  • Java equivalent of the VB Request.InputStream

    - by Android Addict
    I have a web service that I am re-writing from VB to a Java servlet. In the web service, I want to extract the body entity set on the client-side as such: StringEntity stringEntity = new StringEntity(xml, HTTP.UTF_8); stringEntity.setContentType("application/xml"); httppost.setEntity(stringEntity); In the VB web service, I get this data by using: Dim objReader As System.IO.StreamReader objReader = New System.IO.StreamReader(Request.InputStream) Dim strXML As String = objReader.ReadToEnd and this works great. But I am looking for the equivalent in Java. I have tried this: ServletInputStream dataStream = req.getInputStream(); byte[] data = new byte[dataStream.toString().length()]; dataStream.read(data); but all it gets me is an unintelligible string: data = [B@68514fec Please advise. Edit Per the answers, I have tried: ServletInputStream dataStream = req.getInputStream(); ByteArrayOutputStream buffer = new ByteArrayOutputStream(); int r; byte[] data = new byte[1024*1024]; while ((r = dataStream.read(data, 0, data.length)) != -1) { buffer.write(data, 0, r); } buffer.flush(); byte[] data2 = buffer.toByteArray(); System.out.println("DATA = "+Arrays.toString(data2)); which yields: DATA = [] and when I try: System.out.println("DATA = "+data2.toString()); I get: DATA = [B@15282c7f So what am I doing wrong? As stated earlier, the same call to my VB service gives me the XML that I pass in.
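
    A minimal servlet-side sketch, assuming the POSTed entity is UTF-8 XML (which is what the client's StringEntity sends): read the body as characters instead of printing a byte array, whose default toString() is what produces output like "[B@68514fec".

      import java.io.BufferedReader;
      import java.io.IOException;
      import java.io.InputStreamReader;
      import javax.servlet.http.HttpServletRequest;

      public final class RequestBodyReader {
          // Reads the whole request body as a UTF-8 string, the Java analogue of
          // StreamReader(Request.InputStream).ReadToEnd in the VB version.
          public static String readBody(HttpServletRequest req) throws IOException {
              StringBuilder body = new StringBuilder();
              BufferedReader reader = new BufferedReader(
                      new InputStreamReader(req.getInputStream(), "UTF-8"));
              char[] chunk = new char[4096];
              int read;
              while ((read = reader.read(chunk)) != -1) {
                  body.append(chunk, 0, read);
              }
              return body.toString();   // the XML string the client posted
          }
      }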

    Read the article

  • OpenGLES - Rendering a background image only once and not wiping it

    - by chaosbeaker
    Hello, first time asking a question here, but I've been watching others' answers for a while. My question is about improving the performance of my program. Currently I'm wiping the viewFrameBuffer on each pass through my program and then rendering the background image first followed by the rest of my scene. I was wondering how I go about rendering the background image once, and only wiping the rest of the scene for updating/re-rendering. I tried using a separate buffer but I'm not sure how to present this new buffer to the render buffer. // Set the current EAGLContext and bind to the framebuffer. This will direct all OGL commands to the // framebuffer and the associated renderbuffer attachment which is where our scene will be rendered [EAGLContext setCurrentContext:context]; glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer); // Define the viewport. Changing the settings for the viewport can allow you to scale the viewport // as well as the dimensions etc and so I'm setting it for each frame in case we want to change it glViewport(0, 0, screenBounds.size.width , screenBounds.size.height); // Clear the screen. If we are going to draw a background image then this clear is not necessary // as drawing the background image will destroy the previous image glClearColor(0.0f, 1.0f, 0.0f, 1.0f); glClear(GL_COLOR_BUFFER_BIT); // Setup how the images are to be blended when rendered. This could be changed at different points during your // render process if you wanted to apply different effects glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); switch (currentViewInt) { case 1: { [background render:CGPointMake(240, 0) fromTopLeftBottomRightCenter:@"Bottom"]; // Other Rendering Code }} // Bind to the renderbuffer and then present this image to the current context glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer); [context presentRenderbuffer:GL_RENDERBUFFER_OES]; Hopefully by solving this I'll also be able to implement another buffer just for rendering particles as I can set them to always use a black background as their alpha source. Any help is greatly appreciated.
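
    A minimal OpenGL ES 1.1 fragment, assuming the same OES_framebuffer_object extension already used by the view code above: render the background once into a texture-backed FBO, then each frame draw that texture as a full-screen quad instead of re-rendering the whole background. backgroundWidth and backgroundHeight are assumptions (power-of-two sizes unless the NPOT extension is available), and the quad-drawing code is not shown, so this is a setup sketch rather than a drop-in change.

      GLuint bgTexture, bgFramebuffer;
      glGenTextures(1, &bgTexture);
      glBindTexture(GL_TEXTURE_2D, bgTexture);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
      glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, backgroundWidth, backgroundHeight, 0,
                   GL_RGBA, GL_UNSIGNED_BYTE, NULL);          // empty texture to render into

      glGenFramebuffersOES(1, &bgFramebuffer);
      glBindFramebufferOES(GL_FRAMEBUFFER_OES, bgFramebuffer);
      glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                                GL_TEXTURE_2D, bgTexture, 0);
      // ... render the background once here ...
      glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);   // back to the on-screen buffer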

    Read the article

  • Bluetooth in Java Mobile: Handling connections that go out of range

    - by Albus Dumbledore
    I am trying to implement a server-client connection over the spp. After initializing the server, I start a thread that first listens for clients and then receives data from them. It looks like that: public final void run() { while (alive) { try { /* * Await client connection */ System.out.println("Awaiting client connection..."); client = server.acceptAndOpen(); /* * Start receiving data */ int read; byte[] buffer = new byte[128]; DataInputStream receive = client.openDataInputStream(); try { while ((read = receive.read(buffer)) > 0) { System.out.println("[Recieved]: " + new String(buffer, 0, read)); if (!alive) { return; } } } finally { System.out.println("Closing connection..."); receive.close(); } } catch (IOException e){ e.printStackTrace(); } } } It's working fine for I am able to receive messages. What's troubling me is how would the thread eventually die when a device goes out of range? Firstly, the call to receive.read(buffer) blocks so that the thread waits until it receives any data. If the device goes out of range, it would never proceed onward to check if meanwhile it has been interrupted. Secondly, it would never close the connection, i.e. the server would not accept the device once it goes back in range. Thanks! Any ideas would be highly appreciated! Merry Christmas!
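
    A minimal sketch, assuming a watchdog thread alongside the reader: closing the StreamConnection from another thread makes the blocked receive.read(...) fail with an IOException, which gives the reader loop a chance to exit once the device has been silent for too long. The timeout value and the lastActivity field are assumptions rather than Bluetooth API features; the reader would update lastActivity after every successful read.

      import java.io.IOException;
      import javax.microedition.io.StreamConnection;

      // fields inside the server class (alive is the flag already used by run()):
      private volatile long lastActivity = System.currentTimeMillis();

      private void startWatchdog(final StreamConnection connection, final long timeoutMillis) {
          new Thread(new Runnable() {
              public void run() {
                  while (alive) {
                      try { Thread.sleep(1000); } catch (InterruptedException e) { return; }
                      if (System.currentTimeMillis() - lastActivity > timeoutMillis) {
                          try { connection.close(); } catch (IOException e) { /* already gone */ }
                          return;   // the blocked read() in the reader thread now throws
                      }
                  }
              }
          }).start();
      }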

    Read the article

  • Clone existing structs with different alignment in Visual C++

    - by Crend King
    Is there a way to clone an existing struct with different member alignment in Visual C++? Here is the background: I use a 3rd-party library, which uses several structs. To fill up the structs, I pass the address of the struct instances to some functions. Unfortunately, the functions only return an unaligned buffer, so the data of some members is always wrong. /Zp is not an option, since it breaks the other parts of the program. I know #pragma pack modifies the alignment of the following struct, but I would like to avoid copying the structs into my code, since the definitions in the library might change in the future. Sample code: test.h: struct am_aligned { BYTE data1[10]; ULONG data2; }; test.cpp: #include "test.h" // typedef alignment(1) struct am_aligned am_unaligned; int APIENTRY wWinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPTSTR lpCmdLine, int nCmdShow) { char buffer[20] = {}; for (int i = 0; i < sizeof(unaligned_struct); i++) { buffer[i] = i; } am_aligned instance = *(am_aligned*) buffer; return 0; } Assume am_aligned is defined in the library header file. am_unaligned is my custom declaration, and only effective in test.cpp. The commented line does not work of course. instance.data2 is 0x0f0e0d0c, while 0x0d0c0b0a is desired. Thanks for the help!
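
    A minimal C++ sketch, assuming the real need is only to read the unaligned bytes the library hands back: memcpy from explicit offsets avoids both /Zp and a second struct definition, and on a little-endian machine it yields the desired 0x0d0c0b0a for data2 with the sample buffer. (The other common route, re-including the header inside a #pragma pack(push, 1) region in a dedicated translation unit, depends on the header's include guards and contents, so it is not shown here.)

      #include <cstring>
      #include <windows.h>
      #include "test.h"

      // Copies the packed on-the-wire layout (data1 immediately followed by data2)
      // into a normally aligned am_aligned instance.
      void load_from_unaligned(const char *buffer, am_aligned &out)
      {
          std::memcpy(out.data1, buffer, sizeof(out.data1));                        // bytes 0..9
          std::memcpy(&out.data2, buffer + sizeof(out.data1), sizeof(out.data2));   // bytes 10..13
      }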

    Read the article

  • jQuery ajax post of jpg image to .net webservice. Image results corrupted

    - by sosergio
    I have a phonegap jquery app that opens the camera and takes a picture. I then POST this picture to a .net webservice, which I've coded. I can't use phonegap FileTransfer because it isn't supported by Bada OS, which is a requirement. I believe I've successfully loaded the image from the phonegap FileSystem API, I've attached it to an .ajax type:post request, and I've even received it on the .net side, but when .net saves the image on the server, the image ends up corrupted. It seems to me that the two sides of the communication use different data types. Does anyone have experience with this? Any help will be appreciated. This is my code: //PHONEGAP CAMERA ACCESS (summed up) navigator.camera.getPicture(onGetPictureSuccess, onGetPictureFail, { quality: 50, destinationType:Camera.DestinationType.FILE_URI }); window.resolveLocalFileSystemURI(imageURI, onResolveFileSystemURISuccess, onResolveFileSystemURIError); fileEntry.file(gotFileSuccess, gotFileError); new FileReader().readAsDataURL(file); //UPLOAD FILE function onDataReadSuccess(evt) { var image_data = evt.target.result; var filename = unique_id(); var filext = "jpg"; $.ajax({ type : 'POST', url : SERVICE_BASE_URL+"/fotos/"+filename+"?ext="+filext, cache: false, timeout: 100000, processData: false, data: image_data, contentType: 'image/jpeg', success : function(data) { console.log("Data Uploaded with success. Message: "+ data); $.mobile.hidePageLoadingMsg(); $.mobile.changePage("ok.html"); } }); } On my .net Web Service this is the method that gets invoked: public string FotoSave(string filename, string extension, Stream fileContent) { string filePath = HttpContext.Current.Server.MapPath("~/foto_data/") + "\\" + filename; FileStream writeStream = new FileStream(filePath, FileMode.OpenOrCreate, FileAccess.Write); int Length = 256; Byte[] buffer = new Byte[Length]; int bytesRead = readStream.Read(buffer, 0, Length); // write the required bytes while (bytesRead > 0) { writeStream.Write(buffer, 0, bytesRead); bytesRead = readStream.Read(buffer, 0, Length); } readStream.Close(); writeStream.Close(); }
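
    A minimal C# sketch, assuming the request body really is the FileReader.readAsDataURL() result: that is Base64 text with a "data:image/jpeg;base64," prefix, not raw JPEG bytes, so it has to be decoded on the server before being written to disk. The class and method names are illustrative.

      using System;
      using System.IO;

      public static class FotoUpload
      {
          // Decodes a data-URL body into raw JPEG bytes and writes it to targetPath.
          public static void SaveDataUrl(Stream fileContent, string targetPath)
          {
              string body;
              using (var reader = new StreamReader(fileContent))
              {
                  body = reader.ReadToEnd();   // "data:image/jpeg;base64,/9j/4AAQ..."
              }
              int comma = body.IndexOf(',');
              string base64 = comma >= 0 ? body.Substring(comma + 1) : body;
              File.WriteAllBytes(targetPath, Convert.FromBase64String(base64));
          }
      }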

    Read the article

  • Bluetooth txt file byte increases????

    - by cheesebunz
    Hi everyone, I am able to transfer files from one mobile device to another. When the sender sends this text file of 8 bytes, the receiving end gets a 256-byte txt file, and when I open the contents of the txt file, there is my data plus a lot of square boxes. Here is my code from the sender: string fileName = @"SendTest.txt"; System.Uri uri = new Uri("obex://" + selectedAddr + "/" + System.IO.Path.GetFileName(fileName)); ObexWebRequest request = new ObexWebRequest(uri); Stream requestStream = request.GetRequestStream(); FileStream fs = File.OpenRead(fileName); byte[] buffer = new byte[1024]; int readBytes = 1; while (readBytes != 0) { readBytes = fs.Read(buffer,0, buffer.Length); requestStream.Write(buffer,0, readBytes); } requestStream.Close(); ObexWebResponse response = (ObexWebResponse)request.GetResponse(); MessageBox.Show(response.StatusCode.ToString()); response.Close(); Does anyone know how I can solve this?

    Read the article

  • c++ File input/output

    - by Myx
    Hi: I am trying to read from a file using fgets and sscanf. In my file, I have characters on each line of the file, which I wish to put into a vector. So far, I have the following: FILE *fp; fp = fopen(filename, "r"); if(!fp) { fprintf(stderr, "Unable to open file %s\n", filename); return 0; } // Read file int line_count = 0; char buffer[1024]; while(fgets(buffer, 1023, fp)) { // Increment line counter line_count++; char *bufferp = buffer; ... while(*bufferp != '\n') { char *tmp; if(sscanf(bufferp, "%c", tmp) != 1) { fprintf(stderr, "Syntax error reading axiom on " "line %d in file %s\n", line_count, filename); return 0; } axiom.push_back(tmp); printf("put %s in axiom vector\n", axiom[axiom.size()-1]); // increment buffer pointer bufferp++; } } My axiom vector is defined as vector<char *> axiom;. When I run my program, I get a seg fault. It happens when I do the sscanf. Any suggestions on what I'm doing wrong?
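
    A minimal C++ sketch of one way to read the characters, assuming the goal is a std::vector<char> holding every character on each line: %c needs the address of a char, not an uninitialized char*, which is what causes the crash in the code above, and for single characters no sscanf is needed at all.

      #include <cstdio>
      #include <vector>

      bool read_axiom(const char *filename, std::vector<char> &axiom)
      {
          std::FILE *fp = std::fopen(filename, "r");
          if (!fp) return false;
          char buffer[1024];
          while (std::fgets(buffer, sizeof(buffer), fp)) {
              for (const char *p = buffer; *p != '\0' && *p != '\n'; ++p) {
                  axiom.push_back(*p);   // store the character itself, not a pointer
              }
          }
          std::fclose(fp);
          return true;
      }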

    Read the article

  • Low Level Console Input

    - by Soulseekah
    I'm trying to send commands to the input of a cmd.exe application using the low level read/write console functions. I have no trouble reading the text (scraping) using the ReadConsole...() and WriteConsole() functions after having attached to the process console, but I've not figured out how to write for example "dir" and have the console interpret it as a sent command. Here's a bit of my code: CreateProcess(NULL, "cmd.exe", NULL, NULL, FALSE, CREATE_NEW_CONSOLE, NULL, NULL, &si, &pi); AttachConsole(pi.dwProcessId); strcpy(buffer, "dir"); WriteConsole(GetStdHandle(STD_INPUT_HANDLE), buffer, strlen(buffer), &charRead, NULL); STARTUPINFO attributes of the process are all set to zero, except, of course, the .cb attribute. Nothing changes on the screen; however, I'm getting an Error 6: Invalid Handle returned from WriteConsole on STD_INPUT_HANDLE. If I write to (STD_OUTPUT_HANDLE) I do get my dir written on the screen, but of course nothing happens. I'm guessing SetConsoleMode() might be of help, but I've tried many mode combinations and nothing helped. I've also created a quick console application that waits for input (scanf()) and echoes back whatever goes in; that didn't work either. I've also tried typing into the scanf() prompt and then peeking into the input buffer using PeekConsoleInput(); it returns 0, and the INPUT_RECORD array is empty. I'm aware that there is another way around this using WriteConsoleInput() to directly inject INPUT_RECORD structured events into the console, but this would be way too long, as I'd have to send each keypress into it. I hope the question is clear. Please let me know if you need any further information. Thanks for your help.
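
    For completeness, a minimal Win32 sketch of the other common route, which is not the console API the question asks about: give cmd.exe a redirected standard input and write the command text (with a trailing CRLF to "press Enter") to the pipe. Error checks are omitted and the 2-second wait is arbitrary.

      #include <windows.h>
      #include <cstring>

      int main()
      {
          SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, TRUE };    // inheritable pipe handles
          HANDLE inRead, inWrite;
          CreatePipe(&inRead, &inWrite, &sa, 0);
          SetHandleInformation(inWrite, HANDLE_FLAG_INHERIT, 0);  // keep the write end private

          STARTUPINFOA si = { sizeof(si) };
          si.dwFlags = STARTF_USESTDHANDLES;
          si.hStdInput = inRead;                                  // child reads commands from the pipe
          si.hStdOutput = GetStdHandle(STD_OUTPUT_HANDLE);
          si.hStdError = GetStdHandle(STD_ERROR_HANDLE);
          PROCESS_INFORMATION pi;
          char cmdLine[] = "cmd.exe";
          CreateProcessA(NULL, cmdLine, NULL, NULL, TRUE, 0, NULL, NULL, &si, &pi);

          const char *command = "dir\r\n";                        // CRLF submits the command
          DWORD written;
          WriteFile(inWrite, command, (DWORD)strlen(command), &written, NULL);

          WaitForSingleObject(pi.hProcess, 2000);
          CloseHandle(inWrite);
          CloseHandle(inRead);
          CloseHandle(pi.hThread);
          CloseHandle(pi.hProcess);
          return 0;
      }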

    Read the article

  • Modify existing struct alignment in Visual C++

    - by Crend King
    Is there a way to modify the member alignment of an existing struct in Visual C++? Here is the background: I use a 3rd-party library, which uses several structs. To fill up the structs, I pass the address of the struct instance to some functions. Unfortunately, the functions only return an unaligned buffer, so the data of some members is always wrong. /Zp is not an option, since it breaks the other parts of the program. I know #pragma pack modifies the alignment of the following struct, but I do not want to copy the structs into my code, since the definitions in the library might change in the future. Sample code: test.h: struct am_aligned { BYTE data1[10]; ULONG data2; }; test.cpp: #include "test.h" // typedef alignment(1) struct am_aligned am_unaligned; int APIENTRY wWinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPTSTR lpCmdLine, int nCmdShow) { char buffer[20] = {}; for (int i = 0; i < sizeof(unaligned_struct); i++) { buffer[i] = i; } am_aligned instance = *(am_aligned*) buffer; return 0; } instance.data2 is 0x0f0e0d0c, while 0x0d0c0b0a is desired. The commented line does not work of course. Thanks for the help!

    Read the article

  • problem with kCFSocketReadCallBack

    - by zp26
    Hello. I have a problem with my program. I created a socket with kCFSocketReadCallBack. My intention was to call AcceptCallback only when a string arrives on the socket. Instead, my program does not just accept the connection: it always goes into startReceive, gets stuck there, and sometimes crashes. Can anybody help? Thanks readSocket = CFSocketCreateWithNative( NULL, fd, kCFSocketReadCallBack, AcceptCallback, &context ); static void AcceptCallback(CFSocketRef s, CFSocketCallBackType type, CFDataRef address, const void *data, void *info) // Called by CFSocket when someone connects to our listening socket. // This implementation just bounces the request up to Objective-C. { ServerVistaController * obj; #pragma unused(address) // assert(address == NULL); assert(data != NULL); obj = (ServerVistaController *) info; assert(obj != nil); #pragma unused(s) assert(s == obj->listeningSocket); if (type & kCFSocketAcceptCallBack){ [obj acceptConnection:*(int *)data]; } if (type & kCFSocketAcceptCallBack){ [obj startReceive:*(int *)data]; } } -(void)startReceive:(int)fd { CFReadStreamRef readStream = NULL; CFIndex bytes; UInt8 buffer[MAXLENGTH]; CFStreamCreatePairWithSocket( kCFAllocatorDefault, fd, &readStream, NULL); if(!readStream){ close(fd); [self updateLabel:@"No readStream"]; } CFReadStreamOpen(readStream); [self updateLabel:@"OpenStream"]; bytes = CFReadStreamRead( readStream, buffer, sizeof(buffer)); if (bytes < 0) { [self updateLabel:(NSString*)buffer]; close(fd); } CFReadStreamClose(readStream); }
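
    A minimal Objective-C sketch, assuming fd is the native listening socket: a kCFSocketReadCallBack on a listener only says that a connection is waiting, it never accepts it, whereas creating the CFSocket with kCFSocketAcceptCallBack makes CFSocket accept() the connection itself and pass the connected child handle to the callback in data, which is what acceptConnection:/startReceive: actually need.

      readSocket = CFSocketCreateWithNative(NULL, fd, kCFSocketAcceptCallBack,
                                            AcceptCallback, &context);

      static void AcceptCallback(CFSocketRef s, CFSocketCallBackType type,
                                 CFDataRef address, const void *data, void *info)
      {
          if (type == kCFSocketAcceptCallBack) {
              CFSocketNativeHandle child = *(CFSocketNativeHandle *)data;   // the accepted connection
              ServerVistaController *obj = (ServerVistaController *)info;
              [obj startReceive:child];    // read from the child socket, not the listener
          }
      }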

    Read the article

  • How to back up using the backup APIs in C++

    - by user1603185
    I am writing an application that backs up specified files, using the backup API calls, i.e. the CreateFile, BackupRead and WriteFile APIs. I am getting "Access violation reading location" errors. I have attached the code below. #include <windows.h> int main() { HANDLE hInput, hOutput; //m_filename is a variable holding the file path to read from hInput = CreateFile(L"C:\\Key.txt", GENERIC_READ, 0, NULL, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL); //strLocation contains the path of the file I want to create. hOutput= CreateFile(L"C:\\tmp\\", GENERIC_WRITE, NULL, NULL, CREATE_ALWAYS, NULL, NULL); DWORD dwBytesToRead = 1024 * 1024 * 10; BYTE *buffer; buffer = new BYTE[dwBytesToRead]; BOOL bReadSuccess = false,bWriteSuccess = false; DWORD dwBytesRead,dwBytesWritten; LPVOID lpContext; //Now comes the important bit: do { bReadSuccess = BackupRead(hInput, buffer, sizeof(BYTE) *dwBytesToRead, &dwBytesRead, false, true, &lpContext); bWriteSuccess= WriteFile(hOutput, buffer, sizeof(BYTE) *dwBytesRead, &dwBytesWritten, NULL); }while(dwBytesRead == dwBytesToRead); return 0; } Can anyone suggest how to use these APIs? Thanks.
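
    A minimal C++ sketch, assuming the aim is a straight copy of the backup stream into a regular file: the context pointer must start out as NULL (the uninitialized lpContext above is a likely cause of the access violation), the loop should stop when BackupRead reports zero bytes, a final call with bAbort = TRUE releases the context, and the output handle has to point at a file, not a directory. The output file name Key.bak is an assumption.

      #include <windows.h>

      int main()
      {
          HANDLE hInput = CreateFileW(L"C:\\Key.txt", GENERIC_READ, FILE_SHARE_READ, NULL,
                                      OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL);
          HANDLE hOutput = CreateFileW(L"C:\\tmp\\Key.bak", GENERIC_WRITE, 0, NULL,
                                       CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
          if (hInput == INVALID_HANDLE_VALUE || hOutput == INVALID_HANDLE_VALUE)
              return 1;

          const DWORD chunk = 1024 * 1024;
          BYTE *buffer = new BYTE[chunk];
          LPVOID context = NULL;                 // must be NULL before the first BackupRead call
          DWORD bytesRead = 0, bytesWritten = 0;

          while (BackupRead(hInput, buffer, chunk, &bytesRead, FALSE, TRUE, &context)
                 && bytesRead > 0) {
              WriteFile(hOutput, buffer, bytesRead, &bytesWritten, NULL);
          }
          BackupRead(hInput, NULL, 0, &bytesRead, TRUE, FALSE, &context);   // release the context

          delete[] buffer;
          CloseHandle(hInput);
          CloseHandle(hOutput);
          return 0;
      }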

    Read the article

  • How to limit TCP writes to a particular size and then block until the data is read

    - by ustulation
    {Qt 4.7.0, VS 2010} I have a server written in Qt and a 3rd-party client executable. The Qt-based server uses the QTcpServer and QTcpSocket facilities (non-blocking). Going through articles on TCP, I understand the following: the original TCP specification defines the negotiable window size as a 16-bit value, so the maximum is 65535 bytes, but implementations often use the RFC window scale extension, which lets the sliding window be scaled by bit-shifting up to a maximum of 1 gigabyte; this is implementation-defined. This could result in very different window sizes at the receiver and sender ends, as the server uses Qt facilities without hardcoding any window size limit. The client first asks for all the information it can, based on the previous messages from the server, before handling the new (accumulating) incoming messages. So at some point the server receives a lot of messages, each asking for several MB of data. The server processes these and puts the results into the send buffer. The client, however, cannot handle the messages at the same pace, and it seems that the client's receive buffer is far smaller (65535 bytes, maybe) than the sender's transmit window size. The messages would thus accumulate at the sender's end until the sender's buffer is full too, after which TCP writes on the sender would block. That blocking never happens, though, because the sender's buffer is much larger, so the backlog manifests as growing memory consumption on the sender's end. To prevent this, I used the Qt socket's waitForBytesWritten() with the timeout set to -1 for an infinite waiting period. From the observed behaviour, this blocks the thread writing TCP data until the data has actually been accepted by the receiver's window (which happens once earlier messages have been processed by the client at the application level). This has made memory consumption at the server end almost negligible. Is there a better alternative to this (in Qt) if I want to restrict the memory consumption at the server end to, say, x MB? Also, please point out if any of my understanding is incorrect.
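
    A minimal Qt sketch of one alternative, assuming the server keeps a per-client queue of replies: instead of blocking on waitForBytesWritten(-1), hand data to the socket only while its own unsent buffer (bytesToWrite()) is below a cap, and resume from a slot connected to the bytesWritten(qint64) signal. The 2 MB cap and the pendingData queue are assumptions for illustration.

      #include <QTcpSocket>
      #include <QQueue>
      #include <QByteArray>

      static const qint64 kMaxUnsent = 2 * 1024 * 1024;   // assumed per-client cap

      // Queue replies to the socket only while its unsent buffer is under the cap;
      // call this again from a slot connected to bytesWritten(qint64) to resume.
      void trySend(QTcpSocket *socket, QQueue<QByteArray> &pendingData)
      {
          while (!pendingData.isEmpty() && socket->bytesToWrite() < kMaxUnsent) {
              socket->write(pendingData.dequeue());
          }
      }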

    Read the article

  • GetLongPathName Undeclared

    - by iwizardpro
    When I try to compile my code with the function GetLongPathName(), the compiler tells me that the function is undeclared. I have already read the MSDN documentation located @ http://msdn.microsoft.com/en-us/library/aa364980%28VS.85%29.aspx. But, even though I included those header files, I am still getting the undeclared function error. Which header file(s) am I supposed to include when using the function? #include <Windows.h> #include <WinBase.h> #define DLLEXPORT extern "C" __declspec(dllexport) DLLEXPORT char* file_get_long(char* path_original) { long length = 0; TCHAR* buffer = NULL; if(!path_original) { return "-10"; } length = GetLongPathName(path_original, NULL, 0); if(length == 0) { return "-10"; } buffer = new TCHAR[length]; length = GetLongPathName(path_original, buffer, length); if(length == 0) { return "-10"; } return buffer; } And, if it makes a difference, I am currently compiling using Dev-C++ on a Windows Vista 64-bit.
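
    A minimal C++ sketch, assuming the MinGW headers that ship with Dev-C++: there, GetLongPathName is only declared when _WIN32_WINNT is at least 0x0500, so the macro has to be defined before <windows.h> is included; calling GetLongPathNameA explicitly also sidesteps the TCHAR/char* mismatch in the original function.

      #define _WIN32_WINNT 0x0500   // expose Windows 2000+ APIs in the MinGW headers
      #include <windows.h>

      char *file_get_long(const char *path_original)
      {
          DWORD length = GetLongPathNameA(path_original, NULL, 0);   // required size incl. NUL
          if (length == 0) return NULL;
          char *buffer = new char[length];
          if (GetLongPathNameA(path_original, buffer, length) == 0) {
              delete[] buffer;
              return NULL;
          }
          return buffer;   // caller frees with delete[]
      }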

    Read the article

  • Strange compiler complaint when using similar code

    - by Jason
    For a project, I have to ask the user for a file name, and I'm reading it in character by character using getchar. From main, I call the function char *coursename= introPrint(); //start off to print the usage directions and get the first bit of input. That function is defined as char *introPrint(){ int size= 20; int c; int length=0; char buffer[size]; //instructions printout, cut for brevity //get coursename from user and return it while ( (c=getchar()) != EOF && (c != '\n') ){ buffer[length++]= c; if (length==size-1) break; } buffer[length]=0; return buffer; } This is basically identical to code I wrote to ask the user for input, replace the character echo with asterisks, and then print out the results. Here, though, I'm getting a function returns address of local variable warning for the return statement. So why do I get no warnings from the other program, but this code triggers one?
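
    A minimal C sketch of one fix, assuming the string should be given storage that outlives the function: the caller supplies the buffer (the function could equally malloc one), so no address of a stack array is ever returned and the warning goes away.

      #include <stdio.h>

      /* Reads up to size-1 characters of the course name into dest and NUL-terminates it. */
      char *read_coursename(char *dest, size_t size)
      {
          size_t length = 0;
          int c;
          while ((c = getchar()) != EOF && c != '\n' && length < size - 1) {
              dest[length++] = (char)c;
          }
          dest[length] = '\0';
          return dest;
      }

      /* usage: char coursename[20]; read_coursename(coursename, sizeof coursename); */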

    Read the article

  • Java: Inputting text from a file using split

    - by 00PS
    I am inputting an adjacency list for a graph. There are three columns of data (vertex, destination, edge) separated by a single space. Here is my implementation so far: FileStream in = new FileStream("input1.txt"); Scanner s = new Scanner(in); String buffer; String [] line = null; while (s.hasNext()) { buffer = s.nextLine(); line = buffer.split("\\s+"); g.add(line[0]); System.out.println("Added vertex " + line[0] + "."); g.addEdge(line[0], line[1], Integer.parseInt(line[2])); System.out.println("Added edge from " + line[0] + " to " + line[1] + " with a weight of " + Integer.parseInt(line[2]) + "."); } System.out.println("Size of graph = " + g.size()); Here is the output: Added vertex a. Added edge from a to b with a weight of 9. Exception in thread "main" java.lang.NullPointerException at structure5.GraphListDirected.addEdge(GraphListDirected.java:93) at Driver.main(Driver.java:28) I was under the impression that line = buffer.split("\\s+"); would return a 2 dimensional array of Strings to the variable line. It seemed to work the first time but not the second. Any thoughts? I would also like some feedback on my implementation of this problem. Is there a better way? Anything to help out a novice! :)
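
    A minimal Java sketch, assuming the structure5 Graph interface's contains() method: split() returns a one-dimensional String[] for each line, and the likely cause of the NullPointerException in addEdge is that only line[0] was ever added as a vertex, so the destination line[1] does not exist in the graph yet.

      while (s.hasNext()) {
          String[] line = s.nextLine().trim().split("\\s+");
          if (line.length < 3) continue;             // skip blank or malformed rows
          if (!g.contains(line[0])) g.add(line[0]);  // source vertex
          if (!g.contains(line[1])) g.add(line[1]);  // destination vertex must exist too
          g.addEdge(line[0], line[1], Integer.parseInt(line[2]));
      }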

    Read the article

  • How to get past HTTP authentication in PowerShell and then use a web service (Lotus/Domino)

    - by user1716616
    We have a Domino/Lotus web service here that I want to use from PowerShell. The problem is that in front of the web service the Lotus admin requires HTTP authentication. How can I use this web service? Here is what I tried: first scrape the login page and get the cookie. $url = "http://xxxxxxx/names.nsf?Login" $CookieContainer = New-Object System.Net.CookieContainer $postData = "Username=web.services&Password=jesuisunestar" $buffer = [text.encoding]::ascii.getbytes($postData) [net.httpWebRequest] $req = [net.webRequest]::create($url) $req.method = "POST" $req.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8" $req.Headers.Add("Accept-Language: en-US") $req.Headers.Add("Accept-Encoding: gzip,deflate") $req.Headers.Add("Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7") $req.AllowAutoRedirect = $false $req.ContentType = "application/x-www-form-urlencoded" $req.ContentLength = $buffer.length $req.TimeOut = 50000 $req.KeepAlive = $true $req.Headers.Add("Keep-Alive: 300"); $req.CookieContainer = $CookieContainer $reqst = $req.getRequestStream() $reqst.write($buffer, 0, $buffer.length) $reqst.flush() $reqst.close() [net.httpWebResponse] $res = $req.getResponse() $resst = $res.getResponseStream() $sr = new-object IO.StreamReader($resst) $result = $sr.ReadToEnd() This seems to work, but now I have no idea how to use the cookie with a web service proxy. PS: I managed to get this working with C# + Visual Studio (the class reference is auto-built and I don't understand half of it, but it lets me set .CookieContainer on the generated web service).
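
    A minimal PowerShell sketch, assuming the SOAP proxy generated by New-WebServiceProxy derives from SoapHttpClientProtocol (which, as far as I know, exposes a CookieContainer property); the WSDL URL and method name are placeholders. If fetching the WSDL itself sits behind the same login, download it once by hand and point -Uri at the local copy.

      # Build the proxy and reuse the session cookie obtained by the login POST above.
      $proxy = New-WebServiceProxy -Uri "http://xxxxxxx/mydb.nsf/MyService?wsdl" -Namespace Domino
      $proxy.CookieContainer = $CookieContainer
      # $proxy.SomeWebMethod("argument")   # hypothetical method exposed by the WSDL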

    Read the article

  • how to tune BufferedInputStream read()?

    - by technomax
    I am reading a BLOB column from an Oracle database, then writing it to a file as follows: public static int execute(String filename, BLOB blob) { int success = 1; try { File blobFile = new File(filename); FileOutputStream outStream = new FileOutputStream(blobFile); BufferedInputStream inStream = new BufferedInputStream(blob.getBinaryStream()); int length = -1; int size = blob.getBufferSize(); byte[] buffer = new byte[size]; while ((length = inStream.read(buffer)) != -1) { outStream.write(buffer, 0, length); outStream.flush(); } inStream.close(); outStream.close(); } catch (Exception e) { e.printStackTrace(); System.out.println("ERROR(img_exportBlob) Unable to export:"+filename); success = 0; } } The file size is around 3 MB and it takes 40-50 s to read the buffer. It's actually 3D image data. So, is there any way I can reduce this time?
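
    A minimal Java sketch, assuming the main local costs are the per-iteration flush() and small writes: flush once at the end and wrap the file in a BufferedOutputStream. The 64 KB buffer size is an assumption to experiment with, and if most of the 40-50 s is spent fetching the LOB over the network, the JDBC fetch settings matter more than this loop.

      import java.io.BufferedInputStream;
      import java.io.BufferedOutputStream;
      import java.io.FileOutputStream;
      import java.io.IOException;
      import java.io.InputStream;

      class BlobExport {
          // Copies a BLOB stream to a file with one flush at the end instead of one per chunk.
          static void copyToFile(InputStream blobStream, String filename) throws IOException {
              byte[] chunk = new byte[64 * 1024];   // assumed size; tune as needed
              BufferedInputStream in = new BufferedInputStream(blobStream, chunk.length);
              BufferedOutputStream out = new BufferedOutputStream(new FileOutputStream(filename), chunk.length);
              int length;
              while ((length = in.read(chunk)) != -1) {
                  out.write(chunk, 0, length);      // no flush() inside the loop
              }
              out.flush();
              out.close();
              in.close();
          }
      }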

    Read the article

  • High Load mysql on Debian server stops every day. Why?

    - by Oleg Abrazhaev
    I have Debian server with 32 gb memory. And there is apache2, memcached and nginx on this server. Memory load always on maximum. Only 500m free. Most memory leak do MySql. Apache only 70 clients configured, other services small memory usage. When mysql use all memory it stops. And nothing works, need mysql reboot. Mysql configured use maximum 24 gb memory. I have hight weight InnoDB bases. (400000 rows, 30 gb). And on server multithread daemon, that makes many inserts in this tables, thats why InnoDB. There is my mysql config. [mysqld] # # * Basic Settings # default-time-zone = "+04:00" user = mysql pid-file = /var/run/mysqld/mysqld.pid socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp language = /usr/share/mysql/english skip-external-locking default-time-zone='Europe/Moscow' # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. # # * Fine Tuning # #low_priority_updates = 1 concurrent_insert = ALWAYS wait_timeout = 600 interactive_timeout = 600 #normal key_buffer_size = 2024M #key_buffer_size = 1512M #70% hot cache key_cache_division_limit= 70 #16-32 max_allowed_packet = 32M #1-16M thread_stack = 8M #40-50 thread_cache_size = 50 #orderby groupby sort sort_buffer_size = 64M #same myisam_sort_buffer_size = 400M #temp table creates when group_by tmp_table_size = 3000M #tables in memory max_heap_table_size = 3000M #on disk open_files_limit = 10000 table_cache = 10000 join_buffer_size = 5M # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP #myisam_use_mmap = 1 max_connections = 200 thread_concurrency = 8 # # * Query Cache Configuration # #more ignored query_cache_limit = 50M query_cache_size = 210M #on query cache query_cache_type = 1 # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. #log = /var/log/mysql/mysql.log # # Error logging goes to syslog. This is a Debian improvement :) # # Here you can see queries with especially long duration log_slow_queries = /var/log/mysql/mysql-slow.log long_query_time = 1 log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. #server-id = 1 #log_bin = /var/log/mysql/mysql-bin.log server-id = 1 log-bin = /var/lib/mysql/mysql-bin #replicate-do-db = gate log-bin-index = /var/lib/mysql/mysql-bin.index log-error = /var/lib/mysql/mysql-bin.err relay-log = /var/lib/mysql/relay-bin relay-log-info-file = /var/lib/mysql/relay-bin.info relay-log-index = /var/lib/mysql/relay-bin.index binlog_do_db = 24avia expire_logs_days = 10 max_binlog_size = 100M read_buffer_size = 4024288 innodb_buffer_pool_size = 5000M innodb_flush_log_at_trx_commit = 2 innodb_thread_concurrency = 8 table_definition_cache = 2000 group_concat_max_len = 16M #binlog_do_db = gate #binlog_ignore_db = include_database_name # # * BerkeleyDB # # Using BerkeleyDB is now discouraged as its support will cease in 5.1.12. #skip-bdb # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! # You might want to disable InnoDB to shrink the mysqld process by circa 100MB. #skip-innodb # # * Security Features # # Read the manual, too, if you want chroot! 
# chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 500M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 32M key_buffer_size = 512M # # * NDB Cluster # # See /usr/share/doc/mysql-server-*/README.Debian for more information. # # The following configuration is read by the NDB Data Nodes (ndbd processes) # not from the NDB Management Nodes (ndb_mgmd processes). # # [MYSQL_CLUSTER] # ndb-connectstring=127.0.0.1 # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. # !includedir /etc/mysql/conf.d/ Please, help me make it stable. Memory used /etc/mysql # free total used free shared buffers cached Mem: 32930800 32766424 164376 0 139208 23829196 -/+ buffers/cache: 8798020 24132780 Swap: 33553328 44660 33508668 Maybe my problem not in memory, but MySQL stops every day. As you can see, cache memory free 24 gb. Thank to Michael Hampton? for correction. Load overage on server 3.5. Maybe hdd or another problem? Maybe my config not optimal for 30gb InnoDB ? I'm already try mysqltuner and tunung-primer.sh , but they marked all green. Mysqltuner output mysqltuner >> MySQLTuner 1.0.1 - Major Hayden <[email protected]> >> Bug reports, feature requests, and downloads at http://mysqltuner.com/ >> Run with '--help' for additional options and output filtering -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.5.24-9-log [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in MyISAM tables: 112G (Tables: 1528) [--] Data in InnoDB tables: 39G (Tables: 340) [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17) [!!] Total fragmented tables: 344 -------- Performance Metrics ------------------------------------------------- [--] Up for: 8h 18m 33s (14M q [478.333 qps], 259K conn, TX: 9B, RX: 5B) [--] Reads / Writes: 84% / 16% [--] Total buffers: 10.5G global + 81.1M per thread (200 max threads) [OK] Maximum possible memory usage: 26.3G (83% of installed RAM) [OK] Slow queries: 1% (259K/14M) [!!] Highest connection usage: 100% (201/200) [OK] Key buffer size / total MyISAM indexes: 1.5G/5.6G [OK] Key buffer hit rate: 100.0% (6B cached / 1M reads) [OK] Query cache efficiency: 74.3% (8M cached / 11M selects) [OK] Query cache prunes per day: 0 [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 247K sorts) [!!] Joins performed without indexes: 106025 [!!] Temporary tables created on disk: 49% (351K on disk / 715K total) [OK] Thread cache hit rate: 99% (249 created / 259K connections) [!!] Table cache hit rate: 15% (2K open / 13K opened) [OK] Open file limit used: 15% (3K/20K) [OK] Table locks acquired immediately: 99% (4M immediate / 4M locks) [!!] 
InnoDB data size / buffer pool: 39.4G/5.9G -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance MySQL started within last 24 hours - recommendations may be inaccurate Reduce or eliminate persistent connections to reduce connection usage Adjust your join queries to always utilize indexes Temporary table size is already large - reduce result set size Reduce your SELECT DISTINCT queries without LIMIT clauses Increase table_cache gradually to avoid file descriptor limits Variables to adjust: max_connections (> 200) wait_timeout (< 600) interactive_timeout (< 600) join_buffer_size (> 5.0M, or always use indexes with joins) table_cache (> 10000) innodb_buffer_pool_size (>= 39G) Mysql primer output -- MYSQL PERFORMANCE TUNING PRIMER -- - By: Matthew Montgomery - MySQL Version 5.5.24-9-log x86_64 Uptime = 0 days 8 hrs 20 min 50 sec Avg. qps = 478 Total Questions = 14369568 Threads Connected = 16 Warning: Server has not been running for at least 48hrs. It may not be safe to use these recommendations To find out more information on how each of these runtime variables effects performance visit: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html Visit http://www.mysql.com/products/enterprise/advisors.html for info about MySQL's Enterprise Monitoring and Advisory Service SLOW QUERIES The slow query log is enabled. Current long_query_time = 1.000000 sec. You have 260626 out of 14369701 that take longer than 1.000000 sec. to complete Your long_query_time seems to be fine BINARY UPDATE LOG The binary update log is enabled Binlog sync is not enabled, you could loose binlog records during a server crash WORKER THREADS Current thread_cache_size = 50 Current threads_cached = 45 Current threads_per_sec = 0 Historic threads_per_sec = 0 Your thread_cache_size is fine MAX CONNECTIONS Current max_connections = 200 Current threads_connected = 11 Historic max_used_connections = 201 The number of used connections is 100% of the configured maximum. 
You should raise max_connections INNODB STATUS Current InnoDB index space = 214 M Current InnoDB data space = 39.40 G Current InnoDB buffer pool free = 0 % Current innodb_buffer_pool_size = 5.85 G Depending on how much space your innodb indexes take up it may be safe to increase this value to up to 2 / 3 of total system memory MEMORY USAGE Max Memory Ever Allocated : 23.46 G Configured Max Per-thread Buffers : 15.84 G Configured Max Global Buffers : 7.54 G Configured Max Memory Limit : 23.39 G Physical Memory : 31.40 G Max memory limit seem to be within acceptable norms KEY BUFFER Current MyISAM index space = 5.61 G Current key_buffer_size = 1.47 G Key cache miss rate is 1 : 5578 Key buffer free ratio = 77 % Your key_buffer_size seems to be fine QUERY CACHE Query cache is enabled Current query_cache_size = 200 M Current query_cache_used = 101 M Current query_cache_limit = 50 M Current Query cache Memory fill ratio = 50.59 % Current query_cache_min_res_unit = 4 K MySQL won't cache query results that are larger than query_cache_limit in size SORT OPERATIONS Current sort_buffer_size = 64 M Current read_rnd_buffer_size = 256 K Sort buffer seems to be fine JOINS Current join_buffer_size = 5.00 M You have had 106606 queries where a join could not use an index properly You have had 8 joins without keys that check for key usage after each row join_buffer_size >= 4 M This is not advised You should enable "log-queries-not-using-indexes" Then look for non indexed joins in the slow query log. OPEN FILES LIMIT Current open_files_limit = 20210 files The open_files_limit should typically be set to at least 2x-3x that of table_cache if you have heavy MyISAM usage. Your open_files_limit value seems to be fine TABLE CACHE Current table_open_cache = 10000 tables Current table_definition_cache = 2000 tables You have a total of 1910 tables You have 2151 open tables. The table_cache value seems to be fine TEMP TABLES Current max_heap_table_size = 2.92 G Current tmp_table_size = 2.92 G Of 366426 temp tables, 49% were created on disk Perhaps you should increase your tmp_table_size and/or max_heap_table_size to reduce the number of disk-based temporary tables Note! BLOB and TEXT columns are not allow in memory tables. If you are using these columns raising these values might not impact your ratio of on disk temp tables. TABLE SCANS Current read_buffer_size = 3 M Current table scan ratio = 2846 : 1 read_buffer_size seems to be fine TABLE LOCKING Current Lock Wait ratio = 1 : 185 You may benefit from selective use of InnoDB. If you have long running SELECT's against MyISAM tables and perform frequent updates consider setting 'low_priority_updates=1'

    Read the article
