Search Results

Search found 7188 results on 288 pages for 'frame buffer'.

Page 245/288 | < Previous Page | 241 242 243 244 245 246 247 248 249 250 251 252  | Next Page >

  • How to display a text message from a cellphone in a textbox

    - by Jiggy Degamo
    I am building an automated polling system that works over SMS, but I am only a beginner at serial communication. I found some code on the net and used it as my guide: it should receive a simple text message and display it in the textbox. The port connects, but the problem is that nothing is displayed in the textbox. The code is below; please help me figure out why nothing is displayed, and please point me to a site that has a tutorial on serial communication using VB.NET.

        Imports System.IO.Ports

        Public Class Form1
            Dim inputData As String = ""
            Private WithEvents serialPort1 As New IO.Ports.SerialPort
            Public Event DataReceived As IO.Ports.SerialDataReceivedEventHandler

            Private Sub Form1_FormClosed(ByVal sender As Object, ByVal e As System.Windows.Forms.FormClosedEventArgs) Handles Me.FormClosed
                ' Close the Serial Port
                serialPort1.Close()
            End Sub

            Private Sub Form1_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
                'Set values for some properties
                serialPort1.PortName = "COM10"
                serialPort1.BaudRate = 2400
                serialPort1.Parity = IO.Ports.Parity.None
                serialPort1.DataBits = 8
                serialPort1.StopBits = IO.Ports.StopBits.One
                serialPort1.Handshake = IO.Ports.Handshake.None
                serialPort1.RtsEnable = True
                ' Open the Serial Port
                serialPort1.Open()
                'Writes data to the Serial Port output buffer
                If serialPort1.IsOpen = True Then
                    serialPort1.Write("MicroCommand")
                End If
            End Sub

            ' Show received data on UI controls and do something
            Public Sub DoUpdate()
                TextBox1.Text = TextBox1.Text & inputData
            End Sub

            ' Receive data from the Serial Port
            Private Sub serialPort1_DataReceived(ByVal sender As Object, ByVal e As System.IO.Ports.SerialDataReceivedEventArgs) Handles serialPort1.DataReceived
                inputData = serialPort1.ReadExisting 'or serialPort1.ReadLine
                Me.Invoke(New EventHandler(AddressOf DoUpdate))
            End Sub
        End Class

    Read the article

  • Scheduling algorithm optimized to execute during low usage periods.

    - by The Rook
    Let's say there is a web application serving mostly one country. Because of normal sleep habits, website traffic follows a sine wave where one period lasts 24 hours and the lowest part of the wave is at about midnight. Is there a scheduling algorithm optimized to execute during low-usage periods? I am thinking of this as a liquid that is "poured into" the sine wave to flatten out resource usage; an ideal algorithm would take the integral of this empty space. If the same tasks need to run daily, the resources consumed by previous executions could be used to predict future usage by looking at the rate at which resource usage is increasing. Knowing the amount of resources required, the algorithm could fill in this empty space while leaving as much buffer as possible on either side, so that its interference is reduced as much as possible. It would also be possible to detect that there aren't enough resources before execution begins, which opens the door for a cloud to help out. Does anything like this exist? Or should I build it into an existing scheduler like Quartz and make it open source?
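
    As an illustration of the "fill the empty space" idea (this is not from the original post, and the hour-by-hour load forecast used here is a made-up assumption): given a predicted load value per hour and a job that needs a known number of consecutive hours, a scheduler can pick the start hour that minimizes the integral of predicted load over the job's window. A minimal C++ sketch:

        #include <array>
        #include <cmath>
        #include <cstddef>
        #include <iostream>

        // Pick the start hour (0-23) whose window of `durationHours` consecutive
        // hours has the smallest total predicted load.  `load[h]` is an assumed
        // forecast (e.g. averaged from previous days); wrapping past midnight is allowed.
        std::size_t cheapestStartHour(const std::array<double, 24>& load,
                                      std::size_t durationHours)
        {
            std::size_t bestStart = 0;
            double bestCost = 1e300;
            for (std::size_t start = 0; start < 24; ++start) {
                double cost = 0.0;
                for (std::size_t i = 0; i < durationHours; ++i)
                    cost += load[(start + i) % 24];   // discrete "integral" of load
                if (cost < bestCost) {
                    bestCost = cost;
                    bestStart = start;
                }
            }
            return bestStart;
        }

        int main()
        {
            // Hypothetical sine-like daily traffic profile with its trough at midnight.
            const double pi = 3.14159265358979;
            std::array<double, 24> load = {};
            for (std::size_t h = 0; h < 24; ++h)
                load[h] = 100.0 - 80.0 * std::cos(2.0 * pi * h / 24.0);

            std::cout << "Start a 3-hour job at hour "
                      << cheapestStartHour(load, 3) << std::endl;
        }

    Leaving a safety buffer on either side of the chosen window, as the post suggests, amounts to searching with a padded duration and then starting in the middle of the padded window.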

    Read the article

  • UITableView with dynamic cell heights -- what do I need to do to fix scrolling down?

    - by Ian Terrell
    I am building a teensy tiny little Twitter client on the iPhone. Naturally, I'm displaying the tweets in a UITableView, and they are of course of varying lengths. I'm dynamically changing the height of the cell based on the text quite fine: - (CGFloat)heightForTweetCellWithString:(NSString *)text { CGFloat height = Buffer + [text sizeWithFont:Font constrainedToSize:Size lineBreakMode:LineBreakMode].height; return MAX(height, MinHeight); } - (CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath { NSString *text = // get tweet text for this indexpath return [self heightForTweetCellWithString:text]; } } I'm displaying the actual tweet cell using the algorithm in the PragProg book: - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"TweetCell"; TweetCell *cell = (TweetCell *)[tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [self createNewTweetCellFromNib]; } cell.tweet.text = // tweet text // set other labels, etc return cell; } When I boot up, all the tweets visible display just fine. However, when I scroll down, the tweets below are quite mussed up -- it appears that once a cell has scrolled off the screen, the cell height for the one above it gets resized to be larger than it should be, and obscures part of the cell below it. When the cell reaches the top of the view, it resets itself and renders properly. Scrolling up presents no difficulties. Here is a video that shows this in action: http://screencast.com/t/rqwD9tpdltd I've tried quite a bit already: resizing the cell's frame on creation, using different identifiers for cells with different heights (i.e. [NSString stringWithFormat:@"Identifier%d", rowHeight]), changing properties in Interface Builder... If there are additional code snippets I can post, please let me know. Thanks in advance for your help!

    Read the article

  • Boost ASIO async_write "Vector iterator not dereferencable"

    - by xeross
    Hey, I've been working on an async Boost server program, and so far I've got it to connect. However, I'm now getting a "Vector iterator not dereferencable" error. I suspect the vector gets destroyed or dereferenced before the packet gets sent, which causes the error.

        void start()
        {
            Packet packet;
            packet.setOpcode(SMSG_PING);
            send(packet);
        }

        void send(Packet packet)
        {
            cout << "DEBUG> Transferring packet with opcode " << packet.GetOpcode() << endl;
            async_write(m_socket, buffer(packet.write()),
                        boost::bind(&Session::writeHandler, shared_from_this(),
                                    placeholders::error, placeholders::bytes_transferred));
        }

        void writeHandler(const boost::system::error_code& errorCode, size_t bytesTransferred)
        {
            cout << "DEBUG> Transfered " << bytesTransferred << " bytes to "
                 << m_socket.remote_endpoint().address().to_string() << endl;
        }

    start() gets called once a connection is made, and packet.write() returns a uint8_t vector. Would it matter if I changed void send(Packet packet) to void send(Packet& packet)? Not in relation to this problem, but performance-wise.
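
    The usual cause of this kind of crash is that async_write only begins the write: the temporary vector returned by packet.write() (and the stack-allocated Packet) can be destroyed before the bytes are actually sent. A common pattern, shown here as a hedged sketch rather than the poster's actual classes (the question's Packet/SMSG_PING types are not reproduced), is to keep the serialized bytes alive in a shared_ptr that the completion handler itself holds; the Session is assumed to be managed by a shared_ptr, as shared_from_this() in the question implies:

        #include <boost/asio.hpp>
        #include <boost/bind.hpp>
        #include <boost/enable_shared_from_this.hpp>
        #include <boost/shared_ptr.hpp>
        #include <stdint.h>
        #include <vector>

        class Session : public boost::enable_shared_from_this<Session> {
        public:
            explicit Session(boost::asio::io_service& io) : m_socket(io) {}

            // Copy the serialized bytes into a shared_ptr and bind that pointer
            // into the handler, so the buffer outlives the asynchronous write.
            void sendBytes(const std::vector<uint8_t>& bytes)
            {
                boost::shared_ptr<std::vector<uint8_t> > data(
                    new std::vector<uint8_t>(bytes));
                boost::asio::async_write(
                    m_socket, boost::asio::buffer(*data),
                    boost::bind(&Session::writeHandler, shared_from_this(), data,
                                boost::asio::placeholders::error,
                                boost::asio::placeholders::bytes_transferred));
            }

        private:
            void writeHandler(boost::shared_ptr<std::vector<uint8_t> > /*data*/,
                              const boost::system::error_code& /*error*/,
                              std::size_t /*bytesTransferred*/)
            {
                // `data` is released only here, after the write has completed.
            }

            boost::asio::ip::tcp::socket m_socket;
        };

    Passing the Packet by const reference into send() would avoid a copy, but it does not fix the lifetime problem; only keeping the serialized buffer alive until the handler fires does.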

    Read the article

  • Sending files using Winsock - optimal send() data length?

    - by Meta
    I am using Winsock with non-blocking sockets to send a file to a client. The way I'm doing it right now is that I read a chunk of 8192 bytes from the file, and then loop until all of it successfully goes through send() (obviously handling WSAEWOULDBLOCK as it occurs). I then move on and read the next 8192 bytes, and so on... Although I can use any other number than 8192 when I test the transfer on my local machine, once I try it over a network, it seems like 8191 is the largest number I can use. When I try to use any number higher than 8191 (starting with 8192), the file transfer becomes extremely slow (about 5 times slower). Is there any reason why 8191 is so special? I've done some more testing and it turns out that using 8000 is slightly faster (by 0.5%). If you understand why 8191 is so special, can you tell me if there is a number better than the others (better than 8000)? I have a feeling that it has something to do with the fact that the default send buffer allocated to the socket by Winsock is 8KB, but I don't understand why. It might also have something to do with the Nagle algorithm, but again, I'm not sure how. Note that I have not modified the SO_SNDBUF option nor the TCP_NODELAY option. Or am I doing this all wrong? What's the best way of sending a file over a non-blocking socket?
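
    For reference, the following is general Winsock usage rather than code from the question: the default send buffer the poster mentions can be inspected (and enlarged) with getsockopt/setsockopt on SO_SNDBUF, and a non-blocking send loop has to cope with send() accepting only part of a chunk. A hedged sketch of one common approach:

        #include <winsock2.h>
        #include <cstdio>
        // link with ws2_32.lib

        // Push one chunk through a non-blocking socket, handling partial sends
        // and WSAEWOULDBLOCK by waiting for writability.  Error handling is abbreviated.
        bool sendChunk(SOCKET s, const char* data, int len)
        {
            int sent = 0;
            while (sent < len) {
                int n = send(s, data + sent, len - sent, 0);
                if (n > 0) {
                    sent += n;                                   // partial send is normal
                } else if (n == SOCKET_ERROR &&
                           WSAGetLastError() == WSAEWOULDBLOCK) {
                    fd_set writable;                             // socket buffer full:
                    FD_ZERO(&writable);                          // wait until writable
                    FD_SET(s, &writable);
                    if (select(0, NULL, &writable, NULL, NULL) == SOCKET_ERROR)
                        return false;
                } else {
                    return false;                                // real error
                }
            }
            return true;
        }

        void showSendBuffer(SOCKET s)
        {
            int size = 0;
            int optlen = sizeof(size);
            if (getsockopt(s, SOL_SOCKET, SO_SNDBUF, (char*)&size, &optlen) == 0)
                std::printf("SO_SNDBUF is %d bytes\n", size);
        }

    This does not by itself explain the 8191-versus-8192 difference, which is usually discussed in terms of Nagle's algorithm interacting with delayed ACKs; the sketch only shows how to inspect and work with the buffer the question mentions.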

    Read the article

  • What is an efficient strategy for multiple threads posting jobs and waiting for response from a single thread?

    - by jakewins
    In java, what is an efficient solution to the following problem: I have multiple threads (10-20 or so) generating jobs ("Job Creators"), and a single thread capable of performing them ("The worker"). Once a job creator has posted a job, it should wait for the job to finish, yielding no result other than "it's done", before it keeps going. For sending the jobs to the worker thread, I think a ring buffer or similar standard fan-in setup would perhaps be a good approach? But for a Job Creator to find out that her job has been done, I'm not so sure.. The job creators could sleep, and the worker interrupt them when done.. Or each job creator could have an atomic boolean that it checks, and that the worker sets. I dunno, neither of those feel very nice. I'd like to do it with as few (none, if possible) locks as absolutely possible. So to be clear: What I'm looking for is speed, not necessarily simplicity. Does anyone have any suggestions? Links to reading about concurrency strategies would also be very welcome!

    Read the article

  • iPhone OpenGL ES Texture2D Masking

    - by Robert Neagu
    What's the best choice when trying to mask a texture, like ColorSplash or other apps like iSteam, etc.? I started learning OpenGL ES about 4 days ago (I'm a total rookie) and tried the following approach: 1) I created a colored texture2D, a grayscale version of the first texture, and a third texture2D called mask. 2) I also created a texture2D for the brush, which is grayscale and opaque (brush = black = 0,0,0,1 and surroundings = white = 1,1,1,1). My intention was to create an antialiased brush with smooth edges, but I'm fine with a normal one right now. 3) I searched for masking techniques on the internet and found a tutorial about masking (ZeusCMD - Design and Development Tutorials: OpenGL ES Programming Tutorials - Masking). The tutorial tells me to use blending to achieve masking: first draw the colored texture, then the mask with glBlendFunc(GL_DST_COLOR, GL_ZERO), and then the grayscale with glBlendFunc(GL_ONE, GL_ONE). This gives me something close to what I want, but not exactly: the result is masked, but it's somehow over-brightened. 4) For drawing to the mask texture I used an extra frame buffer object (FBO). I'm not really happy with the resulting (over-brightened) image, nor with the speed achieved with this method. I think the normal way would be to draw directly into the grayscale (overlay) texture2D, affecting only its alpha channel in the places where the brush hits. Is there a fast way to achieve this? I have searched a lot and never got an answer that's clear and understandable. Then, in the main draw loop, I could simply draw the colored texture and blend the grayscale on top with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). I just want to learn to use OpenGL ES and it's driving me nuts because I can't get it to work properly. Any advice, or a link to a tutorial, would be much appreciated.
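
    For what it's worth, the blend-based pass order the linked tutorial describes boils down to something like the following. This is only a schematic sketch of that technique written against the fixed-function GL ES 1.x API, not the app's code; drawTex is a hypothetical callback standing in for whatever textured-quad drawing the app already does:

        #include <OpenGLES/ES1/gl.h>

        /* drawTex is assumed to bind the given texture and draw a screen-aligned quad. */
        void drawMaskedScene(void (*drawTex)(GLuint),
                             GLuint colorTex, GLuint maskTex, GLuint grayTex)
        {
            /* Pass 1: the colored image, no blending. */
            glDisable(GL_BLEND);
            drawTex(colorTex);

            /* Pass 2: multiply the framebuffer by the mask (dst = dst * src). */
            glEnable(GL_BLEND);
            glBlendFunc(GL_DST_COLOR, GL_ZERO);
            drawTex(maskTex);

            /* Pass 3: add the grayscale layer on top (dst = dst + src). */
            glBlendFunc(GL_ONE, GL_ONE);
            drawTex(grayTex);

            glDisable(GL_BLEND);
        }

    One commonly suggested route to the alpha-only approach the question describes is to attach the overlay texture to an FBO and render the brush with glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE) so that only the alpha channel is written, then composite with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); whether that is faster depends on the device.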

    Read the article

  • HTTP Handler error when downloading files - SSL

    - by Chiefy
    OK, big problem, as this is affecting two projects on our new server. We have a file that is downloaded by users; the files are downloaded using an HttpHandler. Since moving the site to the server and setting up SSL, the downloads have stopped working and we get the error message "Unable to download DownloadDocument.ashx from site". DownloadDocument.ashx is the handler page that is set in the web.config, and the button that goes there is a hyperlink with the id of the document as a querystring. I've read the article at http://support.microsoft.com/kb/316431 and read a few other requests on this site, but nothing seems to be working. This problem only happens in IE, and it works fine when I run it on the server over http instead of https.

        public override void HandleRequest(HttpContext context)
        {
            Guid guid = new Guid(context.Request.QueryString["ID"]);
            DataTable dt = Documents.GetDocument(guid);
            if (dt != null)
            {
                context.Response.Cache.SetCacheability(HttpCacheability.Private);
                context.Response.AddHeader("content-disposition", string.Format("attachment; filename={0}", dt.Rows[0]["DocumentName"].ToString()));
                context.Response.AddHeader("Content-Transfer-Encoding", "binary");
                context.Response.AddHeader("Content-Length", ((byte[])dt.Rows[0]["Document"]).Length.ToString());
                context.Response.ContentType = string.Format("application/{0}", dt.Rows[0]["Extension"].ToString().Remove(0, 1));
                context.Response.Buffer = true;
                context.Response.BinaryWrite((byte[])dt.Rows[0]["Document"]);
                context.Response.Flush();
                context.Response.End();
            }
        }

    The above is my current code for the request. I've used the base handler from http://haacked.com/archive/2005/03/17/AnAbstractBoilerplateHttpHandler.aspx. Any ideas on what this might be and how we can fix it? Thanks in advance for all responses.

    Read the article

  • Set Renderbuffer Width and Height (OpenGL ES)

    - by Josh Elsasser
    I'm currently experiencing an issue with an OpenGL ES renderbuffer where the backing width and height are both set to 15. Is there any way to set them to 320 and 480? My project is built on Apple's EAGLView class and ES1Renderer, but I've moved it from the app delegate to a controller. I also moved the CADisplayLink outside of it (I update my game logic with the timestamp from this). Any help would be greatly appreciated. I add the glview to the window as follows:

        CGRect applicationFrame = [[UIScreen mainScreen] applicationFrame];
        [window addSubview:gameController.glview];
        [window makeKeyAndVisible];

    I synthesize the controller and the glview within it. The EAGLView and renderer are otherwise unmodified. Renderer initialization:

        // Get the layer
        CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
        eaglLayer.opaque = TRUE;
        eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
            [NSNumber numberWithBool:FALSE], kEAGLDrawablePropertyRetainedBacking,
            kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];
        renderer = [[ES1Renderer alloc] init];

    Renderer "resize from layer" method:

        - (BOOL)resizeFromLayer:(CAEAGLLayer *)layer
        {
            // Allocate color buffer backing based on the current layer size
            glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer);
            [context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:layer];
            glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
            glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
            NSLog(@"Backing Width:%i and Height: %i", backingWidth, backingHeight);

            if (glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES)
            {
                NSLog(@"Failed to make complete framebuffer object %x", glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES));
                return NO;
            }
            return YES;
        }

    Read the article

  • Endianness and C APIs: specifically OpenSSL

    - by Hassan Syed
    I have an algorithm that uses the following OpenSSL calls:

        HMAC_Update() / HMAC_Final()           // ripemd160
        EVP_CipherUpdate() / EVP_CipherFinal() // cbc_blowfish

    These functions take an unsigned char * to the "plain text". My input data comes from a C++ std::string::c_str(), which originates from a protocol buffer object as an encoded UTF-8 string. UTF-8 strings are meant to be endian neutral. However, I'm a bit paranoid about how OpenSSL may perform operations on the data. My understanding is that encryption algorithms work on 8-bit blocks of data, and if an unsigned char * is used for pointer arithmetic when the operations are performed, the algorithms should be endian neutral and I do not need to worry about anything. My uncertainty is compounded by the fact that I am working on a little-endian machine and have never done any real cross-architecture programming. My reasoning is based on the following two properties: std::string (not wstring) internally uses an 8-bit pointer, and the resulting c_str() pointer will iterate the same way regardless of the CPU architecture; and encryption algorithms are, either by design or by implementation, endian neutral. I know the best way to get a definitive answer is to use QEMU and do some cross-platform unit tests (which I plan to do). My question is a request for comments on my reasoning, and perhaps it will assist other programmers faced with similar problems.
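
    The reasoning above can be checked directly: the OpenSSL HMAC interface consumes a byte buffer, so two machines of different endianness hashing the same UTF-8 bytes produce the same MAC; endianness only enters the picture if multi-byte integers are fed in without first fixing their byte order. A small sketch (not the poster's code; the key and message are made up) using the one-shot HMAC() with RIPEMD-160:

        #include <openssl/evp.h>
        #include <openssl/hmac.h>
        #include <arpa/inet.h>   /* htonl */
        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            const unsigned char key[] = "secret-key";
            const char *utf8 = "caf\xC3\xA9";   /* UTF-8 bytes: identical on any host */

            unsigned char md[EVP_MAX_MD_SIZE];
            unsigned int md_len = 0;

            /* Hashing a byte string: endian-neutral by construction. */
            HMAC(EVP_ripemd160(), key, (int)(sizeof(key) - 1),
                 (const unsigned char *)utf8, strlen(utf8), md, &md_len);
            for (unsigned int i = 0; i < md_len; ++i)
                printf("%02x", md[i]);
            printf("\n");

            /* Where endianness WOULD matter: a raw uint32_t has different byte
               layouts on big- and little-endian hosts.  Serialize to a fixed
               order (here: network order) before hashing integer fields. */
            uint32_t value = 0x01020304;
            uint32_t wire = htonl(value);
            HMAC(EVP_ripemd160(), key, (int)(sizeof(key) - 1),
                 (const unsigned char *)&wire, sizeof(wire), md, &md_len);
            return 0;
        }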

    Read the article

  • Output did not redirect to the intended file

    - by rockyurock
    Hello all, I used the code below to capture the output (shown under "output" below) in a file "my_output.txt", but it failed to capture it.

        **************output***************
        inside value loop
        ------------------------------------------------------------
        Server listening on UDP port 5001
        Receiving 1470 byte datagrams
        UDP buffer size: 108 KByte (default)
        ------------------------------------------------------------
        [  3] local 192.168.16.2 port 5001 connected with 192.168.16.1 port 3189
        [ ID] Interval       Transfer     Bandwidth        Jitter   Lost/Total Datagrams
        [  3]  0.0- 5.0 sec  2.14 MBytes  3.61 Mbits/sec   0.369 ms    0/ 1528 (0%)
        inside value loop3
        clue1
        clue2
        inside value loop4
        one iperf completed
        ***************************************

    However, when I enabled the local *STDOUT; line in the code below, I could see the above output on the command prompt display (of course, the server is sending some data). Could anybody suggest how I can capture the output in the intended file? Below is the code I am using:

        my $file = 'my_output.txt';
        use Win32::Process;
        print "inside value loop\n";
        # redirect stdout to a file
        #local *STDOUT;
        open STDOUT, '>', $file or die "can't redirect STDOUT to <$file> $!";
        Win32::Process::Create(my $ProcessObj,
            "D:\\IOT_AUTOMATION_UTILITY\\_SATURDAY_09-04-10\\adb_cmd.bat",
            "adb shell /data/app/iperf -u -s -p 5001",
            0, NORMAL_PRIORITY_CLASS, ".") || die ErrorReport();
        #$alarm_time = $IPERF_RUN_TIME+10; #20sec
        #$ProcessObj->Wait(40);
        #print"inside value loop2\n";
        #sleep $alarm_time;
        sleep 40;
        $ProcessObj->Kill(0);

        sub ErrorReport{
            print Win32::FormatMessage( Win32::GetLastError() );
        }

    /rocky

    Read the article

  • Use PermGen space or roll-my-own intern method?

    - by Adamski
    I am writing a codec to process messages sent over TCP using a bespoke wire protocol. During the decode process I create a number of Strings, BigDecimals and dates. The client-server access patterns mean that it is common for the client to issue a request and then decode thousands of response messages, which results in a large number of duplicate Strings, BigDecimals, etc. Therefore I have created an InternPool<T> class allowing me to intern each class of object. Internally, the pool uses a WeakHashMap<T, WeakReference<T>>. For example:

        InternPool<BigDecimal> pool = new InternPool<BigDecimal>();
        ...
        // Read BigDecimal from in buffer and then intern.
        BigDecimal quantity = pool.intern(readBigDecimal(in));

    My question: I am using InternPool for BigDecimal, but should I consider also using it for String instead of String's intern() method, which I believe uses PermGen space? What is the advantage of using PermGen space?

    Read the article

  • Life Scope of Temporary Variable

    - by Yan Cheng CHEOK
    Consider the following program:

        #include <cstdio>
        #include <string>

        void fun(const char* c) {
            printf("--> %s\n", c);
        }

        std::string get() {
            std::string str = "Hello World";
            return str;
        }

        int main() {
            const char *cc = get().c_str();
            // cc is not valid at this point, as it is pointing to the
            // temporary string's internal buffer, and the temporary string
            // has already been destroyed at this point.
            fun(cc);

            // But I am surprised this call yields a valid result.
            // It seems that the returned temporary string is valid within
            // scope (...)
            // My understanding is that scope means {...}
            // Is this valid behavior guaranteed by the C++ standard? Or does it
            // depend on your compiler vendor's implementation?
            fun(get().c_str());

            getchar();
        }

    The output is:

        -->
        --> Hello World

    Hello, may I know whether the correct behavior is guaranteed by the C++ standard, or whether it depends on your compiler vendor's implementation? I have tested this under VC2008 and VC6, and it behaves the same for both.
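
    For the record, this is standard C++ temporary-lifetime behavior rather than anything compiler-specific: a temporary lives until the end of the full expression in which it was created, so passing get().c_str() directly into fun() is fine, while stashing the pointer in cc and using it in a later statement is undefined. A small sketch of the safe patterns, reusing the same fun() and get() as above:

        #include <cstdio>
        #include <string>

        void fun(const char* c) { std::printf("--> %s\n", c); }

        std::string get() { return std::string("Hello World"); }

        int main() {
            // OK: the temporary returned by get() is destroyed only at the end
            // of the full expression, i.e. after fun() has returned.
            fun(get().c_str());

            // OK: copy the temporary into a named object first; its lifetime is
            // then the enclosing block, so the pointer stays valid.
            std::string s = get();
            const char* cc = s.c_str();
            fun(cc);

            // Also OK: binding the temporary to a reference-to-const extends its
            // lifetime to the lifetime of the reference (C++03 [class.temporary]).
            const std::string& r = get();
            fun(r.c_str());

            return 0;
        }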

    Read the article

  • ADO.NET DataTable/DataRow Thread Safety

    - by Allen E. Scharfenberg
    Introduction A user reported to me this morning that he was having an issue with inconsistent results (namely, column values sometimes coming out null when they should not be) of some parallel execution code that we provide as part of an internal framework. This code has worked fine in the past and has not been tampered with lately, but it got me to thinking about the following snippet: Code Sample lock (ResultTable) { newRow = ResultTable.NewRow(); } newRow["Key"] = currentKey; foreach (KeyValuePair<string, object> output in outputs) { object resultValue = output.Value; newRow[output.Name] = resultValue != null ? resultValue : DBNull.Value; } lock (ResultTable) { ResultTable.Rows.Add(newRow); } (No guarantees that that compiles, hand-edited to mask proprietery information.) Explanation We have this cascading type of locking code other places in our system, and it works fine, but this is the first instance of cascading locking code that I have come across that interacts with ADO .NET. As we all know, members of framework objects are usually not thread safe (which is the case in this situation), but the cascading locking should ensure that we are not reading and writing to ResultTable.Rows concurrently. We are safe, right? Hypothesis Well, the cascading lock code does not ensure that we are not reading from or writing to ResultTable.Rows at the same time that we are assigning values to columns in the new row. What if ADO .NET uses some kind of buffer for assigning column values that is not thread safe--even when different object types are involved (DataTable vs. DataRow)? Has anyone run into anything like this before? I thought I would ask here at StackOverflow before beating my head against this for hours on end :) Conclusion Well, the consensus appears to be that changing the cascading lock to a full lock has resolved the issue. That is not the result that I expected, but the full lock version has not produced the issue after many, many, many tests. The lesson: be wary of cascading locks used on APIs that you do not control. Who knows what may be going on under the covers!

    Read the article

  • How do I build a filtered_streambuf based on basic_streambuf?

    - by swestrup
    I have a project that requires me to insert a filter into a stream so that outgoing data will be modified according to the filter. After some research, it seems that what I want to do is create a filtered_streambuf like this:

        template <class StreamBuf>
        class filtered_streambuf : public StreamBuf {
            ...
        };

    and then insert a filtered_streambuf<> into whichever stream I need to be filtered. My problem is that I don't know what invariants I need to maintain while filtering a stream in order to ensure that derived classes can work as expected. In particular, I may find I have filtered_streambufs built over other filtered_streambufs, and all the various stream inserters, extractors and manipulators should still work as expected. The trouble is that I just can't seem to work out the minimal interface I need to supply in order to guarantee that an iostream will have what it needs to work correctly. In particular, do I need to fake the movement of the protected pointer variables, or not? Do I need a fake data buffer, or not? Can I just override the public functions, rewriting them in terms of the base streambuf, or is that too simplistic?
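
    As a point of reference, here is a generic sketch of the usual output-filtering technique; it wraps a target streambuf rather than deriving from a template parameter as the question proposes, so it is an illustration of the minimal interface, not the poster's design. Overriding only the protected virtuals overflow() and sync() is enough for an unbuffered filter, because with no put area every character written to the stream arrives via overflow(); no setp()/setg() bookkeeping is strictly required.

        #include <cctype>
        #include <iostream>
        #include <streambuf>

        // Filters every character written to it (here: upper-casing) and
        // forwards the result to a wrapped "sink" streambuf.
        class upcase_filterbuf : public std::streambuf {
        public:
            explicit upcase_filterbuf(std::streambuf* sink) : sink_(sink) {}

        protected:
            virtual int_type overflow(int_type ch)
            {
                if (traits_type::eq_int_type(ch, traits_type::eof()))
                    return traits_type::not_eof(ch);
                char transformed =
                    static_cast<char>(std::toupper(static_cast<unsigned char>(ch)));
                return sink_->sputc(transformed) == traits_type::eof()
                           ? traits_type::eof()
                           : ch;
            }

            virtual int sync() { return sink_->pubsync(); }

        private:
            std::streambuf* sink_;
        };

        int main()
        {
            upcase_filterbuf filter(std::cout.rdbuf());
            std::ostream filtered(&filter);
            filtered << "hello, filtered world" << std::endl;   // prints upper-case
        }

    Because the filter holds a pointer to its sink, stacking is just construction order: another filter can wrap &filter the same way. Overriding xsputn() as well is an optional optimization, not a correctness requirement.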

    Read the article

  • Boost::Asio - Remove the "null" character at the end of TCP packets

    - by shump
    I'm trying to make a simple MSN client, mostly for fun but also for educational purposes. I started by trying some TCP packet sending and receiving using Boost Asio, as I want cross-platform support. I have managed to send a "VER" command and receive its response. However, after I send the following "CVR" command, Asio raises an "End of file" error. After some further research I found, by packet sniffing, that my TCP packets to the messenger server get an extra null character (ASCII code 0) at the end of the message. This means that my VER command gets an extra character at the end, which I don't think the messenger server likes, so it shuts down the connection when I try to read the CVR response. This is what my packet's payload looks like when sniffing it:

        (Hex:)  56 45 52 20 31 20 4d 53 4e 50 31 35 20 43 56 52 30 0a 0a 00
        (Char:) VER 1 MSNP15 CVR 0...

    and this is what Adium (a chat client for OS X) sends:

        (Hex:)  56 45 52 20 31 20 4d 53 4e 50 31 35 20 43 56 52 30 0d 0a
        (Char:) VER 1 MSNP15 CVR 0..

    So my question is whether there is any way to remove the null character at the end of each packet, or whether I've misunderstood something and am using Asio the wrong way. My write function (slightly edited) looks like this:

        int sendVERMessage()
        {
            boost::system::error_code ignored_error;
            char sendBuf[] = "VER 1 MSNP15 CVR0\r\n";
            boost::asio::write(socket, boost::asio::buffer(sendBuf),
                               boost::asio::transfer_all(), ignored_error);
            if(ignored_error)
            {
                cout << "Failed to send to host!" << endl;
                return 1;
            }
            cout << "VER message sent!" << endl;
            return 0;
        }

    And here's the main documentation on the MSN protocol I'm using. Hope I've been clear enough.
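
    The stray byte comes from constructing boost::asio::buffer(sendBuf) from a char array: the array includes the string literal's terminating '\0', so all 20 bytes go on the wire. A hedged sketch of the usual fixes (any one of the three options works; the connected tcp::socket is assumed, as in the question):

        #include <boost/asio.hpp>
        #include <cstring>
        #include <string>

        // Send a protocol line without the trailing '\0' that a whole-array
        // buffer would otherwise include.
        void sendLine(boost::asio::ip::tcp::socket& socket)
        {
            boost::system::error_code ec;
            const char sendBuf[] = "VER 1 MSNP15 CVR0\r\n";

            // Option 1: give the buffer an explicit length excluding the '\0'.
            boost::asio::write(socket,
                               boost::asio::buffer(sendBuf, std::strlen(sendBuf)),
                               boost::asio::transfer_all(), ec);

            // Option 2: equivalently, sizeof(sendBuf) - 1 for a string literal array.
            boost::asio::write(socket,
                               boost::asio::buffer(sendBuf, sizeof(sendBuf) - 1),
                               boost::asio::transfer_all(), ec);

            // Option 3: use std::string; its buffer() overload uses size(),
            // which never includes a NUL terminator.
            const std::string line = "VER 1 MSNP15 CVR0\r\n";
            boost::asio::write(socket, boost::asio::buffer(line),
                               boost::asio::transfer_all(), ec);
        }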

    Read the article

  • Drawing an image in Java, slow as hell on a netbook.

    - by Norswap
    In follow-up to my previous questions (especially this one: http://stackoverflow.com/questions/2684123/java-volatileimage-slower-than-bufferedimage), I have noticed that simply drawing an Image (it doesn't matter if it's buffered or volatile, since the computer has no accelerated memory*, and tests show it doesn't change anything) tends to be very slow.

        (*) System.out.println(GraphicsEnvironment.getLocalGraphicsEnvironment()
                .getDefaultScreenDevice().getAvailableAcceleratedMemory()); // --> 0

    How slow? For a 500x400 image, about 0.04 seconds. This is only drawing the image on the backbuffer (obtained via a buffer strategy). Now, considering that World of Warcraft runs on that netbook (though it is quite laggy) and that online Java games seem to have no problem whatsoever, this is quite thought-provoking. I'm quite certain I didn't miss something obvious; I've searched the web extensively, but nothing will do. So do any of you Java whizzes have an idea of what obscure problem might be causing this (or maybe it is normal, though I doubt it)? PS: As I'm writing this, I realize this might be caused by my Linux installation (Arch Linux), though I have the correct Intel driver. But my computer normally has an "Integrated Intel Graphics Media Accelerator 950", which would mean it should have accelerated video memory somehow. Any ideas about this side of things?

    Read the article

  • Determining if a value is in range with the 0 = 360 degree wrap-around problem

    - by Raven
    Hi, I am writing a piece of code for a DirectX app. Its purpose is to not show faces that are not visible. Normally this would just be done with the Z-buffer, but I'm doing many moves and rotations of the mesh, so I would like to skip them and save computing power. I will describe this with a cube. You are looking at it from the front, so you see just one face and you don't need to rotate the five that are left. If one side of the cube were made of 100*100 meshes, it would be great not to have to transform the 50k meshes that you really don't need. So I have stored the X, Y, Z rotation of the camera (the Z rotation I'm not using), and also the X, Y, Z rotation of the faces. In this simplified cube I would see the faces that make this statement true:

        cRot // camera rotation in degrees
        oRot // face rotation in degrees
        if(oRot.x > cRot.x-90 && oRot.x < cRot.x+90 &&
           oRot.y > cRot.y-90 && oRot.y < cRot.y+90)

    But there comes a problem. If I rotate around, the camera can get to a value of 330, for example. In this state, I would see the front and right sides of the cube. The right side has rotation 270, so that's all right in the IF statement. The problem is with the 0 rotation of the front face, which is also 360 degrees. So my question is how to make this statement work, because when I use modulo it fails for that right side, and in this way it won't work for 0 = 360.
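
    One standard way around the 0 = 360 wrap is not to compare absolute angles at all, but to compute the signed shortest difference between the two angles and test its magnitude; then 0, 360 and 720 all behave identically. A small sketch (not the poster's code; the 90-degree threshold is taken from the question):

        #include <cmath>
        #include <iostream>

        // Shortest signed difference between two angles in degrees, in (-180, 180].
        double angleDiff(double a, double b)
        {
            double d = std::fmod(a - b, 360.0);
            if (d <= -180.0) d += 360.0;
            if (d >   180.0) d -= 360.0;
            return d;
        }

        // A face is a drawing candidate if its rotation is within 90 degrees of
        // the camera's on both axes (the test from the question, wrap-safe).
        bool faceVisible(double faceX, double faceY, double camX, double camY)
        {
            return std::fabs(angleDiff(faceX, camX)) < 90.0 &&
                   std::fabs(angleDiff(faceY, camY)) < 90.0;
        }

        int main()
        {
            // Camera at 330 degrees: the front face (0 == 360) and the right
            // face (270) both pass, which the naive range test gets wrong.
            std::cout << faceVisible(0.0,   0.0, 330.0, 0.0) << "\n";  // 1
            std::cout << faceVisible(270.0, 0.0, 330.0, 0.0) << "\n";  // 1
            std::cout << faceVisible(180.0, 0.0, 330.0, 0.0) << "\n";  // 0
        }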

    Read the article

  • ASP Force Download

    - by Thomas Clayson
    In PHP I can do: header("Content-type: application/octet-stream") and then anything that I output is downloaded instead of showing in the browser. Is there a similar way to do this in ASP? I have seen about all the file streaming and such using ADODB.Stream, but that doesn't seem to work for me and always requires another file to load the content from. Bit of an ASP noob, so go easy on me. :p All I want to do is have a script that outputs a CSV and that will force download instead of showing in the browser. Thanks EDIT here is my script currently: reportingForce.aspx.vb Public Class reportingForce Inherits System.Web.UI.Page Dim FStream Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load Response.Buffer = True Response.ContentType = "application/octet-stream" Response.AddHeader("Content-disposition", "attachment; filename=" & Chr(34) & "my output file.csv" & Chr(34)) Response.Write("1,2,3,4,5" & vbCrLf) Response.Write("5,6,7,8,9" & vbCrLf) End Sub End Class reportingForce.aspx Hello,World

    Read the article

  • MySQL 5.5.8 Gets Periodic Lag

    - by CYREX
    Am using MySQL 5.5.8 on an Ubuntu system and every X amount of time it creates a huge lag that lasts a couple of seconds. Then all goes back to normal until the next lag. The time period varies but it looks like it happen periodically. Am using InnoDB. It is like hiccups in the MySQL. What could be creating this sort of periodic problem. Do not have any cron jobs or process running every time the X period happens. The X period could be between 30 minutes to 2 hours. So for example it could happen every 30 minutes for the next 12 hours or it could happen every 2 hours for the next 8 hours. key_buffer_size = 256M max_allowed_packet = 1M table_cache = 1024 table_open_cache = 1024 sort_buffer_size = 2M read_buffer_size = 2M read_rnd_buffer_size = 4M myisam_sort_buffer_size = 32M thread_cache_size = 128 query_cache_size= 128M log-slow-queries = slow.log long_query_time = 5 log-queries-not-using-indexes # Try number of CPU's*2 for thread_concurrency thread_concurrency = 4 max_connections=512 #innodb_data_file_path = ibdata1:10M:autoextend #innodb_log_group_home_dir = /usr/local/mysql/data # You can set .._buffer_pool_size up to 50 - 80 % # of RAM but beware of setting memory usage too high innodb_buffer_pool_size = 1G #innodb_additional_mem_pool_size = 20M # Set .._log_file_size to 25 % of buffer pool size #innodb_log_file_size = 64M #innodb_log_buffer_size = 8M #innodb_flush_log_at_trx_commit = 0 #innodb_lock_wait_timeout = 50 [mysqldump] quick max_allowed_packet = 16M [myisamchk] key_buffer_size = 64M sort_buffer_size = 64M read_buffer = 2M write_buffer = 2M There are about 200+ tables divided in 3 databases. The most written too is in InnoDB. The other ones are more read. Several of the tables in the InnoDB have more than 2 million records. The other databases top at about 400 thousand records and do not change so often. The PC is a Core 2 Duo 8400 with 4GB RAM, 32Bit Ubuntu.

    Read the article

  • Syncing two AS3 NetStreams

    - by Lowgain
    I'm writing an app that requires an audio stream to be recording while a backing track is played. I have this working, but there is an inconsistent gap in between playback and record starting. I don't know if I can do anything to make the sync perfect every time, so I've been trying to track what time each stream starts so I can calculate the delay and trim it server-side. This also has proved to be a challenge as no events seem to be sent when a connection starts (as far as I know). I've tried using various properties like the streams' buffer sizes, etc. I'm thinking now that as my recorded audio is only mono, I may be able to put some kind of 'control signal' on the second stereo track which I could use to determine exactly when a sound starts recording (or stick the whole backing track in that channel so I can sync them that way). This leaves me with the new problem of properly injecting this sound into the NetStream. If anyone has any idea whether or not any of these ideas will work, how to execute them, or some alternatives, that would be extremely helpful! Been working on this issue for awhile

    Read the article

  • recvfrom returns invalid argument when *from* is passed

    - by Aditya Sehgal
    I am currently writing a small UDP server program in Linux. The UDP server will receive packets from two different peers and will perform different operations based on which peer it received the packet from. I am trying to determine the source from which I receive the packet. However, when select returns and recvfrom is called, it returns with an error of Invalid Argument. If I pass NULL as the second-to-last argument, recvfrom succeeds. I have tried declaring fromAddr as struct sockaddr_storage, struct sockaddr_in, and struct sockaddr without any success. Is there something wrong with this code? Is this the correct way to determine the source of the packet? The code snippet follows:

        /* TODO: update for TCP. use recv */
        if((pkInfo->rcvLen = recvfrom(psInfo->sockFd,
                                      pkInfo->buffer,
                                      MAX_PKTSZ,
                                      0,
                                      /* (struct sockaddr*)&fromAddr, */
                                      NULL,
                                      &(addrLen))) < 0)
        {
            perror("RecvFrom failed\n");
        }
        else
        {
            /* Apply Filter */
        #if 0
            struct sockaddr_in* tmpAddr;
            tmpAddr = (struct sockaddr_in*)&fromAddr;
            printf("Received Msg From %s\n", inet_ntoa(tmpAddr->sin_addr));
        #endif
            printf("Packet Received of len = %d\n", pkInfo->rcvLen);
        }
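
    For comparison, here is a generic receive pattern rather than the poster's psInfo/pkInfo structures: recvfrom treats the address-length argument as a value-result parameter, so it has to be re-initialized to the size of the address buffer before every call, and the address pointer and length must be passed as a matching pair. A hedged sketch:

        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <stdio.h>
        #include <sys/socket.h>
        #include <sys/types.h>

        #define MAX_PKTSZ 1500

        /* Receive one datagram and report its source address. */
        ssize_t receiveOne(int sockFd, char *buf)
        {
            struct sockaddr_storage fromAddr;
            socklen_t addrLen = sizeof(fromAddr);   /* value-result: reset before each call */

            ssize_t rcvLen = recvfrom(sockFd, buf, MAX_PKTSZ, 0,
                                      (struct sockaddr *)&fromAddr, &addrLen);
            if (rcvLen < 0) {
                perror("recvfrom failed");
                return rcvLen;
            }

            if (fromAddr.ss_family == AF_INET) {
                char ip[INET_ADDRSTRLEN];
                struct sockaddr_in *v4 = (struct sockaddr_in *)&fromAddr;
                inet_ntop(AF_INET, &v4->sin_addr, ip, sizeof(ip));
                printf("Received %zd bytes from %s:%u\n",
                       rcvLen, ip, (unsigned)ntohs(v4->sin_port));
            }
            return rcvLen;
        }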

    Read the article

  • different thread accessing MemoryStream

    - by Wayne
    There's a bit of code which writes data to a MemoryStream object directly into its data buffer by calling GetBuffer(). It also uses and updates the Position and SetLength() properties appropriately. This code works 99.9999% of the time. Literally. Only once every so many hundreds of thousands of iterations will it barf. The specific problem is that the memory.Position property suddenly returns zero instead of the appropriate value. However, code was added that checks for the 0 and throws an exception, which includes a log of the MemoryStream properties like Position and Length, produced in a separate method. Those return the correct value. Further additions show that when this rare condition occurs, memory.Position is zero only inside this particular method. Okay. Obviously, this must be a threading issue. But this code is well locked. However, the nature of this software is that it's organized by "tasks" with a scheduler, and so any one of several actual OS threads may run this code at any given time--but never more than one at a time. So it's my guess that ordinarily the same thread keeps getting used for this method, and then on a rare occasion a different thread gets used. Then, due to compiler optimizations, the different thread never gets the correct value. It gets a "stale" value. Ordinarily in a situation like this I would apply the "volatile" keyword to the variable in question. But those variables are inside the MemoryStream object. Does anyone have any other idea? Or does this mean we have to implement our own MemoryStream object? (Just like we end up having to do with practically every collection in .NET?) It's a shame to have such an awesome platform as .NET and have virtually the entire system useless as-is for seriously parallelized applications. If I'm wrong or you have other ideas, please advise. Sincerely, Wayne

    Read the article

  • Make Errors: Missing Includes in C++ Script?

    - by Abs
    Hello all, I just got help with how to compile this program a few minutes ago on SO, but I have run into errors. I am only a beginner in C++ and have no idea what the errors below mean or how to fix them. This is the program in question. I have read the comments from some users suggesting they changed the #include parts, but mine seems to be exactly what the program has; see this comment.

        [root@localhost wkthumb]# qmake-qt4 && make
        g++ -c -pipe -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fstack-protector --param=ssp-buffer-size=4 -m32 -march=i386 -mtune=generic -fasynchronous-unwind-tables -Wall -W -D_REENTRANT -DQT_NO_DEBUG -DQT_GUI_LIB -DQT_CORE_LIB -I/usr/lib/qt4/mkspecs/linux-g++ -I. -I/usr/include/QtCore -I/usr/include/QtGui -I/usr/include -I. -I. -I. -o main.o main.cpp
        main.cpp:5:20: error: QWebView: No such file or directory
        main.cpp:6:21: error: QWebFrame: No such file or directory
        main.cpp:8: error: expected constructor, destructor, or type conversion before ‘*’ token
        main.cpp:11: error: ‘QWebView’ has not been declared
        main.cpp: In function ‘void loadFinished(bool)’:
        main.cpp:18: error: ‘view’ was not declared in this scope
        main.cpp:18: error: ‘QWebSettings’ has not been declared
        main.cpp:19: error: ‘QWebSettings’ has not been declared
        main.cpp:20: error: ‘QWebSettings’ has not been declared
        main.cpp: In function ‘int main(int, char**)’:
        main.cpp:42: error: ‘view’ was not declared in this scope
        main.cpp:42: error: expected type-specifier before ‘QWebView’
        main.cpp:42: error: expected `;' before ‘QWebView’
        make: *** [main.o] Error 1

    I have the WebKit packages on my Fedora Core 10 machine:

        qt-4.5.3-9.fc10.i386
        qt-devel-4.5.3-9.fc10.i386

    Thanks all for any help.
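
    The g++ line above only has QtCore and QtGui include paths, which is what happens when the QtWebKit module is not enabled in the qmake project file. As a hedged illustration (the .pro file name and the main() body here are assumptions, since the project file isn't shown), adding the webkit module and re-running qmake-qt4 makes the QWebView/QWebFrame headers visible:

        // In the qmake project file (e.g. wkthumb.pro -- name assumed), enable the
        // WebKit module so its include path and library get added:
        //
        //     QT += webkit
        //
        // then re-run:  qmake-qt4 && make
        //
        // With that in place, includes like these resolve:
        #include <QApplication>
        #include <QUrl>
        #include <QWebFrame>
        #include <QWebView>

        int main(int argc, char** argv)
        {
            QApplication app(argc, argv);
            QWebView view;                           // compiles once QtWebKit is enabled
            view.load(QUrl("http://www.example.com/"));
            view.show();
            return app.exec();
        }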

    Read the article
