Search Results

Search found 3140 results on 126 pages for 'stencil buffer'.

  • Redirecting exec output to a buffer or file

    - by devin
    I'm writing a C program where I fork(), exec(), and wait(). I'd like to capture the output of the program I exec'ed and write it to a file or buffer. For example, if I exec ls I want to write file1, file2, etc. to a buffer/file. I don't think there is a way to read stdout directly, so does that mean I have to use a pipe? Is there a general procedure here that I haven't been able to find?
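    A minimal sketch of the usual pipe-plus-dup2 approach, with error handling mostly omitted; the child's stdout is redirected into a pipe that the parent reads:

        // Sketch: capture the child's stdout through a pipe (POSIX).
        #include <stdio.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void) {
            int fds[2];
            pipe(fds);
            pid_t pid = fork();
            if (pid == 0) {                    /* child */
                close(fds[0]);                 /* unused read end */
                dup2(fds[1], STDOUT_FILENO);   /* stdout now feeds the pipe */
                close(fds[1]);
                execlp("ls", "ls", (char *)NULL);
                _exit(127);                    /* only reached if exec fails */
            }
            close(fds[1]);                     /* parent: unused write end */
            char buf[4096];
            ssize_t n;
            while ((n = read(fds[0], buf, sizeof buf)) > 0)
                fwrite(buf, 1, (size_t)n, stdout);  /* or append to a buffer/file */
            close(fds[0]);
            wait(NULL);
            return 0;
        }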

  • understanding z-buffer formats in DirectX

    - by numerical25
    A z-buffer is just a 2D array that records which object should be drawn in front of another. Each element in the array represents a pixel and holds a value from 0.0 to 1.0. My question is: if that is all a z-buffer does, then why are some buffers 16-bit, 24-bit, and 32-bit?
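    The bit count is the precision of those 0.0-1.0 values: a 16-bit buffer can only represent 65,536 distinct depth levels, so nearby surfaces may quantize to the same value and z-fight, while 24- and 32-bit buffers keep far more distinct levels. A small illustrative snippet (not any particular API's exact behavior):

        // Illustrative only: storing a depth in an N-bit integer buffer
        // snaps it to one of 2^N - 1 steps between 0.0 and 1.0.
        #include <math.h>

        float quantize_depth(float z, int bits) {      /* bits <= 32 */
            double steps = (double)((1ull << bits) - 1);
            return (float)(floor(z * steps) / steps);  /* nearby depths can collapse */
        }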

  • MFC: Reading entire file to buffer...

    - by deostroll
    I've meddled with some code but I am unable to read the entire file properly... a lot of junk gets appended to the output. How do I fix this?

        // wmfParser.cpp : Defines the entry point for the console application.

        #include "stdafx.h"
        #include "wmfParser.h"
        #include <cstring>

        #ifdef _DEBUG
        #define new DEBUG_NEW
        #endif

        // The one and only application object
        CWinApp theApp;

        using namespace std;

        int _tmain(int argc, TCHAR* argv[], TCHAR* envp[])
        {
            int nRetCode = 0;

            // initialize MFC and print an error on failure
            if (!AfxWinInit(::GetModuleHandle(NULL), NULL, ::GetCommandLine(), 0))
            {
                // TODO: change error code to suit your needs
                _tprintf(_T("Fatal Error: MFC initialization failed\n"));
                nRetCode = 1;
            }
            else
            {
                // TODO: code your application's behavior here.
                CFile file;
                CFileException exp;
                if( !file.Open( _T("c:\\sample.txt"), CFile::modeRead, &exp ) ){
                    exp.ReportError();
                    cout<<'\n';
                    cout<<"Aborting...";
                    system("pause");
                    return 0;
                }
                ULONGLONG dwLength = file.GetLength();
                cout<<"Length of file to read = " << dwLength << '\n';
                /*
                BYTE* buffer;
                buffer=(BYTE*)calloc(dwLength, sizeof(BYTE));
                file.Read(buffer, 25);
                char* str = (char*)buffer;
                cout<<"length of string : " << strlen(str) << '\n';
                cout<<"string from file: " << str << '\n';
                */
                char str[100];
                file.Read(str, sizeof(str));
                cout << "Data : " << str <<'\n';
                file.Close();
                cout<<"File was closed\n";
                //AfxMessageBox(_T("This is a test message box"));
                system("pause");
            }
            return nRetCode;
        }
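    The junk is characteristic of printing a byte buffer that was never null-terminated: neither CFile::Read call writes a trailing '\0', so operator<< and strlen run past the bytes actually read. A minimal sketch of the fix, reusing the question's file object (error handling abbreviated):

        // Sketch: read the whole file, then null-terminate before treating it as a C string.
        ULONGLONG len = file.GetLength();
        char* buf = new char[(size_t)len + 1];
        UINT bytesRead = file.Read(buf, (UINT)len);
        buf[bytesRead] = '\0';          // CFile::Read does not add a terminator
        cout << "Data : " << buf << '\n';
        delete[] buf;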

  • Win32 C/C++ Load Image from memory buffer

    - by Bruno
    I want to load an image (.bmp) file in a Win32 application, but I do not want to use the standard LoadBitmap/LoadImage from the Windows API: I want to load it from a buffer that is already in memory. I can easily load a bitmap directly from a file and draw it on the screen, but this issue has me stuck :( What I'm looking for is a function that works like this:

        HBITMAP LoadBitmapFromBuffer(char* buffer, int width, int height);

    Thanks.
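    A minimal sketch of one possibility, assuming the buffer holds raw pixel data rather than a complete .bmp file with headers; if it is a full .bmp file in memory, you would instead parse the BITMAPFILEHEADER/BITMAPINFOHEADER first and hand the pixel bits plus a matching BITMAPINFO to CreateDIBitmap:

        // Sketch: wrap raw pixel data already in memory in an HBITMAP.
        // Assumes 'buffer' is width*height*4 bytes of 32-bpp BGRA pixels
        // (an assumption, since the question does not say what the buffer contains).
        HBITMAP LoadBitmapFromBuffer(char* buffer, int width, int height)
        {
            return CreateBitmap(width, height, 1 /*planes*/, 32 /*bpp*/, buffer);
        }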

  • How to copy depth buffer to CPU memory in DirectX?

    - by Ashwin
    I have code in OpenGL that uses glReadPixels to copy the depth buffer to a CPU memory buffer:

        glReadPixels(0, 0, w, h, GL_DEPTH_COMPONENT, GL_FLOAT, dbuf);

    How do I achieve the same in DirectX? I have looked at a similar question which gives the solution to copy the RGB buffer. I've tried to write similar code to copy the depth buffer:

        IDirect3DSurface9* d3dSurface;
        d3dDevice->GetDepthStencilSurface(&d3dSurface);

        D3DSURFACE_DESC d3dSurfaceDesc;
        d3dSurface->GetDesc(&d3dSurfaceDesc);

        IDirect3DSurface9* d3dOffSurface;
        d3dDevice->CreateOffscreenPlainSurface(
            d3dSurfaceDesc.Width,
            d3dSurfaceDesc.Height,
            D3DFMT_D32F_LOCKABLE,
            D3DPOOL_SCRATCH,
            &d3dOffSurface,
            NULL);

        // FAILS: D3DERR_INVALIDCALL
        D3DXLoadSurfaceFromSurface(
            d3dOffSurface, NULL, NULL,
            d3dSurface, NULL, NULL,
            D3DX_FILTER_NONE, 0);

        // Copy from offscreen surface to CPU memory
        ...

    The code fails on the call to D3DXLoadSurfaceFromSurface. It returns the error value D3DERR_INVALIDCALL. What is wrong with my code?

  • Clarification On Write-Caching Policy, Its Underlying Options And How It Applies To Hard Drives And Solid-State Drives

    - by Boris_yo
    After doing more research on the subject last week, I started wondering about something I have been neglecting all these years: write-caching policy, which I always left on the default setting. Write-caching policy improves write performance and consists of write-back caching and write-cache buffer flushing. This is how I understand it all, but correct me if I have erred somewhere:

    Write-through caching is not part of the write-caching policy per se. Data is written to both the cache and the storage device, so if Windows needs that data again later, it is retrieved from the cache and not from the storage device. This means only read performance is improved, since there is no need to wait for the storage device to read the required data again. Because data is still written to the storage device, write performance is not improved, and there is no risk of data loss or corruption in case of a power failure or system crash; only the data in the cache is lost. This option seems to be enabled by default and is recommended for removable devices, with no need for the user to use "Safely Remove Hardware".

    Write-back caching is similar to the above, but data is not written to the storage device immediately; it is periodically released from the cache and written to the storage device when it is idle. In my opinion this option improves both read and write performance, but carries risk if a power failure or system crash occurs: not only can data that was eventually to be written to the storage device be lost, but the result can be file inconsistencies or a corrupted file system. Write-back caching cannot be enabled together with write-through caching, and it is not recommended if no backup power supply is available.

    Write-cache buffer flushing, I reckon, is similar to write-back caching but enables immediate release and writing of data from the cache to the storage device right before a power outage occurs, though I don't know if it also applies to the occasional system crash. This option seems complementary to write-back caching, reducing or potentially eliminating the risk of data loss or file system corruption.

    I have questions about the relevance of the last two options to today's modern SSDs, in order to get the best performance with less wear: I know that traditional hard drives come with onboard cache (I wonder what type of cache that is), but do SSDs also come with cache? Assuming they do, is this cache faster than their NAND flash and system RAM, and worth taking the risk of utilizing it by enabling write-back caching? I read somewhere that a storage device's cache is generally faster than RAM, but I want to be sure. Additionally, I read that write caching should be enabled, since data that is to be written later to NAND flash is kept for a while in the cache; if data gets modified a lot before finally being written, holding it and releasing it periodically reduces the number of writes to the SSD, thereby reducing wear.

    Now, regarding write-cache buffer flushing: I heard that SSD controllers are so fast by themselves that enabling this option is not required, because they manage flushing. However, once again, I don't know whether SSDs have their own onboard cache and whether or not it is faster than their NAND flash and system RAM, because if it is, keeping this option enabled would make sense.

    Recently I posted a question about an issue with my Intel 330 SSD 120GB, which was the main reason to do this deeper research, suspecting the write-caching policy to be the culprit of the SSD's freezing issue, on the assumption that the release of cached data is what causes the freezes. Currently I have write caching enabled and write-cache buffer flushing disabled, because I believe the SSD controller's management of write-cache flushing and Windows' write-cache buffer flushing conflict with each other. Since I want to troubleshoot in small steps to finally determine the source of the issue, I have decided to start with the write-caching policy, then move on to drivers, switching to AHCI later on and finally disabling DIPM (device-initiated power management) through registry modification, thanks to @TomWijsman.

  • PHP output buffer settings ignored by server

    - by Ecom Evolution
    I have been trying to flush the output of certain scripts to the browser on demand, but they do not work on our production server. For instance, I tried running the "Phoca Changing Collation tool" (find it on Google) and I don't see any output until the script finishes executing. I've tried immediately flushing the buffer on other scripts that work fine on any server but this one, using the following code:

        echo "something";
        ob_flush();
        flush();

    Setting "ob_implicit_flush(1);" doesn't help either. The server is Apache 2.2.21 with PHP 5.2.17 running on Linux. You can see our php.ini file here if that will help: http://www.smallfiles.org/download/1123/php.ini.html This isn't the only problem we are having with the server ignoring in-script directives. The server also ignores timeout coding such as:

        ini_set('max_execution_time', 900*60);

    AND

        set_time_limit(86400);

    The script always times out at the php.ini default. It doesn't seem to matter if the script is executed in IE or Firefox. Tried "ini_set('zlib.output_compression_level', 'Off');" and checked that it is "Off" in the php.ini file. The code "apache_setenv('no-gzip', 1);" causes a fatal error, so I tried uploading a .htaccess file with the "mod_gzip_on No" directive. Neither helps. Tried running Apache as fcgi and suphp, but same results. Server is NOT in safe mode. Pullin ma hair out!

  • High data on recv-q buffer and thread lock on java.io.BufferedInputStream in linux

    - by Sagar Patel
    We have a Java application running on Linux (Ubuntu Server). We have been facing a high recv-q problem for quite some time. The application hangs and does not read data from the socket every few hours. In the thread dump, we have found the stack trace below:

        "Receiver-146" daemon prio=10 tid=0x00007fb3fc010000 nid=0x7642 runnable [0x00007fb5906c5000]
           java.lang.Thread.State: RUNNABLE
            at java.net.SocketInputStream.socketRead0(Native Method)
            at java.net.SocketInputStream.read(SocketInputStream.java:150)
            at java.net.SocketInputStream.read(SocketInputStream.java:121)
            at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
            at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
            at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
            - locked <0x00000007688f1ff0> (a java.io.BufferedInputStream)
            at org.smpp.TCPIPConnection.receive(TCPIPConnection.java:413)
            at org.smpp.ReceiverBase.receivePDUFromConnection(ReceiverBase.java:197)
            at org.smpp.Receiver.receiveAsync(Receiver.java:351)
            at org.smpp.ReceiverBase.process(ReceiverBase.java:96)
            at org.smpp.util.ProcessingThread.run(ProcessingThread.java:199)
            at java.lang.Thread.run(Thread.java:722)

    We are not able to trace the exact reason behind this. Kindly help. We are using a 16-core machine and the load on the system is around 30-40 at the time of the issue. We use the command ss dst <ip> to find the recv-q. Recently we have been facing issues with the recv-q size getting hung, wherein the receive buffer gets stuck at some point in time. The recv-q size is not decreasing, and as a result we are losing a lot of hits from the other side; our application is not accepting any data.

  • Non-blocking TCP buffer issues.

    - by Poni
    Hi! I think I'm in a problem. I have two TCP apps connected to each other which use winsock I/O completion ports to send/receive data (non-blocking sockets). Everything works just fine until there's a data transfer burst. The sender starts sending incorrect/malformed data. I allocate the buffers I'm sending on the stack, and if I understand correctly, that's wrong, because these buffers should remain as I sent them until I get the "write complete" notification from IOCP. Take this for example:

        void some_function()
        {
            char cBuff[1024];
            // filling cBuff with some data
            WSASend(...); // sending cBuff, non-blocking mode
            // filling cBuff with other data
            WSASend(...); // again, sending cBuff
            // ..... and so forth!
        }

    If I understand correctly, each of these WSASend() calls should have its own unique buffer, and that buffer can be reused only when the send completes. Correct? Now, what strategies can I implement in order to maintain a big sack of such buffers, how should I handle them, how can I avoid a performance penalty, etc.? And, if I am to use buffers, that means I should copy the data to be sent from the source buffer to the temporary one; thus, I'd set SO_SNDBUF on each socket to zero, so the system will not re-copy what I already copied. Are you with me? Please let me know if I wasn't clear.
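    A minimal sketch of the usual per-operation context pattern, assuming the IOCP completion loop is already in place; the struct and function names are illustrative:

        // Sketch: give every WSASend its own heap-allocated context that owns
        // the buffer until the completion is dequeued.
        struct SendContext {
            OVERLAPPED ov;        // must be first if you cast from LPOVERLAPPED
            WSABUF     wsaBuf;
            char       data[1024];
        };

        void send_copy(SOCKET s, const char* src, size_t len)
        {
            SendContext* ctx = new SendContext();
            ZeroMemory(&ctx->ov, sizeof(ctx->ov));
            memcpy(ctx->data, src, len);
            ctx->wsaBuf.buf = ctx->data;
            ctx->wsaBuf.len = (ULONG)len;
            WSASend(s, &ctx->wsaBuf, 1, NULL, 0, &ctx->ov, NULL);
            // delete ctx only when GetQueuedCompletionStatus returns this OVERLAPPED
        }

    A free-list of such contexts avoids the per-send allocation once the burst sizes are known.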

  • EVP_PKEY from char buffer in x509 (PKCS7)

    - by sid
    Hi all, I have a DER certificate from which I am retrieving the public key into an unsigned char buffer as follows; is it the right way of getting it?

        pStoredPublicKey = X509_get_pubkey(x509);
        if(pStoredPublicKey == NULL) {
            printf(": publicKey is NULL\n");
        }
        if(pStoredPublicKey->type == EVP_PKEY_RSA) {
            RSA *x = pStoredPublicKey->pkey.rsa;
            bn = x->n;
        }
        else if(pStoredPublicKey->type == EVP_PKEY_DSA) {
        }
        else if(pStoredPublicKey->type == EVP_PKEY_EC) {
        }
        else {
            printf(" : Unknown publicKey\n");
        }

        // extract the bytes from the public key & convert into an unsigned char buffer
        buf_len = (size_t) BN_num_bytes(bn);
        key = (unsigned char *)malloc(buf_len);
        n = BN_bn2bin(bn, (unsigned char *) key);
        for (i = 0; i < n; i++) {
            printf("%02x\n", (unsigned char) key[i]);
        }
        keyLen = EVP_PKEY_size(pStoredPublicKey);
        EVP_PKEY_free(pStoredPublicKey);

    And with this unsigned char buffer, how do I get back the EVP_PKEY for RSA? Or can I use the following?

        EVP_PKEY *d2i_PublicKey(int type, EVP_PKEY **a, unsigned char **pp, long length);
        int i2d_PublicKey(EVP_PKEY *a, unsigned char **pp);
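    A minimal sketch of the i2d/d2i round trip using the declarations quoted above. Note one assumption it rests on: serializing only the modulus with BN_bn2bin drops the public exponent, so the key cannot be rebuilt from that buffer alone; the sketch serializes the whole key instead (error checks omitted):

        /* Sketch: classic two-call i2d idiom, then rebuild with d2i_PublicKey. */
        int derLen = i2d_PublicKey(pStoredPublicKey, NULL);    /* length only */
        unsigned char *der = (unsigned char *)malloc((size_t)derLen);
        unsigned char *p = der;
        i2d_PublicKey(pStoredPublicKey, &p);                   /* writes, advances p */

        unsigned char *q = der;                                /* d2i also advances */
        EVP_PKEY *rebuilt = d2i_PublicKey(EVP_PKEY_RSA, NULL, &q, derLen);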

  • Symbian: clear buffer of RSocket object

    - by Heinz
    Hi, I have to come back once again to sockets in Symbian. Code to set up a connection to a remote server looks as follows:

        TInetAddr serverAddr;
        TUint iPort = 111;
        TRequestStatus iStatus;
        TSockXfrLength len;
        TInt res = iSocketSrv.Connect();
        res = iSocket.Open(iSocketSrv, KAfInet, KSockStream, KProtocolInetTcp);
        res = iSocket.SetOpt(KSoTcpSendWinSize, KSolInetTcp, 0x10000);
        serverAddr.SetPort(iPort);
        serverAddr.SetAddress(INET_ADDR(11,11,179,154));
        iSocket.Connect(serverAddr, iStatus);
        User::WaitForRequest(iStatus);

    Over the iSocket I receive packets of variable size. On very few occurrences it happens that such a packet is corrupted. What I would like to do then is to clear all the data that is currently in the iSocket buffer and ready to be read. I have not seen any method of RSocket that allows me to clear the content of the buffer. Does anyone know how to do that? If possible, I would like to avoid using RecvOneOrMore() or a similar recv function to clear the buffer. Thanks

  • Java: how to avoid circular references when dumping object information with reflection?

    - by Tom
    I've modified an object dumping method to avoid circular references causing a StackOverflow error. This is what I ended up with:

        //returns all fields of the given object in a string
        public static String dumpFields(Object o, int callCount, ArrayList excludeList) {
            // add this object to the exclude list to avoid circular references in the future
            if (excludeList == null) excludeList = new ArrayList();
            excludeList.add(o);
            callCount++;
            StringBuffer tabs = new StringBuffer();
            for (int k = 0; k < callCount; k++) {
                tabs.append("\t");
            }
            StringBuffer buffer = new StringBuffer();
            Class oClass = o.getClass();
            if (oClass.isArray()) {
                buffer.append("\n");
                buffer.append(tabs.toString());
                buffer.append("[");
                for (int i = 0; i < Array.getLength(o); i++) {
                    if (i < 0) buffer.append(",");
                    Object value = Array.get(o, i);
                    if (value != null) {
                        if (excludeList.contains(value)) {
                            buffer.append("circular reference");
                        } else if (value.getClass().isPrimitive()
                                || value.getClass() == java.lang.Long.class
                                || value.getClass() == java.lang.String.class
                                || value.getClass() == java.lang.Integer.class
                                || value.getClass() == java.lang.Boolean.class) {
                            buffer.append(value);
                        } else {
                            buffer.append(dumpFields(value, callCount, excludeList));
                        }
                    }
                }
                buffer.append(tabs.toString());
                buffer.append("]\n");
            } else {
                buffer.append("\n");
                buffer.append(tabs.toString());
                buffer.append("{\n");
                while (oClass != null) {
                    Field[] fields = oClass.getDeclaredFields();
                    for (int i = 0; i < fields.length; i++) {
                        if (fields[i] == null) continue;
                        buffer.append(tabs.toString());
                        fields[i].setAccessible(true);
                        buffer.append(fields[i].getName());
                        buffer.append("=");
                        try {
                            Object value = fields[i].get(o);
                            if (value != null) {
                                if (excludeList.contains(value)) {
                                    buffer.append("circular reference");
                                } else if ((value.getClass().isPrimitive())
                                        || (value.getClass() == java.lang.Long.class)
                                        || (value.getClass() == java.lang.String.class)
                                        || (value.getClass() == java.lang.Integer.class)
                                        || (value.getClass() == java.lang.Boolean.class)) {
                                    buffer.append(value);
                                } else {
                                    buffer.append(dumpFields(value, callCount, excludeList));
                                }
                            }
                        } catch (IllegalAccessException e) {
                            System.out.println("IllegalAccessException: " + e.getMessage());
                        }
                        buffer.append("\n");
                    }
                    oClass = oClass.getSuperclass();
                }
                buffer.append(tabs.toString());
                buffer.append("}\n");
            }
            return buffer.toString();
        }

    The method is initially called like this:

        System.out.println(dumpFields(obj, 0, null));

    So, basically I added an excludeList which contains all the previously checked objects. Now, if an object contains another object and that object links back to the original object, it should not follow that object further down the chain. However, my logic seems to have a flaw, as I still get stuck in an infinite loop. Does anyone know why this is happening?

  • How large should my recv buffer be when calling recv in the socket library

    - by Silmaril89
    Hi, I have a few questions about the socket library in C. Here is a snippet of code I'll refer to in my questions:

        char recv_buffer[3000];
        recv(socket, recv_buffer, 3000, 0);

    First, how do I decide how big to make recv_buffer? I'm using 3000, but it's arbitrary. Second, what happens if recv() receives a packet bigger than my recv_buffer? Third, how can I know if I have received the entire message without calling recv again and having it wait forever when there is nothing left to be received? And finally, is there a way I can make a buffer not have a fixed amount of space, so that I can keep adding to it without fear of running out of space? Maybe using strcat to concatenate the latest recv() response to the buffer? I know it's a lot of questions in one, but I would greatly appreciate any responses.
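    On the last question, a minimal sketch of a growing receive buffer (POSIX; sock is an assumed connected socket, error handling abbreviated). For message protocols you would loop until a length header or delimiter says the message is complete rather than until the peer closes:

        // Sketch: read until the peer closes, doubling the buffer as needed.
        size_t cap = 4096, used = 0;
        char* buf = (char*)malloc(cap);
        for (;;) {
            if (used == cap) {
                cap *= 2;
                buf = (char*)realloc(buf, cap);
            }
            ssize_t n = recv(sock, buf + used, cap - used, 0);
            if (n <= 0) break;   /* 0 = peer closed, -1 = error */
            used += (size_t)n;
        }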

  • ORA-22835 using JPA (Buffer too small)

    - by Kenneth
    I am trying to persist an Entity with a @Lob-annotated String field. The content of that field is bigger than the 40k buffer size limit. The first problem I had was related to the setString method used internally by the JPA implementation (Hibernate in my case) and the Oracle JDBC driver. That problem was solved by adding <property name="hibernate.connection.SetBigStringTryClob" value="true"/> to my persistence.xml file. Then the error changed to an ORA-22835 error (the buffer is too small). Is there any way JPA can solve this problem without going to a low-level implementation? Any suggestions?

  • OpenGL Motion blur with the accumulation buffer in WxWidgets

    - by Klaus
    Hello, I'm trying to achieve a motion blur effect in my OpenGL application. I read somewhere this solution, using the accumulation buffer:

        glAccum(GL_MULT, 0.90);
        glAccum(GL_ACCUM, 0.10);
        glAccum(GL_RETURN, 1.0);
        glFlush();

    at the end of the render loop. But nothing happens... What am I missing?

    Additions after genpfault's answer: Indeed, I did not ask for an accumulation buffer when I initialized my context. So I tried to pass an array of attributes to the constructor of my wxGLCanvas, as described here: http://docs.wxwidgets.org/2.6/wx_wxglcanvas.html :

        int attribList[] = { WX_GL_RGBA,
                             WX_GL_DOUBLEBUFFER,
                             WX_GL_MIN_ACCUM_RED,
                             WX_GL_MIN_ACCUM_GREEN,
                             WX_GL_MIN_ACCUM_BLUE,
                             0 };

    But all I get is a friendly seg fault. Does someone understand how to use this? (There are no problems with int attribList[] = { WX_GL_RGBA, WX_GL_DOUBLEBUFFER, 0 }.)
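    A sketch of a likely fix, assuming the segfault comes from the WX_GL_MIN_ACCUM_* tokens being passed without the value each one expects (the WX_GL_* size attributes are key/value pairs in the attribList, unlike the boolean WX_GL_RGBA and WX_GL_DOUBLEBUFFER):

        // Sketch: each WX_GL_MIN_ACCUM_* entry is followed by the requested bit
        // depth, and the list is still zero-terminated. 8 is an illustrative choice.
        int attribList[] = {
            WX_GL_RGBA,
            WX_GL_DOUBLEBUFFER,
            WX_GL_MIN_ACCUM_RED,   8,
            WX_GL_MIN_ACCUM_GREEN, 8,
            WX_GL_MIN_ACCUM_BLUE,  8,
            0
        };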

  • how to flush the console buffer?

    - by DoronS
    Hi all, I have some code that runs repeatedly:

        printf("do you want to continue? Y/N: \n");
        keepplaying = getchar();

    The next time my code runs, it doesn't wait for input. I found out that the second time, getchar uses '\n' as the character. I'm guessing this is due to some buffer stdio has, so it saves the last input, which was "Y\n" or "N\n". My question is: how do I flush the buffer before using getchar, so that getchar will wait for my answer?
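    A minimal sketch of the portable fix: drain the rest of the current input line so the next getchar blocks for fresh input (fflush(stdin) is undefined behavior in standard C, so a read loop is the usual approach):

        /* Sketch: discard buffered characters up to and including the newline. */
        int c;
        while ((c = getchar()) != '\n' && c != EOF)
            ;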

  • evaluating buffer in emacs python-mode on remote host

    - by Adrian
    Hello, I'm using emacs23 with tramp to modify Python scripts on a remote host. I found that when I start the Python shell within emacs it starts up Python on the remote host. My problem is that when I then try to call python-send-buffer via C-c C-c it comes up with the error:

        Traceback (most recent call last):
          File "", line 1, in ?
        ImportError: No module named emacs
        Traceback (most recent call last):
          File "", line 1, in ?
        NameError: name 'emacs' is not defined

    Now, I must admit that I don't really know what's going on here. Is there a way for me to configure emacs so that I can evaluate the buffer on the remote host? Many thanks.

  • snipMate only working on empty buffer?

    - by JesseBuesking
    I'm attempting to use snipMate with sql files, however it doesn't seem to work when editing an existing file. If I create a new empty buffer (no file; e.g. launch gvim from the start menu) and set the filetype to sql (:set ft=sql), it works. However, if I then try to open a sql file (e.g. :e c:\blah.sql) and edit it, snipMate no longer works. What gives!?

    Setup:
      - gvim, vim 7.3
      - Windows 7
      - snipMate 0.84

    Also, I do in fact have filetype plugin on in my .vimrc file.

    Edit: Apparently if I open an empty buffer, set the filetype to sql, then save to file using :w c:\blah.sql, I now have a sql file open AND snipMate continues to work.

    Edit: Here's a gist of my current .vimrc in case it helps: https://gist.github.com/3946877

  • How to signal a buffer full state between posix threads

    - by mikip
    Hi, I have two threads. The main thread 'A' is responsible for message handling between a number of processes. When thread A gets a buffer-full message, it should inform thread B and pass a pointer to the buffer, which thread B will then process. When thread B has finished, it should inform thread A that it is done. How do I go about implementing this using POSIX threads in C on Linux? I have looked at condition variables; is this the way to go? I'm not experienced in multi-threaded programming and would like some advice on the best avenue to take. Thanks
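    Condition variables are indeed the usual tool here. A minimal one-shot sketch of the handshake, with names chosen for illustration and error handling omitted:

        /* Sketch: A signals "buffer full", B processes and signals "done".
           A condition variable plus a mutex-protected state flag is the usual shape. */
        #include <pthread.h>

        enum { EMPTY, FULL, DONE } state = EMPTY;
        char *shared_buf;
        pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
        pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

        /* Thread A: hand the buffer to B, then wait until B is finished. */
        void notify_buffer_full(char *buf) {
            pthread_mutex_lock(&lock);
            shared_buf = buf;
            state = FULL;
            pthread_cond_signal(&cond);
            while (state != DONE)
                pthread_cond_wait(&cond, &lock);  /* loop guards spurious wakeups */
            state = EMPTY;
            pthread_mutex_unlock(&lock);
        }

        /* Thread B: wait for a full buffer, process it, then signal completion. */
        void *worker(void *arg) {
            (void)arg;
            pthread_mutex_lock(&lock);
            while (state != FULL)
                pthread_cond_wait(&cond, &lock);
            /* ... process shared_buf ... */
            state = DONE;
            pthread_cond_signal(&cond);
            pthread_mutex_unlock(&lock);
            return NULL;
        }

    For repeated hand-offs, wrap thread B's body in a loop; the state-flag-plus-while pattern stays the same.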

  • buffer overflow with boost::program_options

    - by f4
    Hello, I have a problem using boost::program_options. This simple program, copy-pasted from boost's documentation:

        #include <boost/program_options.hpp>

        int main( int argc, char** argv )
        {
            namespace po = boost::program_options;
            po::options_description desc("Allowed options");
            desc.add_options()
                ("help", "produce help message")
                ("compression", po::value<int>(), "set compression level")
            ;
            return 0;
        }

    fails with a buffer overflow. I have activated the "buffer security switch", and when I run it I get an "unknown exception (0xc0000409)" when I step over the line desc.add_options()... I use Visual Studio 2005 and boost 1.43.0. By the way, it does run if I deactivate the switch, but I don't feel comfortable doing so... unless it's possible to deactivate it locally. So do you have a solution to this problem?

    EDIT: I found the problem. I was linking against libboost_program_options-vc80-mt.lib, which wasn't the right library.

  • Huge page buffer vs. multiple simultaneous processes

    - by Andrei K.
    One of our customers has a 35 Gb database with an average active connection count of about 70-80. Some tables in the database have more than 10M records per table. Now they have bought a new server: 4 * 6 cores = 24 CPU cores, 48 Gb RAM, 2 RAID controllers with 256 Mb cache, with 8 SAS 15K HDDs on each. 64-bit OS. I'm wondering what would be the fastest configuration: 1) FB 2.5 SuperServer with a huge buffer, 8192 * 3,500,000 pages = 29 Gb, or 2) FB 2.5 Classic with a small buffer of 1000 pages. Maybe someone has tested such a case before and will save me days of work :) Thanks in advance.

  • Simple Emacs keybindings

    - by User1
    I have three operations that I do all the time in Emacs:

      1. Create a new buffer and paste the clipboard. [C-S-n]
      2. Close the current buffer. [C-S-w]
      3. Switch to the last viewed buffer. [C-TAB]

    I feel like a keyboard acrobat when doing the first two operations. I think it would be worth trying some custom keybindings and macros. A few questions about this customization: How would I make a macro for #1? Are these good keybindings? (I know this is a bit subjective, but they might be used by something popular that I don't use.) Has anyone found a Ctrl-Tab macro that will act like Alt-Tab in Linux/Windows? Specifically, I want it to have a stack of buffers ordered by last-viewed timestamp (most recent on top). I want to continue cycling through the stack until I let go of the Ctrl key. When the Ctrl key is released, I want the current buffer to get an updated position on the stack.

  • OpenGL index buffer object with additional data

    - by muksie
    I have a large set of lines, which I render from a vertex buffer object using

        glMultiDrawArrays(GL_LINE_STRIP, ...);

    This works perfectly well. Now I have lots of vertex pairs which I also have to visualize. Every pair consists of two vertices on two different lines, and the distance between the vertices is small. However, I'd like to have the ability to draw a line between all vertex pairs with a distance less than a certain value. What I'd like to have is something like a buffer object with the following structure:

        i1, j1, r1, i2, j2, r2, i3, j3, r3, ...

    where the i's and j's are indices pointing to vertices and the r's are the distances between those vertices. Thus every vertex pair is stored as an (i, j, r) tuple. Then I'd like to have a (vertex) shader which only draws the vertex pairs with r < SOME_VALUE as a line. So my question is: what is the best way to achieve this?

  • Socket receive buffer size

    - by Kanishka
    Is there a way to determine the receive buffer size of a TCP/IP socket in C#? I am sending a message to a server and expecting a response, where I am not sure of the receive buffer size.

        IPEndPoint ipep = new IPEndPoint(IPAddress.Parse("192.125.125.226"), 20060);
        Socket server = new Socket(AddressFamily.InterNetwork,
                                   SocketType.Stream, ProtocolType.Tcp);
        server.Connect(ipep);
        String OutStr = "49|50|48|48|224|48|129|1|0|0|128|0|0|0|0|0|4|0|0|32|49|50";
        byte[] temp = OutStr.Split('|').Select(s => byte.Parse(s)).ToArray();
        int byteCount = server.Send(temp);
        byte[] bytes = new byte[255];
        int res = 0;
        res = server.Receive(bytes);
        return Encoding.UTF8.GetString(bytes);
