Search Results

Search found 3953 results on 159 pages for 'byte slave'.

Page 68/159 | < Previous Page | 64 65 66 67 68 69 70 71 72 73 74 75  | Next Page >

  • android java.lang.OutOfMemoryError

    - by xiangdream
    Hi all, when I download large data from a website, I get this error output:

      I/global (20094): Default buffer size used in BufferedInputStream constructor. It would be better to be explicit if an 8k buffer is required.
      D/dalvikvm(20094): GC freed 6153 objects / 3650840 bytes in 335ms
      I/dalvikvm-heap(20094): Forcing collection of SoftReferences for 3599051-byte allocation
      D/dalvikvm(20094): GC freed 320 objects / 11400 bytes in 144ms
      E/dalvikvm-heap(20094): Out of memory on a 3599051-byte allocation.
      I/dalvikvm(20094): "Thread-9" prio=5 tid=17 RUNNABLE
      I/dalvikvm(20094): | group="main" sCount=0 dsCount=0 s=0 obj=0x439b9480
      I/dalvikvm(20094): | sysTid=25762 nice=0 sched=0/0 handle=4065496

    Can anyone help me?
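    A common way around this kind of per-download OutOfMemoryError is to stream the response to disk in small chunks instead of allocating one buffer the size of the whole payload. The following Java sketch is only an illustration of that idea, not code from the question; the URL and output path are placeholders:

      import java.io.BufferedInputStream;
      import java.io.FileOutputStream;
      import java.io.InputStream;
      import java.io.OutputStream;
      import java.net.URL;

      public class StreamingDownload {
          public static void main(String[] args) throws Exception {
              URL url = new URL("http://example.com/large-file");                // placeholder URL
              try (InputStream in = new BufferedInputStream(url.openStream());
                   OutputStream out = new FileOutputStream("large-file.bin")) {  // placeholder path
                  byte[] chunk = new byte[8 * 1024];                             // one small, reused buffer
                  int n;
                  while ((n = in.read(chunk)) != -1) {
                      out.write(chunk, 0, n);                                    // never hold the whole file in memory
                  }
              }
          }
      }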

    Read the article

  • MFC: Reading entire file to buffer...

    - by deostroll
    I've meddled with some code but I am unable to read the entire file properly... a lot of junk gets appended to the output. How do I fix this?

      // wmfParser.cpp : Defines the entry point for the console application.
      #include "stdafx.h"
      #include "wmfParser.h"
      #include <cstring>

      #ifdef _DEBUG
      #define new DEBUG_NEW
      #endif

      // The one and only application object
      CWinApp theApp;

      using namespace std;

      int _tmain(int argc, TCHAR* argv[], TCHAR* envp[])
      {
          int nRetCode = 0;

          // initialize MFC and print and error on failure
          if (!AfxWinInit(::GetModuleHandle(NULL), NULL, ::GetCommandLine(), 0))
          {
              // TODO: change error code to suit your needs
              _tprintf(_T("Fatal Error: MFC initialization failed\n"));
              nRetCode = 1;
          }
          else
          {
              // TODO: code your application's behavior here.
              CFile file;
              CFileException exp;
              if (!file.Open(_T("c:\\sample.txt"), CFile::modeRead, &exp))
              {
                  exp.ReportError();
                  cout << '\n';
                  cout << "Aborting...";
                  system("pause");
                  return 0;
              }

              ULONGLONG dwLength = file.GetLength();
              cout << "Length of file to read = " << dwLength << '\n';

              /*
              BYTE* buffer;
              buffer = (BYTE*)calloc(dwLength, sizeof(BYTE));
              file.Read(buffer, 25);
              char* str = (char*)buffer;
              cout << "length of string : " << strlen(str) << '\n';
              cout << "string from file: " << str << '\n';
              */

              char str[100];
              file.Read(str, sizeof(str));
              cout << "Data : " << str << '\n';

              file.Close();
              cout << "File was closed\n";

              //AfxMessageBox(_T("This is a test message box"));
              system("pause");
          }

          return nRetCode;
      }

    Read the article

  • Basic Client-Server Design for persistent connections?

    - by cam
    Here's as far as I understand it:
    1. Client and server make a connection
    2. Client sends the server data
    3. Server interprets the data, sends the client data
    ...and so on, until the client sends a disconnect signal.
    I'm just wondering about implementation. Steps 2 and 3 are confusing to me; maybe I'm over-complicating it. Is there any more to interpreting the data than a giant switch statement? Any good books on client/server design? Specifically talking about multithreaded servers, scalability, and message design (byte 1 = header info, byte 2 = blah blah, etc.)? Specifically geared towards C++.
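    A common answer to the "byte 1 = header info" part is length-prefixed framing: every message starts with a small fixed-size header (a type byte plus a body length), so the reader knows exactly how many bytes belong to the body before dispatching on the type. Below is a rough Java sketch of that idea (the question asks about C++, but the framing logic is the same); the message types are invented for illustration:

      import java.io.DataInputStream;
      import java.io.DataOutputStream;
      import java.io.IOException;

      // Hypothetical wire format: 1-byte type, 4-byte big-endian length, then the body.
      class Framing {
          static void writeMessage(DataOutputStream out, byte type, byte[] body) throws IOException {
              out.writeByte(type);
              out.writeInt(body.length);    // DataOutputStream writes big-endian ("network order")
              out.write(body);
              out.flush();
          }

          static void readMessage(DataInputStream in) throws IOException {
              byte type = in.readByte();
              int length = in.readInt();
              byte[] body = new byte[length];
              in.readFully(body);           // blocks until the whole body has arrived
              switch (type) {               // the "giant switch" only dispatches on the header byte
                  case 1:  /* handle a login message */ break;
                  case 2:  /* handle a chat message  */ break;
                  default: /* unknown type: protocol error */ break;
              }
          }
      }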

    Read the article

  • Unix Sockets in Go

    - by marketer
    I'm trying to make a simple echo client and server that use Unix sockets. In this example, the server can receive data from the client, but it can't send the data back. If I use tcp connections instead, it works great.

    Server:

      package main

      import "net"
      import "fmt"

      func echoServer(c net.Conn) {
          for {
              buf := make([]byte, 512)
              nr, err := c.Read(buf)
              if err != nil {
                  return
              }
              data := buf[0:nr]
              fmt.Printf("Received: %v", string(data))
              _, err = c.Write(data)
              if err != nil {
                  panic("Write: " + err.String())
              }
          }
      }

      func main() {
          l, err := net.Listen("unix", "/tmp/echo.sock")
          if err != nil {
              println("listen error", err.String())
              return
          }
          for {
              fd, err := l.Accept()
              if err != nil {
                  println("accept error", err.String())
                  return
              }
              go echoServer(fd)
          }
      }

    Client:

      package main

      import "net"
      import "time"

      func main() {
          c, err := net.Dial("unix", "", "/tmp/echo.sock")
          if err != nil {
              panic(err.String())
          }
          for {
              _, err := c.Write([]byte("hi\n"))
              if err != nil {
                  println(err.String())
              }
              time.Sleep(1e9)
          }
      }

    Read the article

  • hash password in SQL Server (asp.net)

    - by ile
    Is this how a hashed password stored in SQL Server should look? This is the function I use to hash the password (I found it in some tutorial):

      public string EncryptPassword(string password)
      {
          // we use codepage 1252 because that is what SQL Server uses
          byte[] pwdBytes = Encoding.GetEncoding(1252).GetBytes(password);
          byte[] hashBytes = System.Security.Cryptography.MD5.Create().ComputeHash(pwdBytes);
          return Encoding.GetEncoding(1252).GetString(hashBytes);
      }

    EDIT: I tried to use SHA-1 and now the strings seem to look as they are supposed to:

      public string EncryptPassword(string password)
      {
          return FormsAuthentication.HashPasswordForStoringInConfigFile(password, "sha1");
      }
      // example output: 39A43BDB7827112409EFED3473F804E9E01DB4A8

    The result from the image above looks like a broken string, but this SHA-1 output looks normal... Will this be secure enough?

    Read the article

  • bitshift large strings for encoding QR Codes

    - by icekreaman
    As an example, suppose a QR Code data stream contains 55 data words (each one byte in length) and 15 error correction words (again one byte each). The data stream begins with a 12-bit header and ends with four 0 bits. So 12 + 4 bits of header/footer and 15 bytes of error correction leave me 53 bytes to hold 53 alphanumeric characters. The 53 bytes of data and 15 bytes of EC are supplied in a string of length 68 (str68). The problem seems simple enough: concatenate 2 bytes of (right-shifted) header data with str68 and then left-shift the entire 70 bytes by 4 bits. This is the first time in many years of programming that I have ever needed to do something like this; I am a C and bit-shifting noob, so please be gentle... I have done a little investigation and so far have not been able to figure out how to bit-shift 70 bytes of data; any help would be greatly appreciated. Larger QR codes can hold 2000 bytes of data...
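    For reference, shifting a whole byte array left by four bits just means each output byte keeps its own low nibble (moved up) and borrows the high nibble of the following byte. The question is about C, but the carry logic is identical in any language; here is a minimal Java sketch of a 4-bit left shift over an arbitrary-length byte array (it assumes, as in the question, that the leading nibble is already zero from the earlier right shift, so nothing meaningful falls off the front):

      class BitShift {
          // Shifts the whole array 4 bits to the left; the top nibble of in[0] is discarded.
          static byte[] shiftLeft4(byte[] in) {
              byte[] out = new byte[in.length];
              for (int i = 0; i < in.length; i++) {
                  int hi = (in[i] & 0xFF) << 4;                                  // this byte's low nibble moves up
                  int lo = (i + 1 < in.length) ? (in[i + 1] & 0xFF) >>> 4 : 0;   // borrow the next byte's high nibble
                  out[i] = (byte) (hi | lo);
              }
              return out;
          }
      }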

    Read the article

  • Java - Need help with binary/code string manipulation

    - by ShrimpCrackers
    For a project, I have to convert a binary string into (an array of) bytes and write it out to a file in binary. Say that I have a sentence converted into a code string using a Huffman encoding. For example, if the sentence was "hello" and the codes were h = 00, e = 01, l = 10, o = 11, then the string representation would be 0001101011. How would I convert that into bytes? If that question doesn't make sense, it's because I know little about bits, bytes, bitwise shifting, and everything else that has to do with manipulating 1's and 0's.
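    A minimal sketch of the usual approach: pack the bit string into a byte array MSB-first, zero-padding the last byte (a real Huffman file would also need to record the true bit length somewhere so the padding can be ignored when decoding). The class, method, and file names below are illustrative, not from the question:

      import java.io.FileOutputStream;
      import java.io.IOException;

      class BitPacker {
          static byte[] packBits(String bits) {
              byte[] out = new byte[(bits.length() + 7) / 8];   // round up to whole bytes
              for (int i = 0; i < bits.length(); i++) {
                  if (bits.charAt(i) == '1') {
                      out[i / 8] |= 1 << (7 - (i % 8));         // fill each byte from its most significant bit
                  }
              }
              return out;
          }

          public static void main(String[] args) throws IOException {
              byte[] packed = packBits("0001101011");           // "hello" from the question
              try (FileOutputStream fos = new FileOutputStream("encoded.bin")) {
                  fos.write(packed);                            // two bytes: 0x1A, 0xC0
              }
          }
      }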

    Read the article

  • How to set up a Bitmap with unmanaged data?

    - by Danvil
    I have int width, height; and IntPtr data; which comes from an unmanaged unsigned char* pointer, and I would like to create a Bitmap to show the image data in a GUI. Please note that width is not necessarily a multiple of 4, I do not have a "stride", and my image data is laid out as BGRA. The following code works:

      byte[] pixels = new byte[4 * width * height];
      System.Runtime.InteropServices.Marshal.Copy(data, pixels, 0, pixels.Length);
      var bmp = new Bitmap(width, height, System.Drawing.Imaging.PixelFormat.Format32bppArgb);
      for (int i = 0; i < height; i++)
      {
          for (int j = 0; j < width; j++)
          {
              int p = 4 * (width * i + j);
              bmp.SetPixel(j, i, Color.FromArgb(pixels[p + 3], pixels[p + 2], pixels[p + 1], pixels[p + 0]));
          }
      }

    Is there a more direct way to copy the data?

    Read the article

  • Potential problems porting to different architectures

    - by Brendan Long
    I'm writing a Linux program that currently compiles and works fine on x86 and x86_64, and now I'm wondering if there's anything special I'll need to do to make it work on other architectures. What I've heard is that for cross platform code I should:
    - Don't assume anything about the size of a pointer, int or size_t
    - Don't make assumptions about byte order (I don't do any bit shifting -- I assume gcc will optimize my power of two multiplication/division for me)
    - Don't use assembly blocks (obvious)
    - Make sure your libraries work (I'm using SQLite, libcurl and Boost, which all seem pretty cross-platform)
    Is there anything else I need to worry about? I'm not currently targeting any other architectures, but I expect to support ARM at some point, and I figure I might as well make it work on any architecture if I can. Also, regarding my second point about byte order, do I need to do anything special with text input? I read files with getline(), so it seems like that should be done automatically as well.

    Read the article

  • connecting to exchange server

    - by MyHeadHurts
    I am using this code to connect to my Exchange server. I am trying to retrieve the inbox (basically the emails that have not been read); however, I am just getting a bunch of gibberish and it is reading an email. Can you help me modify my code to just read the most recent messages?

      Try
          tcpClient.Connect(hostName, 110)
          Dim networkStream As NetworkStream = tcpClient.GetStream()
          Dim bytes(tcpClient.ReceiveBufferSize) As Byte
          Dim sendBytes As Byte()

          networkStream.Read(bytes, 0, CInt(tcpClient.ReceiveBufferSize))

          sendBytes = Encoding.ASCII.GetBytes("User " + userName + vbCrLf)
          networkStream.Write(sendBytes, 0, sendBytes.Length)
          sTemp = networkStream.Read(bytes, 0, CInt(tcpClient.ReceiveBufferSize))

          sendBytes = Encoding.ASCII.GetBytes("Pass " + userPassword + vbCrLf)
          networkStream.Write(sendBytes, 0, sendBytes.Length)
          sTemp = networkStream.Read(bytes, 0, CInt(tcpClient.ReceiveBufferSize))

          sendBytes = Encoding.ASCII.GetBytes("STAT" + vbCrLf)
          networkStream.Write(sendBytes, 0, sendBytes.Length)
          sTemp = networkStream.Read(bytes, 0, CInt(tcpClient.ReceiveBufferSize))

          sendBytes = Encoding.ASCII.GetBytes("RETR " + messageNumber + vbCrLf)
          networkStream.Write(sendBytes, 0, sendBytes.Length)
          networkStream.Read(bytes, 0, CInt(tcpClient.ReceiveBufferSize))

          returnMessage = Encoding.ASCII.GetString(bytes)
          EmailContent.Text = returnMessage

          sendBytes = Encoding.ASCII.GetBytes("QUIT" + vbCrLf)
          networkStream.Write(sendBytes, 0, sendBytes.Length)

          tcpClient.Close()
      Catch ex As Exception
          EmailContent.Text = "Could not retrieve email or your inbox is empty"
      End Try

    Read the article

  • convert from physical path to virtual path

    - by user710502
    I have this function that gets the fileData as a byte array and a file path. The error I am getting is when it tries to set the fileInfo in the code below; it says 'Physical Path given, Virtual Path expected'.

      public override void WriteBinaryStorage(byte[] fileData, string filePath)
      {
          try
          {
              // Create directory if not exists.
              // The error is caught when it gets to this line:
              System.IO.FileInfo fileInfo = new System.IO.FileInfo(System.Web.HttpContext.Current.Server.MapPath(filePath));

              if (!fileInfo.Directory.Exists)
              {
                  fileInfo.Directory.Create();
              }

              // Write the binary content.
              System.IO.File.WriteAllBytes(System.Web.HttpContext.Current.Server.MapPath(filePath), fileData);
          }
          catch (Exception)
          {
              throw;
          }
      }

    When debugging it, filePath is "E:\\WEBS\\webapp\\default\\images\\mains\\myimage.jpg", and the error message is: 'E:/WEBS/webapp/default/images/mains/myimage.jpg' is a physical path, but a virtual path was expected. Also, what is triggering this is the following call:

      properties.ResizeImage(imageName, Configurations.ConfigSettings.MaxImageSize, Server.MapPath(Configurations.EnvironmentConfig.LargeImagePath));

    Read the article

  • How not to abort http response c#

    - by user194076
    I need to run several methods after sending a file to a user for download. What happens is that after I send a file to a user, the response is aborted and I can no longer do anything after Response.End(). For example, this is my sample code:

      Response.Clear();
      Response.AddHeader("content-disposition", "attachment; filename=test.pdf");
      Response.ContentType = "application/pdf";
      byte[] a = System.Text.Encoding.UTF8.GetBytes("test");
      Response.BinaryWrite(a);
      Response.End();
      StartNextMethod();
      Response.Redirect(URL);

    So, in this example StartNextMethod and Response.Redirect do not execute. What I tried is creating a separate handler (ashx) with the following code:

      public void ProcessRequest(HttpContext context)
      {
          context.Response.Clear();
          context.Response.AddHeader("content-disposition", "attachment; filename=test.pdf");
          context.Response.ContentType = "application/pdf";
          byte[] a = System.Text.Encoding.UTF8.GetBytes("test");
          context.Response.BinaryWrite(a);
          context.Response.End();
      }

    and calling it like this:

      Download d = new Download();
      d.ProcessRequest(HttpContext.Current);
      StartNextMethod();
      Response.Redirect(URL);

    but the same error happens. I've tried to replace Response.End with CompleteRequest but it doesn't help. I guess the problem is that I'm using HttpContext.Current but should use a separate response stream. Is that correct? How do I do that in a separate method, generically? (Assume that I want my handler to accept a byte array of data and a content type, and be downloadable from a separate response.) I really do not want to use a separate page for the response. UPDATE: I still haven't found a good solution. I'd like to do some actions after the user has downloaded a file, but without using a separate page for the response/request.

    Read the article

  • Efficiently display file status when using background thread

    - by schmoopy
    How can I efficiently display the status of a file operation when using a background thread? For instance, let's say I have a 100 MB file. When I run the code below on a thread (just as an example) it finishes in about 1 minute:

      foreach (byte b in file.bytes)
      {
          WriteByte(b, xxx);
      }

    But... if I want to update the user, I have to use a delegate to update the UI from the main thread, and the code below takes forever; literally, I don't know how long, I'm still waiting; I've created this post and it's not even 30% done.

      int total = file.length;
      int current = 0;
      foreach (byte b in file.bytes)
      {
          current++;
          UpdateCurrentFileStatus(current, total);
          WriteByte(b, xxx);
      }

      public delegate void UpdateCurrentFileStatus(int cur, int total);

      public void UpdateCurrentFileStatus(int cur, int total)
      {
          // Check if invoke is required; if so, create an instance of the delegate,
          // then update the UI
          if (this.InvokeRequired)
          {
          }
          else
          {
              UpdateUI(...)
          }
      }
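    The usual remedy is to stop marshalling to the UI thread once per byte and only report progress when the visible value actually changes. A rough Java/Swing sketch of that throttling idea follows (the question is C#/WinForms, but the principle carries over directly; all names here are illustrative):

      import java.io.IOException;
      import java.io.OutputStream;
      import javax.swing.JProgressBar;
      import javax.swing.SwingUtilities;

      class ProgressCopy {
          // data, out and bar are supplied by the caller; out.write stands in for the real per-byte work
          static void copyWithProgress(byte[] data, OutputStream out, JProgressBar bar) throws IOException {
              int lastPercent = -1;
              for (int i = 0; i < data.length; i++) {
                  out.write(data[i]);
                  int percent = (int) (100L * (i + 1) / data.length);
                  if (percent != lastPercent) {                          // roughly 100 UI callbacks in total instead of millions
                      lastPercent = percent;
                      final int p = percent;
                      SwingUtilities.invokeLater(() -> bar.setValue(p)); // marshal the update to the UI thread
                  }
              }
          }
      }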

    Read the article

  • How can I set the buffer size for the underneath Socket UDP? C#

    - by Jack
    Hi all. As we know, for UDP receive we use Socket.ReceiveFrom or UdpClient.Receive. Socket.ReceiveFrom accepts a byte array from you to put the UDP data in; UdpClient.Receive directly returns a byte array where the data is. My question is: how do I set the buffer size inside the Socket? I think the OS maintains its own buffer for received UDP data, right? For example, if a UDP packet is sent to my machine, the OS will put it into a buffer and wait for us to call Socket.ReceiveFrom or UdpClient.Receive, right? How can I change the size of that internal buffer? I have tried Socket.ReceiveBufferSize; it has no effect at all for UDP, and it clearly said that it is for the TCP window. I have also done a lot of experiments which prove to me that Socket.ReceiveBufferSize is NOT for UDP. Can anyone share some insight into the UDP internal buffer? Thanks
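    For comparison only: the OS-level buffer being described is the socket's SO_RCVBUF. In Java the same knob is exposed as DatagramSocket.setReceiveBufferSize; the sketch below just illustrates that concept and says nothing about what the .NET Socket class does for UDP (the port and sizes are arbitrary):

      import java.net.DatagramPacket;
      import java.net.DatagramSocket;

      public class UdpBufferDemo {
          public static void main(String[] args) throws Exception {
              DatagramSocket socket = new DatagramSocket(9999);                 // arbitrary port
              socket.setReceiveBufferSize(1 << 20);                             // ask the OS for a 1 MB SO_RCVBUF
              System.out.println("granted: " + socket.getReceiveBufferSize());  // the OS may clamp the request

              byte[] buf = new byte[64 * 1024];                                 // large enough for any single datagram
              DatagramPacket packet = new DatagramPacket(buf, buf.length);
              socket.receive(packet);                                           // one datagram per receive() call
              System.out.println("got " + packet.getLength() + " bytes");
          }
      }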

    Read the article

  • How do I Convert ARGB value from string to color?

    - by James
    I am trying to use the MakeColor method in the GDIAPI unit but the conversion from int to byte is not returning me the correct value. Example:

      var
        argbStr: string;
        A, R, G, B: Byte;
      begin
        argbStr := 'ffffcc88';
        A := StrToInt('$' + Copy(AValue, 0, 2));
        R := StrToInt('$' + Copy(AValue, 3, 2));
        G := StrToInt('$' + Copy(AValue, 5, 2));
        B := StrToInt('$' + Copy(AValue, 7, 2));
        Result := MakeColor(A, R, G, B);
      end;

    What am I doing wrong?

    Read the article

  • How do I split a short into its two bytes ?

    - by aPoC
    Hi. I have to split up a short into its two bytes. They have to be in network order. I need that for a small server telling the current size of the rest of the packet's data.

      List<byte> o = new List<byte>();
      o.Add(0x03);
      // here this short
      o.AddRange(MapData);
      o.Add(0xFF);
      Send(o);
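    A minimal Java sketch of the byte splitting itself (the question is C#, but the shift-and-mask logic is identical); in network order the most significant byte is sent first:

      public class SplitShort {
          public static void main(String[] args) {
              short size = 0x1234;                        // example value
              byte hi = (byte) ((size >> 8) & 0xFF);      // most significant byte, sent first in network order
              byte lo = (byte) (size & 0xFF);             // least significant byte, sent second
              System.out.printf("%02X %02X%n", hi, lo);   // prints "12 34"
          }
      }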

    Read the article

  • OpenGL texture shifted somewhat to the left when applied to a quad

    - by user308226
    I'm a bit new to OpenGL and I've been having a problem with using textures. The texture seems to load fine, but when I run the program, the texture displays shifted a couple pixels to the left, with the section cut off by the shift appearing on the right side. I don't know if the problem here is in my TGA loader or if it's the way I'm applying the texture to the quad. Here is the loader:

      #include "texture.h"
      #include <iostream>

      GLubyte uncompressedheader[12] = {0,0, 2,0,0,0,0,0,0,0,0,0};
      GLubyte compressedheader[12]   = {0,0,10,0,0,0,0,0,0,0,0,0};

      TGA::TGA()
      {
      }

      //Private loading function called by LoadTGA. Loads uncompressed TGA files
      //Returns: TRUE on success, FALSE on failure
      bool TGA::LoadCompressedTGA(char *filename, ifstream &texturestream)
      {
          return false;
      }

      bool TGA::LoadUncompressedTGA(char *filename, ifstream &texturestream)
      {
          cout << "G position status:" << texturestream.tellg() << endl;
          texturestream.read((char*)header, sizeof(header));            //read 6 bytes into the file to get the tga header
          width  = (GLuint)header[1] * 256 + (GLuint)header[0];         //read and calculate width and save
          height = (GLuint)header[3] * 256 + (GLuint)header[2];         //read and calculate height and save
          bpp    = (GLuint)header[4];                                   //read bpp and save
          cout << bpp << endl;
          if((width <= 0) || (height <= 0) || ((bpp != 24) && (bpp != 32)))   //check to make sure the height, width, and bpp are valid
          {
              return false;
          }
          if(bpp == 24)
          {
              type = GL_RGB;
          }
          else
          {
              type = GL_RGBA;
          }
          imagesize = ((bpp/8) * width * height);                       //determine size in bytes of the image
          cout << imagesize << endl;
          imagedata = new GLubyte[imagesize];                           //allocate memory for our imagedata variable
          texturestream.read((char*)imagedata, imagesize);              //read according the the size of the image and save into imagedata
          for(GLuint cswap = 0; cswap < (GLuint)imagesize; cswap += (bpp/8))  //loop through and reverse the tga's BGR format to RGB
          {
              imagedata[cswap] ^= imagedata[cswap+2] ^=                 //1st Byte XOR 3rd Byte XOR 1st Byte XOR 3rd Byte
              imagedata[cswap] ^= imagedata[cswap+2];
          }
          texturestream.close();                                        //close ifstream because we're done with it
          cout << "image loaded" << endl;
          glGenTextures(1, &texID);                                     // Generate OpenGL texture IDs
          glBindTexture(GL_TEXTURE_2D, texID);                          // Bind Our Texture
          glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);   // Linear Filtered
          glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
          glTexImage2D(GL_TEXTURE_2D, 0, type, width, height, 0, type, GL_UNSIGNED_BYTE, imagedata);
          delete imagedata;
          return true;
      }

      //Public loading function for TGA images. Opens TGA file and determines
      //its type, if any, then loads it and calls the appropriate function.
      //Returns: TRUE on success, FALSE on failure
      bool TGA::loadTGA(char *filename)
      {
          cout << width << endl;
          ifstream texturestream;
          texturestream.open(filename, ios::binary);
          texturestream.read((char*)header, sizeof(header));   //read 6 bytes into the file, its the header.
          //if it matches the uncompressed header's first 6 bytes, load it as uncompressed
          LoadUncompressedTGA(filename, texturestream);
          return true;
      }

      GLubyte* TGA::getImageData()
      {
          return imagedata;
      }

      GLuint& TGA::getTexID()
      {
          return texID;
      }

    And here's the quad:

      void Square::show()
      {
          glEnable(GL_TEXTURE_2D);
          glBindTexture(GL_TEXTURE_2D, texture.texID);

          //Move to offset
          glTranslatef( x, y, 0 );

          //Start quad
          glBegin( GL_QUADS );

          //Set color to white
          glColor4f( 1.0, 1.0, 1.0, 1.0 );

          //Draw square
          glTexCoord2f(0.0f, 0.0f); glVertex3f( 0, 0, 0 );
          glTexCoord2f(1.0f, 0.0f); glVertex3f( SQUARE_WIDTH, 0, 0 );
          glTexCoord2f(1.0f, 1.0f); glVertex3f( SQUARE_WIDTH, SQUARE_HEIGHT, 0 );
          glTexCoord2f(0.0f, 1.0f); glVertex3f( 0, SQUARE_HEIGHT, 0 );

          //End quad
          glEnd();

          //Reset
          glLoadIdentity();
      }

    Read the article

  • Java: BufferedImage from raw BMP file format data

    - by Victor
    Hello there. I've got a BMP file's raw pixel table in a byte[]; its structure is:

      (b g r) (b g r) ... (b g r) padding
      ...
      (b g r) (b g r) ... (b g r) padding

    where r, g and b are one byte each, and the padding rounds the row length up to a multiple of 4 bytes. So, how can I create a new BufferedImage from this raw data without copying, just using this raw data? I took a look at creating a BufferedImage from a DataBuffer, but I just didn't get it. Unfortunately ImageIO is not allowed in my situation.
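    One way to make the DataBuffer route concrete: wrap the existing array in a DataBufferByte, describe the padded BGR layout with a ComponentSampleModel, and build the BufferedImage on the resulting raster, so no pixel data is copied. This is only a hedged sketch; it assumes top-down row order, whereas BMP data is normally stored bottom-up, so a real loader would still have to account for that (for example by flipping when drawing):

      import java.awt.Transparency;
      import java.awt.color.ColorSpace;
      import java.awt.image.BufferedImage;
      import java.awt.image.ComponentColorModel;
      import java.awt.image.ComponentSampleModel;
      import java.awt.image.DataBuffer;
      import java.awt.image.DataBufferByte;
      import java.awt.image.Raster;
      import java.awt.image.WritableRaster;

      class BmpWrap {
          static BufferedImage wrap(byte[] bgr, int width, int height) {
              int stride = (width * 3 + 3) & ~3;                            // row length rounded up to 4 bytes
              DataBufferByte buffer = new DataBufferByte(bgr, bgr.length);  // wraps the array, no copy
              ComponentSampleModel sm = new ComponentSampleModel(
                      DataBuffer.TYPE_BYTE, width, height,
                      3, stride, new int[] {2, 1, 0});                      // band offsets: R at +2, G at +1, B at +0
              WritableRaster raster = Raster.createWritableRaster(sm, buffer, null);
              ComponentColorModel cm = new ComponentColorModel(
                      ColorSpace.getInstance(ColorSpace.CS_sRGB),
                      false, false, Transparency.OPAQUE, DataBuffer.TYPE_BYTE);
              return new BufferedImage(cm, raster, false, null);
          }
      }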

    Read the article

  • Using memory mapping in C for reading binary

    - by user1320912
    I am trying to read data from a binary file and process it. It is a very large file, so I thought I would use memory mapping. I am trying to use memory mapping so I can read the file byte by byte. I am getting a few compiler errors while doing this. I am doing this on a Linux platform.

      #include <unistd.h>
      #include <sys/types.h>
      #include <sys/mman.h>

      int fd;
      char *data;

      fd = open("data.bin", O_RDONLY);
      pagesize = 4000;
      data = mmap((caddr_t)0, pagesize, PROT_READ, MAP_SHARED, fd, pagesize);

    The errors I get are: caddr not initialized, O_RDONLY not initialized, mmap has too few arguments. Could someone help me out?
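    For comparison only (the question is C, and this does not resolve the compiler errors above): the same memory-mapped, byte-by-byte reading pattern in Java goes through FileChannel.map, which may help show what the end result should behave like. The file name and buffer handling are illustrative:

      import java.io.RandomAccessFile;
      import java.nio.MappedByteBuffer;
      import java.nio.channels.FileChannel;

      public class MappedRead {
          public static void main(String[] args) throws Exception {
              try (RandomAccessFile raf = new RandomAccessFile("data.bin", "r");
                   FileChannel ch = raf.getChannel()) {
                  MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
                  while (map.hasRemaining()) {
                      byte b = map.get();   // byte-by-byte access backed by the OS mapping
                      // process b ...
                  }
              }
          }
      }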

    Read the article

  • How to receive HTTP messages using Socket

    - by Poma
    I'm using the Socket class for my web client. I can't use HttpWebRequest since it doesn't support SOCKS proxies. So I have to parse headers and handle chunked encoding by myself. The most difficult thing is determining the length of the content, so I have to read it byte by byte. First I have to use ReadByte() to find the last header (the "\r\n\r\n" combination), then read the chunk's size, etc. But this approach has very poor performance. Can you suggest a better solution? Maybe some open source examples or libraries that handle HTTP requests through sockets (not very big and complicated though, I'm a noob).
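    Reading byte by byte is only slow when every read is a socket call; done against a user-space buffer it is cheap. As a rough illustration, here is a Java sketch that scans for the blank line ending the headers through a BufferedInputStream (in the C# case a buffered stream over the NetworkStream would play the same role); the body would then be read from the same stream using Content-Length or the chunk sizes:

      import java.io.BufferedInputStream;
      import java.io.ByteArrayOutputStream;
      import java.io.IOException;

      class HttpHeaderReader {
          // Returns the raw header block, reading byte by byte from a buffered stream,
          // so each read() is an in-memory operation rather than a system call.
          static String readHeaders(BufferedInputStream in) throws IOException {
              ByteArrayOutputStream header = new ByteArrayOutputStream();
              int p3 = -1, p2 = -1, p1 = -1, b;
              while ((b = in.read()) != -1) {
                  header.write(b);
                  if (p3 == '\r' && p2 == '\n' && p1 == '\r' && b == '\n') {
                      break;                         // reached the blank line after the last header
                  }
                  p3 = p2; p2 = p1; p1 = b;
              }
              // keep using the same BufferedInputStream to read the body afterwards,
              // otherwise bytes already buffered here would be lost
              return header.toString("ISO-8859-1");
          }
      }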

    Read the article

  • How can I intercept a Tomcat request at socket level?

    - by Miguel Pardal
    Hi, I'm doing a performance study of a web application framework running on Apache Tomcat 6. I'm trying to measure the time overhead of handling HTTP requests. What I would like to do is:

      // just before first request byte is read
      long t1 = System.nanoTime();
      // request is processed...
      // just after final byte is written to response
      long t2 = System.nanoTime();

    Then I would compute the total time (t2 - t1). Is there a way to do this? Thanks for your help!
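    A hedged starting point: a servlet Filter mapped to /* can bracket the whole request-handling chain with System.nanoTime(). Strictly speaking this measures from after Tomcat has parsed the request line and headers until the webapp has finished writing, not literally from the first socket byte; getting closer to the wire would mean writing a Tomcat Valve or instrumenting the connector. A minimal sketch of the Filter idea (class name and logging are illustrative):

      import java.io.IOException;
      import javax.servlet.Filter;
      import javax.servlet.FilterChain;
      import javax.servlet.FilterConfig;
      import javax.servlet.ServletException;
      import javax.servlet.ServletRequest;
      import javax.servlet.ServletResponse;

      public class TimingFilter implements Filter {
          public void init(FilterConfig config) { }
          public void destroy() { }

          public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                  throws IOException, ServletException {
              long t1 = System.nanoTime();           // just before the request enters the webapp
              chain.doFilter(request, response);     // request is processed by the framework
              long t2 = System.nanoTime();           // after the response body has been produced
              System.out.println("request took " + (t2 - t1) + " ns");
          }
      }

    The filter would be registered in web.xml with a filter-mapping whose url-pattern is /*.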

    Read the article

  • Asp.Net Error Message:Unable to validate data

    - by Amitabh
    We have an ASP.NET WebForms page which contains a GridView inside an UpdatePanel and refreshes every minute. And every minute we get the following error in the event log:

      Error Message: Unable to validate data.
      Stack Trace:
        at System.Web.Configuration.MachineKeySection.GetDecodedData(Byte[] buf, Byte[] modifier, Int32 start, Int32 length, Int32& dataLength)
        at System.Web.UI.ObjectStateFormatter.Deserialize(String inputString)

    We have tried the following:
    1. Adding a static machine key in the Web.config. (Did not work.)
    2. Disabling the view state MAC in the Web.config using the following entry (did not work):

      <pages buffer="true" enableViewStateMac="false">

    Is there something else that might cause this?

    Read the article

  • Showing an image after loading it from sql database

    - by user330075
    I have a problem showing the image from the database in a Details view with an ImageController. Inside the view I have:

      img src=Url.Action("GetFile", "Image", new { id = Model.id })

    and in the controller:

      public FileContentResult GetFile(int idl)
      {
          //int idl = 32;
          SqlDataReader rdr;
          byte[] fileContent = null;
          ...........
          return File(,,);
      }

    When the view is called, the GetFile function just won't work. But if I cut out the parameter int idl and instead instantiate it as a local variable, it does work:

      public FileContentResult GetFile()
      {
          int idl = 32;
          SqlDataReader rdr;
          byte[] fileContent = null;
          ...........
          return File(,,);
      }

    Why?

    Read the article

< Previous Page | 64 65 66 67 68 69 70 71 72 73 74 75  | Next Page >