Search Results

Search found 7985 results on 320 pages for 'multi byte'.


  • memory alignment within gcc structs

    - by Mumbles
    I am porting an application to an ARM platform in C; the application also runs on an x86 processor and must remain backward compatible. I am now having some issues with variable alignment. I have read the gcc manual for __attribute__((aligned(4),packed)), and I interpret it as saying that the start of the struct is aligned to the 4-byte boundary while the inside remains untouched because of the packed statement. Originally I had this, but occasionally the struct gets placed unaligned with the 4-byte boundary:

        typedef struct {
            unsigned int   code;
            unsigned int   length;
            unsigned int   seq;
            unsigned int   request;
            unsigned char  nonce[16];
            unsigned short crc;
        } __attribute__((packed)) CHALLENGE;

    So I changed it to this:

        typedef struct {
            unsigned int   code;
            unsigned int   length;
            unsigned int   seq;
            unsigned int   request;
            unsigned char  nonce[16];
            unsigned short crc;
        } __attribute__((aligned(4),packed)) CHALLENGE;

    My earlier understanding seems to be incorrect: the struct is now aligned to a 4-byte boundary, but because of the padding this adds, the size of the struct has increased from 42 to 44 bytes. The size is critical, as we have other applications that depend on the struct being 42 bytes. Could someone describe to me how to perform the operation that I require? Any help is much appreciated.
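    One common gcc-specific workaround, sketched below as a hedged suggestion rather than a confirmed fix: keep the type packed (so its size stays the wire size) and request the 4-byte alignment per object instead of baking it into the type, since an alignment attribute on a variable does not add tail padding to the struct itself.

        /* type stays packed: no padding, wire size unchanged */
        typedef struct {
            unsigned int   code;
            unsigned int   length;
            unsigned int   seq;
            unsigned int   request;
            unsigned char  nonce[16];
            unsigned short crc;
        } __attribute__((packed)) CHALLENGE;

        /* alignment is applied to the object, not the type,
           so sizeof(CHALLENGE) does not grow */
        static CHALLENGE challenge __attribute__((aligned(4)));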


  • How can I get bitfields to arrange my bits in the right order?

    - by Jim Hunziker
    To begin with, the application in question is always going to be on the same processor, and the compiler is always gcc, so I'm not concerned about bitfields not being portable. gcc lays out bitfields such that the first listed field corresponds to the least significant bit of a byte. So with the following structure, setting a=0, b=1, c=1, d=1, you get a byte of value e0:

        struct Bits {
            unsigned int a:5;
            unsigned int b:1;
            unsigned int c:1;
            unsigned int d:1;
        } __attribute__((__packed__));

    (Actually, this is C++, so I'm talking about g++.) Now let's say I'd like a to be a six-bit integer. I can see why this won't work, but I coded the following structure anyway:

        struct Bits2 {
            unsigned int a:6;
            unsigned int b:1;
            unsigned int c:1;
            unsigned int d:1;
        } __attribute__((__packed__));

    Setting b, c, and d to 1, and a to 0, results in the following two bytes: c0 01. This isn't what I wanted; I was hoping to see: e0 00. Is there any way to specify a structure that has three bits in the most significant bits of the first byte and six bits spanning the five least significant bits of the first byte and the most significant bit of the second? Please be aware that I have no control over where these bits are supposed to be laid out: it's a layout of bits that is defined by someone else's interface.
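    When the wire layout is fixed by an external interface, one hedged alternative is to skip bitfields entirely and pack the bits with shifts and masks, which makes the layout explicit and compiler-independent. A minimal sketch, assuming the target layout described above (d, c, b in the top three bits of the first byte, the six-bit a spanning the rest); pack_bits2 is a hypothetical helper name:

        #include <stdint.h>

        /* byte 0: d c b a5 a4 a3 a2 a1   byte 1: a0 0 0 0 0 0 0 0 */
        void pack_bits2(uint8_t out[2], unsigned a, unsigned b,
                        unsigned c, unsigned d)
        {
            out[0] = (uint8_t)(((d & 1u) << 7) | ((c & 1u) << 6) |
                               ((b & 1u) << 5) | ((a >> 1) & 0x1Fu));
            out[1] = (uint8_t)((a & 1u) << 7);
        }

    With a=0 and b=c=d=1 this produces e0 00, matching the desired layout.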


  • Error while uploading a file using the Client Object Model in SharePoint 2010

    - by user1481570
    Error while uploading a file using the Client Object Model in SharePoint 2010. The file gets uploaded, but although the code compiles with no error, I get the following error while executing: {"Value does not fall within the expected range."} {System.Collections.Generic.SynchronizedReadOnlyCollection}. I have a method which takes care of the upload functionality:

        public void Upload_Click(string documentPath, byte[] documentStream)
        {
            String sharePointSite = "http://cvgwinbasd003:28838/sites/test04";
            String documentLibraryUrl = sharePointSite + "/" + documentPath.Replace('\\', '/');

            // Get the document list
            List documentsList = clientContext.Web.Lists.GetByTitle("Doc1");
            var fileCreationInformation = new FileCreationInformation();
            // Assign the content byte[], i.e. documentStream
            fileCreationInformation.Content = documentStream;
            // Allow overwrite of the document
            fileCreationInformation.Overwrite = true;
            // Upload URL
            fileCreationInformation.Url = documentLibraryUrl;
            Microsoft.SharePoint.Client.File uploadFile =
                documentsList.RootFolder.Files.Add(fileCreationInformation);
            //uploadFile.ListItemAllFields.Update();
            clientContext.ExecuteQuery();
        }

    In the MVC 3.0 application I have defined the following controller method to invoke the upload method:

        public ActionResult ProcessSubmit(IEnumerable<HttpPostedFileBase> attachments)
        {
            System.IO.Stream uploadFileStream = null;
            byte[] uploadFileBytes;
            int fileLength = 0;
            foreach (HttpPostedFileBase fileUpload in attachments)
            {
                uploadFileStream = fileUpload.InputStream;
                fileLength = fileUpload.ContentLength;
            }
            uploadFileBytes = new byte[fileLength];
            uploadFileStream.Read(uploadFileBytes, 0, fileLength);
            using (DocManagementService.DocMgmtClient doc = new DocMgmtClient())
            {
                doc.Upload_Click("Doc1/Doc2/Doc2.1/", uploadFileBytes);
            }
            return RedirectToAction("SyncUploadResult");
        }

    Please help me to locate the error.


  • Sending a file over web service from java to .net

    - by Goran
    Hello, I have built a .NET 1.1 web service which should accept files and save them. Here is the code of the web method:

        [WebMethod]
        public bool SaveDocument(Byte[] docbinaryarray, string docname)
        {
            string dirPath = @"C:\Temp\WSTEST\";
            if (!Directory.Exists(dirPath))
            {
                Directory.CreateDirectory(dirPath);
            }
            string filePath = dirPath + docname;
            FileStream objfilestream =
                new FileStream(filePath, FileMode.Create, FileAccess.ReadWrite);
            objfilestream.Write(docbinaryarray, 0, docbinaryarray.Length);
            objfilestream.Close();
            return true;
        }

    When I make a client in .NET with a reference to this web service everything goes great, but when a colleague of mine tries to send me a file from a Java client I don't get the actual file; all I get is a byte array with only one element. The definition of the byte array for the file in the WSDL looks like this:

        <s:element minOccurs="0" maxOccurs="1" name="docbinaryarray" type="s:base64Binary" />

    He sends me base64Binary and it fails every time.


  • Help on EJB stateless datagram and message-driven beans

    - by Kemmal
    I have a client that's sending a message to the EJB server using UDP. I want the server (a stateless bean) to echo this message back to the client, but I can't seem to do this. Or can I implement the same logic by using JMS? Please help and enlighten. This is just a test; in the end I want a MIDP client to be sending the message to the EJB using datagrams. Here is my code:

        @Stateless
        public class SessionFacadeBean implements SessionFacadeRemote
        {
            public SessionFacadeBean() { }

            public static void main(String[] args)
            {
                DatagramSocket aSocket = null; // note: never assigned a real socket
                byte[] buffer = null;          // note: never allocated
                try
                {
                    while (true)
                    {
                        DatagramPacket request = new DatagramPacket(buffer, buffer.length);
                        aSocket.receive(request);
                        DatagramPacket reply = new DatagramPacket(request.getData(),
                                request.getLength(), request.getAddress(), request.getPort());
                        aSocket.send(reply);
                    }
                }
                catch (SocketException e) { System.out.println("Socket: " + e.getMessage()); }
                catch (IOException e) { System.out.println("IO: " + e.getMessage()); }
                finally { if (aSocket != null) aSocket.close(); }
            }
        }

    and the client:

        public static void main(String[] args)
        {
            DatagramSocket aSocket = null;
            try
            {
                aSocket = new DatagramSocket();
                byte[] m = "Test message!".getBytes();
                InetAddress aHost = InetAddress.getByName("localhost");
                int serverPort = 6789;
                DatagramPacket request = new DatagramPacket(m, m.length, aHost, serverPort);
                aSocket.send(request);
                byte[] buffer = new byte[1000];
                DatagramPacket reply = new DatagramPacket(buffer, buffer.length);
                aSocket.receive(reply);
                System.out.println("Reply: " + new String(reply.getData()));
            }
            catch (SocketException e) { System.out.println("Socket: " + e.getMessage()); }
            catch (IOException e) { System.out.println("IO: " + e.getMessage()); }
            finally { if (aSocket != null) aSocket.close(); }
        }

    Please help.


  • Some Async Socket Code - Help with Garbage Collection?

    - by divinci
    Hi all, I think this question is really about my understanding of garbage collection and variable references, but I will go ahead and throw out some code for you to look at:

        // Please note: do not use this code for async sockets;
        // it is only here to highlight my question.

        // SocketTransport
        // This is a simple wrapper class that is used as the 'state' object
        // when performing async socket reads/writes.
        public class SocketTransport
        {
            public Socket Socket;
            public byte[] Buffer;

            public SocketTransport(Socket socket, byte[] buffer)
            {
                this.Socket = socket;
                this.Buffer = buffer;
            }
        }

        // Entry point - creates a SocketTransport, then passes it as the state
        // object when asynchronously reading from the socket.
        public void ReadOne(Socket socket)
        {
            SocketTransport socketTransport_One = new SocketTransport(socket, new byte[10]);
            socketTransport_One.Socket.BeginReceive
            (
                socketTransport_One.Buffer,        // Buffer to store data
                0,                                 // Buffer offset
                10,                                // Read length
                SocketFlags.None,                  // SocketFlags
                new AsyncCallback(OnReadOne),      // Callback when BeginReceive completes
                socketTransport_One                // 'state' object to pass to callback
            );
        }

        public void OnReadOne(IAsyncResult ar)
        {
            SocketTransport socketTransport_One = ar.AsyncState as SocketTransport;
            ProcessReadOneBuffer(socketTransport_One.Buffer);  // Do processing

            // New read
            // Create another SocketTransport (what happens to the first one?)
            SocketTransport socketTransport_Two = new SocketTransport(socket, new byte[10]);
            socketTransport_Two.Socket.BeginReceive
            (
                socketTransport_One.Buffer,        // note: reuses the first transport's buffer
                0,
                10,
                SocketFlags.None,
                new AsyncCallback(OnReadTwo),
                socketTransport_Two
            );
        }

        public void OnReadTwo(IAsyncResult ar)
        {
            SocketTransport socketTransport_Two = ar.AsyncState as SocketTransport;
            ..............

    So my question is: the first SocketTransport to be created (socketTransport_One) has a strong reference to a Socket object (let's call it SocketA). Once the async read is completed, a new SocketTransport object is created (socketTransport_Two), also with a strong reference to SocketA. Q1. Will socketTransport_One be collected by the garbage collector when the method OnReadOne exits, even though it still contains a strong reference to SocketA? Thanks all!


  • undefined reference to function, despite defining it in C

    - by Jamie Edwards
    I'm following a tutorial, but when it comes to compiling and linking the code I get the following error:

        /tmp/cc8gRrVZ.o: In function `main':
        main.c:(.text+0xa): undefined reference to `monitor_clear'
        main.c:(.text+0x16): undefined reference to `monitor_write'
        collect2: ld returned 1 exit status
        make: *** [obj/main.o] Error 1

    What that is telling me is that I haven't defined both 'monitor_clear' and 'monitor_write'. But I have, in both the header and source files. They are as follows:

    monitor.c:

        // monitor.c -- Defines functions for writing to the monitor.
        //              Heavily based on Bran's kernel development tutorials,
        //              but rewritten for JamesM's kernel tutorials.

        #include "monitor.h"

        // The VGA framebuffer starts at 0xB8000.
        u16int *video_memory = (u16int *)0xB8000;
        // Stores the cursor position.
        u8int cursor_x = 0;
        u8int cursor_y = 0;

        // Updates the hardware cursor.
        static void move_cursor()
        {
            // The screen is 80 characters wide...
            u16int cursorLocation = cursor_y * 80 + cursor_x;
            outb(0x3D4, 14);                  // Tell the VGA board we are setting the high cursor byte.
            outb(0x3D5, cursorLocation >> 8); // Send the high cursor byte.
            outb(0x3D4, 15);                  // Tell the VGA board we are setting the low cursor byte.
            outb(0x3D5, cursorLocation);      // Send the low cursor byte.
        }

        // Scrolls the text on the screen up by one line.
        static void scroll()
        {
            // Get a space character with the default colour attributes.
            u8int attributeByte = (0 /*black*/ << 4) | (15 /*white*/ & 0x0F);
            u16int blank = 0x20 /* space */ | (attributeByte << 8);

            // Row 25 is the end; this means we need to scroll up.
            if (cursor_y >= 25)
            {
                // Move the current text chunk that makes up the screen
                // back in the buffer by a line.
                int i;
                for (i = 0*80; i < 24*80; i++)
                {
                    video_memory[i] = video_memory[i+80];
                }
                // The last line should now be blank. Do this by writing
                // 80 spaces to it.
                for (i = 24*80; i < 25*80; i++)
                {
                    video_memory[i] = blank;
                }
                // The cursor should now be on the last line.
                cursor_y = 24;
            }
        }

        // Writes a single character out to the screen.
        void monitor_put(char c)
        {
            // The background colour is black (0), the foreground is white (15).
            u8int backColour = 0;
            u8int foreColour = 15;

            // The attribute byte is made up of two nibbles - the lower being the
            // foreground colour, and the upper the background colour.
            u8int attributeByte = (backColour << 4) | (foreColour & 0x0F);
            // The attribute byte is the top 8 bits of the word we have to send to the
            // VGA board.
            u16int attribute = attributeByte << 8;
            u16int *location;

            // Handle a backspace, by moving the cursor back one space.
            if (c == 0x08 && cursor_x)
            {
                cursor_x--;
            }
            // Handle a tab by increasing the cursor's X, but only to a point
            // where it is divisible by 8.
            else if (c == 0x09)
            {
                cursor_x = (cursor_x+8) & ~(8-1);
            }
            // Handle carriage return.
            else if (c == '\r')
            {
                cursor_x = 0;
            }
            // Handle newline by moving cursor back to left and increasing the row.
            else if (c == '\n')
            {
                cursor_x = 0;
                cursor_y++;
            }
            // Handle any other printable character.
            else if (c >= ' ')
            {
                location = video_memory + (cursor_y*80 + cursor_x);
                *location = c | attribute;
                cursor_x++;
            }

            // Check if we need to insert a new line because we have reached the end
            // of the screen.
            if (cursor_x >= 80)
            {
                cursor_x = 0;
                cursor_y++;
            }

            // Scroll the screen if needed.
            scroll();
            // Move the hardware cursor.
            move_cursor();
        }

        // Clears the screen, by copying lots of spaces to the framebuffer.
        void monitor_clear()
        {
            // Make an attribute byte for the default colours.
            u8int attributeByte = (0 /*black*/ << 4) | (15 /*white*/ & 0x0F);
            u16int blank = 0x20 /* space */ | (attributeByte << 8);

            int i;
            for (i = 0; i < 80*25; i++)
            {
                video_memory[i] = blank;
            }

            // Move the hardware cursor back to the start.
            cursor_x = 0;
            cursor_y = 0;
            move_cursor();
        }

        // Outputs a null-terminated ASCII string to the monitor.
        void monitor_write(char *c)
        {
            int i = 0;
            while (c[i])
            {
                monitor_put(c[i++]);
            }
        }

        void monitor_write_hex(u32int n)
        {
            s32int tmp;
            monitor_write("0x");
            char noZeroes = 1;

            int i;
            for (i = 28; i > 0; i -= 4)
            {
                tmp = (n >> i) & 0xF;
                if (tmp == 0 && noZeroes != 0)
                {
                    continue;
                }
                if (tmp >= 0xA)
                {
                    noZeroes = 0;
                    monitor_put(tmp-0xA+'a');
                }
                else
                {
                    noZeroes = 0;
                    monitor_put(tmp+'0');
                }
            }

            tmp = n & 0xF;
            if (tmp >= 0xA)
            {
                monitor_put(tmp-0xA+'a');
            }
            else
            {
                monitor_put(tmp+'0');
            }
        }

        void monitor_write_dec(u32int n)
        {
            if (n == 0)
            {
                monitor_put('0');
                return;
            }

            s32int acc = n;
            char c[32];
            int i = 0;
            while (acc > 0)
            {
                c[i] = '0' + acc%10;
                acc /= 10;
                i++;
            }
            c[i] = 0;

            char c2[32];
            c2[i--] = 0;
            int j = 0;
            while (i >= 0)
            {
                c2[i--] = c[j++];
            }
            monitor_write(c2);
        }

    monitor.h:

        // monitor.h -- Defines the interface for monitor.c.
        //              From JamesM's kernel development tutorials.

        #ifndef MONITOR_H
        #define MONITOR_H

        #include "common.h"

        // Write a single character out to the screen.
        void monitor_put(char c);

        // Clear the screen to all black.
        void monitor_clear();

        // Output a null-terminated ASCII string to the monitor.
        void monitor_write(char *c);

        #endif // MONITOR_H

    common.c:

        // common.c -- Defines some global functions.
        //             From JamesM's kernel development tutorials.

        #include "common.h"

        // Write a byte out to the specified port.
        void outb(u16int port, u8int value)
        {
            asm volatile ("outb %1, %0" : : "dN" (port), "a" (value));
        }

        u8int inb(u16int port)
        {
            u8int ret;
            asm volatile ("inb %1, %0" : "=a" (ret) : "dN" (port));
            return ret;
        }

        u16int inw(u16int port)
        {
            u16int ret;
            asm volatile ("inw %1, %0" : "=a" (ret) : "dN" (port));
            return ret;
        }

        // Copy len bytes from src to dest.
        void memcpy(u8int *dest, const u8int *src, u32int len)
        {
            const u8int *sp = (const u8int *)src;
            u8int *dp = (u8int *)dest;
            for (; len != 0; len--) *dp++ = *sp++;
        }

        // Write len copies of val into dest.
        void memset(u8int *dest, u8int val, u32int len)
        {
            u8int *temp = (u8int *)dest;
            for (; len != 0; len--) *temp++ = val;
        }

        // Compare two strings. Should return -1 if
        // str1 < str2, 0 if they are equal or 1 otherwise.
        int strcmp(char *str1, char *str2)
        {
            int i = 0;
            int failed = 0;
            while (str1[i] != '\0' && str2[i] != '\0')
            {
                if (str1[i] != str2[i])
                {
                    failed = 1;
                    break;
                }
                i++;
            }
            // Why did the loop exit?
            if ((str1[i] == '\0' && str2[i] != '\0') ||
                (str1[i] != '\0' && str2[i] == '\0'))
                failed = 1;
            return failed;
        }

        // Copy the NULL-terminated string src into dest, and
        // return dest.
        char *strcpy(char *dest, const char *src)
        {
            do
            {
                *dest++ = *src++;
            } while (*src != 0);
        }

        // Concatenate the NULL-terminated string src onto
        // the end of dest, and return dest.
        char *strcat(char *dest, const char *src)
        {
            while (*dest != 0)
            {
                *dest = *dest++;
            }
            do
            {
                *dest++ = *src++;
            } while (*src != 0);
            return dest;
        }

    common.h:

        // common.h -- Defines typedefs and some global functions.
        //             From JamesM's kernel development tutorials.

        #ifndef COMMON_H
        #define COMMON_H

        // Some nice typedefs, to standardise sizes across platforms.
        // These typedefs are written for 32-bit x86.
        typedef unsigned int   u32int;
        typedef          int   s32int;
        typedef unsigned short u16int;
        typedef          short s16int;
        typedef unsigned char  u8int;
        typedef          char  s8int;

        void outb(u16int port, u8int value);
        u8int inb(u16int port);
        u16int inw(u16int port);

        #endif // COMMON_H

    main.c:

        // main.c -- Defines the C-code kernel entry point, calls initialisation routines.
        //           Made for JamesM's tutorials <www.jamesmolloy.co.uk>

        #include "monitor.h"

        int main(struct multiboot *mboot_ptr)
        {
            monitor_clear();
            monitor_write("hello, world!");
            return 0;
        }

    here is my makefile:

        C_SOURCES= main.c monitor.c common.c
        S_SOURCES= boot.s
        C_OBJECTS=$(patsubst %.c, obj/%.o, $(C_SOURCES))
        S_OBJECTS=$(patsubst %.s, obj/%.o, $(S_SOURCES))
        CFLAGS=-nostdlib -nostdinc -fno-builtin -fno-stack-protector -m32 -Iheaders
        LDFLAGS=-Tlink.ld -melf_i386 --oformat=elf32-i386
        ASFLAGS=-felf

        all: kern/kernel

        .PHONY: clean
        clean:
                -rm -f kern/kernel

        kern/kernel: $(S_OBJECTS) $(C_OBJECTS)
                ld $(LDFLAGS) -o $@ $^

        $(C_OBJECTS): obj/%.o : %.c
                gcc $(CFLAGS) $< -o $@

        vpath %.c source

        $(S_OBJECTS): obj/%.o : %.s
                nasm $(ASFLAGS) $< -o $@

        vpath %.s asem

    Hopefully this will help you understand what is going wrong and how to fix it. Thanks in advance. Jamie.


  • ob_start() is partially capturing data

    - by AAA
    I am using the following PHP code:

        // Generate GUID
        function NewGuid() {
            $s = strtoupper(uniqid(rand(), true));
            $guidText =
                substr($s, 0, 8) . '-' .
                substr($s, 8, 4) . '-' .
                substr($s, 12, 4) . '-' .
                substr($s, 16, 4) . '-' .
                substr($s, 20);
            return $guidText;
        }
        // End Generate GUID

        $Guid = NewGuid();
        $alphabet = '123456789abcdefghijkmnopqrstuvwxyzABCDEFGHJKLMNPQRSTUVWXYZ';

        function base_encode($num, $alphabet) {
            $base_count = strlen($alphabet);
            $encoded = '';
            while ($num >= $base_count) {
                $div = $num / $base_count;
                $mod = ($num - ($base_count * intval($div)));
                $encoded = $alphabet[$mod] . $encoded;
                $num = intval($div);
            }
            if ($num) $encoded = $alphabet[$num] . $encoded;
            return $encoded;
        }

        function base_decode($num, $alphabet) {
            $decoded = 0;
            $multi = 1;
            while (strlen($num) > 0) {
                $digit = $num[strlen($num) - 1];
                $decoded += $multi * strpos($alphabet, $digit);
                $multi = $multi * strlen($alphabet);
                $num = substr($num, 0, -1);
            }
            return $decoded;
        }

        ob_start();
        echo base_encode($Guid, $alphabet); // should output: bUKpk
        $theid = ob_get_contents();
        ob_get_clean();

    The problem: when I echo $theid, it shows the complete entry, but as it is being inserted into the database only the first character in the sequence gets inserted; for example, for the entry buKPK, only 'b' is inserted, not the rest.


  • C# TCP socket with session

    - by Zé Carlos
    Is there any way of dealing with sessions with sockets in C#? An example of my problem: I have a server with a socket listening on port 5672.

        TcpListener socket = new TcpListener(localAddr, 5672);
        socket.Start();
        Console.Write("Waiting for a connection... ");

        // Perform a blocking call to accept requests.
        TcpClient client = socket.AcceptTcpClient();
        Console.WriteLine("Connected to client!");

    And I have two clients that will each send one byte: client A sends 0x1 and client B sends 0x2. From the server side, I read this data like this:

        Byte[] bytes = new Byte[256];
        String data = null;
        NetworkStream stream = client.GetStream();
        while ((stream.Read(bytes, 0, bytes.Length)) != 0)
        {
            byte[] answer = new ...
            stream.Write(answer, 0, answer.Length);
        }

    Then client A sends 0x11. I need a way to know that this client is the same one that sent 0x1 before.
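    In TCP the accepted connection itself can serve as the session: every client that connects gets its own socket object, so any state tied to that object identifies the sender across reads. Below is a hedged, single-client sketch of that idea in C with BSD sockets (hypothetical names, error handling omitted); the C# analogue would be keeping each TcpClient returned by AcceptTcpClient and associating per-client state with it.

        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <stdio.h>
        #include <sys/socket.h>
        #include <unistd.h>

        struct session {
            int fd;                   /* the accepted connection IS the session */
            unsigned char first_byte; /* remembered per-client state */
        };

        int main(void)
        {
            int listener = socket(AF_INET, SOCK_STREAM, 0);
            struct sockaddr_in addr = {0};
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            addr.sin_port = htons(5672);
            bind(listener, (struct sockaddr *)&addr, sizeof addr);
            listen(listener, 8);

            /* one client for brevity; a real server would accept() per client
               and keep one struct session per descriptor */
            struct session s = { accept(listener, NULL, NULL), 0 };
            unsigned char b;
            while (recv(s.fd, &b, 1, 0) == 1) {
                if (s.first_byte == 0)
                    s.first_byte = b;  /* e.g. remember that this client sent 0x1 */
                printf("fd %d (first sent 0x%x) now sent 0x%x\n",
                       s.fd, s.first_byte, b);
            }
            close(s.fd);
            close(listener);
            return 0;
        }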


  • Question regarding ip checksum code

    - by looktt
        unsigned short /* this function generates header checksums */
        csum(unsigned short *buf, int nwords)
        {
            unsigned long sum;
            for (sum = 0; nwords > 0; nwords--)  // add words (16 bits) together
                sum += *buf++;
            sum = (sum >> 16) + (sum & 0xffff);  // add carry over
            sum += (sum >> 16);                  // what does this step do??? add a possible
                                                 // left-over byte? But isn't it already
                                                 // added in the loop (if any)?
            return ((unsigned short) ~sum);
        }

    I assume nwords is the number of 16-bit words, not 8-bit bytes (if there is an odd byte, nwords is rounded up to the next word) - is that correct? The line sum = (sum >> 16) + (sum & 0xffff) adds the carry over to form the 16-bit one's-complement sum. What is the purpose of the step sum += (sum >> 16)? Adding a left-over byte? How? Thanks!
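    For reference, here is a hedged sketch of a byte-oriented variant that makes both points explicit: an odd trailing byte is zero-padded into a final 16-bit word before summing, and the carry fold is applied twice because the first fold (a 16-bit value plus a 16-bit value) can itself produce a carry out of bit 15. The sample bytes in main are arbitrary:

        #include <stdint.h>
        #include <stdio.h>

        static uint16_t csum_bytes(const uint8_t *p, int nbytes)
        {
            uint32_t sum = 0;
            while (nbytes > 1) {
                sum += (uint32_t)(p[0] << 8 | p[1]); /* words in network byte order */
                p += 2;
                nbytes -= 2;
            }
            if (nbytes == 1)
                sum += (uint32_t)(p[0] << 8);        /* zero-pad the odd byte */
            sum = (sum >> 16) + (sum & 0xFFFF);      /* fold carries into low 16 bits */
            sum += sum >> 16;                        /* fold the carry the fold itself
                                                        may have produced */
            return (uint16_t)~sum;
        }

        int main(void)
        {
            uint8_t hdr[] = {0x45, 0x00, 0x00, 0x3c, 0x1c, 0x46, 0x40}; /* odd length */
            printf("0x%04x\n", csum_bytes(hdr, (int)sizeof hdr));
            return 0;
        }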


  • Find the Algorithm that generates the checksum

    - by knivmannen
    I have a sensing device that transmits a 6-byte message along with a 1-byte counter and, supposedly, a checksum. The data looks something like this:

        ------ DATA ------  Counter  Checksum?
        55 FF 00 00 EC FF     60        1F

    The last four bits in the counter are always 0, i.e. those bits are probably not used. The last byte is assumed to be the checksum, since it has a quite peculiar nature: it tends to change randomly as the data changes. Now what I need is to find the algorithm that computes this checksum based on the DATA. What I have tried is all possible CRC-8 polynomials; for each polynomial I have tried reflecting the data, toggling it, initialising with non-zeroes, etc. I've come to the conclusion that I am not dealing with a normal CRC algorithm. I have also tried some Fletcher and Adler methods without success, and XORed stuff back and forth, but still I have no clue how to generate the checksum. My biggest concern is: how is the counter used? The same data with a different counter value generates a different checksum. I have tried to include the counter in my computations, but without any luck. Here are some other data samples:

        55 FF 00 00 F0 FF A0 38
        66 0B EA FF BF FF C0 CA
        5E 18 EA FF B7 FF 60 BD
        F6 30 16 00 FC FE 10 81

    One more thing that might be worth mentioning is that the last byte in the data only takes the values FF or FE. Please, if you have any tips or tricks that I may try, post them here; I am truly desperate. Thanks
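    One way to make the search systematic is to brute-force CRC-8 parameters over the full 7-byte message (data plus counter, since the counter clearly influences the result) against all four samples at once. A hedged sketch below: it only sweeps polynomial and initial value, and reflection/final-XOR variants would be added the same way; the sample table is taken from the data quoted above.

        #include <stdint.h>
        #include <stdio.h>

        /* one sample: 6 data bytes + counter byte, plus the observed checksum */
        struct sample { uint8_t d[7]; uint8_t ck; };

        static uint8_t crc8(const uint8_t *p, int n, uint8_t poly, uint8_t init)
        {
            uint8_t crc = init;
            for (int i = 0; i < n; i++) {
                crc ^= p[i];
                for (int b = 0; b < 8; b++)
                    crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ poly)
                                       : (uint8_t)(crc << 1);
            }
            return crc;
        }

        int main(void)
        {
            struct sample s[] = {
                {{0x55, 0xFF, 0x00, 0x00, 0xF0, 0xFF, 0xA0}, 0x38},
                {{0x66, 0x0B, 0xEA, 0xFF, 0xBF, 0xFF, 0xC0}, 0xCA},
                {{0x5E, 0x18, 0xEA, 0xFF, 0xB7, 0xFF, 0x60}, 0xBD},
                {{0xF6, 0x30, 0x16, 0x00, 0xFC, 0xFE, 0x10}, 0x81},
            };
            int ns = (int)(sizeof s / sizeof s[0]);

            for (int poly = 0; poly < 256; poly++)
                for (int init = 0; init < 256; init++) {
                    int ok = 1;
                    for (int i = 0; ok && i < ns; i++)
                        ok = (crc8(s[i].d, 7, (uint8_t)poly, (uint8_t)init) == s[i].ck);
                    if (ok)
                        printf("match: poly=0x%02X init=0x%02X\n", poly, init);
                }
            return 0;
        }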


  • Testing broadcasting and receiving messages

    - by Avik
    Guys, I am having some difficulty figuring this out: I am trying to test whether the code (in C#) to broadcast a message and receive the message works. The code to send the datagram (in this case the hostname) is:

        public partial class Form1 : Form
        {
            String hostName;
            byte[] hostBuffer = new byte[1024];

            public Form1()
            {
                InitializeComponent();
                StartNotification();
            }

            public void StartNotification()
            {
                IPEndPoint notifyIP = new IPEndPoint(IPAddress.Broadcast, 6000);
                hostName = Dns.GetHostName();
                hostBuffer = Encoding.ASCII.GetBytes(hostName);
                UdpClient newUdpClient = new UdpClient();
                newUdpClient.Send(hostBuffer, hostBuffer.Length, notifyIP);
            }
        }

    And the code to receive the datagram is:

        public partial class Form1 : Form
        {
            byte[] receivedNotification = new byte[1024];
            String notificationReceived;
            StringBuilder listBox;
            UdpClient udpServer;
            IPEndPoint remoteEndPoint;

            public Form1()
            {
                InitializeComponent();
                udpServer = new UdpClient(new IPEndPoint(IPAddress.Any, 1234));
                remoteEndPoint = null;
                startUdpListener1();
            }

            public void startUdpListener1()
            {
                receivedNotification = udpServer.Receive(ref remoteEndPoint);
                notificationReceived = Encoding.ASCII.GetString(receivedNotification);
                listBox = new StringBuilder(this.listBox1.Text);
                listBox.AppendLine(notificationReceived);
                this.listBox1.Items.Add(listBox.ToString());
            }
        }

    For the receiving side I have a form that has only a listbox (listBox1). The problem here is that when I execute the receiving code, the program runs but the form isn't visible. However, when I comment out the function call (startUdpListener1()), the purpose isn't served but the form is visible. What's going wrong?


  • Rapid calls to fread crashes the application

    - by Slynk
    I'm writing a function to load a wave file and, in the process, split the data into two separate buffers if it's stereo. The program gets to i = 18 and crashes during the left-channel fread pass. (You can ignore the couts; they are just there for debugging.) Maybe I should load the file in one pass and use memmove to fill the buffers?

        if (params.channels == 2)
        {
            params.leftChannelData = new unsigned char[params.dataSize/2];
            params.rightChannelData = new unsigned char[params.dataSize/2];
            bool isLeft = true;
            int offset = 0;
            const int stride = sizeof(BYTE) * (params.bitsPerSample/8);

            for (int i = 0; i < params.dataSize; i += stride)
            {
                std::cout << "i = " << i << " ";
                if (isLeft)
                {
                    std::cout << "Before Left Channel, ";
                    fread(params.leftChannelData+offset, sizeof(BYTE), stride, file + i);
                    std::cout << "After Left Channel, ";
                }
                else
                {
                    std::cout << "Before Right Channel, ";
                    fread(params.rightChannelData+offset, sizeof(BYTE), stride, file + i);
                    std::cout << "After Right Channel, ";
                    offset += stride;
                    std::cout << "After offset incr.\n";
                }
                isLeft != isLeft;
            }
        }
        else
        {
            params.leftChannelData = new unsigned char[params.dataSize];
            fread(params.leftChannelData, sizeof(BYTE), params.dataSize, file);
        }
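    Two details in the quoted code stand out: file + i performs pointer arithmetic on the FILE* handle itself (fread already advances the stream position, so no offset should be passed that way), and isLeft != isLeft is a comparison rather than the toggle isLeft = !isLeft. The one-pass idea suggested above is sound; here is a hedged C sketch of it, where split_stereo, dataSize, and bytesPerSample are hypothetical stand-ins for the fields used in the question:

        #include <stdio.h>
        #include <stdlib.h>

        /* read the interleaved PCM data once, then de-interleave into
           caller-provided left/right buffers of dataSize/2 bytes each */
        static int split_stereo(FILE *file, size_t dataSize, size_t bytesPerSample,
                                unsigned char *left, unsigned char *right)
        {
            unsigned char *raw = malloc(dataSize);
            if (raw == NULL || fread(raw, 1, dataSize, file) != dataSize) {
                free(raw);
                return -1;
            }
            size_t frame = 2 * bytesPerSample;   /* one left + one right sample */
            size_t out = 0;
            for (size_t i = 0; i + frame <= dataSize; i += frame, out += bytesPerSample) {
                for (size_t b = 0; b < bytesPerSample; b++) {
                    left[out + b]  = raw[i + b];                  /* left sample bytes  */
                    right[out + b] = raw[i + bytesPerSample + b]; /* right sample bytes */
                }
            }
            free(raw);
            return 0;
        }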


  • How to open the download window when a dynamically created link is clicked in ASP.NET

    - by Ranjana
    I have stored a text file in the database. I need to show the text file when I click the link, and this link has to be created dynamically. My code is below.

    aspx.cs:

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!Page.IsPostBack)
            {
                DataTable dtassignment = new DataTable();
                dtassignment = serviceobj.DisplayAssignment(Session["staffname"].ToString());
                if (dtassignment != null)
                {
                    Byte[] bytes = (Byte[])dtassignment.Rows[0]["Data"];
                    //download(dtassignment);
                }
                divlink.InnerHtml = "";
                divlink.Visible = true;
                foreach (DataRow r in dtassignment.Rows)
                {
                    divlink.InnerHtml += "<a href='" + "'onclick='download(dtassignment)'>"
                        + r["Filename"].ToString() + "</a>" + "<br/>";
                }
            }
        }

        public void download(DataTable dtassignment)
        {
            System.Diagnostics.Debugger.Break();
            Byte[] bytes = (Byte[])dtassignment.Rows[0]["Data"];
            Response.Buffer = true;
            Response.Charset = "";
            Response.Cache.SetCacheability(HttpCacheability.NoCache);
            Response.ContentType = dtassignment.Rows[0]["ContentType"].ToString();
            Response.AddHeader("content-disposition",
                "attachment;filename=" + dtassignment.Rows[0]["FileName"].ToString());
            Response.BinaryWrite(bytes);
            Response.Flush();
            Response.End();
        }

    I got the link created dynamically, but I am not able to download the text file when I click the link. How do I carry this out? Please help me out.


  • Alpha blending colors in .NET Compact Framework 2.0

    - by Adam Haile
    In the full .NET Framework you can use the Color.FromArgb() method to create a new color with alpha blending, like this:

        Color blended = Color.FromArgb(alpha, color);

    or

        Color blended = Color.FromArgb(alpha, red, green, blue);

    However, in the Compact Framework (2.0 specifically), neither of those overloads is available; you only get:

        Color.FromArgb(int red, int green, int blue);

    and

        Color.FromArgb(int val);

    The first one obviously doesn't even let you enter an alpha value, but the documentation for the latter shows that val is a 32-bit ARGB value (0xAARRGGBB, as opposed to the standard 24-bit 0xRRGGBB), so it would make sense that you could just build the ARGB value and pass it to the function. I tried this with the following:

        private Color FromARGB(byte alpha, byte red, byte green, byte blue)
        {
            int val = (alpha << 24) | (red << 16) | (green << 8) | blue;
            return Color.FromArgb(val);
        }

    But no matter what I do, the alpha blending never works; the resulting color always has full opacity, even when setting the alpha value to 0. Has anyone gotten this to work on the Compact Framework?


  • how to send binary data within an xml string

    - by daemonkid
    I want to send a binary file to a .NET C# component in the following XML format:

        <BinaryFileString fileType='pdf'>
            <!-- binary file data string here -->
        </BinaryFileString>

    In the component that is called, I will use the above XML string and convert the binary string received within the BinaryFileString tag into a file of the type specified by the fileType attribute. The file type could be doc/pdf/xls/rtf. I have the code in the calling application to get the bytes from the file to be sent. How do I prepare them to be sent with XML tags wrapped around them? I want the application to send out a string to the component, not a byte stream, because there is no way I can work out the file type (pdf/doc/xls) by just looking at the byte stream - hence the XML string with the fileType attribute. Any ideas on this? My method for extracting the bytes is below:

        FileStream fs = new FileStream(_filePath, FileMode.Open, FileAccess.Read);
        using (Stream input = fs)
        {
            byte[] buffer = new byte[8192];
            int bytesRead;
            while ((bytesRead = input.Read(buffer, 0, buffer.Length)) > 0)
            {
            }
        }
        return buffer;

    Thanks.
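    Raw binary bytes can't be embedded in XML text as-is; the usual approach is to Base64-encode them, so the element holds only XML-safe characters and the receiver decodes them back to bytes (in .NET, for instance, via Convert.FromBase64String). Below is a hedged, self-contained C sketch of the encoding side; the four doc bytes stand in for real file contents:

        #include <stdio.h>
        #include <stdlib.h>

        static const char tbl[] =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

        /* encode n bytes as Base64; caller frees the returned string */
        static char *base64(const unsigned char *src, size_t n)
        {
            size_t outlen = 4 * ((n + 2) / 3);
            char *out = malloc(outlen + 1), *p = out;
            if (out == NULL) return NULL;
            size_t i;
            for (i = 0; i + 3 <= n; i += 3) {           /* full 3-byte groups */
                unsigned v = src[i] << 16 | src[i+1] << 8 | src[i+2];
                *p++ = tbl[v >> 18]; *p++ = tbl[(v >> 12) & 63];
                *p++ = tbl[(v >> 6) & 63]; *p++ = tbl[v & 63];
            }
            if (n - i == 1) {                           /* 1 byte left: pad "==" */
                unsigned v = src[i] << 16;
                *p++ = tbl[v >> 18]; *p++ = tbl[(v >> 12) & 63];
                *p++ = '='; *p++ = '=';
            } else if (n - i == 2) {                    /* 2 bytes left: pad "=" */
                unsigned v = src[i] << 16 | src[i+1] << 8;
                *p++ = tbl[v >> 18]; *p++ = tbl[(v >> 12) & 63];
                *p++ = tbl[(v >> 6) & 63]; *p++ = '=';
            }
            *p = '\0';
            return out;
        }

        int main(void)
        {
            unsigned char doc[] = {0x25, 0x50, 0x44, 0x46};  /* "%PDF" magic */
            char *b64 = base64(doc, sizeof doc);
            printf("<BinaryFileString fileType='pdf'>%s</BinaryFileString>\n", b64);
            free(b64);
            return 0;
        }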


  • Windows Build System: How to build a project (from its source code) which doesn't have *.sln or Visual Studio project files?

    - by claws
    I'm facing this problem: I need to build the support libraries (zlib, libtiff, libpng, libxml2, libiconv) with the "Multithreaded DLL" (/MD) and "Multithreaded DLL Debug" (/MDd) run-time options. But the problem is that there is no direct way; I mean there is no *.sln / *.vcproj file which I can open in Visual C++ and build. I'm familiar with the GNU build system:

        ./configure --with-all-sorts-of-required-switches
        ./make
        ./make install

    During my search I've encountered something called CMake, which generates *.vcproj and *.sln files, but for that a CMakeLists.txt is required, and not all projects provide a CMakeLists.txt. I've never compiled anything from the Visual C++ command line. Generally, most projects provide a makefile; how do I generate a *.vcproj / *.sln from one? Can I compile with mingw32-make from MinGW? If I can, how do I set the different run-time library options ("Multi-Threaded" (/MT), "Multi-Threaded Debug" (/MTd), "Multi-Threaded DLL" (/MD), "Multi-Threaded DLL Debug" (/MDd))? I don't know what other ways are available. Please throw some light on this.


  • General String Encryption in .NET

    - by cryptospin
    I am looking for a general string encryption class in .NET (not to be confused with the SecureString class). I have started to come up with my own class, but I thought there must be a .NET class that already allows you to encrypt/decrypt strings of any encoding with any cryptographic service provider.

        Public Class SecureString
            Private key() As Byte
            Private iv() As Byte
            Private m_SecureString As String

            Public ReadOnly Property Encrypted() As String
                Get
                    Return m_SecureString
                End Get
            End Property

            Public ReadOnly Property Decrypted() As String
                Get
                    Return Decrypt(m_SecureString)
                End Get
            End Property

            Public Sub New(ByVal StringToSecure As String)
                If StringToSecure Is Nothing Then StringToSecure = ""
                m_SecureString = Encrypt(StringToSecure)
            End Sub

            Private Function Encrypt(ByVal StringToEncrypt As String) As String
                Dim result As String = ""
                Dim bytes() As Byte = Text.Encoding.UTF8.GetBytes(StringToEncrypt)
                Using provider As New AesCryptoServiceProvider()
                    With provider
                        .Mode = CipherMode.CBC
                        .GenerateKey()
                        .GenerateIV()
                        key = .Key
                        iv = .IV
                    End With
                    Using ms As New IO.MemoryStream
                        Using cs As New CryptoStream(ms, provider.CreateEncryptor(), CryptoStreamMode.Write)
                            cs.Write(bytes, 0, bytes.Length)
                            cs.FlushFinalBlock()
                        End Using
                        result = Convert.ToBase64String(ms.ToArray())
                    End Using
                End Using
                Return result
            End Function

            Private Function Decrypt(ByVal StringToDecrypt As String) As String
                Dim result As String = ""
                Dim bytes() As Byte = Convert.FromBase64String(StringToDecrypt)
                Using provider As New AesCryptoServiceProvider()
                    Using ms As New IO.MemoryStream
                        Using cs As New CryptoStream(ms, provider.CreateDecryptor(key, iv), CryptoStreamMode.Write)
                            cs.Write(bytes, 0, bytes.Length)
                            cs.FlushFinalBlock()
                        End Using
                        result = Text.Encoding.UTF8.GetString(ms.ToArray())
                    End Using
                End Using
                Return result
            End Function
        End Class


  • Alignment in assembly

    - by jena
    Hi, I'm spending some time on assembly programming (gas, in particular) and recently I learned about the align directive. I think I've understood the very basics, but I would like to gain a deeper understanding of its nature and of when to use alignment. For instance, I wondered about the assembly code of a simple C++ switch statement. I know that under certain circumstances switch statements are based on jump tables, as in the following few lines of code:

            .section .rodata
            .align 4
            .align 4
        .L8:
            .long .L2
            .long .L3
            .long .L4
            .long .L5
            ...

    .align 4 aligns the following data on the next 4-byte boundary, which ensures that fetching these memory locations is efficient, right? I think this is done because there might be things happening before the switch statement which caused misalignment. But why are there actually two calls to .align? Are there any rules of thumb for when to use .align, or should it simply be done whenever a new block of data is stored in memory and something prior to it could have caused misalignment? In the case of arrays, it seems that alignment is done on 32-byte boundaries as soon as the array occupies at least 32 bytes. Is it more efficient to do it this way, or is there another reason for the 32-byte boundary? I'd appreciate any explanation or hint on literature.


  • Java InputStream encoding/charset

    - by Tobbe
    Running the following (example) code:

        import java.io.*;

        public class test {
            public static void main(String[] args) throws Exception {
                byte[] buf = {-27};
                InputStream is = new ByteArrayInputStream(buf);
                BufferedReader r = new BufferedReader(
                        new InputStreamReader(is, "ISO-8859-1"));
                String s = r.readLine();
                System.out.println("test.java:9 [byte] (char)" + (char)s.getBytes()[0]
                        + " (int)" + (int)s.getBytes()[0]);
                System.out.println("test.java:10 [char] (char)" + (char)s.charAt(0)
                        + " (int)" + (int)s.charAt(0));
                System.out.println("test.java:11 string below");
                System.out.println(s);
                System.out.println("test.java:13 string above");
            }
        }

    gives me this output:

        test.java:9 [byte] (char)? (int)63
        test.java:10 [char] (char)? (int)229
        test.java:11 string below
        ?
        test.java:13 string above

    How do I retain the correct byte value (-27) in the line-9 printout and, consequently, receive the expected output of the System.out.println(s) command (å)?


  • Parsing a JSON feed from YQL using jQuery

    - by Keith
    I am using YQL's query.multi to grab multiple feeds, so I can parse a single JSON feed with jQuery and reduce the number of connections I'm making. In order to parse a single feed, I need to be able to check the type of each result (photo, item, entry, etc.) so I can pull out items in specific ways. Because of the way the items are nested within the JSON feed, I'm not sure of the best way to loop through the results, check the type, and then loop through the items to display them. Here is a YQL (http://developer.yahoo.com/yql/console/) query.multi example, where you can see three different result types (entry, photo, and item) and the items nested within them:

        select * from query.multi where queries=
            "select * from twitter.user.timeline where id='twitter';
             select * from flickr.photos.search where has_geo='true' and text='san francisco';
             select * from delicious.feeds.popular"

    or here is the JSON feed itself:

        http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20query.multi%20where%20queries%3D%22select%20*%20from%20flickr.photos.search%20where%20user_id%3D'23433895%40N00'%3Bselect%20*%20from%20delicious.feeds%20where%20username%3D'keith.muth'%3Bselect%20*%20from%20twitter.user.timeline%20where%20id%3D'keithmuth'%22&format=json&env=store%3A%2F%2Fdatatables.org%2Falltableswithkeys&callback=


  • Graphing the pitch (frequency) of a sound

    - by Coronatus
    I want to plot the pitch of a sound on a graph. Currently I can plot the amplitude: the graph below is created from the data returned by getUnscaledAmplitude():

        AudioInputStream audioInputStream = AudioSystem.getAudioInputStream(
                new BufferedInputStream(new FileInputStream(file)));
        byte[] bytes = new byte[(int) (audioInputStream.getFrameLength())
                * (audioInputStream.getFormat().getFrameSize())];
        audioInputStream.read(bytes);

        // Get amplitude values for each audio channel in an array.
        graphData = type.getUnscaledAmplitude(bytes, this);

        public int[][] getUnscaledAmplitude(byte[] eightBitByteArray, AudioInfo audioInfo)
        {
            int[][] toReturn = new int[audioInfo.getNumberOfChannels()]
                    [eightBitByteArray.length / (2 * audioInfo.getNumberOfChannels())];
            int index = 0;

            for (int audioByte = 0; audioByte < eightBitByteArray.length;)
            {
                for (int channel = 0; channel < audioInfo.getNumberOfChannels(); channel++)
                {
                    // Do the byte to sample conversion.
                    int low = (int) eightBitByteArray[audioByte];
                    audioByte++;
                    int high = (int) eightBitByteArray[audioByte];
                    audioByte++;
                    int sample = (high << 8) + (low & 0x00ff);

                    if (sample < audioInfo.sampleMin)
                    {
                        audioInfo.sampleMin = sample;
                    }
                    else if (sample > audioInfo.sampleMax)
                    {
                        audioInfo.sampleMax = sample;
                    }
                    toReturn[channel][index] = sample;
                }
                index++;
            }

            return toReturn;
        }

    But I need to show the audio's pitch, not its amplitude. A fast Fourier transform appears to be the way to get the pitch, but it needs to know more variables than the raw bytes I have, and it is very complex and mathematical. Is there a way I can do this?
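    For what it's worth, the main extra variable a Fourier-style analysis needs beyond the raw samples is the sample rate (available from the stream's AudioFormat). As a hedged illustration of the idea - not production pitch detection, since a spectral peak is only a rough proxy for perceived pitch - here is a self-contained naive DFT peak-pick in C, run on a synthetic 440 Hz tone; the answer comes out within one bin (sampleRate/N ≈ 7.8 Hz) of 440:

        #include <math.h>
        #include <stdio.h>

        #ifndef M_PI
        #define M_PI 3.14159265358979323846
        #endif

        /* Naive O(N^2) DFT peak-pick: returns the strongest bin as a
           frequency in Hz. Fine as a sketch; real code would use an FFT. */
        static double dominant_freq(const double *x, int n, double sampleRate)
        {
            int bestK = 1;
            double bestMag = 0.0;
            for (int k = 1; k < n / 2; k++) {   /* skip DC, stop below Nyquist */
                double re = 0.0, im = 0.0;
                for (int t = 0; t < n; t++) {
                    double ang = 2.0 * M_PI * k * t / n;
                    re += x[t] * cos(ang);
                    im -= x[t] * sin(ang);
                }
                double mag = re * re + im * im;
                if (mag > bestMag) {
                    bestMag = mag;
                    bestK = k;
                }
            }
            return bestK * sampleRate / n;
        }

        int main(void)
        {
            enum { N = 1024 };
            double sampleRate = 8000.0, buf[N];
            for (int t = 0; t < N; t++)         /* synthetic 440 Hz sine */
                buf[t] = sin(2.0 * M_PI * 440.0 * t / sampleRate);
            printf("dominant: %.1f Hz\n", dominant_freq(buf, N, sampleRate));
            return 0;
        }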


  • How can I create an Image in GDI+ from a Base64-Encoded string in C++?

    - by Schnapple
    I have an application, currently written in C#, which can take a Base64-encoded string and turn it into an Image (a TIFF image in this case), and vice versa. In C# this is actually pretty simple:

        private byte[] ImageToByteArray(Image img)
        {
            MemoryStream ms = new MemoryStream();
            img.Save(ms, System.Drawing.Imaging.ImageFormat.Tiff);
            return ms.ToArray();
        }

        private Image byteArrayToImage(byte[] byteArrayIn)
        {
            MemoryStream ms = new MemoryStream(byteArrayIn);
            BinaryWriter bw = new BinaryWriter(ms);
            bw.Write(byteArrayIn);
            Image returnImage = Image.FromStream(ms, true, false);
            return returnImage;
        }

        // Convert Image into string
        byte[] imagebytes = ImageToByteArray(anImage);
        string Base64EncodedStringImage = Convert.ToBase64String(imagebytes);

        // Convert string into Image
        byte[] imagebytes = Convert.FromBase64String(Base64EncodedStringImage);
        Image anImage = byteArrayToImage(imagebytes);

    (And, now that I'm looking at it, it could be simplified even further.) I now have a business need to do this in C++. I'm using GDI+ to draw the graphics (Windows only so far) and I already have code to decode the string in C++ (to another string). What I'm stumbling on, however, is getting the information into an Image object in GDI+. At this point I figure I need either: a) a way of converting that Base64-decoded string into an IStream to feed to the Image object's FromStream function; b) a way to convert the Base64-encoded string into an IStream to feed to the Image object's FromStream function (so, different code than I'm currently using); or c) some completely different way I'm not thinking of here. My C++ skills are very rusty and I'm also spoiled by the managed .NET platform, so if I'm attacking this all wrong I'm open to suggestions.


  • How to encrypt/decrypt a file in Java?

    - by Petike
    Hello, I am writing a Java application which can "encrypt" and consequently "decrypt" any binary file. I am just a beginner in the cryptography area, so I would like to write a very simple application for a start. For reading the original file, I would probably use the java.io.FileInputStream class to get the array of bytes byte originalBytes[] of the file. Then I would probably use some very simple cipher, for example "shift every byte up by 1", and then I would get the encrypted bytes byte encryptedBytes[]. Let's say that I would also set a password for it, for example "123456789". Next, when somebody wants to decrypt the file, he has to enter the password ("123456789") first, and after that the file can be decrypted (thus "shift every byte down by 1") and saved to the output file via java.io.FileOutputStream. I am just wondering how to store the password information in the encrypted file, so that the decrypting application knows whether the entered password and the real password are equal. It would probably be silly to add the password itself (for example the ASCII ordinal numbers of the password letters) to the beginning of the file, before the encrypted data. So my main question is: how do I store the password information in the encrypted file?
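    A common pattern here is to store not the password but a one-way hash of it in a small header before the encrypted data; the decryptor hashes whatever the user types and compares. Below is a hedged, illustration-only C sketch of the idea using a toy FNV-1a hash (all names hypothetical; a real application would store a salted hash from a proper key-derivation function such as PBKDF2, and in Java one would reach for, e.g., java.security.MessageDigest):

        #include <stdint.h>
        #include <stdio.h>

        /* Toy 64-bit FNV-1a over a NUL-terminated string. Illustration only:
           never store a bare fast hash of a password in real code. */
        static uint64_t fnv1a(const char *s)
        {
            uint64_t h = 1469598103934665603ULL;
            while (*s != '\0') {
                h ^= (unsigned char)*s++;
                h *= 1099511628211ULL;
            }
            return h;
        }

        /* file layout: [8-byte password hash][encrypted payload...] */
        static int write_header(FILE *out, const char *password)
        {
            uint64_t h = fnv1a(password);
            return fwrite(&h, sizeof h, 1, out) == 1 ? 0 : -1;
        }

        static int password_matches(FILE *in, const char *entered)
        {
            uint64_t stored;
            if (fread(&stored, sizeof stored, 1, in) != 1)
                return 0;
            return stored == fnv1a(entered);
        }

        int main(void)
        {
            FILE *f = tmpfile();
            write_header(f, "123456789");
            rewind(f);
            printf("match: %d\n", password_matches(f, "123456789")); /* prints 1 */
            fclose(f);
            return 0;
        }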


  • How to get SimpleRpcClient.Call() to be a blocking call to achieve synchronous communication with RabbitMQ?

    - by Nick Josevski
    In the .NET version (2.4.1) of RabbitMQ, RabbitMQ.Client.MessagePatterns.SimpleRpcClient has a Call() method with these signatures:

        public virtual object[] Call(params object[] args);
        public virtual byte[] Call(byte[] body);
        public virtual byte[] Call(IBasicProperties requestProperties, byte[] body,
                                   out IBasicProperties replyProperties);

    The problem: with various attempts, the method still does not block where I expect it to, so it is never able to handle the response. The question: am I missing something obvious in the setup of the SimpleRpcClient, or earlier with the IModel, IConnection, or even PublicationAddress? More info: I've also tried various parameter configurations of the QueueDeclare() method, with no luck:

        string QueueDeclare(string queue, bool durable, bool exclusive,
                            bool autoDelete, IDictionary arguments);

    Some more reference code for my setup of these:

        IConnection conn = new ConnectionFactory { Address = "127.0.0.1" }.CreateConnection();
        using (IModel ch = conn.CreateModel())
        {
            var client = new SimpleRpcClient(ch, queueName);
            var queueName = ch.QueueDeclare("t.qid", true, true, true, null);
            ch.QueueBind(queueName, "exch", "", null);

            // HERE: does not block?
            var replyMessageBytes = client.Call(prop, msgToSend, out replyProp);
        }

    Looking elsewhere: or is it likely there's an issue in my server-side code? With and without the use of BasicAck(), it appears the client has already continued execution.

