Search Results

Search found 7985 results on 320 pages for 'multi byte'.

  • How do you deserialize a collection with child collections?

    - by Stuart Helwig
    I have a collection of custom entity objects, one property of which is an ArrayList of byte arrays. The custom entity is serializable and the collection property is marked with the following attributes: [XmlArray("Images"), XmlArrayItem("Image",typeof(byte[]))] So I serialize a collection of these custom entities and pass them to a web service, as a string. The web service receives the string and byte array intact. The following code then attempts to deserialize the collection - back into custom entities for processing... XmlSerializer ser = new XmlSerializer(typeof(List<myCustomEntity>)); StringReader reader = new StringReader(xmlStringPassedToWS); List<myCustomEntity> entities = (List<myCustomEntity>)ser.Deserialize(reader); foreach (myCustomEntity e in entities) { // ...do some stuff... foreach (myChildCollection c in e.ChildCollection) { // .. do some more stuff.... } } I've checked the XML resulting from the initial serialization and it does contain the byte arrays - the child collection - as does the StringReader built above. After the deserialization process, the resulting collection of custom entities is fine, except that each object in the collection does not contain any items in its child collection (i.e. it doesn't get to "...do some more stuff..." above). Can someone please explain what I am doing wrong? Is it possible to serialize ArrayLists within a generic collection of custom entities?
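
    One thing worth ruling out is the untyped ArrayList itself. Below is a minimal round-trip sketch (the entity shape is guessed from the question) that swaps the ArrayList for a typed List<byte[]>, which gives XmlSerializer full type information and lets the child collection be verified in isolation:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Xml.Serialization;

        public class MyCustomEntity
        {
            // Same attributes as in the question, but on a typed list.
            [XmlArray("Images"), XmlArrayItem("Image", typeof(byte[]))]
            public List<byte[]> Images { get; set; }
        }

        public static class RoundTripTest
        {
            public static void Main()
            {
                var source = new List<MyCustomEntity>
                {
                    new MyCustomEntity { Images = new List<byte[]> { new byte[] { 1, 2, 3 } } }
                };

                var ser = new XmlSerializer(typeof(List<MyCustomEntity>));
                var sw = new StringWriter();
                ser.Serialize(sw, source);

                // Deserialize from the same string, as the web service code does.
                var restored = (List<MyCustomEntity>)ser.Deserialize(new StringReader(sw.ToString()));
                Console.WriteLine(restored[0].Images.Count);   // expect 1, not 0
            }
        }

    If the typed version round-trips and the ArrayList version does not, the untyped collection is the culprit: XmlSerializer cannot reconstruct element types it cannot see in the declaration.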

  • Can't decrypt after encrypting with blowfish Java

    - by user2030599
    Hello, I'm new to Java and I have the following problem: I'm trying to encrypt the password of a user using the Blowfish algorithm, but when I try to decrypt it back to check the authentication, it fails to decrypt for some reason. public static String encryptBlowFish(String to_encrypt, String salt){ String dbpassword = null; try{ SecretKeySpec skeySpec = new SecretKeySpec( salt.getBytes(), "Blowfish" ); // Instantiate the cipher. Cipher cipher = Cipher.getInstance("Blowfish/CBC/PKCS5Padding"); cipher.init(Cipher.ENCRYPT_MODE, skeySpec); //byte[] encrypted = cipher.doFinal( URLEncoder.encode(data).getBytes() ); byte[] encrypted = cipher.doFinal( to_encrypt.getBytes() ); dbpassword = new String(encrypted); } catch (Exception e) { System.out.println("Exception while encrypting"); e.printStackTrace(); dbpassword = null; } finally { return dbpassword; } } public static String decryptBlowFish(String to_decrypt, String salt){ String dbpassword = null; try{ SecretKeySpec skeySpec = new SecretKeySpec( salt.getBytes(), "Blowfish" ); // Instantiate the cipher. Cipher cipher = Cipher.getInstance("Blowfish/CBC/PKCS5Padding"); cipher.init(Cipher.DECRYPT_MODE, skeySpec); //byte[] encrypted = cipher.doFinal( URLEncoder.encode(data).getBytes() ); byte[] encrypted = cipher.doFinal( to_decrypt.getBytes() ); dbpassword = new String(encrypted); } catch (Exception e) { System.out.println("Exception while decrypting"); e.printStackTrace(); dbpassword = null; } finally { return dbpassword; } } When I call the decrypt function it gives me the following error: java.security.InvalidKeyException: Parameters missing Any ideas? Thank you
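
    The "Parameters missing" error is characteristic of CBC mode being initialized for decryption without an IV: in ENCRYPT_MODE the provider can generate a random IV for you, but in DECRYPT_MODE it cannot, so the IV used during encryption has to be stored (commonly prepended to the ciphertext) and supplied again. A second trap in the code above is new String(encrypted), which corrupts raw cipher bytes; ciphertext should travel as Base64. A sketch of both points in C# terms (using AES, since the .NET base library has no Blowfish; the CBC/IV handling is the same idea, and everything here is illustrative rather than the poster's API):

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class CbcSketch
        {
            // Encrypts, prepends the auto-generated IV, and returns Base64
            // so the result survives being stored or sent as a string.
            public static string Encrypt(string plain, byte[] key)
            {
                using (var aes = Aes.Create())        // CBC + PKCS7 padding by default
                {
                    aes.Key = key;                    // must be a valid AES key length
                    byte[] data = Encoding.UTF8.GetBytes(plain);
                    byte[] cipher = aes.CreateEncryptor().TransformFinalBlock(data, 0, data.Length);
                    byte[] packed = new byte[aes.IV.Length + cipher.Length];
                    Buffer.BlockCopy(aes.IV, 0, packed, 0, aes.IV.Length);
                    Buffer.BlockCopy(cipher, 0, packed, aes.IV.Length, cipher.Length);
                    return Convert.ToBase64String(packed);
                }
            }

            public static string Decrypt(string packedBase64, byte[] key)
            {
                byte[] packed = Convert.FromBase64String(packedBase64);
                using (var aes = Aes.Create())
                {
                    aes.Key = key;
                    byte[] iv = new byte[aes.BlockSize / 8];   // 16 bytes for AES
                    Buffer.BlockCopy(packed, 0, iv, 0, iv.Length);
                    aes.IV = iv;                               // the same IV encryption used
                    byte[] plain = aes.CreateDecryptor()
                        .TransformFinalBlock(packed, iv.Length, packed.Length - iv.Length);
                    return Encoding.UTF8.GetString(plain);
                }
            }
        }

    (For passwords specifically, the usual design is a one-way salted hash rather than reversible encryption.)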

  • C# performance methods of receiving data from a socket?

    - by Daniel
    Let's assume we have a simple internet socket, and it's going to send 10 megabytes (because I want to ignore memory issues) of random data through. Is there any performance difference or a best practice method that one should use for receiving data? The final output data should be represented by a byte[]. Yes, I know writing an arbitrary amount of data to memory is bad, and if I were downloading a large file I wouldn't be doing it like this. But for argument's sake let's ignore that and assume it's a smallish amount of data. I also realise that the bottleneck here is probably not the memory management but rather the socket receiving. I just want to know what would be the most efficient method of receiving data. A few dodgy ways I can think of are: have a List and a buffer; after the buffer is full, add it to the list, and at the end list.ToArray() to get the byte[]. Or write the buffer to a memory stream, and after it's complete construct a byte[] of the stream.Length and read it all into it in order to get the byte[] output. Is there a more efficient/better way of doing this?
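
    When the total length is known up front (as in the 10 MB example), the cheapest option is usually neither of the two listed: allocate the byte[] once and loop Receive directly into it at an advancing offset, so nothing is ever copied twice. A minimal sketch:

        using System.Net.Sockets;

        static byte[] ReceiveExactly(Socket socket, int totalBytes)
        {
            byte[] result = new byte[totalBytes];
            int received = 0;
            while (received < totalBytes)
            {
                // Receive fills the buffer starting at 'received'; it returns 0
                // only if the remote side closed the connection early.
                int n = socket.Receive(result, received, totalBytes - received, SocketFlags.None);
                if (n == 0)
                    throw new SocketException((int)SocketError.ConnectionReset);
                received += n;
            }
            return result;
        }

    When the length is not known in advance, the MemoryStream variant is the usual compromise: MemoryStream.ToArray() costs one final copy, and GetBuffer() avoids even that if the unused capacity at the end is acceptable.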

  • What is the Pythonic way to implement a simple FSM?

    - by Vicky
    Yesterday I had to parse a very simple binary data file - the rule is, look for two bytes in a row that are both 0xAA, then the next byte will be a length byte, then skip 9 bytes and output the given amount of data from there. Repeat to the end of the file. My solution did work, and was very quick to put together (even though I am a C programmer at heart, I still think it was quicker for me to write this in Python than it would have been in C) - BUT, it is clearly not at all Pythonic and it reads like a C program (and not a very good one at that!) What would be a better / more Pythonic approach to this? Is a simple FSM like this even still the right choice in Python? My solution:

        #! /usr/bin/python
        import sys

        f = open(sys.argv[1], "rb")
        state = 0
        if f:
            for byte in f.read():
                a = ord(byte)
                if state == 0:
                    if a == 0xAA:
                        state = 1
                elif state == 1:
                    if a == 0xAA:
                        state = 2
                    else:
                        state = 0
                elif state == 2:
                    count = a
                    skip = 9
                    state = 3
                elif state == 3:
                    skip = skip - 1
                    if skip == 0:
                        state = 4
                elif state == 4:
                    print "%02x" % a
                    count = count - 1
                    if count == 0:
                        state = 0
                        print "\r\n"

  • Incorrect syntax inserting data into table

    - by SelectDistinct
    I am having some trouble with my updatedata() method. The idea is that the user provides a recipe name, ingredients, and instructions, and then selects an image using FileStream. Once the user clicks 'Add Recipe' this calls the updatedata() method; however, as things stand I am getting an error that mentions the contents of the text box. Here is the updatedata() method code: private void updatedata() { // filestream object to read the image // full length of image to a byte array try { // try to see if the image has a valid path if (imagename != "") { FileStream fs; fs = new FileStream(@imagename, FileMode.Open, FileAccess.Read); // a byte array to read the image byte[] picbyte = new byte[fs.Length]; fs.Read(picbyte, 0, System.Convert.ToInt32(fs.Length)); fs.Close(); //open the database using odp.net and insert the lines string connstr = @"Server=mypcname\SQLEXPRESS;Database=RecipeOrganiser;Trusted_Connection=True"; SqlConnection conn = new SqlConnection(connstr); conn.Open(); string query; query = "insert into Recipes(RecipeName,RecipeImage,RecipeIngredients,RecipeInstructions) values (" + textBox1.Text + "," + " @pic" + "," + textBox2.Text + "," + textBox3.Text + ")"; SqlParameter picparameter = new SqlParameter(); picparameter.SqlDbType = SqlDbType.Image; picparameter.ParameterName = "pic"; picparameter.Value = picbyte; SqlCommand cmd = new SqlCommand(query, conn); cmd.Parameters.Add(picparameter); cmd.ExecuteNonQuery(); MessageBox.Show("Image successfully saved"); cmd.Dispose(); conn.Close(); conn.Dispose(); Connection(); } } catch (Exception ex) { MessageBox.Show(ex.Message); } } Can anyone see where I have gone wrong with the insert into Recipes query or suggest an alternative approach to this part of the code?
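
    The syntax error comes from the text box values being concatenated into the SQL without quotes - and quoting them by hand would still break on any value containing an apostrophe, besides opening the door to SQL injection. Passing every value as a parameter, exactly as is already done for @pic, fixes both problems. A sketch of the reworked command (same table and column names as above; this fragment assumes the surrounding method):

        // using System.Data; using System.Data.SqlClient;
        string query = "insert into Recipes(RecipeName, RecipeImage, RecipeIngredients, RecipeInstructions) " +
                       "values (@name, @pic, @ingredients, @instructions)";

        using (SqlConnection conn = new SqlConnection(connstr))
        using (SqlCommand cmd = new SqlCommand(query, conn))
        {
            cmd.Parameters.AddWithValue("@name", textBox1.Text);
            cmd.Parameters.Add("@pic", SqlDbType.Image).Value = picbyte;
            cmd.Parameters.AddWithValue("@ingredients", textBox2.Text);
            cmd.Parameters.AddWithValue("@instructions", textBox3.Text);

            conn.Open();
            cmd.ExecuteNonQuery();   // the using blocks handle Close/Dispose
        }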

  • JNA Passing Structure By Reference Help

    - by tyeh26
    Hi all, I'm trying to use JNA to talk to a USB device plugged into the computer, using Java and a .dll that was provided to me. I am having trouble with the Write function. C code: typedef struct { unsigned int id; unsigned int timestamp; unsigned char flags; unsigned char len; unsigned char data[16]; } CANMsg; CAN_STATUS canplus_Write( CANHANDLE handle, //long CANMsg *msg ); Java equivalent: public class CANMsg extends Structure{ public int id = 0; public int timestamp = 0; public byte flags = 0; public byte len = 8; public byte data[] = new byte[16]; } int canplus_Write(NativeLong handle, CANMsg msg); I have confirmed that I can open and close the device. The close requires the NativeLong handle, so I am assuming that the CANMsg msg is the issue here. I have also confirmed that the device works when tested with C-only code. I have read the JNA documentation thoroughly... I think. Any pointers? Thanks all.

  • Generic that takes only numeric types (int double etc)?

    - by brandon
    In a program I'm working on, I need to write a function to take any numeric type (int, short, long etc.) and shove it into a byte array at a specific offset. There exists a BitConverter.GetBytes() method that takes the numeric type and returns it as a byte array, and this method only takes numeric types. So far I have: private void AddToByteArray<T>(byte[] destination, int offset, T toAdd) where T : struct { Buffer.BlockCopy(BitConverter.GetBytes(toAdd), 0, destination, offset, sizeof(toAdd)); } So basically my goal is that, for example, a call to AddToByteArray(array, 3, (short)10) would take 10 and store it in the 4th slot of array. The explicit cast exists because I know exactly how many bytes I want it to take up. There are cases where I would want a number that is small enough to be a short to really take up 4 bytes. On the flip side, there are times when I want an int to be crunched down to just a single byte. I'm doing this to create a custom network packet, if that makes any ideas pop into your heads. If the where clause of a generic supported something like "where T : int || long || etc" I would be OK. (And no need to explain why they don't support that; the reason is fairly obvious.) Any help would be greatly appreciated! Edit: I realize that I could just do a bunch of overloads, one for each type I want to support... but I'm asking this question because I want to avoid precisely that :)
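
    On later C# versions (7.3 added the unmanaged constraint; MemoryMarshal ships in System.Memory / .NET Core), this exact shape is expressible without overloads - sketched below. The constraint admits every primitive numeric type, and the number of bytes written is decided by the static type the caller passes, matching the (short) cast trick in the question:

        using System;
        using System.Runtime.InteropServices;

        static class ByteArrayWriter
        {
            // T : unmanaged covers int, short, long, double, etc.;
            // sizeof(T) is known to the JIT and no boxing occurs.
            public static void AddToByteArray<T>(byte[] destination, int offset, T value)
                where T : unmanaged
            {
                MemoryMarshal.Write(destination.AsSpan(offset), ref value);
            }
        }

        // Usage: writes exactly 2 bytes starting at index 3.
        // ByteArrayWriter.AddToByteArray(array, 3, (short)10);

    Like BitConverter, this writes in native (little-endian on x86) byte order; for a network protocol with a defined endianness, BinaryPrimitives in System.Buffers.Binary is the explicit alternative.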

  • Java Out of memory - Out of heap size

    - by user1907849
    I downloaded the sample program for file transfer between the client and the server. When I try running the program with a 1 GB file, I get: Exception in thread "main" java.lang.OutOfMemoryError: Java heap space at Client.main(Client.java:31). Edit: line 31 is: byte [] mybytearray = new byte [FILE_SIZE]; public final static int FILE_SIZE = 1097742336; // receive file long startTime = System.nanoTime(); byte [] mybytearray = new byte [FILE_SIZE]; InputStream is = sock.getInputStream(); fos = new FileOutputStream(FILE_TO_RECEIVED); bos = new BufferedOutputStream(fos); bytesRead = is.read(mybytearray,0,mybytearray.length); current = bytesRead; do { bytesRead = is.read(mybytearray, current, (mybytearray.length-current)); if(bytesRead >= 0) current += bytesRead; } while(bytesRead > -1); bos.write(mybytearray, 0 , current); bos.flush(); Is there any fix for this?
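
    The OutOfMemoryError is simply the 1 GB byte[FILE_SIZE] allocation on line 31; raising the JVM heap with -Xmx would mask it, but nothing about a file transfer requires the whole file in memory. Copying through a small fixed buffer keeps memory flat regardless of file size. The pattern, sketched in C# terms (the Java InputStream/OutputStream loop has the identical shape):

        using System.IO;

        static void CopyStream(Stream input, Stream output)
        {
            byte[] buffer = new byte[64 * 1024];   // 64 KB chunk; memory use stays constant
            int n;
            while ((n = input.Read(buffer, 0, buffer.Length)) > 0)
            {
                output.Write(buffer, 0, n);        // write only the bytes actually read
            }
            output.Flush();
        }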

  • How to backup using backup API's in c++

    - by user1603185
    I am writing an application to back up specified files, using the backup API calls, i.e. the CreateFile, BackupRead and WriteFile APIs. I am getting "Access violation reading location" errors. I have attached the code below. #include <windows.h> int main() { HANDLE hInput, hOutput; //m_filename is a variable holding the file path to read from hInput = CreateFile(L"C:\\Key.txt", GENERIC_READ, 0, NULL, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL); //strLocation contains the path of the file I want to create. hOutput= CreateFile(L"C:\\tmp\\", GENERIC_WRITE, NULL, NULL, CREATE_ALWAYS, NULL, NULL); DWORD dwBytesToRead = 1024 * 1024 * 10; BYTE *buffer; buffer = new BYTE[dwBytesToRead]; BOOL bReadSuccess = false,bWriteSuccess = false; DWORD dwBytesRead,dwBytesWritten; LPVOID lpContext; //Now comes the important bit: do { bReadSuccess = BackupRead(hInput, buffer, sizeof(BYTE) *dwBytesToRead, &dwBytesRead, false, true, &lpContext); bWriteSuccess= WriteFile(hOutput, buffer, sizeof(BYTE) *dwBytesRead, &dwBytesWritten, NULL); }while(dwBytesRead == dwBytesToRead); return 0; } Can anyone suggest how to use these APIs? Thanks.

  • Memory increases with Java UDP Server

    - by Trevor
    I have a simple UDP server that creates a new thread for processing incoming data. While testing it by sending about 100 packets/second, I notice that its memory usage continues to increase. Is there any leak evident from my code below? Here is the code for the server. public class UDPServer { public static void main(String[] args) { UDPServer server = new UDPServer(15001); server.start(); } private int port; public UDPServer(int port) { this.port = port; } public void start() { try { DatagramSocket ss = new DatagramSocket(this.port); while(true) { byte[] data = new byte[1412]; DatagramPacket receivePacket = new DatagramPacket(data, data.length); ss.receive(receivePacket); new DataHandler(receivePacket.getData()).start(); } } catch (IOException e) { e.printStackTrace(); } } } Here is the code for the new thread that processes the data. For now, the run() method doesn't do anything. public class DataHandler extends Thread { private byte[] data; public DataHandler(byte[] data) { this.data = data; } @Override public void run() { System.out.println("run"); } }

  • send and receive in socket [closed]

    - by user3696492
    I have trouble sending an object through a socket in C#; my client can send to the server, but the server can't send to the client. I think there is something wrong with the client. Server: private void Form1_Load(object sender, EventArgs e) { CheckForIllegalCrossThreadCalls = false; Thread a = new Thread(connect); a.Start(); } private void sendButton_Click(object sender, EventArgs e) { client.Send(SerializeData(ShapeList[ShapeList.Count - 1])); } void connect() { try { server = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); iep = new IPEndPoint(IPAddress.Parse("127.0.0.1"), 5555); server.Bind(iep); server.Listen(10); client = server.Accept(); while (true) { byte[] data = new byte[1024]; client.Receive(data); PaintObject a = (PaintObject)DeserializeData(data); ShapeList.Add(a); Invalidate(); } } catch (Exception ex) { MessageBox.Show(ex.Message); } } Client: private void Form1_Load(object sender, EventArgs e) { CheckForIllegalCrossThreadCalls = false; Thread a = new Thread(connect); a.Start(); } private void SendButton_Click(object sender, EventArgs e) { client.Send(SerializeData(ShapeList[ShapeList.Count - 1])); } void connect() { try { client = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); iep = new IPEndPoint(IPAddress.Parse("127.0.0.1"), 5555); client.Connect(iep); while (true) { byte[] data = new byte[1024]; client.Receive(data); PaintObject a = (PaintObject)DeserializeData(data); ShapeList.Add(a); Invalidate(); } } catch (Exception ex) { MessageBox.Show(ex.Message); } }
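
    Whatever else is wrong, there is a framing bug on both sides: TCP is a byte stream, not a message protocol, so a single Receive into a 1024-byte buffer can return half a serialized object or one and a half. Length-prefixing every message and then reading exactly that many bytes makes each receive deterministic. A sketch (SerializeData/DeserializeData stand for the question's own helpers):

        using System;
        using System.Net.Sockets;

        static void SendMessage(Socket socket, byte[] payload)
        {
            socket.Send(BitConverter.GetBytes(payload.Length));   // 4-byte length prefix
            socket.Send(payload);
        }

        static byte[] ReceiveMessage(Socket socket)
        {
            byte[] header = ReceiveExactly(socket, 4);
            int length = BitConverter.ToInt32(header, 0);
            return ReceiveExactly(socket, length);                // exactly one object's bytes
        }

        // Same exact-read loop as in the socket performance question above.
        static byte[] ReceiveExactly(Socket socket, int count)
        {
            byte[] buffer = new byte[count];
            int read = 0;
            while (read < count)
            {
                int n = socket.Receive(buffer, read, count - read, SocketFlags.None);
                if (n == 0) throw new SocketException((int)SocketError.ConnectionReset);
                read += n;
            }
            return buffer;
        }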

  • txt file read/overwrite/append. Is this feasible? (Visual C#)

    - by Arcadian
    Hi, I'm writing a program for some data entry I have to periodically do. I have begun testing a few things that the program will have to do, but I'm not sure about this part. What I need this part to do is: read a .txt file of data; take the first 12 characters from each line; take the first 12 characters from each line of the data that has been entered in a multi-line text box; compare the two lists line by line; if one of the 12-character blocks from the multi-line text box matches one of the blocks in the .txt file, then overwrite that entire line (only 17 characters in total); if one of the 12-character blocks from the multi-line text box does NOT match any of the blocks in the .txt file, then append that entire line to the file. That's all it has to do. I'll do an example:

        TXT FILE:
        G01:78:08:32 JG05
        G08:80:93:10 JG02
        G28:58:29:28 JG04

        MULTI-LINE TEXT BOX:
        G01:78:08:32 JG06
        G28:58:29:28 JG03
        G32:10:18:14 JG01
        G32:18:50:78 JG07

        RESULTING TXT FILE:
        G01:78:08:32 JG06
        G08:80:93:10 JG02
        G28:58:29:28 JG03
        G32:10:18:14 JG01
        G32:18:50:78 JG07

    As you can see, lines 1 and 3 were overwritten, line 2 was left alone as it did not match any blocks in the text box, and lines 4 and 5 were appended to the file. That's all I want it to do. How do I go about this? Thanks in advance
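
    Since the first 12 characters act as a key and every record is a fixed 17-character line, the natural shape is: read the file into a list, index it by the 12-character prefix, overwrite or append from the text box, then rewrite the file. A sketch (the method and parameter names are placeholders):

        using System.Collections.Generic;
        using System.IO;

        static void MergeIntoFile(string path, IEnumerable<string> textBoxLines)
        {
            // Preserve file order; key = first 12 chars, value = position of the line.
            var lines = new List<string>(File.ReadAllLines(path));
            var index = new Dictionary<string, int>();
            for (int i = 0; i < lines.Count; i++)
                index[lines[i].Substring(0, 12)] = i;

            foreach (string entry in textBoxLines)
            {
                string key = entry.Substring(0, 12);
                int pos;
                if (index.TryGetValue(key, out pos))
                    lines[pos] = entry;              // overwrite the matching line
                else
                {
                    index[key] = lines.Count;        // append a new line at the end
                    lines.Add(entry);
                }
            }
            File.WriteAllLines(path, lines.ToArray());
        }

    The textBoxLines argument would come straight from the multi-line text box's Lines property.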

  • C++ bit shifting

    - by JB_SO
    Hi, I am new to working with bits and bytes in C++, and I'm looking at some previously developed code that I need some help understanding. There is a byte array being populated with some data, and I noticed that the data was being ANDed ('&') with 0x0F (please see the code snippet below). I don't really understand what is going on there... if somebody could please explain that, it would be greatly appreciated. Thanks! //Message Definition /* Byte 1: Bit(s) 3:0 = Unused; set to zero Bit(s) 7:4 = Message ID; set to 10 */ /* Byte 2: Bit(s) 3:0 = Unused; set to zero Bit(s) 7:4 = Acknowledge Message ID; set to 11 */ //Implementation BYTE Msg_Arry[2]; int Msg_Id = 10; int AckMsg_Id = 11; Msg_Arry[0] = Msg_Id & 0x0F; //MsgID & Unused Msg_Arry[1] = AckMsg_Id & 0x0F; //AckMsgID & Unused
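
    & 0x0F masks a value down to its low four bits (one nibble): it forces bits 7:4 to zero and keeps bits 3:0. Note the tension with the comment block, which says the message ID belongs in bits 7:4 - packing it there needs a shift as well as the mask. Both operations, sketched (C# shown; the operators behave identically in C++):

        byte msgId = 10;                                    // 0000 1010
        byte lowNibble  = (byte)(msgId & 0x0F);             // 0x0A - ID left in bits 3:0
        byte highNibble = (byte)((msgId & 0x0F) << 4);      // 0xA0 - ID in bits 7:4, per the comment
        int  unpacked   = (highNibble >> 4) & 0x0F;         // 10   - reading the ID back out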

  • file transfer through bluetooth

    - by venkat
    Is it possible to transfer files from one Android phone to any other device through Bluetooth? If it is possible, please send me a link to some sample code... switch (msg.what) { case MESSAGE_STATE_CHANGE: if(D) Log.i(TAG, "MESSAGE_STATE_CHANGE: " + msg.arg1); switch (msg.arg1) { case BluetoothChatService.STATE_CONNECTED: mTitle.setText(R.string.title_connected_to); mTitle.append(mConnectedDeviceName); mConversationArrayAdapter.clear(); break; case BluetoothChatService.STATE_CONNECTING: mTitle.setText(R.string.title_connecting); break; case BluetoothChatService.STATE_LISTEN: case BluetoothChatService.STATE_NONE: mTitle.setText(R.string.title_not_connected); break; } break; case MESSAGE_WRITE: byte[] writeBuf = (byte[]) msg.obj; // construct a string from the buffer String writeMessage = new String(writeBuf); mConversationArrayAdapter.add("Me: " + writeMessage); break; case MESSAGE_READ: byte[] readBuf = (byte[]) msg.obj; // construct a string from the valid bytes in the buffer String readMessage = new String(readBuf, 0, msg.arg1); mConversationArrayAdapter.add(mConnectedDeviceName+": " + readMessage); break; case MESSAGE_DEVICE_NAME: // save the connected device's name mConnectedDeviceName = msg.getData().getString(DEVICE_NAME); Toast.makeText(getApplicationContext(), "Connected to " + mConnectedDeviceName, Toast.LENGTH_SHORT).show(); break; case MESSAGE_TOAST: Toast.makeText(getApplicationContext(), msg.getData().getString(TOAST), Toast.LENGTH_SHORT).show(); break;

  • Java - Display % of upload done

    - by tr-raziel
    I have a Java applet for uploading files to a server. I want to display the % of data sent, but when I use ObjectOutputStream.write() it just writes to the buffer and does not wait until the data has actually been sent. How can I achieve this? Perhaps I need to use thread synchronization or something. Any clues would be most helpful. This is the code I'm using right now: try{ for(File file : ficheiros){ FileInputStream stream = new FileInputStream (file); int bytesRead1 = 0; int off1 = 0; int len1 = 100000; if(file.length() < 100000) len1 = new Long(file.length()).intValue(); byte[] bytes1 = new byte[len1]; while (off1 < file.length()) { bytes1 = new byte[len1]; if((file.length() - off1) < len1){ len1 = (new Long(file.length()).intValue() - off1); bytes1 = new byte[len1]; } if((bytesRead1 = stream.read(bytes1)) != -1){ //I want this to block until all data has been sent outputToServlet.write(bytes1, 0, bytesRead1 ); System.out.println("off1: " + off1); off1 = off1 + len1; outputToServlet.flush(); } sent += len1; if(sent>totalLength) sent = (int)totalLength; updateFeedback(sent,totalLength,false);//calls method to display % } updateFeedback(-1,-1,true); } }catch(Exception e){ e.printStackTrace(); } Thanks
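
    One caveat first: counting bytes you have written only measures hand-off to the local buffer; true delivery can only be confirmed by an acknowledgement from the server, though chunked writes plus flush usually track closely enough for a progress bar. A reusable way to do the counting is a decorator stream, sketched here in C# terms (Java's FilterOutputStream plays the same role):

        using System;
        using System.IO;

        class ProgressStream : Stream
        {
            private readonly Stream inner;
            private readonly Action<long> onProgress;
            private long written;

            public ProgressStream(Stream inner, Action<long> onProgress)
            {
                this.inner = inner;
                this.onProgress = onProgress;
            }

            public override void Write(byte[] buffer, int offset, int count)
            {
                inner.Write(buffer, offset, count);
                written += count;
                onProgress(written);   // bytes handed to the underlying stream so far
            }

            public override void Flush() { inner.Flush(); }

            // Read/seek members are unused for a write-only upload stream.
            public override bool CanRead  => false;
            public override bool CanSeek  => false;
            public override bool CanWrite => true;
            public override long Length   => inner.Length;
            public override long Position { get { return written; } set { throw new NotSupportedException(); } }
            public override int Read(byte[] b, int o, int c) { throw new NotSupportedException(); }
            public override long Seek(long o, SeekOrigin s) { throw new NotSupportedException(); }
            public override void SetLength(long v) { throw new NotSupportedException(); }
        }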

  • C# 4: The Curious ConcurrentDictionary

    - by James Michael Hare
    In my previous post (here) I did a comparison of the new ConcurrentQueue versus the old standard of a System.Collections.Generic Queue with simple locking.  The results were exactly what I would have hoped, that the ConcurrentQueue was faster with multi-threading for most all situations.  In addition, concurrent collections have the added benefit that you can enumerate them even if they're being modified. So I set out to see what the improvements would be for the ConcurrentDictionary, would it have the same performance benefits as the ConcurrentQueue did?  Well, after running some tests and multiple tweaks and tunes, I have good and bad news. But first, let's look at the tests.  Obviously there's many things we can do with a dictionary.  One of the most notable uses, of course, in a multi-threaded environment is for a small, local in-memory cache.  So I set about to do a very simple simulation of a cache where I would create a test class that I'll just call an Accessor.  This accessor will attempt to look up a key in the dictionary, and if the key exists, it stops (i.e. a cache "hit").  However, if the lookup fails, it will then try to add the key and value to the dictionary (i.e. a cache "miss").  So here's the Accessor that will run the tests: 1: internal class Accessor 2: { 3: public int Hits { get; set; } 4: public int Misses { get; set; } 5: public Func<int, string> GetDelegate { get; set; } 6: public Action<int, string> AddDelegate { get; set; } 7: public int Iterations { get; set; } 8: public int MaxRange { get; set; } 9: public int Seed { get; set; } 10:  11: public void Access() 12: { 13: var randomGenerator = new Random(Seed); 14:  15: for (int i=0; i<Iterations; i++) 16: { 17: // give a wide spread so will have some duplicates and some unique 18: var target = randomGenerator.Next(1, MaxRange); 19:  20: // attempt to grab the item from the cache 21: var result = GetDelegate(target); 22:  23: // if the item doesn't exist, add it 24: if(result == null) 25: { 26: AddDelegate(target, target.ToString()); 27: Misses++; 28: } 29: else 30: { 31: Hits++; 32: } 33: } 34: } 35: } Note that so I could test different implementations, I defined a GetDelegate and AddDelegate that will call the appropriate dictionary methods to add or retrieve items in the cache using various techniques. So let's examine the three techniques I decided to test: Dictionary with mutex - Just your standard generic Dictionary with a simple lock construct on an internal object. Dictionary with ReaderWriterLockSlim - Same Dictionary, but now using a lock designed to let multiple readers access simultaneously and then locked when a writer needs access. ConcurrentDictionary - The new ConcurrentDictionary from System.Collections.Concurrent that is supposed to be optimized to allow multiple threads to access safely. So the approach to each of these is also fairly straight-forward.  Let's look at the GetDelegate and AddDelegate implementations for the Dictionary with mutex lock: 1: var addDelegate = (key,val) => 2: { 3: lock (_mutex) 4: { 5: _dictionary[key] = val; 6: } 7: }; 8: var getDelegate = (key) => 9: { 10: lock (_mutex) 11: { 12: string val; 13: return _dictionary.TryGetValue(key, out val) ? val : null; 14: } 15: }; Nothing new or fancy here, just your basic lock on a private object and then query/insert into the Dictionary. 
Now, for the Dictionary with ReadWriteLockSlim it's a little more complex: 1: var addDelegate = (key,val) => 2: { 3: _readerWriterLock.EnterWriteLock(); 4: _dictionary[key] = val; 5: _readerWriterLock.ExitWriteLock(); 6: }; 7: var getDelegate = (key) => 8: { 9: string val; 10: _readerWriterLock.EnterReadLock(); 11: if(!_dictionary.TryGetValue(key, out val)) 12: { 13: val = null; 14: } 15: _readerWriterLock.ExitReadLock(); 16: return val; 17: }; And finally, the ConcurrentDictionary, which since it does all it's own concurrency control, is remarkably elegant and simple: 1: var addDelegate = (key,val) => 2: { 3: _concurrentDictionary[key] = val; 4: }; 5: var getDelegate = (key) => 6: { 7: string s; 8: return _concurrentDictionary.TryGetValue(key, out s) ? s : null; 9: };                    Then, I set up a test harness that would simply ask the user for the number of concurrent Accessors to attempt to Access the cache (as specified in Accessor.Access() above) and then let them fly and see how long it took them all to complete.  Each of these tests was run with 10,000,000 cache accesses divided among the available Accessor instances.  All times are in milliseconds. 1: Dictionary with Mutex Locking 2: --------------------------------------------------- 3: Accessors Mostly Misses Mostly Hits 4: 1 7916 3285 5: 10 8293 3481 6: 100 8799 3532 7: 1000 8815 3584 8:  9:  10: Dictionary with ReaderWriterLockSlim Locking 11: --------------------------------------------------- 12: Accessors Mostly Misses Mostly Hits 13: 1 8445 3624 14: 10 11002 4119 15: 100 11076 3992 16: 1000 14794 4861 17:  18:  19: Concurrent Dictionary 20: --------------------------------------------------- 21: Accessors Mostly Misses Mostly Hits 22: 1 17443 3726 23: 10 14181 1897 24: 100 15141 1994 25: 1000 17209 2128 The first test I did across the board is the Mostly Misses category.  The mostly misses (more adds because data requested was not in the dictionary) shows an interesting trend.  In both cases the Dictionary with the simple mutex lock is much faster, and the ConcurrentDictionary is the slowest solution.  But this got me thinking, and a little research seemed to confirm it, maybe the ConcurrentDictionary is more optimized to concurrent "gets" than "adds".  So since the ratio of misses to hits were 2 to 1, I decided to reverse that and see the results. So I tweaked the data so that the number of keys were much smaller than the number of iterations to give me about a 2 to 1 ration of hits to misses (twice as likely to already find the item in the cache than to need to add it).  And yes, indeed here we see that the ConcurrentDictionary is indeed faster than the standard Dictionary here.  I have a strong feeling that as the ration of hits-to-misses gets higher and higher these number gets even better as well.  This makes sense since the ConcurrentDictionary is read-optimized. Also note that I tried the tests with capacity and concurrency hints on the ConcurrentDictionary but saw very little improvement, I think this is largely because on the 10,000,000 hit test it quickly ramped up to the correct capacity and concurrency and thus the impact was limited to the first few milliseconds of the run. So what does this tell us?  Well, as in all things, ConcurrentDictionary is not a panacea.  It won't solve all your woes and it shouldn't be the only Dictionary you ever use.  So when should we use each? Use System.Collections.Generic.Dictionary when: You need a single-threaded Dictionary (no locking needed). 
You need a multi-threaded Dictionary that is loaded only once at creation and never modified (no locking needed). You need a multi-threaded Dictionary to store items where writes are far more prevalent than reads (locking needed). And use System.Collections.Concurrent.ConcurrentDictionary when: You need a multi-threaded Dictionary where the reads are far more prevalent than the writes. You need to be able to iterate over the collection without locking it even if it's being modified. Both Dictionaries have their strong suits; I have a feeling this is just one where you need to know from design what you hope to use it for and make your decision based on those criteria.
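
One footnote to the cache pattern tested above: with ConcurrentDictionary, the separate lookup-then-add steps of the Accessor can collapse into a single GetOrAdd call, which also closes the race where two threads miss at the same time (both may still run the value factory, but only one result is stored). A sketch:

    using System.Collections.Concurrent;

    var cache = new ConcurrentDictionary<int, string>();

    // One call replaces the getDelegate/addDelegate pair; 'target' is the
    // randomly generated key from the Accessor test harness.
    string value = cache.GetOrAdd(target, key => key.ToString());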

  • Optimizing AES modes on Solaris for Intel Westmere

    - by danx
    Optimizing AES modes on Solaris for Intel Westmere Review AES is a strong method of symmetric (secret-key) encryption. It is a U.S. FIPS-approved cryptographic algorithm (FIPS 197) that operates on 16-byte blocks. AES has been available since 2001 and is widely used. However, AES by itself has a weakness. AES encryption isn't usually used by itself because identical blocks of plaintext are always encrypted into identical blocks of ciphertext. This encryption can be easily attacked with "dictionaries" of common blocks of text and allows one to more-easily discern the content of the unknown cryptotext. This mode of encryption is called "Electronic Code Book" (ECB), because one in theory can keep a "code book" of all known cryptotext and plaintext results to cipher and decipher AES. In practice, a complete "code book" is not practical, even in electronic form, but large dictionaries of common plaintext blocks is still possible. Here's a diagram of encrypting input data using AES ECB mode: Block 1 Block 2 PlainTextInput PlainTextInput | | | | \/ \/ AESKey-->(AES Encryption) AESKey-->(AES Encryption) | | | | \/ \/ CipherTextOutput CipherTextOutput Block 1 Block 2 What's the solution to the same cleartext input producing the same ciphertext output? The solution is to further process the encrypted or decrypted text in such a way that the same text produces different output. This usually involves an Initialization Vector (IV) and XORing the decrypted or encrypted text. As an example, I'll illustrate CBC mode encryption: Block 1 Block 2 PlainTextInput PlainTextInput | | | | \/ \/ IV >----->(XOR) +------------->(XOR) +---> . . . . | | | | | | | | \/ | \/ | AESKey-->(AES Encryption) | AESKey-->(AES Encryption) | | | | | | | | | \/ | \/ | CipherTextOutput ------+ CipherTextOutput -------+ Block 1 Block 2 The steps for CBC encryption are: Start with a 16-byte Initialization Vector (IV), choosen randomly. XOR the IV with the first block of input plaintext Encrypt the result with AES using a user-provided key. The result is the first 16-bytes of output cryptotext. Use the cryptotext (instead of the IV) of the previous block to XOR with the next input block of plaintext Another mode besides CBC is Counter Mode (CTR). As with CBC mode, it also starts with a 16-byte IV. However, for subsequent blocks, the IV is just incremented by one. Also, the IV ix XORed with the AES encryption result (not the plain text input). Here's an illustration: Block 1 Block 2 PlainTextInput PlainTextInput | | | | \/ \/ AESKey-->(AES Encryption) AESKey-->(AES Encryption) | | | | \/ \/ IV >----->(XOR) IV + 1 >---->(XOR) IV + 2 ---> . . . . | | | | \/ \/ CipherTextOutput CipherTextOutput Block 1 Block 2 Optimization Which of these modes can be parallelized? ECB encryption/decryption can be parallelized because it does more than plain AES encryption and decryption, as mentioned above. CBC encryption can't be parallelized because it depends on the output of the previous block. However, CBC decryption can be parallelized because all the encrypted blocks are known at the beginning. CTR encryption and decryption can be parallelized because the input to each block is known--it's just the IV incremented by one for each subsequent block. So, in summary, for ECB, CBC, and CTR modes, encryption and decryption can be parallelized with the exception of CBC encryption. How do we parallelize encryption? By interleaving. 
Usually when reading and writing data there are pipeline "stalls" (idle processor cycles) that result from waiting for memory to be loaded or stored to or from CPU registers. Since the software is written to encrypt/decrypt the next data block where pipeline stalls usually occurs, we can avoid stalls and crypt with fewer cycles. This software processes 4 blocks at a time, which ensures virtually no waiting ("stalling") for reading or writing data in memory. Other Optimizations Besides interleaving, other optimizations performed are Loading the entire key schedule into the 128-bit %xmm registers. This is done once for per 4-block of data (since 4 blocks of data is processed, when present). The following is loaded: the entire "key schedule" (user input key preprocessed for encryption and decryption). This takes 11, 13, or 15 registers, for AES-128, AES-192, and AES-256, respectively The input data is loaded into another %xmm register The same register contains the output result after encrypting/decrypting Using SSSE 4 instructions (AESNI). Besides the aesenc, aesenclast, aesdec, aesdeclast, aeskeygenassist, and aesimc AESNI instructions, Intel has several other instructions that operate on the 128-bit %xmm registers. Some common instructions for encryption are: pxor exclusive or (very useful), movdqu load/store a %xmm register from/to memory, pshufb shuffle bytes for byte swapping, pclmulqdq carry-less multiply for GCM mode Combining AES encryption/decryption with CBC or CTR modes processing. Instead of loading input data twice (once for AES encryption/decryption, and again for modes (CTR or CBC, for example) processing, the input data is loaded once as both AES and modes operations occur at in the same function Performance Everyone likes pretty color charts, so here they are. I ran these on Solaris 11 running on a Piketon Platform system with a 4-core Intel Clarkdale processor @3.20GHz. Clarkdale which is part of the Westmere processor architecture family. The "before" case is Solaris 11, unmodified. Keep in mind that the "before" case already has been optimized with hand-coded Intel AESNI assembly. The "after" case has combined AES-NI and mode instructions, interleaved 4 blocks at-a-time. « For the first table, lower is better (milliseconds). The first table shows the performance improvement using the Solaris encrypt(1) and decrypt(1) CLI commands. I encrypted and decrypted a 1/2 GByte file on /tmp (swap tmpfs). Encryption improved by about 40% and decryption improved by about 80%. AES-128 is slighty faster than AES-256, as expected. The second table shows more detail timings for CBC, CTR, and ECB modes for the 3 AES key sizes and different data lengths. » The results shown are the percentage improvement as shown by an internal PKCS#11 microbenchmark. And keep in mind the previous baseline code already had optimized AESNI assembly! The keysize (AES-128, 192, or 256) makes little difference in relative percentage improvement (although, of course, AES-128 is faster than AES-256). Larger data sizes show better improvement than 128-byte data. Availability This software is in Solaris 11 FCS. It is available in the 64-bit libcrypto library and the "aes" Solaris kernel module. You must be running hardware that supports AESNI (for example, Intel Westmere and Sandy Bridge, microprocessor architectures). The easiest way to determine if AES-NI is available is with the isainfo(1) command. 
For example, $ isainfo -v 64-bit amd64 applications pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu 32-bit i386 applications pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu No special configuration or setup is needed to take advantage of this software. Solaris libraries and kernel automatically determine if it's running on AESNI-capable machines and execute the correctly-tuned software for the current microprocessor. Summary Maximum throughput of AES cipher modes can be achieved by combining AES encryption with modes processing, interleaving encryption of 4 blocks at a time, and using Intel's wide 128-bit %xmm registers and instructions. References "Block cipher modes of operation", Wikipedia Good overview of AES modes (ECB, CBC, CTR, etc.) "Advanced Encryption Standard", Wikipedia "Current Modes" describes NIST-approved block cipher modes (ECB,CBC, CFB, OFB, CCM, GCM)

  • Protecting Cookies: Once and For All

    - by Your DisplayName here!
    Every once in a while you run into a situation where you need to temporarily store data for a user in a web app. You typically have two options here – either store server-side or put the data into a cookie (if size permits). When you need web farm compatibility in addition – things become a little bit more complicated because the data needs to be available on all nodes. In my case I went for a cookie – but I had some requirements Cookie must be protected from eavesdropping (sent only over SSL) and client script Cookie must be encrypted and signed to be protected from tampering with Cookie might become bigger than 4KB – some sort of overflow mechanism would be nice I really didn’t want to implement another cookie protection mechanism – this feels wrong and btw can go wrong as well. WIF to the rescue. The session management feature already implements the above requirements but is built around de/serializing IClaimsPrincipals into cookies and back. But if you go one level deeper you will find the CookieHandler and CookieTransform classes which contain all the needed functionality. public class ProtectedCookie {     private List<CookieTransform> _transforms;     private ChunkedCookieHandler _handler = new ChunkedCookieHandler();     // DPAPI protection (single server)     public ProtectedCookie()     {         _transforms = new List<CookieTransform>             {                 new DeflateCookieTransform(),                 new ProtectedDataCookieTransform()             };     }     // RSA protection (load balanced)     public ProtectedCookie(X509Certificate2 protectionCertificate)     {         _transforms = new List<CookieTransform>             {                 new DeflateCookieTransform(),                 new RsaSignatureCookieTransform(protectionCertificate),                 new RsaEncryptionCookieTransform(protectionCertificate)             };     }     // custom transform pipeline     public ProtectedCookie(List<CookieTransform> transforms)     {         _transforms = transforms;     }     public void Write(string name, string value, DateTime expirationTime)     {         byte[] encodedBytes = EncodeCookieValue(value);         _handler.Write(encodedBytes, name, expirationTime);     }     public void Write(string name, string value, DateTime expirationTime, string domain, string path)     {         byte[] encodedBytes = EncodeCookieValue(value);         _handler.Write(encodedBytes, name, path, domain, expirationTime, true, true, HttpContext.Current);     }     public string Read(string name)     {         var bytes = _handler.Read(name);         if (bytes == null || bytes.Length == 0)         {             return null;         }         return DecodeCookieValue(bytes);     }     public void Delete(string name)     {         _handler.Delete(name);     }     protected virtual byte[] EncodeCookieValue(string value)     {         var bytes = Encoding.UTF8.GetBytes(value);         byte[] buffer = bytes;         foreach (var transform in _transforms)         {             buffer = transform.Encode(buffer);         }         return buffer;     }     protected virtual string DecodeCookieValue(byte[] bytes)     {         var buffer = bytes;         for (int i = _transforms.Count; i > 0; i—)         {             buffer = _transforms[i - 1].Decode(buffer);         }         return Encoding.UTF8.GetString(buffer);     } } HTH
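
    A short usage sketch of the class above (farmCertificate is assumed to be an X509Certificate2, loaded elsewhere, whose private key is present on every node of the farm):

        // RSA-protected variant for a web farm; the DPAPI overload suits a single box.
        var protector = new ProtectedCookie(farmCertificate);

        protector.Write("tempData", "some value to stash", DateTime.UtcNow.AddMinutes(20));

        string roundTripped = protector.Read("tempData");   // null if absent
        protector.Delete("tempData");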

  • How to safely reboot via First Boot script

    - by unixman
    With the cost and performance benefits of the SPARC T4 and SPARC T5 systems undeniably validated, the banking sector is actively moving to Solaris 11.  I was recently asked to help a banking customer of ours look at migrating some of their Solaris 10 logic over to Solaris 11.  While we've introduced a number of holistic improvements in Solaris 11, in terms of how we ease long-term software lifecycle management, it is important to appreciate that customers may not be able to move all of their Solaris 10 scripts and procedures at once; there are years of scripts that reflect fine-tuned requirements of proprietary banking software that gets layered on top of the operating system. One of these requirements is to go through a cycle of reboots, after the system is installed, in order to ensure appropriate software dependencies and various configuration files are in-place. While Solaris 10 introduced a facility that aids here, namely SMF, many of our customers simply haven't yet taken the time to take advantage of this - proceeding with logic that, while functional, without further analysis has an appearance of not being optimal in terms of taking advantage of all the niceties bundled in Solaris 11 at no extra cost. When looking at Solaris 11, we recognize that one of the vehicles that bridges the gap between getting the operating system image payload delivered, and the customized banking software installed, is a notion of a First Boot script.  I had a working example of this at one of the Oracle OpenWorld sessions a few years ago - we've since improved our documentation and have introduced sections where this is described in better detail.   If you're looking at this for the first time and you've not worked with IPS and SMF previously, you might get the sense that the tasks are daunting.   There is a set of technologies involved that are jointly engineered in order to make the process reliable, predictable and extensible. As you go down the path of writing your first boot script, you'll be faced with a need to wrap it into a SMF service and then packaged into a IPS package. The IPS package would then need to be placed onto your IPS repository, in order to subsequently be made available to all of your AI (Automated Install) clients (i.e. the systems that you're installing Solaris and your software onto).     With this blog post, I wanted to create a single place that outlines the entire process (simplistically), and provide a hint of how a good old "at" command may make the requirement of forcing an initial reboot handy. The syntax and references to commands here is based on running this on a version of Solaris 11 that has been updated since its initial release in 2011 (i.e. I am writing this on Solaris 11.1) Assuming you've built an AI server (see this How To article for an example), you might be asking yourself: "Ok, I've got some logic that I need executed AFTER Solaris is deployed and I need my own little script that would make that happen. How do I go about hooking that script into the Solaris 11 AI framework?"  You might start here, in Chapter 13 of the "Installing Oracle Solaris 11.1 Systems" guide, which talks about "Running a Custom Script During First Boot".  And as you do, you'll be confronted with command that might be unfamiliar to you if you're new to Solaris 11, like our dear new friend: svcbundle svcbundle is an aide to creating manifests and profiles.  It is awesome, but don't let its awesomeness overwhelm you. 
(See this How To article by my colleague Glynn Foster for a nice working example).  In order to get your script's logic integrated into the Solaris 11 deployment process, you need to wrap your (shell) script into 2 manifests -  a SMF service manifest and a IPS package manifest.  ....and if you're new to XML, well then -- buckle up We have some examples of small first boot scripts shown here, as templates to build upon. Necessary structure of the script, particularly in leveraging SMF interfaces, is key. I won't go into that here as that is covered nicely in the doc link above.    Let's say your script ends up looking like this (btw: if things appear to be cut-off in your browser, just select them, copy and paste into your editor and it'll be grabbed - the source gets captured eventhough the browser may not render it "correctly" - ah, computers). #!/bin/sh # Load SMF shell support definitions . /lib/svc/share/smf_include.sh # If nothing to do, exit with temporary disable completed=`svcprop -p config/completed site/first-boot-script-svc:default` [ "${completed}" = "true" ] && \ smf_method_exit $SMF_EXIT_TEMP_DISABLE completed "Configuration completed" # Obtain the active BE name from beadm: The active BE on reboot has an R in # the third column of 'beadm list' output. Its name is in column one. bename=`beadm list -Hd|nawk -F ';' '$3 ~ /R/ {print $1}'` beadm create ${bename}.orig echo "Original boot environment saved as ${bename}.orig" # ---- Place your one-time configuration tasks here ---- # For example, if you have to pull some files from your own pre-existing system: /usr/bin/wget -P /var/tmp/ $PULL_DOWN_ADDITIONAL_SCRIPTS_FROM_A_CORPORATE_SYSTEM /usr/bin/chmod 755 /var/tmp/$SCRIPTS_THAT_GOT_PULLED_DOWN_IN_STEP_ABOVE # Clearly the above 2 lines represent some logic that you'd have to customize to fit your needs. # # Perhaps additional things you may want to do here might be of use, like # (gasp!) configuring ssh server for root login and X11 forwarding (for testing), and the like... # # Oh and by the way, after we're done executing all of our proprietary scripts we need to reboot # the system in accordance with our operational software requirements to ensure all layered bits # get initialized properly and pull-in their own modules and components in the right sequence, # subsequently. # We need to set a "time bomb" reboot, that would take place upon completion of this script. # We already know that *this* script depends on multi-user-server SMF milestone, so it should be # safe for us to schedule a reboot for 5 minutes from now. The "at" job get scheduled in the queue # while our little script continues thru the rest of the logic. /usr/bin/at now + 5 minutes <<REBOOT /usr/bin/sync /usr/sbin/reboot REBOOT # ---- End of your customizations ---- # Record that this script's work is done svccfg -s site/first-boot-script-svc:default setprop config/completed = true svcadm refresh site/first-boot-script-svc:default smf_method_exit $SMF_EXIT_TEMP_DISABLE method_completed "Configuration completed"  ...and you're happy with it and are ready to move on. Where do you go and what do you do? The next step is creating the IPS package for your script. Since running the logic of your script constitutes a service, you need to create a service manifest. This is described here, in the middle of Chapter 13 of "Creating an IPS package for the script and service".  
Assuming the name of your shell script is first-boot-script.sh, you could end up doing the following: $ cd some_working_directory_for_this_project$ mkdir -p proto/lib/svc/manifest/site$ mkdir -p proto/opt/site $ cp first-boot-script.sh proto/opt/site  Then you would create the service manifest  file like so: $ svcbundle -s service-name=site/first-boot-script-svc \ -s start-method=/opt/site/first-boot-script.sh \ -s instance-property=config:completed:boolean:false -o \ first-boot-script-svc-manifest.xml   ...as described here, and place it into the directory hierarchy above. But before you place it into the directory, make sure to inspect the manifest and adjust the appropriate service dependencies.  That is to say, you want to properly specify what milestone should be reached before your service runs.  There's a <dependency> section that looks like this, before you modify it: <dependency restart_on="none" type="service" name="multi_user_dependency" grouping="require_all"> <service_fmri value="svc:/milestone/multi-user"/>  </dependency>  So if you'd like to have your service run AFTER the multi-user-server milestone has been reached (i.e. later, as multi-user-server has more dependencies then multi-user and our intent to reboot the system may have significant ramifications if done prematurely), you would modify that section to read:  <dependency restart_on="none" type="service" name="multi_user_server_dependency" grouping="require_all"> <service_fmri value="svc:/milestone/multi-user-server"/>  </dependency> Save the file and validate it: $ svccfg validate first-boot-script-svc-manifest.xml Assuming there are no errors returned, copy the file over into the directory hierarchy: $ cp first-boot-script-svc-manifest.xml proto/lib/svc/manifest/site Now that we've created the service manifest (.xml), create the package manifest (.p5m) file named: first-boot-script.p5m.  Populate it as follows: set name=pkg.fmri value=first-boot-script-AT-1-DOT-0,5.11-0 set name=pkg.summary value="AI first-boot script" set name=pkg.description value="Script that runs at first boot after AI installation" set name=info.classification value=\ "org.opensolaris.category.2008:System/Administration and Configuration" file lib/svc/manifest/site/first-boot-script-svc-manifest.xml \ path=lib/svc/manifest/site/first-boot-script-svc-manifest.xml owner=root \ group=sys mode=0444 dir path=opt/site owner=root group=sys mode=0755 file opt/site/first-boot-script.sh path=opt/site/first-boot-script.sh \ owner=root group=sys mode=0555 Now we are going to publish this package into a IPS repository. If you don't have one yet, don't worry. You have 2 choices: You can either  publish this package into your mirror of the Oracle Solaris IPS repo or create your own customized repo.  The best practice is to create your own customized repo, leaving your mirror of the Oracle Solaris IPS repo untouched.  From this point, you have 2 choices as well - you can either create a repo that will be accessible by your clients via HTTP or via NFS.  Since HTTP is how the default Solaris repo is accessed, we'll go with HTTP for your own IPS repo.   This nice and comprehensive How To by Albert White describes how to create multiple internal IPS repos for Solaris 11. We'll zero in on the basic elements for our needs here: We'll create the IPS repo directory structure hanging off a separate ZFS file system, and we'll tie it into an instance of pkg.depotd. 
We do this because we want our IPS repo to be accessible to our AI clients through HTTP, and the pkg.depotd SMF service bundled in Solaris 11 can help us do this. We proceed as follows: # zfs create rpool/export/MyIPSrepo # pkgrepo create /export/MyIPSrepo # svccfg -s pkg/server add MyIPSrepo # svccfg -s pkg/server:MyIPSrepo addpg pkg application # svccfg -s pkg/server:MyIPSrepo setprop pkg/port=10081 # svccfg -s pkg/server:MyIPSrepo setprop pkg/inst_root=/export/MyIPSrepo # svccfg -s pkg/server:MyIPSrepo addpg general framework # svccfg -s pkg/server:MyIPSrepo addpropvalue general/complete astring: MyIPSrepo # svccfg -s pkg/server:MyIPSrepo addpropvalue general/enabled boolean: true # svccfg -s pkg/server:MyIPSrepo setprop pkg/readonly=true # svccfg -s pkg/server:MyIPSrepo setprop pkg/proxy_base = astring: http://your_internal_websrvr/MyIPSrepo # svccfg -s pkg/server:MyIPSrepo setprop pkg/threads = 200 # svcadm refresh application/pkg/server:MyIPSrepo # svcadm enable application/pkg/server:MyIPSrepo Now that the IPS repo is created, we need to publish our package into it: # pkgsend publish -d ./proto -s /export/MyIPSrepo first-boot-script.p5m If you find yourself making changes to your script, remember to up-rev the version in the .p5m file (which is your IPS package manifest), and re-publish the IPS package. Next, you need to go to your AI install server (which might be the same machine) and modify the AI manifest to include a reference to your newly created package.  We do that by listing an additional publisher, which would look like this (replacing the IP address and port with your own, from the "svccfg" commands up above): <publisher name="firstboot"> <origin name="http://192.168.1.222:10081"/> </publisher>  Further down, in the  <software_data action="install">  section add: <name>pkg:/first-boot-script</name> Make sure to update your Automated Install service with the new AI manifest via installadm update-manifest command.  Don't forget to boot your client from the network to watch the entire process unfold and your script get tested.  Once the system makes the initial reboot, the first boot script will be executed and whatever logic you've specified in it should be executed, too, followed by a nice reboot. When the system comes up, your service should stay in a disabled state, as specified by the tailing lines of your SMF script - this is normal and should be left as is as it helps provide an auditing trail for you.   Because the reboot is quite a significant action for the system, you may want to add additional logic to the script that actually places and then checks for presence of certain lock files in order to avoid doing a reboot unnecessarily. You may also want to, alternatively, remove the SMF service entirely - if you're unsure of the potential for someone to try and accidentally enable that service -- eventhough its role in life is to only run once upon the system's first boot. That is how I spent a good chunk of my pre-Halloween time this week, hope yours was just as SPARCkly^H^H^H^H fun!    

  • C# Persistent WebClient

    - by Nullstr1ng
    I have a class written in C# (Windows Forms) It's a WebClient class which I intent to use in some website and for Logging In and navigation. Here's the complete class pastebin.com (the class has 197 lines so I just use pastebin. Sorry if I made a little bit harder for you to read the class, also below this post) The problem is, am not sure why it's not persistent .. I was able to log in, but when I navigate to other page (without leaving the domain), I was thrown back to log in page. Can you help me solving this problem? one issue though is, the site I was trying to connect is "HTTPS" protocol. I have not yet tested this on just a regular HTTP. Thank you in advance. /* * Web Client v1.2 * --------------- * Date: 12/17/2010 * author: Jayson Ragasa */ using System; using System.Collections; using System.Collections.Specialized; using System.Collections.Generic; using System.Text; using System.IO; using System.Net; using System.Web; namespace Nullstring.Modules.WebClient { public class WebClientLibrary { #region vars string _method = string.Empty; ArrayList _params; CookieContainer cookieko; HttpWebRequest req = null; HttpWebResponse resp = null; Uri uri = null; #endregion #region properties public string Method { set { _method = value; } } #endregion #region constructor public WebClientLibrary() { _method = "GET"; _params = new ArrayList(); cookieko = new CookieContainer(); } #endregion #region methods public void ClearParameter() { _params.Clear(); } public void AddParameter(string key, string value) { _params.Add(string.Format("{0}={1}", WebTools.URLEncodeString(key), WebTools.URLEncodeString(value))); } public string GetResponse(string URL) { StringBuilder response = new StringBuilder(); #region create web request { uri = new Uri(URL); req = (HttpWebRequest)WebRequest.Create(URL); req.Method = "GET"; req.GetLifetimeService(); } #endregion #region get web response { resp = (HttpWebResponse)req.GetResponse(); Stream resStream = resp.GetResponseStream(); int bytesReceived = 0; string tempString = null; int count = 0; byte[] buf = new byte[8192]; do { count = resStream.Read(buf, 0, buf.Length); if (count != 0) { bytesReceived += count; tempString = Encoding.UTF8.GetString(buf, 0, count); response.Append(tempString); } } while (count > 0); } #endregion return response.ToString(); } public string GetResponse(string URL, bool HasParams) { StringBuilder response = new StringBuilder(); #region create web request { uri = new Uri(URL); req = (HttpWebRequest)WebRequest.Create(URL); req.MaximumAutomaticRedirections = 20; req.AllowAutoRedirect = true; req.Method = this._method; req.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"; req.KeepAlive = true; req.CookieContainer = this.cookieko; req.UserAgent = "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_5_8; en-US) AppleWebKit/534.10 (KHTML, like Gecko) Chrome/8.0.552.224 Safari/534.10"; } #endregion #region build post data { if (HasParams) { if (this._method.ToUpper() == "POST") { string Parameters = String.Join("&", (String[])this._params.ToArray(typeof(string))); UTF8Encoding encoding = new UTF8Encoding(); byte[] loginDataBytes = encoding.GetBytes(Parameters); req.ContentType = "application/x-www-form-urlencoded"; req.ContentLength = loginDataBytes.Length; Stream stream = req.GetRequestStream(); stream.Write(loginDataBytes, 0, loginDataBytes.Length); stream.Close(); } } } #endregion #region get web response { resp = (HttpWebResponse)req.GetResponse(); Stream resStream = resp.GetResponseStream(); int bytesReceived = 0; 
string tempString = null; int count = 0; byte[] buf = new byte[8192]; do { count = resStream.Read(buf, 0, buf.Length); if (count != 0) { bytesReceived += count; tempString = Encoding.UTF8.GetString(buf, 0, count); response.Append(tempString); } } while (count > 0); } #endregion return response.ToString(); } But still I got thrown back to the login page. UPDATE: Dec 23 I tried listing the cookies and here's what I get. At first I have to log in to a web form, and at that point I have this cookie: JSESSIONID=368C0AC47305282CBCE7A566567D2942 Then I navigated to another page (but on the same domain) and got a different cookie: JSESSIONID=9FA2D64DA7669155B9120790B40A592C What went wrong? I am using the code updated last Dec 22
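
A quick way to narrow this down is to dump what the container actually holds for the target URI after each request: if JSESSIONID is missing or keeps changing, the server's Set-Cookie never made it into the container or never went back out - typical causes are a domain/path mismatch, or the cookie's Secure flag when a request drops from HTTPS to HTTP. A sketch (the URL is a placeholder; the container must be the same instance on every request):

    using System;
    using System.Net;

    CookieContainer jar = new CookieContainer();   // one instance for the whole session

    HttpWebRequest req = (HttpWebRequest)WebRequest.Create("https://example.com/login");
    req.CookieContainer = jar;                     // must be set on *every* request
    // ... perform the request and read the response ...

    foreach (Cookie c in jar.GetCookies(new Uri("https://example.com/")))
        Console.WriteLine("{0}={1}; path={2}; secure={3}", c.Name, c.Value, c.Path, c.Secure);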

  • Optical SPDIF audio from motherboard not working with receiver

    - by simon b
    Hi, I hope someone can help; I can't get my SPDIF optical out working through my receiver, and all the responses I can see on the web assume you have a sound card, while I settled for the (seemingly high-end) sound on my motherboard (Asus P7P55D-E PRO), which appears to limit some of my options. My set-up is a "new out of the box" one and is:

        * Windows 7 PC (using PowerDVD10 for DVDs/Blu-rays and Windows Media Player for music)
        * Asus P7P55D-E PRO motherboard - has 8-channel audio TRS jacks and SPDIF optical and coaxial out
        * An old Yamaha receiver, whose only multi-channel input options are optical in and 6-channel RCA in. However, it can still handle DTS and DD
        * Boston Acoustic Soundware XS 5.1 speakers

    I've currently got the SPDIF optical out from the motherboard connected to the in on my receiver, have SPDIF enabled in the sound menu, and the light is glowing red down the fibre. But I'm getting no sound at all. What I want is to be able to play DVDs/Blu-rays in 5.1 but also to be able to play music in multi-channel mode (even though I know this will be "fake" multichannel; it's more about where I sit in the room and my requirement to use the sub, because the Boston is a satellite/sub set-up). My questions are:

        * Will optical work at all for multi-channel? The latest posts I can see suggest it does, but some people seem to say optical only outputs stereo. Whom to believe?
        * Even if it does work, I've read that I have to disable AC-3 decoding, or make various other changes, which don't seem to be possible without the menu options that a sound card brings. Is the motherboard-only option just too inflexible?
        * Although my SPDIF device is enabled in the sound menu, it insists under "Jack information" that it is a "rear panel RCA jack", when of course it is not (both TOSLINK and RCA jacks do exist). Has the PC just forgotten that it has an optical?
        * I think I could relatively easily connect the 8-channel 3.5mm TRS jacks to my receiver's 6-channel input jacks by way of TRS/RCA cables, but would that not stop me from being able to play music from Media Player in multi-channel mode, as I'm not sure the motherboard can cope?
        * Or do I need to bite the bullet and buy a sound card? And if so, how can I be sure the one I get doesn't have the same problem?

    Any thoughts gratefully received, Cheers, simon

  • Copying Windows 7 system onto Mac Pro partition?

    - by BEATFROMBRAIN
    Instead of installing a new Windows 7 system onto a Mac Pro partition, is it possible to copy over my complete Windows 7 system directly onto the partition and get it to run? I just bought the Mac Pro, and it's running OS X 10.6... Copied from a comment: I wish to make a Windows partition on the Mac Pro startup disk, but instead of installing Windows, I was wondering if I can copy my Windows system over from my PC laptop, byte for byte, into said partition, and successfully run it.

  • How can I determine the sector size on an external hard drive?

    - by sigint
    Hard drives are transitioning from 512-byte to 4096-byte sector sizes, and it looks like Windows XP won't support these newer drives without additional software (such as WDalign from Western Digital). My question is: how does this affect external hard drives? I'll be buying a 1 TB USB external drive, and it'll be plugged into a mix of Windows 7 and XP machines. Is there an easy way to tell what the sector size on an external hard drive is?
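
    On the Windows side the reported size can be read programmatically; here is a sketch using WMI (needs a reference to System.Management). One hedge: Win32_DiskDrive reports the logical sector size, and most 4K "Advanced Format" drives - USB enclosures especially - still present 512-byte logical sectors through emulation, which is exactly why alignment tools like WDalign matter:

        using System;
        using System.Management;

        class SectorSize
        {
            static void Main()
            {
                var searcher = new ManagementObjectSearcher(
                    "SELECT Model, BytesPerSector FROM Win32_DiskDrive");
                foreach (ManagementObject disk in searcher.Get())
                {
                    Console.WriteLine("{0}: {1} bytes/sector",
                        disk["Model"], disk["BytesPerSector"]);
                }
            }
        }

    For a quick manual check of a mounted volume, fsutil fsinfo ntfsinfo X: from an elevated prompt also prints the per-volume sector figures.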
