Search Results

Search found 3443 results on 138 pages for 'byte'.


  • Indy Write Buffering / Efficient TCP communication

    - by Smasher
    I know, I'm asking a lot of questions... but as a new Delphi developer I keep falling over all these questions :) This one deals with TCP communication using Indy 10. To make communication efficient, I encode a client operation request as a single byte (in most scenarios followed by other data bytes, of course, but in this case only one single byte). The problem is that

        var
          Bytes : TBytes;
        ...
        SetLength (Bytes, 1);
        Bytes [0] := OpCode;
        FConnection.IOHandler.Write (Bytes, 1);
        ErrorCode := Connection.IOHandler.ReadByte;

    does not send that byte immediately (at least the server's OnExecute handler is not invoked). If I change the '1' to a '9', for example, everything works fine. I assumed that Indy buffers the outgoing bytes and tried to disable write buffering with

        FConnection.IOHandler.WriteBufferClose;

    but it did not help. How can I send a single byte and make sure that it is immediately sent? And - I'll add another little question here - what is the best way to send an integer using Indy? Unfortunately I can't find a function like WriteInteger in the IOHandler of TIdTCPServer... and WriteLn (IntToStr (SomeIntVal)) seems not very efficient to me. Does it make a difference whether I use multiple write commands in a row or pack things together in a byte array and send that once? Thanks for any answers! EDIT: I added a hint that I'm using Indy 10, since there seem to be major changes concerning the read and write procedures.


  • Visual basic 6.0 - ComputeHash invalid procedure call or argument error

    - by Mohan Babu Vijaya Gopal
    I am getting the error "Invalid procedure call or argument" at the ComputeHash() step. Any help highly appreciated.

        Private Sub Form_Load()
            Dim rngcsp As New RNGCryptoServiceProvider '= new RNGCryptoServiceProvider()
            Dim u8 As Encoding 'u8 = Encoding.UTF8
            Dim minSaltSize As Integer
            Dim maxSaltSize As Integer
            Dim saltSize As Integer
            minSaltSize = 4
            maxSaltSize = 8
            Dim randm As Random
            Set randm = New Random
            Dim saltBytes() As Byte
            ReDim saltBytes(saltSize)
            Set rngcsp = New RNGCryptoServiceProvider
            rngcsp.GetNonZeroBytes (saltBytes)
            Dim plainTextBytes() As Byte
            plainTextBytes() = ConvertStringToUtf8Bytes("Mohan")
            Dim plainTextBytesLen As Long
            plainTextBytesLen = UBound(plainTextBytes) - LBound(plainTextBytes) + 1
            Dim saltBytesLen As Long
            saltBytesLen = UBound(saltBytes) - LBound(saltBytes) + 1
            Dim plainTextWithSaltBytes() As Byte
            ReDim plainTextWithSaltBytes(plainTextBytesLen + saltBytesLen)
            For i = 0 To plainTextBytesLen - 1
                plainTextWithSaltBytes(i) = plainTextBytes(i)
            Next
            For i = 0 To saltBytesLen - 1
                plainTextWithSaltBytes(i) = saltBytes(i)
            Next
            'Dim hash As HashAlgorithm = New MD5CryptoServiceProvider()
            Dim hash12 As New SHA256Managed 'SHA256Managed
            Dim totLen As Integer
            totLen = plainTextBytesLen + saltBytesLen
            Dim str As String
            Dim hashBytes() As Byte
            'With
            hashBytes = hash12.computeHash(plainTextWithSaltBytes) ', 0, totLen)
            'End With
        End Sub


  • Error: Bad Base64Coder input character

    - by sebby_zml
    Hi, I am currently facing an error, "Bad Base64Coder input character at ...". Here is my code in Java:

        String nonce2 = strNONCE;
        byte[] nonceBytes1 = Base64Coder.decode(nonce2);
        System.out.println("nonceByte1 value : " + nonceBytes1);

    The problem is that I get the Bad Base64Coder input character error and the nonceBytes1 value is printed as null. I am trying to decode nonce2 with Base64Coder. My strNONCE value comes from a 16-byte nonce:

        byte[] nonce = new byte[16];
        Random rand;
        rand = SecureRandom.getInstance("SHA1PRNG");
        rand.nextBytes(nonce);
        //convert byte array to string.
        strNONCE = new String(nonce);

    Any help is deeply appreciated.
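
    The likely culprit is the last line: wrapping raw random bytes in new String(nonce) runs them through the platform's character encoding, which corrupts them, and the result was never Base64 text to begin with. The usual fix is to Base64-encode the nonce whenever a string form is needed. A small sketch, using java.util.Base64 (Java 8+) in place of the third-party Base64Coder class:

        import java.security.SecureRandom;
        import java.util.Base64;

        public class NonceDemo {
            public static void main(String[] args) {
                byte[] nonce = new byte[16];
                new SecureRandom().nextBytes(nonce);

                // Encode the raw bytes into a transport-safe Base64 string.
                String strNonce = Base64.getEncoder().encodeToString(nonce);

                // Decoding now succeeds because the string really is Base64.
                byte[] decoded = Base64.getDecoder().decode(strNonce);
                System.out.println("decoded length: " + decoded.length); // 16
            }
        }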


  • Workflows not starting after fresh install

    - by Greg McGuffey
    I just installed Dynamics CRM 4.0. It is working nicely except for workflows: they won't start. I turned on tracing and it appears that there is an IO error. The server is set up with IFD and SSL; no issues accessing it internally or externally. Here is the trace:

        # CRM Tracing Version 2.0
        # LocalTime: 2010-06-08 11:34:58.2
        # Categories:
        # CallStackOn: No
        # ComputerName: FOX-CRM1
        # CRMVersion: 4.0.7333.2741
        # DeploymentType: OnPremise
        # ScaleGroup:
        # ServerRole: AppServer, AsyncService, DiscoveryService, WebService, ApiServer, HelpServer, DeploymentService
        [2010-06-08 11:34:58.2] Process:CrmAsyncService |Organization:821a137e-7191-49a4-86cc-69101e2b6d20 |Thread: 24 |Category: Platform.Async |User: 00000000-0000-0000-0000-000000000000 |Level: Error | AsyncOperationCommand.Execute
        >Exception while trying to execute AsyncOperationId: {DF68F483-2C73-DF11-9A34-18A9053B7B38} AsyncOperationType: 1 -
        System.Net.WebException: The underlying connection was closed: An unexpected error occurred on a send. --->
        System.IO.IOException: The handshake failed due to an unexpected packet format.
           at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.ForceAuthentication(Boolean receiveFirst, Byte[] buffer, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.ProcessAuthentication(LazyAsyncResult lazyResult)
           at System.Net.TlsStream.CallProcessAuthentication(Object state)
           at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
           at System.Net.TlsStream.ProcessAuthentication(LazyAsyncResult result)
           at System.Net.TlsStream.Write(Byte[] buffer, Int32 offset, Int32 size)
           at System.Net.PooledStream.Write(Byte[] buffer, Int32 offset, Int32 size)
           at System.Net.ConnectStream.WriteHeaders(Boolean async)
           --- End of inner exception stack trace ---
           at System.Web.Services.Protocols.WebClientProtocol.GetWebResponse(WebRequest request)
           at System.Web.Services.Protocols.HttpWebClientProtocol.GetWebResponse(WebRequest request)
           at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
           at Microsoft.Crm.SdkTypeProxy.CrmService.Retrieve(String entityName, Guid id, ColumnSetBase columnSet)
           at Microsoft.Crm.Asynchronous.SdkTypeProxyCrmServiceWrapper.Retrieve(String entityName, Guid id, ColumnSetBase columnSet)
           at Microsoft.Crm.Asynchronous.SdkPluginDescriptionProvider.GetPluginTypeDescription(Guid pluginTypeId, IOrganizationContext context)
           at Microsoft.Crm.Caching.PluginTypeCacheLoader.LoadCacheData(Guid key, IOrganizationContext context)
           at Microsoft.Crm.Caching.CrmMultiOrgCache`2.CreateEntry(TKey key, IOrganizationContext context)
           at Microsoft.Crm.Caching.CrmSharedMultiOrgCache`2.LookupEntry(TKey key, IOrganizationContext context)
           at Microsoft.Crm.Caching.PluginTypeCache.LookupEntry(Guid pluginTypeId, IOrganizationContext context)
           at Microsoft.Crm.Asynchronous.AsyncOperationCommand.GetPluginType(Guid pluginTypeId)
           at Microsoft.Crm.Asynchronous.EventOperation.InternalExecute(AsyncEvent asyncEvent)
           at Microsoft.Crm.Asynchronous.AsyncOperationCommand.Execute(AsyncEvent asyncEvent)

    The only thing I've tried is updating the AsyncSdkRootDomain row in the Deployment table to match the ADSdkRootDomain and ADApplicationRootDomain values (it was blank). That didn't appear to work. After some more research, I think this might be caused because the async service can't access the SDK web services using SSL. If this is correct, how would one configure a CRM server for secure access, internal and external (IFD), and still allow the async service to hit the web site? Thanks for your help!


  • Are PyArg_ParseTuple() "s" format specifiers useful in Python 3.x C API?

    - by Craig McQueen
    I'm trying to write a Python C extension that processes byte strings, and I have something basically working for Python 2.x and Python 3.x. For the Python 2.x code, near the start of my function, I currently have a line:

        if (!PyArg_ParseTuple(args, "s#:in_bytes", &src_ptr, &src_len))
            ...

    I notice that the s# format specifier accepts both Unicode strings and byte strings. I really just want it to accept byte strings and reject Unicode. For Python 2.x, this might be "good enough": the standard hashlib seems to do the same, accepting Unicode as well as byte strings. However, Python 3.x is meant to clean up the Unicode/byte string mess and not let the two be interchangeable. So I'm surprised to find that in Python 3.x, the s format specifiers for PyArg_ParseTuple() still seem to accept Unicode and provide a "default encoded string version" of the Unicode. This seems to go against the principles of Python 3.x, making the s format specifiers unusable in practice. Is my analysis correct, or am I missing something? Looking at the implementation of hashlib for Python 3.x (e.g. see md5module.c, function MD5_update() and its use of the GET_BUFFER_VIEW_OR_ERROUT() macro), I see that it avoids the s format specifiers and just takes a generic object (O specifier), then does various explicit type checks using the GET_BUFFER_VIEW_OR_ERROUT() macro. Is this what we have to do?


  • java: can I convert strings with String.getBytes() without the BOM?

    - by Cheeso
    Suppose I have this code:

        String encoding = "UTF-16";
        String text = "[Hello StackOverflow]";
        byte[] message = text.getBytes(encoding);

    If I display the byte array in message, the result is:

        0000  FE FF 00 5B 00 48 00 65 00 6C 00 6C 00 6F 00 20  ...[.H.e.l.l.o.
        0010  00 53 00 74 00 61 00 63 00 6B 00 4F 00 76 00 65  .S.t.a.c.k.O.v.e
        0020  00 72 00 66 00 6C 00 6F 00 77 00 5D              .r.f.l.o.w.]

    As you can see, there's a BOM at the beginning. How can I:

        1. generate a UTF-16 byte array that lacks a BOM?
        2. convert a byte array that contains UTF-16 chars but lacks a BOM back to a string?
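
    For reference, a sketch of the standard approach: Java's "UTF-16" charset writes a BOM when encoding, while the endian-specific names "UTF-16BE" and "UTF-16LE" write none, and they also decode BOM-less bytes when you know the byte order:

        public class Utf16Demo {
            public static void main(String[] args) throws Exception {
                String text = "[Hello StackOverflow]";

                // "UTF-16BE" fixes the byte order in the charset name, so no BOM is emitted.
                byte[] noBom = text.getBytes("UTF-16BE");

                // Decoding BOM-less big-endian UTF-16 back into a String.
                String decoded = new String(noBom, "UTF-16BE");
                System.out.println(decoded.equals(text)); // true
            }
        }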


  • Java bitshift strangeness

    - by Martin
    Java has 2 bitshift operators for right shifts: >> shifts right and depends on the sign bit for the sign of the result, while >>> shifts right and shifts zeros into the leftmost bits (http://java.sun.com/docs/books/tutorial/java/nutsandbolts/op3.html). This seems fairly simple, so can anyone explain to me why this code, when given a value of -128 for bar, produces a value of -2 for foo:

        byte foo = (byte)((bar & ((byte)-64)) >>> 6);

    What this is meant to do is take an 8-bit byte, mask off the leftmost 2 bits, and shift them into the rightmost 2 bits. I.e.:

        initial          = 0b10000000 (-128)
        -64              = 0b11000000
        initial & -64    = 0b10000000
        0b10000000 >>> 6 = 0b00000010

    The result actually is -2, which is 0b11111110, i.e. 1s rather than 0s are shifted into the left positions.
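
    What bites here is integer promotion: both operands of & are widened to int before anything shifts, so (byte)-64 sign-extends to 0xFFFFFFC0 and the mask result is 0xFFFFFF80; the >>> then drags those high one-bits down before the final cast truncates to a byte. Masking with an int literal sidesteps the sign extension; a quick sketch:

        public class ShiftDemo {
            public static void main(String[] args) {
                byte bar = -128;

                // Promoted form: 0xFFFFFF80 & 0xFFFFFFC0 = 0xFFFFFF80,
                // (0xFFFFFF80 >>> 6) = 0x03FFFFFE, whose low byte is 0xFE = -2.
                byte broken = (byte) ((bar & ((byte) -64)) >>> 6);

                // Masking with the int 0xC0 keeps only the two target bits:
                // 0x80 >>> 6 = 2, as intended.
                byte fixed = (byte) ((bar & 0xC0) >>> 6);

                System.out.println(broken); // -2
                System.out.println(fixed);  // 2
            }
        }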


  • Help needed with AES between Java and Objective-C (iPhone)....

    - by Simon Lee
    I am encrypting a string in Objective-C and also encrypting the same string in Java using AES, and am seeing some strange issues. The first part of the result matches up to a certain point but then it is different, hence when I go to decode the result from Java on the iPhone it can't decrypt it. I am using a source string of "Now then and what is this nonsense all about. Do you know?" and a key of "1234567890123456". The Objective-C code to encrypt is the following (NOTE: it is an NSData category, so assume that the method is called on an NSData object, so 'self' contains the byte data to encrypt):

        - (NSData *)AESEncryptWithKey:(NSString *)key {
            char keyPtr[kCCKeySizeAES128+1]; // room for terminator (unused)
            bzero(keyPtr, sizeof(keyPtr));   // fill with zeroes (for padding)

            // fetch key data
            [key getCString:keyPtr maxLength:sizeof(keyPtr) encoding:NSUTF8StringEncoding];

            NSUInteger dataLength = [self length];

            // See the doc: For block ciphers, the output size will always be less than or
            // equal to the input size plus the size of one block.
            // That's why we need to add the size of one block here.
            size_t bufferSize = dataLength + kCCBlockSizeAES128;
            void *buffer = malloc(bufferSize);

            size_t numBytesEncrypted = 0;
            CCCryptorStatus cryptStatus = CCCrypt(kCCEncrypt, kCCAlgorithmAES128, kCCOptionPKCS7Padding,
                                                  keyPtr, kCCKeySizeAES128,
                                                  NULL /* initialization vector (optional) */,
                                                  [self bytes], dataLength, /* input */
                                                  buffer, bufferSize,       /* output */
                                                  &numBytesEncrypted);
            if (cryptStatus == kCCSuccess) {
                // the returned NSData takes ownership of the buffer and will free it on deallocation
                return [NSData dataWithBytesNoCopy:buffer length:numBytesEncrypted];
            }
            free(buffer); // free the buffer
            return nil;
        }

    And the Java encryption code is...

        public byte[] encryptData(byte[] data, String key) {
            byte[] encrypted = null;
            Security.addProvider(new org.bouncycastle.jce.provider.BouncyCastleProvider());
            byte[] keyBytes = key.getBytes();
            SecretKeySpec keySpec = new SecretKeySpec(keyBytes, "AES");
            try {
                Cipher cipher = Cipher.getInstance("AES/ECB/PKCS7Padding", "BC");
                cipher.init(Cipher.ENCRYPT_MODE, keySpec);
                encrypted = new byte[cipher.getOutputSize(data.length)];
                int ctLength = cipher.update(data, 0, data.length, encrypted, 0);
                ctLength += cipher.doFinal(encrypted, ctLength);
            } catch (Exception e) {
                logger.log(Level.SEVERE, e.getMessage());
            } finally {
                return encrypted;
            }
        }

    The hex output of the Objective-C code is:

        7a68ea36 8288c73d f7c45d8d 22432577 9693920a 4fae38b2 2e4bdcef 9aeb8afe
        69394f3e 1eb62fa7 74da2b5c 8d7b3c89 a295d306 f1f90349 6899ac34 63a6efa0

    and the Java output is:

        7a68ea36 8288c73d f7c45d8d 22432577 e66b32f9 772b6679 d7c0cb69 037b8740
        883f8211 748229f4 723984beb 50b5aea1 f17594c9 fad2d05e e0926805 572156d

    As you can see, everything is fine up to:

        7a68ea36 8288c73d f7c45d8d 22432577

    I am guessing I have some of the settings different but can't work out what. I tried changing between ECB and CBC on the Java side and it had no effect. Can anyone help!? Please...
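
    A plausible diagnosis, offered as a hypothesis rather than a confirmed fix: CCCrypt called without the kCCOptionECBMode option runs in CBC mode, and a NULL IV there means an all-zero IV, while the Java code uses ECB. The two modes agree on the first 16-byte block (XOR with a zero IV changes nothing) and diverge from the second block on, which is exactly the pattern in the output above. A minimal sketch of the Java side switched to CBC with a zero IV to match:

        import javax.crypto.Cipher;
        import javax.crypto.spec.IvParameterSpec;
        import javax.crypto.spec.SecretKeySpec;

        public class AesCbcSketch {
            public static byte[] encrypt(byte[] data, String key) throws Exception {
                SecretKeySpec keySpec = new SecretKeySpec(key.getBytes("UTF-8"), "AES");
                // CCCrypt's NULL IV is equivalent to an IV of sixteen zero bytes.
                IvParameterSpec zeroIv = new IvParameterSpec(new byte[16]);
                // PKCS5Padding in the JCA is the same scheme as PKCS7 for AES blocks.
                Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
                cipher.init(Cipher.ENCRYPT_MODE, keySpec, zeroIv);
                return cipher.doFinal(data);
            }
        }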


  • tripledes encryption not yielding same results in PHP and C#

    - by Jones
    When I encrypt with C# I get arTdPqWOg6VppOqUD6mGITjb24+x5vJjfAufNQ4DN7rVEtpDmhFnMeJGg4n5y1BN:

        static void Main(string[] args)
        {
            Encoding byteEncoder = Encoding.Default;
            String key = "ShHhd8a08JhJiho98ayslcjh";
            String message = "Let us meet at 9 o'clock at the secret place.";
            String encryption = Encrypt(message, key, false);
            String decryption = Decrypt(encryption, key, false);
            Console.WriteLine("Message: {0}", message);
            Console.WriteLine("Encryption: {0}", encryption);
            Console.WriteLine("Decryption: {0}", decryption);
        }

        public static string Encrypt(string toEncrypt, string key, bool useHashing)
        {
            byte[] keyArray;
            byte[] toEncryptArray = UTF8Encoding.UTF8.GetBytes(toEncrypt);
            if (useHashing)
            {
                MD5CryptoServiceProvider hashmd5 = new MD5CryptoServiceProvider();
                keyArray = hashmd5.ComputeHash(UTF8Encoding.UTF8.GetBytes(key));
            }
            else
                keyArray = UTF8Encoding.UTF8.GetBytes(key);

            TripleDESCryptoServiceProvider tdes = new TripleDESCryptoServiceProvider();
            tdes.Key = keyArray;
            tdes.Mode = CipherMode.ECB;
            tdes.Padding = PaddingMode.PKCS7;
            ICryptoTransform cTransform = tdes.CreateEncryptor();
            byte[] resultArray = cTransform.TransformFinalBlock(toEncryptArray, 0, toEncryptArray.Length);
            return Convert.ToBase64String(resultArray, 0, resultArray.Length);
        }

        public static string Decrypt(string toDecrypt, string key, bool useHashing)
        {
            byte[] keyArray;
            byte[] toEncryptArray = Convert.FromBase64String(toDecrypt);
            if (useHashing)
            {
                MD5CryptoServiceProvider hashmd5 = new MD5CryptoServiceProvider();
                keyArray = hashmd5.ComputeHash(UTF8Encoding.UTF8.GetBytes(key));
            }
            else
                keyArray = UTF8Encoding.UTF8.GetBytes(key);

            TripleDESCryptoServiceProvider tdes = new TripleDESCryptoServiceProvider();
            tdes.Key = keyArray;
            tdes.Mode = CipherMode.ECB;
            tdes.Padding = PaddingMode.PKCS7;
            ICryptoTransform cTransform = tdes.CreateDecryptor();
            byte[] resultArray = cTransform.TransformFinalBlock(toEncryptArray, 0, toEncryptArray.Length);
            return UTF8Encoding.UTF8.GetString(resultArray);
        }

    When I encrypt with PHP I get arTdPqWOg6VppOqUD6mGITjb24+x5vJjfAufNQ4DN7rVEtpDmhFnMVM+W/WFlksR:

        <?php
        $key = "ShHhd8a08JhJiho98ayslcjh";
        $input = "Let us meet at 9 o'clock at the secret place.";
        $td = mcrypt_module_open('tripledes', '', 'ecb', '');
        $iv = mcrypt_create_iv(mcrypt_enc_get_iv_size($td), MCRYPT_RAND);
        mcrypt_generic_init($td, $key, $iv);
        $encrypted_data = mcrypt_generic($td, $input);
        mcrypt_generic_deinit($td);
        mcrypt_module_close($td);
        echo base64_encode($encrypted_data);
        ?>

    I don't know enough about cryptography to figure out why. Any ideas? Thanks.
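
    One hypothesis worth testing: the two Base64 strings agree on every block except the last, which is the signature of a padding mismatch; the C# code asks for PKCS7 padding while PHP's mcrypt zero-pads the final block. A sketch reproducing both behaviours from one program (in Java, where DESede is the JCA name for triple DES and PKCS5Padding is the same scheme as PKCS7 here):

        import java.util.Arrays;
        import java.util.Base64;
        import javax.crypto.Cipher;
        import javax.crypto.spec.SecretKeySpec;

        public class PaddingDemo {
            public static void main(String[] args) throws Exception {
                SecretKeySpec key = new SecretKeySpec("ShHhd8a08JhJiho98ayslcjh".getBytes("UTF-8"), "DESede");
                byte[] msg = "Let us meet at 9 o'clock at the secret place.".getBytes("UTF-8");

                // PKCS7-style padding, as the C# TripleDESCryptoServiceProvider uses.
                Cipher pkcs = Cipher.getInstance("DESede/ECB/PKCS5Padding");
                pkcs.init(Cipher.ENCRYPT_MODE, key);
                System.out.println(Base64.getEncoder().encodeToString(pkcs.doFinal(msg)));

                // Zero padding to the 8-byte block size, as mcrypt does implicitly.
                byte[] zeroPadded = Arrays.copyOf(msg, ((msg.length + 7) / 8) * 8);
                Cipher none = Cipher.getInstance("DESede/ECB/NoPadding");
                none.init(Cipher.ENCRYPT_MODE, key);
                System.out.println(Base64.getEncoder().encodeToString(none.doFinal(zeroPadded)));

                // Expected: the two printed ciphertexts differ only in their final block.
            }
        }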


  • Passing VB Callback function to C dll - noob is stuck.

    - by WaveyDavey
    Callbacks in VB (from a C dll). I need to pass a VB function as a callback to a C function in a dll. I know I need to use AddressOf for the function, but I'm getting more and more confused as to how to do it. Details: the function in the dll that I'm passing the address of a callback to is defined in C as:

        PaError Pa_OpenStream( PaStream** stream,
                               const PaStreamParameters *inputParameters,
                               const PaStreamParameters *outputParameters,
                               double sampleRate,
                               unsigned long framesPerBuffer,
                               PaStreamFlags streamFlags,
                               PaStreamCallback *streamCallback,
                               void *userData );

    where the callback is parameter 7, *streamCallback. The type PaStreamCallback is defined thusly:

        typedef int PaStreamCallback( const void *input,
                                      void *output,
                                      unsigned long frameCount,
                                      const PaStreamCallbackTimeInfo* timeInfo,
                                      PaStreamCallbackFlags statusFlags,
                                      void *userData );

    In my VB project I have:

        Private Declare Function Pa_OpenStream Lib "portaudio_x86.dll" _
            (ByVal stream As IntPtr _
            , ByVal inputParameters As IntPtr _
            , ByVal outputParameters As PaStreamParameters _
            , ByVal samprate As Double _
            , ByVal fpb As Double _
            , ByVal paClipoff As Long _
            , ByVal patestCallBack As IntPtr _
            , ByVal data As IntPtr) As Integer

    (don't worry if I've mistyped some of the other parameters, I'll get to them later! Let's concentrate on the callback for now.) In module1.vb I have defined the callback function:

        Function MyCallback(ByVal inp As Byte, _
                ByVal outp As Byte, _
                ByVal framecount As Long, _
                ByVal pastreamcallbacktimeinfo As Byte, _
                ByVal pastreamcallbackflags As Byte, _
                ByVal userdata As Byte) As Integer
            ' do clever things here
        End Function

    The external function in the dll is called with:

        err = Pa_OpenStream(ptr, _
            nulthing, _
            outputParameters, _
            SAMPLE_RATE, _
            FRAMES_PER_BUFFER, _
            clipoff, _
            AddressOf MyCallback, _
            dataptr)

    This is broken in the declaration of the external function: it doesn't like the type IntPtr as a function pointer for AddressOf. Can anyone show me how to implement passing this callback function, please? Many thanks. David


  • how to extract only the 1st 2 bytes of NSString in Objective-C for iPhone programming

    - by suse
    Hello,

    1) How do I read the data from a read stream in Objective-C? The code below gives me how many bytes are read from the stream, but how do I know what data was read?

        CFIndex cf = CFReadStreamRead(Stream, buffer, length);

    2) How do I extract only the first 2 bytes of an NSString in Objective-C for iPhone programming? For example, if this is the string:

        NSString *str = @"017MacApp";

    the 1st byte has 0 in it, and the 2nd byte has 17 in it. How do I extract 0 and 17 into a byte array? I know that the code below would convert the byte array back to an int value:

        ((b[0] & 0xFF) << 8) + (b[1] & 0xFF);

    but how do I put 0 into b[0] and 17 into b[1]? Please help me solve this.


  • CIL and JVM Little endian to big endian in c# and java

    - by Haythem
    Hello, I am using C# on the client, where I convert double values to a byte array. I am using Java on the server, where I use writeDouble and readDouble to convert double values to and from byte arrays. The problem is that the double values arriving on the Java side are not the double values that were given to C# at the start. writeDouble in Java converts the double argument to a long using the doubleToLongBits method, and then writes that long value to the underlying output stream as an 8-byte quantity, high byte first. doubleToLongBits returns a representation of the specified floating-point value according to the IEEE 754 floating-point "double format" bit layout. The program on the server is waiting for 64-102-112-0-0-0-0-0 from C# to convert it to 1700.0, but it is receiving 0000014415464 from C# after C# has converted 1700.0. This is my code in C#:

        class User
        {
            double workingStatus;

            public void persist()
            {
                byte[] dataByte;
                using (MemoryStream ms = new MemoryStream())
                {
                    using (BinaryWriter bw = new BinaryWriter(ms))
                    {
                        bw.Write(workingStatus);
                        bw.Flush();
                        bw.Close();
                    }
                    dataByte = ms.ToArray();
                    for (int j = 0; j < dataByte.Length; j++)
                    {
                        Console.Write(dataByte[j]);
                    }
                }
            }

            public double WorkingStatus
            {
                get { return workingStatus; }
                set { workingStatus = value; }
            }
        }

        class Test
        {
            static void Main()
            {
                User user = new User();
                user.WorkingStatus = 1700.0;
                user.persist();
            }
        }

    Thank you for the help.
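
    Since DataInputStream.readDouble is defined to be big-endian and BinaryWriter writes little-endian, one of the two sides has to flip the byte order. A sketch of doing it on the Java side with ByteBuffer (here the C# bytes are simulated locally; in the real code you would wrap the 8 bytes read from the socket):

        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;

        public class EndianDemo {
            public static void main(String[] args) {
                // The 8 bytes as C#'s BinaryWriter emits them for 1700.0 (little-endian).
                byte[] fromCSharp = ByteBuffer.allocate(8)
                        .order(ByteOrder.LITTLE_ENDIAN)
                        .putDouble(1700.0)
                        .array();

                // Reading with the matching byte order recovers the original value.
                double value = ByteBuffer.wrap(fromCSharp)
                        .order(ByteOrder.LITTLE_ENDIAN)
                        .getDouble();
                System.out.println(value); // 1700.0
            }
        }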


  • Advice on logic circuits and serial communications

    - by Spencer Ruport
    As far as I understand the serial port so far, transferring data is done over pin 3, as shown here: [pinout diagram]. There are two things that make me uncomfortable about this. The first is that it seems to imply that the two connected devices agree on a signal speed, and the second is that even if they are configured to run at the same speed you run into possible synchronization issues... right? Such things can be handled, I suppose, but it seems like there must be a simpler method. What seems like a better approach to me would be to have one of the serial port pins send a pulse that indicates that the next bit is ready to be stored. So if we're hooking these pins up to a shift register we basically have: (some pulse pin)-clk, tx-d. Is this a common practice? Is there some reason not to do this?

    EDIT: Mike shouldn't have deleted his answer. This I2C (2-pin serial) approach seems fairly close to what I did. The serial port doesn't have a clock, you're right nobugz, but that's basically what I've done. See here:

        private void SendBytes(byte[] data)
        {
            int baudRate = 0;
            int byteToSend = 0;
            int bitToSend = 0;
            byte bitmask = 0;
            byte[] trigger = new byte[1];
            trigger[0] = 0;
            SerialPort p;
            try
            {
                p = new SerialPort(cmbPorts.Text);
            }
            catch
            {
                return;
            }
            if (!int.TryParse(txtBaudRate.Text, out baudRate)) return;
            if (baudRate < 100) return;
            p.BaudRate = baudRate;
            for (int index = 0; index < data.Length * 8; index++)
            {
                byteToSend = (int)(index / 8);
                bitToSend = index - (byteToSend * 8);
                bitmask = (byte)System.Math.Pow(2, bitToSend);
                p.Open();
                p.Parity = Parity.Space;
                p.RtsEnable = (byte)(data[byteToSend] & bitmask) > 0;
                s = p.BaseStream;
                s.WriteByte(trigger[0]);
                p.Close();
            }
        }

    Before anyone tells me how ugly this is or how I'm destroying my transfer speeds, my quick answer is I don't care about that. My point is that this seems much, much simpler than the method you described in your answer, nobugz. And it wouldn't be as ugly if the .NET SerialPort class gave me more control over the pin signals. Are there other serial port APIs that do?


  • How to pass a serialized object to appengine java task?

    - by aloo
    Hi all, I'm using Java App Engine and the task queue API to run async tasks. I would like to add a task to the task queue but pass a Java object as a parameter. I notice the TaskOptions API can add a parameter as a byte[], but I'm unsure how to use it. (1) How would I serialize my object to a byte[]? And (2) how would the task read the byte[] and reconstruct the original object? Thanks.
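
    For what it's worth, a minimal sketch of both directions using standard Java serialization; the object you pass must implement Serializable, and the helper names here are illustrative:

        import java.io.ByteArrayInputStream;
        import java.io.ByteArrayOutputStream;
        import java.io.IOException;
        import java.io.ObjectInputStream;
        import java.io.ObjectOutputStream;
        import java.io.Serializable;

        public class TaskPayload {
            // 1) Object -> byte[], suitable for the task's byte[] parameter.
            public static byte[] toBytes(Serializable obj) throws IOException {
                ByteArrayOutputStream bos = new ByteArrayOutputStream();
                ObjectOutputStream oos = new ObjectOutputStream(bos);
                oos.writeObject(obj);
                oos.close();
                return bos.toByteArray();
            }

            // 2) byte[] -> the original object, inside the task handler.
            public static Object fromBytes(byte[] bytes) throws IOException, ClassNotFoundException {
                ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes));
                try {
                    return ois.readObject();
                } finally {
                    ois.close();
                }
            }
        }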


  • Thread Synchronisation 101

    - by taspeotis
    Previously I've written some very simple multithreaded code, and I've always been aware that at any time there could be a context switch right in the middle of what I'm doing, so I've always guarded access to the shared variables through a CCriticalSection class that enters the critical section on construction and leaves it on destruction. I know this is fairly aggressive and I enter and leave critical sections quite frequently and sometimes egregiously (e.g. at the start of a function when I could put the CCriticalSection inside a tighter code block), but my code doesn't crash and it runs fast enough.

    At work my multithreaded code needs to be tighter, only locking/synchronising at the lowest level needed. At work I was trying to debug some multithreaded code, and I came across this:

        EnterCriticalSection(&m_Crit4);
        m_bSomeVariable = true;
        LeaveCriticalSection(&m_Crit4);

    Now, m_bSomeVariable is a Win32 BOOL (not volatile), which as far as I know is defined to be an int, and on x86 reading and writing these values is a single instruction, and since context switches occur on an instruction boundary there's no need for synchronising this operation with a critical section. I did some more research online to see whether this operation needed synchronisation, and I came up with two scenarios in which it did:

        1. The CPU implements out-of-order execution, or the second thread is running on a different core and the updated value is not written into RAM for the other core to see; and
        2. The int is not 4-byte aligned.

    I believe number 1 can be solved using the "volatile" keyword. In VS2005 and later the C++ compiler surrounds access to this variable using memory barriers, ensuring that the variable is always completely written/read to the main system memory before using it. Number 2 I cannot verify; I don't know why the byte alignment would make a difference. I don't know the x86 instruction set, but does mov need to be given a 4-byte aligned address? If not, do you need to use a combination of instructions? That would introduce the problem.

    QUESTION 1: Does using the "volatile" keyword (implicitly using memory barriers and hinting to the compiler not to optimise this code) absolve a programmer from the need to synchronise a 4-byte/8-byte x86/x64 variable between read/write operations?

    QUESTION 2: Is there an explicit requirement that the variable be 4-byte/8-byte aligned?

    I did some more digging into our code and the variables defined in the class:

        class CExample {
        private:
            CRITICAL_SECTION m_Crit1; // Protects variable a
            CRITICAL_SECTION m_Crit2; // Protects variable b
            CRITICAL_SECTION m_Crit3; // Protects variable c
            CRITICAL_SECTION m_Crit4; // Protects variable d
            // ...
        };

    Now, to me this seems excessive. I thought critical sections synchronised threads within a process, so if you've got one you can enter it and no other thread in that process can execute. There is no need for a critical section for each variable you want to protect: if you're in a critical section then nothing else can interrupt you. I think the only thing that can change the variables from outside a critical section is if the process shares a memory page with another process (can you do that?) and the other process starts to change the values. Mutexes would also help here; named mutexes are shared across processes, or only processes of the same name?

    QUESTION 3: Is my analysis of critical sections correct, and should this code be rewritten to use mutexes? I have had a look at other synchronisation objects (semaphores and spinlocks); are they better suited here?

    QUESTION 4: Where are critical sections/mutexes/semaphores/spinlocks best suited? That is, which synchronisation problem should each be applied to? Is there a vast performance penalty for choosing one over the other?

    And while we're on it, I read that spinlocks should not be used in a single-core multithreaded environment, only a multi-core multithreaded environment. So, QUESTION 5: Is this wrong, or if not, why is it right?

    Thanks in advance for any responses :)


  • Inserting instructions into method.

    - by Alix
    Hi, (First of all, this is a very lengthy post, but don't worry: I've already implemented all of it, I'm just asking your opinion.) I'm having trouble implementing the following; I'd appreciate some help:

        1. I get a Type as a parameter.
        2. I define a subclass using reflection. Notice that I don't intend to modify the original type, but to create a new one.
        3. I create a property per field of the original class, like so:

            public class OriginalClass {
                private int x;
            }

            public class Subclass : OriginalClass {
                private int x;
                public int X {
                    get { return x; }
                    set { x = value; }
                }
            }

        4. For every method of the superclass, I create an analogous method in the subclass. The method's body must be the same, except that I replace the instructions ldfld x with callvirt this.get_X; that is, instead of reading from the field directly I call the get accessor.

    I'm having trouble with step 4. I know you're not supposed to manipulate code like this, but I really need to. Here's what I've tried:

    Attempt #1: Use Mono.Cecil. This would allow me to parse the body of the method into human-readable Instructions, and easily replace instructions. However, the original type isn't in a .dll file, so I can't find a way to load it with Mono.Cecil. Writing the type to a .dll, then loading it, then modifying it and writing the new type to disk (which I think is the way you create a type with Mono.Cecil), and then loading it seems like a huge overhead.

    Attempt #2: Use Mono.Reflection. This would also allow me to parse the body into Instructions, but then I have no support for replacing instructions. I've implemented a very ugly and inefficient solution using Mono.Reflection, but it doesn't yet support methods that contain try-catch statements (although I guess I can implement this) and I'm concerned that there may be other scenarios in which it won't work, since I'm using the ILGenerator in a somewhat unusual way. Also, it's very ugly ;). Here's what I've done:

        private void TransformMethod(MethodInfo methodInfo) {
            // Create a method with the same signature.
            ParameterInfo[] paramList = methodInfo.GetParameters();
            Type[] args = new Type[paramList.Length];
            for (int i = 0; i < args.Length; i++) {
                args[i] = paramList[i].ParameterType;
            }
            MethodBuilder methodBuilder = typeBuilder.DefineMethod(
                methodInfo.Name, methodInfo.Attributes, methodInfo.ReturnType, args);
            ILGenerator ilGen = methodBuilder.GetILGenerator();

            // Declare the same local variables as in the original method.
            IList<LocalVariableInfo> locals = methodInfo.GetMethodBody().LocalVariables;
            foreach (LocalVariableInfo local in locals) {
                ilGen.DeclareLocal(local.LocalType);
            }

            // Get readable instructions.
            IList<Instruction> instructions = methodInfo.GetInstructions();

            // I first need to define labels for every instruction in case I
            // later find a jump to that instruction. Once the instruction has
            // been emitted I cannot label it, so I'll need to do it in advance.
            // Since I'm doing a first pass on the method's body anyway, I could
            // instead just create labels where they are truly needed, but for
            // now I'm using this quick fix.
            Dictionary<int, Label> labels = new Dictionary<int, Label>();
            foreach (Instruction instr in instructions) {
                labels[instr.Offset] = ilGen.DefineLabel();
            }

            foreach (Instruction instr in instructions) {
                // Mark this instruction with a label, in case there's a branch
                // instruction that jumps here.
                ilGen.MarkLabel(labels[instr.Offset]);

                // If this is the instruction that I want to replace (ldfld x)...
                if (instr.OpCode == OpCodes.Ldfld) {
                    // ...get the get accessor for the accessed field (get_X())
                    // (I have the accessors in a dictionary; this isn't relevant),
                    MethodInfo safeReadAccessor = dataMembersSafeAccessors[((FieldInfo) instr.Operand).Name][0];
                    // ...and instead of emitting the original instruction (ldfld x),
                    // emit a call to the get accessor.
                    ilGen.Emit(OpCodes.Callvirt, safeReadAccessor);
                // Else (it's any other instruction), reemit the instruction, unaltered.
                } else {
                    Reemit(instr, ilGen, labels);
                }
            }
        }

    And here comes the horrible, horrible Reemit method:

        private void Reemit(Instruction instr, ILGenerator ilGen, Dictionary<int, Label> labels) {
            // If the instruction doesn't have an operand, emit the opcode and return.
            if (instr.Operand == null) {
                ilGen.Emit(instr.OpCode);
                return;
            }

            // Else (it has an operand)...
            // If it's a branch instruction, retrieve the corresponding label (to
            // which we want to jump), emit the instruction and return.
            if (instr.OpCode.FlowControl == FlowControl.Branch) {
                ilGen.Emit(instr.OpCode, labels[Int32.Parse(instr.Operand.ToString())]);
                return;
            }

            // Otherwise, simply emit the instruction. I need to use the right
            // Emit call, so I need to cast the operand to its type.
            Type operandType = instr.Operand.GetType();
            if (typeof(byte).IsAssignableFrom(operandType))
                ilGen.Emit(instr.OpCode, (byte) instr.Operand);
            else if (typeof(double).IsAssignableFrom(operandType))
                ilGen.Emit(instr.OpCode, (double) instr.Operand);
            else if (typeof(float).IsAssignableFrom(operandType))
                ilGen.Emit(instr.OpCode, (float) instr.Operand);
            else if (typeof(int).IsAssignableFrom(operandType))
                ilGen.Emit(instr.OpCode, (int) instr.Operand);
            ... // you get the idea. This is a pretty long method, all like this.
        }

    Branch instructions are a special case because instr.Operand is an SByte, but Emit expects an operand of type Label. Hence the need for the labels Dictionary. As you can see, this is pretty horrible. What's more, it doesn't work in all cases, for instance with methods that contain try-catch statements, since I haven't emitted them using the BeginExceptionBlock, BeginCatchBlock, etc. methods of ILGenerator. This is getting complicated. I guess I can do it: MethodBody has a list of ExceptionHandlingClause that should contain the necessary information to do this. But I don't like this solution anyway, so I'll save it as a last resort.

    Attempt #3: Go bare-back and just copy the byte array returned by MethodBody.GetILAsByteArray(), since I only want to replace a single instruction with another single instruction of the same size that produces the exact same result: it loads the same type of object on the stack, etc. So there won't be any labels shifting and everything should work exactly the same. I've done this, replacing specific bytes of the array and then calling MethodBuilder.CreateMethodBody(byte[], int), but I still get the same error with exceptions, and I still need to declare the local variables or I'll get an error... even when I simply copy the method's body and don't change anything. So this is more efficient, but I still have to take care of the exceptions, etc. Sigh.

    Here's the implementation of attempt #3, in case anyone is interested:

        private void TransformMethod(MethodInfo methodInfo,
                Dictionary<string, MethodInfo[]> dataMembersSafeAccessors, ModuleBuilder moduleBuilder) {
            ParameterInfo[] paramList = methodInfo.GetParameters();
            Type[] args = new Type[paramList.Length];
            for (int i = 0; i < args.Length; i++) {
                args[i] = paramList[i].ParameterType;
            }
            MethodBuilder methodBuilder = typeBuilder.DefineMethod(
                methodInfo.Name, methodInfo.Attributes, methodInfo.ReturnType, args);
            ILGenerator ilGen = methodBuilder.GetILGenerator();

            IList<LocalVariableInfo> locals = methodInfo.GetMethodBody().LocalVariables;
            foreach (LocalVariableInfo local in locals) {
                ilGen.DeclareLocal(local.LocalType);
            }

            byte[] rawInstructions = methodInfo.GetMethodBody().GetILAsByteArray();
            IList<Instruction> instructions = methodInfo.GetInstructions();
            int k = 0;
            foreach (Instruction instr in instructions) {
                if (instr.OpCode == OpCodes.Ldfld) {
                    MethodInfo safeReadAccessor = dataMembersSafeAccessors[((FieldInfo) instr.Operand).Name][0];
                    byte[] bytes = toByteArray(OpCodes.Callvirt.Value);
                    for (int m = 0; m < OpCodes.Callvirt.Size; m++) {
                        rawInstructions[k++] = bytes[bytes.Length - 1 - m];
                    }
                    bytes = toByteArray(moduleBuilder.GetMethodToken(safeReadAccessor).Token);
                    for (int m = instr.Size - OpCodes.Ldfld.Size - 1; m >= 0; m--) {
                        rawInstructions[k++] = bytes[m];
                    }
                } else {
                    k += instr.Size;
                }
            }
            methodBuilder.CreateMethodBody(rawInstructions, rawInstructions.Length);
        }

        private static byte[] toByteArray(int intValue) {
            byte[] intBytes = BitConverter.GetBytes(intValue);
            if (BitConverter.IsLittleEndian)
                Array.Reverse(intBytes);
            return intBytes;
        }

        private static byte[] toByteArray(short shortValue) {
            byte[] intBytes = BitConverter.GetBytes(shortValue);
            if (BitConverter.IsLittleEndian)
                Array.Reverse(intBytes);
            return intBytes;
        }

    (I know it isn't pretty. Sorry. I put it together quickly to see if it would work.) I don't have much hope, but can anyone suggest anything better than this? Sorry about the extremely lengthy post, and thanks.


  • Bad crypto error in .NET 4.0

    - by Andrey
    Today I moved my web application to .NET 4.0 and Forms Auth just stopped working. After several hours of digging into my SqlMembershipProvider (a simplified version of the built-in SqlMembershipProvider), I found that the HMACSHA256 hash is not consistent. This is the encryption method:

        internal string EncodePassword(string pass, int passwordFormat, string salt)
        {
            if (passwordFormat == 0) // MembershipPasswordFormat.Clear
                return pass;

            byte[] bIn = Encoding.Unicode.GetBytes(pass);
            byte[] bSalt = Convert.FromBase64String(salt);
            byte[] bAll = new byte[bSalt.Length + bIn.Length];
            byte[] bRet = null;

            Buffer.BlockCopy(bSalt, 0, bAll, 0, bSalt.Length);
            Buffer.BlockCopy(bIn, 0, bAll, bSalt.Length, bIn.Length);
            if (passwordFormat == 1)
            {
                // MembershipPasswordFormat.Hashed
                HashAlgorithm s = HashAlgorithm.Create(Membership.HashAlgorithmType);
                bRet = s.ComputeHash(bAll);
            }
            else
            {
                bRet = EncryptPassword(bAll);
            }
            return Convert.ToBase64String(bRet);
        }

    Passing the same password and salt twice returns different results!!! It was working perfectly in .NET 3.5. Anyone aware of any breaking changes, or is it a known bug? UPDATE: When I specify SHA512 as the hashing algorithm, everything works fine, so I do believe it's a bug in the .NET 4.0 crypto. Thanks! Andrey


  • Implementing ToArgb()

    - by Number8
    Hello -- System.Drawing.Color has a ToArgb() method to return the int representation of the color. In Silverlight, I think we have to use System.Windows.Media.Color, which has A, R, G, B members but no method to return a single value. How can I implement ToArgb()? In System.Drawing.Color, ToArgb() consists of:

        return (int) this.Value;

    System.Windows.Media.Color has a FromArgb(byte A, byte R, byte G, byte B) method. How do I decompose the int returned by ToArgb() to use with FromArgb()? Thanks for any pointers...
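
    The packing itself is plain bit arithmetic: alpha occupies bits 24-31, red 16-23, green 8-15, blue 0-7. A sketch of both directions, written in Java syntax here, though the same shifts and masks carry over to C#:

        public class ArgbDemo {
            // Pack the four channels into a single ARGB int (alpha in the high byte).
            static int toArgb(int a, int r, int g, int b) {
                return ((a & 0xFF) << 24) | ((r & 0xFF) << 16) | ((g & 0xFF) << 8) | (b & 0xFF);
            }

            // Decompose an ARGB int back into its channels, e.g. to feed FromArgb(a, r, g, b).
            static int[] fromArgb(int argb) {
                return new int[] {
                    (argb >>> 24) & 0xFF,  // A
                    (argb >>> 16) & 0xFF,  // R
                    (argb >>> 8) & 0xFF,   // G
                    argb & 0xFF            // B
                };
            }

            public static void main(String[] args) {
                System.out.printf("%08X%n", toArgb(255, 16, 32, 64)); // FF102040
            }
        }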


  • Java Array Comparison

    - by BlairHippo
    Working within Java, let's say I have two objects that, thanks to obj.getClass().isArray(), I know are both arrays. Let's further say that I want to compare those two arrays to each other, possibly by using Arrays.equals. Is there a graceful way to do this without resorting to a big exhaustive if/else tree to figure out which flavor of Arrays.equals needs to be used? I'm looking for something that's less of an eyesore than this:

        if (obj1 instanceof byte[] && obj2 instanceof byte[]) {
            return Arrays.equals((byte[]) obj1, (byte[]) obj2);
        } else if (obj1 instanceof boolean[] && obj2 instanceof boolean[]) {
            ...
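
    One trick that avoids the type ladder entirely (worth verifying against the Javadoc, but I believe it holds): Arrays.deepEquals picks the right primitive Arrays.equals overload for each element pair, so wrapping each object in a one-element Object[] does the dispatch for you:

        import java.util.Arrays;

        public class ArrayCompare {
            static boolean arraysEqual(Object obj1, Object obj2) {
                // deepEquals compares the wrapped elements; when both are primitive
                // arrays of the same type it delegates to the matching overload.
                return Arrays.deepEquals(new Object[] { obj1 }, new Object[] { obj2 });
            }

            public static void main(String[] args) {
                System.out.println(arraysEqual(new byte[] { 1, 2 }, new byte[] { 1, 2 })); // true
                System.out.println(arraysEqual(new int[] { 1 }, new long[] { 1 }));        // false
            }
        }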


  • Rijndael managed: plaintext length detection

    - by sheepsimulator
    I am spending some time learning how to use the RijndaelManaged library in .NET, and developed the following function to test encrypting text, with slight modifications from the MSDN library:

        Function encryptBytesToBytes_AES(ByVal plainText As Byte(), ByVal Key() As Byte, ByVal IV() As Byte) As Byte()
            ' Check arguments.
            If plainText Is Nothing OrElse plainText.Length <= 0 Then
                Throw New ArgumentNullException("plainText")
            End If
            If Key Is Nothing OrElse Key.Length <= 0 Then
                Throw New ArgumentNullException("Key")
            End If
            If IV Is Nothing OrElse IV.Length <= 0 Then
                Throw New ArgumentNullException("IV")
            End If

            ' Declare the RijndaelManaged object used to encrypt the data.
            Dim aesAlg As RijndaelManaged = Nothing

            ' Declare the stream used to encrypt to an in-memory array of bytes.
            Dim msEncrypt As MemoryStream = Nothing

            Try
                ' Create a RijndaelManaged object with the specified key and IV.
                aesAlg = New RijndaelManaged()
                aesAlg.BlockSize = 128
                aesAlg.KeySize = 128
                aesAlg.Mode = CipherMode.ECB
                aesAlg.Padding = PaddingMode.None
                aesAlg.Key = Key
                aesAlg.IV = IV

                ' Create a decryptor to perform the stream transform.
                Dim encryptor As ICryptoTransform = aesAlg.CreateEncryptor(aesAlg.Key, aesAlg.IV)

                ' Create the streams used for encryption.
                msEncrypt = New MemoryStream()
                Using csEncrypt As New CryptoStream(msEncrypt, encryptor, CryptoStreamMode.Write)
                    Using swEncrypt As New StreamWriter(csEncrypt)
                        ' Write all data to the stream.
                        swEncrypt.Write(plainText)
                    End Using
                End Using
            Finally
                ' Clear the RijndaelManaged object.
                If Not (aesAlg Is Nothing) Then
                    aesAlg.Clear()
                End If
            End Try

            ' Return the encrypted bytes from the memory stream.
            Return msEncrypt.ToArray()
        End Function

    Here's the actual code I am calling encryptBytesToBytes_AES() with:

        Private Sub btnEncrypt_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnEncrypt.Click
            Dim bZeroKey As Byte() = {&H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0}
            PrintBytesToRTF(encryptBytesToBytes_AES(bZeroKey, bZeroKey, bZeroKey))
        End Sub

    However, I get an exception thrown on swEncrypt.Write(plainText) stating that the 'Length of the data to encrypt is invalid.' But I know that the sizes of my key, IV, and plaintext are all 16 bytes == 128 bits == aesAlg.BlockSize. Why is it throwing this exception? Is it because the StreamWriter is trying to make a String (ostensibly with some encoding) and it doesn't like &H0 as a value?


  • Help decrypting in ColdFusion passwords created in .NET

    - by KnightStalker
    I have a SQL db storing passwords that were encrypted through a .NET application, which I need to decrypt through a ColdFusion app. I just can't seem to get things set up properly for the CF decryption to work. Any help would be appreciated. Thanks. The .NET decryption code is:

        public string Decrypt(string input)
        {
            try
            {
                DESCryptoServiceProvider des = new DESCryptoServiceProvider();
                int ZeroBasedByteCount = (input.Length / 2);

                // Put the input string into the byte array
                byte[] inputByteArray = new byte[ZeroBasedByteCount];
                int i;
                int x;
                for (x = 0; x < ZeroBasedByteCount; x++)
                {
                    i = (Convert.ToInt32(input.Substring(x * 2, 2), 16));
                    inputByteArray[x] = (byte) i;
                }

                // Create the crypto objects
                des.Key = ASCIIEncoding.ASCII.GetBytes(key);
                des.IV = ASCIIEncoding.ASCII.GetBytes(key);
                MemoryStream ms = new MemoryStream();
                CryptoStream cs = new CryptoStream(ms, des.CreateDecryptor(), CryptoStreamMode.Write);

                // Flush the data through the crypto stream into the memory stream
                cs.Write(inputByteArray, 0, inputByteArray.Length);
                cs.FlushFinalBlock();

                // Get the decrypted data back from the memory stream
                StringBuilder ret = new StringBuilder();
                foreach (byte b in ms.ToArray())
                {
                    ret.Append((char) b);
                }
                return ret.ToString();
            }
            catch (Exception ex)
            {
                throw (ex);
                return null;
            }
        }


  • Parsing multibyte string in PHP

    - by Petr Peller
    I would like to write an (HTML) parser based on a state machine, but I have doubts about how to actually read/use the input. I decided to load the whole input into one string and then work with it as with an array, holding its index as the current parsing position. There would be no problems with a single-byte encoding, but in a multi-byte encoding each value does not represent a character, but a byte of a character. Example:

        $mb_string = 'žšcr'; //4 multi-byte characters in UTF-8
        for($i=0; $i < 4; $i++) {
            echo $mb_string[$i], PHP_EOL;
        }

    Outputs:

        L
        ž
        L
        A

    This means I cannot iterate through the string in a loop to check single characters, because I never know whether I am in the middle of a character or not. So the questions are: How do I read a single character from a string in a multi-byte-safe, performance-friendly way? Is it a good idea to work with the string as if it were an array in this case? How would you read the input?


  • How to get size of bytes?

    - by k80sg
    How do I obtain the number of bytes before allocating the byte size of the array handsize, as shown below, given that the incoming byte-array data is sent in 3 different sizes? Thanks.

        BufferedInputStream bais = new BufferedInputStream(requestSocket.getInputStream());
        DataInputStream datainput = new DataInputStream(bais);
        // need to read the number of bytes here before proceeding.
        byte[] handsize = new byte[bytesize];
        datainput.readFully(handsize);
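
    A TCP stream carries no message boundaries of its own, so if you control the sender, the usual pattern is a length prefix: write the payload size first, then size the array from it on the receiving end. A minimal sketch under that assumption:

        import java.io.DataInputStream;
        import java.io.DataOutputStream;
        import java.io.IOException;

        public class FramedMessages {
            // Sender: a 4-byte length prefix, then the payload itself.
            static void writeFrame(DataOutputStream out, byte[] payload) throws IOException {
                out.writeInt(payload.length);
                out.write(payload);
                out.flush();
            }

            // Receiver: read the prefix, allocate exactly that much, then readFully.
            static byte[] readFrame(DataInputStream in) throws IOException {
                int byteSize = in.readInt();
                byte[] handsize = new byte[byteSize];
                in.readFully(handsize);
                return handsize;
            }
        }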

