Search Results

Search found 3449 results on 138 pages for 'tranquil byte'.

Page 20 of 138

  • ASP.NET membership salt?

    - by chobo2
    Hi. Does anyone know how ASP.NET membership generates its salt key, and how it then encodes it (i.e., is it salt + password or password + salt)? I am using SHA-1 with my membership provider, but I would like to recreate the same salts so the built-in membership code hashes things the same way my code does. Thanks.

    Edit 2: Never mind, I misread it and was thinking it said bytes, not bits. So I was passing in 128 bytes, not 128 bits.

    Edit: This is what I have been trying:

        public string EncodePassword(string password, string salt)
        {
            byte[] bytes = Encoding.Unicode.GetBytes(password);
            byte[] src = Encoding.Unicode.GetBytes(salt);
            byte[] dst = new byte[src.Length + bytes.Length];
            Buffer.BlockCopy(src, 0, dst, 0, src.Length);
            Buffer.BlockCopy(bytes, 0, dst, src.Length, bytes.Length);
            HashAlgorithm algorithm = HashAlgorithm.Create("SHA1");
            byte[] inArray = algorithm.ComputeHash(dst);
            return Convert.ToBase64String(inArray);
        }

        private byte[] createSalt(byte[] saltSize)
        {
            byte[] saltBytes = saltSize;
            RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider();
            rng.GetNonZeroBytes(saltBytes);
            return saltBytes;
        }

    I have not yet tried to see if ASP.NET membership will recognize this, but the hashed password looks close. I just don't know how to convert the salt to Base64. I did this:

        byte[] storeSalt = createSalt(new byte[128]);
        string salt = Encoding.Unicode.GetString(storeSalt);
        string base64Salt = Convert.ToBase64String(storeSalt);
        int test = base64Salt.Length;

    The test length is 172, which is well over 128 bits, so what am I doing wrong? This is what their salt looks like:

        vkNj4EvbEPbk1HHW+K8y/A==

    This is what my salt looks like:

        E9oEtqo0livLke9+csUkf2AOLzFsOvhkB/NocSQm33aySyNOphplx9yH2bgsHoEeR/aw/pMe4SkeDvNVfnemoB4PDNRUB9drFhzXOW5jypF9NQmBZaJDvJ+uK3mPXsWkEcxANn9mdRzYCEYCaVhgAZ5oQRnnT721mbFKpfc4kpI=
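
    For reference, a minimal sketch of what the default SqlMembershipProvider is widely reported to do (based on commonly cited decompilations, so treat the details as assumptions): the salt is 16 random bytes (128 bits) stored as Base64, and the hash input is the salt bytes followed by the Unicode password bytes; note the stored salt is Base64-decoded before hashing, not passed through Encoding.Unicode.GetBytes:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class MembershipHashSketch
        {
            // 16 random bytes (128 bits), stored and exchanged as Base64.
            public static string GenerateSalt()
            {
                byte[] buf = new byte[16];
                new RNGCryptoServiceProvider().GetBytes(buf);
                return Convert.ToBase64String(buf);
            }

            // Hash input is salt bytes + Unicode password bytes, in that order.
            public static string EncodePassword(string password, string base64Salt)
            {
                byte[] pass = Encoding.Unicode.GetBytes(password);
                byte[] salt = Convert.FromBase64String(base64Salt); // decode, don't GetBytes
                byte[] all = new byte[salt.Length + pass.Length];
                Buffer.BlockCopy(salt, 0, all, 0, salt.Length);
                Buffer.BlockCopy(pass, 0, all, salt.Length, pass.Length);
                using (HashAlgorithm sha1 = SHA1.Create())
                {
                    return Convert.ToBase64String(sha1.ComputeHash(all));
                }
            }
        }

    A 16-byte salt encodes to 24 Base64 characters, which matches vkNj4EvbEPbk1HHW+K8y/A== above; 128 bytes encode to 172 characters, which is exactly the test length observed.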


  • Java: Detect if image isGIF or isTIFF and convert to JPG

    - by BoDiE2003
    I have a Java website; I've now built an uploader. Validation for JPG images is done, and so is a way to save them on the server. But now I need to create a utility class that validates GIF and TIFF images from a byte[] and converts that byte[] to a java.awt.Image so it can be saved as JPG. So basically I need boolean isGIF(byte[]), boolean isTIFF(byte[]), java.awt.Image convertGIF(byte[]) and java.awt.Image convertTIFF(byte[]). But I have no idea how to do it; could someone help me, please? Thank you!


  • YouTube video upload failure - unable to convert file - encoding the video wrong?

    - by Anthony
    I am using .NET to create a video uploading application. Although it's communicating with YouTube and uploading the file, the processing of that file fails. YouTube gives me the error message, "Upload failed (unable to convert video file)." This supposedly means that "your video is in a format that our converters don't recognize..." I have made attempts with two different videos, both of which upload and process fine when I do it manually. So I suspect that my code is (a) not encoding the video properly and/or (b) not sending my API request properly. Below is how I am constructing my API PUT request and encoding the video; any suggestions on what the error could be would be appreciated. Thanks. P.S. I'm not using the client library, because my application will use the resumable upload feature; thus, I am manually constructing my API requests. Documentation: http://code.google.com/intl/ja/apis/youtube/2.0/developers_guide_protocol_resumable_uploads.html#Uploading_the_Video_File

    Code:

        // new PUT request for sending video
        WebRequest putRequest = WebRequest.Create(uploadURL);

        // set properties
        putRequest.Method = "PUT";
        putRequest.ContentType = getMIME(file); // the MIME type of the uploaded video file

        // encode video
        byte[] videoInBytes = encodeVideo(file);

        public static byte[] encodeVideo(string video)
        {
            try
            {
                byte[] fileInBytes = File.ReadAllBytes(video);
                Console.WriteLine("\nSize of byte array containing " + video + ": " + fileInBytes.Length);
                return fileInBytes;
            }
            catch (Exception e)
            {
                Console.WriteLine("\nException: " + e.Message + "\nReturning an empty byte array");
                byte[] empty = new byte[0];
                return empty;
            }
        }

        // encode custom headers in a byte array
        byte[] PUTbytes = encode(putRequest.Headers.ToString());

        public static byte[] encode(string headers)
        {
            ASCIIEncoding encoding = new ASCIIEncoding();
            byte[] bytes = encoding.GetBytes(headers);
            return bytes;
        }

        // entire request contains headers + binary video data
        putRequest.ContentLength = PUTbytes.Length + videoInBytes.Length;

        // send request - correct?
        sendRequest(putRequest, PUTbytes);
        sendRequest(putRequest, videoInBytes);

        public static void sendRequest(WebRequest request, byte[] encoding)
        {
            // GetRequestStream returns the stream used to send data for the HttpWebRequest.
            Stream stream = request.GetRequestStream();
            try
            {
                stream.Write(encoding, 0, encoding.Length);
            }
            catch (Exception e)
            {
                Console.WriteLine("\nException writing stream: " + e.Message);
            }
        }
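
    One thing that stands out in the snippet above: the headers are written into the request body and counted in ContentLength, but HttpWebRequest serializes its own headers, so the body should contain only the raw video bytes. A hedged sketch of the send step under that reading (uploadURL, file, and the MIME lookup come from the question; the resumable-session setup is unchanged):

        using System;
        using System.IO;
        using System.Net;

        static class UploadSketch
        {
            // Sketch: PUT only the file bytes; headers belong on the request
            // object, not in the body. HttpWebRequest serializes them itself.
            public static void UploadVideo(string uploadURL, string file, string mimeType)
            {
                byte[] videoInBytes = File.ReadAllBytes(file);

                var putRequest = (HttpWebRequest)WebRequest.Create(uploadURL);
                putRequest.Method = "PUT";
                putRequest.ContentType = mimeType;              // e.g. from getMIME(file)
                putRequest.ContentLength = videoInBytes.Length; // body = video bytes only

                using (Stream body = putRequest.GetRequestStream())
                {
                    body.Write(videoInBytes, 0, videoInBytes.Length);
                }

                using (var response = (HttpWebResponse)putRequest.GetResponse())
                {
                    Console.WriteLine(response.StatusCode);
                }
            }
        }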


  • C# error: "Index was outside the bounds of the array"

    - by iliailiaey
    I have written the code below, but I get the error "Index was outside the bounds of the array" and I can't understand the reason for it. How can I correct the code to prevent the error? (In the code, I want to make a byte array of size 57600 from a byte array of size 38400.)

        int q = 0;
        int nbytes = 57600;
        byte[] gh = new byte[38400];
        byte[] byte8 = new byte[nbytes];
        byte[] aa = { 0xF8, 0x07, 0xE0, 0x1F };
        for (int y = 0; y < nbytes - 3; y += 3)
        {
            if (q < 38400 - 3)
            {
                byte8[y] = (byte)(gh[q] & aa[1]);
                byte8[y + 1] = (byte)(((gh[q] & aa[1]) << 5) | ((gh[q + 1] & aa[2]) >> 3));
                byte8[y + 2] = (byte)((gh[q + 1] & aa[3]) << 3);
                q += 2;
            }
        }
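
    Judging by the sizes (38400 bytes = 19200 two-byte pixels; 57600 bytes = 19200 three-byte pixels) and the 0xF8/0x07/0xE0/0x1F masks, this looks like a 16-bit RGB565 to 24-bit RGB888 expansion. A hedged sketch of that conversion with all indexing kept in bounds; the little-endian two-byte pixel layout is an assumption:

        // Sketch: expand 16bpp RGB565 pixels into 24bpp RGB888 bytes.
        // Assumes each source pixel is two bytes, low byte first (an assumption).
        static byte[] Rgb565ToRgb888(byte[] src)
        {
            int pixels = src.Length / 2;           // 38400 bytes -> 19200 pixels
            byte[] dst = new byte[pixels * 3];     // 19200 pixels -> 57600 bytes
            for (int p = 0; p < pixels; p++)
            {
                int v = src[2 * p] | (src[2 * p + 1] << 8); // 16-bit pixel value
                int r = (v >> 11) & 0x1F;          // 5 bits of red
                int g = (v >> 5) & 0x3F;           // 6 bits of green
                int b = v & 0x1F;                  // 5 bits of blue
                dst[3 * p]     = (byte)(r << 3);   // scale 5 bits up to 8
                dst[3 * p + 1] = (byte)(g << 2);   // scale 6 bits up to 8
                dst[3 * p + 2] = (byte)(b << 3);
            }
            return dst;
        }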


  • slowAES encryption and Java decryption

    - by amnon
    Hi, I've tried to implement the same steps as discussed in AES .NET, but with no success; I can't seem to get Java and slowAES to play together. Attached is my code. I'm sorry I can't add more; this is my first time trying to deal with encryption. I would appreciate any help.

        private static final String ALGORITHM = "AES";
        private static final byte[] keyValue = getKeyBytes("12345678901234567890123456789012");
        private static final byte[] INIT_VECTOR = new byte[16];
        private static IvParameterSpec ivSpec = new IvParameterSpec(INIT_VECTOR);

        public static void main(String[] args) throws Exception {
            String encoded = encrypt("watson?");
            System.out.println(encoded);
        }

        private static Key generateKey() throws Exception {
            Key key = new SecretKeySpec(keyValue, ALGORITHM);
            // SecretKeyFactory keyFactory = SecretKeyFactory.getInstance(ALGORITHM);
            // key = keyFactory.generateSecret(new DESKeySpec(keyValue));
            return key;
        }

        private static byte[] getKeyBytes(String key) {
            byte[] hash = DigestUtils.sha(key);
            byte[] saltedHash = new byte[16];
            System.arraycopy(hash, 0, saltedHash, 0, 16);
            return saltedHash;
        }

        public static String encrypt(String valueToEnc) throws Exception {
            Key key = generateKey();
            Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
            c.init(Cipher.ENCRYPT_MODE, key, ivSpec);
            byte[] encValue = c.doFinal(valueToEnc.getBytes());
            String encryptedValue = new BASE64Encoder().encode(encValue);
            return encryptedValue;
        }

        public static String decrypt(String encryptedValue) throws Exception {
            Key key = generateKey();
            Cipher c = Cipher.getInstance(ALGORITHM);
            c.init(Cipher.DECRYPT_MODE, key);
            byte[] decordedValue = new BASE64Decoder().decodeBuffer(encryptedValue);
            byte[] decValue = c.doFinal(decordedValue);
            String decryptedValue = new String(decValue);
            return decryptedValue;
        }

    The bytes returned are different. Thanks in advance.
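
    One mismatch visible in the code above: encrypt uses "AES/CBC/PKCS5Padding" with an explicit IV, while decrypt instantiates plain "AES" (provider-default mode) and passes no IV, so the two directions do not agree with each other, let alone with slowAES. For interop, every side must fix the same key derivation, mode, padding, and IV. A hedged .NET-side sketch of the parameters used above (first 16 bytes of SHA-1 of the passphrase, zero IV, CBC, PKCS7), useful for cross-checking ciphertexts; it mirrors the question's choices rather than endorsing them:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class AesInteropSketch
        {
            // First 16 bytes of SHA-1(passphrase), as in getKeyBytes above (UTF-8 assumed).
            static byte[] DeriveKey(string passphrase)
            {
                using (HashAlgorithm sha1 = SHA1.Create())
                {
                    byte[] hash = sha1.ComputeHash(Encoding.UTF8.GetBytes(passphrase));
                    byte[] key = new byte[16];
                    Buffer.BlockCopy(hash, 0, key, 0, 16);
                    return key;
                }
            }

            public static byte[] Encrypt(string plainText, string passphrase)
            {
                using (var aes = new RijndaelManaged())
                {
                    aes.Mode = CipherMode.CBC;
                    aes.Padding = PaddingMode.PKCS7; // the scheme Java calls "PKCS5Padding"
                    aes.Key = DeriveKey(passphrase);
                    aes.IV = new byte[16];           // all-zero IV, mirroring the question
                    using (ICryptoTransform enc = aes.CreateEncryptor())
                    {
                        byte[] data = Encoding.UTF8.GetBytes(plainText);
                        return enc.TransformFinalBlock(data, 0, data.Length);
                    }
                }
            }
        }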


  • Why can't I call .update on a MessageDigest instance?

    - by Arthur Ulfeldt
    When I run this from the REPL:

        (def md (MessageDigest/getInstance "SHA-1"))
        (. md update (into-array [(byte 1) (byte 2) (byte 3)]))

    I get:

        No matching method found: update for class java.security.MessageDigest$Delegate

    The Java 6 docs for MessageDigest show: update(byte[] input) - "Updates the digest using the specified array of bytes." And the class of (class (into-array [(byte 1) (byte 2) (byte 3)])) is [Ljava.lang.Byte;. Am I missing something in the definition of update? Am I not creating the class I think I am? Am I not passing it the type I think I am?


  • How to send a Java integer in four bytes to another application?

    - by user1468729
    I've tried different ways (as you can see from the commented code) to convert the mode to four bytes to send it via a socket to another application:

        public void routeMessage(byte[] data, int mode) {
            logger.debug(mode);
            logger.debug(Integer.toBinaryString(mode));
            byte[] message = new byte[8];
            ByteBuffer byteBuffer = ByteBuffer.allocate(4);
            ByteArrayOutputStream baoStream = new ByteArrayOutputStream();
            DataOutputStream doStream = new DataOutputStream(baoStream);
            try {
                doStream.writeInt(mode);
            } catch (IOException e) {
                logger.debug("Error converting mode from integer to bytes.", e);
                return;
            }
            byte[] bytes = baoStream.toByteArray();
            bytes[0] = (byte)((mode >>> 24) & 0x000000ff);
            bytes[1] = (byte)((mode >>> 16) & 0x000000ff);
            bytes[2] = (byte)((mode >>> 8) & 0x000000ff);
            bytes[3] = (byte)(mode & 0x000000ff);
            // bytes = byteBuffer.array();
            for (byte b : bytes) {
                logger.debug(b);
            }
            for (int i = 0; i < 4; i++) {
                // byte tmp = (byte)(mode >> (32 - ((i + 1) * 8)));
                message[i] = bytes[i];
                logger.debug("mode, " + i + ": " + Integer.toBinaryString(message[i]));
                message[i + 4] = data[i];
            }
            broker.routeMessage(message);
        }

    It works well with integers up to 127 and then again with integers over 256. I believe it has something to do with Java types being signed, but I can't seem to get it to work. Here are some examples of what the program prints:

        127
        1111111
        0  0  0  127
        mode, 0: 0
        mode, 1: 0
        mode, 2: 0
        mode, 3: 1111111

        128
        10000000
        0  0  0  -128
        mode, 0: 0
        mode, 1: 0
        mode, 2: 0
        mode, 3: 11111111111111111111111110000000

        211
        11010011
        0  0  0  -45
        mode, 0: 0
        mode, 1: 0
        mode, 2: 0
        mode, 3: 11111111111111111111111111010011

        306
        100110010
        0  0  1  50
        mode, 0: 0
        mode, 1: 0
        mode, 2: 1
        mode, 3: 110010

    How is it suddenly possible for a byte to store 32 bits? How could I fix this?


  • How to dispatch a multimethod on the type of an array

    - by Arthur Ulfeldt
    I'm working on a multimethod that needs to update a hash for a bunch of different things in a sequence. It looked fairly straightforward until I tried to enter the 'type of an array of X':

        (defmulti update-hash #(class %2))

        (type (byte 1))
        => java.lang.Byte

        (defmethod update-hash java.lang.Byte [md byte]
          (. md update byte))

        (type (into-array [(byte 1)]))
        => [Ljava.lang.Byte;

        (defmethod update-hash < WHAT GOES HERE > [md byte]


  • Convert byte array from Oracle RAW to System.Guid?

    - by Cory McCarty
    My app interacts with both Oracle and SQL Server databases using a custom data access layer written in ADO.NET using DataReaders. Right now I'm having a problem with the conversion between GUIDs (which we use for primary keys) and the Oracle RAW datatype. Inserts into Oracle are fine (I just use the ToByteArray() method on System.Guid). The problem is converting back to System.Guid when I load records from the database. Currently, I'm passing the byte array I get from ADO.NET into the constructor for System.Guid. This appears to work, but the GUIDs that appear in the database do not correspond to the GUIDs I'm generating in this manner. I can't change the database schema or the query (since it's reused for SQL Server). I need code to convert the byte array from Oracle into the correct Guid.
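
    A likely explanation (an assumption, but a common gotcha): Guid.ToByteArray stores the first three GUID fields little-endian, so the hex shown for a RAW(16) column is byte-swapped relative to the GUID's string form, even though new Guid(bytes) round-trips ToByteArray perfectly. A sketch of a helper that flips between the two byte orders:

        using System;

        static class GuidByteOrder
        {
            // Swap between Guid.ToByteArray() order (first three fields little-endian)
            // and the straight left-to-right order shown when RAW(16) is displayed as hex.
            public static byte[] FlipEndian(byte[] b)
            {
                if (b == null || b.Length != 16)
                    throw new ArgumentException("A GUID is exactly 16 bytes.");
                byte[] r = (byte[])b.Clone();
                Array.Reverse(r, 0, 4); // Data1 (4 bytes)
                Array.Reverse(r, 4, 2); // Data2 (2 bytes)
                Array.Reverse(r, 6, 2); // Data3 (2 bytes)
                return r;               // the final 8 bytes match in both orders
            }

            // 'raw' as read from the DataReader; compare both string forms against
            // what the database shows and keep whichever matches.
            public static Guid FromOracleRaw(byte[] raw)
            {
                return new Guid(FlipEndian(raw));
            }
        }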


  • C++ project type: unicode vs multi-byte; pros and cons

    - by Stefan Valianu
    I'm wondering what the Stack Overflow community thinks when it comes to creating a project (thinking primarily C++ here) with a Unicode or a multi-byte character set. Are there pros to going Unicode straight from the start, implying all your strings will be in wide format? Are there performance issues or larger memory requirements because of a standard use of a larger character type? Is there an advantage to this method? Do some processor architectures handle wide characters better? Are there any reasons to make your project Unicode if you don't plan on supporting additional languages? What reasons would one have for creating a project with a multi-byte character set? How do all of the factors above play out in a high-performance environment (such as a modern video game)?


  • IA-32: Pushing a byte onto a stack isn't possible on Pentium, why?

    - by Tim Green
    Hi, I've come to learn that you cannot push a byte directly onto the Intel Pentium's stack; can anyone explain this to me, please? The reason I've been given is that the esp register is word-addressable (or that is the assumption in our model) and it must be an "even address". I would have assumed decrementing the value of some 32-bit binary number wouldn't mess with the alignment of the register, but apparently I don't understand enough. I have tried some NASM tests and found that if I declare a variable (bite db 123) and push it onto the stack, esp is decremented by 4 (indicating that it pushed 32 bits?). But "push byte bite" (sorry for my choice of variable names) results in an error:

        test.asm:10: error: Unsupported non-32-bit ELF relocation

    Any words of wisdom would be greatly appreciated during this troubled time. I am a first-year undergraduate, so sorry for my naivety in any of this. Tim


  • How to get the current byte position of the parser in ANTLR with the C# target?

    - by Aftershock
    Hi, I am interested in the current byte position in the stream when parsing something using ANTLR 3. I have seen that there is a similar question, but there was no real answer there; that is why I am trying again. I am not interested in the token index, the byte position in a line, etc. Could someone tell me that? It is obvious that some code has to be written/overridden. Does someone have specific code to write? I use C#.


  • How should I handle searching through byte arrays in Java?

    - by Zombies
    Preliminary: I am writing my own HTTP client in Java, and I am trying to parse out the contents of chunked transfer encoding. Here is my dilemma: since the chunked payload is gzipped, there is a mix of ASCII and binary. I can't just take the HTTP response content, convert it to a String, and make use of StringUtils, since the binary data can easily contain NUL characters. So what I need to do is some basic parsing of each chunk and its chunk length (as per the chunked transfer section of the HTTP/1.1 spec). Are there any helpful ways of searching through byte arrays of binary/part-ASCII data for certain patterns (like a CR LF) instead of just a single byte? Or must I write the for loops for this?
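
    The scan itself is only a few lines; a minimal sketch of the naive forward search for a multi-byte pattern such as CR LF (shown in C# for illustration; it transliterates directly to a Java byte[] loop):

        static class ByteSearch
        {
            // Naive scan: index of the first occurrence of 'pattern' in 'data'
            // at or after 'from', or -1 if absent. O(n*m), which is fine for
            // short patterns like CR LF.
            public static int IndexOf(byte[] data, byte[] pattern, int from)
            {
                for (int i = from; i <= data.Length - pattern.Length; i++)
                {
                    int j = 0;
                    while (j < pattern.Length && data[i + j] == pattern[j])
                        j++;
                    if (j == pattern.Length)
                        return i;
                }
                return -1;
            }
        }

        // Usage: find the CR LF that terminates a chunk-size line.
        // int crlf = ByteSearch.IndexOf(buffer, new byte[] { 0x0D, 0x0A }, 0);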


  • Why does StringWriter.ToString return `System.Byte[]` and not the data?

    - by theateist
    The UnZipFile method writes the data from inputStream to outputWriter. Why does sr.ToString() return System.Byte[] and not the data?

        using (var sr = new StringWriter())
        {
            UnZipFile(response.GetResponseStream(), sr);
            var content = sr.ToString();
        }

        public static void UnZipFile(Stream inputStream, TextWriter outputWriter)
        {
            using (var zipStream = new ZipInputStream(inputStream))
            {
                ZipEntry currentEntry;
                if ((currentEntry = zipStream.GetNextEntry()) != null)
                {
                    var size = 2048;
                    var data = new byte[size];
                    while (true)
                    {
                        size = zipStream.Read(data, 0, size);
                        if (size > 0)
                        {
                            outputWriter.Write(data);
                        }
                        else
                        {
                            break;
                        }
                    }
                }
            }
        }
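
    The short answer to the question above: TextWriter has no byte[] overload, so outputWriter.Write(data) binds to Write(object), which just calls ToString() on the array and writes the literal text "System.Byte[]". A hedged fix sketch, assuming ZipInputStream is SharpZipLib's and the entry contains UTF-8 text; buffering into a MemoryStream and decoding once avoids splitting a multi-byte character across two reads:

        using System.IO;
        using System.Text;
        using ICSharpCode.SharpZipLib.Zip; // assumed source of ZipInputStream

        static class UnZipSketch
        {
            // Buffer the decompressed bytes, then decode once at the end.
            public static string UnZipToString(Stream inputStream)
            {
                using (var zipStream = new ZipInputStream(inputStream))
                using (var ms = new MemoryStream())
                {
                    if (zipStream.GetNextEntry() != null)
                    {
                        var data = new byte[2048];
                        int n;
                        while ((n = zipStream.Read(data, 0, data.Length)) > 0)
                            ms.Write(data, 0, n);
                    }
                    return Encoding.UTF8.GetString(ms.ToArray()); // assumes UTF-8 text
                }
            }
        }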


  • What makes signed integers behave differently?

    - by 000
    In this example of x86_64 hex/disassembled code I see:

        48B80000000000000000    mov rax, 0x0

        Signed Byte      52
        Unsigned Byte    52
        Signed Short     14388
        Unsigned Short   14388
        Signed Int       943863860
        Unsigned Int     943863860
        Signed Int64     3472328296363079732
        Unsigned Int64   3472328296363079732
        Float            4.630555e-05
        Double           1.39804332763832e-76
        String           48B80000000000000000

    which to me appears to have the same functionality as:

        48C7C000000000          mov rax, 0x0

        Signed Byte      52
        Unsigned Byte    52
        Signed Short     14388
        Unsigned Short   14388
        Signed Int       927152180
        Unsigned Int     927152180
        Signed Int64     3472328377950746676
        Unsigned Int64   3472328377950746676
        Float            1.163599e-05
        Double           1.39806836023098e-76
        String           48C7C000000000

    How is the first example treated differently from the second example?


  • Get Specific depth values in Kinect (XNA)

    - by N0xus
    I'm currently trying to do hand/finger tracking with a Kinect in XNA. For this, I need to be able to specify the depth range I want my program to render. I've looked around, and I cannot see how this is done. As far as I can tell, the Kinect's depth values only work with the pre-set ranges found in the DepthStream. What I would like to do is make it modular so that I can change the depth range my Kinect renders. I know this has been done before, but I can't find anything online that can show me how to do it. Could someone please help me out? I have managed to render the standard depth view with the Kinect, and my method for converting the depth frame is as follows (I have a feeling it's something in here I need to set):

        private byte[] ConvertDepthFrame(short[] depthFrame, DepthImageStream depthStream, int depthFrame32Length)
        {
            int tooNearDepth = depthStream.TooNearDepth;
            int tooFarDepth = depthStream.TooFarDepth;
            int unknownDepth = depthStream.UnknownDepth;
            byte[] depthFrame32 = new byte[depthFrame32Length];

            for (int i16 = 0, i32 = 0; i16 < depthFrame.Length && i32 < depthFrame32.Length; i16++, i32 += 4)
            {
                int player = depthFrame[i16] & DepthImageFrame.PlayerIndexBitmask;
                int realDepth = depthFrame[i16] >> DepthImageFrame.PlayerIndexBitmaskWidth;
                // transform 13-bit depth information into an 8-bit intensity appropriate
                // for display (we disregard information in most significant bit)
                byte intensity = (byte)(~(realDepth >> 8));

                if (player == 0 && realDepth == 0)
                {
                    // white
                    depthFrame32[i32 + RedIndex] = 255;
                    depthFrame32[i32 + GreenIndex] = 255;
                    depthFrame32[i32 + BlueIndex] = 255;
                }
                // omitted other if statements; they simply change the color of the
                // pixels if they went outside the pre-set depth values
                else
                {
                    // tint the intensity by dividing by per-player values
                    depthFrame32[i32 + RedIndex] = (byte)(intensity >> IntensityShiftByPlayerR[player]);
                    depthFrame32[i32 + GreenIndex] = (byte)(intensity >> IntensityShiftByPlayerG[player]);
                    depthFrame32[i32 + BlueIndex] = (byte)(intensity >> IntensityShiftByPlayerB[player]);
                }
            }
            return depthFrame32;
        }

    I have a strong hunch it's something I need to change in the int player and int realDepth values, but I can't be sure.
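
    If the goal is an adjustable rendering window rather than changing the sensor's physical range, one option is to map realDepth (in millimeters) through a caller-chosen [near, far] window instead of the fixed realDepth >> 8 shift. A small sketch of that idea; the window values in the usage comment are hypothetical:

        static class DepthWindow
        {
            // Map a raw depth in millimeters onto 0-255 over an adjustable
            // [nearMm, farMm] window; nearer renders bright, farther renders black.
            public static byte IntensityInWindow(int depthMm, int nearMm, int farMm)
            {
                if (depthMm <= nearMm) return 255;
                if (depthMm >= farMm) return 0;
                return (byte)(255 - ((depthMm - nearMm) * 255) / (farMm - nearMm));
            }
        }

        // e.g. inside the conversion loop, replacing the shift-based intensity
        // (the 800-1200 mm hand-tracking band is a hypothetical choice):
        // byte intensity = DepthWindow.IntensityInWindow(realDepth, 800, 1200);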


  • Delphi hook to redirect to a different IP

    - by Chris
    What is the best way to redirect ANY browser to a different IP for specific sites? For example, if the user types www.facebook.com into any browser, he should be redirected to 127.0.0.1; the same should happen if he types 66.220.146.11. What I have so far is this: using WinpkFilter I am able to intercept all the traffic on port 80, with its type (in or out), source IP, destination IP and packet. My problem is how to modify the packet so that the browser is redirected. This is the code that I have right now:

        program Pass;

        {$APPTYPE CONSOLE}

        uses
          SysUtils, Windows, Winsock, winpkf, iphlp;

        var
          iIndex, counter : DWORD;
          hFilt : THANDLE;
          Adapts : TCP_AdapterList;
          AdapterMode : ADAPTER_MODE;
          Buffer, ParsedBuffer : INTERMEDIATE_BUFFER;
          ReadRequest : ETH_REQUEST;
          hEvent : THANDLE;
          hAdapter : THANDLE;
          pEtherHeader : TEtherHeaderPtr;
          pIPHeader : TIPHeaderPtr;
          pTcpHeader : TTCPHeaderPtr;
          pUdpHeader : TUDPHeaderPtr;
          SourceIP, DestIP : TInAddr;
          thePacket : PChar;
          f : TextFile;
          SourceIpString, DestinationIpString : string;
          SourceName, DestinationName : string;

        function IPAddrToName(IPAddr : string) : string;
        var
          SockAddrIn : TSockAddrIn;
          HostEnt : PHostEnt;
          WSAData : TWSAData;
        begin
          WSAStartup($101, WSAData);
          SockAddrIn.sin_addr.s_addr := inet_addr(PChar(IPAddr));
          HostEnt := gethostbyaddr(@SockAddrIn.sin_addr.S_addr, 4, AF_INET);
          if HostEnt <> nil then
            result := StrPas(Hostent^.h_name)
          else
            result := '';
        end;

        procedure ReleaseInterface();
        begin
          // Restore default mode
          AdapterMode.dwFlags := 0;
          AdapterMode.hAdapterHandle := hAdapter;
          SetAdapterMode(hFilt, @AdapterMode);
          // Set NULL event to release previously set event object
          SetPacketEvent(hFilt, hAdapter, 0);
          // Close Event
          if hEvent <> 0 then
            CloseHandle(hEvent);
          // Close driver object
          CloseFilterDriver(hFilt);
          // Release NDISAPI
          FreeNDISAPI();
        end;

        begin
          // Check the number of parameters
          if ParamCount() < 2 then
          begin
            Writeln('Command line syntax:');
            Writeln('  PassThru.exe index num');
            Writeln('  index - network interface index.');
            Writeln('  num - number of packets to filter');
            Writeln('You can use ListAdapters to determine correct index.');
            Exit;
          end;
          // Initialize NDISAPI
          InitNDISAPI();
          // Create driver object
          hFilt := OpenFilterDriver('NDISRD');
          if IsDriverLoaded(hFilt) then
          begin
            // Get parameters from command line
            iIndex := StrToInt(ParamStr(1));
            counter := StrToInt(ParamStr(2));
            // Set exit procedure
            ExitProcessProc := ReleaseInterface;
            // Get TCP/IP bound interfaces
            GetTcpipBoundAdaptersInfo(hFilt, @Adapts);
            // Check parameter values
            if iIndex > Adapts.m_nAdapterCount then
            begin
              Writeln('There is no network interface with such index on this system.');
              Exit;
            end;
            hAdapter := Adapts.m_nAdapterHandle[iIndex];
            AdapterMode.dwFlags := MSTCP_FLAG_SENT_TUNNEL or MSTCP_FLAG_RECV_TUNNEL;
            AdapterMode.hAdapterHandle := hAdapter;
            // Create notification event
            hEvent := CreateEvent(nil, TRUE, FALSE, nil);
            if hEvent <> 0 then
              if SetPacketEvent(hFilt, hAdapter, hEvent) <> 0 then
              begin
                // Initialize request
                ReadRequest.EthPacket.Buffer := @Buffer;
                ReadRequest.hAdapterHandle := hAdapter;
                SetAdapterMode(hFilt, @AdapterMode);
                counter := 0;
                // while counter <> 0 do
                while true do
                begin
                  WaitForSingleObject(hEvent, INFINITE);
                  while ReadPacket(hFilt, @ReadRequest) <> 0 do
                  begin
                    // dec(counter);
                    pEtherHeader := TEtherHeaderPtr(@Buffer.m_IBuffer);
                    if ntohs(pEtherHeader.h_proto) = ETH_P_IP then
                    begin
                      pIPHeader := TIPHeaderPtr(Integer(pEtherHeader) + SizeOf(TEtherHeader));
                      SourceIP.S_addr := pIPHeader.SourceIp;
                      DestIP.S_addr := pIPHeader.DestIp;
                      if pIPHeader.Protocol = IPPROTO_TCP then
                      begin
                        pTcpHeader := TTCPHeaderPtr(Integer(pIPHeader) + (pIPHeader.VerLen and $F) * 4);
                        if (pTcpHeader.SourcePort = htons(80)) or (pTcpHeader.DestPort = htons(80)) then
                        begin
                          inc(counter);
                          if Buffer.m_dwDeviceFlags = PACKET_FLAG_ON_SEND then
                            Writeln(counter, ') - MSTCP --> Interface')
                          else
                            Writeln(counter, ') - Interface --> MSTCP');
                          Writeln('  Packet size = ', Buffer.m_Length);
                          Writeln(Format('  IP %.3u.%.3u.%.3u.%.3u --> %.3u.%.3u.%.3u.%.3u PROTOCOL: %u',
                            [byte(SourceIP.S_un_b.s_b1), byte(SourceIP.S_un_b.s_b2),
                             byte(SourceIP.S_un_b.s_b3), byte(SourceIP.S_un_b.s_b4),
                             byte(DestIP.S_un_b.s_b1), byte(DestIP.S_un_b.s_b2),
                             byte(DestIP.S_un_b.s_b3), byte(DestIP.S_un_b.s_b4),
                             byte(pIPHeader.Protocol)]));
                          Writeln(Format('  TCP SRC PORT: %d DST PORT: %d',
                            [ntohs(pTcpHeader.SourcePort), ntohs(pTcpHeader.DestPort)]));
                          // get the data
                          thePacket := pchar(pEtherHeader) + (sizeof(TEtherHeaderPtr) + pIpHeader.VerLen * 4 + pTcpHeader.Offset * 4);
                          {
                          SourceIpString := IntToStr(byte(SourceIP.S_un_b.s_b1)) + '.' +
                            IntToStr(byte(SourceIP.S_un_b.s_b2)) + '.' +
                            IntToStr(byte(SourceIP.S_un_b.s_b3)) + '.' +
                            IntToStr(byte(SourceIP.S_un_b.s_b4));
                          DestinationIpString := IntToStr(byte(DestIP.S_un_b.s_b1)) + '.' +
                            IntToStr(byte(DestIP.S_un_b.s_b2)) + '.' +
                            IntToStr(byte(DestIP.S_un_b.s_b3)) + '.' +
                            IntToStr(byte(DestIP.S_un_b.s_b4));
                          }
                        end;
                      end;
                    end;
                    // if ntohs(pEtherHeader.h_proto) = ETH_P_RARP then
                    //   Writeln('  Reverse Addr Res packet');
                    // if ntohs(pEtherHeader.h_proto) = ETH_P_ARP then
                    //   Writeln('  Address Resolution packet');
                    // Writeln('__');
                    if Buffer.m_dwDeviceFlags = PACKET_FLAG_ON_SEND then
                      // Place packet on the network interface
                      SendPacketToAdapter(hFilt, @ReadRequest)
                    else
                      // Indicate packet to MSTCP
                      SendPacketToMstcp(hFilt, @ReadRequest);
                    {
                    if counter = 0 then
                    begin
                      Writeln('Filtering complete');
                      readln;
                      break;
                    end;
                    }
                  end;
                  ResetEvent(hEvent);
                end;
              end;
          end;
        end.


  • What is the difference between using MD5.Create and MD5CryptoServiceProvider?

    - by byte
    In the .NET Framework there are a couple of ways to calculate an MD5 hash, it seems; however, there is something I don't understand. What is the distinction between the following? What sets them apart from each other? They seem to produce identical results:

        public static string GetMD5Hash(string str)
        {
            MD5CryptoServiceProvider md5 = new MD5CryptoServiceProvider();
            byte[] bytes = ASCIIEncoding.Default.GetBytes(str);
            byte[] encoded = md5.ComputeHash(bytes);
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < encoded.Length; i++)
                sb.Append(encoded[i].ToString("x2"));
            return sb.ToString();
        }

        public static string GetMD5Hash2(string str)
        {
            System.Security.Cryptography.MD5 md5 = System.Security.Cryptography.MD5.Create();
            byte[] bytes = Encoding.Default.GetBytes(str);
            byte[] encoded = md5.ComputeHash(bytes);
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < encoded.Length; i++)
                sb.Append(encoded[i].ToString("x2"));
            return sb.ToString();
        }
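
    The reason they match: MD5.Create() is a factory that, on a default .NET Framework configuration, resolves to MD5CryptoServiceProvider; the factory form merely decouples the caller from the concrete class (the mapping can be remapped in machine.config, hence the hedging). A quick sketch to check:

        using System;
        using System.Security.Cryptography;

        class Md5FactoryCheck
        {
            static void Main()
            {
                using (MD5 md5 = MD5.Create())
                {
                    // On a default .NET Framework install this typically prints
                    // "MD5CryptoServiceProvider" - the same class as newing it up
                    // directly, just obtained through the factory.
                    Console.WriteLine(md5.GetType().Name);
                }
            }
        }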


  • Cisco "copy startup-config tftp" results in a 0 byte file on the server?

    - by Geoffrey
    I'm pulling my hair out trying to figure this out. My startup-config is good; I can view it with a show command. I'm trying to copy it to a TFTP server:

        asa5505# copy startup-config tftp
        Address or name of remote host []? ipaddress
        Destination filename [startup-config]? t
        !!
        %Error writing tftp://ipaddress/t (Timed out attempting to connect)

    On my TFTP server (SolarWinds), I get the following:

        binary, PUT. Started file name: C:\TFTP-Root\t
        binary, PUT. File Exists, C:\TFTP-Root\t
        binary, PUT. Deleting Existing File.
        binary, PUT. Interrupted by client, cause: The process cannot access the file 'C:\TFTP-Root\t' because it is being used by another process

    I've used tftpd32 with the same results. I've tried different servers, even one on the same network as the ASA; same results. It'll create a 0-byte file and never do the dump. What's going on? Everything is working normally except for this.


  • How to convert this C# code to PHP

    - by user3694473
    Hi, I'm trying to convert this class from C# to PHP. How can I do it? Thanks in advance.

        public class CreateCode
        {
            public string SazBon(string MM)
            {
                string RET = "";
                string[] ME = new string[25];
                for (int i = 1; i < MM.Length; i += 2)
                {
                    ME[i] = MM[i - 1].ToString();
                }
                for (int j = 0; j < MM.Length; j += 2)
                {
                    ME[j] = MM[j + 1].ToString();
                }
                ME[20] = "1";
                ME[21] = "OH";
                ME[22] = "23";
                ME[23] = "fXC";
                ME[24] = "5";
                ME[5] = ME[14];
                ME[13] = ME[23];
                ME[2] = ME[22];
                ME[18] = ME[21];
                ME[23] = ME[11];
                ME[19] = ME[0];
                foreach (string item in ME)
                {
                    RET += item;
                }
                string BACK = Encrypt(RET, RET, 256);
                BACK = encryptString(BACK);
                return BACK;
            }

            string encryptString(string strToEncrypt) // md5
            {
                UTF8Encoding ue = new UTF8Encoding();
                byte[] bytes = ue.GetBytes(strToEncrypt);
                MD5CryptoServiceProvider md5 = new MD5CryptoServiceProvider();
                byte[] hashBytes = md5.ComputeHash(bytes);
                // Bytes to string
                return System.Text.RegularExpressions.Regex.Replace(
                    BitConverter.ToString(hashBytes), "-", "").ToLower();
            }

            private byte[] Encrypt(byte[] clearData, byte[] Key, byte[] IV)
            {
                MemoryStream ms = new MemoryStream();
                Rijndael alg = Rijndael.Create();
                alg.Key = Key;
                alg.IV = IV;
                CryptoStream cs = new CryptoStream(ms, alg.CreateEncryptor(), CryptoStreamMode.Write);
                cs.Write(clearData, 0, clearData.Length);
                cs.Close();
                byte[] encryptedData = ms.ToArray();
                return encryptedData;
            }

            byte[] A;

            private string Encrypt(string Data, string Password, int Bits)
            {
                byte[] clearBytes = System.Text.Encoding.Unicode.GetBytes(Data);
                PasswordDeriveBytes pdb = new PasswordDeriveBytes(Password,
                    new byte[] { 0x00, 0x01, 0x02, 0x1C, 0x1D, 0x1E, 0x03, 0x04,
                                 0x05, 0x0F, 0x20, 0x21, 0xAD, 0xAF, 0xA4 });
                if (Bits == 128)
                {
                    byte[] encryptedData = Encrypt(clearBytes, pdb.GetBytes(16), pdb.GetBytes(16));
                    return Convert.ToBase64String(encryptedData);
                }
                else if (Bits == 192)
                {
                    byte[] encryptedData = Encrypt(clearBytes, pdb.GetBytes(24), pdb.GetBytes(16));
                    return Convert.ToBase64String(encryptedData);
                }
                else if (Bits == 256)
                {
                    byte[] encryptedData = Encrypt(clearBytes, pdb.GetBytes(32), pdb.GetBytes(16));
                    return Convert.ToBase64String(encryptedData);
                }
                else
                {
                    return string.Concat(Bits);
                }
            } // AES
        }


  • .Net Intermittent System.Web.Services.Protocols.SoapHeaderException

    - by ScottE
    We have a .NET 3.5 web app that consumes third-party web services. The proxy was created by adding a web reference to their WSDL; this proxy is not compiled. Our error logging is picking up frequent but intermittent exceptions:

        An exception of type 'System.Web.Services.Protocols.SoapHeaderException' occurred and was caught

    If I follow the URL to the page that generated the exception, I can't recreate it.

    Edit: Here is most of the exception, and where it bubbled up from:

        Message : Internal Error
        Type : System.Web.Services.Protocols.SoapHeaderException, System.Web.Services, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a
        Source : System.Web.Services
        Help link :
        Actor :
        Code : http://schemas.xmlsoap.org/soap/envelope/:Client
        Detail :
        Lang :
        Node :
        Role :
        SubCode :
        Data : System.Collections.ListDictionaryInternal
        TargetSite : System.Object[] ReadResponse(System.Web.Services.Protocols.SoapClientMessage, System.Net.WebResponse, System.IO.Stream, Boolean)
        Stack Trace :
           at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall)
           at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
           at Vendor.getSearch(getSearchRequest getSearchRequest) in c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\be43c34e\b09edc7e\App_WebReferences.pww-cf-q.0.cs:line 73

    Edit 2: Inner exceptions. I sometimes get the following logged:

        Message : Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.
        Type : System.IO.IOException, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
        Source : System
        Help link :
        Data : System.Collections.ListDictionaryInternal
        TargetSite : Int32 Read(Byte[], Int32, Int32)
        Stack Trace :
           at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
           at System.Net.FixedSizeReader.ReadPacket(Byte[] buffer, Int32 offset, Int32 count)
           at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.ForceAuthentication(Boolean receiveFirst, Byte[] buffer, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.ProcessAuthentication(LazyAsyncResult lazyResult)
           at System.Net.TlsStream.CallProcessAuthentication(Object state)
           at System.Threading.ExecutionContext.runTryCode(Object userData)
           at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData)
           at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
           at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
           at System.Net.TlsStream.ProcessAuthentication(LazyAsyncResult result)
           at System.Net.TlsStream.Write(Byte[] buffer, Int32 offset, Int32 size)
           at System.Net.PooledStream.Write(Byte[] buffer, Int32 offset, Int32 size)
           at System.Net.ConnectStream.WriteHeaders(Boolean async)

    And/or:

        Message : An existing connection was forcibly closed by the remote host
        Type : System.Net.Sockets.SocketException, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
        Source : System
        Help link :
        ErrorCode : 10054
        SocketErrorCode : ConnectionReset
        NativeErrorCode : 10054
        Data : System.Collections.ListDictionaryInternal
        TargetSite : Int32 Receive(Byte[], Int32, Int32, System.Net.Sockets.SocketFlags)
        Stack Trace :
           at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
           at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)

    Update: We're still working on it. Originally there was a route issue, which was resolved, but we're still getting the inner exceptions with socket errors. We had MS support involved today, and they looked at some traces and network captures. The web service host does round-robin DNS, and they may be responding from a different IP address: the SYN goes to one IP and the SYN/ACK comes from another. This is not good. This is likely quite specific to our situation, but perhaps it applies to others as well. Microsoft Network Monitor and an application trace got us the information we needed.


  • Facing Memory Leaks in AES Encryption Method.

    - by Mubashar Ahmad
    Can anyone please identify whether there are any possible memory leaks in the following code? I have tried it with .NET Memory Profiler, and it says "CreateEncryptor" and some other functions are leaving unmanaged memory leaks; I have confirmed this using performance monitors. But Dispose, Clear, and Close calls are already placed wherever possible. Please advise me accordingly; it's urgent.

        public static string Encrypt(string plainText, string key)
        {
            // Set up the encryption objects
            byte[] encryptedBytes = null;
            using (AesCryptoServiceProvider acsp = GetProvider(Encoding.UTF8.GetBytes(key)))
            {
                byte[] sourceBytes = Encoding.UTF8.GetBytes(plainText);
                using (ICryptoTransform ictE = acsp.CreateEncryptor())
                {
                    // Set up stream to contain the encryption
                    using (MemoryStream msS = new MemoryStream())
                    {
                        // Perform the encryption, storing output into the stream
                        using (CryptoStream csS = new CryptoStream(msS, ictE, CryptoStreamMode.Write))
                        {
                            csS.Write(sourceBytes, 0, sourceBytes.Length);
                            csS.FlushFinalBlock();
                            // sourceBytes are now encrypted as an array of secure bytes
                            encryptedBytes = msS.ToArray(); // .ToArray() is important, don't mess with the buffer
                            csS.Close();
                        }
                        msS.Close();
                    }
                }
                acsp.Clear();
            }
            // return the encrypted bytes as a BASE64 encoded string
            return Convert.ToBase64String(encryptedBytes);
        }

        private static AesCryptoServiceProvider GetProvider(byte[] key)
        {
            AesCryptoServiceProvider result = new AesCryptoServiceProvider();
            result.BlockSize = 128;
            result.KeySize = 256;
            result.Mode = CipherMode.CBC;
            result.Padding = PaddingMode.PKCS7;
            result.GenerateIV();
            result.IV = new byte[] { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
            byte[] RealKey = GetKey(key, result);
            result.Key = RealKey;
            // result.IV = RealKey;
            return result;
        }

        private static byte[] GetKey(byte[] suggestedKey, SymmetricAlgorithm p)
        {
            byte[] kRaw = suggestedKey;
            List<byte> kList = new List<byte>();
            for (int i = 0; i < p.LegalKeySizes[0].MaxSize; i += 8)
            {
                kList.Add(kRaw[(i / 8) % kRaw.Length]);
            }
            byte[] k = kList.ToArray();
            return k;
        }


  • IEnumerable<T> ToArray usage, is it a copy or a pointer?

    - by Daniel
    I am parsing an arbitrary-length byte array that is going to be passed around to a few different layers of parsing. Each parser creates a header and a packet payload, just like any ordinary encapsulation, and my problem lies in how the encapsulation holds its packet byte-array payload. Say I have a 100-byte array with 3 levels of encapsulation: 3 packet objects will be created, and I want to set the payload of each packet to the corresponding position in the byte array. For example, let's say the payload size is 20 at every level, and imagine each object has a public byte[] Payload. The problem is that each byte[] Payload is a copy of part of the original 100 bytes, so I end up with 160 bytes in memory instead of 100. In C++ I could just use a pointer; however, I'm writing this in C#. So I created the following class:

        public class PayloadSegment<T> : IEnumerable<T>
        {
            public readonly T[] Array;
            public readonly int Offset;
            public readonly int Count;

            public PayloadSegment(T[] array, int offset, int count)
            {
                this.Array = array;
                this.Offset = offset;
                this.Count = count;
            }

            public T this[int index]
            {
                get
                {
                    if (index < 0 || index >= this.Count)
                        throw new IndexOutOfRangeException();
                    else
                        return Array[Offset + index];
                }
                set
                {
                    if (index < 0 || index >= this.Count)
                        throw new IndexOutOfRangeException();
                    else
                        Array[Offset + index] = value;
                }
            }

            public IEnumerator<T> GetEnumerator()
            {
                for (int i = Offset; i < Offset + Count; i++)
                    yield return Array[i];
            }

            System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
            {
                IEnumerator<T> enumerator = this.GetEnumerator();
                while (enumerator.MoveNext())
                {
                    yield return enumerator.Current;
                }
            }
        }

    This way I can simply reference a position inside the original byte array but use positional indexing. However, if I do something like:

        PayloadSegment<byte> something = new PayloadSegment<byte>(someArray, 5, 10);
        byte[] somethingArray = something.ToArray();

    will somethingArray be a copy of the bytes, or a reference to the original PayloadSegment, which in turn is a reference to the original byte array? Sorry, it was hard to word this.
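
    To answer the closing question directly: Enumerable.ToArray always allocates a fresh array and copies the sequence into it, so somethingArray would be an independent 10-byte copy, not a view. The BCL's ArraySegment<T> plays the same windowing role as PayloadSegment<T>; a small sketch demonstrating the copy (it uses ArraySegment<T>'s IEnumerable<T> support, which assumes .NET 4.5 or later):

        using System;
        using System.Linq;

        class ToArrayCopyCheck
        {
            static void Main()
            {
                byte[] original = new byte[100];
                var view = new ArraySegment<byte>(original, 5, 10); // window, no copy

                byte[] copy = view.ToArray(); // LINQ ToArray: new allocation + copy
                Console.WriteLine(ReferenceEquals(original, copy)); // False

                copy[0] = 42;
                Console.WriteLine(original[5]); // still 0 - the original is untouched
            }
        }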

