Search Results

Search found 3953 results on 159 pages for 'byte slave'.

Page 51/159

  • How do I combine two methods of cryptography, the CBC block cipher mode and the Vigenère cipher? [on hold]

    - by Bangpe
    Vigenère code: static String encrypt(String text, final String key) { String res = ""; text = text.toUpperCase(); for (int i = 0, j = 0; i < text.length(); i++) { char c = text.charAt(i); if (c < 'A' || c > 'Z') continue; res += (char)((c + key.charAt(j) - 2 * 'A') % 26 + 'A'); j = ++j % key.length(); } return res; } CBC cipher code: public CbcBlockCipher( BlockCipher blockCipher ) { super( blockCipher.keySize(), blockCipher.blockSize() ); this.blockCipher = blockCipher; iv = new byte[blockSize()]; zeroBlock( iv ); temp = new byte[blockSize()]; }
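
    One way to read the question: treat the Vigenère transform as the block cipher E in the CBC recurrence C_i = E(P_i + C_{i-1}), with letter-wise mod-26 addition standing in for XOR, since the symbols are letters A-Z rather than bits. A minimal Java sketch under that assumption (block size = key length, all-'A' IV, Java 11+ for String.repeat; the class and method names are hypothetical, not taken from the question's code):

      // Hypothetical sketch: CBC-style chaining wrapped around a Vigenere block.
      // Mod-26 addition replaces XOR because the alphabet is A-Z, not raw bits.
      public class VigenereCbc {
          static String vigenere(String block, String key) {
              StringBuilder sb = new StringBuilder();
              for (int i = 0; i < block.length(); i++) {
                  sb.append((char) ((block.charAt(i) + key.charAt(i % key.length()) - 2 * 'A') % 26 + 'A'));
              }
              return sb.toString();
          }

          static String addBlocks(String a, String b) {  // letter-wise (a + b) mod 26
              StringBuilder sb = new StringBuilder();
              for (int i = 0; i < a.length(); i++) {
                  sb.append((char) ((a.charAt(i) + b.charAt(i) - 2 * 'A') % 26 + 'A'));
              }
              return sb.toString();
          }

          static String encryptCbc(String plain, String key) {
              StringBuilder out = new StringBuilder();
              String prev = "A".repeat(key.length());    // IV of all 'A' (the zero letter)
              for (int i = 0; i < plain.length(); i += key.length()) {
                  String block = plain.substring(i, Math.min(i + key.length(), plain.length()));
                  prev = vigenere(addBlocks(block, prev.substring(0, block.length())), key);
                  out.append(prev);
              }
              return out.toString();
          }

          public static void main(String[] args) {
              // Input is assumed to be uppercase A-Z only, as in the question's encrypt().
              System.out.println(encryptCbc("ATTACKATDAWNATTACKATDAWN", "KEY"));
          }
      }

    The point of the chaining is that repeated plaintext blocks no longer produce repeated ciphertext blocks, which is exactly the weakness plain Vigenère has.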

    Read the article

  • Java deadlock problem....

    - by markovuksanovic
    I am using Java sockets for communication. On the client side I have some processing, and at this point I send an object to the client. The code is as follows: while (true) { try { Socket server = new Socket("localhost", 3000); OutputStream os = server.getOutputStream(); InputStream is = server.getInputStream(); CommMessage commMessage = new CommMessageImpl(); ByteArrayOutputStream bos = new ByteArrayOutputStream(); ObjectOutputStream oos = new ObjectOutputStream(bos); oos.writeObject(commMessage); os.write(bos.toByteArray()); os.flush(); byte[] buff = new byte[512]; int bytesRead = 0; ByteArrayOutputStream receivedObject = new ByteArrayOutputStream(); while ((bytesRead = is.read(buff)) > -1) { receivedObject.write(buff, 0, bytesRead); System.out.println(receivedObject); } os.close(); Thread.sleep(10000); } catch (IOException e) { } catch (InterruptedException e) { } } Next, on the server side I have the following code to read the object and write the response (which is just an echo message): public void startServer() { Socket client = null; try { server = new ServerSocket(3000); logger.log(Level.INFO, "Waiting for connections."); client = server.accept(); logger.log(Level.INFO, "Accepted a connection from: " + client.getInetAddress()); os = new ObjectOutputStream(client.getOutputStream()); is = new ObjectInputStream(client.getInputStream()); // Read contents of the stream and store it into a byte array. byte[] buff = new byte[512]; int bytesRead = 0; ByteArrayOutputStream receivedObject = new ByteArrayOutputStream(); while ((bytesRead = is.read(buff)) > -1) { receivedObject.write(buff, 0, bytesRead); } // Check if received stream is CommMessage or not contents. CommMessage commMessage = getCommMessage(receivedObject); if (commMessage != null) { commMessage.setSessionState(this.sessionManager.getState().getState()); ByteArrayOutputStream bos = new ByteArrayOutputStream(); ObjectOutputStream oos = new ObjectOutputStream(bos); oos.writeObject(commMessage); os.write(bos.toByteArray()); System.out.println(commMessage.getCommMessageType()); } else { processData(receivedObject, this.sessionManager); } os.flush(); } catch (IOException e) { } finally { try { is.close(); os.close(); client.close(); server.close(); } catch (IOException e) { } } } The above code works ok if I do not try to read data on the client side (if I exclude the code related to reading). But if I have that code, for some reason, I get some kind of deadlock when accessing input streams. Any ideas what I might have done wrong? Thanks in advance.
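
    A likely culprit: the client loops on is.read() until it returns -1, but -1 only arrives once the server closes its end of the socket, and the server is itself looping on read() waiting for the client to signal end-of-stream first, so each side blocks waiting for the other. One common fix is to frame each object with a length prefix so neither side depends on EOF at all. A minimal sketch of that idea (the helper names are hypothetical, not the poster's API):

      import java.io.*;

      // Hypothetical sketch: length-prefixed frames instead of read-until-EOF.
      public class Framing {
          // Sender: a 4-byte length header, then the serialized object bytes.
          static void sendFramed(OutputStream os, Serializable obj) throws IOException {
              ByteArrayOutputStream bos = new ByteArrayOutputStream();
              new ObjectOutputStream(bos).writeObject(obj);
              byte[] payload = bos.toByteArray();
              DataOutputStream dos = new DataOutputStream(os);
              dos.writeInt(payload.length);  // tells the peer exactly when to stop reading
              dos.write(payload);
              dos.flush();
          }

          // Receiver: read exactly the advertised byte count, then deserialize.
          static Object readFramed(InputStream is) throws IOException, ClassNotFoundException {
              DataInputStream dis = new DataInputStream(is);
              int length = dis.readInt();
              byte[] payload = new byte[length];
              dis.readFully(payload);        // returns without ever waiting for EOF
              return new ObjectInputStream(new ByteArrayInputStream(payload)).readObject();
          }
      }

    Alternatively, since the server already wraps the socket in ObjectOutputStream/ObjectInputStream, both sides could simply call writeObject()/readObject() on the socket streams directly; object streams carry their own framing.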

    Read the article

  • Trying to packetize TCP with non-blocking IO is hard! Am I doing something wrong?

    - by Ricket
    Oh how I wish TCP was packet-based like UDP is! But alas, that's not the case, so I'm trying to implement my own packet layer. Here's the chain of events so far (ignoring writing packets). Oh, and my Packets are very simply structured: two unsigned bytes for length, and then byte[length] data. (I can't imagine if they were any more complex, I'd be up to my ears in if statements!) Server is in an infinite loop, accepting connections and adding them to a list of Connections. PacketGatherer (another thread) uses a Selector to figure out which Connection.SocketChannels are ready for reading. It loops over the results and tells each Connection to read(). Each Connection has a partial IncomingPacket and a list of Packets which have been fully read and are waiting to be processed. On read(): Tell the partial IncomingPacket to read more data. (IncomingPacket.readData below) If it's done reading (IncomingPacket.complete()), make a Packet from it and stick the Packet into the list waiting to be processed and then replace it with a new IncomingPacket. There are a couple problems with this. First, only one packet is being read at a time. If the IncomingPacket needs only one more byte, then only one byte is read this pass. This can of course be fixed with a loop but it starts to get sorta complicated and I wonder if there is a better overall way. Second, the logic in IncomingPacket is a little bit crazy, to be able to read the two bytes for the length and then read the actual data. Here is the code, boiled down for quick & easy reading: int readBytes; // number of total bytes read so far byte length1, length2; // each byte in an unsigned short int (see getLength()) public int getLength() { // will be inaccurate if readBytes < 2 return (int)(length1 << 8 | length2); } public void readData(SocketChannel c) { if (readBytes < 2) { // we don't yet know the length of the actual data ByteBuffer lengthBuffer = ByteBuffer.allocate(2 - readBytes); int numBytesRead = c.read(lengthBuffer); if(readBytes == 0) { if(numBytesRead >= 1) length1 = lengthBuffer.get(); if(numBytesRead == 2) length2 = lengthBuffer.get(); } else if(readBytes == 1) { if(numBytesRead == 1) length2 = lengthBuffer.get(); } readBytes += numBytesRead; } if(readBytes >= 2) { // then we know we have the entire length variable // lazily-instantiate data buffers based on getLength() // read into data buffers, increment readBytes // (does not read more than the amount of this packet, so it does not // need to handle overflow into the next packet's data) } } public boolean complete() { return (readBytes > 2 && readBytes == getLength()+2); } Basically I need feedback on my code. Please suggest any improvements. Even overhauling my entire system would be okay, if you have suggestions for how better to implement the whole thing. Book recommendations are welcome too; I love books. I just get the feeling that something isn't quite right.
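
    For what it's worth, both pain points (one packet per read pass, and the two-byte length state machine) tend to disappear if each Connection keeps a single ByteBuffer that accumulates reads, with a drain loop that extracts every complete packet currently buffered. A rough sketch of that shape (hypothetical names, not a drop-in for the poster's classes):

      import java.io.IOException;
      import java.nio.ByteBuffer;
      import java.nio.channels.SocketChannel;
      import java.util.ArrayList;
      import java.util.List;

      // Hypothetical sketch: accumulate bytes, then drain all complete packets.
      class PacketReader {
          private final ByteBuffer buf = ByteBuffer.allocate(2 + 65535); // header + max payload

          List<byte[]> read(SocketChannel ch) throws IOException {
              List<byte[]> packets = new ArrayList<>();
              if (ch.read(buf) == -1) throw new IOException("peer closed connection");
              buf.flip();                                    // switch to draining mode
              while (buf.remaining() >= 2) {                 // enough bytes for the header?
                  int length = buf.getShort(buf.position()) & 0xFFFF;  // peek unsigned length
                  if (buf.remaining() < 2 + length) break;   // whole packet not here yet
                  buf.getShort();                            // consume the header
                  byte[] data = new byte[length];
                  buf.get(data);
                  packets.add(data);
              }
              buf.compact();                                 // keep any partial packet bytes
              return packets;
          }
      }

    Because the drain loop keeps going until fewer than length+2 bytes remain, one pass returns every packet that has fully arrived, and a partial packet simply stays buffered until the next readiness event.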

    Read the article

  • how to mount partitions from USB drives in Windows using Delphi?

    - by user569556
    Hi. I'm a Delphi programmer. I want to mount all partitions from USB drives in Windows (XP). The OS is doing this automatically but there are situations when such a program is useful. I know how to find if a drive is on USB or not. My code so far is: type STORAGE_QUERY_TYPE = (PropertyStandardQuery = 0, PropertyExistsQuery, PropertyMaskQuery, PropertyQueryMaxDefined); TStorageQueryType = STORAGE_QUERY_TYPE; STORAGE_PROPERTY_ID = (StorageDeviceProperty = 0, StorageAdapterProperty); TStoragePropertyID = STORAGE_PROPERTY_ID; STORAGE_PROPERTY_QUERY = packed record PropertyId: STORAGE_PROPERTY_ID; QueryType: STORAGE_QUERY_TYPE; AdditionalParameters: array[0..9] of AnsiChar; end; TStoragePropertyQuery = STORAGE_PROPERTY_QUERY; STORAGE_BUS_TYPE = (BusTypeUnknown = 0, BusTypeScsi, BusTypeAtapi, BusTypeAta, BusType1394, BusTypeSsa, BusTypeFibre, BusTypeUsb, BusTypeRAID, BusTypeiScsi, BusTypeSas, BusTypeSata, BusTypeMaxReserved = $7F); TStorageBusType = STORAGE_BUS_TYPE; STORAGE_DEVICE_DESCRIPTOR = packed record Version: DWORD; Size: DWORD; DeviceType: Byte; DeviceTypeModifier: Byte; RemovableMedia: Boolean; CommandQueueing: Boolean; VendorIdOffset: DWORD; ProductIdOffset: DWORD; ProductRevisionOffset: DWORD; SerialNumberOffset: DWORD; BusType: STORAGE_BUS_TYPE; RawPropertiesLength: DWORD; RawDeviceProperties: array[0..0] of AnsiChar; end; TStorageDeviceDescriptor = STORAGE_DEVICE_DESCRIPTOR; const IOCTL_STORAGE_QUERY_PROPERTY = $002D1400; var i: Integer; H: THandle; USBDrives: array of Byte; Query: TStoragePropertyQuery; dwBytesReturned: DWORD; Buffer: array[0..1023] of Byte; sdd: TStorageDeviceDescriptor absolute Buffer; begin SetLength(UsbDrives, 0); SetErrorMode(SEM_FAILCRITICALERRORS); for i := 0 to 99 do begin H := CreateFile(PChar('\\.\PhysicalDrive' + IntToStr(i)), 0, FILE_SHARE_READ or FILE_SHARE_WRITE, nil, OPEN_EXISTING, 0, 0); if H <> INVALID_HANDLE_VALUE then begin try dwBytesReturned := 0; FillChar(Query, SizeOf(Query), 0); FillChar(Buffer, SizeOf(Buffer), 0); sdd.Size := SizeOf(Buffer); Query.PropertyId := StorageDeviceProperty; Query.QueryType := PropertyStandardQuery; if DeviceIoControl(H, IOCTL_STORAGE_QUERY_PROPERTY, @Query, SizeOf(Query), @Buffer, SizeOf(Buffer), dwBytesReturned, nil) then if sdd.BusType = BusTypeUsb then begin SetLength(USBDrives, Length(USBDrives) + 1); UsbDrives[High(USBDrives)] := Byte(i); end; finally CloseHandle(H); end; end; end; for i := 0 to High(USBDrives) do begin // end; end. But I don't know how to access the partitions on each drive and mount them. Can you please help me? I searched before I asked and couldn't find working code. But if I did not search properly then I'm sorry; please point me to that topic. Best regards, John

    Read the article

  • splitting code in Java - don't know what's wrong [closed]

    - by ???? ?????
    I'm writing code to split a file into many smaller files of a size specified in the code, and then join the parts later. The problem is with the joining code: it doesn't work and I can't figure out what is wrong! This is my code: import java.io.*; import java.util.*; public class StupidSplit { static final int Chunk_Size = 10; static int size =0; public static void main(String[] args) throws IOException { String file = "b.txt"; int chunks = DivideFile(file); System.out.print((new File(file)).delete()); System.out.print(JoinFile(file, chunks)); } static boolean JoinFile(String fname, int nChunks) { /* * Joins the chunks together. Chunks have been divided using DivideFile * function so the last part of filename will ".partxxxx" Checks if all * parts are together by matching number of chunks found against * "nChunks", then joins the file otherwise throws an error. */ boolean successful = false; File currentDirectory = new File(System.getProperty("user.dir")); // File[] fileList = currentDirectory.listFiles(); /* populate only the files having extension like "partxxxx" */ List<File> lst = new ArrayList<File>(); // Arrays.sort(fileList); for (File file : fileList) { if (file.isFile()) { String fnm = file.getName(); int lastDot = fnm.lastIndexOf('.'); // add to list which match the name given by "fname" and have //"partxxxx" as extension" if (fnm.substring(0, lastDot).equalsIgnoreCase(fname) && (fnm.substring(lastDot + 1)).substring(0, 4).equals("part")) { lst.add(file); } } } /* * sort the list - it will be sorted by extension only because we have * ensured that list only contains those files that have "fname" and * "part" */ File[] files = (File[]) lst.toArray(new File[0]); Arrays.sort(files); System.out.println("size ="+files.length); System.out.println("hello"); /* Ensure that number of chunks match the length of array */ if (files.length == nChunks-1) { File ofile = new File(fname); FileOutputStream fos; FileInputStream fis; byte[] fileBytes; int bytesRead = 0; try { fos = new FileOutputStream(ofile,true); for (File file : files) { fis = new FileInputStream(file); fileBytes = new byte[(int) file.length()]; bytesRead = fis.read(fileBytes, 0, (int) file.length()); assert(bytesRead == fileBytes.length); assert(bytesRead == (int) file.length()); fos.write(fileBytes); fos.flush(); fileBytes = null; fis.close(); fis = null; } fos.close(); fos = null; } catch (FileNotFoundException fnfe) { System.out.println("Could not find file"); successful = false; return successful; } catch (IOException ioe) { System.out.println("Cannot write to disk"); successful = false; return successful; } /* ensure size of file matches the size given by server */ successful = (ofile.length() == StupidSplit.size) ? true : false; } else { successful = false; } return successful; } static int DivideFile(String fname) { File ifile = new File(fname); FileInputStream fis; String newName; FileOutputStream chunk; //int fileSize = (int) ifile.length(); double fileSize = (double) ifile.length(); //int nChunks = 0, read = 0, readLength = Chunk_Size; int nChunks = 0, read = 0, readLength = Chunk_Size; byte[] byteChunk; try { fis = new FileInputStream(ifile); StupidSplit.size = (int)ifile.length(); while (fileSize > 0) { if (fileSize <= Chunk_Size) { readLength = (int) fileSize; } byteChunk = new byte[readLength]; read = fis.read(byteChunk, 0, readLength); fileSize -= read; assert(read==byteChunk.length); nChunks++; //newName = fname + ".part" + Integer.toString(nChunks - 1); newName = String.format("%s.part%09d", fname, nChunks - 1); chunk = new FileOutputStream(new File(newName)); chunk.write(byteChunk); chunk.flush(); chunk.close(); byteChunk = null; chunk = null; } fis.close(); System.out.println(nChunks); // fis = null; } catch (FileNotFoundException fnfe) { System.out.println("Could not find the given file"); System.exit(-1); } catch (IOException ioe) { System.out .println("Error while creating file chunks. Exiting program"); System.exit(-1); }System.out.println(nChunks); return nChunks; } } }
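
    One thing that stands out in the join: DivideFile returns the total number of chunks it wrote, yet JoinFile tests files.length == nChunks-1, so the count check looks off by one; also, the lexicographic sort is only safe because the %09d padding keeps every ".part" extension the same width. For comparison, a minimal join loop under the same naming scheme (a hypothetical sketch, Java 9+ for transferTo):

      import java.io.*;
      import java.util.Arrays;

      // Hypothetical sketch: rejoin zero-padded ".partNNNNNNNNN" chunks in name order.
      class SimpleJoin {
          static void join(String fname, int nChunks) throws IOException {
              File[] parts = new File(".").listFiles(
                      (dir, name) -> name.startsWith(fname + ".part"));
              if (parts == null || parts.length != nChunks)   // note: nChunks, not nChunks - 1
                  throw new IOException("expected " + nChunks + " chunks, found "
                          + (parts == null ? 0 : parts.length));
              Arrays.sort(parts);  // safe only because %09d keeps the names fixed-width
              try (OutputStream out = new BufferedOutputStream(new FileOutputStream(fname))) {
                  for (File part : parts) {
                      try (InputStream in = new FileInputStream(part)) {
                          in.transferTo(out);   // stream each chunk through to the output
                      }
                  }
              }
          }
      }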

    Read the article

  • How can I configure a Hudson job to use a specific JDK?

    - by rewbs
    I have a number of projects running on a Hudson slave. I'd like one of them to run Ant under Java6, rather than the default (which is Java5 in my environment). In the project configuration view, I was hoping to find either: An explicit option allowing me to set a custom JDK location to use for this project. A way to set custom environment variables for this project, which would allow me to set JAVA_HOME to the JDK6 location. That would make Ant pick up and run on Java6 as desired. Is there a way to do either of the above? If one of those facilities is available, I can't see how to access it. I'm running on Hudson 1.285. I would rather avoid using an "execute shell" operation instead of the "invoke Ant" operation if possible: my slave is on z/OS and Hudson doesn't seem to create the temporary shell scripts properly on this platform (probably an encoding issue).

    Read the article

  • Looking for python lib to manage remote tasks

    - by Riz
    Hi, I have a server with Django on it; this server runs some manage.py commands and updates a database. Now I need to move some of these tasks to different servers. I don't want to allow remote db access, and I need some tool/lib to be able to start tasks on remote servers at the main server's command, and to update task code/add new tasks. I have ssh access to every server, all servers run under Debian, and all code is in Python. I was thinking about creating my own XMPP-based solution (the main server sends messages to slave servers with commands to execute, like "update task", "run task"), or maybe some low-level ssh-based solution where the main server logs in to slave servers and executes bash commands. But I would be happy to hear any advice.

    Read the article

  • Cannot access certain URL on my wireless

    - by dehmann
    Problem: On my wireless network at home, there is one URL that I just cannot access with my browser: http://research.microsoft.com/ I have no problems with the Internet connection otherwise. But on that address I just get The connection was reset The connection to the server was reset while the page was loading. from Firefox. I am using a DSL modem (Westell) and Linksys wireless router (using DHCP). When I use my neighbor's wireless connection I can access the microsoft site without a problem. Additional technical details: But with my connection, here is what I get from nslookup. It is weird: It first cannot find the address, but after I look up another address it can find it: $ nslookup research.microsoft.com ;; connection timed out; no servers could be reached $ nslookup google.com Non-authoritative answer: Name: google.com Address: 72.14.204.104 Name: google.com Address: 72.14.204.147 Name: google.com Address: 72.14.204.99 Name: google.com Address: 72.14.204.103 $ nslookup research.microsoft.com Non-authoritative answer: Name: research.microsoft.com Address: 131.107.65.14 But even after nslookup finds it Firefox still cannot access it. Here is what traceroute says: $ traceroute http://research.microsoft.com/ traceroute: Warning: http://research.microsoft.com/ has multiple addresses; using 8.15.7.117 traceroute to http://research.microsoft.com/ (8.15.7.117), 64 hops max, 40 byte packets 1 dslrouter.westell.com (1XX.XXX.X.X) 4.515 ms 2.760 ms 3.072 ms 2 * * * Traceroute just to the IP: $ traceroute 131.107.65.14 traceroute to 131.107.65.14 (131.107.65.14), 64 hops max, 40 byte packets 1 dslrouter.westell.com (1XX.XXX.X.X) 11.912 ms 2.684 ms 2.808 ms 2 * * * Comparison: Traceroute to google.com IP: $ traceroute 72.14.204.99 traceroute to 72.14.204.99 (72.14.204.99), 64 hops max, 40 byte packets 1 dslrouter.westell.com (1XX.XXX.X.X) 6.428 ms 6.981 ms 117.099 ms 2 * * * Any comments / help?

    Read the article

  • Is it the address bus size or the data bus size that determines "8-bit, 16-bit, 32-bit, 64-bit" systems?

    - by learner
    My simple understanding is as follows. Memory (RAM) is composed of bits, groups of 8 of which form bytes, each of which can be addressed, and hence the memory is byte-addressable. The address bus carries the location of a byte of memory. If an address bus is of size 32 bits, that means it can hold up to 2^32 numbers, and it hence can refer to up to 2^32 bytes of memory = 4 GB of memory, and any memory greater than that is useless. The data bus is used to send the value to be written to/read off the memory. If I have a data bus of size 32 bits, it means a maximum of 4 bytes can be written to/read off the memory at a time. I find no relation between this size and the maximum memory size possible. But I read here that: Even though most systems are byte-addressable, it makes sense for the processor to move as much data around as possible. This is done by the data bus, and the size of the data bus is where the names 8-bit system, 16-bit system, 32-bit system, 64-bit system, etc.. come from. When the data bus is 8 bits wide, it can transfer 8 bits in a single memory operation. When the data bus is 32 bits wide (as is most common at the time of writing), at most, 32 bits can be moved in a single memory operation. This says that the size of the data bus is what gives an OS the name, 8-bit, 16-bit and so on. What is wrong with my understanding?
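
    To make the distinction concrete, here is the asker's arithmetic spelled out in a few lines of Java (illustrative only):

      // Worked example: a 32-bit address bus names 2^32 distinct byte addresses.
      public class BusMath {
          public static void main(String[] args) {
              long addressable = 1L << 32;               // 2^32 = 4,294,967,296
              System.out.println(addressable + " bytes = " + (addressable >> 30) + " GiB");
              // A 32-bit data bus moves 32 bits = 4 bytes per memory operation,
              // regardless of how much memory the address bus can reach.
          }
      }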

    Read the article

  • directory services group query changing randomly

    - by yamspog
    I am seeing unusual behaviour in my ASP.NET application. I have code that uses Directory Services to find the AD groups for a given, authenticated user. The code goes something like ... string username = "user"; string domain = "LDAP://DC=domain,DC=com"; DirectorySearcher search = new DirectorySearcher(domain); search.Filter = "(SAMAccountName=" + username + ")"; And then I query and get the list of groups for the given user. The problem is that the code was receiving the list of groups as a list of strings. With our latest release of the software, we are starting to receive the list of groups as a byte[]. The system will return string, suddenly return byte[], and then with a reboot it returns string again. Anyone have any ideas? Code sample: DirectoryEntry dirEntry = new DirectoryEntry("LDAP://" + ldapSearchBase); DirectorySearcher userSearcher = new DirectorySearcher(dirEntry) { SearchScope = SearchScope.Subtree, CacheResults = false, Filter = ("(" + txtLdapSearchNameFilter.Text + "=" + userName + ")") }; userResult = userSearcher.FindOne(); ResultPropertyValueCollection valCol = userResult.Properties["memberOf"]; foreach (object val in valCol) { if (val is string) { distName = val.ToString(); } else { distName = enc.GetString((Byte[])val); } }

    Read the article

  • Are flash drives and hard drives thought of as "an ocean of bytes"?

    - by Jian Lin
    Why can a USB flash drive be formatted as NTFS or FAT32? Are a USB flash drive and a hard drive just to be thought of as "an ocean of bytes"? I am very used to hearing about formatting a hard drive as FAT32 or NTFS, but we can also format a USB flash drive as NTFS or FAT32? Is it because a hard drive and a flash drive can both be thought of as "an ocean of bits" or "an ocean of bytes"? I remember RAM as: it takes a 16-bit or 32-bit address signal (the 16 or 32 copper footings on the circuit board), and gives out 8 bits of data (the other 8 copper footings on the circuit board). So can a hard drive be thought of as working that way too? And is that why a flash drive can be the same? Just an "ocean of bytes". But is it true that the hard drive's hardware makes it an ocean of sectors or something else, that is, that the smallest unit of read/write is not a byte but something else? So with this "ocean of bytes", NTFS has the format that says, "if the first byte is __, then it means __ (it is a file or folder, and links to which sector, indicated by bytes 2 and 3, etc, etc)"

    Read the article

  • Partition is missing in /dev

    - by haimg
    I'm having a strange problem since I moved from Centos5 to Centos6. I have three disks; the first two are used as a RAID1, and the third is a stand-alone backup disk that is not listed in /etc/fstab (it is mounted when needed and then unmounted). My problem: After a boot, /dev/sdc exists but /dev/sdc1 does not. The links in /dev/disks are also absent for the first partition of sdc. The disk itself is fine, and if I hot-remove it and plug it back in, /dev/sdc1 appears OK and everything works. My question: What subsystem manages auto-discovery of disks, partitions, etc. during the boot process (e.g. what creates /dev/disks/by-label)? How do I configure it to scan /dev/sdc too and create all relevant files and links in /dev? Edit: Here's the relevant part of the dmesg output (the only place sdc appears). It does list sdc1, but it's not in /dev! sd 1:0:0:0: [sdb] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB) sd 3:0:0:0: [sdc] 976773168 512-byte logical blocks: (500 GB/465 GiB) sd 1:0:0:0: [sdb] Write Protect is off sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA sd 3:0:0:0: [sdc] Write Protect is off sd 3:0:0:0: [sdc] Mode Sense: 00 3a 00 00 sd 3:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA sdb: sdc: sd 0:0:0:0: [sda] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB) sd 0:0:0:0: [sda] Write Protect is off sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA sda: DMAR:[DMA Read] Request device [00:1e.0] fault addr 361bc000 DMAR:[fault reason 06] PTE Read access is not set sdb1 sdb2 sdb3 sdc1 sda1 sd 1:0:0:0: [sdb] Attached SCSI disk sd 3:0:0:0: [sdc] Attached SCSI disk sda2 sda3 sd 0:0:0:0: [sda] Attached SCSI disk

    Read the article

  • What may be wrong with String::ToIdentifier::EN tests?

    - by wk01
    I am trying to install the Perl module String::ToIdentifier::EN (as a dependency of DBIx::Class::Schema::Loader) but it fails its tests. I googled those errors but got no picture of where the problem is: Building and testing String-ToIdentifier-EN-0.07 cp lib/String/ToIdentifier/EN.pm blib/lib/String/ToIdentifier/EN.pm cp lib/String/ToIdentifier/EN/Unicode.pm blib/lib/String/ToIdentifier/EN/Unicode.pm Manifying blib/man3/String::ToIdentifier::EN.3pm Manifying blib/man3/String::ToIdentifier::EN::Unicode.3pm PERL_DL_NONLAZY=1 /usr/bin/perl "-MExtUtils::Command::MM" "-e" "test_harness(0, 'inc', 'blib/lib', 'blib/arch')" t/00_basic.t t/10_ascii.t t/20_capitalization.t Byte order is not compatible at ../../lib/Storable.pm (autosplit into ../../lib/auto/Storable/_retrieve.al) line 380, at /home/wanradt/perl5/lib/perl5/Lingua/EN/Tagger.pm line 167 # Looks like you planned 25 tests but ran 4. # Looks like your test exited with 25 just after 4. t/00_basic.t ........... Dubious, test returned 25 (wstat 6400, 0x1900) Failed 21/25 subtests Byte order is not compatible at ../../lib/Storable.pm (autosplit into ../../lib/auto/Storable/_retrieve.al) line 380, at /home/wanradt/perl5/lib/perl5/Lingua/EN/Tagger.pm line 167 # Looks like you planned 768 tests but ran 512. # Looks like your test exited with 25 just after 512. t/10_ascii.t ........... Dubious, test returned 25 (wstat 6400, 0x1900) Failed 256/768 subtests t/20_capitalization.t .. ok Test Summary Report ------------------- t/00_basic.t (Wstat: 6400 Tests: 4 Failed: 0) Non-zero exit status: 25 Parse errors: Bad plan. You planned 25 tests but ran 4. t/10_ascii.t (Wstat: 6400 Tests: 512 Failed: 0) Non-zero exit status: 25 Parse errors: Bad plan. You planned 768 tests but ran 512. Files=3, Tests=528, 1 wallclock secs ( 0.07 usr 0.02 sys + 0.42 cusr 0.04 csys = 0.55 CPU) Result: FAIL Failed 2/3 test programs. 0/528 subtests failed. make: *** [test_dynamic] Error 255 -> FAIL Installing String::ToIdentifier::EN failed. See /home/wanradt/.cpanm/build.log for details. "Byte order is not compatible at..." seems to be the key, but pointing to where?

    Read the article

  • How to use XDMCP+GDM and Xnest?

    - by João Pinto
    I have been trying to enable XDMCP on GDM without much success. Following some instructions I have edited /etc/gdm/custom.conf and added: [daemon] RemoteGreeter=/usr/lib/gdm/gdm-xdmcp-chooser-slave [xdmcp] Enable=true Then I restarted gdm and tried to connect both locally and from a remote system with: Xnest :1 -query localhost Xnest :1 -query remote_system_hostname I just get a black screen instead of the expected GDM window. Am I missing something?

    Read the article

  • Setting MTU on Exalogic

    - by csoto
    For many reasons, a system administrator may want to change the MTU settings of a server. But in a system like Exalogic, which contains lots of interconnected nodes and various other components, it's important to understand how this applies to the different networks. For example, when bringing up bonding of InfiniBand, an error like the following may be thrown: Bringing up interface bond1: SIOCSIFMTU: Invalid argument Both scripts ifcfg-ib0 and ifcfg-ib1 (from the /etc/sysconfig/network-scripts/ directory) have MTU set to 65500, which is a valid MTU value only if all IPoIB slaves operate in connected mode and are configured with the same value, so the line below must be added to both network scripts and the network then restarted: CONNECTED_MODE=yes By the way, an error of the form "SIOCSIFMTU: Invalid argument" indicates that the requested MTU was rejected by the kernel. Typically this would be due to it exceeding the maximum value supported by the interface hardware. In that case you must either reduce the MTU to a value that is supported or obtain more capable hardware. This problem has been seen when trying to modify the MTU using the ifconfig command, like the output of the example below: [root@elxxcnxx ~]# ifconfig ib1 mtu 65520 SIOCSIFMTU: Invalid argument It's important to stress that in most cases the nodes must be rebooted after the MTU size has been changed. Although in some circumstances it may work without a reboot, that is not how it is typically documented. Now, in order to achieve reduced memory consumption and improved performance for network traffic received on IPoIB related interfaces, it is recommended to reduce the MTU value in interface configuration files for IPoIB related bonds from 65520 to 64000. The change needs to be made to interface configuration files under the /etc/sysconfig/network-scripts directory and applies to the interface configuration files for bonds over IPoIB related slave devices, for example /etc/sysconfig/network-scripts/ifcfg-bond1. However, keep in mind that the numeric portion of the interface filenames corresponding to IPoIB interfaces is expected to vary across compute nodes and vServers, and so cannot be relied upon to identify which interface files are for bonds over IPoIB rather than EoIB related slave interfaces. To fix these MTU values to the recommended settings, there are very useful instructions and a script in MOS Note 1624434.1, applicable to both physical and virtual configurations of Exalogic. Regarding the recommended MTU value for EoIB related interfaces, its maximum appropriate value is 1500. If for some reason a vServer has been created with a higher value (set in the /etc/sysconfig/network-scripts/ifcfg-bond0 file), then it must be fixed. An error like the following could be thrown under this circumstance: [root@vServer ~]# service network restart ... Bringing up interface bond0:  SIOCSIFMTU: Invalid argument Also an error like the one below can be seen in the /var/log/messages file of the vServer: kernel: T5074835532 [mlx4_vnic] eth1:vnic_change_mtu:360: failed: new_mtu 64000 2026 MOS Note 1611657.1 is very useful for this purpose.
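
    As an illustration only, an IPoIB slave script with the settings discussed above might end up looking like this (device names, the bond master, and every value here are placeholders that vary per node, not a verbatim Exalogic file):

      # Sketch of /etc/sysconfig/network-scripts/ifcfg-ib0 (values illustrative)
      DEVICE=ib0
      ONBOOT=yes
      BOOTPROTO=none
      MASTER=bond1
      SLAVE=yes
      CONNECTED_MODE=yes   # required before an MTU this large is accepted
      MTU=64000            # recommended value, reduced from 65520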

    Read the article

  • Botnet Malware Sleeps Eight Months Before Activation, Child Concerns

    Daily Safety Check experts used a computer forensic analysis of a significant botnet that consisted of Carberp and SpyEye malware to come up with the details for their report. The analysis found that the botnet profiled the behavior of the slave computers it infected, similar to surveillance techniques used by law enforcement agencies, for an average of eight months. During the eight months, the botnet analyzed each computer's users and assigned ratings to certain activities to form a complete profile for each. Doing so allowed those behind the scheme to determine which were the most favora...

    Read the article

  • Is a 1:* write:read thread system safe?

    - by Di-0xide
    Theoretically, thread-safe code should prevent race conditions. Race conditions, as I understand them, occur because two threads attempt to write to the same location at the same time. However, what about a threading model in which a single thread is designed to write to a location, and several slave/worker threads simply read from it? Assuming the value/timing at which they read the data isn't relevant/doesn't hinder the worker threads' outcome, wouldn't this be considered 'thread safe', or am I missing something in my logic?
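
    The gap in the logic is visibility rather than corruption: without a memory barrier, nothing guarantees the readers ever see the writer's latest value, or even a fully constructed one. In Java the usual idiom is to publish through a volatile field or an AtomicReference, as in this minimal sketch (hypothetical names):

      import java.util.concurrent.atomic.AtomicReference;

      // Hypothetical sketch: one writer publishes snapshots; many readers poll.
      class SingleWriterBoard {
          private final AtomicReference<String> latest = new AtomicReference<>("init");

          void publish(String value) {   // called only by the single writer thread
              latest.set(value);         // creates a happens-before edge for readers
          }

          String peek() {                // safe from any number of reader threads
              return latest.get();       // may be slightly stale, but never torn
          }
      }

    With that barrier in place, the 1-writer/N-readers model the question describes is indeed safe; without it, the readers' staleness is unbounded, not merely "slightly behind".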

    Read the article

  • C# how to get current encoding type used by C# to write/read configuration for config file?

    - by 5YrsLaterDBA
    I am doing connection string encryption. We use our own encryption key with the AES algorithm to do this. During the process, we need to convert a string to a byte array and then convert the byte array back to a string. I found that the encoding plays an important role in those conversions. So I need to know the encoding C# is using to get the above conversions right. Any idea how to get the current encoding programmatically? Thanks,

    Read the article

  • How to use SQL file streaming win32 API and support WCF streaming

    - by Mahesh
    I'm using the SQL Server filestream type to store large files in the backend. I'm trying to use WCF to stream the file across to the clients. I'm able to get the handle to the file using the SqlFileStream API. I then try to return this stream. I have implemented data chunking on the client side to retrieve the data from the stream. I'm able to do it for a regular FileStream and a MemoryStream. Also, if I convert the SqlFileStream into a MemoryStream, that also works. The only thing that doesn't work is when I try to return the SqlFileStream. What am I doing wrong? I have tried both a nettcp binding with streaming enabled and an http binding with MTOM encoding. This is the error message I am getting: Socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network issue... Local socket timeout was 00:09:59... Here is my sample code: RemoteFileInfo info = new RemoteFileInfo(); info.FileName = "SampleXMLFileService.xml"; string pathName = DataAccess.GetDataSnapshotPath("DataSnapshot1"); SqlConnection connection = DataAccess.GetConnection(); SqlTransaction sqlTransaction = connection.BeginTransaction("SQLSileStreamingTrans"); SqlCommand command = new SqlCommand(); command.Connection = connection; command.Transaction = sqlTransaction; command.CommandText = "SELECT GET_FILESTREAM_TRANSACTION_CONTEXT()"; byte[] transcationContext = command.ExecuteScalar() as byte[]; SqlFileStream stream = new SqlFileStream(pathName, transcationContext, FileAccess.Read); // byte[] bytes = new byte[stream.Length]; // stream.Read(bytes, 0, (int) stream.Length); // Stream reeturnStream = stream; // MemoryStream memoryStream = new MemoryStream(bytes); info.FileByteStream = stream; info.Length = info.FileByteStream.Length; connection.Close(); return info; [MessageContract] public class RemoteFileInfo : IDisposable { [MessageHeader(MustUnderstand = true)] public string FileName; [MessageHeader(MustUnderstand = true)] public long Length; [MessageBodyMember(Order = 1)] public System.IO.Stream FileByteStream; public void Dispose() { if (FileByteStream != null) { FileByteStream.Close(); FileByteStream = null; } } } Any help is appreciated.

    Read the article

  • C# serial port driver wrapper class code and concept quality

    - by Ruben Trancoso
    Hi folks, I would like to know what you all think of my serial wrapper class. I've been working with serial ports for a while but never shared the code, which has kept it somewhat locked into my own point of view. I would like to know if it's a good or bad approach, whether the interface is sufficient, and what else you see in it. I know that Stack Overflow is for questions, but at the same time there are a lot of very skilled people here, and sharing code and opinions can benefit everybody; that's why I decided to post it anyway. Thanks! using System.Text; using System.IO; using System.IO.Ports; using System; namespace Driver { class SerialSingleton { // The singleton instance reference private static SerialSingleton instance = null; // System's serial port interface private SerialPort serial; // Current com port identifier private string comPort = null; // Configuration parameters private int confBaudRate; private int confDataBits; private StopBits confStopBits; private Parity confParityControl; ASCIIEncoding encoding = new ASCIIEncoding(); // ================================================================================== // Constructors public static SerialSingleton getInstance() { if (instance == null) { instance = new SerialSingleton(); } return instance; } private SerialSingleton() { serial = new SerialPort(); } // =================================================================================== // Setup Methods public string ComPort { get { return comPort; } set { if (value == null) { throw new SerialException("Serial port name canot be null."); } if (nameIsComm(value)) { close(); comPort = value; } else { throw new SerialException("Serial Port '" + value + "' is not a valid com port."); } } } public void setSerial(string baudRate, int dataBits, StopBits stopBits, Parity parityControl) { if (baudRate == null) { throw new SerialException("Baud rate cannot be null"); } string[] baudRateRef = { "300", "600", "1200", "1800", "2400", "3600", "4800", "7200", "9600", "14400", "19200", "28800", "38400", "57600", "115200" }; int confBaudRate; if (findString(baudRateRef, baudRate) != -1) { confBaudRate = System.Convert.ToInt32(baudRate); } else { throw new SerialException("Baurate parameter invalid."); } int confDataBits; switch (dataBits) { case 5: confDataBits = 5; break; case 6: confDataBits = 6; break; case 7: confDataBits = 7; break; case 8: confDataBits = 8; break; default: throw new SerialException("Databits parameter invalid"); } if (stopBits == StopBits.None) { throw new SerialException("StopBits parameter cannot be NONE"); } this.confBaudRate = confBaudRate; this.confDataBits = confDataBits; this.confStopBits = stopBits; this.confParityControl = parityControl; } // ================================================================================== public string[] PortList { get { return SerialPort.GetPortNames(); } } public int PortCount { get { return SerialPort.GetPortNames().Length; } } // ================================================================================== // Open/Close Methods public void open() { open(comPort); } private void open(string comPort) { if (isOpen()) { throw new SerialException("Serial Port is Already open"); } else { if (comPort == null) { throw new SerialException("Serial Port not defined. Cannot open"); } bool found = false; if (nameIsComm(comPort)) { string portId; string[] portList = SerialPort.GetPortNames(); for (int i = 0; i < portList.Length; i++) { portId = (portList[i]); if (portId.Equals(comPort)) { found = true; break; } } } else { throw new SerialException("The com port identifier '" + comPort + "' is not a valid serial port identifier"); } if (!found) { throw new SerialException("Serial port '" + comPort + "' not found"); } serial.PortName = comPort; try { serial.Open(); } catch (UnauthorizedAccessException uaex) { throw new SerialException("Cannot open a serial port in use by another application", uaex); } try { serial.BaudRate = confBaudRate; serial.DataBits = confDataBits; serial.Parity = confParityControl; serial.StopBits = confStopBits; } catch (Exception e) { throw new SerialException("Serial port parameter invalid for '" + comPort + "'.\n" + e.Message, e); } } } public void close() { if (serial.IsOpen) { serial.Close(); } } // =================================================================================== // Auxiliary private Methods private int findString(string[] set, string search) { if (set != null) { for (int i = 0; i < set.Length; i++) { if (set[i].Equals(search)) { return i; } } } return -1; } private bool nameIsComm(string name) { int comNumber; int.TryParse(name.Substring(3), out comNumber); if (name.Substring(0, 3).Equals("COM")) { if (comNumber > -1 && comNumber < 256) { return true; } } return false; } // ================================================================================= // Device state Methods public bool isOpen() { return serial.IsOpen; } public bool hasData() { int amount = serial.BytesToRead; if (amount > 0) { return true; } else { return false; } } // ================================================================================== // Input Methods public char getChar() { int data = serial.ReadByte(); return (char)data; } public int getBytes(ref byte[] b) { int size = b.Length; char c; int counter = 0; for (counter = 0; counter < size; counter++) { if (tryGetChar(out c)) { b[counter] = (byte)c; } else { break; } } return counter; } public string getStringUntil(char x) { char c; string response = ""; while (tryGetChar(out c)) { response = response + c; if (c == x) { break; } } return response; } public bool tryGetChar(out char c) { c = (char)0x00; byte[] b = new byte[1]; long to = 10; long ft = System.Environment.TickCount + to; while (System.Environment.TickCount < ft) { if (hasData()) { int data = serial.ReadByte(); c = (char)data; return true; } } return false; } // ================================================================================ // Output Methods public void sendString(string data) { byte[] bytes = encoding.GetBytes(data); serial.Write(bytes, 0, bytes.Length); } public void sendChar(char c) { char[] data = new char[1]; data[0] = c; serial.Write(data, 0, 1); } public void sendBytes(byte[] data) { serial.Write(data, 0, data.Length); } public void clearBuffer() { if (serial.IsOpen) { serial.DiscardInBuffer(); serial.DiscardOutBuffer(); } } } }

    Read the article

  • [Java] RSA BadPaddingException: data must start with zero

    - by Robin Monjo
    Hello everyone. I am trying to implement an RSA algorithm in a Java program. I am facing the "BadPaddingException: data must start with zero". Here are the methods used to encrypt and decrypt my data: public byte[] encrypt(byte[] input) throws Exception { Cipher cipher = Cipher.getInstance("RSA/ECB/PKCS1Padding");// cipher.init(Cipher.ENCRYPT_MODE, this.publicKey); return cipher.doFinal(input); } public byte[] decrypt(byte[] input) throws Exception { Cipher cipher = Cipher.getInstance("RSA/ECB/PKCS1Padding");/// cipher.init(Cipher.DECRYPT_MODE, this.privateKey); return cipher.doFinal(input); } privateKey and publicKey attributes are read from files this way: public PrivateKey readPrivKeyFromFile(String keyFileName) throws IOException { PrivateKey key = null; try { FileInputStream fin = new FileInputStream(keyFileName); ObjectInputStream ois = new ObjectInputStream(fin); BigInteger m = (BigInteger) ois.readObject(); BigInteger e = (BigInteger) ois.readObject(); RSAPrivateKeySpec keySpec = new RSAPrivateKeySpec(m, e); KeyFactory fact = KeyFactory.getInstance("RSA"); key = fact.generatePrivate(keySpec); ois.close(); } catch (Exception e) { e.printStackTrace(); } return key; } The private key and public key are created this way: public void Initialize() throws Exception { KeyPairGenerator keygen = KeyPairGenerator.getInstance("RSA"); keygen.initialize(2048); keyPair = keygen.generateKeyPair(); KeyFactory fact = KeyFactory.getInstance("RSA"); RSAPublicKeySpec pub = fact.getKeySpec(keyPair.getPublic(), RSAPublicKeySpec.class); RSAPrivateKeySpec priv = fact.getKeySpec(keyPair.getPrivate(), RSAPrivateKeySpec.class); saveToFile("public.key", pub.getModulus(), pub.getPublicExponent()); saveToFile("private.key", priv.getModulus(), priv.getPrivateExponent()); } and then saved in files: public void saveToFile(String fileName, BigInteger mod, BigInteger exp) throws IOException { FileOutputStream f = new FileOutputStream(fileName); ObjectOutputStream oos = new ObjectOutputStream(f); oos.writeObject(mod); oos.writeObject(exp); oos.close(); } I can't figure out where the problem comes from. Any help would be appreciated! Thanks in advance.
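
    In practice this exception almost always means the ciphertext is being decrypted with a private key that does not match the public key used to encrypt, for example because Initialize() generated a fresh pair while stale key files were loaded, or because the ciphertext bytes passed through a String on the way. A hedged way to isolate the cause is a round trip that skips the files entirely; if this runs clean, the cipher code is fine and the key storage is the suspect:

      import javax.crypto.Cipher;
      import java.security.KeyPair;
      import java.security.KeyPairGenerator;
      import java.util.Arrays;

      // Self-test sketch: same algorithm and padding, but no files involved.
      public class RsaRoundTrip {
          public static void main(String[] args) throws Exception {
              KeyPairGenerator keygen = KeyPairGenerator.getInstance("RSA");
              keygen.initialize(2048);
              KeyPair pair = keygen.generateKeyPair();

              Cipher enc = Cipher.getInstance("RSA/ECB/PKCS1Padding");
              enc.init(Cipher.ENCRYPT_MODE, pair.getPublic());
              byte[] ct = enc.doFinal("hello".getBytes("UTF-8"));

              Cipher dec = Cipher.getInstance("RSA/ECB/PKCS1Padding");
              dec.init(Cipher.DECRYPT_MODE, pair.getPrivate());
              byte[] pt = dec.doFinal(ct);   // a BadPaddingException here would mean key mismatch

              System.out.println(Arrays.equals(pt, "hello".getBytes("UTF-8")));  // expect: true
          }
      }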

    Read the article

  • c# write big files to blob sqlite

    - by brizjin-gmail-com
    I have a C# application which writes files to an SQLite database. It uses Entity Framework to model the data. It writes a file to a blob (an entity byte[] variable) with this line: row.file = System.IO.File.ReadAllBytes(file_to_load.FileName); //row.file is type byte[] //row is entity class table All works correctly when the file size is small. When the size is more than 300 MB the app throws an exception: Exception of type 'System.OutOfMemoryException' was thrown. How can I write to the blob directly, without in-memory variables?

    Read the article

  • An existing connection was forcibly closed by the remote host

    - by George
    I have a fat VB.NET Winform client that is using an old asmx-style web service. Very often, when I perform a query that takes a while, I get the subject error. The error seems to occur in < 1 min, which is far less than the web service timeout value that I have set or the timeout value on the ADO Command object that is performing the query within the web server. It seems to occur whenever I am performing a large query that expects to return a lot of rows or when I am sending up a large amount of data to the web service. For example, it just occurred when I was passing a large dataset to the web server: System.Net.WebException: The underlying connection was closed: An unexpected error occurred on a receive. ---> System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. ---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags) at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) --- End of inner exception stack trace --- at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) at System.Net.PooledStream.Read(Byte[] buffer, Int32 offset, Int32 size) at System.Net.Connection.SyncRead(HttpWebRequest request, Boolean userRetrievedStream, Boolean probeRead) --- End of inner exception stack trace --- at System.Web.Services.Protocols.WebClientProtocol.GetWebResponse(WebRequest request) at System.Web.Services.Protocols.HttpWebClientProtocol.GetWebResponse(WebRequest request) at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) at Smit.Pipeline.Bo.localhost.WsSR.SaveOptions(String emailId, DataSet dsNeighborhood, DataSet dsOption, DataSet dsTaskApplications, DataSet dsCcUsers, DataSet dsDistinctUsers, DataSet dsReferencedApplications) in C:\My\Code\Pipeline2\Smit.Pipeline.Bo\Web References\localhost\Reference.vb:line 944 at Smit.Pipeline.Bo.Options.Save(TaskApplications updatedTaskApplications) in I've been looking at tons of postings on this error, and it is surprising how varied the circumstances that cause it are. I've tried messing with Wireshark, but I am clueless how to use it. This application only has about 20 users at any one time, and I am able to reproduce this error in the middle of the night when probably no one is using the app, so I don't think that the number of requests to the web server or to the database is high. There was probably only one request in flight just now when I got the error. It seems to have everything to do with the amount of data being passed in either direction. This error is really chronic and killing me. Please help.

    Read the article
