Search Results

Search found 3953 results on 159 pages for 'byte slave'.

Page 44/159 | < Previous Page | 40 41 42 43 44 45 46 47 48 49 50 51  | Next Page >

  • mysql cluster virtual ip

    - by user225995
    I am new to MySQL Cluster, and the cluster and versions are not my choice. I set up four machines: two of them managers, two of them data nodes (ndb and mysqld). I also integrated a master/slave configuration with MySQL Utilities. Everything is working fine. MySQL version 5.6.17, NDB 7.3.5, servers Ubuntu 14.04. There will not be many transactions; the only important thing is HA, so everything must be doubled. My problem is the virtual IP. Since I have only one farm with a master/slave configuration, how can I do it without a proxy? If I must use a proxy, which proxy is better?

    Read the article

  • Bluetooth mouse no longer paired after resuming from suspend since upgrading to 13.10

    - by Korakys
    Since upgrading to 13.10 from 13.04 my mouse no longer connects via bluetooth. In Settings it states that the mouse is not paired. Restarting bluetooth with sudo /etc/init.d/bluetooth restart does not help. Restarting the computer does fix the problem if bluetooth is also restarted with the previously mentioned command, but this is not ideal. The mouse worked fine prior to updating to 13.10. The computer is a ThinkPad X230 with a Broadcom BCM20702A0 bluetooth module (I think). When it is not working, hciconfig hci0 -a returns:

        hci0:  Type: BR/EDR  Bus: USB
               BD Address: C0:18:85:DB:F3:D1  ACL MTU: 1021:8  SCO MTU: 64:1
               UP RUNNING PSCAN
               RX bytes:766129 acl:49888 sco:0 events:2233 errors:0
               TX bytes:5953 acl:240 sco:0 commands:274 errors:0
               Features: 0xbf 0xfe 0xcf 0xfe 0xdb 0xff 0x7b 0x87
               Packet type: DM1 DM3 DM5 DH1 DH3 DH5 HV1 HV2 HV3
               Link policy: RSWITCH SNIFF
               Link mode: SLAVE ACCEPT
               Name: 'BCM20702A'
               Class: 0x6e0100
               Service Classes: Networking, Rendering, Capturing, Audio, Telephony
               Device Class: Computer, Uncategorized
               HCI Version: 4.0 (0x6)  Revision: 0x1000
               LMP Version: 4.0 (0x6)  Subversion: 0x220e
               Manufacturer: Broadcom Corporation (15)

    When it is working, hciconfig hci0 -a returns the same output except for the traffic counters and the device name:

        RX bytes:253334 acl:16391 sco:0 events:842 errors:0
        TX bytes:2519 acl:65 sco:0 commands:84 errors:0
        Name: 'ubuntu-0'

    I am a relative novice with Linux, so please don't ask me to compile anything, but I can use Google.

    Read the article

  • Creating ip alias on bonded interface ie. bond0:1

    - by bobothechimp
    System: HP ProLiant DL360 G5 running CentOS 5.4. The bonded interface has been working fine for a long time. I just went to add an alias the way I always have on a regular interface, and on first check it works (pinging on the local box), but it is not accessible from outside (iptables is turned off). In addition, with this setup the normal network response started to decline, hanging for around a minute before I could get a prompt on login. Here are my config files:

        [root network-scripts]# cat ifcfg-eth0
        DEVICE=eth0
        BOOTPROTO=none
        ONBOOT=yes
        MASTER=bond0
        SLAVE=yes
        USERCTL=no

        [root network-scripts]# cat ifcfg-eth1
        DEVICE=eth1
        BOOTPROTO=none
        ONBOOT=yes
        MASTER=bond0
        SLAVE=yes
        USERCTL=no

        [root network-scripts]# cat ifcfg-bond0
        DEVICE=bond0
        BONDING_OPTS="mode=1 miimon=100"
        BOOTPROTO=none
        ONBOOT=yes
        NETWORK=10.2.1.0
        NETMASK=255.255.255.0
        IPADDR=10.2.1.11
        USERCTL=no

        [root network-scripts]# cat ifcfg-bond0:1
        DEVICE=bond0:1
        BOOTPROTO=static
        ONBOOT=yes
        NETWORK=10.2.1.0
        NETMASK=255.255.255.0
        IPADDR=10.2.1.12
        USERCTL=no

    Any thoughts?

    Read the article

  • Which hardware to VM ratio for Build-Server virtualization?

    - by Martin
    Let's start by saying that I'm a total noob w.r.t. server virtualization. That is, I use VMs often during development, but they're simple desktop-machine things for me.

    Now to my problem: we have two (physical) build servers, one master, one slave, running Jenkins to do daily tasks and build (Visual C++ builds) the release packages for our software. As such these machines are critical to our company, because we do lots of releases and without a controlled environment to create them, we can't ship fixes. (And currently there's no proper backup of these machines in place, because they do not hold any data as such; it just would be a major pain to set them up again should they go bust. But setting up a backup that I'd know would work in case of HW failure would be even more pain, so we have skipped that until now.) Therefore (and for scaling purposes) we would like to go virtual with these machines. Outsourcing to the cloud is not an option, not at all, so we'll have to use on-premises hardware and VM hosts. Each build server (master or slave) is a fully configured (installs, licenses, shares in the case of the master, ...) Windows Server box. I would now ideally like to just convert the two existing physical nodes to VM images and run them, and later add more VM slave instances as clones of the existing ones. And here begin my questions:

    Should I go for one VM per hardware box, or for a single machine running multiple VMs? The latter would mean a single point of failure hardware-wise, which doesn't seem like a good idea... or does it?

    Since we're doing C++ compilation with Visual Studio, I assume that during a build the hardware (processor cores + disk) will be fully utilized, so going with more than one build node per machine doesn't seem to make much sense?

    With regard to hardware options, does it make any difference which VM software we use (VMware, MS, VirtualBox, ...)? (We're using Windows exclusively for our builds.)

    Regarding budget: we have a normal small-company (20 developers) budget for this. ;-) That is, if it's going to cost a few k$, it's going to cost. If it's free, all the better. I strongly prefer solutions without multi-k$ maintenance costs per year.

    Read the article

  • How do I capture and playback http web requests against multiple web servers?

    - by KevM
    My overall goal is to capture the HTTP POSTs going to a web application, without interrupting a production system, so that I can reverse engineer the telemetry coming from a closed application. I have control over the transmitter of the HTTP POSTs but not over the receiving web application. It seems like I need a request-"forking" proxy: sort of a reverse proxy that pushes each request to two endpoints, a master and a slave, relaying only the response from the master endpoint back to the requester. I am not a server geek, so something like this may exist but I don't know the term of art for what I am looking for. Another possibility could be a simple logging proxy: capture a log of the web requests, rewrite the log to target my "slave" web application, and play the log back with curl or something. Thank you for your assistance.
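    I don't know of a packaged "forking" proxy to name, but as a hedged illustration of the idea, here is a minimal sketch in C# (the endpoint URLs are placeholders): an HttpListener accepts each POST, replays the body to both endpoints, and hands only the master's response back to the caller. Headers such as Content-Type are not copied, and error handling is omitted.

        using System;
        using System.IO;
        using System.Net;
        using System.Net.Http;

        class ForkingProxy
        {
            const string Master = "http://master.example.com"; // placeholder
            const string Slave = "http://slave.example.com";   // placeholder

            static void Main()
            {
                var listener = new HttpListener();
                listener.Prefixes.Add("http://localhost:8080/");
                listener.Start();
                var client = new HttpClient();
                while (true)
                {
                    HttpListenerContext ctx = listener.GetContext();
                    byte[] body;
                    using (var ms = new MemoryStream())
                    {
                        ctx.Request.InputStream.CopyTo(ms);
                        body = ms.ToArray();
                    }
                    // Fire-and-forget copy to the slave; a failure there must not
                    // disturb the production round trip.
                    client.SendAsync(new HttpRequestMessage(HttpMethod.Post, Slave + ctx.Request.RawUrl)
                        { Content = new ByteArrayContent(body) });
                    // Synchronous round trip to the master; its response is relayed.
                    var resp = client.SendAsync(new HttpRequestMessage(HttpMethod.Post, Master + ctx.Request.RawUrl)
                        { Content = new ByteArrayContent(body) }).Result;
                    ctx.Response.StatusCode = (int)resp.StatusCode;
                    resp.Content.ReadAsStreamAsync().Result.CopyTo(ctx.Response.OutputStream);
                    ctx.Response.Close();
                }
            }
        }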

    Read the article

  • How to divide work to a network of computers?

    - by Morpork
    Imagine a scenario as follows: let's say you have a central computer which generates a lot of data. This data must go through some processing, which unfortunately takes longer than generating it does. In order for the processing to catch up with real time, we plug in more slave computers. Further, we must take into account the possibility of slaves dropping out of the network mid-job, as well as additional slaves being added. The central computer should ensure that all jobs are finished to its satisfaction, and that jobs dropped by a slave are retasked to another.

    The main question is: what approach should I use to achieve this? Perhaps the following would help me arrive at an answer:

    Is there a name or design pattern for what I am trying to do?

    What domain of knowledge do I need to get these computers talking to each other? (E.g., will a database, which I have some knowledge of, be enough, or will this involve sockets, which I have yet to learn?)

    Are there any examples of such a system?

    The main question is a bit general, so it would be good to have a starting point/reference point. Note I am assuming the constraints of C++ and Windows, so solutions pointing in that direction would be appreciated.

    Read the article

  • Running Ubuntu off a USB drive?

    - by Solignis
    I was wondering if a USB 2.0 thumb drive has enough bandwidth to act as the primary system drive in an Ubuntu Linux server, more specifically a SAN server. I am running an iSCSI target, ZFS and nfs-kernel-server, BIND9 (slave), and OpenLDAP (slave). I was thinking of resorting to a thumb drive because my new motherboard only has 4 SATA ports and I have 5 disks: 4 for the ZFS pool and 1 for the system. And unless I get an expansion card, there is no way to get more SATA ports. This "server" leans more towards a home server; I use it in my lab with my VMware server. It provides storage, or at least it did until it died. Would it still be better to go with the SATA hard disk?

    Read the article

  • Cocoa equivalent of the Carbon method getPtrSize

    - by Michael Minerva
    I need to translate a Carbon method into Cocoa, and I am having trouble finding any documentation about what the Carbon method getPtrSize really does. From the code I am translating, it seems that it returns the byte representation of an image, but that doesn't really match up with the name. Could someone give me a good explanation of this method, or link me to some documentation that describes it? The code I am translating is in a Common Lisp implementation called MCL that has a bridge to Carbon (I am translating into CCL, which is a Common Lisp implementation with a Cocoa bridge). Here is the MCL code (#_ before a method call means that it is a Carbon method):

        (defmethod COPY-CONTENT-INTO ((Source inflatable-icon) (Destination inflatable-icon))
          ;; check for size compatibility to avoid disaster
          (unless (and (= (rows Source) (rows Destination))
                       (= (columns Source) (columns Destination))
                       (= (#_getPtrSize (image Source)) (#_getPtrSize (image Destination))))
            (error "cannot copy content of source into destination inflatable icon: incompatible sizes"))
          ;; given that they are the same size only copy content
          (setf (is-upright Destination) (is-upright Source))
          (setf (height Destination) (height Source))
          (setf (dz Destination) (dz Source))
          (setf (surfaces Destination) (surfaces Source))
          (setf (distance Destination) (distance Source))
          ;; arrays
          (noise-map Source)       ;; accessor makes array if needed
          (noise-map Destination)  ;; accessor makes array if needed
          (dotimes (Row (rows Source))
            (dotimes (Column (columns Source))
              (setf (aref (noise-map Destination) Row Column)
                    (aref (noise-map Source) Row Column))
              (setf (aref (altitudes Destination) Row Column)
                    (aref (altitudes Source) Row Column))))
          (setf (connectors Destination) (mapcar #'copy-instance (connectors Source)))
          (setf (visible-alpha-threshold Destination) (visible-alpha-threshold Source))
          ;; copy Image: slow byte copy
          (dotimes (I (#_getPtrSize (image Source)))
            (%put-byte (image Destination) (%get-byte (image Source) i) i))
          ;; flat texture optimization: do not copy texture-id ->
          ;; destination should get its own texture id from OpenGL
          (setf (is-flat Destination) (is-flat Source))
          ;; do not compile flat textures: the display list overhead slows things down by about 2x
          (setf (auto-compile Destination) (not (is-flat Source)))
          ;; to make change visible we have to reset the compiled flag
          (setf (is-compiled Destination) nil))

    Read the article

  • How to use Parcel in Android?

    - by Mike
    I'm trying to use Parcel to write and then read back a Parcelable. For some reason, when I read the object back from the file, it's coming back as null.

        public void testFoo() {
            final Foo orig = new Foo("blah blah");

            // Write orig to a parcel and then to a byte array
            final Parcel p1 = Parcel.obtain();
            p1.writeValue(orig);
            final byte[] bytes = p1.marshall();

            // Check to make sure that the byte array seems to contain a Parcelable
            assertEquals(4, bytes[0]); // Parcel.VAL_PARCELABLE

            // Unmarshall a Foo from that byte array
            final Parcel p2 = Parcel.obtain();
            p2.unmarshall(bytes, 0, bytes.length);
            final Foo result = (Foo) p2.readValue(Foo.class.getClassLoader());

            assertNotNull(result); // FAIL
            assertEquals(orig.str, result.str);
        }

        protected static class Foo implements Parcelable {
            protected static final Parcelable.Creator<Foo> CREATOR = new Parcelable.Creator<Foo>() {
                public Foo createFromParcel(Parcel source) {
                    final Foo f = new Foo();
                    f.str = (String) source.readValue(Foo.class.getClassLoader());
                    return f;
                }

                public Foo[] newArray(int size) {
                    throw new UnsupportedOperationException();
                }
            };

            public String str;

            public Foo() { }

            public Foo(String s) { str = s; }

            public int describeContents() { return 0; }

            public void writeToParcel(Parcel dest, int ignored) {
                dest.writeValue(str);
            }
        }

    What am I missing? UPDATE: To simplify the test, I've removed the reading and writing of files from my original example.

    Read the article

  • How to detect the character encoding of a text file?

    - by Cédric Boivin
    I am trying to detect which character encoding is used in my file. With this code, I try to detect the standard encodings:

        public static Encoding GetFileEncoding(string srcFile)
        {
            // *** Use Default of Encoding.Default (Ansi CodePage)
            Encoding enc = Encoding.Default;

            // *** Detect byte order mark if any - otherwise assume default
            byte[] buffer = new byte[5];
            FileStream file = new FileStream(srcFile, FileMode.Open);
            file.Read(buffer, 0, 5);
            file.Close();

            if (buffer[0] == 0xef && buffer[1] == 0xbb && buffer[2] == 0xbf)
                enc = Encoding.UTF8;
            else if (buffer[0] == 0xfe && buffer[1] == 0xff)
                enc = Encoding.Unicode;
            else if (buffer[0] == 0 && buffer[1] == 0 && buffer[2] == 0xfe && buffer[3] == 0xff)
                enc = Encoding.UTF32;
            else if (buffer[0] == 0x2b && buffer[1] == 0x2f && buffer[2] == 0x76)
                enc = Encoding.UTF7;
            else if (buffer[0] == 0xFE && buffer[1] == 0xFF)
                enc = Encoding.GetEncoding(1201); // 1201 unicodeFFFE Unicode (Big-Endian)
            else if (buffer[0] == 0xFF && buffer[1] == 0xFE)
                enc = Encoding.GetEncoding(1200); // 1200 utf-16 Unicode

            return enc;
        }

    My first five bytes are 60, 118, 56, 46, and 49. Is there a chart that shows which encodings match those first five bytes?
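    A hedged aside: for BOM-carrying files the .NET framework can do this sniffing itself, as in the sketch below. Note also that bytes 60, 118, 56, 46, 49 are the printable ASCII characters "<v8.1", i.e. the file has no BOM at all; in that case neither this helper nor the code above can identify the encoding, and heuristics or out-of-band knowledge are the only options.

        using System.IO;
        using System.Text;

        // Let StreamReader inspect the byte order mark; CurrentEncoding is only
        // meaningful after the first read, hence the Peek().
        static Encoding DetectViaStreamReader(string srcFile)
        {
            using (StreamReader reader = new StreamReader(srcFile, Encoding.Default, true))
            {
                reader.Peek();
                return reader.CurrentEncoding;
            }
        }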

    Read the article

  • Invalid length for a Base-64 char array.

    - by Code Sherpa
    As the title says, I am getting: Invalid length for a Base-64 char array. I have read about this problem on here, and it seems that the suggestion is to store ViewState in SQL if it is large. I am using a wizard with a good deal of data collection, so chances are my ViewState is large. But before I turn to the "store-in-DB" solution, maybe somebody can take a look and tell me if I have other options? I construct the email for delivery using the method below:

        public void SendEmailAddressVerificationEmail(string userName, string to)
        {
            string msg = "Please click on the link below or paste it into a browser " +
                "to verify your email account.<BR><BR>" +
                "<a href=\"" + _configuration.RootURL + "Accounts/VerifyEmail.aspx?a=" +
                userName.Encrypt("verify") + "\">" +
                _configuration.RootURL + "Accounts/VerifyEmail.aspx?a=" +
                userName.Encrypt("verify") + "</a>";

            SendEmail(to, "", "", "Account created! Email verification required.", msg);
        }

    The Encrypt method looks like this:

        public static string Encrypt(string clearText, string Password)
        {
            byte[] clearBytes = System.Text.Encoding.Unicode.GetBytes(clearText);
            PasswordDeriveBytes pdb = new PasswordDeriveBytes(Password,
                new byte[] { 0x49, 0x76, 0x61, 0x6e, 0x20, 0x4d, 0x65,
                             0x64, 0x76, 0x65, 0x64, 0x65, 0x76 });
            byte[] encryptedData = Encrypt(clearBytes, pdb.GetBytes(32), pdb.GetBytes(16));
            return Convert.ToBase64String(encryptedData);
        }

    On the receiving end, the VerifyEmail.aspx.cs page has the line:

        string username = Cryptography.Decrypt(_webContext.UserNameToVerify, "verify");

    And the Decrypt method looks like:

        public static string Decrypt(string cipherText, string password)
        {
            // THE ERROR IS THROWN HERE!!
            byte[] cipherBytes = Convert.FromBase64String(cipherText);

    Can this error be remedied with a code fix, or must I store ViewState in the database? Thanks in advance.
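    Before moving ViewState to the database, one cheap thing to rule out (a hedged suggestion, since the failing input isn't shown): Base64 text can contain '+', '/', and '=', and a '+' in a query string decodes as a space, which leaves the round-tripped token as invalid Base64 of the wrong length. URL-encoding the token when the link is built usually cures exactly this exception:

        using System.Web;

        // Encode the encrypted token so '+', '/', and '=' survive the query string.
        string token = HttpUtility.UrlEncode(userName.Encrypt("verify"));
        string link = _configuration.RootURL + "Accounts/VerifyEmail.aspx?a=" + token;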

    Read the article

  • Can I convert an ASCII MD5 hashed password into a Unicode MD5 hashed password?

    - by Jimmy Moo Moo
    Hello, I'm looking for help converting an ASCII MD5-hashed password into a Unicode MD5-hashed password. As an example, I'll use the string "password". When it's converted to an ASCII byte array, I get a base64-encoded hash of X03MO1qnZdYdgyfeuILPmQ==. When it's converted to a Unicode byte array, I get a base64-encoded hash of sIHb6F4ew//D1OfQInQAzQ==. All my passwords are stored as an MD5 hash applied to an ASCII byte array, but I'm trying to migrate my application's user data to a system that stores passwords as an MD5 hash applied to a Unicode byte array. In case it's not clear, my current system uses the following C# code:

        var passwordBytes = Encoding.ASCII.GetBytes("password");
        var hashAlgorithm = HashAlgorithm.Create("MD5");
        var hashBytes = hashAlgorithm.ComputeHash(passwordBytes);

    The system I'm moving to differs only in the first line: it uses Encoding.Unicode.GetBytes. Does anybody know how I can convert my passwords from X03MO1qnZdYdgyfeuILPmQ== into sIHb6F4ew//D1OfQInQAzQ==? I'm guessing the answer is that I can't, since the encoding is done before the hashing, but I thought I'd inquire the bright minds of Stack Overflow and see if anybody has a way.
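    For illustration, a short sketch of why no direct conversion can exist: the two encodings produce different byte sequences before MD5 ever runs, and MD5 is one-way, so the old digests cannot be rewritten into new ones. (The usual migration trick, offered here only as a suggestion, is to verify against the old ASCII hash at each user's next login and re-hash the plaintext with the new scheme.)

        using System;
        using System.Security.Cryptography;
        using System.Text;

        // "password" as ASCII is 8 bytes; as Unicode (UTF-16LE) it is 16 bytes
        // with a 0x00 after each character, so the hash inputs already differ.
        HashAlgorithm md5 = HashAlgorithm.Create("MD5");
        byte[] ascii = Encoding.ASCII.GetBytes("password");
        byte[] unicode = Encoding.Unicode.GetBytes("password");
        Console.WriteLine(Convert.ToBase64String(md5.ComputeHash(ascii)));   // X03MO1qnZdYdgyfeuILPmQ==
        Console.WriteLine(Convert.ToBase64String(md5.ComputeHash(unicode))); // sIHb6F4ew//D1OfQInQAzQ==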

    Read the article

  • Why RSA encryption can return different results with C# and Java?

    - by ActioN
    I am using, in C#: RSACryptoServiceProvider; in Java: KeyFactory.getInstance("RSA") plus Cipher. I send the public key (exponent + modulus) as a byte array from Java to C#. That part is OK: the bytes are the same. But when I try to encrypt some data with the same key in Java and C#, the results are different.

    Java key generation:

        KeyPairGenerator keyGen = KeyPairGenerator.getInstance("RSA");
        keyGen.initialize(Config.CRYPTO_KEY_NUM_BITS);
        m_KeyPair = keyGen.genKeyPair();
        m_PublicKey = KeyFactory.getInstance("RSA").generatePublic(
            new X509EncodedKeySpec(m_KeyPair.getPublic().getEncoded()));
        byte[] exponent = m_PublicKey.getPublicExponent().toByteArray();
        byte[] modulus = m_PublicKey.getModulus().toByteArray();
        // then sending...

    C# key receive:

        // Received...
        m_ExternKey = new RSAParameters();
        m_ExternKey.Exponent = exponent;
        m_ExternKey.Modulus = modulus;
        m_RsaExtern = new RSACryptoServiceProvider();
        m_RsaExtern.ImportParameters(m_ExternKey);
        byte[] test = m_RsaExtern.Encrypt(bytesToEncrypt, true);

    and the problem is that the encrypted bytes are different. Thank you.
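    Two hedged observations that may help while debugging, not verified against this exact code: first, Java's BigInteger.toByteArray() can prepend a 0x00 sign byte to the modulus, which is worth stripping before handing it to RSAParameters; second, RSA padding is randomized, so differing ciphertexts are expected even when both sides hold the same key. A small C# sketch of the second point, reusing the variables above:

        // OAEP padding (the 'true' flag) mixes in fresh random bytes on every call,
        // so the same key encrypting the same plaintext twice yields different output.
        byte[] c1 = m_RsaExtern.Encrypt(bytesToEncrypt, true);
        byte[] c2 = m_RsaExtern.Encrypt(bytesToEncrypt, true);
        // c1 != c2 almost surely; the meaningful test is that the Java private key
        // decrypts both back to bytesToEncrypt.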

    Read the article

  • How to encrypt Amazon CloudFront signature for private content access using canned policy

    - by Chet
    Has anyone using .NET actually worked out how to successfully sign a signature to use with CloudFront private content? After a couple of days of attempts, all I can get is Access Denied. I have been working with variations of the following code, and have also tried OpenSSL.NET and the AWSSDK, but that does not have a sign method for RSA-SHA1 yet. The policy (data) looks like this:

        {"Statement":[{"Resource":"http://xxxx.cloudfront.net/xxxx.jpg","Condition":{"DateLessThan":{"AWS:EpochTime":1266922799}}}]}

    This method attempts to sign the policy for use in the canned URL. Some of the variations have included changing the padding used in the hash and also reversing the byte[] before signing, as apparently OpenSSL does it that way.

        public string Sign(string data)
        {
            using (SHA1Managed SHA1 = new SHA1Managed())
            {
                RSACryptoServiceProvider provider = new RSACryptoServiceProvider();
                RSACryptoServiceProvider.UseMachineKeyStore = false;

                // Amazon PEM converted to XML using OpenSslKey
                provider.FromXmlString("<RSAKeyValue><Modulus>.....");

                byte[] plainbytes = System.Text.Encoding.UTF8.GetBytes(data);
                byte[] hash = SHA1.ComputeHash(plainbytes);

                //Array.Reverse(sig); // I have seen some examples that reverse the hash
                byte[] sig = provider.SignHash(hash, "SHA1");
                return Convert.ToBase64String(sig);
            }
        }

    It's useful to note that I have verified the content is set up correctly in S3 and CloudFront by generating a CloudFront canned-policy URL using my CloudBerry Explorer. How do they do it? Any ideas would be much appreciated. Thanks
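    One documented CloudFront detail that the method above skips, and which by itself produces Access Denied: the Base64 signature must be made URL-safe before it goes into the query string, with '+', '=', and '/' replaced by '-', '_', and '~'. A small helper sketch:

        // CloudFront's required character substitutions for the Signature parameter.
        static string ToCloudFrontSafeBase64(byte[] sig)
        {
            return Convert.ToBase64String(sig)
                .Replace('+', '-')
                .Replace('=', '_')
                .Replace('/', '~');
        }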

    Read the article

  • Problem with WCF Streaming

    - by H4mm3rHead
    Hi, I was looking at this thread: http://stackoverflow.com/questions/1935040/how-to-handle-large-file-uploads-via-wcf. I need a web service hosted at my provider where I can upload and download files. We are talking videos from 1 MB to 100 MB, hence the streaming approach. I can't get it to work. I declared an interface:

        [ServiceContract]
        public interface IFileTransferService
        {
            [OperationContract]
            void UploadFile(Stream stream);
        }

    and all is fine. I implement it like this:

        public string FileName = "test";

        public void UploadFile(Stream stream)
        {
            try
            {
                FileStream outStream = File.Open(FileName, FileMode.Create, FileAccess.Write);
                const int bufferLength = 4096;
                byte[] buffer = new byte[bufferLength];
                int count = 0;
                while ((count = stream.Read(buffer, 0, bufferLength)) > 0)
                {
                    //progress
                    outStream.Write(buffer, 0, count);
                }
                outStream.Close();
                stream.Close();
                //saved
            }
            catch (Exception ex)
            {
                throw new Exception("error: " + ex.Message);
            }
        }

    Still no problem; it's published to my web server out on the interweb. So far so good. Now I make a reference to it and want to pass it a FileStream, but the argument is now a byte[]. Why is that, and how do I get it into the proper shape for streaming?

    Edit: My binding looks like this:

        <bindings>
          <basicHttpBinding>
            <binding name="StreamingFileTransferServicesBinding"
                     transferMode="StreamedRequest"
                     maxBufferSize="65536"
                     maxReceivedMessageSize="204003200" />
          </basicHttpBinding>
        </bindings>

    I can consume it without problems and get no errors, other than my input parameter having changed from a Stream to a byte[].
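    Worth knowing about proxy generation: WCF metadata describes a lone Stream parameter as xsd:base64Binary, so "Add Service Reference" can legitimately come back with a byte[] signature. A hedged sketch of one way to keep the Stream contract, assuming the client can reference the assembly that defines IFileTransferService (the address and file path are placeholders):

        using System.IO;
        using System.ServiceModel;

        // ChannelFactory builds the client from the shared contract type,
        // so no proxy code is generated and the Stream parameter survives.
        var binding = new BasicHttpBinding { TransferMode = TransferMode.StreamedRequest };
        binding.MaxReceivedMessageSize = 204003200;
        var factory = new ChannelFactory<IFileTransferService>(binding,
            new EndpointAddress("http://example.com/FileTransferService.svc"));
        IFileTransferService client = factory.CreateChannel();
        using (FileStream fs = File.OpenRead(@"C:\videos\clip.wmv"))
        {
            client.UploadFile(fs);
        }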

    Read the article

  • OutOfMemory exception when loading an image in .Net

    - by Ben
    Hi, I'm loading an image from a SQL CE database and then trying to load it into a PictureBox. I save the image like this:

        if (ofd.ShowDialog() == DialogResult.OK)
        {
            picArtwork.ImageLocation = ofd.FileName;
            using (System.IO.FileStream fs = new System.IO.FileStream(ofd.FileName, System.IO.FileMode.Open))
            {
                byte[] imageAsBytes = new byte[fs.Length];
                fs.Read(imageAsBytes, 0, imageAsBytes.Length);
                thisItem.Artwork = imageAsBytes;
                fs.Close();
            }
        }

    and then save to the DB using LINQ to SQL. I load the image back like so:

        using (FileStream fs = new FileStream(@"C:\Temp\img.jpg", FileMode.CreateNew, FileAccess.Write))
        {
            byte[] img = (byte[])encoding.GetBytes(ThisFilm.Artwork.ToString());
            fs.Write(img, 0, img.Length);
        }

        picArtwork.Image = System.Drawing.Bitmap.FromFile(@"C:\Temp\img.jpg");

    but I am getting an OutOfMemoryException. I have read that this is a slight red herring and that there is probably something wrong with the file type, but I can't figure out what. Any ideas? Thanks
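    A hedged observation about the load path: calling ToString() on the stored artwork yields a text representation rather than the image bytes, so the file written to C:\Temp\img.jpg is not a valid JPEG, and GDI+ reports unreadable image files as OutOfMemoryException. A sketch that writes the stored bytes back untouched (assuming LINQ to SQL surfaces Artwork as System.Data.Linq.Binary; if it is already byte[], use it directly):

        using System.IO;

        byte[] img = ThisFilm.Artwork.ToArray(); // Binary -> byte[]
        File.WriteAllBytes(@"C:\Temp\img.jpg", img);
        picArtwork.Image = System.Drawing.Bitmap.FromFile(@"C:\Temp\img.jpg");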

    Read the article

  • A Question about .net Rfc2898DeriveBytes class?

    - by IbrarMumtaz
    What is the difference with this class, as opposed to just using Encoding.ASCII.GetBytes(string)? I have had relative success with either approach; the former is a more long-winded approach, whereas the latter is simple and to the point. Both seem to let you do the same thing eventually, but I am struggling to see the point of using the former over the latter.

    The basic concept I have been able to grasp is that you can convert string passwords into byte arrays to be used by, e.g., a symmetric encryption class such as AesManaged. The same is true via the RFC class, but there you get to use salt values and a password when creating your RFC object. I assume it's more secure, but still, that's an uneducated guess at best! Also, it allows you to return byte arrays of a certain size, well, something like that. Here are a few examples to show you where I am coming from:

        byte[] myPassinBytes = Encoding.ASCII.GetBytes("some password");

    or

        string password = "P@%5w0r]>";
        byte[] saltArray = Encoding.ASCII.GetBytes("this is my salt");
        Rfc2898DeriveBytes rfcKey = new Rfc2898DeriveBytes(password, saltArray);

    The 'rfcKey' object can now be used to set up the .Key or .IV properties on a symmetric encryption algorithm class, i.e.:

        RijndaelManaged rj = new RijndaelManaged();
        rj.Key = rfcKey.GetBytes(rj.KeySize / 8);
        rj.IV = rfcKey.GetBytes(rj.BlockSize / 8);

    'rj' should be ready to go! The confusing part: rather than using the 'rfcKey' object, can I not just use my 'myPassinBytes' array to help set up my 'rj' object? I have tried doing this in VS2008 and the immediate answer is NO! But have you guys got a better-educated answer as to why the RFC class is used over the alternative I have mentioned above, and why?
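    To make the contrast concrete, a small hedged sketch (the salt strings are illustrative): the raw-bytes route produces a "key" that is just the password itself, at whatever length the password happens to be, while Rfc2898DeriveBytes (PBKDF2) stretches the password plus a salt and iteration count into key material of any requested size.

        // Raw bytes: "some password" is 13 bytes, which is not a legal AES key
        // size (16/24/32 bytes), and carries only the password's own entropy.
        byte[] raw = Encoding.ASCII.GetBytes("some password");

        // PBKDF2: any requested length, salted and iterated, so equal passwords
        // with different salts yield unrelated keys.
        var k1 = new Rfc2898DeriveBytes("some password", Encoding.ASCII.GetBytes("salt-one"));
        var k2 = new Rfc2898DeriveBytes("some password", Encoding.ASCII.GetBytes("salt-two"));
        byte[] key1 = k1.GetBytes(32); // a 256-bit key
        byte[] key2 = k2.GetBytes(32); // differs from key1 despite the same password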

    Read the article

  • Encrypting with AES

    - by lolalola
    Why can I encrypt only 16 characters of text?

    Works:

        string plainText = "1234567890123456";

    Doesn't work:

        string plainText = "12345678901234561";

    Doesn't work:

        string plainText = "123456789012345";

    Code:

        string plainText = "1234567890123456";
        byte[] plainTextBytes = Encoding.UTF8.GetBytes(plainText);
        byte[] keyBytes = System.Text.Encoding.UTF8.GetBytes("1234567890123456");
        byte[] initVectorBytes = System.Text.Encoding.UTF8.GetBytes("1234567890123456");

        RijndaelManaged symmetricKey = new RijndaelManaged();
        symmetricKey.Mode = CipherMode.CBC;
        symmetricKey.Padding = PaddingMode.Zeros;

        ICryptoTransform encryptor = symmetricKey.CreateDecryptor(keyBytes, initVectorBytes);

        MemoryStream memoryStream = new MemoryStream();
        CryptoStream cryptoStream = new CryptoStream(memoryStream, encryptor, CryptoStreamMode.Write);
        cryptoStream.Write(plainTextBytes, 0, plainTextBytes.Length);
        cryptoStream.FlushFinalBlock();

        byte[] cipherTextBytes = memoryStream.ToArray();
        memoryStream.Close();
        cryptoStream.Close();

        string cipherText = Convert.ToBase64String(cipherTextBytes);
        Console.ReadLine();
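    One line above is worth a second look, offered as a hedged observation: the code builds the transform with CreateDecryptor but then uses it to encrypt. A decryption transform only accepts whole 16-byte blocks, which would explain why exactly 16 characters work while 15 or 17 fail. A minimal correction sketch:

        // Use an encryption transform for encrypting; with PKCS7 padding any
        // plaintext length then round-trips cleanly.
        symmetricKey.Padding = PaddingMode.PKCS7;
        ICryptoTransform encryptor = symmetricKey.CreateEncryptor(keyBytes, initVectorBytes);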

    Read the article

  • Connection aborted.

    - by Pinu
    I am getting this error when I try to upload a file of 3 MB or more in my WCF client application.

        [SocketException (0x2745): An established connection was aborted by the software in your host machine]
           System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags) +73
           System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) +131

        [IOException: Unable to read data from the transport connection: An established connection was aborted by the software in your host machine.]
           System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) +294
           System.Net.PooledStream.Read(Byte[] buffer, Int32 offset, Int32 size) +26
           System.Net.Connection.SyncRead(HttpWebRequest request, Boolean userRetrievedStream, Boolean probeRead) +297

        [WebException: The underlying connection was closed: An unexpected error occurred on a receive.]
           System.Net.HttpWebRequest.GetResponse() +5314029
           System.ServiceModel.Channels.HttpChannelRequest.WaitForReply(TimeSpan timeout) +54

        [CommunicationException: An error occurred while receiving the HTTP response to http://localhost:4649/Service1.svc. This could be due to the service endpoint binding not using the HTTP protocol. This could also be due to an HTTP request context being aborted by the server (possibly due to the service shutting down). See server logs for more details.]
           System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg) +7596735
           System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type) +275
           SmartConnectClient.SmartConnect.IService1.OrderCertMail(OrderCertMailResponse OrderCertMail1) +0
           SmartConnectClient.SmartConnect.Service1Client.OrderCertMail(OrderCertMailResponse OrderCertMail1) in c:\documents and settings\pkale\my documents\visual studio 2008\projects\smartconnectclient\smartconnectclient\service references\smartconnect\reference.cs:1939
           SmartConnectClient.Test_CertMail_Order.Page_Load(Object sender, EventArgs e) in C:\Documents and Settings\pkale\My Documents\Visual Studio 2008\Projects\SmartConnectClient\SmartConnectClient\Test_CertMail_Order.aspx.cs:40
           System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) +14
           System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) +35
           System.Web.UI.Control.OnLoad(EventArgs e) +99
           System.Web.UI.Control.LoadRecursive() +50
           System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +627

    Read the article

  • Why does my Sax Parser produce no results after using InputStream Read?

    - by Andy Barlow
    Hello, I have this piece of code which I'm hoping will be able to tell me how much data I have downloaded (and soon put it in a progress bar), and then parse the results through my SAX parser. If I comment out basically everything above the //xr.parse(new InputSource(request.getInputStream())); line and swap the xr.parse calls over, it works fine. But at the moment, my SAX parser tells me I have nothing. Is it something to do with the is.read(buffer) section? Also, just as a note, request is an HttpURLConnection with various signatures.

        /* Input stream to read from our connection */
        InputStream is = request.getInputStream();

        /* We make a 2 Kb buffer to accelerate the download,
           instead of reading the file one byte at a time */
        byte[] buffer = new byte[2048];

        /* How many bytes have we already downloaded */
        int totBytes, bytes, sumBytes = 0;
        totBytes = request.getContentLength();

        while (true) {
            /* How many bytes we got */
            bytes = is.read(buffer);

            /* If no more bytes, we're done with the download */
            if (bytes <= 0) break;

            sumBytes += bytes;
            Log.v("XML", sumBytes + " of " + totBytes + " "
                + ((float) sumBytes / (float) totBytes) * 100 + "% done");
        }

        /* Parse the xml-data from our URL. */
        // OLD, and works if I comment out all of the above
        //xr.parse(new InputSource(request.getInputStream()));
        xr.parse(new InputSource(is));
        /* Parsing has finished. */

    Can anyone help me at all? Kind regards, Andy

    Read the article

  • Print raw data to a thermal-printer using .NET

    - by blauesocke
    I'm trying to print raw ASCII data to a thermal printer. I do this by using this code example: http://support.microsoft.com/kb/322091, but my printer always prints only one character, and not until I press the form-feed button. If I print something with Notepad, the printer will do a form feed automatically, but without printing any text. The printer is connected via USB over an lpt2usb adapter, and Windows 7 uses the "Generic - Generic / Text Only" driver. Does anyone know what is going wrong? How is it possible to print some words and do some form feeds? Are there some control characters I have to send? And if yes: how do I send them?

    Edit 14.04.2010 21:51: My code (C#) looks like this:

        PrinterSettings s = new PrinterSettings();
        s.PrinterName = "Generic / Text Only";
        RawPrinterHelper.SendStringToPrinter(s.PrinterName, "Test");

    This code will print a "T" after I press the form-feed button (the little black button here: swissmania.ch/images/935-151.jpg - sorry, not enough reputation for two hyperlinks).

    Edit 15.04.2010 16:56: I'm now using the code from here: c-sharpcorner.com/UploadFile/johnodonell/PrintingDirectlytothePrinter11222005001207AM/PrintingDirectlytothePrinter.aspx. I modified it a bit so that I can use the following code:

        byte[] toSend;
        // 10 = line feed
        // 13 = carriage return/form feed
        toSend = new byte[1] { 13 };
        PrintDirect.WritePrinter(lhPrinter, toSend, toSend.Length, ref pcWritten);

    Running this code has the same effect as pressing the form-feed button; it works fine! But code like this still does not work:

        byte[] toSend;
        // 10 = line feed
        // 13 = carriage return/form feed
        toSend = new byte[2] { 66, 67 };
        PrintDirect.WritePrinter(lhPrinter, toSend, toSend.Length, ref pcWritten);

    This will print out just a "B", but I expect "BC", and after running any code I have to reconnect the USB cable to make it work again. Any ideas?
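    A hedged reading of the symptoms: character-mode printers commonly buffer a line until a line terminator arrives, which would explain why nothing appears until the form-feed button forces a flush. Below is a sketch using the PrintDirect helper from the article above, terminating the line with CR+LF and ending the job with an ASCII form feed (12); the exact control set is printer-specific, so the printer's manual is authoritative.

        // "BC" followed by CR (13), LF (10), and FF (12): the CR+LF should make
        // the printer commit the line, the FF should advance the paper.
        byte[] toSend = new byte[] { 66, 67, 13, 10, 12 };
        PrintDirect.WritePrinter(lhPrinter, toSend, toSend.Length, ref pcWritten);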

    Read the article

  • C# PInvoke VerQueryValue returns back OutOfMemoryException?

    - by Bopha
    Hi, below is a code sample which I got from an online resource. It's supposed to work on the full framework, but when I build it in a C# smart-device project, it throws an exception saying it's out of memory. Does anybody know how I can fix it for use on the Compact Framework? The OutOfMemoryException occurs when I make the second call to VerQueryValue, which is the last one. Thanks.

        [DllImport("coredll.dll")]
        public static extern bool VerQueryValue(byte[] buffer, string subblock, out IntPtr blockbuffer, out uint len);

        [DllImport("coredll.dll")]
        public static extern bool VerQueryValue(byte[] pBlock, string pSubBlock, out string pValue, out uint len);

        private static void GetAssemblyVersion()
        {
            string filename = @"\Windows\MyLibrary.dll";
            if (File.Exists(filename))
            {
                try
                {
                    int handle = 0;
                    Int32 size = 0;
                    size = GetFileVersionInfoSize(filename, out handle);
                    if (size > 0)
                    {
                        bool retValue;
                        byte[] buffer = new byte[size];
                        retValue = GetFileVersionInfo(filename, handle, size, buffer);
                        if (retValue == true)
                        {
                            bool success = false;
                            IntPtr blockbuffer = IntPtr.Zero;
                            uint len = 0;

                            //success = VerQueryValue(buffer, "\\", out blockbuffer, out len);
                            success = VerQueryValue(buffer, @"\VarFileInfo\Translation", out blockbuffer, out len);
                            if (success)
                            {
                                int p = (int)blockbuffer;

                                // Reads a 16-bit signed integer from unmanaged memory
                                int j = Marshal.ReadInt16((IntPtr)p);
                                p += 2;

                                // Reads a 16-bit signed integer from unmanaged memory
                                int k = Marshal.ReadInt16((IntPtr)p);

                                string sb = string.Format("{0:X4}{1:X4}", j, k);
                                string spv = @"\StringFileInfo\" + sb + @"\ProductVersion";

                                string versionInfo;
                                VerQueryValue(buffer, spv, out versionInfo, out len);
                            }
                        }
                    }
                }
                catch (Exception err)
                {
                    string error = err.Message;
                }
            }
        }
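    A hedged guess at the failure, untested on CE: the second P/Invoke overload asks the marshaler to produce an out string from the version block, and that kind of marshaling is a classic source of spurious exceptions on the Compact Framework. Sticking to the IntPtr overload for the string query as well, and reading the UTF-16 data manually, avoids it:

        // Reuse the IntPtr overload for \StringFileInfo\...\ProductVersion too;
        // the value is a null-terminated UTF-16 string inside the buffer.
        IntPtr valuePtr;
        uint valueLen;
        if (VerQueryValue(buffer, spv, out valuePtr, out valueLen))
        {
            string versionInfo = Marshal.PtrToStringUni(valuePtr);
        }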

    Read the article

  • send Image from J2ME to SERVLET

    - by Akash
    Hi, I want to send an image from J2ME to a servlet. I am able to convert the image into a byte array and send it by HTTP POST. I have coded it as follows.

    From the mobile:

        conn = (HttpConnection) Connector.open(url, Connector.READ_WRITE, true);
        conn.setRequestMethod(HttpConnection.POST);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        os.write(bytes, 0, bytes.length); // bytes = byte array of image

    At the servlet:

        String line;
        BufferedReader r1 = new BufferedReader(new InputStreamReader(in));
        while ((line = r1.readLine()) != null) {
            System.out.println("line=" + line);
            buf.append(line);
        }
        String s = buf.toString();
        byte[] img_byte = s.getBytes();

    Now the problem I found is that some bytes sent from the mobile app are lost, namely those with the hex values 0A and 0D, i.e. LF (line feed) and CR (carriage return). It means that reading the POST body with readLine() does not preserve 0A and 0D values, and so I came to realize that the lost bytes are exactly the occurrences of 0A and 0D in the image's byte array. Does anyone have an idea how to fix this, or which other method to use? Thanks - Akash

    Read the article

  • AES Encryption Java Invalid Key length

    - by wuntee
    I am trying to create an AES encryption method, but for some reason I keep getting a 'java.security.InvalidKeyException: Key length not 128/192/256 bits'. Here is the code:

        public static SecretKey getSecretKey(char[] password, byte[] salt)
                throws NoSuchAlgorithmException, InvalidKeySpecException {
            SecretKeyFactory factory = SecretKeyFactory.getInstance("PBEWithMD5AndDES");

            // NOTE: last argument is the key length, and it is 256
            KeySpec spec = new PBEKeySpec(password, salt, 1024, 256);
            SecretKey tmp = factory.generateSecret(spec);
            SecretKey secret = new SecretKeySpec(tmp.getEncoded(), "AES");
            return (secret);
        }

        public static byte[] encrypt(char[] password, byte[] salt, String text)
                throws NoSuchAlgorithmException, InvalidKeySpecException, NoSuchPaddingException,
                InvalidKeyException, InvalidParameterSpecException, IllegalBlockSizeException,
                BadPaddingException, UnsupportedEncodingException {
            SecretKey secret = getSecretKey(password, salt);

            Cipher cipher = Cipher.getInstance("AES");

            // NOTE: This is where the Exception is being thrown
            cipher.init(Cipher.ENCRYPT_MODE, secret);

            byte[] ciphertext = cipher.doFinal(text.getBytes("UTF-8"));
            return (ciphertext);
        }

    Can anyone see what I am doing wrong? I am thinking it may have something to do with the SecretKeyFactory algorithm, but that is the only one I can find that is supported on the end system I am developing against. Any help would be appreciated. Thanks.

    Read the article
