Search Results

Search found 3953 results on 159 pages for 'byte slave'.

Page 105/159 | < Previous Page | 101 102 103 104 105 106 107 108 109 110 111 112  | Next Page >

  • c# send/receive object over network?

    - by Data-Base
    Hello, I'm working on a server/client project. The client will be asking the server for info and the server will send it back to the client; the info may be a string, number, array, list, ArrayList or any other object. I found a lot of examples but I ran into issues. The solution I found so far is to serialize the object (data), send it, then de-serialize it for processing. Here is the server code: public void RunServer(string SrvIP,int SrvPort) { try { var ipAd = IPAddress.Parse(SrvIP); /* Initializes the Listener */ if (ipAd != null) { var myList = new TcpListener(ipAd, SrvPort); /* Start Listening at the specified port */ myList.Start(); Console.WriteLine("The server is running at port "+SrvPort+"..."); Console.WriteLine("The local End point is :" + myList.LocalEndpoint); Console.WriteLine("Waiting for a connection....."); while (true) { Socket s = myList.AcceptSocket(); Console.WriteLine("Connection accepted from " + s.RemoteEndPoint); var b = new byte[100]; int k = s.Receive(b); Console.WriteLine("Recieved..."); for (int i = 0; i < k; i++) Console.Write(Convert.ToChar(b[i])); string cmd = Encoding.ASCII.GetString(b); if (cmd.Contains("CLOSE-CONNECTION")) break; var asen = new ASCIIEncoding(); // sending text s.Send(asen.GetBytes("The string was received by the server.")); // the line above is what should be modified to send a serialized object? Console.WriteLine("\nSent Acknowledgement"); s.Close(); Console.ReadLine(); } /* clean up */ myList.Stop(); } } catch (Exception e) { Console.WriteLine("Error..... " + e.StackTrace); } } Here is the client code, which should return an object: public object runClient(string SrvIP, int SrvPort) { object obj = null; try { var tcpclnt = new TcpClient(); Console.WriteLine("Connecting....."); tcpclnt.Connect(SrvIP, SrvPort); // use the ipaddress as in the server program Console.WriteLine("Connected"); Console.Write("Enter the string to be transmitted : "); var str = Console.ReadLine(); Stream stm = tcpclnt.GetStream(); var asen = new ASCIIEncoding(); if (str != null) { var ba = asen.GetBytes(str); Console.WriteLine("Transmitting....."); stm.Write(ba, 0, ba.Length); } var bb = new byte[2000]; var k = stm.Read(bb, 0, bb.Length); string data = null; for (var i = 0; i < k; i++) Console.Write(Convert.ToChar(bb[i])); //convert to object code ?????? Console.ReadLine(); tcpclnt.Close(); } catch (Exception e) { Console.WriteLine("Error..... " + e.StackTrace); } return obj; } I need a good serialize/deserialize approach and to know how to integrate it into the solution above. I would be really thankful for any help. Cheers
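
    One possible approach (a minimal sketch, not the poster's code) is to length-prefix a BinaryFormatter payload so the receiver knows exactly how many bytes to read before deserializing. The helper class and names below are made up for illustration; any [Serializable] type can be passed through it.

    using System;
    using System.IO;
    using System.Runtime.Serialization.Formatters.Binary;

    static class WireHelper
    {
        public static void SendObject(Stream stm, object payload)
        {
            using (var ms = new MemoryStream())
            {
                new BinaryFormatter().Serialize(ms, payload);        // payload type must be [Serializable]
                byte[] body = ms.ToArray();
                stm.Write(BitConverter.GetBytes(body.Length), 0, 4); // 4-byte length prefix
                stm.Write(body, 0, body.Length);
            }
        }

        public static object ReceiveObject(Stream stm)
        {
            int length = BitConverter.ToInt32(ReadExactly(stm, 4), 0);
            using (var ms = new MemoryStream(ReadExactly(stm, length)))
                return new BinaryFormatter().Deserialize(ms);
        }

        static byte[] ReadExactly(Stream stm, int count)
        {
            var buf = new byte[count];
            for (int read = 0; read < count; )
            {
                int n = stm.Read(buf, read, count - read);
                if (n == 0) throw new IOException("connection closed before the full message arrived");
                read += n;
            }
            return buf;
        }
    }

    On the server side, s.Send(...) would then become something like WireHelper.SendObject(new NetworkStream(s), myObject), and the client's manual byte loop becomes WireHelper.ReceiveObject(stm); both ends only need to share the serializable types.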

    Read the article

  • Different setter behavior between DbContext and ObjectContext

    - by Paul
    (This is using EntityFramework 4.2 CTP) I haven't found any references to this on the web yet, although it's likely I'm using the wrong terminology while searching. There's also a very likely scenario where this is 100% expected behavior, just looking for confirmation and would rather not dig through the tt template (still new to this). Assuming I have a class with a boolean field called Active and I have one row that already has this value set to true. I have code that executes to set said field to True regardless of it's existing value. If I use DbContext to update the value to True no update is made. If I use ObjectContext to update the value an update is made regardless of the existing value. This is happening in the exact same EDMX, all I did was change the code generation template from DbContext to EntityObject. Update: Ok, found the confirmation I was looking for...consider this a dupe...next time I'll do MOAR SEARCHING! Entity Framework: Cancel a property change if no change in value ** Update 2: ** Problem: the default tt template wraps the "if (this != value)" in the setter with "if (iskey), so only primarykey fields receive this logic. Solution: it's not the most graceful thing, but I removed this check...we'll see how it pans out in real usage. I included the entire tt template, my changes are denoted with "**"... //////// //////// Write SimpleType Properties. //////// private void WriteSimpleTypeProperty(EdmProperty simpleProperty, CodeGenerationTools code) { MetadataTools ef = new MetadataTools(this); #> /// <summary> /// <#=SummaryComment(simpleProperty)#> /// </summary><#=LongDescriptionCommentElement(simpleProperty, 1)#> [EdmScalarPropertyAttribute(EntityKeyProperty= <#=code.CreateLiteral(ef.IsKey(simpleProperty))#>, IsNullable=<#=code.CreateLiteral(ef.IsNullable(simpleProperty))#>)] [DataMemberAttribute()] <#=code.SpaceAfter(NewModifier(simpleProperty))#><#=Accessibility.ForProperty(simpleProperty)#> <#=MultiSchemaEscape(simpleProperty.TypeUsage, code)#> <#=code.Escape(simpleProperty)#> { <#=code.SpaceAfter(Accessibility.ForGetter(simpleProperty))#>get { <#+ if (ef.ClrType(simpleProperty.TypeUsage) == typeof(byte[])) { #> return StructuralObject.GetValidValue(<#=code.FieldName(simpleProperty)#>); <#+ } else { #> return <#=code.FieldName(simpleProperty)#>; <#+ } #> } <#=code.SpaceAfter(Accessibility.ForSetter((simpleProperty)))#>set { <#+ **//if (ef.IsKey(simpleProperty)) **//{ if (ef.ClrType(simpleProperty.TypeUsage) == typeof(byte[])) { #> if (!StructuralObject.BinaryEquals(<#=code.FieldName(simpleProperty)#>, value)) <#+ } else { #> if (<#=code.FieldName(simpleProperty)#> != value) <#+ } #> { <#+ PushIndent(CodeRegion.GetIndent(1)); **//} #> <#=ChangingMethodName(simpleProperty)#>(value); ReportPropertyChanging("<#=simpleProperty.Name#>"); <#=code.FieldName(simpleProperty)#> = <#=CastToEnumType(simpleProperty.TypeUsage, code)#>StructuralObject.SetValidValue(<#=CastToUnderlyingType(simpleProperty.TypeUsage, code)#>value<#=OptionalNullableParameterForSetValidValue(simpleProperty, code)#>, "<#=simpleProperty.Name#>"); ReportPropertyChanged("<#=simpleProperty.Name#>"); <#=ChangedMethodName(simpleProperty)#>(); <#+ //if (ef.IsKey(simpleProperty)) //{ PopIndent(); #> } <#+ //} #> } }

    Read the article

  • auspex LFS backups

    - by user1250465
    I have some backup tapes which existed on an AUSPEX file server. The backups were written to tape with the SunOS version of the CPIO command. Now that I need to restore them (of course there are no more Auspex servers in existence), the backups won't restore because the headers are not standard. I have dumped the tape images to disk. PAX, CPIO, and TAR cannot read the images. I've tried all of the CPIO format options. The errors I get are "name too long", "byte swapped in header", or just junk output. I can open up the images and read the contents of the files, but cannot restore them. I have found that SunOS had a special header in CPIO V2.5 images. I have found the source for cpio; now I need the definition of the SunOS header used inside CPIO.

    Read the article

  • Cannot decode complete cipher list in .NET SslStream handshake.

    - by karmasponge
    While attempting to move from a 'C' based SSL implementation to C# using the .NET SslStream and we have run into what look like cipher compatibility issues with the .NET SslStream and a AS400 machine we are trying to connect to (which worked previously). When we call SslStream.AuthenticateAsClient it is sending the following: 16 03 00 00 37 01 00 00 33 03 00 4d 2c 00 ee 99 4e 0c 5d 83 14 77 78 5c 0f d3 8f 8b d5 e6 b8 cd 61 0f 29 08 ab 75 03 f7 fa 7d 70 00 00 0c 00 05 00 0a 00 13 00 04 00 02 00 ff 01 00 Which decodes as (based on http://www.mozilla.org/projects/security/pki/nss/ssl/draft302.txt) [16] Record Type [03 00] SSL Version [00 37] Body length [01] SSL3_MT_CLIENT_HELLO [00 00 33] Length (51 bytes) [03 00] Version number = 768 [4d 2c 00 ee] 4 Bytes unix time [… ] 28 Bytes random number [00] Session number [00 0c] 12 bytes (2 * 6 Cyphers)? [00 05, 00 0a, 00 13, 00 04, 00 02, 00 ff] - [RC4, PBE-MD5-DES, RSA, MD5, PKCS, ???] [01 00] Null compression method The as400 server responds back with: 15 03 00 00 02 02 28 [15] SSL3_RT_ALERT [03 00] SSL Version [00 02] Body Length (2 Bytes) [02 28] 2 = SSL3_RT_FATAL, 40 = SSL3_AD_HANDSHAKE_FAILURE I'm specifically looking to decode the '00 FF' at the end of the cyphers. Have I decoded it correctly? What does, if anything, '00 FF' decode too? I am using the following code to test/reproduce: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Net.Sockets; using System.Net.Security; using System.Security.Authentication; using System.IO; using System.Diagnostics; using System.Security.Cryptography.X509Certificates; namespace TestSslStreamApp { class DebugStream : Stream { private Stream AggregatedStream { get; set; } public DebugStream(Stream stream) { AggregatedStream = stream; } public override bool CanRead { get { return AggregatedStream.CanRead; } } public override bool CanSeek { get { return AggregatedStream.CanSeek; } } public override bool CanWrite { get { return AggregatedStream.CanWrite; } } public override void Flush() { AggregatedStream.Flush(); } public override long Length { get { return AggregatedStream.Length; } } public override long Position { get { return AggregatedStream.Position; } set { AggregatedStream.Position = value; } } public override int Read(byte[] buffer, int offset, int count) { int bytesRead = AggregatedStream.Read(buffer, offset, count); return bytesRead; } public override long Seek(long offset, SeekOrigin origin) { return AggregatedStream.Seek(offset, origin); } public override void SetLength(long value) { AggregatedStream.SetLength(value); } public override void Write(byte[] buffer, int offset, int count) { AggregatedStream.Write(buffer, offset, count); } } class Program { static void Main(string[] args) { const string HostName = "as400"; TcpClient tcpClient = new TcpClient(HostName, 992); SslStream sslStream = new SslStream(new DebugStream(tcpClient.GetStream()), false, null, null, EncryptionPolicy.AllowNoEncryption); sslStream.AuthenticateAsClient(HostName, null, SslProtocols.Ssl3, false); } } }
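
    A hedged reading of that trailing pair: each two bytes in the list is a cipher-suite ID, and 00 FF is TLS_EMPTY_RENEGOTIATION_INFO_SCSV, the RFC 5746 signalling value rather than an actual cipher, so the per-suite decode above ("RC4, PBE-MD5-DES, ...") does not quite line up with the registry. The names in the sketch below are taken from the IANA TLS cipher-suite registry as I recall them, so verify them independently; if the AS400 accepts none of these suites, a fatal handshake_failure alert like the one shown is exactly what it would send back.

    using System;
    using System.Collections.Generic;

    class CipherListDecode
    {
        static void Main()
        {
            // IDs copied from the ClientHello above; names per the IANA TLS registry (verify independently).
            var names = new Dictionary<ushort, string>
            {
                { 0x0005, "TLS_RSA_WITH_RC4_128_SHA" },
                { 0x000A, "TLS_RSA_WITH_3DES_EDE_CBC_SHA" },
                { 0x0013, "TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA" },
                { 0x0004, "TLS_RSA_WITH_RC4_128_MD5" },
                { 0x0002, "TLS_RSA_WITH_NULL_SHA" },
                { 0x00FF, "TLS_EMPTY_RENEGOTIATION_INFO_SCSV" }  // signalling suite, not an actual cipher
            };
            byte[] suiteBytes = { 0x00, 0x05, 0x00, 0x0A, 0x00, 0x13, 0x00, 0x04, 0x00, 0x02, 0x00, 0xFF };
            for (int i = 0; i < suiteBytes.Length; i += 2)
            {
                ushort id = (ushort)((suiteBytes[i] << 8) | suiteBytes[i + 1]);
                string name;
                Console.WriteLine("0x{0:X4} {1}", id, names.TryGetValue(id, out name) ? name : "unknown");
            }
        }
    }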

    Read the article

  • Tool to convert a file of HEX to ASCII character set?

    - by Aaron
    Question: Is there a known tool to convert a file consisting of 2 byte Hex into ascii? Note: - Maintain file offset listing in bytes Example: File contents: 00000000 0054 0065 0073 0074 0020 0054 0065 0073 00000008 0074 0020 0054 0065 0073 0074 0020 0054 00000016 0065 0073 0074 0020 0054 0065 0073 0074 00000024 0020 0054 0065 0073 0074 0020 0054 0065 00000032 0073 0074 0020 0054 0065 0073 0074 0020 00000040 0054 0065 0073 0074 000a 0054 0065 0073 00000048 0074 0020 0054 0065 0073 0074 0020 0054 00000056 0065 0073 0074 0020 0054 0065 0073 0074 00000064 0020 0054 0065 0073 0074 0020 0054 0065 Expected output 00000016 0065 0073 0074 0020 0054 0065 0073 0074 |est Test Test Te| 00000032 0073 0074 0020 0054 0065 0073 0074 0020 |st Test Test.Tes| 00000048 0074 0020 0054 0065 0073 0074 0020 0054 |t Test Test Test| 00000064 0020 0054 0065 0073 0074 0020 0054 0065 | Test Test Test |
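
    I don't know of a stock tool that handles this exact two-bytes-per-character layout, but a short C# sketch along these lines (the input file name is hypothetical) keeps the offsets and appends the ASCII column; non-printable values such as the 000a newline come out as '.', matching the expected output.

    using System;
    using System.IO;
    using System.Linq;
    using System.Text;

    class HexDumpToAscii
    {
        static void Main()
        {
            foreach (string line in File.ReadLines("dump.txt"))    // input file name is made up
            {
                string[] parts = line.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
                var ascii = new StringBuilder();
                foreach (string hex in parts.Skip(1))              // parts[0] is the byte offset
                {
                    int value = Convert.ToInt32(hex, 16);          // e.g. "0054" -> 0x0054
                    char c = (char)(value & 0xFF);                 // low byte carries the character
                    ascii.Append(char.IsControl(c) ? '.' : c);
                }
                Console.WriteLine("{0} |{1}|", line, ascii);
            }
        }
    }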

    Read the article

  • Large keepalive_requests values are severely slowing down Nginx

    - by Gil
    When running a beacon (43-byte transparent pixel) load test on Nginx, we have tried several keepalive_requests values (from 10 to 100,000) and the optimal value seems to be 10. Here are the server HTTP headers of this tiny reply: HTTP/1.1 200 OK Server: nginx/1.5.6 Date: Wed, 23 Oct 2013 12:39:45 GMT Content-Type: image/gif Content-Length: 43 Last-Modified: Mon, 28 Sep 1970 06:00:00 GMT Connection: keep-alive Nginx is twice as slow with keepalive_requests 100000 as with keepalive_requests 10. Can you help us understand that result, or tell us what we are doing wrong? For reference, here is the nginx.conf file.

    Read the article

  • Invoke a SOAP method with namespace prefixes

    - by mvladic
    My C# web service client sends following soap message to Java-based web service: <?xml version="1.0" encoding="utf-8"?> <soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <soap:Body> <getData> <request> <requestParameters xmlns="http://b..."> <equals> ... </equals> </requestParameters> </request> </getData> </soap:Body> </soap:Envelope> and Java-based web service returns error: 500 Internal Server Error ... Cannot find dispatch method for {}getData ... Client written in Java, which works, sends the following message: <?xml version="1.0" encoding="utf-8"?> <soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <soap:Body> <ns2:getData xmlns:ns2="http://a..."> <ns2:request> <ns3:requestParameters xmlns:ns3="http://b..."> <ns3:equals> ... </ns3:equals> </ns3:requestParameters> </ns2:request> </ns2:getData> </soap:Body> </soap:Envelope> Is there an easy way in C# to send SOAP messages the same way Java client sends: with namespace prefixes? Following is C# code that sends message: // class MyService is auto-generated using wsdl.exe tool MyService service = new MyService(); RequestMessage request = new RequestMessage(); ... ResponseMessage response = service.getData(request); ... UPDATE: RequestMessage class looks like this: /// <remarks/> [System.CodeDom.Compiler.GeneratedCodeAttribute("svcutil", "3.0.4506.2152")] [System.SerializableAttribute()] [System.Diagnostics.DebuggerStepThroughAttribute()] [System.ComponentModel.DesignerCategoryAttribute("code")] [System.Xml.Serialization.XmlTypeAttribute(Namespace="http://uri.etsi.org/02657/v1.5.1#/RetainedData")] public partial class RequestMessage { private byte[] requestPriorityField; private RequestConstraints requestParametersField; private string deliveryPointHIBField; private string maxHitsField; private NationalRequestParameters nationalRequestParametersField; private System.Xml.XmlElement anyField; /// <remarks/> [System.Xml.Serialization.XmlElementAttribute(DataType="hexBinary", Order=0)] public byte[] requestPriority { get { return this.requestPriorityField; } set { this.requestPriorityField = value; } } /// <remarks/> [System.Xml.Serialization.XmlElementAttribute(Order=1)] public RequestConstraints requestParameters { get { return this.requestParametersField; } set { this.requestParametersField = value; } } /// <remarks/> [System.Xml.Serialization.XmlElementAttribute(Order=2)] public string deliveryPointHIB { get { return this.deliveryPointHIBField; } set { this.deliveryPointHIBField = value; } } /// <remarks/> [System.Xml.Serialization.XmlElementAttribute(DataType="integer", Order=3)] public string maxHits { get { return this.maxHitsField; } set { this.maxHitsField = value; } } /// <remarks/> [System.Xml.Serialization.XmlElementAttribute(Order=4)] public NationalRequestParameters nationalRequestParameters { get { return this.nationalRequestParametersField; } set { this.nationalRequestParametersField = value; } } /// <remarks/> [System.Xml.Serialization.XmlAnyElementAttribute(Order=5)] public System.Xml.XmlElement Any { get { return this.anyField; } set { this.anyField = value; } } }
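
    A hedged observation first: the prefixes by themselves should not matter to a conformant endpoint; the error "Cannot find dispatch method for {}getData" suggests the real difference is that the C# message leaves getData unqualified (empty namespace) while the Java client puts it in http://a.... If you serialize the body type yourself, XmlSerializerNamespaces lets you choose the prefixes; a minimal sketch using the generated RequestMessage from the question (it serializes that type on its own, not the full SOAP envelope, and the URIs are the placeholders above):

    using System;
    using System.Xml;
    using System.Xml.Serialization;

    class PrefixDemo
    {
        static void Main()
        {
            var request = new RequestMessage { maxHits = "10" };   // generated class shown in the question
            var ns = new XmlSerializerNamespaces();
            ns.Add("ns2", "http://a...");                          // placeholder URIs from the question
            ns.Add("ns3", "http://b...");
            var serializer = new XmlSerializer(typeof(RequestMessage));
            using (XmlWriter writer = XmlWriter.Create(Console.Out, new XmlWriterSettings { Indent = true }))
            {
                // Elements that belong to these namespaces are written with the ns2:/ns3: prefixes.
                serializer.Serialize(writer, request, ns);
            }
        }
    }

    For the wsdl.exe proxy itself, the attributes on the generated getData method (SoapDocumentMethodAttribute and its RequestNamespace/RequestElementName values) are probably the place to check whether the element is being qualified at all; that is an educated guess, not something the question confirms.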

    Read the article

  • Multiple IP Addresses on a Traceroute Line

    - by Paul
    I'm doing a traceroute from my box to, say, stackoverflow.com. I see a couple of instances where there are multiple IPs on one line. For instance, in the output below, line #2 has two IPs: 10.1.6.5 and 10.1.4.5. Also on line #4 there are two timestamps after 216.182.236.96: 0.653 ms and 0.637 ms. What are these? This is on Linux. Traceroute example: traceroute to www.stackoverflow.com (198.252.206.16), 30 hops max, 60 byte packets 2 ip-10-1-6-5.us-west-1.compute.internal (10.1.6.5) 0.329 ms 0.425 ms ip-10-1-4-5.us-west-1.compute.internal (10.1.4.5) 0.471 ms 4 216.182.236.104 (216.182.236.104) 0.554 ms 216.182.236.96 (216.182.236.96) 0.653 ms 0.637 ms 5 205.251.230.64 (205.251.230.64) 0.616 ms 205.251.229.232 (205.251.229.232) 1.305 ms 205.251.230.64 (205.251.230.64) 0.573 ms

    Read the article

  • Harvesting Dynamic HTTP Content to produce Replicating HTTP Static Content

    - by Neil Pitman
    I have a slowly evolving dynamic website served from J2EE. The response time and load capacity of the server are inadequate for client needs. Moreover, ad hoc requests can unexpectedly affect other services running on the same application server/database. I know the reasons and can't address them in the short term. I understand HTTP caching hints (expiry, ETags, ...) and, for the purpose of this question, please assume that I have maxed out the opportunities to reduce load. I am thinking of doing a brute-force traversal of all URLs in the system to prime a cache and then copying the cache contents to geodispersed cache servers near the clients. I'm thinking of Squid or Apache HTTPD mod_disk_cache. I want to prime one copy and (manually) replicate the cache contents. I don't need a federation or intelligence amongst the slaves. When the data changes, invalidating the cache, I will refresh my master cache and update the slave versions, probably once a night. Has anyone done this? Is it a good idea? Are there other technologies that I should investigate? I can program this, but I would prefer a solution built by configuring open-source technologies. Thanks

    Read the article

  • Git can no longer open emacs as its editor

    - by mwilliams
    I'm running Git version 1.7.3.2 that I built from source, zsh is my shell, and emacs is my editor. Recently I started seeing the following: /usr/local/Cellar/git/1.7.3.2/libexec/git-core/git-sh-setup: line 106: emacs: command not found Could not execute editor My zshrc looks like the following so I can use the Cocoa build and the console binary provided with it. EMACS_HOME="/Applications/Emacs.app/Contents/MacOS" function e() { PATH=$EMACS_HOME/bin:$PATH $EMACS_HOME/Emacs -nw $@ } function ec() { PATH=$EMACS_HOME/bin:$PATH emacsclient -t $@ } function es() { e --daemon=$1 && ec -s $1 } function el() { ps ax|grep Emacs } function ek() { $EMACS_HOME/bin/emacsclient -e '(kill-emacs)' -s $1 } function ecompile() { e -eval "(setq load-path (cons (expand-file-name \".\") load-path))" \ -batch -f batch-byte-compile $@ } alias emacs=e alias emacsclient=ec And I also have export EDITOR="emacs" and have tried adding export GIT_EDITOR="emacs" (and swapping that out with "e") But whatever I try I can't get git to open emacs whenever I need to do a commit or an interactive rebase, etc etc...

    Read the article

  • How to play audio in Java Application

    - by user577829
    I'm making a java application and I need to play audio. I'm playing mainly small sound files of my cannon firing (its a cannon shooting game) and the projectiles exploding, though I plan on having looping background music. I have found two different methods to accomplish this, but both don't work how I want. The first method is literally a method: public void playSoundFile(File file) {//http://java.ittoolbox.com/groups/technical-functional/java-l/sound-in-an-application-90681 try { //get an AudioInputStream AudioInputStream ais = AudioSystem.getAudioInputStream(file); //get the AudioFormat for the AudioInputStream AudioFormat audioformat = ais.getFormat(); System.out.println("Format: " + audioformat.toString()); System.out.println("Encoding: " + audioformat.getEncoding()); System.out.println("SampleRate:" + audioformat.getSampleRate()); System.out.println("SampleSizeInBits: " + audioformat.getSampleSizeInBits()); System.out.println("Channels: " + audioformat.getChannels()); System.out.println("FrameSize: " + audioformat.getFrameSize()); System.out.println("FrameRate: " + audioformat.getFrameRate()); System.out.println("BigEndian: " + audioformat.isBigEndian()); //ULAW format to PCM format conversion if ((audioformat.getEncoding() == AudioFormat.Encoding.ULAW) || (audioformat.getEncoding() == AudioFormat.Encoding.ALAW)) { AudioFormat newformat = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED, audioformat.getSampleRate(), audioformat.getSampleSizeInBits() * 2, audioformat.getChannels(), audioformat.getFrameSize() * 2, audioformat.getFrameRate(), true); ais = AudioSystem.getAudioInputStream(newformat, ais); audioformat = newformat; } //checking for a supported output line DataLine.Info datalineinfo = new DataLine.Info(SourceDataLine.class, audioformat); if (!AudioSystem.isLineSupported(datalineinfo)) { //System.out.println("Line matching " + datalineinfo + " is not supported."); } else { //System.out.println("Line matching " + datalineinfo + " is supported."); //opening the sound output line SourceDataLine sourcedataline = (SourceDataLine) AudioSystem.getLine(datalineinfo); sourcedataline.open(audioformat); sourcedataline.start(); //Copy data from the input stream to the output data line int framesizeinbytes = audioformat.getFrameSize(); int bufferlengthinframes = sourcedataline.getBufferSize() / 8; int bufferlengthinbytes = bufferlengthinframes * framesizeinbytes; byte[] sounddata = new byte[bufferlengthinbytes]; int numberofbytesread = 0; while ((numberofbytesread = ais.read(sounddata)) != -1) { int numberofbytesremaining = numberofbytesread; sourcedataline.write(sounddata, 0, numberofbytesread); } } } catch (Exception e) { e.printStackTrace(); } } The problem with this is that my entire program stops until the sound file is finished, or at least nearly finished. The second method is this: File file = new File("Launch1.wav"); AudioClip clip; try { clip = JApplet.newAudioClip(file.toURL()); clip.play(); } catch (Exception e) { e.getMessage(); } The problem I have here is that every time the sound file ends early or doesn't play at all depending on where I place the code. Is their any way to play sound without the above mentioned problems? Am I doing something wrong? Any help is greatly appreciated.

    Read the article

  • Why does sending post data with WebRequest take so long?

    - by Paramiliar
    I am currently creating a C# application to tie into a php / MySQL online system. The application needs to send post data to scripts and get the response. When I send the following data username=test&password=test I get the following responses... Starting request at 22/04/2010 12:15:42 Finished creating request : took 00:00:00.0570057 Transmitting data at 22/04/2010 12:15:42 Transmitted the data : took 00:00:06.9316931 <<-- Getting the response at 22/04/2010 12:15:49 Getting response 00:00:00.0360036 Finished response 00:00:00.0360036 Entire call took 00:00:07.0247024 As you can see it is taking 6 seconds to actually send the data to the script, I have done further testing bye sending data from telnet and by sending post data from a local file to the url and they dont even take a second so this is not a problem with the hosted script on the site. Why is it taking 6 seconds to transmit the data when it is two simple strings? I use a custom class to send the data class httppostdata { WebRequest request; WebResponse response; public string senddata(string url, string postdata) { var start = DateTime.Now; Console.WriteLine("Starting request at " + start.ToString()); // create the request to the url passed in the paramaters request = (WebRequest)WebRequest.Create(url); // set the method to post request.Method = "POST"; // set the content type and the content length request.ContentType = "application/x-www-form-urlencoded"; request.ContentLength = postdata.Length; // convert the post data into a byte array byte[] byteData = Encoding.UTF8.GetBytes(postdata); var end1 = DateTime.Now; Console.WriteLine("Finished creating request : took " + (end1 - start)); var start2 = DateTime.Now; Console.WriteLine("Transmitting data at " + start2.ToString()); // get the request stream and write the data to it Stream dataStream = request.GetRequestStream(); dataStream.Write(byteData, 0, byteData.Length); dataStream.Close(); var end2 = DateTime.Now; Console.WriteLine("Transmitted the data : took " + (end2 - start2)); // get the response var start3 = DateTime.Now; Console.WriteLine("Getting the response at " + start3.ToString()); response = request.GetResponse(); //Console.WriteLine(((WebResponse)response).StatusDescription); dataStream = response.GetResponseStream(); StreamReader reader = new StreamReader(dataStream); var end3 = DateTime.Now; Console.WriteLine("Getting response " + (end3 - start3)); // read the response string serverresponse = reader.ReadToEnd(); var end3a = DateTime.Now; Console.WriteLine("Finished response " + (end3a - start3)); Console.WriteLine("Entire call took " + (end3a - start)); //Console.WriteLine(serverresponse); reader.Close(); dataStream.Close(); response.Close(); return serverresponse; } } And to call it I use private void btnLogin_Click(object sender, EventArgs e) { // string postdata; if (txtUsername.Text.Length < 3 || txtPassword.Text.Length < 3) { MessageBox.Show("Missing your username or password."); } else { string postdata = "username=" + txtUsername.Text + "&password=" + txtPassword.Text; httppostdata myPost = new httppostdata(); string response = myPost.senddata("http://www.domainname.com/scriptname.php", postdata); MessageBox.Show(response); } }
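
    Two usual suspects for a several-second stall before the body leaves the machine are automatic proxy discovery and the Expect: 100-continue handshake, both of which can be turned off per request. A hedged sketch, cheap to rule out (the URL and form data are the placeholders from the question):

    using System;
    using System.Net;
    using System.Text;

    class FastPost
    {
        static void Main()
        {
            string url = "http://www.domainname.com/scriptname.php";      // placeholder from the question
            byte[] body = Encoding.UTF8.GetBytes("username=test&password=test");

            var request = (HttpWebRequest)WebRequest.Create(url);
            request.Method = "POST";
            request.ContentType = "application/x-www-form-urlencoded";
            request.ContentLength = body.Length;             // length of the encoded bytes, not of the string
            request.Proxy = null;                            // skip automatic proxy detection
            request.ServicePoint.Expect100Continue = false;  // send the body without waiting for "100 Continue"

            using (var stream = request.GetRequestStream())
                stream.Write(body, 0, body.Length);
            using (var response = (HttpWebResponse)request.GetResponse())
                Console.WriteLine(response.StatusCode);
        }
    }

    If the six seconds disappear with Proxy = null, the time was going into proxy auto-detection rather than the transfer itself.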

    Read the article

  • Monitor a log file on Linux and send each line to another program

    - by mlambie
    I run an apt-cacher-ng server on Ubuntu Linux which writes logs in the following format: 1299745593|O|149406|XXX.XXX.XXX.XXX|uburep/pool/main/t/tiff/libtiff4_3.9.2-2ubuntu0.4_amd64.deb 1299745593|O|10154976|XXX.XXX.XXX.XXX|uburep/pool/main/l/linux-firmware/linux-firmware_1.34.4_all.deb 1299748529|O|39368|XXX.XXX.XXX.XXX|uburep/pool/main/n/nagios-nrpe/nagios-nrpe-server_2.12-4ubuntu1_amd64.deb 1300155440|O|680100|XXX.XXX.XXX.XXX|uburep/pool/main/t/tzdata/tzdata_2011c-0ubuntu0.10.04_all.deb It shows the timestamp, direction (in or out), byte count, IP and filename. Every time a line is written to it, I'd like to also send that line to another program. I will have this program insert the line into a database so that I can crunch some statistics about how much bandwidth we're saving through operating a caching server. I do not want to cat the log file every X minutes (via cron) looking for new entries as it'd be somewhat computationally uneconomical. Instead I'd prefer to have a daemon monitor the log, and when a change is detected, each line is sent to my database-insertion script. Will swatch achieve this, or are there better options?

    Read the article

  • free Raw-File Converter/Editor

    - by RCIX
    I have RAW files output by a program with a specific set of properties (Photoshop RAW, 16 bits, IBM PC byte order, no header, 1 non-interleaved channel, variable sizes like 257x257 or 129x513); does anyone know of a free tool that will allow me to convert to and from this format, and possibly do basic editing (selection, copy/paste, rotation of selection)? I've tried Picasa, XNView, and Paint Shop Pro 7 and none of them work properly. The closest I get is Paint Shop Pro, which will at least make a serviceable attempt to open these files, but I can't set all of the proper settings. XNView just might be able to edit it if I can figure out how to change the open settings for a particular raw file. So my current questions are: how do I tell XNView to open a raw file a particular way? Failing that, is there any free tool that can open Photoshop RAW files with the above settings (that isn't Photoshop)? If it helps, I'm trying to import/export/edit heightmap data for maps for Supreme Commander.

    Read the article

  • A GUID as the MySQL table's Primary Key or as a separate column

    - by Ben
    I have a multi-process program that performs, in a 2 hour period, 5-10 million inserts to a 34GB table within a single Master/Slave MySQL setup (plus an equal number of reads in that period). The table in question has only 5 fields and 3 (single field) indexes. The primary key is auto-incrementing. I am far from a DBA, but the database appears to be crippled during this two hour period. So, I have a couple of general questions. 1) How much bang will I get out of batching these writes into units of 10? Currently, I am writing each insert serially because, after writing, I immediately need to know, in my program, the resulting primary key of each insert. The PK is the only unique field presently and approximating the order of insertion with something like a Datetime field or a multi-column value is not acceptable. If I perform a bulk insert, I won't know these IDs, which is a problem. So, I've been thinking about turning the auto-increment primary key into a GUID and enforcing uniqueness. I've also been kicking around the idea of creating a new column just for the purposes of the GUID. I don't really see the what that achieves though, that the PK approach doesn't already offer. As far as I can tell, the big downside to making the PK a randomly generated number is that the index would take a long time to update on each insert (since insertion order would not be sequential). Is that an acceptable approach for a table that is taking this number of writes? Thanks, Ben
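
    If the main argument against a GUID key is the random insert position, one common compromise is a client-generated, time-ordered identifier: you still know the key before the INSERT (no round trip), but new rows land near the right-hand edge of the index. A hedged C# sketch of the idea, intended for a BINARY(16) column; the layout is an assumption borrowed from the "COMB GUID" trick, not a MySQL feature, and it would need porting to whatever language the writer processes use.

    using System;
    using System.Security.Cryptography;

    static class OrderedId
    {
        static readonly RandomNumberGenerator Rng = RandomNumberGenerator.Create();

        // 16-byte identifier: 8 bytes of UTC ticks (big-endian) followed by 8 random bytes.
        // Because the timestamp leads, values sort roughly by creation time.
        public static byte[] Next()
        {
            byte[] id = new byte[16];
            byte[] ticks = BitConverter.GetBytes(DateTime.UtcNow.Ticks);
            if (BitConverter.IsLittleEndian)
                Array.Reverse(ticks);              // most significant byte first
            Array.Copy(ticks, 0, id, 0, 8);
            byte[] random = new byte[8];
            Rng.GetBytes(random);
            Array.Copy(random, 0, id, 8, 8);
            return id;
        }
    }

    A collision would require two IDs in the same 100 ns tick with the same 8 random bytes, which is unlikely but worth thinking through before relying on it for uniqueness enforcement.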

    Read the article

  • Fake serial communication under Linux

    - by kigurai
    I have an application where I want to simulate the connection between a device and a "modem". The device will be connected to a serial port and will talk to the software modem through that. For testing purposes I want to be able to use a mock software device to test sending and receiving data. Example Python code: device = Device() modem = Modem() device.connect(modem) device.write("Hello") modem_reply = device.read() Now, in my final app I will just pass /dev/ttyS1 or COM1 or whatever for the application to use. But how can I do this in software? I am running Linux and the application is written in Python. I have tried making a FIFO (mkfifo ~/my_fifo) and that does work, but then I'll need one FIFO for writing and one for reading. What I want is to open ~/my_fake_serial_port and read and write to that. I have also played with the pty module, but can't get that to work either. I can get a master and slave file descriptor from pty.openpty(), but trying to read or write to them only causes an IOError: Bad File Descriptor message.

    Read the article

  • Listening for TCP and UDP requests on the same port

    - by user339328
    I am writing a Client/Server set of programs Depending on the operation requested by the client, I use make TCP or UDP request. Implementing the client side is straight-forward, since I can easily open connection with any protocol and send the request to the server-side. On the servers-side, on the other hand, I would like to listen both for UDP and TCP connections on the same port. Moreover, I like the the server to open new thread for each connection request. I have adopted the approach explained in: link text I have extended this code sample by creating new threads for each TCP/UDP request. This works correctly if I use TCP only, but it fails when I attempt to make UDP bindings. Please give me any suggestion how can I correct this. tnx Here is the Server Code: public class Server { public static void main(String args[]) { try { int port = 4444; if (args.length > 0) port = Integer.parseInt(args[0]); SocketAddress localport = new InetSocketAddress(port); // Create and bind a tcp channel to listen for connections on. ServerSocketChannel tcpserver = ServerSocketChannel.open(); tcpserver.socket().bind(localport); // Also create and bind a DatagramChannel to listen on. DatagramChannel udpserver = DatagramChannel.open(); udpserver.socket().bind(localport); // Specify non-blocking mode for both channels, since our // Selector object will be doing the blocking for us. tcpserver.configureBlocking(false); udpserver.configureBlocking(false); // The Selector object is what allows us to block while waiting // for activity on either of the two channels. Selector selector = Selector.open(); tcpserver.register(selector, SelectionKey.OP_ACCEPT); udpserver.register(selector, SelectionKey.OP_READ); System.out.println("Server Sterted on port: " + port + "!"); //Load Map Utils.LoadMap("mapa"); System.out.println("Server map ... LOADED!"); // Now loop forever, processing client connections while(true) { try { selector.select(); Set<SelectionKey> keys = selector.selectedKeys(); // Iterate through the Set of keys. for (Iterator<SelectionKey> i = keys.iterator(); i.hasNext();) { SelectionKey key = i.next(); i.remove(); Channel c = key.channel(); if (key.isAcceptable() && c == tcpserver) { new TCPThread(tcpserver.accept().socket()).start(); } else if (key.isReadable() && c == udpserver) { new UDPThread(udpserver.socket()).start(); } } } catch (Exception e) { e.printStackTrace(); } } } catch (Exception e) { e.printStackTrace(); System.err.println(e); System.exit(1); } } } The UDPThread code: public class UDPThread extends Thread { private DatagramSocket socket = null; public UDPThread(DatagramSocket socket) { super("UDPThread"); this.socket = socket; } @Override public void run() { byte[] buffer = new byte[2048]; try { DatagramPacket packet = new DatagramPacket(buffer, buffer.length); socket.receive(packet); String inputLine = new String(buffer); String outputLine = Utils.processCommand(inputLine.trim()); DatagramPacket reply = new DatagramPacket(outputLine.getBytes(), outputLine.getBytes().length, packet.getAddress(), packet.getPort()); socket.send(reply); } catch (IOException e) { e.printStackTrace(); } socket.close(); } } I receive: Exception in thread "UDPThread" java.nio.channels.IllegalBlockingModeException at sun.nio.ch.DatagramSocketAdaptor.receive(Unknown Source) at server.UDPThread.run(UDPThread.java:25) 10x

    Read the article

  • ZFS, dedupe and PST files

    - by Unreason
    I am interested to know what the expected maximum dedupe ratio would be for a set of PST files. I have ~40G of PST files from ~15 users with a high level of duplication of attachments. I am running tests to see if I can get significant space savings if I store the data on ZFS with dedupe. For this purpose I have installed a test setup of Nexenta, but was wondering if someone here had already done this and what level of deduplication I might expect (or, in other words, how sensitive are PST files to block alignment and what parameters can influence the ratio?). Initial tests show a very low dedupe ratio, and I did find an explanation that block-level dedupe would not be efficient here and that byte-level dedupe would be much better (and that it should be performed by an application that is aware of the internal organization), so I am just double-checking here if someone has more input. Otherwise I will probably be converting the PST files to IMAP.

    Read the article

  • When using gt5 in my home directory I get a blank page.

    - by MT
    When using gt5 in various directories on my system (including my home directory) I get blank results. If I limit the max-depth enough, I get results. For example, in my home directory 'gt5 --max-depth 2' produces a listing, while 'gt5 --max-depth 3' produces a blank page. I've noticed that the temporary html file that gets created in tmp (such as '/tmp/gt5.9035.kJVM08Y9/gt5.html' is a zero-byte file. I can successfully do a du in the same directory (which is what I thought gt5 was using), so I'm not sure what to check?

    Read the article

  • Visual Studio reports that not all code paths return a value, even though they do

    - by chris12892
    I have an API in NETMF C# that I am writing that includes a function to send an HTTP request. For those who are familiar with NETMF, this is a heavily modified version of the "webClient" example, which a simple application that demonstrates how to submit an HTTP request, and recive a response. In the sample, it simply prints the response and returns void,. In my version, however, I need it to return the HTTP response. For some reason, Visual Studio reports that not all code paths return a value, even though, as far as I can tell, they do. Here is my code... /// <summary> /// This is a modified webClient /// </summary> /// <param name="url"></param> private string httpRequest(string url) { // Create an HTTP Web request. HttpWebRequest request = HttpWebRequest.Create(url) as HttpWebRequest; // Set request.KeepAlive to use a persistent connection. request.KeepAlive = true; // Get a response from the server. WebResponse resp = request.GetResponse(); // Get the network response stream to read the page data. if (resp != null) { Stream respStream = resp.GetResponseStream(); string page = ""; byte[] byteData = new byte[4096]; char[] charData = new char[4096]; int bytesRead = 0; Decoder UTF8decoder = System.Text.Encoding.UTF8.GetDecoder(); int totalBytes = 0; // allow 5 seconds for reading the stream respStream.ReadTimeout = 5000; // If we know the content length, read exactly that amount of // data; otherwise, read until there is nothing left to read. if (resp.ContentLength != -1) { for (int dataRem = (int)resp.ContentLength; dataRem > 0; ) { Thread.Sleep(500); bytesRead = respStream.Read(byteData, 0, byteData.Length); if (bytesRead == 0) throw new Exception("Data laes than expected"); dataRem -= bytesRead; // Convert from bytes to chars, and add to the page // string. int byteUsed, charUsed; bool completed = false; totalBytes += bytesRead; UTF8decoder.Convert(byteData, 0, bytesRead, charData, 0, bytesRead, true, out byteUsed, out charUsed, out completed); page = page + new String(charData, 0, charUsed); } page = new String(System.Text.Encoding.UTF8.GetChars(byteData)); } else throw new Exception("No content-Length reported"); // Close the response stream. For Keep-Alive streams, the // stream will remain open and will be pushed into the unused // stream list. resp.Close(); return page; } } Any ideas? Thanks...
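
    The compiler is right in this case: when resp is null the method skips the whole if block and falls off the end without returning anything, and that is the path it is complaining about. A sketch of the structural fix only - the stream-reading body stays exactly as in the original method, and returning null at the end is just one option (throwing would also satisfy the compiler):

    private string httpRequest(string url)
    {
        HttpWebRequest request = HttpWebRequest.Create(url) as HttpWebRequest;
        request.KeepAlive = true;
        WebResponse resp = request.GetResponse();
        if (resp != null)
        {
            string page = "";
            // ... read the response stream into 'page' exactly as before ...
            resp.Close();
            return page;
        }
        return null;   // or throw - either way this final path now produces a value
    }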

    Read the article

  • How to upload an image on Android?

    - by Mattiah85
    I havve to upload image from my SD card to PHP server. I have read a lot of articles and topics but I have some problems... First I have use that code: HttpURLConnection connection = null; DataOutputStream outputStream = null; //DataInputStream inputStream = null; String urlServer = hostName+"Upload"; String lineEnd = "\r\n"; String twoHyphens = "--"; String boundary = "*****"; String serverResponseMessage; //int serverResponseCode; int bytesRead, bytesAvailable, bufferSize; byte[] buffer; int maxBufferSize = 1*1024*1024; try { showLog("uploading file: " + file); FileInputStream fileInputStream = new FileInputStream(new File(pictureFileDir+"/"+file) ); URL url = new URL(urlServer); connection = (HttpURLConnection) url.openConnection(); // Allow Inputs &amp; Outputs. connection.setDoInput(true); connection.setDoOutput(true); connection.setUseCaches(false); // Set HTTP method to POST. connection.setRequestMethod("POST"); connection.setRequestProperty("Connection", "Keep-Alive"); connection.setRequestProperty("Content-Type", "multipart/form-data;boundary="+boundary); outputStream = new DataOutputStream( connection.getOutputStream() ); outputStream.writeBytes(twoHyphens + boundary + lineEnd); outputStream.writeBytes("Content-Disposition: form-data; name=\"uploaded_file\";filename=\"" + file +"\"" + lineEnd); outputStream.writeBytes(lineEnd); bytesAvailable = fileInputStream.available(); bufferSize = Math.min(bytesAvailable, maxBufferSize); buffer = new byte[bufferSize]; // Read file bytesRead = fileInputStream.read(buffer, 0, bufferSize); while (bytesRead > 0) { outputStream.write(buffer, 0, bufferSize); bytesAvailable = fileInputStream.available(); bufferSize = Math.min(bytesAvailable, maxBufferSize); bytesRead = fileInputStream.read(buffer, 0, bufferSize); } outputStream.writeBytes(lineEnd); outputStream.writeBytes(twoHyphens + boundary + twoHyphens + lineEnd); // Responses from the server (code and message) //serverResponseCode = connection.getResponseCode(); serverResponseMessage = connection.getResponseMessage(); showLog("server response: " + serverResponseMessage); fileInputStream.close(); outputStream.flush(); outputStream.close(); } catch (Exception ex) { ex.printStackTrace(); } but server response 200/OK and no file was on destination server... After i have read about Multipart: try { HttpParams params = new BasicHttpParams(); params.setParameter(CoreProtocolPNames.PROTOCOL_VERSION, HttpVersion.HTTP_1_1); DefaultHttpClient mHttpClient = new DefaultHttpClient(params); File image = new File(pictureFileDir + "/" + filename); HttpPost httppost = new HttpPost(hostName+"Upload"); MultipartEntity multipartEntity = new MultipartEntity(HttpMultipartMode.BROWSER_COMPATIBLE); multipartEntity.addPart("Image", new FileBody(image)); httppost.setEntity(multipartEntity); mHttpClient.execute(httppost, new PhotoUploadResponseHandler()); } catch (Exception e) { e.printStackTrace(); } but then a i have such LOG in LogCat and nothing else... 06-04 06:50:52.277: D/dalvikvm(1584): DexOpt: couldn't find static field Lorg/apache/http/message/BasicHeaderValueParser;.INSTANCE 06-04 06:50:52.277: W/dalvikvm(1584): VFY: unable to resolve static field 6688 (INSTANCE) in Lorg/apache/http/message/BasicHeaderValueParser; 06-04 06:50:52.277: D/dalvikvm(1584): VFY: replacing opcode 0x62 at 0x001b ServerSide Script: $target_path = "uploads"; $target_path = $target_path . basename( $_FILES['Image']); if(move_uploaded_file($_FILES['tmp_name'], $file_path)) { echo "success"; } else{ echo "fail"; } why? 
What is the simplest way to upload an image?

    Read the article

  • Reverse lookup of inode/file from offset in raw device on linux and ext3/4?

    - by lilinjn
    In Linux, given an offset into a raw disk device, is it possible to map back to a partition + inode? For example, suppose I know that the string "xyz" is contained at byte offset 1000000 on /dev/sda (e.g. xxd -l 100 -s 1000000 /dev/sda shows a dump that begins with "xyz"). 1) How do I figure out which partition (if any) offset 1000000 is located in? (I imagine this is easy, but am including it for completeness.) 2) Assuming the offset is located in a partition, how do I go about finding which inode it belongs to (or determine that it is part of free space)? Presumably this is filesystem-specific, in which case does anyone know how to do this for ext3 and ext4?

    Read the article

  • Justification of Amazon EC2 Performance

    - by Adroidist
    I have a .jar file that represents a server which receives an image in bytes over TCP (of size at most 500 kB) and writes it to a file. It then runs a Sobel filter on this image and sends it over a TCP socket to the client side. I ran it on my laptop and it was very fast. But when I put it on an Amazon EC2 m1.large instance, I found it is very slow - around 10 times slower. It might be inefficiency in the code, but in fact my code does nothing more than receive the image (like any byte file), run the Sobel algorithm and send it back. I have the following questions: 1- Is this normal performance for an Amazon EC2 server? I have read the following links: link1 and link2. 2- Even if the code is not that efficient, the server is ultimately handling a very low load (just one client); does the "inefficient" code justify such performance? 3- My laptop is only dual-core... Why would the Amazon EC2 server have worse performance than my laptop? How is this explained? Excuse me for my ignorance.

    Read the article

  • How to reduce virtual memory by optimising my PHP code?

    - by iCeR
    My current code (see below) uses 147MB of virtual memory! My provider has allocated 100MB by default and the process is killed once run, causing an internal error. The code is utilising curl multi and must be able to loop with more than 150 iterations whilst still minimizing the virtual memory. The code below is only set at 150 iterations and still causes the internal server error. At 90 iterations the issue does not occur. How can I adjust my code to lower the resource use / virtual memory? Thanks! <?php function udate($format, $utimestamp = null) { if ($utimestamp === null) $utimestamp = microtime(true); $timestamp = floor($utimestamp); $milliseconds = round(($utimestamp - $timestamp) * 1000); return date(preg_replace('`(?<!\\\\)u`', $milliseconds, $format), $timestamp); } $url = 'https://www.testdomain.com/'; $curl_arr = array(); $master = curl_multi_init(); for($i=0; $i<150; $i++) { $curl_arr[$i] = curl_init(); curl_setopt($curl_arr[$i], CURLOPT_URL, $url); curl_setopt($curl_arr[$i], CURLOPT_RETURNTRANSFER, 1); curl_setopt($curl_arr[$i], CURLOPT_SSL_VERIFYHOST, FALSE); curl_setopt($curl_arr[$i], CURLOPT_SSL_VERIFYPEER, FALSE); curl_multi_add_handle($master, $curl_arr[$i]); } do { curl_multi_exec($master,$running); } while($running > 0); for($i=0; $i<150; $i++) { $results = curl_multi_getcontent ($curl_arr[$i]); $results = explode("<br>", $results); echo $results[0]; echo "<br>"; echo $results[1]; echo "<br>"; echo udate('H:i:s:u'); echo "<br><br>"; usleep(100000); } ?> Processor Information Total processors: 8 Processor #1 Vendor GenuineIntel Name Intel(R) Xeon(R) CPU E5405 @ 2.00GHz Speed 1995.120 MHz Cache 6144 KB Processor #2 Vendor GenuineIntel Name Intel(R) Xeon(R) CPU E5405 @ 2.00GHz Speed 1995.120 MHz Cache 6144 KB Processor #3 Vendor GenuineIntel Name Intel(R) Xeon(R) CPU E5405 @ 2.00GHz Speed 1995.120 MHz Cache 6144 KB Processor #4 Vendor GenuineIntel Name Intel(R) Xeon(R) CPU E5405 @ 2.00GHz Speed 1995.120 MHz Cache 6144 KB Processor #5 Vendor GenuineIntel Name Intel(R) Xeon(R) CPU E5405 @ 2.00GHz Speed 1995.120 MHz Cache 6144 KB Processor #6 Vendor GenuineIntel Name Intel(R) Xeon(R) CPU E5405 @ 2.00GHz Speed 1995.120 MHz Cache 6144 KB Processor #7 Vendor GenuineIntel Name Intel(R) Xeon(R) CPU E5405 @ 2.00GHz Speed 1995.120 MHz Cache 6144 KB Processor #8 Vendor GenuineIntel Name Intel(R) Xeon(R) CPU E5405 @ 2.00GHz Speed 1995.120 MHz Cache 6144 KB Memory Information Memory for crash kernel (0x0 to 0x0) notwithin permissible range Memory: 8302344k/9175040k available (2176k kernel code, 80272k reserved, 901k data, 228k init, 7466304k highmem) System Information Linux server3.server.com 2.6.18-194.17.1.el5PAE #1 SMP Wed Sep 29 13:31:51 EDT 2010 i686 i686 i386 GNU/Linux Physical Disks SCSI device sda: 1952448512 512-byte hdwr sectors (999654 MB) sda: Write Protect is off sda: Mode Sense: 03 00 00 08 SCSI device sda: drive cache: write back SCSI device sda: 1952448512 512-byte hdwr sectors (999654 MB) sda: Write Protect is off sda: Mode Sense: 03 00 00 08 SCSI device sda: drive cache: write back sd 0:1:0:0: Attached scsi disk sda sd 4:0:0:0: Attached scsi removable disk sdb sd 0:1:0:0: Attached scsi generic sg4 type 0 sd 4:0:0:0: Attached scsi generic sg7 type 0 Current Memory Usage total used free shared buffers cached Mem: 8306672 7847384 459288 0 487912 6444548 -/+ buffers/cache: 914924 7391748 Swap: 4095992 496 4095496 Total: 12402664 7847880 4554784 Current Disk Usage Filesystem Size Used Avail Use% Mounted on /dev/mapper/VolGroup00-LogVol00 898G 307G 546G 36% / /dev/sda1 
99M 19M 76M 20% /boot none 4.0G 0 4.0G 0% /dev/shm /var/tmpMnt 4.0G 1.8G 2.0G 48% /tmp

    Read the article

  • Linux: Limiting data throughput (pipe) in bytes per second?

    - by sdaau
    Hi all, I was wondering if there is a Linux program that can limit the data throughput of a pipe - in actual bytes per second. From what I gather, candidates would be bfr (however, it has been removed from Debian - "Removal candidate: bfr") and cpipe (however, the lowest resolution it seems to support is kB/s, meaning that buffer writes can still reach MB/s - "[SOLVED] Is there a program to limit terminal pipe speed? - Page 2 - Ubuntu Forums"). What I'd want is to be able to specify something like cat example.txt | ratelimit -Bps 100 > /dev/ttyUSB0 ... and actually have a single byte from example.txt sent every 1/100 = 0.01 s (10 ms) to the output. Thanks in advance for any suggestions. Cheers!
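
    If no packaged tool turns up (pv's -L option may also be worth a look, though I'm not certain which versions accept a plain bytes-per-second figure), the behaviour is small enough to sketch yourself. A C# pass-through along these lines writes one byte and then sleeps for 1000/rate milliseconds; the invocation is illustrative and Thread.Sleep resolution limits accuracy at higher rates.

    using System;
    using System.IO;
    using System.Threading;

    class RateLimit
    {
        // Illustrative usage: cat example.txt | mono RateLimit.exe 100 > /dev/ttyUSB0
        static void Main(string[] args)
        {
            int bytesPerSecond = int.Parse(args[0]);
            int delayMs = 1000 / bytesPerSecond;       // 100 B/s -> 10 ms between bytes
            using (Stream input = Console.OpenStandardInput())
            using (Stream output = Console.OpenStandardOutput())
            {
                int b;
                while ((b = input.ReadByte()) != -1)
                {
                    output.WriteByte((byte)b);
                    output.Flush();                    // push each byte out immediately
                    Thread.Sleep(delayMs);
                }
            }
        }
    }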

    Read the article
