Search Results

Search found 22903 results on 917 pages for 'full length screenshots'.

  • Java RMI InitialContext: Equivalent of LocateRegistry.createRegistry(int) ?

    - by bguiz
    I am trying to do some pretty basic RMI: // Context namingContext = new InitialContext(); Registry reg = LocateRegistry.createRegistry(9999); for ( int i = 0; i < objs.length; i++ ) { int id = objs[i].getID(); // namingContext.bind( "rmi:CustomObj" + id , objs[i] ); reg.bind( "CustomObj" + id , objs[i] ); } That works without a hitch, but for future purposes, I need to use InitialContext. Context namingContext = new InitialContext(); for ( int i = 0; i < objs.length; i++ ) { int id = objs[i].getID(); namingContext.bind( "rmi:CustomObj" + id , objs[i] ); } But I cannot get this to work. I have started rmiregistry from the command line. Is there an equivalent of LocateRegistry.createRegistry(int)? Or some other way to start the RMI registry / registry used by InitialContext from inside my class? (Instead of the command line) Stack trace: javax.naming.CommunicationException [Root exception is java.rmi.ServerException: RemoteException occurred in server thread; nested exception is: java.rmi.UnmarshalException: error unmarshalling arguments; nested exception is: java.lang.ClassNotFoundException: bguiz.scratch.network.eg.Student] at com.sun.jndi.rmi.registry.RegistryContext.bind(RegistryContext.java:126) at com.sun.jndi.toolkit.url.GenericURLContext.bind(GenericURLContext.java:208) at javax.naming.InitialContext.bind(InitialContext.java:400) at bguiz.scratch.RMITest.main(RMITest.java:29) Caused by: java.rmi.ServerException: RemoteException occurred in server thread; nested exception is: java.rmi.UnmarshalException: error unmarshalling arguments; nested exception is: java.lang.ClassNotFoundException: bguiz.scratch.CustomObj at sun.rmi.server.UnicastServerRef.oldDispatch(UnicastServerRef.java:396) at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:250) ....(truncated) EDIT: I will delete my own question in a couple of days, as there seems to be no answer to this (I haven't been able to figure it out myself). Last call for any biters!

  • Improve speed of own debug visualizer for Delphi 2010

    - by netcodecz
    I wrote a Delphi debug visualizer for TDataSet that displays the values of the current row; source + screenshot: http://delphi.netcode.cz/text/tdataset-debug-visualizer.aspx . It works, but it is very slow. I did some optimization (in how the field names are fetched), but it still takes 10 seconds to show only 20 fields - very bad. The main problem seems to be the slow IOTAThread90.Evaluate used by the main code shown below; this call costs most of the time, and the line marked ** accounts for about 80% of it. FExpression is the name of the TDataSet in the code. procedure TDataSetViewerFrame.mFillData; var iCount: Integer; I: Integer; // sw: TStopwatch; s: string; begin // sw := TStopwatch.StartNew; iCount := StrToIntDef(Evaluate(FExpression+'.Fields.Count'), 0); for I := 0 to iCount - 1 do begin s:= s + Format('%s.Fields[%d].FieldName+'',''+', [FExpression, I]); // FFields.Add(Evaluate(Format('%s.Fields[%d].FieldName', [FExpression, I]))); FValues.Add(Evaluate(Format('%s.Fields[%d].Value', [FExpression, I]))); //** end; if s<> '' then Delete(s, length(s)-4, 5); s := Evaluate(s); s:= Copy(s, 2, Length(s) -2); FFields.CommaText := s; { sw.Stop; s := sw.Elapsed; Application.MessageBox(Pchar(s), '');} end; Now I have no idea how to improve the performance.

  • Using HAML with custom filters

    - by Guard
    Hi everybody. I feel quite excited about HAML and CoffeeScript and am working on a tutorial showing how to use them in a non-Rails environment. So, haml has an easy-to-use command-line utility: haml input.haml output.html. And, what is great, there exists a project (one of many forks: https://github.com/aussiegeek/coffee-haml-filter) aimed at providing a custom filter that converts CoffeeScript into JS inside HAML files. Unfortunately (or am I missing something?) haml doesn't allow specifying custom filters on the command line or with some configuration file. I (not being a Ruby fan or even knowing it well enough) managed to solve it (based on some clever suggestion somewhere on SO) with this helper script: haml.rb require 'rubygems' require 'active_support/core_ext/object/blank' require 'haml' require 'haml/filters/coffee' template = ARGV.length > 0 ? File.read(ARGV.shift) : STDIN.read haml_engine = Haml::Engine.new(template) file = ARGV.length > 0 ? File.open(ARGV.shift, 'w') : STDOUT file.write(haml_engine.render) file.close Which is quite straightforward, except for the requires at the beginning. Now, the questions are: 1) Should I really use it, or is there another way to have on-demand HAML to HTML compilation with custom filters? 2) What about HAML watch mode? It's great and convenient. I can, of course, create a polling script in Python that watches for directory changes and calls this .rb script, but it looks like a dirty solution.

  • Print raw data to a thermal-printer using .NET

    - by blauesocke
    I'm trying to print raw ASCII data to a thermal printer. I do this by using this code example: http://support.microsoft.com/kb/322091 but my printer always prints only one character, and not until I press the form feed button. If I print something from Notepad the printer will do a form feed automatically, but without printing any text. The printer is connected via USB over an lpt2usb adapter and Windows 7 uses the "Generic - Generic / Text Only" driver. Does anyone know what is going wrong? How is it possible to print some words and do some form feeds? Are there some control characters I have to send? And if yes: how do I send them? Edit 14.04.2010 21:51 My code (C#) looks like this: PrinterSettings s = new PrinterSettings(); s.PrinterName = "Generic / Text Only"; RawPrinterHelper.SendStringToPrinter(s.PrinterName, "Test"); This code will print a "T" after I press the form feed button (this little black button here: swissmania.ch/images/935-151.jpg - sorry, not enough reputation for two hyperlinks) Edit 15.04.2010 16:56 I'm now using the code from here: c-sharpcorner.com/UploadFile/johnodonell/PrintingDirectlytothePrinter11222005001207AM/PrintingDirectlytothePrinter.aspx I modified it a bit so that I can use the following code: byte[] toSend; // 10 = line feed // 13 carriage return/form feed toSend = new byte[1] { 13 }; PrintDirect.WritePrinter(lhPrinter, toSend, toSend.Length, ref pcWritten); Running this code has the same effect as pressing the form feed button; it works fine! But code like this still does not work: byte[] toSend; // 10 = line feed // 13 carriage return/form feed toSend = new byte[2] { 66, 67 }; PrintDirect.WritePrinter(lhPrinter, toSend, toSend.Length, ref pcWritten); This will print out just a "B", but I expect "BC", and after running any code I have to reconnect the USB cable to make it work again. Any ideas?
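
    A possible direction, sketched only (it reuses the RawPrinterHelper class from the KB 322091 sample linked above, so that class is assumed to be present): many line-mode thermal printers buffer characters until they receive a line feed (10) or a form feed (12), which would explain why nothing appears until the feed button is pressed. Terminating the payload with those control characters would look like this:

    ```csharp
    // Sketch: RawPrinterHelper is the helper class from the linked KB 322091 sample.
    // "\n" (LF, 10) tells the printer to flush its line buffer; "\f" (FF, 12) feeds the paper.
    using System.Drawing.Printing;

    class RawPrintSketch
    {
        static void Main()
        {
            PrinterSettings settings = new PrinterSettings();
            settings.PrinterName = "Generic / Text Only";

            // Text followed by explicit line feeds and a final form feed.
            string payload = "Test line 1\nTest line 2\n\f";

            RawPrinterHelper.SendStringToPrinter(settings.PrinterName, payload);
        }
    }
    ```

    The exact control codes vary with the printer's emulation (ESC/POS and friends), so the printer manual is the authority on anything beyond LF/FF.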

  • Data loss when downloading data from LDAP server

    - by Ricky D'Amelio
    Hi there. This question comes from a previous one I asked about handling NSData objects: http://stackoverflow.com/questions/2453785/converting-nsdata-to-an-nsstring-representation-is-failing. I have reached the point where I am taking an NSImage, turning it into NSData and uploading those data bytes to the LDAP server. I am doing this like so; //connected successfully to LDAP server above... struct berval photo_berval; struct berval *jpegPhoto_values[2]; photo_berval.bv_len = [photo length]; photo_berval.bv_val = [photo bytes]; jpegPhoto_values[0] = &photo_berval; jpegPhoto_values[1] = NULL; mod.mod_type = "jpegPhoto"; mod.mod_op = LDAP_MOD_REPLACE|LDAP_MOD_BVALUES; mod.mod_bvalues = jpegPhoto_values; mods[0] = &mod; mods[1] = NULL; //perform the modify operation rc = ldap_modify_ext_s(ld, givenModifyEntry, mods, NULL, NULL); That happens with no errors, and you can see a big blob of data when you're in the command line. My problem is, when I go to access the same data at a later stage, I am getting an image file back that's about 120 times smaller than the original image. //find the jpegPhoto attribute photoA = ldap_first_attribute(ld, photoE, &photoBer); while (strcasecmp(photoA, "jpegphoto") != 0) { photoA = ldap_next_attribute(ld, photoE, photoBer); } //get the value of the attribute if ((list_of_photos = ldap_get_values_len(ld, photoE, photoA)) != NULL) { //get the first JPEG photo_data = *list_of_photos[0]; selectedPictureData = [NSData dataWithBytes:&photo_data length:sizeof(photo_data)]; [selectedPictureData writeToFile:@"/Users/username/Desktop/Photo 2.jpg" atomically:YES]; NSLog (@"%@", selectedPictureData); Has anyone successfully done this before or can anyone see what I might be doing wrong? I appreciate anyone's help. Sorry to post so many questions! Ricky.

  • WPF MVVM Chart change axes

    - by c0uchm0nster
    I'm new to WPF and MVVM. I'm struggling to determine the best way to change the view of a chart. That is, initially a chart might have the axes X - ID, Y - Length; after the user changes the view (via a listbox, radio button, etc.) the chart would display X - Length, Y - ID; and after a third change by the user it might display new content: X - ID, Y - Quality. My initial thought was that the best way to do this would be to change the bindings themselves. But I don't know how to tell a control in XAML to bind using a Binding object in the ViewModel, or whether it's safe to change that binding at runtime. Then I thought maybe I could just have a generic Model that has members X and Y and populate them as needed in the ViewModel. My last thought was that I could have 3 different chart controls and just hide and show them as appropriate. What is the CORRECT/SUGGESTED way to do this in the MVVM pattern? Any code examples would be greatly appreciated. Thanks
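
    One way to realise the "generic Model with X and Y" idea from the question (a rough sketch, not an authoritative MVVM answer; the Item type and the three views are invented for illustration): keep a single Points collection plus axis labels in the ViewModel, bind the chart to those once, and re-project the data whenever the user picks a different view. Property-change notification for the labels is omitted for brevity.

    ```csharp
    // Sketch only: ChartPoint, Item and the three Show* projections are illustrative assumptions.
    using System;
    using System.Collections.Generic;
    using System.Collections.ObjectModel;

    public class ChartPoint
    {
        public double X { get; set; }
        public double Y { get; set; }
    }

    public class Item
    {
        public int Id { get; set; }
        public double Length { get; set; }
        public double Quality { get; set; }
    }

    public class ChartViewModel
    {
        private readonly List<Item> _items;

        public ObservableCollection<ChartPoint> Points { get; private set; }
        public string XLabel { get; private set; }   // raise PropertyChanged here in real code
        public string YLabel { get; private set; }

        public ChartViewModel(IEnumerable<Item> items)
        {
            _items = new List<Item>(items);
            Points = new ObservableCollection<ChartPoint>();
            ShowIdVersusLength();
        }

        public void ShowIdVersusLength()  { Repopulate("ID", "Length", i => i.Id, i => i.Length); }
        public void ShowLengthVersusId()  { Repopulate("Length", "ID", i => i.Length, i => i.Id); }
        public void ShowIdVersusQuality() { Repopulate("ID", "Quality", i => i.Id, i => i.Quality); }

        private void Repopulate(string xLabel, string yLabel, Func<Item, double> x, Func<Item, double> y)
        {
            XLabel = xLabel;
            YLabel = yLabel;
            Points.Clear();
            foreach (Item item in _items)
                Points.Add(new ChartPoint { X = x(item), Y = y(item) });
        }
    }
    ```

    The listbox or radio buttons then only call the Show* methods (typically through commands), and the chart stays bound to Points, XLabel and YLabel the whole time.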

  • Reading from serial port with Boost Asio?

    - by trikri
    Hi! I'm going to check for incoming messages (data packets) on the serial port, using Boost Asio. Each message will start with a header that is one byte long and specifies which type of message has been sent. Each type of message has its own length. The function I'm about to write should check for new incoming messages continually, and when it finds one it should read it, and then some other function should parse it. I thought that the code might look something like this: void check_for_incoming_messages() { boost::asio::streambuf response; boost::system::error_code error; std::string s1, s2; if (boost::asio::read(port, response, boost::asio::transfer_at_least(0), error)) { s1 = streambuf_to_string(response); int msg_code = s1[0]; if (msg_code < 0 || msg_code >= NUM_MESSAGES) { // Handle error, invalid message header } if (boost::asio::read(port, response, boost::asio::transfer_at_least(message_lengths[msg_code]-s1.length()), error)) { s2 = streambuf_to_string(response); // Handle the content of s1 and s2 } else if (error != boost::asio::error::eof) { throw boost::system::system_error(error); } } else if (error != boost::asio::error::eof) { throw boost::system::system_error(error); } } Is boost::asio::streambuf the right thing to use? And how do I extract the data from it so I can parse the message? I also want to know whether I need a separate thread that only calls this function, so that it gets called more often. Isn't there otherwise a risk of losing data between two calls to the function, because so much data comes in that it can't be stored in the serial port's memory? I'm using Qt as a widget toolkit and I don't really know how long it needs to process all its events.

  • How to Convert Non-English Characters to English Using JavaScript

    - by Adam Right
    I have a C# function which converts all non-English characters in a given text to their plain equivalents, as follows: public static string convertString(string phrase) { int maxLength = 100; string str = phrase.ToLower(); int i = str.IndexOfAny( new char[] { 'ş','ç','ö','ğ','ü','ı'}); //if any non-English char exists, replace it with the proper char if (i > -1) { StringBuilder outPut = new StringBuilder(str); outPut.Replace('ö', 'o'); outPut.Replace('ç', 'c'); outPut.Replace('ş', 's'); outPut.Replace('ı', 'i'); outPut.Replace('ğ', 'g'); outPut.Replace('ü', 'u'); str = outPut.ToString(); } // if there are other invalid chars, convert them into blank spaces str = Regex.Replace(str, @"[^a-z0-9\s-]", ""); // convert multiple spaces and hyphens into one space str = Regex.Replace(str, @"[\s-]+", " ").Trim(); // cut and trim string str = str.Substring(0, str.Length <= maxLength ? str.Length : maxLength).Trim(); // add hyphens str = Regex.Replace(str, @"\s", "-"); return str; } But I need to use the same function on the client side in JavaScript. Is it possible to convert the above function to JS? Waiting for all kinds of suggestions. Thanks in advance.

  • Encoding issue - 2nd band of ISO-8859-1 values do not get encoded?

    - by bstack
    Hello, I want to send the pound sign character i.e. '£' encoded as ISO-8859-1 across the wire. I perform this by doing the following: var _encoding = Encoding.GetEncoding("iso-8859-1"); var _requestContent = _encoding.GetBytes(requestContent); var _request = (HttpWebRequest)WebRequest.Create(target); _request.Headers[HttpRequestHeader.ContentEncoding] = _encoding.WebName; _request.Method = "POST"; _request.ContentType = "application/x-www-form-urlencoded; charset=iso-8859-1"; _request.ContentLength = _requestContent.Length; _requestStream = _request.GetRequestStream(); _requestStream.Write(_requestContent, 0, _requestContent.Length); _requestStream.Flush(); _requestStream.Close(); When I put a breakpoint at the target, I expect to receive the following: '%a3', however I receive '%u00a3' instead. ISO-8859-1 is divided into 2 groups of characters: (ref: http://en.wikipedia.org/wiki/ISO_8859-1) The lower range 20 to 7E - is where all characters seem to be encoded correctly The higher range A0 to FF - is where all characters seem to encode to their Unicode equivalent value As '£' is in higher range A0 to FF, it gets encoded to %u00a3. In fact when I use the first few characters of the higher range A0 to FF i.e. '¡¢£¤¥¦§¨©ª«¬®', I get '%u00a1%u00a2%u00a3%u00a4%u00a5%u00a6%u00a7%u00a8%u00a9%u00aa%u00ab%u00ac%u00ae'. This behaviour is consistent. The question I have is why do characters in the higher range A0 to FF get encoded to their unicode value - and not to their equivalent ISO-8859-1 value? Help would be greatly appreciated... Billy
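
    A guess worth checking, since the snippet above never URL-encodes requestContent (so the escaping must happen wherever that string is built, which is an assumption on my part): HttpUtility.UrlEncodeUnicode emits %uXXXX escapes for everything above 7F, while UrlEncode with an explicit ISO-8859-1 encoding emits the single-byte %a3 form.

    ```csharp
    // Speculative sketch comparing the two escaping behaviours; needs a reference to System.Web.dll.
    using System;
    using System.Text;
    using System.Web;

    class PoundSignEncoding
    {
        static void Main()
        {
            Encoding latin1 = Encoding.GetEncoding("iso-8859-1");

            Console.WriteLine(HttpUtility.UrlEncodeUnicode("£"));  // %u00a3 (Unicode escape)
            Console.WriteLine(HttpUtility.UrlEncode("£", latin1)); // %a3    (ISO-8859-1 byte)
        }
    }
    ```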

  • How can I use Performance Counters in C# to monitor 4 processes with the same name?

    - by Waffles
    I'm trying to create a performance counter that can monitor the processor time of applications, one of which is Google Chrome. However, I noticed that the processor time I get for Chrome is unnaturally low. I looked in Task Manager and realized the problem: Chrome has more than one process running under the exact same name, but each process has a different working set size and thus (I would believe) a different processor time. I tried doing this: // get all processes running under the same name, and make a performance counter // for each one. Process[] toImport = Process.GetProcessesByName("chrome"); instances = new PerformanceCounter[toImport.Length]; for (int i = 0; i < instances.Length; i++) { PerformanceCounter toPopulate = new PerformanceCounter ("Process", "% Processor Time", toImport[i].ProcessName, true); //Console.WriteLine(toImport[i].ProcessName + "#" + i); instances[i] = toPopulate; } But that doesn't seem to work at all - I just monitor the same process several times over. Can anyone tell me of a way to monitor separate processes with the same name?
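
    For what it's worth, a sketch of one common workaround: the "Process" counter category names multiple same-named processes "chrome", "chrome#1", "chrome#2" and so on, so the instance name has to be resolved from each process ID via the "ID Process" counter rather than taken from Process.ProcessName.

    ```csharp
    // Sketch: map each chrome process ID to its counter instance name, then build the counters.
    using System;
    using System.Collections.Generic;
    using System.Diagnostics;

    class ChromeCounters
    {
        static void Main()
        {
            List<PerformanceCounter> counters = new List<PerformanceCounter>();
            PerformanceCounterCategory category = new PerformanceCounterCategory("Process");
            string[] instanceNames = category.GetInstanceNames();

            foreach (Process process in Process.GetProcessesByName("chrome"))
            {
                foreach (string instance in instanceNames)
                {
                    if (!instance.StartsWith("chrome", StringComparison.OrdinalIgnoreCase))
                        continue;

                    using (PerformanceCounter idCounter =
                           new PerformanceCounter("Process", "ID Process", instance, true))
                    {
                        if ((int)idCounter.RawValue == process.Id)
                        {
                            counters.Add(new PerformanceCounter("Process", "% Processor Time", instance, true));
                            break;
                        }
                    }
                }
            }

            Console.WriteLine("Monitoring {0} chrome instances", counters.Count);
        }
    }
    ```

    Instance names shift when processes exit, so a long-running monitor would need to re-resolve them periodically.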

  • GZIP Java vs .NET

    - by Jim Jones
    I'm using the following Java code to compress/decompress byte[] to/from GZIP. First, text bytes to gzip bytes: public static byte[] fromByteToGByte(byte[] bytes) { ByteArrayOutputStream baos = null; try { ByteArrayInputStream bais = new ByteArrayInputStream(bytes); baos = new ByteArrayOutputStream(); GZIPOutputStream gzos = new GZIPOutputStream(baos); byte[] buffer = new byte[1024]; int len; while((len = bais.read(buffer)) >= 0) { gzos.write(buffer, 0, len); } gzos.close(); baos.close(); } catch (IOException e) { e.printStackTrace(); } return(baos.toByteArray()); } Then the method that goes the other way, compressed bytes to uncompressed bytes: public static byte[] fromGByteToByte(byte[] gbytes) { ByteArrayOutputStream baos = null; ByteArrayInputStream bais = new ByteArrayInputStream(gbytes); try { baos = new ByteArrayOutputStream(); GZIPInputStream gzis = new GZIPInputStream(bais); byte[] bytes = new byte[1024]; int len; while((len = gzis.read(bytes)) > 0) { baos.write(bytes, 0, len); } } catch (IOException e) { e.printStackTrace(); } return(baos.toByteArray()); } Do you think it has any effect that I'm not writing out to a gzip file? Also, I noticed that in the standard C# function, BitConverter reads the first four bytes and then the MemoryStream Write function is called with a start point of 4 and a length of input buffer length - 4. So does that affect the validity of the header? Jim
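
    For comparison, a minimal .NET-side sketch using System.IO.Compression.GZipStream (not the code the question refers to): the gzip stream itself carries no 4-byte length prefix, so samples that run BitConverter over the first four bytes are applying their own convention to pre-size the output buffer, which is separate from the validity of the gzip header.

    ```csharp
    // Sketch: plain byte[] <-> gzip byte[] round trip on the .NET side, no length prefix involved.
    using System.IO;
    using System.IO.Compression;

    public static class GZipBytes
    {
        public static byte[] Compress(byte[] bytes)
        {
            using (MemoryStream output = new MemoryStream())
            {
                using (GZipStream gzip = new GZipStream(output, CompressionMode.Compress))
                {
                    gzip.Write(bytes, 0, bytes.Length);
                }
                return output.ToArray();
            }
        }

        public static byte[] Decompress(byte[] gbytes)
        {
            using (MemoryStream input = new MemoryStream(gbytes))
            using (GZipStream gzip = new GZipStream(input, CompressionMode.Decompress))
            using (MemoryStream output = new MemoryStream())
            {
                byte[] buffer = new byte[1024];
                int len;
                while ((len = gzip.Read(buffer, 0, buffer.Length)) > 0)
                {
                    output.Write(buffer, 0, len);
                }
                return output.ToArray();
            }
        }
    }
    ```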

  • JavaMail: Could not connect to SMTP server.

    - by javacode
    Hi The following code causes an error. Please help me understand what's wrong. import javax.mail.*; import javax.mail.internet.*; import java.util.*; public class SendMail { public static void main(String [] args)throws MessagingException { SendMail sm=new SendMail(); sm.postMail(new String[]{"[email protected]"},"hi","hello","[email protected]"); } public void postMail( String recipients[ ], String subject, String message , String from) throws MessagingException { boolean debug = false; //Set the host smtp address Properties props = new Properties(); props.put("mail.smtp.host", "webmail.emailmyname.com"); // create some properties and get the default Session Session session = Session.getDefaultInstance(props, null); session.setDebug(debug); // create a message Message msg = new MimeMessage(session); // set the from and to address InternetAddress addressFrom = new InternetAddress(from); msg.setFrom(addressFrom); InternetAddress[] addressTo = new InternetAddress[recipients.length]; for (int i = 0; i < recipients.length; i++) { addressTo[i] = new InternetAddress(recipients[i]); } msg.setRecipients(Message.RecipientType.TO, addressTo); // Optional : You can also set your custom headers in the Email if you Want msg.addHeader("MyHeaderName", "myHeaderValue"); // Setting the Subject and Content Type msg.setSubject(subject); msg.setContent(message, "text/plain"); Transport.send(msg); } } Exception: <pre> com.sun.mail.smtp.SMTPSendFailedException: 450 smtpout04.dca.untd.com Authentication required at com.sun.mail.smtp.SMTPTransport.issueSendCommand(SMTPTransport.java:1829) at com.sun.mail.smtp.SMTPTransport.mailFrom(SMTPTransport.java:1368) at com.sun.mail.smtp.SMTPTransport.sendMessage(SMTPTransport.java:886) at javax.mail.Transport.send0(Transport.java:191) at javax.mail.Transport.send(Transport.java:120) at SendMail.postMail(SendMail.java:52) at SendMail.main(SendMail.java:10)

  • C# exception when calling stored procedure: ORA-01460 - unimplemented or unreasonable conversion requested

    - by Taylor L
    I'm trying to call a stored procedure using ADO .NET and I'm getting the following error: ORA-01460 - unimplemented or unreasonable conversion requested The stored procedure I'm trying to call has the following parameters: param1 IN VARCHAR2, param2 IN NUMBER, param3 IN VARCHAR2, param4 OUT NUMBER, param5 OUT NUMBER, param6 OUT NUMBER, param7 OUT VARCHAR2 Below is the C# code I'm using to call the stored procedure: OracleCommand command = connection.CreateCommand(); command.CommandType = CommandType.StoredProcedure; command.CommandText = "MY_PROC"; OracleParameter param1 = new OracleParameter() { ParameterName = "param1", Direction = ParameterDirection.Input, Value = p1, OracleDbType = OracleDbType.Varchar2, Size = p1.Length }; OracleParameter param2 = new OracleParameter() { ParameterName = "param2", Direction = ParameterDirection.Input, Value = p2, OracleDbType = OracleDbType.Decimal }; OracleParameter param3 = new OracleParameter() { ParameterName = "param3", Direction = ParameterDirection.Input, Value = p3, OracleDbType = OracleDbType.Varchar2, Size = p3.Length }; OracleParameter param4 = new OracleParameter() { ParameterName = "param4", Direction = ParameterDirection.Output, OracleDbType = OracleDbType.Decimal }; OracleParameter param5 = new OracleParameter() { ParameterName = "param5", Direction = ParameterDirection.Output, OracleDbType = OracleDbType.Decimal}; OracleParameter param6 = new OracleParameter() { ParameterName = "param6", Direction = ParameterDirection.Output, OracleDbType = OracleDbType.Decimal }; OracleParameter param7 = new OracleParameter() { ParameterName = "param7", Direction = ParameterDirection.Output, OracleDbType = OracleDbType.Varchar2, Size = 32767 }; command.Parameters.Add(param1); command.Parameters.Add(param2); command.Parameters.Add(param3); command.Parameters.Add(param4); command.Parameters.Add(param5); command.Parameters.Add(param6); command.Parameters.Add(param7); command.ExecuteNonQuery(); Any ideas what I'm doing wrong?

  • c#: how to read parts of a file? (DICOM)

    - by Xaisoft
    I would like to read a DICOM file in C#. I don't want to do anything fancy; for now I just would like to know how to read in the elements, but first I would actually like to know how to read the header to see if it is a valid DICOM file. It consists of Binary Data Elements. The first 128 bytes are unused (set to zero), followed by the string 'DICM'. This is followed by header information, which is organized into groups. A sample DICOM header: First 128 bytes: unused DICOM format. Followed by the characters 'D','I','C','M'. Followed by extra header information such as: 0002,0000, File Meta Elements Groups Len: 132 0002,0001, File Meta Info Version: 256 0002,0010, Transfer Syntax UID: 1.2.840.10008.1.2.1. 0008,0000, Identifying Group Length: 152 0008,0060, Modality: MR 0008,0070, Manufacturer: MRIcro In the above example, the header is organized into groups. The group 0002 hex is the file meta information group, which contains 3 elements: one defines the group length, one stores the file version and the third stores the transfer syntax. Questions: How do I read the header and verify that it is a DICOM file by checking for the 'D','I','C','M' characters after the 128-byte preamble? How do I continue to parse the file, reading the other parts of the data?
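
    A minimal sketch of the first step described above (skip the 128-byte preamble, then check the 'DICM' magic); parsing the data elements that follow is a separate, larger job.

    ```csharp
    // Sketch: validity check only - reads the preamble and the four magic bytes.
    using System.IO;
    using System.Text;

    public static class DicomSniffer
    {
        public static bool HasDicmMagic(string path)
        {
            using (FileStream stream = File.OpenRead(path))
            using (BinaryReader reader = new BinaryReader(stream))
            {
                if (stream.Length < 132)
                    return false;

                reader.ReadBytes(128);              // preamble, normally all zeros
                byte[] magic = reader.ReadBytes(4); // should be 'D','I','C','M'
                return Encoding.ASCII.GetString(magic) == "DICM";
            }
        }
    }
    ```

    After the magic bytes, the file meta group (0002,xxxx) is encoded with explicit VR little endian, so each element can be walked by reading the 2-byte group, the 2-byte element, the VR and the length in turn - but that part is only outlined here, not implemented.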

  • How can I optimize this C# code?

    - by Pandiya Chendur
    I have converted my DataTable to a JSON string using the following method... public string GetJSONString(DataTable Dt) { string[] StrDc = new string[Dt.Columns.Count]; string HeadStr = string.Empty; for (int i = 0; i < Dt.Columns.Count; i++) { StrDc[i] = Dt.Columns[i].Caption; HeadStr += "\"" + StrDc[i] + "\" : \"" + StrDc[i] + i.ToString() + "¾" + "\","; } HeadStr = HeadStr.Substring(0, HeadStr.Length - 1); StringBuilder Sb = new StringBuilder(); Sb.Append("{\"" + Dt.TableName + "\" : ["); for (int i = 0; i < Dt.Rows.Count; i++) { string TempStr = HeadStr; Sb.Append("{"); for (int j = 0; j < Dt.Columns.Count; j++) { if (Dt.Rows[i][j].ToString().Contains("'") == true) { Dt.Rows[i][j] = Dt.Rows[i][j].ToString().Replace("'", ""); } TempStr = TempStr.Replace(Dt.Columns[j] + j.ToString() + "¾", Dt.Rows[i][j].ToString()); } Sb.Append(TempStr + "},"); } Sb = new StringBuilder(Sb.ToString().Substring(0, Sb.ToString().Length - 1)); Sb.Append("]}"); return Sb.ToString(); } Is this fair enough, or is there still room for optimization to make it execute faster? Any suggestions?
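
    One direction to consider (a sketch, not a benchmarked answer): the per-row template and the repeated string Replace calls are the heaviest part here, and they also break if a value happens to contain the ¾ marker or a double quote. Building the row objects directly and letting a serializer handle escaping avoids both problems.

    ```csharp
    // Sketch: build row dictionaries and let JavaScriptSerializer (System.Web.Extensions) do the JSON.
    using System.Collections.Generic;
    using System.Data;
    using System.Web.Script.Serialization;

    public static class DataTableJson
    {
        public static string GetJsonString(DataTable table)
        {
            List<Dictionary<string, object>> rows = new List<Dictionary<string, object>>();

            foreach (DataRow row in table.Rows)
            {
                Dictionary<string, object> item = new Dictionary<string, object>();
                foreach (DataColumn column in table.Columns)
                    item[column.Caption] = row.IsNull(column) ? null : row[column];
                rows.Add(item);
            }

            Dictionary<string, object> payload = new Dictionary<string, object>();
            payload[table.TableName] = rows;
            return new JavaScriptSerializer().Serialize(payload);
        }
    }
    ```

    Json.NET would slot in the same way if an external dependency is acceptable; either way, timing both versions with a Stopwatch is the only real proof of a speed-up.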

  • Http Requests POST vs GET

    - by behrk2
    Hi everyone, I am using a lot of HTTP Requests in an application that I am writing which uses OAuth. Currently, I am sending my GET and POST requests the same way: HttpConnection connection = (HttpConnection) Connector.open(url + connectionParameters); connection.setRequestMethod(method); connection.setRequestProperty("WWW-Authenticate", "OAuth realm=api.netflix.com"); int responseCode = connection.getResponseCode(); And this is working fine. I am successfully POSTing and GETing. However, I am worried that I am not doing POST the right way. Do I need to include in the above code the following if-statement? if (method.equals("POST") && postData != null) { connection.setRequestProperty("Content-type", "application/x-www-form-urlencoded"); connection.setRequestProperty("Content-Length", Integer .toString(postData.length)); OutputStream requestOutput = connection.openOutputStream(); requestOutput.write(postData); requestOutput.close(); } If so, why? What's the difference? I would appreciate any feedback. Thanks!

  • Delphi - Is there a better way to get state abbreviations from state names

    - by Bill
    const states : array [0..49,0..1] of string = ( ('Alabama','AL'), ('Montana','MT'), ('Alaska','AK'), ('Nebraska','NE'), ('Arizona','AZ'), ('Nevada','NV'), ('Arkansas','AR'), ('New Hampshire','NH'), ('California','CA'), ('New Jersey','NJ'), ('Colorado','CO'), ('New Mexico','NM'), ('Connecticut','CT'), ('New York','NY'), ('Delaware','DE'), ('North Carolina','NC'), ('Florida','FL'), ('North Dakota','ND'), ('Georgia','GA'), ('Ohio','OH'), ('Hawaii','HI'), ('Oklahoma','OK'), ('Idaho','ID'), ('Oregon','OR'), ('Illinois','IL'), ('Pennsylvania','PA'), ('Indiana','IN'), ('Rhode Island','RI'), ('Iowa','IA'), ('South Carolin','SC'), ('Kansas','KS'), ('South Dakota','SD'), ('Kentucky','KY'), ('Tennessee','TN'), ('Louisiana','LA'), ('Texas','TX'), ('Maine','ME'), ('Utah','UT'), ('Maryland','MD'), ('Vermont','VT'), ('Massachusetts','MA'), ('Virginia','VA'), ('Michigan','MI'), ('Washington','WA'), ('Minnesota','MN'), ('West Virginia','WV'), ('Mississippi','MS'), ('Wisconsin','WI'), ('Missouri','MO'), ('Wyoming','WY') ); function getabb(state:string):string; var I:integer; begin for I := 0 to length(states) -1 do if lowercase(state) = lowercase(states[I,0]) then begin result:= states[I,1]; end; end; function getstate(state:string):string; var I:integer; begin for I := 0 to length(states) -1 do if lowercase(state) = lowercase(states[I,1]) then begin result:= states[I,0]; end; end; procedure TForm2.Button1Click(Sender: TObject); begin edit1.Text:=getabb(edit1.Text); end; procedure TForm2.Button2Click(Sender: TObject); begin edit1.Text:=getstate(edit1.Text); end; end. Is there a bette way to do this?

  • Can I include a NSUserDefault password test in AppDelegate to load a loginView?

    - by Michael Robinson
    I have a name and password in NSUserDefaults for login. I want to place a test in my AppDelegate.m class to test for presence and load a login/signup loginView.xib modally if there is no password or name stored in the app. Here is the pulling of the defaults: -(void)refreshFields { NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults]; usernameLabel.text = [defaults objectForKey:kUsernameKey]; passwordLabel.text = [defaults objectForKey:kPasswordKey]; Here is the tabcontroller loading part: - (void)applicationDidFinishLaunching:(UIApplication *)application { firstTab = [[FirstTab alloc] initWithStyle:UITableViewStylePlain]; UINavigationController *firstNavigationController = [[UINavigationController alloc] initWithRootViewController:firstTab]; [firstTab release]; secondTab = // EDITED FOR SPACE thirdTab = // EDITED FOR SPACE tabBarController = [[UITabBarController alloc] init]; tabBarController.viewControllers = [NSArray arrayWithObjects:firstNavigationController, secondNavigationController, thirdNavigationController, nil]; [window addSubview:tabBarController.view]; [firstNavigationController release]; [secondNavigationController release]; [thirdNavigationController release]; [self logout]; [window makeKeyAndVisible]; Here is where the loginView.xib loads automatically: - (void)logout { loginViewController = [[LoginViewController alloc] initWithNibName:@"LoginView" bundle:nil]; UINavigationController *loginNavigationController = [[UINavigationController alloc] initWithRootViewController:loginViewController]; [loginViewController release]; [tabBarController presentModalViewController:loginNavigationController animated:YES]; [loginNavigationController release]; } I want to replace the above autoload with a test similar to below (that works) using IF-ELSE - (void)logout { if ([usernameLabel.text length] == 0 || [passwordLabel.text length] == 0) { loginViewController = [[LoginViewController alloc] initWithNibName:@"LoginView" bundle:nil]; UINavigationController *loginNavigationController = [[UINavigationController alloc] initWithRootViewController:loginViewController]; [loginViewController release]; [tabBarController presentModalViewController:loginNavigationController animated:YES]; [loginNavigationController release]; }else { [window addSubview:tabBarController.view];} Thanks in advance, I'm totally lost on this.

  • Integrating twitpic OAuth for iPhone.

    - by asadqamber
    How can I integrate twitpic API with OAuth for posting an image from iPhone? Any help or tutorial? Currently I am doing... NSURL *twitpicURL = [NSURL URLWithString:@"http://api.twitpic.com/2/upload.format"]; theRequest = [NSMutableURLRequest requestWithURL:twitpicURL]; [theRequest setHTTPMethod:@"POST"]; // Set the params NSString *message = theMessage; [theRequest addValue:@"http://api.twitter.com/" forHTTPHeaderField:@"OAuth realm"]; [theRequest addValue:TWITPIC_API_KEY forHTTPHeaderField:@"oauth_consumer_key"]; [theRequest addValue:@"HMAC-SHA1" forHTTPHeaderField:@"oauth_signature_method"]; [theRequest addValue:USER_OAUTH_TOKEN forHTTPHeaderField:@"oauth_token"]; [theRequest addValue:USER_OAUTH_SECRET forHTTPHeaderField:@"oauth_secret"]; [theRequest addValue: @"1272325550" forHTTPHeaderField:@"oauth_timestamp"]; [theRequest addValue:nil forHTTPHeaderField:@"oauth_nonce"]; [theRequest addValue:@"1.0" forHTTPHeaderField:@"oauth_version"]; [theRequest addValue:nil forHTTPHeaderField:@"oauth_signature"]; NSMutableData *postBody = [NSMutableData data]; [postBody appendData:[[NSString stringWithFormat:@"Content-Disposition: form-data; name=\"source\"\r\n\r\n"] dataUsingEncoding:NSUTF8StringEncoding]]; [postBody appendData:[[NSString stringWithFormat:@"lighttable"] dataUsingEncoding:NSUTF8StringEncoding]]; // Message [postBody appendData:[[NSString stringWithFormat:@"Content-Disposition: form-data; name=\"message\"\r\n\r\n%@", message]dataUsingEncoding:NSUTF8StringEncoding]]; // Media [postBody appendData:[[NSString stringWithFormat:@"Content-Disposition: form-data; name=\"media\"; filename=\"%@\"\r\n", @"doc_twitpic_image.jpg"] dataUsingEncoding:NSUTF8StringEncoding]]; [postBody appendData:[[NSString stringWithFormat:@"Content-Type: image/jpg\r\n"] dataUsingEncoding:NSUTF8StringEncoding]]; // data as JPEG [postBody appendData:[[NSString stringWithFormat:@"Content-Transfer-Encoding: binary\r\n\r\n"] dataUsingEncoding:NSUTF8StringEncoding]]; [postBody appendData:[NSData dataWithData:image]]; [theRequest setHTTPBody:postBody]; [theRequest setValue:[NSString stringWithFormat:@"%d", [postBody length]] forHTTPHeaderField:@"Content-Length"]; theConnection = [[NSURLConnection alloc] initWithRequest:theRequest delegate:self];

  • C# TCP Async EndReceive() throws InvalidOperationException ONLY on Windows XP 32-bit

    - by James Farmer
    I have a simple C# async client using a .NET socket that waits for timed messages from a local Java server used for automating commands. The messages come in asynchronously and are written to a ring buffer. This implementation seems to work fine on Windows Vista/7/8 and OSX, but will randomly throw this exception while it's receiving a message from the local Java server: Unhandled Exception: System.InvalidOperationException: EndReceive can only be called once for each asynchronous operation. at System.Net.Sockets.Socket.EndReceive(IAsyncResult asyncResult, SocketError& errorCode) at System.Net.Sockets.Socket.EndReceive(IAsyncResult asyncResult) at SocketTest.Controller.RecvAsyncCallback(IAsyncResult ar) at System.Net.LazyAsyncResult.Complete(IntPtr userToken) ... I've looked online for this error, but have found nothing really helpful. This is the code where it seems to break: /// <summary> /// Callback to receive socket data /// </summary> /// <param name="ar">AsyncResult to pass to End</param> private void RecvAsyncCallback(IAsyncResult ar) { // The exception will randomly happen on this call int bytes = _socket.EndReceive(_recvAsyncResult); // check for connection closed if (bytes == 0) { return; } _ringBuffer.Write(_buffer, 0, bytes); // Checks buffer CheckBuffer(); _recvAsyncResult = _sock.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None, RecvAsyncCallback, null); } The error doesn't happen at any particular moment, except that it is always in the middle of receiving a message. The message itself can be any length, and the exception can happen right away or sometimes only after a minute of perfect communication. I'm pretty new to sockets and network communication, and I feel I might be missing something here. I've tested on at least 8 different computers, and the only similarity among the computers that throw this exception is that their OS is Windows XP 32-bit.
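
    A hedged guess rather than a confirmed XP-specific bug: EndReceive is called with the stored _recvAsyncResult field instead of the IAsyncResult handed to the callback, so if a callback runs before the field has been updated (or two receives ever overlap), the same operation can be ended twice, and the timing may simply differ on XP. A sketch of the callback using the argument instead, with names mirroring the snippet above and the connection setup omitted:

    ```csharp
    // Sketch: end the operation that invoked this callback, not a shared field.
    using System;
    using System.Net.Sockets;

    public class Receiver
    {
        private readonly Socket _socket;                  // assumed already connected elsewhere
        private readonly byte[] _buffer = new byte[4096];

        public Receiver(Socket connectedSocket)
        {
            _socket = connectedSocket;
            _socket.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None, RecvAsyncCallback, null);
        }

        private void RecvAsyncCallback(IAsyncResult ar)
        {
            int bytes = _socket.EndReceive(ar);           // the IAsyncResult for *this* operation
            if (bytes == 0)
                return;                                   // remote side closed the connection

            HandleBytes(_buffer, bytes);                  // stand-in for the ring-buffer write + CheckBuffer()

            _socket.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None, RecvAsyncCallback, null);
        }

        private void HandleBytes(byte[] data, int count)
        {
            Console.WriteLine("received {0} bytes", count);
        }
    }
    ```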

  • castle monorail unit test rendertext

    - by MikeWyatt
    I'm doing some maintenance on an older web application written in Monorail v1.0.3. I want to unit test an action that uses RenderText(). How do I extract the content in my test? Reading from controller.Response.OutputStream doesn't work, since the response stream is either not setup properly in PrepareController(), or is closed in RenderText(). Example Action public DeleteFoo( int id ) { var success= false; var foo = Service.Get<Foo>( id ); if( foo != null && CurrentUser.IsInRole( "CanDeleteFoo" ) ) { Service.Delete<Foo>( id ); success = true; } CancelView(); RenderText( "{ success: " + success + " }" ); } Example Test (using Moq) [Test] public void DeleteFoo() { var controller = new MyController (); PrepareController ( controller ); var foo = new Foo { Id = 123 }; var mockService = new Mock < Service > (); mockService.Setup ( s => s.Get<Foo> ( foo.Id ) ).Returns ( foo ); controller.Service = mockService.Object; controller.DeleteTicket ( ticket.Id ); mockService.Verify ( s => s.Delete<Foo> ( foo.Id ) ); Assert.AreEqual ( "{success:true}", GetResponse ( Response ) ); } // response.OutputStream.Seek throws an "System.ObjectDisposedException: Cannot access a closed Stream." exception private static string GetResponse( IResponse response ) { response.OutputStream.Seek ( 0, SeekOrigin.Begin ); var buffer = new byte[response.OutputStream.Length]; response.OutputStream.Read ( buffer, 0, buffer.Length ); return Encoding.ASCII.GetString ( buffer ); }

  • Having trouble adding jquery to charisma

    - by kira423
    I am trying to add some jQuery to this Charisma admin panel and have been having nothing but trouble. I am trying to add it to the charisma.js file. This is what I am adding: // add multiple select / deselect functionality $("#selectall").click(function () { $('.checkbox').attr('checked', this.checked); }); // if all checkbox are selected, check the selectall checkbox // and viceversa $(".checkbox").click(function(){ if($(".checkbox").length == $(".checkbox:checked").length) { $("#selectall").attr("checked", "checked"); } else { $("#selectall").removeAttr("checked"); } }); I have tried this code wrapped in the anonymous $(function(){ as well as without it, and I have inserted it into both $(document).ready(function(){ and docReady(), as well as in the head of my code, but I am not really "trained" in jQuery, so I am a bit lost as to what I am doing wrong. My class and div tags are correct for the code, as I have checked them several times for misspellings. I am not sure what I am doing wrong. Is there a better "check all" snippet I can use here, or am I just putting this all in the wrong place? UPDATE: I think the actual code may be working, but I cannot tell: after I click the select-all box it seems that I have to click the other boxes 3 times to get the check mark back into the box, so it seems like it is having trouble actually showing that the box is marked. This may be a problem with styling, but I don't know how to correct it.

  • longest string in texts

    - by davit-datuashvili
    I have the following code in Java: import java.util.*; public class longest{ public static void main(String[] args){ int t=0;int m=0;int token1, token2; String words[]=new String[10]; String word[]=new String[10]; String common[]=new String[10]; String text="saqartvelo gabrwyindeba da gadzlierdeba aucileblad "; String text1="saqartvelo gamtliandeba da gadzlierdeba aucileblad"; StringTokenizer st=new StringTokenizer(text); StringTokenizer st1=new StringTokenizer(text1); token1=st.countTokens(); token2=st1.countTokens(); while (st.hasMoreTokens()){ words[t]=st.nextToken(); t++; } while (st1.hasMoreTokens()){ word[m]=st1.nextToken(); m++; } for (int k=0;k<token1;k++){ for (int f=0;f<token2;f++){ if (words[f].compareTo(word[f])==0){ common[f]=words[f]; } } } while (i<common.length){ System.out.println(common[i]); i++; } } } I want the common array to contain the elements which appear in both texts, i.e. the words saqartvelo (georgia in English), da (and in English), gadzlierdeba (will be stronger) and aucileblad (sure), and then among these words I want to find the string with the maximum length. But it does not work; more precisely, it shows me these words but also many null elements. How do I correct it?

  • cURL requests changed

    - by Andriy Mytroshyn
    I've started working with the cURL library; before using it I compiled the library. I send a request and have a problem. The C++ code that I use to work with cURL: CURL *curl=NULL; CURLcode res; struct curl_slist *headers=NULL; // init to NULL is important curl_slist_append(headers, "POST /oauth/authorize HTTP/1.1"); curl_slist_append(headers, "Host: sp-money.yandex.ru"); curl_slist_append(headers, "Content-Type: application/x-www-form-urlencoded"); curl_slist_append(headers, "charset: UTF-8"); curl_slist_append(headers, "Content-Length: 12345"); curl = curl_easy_init(); if(!curl) return 0; curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers); curl_easy_setopt(curl, CURLOPT_URL, "sp-money.yandex.ru"); curl_easy_setopt(curl, CURLOPT_PROXY, "127.0.0.1:8888"); if( curl_easy_perform(curl)!=CURLE_OK) return 1; I used a proxy, Fiddler2, to check what data is sent to the server. When I check the sent data I get this result: POST HTTP://sp-money.yandex.ru/ HTTP/1.1 Host: sp-money.yandex.ru Accept: */* Connection: Keep-Alive Content-Length: 151 Content-Type: application/x-www-form-urlencoded I also checked this data using Wireshark; the result is the same. Do you know why cURL wrote POST HTTP://sp-money.yandex.ru/ HTTP/1.1 in the first line when I sent POST /oauth/authorize HTTP/1.1? I'm using VS 2010, and I don't use a framework.

  • iPhone POST to PHP failing

    - by Alexander
    I have this code here: NSString *post = [NSString stringWithFormat:@"deviceIdentifier=%@&deviceToken=%@",deviceIdentifier,deviceToken]; NSData *postData = [post dataUsingEncoding:NSASCIIStringEncoding allowLossyConversion:NO]; NSString *postLength = [NSString stringWithFormat:@"%d", [postData length]]; NSMutableURLRequest *request = [[[NSMutableURLRequest alloc] init] autorelease]; [request setURL:[NSURL URLWithString:@"http://website.com/RegisterScript.php"]]; [request setHTTPMethod:@"POST"]; [request setValue:postLength forHTTPHeaderField:@"Content-Length"]; [request setValue:@"application/x-www-form-urlencoded" forHTTPHeaderField:@"Content-Type"]; [request setValue:@"MyApp-V1.0" forHTTPHeaderField:@"User-Agent"]; [request setHTTPBody:postData]; NSData *urlData = [NSURLConnection sendSynchronousRequest:request returningResponse:nil error:nil]; NSString *response = [[NSString alloc] initWithData:urlData encoding:NSASCIIStringEncoding]; which should be sending data to my PHP server to register the device into our database, but for some of my users no POST data is being sent. I've been told that this line: NSData *postData = [post dataUsingEncoding:NSASCIIStringEncoding allowLossyConversion:NO]; may be causing the problem. Any thoughts on why this script would sometimes not send the POST data? On the server I got the packet trace for a failed send and the Content-lenght came up as 0 so no data was sent at all. Thanks for any help
