Search Results

Search found 23265 results on 931 pages for 'justin case'.


  • Retrieving/Updating Entity Framework POCO objects that already exist in the ObjectContext

    - by jslatts
    I have a project using Entity Framework 4.0 with POCOs (data is stored in SQL DB, lazyloading is enabled) as follows: public class ParentObject { public int ID {get; set;} public virtual List<ChildObject> children {get; set;} } public class ChildObject { public int ID {get; set;} public int ChildRoleID {get; set;} public int ParentID {get; set;} public virtual ParentObject Parent {get; set;} public virtual ChildRoleObject ChildRole {get; set;} } public class ChildRoleObject { public int ID {get; set;} public string Name {get; set;} public virtual List<ChildObject> children {get; set;} } I want to create a new ChildObject, assign it a role, then add it to an existing ParentObject. Afterwards, I want to send the new ChildObject to the caller. The code below works fine until it tries to get the object back from the database. The newChildObjectInstance only has the ChildRoleID set and does not contain a reference to the actual ChildRole object. I try and pull the new instance back out of the database in order to populate the ChildRole property. Unfortunately, in this case, instead of creating a new instance of ChildObject and assigning it to retreivedChildObject, EF finds the existing ChildObject in the context and returns the in-memory instance, with a null ChildRole property. public ChildObject CreateNewChild(int id, int roleID) { SomeObjectContext myRepository = new SomeObjectContext(); ParentObject parentObjectInstance = myRepository.GetParentObject(id); ChildObject newChildObjectInstance = new ChildObject() { ParentObject = parentObjectInstance, ParentID = parentObjectInstance.ID, ChildRoleID = roleID }; parentObjectInstance.children.Add(newChildObjectInstance); myRepository.Save(); ChildObject retreivedChildObject = myRepository.GetChildObject(newChildObjectInstance.ID); string assignedRoleName = retreivedChildObject.ChildRole.Name; //Throws exception, ChildRole is null return retreivedChildObject; } I have tried setting MergeOptions to Overwrite, calling ObjectContext.Refresh() and ObjectContext.DetectChanges() to no avail... I suspect this is related to the proxy objects that EF injects when working with POCOs. Has anyone run into this issue before? If so, what was the solution?

    Read the article

  • ASP.NET MVC Page - Viewstate for Confirm email field is getting erased on Registration Page if validation fails

    - by Rita
    Hi, I have a Registration page with the following fields: Email, Confirm Email, Password and Confirm Password. On Register button click, the model is posted to the server and validated, and if that email is already registered, it displays the validation error message "User already Exists. Please Login or Register with a different email ID". While this validation error message is displayed, I am losing the value of the "Confirm Email" field, so the user has to re-enter it, and I want to avoid this. Note that I don't have a confirm_Email field in my Model. Is there something special that has to be done to retain the Confirm Email value on the page even in case of validation failure? Appreciate your responses. Here is my code: <% using (Html.BeginForm()) {%> <%= Html.ValidationSummary(false) %> <fieldset> <div class="cssform"> <p> <%= Html.LabelFor(model => model.Email)%><em>*</em> <%= Html.TextBoxFor(model => model.Email, new { @class = "required email" })%> <%= Html.ValidationMessageFor(model => model.Email)%> </p> <p> <%= Html.Label("Confirm email")%><em>*</em> <%= Html.TextBox("confirm_email")%> <%= Html.ValidationMessage("confirm_email") %> </p> <p> <%= Html.Label("Password")%><em>*</em> <%= Html.Password("Password", null, new { @class = "required" })%> <%= Html.ValidationMessage("Password")%><br /> (Note: Password should be minimum 6 characters) </p> <p> <%= Html.Label("Confirm Password")%><em>*</em> <%= Html.Password("confirm_password")%> <%= Html.ValidationMessage("confirm_password") %> </p><hr /> <p>Note: Confirmation email will be sent to the email address listed above.</p> </fieldset> <% } %>

    Read the article

  • Sockets receiving null (Android)

    - by Henrik
    I have an Android app that communicates with a server (written in Java). Between these two parts I have established a Socket connection and want to send data. The problem I am having is that sometimes, for some users, the information that reaches the server is null. This works (for all phones, all users): Server: int a = Integer.parseInt(in.readLine()); int b = Integer.parseInt(in.readLine()); int c = Integer.parseInt(in.readLine()); int d = Integer.parseInt(in.readLine()); String checksum = in.readLine(); String model = in.readLine(); String device = in.readLine(); String name = in.readLine(); Client: out.println(a); out.println(b); out.println(c); out.println(d); out.println(hash); out.println(Build.MODEL); out.println(Build.DEVICE); String name = fixName(); out.print(name); out.flush(); This does not work (for some users): Server: int a = Integer.parseInt(in.readLine()); String checksum = in.readLine(); String model = in.readLine(); String device = in.readLine(); String name = in.readLine(); String msg = in.readLine(); int version = -1; String test = "hej"; try{ test = in.readLine(); version = Integer.parseInt(test); }catch(Exception e){ e.printStackTrace(); } Client: out.println(a); out.println(hash); out.println(Build.MODEL); out.println(Build.DEVICE); String name = fixName(); if(name == null) name = "John Doe"; out.println(name); String msg = fixMsg(); if(msg == null) name = "nada"; out.println(msg); out.println(curversion); out.flush(); Sometimes, in the second case, the name, msg, and version (the string test) are null at the server side. The catch is triggered because test is null. curversion and a are ints; the rest are strings. Any ideas?
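
    A note on the failure mode, for context: BufferedReader.readLine() returns null once the peer closes the stream, and if fixName() or fixMsg() ever returns a string containing an embedded newline, every field after it shifts by one line, which matches the "later fields are null" symptom. A minimal sketch of length-prefixed framing that avoids newline-delimited fields entirely (host, port and values here are hypothetical):

        import java.io.*;
        import java.net.Socket;

        public class FramedClient {
            public static void main(String[] args) throws IOException {
                try (Socket socket = new Socket("example.com", 9000);
                     DataOutputStream out = new DataOutputStream(
                             new BufferedOutputStream(socket.getOutputStream()))) {
                    out.writeInt(42);               // ints travel as fixed 4-byte values
                    out.writeUTF("some-checksum");  // writeUTF length-prefixes each string,
                    out.writeUTF("some-model");     // so embedded newlines cannot shift frames
                    out.flush();
                }
            }
        }

    The server would read symmetrically with DataInputStream.readInt() and readUTF(), so a missing field fails fast with an EOFException instead of silently yielding null.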

    Read the article

  • Is it possible to reliably auto-decode user files to Unicode? [C#]

    - by NVRAM
    I have a web application that allows users to upload their content for processing. The processing engine expects UTF8 (and I'm composing XML from multiple users' files), so I need to ensure that I can properly decode the uploaded files. Since I'd be surprised if any of my users even knew their files were encoded, I have very little hope they'd be able to correctly specify the encoding (decoder) to use. And so, my application is left with the task of detecting the encoding before decoding. This seems like such a universal problem, I'm surprised not to find either a framework capability or a general recipe for the solution. Can it be I'm not searching with meaningful search terms? I've implemented BOM-aware detection (http://en.wikipedia.org/wiki/Byte_order_mark) but I'm not sure how often files will be uploaded without a BOM to indicate encoding, and this isn't useful for most non-UTF files. My questions boil down to: Is BOM-aware detection sufficient for the vast majority of files? In the case where BOM detection fails, is it possible to try different decoders and determine if they are "valid"? (My attempts indicate the answer is "no.") Under what circumstances will a "valid" file fail with the C# encoder/decoder framework? Is there a repository anywhere that has a multitude of files with various encodings to use for testing? While I'm specifically asking about C#/.NET, I'd like to know the answer for Java, Python and other languages for the next time I have to do this. So far I've found: A "valid" UTF-16 file with Ctrl-S characters has caused encoding to UTF-8 to throw an exception (Illegal character?) (That was an XML encoding exception.) Decoding a valid UTF-16 file with UTF-8 succeeds but gives text with null characters. Huh? Currently, I only expect UTF-8, UTF-16 and probably ISO-8859-1 files, but I want the solution to be extensible if possible. My existing set of input files isn't nearly broad enough to uncover all the problems that will occur with live files. Although the files I'm trying to decode are "text" I think they are often created with methods that leave garbage characters in the files. Hence "valid" files may not be "pure". Oh joy. Thanks.
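
    Since the post explicitly asks about Java as well: BOM-aware detection there amounts to comparing the first bytes of the stream. A minimal sketch, with the no-BOM fallback deliberately left open:

        import java.nio.charset.Charset;
        import java.nio.charset.StandardCharsets;

        public class BomSniffer {
            // Returns the charset implied by a BOM, or null when no BOM is present.
            static Charset fromBom(byte[] b) {
                if (b.length >= 3 && (b[0] & 0xFF) == 0xEF
                        && (b[1] & 0xFF) == 0xBB && (b[2] & 0xFF) == 0xBF)
                    return StandardCharsets.UTF_8;
                if (b.length >= 2 && (b[0] & 0xFF) == 0xFE && (b[1] & 0xFF) == 0xFF)
                    return StandardCharsets.UTF_16BE;
                if (b.length >= 2 && (b[0] & 0xFF) == 0xFF && (b[1] & 0xFF) == 0xFE)
                    return StandardCharsets.UTF_16LE;
                return null; // no BOM: fall back to a statistical detector or a configured default
            }
        }

    On the "try decoders and see if they are valid" sub-question: in Java, a CharsetDecoder configured with CodingErrorAction.REPORT throws on malformed input instead of substituting replacement characters, which gives a usable (if not conclusive) validity test; ISO-8859-1 accepts every byte sequence, so it can only ever be the last candidate tried.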

    Read the article

  • Generate a list of file names based on month and year arithmetic

    - by MacUsers
    How can I list the numbers 01 to 12 (one for each of the 12 months) in such a way that the current month always comes last and the oldest one comes first? In other words, if the number is greater than the current month, it's from the previous year. e.g. 02 is Feb, 2011 (the current month right now), 03 is March, 2010 and 09 is Sep, 2010 but 01 is Jan, 2011. In this case, I'd like to have [09, 03, 01, 02]. This is what I'm doing to determine the year: for inFile in os.listdir('.'): if inFile.isdigit(): month = months[int(inFile)] if int(inFile) <= int(strftime("%m")): year = strftime("%Y") else: year = int(strftime("%Y"))-1 mnYear = month + ", " + str(year) I don't have a clue what to do next. What should I do here? Update: I think I'd better upload the entire script for better understanding. #!/usr/bin/env python import os, sys from time import strftime from calendar import month_abbr vGroup = {} vo = "group_lhcb" SI00_fig = float(2.478) months = tuple(month_abbr) print "\n%-12s\t%10s\t%8s\t%10s" % ('VOs','CPU-time','CPU-time','kSI2K-hrs') print "%-12s\t%10s\t%8s\t%10s" % ('','(in Sec)','(in Hrs)','(*2.478)') print "=" * 58 for inFile in os.listdir('.'): if inFile.isdigit(): readFile = open(inFile, 'r') lines = readFile.readlines() readFile.close() month = months[int(inFile)] if int(inFile) <= int(strftime("%m")): year = strftime("%Y") else: year = int(strftime("%Y"))-1 mnYear = month + ", " + str(year) for line in lines[2:]: if line.find(vo)==0: g, i = line.split() s = vGroup.get(g, 0) vGroup[g] = s + int(i) sumHrs = ((vGroup[g]/60)/60) sumSi2k = sumHrs*SI00_fig print "%-12s\t%10s\t%8s\t%10.2f" % (mnYear,vGroup[g],sumHrs,sumSi2k) del vGroup[g] When I run the script, I get this: [root@serv07 usage]# ./test.py VOs CPU-time CPU-time kSI2K-hrs (in Sec) (in Hrs) (*2.478) ================================================== Jan, 2011 211201372 58667 145376.83 Dec, 2010 5064337 1406 3484.07 Feb, 2011 17506049 4862 12048.04 Sep, 2010 210874275 58576 145151.33 As I said in the original post, I'd like the result to be in this order instead: Sep, 2010 210874275 58576 145151.33 Dec, 2010 5064337 1406 3484.07 Jan, 2011 211201372 58667 145376.83 Feb, 2011 17506049 4862 12048.04 The files in the source directory read like this: [root@serv07 usage]# ls -l total 3632 -rw-r--r-- 1 root root 1144972 Feb 9 19:23 01 -rw-r--r-- 1 root root 556630 Feb 13 09:11 02 -rw-r--r-- 1 root root 443782 Feb 11 17:23 02.bak -rw-r--r-- 1 root root 1144556 Feb 14 09:30 09 -rw-r--r-- 1 root root 370822 Feb 9 19:24 12 Did I give a better picture now? Sorry for not being very clear in the first place. Cheers!! Update @Mark Ransom This is the result from Mark's suggestion: [root@serv07 usage]# ./test.py VOs CPU-time CPU-time kSI2K-hrs (in Sec) (in Hrs) (*2.478) ========================================================== Dec, 2010 5064337 1406 3484.07 Sep, 2010 210874275 58576 145151.33 Feb, 2011 17506049 4862 12048.04 Jan, 2011 211201372 58667 145376.83 As I said before, I'm looking for the result to be printed in this order: Sep, 2010 - Dec, 2010 - Jan, 2011 - Feb, 2011 Cheers!!
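
    For what it's worth, the ordering the post asks for reduces to a single sort key: any month numerically greater than the current one belongs to last year, so (month - currentMonth - 1) mod 12 ranks files oldest-first with the current month last. A sketch of the key in Java (the Python script above could use the same expression in a key= function for sorted()); currentMonth is hard-coded here only for illustration:

        import java.util.ArrayList;
        import java.util.Comparator;
        import java.util.List;

        public class MonthOrder {
            public static void main(String[] args) {
                int currentMonth = 2; // February, as in the post
                List<Integer> months = new ArrayList<>(List.of(1, 2, 9, 12));
                // Months "after" the current one are from last year, so they sort first.
                months.sort(Comparator.comparingInt(m -> Math.floorMod(m - currentMonth - 1, 12)));
                System.out.println(months); // [9, 12, 1, 2] -> Sep, Dec, Jan, Feb
            }
        }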

    Read the article

  • Ninject with Object Initializers and LINQ

    - by Alexander Kahoun
    I'm new to Ninject so what I'm trying may not even be possible, but I wanted to ask. I free-handed the below so there may be typos. Let's say I have an interface: public interface IPerson { string FirstName { get; set; } string LastName { get; set;} string GetFullName(); } And a concrete: public class Person : IPerson { public string FirstName { get; set; } public string LastName { get; set; } public string GetFullName() { return String.Concat(FirstName, " ", LastName); } } What I'm used to doing is something like this when I'm retrieving data from arrays or xml: public IEnumerable<IPerson> GetPeople(string xml) { XElement persons = XElement.Parse(xml); IEnumerable<IPerson> people = ( from person in persons.Descendants("person") select new Person { FirstName = person.Attribute("FName").Value, LastName = person.Attribute("LName").Value }).ToList(); return people; } I don't want to tightly couple the concrete to the interface in this manner. I haven't been able to find any information regarding using Ninject with LINQ to Objects or with object initializers. I may be looking in the wrong places, but I've been searching for a day now with no luck at all. I was contemplating putting the kernel into a singleton instance and seeing if that would work, but I'm not sure that it will, plus I've heard that passing your kernel around is a bad thing. I'm trying to implement this in a class library currently. If this is not possible, does anyone have any examples or suggestions as to what the best practice is in this case? Thanks in advance for the help. EDIT: Based on some of the answers I feel I should clarify. Yes, the example above appears short-lived, but it was simply an example of one piece that I was trying to do. Let's give a bigger picture. Say instead of XML I am gathering all my data through a 3rd party web service and I'm creating an interface for it; the data could be a defined object in the wsdl or it could sometimes be an xml string. IPerson could be used for both the Person object and a User object. I will be doing this inside of a separate class library, because it needs to be portable and will be used in other projects, and handing it to an MVC3 Web Application, and the objects will be used in javascript as well. I appreciate all the input so far.

    Read the article

  • What am I not getting about this abstract class implementation?

    - by Schnapple
    PREFACE: I'm relatively inexperienced in C++ so this very well could be a Day 1 n00b question. I'm working on something whose long term goal is to be portable across multiple operating systems. I have the following files: Utilities.h #include <string> class Utilities { public: Utilities() { }; virtual ~Utilities() { }; virtual std::string ParseString(std::string const& RawString) = 0; }; UtilitiesWin.h (for the Windows class/implementation) #include <string> #include "Utilities.h" class UtilitiesWin : public Utilities { public: UtilitiesWin() { }; virtual ~UtilitiesWin() { }; virtual std::string ParseString(std::string const& RawString); }; UtilitiesWin.cpp #include <string> #include "UtilitiesWin.h" std::string UtilitiesWin::ParseString(std::string const& RawString) { // Magic happens here! // I'll put in a line of code to make it seem valid return ""; } So then elsewhere in my code I have this #include <string> #include "Utilities.h" void SomeProgram::SomeMethod() { Utilities *u = new Utilities(); StringData = u->ParseString(StringData); // StringData defined elsewhere } The compiler (Visual Studio 2008) is dying on the instance declaration c:\somepath\somecode.cpp(3) : error C2259: 'Utilities' : cannot instantiate abstract class due to following members: 'std::string Utilities::ParseString(const std::string &)' : is abstract c:\somepath\utilities.h(9) : see declaration of 'Utilities::ParseString' So in this case what I'm wanting to do is use the abstract class (Utilities) like an interface and have it know to go to the implemented version (UtilitiesWin). Obviously I'm doing something wrong but I'm not sure what. It occurs to me as I'm writing this that there's probably a crucial connection between the UtilitiesWin implementation of the Utilities abstract class that I've missed, but I'm not sure where. I mean, the following works #include <string> #include "UtilitiesWin.h" void SomeProgram::SomeMethod() { Utilities *u = new UtilitiesWin(); StringData = u->ParseString(StringData); // StringData defined elsewhere } but it means I'd have to conditionally go through the different versions later (i.e., UtilitiesMac(), UtilitiesLinux(), etc.) What have I missed here?

    Read the article

  • Is there any way to exclude artifacts inherited from a parent POM?

    - by Miguel
    Artifacts from dependencies can be excluded by declaring an <exclusions> element inside a <dependency>, but in this case I need to exclude an artifact inherited from a parent project. An excerpt of the POM under discussion follows: <project> <modelVersion>4.0.0</modelVersion> <groupId>test</groupId> <artifactId>jruby</artifactId> <version>0.0.1-SNAPSHOT</version> <parent> <artifactId>base</artifactId> <groupId>es.uniovi.innova</groupId> <version>1.0.0</version> </parent> <dependencies> <dependency> <groupId>com.liferay.portal</groupId> <artifactId>ALL-DEPS</artifactId> <version>1.0</version> <scope>provided</scope> <type>pom</type> </dependency> </dependencies> </project> The base artifact depends on javax.mail:mail-1.4.jar, and ALL-DEPS depends on another version of the same library. Because the mail.jar from ALL-DEPS exists in the execution environment (although not exported), it collides with the mail.jar from the parent, which is scoped as compile. A solution could be to remove mail.jar from the parent POM, but most of the projects that inherit base need it (as it is a transitive dependency of log4j). So what I would like to do is simply exclude the parent's library from the child project, as could be done if base were a dependency and not the parent POM: ... <dependency> <artifactId>base</artifactId> <groupId>es.uniovi.innova</groupId> <version>1.0.0</version> <type>pom</type> <exclusions> <exclusion> <groupId>javax.mail</groupId> <artifactId>mail</artifactId> </exclusion> </exclusions> </dependency> ...

    Read the article

  • Hotfixing Code running inside Web Container with Groovy

    - by raoulsson
    I have a webapp running that has a bug. I know how to fix it in the sources. However, I cannot redeploy the app, as I would have to take it offline to do so. (At least not right now.) I now want to fix the code "at runtime". Surgery on the living object, so to speak. The app is implemented in Java and is built on top of Seam. I added a Groovy Console to the app prior to the last release (a way to run arbitrary code at runtime). The normal way of adding behaviour to a class with Groovy would be similar to this: String.metaClass.foo= { x -> x * x } println "anything".foo(3) This code adds the method foo to java.lang.String and prints 9. I can do the same thing with classes running inside my webapp container. New instances will thereafter show the same behaviour: com.my.package.SomeService.metaClass.foo= { x -> x * x } def someService = new com.my.package.SomeService() println someService.foo(3) Works as expected. All good so far. My problem is now that the container, the web framework, Seam in this case, has already instantiated and cached the classes that I would like to manipulate (that is, change their behaviour to reflect my bug fix). Ideally this code would work: com.my.package.SomeService.metaClass.foo= { x -> x * x } def x = org.jboss.seam.Component.getInstance(com.my.package.SomeService) println x.foo(3) However, the instantiation of SomeService has already happened and there is no effect. Thus I need a way to make my changes "sticky". Is the Groovy magic gone after my script has run? Well, after logging out and in again, I can run this piece of code and get the expected result: def someService = new com.my.package.SomeService() println someService.foo(3) So the foo method is still around and it looks like my change has been permanent... So I guess the question that remains is how to force Seam to re-instantiate all its components and/or how to make the change permanent on all living instances...?

    Read the article

  • Delphi / MySql : Problems escaping strings

    - by mawg
    N00b here, having problems escaping strings. I used the QuotedStr() function - shouldn't that be enough? Unfortunately, the string that I am trying to quote is rather messy, but I will post it here in case anyone wants to paste it into WinMerge or KDiff3, etc. I am trying to store an entire Delphi form into the database, rather than into a .DFM file. It has only one field, a TEdit edit box. The debugger shows the form as text as 'object Form1: TScriptForm'#$D#$A' Left = 0'#$D#$A' Top = 0'#$D#$A' Align = alClient'#$D#$A' BorderStyle = bsNone'#$D#$A' ClientHeight = 517'#$D#$A' ClientWidth = 993'#$D#$A' Color = clBtnFace'#$D#$A' Font.Charset = DEFAULT_CHARSET'#$D#$A' Font.Color = clWindowText'#$D#$A' Font.Height = -11'#$D#$A' Font.Name = 'MS Sans Serif''#$D#$A' Font.Style = []'#$D#$A' OldCreateOrder = False'#$D#$A' SaveProps.Strings = ('#$D#$A' 'Visible=False')'#$D#$A' PixelsPerInch = 96'#$D#$A' TextHeight = 13'#$D#$A' object Edit1: TEdit'#$D#$A' Left = 192'#$D#$A' Top = 64'#$D#$A' Width = 121'#$D#$A' Height = 21'#$D#$A' TabOrder = 8'#$D#$A' end'#$D#$A'end'#$D#$A before calling QuotedStr() and ''object Form1: TScriptForm'#$D#$A' Left = 0'#$D#$A' Top = 0'#$D#$A' Align = alClient'#$D#$A' BorderStyle = bsNone'#$D#$A' ClientHeight = 517'#$D#$A' ClientWidth = 993'#$D#$A' Color = clBtnFace'#$D#$A' Font.Charset = DEFAULT_CHARSET'#$D#$A' Font.Color = clWindowText'#$D#$A' Font.Height = -11'#$D#$A' Font.Name = ''MS Sans Serif'''#$D#$A' Font.Style = []'#$D#$A' OldCreateOrder = False'#$D#$A' SaveProps.Strings = ('#$D#$A' ''Visible=False'')'#$D#$A' PixelsPerInch = 96'#$D#$A' TextHeight = 13'#$D#$A' object Edit1: TEdit'#$D#$A' Left = 192'#$D#$A' Top = 64'#$D#$A' Width = 121'#$D#$A' Height = 21'#$D#$A' TabOrder = 8'#$D#$A' end'#$D#$A'end'#$D#$A''' afterwards. The strange thing is that my complete command 'INSERT INTO designerFormDfm(designerFormDfmText) VALUES ("'object Form1: TScriptForm'#$D#$A' Left = 0'#$D#$A' Top = 0'#$D#$A' Align = alClient'#$D#$A' BorderStyle = bsNone'#$D#$A' ClientHeight = 517'#$D#$A' ClientWidth = 993'#$D#$A' Color = clBtnFace'#$D#$A' Font.Charset = DEFAULT_CHARSET'#$D#$A' Font.Color = clWindowText'#$D#$A' Font.Height = -11'#$D#$A' Font.Name = ''MS Sans Serif'''#$D#$A' Font.Style = []'#$D#$A' OldCreateOrder = False'#$D#$A' SaveProps.Strings = ('#$D#$A' ''Visible=False'')'#$D#$A' PixelsPerInch = 96'#$D#$A' TextHeight = 13'#$D#$A' object Edit1: TEdit'#$D#$A' Left = 192'#$D#$A' Top = 64'#$D#$A' Width = 121'#$D#$A' Height = 21'#$D#$A' TabOrder = 8'#$D#$A' end'#$D#$A'end'#$D#$A''");' executes in a MySql console, but not from Delphi, where I pass that command as a parameter to a function which executes: ADOCommand.CommandText := command; ADOCommand.CommandType := cmdText; ADOCommand.Execute(); I can only assume that I am having problems escaping sequences which contain single quotes (and QuotedStr() doesn't seem to escape backslashes(?!)) What am I doing that is obviously, glaringly wrong?

    Read the article

  • ASP.NET- forcing child/container events to fire before parent onload?

    - by Hans Gruber
    I'm working on a questionnaire type application in which questions are stored in a database. Therefore, I create my controls dynamically on every Page.OnLoad. This works like a charm and ViewState is persisted between postbacks because I ensure that my dynamic controls always have the same generated Control.ID. In addition to the user control that dynamically populates the questions, my questionnaire page also contains a 'Status' section (also encapsulated by a user control) which represents the status of the questionnaire (choices are 'Complete', 'Started' or 'In Progress'). If the user changes the status of questionnaire (i.e. from 'In Progress' to 'Complete'), I need to postback to the server because the contents of the dynamic portion of the questionnaire depend on the selected status. Some questions are always present regardless of status, and yet others may not be present at all for the selected status. The point is, when the status changes, I have to postback to the page and render the right set of questions. Additionally, I need to preserve any user entered values for those questions which are 'always available'. However, due to the page life cycle in ASP.NET, the 'Status' user control's OnLoad, which contains the correct status needed to load the right questions from the DB, doesn't get executed until after the 'dynamic questions' user control has already been populated (with the wrong/stale values). To get around this, I raise an event from my 'Status' user control to the main page to indicate that the Status has changed. The main page then raises an event on the 'dynamic questions' user control. Since by the time this event bubbles up, the 'dynamic questions' user control has already loaded the 'wrong' questions from the DB, it first calls Controls.Clear. It then happily uses the new status to query the database for the 'correct' questions and does a Control.Add() on each. FYI, Control.IDs are consistent across postbacks. This solution works...sorta. The correct set of questions for the selected status do get rendered; however ViewState is getting lost for those 'always available' questions. I'm guessing this is because the 'dynamic questions' user control calls Controls.Clear when responding to the status changed event. This must somehow kill the association between ViewState and my dynamic controls, even though the Control.ID are consistent. This seems like such a common requirement, I'm virtually certain there is a better, cleaner and less error prone approach to accomplish this. In case its not plain obvious, I haven't been able to grok the ASP.NET page life-cycle despite working with it for the last year. Any help is much appreciated!

    Read the article

  • PHP MySQL syntax error

    - by Jacksta
    My script is supposed to look up contacts in a table and present them on the screen to then be edited; however, this is not the case. I am getting the error Parse error: syntax error, unexpected $end in /home/admin/domains/domain.com.au/public_html/pick_modcontact.php on line 50 NOTE: this is the last line in this script. <? session_start(); if ($_SESSION[valid] != "yes") { header( "Location: contact_menu.php"); exit; } $db_name = "testDB"; $table_name = "my_contacts"; $connection = @mysql_connect("localhost", "user", "pass") or die(mysql_error()); $db = @mysql_select_db($db_name, $connection) or die(mysql_error()); $sql = "SELECT id, f_name, l_name FROM $table_name ORDER BY f_name"; $result = @mysql_query($sql, $connection) or die(mysql_error()); $num = @mysql_num_rows($result); if ($num < 1) { $display_block = "<p><em>Sorry No Results!</em></p>"; } else { while ($row = mysql_fetch_array($result)) { $id = $row['id']; $f_name = $row['f_name']; $l_name = $row['l_name']; $option_block .= "<option value\"$id\">$l_name, $f_name</option>"; } $display_block = "<form method=\"POST\" action=\"show_modcontact.php\"> <p><strong>Contact:</strong> <select name=\"id\">$option_block</select> <input type=\"submit\" name=\"submit\" value=\"Select This Contact\"></p> </form>"; ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Modify A Contact</title> </head> <body> <h1>My Contact Management System</h1> <h2><em>Modify a Contact</em></h2> <p>Select a contact from the list below, to modify the contact's record.</p> <? echo "$display_block"; ?> <br> <p><a href="contact_menu.php">Return to Main Menu</a></p> </body> </html>

    Read the article

  • Autocomplete server-side implementation

    - by toluju
    What is a fast and efficient way to implement the server-side component for an autocomplete feature in an html input box? I am writing a service to autocomplete user queries in our web interface's main search box, and the completions are displayed in an ajax-powered dropdown. The data we are running queries against is simply a large table of concepts our system knows about, which matches roughly with the set of wikipedia page titles. For this service obviously speed is of utmost importance, as responsiveness of the web page is important to the user experience. The current implementation simply loads all concepts into memory in a sorted set, and performs a simple log(n) lookup on a user keystroke. The tailset is then used to provide additional matches beyond the closest match. The problem with this solution is that it does not scale. It currently is running up against the VM heap space limit (I've set -Xmx2g, which is about the most we can push on our 32-bit machines), and this prevents us from expanding our concept table or adding more functionality. Switching to 64-bit VMs on machines with more memory isn't an immediate option. I've been hesitant to start working on a disk-based solution as I am concerned that disk seek time will kill performance. Are there possible solutions that will let me scale better, either entirely in memory or with some fast disk-backed implementations? Edits: @Gandalf: For our use case it is important that the autocompletion is comprehensive and isn't just extra help for the user. As for what we are completing, it is a list of concept-type pairs. For example, possible entries are [("Microsoft", "Software Company"), ("Jeff Atwood", "Programmer"), ("StackOverflow.com", "Website")]. We are using Lucene for the full search once a user selects an item from the autocomplete list, but I am not yet sure Lucene would work well for the autocomplete itself. @Glen: No databases are being used here. When I'm talking about a table I just mean the structured representation of my data. @Jason Day: My original implementation of this problem was to use a Trie, but the memory bloat with that was actually worse than the sorted set due to needing a large number of object references. I'll read up on ternary search trees to see if they could be of use.
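
    For reference, the tail-set walk the post describes can be capped so each keystroke costs O(log n + k) for k suggestions; a minimal sketch of that pattern (class and names are illustrative):

        import java.util.ArrayList;
        import java.util.List;
        import java.util.NavigableSet;
        import java.util.TreeSet;

        public class PrefixIndex {
            private final NavigableSet<String> terms = new TreeSet<>();

            public void add(String term) {
                terms.add(term.toLowerCase());
            }

            // Entries sharing a prefix are contiguous in sorted order, so walk
            // the tail set from the prefix and stop at the first non-match.
            public List<String> complete(String prefix, int limit) {
                String p = prefix.toLowerCase();
                List<String> out = new ArrayList<>();
                for (String term : terms.tailSet(p, true)) {
                    if (!term.startsWith(p) || out.size() == limit) break;
                    out.add(term);
                }
                return out;
            }
        }

    Lookups stay cheap either way; the heap pressure comes from the String objects themselves, which is why a more compact structure (a DAWG, a radix trie that stores string fragments per edge rather than per-character nodes, or a memory-mapped sorted file binary-searched on disk) is the usual next step when 2 GB is the ceiling.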

    Read the article

  • XSL transformation and special XML entities escaping

    - by Tomas R
    I have an XML file which is transformed with XSL. Some elements have to be changed, some have to be left as is - specifically, text with entities &quot;, &amp;, &apos;, &lt;, &gt; should be left as is, and in my case &quot; and &apos; are changed to " and ' accordingly. Test XML: <?xml version="1.0" encoding="UTF-8" ?> <root> <element> &quot; &amp; &apos; &lt; &gt; </element> </root> transformation file: <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0"> <xsl:output method="xml" encoding="UTF-8" omit-xml-declaration="no" indent="no" /> <xsl:template match="element"> <xsl:copy> <xsl:value-of disable-output-escaping="no" select="." /> </xsl:copy> </xsl:template> </xsl:stylesheet> result: <?xml version="1.0" encoding="UTF-8"?> <element> " &amp; ' &lt; &gt; </element> desired result: <?xml version="1.0" encoding="UTF-8"?> <element> &quot; &amp; &apos; &lt; &gt; </element> I have two questions: why are some of those entities transformed and others not? And how can I get the desired result?

    Read the article

  • How to apply or chain multiple matching templates in XSLT?

    - by Ignatius
    I am working on a stylesheet employing many templates with match attributes: <xsl:template match="//one" priority="0.7"> <xsl:param name="input" select="."/> <xsl:value-of select="util:uppercase($input)"/> <xsl:next-match /> </xsl:template> <xsl:template match="/stuff/one"> <xsl:param name="input" select="."/> <xsl:value-of select="util:add-period($input)"/> </xsl:template> <xsl:function name="util:uppercase"> <xsl:param name="input"/> <xsl:value-of select="upper-case($input)"/> </xsl:function> <xsl:function name="util:add-period"> <xsl:param name="input"/> <xsl:value-of select="concat($input,'.')"/> </xsl:function> What I would like to do is be able to 'chain' the two functions above, so that an input of 'string' would be rendered in the output as 'STRING.' (with the period.) I would like to do this in such a way that doesn't require knowledge of other templates in any other template. So, for instance, I would like to be able to add a "util:add-colon" method without having to open up the hood and monkey with the existing templates. I was playing around with the <xsl:next-match/> instruction to accomplish this. Adding it to the first template above does of course invoke both util:uppercase and util:add-period, but the output is an aggregation of each template output (i.e. 'STRINGstring.') It seems like there should be an elegant way to chain any number of templates together using something like <xsl:next-match/>, but have the output of each template feed the input of the next one in the chain. Am I overlooking something obvious?

    Read the article

  • IQueryable and lazy loading

    - by Nelson
    I'm having a hard time determining the best way to handle this... With Entity Framework (and L2S), LINQ queries return IQueryable. I have read various opinions on whether the DAL/BLL should return IQueryable, IEnumerable or IList. Assuming we go with IList, then the query is run immediately and that control is not passed on to the next layer. This makes it easier to unit test, etc. You lose the ability to refine the query at higher levels, but you could simply create another method that allows you to refine the query and still return IList. And there are many more pros/cons. So far so good. Now comes Entity Framework and lazy loading. I am using POCO objects with proxies in .NET 4/VS 2010. In the presentation layer I do: foreach (Order order in bll.GetOrders()) { foreach (OrderLine orderLine in order.OrderLines) { // Do something } } In this case, GetOrders() returns IList so it executes immediately before returning to the PL. But in the next foreach, you have lazy loading which executes multiple SQL queries as it gets all the OrderLines. So basically, the PL is running SQL queries "on demand" in the wrong layer. Is there any sensible way to avoid this? I could turn lazy loading off, but then what's the point of having this "feature" that everyone was complaining EF1 didn't have? And I'll admit it is very useful in many scenarios. So I see several options: Somehow remove all associations in the entities and add methods to return them. This goes against the default EF behavior/code generation and makes it harder to do some composite (multiple entity) LINQ queries. It seems like a step backwards. I vote no. If we have lazy loading anyway which makes it hard to unit test, then go all the way and return IQueryable. You'll have more control farther up the layers. I still don't think this is a good option because IQueryable ties you to L2S, L2E, or your own full implementation of IQueryable. Lazy loading may run queries "on demand", but doesn't tie you to any specific interface. I vote no. Turn off lazy loading. You'll have to handle your associations manually. This could be with eager loading's .Include(). I vote yes in some specific cases. Keep IList and lazy loading. I vote yes in many cases, only due to the troubles with the others. Any other options or suggestions? I haven't found an option that really convinces me.

    Read the article

  • Is there a path of least resistance that a newcomer to graphics-technology-adoption can take at this point in the .NET graphics world?

    - by Rao
    For the past 5 months or so, I've spent time learning C# using Andrew Troelsen's book and getting familiar with stuff in the .NET 4 stack... bits of ADO.NET, EF4 and a pinch of WCF to taste. I'm really interested in graphics development (not for games though), which is why I chose to go the .NET route when I decided to choose between Java and .NET to learn... since I heard about WPF and saw some sexy screenshots and all. I'm even almost done with the 4 WPF chapters in Troelsen's book. Now, all of a sudden I saw some post on a forum about how "WPF was dead" in the face of something called Silverlight. I searched more and saw all the confusion going on at present... even stuff like "Silverlight is dead too!" wrt HTML5. From what I gather, we are in a delicate period of time that will eventually decide which technology will stabilize, right? Even so, as someone new moving into UI & graphics development via .NET, I wish I could get some guidance from more experienced people. Maybe I'm reading too much? Maybe I have missed some pieces of information? Maybe a path exists that minimizes tears of blood? In any case, here is a sample vomiting of my thoughts on which I'd appreciate some clarification or assurance or spanking: My present interest lies in desktop development. But on graduating from college, I wish to market myself as a .NET developer. The industry seems to be drooling for web stuff. Can Silverlight do both equally well? (I see on searches that SL works "out of browser"). I have two fair-sized hobby projects planned that will have hawt UIs with lots of drag n drop, sliding animations etc. These are intended to be desktop apps that will use reflection, database stuff using EF4, networking over LAN, reading-writing of files... does this affect which graphics technology can be used? At some laaaater point, if I become interested in doing a bit of 3D stuff in .NET, will that affect which technologies can be used? Or what if I look up to the heavens, stick out my middle finger, and do something crazy like go learn HTML5 even though my knowledge of it can be encapsulated in 2 sentences? Sorry I seem so confused; I just want to know if there's a path of least resistance that a newcomer to graphics-technology-adoption can take at this point in the graphics world.

    Read the article

  • How to configure Jetty to reload a WebAppContext when classes are changed

    - by Guss
    I'm developing a web application and I run Jetty as the development and testing environment when I develop under Eclipse. When I make changes to Java classes, Eclipse automatically compiles them to the build directory, but Jetty won't see the changes until I stop and start the server. I know that Jetty supports "hot deployment" using ContextDeployer that will refresh updated application contexts, but it relies on a context file in a context directory being updated - which is not very useful in my case. Is there a way to set up Jetty so that it will reload the web app when any of the classes it uses is updated? My current jetty.xml looks something like this: <?xml version="1.0"?> <!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure.dtd"> <Configure id="Server" class="org.eclipse.jetty.server.Server"> <Set name="ThreadPool"><!-- bla bla --></Set> <Call name="addConnector"><!-- bla bla --></Call> <Set name="handler"> <New id="Handlers" class="org.eclipse.jetty.server.handler.HandlerCollection"> <Set name="handlers"> <Array type="org.eclipse.jetty.server.Handler"> <Item> <New id="webapp" class="org.eclipse.jetty.webapp.WebAppContext"> <Set name="displayName">My Web App</Set> <Set name="resourceBase">src/main/webapp</Set> <Set name="descriptor">src/main/webapp/WEB-INF/web.xml</Set> <Set name="contextPath">/mywebapp</Set> </New> </Item> <Item> <New id="DefaultHandler" class="org.eclipse.jetty.server.handler.DefaultHandler"/> </Item> </Array> </Set> </New> </Set> </Configure>
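
    One pattern that sidesteps jetty.xml during development is to run Jetty embedded and restart just the WebAppContext whenever the build output changes. A rough sketch using the JDK's WatchService follows; the port and paths are assumptions, the watch registration is non-recursive (a real version would register subdirectories too), and whether a plain stop()/start() is enough to drop the old classloader varies by Jetty version, so treat this as a starting point rather than a drop-in answer:

        import java.nio.file.*;
        import org.eclipse.jetty.server.Server;
        import org.eclipse.jetty.webapp.WebAppContext;

        public class DevServer {
            public static void main(String[] args) throws Exception {
                Server server = new Server(8080);
                WebAppContext webapp = new WebAppContext();
                webapp.setResourceBase("src/main/webapp");
                webapp.setContextPath("/mywebapp");
                server.setHandler(webapp);
                server.start();

                // Watch the directory Eclipse compiles into; on any change,
                // bounce the context so it starts over with fresh classes.
                WatchService watcher = FileSystems.getDefault().newWatchService();
                Paths.get("target/classes").register(watcher,
                        StandardWatchEventKinds.ENTRY_CREATE,
                        StandardWatchEventKinds.ENTRY_MODIFY,
                        StandardWatchEventKinds.ENTRY_DELETE);
                while (true) {
                    WatchKey key = watcher.take(); // blocks until a .class changes
                    key.pollEvents();              // drain the batch of events
                    webapp.stop();
                    webapp.start();
                    key.reset();
                }
            }
        }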

    Read the article

  • Symfony2 Entity to array

    - by Adriano Pedro
    I'm trying to migrate my flat PHP project to Symfony2, but it's proving to be very hard. For instance, I have a table of product specifications that holds several specifications, distinguishable by the "cat" attribute in that Extraspecs DB table. Therefore I've created an Entity for that table and want to make an array of just the specifications with "cat" = 0... I suppose the code is this one, right? $typeavailable = $this->getDoctrine() ->getRepository('LabsCatalogBundle:ProductExtraspecsSpecs') ->findBy(array('cat' => '0')); Now how can I put this in an array to work with a form like this?: form = $this ->createFormBuilder($product) ->add('specs', 'choice', array('choices' => $typeavailableArray), 'multiple' => true) Thank you in advance :) # Thank you all. But now I've come across another problem. In fact, I'm building a form from an existing object: $form = $this ->createFormBuilder($product) ->add('name', 'text') ->add('genspec', 'choice', array('choices' => array('0' => 'None', '1' => 'General', '2' => 'Specific'))) ->add('isReg', 'choice', array('choices' => array('0' => 'Material', '1' => 'Reagent', '2' => 'Antibody', '3' => 'Growth Factors', '4' => 'Rodents', '5' => 'Lagomorphs'))) So, in that case my current value is named "extraspecs", so I've added this: ->add('extraspecs', 'entity', array( 'label' => 'desc', 'empty_value' => ' --- ', 'class' => 'LabsCatalogBundle:ProductExtraspecsSpecs', 'property' => 'specsid', 'query_builder' => function(EntityRepository $er) { return $er ->createQueryBuilder('e'); But "extraspecs" comes from a oneToMany relationship where every product has several extraspecs... Here is the ORM: Labs\CatalogBundle\Entity\Product: type: entity table: orders__regmat id: id: type: integer generator: { strategy: AUTO } fields: name: type: string length: 100 catnumber: type: string scale: 100 brand: type: integer scale: 10 company: type: integer scale: 10 size: type: decimal scale: 10 units: type: integer scale: 10 price: type: decimal scale: 10 reqcert: type: integer scale: 1 isReg: type: integer scale: 1 genspec: type: integer scale: 1 oneToMany: extraspecs: targetEntity: ProductExtraspecs mappedBy: product Labs\CatalogBundle\Entity\ProductExtraspecs: type: entity table: orders__regmat__extraspecs fields: extraspecid: id: true type: integer unsigned: false nullable: false generator: strategy: IDENTITY regmatid: type: integer scale: 11 spec: type: integer scale: 11 attrib: type: string length: 20 value: type: string length: 200 lifecycleCallbacks: { } manyToOne: product: targetEntity: Product inversedBy: extraspecs joinColumn: name: regmatid referencedColumnName: id How should I do this? Thank you!!!

    Read the article

  • JavaCC: How can one exclude a string from a token? (A.k.a. understanding token ambiguity.)

    - by java.is.for.desktop
    Hello, everyone! I have already had many problems understanding how ambiguous tokens can be handled elegantly (or at all) in JavaCC. Let's take this example: I want to parse an XML processing instruction. The format is: "<?" <target> <data> "?>": target is an XML name, data can be anything except ?>, because it's the closing tag. So, let's define this in JavaCC: (I use lexical states, in this case DEFAULT and PROC_INST) TOKEN : <#NAME : (very-long-definition-from-xml-1.1-goes-here) > TOKEN : <WSS : (" " | "\t")+ > // WSS = whitespaces <DEFAULT> TOKEN : {<PI_START : "<?" > : PROC_INST} <PROC_INST> TOKEN : {<PI_TARGET : <NAME> >} <PROC_INST> TOKEN : {<PI_DATA : ~[] >} // accept everything <PROC_INST> TOKEN : {<PI_END : "?>" > : DEFAULT} Now the part which recognizes processing instructions: void PROC_INSTR() : {} { ( <PI_START> (t=<PI_TARGET>){System.out.println("target: " + t.image);} <WSS> (t=<PI_DATA>){System.out.println("data: " + t.image);} <PI_END> ) {} } Let's test it with <?mytarget here-goes-some-data?>: The target is recognized: "target: mytarget". But now I get my favorite JavaCC parsing error: !! procinstparser.ParseException: Encountered "" at line 1, column 15. !! Was expecting one of: !! Encountered nothing? Was expecting nothing? Or what? Thank you, JavaCC! I know that I could use the MORE keyword of JavaCC, but this would give me the whole processing instruction as one token, so I'd have to parse/tokenize it further by myself. Why should I do that? Am I writing a parser that does not parse? The problem is (I guess): since <PI_DATA> recognizes "everything", my definition is wrong. I should tell JavaCC to recognize "everything except ?>" as processing instruction data. But how can it be done? NOTE: I can only exclude single characters using ~["a"|"b"|"c"], I can't exclude strings such as ~["abc"] or ~["?>"]. Another great anti-feature of JavaCC. Thank you.

    Read the article

  • Removing elements from heap

    - by user193138
    I made a heap. I am curious if there's something subtly wrong with my remove function: int Heap::remove() { if (n == 0) exit(1); int temp = arr[0]; arr[0] = arr[--n]; heapDown(0); arr[n] = 0; return temp; } void Heap::heapDown(int i) { int l = left(i); int r = right(i); // comparing parent to left/right child // each has an inner if to handle if the first swap causes a second swap // ie 1 -> 3 -> 5 // 3 5 1 5 1 3 if (l < n && arr[i] < arr[l]) { swap(arr[i], arr[l]); heapDown(l); if (r < n && arr[i] < arr[r]) { swap(arr[i], arr[r]); heapDown(r); } } else if (r < n && arr[i] < arr[r]) { swap(arr[i], arr[r]); heapDown(r); if (l < n && arr[i] < arr[l]) { swap(arr[i], arr[l]); heapDown(l); } } } Here's my output i1i2i3i4i5i6i7 p Active heap: 7 4 6 1 3 2 5 r Removed 7 r Removed 6 p Active heap: 5 3 4 1 2 Here's my teacher's sample output: p Active heap : 7 4 6 1 3 2 5 r Removed 7 r Removed 6 p Active heap : 5 4 2 1 3 s Heapsorted : 1 2 3 4 5 While our outputs are completely different, I do seem to uphold the max-heap principle of keeping everything left-oriented and, for all nodes, parent >= child (in every case I tried). I try to do algs like this from scratch, so maybe I'm just doing something really weird and wrong (I would only consider it "wrong" if it's not O(lg n), as heap removes are intended to be O(lg n)). Is there anything in particular "wrong" about my remove? Thanks, http://ideone.com/PPh4eQ
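
    For comparison, the textbook max-heap sift-down picks the larger of the two children, swaps at most once per level, and recurses, which removes the need for the nested re-check above while staying O(lg n). A compact Java sketch of the whole structure (the post's code is C++, but the index arithmetic is identical):

        public class MaxHeap {
            private final int[] arr;
            private int n;

            public MaxHeap(int capacity) { arr = new int[capacity]; }

            public void insert(int v) {
                int i = n++;
                arr[i] = v;
                while (i > 0 && arr[(i - 1) / 2] < arr[i]) { // bubble up past smaller parents
                    swap(i, (i - 1) / 2);
                    i = (i - 1) / 2;
                }
            }

            public int remove() {
                if (n == 0) throw new IllegalStateException("empty heap");
                int top = arr[0];
                arr[0] = arr[--n]; // move the last leaf to the root, then sift down
                heapDown(0);
                return top;
            }

            private void heapDown(int i) {
                int largest = i, l = 2 * i + 1, r = 2 * i + 2;
                if (l < n && arr[l] > arr[largest]) largest = l;
                if (r < n && arr[r] > arr[largest]) largest = r;
                if (largest != i) {      // at most one swap per level
                    swap(i, largest);
                    heapDown(largest);
                }
            }

            private void swap(int a, int b) { int t = arr[a]; arr[a] = arr[b]; arr[b] = t; }
        }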

    Read the article

  • NSLinguisticTagger on the contents of an NSTextStorage- crashing bug

    - by Remy Porter
    I'm trying to use an NSLinguisticTagger to monitor the contents of an NSTextStorage and provide some contextual information based on what the user types. To that end, I have an OverlayManager object, which wires up this relationship: -(void) setView:(NSTextView*) view { _view = view; _layout = view.layoutManager; _storage = view.layoutManager.textStorage; //get the TextStorage from the view [_tagger setString:_storage.string]; //pull the string out, this grabs the mutable version [self registerForNotificationsOn:self->_storage]; //subscribe to the willProcessEditing notification } When an edit occurs, I make sure to trap it and notify the tagger (and yes, I know I'm being annoyingly inconsistent with member access, I'm rusty on Obj-C, I'll fix it later): - (void) textStorageWillProcessEditing:(NSNotification*) notification{ if ([self->_storage editedMask] & NSTextStorageEditedCharacters) { NSRange editedRange = [self->_storage editedRange]; NSUInteger delta = [self->_storage changeInLength]; [_tagger stringEditedInRange:editedRange changeInLength:delta]; //should notify the tagger of the changes [self highlightEdits:self]; } } The highlightEdits message delegates the job out to a pool of "Overlay" objects. Each contains a block of code similar to this: [tagger enumerateTagsInRange:range scheme:NSLinguisticTagSchemeLexicalClass options:0 usingBlock:^(NSString *tag, NSRange tokenRange, NSRange sentenceRange, BOOL *stop) { if (tag == PartOfSpeech) { [self applyHighlightToRange:tokenRange onStorage:storage]; } }]; And that's where the problem is- the enumerateTagsInRange method crashes out with a message: 2014-06-04 10:07:19.692 WritersEditor[40191:303] NSMutableRLEArray replaceObjectsInRange:withObject:length:: Out of bounds This problem doesn't occur if I don't link to the mutable copy of the underlying string and instead do a [[_storage string] copy], but obviously I don't want to copy the entire backing store every time I want to do tagging. This all should be happening in the main run loop, so I don't think this is a threading issue. The NSRange I'm enumerating tags on exists both in the NSTextStorage and in the NSLinguisticTagger's view of the string. It's not even the fact that the applyHighlightToRange call adds attributes to the string, because it crashes before even reaching that line. I attempted to build a test case around the problem, but can't replicate it in those situations: - (void) testEdit { NSAttributedString* str = [[NSMutableAttributedString alloc] initWithString:@"Quickly, this is a test."]; text = [[NSTextStorage alloc] initWithAttributedString:str]; NSArray* schemes = [NSLinguisticTagger availableTagSchemesForLanguage:@"en"]; tagger = [[NSLinguisticTagger alloc] initWithTagSchemes:schemes options:0]; [tagger setString:[text string]]; [text beginEditing]; [[text mutableString] appendString:@"T"]; NSRange edited = [text editedRange]; NSUInteger length = [text changeInLength]; [text endEditing]; [tagger stringEditedInRange:edited changeInLength:length]; [tagger enumerateTagsInRange:edited scheme:NSLinguisticTagSchemeLexicalClass options:0 usingBlock:^(NSString *tag, NSRange tokenRange, NSRange sentenceRange, BOOL *stop) { //doesn't matter, this should crash }]; } That code doesn't crash.

    Read the article

  • ASP.Net 4.0 - Response required in SiteMap building?

    - by Nick Craver
    I'm running into an issue upgrading a project to .Net 4.0...and having trouble finding any reason for the issue (or at least, the change causing it). Given the freshness of 4.0, not a lot of blogs out there for issues yet, so I'm hoping someone here has an idea. Preface: this is a Web Forms application, coming from 3.5 SP1 to 4.0. In the Application_Start event we're iterating through the SiteMap and constructing routes based off data there (prettifying URLs mostly with some utility added), that part isn't failing though...or at least isn't not getting that far. It seems that calling the SiteMap.RootNode (inside application_start) causes 4.0 to eat it, since the XmlSiteMapProvider.GetNodeFromXmlNode method has been changed, looking in reflector you can see it's hitting HttpResponse.ApplyAppPathModifier here: str2 = HttpContext.Current.Response.ApplyAppPathModifier(str2); HttpResponse wasn't used at all in this method in the 2.0 CLR, so what we had worked fine, in 4.0 though, that method is called as a result of this stack: [HttpException (0x80004005): Response is not available in this context.] System.Web.XmlSiteMapProvider.GetNodeFromXmlNode(XmlNode xmlNode, Queue queue) System.Web.XmlSiteMapProvider.ConvertFromXmlNode(Queue queue) System.Web.XmlSiteMapProvider.BuildSiteMap() System.Web.XmlSiteMapProvider.get_RootNode() System.Web.SiteMap.get_RootNode() Since Response isn't available here in 4.0, we get an error. To reproduce this, you can narrow the test case down to this in global: protected void Application_Start(object sender, EventArgs e) { var s = SiteMap.RootNode; //Kaboom! //or just var r = Context.Response; //or var r = HttpContext.Current.Response; //all result in the same "not available" error } Question: Am I missing something obvious here? Or, is there another event added in 4.0 that's recommended for anything related to SiteMap on startup? For anyone curious/willing to help, I've created a very minimal project (a default VS 2010 ASP.Net 4.0 site, all the bells & whistles removed and only a blank sitemap and the Application_Start code added). It's a small 10kb zip available here: http://www.ncraver.com/Test/SiteMapTest.zip Update: Not a great solution, but the current work-around is to do the work in Application_BeginRequest, like this: private static bool routesRegistered = false; protected void Application_BeginRequest(object sender, EventArgs e) { if (!routesRegistered) { Application.Lock(); if (!routesRegistered) RouteManager.RegisterRoutes(RouteTable.Routes); routesRegistered = true; Application.UnLock(); } } I don't like this particularly, feels like an abuse of the event to bypass the issue. Does anyone have at least a better work-around, since the .Net 4 behavior with SiteMap is not likely to change?

    Read the article

  • Null Exception RelativeLayout

    - by theblixguy
    I am trying to remove objects from my relative layout and replace the background with another image but I get a java.lang.NullPointerException on this line: RelativeLayout ths = (RelativeLayout)findViewById(R.layout.activity_main); Below is my code: package com.ssrij.qrmag; import android.app.Activity; import android.content.Intent; import android.net.Uri; import android.os.Bundle; import android.view.View; import android.view.animation.Animation; import android.view.animation.TranslateAnimation; import android.view.animation.Animation.AnimationListener; import android.widget.Button; import android.widget.RelativeLayout; import android.widget.TextView; public class MainActivity extends Activity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); // Initialize animations Animation a = new TranslateAnimation(1000,0,0,0); Animation a1 = new TranslateAnimation(1000,0,0,0); Animation a2 = new TranslateAnimation(1000,0,0,0); Animation a3 = new TranslateAnimation(1000,0,0,0); // Set animation durations (ms) a.setDuration(1200); a1.setDuration(1400); a2.setDuration(1600); a3.setDuration(1800); // Get a reference to the objects we want to apply the animation to final TextView v = (TextView)findViewById(R.id.textView1); final TextView v1 = (TextView)findViewById(R.id.textView2); final TextView v2 = (TextView)findViewById(R.id.TextView3); final Button v3 = (Button)findViewById(R.id.tap_scan); // Clear existing animations, just in case... v.clearAnimation(); v1.clearAnimation(); v2.clearAnimation(); v3.clearAnimation(); // Start animating v.startAnimation(a); v1.startAnimation(a1); v2.startAnimation(a2); v3.startAnimation(a3); a1.setAnimationListener(new AnimationListener() { @Override public void onAnimationStart(Animation animation) { // TODO Auto-generated method stub } @Override public void onAnimationRepeat(Animation animation) { // TODO Auto-generated method stub } @Override public void onAnimationEnd(Animation animation) { v.setVisibility(View.INVISIBLE); v1.setVisibility(View.INVISIBLE); v2.setVisibility(View.INVISIBLE); v3.setVisibility(View.INVISIBLE); RelativeLayout ths = (RelativeLayout)findViewById(R.layout.activity_main); ths.setBackgroundResource(R.drawable.blurbg); } }); } public void ScanQr(View v) { // Open the QR Scan page Intent a = new Intent(MainActivity.this, ScanActivity.class); startActivity(a); } } Is there anything that I am doing wrong?
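
    As a point of reference, findViewById() takes an id from R.id, not a layout resource from R.layout, and returns null when handed the latter, so the setBackgroundResource() call that follows throws. A sketch of the usual fix for the listener body above, assuming the root element of activity_main.xml is given a hypothetical id of root_layout:

        // In activity_main.xml, the root element would declare:
        //   <RelativeLayout android:id="@+id/root_layout" ... >
        @Override
        public void onAnimationEnd(Animation animation) {
            v.setVisibility(View.INVISIBLE);
            v1.setVisibility(View.INVISIBLE);
            v2.setVisibility(View.INVISIBLE);
            v3.setVisibility(View.INVISIBLE);
            // Look the root up by its id, never by its layout resource.
            RelativeLayout root = (RelativeLayout) findViewById(R.id.root_layout);
            root.setBackgroundResource(R.drawable.blurbg);
        }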

    Read the article

  • Checking if an int is prime more efficiently

    - by SipSop
    I was recently part of a small Java programming competition at my school. My partner and I have just finished our first pure OOP class and most of the questions were out of our league, so we settled on this one (and I am paraphrasing somewhat): "given an input integer n, return the next int that is prime and whose reverse is also prime; for example, if n = 18 your program should print 31" because 31 and 13 are both prime. Your .class file would then have a test case of all the possible numbers from 1-2,000,000,000 passed to it and it had to return the correct answer within 10 seconds to be considered valid. We found a solution, but with larger test cases it would take longer than 10 seconds. I am fairly certain there is a way to narrow the looping range down from n..2,000,000,000, as the likelihood of needing to loop that far when n is a low number is small, but either way we broke the loop when a number that is prime under both conditions is found. At first we were looping from 2..n no matter how large it was; then I remembered the rule about only looping to the square root of n. Any suggestions on how to make my program more efficient? I have had no classes dealing with complexity analysis of algorithms. Here is our attempt. public class P3 { public static void main(String[] args){ long loop = 2000000000; long n = Integer.parseInt(args[0]); for(long i = n; i<loop; i++) { String s = i +""; String r = ""; for(int j = s.length()-1; j>=0; j--) r = r + s.charAt(j); if(prime(i) && prime(Long.parseLong(r))) { System.out.println(i); break; } } System.out.println("#"); } public static boolean prime(long p){ for(int i = 2; i<(int)Math.sqrt(p); i++) { if(p%i==0) return false; } return true; } } P.S. Sorry if I got the code formatting wrong; this is my first time posting here. Also, the output had to have a '#' after each line; that's what the line after the loop is about. Thanks for any help you guys offer!!!
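
    Two details stand out when sketching a faster version: the loop condition i < (int)Math.sqrt(p) never tests the square root itself, so perfect squares such as 9 and 25 pass as prime, while i*i <= p avoids both that bug and the repeated sqrt call; and the reversal can stay in integer arithmetic instead of building strings. A hedged sketch along those lines (not the contest's reference solution):

        public class ReversePrime {
            static boolean isPrime(long p) {
                if (p < 2) return false;
                if (p % 2 == 0) return p == 2;
                for (long i = 3; i * i <= p; i += 2) // i*i <= p also tests the square root
                    if (p % i == 0) return false;
                return true;
            }

            static long reverse(long v) {
                long r = 0;
                while (v > 0) { r = r * 10 + v % 10; v /= 10; }
                return r;
            }

            public static void main(String[] args) {
                long n = Long.parseLong(args[0]);
                for (long i = Math.max(n, 2); ; i++) {
                    if (isPrime(i) && isPrime(reverse(i))) {
                        System.out.println(i);
                        break;
                    }
                }
                System.out.println("#");
            }
        }

    For n = 18 this prints 31, matching the example: 19, 23, and 29 are prime but their reverses (91, 32, 92) are not.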

    Read the article
