Search Results

Search found 14644 results on 586 pages for 'auto generate'.


  • Rails performance tests "rake test:benchmark" and "rake test:profile" give me errors

    - by go minimal
    I'm trying to run a blank default performance test with Ruby 1.9 and Rails 2.3.5 and I just can't get it to work! What am I missing here??? rails testapp cd testapp script/generate scaffold User name:string rake db:migrate rake test:benchmark - /usr/local/bin/ruby19 -I"lib:test" "/usr/local/lib/ruby19/gems/1.9.1/gems/rake-0.8.7/lib/rake/rake_test_loader.rb" "test/performance/browsing_test.rb" -- --benchmark Loaded suite /usr/local/lib/ruby19/gems/1.9.1/gems/rake-0.8.7/lib/rake/rake_test_loader Started /usr/local/lib/ruby19/gems/1.9.1/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:105:in `rescue in const_missing': uninitialized constant BrowsingTest::STARTED (NameError) from /usr/local/lib/ruby19/gems/1.9.1/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:94:in `const_missing' from /usr/local/lib/ruby19/gems/1.9.1/gems/activesupport-2.3.5/lib/active_support/testing/performance.rb:38:in `run' from /usr/local/lib/ruby19/1.9.1/minitest/unit.rb:415:in `block (2 levels) in run_test_suites' from /usr/local/lib/ruby19/1.9.1/minitest/unit.rb:409:in `each' from /usr/local/lib/ruby19/1.9.1/minitest/unit.rb:409:in `block in run_test_suites' from /usr/local/lib/ruby19/1.9.1/minitest/unit.rb:408:in `each' from /usr/local/lib/ruby19/1.9.1/minitest/unit.rb:408:in `run_test_suites' from /usr/local/lib/ruby19/1.9.1/minitest/unit.rb:388:in `run' from /usr/local/lib/ruby19/1.9.1/minitest/unit.rb:329:in `block in autorun' rake aborted! Command failed with status (1): [/usr/local/bin/ruby19 -I"lib:test" "/usr/l...]

    Read the article

  • How do I add a default namespace with no prefix using XMLSerializer

    - by OldBob
    Hi Using C# and .Net 3.5; I am trying to generate an XML document that contains the default namespace without a prefix using XMLSerializer. eg. <?xml version="1.0" encoding="utf-8" ?> <MyRecord ID="9266" xmlns="http://www.website.com/MyRecord"> <List> <SpecificItem> using the following code string xmlizedString = null; MemoryStream memoryStream = new MemoryStream(); XmlSerializer xs = new XmlSerializer(typeof(ExportMyRecord)); XmlSerializerNamespaces xmlnsEmpty = new XmlSerializerNamespaces(); xmlnsEmpty.Add(string.Empty, string.Empty); XmlTextWriter xmlTextWriter = new XmlTextWriter(memoryStream, Encoding.UTF8); xs.Serialize(xmlTextWriter, myRecord, xmlnsEmpty); memoryStream = (MemoryStream)xmlTextWriter.BaseStream; xmlizedString = this.UTF8ByteArrayToString(memoryStream.ToArray()); and class structure [Serializable] [XmlRoot("MyRecord")] public class ExportMyRecord { [XmlAttribute("ID")] public int ID { get; set; } Now, I've tried various options XmlSerializer xs = new XmlSerializer(typeof(ExportMyRecord),"http://www.website.com/MyRecord"); or [XmlRoot(Namespace = "http://www.website.com/MyRecord", ElementName="MyRecord")] gives me <?xml version="1.0" encoding="utf-8"?> <q1:MylRecord ID="9266" xmlns:q1="http://www.website.com/MyRecord"> <q1:List> <q1:SpecificItem> I need the XML to have the namespace without the prefix as it's going to a third party provider and they reject all other alternatives. Any suggestions? No responses so far. Has anyone experienced this or know how to solve it?
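    A commonly cited fix (a sketch only, not verified against the poster's full class) is to keep the namespace on the XmlRoot attribute but register it against the empty prefix, instead of adding the empty/empty pair; XmlSerializer then emits it as the default xmlns on the root rather than inventing a q1: prefix. The empty/empty pair only suppresses the xsi/xsd declarations.

        using System;
        using System.IO;
        using System.Text;
        using System.Xml;
        using System.Xml.Serialization;

        [XmlRoot("MyRecord", Namespace = "http://www.website.com/MyRecord")]
        public class ExportMyRecord
        {
            [XmlAttribute("ID")]
            public int ID { get; set; }
        }

        class Demo
        {
            static void Main()
            {
                // Map the EMPTY prefix to the namespace so it is written as a
                // default xmlns="..." on the root instead of a q1: prefix.
                var ns = new XmlSerializerNamespaces();
                ns.Add("", "http://www.website.com/MyRecord");

                var xs = new XmlSerializer(typeof(ExportMyRecord));
                var record = new ExportMyRecord { ID = 9266 };

                using (var stream = new MemoryStream())
                using (var writer = new XmlTextWriter(stream, Encoding.UTF8))
                {
                    xs.Serialize(writer, record, ns);
                    writer.Flush();
                    Console.WriteLine(Encoding.UTF8.GetString(stream.ToArray()));
                }
            }
        }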

    Read the article

  • Understanding CSRF - Simple Question

    - by byronh
    I know this might make me seem like an idiot, I've read everything there is to read about CSRF and I still don't understand how using a 'challenge token' would add any sort of prevention. Please help me clarify the basic concept, none of the articles and posts here on SO I read seemed to really explicitly state what value you're comparing with what. From OWASP: In general, developers need only generate this token once for the current session. After initial generation of this token, the value is stored in the session and is utilized for each subsequent request until the session expires. If I understand the process correctly, this is what happens. I log in at http://example.com and a session/cookie is created containing this random token. Then, every form includes a hidden input also containing this random value from the session which is compared with the session/cookie upon form submission. But what does that accomplish? Aren't you just taking session data, putting it in the page, and then comparing it with the exact same session data? Seems like circular reasoning. These articles keep talking about following the "same-origin policy" but that makes no sense, because all CSRF attacks ARE of the same origin as the user, just tricking the user into doing actions he/she didn't intend. Is there any alternative other than appending the token to every single URL as a query string? Seems very ugly and impractical, and makes bookmarking harder for the user.
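    The comparison is not circular because of who can read what: a cross-site attacker can make the victim's browser send the session cookie (the browser attaches it automatically), but the attacker's page cannot read the hidden field your own form rendered, because the same-origin policy blocks it from reading responses served by your domain. So only requests built from your own pages carry the matching token. A minimal PHP sketch of the idea (field names, the action URL and the random source are just placeholders):

        <?php
        session_start();

        // One token per session, as the OWASP text describes.
        if (empty($_SESSION['csrf_token'])) {
            $_SESSION['csrf_token'] = bin2hex(openssl_random_pseudo_bytes(16));
        }

        if ($_SERVER['REQUEST_METHOD'] === 'POST') {
            // A forged cross-site request sends the cookie automatically, but the
            // attacker never saw the hidden field, so this comparison fails for
            // any request the victim's browser did not build from our own form.
            if (!isset($_POST['csrf_token']) || $_POST['csrf_token'] !== $_SESSION['csrf_token']) {
                http_response_code(403);
                exit('Invalid CSRF token');
            }
            // ... perform the state-changing action ...
        }
        ?>
        <form method="post" action="/transfer">
            <input type="hidden" name="csrf_token"
                   value="<?php echo htmlspecialchars($_SESSION['csrf_token']); ?>">
            <!-- rest of the form -->
        </form>

    This also answers the query-string worry: the token only needs to ride along on state-changing POST requests, not on every bookmarkable GET link.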

    Read the article

  • Covariance and Contravariance inference in C# 4.0

    - by devoured elysium
    When we define our interfaces in C# 4.0, we are allowed to mark each of the generic parameters as in or out. If we try to set a generic parameter as out and that would lead to a problem, the compiler raises an error, not allowing us to do that. Question: If the compiler has ways of inferring which uses are valid for both covariance (out) and contravariance (in), why do we have to mark interfaces as such? Wouldn't it be enough to just let us define the interfaces as we always did, and when we tried to use them in our client code, raise an error if we tried to use them in an unsafe way? Example: interface MyInterface<out T> { T abracadabra(); } //works OK interface MyInterface2<in T> { T abracadabra(); } //compiler raises an error. //This makes me think that the compiler is capable //of understanding what situations might generate //run-time problems and then prohibits them. Also, isn't this what Java does in the same situation? From what I recall, you just do something like IMyInterface<? extends whatever> myInterface; //covariance IMyInterface<? super whatever> myInterface2; //contravariance Or am I mixing things up? Thanks
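    One way to look at it: the in/out annotation is declaration-site variance, i.e. part of the interface's public contract, so every consumer of the interface knows which conversions are legal without inspecting its members; Java's wildcards are use-site variance, declared anew at each variable. A small sketch of what the C# 4.0 annotations buy callers (illustrative names only):

        using System;

        interface IProducer<out T>      // out: T appears only in output positions
        {
            T Produce();
        }

        interface IConsumer<in T>       // in: T appears only in input positions
        {
            void Consume(T item);
        }

        class StringProducer : IProducer<string>
        {
            public string Produce() { return "hello"; }
        }

        class ObjectConsumer : IConsumer<object>
        {
            public void Consume(object item) { Console.WriteLine(item); }
        }

        class Demo
        {
            static void Main()
            {
                // Covariance: a producer of strings can stand in for a producer of objects.
                IProducer<object> p = new StringProducer();

                // Contravariance: a consumer of objects can stand in for a consumer of strings.
                IConsumer<string> c = new ObjectConsumer();

                Console.WriteLine(p.Produce());
                c.Consume("world");
            }
        }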

    Read the article

  • Autologin for web application

    - by Maulin
    We want an AutoLogin feature that allows a user to log directly into our web application from a link. What is the best way to achieve this? We have the following approaches in mind. 1) Store the user credentials (username/password) in a cookie and send the cookie for authentication, e.g. http://www.mysite.com/AutoLogin (here username/password will be passed in the cookie), OR pass the user credentials in the link URL: http://www.mysite.com/AutoLogin?userid=<&password=< 2) Generate a random token and store the token and the user's IP in a server-side database. When the user logs in using the link, validate the token and user IP on the server, e.g. http://www.mysite.com/AutoLogin?token=< The problem with the 1st approach is that if an attacker copies the link/cookie from the user's machine to another machine, he can log in. The problem with the 2nd approach is that the user IP will be the same for all users of the same organization behind a proxy. Which of the above is better from a security perspective? If there is a better solution than those mentioned above, please let us know.
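    A hedged sketch of a variant of approach 2 that sidesteps the proxy problem: drop the IP check and instead make the token random, single-use and short-lived, so a copied link gains an attacker nothing once it has been used or has expired. Names below are made up; this is an illustration, not a drop-in implementation.

        using System;
        using System.Security.Cryptography;

        public class LoginToken
        {
            public string Token { get; set; }        // stored server-side with the user id
            public long UserId { get; set; }
            public DateTime ExpiresUtc { get; set; }
            public bool Used { get; set; }
        }

        public static class AutoLogin
        {
            public static string CreateToken()
            {
                var bytes = new byte[32];
                using (var rng = new RNGCryptoServiceProvider())
                {
                    rng.GetBytes(bytes);
                }
                // URL-safe encoding so the token can live in a link.
                return Convert.ToBase64String(bytes)
                              .Replace('+', '-').Replace('/', '_').TrimEnd('=');
            }

            // On click: look the token up, check Used and ExpiresUtc, mark it used,
            // then start a normal session for LoginToken.UserId. No credentials ever
            // leave the server, and the link stops working after first use or expiry.
        }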

    Read the article

  • Can I use JavaBeans with Hibernate?

    - by Dilllllo
    Hello, I'm using a Hibernate 2 plugin in my JSP web project, and the project has a registration page. Can I use JavaBeans to send information from an HTML <form> through my Hibernate classes? Without Hibernate I create a class with getters and setters like this: package com.java2s; public class Lang { private String choix; private String comm; public String getChoix() { return choix; } public void setChoix(String choix) { this.choix = choix; //System.out.println(choix); } public String getComm() { return comm; } public void setComm(String comm) { this.comm = comm; // System.out.println(comm); } } and I receive the form data with: <jsp:useBean id='user' class='com.java2s.Lang' type='com.java2s.Lang' scope='session' /> <jsp:setProperty name='user' property='*'/> but I thought Hibernate generates the getter and setter class itself! Any idea how to do this?
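    Hibernate does not generate your getter/setter classes; it persists the plain JavaBean you already wrote, provided it has an identifier property, a no-argument constructor and a mapping. A rough sketch (the id property and any table/column names in the mapping are assumptions):

        // Lang.java - the same bean used by <jsp:useBean>; Hibernate only needs an
        // id property and a no-argument constructor on top of what is already there.
        public class Lang {
            private Long id;          // surrogate key for Hibernate
            private String choix;
            private String comm;

            public Long getId() { return id; }
            public void setId(Long id) { this.id = id; }
            public String getChoix() { return choix; }
            public void setChoix(String choix) { this.choix = choix; }
            public String getComm() { return comm; }
            public void setComm(String comm) { this.comm = comm; }
        }

    With a Lang.hbm.xml mapping that class to a table, the servlet or JSP that receives the form-populated bean can persist it with an ordinary session.save(user) inside a transaction, so jsp:setProperty and Hibernate work on the exact same object.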

    Read the article

  • How to efficiently serve massive sitemaps in django

    - by mlissner
    I have a site with about 150K pages in its sitemap. I'm using the sitemap index generator to make the sitemaps, but really, I need a way of caching it, because building the 150 sitemaps of 1,000 links each is brutal on my server.[1] I COULD cache each of these sitemap pages with memcached, which is what I'm using elsewhere on the site... however, there are so many sitemaps that it would completely fill memcached, so that doesn't work. What I think I need is a way to use the database as the cache for these, and to only generate them when there are changes to them (which, as a result of the sitemap index, means only changing the latest couple of sitemap pages, since the rest are always the same).[2] But, as near as I can tell, I can only use one cache backend with Django. How can I have these sitemaps ready for when Google comes-a-crawlin' without killing my database or memcached? Any thoughts? [1] I've limited it to 1,000 links per sitemap page because generating the max, 50,000 links, just wasn't happening. [2] For example, if I have sitemap.xml?page=1, page=2...sitemap.xml?page=50, I only really need to change sitemap.xml?page=50 until it is full with 1,000 links; then I can cache it pretty much forever, and focus on page 51 until it's full, cache it forever, etc.
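    One way to use the database as the cache without touching the configured cache backend is to store each rendered sitemap page in an ordinary model and serve it from a view, rebuilding a page only when it changes. A rough sketch; the model, the view and the build_sitemap_page helper are all made-up names:

        # Sketch only: cache each rendered sitemap page as a row in the database.
        from django.db import models
        from django.http import HttpResponse

        class CachedSitemapPage(models.Model):
            page = models.PositiveIntegerField(unique=True)
            xml = models.TextField()
            updated = models.DateTimeField(auto_now=True)

        def serve_sitemap(request):
            page = int(request.GET.get('page', 1))
            try:
                cached = CachedSitemapPage.objects.get(page=page)
            except CachedSitemapPage.DoesNotExist:
                xml = build_sitemap_page(page)   # hypothetical: renders 1,000 links
                cached = CachedSitemapPage.objects.create(page=page, xml=xml)
            return HttpResponse(cached.xml, content_type='application/xml')

        # When new objects are added, delete only the row for the last (partial)
        # page so it is rebuilt on the next request; earlier pages never change.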

    Read the article

  • Dynamically created operators

    - by Gero
    I created a program using Dev-C++ and wxWidgets which solves a puzzle. The user must fill in the operation blocks and the result blocks, and the program will solve it. I'm solving it using brute force: I generate all non-repeating 9-digit number combinations using a recursive algorithm, and it does that pretty fast. Up to here all is great! But the problem is when my program evaluates the operations depending on the character in each block. It's extremely slow (it never gets the answer) because of the character comparisons against +, -, *, etc.; I'm doing a CASE. Is there some way, or some programming language, which allows dynamic creation of operators? So I could define the operator at ROW1COL2 to be a +, and the same for all the other operations. I'll leave a screenshot of the app, so it's easier to understand how the puzzle works. http://www.imageshare.web.id/images/9gg5cev8vyokp8rhlot9.png PS: The algorithm works; I tried it with a trivial puzzle and it solved it in a second.
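    Short of a language with user-defined operators, the usual C++ answer is to turn the characters into function pointers (or functors) once, before the brute-force loop, so the hot path never compares characters at all. A sketch under that assumption:

        #include <map>
        #include <stdexcept>

        int add(int a, int b) { return a + b; }
        int sub(int a, int b) { return a - b; }
        int mul(int a, int b) { return a * b; }
        int divi(int a, int b) { if (b == 0) throw std::domain_error("division by zero"); return a / b; }

        typedef int (*Op)(int, int);

        std::map<char, Op> make_ops() {
            std::map<char, Op> ops;
            ops['+'] = &add;
            ops['-'] = &sub;
            ops['*'] = &mul;
            ops['/'] = &divi;
            return ops;
        }

        // Usage: before solving, look up each block's character once and keep the
        // resulting Op in an array indexed by block position (e.g. op[ROW1][COL2]).
        // The inner brute-force loop then just calls op(a, b) with no char compares.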

    Read the article

  • PHPUnit - multiple stubs of same class

    - by keithjgrant
    I'm building unit tests for class Foo, and I'm fairly new to unit testing. A key component of my class is an instance of BarCollection which contains a number of Bar objects. One method in Foo iterates through the collection and calls a couple methods on each Bar object in the collection. I want to use stub objects to generate a series of responses for my test class. How do I make the Bar stub class return different values as I iterate? I'm trying to do something along these lines: $stubs = array(); foreach ($array as $value) { $barStub->expects($this->any()) ->method('GetValue')) ->will($this->returnValue($value)); $stubs[] = $barStub; } // populate stubs into `Foo` // assert results from `Foo->someMethod()` So Foo->someMethod() will produce data based on the results it receives from the Bar objects. But this gives me the following error whenever the array is longer than one: There was 1 failure: 1) testMyTest(FooTest) with data set #2 (array(0.5, 0.5)) Expectation failed for method name is equal to <string:GetValue> when invoked zero or more times. Mocked method does not exist. /usr/share/php/PHPUnit/Framework/MockObject/Mock.php(193) : eval()'d code:25 One thought I had was to use ->will($this->returnCallback()) to invoke a callback method, but I don't know how to indicate to the callback which Bar object is making the call (and consequently what response to give). Another idea is to use the onConsecutiveCalls() method, or something like it, to tell my stub to return 1 the first time, 2 the second time, etc, but I'm not sure exactly how to do this. I'm also concerned that if my class ever does anything other than ordered iteration on the collection, I won't have a way to test it.
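    The "Mocked method does not exist" failure typically means the stub being configured no longer carries a mocked GetValue method; creating a fresh stub object on every loop iteration, rather than reconfiguring one $barStub, avoids it. A sketch against the PHPUnit 3.x-era API used in the question (Foo's constructor and the expected result are assumptions):

        <?php
        class FooTest extends PHPUnit_Framework_TestCase
        {
            public function testSomeMethod()
            {
                $values = array(0.5, 1.5, 2.5);
                $stubs  = array();

                foreach ($values as $value) {
                    // New stub per Bar, each pinned to its own return value.
                    $barStub = $this->getMock('Bar', array('GetValue'));
                    $barStub->expects($this->any())
                            ->method('GetValue')
                            ->will($this->returnValue($value));
                    $stubs[] = $barStub;
                }

                $foo = new Foo($stubs);   // hypothetical constructor taking the collection
                $this->assertEquals(4.5, $foo->someMethod());
            }
        }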

    Read the article

  • correcting fisheye distortion programmatically

    - by Will
    I have some points that describe positions in a picture taken with a fisheye lens. I've found this description of how to generate a fisheye effect, but not how to reverse it. How do you calculate the radial distance from the centre to go from fisheye to rectilinear? My function stub looks like this: Point correct_fisheye(const Point& p,const Size& img) { // to polar const Point centre = {img.width/2,img.height/2}; const Point rel = {p.x-centre.x,p.y-centre.y}; const double theta = atan2(rel.y,rel.x); double R = sqrt((rel.x*rel.x)+(rel.y*rel.y)); // fisheye undistortion in here please //... change R ... // back to rectangular const Point ret = Point(centre.x+R*cos(theta),centre.y+R*sin(theta)); fprintf(stderr,"(%d,%d) in (%d,%d) = %f,%f = (%d,%d)\n",p.x,p.y,img.width,img.height,theta,R,ret.x,ret.y); return ret; } Alternatively, I could somehow convert the image from fisheye to rectilinear before finding the points, but I'm completely befuddled by the OpenCV documentation. Is there a straightforward way to do it in OpenCV, and does it perform well enough to do it to a live video feed?
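    If the lens follows the common equidistant model (r = f·θ), the undistortion inside the stub is a one-liner: recover θ from the fisheye radius and push it through the pinhole projection. This is only a sketch; the constant f depends on the lens, and other fisheye models (equisolid, stereographic) use different formulas:

        #include <cmath>

        // Equidistant fisheye: r_fisheye = f * theta, so theta = R / f and the
        // rectilinear (pinhole) radius is f * tan(theta).
        double undistort_radius(double R, double f) {
            return f * std::tan(R / f);
        }

        // Inside correct_fisheye(), after computing R:
        //     R = undistort_radius(R, f);   // f is roughly img.width / fov_in_radians

    For the OpenCV route, cv::initUndistortRectifyMap plus cv::remap precomputes the pixel mapping once, so applying it to each frame of a live feed is cheap; it does, however, need calibrated distortion coefficients for the lens.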

    Read the article

  • Create Folders from text file and place dummy file in them using a CustomAction

    - by Birkoff
    I want my msi installer to generate a set of folders in a particular location and put a dummy file in each directory. Currently I have the following CustomActions: <CustomAction Id="SMC_SetPathToCmd" Property="Cmd" Value="[SystemFolder]cmd.exe"/> <CustomAction Id="SMC_GenerateMovieFolders" Property="Cmd" ExeCommand="for /f &quot;tokens=* delims= &quot; %a in ([MBSAMPLECOLLECTIONS]movies.txt) do (echo %a)" /> <CustomAction Id="SMC_CopyDummyMedia" Property="Cmd" ExeCommand="for /f &quot;tokens=* delims= &quot; %a in ([MBSAMPLECOLLECTIONS]movies.txt) do (copy [MBSAMPLECOLLECTIONS]dummy.avi &quot;%a&quot;\&quot;%a&quot;.avi)" /> These are called in the InstallExecuteSequence: <Custom Action="SMC_SetPathToCmd" After="InstallFinalize"/> <Custom Action="SMC_GenerateMovieFolders" After="SMC_SetPathToCmd"/> <Custom Action="SMC_CopyDummyMedia" After="SMC_GenerateMovieFolders"/> The custom actions seem to start, but only a blank command prompt window is shown and the directories are not generated. The files needed for the customaction are copied to the correct directory: <Directory Id="WIX_DIR_COMMON_VIDEO"> <Directory Id="MBSAMPLECOLLECTIONS" Name="MB Sample Collections" /> </Directory> <DirectoryRef Id="MBSAMPLECOLLECTIONS"> <Component Id="SampleCollections" Guid="C481566D-4CA8-4b10-B08D-EF29ACDC10B5" DiskId="1"> <File Id="movies.txt" Name="movies.txt" Source="SampleCollections\movies.txt" Checksum="no" /> <File Id="series.txt" Name="series.txt" Source="SampleCollections\series.txt" Checksum="no" /> <File Id="dummy.avi" Name="dummy.avi" Source="SampleCollections\dummy.avi" Checksum="no" /> </Component> </DirectoryRef> What's wrong with these Custom Actions or is there a simpler way to do this?

    Read the article

  • Java Performance measurement

    - by portoalet
    Hi, I am doing some Java performance comparison between my classes, and wondering if there is some sort of Java Performance Framework to make writing performance measurement code easier? I.e, what I am doing now is trying to measure what effect does it have having a method as "synchronized" as in PseudoRandomUsingSynch.nextInt() compared to using an AtomicInteger as my "synchronizer". So I am trying to measure how long it takes to generate random integers using 3 threads accessing a synchronized method looping for say 10000 times. I am sure there is a much better way doing this. Can you please enlighten me? :) public static void main( String [] args ) throws InterruptedException, ExecutionException { PseudoRandomUsingSynch rand1 = new PseudoRandomUsingSynch((int)System.currentTimeMillis()); int n = 3; ExecutorService execService = Executors.newFixedThreadPool(n); long timeBefore = System.currentTimeMillis(); for(int idx=0; idx<100000; ++idx) { Future<Integer> future = execService.submit(rand1); Future<Integer> future1 = execService.submit(rand1); Future<Integer> future2 = execService.submit(rand1); int random1 = future.get(); int random2 = future1.get(); int random3 = future2.get(); } long timeAfter = System.currentTimeMillis(); long elapsed = timeAfter - timeBefore; out.println("elapsed:" + elapsed); } the class public class PseudoRandomUsingSynch implements Callable<Integer> { private int seed; public PseudoRandomUsingSynch(int s) { seed = s; } public synchronized int nextInt(int n) { byte [] s = DonsUtil.intToByteArray(seed); SecureRandom secureRandom = new SecureRandom(s); return ( secureRandom.nextInt() % n ); } @Override public Integer call() throws Exception { return nextInt((int)System.currentTimeMillis()); } } Regards
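    A hand-rolled alternative (sketch only) that avoids two of the usual pitfalls in the code above: it does a warm-up pass so the JIT has compiled the hot path before timing starts, and it times many plain calls per thread instead of paying Future/submit overhead for every random number. For anything beyond a quick comparison, a dedicated microbenchmark harness such as JMH is the safer tool.

        import java.util.concurrent.CountDownLatch;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class SynchBenchmark {
            private static final int THREADS = 3;
            private static final int CALLS_PER_THREAD = 100000;

            public static void main(String[] args) throws Exception {
                PseudoRandomUsingSynch rand = new PseudoRandomUsingSynch((int) System.currentTimeMillis());
                ExecutorService pool = Executors.newFixedThreadPool(THREADS);

                run(pool, rand);                              // warm-up pass, result ignored
                long start = System.nanoTime();
                run(pool, rand);                              // measured pass
                System.out.println("elapsed ms: " + (System.nanoTime() - start) / 1000000);
                pool.shutdown();
            }

            private static void run(ExecutorService pool, final PseudoRandomUsingSynch rand)
                    throws InterruptedException {
                final CountDownLatch done = new CountDownLatch(THREADS);
                for (int t = 0; t < THREADS; t++) {
                    pool.execute(new Runnable() {
                        public void run() {
                            for (int i = 0; i < CALLS_PER_THREAD; i++) {
                                rand.nextInt(1000);           // the synchronized call under test
                            }
                            done.countDown();
                        }
                    });
                }
                done.await();
            }
        }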

    Read the article

  • Syncing two separate structures to the same master data

    - by Mike Burton
    I've got multiple structures to maintain in my application. All link to the same records, and one of them could be considered the "master" in that it reflects actual relationships held in files on disk. The other structures are used to "call out" elements of the main design for purchase and work orders. I'm struggling to come up with a pattern that deals appropriately with changes to the master data. As an example, the following trees might refer to the same data: A |_ B |_ C |_ D |_ E |_ B |_ C |_ D A |_ B E C |_ D A |_ B C D E These secondary structures follow internal rules, but their overall structure is usually user-determined. In all cases (including the master), any element can be used in multiple locations and in multiple trees. When I add a child to any element in the tree, I want to either automatically build the secondary structure for each instance of the "master" element or at least advertise the situation to the user and allow them to manually generate the data required for the secondary trees. Is there any pattern which might apply to this situation? I've been treating it as a view problem, but it turns out to be more complicated than that when you look at the initial generation of the data.

    Read the article

  • Correct Interactive Website System Design Concepts / Methods?

    - by Xandel
    Hi all, I hope this question isn't too open ended, but a nudge in the right direction is all I need! I am currently building an online accounting system - the idea is that users can register, log in, and then create customers, generate invoices and other documents and eventually print / email those documents out. I am a Java programmer but unfortunately haven't had too much experience in web projects and their design concepts... This is what I have got thus far - A Tomcat web server which loads Spring. Spring handles my DAO's and required classes for the business logic. Tomcat serves JSP's containing the pages which make up the website. To make it interactive I have used JavaScript in the pages (jQuery and its AJAX calls) to send and receive JSON data (this is done by posting to a page which calls a handleAction() method in one of my classes). My question is, am I tackling this project in the right way? Am I using the right tools and methods? I understand there are literally countless ways of tackling any project but I would really love to get feedback with regards to tried and tested methods, general practices etc. Thanks in advance! Xandel

    Read the article

  • Can I configure the ResetPassword in Asp.Net's MembershipProvider?

    - by coloradotechie
    I have an C# asp.net app using the default Sql MembershipProvider. My web.config has a few settings that control how I'm using this Provider: enablePasswordRetrieval="false" enablePasswordReset="true" requiresUniqueEmail="true" passwordFormat="Hashed" minRequiredPasswordLength="5" The problem I'm running into is that when people reset their passwords, it seems the ResetPassword() method returns a password that is longer than I want and has characters that can be confusing (l,1,i,I,0,O). Furthermore, I'm sending my users an email with a plain-text message and an HTML message (I'm using MailMessage with AlternateViews). If the password has unsafe HTML characters in it, when the email clients render the HTML text the password might be different (e.g. the %, &, and < aren't exactly HTML safe). I've looked over the "add" element that belongs in the web.config, but I don't see any extra configuration properties to only include certain characters in the ResetPassword() method and to limit the password length. Can I configure the ResetPassword() method to limit the password length and limit the character set it is choosing from? Right now I have a workaround: I call ResetPassword() to make sure the supplied answer is correct, and then I use a RandomPassword generator I downloaded off the internet to generate a password that I like (without ambiguous characters, HTML safe, and only 8 characters long) and then I call ChangePassword() to change the user's password after I've already reset it. My workaround seems kludgy and I thought it would be better to configure ResetPassword() to do what I want. Thank you~! ColoradoTechie
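    There is no web.config knob for the reset alphabet, so the reset-then-change sequence described above is the usual answer; the part that can be tidied up is the generator. A sketch using a crypto RNG over an alphabet with the ambiguous and HTML-unsafe characters removed (the class and method names are made up, only the MembershipUser calls are the standard API):

        using System;
        using System.Security.Cryptography;
        using System.Text;
        using System.Web.Security;

        public static class PasswordHelper
        {
            // No 0/O, 1/l/I, and nothing that needs HTML escaping.
            private const string Alphabet = "abcdefghjkmnpqrstuvwxyzABCDEFGHJKMNPQRSTUVWXYZ23456789";

            public static string NewPassword(int length)
            {
                var bytes = new byte[length];
                using (var rng = new RNGCryptoServiceProvider())
                {
                    rng.GetBytes(bytes);
                }
                var sb = new StringBuilder(length);
                foreach (byte b in bytes)
                {
                    sb.Append(Alphabet[b % Alphabet.Length]);   // slight modulo bias, acceptable here
                }
                return sb.ToString();
            }

            public static string ResetToFriendlyPassword(MembershipUser user, string answer)
            {
                string temp = user.ResetPassword(answer);       // verifies the security answer
                string friendly = NewPassword(8);
                user.ChangePassword(temp, friendly);            // swap in the readable password
                return friendly;
            }
        }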

    Read the article

  • How to improve this piece of code

    - by user303518
    Can anyone help me on this. It may be very frustrating for you all. But I want you guys to take a moment to look at the code below and please tell me all the things that are wrong in the below piece of code. You can copy it into your visual studio to analyze this better. You don’t have to make this code compile. My task is to get the following things: Most obvious mistakes with this code All the things that are wrong/bad practices with the code below Modify/Write an optimized version of this code. Keep in mind, the code DOES NOT need to compile. Reduce the number of lines of code You should NEVER try to implement something like below: public List<ValidationErrorDto> ProcessEQuote(string eQuoteXml, long programUniversalID) { // Get Program Info. DataTable programs = GetAllPrograms(); DataRow[] programRows = programs.Select(string.Format("ProgramUniversalID = {0}", programUniversalID)); long programID = (long)programRows[0]["ProgramID"]; string programName = (string)programRows[0]["Description"]; // Get Config file values. string fromEmail = ConfigurationManager.AppSettings["eQuotesFromEmail"]; string technicalSupportPhone = ConfigurationManager.AppSettings["TechnicalSupportPhone"]; string fromEmailDisplayName = string.IsNullOrEmpty(ConfigurationManager.AppSettings["eQuotesFromDisplayName"]) ? null : string.Format(ConfigurationManager.AppSettings["eQuotesFromDisplayName"], programName); string itEmail = !string.IsNullOrEmpty(ConfigurationManager.AppSettings["ITEmail"]) ? ConfigurationManager.AppSettings["ITEmail"] : string.Empty; string itName = !string.IsNullOrEmpty(ConfigurationManager.AppSettings["ITName"]) ? ConfigurationManager.AppSettings["ITName"] : "IT"; try { List<ValidationErrorDto> allValidationErrors = new List<ValidationErrorDto>(); List<ValidationErrorDto> validationErrors = new List<ValidationErrorDto>(); if (validationErrors.Count == 0) { validationErrors.AddRange(ValidateEQuoteXmlAgainstSchema(eQuoteXml)); if (validationErrors.Count == 0) { XmlDocument eQuoteXmlDoc = new XmlDocument(); eQuoteXmlDoc.LoadXml(eQuoteXml); XmlElement rootElement = eQuoteXmlDoc.DocumentElement; XmlNodeList quotesList = rootElement.SelectNodes("Quote"); foreach (XmlNode node in quotesList) { // Each node should be a quote node but to be safe, check if (node.Name == "Quote") { string groupName = node.SelectSingleNode("Group/GroupName").InnerText; string groupCity = node.SelectSingleNode("Group/GroupCity").InnerText; string groupPostalCode = node.SelectSingleNode("Group/GroupZipCode").InnerText; string groupSicCode = node.SelectSingleNode("Group/GroupSIC").InnerText; string generalAgencyLicenseNumber = node.SelectSingleNode("Group/GALicenseNbr").InnerText; string brokerName = node.SelectSingleNode("Group/BrokerName").InnerText; string deliverToEmailAddress = node.SelectSingleNode("Group/ReturnEmailAddress").InnerText; string brokerEmail = node.SelectSingleNode("Group/BrokerEmail").InnerText; string groupEligibleEmployeeCountString = node.SelectSingleNode("Group/GroupNbrEmployees").InnerText; string quoteEffectiveDateString = node.SelectSingleNode("Group/QuoteEffectiveDate").InnerText; string salesRepName = node.SelectSingleNode("Group/SalesRepName").InnerText; string salesRepPhone = node.SelectSingleNode("Group/SalesRepPhone").InnerText; string salesRepEmail = node.SelectSingleNode("Group/SalesRepEmail").InnerText; string brokerLicenseNumber = node.SelectSingleNode("Group/BrokerLicenseNbr").InnerText; DateTime? 
quoteEffectiveDate = null; int eligibleEmployeeCount = int.Parse(groupEligibleEmployeeCountString); DateTime quoteEffectiveDateOut; if (!DateTime.TryParse(quoteEffectiveDateString, out quoteEffectiveDateOut)) validationErrors.Add(ValidationHelper.CreateValidationError((long)QuoteField.EffectiveDate, "Quote Effective Date", ValidationErrorDto.ValueOutOfRange, false, ValidationHelper.CreateValidationContext(Entity.QuoteDetail, "Quote"))); else quoteEffectiveDate = quoteEffectiveDateOut; Dictionary<string, string> replacementCodeValues = new Dictionary<string, string>(); if (string.IsNullOrEmpty(Resources.ParameterMessageKeys.ResourceManager.GetString("GroupName"))) throw new InvalidOperationException("GroupName key is not configured"); replacementCodeValues.Add(Resources.ParameterMessageKeys.GroupName, groupName); replacementCodeValues.Add(Resources.ParameterMessageKeys.ProgramName, programName); replacementCodeValues.Add(Resources.ParameterMessageKeys.SalesRepName, salesRepName); replacementCodeValues.Add(Resources.ParameterMessageKeys.SalesRepPhone, salesRepPhone); replacementCodeValues.Add(Resources.ParameterMessageKeys.SalesRepEmail, salesRepEmail); replacementCodeValues.Add(Resources.ParameterMessageKeys.TechnicalSupportPhone, technicalSupportPhone); replacementCodeValues.Add(Resources.ParameterMessageKeys.EligibleEmployeCount, eligibleEmployeeCount.ToString()); // Retrieve the CityID and StateID long? cityID = null; long? stateID = null; DataSet citiesAndStates = Addresses.GetCitiesAndStatesFromPostalCode(groupPostalCode); DataTable cities = citiesAndStates.Tables[0]; DataTable states = citiesAndStates.Tables[1]; DataRow[] cityRows = cities.Select(string.Format("Name = '{0}'", groupCity)); if (cityRows.Length > 0) { cityID = (long)cityRows[0]["CityID"]; DataRow[] stateRows = states.Select(string.Format("CityID = {0}", cityID)); if (stateRows.Length > 0) stateID = (long)stateRows[0]["StateID"]; } // If the StateID does not exist, then we cannot get the GeneralAgency, so set a validation error and do not contine. // Else, Continue and look for an General Agency. If a GA was found, look for or create a Broker. Then look for or create a Prospect Group // Then using the info, create a quote. if (!stateID.HasValue) validationErrors.Add(ValidationHelper.CreateValidationError((long)ProspectGroupField.State, "Prospect Group State", ValidationErrorDto.RequiredFieldMissing, false, ValidationHelper.CreateValidationContext(Entity.ProspectGroup, "Prospect Group"))); bool brokerValidationError = false; SalesRepDto salesRep = GetSalesRepByEmail(salesRepEmail, ref validationErrors); if (salesRep == null) { string exceptionMessage = "Sales Rep Not found in Opportunity System. Please make sure Sales Rep is present in the system by adding the sales rep in AWP SR Add Screen." + Environment.NewLine; exceptionMessage = exceptionMessage + " Sales Rep Name: " + salesRepName + Environment.NewLine; exceptionMessage = exceptionMessage + " Sales Rep Email: " + salesRepEmail + Environment.NewLine; exceptionMessage = exceptionMessage + " Module : E-Quote Service" + Environment.NewLine; throw new Exception(exceptionMessage); } if (validationErrors.Count == 0) { // Note that StateID and EffectiveDate should be valid at this point. If it weren't there would be validation errors. // Find General Agency long? generalAgencyID; validationErrors.AddRange(GetEQuoteGeneralAgency(generalAgencyLicenseNumber, stateID.Value, out generalAgencyID)); // If GA was found, check for Broker. 
if (validationErrors.Count == 0 && generalAgencyID.HasValue) { Dictionary<string, string> brokerNameParts = ContactHelper.GetNamePartsFromFullName(brokerName); long? brokerID; validationErrors.AddRange(CreateEQuoteBroker(brokerNameParts["FirstName"], brokerNameParts["LastName"], brokerEmail, brokerLicenseNumber, stateID.Value, generalAgencyID.Value, salesRep, programID, out brokerID)); // If Broker was found but had validation errors if (validationErrors.Count > 0) { DeliverEmailMessage(programID, quoteEffectiveDate.Value, fromEmail, fromEmailDisplayName, itEmail, DocumentType.EQuoteBrokerValidationFailureMessageEmail, replacementCodeValues); brokerValidationError = true; } // If Broker was found, check for Prospect Group if (validationErrors.Count == 0 && brokerID.HasValue) { long? prospectGroupID; validationErrors.AddRange(CreateEQuoteProspectGroup(groupName, cityID, stateID, groupPostalCode, groupSicCode, brokerID.Value, out prospectGroupID)); if (validationErrors.Count == 0 && prospectGroupID.HasValue) { if (validationErrors.Count == 0) { long? quoteID; validationErrors.AddRange(CreateEQuote(programID, prospectGroupID.Value, generalAgencyID.Value, quoteEffectiveDate.Value, eligibleEmployeeCount, deliverToEmailAddress, node.SelectNodes("Employees/Employee"), deliverToEmailAddress, out quoteID)); if (validationErrors.Count == 0 && quoteID.HasValue) { QuoteFromServiceDto quoteFromService = GetQuoteByQuoteID(quoteID.Value); // Generate Pre-Message replacementCodeValues.Add(Resources.ParameterMessageKeys.QuoteNumber, string.Format("{0}.{1}", quoteFromService.QuoteNumber, quoteFromService.QuoteVersion)); replacementCodeValues.Add(Resources.ParameterMessageKeys.Name, brokerName); replacementCodeValues.Add(Resources.ParameterMessageKeys.LicenseNumbers, brokerLicenseNumber); DeliverEmailMessage(programID, quoteEffectiveDate.Value, fromEmail, fromEmailDisplayName, deliverToEmailAddress, DocumentType.EQuotePreMessageEmail, replacementCodeValues); bool quoteGenerated = false; try { quoteGenerated = GenerateAndDeliverInitialQuote(quoteID.Value); } catch (Exception exception) { TraceLogger.LogException(exception, LoggingCategory); if (replacementCodeValues.ContainsKey(Resources.ParameterMessageKeys.Name)) replacementCodeValues[Resources.ParameterMessageKeys.Name] = itName; else replacementCodeValues.Add(Resources.ParameterMessageKeys.Name, itName); if (replacementCodeValues.ContainsKey(Resources.ParameterMessageKeys.Errors)) replacementCodeValues[Resources.ParameterMessageKeys.Errors] = string.Format("Errors:\r\n:{0}", exception); else replacementCodeValues.Add(Resources.ParameterMessageKeys.Errors, string.Format("Errors:\r\n:{0}", exception)); DeliverEmailMessage(programID, quoteEffectiveDate.Value, fromEmail, fromEmailDisplayName, itEmail, DocumentType.EQuoteSystemFailureMessageEmail, replacementCodeValues); } if (!quoteGenerated) { // Generate System Failure Message if (replacementCodeValues.ContainsKey(Resources.ParameterMessageKeys.Name)) replacementCodeValues[Resources.ParameterMessageKeys.Name] = brokerName; else replacementCodeValues.Add(Resources.ParameterMessageKeys.Name, brokerName); if (replacementCodeValues.ContainsKey(Resources.ParameterMessageKeys.Errors)) replacementCodeValues[Resources.ParameterMessageKeys.Errors] = string.Empty; else replacementCodeValues.Add(Resources.ParameterMessageKeys.Errors, string.Empty); DeliverEmailMessage(programID, quoteEffectiveDate.Value, fromEmail, fromEmailDisplayName, itEmail, DocumentType.EQuoteSystemFailureMessageEmail, replacementCodeValues); } } 
} } } } } //if (validationErrors.Count > 0) // Per spec, if Broker Validation returned an error we already sent an email, don't send another generic one if (validationErrors.Count > 0 && (!brokerValidationError)) { StringBuilder errorString = new StringBuilder(); foreach (ValidationErrorDto validationError in validationErrors) errorString = errorString.AppendLine(string.Format(" - {0}", ValidationHelper.GetValidationErrorReason(validationError, true))); replacementCodeValues.Add(Resources.ParameterMessageKeys.Errors, errorString.ToString()); if (replacementCodeValues.ContainsKey(Resources.ParameterMessageKeys.Name)) replacementCodeValues[Resources.ParameterMessageKeys.Name] = brokerName; else replacementCodeValues.Add(Resources.ParameterMessageKeys.Name, brokerName); // HACK: If there is no effective date, then use Today's date. Do we care about the effecitve dat on validation message? if (quoteEffectiveDate.HasValue) DeliverEmailMessage(programID, quoteEffectiveDate.Value, fromEmail, fromEmailDisplayName, itEmail, DocumentType.EQuoteValidationFailureMessageEmail, replacementCodeValues); else DeliverEmailMessage(programID, DateTime.Now, fromEmail, fromEmailDisplayName, itEmail, DocumentType.EQuoteValidationFailureMessageEmail, replacementCodeValues); } allValidationErrors.AddRange(validationErrors); validationErrors.Clear(); } } } else { // Use todays date as the effective date. Dictionary<string, string> replacementCodeValues = new Dictionary<string, string>(); StringBuilder errorString = new StringBuilder(); foreach (ValidationErrorDto validationError in validationErrors) errorString = errorString.AppendLine(string.Format(" - {0}", ValidationHelper.GetValidationErrorReason(validationError, true))); replacementCodeValues.Add(Resources.ParameterMessageKeys.Errors, string.Format("The following validation errors occurred: \r\n{0}", errorString)); replacementCodeValues.Add(Resources.ParameterMessageKeys.ProgramName, programName); replacementCodeValues.Add(Resources.ParameterMessageKeys.GroupName, "Group"); replacementCodeValues.Add(Resources.ParameterMessageKeys.Name, itName); DeliverEmailMessage(programID, DateTime.Now, fromEmail, null, itEmail, DocumentType.EQuoteSystemFailureMessageEmail, replacementCodeValues); allValidationErrors.AddRange(validationErrors); validationErrors.Clear(); } } return allValidationErrors; } catch (Exception exception) { TraceLogger.LogException(exception, LoggingCategory); // Use todays date as the effective date. Dictionary<string, string> replacementCodeValues = new Dictionary<string, string>(); replacementCodeValues.Add(Resources.ParameterMessageKeys.ProgramName, programName); replacementCodeValues.Add(Resources.ParameterMessageKeys.GroupName, "Group"); replacementCodeValues.Add(Resources.ParameterMessageKeys.Name, itName); replacementCodeValues.Add(Resources.ParameterMessageKeys.Errors, string.Format("Errors:\r\n:{0}", exception)); DeliverEmailMessage(programID, DateTime.Now, fromEmail, null, itEmail, DocumentType.EQuoteSystemFailureMessageEmail, replacementCodeValues); throw new FaultException(exception.ToString()); } }

    Read the article

  • Documentation and Build system for Mono/C#

    - by dcolish
    I'm starting out on a new project and a team member has decided to use C# as the implementation language. I don't have a lot of experience in C#, but a brief reading shows that it runs on a very capable, complete cross-platform VM. Beyond the language, I've been having trouble selecting tools and workflows for managing the code as the project grows. It should be fairly small (<10K lines), but I would like the ability to generate documentation as the project grows, manage any external dependencies that we decide to use, and automate builds and testing. I am wondering what tools are commonly used or considered best practice for this language. I am mainly concerned with how a build system would work on *nix as well as Windows. Are there C#-specific tools, or is Make more common? In addition, I'd like to use a DVCS, but it doesn't look like Visual Studio and MonoDevelop support the same ones. What's the common VCS of choice for C#? For testing, what sort of unit testing is available for C#/Mono? Finally, I know that there are good doc generators, but with the question of the build system, I would really like that to just be a single step in the build, similar to how testing is a step. Normally I'd automate with Hudson, but I am wondering if there is something more specific to the platform. Overall, I'd love to see a solution that provides a decent workflow on both Windows and *nix without a heavy admin burden. I am pretty sure this is the holy grail of project management, so anything that puts me on that path is awesome.

    Read the article

  • iPhone localization

    - by hardik
    Hello all. When I run the command to generate the Localizable.strings file from my Terminal, it reports "bad entry" errors for the Classes files; the file gets generated but it has no entries in it, even though it should. Here is what I am running in my Terminal, but somehow it is not working; please guide me, I need to solve this. Last login: Mon Jun 7 18:02:09 on ttys000 comp10:~ admin$ cd .. comp10:Users admin$ cd .. comp10:/ admin$ cd /Users/admin/Desktop/localisationwithcode comp10:localisationwithcode admin$ sudo usage: sudo -K | -L | -V | -h | -k | -l | -v usage: sudo [-HPSb] [-p prompt] [-u username|#uid] { -e file [...] | -i | -s | <command> } comp10:localisationwithcode admin$ genstrings Classes/*.m Bad entry in file Classes/localisationwithcodeViewController.m (line = 35): Argument is not a literal string. Bad entry in file Classes/localisationwithcodeViewController.m (line = 36): Argument is not a literal string. Bad entry in file Classes/localisationwithcodeViewController.m (line = 37): Argument is not a literal string. Bad entry in file Classes/localisationwithcodeViewController.m (line = 38): Argument is not a literal string. 2010-06-07 18:04:45.047 genstrings[3851:10b] _CFGetHostUUIDString: unable to determine UUID for host. Error: 35 comp10:localisationwithcode admin$
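    For what it's worth, "Argument is not a literal string" is genstrings reporting that the first argument of NSLocalizedString on those lines (35-38) is a variable or an expression rather than a literal; genstrings can only harvest literal keys. A hedged illustration (not the poster's actual code):

        // genstrings can extract this: both arguments are literal strings.
        label.text = NSLocalizedString(@"Welcome", @"Greeting on the first screen");

        // ...but not something like this; a non-literal key produces
        // "Argument is not a literal string." and nothing is written for it.
        // label.text = NSLocalizedString(someKeyVariable, nil);

    Also note the output file is Localizable.strings (plural), and genstrings is normally run with -o pointing at the target .lproj directory, e.g. genstrings -o en.lproj Classes/*.m.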

    Read the article

  • SQL Joins Excluding Data

    - by Andrew
    Say I have three tables:

    Fruit (Table 1)
    ------
    Apple
    Orange
    Pear
    Banana

    Produce Store A (Table 2 - 2 columns: Fruit for sale => Price)
    -------------------------
    Apple  => 1.00
    Orange => 1.50
    Pear   => 2.00

    Produce Store B (Table 3 - 2 columns: Fruit for sale => Price)
    ------------------------
    Apple  => 1.10
    Pear   => 2.50
    Banana => 1.00

    If I would like to write a query with Column 1: the set of fruit offered at Produce Store A UNION Produce Store B, Column 2: Price of the fruit at Produce Store A (or null if that fruit is not offered), Column 3: Price of the fruit at Produce Store B (or null if that fruit is not offered), how would I go about joining the tables? I am facing a similar problem (with more complex tables), and no matter what I try, if the "fruit" is not at "produce store a" but is at "produce store b", it is excluded (since I am joining produce store a first). I have even written a subquery to generate a full list of fruits, then left join Produce Store A, but it is still eliminating the fruits not offered at A. Any ideas?
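    A FULL OUTER JOIN is the direct way to keep fruit sold by only one of the stores. The sketch below assumes the store tables are named StoreA/StoreB with columns Fruit and Price; if the database lacks FULL OUTER JOIN (e.g. MySQL), LEFT JOIN the master Fruit table to each store table instead.

        SELECT
            COALESCE(a.Fruit, b.Fruit) AS Fruit,
            a.Price                    AS PriceA,   -- NULL when Store A doesn't sell it
            b.Price                    AS PriceB    -- NULL when Store B doesn't sell it
        FROM StoreA AS a
        FULL OUTER JOIN StoreB AS b
            ON a.Fruit = b.Fruit;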

    Read the article

  • DAL Layer : EF 4.0 or Normal Data access layer with Stored Procedure

    - by Harryboy
    Hello Experts, Application : I am working on one mid-large size application which will be used as a product, we need to decide on our DAL layer. Application UI is in Silverlight and DAL layer is going to be behind service layer. We are also moving ahead with domain model, so our DB tables and domain classes are not having same structure. So patterns like Data Mapper and Repository will definitely come into picture. I need to design DAL Layer considering below mentioned factors in priority manner Speed of Development with above average performance Maintenance Future support and stability of the technology Performance Limitation : 1) As we need to strictly go ahead with microsoft, we can not use NHibernate or any other ORM except EF 4.0 2) We can use any code generation tool (Should be Open source or very cheap) but it should only generate code in .Net, so there would not be any licensing issue on per copy basis. Questions I read so many articles about EF 4.0, on outset it looks like that it is still lacking in features from NHibernate but it is considerably better then EF 1.0 So, Do you people feel that we should go ahead with EF 4.0 or we should stick to ADO .Net and use any code geneartion tool like code smith or any other you feel best Also i need to answer questions like what time it will take to port application from EF 4.0 to ADO .Net if in future we stuck up with EF 4.0 for some features or we are having serious performance issue. In reverse case if we go ahead and choose ADO .Net then what time it will take to swith to EF 4.0 Lastly..as i was going through the article i found the code only approach (with POCO classes) seems to be best suited for our requirement as switching is really easy from one technology to other. Please share your thoughts on the same and please guide on the above questions

    Read the article

  • SQL Server Index cost

    - by yellowstar
    I have read that one of the tradeoffs for adding table indexes in SQL Server is the increased cost of insert/update/delete queries to benefit the performance of select queries. I can conceptually understand what happens in the case of an insert because SQL Server has to write entries into each index matching the new rows, but update and delete are a little more murky to me because I can't quite wrap my head around what the database engine has to do. Let's take DELETE as an example and assume I have the following schema (pardon the pseudo-SQL) TABLE Foo col1 int ,col2 int ,col3 int ,col4 int PRIMARY KEY (col1,col2) INDEX IX_1 col3 INCLUDE col4 Now, if I issue the statement DELETE FROM Foo WHERE col1=12 AND col2 > 34 I understand what the engine must do to update the table (or clustered index if you prefer). The index is set up to make it easy to find the range of rows to be removed and do so. However, at this point it also needs to update IX_1 and the query that I gave it gives no obvious efficient way for the database engine to find the rows to update. Is it forced to do a full index scan at this point? Does the engine read the rows from the clustered index first and generate a smarter internal delete against the index? It might help me to wrap my head around this if I understood better what is going on under the hood, but I guess my real question is this. I have a database that is spending a significant amount of time in delete and I'm trying to figure out what I can do about it. When I display the execution plan for the deletion, it just shows an entry for "Clustered Index Delete" on table Foo which lists in the details section the other indices that need to be updated but I don't get any indication of the relative cost of these other indices. Are they all equal in this case? Is there some way that I can estimate the impact of removing one or more of these indices without having to actually try it?

    Read the article

  • How to handle SalesForce WSDL files for sandbox and production sites in ASP.Net?

    - by Traveling Tech Guy
    I need to authenticate users and get info about them from an ASP.Net application. Since I have 2 sites (sandbox, production) and 2 org IDs, I needed to generate 2 SalesForce WSDL files. I diffed the 2 files (each about 600kb in size) and, while they are 95% the same, there are enough differences strewn all over the place for me to need to use them both. I added both as web references to my solution, and here's where my problem starts. Obviously, I cannot use both references in the same file, as they contain the same classes/functions. I had to write a quick-and-dirty solution over the weekend, so I just created 2 classes - each using a different web reference, but otherwise with the exact same functionality - and I use the appropriate one based on the URL the user is coming from. This works well, but strikes me as a bad (read: quick-and-dirty) solution. My question: is there any way to do one or more of the following: change the web reference on the fly? use both web references in the same file, but put one in a different namespace? find a better solution to the whole situation? I end up with a huge XmlSerializer.dll (3mb!) - probably due to using both huge WSDL files. Thanks for your time.

    Read the article

  • SQL Server: Output an XML field as tabular data using a stored procedure

    - by Pawan
    I am using a table with an XML data field to store the audit trails of all other tables in the database. That means the same XML field has various XML information. For example my table has two records with XML data like this: 1st record: <client> <name>xyz</name> <ssn>432-54-4231</ssn> </client> 2nd record: <emp> <name>abc</name> <sal>5000</sal> </emp> These are the two sample formats and just two records. The table actually has many more XML formats in the same field and many records in each format. Now my problem is that upon query I need these XML formats to be converted into tabular result sets. What are the options for me? It would be a regular task to query this table and generate reports from it. I want to create a stored procedure to which I can pass that I need to query "<emp>" or "<client>", then my stored procedure should return tabular data.
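    One sketch of the kind of stored procedure body that does this, using the nodes()/value() XML methods to shred one entity type into rows. The table and column names here (AuditTrail, AuditData) are placeholders:

        -- Shred the <emp> entries out of the XML column into a tabular result.
        SELECT
            x.value('(name)[1]', 'varchar(100)') AS Name,
            x.value('(sal)[1]',  'money')        AS Salary
        FROM dbo.AuditTrail AS a
        CROSS APPLY a.AuditData.nodes('/emp') AS t(x);

    Because nodes() and value() need literal paths, a procedure that takes the entity name as a parameter can simply branch (IF @entity = 'emp' ... ELSE IF @entity = 'client' ...), with each branch selecting the columns for that format.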

    Read the article

  • What is the best way to automatically transpose a LilyPond source file into multiple keys?

    - by Michael Steele
    problem I'm using LilyPond to typeset sheet music for a church choir to perform. Depending on who is available on any given week, songs will be played in various keys. We have an amazing pianist who can play anything we throw at her and the guitarists will typically pencil in alternate chords, but I want to make things easier by having beautifully typeset sheet music available in any key we want. So say we're going to sing our ABCs. First I'll take whatever source transcriptions available and enter it into a LilyPond script: melody = \relative c' { c c g g a a g2 f f e e d d c2 } I want the ability to transpose this automatically, so if I want the whole thing in 'G' I wrap the song in a \transpose call like so: melody = \transpose c g \relative c' { c c g g a a g2 f f e e d d c2 } What I really want is to substitute something for the 'g' and generate the output for melody multiple times. Simple LilyPond variables don't seem to work here, and so far I've been unsuccessful in defining a scheme function to do this. What I've resorted to for the moment is taking the above file, call it twinkle.ly and turning it into an M4 script called twinkle.ly.m4, the contents of which look like this: melody = \transpose c _key \relative c' { c c g g a a g2 f f e e d d c2 } I then compile the while thing by executing the following line: > m4 -D _key=g twinkle.ly.m4 > twinkle_g.ly && lilypond twinkle_g.ly I've written a Makefile to do this for me, defining rules for every song I have and every key I'm interested in. question There's got to be a better way of going about this. Given that Lilypond supports embedded scheme, I would prefer to not use a macro preprocessed on it. Has anybody else come up with a solution to this same problem?

    Read the article

  • What's a good Java API for creating Word documents?

    - by Bill James
    I have a new app I'll be working on where I have to generate a Word document that contains tables, graphs, a table of contents and text. What's a good API to use for this? How sure are you that it supports graphs, ToCs, and tables? What are some hidden gotcha's in using them? Some clarifications: I can't output a PDF, they want a Word doc. They're using MS Word 2003 (or 2007), not OpenOffice Application is running on *nix app-server It'd be nice if I could start with a template doc and just fill in some spaces with tables, graphs, etc. Thanks for the help. Edit: Several good answers below, each with their own faults as far as my current situation. Hard to pick a "final answer" from them. Think I'll leave it open, and hope for better solutions to be created. Edit: The OpenOffice UNO project does seem to be closest to what I asked for. While POI is certainly more mainstream, it's too immature for what I want.

    Read the article
