Search Results

Search found 14539 results on 582 pages for 'date conversion'.

Page 505 of 582

  • .NET client getting "not well formed" XML response from Axis web service

    - by Tex
    I have a simple .NET app that makes a SOAP call to a third-party Axis web service. When I trace the HTTP traffic, the request looks fine, but I get an exception: "Response is not well-formed XML." The return object is null, as it seems the XML can't be deserialized. One question about the various namespace declarations inside the WSDL: several of them point to URLs / domains that no longer exist. Could this cause any problems? From the WSDL document:

        <wsdl:definitions targetNamespace="http://domaindoesntexist.com/"
                          xmlns:apachesoap="http://xml.apache.org/xml-soap"
                          xmlns:impl="http://domaindoesntexist.com/"
                          xmlns:intf="http://domaindoesntexist.com/"
                          xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
                          xmlns:wsdlsoap="http://schemas.xmlsoap.org/wsdl/soap/"
                          xmlns:xsd="http://www.w3.org/2001/XMLSchema">

    A sample HTTP response with incriminating data removed:

        HTTP/1.1 200 OK
        Server: Apache-Coyote/1.1
        Content-Type: text/xml;charset=utf-8
        Transfer-Encoding: chunked
        Date: Fri, 05 Jun 2009 13:54:59 GMT

        7cb
        <?xml version="1.0" encoding="utf-8"?>
        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                          xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
          <soapenv:Body>
            <someMethod xmlns="http://test.com/services/myservice/">
            </someMethod>
          </soapenv:Body>
        </soapenv:Envelope>
        0

    Read the article

  • Append data to same text file using java

    - by Manu
        SimpleDateFormat formatter = new SimpleDateFormat("ddMMyyyy_HHmmSS");
        String strCurrDate = formatter.format(new java.util.Date());
        String strfileNm = "Customer_" + strCurrDate + ".txt";
        String strFileGenLoc = strFileLocation + "/" + strfileNm;
        String Query1 = "select '0'||to_char(sysdate,'YYYYMMDD')||'123456789' class_code from dual";
        String Query2 = "select '0'||to_char(sysdate,'YYYYMMDD')||'123456789' class_code from dual";
        try {
            Statement stmt = null;
            ResultSet rs = null;
            Statement stmt1 = null;
            ResultSet rs1 = null;
            stmt = conn.createStatement();
            stmt1 = conn.createStatement();
            rs = stmt.executeQuery(Query1);
            rs1 = stmt1.executeQuery(Query2);
            File f = new File(strFileGenLoc);
            OutputStream os = (OutputStream) new FileOutputStream(f, true);
            String encoding = "UTF8";
            OutputStreamWriter osw = new OutputStreamWriter(os, encoding);
            BufferedWriter bw = new BufferedWriter(osw);
            while (rs.next()) {
                bw.write(rs.getString(1) == null ? "" : rs.getString(1));
                bw.write(" ");
            }
            bw.flush();
            bw.close();
        } catch (Exception e) {
            System.out.println("Exception occured while getting resultset by the query");
            e.printStackTrace();
        } finally {
            try {
                if (conn != null) {
                    System.out.println("Closing the connection" + conn);
                    conn.close();
                }
            } catch (SQLException e) {
                System.out.println("Exception occured while closing the connection");
                e.printStackTrace();
            }
        }
        return objArrayListValue;
    }

    The above code works fine: it writes the contents of the "rs" result set to the text file. Now what I want is to append the content of the second result set ("rs2") to the same text file, i.e. append the "rs2" rows after the "rs" rows in one file.

    Read the article

  • iPhone Development - Location Accuracy

    - by Mustafa
    I'm using the following conditions in order to make sure that the location I get has adequate accuracy, in my case kCLLocationAccuracyBest. But the problem is that I still get inaccurate locations.

        // Filter out nil locations
        if(!newLocation) return;

        // Make sure that the location returned has the desired accuracy
        if(newLocation.horizontalAccuracy < manager.desiredAccuracy) return;

        // Filter out points that are out of order
        if([newLocation.timestamp timeIntervalSinceDate:oldLocation.timestamp] < 0) return;

        // Filter out points created before the manager was initialized
        NSTimeInterval secondsSinceManagerStarted = [newLocation.timestamp timeIntervalSinceDate:locationManagerStartDate];
        if(secondsSinceManagerStarted < 0) return;

        // Also, make sure that the cached location was not returned by the CLLocationManager (it's current) - Check for 5 seconds difference
        if([newLocation.timestamp timeIntervalSinceReferenceDate] < [[NSDate date] timeIntervalSinceReferenceDate] - 5) return;

    When I activate the GPS, I get inaccurate results before I actually get an accurate one. What methods do you use to get accurate/precise location information?

    Read the article

  • Trying to set PC clock programmatically just before Daylight Saving Time ends

    - by Moe Sisko
    To reproduce:
    1) Add the Microsoft.VisualBasic assembly to your project references.
    2) Change the PC timezone to (GMT+10:00) Canberra, Melbourne, Sydney. Ensure the PC is set to automatically adjust the clock for daylight savings time. (For this timezone, daylight savings time ends at 3am on 4 Apr 2010.)
    3) Add the following code:

        public void SetNewDateTime(DateTime dt)
        {
            Microsoft.VisualBasic.DateAndTime.Today = dt;      // ignores time component
            Microsoft.VisualBasic.DateAndTime.TimeOfDay = dt;  // ignores date component
        }

        private void button1_Click(object sender, EventArgs e)
        {
            DateTime dt = new DateTime(2010, 4, 5, 5, 0, 0);   // XX
            SetNewDateTime(dt);                                // XX
            System.Threading.Thread.Sleep(500);
            DateTime dt2 = new DateTime(2010, 4, 4, 1, 0, 0);
            SetNewDateTime(dt2);
        }

    4) When button1 is clicked, the PC clock eventually shows 2am, whereas 1am was expected. (If the code marked "XX" is removed, the clock sometimes shows the correct time of 1am.) Any idea what is happening? (Or is there a more reliable way of setting the PC clock from C# code?) TIA.

    Read the article

  • Voting on Hacker News stories programmatically?

    - by igorgue
    I decided to write an app like http://michaelgrinich.com/hackernews/ but for Android devices. My idea is to use a web application backend (because I'd rather code in Python for the web than entirely in Java for Android). What I have implemented right now is something like this:

        $ curl -i http://localhost:8080/stories.json?page=1\&stories=1
        HTTP/1.0 200 OK
        Date: Sun, 25 Apr 2010 07:59:37 GMT
        Server: WSGIServer/0.1 Python/2.6.5
        Content-Length: 296
        Content-Type: application/json

        [{"title": "Don\u2019t talk to aliens, warns Stephen Hawking", "url": "http://www.timesonline.co.uk/tol/news/science/space/article7107207.ece?", "unix_time": 1272175177, "comments": 15, "score": 38, "user": "chaostheory", "position": 1, "human_time": "Sun Apr 25 01:59:37 2010", "id": "1292241"}]

    The next (and, I think, final) step is voting. My design is to do something like this to vote up:

        $ curl -i http://localhost:8080/stories/1?vote=up -u username:password

    and this to vote down:

        $ curl -i http://localhost:8080/stories/1?vote=down -u username:password

    I have no idea how to do it, though... I was planning to use Twill, but the login link is always different, e.g. http://news.ycombinator.com/x?fnid=7u89ccHKln. Later the Android app will consume this API. Any experience with programmatically browsing Hacker News?
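
    Not from the original question: since the fnid-style tokens change on every page load, one workable shape is to keep a logged-in session and scrape the story page for its vote link each time instead of trying to predict it. A rough Python sketch of that idea follows; it uses the third-party requests library, and the exact markup of HN's vote anchor is an assumption to verify against the live HTML (logging in would use the same scrape-then-submit trick on the login form).

        # Hypothetical sketch: the shape of HN's vote anchor and URLs are assumptions to
        # verify against the live HTML; `session` must already hold a logged-in cookie.
        import re
        import requests

        BASE = "http://news.ycombinator.com"

        def vote(session, item_id, direction="up"):
            """Scrape the story page for its vote link and follow it."""
            page = session.get("%s/item?id=%s" % (BASE, item_id)).text
            # The anchor embeds whatever token HN currently requires, so scrape it fresh each time.
            match = re.search(r'href="(vote\?[^"]*for=%s[^"]*dir=%s[^"]*)"' % (item_id, direction), page)
            if match is None:
                return False
            session.get("%s/%s" % (BASE, match.group(1).replace("&amp;", "&")))
            return True

        # usage: vote(logged_in_session, "1292241", "up")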

    Read the article

  • How can I test a CRON job with PHP?

    - by alex
    This is the first time I've ever used a cron job. I'm using it to parse external data that is automatically FTP'd to a subdirectory on our site. I have created a controller and model which handle the data. I can access the URL fine in my browser and it works (however, I will be restricting this soon). My problem is: how can I test that it's working? I've added this to my controller for a quick and dirty log:

        $file = 'test.txt';
        $contents = '';
        if (file_exists($file)) {
            $contents = file_get_contents($file);
        }
        $contents .= date('m-d-Y') . ' --- ' . PHP_SAPI . "\n\n";
        file_put_contents($file, $contents);

    But so far I've only got requests logged from myself in the browser, despite having my cron job running every minute:

        03-18-2010 --- cgi-fcgi
        03-18-2010 --- cgi-fcgi

    I've set it up using cPanel with the command "index.php properties/update/"; the second portion is what I use to access the page in my browser. So how can I test that this is working properly, and have I stuffed anything up? Note: I'm using Kohana 3. Many thanks.

    Read the article

  • De-normalization alternative to specific MYSQL problem?

    - by Booker
    I am facing quite a specific optimization problem. I currently have 4 normalized tables of data. Every second, possibly thousands of users will pull down up-to-date info from these tables using AJAX. The thing is that I can predict relatively easily which subset of data they need... The most recent 100 or so entries in those 4 normalized tables. I have been researching de-normalization... but feel that perhaps there is an easier solution. I was thinking that I could somehow every second run one sql query to condense the needed info, store it in a temp cached table and then have all of the user queries just draw from this. This will allow the complex join of 4 tables to only be run once, and then from there the users just need to do a simple lookup from the cached table. I really don't know if this is feasible. Comments on this or any other suggestions would be much appreciated. Thanks!
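
    Not from the original post: to make the cached-table idea concrete, a small worker can rebuild a pre-created summary table once a second inside a single transaction, while every AJAX reader runs a plain SELECT against it. The sketch below uses the MySQLdb driver; the table, column, and join names are invented placeholders, and it assumes the cache table is InnoDB so the delete plus insert commits as one unit.

        # Sketch only: table, column, and join names are invented; the real query would join
        # the four normalized tables from the question.
        import time
        import MySQLdb  # third-party MySQL driver

        def refresh_cache(conn):
            cur = conn.cursor()
            cur.execute("DELETE FROM latest_feed")
            cur.execute("""
                INSERT INTO latest_feed (post_id, title, author, body, created_at)
                SELECT p.id, p.title, u.name, c.body, p.created_at
                FROM posts p
                JOIN users u    ON u.id = p.user_id
                JOIN comments c ON c.post_id = p.id
                ORDER BY p.created_at DESC
                LIMIT 100
            """)
            conn.commit()   # readers see either the old or the new contents, never a mix

        if __name__ == "__main__":
            conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="app")
            while True:               # a cron job or supervised worker in practice
                refresh_cache(conn)
                time.sleep(1)

    The expensive four-table join then runs once per second regardless of how many users are polling, which is the effect the question is after.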

    Read the article

  • PHP - REGEX - use string for pattern but exclude it from being removed!

    - by aSeptik
    Hi all! I'm pretty new to regex; I've learned a bit along the way, but my knowledge is still poor, so I want to ask you for clarification on how this should work. Assume I have the following strings. As you can see, they can be formatted a little differently from one another, but they are very similar:

        DTSTART;TZID="America/Chicago":20030819T000000
        DTEND;TZID="America/Chicago":20030819T010000
        DTSTART;TZID=US/Pacific
        DTSTART;VALUE=DATE

    Now I want to replace everything between the first A-Z block and the colon, so for example I would keep:

        DTSTART:20030819T000000
        DTEND:20030819T010000
        DTSTART
        DTSTART

    With my very noob knowledge I have worked out this shitty regex :-(

        preg_replace('/^[A-Z](?!;[A-Z]=[\w\W]+):$/m', '', $data);

    but I'm pretty sure this regex will not work! :-) Please help me! PS: the title of the question is pretty self-explanatory: I also want to know how, for example, to use a well-known string block (like DTSTART) to match another part without removing it:

        preg_replace('/^[DTSTART](?!;[A-Z]=[\w\W]+):$/m', '', $data);

    ...without deleting DTSTART. Thanks for your time! Regards, Luca Filosofi
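
    Not from the original post: a minimal sketch of one pattern that produces exactly the desired output above. It is written in Python for illustration, but the same pattern, ;[^:\n]* with an empty replacement, drops straight into preg_replace on the PHP side.

        # Deletes everything from the first ';' up to the ':' (or to the end of the line
        # when there is no ':'), which keeps the leading property name and the value.
        import re

        lines = [
            'DTSTART;TZID="America/Chicago":20030819T000000',
            'DTEND;TZID="America/Chicago":20030819T010000',
            'DTSTART;TZID=US/Pacific',
            'DTSTART;VALUE=DATE',
        ]
        cleaned = re.sub(r';[^:\n]*', '', '\n'.join(lines))
        print(cleaned)
        # DTSTART:20030819T000000
        # DTEND:20030819T010000
        # DTSTART
        # DTSTART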

    Read the article

  • How can we store PHP variables in jQuery variables in the jQuery part?

    - by surya
        $('#b').bind('click',function(){
            alert('hii');
            var slide_start=slider_content.indexOf(0);
            if(slide_start==2) {
                $('#reg_rem_form').hide();
            }
            var show=1+slide_start;
            var show_first='#'+show;
            var value_to_insert=<?php echo $value; ?>=$(show_first).val();   // <-- the line in question

        <?php
        /*
        1. Date of birth
        2. gender
        3. Unvi1
        4. Unvi2
        5. highest degree unvi1
        6. highest degree unvi2
        7. Year of passing unvi1
        8. Year of passing unvi2
        9. Current working
        10. Work experience
        */
        ?>

    Just as we store PHP variables in jQuery variables, I want to do the same thing in reverse and store jQuery variables in PHP variables. The marked line is the main one. How do I do that? The code above gives me the error: missing ";" before statement. Is this the right way to do it: =$('#bold').val();

    Read the article

  • Caching Authentication Data

    - by PartlyCloudy
    Hi, I'm currently implementing a REST web service using CouchDB and RESTlet. The RESTlet layer is mainly for authentication and some minor filtering of the JSON data served by CouchDB:

        Clients <= HTTP = [ RESTlet <= HTTP = CouchDB ]

    I'm also using CouchDB to store user login data, because I don't want to add an additional database server for that purpose. Thus, each request to my service causes two CouchDB requests from RESTlet (auth data + the "real" request). In order to keep the service as efficient as possible, I want to reduce the number of requests, in this case the redundant requests for login data. My idea is to provide a cache (i.e. an LRU cache via LinkedHashMap) within my RESTlet application that caches login data, because HTTP caching will probably not be enough. But how do I invalidate the cached data once a user changes the password, for instance? Thanks to REST, the application might run on several servers in parallel, and I don't want to create a central instance just to cache login data. Currently, I save requested auth data in the cache and try to authenticate new requests using it. If authentication fails or there is no entry available, I dispatch a GET request to my CouchDB storage to obtain the actual auth data. So in the worst case, users that have changed their data will perhaps still be able to log in with their old credentials. How can I deal with that? Or what is a good strategy for keeping the cache(s) up to date in general? Thanks in advance.
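
    Not from the original post: one common compromise is to pair the LRU bound with a short TTL, so that a changed password can only be honoured from any one node's cache for a bounded number of seconds. Below is a minimal sketch in Python rather than Java; all names are invented, and the same shape maps onto a LinkedHashMap whose entries carry an expiry timestamp.

        # Illustrative sketch: bound both the size and the age of cached credentials.
        import time
        from collections import OrderedDict

        class AuthCache(object):
            def __init__(self, max_entries=1000, ttl=60):
                self.max_entries = max_entries
                self.ttl = ttl
                self._entries = OrderedDict()        # username -> (expires_at, credentials)

            def get(self, username):
                item = self._entries.get(username)
                if item is None:
                    return None
                expires_at, credentials = item
                if expires_at < time.time():
                    del self._entries[username]      # stale: force a trip back to CouchDB
                    return None
                self._entries.move_to_end(username)  # mark as recently used
                return credentials

            def put(self, username, credentials):
                self._entries[username] = (time.time() + self.ttl, credentials)
                self._entries.move_to_end(username)
                while len(self._entries) > self.max_entries:
                    self._entries.popitem(last=False)   # evict the least recently used entry

    On a miss or an expired entry the caller falls through to CouchDB exactly as described in the question; the TTL simply bounds how long stale credentials can survive on any node, which sidesteps the need for a cross-server invalidation channel.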

    Read the article

  • Logback: What would cause DEBUG - WARN to append to file but NOT ERROR?

    - by Zombies
    I am running a Java program from Eclipse, and I am using the logback console plugin. I can see the ERROR level statements being appended to the console (as well as all of the others). But for some reason my file, which is correctly receiving new DEBUG-WARN statements, is NOT receiving the ERROR level ones. Here is my logback.xml:

        <consolePlugin />

        <appender name="FILE" class="ch.qos.logback.core.FileAppender">
          <file>logs/main.log</file>
          <layout class="ch.qos.logback.classic.PatternLayout">
            <Pattern>%date %level [%thread] %logger{10} [%file:%line] %msg%n</Pattern>
          </layout>
        </appender>

        <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
          <layout class="ch.qos.logback.classic.PatternLayout">
            <Pattern>%msg%n</Pattern>
          </layout>
        </appender>

        <logger name="WebsiteChecker">
          <appender-ref ref="FILE" />
        </logger>

        <root>
          <level value="debug" />
          <!--<appender-ref ref="STDOUT" />-->
          <appender-ref ref="FILE" />
        </root>
        </configuration>

    Read the article

  • Python: How can I use Twisted as the transport for SUDS?

    - by jathanism
    I have a project that is based on Twisted, used to communicate with network devices, and I am adding support for a new vendor (Citrix NetScaler) whose API is SOAP. Unfortunately the support for SOAP in Twisted still relies on SOAPpy, which is badly out of date. In fact, as of this question (I just checked), twisted.web.soap itself hasn't even been updated in 21 months! I would like to ask if anyone has any experience they would be willing to share with utilizing Twisted's superb asynchronous transport functionality with SUDS. It seems like plugging in a custom Twisted transport would be a natural fit in SUDS' Client.options.transport; I'm just having a hard time wrapping my head around it. I did come up with a way to call the SOAP method with SUDS asynchronously by utilizing twisted.internet.threads.deferToThread(), but this feels like a hack to me. Here is an example of what I've done, to give you an idea:

        # netscaler is a module I wrote using suds to interface with NetScaler SOAP
        # Source: http://bitbucket.org/jathanism/netscaler-api/src
        import netscaler
        import os
        import sys
        from twisted.internet import reactor, defer, threads

        # netscaler.API is the class that sets up the suds.client.Client object
        host = 'netscaler.local'
        username = password = 'nsroot'
        wsdl_url = 'file://' + os.path.join(os.getcwd(), 'NSUserAdmin.wsdl')
        api = netscaler.API(host, username=username, password=password, wsdl_url=wsdl_url)

        results = []
        errors = []

        def handleResult(result):
            print '\tgot result: %s' % (result,)
            results.append(result)

        def handleError(err):
            sys.stderr.write('\tgot failure: %s' % (err,))
            errors.append(err)

        # this converts the api.login() call to a Twisted thread.
        # api.login() should return True and is equivalent to:
        #     api.service.login(username=self.username, password=self.password)
        deferred = threads.deferToThread(api.login)
        deferred.addCallbacks(handleResult, handleError)

        reactor.run()

    This works as expected and defers return of the api.login() call until it is complete, instead of blocking. But as I said, it doesn't feel right. Thanks in advance for any help, guidance, feedback, criticism, insults, or total solutions.
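
    Not from the original post: if the deferToThread approach stays, one refinement is to give the blocking suds calls their own bounded pool, so that many concurrent device calls cannot exhaust the reactor's shared thread pool. A sketch under the assumption that the installed Twisted is recent enough to provide threads.deferToThreadPool:

        # Sketch: run blocking suds calls on a dedicated, bounded thread pool.
        # `api` is the netscaler.API object from the question above.
        from twisted.internet import reactor, threads
        from twisted.python.threadpool import ThreadPool

        soap_pool = ThreadPool(minthreads=1, maxthreads=5, name='suds')
        soap_pool.start()
        reactor.addSystemEventTrigger('during', 'shutdown', soap_pool.stop)

        def call_soap(method, *args, **kwargs):
            """Return a Deferred that fires with the result of a blocking suds call."""
            return threads.deferToThreadPool(reactor, soap_pool, method, *args, **kwargs)

        # usage: call_soap(api.login).addCallbacks(handleResult, handleError)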

    Read the article

  • How can I print text fields at the right coordinates?

    - by Milad
    I don't have any background in programming and this is my first shot. I wrote a Delphi program that is supposed to print on a result sheet. I work in an institute and we have to produce hundreds of result sheets every 2 months. It's really difficult to do that, and different handwriting is also an important issue. I have this code:

        if PrintDialog.Execute() then
        begin
          with MyPrinter do
          begin
            MyPrinter.BeginDoc(); // Start printing
            // Prints first name
            MyPrinter.Canvas.TextOut(FirstNameX, FirstNameY, EditFirstName.Text);
            // Prints last name
            MyPrinter.Canvas.TextOut(LastNameX, LastNameY, EditLastName.Text);
            // Prints level
            MyPrinter.Canvas.TextOut(LevelX, LevelY, EditLevel.Text);
            // Prints date
            MyPrinter.Canvas.TextOut(DateX, DateY, MEditDate.Text);
            // Prints student number
            MyPrinter.Canvas.TextOut(StdNumX, StdNumY, EditStdnumber.Text);
            ....
            MyPrinter.EndDoc(); // End printing
          end;
        end;

    I can't get the right coordinates to print properly. Am I missing something? How can I set the right coordinates? TPrinter uses pixels for the coordinates, but papers are measured in inches or centimeters. I'm really confused. I appreciate any help.
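
    Not part of the question: the usual missing piece is converting positions measured on the paper into device pixels using the printer's resolution, since TPrinter's canvas works in pixels. The arithmetic is just pixels = inches x dpi (or mm / 25.4 x dpi), shown below in Python with placeholder numbers; in Delphi the DPI values typically come from GetDeviceCaps(Printer.Handle, LOGPIXELSX / LOGPIXELSY).

        # Convert a position measured on the paper (in millimetres) to printer pixels.
        # The 600 dpi figure is a placeholder; a real printer reports its own resolution.
        MM_PER_INCH = 25.4

        def mm_to_pixels(mm, dpi):
            return round(mm / MM_PER_INCH * dpi)

        dpi_x, dpi_y = 600, 600
        first_name_x = mm_to_pixels(40, dpi_x)   # 40 mm from the left edge -> 945 px at 600 dpi
        first_name_y = mm_to_pixels(55, dpi_y)   # 55 mm from the top edge  -> 1299 px at 600 dpi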

    Read the article

  • How to create instances of related models in Django

    - by sevennineteen
    I'm working on a CMS-y app for which I've implemented a set of models which allow for the creation of custom Template instances, made up of a number of Fields and tied to a specific Customer. The end goal is that one or more templates with a set of custom fields can be defined through the Admin interface and associated with a customer, so that the customer can then create content objects in the format prescribed by the template. I seem to have gotten this hooked up such that I can create any number of Template objects, but I'm struggling with how to create instances (actual content objects) of those templates. For example, I can define a template "Basic Page" for customer "Acme" which has the fields "Title" and "Body", but I haven't figured out how to create Basic Page instances where these fields can be filled in. Here are my (somewhat elided) models:

        class Customer(models.Model):
            ...

        class Field(models.Model):
            ...

        class Template(models.Model):
            label = models.CharField(max_length=255)
            clients = models.ManyToManyField(Customer, blank=True)
            fields = models.ManyToManyField(Field, blank=True)

        class ContentObject(models.Model):
            label = models.CharField(max_length=255)
            template = models.ForeignKey(Template)
            author = models.ForeignKey(User)
            customer = models.ForeignKey(Customer)
            mod_date = models.DateTimeField('Modified Date', editable=False)

            def __unicode__(self):
                return '%s (%s)' % (self.label, self.template)

            def save(self):
                self.mod_date = datetime.datetime.now()
                super(ContentObject, self).save()

    Thanks in advance for any advice!
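
    Not part of the original models: one common way to let each ContentObject carry values for whatever Fields its Template defines is a third model keyed by both, roughly one row per (content object, field) pair. A rough sketch under that assumption; FieldValue and its field names are invented:

        # Hypothetical addition to the models above.
        from django.db import models

        class FieldValue(models.Model):
            content_object = models.ForeignKey('ContentObject', related_name='field_values')
            field = models.ForeignKey('Field')
            value = models.TextField(blank=True)

            class Meta:
                unique_together = ('content_object', 'field')

        # Creating a "Basic Page" instance would then look roughly like:
        #   page = ContentObject.objects.create(label='Home', template=basic_page,
        #                                       author=user, customer=acme)
        #   for f in basic_page.fields.all():
        #       FieldValue.objects.create(content_object=page, field=f, value='...')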

    Read the article

  • Windows service threading call to WCF service

    - by Sam Brinsted
    Hi, I have a Windows service that reads data from a database and then submits it to a WCF service. Once that has finished, it stamps a processed date on the original record. The trouble I'm currently having is to do with threading. The call to the WCF service is relatively long, and I want to have a number of concurrent calls to the service to help improve the throughput of the Windows service. Currently I have a submitToService method on a new worker class. Upon reading a new row from the database, I create a new thread which calls this method. This obviously isn't too good, as the number of threads quickly shoots up and overburdens the WCF service. I have put a Thread.Sleep in the submit method and am sure to call System.Threading.Thread.CurrentThread.Abort(); after the submission has finished. However, I don't seem to see the number of threads go down. How can I have just a fixed number of threads that can be used in the Windows service? I did think about using a thread pool, but read somewhere that it wasn't a good choice for a Windows service. Thanks very much.
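
    Not from the original question, and sketched in Python rather than C# purely to show the shape: a fixed-size pool of workers draining the rows keeps the number of concurrent submissions constant without creating and aborting a thread per row. submit_to_service and stamp_processed are stand-ins for the real WCF call and database update.

        # At most MAX_WORKERS submissions run at once; a row is stamped only after its call returns.
        from concurrent.futures import ThreadPoolExecutor, as_completed

        MAX_WORKERS = 4

        def submit_to_service(row):
            pass   # long-running remote call goes here

        def stamp_processed(row):
            pass   # write the processed date back to the database

        def process(rows):
            with ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
                futures = {pool.submit(submit_to_service, row): row for row in rows}
                for future in as_completed(futures):
                    future.result()                  # re-raises any submission error
                    stamp_processed(futures[future])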

    Read the article

  • Why is my UIWebView not scrollable?

    - by Thomas
    In my most frustrating roadblock to date, I've come across a UIWebView that will NOT scroll! I call it via this IBAction:

        -(IBAction)session2ButtonPressed:(id)sender {
            Session2ViewController *session2View = [[Session2ViewController alloc] initWithNibName:@"Session2ViewController" bundle:nil];
            self.addictionViewController = session2View;
            [self.view insertSubview:addictionViewController.view atIndex:[self.view.subviews count]];
            [session2View release];
        }

    In the viewDidLoad of Session2ViewController.m, I have:

        - (void)viewDidLoad {
            [super viewDidLoad];

            // TRP - Grab data from plist
            // TRP - Build file path to the plist
            NSString *filePath = [[NSBundle mainBundle] pathForResource:@"Addiction" ofType:@"plist"];

            // TRP - Create NSDictionary with contents of the plist
            NSDictionary *addictionDict = [NSDictionary dictionaryWithContentsOfFile:filePath];

            // TRP - Create an array with contents of the dictionary
            NSArray *addictionData = [addictionDict objectForKey:@"Addiction1"];
            NSLog(@"addictionData (array): %@", addictionData);

            // TRP - Create a string with the contents of the array
            NSString *addictionText = [NSString stringWithFormat:@"<DIV style='font-family:%@;font-size:%d;'>%@</DIV>", @"Helvetica", 18, [addictionData objectAtIndex:1]];

            addictionInfo.backgroundColor = [UIColor clearColor];

            // TRP - Load the string created and stored into addictionText and display in the UIWebView
            [addictionInfo loadHTMLString:addictionText baseURL:nil];

            // TODO: MAKE THIS WEBVIEW SCROLL!!!!!!
        }

    In the nib, I connected my web view to the delegate and to the outlet. When I run my main project, the plist with my HTML code shows up, but does not scroll. I copied and pasted this code into a new project, wired the nib the exact same way, and badda-boom badda-bing. . . it works. I even tried to create a new nib from scratch in this project, and the exact same code would not work. Whiskey Tango Foxtrot Any ideas?? Thanks! Thomas

    Read the article

  • Google App Engine, parsedatetime and TimeZones

    - by Ron
    Hey guys, I'm working on a Google App Engine / Django app and I've encountered the following problem: in my HTML I have an input for time. The input is free text: the user types "in 1 hour" or "tomorrow at 11am". The text is then sent to the server via AJAX, which parses it using this Python library: http://code.google.com/p/parsedatetime/. Once parsed, the server returns an epoch timestamp of the time. Here is the problem: Google App Engine always runs on UTC. Therefore, let's say that the local time is now 11am and the UTC time is 2am. When I send "now" to the server it will return "2am", which is good, because I want the date to be received in UTC time. When I send "in 1 hour" the server will return "3am", which is good again. However, when I send "at noon" the server will return "12pm", because it thinks that I'm talking about noon UTC, but really I need it to return 3am, which is noon for the request sender. I can pass the TZ of the browser along with the request, but that won't really help me: the parsedatetime library won't take a timezone argument (correct me if I'm wrong). Is there a workaround for this? Maybe setting the environment's TZ somehow? Thanks!
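
    Not from the original post: one workaround that avoids touching the process TZ is to hand parsedatetime a sourceTime built from the user's local clock and then convert the naive result back to UTC. The sketch below assumes the browser's timezone name is sent with the request and that pytz is bundled with the app; on older parsedatetime releases the import path is parsedatetime.parsedatetime.

        # Parse the phrase relative to the *user's* local "now", then convert back to UTC.
        import calendar
        from datetime import datetime

        import parsedatetime   # older versions: import parsedatetime.parsedatetime as parsedatetime
        import pytz            # assumed to be bundled with the app

        cal = parsedatetime.Calendar()

        def parse_in_user_tz(text, tz_name):
            user_tz = pytz.timezone(tz_name)
            local_now = datetime.utcnow().replace(tzinfo=pytz.utc).astimezone(user_tz)

            # parsedatetime works with naive struct_times, so feed it the user's local clock
            time_struct, parse_ok = cal.parse(text, local_now.timetuple())
            if not parse_ok:
                return None

            naive_local = datetime(*time_struct[:6])
            utc_dt = user_tz.localize(naive_local).astimezone(pytz.utc)
            return calendar.timegm(utc_dt.timetuple())   # epoch seconds, in UTC

        # e.g. parse_in_user_tz("at noon", "America/New_York")
        #      returns noon Eastern expressed as a UTC epoch timestamp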

    Read the article

  • Associating Models with Polymorphic

    - by Josh Crowder
    I am trying to associate Contacts with Classes, but as two different types: current_classes and interested_classes. I know I need to enable polymorphic associations, but I am not sure where they need to be enabled. This is what I have at the moment:

        class CreateClasses < ActiveRecord::Migration
          def self.up
            create_table :classes do |t|
              t.string :class_type
              t.string :class_name
              t.string :date
              t.timestamps
            end
          end

          def self.down
            drop_table :classes
          end
        end

        class CreateContactsInterestedClassesJoin < ActiveRecord::Migration
          def self.up
            create_table 'contacts_interested_classes', :id => false do |t|
              t.column 'class_id', :integer
              t.column 'contact_id', :integer
            end
          end

          def self.down
            drop_table 'contacts_interested_classes'
          end
        end

        class CreateContactsCurrentClassesJoin < ActiveRecord::Migration
          def self.up
            create_table 'contacts_current_classes', :id => false do |t|
              t.column 'class_id', :integer
              t.column 'contact_id', :integer
            end
          end

          def self.down
            drop_table 'contacts_current_classes'
          end
        end

    And then inside of my Contacts model I want to have something like this:

        class Contact < ActiveRecord::Base
          has_and_belongs_to_many :classes,
            :join_table => "contacts_interested_classes",
            :foreign_key => "class_id"
            :as => 'interested_classes'

          has_and_belongs_to_many :classes,
            :join_table => "contacts_current_classes",
            :foreign_key => "class_id"
            :as => 'current_classes'
        end

    What am I doing wrong?

    Read the article

  • Trying to convert existing production database table columns from enum to VARCHAR (Rails)

    - by dchua
    Hi everyone, I have a problem that requires me to convert my existing live production table column types from enums to strings (I've duplicated the schema on my local development box, don't worry :)). Background: a previous developer left my codebase in absolute shit; migration versions are extremely out of date, and apparently he never used them after a certain point in development. Now that I'm tasked with migrating a Rails 1.2.6 app to 2.3.5, I can't get the tests to run properly on 2.3.5, because my table columns have ENUM column types, which get converted to :string, :limit => 0 in my schema.rb. That creates the problem of an invalid default value when running rake db:test:prepare, as in:

        Mysql::Error: Invalid default value for 'own_vehicle':
        CREATE TABLE `lifestyles` (
          `id` int(11) DEFAULT NULL auto_increment PRIMARY KEY,
          `member_id` int(11) DEFAULT 0 NOT NULL,
          `own_vehicle` varchar(0) DEFAULT 'Y' NOT NULL,
          `hobbies` text,
          `sports` text,
          `AStar_activities` text,
          `how_know_IRC` varchar(100),
          `IRC_referral` varchar(200),
          `IRC_others` varchar(100),
          `IRC_rdrive` varchar(30)
        ) ENGINE=InnoDB

    I'm thinking of writing a migration task that looks through all the database tables for columns with ENUM types and replaces them with VARCHAR, and I'm wondering if this is the right way to approach the problem. I'm also not very sure how to write it such that it loops through my database tables and replaces all ENUM column types with VARCHAR.

    References
    [1] https://rails.lighthouseapp.com/projects/8994/tickets/997-dbschemadump-saves-enum-columns-as-varchar0-on-mysql
    [2] http://dev.rubyonrails.org/ticket/2832

    Read the article

  • How often should the entire suite of a system's unit tests be run?

    - by gerryLowry
    Generally, I'm still very much a unit testing neophyte. BTW, you may also see this question on other forums like xUnit.net, et cetera, because it's an important question to me. I apologize in advance for my cross-posting; your opinions are very important to me, and not everyone in this forum belongs to the other forums too.

    I was looking at a large, decade-old legacy system which has had over 700 unit tests written recently (700 is just a small beginning). The tests happen to be written in MSTest, but this question applies to all testing frameworks AFAIK. When I ran "ALL TESTS" via VS2008, the final count was only seven tests. That's about 1% of the total tests that have been written to date.

    MORE INFORMATION: The ASP.NET MVC 2 RTM source code, including its unit tests, is available on CodePlex; those unit tests are also written in MSTest, even though (an irrelevant fact) Brad Wilson later joined the ASP.NET MVC team as its Senior Programmer. All 2000-plus tests get run, not just a few.

    QUESTION: given that AFAIK the purpose of unit tests is to identify breakages in the SUT, am I correct in thinking that the "best practice" is to always, or at least very frequently, run all of the tests? Thank you. Regards, Gerry (Lowry)

    Read the article

  • Git Workflow With Capistrano

    - by jerhinesmith
    I'm trying to get my head around a good Git workflow using Capistrano. I've found a few good articles, but I'm either not grasping completely what they're suggesting (likely) or they're somewhat lacking. Here's roughly what I had in mind so far, but I get caught up on when to merge back into the master branch (i.e. before moving to stage? after?) and on trying to hook it into Capistrano for deployments:

    Make sure you're up to date with all the changes made on the remote master branch by other developers:

        git checkout master
        git pull

    Create a new branch that pertains to the particular bug you're trying to fix:

        git checkout -b bug-fix-branch

    Make your changes:

        git status
        git add .
        git commit -m "Friendly message about the commit"

    So, this is usually where I get stuck. At this point, I have a master branch that is healthy and a new bug-fix-branch that contains my (untested, other than unit tests) changes. If I want to push my changes to stage (through cap staging deploy), do I have to merge my changes back into the master branch? I'd prefer not to, since it seems like master should be kept free of untested code. Do I even deploy from master, or should I be tagging a release first and then modifying my production.rb file to deploy from that tag? git-deployment seems to address some of these workflow issues, but I can't seem to find out how on earth it actually hooks into cap staging deploy and cap production deploy. Thoughts? I assume there's likely a canonical way to do this, but I either can't find it or I'm too new to Git to recognize that I have found it. Help!

    Read the article

  • How to model parent to child pair in MySQL (SQL)

    - by mikeschuld
    I have a data model that includes the element types Stage, Actor, and Form. Logically, Stages can be assigned pairs of (Form <--- Actor), which can be duplicated many times (i.e. the same person and the same form added to the same stage at a later date/time). Right now I am modeling this with these tables (Stage, Form, and Actor are the base tables):

        Form_Actor
        -----------------
        | Id          |
        | FormId      |   --> Id in Form
        | ActorId     |   --> Id in Actor

        Stage_FormActor
        -----------------
        | Id          |
        | StageId     |   --> Id in Stage
        | FormActorId |   --> Id in Form_Actor

    I am using CodeSmith to generate the data layer for this setup, and none of the templates really know how to handle this type of relationship correctly when generating classes. Ideally, the ORM would have Stage.FormActors, where FormActor would be the pair (Form, Actor). Is this the correct way to model these relationships? I have tried using all three Ids in one table as well:

        Stage_Form_Actor
        -----------------
        | Id          |
        | StageId     |   --> Id in Stage
        | FormId      |   --> Id in Form
        | ActorId     |   --> Id in Actor

    This doesn't really get generated very well either. Ideas?

    Read the article

  • Looking for fast, minimal, preferably free disc cloning software [closed]

    - by Dave
    We have to test our application installation and functionality on many Windows operating system versions and languages (XP, Vista, Win7; English, Spanish, Portuguese, etc.; 32-bit and 64-bit). While we can do much of this in virtual machines, we have noticed that VMs sometimes hide problems, or raise false bugs. So, we need to do "bare metal" OS installation for much of our testing. I have been using Acronis True Image for the past year, and am not impressed. It often gives random errors which require a reboot, and is really slow. For example, when trying to restore an image, it goes through a "Locking partition" cycle about three times (once after you click OK on each step of the wizard), each of which can take 5 minutes to complete. This all happens BEFORE it actually starts the image copy, which is sometimes quick (3-5 minutes), sometimes long (hours). All of our images are roughly the same size, so that is not related. So, anyway, I'm looking to switch to something else. I only need very basic functionality: just creating images of entire discs, and then restoring those images onto the exact same hard drive at a later date. That's it. I'm not opposed to paying for a good piece of software, but if there is something free out there that does the job well, that would be a preference. The OS on which the imaging software would run is Windows Vista, but bootable media (into a Linux flavor) would be fine also, as long as it's quick to use and reliable. Recommendations? (Also, moderators, if this should be a CW, I'll be happy to mark it as such; unclear about the rules there.)

    Read the article

  • Server side form validation and POST data

    - by tomcritchlow
    Hi, I have a user input form here: http://www.7bks.com/create (Google login required). When you first create a list you are asked to create a public username. Unfortunately, there is currently no constraint to make this unique. I'm working on the code to enforce unique usernames at the moment and would like to know the best way to do it. Tech details: App Engine, Python, webapp framework. What I'm planning is something like this:
    1. The /create form posts the data to /inputlist/ (this is the same as currently happens).
    2. /inputlist/ queries the datastore for the given username. If it already exists, redirect back to /create.
    3. Display the /create page with all the info previously entered, but with an additional error message of "this username is already taken".
    My question is: is this the best way of handling server-side validation? What's the best way of storing the list details while I verify and modify the username? As I see it, I have 3 options to store the list details, but I'm not sure which is "best":
    1. Store the list details in the session cookie (I am using GAEsessions for cookies).
    2. Define a separate POST class for /create and post the list data back from /inputlist/ to the /create page (currently /create only has a GET class).
    3. Store the list in the datastore, even though the username is non-unique.
    Thank you very much for your help :) I'm pretty new to Python and coding in general, so if I've missed something obvious, my apologies. Tom
    PS - I'm sure I can eventually figure it out, but I can't find any documentation on POSTing data using the webapp App Engine framework, which I'd need in order to do solution 2 above :s maybe you could point me in the right direction for that too? Thanks!
    PPS - It's a little out of date now but you can see roughly how the /create and /inputlist/ code works at the moment here: 7bks.com Gist
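
    Not from the original question: for the PS, a webapp handler receives form posts in its post() method and reads fields from self.request. A rough sketch of the /inputlist/ handler under that assumption; UserProfile, the field names, and the URLs are invented placeholders.

        # Minimal sketch of a webapp POST handler for /inputlist/ (names are placeholders).
        from google.appengine.ext import webapp
        from google.appengine.ext.webapp.util import run_wsgi_app

        class InputList(webapp.RequestHandler):
            def post(self):
                username = self.request.get('username')
                list_title = self.request.get('list_title')

                # Placeholder datastore check; UserProfile is an invented model name.
                taken = UserProfile.all().filter('username =', username).get() is not None
                if taken:
                    # Bounce back to /create flagging the error; carrying the submitted
                    # values along is the part the question is actually about.
                    self.redirect('/create?error=username_taken')
                    return

                # ... save the list and the username here ...
                self.redirect('/list/%s' % username)

        application = webapp.WSGIApplication([('/inputlist/', InputList)], debug=True)

        def main():
            run_wsgi_app(application)

        if __name__ == '__main__':
            main()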

    Read the article

  • SQL Server Express performance issue

    - by Developer IT
    Hi folks! I know my questions will sound silly and probably nobody will have a perfect answer, but since I am at a complete dead end with the situation, it will make me feel better to post it here. So... I have a SQL Server Express database that's 500 MB. It contains 5 tables and maybe 30 stored procedures. This database is used to store articles and is used for the Developer IT web site. Normally the web pages load quickly, let's say 2 or 3 seconds. BUT the sqlserver process uses 100% of the processor for those 2 or 3 seconds. I tried to find which stored procedure was the problem and could not find one. It seems to happen on every read of the table that contains the articles (there are about 155,000 of them, and 20 or so get added every 15 minutes). I added a few indexes but without luck... Is it because the table is full-text indexed? Should I order by the primary key instead of by date? I have never had any problems with ordering by dates... Should I use dynamic SQL? Should I add the primary key to the URL of the articles? Should I use multiple indexes for separate columns or one big index? If you want more details or code bits, just ask for them. Basically, every little hint is much appreciated. Thanks.

    Read the article
