Search Results

Search found 535 results on 22 pages for 'stewart johnson'.


  • How to access a PHP Web Service from ASP.Net?

    - by Steve Johnson
    I am trying to use a web service in a C# ASP.Net web application. The service is built in PHP and is located on a remote server not under my control, so I can't modify it to add metadata or anything else. When I use the "Add Web Reference" option in Visual Studio 2008, I receive the following error while trying to add the web service at https://subreg.forpsi.com/robot2/subreg_command.php?wsdl:

        The HTML document does not contain Web service discovery information.

    The web service functions are exposed and displayed in Visual Studio 2008; the WSDL describes a service named t3Service with these methods: __construct(), create_contact(), get_contact(), get_domain_info(), get_last_error_code(), get_last_error_msg(), get_NSSET(), get_owner_mail(), login(), register_domain(), register_domain_with_admin_contacts(), renew_domain(), request_sendmail(), send_auth_info(), transfer_domain(). However, I could not add a reference to it for use in the ASP.Net application. I also tried the wsdl.exe method by retrieving the XML, copying it to a .wsdl file, and generating a proxy class, but the wsdl output contains warnings and the generated proxy class skips the exposed functions, emitting one comment per operation of the form:

        // CODEGEN: The operation binding 'create_contact' from namespace 'urn:t3' was ignored. Each message part in an use=encoded message must specify a type.

    and likewise for get_contact, get_domain_info, get_last_error_code, get_last_error_msg, get_NSSET, get_owner_mail, send_auth_info, transfer_domain, request_sendmail, login, register_domain, register_domain_with_admin_contacts, and renew_domain. Any help in this regard would be highly appreciated.

    Read the article

  • map xml element to xsd complexType based on attribute

    - by Joshua Johnson
    Assume there exists an XML instance document that looks like this:

        <root>
          <object type="foo"> <!-- ... --> </object>
          <object type="bar"> <!-- ... --> </object>
        </root>

    My goal is to have a small (static) schema that verifies proper <element type="xxx" /> syntax for objects, and another schema (more prone to change) that verifies the contents of each object element against a complexType that matches the type attribute:

        <complexType name="foo"><!-- should match object with type="foo" --></complexType>
        <complexType name="bar"><!-- should match object with type="bar" --></complexType>

    What is the best way to accomplish this (or something similar)?

    Read the article

  • SSD and programming

    - by Simon Johnson
    I'm trying to put together a business case for getting every developer in our company an Intel SSD drive. The main codebase contains roughly 400,000 lines of code. My theory is that since the code is scattered across maybe 1,500 files, an SSD would be substantially faster for compiles, the logic being that many small reads really punish the seek-time bottleneck of a traditional hard drive. Am I right? Is an SSD worth the money in productivity gains from reducing the edit/compile cycle time?
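    One cheap way to sanity-check the seek-time theory before buying anything is to time a cold-cache read of the whole source tree on an existing HDD machine and on any available SSD machine. A minimal sketch, assuming a run right after a reboot so the OS cache is cold; the path and extension list are placeholders, not from the post:

    ```python
    # Rough cold-cache read benchmark: open and read every source file
    # under the tree and report how long it took. Run once on an HDD
    # machine and once on an SSD machine; if seeks dominate on the HDD,
    # the gap will be large.
    import os
    import time

    SOURCE_ROOT = "/path/to/main/codebase"            # placeholder path
    EXTENSIONS = (".cs", ".c", ".cpp", ".h", ".php")  # adjust to the codebase

    start = time.time()
    files_read = 0
    bytes_read = 0
    for dirpath, _dirnames, filenames in os.walk(SOURCE_ROOT):
        for name in filenames:
            if name.endswith(EXTENSIONS):
                with open(os.path.join(dirpath, name), "rb") as f:
                    bytes_read += len(f.read())
                files_read += 1

    elapsed = time.time() - start
    print("%d files, %.1f MB read in %.2f s" % (files_read, bytes_read / 1e6, elapsed))
    ```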

    Read the article

  • Symlinking (ln) faster than moving (mv)?

    - by Chad Johnson
    When we build web software releases, we prepare the release in a temporary directory and then replace the release directory with the temporary one just prepared:

        # Move and replace existing release directory.
        mv /path/to/httpdocs /path/to/httpdocs.before
        mv /path/to/$newReleaseName /path/to/httpdocs

    Under this scheme, in about 1 of every 15 releases a user happens to be using a file in the original release directory at exactly the moment the commands above run, and a fatal error occurs for that user. I am wondering whether symlinking as follows would be significantly faster, in terms of processing time, thereby helping to lessen the likelihood of this problem:

        # Remove and replace existing release symlink.
        ln -sf /path/to/$newReleaseName /path/to/httpdocs
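    If the real concern is the window during the swap, atomicity matters more than raw speed: the usual trick is to create the new symlink under a temporary name and then rename(2) it over the old one, since rename is atomic on POSIX filesystems. A minimal sketch in Python (paths are placeholders; it assumes httpdocs is already a symlink rather than a real directory):

    ```python
    # Build the new symlink off to the side, then rename() it over the
    # live one. rename() replaces the destination in a single atomic
    # step on POSIX, so requests see either the old release or the new
    # one, never a missing path.
    import os

    new_release = "/path/to/release-new"   # placeholder for $newReleaseName
    live_link = "/path/to/httpdocs"        # served path, assumed to be a symlink
    tmp_link = live_link + ".tmp"

    if os.path.lexists(tmp_link):
        os.remove(tmp_link)                # clear any leftover temp link
    os.symlink(new_release, tmp_link)      # new link, built off to the side
    os.rename(tmp_link, live_link)         # atomic swap
    ```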

    Read the article

  • Is this a good centralized DVCS workflow?

    - by Chad Johnson
    I'm leaning toward using Mercurial, coming from Subversion, and I'd like to maintain a centralized workflow like I had with Subversion. Here is what I am thinking:

        stable (clone on server)
            default (branch)
        development (clone on server)
            default (branch)
            bugs (branch)
                developer1 (clone on local machine)
                developer2 (clone on local machine)
                developer3 (clone on local machine)
            feature1 (branch)
                developer3 (clone on local machine)
            feature2 (branch)
                developer1 (clone on local machine)
                developer2 (clone on local machine)

    As far as branches vs. clones are concerned, does this workflow make sense? Do I have things straight? Also, the 'stable' clone IS the release. Does it make sense for the 'default' branch to be the release and the one that all other branches are ultimately merged into?

    Read the article

  • Why would some POST data go missing when using Uploadify?

    - by Chad Johnson
    I have been using Uploadify in my PHP application for the last couple of months, and I've been trying to track down an elusive bug. I receive emails when fatal errors occur, and they provide me a good amount of detail. I've received dozens of them. I have not, however, been able to reproduce the problem myself. Some users (like myself) experience no problem, while others do. Before I give details of the problem, here is the flow:

        1. User visits the edit screen for a page in the CMS I am using. The record id for the page is put into the form as a hidden value.
        2. User clicks the Uploadify browse button and selects a file (only single file selection is allowed).
        3. User clicks the Submit button for my form.
        4. jQuery intercepts the form submit action, triggers Uploadify to start uploading, and returns false for the submit action (manually cancelling the form submit event so that Uploadify can take over).
        5. Uploadify uploads to a custom process script.
        6. Uploadify finishes uploading and triggers the Javascript completion callback.
        7. The Javascript callback calls $('#myForm').submit() to submit the form.

    Now that's what SHOULD happen. I've received reports of the upload freezing at 100% and others where "I/O Error" is displayed. What's happening is that the form is submitted by the completion callback, but some POST parameters present in the form are simply not in the post data. The id for the page, which as I said earlier is added to the form as a hidden field, is simply not there in the post data; there is no item for 'id' in the $_POST array. The strange thing is, the post data DOES contain values for some fields. For instance, I have an input of type text called "name", which is for the record name, and it does show up in the post data. Here is what I've gathered:

        - This has been happening on Mac OS X 10.5 and 10.6, Windows XP, and Windows 7. I can post exact user agent strings if that helps.
        - Users must use Flash 10.0.12 or later. We've made it so the form reverts to using a normal "file" field if they have < 10.0.12.

    Does anyone have ANY ideas at all what the cause of this could be?

    Read the article

  • Why would some $_POST variables go missing for a form with PHP?

    - by Chad Johnson
    Sometimes, multiple times a day in fact, users of my web application submit a certain form which has about a dozen form fields, half of which are hidden fields, and half of the $_POST data is simply not present in the processing script. Note that the fields that are not present are at the very bottom of the form. I know this because it raises a fatal error, and an email is dispatched to me which includes the post data. And of course, neither I nor any of the developers on my team can reproduce the problem. Flash is involved in the process, as I'm using a library called Uploadify to display a progress meter. Here is the flow; does anyone have ANY ideas at all why some of the post data would be getting wiped out?

        1. User visits the edit screen for a page in the CMS I am using. The record id for the page is put into a form as a hidden value.
        2. User clicks the Uploadify browse button and selects a file (only single file selection is allowed).
        3. User clicks the Submit button for my form.
        4. jQuery intercepts the form submit action, triggers Uploadify to start uploading, and returns false for the submit action (manually cancelling the form submit event so that Uploadify can take over).
        5. Uploadify uploads to a custom process script.
        6. Uploadify finishes uploading and triggers the Javascript completion callback.
        7. The Javascript callback calls $('#myForm').submit() to submit the form.

    This happens on multiple browsers (Firefox 3.5, 3.6, Safari, Internet Explorer 7, 8) and multiple platforms (Mac OS 10.5, 10.6 and Windows XP, 7).

    Read the article

  • Inconsistency in modified/created/accessed time on mac

    - by Seth Johnson
    I'm having trouble using os.utime to correctly set the modification time on the Mac (Mac OS X 10.6.2, running Python 2.6.1 from /usr/bin/python). It's not consistent with the touch utility, and it's not consistent with the properties displayed in the Finder's "Get Info" window. Consider the following command sequence; the 'created' and 'modified' times in the plain text refer to the "Get Info" window attributes. As a reminder, os.utime takes arguments (filename, (atime, mtime)).

        >>> import os
        >>> open('tempfile','w').close()

    'created' and 'modified' are both the current time.

        >>> os.utime('tempfile', (1000000000, 1500000000))

    'created' is the current time, 'modified' is July 13, 2017.

        >>> os.utime('tempfile', (1000000000, 1000000000))

    'created' and 'modified' are both September 8, 2001.

        >>> os.path.getmtime('tempfile')
        1000000000.0
        >>> os.path.getctime('tempfile')
        1269021939.0
        >>> os.path.getatime('tempfile')
        1269021951.0

    ...but os.path.get?time and os.stat don't reflect it.

        >>> os.utime('tempfile', (1500000000, 1000000000))

    'created' and 'modified' are still both September 8, 2001.

        >>> os.utime('tempfile', (1500000000, 1500000000))

    'created' is September 8, 2001, 'modified' is July 13, 2017.

    I'm not sure if this is a Python problem or a Mac stat problem. When I exit the Python shell and run touch -a -t 200011221234 tempfile, neither the modification nor the creation time is changed, as expected. Then I run touch -m -t 200011221234 tempfile and both the 'created' and 'modified' times are changed. Does anyone have any idea what's going on? How do I change the modification and creation times consistently on the Mac? (Yes, I am aware that on Unixy systems there is no "creation time.")
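    One way to see which timestamps os.utime actually changed, independently of the Finder, is to read them all back through os.stat; on Mac OS X the stat result also carries st_birthtime, which is what "Get Info" reports as 'created', whereas st_ctime on Unix is inode-change time, not creation time. A small sketch along those lines (same tempfile as above; st_birthtime is guarded because it only exists on Mac/BSD builds):

    ```python
    # Dump every timestamp the filesystem keeps for the file, so the
    # effect of each os.utime call can be observed directly.
    import os
    import time

    def show_times(path):
        st = os.stat(path)
        for label in ("st_atime", "st_mtime", "st_ctime", "st_birthtime"):
            value = getattr(st, label, None)    # st_birthtime: Mac/BSD only
            if value is not None:
                print("%-13s %s" % (label, time.ctime(value)))

    open('tempfile', 'w').close()
    show_times('tempfile')
    os.utime('tempfile', (1000000000, 1500000000))  # (atime, mtime)
    show_times('tempfile')
    ```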

    Read the article

  • Help with SSH Tunnel [closed]

    - by Andrew Johnson
    I am running a Django instance locally and doing some Facebook development. So, I set up a port on a remote machine to forward to my local machine, so that Facebook can hit the web server and have the requests forwarded to my local machine. Unfortunately, I'm getting the following error in my browser when I try to access the page: http://dev.thegreathive.com/ Any idea what I'm doing wrong? I think the problem is on my local machine, since if I kill the SSH tunnel, the error message changes.

    Read the article

  • Can Automapper populate a strongly typed object from the data in a NameValueCollection?

    - by Seth Petry-Johnson
    Can Automapper map values from a NameValueCollection onto an object, where target.Foo receives the value stored in the collection under the "Foo" key? I have a business object that stores some data in named properties and other data in a property bag. Different views make different assumptions about the data in the property bag, which I capture in page-specific view models. I want to use AutoMapper to map both the "inherent" attributes (the ones that always exist) as well as the "dynamic" attributes (the ones that vary per view and may or may not exist).

    Read the article

  • Multi-level clones with Git?

    - by Chad Johnson
    So, I'm thinking of having the following centralized setup with Git (each of these is a clone):

        stable
            development
                developer1
                developer2
                developer3

    So, I created my stable repository:

        git --bare init

    made the 'development' clone:

        git clone ssh://host.name//path/to/stable/project.git development

    and made a 'developer' clone:

        git clone ssh://host.name//path/to/development/project.git developer

    Now, I make a change, commit, and then push from my developer account:

        git commit --all
        git push

    and the change goes to the development clone. But when I ssh to the server, go to the development clone directory, and run "git fetch" or "git pull", I don't see the changes. So what do I do? Am I totally misunderstanding things and doing things wrong? How can I see the changes in the 'development' clone that I pushed from my 'developer' clone? This worked fine in Mercurial.

    Read the article

  • How important is W3C XHTML/CSS validation when finalizing work?

    - by Andrew G. Johnson
    Even though I always strive for complete validation these days, I often wonder if it's a waste of time. If the code runs and it looks the same in all browsers (I use browsershots.org to verify), then do I need to take it any further, or am I just being overly anal? What level do you hold your code to when you create it for:

        a) yourself
        b) your clients

    P.S. Jeff and company, why doesn't Stack Overflow validate? :)

    EDIT: Some good insights. I think that since I've been so valid-obsessed for so long, I program knowing what will cause problems and what won't, so I'm in a better position than people who create a site first and then "go back and fix the validation problems". I think I may post another question on Stack Overflow, "Do you validate as you go or do you finish and then go back and validate?", as that seems to be where this question is going.

    Read the article

  • Why is there no music streaming API service?

    - by Chad Johnson
    Apple has decided to kill lala.com. I loved that site. Now everyone has to go back to paying $0.89+ for songs from Amazon, iTunes, etc. Lame. Rhapsody would be great, except there are no clients for Mac or Linux. They do have a web interface, but it is nothing compared to lala's web 2.0-y interface. What I just don't understand is, why is there no music streaming API service out there? Basically, developers could hook the service into any desktop or web app, and then users of the app could pay $x a month (like with Rhapsody) and play any amount of music, so long as their subscription is active. Why not? Lala streamed music to web browsers, so surely such a service could be as secure as lala is (was), preventing music theft.

    Read the article

  • How to catch FTP errors? e.g., socket.error: [Errno 10060]

    - by Johnson
    I'm using the ftplib module to upload files:

        import ftplib

        files = ['a.txt', 'b.txt', 'c.txt']
        s = ftplib.FTP(ftp_server, ftp_user, ftp_pw)  # connect to FTP
        for i in range(len(files)):
            f = open(files[i], 'rb')
            stor = 'stor ' + files[i]
            s.storbinary(stor, f)
            f.close()  # close file
        s.quit()  # close ftp

    How do I catch the following error?

        socket.error: [Errno 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond

    And what other errors are common when using the FTP module that I should also catch? Thanks for any help or pointers.
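    The usual pattern is to wrap the connection and the transfers in try/except. ftplib exposes ftplib.all_errors, a tuple that includes socket.error along with the protocol-level exceptions (error_temp, error_perm, error_reply, ...), so it is a convenient catch-all. A sketch based on the snippet above (server name and credentials are placeholders, not from the post):

    ```python
    # Catch the connection timeout (Errno 10060) separately, and let
    # ftplib.all_errors cover everything else ftplib can raise.
    import ftplib
    import socket

    ftp_server = "ftp.example.com"   # placeholder
    ftp_user = "user"                # placeholder
    ftp_pw = "secret"                # placeholder
    files = ['a.txt', 'b.txt', 'c.txt']

    try:
        s = ftplib.FTP(ftp_server, ftp_user, ftp_pw)   # may raise socket.error
        for name in files:
            f = open(name, 'rb')
            try:
                s.storbinary('stor ' + name, f)
            finally:
                f.close()
        s.quit()
    except socket.error as e:        # e.g. [Errno 10060] host not responding
        print("Connection problem: %s" % e)
    except ftplib.all_errors as e:   # any other FTP-level failure
        print("FTP error: %s" % e)
    ```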

    Read the article

  • In Subversion, I know when a file was added, what's the quickest way to find out when it was deleted

    - by Eric Johnson
    OK, so suppose I know I added file "foo.txt" to my Subversion repository at revision 500, so I can do

        svn log -v http://svnrepo/path/foo.txt@500

    and that shows all the files added at the same time. What's the fastest way to find when the file was deleted after it was added? I tried

        svn log -r500:HEAD -v http://svnrepo/path/foo.txt@500

    but that gives me "path not found", perhaps obviously, because the file "foo.txt" doesn't exist at HEAD. I can try a binary search algorithm going forward through revisions (and that would certainly be faster than typing this question), but is there a better way?
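    The binary-search idea mentioned at the end is easy to script: probe whether the URL still exists at a given revision with "svn ls URL@REV" and bisect between the revision where the file was added and one where it is already gone. A rough sketch in Python (the upper revision is a placeholder; it assumes the svn command-line client is on PATH and that the file was never resurrected after being deleted):

    ```python
    # Bisect for the first revision in which foo.txt no longer exists.
    # "svn ls URL@REV" exits non-zero ("path not found") when the path
    # is absent at that revision, which is all the probe needs.
    import os
    import subprocess

    URL = "http://svnrepo/path/foo.txt"
    ADDED_REV = 500    # known from the question: the file exists here
    UPPER_REV = 2000   # placeholder: any revision where the file is gone

    DEVNULL = open(os.devnull, "w")

    def exists_at(rev):
        cmd = ["svn", "ls", "%s@%d" % (URL, rev)]
        return subprocess.call(cmd, stdout=DEVNULL, stderr=DEVNULL) == 0

    lo, hi = ADDED_REV, UPPER_REV    # invariant: exists at lo, gone at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if exists_at(mid):
            lo = mid
        else:
            hi = mid

    print("foo.txt was deleted in revision %d" % hi)
    ```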

    Read the article

  • System.ComponentModel.Component in Visual Studio 2008

    - by Mark A Johnson
    I'm maintaining a .NET 2.0 application using Visual Studio 2008. When the application was built, it was originally in Visual Studio 2003 and made use of the System.ComponentModel.Component class for data access: you can drag and drop commands, connections, etc. onto the designer surface of the component. In 2008, the data access classes don't "stick" to the component, i.e. the code for the command does not get generated in the class. When did this change? 2005? Is there a replacement for this behavior, perhaps using the DB Pro edition? Thanks.

    Read the article

  • jqueryUI: Drag element from dialog and drop onto main page?

    - by Seth Petry-Johnson
    I am trying to create a drag-and-drop system consisting of a workspace and a "palette". The workspace currently consists of re-orderable list items, and I want the palette to be a floating window from which I can drag items and add them to a specific position in the workspace. I am currently using the jQuery UI "sortable" plugin for the workspace and the jQuery UI "dialog" plugin for the palette. However, I cannot drag something out of the dialog and onto the main page. When I try, the item being dragged disappears as it crosses the boundary of the dialog (which makes sense). What can I change so that items will remain visible as I drag them out of the palette, allowing me to drop them onto the main workspace? Alternatively, are there any jQuery plugins that offer this sort of drag-and-drop palette as a primary feature?

    Read the article
