Search Results

Search found 3558 results on 143 pages for 'hosted'.


  • Google App Engine on Google Apps Domain

    - by Bob Ralian
    I'm having trouble getting my domain pointed to my website hosted with Google App Engine. Here's the background... take care to separate the concepts of "Google Apps" (domain hosting, email, etc.) and "Google App Engine" (website framework). I have a domain that's using Google Apps for Your Domain; let's call it company.com. So my login for my Google Apps account is [email protected]. I have a different domain that is aliased back to my Google Apps account; let's call it mycompany.com. It's been successfully aliased and registered with my primary Google Apps account using the CNAME method, and has updated MX records. We have a ton of domains, and I only want to use one Google Apps account to maintain them all. Now I have a website I've built using Google App Engine, and the URL is effectively mycompany.appspot.com. I want to get mycompany.com to point to my website that currently resides at mycompany.appspot.com. There's a spot in the Google App Engine dashboard under Application Settings where you can add a domain. So I click there, enter mycompany.com, and I get an error message saying that domain is not using Google Apps. If I back up to the page I submitted, there's a note saying I need to register the domain with Google Apps. So I click the link to do that, enter mycompany.com, and I get an error message saying the domain has been registered and is in the process of ownership verification. But that process is already finished. So... what do I do? Does Google App Engine not support a domain that is only aliased to a primary Google Apps account? Does mycompany.com need to have its own primary Google Apps account?

    Read the article

  • Server reporting incorrect mime type for css files

    - by Becky
    We have a VPS server that we host our websites on. I have written a CMS using CodeIgniter. On one of the interfaces, I am attempting to upload a CSS file to the system. This worked correctly when we had it hosted on shared hosting. Since we've moved it to the VPS, I am getting an "incorrect filetype" error. It all comes down to the fact that the server is reporting a mime type of text/x-c for the CSS file rather than text/css. I logged in via shell and ran the following command on an existing valid CSS file (to make sure it wasn't an issue with either CodeIgniter or with PHP): file --brief --mime 'filename.css' 2>&1 The server gave me the following in response to my command: text/x-c; charset=us-ascii My question... is there some sort of server setting that I need to tweak to get the server to correctly identify the CSS file as text/css? Do I just have to add a mime type for the CSS files to the server? I found the mime types file (/etc/mime.types), and it just has video types and a couple of others that I have no idea what they are. There is nothing in there for CSS or images or HTML files, unless I'm looking in the wrong spot. I'm not a server person, so I'm hoping someone can help me out. Some server specs: Apache/2.2.22 (Unix), PHP 5.3.13, Server API = CGI/FastCGI; the fileinfo PHP extension appears to be disabled.
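
    If adding the mapping server-side turns out to be the fix, here is a minimal sketch of the two usual places to declare it (this assumes Apache reads /etc/mime.types, or that AddType is allowed in httpd.conf or .htaccess on this host):

        # /etc/mime.types: map the extension to the correct type
        text/css        css

        # or in httpd.conf / .htaccess
        AddType text/css .css

    One caveat: the shell file command and PHP's fileinfo extension both use libmagic rather than Apache's mime tables, so if the CMS check is based on what file reports, enabling fileinfo (or adjusting the check) may matter more than the Apache mapping.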

    Read the article

  • Posting xml from classic asp to asp.net

    - by Chris Dunaway
    I apologize if this has been asked before. I searched and didn't find anything that matched my situation. Also bear in mind I am fairly new to ASP/ASP.NET development. My current project is a relatively simple e-commerce site. The customer will connect to the site, select products, input shipping and billing information, payment information (credit card), and submit the order. The project is being split into two parts: the store front, which includes displaying the items and taking the customer's shipping and billing information, and the payment site, which will collect the customer's credit card, compute tax, and save the order into the company's system. The reason the site was split up was that our side (the payment side) already has facilities for credit card handling and tax computation. There may also be some regulatory issues that the store front side does not want to deal with (which we already do). I'm working on the payment portion of the app and I am using ASP.NET. The store front side is being written in classic ASP (not my decision). Each part will be hosted on different servers. The problem I am having is transferring the contents of the "shopping cart" to our app so that we can collect the CC info and submit the order. We had thought that the classic ASP side could somehow post an XML fragment which contains the billing/shipping info and the items selected. Our side would display a summary of the order, securely collect the credit card info, and submit the order to our system. But I have been unable to post or send the XML from a classic ASP page on one server to our ASP.NET application on another. It all works just fine when I test on the same server. How can I post (or otherwise transfer) the shopping cart data from classic ASP to ASP.NET across server boundaries and transfer control to the ASP.NET application? As I said, I am new to web development, so this is proving quite a challenge for me. Thanks
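
    One common shape for this is to have the classic ASP page render a plain HTML form that posts the cart XML in a hidden field straight to the ASP.NET site, which both transfers the data and transfers control (the browser ends up on the payment server). A minimal sketch of the receiving side only; the page name, hidden-field name and session key are assumptions, not anything from the question:

        using System;
        using System.Web.UI;
        using System.Xml;

        // Checkout.aspx.cs: receives the cart XML posted by the classic ASP store front.
        // The hidden-field name "cartXml" is an assumption for this sketch.
        public partial class Checkout : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                string rawCart = Request.Form["cartXml"];
                if (string.IsNullOrEmpty(rawCart)) return;

                XmlDocument cart = new XmlDocument();
                cart.LoadXml(rawCart);

                // Keep the cart around for the rest of the payment flow.
                Session["Cart"] = cart.OuterXml;
            }
        }

    Because the posted value contains angle brackets, the receiving page would also need ValidateRequest="false" in its @ Page directive (or the XML encoded on the way in), and posting over HTTPS is the usual choice given the billing data involved.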

    Read the article

  • clientaccesspolicy.xml not being requested via HTTPS

    - by Philip
    I have a Silverlight app that has been using HTTP to communicate with self-hosted WCF services during development. I am now securing the services via HTTPS. I am getting an error I had back at the beginning of the project: "An error occurred while trying to make a request to URI 'https://localhost:8303/service'. This could be due to attempting to access a service in a cross-domain way without a proper cross-domain policy in place, or a policy that is unsuitable for SOAP services. You may need to contact the owner of the service to publish a cross-domain policy file and to ensure it allows SOAP-related HTTP headers to be sent. This error may also be caused by using internal types in the web service proxy without using the InternalsVisibleToAttribute attribute. Please see the inner exception for more details." My clientaccesspolicy.xml file is set up to allow access from http://* and https://*. The only difference is using HTTP vs. HTTPS. The issue is that I can usually see (via Fiddler) the clientaccesspolicy.xml file being requested, but now I cannot, and I'm assuming it is failing because of this. Any ideas?
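
    For reference, a minimal sketch of the kind of policy file described, which the HTTPS endpoint itself has to serve from its own root (https://localhost:8303/clientaccesspolicy.xml); the header list and paths here are assumptions:

        <?xml version="1.0" encoding="utf-8"?>
        <access-policy>
          <cross-domain-access>
            <policy>
              <allow-from http-request-headers="SOAPAction">
                <domain uri="http://*" />
                <domain uri="https://*" />
              </allow-from>
              <grant-to>
                <resource path="/" include-subpaths="true" />
              </grant-to>
            </policy>
          </cross-domain-access>
        </access-policy>

    With a self-hosted service the policy file is typically exposed via a small WebGet endpoint at the root of the same scheme and port, so it is worth confirming the HTTPS host serves it too; Fiddler also needs HTTPS decryption enabled before the secure policy request will show up at all.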

    Read the article

  • Apache Axis2 1.5.1 and NTLM Authentication

    - by arcticpenguin
    I've browsed all of the discussions here on StackOverflow regarding NTLM and Java, and I can't seem to find the answer. I'll try to be much more specific. Here's some code that returns a client stub that (I hope) is configured for NTLM authentication: ServiceStub getService() { try { ServiceStub stub = new ServiceStub( "http://myserver/some/path/to/webservices.asmx"); // this service is hosted on IIS List<String> ntlmPreferences = new ArrayList<String>(1); ntlmPreferences.add(HttpTransportProperties.Authenticator.NTLM); HttpTransportProperties.Authenticator ntlmAuthenticator = new HttpTransportProperties.Authenticator(); ntlmAuthenticator.setPreemptiveAuthentication(true); ntlmAuthenticator.setAuthSchemes(ntlmPreferences); ntlmAuthenticator.setUsername("me"); ntlmAuthenticator.setHost("localhost"); ntlmAuthenticator.setDomain("mydomain"); Options options = stub._getServiceClient().getOptions(); options.setProperty(HTTPConstants.AUTHENTICATE, ntlmAuthenticator); options.setProperty(HTTPConstants.REUSE_HTTP_CLIENT, "true"); stub._getServiceClient().setOptions(options); return stub; } catch (AxisFault e) { e.printStackTrace(); } return null; } This returns a valid ServiceStub object. When I try to invoke a call on the stub, I see the following in my log: Jun 9, 2010 12:12:22 PM org.apache.commons.httpclient.auth.AuthChallengeProcessor selectAuthScheme INFO: NTLM authentication scheme selected Jun 9, 2010 12:12:22 PM org.apache.commons.httpclient.HttpMethodDirector authenticate SEVERE: Credentials cannot be used for NTLM authentication: org.apache.commons.httpclient.UsernamePasswordCredentials Does anyone have a solution to this issue?

    Read the article

  • WCF configuration and ISA Proxies

    - by Morten Louw Nielsen
    Hi, I have a setup with a .NET WCF service hosted on IIS. The client apps connect to the service through a set of ISA proxies. I don't know how many, and I don't know about their configuration etc. In the client apps I open a client to the service and make several calls via the same client. It works great in my office, but when I deploy at the customer (using the ISAs), the connection breaks after some calls. In a successful case, the client will live a few seconds at most, but is that too much? I think there might be several proxies; maybe it's using load balancing. Pseudo code is something like this: WcfClient myClient = new WcfClient(); foreach (WorkItem Item in WorkItemsStack) myClient.ProcessItem(Item); myClient.Close(); I am thinking whether I have to do something like this instead: foreach (WorkItem Item in WorkItemsStack) { WcfClient myClient = new WcfClient(); myClient.ProcessItem(Item); myClient.Close(); } Anyone with experience in this field? Kind regards, Morten, Denmark
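
    If the per-item variant is the way to go, here is a sketch of it with the usual close/abort handling added (WcfClient, WorkItem and ProcessItem are just the names from the question, so this is only illustrative):

        foreach (WorkItem item in WorkItemsStack)
        {
            WcfClient client = new WcfClient();
            try
            {
                client.ProcessItem(item);
                client.Close();
            }
            catch (System.ServiceModel.CommunicationException)
            {
                client.Abort();   // the channel is faulted, so Close() would throw again
                throw;
            }
            catch (System.TimeoutException)
            {
                client.Abort();
                throw;
            }
        }

    When an HTTP proxy is dropping long-lived connections, turning off HTTP keep-alive on the transport (KeepAliveEnabled = false on a customBinding's HttpTransportBindingElement) is also sometimes worth a try.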

    Read the article

  • Could not load file or assembly 'GMap.NET.Core' or one of its dependencies. An attempt was made to load a program with an incorrect format.

    - by Sam M
    I have a WCF service application in VS2010. My local machine is a 32-bit OS whereas the server is 64-bit. There are around 6 services in my solution. I'm successfully able to host the application on IIS on my local machine, and it works fine. But when I try to host that service application on the server, I get the error below: Could not load file or assembly 'GMap.NET.Core' or one of its dependencies. An attempt was made to load a program with an incorrect format. I do have a reference added in my solution for GMap.NET.Core. I have tried to set the properties in my solution to Any CPU. Also, in the application pool I have set Enable 32-Bit Applications to True. I have also set Copy Local to True in my solution before publishing. When I run the solution locally I don't get any error and the solution builds successfully. What else can I try to get my services successfully hosted on the server and accessible from my application?

    Read the article

  • Silverlight, WCF service, integrated security AND ssl/https not possible?

    - by Flores
    I have this setup that works perfectly when using HTTP: a Silverlight 3 client, a .NET 4 WCF service hosted in IIS with basicHttpBinding, and integrated security on the site. When setting HTTPS to required on the website, the setup stops working. Using the WcfTestClient on the URI I get the message: The HTTP request is unauthorized with client authentication scheme 'Anonymous'. The authentication header received from the server was 'Negotiate,NTLM'. The remote server returned an error: (401) Unauthorized. Maybe this makes sense because the WcfTestClient does not pass credentials? In the web.config the security mode for the service binding is set to 'Transport'. The Silverlight client is created like this: BasicHttpBinding basicHttpBinding = new BasicHttpBinding(); basicHttpBinding.Security.Mode = BasicHttpSecurityMode.Transport; var serviceClient = new ImportServiceClient(basicHttpBinding, serviceAddress); The service address of course starts with https:// And the Silverlight client reports this error: The provided URI scheme 'https' is invalid; expected 'http'. Parameter name: via Remember, switching it back to HTTP (and setting security mode to 'TransportCredentialOnly') makes everything work again. Is the setup I want even supported? If so, how should it be configured?
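
    For comparison, a minimal sketch of what the service-side binding usually looks like for this combination (HTTPS plus Windows integrated auth); the binding name is an assumption and the endpoint has to reference it via bindingConfiguration:

        <bindings>
          <basicHttpBinding>
            <binding name="SecureWindowsBinding">
              <security mode="Transport">
                <transport clientCredentialType="Windows" />
              </security>
            </binding>
          </basicHttpBinding>
        </bindings>

    The "provided URI scheme 'https' is invalid; expected 'http'" error typically means the binding the proxy actually ends up using does not have Transport security set, so checking the generated ServiceReferences.ClientConfig (or making sure the hand-built Transport-mode binding is really the one paired with the https address) is a common first step.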

    Read the article

  • CURL - https - solaris

    - by Vincent
    All, I am receiving the following error when I use PHP to curl to an HTTPS site. Both PHP and the HTTPS site are hosted on Solaris. This error seems to occur intermittently, but frequently: error:80089077:lib(128):func(137):reason(119) This is the curl code I am using: $ch = curl_init(); $devnull = fopen('/tmp/cookie.txt', 'w'); curl_setopt($ch, CURLOPT_STDERR, $devnull); curl_setopt($ch, CURLOPT_POST, 1); curl_setopt($ch, CURLOPT_URL, $desturl); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, false); curl_setopt($ch, CURLOPT_HEADER, false); curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 0); curl_setopt($ch, CURLOPT_CONNECTTIMEOUT,800); curl_setopt($ch, CURLOPT_AUTOREFERER, true); curl_setopt($ch, CURLOPT_ENCODING, 'gzip,deflate'); curl_setopt($ch, CURLOPT_POSTFIELDS, $postdata); $retVal = curl_exec($ch); print_r(curl_error($ch)); curl_close($ch); if ($devnull) { fclose($devnull); } How can I fix this error? If not, is there an alternative to curl?

    Read the article

  • JQuery not working.

    - by Shantanu Gupta
    I am trying to implement jQuery in my web page but I have not been able to get it working. I have a master page where I added one script for a menu bar that is already using jQuery hosted by Google. This is coded in the master page itself: <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.0/jquery.min.js"></script> <script type="text/javascript"> $(document).ready(function(){ $('#ddmenu > li').bind('mouseover', ddmenu_open); $('#ddmenu > li').bind('mouseout', ddmenu_timer); }); document.onclick = ddmenu_close; </script> Now I want to use jQuery to set the CSS visibility property to true or false in a content page of the same master page: <script type="text/javascript"> $(document).ready(function(){ $("#lnkAddMore").click(function(){ alert(); }); }); </script> This HTML control is inside my UpdatePanel. I don't know why it is not working. The control looks like this: <input type="button" id="lnkAddMore" value="Add More" /> When I tried using it outside my UpdatePanel it ran successfully, but not inside the UpdatePanel, so I think there is a problem using it with an UpdatePanel.

    Read the article

  • Selecting a Java framework for large application w/ only ONE user

    - by Bijan
    I am building a large application that will be hosted on an AWS server. I'm trying to select a web framework to help with code organization, template design, and presentation aspects in general. Here are some points of consideration: It requires security/login/user authentication. I may add the ability in the future to allow more than just an administrator to access the web app, but it is not a public-facing website. AJAX support would be helpful. There are a couple of widgets that I don't want to recreate. One is a tree object, where the user can expand/contract items in the list, create new branches, and add/edit objects. This would be better off in some dynamic view rather than all done in ugly HTML. Generally, this is just to provide the application with a face for control, management, and monitoring. Having an easier time adding buttons, CSS, and AJAX widgets would be a great addition, though that's not the primary purpose. I'm considering: Wicket, Spring, Seam, GWT, Stripes, and the list goes on, as I'm sure you all know. I originally planned on using GWT, but then started to feel that GWT didn't cover my primary needs. I could be wrong about this, but there seems to be a lot of support for GWT AND Wicket/Spring. All of this getting lost in Java frameworks got me thinking outside the Java realm for a framework that would suit my needs and is a clear option, like: JRuby/Rails, Jython/Django, Groovy/Grails, Guice (just throwing this in there... I don't clearly understand the main purposes of all these frameworks, and it doesn't seem like dependency injection is something I need for a single-purpose application). Thanks as always. This community makes Googling for esoteric programming information an order of magnitude better.

    Read the article

  • How to perform two-way data binding of controls in a user control inside a FormView

    - by Sandor Drieënhuizen
    I'm trying to perform two-way data binding on the controls in my user control, which is hosted inside a FormView template. FormView: <asp:ObjectDataSource runat="server" ID="ObjectDataSource" TypeName="WebApplication1.Data" SelectMethod="GetItem" UpdateMethod="UpdateItem"> </asp:ObjectDataSource> <asp:FormView runat="server" ID="FormView" DataSourceID="ObjectDataSource"> <ItemTemplate> <uc:WebUserControl1 runat="server"></uc:WebUserControl1> </ItemTemplate> <EditItemTemplate> <uc:WebUserControl1 runat="server"></uc:WebUserControl1> </EditItemTemplate> </asp:FormView> User control: <%@ Control Language="C#" ... %> <asp:TextBox runat="server" ID="TitleTextBox" Text='<%# Bind("Title") %>'> </asp:TextBox> The binding works fine when the FormView is in View mode but when I switch to Edit mode, upon calling UpdateItem on the FormView, the bindings are lost. I know this because the FormView tries to call an update method on the ObjectDataSource that does not have an argument called 'Title'. I tried to solve this by implementing IBindableTemplate to load the controls that are inside my user control, directly into the templates (just like I had entered them declaratively like in the code above). However, when calling UpdateItem in edit mode, the container that gets passed into the ExtractValues method of the template, does not contain the TextBox anymore. It did in view mode! I have found some questions on SO that relate to this problem but they are rather dated and they don't provide any answers that helped me solve this problem. How do you think I could solve this problem? It seems to be such a simple requirement but apparently it's more like opening a can of worms...
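
    One workaround that is often suggested instead of a custom IBindableTemplate is to surface the inner controls as public properties on the user control and put the Bind expression on the user control declaration itself. A minimal sketch (the Title property name mirrors the field in the question; everything else is an assumption):

        // WebUserControl1.ascx.cs
        public partial class WebUserControl1 : System.Web.UI.UserControl
        {
            // Exposes the inner TextBox so the FormView can two-way bind it:
            //   <uc:WebUserControl1 runat="server" Title='<%# Bind("Title") %>' />
            public string Title
            {
                get { return TitleTextBox.Text; }
                set { TitleTextBox.Text = value; }
            }
        }

    With the Bind expression sitting directly on a child of the FormView template, the FormView's normal two-way extraction should pick the value up on UpdateItem without any custom template code.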

    Read the article

  • PHP PCRE differences on testing and hosting servers

    - by Gary Pearman
    Hi all, I've got the following regular expression that works fine on my testing server, but just returns an empty string on my hosted server. $text = preg_replace('~[^\\pL\d]+~u', $use, $text); Now I'm pretty sure this comes down to the hosting server version of PCRE not being compiled with Unicode property support enabled. The differences in the two versions are as follows: My server: PCRE version 7.8 2008-09-05 Compiled with UTF-8 support Unicode properties support Newline sequence is LF \R matches all Unicode newlines Internal link size = 2 POSIX malloc threshold = 10 Default match limit = 10000000 Default recursion depth limit = 10000000 Match recursion uses stack Hosting server: PCRE version 4.5 01-December-2003 Compiled with UTF-8 support Newline character is LF Internal link size = 2 POSIX malloc threshold = 10 Default match limit = 10000000 Match recursion uses stack Also note that the version on the hosting server (the same version PHP is compiled against) is pretty old. What confuses me though, is that pcretest fails on both servers from the command line with re> ~[^\\pL\d]+~u ** Unknown option 'u' although this regexp works fine when run from PHP on my server. So, I guess my questions are does the regular expression fail on the hosting server because of the lack of Unicode properties? Or is there something else that I'm missing? Thanks all, Gaz.

    Read the article

  • H.264 / FLV best practices for HTML

    - by Steve Murch
    I run a website with about 700 videos (and no, it's not porn; get your mind out of the gutter :-) ). The videos are currently in FLV format. We use the JWPlayer to render those videos, hosted on IIS6. Everything works just fine. As I understand it, H.264 (not FLV, and likely not OGG) is the emerging preferred HTML5 video standard. Today, the iPad really only respects H.264 or YouTube. Presumably, soon many more important browsers will follow Apple's lead and respect only the HTML5 <video> tag. OK, so I think I can figure out how to convert my existing videos into the proper H.264 format. There are various tools available, including ffmpeg.exe. I haven't tried it yet, but I don't think that's going to be a problem after fiddling with the codec settings. My question is more about the container itself, that is, planning a graceful transition for all users. What's the best-practice recommendation for rendering these videos? If I just use the HTML5 <video> tag, then presumably any browser that doesn't yet support HTML5 won't see the videos. And if I render them in Flash format via the JWPlayer or some other player, then they won't be playable on the iPad. Do I have to do ugly UserAgent detection here to figure out what to render? I know the JWPlayer supports H.264 media, but isn't the player itself a Flash component and therefore not playable on the iPad? Sorry if I'm not being clear, but I'm scratching my head over a graceful transition plan that will work for current browsers, the iPad and the upcoming HTML5 wave. I'm not a video expert, so any advice would be most welcome, thanks.
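
    The usual pattern for this transition is to nest the existing Flash player inside the <video> element as a fallback, so no UserAgent sniffing is needed. A rough sketch with assumed file names (the inner object is a placeholder for whatever JWPlayer embed markup is already in use):

        <video width="640" height="360" controls>
          <!-- H.264/MP4 source for HTML5-capable browsers and the iPad -->
          <source src="/media/clip.mp4" type="video/mp4" />
          <!-- Browsers that ignore <video> fall through to the existing Flash embed -->
          <object width="640" height="360" type="application/x-shockwave-flash" data="player.swf">
            <param name="movie" value="player.swf" />
            <param name="flashvars" value="file=/media/clip.flv" />
          </object>
        </video>

    HTML5-capable browsers and the iPad use the MP4 source, older browsers ignore the unknown element and render the nested Flash player, and nothing has to be decided server-side.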

    Read the article

  • JQuery UI Autocomplete showing as bullets

    - by awshepard
    The jQuery UI demo page for autocomplete (link) has a nice looking search box and drop-down with nice colors and highlights and such. When I implement it for myself, I end up with a bulleted list. How do I get my drop-down of suggestions to look like theirs? A few notes/code fragments: I'm working in .NET land, so I'm using the <asp:ScriptManager> tag with <asp:ScriptReference>s inside it to get the hosted jquery.min.js (1.4.2) and jquery-ui.min.js (1.8.1) files from Google. My input box is fairly simple: <div class='ui-widget'> <label for="terms">Term: </label> <input id="terms" class="ui-autocomplete-input"> </div> My autocomplete looks like: $("#terms").autocomplete({source:"GetAttributesJSON.aspx",minLength:2}); I get the correct data back, so that's not the issue. I just want fancy graphics. Any thoughts would be much appreciated.
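
    The demo's look comes from the jQuery UI theme stylesheet, which has to be referenced alongside the two scripts; a sketch using Google's hosted copy (the exact theme path and version here are an assumption and should be matched to the jquery-ui.min.js version in use):

        <link rel="stylesheet" type="text/css"
              href="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.1/themes/base/jquery-ui.css" />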

    Read the article

  • ASP.NET Applications Requests/Sec suddenly jumps to a value of about 70 million/sec. on 8 core web servers

    - by Subhrajit Roy
    We are doing performance testing of an ASP.NET web application with VSTS 2008. We start with 2000 users and slowly ramp up to 5000 users (reaching this user load around 2.5 hours after the tests start; after this we stay at this user load). The total test duration is about 6 hours. During these runs we have found that the counter Requests/Sec (under the category ASP.NET Applications) suddenly spikes to values of 36-72 million!!! This keeps happening intermittently, i.e. we see this issue once in every 3 performance runs that we give on the same application. In our testing environment we have 4 web servers, and interestingly enough we have found that this issue occurs only on the 8-core web servers. Summarizing... Issue: the counter Requests/Sec (under the category ASP.NET Applications) suddenly jumps to a value of about 70 million/sec on the 8-core web servers. This results in an increase in SQL Server connections opened by the application. Response time goes for a toss. Error rates also show similar behaviour. However, the counter ISAPI Extension Requests/sec does not show any abnormal increase. The graph of this counter almost overlaps with that of the counter Requests/Sec until the time the spike appears; when the spike appears, this counter (ISAPI Extension Requests/sec) actually shows a drop. Test settings: performance test run with Visual Studio Team System 2008; soak test run for 6 hours; maximum user load 5000 users (this load is attained at about 2.5 hours into the run and maintained for the remaining duration, i.e. for around 3.5 more hours). This issue is reproducible, though it happens intermittently (i.e. occurs once in three or four runs). Test environment: web site deployed on 4 web servers (Windows Server 2003); of these, 2 are 4-core machines and the remaining 2 are 8-core ones; .NET Framework 3.5 SP1 installed on all 4 web servers; application hosted on IIS 6.0 running in worker process isolation mode.

    Read the article

  • Optimal Serialization of Primitive Types

    - by Greg Dean
    We are beginning to roll out more and more WAN deployments of our product (.NET fat client with an IIS-hosted Remoting backend). Because of this, we are trying to reduce the size of the data on the wire. We have overridden the default serialization by implementing ISerializable (similar to this), and we are seeing anywhere from 12% to 50% gains. Most of our efforts focus on optimizing arrays of primitive types. I would like to know if anyone knows of any fancy way of serializing primitive types, beyond the obvious? For example, today we serialize an array of ints as follows: [4-bytes (array length)][4-bytes][4-bytes] Can anyone do significantly better? The most obvious example of a significant improvement, for boolean arrays, is putting 8 bools in each byte, which we already do. Note: saving 7 bits per bool may seem like a waste of time, but when you are dealing with large magnitudes of data (which we are), it adds up very fast. Note: we want to avoid general compression algorithms because of the latency associated with them; Remoting only supports buffered requests/responses (no chunked encoding). I realize there is a fine line between compression and optimal serialization, but our tests indicate we can afford very specific serialization optimizations at very little cost in latency, whereas reprocessing the entire buffered response into a new compressed buffer is too expensive.
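
    One of the standard tricks here is variable-length integer encoding: 7 bits of payload per byte with the high bit as a continuation flag (the same scheme BinaryWriter.Write7BitEncodedInt uses), which helps a lot when most values are small. A minimal sketch, not tied to any particular formatter:

        using System.IO;

        static class VarInt
        {
            // Writes value as 1-5 bytes: low 7 bits per byte, high bit set while more bytes follow.
            public static void Write(Stream s, uint value)
            {
                while (value >= 0x80)
                {
                    s.WriteByte((byte)(value | 0x80));
                    value >>= 7;
                }
                s.WriteByte((byte)value);
            }

            public static uint Read(Stream s)
            {
                uint value = 0;
                int shift = 0;
                int b;
                while ((b = s.ReadByte()) >= 0)
                {
                    value |= (uint)(b & 0x7F) << shift;
                    if ((b & 0x80) == 0) return value;
                    shift += 7;
                }
                throw new EndOfStreamException();
            }
        }

    For signed values, a zig-zag mapping ((n << 1) ^ (n >> 31)) keeps small negative numbers short, and for sorted ID arrays, writing deltas between consecutive elements before the varint step usually shrinks things further.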

    Read the article

  • singleton pattern in Windows Activation Service

    - by Joshua
    Hello. I have a few WCF services that are currently self-hosted in a very basic NT service. I want to expand my application to add provisioning of WCF services and updates, as well as isolation (I want each WCF service to be in its own AppDomain). These WCF services contain logic that needs to be run on a regular basis, pinging the database and getting information from external devices so that when a request comes in the data is readily available. I'm thinking about trying out Windows Activation Service, because I really like the provisioning and isolation that comes with a managed services infrastructure. If I didn't use WAS, I would essentially have to write the same code myself. From what I understand, though, WAS does not really support the model of having a service that is running before someone actually calls a method on the service. The article I read (MSDN Article Link) states: "That means in essence that out-of-the-box WAS hosting is not something that is really suited for sessionful or singleton services. It is more suitable for stateless per-call services." It does say "out of the box", so I'm wondering if anyone has used WAS to host a WCF service that really behaves more like an NT service (starting and stopping independently of having a method called on it). Or any other ideas would be great. I was planning on writing this infrastructure myself: hosting WCF services in a custom ServiceHost, putting their execution in a separate AppDomain, and allowing for provisioning of these services after initial installation, along with updates. However, I would MUCH MUCH MUCH rather not own that code if I don't have to. Thanks, Joshua
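
    As a hedged sketch of one approach people use once WAS has activated the application (it does not remove the need for a first message, or for some auto-start mechanism, to trigger that activation): a custom ServiceHostFactory referenced from the .svc file's Factory attribute that kicks off the recurring polling work as soon as the host opens. All names below are assumptions.

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Activation;
        using System.Threading;

        public class WarmStartHostFactory : ServiceHostFactory
        {
            protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
            {
                ServiceHost host = new ServiceHost(serviceType, baseAddresses);

                host.Opened += delegate
                {
                    // Hypothetical recurring work: ping the database and poll external
                    // devices every minute so data is warm before requests arrive.
                    Timer poller = new Timer(state => PollExternalSources(), null,
                                             TimeSpan.Zero, TimeSpan.FromMinutes(1));
                    host.Closed += delegate { poller.Dispose(); };
                };

                return host;
            }

            private static void PollExternalSources()
            {
                // placeholder for the background logic described in the question
            }
        }

    Where it is available, Windows Server AppFabric's auto-start feature is the more packaged route to the same effect, since it spins the application up without waiting for the first request.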

    Read the article

  • Nested URLs and Rewrite rules in Apache2

    - by Radha Krishna. S.
    Hi, I need some help with rewrite rules and nested URLs. I am using TikiWiki for my website and am in the process of setting up SEO-friendly URLs for my projects. Specifically, I have the following rewrite rule for www.example.com/projects to point to a page that lists all the projects hosted on example: RewriteRule ^Projects$ articles?type=Project [L] This works fine. Now I would like to point www.example.com/projects/project1 to a specific project. I have this rewrite rule: RewriteRule ^(Projects/Project1)$ tiki-read_article.php?articleId=6 This works, but only partially: the content is rendered, but the theme (images, CSS, etc.) all goes for a toss and the page comes out as plain unstyled text. I understand that this happens because the relative paths in the theme/CSS/images all use Projects as the base folder instead of the root of the website. I don't want to touch the CMS portion (changing the theme/CSS/image paths in the files), mostly for reasons of upgradability. Can someone help me understand and write a rule so that the above nested URL works? Regards, Radha

    Read the article

  • php cli script hangs with no messages

    - by julio
    Hi, I've written a PHP script that runs via SSH and nohup, meant to process records from a database and do stuff with them (e.g. process some images, update some rows). It works fine with small loads, up to maybe 10k records. I have some larger datasets that process around 40k records (not a lot, I realize, but it adds up to a lot of work when each record requires the download and processing of up to 50 images). The larger datasets can take days to process. Sometimes I'll see memory errors in my debug logs, which are clear enough, but sometimes the script just appears to "die" or go zombie on me. The tail of my debug log just stops with no error messages, the tail of the nohup log ends with no error, and the process is still showing in a ps list, looking like this: 26075 pts/0 S 745:01 /usr/bin/php ./import.php but no work is getting done. Can anyone give me some ideas on why a process would just quit? The obvious things (like a PHP script timeout and memory issues) are not a factor, as far as I can tell. Thanks for any tips. PS: this is hosted on a GoDaddy VDS (not my choice). I am sort of suspecting that GoDaddy has some kind of limits that might kick in on me despite what overrides I put in the code (such as set_time_limit(0);).

    Read the article

  • WCF - remote service without using IIS - base address?

    - by Mark Pim
    I'm trying to get my head around the addressing of WCF services. We have a client-server setup where the server occasionally (maybe once a day) needs to push data to each client. I want to have a lightweight WCF listener service on each client, hosted in an NT service, to receive that data. We already have such an NT service hosting some local WCF services for other tasks, so the overhead of this is minimal. Because of existing legacy code on the server, I believe the service needs to be exposed as ASMX and use basicHttpBinding to allow it to connect. Each client is registered on the server by the user (they need to configure them individually), so discovery is not the issue. My question is, how does the addressing work? I imagine the user entering the client's address on the server in the form http://0.0.0.0/MyService or even http://hostname/MyService If so, how do I configure the client service in its App.config? Do I use localhost? If not, then what is the recommended way of exposing the service to the server? Note: I don't want to host in IIS as that adds extra requirements to the hardware required for the client. The clients will almost certainly be located on LANs, not over the public internet.
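
    As a minimal self-hosting sketch (the service and contract names, port and path are all assumptions): the listener inside the NT service opens on the machine's own host name rather than localhost, and that same host name and port are what the user registers on the server.

        using System;
        using System.ServiceModel;

        public class ListenerBootstrap
        {
            private ServiceHost host;

            // Called from the NT service's OnStart.
            public void Start()
            {
                Uri baseAddress = new Uri(
                    "http://" + Environment.MachineName + ":8731/MyService");

                host = new ServiceHost(typeof(PushListenerService), baseAddress);
                host.AddServiceEndpoint(typeof(IPushListener), new BasicHttpBinding(), "");
                host.Open();
            }

            // Called from the NT service's OnStop.
            public void Stop()
            {
                if (host != null) host.Close();
            }
        }

    Depending on the account the NT service runs under, an HTTP namespace reservation for that URL (netsh http add urlacl on newer Windows versions) may also be needed before the listener can open.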

    Read the article

  • WCF Service in Azure with ClaimsIdentity over SSL

    - by Sunil Ramu
    Hello, I created a WCF service as a web role using Azure, and a client Windows application which refers to this service. The cloud service refers to a certificate which was created using the "Hands On Lab" given in Windows Identity Foundation. The web service is hosted in IIS and it works perfectly when executed. I've created a client Windows app which refers to this web service. Since a WIF claims identity is used, I have a ClaimsAuthorizationManager class, and also a policy class with a set of defined policies. The claims are set in the web.config file. When I execute the Windows app as the startup project, the app prompts for authentication, and when the account credentials are given as in the config file, it opens a new Windows CardSpace window and says "Incoming Policy Failed". When I close the window, the system throws an exception: The incoming policy could not be validated. For more information, please see the event log. Event log details: Incoming policy failed validation. No valid claim elements were found in the policy XML. Additional Information: at System.Environment.get_StackTrace() at Microsoft.InfoCards.Diagnostics.InfoCardTrace.BuildMessage(InfoCardBaseException ie) at Microsoft.InfoCards.Diagnostics.InfoCardTrace.TraceAndLogException(Exception e) at Microsoft.InfoCards.Diagnostics.InfoCardTrace.ThrowHelperError(Exception e) at Microsoft.InfoCards.InfoCardPolicy.Validate() at Microsoft.InfoCards.Request.PreProcessRequest() at Microsoft.InfoCards.ClientUIRequest.PreProcessRequest() at Microsoft.InfoCards.Request.DoProcessRequest(String& extendedMessage) at Microsoft.InfoCards.RequestFactory.ProcessNewRequest(Int32 parentRequestHandle, IntPtr rpcHandle, IntPtr inArgs, IntPtr& outArgs) Details: System Provider [ Name] CardSpace 3.0.0.0 EventID 267 [ Qualifiers] 49157 Level 2 Task 1 Keywords 0x80000000000000 EventRecordID 6996 Channel Application EventData No valid claim elements were found in the policy XML. Additional Information: at System.Environment.get_StackTrace() at Microsoft.InfoCards.Diagnostics.InfoCardTrace.BuildMessage(InfoCardBaseException ie) at Microsoft.InfoCards.Diagnostics.InfoCardTrace.TraceAndLogException(Exception e) at Microsoft.InfoCards.Diagnostics.InfoCardTrace.ThrowHelperError(Exception e) at Microsoft.InfoCards.InfoCardPolicy.Validate() at Microsoft.InfoCards.Request.PreProcessRequest() at Microsoft.InfoCards.ClientUIRequest.PreProcessRequest() at Microsoft.InfoCards.Request.DoProcessRequest(String& extendedMessage) at Microsoft.InfoCards.RequestFactory.ProcessNewRequest(Int32 parentRequestHandle, IntPtr rpcHandle, IntPtr inArgs, IntPtr& outArgs)

    Read the article

  • CheckForModifications failure in continuous integration using VisualSVN Server and CruiseControl.NET

    - by harun123
    I am using CruiseControl.NET for continuous integration. I've created a repository for my project using VisualSVN Server (which uses Windows authentication). Both servers are hosted on the same system (OS: Microsoft Windows Server 2003 SP2). When I force-build the project using CruiseControl.NET, "Failed task(s): Svn: CheckForModifications" is shown as the message. When I checked the build report, it says the following: BUILD EXCEPTION Error Message: ThoughtWorks.CruiseControl.Core.CruiseControlException: Source control operation failed: svn: OPTIONS of 'https://sp-ci.sbsnetwork.local:8443/svn/IntranetPortal/Source': **Server certificate verification failed: issuer is not trusted** (https://sp-ci.sbsnetwork.local:8443). Process command: C:\Program Files\VisualSVN Server\bin\svn.exe log **sameUrlAbove** -r "{2010-04-29T08:35:26Z}:{2010-04-29T09:04:02Z}" --verbose --xml --username ccnetadmin --password cruise --non-interactive --no-auth-cache at ThoughtWorks.CruiseControl.Core.Sourcecontrol.ProcessSourceControl.Execute(ProcessInfo processInfo) at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Svn.GetModifications (IIntegrationResult from, IIntegrationResult to) at ThoughtWorks.CruiseControl.Core.Sourcecontrol.QuietPeriod.GetModifications(ISourceControl sourceControl, IIntegrationResult lastBuild, IIntegrationResult thisBuild) at ThoughtWorks.CruiseControl.Core.IntegrationRunner.GetModifications(IIntegrationResult from, IIntegrationResult to) at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Integrate(IntegrationRequest request) My sourcecontrol node in ccnet.config is as shown below: <sourcecontrol type="svn"> <executable>C:\Program Files\VisualSVN Server\bin\svn.exe</executable> <trunkUrl> check out url </trunkUrl> <workingDirectory> C:\ProjectWorkingDirectories\IntranetPortal\Source </workingDirectory> <username> ccnetadmin </username> <password> cruise </password> </sourcecontrol> Can anyone suggest how to avoid this error?

    Read the article

  • Unknown Exception on trying to initialize the web service stub created by Axis C++

    - by Harsha Reddy
    Hi, I am trying out the sample calculator program that ships with Axis C++. I am mainly interested in the client side, so I used the WSDL to create the stubs, and my main is pretty much the same as given in the sample. However, on executing the call Calculator ws(endpoint) I get an unknown exception: "First-chance exception at 0x7c0024b9 in CalculatorClient.exe: 0xC0000005: Access violation reading location 0x00000000. First-chance exception at 0x7c812afb in CalculatorClient.exe: Microsoft C++ exception: [rethrow] at memory location 0x00000000.. Unhandled exception at 0x7c0024b9 in CalculatorClient.exe: 0xC0000005: Access violation reading location 0x00000000." and the exception-causing code is Calculator::Calculator(const char* pchEndpointUri, AXIS_PROTOCOL_TYPE eProtocol) :Stub(pchEndpointUri, eProtocol) { } I had earlier tried to run a web service using Axis C++ but had received the same error. At that time my web service was a Java WS on WAS. Then I later tried the calculator client (but this time I did not have any server hosting the WS, as I just wanted to check whether the client could initialize). Is the problem caused by the web service not being hosted on Apache in C++ (though I highly doubt it)? Any help would be appreciated. Thanks, Harsha

    Read the article

  • Vimeo Desktop App OAuth

    - by Barry
    Hi guys, I'm currently having massive trouble with Vimeo's OAuth implementation and my desktop app. My program does the following correctly: 1. Requests an unauthorized request token with my key and secret, which returns a token and a token secret. 2. Generates a URL for the user to go to using the token, which then shows our application's name and allows the user to authorize us to use his/her account. It then shows a verifier, which the user copies and puts into our app. The problem is the third step, actually exchanging the tokens for the access tokens. Basically, every time we try to get them we get an "Invalid / expired token - The oauth_token passed was either not valid or has expired" error. I looked at the documentation and there's supposed to be a callback to a server when deployed like that, which gives the user an authorized token, but as I'm developing a desktop app we can't do this. So I assume the token retrieved in step 1 is valid for this step (actually it seems it is: http://vimeo.com/forums/topic:22605). So I'm wondering now, am I missing something here on my actual Vimeo application account? Is it treating it as a web-hosted app with callbacks? All the elements are there for this to work, and I've used this same component to create a Twitter OAuth login in exactly the same way and it was fine. Thanks in advance, Barry

    Read the article
