Search Results

Search found 20761 results on 831 pages for 'chef client'.

Page 373/831 | < Previous Page | 369 370 371 372 373 374 375 376 377 378 379 380  | Next Page >

  • Model validation with enumerations

    - by Robert Koritnik
    I'm using DataAnnotations attributes to validate my model objects. My model class looks similar to this:

    ```csharp
    public class MyModel
    {
        [Required]
        public string Title { get; set; }

        [Required]
        public List<User> Editors { get; set; }
    }

    public class User
    {
        public int Id { get; set; }

        [Required]
        public string FullName { get; set; }

        [Required]
        [DataType(DataType.Email)]
        public string Email { get; set; }
    }
    ```

    My controller action looks like:

    ```csharp
    public ActionResult NewItem(MyModel data)
    {
        //...
    }
    ```

    The user is presented with a view whose form has:

    - a text box with a dummy name where users enter editors' names. For each user they enter, a client script coupled with Ajax creates an <input type="hidden" name="data.Editors[0].Id" value="userId" /> (the enumeration index is therefore not always 0 as written here), so the default model binder is able to consume and bind the form without any problems;
    - a text box where users enter the title.

    Since I'm using ASP.NET MVC 2 RTM, which does model validation instead of input validation, I don't know how to avoid validation errors. The thing is, I have to use the Bind attribute on my controller action and provide either a white list or a black list of properties. It's always better practice to provide a white list; it's also more future-proof.

    The problem: my form works fine, but I get validation errors about the users' FullName and Email properties, since they are not provided. I also shouldn't feed them to the client (via Ajax when the user enters user data), because email is personal contact data and is not shared between users. If there was just a single user reference on MyModel I would write:

    ```csharp
    [Bind(Include = "Title, Editor.Id")]
    ```

    But I have an enumeration of them. How do I provide a Bind white list that works with my model?
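
    As far as I know, the Include list only filters top-level property names, so it cannot whitelist individual Editors[n].Id entries. One workaround, sketched below under the assumption of ASP.NET MVC 2 (not from the original post), is to bind the whole Editors collection and then clear the ModelState errors for the User fields that are never posted:

    ```csharp
    // Sketch only: requires System.Linq; bind Editors as a whole, then drop errors
    // for the FullName/Email sub-properties that the form never sends.
    [HttpPost]
    public ActionResult NewItem([Bind(Include = "Title, Editors")] MyModel data)
    {
        foreach (var key in ModelState.Keys
            .Where(k => k.EndsWith(".FullName") || k.EndsWith(".Email"))
            .ToList())
        {
            ModelState[key].Errors.Clear();
        }

        if (!ModelState.IsValid)
            return View(data);

        // ... persist data ...
        return RedirectToAction("Index");
    }
    ```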

    Read the article

  • Getting the CVE ID Property of an update from WSUS API via Powershell

    - by thebitsandthebytes
    I am writing a script in PowerShell to get the update information from each computer and correlate the information with another system which identifies updates by CVE ID. I have discovered that there is a "CVEIDs" property for an update in WSUS, which is documented on MSDN, but I have no idea how to access the property. Retrieving the CVE ID from WSUS is the key to this script, so I am hoping someone out there can help! Here is the property that I am having difficulty accessing:

    IUpdate2::CveIDs Property - http://msdn.microsoft.com/en-us/library/aa386102(VS.85).aspx

    According to this, the IUnknown::QueryInterface method is needed to obtain IUpdate2 - http://msdn.microsoft.com/en-us/library/ee917057(PROT.10).aspx:

    "An IUpdate instance can be retrieved by calling the IUpdateCollection::Item (opnum 8) (section 3.22.4.1) method. The client can use the IUnknown::QueryInterface method to then obtain an IUpdate2, IUpdate3, IUpdate4, or IUpdate5 interface. Additionally, if the update is a driver, the client can use the IUnknown::QueryInterface method to obtain an IWindowsDriverUpdate, IWindowsDriverUpdate2, IWindowsDriverUpdate3, IWindowsDriverUpdate4, or IWindowsDriverUpdate5 interface."

    Here is a skeleton of my code:

    ```powershell
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.UpdateServices.Administration") | Out-Null

    if (!$wsus) {
        # Returns an object that implements IUpdateServer
        $wsus = [Microsoft.UpdateServices.Administration.AdminProxy]::GetUpdateServer($server, $false, $port)
    }

    $computerScope = New-Object Microsoft.UpdateServices.Administration.ComputerTargetScope
    $updateScope = New-Object Microsoft.UpdateServices.Administration.UpdateScope
    $updateScope.UpdateSources = [Microsoft.UpdateServices.Administration.UpdateSources]::MicrosoftUpdate
    $wsusMachines = $wsus.GetComputerTargets($computerScope)

    # For each machine in WSUS, write the full domain name
    $wsusMachines | ForEach-Object {
        Write-Host $_.FullDomainName
        $updates = $_.GetUpdateInstallationInfoPerUpdate($updateScope)

        # For each update on each machine, write the update title, installation state and security bulletin
        $updates | ForEach-Object {
            $update = $wsus.GetUpdate($_.UpdateId)  # Returns an object that implements Microsoft.UpdateServices.Administration.IUpdate
            $updateTitle = $update.Title | Write-Host
            $updateInstallationState = $_.UpdateInstallationState | Write-Host
            $updateSecurityBulletin = $update.SecurityBulletins | Write-Host
            $updateCveIds = $update.CveIDs  # ERROR: Property 'CveIDs' belongs to IUpdate2, not IUpdate
        }
    }
    ```

    Read the article

  • why does Integrated Windows Authentication fail when clients access off the network

    - by Bryan
    My background is not with web applications, so this problem is hard for me to explain easily. First I'll try to describe the setup.

    Client setup:
    - The only browser affected is IE 6-8 (Firefox, Chrome, Opera, and Safari all work fine).
    - A user will try to access our web application from a company laptop that is not connected to our network.
    - This machine will be a member of our workgroup and have the company DNS listed as a trusted intranet site (to which the application in question would be a member).
    - The security logon mode is set to "Automatic logon only in Intranet zone", and IWA authentication is enabled in the client's browser.

    Server setup:
    - Windows Server 2003 fp2.
    - The application first redirects to an authorization ASP page which has anonymous access disabled and IWA enabled in IIS.

    What should happen is that, since the client is not currently on the network, when this page is called it should prompt the user for network credentials. But with IE, instead of prompting, the user gets a "page cannot be displayed" error because IIS is denying access to the ASP page. If the company DNS is removed from the trusted intranet site list then it prompts correctly, but that disables single sign-on the next time that computer is connected to the network or VPN.

    My assumption is that since IE uses IWA and the site is listed as an internal site, when no network is found IE just sends nulls to the server attempting to authenticate, which are swiftly punted back. Other browsers do not have security zones, so when network credentials are not present the server prompts for them.

    Is there a way to get around this so that our clients can keep the company DNS in the intranet zone but still have the server prompt for credentials when not on the network? Any attempt to allow anonymous access on the ASP page, as far as I know, will cause AUTH_USER to return null and again break SSO. I realize this is slightly rambling, so I will do my best to clarify any questions you guys might have. Thanks in advance.

    Read the article

  • Update a single field from a single entity with ria-services

    - by TimothyP
    There are situations where I only want to update a specific field of a single entity in the database. I loaded the entities of that type into my Silverlight application, and I know they are constantly changing on the server... but there is one field which has to be set by the Silverlight client... the server will only read it. How can I just send the new data for that field to the server?

    Example: an entity called "TextField". I have a list of TextFields loaded in the Silverlight application, and every now and then the user will update the Preload (string) property of an entity, and that has to go back to the server without changing anything else on the server.

    I tried adding a simple SetPreloadText(...) method to the DomainService, but that just makes Silverlight crash with some odd error code.

    Is there a way to do this? Am I working against the idea of Silverlight here? I really don't want to send the entire object back, because I know that at any given time the version on the client will most likely be out of date (which is OK for this specific application).
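
    One option to sketch, assuming WCF RIA Services is in use (the [Invoke] attribute, ObjectContext and TextFields names below are assumptions that vary with the RIA Services version and data layer, not from the original post): expose an invoke operation that takes only the key and the new value, so the client never sends the entity at all.

    ```csharp
    // Hypothetical domain service operation: updates a single column on the server.
    // ObjectContext/TextFields/Preload are illustrative names for an EF-backed service.
    [Invoke]
    public void SetPreloadText(int textFieldId, string preloadText)
    {
        TextField entity = this.ObjectContext.TextFields.First(t => t.Id == textFieldId);
        entity.Preload = preloadText;
        this.ObjectContext.SaveChanges();
    }
    ```

    On the Silverlight side the generated proxy would expose this as something like context.SetPreloadText(id, text), returning an InvokeOperation whose HasError can be inspected in a callback. Parameter and return types of invoke operations are restricted to types RIA Services can serialize, which is one common cause of the kind of crash described above.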

    Read the article

  • BN_hex2bn magically segfaults in OpenSSL

    - by xunil154
    Greetings, this is my first post on Stack Overflow, and I'm sorry if it's a bit long. I'm trying to build a handshake protocol for my own project and am having issues with the server converting the client's RSA public key to a bignum. It works in my client code, but the server segfaults when attempting to convert the hex value of the client's public RSA key to a bignum. I have already checked that there is no garbage before or after the RSA data, and have looked online, but I'm stuck.

    Header segment:

    ```cpp
    typedef struct KEYS {
        RSA  *serv;
        char *serv_pub;
        int   pub_size;
        RSA  *clnt;
    } KEYS;

    KEYS keys;
    ```

    Initializing function:

    ```cpp
    // Generates and validates the server's key
    /* code for generating server RSA left out, it's working */

    // Set client exponent
    keys.clnt = 0;
    keys.clnt = RSA_new();
    BN_dec2bn(&keys.clnt->e, RSA_E_S);  // RSA_E_S contains the public exponent
    ```

    Problem code (in Network::server_handshake):

    ```cpp
    // *Received an encrypted message from the network and decrypted it into 'buffer' (1024 bytes long)*
    cout << "Assigning clients RSA" << endl;
    // I have verified that 'buffer' contains the proper key
    if (BN_hex2bn(&keys.clnt->n, buffer) < 0) {
        Error("ERROR reading server RSA");
    }
    cout << "clients RSA has been assigned" << endl;
    ```

    The program segfaults at BN_hex2bn(&keys.clnt->n, buffer) with the error (valgrind output):

        Invalid read of size 8
           at 0x50DBF9F: BN_hex2bn (in /usr/lib/libcrypto.so.0.9.8)
           by 0x40F23E: Network::server_handshake() (Network.cpp:177)
           by 0x40EF42: Network::startNet() (Network.cpp:126)
           by 0x403C38: main (server.cpp:51)
        Address 0x20 is not stack'd, malloc'd or (recently) free'd

        Process terminating with default action of signal 11 (SIGSEGV)
        Access not within mapped region at address 0x20
           at 0x50DBF9F: BN_hex2bn (in /usr/lib/libcrypto.so.0.9.8)

    And I don't know why. I'm using the exact same code in the client program, and it works just fine. Any input is greatly appreciated!

    Read the article

  • Amazon EC2 RSA key stopped authenticating - Permission denied (publickey)

    - by shedd
    Authenticating to our Ubuntu EC2 instance worked fine until a little while ago. All of a sudden, the key is being rejected. When we create a new instance with the keypair, we're able to connect to the instance perfectly, so it appears to be an issue with the existing instance. Port 22 is open. Any suggestions on what to look at from a configuration standpoint so we can fix this? Any thoughts on how we can get into the box? Here is the SSH debug output. Is there anything obviously amiss? Thanks so much! $ ssh -v -i ~/zzz.pem ubuntu@###.###.###.### OpenSSH_5.2p1, OpenSSL 0.9.8l 5 Nov 2009 debug1: Reading configuration data /etc/ssh_config debug1: Connecting to ###.###.###.### [###.###.###.###] port 22. debug1: Connection established. debug1: identity file zzz.pem type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-6ubuntu2 debug1: match: OpenSSH_5.1p1 Debian-6ubuntu2 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.2 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host '###.###.###.###' is known and matches the RSA host key. debug1: Found key in /zzz/.ssh/known_hosts:18 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Offering public key: /zzz/.ssh/id_rsa debug1: Authentications that can continue: publickey debug1: Offering public key: zzz.txt debug1: Authentications that can continue: publickey debug1: Trying private key: zzz.pem debug1: read PEM private key done: type RSA debug1: Authentications that can continue: publickey debug1: No more authentication methods to try. Permission denied (publickey).

    Read the article

  • Enum "does not have a no-arg default constructor" with Jaxb and cxf

    - by Dave
    A client is having an issue running java2ws on some of their code, which uses & extends classes that are consumed from my SOAP web services. Confused yet? :)

    I'm exposing a SOAP web service (JBoss 5, Java 6). Someone is consuming that web service with Axis 1 and creating a jar out of it with the data types and client stubs. They are then defining their own type, which extends one of my types. My type contains an enumeration.

    ```java
    class MyParent {
        private MyEnumType myEnum;
        // getters, setters for myEnum;
    }

    class TheirChild extends MyParent {
        ...
    }
    ```

    When they run java2ws on their code (which extends my class), they get:

        Caused by: com.sun.xml.bind.v2.runtime.IllegalAnnotationsException: 2 counts of IllegalAnnotationExceptions
        net.foo.bar.MyEnumType does not have a no-arg default constructor.
        this problem is related to the following location:
        at net.foo.bar.MyEnumType
        at public net.foo.bar.MyEnumType net.foo.bar.MyParent.getMyEnum()

    The enum I've defined is below. This is not how it comes out after being consumed, but it's how I have it defined on the app server:

    ```java
    @XmlType(name = "MyEnumType")
    @XmlEnum
    public enum MyEnumType {

        Val1("Val1"),
        Val2("Val2");

        private final String value;

        MyEnumType(String v) {
            value = v;
        }

        public String value() {
            return value;
        }

        public static MyEnumType fromValue(String v) {
            if (v == null || v.length() == 0) {
                return null;
            }
            if (v.equals("Val1")) {
                return MyEnumType.Val1;
            }
            if (v.equals("Val2")) {
                return MyEnumType.Val2;
            }
            return null;
        }
    }
    ```

    I've seen things online and other posts (like this one) regarding JAXB's inability to handle Lists and the like, but I'm baffled about my enum. I'm pretty sure you can't have a default constructor for an enum (well, at least a public no-arg constructor; Java yells at me when I try), so I'm not sure what makes this error possible. Any ideas?

    Also, the "2 counts of IllegalAnnotationsExceptions" may be because my code actually has two enums that are written similarly, but I left them out of this example for brevity.

    Read the article

  • wcf configuration for this code

    - by user208081
    I have the following code and would like to move much of it into WCF configuration settings. As you can see, the code is using WSHttpBinding. I appreciate any help on this.

    ```csharp
    try
    {
        // Provides a unique network address that a client uses to communicate with a service endpoint.
        EndpointAddress endpointAddress = new EndpointAddress(new Uri(FAXServiceSettings.Default.FAXReceiveServiceURL));

        // Specify the protocols, transports, and message encoders used for communication between the client and the service.
        // WSHttpBinding represents an interoperable binding that supports distributed transactions and secure, reliable sessions.
        // Specifically, SOAP message security is enabled for secure transmission of the message content.
        WSHttpBinding clientBinding = new WSHttpBinding(SecurityMode.Message);
        clientBinding.OpenTimeout = TimeSpan.FromSeconds(FAXServiceSettings.Default.FAXReceiveServiceOpenTimeoutInSeconds);
        clientBinding.SendTimeout = TimeSpan.FromSeconds(FAXServiceSettings.Default.FAXReceiveServiceOpenTimeoutInSeconds);

        // Use the ChannelFactory to enable the creation of channels to the binding and endpoint.
        using (ChannelFactory<IReceiveFAX> channelFactory = new ChannelFactory<IReceiveFAX>(clientBinding, endpointAddress))
        {
            // Creates a channel of a specified type to a specified endpoint address.
            IReceiveFAX channel = channelFactory.CreateChannel();
            if (channel != null)
            {
                try
                {
                    // Submit the FaxSchedule instance for routing.
                    channel.SubmitFAXForRouting(CreateNewFaxScheduleContainerInstance());

                    // Explicitly close the channel using the IClientChannel interface.
                    CloseChannel(channel as IClientChannel);
                }
                finally
                {
                    // Explicitly dispose of the channel using the IDisposable interface.
                    DisposeOfChannel(channel as IDisposable);
                    channel = null;
                }
            }

            // This method causes a CommunicationObject to gracefully transition from any state, other than the Closed state,
            // into the Closed state. The Close method allows any unfinished work to be completed before returning.
            // For example, finish sending any buffered messages.
            channelFactory.Close();
        }
    }
    catch
    {
        throw;
    }
    ```

    Pratik
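
    For illustration, one way to push most of this into configuration (a sketch; the endpoint name and config layout are assumptions, not from the original post): declare the wsHttpBinding with its SecurityMode.Message and open/send timeouts under <system.serviceModel>/<bindings>, and the endpoint (address, binding, contract) under <client> in app.config or web.config. The calling code can then build the factory from the endpoint configuration name alone:

    ```csharp
    // Assumes a matching client endpoint exists in config, e.g.
    // <endpoint name="ReceiveFAXEndpoint" address="..." binding="wsHttpBinding"
    //           bindingConfiguration="..." contract="...IReceiveFAX" />
    using (var channelFactory = new ChannelFactory<IReceiveFAX>("ReceiveFAXEndpoint"))
    {
        IReceiveFAX channel = channelFactory.CreateChannel();
        try
        {
            channel.SubmitFAXForRouting(CreateNewFaxScheduleContainerInstance());
            ((IClientChannel)channel).Close();
        }
        finally
        {
            ((IDisposable)channel).Dispose();
        }
        channelFactory.Close();
    }
    ```

    The OpenTimeout, SendTimeout and security mode then live on the named <binding> element instead of in code.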

    Read the article

  • Error in Implementing WS Security web service in WebLogic 10.3

    - by Chris
    Hi, I am trying to develop a JAX WS web service with WS-Security features in WebLogic 10.3. I have used the ant tasks WSDLC, JWSC and ClientGen to generate skeleton/stub for this web service. I have two keystores namely WSIdentity.jks and WSTrust.jks which contains the keys and certificates. One of the alias of WSIdentity.jks is "ws02p". The test client has the following code to invoke the web service: SecureSimpleService service = new SecureSimpleService(); SecureSimplePortType port = service.getSecureSimplePortType(); List credProviders = new ArrayList(); CredentialProvider cp = new ClientBSTCredentialProvider( "E:\\workspace\\SecureServiceWL103\\keystores\\WSIdentity.jks", "webservice", "ws01p","webservice"); credProviders.add(cp); string endpointURL="http://localhost:7001/SecureSimpleService/SecureSimpleService"; BindingProvider bp = (BindingProvider)port; Map requestContext = bp.getRequestContext(); requestContext.put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, endpointURL); requestContext.put(WSSecurityContext.CREDENTIAL_PROVIDER_LIST,credProviders); requestContext.put(WSSecurityContext.TRUST_MANAGER, new TrustManager() { public boolean certificateCallback(X509Certificate[] chain, int validateErr) { // Put some custom validation code in here. // Just return true for now return true; } }); SignResponse resp1 = new SignResponse(); resp1 = port.echoSignOnlyMessage("hello sign"); System.out.println("Result: " + resp1.getMessage()); When I trying to invoke this web servcie using this test client I am getting the error "Invalid signing policy" with the following stack trace: *[java] weblogic.wsee.security.wss.policy.SecurityPolicyArchitectureException: Invalid signing policy [java] at weblogic.wsee.security.wss.plan.SecurityPolicyBlueprintDesigner.verifyPolicy(SecurityPolicyBlueprintDesigner.java:786) [java] at weblogic.wsee.security.wss.plan.SecurityPolicyBlueprintDesigner.designOutboundBlueprint(SecurityPolicyBlueprintDesigner.java:136) Am I missing any configuration settings in WebLogic admin console or is it do with something else. Thanks in advance.

    Read the article

  • Japanese text garbled while passing to a http restlet service

    - by satish-gunuputi
    Hi , I have a perl client which is calling a http restlet service (put method). Some of the parameters in this call contain japanese text. when I printed the contents of these request parameters in the restlet service I found these chars garbled ! This is my PERL client code: my %request_headers = ( 'DocumentName' => $document_name, --> This name is a JAPANESE String 'DocumentDescription' => 'Test Japanese Chars', 'content-length' => 200, 'Content-Type' => 'application/octet-stream; charset=utf-8', 'User-Agent' => "JPCharTester", ); $s-write_request('PUT', '/test-document/TEST/TEST_DOCUMENT' , %request_headers, $content); in this call both the values of $context and $document_name are JAPANESE Strings. But ONLY the document_name is received as garbled in my backend service. Here goes the Service code: String URL_ENCODING = "UTF-8"; String documentName = requestHeaders.getFirstValue("DocumentName"); System.out.println("Encoded Document Name : "+documentName+" <<<"); --> documentName is garbled here try { documentName = URLDecoder.decode(documentName, URL_ENCODING); System.out.println(>>> Decoded Document Name : "+documentName+" <<<"); --> documentName is garbled here } catch (java.io.UnsupportedEncodingException ex) { throwException(ex.getMessage(), Status.SERVER_ERROR_INTERNAL, ex); } both the above log statements printed GARBLED TEXT !! Can someone tell me what is the mistake I am doing and how to fix this ? Thanks in advance for your help. Regards, Satish.

    Read the article

  • Getting a new session key after Facebook offline_access permission

    - by Richard
    I have a mobile application that I'm using with Facebook connect. I'm having trouble getting an offline_access session key after a user has granted extended permissions. Here's the user flow: User goes to my site for the first time I send them to m.facebook.com/tos.php? and pass my api key and secret The user logs in using Facebook connect Facebook returns them to a page in my site, mysite/login-success.php with an auth_token in the query string On mysite/login-success.php I instantiate the FB api client and check to see if I already have an offline_access session key for them: $facebook = new Facebook($appapikey, $appsecret); If they haven't already provided offline_access FB gives me a temporary session key I need to get offline_access permission from the user so I forward them on to www.facebook.com/connect/prompt_permissions.php? and pass offline_access in the querystring. The user authorizes offline_access and get forwarded to mysite/permissions-success.php The problem I'm having is that after instantiating the API client on permissions-success.php the session key I have is still the temporary session key, not a new offline_access session key. The only way I've found to get the offline_access key is to delete all cookies for the user and then have them login again using Facebook connect. A fairly poor user experience. Can anyone shed some light on how to use the Facebook api to generate a new session key even if one already exists (in my case a temporary session key)?

    Read the article

  • validating wsdl/schema using cxf

    - by SGB
    I am having a hard time getting CXF to validate an XML request that my service creates for a 3rd party. My project uses Maven. Here is my project structure:

    Main module:
    - Sub-module 1 = Application
    - Sub-module 2 = Interfaces

    In Interfaces, inside src/main/resources, I have my WSDL and XSD:

        src/main/resources
            + mywsdl.wsdl
            + myschema.xsd

    The Interfaces sub-module is listed as a dependency in the Application sub-module. Inside the Application sub-module, there is a CXF configuration file in src/main/resources:

    ```xml
    <jaxws:client name="{myTargerNameSpaceName}port" createdFromAPI="true">
        <jaxws:properties>
            <entry key="schema-validation-enabled" value="true" />
        </jaxws:properties>
    </jaxws:client>
    ```

    AND:

    ```xml
    <jaxws:endpoint name="{myTargetNameSpaceName}port" wsdlLocation="/mywsdl.wsdl" createdFromAPI="true">
        <jaxws:properties>
            <entry key="schema-validation-enabled" value="true" />
        </jaxws:properties>
    </jaxws:endpoint>
    ```

    I tried changing name="{myTargetNameSpaceName}port" to name="{myEndPointName}port", but to no avail. My application works, but it just does not validate the XML I am producing that has to be consumed by a 3rd-party application. I would like to get the validation working, so that any request I send would be a valid one. Any suggestions?

    Read the article

  • Sorting tasks to assign

    - by Diego
    I've got a problem and I don't know where to start, so I'd really appreciate some help.

    The problem: I have several tasks T that must be done in D days by just one employee (let's forget about using several resources right now). Each task can only be done at certain times (not all tasks can be done at any time). For example, if my employee starts working at 8 o'clock and one task is "call a client", maybe the client's office only opens at 9 o'clock. Each task also has a (roughly estimated) duration. It is assumed that the D days are enough to do all the tasks.

    I have to order the tasks for the employee, e.g.: on Monday at 8:00 do task 7, then at 9:30 start task 2 (in this example task 7's duration would be an hour and a half).

    Thanks for the help!
    Diego

    PS: If someone has a way to do this that is not an algorithm, never mind, please answer and I'll manage to work out the algorithm. I just don't know how to approach the problem.

    Edit: Would Project be useful?

    Edit 2: Task/job dependencies are NOT required.
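
    A hedged sketch of one greedy starting point (all type and member names below are illustrative, not from the post): repeatedly pick, among the tasks that can still be finished within both their time window and the working day, the one whose window closes earliest. For hard constraints or optimality a constraint programming / MILP solver would be the more robust route.

    ```csharp
    // Greedy "earliest window-end first" sketch for one employee, one window per task.
    using System;
    using System.Collections.Generic;
    using System.Linq;

    class TaskItem
    {
        public string Name;
        public TimeSpan WindowStart;  // earliest time of day the task may start
        public TimeSpan WindowEnd;    // latest time of day the task must be finished by
        public TimeSpan Duration;     // estimated duration
    }

    static class Planner
    {
        public static List<(TaskItem Task, TimeSpan Start)> PlanDay(
            IEnumerable<TaskItem> tasks, TimeSpan dayStart, TimeSpan dayEnd)
        {
            var plan = new List<(TaskItem, TimeSpan)>();
            var pending = tasks.ToList();
            var now = dayStart;

            while (pending.Count > 0)
            {
                // Tasks that can still be started (possibly after waiting for their
                // window to open) and finished inside their window and the working day.
                var feasible = pending
                    .Select(t => new { Task = t, Start = t.WindowStart > now ? t.WindowStart : now })
                    .Where(x => x.Start + x.Task.Duration <= x.Task.WindowEnd
                             && x.Start + x.Task.Duration <= dayEnd)
                    .OrderBy(x => x.Task.WindowEnd)   // most urgent window first
                    .ThenBy(x => x.Start)
                    .ToList();

                if (feasible.Count == 0) break;       // nothing else fits today

                var pick = feasible[0];
                plan.Add((pick.Task, pick.Start));
                now = pick.Start + pick.Task.Duration;
                pending.Remove(pick.Task);
            }
            return plan;
        }
    }
    ```

    The D days could then be handled by running PlanDay once per day over the still-unscheduled tasks.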

    Read the article

  • Response.TransmitFile and delete it after transmission

    - by Radhi
    Hi, I have to implement GEDCOM export on my site. My .NET code creates a file on the server when "export to GEDCOM" is clicked. I then need to download it from the server to the client, and the user should be asked where to save the file (i.e. a save dialog is required). After it is downloaded, I want to delete the file from the server.

    I got this code to transmit the file from server to client:

    ```csharp
    Response.ContentType = "text/xml";
    Response.AppendHeader("Content-Disposition", "attachment; filename=" + FileName);
    Response.TransmitFile(Server.MapPath("~/" + FileName));
    Response.End();
    ```

    from this LINK, but I am not able to delete the file after this code, because Response.End ends the response, so whatever code is written after that line is not executed. If I put the code to delete the file before Response.End(), then the file is not transmitted and I get an error.

    So, can anybody please provide a solution for this?

    -Thanks in advance
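
    One workaround sketch (assuming the exported GEDCOM file is small enough to buffer in memory, which is not stated in the post): read the bytes first, delete the temp file, and only then write the response, so nothing has to run after Response.End().

    ```csharp
    // Sketch: buffer the export, delete the temp file, then send the bytes.
    string path = Server.MapPath("~/" + FileName);
    byte[] data = System.IO.File.ReadAllBytes(path);
    System.IO.File.Delete(path);                      // safe: the bytes are already in memory

    Response.Clear();
    Response.ContentType = "text/xml";
    Response.AppendHeader("Content-Disposition", "attachment; filename=" + FileName);
    Response.BinaryWrite(data);
    Response.End();                                   // nothing else needs to run afterwards
    ```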

    Read the article

  • Web service SSL handshake fails in production environment unless SSL debugging enabled

    - by JST
    Scenario: calling a client web service over SSL (https) with mutual SSL authentication. Different service endpoint URLs and certs (both keystore and truststore) for test vs. production environments. Both test and production environments run tomcat / JBoss clustered. Production environment has load balancing / BigIP, runs Blade and non-Blade machines. Truststore is set (using -Djavax.net.ssl.trustStore=value) at startup. Keystore is set using System.setProperty("javax.net.ssl.keyStore", "value") in Java code. Web service call made using Axis2. All works fine in test environment, but when we moved to production environment (6 servers), it appears certs are not being forwarded for the handshake. Here's what we've done: in test environment, handshake using test versions of certs has been working all along, with no ssl debugging enabled confirmed in test environment that handshake with client production endpoint succeeds (production certs, both ours and theirs, are fine) -- this was done using -Djavax.net.debug=handshake,ssl confirmed that the error condition occurs on all 6 production servers took one server out of the cluster, turned on ssl debugging for just that one (with a restart), hit it directly, handshake works! switched to a different server without the debugging turned on, handshake error condition occurs turned debugging on on that second server (with a restart), hit it directly, handshake works! From the evidence, it seems like somehow the debugging being enabled causes the certificates to be properly retrieved/conveyed, although that makes no sense! I wonder whether somehow the enabled debugging makes the system pay attention to the System.setProperty call, and ignore it otherwise. However, in local and test environments, handshake worked without debugging enabled. Do I maybe need to be setting keystore on server startup like I'm setting truststore? Have been avoiding that because the keystore will differ for each of our test environments (16 of them).

    Read the article

  • Mercurial Remote Subrepos

    - by Travis G
    I'm trying to set up my Mercurial repository system to work with multiple subrepos. I've basically followed these instructions to set up the client repo with Mercurial client v1.5 and I'm using HgWebDir to host my multiple projects. I have an HgWebDir with the following structure: http://myserver/hg fooproj mylib where mylib is some collection of common template library to be consumed by fooproj. The structure of fooproj looks like this: fooproj doc/ src/ .hgignore .hgsub .hgsubstate And .hgsub looks like: src/mylib = http://myserver/hg/mylib This should work, per my interpretation of the documentation: The first 'nested' is the path in our working dir, and the second is a URL or path to pull from. So, let's say I pull down fooproj to my home folder with: ~$ hg clone http://myserver/hg/fooproj foo Which pulls down the directory structure properly and adds the folder ~/foo/src/mylib which is a local Mercurial repository. This is where the problems begin: the mylib folder is empty aside from the items in .hg. With 2 seconds of investigation, one can see the src/mylib/.hg/hgrc is: [paths] default = http://myserver/hg/fooproj/src/mylib which is completely wrong (attempting a pull of that repo will give a 404 because, well, that URL doesn't make any sense). Logically, the default value should be what I specified in .hgsub or it would get the files from the repository in some way. None of the Mercurial commands return error codes (aside from a pull from within src/mylib), so it clearly believes that it is behaving properly (and just might be), although this does not seem logical at all. What am I doing wrong?

    Read the article

  • WSE 3.0 crashes when ClearHeaders is called

    - by Daniel Enetoft
    Hi! I'm developing a client-server application in c# using WSE web-service. One of the things that the user can do is send jpg images to the server for backup via the web-service. Recently strange errors have occurred. This does not happen for all users, just a few. On the client side the exception is a System.Net.WebException Exception message: The operation has timed out and on the server the following warning is found in the event viewer: Exception information: Exception type: HttpException Exception message: Server cannot clear headers after HTTP headers have been sent. Request information Request URL: MyUrl/Service.asmx Request path: /MyWebService/Service.asmx User host address: ------- User: Is authenticated: False Authentication Type: Thread account name: NT AUTHORITY\NETWORK SERVICE Thread information: Thread ID: 7 Thread account name: NT AUTHORITY\NETWORK SERVICE Is impersonating: False Stack trace: at System.Web.HttpResponse.ClearHeaders() at System.Web.Services.Protocols.SoapServerProtocol.WriteException(Exception e, Stream outputStream) at System.Web.Services.Protocols.ServerProtocolFactory.Create(Type type, HttpContext context, HttpRequest request, HttpResponse response, Boolean& abortProcessing) at System.Web.Services.Protocols.WebServiceHandlerFactory.CoreGetHandler(Type type, HttpContext context, HttpRequest request, HttpResponse response) Does anyone have an idea where this error can come from? I have already tried to raise the "maxRequestLength" in web.config to 16Mb but this doesn't fix it. Regards /Daniel

    Read the article

  • Rails - difference between config.cache_store and config.action_controller.cache_store?

    - by gsmendoza
    If I set this in my environment config.action_controller.cache_store = :mem_cache_store ActionController::Base.cache_store will use a memcached store but Rails.cache will use a memory store instead: $ ./script/console >> ActionController::Base.cache_store => #<ActiveSupport::Cache::MemCacheStore:0xb6eb4bbc @data=<MemCache: 1 servers, ns: nil, ro: false>> >> Rails.cache => #<ActiveSupport::Cache::MemoryStore:0xb78b5e54 @data={}> In my app, I use Rails.cache.fetch(key){ object } to cache objects inside my helpers. All this time, I assumed that Rails.cache uses the memcached store so I'm surprised that it uses memory store. If I change the cache_store setting in my environment to config.cache_store = :mem_cache_store both ActionController::Base.cache_store and Rails.cache will now use the same memory store, which is what I expect: $ ./script/console >> ActionController::Base.cache_store => #<ActiveSupport::Cache::MemCacheStore:0xb7b8e928 @data=<MemCache: 1 servers, ns: nil, ro: false>, @middleware=#<Class:0xb7b73d44>, @thread_local_key=:active_support_cache_mem_cache_store_local_cache> >> Rails.cache => #<ActiveSupport::Cache::MemCacheStore:0xb7b8e928 @data=<MemCache: 1 servers, ns: nil, ro: false>, @middleware=#<Class:0xb7b73d44>, @thread_local_key=:active_support_cache_mem_cache_store_local_cache> However, when I run the app, I get a "marshal dump" error in the line where I call Rails.cache.fetch(key){ object } no marshal_dump is defined for class Proc Extracted source (around line #1): 1: Rails.cache.fetch(fragment_cache_key(...), :expires_in => 15.minutes) { ... } vendor/gems/memcache-client-1.8.1/lib/memcache.rb:359:in 'dump' vendor/gems/memcache-client-1.8.1/lib/memcache.rb:359:in 'set_without_newrelic_trace' What gives? Is Rails.cache meant to be a memory store? Should I call controller.cache_store.fetch in the places where I call Rails.cache.fetch?

    Read the article

  • Nonblocking Tcp server

    - by hoodoos
    It's not really a question, I'm just looking for some guidelines :)

    I'm currently writing an abstract TCP server which should use as few threads as it can. Currently it works this way: I have a thread doing the listening and some worker threads. The listener thread just sits and waits for clients to connect; I expect to have a single listener thread per server instance. The worker threads do all the read/write/processing work on the clients' sockets.

    So my problem is in building an efficient worker process, and I have come to a problem I can't really solve yet. The worker code is something like this (the code is really simplified, just to show the place where I have my problem):

    ```csharp
    List<Socket> readSockets = new List<Socket>();
    List<Socket> writeSockets = new List<Socket>();
    List<Socket> errorSockets = new List<Socket>();

    while( true ){
        Socket.Select( readSockets, writeSockets, errorSockets, 10 );

        foreach( Socket readSocket in readSockets ){
            // do reading here
        }

        foreach( Socket writeSocket in writeSockets ){
            // do writing here
        }

        // POINT2 and here's the problem i will describe below
    }
    ```

    It all works smoothly except for the 100% CPU utilization, because the while loop cycles over and over again. If my clients do a send-receive-disconnect routine it's not that painful, but if I try to keep the connection alive doing send-receive-send-receive over and over, it really eats up all the CPU.

    So my first idea was to put a sleep there: I check if all sockets have their data sent and then put a Thread.Sleep of just 10 ms at POINT2. But this 10 ms later on produces a huge delay of 10 ms when I want to receive the next command from the client socket. For example, if I don't try to "keep alive", commands are executed within 10-15 ms, and with keep-alive it becomes worse by at least 10 ms :(

    Maybe it's just a poor architecture? What can be done so my processor won't hit 100% utilization and my server reacts to something appearing on a client socket as soon as possible? Maybe somebody can point to a good example of a nonblocking server and the architecture it should maintain?
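
    For illustration, a sketch of letting Select itself do the waiting (the clients/clientsWithPendingWrites collections below are assumptions, not from the original code): Select's timeout is in microseconds, so a value of 10 makes it return almost immediately; a larger value, or -1 for an infinite wait, blocks the worker until a socket is actually ready. Select also removes the sockets that are not ready from the lists, so they have to be rebuilt on every pass.

    ```csharp
    // Sketch: rebuild the lists each pass and let Select block instead of spinning.
    // Assumes at least one socket is connected; 'clients' is the worker's socket set.
    while (true)
    {
        List<Socket> readSockets  = new List<Socket>(clients);
        List<Socket> writeSockets = new List<Socket>(clientsWithPendingWrites);
        List<Socket> errorSockets = new List<Socket>(clients);

        // Block for up to 1 second (timeout is in microseconds); -1 blocks indefinitely.
        Socket.Select(readSockets, writeSockets, errorSockets, 1000000);

        foreach (Socket s in readSockets)  { /* do reading here */ }
        foreach (Socket s in writeSockets) { /* do writing here */ }
        foreach (Socket s in errorSockets) { /* drop the connection */ }
    }
    ```

    For larger numbers of connections, the asynchronous socket APIs (BeginReceive/EndReceive or SocketAsyncEventArgs) avoid the polling loop entirely.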

    Read the article

  • send xmpp <message> to component on other domain

    - by cometta
    step 1:on the same domain(.myserver.kicks-ass.net), i able to send to the mycomponent,succesfully. step 2:when i login to other domain ,example gmail.com and try send to another user on [email protected], success as well. step 3:just like step2, but i send the to mycomponent.myserver.kicks-ass.net , i get below error <message xmlns='jabber:client' to='mycomponent.myserver.kicks-ass.net' from='[email protected]/123' type='chat'> <body> just t4st </body> <x xmlns='jabber:x:event'> <offline/> <composing/> </x> </message> <message xmlns='jabber:client' to='[email protected]/123' from='mycomponent.myserver.kicks-ass.net' type='error'> <body> just t4st </body> <x xmlns='jabber:x:event'> <offline/> <composing/> </x> <error code='404' type='cancel'> <remote-server-not-found xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'/> </error> </message>

    Read the article

  • Having problem with jQuery Countdown? Function serverSync: serverTime

    - by ricky roy
    serverSync: serverTime Function return value from server but I have checked both server and client time both are same.When i called server to sync with server it will not display countdown. help me ? $(function() { var shortly = new Date(); var newTime = new Date('April 9, 2010 20:38:10'); //for loop divid /// $('#defaultCountdown').countdown({ until: shortly, onExpiry: liftOff, onTick: watchCountdown, serverSync: serverTime }); $('#div1').countdown({ until: newTime }); }); function serverTime() { var time = null; $.ajax({ type: "POST", //Page Name (in which the method should be called) and method name url: "Default.aspx/GetTime", // If you want to pass parameter or data to server side function you can try line contentType: "application/json; charset=utf-8", dataType: "json", data: "{}", async: false, //else If you don't want to pass any value to server side function leave the data to blank line below //data: "{}", success: function(msg) { //Got the response from server and render to the client time = new Date(msg.d); alert(time); }, error: function(msg) { time = new Date(); alert('1'); } }); shortly = time; return time; }

    Read the article

  • Google App Engine application instance recycling and response times...

    - by Konrad
    Hi, I posted this on the GAE for Java group, but I hope to get some answers here quicker :)

    I decided to do some long-run performance tests on my application. I created a small client hitting the app every 5-30 minutes, and I ran 3-5 threads with such a client. I noticed huge differences in response times and started to investigate. I found the reason very quickly: I am experiencing the same issues as described in the following topics:

    - Uneven response time between connection to server to first byte sent
    - Application instances seem to be too aggressively recycled
    - Getting 'Request was aborted after waiting too long to attempt to service your request.' after application idle

    I am using the Spring Framework, and it takes around 18-20 s to start an app instance, which causes response times to range from 1 s (when a request hits a running instance, which is very rare) to 22 s when a fresh instance is created. Is there any solution for this?

    I was thinking about creating a very basic servlet performing the critical tasks (serving the API call) and leaving the UI as is, but then I would lose all the benefits of the Spring Framework. Is there any solution for this?

    After solving (hacking around) the numerous constraints of App Engine which I hit while developing my app, this is the one that I think will make me move off App Engine... it's simply too much to spend more time thinking about how to work around GAE problems than how to solve my application's problems. Any help?

    Regards,
    Konrad

    Read the article

  • Can i use a different parser for Axis 1.4?

    - by NishM
    The current SAX parser takes a lot of time (20 minutes) and heap memory(around 400mb) to deserialize the response coming from the soap server as per the logs. Our response XMLs are of average size 4 mb. A part of the log when it runs the applicaiton out of heap is below DEBUG (org.apache.axis.encoding.DeserializationContext) Pushing handler org.apache.axis.message.SOAPHandler@16d22f1 DEBUG (org.apache.axis.i18n.ProjectResourceBundle) org.apache.axis.i18n.resource::handleGetObject(newElem00) DEBUG (org.apache.axis.message.MessageElement) New MessageElement (org.apache.axis.message.MessageElement@112c22) named {}name DEBUG (org.apache.axis.encoding.DeserializationContext) Pushing element name DEBUG (org.apache.axis.utils.NSStack) NSPush (32) DEBUG (org.apache.axis.encoding.DeserializationContext) Exit: DeserializationContext::startElement() DEBUG (org.apache.axis.encoding.DeserializationContext) Enter: DeserializationContext::endElement(, name) DEBUG (org.apache.axis.i18n.ProjectResourceBundle) org.apache.axis.i18n.resource::handleGetObject(popHandler00) DEBUG (org.apache.axis.encoding.DeserializationContext) Popping handler org.apache.axis.message.SOAPHandler@16d22f1 DEBUG (org.apache.axis.utils.NSStack) NSPop (32) DEBUG (org.apache.axis.encoding.DeserializationContext) Popped element stack to org.apache.axis.message.MessageElement:property DEBUG (org.apache.axis.encoding.DeserializationContext) Exit: DeserializationContext::endElement() DEBUG (org.apache.axis.encoding.DeserializationContext) Enter: DeserializationContext::startElement(, value) DEBUG (org.apache.axis.i18n.ProjectResourceBundle) org.apache.axis.i18n.resource::handleGetObject(pushHandler00) DEBUG (org.apache.axis.encoding.DeserializationContext) Pushing handler org.apache.axis.message.SOAPHandler@16880ba DEBUG (org.apache.axis.i18n.ProjectResourceBundle) org.apache.axis.i18n.resource::handleGetObject(newElem00) DEBUG (org.apache.axis.message.MessageElement) New MessageElement (org.apache.axis.message.MessageElement@1db74af) named {}value DEBUG (org.apache.axis.encoding.DeserializationContext) Pushing element value DEBUG (org.apache.axis.utils.NSStack) NSPush (32) DEBUG (org.apache.axis.encoding.DeserializationContext) Exit: DeserializationContext::startElement() DEBUG (org.apache.axis.encoding.DeserializationContext) Enter: DeserializationContext::endElement(, value) DEBUG (org.apache.axis.i18n.ProjectResourceBundle) org.apache.axis.i18n.resource::handleGetObject(popHandler00) DEBUG (org.apache.axis.encoding.DeserializationContext) Popping handler org.apache.axis.message.SOAPHandler@16880ba DEBUG (org.apache.axis.utils.NSStack) NSPop (32) I cannot use Axis2 because of technical reasons. I have tried using HTTP Commons client instead of HTTP client but the response time remains the same. How can i link a different parser(example xerces 2.10.0 or xstream 1.3.1?) to Axis 1.4 framework in this context so that memory management and response time is favorable?.

    Read the article

  • WCF ReliableMessaging method called twice

    - by Brian
    Using Fiddler, we see 3 HTTP requests (and matching responses) for each call when: WS-ReliableMessaging is enabled, and, the method returns a large amount of data (17MB) The first HTTP request is a SOAP message with the action "CreateSequence" (presumable to establish the reliable session). The second and third HTTP requests are identical SOAP messages invoking our webservice method. Why are there two identical messages? Here is our config: <system.serviceModel> <client> <endpoint address="http://server/vdir/AccountingService.svc" binding="wsHttpBinding" bindingConfiguration="customWsHttpBinding" behaviorConfiguration="LargeServiceBehavior" contract="MyProject.Accounting.IAccountingService" name="BasicHttpBinding_IAccountingService" /> </client> <bindings> <wsHttpBinding> <binding name="customWsHttpBinding" maxReceivedMessageSize="90000000"> <reliableSession enabled="true"/> <security mode="None" /> </binding> </wsHttpBinding> </bindings> <behaviors> <endpointBehaviors> <behavior name="LargeServiceBehavior"> <dataContractSerializer maxItemsInObjectGraph="2147483647"/> </behavior> </endpointBehaviors> </behaviors> </system.serviceModel> Thanks, Brian

    Read the article

  • WCF- "The underlying connection was closed: The connection was closed unexpectedly"

    - by SumGuy
    Hi there. I'm recieving that wonderfuly ambiguous error message when using one of my webmethods on my WCF webservice. As that error message doesn't provide any explanation whatsoever allow me to post my theory. I believe it may have something to do with the return type I'm using I have a Types DLL which is refrenced in both the webservice and the client. In this DLL is the base class ExceptionMessages. There is a child of this class called DrawingExcepions. Here is some code: public class ExceptionMessages { public object[] ReturnValue { get; set; } } public class DrawingExceptions : ExceptionMessages { private List<DrawingException> des = new List<DrawingException>(); } public class DrawingException { public Exception ExceptionMsg { get; set; } public List<object> Errors { get; set; } } The using code: [OperationContract] ExceptionMessages createNewBom(Bom bom, DrawingFiles dfs); public ExceptionMessages createNewBOM(Bom bom, DrawingFiles dfs) { return insertAssembly(bom, dfs); } public DrawingExceptions insertAssembly(Bom bom, DrawingFiles dfs) { DrawingExceptions des = new DrawingExceptions(); foreach (DrawingFile d in dfs.drawingFiles) { DrawingException temp = insertNewDrawing(bom, d); if (temp != null) des.addDrawingException(temp); if (d.Child != null) des.addDrawingException(insertAssembly(bom, d.Child)); } return des; } Returns to: ExceptionMessages ems = client.createNewBom(bom, currentDFS); if (ems is DrawingExceptions) { } Basically the return type from the webmethod is ExceptionMessages however I would usually be sending the child class back instead. My only idea is that it's the child that's causing the error but as far as I've read, this should have no effect. Has anyone got any ideas what could be going wrong here? If any more info is required, just ask :) Thanks.

    Read the article

< Previous Page | 369 370 371 372 373 374 375 376 377 378 379 380  | Next Page >