Search Results

Search found 26176 results on 1048 pages for 'stream socket client'.

Page 445/1048 | < Previous Page | 441 442 443 444 445 446 447 448 449 450 451 452  | Next Page >

  • JSON - msg.d is undefined error

    - by Narmatha Balasundaram
    Hi - my webmethod returns an array of objects - something like this:

        {"d": [[{"Amount":100,"Name":"StackOverflow"}, {"Amount":200,"Name":"Badges"}, {"Amount":300,"Name":"Questions"}]]}

    On the client side, when the JSON is referenced using msg.d, I get a "msg.d is undefined" error. I am using jQuery JavaScript Library v1.4.2. How do I access the elements in the array of objects? Adding more findings, code and questions: I don't see __type in the JSON object that is returned. Does that mean the object sent from the server is not JSON formatted? When __type is not part of the response, will I be unable to use msg.d? (msg.d is undefined.) Some more: I can access the elements from the client side using msg[0][0].Amount - how can I specifically JSON-format my return object (from the server)?
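    A minimal sketch of the client side, assuming the response really is the double-wrapped array shown above (the webmethod URL and the success handler are invented for illustration). Requesting JSON explicitly with a contentType of application/json is what makes ASP.NET 3.5+ wrap the payload in the "d" property:

        $.ajax({
            type: "POST",
            url: "MyPage.aspx/GetAmounts",  // hypothetical webmethod URL
            contentType: "application/json; charset=utf-8",
            dataType: "json",
            data: "{}",
            success: function (msg) {
                // msg.d is the outer array; msg.d[0] is the inner array of objects
                $.each(msg.d[0], function (i, item) {
                    alert(item.Name + ": " + item.Amount);  // e.g. "StackOverflow: 100"
                });
            }
        });

    If the request is made without that content type, ASP.NET serves the raw array and msg[0][0].Amount is indeed the correct path, which matches the finding above.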

    Read the article

  • continuous music on website

    - by Patrick
    Hi all, I'm against it, as I'm sure ALL of you are, but my client wants background music on their website. I'm very new to this, so I was wondering how I should do that? I know I should use iframes, but what's the actual way of using them? E.g.: do I just create the home page with 2 frames (one for the music, one for the rest of the website), and then every time the user clicks on a link load the usual destination page - or should I update all pages in some way to make sure they are 'frames enabled'? Also, how do I style the frame to make sure it's hidden? thanks, Patrick ps please don't reopen the discussion about why background music is not good - I do know that and personally hate it. But the client is adamant and paying for it so... ;)
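    One common layout, sketched below, is a top-level frameset that never reloads: a zero-height frame holds the audio player and a second frame holds the actual site (the file names are placeholders):

        <!-- index.html: this page itself never reloads, so the music never restarts -->
        <frameset rows="0,*" frameborder="0">
            <frame src="player.html" noresize scrolling="no" />   <!-- hidden music frame -->
            <frame src="home.html" name="content" />              <!-- the real site -->
        </frameset>

    Links inside the content frame stay in that frame by default, so the existing pages need no changes; the rows="0,*" trick is also what keeps the music frame invisible without any CSS.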

    Read the article

  • Git repo planning questions

    - by masonk
    At work, development uses Perforce to handle code sharing. I won't say "revision control", because we aren't allowed to check in changes until they are ready for regression testing. In order to get my personal change sets under revision control, I've been given the go-ahead to build my own git and initialize the client view of the Perforce depot as a git repo. There are some difficulties in doing this, however.

    The client view lives in a subfolder of ~ (~/p4), and I want to put ~ under revision control as well, with its own separate history. I can't figure out how to keep the history for ~ separate from ~/p4 without using a submodule. The problem with a submodule is that it looks like I have to make a repository that will become the submodule and then git submodule add <repo> <path>. But there is nowhere to make the submodule's repository except in ~. There seems to be no safe place to create the initial client view of the depot with git p4 clone. (I'm working off of the assumption that initing or cloning a repo into a subdirectory of a git repo is not supported. At least, I can find nothing authoritative on nested git repos.)

    Edit: is merely ignoring ~/p4 in the repo rooted at ~ enough to allow me to init a nested repo in ~/p4? My __git_ps1 function still thinks I'm in a git repository when I visit an ignored subdirectory of a git repo, so I'm inclined to think not.

    I need the "remote" repository created by git p4 sync to be a branch in ~/p4. We are required to keep all of our code in ~/p4 so that it doesn't get backed up. Can I pull from a "remote" branch that is really a local branch?

    This one is just for convenience, but I thought I could learn something by asking it. For 99% of the project, I just want to start with the p4 head revision as the initial commit object. For the other 1%, I would like to suck down the entire p4 history so that I can browse it in git. IOW, after I'm done initializing it, the initial commit of the remotes/p4/master branch will contain revision 1 of //depot/prod/Foo/Bar/* and revision X of other files in //depot/prod/*, where X is the head revision; the remotes/p4/master branch contains Y commits, where Y is the number of changelists that had a file in //depot/prod/Foo/Bar/*, with each commit in the history corresponding to one of those p4 changelists, and HEAD looking like p4's head.
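    For the "edit" question, one experiment worth trying - a sketch assuming git p4 is available and using the depot path from the question; whether a nested repo inside an ignored directory behaves well is exactly the open question, so treat this as an experiment rather than a recipe:

        # In ~: home-directory history, with the client view ignored entirely
        cd ~
        git init
        echo "p4/" >> .gitignore
        git add .gitignore && git commit -m "home under git; p4 client view ignored"

        # In ~/p4: a separate repo seeded from Perforce; @all imports the full
        # changelist history rather than just the head revision
        cd ~/p4
        git p4 clone //depot/prod/Foo/Bar@all .

    Tools like __git_ps1 may still report the outer repo inside ~/p4, but git commands run there operate on the inner repo, since git resolves the nearest enclosing .git directory.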

    Read the article

  • Unable to add SSH pub key for GitHub

    - by Kaushik
    I am trying to set up a new GitHub account as part of learning Ruby on Rails. My OS is Windows. I am having the following problem while trying to add my public key to the GitHub SSH public keys. When I put the key in the text area, supply a name, and click 'Add Key', I am taken to the Job profile page without any confirmation that the key has been added. (I am using an SSH client GUI to generate RSA keys.) When I come back to the SSH public keys page, I see that the key is not there. I tried this multiple times... without any result. Just as a test, I tried to ssh to the GitHub account using 'ssh -v git@github.com' and here is the verbose output:

        OpenSSH_4.6p1, OpenSSL 0.9.8e 23 Feb 2007
        debug1: Connecting to github.com [207.97.227.239] port 22.
        debug1: Connection established.
        debug1: identity file /c/Users/Administrator/.ssh/identity type -1
        debug1: identity file /c/Users/Administrator/.ssh/id_rsa type -1
        debug1: identity file /c/Users/Administrator/.ssh/id_dsa type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-5github2
        debug1: match: OpenSSH_5.1p1 Debian-5github2 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_4.6
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-cbc hmac-md5 none
        debug1: kex: client->server aes128-cbc hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host 'github.com' is known and matches the RSA host key.
        debug1: Found key in /c/Users/Administrator/.ssh/known_hosts:1
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: /c/Users/Administrator/.ssh/identity
        debug1: Trying private key: /c/Users/Administrator/.ssh/id_rsa
        debug1: Trying private key: /c/Users/Administrator/.ssh/id_dsa
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    Also, why is it looking for the keys in /c/Users/Administrator/.ssh/? Is there a possibility of changing this default location?

    EDIT: With Mozilla, I get the error messages: "Oops! The key has already been taken. The key is invalid. It must begin with 'ssh-rsa' or 'ssh-dss'." My key looks like:

        ---- BEGIN SSH2 PUBLIC KEY ----
        Comment: "[2048-bit rsa, Administrator@Kaushik-THINK, Sun Jan 02 2011 \
        02:40:03]"
        AAAAB3NzaC1yc2EAAAADAQABAAABAQDoA5xqJozKmAHTGh9hgmagsFOl2hdPzS8ZPV9Ta1
        xU0JiUnHef38Rvz/5oqL1wW7SjmZbc/+tCPOfU1lg3UisFXajJhek2bjJ2qsKd4Sjax2Nj
        ZMYD7djPb8rokUYSKW3bdYyJHtJH+murz04UGdCcZ8HqkMTzqlh3zAIK7SIlCy+mtAi5A/
        sm0JbqlNGHB6YrL1aWIcOSolIx2HWt8cWhlK77guT9dPgd0HT59Gn0uhO7UWGLrNdJut0x
        ian3HdvNYMUXoO/SkNlxvWRgZ1UOhaB+qf4hw5RCGcBbqP3fM4LKpurHZx4wEpgmM0e4EM
        +2PYdf46uxChNdBl7J5sZF
        ---- END SSH2 PUBLIC KEY ----

    I still can't see the key... so not sure how it says it is already taken.
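    The "must begin with 'ssh-rsa'" message is the actual diagnosis: the pasted key is in RFC 4716 / SSH2 format (the "---- BEGIN SSH2 PUBLIC KEY ----" header), while GitHub wants the one-line OpenSSH format. ssh-keygen can convert it - a sketch, assuming the exported key was saved as key_ssh2.pub:

        # Convert an SSH2 / RFC 4716 public key to one-line OpenSSH format
        ssh-keygen -i -f key_ssh2.pub > id_rsa.pub
        # The output starts with "ssh-rsa AAAA..." and can be pasted into GitHub

        # ssh reads ~/.ssh by default; point it at another key with -i,
        # then verify the connection:
        ssh -i /path/to/private_key -T git@github.com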

    Read the article

  • Remote Control software for Mac

    - by MarqueIV
    One of the things I like about Microsoft's RDC Client is that the resolution of the experience is set by the client and not, say, a physical monitor connected to the host, as is the case with VNC; the latter being the protocol used by Mac. This means that even though I'm connecting to a notebook with a 1280x800 physical resolution, via RDC I can run it at 2560x1600 on my 30" monitor. However, that only seems to work for RDC. Does anyone know of something I can run on the Mac that will allow me to remotely control it at a different resolution than what is physically set? TIA, Mark

    Read the article

  • One method with many behaviours or many methods

    - by Krowar
    This question is quite general and not related to a specific language, but more to coding best practices. Recently, I've been developing a feature for my app that is requested in many cases with slightly different behaviours. This function sends emails, but to different receivers or with different texts according to the parameters. The method signature is something like:

        public static sendMail(t_message message = null, t_user receiver = null, stream attachedPiece = null)

    And then there are many conditions inside the method, like:

        if (attachedPiece != null) { }

    I've made the choice to do it this way (with a single method) because it saves me from rewriting the (nearly) same method 10 times, but I'm not sure that it's a good practice. What should I have done? Write 10 sendMail methods with different parameters? Are there obvious pros and cons for these different ways of programming? Thanks a lot.
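    One middle ground between the two extremes is a couple of intention-revealing overloads delegating to a single private worker, so callers never pass nulls but the conditional logic still lives in one place. A hedged C#-style sketch (t_message and t_user are the question's own type names; Stream is System.IO.Stream):

        public static class Mailer
        {
            // Small public surface: one overload per real use case
            public static void SendMail(t_message message, t_user receiver)
            {
                Send(message, receiver, null);
            }

            public static void SendMail(t_message message, t_user receiver, Stream attachedPiece)
            {
                Send(message, receiver, attachedPiece);
            }

            // One private worker keeps all the conditionals in a single place
            private static void Send(t_message message, t_user receiver, Stream attachedPiece)
            {
                if (attachedPiece != null)
                {
                    // attach the piece ...
                }
                // compose and send ...
            }
        }

    The overload count then tracks real call sites rather than the combinatorics of the parameters, which is usually far fewer than 10.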

    Read the article

  • How to mimic SMPP connection

    - by vijay.shad
    Hi, I have a situation. My application is sending multiple SMSes to end clients - anywhere from 50,000 to 100,000 SMSes at any point of time. To achieve this functionality I am using Kannel as the SMS sender interface, so my solution is complete - but only in the development environment! Before going to production I am supposed to test this solution in a realistic environment. Are there any suggestions for creating a testing environment? To give more input on my environment: Kannel is using an SMPP connection to deliver messages to the end client. So, I think I need to mimic some SMPP server for Kannel.
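    One way to build that test bench is to point Kannel's smsc group at a local SMPP simulator (SMPPSim is a commonly used one) instead of the live SMSC. A sketch of the kannel.conf fragment, with the host, port and credentials as placeholder values:

        # kannel.conf - SMPP connection aimed at a local simulator, not the live SMSC
        group = smsc
        smsc = smpp
        smsc-id = test-smpp
        host = localhost
        port = 2775                  # the conventional SMPP port
        transceiver-mode = true
        system-type = ""
        smsc-username = smppclient   # placeholder credentials
        smsc-password = password

    Swapping this group in for the production one exercises the full Kannel delivery path while the simulator absorbs and logs the 50,000-100,000 submits.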

    Read the article

  • UI Design Help / Advice

    - by Greg Andora
    Hey everyone, I have a dilemma where our client relations department has been brought in for advice on UI and I vehemently disagree with it... even though I don't consider myself a designer at all. While I have been vocal about my disagreement, I've been asked to point to design standards to prove that what I'm saying is correct and that the guys in Client Relations are flat out wrong. A mockup is below; I'm trying to argue that the icons of the airplane, boat, and couch (ya, I didn't choose those either) belong in the header of the page (same area as the logo) and not in the content area of the page. Can anybody please help me by pointing me to something that helps prove my point? Thanks a lot, Greg Andora

    Read the article

  • Is that a RESTFUL MVC Web Service?

    - by vsj
    I am aware of Web Services and WCF, but I have a generic question about services. I have an ASP.NET MVC application which does some basic functionality. I just have a controller in which I pass in the records and serialize the information to XML using XmlSerializer. Then I return this information to the browser, and it displays the XML I got from the controller action. So I get the XML representation of my class (a database object), and I am to give the URL of this application to the client to access and pull the information. Is this a service then? I mean, in the end all the clients need is the XML representation, which they would also get through services, right? I am not that experienced and am probably being very silly, but please help me out... if I provide XML this way to the client, is that a service? Or is there something I need to understand here?
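    For what it's worth, the controller described boils down to something like this sketch (the names are invented for illustration). Whether it counts as a "service" is mostly a matter of contract: the client still just issues an HTTP GET and parses XML, which is a perfectly workable plain-XML-over-HTTP endpoint even without WSDL metadata:

        public class ProductsController : Controller
        {
            // GET /Products/AsXml - serialize database records and return them as XML
            public ActionResult AsXml()
            {
                var records = GetRecords();   // hypothetical data access
                var serializer = new XmlSerializer(records.GetType());
                using (var writer = new StringWriter())
                {
                    serializer.Serialize(writer, records);
                    return Content(writer.ToString(), "text/xml");
                }
            }
        }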

    Read the article

  • OpenSSL Versions in Solaris

    - by darrenm
    Those of you who have installed Solaris 11 or have read some of the blogs by my colleagues will have noticed Solaris 11 includes OpenSSL 1.0.0; this is a different version to what we have in Solaris 10. I hope the following explains why that is and how it fits with the expectations on binary compatibility between Solaris releases.

    Solaris 10 was the first release where we included OpenSSL libraries and headers (part of it was actually statically linked into the SSH client/server in Solaris 9). At the time we were building and releasing Solaris 10 the current train of OpenSSL was 0.9.7. The OpenSSL libraries at that time were known to not always be completely API and ABI (binary) compatible between releases (sometimes even in the lettered patch releases), though mostly if you stuck with the documented high-level APIs you would be fine. For this reason OpenSSL was classified as a 'Volatile' interface, and in Solaris 10 Volatile interfaces were not part of the default library search path, which is why the OpenSSL libraries live in /usr/sfw/lib on Solaris 10. Okay, but what does Volatile mean? Quoting from the attributes(5) man page description of Volatile (which was called External in older taxonomy):

        Volatile interfaces can change at any time and for any reason.

        The Volatile interface stability level allows Sun products to quickly track a fluid, rapidly evolving specification. In many cases, this is preferred to providing additional stability to the interface, as it may better meet the expectations of the consumer. The most common application of this taxonomy level is to interfaces that are controlled by a body other than Sun, but unlike specifications controlled by standards bodies or Free or Open Source Software (FOSS) communities which value interface compatibility, it can not be asserted that an incompatible change to the interface specification would be exceedingly rare. It may also be applied to FOSS controlled software where it is deemed more important to track the community with minimal latency than to provide stability to our customers. It is also common to apply the Volatile classification level to interfaces in the process of being defined by trusted or widely accepted organizations. These are generically referred to as draft standards. An "IETF Internet draft" is a well understood example of a specification under development. Volatile can also be applied to experimental interfaces.

        No assertion is made regarding either source or binary compatibility of Volatile interfaces between any two releases, including patches. Applications containing these interfaces might fail to function properly in any future release.

    Note that last paragraph! OpenSSL is only one example of the many interfaces in Solaris that are classified as Volatile. At the other end of the scale we have Committed (Stable in Solaris 10 terminology) interfaces; these include things like the POSIX APIs or Solaris-specific APIs that we have no intention of changing in an incompatible way. There are also Private interfaces and things we declare as Not-an-Interface (e.g. command output not intended for scripting against, only to be read by humans).

    Even if we had declared OpenSSL as a Committed/Stable interface in Solaris 10 there are allowed exceptions, again quoting from attributes(5):

        4. An interface specification which isn't controlled by Sun has been changed incompatibly and the vast majority of interface consumers expect the newer interface.

        5. Not making the incompatible change would be incomprehensible to our customers.

    In our opinion and that of our large and small customers, keeping up with the OpenSSL community is important, and certainly both of the above cases apply. Our policy for dealing with OpenSSL on Solaris 10 was to stay at 0.9.7 and add fixes for security vulnerabilities (the version string includes the CVE numbers of fixed vulnerabilities relevant to that release train). The last release of OpenSSL 0.9.7 delivered by the upstream community was more than 4 years ago, in Feb 2007.

    Now let's roll forward to just before the release of Solaris 11 Express in 2010. By that point in time the current OpenSSL release was 0.9.8, with the 1.0.0 release known to be coming soon. Two significant changes to OpenSSL were made between Solaris 10 and Solaris 11 Express. First, in Solaris 11 Express (and Solaris 11) we removed the requirement that Volatile libraries be placed in /usr/sfw/lib - that means OpenSSL is now in /usr/lib. Secondly, we upgraded it to the then current version stream of OpenSSL (0.9.8), as was expected by our customers.

    In between Solaris 11 Express in 2010 and the release of Solaris 11 in 2011, the OpenSSL community released version 1.0.0. This was a huge milestone for a long standing and highly respected open source project. It would have been highly negligent of Solaris not to include OpenSSL 1.0.0e in the Solaris 11 release; it is the latest, best supported and best performing version. In fact Solaris 11 isn't 'just' OpenSSL 1.0.0: we have added our SPARC T4 engine and the AES-NI engine to support the on-chip crypto acceleration. This gives us 4.3x better AES performance than OpenSSL 0.9.8 running on AIX on an IBM POWER7. We are now working with the OpenSSL community to determine how best to integrate the SPARC T4 changes into the mainline OpenSSL. The OpenSSL 'pkcs11' engine we delivered in Solaris 10 to support the CA-6000 card and the SPARC T1/T2/T3 hardware is still included in Solaris 11.

    When OpenSSL 1.0.1 and 1.1.0 come out we will assess what is best for Solaris customers. It might be an upgrade, or it might be parallel delivery of more than one version stream. At this time Solaris 11 still classifies OpenSSL as a Volatile interface; it is our hope that we will be able at some point in a future release to give it a higher interface stability level.

    Happy crypting! and thank-you OpenSSL community for all the work you have done that helps Solaris.

    Read the article

  • Hardest javascript debugging problem ever

    - by Craig
    We have an ASP.NET application, and when running on a client site they often get null reference JavaScript errors. As with all these errors, the information IE6 displays is less than helpful. But the problem is that as soon as I install the IE script debugger and try to debug a bit more, the error becomes non-reproducible. When the script debugger is not installed, the error occurs again. Are there any other tools that could be helpful for JavaScript debugging on the client site? The error is also not produced with IE7 or Firefox.

    Read the article

  • HTTP redirect fallback

    - by Ondrej Stastny
    Hi, Is there a way to provide a fallback URL when an HTTP redirect times out? What I'm trying to achieve: when the original URL is hit, I respond with an HTTP redirect (300? 307?) giving the browser the new URL plus a fallback URL in case the new URL times out. I was also considering doing this client side, but it just does not seem to be very effective (in terms of speed and client support). What I would do is probably have a tiny 1x1px image on each server, try loading it, check with JavaScript which server is up, and redirect there. Any other ideas? Thanks
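    No HTTP status code carries a "try this other URL on timeout" hint - the Location header names exactly one target - so the fallback has to happen client side. A sketch of the 1x1-image probe described above (the URLs and the 3-second budget are placeholders):

        // Probe a server with a tiny image; call onUp/onDown exactly once.
        function probe(base, onUp, onDown) {
            var img = new Image(), done = false;
            var timer = setTimeout(function () {
                if (!done) { done = true; onDown(); }
            }, 3000);
            img.onload = function () {
                if (!done) { done = true; clearTimeout(timer); onUp(); }
            };
            img.onerror = function () {
                if (!done) { done = true; clearTimeout(timer); onDown(); }
            };
            img.src = base + "/ping.gif?" + new Date().getTime();  // cache-buster
        }

        probe("http://primary.example.com", function () {
            window.location.href = "http://primary.example.com/";
        }, function () {
            window.location.href = "http://fallback.example.com/";
        });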

    Read the article

  • Opening a document from a library: Word opens but the contents are not loaded

    - by user300440
    [Environment] Server: SharePoint Portal Server 2003. Client: Windows XP, Internet Explorer 6.0, Microsoft Office 2003.

    Hello~ Today I received a call from one of my clients, and she said that she received the alert "Ambiguous name detected: tempDDE" when attempting to open a doc from a library. So I googled some, found solutions from MS and others, and renamed Normal.dot to Normal1.dot. But after that, when I opened the doc again from the library, another problem happened: the document contents are not loaded. When I click the link of the doc, WINWORD.EXE pops up but the contents of the document do not show up (silent failure). I tried the following:

    1. Open the doc file using Word's Open menu (library link) = worked
    2. Get the URL of the file and open it using Word's Open menu = worked
    3. Add the URL to Trusted Sites = done, but did not help

    Any advice is welcome. Thanks, Karl

    Read the article

  • How to repair ods files

    - by karthick87
    I have a few ods files, but all of a sudden they are not opening. When I open a file I get an error. Error on opening the file with Archive Manager:

        karthick@karthick:/media/Datas$ zip -FF data.ods --out repaired_file.ods
        Fix archive (-FF) - salvage what can
        Found end record (EOCDR) - says expect single disk archive
        Scanning for entries...
        copying: mimetype (46 bytes)
        copying: Configurations2/statusbar/ (0 bytes)
        copying: Configurations2/accelerator/current.xml (2 bytes)
        copying: Configurations2/floater/ (0 bytes)
        copying: Configurations2/popupmenu/ (0 bytes)
        copying: Configurations2/progressbar/ (0 bytes)
        copying: Configurations2/menubar/ (0 bytes)
        copying: Configurations2/toolbar/ (0 bytes)
        copying: Configurations2/images/Bitmaps/ (0 bytes)
        copying: content.xml
        zip warning: no end of stream entry found: content.xml
        zip warning: rewinding and scanning for later entries

    Read the article

  • Can I download & Install the wallpapers from Quantal 12.10 on Precise 12.04.1?

    - by Ankit
    I was googling regarding Ubuntu 12.10 (Quantal Quetzal) and pretty much liked the wallpapers. Just out of curiosity, can I install the wallpapers from a higher release (12.10) on a previous release, Precise (12.04.1), without upgrading to 12.10? I have tried the following:

        ankit@stream:~$ sudo apt-get install ubuntu-wallpapers-quantal
        [sudo] password for ankit:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package ubuntu-wallpapers-quantal

    I have followed this post, but it is for installing wallpapers from previous releases on the current release. Any ideas on how this can be achieved?
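    The package only exists in the Quantal archive, so Precise's apt cannot see it. One workaround sketch is fetching the .deb from the Quantal pool by hand and installing it locally - the exact file name and version below are illustrative, so check packages.ubuntu.com for the real string first:

        # Download the Quantal wallpaper package directly (version is illustrative)
        wget http://archive.ubuntu.com/ubuntu/pool/main/u/ubuntu-wallpapers/ubuntu-wallpapers-quantal_12.10.0-0ubuntu1_all.deb
        sudo dpkg -i ubuntu-wallpapers-quantal_*.deb

    Since a wallpaper package is essentially images plus an XML manifest, it should carry no dependencies that conflict with 12.04.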

    Read the article

  • Why does my server hang when I call a page via file_get_contents?

    - by Marc
    I am trying to get content from a WordPress installation on a subdomain of my server. I tried that with file_get_contents and also with Zend_Http_Client:

        $client = new Zend_Http_Client(Zend_Registry::get('CONFIG')->static->$name->$lang);
        $content = $client->request()->getBody();

    As long as I run it on my localhost, it works fine. As soon as it runs on the same server as the subdomain, it hangs forever (timeout). Specs: a Zend Framework application trying to get HTML from a WordPress page; the server runs lighttpd with several cores and much RAM. Do you guys have an idea on how this problem can be resolved? Cheerio
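    A frequent cause of this exact symptom is the web server being unable to reach its own public hostname from inside (DNS or NAT loopback), so the request never completes. While investigating, a timeout at least turns the infinite hang into a fast failure - a sketch using Zend_Http_Client's standard config option:

        // Fail fast instead of hanging forever while the cause is investigated
        $client = new Zend_Http_Client(Zend_Registry::get('CONFIG')->static->$name->$lang);
        $client->setConfig(array('timeout' => 10));   // seconds
        $content = $client->request()->getBody();

    If the timeout fires only on the production box, try mapping the subdomain to the loopback interface in the server's /etc/hosts (e.g. "127.0.0.1 sub.example.com", with the hostname as a placeholder).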

    Read the article

  • Exporting XML from FIleMaker Pro Server

    - by Jeno
    Environment: FileMaker Pro 10 Server on Mac OS X Server 10.4.11; data server on Windows Server 2008.

    I am having a cross-platform issue when exporting XML from a FileMaker Pro client on a Mac to the data server. My FileMaker Pro server is hosted on Mac OS X, and I need users to export their data to a data server which is hosted on Windows Server. I created a button (function/script) in the FileMaker form for users to export data once they finish their job. The FileMaker Pro client on the PC works perfectly, but it doesn't work on the Mac. I've tried every combination I can think of for the location path as documented on: http://www.filemaker.com/11help/html/create_db.8.32.html#1030283 Any idea? Thanks
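    FileMaker path prefixes differ per platform, which is the usual trap in a mixed Mac/Windows setup. A sketch of an export path list covering both (the share, volume and folder names are placeholders); FileMaker uses the first path in the list it can resolve:

        filewin://dataserver/share/export/result.xml    (client running on Windows)
        filemac:/Volume/export/result.xml               (client running on a Mac)

    Listing both lines in the script's output-file specification lets each platform pick the path it understands, rather than maintaining two scripts.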

    Read the article

  • Preload a lot of tiny pics

    - by clorz
    I'm thinking about how to approach the problem at hand. There's a flash movie that requires a lot of relatively small images, and I'm trying to optimize the time it takes for them to be preloaded. One thing I've considered is turning on KeepAlive in Apache on the server side. That works. But my mind still wonders if there's anything else ;-) So, what other approaches may I try? Is there a way to compress all those images and then unpack them on the client side? I have full control on both the server and client side; I can even try installing something other than Apache. Caching is not an option because it already works, and it's the first-time load that bothers me here.
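    For reference, the KeepAlive tuning mentioned above is only a few lines of httpd.conf; raising MaxKeepAliveRequests matters when hundreds of small images share one connection (the values below are starting points, not gospel):

        # httpd.conf - reuse one TCP connection for many small image requests
        KeepAlive On
        MaxKeepAliveRequests 500   # the default of 100 is low for hundreds of tiny files
        KeepAliveTimeout 5         # keep short so idle workers are not tied up

    On the bundling idea: a flash movie can load one archive (e.g. a zip via an ActionScript zip library, or an asset-only swf) and unpack the images itself, trading many round trips for one larger download; whether that wins depends on how early the first images are needed.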

    Read the article

  • C# Finalize/Dispose pattern

    - by robUK
    Hello, C# 2008. I have been working on this for a while now, and I am still confused about some issues. My questions are below.

    1) I know that you only need a finalizer if you are disposing of unmanaged resources. However, if you are using managed resources that make calls to unmanaged resources, would you still need to implement a finalizer?

    2) If you develop a class that doesn't use any unmanaged resources, directly or indirectly, would it be acceptable to implement IDisposable just so that clients of your class can use the 'using' statement?

        using (myClass objClass = new myClass())
        {
            // Do stuff here
        }

    3) I have developed this simple code below to demonstrate the Finalize/Dispose pattern:

        public class NoGateway : IDisposable
        {
            private WebClient wc = null;

            public NoGateway()
            {
                wc = new WebClient();
                wc.DownloadStringCompleted += wc_DownloadStringCompleted;
            }

            // Start the async call to find if NoGateway is true or false
            public void NoGatewayStatus()
            {
                // Start the async download
                // Do other work here
                wc.DownloadStringAsync(new Uri("www.xxxx.xxx"));
            }

            private void wc_DownloadStringCompleted(object sender, DownloadStringCompletedEventArgs e)
            {
                // Do work here
            }

            // Dispose of the NoGateway object
            public void Dispose()
            {
                wc.DownloadStringCompleted -= wc_DownloadStringCompleted;
                wc.Dispose();
                GC.SuppressFinalize(this);
            }
        }

    Questions about the source code:

    1) Here I have not added the finalizer. Normally the finalizer will be called by the GC, and the finalizer will call Dispose. As I don't have a finalizer, when do I call the Dispose method? Is it the client of the class that has to call it? So my class in the example is called NoGateway, and the client could use and dispose of the class like this - would the Dispose method be automatically called when execution reaches the end of the using block?

        using (NoGateway objNoGateway = new NoGateway())
        {
            // Do stuff here
        }

    Or does the client have to manually call the dispose method, i.e.?

        NoGateway objNoGateway = new NoGateway();
        // Do stuff with object
        objNoGateway.Dispose(); // finished with it

    2) I am using the WebClient class in my 'NoGateway' class. Because WebClient implements the IDisposable interface, does this mean that WebClient indirectly uses unmanaged resources? Is there any hard and fast rule to follow about this? How do I know that a class uses unmanaged resources? Many thanks for helping with all these questions.
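    On question 1, the usual rule of thumb: a finalizer is only warranted when the class itself owns a raw unmanaged handle; merely wrapping managed IDisposable members (like WebClient) calls for Dispose alone, since those members carry their own finalizers. For comparison, a hedged sketch of the canonical full pattern:

        public class NoGateway : IDisposable
        {
            private WebClient wc = new WebClient();
            private bool disposed;

            public void Dispose()
            {
                Dispose(true);
                GC.SuppressFinalize(this);   // a finalizer, if any, need not run now
            }

            protected virtual void Dispose(bool disposing)
            {
                if (disposed) return;
                if (disposing)
                {
                    // Managed state: only safe to touch on the explicit Dispose path
                    wc.Dispose();
                }
                // Raw unmanaged handles would be released here; only their
                // presence justifies uncommenting the finalizer below
                disposed = true;
            }

            // ~NoGateway() { Dispose(false); }
        }

    And yes: a using block calls Dispose automatically at its closing brace, even if an exception is thrown inside it, so the client never needs the manual call.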

    Read the article

  • Non-zero exit status for clean exit

    - by trinithis
    Is it acceptable to return a non-zero exit code if the program in question ran properly? For example, say I have a simple program that (only) does the following: the program takes N arguments and returns an exit code of min(N, 255). Note that any N is valid for the program. A more realistic program might return different codes for successfully run programs that signify different things. Should these programs instead write this information to a stream, such as stdout?
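    The described program is a one-liner; a sketch in Python (the file name is hypothetical):

        # exitcode.py - exit status is min(number of arguments, 255)
        import sys

        sys.exit(min(len(sys.argv) - 1, 255))

    Running "python exitcode.py a b c; echo $?" prints 3. Nothing in POSIX forbids this, but by convention non-zero signals failure to callers like shells and make, which is why data-like results usually belong on stdout instead.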

    Read the article

  • not sure if this is my mistake ~~ vs2010 Add Service Reference fails

    - by gerryLowry
    https://tbe.taleo.net/MANAGER/dispatcher/servlet/rpcrouter

    The above is from Taleo's API guide. I'm trying to create a WCF client (e.g. "Creating Your First WCF Client", http://channel9.msdn.com/shows/Endpoint/Endpoint-Screencasts-Creating-Your-First-WCF-Client/ ). Likely my understanding is flawed. My assumption is that when the link from Taleo is entered into the vs2010 "Add Service Reference" dialog and GO is clicked, vs2010 should retrieve a proper WSDL/SOAP envelope back from the Taleo link. That does not happen; instead an error occurs. Fiddler2 (http://fiddler2.com) displays the status code 500, "HTTP/1.1 500 Internal Server Error". WcfTestClient.exe gives a similar error. QUESTION: is it me, or is the Taleo link flawed? Thank you, Gerry

    [FULL DETAILS "Add Service Reference"]

        The HTML document does not contain Web service discovery information.
        Metadata contains a reference that cannot be resolved: 'https://tbe.taleo.net/MANAGER/dispatcher/servlet/rpcrouter'.
        The content type text/xml;charset=utf-8 of the response message does not match the content type of the binding (application/soap+xml; charset=utf-8). If using a custom encoder, be sure that the IsContentTypeSupported method is implemented properly. The first 544 bytes of the response were: 'SOAP-ENV:Protocol Unsupported content type "application/soap+xml; charset=utf-8", must be: "text/xml". /MANAGER/dispatcher/servlet/rpcrouter'.
        The remote server returned an error: (500) Internal Server Error.
        If the service is defined in the current solution, try building the solution and adding the service reference again.

    [WcfTestClient DETAILS]

        Error: Cannot obtain Metadata from https://tbe.taleo.net/MANAGER/dispatcher/servlet/rpcrouter
        If this is a Windows (R) Communication Foundation service to which you have access, please check that you have enabled metadata publishing at the specified address. For help enabling metadata publishing, please refer to the MSDN documentation at http://go.microsoft.com/fwlink/?LinkId=65455.
        WS-Metadata Exchange Error URI: https://tbe.taleo.net/MANAGER/dispatcher/servlet/rpcrouter
        Metadata contains a reference that cannot be resolved: 'https://tbe.taleo.net/MANAGER/dispatcher/servlet/rpcrouter'.
        The content type text/xml;charset=utf-8 of the response message does not match the content type of the binding (application/soap+xml; charset=utf-8). If using a custom encoder, be sure that the IsContentTypeSupported method is implemented properly. The first 544 bytes of the response were: 'SOAP-ENV:Protocol Unsupported content type "application/soap+xml; charset=utf-8", must be: "text/xml". /MANAGER/dispatcher/servlet/rpcrouter'.
        The remote server returned an error: (500) Internal Server Error.
        HTTP GET Error URI: https://tbe.taleo.net/MANAGER/dispatcher/servlet/rpcrouter
        The HTML document does not contain Web service discovery information.

    Read the article

  • SharePoint 2010 Custom WCF Service - Windows and FBA Authentication

    - by e-rock
    I have SharePoint 2010 configured for Claims Based Authentication with both Windows and Forms Based Authentication (FBA) for external users. I also need to develop custom WCF services. The issue is that I want Windows credentials passed into the WCF service(s); however, I cannot seem to get the Windows credentials passed in. My custom WCF service appears to be using Anonymous authentication (which has to be enabled in IIS in order to display the FBA login screen). The example I have tried to follow is found at http://msdn.microsoft.com/en-us/library/ff521581.aspx. The WCF service gets deployed to _vti_bin (the ISAPI folder). Here is the code for the .svc file:

        <%@ ServiceHost Language="C#" Debug="true"
            Service="MyCompany.CustomerPortal.SharePoint.UI.ISAPI.MyCompany.Services.LibraryManagers.LibraryUploader, $SharePoint.Project.AssemblyFullName$"
            Factory="Microsoft.SharePoint.Client.Services.MultipleBaseAddressBasicHttpBindingServiceHostFactory, Microsoft.SharePoint.Client.ServerRuntime, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c"
            CodeBehind="LibraryUploader.svc.cs" %>

    Here is the code-behind for the .svc file:

        [ServiceContract]
        public interface ILibraryUploader
        {
            [OperationContract]
            string SiteName();
        }

        [BasicHttpBindingServiceMetadataExchangeEndpoint]
        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Required)]
        public class LibraryUploader : ILibraryUploader
        {
            // just try to return the site title right now...
            public string SiteName()
            {
                WindowsIdentity identity = ServiceSecurityContext.Current.WindowsIdentity;
                ClaimsIdentity claimsIdentity = new ClaimsIdentity(identity);
                return SPContext.Current.Web.Title;
            }
        }

    The WCF test client I have just to test it out (a WPF app) uses the following code to call the WCF service:

        private void Button1Click(object sender, RoutedEventArgs e)
        {
            BasicHttpBinding binding = new BasicHttpBinding();
            binding.Security.Mode = BasicHttpSecurityMode.TransportCredentialOnly;
            binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Ntlm;
            EndpointAddress endpoint = new EndpointAddress(
                "http://dev.portal.data-image.local/_vti_bin/MyCompany.Services/LibraryManagers/LibraryUploader.svc");
            LibraryUploaderClient libraryUploader = new LibraryUploaderClient(binding, endpoint);
            libraryUploader.ClientCredentials.Windows.AllowedImpersonationLevel =
                System.Security.Principal.TokenImpersonationLevel.Impersonation;
            MessageBox.Show(libraryUploader.SiteName());
        }

    I am somewhat inexperienced with IIS security settings/configurations when it comes to Claims and trying to use both Windows and FBA. I am also inexperienced when it comes to WCF security configuration. I usually develop internal biz apps and let Visual Studio decide what to use, because security is rarely a concern.

    Read the article

  • How to JBoss/Blazeds clustering and channel failover

    - by Francesco
    Hi, I'm stuck with JBoss and BlazeDS clustering. What I have now is: 2 JBoss instances running the 'all' configuration; one load balancer with Apache and mod_jk, as suggested by the JBoss docs; a Spring/Flex integration app; and a Flex application that I do not want to throw errors when one of my JBoss instances falls over. I find the Adobe documentation really lacking, and being new at clustering, JGroups and balancing, I cannot find how to deploy my app in a clustered environment. From what I understood, if configured correctly BlazeDS should tell the Flex client about failover servers upon connection; then, if the Flex client can't connect to the main server, it goes to another. But the hard part for me is getting there. Can someone point me in the right direction? Thanks in advance
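    The failover list lives in the channel definitions the Flex client compiles against (services-config.xml); the destination then lists the channels in the order they should be tried. A sketch with placeholder hostnames, assuming the standard BlazeDS AMF endpoint layout:

        <!-- services-config.xml: two AMF channels, one per JBoss instance -->
        <channels>
            <channel-definition id="my-amf" class="mx.messaging.channels.AMFChannel">
                <endpoint url="http://node1.example.com:8080/myapp/messagebroker/amf"
                          class="flex.messaging.endpoints.AMFEndpoint"/>
            </channel-definition>
            <channel-definition id="failover-amf" class="mx.messaging.channels.AMFChannel">
                <endpoint url="http://node2.example.com:8080/myapp/messagebroker/amf"
                          class="flex.messaging.endpoints.AMFEndpoint"/>
            </channel-definition>
        </channels>

        <!-- remoting-config.xml: the client works down this list on failure -->
        <destination id="myDestination">
            <channels>
                <channel ref="my-amf"/>
                <channel ref="failover-amf"/>
            </channels>
        </destination>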

    Read the article

  • Pre-generating GUIDs for use in python?

    - by rjuiaa1
    I have a Python program that needs to generate several GUIDs and hand them back with some other data to a client over the network. It may be hit with a lot of requests in a short time period, and I would like the latency to be as low as reasonably possible. Ideally, rather than generating new GUIDs on the fly as the client waits for a response, I would rather be bulk-generating a list of GUIDs in the background that is continually replenished, so that I always have pre-generated ones ready to hand out. I am using the uuid module in Python on Linux. I understand that this is using the uuidd daemon to get UUIDs. Does uuidd already take care of pre-generating UUIDs so that it always has some ready? From the documentation it appears that it does not. Is there some setting in Python or with uuidd to get it to do this automatically? Is there a more elegant approach than manually creating a background thread in my program that maintains a list of UUIDs?
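    The background replenishing described above fits a bounded queue plus a daemon thread; a minimal sketch (the pool size, and uuid4 rather than uuid1, are arbitrary choices here):

        import queue
        import threading
        import uuid

        POOL_SIZE = 1000
        _pool = queue.Queue(maxsize=POOL_SIZE)

        def _replenish():
            # put() blocks once the pool is full and wakes whenever a uuid is taken
            while True:
                _pool.put(uuid.uuid4())

        threading.Thread(target=_replenish, daemon=True).start()

        def get_uuid():
            # Fall back to inline generation if the pool is momentarily empty
            try:
                return _pool.get_nowait()
            except queue.Empty:
                return uuid.uuid4()

    Under a burst, get_uuid() drains the pre-generated pool at queue speed and only pays generation cost once the burst outruns the replenisher.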

    Read the article

  • What are the benefits of designing a KeyBinding relay?

    - by Adam Naylor
    The input system of Quake3 is handled using a keybinding relay, whereby each keypress is matched against a 'binding' which is then passed to the CLI along with a timestamp of when the keypress (or release) occurred. I just wanted to get an idea from developers of what they consider to be the key benefits of designing your input system around this approach. One thing I don't particularly like is the appending of the timestamp to the bound command. This seems like a bit of a hack to bend the CLI into handling the game's input. Also, I feel that detecting the keypress only to add the command to a stream of text that gets parsed at a later date is a slightly latent way of responding to input (or is this unfounded?). The only real benefit I can see is that it allows you to bind 'complex' commands to keypresses, like 'switch weapon;+fire;' for example. Or maybe for journaling purposes? Thanks for any insights!
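    For concreteness, the relay amounts to a table from key to command string, with the key number and event time appended when the event fires (Quake3 emits lines like "+attack 102 3402" into its command stream). A rough C sketch of the idea - not Quake3's actual code:

        #include <stdio.h>
        #include <string.h>

        #define MAX_KEYS 256

        static char *bindings[MAX_KEYS];   /* key -> command string, or NULL */

        void Key_Bind(int key, const char *cmd) {
            bindings[key] = strdup(cmd);   /* e.g. Key_Bind('f', "+fire") */
        }

        /* On a key event, emit the bound command plus key and event time into
         * the command stream; '+' commands get a matching '-' on release so
         * held actions can be timed precisely from the appended timestamps. */
        void Key_Event(int key, int down, int time_ms) {
            const char *cmd = bindings[key];
            if (!cmd) return;
            if (cmd[0] == '+' && !down)
                printf("-%s %d %d\n", cmd + 1, key, time_ms);
            else if (down)
                printf("%s %d %d\n", cmd, key, time_ms);
        }

    The timestamp is what keeps movement timing accurate even when the command buffer is parsed a frame later, which also answers the latency worry: the delay is bounded by the frame, but the recorded times are exact.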

    Read the article
