Search Results

Search found 73679 results on 2948 pages for 'get http client info'.

Page 53/2948

  • Writing an ASP.Net Web based TFS Client

    - by Glav
    So one of the things I needed to do was write an ASP.Net MVC based application for our senior execs to manage a set of arbitrary attributes against stories, bugs etc., to be able to indicate whether the item was related to Research and Development, and if so, what kind. We are using TFS Azure and don't have the option of custom templates. I have decided on using a string based field within the template that is not very visible and which we don't otherwise use, to hold a small set of custom codes which will determine the research and development association. However, this string munging on the field is not very user friendly, so we need a simple tool that can display attributes against items in a simple dropdown list or something similar. Enter a custom web app that accesses our TFS items in Azure (Note: we are also using Visual Studio 2012).

    Now TFS Azure uses your Live ID, and it is not really possible to easily do this in a server based app where no interaction is available. Even if you capture the Live ID credentials yourself and try to submit them to TFS Azure, it won't work. Bottom line is that it is not straightforward nor obvious what you have to do. In fact, it is a real pain to find, and there are some answers out there which don't appear to be answers at all, given they didn't work in my scenario. So for anyone else who wants to do this, here is a simple breakdown of what you have to do:

    Go here and get the "TFS Service Credential Viewer". Install it, run it, connect to your TFS instance in Azure and create a service account. Note the username and password exactly as it presents them to you. This is the magic identity that will allow unattended, programmatic access. Without this step, don't bother trying to do anything else.

    In your MVC app, reference the following assemblies from "C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\ReferenceAssemblies\v2.0":

        Microsoft.TeamFoundation.Client.dll
        Microsoft.TeamFoundation.Common.dll
        Microsoft.TeamFoundation.VersionControl.Client.dll
        Microsoft.TeamFoundation.VersionControl.Common.dll
        Microsoft.TeamFoundation.WorkItemTracking.Client.DataStoreLoader.dll
        Microsoft.TeamFoundation.WorkItemTracking.Client.dll
        Microsoft.TeamFoundation.WorkItemTracking.Common.dll

    If hosting this in Internet Information Server, you will need to enable 32 bit support for the application pool this app runs under.

    You also have to allow the TFS client assemblies to store a cache of files on your system. If you don't do this, you will authenticate fine, but then get an exception saying that it is unable to access the cache at some directory path when you query work items. You can set this up by adding the following to the <appSettings> element of your web.config, as shown below:

        <appSettings>
            <!-- Add reference to TFS Client Cache -->
            <add key="WorkItemTrackingCacheRoot" value="C:\windows\temp" />
        </appSettings>

    With all that in place, you can write the following code:

        var token = new Microsoft.TeamFoundation.Client.SimpleWebTokenCredential(
            "{your-service-account-name}", "{your-service-acct-password}");
        var clientCreds = new Microsoft.TeamFoundation.Client.TfsClientCredentials(token);
        var currentCollection = new TfsTeamProjectCollection(
            new Uri("https://{yourdomain}.visualstudio.com/defaultcollection"), clientCreds);
        currentCollection.EnsureAuthenticated();

    In the above code, note that the URL contains "defaultcollection" at the end. Obviously replace {yourdomain} with whatever is defined for your TFS in Azure instance. In addition, make sure the service account name and password that were generated in the first step are substituted in here. Note: if something is not right, the EnsureAuthenticated() call will throw an exception with the message being that you are not authorised. If you forget the "defaultcollection" on the URL, it will still fail, but with a similar yet slightly different "not authorised" exception message. And that is it. You can then query the collection using something like:

        var service = currentCollection.GetService<WorkItemStore>();
        var proj = service.Projects[0];
        var allQueries = proj.StoredQueries;
        for (int qcnt = 0; qcnt < allQueries.Count; qcnt++)
        {
            var query = allQueries[qcnt];
            var queryDesc = string.Format("Query found named: {0}", query.Name);
        }

    You get the idea. If you search around, you will find references to the ServiceIdentityCredentialProvider which is referenced in this article. I had no luck with this method and it all looked too hard, since it required an extra KB article and other magic sauce. So I hope that helps. This article certainly would have helped me save a boat load of time and frustration.
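    As an aside (not from the original post), once authenticated you can also query work items directly with WIQL rather than walking stored queries; a minimal sketch, where the project name and query text are illustrative placeholders:

        // Hypothetical WIQL query against the authenticated collection; the project
        // name and query text below are placeholders, not from the original post.
        var store = currentCollection.GetService<WorkItemStore>();
        var wiql = "SELECT [System.Id], [System.Title] FROM WorkItems " +
                   "WHERE [System.TeamProject] = '{your-project}' " +
                   "ORDER BY [System.ChangedDate] DESC";
        WorkItemCollection results = store.Query(wiql);
        foreach (WorkItem workItem in results)
        {
            Console.WriteLine("{0}: {1}", workItem.Id, workItem.Title);
        }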


  • jquery serial format

    - by Yan
    I have a website that prompts the user to enter a serial number for a product. I have one text box where the user needs to enter the serial in this format: xx:xx:xx:xx:xx:xx. Is there any component that will enter the ':' after every two chars? Or maybe I should split the text box into 6 text boxes? Or is there anything else that you can suggest?
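    One possible approach, sketched below (the #serial id and the jQuery binding are assumptions, not from the question): keep a single text box and reformat its value on every keystroke, stripping anything that isn't a hex digit and re-inserting a colon after each pair.

        // Sketch: format a single text box as xx:xx:xx:xx:xx:xx while the user types.
        $('#serial').on('keyup input', function () {
            var digits = $(this).val().replace(/[^0-9a-fA-F]/g, '').slice(0, 12); // keep up to 12 hex digits
            var pairs = digits.match(/.{1,2}/g);                                  // split into pairs
            $(this).val(pairs ? pairs.join(':') : '');
        });

    Splitting into six boxes works too, but a single masked box is usually friendlier; note that this naive version moves the caret to the end of the field on every keystroke.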


  • SEO redirects for removed pages

    - by adam
    Hi, apologies if SO is not the right place for this, but there are 700+ other SEO questions on here. I'm a senior developer for a travel site with 12k+ pages. We completely redeveloped the site and relaunched in January, and with the volatile nature of travel, there are many pages which are no longer on the site. Examples: /destinations/africa/senegal.aspx and /destinations/africa/features.aspx. Of course, we have a 404 page in place (and it's a hard 404 page rather than a 30x redirect to a 404). Our SEO advisor has asked us to 30x redirect all our 404 pages (as found in Webmaster Tools), his argument being that 404s are damaging to our PageRank. He'd want us to redirect our Senegal and features pages above to the Africa page (which doesn't contain the content previously found on senegal.aspx or features.aspx). An equivalent for SO would be taking a URL for a removed question and redirecting it to /questions rather than showing a 404 'Question/Page not found'. My argument is that, as these pages are no longer on the site, 404 is the correct status to return. I'd also argue that redirecting these to less relevant pages could damage our SEO (due to duplicate content, perhaps)? It's also very time consuming to redirect all 404s when our site takes some content from our in-house system, which adds/removes content at will. Thanks for any advice, Adam


  • OWB - 11.2.0.4 Windows standalone client released

    - by David Allan
    The 11.2.0.4 release of OWB, containing the 32 bit and 64 bit standalone Windows clients, is released today; I had previously blogged about the Linux standalone client here. Big thanks to Anil for spearheading that, another milestone on the Data Integration roadmap. Below are the patch numbers:

        17743124 - OWB 11.2.0.4 STANDALONE CLIENT FOR Windows 64 BIT
        17743119 - OWB 11.2.0.4 STANDALONE CLIENT FOR Windows 32 BIT

    This is the terminal release of OWB, and customer bugs will be resolved on top of this release. We are excited to share information on the Oracle Data Integration 12c release in our upcoming launch video webcast on November 12th.


  • Web marketing: Adobe rounds out its customer experience optimization solution with modules for mobile campaigns

    Adobe updates its customer experience optimization solution with modules for mobile campaigns and integration with the Adobe Online Marketing Suite. Adobe announces the availability of its new web experience management solution (WEM: Web Experience Management), an advance the company describes as "major" for its CEM (customer experience management) platform. WEM aims to optimize the way companies create multichannel experiences across sales, service and customer interactions. The solution lets companies take advantage of the latest mobile devices and of communities "to develop their potential...


  • How to SELECT DISTINCT Info with TOP 1 Info and an Order By FROM the Top 1 Info

    - by Erin Taylor
    I have 2 tables that look like: CustomerInfo(CustomerID, CustomerName) and CustomerReviews(ReviewID, CustomerID, Review, Score). I want to search reviews for a string and return CustomerInfo.CustomerID and CustomerInfo.CustomerName. However, I only want to show distinct CustomerID and CustomerName values, along with just one of their CustomerReviews.Review and CustomerReviews.Score values. I also want to order by CustomerReviews.Score. I can't figure out how to do this, since a customer can leave multiple reviews, but I only want a list of customers with their highest scored review. Any ideas?
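    A sketch of one way to do this (assuming a database with window functions, such as SQL Server; the 'widget' search term is a placeholder): rank each customer's matching reviews by Score and keep only the top-ranked row per customer.

        -- One row per customer: their highest-scored review matching the search term.
        WITH RankedReviews AS (
            SELECT ci.CustomerID,
                   ci.CustomerName,
                   cr.Review,
                   cr.Score,
                   ROW_NUMBER() OVER (PARTITION BY ci.CustomerID
                                      ORDER BY cr.Score DESC) AS rn
            FROM CustomerInfo ci
            JOIN CustomerReviews cr ON cr.CustomerID = ci.CustomerID
            WHERE cr.Review LIKE '%widget%'
        )
        SELECT CustomerID, CustomerName, Review, Score
        FROM RankedReviews
        WHERE rn = 1
        ORDER BY Score DESC;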


  • Webcast: Moving Client/Server and .NET Applications to Windows Azure Cloud

    - by Webgui
    The Cloud and SaaS models are changing the face of enterprise IT in terms of economics, scalability and accessibility. Visual WebGui Instant CloudMove transforms your Client/Server application code to run natively as .NET on Windows Azure, and enables your Azure Client/Server application to have secured-by-design plain Web or Mobile browser based accessibility. Itzik Spitzen, VP of R&D at Gizmox, will present a webcast on Microsoft Academy on Tuesday 8 March at 8am (USA Pacific Time), explaining how VWG bridges the gap between Client/Server applications' richness, performance, security and ease of development and the Cloud's economics and scalability. He will then introduce the unique migration and modernization tools which empower customers like Advanced Telemetry, Communitech, and others, to transform their existing Client/Server business applications into native Web Applications (Rich ASP.NET) and then deploy them on Windows Azure, which allows accessibility from any browser (or mobile, if desired by the customer). Registration page on Microsoft Academy: https://www.eventbuilder.com/microsoft/event_desc.asp?p_event=1u19p08y


  • Why does a non-existent page return a 302 status when using a custom 404 page in ASP.NET

    - by webdevbytes
    I have set up a custom 404 page, custom404.aspx, that returns a 404 Not Found status correctly; however, the non-existent page that was initially requested returns a 302 Found status. So when I test thispagedoesnotexist.aspx, it returns a 302 Found, then custom404.aspx loads and returns a 404 Not Found status. I want to make sure that search spiders/bots understand that the requested page does not exist and should not show up in any search results. Is this the case? Cheers
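    Not from the question, but a sketch of the usual fix on .NET 3.5 SP1 or later: switch customErrors to ResponseRewrite so the error page is served in place of the missing URL (no intermediate 302), and set the status code explicitly inside the error page itself.

        <!-- web.config: serve the error page in-place instead of redirecting to it -->
        <system.web>
          <customErrors mode="On" redirectMode="ResponseRewrite">
            <error statusCode="404" redirect="~/custom404.aspx" />
          </customErrors>
        </system.web>

        // custom404.aspx.cs: keep the 404 status on the rewritten response
        protected void Page_Load(object sender, EventArgs e)
        {
            Response.StatusCode = 404;
            Response.TrySkipIisCustomErrors = true; // IIS7 integrated pipeline only
        }

    With that in place, the originally requested URL answers with a single 404 and no 302, which is what crawlers need in order to drop the page.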


  • nginx multiple domain virtual host configuration

    - by Poe
    I'm setting up nginx with multiple domain or wildcard support for convenience's sake, rather than setting up 50+ different sites-available/* files. Hopefully this is enough to show you what I'm trying to do. Some are static sites, some are dynamic with usually WordPress installed. If an index.php exists, everything works as expected. If a file is requested that does not exist (missing.html), a 500 error is given due to the rewrite. The logged error is:

        *112 rewrite or internal redirection cycle while processing "/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/missing.html"

    The basic nginx configuration I'm currently using is:

        server {
            listen 80 default;
            server_name _;
            ...
            location / {
                root /var/www/$host;
                if (-f $request_filename) {
                    expires max;
                    break;
                }
                # problem, what if index.php does not exist?
                if (!-e $request_filename) {
                    rewrite ^/(.*)$ /index.php/$1 last;
                }
            }
            ...
        }

    If an index.php does not exist, and the requested file also does not exist, I would like it to return a 404 error. Currently, nginx does not support multiple conditions in an if, or nested ifs, so I need a workaround.
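    Not from the question, but a sketch of the usual workaround (the fastcgi socket path is an assumption): drop the if blocks and let try_files pick the file, a directory, or index.php, with the PHP location returning 404 when the script, including index.php, does not exist for that host.

        server {
            listen 80 default;
            server_name _;
            root /var/www/$host;

            location / {
                # serve the file or directory if present, else hand off to index.php
                try_files $uri $uri/ /index.php$is_args$args;
                expires max;    # far-future caching for static hits, as in the original config
            }

            location ~ \.php$ {
                # covers "index.php does not exist for this host" with a plain 404
                try_files $uri =404;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/var/run/php-fpm.sock;    # assumption: PHP-FPM on a unix socket
            }
        }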


  • Moving all UI logic to Client Side?

    - by Mag20
    Our team originally consisted of mostly server side developers with minimal expertise in Javascript. In ASP.NET we used to write a lot of UI logic in code-behind or, more recently, through controllers in MVC. A little while ago 2 high level client side developers joined our team. They can do in HTML/CSS/Javascript pretty much anything that we could previously do with server-side code and server-side web controls: show/hide controls, do validation, control AJAX refreshing. So I started to think that maybe it would be more efficient to just create a high level API around our business logic, kinda like the Amazon Fulfillment API: http://docs.amazonwebservices.com/fws/latest/APIReference/, so that client side developers would fully take over the UI, while server side developers would only concentrate on business logic. So for an ordering system you would have a high level API like:

        OrderService.asmx
            CreateOrderResponse CreateOrder(CreateOrderRequest)
            AddOrderItem
            AddPayment
            SubmitPayment
            GetOrderByID
            FindOrdersByCriteria
            ...

    There would be JSON/REST access to the API, so it would be easy to consume from the client-side UI. We could use this API for both internal UI development and also for 3rd parties to create their own applications. With advances in Javascript and the availability of good client side developers, is it a good time to get rid of code-behind/controllers and just concentrate on developing high level APIs (a la Amazon) that client side developers can consume?
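    Purely as an illustration (the /api/orders route and the payload below are hypothetical, not an existing service): with a JSON/REST facade over the order API, the client-side half reduces to plain AJAX calls, which is what makes this split workable.

        // Hypothetical call to a JSON endpoint exposed by the high level order API.
        $.ajax({
            url: '/api/orders',                 // assumed REST route for CreateOrder
            type: 'POST',
            contentType: 'application/json',
            data: JSON.stringify({ customerId: 42, items: [{ sku: 'ABC-1', qty: 2 }] }),
            success: function (response) {
                // all presentation logic stays client-side
                $('#status').text('Created order ' + response.orderId);
            },
            error: function () {
                $('#status').text('Could not create the order.');
            }
        });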


  • Why would certain browsers request all pages on my ASP.Net Web site twice?

    - by Deane
    Firefox is issuing duplicate requests to my ASP.Net web site. It will request a page, get the response, then immediately issue the same request again (well, almost the same -- see below). This happens on every page of this particular web site (but not any others). IE does not do this, but Chrome also does this. I have confirmed that there is no Location header in the response, and no Javascript or meta tag in the page which would cause the page to be re-requested (if any of these were true, IE would be re-requesting pages as well). I have confirmed this behavior on multiple Firefox installs on multiple machines. Versions vary, but all are 3.x. The only difference between the two requests is the Accept header. For the first request, it looks like this:

        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8

    For the second request, it looks like this:

        Accept: */*

    The Content-Type response header in all cases is:

        Content-Type: text/html; charset=utf-8

    Something else odd -- even though Firefox requests the page twice, it uses the first response and discards the second. I put a counter on a page that increments with every request. I can watch the responses come back (via the Charles proxy). Firefox will get a "1" the first time, and a "2" the second time. Yet it will display the "1", for some reason. Chrome exhibits this exact same behavior. I suspect it's a protocol-level issue, given the difference in the Accept header, but I've never seen this before.


  • Can CFHEADER values be read by other code?

    - by Aidan Whitehall
    The code <cfheader name="Test" value="1"> <cfheader name="Test" value="2"> results in the header "Test: 2" being sent to the browser (as seen using HttpFox). Is there a way for the second line of code to determine if a header with the same name has already been written using CFHEADER? Thanks!


  • Support clicking a link, but sending a POST (vs GET) to the server, without Ajax?

    - by xyld
    I'm thinking this isn't exactly possible, but maybe I'm wrong. I'm simply torn between those who believe that only POST requests should modify data on the server and people that relax the rule and allow GET requests to modify data. Take this situation. Say you have a table where each row corresponds to a row in the database. I'd like to allow users to delete a row via a fancy "X" icon as the very last <td></td> element in the row. AFAIK, the only way to send a POST to the server is via a form. But do I really stuff an entire form into the last <td></td> element just to do a POST? Or should I cheat and use an <a href=...></a> tag that sends a GET request? You may be thinking "Do both! Send a POST AND use the <a ...></a> tag! Use fancy javascript + xhr!" And I'll say, oh? And how will that degrade in a zero-JavaScript environment? Maybe we've reached a point where it doesn't make sense to worry about gracefully degrading? I'm not sure. You tell me? I'm new to web development, but I understand most of the concepts involved.
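    For what it's worth, here is a minimal no-JavaScript sketch (the /rows/delete action and row_id field are placeholder names): each row carries its own tiny form, so the delete is a genuine POST and still works with scripting disabled.

        <!-- one small form per row; action URL and field names are illustrative -->
        <td>
          <form method="post" action="/rows/delete">
            <input type="hidden" name="row_id" value="42" />
            <button type="submit" title="Delete this row">X</button>
          </form>
        </td>

    Where JavaScript is available, it can hijack the form's submit event and send the same POST via XHR; where it isn't, the plain form still degrades gracefully.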


  • How should I setup billing for AdWords when managing a client's campaign in My Client Center? [closed]

    - by Dustin
    I have worked with Google AdWords before and will now be managing an AdWords account for a client. I have a My Client Center account, but I'm wondering what the best practices are for billing. Should I link billing to my own credit card and then have the client pay me (they have to pay me to manage the account anyway), or should I have the client pay Google directly? How is this usually done? If it is the latter, what is the best way to have them input their payment info?


  • Asking browsers to cache as aggressively as possible

    - by balpha
    This is about a web app that serves images. Since the same request will always return the same image, I want the accessing browsers to cache the images as aggressively as possible. I pretty much want to tell the browser: "Here's your image. Go ahead and keep it; it's really not going to change for the next couple of days. No need to come back. Really. I promise." I do, so far, set:

        Cache-Control: public, max-age=86400
        Last-Modified: (some time ago)
        Expires: (two days from now)

    and of course return a 304 Not Modified if the request has the appropriate If-Modified-Since header. Is there anything else I can do (or anything I should do differently) to get my message across to the browsers? The app is hosted on the Google App Engine, in case that matters.
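    A sketch of a more aggressive header set (the values are suggestions, and "immutable" is a later Cache-Control extension that older browsers simply ignore):

        Cache-Control: public, max-age=604800, immutable
        Expires: (one week from now, kept in step with max-age)
        Last-Modified: (an unchanging timestamp for that image)
        ETag: "(hash of the image bytes)"

    If you can put a version number or content hash into the image URL itself, you can go much further (e.g. max-age=31536000, about a year), because a changed image simply gets a new URL and the old one never needs revalidating.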


  • Client and Server game update speed

    - by user20686
    I am working on a simple two player networked asteroids game using XNA and the Lidgren networking library. For this setup I have a Lidgren server maintaining what I want to be the true state of the game, and the XNA game is the Lidgren client. The client sends key inputs to the server, and the server processes the key inputs against game logic, sending back updates. (This seemed like a better idea than sending local positions to the server.) The client also processes the key inputs on its own, so as to not have any visible lag, and then interpolates between the local position and remote position. Based on what I have been reading, this is the correct way to smooth out a networked game. The only thing I don't get is what value to use as the time delta. Currently, every message the server sends also includes a delta-time value, which is the time since the last update. The client saves this delta time to use for its local position updates, so both sides are using roughly the same time deltas to calculate position updates. I know the XNA game update gets called 60 times a second, so I set my server to update the game state at the same speed. This will probably only work as long as the game is using a fixed time step, and will probably cause problems if I want to change that in the future. The server sends updates to clients on another thread, which runs at 10 updates per second to cut down on bandwidth. I do not see noticeable lag in movement, and over time, if no user input is received, the local and remote positions converge on each other as they should. I am also not currently accounting for any latency, as I am trying to go one step at a time. So my question is: should the XNA client be using its current game time to update the local game state rather than the time deltas sent by the server? If I should be using the client's time delta between updates, how do I keep it in line with how fast the server is updating its game state?
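    Not an answer from the thread, just a sketch of one common arrangement (the localState, remoteState and network names are illustrative): the client advances its prediction with its own elapsed game time and treats server snapshots purely as correction targets, so the server's 10 Hz send rate and its delta no longer have to line up with the client's 60 Hz update.

        // Inside the XNA client's Update(GameTime gameTime).
        float dt = (float)gameTime.ElapsedGameTime.TotalSeconds;

        // 1. Predict locally with the client's own delta so movement stays smooth at 60 Hz.
        localState.Position += localState.Velocity * dt;

        // 2. When a server snapshot arrives (roughly 10 Hz), store it as the authoritative target.
        Snapshot snapshot;
        if (network.TryGetLatestSnapshot(out snapshot))
        {
            remoteState = snapshot;
        }

        // 3. Blend the prediction toward the authoritative state a little every frame.
        const float smoothingRate = 10f;   // higher = snappier correction, more visible snapping
        float blend = MathHelper.Clamp(smoothingRate * dt, 0f, 1f);
        localState.Position = Vector2.Lerp(localState.Position, remoteState.Position, blend);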


  • php HTTP_REFERER header, how to turn off or leave blank

    - by eco_bach
    Hi, I'm using the following simple PHP proxy script but am getting a sporadic message at the destination site. I'm thinking that perhaps it may have something to do with the HTTP_REFERER header, although I'm not explicitly defining it. Can anyone tell me how to explicitly turn off the HTTP_REFERER header, or leave it blank? Thanks in advance!

        $path = $_GET['path'];
        readfile($path);
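    For reference (not from the question): PHP's http:// stream wrapper does not add a Referer header unless you tell it to, so the script above should already send none. If you want explicit control over the outgoing request headers, you can pass a stream context, roughly like this (and a user-supplied URL should be validated before being proxied at all):

        // Sketch: fetch the remote file with an explicit set of request headers.
        // Only the headers listed here are sent; no Referer is included.
        $path = $_GET['path'];                     // TODO: whitelist/validate before use

        $context = stream_context_create(array(
            'http' => array(
                'method' => 'GET',
                'header' => "User-Agent: SimpleProxy/1.0\r\n",
            ),
        ));

        readfile($path, false, $context);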


  • Java RMI cannot connect to host from external client.

    - by Koe
    I've been using RMI in this project for a while. I've gotten the client program to connect (amongst other things) to the server when running it over my LAN; however, when running it over the internet I'm running into the following exception:

        java.rmi.ConnectException: Connection refused to host: (private IP of host machine); nested exception is:
            java.net.ConnectException: Connection timed out: connect
            at sun.rmi.transport.tcp.TCPEndpoint.newSocket(Unknown Source)
            at sun.rmi.transport.tcp.TCPChannel.createConnection(Unknown Source)
            at sun.rmi.transport.tcp.TCPChannel.newConnection(Unknown Source)
            at sun.rmi.server.UnicastRef.invoke(Unknown Source)
            at java.rmi.server.RemoteObjectInvocationHandler.invokeRemoteMethod(Unknown Source)
            at java.rmi.server.RemoteObjectInvocationHandler.invoke(Unknown Source)
            at $Proxy1.ping(Unknown Source)
            at client.Launcher$PingLabel.runPing(Launcher.java:366)
            at client.Launcher$PingLabel.<init>(Launcher.java:353)
            at client.Launcher.setupContentPane(Launcher.java:112)
            at client.Launcher.<init>(Launcher.java:99)
            at client.Launcher.main(Launcher.java:59)
        Caused by: java.net.ConnectException: Connection timed out: connect
            at java.net.PlainSocketImpl.socketConnect(Native Method)
            at java.net.PlainSocketImpl.doConnect(Unknown Source)
            at java.net.PlainSocketImpl.connectToAddress(Unknown Source)
            at java.net.PlainSocketImpl.connect(Unknown Source)
            at java.net.SocksSocketImpl.connect(Unknown Source)
            at java.net.Socket.connect(Unknown Source)
            at java.net.Socket.connect(Unknown Source)
            at java.net.Socket.<init>(Unknown Source)
            at java.net.Socket.<init>(Unknown Source)
            at sun.rmi.transport.proxy.RMIDirectSocketFactory.createSocket(Unknown Source)
            at sun.rmi.transport.proxy.RMIMasterSocketFactory.createSocket(Unknown Source)
            ... 12 more

    This error is reminiscent of my early implementation of RMI, and I can obtain the error verbatim if I run the client locally without the server program running as well. To me, Connection Timed Out means a problem with the server's response. Here's the client initiation:

        public static void main(String[] args) {
            try {
                String host = "<WAN IP>";
                Registry registry = LocateRegistry.getRegistry(host, 1099);
                Login lstub = (Login) registry.lookup("Login Server");
                Information istub = (Information) registry.lookup("Game Server");
                new Launcher(istub, lstub);
            } catch (RemoteException e) {
                System.err.println("Client exception: " + e.toString());
                e.printStackTrace();
            } catch (NotBoundException e) {
                System.err.println("Client exception: " + e.toString());
                e.printStackTrace();
            }
        }

    Interestingly enough, no RemoteException is thrown here. Here's the server initiation:

        public static void main(String args[]) {
            try {
                GameServer gobj = new GameServer();
                Information gstub = (Information) UnicastRemoteObject.exportObject(gobj, 1099);
                Registry registry = LocateRegistry.createRegistry(1099);
                registry.bind("Game Server", gstub);

                LoginServer lobj = new LoginServer(gobj);
                Login lstub = (Login) UnicastRemoteObject.exportObject(lobj, 7099);

                // Bind the remote object's stub in the registry
                registry.bind("Login Server", lstub);
                System.out.println("Server ready");
            } catch (Exception e) {
                System.err.println("Server exception: " + e.toString());
                e.printStackTrace();
            }
        }

    Bad practice with the catch (Exception e), I know, but bear with me. Up to this stage I know it works fine over the LAN; here's where the exception occurs over the WAN, and it is the first place a method on the server is called:

        private class PingLabel extends JLabel {
            private static final long serialVersionUID = 1L;

            public PingLabel() {
                super("");
                runPing();
            }

            public void setText(String text) {
                super.setText("Ping: " + text + "ms");
            }

            public void runPing() {
                try {
                    PingThread pt = new PingThread();
                    gameServer.ping();
                    pt.setRecieved(true);
                    setText("" + pt.getTime());
                } catch (RemoteException e) {
                    e.printStackTrace();
                }
            }
        }

    That's a label placed on the launcher as a ping test. The method ping() in GameServer does nothing; it is an empty method. It's worth noting also that ports 1099 and 7099 are forwarded to the server machine (which should be obvious from the stack trace). Can anyone see anything I'm missing/doing wrong? If you need any more information just ask.

    EDIT: I'm practically certain the problem has nothing to do with my router settings. When disabling my port forwarding settings I get a slightly different error: Client exception: java.rmi.ConnectException: Connection refused to host: (-WAN IP NOT LOCAL IP-); but it appears both on the machine locally connected to the server and on the remote machine. In addition, I got it to work seamlessly when connecting the server straight to the modem (cutting out the router). I can only conclude the problem is in my router's settings, but I can't see where (I've checked and double checked the port forwarding page). That's the only answer I can come up with.
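    Not part of the question, but for anyone hitting the same thing: "Connection refused to host: (private IP)" usually means the server exported its stubs with its LAN address baked in, so remote clients call back to the private IP rather than the WAN one. A common workaround (a sketch; substitute the real public address) is to set java.rmi.server.hostname on the server before anything is exported:

        // At the very top of the server's main(), before createRegistry()/exportObject().
        System.setProperty("java.rmi.server.hostname", "<WAN IP>");   // placeholder public address

    The same can be done from the command line with -Djava.rmi.server.hostname=<WAN IP>. The registry then hands out stubs carrying the public address, and the forwarded ports 1099 and 7099 route those calls back to the server.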

