Search Results

Search found 12658 results on 507 pages for 'tfs api'.


  • Building a Store Locator ASP.NET Application Using Google Maps API (Part 3)

    Over the past two weeks I've shown how to build a store locator application using ASP.NET, the free Google Maps API, and Google's geocoding service. Part 1 looked at creating the database to record the store locations. This database contains a table named Stores with columns capturing each store's address and latitude and longitude coordinates. Part 1 also showed how to use Google's geocoding service to translate a user-entered address into latitude and longitude coordinates, which could then be used to retrieve and display those stores within (roughly) a 15-mile radius. At the end of Part 1, the results page listed the nearby stores in a grid. In Part 2 we used the Google Maps API to add an interactive map to the search results page, with each nearby store displayed on the map as a marker. The map added in Part 2 certainly improves the search results page, but the way the nearby stores are displayed on the map leaves a bit to be desired. For starters, each nearby store is displayed on the map using the same marker icon, namely a red pushpin, which makes it difficult to match up the nearby stores listed in the grid with those displayed on the map. Hovering the mouse over a marker displays the store number in a tooltip, but ideally a user could click a marker to see more detailed information about the store, such as its address, phone number, a photo of the storefront, and so forth. This third and final installment shows how to enhance the map created in Part 2. Specifically, we'll see how to customize the marker icons displayed on the map to make it easier to identify which marker corresponds to which nearby store location. We'll also look at adding rich popup windows to each marker, which include detailed store information and can be extended further to include pictures and other HTML content. Read on to learn more!

    Read the article

  • Great Free Courses on Building HTML5 apps using ASP.NET Web API, Knockout.js and jQuery

    - by ScottGu
    Pluralsight has developed some great training courses on the new .NET 4.5 and VS 2012 release, including two fantastic courses from John Papa that cover how to build HTML5 web apps using ASP.NET Web API, Knockout and jQuery: "Single Page Apps with HTML5, Web API, Knockout and jQuery" and "Building HTML5 and JavaScript Apps with MVVM and Knockout". Free 1-Month Subscription to the Courses: Pluralsight is offering a special promotion that allows you to get a free 1-month subscription to watch the above courses at no cost. There is no obligation to buy anything at the end of the offer, and you don't need to supply a credit card in order to take part. To get access to the courses you simply follow @pluralsight and @john_papa on Twitter and then visit this page and enter your Twitter name using the form on it. Pluralsight will then send you a private Twitter message containing the access code that you can use to subscribe to the courses (and download the course exercise files). Once you are subscribed you have one month to watch the courses, as many times as you want. Pluralsight is running the promotion through Sept 18th, so sign up now to get access. Hope this helps, Scott P.S. And if you are new to Twitter you can also optionally follow me: @scottgu

    Read the article

  • How to enable services Discovery API in GoogleCL?

    - by Marcos
    There are bits and pieces of information all over the place, but I'm trying to put it all together so that GoogleCL finally accesses more than the initial 7 services. Does anyone know of a step-by-step guide? Right now any attempt outside these results in the error message:

    google tasks list
    Did you specify the service correctly? Must be one of 'picasa', 'blogger', 'youtube', 'docs', 'contacts', 'calendar', 'finance'

    I installed GoogleCL from the Ubuntu repos, authenticated a few bundled services like contacts, docs etc., and those work great, giving me access to certain operations like uploading from the command line. I would really like to get it going to support tasks and all the other eligible Google services shown at https://code.google.com/apis/explorer/#_s=tasks Here are some guides/partial steps I've found:

    http://code.google.com/p/googlecl/wiki/DiscoveryManual (indicates you need to check out an updated GoogleCL from the Subversion repository)
    http://code.google.com/p/google-api-python-client/wiki/Installation (easy_install --upgrade google-api-python-client)
    http://code.google.com/p/googlecl/wiki/Install
    http://code.google.com/p/googlecl/source/checkout

    sudo -i
    cd /usr/local/src/
    svn checkout http://googlecl.googlecode.com/svn/trunk/ googlecl-read-only
    cat googlecl-read-only/INSTALL.txt
    cd /usr/local/src/googlecl-read-only/
    python setup.py install

    Result:

    $ google discovery list
    Traceback (most recent call last):
      File "/usr/bin/google", line 488, in run_interactive
        run_once(options, args)
      File "/usr/bin/google", line 540, in run_once
        options.config)
      File "/usr/bin/google", line 364, in import_service
        force_gdata_v1 = config.lazy_get(package.SECTION_HEADER,
    AttributeError: 'module' object has no attribute 'SECTION_HEADER'

    Read the article

  • Handling SEO for infinite pages that cause slow external API calls

    - by Noam
    I have an 'infinite' number of pages on my site which rely on an external API. Generating each page takes time (about 1 minute). Links in the site point to such pages, and when a user clicks one it is generated while they wait. Since I cannot pre-create them all, I am trying to figure out the best SEO approach to handle these pages. Options: (1) Create really simple pages for the web spiders, so only real users fetch the data and generate the full page. I'm a little 'afraid' Google will see this as low-quality content, which might also feel duplicated. (2) Put them under a directory on my site (e.g. /non-generated/) and add a disallow in robots.txt. The problem here is I don't want users to have to deal with a different URL when they want to share or make sense of the page. I thought about redirecting real users from this URL back into the regular hierarchy and thereby 'fooling' Google into not reaching them, but again I'm not sure Google will like me for that. (3) Let Google crawl these pages. The main problem is I can't control the rate of the external API calls, and my site would also seem slower than it should from a spider's perspective (if it only crawled pre-generated pages, it would think the site is much faster). Which approach would you suggest?

    Read the article

  • How to get initial API right using TDD?

    - by Vytautas Mackonis
    This might be a rather silly question, as these are my first attempts at TDD. I loved the sense of confidence it brings and the generally better structure of my code, but when I started to apply it to something bigger than one-class toy examples, I ran into difficulties. Suppose you are writing a library of sorts. You know what it has to do, and you know the general way it is supposed to be implemented (architecture-wise), but you keep "discovering" that you need to make changes to your public API as you code. Perhaps you need to transform a private method into a strategy pattern (and now need to pass a mocked strategy in your tests); perhaps you misplaced a responsibility here and there and need to split an existing class. When you are improving upon existing code, TDD seems a really good fit, but when you are writing everything from scratch, the API you write tests for is a bit "blurry" unless you do a big design up front. What do you do when you already have 30 tests on a method whose signature (and, for that matter, behavior) changed? That is a lot of tests to change once they add up.

    Read the article

  • Using Coherence API to get POF bytes

    - by Bruno.Borges
    Someone raised the question of how to use the Coherence API to get the bytes of an object in POF (Portable Object Format) programmatically. So I came up with this small code sample that shows how simple the API is to use :-)

    SimplePofContext spc = new SimplePofContext();
    spc.registerUserType(0, User.class, new UserSerializer());
    // consider UserSerializer an implementation of PofSerializer

    User u = new User();
    u.setId(21);
    u.setName("Some Name");
    u.setEmail("[email protected]");

    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    DataOutput dataOutput = new DataOutputStream(baos);
    BufferOutput bufferOutput = new WrapperBufferOutput(dataOutput);
    spc.serialize(bufferOutput, u);

    byte[] byteArray = baos.toByteArray();
    System.out.println(Arrays.toString(byteArray));

    Easy, isn't it?

    Read the article

  • API Class with intensive network requests

    - by Marco Acierno
    I'm working on an API which acts as an intermediary between a REST API and the developer. This way, when the programmer does something like this: User user = client.getUser(nickname); it will execute a network request to download the user's data from the service, and then the programmer can use the data with calls like user.getLocation(); user.getDisplayName(); and so on. Now there are some methods like getFollowers() which execute another network request, and I could handle that in two ways: (1) Download all the data in the getUser method (not only the most important parts), but then the request could take very long, since it would have to hit several URLs. (2) Download the data when the user calls the method. This looks like the best way, and to improve it I could cache the result, so the next call to getFollowers returns immediately with the data already downloaded instead of executing the request again. What is the best way? And should methods like getUser and getFollowers block until the data is ready, or should I implement a callback that fires when the data is ready (which feels like JavaScript)?
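
    A minimal C# sketch of the lazy-load-plus-cache idea from option 2. All names here (ApiClient, FetchFollowers, and so on) are hypothetical stand-ins for the real client; the point is that only the first call to GetFollowers performs the network request, and later calls return the cached result.

    using System.Collections.Generic;

    public class ApiClient
    {
        // Hypothetical: wraps the real HTTP call to /users/{nickname}/followers.
        internal IList<string> FetchFollowers(string nickname)
        {
            // ... perform the GET request, parse the JSON, map to objects ...
            return new List<string>();
        }
    }

    public class User
    {
        private readonly ApiClient client;
        private IList<string> followers; // null until first requested

        internal User(ApiClient client, string nickname)
        {
            this.client = client;
            Nickname = nickname;
        }

        public string Nickname { get; }

        // Lazily loaded and cached: only the first call hits the network;
        // subsequent calls return immediately with the downloaded data.
        public IList<string> GetFollowers()
        {
            if (followers == null)
                followers = client.FetchFollowers(Nickname);
            return followers;
        }
    }

    On the blocking-versus-callback question, an async variant (a GetFollowersAsync returning Task<IList<string>>) would let callers choose between awaiting and continuation-style code without JavaScript-style callbacks.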

    Read the article

  • Authenticate native mobile app using a REST API

    - by Supercell
    I'm starting a new project soon, targeting mobile applications on all the major platforms (iOS, Android, Windows). It will be a client-server architecture; the app is both informational and transactional. For the transactional part, users are required to have an account and log in before a transaction can be made. I'm new to mobile development, so I don't know how the authentication part is done on these platforms. The clients will communicate with the server through a REST API, using HTTPS of course. I haven't yet decided whether I want the user to log in when they open the app, or only when they perform a transaction. I have the following questions: 1) Like the Facebook application, you only enter your credentials when you open the application for the first time; after that, you're automatically signed in every time you open the app. How does one accomplish this? Simply by encrypting and storing the credentials on the device and sending them every time the app starts? 2) Do I need to authenticate the user for each (transactional) request made to the REST API, or use a token-based approach? Please feel free to suggest other ways of doing authentication. Thanks!
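
    A minimal sketch of the token-based approach from question 2, in C#. Everything here (TokenStore, Issue, Validate) is a hypothetical server-side piece: the client POSTs credentials once over HTTPS, receives an opaque token, and sends that token on every subsequent request instead of the raw credentials.

    using System;
    using System.Collections.Concurrent;
    using System.Security.Cryptography;

    // Hypothetical server-side token store: issues opaque tokens after a
    // successful credential check and maps them back to user ids. A real
    // system would persist tokens and give them an expiry.
    public static class TokenStore
    {
        private static readonly ConcurrentDictionary<string, string> tokens =
            new ConcurrentDictionary<string, string>();

        public static string Issue(string userId)
        {
            var bytes = new byte[32];
            using (var rng = RandomNumberGenerator.Create())
                rng.GetBytes(bytes);
            string token = Convert.ToBase64String(bytes);
            tokens[token] = userId;
            return token;
        }

        // Returns the user id for a valid token, or null if the token is unknown.
        public static string Validate(string token)
        {
            string userId;
            return tokens.TryGetValue(token, out userId) ? userId : null;
        }
    }

    The client then keeps the token in the platform's secure storage (iOS Keychain, Android KeyStore) rather than hand-rolled credential encryption, and sends it as an Authorization header with each transactional request, so the password crosses the wire only once and the server can revoke tokens independently of it.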

    Read the article

  • Google Currency Convertor JSON API

    - by Gopinath
    There are many live currency conversion services available on the web, and the popular ones among them are Google, Yahoo, MSN & XE. Among these four, Google is the developer's darling, and it provides a simple JSON API that can be integrated into your applications: http://www.google.com/ig/calculator?hl=en&q=1USD=?INR Using the API is very simple, and it takes two parameters as input. The first parameter "hl" is the language code in which you want the output. The second parameter "q" is the conversion query in the format <number><from currency code>=?<to currency code>. In the URL given above, the query requests conversion of 1 USD to INR. The JSON output for the above query would be similar to {lhs: "1 U.S. dollar",rhs: "54.4602984 Indian rupees",error: "",icc: true}
    Example 1: 100 USD in INR http://www.google.com/ig/calculator?hl=en&q=100USD=?INR
    Example 2: 1 GBP in INR http://www.google.com/ig/calculator?hl=en&q=1GBP=?INR
    Example 3: 1 USD in INR, output in French http://www.google.com/ig/calculator?hl=fr&q=1USD=?INR
    This is an undocumented service, so expect changes at any time. But as long as it works, you have a programmatic way to convert currencies.
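
    A minimal C# sketch of calling this endpoint. Note that the response shown above uses unquoted keys, which strict JSON parsers reject, so this sketch pulls the rhs field out with a regular expression instead; and since the service is undocumented, treat the whole thing as fragile.

    using System;
    using System.Net.Http;
    using System.Text.RegularExpressions;
    using System.Threading.Tasks;

    class CurrencyDemo
    {
        static async Task Main()
        {
            using var http = new HttpClient();
            // Build the query: 100 USD in INR, English output.
            var url = "http://www.google.com/ig/calculator?hl=en&q=100USD=?INR";
            string body = await http.GetStringAsync(url);

            // Response looks like {lhs: "...",rhs: "...",error: "",icc: true}
            // with unquoted keys, so extract rhs with a regex rather than a JSON parser.
            Match match = Regex.Match(body, "rhs: \"([^\"]*)\"");
            Console.WriteLine(match.Success ? match.Groups[1].Value : "parse failed");
        }
    }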

    Read the article

  • What are the potential problems with exposing the Facebook API secret?

    - by genehack
    I'm writing a little web utility that posts status updates to Twitter and/or Facebook. That involved creating 'applications' with both of those services in order to get API keys and 'secrets'. My question is how protected I really need to keep those secrets: in order for this to work at all, you seem to need the secret to interact with the authentication part of the service to grant the app access to your account and/or grant it permission to post updates on your behalf. Facebook's documentation says to protect the secret, but at least one other Facebook utility distributes the API key and secret in its source. It's important to note: this isn't your standard Facebook 'application' that runs within the context of Facebook, nor is it a standard "desktop"-style compiled app; it's a web-based application intended to be run on your own web server. The audience for this is probably small and somewhat more sophisticated than average, so one technical alternative would be to require people to obtain their own API key and secret to use the app. That seems like a lot of work, however, and a fairly large barrier to entry for anybody using this. Does anybody know or have any insight on what sort of trouble I'm letting myself in for if I put both the secrets and the API keys in the config for my app and check it into GitHub for all the world to see?

    Read the article

  • [Visual Studio Extension Of The Day] Test Scribe for Visual Studio Ultimate 2010 and Test Professional 2010

    - by Hosam Kamel
    Test Scribe is a documentation power tool designed to construct documents directly from TFS test plan and test run artifacts, for purposes such as discussion and reporting.

    Known Issues/Limitations:
    - Customizing the generated report by changing the template, adding comments, including attachments, etc. is not supported.
    - While opening a test plan summary document in Office 2007, if you get the warning "The file Test Plan Summary cannot be opened because there are problems with the contents" (with details: 'The file is corrupt and cannot be opened'), click 'OK', then click 'Yes' to recover the contents of the document. This will then open the document in Office 2007. The same problem is not found in Office 2010.
    - Generated documents are stored by default in the "My Documents" folder; the output path of the generated report cannot be modified.
    - Exporting Word documents for individual test suites or test cases in a test plan is not supported.

    Download it from the Visual Studio Extension Manager. Originally posted at "Hosam Kamel | Developer & Platform Evangelist" http://blogs.msdn.com/hkamel

    Read the article

  • Microsoft Visual Studio Team Explorer 2010 codename “Eaglestone”

    - by HosamKamel
    Microsoft has released the beta of Microsoft Visual Studio Team Explorer 2010, codename "Eaglestone": the Eclipse plug-in and cross-platform command-line assets that were acquired from Teamprise back in November. You can download the bits here, and participate in the associated Microsoft Connect community here. Changes in this release:
    - Reacted to all of the architectural changes in TFS 2010. This primarily shows up in the support for Team Project Collections, but it also means the Eclipse plug-in supports all the configurations for project portal and reporting services that are possible (including not having any configured at all).
    - Added the enhanced work item linking and hierarchy capabilities. You can now define typed links, query for work items based on links, and work with work item hierarchies.
    - Added support for the new WF-based team build.
    - Reacted to a lot of underlying changes in the source control version model with respect to how branching, merging, and renames happen. History now follows branches and merges, and branches are proper first-class citizens in the Source Control Explorer.
    You can check a detailed post written by bharry here.

    Read the article

  • How to manage security of these self-hosted Web APIs, to ensure that requests for data are authenticated?

    - by Husrat Mehmood
    Let's pretend I am going to work on an enterprise application. Say I have 11 modules in the application, and I have to develop dashboards for every role in the organization we are developing the application for. We decided to use ASP.NET Web API and return JSON data from our APIs. We are going to include 11 self-hosted Web API projects in our application, one per module. All 11 modules are connected to one SQL Server 2012 database. Once the APIs are ready, we have to create business dashboards (based upon the roles in the organization). Now here is the part all this explanation leads up to: my Web API client is an ASP.NET MVC application, and ASP.NET MVC will consume those Web APIs. How should I manage security across all 11 self-hosted Web APIs? If I authenticate a user by login and password, redirect them to the dashboard designed for their role, and load data by consuming the Web APIs, how should I ensure that every request coming in for data is authenticated?
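
    One common approach in classic ASP.NET Web API is a message handler that rejects unauthenticated requests before they reach any controller; the same handler class can be registered in each of the 11 self-hosted projects. A minimal sketch, assuming a Bearer token issued earlier by some login endpoint (the IsValid check is a hypothetical placeholder for real token validation):

    using System.Net;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading;
    using System.Threading.Tasks;

    // Rejects any request that lacks a valid Bearer token. The same handler
    // class can be reused across all of the self-hosted Web API projects.
    public class TokenAuthHandler : DelegatingHandler
    {
        protected override Task<HttpResponseMessage> SendAsync(
            HttpRequestMessage request, CancellationToken cancellationToken)
        {
            AuthenticationHeaderValue auth = request.Headers.Authorization;
            if (auth == null || auth.Scheme != "Bearer" || !IsValid(auth.Parameter))
            {
                var reply = new HttpResponseMessage(HttpStatusCode.Unauthorized);
                return Task.FromResult(reply);
            }
            return base.SendAsync(request, cancellationToken);
        }

        // Hypothetical placeholder: check the token against your session store
        // (for example, a table in the shared SQL Server 2012 database).
        private static bool IsValid(string token)
        {
            return !string.IsNullOrEmpty(token);
        }
    }

    Each module's HttpSelfHostConfiguration would then register it with config.MessageHandlers.Add(new TokenAuthHandler()), so authentication is enforced before any controller code runs.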

    Read the article

  • How to optimize calls to multiple APIs at once and return as one set?

    - by Martin
    I have a web app that searches across 2 APIs right now. I have my own RESTful web service that I call, and it does all the work on the backend to asynchronously call the 2 APIs and concatenate the results into one set for my web app to use. I want to scale this out and add as many other APIs as I can (currently looking at about 10 more). But as I add APIs, the call to my service gets (potentially) slower and more complex. How do I handle one API not responding, and other issues that arise? What would be the best way to approach this? Should I create a service call for each API, so that each one is independent and not coupled to all the other calls? Is there a way on the backend to handle the multiple API calls without all the extra complexity? If I go the route of a service call per API, my client code gets more complex (and I have a lot of clients), it's more work for the client, and since I have mobile apps, it will cost the client more data usage. If I go with one service call, is there a way to set up some sort of connection so I can return data as I get it, in case one service call hangs?
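
    A minimal C# sketch of the single-service approach: fan out to all the APIs in parallel, give each call its own timeout, and merge whatever succeeded, so one unresponsive API degrades the result set instead of hanging it. The endpoint list and the string result type are hypothetical placeholders.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Net.Http;
    using System.Threading;
    using System.Threading.Tasks;

    class Aggregator
    {
        private static readonly HttpClient http = new HttpClient();

        // Hypothetical endpoints standing in for the ~12 upstream APIs.
        private static readonly string[] endpoints =
        {
            "https://api-one.example.com/search?q=",
            "https://api-two.example.com/search?q=",
        };

        public static async Task<List<string>> SearchAllAsync(string query)
        {
            var calls = endpoints
                .Select(e => CallWithTimeoutAsync(e + Uri.EscapeDataString(query)));
            string[] results = await Task.WhenAll(calls);

            // Failures and timeouts come back as null; merge whatever succeeded.
            return results.Where(r => r != null).ToList();
        }

        private static async Task<string> CallWithTimeoutAsync(string url)
        {
            using (var cts = new CancellationTokenSource(TimeSpan.FromSeconds(3)))
            {
                try
                {
                    var response = await http.GetAsync(url, cts.Token);
                    response.EnsureSuccessStatusCode();
                    return await response.Content.ReadAsStringAsync();
                }
                catch (Exception)
                {
                    return null; // one unresponsive API degrades, not sinks, the set
                }
            }
        }
    }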

    Read the article

  • Subversion to TFS 2010 RTM migration with timestamps

    - by ooOOoo
    The question is simple: if one needs to migrate a Subversion repository to TFS 2010 RTM, what is the best tool to use? I have found http://www.timelymigration.com/ and it looks good, but after contacting them I found out that during the migration the timestamps on the changesets are lost. All timestamps are set to the date of migration, and the real timestamps are stored in the comment of the changeset. How can I migrate from SVN to TFS 2010 RTM and keep the timestamps?

    Read the article

  • Should I use TFS 2010 Project Collection per Customer?

    - by Yoann. B
    Hi, my company is a software development company. We plan to use TFS 2010 for our future customer development. TFS 2010 introduces Team Project Collections in order to partition related Team Projects. So my question is: should I use a Project Collection per customer, or should I use a single Project Collection with a Team Project per customer, containing that customer's solution projects?

    Read the article

  • Search Netflix using API without the user being logged in?

    - by Felix
    I'm trying to search Netflix through their API, but without logging anyone in (because I want to do this on the back-end, not necessarily related to any user action). I'm just starting off with their API so please forgive me if I'm doing something completely stupid. Here's the URL I'm trying to access: http://api.netflix.com/catalog/titles/?oauth_consumer_key=MY_CONSUMER_KEY&oauth_token_secret=MY_SECRET&term=fight+club However, that gives me a 400 Bad Request error. Is there no way to browse/search the Netflix catalog without having a user first sign in to my application? Or am I doing something wrong? Note: I'm accessing said URL through my browser, since I only want to perform a GET request, which is what a browser does by default.
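
    For what it's worth, the token secret was never meant to travel in the query string: APIs of this era, Netflix's included, typically expected an OAuth 1.0-style signed request even for consumer-only ("two-legged") catalog calls, where the consumer secret signs the request and only the derived oauth_signature goes over the wire. A hedged C# sketch of standard OAuth 1.0a HMAC-SHA1 signing; Netflix-specific parameter details may differ:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Security.Cryptography;
    using System.Text;

    class OAuthSigner
    {
        static string Encode(string s) => Uri.EscapeDataString(s);

        // Builds a signed GET URL per OAuth 1.0a, two-legged (no user token).
        public static string SignedUrl(string baseUrl, string consumerKey,
                                       string consumerSecret, string term)
        {
            // SortedDictionary gives the lexicographic parameter order the spec requires.
            var p = new SortedDictionary<string, string>
            {
                ["oauth_consumer_key"] = consumerKey,
                ["oauth_nonce"] = Guid.NewGuid().ToString("N"),
                ["oauth_signature_method"] = "HMAC-SHA1",
                ["oauth_timestamp"] = DateTimeOffset.UtcNow.ToUnixTimeSeconds().ToString(),
                ["oauth_version"] = "1.0",
                ["term"] = term,
            };

            string paramString = string.Join("&",
                p.Select(kv => Encode(kv.Key) + "=" + Encode(kv.Value)));
            string baseString = "GET&" + Encode(baseUrl) + "&" + Encode(paramString);

            // Two-legged OAuth: the signing key is "<consumer secret>&" (empty token secret).
            using (var hmac = new HMACSHA1(Encoding.ASCII.GetBytes(Encode(consumerSecret) + "&")))
            {
                string signature = Convert.ToBase64String(
                    hmac.ComputeHash(Encoding.ASCII.GetBytes(baseString)));
                return baseUrl + "?" + paramString + "&oauth_signature=" + Encode(signature);
            }
        }
    }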

    Read the article

  • Searching documents by tag using the Scribd API is no longer returning expected results

    - by George
    Recently I have encountered an issue where searching via the Scribd API (docs.search) for documents by tag is no longer working. This had been working (for over 6 months) to return a number of documents that I have tagged with "fdsafetyandprevention" (accessible here: http://www.scribd.com/tag/fdsafetyandprevention). Just recently my search via the API stopped working. Note that test searches such as @tags "selfhelp", as described in the Scribd documentation, DO work. Could my issue be related to caching, or to the age of my documents and Scribd choosing not to return them in search results? I have been using scribd.php (http://www.scribd.com/developers/libraries) to interface with the API, calling $scribd->search('@tags "fdsafetyandprevention"', 20, 0, "all"). I am following the Scribd documentation for docs.search and the advanced search help (http://www.scribd.com/developers/search_help). Help greatly appreciated. George.

    Read the article

  • Can I expose only a portion of one .NET DLL's public classes via a different "API" DLL?

    - by Ben McIntosh
    I am designing a WPF application that uses a DLL with maybe 40 public classes. I need these to be public for a variety of reasons, including ease of data binding and obfuscation. I would like to allow other people to use only a portion of these classes as an API for my software. I thought I would create the main library (core.dll) and an API library (coreAPI.dll), with the API DLL to be referenced in a new project. Is there a way to allow coreAPI.dll to expose only a few of the classes that exist in core.dll? It's not so much a security issue; I primarily want to hide the unwanted classes from Visual Studio IntelliSense. Again, making the classes I want to hide internal is not really an option, because I need to data bind some of these classes in WPF, and for that they must be public. Are there any other ways of doing this?
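
    Since the goal is hiding classes from IntelliSense rather than true access control, one option worth noting is marking the classes to hide with [EditorBrowsable]. A minimal C# sketch (class names are hypothetical): the attributed class stays fully public and bindable, but completion lists in projects referencing the compiled DLL will not show it.

    using System.ComponentModel;

    // Stays fully public (so WPF data binding and core.dll itself can use it),
    // but IntelliSense in projects referencing the compiled assembly hides it.
    [EditorBrowsable(EditorBrowsableState.Never)]
    public class InternalPlumbing
    {
        public string State { get; set; }
    }

    // Classes intended for API consumers carry no attribute and appear normally.
    public class PublicFacade
    {
        public void DoWork() { /* ... */ }
    }

    Note this only affects editor listing, not access: anyone can still instantiate the hidden types, so a separate coreAPI.dll facade that wraps core.dll types remains the stricter route if that matters.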

    Read the article

  • Advanced WSO2 API MANAGER configurations

    - by nuvio
    I am trying to use an 'external' WSO2 ESB, so I changed "api-manager.xml" as follows (ESB port: 9443, API Manager port: 9445):

    <ServerURL>https://localhost:9443/services/</ServerURL>
    ...
    <APIEndpointURL>http://localhost:9443,https://localhost:9443</APIEndpointURL>

    But I get an error when publishing an API via the API Publisher:

    Caused by: org.apache.axis2.AxisFault: Error initializing API handler: org.wso2.carbon.apimgt.gateway.handlers.security.APIAuthenticationHandler

    Any suggestions? Many thanks in advance for your help!

    Read the article

  • Learning the Introspection API (used by FxCop)

    - by Anand Patel
    Microsoft's FxCop tool uses the Introspection API, which could also be used to develop new code analysis tools. But the Introspection API is not well documented, and I was not able to find any blogs that explain this API in breadth and depth. The knowledge gained by understanding the API can also be used for writing custom FxCop rules. Does anybody know of any blog or resource which explains it?

    Read the article

  • What if I have an API method and a controller/view method with the same name in RoR?

    - by Chad Johnson
    Suppose I want to be able to view a list of products on my site by going to /product/list. Great. So this uses my 'list' view and outputs some HTML which my web browser will render. But now suppose I want to provide a REST API to my client where they can get a list of their products. So I suppose I'd have them authenticate with OAuth, and then they'd call /product/list, which would return a JSON array of their products. But like I said earlier, /product/list displays an HTML web page, so I have a conflict. What is normal practice as far as providing APIs in Rails? Should I have a subdirectory, 'api', in /app/controllers, and another 'product' controller, so my client would go to /api/product/list to get a list of their products? I'm a bit new to RoR, so I don't have the best grasp of the REST functionality yet, but hopefully my question makes sense.

    Read the article

  • Using Google Weather API with Lat and Lon - how to format?

    - by Paul
    I want to use the Google Weather API by passing lat and long values. However, it seems Google needs these formatted differently from the values I have stored, i.e. for the town of McTavish I have values of 45.5 and -73.583. This works here: http://api.wunderground.com/auto/wui/geo/WXCurrentObXML/index.xml?query=45.5,-73.583 But when I use Google it does not. See: www.google.com/ig/api?weather=,,,45.5,-73.583 Any help appreciated. I would prefer to use the Google data.
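
    For what it's worth, the iGoogle weather endpoint was commonly reported to want latitude and longitude as integer microdegrees (the decimal value multiplied by 1,000,000, with no decimal point), so 45.5, -73.583 would become 45500000,-73583000. A small C# sketch of that formatting, assuming this undocumented convention holds:

    using System;
    using System.Globalization;

    class WeatherUrl
    {
        // Converts decimal degrees to the integer microdegree format the
        // undocumented iGoogle weather API was reported to expect.
        static string ToMicrodegrees(double degrees) =>
            ((long)Math.Round(degrees * 1000000)).ToString(CultureInfo.InvariantCulture);

        static void Main()
        {
            double lat = 45.5, lon = -73.583; // McTavish
            string url = "http://www.google.com/ig/api?weather=,,,"
                         + ToMicrodegrees(lat) + "," + ToMicrodegrees(lon);
            Console.WriteLine(url);
            // -> http://www.google.com/ig/api?weather=,,,45500000,-73583000
        }
    }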

    Read the article
