Search Results

Search found 80218 results on 3209 pages for 'client side data'.


  • Token based Authentication for WCF HTTP/REST Services: The Client

    - by Your DisplayName here!
    If you wondered what a client has to look like to work with the authentication framework, it is pretty straightforward: request a token, put that token on the Authorization header (along with a registered scheme) and make the service call, e.g.:
        var oauth2 = new OAuth2Client(_oauth2Address);
        var swt = oauth2.RequestAccessToken("username", "password", _baseAddress.AbsoluteUri);
        var client = new HttpClient { BaseAddress = _baseAddress };
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", swt);
        var response = client.Get("identity");
        response.EnsureSuccessStatusCode();
    HTH

    Read the article

  • Google I/O 2012 - Native Client LIVE

    Google I/O 2012 - Native Client LIVE, with Colton McAnlis and Noel Allen. In this talk, we will be porting an application to Native Client in 60 minutes, LIVE, showing the power of what Native Client can provide for traditional C++ developers looking to move to the web. In the porting process we'll cover specific tasks that a developer would need to perform during a port, and how to address them with new tools and technologies, including debugging integration with Visual Studio and a set of utility libraries newly added to the SDK. Attendees of this session will walk away with a clear understanding of what's required to port their applications to Native Client so that they can start their own projects. For all I/O 2012 sessions, go to developers.google.com. From: GoogleDevelopers. Time: 48:21.

    Read the article

  • Refreshing imported MySQL data with MySQL for Excel

    - by Javier Rivera
    Welcome to another blog post from the MySQL for Excel Team. Today we're going to talk about a new feature included since MySQL for Excel 1.3.0. You can install the latest GA or maintenance version using the MySQL Installer, or optionally you can download any GA or non-GA version directly from the MySQL Developer Zone.
    As some users suggested in our forums, we should maintain the link between tables and Excel not only when editing data through the Edit MySQL Data option, but also when importing data via Import MySQL Data. Before 1.3.0 this process only provided you with an offline copy of the table's data in Excel, and you had no way to refresh that information from the DB later on. Now, with this new feature, we'll show you how easy it is to work with the latest available information at all times.
    This feature is transparent to you; it doesn't require additional steps as long as you have the Create an Excel Table for the imported MySQL table data option enabled. To ensure you have this option checked, click Advanced Options... after the Import Data dialog is displayed. The current blog post assumes you already know how to import data into Excel; you can always take a look at our previous post How To - Guide to Importing Data from a MySQL Database to Excel using MySQL for Excel if you need further reference on that topic.
    After importing data from a MySQL table into Excel, you can refresh the data in 3 ways:
    1. Right-click the range of the imported data to show the pop-up menu, then click the Refresh button to obtain the latest copy of the data in the table.
    2. Click the Refresh button on the Data ribbon.
    3. Click the Refresh All button on the Data ribbon (beware: this will refresh all Excel tables in the Workbook).
    Please note a couple of details here. The first one is about the size of the table: if new columns have been added to it by the time you refresh, and you originally imported all columns, the table will grow to the right. The same applies to rows; if the table has new rows and you did not limit the results, the table will grow to the bottom of the sheet in Excel. The second detail you should take into account is that this operation will overwrite any changes made to the cells after the table was originally imported or last refreshed.
    Now, with this new feature, imported data remains linked to the data source and is available to be updated at all times. It empowers the user to always work with the latest version of the imported MySQL data. We hope you like this new feature and give it a try! Remember that your feedback is very important for us, so drop us a message with your comments or suggestions for this or other features, and follow us on our social media channels:
    - MySQL on Windows (this) Blog: https://blogs.oracle.com/MySqlOnWindows/
    - MySQL for Excel forum: http://forums.mysql.com/list.php?172
    - Facebook: http://www.facebook.com/mysql
    - YouTube channel: https://www.youtube.com/user/MySQLChannel
    Thanks!

    Read the article

  • Appropriate response when client empowered with CMS destroys content to his own will

    - by dukeofgaming
    So, I just recently closed a website project that pretty much was The Oatmeal's Design Hell, but with content. The client loved the site at the beginning, but started getting other people involved and mercilessly bombarding us with their opinions. We delivered a carefully thought-out content strategy (which the client approved) and extremely curated copywriting that took us four months, after at least 5 requirement changes (new content, new objectives for the business, changed offerings, new mindfaps, etc.) that required us to rewrite the content about 3 times. The client never gave timely feedback even though we kept the process open for him and his people to see (content being developed transparently in Google Docs). Near the end of the project he still wanted to make changes but wanted us to finish already (there are not enough words in the world to even try to make sense of this). So I explained to him the obvious implications of the never-ending requirement changes and advised him to take the time to gather his thoughts with his own team and see the new content introduced as a new content-maintenance project. He happily accepted, but on the day of training/delivery things went very wrong, and we have no idea why. The client didn't even allow the site to be out for a week with the content we developed for him, and quickly replaced us with a Joomla-savvy intern who completely destroyed the content with shallow, unstructured, tasteless and plain wordsmithing (and I'm not even being visceral). Worst insult of all, he revoked our access to his server and the deployed CMS less than 10 minutes after being given his administrator account (we realized the day after that he did it from our own office, the nerve!). Everybody involved in the team is enraged and insulted. I never want to see this happen again. So, to try to make sense of this situation and avoid it in the future with new clients, I have two concrete questions:
    1. Is there even an appropriate course of action with a client like this, or is he just not worth the trouble of analyzing (blindly hoping this never repeats again)?
    2. In the exercise of trying to blame ourselves instead of the client and take this as a lesson in... something, how should we set expectations for new clients about the working terms, process and final product, so that they are discouraged from mauling the content to their own contempt once they get the codes to the nukes (access to the CMS)?

    Read the article

  • Analyzing data from the same tables in different DB instances

    - by Oscar Reyes
    Short version: How can I map two columns from tables A and B if they both have a common identifier, which in turn may have two values in column C? Let's say:
        A --- 1, 2
        B --- ?, 3
        C --- 45, 2
              45, 3
    Using table C I know that ids 2 and 3 belong to the same item (45), and thus "?" in table B should be 1. What query could do something like that?
    EDIT: Long version omitted. It was really boring/confusing.
    EDIT: I'm posting some output here. From this query:
        select distinct(rolein), activityin
        from taskperformance@dm_prod
        where activityin in (
            select activityin from activities@dm_prod
            where activityid in (
                select activityid from activities@dm_prod
                where activityin in (
                    select distinct(activityin) from taskperformance where rolein = 0
                )
            )
        )
    I have the following parts:
        select distinct(activityin) from taskperformance where rolein = 0
    Output: http://question1337216.pastebin.com/f5039557
        select activityin from activities@dm_prod
        where activityid in (
            select activityid from activities@dm_prod
            where activityin in (
                select distinct(activityin) from taskperformance where rolein = 0
            )
        )
    Output: http://question1337216.pastebin.com/f6cef9393
    And finally:
        select distinct(rolein), activityin
        from taskperformance@dm_prod
        where activityin in (
            select activityin from activities@dm_prod
            where activityid in (
                select activityid from activities@dm_prod
                where activityin in (
                    select distinct(activityin) from taskperformance where rolein = 0
                )
            )
        )
    Output: http://question1337216.pastebin.com/f346057bd
    Take for instance activityin 335 from the first query (from taskperformance B). It is present in activities from A, but it is not in taskperformance in A (though the related activities 92, 208, 335, 595 are present in the result). The corresponding rolein is 1.
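    For what it's worth, the mapping the question describes can be sketched in a few lines of Python using the toy values from the tables above (literals invented for illustration only; in SQL the same idea would be a join through table C, not this script):
        # Hypothetical toy data mirroring the tables above (illustration only).
        a = {2: 1}               # table A: id -> value (id 2 carries the value 1)
        b = {3: None}            # table B: id 3 has the unknown "?" value
        c = [(45, 2), (45, 3)]   # table C: (item, id) pairs linking 2 and 3 to item 45
        # Group the ids in C by their shared item.
        item_to_ids = {}
        for item, ident in c:
            item_to_ids.setdefault(item, set()).add(ident)
        # Resolve B's unknowns by borrowing the value A holds for a sibling id.
        for ident in b:
            for ids in item_to_ids.values():
                if ident in ids:
                    for sibling in ids:
                        if sibling in a:
                            b[ident] = a[sibling]
        print(b)                 # {3: 1}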

    Read the article

  • Extract wrong data from a frame in C?

    - by ipkiss
    I am writing a program that reads the data from the serial port on Linux. The data are sent by another device with the following frame format:
        | Start | Command | Data           | CRC | End  |
        | 0x02  | 0x41    | (0-127 octets) |     | 0x03 |
    The Data field contains 127 octets as shown; octets 1-2 contain one type of data and octets 3-4 contain another. I need to get these data. Because in C one byte can only hold one character, and in the Start field of the frame the value is 0x02, which means STX, which is 3 characters. So, in order to test my program, on the sender side I construct an array as the frame formatted above, like:
        char frame[254];
        frame[0] = 0x02; // starting field
        frame[1] = 0x41; // command field, which is character 'A'
        // ..so on..
    And then, on the receiver side, I take out the fields like:
        char result[254];
        // read data
        read(result);
        printf("command = %c", result[1]); // get the command field of the frame
        // get other fields' values
    The command field value (result[1]) is not character 'A'. I think this is because the first field value of the frame is 0x02 (STX), occupying the first 3 places in the array frame and leading to the wrong results on the receiver side. How can I correct the issue, or am I doing something wrong at the sender side? Thanks all.
    Related questions:
    http://stackoverflow.com/questions/2500567/parse-and-read-data-frame-in-c
    http://stackoverflow.com/questions/2531779/clear-data-at-serial-port-in-linux-in-c
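    As an aside, STX is simply the name of the single byte value 0x02, not a three-character string. A small Python sketch of the frame layout described above (field offsets assumed from the question, for illustration only):
        # Sketch of the frame described above; offsets are assumed from the
        # question (Start, Command, Data, CRC, End) and are illustrative only.
        frame = bytearray(131)
        frame[0] = 0x02            # Start: STX is one byte, not the three letters "STX"
        frame[1] = 0x41            # Command: the byte for the character 'A'
        frame[2:129] = bytes(127)  # Data: 127 octets
        frame[129] = 0x00          # CRC placeholder (the scheme is not given above)
        frame[130] = 0x03          # End: ETX, also a single byte
        command = frame[1]
        print(chr(command))        # prints 'A'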

    Read the article

  • Desktop.getDesktop().browse(uri); will open web page on server or client side?

    - by Milan
    Hello everybody, I have a JSF application and when a user clicks a button I want to open a web page. Desktop.getDesktop().browse(uri); probably opens the web page on the server side; how do I do it on the client side? When I try Desktop.getDesktop().browse(uri); it works, but maybe that's because I open the JSF application on localhost, so I don't know if the opened URI is on the server side or the client side. In the specification for getDesktop() it's written: "getDesktop() Returns the Desktop instance of the current browser context." Thanks!

    Read the article

  • Best VNC client for remote desktop assistance?

    - by e.m.fields
    A poll on the best VNC / remote desktop software for assisting others on Windows/Mac machines from Ubuntu. I've heard good things about TeamViewer and Fog Creek Copilot, but I'm wondering if the included GNOME Vinagre VNC client is good enough for this. To specify, I'm looking for the best option based on:
    1. Simplest ease of use for the client to download/use on their end.
    2. See #1.
    3. Works cross-platform.
    4. I am able to control the client's mouse and/or keyboard from the remote machine.

    Read the article

  • ldap client cannot contact ldap server

    - by Van
    I have followed these instructions: https://help.ubuntu.com/12.04/serverguide/openldap-server.html#openldap-auth-config
    The LDAP server works fine; I can log into it using an LDAP account. However, I configured another Ubuntu 12.04 server as an LDAP client for authentication, but I cannot contact the server. Here is the error, on the client:
        # ldapsearch -Q -LLL -Y EXTERNAL -H ldapi://ldap01.domain.local -b cn=config dn
        ldap_sasl_interactive_bind_s: Can't contact LDAP server (-1)
    The server can receive requests. On the client:
        # telnet ldap01.domain.local 389
        Trying 10.3.17.10...
        Connected to sisn01.domain.local.
        Escape character is '^]'.
    On the client:
        # ldapsearch -x -h ldap01.domain.local -b cn=config dn
        # extended LDIF
        #
        # LDAPv3
        # base <cn=config> with scope subtree
        # filter: (objectclass=*)
        # requesting: dn
        #
        # search result
        search: 2
        result: 32 No such object
        # numResponses: 1
    On the server:
        # ps aux | grep slapd
        openldap 3759 0.0 0.2 564820 8228 ? Ssl 08:39 0:00 /usr/sbin/slapd -h ldap:/// ldapi:/// -g openldap -u openldap -F /etc/ldap/slapd.d
    I suspect I am missing a configuration parameter either on the server or on the client; I just cannot figure out what. Any help here would be appreciated.

    Read the article

  • Join the Dark Side of Visual Studio 2010

    - by InfinitiesLoop
    Hard to believe it’s been so long, but it was almost 4 years ago when I published Join the Dark Side of Visual Studio . That was when a lot of people were still using VS2003, and importing and exporting environment settings required a custom add-in, VSStyler, which has since fallen off the planet and is hard to find (link, anyone? Let me know). Three versions of VS later, and I’m still using and loving the dark side. Pleased, I am (haha). In fact, that article for one reason or another is still one...(read more)

    Read the article

  • Fraud Detection with the SQL Server Suite Part 1

    - by Dejan Sarka
    While working on different fraud detection projects, I developed my own approach to the solution for this problem. In my PASS Summit 2013 session I am introducing this approach. I also wrote a whitepaper on the same topic, which was generously reviewed by my friend Matija Lah. In order to spread this knowledge faster, I am starting a series of blog posts which will at the end make the whole whitepaper. Abstract With the massive usage of credit cards and web applications for banking and payment processing, the number of fraudulent transactions is growing rapidly and on a global scale. Several fraud detection algorithms are available within a variety of different products. In this paper, we focus on using the Microsoft SQL Server suite for this purpose. In addition, we will explain our original approach to solving the problem by introducing a continuous learning procedure. Our preferred type of service is mentoring; it allows us to perform the work and consulting together with transferring the knowledge onto the customer, thus making it possible for a customer to continue to learn independently. This paper is based on practical experience with different projects covering online banking and credit card usage. Introduction A fraud is a criminal or deceptive activity with the intention of achieving financial or some other gain. Fraud can appear in multiple business areas. You can find a detailed overview of the business domains where fraud can take place in Sahin Y., & Duman E. (2011), Detecting Credit Card Fraud by Decision Trees and Support Vector Machines, Proceedings of the International MultiConference of Engineers and Computer Scientists 2011 Vol 1. Hong Kong: IMECS. Dealing with frauds includes fraud prevention and fraud detection. Fraud prevention is a proactive mechanism, which tries to disable frauds by using previous knowledge. Fraud detection is a reactive mechanism with the goal of detecting suspicious behavior when a fraudster surpasses the fraud prevention mechanism. A fraud detection mechanism checks every transaction and assigns a weight in terms of probability between 0 and 1 that represents a score for evaluating whether a transaction is fraudulent or not. A fraud detection mechanism cannot detect frauds with a probability of 100%; therefore, manual transaction checking must also be available. With fraud detection, this manual part can focus on the most suspicious transactions. This way, an unchanged number of supervisors can detect significantly more frauds than could be achieved with traditional methods of selecting which transactions to check, for example with random sampling. There are two principal data mining techniques available both in general data mining as well as in specific fraud detection techniques: supervised or directed and unsupervised or undirected. Supervised techniques or data mining models use previous knowledge. Typically, existing transactions are marked with a flag denoting whether a particular transaction is fraudulent or not. Customers at some point in time do report frauds, and the transactional system should be capable of accepting such a flag. Supervised data mining algorithms try to explain the value of this flag by using different input variables. When the patterns and rules that lead to frauds are learned through the model training process, they can be used for prediction of the fraud flag on new incoming transactions. 
    Unsupervised techniques analyze data without prior knowledge, without the fraud flag; they try to find transactions which do not resemble other transactions, i.e. outliers. In both cases, there should be more frauds in the data set selected for checking by using the data mining knowledge compared to selecting the data set with simpler methods; this is known as the lift of a model. Typically, we compare the lift with random sampling. The supervised methods typically give a much better lift than the unsupervised ones. However, we must use the unsupervised ones when we do not have any previous knowledge. Furthermore, unsupervised methods are useful for controlling whether the supervised models are still efficient. Accuracy of the predictions drops over time. Patterns of credit card usage, for example, change over time. In addition, fraudsters continuously learn as well. Therefore, it is important to check the efficiency of the predictive models with the undirected ones. When the difference between the lift of the supervised models and the lift of the unsupervised models drops, it is time to refine the supervised models. However, the unsupervised models can become obsolete as well. It is also important to measure the overall efficiency of both supervised and unsupervised models over time. We can compare the number of predicted frauds with the total number of frauds that include predicted and reported occurrences. For measuring behavior across time, specific analytical databases called data warehouses (DW) and on-line analytical processing (OLAP) systems can be employed. By controlling the supervised models with unsupervised ones and by using an OLAP system or DW reports to control both, a continuous learning infrastructure can be established.
    There are many difficulties in developing a fraud detection system. As has already been mentioned, fraudsters continuously learn, and the patterns change. The exchange of experiences and ideas can be very limited due to privacy concerns. In addition, both data sets and results might be censored, as the companies generally do not want to publicly expose actual fraudulent behaviors. Therefore it can be quite difficult if not impossible to cross-evaluate the models using data from different companies and different business areas. This fact stresses the importance of continuous learning even more. Finally, the number of frauds in the total number of transactions is small; typically much less than 1% of transactions are fraudulent. Some predictive data mining algorithms do not give good results when the target state is represented with a very low frequency. Data preparation techniques like oversampling and undersampling can help overcome the shortcomings of many algorithms.
    The SQL Server suite includes all of the software required to create, deploy and maintain a fraud detection infrastructure. The Database Engine is the relational database management system (RDBMS), which supports all activity needed for data preparation and for data warehouses. SQL Server Analysis Services (SSAS) supports OLAP and data mining (in version 2012, you need to install SSAS in multidimensional and data mining mode; this was the only mode in previous versions of SSAS, while SSAS 2012 also supports the tabular mode, which does not include data mining). Additional products from the suite can be useful as well. SQL Server Integration Services (SSIS) is a tool for developing extract-transform-load (ETL) applications.
    SSIS is typically used for loading a DW, and in addition, it can use SSAS data mining models for building intelligent data flows. SQL Server Reporting Services (SSRS) is useful for presenting the results in a variety of reports. Data Quality Services (DQS) eases the occasional data cleansing process by maintaining a knowledge base. Master Data Services is an application that helps companies maintain a central, authoritative source of their master data, i.e. the data most important to any organization. For an overview of the SQL Server business intelligence (BI) part of the suite that includes the Database Engine, SSAS and SSRS, please refer to Veerman E., Lachev T., & Sarka D. (2009). MCTS Self-Paced Training Kit (Exam 70-448): Microsoft® SQL Server® 2008 Business Intelligence Development and Maintenance. MS Press. For an overview of the enterprise information management (EIM) part that includes SSIS, DQS and MDS, please refer to Sarka D., Lah M., & Jerkic G. (2012). Training Kit (Exam 70-463): Implementing a Data Warehouse with Microsoft® SQL Server® 2012. O'Reilly. For details about SSAS data mining, please refer to MacLennan J., Tang Z., & Crivat B. (2009). Data Mining with Microsoft SQL Server 2008. Wiley.
    SQL Server Data Mining Add-ins for Office, a free download for Office versions 2007, 2010 and 2013, bring the power of data mining to Excel, enabling advanced analytics in Excel. Together with PowerPivot for Excel, which is also freely downloadable for Excel 2010, is already included in Excel 2013 and brings OLAP functionalities directly into Excel, they make it possible for an advanced analyst to build a complete learning infrastructure using a familiar tool. This way, many more people, including employees in subsidiaries, can contribute to the learning process by examining local transactions and quickly identifying new patterns.
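    As a rough illustration of the lift measure discussed above (a sketch with made-up numbers, not taken from the whitepaper): if the model assigns each transaction a fraud score between 0 and 1, the lift at the top k% is the fraud rate among the top-scored transactions divided by the overall fraud rate, which is what random sampling would find on average.
        # Illustrative sketch of model lift (made-up numbers, not from the whitepaper).
        def lift_at_top(scores, labels, fraction=0.01):
            n_top = max(1, int(len(scores) * fraction))
            ranked = sorted(zip(scores, labels), key=lambda pair: pair[0], reverse=True)
            top_rate = sum(label for _, label in ranked[:n_top]) / n_top
            base_rate = sum(labels) / len(labels)   # what random sampling finds on average
            return top_rate / base_rate if base_rate else float("nan")
        scores = [0.95, 0.90, 0.20, 0.10, 0.05, 0.04, 0.03, 0.02, 0.01, 0.01]
        labels = [1, 0, 0, 0, 0, 0, 0, 0, 0, 1]     # 1 = reported fraud
        print(lift_at_top(scores, labels, fraction=0.2))   # 2.5: the top 20% holds half the frauds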

    Read the article

  • Oracle Data Integrator at Oracle OpenWorld 2012: Demonstrations

    - by Irem Radzik
    By Mike Eisterer. Oracle OpenWorld is just a few days away and we look forward to showing Oracle Data Integrator's comprehensive data integration platform, which delivers critical data integration requirements: from high-volume, high-performance batch loads, to event-driven, trickle-feed integration processes, to SOA-enabled data services. Several Oracle Data Integrator demonstrations will be available October 1st through the 3rd:
    - Oracle Data Integrator and Oracle GoldenGate for Oracle Applications, in Moscone South, Right - S-240
    - Oracle Data Integrator and Service Integration, in Moscone South, Right - S-235
    - Oracle Data Integrator for Big Data, in Moscone South, Right - S-236
    - Oracle Data Integrator for Enterprise Data Warehousing, in Moscone South, Right - S-238
    Additional information about OOW 2012 may be found for each of these demonstrations. If you are not able to attend OpenWorld, please check out our latest resources for Data Integration.

    Read the article

  • Which algorithms/data structures should I "recognize" and know by name?

    - by Earlz
    I'd like to consider myself a fairly experienced programmer. I've been programming for over 5 years now. My weak point, though, is terminology. I'm self-taught, so while I know how to program, I don't know some of the more formal aspects of computer science. So, what are practical algorithms/data structures that I should recognize and know by name?
    Note, I'm not asking for a book recommendation about implementing algorithms. I don't care about implementing them, I just want to be able to recognize when an algorithm/data structure would be a good solution to a problem. I'm asking more for a list of algorithms/data structures that I should "recognize". For instance, I know the solution to a problem like this: You manage a set of lockers labeled 0-999. People come to you to rent a locker and then come back to return the locker key. How would you build a piece of software to manage knowing which lockers are free and which are in use? The solution would be a queue or a stack.
    What I'm looking for are things like "in what situation should a B-Tree be used", "what search algorithm should be used here", etc. And maybe a quick introduction to how the more complex (but commonly used) data structures/algorithms work. I tried looking at Wikipedia's list of data structures and algorithms but I think that's a bit overkill. So I'm looking more for: what are the essential things I should recognize?
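    For the locker example above, a minimal sketch of the stack-based answer (in Python, with invented names, for illustration only): keep the free locker numbers on a stack and pop/push them as keys are rented and returned.
        # Minimal sketch of the locker example: a stack of free locker numbers.
        class LockerDesk:
            def __init__(self, count=1000):
                self.free = list(range(count - 1, -1, -1))  # lockers 0-999, 0 on top
                self.rented = set()
            def rent(self):
                if not self.free:
                    raise RuntimeError("no lockers available")
                locker = self.free.pop()     # O(1): hand out whichever number is on top
                self.rented.add(locker)
                return locker
            def give_back(self, locker):
                self.rented.remove(locker)   # KeyError if that key was never rented
                self.free.append(locker)     # O(1): push the number back on the stack
        desk = LockerDesk()
        number = desk.rent()                 # -> 0
        desk.give_back(number)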

    Read the article

  • Understanding Data Science: Recent Studies

    - by Joe Lamantia
    If you need a deeper understanding of data science than Drew Conway's popular Venn diagram model, or Josh Wills' tongue-in-cheek characterization, "Data Scientist (n.): Person who is better at statistics than any software engineer and better at software engineering than any statistician," can provide, two relatively recent studies are worth reading. 'Analyzing the Analyzers,' an O'Reilly e-book by Harlan Harris, Sean Patrick Murphy, and Marck Vaisman, suggests four distinct types of data scientists -- effectively personas, in a design sense -- based on analysis of self-identified skills among practitioners. The scenario format dramatizes the different personas, making what could be a dry statistical readout of survey data more engaging. The survey-only nature of the data, the restriction of scope to just skills, and the suggested models of skill-profiles make this feel like the sort of exercise that data scientists undertake as an everyday task: collecting data, analyzing it using a mix of statistical techniques, and sharing the model that emerges from the data mining exercise. That's not an indictment, simply an observation about the consistent feel of the effort as a product of data scientists, about data science. And the paper 'Enterprise Data Analysis and Visualization: An Interview Study' by researchers Sean Kandel, Andreas Paepcke, Joseph Hellerstein, and Jeffery Heer considers data science within the larger context of industrial data analysis, examining analytical workflows, skills, and the challenges common to enterprise analysis efforts, and identifying three archetypes of data scientist. As an interview-based study, the data the researchers collected is richer, and there's correspondingly greater depth in the synthesis. The scope of the study included a broader set of roles than data scientist (enterprise analysts) and involved questions of workflow and organizational context for analytical efforts in general. I'd suggest this is useful as a primer on analytical work and workers in enterprise settings for those who need a baseline understanding; it also offers some genuinely interesting nuggets for those already familiar with discovery work. We've undertaken a considerable amount of research into discovery, analytical work/ers, and data science over the past three years -- part of our programmatic approach to laying a foundation for product strategy and highlighting innovation opportunities -- and both studies complement and confirm much of the direct research into data science that we conducted. There were a few important differences in our findings, which I'll share and discuss in upcoming posts.

    Read the article

  • Java: share data between threads

    - by ayush
    I have a Java process that reads data from a socket server, so I have a BufferedReader and a PrintWriter object corresponding to that socket. Now, in the same Java process, I have a multithreaded Java server that accepts client connections. I want all the clients that I accept to be able to read data from the BufferedReader object mentioned above (so that they can multiplex the data). How do I make these individual client threads read the data from the single BufferedReader object? Sorry for the confusion.
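    The usual pattern for this is a fan-out: one reader thread owns the shared stream and copies each line into a queue per client thread. The sketch below shows the idea in Python (queue.Queue standing in for Java's BlockingQueue, names invented for illustration); it illustrates the pattern only, not the asker's code.
        # One reader thread owns the shared stream; each client thread gets its
        # own queue (queue.Queue here, a BlockingQueue in Java) and reads from that.
        import queue
        import threading
        client_queues = []                   # one queue per connected client thread
        queues_lock = threading.Lock()
        def reader_loop(stream):
            # Only this thread ever touches the shared stream (the BufferedReader).
            for line in stream:
                with queues_lock:
                    for q in client_queues:
                        q.put(line)          # every client receives its own copy
        def client_loop(q):
            while True:
                line = q.get()               # blocks until the reader publishes a line
                # ... write `line` to this client's socket ...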

    Read the article

  • IOMEGA 500GB hard disk data recovery

    - by Vineeth
    In November last year I bought an IOMEGA 500GB Prestige hard disk. Yesterday, unfortunately, the hard disk fell off my table. Since that incident, when I connect the disk, Windows asks me to format it before use, but I haven't formatted it yet. That hard disk holds about 320GB of data. I have tried every way I can think of to access the disk, including from DOS; it shows "data error (cyclic redundancy check)". I have a 3-year warranty. Will I be covered under warranty if I report this issue to IOMEGA? Can I get my data back?

    Read the article

  • Noob-Friendly Guides to WSGI?

    - by Johnny McKenzie
    Hello, world! I have recently been delving into server-side web development with Python, and I have hit a brick wall; you see, I know little about server-side code and HTTP (other than the very basics with PHP, shudder), and all of the docs for WSGI that I have found seem to be for people already well established in the field. Are there any noob-friendly guides for server-side scripting (the theory of it), or on WSGI, out there? HTTP guides would be helpful, and video tutorials are also greatly appreciated. Thanks in advance.
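    For reference, the whole WSGI contract is small enough to see in one screen. Below is a minimal, standard-library-only sketch (names and port chosen arbitrarily) of the application(environ, start_response) handshake that most WSGI docs assume you already know:
        # A minimal WSGI application: the server calls `application` once per
        # request with the request environment and a start_response callback.
        from wsgiref.simple_server import make_server
        def application(environ, start_response):
            body = ("Hello from WSGI, you asked for " + environ["PATH_INFO"]).encode("utf-8")
            start_response("200 OK", [("Content-Type", "text/plain"),
                                      ("Content-Length", str(len(body)))])
            return [body]                    # an iterable of byte strings
        if __name__ == "__main__":
            with make_server("", 8000, application) as httpd:
                httpd.serve_forever()        # then browse to http://localhost:8000/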

    Read the article

  • SP1 of RadControls for WinForms Q1 2010 released, featuring VS2010 and Client Profile support

    As always, Telerik's plans were closely aligned with Microsoft's release schedule, and we were dedicated to providing VS2010 support even before Visual Studio 2010 was officially launched. Now that the first VS2010 launch event is over, here comes the first of many Telerik Service Packs to support VS2010. This RadControls for WinForms release is the first to provide support for the Client Profile, introduced with .NET 3.5 and now the default when starting new Windows Forms projects with VS2010. The Client Profile is a smaller version of the .NET Framework that includes only the assemblies needed for deploying client-based applications, which in turn reduces the size of the application. Basically, the design-time classes are excluded from the Client Profile (CP), because they are only needed for designing and not for running an application. In Q1 2010 SP1 we have moved the designer classes from the run-time assemblies to a new dedicated assembly (Telerik.WinControls.UI.Design.dll), ...

    Read the article

  • Writing an ASP.Net Web based TFS Client

    - by Glav
    So one of the things I needed to do was write an ASP.Net MVC based application for our senior execs to manage a set of arbitrary attributes against stories, bugs etc. to be able to attribute whether the item was related to Research and Development, and if so, what kind. We are using TFS Azure and don't have the option of custom templates. I have decided on using a string-based field within the template that is not very visible and which we don't use, to write a small set of custom attributes which will determine the research and development association. However, this string munging on the field is not very user friendly, so we need a simple tool that can display attributes against items in a simple dropdown list or something similar. Enter a custom web app that accesses our TFS items in Azure (note: we are also using Visual Studio 2012).
    Now, TFS Azure uses your Live ID and it is not really possible to easily do this in a server-based app where no interaction is available. Even if you capture the Live ID credentials yourself and try to submit them to TFS Azure, it won't work. Bottom line is that it is not straightforward nor obvious what you have to do. In fact, it is a real pain to find, and there are some answers out there which don't appear to be answers at all given they didn't work in my scenario. So for anyone else who wants to do this, here is a simple breakdown of what you have to do:
    1. Go here and get the "TFS Service Credential Viewer". Install it, run it, connect to your TFS instance in Azure and create a service account. Note the username and password exactly as it presents them to you. This is the magic identity that will allow unattended, programmatic access. Without this step, don't bother trying to do anything else.
    2. In your MVC app, reference the following assemblies from "C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\ReferenceAssemblies\v2.0":
        Microsoft.TeamFoundation.Client.dll
        Microsoft.TeamFoundation.Common.dll
        Microsoft.TeamFoundation.VersionControl.Client.dll
        Microsoft.TeamFoundation.VersionControl.Common.dll
        Microsoft.TeamFoundation.WorkItemTracking.Client.DataStoreLoader.dll
        Microsoft.TeamFoundation.WorkItemTracking.Client.dll
        Microsoft.TeamFoundation.WorkItemTracking.Common.dll
    3. If hosting this in Internet Information Server, you will need to enable 32-bit support for the application pool this app runs under.
    4. You also have to allow the TFS client assemblies to store a cache of files on your system. If you don't do this, you will authenticate fine, but then get an exception saying that it is unable to access the cache at some directory path when you query work items. You can set this up by adding the following to your web.config, in the <appSettings> element as shown below:
        <appSettings>
          <!-- Add reference to TFS Client Cache -->
          <add key="WorkItemTrackingCacheRoot" value="C:\windows\temp" />
        </appSettings>
    With all that in place, you can write the following code:
        var token = new Microsoft.TeamFoundation.Client.SimpleWebTokenCredential("{your-service-account-name}", "{your-service-acct-password}");
        var clientCreds = new Microsoft.TeamFoundation.Client.TfsClientCredentials(token);
        var currentCollection = new TfsTeamProjectCollection(new Uri("https://{yourdomain}.visualstudio.com/defaultcollection"), clientCreds);
        currentCollection.EnsureAuthenticated();
    In the above code, note that the URL contains "defaultcollection" at the end. Obviously replace {yourdomain} with whatever is defined for your TFS in Azure instance. In addition, make sure the service user account and password that were generated in the first step are substituted in here. Note: if something is not right, the EnsureAuthenticated() call will throw an exception with a message saying you are not authorised. If you forget the "defaultcollection" on the URL, it will still fail, but with a message saying you are not authorised; that is, a similar but different exception message. And that is it. You can then query the collection using something like:
        var service = currentCollection.GetService<WorkItemStore>();
        var proj = service.Projects[0];
        var allQueries = proj.StoredQueries;
        for (int qcnt = 0; qcnt < allQueries.Count; qcnt++)
        {
            var query = allQueries[qcnt];
            var queryDesc = string.Format("Query found named: {0}", query.Name);
        }
    You get the idea. If you search around, you will find references to the ServiceIdentityCredentialProvider which is referenced in this article. I had no luck with this method and it all looked too hard since it required an extra KB article and other magic sauce. So I hope that helps. This article certainly would have helped me save a boat load of time and frustration.

    Read the article

  • how to find a good data center?

    - by drewda
    At my start-up, we're getting to the point where we should be hosting our servers at a data center. I'd appreciate any tips and tricks y'all can offer on finding a reputable place to colocate our racks. Are there any Web sites with customer reviews of data centers or should I just be asking around at techie events? Are unlimited bandwidth plans a gimmick or becoming the norm? Is it worth establishing a redundant set of machines at a second data center from Day One? Or just do offsite back-ups? Thanks for your suggestions.

    Read the article

  • Appending column to a data frame - R

    - by darkie15
    Is it possible to append a column to a data frame in the following scenario?
        dfWithData <- data.frame(start=c(1,2,3), end=c(11,22,33))
        dfBlank <- data.frame()
    ...how to append column start from dfWithData to dfBlank? It looks like the data should be added when the data frame is being initialized. I can do this:
        dfBlank <- data.frame(dfWithData[1])
    but I am more interested in whether it is possible to append columns to an empty (but initialized) data frame.

    Read the article

  • Protecting Consolidated Data on Engineered Systems

    - by Steve Enevold
    In this time of reduced budgets and cost-cutting measures in Federal, State and Local governments, the requirement to provide services continues to grow. Many agencies are looking at consolidating their infrastructure to reduce cost and meet budget goals. Oracle's engineered systems are ideal platforms for accomplishing these goals. These systems provide unparalleled performance that is ideal for running applications and databases that traditionally run on separate dedicated environments. However, putting multiple critical applications and databases in a single architecture makes security more critical. You are putting a concentrated set of sensitive data on a single system, making it a more tempting target. The environments were previously separated by iron, so now you need to provide assurance that one group, department, or application's information is not visible to other personnel or applications resident in the Exadata system. Administration of the environments requires formal separation of duties so an administrator of one application environment cannot view or negatively impact others. Also, these systems need to be in protected environments just like other critical production servers. They should be in a data center protected by physical controls, network firewalls, intrusion detection and prevention, etc.
    Exadata also provides unique security benefits, including a reduced attack surface achieved by minimizing packages and services to only those required. In addition to reducing the possible system areas someone may attempt to infiltrate, Exadata has the following features:
    1. InfiniBand, which functions as a secure private backplane
    2. IPTables to perform stateful packet inspection for all nodes; Cellwall implements firewall services on each cell using IPTables
    3. Hardware-accelerated encryption for data at rest on storage cells
    Oracle is uniquely positioned to provide the security necessary for implementing Exadata because security has been a core focus since the company's beginning. In addition to the security capabilities inherent in Exadata, Oracle security products are all certified to run in an Exadata environment.
    Database Vault: Oracle Database Vault helps organizations increase the security of existing applications and address regulatory mandates that call for separation of duties, least privilege and other preventive controls to ensure data integrity and data privacy. Oracle Database Vault proactively protects application data stored in the Oracle database from being accessed by privileged database users. A unique feature of Database Vault is the ability to segregate administrative tasks, including when a command can be executed, or that the DBA can manage the health of the database and objects but may not see the data.
    Advanced Security: Oracle Advanced Security helps organizations comply with privacy and regulatory mandates by transparently encrypting all application data or specific sensitive columns, such as credit cards, social security numbers, or personally identifiable information (PII). By encrypting data at rest and whenever it leaves the database over the network or via backups, Oracle Advanced Security provides the most cost-effective solution for comprehensive data protection.
    Label Security: Oracle Label Security is a powerful and easy-to-use tool for classifying data and mediating access to data based on its classification. Designed to meet public-sector requirements for multi-level security and mandatory access control, Oracle Label Security provides a flexible framework that both government and commercial entities worldwide can use to manage access to data on a "need to know" basis in order to protect data privacy and achieve regulatory compliance.
    Data Masking: Data Masking reduces the threat of someone in the development org taking data that has been copied from production to the development environment for testing, upgrades, etc. by irreversibly replacing the original sensitive data with fictitious data, so that production data can be shared safely with IT developers or offshore business partners.
    Audit Vault and Database Firewall: Oracle Audit Vault and Database Firewall serves as a critical detective and preventive control across multiple operating systems and database platforms to protect against the abuse of legitimate access to databases, which is responsible for almost all data breaches and cyber attacks.
    Consolidation, cost savings, and performance can now be achieved without sacrificing security. The combination of built-in protection and Oracle's industry-leading data protection solutions makes Exadata an ideal platform for Federal, State, and local governments and agencies.

    Read the article
