Search Results

Search found 6342 results on 254 pages for 'behavior'.


  • Learnings from trying to write better software: Loud errors from the very start

    - by theo.spears
    Microsoft made a very small number of backwards-incompatible changes between .NET 1.1 and 2.0, because they wanted to make it as easy and safe as possible to port applications to the new runtime. (Here's a list.) However, one thing they did change was what happens when a background thread fails with an unhandled exception: in .NET 1.1 nothing happened, the thread terminated, and the application continued oblivious. Try the same trick in .NET 2.0 and the entire application, including all threads, will rudely terminate.

    There are three reasons for this. Firstly, if a background thread has crashed, it may have left the entire application in an inconsistent state, in a way that will affect other threads. It's better to terminate the entire application than continue and have the application perform actions based on a broken state - for example, take customer orders or write corrupt files to disk. Secondly, during software development it is far better for errors to be loud and obtrusive. Even if you have unit tests and integration tests (and you should), a key part of ensuring software works properly is to actually try using it, both through systematic testing and through the casual use all software gets from its own developers. Subtle errors are easy to miss if you are not doing real work with the application; loud errors are obvious. Thirdly, and most importantly, even if catching and swallowing exceptions indiscriminately doesn't cause any problems in your application, the presence of unexpected exceptions shows you do not fully understand the behavior of your code. The currently released version of your application may be absolutely correct. However, because your mental model of the behavior is wrong, any future change you make to the program could - and probably will - introduce critical errors.

    This applies to more than just exceptions causing threads to exit: any unexpected state should make the application blow up in an un-ignorable way. The worst thing you can do is silently swallow errors and continue. And let's be clear, writing to a log file does not count as blowing up in an un-ignorable way.

    This is all simple as long as the call stack only contains your code, but when your functions start to be called by third-party or .NET framework code, it's surprisingly easy for exceptions to start vanishing. Let's look at two examples.

    1. Windows Forms drag-drop events

    Usually if you throw an exception from a WinForms event handler it will bring up the "application has crashed" dialog with abort and continue options. This is a good default behavior: the error is big and loud, but it is possible for the user to ignore the error and hopefully save their data, if somehow this bug makes it past testing. However, drag and drop are different - throw an exception from one of these handlers and it will just be silently swallowed with no explanation. By the way, it's not just drag and drop events; timer events do it too. You can research how exceptions are treated in different handlers and code appropriately, but the safest and most user-friendly approach is to always catch exceptions in your event handlers and show your own error message. I'll talk about one good approach to handling these exceptions at the end of this post.

    2. SSMS integration for SQL Tab Magic

    A while back I wrote an SSMS add-in called SQL Tab Magic (learn more about the process here). It works by listening to certain SSMS events and remembering which documents are opened and closed.
    I deployed it internally and it was used for a few months by a number of people without problems, so I was reasonably confident in its quality. Before releasing I made a few cleanups, including introducing error reporting. Bam. A few days later I was looking at over 1,000 error reports in my inbox. It turns out I wasn't handling table designers properly. The exceptions were there, but again SSMS was helpfully swallowing them all for me, so I was blissfully unaware. Had I made my errors loud from the start, I would have noticed these issues long before and fixed them.

    Handling exceptions

    Now that you are systematically catching exceptions throughout your application, you need to do something with them. I've tried three options: log them, alert the user, and automatically send them home.

    There are a few good options for logging in .NET. The most widespread is Apache log4net, which provides a very capable and configurable logging framework. There is also NLog, which has a compatible interface, with a greater emphasis on fluent rather than XML configuration.

    Alerting the user serves two purposes. Firstly, it means they understand their action has failed, so they don't just assume it worked (silent file copy failure is a problem if you then delete the originals) or keep waiting for a background task to complete. Secondly, it means the users can report the bug to your support team, and then you can fix it. This means the message you show the user should contain the information you need as a developer to identify and fix the problem. And the user will probably just send you a screenshot of the dialog, so it shouldn't be hidden by scroll bars.

    This leads us to the third option, automatically sending error reports home. By automatic I mean with minimal effort on the part of the user, rather than doing it silently behind their backs. The advantage of this is that you can send back far more detailed and precise information than you can expect a user to include in an email, and by making it easier to report errors, you make it more likely users will do so.

    We do this using a great tool called SmartAssembly (full disclosure: this is a product made by Red Gate). It captures complete stack traces, including the values of all local variables, and then allows the user to send all this information back with a single click. We also capture log files to help understand what led up to the error. We then use the free SmartAssembly Sync for Jira to dedupe these reports and raise them as bugs in our bug tracking system.

    The combined effect of loud errors during development and then automatic error reporting once software is deployed allows us to find and fix more bugs, correct misunderstandings of how our software works, and overall is a key piece in delivering higher quality software. However, it is no substitute for having motivated, cunning testers in the building - and we're looking to hire more of those too.

    If you found this post interesting you should follow me on twitter.
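    As an illustration of the event-handler advice in point 1, here is a minimal WinForms sketch (ImportDroppedFiles is a hypothetical stand-in for application logic, and a real ReportError would feed the logging and reporting pipeline described above rather than only a MessageBox):

    using System;
    using System.Windows.Forms;

    public partial class MainForm : Form
    {
        // Drag-drop (and timer) handlers silently swallow unhandled
        // exceptions, so catch explicitly and surface the error ourselves.
        private void MainForm_DragDrop(object sender, DragEventArgs e)
        {
            try
            {
                ImportDroppedFiles(e.Data); // hypothetical application logic
            }
            catch (Exception ex)
            {
                ReportError(ex);
            }
        }

        private static void ReportError(Exception ex)
        {
            // Loud and un-ignorable: full details, ready for the screenshot
            // the user will inevitably send to support.
            MessageBox.Show(ex.ToString(), "Unexpected error",
                MessageBoxButtons.OK, MessageBoxIcon.Error);
        }

        private void ImportDroppedFiles(IDataObject data) { /* ... */ }
    }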

    Read the article

  • Microsoft JScript runtime error in Visual studio 2010

    - by anirudha
    There are many tools for debugging JavaScript: Visual Studio, Firebug, and several other great plugins among them. Here I show you a solution for "Microsoft JScript runtime error: Object doesn't support this property or method". If you search on Google for how to debug JavaScript in Visual Studio, most results tell you to follow these instructions: go to Internet Options in IE > Advanced tab > Browsing section > uncheck both options "Disable script debugging (Internet Explorer)" and "Disable script debugging (Other)". That information is outdated: in Visual Studio you can debug JavaScript whether these options are checked or unchecked, and you can try these steps in the Express versions of Visual Studio too. I found a little problem: my code [based on a jQuery plugin] tries to call some functions, some of which are not implemented in a given browser, so it falls back to others - that's fine, and it works in every browser. But when Visual Studio finds a function being called that is not implemented in the browser I use to debug, it tells me "Microsoft JScript runtime error: Object doesn't support this property or method", and it tells me again every time I refresh [F5] in the browser. So you see the error window first even when the code is not buggy, every time, before you see the page you actually want. This behavior is harsh. It's no problem if you don't commonly use IE, but when you really want to debug, you get some pain from it. Well, I have a patch for that if you really like debugging in Visual Studio with IE: if you are sure the code is not buggy, or you really don't want to see that window, switch IE to compatibility mode while debugging, and it will never again force you to see the window first. How to do that: after you debug the project or website in Visual Studio, IE launches automatically. Go to the developer tools by pressing F12; you will see a window where the document mode is IE7 by default, or a browser mode based on your settings. Set the browser mode to any compatibility view, and the next time the error window that tells you "Microsoft JScript runtime error: Object doesn't support this property or method" never comes back.

    Read the article

  • Smoothing found path on grid

    - by Denis Ermolin
    I implemented several approaches, such as A* and potential fields, for my tower defense game. But I want smooth paths. First I tried to find paths on a very fine grid (5x5 pixels per tile), but it is extremely slow. I found a nice video showing an RTS demo where paths are found on a coarse grid but units don't move from cell center to cell center. How do I implement such behavior? Some examples would be great.
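    One common technique that produces this effect is line-of-sight smoothing, sometimes called string pulling: run A* on the coarse grid, then drop every waypoint the unit can reach in a straight line from the last kept one. A minimal sketch (hasLineOfSight is assumed to be a grid raycast, e.g. Bresenham, that reports whether the straight segment crosses a blocked cell):

    using System;
    using System.Collections.Generic;
    using System.Numerics;

    public static class PathSmoother
    {
        // Collapse an A* cell-center path by skipping every waypoint that
        // is still in line of sight of the last kept one.
        public static List<Vector2> Smooth(List<Vector2> path,
            Func<Vector2, Vector2, bool> hasLineOfSight)
        {
            if (path.Count < 3) return new List<Vector2>(path);
            var result = new List<Vector2> { path[0] };
            int anchor = 0;
            for (int i = 2; i < path.Count; i++)
            {
                // Keep the previous waypoint only when the straight line
                // from the last kept point to this one is blocked.
                if (!hasLineOfSight(path[anchor], path[i]))
                {
                    result.Add(path[i - 1]);
                    anchor = i - 1;
                }
            }
            result.Add(path[path.Count - 1]);
            return result;
        }
    }

    Units then steer along the surviving waypoints instead of cell centers, which is what gives the "not center-to-center" movement in the RTS demo.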

    Read the article

  • #DAX Query Plan in SQL Server 2012 #Tabular

    - by Marco Russo (SQLBI)
    The SQL Server Profiler provides a lot of information regarding the internal behavior of DAX queries sent to a BISM Tabular model. As in MDX, in DAX there is a Formula Engine (FE) and a Storage Engine (SE). The SE is usually handled by Vertipaq (unless you are using DirectQuery mode), and the Vertipaq SE Query class of events gives you a SQL-like syntax that represents the query sent to the storage engine. Another interesting class of events is the DAX Query Plan, which contains a couple...(read more)

    Read the article

  • I have permanent connections to Canonical servers, what are they for and how can I turn them off?

    - by Dan Dman
    After the recent upgrade to 12, I notice permanent connections to Canonical servers. Running netstat -tp gives:

    Foreign Address            State       PID/Program name
    mulberry.canonical:http    CLOSE_WAIT  6537/ubuntu-geoip-p
    alkes.canonical.co:http    CLOSE_WAIT  6667/python
    alkes.canonical.co:http    CLOSE_WAIT  6667/python

    Why are there permanent connections and how could I stop this behavior? And if this is intentional, who is responsible? I would like to understand why this was done, because to me it seems like a bad idea.

    Read the article

  • Unexpected deletion of directory

    - by Anubhav Chaturvedi
    I find that somehow the Downloads directory in /home/user/ has been deleted. Running $ locate Downloads shows the directory still existing, but without any of the files within. When I manually create a directory named Downloads, $ locate Downloads shows the directory as well as the files the original folder had. There is no hidden Downloads folder, nor can I access the folder or its files. This behavior is quite unexpected... please help.

    Read the article

  • Integration Patterns with Azure Service Bus Relay, Part 3: Anonymous partial-trust consumer

    - by Elton Stoneman
    This is the third in the IPASBR series, see also:

    - Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service
    - Integration Patterns with Azure Service Bus Relay, Part 2: Anonymous full-trust .NET consumer

    As the patterns get further from the simple .NET full-trust consumer, all that changes is the communication protocol and the authentication mechanism. In Part 3 the scenario is that we still have a secure .NET environment consuming our service, so we can store shared keys securely, but the runtime environment is locked down so we can't use Microsoft.ServiceBus to get the nice WCF relay bindings. To support this we will expose a RESTful endpoint through the Azure Service Bus, and require the consumer to send a security token with each HTTP service request.

    Pattern applicability

    This is a good fit for scenarios where:

    - the runtime environment is secure enough to keep shared secrets
    - the consumer can execute custom code, including building HTTP requests with custom headers
    - the consumer cannot use the Azure SDK assemblies
    - the service may need to know who is consuming it
    - the service does not need to know who the end-user is

    Note there isn't actually a .NET requirement here. By exposing the service in a REST endpoint, anything that can talk HTTP can be a consumer. We'll authenticate through ACS, which also gives us REST endpoints, so the service is still accessed securely. Our real-world example would be a hosted cloud app, where we have enough room in the app's customisation to keep the shared secret somewhere safe and to hook in some HTTP calls. We will be flowing an identity through to the on-premise service now, but it will be the service identity given to the consuming app - the end user's identity isn't flowed through yet.

    In this post, we'll consume the service from Part 1 in ASP.NET using the WebHttpRelayBinding. The code for Part 3 (+ Part 1) is on GitHub here: IPASBR Part 3.

    Authenticating and authorizing with ACS

    We'll follow the previous examples and add a new service identity for the namespace in ACS, so we can separate permissions for different consumers (see walkthrough in Part 1). I've named the identity partialTrustConsumer. We'll be authenticating against ACS with an explicit HTTP call, so we need a password credential rather than a symmetric key - for a nice secure option, generate a symmetric key, copy it to the clipboard, then change the type to password and paste in the key.

    We then need to do the same as in Part 2: add a rule to map the incoming identity claim to an outgoing authorization claim that allows the identity to send messages to Service Bus:

    Issuer: Access Control Service
    Input claim type: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier
    Input claim value: partialTrustConsumer
    Output claim type: net.windows.servicebus.action
    Output claim value: Send

    As with Part 2, this sets up a service identity which can send messages into Service Bus, but cannot register itself as a listener, or manage the namespace.

    RESTfully exposing the on-premise service through Azure Service Bus Relay

    The Part 3 sample code is ready to go; just put your Azure details into Solution Items\AzureConnectionDetails.xml and "Run Custom Tool" on the .tt files. But to do it yourself is very simple. We already have a WebGet attribute in the service for locally making REST calls, so we are just going to add a new endpoint which uses the WebHttpRelayBinding to relay that service through Azure.
    It's as easy as adding this endpoint to Web.config for the service:

    <endpoint address="https://sixeyed-ipasbr.servicebus.windows.net/rest"
              binding="webHttpRelayBinding"
              contract="Sixeyed.Ipasbr.Services.IFormatService"
              behaviorConfiguration="SharedSecret">
    </endpoint>

    - and adding the webHttp attribute in your endpoint behavior:

    <behavior name="SharedSecret">
      <webHttp/>
      <transportClientEndpointBehavior credentialType="SharedSecret">
        <clientCredentials>
          <sharedSecret issuerName="serviceProvider"
                        issuerSecret="gl0xaVmlebKKJUAnpripKhr8YnLf9Neaf6LR53N8uGs="/>
        </clientCredentials>
      </transportClientEndpointBehavior>
    </behavior>

    Where's my WSDL?

    The metadata story for REST is a bit less automated. In our local webHttp endpoint we've enabled WCF's built-in help, so if you navigate to http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc/rest/help you'll see the URI format for making a GET request to the service. The format is the same over Azure, so this is where you'll be connecting:

    https://[your-namespace].servicebus.windows.net/rest/reverse?string=abc123

    Build the service with the new endpoint, open that in a browser, and you'll get an XML version of an HTTP status code - a 401 with an error message stating that you haven't provided an authorization header:

    <?xml version="1.0"?><Error><Code>401</Code><Detail>MissingToken: The request contains no authorization header..TrackingId:4cb53408-646b-4163-87b9-bc2b20cdfb75_5,TimeStamp:10/3/2012 8:34:07 PM</Detail></Error>

    By default, the setup of your Service Bus endpoint as a relying party in ACS expects a Simple Web Token to be presented with each service request, and in the browser we're not passing one, so we can't access the service. Note that this request doesn't get anywhere near your on-premise service; Service Bus only relays requests once they've got the necessary approval from ACS.

    Why didn't the consumer need to get ACS authorization in Part 2? It did, but it was all done behind the scenes in the NetTcpRelayBinding. By specifying our Shared Secret credentials in the consumer, the service call is preceded by a check on ACS to see that the identity provided is a) valid, and b) allowed access to our Service Bus endpoint. By making manual HTTP requests, we need to take care of that ACS check ourselves now. We do that with a simple WebClient call to the ACS endpoint of our service; passing the shared secret credentials, we will get back an SWT:

    var values = new System.Collections.Specialized.NameValueCollection();
    values.Add("wrap_name", "partialTrustConsumer"); // service identity name
    values.Add("wrap_password", "suCei7AzdXY9toVH+S47C4TVyXO/UUFzu0zZiSCp64Y="); // service identity password
    values.Add("wrap_scope", "http://sixeyed-ipasbr.servicebus.windows.net/"); // this is the realm of the RP in ACS

    var acsClient = new WebClient();
    var responseBytes = acsClient.UploadValues("https://sixeyed-ipasbr-sb.accesscontrol.windows.net/WRAPv0.9/", "POST", values);
    rawToken = System.Text.Encoding.UTF8.GetString(responseBytes);

    With a little manipulation, we then attach the SWT to subsequent REST calls in the authorization header; the token contains the Send claim returned from ACS, so we will be authorized to send messages into Service Bus.
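    The "little manipulation" isn't shown in the excerpt; a sketch of the usual approach from the WRAP samples would be (assuming rawToken holds the form-encoded ACS response, with System.Linq, System.Net and System.Web in scope):

    // The response is form-encoded; the token value is URL-encoded, so it
    // contains no raw '=' and the naive Split is safe.
    string swt = HttpUtility.UrlDecode(
        rawToken.Split('&')
                .Single(v => v.StartsWith("wrap_access_token=", StringComparison.OrdinalIgnoreCase))
                .Split('=')[1]);

    var client = new WebClient();
    client.Headers[HttpRequestHeader.Authorization] =
        string.Format("WRAP access_token=\"{0}\"", swt); // SWT rides in a WRAP authorization header
    var reversed = client.DownloadString(
        "https://sixeyed-ipasbr.servicebus.windows.net/rest/reverse?string=abc123");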
    Running the sample

    Navigate to http://localhost:2028/Sixeyed.Ipasbr.WebHttpClient/Default.cshtml, enter a string and hit Go! - your string will be reversed by your on-premise service, routed through Azure. Using shared secret client credentials in this way means ACS is the identity provider for your service, and the claim which allows Send access to Service Bus is consumed by Service Bus. None of the authentication details make it through to your service, so your service is not aware who the consumer is (MSDN calls this "anonymous authentication").

    Read the article

  • Isn't MVC anti OOP?

    - by m3th0dman
    The main idea behind OOP is to unify data and behavior in a single entity - the object. In procedural programming there is data and separately algorithms modifying the data. In the Model-View-Controller pattern the data and the logic/algorithms are placed in distinct entities, the model and the controller respectively. In an equivalent OOP approach shouldn't the model and the controller be placed in the same logical entity?

    Read the article

  • ASP.NET Web Roles vs ASP.NET Web Applications

    - by kaleidoscope
    The 3 differences are:

    1. References to the Windows Azure specific assemblies: Microsoft.WindowsAzure.Diagnostics, Microsoft.WindowsAzure.ServiceRuntime, and Microsoft.WindowsAzure.StorageClient
    2. Bootstrap code in the WebRole.cs/vb file that starts the DiagnosticMonitor as well as defines a default behavior of recycling the role when a configuration setting change occurs (see the sketch below)
    3. The addition of a trace listener in the web.config file: Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener

    Amit
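    For reference, the bootstrap code in point 2 is approximately the boilerplate the Azure tooling generated at the time - a sketch from memory, not the exact generated file:

    using System.Linq;
    using Microsoft.WindowsAzure.Diagnostics;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WebRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            // Start collecting diagnostics; the setting name is the one
            // the tooling generates by default.
            DiagnosticMonitor.Start("DiagnosticsConnectionString");

            // Default behavior: recycle the role instance whenever a
            // configuration setting changes.
            RoleEnvironment.Changing += (sender, e) =>
            {
                if (e.Changes.Any(c => c is RoleEnvironmentConfigurationSettingChange))
                    e.Cancel = true; // request a restart of this instance
            };

            return base.OnStart();
        }
    }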

    Read the article

  • what's the point of method overloading?

    - by David
    I am following a textbook in which I have just come across method overloading. It briefly describes method overloading as: when the same method name is used with different parameters, it's called method overloading. From what I've learned so far in OOP, if I want different behaviors from an object via methods, I should use different method names that best indicate the behavior - so why should I bother with method overloading in the first place?
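    The usual answer is that overloading is for a single conceptual behavior that accepts different kinds of input, so the caller gets one name to remember instead of several. A hypothetical sketch:

    using System;

    public class Logger
    {
        // One conceptual operation ("log this"), several input shapes.
        // Without overloading these would need names like LogString,
        // LogException, LogFormatted - noise the caller must memorize.
        public void Log(string message) { Console.WriteLine(message); }

        public void Log(Exception ex) { Log(ex.ToString()); }

        public void Log(string format, params object[] args)
        {
            Log(string.Format(format, args));
        }
    }

    If the behaviors genuinely differ, separate names are still the right call; overloads only pay off when every overload does the same thing from the caller's point of view.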

    Read the article

  • Security-related database settings are not restored when a DB is restored

    - by Greg Low
    A question came up today about whether it was a bug that the TRUSTWORTHY database setting isn't restored to its previous value when a database is restored. TRUSTWORTHY is a very powerful setting for a database. By design, it's not restored when a database is. We actually documented this behavior when writing the Upgrade Technical Reference for 2008: http://www.microsoft.com/downloads/en/confirmation.aspx?familyId=66d3e6f5-6902-4fdd-af75-9975aea5bea7&displayLang=en The other settings that are...(read more)

    Read the article

  • Turning your code inside out (functional style) compared to a OO paradigm

    - by Acaz Souza
    I have found this article, Turning Your Code Inside Out, and I want to know how the approach described in the article looks to OO programmers/languages. Is this style of design used in OO languages? What are the downsides and upsides of this approach in an OO language? Update: OO objects have state and behavior; the design explained in the article is stateless. It is not only the Single Responsibility Principle. (If I'm talking nonsense, please explain it to me instead of just downvoting/voting to close.)

    Read the article

  • Games at Work Part 2: Gamification and Enterprise Applications

    - by ultan o'broin
    Gamification and Enterprise Applications

    In part 1 of this article, we explored why people are motivated to play games so much. Now, let's think about what that means for Oracle applications user experience. (Even the coffee is gamified. Acknowledgement @noelruane. Check out the Guardian article Dublin's Frothing with Tech Fever. Game development is big business in Ireland too.)

    Applying game dynamics (gamification) effectively in the enterprise applications space to reflect business objectives is now a hot user experience topic. Consider, for example, how such dynamics could solve applications users' problems such as:

    - Becoming familiar or expert with an application or process
    - Building loyalty, customer satisfaction, and branding relationships
    - Collaborating effectively and populating content in the community
    - Completing tasks or solving problems on time
    - Encouraging teamwork to achieve goals
    - Improving data accuracy and completeness of entry
    - Locating and managing the correct resources or information
    - Managing changes and exceptions
    - Setting and reaching targets, quotas, or objectives

    Games' Incentives, Motivation, and Behavior

    I asked Julian Orr, Senior Usability Engineer in the Oracle Fusion Applications CRM User Experience (UX) team, for his thoughts on what potential gamification might offer Oracle Fusion Applications. Julian pointed to the powerful incentives offered by games as the starting place: "The biggest potential for gamification in enterprise apps is as an intrinsic motivator. Mechanisms include fun, social interaction, teamwork, primal wiring, adrenaline, financial, closed-loop feedback, locus of control, flow state, and so on. But we need to know what works best for a given work situation." For example, in CRM service applications, we might look at the motivations of typical service applications users (figure 1) and then determine how we can 'gamify' those motivations with techniques to optimize the desired work behavior for the role (figure 2).

    (Figures 1 and 2 of the original post illustrate these user motivations and the gamification techniques mapped to them.)

    Involving Our Users

    Online game players are skilled collaborators as well as problem solvers. Erika Webb (@erikanollwebb), Oracle Fusion Applications UX Manager, has run gamification events for Oracle, including one on collaboration and gamification in Oracle online communities that involved Oracle customers and partners. Read more... However, let's be clear: gamifying a user interface that's poorly designed is merely putting the lipstick of gamification on the pig of work. Gamification cannot replace good design and killer content based on understanding how applications users really work and what motivates them.

    So, Let the Games Begin!

    Gamification has tremendous potential for the enterprise application user experience. The Oracle Fusion Applications UX team is innovating fast and hard in this area, researching with our users how gamification can make work more satisfying and enterprises more productive. If you're interested in knowing more about our gamification research, sign up for more information or check out how your company can get involved through the Oracle Usability Advisory Board. Your thoughts? Find those comments.

    Read the article

  • Customizing MFC Document Recovery

    This C++ tutorial demonstrates how MFC 10 delivers on its promise, providing the boilerplate functionality required to build a professional Windows C++ application with minimal effort while allowing .NET developers to customize aspects of MFC behavior.

    Read the article

  • *Un*installing with Ubuntu Software Center (Centre) doesn't work on 64-bit 12.04.1

    - by likethesky
    Not sure if I'm doing something wrong, or if the .deb package I'm installing is broken in some way (I've built it, using NetBeans 7.2), or if indeed this is a bug in Software Center.

    When I install this particular 32-bit .deb on Ubuntu 10.04 LTS - all updates applied - (where it was built), GDebi shows it and has an 'Uninstall' button next to it. So it works fine to uninstall it there, via the GDebi GUI. However, when I install it on 12.04.1 LTS - all updates applied - it installs fine, but then does not show up in Ubuntu Software Center as available to be uninstalled. No combination of searching finds it. However, I can, from the command line, do sudo apt-get purge javafxapplication1 and it finds it and deletes it. The same thing happens when I build a 64-bit .deb and attempt to install it to the same (64-bit AMD) or a different 64-bit Ubuntu 12.04.1 system. So it seems to be isolated to this NetBeans-generated .deb and the 64-bit AMD build (though I haven't tried it on a 32-bit 12.04.1 install yet). These are all on VirtualBox VMs, btw, if that matters.

    Any way to 'clean up' my Software Center and see if it's something I've done to get it in this state? Could this behavior be due to how this particular .deb has been built? (It doesn't have an 'Installed-Size' control field, so I do get the "Package is of bad quality" warning when I install it - which I do by clicking the 'Ignore and install' button.) If you want all the gory details about why this is happening, a bug has been reported against NetBeans for this behavior here: http://javafx-jira.kenai.com/browse/RT-25486

    (EDIT: Just to be clear, the app installs fine, runs fine, all works as intended - I just can't get that 'bad package' message to go away, and now... I also can't uninstall it via Software Center, but rather need to use sudo apt-get purge to uninstall it, after it installs. /END EDIT)

    Thanks for any pointers. I'm happy to report this as a bug against Ubuntu Software Center/Centre too, if that's what it seems to be - just tell me where to do so (a link). I'm a relative Ubuntu, NetBeans, and JavaFX newbie, though a long-time programmer. If I report it as a bug, I'll try it on the 32-bit build of 12.04.1 as well. Also, if I should add any more detail to the bug reported against NetBeans above, let me know - or feel free to add it yourself to the bug report above, if you would like. Thanks again!
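    For reference, the "Package is of bad quality" warning is Software Center objecting to the missing 'Installed-Size' control field the poster mentions. A hypothetical DEBIAN/control for this package with the field added (the value is the installed size in KB) would look roughly like:

    Package: javafxapplication1
    Version: 1.0
    Architecture: amd64
    Maintainer: Example Developer <dev@example.com>
    Installed-Size: 2048
    Description: Sample JavaFX application packaged by NetBeans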

    Read the article

  • Consolidating Oracle E-Business Suite R12 on Oracle's SPARC SuperCluster

    - by Giri Mandalika
    An Optimized Solution for Oracle E-Business Suite (EBS) R12 12.1.3 is now available on oracle.com: The Oracle Optimized Solution for Oracle E-Business Suite. This solution is centered around the engineered system SPARC SuperCluster T4-4. Check the business and technical white papers, along with a bunch of relevant useful resources, online at the above optimized solution page for EBS.

    What is an Optimized Solution?

    Oracle's Optimized Solutions are designed, tested and fully documented architectures that are tuned for optimal performance and availability. Optimized solutions are NOT pre-packaged, fully tuned, ready-to-install software bundles that can be downloaded and installed. An optimized solution is usually a well documented architecture that was thoroughly tested on a target platform. The technical white paper details the deployed application architecture, along with various observations from installing the application on the target platform to its behavior and performance in highly available and scalable configurations.

    Oracle E-Business Suite R12 Use Case

    Multiple E-Business Suite R12 12.1.3 application modules were tested in this optimized solution: Financials (online - Oracle Forms & web requests), Order Management (online - Oracle Forms & web requests) and HRMS (online - web requests & payroll batch). The solution will be updated with additional application modules when they are available. Oracle Solaris Cluster is responsible for the high availability portion of the solution.

    Performance Data

    For the sake of completeness, test results were also documented in the optimized solution white paper. Those test results are mainly for educational purposes only. They give a good sense of application behavior under the circumstances the application was tested. Since the major focus of the optimized solution is around highly available and scalable configurations, the application was configured to meet those criteria. Hence the documented test results are not directly comparable to any other E-Business Suite performance test results published by any vendor, including Oracle. Such an attempt may lead to skewed, incorrect conclusions.

    Questions & Requests

    Feel free to direct your questions to the author of the white papers. If you are a potential customer who would like to test a specific E-Business Suite application module on any non-engineered system such as SPARC T4-X, or an engineered system such as SPARC SuperCluster, contact the Oracle Solution Center.

    Read the article

  • WIF, ADFS 2 and WCF - Part 2: The Service

    - by Your DisplayName here!
    OK – so let's first start with a simple WCF service and connect that to ADFS 2 for authentication. The service itself simply echoes back the user's claims - just so we can make sure it actually works and to see how the ADFS 2 issuance rules emit claims for the service:

    [ServiceContract(Namespace = "urn:leastprivilege:samples")]
    public interface IService
    {
        [OperationContract]
        List<ViewClaim> GetClaims();
    }

    public class Service : IService
    {
        public List<ViewClaim> GetClaims()
        {
            var id = Thread.CurrentPrincipal.Identity as IClaimsIdentity;
            return (from c in id.Claims
                    select new ViewClaim
                    {
                        ClaimType = c.ClaimType,
                        Value = c.Value,
                        Issuer = c.Issuer,
                        OriginalIssuer = c.OriginalIssuer
                    }).ToList();
        }
    }

    The ViewClaim data contract is simply a DTO that holds the claim information. Next is the WCF configuration - let's have a look step by step. First I mapped all my http based services to the federation binding. This is achieved by using .NET 4.0's protocol mapping feature (this can also be done the 3.x way - but in that scenario all services will be federated):

    <protocolMapping>
      <add scheme="http" binding="ws2007FederationHttpBinding" />
    </protocolMapping>

    Next, I provide a standard configuration for the federation binding:

    <bindings>
      <ws2007FederationHttpBinding>
        <binding>
          <security mode="TransportWithMessageCredential">
            <message establishSecurityContext="false">
              <issuerMetadata address="https://server/adfs/services/trust/mex" />
            </message>
          </security>
        </binding>
      </ws2007FederationHttpBinding>
    </bindings>

    This binding points to our ADFS 2 installation's metadata endpoint. This is all that is needed for svcutil (aka "Add Service Reference") to generate the required client configuration. I also chose mixed mode security (SSL + basic message credential) for best performance. This binding also disables sessions - you can control that via the establishSecurityContext setting on the binding. This has its pros and cons. Something for a separate blog post, I guess.

    Next, the behavior section adds support for metadata and WIF:

    <behaviors>
      <serviceBehaviors>
        <behavior>
          <serviceMetadata httpsGetEnabled="true" />
          <federatedServiceHostConfiguration />
        </behavior>
      </serviceBehaviors>
    </behaviors>

    The next step is to add the WIF specific configuration (in <microsoft.identityModel />). First we need to specify the key material that we will use to decrypt the incoming tokens. This is optional for web applications, but for web services you need to protect the proof key - so this is mandatory (at least for symmetric proof keys, which is the default):

    <serviceCertificate>
      <certificateReference storeLocation="LocalMachine"
                            storeName="My"
                            x509FindType="FindBySubjectDistinguishedName"
                            findValue="CN=Service" />
    </serviceCertificate>

    You also have to specify which incoming tokens you trust. This is accomplished by registering the thumbprint of the signing keys you want to accept.
    You get this information from the signing certificate configured in ADFS 2:

    <issuerNameRegistry type="...ConfigurationBasedIssuerNameRegistry">
      <trustedIssuers>
        <add thumbprint="d1 … db"
             name="ADFS" />
      </trustedIssuers>
    </issuerNameRegistry>

    The last step (promised) is to add the allowed audience URIs to the configuration - WCF clients use (by default - and we'll come back to this) the endpoint address of the service:

    <audienceUris>
      <add value="https://machine/soapadfs/service.svc" />
    </audienceUris>

    OK - that's it - now we have a basic WCF service that uses ADFS 2 for authentication. The next step will be to set up ADFS to issue tokens for this service. Afterwards we can explore various options on how to use this service from a client. Stay tuned... (if you want to have a look at the full source code or peek at the upcoming parts - you can download the complete solution here)
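    For completeness, the ViewClaim DTO isn't shown in the excerpt; judging from the projection in GetClaims, a minimal sketch would be:

    using System.Runtime.Serialization;

    [DataContract]
    public class ViewClaim
    {
        // Four string properties mirroring the fields copied from each claim.
        [DataMember] public string ClaimType { get; set; }
        [DataMember] public string Value { get; set; }
        [DataMember] public string Issuer { get; set; }
        [DataMember] public string OriginalIssuer { get; set; }
    }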

    Read the article

  • Extremely large spike in traffic on the 1st - 4th of every month from mobile browsers

    - by wsanville
    I've noticed that on the 1st - 4th of the recent months (since January), several sites I maintain are getting thousands of requests from mobile browsers, whereas throughout the rest of the month, the numbers are in the single or double digits. Has anybody else noticed this sort of behavior? I don't have the exact user agents logged, but my analysis software (WebTrends) reports the traffic as mostly iPhone/iPad/iPod, Android, and Blackberry.

    Read the article

  • SQL Server Developer Tools - Codename Juneau vs. Red-Gate SQL Source Control

    - by Ajarn Mark Caldwell
    So how do the new SQL Server Developer Tools (previously code-named Juneau) stack up against SQL Source Control? Read on to find out.

    At the PASS Community Summit a couple of weeks ago, it was announced that the previously code-named Juneau software would be released under the name of SQL Server Developer Tools with the release of SQL Server 2012. This replacement for Database Projects in Visual Studio (also known in a former life as Data Dude) has some great new features. I won't attempt to describe them all here, but I will applaud Microsoft for making major improvements. One of my favorite changes is the way database elements are broken down. Previously every little thing was in its own file. For example, indexes were each in their own file. I always hated that. Now, SSDT uses a pattern similar to Red-Gate's and puts the indexes and keys into the same file as the overall table definition.

    Of course there are really cool features to keep your database model in sync with the actual source scripts, and the rename refactoring feature is now touted as being more than just a search and replace, but rather a "semantic-aware" search and replace. Funny, it reminds me of SQL Prompt's Smart Rename feature. But I'm not writing this just to criticize Microsoft and argue that they are late to the party with this feature set. Instead, I do see it as a viable alternative for folks who want all of their source code to be version controlled, but there are a couple of key trade-offs that you need to know about when you choose which tool set to use.

    First, the basics

    Both tool sets integrate with a wide variety of source control systems, including the most popular: Subversion, GIT, Vault, and Team Foundation Server. Both tools have integrated functionality to produce objects to upgrade your target database when you are ready (DACPACs in SSDT, integration with SQL Compare for SQL Source Control). If you regularly live in Visual Studio or the Business Intelligence Development Studio (BIDS) then SSDT will likely be comfortable for you. Like BIDS, SSDT is a Visual Studio project type that comes with SQL Server, and if you don't already have Visual Studio installed, it will install the shell for you. If you already have Visual Studio 2010 installed, then it will just add this as an available project type. On the other hand, if you regularly live in SQL Server Management Studio (SSMS) then you will really enjoy the SQL Source Control integration from within SSMS. Both tool sets store their database model in script files. In SSDT, these are on your file system like other source files; in SQL Source Control, these are stored in the folder structure in your source control system, and you can always GET them to your file system if you want to browse them directly.

    For me, the key differentiating factors are 1) a single, unified check-in, and 2) migration scripts. How you value those two features will likely make your decision for you.

    Unified Check-In

    If you do a continuous-integration (CI) style of development that triggers an automated build with unit testing on every check-in of source code, and you use Visual Studio for the rest of your development, then you will want to really consider SSDT. Because it is just another project in Visual Studio, it can be added to your existing solution, and you can then do a complete, or unified, single check-in of all changes whether they are application or database changes.
    This is simply not possible with SQL Source Control, because it is in a different development tool (SSMS instead of Visual Studio) and there is no way to do one unified check-in between the two. You CAN do really fast back-to-back check-ins, but there is the possibility that the automated build triggered by the first check-in will cause your unit tests to fail and the CI tool to report that you broke the build. Of course, the automated build triggered by the second check-in, which contains the "other half" of your changes, should pass, and so the amount of time that the build was broken may be very, very short - but if that is very, very important to you, then SQL Source Control just won't work; you'll have to use SSDT.

    Refactoring and Migrations

    If you work on a mature system, or on a not-so-mature but also not-so-well-designed system, where you want to refactor the database schema as you go along but you can't have data suddenly disappearing from your target system, then you'll probably want to go with SQL Source Control. As I wrote previously, there are a number of changes which you can make to your database that the comparison tools (both from Microsoft and Red Gate) simply cannot handle without the possibility (or probability) of data loss. Currently, SSDT only offers you the ability to inject PRE and POST custom deployment scripts. There is no way to insert your own script in the middle to override the default behavior of the tool. In version 3.0 of SQL Source Control (Early Access version now available) you have the ability to create your own custom migration script to take the place of the commands that the tool would have done, and ensure the preservation of your data. Or, even if the default tool behavior would have worked, but you simply know a better way, then you can take control and do things your way instead of theirs.

    You Decide

    In the environment I work in, our automated builds are not triggered off of check-ins, but off of the clock (currently once per night), and so there is no point at which the automated build and unit tests will be triggered without having both sides of the development effort already checked in. Therefore having a unified check-in, while handy, is not critical for us. As for migration scripts, these are critically important to us. We do a lot of new development on systems that have already been in production for years, and it is not uncommon for us to need to do a refactoring of the database. Because of the maturity of the existing system, that often involves data migrations or other additional SQL tasks that the comparison tools just can't detect on their own. Therefore, the ability to create a custom migration script to override the tool's default behavior is very important to us. And so, you can see why we will continue to use Red Gate SQL Source Control for the foreseeable future.

    Read the article

  • Last selected program or item not listed on Unity Dash

    - by Nur
    I think something is broken in my Ubuntu configuration, because whenever I run a program or search for a file in the Dash, the last entry used to be added as the left-most item on the Dash, but now it's not. I only realized this after a couple of hours; my Dash looks the same as before. How do I set it back to the default behavior? It's pretty annoying and time consuming to have to type everything over and over, when the default was to just press Enter to open the last item. I'm on Raring.

    Read the article

  • Class instance clustering in object reference graph for multi-entries serialization

    - by Juh_
    My question is on the best way to cluster a graph of class instances (i.e. objects, the graph nodes) linked by object references (the directed edges of the graph) around specifically marked objects. To explain my question better, let me explain my motivation. I currently use a moderately complex system to serialize the data used in my projects:

    - "Marked" objects have a specific attribute which stores a "saving entry": the path to an associated file on disc (but it could be done for any storage type providing the suitable interface)
    - Those objects can then be serialized automatically (e.g. obj.save())
    - The serialization of a marked object 'a' implicitly contains all objects 'b' that 'a' has a reference to, directly such that a.b = b, or indirectly such that a.c.b = b for some object 'c'

    This is very simple and basically defines specific storage entries for specific objects. I then have "container" type objects that:

    - can be serialized similarly (in fact they are, or can be, "marked")
    - don't serialize in their storage entries the "marked" objects (with direct reference): if a and a.b are both marked, a.save() calls b.save() and stores a.b = storage_entry(b)

    So, if I serialize 'a', it will automatically serialize all objects that can be reached from 'a' through the object reference graph, possibly in multiple entries. That is what I want, and it usually provides the functionality I need. However, it is very ad hoc and there are some structural limitations to this approach: the multi-entry saving only works through direct connections in "container" objects, and there are situations with undefined behavior, such as when two "marked" objects 'a' and 'b' both have a reference to an unmarked object 'c'. In this case my system will store 'c' in both 'a' and 'b', making an implicit copy which not only doubles the storage size, but also changes the object reference graph after re-loading.

    I am thinking of generalizing the process. Apart from the practical questions of implementation (I am coding in Python, and use pickle to serialize my objects), there is a general question on the way to attach (cluster) unmarked objects to marked ones. So, my questions are:

    - What are the important issues that should be considered? Basically, why not just use any graph traversal algorithm with an "attach to the last marked node" behavior?
    - Is there any work done on this problem, practical or theoretical, that I should be aware of?

    Note: I added the tag graph-database because I think the answer might come from that field, even if the question is not about one.
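    As a language-agnostic illustration of the "attach to a marked node" traversal (sketched in C# for consistency with the rest of this page; Node is a hypothetical type), one option records an owner per unmarked node and flags nodes claimed by two different roots - exactly the objects that would otherwise be implicitly copied:

    using System.Collections.Generic;

    // Hypothetical node type: IsMarked flags objects with their own storage
    // entry; References are the outgoing object references.
    public class Node
    {
        public bool IsMarked;
        public List<Node> References = new List<Node>();
    }

    public static class Clusterer
    {
        // Assign each unmarked node to the marked root that reaches it
        // first, stopping at other marked nodes; nodes claimed by two
        // different roots are collected as conflicts.
        public static Dictionary<Node, Node> Cluster(
            IEnumerable<Node> markedRoots, HashSet<Node> sharedOut)
        {
            var owner = new Dictionary<Node, Node>();
            foreach (var root in markedRoots)
            {
                var queue = new Queue<Node>(root.References);
                while (queue.Count > 0)
                {
                    var n = queue.Dequeue();
                    if (n.IsMarked) continue;            // saved in its own entry
                    Node o;
                    if (owner.TryGetValue(n, out o))
                    {
                        if (o != root) sharedOut.Add(n); // conflict: two owners
                        continue;
                    }
                    owner[n] = root;
                    foreach (var r in n.References) queue.Enqueue(r);
                }
            }
            return owner;
        }
    }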

    Read the article

  • Advantages of BDD for solo developer

    - by user248959
    I have found these lines below about the advantages of BDD (Behavior Driven Development): "The domain experts define what they need in the program in a way that the developers can not misinterpret (or at least not as much as in most other approaches)." Are there any more advantages apart from that? If I'm working alone (I'm not in contact with managers who could write BDD features), do I need to use BDD?

    Read the article

  • Free and Innovative Website Analytics Tools

    Many people think of free web analytics in terms of Google Analytics, well, this is understandable given the big name that Google is. Unknown to many, there are numerous web analytics tools out there that are free and very innovative. These web statistics tools measure everything from real-time visitor tracking, to search engine traffic and user behavior.

    Read the article

  • Strange date relationships with #PowerPivot

    - by Marco Russo (SQLBI)
    A reader of my PowerPivot book highlighted a strange behavior of the relationship between a datetime column and a Calendar table. Long story short: it seems that PowerPivot automatically rounds the date to the "nearest day", but instead of simply removing the time (truncating the decimal part of the decimal number internally used to represent a datetime value), a rounding function seems to be used, moving the date to the next day if the time part contains a PM time. As you can imagine, this becomes particularly...(read more)
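    A quick way to see the difference between the two behaviors, using .NET's OLE Automation date helpers, which follow the same days-plus-fraction convention described above (a sketch of the arithmetic, not PowerPivot's actual code):

    using System;

    class OADateDemo
    {
        static void Main()
        {
            // A datetime is internally a float: whole days since 1899-12-30,
            // time of day as the fraction. 2012-10-03 18:00 -> xxx85.75
            double oa = new DateTime(2012, 10, 3, 18, 0, 0).ToOADate();

            Console.WriteLine(DateTime.FromOADate(Math.Truncate(oa))); // 2012-10-03: time stripped
            Console.WriteLine(DateTime.FromOADate(Math.Round(oa)));    // 2012-10-04: PM rounds up!
        }
    }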

    Read the article

  • SQL Azure Security: DoS Part II

    - by Herve Roggero
    Ah! When you shoot yourself in the foot... a few times... it hurts! That's what I did on Sunday, to learn more about the behavior of the SQL Azure Denial of Service (DoS) prevention feature. This article is a short follow-up to my last post on this feature. In this post, I will outline some of the lessons learned from testing the behavior of SQL Azure from two machines. From the standpoint of SQL Azure, they look like one machine since they are behind a NAT.

    All logins affected. The first thing to note is that all the logins are affected. If you lock yourself out of a specific database, none of the logins will work on that database. In fact the database size becomes "--" in the SQL Azure Portal.

    Less than 100 sessions. I was able to see 50+ sessions being made in SQL Azure (by looking at sys.dm_exec_sessions) before being locked out. So the DoS feature appears to be triggered in part by the number of open sessions. I could not determine whether the lockout is also triggered by the speed at which connection requests are made, however.

    Other databases unaffected. This was interesting... the DoS feature works at the database level. Other databases were available for me to use.

    Just wait. Initially I thought that going through SQL Azure and connecting from there would reset the database and allow me to connect again. Unfortunately this doesn't seem to be the case. You will have to wait. And the more you lock yourself out, the more you will have to wait... The first time, the database became available again within 30 seconds or so; the second time within 2-3 minutes; and the third time... within 2-3 hours...

    Successful logins. The DoS feature appears to engage only for valid logins. If you have a login failure, it doesn't seem to count. I ran a test with over 100 login failures without being locked out.

    Read the article
