Search Results

Search found 20388 results on 816 pages for 'nvidia current'.


  • MemCached on Windows x64

    - by Django Reinhardt
    This question has previously been asked, but that was a year ago and I wanted to know if there had been any developments since then. Basically we'd like to use a MemCached Server on a Windows Server 2008 R2 machine... which is only x64, obviously. I haven't found any details on a Win64 version of MemCached, but there is still the solution from the previous thread (which I haven't tried yet) to use a bit of software called MemCacheD Manager running MemCached 1.2.6. However, the current version of MCd is 1.4.4 and I was wondering if there had been any improvements since then.
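    Whichever build you end up running (the MemCacheD Manager 1.2.6 bundle or a newer port), a quick way to confirm the server responds and see which version it reports is to talk the memcached text protocol directly. A minimal C# sketch, assuming the default host and port:

        using System;
        using System.IO;
        using System.Net.Sockets;
        using System.Text;

        class MemcachedCheck
        {
            static void Main()
            {
                // connect to the memcached instance and issue the "stats" command
                using (var client = new TcpClient("localhost", 11211))
                using (var stream = client.GetStream())
                {
                    byte[] cmd = Encoding.ASCII.GetBytes("stats\r\n");
                    stream.Write(cmd, 0, cmd.Length);

                    using (var reader = new StreamReader(stream, Encoding.ASCII))
                    {
                        // the server answers with STAT lines terminated by a single "END" line
                        string line;
                        while ((line = reader.ReadLine()) != null && line != "END")
                        {
                            Console.WriteLine(line); // e.g. "STAT version 1.4.4"
                        }
                    }
                }
            }
        }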

    Read the article

  • HP Blade ILO not responding in chassis ILO

    - by bobinabottle
    I have just started at a new company and I am inspecting their current server config. The HP 480c blades in a c7000 chassis aren't responding to ILO, although the chassis ILO is working fine. I have a feeling the last sysadmin configured the blades' ILO with static IPs and they are not responding correctly. The servers are sitting in a datacenter and I'm hoping to be able to fix this remotely. Is there a way that I can change the ILO static IPs for the blades remotely? If not, and I do have to go onsite, how do I change the IP addresses of the ILO for the blades? (Sorry, I'm not very familiar with HP servers.) Thanks for your help!
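    Since the chassis Onboard Administrator is reachable, one remote option is its Enclosure Bay IP Addressing (EBIPA) feature, which can assign addresses to the blade iLOs from the OA interface. Another is to push settings from each blade's own OS with HP's hponcfg utility and a RIBCL script. A minimal sketch with placeholder addresses (the login credentials are ignored when hponcfg runs locally on the blade):

        <!-- ilo_net.xml: set a static address on this blade's iLO -->
        <RIBCL VERSION="2.0">
          <LOGIN USER_LOGIN="Administrator" PASSWORD="ignored-when-run-locally">
            <RIB_INFO MODE="write">
              <MOD_NETWORK_SETTINGS>
                <DHCP_ENABLE VALUE="N"/>
                <IP_ADDRESS VALUE="10.0.0.51"/>
                <SUBNET_MASK VALUE="255.255.255.0"/>
                <GATEWAY_IP_ADDRESS VALUE="10.0.0.1"/>
              </MOD_NETWORK_SETTINGS>
            </RIB_INFO>
          </LOGIN>
        </RIBCL>

    Run it from the blade's OS with hponcfg -f ilo_net.xml (the Windows build uses /f); the iLO resets its network interface and comes back up on the new address.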

    Read the article

  • How Hard Can It Be?

    - by David Totzke
    I mean seriously.  Let’s imagine for a moment that by some stroke of luck or genius or cosmic accident that you come to be the owner of sex.com.  You’d think you had won the lottery.  That would be like having a license to print money.  I mean really.  Sex is the most searched term on the entire Internet.  Even without any SEO you’d think that your site would show up on the first page of results on Google. You would think that; and you’d be wrong.  At least in the case of the current owners of that domain name anyways.  The details can be found here but suffice it to say that Escom LLC has managed to fuck it up.  They’ve been forced into bankruptcy by their creditors.  Something doesn’t smell quite right with the whole thing.  Some guy named Mike Mann (please God, don’t let it be this Mike Mann) is an investor in all three creditors.  WTF? Seriously.  How hard can it be? Dave Just because I can…

    Read the article

  • Agile PLM Partners and OCS offer valuable services

    - by Shane Goodwin
    One of the amazing benefits of using Agile is the ability to tap into a broad number of Partners and Oracle Consulting Services to leverage very talented people. Many of these people respond to postings on the Agile PLM Yahoo usergroups (WRAU and IAUG) as well as in the Oracle Support Community. As you are managing your Agile PLM project, either a new implementation or an upgrade, these Partners and OCS can jumpstart your internal resources with information and experience which takes years to develop. They can also offer other capabilities which add to your Agile experience. As an example, GoEngineer recently announced a new offering to host Agile PLM databases for companies who do not want to run their own Agile PLM system. This offering provides our smaller customers with more options on how to implement Agile PLM. The Oracle Consulting Services group has two new offerings, one for Rapid Deployments of new customers and another for Process and Technical Diagnostics of existing customers in order to better leverage the PLM investment. Contact your Oracle Sales representative to learn more. In addition, there are many other partners who are helping our customers get the most from Agile PLM. Some examples are below. In the coming months, the Oracle Partner Network will be working with partners to certify them as specialists in Agile PLM. Some of our other current partners working on Agile PLM are: Kalypso J Squared Domain Systems Sierra Atlantic Minerva

    Read the article

  • Latest Chrome Canary Channel Build Adds Automatic ‘Malware Download’ Blocking Feature

    - by Akemi Iwaya
    As Chrome’s popularity continues to grow, malware authors are looking for new ways to target and trick users of Google’s browser into downloading malicious software to their computers. With this problem in mind, Google has introduced a new feature into the Canary Channel to automatically detect and block malware downloads whenever possible in order to help keep your system intact and safe. Screenshot courtesy of The Google Chrome Blog. In addition to the recent Reset Feature added to the stable build of Chrome this past August, the new feature in the Canary Channel build works to help protect you as follows: From the Google Chrome Blog post: In the current Canary build of Chrome, we’ll automatically block downloads of malware that we detect. If you see this message in the download tray at the bottom of your screen, you can click “Dismiss” knowing Chrome is working to keep you safe. (See screenshot above.) You can learn more about the new feature and download the latest Canary Channel build via the links below. Don’t mess with my browser! [Google Chrome Blog] Download the Latest Chrome Canary Build [Google] [via The Next Web]     

    Read the article

  • Latest Security Updates for Java are Available for Download

    - by Akemi Iwaya
    Oracle has released new updates that patch 40 security holes in their Java Runtime Environment software. Anyone who needs or actively uses the Java Runtime Environment for work or gaming should update their Java installation as soon as possible. One thing to keep in mind is that there are limitations placed on updates for older versions of Java, as shown in the following excerpt. If you are using an older version, then it is recommended that you update to the Java SE 7 release if possible (depending on your usage circumstances). From The H Security blog post: Only the current version of Java, Java SE 7, will be updated for free; downloads of the new version, Java SE 7 Update 25, are available and existing installs should auto-update. Mac OS X users will get an updated Java SE 6 for their systems as an automatic update; Java SE 7 on Mac OS X is updated by Oracle. Users of other older versions of Java will only get updates if they have a maintenance contract with Oracle. Affected product releases and versions:
    JDK and JRE 7 Update 21 and earlier
    JDK and JRE 6 Update 45 and earlier
    JDK and JRE 5.0 Update 45 and earlier
    JavaFX 2.2.21 and earlier
    Note: If you do not need Java on your system, we recommend uninstalling it entirely or disabling the browser plugin. You can download and read through the details about the latest Java updates by visiting the links shown below.
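    If you are not sure which version a machine is actually running before or after the update, the standard check from a terminal or command prompt is:

        java -version    # reports e.g. java version "1.7.0_25" once Update 25 is installed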

    Read the article

  • Best Practices vs Reality

    - by RonHill
    On a scale depicting how closely best practices are followed, with "always" on one end and "never" on the other, my current company falls uncomfortably close to the latter. Just a couple of trivial examples:
    We have no code review process.
    There is very little documentation despite a very large code base (and some of it is blatantly incorrect/misleading).
    Untested/buggy/uncompilable code is frequently checked in to source control.
    It is comically complicated to create a debuggable build for some of our components because of their underlying architecture.
    Unhandled exceptions are not uncommon in our releases.
    Empty Catch{ } blocks are everywhere.
    Now, with the understanding that it's neither practical nor realistic to follow ALL best practices ALL the time, my question is this: How closely have commonly accepted best practices been followed at the companies you've worked for? I'm kind of a noob--this is only the second company I've worked for--so I'm not sure if I'm just more of an anal retentive coder or if I've just ended up at mediocre companies. My guess (hope?) is the latter, but a coworker with way more experience than me says every company he's ever worked for is like this. Given the obvious benefits of following most best practices most of the time, I find it hard to believe it's like this everywhere. Am I wrong?

    Read the article

  • push email / email server tutorial

    - by David A
    Does anyone happen to know the current status of push email in the Linux world? From my searching at the moment I have seen Z-push http://www.ifusio.com/blog/setup-your-own-push-mail-server-with-z-push-on-debian-linux and https://peterkieser.com/2011/03/25/androids-k-9-mail-battery-life-and-dovecots-push-imap/ Are there other solutions? Does anyone have any experience with these? They're somewhat different, in that Z-push seems to work in conjunction with an existing IMAP server. Some time ago I did manage to compile and build Dovecot 2 (since only Dovecot 1 was available in the Ubuntu repos at the time); it would have been a real fluke because I had no idea what I was doing, but it seemed to work well with my mobile phone. That said, I can't say for sure that it was pushing, but it seemed like it. Anyway, I'm here again and looking to set up a mail server. I'm hoping to do a better job this time around with virtual users and such. Without installing ispconfig3 (or something similar), does anyone have any recent email server tutorials (that cover all aspects: MTA, MDA...) that can supply push email on an Ubuntu 12.04 server? (I'm probably of slightly above newb status, but not far) Thanks a bunch
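    On the Dovecot side specifically, "push" for clients like K-9 is just IMAP IDLE, which Dovecot 2 enables by default; the settings below are the knobs the linked post tunes for mobile clients. This is a sketch only -- file paths and values are assumptions, adjust for your layout:

        # /etc/dovecot/conf.d/20-imap.conf
        protocol imap {
          # phones keep several long-lived IDLE connections open per user
          mail_max_userip_connections = 20
        }

        # /etc/dovecot/dovecot.conf
        # how often Dovecot touches idling connections; a longer interval saves
        # phone battery at the cost of slower dead-connection detection
        imap_idle_notify_interval = 29 mins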

    Read the article

  • Token based Authentication for WCF HTTP/REST Services: Authorization

    - by Your DisplayName here!
    In the previous post I showed how token based authentication can be implemented for WCF HTTP based services. Authentication is the process of finding out who the user is – this includes anonymous users. Then it is up to the service to decide under which circumstances the client has access to the service as a whole or individual operations. This is called authorization. By default – my framework does not allow anonymous users and will deny access right in the service authorization manager. You can however turn anonymous access on – that means technically, that instead of denying access, an anonymous principal is placed on Thread.CurrentPrincipal. You can flip that switch in the configuration class that you can pass into the service host/factory. var configuration = new WebTokenWebServiceHostConfiguration {     AllowAnonymousAccess = true }; But this is not enough, in addition you also need to decorate the individual operations to allow anonymous access as well, e.g.: [AllowAnonymousAccess] public string GetInfo() {     ... } Inside these operations you might have an authenticated or an anonymous principal on Thread.CurrentPrincipal, and it is up to your code to decide what to do. Side note: Being a security guy, I like this opt-in approach to anonymous access much better that all those opt-out approaches out there (like the Authorize attribute – or this.). Claims-based Authorization Since there is a ClaimsPrincipal available, you can use the standard WIF claims authorization manager infrastructure – either declaratively via ClaimsPrincipalPermission or programmatically (see also here). [ClaimsPrincipalPermission(SecurityAction.Demand,     Resource = "Claims",     Operation = "View")] public ViewClaims GetClientIdentity() {     return new ServiceLogic().GetClaims(); }   In addition you can also turn off per-request authorization (see here for background) via the config and just use the “domain specific” instrumentation. While the code is not 100% done – you can download the current solution here. HTH (Wanna learn more about federation, WIF, claims, tokens etc.? Click here.)

    Read the article

  • Anti-Forgery Request Recipes For ASP.NET MVC And AJAX

    - by Dixin
    Background To secure websites from cross-site request forgery (CSRF, or XSRF) attack, ASP.NET MVC provides an excellent mechanism: The server prints tokens to cookie and inside the form; When the form is submitted to server, token in cookie and token inside the form are sent in the HTTP request; Server validates the tokens. To print tokens to browser, just invoke HtmlHelper.AntiForgeryToken():<% using (Html.BeginForm()) { %> <%: this.Html.AntiForgeryToken(Constants.AntiForgeryTokenSalt)%> <%-- Other fields. --%> <input type="submit" value="Submit" /> <% } %> This invocation generates a token then writes inside the form:<form action="..." method="post"> <input name="__RequestVerificationToken" type="hidden" value="J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP" /> <!-- Other fields. --> <input type="submit" value="Submit" /> </form> and also writes into the cookie: __RequestVerificationToken_Lw__= J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP When the above form is submitted, they are both sent to server. In the server side, [ValidateAntiForgeryToken] attribute is used to specify the controllers or actions to validate them:[HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult Action(/* ... */) { // ... } This is very productive for form scenarios. But recently, when resolving security vulnerabilities for Web products, some problems are encountered. Specify validation on controller (not on each action) The server side problem is, It is expected to declare [ValidateAntiForgeryToken] on controller, but actually it has be to declared on each POST actions. Because POST actions are usually much more then controllers, the work would be a little crazy. Problem Usually a controller contains actions for HTTP GET and actions for HTTP POST requests, and usually validations are expected for HTTP POST requests. So, if the [ValidateAntiForgeryToken] is declared on the controller, the HTTP GET requests become invalid:[ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public class SomeController : Controller // One [ValidateAntiForgeryToken] attribute. { [HttpGet] public ActionResult Index() // Index() cannot work. { // ... } [HttpPost] public ActionResult PostAction1(/* ... */) { // ... } [HttpPost] public ActionResult PostAction2(/* ... */) { // ... } // ... } If browser sends an HTTP GET request by clicking a link: http://Site/Some/Index, validation definitely fails, because no token is provided. So the result is, [ValidateAntiForgeryToken] attribute must be distributed to each POST action:public class SomeController : Controller // Many [ValidateAntiForgeryToken] attributes. { [HttpGet] public ActionResult Index() // Works. { // ... } [HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult PostAction1(/* ... */) { // ... } [HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult PostAction2(/* ... */) { // ... } // ... } This is a little bit crazy, because one application can have a lot of POST actions. 
Solution To avoid a large number of [ValidateAntiForgeryToken] attributes (one for each POST action), the following ValidateAntiForgeryTokenWrapperAttribute wrapper class can be helpful, where HTTP verbs can be specified:[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false, Inherited = true)] public class ValidateAntiForgeryTokenWrapperAttribute : FilterAttribute, IAuthorizationFilter { private readonly ValidateAntiForgeryTokenAttribute _validator; private readonly AcceptVerbsAttribute _verbs; public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs) : this(verbs, null) { } public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs, string salt) { this._verbs = new AcceptVerbsAttribute(verbs); this._validator = new ValidateAntiForgeryTokenAttribute() { Salt = salt }; } public void OnAuthorization(AuthorizationContext filterContext) { string httpMethodOverride = filterContext.HttpContext.Request.GetHttpMethodOverride(); if (this._verbs.Verbs.Contains(httpMethodOverride, StringComparer.OrdinalIgnoreCase)) { this._validator.OnAuthorization(filterContext); } } } When this attribute is declared on controller, only HTTP requests with the specified verbs are validated:[ValidateAntiForgeryTokenWrapper(HttpVerbs.Post, Constants.AntiForgeryTokenSalt)] public class SomeController : Controller { // GET actions are not affected. // Only HTTP POST requests are validated. } Now one single attribute on controller turns on validation for all POST actions. Maybe it would be nice if HTTP verbs can be specified on the built-in [ValidateAntiForgeryToken] attribute, which is easy to implemented. Specify Non-constant salt in runtime By default, the salt should be a compile time constant, so it can be used for the [ValidateAntiForgeryToken] or [ValidateAntiForgeryTokenWrapper] attribute. Problem One Web product might be sold to many clients. If a constant salt is evaluated in compile time, after the product is built and deployed to many clients, they all have the same salt. Of course, clients do not like this. Even some clients might want to specify a custom salt in configuration. In these scenarios, salt is required to be a runtime value. Solution In the above [ValidateAntiForgeryToken] and [ValidateAntiForgeryTokenWrapper] attribute, the salt is passed through constructor. So one solution is to remove this parameter:public class ValidateAntiForgeryTokenWrapperAttribute : FilterAttribute, IAuthorizationFilter { public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs) { this._verbs = new AcceptVerbsAttribute(verbs); this._validator = new ValidateAntiForgeryTokenAttribute() { Salt = AntiForgeryToken.Value }; } // Other members. } But here the injected dependency becomes a hard dependency. 
So the other solution is moving validation code into controller to work around the limitation of attributes:public abstract class AntiForgeryControllerBase : Controller { private readonly ValidateAntiForgeryTokenAttribute _validator; private readonly AcceptVerbsAttribute _verbs; protected AntiForgeryControllerBase(HttpVerbs verbs, string salt) { this._verbs = new AcceptVerbsAttribute(verbs); this._validator = new ValidateAntiForgeryTokenAttribute() { Salt = salt }; } protected override void OnAuthorization(AuthorizationContext filterContext) { base.OnAuthorization(filterContext); string httpMethodOverride = filterContext.HttpContext.Request.GetHttpMethodOverride(); if (this._verbs.Verbs.Contains(httpMethodOverride, StringComparer.OrdinalIgnoreCase)) { this._validator.OnAuthorization(filterContext); } } } Then make controller classes inheriting from this AntiForgeryControllerBase class. Now the salt is no long required to be a compile time constant. Submit token via AJAX For browser side, once server side turns on anti-forgery validation for HTTP POST, all AJAX POST requests will fail by default. Problem In AJAX scenarios, the HTTP POST request is not sent by form. Take jQuery as an example:$.post(url, { productName: "Tofu", categoryId: 1 // Token is not posted. }, callback); This kind of AJAX POST requests will always be invalid, because server side code cannot see the token in the posted data. Solution Basically, the tokens must be printed to browser then sent back to server. So first of all, HtmlHelper.AntiForgeryToken() need to be called somewhere. Now the browser has token in both HTML and cookie. Then jQuery must find the printed token in the HTML, and append token to the data before sending:$.post(url, { productName: "Tofu", categoryId: 1, __RequestVerificationToken: getToken() // Token is posted. }, callback); To be reusable, this can be encapsulated into a tiny jQuery plugin:/// <reference path="jquery-1.4.2.js" /> (function ($) { $.getAntiForgeryToken = function (tokenWindow, appPath) { // HtmlHelper.AntiForgeryToken() must be invoked to print the token. tokenWindow = tokenWindow && typeof tokenWindow === typeof window ? tokenWindow : window; appPath = appPath && typeof appPath === "string" ? "_" + appPath.toString() : ""; // The name attribute is either __RequestVerificationToken, // or __RequestVerificationToken_{appPath}. tokenName = "__RequestVerificationToken" + appPath; // Finds the <input type="hidden" name={tokenName} value="..." /> from the specified. // var inputElements = $("input[type='hidden'][name='__RequestVerificationToken" + appPath + "']"); var inputElements = tokenWindow.document.getElementsByTagName("input"); for (var i = 0; i < inputElements.length; i++) { var inputElement = inputElements[i]; if (inputElement.type === "hidden" && inputElement.name === tokenName) { return { name: tokenName, value: inputElement.value }; } } return null; }; $.appendAntiForgeryToken = function (data, token) { // Converts data if not already a string. if (data && typeof data !== "string") { data = $.param(data); } // Gets token from current window by default. token = token ? token : $.getAntiForgeryToken(); // $.getAntiForgeryToken(window). data = data ? data + "&" : ""; // If token exists, appends {token.name}={token.value} to data. return token ? data + encodeURIComponent(token.name) + "=" + encodeURIComponent(token.value) : data; }; // Wraps $.post(url, data, callback, type). 
$.postAntiForgery = function (url, data, callback, type) { return $.post(url, $.appendAntiForgeryToken(data), callback, type); }; // Wraps $.ajax(settings). $.ajaxAntiForgery = function (settings) { settings.data = $.appendAntiForgeryToken(settings.data); return $.ajax(settings); }; })(jQuery); In most of the scenarios, it is Ok to just replace $.post() invocation with $.postAntiForgery(), and replace $.ajax() with $.ajaxAntiForgery():$.postAntiForgery(url, { productName: "Tofu", categoryId: 1 }, callback); // Token is posted. There might be some scenarios of custom token, where $.appendAntiForgeryToken() is useful:data = $.appendAntiForgeryToken(data, token); // Token is already in data. No need to invoke $.postAntiForgery(). $.post(url, data, callback); And there are scenarios that the token is not in the current window. For example, an HTTP POST request can be sent by an iframe, while the token is in the parent window. Here, token's container window can be specified for $.getAntiForgeryToken():data = $.appendAntiForgeryToken(data, $.getAntiForgeryToken(window.parent)); // Token is already in data. No need to invoke $.postAntiForgery(). $.post(url, data, callback); If you have better solution, please do tell me.

    Read the article

  • Cannot access local resource (C drive) on remote desktop

    - by Robert Massa
    I've recently upgraded my client PC to Windows 7, and ever since I can't get local resource sharing for remote desktop to work. I'm connecting to a 2003 server which isn't in my current domain. All my optical and virtual drives are being shared, but the C drive stays hidden. I checked the options, and I do indicate that I want to share my C drive. Is there any permission I should change for this to work? The server is configured correctly, because when connecting from an XP client this problem doesn't occur. I've tried accessing the share directly by opening the \\tsclient\c path, but this doesn't work either; \\tsclient only shows the other drives. Also, copy 'n paste doesn't seem to work either (tried restarting rdpclip to no avail), getting "Cannot copy file File.dat, the device is not connected."
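    One thing worth checking on the Windows 7 side is the saved .rdp file itself, since a stale per-connection file can override what the Local Resources tab appears to show. The relevant lines look like this (the server name is a placeholder):

        full address:s:yourserver.example.com
        drivestoredirect:s:C:\;
        redirectclipboard:i:1

    drivestoredirect:s:* redirects all drives instead of just C:, and redirectclipboard:i:1 covers the copy-and-paste problem; deleting or editing the hidden Default.rdp in Documents helps if an old saved connection keeps winning.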

    Read the article

  • How will people upgrade from 12.10 to 14.04 after 13.04 is EOL?

    - by Dave Jones
    Looking at https://wiki.ubuntu.com/Releases, 13.04 will reach EOL in January 2014, while 12.10 will reach EOL in April 2014. Therefore, if a 12.10 user hasn't upgraded to 13.04 and subsequently to 13.10, there will be a 3-month period where a 12.10 user has a supported version of Ubuntu but will be unable to upgrade. I asked this question a number of months ago, and the suggestion was that the hope was that there would be an upgrade path from 12.10 to 14.04. Could somebody confirm whether this is still the case, or if not, what the plans are for 12.10 users after 13.04 becomes EOL? Edited for clarification: The particular issue I was concerned about is that once 13.04 goes EOL, a 12.10 user would in theory lose the ability to upgrade once the 13.04 repos are removed from the normal release repository. Using the old-releases method would be a way around the issue, but it would make things more complicated for a less experienced user. An alternative could be for the 13.04 repos to be left available for the 3-month interim period so that a 12.10 install could still be upgraded to 13.04 and subsequently onto 13.10; however, that doesn't seem an optimal solution, in that users may take it to mean that support for 13.04 was being continued. If a direct upgrade from 12.10 to 14.04 were to be made available, it would only be available once 14.04 was released, which still leaves the issue of the 3 months between January and April 2014 where there may be some confusion. I suspect it's not going to affect a significant number of users: if somebody has upgraded from 12.04 LTS to 12.10, in all probability they'll have continued to upgrade to 13.04 and upwards, because they'd made the choice to use current rather than LTS releases. It would just be useful to have some clarification of the situation which people can be referred to in advance of 13.04 going EOL, rather than hitting the cut-off point when it is too late for users to make the decision, leaving them in limbo.

    Read the article

  • Disable .htaccess or disable some rules from .htaccess on specific URL

    - by petRUShka
    I have Kerberos-based authentication and I want to disable it on only the root URL: mysite.com/. I want it to keep working fine on any other page, like mysite.com/page1. I have the following in my .htaccess:
        AuthType Kerberos
        AuthName "Domain login"
        KrbAuthRealms DOMAIN.COM
        KrbMethodK5Passwd on
        Krb5KeyTab /etc/httpd/httpd.keytab
        require valid-user
    I want to turn it off only for the root URL. As a workaround, it would also be fine to disable these .htaccess rules from the virtual host config; unfortunately I don't know how to do that either. Part of my vhost.conf:
        <Directory /home/user/www/current/public/>
            Options -MultiViews +FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>
    It would be great if you could advise me!
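    One way to express this, either in the vhost or in the .htaccess itself, is to let requests for the bare root URL through on a host-based rule while everything else still has to authenticate. A sketch in Apache 2.2 syntax, matching the Order/Allow style already used in this vhost (the environment variable name is arbitrary):

        SetEnvIf Request_URI "^/$" skip_krb
        <Directory /home/user/www/current/public/>
            AuthType Kerberos
            AuthName "Domain login"
            KrbAuthRealms DOMAIN.COM
            KrbMethodK5Passwd on
            Krb5KeyTab /etc/httpd/httpd.keytab
            Require valid-user

            Order allow,deny
            Allow from env=skip_krb
            Satisfy Any
        </Directory>

    Satisfy Any grants access if either the Allow rule or the Kerberos authentication succeeds, so only the exact URL / bypasses the login prompt.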

    Read the article

  • Back to Basics: When does a .NET Assembly Dependency get loaded

    - by Rick Strahl
    When we work on typical day to day applications, it's easy to forget some of the core features of the .NET framework. For me personally it's been a long time since I've learned about some of the underlying CLR system level services even though I rely on them on a daily basis. I often think only about high level application constructs and/or high level framework functionality, but the low level stuff is often just taken for granted. Over the last week at DevConnections I had all sorts of low level discussions with other developers about the inner workings of this or that technology (especially in light of my Low Level ASP.NET Architecture talk and the Razor Hosting talk). One topic that came up a couple of times and ended up a point of confusion even amongst some seasoned developers (including some folks from Microsoft <snicker>) is when assemblies actually load into a .NET process. There are a number of different ways that assemblies are loaded in .NET. When you create a typical project assemblies usually come from: The Assembly reference list of the top level 'executable' project The Assembly references of referenced projects Dynamically loaded at runtime via AppDomain/Reflection loading In addition .NET automatically loads mscorlib (most of the System namespace) the boot process that hosts the .NET runtime in EXE apps, or some other kind of runtime hosting environment (runtime hosting in servers like IIS, SQL Server or COM Interop). In hosting environments the runtime host may also pre-load a bunch of assemblies on its own (for example the ASP.NET host requires all sorts of assemblies just to run itself, before ever routing into your user specific code). Assembly Loading The most obvious source of loaded assemblies is the top level application's assembly reference list. You can add assembly references to a top level application and those assembly references are then available to the application. In a nutshell, referenced assemblies are not immediately loaded - they are loaded on the fly as needed. So regardless of whether you have an assembly reference in a top level project, or a dependent assembly assemblies typically load on an as needed basis, unless explicitly loaded by user code. The same is true of dependent assemblies. To check this out I ran a simple test: I have a utility assembly Westwind.Utilities which is a general purpose library that can work in any type of project. Due to a couple of small requirements for encoding and a logging piece that allows logging Web content (dependency on HttpContext.Current) this utility library has a dependency on System.Web. Now System.Web is a pretty large assembly and generally you'd want to avoid adding it to a non-Web project if it can be helped. So I created a Console Application that loads my utility library: You can see that the top level Console app a reference to Westwind.Utilities and System.Data (beyond the core .NET libs). The Westwind.Utilities project on the other hand has quite a few dependencies including System.Web. I then add a main program that accesses only a simple utillity method in the Westwind.Utilities library that doesn't require any of the classes that access System.Web: static void Main(string[] args) { Console.WriteLine(StringUtils.NewStringId()); Console.ReadLine(); } StringUtils.NewStringId() calls into Westwind.Utilities, but it doesn't rely on System.Web. Any guesses what the assembly list looks like when I stop the code on the ReadLine() command? 
I'll wait here while you think about it… … … So, when I stop on ReadLine() and then fire up Process Explorer and check the assembly list I get: We can see here that .NET has not actually loaded any of the dependencies of the Westwind.Utilities assembly. Also not loaded is the top level System.Data reference even though it's in the dependent assembly list of the top level project. Since this particular function I called only uses core System functionality (contained in mscorlib) there's in fact nothing else loaded beyond the main application and my Westwind.Utilities assembly that contains the method accessed. None of the dependencies of Westwind.Utilities loaded. If you were to open the assembly in a disassembler like Reflector or ILSpy, you would however see all the compiled in dependencies. The referenced assemblies are in the dependency list and they are loadable, but they are not immediately loaded by the application. In other words the C# compiler and .NET linker are smart enough to figure out the dependencies based on the code that actually is referenced from your application and any dependencies cascading down into the dependencies from your top level application into the referenced assemblies. In the example above the usage requirement is pretty obvious since I'm only calling a single static method and then exiting the app, but in more complex applications these dependency relationships become very complicated - however it's all taken care of by the compiler and linker figuring out what types and members are actually referenced and including only those assemblies that are in fact referenced in your code or required by any of your dependencies. The good news here is: That if you are referencing an assembly that has a dependency on something like System.Web in a few places that are not actually accessed by any of your code or any dependent assembly code that you are calling, that assembly is never loaded into memory! Some Hosting Environments pre-load Assemblies The load behavior can vary however. In Console and desktop applications we have full control over assembly loading so we see the core CLR behavior. However other environments like ASP.NET for example will preload referenced assemblies explicitly as part of the startup process - primarily to minimize load conflicts. Specifically ASP.NET pre-loads all assemblies referenced in the assembly list and the /bin folder. So in Web applications it definitely pays to minimize your top level assemblies if they are not used. Understanding when Assemblies Load To clarify and see it actually happen what I described in the first example , let's look at a couple of other scenarios. To see assemblies loading at runtime in real time lets create a utility function to print out loaded assemblies to the console: public static void PrintAssemblies() { var assemblies = AppDomain.CurrentDomain.GetAssemblies(); foreach (var assembly in assemblies) { Console.WriteLine(assembly.GetName()); } } Now let's look at the first scenario where I have class method that references internally uses System.Web. 
In the first scenario lets add a method to my main program like this: static void Main(string[] args) { Console.WriteLine(StringUtils.NewStringId()); Console.ReadLine(); PrintAssemblies(); } public static void WebLogEntry() { var entry = new WebLogEntry(); entry.UpdateFromRequest(); Console.WriteLine(entry.QueryString); } UpdateFromWebRequest() internally accesses HttpContext.Current to read some information of the ASP.NET Request object so it clearly needs a reference System.Web to work. In this first example, the method that holds the calling code is never called, but exists as a static method that can potentially be called externally at some point. What do you think will happen here with the assembly loading? Will System.Web load in this example? No - it doesn't. Because the WebLogEntry() method is never called by the mainline application (or anywhere else) System.Web is not loaded. .NET dynamically loads assemblies as code that needs it is called. No code references the WebLogEntry() method and so System.Web is never loaded. Next, let's add the call to this method, which should trigger System.Web to be loaded because a dependency exists. Let's change the code to: static void Main(string[] args) { Console.WriteLine(StringUtils.NewStringId()); Console.WriteLine("--- Before:"); PrintAssemblies(); WebLogEntry(); Console.WriteLine("--- After:"); PrintAssemblies(); Console.ReadLine(); } public static void WebLogEntry() { var entry = new WebLogEntry(); entry.UpdateFromRequest(); Console.WriteLine(entry.QueryString); } Looking at the code now, when do you think System.Web will be loaded? Will the before list include it? Yup System.Web gets loaded, but only after it's actually referenced. In fact, just until before the call to UpdateFromRequest() System.Web is not loaded - it only loads when the method is actually called and requires the reference in the executing code. Moral of the Story So what have we learned - or maybe remembered again? Dependent Assembly References are not pre-loaded when an application starts (by default) Dependent Assemblies that are not referenced by executing code are never loaded Dependent Assemblies are just in time loaded when first referenced in code All of this is nothing new - .NET has always worked like this. But it's good to have a refresher now and then and go through the exercise of seeing it work in action. It's not one of those things we think about everyday, and as I found out last week, I couldn't remember exactly how it worked since it's been so long since I've learned about this. And apparently I'm not the only one as several other people I had discussions with in relation to loaded assemblies also didn't recall exactly what should happen or assumed incorrectly that just having a reference automatically loads the assembly. The moral of the story for me is: Trying at all costs to eliminate an assembly reference from a component is not quite as important as it's often made out to be. For example, the Westwind.Utilities module described above has a logging component, including a Web specific logging entry that supports pulling information from the active HTTP Context. Adding that feature requires a reference to System.Web. Should I worry about this in the scope of this library? Probably not, because if I don't use that one class of nearly a hundred, System.Web never gets pulled into the parent process. 
IOW, System.Web only loads when I use that specific feature and if I am, well I clearly have to be running in a Web environment anyway to use it realistically. The alternative would be considerably uglier: Pulling out the WebLogEntry class and sticking it into another assembly and breaking up the logging code. In this case - definitely not worth it. So, .NET definitely goes through some pretty nifty optimizations to ensure that it loads only what it needs and in most cases you can just rely on .NET to do the right thing. Sometimes though assembly loading can go wrong (especially when signed and versioned local assemblies are involved), but that's subject for a whole other post… © Rick Strahl, West Wind Technologies, 2005-2012

    Read the article

  • xrandr doesn’t detect display ports

    - by Psyhister
    I have a ThinkPad T510 laptop with Gentoo Linux installed on it and I can’t manage to get VGA and DisplayPort working. xrandr -q won’t show them, so I’m guessing, that there’s a problem with my kernel configuration, but I wasn’t able to find the options responsible for these ports. Here’s the output from xrandr -q: xrandr: Failed to get size of gamma for output default Screen 0: minimum 320 x 175, current 1366 x 768, maximum 1366 x 768 default connected 1366x768+0+0 0mm x 0mm 1366x768 50.0* 51.0 52.0 1024x768 53.0 54.0 832x624 55.0 800x600 56.0 57.0 58.0 59.0 60.0 720x400 61.0 700x525 62.0 640x512 63.0 64.0 640x480 65.0 66.0 67.0 68.0 69.0 640x400 70.0 640x350 71.0 576x432 72.0 512x384 73.0 74.0 75.0 76.0 77.0 416x312 78.0 400x300 79.0 80.0 81.0 82.0 83.0 360x200 84.0 320x240 85.0 86.0 87.0 88.0 320x200 89.0 320x175 90.0 Can anyone help me figure out what the problem is and how to get the video connections to work?
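    The single "default" output and the gamma error usually mean X is running on a plain framebuffer/VESA driver rather than a KMS-capable DRM driver, so xrandr never sees the real connectors. A sketch of the kernel options involved; whether the Intel or the nouveau driver applies depends on which GPU actually drives the panel in this T510, which is an assumption to verify with lspci:

        CONFIG_DRM=y
        CONFIG_DRM_KMS_HELPER=y
        CONFIG_DRM_I915=y         # Intel integrated graphics
        CONFIG_DRM_I915_KMS=y     # enable kernel modesetting by default (option on older kernels)
        # CONFIG_DRM_NOUVEAU=m    # use instead if the discrete NVIDIA GPU drives the panel

    On Gentoo, also make sure VIDEO_CARDS in /etc/make.conf includes the matching entry (intel or nouveau) so the corresponding xf86-video driver gets built, then rebuild the kernel and xorg-drivers.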

    Read the article

  • How to get rid of Thunderbird error "[NONEXISTENT] Unknown Mailbox: Trash"

    - by endolith
    Every time I start Thunderbird, I get this error: The current operation on 'Trash' did not succeed. The mail server for account [email protected] responded: [NONEXISTENT] Unknown Mailbox: Trash (now in authenticated state) (Failure). I've figured out that it goes away if I create folders in my Gmail accounts called "[Imap]/Trash" and "[Imap]/Sent", but why are these necessary? Even if I tell Thunderbird to unsubscribe from them, it still notices when I remove the labels and starts popping up this error again. One account also complains if I remove the "Sent Items" label.

    Read the article

  • Chennai Metro Water [Indian Websites]

    - by samsudeen
    If you are living in Chennai, you will definitely have this question in your mind: “Will Chennai survive this summer without any water crisis?”. Chennai Metro Water maintains a website where you can find all the details about water storage and supply. This site has all the information about water supply that matters for Chennai residents. I have given some of the uses of this site below.
    Water Sewer Application Status: If you are a new resident and have applied for a water/sewer connection, you can check your application (Water Sewer Application Status) by entering your WA No (Water Application No). You can also download all the required application forms.
    Register Complaints: You can register complaints (Register Complaints) about water supply, defective water meters and sewage blockages online.
    Pay Water & Sewage Tax: You can pay your water & sewage tax (Pay Tax) online, including any dues/arrears left out.
    Lake Level: This site also maintains the current (Lake Level) and historic water levels of the Chennai water reservoirs, including the daily inflow/outflow of water. You can also see the daily/monthly rainfall levels at these reservoirs from 1965 onwards.
    Hope this information is a little useful for those who are planning or building a new house within the Chennai Corporation.

    Read the article

  • Getting Started with Employee Info Starter Kit (v4.0.0)

    - by Mohammad Ashraful Alam
    The new release of Employee Info Starter Kit contains lots of exciting features available in Visual Studio 2010 and .NET 4.0. To get started with the new version, you will need less than 5 minutes.
    Minimum System Requirements: Before getting started, please make sure you have Visual Studio 2010 RC (or higher) and SQL Server 2005 Express edition (or higher) installed on your machine.
    Running the Starter Kit for the First Time
    1. Download the starter kit 4.0.0 version from here and extract it.
    2. Go to <extraction folder>\Source\Eisk.Solution and click the solution file.
    3. From the solution explorer, right-click the “Eisk.Web” web site project node, select “Set as Startup Project” and hit Ctrl + F5.
    4. You will be prompted to install the database; just follow the instructions.
    That’s it! You are ready to use this starter kit.
    Running the Tests
    Employee Info Starter Kit contains an infrastructure for integration and unit testing, by utilizing the cool test tools in Visual Studio 2010. Once you complete the steps mentioned above, take a minute to run the test cases on the fly.
    1. From the solution explorer, go to “Solution Items\e-i-s-k-2010.vsmdi” and click it. You will see the available tests in the Visual Studio Test Lists. Select all except the “Load Tests” node (since Load Tests take a bit of time).
    2. Click the “Run Checked Tests” control in the upper left corner. You will see the tests running and finally the status of the tests, which indicates the current health of your application from different scenarios.

    Read the article

  • Windows Azure Learning Plan - SQL Azure

    - by BuckWoody
    This is one in a series of posts on a Windows Azure Learning Plan. You can find the main post here. This one deals with Security for  Windows Azure.   Overview and Training Overview and general  information about SQL Azure - what it is, how it works, and where you can learn more. General Overview (sign-in required, but free) http://social.technet.microsoft.com/wiki/contents/articles/inside-sql-azure.aspx General Guidelines and Limitations http://msdn.microsoft.com/en-us/library/ee336245.aspx Microsoft SQL Azure Documentation http://msdn.microsoft.com/en-us/windowsazure/sqlazure/default.aspx Samples and Learning Sources for online and other SQL Azure Training Free Online Training http://blogs.msdn.com/b/sqlazure/archive/2010/05/06/10007449.aspx 60-minute Overview (webcast) https://msevents.microsoft.com/CUI/WebCastEventDetails.aspx?culture=en-US&EventID=1032458620&CountryCode=US Architecture SQL Azure Internals and Architectures for Scale Out and other use-cases. SQL Azure Architecture http://social.technet.microsoft.com/wiki/contents/articles/inside-sql-azure.aspx Scale-out Architectures http://tinyurl.com/247zm33 Federation Concepts http://tinyurl.com/34eew2w Use-Cases http://blogical.se/blogs/jahlen/archive/2010/11/23/sql-azure-why-use-it-and-what-makes-it-different-from-sql-server.aspx SQL Azure Security Model (video) http://www.msdev.com/Directory/Description.aspx?EventId=1491 Administration Standard Administrative Tasks and Tools Tools Options http://social.technet.microsoft.com/wiki/contents/articles/overview-of-tools-to-use-with-sql-azure.aspx SQL Azure Migration Wizard http://sqlazuremw.codeplex.com/ Managing Databases and Login Security http://msdn.microsoft.com/en-us/library/ee336235.aspx General Security for SQL Azure http://msdn.microsoft.com/en-us/library/ff394108.aspx Backup and Recovery http://social.technet.microsoft.com/wiki/contents/articles/sql-azure-backup-and-restore-strategy.aspx More Backup and Recovery Options http://social.technet.microsoft.com/wiki/contents/articles/current-options-for-backing-up-data-with-sql-azure.aspx Syncing Large Databases to SQL Azure http://blogs.msdn.com/b/sync/archive/2010/09/24/how-to-sync-large-sql-server-databases-to-sql-azure.aspx Programming Programming Patterns and Architectures for SQL Azure systems. How to Build and Manage a Business Database on SQL Azure http://tinyurl.com/25q5v6g Connection Management http://social.technet.microsoft.com/wiki/contents/articles/sql-azure-connection-management-in-sql-azure.aspx Transact-SQL Supported by SQL Azure http://msdn.microsoft.com/en-us/library/ee336250.aspx

    Read the article

  • Move smaller hard drive to partition on a larger hard drive

    - by bluejeansummer
    My parents bought a new hard drive for a laptop that I've owned for several years. It's much larger than the current one, so I plan on splitting it up to dual boot it with Ubuntu. I have no problem with partitioning a drive (I always keep a LiveCD handy), but my question is this: how can I go about moving the existing partition to the new drive? This is a laptop, so I can't simply plug the new drive into another slot. Also, even if I manage to move it, will Windows still work on the new drive in a larger partition? I've had this laptop for quite a while, and I've lost the recovery discs that came with it a long time ago. I also have a lot of software without CDs to reinstall them with. This makes not reinstalling Windows a high priority. In case it helps, both drives use 2.5" PATA, and I have a 1 TB external drive available if it's needed.
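    One hedged approach, given the external 1 TB drive: boot a live CD, image the old disk onto the external drive, swap the disks, and restore the image onto the new one. The device names below are assumptions -- double-check them with fdisk -l before copying anything.

        # with the old disk still installed and the external drive on /dev/sdb1
        sudo mkdir -p /mnt/external
        sudo mount /dev/sdb1 /mnt/external
        sudo dd if=/dev/sda of=/mnt/external/old-disk.img bs=4M conv=noerror,sync

        # swap in the new disk, boot the live CD again, then restore
        sudo mount /dev/sdb1 /mnt/external
        sudo dd if=/mnt/external/old-disk.img of=/dev/sda bs=4M

    The restored Windows partition keeps its original size; grow it afterwards with GParted from the live CD. Windows normally tolerates this because only the disk has changed, not the rest of the hardware.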

    Read the article

  • Sudo yum seems to fail on CentOS, but works fine after sudo -i

    - by Aron Rotteveel
    I am currently having some trouble with yum through sudo. For some reason, it does not seem to work: aron@graviton [/var/log]# sudo yum clean all There was a problem importing one of the Python modules required to run yum. The error leading to this problem was: /usr/lib64/python2.4/lib-dynload/datetime.so: failed to map segment from shared object: Cannot allocate memory Please install a package which provides this module, or verify that the module is installed correctly. It's possible that the above module doesn't match the current version of Python, which is: 2.4.3 (#1, Sep 3 2009, 15:37:37) [GCC 4.1.2 20080704 (Red Hat 4.1.2-46)] If you cannot solve this problem yourself, please go to the yum faq at: http://wiki.linux.duke.edu/YumFaq The strange thing, however, is that it works fine when I gain root privileges through sudo -i first. Any ideas what might be causing this problem?
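    The "failed to map segment ... Cannot allocate memory" error usually points at an address-space limit that applies to the plain sudo path but not to a full root login. A quick, non-destructive way to compare the two (standard commands only):

        sudo sh -c 'ulimit -a'   # limits in effect for a plain sudo command
        sudo -i                  # full root login shell...
        ulimit -a                # ...limits in effect after sudo -i; compare the two

        # if the "virtual memory" / "max memory size" lines differ, look for caps here:
        grep -v '^#' /etc/security/limits.conf
        ls /etc/security/limits.d/ 2>/dev/null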

    Read the article

  • Do game-theoretic considerations stand in the way of this market-based game-mechanic achieving its goals?

    - by BerndBrot
    Mechanic
    The mechanic is called "market manipulation" and is supposed to work like this:
    Players can enter the London Stock Exchange (LSE).
    The LSE displays the stock prices of 8 to 10 companies and derivatives. This number is relatively small to ensure that players will collide in their efforts to manipulate the market in their favor.
    The prices are calculated based on real-world prices of these companies and derivatives (in real time), any market manipulations conducted by the players, and any market corrections of the system.
    Players can buy and sell shares with cash, a resource in the game, at current in-game market value.
    Players can manipulate the market, i.e. let the price of a share either rise or fall, by some amount, over a certain period of time. Manipulating the market requires spending certain in-game resources and is therefore limited.
    The system continuously corrects market manipulations by letting the in-game prices converge towards their real-world counterparts at a rate of 2% of the difference between the two per hour. Because of this market correction mechanism, pushing prices up (or screwing them down) becomes increasingly difficult the higher (lower) the price already is. (A small worked example of this correction rule follows below.)
    Goals
    Players are supposed to collide (and have incentives to collide) in their efforts to manipulate the market in their favor, especially when it comes to manipulation efforts by different groups.
    Prices should not settle around any equilibrium points. The more variance the better.
    Band-wagoning should always involve risk (recognizing that prices have started rising should not be a sure sign that they will keep rising, so that everybody can make easy profits even when they don't manipulate the market themselves).
    Question
    Are there any game-theoretic considerations that prevent the mechanic from achieving these goals?
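    As a quick illustration of the correction rule (the numbers are made up and the names are not from the game): every hour the in-game price moves 2% of the way toward the live real-world quote, so a manipulated price decays only slowly.

        using System;

        class MarketCorrectionDemo
        {
            static void Main()
            {
                decimal realPrice = 100m;  // live real-world quote
                decimal gamePrice = 150m;  // in-game price after players pushed it up

                for (int hour = 1; hour <= 5; hour++)
                {
                    // converge by 2% of the remaining gap per hour
                    gamePrice += 0.02m * (realPrice - gamePrice);
                    Console.WriteLine("hour {0}: {1:F2}", hour, gamePrice);
                }
                // hour 1: 149.00, hour 2: 148.02, ... -- after 5 hours the price is
                // still above 145, so a push has a long-lasting effect.
            }
        }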

    Read the article

  • SharePoint and COMException (0x80004005): Cannot complete this action

    - by Damon
    I ran into a small issue today working on a deployment. We were moving a custom ASP.NET control from my development environment into a SharePoint layout page on a staging environment. I was expecting some minor issues to arise since I had developed the control in an ASP.NET website project, but after getting everything moved over we got an obscure COMException error that looked like this: Cannot complete this action. Please try again. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Runtime.InteropServices.COMException: Cannot complete this action. [COMException (0x80004005): Cannot complete this action. ... Lengthy stack trace goes here. Everything in the custom control was built using managed code, so we weren't sure why a COMException would suddenly appear. The control made use of an ITemplate to define its UI, so there was a lot of markup and binding code inside the template. As such, we started taking chunks of the template out of the layout page and eventually the error went away. It was being caused by a section of code where we were calling a custom utility method inside some binding code: <%# WebUtility.FormatDecimal(.) %> Solution: It turns out that we were missing an Assembly and Import directive at the top of the page to let the page know where to find this method. After adding these to the page, the error went away and everything worked great. So a COMException (0x80004005) Cannot complete this action error is just SharePoint's friendly way of letting you know you're missing an assembly or imports reference.
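    For reference, the kind of directives that were missing look like the following; the assembly name, public key token and namespace here are placeholders for whatever assembly WebUtility actually lives in:

        <%@ Assembly Name="MyCompany.SharePoint.Utilities, Version=1.0.0.0, Culture=neutral, PublicKeyToken=xxxxxxxxxxxxxxxx" %>
        <%@ Import Namespace="MyCompany.SharePoint.Utilities" %>

    With those at the top of the layout page, the binding expression can resolve the method at compile time.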

    Read the article

  • How to call Office365 web service in a Console application using WCF

    - by ybbest
    In my previous post, I showed you how to call the SharePoint web service using a console application. In this post, I’d like to show you how to call the same web service in the cloud, aka Office365.In office365, it uses claims authentication as opposed to windows authentication for normal in-house SharePoint Deployment. For Details of the explanation you can see Wictor’s post on this here. The key to make it work is to understand when you authenticate from Office365, you get your authentication token. You then need to pass this token to your HTTP request as cookie to make the web service call. Here is the code sample to make it work.I have modified Wictor’s by removing the client object references. static void Main(string[] args) { MsOnlineClaimsHelper claimsHelper = new MsOnlineClaimsHelper( "[email protected]", "YourPassword","https://ybbest.sharepoint.com/"); HttpRequestMessageProperty p = new HttpRequestMessageProperty(); var cookie = claimsHelper.CookieContainer; string cookieHeader = cookie.GetCookieHeader(new Uri("https://ybbest.sharepoint.com/")); p.Headers.Add("Cookie", cookieHeader); using (ListsSoapClient proxy = new ListsSoapClient()) { proxy.Endpoint.Address = new EndpointAddress("https://ybbest.sharepoint.com/_vti_bin/Lists.asmx"); using (new OperationContextScope(proxy.InnerChannel)) { OperationContext.Current.OutgoingMessageProperties[HttpRequestMessageProperty.Name] = p; XElement spLists = proxy.GetListCollection(); foreach (var el in spLists.Descendants()) { //System.Console.WriteLine(el.Name); foreach (var attrib in el.Attributes()) { if (attrib.Name.LocalName.ToLower() == "title") { System.Console.WriteLine("> " + attrib.Name + " = " + attrib.Value); } } } } System.Console.ReadKey(); } } You can download the complete code from here. Reference: Managing shared cookies in WCF How to do active authentication to Office 365 and SharePoint Online

    Read the article
