Search Results

Search found 12848 results on 514 pages for 'multi core'.


  • Server Core in Windows Server 2012 - Improved Taste, Less Filling, More Uptime

    - by KeithMayer
    Would you like to reduce your patch maintenance requirements by over a third? Of course! Who wouldn't? Server Core in Windows Server 2012 reduces the disk footprint of the operating system by approximately 4GB! When using the Server Core installation option, the features related to the Server Graphical Shell (i.e., Explorer, the Start Screen, and Internet Explorer) and Graphical Management Tools and Infrastructure are not installed – GUI features that are usually not required on a dedicated server.

    Read the article

  • Intel fields six-core embedded CPUs

    LinuxDevices: "The Xeon Processor 5600 series also includes the chipmaker's first six-core embedded processors, plus a dual-core processor for "micro servers" that has a TDP of only 30 Watts, the company says."

    Read the article

  • What’s new in ASP.NET 4.0: Core Features

    - by Rick Strahl
    Microsoft released the .NET Runtime 4.0 and with it comes a brand spanking new version of ASP.NET – version 4.0 – which provides an incremental set of improvements to an already powerful platform. .NET 4.0 is a full release of the .NET Framework, unlike version 3.5, which was merely a set of library updates on top of the .NET Framework version 2.0. Because of this full framework revision, there has been a welcome bit of consolidation of assemblies and configuration settings. The full runtime version change to 4.0 also means that you have to explicitly pick version 4.0 of the runtime when you create a new Application Pool in IIS, unlike .NET 3.5, which actually requires version 2.0 of the runtime. In this first of two parts I'll take a look at some of the changes in the core ASP.NET runtime. In the next edition I'll go over improvements in Web Forms and Visual Studio.

    Core Engine Features

    Most of the high-profile improvements in ASP.NET have to do with Web Forms, but there are a few gems in the core runtime that should make life easier for ASP.NET developers. The following sections describe some of the things I've found useful among the new features.

    Clean web.config Files Are Back!

    If you've been using ASP.NET 3.5, you've probably noticed that the web.config file has turned into quite a mess of configuration settings, between all the custom handler and module mappings for the various web server versions. Part of the reason for this mess is that .NET 3.5 is a collection of add-on components running on top of the .NET Runtime 2.0, so almost all of the new features of .NET 3.5 were essentially introduced as custom modules and handlers that had to be explicitly configured in the config file. Because the core runtime didn't rev with 3.5, all those configuration options couldn't be moved up to other configuration files in the system chain. With version 4.0 a consolidation was possible, and the result is a much simpler web.config file by default. A default empty ASP.NET 4.0 Web Forms project looks like this:

        <?xml version="1.0"?>
        <configuration>
          <system.web>
            <compilation debug="true" targetFramework="4.0" />
          </system.web>
        </configuration>

    Need I say more?

    Configuration Transformation Files to Manage Configurations and Application Packaging

    ASP.NET 4.0 introduces the ability to create multi-target configuration files. This means it's possible to create a single configuration file that can be transformed based on relatively simple replacement rules, using the XML Document Transform (XDT) syntax provided by Visual Studio and WebDeploy. The idea is that you create a 'master' configuration file and then create customized versions of it by applying relatively simple search-and-replace, add, or remove logic to specific elements and attributes in the original file.
    To give you an idea, here's the example code that Visual Studio creates for a default web.Release.config file, which replaces a connection string, removes the debug attribute, and replaces the customErrors section:

        <?xml version="1.0"?>
        <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
          <connectionStrings>
            <add name="MyDB"
                 connectionString="Data Source=ReleaseSQLServer;Initial Catalog=MyReleaseDB;Integrated Security=True"
                 xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/>
          </connectionStrings>
          <system.web>
            <compilation xdt:Transform="RemoveAttributes(debug)" />
            <customErrors defaultRedirect="GenericError.htm" mode="RemoteOnly" xdt:Transform="Replace">
              <error statusCode="500" redirect="InternalError.htm"/>
            </customErrors>
          </system.web>
        </configuration>

    You can see the transform syntax that drives this functionality. Basically, only the elements listed in the override file are matched and updated – all the rest of the original web.config file stays intact. Visual Studio 2010 supports this functionality directly in the project system, so it's easy to create and maintain these customized configurations in the project tree. Once you're ready to publish your application, you can use the Publish <yourWebApplication> option on the Build menu, which allows publishing to disk, via FTP, or to a Web Server using Web Deploy. You can also create a deployment package as a .zip file, which can be used by the WebDeploy tool to configure and install the application. You can manually run the Web Deploy tool or use the IIS Manager to install the package on the server or another machine. You can find out more about WebDeploy and packaging here: http://tinyurl.com/2anxcje.

    Improved Routing

    Routing provides a relatively simple way to create clean URLs with ASP.NET by associating a template URL path with a specific ASP.NET HttpHandler. Microsoft first introduced routing with ASP.NET MVC and then integrated a basic implementation into the core ASP.NET engine via a separate ASP.NET routing assembly. In ASP.NET 4.0, using routing gets a bit easier. First, routing is now rolled directly into System.Web, so no extra assembly reference is required in your projects. The RouteCollection class now includes a MapPageRoute() method that makes it easy to route any ASP.NET Page request without first having to build an IRouteHandler implementation. It would have been nice if this had been extended to serve *any* handler, but unfortunately for anything but Page-derived handlers you still have to implement a custom IRouteHandler. ASP.NET pages now include a RouteData collection that contains route information, so retrieving route data is as easy as this.RouteData.Values["routeKey"], where routeKey is the value specified in the route template (i.e., "users/{userId}" would use Values["userId"]).
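    To make these routing pieces concrete, here's a minimal sketch (not code from the article) of registering and consuming a page route; the route name "users" and the Users.aspx page are hypothetical examples:

        using System;
        using System.Web.Routing;

        // Global.asax code-behind: register routes once at application startup.
        public class Global : System.Web.HttpApplication
        {
            void Application_Start(object sender, EventArgs e)
            {
                // MapPageRoute() maps a URL template straight to a Web Forms page,
                // with no custom IRouteHandler implementation required.
                RouteTable.Routes.MapPageRoute("users", "users/{userId}", "~/Users.aspx");
            }
        }

        // In the code-behind of Users.aspx, the route value is then available as:
        //     string userId = (string)this.RouteData.Values["userId"];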
    The Page class also has a GetRouteUrl() method that you can use to create URLs from route data values rather than hardcoding the URL:

        <%= this.GetRouteUrl("users", new { userId="ricks" }) %>

    You can also use the new expression syntax <%$RouteUrl %> to accomplish something similar, which can be easier to embed into Page or MVC View code:

        <a runat="server" href='<%$RouteUrl:RouteName=user, id=ricks %>'>Visit User</a>

    Finally, the Response object includes a new RedirectToRoute() method to build a route URL for redirection without hardcoding it:

        Response.RedirectToRoute("users", new { userId = "ricks" });

    All of these routines are helpers that have been integrated into the core ASP.NET engine to make it easier to create routes and retrieve route data, which hopefully will result in more people taking advantage of routing in ASP.NET. To find out more about the routing improvements, check out Dan Maharry's blog, which has a couple of nice entries on this subject: http://tinyurl.com/37trutj and http://tinyurl.com/39tt5w5.

    Session State Improvements

    Session state is an often used and abused feature in ASP.NET, and version 4.0 introduces a few enhancements geared towards making session state more efficient and minimizing at least some of the ill effects of overuse. The first improvement affects out-of-process session state, which is typically used in web farm environments or for sites that store application-sensitive data that must survive AppDomain restarts (which in my opinion is just about any application). When using out-of-process session state, ASP.NET serializes all the data in the session state bag into a blob that gets carried over the network and stored either in the State Server or in SQL Server via the session provider. Version 4.0 improves this serialization by offering an enableCompression option on the web.config <sessionState> section, which forces the serialized session state to be compressed. Depending on the type of data being serialized, this compression can reduce the size of the data travelling over the wire by as much as a third. It works best on string data, but can also reduce the size of binary data. In addition, ASP.NET 4.0 now offers a way to programmatically turn session state on or off as part of request processing. In prior versions, the only way to specify whether session state is available was by implementing a marker interface on the HTTP handler implementation. In ASP.NET 4.0, you can turn session state on and off programmatically via HttpContext.Current.SetSessionStateBehavior() as part of the ASP.NET module pipeline, as long as it occurs before the AcquireRequestState pipeline event.
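    To make the pipeline timing requirement concrete, here's a minimal sketch of an HTTP module that switches session state off for certain requests; the module name and the "/static/" path rule are illustrative assumptions, and the module would still need to be registered in web.config:

        using System;
        using System.Web;
        using System.Web.SessionState;

        public class SessionStateTogglingModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                // BeginRequest fires well before AcquireRequestState, so it's
                // still legal to change the session state behavior here.
                app.BeginRequest += (sender, e) =>
                {
                    HttpContext context = ((HttpApplication)sender).Context;
                    if (context.Request.Path.StartsWith("/static/", StringComparison.OrdinalIgnoreCase))
                    {
                        // Skip session loading and locking entirely for this request.
                        context.SetSessionStateBehavior(SessionStateBehavior.Disabled);
                    }
                };
            }

            public void Dispose() { }
        }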
    Output Cache Provider

    Output caching in ASP.NET has been a very useful but potentially memory-intensive feature. The default OutputCache mechanism works through in-memory storage that persists generated output based on various lifetime-related parameters. While this works well enough for many scenarios, it can also quickly cause runaway memory consumption as the cache fills up and serves many variations of pages on your site. ASP.NET 4.0 introduces a provider model for the OutputCache module, so it becomes possible to plug in custom storage strategies for cached pages. One of the goals also appears to be to consolidate some of the different cache storage mechanisms used in .NET under a generic Windows AppFabric framework in the future, so that various mechanisms like OutputCache, the non-Page-specific ASP.NET cache, and possibly even session state can eventually use the same caching engine for storage of persisted data, both in-memory and in out-of-process scenarios. For developers, the OutputCache provider feature means that you can now extend caching on your own by implementing a custom cache provider based on the System.Web.Caching.OutputCacheProvider class. You can find more info on creating an Output Cache provider in Gunnar Peipman's blog at http://tinyurl.com/2vt6g7l.
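    As a rough sketch of what that extension point looks like (again, not code from the article), a custom provider derives from OutputCacheProvider and implements its four members. The in-memory dictionary storage and the SimpleOutputCacheProvider name here are illustrative only; a real provider would typically store entries out of process:

        using System;
        using System.Collections.Concurrent;
        using System.Web.Caching;

        public class SimpleOutputCacheProvider : OutputCacheProvider
        {
            private class Entry { public object Value; public DateTime UtcExpiry; }

            private readonly ConcurrentDictionary<string, Entry> _cache =
                new ConcurrentDictionary<string, Entry>();

            public override object Get(string key)
            {
                Entry e;
                if (!_cache.TryGetValue(key, out e)) return null;
                if (e.UtcExpiry < DateTime.UtcNow) { Remove(key); return null; }
                return e.Value;
            }

            public override object Add(string key, object entry, DateTime utcExpiry)
            {
                // Per the provider contract, Add() returns any existing entry
                // rather than overwriting it; only Set() overwrites.
                object existing = Get(key);
                if (existing != null) return existing;
                Set(key, entry, utcExpiry);
                return entry;
            }

            public override void Set(string key, object entry, DateTime utcExpiry)
            {
                _cache[key] = new Entry { Value = entry, UtcExpiry = utcExpiry };
            }

            public override void Remove(string key)
            {
                Entry ignored;
                _cache.TryRemove(key, out ignored);
            }
        }

    The provider would then be registered under the <caching><outputCache> section of web.config and selected via its defaultProvider attribute.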
    Response.RedirectPermanent

    ASP.NET 4.0 includes the ability to issue a permanent redirect as an HTTP 301 Moved Permanently response rather than the standard 302 Found response. In pre-4.0 versions you had to create your permanent redirect manually by setting the Status and StatusCode properties – Response.RedirectPermanent() makes this operation more obvious and discoverable. There's also a Response.RedirectToRoutePermanent(), which provides permanent redirection of route URLs.

    Preloading of Applications

    ASP.NET 4.0 provides a new feature to preload ASP.NET applications on startup, which is meant to provide a more consistent startup experience. If your application has a lengthy startup cycle, it can appear very slow to serve data to clients while it is warming up and loading initial resources. So rather than serve these startup requests slowly, ASP.NET 4.0 can force the application to initialize itself fully before accepting requests for processing. This feature works only on IIS 7.5 (Windows 7 and Windows Server 2008 R2) and works in combination with IIS. You can set up a worker process in IIS 7.5 to always be running, which starts the Application Pool worker process immediately. ASP.NET 4.0 then allows you to specify site-specific settings by setting serviceAutoStartEnabled on a particular site, along with an optional serviceAutoStartProvider class that can be used to receive "startup events" when the application starts up. This event in turn can be used to configure the application and optionally pre-load cache data and other information required by the app on startup. The configuration settings need to be made in applicationHost.config:

        <sites>
          <site name="WebApplication2" id="1">
            <application path="/"
                         serviceAutoStartEnabled="true"
                         serviceAutoStartProvider="PreWarmup" />
          </site>
        </sites>
        <serviceAutoStartProviders>
          <add name="PreWarmup" type="PreWarmupProvider,MyAssembly" />
        </serviceAutoStartProviders>

    Hooking up a warm-up provider is optional, so you can omit the provider definition and reference. If you do define one, here's what it looks like:

        public class PreWarmupProvider : System.Web.Hosting.IProcessHostPreloadClient
        {
            public void Preload(string[] parameters)
            {
                // initialization for app
            }
        }

    While this code is running, ASP.NET/IIS will hold requests from hitting the pipeline, so the application will not start taking requests until it completes. The idea is that you can perform any pre-loading of resources and cache values so that the first request will be ready to perform at an optimal performance level, without lag.

    Runtime Performance Improvements

    According to Microsoft, there have also been a number of invisible performance improvements in the internals of the ASP.NET runtime that should make ASP.NET 4.0 applications run more efficiently and use fewer resources. These improvements require no changes in applications and are virtually transparent, except that you get the benefits simply by updating to ASP.NET 4.0.

    Summary

    The core feature set changes are minimal, which continues a tradition of small incremental changes to the ASP.NET runtime. ASP.NET has proven to be a solid platform, and I'm actually rather happy to see that most of the effort in this release went into stability, performance, and usability improvements rather than a massive amount of new features. The new functionality added in 4.0 is minimal but very useful. A lot of people are still running pure .NET 2.0 applications these days and have stayed off .NET 3.5 for some time now. I think that version 4.0, with its full .NET runtime rev and its assembly and configuration consolidation, will make an attractive platform for developers to update to. If you're a Web Forms developer in particular, ASP.NET 4.0 includes a host of new features in the Web Forms engine that are significant enough to warrant a quick move to .NET 4.0. I'll cover those changes in my next column. Until then, I suggest you give ASP.NET 4.0 a spin and see for yourself how the new features can help you out.

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET.

    Read the article

  • Intel Core i7-4960HQ vs. 4850HQ (Haswell) [on hold]

    - by Timothy R. Butler
    I'm looking at the new MacBook Pros and trying to decide between the Core i7-4960HQ (2.6 GHz) and the i7-4850HQ (2.3 GHz). I've found some synthetic benchmarks comparing them, but I haven't found a lot of data, so I'd appreciate any pointers to good comparisons for the Haswell family (especially these two processors). My cursory analysis suggests there isn't a huge gain from the extra 300 MHz. I'd like to determine not only whether this is generally true, but also whether the performance gains come at too high a cost. Is the 2.6 GHz part going to push the limits of what can fit in a thin laptop without overheating? I've looked at some of Intel's documentation, but have not been able to determine what the normal and maximum operating temperature differences are between the models. In the past, there have been times when Intel's fastest models in a given range ran especially hot and/or consumed significantly more power compared to slightly slower models. Do those concerns factor into the current generation?

    Read the article

  • Deploying an EAR to JBOSS times out (org.rhq.core.pc.inventory.TimeoutException:)

    - by rangalo
    Hi, I am trying to deploy an EAR file to JBoss AS (default server). The application is the Mavenised version of the examples from the Seam in Action book. When I copy the file to $JBOSS_HOME/server/default/deploy, I don't get any exception, but the application doesn't respond; after some time, trying to access the application from the browser produces the log output below. While deploying with the admin-console (http://localhost:8080/admin-console) I get the error message below. PS: After this, JBoss gets into an unusable state – I cannot even access the admin-console; I just have to kill it.

    Error message in admin-console:

        Failed to create Resource Open18.ear - cause: org.rhq.core.pc.inventory.TimeoutException: Call to [org.rhq.plugins.jbossas5.ApplicationServerComponent.createResource()] with args [[CreateResourceReport: ResourceType=[ResourceType[id=0, category=Service, name=Enterprise Application (EAR), plugin=JBossAS5]], ResourceKey=[null]]] timed out. Invocation thread will be interrupted
            at org.rhq.core.pc.inventory.ResourceContainer$ResourceComponentInvocationHandler.invokeInNewThreadWithLock(ResourceContainer.java:437)
            at org.rhq.core.pc.inventory.ResourceContainer$ResourceComponentInvocationHandler.invoke(ResourceContainer.java:406)
            at $Proxy266.createResource(Unknown Source)
            at org.rhq.core.pc.inventory.CreateResourceRunner.call(CreateResourceRunner.java:113)
            at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
            at java.util.concurrent.FutureTask.run(FutureTask.java:138)
            at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
            at java.lang.Thread.run(Thread.java:619)

    Error logs:

        4:08:58,555 INFO [TableMetadata] foreign keys: [fkaf42e01ba13c3380, fk_course_ref_facility]
        14:08:58,555 INFO [TableMetadata] indexes: [course_pkey]
        14:08:58,645 INFO [TableMetadata] table found: public.facility
        14:08:58,645 INFO [TableMetadata] columns: [zip, phone, state, type, uri, city, country, id, price_range, address, county, description, name]
        14:08:58,645 INFO [TableMetadata] foreign keys: []
        14:08:58,645 INFO [TableMetadata] indexes: [facility_pkey]
        14:08:58,705 INFO [TableMetadata] table found: public.hole
        14:08:58,705 INFO [TableMetadata] columns: [id, m_par, l_handicap, name, l_par, number, course_id, m_handicap]
        14:08:58,705 INFO [TableMetadata] foreign keys: [fk_hole_ref_course, fk30f4c09c3f1200]
        14:08:58,705 INFO [TableMetadata] indexes: [hole_pkey, uniq_hole_number]
        14:08:58,764 INFO [TableMetadata] table found: public.tee
        14:08:58,764 INFO [TableMetadata] columns: [hole_id, distance, tee_set_id]
        14:08:58,764 INFO [TableMetadata] foreign keys: [fk1c014f8de7677, fk_tee_ref_hole, fk1c014c69de560, fk_tee_ref_tee_set]
        14:08:58,764 INFO [TableMetadata] indexes: [tee_pkey]
        14:08:58,826 INFO [TableMetadata] table found: public.tee_set
        14:08:58,826 INFO [TableMetadata] columns: [id, color, m_slope_rating, l_slope_rating, name, course_id, m_course_rating, l_course_rating, pos]
        14:08:58,826 INFO [TableMetadata] foreign keys: [fk_tee_set_ref_course, fkaa6881b79c3f1200]
        14:08:58,826 INFO [TableMetadata] indexes: [tee_set_pkey, uniq_tee_set_pos, uniq_tee_set_color]
        14:08:58,827 INFO [SchemaUpdate] schema update complete
        14:08:58,829 INFO [NamingHelper] JNDI InitialContext properties: {java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory, java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces}
        14:08:58,850 INFO [TomcatDeployment] deploy, ctxPath=/Open18
        14:15:53,969 WARN [DiscoveryComponentProxyFactory] The discovery component for resource type [ResourceType[id=0, category=Service, name=Connector, plugin=JBossAS5]] has been blacklisted
        14:15:53,970 WARN [InventoryManager] Failure during discovery for [Connector] Resources - failed after 300002 ms.
        org.rhq.core.pc.inventory.TimeoutException: Call to [org.rhq.plugins.jbossas5.ConnectorDiscoveryComponent.discoverResources()] with args [[org.rhq.core.pluginapi.inventory.ResourceDiscoveryContext@96db1]] timed out. Invocation thread will be interrupted
            at org.rhq.core.pc.util.DiscoveryComponentProxyFactory$ResourceDiscoveryComponentInvocationHandler.invokeInNewThread(DiscoveryComponentProxyFactory.java:208)
            at org.rhq.core.pc.util.DiscoveryComponentProxyFactory$ResourceDiscoveryComponentInvocationHandler.invoke(DiscoveryComponentProxyFactory.java:181)
            at $Proxy249.discoverResources(Unknown Source)
            at org.rhq.core.pc.inventory.InventoryManager.invokeDiscoveryComponent(InventoryManager.java:272)
            at org.rhq.core.pc.inventory.InventoryManager.executeComponentDiscovery(InventoryManager.java:1697)
            at org.rhq.core.pc.inventory.RuntimeDiscoveryExecutor.discoverForResource(RuntimeDiscoveryExecutor.java:218)
            at org.rhq.core.pc.inventory.RuntimeDiscoveryExecutor.discoverForResource(RuntimeDiscoveryExecutor.java:234)
            at org.rhq.core.pc.inventory.RuntimeDiscoveryExecutor.runtimeDiscover(RuntimeDiscoveryExecutor.java:134)
            at org.rhq.core.pc.inventory.RuntimeDiscoveryExecutor.call(RuntimeDiscoveryExecutor.java:94)
            at org.rhq.core.pc.inventory.RuntimeDiscoveryExecutor.call(RuntimeDiscoveryExecutor.java:51)
            at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
            at java.util.concurrent.FutureTask.run(FutureTask.java:138)
            at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
            at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:207)
            at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
            at java.lang.Thread.run(Thread.java:619)
        14:15:53,981 WARN [NavigationContent] Unable to find node for deleted resource [Resource[id=-5, type=Connector, key=ajp://127.0.0.1:8009, name=ajp://127.0.0.1:8009, parent=JBoss Web]]

    Read the article

  • OpenGL extension vs OpenGL core

    - by user209347
    I have a question: I'm writing a cross-platform OpenGL engine in C++, and I've found that Windows forces developers to access OpenGL features above version 1.1 through extensions. On Linux, I know that I can call functions directly if the implementation supports them, through glext.h and the reported OpenGL version. The problem is: if the core implementation on Linux doesn't support a feature, is it possible that an extension supports the same functionality – in my case, vertex buffer objects? I'm doing something like this: on Windows, #define glFunction functionpointer_to_the_extension; on Linux, since glext.h already declares glFunction, I can write glFunction in client code and compile it on both Windows AND Linux without changing a single line in the client code using the engine (my goal). Now, I saw a tutorial that uses only the extension on Linux, without checking the OpenGL implementation version. If functionality is available in the core, is it also available as an extension (VBOs, e.g.)? Or is an extension something you never know is available? I want to write an engine that exploits everything the hardware offers, so I need to check (on Linux) for extensions as well as the core version for possible functionality.

    Read the article

  • Picking Core Language For Large Scale Web Platform

    - by ryanzec
    I have worked with PHP and ASP.NET quite a bit and have also played around with a few other languages for web development. I am now at a point where I need to start building a backend platform that will have the ability to support a large set of applications, and I am trying to figure out which language I want to choose as my core language. By core language I mean the language that the majority of the backend code is going to be in. This is not to say that other languages won't be used – my guess is that they will – but I want a large majority of the code (90%-98%) to be in one language. While I see the benefit of using the language that is best for each job, having 15% in PHP, 15% in ASP.NET, 5% in Perl, 10% in Python, 15% in Ruby, etc. seems like a very bad idea to me (not to mention that integrating everything seamlessly would probably add a bit of overhead). If you were going to build a large-scale web platform from scratch that needs to support multiple applications, what would you choose as your core language, and why?

    Read the article

  • Silverlight TV 19: Hidden Gems from MIX10, UFC's Multi-Touch App

    John ran into Silverlight MVP Ward Bell of IdeaBlade while at MIX10 (how could anyone miss him!). Ward was kind enough to sit and talk with John to show off the multi-touch application his company wrote for UFC using Silverlight. It uses multi-touch, Caliburn, MVVM, and, of course, Silverlight! Relevant links: John's Blog | Ward's Blog | Silverlight 4 RC Features (or download here) | Follow us on Twitter @SilverlightTV | Learn more about Silverlight with the new Silverlight Training Course

    Read the article

  • "Il faut repenser les OS pour les processeurs multi-coeurs", d'après Microsoft

    "Il faut repenser les OS pour les processeurs multi-coeurs", d'après Microsoft Dave Probert est expert du noyau chez Microsoft. Selon lui, l'approche actuelle du multi-coeur n'est pas encore à même d'en exploiter toute la puissance, et est trop "compliquée". Aussi, propose-t-il une autre organisation. Car, d'après lui, "la solution" ne se situerait pas dans l'amélioration de techniques "comme le parallel programming, mais plutôt dans le refonte des abstractions de base qui constituent le modèle du système d'exploitation". Il explique qu'on ne tire pas assez parti des performances offertes par les processeurs multicoeurs et qu'aujourd'hui, on ne devrait plus avoir à patienter de...

    Read the article

  • Apps capable of mounting/unmounting CD/DVD Images with multi-sector or protected format

    - by Luis Alvarado
    What apps exist for Ubuntu that can mount/unmount CD/DVD images in formats like CUE, BIN, ISO, Nero format, etc.? Needed features: emulate protected CD/DVDs like Daemon Tools does; mount multi-sector images; convert other formats (including multi-sector images) to ISO format. UPDATE – Updated the question to require the three features above instead of having them as optional. These three features are needed to be able to use images that have any of these properties. For normal mounting/burning image apps, see here: How can I graphically mount ISOs? Can I mount an ISO without administrative privileges? How burn or mount an ISO file?

    Read the article

  • What causes Multi-Page allocations?

    - by SQLOS Team
    Writing about changes in the Denali Memory Manager in his last post, Rusi mentioned: "In previous SQL versions only the 8k allocations were limited by the 'max server memory' configuration option. Allocations larger than 8k weren't constrained."

    In SQL Server versions before Denali, single-page allocations and multi-page allocations are handled by different components: the Single Page Allocator (which is responsible for Buffer Pool allocations and governed by 'max server memory') and the Multi-Page Allocator (MPA), which handles allocations greater than an 8K page. If there are many multi-page allocations, this can affect how much memory needs to be reserved outside 'max server memory', which may in turn involve setting the -g memory_to_reserve startup parameter. We'll follow up with more generic articles on the new Memory Manager structure, but in this post I want to clarify what might cause these larger allocations.

    So what kinds of query result in MPA activity? I was asked this question the other day after delivering an MCM webcast on Memory Manager changes in Denali. After asking around our Dev team, I was connected to one of our test leads, Sangeetha, who had tested the plan cache and kindly provided this example of an MPA-intensive query: a workload that has stored procedures with a large number of parameters (say > 100, > 500), invoked via large ad hoc batches where each SP has different parameters, will result in a plan being cached for this "exec proc" batch. This plan will result in MPA.

        Exec proc_name @p1, ….@p500
        Exec proc_name @p1, ….@p500
        . . .
        Exec proc_name @p1, ….@p500
        Go

    Another workload would be large ad hoc batches of the form:

        Select * from t where col1 in (1, 2, 3, ….500)
        Select * from t where col1 in (1, 2, 3, ….500)
        Select * from t where col1 in (1, 2, 3, ….500)
        …
        Go

    In Denali, all page allocations are handled by an "any size page allocator" and included in 'max server memory'. The buffer pool effectively becomes a client of the any-size page allocator, which in turn relies on the memory manager.

    - Guy

    Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article

  • Multi-Part Map Troubleshooting

    - by Michael Stephenson
    Scenario

    I came across a nice little one with multi-part maps the other day. I had an orchestration where I needed to combine four input messages into one output message, as in the table below:

        Input Messages          Output Message
        Company Details         Member Import
        Member Details
        Event Message
        Member Search

    I thought my orchestration was working fine, but for some reason when I tried to send my message it had no content under the root node, like below:

        <ns0:ImportMemberChange xmlns:ns0="http://---------------/"></ns0:ImportMemberChange>

    My map is displayed in the picture below. I knew that the member search message might not have any elements under it, but its root element would always exist. The rest of the messages were expected to be fully populated. I tried a number of different things, and testing my map outside of the orchestration it always worked fine.

    The Eureka Moment

    The eureka moment came when I was looking at the XSLT produced by the map. Even though I'd tried swapping the order of the messages in the input of the map, you can see in the picture below that the first part of the processing of the message (with the red circle around it) is doing a for-each over the GetCompanyDetailsResult element within the GetCompanyDetailsResponse message. This is because the processing is driven by the output message format, and the first element to output is the OrganisationID, which comes from the GetCompanyDetailsResponse message. At this point I could focus my attention on this message, as the XSLT shows that if this XPath statement doesn't return an element from the GetCompanyDetailsResponse message, then the whole body of the output message will not be produced, and the output from the map will look like the message I was getting:

        <ns0:ImportMemberChange xmlns:ns0="http://---------------/"></ns0:ImportMemberChange>

    I was quickly able to prove this in my map test, which showed this was a likely candidate for the problem. I revisited the orchestration, focusing on the creation of the GetCompanyDetailsResponse message, and there was actually a bug in the orchestration which resulted in the message being incorrectly created. Once this was fixed, everything worked as expected.

    Conclusion

    Originally I thought it was a problem with the map itself, and looking online there wasn't really much in the way of content around troubleshooting multi-part map problems, so I thought I'd write this up. I guess technically it isn't a multi-part map problem, but I spent a good couple of hours the other day thinking it was.

    Read the article

  • Multi Column Block Too Narrow in Chrome

    - by aksarben
    My Web site displays song lyrics in a multi-column format, using CSS3. Both Firefox and MSIE 10+ display the multi-column text perfectly, but Chrome does not. This sample page shows the problem: http://www.hymntime.com/tch/test/html5/html5-multicolumn-test.htm. The page uses a media selector, so your Chrome window must be at least 1280 pixels wide to see the effect. In fact, if you make the Chrome window less than 1280 pixels wide, you'll see the lyrics block change to a single column of the same overall width. In other words, when Chrome shifts from 1-column to 2-column mode (due to the wider browser window), the lyrics block remains the same width, causing the text to be squeezed together. Has anyone else seen this behavior, or does anyone know a solution? Is this a Chrome bug, or am I doing something wrong? I posted this question on a Chrome forum a while back, but got no reply.

    Read the article

  • How to safely remove a USB device from 2008 Core server?

    - by Qwerty
    I have a Hyper-V Server 2008 Core machine that I administer via the command line and remote tools. We now have a new backup system in place, and it involves me connecting an external USB drive (G:) to back up system state files. My question is: how should I safely remove the drive for its weekly offsite swap? I've tried using the devcon tool, but it just says 'removal failed with no devices removed', with no other explanation. I have noticed that there isn't a readily available x64 version of devcon, and that might be the cause of the problem. (I have read of people downloading an amd64 version, but I have not located it myself; if someone knows where it is, please let me know.) The devcon command worked on my old 2003 x86 server with the command:

        devcon remove *3200AVJ_EXTERNAL*

    I have also looked at using fsutil volume dismount g:, but it doesn't seem to work, as G: is still listed as a connected volume. I have checked that the volume is not in use via remote tools and the net file command; both show no open files on the G:\ volume. This could be a decent substitute, as it might be used to flush any remaining I/O to the volume – can anyone clarify? Thanks in advance.

    Read the article

  • Defining Discovery: Core Concepts

    - by Joe Lamantia
    Discovery tools have had a referencable working definition since at least 2001, when Ben Shneiderman published 'Inventing Discovery Tools: Combining Information Visualization with Data Mining'. Dr. Shneiderman suggested the combination of the two distinct fields of data mining and information visualization could manifest as a new category of tools for discovery, an understanding that remains essentially unaltered over ten years later. An industry analyst report titled Visual Discovery Tools: Market Segmentation and Product Positioning from March of this year, for example, reads, "Visual discovery tools are designed for visual data exploration, analysis and lightweight data mining."

    Tools should follow from the activities people undertake (a foundational tenet of activity-centered design), however, and Dr. Shneiderman does not in fact describe or define discovery activity or capability. As I read it, discovery is assumed to be the implied sum of the separate fields of visualization and data mining as they were then understood. As a working definition that catalyzes a field of product prototyping, it's adequate in the short term. In the long term, it makes the boundaries of discovery both derived and temporary, and leaves a substantial gap in the landscape of core concepts around discovery, making consensus on the nature of most aspects of discovery difficult or impossible to reach. I think this definitional gap is a major reason that discovery is still an ambiguous product landscape.

    To help close that gap, I'm suggesting definitions of four core aspects of discovery. These come out of our sustained research into discovery needs and practices, and have the goal of clarifying the relationship between discovery and other analytical categories. They are suggestions, but they should be internally coherent and consistent.

    Discovery activity is: "Purposeful sense-making activity that intends to arrive at new insights and understanding through exploration and analysis (and for these we have specific definitions as well) of all types and sources of data."

    Discovery capability is: "The ability of people and organizations to purposefully realize valuable insights that address the full spectrum of business questions and problems by engaging effectively with all types and sources of data."

    Discovery tools: "Enhance individual and organizational ability to realize novel insights by augmenting and accelerating human sense-making to allow engagement with all types of data at all useful scales."

    Discovery environments: "Enable organizations to undertake effective discovery efforts for all business purposes and perspectives, in an empirical and cooperative fashion."

    Note: applicability to a world of Big Data is assumed – thus the references to all scales / types / sources – rather than stated explicitly. I like that Big Data doesn't have to be written into this core set of definitions, because I think it's a transitional label – the new version of Web 2.0 – and goes away over time.

    References and Resources: Inventing Discovery Tools; Visual Discovery Tools: Market Segmentation and Product Positioning; Logic versus usage: the case for activity-centered design; A Taxonomy of Enterprise Search and Discovery

    Read the article

  • Understanding The Very Nature Of Linux - Becoming Core Programmer

    - by MrWho
    Well, I want to know how exactly I should start, and which path to take, to become a core programmer and also get a decent understanding of Linux infrastructure and fundamentals. I know my question may seem general, but that's not because of an inability to ask a question – I'm just confused. I've programmed in a few languages and have got my hands dirty with code, so I'm aware of the big picture of what programmers actually do. Now I want to go deeper and start my studies at a different level than I'm used to: I want to become an advanced core programmer and learn where it all really starts. I'd like to know, bit by bit, what today's operating systems like Linux have been built on. I DO really need good references; books would be preferred for learning the fundamentals. If someone could tell me the general path of what I'm supposed to do, it would be really appreciated.

    Read the article

  • Pull Request Changes, Multi-Selection in Advanced View, and Advertisement Changes

    [Do you tweet? Follow us on Twitter @matthawley and @adacole_msft] We deployed a new version of the CodePlex website today.

    Pull Request Changes

    In this release, we have begun to re-focus on Pull Requests to ensure a productive experience between the project users and developers. We feel we made significant progress in this area for this release and look forward to using your feedback to drive future iterations. One of the biggest hurdles people have indicated is the inability to see what a pull request includes without pulling the source down from a Mercurial client. With today's changes, any user has the ability to view a pull request, the changesets / changes included, and perform an inline diff of each file. When a pull request is made, the CodePlex website will query for all outgoing changes from the fork to the main repository for a point-in-time comparison. Because of this point-in-time comparison:

    - All existing pull requests created prior to this release will not have changesets associated with them.
    - If new commits are pushed to the fork while a pull request is active, they will not appear associated with the pull request. The pull request will need to be re-submitted for them to appear.

    Once a pull request is created, you can "View the Pull Request", which takes you to a page that displays much more detailed information about that pull request, including who it was requested by and when, the associated changesets, the description, who it's assigned to (we'll come back to this), and the listing of summarized file changes. You'll also notice that each modified file has the ability to view a diff of all changes made. When you click "(view diff)" for a file, an inline diff experience appears. This new experience allows you to quickly navigate through all of the modified files as well as view the various change blocks for each file. As you browse through each file's changes, we update the URL to include the file path, so you can quickly send a direct link to a pull request's file. Clicking "(close diff)" will bring you back to the original pull request view. View this pull request live on WikiPlex.

    Pull Request Review Assignment

    Another new feature we added for pull requests is the ability for project members to assign pull requests for review. Any project member has the ability to assign (and re-assign if needed) a pull request to a project member. Once the assignment has been made, that project member will be notified via email of the assignment. Once they complete the review of the pull request, they can either accept or deny it, similarly to the previous process.

    Multi-Selection in Advanced View Filters

    One of the more recent requests we have heard from users is the ability to multi-select advanced view filters for work items. We are happy to announce this is now possible. Simply control-click the multiple options for each filter item and your work item query will be refined as such. Should you happen to unselect all options for a given filter, it will automatically reset to the default option for that filter. Furthermore, the "Direct Link" URL will be updated to include the multi-selected options for each filter. Note: The "Direct Link" feature was released in our previous deployment, just never written about. It allows you to capture the current state of your query and send it to other individuals.
    Advertisement Changes

    Very recently, the advertiser (The Lounge) we partnered with to provide advertising revenue for projects, or to donate it to charity, was acquired by Lake Quincy Media. There has been no change in the advertising platform offering, and all projects have been converted over to the new infrastructure. Project owners should note the new contact information for getting paid.

    The CodePlex team values your feedback and is frequently monitoring Twitter, our Discussions, and the Issue Tracker for new features or problems. If you haven't visited the Issue Tracker recently, please take a few moments to log an idea or vote for the features you would most like to see implemented on CodePlex.

    Read the article

  • When is a 'core' library a bad idea?

    - by Alex Angas
    When developing software, I often have a centralised 'core' library containing handy code that can be shared and referenced by different projects. Examples:

    - a set of functions to manipulate strings
    - commonly used regular expressions
    - common deployment code

    However, some of my colleagues seem to be turning away from this approach. They have concerns such as the maintenance overhead of retesting code used by many projects once a bug is fixed. Now I'm reconsidering when I should be doing this. What are the issues that make using a 'core' library a bad idea?

    Read the article

  • 12.04 on Pentium Dual Core with 1GB of RAM running slow

    - by Alex
    Hey, I have a Lenovo ThinkPad laptop with Ubuntu 12.04 installed, and it runs slowly. I tried "System Profiler and Benchmark" to test the computer, but the application quits and closes after the first few benchmark tests, before it even gets to the others. So I tried "Hardinfo", which is installed on the Puppy Linux live CD, and it did the same thing (the apps look just alike). Memory usage isn't the problem on this PC; it's the CPU processes. Just running the "System Profiler" app that comes with Ubuntu uses about 34% on each core; by default, with nothing running, it's 5-10% on each core. I can't really find what the deal is, other than that Ubuntu is a CPU hog, so I'm testing Unity 2D at the moment to see how it goes. If you have any other suggestions, feel free to answer this question. Thanks.

    Read the article

  • Running a Server With a Minimal Core OS and a VM

    - by user170019
    I want to start a server, but all my applications will be in a virtual partition. Therefore, I need just a very minimal core OS that can support VirtualBox, as there will be no usage of the core OS other than starting up VirtualBox. Is there any OS that is suitable for this situation? I tried JeOS and a minimal Fedora installation, which is CLI-only, but I can't make either of those two OSes support VirtualBox. Please help. Sorry, I'm totally new to Unix and the CLI. Thanks.

    Read the article

  • Will .NET 4.0 apps work on Win 2008 R2 Server Core?

    - by markus
    When Windows Server 2008 R2 was launched, the "server core" edition started to become useful to me, because it lets me deploy .NET background applications isolated on their own virtual machine instance with only a small fraction of the disk space overhead of a default Windows Server installation, and very few Windows Updates. It comes with a subset of .NET 3.5 SP1 integrated (as an optional feature). Now that .NET 4.0 is released, the redistributables explicitly state that it's not supported on Server Core. Is there any chance that there will be a separate download available for Server Core (e.g. without WPF) any time soon? Has anybody heard about it?

    Read the article

  • core.* files eating up server space (~50MB)

    - by skytreader
    I'm renting server space from someone and, upon logging into my control panel after quite some time, I noticed an abnormal spike (~50MB) in disk usage. Upon investigating, I found a lot of core.* files scattered around my public_html directory. Each one is more than 5MB in size but no more than 6MB. The * part is all numbers (as a programming regex, that would be core\.\d+). I downloaded one and checked the contents. There was a lot of gibberish (NUL mostly, but also a scattering of ETB, ETX, STX), but there's this block of readable text, which says:

        This text is part of the internal format of your mail folder, and is not a real message. It is created automatically by the mail system software. If deleted, important folder data will be lost, and it will be re-created with the data reset to initial values.

    Pretty self-explanatory. A few blocks above that text are some more readable messages that look like logs but are sandwiched between non-printable characters. I've extracted some below:

        Scan not valid for mh mailboxes
        Bogus character 0x%x in news state
        Can't rewrite news state %.80s
        Error closing backup news state %.80s
        No state for newsgroup %.80s found

    Now, a few concerns: am I under attack? The messages seem to be about my webmail, but I don't use my personal webmail that much – only for a vanity email address and an inbox for an outdated comments system. However, lately I seem to notice a spike in the spam for my vanity mail. (Note: the comments system is covered by a captcha, but every now and then some get through. My vanity email has a spam filter, but it isn't as good as I'd like.) Next, if this is a feature, can I turn it off? Is it advisable to? I've only got 150MB, so you see why I'm fretting over a 50MB spike. Some final details: my only server-side scripts are in PHP. The directory which accumulated the most of these core files is the one containing the WordPress-managed subdomain of my site. I manage my server through cPanel. Lastly, I decided to delete these files, and after some checking nothing seems amiss on my websites or in my mail. They were indeed the ones responsible for the ~50MB spike, as my disk space usage is back to expected levels.

    Read the article
