Search Results

Search found 539 results on 22 pages for 'incremental'.

Page 13 of 22

  • Why is ext3 so slow to delete large files?

    - by Janis Peisenieks
    I have a server which makes an incremental backup of a system every night. On Saturdays there is a full backup, and after the full backup has finished a script kicks in that deletes the incrementals. The script sometimes breaks, because the incrementals are each about 10 GB and deleting them takes too long for the script. Could someone explain to me, or point me to a resource that explains, why ext3 is so slow to delete files compared to, let's say, NTFS? I know these are two completely different file systems, but I'm really interested in why there is such a big difference in deletion speed.
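
    A likely explanation: ext3 stores large files through chains of indirect block pointers, so unlinking a 10 GB file forces the kernel to walk and journal the freeing of every one of those blocks, whereas NTFS (and extent-based filesystems such as ext4) only has to drop a handful of extent records. If the cleanup script merely needs the space back, one common workaround is to shrink the file in steps before removing it, which splits the work into several short journal transactions. A minimal Python sketch of that idea; the path and step size are illustrative:

      import os

      def shrink_then_delete(path, step=512 * 1024 * 1024):
          """Truncate a big file in steps before unlinking it, so ext3
          frees its indirect blocks in several short journal transactions
          instead of one very long one."""
          size = os.path.getsize(path)
          with open(path, "r+b") as f:
              while size > 0:
                  size = max(0, size - step)
                  f.truncate(size)          # release the tail of the file
                  f.flush()
                  os.fsync(f.fileno())      # let the journal catch up
          os.remove(path)

      # Hypothetical usage in the cleanup script:
      # shrink_then_delete("/backups/incremental/tuesday.img")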

    Read the article

  • Why doesn't rsync use delta-transfer for local files?

    - by o_O Tync
    I have a big ISO image which is currently being downloaded by a torrent client with space reservation turned on: that means the file size is not changing, while some chunks in it (4 MiB each) are constantly changing as the download progresses. At 90% download I do the initial rsync to save time later: $ rsync -Ph DVD.iso /some/target/ sending incremental file list DVD.iso 2.60G 100% 40.23MB/s 0:01:01 (xfer#1, to-check=0/1) sent 2.60G bytes received 73 bytes 34.59M bytes/sec total size is 2.60G speedup is 1.00 Then, when the file is fully downloaded, I rsync again: total size is 2.60G speedup is 1.00 Speedup=1 says delta-transfer was not used, although 90% of the file has not changed. Why?!
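
    The likely reason: when both the source and the destination are local paths, rsync assumes the delta algorithm would cost more in local disk reads than it saves and turns on --whole-file by default, so the whole 2.6 GB is copied and the reported speedup stays at 1.00. Passing --no-whole-file (usually together with --inplace, so the existing target is patched rather than rewritten) forces the delta transfer. A small Python wrapper as a sketch, assuming rsync is on the PATH; the paths are the ones from the question:

      import subprocess

      def rsync_delta(src, dst):
          """Run rsync with the delta-transfer algorithm forced on,
          which it otherwise disables for purely local copies."""
          cmd = ["rsync", "--no-whole-file", "--inplace", "-Ph", src, dst]
          subprocess.run(cmd, check=True)

      # rsync_delta("DVD.iso", "/some/target/")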

    Read the article

  • Rsync-like Windows backup tool

    - by Halfgaar
    I need to back up some Windows machines and have been unable to find the proper tool. What I need is a tool that does efficient copying of changed files to a Windows network location, like rsync does. In turn, the server will then back that up using rdiff-backup, a tool which does very clever incremental backups. Right now I'm using Windows 7's included backup feature, but I really don't get along with it. The details are too far off-topic here, but it doesn't suffice (and seems buggy as well). I looked into Amanda, but as soon as it wanted to install MySQL, I aborted. I also tried DeltaCopy, but unfortunately I don't remember what the problem with that was... Any advice for an rsync-like tool that just does daily syncs to a network location?

    Read the article

  • Is using SVN for development and CM a bad practice?

    - by GatorGuy
    I have a bit of experience with SVN as a pure programmer/developer. Within my company, however, we use SVN as our configuration management tool. I thought using SVN for development at the same time was OK, since we could use branches and the trunk for dev, and tags for releases. To me, the tags were the CM part, and the branches/trunk were the dev part. Recently a person who develops high-level code (but outside of the "pure SW" group) mentioned that the existing philosophy (mixing SVN for dev and CM) was wrong... in his opinion. His reasoning is that he thinks the company's CM tool should always link to runnable software (so branches would break this rule). He also mentioned that a CM tool shouldn't be a backup utility for daily or incremental commits. Finally, he doesn't like the idea of having to jump from revision 143 to 89 in order to get a working copy... and further that CM tools shouldn't allow reversion to a broken state. In general he wants to separate the CM and backup/dev utilities that SVN offers. Honestly, I am new and the person with this perspective is one of seniority, experience, and success, so I want to field this dilemma with the Stack Overflow userbase to see if his approach has merit. My question: Should SVN be purely used for development, and another tool for CM (or vice versa)? Why? If so, what tools would you suggest for this combo? Or do you think that integrating both CM and dev into SVN is the best approach? Why? Thanks.

    Read the article

  • How to verify the code that could take a substantial time to compile? [on hold]

    - by user18404
    As a follow-up to my previous question: What is the best approach for coding in a slow compilation environment? To recap: I am stuck with a large software system with which a TDD ideology of "test often" does not work. To make it even worse, features like pre-compiled headers, multi-threaded compilation, incremental linking, etc. are not available to me - hence I think the best way out would be to add extensive logging to the system and to start "coding in large chunks", which I understand as coding for two or three hours first (as opposed to 15-20 minutes in TDD), thoroughly eyeballing the code for 15 minutes, and only after all that doing the compilation and running the tests. As I have been doing TDD for quite a while, my code eyeballing / code verification skills have gotten rusty (you don't really need them that much if you can quickly verify what you've done in 5 seconds by running a test or two) - so I am after recommendations on how to learn these source code verification/error-spotting skills again. I know I was able to do that easily some 5-10 years ago, when I didn't have much support from the compiler/unit-testing tools I have had until recently, so there should be a way to get back to the basics.

    Read the article

  • What happens when I delete the first of several AWS EBS snapshots?

    - by Jepper
    On http://aws.amazon.com/ebs/pricing/ it says: "EBS Snapshots [...] For the first snapshot of a volume, Amazon EBS saves a full copy of your data to Amazon S3. For each incremental snapshot, only the changed part of your Amazon EBS volume is saved." I intend to snapshot some of my instances daily and keep snapshots for 7 days after which the snapshots are destroyed. What happens when, eventually, I destroy the first snapshot? Will the subsequent snapshots be worthless given the first is no longer available?
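
    For what it's worth, the behaviour AWS documents is that deleting an earlier snapshot does not invalidate later ones: any blocks that a newer snapshot still references are retained (and still billed), and only data no remaining snapshot needs is actually discarded, so a 7-day rotation stays restorable. A minimal boto3 sketch of that rotation; the client calls are the standard EC2 API, everything else (no tag filtering, no dry run) is illustrative:

      import datetime
      import boto3

      def prune_snapshots(days=7):
          """Delete this account's EBS snapshots older than `days`.
          Later snapshots stay restorable because EBS keeps any
          blocks they still reference."""
          ec2 = boto3.client("ec2")
          cutoff = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=days)
          for snap in ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]:
              if snap["StartTime"] < cutoff:
                  ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])

      # prune_snapshots(days=7)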

    Read the article

  • Open Source Highlight: namebench

    - by eddraper
    DNS is a big deal.  Even small incremental changes to improve its performance can yield significant value due to the vast quantity of look-ups required when using the internet.  Until now, it’s always been one of those things I had to kinda take on faith… was my ISP doing a good job?  Are those public DNS servers really that much faster?  What about security and privacy concerns? Let me introduce you to namebench.  This is the kinda tool I really love – one that immediately delivers value and is almost over-the-top OCD in its attention to detail. Trust me, this tool is utterly ruthless in its quest for getting it right – you’re not left with a big question mark after it presents its data.  The results are conclusive and actionable.  Here’s what it does: it hunts down the fastest DNS servers it can find from your desktop, using thousands of requests.  No, it doesn’t pop up a little dialog in 10 seconds to give you some “off the cuff” answer from a handful of providers.  It takes the better part of 10-15 minutes to run.  When it finishes, it presents you with a veritable horn-o-plenty of data.  Mean response duration, response distribution, bad data: no stone is left unturned. Check it out.  You’ll dig it.

    Read the article

  • Can't start system due to mdadm failing

    - by user101212
    I used to have a 5-disk RAID5 array, all working very well. I then decided to add 3 more disks to it, for a total of 8 equal disks. I opened Webmin and simply asked it to add the disks. Then I realized the three disks had NTFS partitions, which mdadm didn't complain about, so I tried to stop the grow operation in order to remove the Windows partitions. I tried to remove a disk using the same Webmin, but (as you might guess - call me a fool...) the system became unstable. After restarting the system, I started receiving these messages: "udev[126]: timeout: killing '/sbin/mdadm --incremental /dev/sdh1' [311]" "udev[124]: timeout: killing '/sbin/mdadm --detail --export /dev/md0' [316]" I formatted the system disk, hoping to get a system up and running. I did that with all RAID disks disconnected, so everything was fine. I then reconnected the disks, which was also OK, and finally installed mdadm using apt-get. On reboot, the system found mdadm's intention of growing the array, so the same messages appear again. In other words: I can't even reach a command prompt to do anything. Any ideas of what to do? I believe I could turn off the system, disconnect the disks and look for the mdadm.conf file. Would that be a good idea? I'm no Linux expert, so I'm really lost here.

    Read the article

  • How can I check the actual size used in an NTFS directory with many hardlinks?

    - by kbyrd
    On a Win7 NTFS volume, I'm using cwRsync, which supports --link-dest correctly, to create "snapshot"-type backups. So I have: z:\backups\2010-11-28\cygdrive\c\Users\... z:\backups\2010-12-02\cygdrive\c\Users\... The content of 2010-12-02 is mostly hardlinks back to files in the 2010-11-28 directory, but there are a few new or changed files only in 2010-12-02. On Linux, the 'du' utility will tell me the actual size taken by each incremental snapshot. On Windows, Explorer and du under Cygwin are both fooled by hardlinks and show 2010-12-02 taking up a little more space than 2010-11-28. Is there a Windows utility that will show the correct space actually used?
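
    If no ready-made utility turns up, the sizing logic itself is small: sum each file's size only once per (device, inode) pair, so hardlinked copies add nothing. A rough Python sketch (on Windows this relies on Python exposing the NTFS file index through st_ino, which Python 3.5+ does; the path is illustrative):

      import os

      def unique_size(root):
          """Total the file sizes under `root`, counting each (device, inode)
          pair only once so hardlinked files are not double-counted."""
          seen = set()
          total = 0
          for dirpath, _dirnames, filenames in os.walk(root):
              for name in filenames:
                  st = os.lstat(os.path.join(dirpath, name))
                  key = (st.st_dev, st.st_ino)
                  if key not in seen:
                      seen.add(key)
                      total += st.st_size
          return total

      # print(unique_size(r"z:\backups\2010-12-02"))

    To see only what 2010-12-02 adds on top of 2010-11-28, walk the older snapshot first to pre-populate seen, then total just the newer one.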

    Read the article

  • Change AccountName/LoginName for a SharePoint User (SPUser)

    - by Rohit Gupta
    Consider the following: we have an account named MYDOMAIN\eholz. This account's Active Directory login name changes to MYDOMAIN\eburrell. This user was an active user in a SharePoint 2010 team site and had a user profile under the account name MYDOMAIN\eholz. Since the AD login name changed to eburrell, we need to update the SharePoint user (the SPUser object) as well as the user profile to reflect the new account name. To update the SharePoint user's login name we can run the following stsadm command on the server: STSADM -o migrateuser -oldlogin MYDOMAIN\eholz -newlogin MYDOMAIN\eburrell -ignoresidhistory However, to update the SharePoint 2010 user profile, I first tried running an incremental/full synchronization using the User Profile Synchronization service… this did not work. To be able to update the AccountName field (which is a read-only field) of the user profile, I had to first delete the user profile for MYDOMAIN\eholz and then run a FULL synchronization using the User Profile Synchronization service, which synchronizes the SharePoint user profiles with the AD profiles. Update: if you just run the STSADM -o migrateuser command, the profile also gets updated automatically, so all you need is to run the stsadm -o migrateuser command and you don't need to delete and recreate the user profile.

    Read the article

  • SharePoint managed properties

    - by paulie
    Originally posted on Stack Overflow, and edited for clarity. I have a custom content type inside a list that has over 30 items (which were uploaded via DockIt), and I have added several "managed properties" to the "crawled properties" in the SSP. All of them work except one. The column "Synopsis" is a multiline field with no limit on its length. It appears as a crawled property "Synopsis" and is mapped to a managed property 'asynop'. On the 'Advanced Search' page it is added as a property and is searchable; however, it only returns some matching records (if any). I manually created an entry, ran the crawl and was able to search for it. I edited an existing entry, ran the crawl (full and incremental), and it still only returned the manually entered entry. If I enter the search term in the search box directly as "asynop:fatigue", then all the correct results appear. Why is this happening? And could it please stop?

    Read the article

  • recommendations for disk -> usb backup software

    - by TWood
    Recently I lost a tape drive, and rather than repair the unit I decided that backups to USB external drives would be cheaper. In the past I used NTBackup and figured that the new Server 2008 R2 backup utility, wbadmin, would be able to meet my needs. It does not. I am looking for recommendations for another utility that I can use. My requirements are: back up a local disk in addition to files on a network share; scheduled-task integration (or some GUI options to manage the schedule); non-incremental backup. Basically I could do all of this with wbadmin if it just supported network shares. I saw some links that described attaching a VHD pointed at a network share, but I am trying to avoid hacks like that. If I'm going to go to all that trouble, I might as well manually copy the directories over myself. If anyone has any software suggestions that might make this task easier for me, please let me know. I am considering BackupAssist but can only find a few reviews here and there for it.
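
    If nothing nicer turns up, the requirements above can also be met with plain robocopy plus Task Scheduler: /MIR keeps the target a complete, self-contained copy (not an incremental chain) and works for both local folders and UNC shares. A rough Python wrapper as a sketch; the source and target paths are made up, and robocopy's non-zero "success" exit codes are why check=True is deliberately omitted:

      import os
      import subprocess

      SOURCES = [r"D:\Data", r"\\fileserver\share"]   # local disk + network share (illustrative)
      TARGET = r"E:\Backups"                          # the external USB drive

      def mirror_all():
          """Mirror each source onto the USB drive; robocopy /MIR keeps the
          target an exact, complete copy of the source."""
          for src in SOURCES:
              name = src.rstrip("\\").split("\\")[-1]
              dest = os.path.join(TARGET, name)
              # robocopy returns codes 1-7 on success, so don't use check=True
              subprocess.run(["robocopy", src, dest, "/MIR", "/R:2", "/W:5"])

      # Run mirror_all() from a scheduled task (schtasks or the Task Scheduler GUI).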

    Read the article

  • Hyper-V backup

    - by Ddave23
    We are trying to decide on a good backup strategy for our new Hyper-V setup. We have 3 VMs on a Windows 2008 R2 Hyper-V host. We installed Symantec Backup Exec 2010 on the host and have the Hyper-V agent installed. We would like to perform a full backup at night to tape, and an incremental twice a day to a daily tape. Our environment needs constant protection for our database (Microsoft Access). Any thoughts? Should I be looking at different software?

    Read the article

  • rsync stuck with the --checksum option

    - by billc.cn
    I use back-in-time to back up my Linux installation. It serves as an advanced wrapper for the rsync command. Today I tried to add /var/log to the list of folders to be backed up, and it caused some serious performance problems. The job seems to get stuck on a particular file, and the CPU usage of the rsync parent process reaches 100%. I then used lsof to see which file caused the problem, and it seems to be the /var/log directory. I did some googling and some experiments with the different rsync options and found --checksum to be the offender. Without the parameter, an incremental backup finishes properly in minutes. With it, the process gets stuck when rsync tries to sync a constantly changing log file. This kind of makes sense, but it still seems like a bug to me. Am I using the option correctly? Is there a workaround for this?

    Read the article

  • Identify malicious subnet

    - by Macros
    I have been experiencing performance issues on a website for a while, and it always seems to hit around the same time. Having analysed the logs, I've found a big spike in requests which corresponds with this slowdown, with all requests coming from the same subnet. It feels to me like an attempt to scrape the site (it is a car hire site, and the requests are sequential for each IP and with incremental search criteria), and I would like to identify the source. The subnet in question is 209.67.89.x, which I can see is owned by Savvis; however, I can't reverse-DNS any of the IPs. Is there any other way I can gain more info on this (other than contacting them directly, which I am also doing)?
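
    Beyond reverse DNS, the registration data for the block is public: a whois query against any address in 209.67.89.0/24 returns the netname, owning organisation and abuse contact, which for hosting providers is usually more informative than PTR records. A quick Python sketch that tries both, assuming a whois command-line client is installed:

      import socket
      import subprocess

      def inspect_ip(ip):
          """Try a PTR lookup first, then print the public whois record."""
          try:
              print(ip, "->", socket.gethostbyaddr(ip)[0])
          except socket.herror:
              print(ip, "has no PTR record")
          # The whois output includes netname, org and abuse contact details.
          print(subprocess.run(["whois", ip], capture_output=True, text=True).stdout)

      # inspect_ip("209.67.89.1")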

    Read the article

  • Compress snapshot backups with duplicates

    - by Usavich
    I have a set of backups of mostly photos. The directory looks sort of like this: backup/Day1/photos/1.jpg .../2.jpg backup/Day2/photos/2.jpg .../3.jpg .../4.jpg backup/DayN/photos/2.jpg .../3.jpg .../9.jpg Files with the same name are identical, and there are many duplicates. Due to the way the backup system works, it's not possible to create an incremental backup directly; I always get the entire dump each day. If I want to create a compressed archive for a date range, say Day 5 to Day 9, what is the best tool/compression algorithm to do that?
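
    Since files with the same name are identical, one approach that works with any archiver is to turn the duplicates into hardlinks first (command-line tools such as hardlink or rdfind can do this), after which tar stores each file's data only once and any compressor on top benefits. The same idea as a short Python sketch, hashing contents so it doesn't rely on names; the backup path is illustrative:

      import hashlib
      import os

      def dedupe_by_hash(root):
          """Replace duplicate files under `root` with hardlinks to the first
          copy seen, so a following `tar czf` stores each payload only once."""
          first_copy = {}
          for dirpath, _dirs, files in os.walk(root):
              for name in files:
                  path = os.path.join(dirpath, name)
                  h = hashlib.sha256()
                  with open(path, "rb") as f:
                      for chunk in iter(lambda: f.read(1 << 20), b""):
                          h.update(chunk)
                  digest = h.hexdigest()
                  if digest in first_copy:
                      os.remove(path)
                      os.link(first_copy[digest], path)  # hardlink to the original
                  else:
                      first_copy[digest] = path

      # dedupe_by_hash("backup")  # then: tar czf day5-9.tar.gz backup/Day5 ... backup/Day9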

    Read the article

  • xforms: how to prevent xxforms:default value from over-writing user input

    - by Purni
    I have a dropdown to display status, which can be Enabled (true) or Disabled (false). Here is my XML instance: <?xml version="1.0" encoding="UTF-8"?> <page> <file-name></file-name> <status></status> </page> By default, status should be true, so I have set it in a binding as follows. <xforms:bind nodeset="./status" xxforms:default="true()" /> When the user chooses Disabled in the dropdown, the status should get saved as false. Here is the XML that gets saved when I save the form: <page> <file-name>StatusDisabled.xml</file-name> <status>false</status> </page> When I open the form in edit mode, this is the XML I get in the XML inspector widget: <page> <file-name>StatusDisabled.xml</file-name> <status>true</status> </page> Status gets set to true because of xxforms:default, even though the XML is saved with a false value for status. How can I fix this? Here is the xhtml: <html xmlns="http://www.w3.org/1999/xhtml" xmlns:xforms="http://www.w3.org/2002/xforms" xmlns:xxforms="http://orbeon.org/oxf/xml/xforms"> <head> <title>XForms Default</title> <xforms:model> <xforms:instance id="instance"> <page> <name xmlns=""/> <status xmlns=""/> </page> </xforms:instance> <xforms:instance id="status-instance"> <items> <item label="Enabled" value="true" xmlns=""/> <item label="Disabled" value="false" xmlns=""/> </items> </xforms:instance> <xforms:bind nodeset="instance('instance')"> <xforms:bind nodeset="./status" xxforms:default="true()" /> </xforms:bind> </xforms:model> </head> <body> <p> <xforms:input ref="instance('instance')/name" incremental="true"> <xforms:label>Please enter your name:</xforms:label> </xforms:input> </p> <p> <xforms:select1 ref="instance('instance')/status" appearance="minimal" incremental="true"> <xforms:label>Please select status:</xforms:label> <xforms:itemset nodeset="instance('status-instance')/item"> <xforms:label ref="./@label"/> <xforms:value ref="./@value"/> </xforms:itemset> </xforms:select1> </p> </body> </html>

    Read the article

  • What’s new in ASP.NET 4.0: Core Features

    - by Rick Strahl
    Microsoft released the .NET Runtime 4.0 and with it comes a brand spanking new version of ASP.NET – version 4.0 – which provides an incremental set of improvements to an already powerful platform. .NET 4.0 is a full release of the .NET Framework, unlike version 3.5, which was merely a set of library updates on top of the .NET Framework version 2.0. Because of this full framework revision, there has been a welcome bit of consolidation of assemblies and configuration settings. The full runtime version change to 4.0 also means that you have to explicitly pick version 4.0 of the runtime when you create a new Application Pool in IIS, unlike .NET 3.5, which actually requires version 2.0 of the runtime. In this first of two parts I'll take a look at some of the changes in the core ASP.NET runtime. In the next edition I'll go over improvements in Web Forms and Visual Studio. Core Engine Features Most of the high profile improvements in ASP.NET have to do with Web Forms, but there are a few gems in the core runtime that should make life easier for ASP.NET developers. The following list describes some of the things I've found useful among the new features. Clean web.config Files Are Back! If you've been using ASP.NET 3.5, you probably have noticed that the web.config file has turned into quite a mess of configuration settings between all the custom handler and module mappings for the various web server versions. Part of the reason for this mess is that .NET 3.5 is a collection of add-on components running on top of the .NET Runtime 2.0 and so almost all of the new features of .NET 3.5 were essentially introduced as custom modules and handlers that had to be explicitly configured in the config file. Because the core runtime didn't rev with 3.5, all those configuration options couldn't be moved up to other configuration files in the system chain. With version 4.0 a consolidation was possible, and the result is a much simpler web.config file by default. A default empty ASP.NET 4.0 Web Forms project looks like this: <?xml version="1.0"?> <configuration> <system.web> <compilation debug="true" targetFramework="4.0" /> </system.web> </configuration> Need I say more? Configuration Transformation Files to Manage Configurations and Application Packaging ASP.NET 4.0 introduces the ability to create multi-target configuration files. This means it's possible to create a single configuration file that can be transformed based on relatively simple replacement rules using a Visual Studio and WebDeploy provided XSLT syntax. The idea is that you can create a 'master' configuration file and then create customized versions of this master configuration file by applying some relatively simplistic search and replace, add or remove logic to specific elements and attributes in the original file.
To give you an idea, here's the example code that Visual Studio creates for a default web.Release.config file, which replaces a connection string, removes the debug attribute and replaces the CustomErrors section: <?xml version="1.0"?> <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform"> <connectionStrings> <add name="MyDB" connectionString="Data Source=ReleaseSQLServer;Initial Catalog=MyReleaseDB;Integrated Security=True" xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/> </connectionStrings> <system.web> <compilation xdt:Transform="RemoveAttributes(debug)" /> <customErrors defaultRedirect="GenericError.htm" mode="RemoteOnly" xdt:Transform="Replace"> <error statusCode="500" redirect="InternalError.htm"/> </customErrors> </system.web> </configuration> You can see the XSL transform syntax that drives this functionality. Basically, only the elements listed in the override file are matched and updated – all the rest of the original web.config file stays intact. Visual Studio 2010 supports this functionality directly in the project system so it's easy to create and maintain these customized configurations in the project tree. Once you're ready to publish your application, you can then use the Publish <yourWebApplication> option on the Build menu which allows publishing to disk, via FTP or to a Web Server using Web Deploy. You can also create a deployment package as a .zip file which can be used by the WebDeploy tool to configure and install the application. You can manually run the Web Deploy tool or use the IIS Manager to install the package on the server or other machine. You can find out more about WebDeploy and Packaging here: http://tinyurl.com/2anxcje. Improved Routing Routing provides a relatively simple way to create clean URLs with ASP.NET by associating a template URL path and routing it to a specific ASP.NET HttpHandler. Microsoft first introduced routing with ASP.NET MVC and then they integrated routing with a basic implementation in the core ASP.NET engine via a separate ASP.NET routing assembly. In ASP.NET 4.0, the process of using routing functionality gets a bit easier. First, routing is now rolled directly into System.Web, so no extra assembly reference is required in your projects to use routing. The RouteCollection class now includes a MapPageRoute() method that makes it easy to route to any ASP.NET Page requests without first having to implement an IRouteHandler implementation. It would have been nice if this could have been extended to serve *any* handler implementation, but unfortunately, for anything but Page-derived handlers you will still have to implement a custom IRouteHandler implementation. ASP.NET Pages now include a RouteData collection that will contain route information. Retrieving route data is now a lot easier by simply using this.RouteData.Values["routeKey"] where the routeKey is the value specified in the route template (i.e., "users/{userId}" would use Values["userId"]).
The Page class also has a GetRouteUrl() method that you can use to create URLs with route data values rather than hardcoding the URL: <%= this.GetRouteUrl("users",new { userId="ricks" }) %> You can also use the new Expression syntax using <%$RouteUrl %> to accomplish something similar, which can be easier to embed into Page or MVC View code: <a runat="server" href='<%$RouteUrl:RouteName=user, id=ricks %>'>Visit User</a> Finally, the Response object also includes a new RedirectToRoute() method to build a route url for redirection without hardcoding the URL. Response.RedirectToRoute("users", new { userId = "ricks" }); All of these routines are helpers that have been integrated into the core ASP.NET engine to make it easier to create routes and retrieve route data, which hopefully will result in more people taking advantage of routing in ASP.NET. To find out more about the routing improvements you can check out Dan Maharry's blog which has a couple of nice blog entries on this subject: http://tinyurl.com/37trutj and http://tinyurl.com/39tt5w5. Session State Improvements Session state is an often used and abused feature in ASP.NET and version 4.0 introduces a few enhancements geared towards making session state more efficient and minimizing at least some of the ill effects of overuse. The first improvement affects out of process session state, which is typically used in web farm environments or for sites that store application sensitive data that must survive AppDomain restarts (which in my opinion is just about any application). When using OutOfProc session state, ASP.NET serializes all the data in the session statebag into a blob that gets carried over the network and stored either in the State server or SQL Server via the Session provider. Version 4.0 provides some improvement in this serialization of the session data by offering a compressionEnabled option on the web.config <sessionState> section, which forces the serialized session state to be compressed. Depending on the type of data that is being serialized, this compression can reduce the size of the data travelling over the wire by as much as a third. It works best on string data, but can also reduce the size of binary data. In addition, ASP.NET 4.0 now offers a way to programmatically turn session state on or off as part of the request processing queue. In prior versions, the only way to specify whether session state is available is by implementing a marker interface on the HTTP handler implementation. In ASP.NET 4.0, you can now turn session state on and off programmatically via HttpContext.Current.SetSessionStateBehavior() as part of the ASP.NET module pipeline processing as long as it occurs before the AcquireRequestState pipeline event. Output Cache Provider Output caching in ASP.NET has been a very useful but potentially memory intensive feature. The default OutputCache mechanism works through in-memory storage that persists generated output based on various lifetime related parameters. While this works well enough for many intended scenarios, it also can quickly cause runaway memory consumption as the cache fills up and serves many variations of pages on your site. ASP.NET 4.0 introduces a provider model for the OutputCache module so it becomes possible to plug-in custom storage strategies for cached pages.
One of the goals also appears to be to consolidate some of the different cache storage mechanisms used in .NET in general to a generic Windows AppFabric framework in the future, so various different mechanisms like OutputCache, the non-Page specific ASP.NET cache and possibly even session state eventually can use the same caching engine for storage of persisted data both in memory and out of process scenarios. For developers, the OutputCache provider feature means that you can now extend caching on your own by implementing a custom Cache provider based on the System.Web.Caching.OutputCacheProvider class. You can find more info on creating an Output Cache provider in Gunnar Peipman's blog at: http://tinyurl.com/2vt6g7l. Response.RedirectPermanent ASP.NET 4.0 includes features to issue a permanent redirect that issues as an HTTP 301 Moved Permanently response rather than the standard 302 Redirect response. In pre-4.0 versions you had to manually create your permanent redirect by setting the Status and Status code properties – Response.RedirectPermanent() makes this operation more obvious and discoverable. There's also a Response.RedirectToRoutePermanent() which provides permanent redirection of route Urls. Preloading of Applications ASP.NET 4.0 provides a new feature to preload ASP.NET applications on startup, which is meant to provide a more consistent startup experience. If your application has a lengthy startup cycle it can appear very slow to serve data to clients while the application is warming up and loading initial resources. So rather than serve these startup requests slowly in ASP.NET 4.0, you can force the application to initialize itself first before even accepting requests for processing. This feature works only on IIS 7.5 (Windows 7 and Windows Server 2008 R2) and works in combination with IIS. You can set up a worker process in IIS 7.5 to always be running, which starts the Application Pool worker process immediately. ASP.NET 4.0 then allows you to specify site-specific settings by setting serviceAutoStartEnabled on a particular site along with an optional serviceAutoStartProvider class that can be used to receive "startup events" when the application starts up. This event in turn can be used to configure the application and optionally pre-load cache data and other information required by the app on startup.  The configuration settings need to be made in applicationhost.config: <sites> <site name="WebApplication2" id="1"> <application path="/" serviceAutoStartEnabled="true" serviceAutoStartProvider="PreWarmup" /> </site> </sites> <serviceAutoStartProviders> <add name="PreWarmup" type="PreWarmupProvider,MyAssembly" /> </serviceAutoStartProviders> Hooking up a warm-up provider is optional so you can omit the provider definition and reference. If you do define it, here's what it looks like: public class PreWarmupProvider : System.Web.Hosting.IProcessHostPreloadClient { public void Preload(string[] parameters) { // initialization for app } } This code fires and, while it's running, ASP.NET/IIS will keep requests from hitting the pipeline. So until this code completes the application will not start taking requests. The idea is that you can perform any pre-loading of resources and cache values so that the first request will be ready to perform at optimal performance level without lag.
Runtime Performance Improvements According to Microsoft, there have also been a number of invisible performance improvements in the internals of the ASP.NET runtime that should make ASP.NET 4.0 applications run more efficiently and use fewer resources. These features come without any change requirements in applications and are virtually transparent, except that you get the benefits by updating to ASP.NET 4.0. Summary The core feature set changes are minimal, which continues a tradition of small incremental changes to the ASP.NET runtime. ASP.NET has been proven as a solid platform and I'm actually rather happy to see that most of the effort in this release went into stability, performance and usability improvements rather than a massive amount of new features. The new functionality added in 4.0 is minimal but very useful. A lot of people are still running pure .NET 2.0 applications these days and have stayed off of .NET 3.5 for some time now. I think that version 4.0 with its full .NET runtime rev and assembly and configuration consolidation will make an attractive platform for developers to update to. If you're a Web Forms developer in particular, ASP.NET 4.0 includes a host of new features in the Web Forms engine that are significant enough to warrant a quick move to .NET 4.0. I'll cover those changes in my next column. Until then, I suggest you give ASP.NET 4.0 a spin and see for yourself how the new features can help you out. © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET

    Read the article

  • SQL Server 2005, Sudden increase of connections - SharePoint 2007

    - by CrazyNick
    We observed a sudden increase in SQL connections during a specific hour; it is the backend of a SharePoint 2007 farm. From the SharePoint 2007 perspective: 1. Incremental crawling is scheduled at that time, and a few of the timer jobs (normal timer jobs) are scheduled to run every minute / every 10 minutes. 2. The number of user requests is low. From the SQL Server 2005 perspective: 1. The transaction log backup is scheduled at that time. 2. No other scheduled jobs are running at that time. So how do we narrow down the issue - what could be causing the sudden increase in SQL connections?

    Read the article

  • Linking with Boost error

    - by drhorrible
    I just downloaded and ran the boost installer for version 1.42 (from boostpro.com), and set up my project according to the getting started guide. However, when I build the program, I get this linker error: LINK : fatal error LNK1104: cannot open file 'libboost_program_options-vc90-mt-gd-1_42.lib' The build log adds this (I've replaced project-specific paths with *'s): Creating temporary file "******\Debug\RSP00001252363252.rsp" with contents [ /OUT:"*********.exe" /INCREMENTAL /LIBPATH:"C:\Program Files\boost\boost_1_42_0\lib" /MANIFEST /MANIFESTFILE:"Debug\hw6.exe.intermediate.manifest" /MANIFESTUAC:"level='asInvoker' uiAccess='false'" /DEBUG /PDB:"********\Debug\***.pdb" /SUBSYSTEM:CONSOLE /DYNAMICBASE /NXCOMPAT /MACHINE:X86 kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib ".\Debug\****.obj" ".\Debug\****.exe.embed.manifest.res" ] Creating command line "link.exe @********\Debug\RSP00001252363252.rsp /NOLOGO /ERRORREPORT:PROMPT" I've also emailed [email protected] (with a message very similar to this), but I thought maybe so would be faster.

    Read the article

  • m2eclipse resource filtering

    - by drewzilla
    I'm having problems with resource filtering using m2eclipse Maven support in Eclipse. It seems that filtering only takes place on resources that have changed. This is fundamentally flawed because, if I have a file that references a property (e.g. ${my.property}) and the value of the property changes, the filtering will only be performed if the referencing file is also modified - if I only change the property value (in my pom.xml), the filtering is not applied to the files that reference it. So, if I make a change to a property in my pom file, the filtering is not applied. However, if I then go to the file that references that property (e.g. a Spring config file) and edit and save it, the filtering is applied. I did read somewhere that "m2eclipse skips filtering if there were no resource changes during incremental build". I'm using m2eclipse 0.10.x. Has anyone else come across this? Thanks, Andrew

    Read the article

  • Does Extreme Programming Need Diagramming Tools?

    - by Ygam
    I have been experimenting with some concepts from XP, like the following: pair programming, test-first programming, incremental deliveries, ruthless refactoring. So far so good, until I hit a major stumbling block: how do I design my test cases when there isn't any code yet? On what basis do I design them? Simple assumptions? The initial requirements? Or is this where UML diagrams and the "analysis phase" fit in? I just had to ask because in the XP books I've read there was little to no discussion of any diagramming tool (there was one which suggested I come up with pseudocode and some sort of a flowchart... but it did not help me in writing my tests).

    Read the article

  • Out of memory error in Java

    - by Anuj
    I am getting OutOfMemoryError: Java heap space. Snippets of the method: { // step 1: I am creating a 2-dim array int totalCombination = (int) Math.pow(2.0, (double) vowelCount); // here vowelCount > 10 // step 2: initializing my array // step 3: and using that array } My question: each time this method is called, that array gets created. Is it possible that the array is not getting released? In Windows Task Manager I can see that the memory used by Java only ever increases. So it is not that the heap is too small at some point; rather, memory is repeatedly allocated and somehow never released. Please let me know if you need more detail. Please help me debug the error. Anuj

    Read the article

  • Renaming and Moving Files in Bash

    - by KT
    Hi, I'm completely new to Bash and Stack Overflow. I need to move a set of files (all contained in the same folder) to a target folder where files with the same name could already exist. In case a specific file already exists, I need to rename the file before moving it, for example by appending an incremental integer to the file name. The extensions should be preserved, and the file names could contain dots in the middle. Originally I was thinking about comparing the two folders to get a list of the existing files (I did this with "comm"), but then I got a bit stuck. I think I'm just trying to do things in the most complicated way possible. Any hints on how to do this the "Bash way"?
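
    The core logic (split off the extension, then try .1, .2, ... until the name is free) is easy to see in a Python sketch, which translates directly into a Bash loop around mv -n; the paths below are illustrative:

      import os
      import shutil

      def move_no_clobber(src, target_dir):
          """Move `src` into `target_dir`, appending .1, .2, ... before the
          extension if a file with the same name already exists there."""
          base = os.path.basename(src)
          stem, ext = os.path.splitext(base)   # "photo.2010.jpg" -> ("photo.2010", ".jpg")
          candidate = os.path.join(target_dir, base)
          counter = 1
          while os.path.exists(candidate):
              candidate = os.path.join(target_dir, f"{stem}.{counter}{ext}")
              counter += 1
          shutil.move(src, candidate)

      # for name in os.listdir("source"):
      #     move_no_clobber(os.path.join("source", name), "target")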

    Read the article

  • unresolved external symbol __penter referenced in function _WspiapiStrdup@4

    - by John Weldon
    I started getting this compile error after upgrading to Visual Studio 2010. Not sure if it's related, but I can't figure out what library to reference to satisfy this dependency? Is it just an API change bug or something? Microsoft (R) Program Maintenance Utility Version 10.00.30319.01 Copyright (C) Microsoft Corporation. All rights reserved. del wstest.res wstest.obj wstest.pdb wstest.ilk wstest.exe wstest.exe.manifest vc90.pdb cl -Gh -Ox -DNDEBUG -c -DCRTAPI1=_cdecl -DCRTAPI2=_cdecl -nologo -GS -D_X86_=1 -DWIN32 -D_WIN32 -W3 -D_WINNT -D_WIN32_WINNT =0x0501 -DNTDDI_VERSION=0x05010000 -D_WIN32_IE=0x0600 -DWINVER=0x0501 -D_MT -D_DLL -MDd wstest.c wstest.c link /DEBUG /DEBUGTYPE:cv -out:wstest.exe wstest.obj Ws2_32.lib Shlwapi.lib Microsoft (R) Incremental Linker Version 10.00.30319.01 Copyright (C) Microsoft Corporation. All rights reserved. wstest.obj : error LNK2019: unresolved external symbol __penter referenced in function _WspiapiStrdup@4 wstest.exe : fatal error LNK1120: 1 unresolved externals NMAKE : fatal error U1077: '"c:\program files (x86)\microsoft visual studio 10.0\vc\bin\link.EXE"' : return code '0x460' Stop.

    Read the article
