Search Results

Search found 14653 results on 587 pages for 'disk cache'.

Page 24/587 | < Previous Page | 20 21 22 23 24 25 26 27 28 29 30 31  | Next Page >

  • Publishing my Website to my Local Disk Causes Exceptions to show Paths including my Local Disk

    - by coffeeaddict
    I've published my website many times, but I didn't think about this until I came across this issue. I decided to publish my WAP project to a local folder on my C drive first, then used FTP to upload it to my shared host on discountasp.net. I noticed at runtime that the stack trace was still referencing that local folder and erroring out. Does anyone know what config settings are affected when publishing? Obviously something is still pointing to my local C drive, and I've searched my entire solution and don't see why. Here's the runtime error I get when my code tries to run on discountasp.net's web server: Cannot write into the public directory - check permissions Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: ScrewTurn.Wiki.PluginFramework.InvalidConfigurationException: Cannot write into the public directory - check permissions Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [InvalidConfigurationException: Cannot write into the public directory - check permissions] ScrewTurn.Wiki.SettingsStorageProvider.Init(IHostV30 host, String config) in C:\www\Wiki\Screwturn3_0_2_509\Core\SettingsStorageProvider.cs:90 ScrewTurn.Wiki.StartupTools.Startup() in C:\www\Wiki\Screwturn3_0_2_509\Core\StartupTools.cs:69 ScrewTurn.Wiki.Global.Application_BeginRequest(Object sender, EventArgs e) in C:\www\Wiki\Screwturn3_0_2_509\WebApplication\Global.asax.cs:29 System.Web.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +68 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +75 Discountasp says it's not a permission issue, but obviously it is. I think /Wiki should work... but it's not. Here's my site viewed in FTP on discountasp.net's server:

    Read the article

  • Windows Disk I/O Analysis

    - by Jonathon
    It appears that we are having a problem with the disk I/O speed on our Windows 2003 Enterprise Edition server (64-bit). As we were initializing a database that created two 1G tablespaces on 3 different machines, it became obvious that the two smaller machines (each 32-bit Windows 2003 Standard Edition with less RAM) easily beat the larger machine when creating the files. The larger machine took 10x as long to create the tablespaces as the other machines did. Now I am left wondering how that could be. What programs or scripts would you recommend for tracking down the I/O problem? I think the issue may be with the controller card (all boxes are hardware RAID 10, but have different controller cards), but I would like to check the actual disk I/O speed as well, so I have some hard numbers to work with. Any help would be appreciated.

    Read the article

  • Portable way of finding total disk size in Java (pre Java 6)

    - by Wouter Lievens
    I need to find the total size of a drive in Java 5 (or 1.5, whatever). I know that Java 6 has a new method in java.io.File, but I need it to work in Java 5. Apache Commons IO has org.apache.commons.io.FileSystemUtils to provide the free disk space, but not the total disk space. I realize this is OS-dependent and will need to rely on messy command-line invocation. I'm fine with it working on "most" systems, i.e. Windows/Linux/Mac OS X. Preferably I'd like to use an existing library rather than write my own variants. Any thoughts? Thanks.
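
    One way to cover "most" systems in Java 5 is to shell out to the platform's own tools. Below is a rough sketch of that approach; it assumes wmic is available on Windows and df -k on Unix-like systems, and the output parsing is deliberately naive:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        // Sketch only: reads the total size of a drive by invoking OS tools.
        public class TotalDiskSize {
            public static long totalBytes(String path) throws Exception {
                String os = System.getProperty("os.name").toLowerCase();
                String[] cmd;
                if (os.indexOf("windows") >= 0) {
                    // Assumed Windows tool: wmic prints the volume size in bytes.
                    cmd = new String[] { "cmd", "/c",
                            "wmic logicaldisk where DeviceID='" + path + "' get Size" };
                } else {
                    // POSIX: df -k prints the total size in 1K blocks (second column).
                    cmd = new String[] { "sh", "-c", "df -k " + path };
                }
                Process p = Runtime.getRuntime().exec(cmd);
                BufferedReader in = new BufferedReader(new InputStreamReader(p.getInputStream()));
                in.readLine(); // skip the header row printed by both tools
                String line = in.readLine();
                if (os.indexOf("windows") >= 0) {
                    return Long.parseLong(line.trim());
                }
                String[] cols = line.trim().split("\\s+");
                return Long.parseLong(cols[1]) * 1024L;
            }

            public static void main(String[] args) throws Exception {
                System.out.println(totalBytes(args.length > 0 ? args[0] : "/"));
            }
        }

    In practice you would add error handling and a sensible fallback for platforms where neither tool exists.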

    Read the article

  • Disk boot failure in Windows 7 64-bit after installing the latest NVIDIA drivers

    - by Domchi
    I successfully installed the newest NVIDIA drivers (275.33) in Windows 7 64-bit and rebooted afterwards just in case. After the reboot, I got an error about a missing MBR. I disconnected the slave disk so that Windows doesn't get confused, and got this message: DISK BOOT FAILURE, INSERT SYSTEM DISK AND PRESS ENTER. It seems that Windows doesn't recognize the main disk anymore. I booted from the Windows install disk, but the main disk doesn't get listed as a possible Windows location to repair, and I can't get to it from the recovery prompt. The BIOS does recognize it, and I'm able to see it if I run diskpart - however, "detail disk" in diskpart says that there are no volumes on the disk. I also tried bootrec /FixMbr without effect, and bootrec /FixBoot, which gives the error message: Element not found. What else can I do? Why would diskpart say there are no volumes on the disk?

    Read the article

  • How do I read the cache of Chrome and Firefox programmatically on the Mac?

    - by John Gallagher
    Background I want to access the cache of Chrome and Firefox in my Cocoa application. I need to get the HTML for pages accessed recently. Safari is a piece of cake - all this information is available in SQLite data stores, but not so in Chrome and Firefox. The Problem For Firefox, the cache is in /Library/Caches/Firefox/Profiles/xxx.default/Cache with filenames _CACHE_001_ _CACHE_002_ _CACHE_003_ and _CACHE_MAP_ For Chrome, the cache is in /Library/Caches/Google/Chrome/Default/Cache with filenames data_0 data_1 data_2 and data_3 What I've tried The only article I can find that sheds any light on what format these caches are in is here. It recommends a Cache Viewer tool, but doesn't explain how one might do this programmatically. Questions Is there any way of reconstructing this data using command line tools or the Cocoa framework? Or is it much too low level? Is there another way of getting at the HTML of recent web pages that I don't know about?

    Read the article

  • choosing the right RAID level for a PostgreSQL database

    - by Sergey
    Hi, I got a disk array appliance of 8 disks, 1T each (UltraStor RS8IP4). It will be used solely by a PostgreSQL database, and I am trying to choose the best RAID level for it. The top priority is read performance, since we operate on large data sets (tables, indexes) and we do lots of searches/scans. With the old disks that we have now, most slowdowns happen on SELECTs. Fault tolerance is less important; losing 1 or 2 disks should be survivable. Space is the least important factor; even 1T will be enough. Which RAID level would you recommend in this situation? The current options are 60, 50 and 10, but other options could be even better.

    Read the article

  • Hard Drive upgrade advice for a Dell PowerEdge 1950

    - by user8185
    My setup is a Dell PowerEdge 1950 with 2x 140GB hard drives in a RAID 1 configuration. The OS is Windows 2003 Web Edition, and the disk is partitioned into two: a 12GB C: partition, with the remainder as the D: drive. Both are very close to full capacity. Ideally I want to replace those drives with 2x 1TB drives while retaining all data. First of all, is this possible without rebuilding the server? If so, will I need any 3rd-party software (e.g. Symantec Ghost or Partition Master) to do this? Any general advice on how to go about doing this?

    Read the article

  • Can't access my accelerated hard disk from msdos after installing linux on ssd cache

    - by Chibueze Opata
    I mistakenly installed Ubuntu on my SSD (I forgot my PC actually came with one). When it detected a ~31GiB disk that it wanted to install to, I was a bit confused, since I had freed up 30GB on my primary disk for it, but I clicked continue. After installation, I tried to boot back into Windows and it brought up some Intel RAID disk utility saying I should disable acceleration on a disk; something couldn't be found. I cancelled it, but whatever I tried (recovery tools, setups, etc.) I just couldn't access the drive, which was apparently using the SSD as a cache. Since then I've been stuck. I tried setting the 'raid' flag on the disk from gParted; still I couldn't. I tried the diskraid utility from the Windows recovery disk; it said it couldn't detect any RAID. diskpart sees the partition but doesn't see the volume, and when I remove the raid flag, it sees the volume as one of type raw, and I can't access anything. I can, however, mount the drive from a terminal in Ubuntu and access my files, but I don't have any backup media at the moment, so I can't simply do a factory re-install. Please, how do I go about solving the issue? Precisely, I would like to know how to boot into the drive again. Thanks!

    Read the article

  • Best Easiest Fastest No Install USB Boot Disk in 4 Simple Steps :)

    - by PearlFactory
    USB Boot Disk When you look up how to create a USB boot disk on the web, it is a nightmare. Here is the easiest method I use; it works for all MS products. At a computer running Windows Vista, Windows 7, or Windows Server 2008, run a command prompt as administrator and execute the following. Make sure you have all Explorer windows closed and nothing referencing the USB key, i.e. a doc open in Word. 1. C:\> diskpart DISKPART> list disk [Identify disk # of the USB key] DISKPART> sel disk 1 [assuming 1 was the # from above] DISKPART> clean [CAUTION—will wipe whichever disk is selected] DISKPART> cre part pri DISKPART> active DISKPART> assign DISKPART> format fs=ntfs quick DISKPART> exit C:\> exit 2. Copy the contents of the Windows Server 2008 R2 (or any other MS OS) DVD/ISO to the USB key. 3. From the system tray, use the “Safely remove hardware” icon to safely remove the USB key from the computer. This helps ensure that all files have been fully written to the USB key (especially after the large file copy). 4. Restart, put the USB key in, and boot from it. Reference from HP: h20195.www2.hp.com/v2/GetPDF.aspx/4AA3-1317ENW.pdf

    Read the article

  • Setting up group disk quotas

    - by Ray
    I am hoping to get some advice on setting up disk quotas. So, I know about: Adding usrquota and grpquota to /etc/fstab for the file systems that need to be managed. Using edquota to assign disk quotas to users. However, I need to do the last step for multiple users, and edquota seems to be a bit troublesome. One solution that I have found is that I can do: sudo edquota -u foo -p bar. This will copy the disk quota of bar to user foo. I was wondering if this is the best solution? I tried setting up group disk quotas, but they don't seem to be working. Are group quotas meant to help in assigning the same quota to multiple users? Or are they supposed to give a total limit to a set of users? For example, if users A, B, C are in group X, does assigning a quota of 20 GB give each user 20 GB, or does it give 20 GB to the entire group X to divide up? I'm interested in the former, not the latter. Right now, I've assigned group disk quotas and they aren't working, so I guess it is due to my misunderstanding of group disk quotas... My problem is that I want to easily give the same quota to multiple users; any suggestions on the best way to do this, out of what I've tried above or anything else I may not have thought of? Thank you!

    Read the article

  • REST application, Transactions, Cache drop

    - by Julian Davchev
    Hi, I am building a REST API in PHP with a memcache layer on top for caching all resources. After some reading and some experience, it turns out it's best when documents are as simple as possible... mainly because of cache-drop sequences. So if there are 'building' and 'room' entities, for the 'room' document I would only place the id of the 'building' and not its whole data. Then on the API client side I would merge data as needed. The problem comes when I need to update/insert (in most cases more than one table). I update one resource, but the second update fails, or whatever, and the database ends up inconsistent. I see several solutions: 1. Implement REST transactions, which I find wrong and complex, as the idea is to be stateless and simple. 2. On update/insert actions I pass more complex data (not single entities) so I can force transactions at the API level. But this will make it weird that your GET document structure is the same as your PUT document structure. And again it somehow makes the cache-drop sequences complex. Any pointers are more than welcome. Cheers,

    Read the article

  • Cannot Cache NHibernate Future Criteria Results

    - by Emilian
    I have the following code: public void FuturesQuery() { using (var session = SessionFactory.OpenSession()) { var blogs = session.CreateCriteria<Blog>() .SetMaxResults(5) .SetCacheable(true) .SetCacheMode(CacheMode.Normal) .SetCacheRegion("BlogQuery") .Future<Blog>(); var countOfBlogs = session.CreateCriteria<Blog>() .SetProjection(Projections.Count(Projections.Id())) .SetCacheable(true) .SetCacheMode(CacheMode.Normal) .SetCacheRegion("BlogQuery") .FutureValue<int>(); Console.WriteLine("Number of blogs: {0}", countOfBlogs.Value); foreach (var blog in blogs) { Console.WriteLine(blog.Title); } } using (var session = SessionFactory.OpenSession()) { var blogs = session.CreateCriteria<Blog>() .SetMaxResults(5) .SetCacheable(true) .SetCacheMode(CacheMode.Normal) .SetCacheRegion("BlogQuery") .Future<Blog>(); var countOfBlogs = session.CreateCriteria<Blog>() .SetProjection(Projections.Count(Projections.Id())) .SetCacheable(true) .SetCacheMode(CacheMode.Normal) .SetCacheRegion("BlogQuery") .FutureValue<int>(); Console.WriteLine("Number of blogs: {0}", countOfBlogs.Value); foreach (var blog in blogs) { Console.WriteLine(blog.Title); } } } I was expecting that the second time I queried for the blogs and the blog count I would get the values from the cache, but instead the queries hit the database. If I don't use Futures, I get the expected results. Does this mean that results from Criteria queries using Futures cannot be cached?

    Read the article

  • Post-loading: check if an image is in the browser cache

    - by Mathieu
    Short version: Is there an equivalent of navigator.mozIsLocallyAvailable that works in all browsers, or an alternative? Long version :) Hi, here is my situation: I want to implement an HtmlHelper extension for ASP.NET MVC that handles image post-loading easily (using jQuery). So I render the page with empty image sources, with the real source specified in the "alt" attribute. I insert the image sources after the "window.onload" event, and it works great. I did something like this: $(window).bind('load', function() { var plImages = $(".postLoad"); plImages.each(function() { $(this).attr("src", $(this).attr("alt")); }); }); The problem is: after the first load, the post-loaded images are cached. But if the page takes 10 seconds to load, the cached post-loaded images will only be displayed after those 10 seconds. So I want to set the image sources on the "document.ready" event for images that are already cached, to display them immediately. I found the function navigator.mozIsLocallyAvailable to check whether an image is in the cache. Here is what I've done with jQuery: //specify cached image sources on dom ready $(document).ready(function() { var plImages = $(".postLoad"); plImages.each(function() { var source = $(this).attr("alt") var disponible = navigator.mozIsLocallyAvailable(source, true); if (disponible) $(this).attr("src", source); }); }); //specify uncached image sources after page loading $(window).bind('load', function() { var plImages = $(".postLoad"); plImages.each(function() { if ($(this).attr("src") == "") $(this).attr("src", $(this).attr("alt")); }); }); It works in Mozilla's DOM, but it doesn't work in any other browser. I tried navigator.isLocallyAvailable: same result. Is there any alternative?

    Read the article

  • Personal Cache vs Memcache?

    - by Kerry
    I have a personal caching class, which can be seen here (based on WordPress's): http://pastie.org/988427 I recently learned about memcache, and it said to memcache EVERYTHING: http://highscalability.com/blog/2010/5/17/7-lessons-learned-while-building-reddit-to-270-million-page.html My first thought was just to keep my class with the current functions and make it use memcache instead -- is there any downside to doing this? The main difference I see is that memcache stays on with the server from page to page, while mine is for one page load. The problem I see arising, and this is with any system, is that they're dynamic. They change all the time. Whether it's search results, visible products, etc., etc. If it's all cached, won't that create a problem? Is there a way to handle this? Obviously if something is bringing back the same results every time it would be cached, but that's why I was doing it on a per-page-load basis. I'm sure there is a way to handle this, or is the cache time usually set between 5 minutes and an hour?

    Read the article

  • Combining cache methods - memcache/disk based

    - by Industrial
    Hi! Here's the deal. We would have taken the complete static HTML road to solve performance issues, but since the site will be partially dynamic, this won't work out for us. What we have thought of instead is using memcache + eAccelerator to speed up PHP and take care of caching for the most-used data. Here are the two approaches we have thought of right now: 1) Using memcache on all major queries and leaving it alone to do what it does best. 2) Using memcache for the most commonly retrieved data, and combining it with a standard hard-drive-stored cache for further usage. The major advantage of only using memcache is of course the performance, but as the number of users increases, the memory usage gets heavy. Combining the two sounds like a more natural approach to us, even with the theoretical compromise in performance. Memcached appears to have some replication features available as well, which may come in handy when it's time to add nodes. Which approach should we use? Is it stupid to compromise and combine the two methods? Should we instead focus on utilizing memcache, and simply upgrade the memory as the load increases with the number of users? Thanks a lot!
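
    For what it's worth, "combining the two" usually comes down to a simple read-through hierarchy: check memory first, fall back to the slower disk-backed store, and backfill memory on a hit. A rough, language-agnostic sketch of that lookup order (written in Java purely for illustration; the TieredCache and SlowStore names are made up for this example):

        import java.util.HashMap;
        import java.util.Map;

        // Illustrative two-tier cache: a fast in-memory map backed by a slower store.
        // "SlowStore" stands in for the disk-based cache (or any persistent layer).
        public class TieredCache {
            public interface SlowStore {
                String read(String key);          // returns null on a miss
                void write(String key, String value);
            }

            private final Map<String, String> memory = new HashMap<String, String>();
            private final SlowStore disk;

            public TieredCache(SlowStore disk) {
                this.disk = disk;
            }

            public String get(String key) {
                String value = memory.get(key);   // 1) cheap in-memory lookup
                if (value == null) {
                    value = disk.read(key);       // 2) fall back to the disk tier
                    if (value != null) {
                        memory.put(key, value);   // 3) backfill memory for next time
                    }
                }
                return value;
            }

            public void put(String key, String value) {
                memory.put(key, value);           // write-through to both tiers
                disk.write(key, value);
            }
        }

    Eviction, TTLs and invalidation (the hard parts) are left out; in a multi-server setup memcached would take the place of the in-memory map.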

    Read the article

  • android - how to cache an image from a remote site

    - by Lynnooi
    Hi, can anyone please provide an example of how to save an image I fetch from a website into a cache? I tried to include the following function in my code and call it once I run the activity. public void getRemoteImage(String imageUrl) { imageUrl = "http://marga.mobile9.com/download/thumb/295/sexylady7_xo6npovn.jpg"; URL aURL = null; URLConnection conn = null; Bitmap bmp = null; CacheResult cache_result = CacheManager.getCacheFile(imageUrl, new HashMap()); if (cache_result == null) { try { aURL = new URL(imageUrl); conn = aURL.openConnection(); conn.connect(); InputStream is = conn.getInputStream(); cache_result = new CacheManager.CacheResult(); CacheManager.saveCacheFile(imageUrl, cache_result); } catch (Exception e) { //return null; } } bmp = BitmapFactory.decodeStream(cache_result.getInputStream()); Toast.makeText(context,"Please work.. namo namo namo", Toast.LENGTH_SHORT).show(); //return bmp; } However, I got a NullPointerException. Can someone please help me with it, as I'm quite new to Android.
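
    For what it's worth, CacheManager is tied to the WebView cache, so for an image the WebView has never loaded it will typically hand back nothing, which is where the NullPointerException comes from. A rough sketch of a more common approach is below: download the image yourself and keep a copy in the app's private cache directory (it assumes a Context is available, and threading and error handling are ignored):

        import android.content.Context;
        import android.graphics.Bitmap;
        import android.graphics.BitmapFactory;

        import java.io.File;
        import java.io.FileOutputStream;
        import java.io.InputStream;
        import java.net.URL;

        // Sketch: fetch a remote image once, then reuse the copy in the app cache dir.
        public class ImageCache {
            public static Bitmap getImage(Context context, String imageUrl) throws Exception {
                // Derive a flat file name for this URL (hashCode is good enough for a demo).
                File cached = new File(context.getCacheDir(), "img_" + imageUrl.hashCode());

                if (!cached.exists()) {
                    // Not cached yet: download and write the raw bytes to the cache dir.
                    InputStream in = new URL(imageUrl).openStream();
                    FileOutputStream out = new FileOutputStream(cached);
                    byte[] buffer = new byte[8192];
                    int read;
                    while ((read = in.read(buffer)) != -1) {
                        out.write(buffer, 0, read);
                    }
                    out.close();
                    in.close();
                }

                // Decode from the local copy on every call after the first.
                return BitmapFactory.decodeFile(cached.getAbsolutePath());
            }
        }

    On a real device you would run the download off the UI thread and put a cap on how large the cache directory can grow.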

    Read the article

  • Can't remove GPT data from MBR

    - by user2373121
    I am having difficulty getting the Ubuntu installer (and gparted) to recognize the partitions on my MBR type disk. Other operating systems and disk tools read the disk structure and the files on it fine. I have used fixparts to write a new MBR but the issue persists. I assume the issue stems from the Protective MBR data still present on the disk but I am at a loss as to how to remove it while preserving my NTFS data partition. Microsoft Windows [Version 6.1.7601] Copyright (c) 2009 Microsoft Corporation. All rights reserved. c:\Users\mike\Desktop\fixparts>fixparts 3: FixParts 0.8.8 Loading MBR data from 3: Warning: 0xEE partition doesn't start on sector 1. This can cause problems in some OSes. MBR command (? for help): Running gdisk shows Microsoft Windows [Version 6.1.7601] Copyright (c) 2009 Microsoft Corporation. All rights reserved. c:\Users\mike\Desktop\fixparts>gdisk 3: GPT fdisk (gdisk) version 0.8.7 Partition table scan: MBR: MBR only BSD: not present APM: not present GPT: not present *************************************************************** Found invalid GPT and valid MBR; converting MBR to GPT format in memory. THIS OPERATION IS POTENTIALLY DESTRUCTIVE! Exit by typing 'q' if you don't want to convert your MBR partitions to GPT format! *************************************************************** ************************************************************************ Most versions of Windows cannot boot from a GPT disk, and most varieties prior to Vista cannot read GPT disks. Therefore, you should exit now unless you understand the implications of converting MBR to GPT or creating a new GPT disk layout! ************************************************************************ Are you SURE you want to continue? (Y/N): y Command (? for help): p Disk 3:: 2930277168 sectors, 1.4 TiB Logical sector size: 512 bytes Disk identifier (GUID): BFE92CE8-F93D-4141-82B8-816AD06FB36E Partition table holds up to 128 entries First usable sector is 34, last usable sector is 2930277134 Partitions will be aligned on 2048-sector boundaries Total free space is 163846893 sectors (78.1 GiB) Number Start (sector) End (sector) Size Code Name 1 163842048 2930272255 1.3 TiB 0700 Microsoft basic data Command (? for help): r Recovery/transformation command (? for help): o Disk size is 2930277168 sectors (1.4 TiB) MBR disk identifier: 0x00000000 MBR partitions: Number Boot Start Sector End Sector Status Code 1 1 2930277167 primary 0xEE Recovery/transformation command (? for help): q

    Read the article

  • Fix Icon Display Problems by Rebuilding the Windows 7 Thumbnail Cache

    - by The Geek
    Have you ever been browsing through photos or videos on your PC, and noticed that the thumbnails weren’t showing up properly? Sometimes they get corrupted, and you can quickly rebuild them to fix the problem. Just for some background, let’s walk through what we’re talking about. Normally, when you’re browsing around your files, you’ll see thumbnails for pictures and videos that you are viewing. These thumbnails are all generated, and stored in a cache to make browsing files faster. But sometimes… the cache gets corrupted, and we need to rebuild them. Here’s an example of what happens when it goes out of whack… Rebuilding the Windows 7 Thumbnail Cache All you have to do is open up Disk Cleanup through the start menu search box (just type in disk cleanup to find it) Just make sure that Thumbnails is checked, and then click OK to run through the cleaning process.

    Read the article

  • Using The Windows Server AppFabric Cache with ASP.NET

    Did you know that you can use the AppFabric Cache with ASP.NET?  AppFabric Cache provides an ASP.NET session state provider.  There are a number of reasons that you would want to consider using the AppFabric Cache instead of other caching technologies, including the built in ASP.NET caching. The AppFabric Cache provides a number of benefits to ASP.NET programmers.  When web applications need to maintain state, especially across a Web Farm, or needs to maintain objects across restarts...

    Read the article

  • Cool Cleaner for Android Makes Cache and History Wiping a Snap

    - by Jason Fitzpatrick
    Cool Cleaner for Android is a free application that consolidates clearing the various caches and histories on your Android phone into dead-simple wiping. If you frequently clear the cache and history files for applications on your phone, Cool Cleaner will save you a ton of time. Rather than navigating to various applications and sub-menus to clear out the cache and the history, Cool Cleaner acts as a dashboard for all your apps. From the History and Cache tabs in the app you can wipe everything from your outgoing call log to your Market search history and more. If the app has a history file or cache you can wipe it from Cool Cleaner–including non-stock apps like Facebook, TweetDeck, game apps, etc. Cool Cleaner is a free ad-supported application. Hit up the link below to read more and grab a copy. Cool Cleaner [Android Market via Addictive Tips]

    Read the article

  • Session and Cache are By Reference

    Recently a colleague remarked that if he got a List of some type out of the ASP.NET Cache, and changed an item, the Cache item would also change. That is correct. Session (InProc), Cache and Application all return "live" references. A good writeup on this can be found by friend and fellow MVP Rick Strahl here: http://www.west-wind.com/Weblog/posts/1214.aspx If you do not want this behavior, you need to either delete the Session / Cache / Application object and replace it with what you want later, or...
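
    The same "live reference" behavior shows up in any in-process cache that returns the stored object rather than a copy. A tiny illustration of the pitfall (a plain Java map standing in for the ASP.NET Cache, purely to make the point concrete):

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Toy in-process "cache": the map hands back the stored list itself, not a copy.
        public class LiveReferenceDemo {
            public static void main(String[] args) {
                Map<String, List<String>> cache = new HashMap<String, List<String>>();
                cache.put("items", new ArrayList<String>(Arrays.asList("a", "b")));

                List<String> items = cache.get("items");
                items.add("c"); // mutates the cached object; nothing was ever put() back

                System.out.println(cache.get("items")); // prints [a, b, c]
            }
        }

    Deleting the entry and re-inserting a fresh copy, as suggested above, is how you avoid surprising other readers of the cache.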

    Read the article

  • Do you know how to move the Team Foundation Server cache

    - by Martin Hinshelwood
    There are a number of reasons why you may want to change the folder where you store the TFS Cache. It can take up “some” amount of room, so moving it to another drive can be beneficial. This is the source control Cache that TFS uses to cache data from the database. Moving the Cache is pretty easy and should allow you to organise your server space a little more efficiently. You may also get a performance improvement (although small) by putting it on another drive. Create a new directory to store the Cache, e.g. “d:\TfsCache\” Figure: Create a new folder Give the local TFS WPG group full control of the directory Figure: You need to use the App Tier Service WPG In the application tier web.config (~\Application Tier\Web Services\web.config) add the following setting (to the appSettings section). Figure: The web.config for TFS is stored in the application folder <appSettings> ... <add value="D:\" key="dataDirectory" /> ... </appSettings> Figure: Adding this to the web.config will trigger a restart of the app pool Figure: Your web.config should look something like this The app pool will automatically recycle and Team Web Access will start using the new location. If you then download a file (not via a proxy) a folder with a GUID should be created immediately in the folder from #1. If the folder doesn’t appear, then you probably don’t have permissions set up properly.

    Read the article
