Search Results

Search found 14653 results on 587 pages for 'disk cache'.

Page 93 of 587

  • How to mount a remote disk as a local disk in Windows 7 and Windows 2003?

    - by Robert
    Is it possible to mount a remote file system (on a Windows 2003 server) as a local disk in Windows 7? I have a Windows 2003 server with software RAID and some shares. On another computer running Windows 7 I have software which can only access data from local or USB disks. The solution described in "Mount Remote CIFS/SMB Share as a Folder not a Drive Letter" doesn't work, because the program doesn't see the files in the folder. Everything is on the same LAN.

    Read the article

  • Software RAID for several HDs which retains files on each HD

    - by Fuxi
    Is there some kind of software/driver that would enable me to create one big volume out of several hard disks while retaining the file structure on each HD? That way, if one hard disk crashes, only the data on that disk would be lost. Windows 7 lets me combine the disks into one volume, but if one HD breaks, all data is lost.

    Read the article

  • Corrupted WebSphere Installation

    - by Keith
    When our server's disk fills up, WebSphere gets corrupted (missing resources in the administration console, missing servers, etc.). Is there a way to recover or repair WebSphere instead of reinstalling?

    Read the article

  • How bad is it to use a virtual file system with VMWare?

    - by user37244
    IT is running a series of VMs that we'd like to see optimized further. If the VMs are Windows XP, and their NTFS images are stored on a virtual disk that lives on the host's ext3 file system (Linux/VMware), how much of a performance hit are we taking, as opposed to giving them a partition of the host hard drive formatted NTFS to eliminate the translation layer and the extra level of operating-system I/O preparation?

    Read the article

  • How do I tell mdadm to start using a missing disk in my RAID5 array again?

    - by Jon Cage
    I have a 3-disk RAID array running in my Ubuntu server. This has been running flawlessly for over a year, but I was recently forced to strip, move and rebuild the machine. When I had it all back together and ran up Ubuntu, I had some problems with disks not being detected; a couple of reboots later and I'd solved that issue. The problem now is that the 3-disk array is showing up as degraded every time I boot up. For some reason it seems that Ubuntu has made a new array and added the missing disk to it. I've tried stopping the new 1-disk array and adding the missing disk, but I'm struggling. On startup I get this:

        root@uberserver:~# cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md_d1 : inactive sdf1[2](S)
              1953511936 blocks
        md0 : active raid5 sdg1[2] sdc1[3] sdb1[1] sdh1[0]
              2930279808 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

    I have two RAID arrays, and the one that normally pops up as md1 isn't appearing. I read somewhere that calling mdadm --assemble --scan would re-assemble the missing array, so I've tried first stopping the existing array that Ubuntu started:

        root@uberserver:~# mdadm --stop /dev/md_d1
        mdadm: stopped /dev/md_d1

    ...and then tried to tell Ubuntu to pick the disks up again:

        root@uberserver:~# mdadm --assemble --scan
        mdadm: /dev/md/1 has been started with 2 drives (out of 3).

    So that's started md1 again, but it's not picking up the disk from md_d1:

        root@uberserver:~# cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md1 : active raid5 sde1[1] sdf1[2]
              3907023872 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]
        md_d1 : inactive sdd1[0](S)
              1953511936 blocks
        md0 : active raid5 sdg1[2] sdc1[3] sdb1[1] sdh1[0]
              2930279808 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

    What's going wrong here? Why is Ubuntu trying to pick up sdd into a different array? How do I get that missing disk back home again?

    [Edit] - After adding md1 to mdadm.conf it now tries to mount the array on startup, but it's still missing the disk. If I tell it to try and assemble automatically, I get the impression it knows it needs sdd but can't use it:

        root@uberserver:~# mdadm --assemble --scan
        /dev/md1: File exists
        mdadm: /dev/md/1 already active, cannot restart it!
        mdadm: /dev/md/1 needed for /dev/sdd1...

    What am I missing?

    Read the article

  • What causes Mac OS X Permission errors?

    - by Matthew Savage
    This is out of interest rather than looking for a fix to a problem. What actually causes permissions on Mac OS X systems to become messed up? It's an easily fixed problem (i.e. there's a quick and easy fix via Disk Utility), but it's something I'd encountered a few times doing support in a Mac-reseller store without actually understanding the causes. I'd guess that part of it is due to some applications not playing nicely, but what else might be the source of this issue?

    Read the article

  • Why does a hard disk suddenly look to Windows as if it "needs to be formatted"?

    - by pufferfish
    This is more of a theory question, but what are the reason(s) a disk suddenly causes Windows to start saying it "needs to be formatted"? It happens to an IDE disk that I have in a cheap external enclosure, and I can usually get most of the data back by using software like Recuva. It's now happened to an internal disk I have. I'm not looking for software to fix this (although links would be appreciated), but rather a low-level explanation as to what gets corrupted on the disk.

    Read the article

  • Is there a performance difference between Windows 7 installed from scratch on an SSD versus using a recent ghost/clone image taken from a hard disk?

    - by therobyouknow
    I'm planning to upgrade a notebook PC to a solid-state drive (SSD) soon. I want to use the notebook before that and am considering installing Windows 7 on the hard disk (spinning variety, 5400 rpm) before I get the SSD. To save time I am wondering if I can ghost/clone the installation of Windows 7 from the hard drive and put it on the SSD. Would the performance of this clone from the hard disk onto the SSD be different from starting again and reinstalling Windows 7 from scratch on the SSD? (Windows 7 32-bit Professional)

    Read the article

  • Are there any known issues with Windows 8 installed on a VHD?

    - by Richard
    I installed Windows 8 preview on a VHD image and it seemed to work until I actually started using it. I'm seeing terrible performance: installing anything makes everything else "stutter" or freeze for up to a couple of seconds at a time. I looked at hard disk performance in Task Manager, and it doesn't seem right that the disk shows a 2500 ms response time while reading/writing at those speeds. Is this an issue with my drive, my installation, or VHDs in general?

    Read the article

  • Rails Counter Cache and its implementation

    - by Ishu
    Hello all, I am trying to get the hang of the Rails counter cache feature but am not able to grasp it completely. Let's say we have three models: A, B and C. A belongs to either B or C depending on the fields key_type and key_id; key_type tells whether A belongs to B or C, so if key_type = "B" the record belongs to B, otherwise it belongs to C. In my model a.rb I have defined the following associations:

        belongs_to :b, :counter_cache => true, :foreign_key => "key_id"
        belongs_to :c, :counter_cache => true, :foreign_key => "key_id"

    and in the b and c model files:

        has_many :as, :conditions => {:key_type => "B"}
        has_many :as, :conditions => {:key_type => "C"}

    Both the B and C models have a column named as_count. The problem is that every time an object of A is created, the count is increased in both model B and model C. Any help is appreciated. Initially I thought this might work:

        belongs_to :b, :counter_cache => true, :foreign_key => "key_id", :conditions => {:key_type => "B"}
        belongs_to :c, :counter_cache => true, :foreign_key => "key_id", :conditions => {:key_type => "C"}

    But this does not help. Thanks

    Read the article

  • .NET assembly cache / ngen / jit image warm-up and cool-down behavior

    - by Mike Jiang
    Hi, I have an Input Method (IME) program built as a C# .NET 2.0 DLL exposed through C++/CLI. Since an IME always attaches to another application, the C# .NET DLL does not seem able to avoid image address rebasing. Although I have applied ngen to create a native image of that C# .NET 2.0 DLL and installed it into the Global Assembly Cache, it didn't improve much: approximately 12 sec. down to 9 sec. on a slow PIII-level PC. Therefore I use a small application, which loads all the components referenced by the C# .NET DLL at boot time, to "warm up" the native image of that DLL. This works fine and brings the loading time down to 0.5 sec. However, it only works for a while; about 30 min. later, it seems to "cool down" again. Is there any way to control the behavior of the GAC or the native image so that it is always "hot"? Is this really an image address rebasing problem? Thank you for your precious time. Sincerely, Mike

    Read the article

  • SQL Cache Dependency not working with Stored Procedure

    - by pjacko
    Hello, I can't get SqlCacheDependency to work with a simple stored proc (SQL Server 2008):

        create proc dbo.spGetPeteTest
        as
            set ANSI_NULLS ON
            set ANSI_PADDING ON
            set ANSI_WARNINGS ON
            set CONCAT_NULL_YIELDS_NULL ON
            set QUOTED_IDENTIFIER ON
            set NUMERIC_ROUNDABORT OFF
            set ARITHABORT ON

            select Id, Artist, Album from dbo.PeteTest

    And here's my ASP.NET code (3.5 framework):

        -- global.asax
        protected void Application_Start(object sender, EventArgs e)
        {
            string connectionString = System.Configuration.ConfigurationManager.ConnectionStrings["MyConn"].ConnectionString;
            System.Data.SqlClient.SqlDependency.Start(connectionString);
        }

        -- Code-Behind
        private DataTable GetAlbums()
        {
            string connectionString = System.Configuration.ConfigurationManager.ConnectionStrings["UnigoConnection"].ConnectionString;
            DataTable dtAlbums = new DataTable();

            using (SqlConnection connection = new SqlConnection(connectionString))
            {
                // Works using select statement, but NOT SP with same text
                //SqlCommand command = new SqlCommand(
                //    "select Id, Artist, Album from dbo.PeteTest", connection);
                SqlCommand command = new SqlCommand();
                command.Connection = connection;
                command.CommandType = CommandType.StoredProcedure;
                command.CommandText = "dbo.spGetPeteTest";

                System.Web.Caching.SqlCacheDependency new_dependency =
                    new System.Web.Caching.SqlCacheDependency(command);

                SqlDataAdapter DA1 = new SqlDataAdapter();
                DA1.SelectCommand = command;
                DataSet DS1 = new DataSet();
                DA1.Fill(DS1);
                dtAlbums = DS1.Tables[0];

                Cache.Insert("Albums", dtAlbums, new_dependency);
            }
            return dtAlbums;
        }

    Anyone have any luck with getting this to work with SPs? Thanks!

    Read the article

  • JSP page getting served from cache rather than loaded from the server

    - by sam4u-optimistic86
    I am calling a JSP based on 2 parameters which are passed from 1.jsp in this way. Below, I pass 2 parameters into 2.jsp, and based on these 2 parameters data is displayed in 2.jsp. I have a loop containing a number of hrefs like the one described below; each of these hrefs passes a different set of values to 2.jsp.

        out.println("<a href=\"2.jsp?prId=" + prog.getId() + count + "\">" + prog.getName() + "</a>");

    I retrieve these 2 parameters in 2.jsp using the following lines:

        count_id = request.getParameter( "country_id" );
        prog_id = Integer.parseInt(request.getParameter( "program_id" ));

    Based on these 2 parameters I show the corresponding data in 2.jsp. Now I have a back button in 2.jsp, and I call 1.jsp from 2.jsp using the following code:

        <a href="1.jsp"><img src="/image/back.gif" border="0"></a>

    The problem is that when I use the back button to go back to 1.jsp and select another href like the one described above, I get the data related to the previously selected href. I guess the problem is that when I make the request, the page is loaded from cache rather than from the server. Please advise.

    Read the article

  • Cache consistency & spawning a thread

    - by Dave Keck
    Background: I've been reading through various books and articles to learn about processor caches, cache consistency, and memory barriers in the context of concurrent execution. So far though, I have been unable to determine whether a common coding practice of mine is safe in the strictest sense.

    Assumptions: The following pseudo-code is executed on a two-processor machine:

        int sharedVar = 0;

        myThread()
        {
            print(sharedVar);
        }

        main()
        {
            sharedVar = 1;
            spawnThread(myThread);
            sleep(-1);
        }

    main() executes on processor 1 (P1), while myThread() executes on P2. Initially, sharedVar exists in the caches of both P1 and P2 with the initial value of 0 (due to some "warm-up code" that isn't shown above.)

    Question: Strictly speaking – preferably without assuming any particular CPU – is myThread() guaranteed to print 1? With my newfound knowledge of processor caches, it seems entirely possible that at the time of the print() statement, P2 may not have received the invalidation request for sharedVar caused by P1's assignment in main(). Therefore, it seems possible that myThread() could print 0.

    References: These are the related articles and books I've been reading. (It wouldn't allow me to format these as links because I'm a new user - sorry.)

        Shared Memory Consistency Models: A Tutorial - hpl.hp.com/techreports/Compaq-DEC/WRL-95-7.pdf
        Memory Barriers: a Hardware View for Software Hackers - rdrop.com/users/paulmck/scalability/paper/whymb.2009.04.05a.pdf
        Linux Kernel Memory Barriers - kernel.org/doc/Documentation/memory-barriers.txt
        Computer Architecture: A Quantitative Approach - amazon.com/Computer-Architecture-Quantitative-Approach-4th/dp/0123704901/ref=dp_ob_title_bk

    Read the article

  • Load SQL query result data into cache in advance

    - by Marc
    I have the following situation:

        - .NET 3.5 WinForms client app accessing SQL Server 2008
        - Some queries returning a relatively big amount of data are used quite often by a form
        - Users are using local SQL Express and restarting their machines at least daily
        - Other users are working remotely over slow network connections

    The problem is that after a restart, the first time users open this form the queries are extremely slow and take more or less 15 s to execute even on a fast machine. Afterwards the same queries take only 3 s. Of course this comes from the fact that no data is cached and it must be loaded from disk first.

    My question: would it be possible to force the loading of the required data into the SQL Server cache in advance?

    Note: My first idea was to execute the queries in a background worker when the application starts, so that when the user opens the form the queries will already be cached and will execute fast directly. However, I don't want to pull the results of the queries over to the client, as some users are working remotely or otherwise have slow networks. So I thought of just executing the queries from a stored procedure and putting the results into temporary tables so that nothing would be returned. It turned out that some of the result sets use dynamic columns, so I couldn't create the corresponding temp tables, and thus this isn't a solution. Do you happen to have any other idea?

    Read the article

  • Rate Limit Calls To Api Using Cache

    - by namtax
    Hi, I am using ColdFusion to call the last.fm API, using a CFC bundle sourced from here. I am concerned about going over the request limit, which is 5 requests per originating IP address per second, averaged over a 5 minute period.

    The CFC bundle has a central component which calls all the other components, which are split up into sections like "artist", "track", etc. This central component, "lastFmApi.cfc", is initiated in my application and persisted for the lifespan of the application:

        // Application.cfc example
        <cffunction name="onApplicationStart">
            <cfset var apiKey = '[your api key here]' />
            <cfset var apiSecret = '[your api secret here]' />
            <cfset application.lastFm = CreateObject('component', 'org.FrankFusion.lastFm.lastFmApi').init(apiKey, apiSecret) />
        </cffunction>

    Now if I want to call the API through a handler/controller, for example my artist handler, I can do this:

        <cffunction name="artistPage" cache="5 mins">
            <cfset qAlbums = application.lastFm.user.getArtist(url.artistName) />
        </cffunction>

    I am a bit confused about the caching. I am caching each call to the API in this handler for 5 mins, but does this make any difference? Each time someone hits a new artist page, won't this still count as a fresh hit against the API? Wondering how best to tackle this. Thanks
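
    Purely as an illustration of one way to tackle this - sketched in PHP with Memcached rather than ColdFusion, and with made-up function names, so treat it as an assumption rather than a drop-in answer - the usual trick is to count outgoing calls in a shared cache and only hit the API while the counter is under the limit:

        <?php
        // Hypothetical sketch: rate-limit outgoing API calls with a per-second counter.
        function allowApiCall(Memcached $cache, $bucket = 'lastfm', $limitPerSecond = 5)
        {
            $key = $bucket . ':' . time();       // one counter per second of wall-clock time

            if ($cache->add($key, 1, 10)) {      // first call in this second (10 s expiry)
                return true;
            }
            $count = $cache->increment($key);    // later calls in the same second bump the counter
            return $count !== false && $count <= $limitPerSecond;
        }

    Caching the responses themselves (as the artistPage handler above already does) still helps, because repeat visits to the same artist never reach the API at all; a counter like this only guards the calls that do go out.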

    Read the article

  • Radio buttons being reset in FF on cache-refresh

    - by Andrew Song
    (This is technically an addendum to an earlier Stack Overflow question I had posted, but my original post asked a different question which doesn't really cover this topic -- I don't want to edit my older question as I feel this is different enough to merit its own page.)

    While browsing my website in Firefox 3.5 (and only FF3.5), I come across a page with two radio buttons that have the following HTML code:

        <input id="check1" type="radio" value="True" name="check" checked="checked"/>
        <input id="check2" type="radio" value="False" name="check"/>

    This page renders as expected, with 'check1' checked and 'check2' unchecked. When I then refresh the page by pressing Control + R, the two radio buttons render, but they are both unchecked, even though the raw HTML code is the same as above. If I do a cache-miss refresh (via Control + F5 or Control + Shift + R), the page returns to the way you'd expect it. This is not a problem in any other browser I've tried, only FF3.5. What is causing these radio buttons to be reset on a normal refresh? How can I avoid this?

    Read the article

  • Gem Load Error about whois command and removed cache

    - by Puru puru rin..
    Hello, I have a nasty problem with gem. After executing this command:

        rm -f /usr/local/lib/ruby/gems/1.9.1/cache/*

    I cannot do anything. If I try, for instance:

        gem cleanup

    I get this kind of answer:

        /usr/local/lib/ruby/gems/1.9.1/gems/gemwhois-0.1/lib/gemwhois.rb:3:in `require': no such file to load -- rubygems/commands/whois (LoadError)
            from /usr/local/lib/ruby/gems/1.9.1/gems/gemwhois-0.1/lib/gemwhois.rb:3:in `<top (required)>'
            from /usr/local/lib/ruby/gems/1.9.1/gems/gemwhois-0.1/lib/rubygems_plugin.rb:2:in `require'
            from /usr/local/lib/ruby/gems/1.9.1/gems/gemwhois-0.1/lib/rubygems_plugin.rb:2:in `<top (required)>'
            from /usr/local/lib/ruby/site_ruby/1.9.1/rubygems.rb:1113:in `load'
            from /usr/local/lib/ruby/site_ruby/1.9.1/rubygems.rb:1113:in `block in <top (required)>'
            from /usr/local/lib/ruby/site_ruby/1.9.1/rubygems.rb:1105:in `each'
            from /usr/local/lib/ruby/site_ruby/1.9.1/rubygems.rb:1105:in `<top (required)>'
            from <internal:gem_prelude>:235:in `require'
            from <internal:gem_prelude>:235:in `load_full_rubygems_library'
            from <internal:gem_prelude>:334:in `const_missing'
            from /usr/local/bin/gem:12:in `<main>'

    It's the same for gem -v, or for the gem command by itself... I'm working on Snow Leopard. What do you think would be the best solution? Thanks a lot!

    Read the article

  • Database cache that I'm not aware of?

    - by Martin
    I'm using ASP.NET MVC, LINQ to SQL, IIS7 and SQL Server Express 2008. I get these intermittent server errors - primary key conflicts on insertion. I'm using a different setup on my development computer so I can't debug. After a while they go away; restarting IIS helps. I'm getting the feeling there is a cache somewhere that I'm not aware of. Can somebody help me sort out these errors?

        Cannot insert duplicate key row in object 'dbo.EnquiryType' with unique index 'IX_EnquiryType'.

    Edits regarding Venemo's answer:

    Q: Is it possible that another application is also accessing the same database simultaneously?
    A: Yes there is, but not this particular table, and no inserts or updates. There is one other table with which I experience the same problem, but it has to do with a different part of the model.

    Q: How often and in what context do you create a new DataContext instance?
    A: Only once, using the singleton pattern.

    Q: Are the primary keys generated by the database or by the application?
    A: Database.

    Q: Which version of ASP.NET MVC and which version of .NET are you using?
    A: RC2 and 3.5.

    Read the article

  • Clear tableView cell cache (or remove an entry)

    - by ManniAT
    Hi, I have the same problem as described here: http://stackoverflow.com/questions/2286669/iphone-how-to-purge-a-cached-uitableviewcell. But my problem can't be solved with "resetting content". To be precise, I use a custom cell (my own class). While the application is running it is possible that I have to use a different "cell type". It's the same class, but it has (a lot of) different attributes. Of course I could always reset all these things in "prepareForReuse", but that's not a good idea I guess (there are a lot of things to reset). My idea: I do all these things in the constructor of the cell, and all the rows will use this "type of cell" later. When the (seldom) situation comes that I have to change the look of all rows, I create a new instance of this kind of cell with different settings, and now I want to replace the queued cell with this new one. I tried simply calling the constructor with the same cell identifier (in the hope it would replace the existing one), but that doesn't work. I also didn't find a "ClearReusableCells" or something like that. Is there a way to clear the cache, or to remove / replace a specific item? Manfred

    Read the article

  • Cache layer for MVC - Model or controller?

    - by Industrial
    Hi everyone, I am having some second thoughts about where to implement the caching part. Where do you think is the most appropriate place to implement it: inside every model, or in the controller?

    Approach 1 (pseudo-code):

        // mycontroller.php
        class MyController extends Controller_class {
            function index () {
                $data = $this->model->getData();
                echo $data;
            }
        }

        // myModel.php
        class MyModel extends Model_Class {
            function getData() {
                $data = $this->memcached->get('data');
                if (!$data) {
                    $data = $query->SQL_QUERY("Do query!");
                }
                return $data;
            }
        }

    Approach 2:

        // mycontroller.php
        class MyController extends Controller_class {
            function index () {
                $dataArray = $this->memcached->getMulti('data','data2');
                foreach ($dataArray as $key) {
                    if (!$key) {
                        $data = $this->model->getData();
                        $this->memcached->set($key, $data);
                    }
                }
                echo $data;
            }
        }

        // myModel.php
        class MyModel extends Model_Class {
            function getData() {
                $data = $query->SQL_QUERY("Do query!");
                return $data;
            }
        }

    Thoughts:

    Approach 1:
        - No multi-get/multi-set; if a high number of keys were returned, this would cause overhead.
        - Easier to maintain: all database/cache handling is in each model.

    Approach 2:
        - Better performance-wise, since multi-set/multi-get is used.
        - More code required.
        - Harder to maintain.

    Tell me what you think!
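
    As an illustrative sketch of a possible middle ground - the class and method names below are made up, not from the post - the cache handling can live in one small helper that the models share, so controllers stay cache-agnostic while multi-get is still used for the keys a single model needs:

        <?php
        // Hypothetical cache-aside helper shared by the models (assumes the Memcached extension).
        class CacheLayer
        {
            private $memcached;

            public function __construct(Memcached $memcached)
            {
                $this->memcached = $memcached;
            }

            // Fetch many keys at once; call $loader only for the cache misses.
            public function getOrLoad(array $keys, callable $loader, $ttl = 300)
            {
                $cached  = $this->memcached->getMulti($keys) ?: array();
                $missing = array_values(array_diff($keys, array_keys($cached)));

                if ($missing) {
                    $loaded = $loader($missing);            // e.g. one SQL query for all misses
                    $this->memcached->setMulti($loaded, $ttl);
                    $cached += $loaded;
                }
                return $cached;
            }
        }

    A model would then call $this->cache->getOrLoad(...) with its own keys and a closure that runs the query, which keeps the database/cache handling in the model (the advantage of approach 1) without giving up multi-get (the advantage of approach 2).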

    Read the article

  • How to cache code in PHP?

    - by Janis Peisenieks
    I am creating a custom form building system which includes various tokens. These tokens are found using regular expressions and, depending on the type of token, parsed. Some require simple replacement, some require cycles, and so forth.

    Now I know that regular expressions are quite resource and time consuming, so I would like to be able to parse the code for the form once, generating PHP code, and then save that PHP code for subsequent uses. How would I go about doing this? So far I have only seen output caching. Is there a way to cache commands like echo and cycles like foreach()?

    To avoid misunderstandings, here is an example. Unparsed template data:

        Thank You for Your interest, [*Title*] [*Firstname*] [*Lastname*]. Here are the details of Your order!
        [*KeyValuePairs*]
        Here is the link to Your request: [*LinkToRequest*].

    Parsed template:

        Thank You for Your interest, <?php echo $data->title;?> <?php echo $data->firstname;?> <?php echo $data->lastname;?>. Here are the details of Your order!
        <?php foreach($data->values as $key=>$value){ echo $key."-".$value; }?>
        Here is the link to Your request: <?php echo $data->linkToRequest;?>.

    I would then save the parsed template and, instead of parsing the template every time, just pass the $data variable to the already parsed one, which would generate the output.
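
    One common way to do this - sketched below on the assumption that the compiled templates are trusted (the cached file is executed with include), and with compileTemplate() standing in as a hypothetical name for the regex-based token parser described above - is to write the generated PHP to a cache file once and simply include that file on later requests:

        <?php
        // Hypothetical sketch of compile-once template caching.
        function renderCachedTemplate($templateFile, $cacheDir, $data)
        {
            $cacheFile = $cacheDir . '/' . md5($templateFile) . '.php';

            // Recompile only when the source template is newer than the cached PHP.
            if (!is_file($cacheFile) || filemtime($templateFile) > filemtime($cacheFile)) {
                $source   = file_get_contents($templateFile);
                $compiled = compileTemplate($source);       // the regex work happens only here
                file_put_contents($cacheFile, $compiled, LOCK_EX);
            }

            // Execute the cached PHP with $data in scope and capture its output.
            ob_start();
            include $cacheFile;
            return ob_get_clean();
        }

    Everything echoed by the cached file (including foreach loops over $data->values) runs as ordinary PHP, so only the very first request pays the regular-expression cost; an opcode cache such as APC can then keep the compiled file in memory as well.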

    Read the article

  • Ideal HTTP cache control headers for different types of resources

    - by chris_l
    I want to find a minimal set of headers that work with "all" caches and browsers (also when using HTTPS!). On my (GWT-based) web site, I'll have three kinds of resources:

    1. Forever cacheable (public / equal for all users)
    These files don't ever change, and they get a filename based on the MD5 of their contents (this is GWT's approach). They should get cached as much as possible, even when using HTTPS (so I assume I should set Cache-Control: public, especially for Firefox?).

    2. Changing for every new version of the site (public / equal for all users)
    These files can be cached, but probably need to be revalidated every time.

    3. Individual for each request (private / user specific)
    These resources (e.g. JSON responses) should never be cached unencrypted to disk under any circumstances. (Maybe I'll have a few specific requests that could be cached.)

    I have a general idea of which headers I would probably use for each type, but there's always something I could be missing.
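
    Purely as a commonly used starting point (an assumption based on general practice, not something stated in the question), the Cache-Control values for the three types might look roughly like this:

        1. Forever cacheable:         Cache-Control: public, max-age=31536000
        2. Changes with each release: Cache-Control: no-cache   (plus an ETag or Last-Modified so revalidation is cheap)
        3. User specific:             Cache-Control: private, no-store

    The exact max-age and whether to add must-revalidate depend on how much staleness is acceptable, so treat these values as a sketch rather than a definitive answer.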

    Read the article
