Search Results

Search found 25196 results on 1008 pages for 'hard drive cache'.


  • How to make and restore incremental snapshots of hard disk

    - by brunopereira81
    I use VirtualBox a lot for distro and application testing. One of the features I simply love about it is virtual machine snapshots: it saves the state of a virtual machine and can restore it to its former glory if something you did went wrong, without any fuss and without consuming all your hard disk space. On my live systems I know how to create a 1:1 image of the file system, but all the solutions I've found create a complete new image of the whole file system each time. Are there any programs / file systems capable of taking a snapshot of the current file system and saving it to another location, but creating incremental backups instead of a complete new image every time? To describe it simply: it should behave like dd images of a file system, but on top of the full backup it would also create incrementals. I am not looking for Clonezilla, etc. It should run within the system itself with no (or almost no) intervention from the user, but contain all the data of the file systems. I am also not looking for a "duplicity backup of your whole system excluding some folders" script plus dd to save your MBR; I can do that myself. I'm looking for extra finesse: something I can run before making massive changes to a system, so that if something goes wrong, or I burn my hard disk after spilling coffee on it, I can just boot from a live CD and restore a working snapshot to the hard disk. It does not need to be daily, it doesn't even need a schedule; just run it once in a while and let it do its job, and preferably RAW-based rather than file-copy-based.
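
    A minimal sketch of one incremental, snapshot-style approach, assuming the filesystem is (or can be migrated to) Btrfs; the subvolume and backup paths below are placeholders, not from the question:

        # cheap, copy-on-write, read-only snapshot of the root subvolume
        btrfs subvolume snapshot -r / /snapshots/root-2013-01-01

        # next time, snapshot again and send only the delta to the backup disk
        btrfs subvolume snapshot -r / /snapshots/root-2013-02-01
        btrfs send -p /snapshots/root-2013-01-01 /snapshots/root-2013-02-01 \
            | btrfs receive /mnt/backupdisk

        # restore from a live CD: mount the backup disk and send a snapshot back
        btrfs send /mnt/backupdisk/root-2013-02-01 | btrfs receive /mnt/restored

    This works at the extent level rather than by copying files, which is close to the "dd, but incremental" idea; LVM snapshots plus dd would be the purely RAW alternative, though each dd image is still a full copy.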

    Read the article

  • Ubuntu install can't find hard drives

    - by Casey Hungler
    I recently got a Dell Inspiron Special Edition 7720. I am trying to install Ubuntu alongside Windows. When I use the WUBI installer, the Ubuntu installation works as long as I do not boot into Windows; if I boot into Windows and then go back into Ubuntu, I get a variety of error messages claiming a corrupt or missing kernel, root directory, etc. I have been working on this problem for about a week and have reinstalled Ubuntu MANY times. So far I have ruled out all of the following: a corrupt WUBI download (downloaded multiple times, used on other systems); the install media themselves (I have tried a CD and a flash drive, both of which work on other computers); and any program within Ubuntu causing the problem. I also know that others have successfully installed Ubuntu on a computer with my operating system (Windows 7 SP1). This is a much shortened version of the original question, which has been up for about 5 days and included a more detailed description of the problem, but left everyone clueless as to its source. When I spoke with the Dell service technician who came over today to replace my keyboard, he suggested that the driver for my HDD was so new that it was not compatible with the current version of Ubuntu. His reasoning is as follows: 1) during an install from a flash drive or CD, where I am supposed to get the option to wipe my system or create a dual boot, I get a window that asks me to select a hard drive partition, but none are listed; 2) this model of computer was released in June of this year, while this version of Ubuntu was released in April. Adopting this theory, it would seem that the WUBI install fails after booting into Windows because Ubuntu can no longer find the files it needs to load. Does this theory seem at all plausible to anyone? I just want to install Ubuntu and have it stay on my computer. I don't care how I put it there, I just need it to work, so I would TRULY appreciate any advice or suggestions. Thanks so much for your time and support!

    Read the article

  • My new hard drive won't automount on boot

    - by user518
    I installed a new hard drive right before installing the new Ubuntu 11.10 by reformatting, not upgrading. I was able to mount my drive and partition it. It's a 1TB drive, and I was able to transfer all of my music and videos to it. For some reason it won't mount on boot, and I can't figure out how to manually mount it afterwards either. Here's my current /etc/fstab:

        # /etc/fstab: static file system information.
        #
        # Use 'blkid' to print the universally unique identifier for a
        # device; this may be used with UUID= as a more robust way to name devices
        # that works even if disks are added and removed. See fstab(5).
        #
        # proc  /proc  proc  nodev,noexec,nosuid  0  0
        # / was on /dev/sda1 during installation
        UUID=e0fbdf09-f9a0-4336-bac3-ba4dc6cfbcc0  /            ext4  errors=remount-ro,user_xattr  0  1
        # swap was on /dev/sda5 during installation
        UUID=adf15180-c84c-4309-bc9f-085fd7464f89  none         swap  sw                            0  0
        /dev/sdc1                                  /media/sdc1  ext4  defaults                      0  0

    The last line is what I added for my hard drive. Here's the output from sudo lshw -C disk:

        % sudo lshw -C disk
        *-disk:0
             description: ATA Disk
             product: ST3250310AS
             vendor: Seagate
             physical id: 0
             bus info: scsi@2:0.0.0
             logical name: /dev/sda
             version: 3.AD
             serial: 6RYBF2QE
             size: 232GiB (250GB)
             capabilities: partitioned partitioned:dos
             configuration: ansiversion=5 signature=000da204
        *-cdrom
             description: DVD-RAM writer
             product: DVD+-RW DH-16A6S
             vendor: PLDS
             physical id: 0.0.0
             bus info: scsi@4:0.0.0
             logical name: /dev/cdrom
             logical name: /dev/cdrw
             logical name: /dev/dvd
             logical name: /dev/dvdrw
             logical name: /dev/scd0
             logical name: /dev/sr0
             version: YD11
             capabilities: removable audio cd-r cd-rw dvd dvd-r dvd-ram
             configuration: ansiversion=5 status=nodisc
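
    A hedged sketch of the usual fix: device names like /dev/sdc1 can change between boots, and the mount point must already exist, so an fstab entry keyed on the UUID is more robust. The UUID and mount point below are placeholders to be replaced with your own values:

        sudo blkid /dev/sdc1        # note the UUID it prints
        sudo mkdir -p /media/sdc1   # make sure the mount point exists

        # /etc/fstab entry using the UUID instead of the device name
        UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /media/sdc1  ext4  defaults  0  2

        sudo mount -a               # test the entry without rebooting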

    Read the article

  • How do you handle browser cache with login/logout?

    - by Julien
    To improve performance, I'd like to add a fairly long Cache-Control (up to 30 minutes) to each page, since they do not change often. However, each page also displays the name of the logged-in user (like this website). The problem is when the user logs in or logs out: the user name must change. How can I change the user name after each login/logout action while keeping a long Cache-Control? Here are the solutions I can think of: an Ajax request (not cached) to retrieve and display the user name - if I have 2 requests (/user?registered and /user?new), they could be cached as well, but I am afraid this extra request would nullify my caching, performance-wise; or adding a unique URL variable (?time=) to make the URL different and cancel the cache - however, I would have to add this variable to all links on my webpage, which is not very convenient code-wise. The problem becomes worse if I actually have more content that differs between registered users and new users.
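
    One way to picture the first option: cache the page itself aggressively, but fetch the user name from a small, uncached endpoint. The /user URL below is an assumption for illustration, not part of the original question:

        # response headers for the cached page
        Cache-Control: public, max-age=1800

        # response headers for the tiny per-user endpoint fetched via Ajax
        Cache-Control: no-store

    The extra request is usually cheap compared to rendering the full page, and it can fire after the cached page has already displayed, so it rarely cancels out the caching win.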

    Read the article

  • Google Chrome does not honor cache-policy in page header if the page is displayed in a FRAME

    - by Tim
    No matter what I do:

        <meta http-equiv="Cache-Control" content="no-cache" />
        <meta http-equiv="Expires" content="Fri, 30 Apr 2010 11:12:01 GMT" />
        <meta http-equiv="Expires" content="0" />
        <meta http-equiv="Pragma" content="no-store" />

    Google Chrome does not reload the page according to the page's internal cache policy if the page is displayed in a frame. It is as though the meta tags are not even there; Google Chrome seems to be ignoring them. Since I've gotten answers to this question on other forums where the person responding has ignored the operative condition, I will repeat it: this behavior occurs when the page is displayed in a frame. I was using the latest released version and have since upgraded to 5.0.375.29 beta, but the behavior is the same in both versions. Would someone please confirm, one way or another, the behavior you are seeing with framesets and the caching/expiration policies given in meta tags? Thanks
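
    Meta http-equiv directives are widely ignored for framed documents; real HTTP response headers are more reliable across browsers. A hedged sketch, assuming an Apache server with mod_headers enabled (adjust for whatever server actually delivers the framed page):

        # Apache configuration (virtual host or .htaccess)
        <FilesMatch "\.html$">
            Header set Cache-Control "no-cache, no-store, must-revalidate"
            Header set Pragma "no-cache"
            Header set Expires "0"
        </FilesMatch>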

    Read the article

  • Optimal ASP.Net cache duration for a large site?

    - by HeroicLife
    I've read lots of material on how to do ASP.Net caching but little on the optimal duration that pages should be cached for. Let's say that I have a popular site with 50,000 pages. The content does not change frequently, so I could cache pages for up to an hour if I wanted. The server has 16 GB of RAM, but database connections are limited. How long should pages be cached for? My thinking is that if I set the cache duration too high (let's say 60 minutes), I will fill up memory with a fraction of the total content, which will continually be shuffled in and out of memory. Furthermore, let's say that 10% of the pages are responsible for 90% of traffic. If the popular pages are hit every second, and the unpopular ones every hour, then a 60 second cache would only keep the load-intensive content cached without sacrificing freshness. Should numerous but rarely-accessed content be cached at all?
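
    For reference, a per-page duration in ASP.NET MVC can be set with the OutputCache attribute; the 60-second figure just mirrors the reasoning above, and the controller action and repository below are hypothetical:

        // short server-side cache for frequently-hit pages (sketch only)
        [OutputCache(Duration = 60, VaryByParam = "id")]
        public ActionResult Article(int id)
        {
            var model = _articles.Get(id);   // _articles is a placeholder repository
            return View(model);
        }

    Rarely-accessed pages can simply be left uncached, or given a much shorter duration, so they never crowd the popular pages out of memory.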

    Read the article

  • How can I cache a Subversion password on a server, without storing it in unencrypted form?

    - by Zilk
    My Subversion server only provides access via HTTPS; support for svn+ssh has been dropped because we wanted to avoid creating system users on that machine just for SVN access. Now I'm trying to provide a way for users to cache their passwords for a while, without leaving them stored on the filesystem in unencrypted form. This is no problem for Gnome or KDE users, because they can use gnome-keyring and kwallet, respectively. IIRC, TortoiseSVN has a similar caching mechanism, too. But what about users on a non-GUI system? Some context: in this case, we have a development/testing server where one project has been checked out into the Apache htdocs directory. Development for this project is almost complete, and only minor text/layout changes are performed directly on this server. Nevertheless, the changes should be checked into the repository. There's no kwallet and no gnome-keyring on this system, and the ssh-agent can't help because the repository is accessed via https instead of svn+ssh. As far as I know, that leaves them the choice of entering the password every time they talk to the SVN server, or storing it in an insecure way. Is there any way to get something like what gnome-keyring and kwallet provide in a non-GUI environment?
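
    For what it's worth, Subversion 1.6+ can be told which credential stores it may use and whether plaintext storage is allowed; a hedged sketch of the standard client-side configuration files:

        # ~/.subversion/config
        [auth]
        password-stores = gnome-keyring

        # ~/.subversion/servers
        [global]
        store-passwords = yes
        store-plaintext-passwords = no

    On a headless box this still needs a running gnome-keyring-daemon (it can be started outside a desktop session), so it is only a partial answer to the original question.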

    Read the article

  • Entity Framework Code First: Get Entities From Local Cache or the Database

    - by Ricardo Peres
    Entity Framework Code First makes it very easy to access the local (first level) cache: you just access the DbSet<T>.Local property. This way no query is sent to the database; the filter runs only over entities that are already loaded. If you want to search the local cache first, and hit the database only when no entries are found there, you can use this extension method:

        public static class DbContextExtensions
        {
            public static IQueryable<T> LocalOrDatabase<T>(this DbContext context, Expression<Func<T, Boolean>> expression) where T : class
            {
                IEnumerable<T> localResults = context.Set<T>().Local.Where(expression.Compile());

                if (localResults.Any() == true)
                {
                    return (localResults.AsQueryable());
                }

                IQueryable<T> databaseResults = context.Set<T>().Where(expression);

                return (databaseResults);
            }
        }
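
    A quick usage sketch of the extension method above; the Customer entity and MyContext type are hypothetical placeholders:

        using (var context = new MyContext())
        {
            // served from the local cache if matching Customers are already tracked,
            // otherwise the expression is translated to SQL and sent to the database
            var customers = context.LocalOrDatabase<Customer>(c => c.Country == "PT").ToList();
        }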

    Read the article

  • Researchers discover the first malware to practise overwriting, hidden in the form of an Adobe Updater

    Researchers discover the first malware to practise overwriting, hidden in the form of an Adobe Updater. A piece of malicious code has just been spotted for the first time by IT security specialists: researchers have discovered a malware that substitutes itself for the updates of certain applications. Usually, this type of program does not overwrite anything. Only computers running Windows are affected. The malware hides in the form of an updater for Adobe or Java products. One variant imitates Adobe Reader v.9 and overwrites AdobeUpdater.exe, whose job is to connect regularly to Adobe's servers to check whether a new version ...

    Read the article

  • How do I speed up and cache mmap file access over NFS on Linux?

    - by Zan Lynx
    The server and client are both 64-bit Ubuntu 10.04 LTS. The application in question is a custom app that uses mmap() for fast random file access. Its ideal state is when the entire file is cached in RAM. The network connections are really fast 10Gb Ethernet. It is a virtual server blade setup. It isn't the network connections slowing things down because everything performs superbly when using a virtual disk (iSCSI to the SAN). But when we run the application on a NFS home directory mount, performance goes to the dogs. It appears that the Linux kernel isn't caching anything. So it is reading every single disk block needed by mmap() accesses over and over and over again. The NFS mount is done through autofs, which has only default settings. /proc/mounts shows the NFS mount is done with the following options: rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.11.52,mountvers=3,mountproto=tcp,addr=192.168.11.52 How can I make Ubuntu 10.04 cache the file instead of reloading it all the time?
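
    One avenue worth trying (an assumption on my part, not something from the question): FS-Cache can give NFS a persistent local cache on Ubuntu 10.04 via cachefilesd and the fsc mount option. Whether it helps the mmap() access pattern specifically would need testing:

        sudo apt-get install cachefilesd
        # enable the daemon (the default file ships with RUN commented out)
        sudo sed -i 's/#RUN=yes/RUN=yes/' /etc/default/cachefilesd
        sudo service cachefilesd start

        # add fsc to the NFS mount options (in the autofs map or /etc/fstab), e.g.
        server:/home/user  /home/user  nfs  rw,hard,intr,fsc  0  0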

    Read the article

  • Tagging Objects in the AppFabric Cache

    In two of my previous entries I outlined functionality and patterns used in the AppFabric Cache. In this entry I wanted to expand and look at another area of functionality that people have come to expect when working with cache technology. This expectation is the ability to tag content with more information than just the key. As you start to examine this expectation you will soon find yourself asking if the tagged data can be related to each other and finally if it is possible...

    Read the article

  • Skipping nginx PHP cache for certain areas of a site?

    - by DisgruntledGoat
    I have just set up a new server with nginx (which I am new to) and PHP. On my site there are essentially 3 different types of files: static content like CSS, JS, and some images (most images are on an external CDN); the main PHP/MySQL database-driven website, which essentially acts like a static site; and a dynamic PHP/MySQL forum. It is my understanding from this question and this page that the static files need no special treatment and will be served as fast as possible. I followed the answer from the above question to set up caching for PHP files and now I have a config like this:

        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_cache one;
            fastcgi_cache_key $scheme$host$request_uri;
            fastcgi_cache_valid 200 302 304 30m;
            fastcgi_cache_valid 301 1h;
            include /etc/nginx/fastcgi_params;
            fastcgi_pass unix:/var/run/php-fastcgi/php-fastcgi.socket;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /srv/www/example$fastcgi_script_name;
            fastcgi_param HTTPS off;
        }

    However, now I want to prevent caching on the forum (either for everyone or only for logged-in users - I haven't checked whether the latter is feasible with the forum software). I've heard that "if is evil" inside location blocks, so I am unsure how to proceed. With the if inside the location block I would probably add this in the middle:

        if ($request_uri ~* "^/forum/") {
            fastcgi_cache_bypass 1;
        }

        # or possibly this, if I'm able to cache pages for anonymous visitors
        if ($request_uri ~* "^/forum/" && $http_cookie ~* "loggedincookie") {
            fastcgi_cache_bypass 1;
        }

    Will that work fine, or is there a better way to achieve this?
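
    A common way to avoid if inside the location block is a map in the http context feeding fastcgi_cache_bypass; a sketch only, assuming the forum lives under /forum/ (substitute your real login cookie name if you later want per-user bypass):

        # in the http block
        map $request_uri $skip_cache {
            default      0;
            ~^/forum/    1;
        }

        # inside the existing "location ~ \.php$" block
        fastcgi_cache_bypass $skip_cache;
        fastcgi_no_cache     $skip_cache;

    fastcgi_no_cache keeps the bypassed responses out of the cache as well, which matters for pages rendered for logged-in users.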

    Read the article

  • How to delete Chrome temp data (history, cookies, cache) using command line

    - by Dio Phung
    On Windows 7, I tried running this script but it still cannot clear Chrome's temp data. Can someone figure out what's wrong with the script? Where does Chrome store its history and cache? Thanks

        ECHO --------------------------------------
        ECHO **** Clearing Chrome cache
        taskkill /F /IM "chrome.exe">nul 2>&1
        set ChromeDataDir=C:\Users\%USERNAME%\AppData\Local\Google\Chrome\User Data\Default
        set ChromeCache=%ChromeDataDir%\Cache>nul 2>&1
        del /q /s /f "%ChromeCache%\*.*">nul 2>&1
        del /q /f "%ChromeDataDir%\*Cookies*.*">nul 2>&1
        del /q /f "%ChromeDataDir%\*History*.*">nul 2>&1
        set ChromeDataDir=C:\Users\%USERNAME%\Local Settings\Application Data\Google\Chrome\User Data\Default
        set ChromeCache=%ChromeDataDir%\Cache>nul 2>&1
        del /q /s /f "%ChromeCache%\*.*">nul 2>&1
        del /q /f "%ChromeDataDir%\*Cookies*.*">nul 2>&1
        del /q /f "%ChromeDataDir%\*History*.*">nul 2>&1
        ECHO **** Clearing Chrome cache DONE
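
    For comparison, a trimmed sketch that targets only the Windows 7 profile location (the second path in the script above is the old Windows XP layout); paths assume a default Chrome install, and the History and Cookies entries are extensionless SQLite files rather than *.* patterns:

        taskkill /F /IM chrome.exe >nul 2>&1
        set ChromeDefault=%LOCALAPPDATA%\Google\Chrome\User Data\Default
        rd  /s /q "%ChromeDefault%\Cache"
        del /q    "%ChromeDefault%\Cookies"
        del /q    "%ChromeDefault%\History"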

    Read the article

  • Keep Windows Installer from using largest drive for temporary files

    - by stefan.at.wpf
    By default Windows Installer uses the largest drive for temporary storage, no matter if that's needed (meaning there would also be enough space on the system drive). Taken from http://msdn.microsoft.com/en-us/library/aa371372%28VS.85%29.aspx: During an administrative installation the installer sets ROOTDRIVE to the first connected network drive it finds that can be written to. If it is not an administrative installation, or if the installer can find no network drives, the installer sets ROOTDRIVE to the local drive that can be written to having the most free space. Now my system drive is an SSD, my largest drive is a RAID, that spins down when it's not used. Remember the SSD as system drive? Everything is silent now! Until I install something and Windows Installer wakes up my RAID again just to put a small .tmp file on it... How can I prevent Windows Installer from using the largest drive as temporary storage? Can I maybe set some access rights to disallow the Windows Installer to write on my RAID drive? Any other ideas? Thank you!
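
    The MSDN page quoted above also describes ROOTDRIVE as a public property, so for installs you launch yourself it can be overridden on the command line; a hedged example with a placeholder package name:

        msiexec /i some-package.msi ROOTDRIVE="C:\"

    That only affects the install you start that way; it does not change the machine-wide default behaviour.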

    Read the article

  • How do you view / access the contents of a mounted dmg drive through Terminal (hdiutil, diskutil)?

    - by A. O.
    My external USB drive failed. I made a .dmg image file of the drive using Disk Utility. Later I was not able to mount the .dmg image. In Terminal I used:

        hdiutil attach -noverify -nomount name.dmg
        diskutil list
        diskutil mountDisk /dev/disk4

    and then received the following message: "Volume(s) mounted successfully". However, I can't see the drive or access its contents through Finder. Disk Utility shows the drive greyed out, and I still can't mount it from there. Terminal tells me that the drive is mounted and consistently shows it in the diskutil list. The working directory (pwd) is not the mounted .dmg image, and I don't know how to change into the mounted image to see its contents. So, in case what I said sounds like I can see the files in the mounted image: no, that is not the case. I do not know how to access, or even change, the working directory within Terminal to reach it. I was hoping to see the mounted drive through Finder, but I do not see it. So I need help finding a way to access the mounted image drive, if it was really mounted. Terminal says that it was, and it shows it under diskutil list as /dev/disk4. Can someone please help me access the files on this drive?
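
    In case it helps, mounted disk images normally appear under /Volumes; a short sketch of checking where the volume landed and changing into it (the volume name is a placeholder):

        hdiutil attach -noverify name.dmg
        mount                      # lists every mounted volume and its mount point
        ls /Volumes                # mounted disk images usually show up here
        cd "/Volumes/My Backup"    # placeholder volume name
        ls -la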

    Read the article

  • AppFabric named cache, what happens if you lose a cache host?

    - by Liam
    I'm getting my head around how AppFabric clustering works and there's something I'm not sure about. Given a structure where we have one named cache, with two lead hosts and (say) three cache hosts, high availability turned off, and the lead hosts performing the management role: when one cache host goes down, do you lose the data that was on that cache host? In this MSDN article it states: "Data on the non-lead hosts would be lost (assuming high availability was not enabled), but the rest of the cluster could continue serving and storing data." But I was unsure whether redundancy is built into the system. Would you lose that data, or would one of the other cache hosts store it as well and pick up the slack?

    Read the article

  • Linux RAID: Replacing Failed Drive...permanently

    - by user137519
    Okay, odd question here. I have a server with RAID 5. A drive failed, physically, in a really odd way: on that machine it boots and is seen by the BIOS, but no partition can be seen on the drive consistently (it comes and goes). With 2 out of 3 drives working, I made a new spare disk and added it, and the RAID 5 rebuilt clean. All appears well, but when I reboot it keeps trying to use the 2nd drive, which doesn't give any partition data, so of course the RAID 5 ends up with 2 out of 3 again. The status of my drives is as follows: /dev/sda2: good; /dev/sdb2: bad (the drive has a physical problem, so no partition data); /dev/sdc2: good; /dev/sdd2: good. Every time I reboot, mdadm seems to keep trying to use /dev/sdb, which has the physical failure (although it spins and is detected). /dev/sdd is the new drive I created. I added /dev/sdd to the RAID and it rebuilds, but this action isn't remembered upon reboot, so it keeps listing /dev/sda and /dev/sdc but doesn't use the perfectly good /dev/sdd until I re-add it manually. I've tried removing the dead drive with the mdadm tool, but as it cannot see the /dev/sdb partitions it will not fail or remove it (it says the partition doesn't exist). The /etc/mdadm.conf was generated automatically on the original OS install and only lists:

        DEVICE partitions
        MAILADDR root
        ARRAY /dev/md2 super-minor=2
        ARRAY /dev/md0 super-minor=0
        ARRAY /dev/md1 super-minor=1

    Basically just the arrays to use on boot. I need to remove this semi-dead drive (/dev/sdb), but I'd prefer to know why this is happening before I do. Any ideas or suggestions? I suppose I could attempt to clone/replace /dev/sdb (the partitions on the drive show up, then disappear shortly after), but given that Cheshire-cat partition behaviour this seems risky to me, and as I have a working spare it seems unnecessary. Thanks in advance for your insight.
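
    A hedged sketch of how this is often cleaned up, assuming /dev/sdb2 still carries a stale md superblock that confuses assembly at boot (device names taken from the question; whether mdadm can still address the flaky partition will vary):

        # mark the dying member failed/removed, if mdadm can still address it
        mdadm --manage /dev/md2 --fail /dev/sdb2
        mdadm --manage /dev/md2 --remove /dev/sdb2
        # newer mdadm versions also accept the keyword form when the device has vanished
        mdadm --manage /dev/md2 --remove detached

        # wipe the stale RAID superblock so autodetection stops picking sdb up
        mdadm --zero-superblock /dev/sdb2

        # record the current, working array membership in mdadm.conf
        mdadm --detail --scan >> /etc/mdadm.conf

    If the superblock cannot be wiped because the partition is unreadable, physically disconnecting the drive before rebooting achieves much the same thing.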

    Read the article

  • Drive security settings in Windows 8 Pro

    - by Donotalo
    My PC's OS is Windows 8 Pro x64. Windows 8 seems confusing. The D:\ drive is supposed to be used solely by a single user, who is in the Users group of the PC. The requirement is: that user will have full control of the D drive, Admins will have full control of the D drive, and all other users can only list drive contents, with no file openable by them. My account is an admin account. From the D drive's property Security tab, I've set the following: Allow "List folder contents" for the Authenticated Users group; Allow "Full control" for SYSTEM; Allow "Full control" for the specific user who's supposed to use the drive; Allow "Full control" for the Administrators group of the computer; Allow "List folder contents" for the Users group. After setting this up, the specific user has full control of the D drive and no other user can open any file on it. But even though my account is an admin account, no file on the D drive can be opened from my account! Why is this happening, and how can files be opened from my account? Note: All accounts on this PC are local accounts.
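
    To see what is actually on the ACL, and to adjust it from a script, icacls run from an elevated prompt is handy; a sketch with placeholder account names:

        rem dump the current ACL from an elevated prompt
        icacls D:\

        rem grant full control recursively (placeholder machine and user names)
        icacls D:\ /grant "BUILTIN\Administrators":(OI)(CI)F
        icacls D:\ /grant "PC-NAME\SomeUser":(OI)(CI)F

    One thing worth remembering: UAC filters the Administrators token, so a file that opens fine from an elevated program can still be refused from a normally-launched one even when the Administrators group has Full control.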

    Read the article

  • How do I fix a corrupted hard drive after a failed upgrade?

    - by Nil
    The problem originated when I was trying to fix this problem. Things went horribly, horribly wrong and I ended up with a new problem altogether. The last thing I did was run sudo apt-get install and that caused my system to freeze. I restarted my computer and it would not boot from the harddrive. I ran a copy of Ubuntu 12.10 from a flashdrive that I had and ran gparted to see if my partitions were all there. It returned this message: Invalid partition table on /dev/sda -- wrong signature 5208. The drive appeared as a 2TiB unallocated drive with an error. The drive had 4 partitions before (plus random unallocated space). There was a fat32 partition, an ext4 partition which contained ubuntu 13.04/13.10 (I don't even know which one at this point), an extended partition which contained a swap partition for my ubuntu partition (I was meaning to move that ubuntu partition into the extended partition, never got around to it), and another partition (I don't remember how I formatted it). I should also mention this is a 1TB harddrive. So in short, I have a corrupted partition table on my primary harddrive from which I boot from, how can I fix this? I tried mounting the drive with sudo mount /dev/sda1 /media/ubuntu then I changed my directory to said folder and tried to list files and this monstrosity happened: $ ls ls: cannot access ??w?j^?.: Input/output error ls: cannot access ??(? ?x?.|: Input/output error ls: cannot access 6W_@?)?._??: Input/output error ls: cannot access HB0v???.A}?: Input/output error ls: cannot access ???.?X: Input/output error ls: cannot access t)?.+?l: Input/output error ls: cannot access ?h@ ?.@ : Input/output error ls: cannot access >? @??.???: Input/output error ls: cannot access m???.??: Input/output error ls: cannot access @ if??a?: Input/output error ls: cannot access ?M!vN$?.??n: Input/output error ls: cannot access ?o? ??.Bm`: Input/output error ls: cannot access ?:I??? M. : Input/output error ls: cannot access W??.??: Input/output error ls: cannot access ?: Input/output error ls: cannot access ?W?s??: Input/output error ls: cannot access ?v?k?.???: Input/output error ls: cannot access 5?$<N??: Input/output error .x????.??i: Input/output error ls: cannot access je????.j?1: Input/output error XjD?.???: Input/output error ls: cannot access W??n???.?: Input/output error ls: cannot access ?^x.$"?: Input/output error ls: cannot access !??*!??j.??: Input/output error ls: cannot access '-??k?^?.???: Input/output error ls: cannot access b?w?w?b.\??: Input/output error ls: cannot access o????"z.??B: Input/output error ls: cannot access ??b?h.?3-: Input/output error ls: cannot access ??.$7: Input/output error ls: cannot access )??K.bk: Input/output error ls: cannot access s??z?.?(?: Input/output error ls: cannot access ?F@?0?.@?: Input/output error .?D: Input/output error .??: Input/output error ls: cannot access?????. @: Input/output error ls: cannot access ?/?? ?.??: No such file or directory ls: cannot access rk?p4q(?.?k: Input/output error This looks promising. 
    This is the output of fdisk -l:

        $ sudo fdisk -l /dev/sda
        Warning: ignoring extra data in partition table 5
        Warning: ignoring extra data in partition table 5
        Warning: ignoring extra data in partition table 5
        Warning: invalid flag 0x5208 of partition table 5 will be corrected by w(rite)

        Disk /dev/sda: 2199.0 GB, 2199023132672 bytes
        255 heads, 63 sectors/track, 267349 cylinders, total 4294967056 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x44fdfe06

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1       113305600   894715903   390705152    c  W95 FAT32 (LBA)
        /dev/sda2       894715904  1489307647   297295872   83  Linux
        /dev/sda3      1489309694  1497307135     3998721    5  Extended
        /dev/sda4      1497309184  1953523711   228107264    7  HPFS/NTFS/exFAT
        /dev/sda5 ?    3013257822  3688738171   337740175   aa  Unknown
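
    A typical next step from the live USB, before writing anything to the disk, is TestDisk, which can scan for the lost partitions and rebuild the table; a sketch only, since the tool is interactive and the menu choices are indicative:

        sudo apt-get install testdisk
        sudo testdisk /dev/sda
        # in the menus: [Analyse] -> [Quick Search] (then [Deeper Search] if needed)
        # review the partitions it finds carefully before choosing [Write]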

    Read the article

  • Questions about dual-boot installing Ubuntu 10.04 and Windows 7 on the same hard drive

    - by Tim
    I'd like to dual-boot install Ubuntu 10.04 on the same hard drive as Windows 7, which is already installed. As to sources on the internet: I found a website, iinet, about dual-boot installation of Ubuntu 10.10 and Windows 7 on the same hard drive, which I think is more specific than the one on the Ubuntu Community wiki, which doesn't target specific versions of the OSes. Since I am installing Ubuntu 10.04 instead of 10.10, my question is whether their installers are the same or almost the same, and whether I can follow iinet for my dual-boot installation. Or are there better websites for information about dual-boot installation of Ubuntu 10.04 and Windows 7? As to shrinking Windows partitions to make free space for Ubuntu partitions: iinet uses the partitioning tool in Ubuntu's installer to shrink the Windows partition. But I have seen on many websites that the partitioning tool in Ubuntu's installer cannot guarantee shrinking Windows 7 partitions successfully, so they generally recommend shrinking Windows partitions from within Windows itself using its own tools. For example, the Ubuntu Community wiki says: "Some people think that the Windows partition must be resized only from within Windows Vista and Windows 7 using the shrink/resize option. ... If you use GParted Partition Editor in the Ubuntu Live CD be careful." So I was wondering which way to go in my situation. As to a partition for bootloader files: in iinet, I don't see a partition created and dedicated to boot files (i.e. GRUB files). However, many websites strongly suggest a separate boot partition for GRUB files, especially to keep them separate from, and protected against changes to, the installed OS files. I was wondering which way I should choose and why. As to installing the bootloader GRUB: in iinet, I see that installing GRUB only requires specifying the hard drive device for the bootloader installation. However, in ubuntuguide (for more than 2 OSes and Ubuntu 9.04), some commands need to be run in order to put GRUB configuration files in the MBR and in the OS partition for the chain-load process (where to find the files for the next stage). In the Ubuntu Community wiki there are some related sentences which I don't quite understand how to apply in practice: "the only thing in your computer outside of Ubuntu that needs to be changed is a small code in the MBR (Master Boot Record) of the first hard disk. The MBR code is changed to point to the boot loader in Ubuntu. If you have a problem with changing the MBR code, you might prefer to just install the code for pointing to GRUB to the first sector of your Ubuntu partition instead. If you do that during the Ubuntu installation process, then Ubuntu won't boot until you configure some other boot manager to point to Ubuntu's boot sector. Windows Vista no longer utilizes boot.ini, ntdetect.com, and ntldr when booting. Instead, Vista stores all data for its new boot manager in a boot folder. Windows Vista ships with a command line utility called bcdedit.exe, which requires administrator credentials to use. You may want to read http://go.microsoft.com/fwlink/?LinkId=112156 about it. Using a command line utility always has its learning curve, so a more productive and better job can be done with a free utility called EasyBCD, developed and mastered during the times of the Vista Beta already. EasyBCD is user friendly and many Vista users highly recommend EasyBCD."
    Given what is quoted above, I was wondering how exactly I should change the MBR code to point to the bootloader in Ubuntu, and, if I fail to change the MBR code, whether the alternatives are the other suggested boot managers, bcdedit.exe and EasyBCD, in Windows. With the three sources above, which one shall I follow? Thanks and regards
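
    On the last point, the usual way the MBR ends up pointing at GRUB is simply installing GRUB to the drive rather than to a partition; a minimal sketch from a running Ubuntu, assuming the first disk is /dev/sda (the installer's "device for boot loader installation" option does the same thing):

        sudo grub-install /dev/sda   # write GRUB's boot code to the MBR of the first disk
        sudo update-grub             # regenerate grub.cfg, which should pick up Windows 7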

    Read the article

  • Infinispan equivalent to ehcache's copyOnRead and copyOnWrite

    - by waxwing
    Hi, I am planning to implement a cache solution in an existing web app. Nothing complicated: basically a concurrent map that supports overflowing to disk and automatic eviction. Clustering the cache could be a requirement in the future, but not now. I like ehcache's copyOnRead and copyOnWrite features, because they mean I don't have to manually clone things before modifying something I take out of the cache. Now I have started to look at Infinispan, but I have not found anything equivalent there. Does it exist? I.e., the following unit tests should pass:

        @Test
        public void testCopyOnWrite() {
            Date date = new Date(0);
            cache.put(0, date);
            date.setTime(1000);
            date = cache.get(0);
            assertEquals(0, date.getTime());
        }

        @Test
        public void testCopyOnRead() {
            Date date = new Date(0);
            cache.put(0, date);
            assertNotSame(cache.get(0), cache.get(0));
        }

    Read the article

  • Java caching in a distributed environment

    - by Naren
    Hi, I am supposed to create a simple replicated cache in Java for internal use, which will be deployed in a distributed environment. I have seen that Oracle has implemented a Replicated Cache Service: http://wiki.tangosol.com/display/COH32UG/Replicated+Cache+Service. The problem I am facing is that while doing an update or remove, I acquire locks on the other caches until the cache is updated and the others are notified of the change. This eventually runs into a deadlock situation when removing. Is there any strategy I should follow while updating or removing from the caches? Can I implement a replicated cache without having a primary cache? Thanks, Naren

    Read the article

  • wanting a good memory + disk caching solution

    - by brofield
    I'm currently storing generated HTML pages in a memcached in-memory cache. This works great, however I am wanting to increase the storage capacity of the cache beyond available memory. What I would really like is:

        - memcached semantics (i.e. not reliable, just a cache)
        - memcached api preferred (but not required)
        - large in-memory first level cache (MRU)
        - huge on-disk second level cache (main)
        - evicted from on-disk cache at maximum storage using LRU or LFU
        - proven implementation

    In searching for a solution I've found the following options, but they all miss my marks in some way. Does anyone know of either other options that I haven't considered, or a way to make memcachedb do evictions? Already considered are:

        - memcachedb: best fit but doesn't do evictions; explicitly "not a cache", and I can't see any way to do evictions (either manual or automatic)
        - tugela cache: abandoned, no support; don't want to recommend it to customers
        - nmdb: doesn't use the memcache api; new and unproven; don't want to recommend it to customers

    Read the article

  • Test-Drive ASP.NET MVC Review

    - by Ben Griswold
    A few years back I started dallying with test-driven development, but I never fully committed to the practice. This wasn’t because I didn’t believe in the value of TDD; it was more a matter of not completely understanding how to incorporate “test first” into my everyday development. Back in my web forms days, I could point fingers at the framework for my ignorance and laziness. After all, web forms weren’t exactly designed for testability so who could blame me for not embracing TDD in those conditions, right? But when I switched to ASP.NET MVC and quickly found myself fresh out of excuses and it became instantly clear that it was time to get my head around red-green-refactor once and for all or I would regretfully miss out on one of the biggest selling points the new framework had to offer. I have previously written about how I learned ASP.NET MVC. It was primarily hands on learning but I did read a couple of ASP.NET MVC books along the way. The books I read dedicated a chapter or two to TDD and they certainly addressed the benefits of TDD and how MVC was designed with testability in mind, but TDD was merely an afterthought compared to, well, teaching one how to code the model, view and controller. This approach made some sense, and I learned a bunch about MVC from those books, but when it came to TDD the books were just a teaser and an opportunity missed.  But then I got lucky – Jonathan McCracken contacted me and asked if I’d review his book, Test-Drive ASP.NET MVC, and it was just what I needed to get over the TDD hump. As the title suggests, Test-Drive ASP.NET MVC takes a different approach to learning MVC as it focuses on testing right from the very start. McCracken wastes no time and swiftly familiarizes us with the framework by building out a trivial Quote-O-Matic application and then dedicates the better part of his book to testing first – first by explaining TDD and then coding a full-featured Getting Organized application inspired by David Allen’s popular book, Getting Things Done. If you are a learn-by-example kind of coder (like me), you will instantly appreciate and enjoy McCracken’s style – its fast-moving, pragmatic and focused on only the most relevant information required to get you going with ASP.NET MVC and TDD. The book continues with the test-first theme but McCracken moves away from the sample application and incorporates other practical skills like persisting models with NHibernate, leveraging Inversion of Control with the IControllerFactory and building a RESTful web service. What I most appreciated about this section was McCracken’s use of and praise for open source libraries like Rhino Mocks, SQLite and StructureMap (to name just a few) and productivity tools like ReSharper, Web Platform Installer and ASP.NET SQL Server Setup Wizard.  McCracken’s emphasis on real world, pragmatic development was clearly demonstrated in every tool choice, straight-forward code block and developer tip. Whether one is already familiar with the tools/tips or not, McCracken’s thought process is easily understood and appreciated. The final section of the book walks the reader through security and deployment – everything from error handling and logging with ELMAH, to ASP.NET Health Monitoring, to using MSBuild with automated builds, to the deployment  of ASP.NET MVC to various web environments. These chapters, like those prior, offer enough information and explanation to simply help you get the job done.  
Do I believe Test-Drive ASP.NET MVC will turn you into an expert MVC developer overnight?  Well, no.  I don’t think any book can make that claim.  If that were possible, I think book list prices would skyrocket!  That said, Test-Drive ASP.NET MVC provides a solid foundation and a unique (and dare I say necessary) approach to learning ASP.NET MVC.  Along the way McCracken shares loads of very practical software development tips and references numerous tools and libraries. The bottom line is it’s a great ASP.NET MVC primer – if you’re new to ASP.NET MVC it’s just what you need to get started.  Do I believe Test-Drive ASP.NET MVC will give you everything you need to start employing TDD in your everyday development?  Well, I used to think that learning TDD required a lot of practice and, if you’re lucky enough, the guidance of a mentor or coach.  I used to think that one couldn’t learn TDD from a book alone. Well, I’m still no pro, but I’m testing first now and Jonathan McCracken and his book, Test-Drive ASP.NET MVC, played a big part in making this happen.  If you are an MVC developer and a TDD newb, Test-Drive ASP.NET MVC is just the book for you.

    Read the article
