Search Results

Search found 8238 results on 330 pages for 'dynamic disks'.

Page 216/330

  • LinkSys WRT54GL + AM200 in half-bridge mode - Setup guide recommendations?

    - by Peter Mounce
    I am basically looking for a good guide on how to set up my home network with this set of hardware. I need:
    - Dynamic DNS
    - Firewall + port-forwarding
    - VPN
    - Wake-on-LAN from outside the firewall
    - VoIP would be nice
    - QoS would be nice (make torrents take lower priority than other services when those other services are active)
    - DHCP
    - Wireless + WPA2 security
    - Ability to play multiplayer computer games
    I am not a networking or computing neophyte, but the last time I messed with network gear was a few years ago, so I need to dust off knowledge I only half have. I have read that I should set up the AM200 in half-bridge mode so that the WRT54GL gets the WAN IP - this sounds like a good idea, but I'd still like to be advised. I have read that the dd-wrt firmware will meet my needs (though I gather I'll need the VPN-specific build, which appears to preclude supporting VoIP), but I'm not wedded to using it. My ISP supplies me with a block of 8 static IPs, of which 5 are usable to me, and a PPPoA ADSL2+ connection.
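
    As an illustration of the port-forwarding piece under dd-wrt, custom iptables rules can expose a single LAN service; this is a minimal sketch, and the WAN interface name (vlan1), the port and the LAN address are assumptions rather than details from the question:

        # Hypothetical: forward TCP 27015 (a game server) from the WAN to a LAN host
        iptables -t nat -A PREROUTING -i vlan1 -p tcp --dport 27015 -j DNAT --to-destination 192.168.1.100:27015
        iptables -A FORWARD -d 192.168.1.100 -p tcp --dport 27015 -j ACCEPT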

    Read the article

  • Installing WordPress - PHP MySQL extension constantly reported as missing

    - by Driss Zouak
    I've got Win2003 with IIS6, PHP 5 and MySQL installed. I can confirm PHP is installed correctly because I have a testMe.php that runs properly. When I run the WordPress setup, I get informed that "Your PHP installation appears to be missing the MySQL extension which is required by WordPress." But in my php.ini, in the Dynamic Extensions section, I have:
        extension=php_mysql.dll
        extension=php_mysqli.dll
    I verified that mysql.dll and libmysql.dll are both in my PHP directory. I copied my libmysql.dll to the C:\Windows\System32 directory. When I try to run the initial setup for WordPress, I still get the same answer. I've Googled setting this up, and everything comes down to the above. I'm missing something, but none of the instructions that I've found online seem to cover whatever that is.
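
    A common culprit in this situation is that extension_dir does not point at the directory actually containing the DLLs, so the extension= lines silently fail to load. A minimal sketch of the relevant php.ini lines (the path is an assumption, not taken from the question):

        ; Hypothetical path - adjust to the actual PHP install directory
        extension_dir = "C:\php\ext"
        extension=php_mysql.dll
        extension=php_mysqli.dll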

    Read the article

  • VMware ESXi 5: can't create snapshots, consolidate fails - how to delete old redo logs or consolidate them?

    - by Scott Szretter
    I have a VM that seems to be working OK, but when VMware DR (or I) tries to create a snapshot, it fails, and the summary page of the VM shows a warning at the top that the disks need to be consolidated. So I go to Snapshot Manager for the VM and choose Consolidate (in Snapshot Manager, there are no snapshots actually listed, by the way). It fails with this error: "This virtual machine has 255 or more redo logs in a single branch of its snapshot tree. The maximum supported limit has been reached, creating new snapshots will not be allowed. To create new snapshots, please delete old snapshots or consolidate the redo logs." If I browse the datastore (which has plenty of free space - 2 TB, and this VM is under 40 GB), in the VM folder I do in fact see a bunch of files, numbered all the way to 000255:
        myvm-000255-ctk.vmdk
        myvm-000255-delta.vmdk
        myvm-000255.vmdk
    How can I clean all this up? Is there an SSH command line command, or can I delete some of the files safely? Thanks!
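
    One way out of a broken snapshot chain is to clone the current state of the topmost delta disk into a fresh standalone VMDK from the ESXi shell, then repoint the VM at the clone and discard the old chain; a minimal sketch, where the datastore path and target name are assumptions (the source name is taken from the question):

        # Clone the tip of the snapshot chain into a new standalone disk
        vmkfstools -i /vmfs/volumes/datastore1/myvm/myvm-000255.vmdk /vmfs/volumes/datastore1/myvm/myvm-consolidated.vmdk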

    Read the article

  • Make UEFI, GPT, Bootloader, SSD, USB, Linux and Windows work together

    - by user129552
    I like to use the latest hardware and the latest software; thus I have a laptop (Lenovo X220) with UEFI instead of a BIOS, an SSD instead of an HDD, a GPT partitioning scheme instead of MBR, and USB to boot from instead of optical disks. I need to use both Windows and Linux. I tried to make them work alongside each other, but I didn't succeed. Most Linux distribution ISOs don't even really work on UEFI systems booted from USB (not even the self-claimed cutting-edge Fedora; I also tried Linux Mint Debian Edition and Sabayon Linux, according to this guide, and they did not work). Only Ubuntu worked for me. I first installed Windows 8, which created sda1: Recovery, sda2: EFI system, sda3: msftres, sda4: NTFS Windows. Windows worked without a problem. I then created sda5: linux-swap and installed Ubuntu into sda6: btrfs. After rebooting, I was not presented with GRUB2 as expected; instead my system just booted into Ubuntu, and I could no longer access Windows. After fixing dpkg in btrfs Ubuntu, I followed the Ubuntu documentation on UEFI booting. The result left me with a broken GRUB2, but interestingly, when I wanted to select the device to boot from, I was presented not only with the internal SSD, an attached USB device, or LAN, but also with GRUB2 (broken), Ubuntu and Windows. The result is not very satisfying to me. What would I have to do to fix everything? Or, asked differently: what operating system should I install at what point, given my possibilities and requirements, so that I have a working bootloader in my UEFI GPT system which presents me a working Linux and Windows?
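
    For reference, one common repair path from a running Ubuntu is to inspect the firmware's boot entries and reinstall GRUB into the EFI system partition; a minimal sketch, assuming the ESP is mounted at /boot/efi (on older GRUB builds the exact options may differ):

        # List current UEFI boot entries
        sudo efibootmgr -v
        # Reinstall GRUB into the EFI system partition and regenerate its config
        sudo grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=ubuntu
        sudo update-grub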

    Read the article

  • Missing whole disk device in OpenSolaris

    - by Jeff Mc
    I have begun experimenting with Solaris and ZFS as a NAS. All was going very smoothly until I had a drive failure. When I replaced the drive, I no longer have a device file mapped to the whole disk. /dev/dsk/c7t3d0 does not exist, but c7t2d0 and c7t4d0 both do. Also, the sd@3,0:wd file under the /devices/ tree is non-existent. Do I have to prepare/partition the disk somehow to cause the whole-disk device to exist? Here are a few outputs that might be useful:
        jeffmc@ats-ds2:/dev/dsk$ zpool status
          pool: datapool
         state: DEGRADED
        status: One or more devices could not be opened. Sufficient replicas exist for
                the pool to continue functioning in a degraded state.
        action: Attach the missing device and online it using 'zpool online'.
           see: http://www.sun.com/msg/ZFS-8000-2Q
         scrub: none requested
        config:
                NAME        STATE     READ WRITE CKSUM
                datapool    DEGRADED     0     0     0
                  mirror-0  DEGRADED     0     0     0
                    c7t2d0  ONLINE       0     0     0
                    c7t3d0  UNAVAIL      0     0     0  cannot open
                  mirror-1  ONLINE       0     0     0
                    c7t4d0  ONLINE       0     0     0
                    c7t5d0  ONLINE       0     0     0

        jeffmc@ats-ds2:/dev/dsk$ zpool replace datapool c7t3d0
        cannot open 'c7t3d0': no such device in /dev/dsk
        must be a full path or shorthand device name

        jeffmc@ats-ds2:/dev/dsk$ sudo format
        Searching for disks...done
        AVAILABLE DISK SELECTIONS:
          0. c7t0d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@0,0
          1. c7t1d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@1,0
          2. c7t2d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@2,0
          3. c7t3d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@3,0
          4. c7t4d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@4,0
          5. c7t5d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@5,0
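
    Since format can see the disk but /dev/dsk/c7t3d0 is missing, the gap is usually just stale /dev links; a minimal sketch of the usual fix is to rebuild the links from the /devices tree and retry the replace (devfsadm does not touch any data):

        # Rebuild /dev/dsk and /dev/rdsk links from the /devices tree
        sudo devfsadm -Cv
        # Then retry attaching the new disk to the mirror
        sudo zpool replace datapool c7t3d0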

    Read the article

  • Computer spontaneously reboots when doing heavy file copies to/from disk

    - by Mark Hosang
    I've been fighting with this problem for the last 3 weeks: my machine will just instantly reboot. No BSOD, and when I checked the event log, all that was reported was the generic "Kernel-Power" error, with the detailed information pointing to a hard crash. This is a machine that was working for 18 months before these crashes started happening. They started after I added 3 HDs in a RAID-5, upped the memory to 12 GB, moved to a new house, added an SSD and added about 5 case fans. I have since eliminated the RAID, and determined that the SSD was not the cause (it was still crashing even when the SSD wasn't connected). I've run memtest several times overnight with no memory problems showing up. I've run IntelBurnTest to max out the CPU to see if it was a heat issue; at full tilt after 20 min it was only at 85C and the machine didn't crash. I also took a look at the voltages during this test (screenshots at the bottom of this post). I've ruled out a software issue by reinstalling Windows 7 Ultimate x64 a total of 5 times, but it even crashes during the install - sometimes during file copying at the beginning, during uncompressing files, or while running Windows Update. The only discernible pattern I can see is that it seems to crash when hard disks might be spinning up or when they are accessed heavily by large file transfers. My current guess is that it is probably an issue with the motherboard, PSU or the power coming through the outlet. Any suggestions of what I could try to troubleshoot, or what may be wrong? Specs:
        PSU: Seasonic M12 700W
        Memory: 12 GB
        CPU: i7-920 with stock heatsink
        MB: Asus P6T
        HDs: 3 green WD and 1 Corsair Force 3 120 GB with 1.3.3 firmware
    [Screenshots: voltages running at full tilt, and idling]
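
    For anyone chasing the same symptom, the Kernel-Power events can be pulled from the System log on the command line; a minimal sketch (the provider name is the standard one, the count is arbitrary):

        rem Show the five most recent Kernel-Power events in text form
        wevtutil qe System /q:"*[System[Provider[@Name='Microsoft-Windows-Kernel-Power']]]" /f:text /rd:true /c:5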

    Read the article

  • Running KVM/Xen/Hyper-V VMs from a RAM disk - is this possible? Practical?

    - by Ausmith1
    Currently I'm using ESX (v3 and v4) to test a scripted OS (Windows 2003) and application install DVD. The DVD ISO (8 GB) is mounted on a 1 Gbps NFS datastore and the VMDKs (20 GB) are on an SSD mounted via NFS over a 10 Gbps link. It still takes a lot longer than I'd like for it to run through a test iteration, and I'm wondering if mounting the virtual disks and ISO on a RAM disk on the same server the hypervisor is running on would be worth my while. I can dedicate a server to this VM, and 32 GB of RAM in the system should be adequate to do the trick, I'd guess (1 GB hypervisor OS + 28 GB RAM disk + 2 GB for the VM is less than the 32 GB available to me). Since hosting a RAM disk within ESX does not seem possible, I'm open to trying KVM/Xen/Hyper-V; KVM would probably be my first choice of the three. Has anyone out there tried this? Bear in mind this is purely for a test run of the installer - the VM will be discarded as soon as the test is completed, so I'm not worried about losing data in the remote possibility of a power failure.
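
    On a Linux KVM host this is straightforward to sketch: back the guest with a tmpfs mount and point the VM's disk and CD-ROM at it. A minimal sketch, where the paths and sizes are assumptions:

        # Create a RAM-backed filesystem and copy the test artifacts into it
        mount -t tmpfs -o size=28g tmpfs /mnt/ramdisk
        cp /srv/images/install.iso /srv/images/win2003.qcow2 /mnt/ramdisk/
        # Boot the throwaway VM entirely from RAM
        qemu-kvm -m 2048 -hda /mnt/ramdisk/win2003.qcow2 -cdrom /mnt/ramdisk/install.iso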

    Read the article

  • Is Software RAID-1 Using mdadm with a Local Hard Disk and GNBD Possible?

    - by Travis
    I have multiple webservers which use many small files to create dynamic web pages. Caching the web pages isn't an option. The webserver also performs writes, so I need a synchronous filesystem. I'm looking to maximise performance, as it's my understanding that small files are the weakness (to varying degrees) of a cluster filesystem over Ethernet. Currently I'm using CentOS 5.5, 64-bit. Since it's only about 300 MB of data, I'm looking at mdadm RAID-1 with the GNBD and a local hard disk, using the "--write-mostly" option so the reads are done from the local hard disk. Is this possible? If so, is there any advantage to making it a tmpfs disk instead of a local hard disk? Or will the files on the local hard disk just get cached in RAM anyway, so I won't see a performance gain by using tmpfs, assuming there's enough RAM available?
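
    For what it's worth, --write-mostly is set per member device at creation (or add) time: it applies to the devices listed after it, marking the network leg as the one to avoid for reads. A minimal sketch, with hypothetical device names:

        # Local disk first; the GNBD member is marked write-mostly so reads prefer the local leg
        mdadm --create /dev/md0 --level=1 --raid-devices=2 \
            /dev/sdb1 --write-mostly /dev/gnbd0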

    Read the article

  • Is it possible for a router to provide a different gateway?

    - by Hao
    I have a TP-Link wireless router at 192.168.10.188, and I was able to make it function as a DHCP provider (range 192.168.10.100 to 192.168.10.109). The only thing I cannot make work as intended is for it to provide a different gateway (192.168.10.1). The computers that obtain an IP from that router properly get everything else (dynamic IP and DNS IP), but there is no function on that router to hand out a different gateway; the computers always get the router's own address (192.168.10.188) as the gateway. Is there a router that can provide a gateway other than its own address? Or, to put the question better: can the DHCP server on a router hand out a gateway other than the router's own address? Note: I cannot make the wireless router's address 192.168.10.1; we have a main router (non-wireless, address 192.168.10.1) that is connected directly to the internet.
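
    Any full DHCP server exposes this directly: in ISC dhcpd the gateway handed to clients is just the routers option. A minimal sketch of the equivalent dhcpd.conf subnet block (the addresses are taken from the question; the DNS line is illustrative):

        subnet 192.168.10.0 netmask 255.255.255.0 {
            range 192.168.10.100 192.168.10.109;
            option routers 192.168.10.1;            # gateway other than the DHCP server itself
            option domain-name-servers 192.168.10.188;
        }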

    Read the article

  • dhclient and dhcpcd: the real difference

    - by rubixibuc
    I can't figure out the difference from just the man pages. I can see that one is a daemon and one is a client, but what does that mean practically when using the commands? Also, what is the difference between the client and the daemon in this case - not just the terms (client and daemon), but functionally? Edit: How are the tasks divided? If the client updates the information on the client, what is the purpose of the daemon? I'm talking about the client daemon in this case, dhcpcd, not dhcpd. Both come installed by default with some versions of Linux and seem to share the duties of the DHCP client.
        NAME
            dhcpcd - DHCP client daemon
        NAME
            dhclient - Dynamic Host Configuration Protocol Client
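
    In day-to-day use the two are interchangeable front-ends for the same job - obtaining and renewing a lease on an interface - and both stay resident to handle renewals. A minimal sketch of equivalent invocations (the interface name is an assumption):

        # ISC dhclient: request a lease on eth0, then release it
        dhclient eth0
        dhclient -r eth0
        # dhcpcd: request a lease on eth0, then release it
        dhcpcd eth0
        dhcpcd -k eth0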

    Read the article

  • Xdebug 2.0.5 with Zend Server CE PHP 5.2.12 - possible?

    - by notbrain
    I'm using Zend Server CE with PHP 5.2.12 on OS X Snow Leopard and want to use Xdebug. I've turned off Zend Data Cache, Zend Optimizer+, and Zend Debugger in the console. When I run
        $ cd ~/Downloads/xdebug-2.0.5
        $ /usr/local/zend/bin/phpize
    I get
        Configuring for:
        PHP Api Version:         20041225
        Zend Module Api No:      20060613
        Zend Extension Api No:   220060519
    The PHP API version, 20041225, seems to be off from the documentation (aka wrong). When I continue the installation with
        $ ./configure --with-php-config=/usr/local/zend/bin/php-config
        $ make
        $ sudo make install
    the installed xdebug.so seems to be the wrong one. Which version of Xdebug do I need for this PHP API version? The Zend API numbers are OK; I'm just confused about why the PHP API version doesn't match.
        PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/local/zend/lib/php_extensions/xdebug.so' - (null) in Unknown on line 0
        PHP 5.2.12 (cli) (built: Feb 17 2010 13:39:36)
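
    For completeness, Xdebug has to be loaded as a Zend extension rather than a regular one, so once a matching build is compiled, the php.ini line is the usual suspect; a minimal sketch (the path is taken from the warning above):

        ; Load Xdebug as a Zend extension; a plain "extension=" line will not work
        zend_extension=/usr/local/zend/lib/php_extensions/xdebug.so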

    Read the article

  • DNS hierarchy not working - please help

    - by nikhilelite
    The setup: DNS1, WWW1 and Gateway1 form the sub-internal network; DNS0, WWW0 and Gateway0 form the internal network.
        DNS1:     192.168.250.3/24
        WWW1:     192.168.250.4/24
        Gateway1: 192.168.250.1/24 (internal) :: 192.168.0.150 to 192.168.0.175 (external)
        DNS0:     192.168.0.197/24
        WWW0:     192.168.0.197/24
        Gateway0: 192.168.0.1 (internal) :: 69.94.x.x (external, dynamic, ISP-controlled)
    Expected behavior: when using dig from internal (192.168.250.0/24) hosts to query a domain served by the 192.168.0.197 nameserver (for which it is authoritative), it should return the IP address. What's happening: after dig, the answer section is empty, and the query tries to reach a root server instead of 192.168.0.197, even though I have defined 192.168.0.197 as the DNS server in Gateway1's resolv.conf. Why? I need this working ASAP - can anyone here help?
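
    A quick way to split this problem is to query the authoritative box directly, bypassing resolv.conf; if the first query below answers but the second does not, the issue is the client's resolver configuration rather than DNS0 itself. A minimal sketch (the record name is hypothetical):

        # Ask DNS0 directly for a name it is authoritative for
        dig @192.168.0.197 www.example.com A
        # Compare with whatever resolv.conf points at
        dig www.example.com A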

    Read the article

  • Why do disk images hosted on a read-only HFS+ partition behave differently?

    - by deceze
    I have come across the following phenomenon and would like to know how leaky Windows' file system abstraction is or if there's something else involved. I partitioned the hard disk of my MacBook Pro and installed Windows 7 (64 bit). The Boot Camp driver package includes file system drivers that enable Windows to access the Mac OS HFS+ partition. It's read-only access, but it works. Now, I have some disk images of stuff I usually install, so I grabbed a copy of Daemon Tools to mount them. When I mount an image saved on the HFS+ partition, about two out of three installers on these disks (usually InstallShield) crash with all sorts of weird errors. Most are just gibberish that lead to all sorts of non-solutions on Google, one was "This application is not the right type for your computer, check if you need 32 or 64 bit versions." When moving the image files to another Windows 7 computer on the network and mounting them from the network share, they work fine. My question now is, why do applications behave differently depending on whether the read-only image file, which should be abstracted away through the read-only virtual Daemon Tools drive, is located on a read-only HFS+ partition or on a Windows network share? And I'll just roll this into the question as well since I was wondering: Does the file system of a network share matter? Does the client system need to understand the file system of the share host or is that abstracted away in SMB?

    Read the article

  • Removing a device in "removed" state from Linux software RAID array

    - by Sahasranaman MS
    My workstation has two disks (/dev/sd[ab]), both with similar partitioning. /dev/sdb failed, and cat /proc/mdstat stopped showing the second sdb partition. I ran mdadm --fail and mdadm --remove for all partitions from the failed disk on the arrays that use them, although all such commands failed with:
        mdadm: set device faulty failed for /dev/sdb2: No such device
        mdadm: hot remove failed for /dev/sdb2: No such device or address
    Then I hot-swapped the failed disk, partitioned the new disk and added the partitions to the respective arrays. All arrays got rebuilt properly except one, because in /dev/md2 the failed disk doesn't seem to have been removed from the array properly. Because of this, the new partition keeps getting added as a spare to the array, and its status remains degraded. Here's what mdadm --detail /dev/md2 shows:
        [root@ldmohanr ~]# mdadm --detail /dev/md2
        /dev/md2:
                Version : 1.1
          Creation Time : Tue Dec 27 22:55:14 2011
             Raid Level : raid1
             Array Size : 52427708 (50.00 GiB 53.69 GB)
          Used Dev Size : 52427708 (50.00 GiB 53.69 GB)
           Raid Devices : 2
          Total Devices : 2
            Persistence : Superblock is persistent
          Intent Bitmap : Internal
            Update Time : Fri Nov 23 14:59:56 2012
                  State : active, degraded
         Active Devices : 1
        Working Devices : 2
         Failed Devices : 0
          Spare Devices : 1
                   Name : ldmohanr.net:2 (local to host ldmohanr.net)
                   UUID : 4483f95d:e485207a:b43c9af2:c37c6df1
                 Events : 5912611

            Number   Major   Minor   RaidDevice State
               0       8        2        0      active sync   /dev/sda2
               1       0        0        1      removed
               2       8       18        -      spare   /dev/sdb2
    To remove a disk, mdadm needs a device filename, which was /dev/sdb2 originally, but that no longer refers to device number 1. I need help with removing device number 1 with 'removed' status and making /dev/sdb2 active.
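
    mdadm accepts the keywords failed and detached in place of a device name for exactly this case, letting you clear slots whose device node no longer exists; a minimal sketch:

        # Drop any array members whose device nodes have vanished
        mdadm /dev/md2 --remove detached
        # /dev/sdb2 is already attached as a spare; once the stale slot is cleared,
        # md should promote it and start the rebuild - watch progress here:
        cat /proc/mdstat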

    Read the article

  • Synchronising a remote folder with a local one.

    - by Workshop Alex
    I am using a network disk (connected to my router by USB) to store several data files. A simple .NET application that I've created is supposed to read and modify these data files. However, some security issues are preventing this application from accessing these files directly. (Actually, these have been built into my application on purpose, since it's not going to support NAS disks.) Since this disk is shared with several computers, I just want a simple synchronisation method which will copy the files to a local folder where my application can access them. And, once they are modified, it should send the modified files back to the NAS disk again. I have two options: 1) build a second application to do my own synchronisation; 2) find some built-in function inside Windows 7 Ultimate which can do this for me. Option 2 is preferred. Option 1 is something I can do easily, if need be. I don't need third-party tools. (Still, feel free to add some references to good tools, although I won't accept them as answers.) Basically, is this possible with Windows 7 and, if so, how?
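
    Windows 7 does ship one fit-for-purpose tool on the command line: robocopy can mirror in either direction and only copies changed files. A minimal sketch, with hypothetical share and folder names (note that /MIR also deletes files missing from the source):

        rem Pull the NAS folder down to a local working copy
        robocopy \\router-nas\data C:\LocalData /MIR /R:2 /W:5
        rem ...after the application has modified the files, push changes back
        robocopy C:\LocalData \\router-nas\data /MIR /R:2 /W:5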

    Read the article

  • LinkSys WRT54GL + AM200 in half-bridge mode - UK setup guide recommendations?

    - by Peter Mounce
    I am basically looking for a good guide on how to set up my home network with this set of hardware. I need:
    - Dynamic DNS
    - Firewall + port-forwarding
    - VPN
    - Wake-on-LAN from outside the firewall
    - VoIP would be nice
    - QoS would be nice (make torrents take lower priority than other services when those other services are active)
    - DHCP
    - Wireless + WPA2 security
    - Ability to play multiplayer computer games
    I am not a networking or computing neophyte, but the last time I messed with network gear was a few years ago, so I need to dust off knowledge I only half have. I have read that I should set up the AM200 in half-bridge mode so that the WRT54GL gets the WAN IP - this sounds like a good idea, but I'd still like to be advised. I have read that the dd-wrt firmware will meet my needs (though I gather I'll need the VPN-specific build, which appears to preclude supporting VoIP), but I'm not wedded to using it. I live in the UK, and my ISP supplies me with a block of 8 static IPs, of which 5 are usable to me, and a PPPoA ADSL2+ connection.

    Read the article

  • SQL Server performance on vSphere 4.0

    - by Charles
    We are having a performance issue with our VMware environment that we cannot explain, and I am hoping someone here may be able to help. We have a web application that uses a database backend. We have a SQL 2005 cluster set up on Windows 2003 R2 between a physical node and a virtual node. Both physical servers are identical 2950s with 2x Xeon X5460 quad-core CPUs and 64 GB of memory, 16 GB allocated to the OS. We are utilizing an iSCSI SAN for all cluster disks. The problem is this: when running the application under repeated stress testing that adds CPUs to the cluster nodes, the physical node scales from 1 pCPU to 8 pCPUs, meaning we see continued performance increases. When testing the node running vSphere, we see the expected 12% performance hit for being virtual, and we still scale from 1 vCPU to 4 vCPUs like the physical node, but beyond this performance drops off; by the time we get to 8 vCPUs we are seeing performance numbers worse than at 4 vCPUs. Again, both nodes are configured identically in terms of hardware, guest OS, SQL configuration etc., and there is no traffic other than the testing on the system. There are no other VMs on the virtual server, so there should be no competition for resources. We have contacted VMware for help, but they have not really been any, suggesting things like setting SQL processor affinity which, while helpful, would have the same net effect on each box and should not change our results in the least. We have looked at all of VMware's SQL tuning guides with regard to vSphere, with no benefit. Please help!
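
    A common first check for a wide VM like this is scheduler contention rather than raw CPU: on the ESX host, esxtop's CPU view shows how long vCPUs wait to be co-scheduled. A minimal sketch of what to look at (a diagnostic suggestion, not something from the question):

        # On the ESX host console / SSH session
        esxtop            # press 'c' for the CPU view
        # Watch %RDY (ready time) and %CSTP (co-stop) for the SQL VM;
        # sustained high values at 8 vCPUs point at co-scheduling overhead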

    Read the article

  • Windows file server access control by device

    - by Ori Shavit
    I'm trying to build a system where access to certain resources (file shares) on a Windows server is limited not only by the username (in an Active Directory domain), but also by the client machine. So far I haven't found a good way to do this; adding the computer account to the DACL is apparently not the way. Windows Server 2012 supports this with Dynamic Access Control, but that method seems to require all clients to be Windows 8, with no way to use it with Windows 7 clients. Is there a supported way to do this (or, alternatively, to add support for device authorization with Windows 7)?

    Read the article

  • SSL Certificate for local web server

    - by Firefly
    Is it at all possible to create a self-signed certificate, for use on multiple machines on a local network, which would stop the browser complaining that it is not a trusted site? We have a product which is basically a computer running lighttpd to serve a web interface for configuring the computer (sort of how a router has a web interface). There can also be many of these machines running on the same network with dynamic IPs. What I basically want to do is enable SSL for extra security, but I don't want people on the local network to be given a browser warning about the certificate not being trusted. Is this at all possible?
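
    The usual pattern here is to create one private CA, install only that CA certificate as trusted on the client machines, and sign each device's certificate with it; a minimal openssl sketch, where the names and lifetimes are placeholders:

        # One-time: create a private CA
        openssl req -new -x509 -days 3650 -keyout ca.key -out ca.crt -subj "/CN=Example Device CA"
        # Per device: key + signing request + CA-signed certificate
        openssl req -new -newkey rsa:2048 -nodes -keyout device.key -out device.csr -subj "/CN=device.local"
        openssl x509 -req -days 825 -in device.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out device.crt

    The dynamic-IP caveat remains: the certificate's name has to match whatever the browser is pointed at, so a stable hostname works better than an IP address.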

    Read the article

  • NTFS frequent corruption when writing many small files, index $I30 error

    - by david sedai
    I'm running Windows 7 Ultimate on a laptop with a 500 GB HDD, with all partitions formatted as NTFS. I do a lot of programming and LaTeX typesetting, both of which involve a large amount of reading/writing/deleting of a lot of small files, such as C++ library headers or LaTeX packages. The problem is that frequently, when there is a lot of writing to files, the partition being written to becomes corrupt: chkntfs e: returns dirty, where e: is the partition being written to. I've re-formatted the drive, I've contacted the laptop manufacturer and had the HDD checked (the HDD is not faulty, there are no bad sectors), and I've tried a brand new HDD, to no avail; the other partition on the same physical drive doesn't have this issue. I'm pretty sure it's not hardware-related. I've searched the Microsoft support pages; one page, http://support.microsoft.com/kb/982018, provides an update for Advanced Format disks, which I've already installed. The chkdsk log shows $I30 index errors. I'm at a loss here. Can anyone help? Thanks in advance.
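
    For the record, the on-demand repair pass that rebuilds corrupt $I30 index entries is the standard chkdsk invocation; a minimal sketch:

        rem Fix filesystem errors on the affected partition (the volume must be locked, or the check runs at next boot)
        chkdsk e: /f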

    Read the article

  • How do you configure ISC Bind to support GSS-TSIG Updates?

    - by netlinxman
    First, has anyone EVER configured ISC BIND 9.5.0 or greater with support for GSS-TSIG dynamic DNS updates AND gotten it to work? If so, what configuration was used to make that happen? I feel close to having this working. I see that the GSS credential passes without apparent error during the TKEY negotiation between an Active Directory DC and the BIND DNS server:
        client 192.168.0.30#52314: query
        gss cred: "DNS/[email protected]", GSS_C_ACCEPT, 4294967256
        gss-api source name (accept) is [email protected]
        process_gsstkey(): dns_tsigerror_noerror
        client 192.168.0.30#52314: send
    But when the update is sent, it is refused:
        client 192.168.0.30#58330: update
        client 192.168.0.30#58330: updating zone 'example.com/IN': update failed: rejected by secure update (REFUSED)
        client 192.168.0.30#58330: send
    Does anyone have this working in the real world?
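
    A REFUSED at this stage normally comes from the zone's update policy rather than the GSS exchange itself. A minimal named.conf sketch of the pieces involved - the keytab path is an assumption, and on BIND versions before 9.8 the tkey-gssapi-credential/tkey-domain pair is used instead of tkey-gssapi-keytab:

        options {
            tkey-gssapi-keytab "/etc/named.keytab";   // BIND 9.8+; holds the DNS/hostname@REALM service key
        };
        zone "example.com" {
            type master;
            file "example.com.zone";
            update-policy {
                grant EXAMPLE.COM krb5-subdomain example.com. ANY;  // let realm machine accounts update their records
            };
        };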

    Read the article

  • Connect to CentOS LAMP instance from Windows PCs

    - by Gnanesh
    I have a CentOS 6 machine running on our network with a simple LAMP installation on it. It holds some files which I want to access from other Windows PCs, and I am able to do so using the IP address of the CentOS machine. Since the IP address of the CentOS machine could be dynamic, I would rather connect to it using its computer/host name, but I am not able to do so from the Windows PCs. Can someone help me point out what I may be missing, and help me resolve this?
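
    One low-friction way to make a Linux box answer to a name on a Windows network is to run Samba's NetBIOS name daemon, so the hostname resolves the way other Windows machines do. A minimal sketch of /etc/samba/smb.conf plus the service start on CentOS 6 (the NetBIOS name is an assumption):

        [global]
            netbios name = CENTOSLAMP
            workgroup = WORKGROUP

        # then, on the CentOS 6 shell:
        service nmb start
        chkconfig nmb on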

    Read the article

  • Time Machine vs Source Control?

    - by Blub
    I finally got convinced to start using some kind of version control for my code instead of zipping up a copy of the project at the end of each day. I downloaded TortoiseSVN and used it to create a repository locally on my HDD. I've been using it for 2 days now, but I have to say that using it is actually more hassle than just copying the project manually in Explorer. Sure, you only store incremental changes, but with the cheap disks of today I can't really say that's an argument when you only have small projects. I haven't really found a quick way to browse the older versions of my files either. What I want is an infinite undo that is completely transparent while I code: if I save the file, I want a backup. I don't want to check out and check in, and don't even get me started on moving files. I haven't tried Time Machine for OS X, but it looks like exactly what I'm looking for. Does such a program exist for Windows? Preferably free, and with some kind of tagging system so I can tag a timestamp when the project is working, etc. Maybe I should add that I mostly work alone on a single computer. Update: Some of you asked why I want backup. Since I work alone, it's mostly to allow me to quickly hack up a solution without worrying that something will screw up.
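
    For comparison, a local-only Git repository gets close to the "save = backup, tag when it works" workflow without any server or checkout ceremony; a minimal sketch (the tag name is a placeholder):

        # One-time setup in the project folder
        git init
        git add -A
        git commit -m "initial snapshot"
        # Whenever the project reaches a working state
        git add -A && git commit -m "snapshot" && git tag working-2010-05-01
        # Browse or restore any older version
        git log --oneline
        git checkout working-2010-05-01 -- .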

    Read the article
