Search Results

Search found 17406 results on 697 pages for 'option explicit'.


  • Very slow printing from print server

    - by evolvd
    The print server is a VM on Xen; the VM is Windows 2003 32-bit. During the issue the VM is not being taxed in any way: CPU, memory, disk read/write, and network speed are all fine. The problem I see is the transfer of the print file from the print server to the printer. The 80 MB file is transferred from the client to the print server in about 2 minutes, but it then takes about 2 hours for that file to be sent to the printer. I can't figure out why this would just start happening. The printer is rebooted every evening and is only used for one large print job in the morning. The server has been rebooted with no effect. I changed the spool option to send the entire spool to the server before printing starts, and it had no effect.

    This printer problem did happen to come about after some changes to the Xen environment. The Xen servers changed from using HBA NIC cards to software iSCSI, and a new switch was put in. I don't think this is related to the problem, since all the speeds on the VMs are better now. The change happened on Saturday and the first print to this printer happened on Monday morning. I'm just putting that out there; like I said, I don't think it is related, but I don't want to rule it out.

    At this point I don't have many other options besides the physical layer. I can switch out the network cable that goes to the printer, and I might be able to print the same job to another printer. I won't be able to test those things until this afternoon, though. Any other ideas or tests I could do to try to find the reason for the slow speed? I forgot to say that this is only happening when printing to this one printer.

    === Update ===

    I found out that there are a few printers that currently have this issue, not just the one. There are over 30 printers on the server, though, so I know it's not happening to all of them. I printed a large PDF from the server and it printed at the normal speed. If the machine sends the large print request, it gets to the server fine but is then slow to get from the server to the printer. If sent directly from the server, it reaches the printer at the normal speed. The question now is: why is there a speed difference when the job comes from the machine, and why would it start now?

    Read the article

  • Operating systems on SD cards

    - by HisDudeness
    I've been getting some wild ideas in the last few days, like putting some operating systems on SD cards rather than on my hard drive. I'll go into detail now and explain what led me to consider this probably abominable decision.

    I am on a laptop (that means I have a native SD-card reader) which is currently running a cross-distro setup, with a bunch of Linux systems (placed in dedicated ext4 logical partitions inside a huge extended one) managed by a single GRUB. To date, my laptop hasn't seen any Windows system even with binoculars. I was thinking about placing the OS part of my setup onto a Secure Digital card to save all of my 500 GB hard drive for documents, music, videos and so on, and to be able to just remove the SD card and boot my system on another computer too, as well as having the possibility of booting other systems on mine by just plugging in another SD card, without having to keep it permanently in my PC. Also, in the remote case that in the near future I want to boot Windows 8 on it: I read that it causes major boot incompatibility issues with other systems by requiring a digital signature in order for them to start. By having it on a removable drive, I could just get rid of it when I don't need it and swap its card for the Linux one, leaving no obstacles to their boot.

    Now, my questions are:

    1. I know that, unlike traditional rotating disk drives, flash-based ones have a limited lifespan in terms of write cycles. Is that an obstacle to this kind of usage? I mean, some Ultrabooks are using SSDs now; is it the same issue, or are there differences between Solid State Drives and Secure Digital cards in that sense? Does having them store system files which sit in fixed positions (defeating wear levelling), constantly being re-read and updated, wear them out quickly?

    2. Are all motherboards and BIOSes able to boot from SD cards just as they are from USB pen drives (I mean, provided the card reader is USB-connected, isn't it)? Or can bootloaders like GRUB not be installed on SD cards and work? If they can't, is installing GRUB to the MBR and pointing a boot option at the SD card a solution? Will it work? Are there any other problems with installing OSs on a Secure Digital card?
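
    On the second question: GRUB itself doesn't care that the target is an SD card, so long as the firmware can see it as a disk at boot time (which is reader- and BIOS-dependent). Below is a minimal sketch of installing GRUB onto a card, assuming it shows up as /dev/mmcblk0 with its first partition mounted at /mnt/sd; both names are placeholders for your setup.

        # Sketch only -- device and mount point are assumptions, double-check with lsblk
        sudo mount /dev/mmcblk0p1 /mnt/sd
        sudo grub-install --boot-directory=/mnt/sd/boot /dev/mmcblk0
        sudo grub-mkconfig -o /mnt/sd/boot/grub/grub.cfg

    Whether GRUB in the internal drive's MBR can then load a kernel from the card still depends on the firmware exposing the reader as a drive, since legacy GRUB reads disks through the BIOS.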

    Read the article

  • Recommendations or advice for shared computer control

    - by Telemachus
    Basic scenario: we are a school (overwhelmingly Mac, some Windows machines via Boot Camp), and we are considering using DeepFreeze to guard the state of our shared machines. We have roughly 250 machines that are either shared laptops (which move around quite a bit) or common desktops in public spaces. Obviously, we spend a lot of time maintaining the machines and trying to reverse the inevitable drift as people make changes to the computers. We would like to control the integrity of the build we initially put onto the machines without handcuffing users and especially without using Mac's Parental Control software. (We've had nothing but bad experiences with it.) We've been testing DeepFreeze, and so far it's very impressive. But I'm curious to hear if people who have used DeepFreeze or any similar software have any advice or tips. To get things started, I will post my own pros and cons.

    Pros:

    - The state of the machine is frozen in our chosen state. All changes made to the machine after that disappear upon restart. (This frozen state really appears to cover everything. I have yet to do something to a test machine that isn't instantly healed.) Tons of trivial but time-consuming maintenance is gone in an instant. Also, lots of not-so-trivial breakage should be avoided.
    - There are good options, however, that allow you to create storage spaces either globally or per user. (Otherwise, stored files disappear upon reboot. For some machines, this is a good option itself. Simply warn people: save externally or else; this machine is a kiosk, not your storage space.)

    Cons:

    - Anytime we actually need to make a change (upgrade basic software, add a printer or an AirPort permanently, add new software), the process is a bit more complex: reboot into a special mode (thawed state), make changes, reboot back into frozen mode. If (when?) we forget this, we will end up making changes that disappear after the next reboot.
    - Users will forget to save files correctly (in the right place or externally), and we will have loud, unpleasant conversations explaining that we can't recover the document they worked on all afternoon yesterday. The machine rebooted; the file is gone.

    These are my initial thoughts, but I would love to hear from other people who have experience with DeepFreeze or any similar software. What should we be careful about? Do the pros outweigh the cons? What gains or problems am I not seeing? Thanks.

    Read the article

  • Can't save screen resolution setting.

    - by Searock
    Hi, my screen resolution in Windows and in the previous version of Ubuntu (9.04) was 1152x864, but Ubuntu 10.04 only offers me 1024x768 and 1360x768. I have somehow managed to add the 1152x864 resolution by using the xrandr command:

        searock@searock-desktop:~$ cvt 1152 864
        # 1152x864 59.96 Hz (CVT 1.00M3) hsync: 53.78 kHz; pclk: 81.75 MHz
        Modeline "1152x864_60.00"  81.75  1152 1216 1336 1520  864 867 871 897 -hsync +vsync
        searock@searock-desktop:~$ xrandr --newmode "1152x864_60.00" 81.75 1152 1216 1336 1520 864 867 871 897 -hsync +vsync
        searock@searock-desktop:~$ xrandr --addmode S-video 1152x864
        xrandr: cannot find output "S-video"
        searock@searock-desktop:~$ xrandr
        Screen 0: minimum 320 x 200, current 1024 x 768, maximum 4096 x 4096
        VGA1 connected 1024x768+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
           1360x768       59.8
           1024x768       60.0*
           800x600        60.3     56.2
           848x480        60.0
           640x480        59.9     59.9
          1152x864_60.00 (0x124)   81.0MHz
                h: width  1152 start 1216 end 1336 total 1520 skew    0 clock   53.3KHz
                v: height  864 start  867 end  871 total  897          clock   59.4Hz
        searock@searock-desktop:~$ xrandr --addmode VGA1 1152x864_60.00

    But the problem is that whenever I restart my computer I get this message:

        Could not apply the stored configuration for the monitors.
        Could not find a suitable configuration of screens.

    And then it comes back to 1024x768. My graphics card is an Intel(R) 82945G Express Chipset Family. Is there any way I can fix this once and for all? Thanks.

    Edit 1: rumtscho has suggested that I modify the xorg.conf file, but I am not sure what HorizSync means. Is it the horizontal frequency? My monitor model is an Acer V173. So what should HorizSync and VertRefresh be?

    Edit 2: I have edited my xorg.conf file as follows:

        Section "Monitor"
            Identifier   "Configured Monitor"
            HorizSync    30-80
            VertRefresh  55-75
        EndSection

    Then I added the resolution and restarted my computer, and I am still facing the same problem. Is there something that I am missing?

    Edit 3: For now I have edited /etc/gdm/Init/Default (the gdm startup script) to include the following xrandr commands, just below the line "initctl -q emit login-session-start DISPLAY_MANAGER=gdm":

        xrandr --newmode "1152x864_60.00" 81.75 1152 1216 1336 1520 864 867 871 897 -hsync +vsync
        xrandr --addmode VGA1 1152x864_60.00
        xrandr -s 1152x864_60.00

    This has solved my problem, but these commands have increased my computer's boot time. I think I will have to edit the xorg.conf file properly.

    Edit 4: Instead of adding these commands to the gdm startup scripts, I have created a shell script and added it to startup (System - Preferences - Startup Applications):

        #!/bin/bash
        xrandr --newmode "1152x864_60.00" 81.75 1152 1216 1336 1520 864 867 871 897 -hsync +vsync
        xrandr --addmode VGA1 1152x864_60.00
        xrandr -s 1152x864_60.00

    And don't forget to add execute rights (Right Click - Properties - Permissions - Allow executing file as program).
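
    To make the mode stick without a login script, the piece missing from Edit 2 is the Modeline itself plus a "Screen" section that references the monitor. The snippet below is only a sketch: the Modeline is the one cvt generated above, and the Device/Screen identifiers must match whatever already exists in your xorg.conf (the names here are assumptions).

        Section "Monitor"
            Identifier   "Configured Monitor"
            HorizSync    30-80
            VertRefresh  55-75
            Modeline "1152x864_60.00" 81.75 1152 1216 1336 1520 864 867 871 897 -hsync +vsync
        EndSection

        Section "Screen"
            Identifier "Configured Screen"
            Device     "Configured Video Device"
            Monitor    "Configured Monitor"
            SubSection "Display"
                Modes  "1152x864_60.00" "1024x768"
            EndSubSection
        EndSection

    After saving, restart X (or reboot); keep a backup of the original xorg.conf so it can be restored from a console if X refuses to start.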

    Read the article

  • How can I change how OS X's 'say' command pronounces a word?

    - by jwhitlock
    OS X's say command is useful for some tasks (such as Skype's "notify me when a contact comes online"), but it pronounces some names incorrectly. Is there a way to teach say to pronounce a word differently? For example, try:

        say "Hi, Joel Spolsky"

    The "ol" sounds like "ball" rather than "old". I'd like to add an exception that says "pronounce Spolsky like this", rather than try to teach it new linguistic rules. I bet there is a way, since it can pronounce "iPhone" as Apple wants.

    Update - After some research, here's what I've learned:

    - Text-to-speech is split between turning the text into phonemes, and then turning the phonemes into audio using a voice. Changing the voice doesn't affect the phonemes.
    - The Speech Synthesis Manager has some functions for turning text into phonemes, and a method for registering a speech dictionary that adds new text-phoneme maps. However, Apple's speech dictionary must be in a binary form - I didn't find any plist XML.
    - Using dtrace while running say, I found some interesting files opened in /System/Library/PrivateFrameworks/SpeechDictionary.framework/Resources. This is probably the speech dictionary, but they are all binary, except for Homophones, which is XML. Adding entries to Homophones does nothing - it is probably used in speech-to-text. They are also code-signed by Apple - changing them may prevent some programs from working. The files are: PrefixDictionary, CartNames, CartLite, SymbolDictionary, Homophones.
    - There are ways to add text versions of application interface elements so VoiceOver works, a lot of which a developer gets for free, but there are tricky bits. The standard here appears to be to use a phonetic spelling as needed.

    My guesses are:

    - say is a light layer of code on top of the Speech Synthesis Manager. It would be easy for the Apple devs to add a command line option that takes the path to a speech dictionary plist for alternate phoneme mapping, but they didn't. It may be a useful open-source project to write a better say.
    - Skype probably uses the Speech Synthesis Manager directly, leaving no hooks to change the way my friends' names are pronounced, other than spelling them phonetically, which is silly.

    The easiest way to make a command line version of say is how JRobert suggested. Here's my quick implementation, using Doug Harris's spelling suggestion:

        #!/bin/sh
        echo $@ | tr '[A-Z]' '[a-z]' | sed "s/spolsky/spowlsky/g" | /usr/bin/say

    Finally, some fun command line stuff:

        # Apple is weird
        sqlite3 /System/Library/PrivateFrameworks/SpeechDictionary.framework/Resources/Tuples .dump

        # Get too much information about what files are being opened
        sudo dtrace -n 'syscall::open*:entry { printf("%s %s",execname,copyinstr(arg0)); }'

        # Just fun
        say -v bad "Joel Spolsky Spolsky Spolsky Spolsky Spolsky, Joel Spolsky Spolsky Spolsky Spolsky Spolsky"
        echo "scale=1000; 4*a(1)" | bc -l | say

    Read the article

  • Running an rsync sweep before initializing lsyncd for synchronizing instances on EC2

    - by chrisallenlane
    My company uses several EC2 servers that scale up and down according to the load we're receiving on our sites at any given moment. For the sake of our discussion here, we're running four instances:

    - master.ourdomain.com - the file-syncing "hub" of the webservers
    - www1/www2/www3.ourdomain.com - three webservers which turn on or off as dictated by load

    I'm using lsyncd to keep all of the webservers in sync, and for the most part it's working quite well. We're using a two-way syncing scheme, such that each webserver syncs against master, and master syncs against each webserver. Thus the webservers are kept in sync even though they aren't syncing against each other directly.

    I'm having one problem that I'm having a hard time solving, though. It occurs under these circumstances: changes are made on master (perhaps after we've pushed new code) while some of the redundant webservers are sleeping, and a sleeping webserver then wakes up to absorb load. Under those circumstances, I would like the following to happen: first, the newly-awoken webserver should sync its file structure - one way - against master, to bring its web application code up to date. Then, and only then, should it begin pushing changes in its file structure back to master. Unfortunately, currently, when a sleeping server is started, lsyncd pushes changes back to master before updating its own codebase, thus overwriting new code with old. So before lsyncd starts, I'd like to be able to synchronize the webserver's code against master's, perhaps by running a simple one-way rsync between the two machines.

    We're running lsyncd v2, and I've tried to make this happen by using the "bash" configuration options documented in the lsyncd manual. My configuration file looks like this:

        settings = {
            logfile      = "/home/user/log/lsyncd/log.txt",
            statusFile   = "/home/user/log/lsyncd/status.txt",
            maxProcesses = 2,
            nodaemon     = false,
        }

        bash = {
            onStartup = "rsync [email protected]:/home/user/www /home/user/www"
        }

        sync {
            default.rsyncssh,
            source      = "/home/user/www/",
            host        = "[email protected]",
            targetdir   = "/home/user/www/",
            rsyncOpts   = "-ltus",
            excludeFrom = "/home/user/conf/lsyncd/exclude"
        }

    (I've obviously redacted that file somewhat to protect the identities of the guilty.) Simply put, though, this just isn't working. How else might I approach this problem? I was looking at the --delete-after option in man rsync, but I don't think that does what I'm looking for. Are there any suggestions about how I should approach this problem? Thanks for lending your time and expertise.

    Chris
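
    One way around lsyncd's lack of a reliable pre-start hook is to not rely on lsyncd for the catch-up at all: wrap its startup in a script (or init script) that first pulls one way from master and only then execs lsyncd. This is just a sketch - the hostname, paths and config location are placeholders, and --delete makes the wake-up copy an exact mirror of master, so drop it if stale local files should survive.

        #!/bin/bash
        # Hypothetical startup wrapper for a webserver instance
        set -e

        # One-way catch-up pull from the hub before two-way syncing begins
        rsync -az --delete user@master.ourdomain.com:/home/user/www/ /home/user/www/

        # Hand off to lsyncd once the local tree matches master
        exec lsyncd /etc/lsyncd/lsyncd.conf.lua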

    Read the article

  • SQL clustering on Hyper-V - is a cluster within a cluster a benefit?

    - by Chris W
    This is a re-hash of a question I asked a while back - after a consultant came in firing ideas at other teams in the department, the whole issue has been raised again, hence I'm looking for more detailed answers.

    We're intending to set up a multi-instance SQL cluster across a number of physical blades which will run a variety of different systems across each SQL instance. In general use there will be one virtual SQL instance running on each VM host. Again, in general operation each VM host will run on a dedicated underlying blade. The setup should give us lots of flexibility for maintenance of any individual VM or underlying blade, with all the SQL instances able to fail over as required.

    My original plan had been to do the following:

    1. Install 2008 R2 on each blade
    2. Add Hyper-V to each blade
    3. Install a 2008 R2 VM on each blade
    4. Within the VMs - create a failover cluster and then install SQL Server clustering.

    The consultant has suggested that we instead do the following:

    1. Install 2008 R2 on each blade
    2. Add Hyper-V to each blade
    3. Install a 2008 R2 VM on each blade
    4. Create a cluster on the HOST machines which will host all the VMs.
    5. Within the VMs - create a failover cluster and then install SQL Server clustering.

    The big difference is the addition of step 4, whereby we cluster all of the guest VMs as well. The argument is that it improves maintenance further, since we have no ties at all between the SQL cluster and the physical hardware. We can in theory live-migrate the guest VMs around the hosts without affecting the SQL cluster at all, so for routine maintenance of physical blades we can move the SQL cluster around without interruption and without needing to fail over.

    It sounds like a nice idea, but I've not come across anything on the internet where people say they've done this and it works OK. Can I actually do the live migrations of the guests without the SQL cluster hosted within them getting upset? Does anyone have any experience of this setup, good or bad? Are there some pros and cons that I've not considered?

    I appreciate that mirroring is also a valuable option to consider - in this case we're favouring clustering since it will cover the whole of each instance and we have a good number of databases. Some DBs are for lumbering 3rd-party systems that may not even work kindly with mirroring (and my understanding of clustering is that failovers are completely transparent to the clients). Thanks.

    Read the article

  • Bluetooth not detecting any devices in Windows 7

    - by underDog
    My Lenovo ThinkPad E320 laptop running Windows 7 64-bit has recently been refusing to detect any Bluetooth devices. I have tried to connect, using 'Add devices' under 'Devices and Printers', to two different Bluetooth mice and my HTC Wildfire Android (2.2.1) phone, and none of them are detected in the 'Add a device' dialog.

    History: Bluetooth initially seemed OK when I first got this laptop. I was able to connect to and use my Android phone as a remote with no issues. Then I got my first Bluetooth mouse. It paired, but after each restart, or even after sleeping, it would not re-connect (even though it was listed under Bluetooth devices and supposedly 'working'), and I would need to remove the device and add it again. A week or two ago it stopped working altogether: it is not detected at all. I gave up on the mouse and bought another (Lenovo ThinkPad brand), only to find that it was not detected either. I subsequently tested my Android phone and discovered it would not be detected either.

    One thing of note: under 'Devices and Printers' there is a 'HID Keyboard Device' listed, which under properties is shown as a 'Bluetooth HID Device'. This was not there before this problem started. Each time I remove it, or uninstall it from Device Manager, it quickly re-installs itself, even with all my Bluetooth devices switched off.

    My research of this issue (Google and searching this site) has not yielded any definitive answers. I have changed the setting under Device Manager - Bluetooth - Properties - Power Management - 'Allow the computer to turn off this device to save power' to off. I have attempted to uninstall and re-install the Bluetooth hardware, including the 'remove drivers' option, and downloading and running the Lenovo Bluetooth installer package (found at http://support.lenovo.com/en_US/downloads/detail.page?DocID=DS014997). Bluetooth is turned on. All items under Bluetooth properties (Discovery and Connections) are checked. I have tried changing the batteries. I'm not sure what else I can try, apart from perhaps doing a fresh install of Windows. Any suggestions?

    Read the article

  • Raid1 with active and spare partition

    - by Daniel Baron
    I am having the following problem with a RAID1 software RAID partition on my Ubuntu system (10.04 LTS, 2.6.32-24-server in case it matters).

    One of my disks (sdb5) reported I/O errors and was therefore marked faulty in the array. The array was then degraded, with one active device. Hence, I replaced the hard disk, cloned the partition table and added all the new partitions to my RAID arrays. After syncing, all partitions ended up fine, having 2 active devices - except one of them. The partition which reported the faulty disk before did not include the new partition as an active device, but as a spare disk:

        md3 : active raid1 sdb5[2] sda5[1]
              4881344 blocks [2/1] [_U]

    A detailed look reveals:

        root@server:~# mdadm --detail /dev/md3
        [...]
            Number   Major   Minor   RaidDevice State
               2       8       21        0      spare rebuilding   /dev/sdb5
               1       8        5        1      active sync   /dev/sda5

    So here is the question: how do I tell my RAID to turn the spare disk into an active one? And why has it been added as a spare device? Recreating or reassembling the array is not an option, because it is my root partition. And I cannot find any hints on that subject in the Software RAID HOWTO. Any help would be appreciated.

    Current solution

    I found a solution to my problem, but I am not sure that this is the proper way to do it. Having a closer look at my RAID, I found that sdb5 was always listed as a spare device:

        mdadm --examine /dev/sdb5
        [...]
              Number   Major   Minor   RaidDevice State
        this     2       8       21        2      spare   /dev/sdb5

           0     0       0        0        0      removed
           1     1       8        5        1      active sync   /dev/sda5
           2     2       8       21        2      spare   /dev/sdb5

    So re-adding the device sdb5 to the array md3 always ended up adding the device as a spare. Finally I just recreated the array:

        mdadm --create /dev/md3 --level=1 -n2 -x0 /dev/sda5 /dev/sdb5

    which worked. But the question remains open for me: is there a better way to manipulate the summaries in the superblock and to tell the array to turn sdb5 from a spare disk into an active disk? I am still curious for an answer.
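
    For the record, a less drastic sequence to try before resorting to mdadm --create is to pull the member back out, wipe its stale superblock (which is what keeps advertising it as a spare), and add it again. This is only a sketch - it deliberately touches sdb5 only, but triple-check the device names, since --zero-superblock destroys the RAID metadata on whatever you point it at:

        # Sketch: remove the stuck member, clear its md superblock, re-add it
        mdadm /dev/md3 --fail /dev/sdb5
        mdadm /dev/md3 --remove /dev/sdb5
        mdadm --zero-superblock /dev/sdb5
        mdadm /dev/md3 --add /dev/sdb5
        watch cat /proc/mdstat     # should now show a rebuild into the active slot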

    Read the article

  • Broadcom HT1100 SATA controller not working properly with 1TB drives

    - by Jeff C
    I've been using RHEL distros for several years and have always managed to find the answers, until now. I know this is more of a hardware issue, but I've been working on this for over a week and trust Linux and the IT community to help more than HP.

    I have CentOS 6.3 installed on an HP ProLiant DL145 G3 server with the Broadcom HT1100 IO controller and ServerWorks SATA Controller MMIO BIOS v3.0.0015.6 firmware. This controller does not fully support large drives. Here's what I've tried and the results:

    1. Stock setup - freezes on the ServerWorks POST screen. I can't even enter CMOS setup without disconnecting the drives.
    2. If I simply disconnect the SATA cables before it gets to the ServerWorks screen and reconnect afterwards, I can boot from a CD, USB or PXE fine. However, fiddling with cables at every boot isn't practical.
    3. If I enter the BIOS config I can set it not to try booting the drives but leave the controller enabled. This lets me boot normally, but the drives are not visible in the OS (live CDs or USB installs).

    I used method #2 to install and update CentOS. I have the /boot partition on a USB drive (everything else is on the SATA drives in software RAID1), hoping that would work around the issue, but I get this:

        Kernel panic - not syncing: Attempted to kill init!
        Pid: 1, comm: init Not tainted 2.6.32-279.9.1.el6.x86_64 #1
        Call Trace:
        [<ffffffff814fd6ba>] ? panic+0xa0/0x168
        [<ffffffff81070c22>] ? do_exit+0x862/0x870
        [<ffffffff8117cdb5>] ? fput+0x25/0x30
        [<ffffffff81070c88>] ? do_group_exit+0x58/0xd0
        [<ffffffff81070d17>] ? sys_exit_group+0x17/0x20
        [<ffffffff8100b0f2>] ? system_call_fastpath+0x16/0x1b
        panic occured, switching back to text console

    I'm sure it should be possible to talk to the drives without the BIOS boot check, since the BIOS doesn't see them in method #2 either - they're disconnected when it checks - but Linux sees them fine. If anyone could help figure out how, I would greatly appreciate it!

    The other possible option I've come across is a complex firmware update. Tyan has a few boards on their website with the HT1100 and a ServerWorks v3.0.0015.7 update which says "adds support for TB drives" in the release notes. If someone could help me get the Tyan SATA firmware into the HP ROM file so I could just reflash, that would also be very much appreciated. Thanks for any help you guys can offer!

    Read the article

  • What's the best way to recover when your RAID H/W incorrectly thinks a disk is missing?

    - by Software Monkey
    I have a Windows 7 system with an MSI motherboard (running the latest AMD BIOS) and two of my four disks (not the system boot disk) configured via the mobo as RAID-1.

    After a normal system restart today, the RAID BIOS reports that one of the two drives has been disconnected or has failed. It hasn't really failed; via recovery tools I can verify that, if I take the BIOS out of RAID mode. But I can find no way to re-add the second hard disk to the array and rebuild via the BIOS - the only option seems to be to delete the array and recreate it, but I've done that once before and it blows away the disk. It has done this once before, however on a subsequent reboot after double-checking the drive cabling (but not changing anything) it booted up fine. So I think the mobo RAID is a little bit flaky.

    At this point I would like to remove the RAID drivers, change to AHCI mode and switch over to using a Windows 7 dynamic mirrored disk. But the RAID drivers seem somehow deeply bound into the Windows startup - I can't find anything like the good ol' safe mode in Windows 7. If I boot from the Win 7 install disk in AHCI mode I can use recovery tools to log in to the Windows 7 installation, so the boot drive seems fine with AHCI mode. Additionally, I can see all my other disks, run chkdsk on them and they seem to be fine. If I try to boot from the HDD in AHCI mode, it just reboots part way through, presumably because the RAID drivers load and conflict with the BIOS being set to AHCI.

    So:

    1. How do I strip the RAID drivers from my Win 7 installation?
    2. If I delete the RAID logical disk, will it really delete partitioning information, or is that just a poorly worded message when it says the data on the disk will be deleted?
    3. If I disconnect the 2 disks in the RAID array, then delete the logical disk array, and then reconnect and reboot still in RAID mode, will the disks simply revert to RAID single disks like my other 2, so that maybe I can leave Windows with the RAID drivers but operate the disks as singles, with 2 of them in a Windows dynamic-disk mirrored setup?
    4. Does Windows 7 have anything like the Windows XP repair install, where it will reinstall the O/S binaries from CD but leave apps and settings alone?

    I am really hoping I don't have to do a complete reinstall of Windows 7 - the last one, when I upgraded from XP, took me two days to get everything set up and installed.

    Read the article

  • What issues ensue from having multiple versions of Office installed?

    - by Michael Sorens
    My ultimate question is embodied in the title, but I thought it might be helpful to others if I detail what instigated my inquiry and my examination of the problem.

    To me, the first rule of software updates is primum non nocere - first, do no harm. So with my Windows 7 system containing both Office 2003 and Office 2010, I blithely proceeded to install this month's updates from Microsoft, containing updates for both versions of Office. While Microsoft officially does not recommend running multiple versions (see, for example, Running Multiple Versions of Microsoft Excel), it is possible; I have had two versions installed for a year or more and have never run into an issue before. One thing that is always mentioned is installation order, i.e. the one you want to open files by default should be installed last. I wanted 2010 as my default, so I had indeed installed 2003 first and then, years later, 2010.

    So with this round of Windows updates, either it installed patches to 2010 before 2003, knocking out the file association, or the 2003 patch was more comprehensive, in the sense of touching the file association while the 2010 one did not. In any case, after the updates, double-clicking a .xls file opened 2003 rather than 2010. Web searches indicated either:

    1. Use the file associations control panel to re-associate .xls files with the correct version of Excel. I looked at this first, but it showed what seemed to be an unversioned "Excel" associated with .xls files, so I did not check further. (This turned out to be an error on my part; more later.)
    2. Re-install the versions in the desired order; I find this unreasonable.
    3. Run the repair option of the Office installer on the desired version; still more work than one should need.
    4. Run Excel from the command line with "/regserver" on the one to be the default and "/unregserver" on the other. Good idea, but further searching indicated that neither 2007 nor 2010 supports "/regserver", contrary to some posts (e.g. Default Program With Multiple Versions Installed).

    Since this was a Windows Update issue and Microsoft provides free support for such, I inquired there as well, but succeeded only in getting the suggestion to uninstall all other versions, period - not acceptable to me. What worked for me was going back to the file associations control panel and manually selecting the Office 2010 version of Excel. While it appeared no different in the control panel, it did fix the double-click issue.

    So if all it takes is this simple fix after an update, I can live with that. What I am wondering is: has anyone seen any other problems related to having multiple versions of Office installed?
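
    For anyone hitting the same association flip, the command-line equivalent of the control-panel fix can be scripted with the cmd built-ins assoc and ftype. This is only a sketch from an elevated prompt - the ProgID for .xls is normally Excel.Sheet.8, but the Office 2010 install path below is an assumption (Office14, 32-bit Office on 64-bit Windows), so check yours first, and per-user "default program" choices can still override it.

        :: Inspect the current mapping
        assoc .xls
        ftype Excel.Sheet.8

        :: Point the ProgID at the Office 2010 binary (path is an assumption)
        ftype Excel.Sheet.8="C:\Program Files (x86)\Microsoft Office\Office14\EXCEL.EXE" "%1"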

    Read the article

  • Automating silent software deployments on Solaris 10

    - by datSilencer
    Hello everyone. Essentially, the question I'd like to ask is related to the automation of software package deployments on Solaris 10.

    Specifically, I have a set of software components in tar files that run as daemon processes after being extracted and configured in the host environment. Pretty much like any server-side software package out there, I need to ensure that a list of prerequisites is met before extracting and running the software. For example:

    - Checking that certain users exist, and that they are associated with one or many user groups. If not, create them and their group associations.
    - Checking that target application folders exist and, if not, creating them with preconfigured path values defined when the package was assembled.
    - Checking that such folders have the appropriate access control level and ownership for a certain user. If not, set them.
    - Checking that a set of environment variables are defined in /etc/profile, pointed at predefined path locations, added to the general $PATH environment variable, and finally exported into the user's environment. Other files include /etc/services and /etc/system.

    Obviously, doing this for many boxes (the goal in question) by hand can be slow and error-prone. I believe a better alternative is to somehow automate this process. So far I have thought about the following options, and discarded them for one reason or another:

    1. Traditional shell scripts. I've only troubleshooted these before, and I don't really have much experience with them. These would be my last resort.
    2. Python scripts using the pexpect library for analyzing system command output. This was my initial choice since the target Solaris environments have it installed. However, I want to make sure that I'm not reinventing the wheel again :P.
    3. Ant or Gradle scripts. They may be an option since the boxes also have Java 1.5 enabled, and the fileset abstractions can be very useful. However, they may fall short when dealing with user and folder permission checking/setting.

    It seems obvious to me that I'm not the first person in this situation, but I don't seem to find a utility framework geared towards this purpose. Please let me know if there's a better way to accomplish this. I thank you for your time and help.
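
    For scale, the prerequisite checks themselves reduce to a handful of idempotent commands, so a plain Bourne script may be less work than it sounds. The sketch below is illustrative only - user, group, path and variable names are placeholders, and it assumes the stock Solaris 10 getent/groupadd/useradd utilities:

        #!/bin/sh
        # Sketch of an idempotent prerequisite check (placeholder names throughout)
        APP_USER=appuser
        APP_GROUP=appgrp
        APP_HOME=/opt/myapp

        # 1. group and user
        getent group "$APP_GROUP" >/dev/null || groupadd "$APP_GROUP"
        id "$APP_USER" >/dev/null 2>&1 || useradd -g "$APP_GROUP" -d "$APP_HOME" -m "$APP_USER"

        # 2. target folders
        [ -d "$APP_HOME" ] || mkdir -p "$APP_HOME"

        # 3. ownership and permissions
        chown -R "$APP_USER":"$APP_GROUP" "$APP_HOME"
        chmod 750 "$APP_HOME"

        # 4. environment variables in /etc/profile (append once only)
        grep MYAPP_HOME /etc/profile >/dev/null || {
            echo "MYAPP_HOME=$APP_HOME"          >> /etc/profile
            echo "PATH=\$PATH:\$MYAPP_HOME/bin"  >> /etc/profile
            echo "export MYAPP_HOME PATH"        >> /etc/profile
        }

    Run over ssh in a loop (or under a configuration tool), this covers the user/folder/permission cases; /etc/services and /etc/system edits can follow the same grep-then-append pattern.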

    Read the article

  • Windows 7 Home hangs at "Welcome" screen

    - by White Phoenix
    I'm asking on behalf of a friend who's currently having problems with his machine: Windows 7 Home 32-bit. He's too far away for me to help by going over to his house - I'm helping him over the internet.

    This is his current machine: http://www.newegg.com/Product/Product.aspx?Item=N82E16883227134. The only two changes he made to it are swapping the graphics card for an EVGA GTX 460 and the PSU for a Corsair TX650.

    Here's what happened: he was playing a computer game (fairly CPU/GPU intensive) and had some music going in the background in foobar. Suddenly he noticed the music had stopped playing, so he switched to foobar to try to close it, but it froze up (the window wouldn't respond). He figured it was just foobar having a bad day and force-quit it. Then his game wouldn't respond, so he force-quit that; the entire computer went to crap at that point, so he hit the restart button on his machine.

    The computer POSTs fine, but now it gets stuck at the Windows "Welcome" screen (his account is set to auto-login). The HD activity light is solid yellow but he doesn't hear HDD activity. He tried booting into Safe Mode - it gets stuck at the Welcome screen. He tried a Startup Repair within Windows 7; it found a few problems, but it still gets stuck at Welcome. I advised him to boot off the DVD - sfc /scannow found nothing (we couldn't use the plain /scannow option; it says there's a repair pending, so we had to use the offbootdir/offwindir switches). We ran Startup Repair 3 times - it found nothing. My friend runs virus/malware scans on a regular basis, so he's fairly sure it's not that either.

    Right now I'm having my friend run chkdsk /R on the computer while in this startup-recovery mode - so far it has caught a few bad sectors. However, at this point I'm kind of wondering which way to go if chkdsk doesn't fix it. A quick Google search says someone had success by booting Windows with boot logging on; others have success with running the aforementioned chkdsk, etc. The fact that Windows cannot even boot into Safe Mode concerns me.

    While we're waiting for chkdsk /R to finish, are there any other options I can give my friend short of reinstalling Windows 7? He has his data on a separate partition, so that's not a major problem (though it'll be an annoyance for him). I suspect his hard drive may be having some issues, but my main concern is getting him back up and running before we start diagnosing the hard drive (I may have him run some sort of SMART test utility later).
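
    For reference, the offline commands we've been using from the install DVD's command prompt look roughly like the following. This is a sketch only - inside WinRE the installed Windows often shows up as D: rather than C:, so check with dir first - and the last line just enables the boot logging mentioned above (it writes C:\Windows\ntbtlog.txt on the next boot attempt).

        chkdsk C: /R
        sfc /scannow /offbootdir=C:\ /offwindir=C:\Windows
        bcdedit /set {default} bootlog Yes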

    Read the article

  • Why does writing a file to an NFS share send a COMMIT operation to the NFS server?

    - by Antonis Christofides
    I have a Debian squeeze box (2.6.32-5-amd64) which is at the same time an NFS4 server and client (it mounts itself through NFS4). The local directory that leads directly to disk is /nfs4exports/mydir, whereas /nfs4mounts/mydir is the same thing mounted through NFS, using the machine's external IP address. Here is the line from fstab:

        192.168.1.75:/mydir  /nfs4mounts/mydir  nfs4  soft  0  0

    I have an application that writes many small files. If I write directly to /nfs4exports/mydir, it writes thousands of files per second; but if I write to /nfs4mounts/mydir, it writes 4 files per second or so. I can greatly increase the speed if I add async to /etc/exports. (Writing a single large file to the NFS-mounted directory goes at more than 100 MB/s.)

    I examined the server statistics and I see that whenever a file is written, it is "committed" (this also happens with NFSv3):

        root@debianvboxtest:~# mount -t nfs4 192.168.1.75:/mydir /mnt
        root@debianvboxtest:~# nfsstat|grep -A 2 'nfs v4 operations'
        Server nfs v4 operations:
        op0-unused   op1-unused op2-future  access      close       commit
        0         0% 0        0% 0        0% 10       4% 1        0% 1        0%

        root@debianvboxtest:~# echo 'hello' >/mnt/test1056
        root@debianvboxtest:~# nfsstat|grep -A 2 'nfs v4 operations'
        Server nfs v4 operations:
        op0-unused   op1-unused op2-future  access      close       commit
        0         0% 0        0% 0        0% 11       4% 2        0% 2        0%

    Now in the RFC, I read this:

        The COMMIT operation is similar in operation and semantics to the POSIX fsync(2)
        system call that synchronizes a file's state with the disk (file data and metadata
        is flushed to disk or stable storage). COMMIT performs the same operation for a
        client, flushing any unsynchronized data and metadata on the server to the server's
        disk or stable storage for the specified file.

    I don't understand why the client commits. I don't think that the "echo" shell built-in runs fsync; if echo wrote to a local file and then the machine went down, the file might be lost. In contrast, the NFS client appears to be sending a COMMIT upon completion of the echo. Why?

    I am reluctant to use the async NFS server option, because it would apparently ignore COMMIT. I feel as if I had a local filesystem and I had to choose between syncing every file upon close and ignoring fsync altogether. What have I understood wrong?
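
    To rule out the application side, a system-call trace of the same write on the client shows whether anything in userspace asks for durability; the COMMIT would then have to come from the kernel NFS client itself (close-to-open consistency makes it flush dirty pages on close, and an unstable WRITE followed by COMMIT is how that flush is made durable on the server). A rough check, assuming strace is installed:

        # No fsync/fdatasync should appear -- the flush happens in the NFS client on close()
        strace -f -e trace=open,write,fsync,fdatasync,close \
            sh -c 'echo hello > /mnt/test1057'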

    Read the article

  • XCP Project Kronos syslog error: "irq ... : nobody cared" on Dom0 host

    - by Vlad Fedin
    One of our production clusters driven by XCP suddenly went unresponsive. After a restart and some investigation we found entries like this in the dom0 machine's syslog:

        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659040] irq 339: nobody cared (try booting with the "irqpoll" option)
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659058] Pid: 0, comm: swapper/3 Tainted: G         C O 3.2.0-24-generic #37-Ubuntu
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659060] Call Trace:
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659062]  <IRQ>  [<ffffffff810db37d>] __report_bad_irq+0x3d/0xe0
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659071]  [<ffffffff810db605>] note_interrupt+0x135/0x190
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659074]  [<ffffffff810d8e69>] handle_irq_event_percpu+0xa9/0x220
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659078]  [<ffffffff8130ff3b>] ? radix_tree_lookup+0xb/0x10
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659081]  [<ffffffff810d9031>] handle_irq_event+0x51/0x80
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659084]  [<ffffffff810dc187>] handle_edge_irq+0x87/0x140
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659089]  [<ffffffff813a8829>] __xen_evtchn_do_upcall+0x199/0x250
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659092]  [<ffffffff813aa96f>] xen_evtchn_do_upcall+0x2f/0x50
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659096]  [<ffffffff81666d3e>] xen_do_hypervisor_callback+0x1e/0x30
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659097]  <EOI>  [<ffffffff810013aa>] ? hypercall_page+0x3aa/0x1000
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659104]  [<ffffffff810013aa>] ? hypercall_page+0x3aa/0x1000
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659107]  [<ffffffff8100a1d0>] ? xen_safe_halt+0x10/0x20
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659110]  [<fff

    IRQ 339 in cat /proc/interrupts:

        339: ... xen-pirq-msi-x eth0

    where eth0 is the hardware NIC. While the host machine seems to hang, the guest machines continue to work, so our tiny internal monitoring on one of the virtual hosts logged something like this:

        [2012-10-26 20:31:51] [OK......] 200 OK : 113159149 ns
        [2012-10-26 20:32:40] [DISASTER] 500 Can't connect to [hostname]:80 (No route to host) : 47763284432 ns
        ...
        [2012-10-26 20:34:40] [DISASTER] 500 Can't connect to [hostname]:80 (No route to host) : 46894835070 ns
        [2012-10-26 20:34:57] [DISASTER] 500 Can't connect to [hostname]:80 (Bad hostname) : 16821741955 ns
        ...
        [2012-10-26 20:38:18] [DISASTER] 500 Can't connect to [hostname]:80 (Bad hostname) : 20103298289 ns
        [2012-10-26 20:38:37] [DISASTER] 500 Can't connect to [hostname]:80 (Bad hostname) : 17895754943 ns

    Host and guest OS: Ubuntu 12.04 LTS. The NIC:

        05:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
            Subsystem: ASUSTeK Computer Inc. Device 8369
            Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
            Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx+
            Latency: 0, Cache Line Size: 64 bytes
            Interrupt: pin A routed to IRQ 17
            Region 0: Memory at fe500000 (32-bit, non-prefetchable) [size=128K]
            Region 2: I/O ports at e000 [size=32]
            Region 3: Memory at fe520000 (32-bit, non-prefetchable) [size=16K]
            Capabilities: <access denied>
            Kernel driver in use: e1000e
            Kernel modules: e1000e

    Any hints on how to debug this?
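
    As a first diagnostic step, the kernel's own suggestion (irqpoll) can be added to the dom0 kernel command line; it is a workaround rather than a fix, but it helps establish whether a lost or misrouted MSI-X interrupt on the e1000e NIC is really what wedges the host. A sketch for an Ubuntu 12.04 dom0 using GRUB 2 - review the generated entry afterwards, since on a Xen boot entry the dom0 kernel parameters sit on the module line, not the hypervisor line:

        # Append irqpoll to the dom0 kernel parameters, then regenerate grub.cfg
        sudo sed -i 's/^GRUB_CMDLINE_LINUX="/GRUB_CMDLINE_LINUX="irqpoll /' /etc/default/grub
        sudo update-grub
        grep irqpoll /boot/grub/grub.cfg    # confirm it ended up on the right line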

    Read the article

  • Windows Server 2008 - one MAC Address, assign multiple external IP's to VirtualBoxes running as guests on host

    - by Sise
    Couldn't find any help on Google or here. The scenario:

    - Windows Server 2008 Std x64 on an i7-975, 12 GB RAM. The server is running in a data centre.
    - One hardware NIC - Realtek PCIe GBE - one MAC address.
    - The data centre provides us 4 static external IPs. The first is assigned to the host by default, of course. I have ordered all 4 IPs; the data centre can assign the available IPs to the physical MAC address of the given NIC only. This means one NIC, one MAC address, 4 IPs. Everything works fine so far.

    Now, what I would like to have: VirtualBox installed with 1-3 guests running, each getting its own external IP assigned. Each of them should be a standalone Windows Server 2008. It looks like the easiest way would be to put the guests into a virtual subnet and route all data coming to the 2nd through 4th external IPs through to those guests, using their subnet IPs.

    I have been through the VirtualBox user manual regarding networking. What's not working:

    - I can't use bridged networking on its own, because the IPs are assigned to the one MAC address only.
    - I can't use NAT networking, because it does not allow access from outside or from the host to the guest. I do not want to use port forwarding.
    - Host-only networking by itself would not allow internet access; by sharing the host's default internet connection, internet access is granted from the guest to the outside, but not from outside or the host to the guest.
    - Internal networking is not really an option here.

    What I have tried is to create an additional MS Loopback adapter for a routed subnet where the VBox guests live, the idea being to NAT the internet connection to the loopback "subnet". But I can't ping the gateway from the guests. Using the route command in the command shell or RRAS (static route, NAT) I didn't get there either. Solutions like the following work for the one way, but not for the way back:

        For your situation, it might be best to use the Host-Only adapter for ICS. Go to the
        preferences of VB itself and select network. There you can change the configuration
        for the interface. Set the IP address to 192.168.0.1, netmask 255.255.255.0. Disable
        the DHCP server if it isn't already and that's it. Now the Guest should get an IP from
        Windows itself and be able to get onto the internet, while you can also access the Host.

    Slowly I'm getting pretty stuck on this topic. There is a possibility I've just overlooked something or just haven't got it right by trial and error, especially using RRAS, but it's kind of hard to find useful howtos on the web. Thanks in advance!

    Best regards,
    Simon

    Read the article

  • My network drive disappears from Mac OS Finder

    - by Mariusz
    I recently bought a Netgear WNDR3800 router to use in my home network, but the same day I installed it I noticed strange behaviour from Finder and iTunes. Let me explain further.

    There is a Synology DS111 NAS attached to that router and two Macs with Mac OS X Lion; one of them is connected by cable and the other wirelessly. Before I changed to the new router, Finder always displayed my NAS in its sidebar, so I could just click its network name to access the shared folders on it. But after I installed the WNDR3800, I can no longer access the NAS that way. It is no longer displayed. I always have to mount it manually by typing its IP address using Finder's 'Connect to Server' option.

    The same NAS supports Time Machine backups and has a built-in DLNA server, and it's the same situation there: I can't perform a backup because the NAS is no longer visible in Time Machine preferences, and iTunes does not display it as a multimedia server either, even though it did before I installed this router.

    What's important is that everything works fine for a couple of minutes after I restart the router or the NAS. Even when I change the NAS's IP address it becomes accessible again in Finder, Time Machine and iTunes, but only for some time. Both Macs behave the same way, and all of these issues have been happening since I installed the new router. Before that, everything worked fine; my old router was a Netgear WGR614v10.

    Would you be so kind as to tell me what you think could be the reason for this behaviour? What settings on the router should I look at more closely? I'm not a network specialist, but is it possible that some network packets are being blocked for some reason? I would be grateful for any clues you give me. Thank you.

    Read the article

  • NFS4 / ZFS: revert ACL to clean/inherited state

    - by Keiichi
    My problem is identical to this Windows question, but pertains to NFS4 (Linux) and the underlying ZFS (OpenIndiana) we are using.

    We have this ZFS shared via NFS4 and CIFS for Linux and Windows users respectively. It would be nice for both user groups to benefit from ACLs, but the one missing puzzle piece goes thusly: each user has a home, where he sets a top-level, inherited ACL. He can later refine permissions for the contained files/folders iteratively. Over time, permissions sometimes need to be generalized again to avoid increasing pollution of ACL entries. You can tweak the ACL of every single file if need be to obtain the wanted permissions, but that defeats the purpose of inherited ACLs.

    So, how can an ACL be completely cleared like in the question linked above? I have found nothing about what a blank, inherited ACL should look like. This use case simply does not seem to exist. In fact, the Solaris chmod manpage clearly states:

        A-    Removes all ACEs for current ACL on file and replaces current ACL with new
              ACL that represents only the current mode of the file.

    I.e. we get three new ACL entries filled with stuff representing the permission bits, which is rather useless for cleaning up. If I try to manually remove every ACE, on the last one I get:

        chmod A0- <file>
        chmod: ERROR: Can't remove all ACL entries from a file

    Which, by the way, makes me think: why not? In fact, I really want the whole file-specific ACL gone. The same holds for Linux, which enumerates ACEs starting with 1(!), and verbalizes its woes less diligently:

        nfs4_setacl -x 1 <file>
        Failed setxattr operation: Unknown error 524

    So, what is the idea behind ACLs under Solaris/NFS? Can they never be cleaned up? Why does the recursion option of the ACL-setting commands pollute all children instead of setting a single ACL and making the children inherit? Is this really the intention of the designers? I can clean up the ACLs using a Windows client perfectly well, but am I supposed to tell the Linux users they have to switch OS just to consolidate permissions?
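
    Part of the answer usually lives one level down, in the ZFS dataset properties that decide how chmod and inheritance interact. The sketch below only shows how to inspect and adjust them (pool/fs is a placeholder, and aclmode is only available on builds that carry the illumos re-addition of that property); with both set to passthrough, a plain chmod no longer rewrites the whole ACL into mode-bit ACEs, so a freshly inherited ACL is less likely to get polluted in the first place:

        # Inspect the current behaviour
        zfs get aclinherit,aclmode pool/fs

        # Keep explicit and inherited ACEs intact across chmod / create operations
        zfs set aclinherit=passthrough pool/fs
        zfs set aclmode=passthrough pool/fs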

    Read the article

  • SQL Server database filled the hard drive and freeing up space isn't possible

    - by Jon
    I have a database in SQL Server 2008 on a 1 TB hard drive, and it has filled the drive; there is only 4 KB free. The MDF file is 323 GB and the LDF is 653 GB. The disk this DB is on has no other files on it besides the MDF and LDF, so it's impossible to free up any space on that drive. The main hard disk is smaller, but there is enough room on it to hold the MDF, in case that helps.

    This server is overseas at a customer site and it's not possible at the moment to add more disk space to the server. It's also not possible to delete any records, because the DB is in a failed state (due to no disk space) and it doesn't respond to most commands. The DB is currently in full recovery mode, which is why the LDF file is so large. This DB really doesn't need to be in full recovery, so going forward we plan on switching it to simple mode, which will save us a lot of space. I also don't care about losing the LDF file, but I need all of the data.

    I've spent a lot of time looking for a way out of this problem, but everything I've found involves either freeing up disk space or adding more disk space, neither of which is an option at this time. I'm stuck and any help would be greatly appreciated.

    I get the following log when trying to bring the DB online:

        Msg 945, Level 14, State 2, Line 3
        Database 'DBNAME' cannot be opened due to inaccessible files or insufficient memory or
        disk space. See the SQL Server errorlog for details.
        Msg 5069, Level 16, State 1, Line 3
        ALTER DATABASE statement failed.
        Msg 1101, Level 17, State 12, Line 3
        Could not allocate a new page for database 'DBNAME' because of insufficient disk space
        in filegroup 'DEFAULT'. Create the necessary space by dropping objects in the filegroup,
        adding additional files to the filegroup, or setting autogrowth on for existing files
        in the filegroup.

    I've found the following solutions, but none of them work, because there is no disk space on that drive and, since the DB is in a failed state, I can't run most commands:

    - DBCC SHRINKFILE - can't be run, because doing a 'use DBNAME' fails.
    - Detaching the DB and then changing the location of the MDF/LDF files - this fails because the DB is in an offline mode, so you can't run detach.

    I'm at a loss about what else to try. Thanks.
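
    Since there is room for the 323 GB MDF on the system drive, one avenue worth trying is relocating the data file at the metadata level rather than detaching. This is only a sketch - the logical file names and target path are placeholders (look them up in sys.master_files first), and whether SET OFFLINE succeeds depends on how far gone the database already is:

        -- Look up the logical names and current paths
        SELECT name, physical_name FROM sys.master_files WHERE database_id = DB_ID('DBNAME');

        ALTER DATABASE DBNAME SET OFFLINE WITH ROLLBACK IMMEDIATE;
        -- copy the .mdf to the new drive at the OS level, then:
        ALTER DATABASE DBNAME MODIFY FILE (NAME = DBNAME_Data, FILENAME = 'C:\SQLData\DBNAME.mdf');
        ALTER DATABASE DBNAME SET ONLINE;

        -- With 323 GB now free on the full drive, drop the recovery model and shrink the log
        ALTER DATABASE DBNAME SET RECOVERY SIMPLE;
        DBCC SHRINKFILE (DBNAME_Log, 1024);   -- target size in MB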

    Read the article

  • How do I install an intermediate certificate?

    - by getmizanur
    I have installed a private key (PEM-encoded) and a public key certificate (PEM-encoded) on an Amazon load balancer. However, when I check the SSL with a site test tool (http://www.networking4all.com/en/support/tools/site+check/), I get the following error:

        Error while checking the SSL Certificate!!
        Unable to get the local issuer of the certificate. The issuer of a locally looked up
        certificate could not be found. Normally this indicates that not all intermediate
        certificates are installed on the server.

    I converted the crt file to PEM using these commands from a tutorial:

        openssl x509 -in input.crt -out input.der -outform DER
        openssl x509 -in input.der -inform DER -out output.pem -outform PEM

    During setup of the Amazon load balancer, the only option I left out was the certificate chain (PEM encoded); however, this was optional. Could this be the cause of my issue? And if so, how do I create the certificate chain? For the last question I have tried googling, but I'm getting more confused than before. Please help. Many thanks in advance.

    UPDATE

    Thanks all for the helpful advice. If you make a request to VeriSign, they will give you a certificate chain; however, this chain includes the public crt, the intermediate crt and the root crt. Make sure to remove the public crt from your certificate chain (it is the topmost certificate) before adding it to the certificate chain box of your Amazon load balancer.

    If you are making HTTPS requests from an Android app, the above instructions may not work for older Android OSes such as 2.1 and 2.2. To make it work on older versions, see [https://knowledge.verisign.com/support/ssl-certificates-support/index?page=content&id=AR657&actp=LIST&viewlocale=en_US]. On that page, click on the "retail ssl" tab and then on "secure site" "CA Bundle for Apache Server". Copy and paste these intermediate certs into the certificate chain box. Just in case you can't find it, here is the direct link: [https://knowledge.verisign.com/support/ssl-certificates-support/index?page=content&id=AR1409]

    If you are using GeoTrust certificates, the solution is much the same for Android devices; however, you need to copy and paste their intermediate certs for Android.

    PS: sorry for the long URLs, but "new users can only post a maximum of two hyperlinks".
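
    As a sanity check before pasting anything into the ELB's certificate chain box, the chain can be assembled and verified locally with openssl. This is a sketch with placeholder file names - the chain should contain the intermediate (and optionally the root), ordered issuer of the leaf first, and must not include your own public certificate:

        # Build the chain: intermediate first, root last
        cat intermediate.pem root.pem > chain.pem

        # Verify that the chain actually signs your certificate
        openssl verify -CAfile chain.pem output.pem      # expect: output.pem: OK

        # After updating the load balancer, inspect what it really serves
        openssl s_client -connect www.example.com:443 -showcerts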

    Read the article

  • How to create NTFS partition in Linux to install Windows 7 from USB?

    - by Michal Stefanow
    I messed up my computer and need help. The goal: install Windows 7 from USB. The problem: "Setup was unable to create a new system partition".

    When the first attempt to install Windows 7 failed, I tried a Linux live USB, installed the distro to the HDD, and erased all the existing partitions. Current state (fdisk -l, writing from another computer so no copy and paste):

        /dev/sda1   305GB   Linux
        /dev/sda2     7GB   Extended
        /dev/sda5     7GB   Linux swap / Solaris

    To create a new NTFS partition:

        fdisk /dev/sda
        n (for new)
        p (for primary)
        3 (for partition number)
        "No free sectors available"

    The whole HDD was formatted a couple of minutes before, so there is a lot of free space, but how do I resize a partition? I cannot find an option for resizing in man fdisk. Some people say I should use gparted, but my distro doesn't contain that package, and it doesn't support my wireless drivers, so I have serious problems downloading anything. I also tried cfdisk, but any command results in:

        cfdisk: bad primary partition 1: partition ends in the final partial cylinder

    I also tried removing partition 1 and then creating a new one (so there is no "no free sectors"). I received a warning:

        Re-reading the partition table failed with error 16: Device or resource busy.
        The kernel still uses the old table. The new table will be used at the next reboot.

    After restarting: "grub rescue, no known filesystem". That may indicate that some changes have been made, BUT when running the Windows 7 installer there is another error: "Windows cannot be installed to Disk 0 Partition 1". In more detail: "Windows cannot be installed to this hard disk space. Windows must be installed to a partition formatted as NTFS."

    So I formatted the drive using the Windows 7 installer, BUT this time yet another error: "Setup was unable to create a new system partition or locate an existing system partition. See the setup log files for more information." Apparently I cannot access the logs (how?), and I am back to the drawing board with my live USB (this time showing the partition as HPFS/NTFS).

    Any suggestions on how to install Windows 7? Should I reinstall Linux to the HDD, erase the existing partitions once again, and use parted rather than gparted (parted is included in the distro)? Or maybe I should create another bootable USB such as Parted Magic to painlessly create partitions? I just want to install Windows 7 from USB; my laptop is semi-operational and I am ready to receive some help regarding fdisk and creating NTFS partitions.

    UPDATE: I did as suggested (removed all the partitions) and tried to install into unallocated space. I tried to create a new partition and format it. Same error: "Setup was unable to create a new system partition". I have come to the conclusion it may have something to do with TrueCrypt, which I recently installed. Right now I'm trying to fix the MBR (as I have no way to create a rescue disc without an optical drive).
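
    Since parted is already in the distro, wiping the table and creating a single NTFS partition from the live USB can be done as below. This is a destructive sketch - double-check that /dev/sda really is the target disk, and mkfs.ntfs needs the ntfsprogs/ntfs-3g package (the Windows installer can also format the partition itself if mkfs.ntfs is unavailable):

        sudo parted /dev/sda mklabel msdos
        sudo parted -a optimal /dev/sda mkpart primary ntfs 1MiB 100%
        sudo mkfs.ntfs -f -L Windows /dev/sda1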

    Read the article

  • Synchronize the same set of files to 2 different locations with 2 different programs for 2 different purposes

    - by Hedgetrimmer
    Because of questionable IT policies at my not-to-be-named place of occupation, I have been (and will be, for the foreseeable future) carrying on an external hard drive a unison-synchronized copy of all of my documents and code, including code which resides in some of my "dotfiles" and other code which resides in ~/bin (things I've made are there because ~/bin is in my $PATH), along with some cruft generated (and to be generated) by conscript and its related "giter8" templating system for Scala project boilerplates. Despite this, I do use a symlinking program to store all of my important dotfiles in a subdirectory. Thanks to that somewhat complicated setup, I have resorted to making a directory full of symlinks to every directory (or file, as is the case with stuff under ~/bin) that I want synchronized, and then "follow = True" is in my unison profile.

    It happens that this collection of odds and ends - plus an automatically generated text file containing every package installed on my system - is everything under ~ that needs to be backed up to a remote (rsync-over-ssh) host with client-side encryption and signing from GPG. I already believe that duplicity is the most appropriate program to do that. What isn't as clear-cut is how to make duplicity use the exact same set of files when it runs a backup; it would be simple if duplicity would follow symlinks, but it does not, and the manpage lists no option for enabling any such behavior. Comparing unison's file-selection algorithm to duplicity's, I don't think I can write a program that could compute a ruleset for one program given one for the other.

    For the record, I would rather not keep the symlinks manually synchronized with duplicity file-selection rules, as they can change thanks to the above-mentioned complications regarding ~/bin. I don't think running duplicity on the external hard disk is such a good idea either; I usually keep that hard disk unmounted and unplugged in case of a power failure or other physical problem with the computer, plus I'm not sure about duplicity's performance given that:

    - the hard disk is NTFS-formatted in order to be usable at my Windows-imprisoned place of occupation;
    - despite being a USB 3.0 disk, my computer has no USB 3.0 ports, so it acts as a USB 2.0 disk.

    How can I have duplicity (or is there a better program that I have overlooked?) back up the exact same set of files that is bidirectionally synchronized with my external hard disk?
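
    One workaround that sidesteps the symlink problem entirely is a small staging step: let rsync dereference the same symlink directory unison uses, then point duplicity at the materialized copy. A sketch only - the staging path, remote URL and GPG key ID are placeholders, and the extra local copy costs disk space proportional to the selection:

        #!/bin/bash
        # Materialize the symlinked selection, then back up the staging copy
        STAGE="$HOME/.backup-stage"
        rsync -a --copy-links --delete "$HOME/sync-links/" "$STAGE/"

        duplicity --encrypt-key ABCD1234 --sign-key ABCD1234 \
            "$STAGE" sftp://user@backuphost//backups/home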

    Read the article

  • Ubuntu and Postfix Configuration Issues

    - by Obi Hill
    I recently installed Postfix on Ubuntu Natty and I'm having a problem with the configuration. First, here is my Postfix configuration file:

        # Debian specific: Specifying a file name will cause the first
        # line of that file to be used as the name. The Debian default
        # is /etc/mailname.
        myorigin = /etc/mailname
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no
        # appending .domain is the MUA's job.
        append_dot_mydomain = no
        # Uncomment the next line to generate "delayed mail" warnings
        delay_warning_time = 4h
        readme_directory = no
        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.
        mydomain = $myorigin
        myhostname = mail.nairanode.com
        alias_maps = hash:/etc/postfix/aliases
        alias_database = hash:/etc/postfix/aliases
        # this specifies where the virtual mailbox folders will be located
        virtual_mailbox_base = /var/spool/mail/virtual
        # this specifies where the virtual mailbox folders will be located
        virtual_mailbox_base = /var/spool/mail/virtual
        # this is for the mailbox location for each user
        virtual_mailbox_maps = mysql:/etc/postfix/mysql_mailbox.cf
        # and this is for aliases
        virtual_alias_maps = mysql:/etc/postfix/mysql_alias.cf
        # and this is for domain lookups
        virtual_mailbox_domains = mysql:/etc/postfix/mysql_domains.cf
        # this is how to connect to the domains (all virtual, but the option is there)
        # not used yet
        # transport_maps = mysql:/etc/postfix/mysql_transport.cf
        virtual_uid_maps = static:5000
        virtual_gid_maps = static:5000
        mydestination = $myorigin, $myhostname, localhost.localdomain, , localhost
        relayhost =
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        #mynetworks_style = host
        # ADDITIONAL
        unknown_local_recipient_reject_code = 550
        maximal_queue_lifetime = 7d
        minimal_backoff_time = 1000s
        maximal_backoff_time = 8000s
        smtp_helo_timeout = 60s
        smtpd_recipient_limit = 16
        smtpd_soft_error_limit = 3
        smtpd_hard_error_limit = 12
        # Requirements for the HELO statement
        smtpd_helo_restrictions = permit_mynetworks, warn_if_reject reject_non_fqdn_hostname, reject_invalid_hostname, permit
        # Requirements for the sender details
        smtpd_sender_restrictions = permit_mynetworks, warn_if_reject reject_non_fqdn_sender, reject_unknown_sender_domain, reject_unauth_$
        # Requirements for the connecting server
        smtpd_client_restrictions = reject_rbl_client sbl.spamhaus.org, reject_rbl_client blackholes.easynet.nl, reject_rbl_client dnsbl.n$
        # Requirement for the recipient address
        smtpd_recipient_restrictions = reject_unauth_pipelining, permit_mynetworks, reject_non_fqdn_recipient, reject_unknown_recipient_do$
        # require proper helo at connections
        smtpd_helo_required = yes
        # waste spammers time before rejecting them
        smtpd_delay_reject = yes
        disable_vrfy_command = yes

    Here is also my /etc/postfix/aliases:

        # See man 5 aliases for format
        postmaster: root

    Here is also my /etc/mailname:

        nairanode.com

    I've also updated my hostname to nairanode.com. However, when I run postalias /etc/postfix/aliases I get the following:

        postalias: warning: valid_hostname: invalid character 47(decimal): /etc/mailname
        postalias: fatal: file /etc/postfix/main.cf: parameter mydomain: bad parameter value: /etc/mailname

    Is there something I'm doing wrong?! I noticed that when I replace myorigin = /etc/mailname with myorigin = nairanode.com in my postfix config, I don't see any errors anymore after calling postalias. Is this a bug or something?!
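
    A likely explanation, with a hedged workaround sketch (not confirmed against this exact setup): mydomain = $myorigin expands to the literal string "/etc/mailname", because the file-reading behaviour applies to myorigin itself, not to parameters that merely reference it; the "/" is ASCII character 47, which is exactly what postalias rejects. Setting the values explicitly sidesteps the expansion:

        # Hypothetical fix sketch; the domain value is taken from the post itself.
        sudo postconf -e 'myorigin = nairanode.com'     # or keep myorigin = /etc/mailname here...
        sudo postconf -e 'mydomain = nairanode.com'     # ...but give mydomain a literal domain, not $myorigin
        sudo postalias /etc/postfix/aliases             # rebuild the alias database
        sudo service postfix reload                     # pick up the new settings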

    Read the article

  • Spotlight Infinite Indexing issue (external data drive)

    - by Manca Weeks
    This is an external drive, formerly a boot drive, which is now in use only to access music files (Sibelius, audio, MIDI, Live, Logic, etc.) without transferring the data into a new boot system, partly because of the issue I am about to describe, but mostly because the majority of the data is there for archival purposes. The user is a composer and prominent musician and needs to be able to rehash the data at will. I have tried several things; here is the list:
    - make a complete filesystem clone with Antonio Diaz's ddrescue
    - run Disk Warrior on the copy and repair whatever errors occurred
    - wipe out all ACLs on the entire drive
    - set all permissions to the same value: wide-open 777
    - remove any system data (applications, system files, including hidden files to the best of my knowledge) by selecting only non-system/app data and using Carbon Copy Cloner to put only the data of interest onto a newly formatted drive
    - transfer the data to a newly formatted drive folder by folder, resetting the Spotlight index in between adding each one to watch for issues (interestingly, no issues occurred except with the Documents folder; yet when I transferred only the Documents folder to a newly formatted drive on its own, there was no trouble, so it appears it may not be the content but the quantity or a specific combination of data that causes the problem)
    - use Data Rescue to transfer the data to yet another newly formatted drive to expose any missed hidden files

    Between each of the above steps I stopped Spotlight (searched for anything beginning with md in Activity Monitor - All Processes and quit it) and deleted the .Spotlight-V100 directory from the affected drive, then restarted Spotlight indexing by adding the drive to the Spotlight privacy list and removing it again.

    In each case the same issue occurs: Spotlight begins indexing normally (or so it seems), then the estimated time increases, usually to 4 hours remaining. This is where it gets stuck; it continues to predict 4 hours remaining but never finishes. Sometimes I can't eject the drive and have to quit the md processes from Activity Monitor to be able to eject it without Force Eject. Once I disconnect the drive after the "4 hours remaining" situation, if I reattach it, Spotlight forever estimates the remaining time and never gets going again.

    So there it is. It is apparently not a filesystem issue, not a permissions issue, and not tied to any particular piece of hardware or protocol (I used both USB and FireWire drives). I have tried this on several machines (3, to be precise) and under 10.5.8 and 10.6.5. Simply disabling Spotlight on this volume is not an option because the owner has no clue where things are, as the data on the volume dates back to music projects and compositions from 2003 and before. He needs to be able to query for results. Anyone got any ideas? Thanks, M
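
    If it helps to take the privacy-list trick out of the loop, here is a minimal Terminal sketch of forcing a clean re-index (the volume name is a placeholder, not the real one):

        # Hypothetical re-index sketch; "/Volumes/MusicArchive" stands in for the affected volume.
        sudo mdutil -i off /Volumes/MusicArchive                # stop indexing on the volume
        sudo rm -rf /Volumes/MusicArchive/.Spotlight-V100       # remove the stale index store
        sudo mdutil -E /Volumes/MusicArchive                    # erase any remaining index metadata
        sudo mdutil -i on /Volumes/MusicArchive                 # turn indexing back on
        sudo mdutil -s /Volumes/MusicArchive                    # check indexing status afterwards

    Watching the mds and mdworker processes in Activity Monitor or Console while this runs can at least show which files the indexer is chewing on when the "4 hours remaining" stall begins.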

    Read the article

< Previous Page | 641 642 643 644 645 646 647 648 649 650 651 652  | Next Page >