Search Results

Search found 26810 results on 1073 pages for 'fixed point'.

  • How do I enable additional debugging output from Ansible and Vagrant?

    - by Brian Lyttle
    I'm investigating Ansible for server and application provisioning. My application is currently provisioned with shell scripts in Vagrant. Rather than rewrite my scripts, I've taken a sample playbook and attempted to deploy it. It appears to deploy fine, but I'm seeing a failure message after what looks like a series of successful steps:

        » vagrant provision
        [default] Running provisioner: ansible...

        PLAY [web-servers] ************************************************************

        GATHERING FACTS ***************************************************************
        ok: [192.168.9.149]

        TASK: [install python-software-properties] ************************************
        ok: [192.168.9.149] => {"changed": false, "item": ""}

        TASK: [add nginx ppa if it ubuntu 10.04 and up] *******************************
        ok: [192.168.9.149] => {"changed": false, "item": "", "repo": "ppa:nginx/stable", "state": "present"}

        TASK: [update apt repo] *******************************************************
        ok: [192.168.9.149] => {"changed": false, "item": ""}

        TASK: [install nginx] *********************************************************
        ok: [192.168.9.149] => {"changed": false, "item": ""}

        TASK: [copy fixed init for nginx] *********************************************
        ok: [192.168.9.149] => {"changed": false, "gid": 0, "group": "root", "item": "", "mode": "0755", "owner": "root", "path": "/etc/init.d/nginx", "size": 2321, "state": "file", "uid": 0}

        TASK: [service nginx] *********************************************************
        ok: [192.168.9.149] => {"changed": false, "item": "", "name": "nginx", "state": "started"}

        TASK: [write nginx.conf] ******************************************************
        ok: [192.168.9.149] => {"changed": false, "gid": 0, "group": "root", "item": "", "mode": "0644", "owner": "root", "path": "/etc/nginx/nginx.conf", "size": 1067, "state": "file", "uid": 0}

        PLAY RECAP ********************************************************************
        192.168.9.149              : ok=8    changed=0    unreachable=0    failed=0

        Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.

    How do I go about getting additional debug information? I've already added ansible.verbose = true to my Vagrant config, which results in the dictionaries being displayed in the output above.
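
    One knob that usually yields more detail is raising the provisioner's verbosity from a boolean to an explicit level. A minimal Vagrantfile sketch (the playbook path is assumed for illustration): the verbose option accepts the same "v" through "vvvv" levels as ansible-playbook itself, and "vvvv" adds connection-level debugging, far noisier than verbose = true.

        # Vagrantfile fragment -- a sketch, not the poster's actual file
        config.vm.provision "ansible" do |ansible|
          ansible.playbook = "provisioning/playbook.yml"  # assumed path
          ansible.verbose  = "vvvv"                       # instead of true
        end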

  • Chrome pop-up blocker fail?

    - by Count Zero
    I've had this problem for just a few days. I've been a heavy Chrome user for several years, but it never happened before: sometimes absolutely uncalled-for pop-ups appear when I click something absolutely legit. It happens at a fairly precise rate of two pop-ups every hour or so, and the pop-ups are not very varied - it seems to be a fixed set of some 4-5 ads. At first I thought I'd caught some malware, but a full scan with Malwarebytes Anti-Malware found absolutely nothing. In addition, the problem exists only in Chrome; when I use Firefox, this never ever happens. What is even more puzzling is that it happens on both of my rigs and started on the same day. I have removable storage that I dock to both of them, and I access one machine from the other via RDP, but still... It seems this is a problem that others face too - see this Google forum and here. Could it be that the latest Chrome build is just acting up? (Just in case it matters: I run Chrome version 24.0.1312.57 m on Windows 7.)

  • Why does Facebook Chat over XMPP fail to authorize in Pidgin Portable?

    - by Sara Neff
    I heard you can use Facebook chat on desktops now. That's awesome! What I didn't hear is that it is a pain in the butt! Not awesome! I've followed six nearly identical sets of instructions from six different websites, including the one that Facebook generates for you, to get Facebook chat connected through Pidgin. It's the latest portable version, so from what I hear the plugin is out of the question. Whenever I try to connect, I get a message saying "Not Authorized" and buttons to either modify the account info or retry. NOTHING I have done has fixed this, and I can't find anything remotely useful anywhere. I am running Windows XP, and running Pidgin (portable) off of a flash drive. Someone please tell me what I have to do. I read about authorizing the chat on my actual Facebook page; I'd have tried that if I could find out how to do it, but if it's there, they hid it well. HELP?!
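
    For reference, the XMPP account settings that historically worked for Facebook's chat gateway looked roughly like the sketch below. The username is the profile's vanity username (set in Facebook's account settings), not the login e-mail; an account with no username set was a common cause of "Not Authorized".

        Protocol:       XMPP
        Username:       your.facebook.username   (not the e-mail address)
        Domain:         chat.facebook.com
        Resource:       Pidgin
        Port:           5222
        Advanced tab:   untick "Require SSL/TLS" (label varies by Pidgin
                        version; the gateway did not offer TLS at the time)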

  • Can't install Win2k8 on KVM - classic 0x80070013 error

    - by javano
    I am trying to install Win2k8 Std as a KVM guest on Debian Squeeze. As you can see from these screenshots: no drives are detected (I have blanked out a 20GB image for testing) - screenshot1. I am using this driver CD - screenshot2. I have loaded the signed Win7 driver (I assume this was the most appropriate one?) - screenshot3. I can now see an unpartitioned drive - screenshot4. But I can't create a new partition on it, getting error code 0x80070013 - screenshot5. I have had this error code before, but only on a physical server. If I remember correctly, it was complaining because the disks were partitioned as GPT (it was a server being re-purposed), so repartitioning with an MS-DOS table fixed that. This is a blank disk image, though. What is wrong here, and how can I correct it? Thank you. UPDATE: I booted the VM with a GParted-Live disk and formatted the volume with an MS-DOS partitioning scheme and a single 20GB NTFS filesystem. Now when I boot the Win2k8 CD and load my drivers, I get a different error. As you can see at the bottom of screenshot6: "Windows cannot be installed on this hard drive space. Windows must be installed to a partition formatted as NTFS". Clicking Format produces error 0x80004005 on screen, so I think this is still a driver issue, because Windows can see the drive but not interact with it properly. Is that insane thinking?
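
    One check worth making before blaming the driver is how the guest disk is actually attached. A sketch, with the domain name assumed:

        # Show the <disk> stanzas for the guest (domain name assumed)
        virsh dumpxml win2k8 | grep -A4 '<disk'
        # bus='virtio' -> the signed Win7 viostor driver is the right match
        # bus='ide'    -> no extra storage driver should be needed at all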

  • IPv6 seems to be enabled - how do I configure it without interfering with IPv4?

    - by Mister IT Guru
    I noticed that some of my CentOS boxes have IPv6 enabled and seem to have addresses. I have no problem with this, but I would like to get a handle on it, and even connect to them using IPv6. This would really help if for any reason DHCP has a hiccup. But I'm a bit lost as to where the configuration lives on my CentOS box. (I am also researching this on Google, but I like Server Fault! :) ) I am hoping to be able to log in via IPv6 over the VPN, because every now and then the DHCP device has a bad morning and needs to be restarted. (I'm also looking into that issue, but someone else handles it - management separation gone mad!) It's a remote client, so it would be a lot easier for me to connect to these systems, which seem to self-configure, and use them as a pivot via SSH tunnels to keep managing other remote devices while our main route is being fixed. I guess my questions are: how can I configure IPv6 without interfering with IPv4, and can I influence this auto-configuration I seem to be seeing on CentOS?
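
    A sketch of the static approach on CentOS, with placeholder addresses. The auto-configuration being observed is almost certainly SLAAC from router advertisements, which IPV6_AUTOCONF controls per interface; none of this touches the IPv4 directives in the same files, so IPv4 keeps working as before.

        # /etc/sysconfig/network -- turn the IPv6 stack on globally
        NETWORKING_IPV6=yes

        # /etc/sysconfig/network-scripts/ifcfg-eth0 -- added alongside the
        # existing IPv4/DHCP lines; the address below is an example only
        IPV6INIT=yes
        IPV6_AUTOCONF=no        # yes keeps accepting router advertisements
        IPV6ADDR=2001:db8:0:1::10/64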

  • Two monitors with Mac Mini - one displays black despite receiving a signal

    - by alex
    My Mac Mini outputs to my two new monitors - Dell U2311Hs. The LED on the bezel shows blue when receiving a signal, or yellow otherwise. Both screens show blue, and it seems my Mini can see both of them... however, one of them displays only black, while still appearing to receive a signal (when I turn the Mac off, it switches to "No Signal"). To make things weirder, on startup the boot screen (white with the Apple logo) appears on the right monitor - the one that then goes black. Occasionally it flickers up on the black screen for a second. I have tried Detect Displays; it appears to do nothing. I'm also running a dual-monitor KVM, and the video connections are DVI-D. How can I fix this situation? Thanks. Update: this is the weirdest thing - I used the DVI-D cable that came with the KVM and it seems to have fixed it. I hadn't bothered with it before because it looks identical to any other DVI cable (in form and pin-out). So I will accept an answer if someone can tell me what the difference between these cables may be.

  • Xbox Music ignores Asus FN media keys in Desktop Mode

    - by Ruud Lenders
    I have an Asus PRO64JQ laptop and recently upgraded to Windows 8. The FN keys stopped working, so I tried to install the default drivers using the CD that came with the laptop. Unfortunately, the CD does not support Windows 8, and Asus Support is no help either: Asus does not appear to support Windows 8 on my model, since I can't find it in their list of Windows 8-upgradable models, and the support page for my laptop does not let me select Windows 8 as the OS. That's why I'm asking my question here. I fixed most of the FN keys by installing the ATKPackage, and was happy to find that the media keys (play/pause, next, previous) work with the Xbox Music app - but only in the Metro UI. In Desktop mode, pressing the FN play/pause combination starts up Windows Media Player. Disabling Windows Media Player through the Control Panel stops it from popping up, but Xbox Music still only responds to the keys in the Metro UI. I remember having similar problems making these media keys work with iTunes on Windows 7; the fix there was a small iTunes plug-in. So tell me, do you know how to fix my problem? Thanks.

  • Need some help figuring out a clamav & monit monitoring error (unixsocket)

    - by Ronedog
    I need a bit of help figuring something out. First off, I'm not very well versed in FreeBSD servers, but with some direction hopefully I can get this fixed. I'm using FreeBSD and installed Monit so I could monitor some of the processes that run: tomcat, apache, mysql, sendmail and clamav. So far I've only been successful in getting apache and mysql monitored. For clamav I'm getting this error in /var/log/monit.log:

        'clamavd' failed, cannot open a connection to UNIX[/usr/local/etc/rc.d/clamav-clamd]

    My config for clamav in /etc/monitrc is:

        ####################################################################
        # CLAMAV Virus Checks
        ####################################################################
        check process clamavd with pidfile /var/run/clamav/clamd.pid
          group virus
          start program = "/usr/local/etc/rc.d/clamav-clamd start"
          stop program = "/usr/local/etc/rc.d/clamav-clamd stop"
          if failed unixsocket /usr/local/etc/rc.d/clamav-clamd then restart
          if 5 restarts within 5 cycles then timeout

    Honestly, I really don't know much of what's going on here. My host, who helped me get the box set up, basically installed clamav but doesn't offer this kind of detailed support, so I'm left to figure this stuff out on my own; I own the box, they provide the ISP service. Is there anyone who can help me troubleshoot this? Thanks for your help in advance.
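
    The error message itself points at the likely culprit: the unixsocket test is aimed at the rc script, not at the socket clamd listens on. A corrected stanza might look like this sketch - the socket path below is a common default, so confirm it against the LocalSocket setting in clamd.conf:

        check process clamavd with pidfile /var/run/clamav/clamd.pid
          group virus
          start program = "/usr/local/etc/rc.d/clamav-clamd start"
          stop program = "/usr/local/etc/rc.d/clamav-clamd stop"
          # test clamd's actual UNIX socket, not the rc.d script path
          if failed unixsocket /var/run/clamav/clamd.sock then restart
          if 5 restarts within 5 cycles then timeout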

  • How do you gracefully upgrade mission critical systems to wildly disparate systems?

    - by Ernie
    In the span of my 12+ year career I have yet to overcome this hurdle, and I suspect the answer simply isn't easy or even possible, so I ask everyone here for their experience. Say you're running into egregious problems that can only be fixed by moving from one platform to another - either because of a mistake in choosing the platform years ago, or because you've simply grown beyond what the system was originally designed for. You know for certain that the cruft built up over time means it will be nearly impossible to test for all the things that will certainly lead to tech-support hell - which, as we all know, leads to the loss of customers. Not that customers aren't already complaining about the egregious problems that exist! The best approach I've discovered so far is to devise a changeover plan, test it on a few clients, then a dozen, then a hundred, then finally finish the changeover for everyone and pray you've worked out all the bugs with those first hundred and twenty - and that the animal by-products will not hit the ventilation system in the most spectacular fashion possible. Which doesn't mean they won't anyway. So say you're moving from Exchange to Exim (or even just Sendmail to Exim). How do you handle it?

  • Printing in a Remote Desktop session

    - by Arindam Banerjee
    We have to connect to a Windows 2008 server using Remote Desktop from a Windows XP machine. A barcode printer is attached to the XP machine, and the printer is shared as a local resource in the RDC session to the server. On the server we have to print from an application which prints either to an LPT port or to a shared printer (UNC path). For this I configure printer pooling, combining LPT1 and the Terminal Server TSxxx port, as I don't know of a way to reach the terminal-session printer via a UNC path. But I have the following issues. Every time I connect to a remote session, the printer from my local XP machine shows up in Printers and Faxes on the Windows 2008 server (terminal server), but I am not allowed to manage the XP printer from the terminal server to enable pooling; on the server I have to change the security permissions every time and then enable printer pooling. How can I keep the security permissions unchanged? Secondly, I created a batch file to enable printer pooling:

        rundll32 printui.dll,PrintUIEntry /Xs /n "Printer (from CLIENT)" Portname "LPT1:,TS005"

    But every time, the printer in the terminal session connects on a different TS port. Is there any way to make the TS port fixed? Help from anyone will be highly appreciated.
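
    As far as I know, the TSxxx number is allocated per session, so pinning it is unlikely; the alternative is to discover it at logon and rebuild the pool each time. A rough batch sketch (untested; the printer name is taken from the question, and wmic output may carry trailing whitespace that needs trimming):

        @echo off
        rem Find which TSxxx port the redirected printer received this session
        for /f "skip=1 delims=" %%P in (
            'wmic printer where "name like '%%(from CLIENT)%%'" get PortName'
        ) do if not defined TSPORT set "TSPORT=%%P"
        rem Rebuild the pool as LPT1 plus the discovered port
        rundll32 printui.dll,PrintUIEntry /Xs /n "Printer (from CLIENT)" PortName "LPT1:,%TSPORT%"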

  • ATI FirePro will not detect a second DVI-D monitor

    - by John
    OK, so, weird issue here. I previously ran six screens off three of the older ATI FirePro graphics cards, but they had a problem with the heat sink getting too hot and warping the PCB, resulting in total failure of the card. To replace my three dead cards I purchased a new-revision ATI FirePro with the redesigned heat sink. I'm only using one at the moment, to make sure the problem is really fixed before I waste money on two more, but this is where things get weird. These FirePros have a single port that connects to two monitors via a splitter cable going from the one port to two DVI connectors. When I plug two identical monitors in via their DVI inputs, no matter what I do, Windows and Catalyst will only detect one screen. However, if I use the VGA input on one of the screens, with a VGA-to-DVI adaptor to plug it into the card, it works fine. This confuses me greatly. I'm using the ATI FirePro 2270 graphics card with identical Dell U2311H screens. I can post the rest of the system spec if needed, but I wouldn't have thought it matters much, as the system had no problem handling six screens before the graphics cards failed. Naturally, both Catalyst and the ATI drivers are the most current versions. ATI tech support has been zero help; they seemed stumped as soon as I verified that both screens were plugged in and connected properly. Anyone have any ideas?

  • Lenovo System Update Breaks Windows Live

    - by wolfvilleian
    Hey everyone, I've been racking my brain (and fingers, from typing) trying to solve this issue, to no avail. I have a Lenovo computer, and I installed their System Update tool to install all my missing drivers. However, after this tool is installed, Windows Live 2011 breaks: it will no longer sign in, giving error number 8e5e0247, and none of the solutions online have helped. It appears that a language setting somewhere gets set to en_ms, while I'm en_ca. My computer is running Windows 7 x64. When I try to sign in to Messenger, it gives an error that (with some research) means my locale or language is not supported; I've searched my computer for any reference to en_ms but find none. A few other things seem broken too: when a UAC box comes up, it is no longer able to identify the publisher of anything, and the indexing service does not work. (I'm not sure the indexing issue is related, but the UAC issue happened right after installation. I had this issue before but don't remember how I fixed it; I believe it had something to do with environment variables.) When it tries to sign in, it gets as far as "Loading contacts", then stops and goes back to the sign-in screen. Has anyone seen this before? Thanks
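
    A quick way to see where a stray locale might be coming from is to compare what Windows itself reports, using only built-in tools (a sketch):

        rem Current user locale as stored in the registry
        reg query "HKCU\Control Panel\International" /v LocaleName
        rem System and input locale as Windows reports them
        systeminfo | findstr /i locale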

  • Apache can't be reached from outside my LAN

    - by Javier Martinez
    Edit: I fixed it in the PORT TRIGGERS menu of my router. Thank you anyway.

    I have a weird problem related to (I think) my cable router and my configured vhosts in Apache2. The point is, I can't access any of my configured vhosts from outside my LAN if I set Apache's HTTP port to 80 and add a NAT rule for it. If I instead set the Apache port to 81 (or anything else), with its respective NAT rule on the router, it works. My router is an ARRIS TG952S and I am using Apache/2.2.22 (Debian).

    ports.conf:

        NameVirtualHost *:80
        Listen 80

    vhost1.mydomain.net.conf:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName vhost1.mydomain.net
            ServerAlias vhost1.mydomain.net www.vhost1.mydomain.net

    vhost2.mydomain.net.conf:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName vhost2.mydomain.net
            ServerAlias vhost2.mydomain.net www.vhost2.mydomain.net

    DNS records (using FreeDNS):

        mydomain.net        --> pointing to another server
        vhost1.mydomain.net --> pointing to my server
        vhost2.mydomain.net --> pointing to my server

    iptables -L -n:

        Chain INPUT (policy ACCEPT)
        target                    prot opt source     destination
        fail2ban-apache-noscript  tcp  --  0.0.0.0/0  0.0.0.0/0    multiport dports 80,443
        fail2ban-apache           tcp  --  0.0.0.0/0  0.0.0.0/0    multiport dports 80,443
        fail2ban-ssh              tcp  --  0.0.0.0/0  0.0.0.0/0    multiport dports 22

        Chain FORWARD (policy ACCEPT)
        target     prot opt source     destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source     destination

        Chain fail2ban-apache (1 references)
        target     prot opt source     destination
        RETURN     all  --  0.0.0.0/0  0.0.0.0/0

        Chain fail2ban-apache-noscript (1 references)
        target     prot opt source     destination
        RETURN     all  --  0.0.0.0/0  0.0.0.0/0

        Chain fail2ban-ssh (1 references)
        target     prot opt source     destination
        RETURN     all  --  0.0.0.0/0  0.0.0.0/0

    Thank you.
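
    When a vhost answers on port 81 but not on 80, the usual suspects are the router or the ISP filtering inbound port 80 rather than Apache itself. Two quick checks, sketched with the hostnames from the question:

        # From a machine OUTSIDE the LAN: does anything answer on port 80?
        curl -I http://vhost1.mydomain.net/
        # On the server itself: is Apache really bound to *:80?
        sudo netstat -tlnp | grep ':80 '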

  • Xen bridge only works when an IP is assigned

    - by m.sr
    Hey! I just had an (to my mind) obscure situation. I have a Xen server with bridged networking, and everything had worked fine for months. A while ago I configured a second bridge; only some DomUs get an interface on this bridge, and my Dom0 doesn't need to, and shouldn't, use it. Five minutes ago, while rebooting the Xen host (because of an unrelated problem with the UPS), I decided to remove the fixed IP from the Dom0 interface that belongs to the second bridge. After the reboot, I noticed that none of the interfaces on the second bridge was reachable. I couldn't find a problem: everything was just as before the reboot, except that the Dom0 interface had no IP address. After a while I tried giving the Dom0 interface an IP again and... BOOM... everything was up and running again! WTF? Why is it important for the bridge's interface to be configured in the Dom0? Even when configured 'wrong' (with completely different network settings from the network actually hanging on the bridge), everything works fine... I don't get it. Could someone please explain? Thanks a lot!
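
    A plausible explanation, offered as a sketch: an address-less bridge is often left administratively down at boot, and assigning an IP brings the link up as a side effect. If that is what's happening here, bringing the link up is all that's needed (bridge name assumed):

        # Bring the bridge up without giving Dom0 an address on it
        ip link set xenbr1 up
        # older-style equivalent
        ifconfig xenbr1 0.0.0.0 up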

  • certutil -ping fails with a 30-second timeout - what to do?

    - by mark
    Dear ladies and sirs. The certificate store on my Win7 box is constantly hanging. Observe:

        C:\> 1.cmd

        C:\> certutil -? | findstr /i ping
          -ping             -- Ping Active Directory Certificate Services Request interface
          -pingadmin        -- Ping Active Directory Certificate Services Admin interface

        C:\> set PROMPT=$P($t)$G

        C:\(13:04:28.57)> certutil -ping
        CertUtil: -ping command FAILED: 0x80070002 (WIN32: 2)
        CertUtil: The system cannot find the file specified.

        C:\(13:04:58.68)> certutil -pingadmin
        CertUtil: -pingadmin command FAILED: 0x80070002 (WIN32: 2)
        CertUtil: The system cannot find the file specified.

        C:\(13:05:28.79)> set PROMPT=$P$G

    Explanations: the first command shows that certutil has -ping and -pingadmin parameters; trying either fails after a 30-second timeout (the current time is shown in the prompt). This is a serious problem - it breaks all the secure communication in my app. If anyone knows how this can be fixed, please share. Thanks. P.S. 1.cmd is simply a batch file of these commands:

        certutil -? | findstr /i ping
        set PROMPT=$P($t)$G
        certutil -ping
        certutil -pingadmin
        set PROMPT=$P$G
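
    One avenue worth trying, as a sketch: with no target given, certutil -ping discovers a CA through Active Directory, and on a box that isn't domain-joined that discovery is plausibly what burns the 30 seconds. Naming a CA explicitly sends the request straight to it (the config string below is a placeholder):

        :: "host\CA name" is a placeholder -- substitute a real CA
        certutil -config "CASERVER\My-Issuing-CA" -ping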

  • Windows 7 corrupted profile - can it be prevented?

    - by Radek
    I have a dedicated Windows 7 virtual machine (not on a domain) for overnight automation testing. Some commands (mysqldump, tscon.exe) must be run under the administrator account. Last week, the administrator account's profile was corrupted. I fixed it by renaming it in the registry and logging in as administrator. Today it is corrupted again. I use the administrator account only to run the above commands via runas. The computer is also restarted quite often via the shutdown command - in particular, every night before automation testing starts. I checked the machine for viruses and did a full scan with Avast, although I believed the machine was clean. Any idea how to prevent the profile from getting corrupted again?

    Update: the first event-log entry today is from 1:15 am, and one of my scripts ran a runas command as administrator at exactly 1:15 am - though it was the second runas executed after testing started. The same thing happened two days in a row. Before testing starts, I need to copy one file that is locked, so I run handle.exe via runas to unlock it; I suspect that is what's corrupting the profile, but I am not able to reproduce it by hand. The message from Event Viewer is:

        Windows cannot load the locally stored profile. Possible causes of this
        error include insufficient security rights or a corrupt local profile.
        DETAIL - The process cannot access the file because it is being used by
        another process.
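
    For what it's worth, the registry rename described above normally happens under the key sketched here; inspecting it right after the corruption recurs may show which profile hive picked up a .bak twin:

        rem List all local profiles and their registry state
        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" /s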

  • Squid bypass for a domain

    - by krisdigitx
    I am using Squid with adzap. Is it possible to have Squid/adzap not cache a particular domain, e.g. cnn.com? This is my squid.conf file:

        #
        # Recommended minimum configuration:
        #
        acl manager proto cache_object
        acl localhost src 127.0.0.1/32
        #acl localhost src ::1/128
        acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
        #acl to_localhost dst ::1/128

        # Example rule allowing access from your local networks.
        # Adapt to list your (internal) IP networks from where browsing
        # should be allowed
        acl localnet src 192.168.1.0/24
        acl localnet src 192.168.2.0/24

        acl SSL_ports port 443
        acl Safe_ports port 80          # http
        acl Safe_ports port 21          # ftp
        acl Safe_ports port 443         # https
        acl Safe_ports port 70          # gopher
        acl Safe_ports port 210         # wais
        acl Safe_ports port 1025-65535  # unregistered ports
        acl Safe_ports port 280         # http-mgmt
        acl Safe_ports port 488         # gss-http
        acl Safe_ports port 591         # filemaker
        acl Safe_ports port 777         # multiling http
        acl CONNECT method CONNECT

        #
        # Recommended minimum Access Permission configuration:
        #
        # Only allow cachemgr access from localhost
        http_access allow manager localhost
        http_access deny manager

        # Deny requests to certain unsafe ports
        http_access deny !Safe_ports

        # Deny CONNECT to other than secure SSL ports
        http_access deny CONNECT !SSL_ports

        # We strongly recommend the following be uncommented to protect innocent
        # web applications running on the proxy server who think the only
        # one who can access services on "localhost" is a local user
        #http_access deny to_localhost

        #
        # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
        #
        # Example rule allowing access from your local networks.
        # Adapt localnet in the ACL section to list your (internal) IP networks
        # from where browsing should be allowed
        http_access allow localnet
        http_access allow localhost

        # And finally deny all other access to this proxy
        http_access deny all

        # Squid normally listens to port 3128
        http_port xxx.xxx.xxx.yyy:3128 transparent
        visible_hostname proxyserver.local

        # We recommend you to use at least the following line.
        hierarchy_stoplist cgi-bin ?

        # Uncomment and adjust the following to add a disk cache directory.
        cache_dir ufs /var/spool/squid 1024 16 256

        # Leave coredumps in the first cache dir
        coredump_dir /var/spool/squid

        # Add any of your own refresh_pattern entries above these.
        refresh_pattern ^ftp:           1440    20%     10080
        refresh_pattern ^gopher:        1440    0%      1440
        refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
        refresh_pattern .               0       20%     4320

        access_log /var/log/squid/squid.log squid
        access_log syslog squid
        redirect_program /usr/local/adzap/scripts/wrapzap

    Fixed using:

        acl allow_domains dstdomain www.cnn.com
        always_direct allow allow_domains
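
    A note on the fix quoted above: always_direct only controls whether requests skip any cache_peer, not whether responses are stored. To stop Squid from caching the domain at all, the usual addition is a cache deny rule; a sketch, with the domain taken from the question:

        # Match the whole domain, not just the www host
        acl nocache_domains dstdomain .cnn.com
        # Never store responses for it
        cache deny nocache_domains
        # And, if a parent proxy is involved, still go direct
        always_direct allow nocache_domains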

  • Document Map in MS Word 2007 going bonkers

    - by rzlines
    I'm working on a large project report in Microsoft Word 2007 and have been using the Document Map to generate the index. I have been carefully selecting the headers that should be added to the Document Map, but after saving the document and opening it today, the map has added whatever it pleases. A post I found after extensive searching offers a temporary fix that works, but when I save, close and reopen the document I face the same dilemma:

        I have noticed that when Word stuffs up the document map after opening the
        file, I can undo this by using the UNDO button. Word calls it 'Autoformat'.
        I have also fixed a file whose document map was screwed permanently (i.e.
        saved with it) by selecting all (CTRL+A), selecting the PARAGRAPH drop-down
        menu in the HOME tab and, in the OUTLINE drop-down box, selecting 'Body
        Text'. This removed all the problems and did not seem to affect my
        outline-level paragraph headings.

    This is only a temporary fix, though, and I have to be on my toes to keep Word from autoformatting at the start of the document. I also can't afford to turn autoformat off entirely, as I need it. The same post continues:

        I've solved this problem for me. When you open the file, a progress bar at
        the bottom first says 'Opening (ESC to Cancel)' and then 'Word is
        formatting the document (ESC to Cancel)'. If I cancel the second process,
        the TOC is fine. No cancelling, TOC screwed.

    Can anyone work out how to switch off this autoformatting? This is the post in which I found the temporary fix.

  • Capistrano + Nginx + Passenger = 403

    - by slimchrisp
    I asked this over at Stack Overflow as well, but still haven't received any answers that have helped me solve this problem. I have spent almost a week at this point trying to solve the issue, and I'm just not making any headway. The issue seems pretty common, but none of the solutions I found online work for me. A buddy of mine is creating the same setup and is having the same issue. After a few days stuck on the 403 error I started over using this tutorial: http://blog.ninjahideout.com/posts/a-guide-to-a-nginx-passenger-and-rvm-server. I had hoped starting from scratch with the tutorial would work, but no dice. Either way, the tutorial shows the steps I have taken. Here is essentially what I have going on:

    - I have a VPS account on linode.com; the server OS is Ubuntu 10.04
    - Local OS (shouldn't matter, but just so you know) used to deploy with Capistrano is Snow Leopard 10.6.6
    - I use RVM on the server, version 1.2.2
    - I was previously on ruby-1.9.2-p0 [ i386 ], but per the tutorial I switched to ree-1.8.7-2010.02 [ i386 ]; running 'which ruby' confirms this, printing /usr/local/rvm/rubies/ree-1.8.7-2010.02/bin/ruby
    - 'passenger -v' prints: Phusion Passenger version 3.0.2
    - 'nginx -v' says the command nginx could not be found; the server is definitely there and running, as nginx serves static files, but this could have something to do with my problem
    - Two users deal with the install: root, which I used to install everything, and deployer, a user I created specifically for deploying my applications
    - My web-app directory is in deployer's home directory: /home/deployer/webapps/mysite.com/public
    - Per the Capistrano default deploy, a symbolic link called current is created in the public folder, pointing to /home/deployer/webapps/mysite.com/public/releases/most_current_release
    - I have chmodded the deployer directory recursively to 777
    - /opt/nginx permissions: rwxr-xr-x
    - /usr/local/rvm/gems/ree-1.8.7-2010.02/gems/passenger-3.0.2 permissions: rwxrwsrwx

    My nginx config file has gone through just short of an eternity of variations, but currently looks like this:

        worker_processes 1;

        events {
            worker_connections 1024;
        }

        http {
            passenger_root /usr/local/rvm/gems/ree-1.8.7-2010.02/gems/passenger-3.0.2;
            passenger_ruby /usr/local/rvm/bin/passenger_ruby;
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            keepalive_timeout 65;

            server {
                # listen *:80;
                server_name mysite.com www.mysite.com;
                root /home/deployer/webapps/mysite.com/public/current;
                passenger_enabled on;
                passenger_friendly_error_pages on;
                access_log logs/mysite.com/server.log;
                error_log logs/mysite.com/error.log info;
                error_page 500 502 503 504 /50x.html;
                location = /50x.html {
                    root html;
                }
            }
        }

    I bounce nginx, hit the site, and boom: 403, and the logs say "directory index of /home/deployer... is forbidden". As others with a similar problem have said, you can drop an index.html into public/releases/current_release and it will render - but Rails no worky. That's basically it. At this point I have just about exhausted every possible solution attempt I can think of. I am a programmer and definitely not a sysadmin, so I am 99% sure this has something to do with permissions I have hosed, but for the life of me I can't figure out where. If anyone can help, I would really, really appreciate it. If there are any specific permission things you want me to check (i.e. groups/permissions), please include the commands to do so as well; hopefully that will help others who read this post in the future. Let me know if there is any other information I can provide, and thanks in advance!
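
    One thing that jumps out, offered as a sketch of the usual fix: in a stock Capistrano layout the current symlink lives at the deploy root (not inside public/), so nginx's root should point at the public directory inside the symlink - a path ending in current/public, not public/current. With the paths from the question:

        server {
            listen 80;
            server_name mysite.com www.mysite.com;
            # current/public, not public/current
            root /home/deployer/webapps/mysite.com/current/public;
            passenger_enabled on;
        }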

  • Synchronize a directory across several Linux PCs

    - by Gab
    I need a distributed filesystem (or a synchronization tool) capable of keeping a directory synchronized across four PCs. My requirements:

    - offline access: data must be available offline on each PC
    - preserve execution rights: some files are marked executable on a Linux partition, and this flag should be replicated
    - efficient sync strategy: some of my files are 20 GB and are changed quite often, but in very small parts (VirtualBox images); delta transmissions are welcome
    - efficient handling of space: no history for files, and files shouldn't be copied to temp directories "just in case you break it"
    - deletions of files must be propagated
    - modifications can happen on any of the four PCs and should be propagated when the other PCs are connected

    Other specs of my setup:

    - sync is over a LAN; the total amount of data to be synced is around 180 GB, in some ten thousand files
    - changes are small, but can happen in big files
    - at the moment I'm only interested in a Linux solution
    - conflicts either don't happen or are solved with "last one wins"

    I haven't found any good solution. I've been trying:

    - unison: the only one working at the moment, but during the hashing phase it hangs my PC for some minutes, disk light steady on
    - SparkleShare: doesn't handle large files nicely, and keeps a history of all your changes that grows indefinitely; they promise a fix in future releases, but at the moment it still doesn't fit my needs
    - ownCloud: keeps a history of each file I change
    - Coda: help! I couldn't set it up correctly!
    - git-annex assistant: transforms all your files into symlinks and marks the original files read-only ("just in case you make a mistake while you modify it"!); before you edit a file you have to issue a special command, "git annex unlock", which creates a local copy, and you have to remember to lock it again if you want it synchronized

    What should I try next?

  • Can't mount a Linux USB disk: it just creates a /dev/sg device but no /dev/sd

    - by MTilsted
    I have a Corsair R60 SSD, a disk with both SATA and USB connectors. But the USB side seems to be a bit non-standard, or maybe it's just my Fedora Linux. When I connect the disk over USB to a running Fedora 14 system, a device called /dev/sg3 is added, but that is all. No new /dev/sd* device is created, so I can't mount the disk. If I look at cat /proc/scsi/sg/device_strs I get:

        ATA       Hitachi HTS54321  FB2O
        HL-DT-ST  DVDRAM GSA-T50N   RP05
        Seagate   Desktop           0130
        Corsair   CSSD-R60GB2

    So the disk is there (the last entry), but Linux for some reason will not see it as a USB hard disk. Other USB disks work fine; it is only this specific disk that causes problems, and I have tried three different computers with the same result. A hint at the problem may be that if I attach the disk to a Windows system over USB, it is called "A fixed disk" and not a portable disk as expected. The disk works fine with Linux if I connect it with the SATA cable, but I would really like to have it working over USB too (to mount it on computers without SATA). Added: I did try to mount /dev/sg3, but mount says it's not a block device (file says it's a character special device). Added output from dmesg:

        [  97.454073] usb 7-1: USB disconnect, address 2
        [ 105.913055] hub 2-0:1.0: unable to enumerate USB device on port 3
        [ 107.048054] usb 2-3: new high speed USB device using ehci_hcd and address 5
        [ 107.162900] usb 2-3: New USB device found, idVendor=1b1c, idProduct=1ab8
        [ 107.162903] usb 2-3: New USB device strings: Mfr=1, Product=2, SerialNumber=5
        [ 107.162906] usb 2-3: Product: CSSD-R60GB2
        [ 107.162908] usb 2-3: Manufacturer: Corsair
        [ 107.162910] usb 2-3: SerialNumber: 10111441000000990069
        [ 107.167651] scsi7 : usb-storage 2-3:1.0
        [ 108.195543] scsi 7:0:0:0: Direct-Access     Corsair  CSSD-R60GB2      PQ: 1 ANSI: 0
        [ 108.197732] scsi 7:0:0:0: Attached scsi generic sg3 type 0
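
    The trace shows the drive detected as a Direct-Access device on scsi7 and given an sg node, but the sd driver never claiming it. A few diagnostics worth trying, sketched with the host number taken from the trace above:

        # Confirm both the USB transport and the disk driver are loaded
        lsmod | grep -e usb_storage -e sd_mod
        # Force a rescan of the SCSI host the disk attached to
        echo "- - -" > /sys/class/scsi_host/host7/scan
        # Watch whether an sdX node finally appears
        dmesg | tail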

  • Internet connection problem: ping OK, but Outlook and browsers don't work

    - by Ashian
    Hi. For some days now I have had a big problem on my laptop (running Windows XP SP3). When I connect to the Internet, I can ping websites, but when I try to browse them, sometimes they work correctly and sometimes the connection to the server is interrupted and I have to refresh the page several times. In the failing case, the browser shows a connection problem immediately after I click the address bar or a link on the page, without even appearing to try to connect to the server. I use Firefox and Opera, and both have this problem. I tried another ISP and still have the problem. I don't use any proxy server, and I have checked the proxy settings. In this situation, Outlook also can't connect to the mail server. The problem goes away for a while after some time passes or after I restart Windows. I have checked for viruses and can't find anything. Any idea how I can fix this?

    Update: thanks for your responses. I tested them and also tried OpenDNS settings, but that didn't help. Last night I saw that my local web applications (such as the ADSL modem's configuration site and sites I set up on the Windows XP IIS) also fail to open, with an internal communication error appearing (an Opera message) immediately - which would not be related to DNS settings or the Internet connection.
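
    Given that ICMP works while TCP page loads stall, one classic culprit worth ruling out is a path-MTU problem. A quick probe from the XP command prompt (the host name is just an example):

        rem Send a don't-fragment ping with the largest payload that fits a
        rem standard 1500-byte MTU; if this fails but smaller -l values work,
        rem something on the path is eating fragmentation-needed replies
        ping -f -l 1472 www.google.com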

  • How can I recover an ext4 filesystem corrupted after an fsck?

    - by Regan
    I have an ext4 filesystem on LUKS over software RAID 5. The filesystem operated "just fine" for several years, until I began to run out of space. I had a 9T volume on six 2T drives and began upgrading to 3T drives by doing the mdadm fail, remove, add, rebuild, repeat process until I had a larger array. I then grew the LUKS container, but when I unmounted and tried to resize2fs, I was told the filesystem was dirty and needed e2fsck. Without thinking, I just ran e2fsck -y /dev/mapper/candybox, and it began spewing all kinds of "inode being removed"-type messages (I can't remember exactly). I killed e2fsck and tried to remount the filesystem to back up the data I was concerned about. When trying to mount at this point, I get:

        # mount /dev/mapper/candybox /candybox
        mount: wrong fs type, bad option, bad superblock on /dev/mapper/candybox,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so

    Looking back at my older logs, I noticed the filesystem had been giving this error each time the machine booted:

        kernel: [79137.275531] EXT4-fs (dm-2): warning: mounting fs with errors, running e2fsck is recommended

    So shame on me for not paying attention :( I then tried to mount using every backup superblock (one after another), and each attempt left this in my log:

        EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 0 failed (26534!=65440)
        EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 1 failed (38021!=36729)
        EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 2 failed (18336!=39845)
        ...
        EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 11911 failed (28743!=44098)
        BUG: soft lockup - CPU#0 stuck for 23s! [mount:2939]

    Attempts to restart e2fsck result in:

        # e2fsck /dev/mapper/candybox
        e2fsck 1.41.14 (22-Dec-2010)
        e2fsck: Group descriptors look bad... trying backup blocks...
        candy: recovering journal
        e2fsck: unable to set superblock flags on candy

    At this point, I decided it best to order some more drives and make an image using ddrescue. Two weeks later, I have an image of the LUKS partition in a .img file:

        # ls -lh
        total 14T
        -rw-r--r-- 1 root root  14T Oct 25 01:57 candybox.img
        -rw-r--r-- 1 root root  271 Oct 20 14:32 candybox.logfile

    After numerous attempts using everything I could find online, I could not coerce e2fsck into doing anything with the image, so I used mkfs.ext4 -L candy candybox.img -m 0 -S, and I was able to mount the dirty filesystem read-only without the journal and recover 960G of data. It gave all kinds of errors about directories not existing and so forth, but I was able to get some stuff, which gave me some hope! I then ran e2fsck again; it had to recreate the root inode and gave a massive list of group-count corrections. I accepted the root-inode creation and said no to everything else, which left a completely empty filesystem. I re-ran it saying yes to all questions, with the same result: a "clean" but empty filesystem. extundelete finds 0 recoverable inodes. And now I'm stuck again; I can't come up with any method other than dropping to something like photorec, which will give me an absolute mess given how large the filesystem was. I'm willing to re-copy the image from the original array and start over, if anyone can suggest a way to get more of my files back.
    I wish I could give more detailed logs of the commands that ran, but the output scrolled past except for what got logged to syslog, and my memory of the details is limited given the timeframe this occurred over. Any help is greatly appreciated!
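
    If you do start over from the array, one hedge worth taking (sketched below with assumed paths): keep the ddrescue image pristine, run every destructive experiment on a copy, and poke at the copy first through a read-only loop device with e2fsck in no-change mode against a backup superblock:

        # Work on a copy, never on the only image you have
        cp --sparse=always candybox.img scratch.img
        # Attach it read-only; prints the loop device it picked, e.g. /dev/loop0
        losetup --find --show --read-only scratch.img
        # Dry-run fsck against a backup superblock (-n answers no to everything;
        # 32768 is a common backup location for a 4K-block filesystem)
        e2fsck -n -b 32768 -B 4096 /dev/loop0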
