Search Results

Search found 28288 results on 1132 pages for 'home directory'.

Page 487/1132

  • Disaster recovery backup of files/photos for personal use

    - by Renesis
    I'm looking for the best method to store a backup of important files and 5+ years of digital photos that is safe from a fire or flood disaster in my home. I'm looking for:

    - Affordable: less than $100/yr or as a one-time cost.
    - Reliable: at least a smaller chance of failing than there is of fire or flood.
    - Easy for the initial backup and for adding to it, and at least semi-easy to recover from.

    I recently purchased a small home safe for physical vitals. It was inexpensive, solid, and is fire/water safe. If I had a physical copy of the digital files, the safe would work fine for this, but I don't know what to store in it that adequately meets the requirements above.

    - Hard drive: I read that the danger of it not spinning up makes a hard drive a bad choice for this type of storage, although it was my first thought and would definitely be the simplest choice: very easy to take out once a month and add files to.
    - DVDs: way too much of a hassle for both backup and restore.
    - Tape: no idea on the affordability of this option.
    - Online: given that I already have at least 300GB, that ever-increasing megapixels mean ever-bigger files, and that my ISP upload is about 2Mb at best, this just doesn't sound like a good option for me, but I could be convinced.
    - Other: have I missed something?

    Also, I'm already covered both for sync between computers (Dropbox) and for a nightly backup of these files (external HDD). The problem with the nightly backup is obviously that it's always with the computer and would be destroyed along with it in a disaster. Is anyone else doing something similar? Is the HDD as poor a choice as I read, or is it a feasible option? Maybe two, to reduce the likelihood of failure?

    Read the article

  • knife on Windows inconsistently reads ~\.chef\knife.rb on Management Workstation

    - by gWaldo
    I am implementing a new instance of (open-source v10.12) Chef in an existing environment. Currently the environment is mostly Windows, but more Linux is being introduced. I have used Chef in a previous gig, but that was a *nix-only environment. Because this is a primarily-Windows environment, my main workstation is Windows 7 (x64), and I use PowerShell as my main terminal. I created a ~\.chef directory, populated with a knife.rb and my client.pem file. When I run knife client list from ~, I get the expected results.

    I keep my work in Dropbox just in case my laptop should fail or be stolen. When I run knife client list from the repo directory (C:\Users\waldo\Dropbox\_company\projects\chef), I get:

        ERROR: Your private key could not be loaded from C:/home/waldo/.chef/waldog.pem
        Check your configuration file and ensure that your private key is readable

    (Note that the path is incorrect.) This is the progression as I walk up the tree towards my ~, running knife client list at each level:

        C:\Users\waldo\Dropbox\_company\projects\ => Above error
        C:\Users\waldo\Dropbox\_company\ => Above error
        C:\Users\waldo\Dropbox\ => It works! (Expected results)
        C:\Users\waldo\ => Expected results
        C:\Users\waldo\Documents\ => Expected results
        C:\Users\waldo\Documents\GitHub => Expected results
        C:\Users\waldo\Documents\GitHub\aProject\ => Expected results

    What. The. Eff! Now, I know that I can add -c path\to\knife.rb, but that's a HUGE PITA. The question is: why is knife inconsistently reading my ~\.chef\knife.rb, and how can I get around that without incurring carpal tunnel?
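    A plausible explanation, offered as a guess: knife looks for a .chef directory in the current working directory and each parent directory before falling back to ~\.chef, so a stray .chef\knife.rb somewhere under the Dropbox tree (perhaps one committed on a *nix box, which would explain the /home/waldo path in the error) would shadow the home-directory copy for every path beneath it. A quick way to hunt for shadowing copies from cmd (the starting path here is just this example's):

        cd C:\Users\waldo\Dropbox
        dir /s /a /b .chef 2>nul
        dir /s /a /b knife.rb 2>nul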

    Read the article

  • Connecting remotely to an SQL server inside a LAN

    - by vondip
    Hello everyone. I am using SQL Server 2008 inside my home LAN. I've configured it to accept remote connections, and I can now connect to the server from other PCs inside the LAN. The problem arises when I try connecting to the server from a computer outside of my home LAN. I've disabled my router's firewall and I've configured a virtual server on port 1433 forwarding to the correct LAN IP. What's wrong? Why is it not working? Thank you very much for your help!

    Edit: This is the error I keep getting:

        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 25 - Connection string is not valid)

    OK, these are my router's details: Edimax BR-6204Wg. I am not sure how I am supposed to browse google.com. Can you be a bit more specific?
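    Worth noting: "error: 25 - Connection string is not valid" points at the client's connection string rather than at the network path, and SQL Server connection strings separate host and port with a comma, not a colon. A sketch of both checks from the outside machine (the IP here is a placeholder):

        rem raw TCP reachability of the forwarded port
        telnet 203.0.113.10 1433

        rem explicit host,port syntax with the sqlcmd client
        sqlcmd -S tcp:203.0.113.10,1433 -U someuser -P somepassword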

    Read the article

  • IIS 8 URL Redirect on site level

    - by jackncoke
    I am trying to do a simple 301 permanent redirect to another URL in IIS 8. The end result would be that if I navigated to domain2.com I would end up on domain1.com. We are moving from IIS 6 to a new server and have approximately 600+ sites that will be configured on this IIS 8 box. All of these sites run a proprietary CMS and look at the same directory for source code. In IIS 6 I would just go to the Home Directory tab of each site, check the box that says "Permanent Redirect", and provide a URL. IIS 8 has "HTTP Redirect", which looks like it would do the trick, but it is being applied to all the sites in IIS 8, not at the site level like it used to be in IIS 6. I also looked into the URL Rewrite module for IIS 8, but it seems to take rules in the style of a firewall, and I am not sure I could effectively create rules that would cater to 600+ sites. I am looking for the easiest way to have redirects at the site level, so that customers with multiple domains can have their sites redirect to their main domain for SEO purposes. I feel like this was so easily achieved in IIS 6 that I must be overlooking something in the new version.
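    For reference, HTTP Redirect can be scoped per site rather than server-wide: selecting the individual site in IIS Manager before opening the feature writes a per-site setting, and the same thing is scriptable, which matters with 600+ sites. A sketch using appcmd (the site name is hypothetical; this assumes the stock httpRedirect configuration section):

        %windir%\system32\inetsrv\appcmd.exe set config "domain2.com" /section:httpRedirect /enabled:true /destination:"http://domain1.com" /httpResponseStatus:Permanent /commit:apphost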

    Read the article

  • Add folder name to beginning of filename

    - by shekhar
    I have a directory structure as below:

        Folder
            SubFolder1
                FileName1.abc
                FileName2.abc
                ...
            SubFolder2
                FileName11.abc
                FileName12.abc
                ...
            ...

    I want to rename the files inside the subfolders as:

        SubFolder1_FileName1.abc
        SubFolder1_FileName2.abc
        SubFolder2_FileName11.abc
        SubFolder2_FileName12.abc

    i.e. add the folder name at the beginning of the file name, with the delimiter "_". The directory structure should remain unchanged. Note: the beginning of each file name is the same, e.g. File* in the above case. I made the script below:

        for /r "PATH" %%G in (.) do (
            pushd %%G
            for %%* in (.) do set MyDir=%%~n*
            FOR %%v IN (File*.*) DO REN %%v "%MyDir%_%%v"
            popd
        )

    The problem with the above script is that it takes only one subfolder's name and puts it at the beginning of every file name, irrespective of the folder.
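    That symptom is classic batch variable expansion: %MyDir% inside the parenthesized block is expanded once, when the outer for is parsed, so every iteration sees the same value. Enabling delayed expansion and using !MyDir! should fix it; a sketch:

        @echo off
        setlocal enabledelayedexpansion
        for /r "PATH" %%G in (.) do (
            pushd "%%G"
            rem capture the name of the current folder
            for %%* in (.) do set "MyDir=%%~n*"
            rem !MyDir! is expanded at run time, unlike %MyDir%
            for %%v in (File*.*) do ren "%%v" "!MyDir!_%%v"
            popd
        )
        endlocal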

    Read the article

  • Mounting a VirtualBox shared folder on boot with fstab in OpenSuse 11.3

    - by ccook
    I have followed the steps found here; however, the share is not mounted on boot. The share will mount if I run 'mount -a' after booting. Why would the share not mount on boot?

    1 - Set up a virtual machine and install OpenSUSE 11.2.
    2 - Create a shared folder on the host (HostFolder).
    3 - Set up the shared folder in VirtualBox, via the virtual machine details or via Devices > Shared Folders...
    4 - Install the dependencies for running the VirtualBox installer. You need to install the right development kernel package for your machine type (use 'zypper search -i kernel' to see what's installed):

        sudo zypper install make gcc kernel-source kernel-hosttype/default-devel

    5 - Run the virtual machine and go to Devices > Guest Additions. This mounts an ISO image in your OpenSUSE guest.
    6 - Open a root terminal and run:

        cd /usr/src/linux
        make oldconfig && make prepare && make scripts && make dep
        cp ../linux-obj/$HOSTTYPE/default/Module.symvers .
        make prepare

    A commenter on the previously mentioned thread says this step is unnecessary, but it doesn't work without it on my system. I suggest trying step 7 first and returning to step 6 if that fails.
    7 - Run ./VBoxLinuxAdditions-yourhosttype.run from the mounted ISO image.
    8 - Create the shared folder in OpenSUSE (GuestFolder).
    9 - Test with:

        sudo mount -t vboxsf HostFolder /home/user/GuestFolder

    It works? Great! Let's set up the system so it automounts for your regular user account instead of root-only access.
    10 - Add this line to /etc/fstab:

        HostFolder /home/user/GuestFolder vboxsf defaults,uid=1000,gid=1000 0 0

    11 - It works for me, but if it still doesn't automount after a reboot: sudo mount -a
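    A likely cause, sketched here as a guess: /etc/fstab is processed early in boot, before the Guest Additions service has loaded the vboxsf kernel module, so the boot-time mount fails while a later mount -a succeeds. One workaround on OpenSUSE is to keep the entry out of the boot-time pass and mount it from boot.local instead:

        # /etc/fstab -- noauto keeps early boot from trying (and failing)
        HostFolder /home/user/GuestFolder vboxsf noauto,defaults,uid=1000,gid=1000 0 0

        # /etc/init.d/boot.local -- runs late in boot, after vboxsf is available
        mount /home/user/GuestFolder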

    Read the article

  • InstantSSL's certificate no different than a self-signed certificate under Nginx with an IP-accessed address

    - by Absolute0
    I ordered an SSL certificate from InstantSSL and got the following pair of files: my_ip.ca-bundle, my_ip.crt. I also previously generated my own key and crt files using openssl. I concatenated all the crt files:

        cat my_previously_generated.crt my_ip.ca-bundle my_ip.crt > chained.crt

    And configured nginx as follows:

        server {
            ...
            listen 443;
            ssl on;
            ssl_certificate /home/dmsf/csr/chained.crt;
            ssl_certificate_key /home/dmsf/csr/csr.nopass.key;
            ...
        }

    I don't have a domain name, per the client's request. When I open https://my_ip in the browser, Chrome gives me this error:

        The site's security certificate is not trusted! You attempted to reach my_ip, but the server presented a certificate issued by an entity that is not trusted by your computer's operating system. This may mean that the server has generated its own security credentials, which Google Chrome cannot rely on for identity information, or an attacker may be trying to intercept your communications. You should not proceed, especially if you have never seen this warning before for this site.
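    Two things stand out, offered as a guess: the chain order is wrong for nginx (the server certificate must come first, then the intermediates), and the old self-generated certificate doesn't belong in the file at all; a chain that starts with a self-signed certificate looks exactly like a self-signed site to the browser. A sketch of the rebuild and a sanity check:

        # server certificate first, then the CA bundle; nothing else
        cat my_ip.crt my_ip.ca-bundle > chained.crt

        # verify the certificate against the intermediates before reloading nginx
        openssl verify -untrusted my_ip.ca-bundle my_ip.crt
        openssl x509 -in chained.crt -noout -subject -issuer

    (Separately, a certificate has to list the bare IP as its subject for the browser to trust an https://my_ip address without a name mismatch warning.)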

    Read the article

  • Create "raw disk file" from WIM file

    - by Joe Baltimore
    First timer here. I've searched around here but haven't found a question like the one I have; apologies if I missed it. The challenge at hand: produce a "raw disk image file" from a given WIM file. What I am pursuing so far is to use imagex.exe with the "/apply" operation to take the WIM and lay it down in a directory on a server. That seems to produce all the necessary "stuff" I need in that directory. How would I take that content and produce a "raw disk image file"? I'm told the definition of "raw disk image file" is a block-by-block copy of the disk image, which I hope is the output of the imagex.exe /apply command I use currently, but stored in a single file I can hand back to another system in our solution.

        imagex.exe /apply image.wim 1 R:\WimImagePoint

    I would like to take the contents of R:\WimImagePoint and produce the elusive (to me) "raw disk image file". ISO is not what they want, nor is anything requiring WinPE. Any pointers? References to external utilities are welcome. I would like to avoid unmanaged-code solutions as much as possible, but will entertain them if that's the only route. Also, I am not married to the idea of imagex /apply as the starting point; it's just the comfort zone so far.
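    One possible route, sketched under the assumption that external utilities (diskpart, qemu-img) are fair game: apply the WIM into an attached VHD instead of a plain directory, then convert the VHD to raw. A fixed-size VHD is itself a sector-by-sector disk image plus a footer, and qemu-img knows how to strip that:

        create_vhd.txt (a diskpart script; sizes, paths, and the drive letter are examples):
            create vdisk file=C:\img\disk.vhd maximum=20480 type=fixed
            attach vdisk
            create partition primary
            format fs=ntfs quick
            assign letter=V

        diskpart /s create_vhd.txt
        imagex.exe /apply image.wim 1 V:\

        rem detach the vdisk, then convert VHD -> raw
        qemu-img convert -f vpc -O raw C:\img\disk.vhd C:\img\disk.img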

    Read the article

  • Why won't IIS serve my website? - 404 Page Not Found

    - by Giffyguy
    - Built a brand-new server with a fresh copy of Windows Server 2003 Enterprise x86 Edition.
    - Installed the .NET Framework 1.1, 2.0, 3.5, and 4.0.
    - Added the "Domain Controller" and "Application Server" roles.
    - Created a new website and pointed it to a local directory: C:\Inetpub\angryoctopus.net\
    - Added the appropriate host headers: angryoctopus.net, www.angryoctopus.net, TCP port 80, all IPs.
    - Moved the website content into the local directory.
    - Configured the default document in IIS: Default.aspx
    - Enabled ASP.NET for this website and set it to the correct version: 2.0.50727.
    - Configured the zone angryoctopus.net in DNS. Tested DNS lookup to ensure DNS was functional.
    - Opened the website in VS 2008 and rebuilt (and debugged) it to ensure the content was functional.

    I can clearly see that IIS is responding normally by browsing directly to my server's IP address. Since this does not use the angryoctopus host header, the default website is displayed instead: the "Under Construction" page. And yet, after all of this, angryoctopus.net still returns 404. Does anybody know what could be wrong? What troubleshooting steps have I forgotten? Is there a command-line diagnostic that might provide more information?
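    On the command-line diagnostic question: Server 2003 ships IIS administration scripts that list each site's bindings, which is the first thing to check when the right content only appears for the default site. A sketch (assuming the stock scripts are present), plus a DNS spot check against the local server:

        iisweb /query
        nslookup angryoctopus.net 127.0.0.1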

    Read the article

  • No Homegroup Computers, Network Troubleshooter Fails

    - by Mokubai
    I have a problem with my Windows 7 Homegroup between two Windows 7 Home Premium machines. On one machine I get an error; the other machine in the Homegroup is perfectly happy and is able to see and browse the faulty machine as if nothing were wrong. The Network and Sharing Center shows that I am joined to a Homegroup on my "Home" network, and nothing is out of the ordinary. I have tried leaving the Homegroup and rejoining/recreating it several times, and that does nothing at all. Normal browsing to machine names and looking through folders works, but it's a much clunkier way to get at things than the Homegroup facilities.

    Starting the troubleshooter detects problems with a "Peer Networking" (PNRpr or something like that) service not starting, but fails to fix anything. Sure enough, when I view the services via Control Panel > Administrative Tools > Services, I see that both the "Peer Name Resolution Protocol" and "Peer Networking Grouping" services are stopped. Attempting to start "Peer Networking Grouping" gives an error that a dependency service will not start; the only service it depends on is "Peer Name Resolution Protocol", so I try to start that and get an error saying that the "service could not start due to error 0x80630801".

    This has happened before, and I fixed it then by using System Restore to restore the machine to a week earlier, when I knew it had all worked. This time, though, I cannot remember when I last used the Homegroup from this machine, and I've installed quite a bit since, so I don't want to go fumbling through restore points trying to find one that works. Can anyone tell me if there is a way to reset things so that this machine can use the Homegroup again?
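    Error 0x80630801 from the Peer Name Resolution Protocol service is commonly reported to come from a corrupt PNRP identity store, and deleting that store (it is recreated on the next service start) is the usual reset. A sketch from an elevated prompt, assuming the default service names and path:

        net stop p2psvc
        net stop PNRPsvc
        net stop p2pimsvc

        del "%SystemRoot%\ServiceProfiles\LocalService\AppData\Roaming\PeerNetworking\idstore.sst"

        net start p2pimsvc
        net start PNRPsvc
        net start p2psvc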

    Read the article

  • gitweb - fatal: not a git repository

    - by Robert Mason
    So I have set up a simple server running debian stable (squeeze), and have configured git. Using gitolite, I have all functionality (at least the basic clone/push/pull/commit) working. Installation of gitweb went without any issues. However, when I access gitweb, I get a gitweb screen without any repos listed.

        # tail -n 1 /var/log/apache2/error.log
        [DATE] [error] [client IP_ADDRESS] fatal: Not a git repository: '/var/lib/gitolite/repositories/testrepo.git'
        # cd /var/lib/gitolite/repositories/testrepo.git
        # ls
        branches  config  HEAD  hooks  info  objects  refs

    Here is what I see in /var/lib/gitolite/projects.list:

        testrepo.git

    And in /etc/gitweb.conf:

        # path to git projects (<project>.git)
        $projectroot = "/var/lib/gitolite/repositories";
        # directory to use for temp files
        $git_temp = "/tmp";
        # target of the home link on top of all pages
        #$home_link = $my_uri || "/";
        # html text to include at home page
        $home_text = "indextext.html";
        # file with project list; by default, simply scan the projectroot dir.
        $projects_list = "/var/lib/gitolite/projects.list";
        # stylesheet to use
        $stylesheet = "gitweb.css";
        # javascript code for gitweb
        $javascript = "gitweb.js";
        # logo to use
        $logo = "git-logo.png";
        # the 'favicon'
        $favicon = "git-favicon.png";

    What is missing?
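    The combination of "fatal: Not a git repository" in Apache's error log and a repository that looks fine from a root shell usually means the web server's user cannot traverse gitolite's repository directories. A sketch of one way to open that up (the user and group names are guesses; check with ps aux | grep apache and ls -l /var/lib/gitolite):

        # let Apache's user into the gitolite group
        sudo usermod -a -G gitolite www-data

        # make existing repositories group-readable/traversable
        sudo chmod -R g+rX /var/lib/gitolite/repositories

    For repositories created later, gitolite's umask setting in its .gitolite.rc (e.g. 0027 instead of 0077) controls whether the group keeps read access.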

    Read the article

  • Lost permissions on files using wrong chmod syntax, CentOS 5.5

    - by alloutfallout
    Hello, I was trying to remove write permissions on an entire directory, and I used the incorrect command:

        chmod 644 -r sites/default

    I meant to type:

        chmod -R 644 sites/default

    The result was this:

        chmod: cannot access `644': No such file or directory
        $ ls -als sites
        total 24
        4 drwxr-xr-x  5 user group 4096 Jan 11 10:54 .
        4 drwxrwxr-x 14 user group 4096 Jan 11 10:11 ..
        4 drwxr-xr-x  4 user group 4096 Jan  5 01:25 all
        4 d-w-------  3 user group 4096 Jan 11 10:43 default
        4 -rw-r--r--  1 user group 1849 Apr 15  2010 example.sites.php

    I fixed the permissions on the default folder with:

        $ chmod 644 sites/default

    But the following ls shows all the files with red backgrounds and question marks. I can't access any files unless I am root.

        $ ls -als sites/default
        total 0
        ? ?--------- ? ? ? ? ? .
        ? ?--------- ? ? ? ? ? ..
        ? ?--------- ? ? ? ? ? default.settings.php
        ? ?--------- ? ? ? ? ? files
        ? ?--------- ? ? ? ? ? settings.php

    When I log in as root, I can edit all of the files, and their permissions appear correctly. I do not know how to undo the damage caused by using -r with chmod instead of -R. Any Suggestions?
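    What happened with that last command: a directory needs the execute (search) bit to be entered, and mode 644 grants read without execute, so ls can list the names inside sites/default but cannot stat them; hence the question marks for everyone but root. A sketch of a repair, assuming the usual 755-for-directories, 644-for-files layout:

        # restore search bits on directories, then sane file modes, as root
        sudo find sites/default -type d -exec chmod 755 {} +
        sudo find sites/default -type f -exec chmod 644 {} +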

    Read the article

  • Tomcat Custom MBean

    - by Darran
    Does anyone know how to deploy a custom MBean to Tomcat? So far I've found this: http://www.junlu.com/list/3/8871.html. I copied the jar with my MBean to the Tomcat lib directory, so the custom class loader should pick it up. I then followed the instructions, but I kept getting the exception below. My MBean definitely does have a public constructor. If I remove the jar from the Tomcat lib directory I get the same message, which suggests it's not picking up my jar, or my jar is being loaded after the Apache MBean Modeler is running in Tomcat.

        06-Aug-2010 12:14:23 org.apache.tomcat.util.modeler.modules.MbeansSource execute
        SEVERE: Error creating mbean Bean:type=Bean
        javax.management.NotCompliantMBeanException: MBean class must have public constructor
            at com.sun.jmx.mbeanserver.Introspector.testCreation(Introspector.java:127)
            at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.createMBean(DefaultMBeanServerInterceptor.java:2
            at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.createMBean(DefaultMBeanServerInterceptor.java:1
            at com.sun.jmx.mbeanserver.JmxMBeanServer.createMBean(JmxMBeanServer.java:393)
            at org.apache.tomcat.util.modeler.modules.MbeansSource.execute(MbeansSource.java:207)
            at org.apache.tomcat.util.modeler.modules.MbeansSource.load(MbeansSource.java:137)
            at org.apache.catalina.core.StandardEngine.readEngineMbeans(StandardEngine.java:517)
            at org.apache.catalina.core.StandardEngine.init(StandardEngine.java:321)
            at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:411)
            at org.apache.catalina.core.StandardService.start(StandardService.java:519)
            at org.apache.catalina.core.StandardServer.start(StandardServer.java:710)
            at org.apache.catalina.startup.Catalina.start(Catalina.java:581)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
            at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)

    Read the article

  • testdisk - recover partition table

    - by Evaggelos Balaskas
    I destroyed the partition table of my laptop. TestDisk reports the following:

        Disk laptop.img - 250 GB / 232 GiB - CHS 30402 255 63 (RO)
             Partition                  Start        End    Size in sectors
        >P MS Data                     435868     456606          20739 [NO NAME]
         P MS Data                   19232600   19235479           2880 [NO NAME]
         D MS Data                   41945087   83890143       41945057
         D MS Data                   57151486  168579069      111427584
         D MS Data                   67637246  141037565       73400320
         D MS Data                  151523326  193466365       41943040
         D MS Data                  170617328  170618223            896
         D MS Data                  170631168  170634047           2880
         D MS Data                  171338232  171344405           6174 [Boot]
         D MS Data                  172008235  172231918         223684 [NO NAME]
         P MS Data                  193466368  214437887       20971520
         D MS Data                  217321375  225321678        8000304 [root]
         D MS Data                  224923646  308809725       83886080 [media]
         D MS Data                  308809728  420237311      111427584
         D MS Data                  418910206  481824765       62914560 [vmimages]

    My partition table had 3 primary partitions: 1. WinXP Home, 2. /boot, 3. LVM. Inside LVM I had 9 or 10 LVM partitions; one of them was my home (encrypted with LUKS). TestDisk can't recover my partition table or any other partition. The partitions marked [P] don't have any useful data. I want to use dd to extract the partitions and try to recover as many files as I can. Any ideas how I can extract e.g. the [root] LVM partition from the above TestDisk report? I am afraid that my disk was also corrupted.
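    TestDisk's Start/End/Size columns are in 512-byte sectors, so each row can be carved out of the image with dd's skip/count. For the [root] row (start 217321375, size 8000304):

        dd if=laptop.img of=root.img bs=512 skip=217321375 count=8000304

        # then poke at what came out -- just sanity checks
        file root.img
        sudo losetup -f --show root.img

    (bs=512 keeps skip/count in sector units; a larger block size would need the numbers rescaled.)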

    Read the article

  • RAM question in VMware Server 2

    - by ToreTrygg
    Hi, I understand from the VMware Server 2 documentation that VMware Server 2 is capable of running a 64-bit guest OS underneath a 32-bit host OS, as long as the hardware in the box is 64-bit capable.

    Here's my situation. We currently have an underutilized Xeon X3220 quad-core 64-bit server running Server 2003 32-bit with 2GB of RAM (the motherboard is capable of 8GB). The server is currently used mainly for file and print services. It is also running Active Directory, Novell eDirectory, and GroupWise 6.5. We are planning a migration to Microsoft Exchange, so the Novell eDirectory and GroupWise services will eventually be purged from this box, leaving only Active Directory and file and print services. Since this server is underutilized, we are hoping to save hardware costs and virtualize our new Exchange investment.

    My question is this: will VMware allow access to the "invisible" extra memory that 32-bit Windows won't see? Meaning, if we increase the system RAM to the full 8GB (yes, I know the 32-bit host OS will only see a maximum of 4GB), will I be able to assign maybe 5GB to the new Server 2008 64-bit guest running Exchange and leave 3GB for the host OS (or maybe even a 6/2 split)?

    The second part: would it be better to just convert the main OS currently running to an image, convert the machine itself to ESXi, and run both OSes as images under ESXi? Downtime for this box is critical, so my preference is definitely the first option, because it presents very minimal downtime. The second would mean quite a few hours of downtime to image the machine and then convert the image to a VMware image.

    Read the article

  • My PC is powercycling, what's going wrong?

    - by Renai
    So here is my sorry story of woe. My PC had been functioning normally for some time. Last week I bought a cheapish powered USB hub for my home PC, which runs Windows 7, and this weekend I plugged the hub in. At some stage I hibernated the PC, and later on I plugged my Kobo eReader into the hub to charge. Then I started the PC up. Only thing is, it now won't start up. It just power-cycles: on for a second and a half with the fans at full, then off; back on for two seconds, then back off. There's no display at all, and it won't get to the BIOS screen. It looks like anything USB is not functioning: the keyboard and mouse are not lighting up, etc. I've taken out the BIOS battery, reseated the RAM, reseated the graphics card and so on, but my suspicion is that I have blown the USB section of the motherboard somehow. Suggestions? All else failing, where is the best place in Sydney to take this machine to get evaluated? It's a fairly powerful beast; all up it has cost me about $2500 over the years, including upgrades and a recent new graphics card, so I can't just start from scratch.

    Read the article

  • Problems compiling coreutils-8.5 on Solaris 5.10 on Intel platform

    - by PP
    I am having trouble compiling coreutils-8.5 on Solaris 5.10 on the Intel platform using cc. First I had the following error during ./configure:

        checking whether <wchar.h> uses 'inline' correctly... no
        configure: error: <wchar.h> cannot be used with this compiler (/tool/sunstudio12.1/bin/cc -xc99=all -g -D_REENTRANT).

    This seemed similar to the problem in this question. The solution was to edit configure and replace the reference to -xc99=all with -xc99=all,no_lib. This permitted the configure to complete. Then I ran /usr/sfw/bin/gmake and it progressed until I received the following message:

        Making all in src
        gmake[2]: Entering directory `/home/peterp/src/coreutils-8.5/src'
        gmake all-am
        gmake[3]: Entering directory `/home/peterp/src/coreutils-8.5/src'
          CCLD   chroot
        Undefined           first referenced
         symbol                 in file
        eaccess             ../lib/libcoreutils.a(euidaccess.o)
        ld: fatal: Symbol referencing errors. No output written to chroot

    What could cause this problem? PS: I was only compiling coreutils because I wanted colour ls.
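    A guess at what is going on: gnulib's euidaccess replacement calls eaccess() whenever configure detects it, and if the function is declared in the Solaris headers but not actually exported by the libraries being linked, the link fails exactly like this. If that is the case here, overriding the autoconf cache variable (ac_cv_func_eaccess is the standard AC_CHECK_FUNCS naming; confirm with grep eaccess config.log) should force the portable fallback:

        ./configure CC=cc ac_cv_func_eaccess=no
        /usr/sfw/bin/gmake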

    Read the article

  • How to make rewrite rules relative to the .htaccess file

    - by Kendall Hopkins
    Currently I have an .htaccess file like this:

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f [OR]
        RewriteCond %{REQUEST_URI} ^/(always|rewrite|these|dirs)/ [NC]
        RewriteRule ^(.*)$ router.php [L,QSA]

    It works great when the site files are in the document root of the web server (i.e. domain.com/abc.php is /abc.php). But in our current setup (which isn't changeable) this isn't guaranteed: we can sometimes have an arbitrary folder in between the document root and the folder holding the .htaccess file (i.e. domain.com/something/abc.php is /something/abc.php). The problem is that the second RewriteCond then no longer works. Is there any way to test the accessed path against a path relative to the .htaccess file?

    For example: if domain.com/rewrite/ is the directory of the .htaccess file:

        NOT FORCED TO REWRITE -> domain.com/rewrite/index.php
        FORCED TO REWRITE     -> domain.com/rewrite/rewrite/index.php

    If domain.com/ is the directory of the .htaccess file:

        NOT FORCED TO REWRITE -> domain.com/index.php
        FORCED TO REWRITE     -> domain.com/rewrite/index.php
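    One angle worth knowing: in a per-directory (.htaccess) context, the RewriteRule pattern is matched against the path relative to the directory containing the .htaccess file, while %{REQUEST_URI} is always the absolute URI path. Moving the directory test out of the RewriteCond and into a rule pattern therefore makes it position-independent; a sketch:

        RewriteEngine On

        # nonexistent files always go to the router
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^ router.php [L,QSA]

        # these prefixes rewrite too, relative to wherever this .htaccess lives
        RewriteRule ^(always|rewrite|these|dirs)/ router.php [NC,L,QSA]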

    Read the article

  • localhost/127.0.0.1 not working, "Unable to connect"

    - by redconservatory
    I am running some pretty basic PHP sites on Snow Leopard. Usually I just go to my browser and type anything like:

        localhost
        http://localhost
        127.0.0.1
        mycomputername.local

    But suddenly, after installing a gem (compass), none of this is working. I tried:

        sudo apachectl restart

    thinking that I just needed to restart Apache, but no luck. My error log looks like:

        [Mon Mar 26 09:39:08 2012] [warn] child process 45443 still did not exit, sending a SIGTERM
        [Mon Mar 26 09:39:10 2012] [warn] child process 45223 still did not exit, sending a SIGTERM
        ... (the same warning repeats for nine more child processes) ...
        [Mon Mar 26 09:39:11 2012] [notice] caught SIGTERM, shutting down

    I also tried:

        sudo apachectl -k start

    and got the error:

        Syntax error on line 182 of /private/etc/apache2/httpd.conf: Illegal option

    When I look at the code around that line, I see:

        <Directory />
            Options Indexes MultiViews + FollowSymLinks
            AllowOverride All
            Order allow, deny
            Allow from all
        </Directory>
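    The "Illegal option" almost certainly comes from the space in "+ FollowSymLinks": each Options keyword must be a single token, with any +/- prefix attached directly. (Apache also rejects mixing prefixed and unprefixed keywords on one Options line, and "Order allow, deny" must not contain a space either.) A corrected stanza, as a sketch:

        <Directory />
            Options Indexes MultiViews FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

    After editing, apachectl configtest is a quick way to confirm the syntax before restarting.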

    Read the article

  • VirtualHost on WAMPSERVER not working

    - by Martin C
    I currently have WampServer 2.2 set up on my PC. I'm trying to set up a new host called pplocal.local. I made the change in httpd.conf to uncomment this line:

        Include conf/extra/httpd-vhosts.conf

    Then I edited httpd-vhosts.conf and added the following:

        NameVirtualHost 127.0.0.1
        <VirtualHost 127.0.0.1>
            DocumentRoot "E:/wamp2/www/"
            ServerName localhost
        </VirtualHost>
        <VirtualHost 127.0.0.1>
            DocumentRoot "E:/wamp2/www/pp/"
            ServerName pplocal.local
            <Directory "E:/wamp2/www/pp/">
                Options Indexes FollowSymLinks MultiViews
                AllowOverride all
                Order Deny,Allow
                Deny from all
                Allow from 127.0.0.1
            </Directory>
            CustomLog "E:\wamp2\logs\pplocal-access.log" common
            ErrorLog "E:\wamp2\logs\pplocal-error.log"
        </VirtualHost>

    In my Windows hosts file I added:

        127.0.0.1 localhost
        127.0.0.1 pplocal.local

    Then I restarted Apache. If I type localhost in my browser, I get the files at E:/wamp2/www/. If I type pplocal.local in my browser, I get the files at E:/wamp2/www/ instead of those at E:/wamp2/www/pp/. I have followed several tutorials and can't see what I'm doing wrong. I'm new to editing the files associated with Apache, so any advice is appreciated. Thanks.
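    When every name lands on the first vhost like this, Apache usually isn't treating the vhosts as name-based at all, typically because the NameVirtualHost argument and the <VirtualHost> arguments don't agree (the port matters). A sketch of the conventional WAMP form, plus the built-in way to check how Apache parsed things:

        NameVirtualHost *:80
        <VirtualHost *:80>
            DocumentRoot "E:/wamp2/www/"
            ServerName localhost
        </VirtualHost>
        <VirtualHost *:80>
            DocumentRoot "E:/wamp2/www/pp/"
            ServerName pplocal.local
        </VirtualHost>

    Running httpd.exe -S from the Apache bin directory dumps the parsed virtual-host map and flags mismatches.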

    Read the article

  • Getting Dell E6320 with I7 to work with 3 monitors at 1920x1080p x 3

    - by MadBoy
    I want to buy a Dell E6320, which comes with an Intel Core i7-2620M (2.70GHz, 4MB cache, dual core) and Intel HD Graphics 3000. The laptop will come with a docking station, and I want to connect 3 monitors to that docking station so that working at home gives me an additional boost. The docking station will allow me to connect only 2 monitors, so I'm looking at the following other options:

    - A Matrox TRIPLEHEAD2GO Digital Edition or TRIPLEHEAD2GO DP Edition. But reading the Matrox support page, the Intel GPU can't run the highest resolution with 3 monitors connected, and it gets worse: it seems the monitors would have to work at 50Hz. Also, I'm not sure, but it seems that Matrox doesn't split the output into 3 separate monitors but presents it as one big surface (which is a bit of the opposite of what I need).
    - Buy 2, or maybe just 1, USB-based monitor. It would mean having 1 or 2 monitors different from the main one, unless I buy 3 USB-based monitors, which would mean more money to spend. Also, I found only a couple of models, and most of them require USB 3.0 and no other cables to plug in (nice but costly; I couldn't find a decent monitor with only USB for the signal and power connected normally). And the docking station has only one USB 3.0 port. Can I use a hub and still get it to work?
    - Find some converters from digital to USB (I think DisplayLink does some?).
    - Buy a different laptop, but what kind? I need it to be an i7, small (13"), fast and lightweight. At the same time it needs a docking station that I can use at home to connect 3 external monitors.
    - Some other suggested solution...

    Edit: I need the 3 monitors for work, meaning coding in Visual Studio and having Word/Excel/Outlook open. Nothing fancy. Maybe a movie once in a while.

    Read the article

  • CentOS inode usage

    - by MSTF
    We are using a CentOS and cPanel server, but we have an important problem with inode usage: df -i shows the / filesystem using 6.5 million inodes! When I check the number of files in the / directory itself, it has only a few thousand files.

        df -i
        Filesystem      Inodes    IUsed    IFree IUse% Mounted on
        /dev/sda4      6578176  6567525    10651  100% /
        tmpfs          8238094        1  8238093    1% /dev/shm
        /dev/sdi1     61054976      169 61054807    1% /backup
        /dev/sda1        51296       38    51258    1% /boot
        /dev/sda2            0        0        0     - /boot/efi
        /dev/sdc1      7290880     1252  7289628    1% /database
        /dev/sdb2      4096000    53258  4042742    2% /home
        /dev/sdd1      7290880     3500  7287380    1% /home2
        /dev/sde1      7290880    68909  7221971    1% /home3
        /dev/sdg1      7290880    68812  7222068    1% /home5
        /dev/sdh1      7290880   695076  6595804   10% /home6
        /dev/sdf1      7290880    58658  7232222    1% /tmp

        df -h
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/sda4        99G   30G   65G  32% /
        tmpfs            32G     0   32G   0% /dev/shm
        /dev/sdi1       917G  270G  601G  32% /backup
        /dev/sda1       788M   80M  669M  11% /boot
        /dev/sda2       400M  296K  400M   1% /boot/efi
        /dev/sdc1       110G  1.5G  103G   2% /database
        /dev/sdb2        62G  1.1G   58G   2% /home
        /dev/sdd1       110G   79G   26G  76% /home2
        /dev/sde1       110G  3.9G  101G   4% /home3
        /dev/sdg1       110G   51G   54G  49% /home5
        /dev/sdh1       110G   64G   41G  62% /home6
        /dev/sdf1       110G  611M  104G   1% /tmp

    The sda disk has just the operating system and cPanel; there are no accounts, databases, or tmp files on sda. Why is sda using so many inodes? Note: all disks are 120GB SSDs. Thanks.
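    Since df -i counts the whole filesystem rather than just the top-level directory, those 6.5 million inodes are buried somewhere beneath /. A one-liner to bucket every file on the root filesystem by top-level directory, without crossing into /home, /tmp, and the other mounts:

        find / -xdev | awk -F/ '{print $2}' | sort | uniq -c | sort -rn | head

    On cPanel boxes the usual suspects are under /var (mail queues, session files, logs), but the counts above will say for sure.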

    Read the article

  • What could cause random files being downloaded without permission?

    - by Dustin
    I have been having issues lately with a certain directory. It seems someone, or something, is placing files into it; any attempt to delete them succeeds, HOWEVER they reappear over time (maybe not the exact same ones, but random files). I will provide what information I can and several pictures of the problem:

        sandbox.mys4l.com/visual/files/b1.jpg

    Files like this have been appearing in my /visual/ folder, and I have no clue where they are coming from.

        sandbox.mys4l.com/visual/files/b2.jpg

    This is what is inside one of those weird files; it appears to be nothing problematic.

        sandbox.mys4l.com/visual/files/b4.jpg

    As you can see, in the time it took me to take the first picture, more odd files showed up. These .log files are also being uploaded to this directory, and I know I didn't put them there.

        sandbox.mys4l.com/visual/files/b7.jpg

    This is inside one of those mysterious .log files; I'm not sure what it's all about. These files only appear in this specific directory, and I'm not sure of their origin, only that they will not go away. I have done a full system scan at least twice with an up-to-date virus scanner and have looked for an unknown script which may be writing them there. Nothing has come up, so I come to you guys, as I hear this is the best place to find answers. I hope this problem has a solution!

    Read the article

  • WebHost Manager - Apache's VHost isn't matching the DNS entry

    - by Trans
    I've used cPanel's WebHost Manager to create a new host on my VPS. I then used my HOSTS file to point fake.com to the relevant IP address. The problem I'm having now is that Apache isn't recognizing the VHost, or something: it's just loading the default entry and 404'ing every document I try to GET. Here's the VHost entry:

        NameVirtualHost 0.0.0.209:80
        NameVirtualHost 0.0.0.211:80

        <VirtualHost 0.0.0.209:80>
            ServerName fake.com
            ServerAlias www.fake.com
            DocumentRoot /home/fakecom/public_html
            ServerAdmin [email protected]
            ## User fakecom # Needed for Cpanel::ApacheConf
            <IfModule mod_suphp.c>
                suPHP_UserGroup fakecom fakecom
            </IfModule>
            <IfModule !mod_disable_suexec.c>
                SuexecUserGroup fakecom fakecom
            </IfModule>
            CustomLog /usr/local/apache/domlogs/fake.com-bytes_log "%{%s}t %I .\n%{%s}t %O ."
            CustomLog /usr/local/apache/domlogs/fake.com combined
            Options -ExecCGI -Includes
            RemoveHandler cgi-script .cgi .pl .plx .ppl .perl
            ScriptAlias /cgi-bin/ /home/fakecom/public_html/cgi-bin/
        </VirtualHost>

    I've Googled this profusely, and all that's returned is 'DNS errors. Wait for it to propagate.' That's obviously not my problem, since I'm using HOSTS. What else could be causing this?

    EDIT: Forgot to mention: http://fake.com/~fakecom/test.html loads just fine, so fake.com is pointing to the right IP.
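    A quick way to see which vhost Apache actually selects for a given name is to dump the parsed virtual-host map; when a request falls through to the default site, the usual cause is that the address/port on the <VirtualHost> line doesn't match the IP the request arrives on (this vhost is bound to one specific IP, so the HOSTS entry must point at exactly that one):

        /usr/local/apache/bin/apachectl -S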

    Read the article

  • Truncated content with Apache on Vagrant VM

    - by Nev Stokes
    I'm using Vagrant to run a CentOS VM in order to try to achieve local development parity with our live servers. I've symlinked /var/www/html to the /vagrant shared directory and am forwarding port 80 for viewing at http://localhost:4567. I'm developing using Sublime Text 2 on OS X Mountain Lion. Once I figured out that iptables was tripping me up, all was well and good, until I noticed something strange. I have a sample HTML page consisting of several paragraphs of lorem copy. I can view this fine in a browser on OS X. But when I make an edit, for example removing a paragraph, and refresh, the content is truncated, with the paragraph I deleted still visible. When I cat the files on the server I can see the changes I made, but these aren't reflected even when I curl localhost. I strongly suspect it's a problem with my Apache settings (with which I didn't really tinker), as the issue doesn't arise when I stop Apache and instead run:

        sudo python -m SimpleHTTPServer 80

    in the directory to view pages. What gives?
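    This matches the well-known sendfile problem with VirtualBox shared folders: Apache hands the file to the kernel's sendfile(2), which serves stale contents and lengths for files changed on the host, so edits show up in cat but not over HTTP. The usual fix is to turn sendfile off in the Apache config (httpd.conf or a conf.d snippet on CentOS) and restart:

        EnableSendfile off
        # if truncation persists, memory-mapping has the same caching issue:
        # EnableMMAP off

        sudo service httpd restart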

    Read the article
