Search Results

Search found 4606 results on 185 pages for 'mod dir'.

Page 162/185

  • Computer sending data while turned off

    - by Nicklas Ansman
    I have a somewhat strange problem (which could have an easy and obvious solution for all I know). My problem is that when I've booted Ubuntu (now 10.4, but the same problem occurred with 9.10) and then turn the machine off, it starts sending a HUGE amount of data via the ethernet cable; so much, in fact, that my router can't handle it and stops responding. As far as I can tell the computer is completely turned off, with no fans spinning. I can add that if I boot Windows I do not have this problem, just when exiting Ubuntu. There are two "fixes" for my problem:

    - Pull the ethernet cable until the next boot
    - Turn off power to the PSU and wait for the capacitors to discharge

    Is there anyone who knows what could be going on? I'd be happy to post some logs or conf files. Currently I'm using the ethernet port on my motherboard, which is an Asus P6T Deluxe V2 with an updated version of the BIOS (maybe not the latest, but since this only happens when I've been in Ubuntu I don't want to mess with the BIOS too much). Regards, Nicklas

    Update 1: The router is a D-Link DIR 655 with the latest firmware.

    Update 2: I've now reinstalled Ubuntu (with 10.4) and I still experience the same problem.
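
    One thing worth ruling out (an editor's assumption, not something from the original post) is the NIC's Wake-on-LAN state: Linux drivers sometimes leave the port powered and active after shutdown where Windows drivers do not. A minimal sketch, assuming the interface is eth0:

        # inspect the current WOL setting (the "Wake-on" line)
        sudo ethtool eth0 | grep -i wake
        # disable Wake-on-LAN for this session; run it from a shutdown script to make it stick
        sudo ethtool -s eth0 wol d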

    Read the article

  • Did chkdsk make it harder to restore files?

    - by neyl
    My friend asked me to try and fix his loaded Sansa Clip+ which wasn't playing. After opening it in MSC mode I discovered that the Music directory was empty and the total of all files was only a few MB, yet Disk Properties showed the drive as 7 GB full. I then ran Tools - Error Checking and Windows dutifully informed me that the disk was corrupt and that I should run it again, allowing Windows to fix errors. I did that, and it told me everything was fixed and that all files were placed in the FOUND.000 dir. FOUND.000 was about 7.5 GB, holding FILE0000.CHK through FILE1546.CHK. (I am aware of methods like ChkBack to scan and convert to mp3 etc., BUT the original filenames and structure are needed!)

    Now I started getting worried that I had made things worse! I have plenty of experience with data recovery programs (Recuva, Restore My Files, etc.) and I was anyhow planning to use them to scan the drive. But NOW, after CHKDSK "fixed" the drive, maybe it modified critical FAT information vital for data recovery. So I ran these programs and got 0!!! No trace of files! I tried a ton of recovery programs with the same results, TILL EaseUS Data Recovery Wizard found all the files and I purchased the program for $55!

    My questions:
    1. In your opinion, did running CHKDSK with automatic fixing of errors make matters worse (i.e. many data recovery programs didn't find a trace, and they would have done if not for CHKDSK), or was the filesystem too corrupt anyhow for regular file recovery programs?
    2. If I were a professional, would I be responsible for having run CHKDSK with automatic fixing?
    3. Do you know of a better data recovery program than EaseUS Data Recovery Wizard? In my experience I haven't found better!

    Thanks

    Read the article

  • Backup script to FTP with timed subfolders

    - by Frederik Nielsen
    I want to make a backup script that makes a .tar.gz of a folder I define, say e.g. /root/tekkit/world. This .tar.gz file should then be uploaded to an FTP server and named by the time it was uploaded, for example: 07-10-2012-13-00.tar.gz

    How should such a backup script be written? I already figured out the .tar.gz part; I just need the naming and the uploading to FTP. I know that FTP is not the most secure way to do it, but as it is non-sensitive data, and FTP is the only option I have, it will do.

    Edit: I ended up with this script:

        #!/bin/bash

        # have some path predefined for backup unless one is provided as first argument
        BACKUP_DIR="/root/tekkit/world/"
        TMP_DIR="/tmp/tekkitbackup/"
        FINISH_DIR="/tmp/tekkitfinished/"

        # construct name for our archive
        TIME=$(date +%d-%m-%Y-%H-%M)

        if [ -n "$1" ]; then
            BACKUP_DIR="$1"
        fi

        echo "Backing up dir ... $BACKUP_DIR"
        # make sure both working directories exist
        mkdir -p "$TMP_DIR" "$FINISH_DIR"
        cp -R "$BACKUP_DIR" "$TMP_DIR"
        cd "$FINISH_DIR"
        tar czvfp "tekkit-$TIME.tar.gz" -C "$TMP_DIR" .

        # create upload script for lftp
        cat <<EOF > lftp.upload.script
        open server
        user user password
        lcd $FINISH_DIR
        mput tekkit-$TIME.tar.gz
        exit
        EOF

        # start backup using lftp and the script we created;
        # if all went well, print a simple message and clean up
        lftp -f lftp.upload.script && ( echo "Upload successful" ; rm lftp.upload.script )
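
    A small usage sketch (the installed path and schedule are assumptions, not from the original post): once the script works interactively, cron supplies the "timed" part, e.g. a daily run at 13:00:

        # /etc/crontab entry; the script path is hypothetical
        0 13 * * * root /root/bin/tekkit-backup.sh >> /var/log/tekkit-backup.log 2>&1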

    Read the article

  • (help help!!!) Easy_install the wrong version of python modules (Mac OS)

    - by user71415
    I installed Python 2.7 on my Mac. When typing "python" in the terminal, it shows:

        Ma-Xiaolongs-MacBook-Pro-2:~ MaXiaolong$ python
        Python 2.7 (r27:82508, Jul 3 2010, 20:17:05)
        [GCC 4.0.1 (Apple Inc. build 5493)] on darwin
        Type "help", "copyright", "credits" or "license" for more information.

    The Python version is correct here. But when I try to easy_install some modules, the system installs them for Python 2.6, and they cannot be imported into Python 2.7. And of course I cannot do what I need in my code. Here's an example of easy_install graphy:

        Ma-Xiaolongs-MacBook-Pro-2:~ MaXiaolong$ easy_install graphy
        Searching for graphy
        Reading pypi.python.org/simple/graphy/
        Reading http://code.google.com/p/graphy/
        Best match: Graphy 1.0.0
        Downloading http://pypi.python.org/packages/source/G/Graphy/Graphy-1.0.0.tar.gz#md5=390b4f9194d81d0590abac90c8b717e0
        Processing Graphy-1.0.0.tar.gz
        Running Graphy-1.0.0/setup.py -q bdist_egg --dist-dir /var/folders/fH/fHwdy4WtHZOBytkg1nOv9E+++TI/-Tmp-/easy_install-cFL53r/Graphy-1.0.0/egg-dist-tmp-YtDCZU
        warning: no files found matching '.tmpl' under directory 'graphy'
        warning: no files found matching '.txt' under directory 'graphy'
        warning: no files found matching '.h' under directory 'graphy'
        warning: no previously-included files matching '.pyc' found under directory '.'
        warning: no previously-included files matching '~' found under directory '.'
        warning: no previously-included files matching '.aux' found under directory '.'
        zip_safe flag not set; analyzing archive contents...
        graphy.all_tests: module references __file__
        Adding Graphy 1.0.0 to easy-install.pth file
        Installed /Library/Python/2.6/site-packages/Graphy-1.0.0-py2.6.egg
        Processing dependencies for graphy
        Finished processing dependencies for graphy

    So it installs graphy for Python 2.6. Can someone help me with it? I just want to set my default easy_install version to 2.7... Thank you very much!!!!!!
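
    A hedged sketch of one way out (the version-suffixed script is standard setuptools behavior, but the bootstrap URL and paths are assumptions): the easy_install on the PATH belongs to Apple's 2.6 install, so bootstrapping setuptools under the 2.7 interpreter yields a script that targets the right site-packages:

        # fetch the setuptools bootstrapper (URL is an assumption; any setuptools
        # distribution built for 2.7 would do)
        curl -O http://peak.telecommunity.com/dist/ez_setup.py
        python2.7 ez_setup.py        # installs easy_install-2.7 alongside Python 2.7
        easy_install-2.7 graphy      # eggs now land in 2.7's site-packages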

    Read the article

  • Apache and Virtual Hosts Problem on OS X

    - by Charles Chadwick
    I recently formatted and installed my iMac. I am running 10.6.5. Prior to this format, I had the default Apache web server up and running with several virtual hosts, and everything ran beautifully. After formatting, I set everything back up again, and now Apache is acting funny. Here is a description of what I have going on.

    My default root directory for the Apache web server is pointed at an external hard drive. In my httpd.conf, here is what I have:

        DocumentRoot "/Storage/Sites"

    Then a few lines beneath that:

        <Directory />
            Options FollowSymLinks
            AllowOverride All
            Order deny,allow
            Allow from all
        </Directory>

    And then beneath that:

        <Directory "/Storage/Sites">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            Allow from All
        </Directory>

    At the end of this file, I have commented out the user dir include conf file:

        Include /private/etc/apache2/extra/httpd-userdir.conf

    and uncommented the virtual hosts conf file:

        Include /private/etc/apache2/extra/httpd-vhosts.conf

    Moving on, I have the following entry in my vhosts file:

        <VirtualHost *:80>
            DocumentRoot "/Storage/Sites/mysite"
            ServerName mysite.dev
        </VirtualHost>

    I also have a host record in my /etc/hosts file that points mysite.dev to 127.0.0.1 (I also tried using my router IP, 192.168.1.2). The problem I am coming across is that despite having PHP files in /Storage/Sites/mysite, the server is still looking at /Storage/Sites. I know this because the DocumentRoot contains a PHP file with phpinfo() (whereas the index.php file in mysite has different code). I have tried setting up other virtual hosts, but they are still doing the same thing. Also, "NameVirtualHost *:80" is in my vhosts file; I saw that as a solution on another thread here, but it doesn't seem to make a difference. Any ideas on this? Let me know if this is not enough information.
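
    Two hedged diagnostics (an editor's suggestion, not from the post) that usually pinpoint this: a vhost that never matches falls through to the main DocumentRoot, and Apache will tell you how it parsed the vhost map:

        apachectl configtest   # confirms the config files being read parse cleanly
        apachectl -S           # dumps the parsed NameVirtualHost/VirtualHost map and
                               # shows which server is the default for *:80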

    Read the article

  • /usr/bin/install hangs, apparently due to SELinux

    - by Cooper
    I'm trying to use the GNU coreutils install utility, however it is hanging:

        /usr/bin/install -v test_file test_dir/
        `test_file' -> `test_dir/test_file'

    I see the same behavior whether I run as a normal user or as root/sudo. I ran an strace -f, and this is the end of the output:

        ...
        read(4, "<username>\t-d\tsystem_u:object_r:ho"..., 4096) = 2197 <0.000012>
        brk(0x6e3b1000)                         = 0x6e3b1000 <0.000009>
        mmap(NULL, 29138944, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2abd831ae000 <0.000014>
        munmap(0x2abd815dd000, 29138944)        = 0 <0.003466>

    The read() is reading from /etc/selinux/targeted/contexts/files/file_contexts.homedirs, apparently successfully. It appears that the process hangs right after the munmap, but continues to eat 100% CPU.

    My two questions are:
    1. Any good way to see what is going on with the process? I'm currently too lazy to compile a debug version of install that I can run gdb on, but a strong suggestion in an answer here may motivate me to do so if needed.
    2. Any idea what the SELinux issue could be? I'm not too familiar with SELinux.

    Additional info of possible relevance:

        # ls -Z
        drwxr-xr-x  my_user 7001 user_u:object_r:user_home_t      test_dir
        -rw-r--r--  my_user 7001 user_u:object_r:user_home_t      test_file
        # id
        ... context=user_u:system_r:unconfined_t
        # uname -a
        Linux hostname 2.6.18-238.1.1.el5 #1 SMP Tue Jan 4 13:32:19 EST 2011 x86_64 x86_64 x86_64 GNU/Linux

    I am suspicious that SELinux + Quest Authentication Services (QAS) is causing the issue. QAS is generally well behaved, but it did cause /etc/selinux/targeted/contexts/files/file_contexts.homedirs to get quite large (~18k users, ~23 lines per user).

    Update: install -v -Z user_u:object_r:user_home_t file dir/ seems to work. Can anyone suggest why, given that SELinux is in permissive mode (see comments)?
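
    A hedged line of attack (editor's assumption): the hang correlates with install computing a default context from the huge file_contexts.homedirs, and passing -Z sidesteps that lookup. The libselinux tools can time that lookup in isolation, and strace can profile the hung process without a debug build:

        # does the context lookup itself crawl? matchpathcon consults the same file_contexts data
        time matchpathcon test_dir/test_file
        # syscall summary of the running process; a near-empty summary with 100% CPU
        # suggests it is spinning in userspace (e.g. regex matching), not in I/O
        strace -c -p <pid_of_install>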

    Read the article

  • ksh Auto-Completion PuTTY Configuration

    - by Nitrodist
    I'm having a bit of a problem configuring my PuTTY client to work with the auto-completion feature in the ksh shell. I do a listing on the root with the directories /home and /homeroot and it returns the directories in a list just fine. I can't select one, though, by hitting X + = (where X is the number):

        /home/nitrodist>ls /h #hits esc + =
        1) home/
        2) homeroot/
        #hits 2 + = for the 'homeroot' dir
        1) home/
        2) homeroot/
        #hits just the '=' key
        1) home/
        2) homeroot/

    Any ideas? I've su -'d to another user who can actually do it with their PuTTY session and I can't do it there, which makes me think it's a PuTTY configuration issue. This is running on a ksh93 shell on HP-UX, if that makes any difference. Here's my ksh config:

        /home/campbelm>set -o
        Current option settings
        allexport        off
        bgnice           on
        emacs            off
        errexit          off
        gmacs            off
        ignoreeof        off
        interactive      on
        keyword          off
        markdirs         off
        monitor          on
        noexec           off
        noclobber        off
        noglob           off
        nolog            off
        notify           off
        nounset          off
        privileged       off
        restricted       off
        trackall         off
        verbose          off
        vi               on
        viraw            on
        xtrace           off
        /home/campbelm>

    Read the article

  • How to repair damaged repository (which has a centralized .svn directory)?

    - by Heinrich Ulbricht
    I recently upgraded my TortoiseSVN installation to version 1.7.1. This forced me to upgrade my working copy as well. The upgrade removed all (but one) of the .svn directories from all subdirectories, leaving only one in the root. Now out of the blue (of course; I suspect my antivirus software) there is an error when I, for example, try to clean up the working copy. I am also not able to commit anything. The error message when cleaning up is:

        Cleanup failed to process the following paths:
        C:\svn
        Can't open file 'C:\svn\.svn\pristine\73\73bcc5fa7819f84f56b81dfa0236f0aac7b7d404.svn-base':
        The system cannot find the file specified.

    I traced the error to the presence of one directory within the working copy. If I rename it, everything works; when it is present I get the error. I also deleted it and checked it out again. No change, the error persists.

    With previous versions I could repair damage in the .svn easily: just delete the offending folder and check out again. I cannot do this anymore because now the .svn dir is centralized. What could I do to repair my working copy?
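
    A hedged recovery sketch (editor's suggestion; it assumes any uncommitted changes can be copied aside and re-applied). Since 1.7 keeps the pristine store once per working copy, the usual repair when cleanup keeps failing is to salvage local changes and rebuild the whole copy:

        svn cleanup C:\svn                   REM worth one more try with the antivirus scanner disabled
        svn status C:\svn > modified.txt     REM record what carries local modifications
        REM copy the modified files aside, then check out fresh (URL is a placeholder):
        svn checkout <repository-url> C:\svn-new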

    Read the article

  • IIS 7, FastCGI, PHP and custom php.ini files

    - by Marlon
    I'm running PHP 5.3, FastCGI, and IIS 7 on Windows Server 2008. I have a site which I would like to configure with its own php.ini settings, but things aren't working as expected. I am following the tutorial located here. This is what I have done so far:

    1) Configured a new website with its own AppPool.
    2) Selected PHP 5.3.6 from the PHP Manager available on the website home in IIS (not the web server home, which sets the global version of PHP).
    3) Added the following lines to the <fastCgi> section of the applicationHost.config file located at system32/inetsrv/config:

        <application fullPath="C:\Program Files (x86)\PHP\v5.3\php-cgi.exe"
                     arguments="-d open_basedir=C:\inetpub\wwwroot\kickasswebsite.com"
                     maxInstances="4" idleTimeout="300" activityTimeout="30"
                     requestTimeout="90" instanceMaxRequests="200" protocol="NamedPipe"
                     queueLength="1000" flushNamedPipe="false" rapidFailsPerMinute="10">
          <environmentVariables>
            <environmentVariable name="PHPRC" value="c:\inetpub\wwwroot\kickasswebsite.com" />
          </environmentVariables>
        </application>

    4) I then created a php.ini file located in C:\inetpub\wwwroot\kickasswebsite.com (the root of the website), containing:

        register_globals = on

    5) I then ran test.php, which simply outputs everything the call to phpinfo() returns.

    At this point, I observe that the global setting is register_globals = off (as it should be), but the local setting is also register_globals = off, even though I specified it differently in the php.ini file I created at the root of the site. Furthermore, I see these settings in the phpinfo() output:

        Configuration File (php.ini) Path        C:\Windows
        Loaded Configuration File                C:\Program Files (x86)\PHP\v5.3\php.ini
        Scan this dir for additional .ini files  (none)
        Additional .ini files parsed             (none)

    What am I messing up, or is there a different way to go about this?

    Read the article

  • "one-off" use of http_proxy in a Chef remote_file resource

    - by user169200
    I have a use case where most of my remote_file resources and yum resources download files directly from an internal server. However, there is a need to download one or two files with remote_file from outside our firewall, which must go through an HTTP proxy. If I set the http_proxy setting in /etc/chef/client.rb, it adversely affects the recipe's ability to download yum and other files from internal resources.

    Is there a way to have a remote_file resource download a remote URL through a proxy without setting the http_proxy value in /etc/chef/client.rb? In my sample code below, I'm downloading a redmine bundle from rubyforge.org, which requires my servers to go through a corporate proxy. I came up with a ruby_block before and after the remote_file resource that sets the http_proxy and "unsets" it. I'm looking for a cleaner way to do this.

        ruby_block "setenv-http_proxy" do
          block do
            Chef::Config.http_proxy = node['redmine']['http_proxy']
            ENV['http_proxy'] = node['redmine']['http_proxy']
            ENV['HTTP_PROXY'] = node['redmine']['http_proxy']
          end
          action node['redmine']['rubyforge_use_proxy'] ? :create : :nothing
          notifies :create_if_missing, "remote_file[redmine-bundle.zip]", :immediately
        end

        remote_file "redmine-bundle.zip" do
          path "#{Dir.tmpdir}/redmine-#{attrs['version']}-bundle.zip"
          source attrs['download_url']
          mode "0644"
          action :create_if_missing
          notifies :decompress, "zipp[redmine-bundle.zip]", :immediately
          notifies :create, "ruby_block[unsetenv-http_proxy]", :immediately
        end

        ruby_block "unsetenv-http_proxy" do
          block do
            Chef::Config.http_proxy = nil
            ENV['http_proxy'] = nil
            ENV['HTTP_PROXY'] = nil
          end
          action node['redmine']['rubyforge_use_proxy'] ? :create : :nothing
        end

    Read the article

  • SWATCH - what am I doing wrong?

    - by Brian Dunbar
    What I want/need/desire is to log when a user logs into my FTP server. Problem: I can't make swatch work the way I should be able to. This data is logged to a file, but of course these logs are not kept very long. I can't keep the logs around forever, but I can extract data from them, analyze it, and store the results elsewhere. If there is a better way to do this than the following, I'm all ears.

    Swatch version 3.2.3, Perl 5.12, FTP: VSFTP, OS (test): OS X 10.6.8, OS (production): Solaris.

    From man I see I can pass contents to a command, so I should be able to echo those values to a file and do a sed/cut/uniq thing on them for stats:

        $ man swatch
        (snip)
        exec command
            Execute command. The command may contain variables which are substituted with
            fields from the matched line. A $N will be replaced by the Nth field in the
            line. A $0 or $* will be replaced by the entire line.

    Swatch file .swatchrc:

        watchfor /OK LOGIN/
            echo=red
            pipe "echo "0: $0 1:$1 2:$2 3:$3 4:$4 5:$5" >> /Users/bdunbar/dev/ftplog/output.txt"

    Launched with:

        $ swatch -c /Users/bdunbar/.swatchrc --script-dir /Users/bdunbar/dev/ftplog -t /Users/bdunbar/dev/ftplog/vsftpd.log &

    Test:

        echo "Mon July 9 03:11:07 2012 [pid 14938] [aetech] OK LOGIN: Client "206.209.255.227"" >> vsftpd.log

    Results: it's echoing to TTY. This is not needed or desired on the server, but it does tell me things are working:

        ftplog *** swatch version 3.2.3 (pid:25780) started at Mon Jul 9 15:23:33 CDT 2012
        Mon July 9 03:11:07 2012 [pid 14938] [aetech] OK LOGIN: Client 206.209.255.227

    Results: bad! I appear to not be sending the variables to text:

        $ tail -f output.txt
        0: /Users/bdunbar/dev/ftplog/.swatch_script.25780 1: 2: 3: 4: 5:
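
    A hedged reading of what is going wrong (an editor's guess, grounded only in the man excerpt quoted above): the inner double quotes terminate the pipe string early, so the shell, not swatch, expands $0..$5, and the shell's $0 is the generated script's own name, which matches the output seen. The man page also attributes $N substitution to exec rather than pipe. A sketch of a corrected stanza:

        watchfor /OK LOGIN/
            echo=red
            exec "echo '0: $0 1:$1 2:$2 3:$3 4:$4 5:$5' >> /Users/bdunbar/dev/ftplog/output.txt"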

    Read the article

  • squid bypass for a domain

    - by krisdigitx
    I am using squid with adzap. Is it possible to have squid/adzap not cache a particular domain, e.g. cnn.com? This is my squid.conf file:

        #
        # Recommended minimum configuration:
        #
        acl manager proto cache_object
        acl localhost src 127.0.0.1/32
        #acl localhost src ::1/128
        acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
        #acl to_localhost dst ::1/128

        # Example rule allowing access from your local networks.
        # Adapt to list your (internal) IP networks from where browsing
        # should be allowed
        acl localnet src 192.168.1.0/24
        acl localnet src 192.168.2.0/24

        acl SSL_ports port 443
        acl Safe_ports port 80          # http
        acl Safe_ports port 21          # ftp
        acl Safe_ports port 443         # https
        acl Safe_ports port 70          # gopher
        acl Safe_ports port 210         # wais
        acl Safe_ports port 1025-65535  # unregistered ports
        acl Safe_ports port 280         # http-mgmt
        acl Safe_ports port 488         # gss-http
        acl Safe_ports port 591         # filemaker
        acl Safe_ports port 777         # multiling http
        acl CONNECT method CONNECT

        #
        # Recommended minimum Access Permission configuration:
        #
        # Only allow cachemgr access from localhost
        http_access allow manager localhost
        http_access deny manager

        # Deny requests to certain unsafe ports
        http_access deny !Safe_ports

        # Deny CONNECT to other than secure SSL ports
        http_access deny CONNECT !SSL_ports

        # We strongly recommend the following be uncommented to protect innocent
        # web applications running on the proxy server who think the only
        # one who can access services on "localhost" is a local user
        #http_access deny to_localhost

        #
        # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
        #
        # Example rule allowing access from your local networks.
        # Adapt localnet in the ACL section to list your (internal) IP networks
        # from where browsing should be allowed
        http_access allow localnet
        http_access allow localhost

        # And finally deny all other access to this proxy
        http_access deny all

        # Squid normally listens to port 3128
        http_port xxx.xxx.xxx.yyy:3128 transparent
        visible_hostname proxyserver.local

        # We recommend you to use at least the following line.
        hierarchy_stoplist cgi-bin ?

        # Uncomment and adjust the following to add a disk cache directory.
        cache_dir ufs /var/spool/squid 1024 16 256

        # Leave coredumps in the first cache dir
        coredump_dir /var/spool/squid

        # Add any of your own refresh_pattern entries above these.
        refresh_pattern ^ftp:           1440    20%     10080
        refresh_pattern ^gopher:        1440    0%      1440
        refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
        refresh_pattern .               0       20%     4320

        access_log /var/log/squid/squid.log squid
        access_log syslog squid
        redirect_program /usr/local/adzap/scripts/wrapzap

    Fixed using:

        acl allow_domains dstdomain www.cnn.com
        always_direct allow allow_domains
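
    A hedged footnote (editor's note, not from the post): always_direct controls whether requests skip cache peers and go straight to the origin server; it does not by itself stop squid from storing the reply. If the goal is to never cache the domain at all, the usual addition is a cache deny rule:

        # sketch; the leading dot matches subdomains as well
        acl nocache_domains dstdomain .cnn.com
        cache deny nocache_domains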

    Read the article

  • Windows 7: L10N mechanics

    - by John Sonderson
    I have a localized version of Windows 7. I can't figure out where Windows gets the names for files and directories on the system. For instance, consider the following (default) files:

        > cd C:\Users\Public\Pictures\Sample Pictures
        > dir
        Chrysanthemum.jpg
        Desert.jpg
        ...

    When I view these files in the default file explorer I see these names:

        Crisantemo.jpg
        Deserto.jpg
        ...

    This seems to imply that each file can somehow be assigned a localized name somewhere. However I cannot figure out how. I would appreciate it if someone could shed some light on this issue. Thanks.

    UPDATE: The desktop.ini file in the folder containing Chrysanthemum.jpg contains the following entries. The .dll files used to translate the various resources are unfortunately not human-readable, and I have no clue as to how they could be generated for other files created by the user, but they serve the purpose and solve the mystery which led to the post. Thanks.

        [LocalizedFileNames]
        Chrysanthemum.jpg=@%systemroot%\system32\SampleRes.dll,-101
        Desert.jpg=@%systemroot%\system32\SampleRes.dll,-102
        Hydrangeas.jpg=@%systemroot%\system32\SampleRes.dll,-103
        Jellyfish.jpg=@%systemroot%\system32\SampleRes.dll,-104
        Koala.jpg=@%systemroot%\system32\SampleRes.dll,-105
        Tulips.jpg=@%systemroot%\system32\SampleRes.dll,-106
        Lighthouse.jpg=@%systemroot%\system32\SampleRes.dll,-107
        Penguins.jpg=@%systemroot%\system32\SampleRes.dll,-108

        [.ShellClassInfo]
        LocalizedResourceName=@%SystemRoot%\system32\shell32.dll,-21805
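
    For folders of your own, a hedged prerequisite worth knowing (an editor's assumption, not verified against this post): Explorer only consults a folder's desktop.ini when the folder itself carries the read-only or system attribute, so marking the folder is the usual first step before custom entries take effect:

        rem mark the folder so Explorer reads its desktop.ini (folder name is hypothetical)
        attrib +s "C:\MyPictures"
        attrib +h +s "C:\MyPictures\desktop.ini"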

    Read the article

  • Can MySQL use multiple data directories on different physical storage devices

    - by sirlark
    I am running MySQL with its data dir on a 128 GB SSD. I am dealing with large datasets (~20 GB) that are loaded and processed weekly, each stored in a separate DB for the purposes of time point comparisons. Putting all the data into a single database is unfeasible because performance on such large databases is already a problem. However, I cannot keep more than 6 datasets on the SSD at a time. Right now I am manually dumping the oldest to a much larger 2 TB spinning disk every week, and dropping the database to make space for the new one. But if I need one of the 'archived' databases (a semi-regular occurrence) I have to drop a current one (after dumping), reload it, do what I need to, then reverse the results.

    Is there a way to configure MySQL to use multiple data directories, say one on the SSD and one on the 2 TB spinning disk, and 'merge' them transparently? If I could do this, then archiving would no longer mean "moved out of the database entirely", but instead would mean "moved onto the slow physical device". The time taken to do my queries on a spinning disk would be less than that taken to completely dump, drop, load, drop, and reload two entire databases, so this is a win. I thought of using something like unionfs, but I can't think of a way to control which database gets stored on which physical drive, because it merges on a directory level (from what I understand), so I'm still stuck with using multiple directories. Any help appreciated, thanks in advance.
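
    A hedged sketch of the classic workaround (editor's suggestion; paths are assumptions, and it relies on MySQL keeping one subdirectory per database under datadir). Individual database directories can be relocated and symlinked, which behaves like a per-database "slow tier":

        sudo service mysql stop
        sudo mv /var/lib/mysql/dataset_week01 /mnt/spinning/mysql/dataset_week01
        sudo ln -s /mnt/spinning/mysql/dataset_week01 /var/lib/mysql/dataset_week01
        sudo chown -R mysql:mysql /mnt/spinning/mysql/dataset_week01
        sudo service mysql start

    One caveat worth flagging: this is safest with MyISAM tables or with innodb_file_per_table enabled, since a shared InnoDB tablespace ignores per-database file placement.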

    Read the article

  • Moving a site from IIS6 to IIS7.5

    - by Sukotto
    I need to move a site off of IIS6 (Win Server 2003) and onto IIS7.5 (Win Server 2008) as soon as possible. Preferably tomorrow. The site itself is a delightful mix of classic ASP (VBScript) and one-off ASP.NET (C#) applications (each ASP.NET app is in its own virtual dir and has a self-contained web.config). In case it's relevant, this is a sort of research site made up of 40 or 50 unconnected microsites. Each microsite is typically a simple form allowing a user to submit a form, which then runs a stored proc on a SQL Server DB and displays a chart and/or table of the results. There is very little security to worry about. The database connection info is in a central file (in the case of the classic ASP) or the app's individual web.config (lots of duplication there).

    To add a little spice to the exercise:
    - I have no idea how to admin IIS.
    - The company no longer employs the sysadmin or the guys who set this thing up. (They're not going to employ me much longer either, but my sense of professional pride does not permit me to just walk away from this task.)
    - The servers are on mutually firewalled networks and I have to perform a convoluted, multi-step process to copy anything from one to the other.

    Would someone please point me to a crash-course tutorial for accomplishing the above? I have:
    - a complete copy of the site's filesystem on the new box
    - the 3rd party charting tool installed on the new system
    - a config.xml file from the "all tasks - save configuration to a file" right-click menu. There doesn't seem to be a way to import it on the new system, however. The newer IIS manager has a completely different UI and I'm totally lost.

    Please help.

    Read the article

  • Play framework 2.2 using Upstart 1.5 (Ubuntu 12.04)

    - by Leon Radley
    I'm trying to get Play 2.2 working with upstart. I've been running Play 2.x with upstart since its release and it's never been a problem. But since the release of 2.2 and the change to http://www.scala-sbt.org/sbt-native-packager/ Play doesn't want to start any more. Here's the config I'm using:

        description "PlayFramework 2.2"
        version "2.2"

        env APP=myapp
        env USER=myuser
        env GROUP=www-data
        env HOME=/home/myuser/app
        env PORT=9000
        env ADDRESS=127.0.0.1
        env CONFIG=production.conf
        env JAVAOPTS="-J-Xms128M -J-Xmx512m -J-server"

        start on runlevel [2345]
        stop on runlevel [06]

        respawn
        respawn limit 10 5
        expect daemon

        # If you want the upstart script to build play with sbt
        pre-start script
            chdir $HOME
            sbt clean compile stage -mem $SBTMEM
        end script

        exec start-stop-daemon --pidfile ${HOME}/RUNNING_PID --chuid $USER:$GROUP --exec ${HOME}/bin/${APP} --background --start -- -Dconfig.resource=$CONFIG -Dhttp.address=$ADDRESS -Dhttp.port=$PORT $JAVAOPTS

    I've changed the JAVAOPTS to include the -J- prefix, and I've also changed the path to use the new start script located in the bin/ dir. I've read that upstart 1.4 has setuid and setgid. I've tried removing the start-stop-daemon but I haven't got that working either. Any suggestions would be appreciated.
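
    A hedged rewrite of the job (an untested sketch, not a known-good config): on upstart 1.4+ the setuid/setgid stanzas can replace start-stop-daemon entirely, and assuming the native-packager start script stays in the foreground, leaving "expect" at its default of none avoids the pid-tracking games "expect daemon" plays with forking processes:

        setuid myuser
        setgid www-data
        exec /home/myuser/app/bin/myapp -Dconfig.resource=production.conf -Dhttp.address=127.0.0.1 -Dhttp.port=9000 -J-Xms128M -J-Xmx512m -J-server

    A stale RUNNING_PID file can still block restarts after a crash; a pre-start script that deletes it is a common companion to this setup.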

    Read the article

  • Moving hidden files/folders with the command-line or batch-file

    - by Synetech
    Question

    Does anyone know of a way to move files and folders that have the hidden, system, or read-only attribute set from the command line or a batch file? (No, stripping the attributes first is not an option, since there is no practical way to know which attributes were set in order to re-set them after the move.)

    (Failed) Attempts

    - Using the basic move command does not work with items that have the hidden or system attribute set, and for some reason it does not have switches to specify attributes like the dir and del commands do.
    - I tried using a utility I wrote that uses the shell's file operation function, but that requires using start /w to prevent the batch file from running on ahead, and it complains about long-filename support for some reason.
    - I tried using robocopy, but it first copies the files and then deletes the originals instead of simply moving the source (which results in a frustrating delay, even with the excessive output redirected to nul).

    (Surprisingly, it seems that few people have ever needed to move hidden files from the command line. All I could find was this one person who abandoned the attempt.)
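
    One hedged avenue the post doesn't mention (an assumption to verify, not a confirmed fix): PowerShell's Move-Item documents a -Force switch for moving items that cannot otherwise be moved, such as hidden or read-only files, and a move within one volume is a rename that leaves attributes untouched:

        powershell -Command "Move-Item -LiteralPath 'C:\src\hidden.dat' -Destination 'C:\dst\' -Force"

    Attribute preservation across volumes would need testing, since a cross-volume move decays into copy-plus-delete internally.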

    Read the article

  • iTunes Home Sharing only works one way between 2 Windows XP PC's on the same LAN

    - by scunliffe
    Both PCs have the latest iTunes installed. PC (A) can "see" that there is a shared library "B library", but attempts to connect to it return this error message:

        The shared library "{Username}'s Library" is not responding (-3259)
        Check that any firewall software running on either the shared computer or this
        computer has been set to allow communication on port 3689.

    However, the reverse works fine: PC (B) can "see" shared library "A library" and can access all content.

    Notes:
    - Both PCs have Home Sharing enabled (turned off/on several times to verify).
    - Both PCs have Windows Firewall turned on, but in the exceptions tab iTunes is allowed, and port 3689 is also added as a firewall exception (just in case).
    - Both iTunes accounts have been "authorized" on both PCs.
    - Both PCs connect via LAN through a D-Link DIR-615 router. In the advanced application rules, iTunes has also been added to allow traffic on port 3689 unhindered.

    Is there any other magical setting/configuration option that I should be aware of and set in order to get this to work? I could care less about sharing apps etc.; I just want the music sharing to work.

    Update: Solved! It turns out that on PC (B) there were multiple accounts set up. One of the accounts had the checkbox checked under the Windows Firewall "On" option which states "No exceptions", so even though iTunes was added to the exception list on the main user account, that other account was blocking access.

    Read the article

  • Replacing HD in an MacOS 10.6.8 server caused all shares to fail

    - by Cheesus
    I'm hoping someone might have a helpful suggestion about this problem. We have 2 MacOS X servers available for file sharing (quad Xeons, 2 GB RAM, both 10.6.8). No. 1 is an Open Directory Master with 50+ user accounts; No. 2 has only 2 local accounts (/local/Default) and looks at the OD Master for all user accounts (/LDAPv3/10.x.x.20/). Both servers have 3 internal HDs: the boot volume with only the server OS and minimal apps, a 'DataShare' HD (500 GB), and a backup drive (500 GB).

    After upgrading the DataShare HD in server No. 2 from a small internal HD (500 GB) to a larger-capacity (2 TB) drive, users are unable to connect to shares on server No. 2. Users get the error "There are no shares available or you are not allowed to access them on the server".

    The process I followed was to use Carbon Copy Cloner to create an exact copy of the original data drive (it keeps all ownership data, UIDs, permissions, and last edit dates and times). Everything booted up OK, with no indication there were any issues. (Paths to the sharepoint look good.)

    Notes during troubleshooting:
    - Server 1 is operating perfectly; all users can access shares and authenticate etc.
    - I've checked that the SACL (Service Access Control List) settings are OK.
    - In the Server Admin app on server 2, I can see all the shares listed OK. The paths seem valid; I can disable/re-enable the shares with no errors.
    - Workgroup Manager on server 2 lists all the accounts from the OD Master in the LDAP directory view. All seems fine from here.

    Basically everything looks normal, but no file shares on server 2 can be accessed by regular users.
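
    Two hedged checks from the command line (editor's suggestions; they assume the stock 10.6 Server tools). A freshly cloned volume is sometimes mounted with ownership ignored, which silently breaks every share on it even though the copied permissions look correct:

        sudo vsdbutil -c /Volumes/DataShare   # reports whether ownership is honored on the volume
        sudo vsdbutil -a /Volumes/DataShare   # enables ownership if it was being ignored
        sudo sharing -l                       # lists the share points exactly as the server sees them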

    Read the article

  • Building PHP For MacOS

    - by Eray
    I was using XAMPP and decided to uninstall it and use MacOS's built-in Apache and PHP modules. But while uninstalling XAMPP I accidentally deleted /usr/bin/php and the other PHP CLI files. I decided to install the newest version of PHP (5.5.12) instead of rebuilding the current version (5.4.24). I downloaded and unpacked it. After this I executed the following, as mentioned in this guide:

        ./configure '--with-apxs2=/usr/sbin/apxs' '--enable-cli' '--with-config-file-path=/etc' \
            '--with-zlib=/usr' '--enable-bcmath' '--with-bz2=/usr' '--enable-calendar' \
            '--disable-cgi' '--with-curl=/usr' '--enable-dba' '--enable-ndbm=/usr' \
            '--enable-exif' '--enable-fpm' '--enable-ftp' '--with-gd' '--enable-gd-native-ttf' \
            '--enable-mbregex' '--with-mysql=mysqlnd' '--with-mysqli=mysqlnd' '--with-pear' \
            '--with-pdo-mysql=mysqlnd' '--with-mysql-sock=/var/mysql/mysql.sock' '--with-tidy' \
            '--enable-wddx' '--with-xmlrpc' '--enable-zip'
        make
        make install

    When I check phpinfo(), it's still version 5.4.24. This line from my httpd.conf:

        LoadModule php5_module libexec/apache2/libphp5.so

    points at /usr/libexec/apache2/libphp5.so, which comes from the old version, and I couldn't find a libphp5.so for the new version; there is no libphp5.so file inside the modules dir. How can I use the new PHP build with Apache?

    UPDATE: Results of the php -v command:

        PHP 5.5.12 (cli) (built: May 27 2014 05:17:21)
        Copyright (c) 1997-2014 The PHP Group
        Zend Engine v2.5.0, Copyright (c) 1998-2014 Zend Technologies
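
    A hedged diagnostic sequence (editor's suggestion): a build configured with --with-apxs2 should have had 'make install' overwrite /usr/libexec/apache2/libphp5.so, so the first thing to establish is where the freshly built module actually landed and which httpd.conf the running Apache reads:

        httpd -V | grep -i server_config_file            # which httpd.conf is live
        ls -l /usr/libexec/apache2/libphp5.so            # timestamp shows whether install replaced it
        sudo find /usr /opt -name libphp5.so 2>/dev/null # hunt for the new module elsewhere
        sudo apachectl restart                           # the module is only picked up after a restart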

    Read the article

  • 403 Error when accessing vhost directive

    - by Ortix92
    I'm having some trouble setting up my webserver (CentOS 5.8). It's a brand new server and I'm trying to point a vhost at the following dir: /home/exo/public_html. However, whenever I restart httpd I get the following warning:

        Starting httpd: Warning: DocumentRoot [/home/exo/public_html] does not exist

    Yes, the directory does exist. And whenever I visit the domain exo-l.com it gives me a 403 error. This is my config (I put this inside my httpd.conf because the files in conf.d were not included for some reason, or at least not my newly created vhost conf file, but that has 0 priority for now):

        <VirtualHost *:80>
            DocumentRoot /home/exo/public_html
            ServerName www.exo-l.com
            ServerAlias exo-l.com
            <Directory /home/exo/public_html>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    I'm completely clueless because this should work as far as I know. httpd is being run as apache:apache. I tried chowning the public_html directory (also recursively) to exo:apache, apache:apache, and root:root with no success. chmod 777 doesn't do anything either. A tail from the log:

        [Sat Oct 13 15:10:04 2012] [error] [client 82.***.***.61] (13)Permission denied: access to / denied
        [Sat Oct 13 15:10:04 2012] [error] [client 82.***.***.61] (13)Permission denied: access to / denied

    I also found something about SELinux and that disabling it might help, but do I really want to do that?
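
    Two hedged leads short of disabling SELinux outright (editor's suggestions; the boolean and type are standard on CentOS 5): both the "does not exist" warning at startup and the (13)Permission denied at request time are consistent with SELinux labels and missing directory-traversal permission on /home/exo:

        ls -Z /home/exo/public_html               # home content is typically user_home_t, which httpd may not read
        sudo chmod o+x /home /home/exo            # apache needs execute (traversal) on every path component
        sudo setsebool -P httpd_enable_homedirs 1
        sudo chcon -R -t httpd_sys_content_t /home/exo/public_html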

    Read the article

  • On linux, what does it mean when a directory has size 0 instead of 4096?

    - by kdt
    Here's a strange thing I haven't seen before: a directory whose size is reported by ls as 0 instead of 4096, and I can't create any files within it.

        # ls -ld lib home
        drwxr-xr-x.  2 root root    0 Feb  7 03:10 home   <-- it has zero size
        dr-xr-xr-x. 11 root root 4096 Feb  4 09:28 lib
        # touch home/foo
        touch: cannot touch `home/foo': No such file or directory   <-- and I can't create files in it
        # rm home
        rm: cannot remove `home': Is a directory   <-- look, it really is a dir

    So what does it mean for a directory to have size 0 instead of 4096? The filesystem is ext4 on Fedora Core 14. The output of mount is:

        /dev/mapper/vg_dev-lv_root on / type ext4 (rw)
        proc on /proc type proc (rw)
        sysfs on /sys type sysfs (rw)
        devpts on /dev/pts type devpts (rw,gid=5,mode=620)
        tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
        /dev/vda1 on /boot type ext4 (rw)
        none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
        sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

    Output of du -s /home:

        0       /home

    Output of stat /home:

        File: `/home'
        Size: 0           Blocks: 0          IO Block: 1024   directory
        Device: 15h/21d   Inode: 34913      Links: 2
        Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
        Access: 2011-02-07 03:45:46.188995765 -0800
        Modify: 2011-02-07 03:11:59.980995019 -0800
        Change: 2011-02-06 07:58:45.874995002 -0800
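
    A hedged observation to chase (editor's note, not a diagnosis): stat reports /home on Device 15h/21d with an IO block of 1024, while an ext4 directory on the root volume would normally show a 4096-byte IO block. That pattern suggests /home may not be backed by the root filesystem at all at that moment. One way to compare, assuming coreutils' stat:

        stat -f /        # filesystem type and block size for the root fs
        stat -f /home    # if these differ, /home is a stale or anonymous mount rather than ext4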

    Read the article

  • SAFE MODE Restriction in effect. The script not allowed to access directory owned by uid

    - by user57221
    I am running a dedicated server with multiple websites. I have created a global directory for common scripts for all websites, rather than repeating them in every website directory. How can I make this global directory accessible for all websites? I am getting the following error:

        Warning: require_once() [function.require-once]: SAFE MODE Restriction in effect.
        The script whose uid is XXXX is not allowed to access
        /vhosts/globallibrary/Zend/Application.php owned by uid XXXX

    I changed the ownership of the global directory for website X, so it works fine for website X. Later I added another website, Y, and now I am getting the same error again. If I change the CHOWN for website Y, then website X gets the same error. I don't want to disable the safe mode restriction. Is there a workaround so that this global dir will be accessible by all websites? I get the above error in my browser when I try to access the global directory. The global directory is on the same level as all the other websites. Is it good practice to keep safe mode enabled for websites?
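
    A hedged pointer (editor's note; the directive is a documented part of PHP's safe mode, but whether your PHP version honors it for this layout would need testing): safe mode has a dedicated escape hatch for exactly this shared-library case, which skips the uid/gid check for includes under a listed path:

        ; in php.ini (or the per-vhost configuration)
        safe_mode_include_dir = /vhosts/globallibrary

    There is also safe_mode_gid = On, which relaxes the comparison from uid to gid, so a common group across the vhosts could satisfy the check instead.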

    Read the article

  • openVPN as a way to connect to a LAN by another client, different from server

    - by Einar
    Setup: one LAN handled by a router without a publicly available IP address but without any outbound connection restrictions ("target LAN"), and a separate server publicly reachable from the Internet ("gateway"). I am trying to set up OpenVPN so that a third client can connect to the gateway and access the target LAN. As the router of the target LAN is not reachable from the Internet directly, it connects to the gateway itself via OpenVPN as well.

    The problem is how to handle routing. The LAN router has two network interfaces (for the outside network and the LAN itself). In OpenVPN (the server on the gateway) I set client-to-client and push "route 192.168.10.0 255.255.255.0", but I assume this is horribly wrong (it actually messed up the routing on the LAN router until I killed OpenVPN). OpenVPN is not using bridging; it is configured via tun.

    Other config details from the server:

        server 10.8.0.0 255.255.255.0
        client-config-dir ccd
        route 192.168.10.0 255.255.255.0

    And the client file in ccd is:

        iroute 192.168.10.0 255.255.255.0

    What can be adjusted to ensure that a third client can connect through OpenVPN and access the LAN mentioned earlier?
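
    A hedged reading of the symptom (editor's interpretation, not from the post): a global push "route ..." is sent to every client, including the LAN router itself, which then learns a route to its own LAN via the tunnel; that loop would explain the breakage observed. Since OpenVPN allows per-client push lines in ccd files, one sketch is to keep iroute for the router and push the LAN route only to the third client:

        # ccd/lan-router (file name is hypothetical)
        iroute 192.168.10.0 255.255.255.0

        # ccd/third-client (file name is hypothetical)
        push "route 192.168.10.0 255.255.255.0"

    client-to-client stays in the server config so the two clients can reach each other through the gateway.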

    Read the article

  • My home router randomly disconnects me and I'm unable to reconnect to it

    - by Roy Tang
    It's happened a few times; I'm not sure how to diagnose/debug, so any advice would be appreciated.

    Symptoms:
    - Sometimes the router will randomly disconnect; the connection icon on my desktop (wired to the router) gets that yellow "!" symbol that tells me my connection just went down. At this point I'm unable to ping the router.
    - Afterwards I try to reset the router by removing and then reconnecting the power jack on the router side. (This is the fastest way, as I can't reset the power strip it's connected to without rebooting my desktop. The router has a reset thingy, but it's one of those things where I have to find a pin to stick into the hole, and when I get disconnected I usually need to get reconnected immediately, so I just pull and put back the power jack.) But even after that, the connection stays in the same state.
    - After the router reboots, if I try to connect to it using a wifi device like my iPad, the iPad prompts me for the wifi password even though it had already "remembered" all the settings for this router.
    - When I finally decide to reboot the power strip, and my desktop and the router boot up again, the connection returns to its normal state somewhat, and I'm able to connect as normal using the desktop and wifi devices.

    What do I need to check the next time this happens so I can figure out the problem? Is it possibly because we've been using the power jack on the router as an easier way to reboot it? Should I be shopping around for a new router? If it helps, the router is a D-Link DIR-300.

    Read the article
