Search Results

Search found 45752 results on 1831 pages for 'ubuntu linux'.

Page 675/1831 | < Previous Page | 671 672 673 674 675 676 677 678 679 680 681 682  | Next Page >

  • FreeBSD 8.1 64bit logrotate - ELF interpreter /libexec/ld-elf-so.1 not found

    - by Richard Knop
    I am trying to get logrotate running on a FreeBSD 8.1 virtual machine. I installed logrotate with pkg_add, created the logrotate.conf file and also ran: mkdir /var/lib/ touch /var/lib/logrotate.status Now when I do: /usr/local/sbin/logrotate -d /usr/local/etc/logrotate.conf I get this error: ELF interpreter /libexec/ld-elf-so.1 not found Abort The file ld-elf.so.1 does exist: locate ld-elf.so.1 /libexec/ld-elf.so.1 /usr/libexec/ld-elf.so.1 /usr/share/man/man1/ld-elf.so.1.1.gz
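
    A quick way to narrow this down is to compare the interpreter the installed binary actually requests against what exists in /libexec; the lines below are only a diagnostic sketch (the binary path is taken from the command above, and readelf is assumed to be available).

        # Which ELF interpreter does the binary ask for?
        file /usr/local/sbin/logrotate
        readelf -l /usr/local/sbin/logrotate | grep -i interpreter
        # What is actually installed?
        ls -l /libexec/ld-elf*.so.1

    If the package turns out to be a 32-bit build asking for /libexec/ld-elf32.so.1, installing the lib32 compatibility libraries (or a native amd64 package) would be the usual next step.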

    Read the article

  • awstats parse of postfix mail log drops all records

    - by accidental admin
    I'm trying to get awstats to parse the postfix mail log, but it drops almost all entries with messages like: Corrupted record (date 20091204042837 lower than 20091211065829-20000): 2009-12-04 04:28:37 root root localhost 127.0.0.1 SMTP - 1 17480 A few more are dropped with an invalid LogFormat: Corrupted record line 24 (record format does not match LogFormat parameter): 2009-11-16 04: 28:22 root root localhost 127.0.0.1 SMTP - 14755 My conf LogFormat="%time2 %email %email_r %host %host_r %method %url %code %bytesd" I believe matches the log format (and besides is the log format I've seen everywhere for awstats mail parsing). Besides, it is the same entry format as all the other entries in the mail log. Whatever is left is dropped too: Dropped record (host localhost and 127.0.0.1 not qualified by SkipHosts): 2009-12-07 04:28:36 root root localhost 127.0.0.1 SMTP - 1 17152 I added SkipHosts="" to the .conf file but to no avail. I feel like awstats really has some personal quarrel with me today.
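
    For reference, the awstats mail setups I have seen pipe the postfix log through awstats' maillogconvert.pl so the records arrive in the order and shape the parser expects; the snippet below is only a sketch, and the tool and log paths are assumptions that vary by distribution.

        # awstats mail .conf sketch (paths are assumptions)
        LogType=M
        LogFile="perl /usr/share/awstats/tools/maillogconvert.pl standard < /var/log/maillog |"
        LogFormat="%time2 %email %email_r %host %host_r %method %url %code %bytesd"

    The "date ... lower than" drops also happen whenever records older than the last stored update are fed in, so clearing or rebuilding the awstats data files for that config before re-running is worth trying as well.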

    Read the article

  • Using a GoDaddy SSL certificate with Virtualmin (Webmin)

    - by Kevin
    A client of mine decided to go ahead and move from a self-signed certificate to a commercial one ("GoDaddy Standard SSL"). The first service I wanted to move to the commercial SSL cert was Webmin/Usermin... However, upon migrating to the new SSL cert and restarting Webmin, I got the following error: [21/Oct/2012:13:12:47 -0400] Restarting Failed to open SSL cert /etc/webmin/miniserv.cert at /usr/share/webmin/miniserv.pl line 4229. Error: Webmin server did not write new PID file And that's all it says. Here's Webmin's config file (/etc/webmin/miniserv.conf): port=10000 root=/usr/share/webmin mimetypes=/usr/share/webmin/mime.types addtype_cgi=internal/cgi realm=Webmin Server logfile=/var/webmin/miniserv.log errorlog=/var/webmin/miniserv.error pidfile=/var/webmin/miniserv.pid logtime=168 ppath= ssl=0 env_WEBMIN_CONFIG=/etc/webmin env_WEBMIN_VAR=/var/webmin atboot=1 logout=/etc/webmin/logout-flag listen=10000 denyfile=\.pl$ log=1 blockhost_failures=5 blockhost_time=60 syslog=1 session=1 server=MiniServ/1.600 userfile=/etc/webmin/miniserv.users keyfile=/etc/webmin/miniserv.pem passwd_file=/etc/shadow passwd_uindex=0 passwd_pindex=1 passwd_cindex=2 passwd_mindex=4 passwd_mode=0 preroot=virtual-server-theme passdelay=1 sudo=1 sessiononly=/virtual-server/remote.cgi preload=virtual-server=virtual-server/virtual-server-lib-funcs.pl virtual-server=virtual-server/feature-unix.pl virtual-server=virtual-server/feature-dir.pl virtual-server=virtual-server/feature-dns.pl virtual-server=virtual-server/feature-mail.pl virtual-server=virtual-server/feature-web.pl virtual-server=virtual-server/feature-webalizer.pl virtual-server=virtual-server/feature-ssl.pl virtual-server=virtual-server/feature-logrotate.pl virtual-server=virtual-server/feature-mysql.pl virtual-server=virtual-server/feature-postgres.pl virtual-server=virtual-server/feature-ftp.pl virtual-server=virtual-server/feature-spam.pl virtual-server=virtual-server/feature-virus.pl virtual-server=virtual-server/feature-webmin.pl virtual-server=virtual-server/feature-virt.pl virtual-server=virtual-server/feature-virt6.pl anonymous=/virtualmin-mailman/unauthenticated=anonymous premodules=WebminCore logouttimes= extracas=/etc/webmin/miniserv.chain certfile=/etc/webmin/miniserv.cert ssl_redirect=0 Here is a screen shot of the Webmin SSL config screen as well, for what it's worth: http://postimage.org/image/r472go7tf/ Edited Mon Oct 22 10:45:24 CDT 2012: When running the command openssl x509 -noout -text -in /etc/webmin/miniserv.cert as Falcon Momot suggested, I get the following error: unable to load certificate 139760808240800:error:0906D06C:PEM routines:PEM_read_bio:no start line:pem_lib.c:696:Expecting: TRUSTED CERTIFICATE
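
    The "Expecting: TRUSTED CERTIFICATE" output suggests /etc/webmin/miniserv.cert does not contain a PEM "BEGIN CERTIFICATE" block at all (wrong file, or a different format). Two hedged checks, using the paths from miniserv.conf above:

        head -n 1 /etc/webmin/miniserv.cert          # should print: -----BEGIN CERTIFICATE-----
        # confirm the certificate parses and matches the private key in miniserv.pem
        openssl x509 -noout -modulus -in /etc/webmin/miniserv.cert | openssl md5
        openssl rsa  -noout -modulus -in /etc/webmin/miniserv.pem  | openssl md5

    The two digests should be identical if the cert and key belong together; if the first command shows anything other than a PEM header, re-export or re-download the certificate in PEM format before pointing certfile= at it.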

    Read the article

  • Repairing yum's repositories on RHEL5

    - by The Rook
    I am using RHEL5 and yum is missing many packages, such as apache, php, and all php libraries. I have added the rpmforge repository, but I am still missing these packages. This is an i686 machine and there might not be many i686 packages available; I think that if I force an i386 I'll have serious problems. How do I make sure I have a large number of compatible packages on a RHEL5 system? I didn't install this system, is it normal for RHEL5 to have virtually no useful packages in yum? How do RHEL5 administrators use yum without introducing conflicts with currently installed packages? Should I ditch yum and use apt? Thanks!
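
    A first check that usually helps is confirming which repositories yum can actually see, since an unregistered RHEL5 box ships with essentially no enabled channels; the commands below are a hedged sketch.

        yum repolist all            # which repos are enabled/disabled?
        rpm -qa | grep -i release   # which release/repo packages are installed?
        yum list httpd php          # note: RHEL names Apache "httpd", not "apache"
        # rhn_register (or an internal satellite/CentOS mirror) is the usual way to get the base channel back

    Searching for "httpd" rather than "apache" alone sometimes explains a chunk of the "missing" packages.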

    Read the article

  • FFSERVER - streaming an ASF video as Webm output

    - by Emmanuel Brunet
    I'm trying to stream an IP webcam ASF live stream to a ffserver to output a webm video format. The server starts successfully but the ffserver commands used to feed the ffserver fails and generates a core dump. Environment Debian 7.5 ffmpeg 2.2 Input stream $ ffprobe http://account:password@webcam/videostream.asf Input #0, asf, from 'http://admin:alpha1237@webcam/videostream.asf': Duration: N/A, start: 0.000000, bitrate: 32 kb/s Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc), 640x480, 25 tbr, 1k tbn, 1k tbc Stream #0:1: Audio: adpcm_ima_wav ([17][0][0][0] / 0x0011), 8000 Hz, 1 channels, s16p, 32 kb/s ffserver configuration my ffserver configuration is : Port 8091 RTSPPort 554 BindAddress 192.168.1.62 MaxHTTPConnections 1000 MaxClients 100 MaxBandwidth 1000 CustomLog - <Feed webcam.ffm> File /tmp/webcam.ffm FileMaxSize 500M ACL allow localhost ACL allow 192.168.0.0 192.168.255.255 </Feed> <Stream webcam.webm> # Output stream URL definition Feed webcam.ffm # Feed from which to receive video Format webm # Audio settings AudioCodec vorbis AudioBitRate 64 # Audio bitrate # Video settings VideoCodec libvpx VideoSize 640x480 # Video resolution VideoFrameRate 25 # Video FPS AVOptionVideo flags +global_header # Parameters passed to encoder # (same as ffmpeg command-line parameters) AVOptionVideo cpu-used 0 AVOptionVideo qmin 10 AVOptionVideo qmax 42 AVOptionVideo quality good AVOptionAudio flags +global_header PreRoll 15 StartSendOnKey # VideoBitRate 32 # Video bitrate </Stream> <Stream status.html> Format status # Only allow local people to get the status ACL allow localhost ACL allow 192.168.0.0 192.168.255.255 </Stream> ffmpeg feed I run the following command that fails $ ffmpeg -i http://account:password@webcam/videostream.asf http://192.168.1.62:8091/webcam.ffm http://192.168.1.62:8091/webcam.ffm Input #0, asf, from 'http://account:password@webcam/videostream.asf': Duration: N/A, start: 0.000000, bitrate: 32 kb/s Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc), 640x480, 25 tbr, 1k tbn, 1k tbc Stream #0:1: Audio: adpcm_ima_wav ([17][0][0][0] / 0x0011), 8000 Hz, mono, s16p, 32 kb/s [swscaler @ 0x36a80c0] deprecated pixel format used, make sure you did set range correctly Segmentation fault I tryed $ ffmpeg -i http://account:password@webcam/videostream.asf -pix_fmt yuv420p http://192.168.1.62:8091/webcam.ffm But it raises the same error. Thanks for your help Edit For an easy testing (I thought), I tried to publish the whole ASF stream as is, meaning connecting the ASF webcam output stream to the ffserver that outputs ASF format too. And thus with mirrored encoding so I changed the ffserver configuration to ... <Stream webcam.asf> Feed webcam.ffm Format asf VideoFrameRate 25 VideoSize 640X480 VideoBitRate 256 VideoBufferSize 1000 VideoGopSize 30 AudioBitRate 32 StartSendOnKey </Stream> ... 
And the output is now : Input #0, asf, from 'http://admin:alpha1237@webcam/videostream.asf': Duration: N/A, start: 0.000000, bitrate: 32 kb/s Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc), 640x480, 1k tbr, 1k tbn, 1k tbc Stream #0:1: Audio: adpcm_ima_wav ([17][0][0][0] / 0x0011), 8000 Hz, mono, s16p, 32 kb/s [swscaler @ 0x3d620c0] deprecated pixel format used, make sure you did set range correctly Output #0, ffm, to 'http://192.168.1.62:8091/webcam.ffm': Metadata: creation_time : now encoder : Lavf55.40.100 Stream #0:0: Audio: wmav2, 22050 Hz, mono, fltp, 32 kb/s Metadata: encoder : Lavc55.64.100 wmav2 Stream #0:1: Video: msmpeg4v3 (msmpeg4), yuv420p, 640x480, q=2-31, 256 kb/s, 1k fps, 1000k tbn, 1k tbc Metadata: Stream mapping: Stream #0:1 -> #0:0 (adpcm_ima_wav -> wmav2) Stream #0:0 -> #0:1 (mjpeg -> msmpeg4) Press [q] to stop, [?] for help Segmentation fault I can't even forward the stream.
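
    Since ffmpeg itself is the process that segfaults, a backtrace from the core dump usually says more than the config; the commands below are a generic diagnostic sketch (binary and core file names are assumptions).

        ulimit -c unlimited                       # allow a core dump in this shell
        ffmpeg -loglevel debug -i http://account:password@webcam/videostream.asf \
               http://192.168.1.62:8091/webcam.ffm
        gdb $(which ffmpeg) core                  # then type: bt

    Filing the backtrace against the ffmpeg/ffserver 2.2 build (or trying a newer build) is usually the fastest way forward, since a segfault is a bug regardless of the configuration.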

    Read the article

  • Properly escaping forward slash in bash script for usage with sed

    - by user331839
    I'm trying to determine the size of the files that would be newly copied when syncing two folders by running rsync in dry mode and then summing up the sizes of the files listed in the output of rsync. Currently I'm stuck at prefixing the files by their parent folder. I found out how to prefix lines using sed and how to escape using sed, but I'm having troubles combining those two. This is how far I got: source="/my/source/folder/" target="/my/target/folder/" escaped=`echo "$source" | sed -e 's/[\/&]/\\//g'` du `rsync -ahnv $source $target | tail -n +2 | head -n -3 | sed "s/^/$escaped/"` | awk '{i+=$1} END {print i}' This is the output I get from bash -x myscript.sh + source=/my/source/folder/ + target=/my/target/folder ++ echo /my/source/folder/ ++ sed -e 's/[\/&]/\//g' + escaped=/my/source/folder/ + awk '{i+=$1} END {print i}' ++ rsync -ahnv /my/source/folder/ /my/target/folder/ ++ sed 's/^//my/source/folder//' ++ head -n -3 ++ tail -n +2 sed: -e expression #1, char 8: unknown option to `s' + du 80268 Any ideas on how to properly escape would be highly appreciated.
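
    The usual way to sidestep this is to not use / as the sed delimiter at all: the s command accepts almost any delimiter character, so the path never needs escaping. A hedged sketch, assuming the paths contain no | character:

        # alternate delimiter, no escaping needed
        du `rsync -ahnv "$source" "$target" | tail -n +2 | head -n -3 | sed "s|^|$source|"` | awk '{i+=$1} END {print i}'

        # or, if escaping is really wanted: prefix each / and & with a backslash
        escaped=$(printf '%s\n' "$source" | sed 's/[\/&]/\\&/g')

    The original escaping expression ended up replacing each slash with a plain slash (the backticks ate one backslash), which is why $escaped came out unchanged and the later s/^/...// had too many delimiters.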

    Read the article

  • dns queries not using nscd for caching

    - by xenoterracide
    I'm trying to use nscd (Nameservices Cache Daemon) to cache dns locally so I can stop using bind to do it. I've gotten it started and ntpd seems to attempt to use it. But everything else for hosts seems to ignore it. e.g if I do dig apache.org 3 times none of them will hit the cache. I'm viewing the cache stats using nscd -g to determine whether it's been used. I've also turned the debug log level up to see if I can see it hitting and the queries don't even hit nscd. nsswitch.conf # Begin /etc/nsswitch.conf passwd: files group: files shadow: files publickey: files hosts: cache files dns networks: files protocols: files services: files ethers: files rpc: files netgroup: files # End /etc/nsswitch.confenter code here nscd.conf # # /etc/nscd.conf # # An example Name Service Cache config file. This file is needed by nscd. # # Legal entries are: # # logfile <file> # debug-level <level> # threads <initial #threads to use> # max-threads <maximum #threads to use> # server-user <user to run server as instead of root> # server-user is ignored if nscd is started with -S parameters # stat-user <user who is allowed to request statistics> # reload-count unlimited|<number> # paranoia <yes|no> # restart-interval <time in seconds> # # enable-cache <service> <yes|no> # positive-time-to-live <service> <time in seconds> # negative-time-to-live <service> <time in seconds> # suggested-size <service> <prime number> # check-files <service> <yes|no> # persistent <service> <yes|no> # shared <service> <yes|no> # max-db-size <service> <number bytes> # auto-propagate <service> <yes|no> # # Currently supported cache names (services): passwd, group, hosts, services # logfile /var/log/nscd.log threads 4 max-threads 32 server-user nobody # stat-user somebody debug-level 9 # reload-count 5 paranoia no # restart-interval 3600 enable-cache passwd yes positive-time-to-live passwd 600 negative-time-to-live passwd 20 suggested-size passwd 211 check-files passwd yes persistent passwd yes shared passwd yes max-db-size passwd 33554432 auto-propagate passwd yes enable-cache group yes positive-time-to-live group 3600 negative-time-to-live group 60 suggested-size group 211 check-files group yes persistent group yes shared group yes max-db-size group 33554432 auto-propagate group yes enable-cache hosts yes positive-time-to-live hosts 3600 negative-time-to-live hosts 20 suggested-size hosts 211 check-files hosts yes persistent hosts yes shared hosts yes max-db-size hosts 33554432 enable-cache services yes positive-time-to-live services 28800 negative-time-to-live services 20 suggested-size services 211 check-files services yes persistent services yes shared services yes max-db-size services 33554432 resolv.conf # Generated by dhcpcd from eth0 nameserver 127.0.0.1 domain westell.com nameserver 192.168.1.1 nameserver 208.67.222.222 nameserver 208.67.220.220 as kind of a side note I'm using archlinux.
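
    One detail worth knowing: dig (like host and nslookup) sends queries straight to the servers in resolv.conf and never goes through the glibc resolver, so it will never touch nscd no matter how the cache is set up. Tools that use the NSS "hosts" database are the ones to test with; a hedged sketch:

        getent hosts apache.org      # goes through nsswitch/glibc, so nscd can cache it
        getent hosts apache.org      # the repeat should be served from the cache
        nscd -g                      # the "hosts cache" counters should now move

    As far as I know there is also no "cache" service in stock glibc's nsswitch (nscd hooks in automatically), so "hosts: files dns" is the usual line there.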

    Read the article

  • Combining HBase and HDFS results in Exception in makeDirOnFileSystem

    - by utrecht
    Introduction An attempt to combine HBase and HDFS results in the following: 2014-06-09 00:15:14,777 WARN org.apache.hadoop.hbase.HBaseFileSystem: Create Dir ectory, retries exhausted 2014-06-09 00:15:14,780 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown. java.io.IOException: Exception in makeDirOnFileSystem at org.apache.hadoop.hbase.HBaseFileSystem.makeDirOnFileSystem(HBaseFile System.java:136) at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFi leSystem.java:428) at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSyst emLayout(MasterFileSystem.java:148) at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSyst em.java:133) at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.j ava:572) at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:432) at java.lang.Thread.run(Thread.java:744) Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/":vagrant:supergroup:drwxr-xr-x at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPe rmissionChecker.java:224) at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPe rmissionChecker.java:204) at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermi ssion(FSPermissionChecker.java:149) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(F SNamesystem.java:4891) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(F SNamesystem.java:4873) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAcce ss(FSNamesystem.java:4847) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FS Namesystem.java:3192) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNames ystem.java:3156) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesyst em.java:3137) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameN odeRpcServer.java:669) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTra nslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:419) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$Cl ientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:4497 0) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.cal l(ProtobufRpcEngine.java:453) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1752) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1748) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInforma tion.java:1438) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1746) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstruct orAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingC onstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteExce ption.java:90) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteExc eption.java:57) at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2153) at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2122) at 
org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSy stem.java:545) at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1915) at org.apache.hadoop.hbase.HBaseFileSystem.makeDirOnFileSystem(HBaseFile System.java:129) ... 6 more while configuration and system settings are as follows: [vagrant@localhost hadoop-hdfs]$ hadoop fs -ls hdfs://localhost/ Found 1 items -rw-r--r-- 3 vagrant supergroup 1010827264 2014-06-08 19:01 hdfs://localhost/u buntu-14.04-desktop-amd64.iso [vagrant@localhost hadoop-hdfs]$ /etc/hadoop/conf/core-site.xml <configuration> <property> <name>fs.defaultFS</name> <value>hdfs://localhost:8020</value> </property> </configuration> /etc/hbase/conf/hbase-site.xml <configuration> <property> <name>hbase.rootdir</name> <value>hdfs://localhost:8020/hbase</value> </property> <property> <name>hbase.cluster.distributed</name> <value>true</value> </property> </configuration> /etc/hadoop/conf/hdfs-site.xml <configuration> <property> <name>dfs.name.dir</name> <value>/var/lib/hadoop-hdfs/cache</value> </property> <property> <name>dfs.data.dir</name> <value>/tmp/hellodatanode</value> </property> </configuration> NameNode directory permissions [vagrant@localhost hadoop-hdfs]$ ls -ltr /var/lib/hadoop-hdfs/cache total 8 -rwxrwxrwx. 1 hbase hdfs 15 Jun 8 23:43 in_use.lock drwxrwxrwx. 2 hbase hdfs 4096 Jun 8 23:43 current [vagrant@localhost hadoop-hdfs]$ HMaster is able to start if fs.defaultFS property has been commented in core-site.xml NameNode is listening [vagrant@localhost hadoop-hdfs]$ netstat -nato | grep 50070 tcp 0 0 0.0.0.0:50070 0.0.0.0:* LIST EN off (0.00/0/0) tcp 0 0 33.33.33.33:50070 33.33.33.1:57493 ESTA BLISHED off (0.00/0/0) and accessible by navigating to http://33.33.33.33:50070/dfshealth.jsp. Question How to solve makeDirOnFileSystem exception and let HBase connect to HDFS?
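
    The stack trace reduces to the hbase user being unable to create /hbase because the HDFS root is owned by vagrant and not group/world writable. The usual fix is to pre-create the root directory as the HDFS superuser (whatever user runs the NameNode; hdfs below is an assumption) and hand it to hbase:

        sudo -u hdfs hadoop fs -mkdir /hbase
        sudo -u hdfs hadoop fs -chown hbase:hbase /hbase
        hadoop fs -ls /        # /hbase should now be owned by hbase

    After that HMaster should get past makeDirOnFileSystem with fs.defaultFS left enabled in core-site.xml.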

    Read the article

  • Capistrano asks for SSH password when deploying from local machine to server

    - by GhostRider
    When I try to ssh to a server, I'm able to do it as my id_rsa.pub key is added to the authorized keys in the server. Now when I try to deploy my code via Capistrano to the server from my local project folder, the server asks for a password. I'm unable to understand what could be the issue if I'm able to ssh and unable to deploy to the same server. $ cap deploy:setup "no seed data" triggering start callbacks for `deploy:setup' * 13:42:18 == Currently executing `multistage:ensure' *** Defaulting to `development' * 13:42:18 == Currently executing `development' * 13:42:18 == Currently executing `deploy:setup' triggering before callbacks for `deploy:setup' * 13:42:18 == Currently executing `db:configure_mongoid' * executing "mkdir -p /home/deploy/apps/development/flyingbird/shared/config" servers: ["dev1.noob.com", "176.9.24.217"] Password: Cap script: # gem install capistrano capistrano-ext capistrano_colors begin; require 'capistrano_colors'; rescue LoadError; end require "bundler/capistrano" # RVM bootstrap # $:.unshift(File.expand_path('./lib', ENV['rvm_path'])) require 'rvm/capistrano' set :rvm_ruby_string, 'ruby-1.9.2-p290' set :rvm_type, :user # or :user # Application setup default_run_options[:pty] = true # allow pseudo-terminals ssh_options[:forward_agent] = true # forward SSH keys (this will use your SSH key to get the code from git repository) ssh_options[:port] = 22 set :ip, "dev1.noob.com" set :application, "flyingbird" set :repository, "repo-path" set :scm, :git set :branch, fetch(:branch, "master") set :deploy_via, :remote_cache set :rails_env, "production" set :use_sudo, false set :scm_username, "user" set :user, "user1" set(:database_username) { application } set(:production_database) { application + "_production" } set(:staging_database) { application + "_staging" } set(:development_database) { application + "_development" } role :web, ip # Your HTTP server, Apache/etc role :app, ip # This may be the same as your `Web` server role :db, ip, :primary => true # This is where Rails migrations will run # Use multi-staging require "capistrano/ext/multistage" set :stages, ["development", "staging", "production"] set :default_stage, rails_env before "deploy:setup", "db:configure_mongoid" # Uncomment if you use any of these databases after "deploy:update_code", "db:symlink_mongoid" after "deploy:update_code", "uploads:configure_shared" after "uploads:configure_shared", "uploads:symlink" after 'deploy:update_code', 'bundler:symlink_bundled_gems' after 'deploy:update_code', 'bundler:install' after "deploy:update_code", "rvm:trust_rvmrc" # Use this to update crontab if you use 'whenever' gem # after "deploy:symlink", "deploy:update_crontab" if ARGV.include?("seed_data") after "deploy", "db:seed" else p "no seed data" end #Custom tasks to handle resque and redis restart before "deploy", "deploy:stop_workers" after "deploy", "deploy:restart_redis" after "deploy", "deploy:start_workers" after "deploy", "deploy:cleanup" 'Create symlink for public uploads' namespace :uploads do task :symlink do run <<-CMD rm -rf #{release_path}/public/uploads && mkdir -p #{release_path}/public && ln -nfs #{shared_path}/public/uploads #{release_path}/public/uploads CMD end task :configure_shared do run "mkdir -p #{shared_path}/public" run "mkdir -p #{shared_path}/public/uploads" end end namespace :rvm do desc 'Trust rvmrc file' task :trust_rvmrc do run "rvm rvmrc trust #{current_release}" end end namespace :db do desc "Create mongoid.yml in shared path" task :configure_mongoid do db_config = <<-EOF 
defaults: &defaults host: localhost production: <<: *defaults database: #{production_database} staging: <<: *defaults database: #{staging_database} EOF run "mkdir -p #{shared_path}/config" put db_config, "#{shared_path}/config/mongoid.yml" end desc "Make symlink for mongoid.yml" task :symlink_mongoid do run "ln -nfs #{shared_path}/config/mongoid.yml #{release_path}/config/mongoid.yml" end desc "Fill the database with seed data" task :seed do run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake db:seed" end end namespace :bundler do desc "Symlink bundled gems on each release" task :symlink_bundled_gems, :roles => :app do run "mkdir -p #{shared_path}/bundled_gems" run "ln -nfs #{shared_path}/bundled_gems #{release_path}/vendor/bundle" end desc "Install bundled gems " task :install, :roles => :app do run "cd #{release_path} && bundle install --deployment" end end namespace :deploy do task :start, :roles => :app do run "touch #{current_path}/tmp/restart.txt" end desc "Restart the app" task :restart, :roles => :app do run "touch #{current_path}/tmp/restart.txt" end desc "Start the workers" task :stop_workers do run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake resque:stop_workers" end desc "Restart Redis server" task :restart_redis do "/etc/init.d/redis-server restart" end desc "Start the workers" task :start_workers do run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake resque:start_workers" end end
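
    Capistrano logs in as the :user from the script (user1) rather than whatever account the manual ssh test used, so the quickest check is to repeat that exact login by hand and see which keys are offered; a hedged sketch:

        ssh -v user1@dev1.noob.com       # watch which identities are tried and whether one is accepted
        ssh-add -l                       # forward_agent is on, so the key must be loaded in the agent
        ssh-copy-id user1@dev1.noob.com  # if the key is simply missing from that account's authorized_keys

    If the key only exists for a different user (say, the one used for the plain ssh test), adding it to user1's ~/.ssh/authorized_keys on the server should stop the password prompt.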

    Read the article

  • RAID-1 and regular drive removal (using RAID-1 as a backup measure)

    - by Vi
    Is using mdadm's RAID-1 of 2 partitions (one on the laptop's internal HDD, one on an external HDD) a good idea? I want the system to work as RAID-1 if both drives are present, work as a regular volume (degraded RAID-1) if the external HDD is unplugged, and quickly resync when I plug the external HDD in again. Questions: Is it a good idea? Will a write-intent bitmap be enough for this task or do I need something else? Should I consider doing it at the filesystem level (if yes, how?). Basic requirements are: Quick resync when I re-add the external drive (provided I haven't changed that partition). More or less consistent data on the removed drive if I remove it outside of a write/resync operation. If I remove the drive during a resync I expect the data to be somewhat inconsistent, but expect quick resync completion when I re-add it again. E.g. I want the remaining drive to track what is changed (there can be a lot of changes) and then sync back only those parts that need it.
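
    For what it's worth, mdadm supports exactly this pattern: a RAID-1 with an internal write-intent bitmap, where the external member is failed and removed before unplugging and later re-added so only the regions marked dirty in the bitmap are resynced. A hedged sketch (device names are placeholders):

        # create the mirror with an internal write-intent bitmap
        mdadm --create /dev/md0 --level=1 --raid-devices=2 --bitmap=internal /dev/sdaX /dev/sdbY
        # before unplugging the external disk
        mdadm /dev/md0 --fail /dev/sdbY
        mdadm /dev/md0 --remove /dev/sdbY
        # after plugging it back in: only the dirty regions get copied
        mdadm /dev/md0 --re-add /dev/sdbY
        cat /proc/mdstat

    The consistency caveat is the one already suspected: the removed member is only crash-consistent, so a filesystem check (or a filesystem-level approach such as rsync snapshots) is safer if the external copy must be mountable on its own.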

    Read the article

  • bash alias doesn't carry over with sudo

    - by agent154
    I'm curious if there's a way to get my .bash_profile to work when I sudo a program. For example, I have it set to alias emerge='emerge -av' so that I can install software, and it will always ask me if I want to proceed before downloading and installing. However I just noticed when I sudo emerge foo, it defaults to just the plain command emerge foo instead of emerge -av foo. Only thing that comes to mind to fix this is to also put the alias in root's .bash_profile, but I don't want to have to resort to that since I will always have to make changes in two places when I want to add stuff to my own profile. Is there another way around this that I'm unfamiliar with?
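
    There is a bash-native trick for this: when an alias value ends in a space, bash also checks the next word for alias expansion, so one extra alias in your own profile makes sudo honour your aliases without touching root's files:

        # in your own ~/.bash_profile / ~/.bashrc
        alias sudo='sudo '          # trailing space: the word after sudo is alias-expanded too
        alias emerge='emerge -av'

    After re-sourcing the profile, sudo emerge foo expands to sudo emerge -av foo while root's environment stays untouched.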

    Read the article

  • I am getting a SQUID Error

    - by Dave
    Hello, What exactly is wrong here Entry in SQUID File--- httpd_accel_host virtual httpd_accel_port 80 httpd_accel_with_proxy on httpd_accel_uses_host_header on acl lan src 192.168.1.1 192.168.2.0/24 http_access allow localhost Error after: service squid restart 2010/02/01 14:24:29| Processing Configuration File: /etc/squid/squid.conf (depth 0) 2010/02/01 14:24:29| cache_cf.cc(361) parseOneConfigFile: squid.conf:10 unrecognized: 'broken_vary_encoding' 2010/02/01 14:24:29| WARNING: Netmasks are deprecated. Please use CIDR masks instead. 2010/02/01 14:24:29| WARNING: IPv4 netmasks are particularly nasty when used to compare IPv6 to IPv4 ranges. 2010/02/01 14:24:29| WARNING: For now we assume you meant to write /0 2010/02/01 14:24:29| WARNING: (B) '::/4294967200' is a subnetwork of (A) '::' 2010/02/01 14:24:29| WARNING: because of this '::' is ignored to keep splay tree searching predictable 2010/02/01 14:24:29| WARNING: You should probably remove '::/4294967200' from the ACL named 'all' 2010/02/01 14:24:29| WARNING: Netmasks are deprecated. Please use CIDR masks instead. 2010/02/01 14:24:29| WARNING: IPv4 netmasks are particularly nasty when used to compare IPv6 to IPv4 ranges. 2010/02/01 14:24:29| WARNING: For now we assume you meant to write /128 2010/02/01 14:24:29| aclParseIpData: unknown netmask '255.255.255.255' in '127.0.0.1/255.255.255.255' FATAL: Bungled squid.conf line 25: acl localhost src 127.0.0.1/255.255.255.255 Squid Cache (Version 3.1.0.14): Terminated abnormally. CPU Usage: 0.013 seconds = 0.006 user + 0.007 sys Maximum Resident Size: 0 KB Page faults with physical i/o: 0 Also please provide me with the simplest squid script for the proxy to run. Restrictions can be entered. Thanks Dave
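
    Two separate things are going on: broken_vary_encoding and the httpd_accel_* directives are Squid 2.x options that 3.1 no longer understands, and 3.1 wants CIDR masks rather than dotted netmasks (which is what makes the acl on line 25 fatal). A minimal hedged squid.conf sketch for a plain LAN proxy, reusing the addresses above:

        acl localhost src 127.0.0.1/32
        acl lan src 192.168.1.1 192.168.2.0/24
        http_access allow localhost
        http_access allow lan
        http_access deny all
        http_port 3128

    If the old httpd_accel_* lines were meant to make the proxy transparent, in 3.1 that is expressed as an option on the port (e.g. http_port 3128 intercept) plus the matching firewall redirect, not as separate directives.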

    Read the article

  • How can I get HTTPD to serve the html/php files and not list/index them when they are in a folder for a virtual host? (Using CentOS 6.0)

    - by LaserBeak
    My virtual hosts are configured as below. Initially I could not even get to the /public_html/ directory when typing example.com and apache would just serve me up the default welcome page; I would also get the error: Directory index forbidden by Options directive: /var/www/html/example.com/public_html/ in the log. After editing welcome.conf (- Index) so it does not show again, when I now type example.com the /public_html/ contents (Index.php) are indexed in the browser. Whereas I want it to actually execute and display the index.php page. vhost.conf, located in etc/httpd/vhost.d/ NameVirtualHost *:80 <VirtualHost *:80> ServerAdmin [email protected] ServerName localhost ServerAlias localhost.example.com DocumentRoot /var/www/html/example.com/public_html/ ErrorLog /var/www/html/example.com/logs/error.log CustomLog /var/www/html/example.com/logs/access.log combined </VirtualHost> <VirtualHost *:80> ServerAdmin [email protected] ServerName example.org ServerAlias www.example.org DocumentRoot /var/www/html/example.org/public_html/ ErrorLog /var/www/html/example.org/logs/error.log CustomLog /var/www/html/example.org/logs/access.log combined </VirtualHost> httpd.conf, settings on default, added onto end: Include /etc/httpd/vhosts.d/*.conf Root directories: DocumentRoot "/var/www/html"
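
    Two pieces are normally needed: a DirectoryIndex that names index.php (so Apache serves it instead of listing the directory) and PHP itself installed so the file is executed. A hedged sketch in Apache 2.2 syntax (matching CentOS 6), reusing the paths from the vhost above:

        <VirtualHost *:80>
            ServerName example.com
            DocumentRoot /var/www/html/example.com/public_html
            DirectoryIndex index.php index.html
            <Directory /var/www/html/example.com/public_html>
                Options -Indexes +FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    If yum list installed php shows nothing, installing php and restarting httpd is the other half; without mod_php the index.php would be listed or served as plain text rather than executed.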

    Read the article

  • Howto unzip ".xz" file with 7z and lzma

    - by neversaint
    I tried to uncompress a "*.xz" file with both 7z and lzma. But they gave me messages like this: $ 7z x myfile.fq.xz 7-Zip 4.57 Copyright (c) 1999-2007 Igor Pavlov 2007-12-06 p7zip Version 4.57 (locale=C,Utf16=off,HugeFiles=on,4 CPUs) Processing archive: myfile.fq.xz Error: Can not open file as archive $ 7z x myfile.fq.xz 7-Zip 4.57 Copyright (c) 1999-2007 Igor Pavlov 2007-12-06 p7zip Version 4.57 (locale=C,Utf16=off,HugeFiles=on,4 CPUs) Processing archive: myfile.fq.xz Error: Can not open file as archive and with lzma $ lzma -d myfile.fq.xz J_12.fq.xz: unknown suffix -- unchanged with other option: $ lzma -S .xz -d myfile.fq.xz lzma: SetDecoderProperties() error
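
    .xz is the container used by xz-utils; that p7zip build (4.57, from 2007) appears to predate 7-Zip's xz support, and the old standalone lzma tool only understands .lzma files. The xz binary itself handles it; a hedged sketch:

        xz -d  myfile.fq.xz         # decompress, leaves myfile.fq
        xz -dk myfile.fq.xz         # -k keeps the original .xz file as well
        unxz myfile.fq.xz           # same operation, different name
        xzcat myfile.fq.xz | head   # or stream it without unpacking to disk

    On distributions without an xz package, xz-utils provides these tools; a newer p7zip (9.x or later) can also open .xz archives.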

    Read the article

  • How can I create an “su” only user (no SSH or SFTP) and limit who can “su” into that account in RHEL5? [closed]

    - by Beaming Mel-Bin
    Possible Duplicate: How can I allow one user to su to another without allowing root access? We have a user account that our DBAs use (oracle). I do not want to set a password on this account and want to only allow users in the dba group to su - oracle. How can I accomplish this? I was thinking of just giving them sudo access to the su - oracle command. However, I wouldn't be surprised if there was a more polished/elegant/secure way.
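
    The sudo route can be scoped to exactly that one command for the dba group, which keeps the oracle account locked/passwordless while still logging who switched; a hedged sudoers sketch (always edited via visudo):

        # /etc/sudoers: members of dba may become oracle and nothing else
        %dba ALL = (root) NOPASSWD: /bin/su - oracle

    DBAs then run sudo su - oracle; direct logins to oracle stay impossible as long as the account keeps a locked password and, if desired, is denied in sshd (e.g. DenyUsers oracle).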

    Read the article

  • VPN iptables Forwarding: Net-to-net

    - by Mike Holler
    I've tried to look elsewhere on this site but I couldn't find anything matching this problem. Right now I have an ipsec tunnel open between our local network and a remote network. Currently, the local box running Openswan ipsec with the tunnel open can ping the remote ipsec box and any of the other computers in the remote network. When logged into on of the remote computers, I can ping any box in our local network. That's what works, this is what doesn't: I can't ping any of the remote computers via a local machine that is not the ipsec box. Here's a diagram of our network: [local ipsec box] ----------\ \ [arbitrary local computer] --[local gateway/router] -- [internet] -- [remote ipsec box] -- [arbitrary remote computer] The local ipsec box and the arbitrary local computer have no direct contact, instead they communicate through the gateway/router. The router has been set up to forward requests from local computers for the remote subnet to the ipsec box. This works. The problem is the ipsec box doesn't forward anything. Whenever an arbitrary local computer pings something on the remote subnet, this is the response: [user@localhost ~]# ping 172.16.53.12 PING 172.16.53.12 (172.16.53.12) 56(84) bytes of data. From 10.31.14.16 icmp_seq=1 Destination Host Prohibited From 10.31.14.16 icmp_seq=2 Destination Host Prohibited From 10.31.14.16 icmp_seq=3 Destination Host Prohibited Here's the traceroute: [root@localhost ~]# traceroute 172.16.53.12 traceroute to 172.16.53.12 (172.16.53.12), 30 hops max, 60 byte packets 1 router.address.net (10.31.14.1) 0.374 ms 0.566 ms 0.651 ms 2 10.31.14.16 (10.31.14.16) 2.068 ms 2.081 ms 2.100 ms 3 10.31.14.16 (10.31.14.16) 2.132 ms !X 2.272 ms !X 2.312 ms !X That's the IP for our ipsec box it's reaching, but it's not being forwarded. On the IPSec box I have enabled IP Forwarding in /etc/sysctl.conf net.ipv4.ip_forward = 1 And I have tried to set up IPTables to forward: *filter :INPUT ACCEPT [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [759:71213] -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT -A INPUT -p icmp -j ACCEPT -A INPUT -i lo -j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport 25 -j ACCEPT -A INPUT -p udp -m state --state NEW -m udp --dport 500 -j ACCEPT -A INPUT -p udp -m state --state NEW -m udp --dport 4500 -j ACCEPT -A INPUT -m policy --dir in --pol ipsec -j ACCEPT -A INPUT -p esp -j ACCEPT -A INPUT -j REJECT --reject-with icmp-host-prohibited -A FORWARD -s 10.31.14.0/24 -d 172.16.53.0/24 -j ACCEPT -A FORWARD -m policy --dir in --pol ipsec -j ACCEPT -A FORWARD -j REJECT --reject-with icmp-host-prohibited COMMIT Am I missing a rule in IPTables? Is there something I forgot? NOTE: All the machines are running CentOS 6.x Edit: Note 2: eth1 is the only network interface on the local ipsec box.
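
    The icmp-host-prohibited replies mean one of the REJECT rules on the ipsec box is winning, so the first step is to see which FORWARD rule the packets actually hit and whether they are entering the tunnel at all; a hedged diagnostic sketch:

        # per-rule packet counters while a ping from 10.31.14.x is running
        iptables -L FORWARD -v -n --line-numbers
        # confirm forwarding survived a reboot / sysctl reload
        sysctl net.ipv4.ip_forward
        # watch the traffic arriving on the LAN side (eth1 per the note above)
        tcpdump -ni eth1 icmp and host 172.16.53.12

    If the ACCEPT rule's counters stay at zero while the REJECT counter climbs, the packets are not matching 10.31.14.0/24 to 172.16.53.0/24 as expected, and NAT or an IPsec policy mismatch becomes the usual suspect.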

    Read the article

  • Fedora to Fedora remote desktop does not work (connection closed error, even after setting everything up)

    - by jonderry
    I'm trying to use remote desktop on my laptop (running Fedora) to my desktop (also running Fedora) on the same local network. I configured Remote Desktop on my desktop via System - Preferences - Remote Desktop, verified that the port is open by nmap, and attempted to connect from my laptop via vinagre (also tried appending :5900 for the port, and using the ip address). In all cases, the connection fails with a popup that says "Connection closed\n Connection to host was closed." EDIT: I am able to use vinagre from the desktop to remote desktop into itself, just not from one machine to the other. I tried vncviewer and a similar problem occurs (unable connect to socket: No route to host (113))
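
    "No route to host (113)" from vncviewer on a local subnet is almost always the desktop's firewall rejecting the connection rather than a routing problem (an nmap run on the desktop itself would not show that). Opening TCP 5900 on the desktop is the first thing to try; a hedged sketch, depending on whether that Fedora release uses firewalld or plain iptables:

        # firewalld
        firewall-cmd --add-port=5900/tcp
        # older iptables setups
        iptables -I INPUT -p tcp --dport 5900 -j ACCEPT

        # then re-test from the laptop
        vncviewer desktop-hostname:0

    If it works afterwards, make the rule permanent (firewall-cmd --permanent ... or save the iptables rules) so it survives a reboot.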

    Read the article

  • ODBC error state S1092: postgresql through ODBC

    - by mechcow
    While performing an upgrade, our in-house software started to report the following strange error. It is a C++ application talking to a remote PostgreSQL database, defined through ODBC: ODBC error state S1092, native error 0. [unixODBC][Driver Manager]Invalid attribute/option identifier Both the client and the server are CentOS 5.4 Xen guests with the following RPMs installed: postgresql-libs-8.1.18-2.el5_4.1 postgresql-odbc-08.01.0200-3.1 postgresql-8.1.18-2.el5_4.1 postgresql-server-8.1.18-2.el5_4.1 It's possible the schema changed as part of the upgrade, could this explain the error message? What does this error message actually indicate, and do you know any likely causes of it?
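
    S1092 ("Invalid attribute/option identifier") is the driver manager refusing a connection or statement attribute the application sets, which tends to surface after a unixODBC or psqlODBC version change rather than a schema change. Turning on unixODBC tracing shows exactly which call is rejected; a hedged sketch for /etc/odbcinst.ini:

        ; /etc/odbcinst.ini
        [ODBC]
        Trace     = Yes
        TraceFile = /tmp/odbc.trace

    Re-run the application and look for the failing SQLSetConnectAttr/SQLSetStmtAttr call in the trace; comparing the unixODBC and psqlODBC versions between the upgraded host and a still-working one is usually the next step.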

    Read the article

  • Unable to connect to Postgres on Vagrant Box - Connection refused

    - by Ben Miller
    First off, I'm new to Vagrant and Postgres. I created my Vagrant instance using http://files.vagrantup.com/lucid32.box without any trouble. I am able to run vagrant up and vagrant ssh without issue. I followed the instructions http://blog.crowdint.com/2011/08/11/postgresql-in-vagrant.html with one minor alteration: I installed the "postgresql-8.4-postgis" package instead of "postgresql postgresql-contrib" I started the server using: postgres@lucid32:/home/vagrant$ /etc/init.d/postgresql-8.4 start While connected to the vagrant instance I can use psql to connect to the instance without issue. In my Vagrantfile I had already added: config.vm.forward_port 5432, 5432 but when I try to run psql from localhost I get: psql: could not connect to server: Connection refused Is the server running locally and accepting connections on Unix domain socket "/tmp/.s.PGSQL.5432"? I'm sure I am missing something simple. Any ideas? Update: I found a reference to an issue like this and the article suggested using: psql -U postgres -h localhost with that I get: psql: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request.
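
    Port forwarding only delivers the packets to the guest; Postgres also has to listen on more than the loopback/Unix socket, and pg_hba.conf has to allow the forwarded TCP connection. A hedged sketch using the 8.4 paths on a Lucid guest:

        # /etc/postgresql/8.4/main/postgresql.conf
        listen_addresses = '*'            # the default is 'localhost'
        # /etc/postgresql/8.4/main/pg_hba.conf
        host    all    all    0.0.0.0/0    md5

        # inside the VM, then from the host machine:
        sudo /etc/init.d/postgresql-8.4 restart
        psql -U postgres -h localhost -p 5432

    The "Connection refused" without -h is expected (that looks for a local Unix socket on the host machine); the "server closed the connection unexpectedly" with -h localhost is consistent with the guest-side server not accepting the forwarded connection (usually listen_addresses or pg_hba.conf).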

    Read the article

  • iSCSI timeouts under high load

    - by Antonio
    I have two servers connected via Gigabit Ethernet. One is iSCSI target, the second one is initiator. When I run mkfs.ext4 at initiator, after a while disk IO slows down critically. In the target host I can see the following in syslog: Sep 14 09:40:03 sh11 tgtd: abort_task_set(1139) found 119668c 0 Sep 14 09:40:03 sh11 tgtd: abort_cmd(1115) found 119668c 6 Sep 14 09:40:03 sh11 tgtd: abort_task_set(1139) found 119668d 0 Sep 14 09:40:03 sh11 tgtd: abort_cmd(1115) found 119668d 6 Sep 14 09:40:03 sh11 tgtd: abort_task_set(1139) found 119668e 0 Sep 14 09:40:03 sh11 tgtd: abort_cmd(1115) found 119668e 6 Sep 14 09:40:03 sh11 tgtd: abort_task_set(1139) found 1196696 0 Sep 14 09:40:03 sh11 tgtd: abort_cmd(1115) found 1196696 6 Sep 14 09:40:03 sh11 tgtd: abort_task_set(1139) found 119669e 0 Sep 14 09:40:03 sh11 tgtd: abort_cmd(1115) found 119669e 6 Sep 14 09:40:04 sh11 tgtd: abort_task_set(1139) found 119669f 0 Sep 14 09:40:04 sh11 tgtd: abort_cmd(1115) found 119669f 6 And load average grows to 12 or even more: # uptime 12:37:00 up 23 days, 13:25, 1 user, load average: 12.00, 7.00, 4.00 CentOS 6.3 tgtd 1.0.24 Intel Pentium 4 2.4GHz 1Gb RAM 2Tb WD Cavlar Green SATA 2.0 #lspci 00:00.0 Host bridge: Intel Corporation 82845G/GL[Brookdale-G]/GE/PE DRAM Controller/Host-Hub Interface (rev 02) 00:01.0 PCI bridge: Intel Corporation 82845G/GL[Brookdale-G]/GE/PE Host-to-AGP Bridge (rev 02) 00:1d.0 USB controller: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) USB UHCI Controller #1 (rev 02) 00:1d.1 USB controller: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) USB UHCI Controller #2 (rev 02) 00:1d.2 USB controller: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) USB UHCI Controller #3 (rev 02) 00:1d.7 USB controller: Intel Corporation 82801DB/DBM (ICH4/ICH4-M) USB2 EHCI Controller (rev 02) 00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 82) 00:1f.0 ISA bridge: Intel Corporation 82801DB/DBL (ICH4/ICH4-L) LPC Interface Bridge (rev 02) 00:1f.1 IDE interface: Intel Corporation 82801DB (ICH4) IDE Controller (rev 02) 00:1f.3 SMBus: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) SMBus Controller (rev 02) 00:1f.5 Multimedia audio controller: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) AC'97 Audio Controller (rev 02) 01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI RV200 QW [Radeon 7500] 02:01.0 Ethernet controller: D-Link System Inc DGE-530T Gigabit Ethernet Adapter (rev 11) (rev 11) 02:02.0 RAID bus controller: VIA Technologies, Inc. VT6421 IDE/SATA Controller (rev 50) 02:03.0 RAID bus controller: VIA Technologies, Inc. VT6421 IDE/SATA Controller (rev 50) 02:04.0 RAID bus controller: Silicon Image, Inc. SiI 3114 [SATALink/SATARaid] Serial ATA Controller (rev 02) 02:08.0 Ethernet controller: Intel Corporation 82801DB PRO/100 VE (CNR) Ethernet Controller (rev 82) Is there a way to tune target host to avoid these timeouts?
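
    The abort_task/abort_cmd lines are the initiator giving up on outstanding commands and asking the target to abort them, so besides the fairly light target hardware it is mostly the initiator-side timeouts and queue depth that get adjusted for this symptom; a hedged sketch of the open-iscsi knobs in /etc/iscsi/iscsid.conf (values are only examples):

        node.session.timeo.replacement_timeout = 180
        node.conn[0].timeo.noop_out_interval = 10
        node.conn[0].timeo.noop_out_timeout = 30
        node.session.cmds_max = 64          # fewer commands in flight eases a slow target
        node.session.queue_depth = 16

    A logout/login of the session (iscsiadm -m node ... --logout then --login) is needed for new values to take effect; on the target side, the Pentium 4 with a single SATA disk is likely the real bottleneck rather than any tgtd setting.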

    Read the article

  • Subversion error: Repository moved permanently to please relocate

    - by Bart S.
    I've set up subversion and apache on my server. If I browse to it through my web browser it works fine (http://svn.host.com/reposname). However, if I do a checkout on my machine I get the following error: Command: Checkout from http://svn.host.com/reposname, revision HEAD, Fully recursive, Externals included Error: Repository moved permanently to 'http://svn.host.com/reposname/'; please relocate I checked Apache's error log, but it doesn't say anything. My repositories are stored under: /var/www/svn/repos/ My website is stored under: /var/www/vhosts/x/... Here's the conf file for the subdomain: <Location /> Options +indexes DAV svn SVNParentPath /var/www/svn/repos/ AuthType Basic AuthName "Authorization Realm" AuthUserFile /var/www/svn/auth/svn.htpasswd Require valid-user </Location> Authentication works fine. Does anyone know what might be causing this?
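
    This particular error usually means the client received a plain HTTP redirect instead of a DAV answer: typically because the checkout URL is also covered by a normal Apache directory, or points at the SVNParentPath listing itself rather than a repository. One common way to rule that out is to give mod_dav_svn its own path prefix instead of "/"; a hedged sketch:

        <Location /svn>
            DAV svn
            SVNParentPath /var/www/svn/repos/
            AuthType Basic
            AuthName "Authorization Realm"
            AuthUserFile /var/www/svn/auth/svn.htpasswd
            Require valid-user
        </Location>

    The checkout then uses svn co http://svn.host.com/svn/reposname; if the error persists, double-check that no other vhost or DocumentRoot answers for svn.host.com ahead of this one.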

    Read the article

  • How to configure wireless in gentoo?

    - by Absolute0
    I have a single access point which I want to connect to on interface ra0. When I run /etc/init.d/net.ra0 restart I get the following output: gentoo ~ # /etc/init.d/net.ra0 restart * Starting ra0 * Configuring wireless network for ra0 Error for wireless request "Set Mode" (8B06) : SET failed on device ra0 ; Network is down. * ra0 does not support setting the mode to "managed" Error for wireless request "Set Encode" (8B2A) : SET failed on device ra0 ; Network is down. * ra0 does not support setting keys * or the parameter "mac_key_roswell" or "key_roswell" is incorrect Error for wireless request "Set Mode" (8B06) : SET failed on device ra0 ; Network is down. * ra0 does not support setting the mode to "managed" * WEP key is not set for "BAY_WiFi" - not connecting * Couldn't associate with any access points on ra0 * Failed to configure wireless for ra0 when I run iwlist ra0 scan I get "roswell" and "bay-wifi" I want to connect to only roswell. Here is my /etc/conf.d/net: modules= ( "iwconfig" ) key_roswell="ffff-ffff-ff" # no s: means a hex key preferred_aps=( "roswell" ) What am I doing wrong?
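
    The repeated "Network is down" answers mean the interface is not up when the wireless settings are applied, and the "WEP key is not set for BAY_WiFi" line shows it is still trying the other AP. A hedged way to test the association by hand before going back to the init script (the key mode is an assumption):

        ip link set ra0 up                                   # bring the interface up first
        iwconfig ra0 essid "roswell" key restricted ffff-ffff-ff
        iwconfig ra0                                         # "Access Point:" should no longer read Not-Associated

    If that works by hand, the conf.d/net side usually just needs the AP pinned, e.g. preferred_aps=( "roswell" ) as above together with associate_order="forcepreferred" so BAY_WiFi is never attempted.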

    Read the article

  • mono: application not running when web.config in subfolder

    - by Quandary
    I run ASP.NET on Mono. I wanted to run BlogEngine: http://www.dotnetblogengine.net/ I downloaded it, put it in /var/www/blogengine, went to /var/www/blogengine on the console and started xsp2. I went to http://localhost:8080, and it ran without problems. Then I stopped xsp2, went to /var/www and started xsp2. I went to http://localhost:8080/blogengine It ended with a strange error: The section <blogProvider> can't be defined in this configuration file (the allowed definition context is 'MachineToApplication'). (/var/www/blogengine/Web.Config line 9) The problem seems to be that it stops working as soon as the xsp2 root folder is anything other than the application root folder... Do I have to configure anything? Or what else is wrong?
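
    The MachineToApplication error is expected when the blog's Web.config is read as a sub-directory config of an application rooted at /var/www. xsp2 can be told to treat the subfolder as its own application instead; a hedged sketch of its --applications mapping (syntax is virtual-path:physical-path pairs):

        # from /var/www: map / to /var/www and /blogengine to its own application root
        xsp2 --applications /:/var/www,/blogengine:/var/www/blogengine --port 8080

    With that mapping, http://localhost:8080/blogengine starts a separate application whose root really is /var/www/blogengine, so the <blogProvider> section is back in an allowed context. (Under Apache with mod_mono the equivalent is a MonoApplications/AddMonoApplications entry.)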

    Read the article
