Search Results

Search found 7114 results on 285 pages for 'hope'.


  • passing command-line options in bash

    - by bryan
    I have a question. I have a fairly long bash script, approximately 370 lines, with several functions. In those functions the user enters information which is then stored in files. (This is supposed to represent a MySQL database, with the functions INSERT, UPDATE, SELECT, and SELECT where x=y.) I created this myself in bash; now the only thing that remains is that I need to be able to pass arguments on the command line to the script that do the same things the script already does.

    I know that bash has positional parameters such as $1 $2 $3 $* $@ and $0 (which refers to the name of the script), and I know how to use these parameters in a simple if statement. That is not enough for my script: I basically need to do the same thing the script does, but from the command line. I have been struggling with this for a couple of days now and I cannot think of a way to get it to work. Maybe someone here can help me with this? If you want to have the script, that can be arranged, but I don't think I can paste it in here...

    EDIT: Link to script: http://pastebin.com/Hd5VsDv2 (note, I am a beginner in bash scripting).

    EDIT: This is in reply to Answer 1. As I said, I hope I can just replace if [ "$1" = "one" ] ; then echo "found one" with if [ "$1" = "one" ] ; then echo SELECT, where SELECT is the function I previously had in my script (above). http://pastebin.com/VFMkBL6g (link to testing script).
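    What I am picturing is a dispatcher at the bottom of the script, mapping the first argument onto the existing functions; a minimal sketch (the function names and the number of arguments they take are assumed to match the ones in my script):

        #!/bin/bash
        # ... existing INSERT/UPDATE/SELECT function definitions above ...

        case "$1" in
            insert) INSERT "$2" "$3" ;;   # remaining args are passed to the function
            update) UPDATE "$2" "$3" ;;
            select) SELECT "$2" ;;
            *)      echo "usage: $0 {insert|update|select} [args...]" >&2
                    exit 1 ;;
        esac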


  • How to set up Zabbix to monitor SQL Server Failover Active-Passive Cluster?

    - by Sebastian Zaklada
    It should be simple, so most likely my approach is totally off and someone will hopefully prod me in the right direction. We have a Zabbix 2.0.3 server instance set up monitoring a bunch of different servers, but now we need to set it up to monitor, and notify on any alerts for, a SQL Server 2008 R2 failover active-passive cluster.

    Essentially this is a two-server cluster, where only one of its nodes can be "active" at a given time, serving all SQL Server related requests, while the other server just "sleeps" and, from the point of view of anyone logged on to that server, has all of the SQL Server related services in a stopped state.

    We have tried setting up Zabbix agents on both servers, using SQL Server 2005 templates (we could not find any 2008-specific ones, and the 2005 ones have always seemed to work just fine for monitoring 2008 R2 instances) and configuring the Zabbix server for both of the servers, but we end up with constant alerts for whichever server is currently the passive node in the cluster. We have been able to look up various methods of monitoring the failover itself, but we have not found any guidance on how to tell Zabbix that, in this particular case, only one of the servers in the group is expected to be online, while the other can be disregarded and should not raise any alerts.

    I hope I made myself clear. Thanks for any guidance. I am out of ideas.
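    One pattern that fits an active-passive pair is an aggregate check: give each node an item that reports 1 when the SQL service is running, sum it across a host group on the Zabbix server, and alert only when that sum is not exactly 1. A rough sketch; the group name and the mssql.active key are made up for illustration, not taken from any stock template:

        # on each node's agent (Windows): a hypothetical custom key returning
        # 1 when the SQL Server service is running, 0 otherwise
        UserParameter=mssql.active,cmd /c sc query MSSQLSERVER | find /c "RUNNING"

        # server side: an item of type "Zabbix aggregate", with the key
        grpsum["MSSQL cluster","mssql.active",last,0]

        # trigger: fire only when no node (or more than one node) is active
        {monitor-host:grpsum["MSSQL cluster","mssql.active",last,0].last(0)}<>1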


  • Apache: rewrite port 80 and 443 - multiple SSL vhosts setup

    - by Benjamin Jung
    SETUP: Multiple SSL domains are configured on a single IP, using vhosts with different port numbers (on which Apache listens). Apache 2.2.8 on Windows 2003 (no comments on this, please). Too many Windows XP users, so SNI isn't an option yet. There may be reasons why it's wrong to use this approach, but it works for now. The vhost setup:

        # secure domain 1
        <VirtualHost IP:443>
            SSL stuff specifying certificate etc.
            ServerName domain1.org
        </VirtualHost>

        # secure domain 2
        <VirtualHost IP:81>
            SSL stuff for domain2.org
            ServerName domain2.org
        </VirtualHost>

    GOAL: Some folders inside the domain2.org docroot need to be secure. I used a .htaccess file to rewrite the URL to https on port 81:

        RewriteEngine On
        RewriteCond %{SERVER_PORT} !^81$
        RewriteRule (.*) https://%{HTTP_HOST}:81%{REQUEST_URI} [R]

    Suppose I put the .htaccess in the folder 'secfolder'. When accessing http://domain2.org/secfolder this gets successfully rewritten to https://domain2.org:81/secfolder.

    ISSUE: When accessing https://domain2.org/secfolder (without port 81), the certificate from the first vhost (domain1.org) is used and the browser complains that the site is insecure because the certificate is not valid for domain2.org. I thought that RewriteCond %{SERVER_PORT} !^81$ would also rewrite https://domain2.org to https://domain2.org:81, but it doesn't. It seems that the .htaccess file is not being used at all in this case. At this point I am not sure how to apply a RewriteRule to https://domain2.org. I tried creating an additional vhost for domain2 on port 443 before the one for domain1.org, but Apache seems to choke on that. I hope someone of you has an idea how to approach this. TIA.
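    Worth keeping in mind: the TLS handshake, and thus the certificate choice, happens before Apache ever sees the Host header, so a request for https://domain2.org lands in the IP:443 vhost and domain1's document tree, and domain2's .htaccess never runs. Any rewrite therefore has to live inside the vhost that owns port 443, and the certificate warning itself is unavoidable without SNI, since the wrong certificate has already been served by the time the redirect is sent. A sketch of that host-based redirect:

        # inside <VirtualHost IP:443>, the vhost that currently owns port 443
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^domain2\.org$ [NC]
        RewriteRule (.*) https://domain2.org:81%{REQUEST_URI} [R,L]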


  • Concerns about Apache per-Vhost logging setup

    - by etienne
    I'm both senior developer and sysadmin in my company, so I'm trying to deal with the needs of both activities. I've set up our Apache box, which deals with 30-50 domains at the moment (and hopefully will grow larger) and hosts both production and development sites, with this directory structure:

        domains/
        domains/domain.ext/                                    # FTPS chroot for user domain.ext
        domains/domain.ext/public                              # the DocumentRoot of http://domain.ext
        domains/domain.ext/logs
        domains/domain.ext/subdomains/sub.domain.ext
        domains/domain.ext/subdomains/sub.domain.ext/public    # DocumentRoot of http://sub.domain.ext

    Each domain.ext vhost runs with its own dedicated user and group via mpm-itk, umask being 027, and the logs are stored via a piped sudo command, like this:

        ErrorLog "| /usr/bin/sudo -u nobody -g domain.ext tee -a domains/domain.ext/logs/sub.domain.ext_error.log"
        CustomLog "| /usr/bin/sudo -u nobody -g domain.ext tee -a domains/domain.ext/logs/sub.domain.ext_access.log" combined

    Now, I've read a lot about not letting the logs out of a very restricted directory, but the developers often need to take a quick look at a particular subdomain's error log, and I don't really want to give them admin rights to look into /var/log. Having the logs available in the FTP account is really handy during development stages. Do you think this setup is viable and safe enough? To me it looks good, but I'm concerned about three security issues:

    1. Is the sudo pipe enough to deal with symlink exploits? Any catches I'm missing?
    2. Log DoS: the logs are on the same partition as all the domains. It has hundreds of gigs, but still, if one site gets disk-space DoS'd, everything will break. Any workaround? Will a short-timed logrotate suffice (see the sketch below)?
    3. File descriptor limits: AFAIK the default limit for Apache on Ubuntu Server is currently 8192, which should be plenty to handle two log files per subdomain. Is it? Am I missing something?

    I hope to read some thoughts on the matter!
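    On the log-DoS point, logrotate can rotate on size rather than on a schedule, which caps the damage between runs; a minimal sketch, with the absolute path and the size threshold assumed:

        # /etc/logrotate.d/vhost-logs
        /var/www/domains/*/logs/*.log {
            size 100M          # rotate any log as soon as it passes 100 MB
            rotate 5
            compress
            missingok
            notifempty
        }

    Since logrotate itself only runs when cron invokes it (daily by default), the size check is only as tight as the cron schedule; invoking it hourly narrows the window.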


  • Windows 2000 Server, user can't log on

    - by Mike I
    I hope you can help me. I recently upgraded a workstation at my office (to a whole new machine) and ran into a pretty serious problem. Until 5:00 PM on Friday, I could access my mail on our Exchange 2000 server. When I shut the old workstation down and put in the new workstation, I tried to set up an account: I put the server name in the appropriate field, typed my username, and hit Check Names, but my username does not come up.

    So, to troubleshoot (it is also an SMB server), I tried to log on to my file share. (My local credentials are the same as the server credentials of my user account.) When I try to log on to the share, I just get the username/password prompt, which I had never gotten before, since the credentials are the same. Again, in troubleshooting mode, I tried to log on to my user from another workstation. Still can't authenticate via my user. Every other user can authenticate and load up their shares and mailboxes.

    I have restored Exchange from the backup as of three days ago (Thursday), but the exact same issue is still there. I really do not understand what is wrong or what else I can do to troubleshoot. If anyone has some pointers for me, I will surely accept them. Thanks, Mike


  • APC Smart UPS network shutdown issue

    - by Rob Clarke
    Here is a bit about our setup:

    - We have 2x Smart-UPS RT 6000 XL units with network management cards.
    - We are running PowerChute from a network server.
    - PowerChute is connected to the management cards of both UPSs.
    - The UPSs are set to do a graceful shutdown via PowerChute when the battery duration is under 20 minutes.
    - We also have a command file that runs with PowerChute.
    - Although our setup is redundant, we do not have an equal load on each UPS, due to APC switches for single-power devices.

    The problem is that because the load on the two UPSs is unequal, the batteries drain at different rates. This means that the UPSs reach the specified low-battery duration at completely different times. UPS 1 may have run down to 5 minutes and be in desperate need of initiating a PowerChute shutdown, while UPS 2 still has 25 minutes of runtime, so no shutdown is initiated. Consequently UPS 1 goes down, takes all the servers with it, and then shuts down UPS 2 as well!

    What we need is one of two things:

    1. PowerChute initiates the shutdown as soon as either UPS reaches the 20-minute low-battery duration setting, and doesn't wait for both.
    2. The UPS with the heavier load expends its entire battery but does not shut down both UPSs, letting the load switch across to the UPS that still has runtime remaining. That way, when the remaining UPS reaches its low-battery duration, it can proceed with the graceful shutdown via PowerChute.

    Hope that makes sense; any help is greatly appreciated!


  • Either nginx+php-fpm has a bad config, or nginx+php-fpm cannot handle a high query load?

    - by The Wolf
    I have WordPress installed on my server, configured (hopefully correctly) with nginx + php-fpm + MariaDB. I am trying to import a 1.5 MB XML file using the WordPress importer. Every time I try to upload it, the process gets cut off, meaning I get just a blank screen as a result. Here is my error log (actually, just two of the errors):

        [error] 858#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xxx.xx.xx, server: xxx.com, request: "GET xxxx.html HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "xxx.com"
        [error] 858#0: *13 connect() failed (111: Connection refused) while connecting to upstream, client: xxx.x.xx.xx, server: xxx.com, request: "GET xxxx.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "xxx.com"

    I don't know why it can't process the WordPress export XML. I have already increased max_file_upload etc., but nothing changes. I hope somebody can help me. Here are my configs:

    nginx.conf:

        user nginx;
        worker_processes 8;
        error_log /var/log/nginx/error.log warn;
        pid /var/run/nginx.pid;
        events {
            worker_connections 1024;
        }
        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
            access_log /var/log/nginx/access.log main;
            sendfile on;
            #tcp_nopush on;
            server_tokens off;
            keepalive_timeout 65;
            fastcgi_read_timeout 500;
            #gzip on;
            client_max_body_size 2M;

    php-fpm.conf:

        ;;;;;;;;;;;;;;;;;;;;;
        ; FPM Configuration ;
        ;;;;;;;;;;;;;;;;;;;;;
        ; All relative paths in this configuration file are relative to PHP's install
        ; prefix.
        ; Include one or more files. If glob(3) exists, it is used to include a bunch of
        ; files from a glob(3) pattern. This directive can be used everywhere in the
        ; file.
        include=/etc/php-fpm.d/*.conf

        ;;;;;;;;;;;;;;;;;;
        ; Global Options ;
        ;;;;;;;;;;;;;;;;;;
        [global]
        ; Pid file
        ; Default Value: none
        pid = /var/run/php-fpm/php-fpm.pid
        ; Error log file
        ; Default Value: /var/log/php-fpm.log
        error_log = /var/log/php-fpm/error.log
        ; Log level
        ; Possible Values: alert, error, warning, notice, debug
        ; Default Value: notice
        ;log_level = notice
        ; If this number of child processes exit with SIGSEGV or SIGBUS within the time
        ; interval set by emergency_restart_interval then FPM will restart. A value
        ; of '0' means 'Off'.
        ; Default Value: 0
        ;emergency_restart_threshold = 0
        ; Interval of time used by emergency_restart_interval to determine when
        ; a graceful restart will be initiated. This can be useful to work around
        ; accidental corruptions in an accelerator's shared memory.
        ; Available Units: s(econds), m(inutes), h(ours), or d(ays)
        ; Default Unit: seconds
        ; Default Value: 0
        ;emergency_restart_interval = 0
        ; Time limit for child processes to wait for a reaction on signals from master.
        ; Available units: s(econds), m(inutes), h(ours), or d(ays)
        ; Default Unit: seconds
        ; Default Value: 0
        ;process_control_timeout = 0
        ; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging.
        ; Default Value: yes
        daemonize = no

        ;;;;;;;;;;;;;;;;;;;;
        ; Pool Definitions ;
        ;;;;;;;;;;;;;;;;;;;;
        ; See /etc/php-fpm.d/*.conf

    ps aux:

        [root@host etc]# ps aux
        USER     PID %CPU %MEM    VSZ   RSS TTY   STAT START TIME COMMAND
        root       1  0.0  0.1   2900  1380 ?     Ss   Jun02 0:00 init
        root       2  0.0  0.0      0     0 ?     S    Jun02 0:00 [kthreadd/9308]
        root       3  0.0  0.0      0     0 ?     S    Jun02 0:00 [khelper/9308]
        root     124  0.0  0.0   2464   576 ?     S<s  Jun02 0:00 /sbin/udevd -d
        root     460  0.0  0.1  35976  1308 ?     Sl   Jun02 0:00 /sbin/rsyslogd -i /var/run/syslogd.pid -c 5
        root     474  0.0  0.0   8940  1028 ?     Ss   Jun02 0:00 /usr/sbin/sshd
        root     481  0.0  0.0   3264   876 ?     Ss   Jun02 0:00 xinetd -stayalive -pidfile /var/run/xinetd.pid
        root     491  0.0  0.1   6268  1432 ?     S    Jun02 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/host.busilak.com.
        mysql    584  0.1  6.8 679072 71456 ?     Sl   Jun02 0:04 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --use
        root     586  0.0  0.3  12008  3820 ?     Ss   Jun02 0:01 sshd: root@pts/0
        root     629  0.0  0.0   9140   756 ?     Ss   Jun02 0:00 /usr/sbin/saslauthd -m /var/run/saslauthd -a pam -n 2
        root     630  0.0  0.0   9140   520 ?     S    Jun02 0:00 /usr/sbin/saslauthd -m /var/run/saslauthd -a pam -n 2
        root     645  0.0  0.1  12788  1928 ?     Ss   Jun02 0:01 sendmail: accepting connections
        smmsp    653  0.0  0.1  12576  1728 ?     Ss   Jun02 0:00 sendmail: Queue runner@01:00:00 for /var/spool/clientmqueue
        root     691  0.0  0.1   7148  1184 ?     Ss   Jun02 0:00 crond
        root     698  0.0  0.1   6272  1688 pts/0 Ss   Jun02 0:00 -bash
        root    1006  0.0  0.0   7828   924 ?     Ss   00:30 0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
        nginx   1007  0.0  0.1   8156  1724 ?     S    00:30 0:00 nginx: worker process
        nginx   1008  0.0  0.1   8024  1360 ?     S    00:30 0:00 nginx: worker process
        nginx   1009  0.0  0.1   8020  1356 ?     S    00:30 0:00 nginx: worker process
        nginx   1011  0.0  0.1   8024  1360 ?     S    00:30 0:00 nginx: worker process
        nginx   1012  0.0  0.1   8024  1360 ?     S    00:30 0:00 nginx: worker process
        nginx   1013  0.0  0.1   8024  1360 ?     S    00:30 0:00 nginx: worker process
        nginx   1014  0.0  0.1   8024  1360 ?     S    00:30 0:00 nginx: worker process
        nginx   1015  0.0  0.1   8024  1344 ?     S    00:30 0:00 nginx: worker process
        root    1030  0.0  0.2  25396  2904 ?     Ss   00:30 0:00 php-fpm: master process (/etc/php-fpm.conf)
        apache  1031  0.0  1.9  40700 20624 ?     S    00:30 0:00 php-fpm: pool www
        apache  1032  0.0  2.0  41924 21888 ?     S    00:30 0:01 php-fpm: pool www
        apache  1033  0.0  1.9  41212 20848 ?     S    00:30 0:01 php-fpm: pool www
        apache  1034  0.0  1.9  40956 20792 ?     S    00:30 0:01 php-fpm: pool www
        apache  1035  0.0  2.0  41560 21556 ?     S    00:30 0:02 php-fpm: pool www
        apache  1040  0.0  1.8  39292 19120 ?     S    00:30 0:00 php-fpm: pool www
        root    1125  0.0  0.0   6080  1040 pts/0 R+   01:04 0:00 ps aux

    netstat -l:

        [root@host etc]# netstat -l
        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address               Foreign Address  State
        tcp        0      0 *:ssh                       *:*              LISTEN
        tcp        0      0 localhost.localdomain:smtp  *:*              LISTEN
        tcp        0      0 localhost.locald:cslistener *:*              LISTEN
        tcp        0      0 *:mysql                     *:*              LISTEN
        tcp        0      0 *:http                      *:*              LISTEN
        tcp        0      0 *:ssh                       *:*              LISTEN
        Active UNIX domain sockets (only servers)
        Proto RefCnt Flags      Type   State      I-Node   Path
        unix  2      [ ACC ]    STREAM LISTENING  60575947 /var/run/saslauthd/mux
        unix  2      [ ACC ]    STREAM LISTENING  60574168 @/com/ubuntu/upstart
        unix  2      [ ACC ]    STREAM LISTENING  60575873 /var/lib/mysql/mysql.sock

    I hope somebody can help me figure out what the problem is.
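    Two things stand out here, sketched below. First, "connection refused" on 127.0.0.1:9000 means nothing was accepting on that port at that moment, so it is worth pinning the pool's listen address to what fastcgi_pass expects (note that netstat does show a listener on port 9000, which the service name "cslistener" maps to, so the pool may simply have been down or restarting when those errors hit). Second, nginx's client_max_body_size is only one of the limits involved in an import; PHP enforces its own. The values below are illustrative:

        # /etc/php-fpm.d/www.conf -- match what nginx's fastcgi_pass points at
        listen = 127.0.0.1:9000

        ; php.ini -- raise PHP's own limits for the importer, then restart php-fpm
        upload_max_filesize = 8M
        post_max_size = 8M
        max_execution_time = 300
        memory_limit = 128M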


  • Edit text files over SSH using a local text editor

    - by Mikko Ohtamaa
    I work in various Linux and UNIX environments, and I'd like to elegantly solve the problem of editing remote configuration files over SSH. Instead of using terminal editors (nano), I'd like to open the file in a local text editor on my desktop (Sublime Text 2). CyberDuck, WinSCP and various other SFTP apps can do this. Using editors over X11 forwarding has also proven problematic, and archaic text editors like Vim or Emacs do not serve my needs well either; they could do this, but I prefer other text editing software. SSH mounts (FUSE) are also problematic unless they can happen on demand, triggered by the remote side.

    So what I hope to achieve:

    1. Have some kind of easily deployable shell script, etc., which I can copy to the remote server (let's call it mooedit).
    2. I run the mooedit command on the remote server, to which I am connected over an SSH connection.
    3. mooedit sends some kind of signal (over SSH) to my local desktop.
    4. On my local desktop this signal is captured, and it determines: "aha! moo wants to edit a file on server X in folder Y".
    5. The file is transferred via SFTP to the local desktop (/tmp).
    6. The file is opened in a nice GUI text editor on the local desktop.
    7. When Save is pressed, the local desktop notices the changes in the file, and SFTP sends the resulting file back to the server.

    The question is: what signaling mechanisms does SSH provide for this? Any other methods to trigger a local text editor for a remote SSH file?
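    For Sublime Text 2 specifically, this exact workflow already exists: the rsub plugin on the desktop plus the rmate shell script on the server. The "signal" is simply a TCP connection made through a reverse-forwarded port, so no extra SSH mechanism is needed; a sketch of the moving parts:

        # on the desktop: forward the editor's listening port back to the server
        # (rsub/rmate conventionally use port 52698)
        ssh -R 52698:localhost:52698 user@server

        # on the server: push a file to the local editor; saving writes it back
        rmate /etc/nginx/nginx.conf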


  • copy a large LVM volume (14 TB) from one server to another

    - by bruce
    Recently I have had to copy a very large LVM volume (14 TB) from server A to server B. Below are the filesystems of server A and server B.

    Server A:

        [root@AVDVD-Filer ~]# df -h
        Filesystem                         Size  Used Avail Use% Mounted on
        /dev/mapper/vg_avdvdfiler-lv_root   16T   14T  1.5T  91% /
        tmpfs                              3.0G     0  3.0G   0% /dev/shm
        /dev/cciss/c0d0p1                  194M   23M  162M  13% /boot
        /dev/mapper/vg_avdvdfiler-test     2.3T  201M  2.1T   1% /test
        /dev/sr0                           3.3G  3.3G     0 100% /mnt

    Server B:

        [root@localhost ~]# df -h
        Filesystem                         Size  Used Avail Use% Mounted on
        /dev/mapper/VolGroup-LogVol00       20G  2.5G   16G  14% /
        tmpfs                              3.0G     0  3.0G   0% /dev/shm
        /dev/cciss/c0d0p1                  194M   23M  162M  13% /boot
        /dev/mapper/VolGroup00-LogVol00     16T  133M   15T   1% /xiangao/lv1
        /dev/mapper/VolGroup00-LogVol01    4.7T  190M  4.5T   1% /xiangao/lv2

    I want to copy the LVM volume /dev/mapper/vg_avdvdfiler-lv_root on server A to the LVM volume /dev/mapper/VolGroup00-LogVol00 on server B. Server A and server B are in the same IP segment. The LVM volume on server A holds video files (avi, wmv, mp4, etc.) averaging around 500 MB each. I tried mounting /dev/mapper/vg_avdvdfiler-lv_root from server A on server B through NFS, then copying with the cp command; clearly, I failed. Because the LVM volume is so big, I do not have a good idea how to proceed, and I hope to find a good solution here. I'm Chinese and my English is poor; sorry, and thanks, everyone!
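    Since the data is ordinary files rather than something needing a block-level copy, a file-level transfer that can resume after interruption is the usual answer at this size; a sketch, run on server B, with the paths taken from above:

        # pull everything from server A's root volume, staying on that one
        # filesystem; resumable, with progress (-P = --partial --progress)
        rsync -avP --one-file-system root@serverA:/ /xiangao/lv1/

    An alternative worth knowing on a trusted LAN is tar piped over netcat, which avoids ssh's encryption overhead, at the price of no resume and no integrity checking.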


  • Fix MBR from installed Windows Vista

    - by Danilo
    Hi guys, I have quite a strange problem. I had a system with Vista and Ubuntu installed. We always use Vista, and Ubuntu was something we really did not need; BUT, to boot, GRUB was used (I guess GRUB 2). Now, while in Vista, I deleted the Ubuntu partition, and with it also GRUB. Now the system does not boot anymore. I tried to reinstall Ubuntu, but I had some problems with the CD. At the moment, when the system boots I get dropped into the GRUB shell. From there, I am able to boot Windows Vista with commands like these:

        grub> title windows
              rootnoverify (hd0,msdos3)
              chainloader +1
              boot

    Now the question is: if I am able to boot into Windows Vista with this trick, is it possible to fix the MBR from inside the installed Windows Vista with some command/tool of Vista itself? I should probably mention that we are not interested in dual boot at the moment; we only want Vista to start. To sum up the question: is there a way to fix the MBR from the installed version of Windows Vista, considering that GRUB is currently installed? I hope I was clear enough. Thanks for your help.
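    Vista can in fact rewrite its own MBR: bootsect.exe ships in the \boot folder of the Vista install DVD and can be run from an elevated command prompt inside the installed system, and the DVD's recovery environment offers bootrec as well. A sketch, with the DVD drive letter assumed to be D:

        rem from inside Vista, in an elevated command prompt:
        D:\boot\bootsect.exe /nt60 SYS /mbr

        rem or, from the install DVD's "Repair your computer" command prompt:
        bootrec /FixMbr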


  • repair partition table

    - by m.sr
    Hello. I've just overwritten the partition table of my system's hard disk: I ran cfdisk on the wrong device (/dev/sda instead of /dev/sdd), deleted all partitions, made one new primary partition spanning the whole device, set its type to 07 (NTFS), and hit write.

    So here I am with my system still running. Until I reboot, I hope/guess nothing will change, meaning all my data is accessible. (I'm currently making a dd backup of the whole device and plan to make a .tar.gz backup of the most important data later.) I also backed up /proc/partitions, /proc/diskstats (even though I guess this is more about throughput and things like that...) and /sys/block/sda/sda?/{start,size}.

    Some further things I know:

    - 4 primary partitions
    - 1st partition: ~100 MB, ext3, /boot
    - 2nd partition: ~100 MB, "Win7 boot partition", NTFS(?)
    - 3rd partition: ~20-30 GB, Win7, NTFS
    - 4th partition: ~20-30 GB, LUKS-encrypted device
    - The LUKS-decrypted device is an LVM PV
    - The /, /home and swap partitions are all LVs on the (VG on the) above-noted PV

    So my questions:

    1. What is the simplest way to just write the kernel's partition table back to the disk (see the sketch below)?
    2. What is the simplest way to take the above-mentioned data (and perhaps other data I don't know of...) and regenerate the partition table?
    3. Are there any problems to take care of with regard to LUKS and/or LVM?
    4. Is there any data I should back up before rebooting (meaning stuff from the kernel [/sys/..., /proc/...] and so on, which could help me regenerate the partition table)?

    Thanks a lot!

    P.S.: Debian sid, kernel 2.6.34-1-amd64 from debian-experimental, 80 GB Intel SSD.
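    Since sfdisk works in the same 512-byte sector units as the /sys/block start and size files, those backups map directly onto its input format; a sketch, with the sector numbers below as placeholders to be replaced by the backed-up values (type codes: 83 Linux, 7 NTFS):

        # the kernel's in-memory view, already backed up:
        cat /sys/block/sda/sda?/start /sys/block/sda/sda?/size

        # rebuild the on-disk table: one "start,size,type" line per partition
        sfdisk --force -uS /dev/sda <<'EOF'
        2048,204800,83
        206848,204800,7
        411648,62914560,7
        63326208,62914560,83
        EOF

    LUKS and LVM only see the block device, so as long as the start/size values are exact, they come back untouched.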


  • How to make new file permissions inherit from the parent directory?

    - by Wai Yip Tung
    I have a directory called data, and I am running a script under the user ID 'robot'. robot writes to the data directory and updates files inside it. The idea is that data is open for both me and robot to update, so I set up the permissions and owner group like this:

        drwxrwxr-x 2 me robot-grp 4096 Jun 11 20:50 data

    where both me and robot belong to 'robot-grp'. I changed the permissions and the owner group recursively, like the parent directory. I regularly upload new files into the data directory using rsync. Unfortunately, newly uploaded files do not inherit the parent directory's permissions as I hoped; instead they look like this:

        -rw-r--r-- 1 me users 6 Jun 11 20:50 new-file.txt

    When robot tries to update new-file.txt, it fails due to lack of file permissions. I'm not sure if setting the umask helps; in any case the new files do not really follow it:

        $ umask -S
        u=rwx,g=rx,o=rx

    I'm often confounded by Unix file permissions. Do I even have the right plan? I'm using Debian lenny.
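    Two mechanisms cover this, sketched below; note that with -a, rsync preserves the source file's permissions and group, which is why the umask never comes into play:

        # setgid bit on the directory: files created inside inherit robot-grp
        chmod g+s data

        # and/or force the mode at upload time instead of copying source perms
        rsync -av --chmod=Dg+rwxs,Fg+rw ./src/ server:data/

        # on filesystems mounted with acl support, a default ACL also works:
        setfacl -d -m g:robot-grp:rwX data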


  • PPP kernel module fails to load

    - by Harel
    I am trying to deal with a problem on a server I don't normally administer. Out of the blue, a script using PPP started failing, saying that the ppp kernel module is not loaded; when I try to modprobe it, it complains about missing files. Note below that the kernel version the server thinks it is running does not match the kernel version directory in /lib/modules. I'm not sure how this could have happened. Could the other maintainers of the server have botched a kernel upgrade?

    My question is how I can fix this discrepancy. Can I simply rename the lib directory and hope for the best? I don't want to break things for the people who actually maintain the server, but I do need to fix the PPP issue.

        $ sudo /sbin/modprobe -v ppp
        FATAL: Could not load /lib/modules/2.6.35.4-rscloud/modules.dep: No such file or directory

        $ cat /proc/version
        Linux version 2.6.35.4-rscloud ([email protected]) (gcc version 4.4.3 (Ubuntu 4.4.3-4ubuntu5) ) #8 SMP Mon Sep 20 15:54:33 UTC 2010

        $ ls /lib/modules/
        2.6.33.5-rscloud


  • Apache2 - Hosting two sites on the same domain with different ports

    - by user1026361
    I am hosting a staging site (test.mydomain.com) which currently works well on port 80 for two sites (test.mydomain.com and test.FRmydomain.com). I am working on a new backend and would like to deploy a third site on this server for testing; my hope is that it will live at test.mydomain.com:4204. I've got some experience with Apache and quickly added the statements:

        Listen 4204
        NameVirtualHost *:4204

    and created a new config for my site. These are what I imagine are the relevant parts of my config:

        <VirtualHost *:4204>
            ServerAdmin [email protected]
            ServerName test.mydomain.com:4204

    However, the site is not publicly available, by name or IP. If I curl localhost:4204 from the server, I get the expected page content. At this point, I'm at a bit of a loss about how to go forward. It seems like my config is correct but not available to be served. Am I better off defining a proxy definition so that, for instance, test.mydomain.com/4204 proxies to my localhost server, or is there a way to make the site available via the internet?

    EDIT: After further Googling, I added an iptables rule with the command:

        iptables -I INPUT -p tcp --dport 4204 -j ACCEPT

    I can see Apache listening on 4204 and the rule is definitely in place, but I still can't reach the site.
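    If the network between the internet and this host only admits the standard web ports (common with cloud providers and upstream firewalls), the proxy idea avoids the extra port entirely; a sketch of that fallback, with the /backend path purely illustrative (requires mod_proxy and mod_proxy_http):

        # inside the existing port-80 vhost for test.mydomain.com
        ProxyPass        /backend http://localhost:4204/
        ProxyPassReverse /backend http://localhost:4204/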


  • How to diagnose a hang when creating a new folder in explorer.exe

    - by Jack Ukleja
    I have been having some issues with explorer.exe hanging when I create a new folder. If I use Analyze Wait Chain in the Resource Monitor, it says "One or more threads of explorer.exe are waiting to finish network I/O". Looking at the offending thread in Process Explorer reveals nothing interesting:

        ntdll.dll!ZwWaitForMultipleObjects+0xa
        KERNELBASE.dll!GetCurrentThread+0x36
        kernel32.dll!WaitForMultipleObjectsEx+0xb3
        USER32.dll!PeekMessageW+0x1cd
        USER32.dll!MsgWaitForMultipleObjectsEx+0x2a
        USER32.dll!MsgWaitForMultipleObjects+0x20
        SHELL32.dll!SHAppBarMessage+0x41e
        SHELL32.dll!DragAcceptFiles+0x2a3c
        SHELL32.dll!DragAcceptFiles+0x2a4f
        SHELL32.dll!Ordinal211+0x124
        SHELL32.dll!SHChangeNotification_Unlock+0x12f4
        USER32.dll!GetSystemMetrics+0x2b1
        USER32.dll!IsDialogMessageW+0x19b
        USER32.dll!IsDialogMessageW+0x1e1
        ntdll.dll!KiUserCallbackDispatcher+0x1f
        USER32.dll!PeekMessageW+0xba
        USER32.dll!PeekMessageW+0x89
        SHELL32.dll!SHChangeNotification_Unlock+0xd9f
        SHELL32.dll!Ordinal885+0x1407
        SHLWAPI.dll!SHRegGetUSValueW+0x306
        kernel32.dll!BaseThreadInitThunk+0xd
        ntdll.dll!RtlUserThreadStart+0x21

    While I was looking at the explorer.exe threads, I noticed a fair few that mention ETW (Event Tracing for Windows), so explorer.exe evidently uses tracing. I decided to try TraceView.exe to listen in on the explorer.exe traces. The problem is that TraceView requires some difficult-to-come-by inputs: either PDBs, or CTL files plus .TMF files. I tried using the explorer.pdb that comes with the Windows SDK, but that did not work; I do not see explorer.exe in the "named providers", and I have no idea where to locate the CTL or .TMF files for explorer.exe.

    So the question is: is there a way to view the ETW trace messages from Explorer? Or shall I just not bother and go back to the age-old technique of disabling every Explorer extension one by one, in the hope that it's one of them? (I prefer the former, as I like to get to the bottom of things!)


  • changing filesystem format from xfs to ext4 without losing data

    - by A.Rashad
    I have a fresh Lucid Lynx (Ubuntu 10.04) installation on a laptop, where I defined the filesystems as:

    - mount point / on ext4 (46 GB)
    - mount point /home on jfs (63 GB)
    - swap, 3 GB

    I left the machine overnight to do a task, without the AC power supply. The next morning I found it on standby, the task completed, but the filesystem was not reachable: it gave me an I/O error. It seems there is a problem between jfs and standby. Anyway, to avoid any hassle, I want to convert this mount point from the jfs format to ext4. Can I do this without losing data, and without needing to place the data in a temporary location until the conversion is done? Sorry to mention it, but I recall that back in the Windows days we could change FAT16 to FAT32, or FAT32 to NTFS, without losing the data. I hope something like this is available on Linux.

    Update: The /home filesystem was xfs, not jfs, and there seems to be a bug with this filesystem for some reason; I had to reinstall the OS twice until I ended up with ext4 for the entire /. As a conclusion, though, it seems that there is no way to do the conversion.
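    Unlike the FAT-to-NTFS case, there is no in-place converter from xfs or jfs to ext4; the standard route is a staging copy, roughly like this (device name illustrative):

        # copy /home out, preserving hardlinks, ACLs and extended attributes
        rsync -aHAX /home/ /mnt/backup/home/
        umount /home
        mkfs.ext4 /dev/sdXn        # the partition that held /home
        mount /dev/sdXn /home
        rsync -aHAX /mnt/backup/home/ /home/
        # finally, change the fs type of the /home entry in /etc/fstab to ext4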


  • ajax.googleapis.com stopping my Firefox

    - by Oscar Reyes
    Today, for some strange reason, Firefox stopped working properly because it is trying to fetch something from ajax.googleapis.com. Is there something I can do to avoid this? Safari and Chrome work just fine. I tried uninstalling Firebug and clearing the cache; the only thing that worked was disabling JavaScript altogether. This seems to be the culprit link: http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js What can I do?

    EDIT: I think I have found where the problem is. My proxy is serving the file one byte at a time, so Firefox consumes it at that pace. What I don't understand is why Safari and Chrome take it right away. What I did last night was leave Firefox open all night to give it a chance to load the file; my hope was that the file would get cached, and the next time there would be no need to fetch it. This morning the page loaded successfully, but it was not cached, because the next request failed in the same way. Here's a video showing the problem:


  • Apache2.2 not responding on Windows 7 desktop

    - by Adam
    Afternoon! I'm having some trouble with Apache 2.2 on Windows 7. For over a year it ran with no problems, but all of a sudden requests have just stopped responding. They don't ever time out; the browser just keeps waiting for a response, which makes me think something is blocking communication with Apache. Interestingly, though, if I stop Apache the requests fail immediately. The Apache service is running, and using netstat I can see it listening on port 80 as configured:

        TCP    127.0.0.1:80    0.0.0.0:0    LISTENING

    If I stop the Apache service, that line disappears. I have an entry in my hosts file for each vhost I'm trying, all pointing to 127.0.0.1, and each vhost is configured to *:80. Nothing, however, is being recorded in the access or error (at debug level) log files. I've verified the file paths are correct, even though they were never changed. Nothing is being recorded in Windows' Event Log either. The problem showed up when I added a new vhost and restarted; however, I hadn't been using Apache for a couple of days prior, so I don't believe it's the config change. I have performed a syntax check to be sure, and when starting from the command prompt no errors are reported. I do have Windows Firewall running, but I've verified the Apache rule is correct and tried turning the firewall off to make sure it wasn't the problem. I've reinstalled Apache, in the hope it might magically fix something using the default config, but still no joy. I've also tried using a different port. I'm completely out of ideas now. Can anybody help? Cheers, Adam


  • SQL Server 2008 second instance times out when logging in -- but only the first time?

    - by Kromey
    This is a strange one that has plagued me for a while now. When logging in to the second instance of SQL Server 2008 on one of our database servers, we get a timeout error:

        Cannot connect to servername\mssqlserver2.
        Additional information: Timeout expired. The timeout period elapsed prior to
        completion of the operation or the server is not responding. (Microsoft SQL Server)

    (This is the error message when connecting with Microsoft SQL Server Management Studio; other tools experience the same error but report it in different ways.) Immediately re-attempting the login works just fine, so whatever the cause, it's ephemeral! This happens regardless of user or authentication type (both Windows and SQL Server authentication are supported on this instance). What's even weirder is that the first instance on this server has never once demonstrated this problem.

    The server is a Windows Server 2008 R2 virtual server, hosted in Microsoft Hyper-V (the host is likewise Server 2008 R2). The server has 2 GB of RAM and seems to regularly be using 90% of it. Could low memory be the cause of this issue? I could see this second instance, which is not used very often yet, being swapped out to disk and then taking too long to load back into memory to respond to the connection request in time. But I'd rather have more than just my own hunch before I go scheduling downtime for this server (the first instance is used regularly) and then just throwing extra resources at it in the blind hope that the problem goes away.
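    One low-risk way to test the paging theory before scheduling anything bigger is to pin a memory floor under the second instance, so its buffer pool cannot be trimmed down to nothing while idle; a sketch, with the value purely illustrative:

        -- run against the second instance
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'min server memory (MB)', 256;
        RECONFIGURE;

    If the first login of the day then succeeds reliably, the hunch is confirmed and the RAM request has evidence behind it.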


  • How can I override mod-php5's .php mapping to php4-cgi per VirtualHost or Directory?

    - by geocoo
    I am running Debian Linux with apache2 and libapache2-mod-php5 5.3.3-7. I have one VirtualHost which requires PHP 4, so I researched and compiled php4-cgi. However, I cannot seem to:

    1. Override mod-php5's mapping of .php in that vhost (or even globally, without disabling PHP completely).
    2. Even find where that mapping is made, in the hope of disabling it and enabling mod-php5 or php4-cgi per vhost.

    This is my php4-cgi mapping (inside the one PHP 4 vhost):

        ScriptAlias /php4 /usr/local/php4/bin
        <Directory /usr/local/php4/bin>
            Options +ExecCGI +FollowSymLinks
        </Directory>
        <Directory /www/test>
            AddHandler php4-cgi-script .php
            Action php4-cgi-script /php4/php
            Options +ExecCGI
        </Directory>

    This does not work; mod-php5 still runs all .php files in that vhost/directory. If I change the file extension in the AddHandler above from .php to .php4, then .php4 files do run php4-cgi as expected, but I can't change all the file extensions in the app to .php4. I thought maybe I could disable mod-php5's mapping in my vhost or directory and then do my CGI config (as above), but many combinations of these in different contexts did not work:

        RemoveHandler .php
        RemoveType .php
        php_flag engine off    (this seems to even disable my php4-cgi, so that won't work)

    The only other place I can find any mapping is in /etc/mime.types, but commenting out the relevant lines and restarting apache2 does not affect mod-php5's .php mapping. I have searched as much as I can; it is now a mystery to me. Any help or direction would be greatly appreciated.
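    For what it's worth, the vhost-scoped combination that usually does the trick is php_admin_flag (which, unlike php_flag, cannot be overridden further down the tree) together with the type/handler removals, all inside the one vhost; a hedged sketch, since Debian's stock php5.conf may map .php globally via AddType and the ServerName here is illustrative:

        <VirtualHost *:80>
            ServerName legacy.example.org
            php_admin_flag engine off            # keep mod_php5 out of this vhost only
            RemoveType .php
            RemoveHandler .php

            ScriptAlias /php4 /usr/local/php4/bin
            <Directory /www/test>
                Options +ExecCGI
                AddHandler php4-cgi-script .php
                Action php4-cgi-script /php4/php   # requires mod_actions
            </Directory>
        </VirtualHost>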


  • Securing SSH/SFTP and best practices on security

    - by MultiformeIngegno
    I'm on a fresh VPS with Ubuntu Server 12.04, and I wanted to ask about the good practices to apply to enhance security over a stock Ubuntu Server install. This is what I have done so far: I added Google Authenticator to SSH, then created a new user (which I'll use instead of 'root' for SSH and SFTP access) and added it to my /etc/sudoers list below 'root', so it is now:

        # User privilege specification
        root      ALL=(ALL:ALL) ALL
        new_user  ALL=(ALL:ALL) ALL

    Then I edited sshd_config, set PermitRootLogin to 'no', and restarted the ssh service. Is this OK? There are a few things I'd like to ask you, though:

    1. What's the sense of adding a new (sudoer) user while the root user still exists (OK, root can no longer log in over SSH, but the account is still there...)?
    2. System files are owned by 'root', so when I use my new_user to access via SFTP I can't edit those files! Should I mass-chmod them so that new_user has write permissions too? What's the good practice on this?

    Thanks in advance. I hope you'll tell me if I did something wrong and/or other ways to secure the system. :)
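    On the second point: mass-chmodding system files away from root is the one thing to avoid, since it undoes exactly the privilege separation just set up; the usual practice is to keep editing them through sudo (sudoedit, or sudo plus an editor in a shell) rather than over plain SFTP. A few more sshd_config lines that commonly accompany this setup; a sketch, to be applied only after key-based login for new_user is confirmed working:

        PermitRootLogin no
        PasswordAuthentication no    # keys (plus the OTP PAM module) only
        AllowUsers new_user          # reject every other account at the door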


  • How to use multiple variables in a C# time calculator

    - by Peter O'Dwyer
    I am building a video time calculator and need to use the number of frames instead of milliseconds, e.g. at 25 FPS:

        HH:MM:SS:FR
        00:00:10:24  <-- last frame of 10 seconds
        00:00:11:00  <-- first frame of 11 seconds

    My problem is that if the start frames are lower than my end frames, it can't calculate the time:

        00:00:10:05  <-- start
        01:00:10:10  <-- end
        00:59:59:05  <-- answer, which I can't get!!

    HERE'S MY CODE:

        string Startdate = Dur_txtStart.Text;
        string Enddate = Dur_txtEnd.Text;
        string Startdate1 = Startdate.Substring(0, 8);
        int Startframe1 = Convert.ToInt32(Startdate.Substring(9, 2));
        string Startdate2 = Enddate.Substring(0, 8);
        int Startframe2 = Convert.ToInt32(Enddate.Substring(9, 2));

        TimeSpan diff = DateTime.Parse(Startdate2).Subtract(DateTime.Parse(Startdate1));
        int frameDiff = Startframe2 - Startframe1;
        int Hour = Convert.ToInt32(diff.Hours);
        int Min = Convert.ToInt32(diff.Minutes);
        int Sec = Convert.ToInt32(diff.Seconds);

        if (frameDiff < 0 && Sec > 0)
        {
            Sec -= 1;
            string frameDiffBuild = string.Format("{0}:{1}:{2}:{3}", Hour.ToString("D2"),
                Min.ToString("D2"), Sec.ToString("D2"),
                (Convert.ToInt32(Dur_txtFPS.Text + frameDiff)).ToString("D2"));
            Dur_txtOutTime.Text = frameDiffBuild;
        }
        else
        {
            string frameDiffBuild = string.Format("{0}:{1}:{2}:{3}", Hour.ToString("D2"),
                Min.ToString("D2"), Sec.ToString("D2"), frameDiff.ToString("D2"));
            Dur_txtOutTime.Text = frameDiffBuild;
        }

    Hope you can help, guys; my head is mashed!
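    The borrow juggling disappears if each timecode is first converted to a total frame count: subtract, then convert back. A rough sketch against the same text fields (no input validation; it also sidesteps the slip in Convert.ToInt32(Dur_txtFPS.Text + frameDiff), which concatenates the two as text before converting):

        int fps = int.Parse(Dur_txtFPS.Text);   // e.g. 25

        Func<string, long> toFrames = tc =>
        {
            string[] p = tc.Split(':');         // "HH:MM:SS:FR"
            return (long.Parse(p[0]) * 3600 + long.Parse(p[1]) * 60
                    + long.Parse(p[2])) * fps + long.Parse(p[3]);
        };

        long diff = toFrames(Dur_txtEnd.Text) - toFrames(Dur_txtStart.Text);
        long secs = diff / fps;
        Dur_txtOutTime.Text = string.Format("{0:D2}:{1:D2}:{2:D2}:{3:D2}",
            secs / 3600, (secs / 60) % 60, secs % 60, diff % fps);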


  • Mac OS X - configuring an ntpd server on a LAN with a D-Link DIR-655

    - by Mark C
    Hey all, this question is pretty specific, but I hope someone will have seen this error elsewhere. I am configuring a machine running OS X 10.5.8 to be an NTP server for machines connected to a LAN that is not connected to the Internet. I am not too worried about knowing the "right" time on all the machines, but rather about making sure everyone has the same notion of time. I configured the NTP daemon on the Mac by turning on "Set date and time automatically" in System Preferences, using the server's own clock, 127.127.1.0, as the reference clock.

    I figured I should see whether the server can NTP-query itself before proceeding to the clients. The weird part is that when I run the ntpq -p command in a command prompt while connected to my D-Link DIR-655 (firmware 1.33), it hangs for about a minute or so each time before finally giving me some output. I thought the problem might have to do with port forwarding, so I configured the router to forward port 123 to the IP of the server, but that did not improve the situation.

    When I run the ntpq -p command on my school's network, on a Linksys WRT54G router, or with the wireless AirPort card turned off, I have absolutely no problems: the command returns a response instantly. Is this normal? I can see why a query might take a minute or so, but I don't understand why one router does it faster than the other. I tried messing around with the ntp.conf file, adding the burst, minpoll, and maxpoll options:

        server 127.127.1.0 burst minpoll 4 maxpoll 5

    figuring that perhaps I was polling too often and the configuration file was slowing me down, but even with this, ntpq still hangs on the D-Link DIR-655 and does just fine on the other routers. Any thoughts on where the lag is coming from, or whether the lag is even a problem?
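    A quick check worth making: a hang of roughly that length in ntpq is often the reverse-DNS lookups it performs to print peer names, timing out against the router's DNS handling rather than anything NTP-related. The -n flag prints numeric addresses and skips the lookups entirely:

        ntpq -pn    # if this returns instantly, the D-Link's DNS is the culprit, not NTP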


  • Single computer on network cannot connect to internet.

    - by user34630
    Hi all, I hope you can help me out! :) I have three computers and one device (an Xbox) on my home network: two running XP and one on Vista. The computer that can't connect to the internet (XP) is old and failing, and shows no warning before it completely runs out of battery. So today I started it up, forgetting that I had unplugged it the night before, and it ran out of battery entirely whilst I was using it and died. I think I had been browsing the internet before this, but I can't remember for certain (i.e. the problem I am having may or may not have been caused by this 'hard' power-off).

    Anyway, now when I start the computer up, it takes 5-10 minutes after logging on to display the taskbar and icons. Also, I cannot browse the internet. The computer seems to connect to the network OK (I have tried both wired and wireless), but I can't visit sites and can't ping web addresses. Pinging the router fails, as does pinging another of my computers on the network. I have never encountered anything like this before, and whilst I am no noob, I am also not a sysadmin, haha. :( Any help is greatly appreciated; thank you for your time. P.S. I have tried a system restore (newbie move?).


  • Access router set up as a bridge behind another router

    - by Alari Truuts
    I have a problem my ISP is refusing to help me with, even though they set up the whole system.

    Specifications: There's a Thomson TG784 router through which the internet comes into the building. Behind that (for some reason) is a Juniper NetScreen 5XT - 105 firewall/router, which leads to an AMX nxa-enet24 switch that carries the connections all over the building, and a series of Apple AirPorts for wifi.

    Problem: The first router (the Thomson) is required for ipTV (by Elion); the TV, or the ipTV box, has to be connected straight to the Thomson router. My service provider cannot see the Thomson router from their side, but they can see the Juniper, so we might think the Thomson has been configured as a bridge. I need a way to access the Thomson router and see its configuration, because currently, when connecting a Samsung TV to that router (with the Elion app for ipTV viewing), or even a computer, it cannot access the internet; and even if it could, it would update the Thomson router's software, losing the configuration, which I need to preserve. I am unable to find out the Thomson router's IP address in order to connect to it, and when connecting directly with a Cat5 cable, it doesn't give me an IP address.

    I hope someone can show me the correct direction for solving my issue. Thank you all for reading, and I appreciate any help, Alari Truuts

