Search Results

Search found 24755 results on 991 pages for 'linux mom'.


  • opennebula VM submission failure

    - by user61175
    I am new to OpenNebula. The cloud is up and running, but the VM fails to be submitted to a node. I get the following errors in the log file:

      ERROR: Command "scp ubuntu:/opt/nebula/images/ttylinux.img node01:/var/lib/one/8/images/disk.0" failed.
      ERROR: Host key verification failed.
      Error excuting image transfer script: Host key verification failed.

    The key verification keeps failing. I need to know what is going wrong ... thanks :)
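
    One likely cause, judging from the error, is that the user performing the image transfer on the front-end has never accepted the compute node's SSH host key, so the non-interactive scp aborts. A minimal sketch of a fix, assuming the transfers run as oneadmin and the node is reachable as node01:

      # run as the user that performs the image transfers (oneadmin is assumed here)
      su - oneadmin
      ssh-keyscan node01 >> ~/.ssh/known_hosts      # pre-accept node01's host key
      scp /opt/nebula/images/ttylinux.img node01:/tmp/test.img   # quick manual test
      # alternatively, relax checking for cluster nodes only, in ~/.ssh/config:
      #   Host node*
      #       StrictHostKeyChecking no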

    Read the article

  • Command to execute another command while replaying the command on STDOUT

    - by hakre
    It's not easy to formulate the question properly, so maybe it helps if I describe what I'd like to do. I want to execute a command and pipe its output into a tool called pastebinit, which uploads the STDOUT output to pastebin. That works very well; however, I would like to send the command itself on top of the output, without typing it a second time. Is there some command I can launch "my command" with that will:

    1. print "my command" on STDOUT, and
    2. execute "my command"?

    I have the feeling that something like that exists, but as hard as it is to formulate such a question properly, I have not been able to dig it up with Google so far.
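
    A small wrapper function covers this; the name show_and_run is made up for the sketch, and set -x offers an even shorter (if noisier) alternative:

      # minimal sketch: echo the command line to STDOUT, then run it,
      # so both travel down the same pipe to pastebinit
      show_and_run() {
          printf '$ %s\n' "$*"
          "$@"
      }
      show_and_run ls -la /etc | pastebinit

      # rough alternative: let the shell trace the command itself
      # (set -x writes the trace to stderr, hence the 2>&1)
      ( set -x; ls -la /etc ) 2>&1 | pastebinit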

    Read the article

  • DRBD on a disk with existing file system that takes all the place

    - by Karolis T.
    I'm currently trying to simulate the environment via Xen. I have installed two Debian systems with the following filesystem layout:

      cltest1:/etc# df -h
      Filesystem            Size  Used Avail Use% Mounted on
      /dev/xvda2            6.0G  417M  5.2G   8% /
      tmpfs                 257M     0  257M   0% /lib/init/rw
      udev                   10M   16K   10M   1% /dev
      tmpfs                 257M  4.0K  257M   1% /dev/shm

    Host cltest2 is identical. Here's my drbd.conf:

      global { minor-count 1; }
      resource mysql {
          protocol C;
          syncer {
              rate 10M;   # 10 Megabytes
          }
          on cltest1 {
              device /dev/drbd0;
              disk /dev/xvda2;
              address 192.168.1.186:7789;
              meta-disk internal;
          }
          on cltest2 {
              device /dev/drbd0;
              disk /dev/xvda2;
              address 192.168.1.187:7789;
              meta-disk internal;
          }
      }

    I have not created a filesystem on drbd0. Starting DRBD via the init.d script errors out with:

      Starting DRBD resources: [ d(mysql) /dev/drbd0: Failure: (114) Lower device is already claimed. This usually means it is mounted.
      [mysql] cmd /sbin/drbdsetup /dev/drbd0 disk /dev/xvda2 /dev/xvda2 internal --set-defaults --create-device failed - continuing!

    Running drbdadm create-md mysql gives:

      cltest1:/etc# drbdadm create-md mysql
      md_offset 6442446848
      al_offset 6442414080
      bm_offset 6442217472
      Found ext3 filesystem which uses 6291456 kB
      current configuration leaves usable 6291228 kB
      Device size would be truncated, which would corrupt data and result in
      'access beyond end of device' errors.
      You need to either
        * use external meta data (recommended)
        * shrink that filesystem first
        * zero out the device (destroy the filesystem)
      Operation refused.
      Command 'drbdmeta /dev/drbd0 v08 /dev/xvda2 internal create-md' terminated with exit code 40
      drbdadm aborting

    As I understand it, all of my problems stem from having no unallocated space on xvda2. What are my options besides shrinking the filesystem or connecting a separate physical disk? Can't the metadata be stored in a file on the local filesystem?
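
    DRBD's metadata target has to be a block device rather than a plain file, but a loop device backed by a file gets close to what is asked. A rough sketch, assuming DRBD 8.x and a free /dev/loop0 on each host (paths are made up); note also that the init error is a separate issue: /dev/xvda2 is the mounted root filesystem, and DRBD refuses to attach a mounted device regardless of where the metadata lives.

      # on each host: back the metadata with a file exposed as a block device
      dd if=/dev/zero of=/var/local/drbd-md-mysql bs=1M count=128
      losetup /dev/loop0 /var/local/drbd-md-mysql
      # in drbd.conf, replace "meta-disk internal;" in each "on" section with:
      #     flexible-meta-disk /dev/loop0;
      drbdadm create-md mysql
      /etc/init.d/drbd start

    External metadata on the same physical disk as the data, let alone on a loop file, sacrifices some of DRBD's crash-consistency guarantees; it is a simulation-only shortcut.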

    Read the article

  • RHEL - NFS4: Mounted/Exported as rw, user write permission denied

    - by brendanmac
    Hello, I have NFSv4 configured between a RHEL 5.3 server (charlie) and a RHEL 5.4 client (simcom1). The machines are configured to authenticate users via Kerberos against a Windows Server 2008 Active Directory machine called "alpha". Alpha also serves as the DNS and DHCP machine for the local network.

    I notice that when a user logs in to a RHEL machine for the first time they are issued a UID unique to that machine; the first user to log on gets 10001. So users end up with different UIDs on simcom1 and charlie. When a user runs 'ls -la' inside an NFSv4 mount I would have expected the owner column to show 'nobody', or at least the wrong user name, since UIDs differ between the machines and not all users have logged into each machine. However, simcom1 resolves usernames correctly in an 'ls -la' executed on files residing on charlie via NFSv4.

    Most troubling is that users are unable to write to files across the NFS mount. The server, charlie, exports the root directory as rw. The client, simcom1, mounts the export as rw. My configurations are shown below. My question is: how do I configure the RHEL machines to allow users to write files across NFSv4 when the export is already mounted read/write?

      [root@charlie ~]# more /etc/exports
      / 10.100.0.0/16(rw,no_root_squash,fsid=0)

      [root@charlie ~]# cat /etc/sysconfig/nfs
      #
      # Define which protocol versions mountd
      # will advertise. The values are "no" or "yes"
      # with yes being the default
      #MOUNTD_NFS_V1="no"
      #MOUNTD_NFS_V2="no"
      #MOUNTD_NFS_V3="no"
      #
      # Path to remote quota server. See rquotad(8)
      #RQUOTAD="/usr/sbin/rpc.rquotad"
      # Port rquotad should listen on.
      #RQUOTAD_PORT=875
      # Optinal options passed to rquotad
      #RPCRQUOTADOPTS=""
      #
      # TCP port rpc.lockd should listen on.
      #LOCKD_TCPPORT=32803
      # UDP port rpc.lockd should listen on.
      #LOCKD_UDPPORT=32769
      #
      # Optional arguments passed to rpc.nfsd. See rpc.nfsd(8)
      # Turn off v2 and v3 protocol support
      #RPCNFSDARGS="-N 2 -N 3"
      # Turn off v4 protocol support
      #RPCNFSDARGS="-N 4"
      # Number of nfs server processes to be started.
      # The default is 8.
      RPCNFSDCOUNT=8
      # Stop the nfsd module from being pre-loaded
      #NFSD_MODULE="noload"
      #
      # Optional arguments passed to rpc.mountd. See rpc.mountd(8)
      #STATDARG=""
      #RPCMOUNTDOPTS=""
      # Port rpc.mountd should listen on.
      #MOUNTD_PORT=892
      #
      # Optional arguments passed to rpc.statd. See rpc.statd(8)
      #RPCIDMAPDARGS=""
      #
      # Set to turn on Secure NFS mounts.
      SECURE_NFS="no"
      # Optional arguments passed to rpc.gssd. See rpc.gssd(8)
      #RPCGSSDARGS="-vvv"
      # Optional arguments passed to rpc.svcgssd. See rpc.svcgssd(8)
      #RPCSVCGSSDARGS="-vvv"
      # Don't load security modules in to the kernel
      #SECURE_NFS_MODS="noload"
      #
      # Don't load sunrpc module.
      #RPCMTAB="noload"

      [root@simcom1 ~]# cat /etc/fstab
      --start snip--
      charlie:/home /usr/local/dev/charlie nfs4 rw,nosuid, 0 0
      --end snip--

      [brendanmac@simcom1 /usr/local/dev/charlie/brendanmac]# touch file
      touch: cannot touch 'file': Permission denied
      [brendanmac@simcom1 /usr/local/dev/charlie/brendanmac]# su
      Password:
      [root@simcom1 /usr/local/dev/charlie/brendanmac]# touch file
      [root@simcom1 /usr/local/dev/charlie/brendanmac]# ls -la file
      -rw------- 1 root root 0 May 26 10:43 file

    Thank you for your assistance, Brendan
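
    Since the usernames already resolve correctly, the UID mismatch itself is the most plausible culprit for the write failures: NFSv4 maps names for display, but RHEL 5 still authorizes access by numeric UID unless Kerberos security is used end to end. A sketch of what could be checked, with hypothetical values, before changing anything:

      # compare the numeric IDs on both machines; if they differ, permissions won't line up
      id brendanmac            # run on charlie and again on simcom1
      # the usual fixes are either to centralize UIDs (e.g. resolve users from the
      # same AD/LDAP back end on both hosts) or to switch to Kerberos authorization:
      #   on charlie, /etc/exports:  / 10.100.0.0/16(rw,sec=krb5,fsid=0)
      #   on simcom1:                mount -t nfs4 -o rw,sec=krb5 charlie:/home /usr/local/dev/charlie
      # both sides would also need SECURE_NFS="yes" in /etc/sysconfig/nfs and a
      # matching "Domain = ..." line in /etc/idmapd.conf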

    Read the article

  • How to add an SSH user to my Ubuntu 12 server to upload PHP files

    - by user229209
    I have an Ubuntu 12 VPS and wanted to create a user account to upload and download my PHP code. So, while logged in as root, I created a user "chris" and then created a directory /var/www/chris. I want "chris" to be able to upload files to the /var/www/chris directory and run them. Permissions for the chris directory look like this:

      drwxrwxr-x 2 root chris 4096 Aug 20 03:35 chris

    As root I created a sample file called abc.php and put it in the chris directory. It worked fine when I tested it in a browser. I then logged in as chris and uploaded a file called 1234.php. That did not work; I just got a blank PHP page. The code is identical in both files, so it is not the code. The permissions now look like this:

      -rw-r--r-- 1 root chris 59 Aug 20 03:34 1234.php
      -rw-r--r-- 1 root root  49 Aug 20 03:21 abc.php

    How do I allow the "chris" user to upload files and get them to work?
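
    Hard to say definitively without the error log, but the common traps with a per-user docroot are ownership/permissions plus a PHP error being hidden behind a blank page. A sketch of things that could be tried, assuming Apache runs as www-data:

      # give chris the directory while keeping the web server able to read it
      chown -R chris:www-data /var/www/chris
      chmod 2775 /var/www/chris
      # surface the real error instead of a blank page
      tail -f /var/log/apache2/error.log
      # uploads often arrive with restrictive modes; after uploading as chris:
      chmod 644 /var/www/chris/1234.php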

    Read the article

  • Remote Sending of Emails via SMTP/EXIM Issue

    - by Christian Noel
    I have been encountering a problem when sending messages via Exim. Here is the scenario. I have two servers; let's say:

    host1.com = where all my apps and programs are hosted.
    host2.com = another server which handles some apps but is also my SMTP mail server.

    WHM and cPanel are installed on both hosts, as well as Exim. Right now, messages are sent out as [email protected] to clients. host1.com uses [email protected] so that it can send messages outbound as well.

    Here's the problem: a few hours after a fresh reboot of host1.com, sending messages from host1.com is no longer possible, because I encounter an error that states:

      system/vendor/swift/Swift/Connection/SMTP.php [309]: The SMTP connection failed to start [tls://mail.host2.com]: fsockopen returned Error Number 110 and Error String 'Connection timed out'

    Also note that this was working fine earlier (like 10 hours ago), but then it suddenly fails. Every time I restart host1.com, sending messages works again. I have checked logs and traces, but to no avail; the only means of fixing this problem is restarting host1.com.
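
    Error 110 is a plain TCP timeout, so the TLS/SMTP layer never even starts; that usually points at a firewall or connection-tracking problem (e.g. CSF/iptables rate limits on cPanel boxes) or at DNS, rather than at Exim itself on host2. A few hedged checks to run from host1 while the failure is happening:

      # does the SMTP port answer at all right now?
      telnet mail.host2.com 465
      openssl s_client -connect mail.host2.com:465                    # implicit TLS (smtps)
      openssl s_client -connect mail.host2.com:587 -starttls smtp     # if submission is used instead
      # is name resolution still sane after a few hours of uptime?
      dig +short mail.host2.com
      # look for per-IP blocks or exhausted conntrack tables on either box
      iptables -L -n | grep -i -e drop -e reject
      wc -l /proc/net/nf_conntrack 2>/dev/null || wc -l /proc/net/ip_conntrack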

    Read the article

  • HAproxy - Redirect issue - Uri Variables ?

    - by Justin
    I'm using HAProxy 1.5dev3, and I was wondering if there is any way to grab the URI query string from a request so it can be re-appended to the end of a redirect URL. What I'm trying to do is redirect from:

      http://www.domain.com/page/example.htm?id=1234567

    to:

      http://www.domain.com/frame/newpage.cfm?id=1234567

    redirect prefix doesn't work properly, as it tries to append /page/example.htm to the end of the redirect URL. Can I do some sort of rewrite to accomplish this? It would be awesome if you could use the URI and query string as variables for redirection/pool selection like on an F5. Please help... Thanks!
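
    On the 1.5dev line, redirect targets are still static strings, so one commonly suggested workaround is to rewrite the request path in place with reqrep and let the same backend serve it - a transparent rewrite rather than a client-visible 302. A sketch to adapt and test:

      # in the frontend or backend handling www.domain.com:
      # \1 = the HTTP method, \2 = the original query string (if any), \3 = the HTTP version
      reqrep ^([^\ ]*)\ /page/example\.htm(\?[^\ ]*)?\ (.*)$  \1\ /frame/newpage.cfm\2\ \3

    If an actual browser-visible redirect is required, later HAProxy releases allow log-format expressions (including the query string) inside "http-request redirect location", which makes this a one-liner there.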

    Read the article

  • How to list rpm packages/subpackages sorted by total size

    - by smci
    Looking for an easy way to postprocess rpm -q output so it reports the total size of all subpackages matching a regexp; e.g. see the aspell* example below. (Short of scripting it with Python/Perl/awk, which is the next step.)

    (Motivation: I'm trying to remove a few GB of unnecessary packages from a CentOS install, so I'm trying to track down things that are a) large, b) unnecessary, and c) not dependencies of anything useful like GNOME. Ultimately I want to pipe the output through sort -n to see what the space hogs are, before doing rpm -e.)

    My reporting command looks like [1]:

      cat unwanted | xargs rpm -q --qf '%9.{size} %{name}\n' > unwanted.size

    and here's just one example where I'd like to see rpm's total for all aspell* subpackages:

      root# rpm -q --qf '%9.{size} %{name}\n' `rpm -qa | grep aspell`
        1040974 aspell
       16417158 aspell-es
        4862676 aspell-sv
        4334067 aspell-en
       23329116 aspell-fr
       13075210 aspell-de
       39342410 aspell-it
        8655094 aspell-ca
       62267635 aspell-cs
       16714477 aspell-da
       17579484 aspell-el
       10625591 aspell-no
       60719347 aspell-pl
       12907088 aspell-pt
        8007946 aspell-nl
        9425163 aspell-cy

    Three extra nice-to-have things:

    1. List the dependencies/depending packages of each group (so I can figure out the uninstall order).
    2. Group them by package group; that would be totally neat.
    3. Human-readable size units like 'M'/'G' (like ls -h does). Can be done with a regexp and rounding on the size field.

    Footnote: I'm surprised up2date and yum don't add this sort of intelligence. Ideally you would want to see a tree of group-package-subpackage, with rolled-up sizes.

    Footnote 2: I see yum erase aspell* does actually produce this summary - but not in a query command.

    [1] where unwanted.txt is a text file of unnecessary packages obtained by diffing the output of:

      yum list installed | sed -e 's/\..*//g' > installed.txt
      diff --suppress-common-lines centos4_minimal.txt installed.txt | grep '>'

    and centos4_minimal.txt came from the Google doc given by that helpful blogger.
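
    A short pipeline along these lines seems to cover the basic ask: group by a prefix (everything before the first '-') and print rolled-up totals, largest first. Sketch only; the human-readable step assumes numfmt from a reasonably recent coreutils:

      # total installed size per package "family", sorted descending
      rpm -qa --qf '%{size} %{name}\n' \
        | awk '{ split($2, a, "-"); sum[a[1]] += $1 }
               END { for (g in sum) printf "%12d %s\n", sum[g], g }' \
        | sort -rn | head -20

      # same idea for a single family, human-readable if numfmt is available
      rpm -qa --qf '%{size}\n' 'aspell*' | awk '{ s += $1 } END { print s }' | numfmt --to=iec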

    Read the article

  • What is the simplest way to build your own .deb package?

    - by Calvin Fisher
    Having used Ubuntu for several years now, I've assembled a short list of scripts and packages that I always install on my computers. I would like to pack them up into a .deb to make it easier to get set up on a fresh OS installation. I'm imagining, for instance, one package that would install all of my custom BASH scripts that I've made for common tasks, and another one that would depend on other packages (like w64codecs) that I always install but forget that I need to until I go to do something and it's not there. It doesn't even have to be by-the-book; I'm not looking to deploy these publicly. I'm just looking to roll up all these tasks into one sudo dpkg --install. To quantify "simple" or "easy," I mean to say that I'm looking for the method with the fewest steps requiring the least technical knowledge and, most importantly, taking the least time.
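
    For private, not-by-the-book packages, hand-building with dpkg-deb is about as few steps as it gets. A minimal sketch (the package name, paths and Depends line are made up for illustration):

      mkdir -p mypack/DEBIAN mypack/usr/local/bin
      cp ~/scripts/*.sh mypack/usr/local/bin/          # the custom scripts to ship
      printf '%s\n' \
        'Package: mypack' \
        'Version: 1.0' \
        'Architecture: all' \
        'Maintainer: Calvin <calvin@example.com>' \
        'Depends: w64codecs' \
        'Description: Personal scripts and favourite packages' \
        > mypack/DEBIAN/control
      dpkg-deb --build mypack                          # produces mypack.deb
      sudo dpkg --install mypack.deb

    For a dependency-only metapackage (no files, just Depends), the equivs package (equivs-control / equivs-build) automates roughly the same thing.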

    Read the article

  • Mounting Replicated Gluster Multi-AZ Storage

    - by Roman Newaza
    I have replicated Gluster storage which is used by auto-scaling servers. Both the auto-scaling servers and the storage are allocated across two availability zones.

    Gluster:

      Number of Bricks: 4 x 2 = 8
      Transport-type: tcp
      Bricks:
      Brick1: gluster01:/storage/1a # Zone A
      Brick2: gluster02:/storage/1b # Zone B
      Brick3: gluster03:/storage/2a # Zone A
      Brick4: gluster04:/storage/2b # Zone B
      Brick5: gluster01:/storage/3a # Zone A
      Brick6: gluster02:/storage/3b # Zone B
      Brick7: gluster03:/storage/4a # Zone A
      Brick8: gluster04:/storage/4b # Zone B

    I used round-robin DNS for the Gluster entry point, so the DNS name resolves to all of the storage server addresses, which are returned in a different order each time:

      # host storage.domain.com
      storage.domain.com has address xx.xx.xx.x1
      storage.domain.com has address xx.xx.xx.x2
      storage.domain.com has address xx.xx.xx.x3
      storage.domain.com has address xx.xx.xx.x4

    The storage is mounted with the native Gluster client:

      # grep storage /etc/fstab
      storage.domain.com:/storage /storage glusterfs defaults,log-level=WARNING,log-file=/var/log/gluster.log 0 0

    I have heard Gluster might be mounted with the first server's IP, after which it fetches its configuration from the rest of the servers. Personally, I have never tested a single-server mount setup and I don't know how Gluster handles it.

    On EC2, traffic within a single availability zone is free, and traffic between different zones is not. When a client in zone A writes to storage and the IP of a storage server in zone B is returned, it will cost me twice as much for data transfer: Client (Zone A) - Storage Server (Zone B) - replication to Storage Server (Zone A).

    Question: would it be better to mount a storage server in the same zone, so that data transfer charges apply only to replication (A - A - B)?
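
    Worth noting, with the caveat that behaviour varies by Gluster version: with the native client, the server named in the mount is mostly used to fetch the volume file, after which the client talks to every brick directly, so replicated writes go to both zones either way. Pinning the volfile fetch to a same-zone node with a fallback is still cheap to do; a sketch of the fstab line, using the backup option documented for mount.glusterfs (gluster01 and gluster03 are the Zone A nodes from the brick list):

      gluster01:/storage  /storage  glusterfs  defaults,backupvolfile-server=gluster03,log-level=WARNING,log-file=/var/log/gluster.log  0 0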

    Read the article

  • Great GUI for Apache2?

    - by ajsie
    I wonder if there are any good GUI management tools for Apache, so you don't have to manually edit files in Vim. It would be great if you could manage Apache over the internet. Any suggestions for such tools?

    Read the article

  • When using grep from VIM, how to jump to results?

    - by Marplesoft
    When using the grep plugin in Vim, I can search the current directory for all occurrences of a string within a set of files, like this:

      :grep Ryan *.txt

    This outputs something like:

      file1.txt:3:Ryan was here
      file2.txt:10:Ryan likes VIM
      file3.txt:5:superuser.com is a fav of Ryan
      (1 of 3): Ryan was here
      Press ENTER or type command to continue

    If I press Enter, it just takes me back to my editor. What I really want is to be able to open one of those files and jump to the place where the string was found. Is there a way to do this? The "1 of 3" part makes me think there's a way to tab through the results, but I don't know what commands are available to me. Can anybody shed some light on this?
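
    The matches land in Vim's quickfix list, so the standard quickfix commands should do what is being asked; a quick sketch of the usual ones:

      :copen      " open the quickfix window listing every match; <Enter> jumps to one
      :cnext      " jump to the next match (also :cn)
      :cprev      " jump to the previous match (also :cp)
      :cc 2       " jump straight to match number 2
      :cfirst     " back to the first match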

    Read the article

  • Best practice for authenticating DMZ against AD in LAN

    - by Sergei
    We have a few customer-facing servers in the DMZ that also have user accounts; all accounts are in a shadow password file. I am trying to consolidate user logons and am thinking about letting these users authenticate against Active Directory in the LAN. The services needing authentication are Apache, ProFTPD and SSH.

    After consulting the security team, I have set up an authentication DMZ that has an LDAPS proxy, which in turn contacts another LDAPS proxy (proxy2) in the LAN, and this one passes authentication info via LDAP (as an LDAP bind) to the AD controller. The second LDAP proxy is only needed because the AD server refuses to speak TLS with our secure LDAP implementation. This works for Apache using the appropriate module. At a later stage I may try to move customer accounts from the servers to the LDAP proxy so they are not scattered around servers.

    For SSH, I joined proxy2 to the Windows domain so users can log on using their Windows credentials. Then I created SSH keys and copied them to the DMZ servers using ssh-copy, to enable passwordless logon once users are authenticated.

    Is this a good way to implement this kind of SSO? Did I miss any security issues here, or maybe there is a better way of achieving my goal?

    Read the article

  • .htaccess to block by file name possible?

    - by Tiffany Walker
    I have a bunch of files named secure_xxxxxx.php. Is there a way to use .htaccess to block access to all the secure_* PHP files based on IP?

    EDIT: I've tried the following, but I get 500 errors:

      <FilesMatch "^secure_.*\.php$">
      order deny all
      deny from all
      allow from my ip here
      </FilesMatch>

    I don't see any errors in the Apache error logs either.

      httpd -M
      Loaded Modules: core_module (static) authn_file_module (static) authn_default_module (static) authz_host_module (static) authz_groupfile_module (static) authz_user_module (static) authz_default_module (static) auth_basic_module (static) include_module (static) filter_module (static) log_config_module (static) logio_module (static) env_module (static) expires_module (static) headers_module (static) setenvif_module (static) version_module (static) proxy_module (static) proxy_connect_module (static) proxy_ftp_module (static) proxy_http_module (static) proxy_scgi_module (static) proxy_ajp_module (static) proxy_balancer_module (static) ssl_module (static) mpm_prefork_module (static) http_module (static) mime_module (static) dav_module (static) status_module (static) autoindex_module (static) asis_module (static) info_module (static) suexec_module (static) cgi_module (static) dav_fs_module (static) negotiation_module (static) dir_module (static) actions_module (static) userdir_module (static) alias_module (static) rewrite_module (static) so_module (static) fastinclude_module (shared) auth_passthrough_module (shared) bwlimited_module (shared) frontpage_module (shared) suphp_module (shared)
      Syntax OK
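
    The 500 is almost certainly the first directive: in Apache 2.2 (authz_host_module, as loaded above) the syntax is "Order deny,allow" with a comma, not "order deny all", and the Allow line needs a literal address. A possible corrected block (the IP is a placeholder):

      <FilesMatch "^secure_.*\.php$">
          Order deny,allow
          Deny from all
          Allow from 203.0.113.10
      </FilesMatch>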

    Read the article

  • Can connect to Samba server but cannot access shares?

    - by jlego
    I have set up a stand-alone box running Fedora 16 to use as a file-sharing and web-development server. It needs to share files with a PC running Windows 7 and a Mac running OS X Snow Leopard. I've set up Samba using the Samba configuration GUI tool, added users to Fedora and connected them as Samba users (the same usernames and passwords as on the Windows and Mac machines). The workgroup name is the same as the Windows workgroup. Authentication is set to User. I've allowed Samba and the Samba client through the firewall and set the Ethernet interface to a trusted port in the firewall.

    Both the Windows and Mac machines can connect to the server and view the shares; however, when trying to access the shares, Windows throws error 0x80070035 "Windows cannot access \\SERVERNAME\ShareName." The Windows user is not prompted for a username or password when accessing the server (found under "Network Places"). This also happens when connecting with the IP rather than the server name. The Mac can also connect to the server and see the shares, but when choosing a share it gives the error "The original item for ShareName cannot be found." When connecting via IP, the Mac user is prompted for username and password, which when authenticated gives a list of shares; however, when choosing a share to connect to, the error is displayed and the user cannot access the share. Since both machines act similarly when trying to access the shares, I assume it is an issue with how Samba is configured.

    smb.conf:

      [global]
      workgroup = workgroup
      server string = Server
      log file = /var/log/samba/log.%m
      max log size = 50
      security = user
      load printers = yes
      cups options = raw
      printcap name = lpstat
      printing = cups

      [homes]
      comment = Home Directories
      browseable = no
      writable = yes

      [printers]
      comment = All Printers
      path = /var/spool/samba
      browseable = yes
      printable = yes

      [FileServ]
      comment = FileShare
      path = /media/FileServ
      read only = no
      browseable = yes
      valid users = user1, user2

      [webdev]
      comment = Web development
      path = /var/www/html/webdev
      read only = no
      browseable = yes
      valid users = user1

    How do I get Samba sharing working?

    UPDATE: Before this box I had another box with the same version of Fedora (16) installed and Samba working for these same computers. I started up the old machine and copied the smb.conf file from it to the new one (editing the share definitions for the new shares, of course) and I still get the same errors on both client machines. The only difference in environment is the hardware and the router. On the old machine the router received a dynamic public IP and assigned dynamic private IPs to each device on the network, while the new machine is connected to a router that has a static public IP (still dynamic internal IPs, though). Could either one of these be affecting Samba?

    UPDATE 2: As the directory I am trying to share is actually an entire internal disk, I have tried two things: 1) changing the owner of the mounted disk from root to my user (which is the same username as on the Windows machine); 2) making a share that only includes one of the folders on the disk instead of the entire disk, again with my user as the owner. Both tests failed, giving me the same errors regarding the network address.

    UPDATE 3: Not sure exactly what I did, but now whenever I try to connect to the share on the Windows 7 client I am prompted for my username and password. When I enter the correct credentials I get an access-denied message. However, I did notice that under the login box "domain: WINDOWS-PC-NAME" is listed. I believe this could very well be the problem. Any suggestions?

    UPDATE 4: I've now completely reinstalled Fedora and Samba. I've created a share on the first hard drive (the one Fedora is installed on) and I can access it fine from Windows. However, when I try to share any data on the second disk, I receive the same error. This, I believe, is the problem. I think I need to change some things in fstab or fdisk or something.

    UPDATE 5: In fstab I mapped the drive to automount in a folder, which works correctly. I also added the samba_share_t SELinux label to the mountpoint directory, which now allows me to access the shares on the Windows machine; however, I cannot see any of the files in the directory on the Windows machine. (They are there; I can see them in the Fedora file browser locally.)

    UPDATE 6: Figured it out. See answer below.
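
    In line with UPDATE 5, labeling only the mountpoint is usually not enough on an SELinux-enforcing Fedora; the files underneath need the samba_share_t context too, and the change should be made persistent. A sketch (paths taken from the question; semanage ships in policycoreutils-python):

      semanage fcontext -a -t samba_share_t "/media/FileServ(/.*)?"
      restorecon -Rv /media/FileServ       # relabel everything on the second disk
      # quick-and-dirty alternative for testing only:
      # setsebool -P samba_export_all_rw on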

    Read the article

  • Sun Grid Engine: Automatically Terminating Idle Interactive Jobs

    - by dmcer
    We're considering using Sun Grid Engine on a small compute cluster. Right now the setup is pretty crude and just involves having people ssh to an open machine to run their jobs. We'd like to allow interactive jobs, since that should ease the transition from starting jobs manually to starting them with qsub. But there is some concern that, if we do, people might accidentally leave their interactive sessions idle and block other jobs from being run on the machines. The issue isn't just theoretical: we previously tried OpenPBS, and there was a problem with people opening an interactive job in a screen session and essentially camping on a machine. Is there any way to configure SGE to automatically kill idle interactive jobs? It looks like this was requested as an enhancement (Issue #2447) back in 2007, but it doesn't seem like the request was ever implemented.
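
    True idle detection does not appear to be built in, but a commonly used stopgap is a hard wallclock limit on the interactive queue, so a forgotten session can only camp for so long. A sketch, assuming a queue named interactive.q (adjust the queue name and limits):

      # cap interactive jobs at, say, 8 hours of wallclock time
      qconf -mattr queue h_rt 08:00:00 interactive.q
      # optionally warn (soft limit) an hour earlier
      qconf -mattr queue s_rt 07:00:00 interactive.q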

    Read the article

  • How to know my wireless card has injection enabled?

    - by shrimpy
    I am playing around with aircrack-ng and was trying to see whether the wireless card in my laptop can pass the injection test, and I end up seeing the following. Does it mean my wireless card is not able to run aircrack?

      root@myubuntu:/home/myubuntu# iwconfig
      lo        no wireless extensions.
      eth0      no wireless extensions.
      eth1      IEEE 802.11bg  ESSID:""  Nickname:""
                Mode:Managed  Frequency:2.437 GHz  Access Point: Not-Associated
                Bit Rate:54 Mb/s   Tx-Power:24 dBm
                Retry min limit:7   RTS thr:off   Fragment thr:off
                Power Management:off
                Link Quality=5/5  Signal level=0 dBm  Noise level=-57 dBm
                Rx invalid nwid:0  Rx invalid crypt:781  Rx invalid frag:0
                Tx excessive retries:0  Invalid misc:0   Missed beacon:0

      root@myubuntu:/home/myubuntu# aireplay-ng -9 eth1
      ioctl(SIOCSIWMODE) failed: Invalid argument
      ARP linktype is set to 1 (Ethernet) - expected ARPHRD_IEEE80211,
      ARPHRD_IEEE80211_FULL or ARPHRD_IEEE80211_PRISM instead.  Make
      sure RFMON is enabled: run 'airmon-ng start eth1 <#>'
      Sysfs injection support was not found either.
      root@myubuntu:/home/myubuntu#
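
    The output doesn't say injection is impossible; it says the card is still in managed mode, and aireplay-ng needs monitor (RFMON) mode first, exactly as the hint suggests. A possible next step (the monitor interface name varies by driver; mon0 is just a common example):

      airmon-ng start eth1      # put the card into monitor mode
      iwconfig                  # look for the new monitor interface (e.g. mon0, or eth1 itself)
      aireplay-ng -9 mon0       # rerun the injection test against the monitor interface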

    Read the article

  • "File" exists or not

    - by SnailTang
    ls -il

      ls: cannot access éaj/p+st.ó·e: No such file or directory
      ls: cannot access éaj/p+st.ó·e: No such file or directory
      ls: cannot access é@j/p¦ft.¦·N: No such file or directory
      ls: cannot access é@j/p¦ft.¦·N: No such file or directory
      total 55456
      ? -????????? ? ? ? ? ?            éaj/p+st.ó·e
      ? -????????? ? ? ? ? ?            éaj/p+st.ó·e
      ? -????????? ? ? ? ? ?            é@j/p¦ft.¦·N
      ? -????????? ? ? ? ? ?            é@j/p¦ft.¦·N

    and when I try to show these files, I get:

      p+st.ó·e
      p¦ft.¦·N

    Please, where do these files (or whatever they are) actually exist? Or what makes them show up here?
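
    Entries that ls lists but cannot stat (all '?' fields plus "No such file or directory") usually point either at names containing unprintable or mangled bytes, or at directory corruption. A couple of hedged diagnostics:

      ls -lib            # -b escapes non-printing characters, showing the real bytes in the names
      ls | od -c | less  # dump the raw bytes of every directory entry
      # if the entries still cannot be stat'ed even using the exact byte sequence,
      # the directory itself may be damaged; unmount the filesystem and run:
      #   fsck -f /dev/<device>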

    Read the article

  • VMware Server 2.0.2 and Firefox 3.6 RC1

    - by Mads
    Hi, it seems that VMware Server 2.0.2 and Firefox 3.6 RC1 don't like each other. I have a reproducible problem on different networks, each with the same software configuration (Firefox 3.6 RC1 and VMware Server 2.0.2 on Ubuntu 8.04.3 LTS 64-bit). The login screen doesn't show up, and Firefox reports that the page cannot be loaded; it cannot load the page at all. The redirect is done (from http://:8222 to https://:8333). Maybe someone can help?

    Read the article

  • Accessing Netatalk/AFP Shares from OS X Snow Leopard

    - by j4nus_
    I recently upgraded my Ubuntu home server from 8.04 to 10.04 Server and reinstalled all the services therein. One of them is a Netatalk daemon that I configured in a fashion similar to this website: http://www.kremalicious.com/2008/06/ubuntu-as-mac-file-server-and-time-machine-volume/ Finder recognizes my server and the AFP service, yet when I attempt to log in (using valid credentials), Finder indicates it's the wrong username and password. I've tried altering some of the config files and used my Google-fu to look for solutions, but no luck. Any tips? (This was not an issue under 8.04, if it matters.)
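
    One frequent cause of exactly this symptom after moving to the 10.04-era netatalk is a mismatch between the authentication modules (UAMs) afpd offers and what Snow Leopard expects. It may be worth checking the uamlist line in /etc/netatalk/afpd.conf; a sketch, with the caveat that the exact line in the linked guide (and the module names shipped by the distro) may differ:

      # /etc/netatalk/afpd.conf -- last line: offer the DHX and DHX2 password UAMs
      - -transall -uamlist uams_randnum.so,uams_dhx.so,uams_dhx2.so -nosavepassword
      # then restart and watch the log while logging in from the Mac:
      sudo /etc/init.d/netatalk restart
      tail -f /var/log/syslog | grep afpd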

    Read the article

  • multiple php compiler on single apache installation

    - by getmizanur
    Hello, I have some old PHP scripts which run on php-5.2.x, and the current server has php-5.3.x. To get around this problem I have two options: either downgrade php-5.3.x, or install php-5.2.x and php-5.3.x at the same time, with php-5.2.x served as CGI. I have decided to go for the second option.

    I have followed this tutorial and I can get most of it working, except the execution of the shell script which selects the php-cgi version. I cannot get Apache to execute this script. How do I get Apache to execute it?

      #!/bin/sh
      # you can change the PHP version here.
      version="5.2.6"

      # php.ini file location, */php-5.2.6/lib equals */php-5.2.6/lib/php.ini.
      PHPRC=/etc/php/phpfarm/inst/php-${version}/lib/php.ini
      export PHPRC

      PHP_FCGI_CHILDREN=3
      export PHP_FCGI_CHILDREN

      PHP_FCGI_MAX_REQUESTS=5000
      export PHP_FCGI_MAX_REQUESTS

      # which php-cgi binary to execute
      exec /etc/php/phpfarm/inst/php-${version}/bin/php-cgi

    My Apache vhost.conf:

      <VirtualHost *:80>
          ServerName 526.localhost
          DocumentRoot /home/getmizanur/public_html/www
          <Directory "/home/getmizanur/public_html/www">
              AddHandler php-cgi .php
              Action php-cgi /php-fcgi/php-cgi-5.2.6
          </Directory>
      </VirtualHost>

    Can someone tell me what I am doing wrong? Thanks in advance.

    Solution: if I ran a2dismod php5, then the above configuration worked. While a2enmod php5 was active, Apache executed php5.3 instead of php5.2 even after being told to execute the php5.2 shell script. To solve the problem I had to change my VirtualHost configuration:

      <VirtualHost *:80>
          ServerName 526.localhost
          DocumentRoot /home/getmizanur/public_html/www
          DirectoryIndex index.php
          <Directory "/home/getmizanur/public_html/www">
              AddHandler php-cgi .php
              Action php-cgi /php-fcgi/php-cgi-5.2.6
              <FilesMatch "\.php">
                  SetHandler php-cgi
              </FilesMatch>
          </Directory>
      </VirtualHost>

    Presto, it started working.

    Read the article

  • Using an allowed user list with VSFTPD

    - by Naftuli Tzvi Kay
    According to the wiki here, you can allow only certain users to log in over FTP using the following configuration in your /etc/vsftp.conf file:

      userlist_enable=YES
      userlist_file=/etc/vsftp.user_list
      userlist_deny=NO

    I've configured my system this way, and I only have one user which I'd like to expose over FTP, named streams, so my /etc/vsftp.user_list looks like this:

      streams

    Interestingly enough, I cannot log in once I enable the user list. If I change userlist_enable to NO, then things work properly, but if I enable it, I can't log in at all; it just keeps trying to reconnect. I don't get a login-failed message, it just keeps trying to reconnect when using lftp. My /etc/vsftp.conf file is available on Pastebin here and my /etc/vsftp.user_list is available here. What am I doing wrong here? I'd just like to make only the streams user able to log in.

    Read the article

  • Gluster bricks are offline and errors in logs

    - by Roman Newaza
    I substituted all the IP addresses with hostnames and renamed the configs (IP to hostname) in /var/lib/glusterd with a shell script. After that I restarted the Gluster daemon and the volume. Then I checked that all the peers are connected:

      root@GlusterNode1a:~# gluster peer status
      Number of Peers: 3

      Hostname: gluster-1b
      Uuid: 47f469e2-907a-4518-b6a4-f44878761fd2
      State: Peer in Cluster (Connected)

      Hostname: gluster-2b
      Uuid: dc3a3ff7-9e30-44ac-9d15-00f9dab4d8b9
      State: Peer in Cluster (Connected)

      Hostname: gluster-2a
      Uuid: 72405811-15a0-456b-86bb-1589058ff89b
      State: Peer in Cluster (Connected)

    I can see the mounted volume size change on all the nodes when I execute df, so new data is coming in. But recently I noticed error messages in the application log:

      copy(/storage/152627/dat): failed to open stream: Structure needs cleaning
      readfile(/storage/1438227/dat): failed to open stream: Input/output error
      unlink(/storage/189457/23/dat): No such file or directory

    Finally, I found out that some bricks are offline:

      root@GlusterNode1a:~# gluster volume status
      Status of volume: storage
      Gluster process                                   Port    Online  Pid
      ------------------------------------------------------------------------------
      Brick gluster-1a:/storage/1a                      24009   Y       1326
      Brick gluster-1b:/storage/1b                      24009   N       N/A
      Brick gluster-2a:/storage/2a                      24009   N       N/A
      Brick gluster-2b:/storage/2b                      24009   N       N/A
      Brick gluster-1a:/storage/3a                      24011   Y       1332
      Brick gluster-1b:/storage/3b                      24011   N       N/A
      Brick gluster-2a:/storage/4a                      24011   N       N/A
      Brick gluster-2b:/storage/4b                      24011   N       N/A
      NFS Server on localhost                           38467   Y       24670
      Self-heal Daemon on localhost                     N/A     Y       24676
      NFS Server on gluster-2b                          38467   Y       4339
      Self-heal Daemon on gluster-2b                    N/A     Y       4345
      NFS Server on gluster-2a                          38467   Y       1392
      Self-heal Daemon on gluster-2a                    N/A     Y       1402
      NFS Server on gluster-1b                          38467   Y       2435
      Self-heal Daemon on gluster-1b                    N/A     Y       2441

    What can I do about this? I need to fix it.

    Note: CPU and network usage on all four nodes is about the same.
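
    A cautious recovery path that is often suggested for down bricks - exact commands vary a little by Gluster release, and the brick logs under /var/log/glusterfs should be checked first to see why they exited:

      # try to respawn only the brick processes that are down
      gluster volume start storage force
      gluster volume status storage
      # once all bricks show Online=Y, let self-heal reconcile the replicas
      gluster volume heal storage
      gluster volume heal storage info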

    Read the article

  • Run script before shutdown/restart

    - by dtbarne
    I'd like to run a PHP script when an instance is told to shut down, but of course before it actually finishes shutting down. My particular script just pushes some log files from the local partition to another server. I've got the gist of how this process works, but I need some clarification. As I understand it (please correct me if I'm wrong):

    1. Create an executable script in /etc/init.d (let's call it /etc/init.d/push-logs).
    2. Create a symlink to /etc/init.d/push-logs from /etc/rc0.d (shutdown) and /etc/rc6.d (reboot). The name should be KXXpush-logs.

    Here are my questions:

    1. Of course - am I understanding this correctly?
    2. For step 2 above - it sounds like the lower the XX the better; is there a number that is too low? Does it matter if it shares a number with another script?
    3. Does the script in /etc/init.d/push-logs HAVE to follow the standard init.d template (supporting start/stop, etc. commands)? This doesn't really apply to my use case. If possible I just want the script to be the following:

      #!/bin/sh
      #
      # Run PHP file prior to shutdown
      #
      /usr/bin/php /path/to/php_file.php
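
    One wrinkle worth knowing: the KXX links in rc0.d/rc6.d invoke the script with the argument "stop", so a bare script does run, but handling that argument (plus an LSB header, so update-rc.d/insserv accept it) keeps things tidy. A sketch of a Debian/Ubuntu-style wrapper; the paths, priorities and service name are illustrative only:

      #!/bin/sh
      # /etc/init.d/push-logs -- runs only at shutdown/reboot via the K-links
      ### BEGIN INIT INFO
      # Provides:          push-logs
      # Required-Start:
      # Required-Stop:     $network
      # Default-Start:
      # Default-Stop:      0 6
      # Short-Description: Push log files to the log server before shutdown/reboot
      ### END INIT INFO
      case "$1" in
          stop) /usr/bin/php /path/to/php_file.php ;;
          *)    : ;;   # nothing to do for start/status/etc.
      esac
      exit 0

      # then register it (legacy update-rc.d syntax; adjust the priority as needed):
      #   chmod +x /etc/init.d/push-logs
      #   update-rc.d push-logs stop 20 0 6 .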

    Read the article
