Search Results

Search found 3039 results on 122 pages for 'centos 5'.

  • Why do I get different openssl versions?

    - by CoCoMonk
    I'm trying to check whether I have the latest OpenSSL; my main concern is the Heartbleed bug. I tried two commands. openssl version outputs:

        OpenSSL 1.0.1e-fips 11 Feb 2013

    yum info openssl outputs:

        Installed Packages
        Name    : openssl
        Arch    : x86_64
        Version : 1.0.1e
        Release : 16.el6_5.14
        ...

    I have a couple of questions: why do I get different versions from these two commands, and how do I check for the Heartbleed vulnerability without having port 443 open?
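
    The two outputs agree more than it looks: on CentOS/RHEL, security fixes are backported, so the openssl binary keeps reporting the upstream base version (1.0.1e) while the package release number carries the actual patch level. A minimal check from the shell, using the stock rpm tooling:

        # The banner version stays 1.0.1e even after the fix is backported;
        # the package release (16.el6_5.14 here) is what actually changes.
        openssl version
        rpm -q openssl
        # Confirm the Heartbleed fix (CVE-2014-0160) landed in this build:
        rpm -q --changelog openssl | grep CVE-2014-0160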

  • Something keeps changing the default permissions of my settings.php file for a web site on Linux

    - by JrSysAdmin
    I keep changing the file permissions of /var/www/html/websitename/settings.php to 775, and within 15 minutes or so they automatically change back to 555. The owner of the file is "apache" and the group owner is our Linux developers group, just like all of the other files, which are not having this issue. Obviously there must be some process running that is automatically changing the file permissions (Apache, maybe?), but I haven't been able to figure out which. Any help would be greatly appreciated.
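
    One way to catch the culprit rather than guess: put a watch on the file with the audit subsystem. A sketch, assuming auditd is installed and running (the key name is arbitrary):

        # Log any attribute change (permissions, ownership) on the file:
        auditctl -w /var/www/html/websitename/settings.php -p a -k settings-watch
        # After the permissions flip again, see which process and user did it:
        ausearch -k settings-watch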

  • Trying to unpack 2.5GB .tar.gz file on Linux but getting "An error occurred while trying to open the archive"

    - by TMM
    Is there a limit on Linux for the file size of a .tar.gz (or its contents)? I am currently creating a .tar.gz file (both through the UI's "Compress As" and through the command line) for 2 files (6GB and 2GB), and even though it is created successfully, when I try to unpack it using Ark it throws the error "An error occurred while trying to open the archive". I have seen in some places that it might be better to archive the files into several smaller .tar.gz files, but I was wondering exactly how to do this (and subsequently unpack them). Also, is it totally impossible to use the single .tar.gz approach? That would be much simpler. Thanks in advance, Tim
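
    If the single archive keeps failing in Ark, the command line can both prove the archive is sound and do the splitting. A sketch using split(1); the chunk size and file names are illustrative:

        # Verify the existing archive is readable at all:
        tar tzf archive.tar.gz > /dev/null
        # Or create it in 1GB chunks in the first place:
        tar czf - file1 file2 | split -b 1G - archive.tar.gz.part-
        # Reassemble and unpack later:
        cat archive.tar.gz.part-* | tar xzf -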

  • grabbing/parsing iSCSI iface information

    - by chrisg
    I'm writing a Puppet provider for iSCSI and want to grab information about the ifaces (in my case HBAs) we have. Is there a better way than doing this?

        iscsiadm -m iface -I be2iscsi.00:00:00:00:00:00 | grep iface.ipaddress | sed -e 's/iface.ipaddress = //'

    It looks pretty ugly, but the -n switch doesn't seem to work unless you're in --op=update. Is there a better way to grab this information, in particular in Ruby?
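
    iscsiadm has no structured output mode here, so something still has to parse "key = value" text; in Ruby that is a split on " = " over each line. At the shell level the grep|sed chain at least collapses into one awk call; a sketch with the iface name from the question:

        # Split on " = " and print the value only when the key matches exactly:
        iscsiadm -m iface -I be2iscsi.00:00:00:00:00:00 \
          | awk -F' = ' '$1 == "iface.ipaddress" { print $2 }'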

  • blocking port 80 via iptables

    - by JoyIan Yee-Hernandez
    I'm having problems with iptables. I am trying to block port 80 from the outside; basically the plan is that we tunnel via SSH and then reach the GUI, etc. On the server I have this rule:

        Chain OUTPUT (policy ACCEPT 28145 packets, 14M bytes)
        pkts bytes target prot opt in out  source     destination
           0     0 DROP   tcp  --  *  eth1 0.0.0.0/0  0.0.0.0/0    tcp dpt:80 state NEW,ESTABLISHED

    and:

        Chain INPUT (policy DROP 41 packets, 6041 bytes)
           0     0 DROP   tcp  --  eth1 * 0.0.0.0/0   0.0.0.0/0    tcp dpt:80 state NEW,ESTABLISHED

    Anyone want to share some insights?
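
    Both rules show zero packet counts, which is the clue: inbound web traffic hits the INPUT chain with destination port 80, but a server's replies leave with source port 80, so an OUTPUT rule matching dpt:80 only ever catches outgoing client requests from the box itself. A minimal sketch (eth1 taken from the question):

        # Drop inbound HTTP arriving on the outside interface:
        iptables -A INPUT -i eth1 -p tcp --dport 80 -j DROP
        # Only needed if you also want to silence a local web server's replies:
        iptables -A OUTPUT -o eth1 -p tcp --sport 80 -j DROP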

  • Rebuild the index with REINDEX [closed]

    - by kuttyarif
        WARNING: index "pk_alarmid" contains 1363436 row versions, but table contains 26 row versions
        HINT: Rebuild the index with REINDEX.
        WARNING: index "alarm_uei_idx" contains 1363434 row versions, but table contains 26 row versions
        HINT: Rebuild the index with REINDEX.
        WARNING: index "alarm_nodeid_idx" contains 1363434 row versions, but table contains 26 row versions
        HINT: Rebuild the index with REINDEX.
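
    These are PostgreSQL warnings, and the hint is literal: the indexes are massively bloated relative to the 26-row table and want rebuilding. A sketch from the shell; the database name is a placeholder, and the index names come straight from the warnings:

        psql -d yourdb -c 'REINDEX INDEX pk_alarmid;'
        psql -d yourdb -c 'REINDEX INDEX alarm_uei_idx;'
        psql -d yourdb -c 'REINDEX INDEX alarm_nodeid_idx;'
        # A vacuum afterwards keeps table statistics honest:
        psql -d yourdb -c 'VACUUM ANALYZE;'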

  • Yum install error (mysql-devel) depsolve

    - by Pasta
    I get the following error on yum install mysql-devel. Can anyone help? I don't have this in my /etc/yum.conf exclude list.

        --> Finished Dependency Resolution
        mysql-server-5.0.45-7.el5.x86_64 from installed has depsolving problems
        --> Missing Dependency: mysql = 5.0.45-7.el5 is needed by package mysql-server-5.0.45-7.el5.x86_64 (installed)
        Error: Missing Dependency: mysql = 5.0.45-7.el5 is needed by package mysql-server-5.0.45-7.el5.x86_64 (installed)
         You could try using --skip-broken to work around the problem
         You could try running: package-cleanup --problems
                                package-cleanup --dupes
                                rpm -Va --nofiles --nodigest

    Please help!
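
    The installed mysql-server pins mysql to the exact version-release 5.0.45-7.el5, so pulling in mysql-devel at a newer version drags mysql forward and breaks that pin. A sketch of the usual way out, updating the whole family together so the versions stay matched:

        yum update mysql mysql-server
        yum install mysql-devel
        # If it still fails, run the diagnostics yum itself suggested:
        package-cleanup --problems
        package-cleanup --dupes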

  • Need help to figure out iptables rule

    - by Master
    I have this iptables rule listing:

        Chain INPUT (policy DROP)
        target    prot opt source        destination
        ACCEPT    tcp  --  127.0.0.1     0.0.0.0/0    tcp dpt:3306
        acctboth  all  --  0.0.0.0/0     0.0.0.0/0
        VZ_INPUT  all  --  0.0.0.0/0     0.0.0.0/0
        ACCEPT    tcp  --  0.0.0.0/0     0.0.0.0/0    tcp dpt:3306
        ACCEPT    tcp  --  0.0.0.0/0     0.0.0.0/0    tcp dpt:3306
        ACCEPT    tcp  --  0.0.0.0/0     0.0.0.0/0    tcp dpt:3306
        ACCEPT    tcp  --  94.101.25.40  0.0.0.0/0    state NEW tcp dpt:3306

        Chain FORWARD (policy DROP)
        target      prot opt source      destination
        VZ_FORWARD  all  --  0.0.0.0/0   0.0.0.0/0

        Chain OUTPUT (policy DROP)
        target     prot opt source        destination
        acctboth   all  --  0.0.0.0/0     0.0.0.0/0
        VZ_OUTPUT  all  --  0.0.0.0/0     0.0.0.0/0
        ACCEPT     tcp  --  94.101.25.40  0.0.0.0/0    state NEW tcp dpt:3306

    I want only localhost and my IP to be able to access TCP 3306. Can I delete all the other rules shown above? I don't know whether I need to keep any of them.
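
    The acctboth and VZ_* entries look like Virtuozzo/OpenVZ housekeeping chains, so those are worth keeping; the three wide-open dpt:3306 ACCEPT rules are what defeats the restriction. A sketch that removes just those, leaving the localhost and trusted-IP rules in place:

        # Delete the any-source ACCEPT for 3306 (run once per duplicate, three times here):
        iptables -D INPUT -p tcp --dport 3306 -j ACCEPT
        # Verify: only 127.0.0.1 and 94.101.25.40 should remain for dpt:3306,
        # and the chain policy DROP catches everything else.
        iptables -L INPUT -n --line-numbers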

  • CSF Unresolved issue

    - by josephmarhee
    I began receiving service failures for CSF/LFD once the iptables rule limit was reached, preventing the service from working properly. I flushed all iptables rules and redid my rules using CIDR ranges rather than the individual IPs that were listed, but the issue persists:

        Error: The VPS iptables rule limit (numiptent) is too low (1527/1536) - stopping firewall to prevent iptables blocking all connections, at line 1459

    This is after restarting CSF, which gave me:

        You have an unresolved error when starting csf. You need to restart csf successfully to remove this warning

    CSF still seems to be trying to enforce rules that no longer exist (it lists entire chains when restarted, only to fail with that error). Any idea what's going on?
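
    numiptent is an OpenVZ beancounter, so the 1536 cap is enforced by the host node, not by CSF, and CSF's deny lists (csf.deny plus LFD's temporary blocks) can refill it even after a flush. A sketch for confirming where the entries are going, assuming an OpenVZ container:

        # Current usage, limit, and failure count for the iptables-entry counter:
        grep numiptent /proc/user_beancounters
        # How many deny entries CSF will try to load at startup:
        wc -l /etc/csf/csf.deny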

  • Unable to view users and groups

    - by Ewr Xcerq
    I am using CentOS 5 running on VMware, but whenever I choose to open the User Manager from System > Administration, an error message displays: "The user database cannot be read. This problem is most likely caused by a mismatch between /etc/passwd and /etc/shadow or /etc/group and /etc/gshadow. The program will now exit." I am a Linux novice and have no idea how to fix this. Any help is appreciated. Thank you.
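
    The error names the four files to reconcile, and there are standard tools for exactly that. A sketch (run as root; the -r flag keeps the checks read-only):

        pwck -r    # sanity-check /etc/passwd against /etc/shadow
        grpck -r   # sanity-check /etc/group against /etc/gshadow
        # If entries are missing from the shadow files, regenerate them:
        pwconv
        grpconv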

  • yum security update - message indicating kernel version not up to date

    - by JMC
    Running yum --security check-update returns this message:

        Security: kernel-3.x.x-x.63 is an installed security update
        Security: kernel-3.x.x-x.29 is the currently running version

    I already ran the yum security update on the kernel, but it looks like it didn't change the version running on the system. What needs to be done to make it run the new kernel? Are there any concerns about why it didn't change during the installation process? The yum log just shows "installed" for the new kernel, with no error messages.
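
    This is expected behavior: a kernel update is installed alongside the running kernel and only takes effect at the next boot, so "installed" with no errors is exactly what the log should say. A quick confirmation sketch:

        rpm -q kernel   # all installed kernel packages, newest included
        uname -r        # the kernel actually running right now
        reboot          # boots into the new default kernel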

  • vsftpd allow anonymous log-in

    - by user1817081
    I'm setting up an FTP server that should allow anonymous users to read and write. Here is my configuration:

        anonymous_enable=YES
        local_enable=YES
        write_enable=YES
        anon_upload_enable=YES
        anon_mkdir_write_enable=YES
        xferlog_enable=YES
        connect_from_port_20=YES
        xferlog_file=/var/log/xferlog
        xferlog_std_format=YES
        ftpd_banner=Welcome to blah FTP service.
        listen=YES
        pam_service_name=vsftpd
        userlist_enable=NO
        tcp_wrappers=YES
        no_anon_password=YES

    On /var/ftp/ I set the permissions to 755. When I tried setting them to 777, I got the following error when logging in:

        500 OOPS: vsftpd: refusing to run with writeable anonymous root
        login failed.

    Do I need to set up anything else to allow read/write for anonymous?
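
    That OOPS is vsftpd's safety check: the anonymous root itself (/var/ftp) must not be writable by the ftp user. The usual pattern is to keep the root locked down and give anonymous a writable subdirectory instead. A sketch, with the subdirectory name illustrative:

        chown root:root /var/ftp && chmod 755 /var/ftp   # satisfies vsftpd's check
        mkdir -p /var/ftp/incoming
        chown ftp:ftp /var/ftp/incoming                  # uploads land here
        chmod 755 /var/ftp/incoming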

  • Connection closed by remote host followed by Connection refused

    - by Khosrow
    All of a sudden my SSH connection to the server has broken. Here is what happens:

        $ ssh -vvv -p <PORT> -l <USER> <HOST>
        OpenSSH_5.3p1 Debian-3ubuntu7, OpenSSL 0.9.8k 25 Mar 2009
        debug1: Reading configuration data /home/khosrow/.ssh/config
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to <HOST> [<IP>] port <PORT>.
        debug1: Connection established.
        debug1: identity file /home/khosrow/.ssh/identity type -1
        debug1: identity file /home/khosrow/.ssh/id_rsa type -1
        debug1: identity file /home/khosrow/.ssh/id_dsa type -1
        ssh_exchange_identification: Connection closed by remote host

    I recently updated the box with yum update, and sshd got updated as well. I honestly don't know whether this caused the damage, but it did report that /etc/ssh/sshd_config was stored as /etc/ssh/sshd_config.rpmnew, which is quite normal. I've seen similar posts while googling, and almost all of them suggest checking /etc/hosts.allow and /etc/hosts.deny, which in my case I can't do: I cannot connect to the box to see what's going on there. I rebooted the box through the provider's web interface, and it got even worse. I'm now getting this:

        $ ssh -vvv -p <PORT> -l <USER> <HOST>
        OpenSSH_5.3p1 Debian-3ubuntu7, OpenSSL 0.9.8k 25 Mar 2009
        debug1: Reading configuration data /home/khosrow/.ssh/config
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to <HOST> [<IP>] <PORT>.
        debug1: connect to address <IP> port <PORT>: Connection refused
        ssh: connect to host <HOST> port <PORT>: Connection refused

    with both <CUSTOM_PORT> and the default port 22. I would really appreciate any help on this.
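
    "Connection closed" before key exchange and then "Connection refused" after the reboot both point at the server side: first sshd was actively rejecting connections, and after the reboot it is apparently not listening at all. Since SSH is out, the provider's rescue or web console is the way in; a diagnostic sketch to run from there:

        service sshd status                                     # is it running at all?
        sshd -t                                                 # test the config for syntax errors
        diff /etc/ssh/sshd_config /etc/ssh/sshd_config.rpmnew   # what the update wanted to change
        tail -n 50 /var/log/secure                              # why connections were closed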

  • Linux (CentOS) privileges to copy a file

    - by vick
    I need some help with privileges in CentOS. I have a file, /home/admin/public_html/generate.php, with which I want to copy files using PHP's copy() function. When I set the file to chown admin:admin generate.php, I can access the file but the PHP copy command fails because I don't have the proper rights. When I set it to root:root generate.php, I can't access the file, because it's under the admin user's folder, /home/admin/public_html/generate.php. How do I solve this? I'm thankful for any help. Bottom line: I want generate.php, owned by admin:admin, to be able to copy files from sources outside its home directory and into other home directories. I am using CentOS.
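
    What matters here is not who owns the script but which user PHP runs as: under mod_php that is the apache user, so both the source files and the destination directories must be readable/writable by apache (or its group). A sketch of the group-based approach; the paths are illustrative:

        # Let apache traverse and read the source tree:
        chgrp -R apache /home/otheruser/public_html/source
        chmod -R g+rX /home/otheruser/public_html/source
        # The destination must be writable by apache:
        chgrp apache /home/admin/public_html/output
        chmod 775 /home/admin/public_html/output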

  • curl -I stops responding when the URL contains "="

    - by user1512778
    I ran this command:

        curl -I 'http://criminaljustice.state.ny.us/cgi/internet/nsor/fortecgi?serviceName=WebNSOR&templateName=detail.htm&requestingHandler=WebNSORDetailHandler&ID=368343543'

    and it got stuck. But if I run this:

        curl -I 'http://criminaljustice.state.ny.us/cgi/internet/nsor/fortecgi'
        HTTP/1.1 200 OK
        Content-length: 207
        Content-type: text/html
        Server: Sun-ONE-Web-Server/6.1
        Date: Sat, 15 Dec 2012 08:49:14 GMT
        Via: 1.1 proxy-internet-revproxy
        Proxy-agent: Oracle-iPlanet-Proxy-Server/4.0

    it works. Then I tried shortening it:

        curl -I 'http://criminaljustice.state.ny.us/cgi/internet/nsor/fortecgi?serviceName=WebNSOR&templateName=detail.htm'

    and it got stuck too. It seems that if the URL contains "=" it stops responding, so I tried the URL with the "=" removed (serviceName=WebNSOR changed to serviceNameWebNSOR):

        curl -I 'http://criminaljustice.state.ny.us/cgi/internet/nsor/fortecgi?serviceNameWebNSOR'
        HTTP/1.1 200 OK
        Content-length: 207
        Content-type: text/html
        Server: Sun-ONE-Web-Server/6.1
        Date: Sat, 15 Dec 2012 08:50:38 GMT
        Via: 1.1 proxy-internet-revproxy
        Proxy-agent: Oracle-iPlanet-Proxy-Server/4.0

    Why can't I use "="? Please assist me.
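
    The single quotes already protect the "=" and "&" from the shell, so the difference is likely server-side: without a valid key=value pair the CGI ignores the query and serves its default page, but with real parameters it does actual work, and some CGIs simply never answer the HEAD request that -I sends. A sketch that sends a normal GET instead, keeps the headers, and turns "stuck" into a visible timeout:

        curl -sS --max-time 15 -D - -o /dev/null \
          'http://criminaljustice.state.ny.us/cgi/internet/nsor/fortecgi?serviceName=WebNSOR&templateName=detail.htm&requestingHandler=WebNSORDetailHandler&ID=368343543'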

  • Why does the /proc filesystem show this information?

    - by liutaihua
    Running lsof | grep delete finds an open file descriptor for a process, but the system says the file has been deleted:

        mingetty 2031 root txt REG 8,2 15256 49021039 /sbin/mingetty (deleted)

    I looked at the /proc filesystem:

        ls -l /proc/[pid]
        lrwxrwxrwx 1 root root 0 Sep 17 16:12 exe -> /sbin/mingetty (deleted)

    but the executable /sbin/mingetty actually exists normally at that path. Some sockets are in a similar situation:

        ls -l /proc/[pid]/fd
        82 -> socket:[23716953]

    yet netstat -ae | grep [socket id] can find them. Why does the OS display this information?
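
    "(deleted)" means the inode the process opened was unlinked after it started; a file later recreated at the same path is a different inode, which is why /sbin/mingetty can look perfectly normal on disk. This typically happens when a package update replaces a binary underneath a running process. The socket:[23716953] form is normal too: sockets have no path, so /proc shows the inode number, the same number netstat reports. A sketch to confirm, using the PID from the lsof output:

        ls -i /sbin/mingetty     # inode of the file now on disk
        stat -L /proc/2031/exe   # the old, unlinked inode the process still holds
        # Restarting the process makes it load the new binary.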

  • Nginx reverse proxy with separate aliases

    - by gabeDel
    Interesting question. I have this Python code:

        import sys, bottle, gevent
        from bottle import *
        from gevent import *
        from gevent.wsgi import WSGIServer

        @route("/")
        def index():
            yield "/"

        application = bottle.default_app()
        WSGIServer(('', port), application, spawn=None).serve_forever()

    It runs standalone with nginx in front of it as a reverse proxy. Each of these pieces of code runs separately, but I run multiple of them per domain, per project (directory). The code thinks for some reason that it is top-level, and it's not, so when you go to mydomain.com/something it works, but if you go to mydomain.com/something/ you get an error. Now, I have tested and figured out that nginx is stripping the "something" from the request, so that when you go to mydomain.com/something/ the code thinks you are going to mydomain.com//. How do I get nginx to stop removing this information? Nginx site config:

        upstream mydomain {
            server 127.0.0.1:10100 max_fails=5 fail_timeout=10s;
        }
        upstream subdirectory {
            server 127.0.0.1:10199 max_fails=5 fail_timeout=10s;
        }
        server {
            listen 80;
            server_name mydomain.com;
            access_log /var/log/nginx/access.log;

            location /sub {
                proxy_pass http://subdirectory/;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_max_temp_file_size 0;
                client_max_body_size 10m;
                client_body_buffer_size 128k;
                proxy_connect_timeout 90;
                proxy_send_timeout 90;
                proxy_read_timeout 90;
                proxy_buffer_size 4k;
                proxy_buffers 4 32k;
                proxy_busy_buffers_size 64k;
                proxy_temp_file_write_size 64k;
            }

            location /subdir {
                proxy_pass http://subdirectory/;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_max_temp_file_size 0;
                client_max_body_size 10m;
                client_body_buffer_size 128k;
                proxy_connect_timeout 90;
                proxy_send_timeout 90;
                proxy_read_timeout 90;
                proxy_buffer_size 4k;
                proxy_buffers 4 32k;
                proxy_busy_buffers_size 64k;
                proxy_temp_file_write_size 64k;
            }
        }
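
    The stripping comes from the trailing slash in proxy_pass http://subdirectory/;. When proxy_pass carries a URI part (here just "/"), nginx replaces the matched location prefix with it, and since the location is /sub without a trailing slash, /sub/x is forwarded as //x, which matches the mydomain.com// behavior described. A sketch of the two usual fixes; which fits depends on whether the bottle app expects to see the prefix:

        # Option 1: pass the URI through untouched; the app must route /sub/... itself.
        location /sub {
            proxy_pass http://subdirectory;   # no trailing slash, no rewriting
        }
        # Option 2: strip the prefix cleanly by matching it with a trailing slash too.
        location /sub/ {
            proxy_pass http://subdirectory/;  # /sub/x arrives at the app as /x
        }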

  • How do I get these permissions working right so Apache can work with the files?

    - by cosmicbdog
    I am having a go at setting up my own Apache server and can't seem to get my head around the permissions. Let's say I grab a file from somewhere off the web and it has permissions of 600. I then upload this file via FTP to a user directory, which is also an Apache virtual site, and the file retains its permissions of 600. This means that the user can read the file, but Apache can't: it will be forbidden. What is the simplest solution so that Apache can read and write whatever files end up in the user's directory? Can Apache be granted some sort of root power over files in a directory?
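
    Granting Apache root-like power is the wrong direction; the usual answer is group permissions plus a sane default for new uploads. A sketch, with the vhost path illustrative:

        # Give the apache group access to the existing tree:
        chgrp -R apache /var/www/html/site
        chmod -R g+rX /var/www/html/site
        # setgid on directories so future uploads inherit the apache group:
        find /var/www/html/site -type d -exec chmod g+s {} \;
        # A 600 file still needs its group bit opened after upload:
        chmod g+r /var/www/html/site/uploaded-file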

  • How do I get the current date according to an NTP server without setting it locally?

    - by Zac B
    I want to get the current date and time according to a remote NTP server, using Linux. I don't want to change the local time as a result; I just want to get the remote date, adjusted for the local time zone, printed out. The date returned must comply with the following criteria:

        1. It needs to be reasonably accurate.
        2. It needs to be adjusted for the timezone on the local system making the request.
        3. It needs to be formatted in an easily-readable or interpretable way (standard date format, or seconds since epoch).

    What I've tried: I can call ntpdate -q my.ntp.server and get the offset between the local time and the server's time, but that doesn't return the date according to the NTP server; it just returns the offset and the local date. Is there some easy way/command I can use to say: "Print out the date according to a given NTP server, adjusted for my current timezone"?
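
    ntpdate -q already supplies both ingredients (local time and offset), so one option is to combine them yourself. A minimal sketch assuming GNU date; it rounds the offset to whole seconds, which stays within the "reasonably accurate" bound:

        raw=$(ntpdate -q my.ntp.server | sed -n 's/.*offset \([-0-9.]*\).*/\1/p' | head -n1)
        secs=$(printf '%.0f' "$raw")    # round the fractional offset
        date -d "$secs seconds"         # local-timezone date according to the NTP server
        date -d "$secs seconds" +%s     # or as seconds since the epoch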

  • MySQL keeps crashing due to bug

    - by mike
    About a week ago I finally figured out what was causing my server to continually crash. Reviewing mysqld.log, I keep seeing this same error:

        101210  5:04:32 [Warning] option 'max_join_size': unsigned value 18446744073709551615 adjusted to 4294967295

    Here is a link to the bug report: http://bugs.mysql.com/bug.php?id=35346. Someone recommended setting the max_join_size value in my.cnf to 4M, and I did. I assumed this fixed the issue, and it was working for about a week with no issues, until today... I checked MySQL and the same error is now back:

        101216 06:35:25  mysqld restarted
        101216  6:38:15 [Warning] option 'max_join_size': unsigned value 18446744073709551615 adjusted to 4294967295
        101216  6:38:15 [Warning] option 'max_join_size': unsigned value 18446744073709551615 adjusted to 4294967295
        101216 06:40:42  mysqld ended

    Does anyone know how I can really fix this issue? I can't keep having MySQL crash like this. EDIT: I forgot to mention that every time this happens I get an email from Linode saying I have a high disk I/O rate: "Your Linode has exceeded the notification threshold (1000) for disk io rate by averaging 2483.68 for the last 2 hours."
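
    Worth noting: this warning is printed at mysqld startup and is largely cosmetic, so the log lines are evidence that mysqld restarted, not necessarily the cause of the crash (the Linode disk-I/O alert hints at memory pressure or a runaway query instead). Pinning the value still silences the warning; a my.cnf sketch with an illustrative in-range value:

        [mysqld]
        # Explicit value at the 32-bit ceiling, so the 2^64-1 default is never clamped:
        max_join_size = 4294967295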

  • Virtual folder for multiple sites

    - by Cups
    I am creating a very simple flat-file CMS for small (multilingual) websites. The little file writing that goes on is handled by 4 scripts in a publicly available folder named /edit in each site. Given that I now have 2 websites working on that simple system:

        websiteA/index.php (etc)
        websiteA/edit/
        websiteB/index.php (etc)
        websiteB/edit/

    what is the best way of making that /edit folder "virtual", so that these and each subsequent website owner can log in to their own view of /edit while the code exists in only one place? I do not want the website owners to have to log in from a central website, but from their own /edit directory. I have already read about different solutions, seemingly using the <Directory> directive in my httpd.conf declaration for each website, and also using straight mod_rewrite, but I admit to becoming confused about some of the terminology. Each website has its own config file, which contains path settings and so on. What, in your opinion, is the best way to handle this? EDIT: In light of a reply, I suppose that given a virtual host directive such as this:

        <VirtualHost 00.00.00.00:80>
            DocumentRoot /var/www/html/websitea.com
            ServerName www.websitea.com
            ServerAlias websitea.com
            DirectoryIndex index.htm index.php
            CustomLog logs/websitea combined
        </VirtualHost>

    is it possible to create an alias inside that directive for the folder websitea.com/edit?
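
    Yes: Alias is valid inside <VirtualHost>, which is exactly the "one codebase, many /edit URLs" pattern. A sketch extending the vhost from the question; the shared path is illustrative, and the access directives use Apache 2.2 syntax (the CentOS 5 era):

        <VirtualHost 00.00.00.00:80>
            DocumentRoot /var/www/html/websitea.com
            ServerName www.websitea.com
            ServerAlias websitea.com
            # Every site's /edit URL maps to the same shared code:
            Alias /edit /var/www/shared/edit
            <Directory /var/www/shared/edit>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>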

  • Nginx common configuration that I might have missed

    - by ApPeL
    I recently moved from Apache mod_wsgi to Nginx, and I have seen a major improvement in speed and lower memory usage, and I am generally very happy with it. I am not a server expert, so please be gentle. I am wondering if there are any small configuration details I might have missed that will cause me issues in the long run... Please see my nginx.conf file:

        user nginx nginx;
        worker_processes 4;
        error_log /var/log/nginx/error_log info;

        events {
            worker_connections 1024;
            use epoll;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            log_format main
                '$remote_addr - $remote_user [$time_local] '
                '"$request" $status $bytes_sent '
                '"$http_referer" "$http_user_agent" '
                '"$gzip_ratio"';

            client_header_timeout 10m;
            client_body_timeout 10m;
            send_timeout 10m;
            connection_pool_size 256;
            client_header_buffer_size 1k;
            large_client_header_buffers 4 2k;
            request_pool_size 4k;

            gzip on;
            gzip_min_length 1100;
            gzip_buffers 4 8k;
            gzip_types text/plain;

            output_buffers 1 32k;
            postpone_output 1460;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 75 20;
            ignore_invalid_headers on;
            index index.html;

            server {
                listen 80;
                server_name localhost;

                location /media/ {
                    # Notice this is the /media folder that we created above
                    root /www/django_test1/myapp;
                }

                location /mediaadmin/ {
                    # Notice this is the /media folder that we created above
                    alias /opt/python2.6/lib/python2.6/site-packages/django/contrib/admin/media/;
                }

                location / {
                    # host and port of the fastcgi server
                    fastcgi_pass 127.0.0.1:8080;
                    fastcgi_param SERVER_ADDR $server_addr;
                    fastcgi_param SERVER_PORT $server_port;
                    fastcgi_param SERVER_NAME $server_name;
                    fastcgi_param SERVER_PROTOCOL $server_protocol;
                    fastcgi_param PATH_INFO $fastcgi_script_name;
                    fastcgi_param REQUEST_METHOD $request_method;
                    fastcgi_param QUERY_STRING $query_string;
                    fastcgi_param CONTENT_TYPE $content_type;
                    fastcgi_param CONTENT_LENGTH $content_length;
                    fastcgi_pass_header Authorization;
                    fastcgi_intercept_errors off;
                    client_max_body_size 100M;
                }

                access_log /var/log/nginx/localhost.access_log main;
                error_log /var/log/nginx/localhost.error_log;
            }
        }
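
    Nothing in there is broken, but a few commonly tightened knobs stand out (suggestions, not requirements): the 10-minute client timeouts are generous enough to let slow clients tie up worker connections, and gzip currently only covers text/plain. A sketch of the adjustments:

        client_header_timeout 60s;    # 10m invites slow-client resource exhaustion
        client_body_timeout   60s;
        server_tokens off;            # hide the nginx version in headers and error pages
        gzip_types text/plain text/css application/x-javascript application/json;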

  • Path is present, permissions are okay, but still getting an error

    - by N e w B e e
    I recently installed pdftk using instructions provided on Stack Overflow. After installing it, I ran whereis pdftk and the result was:

        /usr/local/bin/pdftk /usr/bin/pdftk

    I have power-panel access, and through it I saw that pdftk actually exists at that location. I ran pdftk --version and it was okay. But when in PHP I use:

        <?php
        $command = "pdftk --help";
        system("PATH=/usr/local/bin/ && $command", $response);
        if ($response === FALSE) {
            echo 'sorry error occured';
        } else {
            echo $response;
        }
        ?>

    the output is 127. The version I am using is 1.41, and the output of 127 is something I can't understand. Can somebody guide me?
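
    127 is the shell's "command not found" exit status: system() puts the command's exit code (not FALSE) into its second argument, and it usually means the PHP process, often running as a different or jailed user, cannot see the binary even though your login shell can. Note also that PATH=/usr/local/bin/ replaces the entire search path rather than extending it. A sketch that sidesteps PATH by using the absolute path from the whereis output; exec() is used so output and status come back separately:

        <?php
        // Hedged sketch: the binary location comes from the whereis output above.
        $output = array();
        $status = 1;
        exec('/usr/local/bin/pdftk --version 2>&1', $output, $status);
        if ($status !== 0) {
            echo "pdftk failed with exit code $status";  // 127 would mean: not found
        } else {
            echo implode("\n", $output);
        }
        ?>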
