Search Results

Search found 13437 results on 538 pages for 'trusted root certificates'.

Page 348/538 | < Previous Page | 344 345 346 347 348 349 350 351 352 353 354 355  | Next Page >

  • HP ProCurve switch: port filtered

    - by user117140
    My HP ProCurve switch is blocking port 22 and I don't know how to unblock it. From the server, nmap shows port 22 (along with 80 and 443) as filtered:
        [root@server ~]# nmap -p22,80,443 10.247.172.70
        Starting Nmap 4.11 ( http://www.insecure.org/nmap/ ) at 2012-04-16 14:12 IST
        mass_dns: warning: Unable to determine any DNS servers. Reverse DNS is disabled. Try using --system-dns or specify valid servers with --dns_servers
        Interesting ports on 10.247.172.70:
        PORT    STATE    SERVICE
        22/tcp  filtered ssh      <------ blocked
        80/tcp  filtered http
        443/tcp filtered https
    I have seen this kind of blocking done on Cisco switches, but I don't have any clue how it is done here. I know that a VLAN is configured on the switch:
        vlan 54
           ip ospf 10.247.172.65 area 0.0.0.10
           vrrp vrid 54
              owner
              virtual-ip-address 10.247.172.65 255.255.255.192
              priority 255
              enable
              exit
           exit
    Please let me know how to unblock SSH (port 22) access on this switch.
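    A first diagnostic step (not from the original post, and the exact commands vary by ProCurve model and firmware) would be to look for an ACL or source-port filter on the switch itself; a sketch:
        # On the ProCurve CLI (syntax may differ between firmware versions)
        show running-config     # look for "ip access-list" or "filter" statements
        show access-list        # list any ACLs defined on the switch
        show filter             # source-port/protocol filters, if configured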

    Read the article

  • CentOS: How to prevent a user from executing an application installed in a specific directory

    - by slayernoah
    I have an application installed in /etc/mydir. I have executed the following to remove users' ability to execute this program:
        chown root:group1 /etc/mydir -R
        chmod 700 /etc/mydir -R
    I created a new user and logged in as that user. The new user was not added to group1. However, I was still able to execute the program just by typing the program name. How can I stop users from being able to run it using chmod and chown? Please let me know. PS: the new users cannot cd into /etc/mydir, but they can still execute the program by name.
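    One thing worth checking (an assumption, not stated in the post): if the program still runs when typed by name, the shell may be finding a different copy of the binary elsewhere on $PATH, outside the locked-down directory. A quick way to see which file is actually being executed ("myprogram" is a placeholder for the real name):
        # As the restricted user: show every match for the command on $PATH
        type -a myprogram
        which -a myprogram
        # Then inspect the file that is actually found
        ls -l "$(which myprogram)"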

    Read the article

  • Storing bundled AMIs on Amazon EC2

    - by Industrial
    Hi everybody, I am totally new to configuring servers and working with EC2, so please bear with me. After a lot of hair pulling, I managed to get a server with Ubuntu up and running with memcached and some other goodies that would make a great package for me. I thought, however, that when storing it as an AMI with this tool I would have memcached available the next time I launched an instance based on that image. What can I do to make sure that my configuration is saved properly to an instance? Question number two: can I somehow set a command that is automatically run on server creation, like initiating memcached with "memcache -d -m 1700 -u root", or even a batch of them?
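    For the second question, one common approach (a sketch, assuming an Ubuntu instance of that era and that memcached is the daemon to start) is to run the commands at boot from /etc/rc.local, or to pass them as a user-data script when launching the instance:
        #!/bin/sh -e
        # /etc/rc.local -- runs at the end of boot on most Ubuntu releases of that era.
        # Start memcached with 1700 MB of cache; running it as root is generally
        # discouraged, so a dedicated user such as "memcache" may be preferable.
        /usr/bin/memcached -d -m 1700 -u memcache
        exit 0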

    Read the article

  • How to access a MySQL server remotely?

    - by ÉricP
    Hi, I'm trying to access my remote MySQL server from my own computer. I uncommented:
        bind-address = 80.10.65.45
    I also added a privilege entry for root with host 80.10.65.45, ALL PRIVILEGES, and the GRANT option enabled. I'm using Sequel Pro on Mac OS X to connect via SSH; here is the debug log:
        debug1: Authentication succeeded (password).
        debug1: Local connections to LOCALHOST:58517 forwarded to remote address 127.0.0.1:3306
        debug1: Local forwarding listening on ::1 port 58517.
        debug1: channel 0: new [port listener]
        debug1: Local forwarding listening on 127.0.0.1 port 58517.
        debug1: channel 1: new [port listener]
        debug1: Entering interactive session.
        debug1: Connection to port 58517 forwarding to 127.0.0.1 port 3306 requested.
        debug1: channel 2: new [direct-tcpip]
        channel 2: open failed: connect failed: Connection refused
        debug1: channel 2: free: direct-tcpip: listening port 58517 for 127.0.0.1 port 3306, connect from 127.0.0.1 port 58519, nchannels 3
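    A possible explanation (an assumption based on the log, not from the original post): the SSH tunnel forwards to 127.0.0.1:3306 on the server, but with bind-address set to the public IP, MySQL is no longer listening on 127.0.0.1, so the forwarded connection is refused. A sketch of one way to line the two up:
        # /etc/mysql/my.cnf -- when connecting through an SSH tunnel,
        # keep MySQL listening on localhost
        bind-address = 127.0.0.1

        # Then make sure the account is allowed in from localhost
        # (run inside the mysql client; password is a placeholder):
        #   GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' IDENTIFIED BY 'secret';
        #   FLUSH PRIVILEGES;

        # Restart MySQL after editing the config (Debian/Ubuntu-style init shown):
        #   /etc/init.d/mysql restart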

    Read the article

  • "killed" message from cron.daily, but not when run from command line

    - by Dan Stahlke
    On Fedora 17, I put a file into /etc/cron.daily with the following contents:
        cd /
        su dstahlke /home/dstahlke/bin/anacron-daily.sh
        exit 0
    For some reason, I get a mail every day that just says /etc/cron.daily/dstahlke-daily: ...killed. I tried with and without the exit 0 line above (I noticed that some system scripts have it and others don't; I'm not sure of the purpose). Running /etc/cron.daily/dstahlke-daily from the command line as root produces no ...killed message. Other than the message, everything seems to work fine. Putting set -x in the above script, as well as in the /home/dstahlke/bin/anacron-daily.sh script, shows that the ...killed message happens just after the latter script terminates (or perhaps just after the su command finishes). What causes the ...killed message? Or is there a more acceptable way to have anacron run a user script daily? I figured that putting this in /etc/cron.daily would help the system coordinate all of the daily tasks, rather than potentially running my task concurrently with the system tasks.
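    A guess, not from the original post: the extra session opened by su may be what gets reaped when the script ends, and Fedora ships runuser for exactly this run-a-command-as-another-user case from root-owned scripts. A sketch of two alternatives:
        #!/bin/sh
        # /etc/cron.daily/dstahlke-daily -- run the user script via runuser (or su -c)
        runuser -l dstahlke -c /home/dstahlke/bin/anacron-daily.sh
        # or, on systems without runuser:
        # su - dstahlke -c /home/dstahlke/bin/anacron-daily.sh
        exit 0

        # Alternatively, skip /etc/cron.daily entirely and give the user their own
        # daily entry (run "crontab -e" as dstahlke):
        #   @daily /home/dstahlke/bin/anacron-daily.sh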

    Read the article

  • Which is better for multi-use auth, MySQL, PostgreSQL, or LDAP?

    - by Fearless
    I want to set up an Oracle Linux 6 server that gives users secure IMAP email (with Dovecot), Jabber IM, FTP (with vsftpd), and CalDAV. However, I want each user login to authenticate against all services (e.g. Joe Smith signs up once for a username and password that he can use for email, FTP, and his calendar). My question is: which database service will be best suited for that application? Also, is there a way to link the database with the preexisting server shell logins (e.g. so I can read the root account's LogCheck emails on a different device)?

    Read the article

  • Corrupt mysql system tables

    - by psynnott
    I am having issues with the columns_priv table in the mysql system database. I cannot add new users currently. I have tried repairing it using
        mysqlcheck --auto-repair --all-databases --password
    but I get the following output:
        mysql.columns_priv
        Error : Incorrect file format 'columns_priv'
        error : Corrupt
    Is there any other way to repair this table, or how do I go about replacing it with a blank table? What would I lose by doing that? Thank you
    Edit (additional info): mysqld is currently using 100% CPU constantly. Looking at show processlist, I get:
        mysql> show processlist;
        Id  | User             | Host      | db    | Command | Time | State          | Info
        5   | debian-sys-maint | localhost | mysql | Query   | 1589 | Opening tables | ALTER TABLE tables_priv MODIFY Column_priv set('Select','Insert','Update','References') COLL...
        752 | root             | localhost | NULL  | Query   | 0    | NULL           | show processlist
        2 rows in set (0.00 sec)
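    Two repair approaches worth knowing about (a sketch, assuming the grant tables are MyISAM as on stock MySQL of that era; take a copy of /var/lib/mysql first):
        # Offline repair with myisamchk (stop mysqld first)
        /etc/init.d/mysql stop
        myisamchk --recover /var/lib/mysql/mysql/columns_priv.MYI
        /etc/init.d/mysql start

        # Or, from inside the mysql client, rebuild the table from its .frm definition:
        #   REPAIR TABLE mysql.columns_priv USE_FRM;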

    Read the article

  • Virtual hosting in Varnish with individual vcl files for configuration

    - by Michael Sørensen
    I wish to use Varnish in front of an Apache and a Tomcat on the same server. Depending on the IP requested, it goes to a different backend. This works. Now, for most of the sites the default Varnish logic will work just fine. However, for some specific sites I wish to use custom VCL code. I can test for host name and include config files for the specific domains, but this only works inside the individual methods (recv etc.). Is there a way to include a complete set of instructions, in one file, per domain, without having to manage separate files for subdomain_recv, subdomain_fetch etc.? And preferably without running separate instances of Varnish. When I try to include a file at the "root level" of default.vcl, I get a compilation error. Best regards, Michael
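    One pattern that may help (a sketch, not from the original post, relying on the documented VCL behaviour that multiple definitions of the same subroutine are concatenated in include order): a top-level include compiles fine as long as the included file contains complete subroutine definitions rather than bare statements, so each domain can get one file holding all of its logic, guarded by a host check.
        # default.vcl
        include "/etc/varnish/sites/example-com.vcl";

        # /etc/varnish/sites/example-com.vcl -- everything for one domain in one file
        sub vcl_recv {
            if (req.http.host ~ "(^|\.)example\.com$") {
                # domain-specific request logic here
            }
        }
        sub vcl_fetch {
            if (req.http.host ~ "(^|\.)example\.com$") {
                # domain-specific backend-response logic here
            }
        }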

    Read the article

  • Help on using mod_rewrite to serve I18N static site

    - by Guandalino
    My static site www.example.com is translated into different languages, and the files are organized in this hierarchy:
        /
        /de
            index.html
            seite-1.html
        /en
            index.html
            page-1.html
        /it
            index.html
            pagina-1.html
    The root contains no files, just one subdirectory for each language the site is translated into, while the subdirectories contain the pages translated (both content and file name) into the language corresponding to the subdirectory name: de, en, it, etc. The question is: how do I configure mod_rewrite so that when a client visits www.example.com it is taken to the correct version of the site, falling back to the English version if the required locale is not supported (i.e. the Accept-Language header doesn't exist or specifies a language for which the site is not available, e.g. fr)? Thanks for any pointer, I'm here to provide further details or feedback! Best regards
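    A minimal sketch of one way to do this in the site's .htaccess (or vhost), assuming only requests for the bare root need redirecting; the last rule is the English fallback:
        RewriteEngine On

        # German browsers -> /de/
        RewriteCond %{HTTP:Accept-Language} ^de [NC]
        RewriteRule ^$ /de/ [L,R=302]

        # Italian browsers -> /it/
        RewriteCond %{HTTP:Accept-Language} ^it [NC]
        RewriteRule ^$ /it/ [L,R=302]

        # Everyone else (missing or unsupported Accept-Language) -> /en/
        RewriteRule ^$ /en/ [L,R=302]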

    Read the article

  • Dynamic Subdomains

    - by crash
    On my new site I want to have dynamic subdomains. I'm trying to make it so that the subdomains use the same web root as the main domain, all under a single CodeIgniter installation. For example, subdomain.example.com would lead to example.com/subdomain, which is actually example.com/index.php/subdomain. I've already got the DNS and virtual hosts set up, but I'm getting caught up on the .htaccess. The effect of the linked .htaccess is that when navigating to any subdomain, it gets caught in an infinite loop. (Error log after one request.) It's the same effect for www., which should just resolve to the main domain.
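    A sketch of one way to break the loop (not the poster's actual .htaccess, which isn't shown here): only rewrite when the request hasn't already been routed into index.php, and exclude the main domain and www explicitly.
        RewriteEngine On

        # Don't touch the main domain or www
        RewriteCond %{HTTP_HOST} !^(www\.)?example\.com$ [NC]
        # Stop once the request is already inside index.php (prevents the infinite loop)
        RewriteCond %{REQUEST_URI} !^/index\.php
        # Capture the subdomain part of the host (kept last so %1 refers to this group)
        RewriteCond %{HTTP_HOST} ^([^.]+)\.example\.com$ [NC]
        RewriteRule ^(.*)$ /index.php/%1/$1 [L]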

    Read the article

  • Same command on multiple servers

    - by w00t
    Hello everyone. I'm just wondering if there is any fellow sysadmin with the need to execute one command on multiple servers. If so, what technique are you using? I have grown tired of ssh-ing to 3-5 servers and executing the same thing over and over again, so I'm thinking of making my life easier. Also, I think I should create keys so I don't have to enter passwords anymore (though I'm using root). After 2 years of doing this, I have kind of developed a laziness. I googled it up; I know about cssh, pssh, tentakel (this one seems cool), and the more professional-grade Puppet (which I just heard of and haven't invested the time to read the docs for). BTW, I'm using XP+PuTTY, so if there is any PuTTY-friendly tool available, that's welcome too. If not, I can always ssh to one server and from there start my rest-of-the-servers conquest :) *evil* Hit me up. Thanks.
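    Besides the tools already mentioned, the lowest-tech approach is a shell loop over ssh with key-based authentication; a minimal sketch (hostnames are placeholders), runnable from that one "jump" server:
        # One-time: create a key and push it to each box (ssh-copy-id ships with OpenSSH)
        ssh-keygen -t rsa
        for h in web1 web2 db1; do ssh-copy-id root@$h; done

        # Then run the same command everywhere
        for h in web1 web2 db1; do
            echo "== $h =="
            ssh root@$h 'uptime'
        done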

    Read the article

  • Is it possible to limit output bandwidth between eth0 and lo?

    - by mmcbro
    I'm trying to limit the bandwidth between my eth0 output (nginx proxy) and my loopback interface (Apache) by filtering on destination port:
        Incoming packet -> eth0 -> 0.0.0.0:80 nginx -> tc qdisc class / iptables mangle on port 2525 -> 127.0.0.1:2525 Apache
    I don't know if it's even possible; I'm just experimenting. My rules are the following:
        tc qdisc add dev eth0 root handle 1:0 htb
        tc class add dev eth0 parent 1:0 classid 1:10 htb rate 2mbps ceil 2mbps prio 0
        tc filter add dev eth0 parent 1:0 prio 0 protocol ip handle 10 fw flowid 1:10
        iptables -A OUTPUT -t mangle -p tcp --dport 2525 -j MARK --set-mark 10
    I also tried with the FORWARD chain, but it's still the same.
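    One likely reason nothing changes (an assumption, not from the original post): traffic from nginx to 127.0.0.1:2525 never leaves through eth0, it goes out the loopback interface, so a qdisc attached to eth0 never sees it. A sketch of the same shaping attached to lo instead:
        # Attach the HTB tree to the loopback device rather than eth0
        tc qdisc add dev lo root handle 1:0 htb
        tc class add dev lo parent 1:0 classid 1:10 htb rate 2mbps ceil 2mbps
        tc filter add dev lo parent 1:0 prio 1 protocol ip handle 10 fw flowid 1:10

        # Mark packets destined for Apache on 127.0.0.1:2525
        iptables -t mangle -A OUTPUT -o lo -p tcp --dport 2525 -j MARK --set-mark 10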

    Read the article

  • Why can my Mac not connect to my iPhone via ssh?

    - by martin08
    I can't always ssh to my iPhone from my Mac. They're both on the same wifi network, but sometimes the connection is established and sometimes it fails. From my Mac:
        $ ssh [email protected]
        ssh: connect to host 192.168.0.102 port 22: Operation timed out
        $ ping 192.168.0.102
        PING 192.168.0.102 (192.168.0.102): 56 data bytes
        ping: sendto: No route to host
        ping: sendto: Host is down
        ping: sendto: Host is down
    I enabled SSH on the phone and am sure it can load webpages. So what might be a reason why they cannot connect? Thanks

    Read the article

  • One user sometimes gets an unknown certificate error opening Outlook

    - by Chris
    Let me clarify a little. This isn't an unknown certificate error; it's an unknown certificate error only insofar as I can't figure out where the certificate comes from. This happens on a Win 7 Enterprise machine connecting to Exchange 2010 with Outlook 2010. The error he gets is that the root is not trusted because it's a self-signed cert. Take a look at this screenshot, because even if I had generated this myself I wouldn't have put "SomeOrganizationalUnit" or "SomeCity" or "SomeState", etc. (The red block covers our domain name.) I'm a little concerned this is a symptom of a security breach. Exchange 2010 has three certificates installed, but none of them is this certificate. They all have different expiration dates (one is expired) and different metadata. Edit: There are two scenarios in which I see the certificate warning, and one of them I can reliably repeat. When the user leaves his computer on overnight, Outlook pops the Security Warning window; I don't know what time this happens. Using Outlook Anywhere, if I connect to Exchange externally via a cellular USB modem, the Security Warning window appears every time I close and reopen Outlook. Whether I say Yes or No makes no difference to whether I can connect to Exchange and send/receive email; in other words, I can always connect to Exchange. I've checked my two Exchange servers and my Cisco router for a certificate that matches this one and I can't find it. Edit 2: Here is a screenshot of the Security Alert window. (I've been calling it Security Warning... my mistake.) Edit 3: I stopped seeing this error several weeks ago, but I can't tie it to any single event (I just sort of realized that the warning had stopped showing up); however, I think I found the source of the certificate. Last week I found out that the certificate on our website DomainA.com was invalid. I knew that our web admin had installed a valid certificate, so when I looked into the problem I found out I was being presented with the invalid certificate that this posting is about. The Exchange server's domain is mail.DomainA.com, so I can only guess that Outlook was passing this invalid certificate through as it did some kind of check on DomainA.com. This issue is still a mystery, because the certificate warning stopped appearing several weeks ago whereas the invalid certificate issue on the website was only fixed last week. It ended up being a problem with the website control panel: the valid certificate was installed but not being served for some reason, and the self-signed cert was being served instead.
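    When tracking down which certificate a host is actually serving, openssl's s_client is handy; a sketch (hostnames are placeholders for the real DomainA.com names):
        # Show the certificate presented on the web site and on the Exchange name
        openssl s_client -connect DomainA.com:443 </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates
        openssl s_client -connect mail.DomainA.com:443 </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates

        # Outlook's Autodiscover lookups also hit these names, which is another place
        # a stray self-signed certificate can surface:
        #   https://DomainA.com/autodiscover/autodiscover.xml
        #   https://autodiscover.DomainA.com/autodiscover/autodiscover.xml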

    Read the article

  • How can I whitelist a user-agent in nginx?

    - by djb
    I'm trying to figure out how to whitelist a user agent in my nginx conf. All other agents should be asked for a password. In my naivety, I tried to put the following before deny all:
        if ($http_user_agent ~* SpecialAgent ) {
          allow;
        }
    but I'm told the "allow" directive is not allowed here (!). How can I make it work? A chunk of my config file:
        server {
          server_name site.com;
          root /var/www/site;

          auth_basic "Restricted";
          auth_basic_user_file /usr/local/nginx/conf/htpasswd;

          allow 123.456.789.123;
          deny all;
          satisfy any;

          #other stuff...
        }
    Thanks for any help.
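    One workaround (a sketch, assuming an nginx version new enough to accept a variable as the auth_basic value): use a map to turn the matching user agent into "off", leaving everyone else behind the password prompt.
        # In the http {} block
        map $http_user_agent $auth_realm {
            default           "Restricted";   # everyone else must authenticate
            "~*SpecialAgent"  "off";          # whitelisted agent skips basic auth
        }

        server {
            server_name site.com;
            root /var/www/site;

            auth_basic           $auth_realm;
            auth_basic_user_file /usr/local/nginx/conf/htpasswd;
        }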

    Read the article

  • Email is not sent when the script is run by cron

    - by Adam Blok
    I wrote a simple backup bash script, and at the end of it, it sends an email to me that the backup is ready. Everything works perfectly when I run this script from a terminal (as root), but when the script is run by cron, the email is not sent :-/
        #!/bin/sh

        filename=$(date +%d-%m-%Y)
        backup_dir="/mnt/backup/"
        email_from_name="BACKUP"
        email_to="my@email"
        email_subject="Backup is ready"
        email_body_file="/tmp/backup-email-body.txt"

        tar czf "$backup_dir$filename.tgz" "/home/www"
        echo "Subject: $email_subject" > $email_body_file
        ls $backup_dir -sh >> $email_body_file
        sendmail -F $email_from_name -t $email_to < $email_body_file
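    A likely culprit (an assumption, not from the original post): cron runs with a very small PATH, so sendmail in /usr/sbin may simply not be found. Using the binary's full path, or setting PATH inside the script, is a minimal sketch of a fix:
        #!/bin/sh
        # cron's default PATH is usually just /usr/bin:/bin, so be explicit
        PATH=/usr/sbin:/usr/bin:/bin

        # ...same backup commands as above...

        # Call sendmail by its full path (commonly /usr/sbin/sendmail)
        /usr/sbin/sendmail -F "$email_from_name" -t "$email_to" < "$email_body_file"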

    Read the article

  • Have apache choose a php version based on the extension in the url, but with a single file on the filesystem

    - by Somejan
    I want to configure a local apache server to serve php files with different php versions. In my document root I have phpinfo.php; now if I go to http://localhost/phpinfo.php4, I want to see the phpinfo.php file processed with php4, and if I go to http://localhost/phpinfo.php5 I want to see the same file processed with php5. Note: both php 4 and 5 are already installed side by side, and I have no problem configuring apache to treat files that have a .php4 or .php5 extension on the filesystem with the correct php version. What I want is for apache to do the following: if the url-path ends in .php5, serve the file which has a .php extension on the filesystem using the application/x-httpd-php5 handler; if the url-path ends in .php4, serve the same file with the .php extension on the filesystem using the application/x-httpd-php4 handler.

    Read the article

  • MySQL stops working after being started

    - by user115343
    I am new to this web server thing. I used Centmin Mod to install nginx + MariaDB to set up a small WordPress blog. The first day it was OK; there was a nice "hello world" on my box's IP. But today I checked and MySQL had stopped working, so I immediately started it again, but it stopped again after some minutes! I used this tutorial, but it still stops after some period. Here is my log:
        [root@rylai ~]# tail -f /var/log/mysqld.log
        120326 16:19:05 [Note] Plugin 'PBXT_STATISTICS' is disabled.
        120326 16:19:05 [Note] Plugin 'InnoDB' is disabled.
        120326 16:19:06 [Note] Event Scheduler: Loaded 0 events
        120326 16:19:06 [Note] /usr/sbin/mysqld: ready for connections.
        Version: '5.2.10-MariaDB-mariadb107'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  (MariaDB - http://mariadb.com/)
        120326 16:20:36 mysqld_safe Number of processes running now: 0
        120326 16:20:36 mysqld_safe mysqld restarted
        120326 16:20:39 [Note] Plugin 'ARCHIVE' is disabled.
        120326 16:20:39 [Note] Plugin 'FEDERATED' is disabled.
        120326 16:20:40 mysqld_safe mysqld from pid file /var/lib/mysql/rylai.pid ended
    I only access MySQL on the CLI; I didn't install any panel yet.
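    "Number of processes running now: 0" from mysqld_safe usually means mysqld died rather than exited cleanly; on a small VPS the kernel OOM killer is a common cause (an assumption here, not confirmed by the post). A quick check, plus a couple of memory-trimming knobs, as a sketch:
        # See whether the kernel killed mysqld for running out of memory
        dmesg | grep -i -E 'oom|killed process'
        grep -i -E 'oom|killed process' /var/log/messages

        # If memory pressure is the cause, shrinking MariaDB buffers in /etc/my.cnf
        # can help (values below are illustrative, not tuned recommendations):
        #   [mysqld]
        #   key_buffer_size = 16M
        #   max_connections = 30
        #   query_cache_size = 8M
        #   tmp_table_size = 16M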

    Read the article

  • How do I access files inside a Wubi virtual ext4 Ubuntu partition from within Windows?

    - by aalaap
    I just installed Ubuntu 10.04 using Wubi on a PC that has Windows XP and Windows 7 installed. I was working in it for a while and everything is just fine. However, when I booted back into Windows 7, I couldn't figure out a way to access the files I had created or downloaded into the Ubuntu partition. They're in a virtual disk called root.disk in my C:\ubuntu\disks. Is there a way I can mount this vhd into Windows or at least browse the contents and extract what I need?

    Read the article

  • Can two Linux installations share the same /home partition?

    - by huahsin68
    I am currently using openSUSE 11.4 and Windows XP on a laptop. I am planning to remove Windows and install Kubuntu instead. My current situation is that I have my root (/) and /home partitions separated in openSUSE. Can I share the /home partition between openSUSE and Kubuntu? How do I configure Kubuntu to use the existing /home partition during installation? BTW, the most recent Kubuntu is using the ext3 file system whereas my openSUSE is using ext3. Will this be an issue when I install Kubuntu? Are there any other issues I need to take care of?
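    At a sketch level (assuming the Kubuntu installer is pointed at the existing partition and told not to format it), sharing /home mostly comes down to mounting the same partition in both systems and keeping the user's numeric UID/GID identical; something like:
        # Find the partition's UUID (the same value is used in both installs)
        blkid /dev/sda3     # /dev/sda3 is a placeholder for the real /home partition

        # /etc/fstab entry in each distribution
        UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /home  ext3  defaults  0  2

        # Check that the user has the same numeric UID/GID on both systems
        id huahsin68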

    Read the article

  • SugarCRM CE Won't Install on Ubuntu 10.10

    - by Trenton Scott
    I have a fresh copy of Ubuntu 10.10 server with a working LAMP installation. I downloaded SugarCRM and browsed to its directory to open the installer (via Firefox). The installer appears fine, I accept the license agreement, and it proceeds to check file permissions. It advises that several directories need looser permissions (chmod 766), and I adjust them accordingly. After making the changes, I click "recheck" and the page just reloads as blank (white). There are no errors visible, nothing in the server logs (Apache/PHP) and installation cannot continue. I'm able to get back to the installation tool by readjusting permissions back to my default (0755 for directories, 0644 for files). All files/folders are owned by root and the www-data group. Any idea about what's wrong?

    Read the article

  • How to set up virtual users in vsftpd?

    - by ares94
    I've read this tutorial: http://howto.gumph.org/content/setup-virtual-users-and-directories-in-vsftpd/ My configuration is as follows:
        --- vsftpd.conf ---
        listen=YES
        anonymous_enable=NO
        local_enable=YES
        virtual_use_local_privs=YES
        write_enable=YES
        connect_from_port_20=YES
        pam_service_name=vsftpd
        guest_enable=YES
        user_sub_token=$USER
        local_root=/var/www/sites/$USER
        chroot_local_user=YES
        hide_ids=YES

        --- /etc/pam.d/vsftpd ---
        auth    required pam_pwdfile.so pwdfile /etc/vsftpd/passwd
        account required pam_permit.so
    I created the file /etc/vsftpd/passwd and added users using htpasswd. I tried to log in but it didn't work:
        ftp 127.0.0.1
        Connected to 127.0.0.1 (127.0.0.1).
        220 vsFTPd 2.3.5+ (ext.1) ready...
        Name (127.0.0.1:root): user1
        331 Please specify the password.
        Password:
        530 Permission denied.
        Login failed.
    Everything seems fine except the permission denied thing. How can I fix this?
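    One frequent cause with pam_pwdfile (an assumption, not stated in the post): the module expects crypt()/MD5-crypt style hashes, while htpasswd writes its own APR1 format by default. Regenerating the entries with a compatible hash is a quick thing to try, as a sketch:
        # -d forces the old crypt() format, which pam_pwdfile can read
        # (note: crypt() only uses the first 8 characters of the password)
        htpasswd -c -d /etc/vsftpd/passwd user1     # -c only for the first user
        htpasswd -d /etc/vsftpd/passwd user2

        # Watch the auth log while retrying the login for PAM's own error message
        tail -f /var/log/auth.log   # or /var/log/secure on RHEL-style systems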

    Read the article

  • Reenabling the Spotlight Menubar item in Mac OS X 10.6

    - by Tim Visher
    I believe I followed the instructions here to disable Spotlight indexing and remove the menubar item. I re-enabled indexing just fine, but when I changed the permissions back to 744, the Spotlight search position came back (as in the space it would normally occupy), but the actual icon and search box will not show up. If I click that portion of the screen I get a blue box, but I can't type anything into anything. Currently, permissions look like this:
        [~]$ ll /System/Library/CoreServices/Search.bundle.bak/Contents/MacOS/
        total 648
        -rwxr-xr-x 1 root wheel 835K Sep 17 14:48 Search*
    ll is an alias mapped to the following:
        ll='${LS_PREAMBLE} -hl'
    with $LS_PREAMBLE being:
        [~]$ echo $LS_PREAMBLE
        ls -GF
    (Ignore the .bak extension. I decided that until I found a way to fully restore it, I would just remove it entirely, following the directions here.) That looks right to me, and obviously something is launching, but the UI elements aren't there. So how can I restore it? Thanks in advance!

    Read the article

  • Is there a way to disable specific Spybot Immunization rules?

    - by Iszi
    I've been having problems using a desktop sharing application, which I've traced to the Immunization protections applied via Spybot S&D. Specifically, the problem has been narrowed down to the rules in the \SOFTWARE (Plugins) categories under the Internet Explorer groups. Once I disable these Immunization categories, everything in the application works fine. Each of these categories appears to include ~900 protections on the system. I suspect that the root cause of my problems could be narrowed down to just one, or perhaps a handful, of the settings that get applied in these categories. However, I can't find any options in Spybot S&D which would allow me to drill down to the individual protection rules and choose which to enable or disable. Is there something I'm missing, or is this not a feature available via the GUI? If it's not strictly supported in the application, is there a way to work around it by manually editing some of its files or registry settings? Spybot S&D version: 2.2.21.0 Spybot Start Center version: 2.2.21.129 Windows Ultimate x64

    Read the article

  • Simple HTTP server that will send the same file for all requests?

    - by Rory McCann
    I need to debug an XML-RPC application, which sends XML replies over HTTP. I have a sample XML reply (i.e. data from the server, sent to the client that isn't working), and I'd like to debug my application. Ideally I'd like a simple HTTP server that will serve one file in reply to all requests. Someone requests /? Send them this file. Someone makes a POST to /server/page.php with a certain cookie? Just send them this file. I don't care about multithreading or security. I will only need to use this for a few hours to debug. I have root on the machine. I.e. I'm hoping there's something as easy to use as this: simple_http_server -p 12445 -f my_test_file I'm aware of python's SimpleHTTPServer module, but I'm not sure how to make it work in this case.
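    A minimal sketch in Python 2 (the era of SimpleHTTPServer) that answers every GET and POST with the contents of one file; the file name and port mirror the hypothetical command line above:
        #!/usr/bin/env python
        # serve_one_file.py -- reply to every request with the same file
        import BaseHTTPServer

        PORT = 12445
        FILENAME = 'my_test_file'

        class OneFileHandler(BaseHTTPServer.BaseHTTPRequestHandler):
            def _reply(self):
                body = open(FILENAME, 'rb').read()
                self.send_response(200)
                self.send_header('Content-Type', 'text/xml')
                self.send_header('Content-Length', str(len(body)))
                self.end_headers()
                self.wfile.write(body)
            do_GET = _reply
            do_POST = _reply

        # Usage: put my_test_file next to this script, run it, and point the
        # client at http://localhost:12445/ (any path and method will do)
        BaseHTTPServer.HTTPServer(('', PORT), OneFileHandler).serve_forever()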

    Read the article
