Search Results

Search found 10706 results on 429 pages for 'pkg config'.


  • How to enable key forwarding with ssh-agent?

    - by Lamnk
    I've used the ssh-agent from oh-my-zsh to manage my SSH key. So far, so good: I only have to type the passphrase for my private key once when I start my shell, and public-key authentication works great. The problem, however, is that key forwarding doesn't work. There are 2 servers, A and B, which I can log into with my public key. When I ssh into A and from there ssh into B, I must provide my password, which should not be the case. A is a CentOS 5.6 box, B is an Ubuntu 11.04 box. I have this in my local .ssh/config:

        Host *
            ForwardAgent yes

    OpenSSH on A is the standard openssh 4.3 package provided by CentOS. I also enabled ForwardAgent for the ssh client on A, but forwarding still doesn't work.
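
    A quick way to see where the chain breaks (a sketch; hostnames and user names are placeholders): force forwarding for one hop and check that the agent socket actually shows up on A before trying B.

        # from the local machine
        ssh -A user@A

        # then, on A
        echo $SSH_AUTH_SOCK      # should print a socket path, not be empty
        ssh-add -l               # should list the key held by your local agent
        ssh user@B               # should now log in without a password

    If SSH_AUTH_SOCK is empty on A, the server side is refusing to forward the agent (on newer OpenSSH that is controlled by AllowAgentForwarding in sshd_config); if the socket is there but ssh-add -l shows nothing, the local agent side is the problem.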

    Read the article

  • Default Document not posting for IIS 7

    - by Nikshep
    I am using URL routing in ASP.NET 4.0, and the default page for the site is /home, where users can log in to the app. So when users type in my site's URL, i.e. www.domain.com, the default-page config I have redirects them to my home.aspx page, which is mapped in my global.asax as /home. Now all the login requests (i.e. POST requests) coming from www.domain.com are failing; no events are being fired on the server. Whereas if I try www.domain.com/home, things start working and I am able to log on. I read about a similar issue (http://forums.iis.net/t/1164877.aspx) but am still confused about the solution. This used to work fine on IIS 6, but on IIS 7 this scenario started happening. Am I missing some configuration? Please help.
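
    One common thing to check on IIS 7 with ASP.NET routing (a sketch, not a confirmed fix for this exact setup): under the integrated pipeline, extensionless routes such as /home often need the managed modules to run for every request, which can be switched on in web.config.

        <!-- web.config sketch -->
        <system.webServer>
          <modules runAllManagedModulesForAllRequests="true" />
        </system.webServer>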

    Read the article

  • win2008 r2 IIS7.5 - setting up a custom user for an application pool, and trust issues

    - by Ken Egozi
    Scenario: a blank Win2008 R2 install; the goal was to have a couple of sites running with isolated app pools and dedicated users.

    - Created a new folder for a new website, c:\web\siteA\wwwroot, with the ASP.NET app deployed there in the /bin folder.
    - Created a user named "appuser" and added it to the IIS_USERS group.
    - Gave the website folder read and execute permissions for IIS_USERS and appuser.
    - Created the IIS site and set the app-pool identity to appuser.

    Now I'm getting a YSOD telling me that the trust level is too low: "SecurityException: That assembly does not allow partially trusted callers". Adding <trust level="Full" /> to web.config did not help. Changing the app-pool user to Administrator makes the site run. Setting the "anonymous user identity" to either IUSR or the app-pool identity makes no difference. Any ideas? Is there a "step by step" howto guide for setting up users for isolated app pools on IIS 7.5?
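
    One thing that often bites custom app-pool accounts (a sketch; "appuser" and the site path come from the question, the framework directory is an assumption to adjust for your .NET version and bitness): the identity needs read/execute on the site plus modify on the ASP.NET temporary files folder.

        icacls c:\web\siteA\wwwroot /grant appuser:(OI)(CI)RX
        icacls "%windir%\Microsoft.NET\Framework64\v2.0.50727\Temporary ASP.NET Files" /grant appuser:(OI)(CI)M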

    Read the article

  • loadbalancing with different nginx location context and backend server context

    - by robinmag
    Hi, I use nginx and the upstream module for load balancing with the following config:

        upstream lb {
            server 127.0.0.1:8080;
            server 127.0.0.1:8081;
        }
        server {
            listen 88;
            server_name localhost;
            location /cas/ {
                proxy_pass http://lb;
                proxy_redirect off;
                proxy_connect_timeout 2;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    The problem is that the "location /context/" has to match the context path of the backend server, so when I request localhost/context/index.html, nginx routes it to 127.0.0.1:8080/context/index.html or 127.0.0.1:8081/context/index.html. Is it possible to have a backend context that differs from the nginx location? For example, with "location /", nginx would still route the request to 127.0.0.1:8080/context/index.html or 127.0.0.1:8081/context/index.html. Thank you.
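
    A sketch of the usual answer (untested against this exact setup): when proxy_pass is given a URI part, nginx replaces the matched location prefix with that URI, so "/" on the front can map to "/context/" on the backends.

        location / {
            proxy_pass http://lb/context/;
            proxy_set_header Host $host;
        }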

    Read the article

  • Settings on php.ini ignored

    - by bfavaretto
    I can't get my server to obey the settings from php.ini (I'm trying to change memory_limit and upload_max_filesize). As far as I can tell, I'm editing the correct file. phpinfo() gives: Loaded Configuration File /etc/php.ini The file permission is 644. There are also some extra .ini files on /etc/php.d, but none include any of the keys I'm trying to change. No matter what I do, phpinfo reports the default values on both "Local" and "Master" columns. I also scanned my Apache config files, but found nothing related to PHP (besides loading the PHP module). The only way I was able to change those settings was by adding some php_value lines to my .htaccess. Is there something obvious I'm missing? This is a virtual server, and I can perform root commands with sudo. I'm running Apache 2.1.3 and PHP 5.3.3. System info (from uname -a) is: Linux sesctbapp01 2.6.18-308.1.1.el5 #1 SMP Wed Mar 7 04:16:51 EST 2012 x86_64
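
    A couple of quick checks that often explain this (a sketch; the service name assumes the RHEL/CentOS-style layout implied by the el5 kernel in uname): confirm which ini each SAPI loads and make sure Apache was actually restarted, since mod_php only rereads php.ini on restart.

        php --ini                                        # CLI view; compare with phpinfo() in the browser
        php -i | grep -E 'memory_limit|upload_max_filesize'
        sudo /etc/init.d/httpd restart                   # or: sudo service httpd restart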

    Read the article

  • openLdap for windows and phpldapadmin

    - by Dr Casper Black
    Hi, I'm having a problem connecting all of this. I'm new to LDAP, and after failing to install all of this on Ubuntu 10.04 I'm trying to set it up on my local PC. What I did:

    - Installed OpenLDAP for Windows (http://www.userbooster.de/en/download/openldap-for-windows.aspx).
    - Enabled the PHP 5.3.1 extension for LDAP (c:\xampp\php\ext\php_ldap.dll) in php.ini.
    - Copied ssleay32.dll and libeay32.dll to Windows\System32 and Windows\System (this is Windows XP).
    - Set the password generated by c:\Program Files\OpenLDAP\slappasswd.exe in c:\Program Files\OpenLDAP\slapd.conf (rootpw {SSHA}hash).
    - Ran c:\Program Files\OpenLDAP\slapd.exe.
    - Installed phpldapadmin and called https://127.0.0.1/phpldapadmin/.

    When I enter the credentials I get "Invalid credentials (49)" for the user, and in openldap.log I get: could not stat config file "%SYSCONFDIR%\slapd.conf": No such file or directory (2). Can someone help?
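
    The log line suggests slapd never resolved a real path to its config, so the rootpw you set is not being read at all. A sketch worth trying (paths are the ones from the question) is to start slapd in a console with an explicit config file and debug output:

        "C:\Program Files\OpenLDAP\slapd.exe" -d 1 -f "C:\Program Files\OpenLDAP\slapd.conf"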

    Read the article

  • TIME_WAIT connections not being cleaned up after timeout period expires

    - by Mark Dawson
    I am stress testing one of my servers by hitting it with a constant stream of new network connections. The tcp_fin_timeout is set to 60, so if I send a constant stream of roughly 100 requests per second, I would expect to see a rolling average of 6000 (60 * 100) connections in a TIME_WAIT state. This is happening, but looking in netstat (using -o) to see the timers, I see connections like:

        TIME_WAIT timewait (0.00/0/0)

    where the timeout has expired but the connection is still hanging around; I then eventually run out of connections. Anyone know why these connections don't get cleaned up? If I stop creating new connections they do eventually disappear, but while I am constantly creating new connections they don't; it seems like the kernel isn't getting a chance to clean them up. Is there some other config option I need to set to remove the connections as soon as they have expired? The server is running Ubuntu and my web server is nginx. It also has iptables with connection tracking; I'm not sure if that would cause these TIME_WAIT connections to live on. Thanks, Mark.
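
    Settings that are commonly examined for this kind of load (a sketch, not a blanket recommendation; note that tcp_fin_timeout governs FIN_WAIT_2, not how long TIME_WAIT sockets live):

        sysctl -w net.ipv4.tcp_tw_reuse=1
        sysctl -w net.ipv4.tcp_max_tw_buckets=180000
        # if iptables connection tracking is in play, its own TIME_WAIT timer matters too
        sysctl -w net.netfilter.nf_conntrack_tcp_timeout_time_wait=30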

    Read the article

  • MaxClients, Server Limits etc

    - by Moe
    Hello, I'm having some problems with my server. It's getting quite a bit of traffic, is very slow, and is sometimes inaccessible to my users. Here are the server specs:

        CPU: Intel(R) Xeon(R) CPU E5620 @ 2.40GHz - 16 processors
        RAM: 2GB

    The values in the Apache config are:

        StartServers: 5
        MaxSpareServers: 10
        MinSpareServers: 5
        MaxClients: 150
        ServerLimit: 256
        MaxRequestsPerChild: 1000
        KeepAlive: On
        KeepAliveTimeout: 5
        MaxKeepAliveRequests: 100
        TimeOut: 300

    What would be optimal values for a server of my configuration, to support the maximum number of users at a reasonable speed without killing the server? Thank you.
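
    A sketch of how such values are usually sized (the 40 MB per child is an assumption; measure your own Apache process RSS first): with 2 GB of RAM and ~40 MB per prefork child, a MaxClients much above 40-50 pushes the box into swap, which matches the "slow and sometimes inaccessible" symptom.

        # example prefork sizing for ~2 GB RAM, assuming ~40 MB per child
        # (the module name may be mpm_prefork_module on Debian-style builds)
        <IfModule prefork.c>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            ServerLimit          40
            MaxClients           40
            MaxRequestsPerChild 1000
        </IfModule>
        KeepAlive On
        KeepAliveTimeout 2
        Timeout 60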

    Read the article

  • SSH tunnel over http proxy with blocked 443 (SSL)

    - by Evgeny Zhulenev
    Is it possible to create an SSH tunnel over an HTTP proxy when HTTPS access is denied? I had this configuration in .ssh\config:

        Host home
            User root
            Hostname *my-home-pc-with-ssh-access-allowed*
            Port 8090
            ProxyCommand corkscrew db-isa-01 8080 %h %p ~/.ssh/.corkscrew-db-isa-auth
            IdentityFile ~/.ssh/id_rsa

    where db-isa-01 is my corporate proxy server. Today the admins blocked all HTTPS access and allow it only for a few servers on a whitelist. I used this command to create a tunnel:

        ssh -D 7070 -o 'GatewayPorts yes' -A -q -g -t root@home

    and now it doesn't work. As far as I can tell, that's because our proxy denies all HTTPS connections:

        Proxy could not open connnection to ***: Proxy Error ( The specified Secure Sockets Layer (SSL) port is not allowed. Forefront TMG is not configured to allow SSL requests from this port. Most Web browsers use port 443 for SSL requests. )

    P.S. I use Windows 7 and corkscrew with Cygwin, so Linux-only solutions are not suitable for me.
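
    The TMG error complains specifically about the CONNECT port (8090), so the only client-side knob left is which port the tunnel targets. A heavily hedged sketch of the workaround people usually try (it may simply be impossible here if the whitelist also covers port 443): make the home box's sshd listen on 443 and point the same corkscrew ProxyCommand at it.

        Host home
            User root
            Hostname my-home-pc
            Port 443
            ProxyCommand corkscrew db-isa-01 8080 %h %p ~/.ssh/.corkscrew-db-isa-auth
            IdentityFile ~/.ssh/id_rsa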

    Read the article

  • Can snort output an alert for a portscan (sfPortscan) to syslog?

    - by Jamie McNaught
    I've been working on this for too long now. I'm sure the answer should be obvious, but... The Snort manual (http://www.snort.org/assets/125/snort_manual-2_8_5_1.pdf) lists two logging outputs for sfPortscan on p. 39 (p. 40 according to Acrobat Reader): "Unified Output" and "Log File Output". I am guessing the former refers to the "unified" output mode, which makes me think the answer is "No, Snort cannot output alerts for detected portscans to syslog." The config file I've been using is:

        alert tcp any 80 -> any any (msg:"TestTestTest"; content: "testtesttest"; sid:123)

        preprocessor sfportscan: proto { all } \
            memcap { 10000000 } \
            scan_type { all } \
            sense_level { high } \
            logfile { pscan.log }

    (yes, very basic, I know). A simple nmap run triggers output to pscan.log. Can anyone confirm this? Or point out how I do this?
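
    One thing that may be worth testing (a sketch; I have not verified that sfPortscan's generator 122 events reach this plugin on every build): the alert_syslog output plugin forwards whatever hits the alert facility to syslog, so portscan events may land there as long as preprocessor alerts are enabled.

        output alert_syslog: LOG_AUTH LOG_ALERT
        preprocessor sfportscan: proto { all } \
            scan_type { all } \
            sense_level { high }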

    Read the article

  • Keyboard automatically disconnects and reconnects

    - by Algorithms
    The problem I am facing is that my USB keyboard automatically disconnects when there is a fluctuation in the power supply to the speakers. The speakers and the PC both draw power from an APC UPS. The fluctuation occurs because the speaker plug is not tightly connected to the UPS power outlet; it is okay for normal work, but an accidental jerk causes the fluctuation. After some amount of time (usually within 5 seconds) the keyboard automatically reconnects and Windows plays the "hardware connected" sound. This problem also occurs if I manually pull the speaker power cable out of the UPS outlet. My question is whether the problem I am facing is due to electrical issues or due to software problems. PC config: OS: Windows 7 Ultimate; UPS: APC 600 VA; PSU: Corsair TX 650; Speakers: Realtek.

    Read the article

  • MySQL open files limit

    - by Brian
    This question is similar to set open_files_limit, but there was no good answer. I need to increase my table_open_cache, but first I need to increase the open_files_limit. I set the option in /etc/mysql/my.cnf: open-files-limit = 8192 This worked fine in my previous install (Ubuntu 8.04), but now in Ubuntu 10.04, when I start the server up, open_files_limit is reported to be 1710. That seems like a pretty random number for the limit to be clipped to. Anyway, I tried getting around it by adding a line like this in /etc/security/limits.conf: mysql hard nofile 8192 I also tried adding this to the pre-start script in mysql's upstart config (/etc/init/mysql.conf): ulimit -n 8192 Obviously neither of those things worked. So where is the hoop that has been added between Ubuntu 8.04 and 10.04 through which I must jump in order to actually increase the open files limit?
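
    A sketch of the usual Ubuntu 10.04 answer (untested here): because mysqld is started by upstart, /etc/security/limits.conf is never consulted, so the limit has to be raised in the upstart job itself, alongside the my.cnf setting.

        # /etc/init/mysql.conf (upstart job) -- add near the top, then restart mysql
        limit nofile 8192 8192

        # /etc/mysql/my.cnf
        [mysqld]
        open_files_limit = 8192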

    Read the article

  • re-enabling a table for mysql replication

    - by jessieE
    We were able to set up MySQL master-slave replication with the following version on both master and slave: mysqld Ver 5.5.28-29.1-log for Linux on x86_64 (Percona Server (GPL), Release 29.1). One day we noticed that replication had stopped, and we tried skipping over the entries that caused the replication errors. The errors persisted, so we decided to skip replication for the 4 problematic tables. The slave has now caught up with the master except for those 4 tables. What is the best way to enable replication again for them? This is what I have in mind, but I don't know if it will work:

        1) Modify the slave config to enable replication again for the 4 tables
        2) Stop slave replication
        3) For each of the 4 tables, run: pt-table-sync --execute --verbose --print --sync-to-master h=localhost,D=mydb,t=mytable
        4) Restart the slave database to reload the replication configuration
        5) Start slave replication
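
    A sketch of one possible ordering (the database and table names are the question's own examples; review pt-table-sync's --print output before trusting --execute). Because --sync-to-master applies its fixes on the master and lets them replicate down, replication for those tables needs to be flowing again before the sync runs:

        # 1. remove the 4 tables from the replicate-ignore rules in the slave's my.cnf
        # 2. replication filters are not dynamic in 5.5, so restart mysqld on the slave
        service mysql restart
        mysql -e "START SLAVE; SHOW SLAVE STATUS\G"
        # 3. with replication running, repair the drift those tables accumulated
        pt-table-sync --print --sync-to-master h=localhost,D=mydb,t=mytable
        pt-table-sync --execute --verbose --sync-to-master h=localhost,D=mydb,t=mytable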

    Read the article

  • Issue varnish purge through CloudFlare to Varnish

    - by Michael
    I've been working on this for a while and can't seem to find any solution. I have Varnish sitting in front of my nginx server, with CloudFlare sitting in front of that. When I issue a curl -X PURGE against the host, CloudFlare picks it up and of course denies it with a 503 error. If I use direct.host to bypass CloudFlare, the request hits the Varnish server and is accepted, but it does nothing: since direct.host isn't the host visitors use, there is nothing in the cache for that URL. I am using WordPress, and the WordPress Varnish Purge plugin says to add the following line to wp-config.php:

        define('VHP_VARNISH_IP','127.0.0.1');

    This is specifically meant to work with proxy servers and/or CloudFlare, to make sure the request goes to the Varnish server rather than CloudFlare, but that doesn't seem to help. Has anyone seen this before and have any idea?
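
    A manual test that usually isolates the problem (a sketch; the domain and path are placeholders): send the PURGE straight to the Varnish box but keep the public Host header, so the hash matches what visitors actually cached.

        curl -X PURGE -H "Host: www.example.com" http://127.0.0.1/some/path/

    If that works, the remaining pieces are making sure the plugin sends the same public Host header to VHP_VARNISH_IP and that the Varnish VCL allows PURGE from that source address.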

    Read the article

  • VSFTPD Unable to set write permissions on folder

    - by Frank Astin
    I've just set up my first FTP server with vsftpd on CentOS. I can connect to it fine using a user in the ftp-users group, but I get read-only access. I've tried several different chmod modes on the folder (even 777), all to no avail. This is the tutorial I used to set up the server: http://tinyurl.com/73pyuxz. Hopefully you'll be able to see something I missed. Thanks in advance. Requested config file:

        # Example config file /etc/vsftpd/vsftpd.conf
        #
        # The default compiled in settings are fairly paranoid. This sample file
        # loosens things up a bit, to make the ftp daemon more usable.
        # Please see vsftpd.conf.5 for all compiled in defaults.
        #
        # READ THIS: This example file is NOT an exhaustive list of vsftpd options.
        # Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's
        # capabilities.
        #
        # Allow anonymous FTP? (Beware - allowed by default if you comment this out).
        anonymous_enable=NO
        #
        # Uncomment this to allow local users to log in.
        local_enable=YES
        #
        # Uncomment this to enable any form of FTP write command.
        write_enable=YES
        #
        # Default umask for local users is 077. You may wish to change this to 022,
        # if your users expect that (022 is used by most other ftpd's)
        local_umask=022
        #
        # Uncomment this to allow the anonymous FTP user to upload files. This only
        # has an effect if the above global write enable is activated. Also, you will
        # obviously need to create a directory writable by the FTP user.
        #anon_upload_enable=YES
        #
        # Uncomment this if you want the anonymous FTP user to be able to create
        # new directories.
        #anon_mkdir_write_enable=YES
        #
        # Activate directory messages - messages given to remote users when they
        # go into a certain directory.
        dirmessage_enable=YES
        #
        # The target log file can be vsftpd_log_file or xferlog_file.
        # This depends on setting xferlog_std_format parameter
        xferlog_enable=YES
        #
        # Make sure PORT transfer connections originate from port 20 (ftp-data).
        connect_from_port_20=YES
        #
        # If you want, you can arrange for uploaded anonymous files to be owned by
        # a different user. Note! Using "root" for uploaded files is not
        # recommended!
        #chown_uploads=YES
        #chown_username=whoever
        #
        # The name of log file when xferlog_enable=YES and xferlog_std_format=YES
        # WARNING - changing this filename affects /etc/logrotate.d/vsftpd.log
        #xferlog_file=/var/log/xferlog
        #
        # Switches between logging into vsftpd_log_file and xferlog_file files.
        # NO writes to vsftpd_log_file, YES to xferlog_file
        xferlog_std_format=YES
        #
        # You may change the default value for timing out an idle session.
        #idle_session_timeout=600
        #
        # You may change the default value for timing out a data connection.
        #data_connection_timeout=120
        #
        # It is recommended that you define on your system a unique user which the
        # ftp server can use as a totally isolated and unprivileged user.
        #nopriv_user=ftpsecure
        #
        # Enable this and the server will recognise asynchronous ABOR requests. Not
        # recommended for security (the code is non-trivial). Not enabling it,
        # however, may confuse older FTP clients.
        #async_abor_enable=YES
        #
        # By default the server will pretend to allow ASCII mode but in fact ignore
        # the request. Turn on the below options to have the server actually do ASCII
        # mangling on files when in ASCII mode.
        # Beware that on some FTP servers, ASCII support allows a denial of service
        # attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd
        # predicted this attack and has always been safe, reporting the size of the
        # raw file.
        # ASCII mangling is a horrible feature of the protocol.
        #ascii_upload_enable=YES
        #ascii_download_enable=YES
        #
        # You may fully customise the login banner string:
        #ftpd_banner=Welcome to blah FTP service.
        #
        # You may specify a file of disallowed anonymous e-mail addresses. Apparently
        # useful for combatting certain DoS attacks.
        #deny_email_enable=YES
        # (default follows)
        #banned_email_file=/etc/vsftpd/banned_emails
        #
        # You may specify an explicit list of local users to chroot() to their home
        # directory. If chroot_local_user is YES, then this list becomes a list of
        # users to NOT chroot().
        #chroot_list_enable=YES
        # (default follows)
        #chroot_list_file=/etc/vsftpd/chroot_list
        #
        # You may activate the "-R" option to the builtin ls. This is disabled by
        # default to avoid remote users being able to cause excessive I/O on large
        # sites. However, some broken FTP clients such as "ncftp" and "mirror" assume
        # the presence of the "-R" option, so there is a strong case for enabling it.
        #ls_recurse_enable=YES
        #
        # When "listen" directive is enabled, vsftpd runs in standalone mode and
        # listens on IPv4 sockets. This directive cannot be used in conjunction
        # with the listen_ipv6 directive.
        listen=YES
        #
        # This directive enables listening on IPv6 sockets. To listen on IPv4 and IPv6
        # sockets, you must run two copies of vsftpd whith two configuration files.
        # Make sure, that one of the listen options is commented !!
        #listen_ipv6=YES

        pam_service_name=vsftpd
        userlist_enable=YES
        tcp_wrappers=YES
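
    On CentOS, the usual culprit when even 777 still gives read-only FTP is SELinux rather than vsftpd itself (a sketch; the boolean names vary between releases):

        getenforce                                   # if this prints Enforcing, check the FTP booleans
        getsebool -a | grep ftp
        setsebool -P allow_ftpd_full_access on       # or set ftp_home_dir=on for writes inside home directories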

    Read the article

  • Temporarily Utilizing 304 Header on Apache for Crawlers

    - by Volomike
    I have a client who has a hosting arrangement with 400 customer sites, all hosted through SuPHP in CGI mode on Apache. The sysop is now gone and the client is calling on me to roll out a new PHP thing. Trouble is, server load is very high right now, and we have found that it's due to the crawlers. We had one customer in particular who complained of slow websites; we engaged a 304-header plugin on his site against most crawlers, and his site perked right up. We'd like to lower that load by issuing a global 304 header to all the crawlers while letting human visitors through. I have a long list of user-agent keywords to trap for. What's the best way to temporarily engage that global 304 header while allowing human visitors to get right on through? I could roll out 400 .htaccess file changes, but it would be ideal to make this change in one central Apache config and have it automatically affect all the sites at once.
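
    The "one central config" half is at least straightforward (a sketch; the file name and the abbreviated pattern list are examples, and the 304 mechanism itself is left as the open question): a single Include in the main httpd.conf is inherited by every virtual host, so whatever crawler rule is chosen only has to live in one place and can be removed just as quickly.

        # /etc/httpd/conf/httpd.conf (or equivalent)
        Include conf.d/crawler-throttle.conf

        # conf.d/crawler-throttle.conf -- tag crawler requests once, globally (mod_setenvif)
        BrowserMatchNoCase "googlebot|bingbot|slurp|baiduspider|yandex" IS_CRAWLER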

    Read the article

  • pam_filter usage prevent passwd from working

    - by Henry-Nicolas Tourneur
    Hello everybody, I have PAM + LDAP over SSL running on Debian Lenny, and it works well. I still want to restrict who is able to connect. In the past I used pam_groupdn for that, but I recently hit a situation where I have to accept 2 different groups, so I used pam_filter like this:

        pam_filter |(groupattribute=server)(groupattribute=restricted_server)

    The problem is that with this statement, passwd no longer works with LDAP accounts. Any idea why? Please find below a link to my config files; since serverfault.com only allows me to post one link, the other conf files are here: http://pastebin.org/447148. Many thanks in advance :)
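
    If pam_filter keeps interfering with the passwd stack, one hedged alternative is to keep the "member of either group" rule out of the LDAP filter entirely and enforce it with pam_access instead (the group names below are the ones from the filter):

        # /etc/pam.d/common-account (Debian) -- add:
        account  required  pam_access.so

        # /etc/security/access.conf:
        + : (server) (restricted_server) : ALL
        - : ALL : ALL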

    Read the article

  • How to sort by file's modified date on IIS web server in Windows 7?

    - by ????
    Apache has had this built in since 1996, which is 17 years ago... For Microsoft's IIS web server, which is available on Windows 7, is there a way to make it sort the file listing by file modification date? For example, show the file listing with "Date" and "Filename" column labels, and clicking one would sort the files by that attribute. The only info I could find is http://technet.microsoft.com/en-us/library/cc732762(v=ws.10).aspx:

        cd %windir%\system32\inetsrv
        appcmd set config /section:directoryBrowse /showFlags:Time|Size|Extension|Date|LongDate|None

    but it doesn't work.
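
    One likely reason that command fails as written (a sketch): in cmd.exe the unquoted "|" characters are interpreted as pipes, so the flag list never reaches appcmd. A quoted, comma-separated list is the form that works:

        cd %windir%\system32\inetsrv
        appcmd set config /section:directoryBrowse /enabled:true /showFlags:"Date, Time, Size, Extension, LongDate"

    Note this only controls which columns the listing shows; IIS's built-in directory browsing does not offer click-to-sort column headers the way Apache's FancyIndexing does.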

    Read the article

  • Create a Windows Image for Deployment

    - by Kiranu
    In my company we have 8 laptops that we use to deploy in the field. These machines get assigned to a user for a certain time and run Windows Vista. All the machines are the same model. After a machine is returned, it is company policy to completely format it and go back to a predetermined configuration. Right now, what we do is use the laptop's recovery utility (we are a small shop, so we use the OEM Windows license the laptops come with) and manually uninstall and change the configuration in order to bring it to our baseline config. I know there are ways to create an image that gets copied to the hard drive with a specific configuration and with specific software installed (that's what OEMs do, right?). I'm looking for a tool or a tutorial or something that explains, as simply as possible, how to create such an image.
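
    A sketch of the standard flow with the free Windows AIK tools (the image names and the D: drive are examples; a prepared reference laptop and a Windows PE boot disk are assumed):

        rem 1. on the reference laptop, generalize the install
        c:\windows\system32\sysprep\sysprep.exe /generalize /oobe /shutdown

        rem 2. boot that laptop from Windows PE and capture the disk
        imagex /capture C: D:\images\vista-baseline.wim "Vista baseline"

        rem 3. to rebuild a returned laptop, boot WinPE, format, then apply
        imagex /apply D:\images\vista-baseline.wim 1 C: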

    Read the article

  • Nginx + PHP-FPM executes script, but returns 404

    - by MorfiusX
    I am using Nginx + PHP-FPM to run a WordPress-based site. I have a URL that should return dynamically generated JSON data for use with the DataTables jQuery plugin. The data is returned properly, but with a return code of 404. I think this is a Nginx config issue, but I haven't been able to figure out why. The script 'getTable.php' works properly on the production version of the site, which is currently using Apache. Anyone know how I can get this to work on Nginx?

    URL: http://dev.iloveskydiving.org/wp-content/plugins/ils-workflow/lib/getTable.php

    SERVER: CentOS 6 + Varnish (caching disabled for development) + Nginx + PHP-FPM + WordPress + W3 Total Cache

    Nginx config:

        server {
            # Server Parameters
            listen 127.0.0.1:8082;
            server_name dev.iloveskydiving.org;
            root /var/www/dev.iloveskydiving.org/html;
            access_log /var/www/dev.iloveskydiving.org/logs/access.log main;
            error_log /var/www/dev.iloveskydiving.org/logs/error.log error;
            index index.php;

            # Rewrite minified CSS and JS files
            location ~* \.(css|js) {
                if (!-f $request_filename) {
                    rewrite ^/wp-content/w3tc/min/(.+\.(css|js))$ /wp-content/w3tc/min/index.php?file=$1 last;
                    expires max;
                }
            }

            # Set a variable to work around the lack of nested conditionals
            set $cache_uri $request_uri;

            # Don't cache uris containing the following segments
            if ($request_uri ~* "(\/wp-admin\/|\/xmlrpc.php|\/wp-(app|cron|login|register|mail)\.php|wp-.*\.php|index\.php|wp\-comments\-popup\.php|wp\-links\-opml\.php|wp\-locations\.php)") {
                set $cache_uri "no cache";
            }

            # Don't use the cache for logged in users or recent commenters
            if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp\-postpass|wordpress_logged_in") {
                set $cache_uri 'no cache';
            }

            # Use cached or actual file if they exists, otherwise pass request to WordPress
            location / {
                try_files /wp-content/w3tc/pgcache/$cache_uri/_index.html $uri $uri/ /index.php?q=$uri&$args;
            }

            # Cache static files for as long as possible
            location ~* \.(xml|ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
                try_files $uri =404;
                expires max;
                access_log off;
            }

            # Deny access to hidden files
            location ~* /\.ht {
                deny all;
                access_log off;
                log_not_found off;
            }

            location ~ \.php$ {
                try_files $uri =404;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                include /etc/nginx/fastcgi_params;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_intercept_errors on;
                fastcgi_pass unix:/var/lib/php-fpm/php-fpm.sock; # port where FastCGI processes were spawned
            }
        }

    FastCGI params:

        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param HTTPS $https if_not_empty;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;

        # PHP only, required if PHP was built with --enable-force-cgi-redirect
        fastcgi_param REDIRECT_STATUS 200;

    UPDATE: Upon further digging, it looks like Nginx is generating the 404 and PHP-FPM is executing the script properly and returning a 200.

    UPDATE: Here are the contents of the script:

        <?php
        /**
         * Connect to Wordpres
         */
        require(dirname(__FILE__) . '/../../../../wp-blog-header.php');

        /**
         * Define temporary array
         */
        $aaData = array();
        $aaData['aaData'] = array();

        /**
         * Execute Query
         */
        $query = new WP_Query( array( 'post_type' => 'post', 'posts_per_page' => '-1' ) );
        foreach ($query->posts as $post) {
            array_push( $aaData['aaData'], array( $post->post_title ) );
        }

        /**
         * Echo JSON encoded array
         */
        echo json_encode($aaData);
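
    One commonly suggested culprit for "custom plugin file under WordPress returns its data with a 404" (a sketch, not verified against this site, and it sits in tension with the update that points at Nginx): wp-blog-header.php runs the main query for the plugin's URL, finds no matching route, and sets a 404 status before the JSON is printed. A hedged workaround is to bootstrap WordPress without that step and force the status:

        <?php
        // sketch of an alternative getTable.php preamble
        require dirname(__FILE__) . '/../../../../wp-load.php'; // bootstrap WP without running the main query
        status_header(200);                                     // make sure no 404 status is emitted
        header('Content-Type: application/json');
        // ... build and echo the JSON array as before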

    Read the article

  • What is the purpose of netcat's "-w timeout" option when ssh tunneling?

    - by jrdioko
    I am in the exact same situation as the person who posted another question: I am trying to tunnel SSH connections through a gateway server instead of having to SSH into the gateway and manually SSH again to the destination server from there. I am trying to set up the solution given in the accepted answer there, a ~/.ssh/config that includes:

        Host foo
            User webby
            ProxyCommand ssh a nc -w 3 %h %p

        Host a
            User johndoe

    However, when I try to ssh foo, my connection stays alive for 3 seconds and then dies with a "Write failed: Broken pipe" error. Removing the -w 3 option solves the problem. What is the purpose of that -w 3 in the original solution, and why is it causing a broken-pipe error when I use it? What is the harm in omitting it?
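
    For what it's worth, on OpenSSH 5.4 and newer the netcat hop (and with it the whole -w question) can be avoided using the built-in -W option; a sketch of the equivalent config:

        Host foo
            User webby
            ProxyCommand ssh -W %h:%p a

        Host a
            User johndoe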

    Read the article

  • JBoss AS: use .xml files in the properties-service.xml

    - by fgysin
    The properties service (configured in properties-service.xml) in JBoss Application Server lets you specify external .properties files that are loaded and can then be accessed as system properties from the deployed applications. (See http://community.jboss.org/wiki/PropertiesService for more info.) Is it also possible to load config files in .xml format instead of .properties? I know it is possible for certain specific configs, for example mail-service.xml and jboss-log4j.xml, but those are both loaded directly by JBoss, not via the properties service.
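
    A hedged application-side fallback (this is not a feature of the properties service, and the path is an example): java.util.Properties can read the XML properties format directly, so an .xml file can be loaded at startup and pushed into the system properties by your own code.

        // Hypothetical helper class -- not part of the JBoss properties service API
        import java.io.FileInputStream;
        import java.io.InputStream;
        import java.util.Properties;

        public class XmlPropertiesLoader {
            public static void loadIntoSystemProperties(String path) throws Exception {
                Properties p = new Properties();
                InputStream in = new FileInputStream(path);
                try {
                    p.loadFromXML(in); // expects the java.util.Properties XML format
                } finally {
                    in.close();
                }
                for (Object key : p.keySet()) {
                    System.setProperty((String) key, p.getProperty((String) key));
                }
            }
        }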

    Read the article

  • SSH service will not start on fresh Cygwin 1.7.15 install

    - by Coder6841
    OS: Windows 7 x64
    Cygwin: 1.7.15-1
    OpenSSH: 6.0p1-1

    I'm attempting to install an SSH server on Windows 7. The tutorial that I'm following to do this is here: http://www.howtogeek.com/howto/41560/how-to-get-ssh-command-line-access-to-windows-7-using-cygwin/. The issue is that upon executing the net start sshd command I get the following output:

        The CYGWIN sshd service is starting.
        The CYGWIN sshd service could not be started.
        The service did not report an error.
        More help is available by typing NET HELPMSG 3534.

    Here is the full output of the setup:

        AdminUser@ThisComputer ~
        $ ssh-host-config
        *** Info: Generating /etc/ssh_host_key
        *** Info: Generating /etc/ssh_host_rsa_key
        *** Info: Generating /etc/ssh_host_dsa_key
        *** Info: Generating /etc/ssh_host_ecdsa_key
        *** Info: Creating default /etc/ssh_config file
        *** Info: Creating default /etc/sshd_config file
        *** Info: Privilege separation is set to yes by default since OpenSSH 3.3.
        *** Info: However, this requires a non-privileged account called 'sshd'.
        *** Info: For more info on privilege separation read /usr/share/doc/openssh/README.privsep.
        *** Query: Should privilege separation be used? (yes/no) yes
        *** Info: Note that creating a new user requires that the current account have
        *** Info: Administrator privileges. Should this script attempt to create a
        *** Query: new local account 'sshd'? (yes/no) yes
        *** Info: Updating /etc/sshd_config file
        *** Query: Do you want to install sshd as a service?
        *** Query: (Say "no" if it is already installed as a service) (yes/no) yes
        *** Query: Enter the value of CYGWIN for the daemon: []
        *** Info: On Windows Server 2003, Windows Vista, and above, the
        *** Info: SYSTEM account cannot setuid to other users -- a capability
        *** Info: sshd requires. You need to have or to create a privileged
        *** Info: account. This script will help you do so.
        *** Info: You appear to be running Windows XP 64bit, Windows 2003 Server,
        *** Info: or later. On these systems, it's not possible to use the LocalSystem
        *** Info: account for services that can change the user id without an
        *** Info: explicit password (such as passwordless logins [e.g. public key
        *** Info: authentication] via sshd).
        *** Info: If you want to enable that functionality, it's required to create
        *** Info: a new account with special privileges (unless a similar account
        *** Info: already exists). This account is then used to run these special
        *** Info: servers.
        *** Info: Note that creating a new user requires that the current account
        *** Info: have Administrator privileges itself.
        *** Info: No privileged account could be found.
        *** Info: This script plans to use 'cyg_server'.
        *** Info: 'cyg_server' will only be used by registered services.
        *** Query: Do you want to use a different name? (yes/no) no
        *** Query: Create new privileged user account 'cyg_server'? (yes/no) yes
        *** Info: Please enter a password for new user cyg_server. Please be sure
        *** Info: that this password matches the password rules given on your system.
        *** Info: Entering no password will exit the configuration.
        *** Query: Please enter the password:
        *** Query: Reenter:
        *** Info: User 'cyg_server' has been created with password '[CENSORED]'.
        *** Info: If you change the password, please remember also to change the
        *** Info: password for the installed services which use (or will soon use)
        *** Info: the 'cyg_server' account.
        *** Info: Also keep in mind that the user 'cyg_server' needs read permissions
        *** Info: on all users' relevant files for the services running as 'cyg_server'.
        *** Info: In particular, for the sshd server all users' .ssh/authorized_keys
        *** Info: files must have appropriate permissions to allow public key
        *** Info: authentication. (Re-)running ssh-user-config for each user will set
        *** Info: these permissions correctly. [Similar restrictions apply, for
        *** Info: instance, for .rhosts files if the rshd server is running, etc].
        *** Info: The sshd service has been installed under the 'cyg_server'
        *** Info: account. To start the service now, call `net start sshd' or
        *** Info: `cygrunsrv -S sshd'. Otherwise, it will start automatically
        *** Info: after the next reboot.
        *** Info: Host configuration finished. Have fun!

        AdminUser@ThisComputer ~
        $ net start sshd
        The CYGWIN sshd service is starting.
        The CYGWIN sshd service could not be started.
        The service did not report an error.
        More help is available by typing NET HELPMSG 3534.

    Note that on the line "*** Query: Enter the value of CYGWIN for the daemon: []" I haven't entered anything. Tutorials often say to use ntsec or ntsec tty here, but those options are removed from the latest version of OpenSSH. I've tried using them anyway and the result is the same. The file /var/log/sshd.log is empty. If I try just running the command /usr/sbin/sshd I get the output "/var/empty must be owned by root and not group or world-writable." The /var/empty directory has the following permissions: drwxr-xr-x+ 1 cyg_server root 0 May 29 15:28 empty. Google searches on this error did not turn up any working fixes. One person seems to have solved it by using the command chown SYSTEM /var/empty but that did not fix it in my case.
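
    Hedged suggestions that come up for this exact "/var/empty must be owned by root" message on Cygwin (the account names vary per install; these are guesses to try from an elevated Cygwin shell):

        chown cyg_server /var/empty          # the account sshd was installed under, since there is no real "root"
        chmod 755 /var/empty
        chown cyg_server /etc/ssh*
        net start sshd

        # if the service still fails silently, run sshd in the foreground with
        # debugging for a more useful error message:
        /usr/sbin/sshd -D -d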

    Read the article

  • problem with nginx reverse proxy to apache2

    - by FurtiveFelon
    I am trying to set up a reverse-proxy system where nginx sits at the front handling all the requests from the internet and apache2 sits at the back handling all the dynamic content. I can set up virtual hosts in nginx based on my domains, but because apache2 is listening only on 127.0.0.1:8080 (not outside-facing), I'd still like to have virtual hosts based on the domain (or whatever can be passed from nginx to Apache) and change the dynamic content based on it. Basically, I have an nginx config in sites-available and sites-enabled that says, for location /, proxy_pass http://127.0.0.1:8080/;. So currently I don't think there is any way for Apache to detect which domain was requested on the outside. I am almost exactly following this guide to set it up: http://tumblr.intranation.com/post/766288369/using-nginx-reverse-proxy (so the code and everything else is almost the same). Anyone have any ideas? Jason
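
    A sketch of the usual way to keep name-based virtual hosts working behind the proxy (domain names are placeholders): pass the original Host header through from nginx, and have Apache define its vhosts on the loopback address and port.

        # nginx, e.g. /etc/nginx/sites-enabled/example
        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }

        # apache2, e.g. ports.conf plus the vhost file
        NameVirtualHost 127.0.0.1:8080
        <VirtualHost 127.0.0.1:8080>
            ServerName example.com
            # ... dynamic content for this domain ...
        </VirtualHost>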

    Read the article
