Search Results

Search found 10674 results on 427 pages for 'glib config'.


  • Enable FastCGI on an SSL VirtualHost

    - by ggstevens
    Debian 7.5. My VirtualHost for port 80 works fine with the IfModule block for FastCGI. However, the same block does not work in the VirtualHost for port 443. SSL/https:// was working fine until I added the following:

        <IfModule mod_fastcgi.c>
            AddHandler php5-fcgi .php
            Action php5-fcgi /php5-fcgi
            Alias /php5-fcgi /usr/lib/cgi-bin/php5-fcgi
            FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -socket /var/run/php5-fpm.sock -pass-header Authorization
        </IfModule>

    When I try to restart Apache I get an error:

        Reloading web server config: apache2 failed!

    However, if I remove the FastCgiExternalServer line, it works.
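
    A likely cause, assuming the same block appears in both VirtualHosts: mod_fastcgi only allows each FastCgiExternalServer path to be defined once per server, so repeating the line for port 443 is rejected at config parse time. One sketch of a fix is declaring it once at global scope and keeping only the handler lines per vhost (the conf file name below is an assumption):

        # global scope, e.g. /etc/apache2/conf.d/php5-fpm.conf (name is an assumption)
        <IfModule mod_fastcgi.c>
            Alias /php5-fcgi /usr/lib/cgi-bin/php5-fcgi
            FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -socket /var/run/php5-fpm.sock -pass-header Authorization
        </IfModule>

        # inside each VirtualHost (80 and 443)
        AddHandler php5-fcgi .php
        Action php5-fcgi /php5-fcgi

    Running apache2ctl configtest before restarting should confirm whether a duplicate definition was the parse error.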

    Read the article

  • How do I stop Firefox on Ubuntu from trying to load whatever is in my clipboard when I middle-click?

    - by therefromhere
    In Firefox on Ubuntu, if I middle-click anywhere on a page that's not a link, it treats whatever text is in the clipboard as a URL and tries to load it. This is annoying: if I either accidentally click the middle button or (more often) miss a link when trying to middle-click it, I'll either go to whatever URL is in my clipboard or get an alert saying:

        The URL is invalid and cannot be loaded

    Is there any way of either: a) disabling this functionality so that middle-click on a non-link does nothing (maybe an about:config setting?), or b) making the functionality more intelligent, so that it only tries to open text that looks like a URL (this seems like a job for a plugin)?
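
    For option (a), this behaviour is controlled by a single about:config preference, which can be flipped in the browser or via a user.js line:

        // user.js - disable "load clipboard contents on middle-click"
        user_pref("middlemouse.contentLoadURL", false);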

    Read the article

  • CruiseControl.NET - not able to see Project Statistics

    - by Anders Juul
    Hi all, I've reinstalled the build server and can no longer see the standard project statistics graphs. The error message shown is "Missing/Invalid statistics reports. Please check if you have enabled the Statistics Publisher, and statistics have been collected atleast once after that." To the best of my knowledge, the ccnet.config file has not been changed in this respect, and by inspection I have verified that I have a statistics / statisticsList section for the project. Furthermore, the values appear in the Artifacts\statistics.csv and Artifacts\report.xml files. My guess would then be StatisticsGraph.xslt, which I have copied fresh from the distribution to both Server\xslt and WebDashboard\xslt (why are they located in both places, by the way!?). Rebuild and check - still the same error message. Any hints on how to debug this would be appreciated!
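
    For reference, a minimal publisher section of the kind the dashboard expects - a sketch only; the statisticsList element is optional and adds custom counters:

        <project name="MyProject">
          ...
          <publishers>
            <statistics />
            <xmllogger />
          </publishers>
        </project>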

    Read the article

  • Apache2 Segfault - need help interpreting this coredump (suspect cause is memcache / php session related)

    - by WayneDV
    Three Apache2 web servers running a PHP 5.2.3 web site. We're using Memcache to cache rendered pages but also as the storage engine of the PHP sessions. At peak traffic times we're getting Apache segmentation faults on all three web servers and all HTTPD child processes segfault. My gut tells me that the increased Memcache traffic is stopping PHP sessions from being created or cleaned up and thus the processes die. Is it possible for someone to confirm that from the following?

        #0  _zend_mm_free_int (heap=0x7fb67a075820, p=0x7fb67a011538) at /usr/src/debug/php-5.3.3/Zend/zend_alloc.c:2018
        #1  0x00007fb665d02e82 in mmc_buffer_free (request=0x7fb67a011548) at /usr/src/debug/php-pecl-memcache-3.0.4/memcache-3.0.4/memcache_pool.c:50
        #2  mmc_request_free (request=0x7fb67a011548) at /usr/src/debug/php-pecl-memcache-3.0.4/memcache-3.0.4/memcache_pool.c:169
        #3  0x00007fb665d031ea in mmc_pool_free (pool=0x7fb67a00e458) at /usr/src/debug/php-pecl-memcache-3.0.4/memcache-3.0.4/memcache_pool.c:917
        #4  0x00007fb665d0a2f1 in ps_close_memcache (mod_data=0x7fb66d625440) at /usr/src/debug/php-pecl-memcache-3.0.4/memcache-3.0.4/memcache_session.c:185
        #5  0x00007fb66d1b0935 in php_session_save_current_state () at /usr/src/debug/php-5.3.3/ext/session/session.c:625
        #6  php_session_flush () at /usr/src/debug/php-5.3.3/ext/session/session.c:1517
        #7  0x00007fb66d1b0c1b in zm_deactivate_session (type=<value optimized out>, module_number=<value optimized out>) at /usr/src/debug/php-5.3.3/ext/session/session.c:2171
        #8  0x00007fb66d2a719c in module_registry_cleanup (module=<value optimized out>) at /usr/src/debug/php-5.3.3/Zend/zend_API.c:2150
        #9  0x00007fb66d2b1994 in zend_hash_reverse_apply (ht=0x7fb66d629d60, apply_func=0x7fb66d2a7180 <module_registry_cleanup>) at /usr/src/debug/php-5.3.3/Zend/zend_hash.c:755
        #10 0x00007fb66d2a5c0d in zend_deactivate_modules () at /usr/src/debug/php-5.3.3/Zend/zend.c:866
        #11 0x00007fb66d2541b5 in php_request_shutdown (dummy=<value optimized out>) at /usr/src/debug/php-5.3.3/main/main.c:1607
        #12 0x00007fb66d32e037 in php_apache_request_dtor (r=0x7fb67a229658) at /usr/src/debug/php-5.3.3/sapi/apache2handler/sapi_apache2.c:509
        #13 php_handler (r=0x7fb67a229658) at /usr/src/debug/php-5.3.3/sapi/apache2handler/sapi_apache2.c:681
        #14 0x00007fb6784166f0 in ap_run_handler (r=0x7fb67a229658) at /usr/src/debug/httpd-2.2.15/server/config.c:158
        #15 0x00007fb678419f58 in ap_invoke_handler (r=0x7fb67a229658) at /usr/src/debug/httpd-2.2.15/server/config.c:372
        #16 0x00007fb6784254f0 in ap_process_request (r=0x7fb67a229658) at /usr/src/debug/httpd-2.2.15/modules/http/http_request.c:282
        #17 0x00007fb678422418 in ap_process_http_connection (c=0x7fb67a2193a8) at /usr/src/debug/httpd-2.2.15/modules/http/http_core.c:190
        #18 0x00007fb67841e1b8 in ap_run_process_connection (c=0x7fb67a2193a8) at /usr/src/debug/httpd-2.2.15/server/connection.c:43
        #19 0x00007fb678429f4b in child_main (child_num_arg=<value optimized out>) at /usr/src/debug/httpd-2.2.15/server/mpm/prefork/prefork.c:662
        #20 0x00007fb67842a21a in make_child (s=0x7fb679cd7860, slot=153) at /usr/src/debug/httpd-2.2.15/server/mpm/prefork/prefork.c:758
        #21 0x00007fb67842aea4 in perform_idle_server_maintenance (_pconf=<value optimized out>, plog=<value optimized out>, s=<value optimized out>) at /usr/src/debug/httpd-2.2.15/server/mpm/prefork/prefork.c:893
        #22 ap_mpm_run (_pconf=<value optimized out>, plog=<value optimized out>, s=<value optimized out>) at /usr/src/debug/httpd-2.2.15/server/mpm/prefork/prefork.c:1097
        #23 0x00007fb678402890 in main (argc=1, argv=0x7fff6fecacb8) at /usr/src/debug/httpd-2.2.15/server/main.c:740

    PHP.INI Follows:

        [PHP]
        engine = On
        short_open_tag = On
        asp_tags = Off
        precision = 14
        y2k_compliance = On
        output_buffering = 4096
        zlib.output_compression = Off
        implicit_flush = Off
        unserialize_callback_func =
        serialize_precision = 100
        allow_call_time_pass_reference = Off
        safe_mode = Off
        safe_mode_gid = Off
        safe_mode_include_dir =
        safe_mode_exec_dir =
        safe_mode_allowed_env_vars = PHP_
        safe_mode_protected_env_vars = LD_LIBRARY_PATH
        disable_functions =
        disable_classes =
        expose_php = On
        max_execution_time = 30
        max_input_time = 60
        memory_limit = 128M
        error_reporting = E_ALL & ~E_DEPRECATED
        display_errors = Off
        display_startup_errors = Off
        log_errors = Off
        log_errors_max_len = 1024
        ignore_repeated_errors = Off
        ignore_repeated_source = Off
        report_memleaks = On
        track_errors = Off
        html_errors = Off
        variables_order = "GPCS"
        request_order = "GP"
        register_globals = Off
        register_long_arrays = Off
        register_argc_argv = Off
        auto_globals_jit = On
        post_max_size = 8M
        magic_quotes_gpc = Off
        magic_quotes_runtime = Off
        magic_quotes_sybase = Off
        auto_prepend_file =
        auto_append_file =
        default_mimetype = "text/html"
        doc_root =
        user_dir =
        enable_dl = Off
        file_uploads = On
        upload_max_filesize = 2M
        allow_url_fopen = On
        allow_url_include = Off
        default_socket_timeout = 60

        [Date]
        [filter]
        [iconv]
        [intl]
        [sqlite]
        [sqlite3]
        [Pcre]
        [Pdo]
        [Phar]
        [Syslog]
        define_syslog_variables = Off

        [mail function]
        SMTP = localhost
        smtp_port = 25
        sendmail_path = /usr/sbin/sendmail -t -i
        mail.add_x_header = On

        [SQL]
        sql.safe_mode = Off

        [ODBC]
        odbc.allow_persistent = On
        odbc.check_persistent = On
        odbc.max_persistent = -1
        odbc.max_links = -1
        odbc.defaultlrl = 4096
        odbc.defaultbinmode = 1

        [MySQL]
        mysql.allow_persistent = On
        mysql.max_persistent = -1
        mysql.max_links = -1
        mysql.default_port =
        mysql.default_socket =
        mysql.default_host =
        mysql.default_user =
        mysql.default_password =
        mysql.connect_timeout = 60
        mysql.trace_mode = Off

        [MySQLi]
        mysqli.max_links = -1
        mysqli.default_port = 3306
        mysqli.default_socket =
        mysqli.default_host =
        mysqli.default_user =
        mysqli.default_pw =
        mysqli.reconnect = Off

        [PostgresSQL]
        pgsql.allow_persistent = On
        pgsql.auto_reset_persistent = Off
        pgsql.max_persistent = -1
        pgsql.max_links = -1
        pgsql.ignore_notice = 0
        pgsql.log_notice = 0

        [Sybase-CT]
        sybct.allow_persistent = On
        sybct.max_persistent = -1
        sybct.max_links = -1
        sybct.min_server_severity = 10
        sybct.min_client_severity = 10

        [bcmath]
        bcmath.scale = 0

        [browscap]

        [Session]
        session.save_handler = files
        session.save_path = "/var/lib/php/session"
        session.use_cookies = 1
        session.use_only_cookies = 1
        session.name = PHPSESSID
        session.auto_start = 1
        session.cookie_lifetime = 0
        session.cookie_path = /
        session.cookie_domain =
        session.cookie_httponly =
        session.serialize_handler = php
        session.gc_probability = 1
        session.gc_divisor = 1000
        session.gc_maxlifetime = 1440
        session.bug_compat_42 = Off
        session.bug_compat_warn = Off
        session.referer_check =
        session.entropy_length = 0
        session.entropy_file =
        session.cache_limiter = nocache
        session.cache_expire = 180
        session.use_trans_sid = 0
        session.hash_function = 0
        session.hash_bits_per_character = 5
        url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=fakeentry"

        [MSSQL]
        mssql.allow_persistent = On
        mssql.max_persistent = -1
        mssql.max_links = -1
        mssql.min_error_severity = 10
        mssql.min_message_severity = 10
        mssql.compatability_mode = Off
        mssql.secure_connection = Off

        [Assertion]
        [COM]
        [mbstring]
        [gd]
        [exif]
        [Tidy]
        tidy.clean_output = Off

        [soap]
        soap.wsdl_cache_enabled=1
        soap.wsdl_cache_dir="/tmp"
        soap.wsdl_cache_ttl=86400

    /etc/php.d/memcached.ini:

        session.save_path="tcp://memcache1:11211?persistent=1&weight=1&timeout=3&retry_interval=15"
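
    One thing worth double-checking (an observation from the quoted files, not a confirmed diagnosis): the php.ini above still has session.save_handler = files, while the memcached.ini fragment only overrides session.save_path. If memcache sessions are intended, a consistent override would be a sketch like:

        ; /etc/php.d/memcached.ini - hypothetical consistent session setup
        extension=memcache.so
        session.save_handler = memcache
        session.save_path = "tcp://memcache1:11211?persistent=1&weight=1&timeout=3&retry_interval=15"

    The backtrace itself (a crash in mmc_pool_free via ps_close_memcache during request shutdown) points at the pecl memcache 3.0.4 session handler, so upgrading that extension is another avenue to test.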

    Read the article

  • How to symlink folders and exclude certain files

    - by Jarrod White
    Hey guys, I'm not a server guru (unfortunately) but have a decent knowledge of Linux and BSD. I'm trying to symlink multiple instances of HLDS (game server) but need to exclude certain folders and config files to achieve this properly. I need to do it this way because HLDS loads many mods automatically, and adding an exception to disable the mods doesn't work for all of them. So basically I want:

        /home/user/hlds-install   (the base install)
        /home/user/server1
        /home/user/server2
        ...

    and then be able to manually put any configs/mods I've excluded into the server directories, so that each server can be configured individually. Can anyone tell me how to do this - perhaps some sort of bash script, so that I can just change the targets and run it each time I want to create a new one? I have quite a number to make, so doing the whole thing manually for each one definitely isn't an option, and I'm all for working smarter, not harder! Thanks :)
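
    A minimal sketch of the kind of script that could do this, assuming a GNU userland (cp -rs builds a "symlink farm" mirroring a directory tree); the EXCLUDE paths are placeholders to adjust per mod:

        #!/bin/bash
        # make-server.sh NAME - sketch only; EXCLUDE paths are assumptions
        BASE=/home/user/hlds-install
        TARGET=/home/user/$1
        EXCLUDE="cfg addons"                      # per-server dirs that must be real copies

        mkdir -p "$TARGET"
        cp -rs "$BASE/." "$TARGET/"               # symlink every file in the tree
        for path in $EXCLUDE; do
            rm -rf "${TARGET:?}/$path"            # drop the symlinked version
            cp -r "$BASE/$path" "$TARGET/$path"   # real, editable copy
        done

    Usage would then be ./make-server.sh server1, repeated once per instance.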

    Read the article

  • PHP: Extensionless URLs in IIS7 (Windows)? (for WordPress)

    - by smithym
    Hi there, I have recently installed WordPress, but I would like to configure extensionless URLs. I am using IIS7, but on a shared server. I presume I can add something to the web.config file? I'm a little bit confused: in IIS7 and ASP.NET MVC this is done via code, but in PHP I don't think it is, so the only alternative is a rewrite module - and I can't use one, as I am on a shared server and can't install ISAPI stuff. So I was wondering if there is a way to do the mapping, i.e. when going to /testme it would actually load testme.php. Any advice really appreciated. Thanks
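
    Worth noting that Microsoft's URL Rewrite module for IIS7 is a native module configured entirely from web.config - not ISAPI - so many shared hosts already have it enabled. If yours does, the standard WordPress rule is a sketch like:

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="WordPress" patternSyntax="Wildcard">
                  <match url="*" />
                  <conditions>
                    <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                    <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                  </conditions>
                  <action type="Rewrite" url="index.php" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>

    If the host doesn't offer URL Rewrite, WordPress's PATHINFO-style permalinks (/index.php/post-name) work without any rewriting at all.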

    Read the article

  • Nginx + Haproxy + Thin + Rails - 503 Service Unavailable -

    - by Luca G. Soave
    I don't know how to troubleshoot this. I get a "503 Service Unavailable" HTTP error for all nginx upstream proxy passes to the haproxy listeners fast_thin and slow_thin (server 127.0.0.1:3100 and server 127.0.0.1:3200), which load-balance across 6 Thin servers (127.0.0.1:3000 .. 3005). Static files like /blog are currently fine. The chain is: nginx on port 80 - haproxy on 3100 and 3200 - Thin on 3000 .. 3005 - and then Rails. Here is /etc/nginx/nginx.conf:

        user nginx;
        worker_processes 2;
        pid /var/run/nginx.pid;
        events {
            worker_connections 1024;
        }
        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
            sendfile on;
            tcp_nopush on;
            keepalive_timeout 65;
            tcp_nodelay on;
            include /etc/nginx/conf.d/*.conf;
        }

    then /etc/nginx/conf.d/default.conf:

        upstream fast_thin { server 127.0.0.1:3100; }
        upstream slow_thin { server 127.0.0.1:3200; }

        server {
            listen 80;
            server_name www.gitwatcher.com;
            rewrite ^/(.*) http://gitwatcher.com/$1 permanent;
        }
        server {
            listen 80;
            server_name gitwatcher.com;
            access_log /var/www/gitwatcher/log/access.log;
            error_log /var/www/gitwatcher/log/error.log;
            root /var/www/gitwatcher/public;
            # index index.html;
            location /about { proxy_pass http://fast_thin; break; }
            location /trends { proxy_pass http://slow_thin; break; }
            location /categories { proxy_pass http://slow_thin; break; }
            location /signout { proxy_pass http://slow_thin; break; }
            location /auth/github { proxy_pass http://slow_thin; break; }
            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                if (-f $request_filename/index.html) {
                    rewrite (.*) $1/index.html break;
                }
                if (-f $request_filename.html) {
                    rewrite (.*) $1.html break;
                }
                if (!-f $request_filename) {
                    proxy_pass http://slow_thin;
                    break;
                }
            }
        }

    then the haproxy config file /etc/haproxy/haproxy.cfg:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            #log loghost local0 info
            maxconn 4096
            #chroot /usr/share/haproxy
            user haproxy
            group haproxy
            daemon
            #debug
            #quiet
            nbproc 1                # number of processing cores

        defaults
            log global
            retries 3
            maxconn 2000
            contimeout 5000
            mode http
            clitimeout 60000        # maximum inactivity time on the client side
            srvtimeout 30000        # maximum inactivity time on the server side
            timeout connect 4000    # maximum time to wait for a connection attempt to a server to succeed
            option httplog
            option dontlognull
            option redispatch
            option httpclose        # disable keepalive (HAProxy does not yet support the HTTP keep-alive mode)
            option abortonclose     # enable early dropping of aborted requests from pending queue
            option httpchk          # enable HTTP protocol to check on servers health
            option forwardfor       # enable insert of X-Forwarded-For headers
            balance roundrobin      # each server is used in turns, according to assigned weight
            stats enable            # enable web-stats at /haproxy?stats
            stats auth haproxy:pr0xystats   # force HTTP Auth to view stats
            stats refresh 5s        # refresh rate of stats page

        listen rails_proxy 127.0.0.1:3100
            # - equal weights on all servers
            # - maxconn will queue requests at HAProxy if limit is reached
            # - minconn dynamically scales the connection concurrency (bound by maxconn) depending on size of HAProxy queue
            # - check health every 20000 microseconds
            server web1 127.0.0.1:3000 weight 1 minconn 3 maxconn 6 check inter 20000
            server web1 127.0.0.1:3001 weight 1 minconn 3 maxconn 6 check inter 20000
            server web1 127.0.0.1:3002 weight 1 minconn 3 maxconn 6 check inter 20000

        listen slow_proxy 127.0.0.1:3200
            # cluster for slow requests, lower the queues, check less frequently
            server slow1 127.0.0.1:3003 weight 1 minconn 1 maxconn 3 check inter 40000
            server slow2 127.0.0.1:3004 weight 1 minconn 1 maxconn 3 check inter 40000
            server slow3 127.0.0.1:3005 weight 1 minconn 1 maxconn 3 check inter 40000

    and the Thin config file /etc/thin/gitwatcher.yml:

        ---
        chdir: /var/www/gitwatcher
        environment: production
        address: 0.0.0.0
        port: 3000
        timeout: 30
        log: log/thin.log
        pid: tmp/pids/thin.pid
        max_conns: 1024
        max_persistent_conns: 100
        require: []
        wait: 30
        servers: 6
        daemonize: true

    If I look at open listening ports, I get the following:

        root@fullness:/var/www/gitwatcher# lsof | grep TCP | egrep "nginx|haproxy|thin"
        nginx     834    root  8u IPv4   921 0t0 TCP *:http (LISTEN)
        nginx     835   nginx  8u IPv4   921 0t0 TCP *:http (LISTEN)
        nginx     837   nginx  8u IPv4   921 0t0 TCP *:http (LISTEN)
        haproxy  1908 haproxy  4u IPv4 11699 0t0 TCP localhost:3100 (LISTEN)
        haproxy  1908 haproxy  6u IPv4 11701 0t0 TCP localhost:3200 (LISTEN)

    and iptables -L gives me:

        Chain INPUT (policy DROP)
        target prot opt source   destination
        ACCEPT all  --  anywhere anywhere state RELATED,ESTABLISHED
        ACCEPT all  --  anywhere anywhere state RELATED,ESTABLISHED
        ACCEPT tcp  --  anywhere anywhere tcp dpt:22222
        ACCEPT tcp  --  anywhere anywhere tcp dpt:http
        ACCEPT tcp  --  anywhere anywhere tcp dpt:https
        ACCEPT all  --  anywhere anywhere
        DROP   all  --  anywhere anywhere

        Chain FORWARD (policy ACCEPT)
        target prot opt source destination

        Chain OUTPUT (policy ACCEPT)
        target prot opt source destination
        ACCEPT all  --  anywhere anywhere

    Any help?
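
    A detail that stands out in the lsof output above: nginx and haproxy are listening, but no thin process appears on ports 3000-3005, which by itself would explain haproxy returning 503 (no healthy backend to send to). A quick check along these lines might confirm it (paths taken from the question):

        # are the Thin workers actually up?
        thin start -C /etc/thin/gitwatcher.yml
        lsof -iTCP -sTCP:LISTEN -P | egrep "thin|:300"
        tail -n 50 /var/www/gitwatcher/log/thin.log   # startup errors end up here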

    Read the article

  • NFS-Root not working when booting over PXE

    - by Randy
    I am desperately trying to get a diskless client running over PXE boot, using an NFS share as the root file system. I did this a few years ago, but for some reason I have been stuck on this for days. The TFTP server itself is running fine, and booting a net installer also works. The kernel and initrd are loaded too, but the boot process stops with a kernel panic (screenshot). I'm using the standard Squeeze i386 kernel and I have prepared the initrd with this config:

        MODULES=most
        BUSYBOX=y
        KEYMAP=n
        COMPRESS=gzip
        BOOT=nfs
        DEVICE=
        NFSROOT=auto

    I also tried MODULES=netboot with the same outcome. My PXE configuration looks like this:

        LABEL linux
        KERNEL diskless/debian-default/vmlinuz-2.6.32-5-686
        APPEND root=/dev/nfs initrd=diskless/debian-default/vmlinuz-2.6.32-5-686 nfsroot=192.168.140.2:/storage/nfs-boot-images/default-squeeze ip=dhcp rw

    Furthermore, I have captured the client's network traffic via tcpdump and learned that the client isn't even trying to connect to the NFS share. Does anybody have an idea what is going wrong here?
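
    One thing that jumps out of the PXE stanza above, and would match the symptom exactly: initrd= points at the kernel image (vmlinuz-2.6.32-5-686) rather than an initrd, so the NFS-mount logic in the initramfs never runs. Assuming the Debian default naming for the generated initrd, the corrected stanza would be a sketch like:

        LABEL linux
        KERNEL diskless/debian-default/vmlinuz-2.6.32-5-686
        APPEND root=/dev/nfs initrd=diskless/debian-default/initrd.img-2.6.32-5-686 nfsroot=192.168.140.2:/storage/nfs-boot-images/default-squeeze ip=dhcp rw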

    Read the article

  • pam_ecryptfs: Error getting passwd (ProFTPD)

    - by Olirav
    proftpd: pam_ecryptfs: Error getting passwd info for user [USERNAME]

    I am getting this error in the syslog nearly every time any user connects via FTP; the user is able to connect and the session seems to continue without a hitch. proftpd.log shows no error - this warning only shows up in the syslog. My VPS is running Ubuntu 11.10 and ProFTPD 1.3.4rc2 from the Ubuntu repo, and I have made only a few changes to the config (no weird auth methods). This has been going on for quite a while, but I can't quite find the cause. Anyone got any ideas?

    EDIT: I've been looking around, but all I can find for this error is the source code of the module itself; it appears to be an error in ecryptfs-utils that only proftpd is triggering.
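
    A hedged workaround, assuming encrypted home directories aren't actually in use on this VPS: the message comes from the pam_ecryptfs module in the PAM stack that ProFTPD walks on login, so commenting its lines out of the relevant /etc/pam.d files should silence it, e.g.:

        # /etc/pam.d/common-auth (sketch - back the file up before editing)
        # auth    optional    pam_ecryptfs.so unwrap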

    Read the article

  • ssh agent forwarding with many identities

    - by Eddified
    I have set up SSH agent forwarding, and I know it works. I also have many keys loaded in the agent. The problem is that there are so many keys that I get:

        Received disconnect from XXX.XXX.XXX.XXX: 2: Too many authentication failures for bob

    The way around this is to use IdentitiesOnly=yes so that ssh will only send the identity you want for the specified host. I've also got this working, without agent forwarding. Now I'm trying to combine the two features: I want to use agent forwarding, but also be able to specify which identity to use when connecting. The problem is, I can't figure out how to do this. So: I want to connect from box A through box B to box C. Box A has all of the identity files and the ssh agent running. I want to edit box A's or box B's ssh config file(s) to use a specific identity that exists in box A's agent (which is being forwarded).
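
    A pattern I believe works here (a sketch; host aliases and key names are placeholders): IdentitiesOnly combines with a forwarded agent as long as the matching public key file is present on the hop, because ssh only needs the .pub to select which agent identity to offer for signing. So on box B you would copy just the public key and write:

        # box B: ~/.ssh/config
        Host boxC
            IdentitiesOnly yes
            IdentityFile ~/.ssh/id_for_boxC.pub   # public half only; private key stays in box A's agent

    On box A, the Host boxB entry would carry ForwardAgent yes plus its own IdentitiesOnly/IdentityFile pair.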

    Read the article

  • Xen 4.0.1 on Ubuntu 10.10 not booting

    - by Disco
    I'm trying to get Xen 4.0.1 to run as dom0 on a fresh/clean install of 10.10 desktop (x64). I followed the step-by-step tutorial at http://wiki.xensource.com/xenwiki/Xen4.0. I have the pvops kernel in /boot, and also included ext4 fs support by recompiling the kernel with:

        make -j6 linux-2.6-pvops-config CONFIGMODE=menuconfig
        make -j6 linux-2.6-pvops-build
        make -j6 linux-2.6-pvops-install

    Here's my grub entry:

        menuentry 'Xen4' --class ubuntu --class gnu-linux --class gnu --class os {
            recordfail
            insmod part_msdos
            insmod ext2
            insmod ext3
            set root='(hd0,msdos1)'
            search --no-floppy --fs-uuid --set 2bf3177a-92fd-4196-901a-da8d810b04b4
            multiboot /xen-4.0.gz dom0_mem=1024M loglvl=all guest_loglvl=all
            module /vmlinuz-2.6.32.27 root=UUID=2bf3177a-92fd-4196-901a-da8d810b04b4 ro
            module /initrd.img-2.6.32.27
        }

    blkid /dev/sda1 gives:

        /dev/sda1: UUID="2bf3177a-92fd-4196-901a-da8d810b04b4" TYPE="ext3"

    My partition scheme is /boot (ext3) and / (ext4). Whatever option I've tried, I end up with:

        mounting none on /dev failed: no such file or directory

    and a message complaining that it cannot find the device with UUID ... It's tearing my hair out; if someone has a clue ...
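
    Two things worth checking, based only on what's quoted above. First, root=UUID=2bf3177a-... on the module line is the UUID that blkid reports for /dev/sda1 - the ext3 /boot partition - whereas root= should name the ext4 / partition (running blkid with no arguments lists every partition's UUID). Second, "mounting none on /dev failed" is a classic symptom of a self-built kernel missing devtmpfs, which Ubuntu 10.10's userspace expects; a hedged kernel-config sketch:

        # re-run the pvops menuconfig and make sure these are set
        CONFIG_DEVTMPFS=y
        CONFIG_DEVTMPFS_MOUNT=y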

    Read the article

  • 403 Forbidden

    - by demas
    Here is my nginx config:

        user pass users;
        worker_processes 1;
        events {
            worker_connections 1024;
        }
        http {
            passenger_root /usr/lib64/ruby/gems/1.8/gems/passenger-3.0.7;
            passenger_ruby /usr/bin/ruby;
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            keepalive_timeout 65;
            server {
                listen 80;
                server_name some.another.ru;
                root /www/public/redmine;
                passenger_enabled on;
                rails_env development;
            }
        }

    Here is the nginx log:

        2011/06/02 12:53:57 [error] 45986#0: *1 directory index of "/www/public/redmine/" is forbidden, client: **.*.**.***, server: some.another.ru, request: "GET / HTTP/1.1", host: "some.another.ru"
        2011/06/02 12:53:59 [error] 45986#0: *1 open() "/www/public/redmine/favicon.ico" failed (2: No such file or directory), client: **.*.**.***, server: some.another.ru, request: "GET /favicon.ico HTTP/1.1", host: "some.another.ru"

    What is the reason for this error, and how can I fix it?
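
    The "directory index ... is forbidden" line means nginx is serving the path itself instead of handing the request to Passenger, which typically happens when root doesn't point at the Rails app's public/ directory. Assuming Redmine is installed at /www/public/redmine, a sketch of the fix:

        server {
            listen 80;
            server_name some.another.ru;
            root /www/public/redmine/public;   # must be the app's public/ subdirectory
            passenger_enabled on;
            rails_env development;
        }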

    Read the article

  • Symfony2 on Windows with Apache, PHP and MySQL - app_dev.php will not load

    - by Lewis Bassett
    I am trying to get a Symfony2 standard distribution to work on my Windows 7 laptop. I have installed Apache 2.2.22, PHP 5.3.10 and MySQL 5.5.22. A demo PHP script (phpinfo() and a database call) works fine. I can get the start page (http://localhost/Symfony/web/config.php) to display, but I cannot get http://localhost/Symfony/web/app_dev.php/ to execute. The error returned is Error 101 (net::ERR_CONNECTION_RESET): The connection was reset. I can get it to work if I install XAMPP instead, but I don't want to use XAMPP - I want to be able to install and configure the components separately. Why isn't this working? Are there some Apache settings that I am missing?

    Read the article

  • SQL Server 2008 data directories on SSD

    - by Kuroro
    I am going to install a new SQL Server 2008 instance on my development/testing machine. The machine has one 7200rpm 500GB SATA disk (C: - OS) and one Intel X25-G2 80GB SSD (D:). Further machine config: CPU i7 860, RAM 8GB. Microsoft says I have the option to place the following directories on different disks, so I plan to place the user databases and TempDB on the SSD and the rest on the traditional disk. Is this a good choice for gaining a performance boost from the fast SSD?

        Data root directory:      C:\Program Files\Microsoft SQL Server
        User database directory:  D:\Data
        User log directory:       C:\Logs
        Temp DB directory:        D:\TempDB
        Temp log directory:       C:\TempDB
        Backup directory:         C:\Backups

    Read the article

  • How do I get nginx to issue 301 requests to HTTPS location, when SSL handled by a load-balancer?

    - by growse
    I've noticed functionality enabled in nginx by default whereby a URL request without a trailing slash, for a directory which exists in the filesystem, automatically has a slash added through a 301 redirect. E.g. if the directory css exists within my root, then requesting http://example.com/css will result in a 301 to http://example.com/css/. However, I have another site where the SSL is offloaded by a load balancer. In this case, when I request https://example.com/css, nginx issues a 301 redirect to http://example.com/css/, despite the fact that the HTTP_X_FORWARDED_PROTO header is set to https by the load balancer. Is this an nginx bug? Or a config setting I've missed somewhere?
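
    As far as I know this isn't a bug as such: nginx builds that redirect from the scheme of the connection it actually terminated, and it does not consult X-Forwarded-Proto on its own. On nginx 1.11.8 and later, one low-effort workaround is to make the redirect relative, so the browser keeps whatever scheme it used:

        server {
            listen 80;
            absolute_redirect off;   # Location: /css/ instead of http://example.com/css/
            ...
        }

    On older versions, the usual routes are handling the trailing-slash redirect at the load balancer or terminating the backend connection over HTTPS as well.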

    Read the article

  • Apache - Include conf Files Relative to ServerConfigFile (-f arg)

    - by Synetech inc.
    Hi, I want to use the -f command-line option for the Apache server so that I can store the conf files in a separate place (a data directory) from the server binaries. The problem is that I use the Include directive to separate and organize the configurations, but when I use a command like Include "addons/SVN.conf", it fails because Apache looks for addons/SVN.conf relative to the ServerRoot directory instead of the directory containing the config file given with -f. I can work around this by using absolute paths (e.g. Include "e:\foo\bar\baz\Apache\conf\addons\svn.conf"), but I don't like that, since it means I would have to change each and every Include directive if I move the conf folder, as opposed to simply changing the -f option. Does anyone know of a way to get the Include directive to work relative to the conf file that Apache is passed? I tried Include "./addons/SVN.conf", but that too was relative to the ServerRoot. This forced relative-to-ServerRoot Include behavior kind of defeats the whole purpose of specifying an alternate config file to the one in ServerRoot/conf. Thanks.
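
    As far as I know there is no "relative to this file" mode for Include; relative paths always resolve against ServerRoot. What does work is moving ServerRoot itself, either with a ServerRoot directive at the top of the alternate config file or with the -d flag alongside -f, so the includes and the -f path travel together. A sketch (paths are placeholders; note that LoadModule and other relative paths then also resolve against the new ServerRoot):

        httpd -d "E:\data\Apache\conf" -f "E:\data\Apache\conf\httpd.conf"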

    Read the article

  • Ubuntu 12.04 server and tftp access violation issue on put command

    - by SMYERS
    I installed tftp as per this document: http://icesquare.com/wordpress/solvedtftp-error-code-2-access-violation/ - I followed it to the letter three times, and every time I put a file I get:

        root@CiscoCFG:~# tftp localhost
        tftp> put test
        Error code 2: Access violation
        tftp> put test
        Error code 2: Access violation

    If I touch the file name and chmod 777 the file, then the put works perfectly fine. My config is as follows:

        service tftp
        {
            protocol    = udp
            port        = 69
            socket_type = dgram
            wait        = yes
            user        = nobody
            server      = /usr/sbin/in.tftpd
            server_args = -s /svr/tftp
            disable     = no
        }

    The directory /svr/tftp has permissions 777:

        drwxrwxrwx 3 nobody nobody 4096 Nov 14 10:32 svr

    Anything connecting should have full permissions to read or write in that directory. I see nothing in the logs; I'm really stumped on this. If the file is already in the directory I can read it all day long - I just can't make new files. I can't put them, but I can do gets, and I can only put to an existing file with permissions at 777. Thanks
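
    The symptom (gets work, puts only overwrite existing world-writable files) matches tftpd-hpa's default of refusing to create files that don't already exist. If this in.tftpd is from tftpd-hpa, adding -c to the server arguments permits file creation - a sketch of the one changed line:

        server_args = -c -s /svr/tftp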

    Read the article

  • PHP-FPM for nginx on debian

    - by Jelko
    What is the preferred/recommended way of installing php-fpm on Debian for use with nginx? I read about a "php5-fpm" package everywhere, but it's not available in the official Debian repos any more. The PHP-FPM website (http://php-fpm.org/download/) says that FPM is now included with the PHP core. Is it enough to install "php5-common" then? Where are the config files, though? Other people recommend installing the current versions of PHP and php-fpm from dotdeb.org. The versions provided there are generally more up to date. But is it secure? Is this a good repo to use in a production environment? I would appreciate any advice.
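
    For what it's worth, the dotdeb route at the time looked like the sketch below (repository line and key URL as documented by dotdeb; verify against their site before trusting any third-party repo in production):

        echo "deb http://packages.dotdeb.org stable all" >> /etc/apt/sources.list
        wget -qO - http://www.dotdeb.org/dotdeb.gpg | apt-key add -
        apt-get update && apt-get install php5-fpm

    The package then drops its config under /etc/php5/fpm/ (php-fpm.conf plus a pool.d/ directory).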

    Read the article

  • How can I make a transparent proxy on more than one port?

    - by ermya
    I want to build a transparent proxy with Linux (CentOS): all incoming connections on ports 1000-2000 on eth0 should be forwarded to eth1 on ports 1000-2000, transparently. I have two servers: 1) Linux (the proxy server), 2) Windows. I want to protect my Windows server with the Linux server's firewall, which is why the proxy must be transparent. The Linux server has two interfaces, one on the public network and another on the private network connected to the Windows server, so all incoming connections must first hit the Linux server (eth0, public network) and, after checking, be forwarded to the Windows server on the private network (via eth1). I could use Squid for a transparent proxy, but I don't know how to configure it for multiple ports, since I want to listen on more than 1000 ports, for example from port 1000 to 2000. Does anyone know how I can do this?
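
    Squid is HTTP-specific, so for an arbitrary TCP port range the usual tool is plain netfilter forwarding rather than a proxy daemon. A sketch - the Windows server's private address 192.168.1.2 is an assumption:

        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 1000:2000 \
                 -j DNAT --to-destination 192.168.1.2
        iptables -A FORWARD -i eth0 -o eth1 -p tcp --dport 1000:2000 \
                 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
        iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE

    Filtering rules on the FORWARD chain are then the place to "protect" the Windows box.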

    Read the article

  • How to connect XP mode VM to the host so as to simulate LAN Activities?

    - by rsjethani
    First of all, my PC config - Host: Windows 7 (32-bit); Guest: Windows XP SP3 as XP Mode, with Integration Features disabled (I don't think that has anything to do with networking). Network connections on the host:

        1. Local Area Connection (Realtek...) - unplugged (as no other PC is connected)
        2. Nokia 2730 classic USB Modem - I use this to connect to the internet

    Now, how can I connect the guest to the host as if they were on a LAN? Please give specific steps, as I am using XP Mode / a virtual machine for the first time.

    Read the article

  • Remove Apache from Ubuntu

    - by Keyo
    I want to remove Apache as if it had never been installed - no config files left behind. I intend to reinstall apache2 fresh. I have tried various combinations of apt-get options without success:

        apt-get remove apache2
        apt-get remove --purge apache2
        apt-get purge apache2
        apt-get autoremove apache2

    None of these removes Apache completely; the /etc/apache2 directory still exists. So I deleted it. Now when I install Apache, the folder is never created. Running Ubuntu Server 10.10.
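
    Two details that may explain this, with a hedged sketch: /etc/apache2 largely belongs to apache2.2-common rather than the apache2 metapackage, so purging only apache2 leaves it behind; and once a conffile has been deleted by hand, dpkg records that as deliberate and won't recreate it on reinstall. Something like:

        sudo apt-get purge apache2 apache2-utils apache2.2-common apache2.2-bin
        sudo apt-get autoremove --purge
        # reinstall, telling dpkg to restore missing conffiles
        sudo apt-get install --reinstall -o Dpkg::Options::="--force-confmiss" apache2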

    Read the article

  • F5 BIG-IP outbound HTTP issues

    - by mbuk2k
    We've upgraded from 9.x to 10.2 on our F5 BIG-IP 3400 and everything went over fine apart from one thing: we're unable to establish any outbound HTTP (80) connections from any servers that are assigned to a virtual server. This worked before the upgrade and is required for certain calls our servers need to make. Interestingly, HTTPS (443) connections work fine; it's literally just anything outbound over port 80 that seems to fail. Does anyone know if anything has changed between 9.4 and 10.2 that would mean additional config needs to be made to allow outbound HTTP connections? Any advice would be appreciated, thank you.

    Read the article

  • phpize: m4 error just in one extension

    - by Francois
    On Linux, I installed PHP 5.3.8 from source. Using phpize to build an extension works fine, but not for one specific extension (mysqlnd):

        # cd /opt/php/5.3.8/ext/pdo && /opt/php/5.3.8/bin/phpize
        ... this runs ok
        # cd /opt/php/5.3.8/ext/mysqlnd && /opt/php/5.3.8/bin/phpize
        Cannot find config.m4.
        Make sure that you run '/opt/php/5.3.8/bin/phpize' in the top level source directory of the module

    As you can see, the error cannot be that I am not in the top-level source directory, since I am. I also tried calling phpize from the ext folder - that did not work either. For info, I have m4 installed. Any idea? Thanks :)
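
    The error is misleading but expected, as far as I can tell: mysqlnd ships no config.m4 because it cannot be built as a shared extension with phpize at all - it has to be compiled into PHP itself. The supported route is reconfiguring the source build, roughly:

        cd /path/to/php-5.3.8-source    # the PHP source tree, not the install prefix
        ./configure --enable-mysqlnd --with-mysqli=mysqlnd --with-pdo-mysql=mysqlnd
        make && make install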

    Read the article

  • OpenVPN - Ubuntu 10.04 - Client Can't Connect to Server - Linux Route Add Command Failed

    - by nicorellius
    I suppose this could be asked on Server Fault as well, but it is specific to the client, so I thought I'd start here. I already have keys for an OpenVPN server in place, and I have used them to connect before, but from a Windows XP machine. I started by building the client.conf file so that I could run:

        sudo openvpn --config client.conf

    It seems correct, but I still can't connect, and I get these errors and lines of output:

        Mon May 31 14:34:57 2010 ERROR: Linux route add command failed: external program exited with error status: 7
        Mon May 31 14:34:57 2010 /sbin/route add -net 10.8.0.1 netmask 255.255.255.255 gw 10.8.0.17
        SIOCADDRT: File exists
        Mon May 31 14:34:57 2010 ERROR: Linux route add command failed: external program exited with error status: 7
        Mon May 31 14:34:57 2010 Initialization Sequence Completed

    I searched the net for forums and ideas and tried some file moving and renaming, but still ended up in the same place.
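
    "SIOCADDRT: File exists" means the kernel already has a matching route - often left over from a previous half-up tunnel, or overlapping with an existing interface route - and note that "Initialization Sequence Completed" says the tunnel itself did come up despite the errors. A quick hedged check (addresses taken from the log above):

        route -n                                            # look for an existing 10.8.0.1 entry
        sudo route del -net 10.8.0.1 netmask 255.255.255.255
        sudo openvpn --config client.conf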

    Read the article

  • Cisco Pix does not let traffic pass from outside to inside even though ACL permits

    - by Rickard
    I have tried to make my PIX 515 allow traffic from the outside interface to the inside, but despite permitting ACLs it doesn't seem to let traffic through (it is letting traffic out as it should, though). I have tried both of the following:

        access-list acl_in extended permit tcp any host 10.131.73.2 eq www

    and

        access-list acl_in extended permit ip any any

    Neither of them helps, but I can access 10.131.73.2 from any host on the inside network. This is a single host on the inside that should, every now and then, run an HTTP server for development purposes, so it doesn't need to reside on the DMZ (and as far as I know I can't place it on the DMZ either, as it's in the same subnet as the other IPs I have). Could I have missed anything? I am using PIX version 8.0(4). My current running config looks like this: http://pastebin.com/TvRFyDrF - I hope someone can help me get this working.
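
    Two things to verify against the running config (a sketch only - addresses taken from the question): the ACL must actually be bound to the outside interface, and if nat-control is enabled or other NAT rules cover the inside network, an inbound connection also needs a NAT path, even if it's just an identity static:

        static (inside,outside) 10.131.73.2 10.131.73.2 netmask 255.255.255.255
        access-group acl_in in interface outside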

    Read the article
