Search Results

Search found 11935 results on 478 pages for 'knowledge module'.

Page 294/478

  • 500 Internal Server Error with PHP application

    - by James
    I have written a PHP application using Windows and XAMPP. I've been trying to run it on Ubuntu 10.10 with Lighttpd 1.4.26. Parts of the application work fine, but whenever I try to log in, I get a 500 - Internal Server Error page. The only thing that shows up in /var/log/lighttpd/error.log is 2011-02-25 13:43:13: (mod_fastcgi.c.2582) unexpected end-of-file (perhaps the fastcgi process died): pid: 1169 socket: unix:/tmp/php.socket-0 2011-02-25 13:43:13: (mod_fastcgi.c.3367) response not received, request sent: 1596 on socket: unix:/tmp/php.socket-0 for /~denton/customer-facing-portal/index.php?, closing connection If I had any output whatsoever from PHP, this would be a lot easier to debug. Any ideas on how to get some? Here is my /etc/lighttpd/lighttpd.conf file: # Debian lighttpd configuration file # ############ Options you really have to take care of #################### ## modules to load server.modules = ( "mod_alias", "mod_compress", # "mod_rewrite", # "mod_redirect", # "mod_usertrack", # "mod_expire", # "mod_flv_streaming", # "mod_evasive", "mod_setenv" ) ## a static document-root, for virtual-hosting take look at the ## server.virtual-* options server.document-root = "/var/www/" ## where to upload files to, purged daily. server.upload-dirs = ( "/var/cache/lighttpd/uploads" ) ## where to send error-messages to server.errorlog = "/var/log/lighttpd/error.log" ## files to check for if .../ is requested index-file.names = ( "index.php", "index.html", "index.htm", "default.htm", "index.lighttpd.html" ) ## Use the "Content-Type" extended attribute to obtain mime type if possible # mimetype.use-xattr = "enable" ## # which extensions should not be handle via static-file transfer # # .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" ) ######### Options that are good to be but not neccesary to be changed ####### ## Use ipv6 only if available. 
(disabled for while, check #560837) #include_shell "/usr/share/lighttpd/use-ipv6.pl" ## bind to port (default: 80) # server.port = 81 ## bind to localhost only (default: all interfaces) ## server.bind = "localhost" ## error-handler for status 404 #server.error-handler-404 = "/error-handler.html" #server.error-handler-404 = "/error-handler.php" ## to help the rc.scripts server.pid-file = "/var/run/lighttpd.pid" ## ## Format: <errorfile-prefix><status>.html ## -> ..../status-404.html for 'File not found' #server.errorfile-prefix = "/var/www/" ## virtual directory listings dir-listing.encoding = "utf-8" server.dir-listing = "enable" ### only root can use these options # # chroot() to directory (default: no chroot() ) #server.chroot = "/" ## change uid to <uid> (default: don't change) server.username = "www-data" ## change gid to <gid> (default: don't change) server.groupname = "www-data" #### compress module compress.cache-dir = "/var/cache/lighttpd/compress/" compress.filetype = ("text/plain", "text/html", "application/x-javascript", "text/css") #### url handling modules (rewrite, redirect, access) # url.rewrite = ( "^/$" => "/server-status" ) # url.redirect = ( "^/wishlist/(.+)" => "http://www.123.org/$1" ) #### expire module # expire.url = ( "/buggy/" => "access 2 hours", "/asdhas/" => "access plus 1 seconds 2 minutes") #### external configuration files ## mimetype mapping include_shell "/usr/share/lighttpd/create-mime.assign.pl" ## load enabled configuration files, ## read /etc/lighttpd/conf-available/README first include_shell "/usr/share/lighttpd/include-conf-enabled.pl" ## Set environment variables setenv.add-environment = ( "DB_URL__DEMO" => "192.168.1.231", "DB_NAME_DEMO" => "demo", "DB_USER_DEMO" => "user", "DB_PASS_DEMO" => "password", "DB_AGENCY_DEMO" => "demo" ) Here is my /etc/php5/cgi/php.ini file (sans 1641 lines of comments): [PHP] register_long_arrays = Off short_open_tag = Off engine = On short_open_tag = Off asp_tags = Off precision = 14 y2k_compliance = On output_buffering = 4096 zlib.output_compression = Off implicit_flush = Off unserialize_callback_func = serialize_precision = 100 allow_call_time_pass_reference = Off safe_mode = Off safe_mode_gid = Off safe_mode_include_dir = safe_mode_exec_dir = safe_mode_allowed_env_vars = PHP_ safe_mode_protected_env_vars = LD_LIBRARY_PATH disable_functions = disable_classes = expose_php = On max_execution_time = 30 max_input_time = 60 memory_limit = 128M error_reporting = E_ALL & ~E_DEPRECATED & ~E_STRICT display_errors = On display_startup_errors = On log_errors = On log_errors_max_len = 1024 ignore_repeated_errors = Off ignore_repeated_source = Off report_memleaks = On track_errors = On html_errors = On variables_order = "GPCS" request_order = "GP" register_globals = Off register_long_arrays = Off register_argc_argv = Off auto_globals_jit = On post_max_size = 8M magic_quotes_gpc = Off magic_quotes_runtime = Off magic_quotes_sybase = Off auto_prepend_file = auto_append_file = default_mimetype = "text/html" doc_root = user_dir = enable_dl = Off cgi.fix_pathinfo=1 file_uploads = On upload_max_filesize = 2M max_file_uploads = 20 allow_url_fopen = On allow_url_include = Off default_socket_timeout = 60 [Date] date.timezone = "America/Chicago" [filter] [iconv] [intl] [sqlite] [sqlite3] [Pcre] [Pdo] [Pdo_mysql] pdo_mysql.cache_size = 2000 pdo_mysql.default_socket= [Phar] [Syslog] define_syslog_variables = Off [mail function] SMTP = localhost smtp_port = 25 mail.add_x_header = On [SQL] sql.safe_mode = Off [ODBC] odbc.allow_persistent = On 
odbc.check_persistent = On odbc.max_persistent = -1 odbc.max_links = -1 odbc.defaultlrl = 4096 odbc.defaultbinmode = 1 [Interbase] ibase.allow_persistent = 1 ibase.max_persistent = -1 ibase.max_links = -1 ibase.timestampformat = "%Y-%m-%d %H:%M:%S" ibase.dateformat = "%Y-%m-%d" ibase.timeformat = "%H:%M:%S" [MySQL] mysql.allow_local_infile = On mysql.allow_persistent = On mysql.cache_size = 2000 mysql.max_persistent = -1 mysql.max_links = -1 mysql.default_port = mysql.default_socket = mysql.default_host = mysql.default_user = mysql.default_password = mysql.connect_timeout = 60 mysql.trace_mode = Off [MySQLi] mysqli.max_persistent = -1 mysqli.allow_persistent = On mysqli.max_links = -1 mysqli.cache_size = 2000 mysqli.default_port = 3306 mysqli.default_socket = mysqli.default_host = mysqli.default_user = mysqli.default_pw = mysqli.reconnect = Off [mysqlnd] mysqlnd.collect_statistics = On mysqlnd.collect_memory_statistics = Off [OCI8] [PostgresSQL] pgsql.allow_persistent = On pgsql.auto_reset_persistent = Off pgsql.max_persistent = -1 pgsql.max_links = -1 pgsql.ignore_notice = 0 pgsql.log_notice = 0 [Sybase-CT] sybct.allow_persistent = On sybct.max_persistent = -1 sybct.max_links = -1 sybct.min_server_severity = 10 sybct.min_client_severity = 10 [bcmath] bcmath.scale = 0 [browscap] [Session] session.save_handler = files session.use_cookies = 1 session.use_only_cookies = 1 session.name = PHPSESSID session.auto_start = 0 session.cookie_lifetime = 0 session.cookie_path = / session.cookie_domain = session.cookie_httponly = session.serialize_handler = php session.gc_probability = 1 session.gc_divisor = 1000 session.gc_maxlifetime = 1440 session.bug_compat_42 = Off session.bug_compat_warn = Off session.referer_check = session.entropy_length = 0 session.cache_limiter = nocache session.cache_expire = 180 session.use_trans_sid = 0 session.hash_function = 0 session.hash_bits_per_character = 5 url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=fakeentry" [MSSQL] mssql.allow_persistent = On mssql.max_persistent = -1 mssql.max_links = -1 mssql.min_error_severity = 10 mssql.min_message_severity = 10 mssql.compatability_mode = Off mssql.secure_connection = Off [Assertion] [COM] [mbstring] [gd] [exif] [Tidy] tidy.clean_output = Off [soap] soap.wsdl_cache_enabled=1 soap.wsdl_cache_dir="/tmp" soap.wsdl_cache_ttl=86400 soap.wsdl_cache_limit = 5 [sysvshm] [ldap] ldap.max_links = -1 [mcrypt] [dba] Update: here is /etc/lighttpd/conf-enabled/15-fastcgi-php.conf As far as I know, it's just the default config file the Ubuntu package installed. ## FastCGI programs have the same functionality as CGI programs, ## but are considerably faster through lower interpreter startup ## time and socketed communication ## ## Documentation: /usr/share/doc/lighttpd-doc/fastcgi.txt.gz ## http://redmine.lighttpd.net/projects/lighttpd/wiki/Docs:ConfigurationOptions#mod_fastcgi-fastcgi ## Start an FastCGI server for php (needs the php5-cgi package) fastcgi.server += ( ".php" => (( "bin-path" => "/usr/bin/php-cgi", "socket" => "/tmp/php.socket", "max-procs" => 1, "idle-timeout" => 20, "bin-environment" => ( "PHP_FCGI_CHILDREN" => "4", "PHP_FCGI_MAX_REQUESTS" => "10000" ), "bin-copy-environment" => ( "PATH", "SHELL", "USER" ), "broken-scriptfilename" => "enable" )) )
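
    A debugging sketch, not from the question: with log_errors = On but no error_log path set, PHP-CGI's errors go to stderr, which lighttpd tends to swallow, so pointing error_log at a file (or running the failing script through the same CGI binary) usually surfaces the fatal error. The log path and the guess at the script's location under /home/denton are assumptions.

        # /etc/php5/cgi/php.ini: send errors somewhere visible to you
        #   log_errors = On
        #   error_log = /var/log/php_errors.log
        touch /var/log/php_errors.log && chown www-data /var/log/php_errors.log

        # or run the failing script through the same binary lighttpd spawns,
        # as the FastCGI user, and read the error straight off the terminal
        sudo -u www-data php-cgi -f /home/denton/public_html/customer-facing-portal/index.php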

    Read the article

  • Configure XAMPP to never cache pages from localhost

    - by Michael Mao
    Hello everyone: I am not a webmaster, which is exactly why I installed XAMPP 1.7.2 on Windows XP SP2 instead of manually configuring Apache, MySQL and PHP to cooperate with each other. Now I am having a problem disabling the caching of pages from localhost. Some suggested just forcing the browser not to cache, using the Firefox Web Developer toolbar or something similar, but I feel it would be better if I could configure the Apache server in XAMPP to never allow pages from localhost to be cached. I guess this is done somewhere in httpd.conf? LoadModule cache_module modules/mod_cache.so Would this module be helpful in this case? Doc here: mod_cache I am not very sure this would resolve the problem. Could anyone confirm whether this approach is feasible? I'd like to work it out myself, provided I am on the right track... Many thanks in advance
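
    A hedged sketch of the usual approach: mod_cache only adds server-side caching when it is loaded, so it is not what makes the browser reuse stale pages; instead, have Apache send no-cache headers via mod_headers. The htdocs path below is an assumption for a default XAMPP install.

        # httpd.conf (XAMPP): requires mod_headers; applies to everything under htdocs
        LoadModule headers_module modules/mod_headers.so
        <Directory "C:/xampp/htdocs">
            Header set Cache-Control "no-cache, no-store, must-revalidate"
            Header set Pragma "no-cache"
            Header set Expires "0"
        </Directory>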

    Read the article

  • MindTouch with fcgid - FastCGI Apache worker thread

    - by Stephan Kristyn
    Has anyone got DekiWiki / MindTouch running with the fcgid module? I get 504 and 500 errors all the time. mod_fcgid: can't apply process slot for /var/www/html/dekiwiki/index.php [Tue Dec 28 06:14:03 2010] [warn] (104)Connection reset by peer: mod_fcgid: read data from fastcgi server error. [Tue Dec 28 06:14:03 2010] [error] [client 92.75.107.53] Premature end of script headers: index.php I'm currently fiddling with SuExec and FastCGI wrapper directory permissions, because I also employ a chrooted SFTP jail. Sometimes the first line about the process slot does not appear now. I found a solution in German and will work through it now. http://debianforum.de/forum/viewtopic.php?f=8&t=122758&start=15
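
    For what it's worth, "can't apply process slot" usually means mod_fcgid has hit its process limits or is timing out waiting on PHP; a tuning sketch for mod_fcgid 2.3.x follows (older releases spell these MaxProcessCount, DefaultMaxClassProcessCount and IPCCommTimeout, and the values here are only examples).

        # fcgid.conf: raise the process limits and give PHP longer to answer
        FcgidMaxProcesses 100
        FcgidMaxProcessesPerClass 20
        FcgidIOTimeout 120
        FcgidIdleTimeout 300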

    Read the article

  • Differences in memory consumption between two identical D7 sites?

    - by aendrew
    I'm running Drupal on a news site that has a lot of different View blocks on the front page (~5 total, all cached). In trying to reduce the memory footprint of the site, I've checked out the source from SVN to a local development install to try to convert some of those blocks into more optimized code. Here's the weird thing: the Devel module lists memory consumption at 50 MB on the production site (running Nginx, PHP 5.2.17, XCache and Zend Optimizer) but only 14 MB on my development site (running Apache2, PHP 5.2.13 and XCache). These are nearly identical versions of the same site — frankly, the production site should use even less memory, as I've disabled some of the modules running on the dev site. Any idea why this might be the case?
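
    One way to rule out measurement artifacts (Devel's own overhead, the different PHP builds, Zend Optimizer) is to log the same number on both installs; a minimal sketch, not part of the original question, appended inside the existing PHP at the very end of index.php on each site:

        // report real peak allocation and the configured limit for this request
        printf("peak: %.1f MB / limit: %s\n",
               memory_get_peak_usage(TRUE) / 1048576, ini_get('memory_limit'));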

    Read the article

  • IIS replaces redirect status header from PHP with 302 Redirect

    - by IP
    Hello, I hope I am posting this in the correct place... I'm having an issue with a 301 redirect in PHP. Looking at the headers, if I do a simple 301 redirect, it actually appears as a 302 redirect, which is not what I am after. This is the PHP code: header("Status: 301 Moved Permanently"); header('Location: newurl'); It is running on the latest version of PHP on IIS7 and uses the FastCGI module (which is apparently where this bug could exist). A quick Google search finds other people with the same problem, but no actual solution. http://www.mombu.com/php/bugs-forum/t-301-redirect-returning-302-instead-3090775.html http://forums.iis.net/p/1158431/1907156.aspx Many thanks! Paul
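
    A sketch of the workaround that usually fixes this (not from the question): send an explicit HTTP status line, or pass the code through header()'s third argument, rather than relying on the "Status:" header, which IIS/FastCGI tends to rewrite to 302 once a Location: header is present. The target URL below is a placeholder.

        <?php
        // the explicit status line survives the FastCGI handler untouched
        header('HTTP/1.1 301 Moved Permanently');
        header('Location: http://www.example.com/newurl');
        // or, equivalently: header('Location: http://www.example.com/newurl', true, 301);
        exit;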

    Read the article

  • ActiveX control issue on Terminal Server 2003

    - by Saif Khan
    I have a security camera system which can be viewed remotely via a web browser. It works well, but only with IE 6 and up, and requires an ActiveX control, "ERViewer.ocx". Some users need to view the cameras via Windows Terminal Server, but when they try to open the link to the DVR they get the prompt to install the ActiveX control, and then the browser crashes when they try to install it. I logged in as admin and got the same issue. I called the DVR's tech support but they have no idea; in other words, the usual useless tech support. Here is what I get in the error log: Faulting application iexplore.exe, version 7.0.6000.16735, faulting module ERViewer.ocx, version 1.6.0.8, fault address 0x000064d7. I am sure it could be some kind of permission issue with getting an OCX to run in IE. What else can I tweak?
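
    A hedged workaround sketch, not a confirmed fix: per-user ActiveX installs launched from inside IE often fail on Terminal Server, so registering the control once for the whole machine from an admin console session sometimes sidesteps the crash. The .ocx can usually be copied from a workstation where the viewer already works; the paths below are assumptions.

        change user /install
        copy ERViewer.ocx %SystemRoot%\system32\
        regsvr32 %SystemRoot%\system32\ERViewer.ocx
        change user /execute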

    Read the article

  • Apache is running; however, it reports that it is not, and it will not restart.

    - by solo
    Apache is running; however, it reports that it is not, and it will not restart. # /etc/init.d/httpd status httpd.worker is stopped # /usr/sbin/lsof -iTCP:80 COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME httpd.wor 1169 root 3u IPv6 2974 TCP *:http (LISTEN) httpd.wor 1211 daemon 3u IPv6 2974 TCP *:http (LISTEN) httpd.wor 1213 daemon 3u IPv6 2974 TCP *:http (LISTEN) httpd.wor 1215 daemon 3u IPv6 2974 TCP *:http (LISTEN) httpd.wor 1352 daemon 3u IPv6 2974 TCP *:http (LISTEN) #/etc/init.d/httpd restart Stopping httpd: [FAILED] Starting httpd: [Wed Mar 24 10:33:51 2010] [warn] module proxy_ajp_module is already loaded, skipping (98)Address already in use: make_sock: could not bind to address [::]:80 (98)Address already in use: make_sock: could not bind to address 0.0.0.0:80 no listening sockets available, shutting down Unable to open logs [FAILED] OS: Linux DISTRO: CENTOS 5 Restarting the server didn't help, nor did killing apache and starting it. Any idea what is causing this inconsistency?
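
    A recovery sketch, with paths as on a stock CentOS 5 install: the init script thinking httpd.worker is stopped usually means its PID file is missing or stale, so "stop" never kills the instance that is still holding port 80 and "start" then fails to bind. Stopping that instance by hand lets the init script take over cleanly again.

        cat /var/run/httpd.pid            # often absent or stale in this situation
        kill 1169                         # the root-owned parent PID from the lsof output
        sleep 2; /usr/sbin/lsof -iTCP:80  # confirm nothing is still listening on :80
        /etc/init.d/httpd start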

    Read the article

  • Incremental backups from Rackspace Cloud Files to Amazon Glacier

    - by Martin Wilson
    Is there a software product/module (open-source or commercial) that can provide incremental backups from Rackspace Cloud Files to Amazon Glacier? We are looking for something that will provide the following functionality (or achieve the same result, i.e. a cost-effective backup strategy for files stored in Rackspace Cloud Files): Work out which files have been added to or modified in a Rackspace Cloud account (since the last backup). Create a ZIP (or similar) of these files and store them in Amazon Glacier. Keep a record of which files are in which ZIPs. Ideally, restore either a single file or all files from Glacier back into Rackspace.

    Read the article

  • application/x-httpd-php opening rather than pages?

    - by GuyNoir
    For some reason, no matter what page I try to open (.php, .html, .htm, etc.), my server tries to save it rather than displaying the page. However, I do get the "It Works!" page for the main domain; it's only when I try to display a page in a sub-directory that it fails. I researched the problem, and I placed a .htaccess file in the directory of the pages that I'm attempting to open, with the code: AddType application/x-httpd-php5 .php .html .htm AddHandler applicaton/x-httpd-php5 .php .htm .html .my I'm running the latest version of Apache, with PHP 5 installed as the "php5" module. Any help? http://i.stack.imgur.com/fh6dj.png
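
    For reference, a corrected sketch of that .htaccess: "applicaton" is misspelled in the AddHandler line, and with the stock Apache php5 module the type is normally registered as application/x-httpd-php (without the "5"), so content tagged with the unhandled x-httpd-php5 type gets offered as a download instead of being executed.

        AddType application/x-httpd-php .php .html .htm
        AddHandler application/x-httpd-php .php .html .htm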

    Read the article

  • PHP make install seems to end abruptly and does not update libphp5.so

    - by matt74tm
    I'm trying to compile PHP 5.3.3 and, after a lot of ups and downs, I finally got 'make' to finish, followed by 'make install', which just shows this: root@server [/tmp/php-5.3.3]# make install Installing PHP SAPI module: cgi Installing PHP CGI binary: /usr/bin/ Installing PHP CLI binary: /usr/bin/ Installing PHP CLI man page: /usr/share/man/man1/ Installing shared extensions: /usr/lib64/20090626/ Installing build environment: /usr/lib64/build/ Installing header files: /usr/include/php/ Installing helper programs: /usr/bin/ program: phpize program: php-config Installing man pages: /usr/share/man/man1/ page: phpize.1 page: php-config.1 /tmp/php-5.3.3/build/shtool install -c ext/phar/phar.phar /usr/bin ln -s -f /usr/bin/phar.phar /usr/bin/phar Installing PDO headers: /usr/include/php/ext/pdo/ It does not look like it's done, because /usr/lib64/httpd/modules/libphp5.so still shows an old date: -rwxr-xr-x 1 root root 3193768 Mar 31 2010 libphp5.so
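
    A sketch of the likely explanation, not a confirmed diagnosis: the install log above only mentions the CGI and CLI SAPIs, which suggests configure was run without Apache support, and in that case make install never touches libphp5.so. Rebuilding with apxs pointed at the installed Apache does; the apxs path is an assumption and the other configure options should match the original build.

        ./configure --with-apxs2=/usr/sbin/apxs   # plus the options used previously
        make
        make install    # with apxs configured, this step installs/overwrites libphp5.so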

    Read the article

  • IIS7 Compression Configuration

    - by Level1Coder
    Hi, Previously, when I used IIS6, I used IIS6 Metabase Explorer to edit Metabase.xml and manually turned on compression, specifying the compression level and the file extensions to compress. IIS7 seems a bit different: there is no Metabase.xml file in the system32\inetsrv folder. Compression is easy to turn on by checking the checkbox in the Compression module, but how do I manually tweak the compression levels and the file extensions to compress? I also ran across an article saying that IIS7 automatically throttles compression: if your CPU load hits 50%, compression is turned off. Where are all these settings located?
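
    The old metabase settings now live in applicationHost.config under system.webServer/httpCompression; a hedged sketch of that section follows (the values are examples, and the CPU throttling you read about is controlled by the *CpuUsage attributes).

        <!-- %windir%\System32\inetsrv\config\applicationHost.config -->
        <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files"
                         dynamicCompressionDisableCpuUsage="90"
                         dynamicCompressionEnableCpuUsage="50">
          <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll"
                  staticCompressionLevel="9" dynamicCompressionLevel="4" />
          <staticTypes>
            <add mimeType="text/*" enabled="true" />
            <add mimeType="application/x-javascript" enabled="true" />
            <add mimeType="*/*" enabled="false" />
          </staticTypes>
          <dynamicTypes>
            <add mimeType="text/*" enabled="true" />
            <add mimeType="*/*" enabled="false" />
          </dynamicTypes>
        </httpCompression>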

    Read the article

  • puppet master REST API returns 403 when running under Passenger, but works when the master runs from the command line

    - by Anadi Misra
    I am using the standard auth.conf provided in puppet install for the puppet master which is running through passenger under Nginx. However for most of the catalog, files and certitifcate request I get a 403 response. ### Authenticated paths - these apply only when the client ### has a valid certificate and is thus authenticated # allow nodes to retrieve their own catalog path ~ ^/catalog/([^/]+)$ method find allow $1 # allow nodes to retrieve their own node definition path ~ ^/node/([^/]+)$ method find allow $1 # allow all nodes to access the certificates services path ~ ^/certificate_revocation_list/ca method find allow * # allow all nodes to store their reports path /report method save allow * # unconditionally allow access to all file services # which means in practice that fileserver.conf will # still be used path /file allow * ### Unauthenticated ACL, for clients for which the current master doesn't ### have a valid certificate; we allow authenticated users, too, because ### there isn't a great harm in letting that request through. # allow access to the master CA path /certificate/ca auth any method find allow * path /certificate/ auth any method find allow * path /certificate_request auth any method find, save allow * path /facts auth any method find, search allow * # this one is not stricly necessary, but it has the merit # of showing the default policy, which is deny everything else path / auth any Puppet master however does not seems to be following this as I get this error on client [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com [sudo] password for amisr1: Starting Puppet client version 3.0.1 Warning: Unable to fetch my node definition, but the agent run will continue: Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110 Info: Retrieving plugin Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110 Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110 Using cached catalog Error: Could not retrieve catalog; skipping run Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110 and the server logs show XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? 
HTTP/1.1" 403 93 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby" thefile server conf file is as follows (and goin by what they say on puppet site, It is better to regulate access in auth.conf for reaching file server and then allow file server to server all) [files] path /apps/puppet/files allow * [private] path /apps/puppet/private/%H allow * [modules] allow * I am using server and client version 3 Nginx has been compiled using the following options nginx version: nginx/1.3.9 built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) TLS SNI support enabled configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/ and the standard nginx puppet master conf server { ssl on; listen 8140 ssl; server_name _; passenger_enabled on; passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn; passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify; passenger_min_instances 5; access_log logs/puppet_access.log; error_log logs/puppet_error.log; root /apps/nginx/html/rack/public; ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem; ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem; ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem; ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem; ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA; ssl_prefer_server_ciphers on; ssl_verify_client optional; ssl_verify_depth 1; ssl_session_cache shared:SSL:128m; ssl_session_timeout 5m; } Puppet is picking up the correct settings from the files mentioned because config print command points to /etc/puppet [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf async_storeconfigs = false authconfig = /etc/puppet/namespaceauth.conf autosign = /etc/puppet/autosign.conf catalog_cache_terminus = store_configs confdir = /etc/puppet config = /etc/puppet/puppet.conf config_file_name = puppet.conf config_version = "" configprint = all configtimeout = 120 dblocation = /var/lib/puppet/state/clientconfigs.sqlite3 deviceconfig = /etc/puppet/device.conf fileserverconfig = /etc/puppet/fileserver.conf genconfig = false hiera_config = /etc/puppet/hiera.yaml localconfig = /var/lib/puppet/state/localconfig name = config rest_authconfig = /etc/puppet/auth.conf storeconfigs = true storeconfigs_backend = puppetdb tagmap = /etc/puppet/tagmail.conf thin_storeconfigs = false I checked the firewall rules on this VM; 80, 443, 8140, 3000 are allowed. Do I still have to tweak any specifics to auth.conf for getting this to work?

    Read the article

  • Virtualmin Configuration

    - by Allen
    I am trying to get Virtualmin set up and have reached a point where my noobish sysadmin skills aren't getting the job done. This is the message I get now when I try to refresh the configuration of Virtualmin: BIND DNS server is installed, and the system is configured to use it. However, the default master DNS server XXXXXX is not a fully qualified domain name. Sendmail is only accepting SMTP connections on the following ports : 127.0.0.1 port smtp. Email from other systems on the Internet will not be accepted. This can be changed in the Sendmail Mail Server module. Please advise what I need to do to get Sendmail configured properly. Thanks!
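
    A sketch of the usual Sendmail fix, assuming a stock sendmail.mc layout: the default DAEMON_OPTIONS line binds SMTP to 127.0.0.1 only, so dropping the Addr restriction and rebuilding sendmail.cf lets it accept mail from other hosts. The BIND warning is separate and just wants the default master DNS server name to be a fully qualified hostname.

        dnl /etc/mail/sendmail.mc: listen on all interfaces instead of loopback only
        dnl was: DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA')dnl
        DAEMON_OPTIONS(`Port=smtp, Name=MTA')dnl

    Rebuild with m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf (the sendmail-cf package must be installed), restart Sendmail, then re-check the Virtualmin configuration.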

    Read the article

  • Slow internet using Arch Linux

    - by GZaidman
    After a week or so of using Arch Linux I can't access the internet - it takes around 5 minutes to load Google (most other websites just give me a timeout), pacman's download speed ranges between 2 and 5 KB/s, and pinging Google takes around 9,000 ms. I'm connected over a wireless network (the wifi card is an Intel Ultimate 6300 and the router is an Edimax 6524n). Every other Windows machine connected to the network (and even the T410 running Windows) is fine, so the problem lies in Linux. So far, I checked the resolv.conf file (my router's IP address is listed) and the hosts file (pretty much default), and I disabled the ipv6 module. None of that helped. PS: I'm using NetworkManager (but the problem still occurs when connecting using wicd) running on GNOME 3. Thanks in advance for any help you can provide! EDIT: something really strange happens whenever I ping Google: I get an unknown host 'google.com', but the bit rate from the card jumps at the exact second I ping Google (so far, the bit rate has jumped to 54 Mb/s from 1 Mb/s over the course of 4 pings).
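
    The symptoms (unknown host one second, a burst of traffic the next) point more at name resolution than at raw wireless throughput; a quick hedged way to separate the two, using a public resolver purely for comparison:

        ping -c 3 8.8.8.8              # raw connectivity, no DNS involved
        nslookup google.com            # resolution via the router listed in resolv.conf
        nslookup google.com 8.8.8.8    # resolution bypassing the router entirely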

    Read the article

  • OpenAFS on Fedora/CentOS

    - by Michael Pliskin
    I am trying to see if OpenAFS fits my needs as a distributed filesystem and am a bit stuck. There are docs, but they're all quite hard to understand, so I'm asking for some expert advice here. My questions: Which version should I install? I need Windows client support, so I need 1.5 - right? But it is not stable... Or is it? And I don't see any pre-built RPMs for it, so do I compile from source? I tried to compile it and it worked, but it created a non-"mp" kernel module while my kernel needs an "mp" one - how do I work around that? Do I really need a fresh new partition to start with, or can I re-use an existing one and just make it available via AFS? Any nice HOWTOs around?

    Read the article

  • Error starting mod_wsgi

    - by Hardgood
    When I run the start-server command on mod_wsgi-express, I get this output: Server URL : http://localhost:8000/ Server Root : /tmp/mod_wsgi-localhost:8000:0 Server Conf : /tmp/mod_wsgi-localhost:8000:0/httpd.conf Error Log : /tmp/mod_wsgi-localhost:8000:0/error_log httpd: Syntax error on line 2 of /tmp/mod_wsgi-localhost:8000:0/httpd.conf: module version_module is built-in and can't be loaded Line 2 of the httpd.conf in the tmp folder that it refers to says the following; it was automatically created by mod_wsgi: LoadModule version_module '/usr/local/apache/modules/mod_version.so' I am running Apache 2.2.21 on CentOS 5.10. I am stumped. Any ideas how to get over this obstacle? I should mention that this is a cross-post of a Stack Overflow question, after I realized that SO was probably not the best place to ask it.
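
    A hedged stop-gap rather than a proper fix: httpd -l will show that mod_version is compiled into this Apache statically, so the generated LoadModule line can be wrapped in an IfModule guard and will then be skipped. Note that mod_wsgi-express regenerates its httpd.conf on each start-server run, so this mainly serves to confirm the diagnosis.

        # /tmp/mod_wsgi-localhost:8000:0/httpd.conf, around line 2
        <IfModule !version_module>
        LoadModule version_module '/usr/local/apache/modules/mod_version.so'
        </IfModule>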

    Read the article

  • Creating Windows links on Linux?

    - by Bart B
    I'm running a Samba file store for our Windows users, and I'd like to automatically generate Windows LNK files linking to other network shares that the user needs access to. I've done quite a bit of googling and I can't find a way of creating Windows links on Linux, or through Perl. I did find a Perl module on CPAN that looked promising, but unfortunately it will only run on Windows. If it's not possible to create .LNK files, perhaps people can suggest an alternative solution that would allow users to click on a file in one Samba store and be linked to a different Samba share? Thanks, Bart.
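
    If .LNK generation turns out to be a dead end, an alternative sketch that stays entirely on the Linux side is to drop ordinary symlinks into the share and let Samba follow them; the share name and paths below are assumptions, and wide links only takes effect when SMB1 unix extensions are disabled globally.

        # smb.conf
        [global]
           unix extensions = no

        [filestore]
           path = /srv/samba/filestore
           follow symlinks = yes
           wide links = yes
           # then, on the server: ln -s /srv/samba/othershare/docs /srv/samba/filestore/docs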

    Read the article

  • How do I enable SELinux when booting a ramdisk from a CD/DVD?

    - by JeffG
    I have a bootable DVD which boots the same kernel as the hard drive (which uses SELinux). I have copied /etc/selinux and all kernel modules to my ramdisk, and have tried various combinations of selinux=1 and selinux 1 with enforcing=1 and enforcing=0 as kernel boot parameters. All files contained in the checkpolicy, libselinux, policycoreutils, selinux-policy and selinux-policy-targeted RPMs have also been copied into the ramdisk tree. After the system boots from the ramdisk, I check dmesg: % dmesg | grep -i selinux Kernel command line: initrd=idrd.img ramdisk_size=110476 selinux=1 SELinux: Initializing. SELinux: Starting in permissive mode selinux_register_security: Registering secondary module capability SElinux: Registering netfilter hooks But SELinux isn't running: % /usr/sbin/getenforce Disabled % /usr/sbin/setenforce 1 /usr/sbin/setenforce: SELinux is disabled Neither /var/log/messages nor /proc/kmsg holds any clues.
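
    A hedged checklist sketch for the ramdisk's early userspace (RHEL 5-era paths): "Starting in permissive mode" followed by getenforce reporting Disabled usually means the policy was never loaded after the kernel initialised, which requires selinuxfs to be mounted, /etc/selinux/config to be readable and load_policy to run before init gets very far.

        mount -t selinuxfs none /selinux      # must be mounted before the policy can load
        cat /etc/selinux/config               # expects SELINUX=enforcing, SELINUXTYPE=targeted
        /usr/sbin/load_policy                 # load the installed policy by hand
        /usr/sbin/getenforce                  # should now report Permissive or Enforcing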

    Read the article

  • Perl throwing 403 errors!

    - by Jamie
    When I first installed Perl in my WAMP setup, it worked fine. Then, after installing ASP.net, it began throwing 403 errors. Here's my ASP.net config: Load asp.net module LoadModule aspdotnet_module "modules/mod_aspdotnet.so" Set asp.net extensions AddHandler asp.net asp asax ascx ashx asmx aspx axd config cs csproj licx rem resources resx soap vb vbproj vsdisco webinfo # Mount application AspNetMount /asp "c:/users/jam/sites/asp" # ASP directory alias Alias /asp "c:/users/jam/sites/asp" # Directory setup <Directory "c:/users/jam/sites/asp"> # Options Options Indexes FollowSymLinks Includes +ExecCGI # Permissions Order allow,deny Allow from all # Default pages DirectoryIndex index.aspx index.htm </Directory> # aspnet_client files AliasMatch /aspnet_client/system_web/(\d+)_(\d+)_(\d+)_(\d+)/(.*) "C:/Windows/Microsoft.NET/Framework/v$1.$2.$3/ASP.NETClientFiles/$4" # Allow ASP.net scripts to be executed in the temp folder <Directory "C:/Windows/Microsoft.NET/Framework/v*/ASP.NETClientFiles"> Options FollowSymLinks Order allow,deny Allow from all </Directory> Also, what are the code tags for this site?

    Read the article

  • Ubuntu 9.04 Presario S4000NX Fan Speed

    - by Chris C
    I recently installed Ubuntu 9.04 on a Presario S4000NX and the CPU fan is kept at max speed. With Windows XP installed, the fan speed would increase/decrease as required. I installed lm-sensors and ran sensors-detect. It recommended that I load the following modules, which I did: smsc47m192 i2c-i801 When running sensors-detect it gave me this strange message: Trying family SMSC Found SMSC LPC47M15x/192/997 Super IO Fan Sensors (but not activated) Running the sensors command gives me a list of voltages and the CPU temperature, but doesn't list any fans. After doing some Internet research I then tried to load the smsc47m1 module, but I get the following error: FATAL: Error inserting smsc47m1 (/lib/modules/2.6.28-15-generic/kernel/drivers/hwmon/smsc47m1.ko): no such device The file smsc47m1.ko does exist in the listed folder. Any suggestions for getting the fan speed (and the noise) down in Ubuntu? Thanx. - Chris P.S. - I would have put better tags but Server Fault wouldn't let me.

    Read the article

  • Trouble installing php memcache extension

    - by user35346
    I'm trying to install memcache on MAMP but I get the warning below, and when I continue it seems to complete properly. I add the line extension=memcache.so to the php.ini and restart MAMP but phpinfo() doesn't list the memcache extension. $ ./pecl install memcache downloading memcache-2.2.5.tgz ... Starting to download memcache-2.2.5.tgz (35,981 bytes) ..........done: 35,981 bytes 11 source files, building WARNING: php_bin /Applications/MAMP/bin/php5/bin/php appears to have a suffix 5/bin/php, but config variable php_suffix does not match running: phpize Configuring for: PHP Api Version: 20041225 Zend Module Api No: 20060613 Zend Extension Api No: 220060519 Enable memcache session handler support? [yes] : yes ... Build process completed successfully Installing '/Applications/MAMP/bin/php5/lib/php/extensions/no-debug-non-zts-20060613/memcache.so' install ok: channel://pecl.php.net/memcache-2.2.5 configuration option "php_ini" is not set to php.ini location You should add "extension=memcache.so" to php.ini
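
    Two quick checks worth running against MAMP's own PHP binary, as a sketch (the PECL warning about php_suffix hints that the build may have been driven by a different php/php-config than the one MAMP's Apache actually loads):

        # which php.ini and extension_dir this PHP build really uses
        /Applications/MAMP/bin/php5/bin/php -i | egrep 'Loaded Configuration File|extension_dir'
        # whether the module loads at all once extension=memcache.so is in that php.ini
        /Applications/MAMP/bin/php5/bin/php -m | grep -i memcache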

    Read the article

  • Replacing an ATA-100 hard drive

    - by Pieter
    I was instructed to replace the hard drive in an old laptop, a Dell Latitude D505. I suspect there was a head crash when someone moved the laptop while it was turned on. In the specifications, I found this about the hard drive: 30GB ATA-100 (4200RPM); 40GB ATA-100 (5400RPM); 60GB ATA-100 (4200RPM) *Optional 40GB (5400) 2nd HDD Module for media bay I'm familiar with SATA and IDE, but ATA-100 doesn't ring a bell. What do I have to take into account when I go looking for a replacement hard drive?

    Read the article

  • Zero-channel RAID for High Performance MySQL Server (IBM ServeRAID 8k): Any Experience/Recommendations?

    - by prs563
    We are getting this IBM rack mount server, and it has an IBM ServeRAID 8k storage controller with zero-channel RAID and 256 MB of battery-backed cache. It supports RAID 10, which we need for our high-performance MySQL server that will have 4 x 15,000 RPM 300GB SAS HDDs. This is mission-critical and we want as much bandwidth and performance as possible. Is this a good card, or should we replace it with another IBM RAID card? IBM ServeRAID 8k SAS Controller option provides 256 MB of battery backed 533 MHz DDR2 standard power memory in a fixed mounting arrangement. The device attaches directly to IBM planar which can provide full RAID capability. Manufacturer IBM Manufacturer Part # 25R8064 Cost Central Item # 10025907 Product Description IBM ServeRAID 8k SAS - Storage controller (zero-channel RAID) - RAID 0, 1, 5, 6, 10, 1E Device Type Storage controller (zero-channel RAID) - plug-in module Buffer Size 256 MB Supported Devices Disk array (RAID) Max Storage Devices Qty 8 RAID Level RAID 0, RAID 1, RAID 5, RAID 6, RAID 10, RAID 1E Manufacturer Warranty 1 year warranty

    Read the article

  • Free IP address management software

    - by TiFFolk
    We are choosing a system for managing our IP address space, so we are looking for specialized free software like IPplan. Here is what we have found so far: IPplan (does not support IPv6); SolarWinds IP Address Tracker (IPv6 support unknown); the IP module of The NOC Project (BTW, take a look at it; it seems to be a very promising project) (IPv6 support unknown); phpIP (does not support IPv6); IP management from RackTables (does not support IPv6). Do you know about any other specialized software like those listed above? But: no wiki, no DNS, no DHCP, no spreadsheet. The software should provide: a clear view of available addresses; a detailed listing of all addresses by subnet/search pattern/owner/additional info; support for adding additional info like the owner of an IP, domain name, contacts, etc.; multi-user support; an easy interface; scalability. It has to be written specifically for address management. Any OS: Windows, Linux, Solaris, or web-based.

    Read the article

  • Nginx access log shows authenticated user "admin"

    - by bearcat
    I came across a line in my Nginx access log: 218.201.121.99 - admin [12/Dec/2012:18:33:18 +0800] "GET /manager/html HTTP/1.1" 444 0 "-" "-" Let me stress that there is only one record with this IP. Notice the authenticated user, admin. After some googling, all I was able to find out is that this field is the authenticated user (http://wiki.nginx.org/HttpCoreModule#.24remote_user), which is authenticated by the Auth Basic module (http://wiki.nginx.org/HttpAuthBasicModule). However, nowhere in my site (configuration) do I use HTTP basic authentication. What is going on? How did it get there? Was the user authenticated?
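
    For what it's worth, nginx fills $remote_user straight from whatever Authorization header the client sent, whether or not auth_basic is configured, so nothing was actually authenticated here; the 444 status means nginx closed the connection without sending a response. A request like the hedged sketch below (a typical Tomcat-manager scan, not taken from the original logs) produces exactly this kind of log line.

        # guessed Basic credentials sent against the Tomcat manager path
        curl -u admin:admin http://yourserver/manager/html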

    Read the article
