Search Results

Search found 23316 results on 933 pages for 'content generation'.


  • insserv: Script <SCRIPT_NAME> is broken: missing end of LSB comment.

    - by udo
    I am getting this error when running insserv -r udo-startup.sh:

        insserv: Script udo-startup.sh is broken: missing end of LSB comment.
        insserv: exiting now!

    The content of udo-startup.sh is this:

        #!/bin/bash
        ### BEGIN INIT INFO
        # Provides:          udo-startup.sh
        # Required-Start:    $local_fs $remote_fs $network $syslog
        # Required-Stop:     $local_fs $remote_fs $network $syslog
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: -
        # Description:       -
        ### END INIT INF

        ID=$(xinput list | grep -i touchpad | sed '/TouchPad/s/^.*id=\([0-9]*\).*$/\1/')
        xinput set-prop $ID "Device Enabled" 0

        exit 0
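    Note that the header above ends with ### END INIT INF rather than ### END INIT INFO; insserv looks for that exact terminator line, which is almost certainly what "missing end of LSB comment" refers to. The fix should just be the missing O:

        ### END INIT INFO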


  • Alternative SMTP-Proxy

    - by Uwe
    Currently we are using BitDefender for Mail Servers to scan for spam and viruses and to do content filtering. We chose BitDefender because it receives all incoming email and forwards it to our internal Windows IIS SMTP service. BitDefender also protects our SMTP service from being used as a spam relay, since it only allows certain IPs to send through it. The question is: are there any alternatives to BitDefender for mail servers?
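    For what it's worth, a common open-source stack for this role is Postfix sitting in front of the internal IIS SMTP service, with a content filter such as amavisd-new (SpamAssassin plus ClamAV) doing the scanning. A minimal sketch of the relay part of main.cf; the domain, internal host IP, and allowed sender networks below are placeholders:

        # /etc/postfix/main.cf (excerpt)
        mynetworks = 127.0.0.0/8, 192.0.2.0/24          # networks allowed to relay outbound
        smtpd_recipient_restrictions = permit_mynetworks, reject_unauth_destination
        relay_domains = example.com                     # accept inbound mail for this domain
        transport_maps = hash:/etc/postfix/transport    # example.com -> smtp:[10.0.0.5]
        content_filter = smtp-amavis:[127.0.0.1]:10024  # hand mail to amavisd-new for scanning

    The smtp-amavis service also needs a matching master.cf entry; the amavisd-new documentation covers that part.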


  • I can't view my website in Internet Explorer after moving from Apache to nginx, while it works in Firefox

    - by sauravdon
    After installing the nginx web server, I opened my website in Firefox. It works well there and the site template looks good, but in Internet Explorer it does not render properly: the text, images, and other content are badly styled, images do not load, and the CSS may not be applied. Please help me sort out this problem. Before this I was running my website on Apache with a different IP address, and then moved to nginx. Thanks, saurav
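    Without more detail this is a guess, but two nginx settings commonly explain "fine in Firefox, broken in IE" after a migration: old Internet Explorer versions are known to mishandle gzipped responses (nginx has a dedicated directive for them), and a missing MIME map makes stylesheets go out with the wrong Content-Type. A sketch of the relevant http-block lines, assuming a stock layout:

        http {
            gzip on;
            gzip_disable "msie6";            # skip compression for old IE user agents
            include /etc/nginx/mime.types;   # ensures .css/.js are sent as text/css etc.
            default_type application/octet-stream;
        }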


  • Optimize php-fpm and varnish for a powerful server

    - by Jim
    My setup: an Intel® Core™ i7-2600 with 16 GB DDR3 RAM, running varnish + nginx + php-fpm + APC for a not very heavy WordPress blog with W3 Total Cache and a CDN. My problem is that after 55 hits per second (according to blitz.io) varnish starts giving out timeouts. CPU usage at that point is hardly 1%, and free memory stays above 10 GB at all times. I tried benchmarking php-fpm directly, with a result of 150 hits/s without any timeouts, but beyond that the CPU usage goes to 100% and it stops responding. Can you help me optimize this to handle more? As I understand it, nginx plays no role here, so I am not including its config.

    php-fpm config:

        listen = /tmp/php5-fpm.sock
        listen.allowed_clients = 127.0.0.1
        user = nginx
        group = nginx
        pm = dynamic
        pm.max_children = 150
        pm.start_servers = 7
        pm.min_spare_servers = 2
        pm.max_spare_servers = 15
        pm.max_requests = 500
        slowlog = /var/log/php-fpm/www-slow.log
        php_admin_value[error_log] = /var/log/php-fpm/www-error.log
        php_admin_flag[log_errors] = on

    APC:

        extension = apc.so
        apc.enabled=1
        apc.shm_size=512MB
        apc.num_files_hint=0
        apc.user_entries_hint=0
        apc.ttl=7200
        apc.use_request_time=1
        apc.user_ttl=7200
        apc.gc_ttl=3600
        apc.cache_by_default=1
        apc.filters
        apc.mmap_file_mask=/tmp/apc.XXXXXX
        apc.file_update_protection=2
        apc.enable_cli=0
        apc.max_file_size=1M
        apc.stat=1
        apc.stat_ctime=0
        apc.canonicalize=0
        apc.write_lock=1
        apc.report_autofilter=0
        apc.rfc1867=0
        apc.rfc1867_prefix =upload_
        apc.rfc1867_name=APC_UPLOAD_PROGRESS
        apc.rfc1867_freq=0
        apc.rfc1867_ttl=3600
        apc.include_once_override=0
        apc.lazy_classes=0
        apc.lazy_functions=0
        apc.coredump_unmap=0
        apc.file_md5=0
        apc.preload_path

    Varnish VCL:

        backend default {
            .host = "127.0.0.1";
            .port = "8080";
            .connect_timeout = 6s;
            .first_byte_timeout = 6s;
            .between_bytes_timeout = 60s;
        }

        acl purgehosts {
            "localhost";
            "127.0.0.1";
        }

        # Called after a document has been successfully retrieved from the backend.
        sub vcl_fetch {
            # Uncomment to make the default cache "time to live" 5 minutes, handy
            # but it may cache stale pages unless purged. (TODO)
            # By default Varnish will use the headers sent to it by Apache (the backend server)
            # to figure out the correct TTL.
            # WP Super Cache sends a TTL of 3 seconds, set in wp-content/cache/.htaccess
            set beresp.ttl = 24h;

            # Strip cookies for static files and set a long cache expiry time.
            if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") {
                unset beresp.http.set-cookie;
                set beresp.ttl = 24h;
            }

            # If WordPress cookies found then page is not cacheable
            if (req.http.Cookie ~ "(wp-postpass|wordpress_logged_in|comment_author_)") {
                # set beresp.cacheable = false; # versions less than 3
                # beresp.ttl > 0 is cacheable so 0 will not be cached
                set beresp.ttl = 0s;
            } else {
                # set beresp.cacheable = true;
                set beresp.ttl = 24h; # cache for 24hrs
            }

            # Varnish determined the object was not cacheable
            # if ttl is not > 0 seconds then it is cacheable
            if (!beresp.ttl > 0s) {
                # set beresp.http.X-Cacheable = "NO:Not Cacheable";
            } else if (req.http.Cookie ~ "(wp-postpass|wordpress_logged_in|comment_author_)") {
                # You don't wish to cache content for logged in users
                set beresp.http.X-Cacheable = "NO:Got Session";
                return(hit_for_pass); # previously just pass but changed in v3+
            } else if (beresp.http.Cache-Control ~ "private") {
                # You are respecting the Cache-Control=private header from the backend
                set beresp.http.X-Cacheable = "NO:Cache-Control=private";
                return(hit_for_pass);
            } else if (beresp.ttl < 1s) {
                # You are extending the lifetime of the object artificially
                set beresp.ttl = 300s;
                set beresp.grace = 300s;
                set beresp.http.X-Cacheable = "YES:Forced";
            } else {
                # Varnish determined the object was cacheable
                set beresp.http.X-Cacheable = "YES";
                if (beresp.status == 404 || beresp.status >= 500) {
                    set beresp.ttl = 0s;
                }
                # Deliver the content
                return(deliver);
            }
        }

        sub vcl_hash {
            # Each cached page has to be identified by a key that unlocks it.
            # Add the browser cookie only if a WordPress cookie found.
            if (req.http.Cookie ~ "(wp-postpass|wordpress_logged_in|comment_author_)") {
                # set req.hash += req.http.Cookie;
                hash_data(req.http.Cookie);
            }
        }

        # vcl_recv is called whenever a request is received
        sub vcl_recv {
            # remove ?ver=xxxxx strings from urls so css and js files are cached.
            # Watch out when upgrading WordPress, need to restart Varnish or flush cache.
            set req.url = regsub(req.url, "\?ver=.*$", "");

            # Remove "replytocom" from requests to make caching better.
            set req.url = regsub(req.url, "\?replytocom=.*$", "");

            remove req.http.X-Forwarded-For;
            set req.http.X-Forwarded-For = client.ip;

            # Exclude this site because it breaks if cached
            if (req.http.host == "sr.ituts.gr") {
                return(pass);
            }

            # Serve objects up to 2 minutes past their expiry if the backend is slow to respond.
            set req.grace = 120s;

            # Strip cookies for static files:
            if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") {
                unset req.http.Cookie;
                return(lookup);
            }

            # Remove has_js and Google Analytics __* cookies.
            set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(__[a-z]+|has_js)=[^;]*", "");
            # Remove a ";" prefix, if present.
            set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", "");
            # Remove empty cookies.
            if (req.http.Cookie ~ "^\s*$") {
                unset req.http.Cookie;
            }

            if (req.request == "PURGE") {
                if (!client.ip ~ purgehosts) {
                    error 405 "Not allowed.";
                }
                # previous version ban() was purge()
                ban("req.url ~ " + req.url + " && req.http.host == " + req.http.host);
                error 200 "Purged.";
            }

            # Pass anything other than GET and HEAD directly.
            if (req.request != "GET" && req.request != "HEAD") {
                return(pass);
            }
            /* We only deal with GET and HEAD by default */

            # remove cookies for comments cookie to make caching better.
            set req.http.cookie = regsub(req.http.cookie, "1231111111111111122222222333333=[^;]+(; )?", "");

            # never cache the admin pages, or the server-status page, or your feed? you may want to..i don't
            if (req.request == "GET" && (req.url ~ "(wp-admin|bb-admin|server-status|feed)")) {
                return(pipe);
            }

            # don't cache authenticated sessions
            if (req.http.Cookie && req.http.Cookie ~ "(wordpress_|PHPSESSID)") {
                return(lookup);
            }

            # don't cache ajax requests
            if (req.http.X-Requested-With == "XMLHttpRequest" || req.url ~ "nocache" || req.url ~ "(control.php|wp-comments-post.php|wp-login.php|bb-login.php|bb-reset-password.php|register.php)") {
                return(pass);
            }

            return(lookup);
        }

    Varnish daemon options:

        DAEMON_OPTS="-a :80 \
                     -T 127.0.0.1:6082 \
                     -f /etc/varnish/ituts.vcl \
                     -u varnish -g varnish \
                     -S /etc/varnish/secret \
                     -p thread_pool_add_delay=2 \
                     -p thread_pools=8 \
                     -p thread_pool_min=100 \
                     -p thread_pool_max=1000 \
                     -p session_linger=50 \
                     -p session_max=150000 \
                     -p sess_workspace=262144 \
                     -s malloc,5G"

    I'm not sure where to start: should I first optimize php-fpm and then move on to varnish, or is php-fpm already at its maximum, so I should start looking for the problem in varnish?
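    Given that CPU and memory are nearly idle when the timeouts begin, Varnish's own thread handling is a reasonable first suspect before touching php-fpm. A quick diagnostic sketch; the exact counter names vary somewhat between Varnish versions, so treat the grep patterns as indicative:

        # Dump all counters once after a blitz.io run; non-zero thread-creation
        # failures or queued/dropped sessions point at thread starvation
        varnishstat -1 | egrep -i 'thread|sess.*(drop|queue)|n_wrk'

        # Watch the request stream live while re-running the load test
        varnishlog | grep -i timeout

    If those counters show threads failing to spawn fast enough, lowering thread_pool_add_delay is the commonly suggested first tweak, since with thread_pools=8 and thread_pool_min=100 the daemon already has to keep 800 workers warm.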


  • lighttpd config and rewriting/disabling attempts to access favicon.ico

    - by Kyle
    I've got lighttpd and Apache working together on an app I'm building; lighty serves the static content. However, each time a static asset is requested, I see a "not found: favicon.ico" message in the logs. I have added the following URL rewrite:

        url.rewrite-once = ( "^/favicon.ico$" => "/assets/images/favicon.png" )

    But to no avail; I'm still getting the message. Any ideas?
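    Two things worth ruling out, both guesses. lighttpd silently ignores url.rewrite-once rules when mod_rewrite isn't in the module list, and the dot in the pattern should be escaped so it matches only a literal dot:

        server.modules += ( "mod_rewrite" )   # rewrites are silently ignored without this

        url.rewrite-once = ( "^/favicon\.ico$" => "/assets/images/favicon.png" )

    If the message persists, check that /assets/images/favicon.png actually exists under server.document-root, since a rewrite to a missing target reproduces the same 404.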


  • How to secure a directory in Apache using a PHP session

    - by Cogsy
    I have a site that uses PHP session for authentication. There is one directory that I would like to restrict access to that does not use any PHP, it's just full of static content. I just don't know how to restrict access without every request going through a PHP script. Is there some way to have Apache check the session credentials and restrict access like Basic Auth?
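    Not with Basic Auth directly, but the usual workaround is to deny direct access to the directory and funnel every request through a small PHP gatekeeper that checks the session and streams the file (mod_xsendfile can take over the streaming more efficiently where available). A minimal sketch, in which serve.php, the protected/ directory, and the user_id session key are all hypothetical names to be adapted:

        # .htaccess inside protected/ : route every request to the gatekeeper
        RewriteEngine On
        RewriteRule ^(.*)$ /serve.php?file=$1 [L,QSA]

        <?php
        // serve.php (at the document root): hypothetical gatekeeper sketch
        session_start();
        if (empty($_SESSION['user_id'])) {          // adjust to your app's session check
            header('HTTP/1.1 403 Forbidden');
            exit;
        }
        $base = realpath(__DIR__ . '/protected');
        $path = realpath($base . '/' . $_GET['file']);
        if ($path === false || strpos($path, $base) !== 0) {  // block path traversal
            header('HTTP/1.1 404 Not Found');
            exit;
        }
        header('Content-Length: ' . filesize($path));
        readfile($path);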


  • Offline editable and mergeable wiki

    - by Zed
    I'm looking for wiki software which allows me to:

    - store wiki pages on a server (basic wiki functionality)
    - store a local copy on my computer
    - edit my local content, and then merge my changes back to the server (version control for conflicting updates)

    Is there a wiki, or a set of software, which allows me to do this? The primary platform is Linux, but I wouldn't mind if it also ran on Windows...


  • No Telnet login prompt when used over SSH tunnel

    - by SCO
    Hi there! I have a device, let's call it d1, running a lightweight Linux. This device is NATed by my internet box/router, hence not reachable from the Internet. The device runs a telnet daemon and only has a root user (no password). Its IP address on the private network is 192.168.0.126. From the private network (let's say 192.168.0.x), I can do:

        telnet 192.168.0.126

    This works correctly. However, to allow administration, I need to access the device from outside the private network. Hence I created an SSH tunnel like this on d1:

        ssh -R 4455:localhost:23 ussh@s1

    s1 is a server somewhere in the private network (but this is for testing purposes only; it will end up somewhere on the Internet), running a standard Linux distro, on which I created a user called 'ussh'. The s1 IP address is 192.168.0.48. When I 'telnet' with the following, let's say from c1 (192.168.0.19):

        telnet -l root s1 4455

    I get:

        Trying 192.168.0.48...
        Connected to 192.168.0.48.
        Escape character is '^]'.
        Connection closed by foreign host.

    The connection is closed after roughly 30 seconds, and I never get a login prompt. I tried without the -l switch, without any success. I tried to 'telnet' with IP addresses instead of names to avoid reverse DNS issues (although I added a line with s1's IP/name to d1's /etc/hosts, just in case), no success. I tried a port other than 4455, no success. I gathered Wireshark logs from s1, in which I can see:

    - s1 sends SSH data to c1, c1 ACKs
    - s1 performs an AAAA DNS request for c1, gets only the authoritative nameservers
    - s1 performs an A DNS request, then gets c1's IP address
    - s1 sends a SYN packet to c1, c1 replies with a RST/ACK
    - s1 sends a SYN to c1, c1 RST/ACKs (?)
    - after 0.8 seconds, c1 sends a SYN to s1, s1 SYN/ACKs, then c1 ACKs
    - s1 sends SSH content to d1, d1 sends an ACK back to s1
    - s1 retries the AAAA and A DNS requests
    - after 5 seconds, s1 retries a SYN to c1, once again RST/ACKed by c1; this is repeated 3 more times
    - the last five packets: d1 sends SSH content to s1, s1 sends ACK and FIN/ACK to c1, c1 replies with FIN/ACK, s1 sends ACK to c1

    The connection seems to be closed by the telnet daemon after 22 seconds. AFAIK there is no way to decode the SSH stream, so I'm really stuck here... Any ideas? Thank you!
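    Two things worth checking, offered as guesses since the capture is ambiguous. First, an ssh -R forward binds to s1's loopback interface only unless sshd is told otherwise, so it pays to test each leg of the chain separately. Second, the SYNs that s1 sends to c1 and that c1 RST/ACKs look like an identd/auth callback (TCP port 113) or similar lookup that some telnet daemons and TCP wrappers attempt before printing a login prompt; the roughly 30-second close matches such a lookup timing out.

        # On s1: is the forwarded port really listening, and on which interface?
        netstat -tlnp | grep 4455

        # On s1: test the s1-to-d1 leg alone; a login prompt here means the
        # problem is on the c1-to-s1 leg
        telnet localhost 4455

        # To expose the forward beyond loopback, sshd on s1 needs:
        #   GatewayPorts yes    (in /etc/ssh/sshd_config, then restart sshd)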


  • Google Chrome shows garbage text instead of web page

    - by Sarfraz Ahmed
    On some websites, I receive garbage text instead of the web page itself. I have noticed that this only happens for pages served with Content-Encoding set to gzip or with chunked transfer headers (screenshot omitted). I am using the latest version of Chrome, 22.0.1229.94, on Windows 7 Ultimate. The browser encoding is set to UTF-8 (I also tried changing the encoding to Western, etc., with the same result). Can anyone suggest a solution to this? Thanks
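    Garbage that appears only on gzipped responses usually means the body reaches the browser still compressed while something between the server and the renderer (often a proxy or an antivirus web filter) has mangled the Content-Encoding handling. A quick way to see what the server actually sends, assuming curl is available and using example.com as a placeholder:

        # Request the page the way a browser would, and inspect the headers
        curl -s -D - -o page.bin -H "Accept-Encoding: gzip" http://example.com/
        file page.bin        # should report "gzip compressed data" if the body is really gzipped

        # Ask for the identity encoding; if the page renders fine this way,
        # the compression path is the culprit
        curl -s http://example.com/ | head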


  • How well does Windows 7 MCE support Clear QAM?

    - by Jess Sightler
    How well does Windows 7 MCE support ClearQAM (no CableCARD, no HD), and are there any guidelines for which capture cards work best with it? Also, I have an old laptop with a 2 GHz Pentium M and 1 GB of RAM. I believe this will be able to handle one stream (as it currently does with non-QAM content under XP MCE). Would it also be able to handle two streams?


  • Multiple personalities for a blog

    - by Ralph Rickenbach
    I am using Blogger.com and serve multiple websites. I would like to display the content of one single blog on all the sites, using URLs like blog.sitename.xxx and each site's corporate identity. The sites are rather different, but a solution that allows site-specific CSS would suffice as an absolute minimum; better would be different templates per site. Any solution?


  • How to change suexec root directory from "/var/www" to "/home"?

    - by Oudin
    Hi, I've installed suexec on Ubuntu 12.04 using:

        apt-get install apache2 apache2-suexec libapache2-mod-fcgid php5-cgi

    However, when I run the following command:

        sudo /usr/lib/apache2/suexec -V

    I get the following info:

        -D AP_DOC_ROOT="/var/www"
        -D AP_GID_MIN=100
        -D AP_HTTPD_USER="www-data"
        -D AP_LOG_EXEC="/var/log/apache2/suexec.log"
        -D AP_SAFE_PATH="/usr/local/bin:/usr/bin:/bin"
        -D AP_UID_MIN=100
        -D AP_USERDIR_SUFFIX="public_html"

    I'm using /home/user/public_html to serve users' content on the web, not /var/www. How can I change the root directory to /home?
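    AP_DOC_ROOT is compiled into the stock suexec binary, so it cannot be changed by configuration. Debian and Ubuntu ship a second package, apache2-suexec-custom, whose binary reads these settings from a per-user config file instead; as I recall the layout (worth verifying against the package's README), the first line of the file is the document root:

        sudo apt-get install apache2-suexec-custom
        sudo a2enmod suexec

        # /etc/apache2/suexec/www-data : first line = document root,
        # a following line may set the userdir suffix
        sudo sh -c 'printf "/home\npublic_html\n" > /etc/apache2/suexec/www-data'
        sudo service apache2 restart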


  • SSH socks proxy tunnel without interactive session?

    - by dirtside
    I use:

        ssh -D 1080 myhost.org

    ...to open up an SSH tunnel from my work machine to my home machine, so as to bypass the idiotic content filter on the corporate firewall. However, this also creates an interactive SSH session that lives the whole time I'm using the tunnel. Is there any way to tell SSH just to create the tunnel and not bother with the interactive session?
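    Yes: ssh's -N flag tells it not to execute a remote command, and -f backgrounds the process once authentication completes, leaving only the tunnel:

        # SOCKS proxy on localhost:1080 with no interactive session
        ssh -f -N -D 1080 myhost.org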


  • Good maintained privacy Add-On/settings set that takes usability into account?

    - by Foo Bar
    For some weeks I've been trying to find a good set of Firefox add-ons that give me a good portion of privacy/security without losing too much usability, but I can't seem to find a combination of add-ons/settings that I'm happy with. Here's what I tried, together with the pros and cons I discovered:

    - HTTPS Everywhere: has only pros. Just install and be happy (no interaction needed), loads known pages SSL-encrypted, is updated fairly often.
    - NoScript: fine, but needs a lot of fine-tuning. Often maintained; mainly blocks all non-HTML/CSS content, but the author sometimes seems to make "untrustworthy" decisions.
    - RequestPolicy: seems dead (last activity 6 months ago, some annoying bugs, the official support mail address is dead), but its purpose is really great: full control over cross-site requests. It blocks by default, lets you add sites to a whitelist, and once that is done it works interaction-free in the background.
    - AdBlock Edge: blocks specific cross-site requests from a pre-defined whitelist (I can never be fully sure; I need to trust others).
    - Disconnect: like AdBlock Edge, just looking different. Has no interaction possibilities (I can never be fully sure, need to trust others, and cannot interact even if I wanted to).
    - Firefox's own cookie management (block by default, whitelist specific sites): after building my own whitelist it does its work in the background, and I have full control.

    All these add-ons together basically block everything insecure, but there are a lot of redundancies: NoScript has a mixed-content blocker, but Firefox has had its own for a while now. The cookie blocker in NoScript is redundant with my Firefox cookie settings. NoScript also has an XSS blocker, which is redundant with RequestPolicy. Disconnect and AdBlock Edge are extremely redundant with each other, though not fully. And there are some bugs (especially in RequestPolicy), and RequestPolicy seems to be dead.

    All in all, this list is great but has those heavy drawbacks. My favourite set would be a "NoScript Light" (only script blocking, without all the additional features that are redundant with other add-ons) + HTTPS Everywhere + a RequestPolicy clone (maintained, less buggy), because RequestPolicy makes all other "site blockers" obsolete (it blocks everything by default and lets me create a whitelist). But since RequestPolicy is buggy and seems dead, I have to fall back to AdBlock Edge and Disconnect, which don't block everything and need more maintenance (whitelist updates, trust checks). Are there add-ons that fulfill my wishes?


  • pg_dump: Error message from server: ERROR: missing chunk number 0 for toast value 43712886 in pg_toast_16418

    - by Kirill Novozhilov
    After running:

        $ pg_dumpall -U postgres -f /tmp/pgall.sql

    I see the following:

        pg_dump: SQL command failed
        pg_dump: Error message from server: ERROR: missing chunk number 0 for toast value 43712886 in pg_toast_16418
        pg_dump: The command was: COPY public.page_parts (id, name, filter_id, content, page_id) TO stdout;
        pg_dumpall: pg_dump failed on database "radiant", exiting

    I don't have any earlier backups. How can I fix it? Thanks in advance.
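    This error normally means one or a few rows of public.page_parts have corrupt TOAST data. Before anything else, take a filesystem-level copy of the data directory; then two standard recovery steps, sketched with the names taken from the error message above:

        -- Sometimes the TOAST index, not the data, is what's broken:
        REINDEX TABLE pg_toast.pg_toast_16418;

        # If the error persists, find the damaged rows by forcing each value
        # to be detoasted one row at a time (bash sketch):
        for i in $(psql -U postgres -d radiant -Atc "SELECT id FROM page_parts"); do
            psql -U postgres -d radiant -c "SELECT length(content) FROM page_parts WHERE id = $i" \
                >/dev/null 2>&1 || echo "row id=$i is damaged"
        done

    Damaged rows can then be deleted or reconstructed by hand, after which pg_dumpall should complete.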


  • Why isn't this rewrite rule (nginx) applied? (trying to setup Wordpress multisite)

    - by Brian Park
    Hi, I'm trying to set up WordPress multisite (subfolder structure) with nginx, but I'm having a problem with this rewrite rule. Below is Apache's .htaccess, which I have to translate into nginx configuration:

        RewriteEngine On
        RewriteBase /blogs/
        RewriteRule ^index\.php$ - [L]

        # uploaded files
        RewriteRule ^([_0-9a-zA-Z-]+/)?files/(.+) wp-includes/ms-files.php?file=$2 [L]

        # add a trailing slash to /wp-admin
        RewriteRule ^([_0-9a-zA-Z-]+/)?wp-admin$ $1wp-admin/ [R=301,L]

        RewriteCond %{REQUEST_FILENAME} -f [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^ - [L]
        RewriteRule ^([_0-9a-zA-Z-]+/)?(wp-(content|admin|includes).*) $2 [L]
        RewriteRule ^([_0-9a-zA-Z-]+/)?(.*\.php)$ $2 [L]
        RewriteRule . index.php [L]

    Below is what I came up with:

        server {
            listen 80;
            server_name example.com;
            server_name_in_redirect off;
            expires 1d;

            access_log /srv/www/example.com/logs/access.log;
            error_log /srv/www/example.com/logs/error.log;

            root /srv/www/example.com/public;
            index index.html;
            try_files $uri $uri/ /index.html;

            # rewriting uploaded files
            rewrite ^/blogs/(.+/)?files/(.+) /blogs/wp-includes/ms-files.php?file=$2 last;

            # add a trailing slash to /wp-admin
            rewrite ^/blogs/(.+/)?wp-admin$ /blogs/$1wp-admin/ permanent;

            if (!-e $request_filename) {
                rewrite ^/blogs/(.+/)?(wp-(content|admin|includes).*) /blogs/$2 last;
                rewrite ^/blogs/(.+/)?(.*\.php)$ /blogs/$2 last;
            }

            location /blogs/ {
                index index.php;
                #try_files $uri $uri/ /blogs/index.php?q=$uri&$args;
            }

            location ~ \.php$ {
                include /etc/nginx/fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /srv/www/example.com/public$fastcgi_script_name;
            }

            # static assets
            location ~* ^.+\.(manifest)$ {
                access_log /srv/www/example.com/logs/static.log;
            }

            location ~* ^.+\.(ico|ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
                # only set expires max IFF the file is a static file and exists
                if (-f $request_filename) {
                    expires max;
                    access_log /srv/www/example.com/logs/static.log;
                }
            }
        }

    In the above code, I believe rewrite ^/blogs/(.+/)?(.*\.php)$ /blogs/$2 last; has no effect, because when I look at the error log I see the following line:

        2010/09/15 01:14:55 [error] 10166#0: *8 "/srv/www/example.com/public/blogs/test/index.php" is not found (2: No such file or directory), request: "GET /blogs/test/ HTTP/1.1"

    (Here, 'test' is the second blog created using the multisite feature.) What I'm expecting is that /blogs/test/index.php gets rewritten to /blogs/index.php, but it doesn't seem to do that... Am I overlooking something obvious? Thanks!
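    One plausible reading, offered as a guess: the rewrite never sees a .php URI for this request, because the incoming URI is /blogs/test/; the /blogs/test/index.php path only appears afterwards, when the index directive issues its internal redirect, and that redirected request is captured directly by the location ~ \.php$ block. An easy way to confirm what the rewrite module is and isn't doing is nginx's rewrite logging:

        error_log /srv/www/example.com/logs/error.log notice;
        rewrite_log on;    # logs each rewrite rule evaluation at notice level

    With that enabled, the error log shows every rule tested against every request, which should make the mismatch obvious.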


  • Backup solutions for Rackspace Cloud Sites

    - by Daniel A. White
    What options do I have for backing up content from Rackspace Cloud Sites, including files and databases? I know they have cron jobs, but I am not sure what options I have when it comes to that. Here are some of the things their cron jobs support: http://cloudsites.rackspacecloud.com/index.php/How%5Fdo%5FI%5Fschedule%5Fa%5Fcron%5Fjob%3F
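    In the absence of platform-specific tooling, a scheduled job that archives the web content and dumps the database is the usual approach. Whether the sketch below runs as-is depends on what the Cloud Sites cron environment actually exposes, so every path and credential here is a placeholder:

        #!/bin/sh
        # nightly-backup.sh : hypothetical backup sketch
        STAMP=$(date +%Y%m%d)
        tar czf "$HOME/backups/site-$STAMP.tar.gz" "$HOME/web/content"
        mysqldump -h DB_HOST -u DB_USER -pDB_PASSWORD DB_NAME \
            | gzip > "$HOME/backups/db-$STAMP.sql.gz"
        # ship the archives off the platform afterwards (scp/rsync/ftp)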


  • openvz graphing user_beancounters

    - by Jona
    Hi there, we run a cluster of OpenVZ servers and are looking for a way to automatically graph the content of user_beancounters for all VEs. We currently have a fairly rudimentary cron job which alerts us when limits are hit, but we would like a graphing solution that shows us history. Obviously we could roll our own with some fancy bash/php/perl and rrdtool, but we're wondering whether any existing solutions exist before we go down this path. We currently run a cacti/snmp-based graphing infrastructure.
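    Whatever frontend does the graphing, the raw numbers are easy to extract; a small parser like the sketch below (column layout per standard OpenVZ kernels, so verify it against your own /proc/user_beancounters) could feed a Cacti data-input script or an rrdtool updater:

        #!/bin/sh
        # Print "veid resource failcnt" for every counter of every VE
        awk 'NR > 2 {
                 if ($1 ~ /^[0-9]+:$/) {     # first line of a VE block: "101: kmemsize ..."
                     ve = $1; sub(/:/, "", ve)
                     print ve, $2, $NF
                 } else if (NF >= 6) {
                     print ve, $1, $NF       # continuation: "resource held maxheld barrier limit failcnt"
                 }
             }' /proc/user_beancounters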


  • Reading Usenet w/o Spam

    - by user36720
    I'm trying to read comp.lang.javascript. The group seems to be active with decent content, but there is so much spam in there. Currently I'm reading it via Google Groups (http://groups.google.com/group/comp.lang.javascript/topics). Is there a way to read this group without the spam?


  • Tracking history in Excel

    - by Vojtech R.
    Hi, is there any possibility to track changes in Excel XP, with time, content, and the editor's name? Excel runs under just one Windows account, so the built-in revision system can't capture the editors' names. Thanks


  • Need text file indexing software

    - by BLN
    Hi, I have 10 HDDs and use the WhereIsIt software to index all their files. I have many text files larger than 100KB whose content I want indexed, but WhereIsIt only supports text files from 4KB to 32KB. I moved to Google Desktop, but it is very bad: I cannot start indexing on demand, as it only indexes while my computer is idle.

