Search Results

Search found 45245 results on 1810 pages for 'html content extraction'.

  • Can you rely on Nginx as your only webserver for PHP/MySQL?

    - by Saif Bechan
    Can you rely on Nginx as your only webserver? I know it works well in terms of performance, but how does it do in terms of security? I know Apache is stable and has ModSecurity; that is not the case for Nginx. I am going to use Nginx as my only webserver, and only for dynamic content. All my static content is delivered by a CDN.

  • Nginx alongside Apache in Plesk 9.3

    - by Saif Bechan
    Does anyone know how to set up Nginx alongside Apache in Plesk 9.3? I want to serve my dynamic content from Apache and the static content from Nginx. I read that there is a new configuration setting in Plesk 9.3 where you can do that, but I can't find an explanation of how to do so. I run CentOS 5. Thank you guys.
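
    Independent of whatever switch Plesk itself exposes, a minimal hand-rolled static/dynamic split usually looks something like the sketch below. The domain, document root and the assumption that Apache has been moved to port 8080 are illustrative only, not Plesk 9.3 defaults:

        server {
            listen 80;
            server_name example.com www.example.com;

            # serve static assets straight from disk
            location ~* \.(css|js|png|jpe?g|gif|ico)$ {
                root /var/www/vhosts/example.com/httpdocs;
                expires 30d;
            }

            # hand everything else (the dynamic content) to Apache
            location / {
                proxy_pass http://127.0.0.1:8080;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }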

  • Wkhtmltopdf margin (top and bottom)

    - by Kwarkjes
    I am using wkhtmltopdf 0.10.0 rc2 on: Linux 3.2.0-24-generic #38-Ubuntu x86_64 GNU/Linux. I can't create PDFs with margin-top or margin-bottom (no errors). I'm using the commands below:

        wkhtmltopdf -T 50 -B 50 http://google.com ./test.pdf
        wkhtmltopdf --margin-top 50 --margin-bottom 50 page.html ./test.pdf

    When I try this:

        wkhtmltopdf -L 50 -R 50 -T 50 -B 50 page.html ./test.pdf

    the left and right margins work perfectly (still no margin-top/margin-bottom). It doesn't matter which URL or page I convert.
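
    One thing worth ruling out (an assumption, not a confirmed fix for this build) is how the margin values are parsed: passing explicit units with the long option names makes the intent unambiguous:

        wkhtmltopdf --margin-top 50mm --margin-bottom 50mm \
                    --margin-left 50mm --margin-right 50mm \
                    page.html ./test.pdf

    If the top and bottom margins still come out as zero with explicit units, that points at a bug in this particular build rather than at the command line.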

  • Private web space?

    - by user17175
    I want a space on the web. I want to give the username and password for that web space to several users, so that every user can upload content to that space but can't delete content from there. Any suggestions?
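
    If the space ends up being a plain FTP area, one way to get "upload but never delete" with a single shared account is to whitelist only the harmless commands. A sketch assuming a vsftpd server (the exact command list is illustrative):

        # /etc/vsftpd.conf (relevant lines only)
        local_enable=YES
        write_enable=YES
        # allow login, navigation, listing, download and upload,
        # but no DELE/RMD/RNFR, so nothing can be deleted or renamed
        cmds_allowed=ABOR,CWD,LIST,MKD,PASS,PASV,PORT,PWD,QUIT,RETR,SIZE,STOR,SYST,TYPE,USER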

  • Installing OpenLDAP: ldap_bind: Invalid credentials (49)

    - by Arcturus
    Hello. I've been trying to set up the OpenLDAP installed by default on Fedora 12, very unsuccessfully. My ultimate goal is to use LDAP authentication for user login and Apache, using the OpenLDAP server running on the same machine. The server is running, but the error I always get when I try to use ldapsearch or ldapadd is:

        ldap_bind: Invalid credentials (49)

    I've been following these tutorials, but none of them helped me:
    http://www.howtoforge.com/openldap_fedora7
    http://www.redhat.com/docs/manuals/linux/RHL-9-Manual/ref-guide/s1-ldap-quickstart.html
    http://www.howtoforge.com/linux_ldap_authentication
    http://docs.fedoraproject.org/deployment-guide/f12/en-US/html/s1-ldap-pam.html
    http://www.openldap.org/doc/admin24/quickstart.html

    First, some components were already installed, and I installed these with yum:

        yum install openldap-servers openldap-devel

    Then, I created a basic slapd.conf file in /etc/openldap:

        database   bdb
        suffix     "dc=sniejana-sandbox,dc=com"
        rootdn     "cn=root,dc=sniejana-sandbox,dc=com"
        rootpw     {SSHA}cxdz55ygPu4T3ykg7dgu+L0VRvsFSeom
        directory  /var/lib/ldap/sniejana-sandbox.com

    I obtained the rootpw with this command:

        slappasswd -s changeme

    I also created the /var/lib/ldap/sniejana-sandbox.com directory and made sure the entire contents of /var/lib/ldap were owned by the ldap user.

    I found two ldap.conf files, one in /etc and one in /etc/openldap. I don't know which is the right one. If I understood correctly, this file is to configure the client. I put this in both:

        HOST localhost
        BASE dc=sniejana-sandbox,dc=com

    I then ran the server with:

        service slapd start

    It said OK. Most of the tutorials above say to use the command ldapsearch -D "cn=Manager,dc=my-domain,dc=com" -W to ensure that everything's working. When I execute this command, a password prompt appears, and after entering the password, I get the error:

        ldapsearch -D "cn=root,dc=sniejana-sandbox,dc=com" -W
        Enter LDAP password:
        ldap_bind: Invalid credentials (49)

    The same thing happens when trying to use ldapadd. I tried with an encrypted and an unencrypted password in slapd.conf; it doesn't change anything. Adding a -x for simple authentication doesn't change anything either.

    netstat -ap confirms the server is listening:

        tcp   0   0 *:ldap   *:*   LISTEN   4148/slapd
        tcp   0   0 *:ldap   *:*   LISTEN   4148/slapd

    ps -ef | grep slapd confirms the process is running:

        ldap   4148   1   0 15:22 ?   00:00:00 /usr/sbin/slapd -h ldap:/// -u ldap

    Running slaptest produces "config file testing succeeded".

    I read somewhere that the command ldapsearch -x -b '' -s base '(objectclass=*)' namingContext can confirm the server is running. It appears to work:

        # extended LDIF
        #
        # LDAPv3
        # base <> with scope baseObject
        # filter: (objectclass=*)
        # requesting: namingContext
        #

        #
        dn:

        # search result
        search: 2
        result: 0 Success

        # numResponses: 2
        # numEntries: 1

    I'm running out of ideas. Am I missing something obvious?
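
    One possibility worth checking (an assumption based on how Fedora packages OpenLDAP 2.4, not something the output above proves) is that slapd is actually reading the dynamic cn=config directory in /etc/openldap/slapd.d and silently ignoring slapd.conf, so the rootdn/rootpw set in the file never take effect. A sketch of regenerating that directory from the file and restarting:

        # back up the existing dynamic config, then rebuild it from slapd.conf
        mv /etc/openldap/slapd.d /etc/openldap/slapd.d.bak
        mkdir /etc/openldap/slapd.d
        slaptest -f /etc/openldap/slapd.conf -F /etc/openldap/slapd.d
        chown -R ldap:ldap /etc/openldap/slapd.d
        service slapd restart

    After that, ldapsearch -x -D "cn=root,dc=sniejana-sandbox,dc=com" -W should accept the password set with slappasswd.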

  • Is it possible to configure a CDN so that it will step out of the way for a subset of regional IPs?

    - by rwired
    We have a website which targets customers in China, both expats and local Chinese. We have an ICP license which allows us to host in a datacenter inside China. Internet access in China is actually as fast as anywhere else (faster than most places, actually), as long as the content is served up within the boundaries of the Great Firewall; anything that crosses the wall is horribly slow. The problem is that most expats have some sort of VPN installed so that they can access all the blocked stuff. What this means is that when they access our site, the traffic first has to go out of China through the firewall to their VPN, and then back in. The performance is terrible, worse than if we were just hosting outside of China directly (which we used to do before the ICP was issued).

    So I want to use a global CDN to mirror the site automatically, but I only want to deliver the content via the CDN if the user's request IP address is outside of China. Inside China I would like the content to be served by our own server.

    I also want to be careful with the domain names. We currently use www.xxx.com and www.xxx.cn for language selection purposes, as these perform well in SEO on Google (which the expats use) and Baidu (which the locals use). If possible I would like to avoid having one domain on the outside and the other on the inside, since not all expats use a VPN, and some Chinese speakers also use VPNs. Also, some of our legitimate customers in both languages are from outside of China. I also don't want to resort to using something like www2.xxx.com/cn for the outside connection if at all possible, since I worry about duplicate content and canonical URLs ruining our SEO (unless you know of a quick fix for that).

    CDNs I'm considering are Google PageSpeed, CloudFlare and Amazon CloudFront, none of which have datacenters inside China. I have complete control of the .com DNS zone records, but the .cn zones are under the control of the domain issuing body in China. I'm not sure at this time if they would allow even a CNAME to point to an IP outside of China (although I don't see why not); they no longer allow outside registrars like they used to.
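
    If you keep the .com zone on name servers you control, split-horizon DNS is one way to make the same hostname resolve differently for Chinese and non-Chinese resolvers. A minimal BIND "views" sketch, assuming cn-networks is an ACL filled with the published Chinese IP prefixes (the zone files and addresses are placeholders):

        acl "cn-networks" { 1.0.1.0/24; /* ...rest of the CN prefix list... */ };

        view "china" {
            match-clients { cn-networks; };
            // zone file points www.xxx.com at the in-country server's A record
            zone "xxx.com" { type master; file "zones/xxx.com.china"; };
        };

        view "world" {
            match-clients { any; };
            // zone file points www.xxx.com at the CDN (e.g. a CNAME to the CDN hostname)
            zone "xxx.com" { type master; file "zones/xxx.com.world"; };
        };

    The main caveat is that resolution follows the user's resolver rather than the user's own IP, so an expat whose VPN also handles DNS will still be routed to the CDN.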

  • Text editor on Windows for editing remote files

    - by Doug Harris
    I've got a team of web programmers who need to edit HTML and CSS stored on a Linux server. They're all using Windows on their desktops. Rather than either teaching them to use vi/vim in a shell window or editing locally and copying with an SFTP client, I think it'd be easier to install a text editor which can transparently do the network negotiation. To reiterate, here are the requirements:
    - runs on Windows
    - can open files over SFTP/SSH
    - syntax highlighting for CSS/HTML

  • "No route to host" with ssl but not with telnet

    - by Clemens Bergmann
    I have a strange problem with connecting to an HTTPS site from one of my servers. When I type:

        telnet puppet 8140

    I am presented with a standard telnet console and can talk to the server as always:

        Connected to athena.hidden.tld.
        Escape character is '^]'.
        GET / HTTP/1.1

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>400 Bad Request</title>
        </head><body>
        <h1>Bad Request</h1>
        <p>Your browser sent a request that this server could not understand.<br />
        Reason: You're speaking plain HTTP to an SSL-enabled server port.<br />
        Instead use the HTTPS scheme to access this URL, please.<br />
        <blockquote>Hint: <a href="https://athena.hidden.tld:8140/"><b>https://athena.hidden.tld:8140/</b></a></blockquote></p>
        <hr>
        <address>Apache/2.2.16 (Debian) Server at athena.hidden.tld Port 8140</address>
        </body></html>
        Connection closed by foreign host.

    But when I try to connect to the same host and port with SSL:

        openssl s_client -connect puppet:8140

    it is not working:

        connect: No route to host
        connect:errno=113

    I am confused. At first it sounded like a firewall problem, but that can't be it, can it? A firewall would also block the telnet connection. As firewall I am using ferm on both servers. The systems are Debian Squeeze VM boxes.

    [edit 1] Even when I try to connect directly with the IP address:

        openssl s_client -connect 198.51.100.1:8140   # address exchanged
        connect: No route to host
        connect:errno=113

    Bringing down the firewalls on both hosts with service ferm stop does not help either. But when I run openssl s_client -connect localhost:8140 on the server machine, it connects fine.

    [edit 2] If I connect to the IP with telnet, it also does not work:

        telnet 198.51.100.1 8140
        Trying 198.51.100.1...
        telnet: Unable to connect to remote host: No route to host

    The confusion might come from IPv6. I have IPv6 on all my hosts. It seems that telnet uses IPv6 by default, and this works. For example, telnet -6 puppet 8140 works but telnet -4 puppet 8140 does not. So there seems to be a problem with the IPv4 route. openssl seems to only (or by default) use IPv4 and therefore fails, while telnet uses IPv6 and succeeds.
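
    Given that IPv6 works and IPv4 does not, a quick diagnostic sketch (the address is the placeholder from above) is to look at the IPv4 routing table and at the firewall, since "No route to host" is also exactly what a REJECT rule answering with icmp-host-unreachable looks like to the client:

        ip -4 route show                       # IPv4 routing table on the client
        ip route get 198.51.100.1              # route actually chosen for this destination
        ping 198.51.100.1                      # does plain IPv4 ICMP get through?
        iptables -L -n -v | grep -i reject     # REJECT rules on either host produce this error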

  • IE8 Refuses to run Javascript from Local Hard Drive

    - by Josh Stodola
    I have a problem that just started at work recently, and the network manager is certain he did not change anything in the group policy. Anyway, here is a detailed description of the problem. My machine is Windows XP SP3, and I use IE8 to browse. We have McAfee anti-virus software that I am unable to configure. I use the following file to test:

        <!DOCTYPE html>
        <html>
        <head>
          <title>Javascript Test</title>
        </head>
        <body>
          <script type="text/javascript">
            document.write("<h1>PASS</h1>");
          </script>
          <noscript>
            <h1>FAIL</h1>
          </noscript>
        </body>
        </html>

    When I open this file from the C: drive, it fails every time. If I open it from anywhere else (a local/remote web server or a mapped network drive), it works just fine. When I am simply browsing the Internet, Javascript on web sites works just fine. It only fails for files running from my C: drive. Additionally, I have had a couple of other programmers in the department try this file on their C: drives, and it works fine for them, so I don't believe it is a group policy thing. I need to fix this because I do extensive testing from my C: drive and I am accustomed to doing so; I don't want to get into the habit of moving files to a different drive just to test.

    Things I have tried:
    - Enabled "Allow Active Content to Run Files on My Computer" in Options | Advanced | Security
    - Enabled "Allow Active Scripting" in Options | Security | Custom Level
    - Verified that "Script" was not checked as disabled in the Developer Toolbar
    - Added localhost to Trusted Sites in Options
    - Disabled McAfee completely (momentarily, with help from the network admin)
    - Used an older DOCTYPE in my test HTML page
    - Re-installed IE8 completely
    - Ran regsvr32 on JScript.dll
    - Slammed keyboard

    I am sure that there is a setting somewhere that will fix this problem, possibly in the registry; I would not be surprised if it was related to the Developer Toolbar. At this point I do not know where else to look. Can anyone help me resolve this problem?

    EDIT: Regardless of the bounty, this issue is still ongoing.
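
    One experiment that costs nothing (an assumption to test, not a confirmed cause) is adding a Mark of the Web comment to the test file; IE uses it to decide which security zone a local HTML file runs in, and it often changes how local scripts are treated:

        <!DOCTYPE html>
        <!-- saved from url=(0014)about:internet -->
        <html>
        ...

    If the file passes with that comment in place, the problem is the Local Machine zone lockdown on that particular machine rather than the script engine itself.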

  • Rename file in Amazon Glacier

    - by Benjamin
    Is it possible to rename an archive in Amazon Glacier? The documentation says:

        After you upload an archive, you cannot update its content or its description. The only way you can update the archive content or its description is by deleting the archive and uploading another archive.

    That would lead me to think that it's not possible, but I'm not sure whether the file name is considered part of the archive description.

  • phpMyAdmin causes php-fpm worker to restart (502 Bad Gateway)

    - by rndbit
    I am trying to set up a test site for myself. Everything works fine except phpMyAdmin. The PHP installation runs my test site's scripts fine; however, trying to load phpMyAdmin I get a 502 Bad Gateway error. Judging from the logs (which are not too helpful), it seems that a php-fpm worker crashes each time phpMyAdmin is accessed. No clue how or why. Does anyone have any idea?

    nginx log:

        *3 recv() failed (104: Connection reset by peer) while reading response header from upstream,

    And the php-fpm log:

        [07-Jun-2012 14:19:51] WARNING: [pool www] child 32179 exited on signal 11 (SIGSEGV) after 3.217902 seconds from start
        [07-Jun-2012 14:19:51] NOTICE: [pool www] child 32351 started

    My nginx.conf:

        user nginx;
        worker_processes 1;
        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
            access_log /var/log/nginx/access.log main;
            sendfile on;
            keepalive_timeout 65;
            fastcgi_buffers 8 16k;
            fastcgi_buffer_size 32k;
            include /etc/nginx/conf.d/*.conf;

            server {
                listen 443 ssl;
                listen 80;
                server_name testsite.net www.testsite.net;
                ssl on;
                ssl_certificate /var/www/html/admin/ssl/certificate.pem;
                ssl_certificate_key /var/www/html/admin/ssl/privatekey.pem;
                ssl_session_timeout 1m;
                ssl_protocols SSLv2 SSLv3 TLSv1;
                ssl_ciphers HIGH:!aNULL:!MD5:!kEDH;
                ssl_prefer_server_ciphers on;
                access_log off;

                location ~ \.php$ {
                    root /var/www/html;
                    fastcgi_pass 127.0.0.1:9000;
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    include /etc/nginx/fastcgi_params;
                }

                location / {
                    root /var/www/html;
                    index index.php;
                }
            }
        }

    php.ini is standard, with cgi.fix_pathinfo=0.

    php-fpm.conf:

        include=/etc/php-fpm.d/*.conf

        [global]
        pid = /var/run/php-fpm/php-fpm.pid
        error_log = /var/log/php-fpm/error.log
        log_level = notice

    php-fpm.d/www.conf:

        [www]
        listen = 127.0.0.1:9000
        listen.allowed_clients = 127.0.0.1
        user = nginx
        group = nginx
        pm = dynamic
        pm.max_children = 10
        pm.start_servers = 1
        pm.min_spare_servers = 1
        pm.max_spare_servers = 10
        slowlog = /var/log/php-fpm/www-slow.log
        php_flag[display_errors] = on
        php_admin_value[error_log] = /var/log/php-fpm/www-error.log
        php_admin_flag[log_errors] = on
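
    Since the worker dies with SIGSEGV, the most useful next step is usually to capture a core dump from the crashing child rather than to guess. A sketch, assuming a stock layout (the core path and binary location are examples):

        ; add to php-fpm.d/www.conf so the pool is allowed to write core dumps
        rlimit_core = unlimited

    then:

        # tell the kernel where to put core files, restart php-fpm and reproduce the crash
        echo '/tmp/core-%e-%p' > /proc/sys/kernel/core_pattern
        service php-fpm restart
        # afterwards, open the core in gdb to see where the worker crashed
        gdb /usr/sbin/php-fpm /tmp/core-php-fpm-32179

    If an opcode cache such as APC is loaded, disabling it temporarily is also worth trying; it is a frequent cause of phpMyAdmin-only segfaults.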

  • Delete pen drive contents and also the Trash on a Mac?

    - by Warrior
    I am using a Mac Pro. I copied some data from my pen drive to my Mac and deleted the content on the drive by moving it to the Trash. After that, when I check the pen drive's info, it shows more space in use than it should. Only when I empty the Trash do I see the correct value for the pen drive and am able to copy data to it again. Is the Mac designed to work like that, or is there another way to delete files besides the "Move to Trash" option? Thanks.
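
    This is expected behaviour: when you trash files from an external volume, OS X moves them into a hidden .Trashes folder on that volume, so the space is not freed until the Trash is emptied. To reclaim the space without emptying your whole Trash, a Terminal sketch (the volume name is an example):

        # remove the hidden per-volume trash folder on the pen drive
        rm -rf /Volumes/MYPENDRIVE/.Trashes

        # or delete files from the drive immediately, bypassing the Trash
        rm -rf /Volumes/MYPENDRIVE/old_folder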

  • Puppet master and resources graph

    - by jrottenberg
    Hello! I've set up RRD reporting and graphing on my Puppet master. My nodes report as expected and I can see the 'changes' and 'time' graphs, but I am missing the 'resources' elements (the HTML page and the daily/weekly/monthly/yearly graphs). Note that the resources.rrd files are there; the puppetmaster just does not generate the HTML and PNG files for them.

  • Apache won't serve images larger than ~2K

    - by dtbaker
    Hello. I just upgraded an old box to Ubuntu 10.04.2 LTS. Apache will not deliver images larger than about 2 KB to a browser; small images display fine, and static HTML and PHP continue to work fine as well.

    Installed packages:

        apache2              2.2.14-5ubuntu8.4
        apache2-mpm-prefork  2.2.14-5ubuntu8.4
        apache2-utils        2.2.14-5ubuntu8.4
        apache2.2-bin        2.2.14-5ubuntu8.4
        apache2.2-common     2.2.14-5ubuntu8.4

    Here is an ngrep of an image request that does not display in the browser:

        T 192.168.0.4:32907 -> 192.168.0.54:80 [AP]
        GET /path/path/logo.png HTTP/1.1..Host: 192.168.0.54..Connection: keep-alive..
        Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5..
        User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/534.13 (KHTML, like Gecko) Chrome/9.0.597.98 Safari/534.13..
        Accept-Encoding: gzip,deflate,sdch..Accept-Language: en-US,en;q=0.8..
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3....

        T 192.168.0.54:80 -> 192.168.0.4:32907 [A]
        HTTP/1.1 200 OK..Date: Wed, 09 Mar 2011 05:28:38 GMT..Server: Apache/2.2.14 (Ubuntu)..
        Last-Modified: Tue, 05 Oct 2010 11:59:17 GMT..ETag: "17b6f4-15fe-491dd63eb2f40"..
        Accept-Ranges: bytes..Content-Length: 5630..Keep-Alive: timeout=15, max=100..
        Connection: Keep-Alive..Content-Type: image/png.....PNG........IHDR...!...v.......%.....sRGB.........bKGD..............pHYs.................tIME..
        etc...

    This looks OK to me! I have tried Firefox and Chrome; both display small images fine, but when a large image is requested the browser prompts to download the file. When the image file is saved to the local computer it is corrupt, and it also takes a long time to save, which makes me think the browser cannot see the Content-Length header sent by Apache. Also, when I look at the saved image file it includes the headers from Apache, along with a bit of garbage at the top, like so (vi logo.png):

        ^@^UÅd^@$^]V^S^H^@E^@^Q,n!@^@@^F^@^@À¨^@6À¨^@^D^@P^Y¬rÇŹéw^P^@Ú^@^@^A^A^H ^@^GÝ^]^@pbSHTTP/1.1 200 OK^M
        Date: Wed, 09 Mar 2011 04:47:04 GMT^M
        Server: Apache/2.2.14 (Ubuntu)^M
        Last-Modified: Tue, 05 Oct 2010 11:59:17 GMT^M
        ETag: "17b6ff-157c-491dd63eb2f40"^M
        Accept-Ranges: bytes^M
        Content-Length: 5500^M
        Keep-Alive: timeout=15, max=94^M
        Connection: Keep-Alive^M
        Content-Type: image/png^M
        ^M
        PNG^M
        etc...

    Any ideas? It's driving me nuts. There is nothing in the Apache error logs, and permissions are fine (the image data is there, it's just somewhat corrupt). There's no proxy or iptables on this Ubuntu box either. Thanks heaps!! Dave

    ps: just tried IE from a different computer, same problem :(
    pps: rebooted the server, no help.
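
    For what it's worth, the garbage in front of the HTTP headers in the saved file looks like raw Ethernet/IP framing leaking into the TCP stream, which is a classic symptom of buggy TCP segmentation/checksum offloading in the NIC driver rather than anything Apache does (small responses fit in one segment, large ones get mangled). A hedged test, assuming the interface is eth0:

        # show the current offload settings
        ethtool -k eth0

        # temporarily disable segmentation and checksum offload, then re-test a large image
        ethtool -K eth0 tso off gso off tx off rx off

    If the images start arriving intact, the longer-term fix is a driver/kernel update or making those offload settings permanent.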

  • putty output screen too small?

    - by iie
    I'm having a strange problem with my PuTTY console. I'm establishing an SSH connection to my home computer [Windows 7 + freeSSHd server]. Everything works just fine, but I'm getting a tiny screen of output; I can resize the window, of course, but the content stays in a small box [the content doesn't resize]. I've tried to change the number of columns and rows in the properties, but it didn't change anything. The same thing happens with the Cygwin client.

  • Bots see something different!?

    - by ilhan
    I've submitted my web site to different apps like Yahoo Webmasters and similar places. They see my web site's main page title as "Indef of/", while I see it normally, as "My Title". The server is reported as "Apashi" (wtf!?); it is Apache in reality.

    - PHP 5.2.5
    - FreeBSD, kernel version 6.3-PRERELEASE
    - cPanel version 11.24.4-RELEASE
    - main page: index.html

    I guess it is because of index.html, but why?
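
    A quick way to see whether the server (or something in front of it) really sends different content to crawlers is to compare the response for a browser User-Agent with the response for a bot User-Agent. A sketch using curl (the domain is a placeholder):

        # what a normal browser gets
        curl -sA "Mozilla/5.0" http://www.example.com/ | grep -i "<title>"
        curl -sI -A "Mozilla/5.0" http://www.example.com/ | grep -i "^Server:"

        # what a crawler gets
        curl -sA "Googlebot/2.1 (+http://www.google.com/bot.html)" http://www.example.com/ | grep -i "<title>"

    If the two titles differ, something is serving User-Agent-dependent content; if they are identical, the odd title and server string are more likely an artifact of the submission tools themselves.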

  • Best Postfix spam RBL policy weight daemon?

    - by TRS-80
    I just heard about policyd-weight, so I did an apt-cache search policyd, which returns three options: policyd-weight, postfix-policyd and postfwd. Which one is the best, and do you have any tips on setting them up? Our current setup is whitelister plus postgrey to greylist RBL-listed hosts, then fail2ban them for 10 minutes if they have 10 failures, followed by content filtering (Kaspersky Anti-Spam). The content filtering is pretty good, but there's still a lot of spam that gets through the RBL greylisting.
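
    Whichever daemon you pick, they all hook into Postfix the same way: as a check_policy_service entry in smtpd_recipient_restrictions. A minimal sketch for policyd-weight (12525 is its usual default port; the second line assumes you keep postgrey on its default 10023):

        # /etc/postfix/main.cf
        smtpd_recipient_restrictions =
            permit_mynetworks,
            permit_sasl_authenticated,
            reject_unauth_destination,
            check_policy_service inet:127.0.0.1:12525,
            check_policy_service inet:127.0.0.1:10023

    Order matters here: putting policyd-weight before postgrey means obvious junk is scored and rejected before it ever gets greylisted.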

  • google search engine

    - by kourosh
    I am working on a Google box (a Google Search Appliance), something like this: http://mytwentyfive.com/blog/wp-content/uploads/byme/Google%20Search%20Appliances.jpg. I am pointing the crawler at a folder containing HTML files. Previously the crawler was crawling the files and indexing them, but now it finds the pattern (the folder) without following any of the HTML files inside it. I have tried everything I could think of but can't come up with anything else. Can someone help? Thanks.

  • Suggestions for Website Maintenance

    - by IceMage
    I have a website that is being hosted and supported by a company that is going out of business. We are looking for a company to do maintenance on the code/content side, and will likely move the hosting to Rackspace.com (or another SAS70 certified provider). We are bringing the CMS with us, which I know is unusual, but we like it and would like to keep it. What companies should I look at for just CODE and CONTENT maintenance? It's a PHP/MySQL website.

  • Redirect HTTP to HTTPS on certain pages?

    - by Elliott
    Hi, below is some code I added to my .htaccess. How can I redirect certain pages, such as login.php and login.html, to HTTPS? Also, if the user types in www. they get an "untrusted connection" warning, as the SSL certificate is only valid without the www. How could I fix this? Thanks.

        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteCond %{REQUEST_URI} /login.html
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
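
    A sketch of extending those rules to cover both login pages (the regular expression and the permanent-redirect flag are choices, not requirements):

        RewriteEngine On

        # force HTTPS for the login pages only
        RewriteCond %{HTTPS} off
        RewriteCond %{REQUEST_URI} ^/login\.(php|html)$
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

        # if someone reaches the HTTPS site with the www. prefix, send them to the bare domain
        RewriteCond %{HTTPS} on
        RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
        RewriteRule (.*) https://%1%{REQUEST_URI} [R=301,L]

    Note that rewrite rules cannot make the www warning go away on their own: the browser checks the certificate before any redirect is sent, so the clean fix for that part is a certificate that also covers the www hostname (for example via a SAN or wildcard certificate).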
