Search Results

Search found 52885 results on 2116 pages for 'http redirect'.


  • What is the "right" way to host CUPS behind Apache 2

    - by Greymeister
    I have tried some combinations of ProxyPass, ProxyPassReverse and ProxyHTMLURLMap, but I'm still not having much luck. I would just like to be able to reach the printers in CUPS by going to www.printerhost.com/printers/printername, rather than having to append port 631 to the URL or make CUPS listen on port 80. As requested, here is the configuration file:

        LoadModule proxy_html_module modules/mod_proxy_html.so
        LoadModule xml2enc_module modules/mod_xml2enc.so
        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName blah.yours.com
            JkMount /* balancer
            JkMount /jkmanager jk-status
            JkUnMount /cups* balancer
            ProxyRequests Off
            ProxyPass /cups/ http://localhost:631/
            ProxyHTMLURLMap http://localhost:631 /cups
            <Location /cups/>
                ProxyPassReverse /
                ProxyHTMLEnable On
                ProxyHTMLURLMap / /cups/
            </Location>
        </VirtualHost>
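    For comparison, a hedged sketch of the same idea that has worked in similar setups (it keeps the /cups/ prefix from the config above and assumes mod_proxy, mod_proxy_http, mod_proxy_html and mod_headers are loaded; none of it is taken from the question). The main differences are that ProxyPassReverse inside a <Location> takes the backend URL, and that mod_proxy_html needs uncompressed responses in order to rewrite links:

        ProxyRequests Off
        ProxyPass /cups/ http://localhost:631/
        <Location /cups/>
            # inside a <Location>, ProxyPassReverse names the backend URL to map back
            ProxyPassReverse http://localhost:631/
            ProxyHTMLEnable On
            # rewrite links in the returned HTML back under /cups/
            ProxyHTMLURLMap http://localhost:631/ /cups/
            ProxyHTMLURLMap / /cups/
            # mod_proxy_html cannot parse gzip-compressed responses
            RequestHeader unset Accept-Encoding
        </Location>

    Mapping a /printers/ prefix instead of /cups/ works the same way if the bare /printers/printername URL is wanted; depending on the CUPS version, cupsd.conf may also need to accept requests arriving via the proxy, which is worth verifying separately.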

    Read the article

  • 1 Gigabit vs 1.25 Gigabit mismatch

    - by Joel Coel
    I need to re-connect the network to a small old outbuilding that hasn't been used in several years. I have to use the existing 62.5 µm multi-mode fiber run. This end of the fiber is already connected. For the end in the building, I was looking at this pair:

    http://www.tp-link.com/products/productDetails.asp?class=switch&content=spe&pmodel=TL-SM311LM
    http://www.tp-link.com/products/productDetails.asp?class=&content=spe&pmodel=TL-SL2210WEB

    If you look at the SFP (first link), it's listed at 1.25 Gbps. That's odd, because IIRC the fiber should really only do 1 Gbps. It's also supposed to work with the switch I posted (second link), but the GBIC port on the switch also only shows 1 Gbps. What am I missing here?

    Read the article

  • Plus signs appearing in Google searches

    - by emddudley
    Ever since Google implemented their new look at the beginning of May, I have been having trouble with their search engine changing all of the spaces in my query to plus signs. This behavior occurs when I use the search box in both Firefox and Internet Explorer. For example, if I search for google search plus signs I am taken to the following URL, where google+search+plus+signs is in the search box:

    http://www.google.com/search?q=google+search+plus+signs&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a

    However, if I perform the search from google.com, I get taken to a different URL with google search plus signs as I'd expect:

    http://www.google.com/#hl=en&source=hp&q=google+search+plus+signs&aq=f&aqi=g1&aql=&oq=&gs_rfai=&fp=d2a3ca21987adb1

    Do I need to update my browsers or something?

    Read the article

  • Can't communicate using PuTTY, but can with Termite

    - by SharpHawk
    I'm trying to establish a serial connection to a peripheral from my PC's RS-232 port. Pretty simple stuff, and I've had no trouble doing it with countless peripherals before. And yet when I configure PuTTY with the right baud rate, stop bits, etc., I'll type in "*IDN?", press Enter, and the unit won't reply. After going over my settings again and again, I decided to try another terminal program, Termite. This time it worked like a charm. What puzzles me, and what I'm trying to figure out by posting this question, is why Termite would work when PuTTY did not, despite the fact that they both have the same settings.

    PuTTY: http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
    Termite: http://www.compuphase.com/software_termite.htm

    EDIT: I have now tried Tera Term as well, and it works. So PuTTY is the odd one out.

    Read the article

  • Ubuntu and MySQL server: something isn't allowing me to connect

    - by acidzombie24
    I have a question about MySQL settings at http://serverfault.com/questions/94054/remote-connections-and-mysql-on-ubuntu/94088#94088, and now I want to figure out why I cannot connect. I made sure bind-address was commented out. I can ping the server from within the VM, but I cannot reach it from within the VM using mysqladmin --protocol=tcp --host=self_ip ping. I also followed along and checked whether my ports were open, and they look like they are. I set up Samba on that VM and can access that with no problem as well. It looks like Ubuntu does not have a firewall either (I figured this out before), so I am stumped as to why the server isn't allowing my connection. Apparently the config file works on another person's side: http://www.pastie.org/742545. I am using Ubuntu 6.06 LTS just because of 'support' reasons. So hopefully this will be 'easy'?
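    For what it's worth, a hedged checklist of the usual suspects, as shell commands (the paths assume a stock Ubuntu MySQL install, and the grant query is only an example, not taken from the question):

        # 1. Make sure mysqld is not restricted to localhost and skip-networking is off
        grep -E 'bind-address|skip-networking' /etc/mysql/my.cnf

        # 2. Confirm mysqld is actually listening on TCP port 3306
        sudo netstat -ltnp | grep 3306

        # 3. Check that the account is allowed to connect from a non-localhost host
        mysql -u root -p -e "SELECT user, host FROM mysql.user;"

    If the user table only contains entries with host = 'localhost', TCP connections from any other address are rejected even though the port is open.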

    Read the article

  • Questions about Toucan manager

    - by user23950
    I'm new to this application, Toucan manager. How do I use it to download Mediafire files automatically, just by providing the links? What link do I have to put in it? This: http://www.mediafire.com/?yoqygwzsyem or this: http://download783.mediafire.com/yerhxuymxd3g/yoqygwzsyem/op-338-muxed.mp4? When I put in the one with the .mp4, it checks the link endlessly. And one more thing: I've seen a message along the lines of "you have to agree to Mediafire's terms first" when I try to check the Mediafire icon in Toucan manager.

    Read the article

  • Cascade rewriting with special characters

    - by Korjavin Ivan
    I have this .htaccess:

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php

    All queries are sent to index.php. On my site, I have this link: http://site/search/?v[1]=myid and it works nicely; I think index.php handles it. But now I want a more readable name for this link, something like http://site.si/byid_$myid.html. I tried:

        # RewriteRule ^byid_(.*).html$ search/?v\%5B1\%5D=$1 [NC,PS,NE]  #escape and urlencode
        # RewriteRule ^byid_(.*).html$ search/?v%5B1%5D=$1 [NC,PS,NE]    #urlencode
        # RewriteRule ^byid_(.*).html$ search/?v[1]=$1 [NC,PS,NE]        #raw
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php

    Each of these lines kills rewriting; all requests return a 500 error. Where is my mistake?
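    A hedged guess at the cause, with a sketch (assuming the .htaccess sits in the document root): "PS" is not one of mod_rewrite's recognized RewriteRule flags, and an unknown flag in an .htaccess file is treated as a configuration error, which shows up as a 500 for every request. Keeping only known flags and putting the specific rule before the catch-all may be all that's needed; PHP decodes both v[1] and v%5B1%5D into $_GET['v'][1]:

        RewriteEngine On

        # readable URL first, so it is not swallowed by the catch-all below
        RewriteRule ^byid_(.+)\.html$ search/?v[1]=$1 [NC,L]

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php [L]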

    Read the article

  • How do I compile Mercurial 1.5.2 on Debian?

    - by Aaron Digulla
    I downloaded the files for Mercurial 1.5.2 from http://packages.debian.org/sid/mercurial (mercurial_1.5.2-1.debian.tar.gz, mercurial_1.5.2-1.dsc and mercurial_1.5.2.orig.tar.gz). How do I get a .deb package out of these? I tried to follow the instructions at http://www.debian.org/doc/maint-guide/ch-build.en.html but they don't work. I tried to unpack the two archives and run dpkg-buildpackage or debian/rules build, but that fails with:

        dh --with quilt clean
           dh_testdir
           debian/rules override_dh_auto_clean
        make[1]: Entering directory `/home/user/packages/mercurial-deb'
        cp -a mercurial/__version__.py mercurial/__version__.py.save
        cp: cannot stat `mercurial/__version__.py': No such file or directory
        make[1]: *** [override_dh_auto_clean] Error 1
        make[1]: Leaving directory `/home/user/packages/mercurial-deb'
        make: *** [clean] Error 2

    That's because the directory mercurial is inside mercurial_1.5.2/. Why doesn't the build script cd into the right place? If I try ../debian/rules build, I get:

        dh --with quilt build
        dh: cannot read debian/control: No such file or directory

    Sigh. How do I compile a package for Debian?
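    A hedged sketch of the standard flow for building from a Debian source package: dpkg-source unpacks the orig tarball and applies the debian/ directory in one step, so the debian/ directory ends up inside the source tree rather than next to it. (The build-dep step assumes deb-src lines in sources.list, and the exact unpacked directory name may differ slightly.)

        sudo apt-get install build-essential devscripts fakeroot
        dpkg-source -x mercurial_1.5.2-1.dsc         # creates mercurial-1.5.2/ with debian/ inside
        sudo apt-get build-dep mercurial             # pull in the build dependencies
        cd mercurial-1.5.2
        dpkg-buildpackage -us -uc -rfakeroot         # resulting .deb files land in the parent directory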

    Read the article

  • Apache: Setting up a reverse proxy configuration with SSL with url rewriting

    - by user1172468
    There is a host, secure.foo.com, that exposes a web service over HTTPS. I want to create a reverse proxy using Apache that maps a local HTTP port on a server, internal.bar.com, to the HTTPS service exposed by secure.foo.com. Since it is a web service, I need to map all URLs, so that a path https://secure.foo.com/some/path/123 is accessible by going to http://internal.bar.com/some/path/123. Thanks. I've gotten this far:

        <VirtualHost *:80>
            ServerName gnip.measr.com
            SSLProxyEngine On
            ProxyPass / https://internal.bar.com/
        </VirtualHost>

    I think this is working except for the URL rewriting. Some resources I've found on this are "Setting up a complex Apache reverse proxy" and "Apache as reverse proxy for https server".
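    A hedged sketch of the usual shape of this config (using the hostnames from the question; it assumes mod_ssl, mod_proxy and mod_proxy_http are loaded). ProxyPass should point at the external HTTPS host, and ProxyPassReverse takes care of rewriting the Location and Content-Location headers the backend sends on redirects:

        <VirtualHost *:80>
            ServerName internal.bar.com
            SSLProxyEngine On
            ProxyRequests Off
            ProxyPass        / https://secure.foo.com/
            ProxyPassReverse / https://secure.foo.com/
            # If the backend sets cookies for its own domain, this maps them too:
            ProxyPassReverseCookieDomain secure.foo.com internal.bar.com
        </VirtualHost>

    URLs embedded in response bodies (HTML, JSON) are not touched by ProxyPassReverse; if the service emits absolute links to secure.foo.com, something like mod_proxy_html or mod_substitute would be needed on top of this.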

    Read the article

  • Cannot add service account to domain group during SQL cluster install

    - by Sam
    I'm installing a SQL Server 2008 instance on a Windows Server 2003 machine that is already running SQL Server 2005. I need to set up domain groups for the security setup step (http://msdn.microsoft.com/en-us/library/ms179530.aspx): "On Windows Server 2003, specify domain groups for SQL Server services. All resource permissions are controlled by domain-level groups that include SQL Server service accounts as group members." There is much more info on this here: http://support.microsoft.com/kb/910708. I've had problems adding the Windows service accounts to the groups at install time; the security admins had to make my account a domain admin, which they were hesitant to do. The account under which SQL Server Setup runs must have permission to add accounts to the domain groups. Is there a specific security setting which would allow my account to add accounts to a group?

    Read the article

  • Proxy Methods for Hosting a Low-Bandwidth Dynamic Website

    - by Casey
    I am building a webcam with an HTTP server that will be running from a low-bandwidth connection. The content on the site will change every 5 to 10 minutes. Instead of serving files directly from this connection, are there hosting companies that can act as a public proxy for my site? That way, if nobody is using the site, the local internet connection remains idle, and if I receive 1000 hits all at the same time, only one HTTP GET reaches my connection while the hosting company (on a fat pipe) serves the other 999 requests. This doesn't sound like a very common usage model, but I feel like it would be the optimal solution to my situation.
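    What is described here is essentially a caching reverse proxy sitting in front of the origin. As an illustration of the mechanism only (the hostnames, port and cache interval are invented for the example, and it assumes a reasonably recent nginx on the hosted side):

        # Cache responses for 5 minutes so the low-bandwidth origin is fetched
        # at most once per interval, no matter how many clients hit the proxy.
        proxy_cache_path /var/cache/nginx/webcam levels=1:2 keys_zone=webcam:10m max_size=100m;

        server {
            listen 80;
            server_name cam.example.com;                          # hypothetical public name

            location / {
                proxy_pass            http://home.example.com:8080;   # hypothetical origin on the slow link
                proxy_cache           webcam;
                proxy_cache_valid     200 5m;
                proxy_cache_use_stale updating error timeout;     # keep serving the old copy while refreshing
                proxy_cache_lock      on;                         # only one request at a time goes to the origin
            }
        }

    Hosted services that package exactly this behavior (CDNs and "origin pull" caching hosts) exist as well; the config above is only meant to show the idea.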

    Read the article

  • List of tablets with keyboards.

    - by JamesM
    If you know any good tablets with keyboards that are either built-in or attachable (not via a USB port), please post a link to the tablet. Please add a review of that tablet if you have one; if you don't have one, please say what you would do with it if you had one. It must have a physical keyboard, and it cannot be a wired or wireless keyboard, only built-in/attachable; see the tablets below for more of an idea.

    Tablets with keyboards:
    http://gdgt.com/asus/eee/pad/slider/
    http://gdgt.com/asus/eee/pad/transformer/

    If you have a tablet, please say what you think of it (review): Your-Name: your review. If you don't have a tablet (with a non-USB keyboard), please say what you would do with one. JamesM: If I had a Slider or a Transformer, it would be running kFreeBSD-Debian-Squeeze. Please feel free to improve the layout/format of this topic.

    Read the article

  • Install PECL JSON in PHP 5.0.4

    - by Radu Maris
    Is it OK (compatible) to install the native PECL JSON extension (from here) in PHP 5.0.4, on a production server running FC4 where unfortunately I cannot update PHP to at least 5.2? If there is a good chance of screwing up the PHP installation on the server, I will not try to install it and will stick to Services_JSON (http://svn.php.net/viewvc/pear/packages/Services_JSON/trunk/). In the documentation (http://aurore.net/projects/php-json/) I have found: "A simple ./configure; make; make install should do the trick. Make sure to add an extension=json.so line to your php.ini/php.d." (but I can't find anything about compatible versions of PHP). Thank you. (Please don't tell me to update the OS and PHP, because it's not my decision :( )

    Read the article

  • FreeBSD high load loopback interface

    - by user1740915
    I have a problem with a FreeBSD server. There is a FreeBSD 9.0 amd64, two network cards em1 (internet), em0 (local network) configured firewall ipfw, natd, squid (not transparent), the server acts as a gateway for access to the Internet. Next problem: upload via squid is very low. At this moment I see next: natd, dhcpd load the cpu at that time when uploading through squid and there are a lot of traffic through the loopback interface. ipfw show output 0100 655389684 36707144666 allow ip from any to any via lo0 00200 0 0 deny ip from any to 127.0.0.0/8 00300 0 0 deny ip from 127.0.0.0/8 to any 00400 0 0 deny ip from any to ::1 00500 0 0 deny ip from ::1 to any 00600 4 292 allow ipv6-icmp from :: to ff02::/16 00700 0 0 allow ipv6-icmp from fe80::/10 to fe80::/10 00800 1 76 allow ipv6-icmp from fe80::/10 to ff02::/16 00900 0 0 allow ipv6-icmp from any to any ip6 icmp6types 1 01000 0 0 allow ipv6-icmp from any to any ip6 icmp6types 2,135,136 01100 1615 76160 deny ip from 192.168.1.1 to any in via em1 01200 0 0 deny ip from 199.69.99.11 to any in via em0 01300 46652 3705426 deny ip from any to 172.16.0.0/12 via em1 01400 3936404 345618870 deny ip from any to 192.168.0.0/16 via em1 01500 4 336 deny ip from any to 0.0.0.0/8 via em1 01600 4129 387621 deny ip from any to 169.254.0.0/16 via em1 01700 0 0 deny ip from any to 192.0.2.0/24 via em1 01800 917566 33777571 deny ip from any to 224.0.0.0/4 via em1 01900 147872 22029252 deny ip from any to 240.0.0.0/4 via em1 02000 1132194739 1190981955947 divert 8668 ip4 from any to any via em1 02100 3 248 deny ip from 172.16.0.0/12 to any via em1 02200 35925 2281289 deny ip from 192.168.0.0/16 to any via em1 02300 1808 122494 deny ip from 0.0.0.0/8 to any via em1 02400 3 174 deny ip from 169.254.0.0/16 to any via em1 02500 0 0 deny ip from 192.0.2.0/24 to any via em1 02600 0 0 deny ip from 224.0.0.0/4 to any via em1 02700 0 0 deny ip from 240.0.0.0/4 to any via em1 02800 960156249 1095316736582 allow tcp from any to any established 02900 64236062 8243196577 allow ip from any to any frag 03000 34 1756 allow tcp from any to me dst-port 25 setup 03100 193 11580 allow tcp from any to me dst-port 53 setup 03200 63 4222 allow udp from any to me dst-port 53 03300 64 8350 allow udp from me 53 to any 03400 417 24140 allow tcp from any to me dst-port 80 setup 03500 211 10472 allow ip from any to me dst-port 3389 setup 05300 77 4488 allow ip from any to me dst-port 1723 setup 05400 3 156 allow ip from any to me dst-port 8443 setup 05500 9882 590596 allow tcp from any to me dst-port 22 setup 05600 1 60 allow ip from any to me dst-port 2000 setup 05700 0 0 allow ip from any to me dst-port 2201 setup 07400 4241779 216690096 deny log logamount 1000 ip4 from any to any in via em1 setup proto tcp 07500 21135656 1048824936 allow tcp from any to any setup 07600 474447 35298081 allow udp from me to any dst-port 53 keep-state 07700 532 40612 allow udp from me to any dst-port 123 keep-state 65535 1990638432 1122305322718 allow ip from any to any systat -ifstat when uploading via squid Load Average ||| Interface Traffic Peak Total tun0 in 79.507 KB/s 232.479 KB/s 42.314 GB out 2.022 MB/s 2.424 MB/s 59.662 GB lo0 in 4.450 MB/s 4.450 MB/s 43.723 GB out 4.450 MB/s 4.450 MB/s 43.723 GB em1 in 2.629 MB/s 2.982 MB/s 464.533 GB out 2.493 MB/s 2.875 MB/s 484.673 GB em0 in 240.458 KB/s 296.941 KB/s 442.368 GB out 512.508 KB/s 850.857 KB/s 416.122 GB top output PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND 66885 root 1 92 0 26672K 2784K CPU3 3 528:43 65.48% natd 9160 
dhcpd 1 45 0 31032K 9280K CPU1 1 7:40 32.96% dhcpd 66455 root 1 20 0 18344K 2856K select 1 119:27 1.37% openvpn 16043 squid 1 20 0 44404K 17884K kqread 2 0:22 0.29% squid squid.conf cat /usr/local/etc/squid/squid.conf # # Recommended minimum configuration: # acl manager proto cache_object acl localhost src 127.0.0.1/32 ::1 acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1 # Example rule allowing access from your local networks. # Adapt to list your (internal) IP networks from where browsing # should be allowed acl localnet src 10.0.0.0/8 # RFC1918 possible internal network acl localnet src 172.16.0.0/12 # RFC1918 possible internal network acl localnet src 192.168.0.0/16 # RFC1918 possible internal network acl localnet src fc00::/7 # RFC 4193 local private network range acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines acl SSL_ports port 443 acl Safe_ports port 80 # http acl Safe_ports port 21 # ftp acl Safe_ports port 443 # https acl Safe_ports port 70 # gopher acl Safe_ports port 210 # wais acl Safe_ports port 1025-65535 # unregistered ports acl Safe_ports port 280 # http-mgmt acl Safe_ports port 488 # gss-http acl Safe_ports port 591 # filemaker acl Safe_ports port 777 # multiling http acl CONNECT method CONNECT # # Recommended minimum Access Permission configuration: # # Only allow cachemgr access from localhost http_access allow manager localhost http_access deny manager # Deny requests to certain unsafe ports http_access deny !Safe_ports # Deny CONNECT to other than secure SSL ports http_access deny CONNECT !SSL_ports # We strongly recommend the following be uncommented to protect innocent # web applications running on the proxy server who think the only # one who can access services on "localhost" is a local user http_access deny to_localhost # # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS # # Example rule allowing access from your local networks. # Adapt localnet in the ACL section to list your (internal) IP networks # from where browsing should be allowed http_access allow localnet http_access allow localhost # And finally deny all other access to this proxy http_access deny all # Squid normally listens to port 3128 http_port 192.168.1.1:3128 # Uncomment and adjust the following to add a disk cache directory. #cache_dir ufs /var/squid/cache 100 16 256 # Leave coredumps in the first cache dir coredump_dir /var/squid/cache I understand that the traffic passes through the SQUID several times. But can not find why.

    Read the article

  • Nginx request forking

    - by Adam
    Hi, I'm wondering if nginx can "fork" a request. Let's imagine this config:

        upstream backend {
            server localhost:8080;
            ... more servers here
        }
        server {
            location /myloc {
                FORK-REQUEST http://my-other-url:3135/something
                proxy_pass http://backend;
            }
        }

    I would like nginx to send a copy of the request to the URL specified by FORK-REQUEST and, after that, to load-balance it across the backend servers and return the response to the client. As I don't need the response from FORK-REQUEST, it would be best if this request were asynchronous so normal processing doesn't have to wait. Is a scenario like this possible?
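    There is no FORK-REQUEST directive, but newer nginx releases (1.13.4 and later) ship the ngx_http_mirror_module, which does roughly this: the mirrored subrequest is issued in the background and its response is discarded. A hedged sketch using the URLs from the question:

        upstream backend {
            server localhost:8080;
        }

        server {
            location /myloc {
                mirror /_copy;                 # send an asynchronous copy of each request
                proxy_pass http://backend;     # normal load-balanced processing
            }

            location = /_copy {
                internal;
                proxy_pass http://my-other-url:3135/something;
            }
        }

    One caveat worth verifying: nginx does not finish with the client connection until the mirror subrequest completes, so a very slow mirror target can still hold things up. On older nginx versions, the undocumented post_action directive or an external tee/replay tool are the usual workarounds.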

    Read the article

  • Reverse proxy 502 bad gateway

    - by Brian Graham
    I have set up a subdomain to proxy my Plesk panel, but when saving pages I get a 502 Bad Gateway error instead of a completion message. I am running CentOS 6. Here is my vhost.conf configuration for http://plesk.domain.tld/:

        RewriteEngine On
        RewriteCond %{SERVER_PORT} ^80$
        RewriteRule $ https://plesk.domain.tld/ [R,L]

    Here is my vhost_ssl.conf configuration for https://plesk.domain.tld/:

        SSLProxyEngine On
        <Location />
            ProxyPass https://localhost:8443/
            ProxyPassReverse https://localhost:8443/
        </Location>

    I have more than enough RAM, CPU and HDD (and I have checked); there are no spikes. Also, the posted information does save; it just errors instead of showing me the "This information has been saved." green/red block. Here is the relevant error from /var/log/nginx/error.log (IP/host filtered):

        2014/05/29 02:42:41 [error] 8046#0: *402 upstream prematurely closed connection while reading response header from upstream, client: 173.238.XX.XX, server: plesk.domain.tld, request: "POST /smb/web/edit HTTP/1.1", upstream: "https://198.100.XX.XX:7081/smb/web/edit", host: "plesk.domain.tld", referrer: "https://plesk.domain.tld/smb/web/edit"

    Read the article

  • Problems serving SVN over HTTPS on Ubuntu 10.04

    - by odd parity
    We've been experiencing some problems with our Subversion server after upgrading to Ubuntu 10.04. When trying to access a repository, regardless of client (I've tried git-svn and svn on Windows as well as svn on Ubuntu 10.04, from different computers and network locations), I get a 400 Bad Request. Here's the output from svn:

        svn: Server sent unexpected return value (400 Bad Request) in response to OPTIONS request for 'https://svn.example.org/svn/programs'

    Here are the relevant entries from the Apache logs (I'm running Apache 2.2):

    error.log:

        [Mon Jun 14 11:29:31 2010] [error] [client x.x.x.x] request failed: error reading the headers

    ssl_access.log:

        x.x.x.x - - [14/Jun/2010:11:29:28 +0200] "OPTIONS /svn/programs HTTP/1.1" 401 2643 "-" "SVN/1.6.6 (r40053) neon/0.29.0"
        x.x.x.x - - [14/Jun/2010:11:29:31 +0200] "ction-set/></D:options>OPTIONS /svn/programs HTTP/1.1" 400 644 "-" "SVN/1.6.6 (r40053) neon/0.29.0"

    If anyone has run into similar problems or could give me a pointer to track down the cause of this, I'd be very grateful; I'd really like to avoid having to downgrade the box again.

    Read the article

  • SharePoint Designer prompts for credentials when edited from IE8

    - by Rob Nicholson
    Our intranet is hosted using the free SharePoint Services on Windows 2003. Consider the following page: http://vserver003/help/technology/multimedia/multimedia.htm. On selecting "Edit with Microsoft Office SharePoint Designer" from IE8, SharePoint Designer launches and opens the website and then the selected page; all is well. To make moving the intranet easier, we've set up a DNS entry called intranet.company.local so you can also access the intranet that way: http://intranet.company.local/help/technology/multimedia/multimedia.htm. However, when you edit this page, SharePoint Designer prompts you for credentials, i.e. domain\username and password. If you enter the details, it opens fine. If you don't, the page still opens but the website does not. Any ideas how to get around this prompt? I haven't a clue where to start looking. Thanks, Rob. PS: The same prompt occurs if you use the physical IP address.

    Read the article

  • mod_proxy failing as forward proxy in simple configuration

    - by Stabledog
    (On Mac OS X 10.6, Apache 2.2.11) Following the oft-repeated googled advice, I've set up mod_proxy on my Mac to act as a forward proxy for HTTP requests. My httpd.conf contains this:

        <IfModule mod_proxy>
            ProxyRequests On
            ProxyVia On
            <Proxy *>
                Allow from all
            </Proxy>

    (Yes, I realize that's not ideal, but I'm behind a firewall trying to figure out why the thing doesn't work at all.) So, when I point my browser's proxy settings to the local server (ip_address:80), here's what happens:

    1. I browse to http://www.cnn.com
    2. I see via sniffer that this is sent to Apache on the Mac
    3. Apache responds with its default home page ("It works!" is all this page says)

    So... Apache is not doing as expected: it is not forwarding my browser's request out onto the Internet to cnn. Nothing in the logfile indicates an error or problem, and Apache returns a 200 header to the browser. Clearly there is some very basic configuration step I'm not understanding... but what?
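    A hedged sketch of a forward-proxy block that has worked elsewhere, with two things worth checking against the config above: <IfModule> matches on the module file name (mod_proxy.c) or its identifier (proxy_module), so <IfModule mod_proxy> may never match and the whole block would then be silently skipped; and mod_proxy_http must be loaded for http:// URLs to actually be forwarded. Putting the proxy on its own port also keeps it from colliding with the default "It works!" vhost on port 80 (the port choice here is illustrative):

        LoadModule proxy_module modules/mod_proxy.so
        LoadModule proxy_http_module modules/mod_proxy_http.so

        Listen 8080
        <IfModule mod_proxy.c>
            ProxyRequests On
            ProxyVia On
            <Proxy *>
                Order deny,allow
                Deny from all
                Allow from 127.0.0.1    # "Allow from all" would create an open proxy
            </Proxy>
        </IfModule>

    With something like this, the browser's proxy settings would point at ip_address:8080 rather than port 80.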

    Read the article

  • Compiling to a binary from source

    - by Chords
    I'm using WKHTMLTOPDF on a 64-bit Linux server and I'm running into problems with the version. Seen here: http://code.google.com/p/wkhtmltopdf/downloads/list There's slim pickins when it comes to pre-compiled binaries. I started with version 0.9.9 which has a few bugs. I upgraded to 0.11.0 RC 1 to find a slew of new problems, namely the following: http://code.google.com/p/wkhtmltopdf/issues/detail?id=730 I think 0.10 RC 2 would work, and the thread above suggests compiling from the source has a fix for the error I'm getting, but I don't know how to do that. Can anyone explain how I can create a static binary myself, or would anyone be willing to create and post one for the countless people waiting for this fix?

    Read the article

  • CakePHP in a subdirectory using nginx (Rewrite rules?)

    - by lhnz
    I managed to get this to work a while back, but on returning to the CakePHP project I had started, it seems that whatever changes I've made to nginx recently (or perhaps a recent update) have broken my rewrite rules. Currently I have:

        worker_processes 1;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            keepalive_timeout 65;

            server {
                listen 80;
                server_name localhost;

                location / {
                    root html;
                    index index.php index.html index.htm;
                }

                location /basic_cake/ {
                    index index.php;
                    if (-f $request_filename) {
                        break;
                    }
                    if (!-f $request_filename) {
                        rewrite ^/basic_cake/(.+)$ /basic_cake/index.php?url=$1 last;
                        break;
                    }
                }

                location /cake_test/ {
                    index index.php;
                    if (-f $request_filename) {
                        break;
                    }
                    if (!-f $request_filename) {
                        rewrite ^/cake_test/(.+)$ /cake_test/index.php?url=$1 last;
                        break;
                    }
                }

                # redirect server error pages to the static page /50x.html
                #
                error_page 500 502 503 504 /50x.html;
                location = /50x.html {
                    root html;
                }

                # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
                #
                location ~ \.php$ {
                    root html;
                    fastcgi_pass 127.0.0.1:9000;
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    include fastcgi_params;
                }
            }

            server {
                listen 8081;
                server_name localhost;
                root /srv/http/html/xsp;

                location / {
                    index index.html index.htm index.aspx default.aspx;
                }

                location ~ \.(aspx|asmx|ashx|asax|ascx|soap|rem|axd|cs|config|dll)$ {
                    fastcgi_pass 127.0.0.1:9001;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    include fastcgi_params;
                }
            }
        }

    The problem I have is that the CSS and images will not load from the webroot. Instead, if I visit http://localhost/basic_cake/css/cake.generic.css, I get a page which tells me:

        CakePHP: the rapid development php framework
        Missing Controller
        Error: CssController could not be found.
        Error: Create the class CssController below in file: app/controllers/css_controller.php

            <?php
            class CssController extends AppController {
                var $name = 'Css';
            }
            ?>

        Notice: If you want to customize this error message, create app/views/errors/missing_controller.ctp
        CakePHP: the rapid development php framework

    Does anybody have any ideas on how to fix this?
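    For what it's worth, a hedged sketch of how this is often handled (it assumes nginx 0.7.27 or later for try_files, and that the CakePHP app's webroot contents, css/, img/, js/ and index.php, really do live under html/basic_cake/). try_files serves existing files such as css/cake.generic.css directly and only falls back to the CakePHP front controller when nothing matches, which also removes the if blocks:

        location /basic_cake/ {
            root html;
            index index.php;
            # real files (CSS, images, JS) are served as-is; everything else goes to CakePHP
            try_files $uri $uri/ @basic_cake;
        }

        location @basic_cake {
            rewrite ^/basic_cake/(.+)$ /basic_cake/index.php?url=$1 last;
        }

    The same pattern would apply to the /cake_test/ location. If the stylesheet still comes back as a Missing Controller page with this in place, the file simply is not where root plus the URI points, which is worth checking first.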

    Read the article

  • Heartbeat won't start up from a cold boot when a failed node is present

    - by Matthew
    I currently have two Ubuntu servers running Heartbeat and DRBD. Let's say one node is down... The servers are directly connected with a 1000 Mbps crossover cable on eth1 and have access to an IP camera LAN on eth0. The node that is still functioning won't start up Heartbeat and provide access to the DRBD resource; I have to manually restart Heartbeat with "sudo service heartbeat restart" to get everything up and running. How can I get it to start fine from a cold boot? Here are my ha.cf and some material from the syslog; let me know if I'm missing any information that might be of some help.

    http://pastebin.com/rGvzVSUq <--- Syslog
    http://pastebin.com/VqpaPSb5 <--- ha.cf

    Read the article
