Search Results

Search found 50475 results on 2019 pages for 'rpc over http'.


  • Error while installing phabricator using http://www.phabricator.com/rsrc/install/install_rhel-derivs.sh

    - by Saurav Shah
    The command that's run is:

        yum install httpd git php php-cli php-mysql php-process php-devel php-gd php-pecl-apc php-pecl-json mysql-server

    I get these errors. How do I fix them?

        Error: Package: php-devel-5.3.3-3.el6_2.6.x86_64 (rhel6-optional)
               Requires: php = 5.3.3-3.el6_2.6
               Available: php-5.3.3-3.el6.x86_64 (rhel6-base)
                   php = 5.3.3-3.el6
               Installing: php-5.3.3-14.el6_3.x86_64 (rhel6-updates)
                   php = 5.3.3-14.el6_3
        Error: Package: php-process-5.3.3-3.el6_2.6.x86_64 (rhel6-optional)
               Requires: php-common = 5.3.3-3.el6_2.6
               Available: php-common-5.3.3-3.el6.x86_64 (rhel6-base)
                   php-common = 5.3.3-3.el6
               Installing: php-common-5.3.3-14.el6_3.x86_64 (rhel6-updates)
                   php-common = 5.3.3-14.el6_3
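
    What the errors describe is version skew between repositories: php-devel and php-process in rhel6-optional require the exact PHP build from rhel6-base, while rhel6-updates pulls in a newer one. A possible workaround, assuming the repository ids match those shown in the error output, is to refresh the metadata and, if the skew persists, resolve everything from one set of repos:

        # Refresh yum metadata in case the rhel6-optional data is stale
        yum clean all
        yum makecache
        # If the conflict remains, keep all php-* packages at the same
        # version by leaving the updates repo out of the transaction
        yum --disablerepo=rhel6-updates install httpd git php php-cli \
            php-mysql php-process php-devel php-gd php-pecl-apc \
            php-pecl-json mysql-server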

    Read the article

  • Open Source Chef Server can't upload cookbook

    - by veilig
    I just set up the open source Chef server on an Ubuntu 12.04 EC2 instance. I've set up my webui and am able to get responses from my knife commands, e.g. knife node list, knife client list, knife user list, etc. I'm able to update roles, databags, environments, etc., but I cannot upload any cookbooks. I'm running my workstation on Mac OS X. I keep getting this output at the end of my command knife cookbook upload -VV curl. It doesn't matter which cookbook I upload, or if I upload them all; I keep getting the same response:

        DEBUG: Chef::HTTP calling Chef::HTTP::ValidateContentLength#handle_response
        DEBUG: Chef::HTTP calling Chef::HTTP::RemoteRequestID#handle_response
        DEBUG: Chef::HTTP calling Chef::HTTP::Authenticator#handle_response
        DEBUG: Chef::HTTP calling Chef::HTTP::Decompressor#handle_response
        DEBUG: Chef::HTTP calling Chef::HTTP::CookieManager#handle_response
        DEBUG: Chef::HTTP calling Chef::HTTP::JSONToModelOutput#handle_response
        /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http/json_output.rb:51:in `handle_response': undefined method `chomp' for nil:NilClass (NoMethodError)
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:229:in `block in apply_response_middleware'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:227:in `each'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:227:in `inject'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:227:in `apply_response_middleware'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:144:in `request'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:118:in `put'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/cookbook_uploader.rb:123:in `block in uploader_function_for'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/util/threaded_job_queue.rb:52:in `call'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/util/threaded_job_queue.rb:52:in `block (3 levels) in process'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/util/threaded_job_queue.rb:50:in `loop'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/util/threaded_job_queue.rb:50:in `block (2 levels) in process'
        INFO: HTTP Request Returned 204 No Content:
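
    One detail worth noting from the trace: the workstation is running a prerelease gem (chef-12.0.0.alpha.1), and the crash happens while parsing a bodyless 204 response. A plausible fix, assuming the prerelease is the culprit, is to pin knife to a stable Chef release (the version below is one stable 11.x release, named here only as an example):

        # Remove the prerelease client and pin a stable one
        gem uninstall chef -v 12.0.0.alpha.1
        gem install chef -v 11.16.4
        # Retry the upload with full debugging
        knife cookbook upload curl -VV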

    Read the article

  • Exposing a WebServer behind a firewall without Port Forwarding

    - by pbreault
    We are deploying web applications in Java using Tomcat on client machines across the country. Once they are installed, we want to allow remote access to these web applications through a central server, but we do not want our clients to have to open ports on their routers. Is there a way to tunnel the HTTP traffic so that people connected to the central server can access the web applications that are behind a firewall? The central server has a static IP address and we have full control over it. Right now it is a Windows box, but it could be changed to a Linux box if necessary. Our clients are running Windows XP and up. We don't need to access the filesystem; we only want to access the web application through a browser. We have looked at reverse SSH tunneling, but it presents a scaling problem, since every packet would have to pass through the central server.
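
    For reference, a minimal sketch of the reverse-tunnel approach being weighed (hostnames, ports and the account are placeholders; it assumes an SSH server on the central host with GatewayPorts enabled in sshd_config):

        # Run on each client machine: publish local Tomcat (8080) as
        # port 18080 on the central server
        ssh -N -R 18080:localhost:8080 tunnel@central.example.com
        # Users then browse to http://central.example.com:18080/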

    Read the article

  • Windows HTTP tunnel through 2 Linux hosts?

    - by Darkmage
    The localhost only has a connection to Host1; Host1 has connections to Host2 and localhost. How can I set this up so that Host2 acts as a proxy for web traffic from localhost? I have seen similar topics but can't get it to work. How do I set it up on the XP client?
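
    A sketch of one way to chain the hops from the XP machine using PuTTY's command-line tool plink (hostnames and ports are placeholders; it assumes SSH access to both Linux hosts):

        :: Hop 1: forward a local port through Host1 to Host2's SSH port
        plink -N -L 2222:host2.example.com:22 user@host1.example.com
        :: Hop 2: through that forward, open a SOCKS proxy via Host2
        plink -N -P 2222 -D 8080 user@localhost
        :: Finally, point the XP browser at SOCKS proxy localhost:8080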

    Read the article

  • phpMyAdmin works with HTTPS only (not HTTP)

    - by 01010011
    Hi, I've been having a problem getting phpMyAdmin to work consistently on my XP desktop and laptop computers for months now. When I typed localhost/phpmyadmin into Chrome's address bar on both machines, I kept getting Error #1045 Access Denied for user root@localhost (using password: yes). Eventually, I realized that I had two versions of MySQL installed (XAMPP and MySQL Server 5.1) on both machines. So I uninstalled MySQL Server 5.1 from the desktop, and phpMyAdmin worked. But when I uninstalled MySQL Server 5.1 from my laptop, it did not work. I realized I could still get into the MySQL command-line client using my password and that my databases were still intact, so I uninstalled and reinstalled XAMPP on the laptop, and phpMyAdmin worked after that. Now I have a new problem. phpMyAdmin's home page has a message at the bottom: "Your configuration file contains settings (root with no password) that correspond to the default MySQL privileged account. Your MySQL server is running with this default, is open to intrusion, and you really should fix this security hole by setting a password for user 'root'." So I located the following lines in the config.inc.php file:

        /* Authentication type and info */
        $cfg['Servers'][$i]['auth_type'] = 'config';
        $cfg['Servers'][$i]['user'] = 'root';
        $cfg['Servers'][$i]['password'] = '';
        $cfg['Servers'][$i]['AllowNoPassword'] = true;

    and I just changed the last two lines as follows:

        $cfg['Servers'][$i]['password'] = 'mypassword';
        $cfg['Servers'][$i]['AllowNoPassword'] = false;

    As soon as I did that and tried to access phpMyAdmin again, I got the Error #1045 message again, but when I tried https://localhost/phpmyadmin/ I got a red page saying this site's certificate is not trusted, would you like to proceed anyway. And now it only works using https. I would really like to settle all my phpMyAdmin problems once and for all, so here are my questions:

    1. Why does my laptop only access phpMyAdmin via https?
    2. How do I change my password in my configuration file?

    Also, if you have any other tips regarding phpMyAdmin, they are very welcome. Thanks in advance
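
    On question 2, a hedged sketch: the password in config.inc.php is only what phpMyAdmin presents to MySQL, so it has to match a password actually set on the MySQL side first. Assuming a standard phpMyAdmin setup, cookie authentication also avoids keeping the password in the file at all:

        /* First set the password inside MySQL itself, from the MySQL
           command-line client:
           SET PASSWORD FOR 'root'@'localhost' = PASSWORD('mypassword'); */

        /* Then let phpMyAdmin prompt at login instead of storing it */
        $cfg['Servers'][$i]['auth_type'] = 'cookie';
        $cfg['Servers'][$i]['user'] = 'root';
        $cfg['Servers'][$i]['password'] = '';   /* unused with cookie auth */
        $cfg['Servers'][$i]['AllowNoPassword'] = false;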

    Read the article

  • Command-line HTTP crawler for Windows?

    - by Pekka
    Would somebody have a recommendation for a web site crawler that can be invoked and equipped with settings from the command line? This would need to run in a Windows environment. Saving the data, following stylesheet links etc. is not an issue. I only need the crawler to start with a page, parse it, and follow all the links on the same domain so that in the end, all pages on the site have been requested once. Background: I'm setting up a web site that gets frequently uploaded from an office location. Combining data from various sources, it has several levels of caching. I don't want the first user to visit the site after a fresh upload to have to wait until the page has been generated and saved in the cache.
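
    A hedged sketch using wget, one common choice that runs on Windows (the URL is a placeholder). --spider requests pages without saving them, and recursion stays on the starting host by default:

        :: Warm the cache by requesting every linked page once
        wget --spider --recursive --level=inf --no-verbose http://www.example.com/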

    Read the article

  • Apache: how to set a custom 401 error page and keep the original behaviour

    - by petRUShka
    I have Kerberos-based authentication with Apache/2.2.3 (Linux/SUSE). When a user tries to open some URL, the browser asks for a domain login and password, as in HTTP Basic Auth. If the user cancels that prompt three times, Apache returns the 401 Authorization Required error page. My current virtual host config is:

        <Directory /home/user/www/current/public/>
            Options -MultiViews +FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
            AuthType Kerberos
            AuthName "Domain login"
            KrbAuthRealms DOMAIN.COM
            KrbMethodK5Passwd On
            Krb5KeyTab /etc/httpd/httpd.keytab
            require valid-user
        </Directory>

    I want to set a nice custom 401 error page with some instructions for users, so I added this line to the virtual host config:

        ErrorDocument 401 /pages/401

    It works: when the user fails to authorize, Apache redirects them to my nice page. But Apache no longer asks the user for a login and password as it did before. I want this functionality and the nice error page simultaneously! Is it possible to make it work properly?
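
    A hedged guess at the usual gotcha: the ErrorDocument body replaces the challenge when Apache cannot serve it alongside the 401, for example if the target is itself behind the auth requirement or if it resolves to an external redirect, which turns the 401 into a 302. A sketch that keeps the document local and outside the protected tree (paths are assumptions):

        ErrorDocument 401 /errors/401.html
        # Serve the error page from an unprotected location
        <Location /errors>
            Order allow,deny
            Allow from all
            Satisfy Any
        </Location>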

    Read the article

  • PXE Boot PCLinuxOS ISO

    - by DBNotCooper
    I'm in the process of trying to convert some computers at my local school into diskless browser stations. We've identified PCLinuxOS as the OS we'd like to use, due to its easy interface for creating custom ISO images (we need WINE and some custom apps installed, as well as Firefox). I've been having problems figuring out how to get an ISO to boot via PXE. On our network I only have access to TFTP and HTTP, so I cannot use NFS. The machines all have enough memory (4 GB) that they could use a RAM drive to hold the ISO image, if that helps. Currently I've been looking at gPXE with GRUB/MEMDISK, but I don't know if that's the right solution, or even where a good resource is for setting it up. Searching the web has proved fruitless, as most of the information is either NFS-specific or out of date. The other students and I would appreciate any help! :)
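
    For reference, a minimal PXELINUX/MEMDISK entry (filenames are placeholders; memdisk and the ISO sit in the TFTP root, and the whole image is pulled into RAM, which the 4 GB machines can accommodate). gPXE's value in this setup is that it can fetch the ISO over HTTP rather than slow TFTP:

        LABEL pclinuxos
            KERNEL memdisk
            INITRD pclinuxos-custom.iso
            APPEND iso raw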

    Read the article

  • ASP.NET website HTTP requests appear to be queueing

    - by scolemann
    We cloned our servers this weekend into a colo. All non-ASP.NET sites are performing great, but ASP.NET sites are very slow. It appears to be an issue with the requests/connections, but I cannot figure out where. The reason I think it is a problem with the connections is that when I launch Fiddler and watch the requests, they all appear to happen sequentially: even the static image requests are taking 5 seconds, and another one doesn't start until the first one finishes. MaxConnections is set to 100 in machine.config and the "website connections" are set to unlimited. Any idea what else could be causing this? From machine.config:
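
    For reference, a generic connectionManagement block of the kind being described, as a sketch only and not the poster's actual excerpt:

        <configuration>
          <system.net>
            <connectionManagement>
              <!-- cap simultaneous outbound connections per host -->
              <add address="*" maxconnection="100" />
            </connectionManagement>
          </system.net>
        </configuration>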

    Read the article

  • How to ignore query parameters in web cache?

    - by eduardocereto
    Google Analytics uses some query parameters to identify campaigns and to do cookie control. This is all handled by JavaScript code. Take a look at the following example:

        http://www.example.com/?utm_source=newsletter&utm_medium=email&utm_term=October%2B2008&utm_campaign=promotion

    This will set cookies via JavaScript with the right campaign origin. These query parameters can have multiple and sometimes random values. Since they are used as part of the cache hash key, cache performance is heavily degraded in some scenarios. I suppose there's a not-so-hard configuration on cache servers to just ignore all query parameters, or specific ones. Am I right? Does anyone know how hard this is to set up in popular web cache solutions? I'm not interested in a specific web cache solution; it would be great to hear about the one you use.
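
    As one data point, a sketch of how this looks in Varnish VCL (the utm_ prefix match is an assumption; widen it to whatever your campaigns use). Stripping the parameters before the URL is hashed makes all campaign variants share one cache object:

        sub vcl_recv {
            # Remove utm_* parameters appearing after the first parameter
            set req.url = regsuball(req.url, "&utm_[a-z]+=[^&]*", "");
            # Remove a utm_* parameter appearing first, keeping the '?'
            set req.url = regsuball(req.url, "\?utm_[a-z]+=[^&]*&?", "?");
            # Drop a now-empty query string
            set req.url = regsub(req.url, "\?$", "");
        }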

    Read the article

  • Blocking HTTP clients which request certain URLs repeatedly

    - by Guido Domenici
    I run a website on Windows Server 2008 R2. Looking through the IIS logs, I have noticed that there are some IP addresses repeatedly requesting certain URLs (such as for example /mysql/phpmyadmin/main.php, /phpadmin/main.php) which do not exist, as the site is entirely served off of ASP.NET. They are obviously fishing for known vulnerabilities. My question is, are there any firewall or other tools (Windows built-in or commercial) that allow me to block those IP addresses which request certain URLs multiple times?
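
    One manual building block, as a sketch: the built-in Windows Firewall on Server 2008 R2 can block an offender once identified from the IIS logs (the address below is a placeholder); log-watching tools largely automate this same step:

        netsh advfirewall firewall add rule name="Block vuln scanner" ^
            dir=in action=block remoteip=203.0.113.7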

    Read the article

  • In Nginx, can I handle either a Location: url or a Content-Type: text/html response from memcached?

    - by Sean Foo
    I'm setting up an nginx to Apache reverse proxy where nginx handles the static files and Apache the dynamic ones. I have a search engine, and depending on the search parameter I either directly forward the user to the page they are looking for or provide a set of search results. I cache these results in memcached as:

        key:   /search.cgi?q=foo
        value: LOCATION: http://www.example.com/foo.html

    and:

        key:   /search.cgi?q=bar
        value: CONTENT-TYPE: text/html
               <html> .... .... </html>

    I can pull the "Content-Type ..." values out of memcached using nginx and send them to the user, but I can't quite figure out how to handle a returned value like "Location ...". Can I?
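
    For context, a sketch of the usual memcached_pass wiring (addresses and ports are assumptions). The stock memcached module only emits the stored body with one fixed Content-Type; it cannot turn a stored value into a Location header, so redirect entries would have to fall through to Apache or be handled by an embedded scripting module:

        location / {
            set $memcached_key $uri$is_args$args;
            memcached_pass 127.0.0.1:11211;
            default_type   text/html;
            # miss (or unusable entry) -> hand off to Apache
            error_page 404 502 504 = @apache;
        }
        location @apache {
            proxy_pass http://127.0.0.1:8080;
        }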

    Read the article

  • Automatic server redirect that passes through as the referring site

    - by nsearle
    Hey all, as always, I would like to thank you for your help and assistance in advance. I am looking for a way to automatically redirect a user to a site while maintaining the redirecting site as the referral site. Let me explain the desired process step by step:

    1. User clicks on a desired link in an email
    2. User is navigated to testdomain.com
    3. User is automatically redirected to test.com/landing
    4. test.com sees testdomain.com as the referral site
    5. Data is gathered via Google Analytics

    I am unsure whether this PHP code will take care of it or not:

        header("Location: http://google.com", true, 303);

    I could test it, and that is most likely what I am going to do. But I would like to understand a little more as to WHY this would or wouldn't work.
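
    A hedged note with a sketch: an HTTP Location redirect generally does not make testdomain.com the Referer header; browsers keep the original referrer (or send none), so Google Analytics would not see the hop. Tagging the landing URL with campaign parameters is the usual way to make the intermediate domain visible (parameter values here are placeholders):

        <?php
        // Redirect while labelling the hop for Google Analytics
        header('Location: http://test.com/landing'
             . '?utm_source=testdomain.com&utm_medium=redirect', true, 302);
        exit;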

    Read the article

  • How do I protect large file downloads through PHP and/or Apache?

    - by Eric
    We have some large files (1-8GB) that are not publicly accessible. Currently we're serving them up through a PHP script that buffers the files in 1MB chunks and writes it to the output. It's incredibly CPU intensive and slows the server down when only a few downloads are active. We want to move the file transfer work to Apache or a more efficient method. We are using cookie authentication. FTP downloads are out unless there's some way to authenticate FTP sessions through the existing PHP session cookie. Ideally we'd like something where we can use PHP to hide the link to the file while it passes off the file transfer work to Apache, which is no doubt far more efficient at HTTP file transfers than PHP. We want to be able to resume downloads as well. Any help is appreciated.
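
    One approach that matches these constraints, sketched under the assumption that the third-party mod_xsendfile module is installed and enabled (XSendFile On) for the vhost; the path and session check are placeholders. PHP keeps the link hidden and does the cookie check, then Apache streams the file, including Range/resume support:

        <?php
        session_start();
        // Gate the download on the existing PHP session cookie
        if (empty($_SESSION['authenticated'])) {
            header('HTTP/1.1 403 Forbidden');
            exit;
        }
        // Hand the actual transfer to Apache via mod_xsendfile
        header('X-Sendfile: /data/private/big-file.bin');
        header('Content-Type: application/octet-stream');
        header('Content-Disposition: attachment; filename="big-file.bin"');
        exit;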

    Read the article

  • Redirect HTTP to HTTPS on certain pages?

    - by Elliott
    Hi, below is some code I added to my .htaccess. How can I make certain pages, such as login.php and login.html, redirect to HTTPS? Also, if the user types in www. they get an "untrusted connection" warning, as the SSL certificate is only valid without the www. How could I fix this? Thanks

        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteCond %{REQUEST_URI} /login.html
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
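
    A sketch extending those rules (example.com stands in for the real domain). Matching both login pages in one condition, and sending all www. requests to the bare domain, also sidesteps the certificate warning, since the redirect target then always matches the certificate:

        RewriteEngine On
        # Send the login pages to HTTPS on the bare domain
        RewriteCond %{HTTPS} off
        RewriteCond %{REQUEST_URI} ^/login\.(php|html)$
        RewriteRule ^(.*)$ https://example.com/$1 [R=301,L]
        # Strip www. so the certificate always matches
        RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
        RewriteRule ^(.*)$ https://%1/$1 [R=301,L]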

    Read the article

  • making cookies persistent in IE8

    - by Jamie Stevens
    There's a website I sign into frequently, and I'm getting sick of entering my username and password every time. The website can remember who I am so long as I don't close my browser (Internet Explorer 8), but when I do, it forgets me and asks me to log in again. I'm guessing this is because it's using a cookie (and perhaps a session) that expires when I close my browser. Is there any way to make this information persist across browser restarts? (I tried exporting the cookies to a file and then importing them as soon as the browser was reloaded, but that didn't work either; I'm thinking the cookie text file needs to be modified somehow.) (FYI, the website is http://blackboard.unh.edu, but you won't have access unless you happen to be a student there :-) NOTE: I'm not interested in using any password-remembering features in the browser. The only solution I'm open to is making the cookie/session persistent somehow!
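
    On the "needs to be modified somehow" hunch, a hedged sketch: IE's cookie export is generally in Netscape cookies.txt format, whose tab-separated fields are domain, include-subdomains flag, path, secure flag, expiry (Unix time), name, and value. A session cookie carries no usable future expiry, so giving it a far-future timestamp before re-importing is the modification in question (the name and value below are placeholders, and the site may still invalidate its session server-side regardless):

        .blackboard.unh.edu	TRUE	/	FALSE	1893456000	session_id	abc123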

    Read the article

  • Varnish doesn't seem to be caching

    - by Charlie Somerville
    I've set up a Varnish cache mirror to sit in front of a file server, but it seems to be endlessly re-downloading data from my file server. There's about 100 GB of data in total, but so far Varnish has downloaded 800 GB from my file server. I'm using the default VCL file that comes with Varnish, and the response headers for files served by the file server are similar to the following:

        HTTP/1.1 200 OK
        Cache-Control: max-age=290304000, public
        Content-Type: image/jpeg
        Expires: Wed, 29 Dec 2010 21:38:33 GMT
        Server: Microsoft-IIS/7.0
        E-Tag: "8b4723296ab697530768f18b1378b269"
        Content-Disposition: inline; filename=image046.jpg;
        X-AspNet-Version: 4.0.30319
        X-Powered-By: ASP.NET
        Date: Thu, 23 Dec 2010 05:38:33 GMT
        Content-Length: 100592

    I'm starting varnishd with the following options:

        varnish/sbin/varnishd -a 0.0.0.0:80 -f varnish/etc/varnish/default.vcl \
            -s file,varnish/var/lib/varnish/varnish_storage.bin,100G
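
    Two hedged observations plus a sketch: with roughly 100 GB of content in a file storage of exactly 100G, Varnish has no headroom and will LRU-evict constantly, which alone can explain endless re-fetching, so sizing the storage above the working set is the first thing to try. Separately, forcing a TTL for image responses makes caching immune to backend header quirks (note the non-standard "E-Tag" spelling). A Varnish 2.x-style VCL sketch:

        sub vcl_fetch {
            if (req.url ~ "\.(jpg|jpeg|png|gif)$") {
                # Ignore backend cookie/header quirks and force caching
                unset beresp.http.Set-Cookie;
                set beresp.ttl = 30d;
                return (deliver);
            }
        }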

    Read the article

  • Most scalable way of serving a small set of static HTTP content

    - by Ekevoo
    The story: Hi guys. I'm among the people responsible for serving the results of the most anticipated (by number of people participating) annual entrance exam in my state. As such, when our results are published, the interest is overwhelming. In the past we delegated the responsibility of serving the results to the media, but that spoils a little the officialness of these results. This year we went with a little (long overdue) experiment of using lighttpd instead of Apache, as well as other physical network optimizations I wasn't directly involved with. The results were very satisfactory: the server didn't choke even once, nor did we see any of the usual Twitter complaints about unavailability and/or slowness that were previously common. However, because we still delegated the first publication of the results to the media, I'm still not 100% sure we can handle the load of actually publishing the results first. The question: Because these files are only about 14 MB in total, and a truly lightweight Linux distribution isn't that big either, I'm thinking: what if next year we run everything from a RAM drive? Is there such a setup? Is it useful? Is it worth it for a team that uses Debian almost exclusively? Are there other optimizations I should be focusing on instead?
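
    For what it's worth, a minimal sketch of the RAM-drive idea on Debian (mount point and size are placeholders). One caveat: with only ~14 MB of hot files, the kernel page cache will normally keep everything in RAM after the first read anyway, so tmpfs mostly buys predictability rather than raw speed:

        # Create a RAM-backed filesystem and copy the results into it
        mount -t tmpfs -o size=64m tmpfs /var/www/results
        cp -a /srv/results/. /var/www/results/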

    Read the article

  • Connect Android to database

    - by danny
    I am doing a school project where we need to create an Android application which needs to connect to a database. The application needs to retrieve and store information for people's profiles in the database. Unfortunately we are a little bit stuck at this point, because there are numerous ways to link the application, such as HTTP requests through Apache or through SOAP/REST services, and it's really hard to find good instructions or tutorials on the problem. Maybe that's because I'm using the wrong words on Google. I have little relevant information to go on, so if anyone can help me find relevant links to good online tutorials or howtos, those are very welcome.
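
    To make the usual architecture concrete, a hedged Java sketch (the URL and names are placeholders): the phone never speaks to the database directly; it calls an HTTP endpoint, and a server-side script behind Apache translates that into SQL. On Android this must run off the main thread (e.g. in an AsyncTask or background thread):

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class ProfileClient {
            // Fetch a profile as raw JSON text from the web service
            public static String fetchProfile(int userId) throws Exception {
                URL url = new URL("http://www.example.com/api/profile?id=" + userId);
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                try {
                    BufferedReader in = new BufferedReader(
                            new InputStreamReader(conn.getInputStream()));
                    StringBuilder body = new StringBuilder();
                    String line;
                    while ((line = in.readLine()) != null) {
                        body.append(line);
                    }
                    in.close();
                    return body.toString();
                } finally {
                    conn.disconnect();
                }
            }
        }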

    Read the article

  • Send notification from HTTP bot (RESTful service or whatever)

    - by Kuroki Kaze
    I have a very simple bot that gathers and parses web pages. It's on a machine in the network, behind NAT (so I cannot set up a web server, for example), and I don't have an MTA set up. The bot should notify me about changes in parsed pages (once an hour or two, to one recipient). How can this be done? Are there any RESTful email gateways, like the SMS ones? I could set up a Twitter account for it and use curl to post status updates/DMs, but it's a very temporary bot.
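
    As one example of the gateway idea (Mailgun here purely as an illustration; the API key, domain, and addresses are placeholders, and any HTTP-to-email service with a similar REST interface would do):

        curl -s --user 'api:YOUR_API_KEY' \
            https://api.mailgun.net/v3/example.com/messages \
            -F from='bot@example.com' \
            -F to='you@example.com' \
            -F subject='Watched pages changed' \
            -F text='The following pages changed in the last hour: ...'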

    Read the article

  • getting a 404/403 error for payment gateway

    - by Obay Ouano
    We are setting up an online payment facility using a payment gateway. After the payment gateway finishes processing the credit card details for a payment, the user is redirected to a "403 Forbidden" page. The logs show:

        [MY_IP_ADDRESS_HERE] - - [SOME_DATE_HERE] "GET /POSTBACK_URL.php?txnid=1338434567&result=failure&reason=The+remote+server+returned+an+error%3a+(404)+Not+Found.&digest=7a115270c56df5945c43ad86e56b2e930a3cfd50 HTTP/1.1" 404 - "PAYMENT_GATEWAY_URL_HERE" "BROWSER_DETAILS_HERE"

    This means that when the PAYMENT_GATEWAY_URL attempts to open our POSTBACK_URL, it gets a 404 error, is that correct? But why does the page say "403 Forbidden"? Anyway, we tried to copy and paste that same URL into the browser window, and the page opened successfully, with our programmed error notification message. So why couldn't it be opened when the payment gateway tried to redirect to it, when we could open it ourselves? Is this some sort of permissions issue? If so, the postback URL's file permissions are already 755. What am I missing?

    Read the article

  • ISA caching with no cache-related info in response header

    - by Mike M. Lin
    From the documentation, I can't figure out what criteria an ISA Server uses to decide whether a cached file is still valid when no cache-related info is in the response header. Let's say I got this header in my response on Thu, 13 Jan 2011 18:43:35 GMT:

        HTTP/1.1 200 OK
        Date: Thu, 13 Jan 2011 18:43:35 GMT
        Server: Apache/2.2.3 (Red Hat)
        Content-Language: en
        X-Powered-By: Servlet/2.5 JSP/2.1
        Keep-Alive: timeout=15
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: text/html; charset=ISO-8859-1

    There's no cache directive, no Last-Modified field, no Expires field. How will the ISA server decide how long to cache this response?

    Read the article

  • Server 2003 answers ping, but won't serve HTTP, FTP, SMTP or POP3

    - by Manfred
    After a reboot, my server won't respond to any incoming requests until it is rebooted again. Then, about 5-6 hours later, any website on it will return a ping, but it will not serve the page, nor will it serve FTP, POP3 or SMTP requests. The System log shows W3SVC errors 1014 and 1074, which relate to an application pool not replying. I have one phpAdmin app pool, which I have stopped; it shows a solitary website as the default app, but the server no longer serves php extensions, and I can't transfer the default website to another pool in order to kill the whole app pool. I would appreciate your help.

    Read the article
