Search Results

Search found 6397 results on 256 pages for 'ssh agent'.

Page 211/256 | < Previous Page | 207 208 209 210 211 212 213 214 215 216 217 218  | Next Page >

  • Uploading to another domain gives HTTP code 405

    - by dragon112
    I'm trying to upload a file (which can be quite large) from the website of one server to the backend of another server using plupload. Let's say: domain 1 = http://www.websitedomain.com/uploadform, domain 2 = http://www.backenddomain.com/uploadhandler. When I try to upload, the browser sends the following preflight request:

        OPTIONS /main/uploadnetwork.php HTTP/1.1
        Host: backenddomain.com
        Connection: keep-alive
        Access-Control-Request-Method: POST
        Origin: http://www.websitedomain.com
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.79 Safari/537.4
        Access-Control-Request-Headers: origin, content-type
        Accept: */*
        Referer: http://www.websitedomain.com/uploadform
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: nl-NL,nl;q=0.8,en-US;q=0.6,en;q=0.4
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
        DNT: 1

    But when I try to start the upload, the server returns the following:

        HTTP/1.1 405 Method Not Allowed
        Allow: GET, HEAD, OPTIONS, TRACE
        Content-Type: text/html
        Server: Microsoft-IIS/7.5
        X-Powered-By: ASP.NET
        X-Powered-By-Plesk: PleskWin
        Date: Mon, 01 Oct 2012 12:41:57 GMT
        Content-Length: 999

    After doing some research I found out that a browser does this to check whether the server will accept the intended request. It looks like my server doesn't feel like accepting a simple POST call, even though I use POST all the time. The Google Chrome console gives the following error:

        XMLHttpRequest cannot load http://www.backenddomain.com/uploadhandler.
        Origin http://www.websitedomain.com is not allowed by Access-Control-Allow-Origin.

    Does anyone know how to stop the browser from checking, or how I can tell my server to just accept the POST?
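    One common shape of the fix (a sketch, not verified against this exact setup): the backend has to answer the preflight with CORS headers naming the uploading site's origin, and IIS additionally has to let OPTIONS and POST reach the handler (the Allow: GET, HEAD, OPTIONS, TRACE list hints that another module, often WebDAV, is answering first, though that is an assumption here). The header side in web.config would look roughly like:

        <configuration>
          <system.webServer>
            <httpProtocol>
              <customHeaders>
                <!-- allow the uploading site's origin; values below are illustrative -->
                <add name="Access-Control-Allow-Origin" value="http://www.websitedomain.com" />
                <add name="Access-Control-Allow-Methods" value="POST, OPTIONS" />
                <add name="Access-Control-Allow-Headers" value="Origin, Content-Type" />
              </customHeaders>
            </httpProtocol>
          </system.webServer>
        </configuration>

    With those headers in place, the preflight should pass and the browser will send the actual POST.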

    Read the article

  • What are the possible problems when wget returns code 500 but the same request works in normal browsers?

    - by markus
    What should I be looking for when wget returns 500 but the same URL works fine in my web browser? I don't see any access_log entries that seem to be related to the error.

        DEBUG output created by Wget 1.14 on linux-gnu.
        <SSL negotiation info stripped out>
        ---request begin---
        GET /survey/de/tools/clear-caches/password/<some-token> HTTP/1.1
        User-Agent: Wget/1.14 (linux-gnu)
        Accept: */*
        Host: testing.thesurveylab.net
        Connection: Keep-Alive
        ---request end---
        HTTP request sent, awaiting response...
        ---response begin---
        HTTP/1.0 500 Internal Server Error
        Date: Wed, 12 Dec 2012 14:53:07 GMT
        Server: Apache/2.2.3 (CentOS)
        Set-Cookie: blueprint2-staging=8jnbmkqapl30hjkgo0u6956pd1; path=/
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Strict-Transport-Security: max-age=8640000;includeSubdomains
        X-UA-Compatible: IE=Edge,chrome=1
        Content-Length: 5
        Connection: close
        Content-Type: text/html; charset=UTF-8
        ---response end---
        500 Internal Server Error
        Stored cookie testing.thesurveylab.net -1 (ANY) / <session> <insecure> [expiry none] blueprint2-staging 8jnbmkqapl30hjkgo0u6956pd1
        Closed 3/SSL 0x0000000001f33430
        2012-12-12 15:53:07 ERROR 500: Internal Server Error.
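    Since the most obvious difference between the two clients is the request headers, one quick way to narrow it down (a sketch; the header values are illustrative, not known-good) is to replay the request with browser-like headers added one at a time and watch which addition makes the 500 go away:

        wget -d --no-check-certificate \
             --header='User-Agent: Mozilla/5.0 (X11; Linux x86_64) Gecko/20100101 Firefox/17.0' \
             --header='Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' \
             --header='Accept-Language: en-US,en;q=0.5' \
             'https://testing.thesurveylab.net/survey/de/tools/clear-caches/password/<some-token>'

    If the page only fails with the Wget User-Agent, the application (or a security module in front of it) is keying on that header.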

    Read the article

  • Zabbix doesn't update the value from a file with either the log[] or the vfs.file.regexp[] item

    - by tymik
    I am using Zabbix 2.2. I have a very specific environment, where I have to generate the desired data to a file via script, then upload that file to ftp from the host and download it to the Zabbix server from ftp. After the file is downloaded, I check it with log[] and vfs.file.regexp[] items. I use these items as below:

        log[/path/to/file.txt,"C.*\s([0-9]+\.[0-9])$",Windows-1250,,"all",\1]
        vfs.file.regexp[/path/to/file.txt,"C.*\s([0-9]+\.[0-9])$",Windows-1250,,,\1]

    The line I am parsing looks like this:

        C: 8195Mb 5879Mb 2316Mb 28.2

    The value I want to extract is the 28.2 at the end of the file. The problem I am currently trying to solve is that when I update the file (upload from host to ftp, then download from ftp to the Zabbix server), the value does not update. I was trying only log[] at the start, but I suspect that log[] treats the file as a real log file and doesn't re-check lines it has already seen (although, following the documentation, it should with the "all" value), so I added the vfs.file.regexp[] item too. The log[] item has received a value in the past, but it doesn't update. The vfs.file.regexp[] item hasn't received any value so far. file.txt has been re-uploaded and re-downloaded several times and the situation doesn't change. It seems that log[] reads only new lines in the file; it doesn't re-check lines already caught for changes. The zabbix_agentd.log file doesn't report any problem with access to the file, nor with the regexp construction (it did report "unsupported" for the log[] key when I had something set up wrong). I use debug logging level for the agent; I haven't found any interesting info about the problem there. I have no idea what I might be doing wrong or what I don't know about how Zabbix performs these checks. I see two workarounds: adding more lines to the file instead of replacing it, or making new files and checking them with logrt[], but those don't satisfy my needs. Any help is greatly appreciated. Of course I will provide additional information if requested; for now I don't know what else might be useful.
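    One thing worth ruling out before blaming the item logic (an assumption, prompted by the Windows-1250 encoding parameter): if the generated file has Windows CRLF line endings, the trailing $ in the pattern can fail to match, because a \r sits between the number and the end of the line. A quick check on the Zabbix server:

        file /path/to/file.txt               # reports "with CRLF line terminators" if so
        tail -1 /path/to/file.txt | cat -A   # a trailing ^M confirms it

    If that is the case, converting the file (for example with dos2unix) before the items read it, or making the pattern tolerate the \r, are both worth a try.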

    Read the article

  • Getting the EFS private key out of a system image

    - by thaimin
    I recently had to re-install Windows 7 and I lost my exported private key for EFS. However, I have the entirety of my old user directory, and I figure the key must be in there SOMEWHERE. The only question is how to get it out. I did find the PUBLIC keys in AppData\Roaming\Microsoft\SystemCertificates\My\Certificates. If I import them using certmgr.msc, the certificate information says I do have the private key, but if I try to export them it says I do not have the private key. Also, decryption of files doesn't work. There is also a "Keys" folder at AppData\Roaming\Microsoft\SystemCertificates\My\Keys. After importing the certificates I copied those over into my new installation, but it had no effect. I am starting to believe the private keys are either in AppData\Roaming\Microsoft\Protect\S-1-5-21-...\ or AppData\Roaming\Microsoft\Crypto\RSA\S-1-5-21-...\, but I am unsure how to use the files in those folders. Also, since my SID has changed, will I be able to use them? The other parts of the account have remained the same (name and password). I also have complete access to the old user registry hive and most of the old system files (including the old system registry hives). I keep seeing references to a "Key Recovery Agent" but have not found anything about using it, just that it can be used. Thanks!

    Read the article

  • Strange issue with a header Location redirect

    - by hd01
    I have three websites hosted (example1.com, example2.com, example3.com) on a server. There is a page (test.php) on example1.com containing just the code below:

        <?php
        header('Location:http://example2.com/a.php');
        ?>

    When I browse test.php it goes to http://example1.com/a.php. It doesn't treat the target as another domain's URL; it tries to find the page on itself. But when I put http://google.com instead of http://example2.com/a.php, it works correctly. I am really confused. What is the problem? Should I set some configuration on the server? (I am the administrator of the hosting server.) P.S. The server is behind a Pound server. Here's the Firebug Net output for example1.com/test.php:

    Response headers:

        HTTP/1.1 302 Found
        Date: Tue, 09 Oct 2012 09:03:34 GMT
        Server: Apache/2.2.16 (Debian)
        Location: http://example1.com/a.php
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 21
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=utf-8

    Request headers:

        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Encoding: gzip, deflate
        Accept-Language: en-us,en;q=0.5
        Connection: keep-alive
        Cookie: mycookie
        Host: example1.com
        User-Agent: Mozilla/5.0 (X11; Linux i686; rv:14.0) Gecko/20100101 Firefox/14.0.1
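    Since the stack sits behind Pound, one suspect worth checking (an assumption, not something confirmed in the post) is Pound's Location-header rewriting, which by default rewrites redirects it believes point back at itself or the backend. A pound.cfg sketch with hypothetical listener and backend addresses:

        ListenHTTP
            Address 0.0.0.0
            Port    80
            RewriteLocation 0    # pass the backend's Location header through untouched
            Service
                BackEnd
                    Address 127.0.0.1
                    Port    8080
                End
            End
        End

    If the redirect host stops being rewritten with RewriteLocation 0, the PHP side was fine all along.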

    Read the article

  • Unusual HEAD requests to nonsense URLs from Chrome

    - by JeremyDWill
    I have noticed unusual traffic coming from my workstation the last couple of days. I am seeing HEAD requests sent to random-character URLs, usually three or four within a second, and they appear to be coming from my Chrome browser. The requests repeat only three or four times a day, but I have not identified a particular pattern. The URL characters are different for each request. Here is an example of a request as recorded by Fiddler 2:

        HEAD http://xqwvykjfei/ HTTP/1.1
        Host: xqwvykjfei
        Proxy-Connection: keep-alive
        Content-Length: 0
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.13 (KHTML, like Gecko) Chrome/9.0.597.98 Safari/534.13
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

    The response to this request is as follows:

        HTTP/1.1 502 Fiddler - DNS Lookup Failed
        Content-Type: text/html
        Connection: close
        Timestamp: 08:15:45.283

        Fiddler: DNS Lookup for xqwvykjfei failed. No such host is known

    I have been unable to find any information through Google searches related to this issue. I do not remember seeing this kind of traffic before late last week, but it may be that I just missed it before. The one unusual modification I made to my system last week was adding the Delicious add-in/extension to both IE and Chrome. I have since removed both of these, but am still seeing the traffic. I have run a virus scan (Trend Micro) and HiJackThis looking for malicious code, but have not found any. I would appreciate any help tracking down the source of the requests, so I can determine whether they are benign or indicative of a bigger problem. Thanks.

    Read the article

  • JavaScript settings are on but I still have issues with web sites using JavaScript.

    - by Mike
    I have two computers at home that are set up the same, except that my main computer also has Adobe Acrobat 9 Pro Extended and Outlook 2010 (Beta) installed. I have issues when I log into a website that uses pop-up calendars to select the date; I pasted the error details below. I checked my other computer and it is fine. I've checked the JavaScript settings and they are correct. I am at a loss. Any suggestions?

        Webpage error details
        User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; HPDTDF; OfficeLiveConnector.1.4; OfficeLivePatch.1.3; eMusic DLM/4; .NET4.0C)
        Timestamp: Mon, 26 Jul 2010 22:03:59 UTC

        Message: 'CalendarPopup' is undefined
        Line: 390
        Char: 2
        Code: 0

        Message: 'CalendarPopup' is undefined
        Line: 410
        Char: 2
        Code: 0

    Read the article

  • How to distribute multiple executions of an app across many machines

    - by Salec
    I've got a simulation app (64-bit Windows) that runs without any user interaction. The app gathers information and pushes it to a remote MS SQL Server. What I'd like to do is execute this simulation as many times as I can on multiple machines, after our nightly build has finished and passed the test suite. If possible, I'd love to have the ability to configure it to stop after x total runs, or if the entire batch has taken over y hours. I've tried using Visual Studio's built-in test framework, since we already have a test lab set up with multiple agents. I created a single unit test that simply runs the simulation, then I created an ordered test and added that single test multiple times (from what I gather, this is the only way to execute the same unit test more than once). I found that ordered tests are only run on a single agent and not distributed, which is very limiting. We use TeamCity to perform our nightly builds and I suspect it's possible to implement this on top of that, but I'm fairly new to TeamCity. We also have Jenkins and Bamboo available, and I'm open to any other software that would get the job done, presuming it runs on a 64-bit Windows OS. Any suggestions?
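    For scale, the ask boils down to a small fan-out loop with two stop conditions. A sketch of that shape in PowerShell (host names, paths, and budgets are all hypothetical, and it assumes PowerShell remoting is enabled on the lab machines):

        $machines = 'sim01','sim02','sim03'
        $maxRuns  = 200
        $deadline = (Get-Date).AddHours(8)
        $run = 0
        while ($run -lt $maxRuns -and (Get-Date) -lt $deadline) {
            $jobs = foreach ($m in $machines) {
                if ($run -ge $maxRuns) { break }
                $run++
                # start one simulation per machine as a background job
                Invoke-Command -ComputerName $m -AsJob -ScriptBlock {
                    & 'C:\sim\Simulation.exe'
                }
            }
            $jobs | Wait-Job | Remove-Job    # wait for this wave before starting the next
        }

    A CI server can certainly drive the same loop as a build step, but nothing about the problem strictly requires one.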

    Read the article

  • Experience with asymmetrical (non-identical hardware) SQL Server 2005 / Win 2003 cluster

    - by user24161
    I am reasonably good at dealing with SQL Server clusters; I am wondering if folks have experience, good or bad, using a mix of different models of servers from the same vendor in one SQL 2005 cluster. Suppose: I have one more powerful, more RAM, more shizzle box and one less powerful, less memory, less shizzle box bound together in a 2-node cluster. These would be HP DL380 and 580 machines (not that it should matter). I understand AND automate the process of managing memory for each SQL instance, so there's no memory contention when SQL instances fail over. Basically I am thinking a CLR proc will monitor the instances and self-regulate memory caps on each instance, so that they won't page or step on one another. I get the fact the instances might be slower and/or under memory pressure if they share a "lesser" node, and that's OK. The business can deal with a slower instance in a server-problem scenario. Reasonable? Any "gotchas" to watch out for?

    More info 10/28: doing some experiments with a test cluster, I find that reconfiguring max/min memory is OK PROVIDED the instance isn't already under memory pressure. If I torture the system with a huge query that demands a big chunk of RAM, and simultaneously adjust the memory allocation to a smaller value than what is being actively used, it's possible to run the instance out of memory and have it halt and restart itself (unhappy situation). Many ugly out-of-memory messages in the error log, crashing, burning... It's an extreme case, but good to know. It seems, then, that it would only be really safe to set this on startup of the instance, as in have a startup script that says "I am on node1, so my RAM settings are X, or I am on node two, so they are Y," like this: http://sqlblog.com/blogs/aaron_bertrand...

    Update: I am testing a SQL Agent + PowerShell solution described in more detail here.
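    The node-aware startup idea is simple to sketch in T-SQL (node names and memory values below are hypothetical; SERVERPROPERTY('ComputerNamePhysicalNetBIOS') reports which physical node the instance is currently running on, which is what makes this workable in a cluster):

        DECLARE @node sysname;
        SET @node = CONVERT(sysname, SERVERPROPERTY('ComputerNamePhysicalNetBIOS'));

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;

        IF @node = 'NODE-DL580'       -- the bigger box
            EXEC sp_configure 'max server memory (MB)', 49152;
        ELSE                          -- the smaller DL380
            EXEC sp_configure 'max server memory (MB)', 16384;
        RECONFIGURE;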

    Read the article

  • How to make lighttpd respect X-Forwarded-Proto when constructing redirects for directories?

    - by Tim Landscheidt
    We have an nginx proxy at tools.wmflabs.org that receives requests by http and https and passes them by http on to lighttpds on a grid (one lighttpd per top-level path). Requests that reach the proxy by https are received by the lighttpds like this:

        HEAD /lighttpd-test/test HTTP/1.1
        Connection: close
        Host: tools.wmflabs.org
        X-Forwarded-Proto: https
        X-Original-URI: /lighttpd-test/test
        User-Agent: curl/7.29.0
        Accept: */*

    This works great, except in the case where the URL references a physical directory and misses the trailing slash ("/"), as lighttpd then generates a redirect to the http URL:

        HTTP/1.1 301 Moved Permanently
        Location: http://tools.wmflabs.org/lighttpd-test/test/
        Connection: close
        Date: Fri, 06 Jun 2014 14:50:29 GMT
        Server: lighttpd/1.4.28

    The relevant parts of our lighttpd configurations are:

        server.modules = (
            "mod_setenv",
            "mod_access",
            "mod_accesslog",
            "mod_alias",
            "mod_compress",
            "mod_redirect",
            "mod_rewrite",
            "mod_fastcgi",
            "mod_cgi",
        )
        server.port = $port
        [...]
        server.document-root = "$home/public_html"
        [...]
        server.follow-symlink = "enable"
        [...]
        server.stat-cache-engine = "fam"
        ssl.engine = "disable"
        alias.url = ( "/$tool" => "$home/public_html/" )
        index-file.names = ( "index.php", "index.html", "index.htm" )
        dir-listing.encoding = "utf-8"
        server.dir-listing = "disable"
        url.access-deny = ( "~", ".inc" )
        [...]

    How can I make lighttpd respect X-Forwarded-Proto and use it when constructing redirects for directories? I'm aware that I could try to tackle this in nginx, but I'd prefer to fix it in lighttpd.
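    For the record, if it does end up being handled on the nginx side after all, one sketch is to rewrite the backend's http:// Location back to whatever scheme the client used (this assumes nginx 1.1.11 or newer, since older versions do not accept variables in the proxy_redirect replacement):

        location / {
            proxy_pass http://lighttpd_backend;    # hypothetical upstream name
            proxy_redirect http://tools.wmflabs.org/ $scheme://tools.wmflabs.org/;
        }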

    Read the article

  • How to make my Ubuntu an internet gateway for my Android phone

    - by yacine
    I want to use my school's internet connection on my Android phone. The problem is they have a Squid proxy, and many applications on my phone don't use the proxy at all. The obvious solution is to install a transparent proxy on my Android to force all applications to connect through it. The problem is that I would need to root the phone to make it work, and I don't want to do that because it's not really my phone and rooting is a little risky. Another solution, which is safer, is to make my computer act as a gateway, so I put my Ubuntu machine's IP in the gateway parameter of the phone. I'm running a small proxy on my Ubuntu machine (cntlm), so I redirect the Android traffic to it. I did this with iptables as follows:

        iptables -t nat -A PREROUTING -s 10.0.1.118 -p tcp -j REDIRECT --to-ports 8888
        iptables -t nat -A PREROUTING -s 10.0.1.118 -p udp -j REDIRECT --to-ports 8888

    10.0.1.118 is the IP of the phone; 8888 is the port of cntlm (the proxy on my PC). Now, on the phone: when I enter www.google.com in the browser I get nothing (web site not found, Firefox's error message). But when I enter http://74.125.143.101 (an IP of Google), I get an error message from the school proxy (so it worked in some way: my PC redirected the phone's traffic to the Squid proxy). The error message is:

        The requested URL could not be retrieved while trying to process the request:
        GET / HTTP/1.1
        Host: 74.125.143.101
        User-Agent: ...
        ...

    I think the problem is in the GET line; it should be the proxy form, GET http://74.125.143.101/ HTTP/1.1. But I don't understand what's happening, and I'm a certified CCNA.
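    The suspicion about the request line matches how cntlm behaves: as a regular HTTP proxy it expects absolute-URI requests, while iptables REDIRECT delivers plain origin-form requests. One common bridge (a sketch with hypothetical ports; redsocks accepts redirected connections and re-issues them in proxy form) looks like this in redsocks.conf:

        base {
            log_info = on;
            daemon = on;
            redirector = iptables;
        }
        redsocks {
            local_ip = 0.0.0.0;     // where the REDIRECTed traffic lands
            local_port = 12345;     // hypothetical; point the iptables rules here
            ip = 127.0.0.1;         // cntlm
            port = 8888;
            type = http-relay;      // rewrites requests into absolute-URI proxy form
        }

    with the PREROUTING rules above re-pointed at local_port instead of directly at cntlm.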

    Read the article

  • How and where to implement basic authentication in Kibana 3

    - by Jabb
    I have put my elasticsearch server behind an Apache reverse proxy that provides basic authentication. Authenticating to Apache directly from the browser works fine. However, when I use Kibana 3 to access the server, I receive authentication errors, obviously because no auth headers are sent along with Kibana's Ajax calls. I added the line below to elastic-angular-client.js in the Kibana vendor directory to implement authentication quick and dirty, but for some reason it does not work:

        $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password');

    What is the best approach and place to implement basic authentication in Kibana?

        /*! elastic.js - v1.1.1 - 2013-05-24
         * https://github.com/fullscale/elastic.js
         * Copyright (c) 2013 FullScale Labs, LLC; Licensed MIT */
        /*jshint browser:true */
        /*global angular:true */
        'use strict';

        /* Angular.js service wrapping the elastic.js API.
           This module can simply be injected into your angular controllers. */
        angular.module('elasticjs.service', [])
            .factory('ejsResource', ['$http', function ($http) {

            return function (config) {

                var
                    // use existing ejs object if it exists
                    ejs = window.ejs || {},

                    /* results are returned as a promise */
                    promiseThen = function (httpPromise, successcb, errorcb) {
                        return httpPromise.then(function (response) {
                            (successcb || angular.noop)(response.data);
                            return response.data;
                        }, function (response) {
                            (errorcb || angular.noop)(response.data);
                            return response.data;
                        });
                    };

                // check if we have a config object
                // if not, we have the server url so
                // we convert it to a config object
                if (config !== Object(config)) {
                    config = {server: config};
                }

                // set url to empty string if it was not specified
                if (config.server == null) {
                    config.server = '';
                }

                /* implement the elastic.js client interface for angular */
                ejs.client = {
                    server: function (s) {
                        if (s == null) {
                            return config.server;
                        }
                        config.server = s;
                        return this;
                    },
                    post: function (path, data, successcb, errorcb) {
                        $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password');
                        console.log($http.defaults.headers);
                        path = config.server + path;
                        var reqConfig = {url: path, data: data, method: 'POST'};
                        return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb);
                    },
                    get: function (path, data, successcb, errorcb) {
                        $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password');
                        path = config.server + path;
                        // no body on get request, data will be request params
                        var reqConfig = {url: path, params: data, method: 'GET'};
                        return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb);
                    },
                    put: function (path, data, successcb, errorcb) {
                        $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password');
                        path = config.server + path;
                        var reqConfig = {url: path, data: data, method: 'PUT'};
                        return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb);
                    },
                    del: function (path, data, successcb, errorcb) {
                        $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password');
                        path = config.server + path;
                        var reqConfig = {url: path, data: data, method: 'DELETE'};
                        return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb);
                    },
                    head: function (path, data, successcb, errorcb) {
                        $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password');
                        path = config.server + path;
                        // no body on HEAD request, data will be request params
                        var reqConfig = {url: path, params: data, method: 'HEAD'};
                        return $http(angular.extend(reqConfig, config))
                            .then(function (response) {
                                (successcb || angular.noop)(response.headers());
                                return response.headers();
                            }, function (response) {
                                (errorcb || angular.noop)(undefined);
                                return undefined;
                            });
                    }
                };

                return ejs;
            };
        }]);

    UPDATE 1: I implemented Matt's suggestion. However, the server returns a weird response; it seems that the Authorization header is not working. Could it have to do with the fact that I am running Kibana on port 81 and elasticsearch on 8181?

        OPTIONS /solar_vendor/_search HTTP/1.1
        Host: 46.252.46.173:8181
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:25.0) Gecko/20100101 Firefox/25.0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
        Accept-Encoding: gzip, deflate
        Origin: http://46.252.46.173:81
        Access-Control-Request-Method: POST
        Access-Control-Request-Headers: authorization,content-type
        Connection: keep-alive
        Pragma: no-cache
        Cache-Control: no-cache

    This is the response:

        HTTP/1.1 401 Authorization Required
        Date: Fri, 08 Nov 2013 23:47:02 GMT
        WWW-Authenticate: Basic realm="Username/Password"
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 346
        Connection: close
        Content-Type: text/html; charset=iso-8859-1

    UPDATE 2: Updated all instances with the modified headers in these Kibana files:

        root@localhost:/var/www/kibana# grep -r 'ejsResource(' .
        ./src/app/controllers/dash.js: $scope.ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}});
        ./src/app/services/querySrv.js: var ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}});
        ./src/app/services/filterSrv.js: var ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}});
        ./src/app/services/dashboard.js: var ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}});

    And modified my vhost conf for the reverse proxy like this:

        <VirtualHost *:8181>
            ProxyRequests Off
            ProxyPass / http://127.0.0.1:9200/
            ProxyPassReverse / https://127.0.0.1:9200/
            <Location />
                Order deny,allow
                Allow from all
                AuthType Basic
                AuthName "Username/Password"
                AuthUserFile /var/www/cake2.2.4/.htpasswd
                Require valid-user
                Header always set Access-Control-Allow-Methods "GET, POST, DELETE, OPTIONS, PUT"
                Header always set Access-Control-Allow-Headers "Content-Type, X-Requested-With, X-HTTP-Method-Override, Origin, Accept, Authorization"
                Header always set Access-Control-Allow-Credentials "true"
                Header always set Cache-Control "max-age=0"
                Header always set Access-Control-Allow-Origin *
            </Location>
            ErrorLog ${APACHE_LOG_DIR}/error.log
        </VirtualHost>

    Apache sends back the new response headers, but the request header still seems to be wrong somewhere; authentication just doesn't work.

    Request headers:

        OPTIONS /solar_vendor/_search HTTP/1.1
        Host: 46.252.26.173:8181
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:25.0) Gecko/20100101 Firefox/25.0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
        Accept-Encoding: gzip, deflate
        Origin: http://46.252.26.173:81
        Access-Control-Request-Method: POST
        Access-Control-Request-Headers: authorization,content-type
        Connection: keep-alive
        Pragma: no-cache
        Cache-Control: no-cache

    Response headers:

        HTTP/1.1 401 Authorization Required
        Date: Sat, 09 Nov 2013 08:48:48 GMT
        Access-Control-Allow-Methods: GET, POST, DELETE, OPTIONS, PUT
        Access-Control-Allow-Headers: Content-Type, X-Requested-With, X-HTTP-Method-Override, Origin, Accept, Authorization
        Access-Control-Allow-Credentials: true
        Cache-Control: max-age=0
        Access-Control-Allow-Origin: *
        WWW-Authenticate: Basic realm="Username/Password"
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 346
        Connection: close
        Content-Type: text/html; charset=iso-8859-1

    SOLUTION: After doing some more research, I found out that this is definitely a configuration issue with regard to CORS. There are quite a few posts available on the topic, but it appears that solving my problem would require some very granular configuration on Apache, plus making sure the right stuff is sent from the browser. So I reconsidered the strategy and found a much simpler solution: just modify the vhost reverse-proxy config to serve the elasticsearch server AND Kibana on the same http port. This also adds even better security to Kibana. This is what I did:

        <VirtualHost *:8181>
            ProxyRequests Off
            ProxyPass /bigdatadesk/ http://127.0.0.1:81/bigdatadesk/src/
            ProxyPassReverse /bigdatadesk/ http://127.0.0.1:81/bigdatadesk/src/
            ProxyPass / http://127.0.0.1:9200/
            ProxyPassReverse / https://127.0.0.1:9200/
            <Location />
                Order deny,allow
                Allow from all
                AuthType Basic
                AuthName "Username/Password"
                AuthUserFile /var/www/.htpasswd
                Require valid-user
            </Location>
            ErrorLog ${APACHE_LOG_DIR}/error.log
        </VirtualHost>

    Read the article

  • Rsync on Windows - Socket operation on non-socket

    - by TLS
    I get the following error when trying to run the latest Cygwin version of rsync on Windows XP SP2. The error occurs for attempts at both local syncs (that is, source and destination on the local hard disk only) and remote syncs (using "-e ssh" from the openssh package). Any advice on how to fix or work around it?

        bash-3.2$ rsync -a dir1 dir2
        rsync: Failed to dup/close: Socket operation on non-socket (108)
        rsync error: error in IPC code (code 14) at /home/lapo/packaging/tmp/rsync-2.6.9/pipe.c(143) [receiver=2.6.9]
        rsync: read error: Connection reset by peer (104)
        rsync error: error in IPC code (code 14) at /home/lapo/packaging/tmp/rsync-2.6.9/io.c(604) [sender=2.6.9]

    Read the article

  • Pros and cons of server management GUI tools for managing Linux web servers

    - by ajsie
    I have stumbled upon these GUI tools that can help you manage your Linux server through a web interface: eBox, Webmin, ISPConfig, Zivios, ispCP, Plesk, cPanel, etc. I wonder what the pros and cons of these solutions are. A lot of people say they are not as good as using the pure command line (over SSH) to manage your server, but I think that's yet another "Linux is for advanced users" talk. I agree that some things may only be done from the command line by editing the configuration files directly, but I don't really want to do that every time and for everything. It's like not having phpMyAdmin for managing MySQL; that would be a pain in the ass, right? So if one wants to bring up a web server serving a self-developed PHP site, and wants all the usual stuff up and running (MySQL, phpMyAdmin, SVN, WebDAV, etc.), are these tools the right way to go?

    Read the article

  • Git checking out problem [fatal: early EOFs]

    - by Style
    Dear all, I'm running an Ubuntu (9.10) server with Git (the latest from the Ubuntu package manager) installed. Access to Git is via SSH. On Windows machines, I'm using Cygwin to push/pull code. I can push my project code to the server, but when I do a clone or pull, it returns a [fatal: early EOFs] error at about 75-80%. Upon further investigation, it seems that textual data is pulled/cloned without issue, but the error occurs when the jar files and images are pulled from Git. Any suggestion/advice that can help to resolve this issue? Thanks in advance.
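    One avenue worth trying, under the assumption that the server runs short of memory while packing the large binaries (a common cause of early-EOF aborts partway through a clone): cap the pack memory on the server-side repository and repack.

        # run inside the repository on the Ubuntu server; the sizes are guesses to tune
        git config pack.windowMemory 64m
        git config pack.packSizeLimit 256m
        git config pack.threads 1
        git repack -a -d -f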

    Read the article

  • Debugging cucumber/gem dependencies

    - by mobmad
    How do you debug and fix gem errors like the one below? Although this case is very specific, I'm also looking for solutions to related problems like "gem already activated [...]", and for resources on gem management and debugging.

        mycomputer:projectfolder username$ cucumber features
        Using the default profile...
        WARNING: No DRb server is running. Running features locally:
        /Users/username/.gem/ruby/1.8/gems/rails-2.3.5/lib/rails/gem_dependency.rb:119:Warning: Gem::Dependency#version_requirements is deprecated and will be removed on or after August 2010. Use #requirement
        can't activate , already activated ruby-hmac-0.4.0 (Gem::Exception)
        /Users/username/.gem/ruby/1.8/gems/rails-2.3.5/lib/rails/gem_dependency.rb:101:in `specification'
        /Users/username/.gem/ruby/1.8/gems/rails-2.3.5/lib/rails/plugin/locator.rb:81:in `plugins'
        /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `inject'
        /Users/username/.gem/ruby/1.8/gems/rails-2.3.5/lib/rails/plugin/locator.rb:81:in `each'
        /Users/username/.gem/ruby/1.8/gems/rails-2.3.5/lib/rails/plugin/locator.rb:81:in `inject'
        /Users/username/.gem/ruby/1.8/gems/rails-2.3.5/lib/rails/plugin/locator.rb:81:in `plugins'
        /Users/username/.gem/ruby/1.8/gems/rails-2.3.5/lib/rails/plugin/loader.rb:109:in `locate_plugins'
        /Users/username/.gem/ruby/1.8/gems/rails-2.3.5/lib/rails/plugin/loader.rb:108:in `map'
        /Users/username/.gem/ruby/1.8/gems/rails-2.3.5/lib/rails/plugin/loader.rb:108:in `locate_plugins'
        /Users/username/.gem/ruby/1.8/gems/rails-2.3.5/lib/rails/plugin/loader.rb:32:in `all_plugins'
        /Users/username/.gem/ruby/1.8/gems/rails-2.3.5/lib/rails/plugin/loader.rb:22:in `plugins'
        /Users/username/.gem/ruby/1.8/gems/rails-2.3.5/lib/rails/plugin/loader.rb:53:in `add_plugin_load_paths'
        /Users/username/.gem/ruby/1.8/gems/rails-2.3.5/lib/initializer.rb:294:in `add_plugin_load_paths'
        /Users/username/.gem/ruby/1.8/gems/rails-2.3.5/lib/initializer.rb:136:in `process'
        /Users/username/.gem/ruby/1.8/gems/rails-2.3.5/lib/initializer.rb:113:in `send'
        /Users/username/.gem/ruby/1.8/gems/rails-2.3.5/lib/initializer.rb:113:in `run'
        /Users/username/Documents/projectfolder.0/sites/projectfolder/config/environment.rb:9
        /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
        /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `polyglot_original_require'
        /Library/Ruby/Gems/1.8/gems/polyglot-0.2.9/lib/polyglot.rb:70:in `require'
        ./features/support/env.rb:12
        /Library/Ruby/Gems/1.8/gems/spork-0.7.5/lib/spork.rb:23:in `prefork'
        ./features/support/env.rb:9
        /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
        /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `polyglot_original_require'
        /Library/Ruby/Gems/1.8/gems/polyglot-0.2.9/lib/polyglot.rb:70:in `require'
        /Library/Ruby/Gems/1.8/gems/cucumber-0.4.4/bin/../lib/cucumber/rb_support/rb_language.rb:124:in `load_code_file'
        /Library/Ruby/Gems/1.8/gems/cucumber-0.4.4/bin/../lib/cucumber/step_mother.rb:84:in `load_code_file'
        /Library/Ruby/Gems/1.8/gems/cucumber-0.4.4/bin/../lib/cucumber/step_mother.rb:76:in `load_code_files'
        /Library/Ruby/Gems/1.8/gems/cucumber-0.4.4/bin/../lib/cucumber/step_mother.rb:75:in `each'
        /Library/Ruby/Gems/1.8/gems/cucumber-0.4.4/bin/../lib/cucumber/step_mother.rb:75:in `load_code_files'
        /Library/Ruby/Gems/1.8/gems/cucumber-0.4.4/bin/../lib/cucumber/cli/main.rb:47:in `execute!'
        /Library/Ruby/Gems/1.8/gems/cucumber-0.4.4/bin/../lib/cucumber/cli/main.rb:24:in `execute'
        /Library/Ruby/Gems/1.8/gems/cucumber-0.4.4/bin/cucumber:8
        /usr/bin/cucumber:19:in `load'
        /usr/bin/cucumber:19

    And this is the output from gem list:

        *** LOCAL GEMS ***

        actionmailer (2.3.5, 2.2.2, 1.3.6) actionpack (2.3.5, 2.2.2, 1.13.6) actionwebservice (1.2.6) activerecord (2.3.5, 2.2.2, 1.15.6) activeresource (2.3.5, 2.2.2) activesupport (2.3.5, 2.2.2, 1.4.4) acts_as_ferret (0.4.4, 0.4.3) adamwiggins-rest-client (1.0.4) aslakhellesoy-webrat (0.4.4.1) aslakjo-comatose (2.0.5.12) authlogic (2.1.3) authlogic-oid (1.0.4) builder (2.1.2) capistrano (2.5.17, 2.5.2) cgi_multipart_eof_fix (2.5.0) configuration (1.1.0) cucumber (0.4.4) cucumber-rails (0.3.0) daemons (1.0.10) database_cleaner (0.5.0) diff-lcs (1.1.2) dnssd (1.3.1, 0.6.0) fakeweb (1.2.8) fastthread (1.0.7, 1.0.1) fcgi (0.8.8, 0.8.7) ferret (0.11.6) gem_plugin (0.2.3) gemcutter (0.4.1) heroku (1.8.0) highline (1.5.2, 1.5.0) hoe (2.5.0) hpricot (0.8.2, 0.6.164) json (1.2.2) json_pure (1.2.2) launchy (0.3.5) libxml-ruby (1.1.3, 1.1.2) linecache (0.43) log4r (1.1.5) mime-types (1.16) mongrel (1.1.5) mysql (2.8.1) needle (1.3.0) net-scp (1.0.2, 1.0.1) net-sftp (2.0.4, 2.0.1, 1.1.1) net-ssh (2.0.20, 2.0.4, 1.1.4) net-ssh-gateway (1.0.1, 1.0.0) nifty-generators (0.3.2) nokogiri (1.4.1) oauth (0.3.6) oniguruma (1.1.0) plist (3.1.0) polyglot (0.2.9) rack (1.1.0, 1.0.1) rack-test (0.5.3) rails (2.3.5, 2.2.2, 1.2.6) rake (0.8.7, 0.8.3) RedCloth (4.2.2, 4.1.1) rest-client (1.4.0) rspec (1.3.0) rspec-rails (1.3.2) ruby-activeldap (0.8.3.1) ruby-debug-base (0.10.3) ruby-debug-ide (0.4.9) ruby-hmac (0.4.0) ruby-net-ldap (0.0.4) ruby-openid (2.1.7, 2.1.2) ruby-yadis (0.3.4) rubyforge (2.0.4) rubygems-update (1.3.6) rubynode (0.1.5) rubyzip (0.9.4) sanitize (1.2.0) sequel (3.0.0) sinatra (0.9.2) spork (0.7.5) sqlite3-ruby (1.2.5, 1.2.4) taps (0.2.26) term-ansicolor (1.0.4) termios (0.9.4) textpow (0.10.1) thor (0.9.9) treetop (1.4.2) twitter4r (0.3.2, 0.3.1) ultraviolet (0.10.2) webrat (0.7.0) will_paginate (2.3.12) xmpp4r (0.5, 0.4)
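    A triage sketch for errors of this family: "can't activate / already activated" frequently comes from the same gem being resolvable from two install locations (here both /Users/username/.gem and /Library/Ruby/Gems appear in the trace, which makes that a plausible, though unconfirmed, suspect):

        gem list -d ruby-hmac          # every installed copy, with its install path
        gem env gemdir                 # the primary install location
        echo "$GEM_HOME / $GEM_PATH"   # per-user overrides

    Once you can see which copies exist where, pinning the version in config.gem or removing the stale copy is usually the fix.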

    Read the article

  • CVS: command-line diff on a remote CVS server between two HEAD revs

    - by Gugussee
    My CVS-fu is not very strong anymore (after years of SVN'ing and now Mercurial'ing). I'm trying to do a diff between two revisions on the HEAD branch (everything is in HEAD anyway). I received an IDE already set up to use a :pserver:myname@cvsserver:port/cvs/project CVS root. I'm on Windows XP. I do not want to use the IDE (the goal here is to learn CVS a bit more). Apparently I cannot log in to the CVS server over SSH. How can I run a remote CVS diff between two HEAD revisions using the command line? P.S: I am new here, mod me up so I can comment etc. :)
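    A sketch of the two usual forms (revision numbers and the module name are hypothetical; the pserver root is the one from the post, with its placeholders left as-is). rdiff works directly against the repository, so no working copy is needed:

        cvs -d :pserver:myname@cvsserver:port/cvs/project login
        cvs -d :pserver:myname@cvsserver:port/cvs/project rdiff -u -r 1.41 -r 1.45 modulename
        # or, from inside a checked-out working copy:
        cvs diff -u -r 1.41 -r 1.45 src/somefile.c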

    Read the article

  • Setup GIT Server with Msysgit on Windows

    - by Tom
    Hi guys, my friends and I are trying to set up Git for Windows using this tutorial, but we just keep running into problems. I know many of you guys on this site are Git gurus, so I was wondering whether anyone would be able to help us (and I am sure hundreds of other Windows devs who want to use Git) write a "Set up a Git server" guide for Windows using msysgit? There is a comment on the guide above suggesting it can't be done with msysgit because gitosis requires an SSH server and Bash. Would really appreciate it if someone could do a step-by-step guide, as there is not one available (we've searched for hours). Install msysgit? Thx
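    For what it's worth, gitosis is not strictly required: a bare repository plus any Windows SSH server (copSSH is a commonly used one, though that choice is an assumption here) is enough for a small team. A minimal sketch, run from a Git Bash prompt on the server, with hypothetical paths and account names:

        mkdir -p /c/repos/project.git
        cd /c/repos/project.git
        git init --bare
        # then, from each client:
        git clone ssh://gituser@winserver/c/repos/project.git

    What gitosis adds on top is per-repository access control; without it, every account with SSH access to the repo path can read and write.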

    Read the article

  • Help -- trying to build Apache ODE source code with Buildr

    - by Moon13
    Hi, I am trying to build the Apache ODE source code with Buildr using Ruby. I installed Ruby and then installed Buildr, but when I run the command rake package in the root of the Apache ODE source tree it gives me this error:

        C:\workspace2\APACHE_ODE_1.X>rake package --trace
        (in C:/workspace2/APACHE_ODE_1.X)
        rake aborted!
        uninitialized constant Gem::Requirement::OP_RE

    The gems I have installed:

        C:\workspace2\APACHE_ODE_1.X>gem list

        *** LOCAL GEMS ***

        Antwrap (0.7.0)
        archive-tar-minitar (0.5.2)
        builder (2.1.2)
        buildr (1.3.5)
        highline (1.5.1)
        hoe (2.3.3)
        net-sftp (2.0.2)
        net-ssh (2.0.15)
        rake (0.8.7)
        rjb (1.1.6)
        rspec (1.2.8)
        rubyforge (1.0.5)
        rubygems-update (1.3.6)
        rubyzip (0.9.1)
        xml-simple (1.0.12)
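    A plausible reading of the error (an assumption, based on the versions in the gem list above): Buildr 1.3.5 predates the RubyGems release that removed Gem::Requirement::OP_RE, and rubygems-update (1.3.6) is in play. Two sketch-level ways out are pinning RubyGems back or moving Buildr forward:

        gem uninstall rubygems-update -v 1.3.6
        gem install rubygems-update -v 1.3.5
        update_rubygems
        # or instead: upgrade to a Buildr release built against the newer RubyGems
        gem install buildr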

    Read the article

  • FCGI htaccess handler

    - by sharvey
    I'm trying to set up Django on a shared hosting provider. I followed the instructions at http://helpdesk.bluehost.com/index.php/kb/article/000531 and almost have it working. The problem I'm facing now is that traffic is properly routed through the fcgi file, but the file itself shows up as plain text in the browser. If I run ./mysite.fcgi in the SSH shell, I do get the default Django welcome page. My .htaccess is:

        AddHandler fastcgi-script .fcgi
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ mysite.fcgi/$1 [QSA,L]

    and mysite.fcgi:

        #!/usr/bin/python2.6
        import sys, os
        os.environ['DJANGO_SETTINGS_MODULE'] = "icm.settings"
        from django.core.servers.fastcgi import runfastcgi
        runfastcgi(method="threaded", daemonize="false")

    Thanks.
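    When the handler is ignored and the script body is served as text, two common shared-hosting culprits (assumptions, not confirmed for this host) are a non-executable script and ExecCGI not being permitted for the directory:

        chmod 755 mysite.fcgi
        # and, if the host honors overrides, add to .htaccess:
        # Options +ExecCGI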

    Read the article

  • parallel-python error: RuntimeError("Socket connection is broken")

    - by user288558
    I am using a simple program to send a function:

        import pp
        nodes=('mosura02','mosura03','mosura04','mosura05','mosura06',
               'mosura09','mosura10','mosura11','mosura12')
        nodes=('miner:60001',)

        def pptester():
            js=pp.Server(ppservers=nodes)
            js.set_ncpus(0)
            tmp=[]
            for i in range(200):
                tmp.append(js.submit(ppworktest,(),(),('os',)))
            return tmp

        def ppworktest():
            import os
            return os.system("uname -a")

    The result is:

        wkerzend@mosura:/home/wkerzend/tmp/ppython_test>ssh miner "source ~/coala_python_setup.sh;ppserver.py -d -p 60001"
        2010-04-12 00:50:48,162 - pp - INFO - Creating server instance (pp-1.6.0)
        2010-04-12 00:50:52,732 - pp - INFO - pp local server started with 32 workers
        2010-04-12 00:50:52,732 - pp - DEBUG - Strarting network server interface=0.0.0.0 port=60001
        Exception in thread client_socket:
        Traceback (most recent call last):
          File "/usr/lib64/python2.6/threading.py", line 525, in __bootstrap_inner
            self.run()
          File "/usr/lib64/python2.6/threading.py", line 477, in run
            self.__target(*self.__args, **self.__kwargs)
          File "/home/wkerzend/python_coala/bin/ppserver.py", line 161, in crun
            ctype = mysocket.receive()
          File "/home/wkerzend/python_coala/lib/python2.6/site-packages/pptransport.py", line 178, in receive
            raise RuntimeError("Socket connection is broken")
        RuntimeError: Socket connection is broken

    Read the article

  • How to execute with /bin/false shell

    - by Amar
    I am trying to set up per-user FastCGI scripts that will each run on a different port and as a different user. Here is an example of my script:

        #!/bin/bash
        BIND=127.0.0.1:9001
        USER=user
        PHP_FCGI_CHILDREN=2
        PHP_FCGI_MAX_REQUESTS=10000
        etc...

    However, if I add a user with /bin/false as the shell (which I want, since this is going to be something like shared hosting and I don't want users to have shell access), the script is run under the numeric '1001', '1002' user, which, as my Google searches showed, might be a security hole. My question is: is it possible to allow users to execute shell scripts while keeping them unable to log in via SSH?
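    For what it's worth, a /bin/false login shell only blocks interactive logins; a process can still be started as that user by naming a shell explicitly, so the two goals are not in conflict. A sketch (the wrapper path is hypothetical):

        su -s /bin/bash -c '/path/to/fastcgi-start.sh' user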

    Read the article

  • PHP mkdir and Apache ownership

    - by elcorazon
    Is there a way to set up PHP running under Apache so that it creates folders owned by the owner of the site that creates them, instead of owned by apache? Using WordPress, new folders are created to upload into, but these are owned by apache.apache and not by the site they are running in. This also happens using osTicket. For now we have to SSH into the server and chown the folder, but it would seem there should be a setting somewhere to override the ownership outside of any program that does it.
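    Under mod_php the PHP process runs as the Apache user, so everything it creates is owned by apache; the usual way out is to run each site's PHP as its own user (suexec/suPHP, or a FastCGI/FPM pool per site). As one illustration, a php-fpm pool sketch with hypothetical names:

        [site1]
        user = site1
        group = site1
        listen = /var/run/php-fpm-site1.sock
        pm = dynamic
        pm.max_children = 5
        pm.start_servers = 2
        pm.min_spare_servers = 1
        pm.max_spare_servers = 4

    With the pool in place, mkdir() from that site creates directories owned by site1 instead of apache.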

    Read the article

  • write error: Broken pipe

    - by Fahim
    Hi, I have to run a tool on around 300 directories. Each run takes around 1 minute to 30 minutes, or even more. So I wrote a Python script with a loop to run the tool on all directories, one after another. My Python script has code something like:

        for directory in directories:
            os.popen('runtool_exec ' + directory)

    But when I run the Python script I get the following error messages repeatedly:

        ..
        tail: write error: Broken pipe
        date: write error: Broken pipe
        ..

    All I do is log in on a remote server using ssh, where the tool, the Python script, and the subject directories are kept. When I run the tool individually from the command prompt, using a command like:

        runtool_exec directory

    it works fine. The "broken pipe" error comes only when I run it via the Python script. Any idea or workaround? Please suggest. Thanks. Fahim
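    A plausible cause (an assumption): os.popen returns the tool's stdout as a pipe that the script never reads and soon discards, so child processes that keep writing (the tail and date in the messages) find their pipe closed. A sketch that runs each invocation to completion with stdout/stderr inherited instead:

        import subprocess  # stdlib, Python 2.4+

        for directory in directories:
            # runs the tool, lets it write straight to the terminal, waits for exit
            subprocess.call(['runtool_exec', directory])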

    Read the article

