Search Results

Search found 853 results on 35 pages for 'redirection'.

Page 30/35 | < Previous Page | 26 27 28 29 30 31 32 33 34 35  | Next Page >

  • Connecting to an RMI object without registry

    - by Mark Probst
    I think I need to connect to a remote RMI object without going through the registry, but I don't know how. My situation is this: I'm implementing a simple job distribution service which consists of one distributor and multiple workers. The distributor has a registered RMI object to which clients connect to send jobs, and workers connect to accept jobs. Unfortunately the distributor and worker hosts are behind a firewall. To get to the distributor host I am tunneling two ports (one for the registry, one for the distributor object) via SSH, so I can get to the registry and the distributor from outside the firewall. To make that work I have to set "-Djava.rmi.server.hostname=localhost" on the distributor JVM so that the clients connect to their local, tunneled port, instead of the port on the actual distributor host, which is blocked. This creates a problem for the workers, though, because they need to connect to the distributor directly, but because of the "localhost" redirection they behave like clients and try to connect to a port on their own host, which is not available, because I'm not tunneling on the workers (it is impractical). Now, if I could connect to a remote object directly by giving the hostname and port, I could do away both with the registry on the distributor and the "localhost" hack, and make the workers connect properly. How do I do that? Or is there a different solution to this problem?
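
    One way to skip the registry entirely, sketched below under stated assumptions: the stub returned by UnicastRemoteObject.exportObject() is serializable, so the distributor can write it to a file (or hand it out over any channel the workers can reach) and a worker can deserialize it and call it directly. The JobDistributor interface, port and file are illustrative, not from the original post; note the host baked into the stub is still governed by java.rmi.server.hostname at export time, so the "localhost" setting would have to go for stubs meant for the workers.

        import java.io.*;
        import java.rmi.Remote;
        import java.rmi.RemoteException;
        import java.rmi.server.UnicastRemoteObject;

        // Hypothetical remote interface standing in for the real distributor API.
        interface JobDistributor extends Remote {
            String takeJob() throws RemoteException;
        }

        public class StubExchange {
            // Distributor side: export the object on a fixed port and serialize its stub.
            static void publishStub(JobDistributor impl, File stubFile) throws Exception {
                Remote stub = UnicastRemoteObject.exportObject(impl, 4711); // port assumed reachable by workers
                try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(stubFile))) {
                    out.writeObject(stub);
                }
            }

            // Worker side: deserialize the stub and invoke it directly -- no registry lookup involved.
            static JobDistributor readStub(File stubFile) throws Exception {
                try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(stubFile))) {
                    return (JobDistributor) in.readObject();
                }
            }
        }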

    Read the article

  • PHP + CURL How to get file name

    - by Gunjan
    I'm trying to download users profile picture from facebook in PHP using this function public static function downloadFile($url, $options = array()) { if (!is_array($options)) $options = array(); $options = array_merge(array( 'connectionTimeout' => 5, // seconds 'timeout' => 10, // seconds 'sslVerifyPeer' => false, 'followLocation' => true, // if true, limit recursive redirection by 'maxRedirs' => 2, // setting value for "maxRedirs" ), $options); // create a temporary file (we are assuming that we can write to the system's temporary directory) $tempFileName = tempnam(sys_get_temp_dir(), ''); $fh = fopen($tempFileName, 'w'); $curl = curl_init($url); curl_setopt($curl, CURLOPT_FILE, $fh); curl_setopt($curl, CURLOPT_CONNECTTIMEOUT, $options['connectionTimeout']); curl_setopt($curl, CURLOPT_TIMEOUT, $options['timeout']); curl_setopt($curl, CURLOPT_HEADER, false); curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, $options['sslVerifyPeer']); curl_setopt($curl, CURLOPT_FOLLOWLOCATION, $options['followLocation']); curl_setopt($curl, CURLOPT_MAXREDIRS, $options['maxRedirs']); curl_exec($curl); curl_close($curl); fclose($fh); return $tempFileName; } The problem is it saves the file in the /tmp directory with a random name and without the extension. How can I get the original name of the file (I'm more interested in the original extension) The important things here are: The url actually redirects to image so i cant get it from original url the final url does not have the file name in headers
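
    Since the final URL carries no file name, one option is to derive the extension from the response's Content-Type header. A hedged sketch of that idea, meant to sit before the curl_close() call in the function above (the MIME-to-extension map is illustrative and incomplete):

        // Call curl_getinfo() after curl_exec() but before curl_close().
        $contentType = curl_getinfo($curl, CURLINFO_CONTENT_TYPE);   // e.g. "image/jpeg"

        $map = array(
            'image/jpeg' => '.jpg',
            'image/png'  => '.png',
            'image/gif'  => '.gif',
        );
        $extension = isset($map[$contentType]) ? $map[$contentType] : '';

        // CURLINFO_EFFECTIVE_URL gives the URL after redirects, in case a name can be recovered from it.
        $finalUrl = curl_getinfo($curl, CURLINFO_EFFECTIVE_URL);

        rename($tempFileName, $tempFileName . $extension);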

    Read the article

  • nginx multiple domain virtual host configuration

    - by Poe
    I'm setting up nginx with multiple domain or wildcard support for convenience sake, rather than setting up 50+ different sites-available/* files. Hopefully this is enough to show you what I'm trying to do. Some are static sites, some are dynamic with usually wordpress installed. If an index.php exists, everything works as expected. If a file is requested that does not exist (missing.html), a 500 error is given due to the rewrite. The logged error is: *112 rewrite or internal redirection cycle while processing "/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/missing.html" The basic nginx configuration I'm currently using is: ` listen 80 default; server _; ... location / { root /var/www/$host; if (-f $request_filename) { expires max; break; } # problem, what if index.php does not exist? if (!-e $request_filename) { rewrite ^/(.*)$ /index.php/$1 last; } } ... ` If an index.php does not exist, and the file also does not exist, I would like it to error 404. Currently, nginx does not support multiple condition if's or nested if so I need a workaround.
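
    A hedged sketch of one way around the loop, using try_files instead of if/rewrite: serve the file when it exists, fall back to index.php only when that front controller actually exists for the vhost, and return 404 otherwise. Paths mirror the snippet above but the exact layout is an assumption:

        server {
            listen 80 default;
            server_name _;
            root /var/www/$host;

            location / {
                expires max;
                # Existing file or directory first, otherwise hand off to the fallback.
                try_files $uri $uri/ @front;
            }

            location @front {
                # Only rewrite when this vhost really has a front controller.
                if (!-f $document_root/index.php) {
                    return 404;
                }
                rewrite ^/(.*)$ /index.php/$1 last;
            }
        }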

    Read the article

  • How to disable MSBuild's <RegisterOutput> target on a per-user basis?

    - by Roger Lipscombe
    I like to do my development as a normal (non-Admin) user. Our VS2010 project build fails with "Failed to register output. Please try enabling Per-user Redirection or register the component from a command prompt with elevated permissions." Since I'm not at liberty to change the project file, is there any way that I can add user-specific MSBuild targets or properties that disable this step on a specific machine, or for a specific user? I'd prefer not to hack on the core MSBuild files. I don't want to change the project file because I might then accidentally check it back in. Nor do I want to hack on the MSBuild core files, because they might get overwritten by a service pack. Given that the Visual C++ project files (and associated .targets and .props files) have about a million places to alter the build order and to import arbitrary files, I was hoping for something along those lines. MSBuild imports/evaluates the project file as follows (I've only looked down the branches that interest me): Foo.vcxproj Microsoft.Cpp.Default.props Microsoft.Cpp.props $(UserRootDir)\Microsoft.Cpp.$(Platform).user.props Microsoft.Cpp.targets Microsoft.Cpp.$(Platform).targets ImportBefore\* Microsoft.CppCommon.targets The "RegisterOutput" target is defined in Microsoft.CppCommon.targets. I was hoping to replace this by putting a do-nothing "RegisterOutput" target in $(UserRootDir)\Microsoft.Cpp.$(Platform).user.props, which is %LOCALAPPDATA%\MSBuild\v4.0\Microsoft.Cpp.Win32.user.props (UserRootDir is set in Microsoft.Cpp.Default.props if it's not already set). Unfortunately, MSBuild uses the last-defined target, which means that mine gets overridden by the built-in one. Alternatively, I could attempt to set the %(Link.RegisterOutput) metadata, but I'd have to do that on all Link items. Any idea how to do that, or even if it'll work?
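
    One avenue worth trying, sketched under stated assumptions: instead of overriding the target, default the %(Link.RegisterOutput) metadata off for every Link item via an ItemDefinitionGroup in the per-user props file mentioned above (%LOCALAPPDATA%\MSBuild\v4.0\Microsoft.Cpp.Win32.user.props). Item definitions apply to all items of that type, so nothing has to be edited per item; whether the RegisterOutput target fully honours this, and whether a RegisterOutput value set in the project file itself would still win because the project is evaluated later, are assumptions to verify.

        <?xml version="1.0" encoding="utf-8"?>
        <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <!-- Per-user, per-machine override: default every Link item to not register its output. -->
          <ItemDefinitionGroup>
            <Link>
              <RegisterOutput>false</RegisterOutput>
            </Link>
          </ItemDefinitionGroup>
        </Project>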

    Read the article

  • Paging not working in my wordpress installation

    - by Bootcamp
    I recently started a blog site and wanted to give it a magazine look. I used WordPress for my blog with the Arthemia theme, and I also changed the permalink structure to /%year%/%monthnum%/%day%/%postname%/. The problem is that paging has stopped working on my home page: when I click the next-page link I get a 404 error, and my /page/2 URL does not show the next page. From searching on Google I found that this is due to the redirection performed after the permalink change, and the suggested solution was to skip URL rewriting for the /page/* URLs. This is the article that describes it: http://www.yoursearchadvisor.com/blog/wordpress-next_posts_link-broken/ . I was not able to follow that article and solve my problem, because I could not find the permanent redirect manager under the Settings section as the article describes. Can somebody please guide me through solving this? I am using the latest WordPress version with the Arthemia theme. Thanks.

    Read the article

  • How to append a string to the next line in Perl

    - by tprayush
    Hi all, I have a requirement like this; this is just a sample script: $ cat test.sh #!/bin/bash perl -e ' open(IN,"addrss"); open(out,">>addrss"); @newval; while (<IN>) { @col_val=split(/:/); if ($.==1) { for($i=0;$i<=$#col_val;$i++) { print("Enter value for $col_val[$i] : "); chop($newval[$i]=<STDIN>); } $str=join(":"); $_="$str" print OUT; } else { exit 0; } } close(IN); close(OUT); ' When I run this script: $ ./test.sh Enter value for NAME : abc Enter value for ADDRESS : asff35 Enter value for STATE : XYZ Enter value for CITY : EIDHFF Enter value for CONTACT : 234656758 $ cat addrss NAME:ADDRESS:STATE:CITY:CONTACT abc:asff35:XYZ:EIDHFF:234656758 When I run it a second time: $ cat addrss NAME:ADDRESS:STATE:CITY:CONTACT abc:asff35:XYZ:EIDHFF:234656758ioret:56fgdh:ghdgh:afdfg:987643221 ## it is appended on the same line... I want it to be added on the next line. NOTE: I want to do this by explicitly using the filehandles in Perl, and not with redirection operators in the shell. Please help me!
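
    The new record ends up glued to the previous line because the last line of addrss has no trailing newline and nothing in the script prints one; as pasted, join(":") is also given no list to join and a semicolon is missing before print. A hedged corrected sketch (not the poster's script) that prints a leading newline and uses explicit filehandles throughout:

        #!/usr/bin/perl
        # A sketch: prompt from the header line of addrss and append the new
        # record on its own line, using explicit filehandles only.
        use strict;
        use warnings;

        open(my $in,  '<',  'addrss') or die "cannot read addrss: $!";
        open(my $out, '>>', 'addrss') or die "cannot append to addrss: $!";

        my @newval;
        while (<$in>) {
            next unless $. == 1;                 # only the header line drives the prompts
            chomp;
            my @col_val = split /:/;
            for my $i (0 .. $#col_val) {
                print "Enter value for $col_val[$i] : ";
                chomp($newval[$i] = <STDIN>);
            }
            print {$out} "\n", join(':', @newval);   # leading newline starts a new record
            last;
        }
        close($in);
        close($out);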

    Read the article

  • Using CookieJar in Python to log in to a website from "Google App Engine". What's wrong here?

    - by brilliant
    Hello everybody, I've been trying to find a python code that would log in to my mail box on yahoo.com from "Google App Engine" . Here (click here to see that page) I was given this code: import urllib, urllib2, cookielib url = "https://login.yahoo.com/config/login?" form_data = {'login' : 'my-login-here', 'passwd' : 'my-password-here'} jar = cookielib.CookieJar() opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar)) form_data = urllib.urlencode(form_data) # data returned from this pages contains redirection resp = opener.open(url, form_data) # yahoo redirects to http://my.yahoo.com, so lets go there instead resp = opener.open('http://mail.yahoo.com') print resp.read() The author of this script looked into HTML script of yahoo log-in form and came up with this script. That log-in form contains two fields, one for users' Yahoo! ID and another one is for users' password. Here is how HTML code of that page for both of those fields looks like: User ID field: <input type="text" maxlength="96" class="yreg_ipt" size="17" value="" id="username" name="login"> Password field: <input type="password" maxlength="64" class="yreg_ipt" size="17" value="" id="passwd" name="passwd"> However, when I uploaded this code to Google App Engine I discovered that this log-in form keeps coming back to me, which, I assume, means that logging-in process didn't succeed. Why is it so?
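
    One thing to rule out on App Engine, sketched below under stated assumptions: urlfetch (which backs urllib2 there) follows redirects itself by default, so the Set-Cookie headers on Yahoo's intermediate redirect responses may never reach the CookieJar. Fetching with follow_redirects=False and carrying the cookie header forward by hand avoids that; note Yahoo's real login form also posts several hidden fields, so this alone may still not be enough.

        import urllib
        from google.appengine.api import urlfetch

        form_data = urllib.urlencode({'login': 'my-login-here', 'passwd': 'my-password-here'})

        # Keep the redirect (and its Set-Cookie headers) visible instead of
        # letting urlfetch follow it silently.
        login = urlfetch.fetch("https://login.yahoo.com/config/login?",
                               payload=form_data,
                               method=urlfetch.POST,
                               headers={'Content-Type': 'application/x-www-form-urlencoded'},
                               follow_redirects=False)

        cookies = login.headers.get('set-cookie', '')

        mail = urlfetch.fetch("http://mail.yahoo.com",
                              headers={'Cookie': cookies},
                              follow_redirects=False)
        print mail.content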

    Read the article

  • Creating a HTTP handler for IIS that transparently forwards request to different port?

    - by Lasse V. Karlsen
    I have a public web server with the following software installed: IIS7 on port 80 Subversion over apache on port 81 TeamCity over apache on port 82 Unfortunately, both Subversion and TeamCity comes with their own web server installations, and they work flawlessly, so I don't really want to try to move them all to run under IIS, if that is even possible. However, I was looking at IIS and I noticed the HTTP redirect part, and I was wondering... Would it be possible for me to create a HTTP handler, and install it on a sub-domain under IIS7, so that all requests to, say, http://svn.vkarlsen.no/anything/here is passed to my HTTP handler, which then subsequently creates a request to http://localhost:81/anything/here, retrieves the data, and passes it on to the original requestee? In other words, I would like IIS to handle transparent forwards to port 81 and 82, without using the redirection features. For instance, Subversion doesn't like HTTP redirect and just says that the repository has been moved, and I need to relocate my working copy. That's not what I want. If anyone thinks this can be done, does anyone have any links to topics I need to read up on? I think I can manage the actual request parts, even with authentication, but I have no idea how to create a HTTP handler. Also bear in mind that I need to handle sub-paths and documents beneath the top-level domain, so http://svn.vkarlsen.no/whatever/here needs to be handled by a single handler, I cannot create copies of the handler for all sub-directories since paths are created from time to time.
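
    On IIS7 this is usually solved without writing a handler at all: the Application Request Routing and URL Rewrite modules (separate installs, with proxying enabled in ARR at the server level -- assumptions here) can reverse-proxy an entire site under svn.vkarlsen.no to localhost:81 while preserving sub-paths and authentication headers. A hedged web.config sketch for that sub-domain's site:

        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <!-- Forward everything, including sub-paths, to the Apache instance on port 81. -->
                <rule name="Proxy to Subversion" stopProcessing="true">
                  <match url="(.*)" />
                  <action type="Rewrite" url="http://localhost:81/{R:1}" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>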

    Read the article

  • Apache mod_rewrite - forward domain root to subdirectory

    - by DuFace
    I have what I originally assumed to be a simple problem. I am using shared hosting for my website (so I don't have access to the Apache configuration) and have only been given a single folder to store all my content in. This is all well and good but it means that all my subdomains must have their virtual document root's inside public_html, meaning they effectively become a folder on my main domain. What I'd like to do is organise my public_html something like this: public_html/ www/ index.php ... sub1/ index.php ... some_library/ ... This way, all my web content is still in public_html but only a small fraction of it will be served to the client. I can easily achieve this for all the subdomains, but it's the primary domain that I'm having issues with. I created a .htaccess file in public_html with the following: Options +SymLinksIfOwnerMatch # I'm not allowed to use FollowSymLinks RewriteEngine on RewriteBase / RewriteCond %{REQUEST_URI} !^/www [NC] RewriteRule ^(.*)$ /www/$1 [L] This works fairly well, but for some strange reason www.example.com/stuff is translated into a request for www.example.com/www/stuff and hence a 404 error is given. It was my understanding that unless an 'R' flag was specified, mod_rewrite was purely internal so I can't understand why the request is generated as that implies (to me at least) redirection. I assumed this would be a trivial problem to solve as all I actually want to do is forward all requests for the root of www.example.com to a subdirectory, but I've spent hours searching for answers and none are quite correct. I find it difficult to believe I'm the only person to have this issue. I apologise if this question has been answered on here before, I did search and trawl but couldn't find an appropriate answer. Please could someone shed some light on this?

    Read the article

  • PHP, MySQL: Security concern; Page loads in a weird way

    - by Devner
    Hi all, I am testing the security of my website. I am using the following URL to load a PHP page on localhost: http://localhost/domain/user/index.php/apple.php When I do this, the page does not load normally; instead, the images and icons used in the page simply disappear and only text appears. Also, any link I click on this page brings me back to this same page rather than navigating to the required page. So if I have a hyperlink to another page, such as "SEARCH", which points to search.php, instead of navigating to search.php it reloads index.php and just appends the destination page name to the end of the URL. For example, say I used the link above. It then loads the index.php page minus the images. When I click the "Search" link to navigate to the search page, I see the following in the URL: http://localhost/domain/user/index.php/search.php I have a redirect to a 404 error page configured in my .htaccess file, but the page does not redirect to it. Notice the search.php towards the end of the URL above. Every other link I click also reloads the index.php page and appends the destination page name to the end of the URL as shown above. I was expecting a 404 error, but that does not happen. The URL should not even be able to load the page, because I do NOT have an "index.php" folder in my website. What can I do to solve this? All help is appreciated. Thank you.
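
    The extra /apple.php is being passed to index.php as PATH_INFO, which is also why relative image URLs break (they resolve against /index.php/). If the site never needs PATH_INFO, a single Apache directive makes such URLs return 404 instead; a hedged sketch for the .htaccess (it needs the FileInfo override to be allowed):

        # Stop Apache from treating /index.php/anything as a request for index.php.
        AcceptPathInfo Off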

    Read the article

  • Stopping Wordpress From Appending /index.html to everything

    - by user439796
    I have a spaghetti-code theme I inherited from someone, and for whatever reason Google Analytics shows that I keep getting hits to a variety of URLs on the site, all with /index.html appended. An example would be http://www.mysite.com/category/storyname/index.html and it appears to be doing this to almost everything (despite my permalinks being set to be "tidy"). So... what could possibly be causing this, and how do I fix it? When I visit those pages I get 404 errors, which means my visitors are not getting what they want. I have the Redirection plugin and have been manually trying to update some of these, but it is ridiculous. I'm sure there's a way to do it with .htaccess, but I know next to nothing about that. Here's what my .htaccess currently has (the default): # BEGIN WordPress <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteRule ^index\.php$ - [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.php [L] </IfModule> # END WordPress
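
    A hedged .htaccess sketch for sending those URLs back to the tidy permalinks: a 301 that strips a trailing index.html, placed above the standard WordPress block so it runs first. The rule is illustrative; test it before relying on it, since it does not explain where the /index.html links come from in the first place.

        # BEGIN strip-index-html (keep above the WordPress rules)
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        # /category/storyname/index.html  ->  /category/storyname/
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)index\.html$ /$1 [R=301,L]
        </IfModule>
        # END strip-index-html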

    Read the article

  • Rails - Update a single attribute : link with custom action or form with hidden fields?

    - by MrRuru
    Let's say I have a User model, with a facebook_uid field corresponding to the user's facebook id. I want to allow the user to unlink his facebook account. To do so, I need to set this attribute to nil. I currently see two ways of doing this. First way: create a custom action and link to it # app/controllers/users_controller.rb def unlink_facebook_account @user = User.find params[:id] # Authorization checks go here @user.facebook_uid = nil @user.save # Redirection go here end # config/routes.rb ressources :users do get 'unlink_fb', :on => :member, :as => unlink_fb end # in a view = link_to "Unlink your facebook account", unlink_fb_path(@user) Second way: create a form that posts to the existing update action # app/views/user/_unlink_fb_form.html.haml = form_for @user, :method => "post" do |f| = f.hidden_field :facebook_uid, :value => nil = f.submit "Unlink Facebook account" I'm not a big fan of either way. In the first one, I have to add a new action for something that the update action can already do. In the second one, I cannot set the facebook_uid to nil without customizing the update action, and I cannot have a link instead of a button without adding some javascript. Still, what would you recommend as the best and most elegant solution for this context? Did I miss a third alternative?
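
    A third alternative, sketched under stated assumptions (a current_user helper and Rails 3-style routing): treat the Facebook link itself as a resource with its own tiny controller, so the route stays RESTful, the link stays a link, and the generic update action is left alone.

        # config/routes.rb
        resource :facebook_link, :only => [:destroy]

        # app/controllers/facebook_links_controller.rb
        class FacebookLinksController < ApplicationController
          def destroy
            # Authorization checks go here, as in the custom-action version.
            current_user.update_attribute(:facebook_uid, nil)
            redirect_to edit_user_path(current_user), :notice => "Facebook account unlinked."
          end
        end

        # in a view (HAML)
        = link_to "Unlink your facebook account", facebook_link_path, :method => :delete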

    Read the article

  • linux find the command invoked

    - by Subbu
    I am writing a C program which determines the number of bytes read from standard input. I found there are several ways to give input to the program: piped input, input redirection, or typing on the command line while the program waits for input. How can I find out the exact way the program was invoked from the shell? I tried using command-line arguments but failed. #include <stdio.h> int main(int argc,char *argv[]) { char buffer[100]; int n; for(n=1;n<argc;n++) printf("argument: %s\t",argv[n]); printf("\n"); if(argc==1) printf("waiting for input :"); else if (argc==3) printf("Not waiting for input . Got the source from command itself ."); n = read(0,buffer,100); if(n==-1) printf("\nError occured in reading"); printf("\nReading successfully done\n"); return 0; }
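
    The program cannot recover the exact shell command line that invoked it, but it can tell what kind of stream is attached to standard input, which distinguishes the three cases above. A hedged sketch using isatty() and fstat() (plain POSIX calls, not specific to any shell):

        #include <stdio.h>
        #include <unistd.h>
        #include <sys/stat.h>

        int main(void)
        {
            struct stat sb;
            char buffer[100];

            if (isatty(STDIN_FILENO)) {
                printf("stdin is a terminal: waiting for typed input\n");
            } else if (fstat(STDIN_FILENO, &sb) == 0) {
                if (S_ISFIFO(sb.st_mode))
                    printf("stdin is a pipe\n");
                else if (S_ISREG(sb.st_mode))
                    printf("stdin is a redirected regular file\n");
            }

            ssize_t n = read(STDIN_FILENO, buffer, sizeof buffer);
            printf("read %zd bytes\n", n);
            return 0;
        }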

    Read the article

  • Why googling by keycaptcha gives results on reCAPTCHA? [closed]

    - by vgv8
    EDIT: I'd like to change this title to: How do I STOP Google's manipulation of the search results presented to the general public? I google frequently, and more and more often, when I search for one software product, I am instead given results for Google's own products. For example, if I search for the keyword keycaptcha filtered to the "Past 24 hours" (after clicking "Show search tools" -- "Past 24 hours" in the browser's left sidebar), Google's results show only hits for reCAPTCHA. Image uploaded later: However, if I put keycaptcha in quotes, the results are "correct" (well, kind of, since they are still distorted in comparison with other search engines). I checked this over a few months from different domains at different ISPs, on different operating systems, and from a dozen browsers. The results are the same. Why is this, and how can it be corrected? My related posts: "How Gmail spam filter works?" and IP addresses blacklisting. Update: It is impossible for me to use google.com directly, because I am always redirected from google.com to google.ru by the ip-address "auto-detect location" "convenience" of Google's. Google's help says that my location auto-detection cannot be switched off because it is a very helpful feature. There is a work-around of using google.com/ncr to reach google.com (does anybody know what that means?) and prevent the redirection from google.com, but even then all results are exactly the same. OK, I can search by quoted "keycaptcha", and I am already accustomed to these quirks of Google's, but the question arises: why burn time promoting someone's product if GOOGLE shows its own interests/brands (reCAPTCHA) in place of other product brands, and what can be done about it? The general user will not understand that he was cheated and will just pick up the first (wrong) results. Update2: Note that this behaviour is independent of whether I am logged in to (or logged out of) a Google account, which account it is, the browser (I tried Opera, Chrome, Firefox, IE of different versions, Safari), the OS, or even the domain; there are many such cases, but I targeted one concrete, restricted example specifically to prevent wandering among unrelated details and peculiarities. @Michael, first, that is not true, and this text contains 2 links to real and significant results. I also wrote that this is just one concrete example out of many, based on many months of experience. These distortions happen upon clicking on Past 24 hours, Past week, Past month, or Past year, for many other keywords and search configurations. Update: What I could not understand is: can nobody reproduce the behaviour I described (i.e. when I click the "Past 24 hours" link in a Google search for keycaptcha, the results presented are only about reCAPTCHA)? Update: And for the "Past week":

    Read the article

  • edited and reversed changes on .htaccess - site starts redirecting to .comindex.php/

    - by Aurigae
    Site is a Joomla 2.5 site. I wanted to add a non www to www redirect to the htaccess file, did so, then the redirection went mad, reversed but still the site redirects. When i click view site in admin panel, i get linked to http://domain.comindex.php/ The website is http://www.domain.com Visiting the website URL works without www, but once you click on projects it acts mad too. Projects is managed with joomshopping extension. EDIT: the redirect also happens when rewrite is deactivated in admin panel. ## # @package Joomla # @copyright Copyright (C) 2005 - 2012 Open Source Matters. All rights reserved. # @license GNU General Public License version 2 or later; see LICENSE.txt ## ## # READ THIS COMPLETELY IF YOU CHOOSE TO USE THIS FILE! # # The line just below this section: 'Options +FollowSymLinks' may cause problems # with some server configurations. It is required for use of mod_rewrite, but may already # be set by your server administrator in a way that dissallows changing it in # your .htaccess file. If using it causes your server to error out, comment it out (add # to # beginning of line), reload your site in your browser and test your sef url's. If they work, # it has been set by your server administrator and you do not need it set here. ## ## Can be commented out if causes errors, see notes above. Options +FollowSymLinks ## Mod_rewrite in use. RewriteEngine On ## Begin - Rewrite rules to block out some common exploits. # If you experience problems on your site block out the operations listed below # This attempts to block the most common type of exploit `attempts` to Joomla! # # Block out any script trying to base64_encode data within the URL. RewriteCond %{QUERY_STRING} base64_encode[^(]*\([^)]*\) [OR] # Block out any script that includes a <script> tag in URL. RewriteCond %{QUERY_STRING} (<|%3C)([^s]*s)+cript.*(>|%3E) [NC,OR] # Block out any script trying to set a PHP GLOBALS variable via URL. RewriteCond %{QUERY_STRING} GLOBALS(=|\[|\%[0-9A-Z]{0,2}) [OR] # Block out any script trying to modify a _REQUEST variable via URL. RewriteCond %{QUERY_STRING} _REQUEST(=|\[|\%[0-9A-Z]{0,2}) # Return 403 Forbidden header and show the content of the root homepage RewriteRule .* index.php [F] # ## End - Rewrite rules to block out some common exploits. ## Begin - Custom redirects # # If you need to redirect some pages, or set a canonical non-www to # www redirect (or vice versa), place that code here. Ensure those # redirects use the correct RewriteRule syntax and the [R=301,L] flags. # ## End - Custom redirects ## # Uncomment following line if your webserver's URL # is not directly related to physical file paths. # Update Your Joomla! Directory (just / for root). ## # RewriteBase / ## Begin - Joomla! core SEF Section. 
# RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}] # # If the requested path and file is not /index.php and the request # has not already been internally rewritten to the index.php script RewriteCond %{REQUEST_URI} !^/index\.php # and the request is for something within the component folder, # or for the site root, or for an extensionless URL, or the # requested URL ends with one of the listed extensions RewriteCond %{REQUEST_URI} /component/|(/[^.]*|\.(php|html?|feed|pdf|vcf|raw))$ [NC] # and the requested path and file doesn't directly match a physical file RewriteCond %{REQUEST_FILENAME} !-f # and the requested path and file doesn't directly match a physical folder RewriteCond %{REQUEST_FILENAME} !-d # internally rewrite the request to the index.php script RewriteRule .* index.php [L] # ## End - Joomla! core SEF Section. Redirect 301 /index.html /index.php Redirect 301 /services /project Redirect 301 /projects/projects.html /project Redirect 301 /projects/project1.html /project Redirect 301 /projects/project2.html /project Redirect 301 /projects /project Redirect 301 /keypersonnel.html /about-agrin/keystaff Redirect 301 /cooperation.htm /about-agrin/intcoop Redirect 301 /member.html /about-agrin/memberships Redirect 301 /contact.html /contacts Redirect 301 /hr.htm /jobs Redirect 301 /index.php/404 /index.php
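
    For reference, a hedged sketch of the kind of rule the "Custom redirects" section above is meant to hold; a missing slash or protocol in a hand-written version of such a rule is one plausible way to end up with URLs like domain.comindex.php. domain.com is a placeholder:

        ## Begin - Custom redirects
        # Canonical non-www to www redirect.
        RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
        RewriteRule ^(.*)$ http://www.domain.com/$1 [R=301,L]
        ## End - Custom redirects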

    Read the article

  • Using Supermicro IPMI behind a Proxy?

    - by Stefan Lasiewski
    This is a SuperMicro server with a X8DT3 motherboard which contains an On-board IPMI BMC. In this case, the BMC is a Winbond WPCM450). I believe many Dell servers use this a similar BMC model. A common practice with IPMI is to isolated it to a private, non-routable network. In our case all IPMI cards are plugged into a private management LAN at 192.168.1.0/24 which has no route to the outside world. If I plug my laptop into the 192.168.1.0/24 network, I can verify that all IPMI features work as expected, including the remote console. I need to access all of the IPMI features from a different network, over some sort of encrypted connection. I tried SSH port forwarding. This works fine for a few servers, however, we have close to 100 of these servers and maintaining a SSH client configuration to forward 6 ports on 100 servers is impractical. So I thought I would try a SOCKS proxy. This works, but it seems that the Remote Console application does not obey my systemwide proxy settings. I setup a SOCKS proxy. Verbose logging allows me to see network activity, and if ports are being forwarded. ssh -v -D 3333 [email protected] I configure my system to use the SOCKS proxy. I confirm that Java is using the SOCKS proxy settings. The SOCKS proxy is working. I connect to the BMC at http://192.168.1.100/ using my webbrowser. I can log in, view the Server Health, power the machine on or off, etc. Since SSH verbose logging is enabled, I can see the progress. Here's where it get's tricky: I click on the "Launch Console" button which downloads a file called jviewer.jnlp. JNLP files are opened with Java Web Start. A Java window opens. The titlebar says says "Redirection Viewer" in the title bar. There are menus for "Video" "Keyboard" "Mouse", etc. This confirms that Java is able to download the application through the proxy, and start the application. 60 seconds later, the application times out and simply says "Error opening video socket". Here's a screenshot. If this worked, I would see a VNC-style window. My SSH logs show no connection attempts to ports 5900/5901. This suggests that the Java application started the VNC application, but that the VNC application ignores the systemwide proxy settings and is thus unable to connect to the remote host. Java seems to obey my systemwide proxy settings, but this VNC application seems to ignore it. Is there any way for me to force this VNC application to use my systemwide proxy settings?

    Read the article

  • Profiles and using the local profile for a domain user

    - by Harry
    I'm having some trouble with profiles and would like to reach out for some help. I've tried to do some research to help myself along, but I'm not making much progress on my own. I've pretty much taken over the sysadmin duties for my small lab; I don't have much experience to justify it other than being the only one with the time and dedication to take it on (the environment was in a state of disrepair). The network and domain I look after are extremely small by most standards, about 10 users at a time. Activity on the network is pretty intensive, and we work with fairly large files. None of the network is online, which is nice at the moment because it spares me another headache. On to my profile problem: I have set up roaming profiles for the users on the network. After a little research I think I will switch to a hybrid of folder redirection and roaming profiles, as this seems to be best practice, and I don't want users waiting a long time if they have a bloated profile. I've finally got a build working using MDT. We have Mac Pros, and it wasn't fun getting everything to play nicely. I did this by setting up a reference computer, installing all the software and tools each user would need, and editing the settings and preferences the way we need them. I then used MDT to do a sysprep and capture to create an image of the reference computer, and with that reference image I can push images out to the rest of the desktops in my environment. The issue I'm having is when we join the computer to the domain. The user can log in and operate fine on the computer, but I'd like more. When a user logs on with their domain user name, they lose a lot of the icons I had on the reference image, as well as the desktop background and some other miscellaneous settings. I would love to have users log on with their domain user names and see the icons and desktop environment exactly as I set them up on the reference computer. I'm not sure if this is possible, or whether it's something simple that I'm missing, but any help would be greatly appreciated!
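
    One commonly used way to make domain users inherit the reference desktop, sketched under stated assumptions (Windows 7-era sysprep/MDT): set CopyProfile in the specialize pass of the unattend answer file, so the customised local profile is copied over the Default profile during capture/deployment and every new domain user starts from it.

        <!-- Fragment of unattend.xml; the architecture value is an assumption. -->
        <settings pass="specialize">
          <component name="Microsoft-Windows-Shell-Setup"
                     processorArchitecture="amd64"
                     publicKeyToken="31bf3856ad364e35"
                     language="neutral"
                     versionScope="nonSxS">
            <CopyProfile>true</CopyProfile>
          </component>
        </settings>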

    Read the article

  • Stop squid caching 302 and 307 with deny_info

    - by 0xception
    TLDR: 302, 307 and Error pages are being cached. Need to force a refresh of the content. Long version: I've setup a very minimal squid instance running on a gateway which shouldn't not cache ANYTHING but needs to be solely used as a domain based web filter. I'm using another application which redirects un-authenticated users to the proxy which then uses the deny_info option redirects any non-whitelisted request to the login page. After the user has authenticated the firewall rule gets placed so they no longer get sent to the proxy. The problem is that when a user hits a website (xkcd.com) they are unauthenticated so they get redirected via the firewall: iptables -A unknown-user -t nat -p tcp --dport 80 -j REDIRECT --to-port 39135 to the proxy at this point squid redirects the user to the login page using a 302 (i've also tried 307, and i've also make sure the headers are set to no-cache and/or no-store for Cache-Control and Pragma). Then when the user logs into the system they get firewall rule which no longer directs them to the squid proxy. But if they go to xkcd.com again they will have the original redirection page cached and will once again get the login page. Any idea how to force these redirects to NOT be cached by the browser? Perhaps this is a problem w/ the browsers and not squid, but not sure how to get around it. Full squid config below. # # Recommended minimum configuration: # acl manager proto cache_object acl localhost src 127.0.0.1/32 ::1 acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1 acl localnet src 192.168.182.0/23 # RFC1918 possible internal network acl localnet src fc00::/7 # RFC 4193 local private network range acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines acl https port 443 acl http port 80 acl CONNECT method CONNECT # # Disable Cache # cache deny all via off negative_ttl 0 seconds refresh_all_ims on #error_default_language en # Allow manager access only from localhost http_access allow manager localhost http_access deny manager # Deny access to anything other then http http_access deny !http # Deny CONNECT to other than secure SSL ports http_access deny CONNECT !https visible_hostname gate.ovatn.net # Disable memory pooling memory_pools off # Never use neigh cache objects for cgi-bin scripts hierarchy_stoplist cgi-bin ? # # URL rewrite Test Settings # #acl whitelist dstdomain "/etc/squid/domains-pre.lst" #url_rewrite_program /usr/lib/squid/redirector #url_rewrite_access allow !whitelist #url_rewrite_children 5 startup=0 idle=1 concurrency=0 #http_access allow all # # Deny Info Error Test # acl whitelist dstdomain "/etc/squid/domains-pre.lst" deny_info http://login.domain.com/ whitelist #deny_info ERR_ACCESS_DENIED whitelist http_access deny !whitelist http_access allow whitelist http_port 39135 transparent ## Debug Values access_log /var/log/squid/access-pre.log cache_log /var/log/squid/cache-pre.log # Production Values #access_log /dev/null #cache_log /dev/null # Set PID file pid_filename /var/run/gatekeeper-pre.pid SOLUTION: I believe I might have found a solution to this. After days and days trying to figure it out, only through a random stumble I found client_persistent_connections off server_persistent_connections off This did the trick. So it wasn't so much cache as it was a single persistent connection messing things up. W000T!

    Read the article

  • How to redirect http requests to https (nginx)

    - by spuder
    There appear to be many questions and guides out there that instruct how to setup nginx to redirect http requests to https. Many are outdated, or just flat out wrong. server { listen *:80; server_name <%= @fqdn %>; #root /nowhere; #rewrite ^ https://$server_name$request_uri? permanent; #rewrite ^ https://$server_name$request_uri permanent; #return 301 https://$server_name$request_uri; #return 301 http://$server_name$request_uri; #return 301 http://192.168.33.10$request_uri; return 301 http://$host$request_uri; } server { listen *:443 ssl default_server; server_name <%= @fqdn %>; server_tokens off; root <%= @git_home %>/gitlab/public; ssl on; ssl_certificate <%= @gitlab_ssl_cert %>; ssl_certificate_key <%= @gitlab_ssl_key %>; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers AES:HIGH:!ADH:!MDF; ssl_prefer_server_ciphers on; location / { # serve static files from defined root folder;. # @gitlab is a named location for the upstream fallback, see below try_files $uri $uri/index.html $uri.html @gitlab; } # if a file, which is not found in the root folder is requested, # then the proxy pass the request to the upsteam (gitlab puma) location @gitlab { proxy_read_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694 proxy_connect_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694 proxy_redirect off; ect.... I've restarted after every configuration change, and yet I still only get the 'Welcome to nginx' page when visiting http://192.168.33.10. whereas https://192.168.33.10 works perfectly. Why will nginx still not redirect http requests to https? tailf /var/log/nginx/access.log 192.168.33.1 - - [22/Oct/2013:03:41:39 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0" 192.168.33.1 - - [22/Oct/2013:03:44:43 +0000] "GET / HTTP/1.1" 200 133 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0" tailf /var/log/nginx/gitlab_error.lob 2013/10/22 02:29:14 [crit] 27226#0: *1 connect() to unix:/home/git/gitlab/tmp/sockets/gitlab.socket failed (2: No such file or directory) while connecting to upstream, client: 192.168.33.1, server: gitlab.localdomain, request: "GET / HTTP/1.1", upstream: "http://unix:/home/git/gitlab/tmp/sockets/gitlab.socket:/", host: "192.168.33.10" Resources http://wiki.nginx.org/Pitfalls How to make nginx redirect How to force or redirect to SSL in nginx? nginx ssl redirect Nginx & Https Redirection https://www.tinywp.in/301-redirect-wordpress/ How to force or redirect to SSL in nginx?
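
    Two things stand out in the config above, and a hedged sketch of the port-80 block follows: the uncommented return sends the client back to http:// rather than https://, and the 'Welcome to nginx' page suggests the request is being caught by the stock default site because server_name does not match the bare IP being tested. The host name and IP below are taken from the post; the rest is an assumption:

        server {
            listen *:80 default_server;
            # Match whatever clients actually type, including the bare IP used for testing.
            server_name gitlab.localdomain 192.168.33.10;
            return 301 https://$host$request_uri;
        }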

    Read the article

  • nginx - Redirect specific page paths to https while keeping everything else on http (in a single server call)?

    - by Kris Anderson
    From what I've gathered so far it's clear that running if statements in nginx should be avoided at all costs. Most of the examples I've found so far regarding specific page redirects involve multiple servers being used. But, isn't that a bit wasteful? I'm not sure, but I would think multiple servers to accomplish this would be somewhat slower then a single server when under heavy load. My current server call is this: server { listen 10.0.0.60:80; listen 10.0.0.60:443 default ssl; #other code } What I want to do is redirect certain http requests to https requests. For example, I want /login/ and /my-account/ to always be forced to use SSL. If you're on /help/ though, I want that served over the default http. Is there a way to accomplish this within a single server call? Or is there no downside to using 2 server calls to get this working? nginx seems to be under pretty active development and a lot of the older guides I've followed were from times when you couldn't listen to requests for port 80 and 443 within the same server call. But now that nginx has been updated to support that (I'm running 1.2.4), I'm wondering if there's a "best practice" way of handling this today. Any help would be greatly appreciated. EDIT: I did find this guide: http://redant.com.au/blog/manage-ssl-redirection-in-nginx-using-maps-and-save-the-universe/ and I updated my code as follows: map $uri $my_preferred_proto { default "http"; ~^/#/user/login "https"; } server { listen 10.0.0.60:80; ## listen for ipv4; this line is default and implied listen 10.0.0.60:443 default ssl; if ($my_preferred_proto = "none") { set $my_preferred_proto $scheme; } if ($my_preferred_proto != $scheme) { return 301 $my_preferred_proto://mysite.com$request_uri; } It's not working though. When I change the default to https everything is redirected to SSL so it does somewhat work. But the redirect of /#/user/login is not redirecting to HTTPS. Any ideas? Also, is this a good way to go about this?
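
    One likely reason the login redirect never fires: the fragment part of a URL (everything after #) is never sent to the server, so a map pattern like ^/#/user/login cannot match anything nginx sees. A hedged sketch that keys the map on real request paths instead, inside a single server block (the paths are examples):

        map $uri $my_preferred_proto {
            default        "http";
            ~^/login/      "https";
            ~^/my-account/ "https";
        }

        server {
            listen 10.0.0.60:80;
            listen 10.0.0.60:443 default ssl;

            if ($my_preferred_proto != $scheme) {
                return 301 $my_preferred_proto://mysite.com$request_uri;
            }
            # ... rest of the server block ...
        }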

    Read the article

  • IIS URL Rewrite - Redirect any HTTPS traffic to sub-domain

    - by uniquelau
    We have an interesting hosting environment that dictates all secure traffic must travel over a specific sub domain. E.g. http://secure.domain.com/my-page I'd like to handle this switch using URL Rewrite, i.e. at server level, rather than application level. My cases are: https://secure.domain.com/page = NO CHANGE, remains the same https://domain.com/page = sub-domain inserted, https://secure.domain.com/page https://www.domain.com/page = remove 'www', insert sub-domain In my mind the logic is: INPUT = Full Url = http://www.domain.com/page If INPUT contains HTTPS Then check Full URL, does it contain 'secure'? If YES do nothing, if no add 'secure' If INPUT contains 'www' remove 'www' The certificate is not a wild card (e.g. top level domain) and is issues to: https://secure.domain.com/ The website could also be hosted in a staging environment. E.g. https://secure.environment.domain.com/ I do not have control over 'environment' or 'domain' or the 'tld'. Laurence - Update 1, 19th August So as mentioned below, the trick here is to avoid a redirect loop that could drive anyone well loopy. This is what I propose: One rule to force certain traffic to the secure domain: <rule name="Force 'Umbraco' to secure" stopProcessing="true"> <conditions logicalGrouping="MatchAll"> <add input="{REQUEST_URI}" pattern="^/umbraco/(.+)$" ignoreCase="true" /> <add input="{HTTP_HOST}" negate="true" pattern="^secure\.(.+)$" /> </conditions> <action type="Redirect" url="https://secure.{HTTP_HOST}/{R:0}" redirectType="Permanent" /> </rule> Another rule, that then removes the secure domain, expect for traffic on the secure domain. <rule name="Remove secure, expect for Umbraco" stopProcessing="true"> <match url="(.*)" ignoreCase="true" /> <conditions logicalGrouping="MatchAll"> <add input="{HTTP_HOST}" pattern="^secure\.(.+)$" /> <add input="{REQUEST_URI}" negate="true" pattern="^/umbraco/(.+)$" ignoreCase="true" /> </conditions> <!-- Set Domain to match environment --> <action type="Redirect" url="http://staging.domain.com/{R:0}" appendQueryString="true" redirectType="Permanent" /> </rule> This works for a single directory or group of files, however I've been unable to add additional logic into those two rules. For example you might have 3 folders that need to be secure, I tried adding these as Negate records, but then no redirection happens at all. Hmmm! L

    Read the article

  • How Do I Properly Run OfflineIMAP in a Crontab

    - by alharaka
    Installed Fedora. # cat /etc/redhat_release | awk ' { print F "> " $0; print ""; }' Fedora release 14 (Laughlin) Installed offlineimap from yum, cuz I'm lazy these days. # yum info offlineimap | awk ' { print F "> " $0; print ""; }' Loaded plugins: langpacks, presto, refresh-packagekit Adding en_US to language list Installed Packages Name : offlineimap Arch : noarch Version : 6.2.0 Release : 2.fc14 Size : 611 k Repo : installed From repo : fedora Summary : Powerful IMAP/Maildir synchronization and reader support URL : http://software.complete.org/offlineimap/ License : GPLv2+ Description : OfflineIMAP is a tool to simplify your e-mail reading. With : OfflineIMAP, you can read the same mailbox from multiple : computers. You get a current copy of your messages on each : computer, and changes you make one place will be visible on all : other systems. For instance, you can delete a message on your home : computer, and it will appear deleted on your work computer as : well. OfflineIMAP is also useful if you want to use a mail reader : that does not have IMAP support, has poor IMAP support, or does : not provide disconnected operation. And, lo and behold, every time I run offlineimap and try to redirect output in a crontab, it does not work. Below is my .offlineimaprc. [general] ui = TTY.TTYUI accounts = Personal, Work maxsyncaccounts = 3 [Account Personal] localrepository = Local.Personal remoterepository = Remote.Personal [Account Work] localrepository = Local.Work remoterepository = Remote.Work [Repository Local.Personal] type = Maildir localfolders = ~/mail/gmail [Repository Local.Work] type = Maildir localfolders = ~/mail/companymail [Repository Remote.Personal] type = IMAP remotehost = imap.gmail.com remoteuser = [email protected] remotepass = password ssl = yes maxconnections = 4 # Otherwise "deleting" a message will just remove any labels and # retain the message in the All Mail folder. realdelete = no [Repository Remote.Work] type = IMAP remotehost = server.company.tld remoteuser = username remotepass = password ssl = yes maxconnections = 4 I have tried TTY.TTYUI, NonInteractive.Quiet and NonInteractive.Basic with different variations. With or without redirection, the crontab entries I try cause problems. $ crontab -l | awk ' { print F "> " $0; print ""; }' */5 * * * * offlineimap >> ~/mail/logs/offlineimap.log 2>&1 */5 * * * * offlineimap I always get the same damn error ERROR: No UIs were found usable!. What am I doing wrong!?
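
    "No UIs were found usable!" is what the TTY UI reports when there is no terminal, which is exactly the situation under cron. A hedged sketch of a crontab entry that asks for a non-interactive UI explicitly (the exact UI name varies between OfflineIMAP versions, so check offlineimap --help on 6.2):

        # -o: run once and exit; -u: pick a UI that does not need a terminal.
        */5 * * * * offlineimap -o -u Noninteractive.Quiet >> $HOME/mail/logs/offlineimap.log 2>&1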

    Read the article

  • When running a shell script, how can you protect it from overwriting or truncating files?

    - by Joseph Garvin
    If while an application is running one of the shared libraries it uses is written to or truncated, then the application will crash. Moving the file or removing it wholesale with 'rm' will not cause a crash, because the OS (Solaris in this case but I assume this is true on Linux and other *nix as well) is smart enough to not delete the inode associated with the file while any process has it open. I have a shell script that performs installation of shared libraries. Sometimes, it may be used to reinstall versions of shared libraries that were already installed, without an uninstall first. Because applications may be using the already installed shared libraries, it's important the the script is smart enough to rm the files or move them out of the way (e.g. to a 'deleted' folder that cron could empty at a time when we know no applications will be running) before installing the new ones so that they're not overwritten or truncated. Unfortunately, recently an application crashed just after an install. Coincidence? It's difficult to tell. The real solution here is to switch over to a more robust installation method than an old gigantic shell script, but it'd be nice to have some extra protection until the switch is made. Is there any way to wrap a shell script to protect it from overwriting or truncating files (and ideally failing loudly), but still allowing them to be moved or rm'd? Standard UNIX file permissions won't do the trick because you can't distinguish moving/removing from overwriting/truncating. Aliases could work but I'm not sure what entirety of commands need to be aliased. I imagine something like truss/strace except before each action it checks against a filter whether to actually do it. I don't need a perfect solution that would work even against an intentionally malicious script. Ideas I have so far: Alias cp to GNU cp (not the default since I'm on Solaris) and use the --remove-destination option. Alias install to GNU install and use the --backup option. It might be smart enough to move the existing file to the backup file name rather than making a copy, thus preserving the inode. "set noclobber" in ~/.bashrc so that I/O redirection won't overwrite files
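
    A hedged sketch of the "move aside, never overwrite in place" idiom combined with the noclobber idea from the list above: renaming keeps the old inode alive for processes that still have it open, the copy creates a brand-new inode, and set -C makes an accidental > redirection onto an existing file fail loudly. Names and paths are illustrative:

        #!/bin/sh
        set -C    # noclobber: '>' onto an existing file becomes an error instead of a truncation

        install_lib() {
            src=$1 dest=$2
            if [ -e "$dest" ]; then
                # Rename, don't remove: running processes keep the old inode open.
                mv -- "$dest" "$dest.deleted.$$" || return 1
            fi
            # cp creates a new inode at $dest; the old one is untouched.
            cp -- "$src" "$dest"
        }

        install_lib /tmp/libfoo.so.1 /opt/app/lib/libfoo.so.1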

    Read the article

  • Unextending Sharepoint 2007 Web Application from a zone

    - by dunxd
    When our Sharepoint was migrated from Sharepoint 2003 to Sharepoint 2007 (both fully paid versions), the consultants who carried it out extended each web app into two IIS sites/zones (e.g. the original Web App was http://intranet, then http://newintranet and http://intranet would be created for Sharepoint 2007 - each with its own IIS site). The idea was that during the migration period we would set up DNS to point the old URL to the SP2003 servers and the new one to SP2007, then once the migration was complete, do a DNS change so that SP2007 would receive the requests to the http://intranet type URLs. Unfortunately the contractors did not tidy up the application extensions and IIS sites after the migration, and for some time both URLs were in use, resulting in many document links pointing to the http://newintranet type URLs. This means I need to maintain these URLs. Due to a rejig of organisation structure we now need to relocate some Sharepoint sites, and I'd like to use the RDA Collaboration Sharepoint URL Redirector feature. However, a limitation of this is that it doesn't work for Web Applications which have been extended into multiple zones. So I need to tidy up the situation that our consultants left behind. I think the right thing to do is use the "Remove Sharepoint from IIS Web Site" page in Central Admin to remove the zone for the newintranet type sites, and select the option to also delete the IIS site. That should result in having no IIS sites listening for http://newintranet type URLs. Is this the right procedure? Once I have done that I need to set up Sharepoint to receive requests sent to the http://newintranet type URLs so they will continue to work. I am not sure whether I should do this using Alternative Access Mappings, by adding a host header to the IIS site, or by creating a non-Sharepoint IIS site for each http://newintranet type URL and using IIS redirection to forward the requests to the new URL, with variables passing the path to the Sharepoint site. Does anyone have any thoughts on these options, or any other way of achieving this? Sharepoint 2007 is running on Windows 2003 with IIS6. We don't currently have plans/budget to upgrade to Sharepoint 2010.

    Read the article

< Previous Page | 26 27 28 29 30 31 32 33 34 35  | Next Page >