Search Results

Search found 25503 results on 1021 pages for 'browser security'.

Page 359 of 1021

  • Test server on a local network with XAMPP

    - by hopscotch1978
    Hi, I'm not very proficient with networks and could use some help. I've got a Win 7 desktop with XAMPP which acts as my local dev machine. I've configured a virtual host on the desktop which I'm able to access fine. If I'm understanding things correctly, the virtual host uses port 80 (<VirtualHost 127.0.0.1:80>). I've just tried to configure a separate Win XP laptop on the local wireless network to connect to the main desktop for testing purposes. I've added the IP address and virtual host name to the Hosts file on the laptop. My virtual host is imaginatively named "virtualhost1". When I type this into my laptop browser, it connects correctly to the main desktop and I get the XAMPP welcome screen. But I can't seem to get to the actual site, just the XAMPP welcome screen - it kind of jumps the browser to http://virtualhost1/xampp/. I think it's a port issue of some sort but I have no idea how to resolve it. I would get the same XAMPP welcome screen on my desktop if I omitted ":80" from the virtual host declaration. On my main desktop, typing "virtualhost1" into the browser address bar gives me the site correctly, not the XAMPP welcome screen. Help would be appreciated. Thank you.
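    One possibility (an assumption, since the full httpd-vhosts.conf isn't shown) is that a <VirtualHost 127.0.0.1:80> block only matches requests arriving on the loopback address, so requests coming in from the LAN fall through to XAMPP's default site. A minimal sketch of a name-based setup that listens on all interfaces, with the laptop's hosts entry (the IP and paths are examples):

        # C:\Windows\System32\drivers\etc\hosts on the laptop
        192.168.1.10    virtualhost1

        # httpd-vhosts.conf on the desktop (Apache 2.2 syntax; adjust paths to your setup)
        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName virtualhost1
            DocumentRoot "C:/xampp/htdocs/virtualhost1"
        </VirtualHost>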

    Read the article

  • Reverse and Forward DNS set up correctly but sometimes MapReduce job fails

    - by phodamentals
    Ever since we switched our cluster over to communicate via private interfaces and created a DNS server with correct forward and reverse lookup zones, we get this message before the M/R job runs:

        ERROR org.apache.hadoop.hbase.mapreduce.TableInputFormatBase - Cannot resolve the host name for /192.168.3.9 because of javax.naming.NameNotFoundException: DNS name not found [response code 3]; remaining name '9.3.168.192.in-addr.arpa'

    A dig and an nslookup both show that the reverse and forward lookups get good responses with no errors from within the cluster. Shortly after these messages the job runs, but every once in a while we get an NPE (each frame relayed through the app.insights.search.SearchIndexUpdater INFO log):

        Exception in thread "main" java.lang.NullPointerException
            at org.apache.hadoop.net.DNS.reverseDns(DNS.java:93)
            at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.reverseDNS(TableInputFormatBase.java:219)
            at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:184)
            at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1063)
            at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1080)
            at org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:174)
            at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:992)
            at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:945)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:415)
            at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
            at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:945)
            at org.apache.hadoop.mapreduce.Job.submit(Job.java:566)
            at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:596)
            at app.insights.search.correlator.comments.CommentCorrelator.main(CommentCorrelator.java:72)

    Does anyone else who has set up a CDH Hadoop cluster on a private network with a DNS server get this? CDH 4.3.1 with MR1 2.0.0 and HBase 0.94.6.
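    A quick way to approximate what the job's reverseDns call asks for is to query the PTR record directly against the name server the cluster nodes actually use (the server address below is a placeholder):

        dig @192.168.3.1 -x 192.168.3.9 +short
        nslookup 192.168.3.9 192.168.3.1

    If those answer consistently, the intermittent NPE points more at how the job resolves than at the zone data itself, but that is only a guess.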

    Read the article

  • PHP Output buffer flush issue on Apache/Linux

    - by Iiro Vaahtojärvi
    Hi, I'm running into issues with the PHP output buffer flushing on my Linux web server. The output buffer is maintained correctly and all the right data is pushed to it in my code, but the usual flushing mechanisms won't flush it to the browser. I have tried everything posted here: http://php.net/manual/en/function.flush.php but no success so far. I got a small script from php.net to test it:

        <?php
        ob_start();
        for ($i = 0; $i < 70; $i++) {
            echo 'printing...<br />';
            ob_get_flush();
            flush();
            usleep(300000);
        }
        ?>

    This should print "printing..." to the browser 70 times, one line roughly every 0.3 seconds (usleep(300000) is 300 ms). This works fine on my other testing environment, which is Windows-based (still Apache, the XAMPP package), but on my Linux server it doesn't. It waits for the script to finish before giving anything to the browser, basically ignoring the whole flush command. If anyone has experienced this before or knows of anything that could help (be it server configuration or an adjustment to the code) it would be greatly appreciated!
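    The usual suspects on the Linux side are buffers sitting between PHP and the browser rather than the script itself; whether any of them apply to this particular server is an assumption. A short checklist sketch:

        ; php.ini
        output_buffering = Off
        zlib.output_compression = Off

        # Apache: mod_deflate buffers the whole response for compression; disabling it
        # for the test script is one way to rule it out (the path is an example)
        <Location /flush-test.php>
            SetEnv no-gzip 1
        </Location>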

    Read the article

  • certificate working on IP but not on URL

    - by Stephan
    I asked this question on Stack Overflow, and it was suggested that I repost it here. I have a problem accessing my site (over https) with IEMobile 9 (WP 7.5). It says it's got a problem with the certificate, as if it weren't valid. Everything works on every other browser or platform I tested (Android (several phones and a Galaxy Tab with the stock browser, Firefox, Opera, Dolphin), iOS (iPhone and iPad with Safari and Chrome), an old Nokia with Symbian, Windows 7, Linux and Mac). To try to solve this I saved the certificate (.cer) on the server and accessed it from the phone browser. It always complained, except when I accessed it through the server IP (192.168.xx.xx). At that point it (said it) installed the certificate correctly. If I then try to access index.html, still using the IP, everything works fine and it doesn't complain about the certificate. If, though, I try to access the index using the actual URL (blah.myblah.com), it complains again about the certificate, as if it weren't installed! It isn't a DNS problem, because that's up and serving the right IP, and the phone is correctly set up to use it. The certificate is signed by GeoTrust/RapidSSL for *.myblah.com.
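    One thing worth checking (a guess, not a confirmed diagnosis) is whether the server sends the complete certificate chain, since some mobile browsers reject a chain that desktop browsers quietly repair by fetching the intermediate themselves. A quick way to inspect exactly what is served for the hostname:

        openssl s_client -connect blah.myblah.com:443 -servername blah.myblah.com -showcerts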

    Read the article

  • Cisco ASA 8.2 ACL For NAT

    - by javano
    Sadly I have gone back in time to ASA 8.2(5)33, which I am not so familiar with. I have configured NAT between two interfaces but traffic isn't passing because I can't get the ACL to work. (The full config, which isn't very big, is here, but to keep this post tidy I have pasted just the important parts below.)

        interface Ethernet0/0
         switchport access vlan 108
        !
        interface Ethernet0/6
         switchport access vlan 104
        !
        interface Ethernet0/7
         switchport access vlan 105
        !
        interface Vlan104
         description BUILDING2
         nameif BUILDING2
         security-level 0
         ip address 10.104.0.1 255.255.255.0
        !
        interface Vlan105
         description BUILDING1
         nameif BUILDING1
         security-level 0
         ip address 10.105.0.1 255.255.255.0
        !
        interface Vlan108
         description Main LAN VLAN
         nameif lan
         security-level 0
         ip address 172.22.0.215 255.255.255.0
        !
        object-group network obj_net_Remote_Hosts
         network-object host 111.111.111.3
         network-object host 111.111.111.65
        object-group network obj_host_pc1_eth1
         network-object host 10.104.0.111
        object-group network obj_host_pc2_eth1
         network-object host 10.104.0.112
        object-group network obj_host_pc3_eth1
         network-object host 10.104.0.106
        object-group network obj_host_pc4_eth1
         network-object host 10.104.0.107
        object-group network obj_net_PCs
         description IPs of PCs
         group-object obj_host_pc1_eth1
         group-object obj_host_pc2_eth1
         group-object obj_host_pc3_eth1
         group-object obj_host_pc4_eth1
        access-list acl_NAT_pc1_91 extended permit tcp host 10.104.0.111 host 111.111.111.3 eq 8101
        access-list acl_Permit_PCs extended permit tcp object-group obj_net_PCs object-group obj_net_Remote_Hosts eq 8101
        !
        global (BUILDING1) 11 111.111.222.91 netmask 255.255.255.255
        nat (BUILDING2) 11 access-list acl_NAT_pc1_91
        access-group acl_Permit_PCs in interface BUILDING2
        route BUILDING1 111.111.111.3 255.255.255.255 10.105.0.2 1
        route BUILDING1 111.111.111.65 255.255.255.255 10.105.0.2 1

    When I try to connect from PC1 to 111.111.111.3 I see the following error logged on the ASA console:

        %ASA-2-106001: Inbound TCP connection denied from 10.104.0.111/38495 to 111.111.111.3/8101 flags SYN on interface blades

    What the deuce!
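    On 8.2, packet-tracer is usually the quickest way to see which ACL or NAT stage drops the flow; a sketch (the source port is arbitrary):

        packet-tracer input BUILDING2 tcp 10.104.0.111 38495 111.111.111.3 8101 detailed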

    Read the article

  • unreadable corrupted ntfs partition - lost clusters reported

    - by Eduardo Martinez
    Hi, Partition Magic is reporting multiple 'bad file record signature' and 'lost clusters' errors on my 250 GB Samsung SATA disk (connected via USB to an XP SP3 machine). Unfortunately PM is unable to fix them. PM shows the drive as NTFS and detects the used space and the drive name correctly, but the PM browser (right click on partition, browse...) won't show anything, as if the disk were empty. Windows Explorer is not even picking up the drive name and reports 'the file or directory is corrupted and unreadable'. PTDD Partition Table Doctor demo tells me the boot sector is fine, and I can see all the disk content in its browser - but crucially I cannot copy that content over to a new disk (the PTDD browser is pretty arid, to say the least). Also tried:
    - photorec 6.11.3 - it actually started to extract files but wouldn't keep file names or any folder structure (maybe I missed something in the configuration options)
    - Find and Mount - the intellectual scan went well and the only partition on the disk was detected, but when I tried to mount it as p: I got this error in Windows Explorer: 'p:\ is not accessible. The media is write protected'. Find and Mount lets you create an image from the partition, but I don't have a disk big enough at hand. Does anyone know if this would keep the extracted files/folder structure intact?
    I'm starting to think the disk is pretty screwed and my chances of recovering this data are slim. Please someone enlighten me with that marvellous piece of software I am missing :-) Thanks in advance

    Read the article

  • Which is more secure: Tomcat standalone or Tomcat behind Apache?

    - by NoozNooz42
    This question is not about performance, nor about load-balancing, etc. Which would be more secure: running Tomcat in standalone mode or running Tomcat behind Apache? The thing is, Tomcat is written in Java and hence it is pretty much immune to buffer overruns/overflows (unless a buffer overrun in a C-written lib used by Tomcat can be triggered, but those are rare [the last I remember was in zlib, many many moons ago] and one heck of a hack to actually exploit), which gets rid of a lot of potential exploits. This page: http://wiki.apache.org/tomcat/FAQ/Security has this to say: "There have been no public cases of damage done to a company, organization, or individual due to a Tomcat security issue... there have been only theoretical vulnerabilities found. All of those were addressed even though there were no documented cases of actual exploitation of these vulnerabilities." This, combined with the fact that buffer overruns/overflows are pretty much non-existent in Java, makes me believe that Tomcat in standalone mode is pretty secure. In addition to that, I can install both Java and Tomcat on Linux without needing to be root. The only moment I need to be root is to set up transparent port 8080 to port 80 forwarding (and 8443 to 443). Two iptables lines as root - that's all root is needed for. (I don't know how it is for Apache.) Apache is much more widely used than Tomcat and definitely does not have a security track record as good as Tomcat's. What would make Tomcat + Apache more secure? What would make Tomcat + Apache less secure? In short: which is more secure, Tomcat standalone or Tomcat with Apache? (Remembering that performance isn't an issue here.)
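    For reference, a sketch of the two redirect rules mentioned above - a common way to map the privileged ports to Tomcat's defaults, though the exact table and interface details may need adjusting for a given box:

        iptables -t nat -A PREROUTING -p tcp --dport 80  -j REDIRECT --to-ports 8080
        iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 8443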

    Read the article

  • Nginx and automatic updates

    - by Desmond Hume
    I'm on Ubuntu 12.04.1 with unattended-upgrades configured for automatic security updates, and I installed Nginx by first adding

        deb http://nginx.org/packages/ubuntu/ lucid nginx
        deb-src http://nginx.org/packages/ubuntu/ lucid nginx

    to the /etc/apt/sources.list file, just as was suggested by the official wiki, and then running

        sudo apt-get update
        sudo apt-get install nginx

    which installed Nginx with all the standard modules. But now I think I could make good use of one or two of the optional Nginx modules, like the gzip precompression module or a security-related one. So far, I see two ways of adding an optional module to Nginx: one is compiling and installing from the source code and the other is described in this article. So, which of the ways should I choose so that automatic updates still run for and apply to Nginx and its optional modules? Or should I create a cron job with a command/script specific to Nginx instead of using the unattended-upgrades utility? Can I choose between regular updates and security-only updates being automatically applied to the standard and optional modules? And finally, is there a possibility to automatically update Nginx's modules on the fly (without any connections having been dropped), like the documentation suggests is possible with

        sudo kill -USR2 $( cat /run/nginx.pid )

    P.S. Actually I'm not certain the unattended-upgrades utility would automatically update the standard modules in the first place; not enough time has passed since Nginx was installed to say for sure.
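    Whether unattended-upgrades will touch this package at all depends on which archive the package tracks and whether that origin is listed in the utility's allowed origins; a quick, hedged check of where the installed package comes from:

        apt-cache policy nginx

    If that shows the nginx.org repository rather than an Ubuntu archive, unattended-upgrades will generally ignore it unless the origin is added to /etc/apt/apt.conf.d/50unattended-upgrades.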

    Read the article

  • Wireless disconnects every 30 minutes

    - by Kez
    I have had a look through all the related questions and I get the feeling my problem is unique. My wireless connection disconnects every 30 minutes, for maybe 1 to 3 seconds. If I am browsing the web while it happens, I get the "page cannot be displayed" error message. I checked the event logs as I was curious to know if there was anything in there. There is:

        Event 8033: BROWSER - The browser has forced an election on network \Device\NetBT_Tcpip_{B919CC30-25A9-45DD-A09F-549A6262FC9E} because a master browser was stopped.

    Reported exactly every 30 minutes, which coincides with my wireless problem. I am running Windows 7 Ultimate, 32-bit. My wireless is a Realtek RTL8187 integrated into an ASUS P5K-E/WiFi motherboard. It is in a workgroup and has never been on a domain. This problem does not affect any other computers. Wireless reception is great, and I have ensured that the wireless unit is transmitting on a frequency not used by any nearby wireless base stations. How can I fix this pesky problem?
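    The 8033 events come from the Computer Browser service's master-browser elections; whether they actually cause the drops is only a hypothesis, but one cheap experiment is to stop and disable that service on the workstation and watch whether the 30-minute pattern changes:

        net stop "Computer Browser"
        sc config browser start= disabled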

    Read the article

  • Apache Server Redirect Subdomain to Port

    - by Matt Clark
    I am trying to set up my server with a Minecraft server on a non-standard port with a subdomain redirect, so that when it is reached by Minecraft it goes to the correct port, and when it is reached by a web browser it shows a web page, i.e.:

        **Minecraft**   minecraft.example.com:25565 -> example.com:25465
        **Web Browser** minecraft.example.com:80    -> Displays HTML Page

    I am attempting to do this by using the following VirtualHosts in Apache:

        Listen 25565
        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName minecraft.example.com
            DocumentRoot /var/www/example.com/minecraft
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/example.com/minecraft/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>
        <VirtualHost *:25565>
            ServerAdmin [email protected]
            ServerName minecraft.example.com
            ProxyPass / http://localhost:25465 retry=1 acquire=3000 timeout=6$
            ProxyPassReverse / http://localhost:25465
        </VirtualHost>

    Running this configuration, when I browse to minecraft.example.com I am able to see the files in the /var/www/example.com/minecraft/ folder, but if I try to connect in Minecraft I get an exception, and in the browser I get a page with the following information:

        minecraft.example.com:25565 -> Proxy Error
        The proxy server received an invalid response from an upstream server.
        The proxy server could not handle the request GET /.
        Reason: Error reading from remote server

    Could anybody share some insight on what I may be doing wrong and what the best possible solution would be to fix this? Thanks.
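    One reading of the error (an assumption based only on the config shown) is that ProxyPass with an http:// backend makes Apache speak HTTP to whatever answers on 25465, while a Minecraft client speaks its own raw TCP protocol. If plain port forwarding would be acceptable instead, a hedged sketch at the firewall level on the same server:

        iptables -t nat -A PREROUTING -p tcp --dport 25565 -j REDIRECT --to-ports 25465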

    Read the article

  • Ping and crawling not working, site still resolving

    - by Andrew Alexander
    OK, so we're trying to figure out why the site of one of our clients isn't being crawled by Google (we've ruled out robots.txt and meta tags). When we go to the site, either by IP address or domain name, the site resolves and everything works. However, Google is getting a 302 redirect (which it apparently isn't following for crawling), and when we ping the address, it times out (note: the site is still resolving in the browser throughout all of this). The site is built in ASP.NET (I assume C#), so my thought was that it was an errant redirect rule or some other sort of server-side issue. We also thought that it might be due to incorrect domain pointing (but if we try to ping the IP, it doesn't work, so that sort of rules that out). We're really not sure what is causing all of these errors, or even if they have one single source. Anyone have any ideas what could be going on? Do you need any more information? To boil it down in a TL;DR:
    - Site resolving in browser, both IP and domain name. No problems here.
    - Site not being crawled by Google (gets a 302 it doesn't seem to follow) - it is not due to robots.txt or meta tags.
    - Ping is not working for the IP address. This is very odd, because again, the IP address seems to work fine in the browser.
    - Our thoughts are either a redirect rule issue, a domain pointing issue, or possibly some errant code - or some combination of the three.
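    Since the 302 only shows up for Google, one hedged check is to request the page from the command line with and without a Googlebot user-agent string and compare the status lines (example.com stands in for the client's domain):

        curl -I http://example.com/
        curl -I -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" http://example.com/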

    Read the article

  • Detecting login credentials abuse

    Greetings. I am the webmaster for a small, growing industrial association. Soon, I will have to implement a restricted, members-only section for the website. The problem is that our organization's membership includes both big companies and amateur "clubs" (it's a relatively new industry…). It is clear that those clubs will share the login ID they use to log onto our website. The problem is to detect whether one of their members shares the login credentials with people who are not normally supposed to be accessing the website (there is no objection to such a club having all its members get on the website). I have thought about logging, along with each sign-on, the IP address as well as the OS and the browser used; if the OS/browser stays constant and there are no more than, say, 10 different IP addresses, the account is clearly used by very few different computers. But if there are 50 OS/browser combinations and 150 different IPs, the credentials have obviously been disseminated far, and there would then be cause for action, such as changing the password. Of course, it is extremely annoying when your password is being unilaterally changed. So, for this problem, I thought about allowing the "clubs" to manage their own list of sub-accounts, so that if abuse is suspected, the user responsible can easily be pinned down, and this "sub-member" alone would face the annoyance of a password change. Question: What potential problems would anyone see with such an approach?
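    A minimal sketch of the heuristic described above, assuming sign-on events have already been collected as (account, ip, os, browser) tuples; the thresholds and names are purely illustrative:

        from collections import defaultdict

        # Illustrative thresholds, not recommendations
        MAX_IPS = 10
        MAX_UA_COMBOS = 5

        def flag_shared_accounts(signons):
            """signons: iterable of (account, ip, os, browser) tuples."""
            ips = defaultdict(set)
            combos = defaultdict(set)
            for account, ip, os_name, browser in signons:
                ips[account].add(ip)
                combos[account].add((os_name, browser))
            # Flag accounts seen from too many addresses or OS/browser combinations
            return [a for a in ips
                    if len(ips[a]) > MAX_IPS or len(combos[a]) > MAX_UA_COMBOS]

        # Example usage with made-up data
        events = [("club42", "203.0.113.7", "Windows", "Firefox"),
                  ("club42", "198.51.100.2", "Mac OS", "Safari")]
        print(flag_shared_accounts(events))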

    Read the article

  • Create a wifi hotspot in a place where an authentication is required

    - by SoftTimur
    I live in a residence where Internet is provided via cable. Once the computer is connected to the cable, launching a browser triggers an authentication: I have a username and password to enter, and then the Internet is connected. With a gateway (e.g. the Wireless Cable Voice Gateway model CBVG834G) and 2 cables, two PCs can connect to the Internet with my account at the same time. Now the question is, I don't like the cable and would like to create a WiFi hotspot. It seems realizable with the same gateway. According to the instructions on page 2-4 of the manual: "Enter http://192.168.0.1 in the address field of your Internet browser. Log in to the gateway with either of the default user names, MSO or admin..." However, while connecting to the Internet successfully via cable and the gateway (e.g. Google works), opening 192.168.0.1 oddly gives me an error in the browser. Does anyone know what happened? Is it due to the authentication required by my residence? Is there any other way to build a WiFi hotspot? PS: My system is Mac OS.
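    One small, hedged check from the Mac side: see which address the machine actually uses as its default gateway; if it isn't 192.168.0.1, the gateway's admin page may simply live at a different address in this particular setup:

        netstat -nr | grep default
        route -n get default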

    Read the article

  • MySQL ODBC + SSL with only the SSL Cipher option?

    - by sdek
    Does anybody know how I can have an SSL-encrypted connection over MySQL ODBC without the cert options? I asked my web host to set up a MySQL+SSL connection so that we can access our website's database via ODBC or MySQL Query Browser (or the like). I am able to get an encrypted connection with the standard mysql client and MySQL Query Browser, but I can't get the ODBC connection to work. Looking for a little help... The way they set it up is a little different from what I have read about on the interweb. The host didn't set up a cert, or at least I don't think so - I don't need to specify any cert options in my connection. I just need to specify the SSL cipher. Here is how I connect with the mysql client:

        mysql -h myhost.com -u myuser --ssl-cipher=3DES -p

    That works to get an encrypted connection. At least I am pretty sure it works, because when I run

        mysql> \s

    I get

        SSL: Cipher in use is EDH-RSA-DES-CBC3-SHA

    Also, when I put EDH-RSA-DES-CBC3-SHA into the SSL Cipher field of MySQL Query Browser (without specifying any other SSL options) it connects just fine. But when I try to do the same thing with MySQL ODBC 3.51 and 5.1 I get a generic error. Here is the error from the 5.1 driver:

        Connection Failed: [HY000] [MySQL][ODBC 5.1 Driver]SSL connection error
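    For what it's worth, Connector/ODBC also exposes the cipher as a connection-string option; a hedged sketch of a DSN-less string (server, database and credentials are placeholders, and SSLCIPHER is the option corresponding to the client's --ssl-cipher):

        DRIVER={MySQL ODBC 5.1 Driver};SERVER=myhost.com;PORT=3306;DATABASE=mydb;UID=myuser;PWD=mypassword;SSLCIPHER=EDH-RSA-DES-CBC3-SHA;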

    Read the article

  • How far should we take the N+N redundancy craziness?

    - by Brann
    The industry standard when it comes to redundancy is quite high, to say the least. To illustrate my point, here is my current setup (I'm running a financial service). Each server has a RAID array in case something goes wrong on one hard drive... and in case something goes wrong on the server, it's mirrored by another spare identical server... and both servers cannot go down at the same time, because I've got redundant power and redundant network connectivity, etc... and my hosting center itself has dual electricity connections to two different energy providers, and redundant network connectivity, and redundant toilets in case the two security guards (sorry, four) need to use them at the same time... and in case something goes wrong anyway (a nuclear strike? I can't think of anything else), I've got another identical hosting facility in another country with the exact same setup.
    - Cost of reputational damage if down: very high
    - Probability of a hardware failure with my setup: <<1%
    - Probability of a hardware failure with a less paranoid setup: <<1% as well
    - Probability of a software failure in our application code: 1% (if your software is never down because of bugs, then I suggest you double-check that your reporting/monitoring system is not down. Even SQL Server - which is arguably developed and tested by clever people with a strong methodology - is sometimes down.)
    In other words, I feel like I could run the whole thing on a cheap laptop in my mother's flat, and the human/software problems would still be my bigger risk. Of course, there are other things to take into consideration, such as scalability, data security, and the clients' expectation that you meet the industry standard. But still, hosting two servers in two different data centers (without extra spare servers, and without doubled network equipment apart from what my hosting facility provides) would give me the scalability and the physical security I need. I feel like we're reaching a point where redundancy is just a communication tool. Honestly, what's the difference between 99.999% uptime and 99.9999% uptime when you know you'll be down 1% of the time because of software bugs? How far do you push your redundancy craziness?

    Read the article

  • Manual NAT on Checkpoint (Redirect all http requests to a local web server)

    - by B. Kulakli
    We have a proxy server in our internal network and I want to redirect all Internet HTTP requests to a web server on the local network. It'll be like a network billboard that says "No direct connection is available. Set up your proxy, etc." For example:
    - A user starts the computer
    - Opens the browser
    - Tries to open www.google.com
    - Should see the web server output on the local network
    - Tries another web site on the Internet
    - Should see the web server output on the local network
    - Sets up the proxy
    - Tries to connect to a web site
    - Web site should be loaded
    I have added a simple manual NAT rule to address translation in the Checkpoint firewall but it simply does not work. Here is my address translation rule:

        Source   Destination   Service   T.Source   T.Destination   T.Service
        MY_PC    A_GOOGLE_IP   ALL       ORIGINAL   INT_WEB_SRV     ORIGINAL

    When I ping A_GOOGLE_IP, replies come from INT_WEB_SRV, as I expected. However, when I try to reach A_GOOGLE_IP from the browser (http://A_GOOGLE_IP), no replies come back: the connection stays in SYN_SENT and falls into a timeout. When I look at the firewall log of INT_WEB_SRV, I can see the incoming connection requests from MY_PC are accepted, with NO denies. By the way, there is no problem seeing INT_WEB_SRV (http://INT_WEB_SRV) from the browser. My understanding is that my NAT rule on the Checkpoint NGX R60 does not cover the return packets. I definitely need some help.

    Read the article

  • Links break in IE9 when using Wordpress plugins in non Wordpress Page

    - by mouli
    I have a site that uses SEF URLs and .htaccess RewriteRules to serve up the pages. This has worked fine for several years, until the arrival of IE9. Now it appears that the links are not being rewritten and the site is dead in the water. I have tried different compatibility modes to no avail, I've played with the rewrite rules over and over, and I've tried different doctypes and a few other browser settings. I agree that it cannot in theory be a browser-specific problem if the problem is with the .htaccess file, but this site works in IE8, Firefox and Chrome. I have run the RewriteRule through a validator and it looks fine. Any ideas would be appreciated as I am running out of ideas. The site is www.marlboroughsounds.co.nz, a sample link is http://www.marlboroughsounds.co.nz/walking/freedom-walk-queen-charlotte-track/4dfw and the rewrite rule that's not working looks like this:

        RewriteRule ^walking/.*/([a-z0-9_]*)/?$ /walking.php?act_code=$1 [L]

    The link fails and the server serves up a browser 404 page, not even the custom 404 I have for the site. Any ideas would be much appreciated as I am stumped.
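    Since the rule itself validates, one hedged way to confirm whether the server really behaves differently for IE9 is to replay the sample URL with an IE9 user-agent string and compare the responses:

        curl -I "http://www.marlboroughsounds.co.nz/walking/freedom-walk-queen-charlotte-track/4dfw"
        curl -I -A "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)" "http://www.marlboroughsounds.co.nz/walking/freedom-walk-queen-charlotte-track/4dfw"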

    Read the article

  • How to download Vim script on the command-line?

    - by HaiYuan Zhang
    Whenever I want to install a new Vim script on the Linux server I'm working on, my typical workflow is the following: surf the plugin's homepage in Vim online using FireXXXX, download the right version of the plugin to my laptop by clicking some highlighted link, then upload the downloaded plugin from my laptop to the Linux server using WinSCP. Which is really inconvenient. I don't know what the magic behind this is: I mean, for the same hyperlink, clicking it in a web browser downloads the file, but using Wget plus the hyperlink on the Linux command line ends up with nothing but an error indication. Otherwise I could get the link in the web browser and then use Wget or some similar tool to actually do the downloading. I try new cool Vim scripts quite often, so you can imagine my dismay when I have to repeat the tedious action all the time. What are some tips which can let me download the Vim scripts in a more "professional" way? Post edit: My problem is not finding a tool like Wget or cURL. The problem I met is quite specific: using these tools to download a Vim script. Let's take http://www.vim.org/scripts/script.php?script_id=30 as an example. It's the normal place where one can get the script, at least for me. But I can't find a working URL on this page that I can feed to Wget.
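    For vim.org specifically, script.php?script_id=30 is only the listing page; the files themselves sit behind links of the form download_script.php?src_id=NNN, and those can be fed to Wget directly once an output file name is given. A sketch (SRC_ID is a placeholder copied from the download table on that page):

        wget -O plugin.vim "http://www.vim.org/scripts/download_script.php?src_id=SRC_ID"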

    Read the article

  • Struggling with proper way to setup Permissions on Linux/Apache Web Server

    - by Dr. DOT
    Your expert experience and assistance is greatly appreciated here. I have been running a LAMP server for a long time, yet I still struggle with the best way to set file and directory permissions for FTP and WWW activity. My control panel is WHM/cPanel (not that it makes a difference), and out of the box:
    - files are owned by the user account set up in WHM (e.g. "abc")
    - files have a group setting of "abc" as well
    - file permissions are created with 644
    - directories are owned by "abc"
    - directories have a group setting of "abc"
    - directory permissions are created with 0755
    Again, these are the default permission settings. Now everything is fine with FTP activity, but please advise me if any of these file/directory settings create issues, especially with security. Here's where my struggle comes into play. I have PHP apps that allow a visitor to create, edit, rename, delete, etc. sub-directories and files in certain selected directories. PHP runs as "nobody" on my server. So in order to get my PHP/web apps to work, I have had to run

        chown nobody *
        chgrp nobody *
        chmod 0777 *

    on everything in these certain selected sub-directories. I know this is probably a huge security hole (so don't ask me for any links :) but how should I set all the permissions to allow my FTP user to do his thing while allowing the PHP apps to do their thing, while also "minimizing" any security risks and exposures? I know that big CMS systems like Drupal, Joomla, WordPress and so on handle this. Thanks ahead of time for reading through this and offering your expert advice!
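    One commonly suggested middle ground, sketched here on the assumption that only a few specific directories need to be web-writable (the path is a placeholder): keep the FTP user as owner, give the web server's user group-write access on just those directories, and avoid 0777.

        # /home/abc/public_html/uploads is an example path
        chown -R abc:nobody /home/abc/public_html/uploads
        # setgid on directories so new files inherit the group; group-writable files
        find /home/abc/public_html/uploads -type d -exec chmod 2775 {} \;
        find /home/abc/public_html/uploads -type f -exec chmod 664 {} \;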

    Read the article

  • Completed downloads freeze Windows

    - by Ben Hooper
    The Issue
    Shortly after a file download via Google Chrome for Windows completes, the download will get stuck on "0 seconds left" and all other programs completely freeze into Windows' infamous "Not Responding" state (except Google Chrome, for some reason, although browsing will not work), with Explorer affected particularly badly. Eventually the programs recover themselves, but they recover significantly faster if you cancel the file download, relative to how quickly you react. Performing the exact same operation immediately after cancelling the download usually works without issue. This happens with any file type (.ZIP, .MSI, .MSG, .PNG, .URL, etc.) of any size from any source (Dropbox, SourceForge, Imgur, even tiny and locally generated BLObs created by my own Chrome extension, etc.) to any location.
    Potential Causes
    As this issue is so inconsistent, I haven't been able to prove whether it is Chrome-specific or being caused by my system or my Chrome configuration, but it is happening on both my work and home PCs. I originally suspected that it was being caused by security software scanning completed downloads for threats, but I'm not as confident in that theory anymore, as the issue persisted even after changing my security software from ESET NOD32 and Malwarebytes Anti-Malware Pro, to ESET Endpoint, and then to Microsoft Security Essentials.
    System Information (of both PCs)
    Windows version: 7 Service Pack 1, 64-bit. Google Chrome version: 30.0.1599.101 (but this has been happening for a long time). Screenshots (omitted).

    Read the article

  • Opera's problem with magnet-links

    - by Dmitriy Matveev
    I've encountered the following problem while using the Opera web browser: when I click on a magnet link on some web page, like magnet:?xt=urn:tree:tiger:CXW6MJFRNOEFU2STCBWWOIYZLVCR2FTR37SQCXY&xl=352342016&dn=ER%20-%207x16%20-%20Witch%20Hunt.avi, Opera asks me if I want to open that link with my DC++ client. If I click the 'Yes' button, my DC++ client correctly "opens" the clicked magnet link and performs some action on it. There is also a "Do not show this dialog again" option in Opera's dialog, but it doesn't seem to be working correctly: if I check that option before answering 'Yes' and then click on another magnet link of the same kind, Opera will again ask me how to open the new link. I haven't found a protocol association in the 'Control Panel > Default Programs > Set Associations' part of my Windows Vista settings, but if I paste a magnet link into the "Run" dialog, Vista handles it perfectly. I've tried to find out how to manually set a protocol association in Opera and found the 'Programs' page of the browser's advanced settings. There I discovered that instead of storing protocol-to-application associations, Opera seems to store per-link associations (there are several entries with the exact links as they appeared on the web page as the value of the protocol field). If I click on links which are already stored in Opera's protocol associations, the browser will ask me about them again. I haven't found any information on how to resolve this problem on the Internet; maybe someone on this site will be able to help me.

    Read the article

  • How to correctly deploy Adobe Reader 9.1

    - by Ben Gillam
    Hi, I have recently tried to deploy Adobe Reader 9.1 onto our network here (SBS 2003 server and XP workstations). I followed the instructions for the extraction of the installer and .msi, and then created a .mst transform file to set custom options (suppress EULA, don't create desktop icon, etc.). I then added the package to my deployment GPO, applied the relevant .mst file and proceeded to deploy across the network. The software package is computer-assigned to be installed prior to logon, to avoid user permissions issues. The package deploys correctly to computers and will run perfectly fine if you run it from a shortcut; however, when trying to view a PDF from within a web browser it fails with the following message:

        "The Adobe Acrobat/Reader that is running cannot be used to view PDF files in a web browser. Adobe Acrobat/Reader version 8 or 9 is required. Please exit and try again."

    I have found many pages on Google referring to this problem, but none appear to relate to the problem I have found. http://kb2.adobe.com/cps/405/kb405461.html These fixes recommend correcting a registry entry (which, I should mention, is missing after the deployed installation); however, this does not work. Switching off display in a browser seems to defeat the object of fixing the problem. Removing old versions - there aren't any. Trying with a different user - this affects all users of all privilege levels on all computers. On my workstation I uninstalled Acrobat Reader 9.1 and then reinstalled manually using the same installation source files, and it works fine. Has anyone successfully deployed AR 9.1 on their domain, and if so, how? For the time being I have downloaded the older 8.1.3 release and deployed it in the same way, which works fine, but I would like to be using the up-to-date version. Thanks
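    One hedged comparison test is to apply the same transform by hand on a single workstation and see whether the browser plug-in behaves any differently from the GPO-deployed copy (file names are whatever your extraction produced):

        msiexec /i AcroRead.msi TRANSFORMS=custom.mst /qn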

    Read the article

  • Automatic o/s reset on a dedicated internet browsing Windows 7 pc.

    - by camelCase
    I have just purchased a new Acer Revo nettop PC for dedicated internet browsing. It will be the only PC on a home network. My original plan was to install one virtual PC for family browsing and another for remote web-based server administration, and to ban browser use from the host Windows 7 OS. The idea was that I could recover to a fresh VHD image once a week to eliminate any build-up of malware inside the browser VMs. However, I am now looking for alternative solutions, since the Intel Atom CPU does not have the hardware VT support which Windows Virtual PC requires. Would it be possible to engineer some type of routine overnight host OS wipe and recovery? I guess cyber cafes do something like this? The only user data that would need to be retained across a recovery would be browser bookmarks, but these could be exported to a remote service. Edit 1: I am thinking the OS reset could be done via some disk image recovery process. Edit 2: Just had a brainwave. Routine browsing could be done via the new Google Chrome OS. I have just seen a video of Google Chrome OS booting off a USB pen drive in seconds.

    Read the article
