Search Results

Search found 52885 results on 2116 pages for 'http redirect'.


  • Static IP configuration causing apt-get errors

    - by JPbuntu
    I am getting errors when running apt-get update or when installing new packages, but only when the server is configured with a static IP address. Changing the configuration back to DHCP and restarting networking fixes the problem, although I want a static IP. Once it is working I can change back to my static IP address and restart networking, but that only works until I restart the server (restarting the router is fine), and then I start getting the same errors and have to switch back to DHCP. Any ideas on what could be causing this, or tips on troubleshooting it? Thanks in advance. Here is my static IP configuration:

      auto eth0
      iface eth0 inet static
          address 192.168.2.2
          netmask 255.255.255.0
          gateway 192.168.2.1

    The apt-get update errors go something like this. A few of these:

      Ign http://us.archive.ubuntu.com precise-backports InRelease

    then a lot of these:

      Err http://security.ubuntu.com precise-security Release.gpg
        Something wicked happened resolving 'security.ubuntu.com:http' (-5 - No address associated with hostname)

    and a lot of these:

      W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/precise-backports/universe/i18n/Translation-en  Something wicked happened resolving 'us.archive.ubuntu.com:http' (-5 - No address associated with hostname)
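    Every one of those failures is a DNS lookup error ("No address associated with hostname"), which usually means the static configuration never tells the resolver which nameservers to use. As a hedged illustration only - the nameserver addresses below are placeholders, not taken from the question - a static stanza in /etc/network/interfaces that also declares DNS servers (picked up by resolvconf on Ubuntu 12.04) looks like this:

      auto eth0
      iface eth0 inet static
          address 192.168.2.2
          netmask 255.255.255.0
          gateway 192.168.2.1
          # assumption: the router at 192.168.2.1 forwards DNS; 8.8.8.8 is a public fallback
          dns-nameservers 192.168.2.1 8.8.8.8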

    Read the article

  • still getting 403 on apt-get install: no proxy, URLs seem valid

    - by Berry Tsakala
    I'm trying to install LibreOffice (or OpenOffice) on Ubuntu Server 12.10. The packages exist - verified with "apt-cache search". The file /etc/apt/apt.conf.d/30proxy doesn't exist on my system, and grepping for 'proxy' under /etc/apt/apt.conf.d/ returns nothing. Other packages that I tried to apt-get install are installed OK. The only thing I haven't done is replace the repository servers; I'm afraid that could break the dpkg system! Related questions:

      http://askubuntu.com/questions/304340/apt-get-403-forbidden?rq=1
      http://askubuntu.com/questions/303150/apt-get-403-forbidden-but-accessible-in-the-browser
      http://askubuntu.com/questions/409998/proxy-blocking-apt-get-allowing-wget-curl
      http://askubuntu.com/questions/367737/apt-get-upgrade-gives-403-forbidden-error?rq=1

    What else can I do to solve this 403 error and install LibreOffice/OpenOffice using apt?
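    A few checks that are commonly used to rule out a hidden proxy or a stale mirror entry (a troubleshooting sketch, not taken from the question; adjust package and release names as needed, quantal being 12.10):

      # show any proxy apt itself has picked up from any config fragment
      apt-config dump | grep -i proxy
      # environment proxies also affect apt
      env | grep -i proxy
      # ask apt where it would fetch the package from, then test a repository URL directly
      apt-cache policy libreoffice
      wget -S --spider http://archive.ubuntu.com/ubuntu/dists/quantal/Release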

    Read the article

  • tomcat dns forwarding to multiple applications

    - by basis vasis
    I recently installed Business Objects software on Tomcat 6. I have 2 domains - domain1 and domain2. The software allows access to two of its applications via these URLs: http://myservername.domain1:8080/BO/APP1 and http://myservername.domain1:8080/BO/APP2. Instead of these URLs, I would like the end users to access the apps via something like http://bobj.domain2.com:8080/BO/APP1 and http://bobj.domain2.com:8080/BO/APP2, but I cannot figure out how to accomplish that. I have looked into an HTTP redirect (not good because the destination address shows up in the address bar), domain forwarding (not sure if it would work with multiple applications and forwarding from one domain to another) and also putting Apache in front of Tomcat with mod_jk and virtual hosts (not sure if it is possible when forwarding from one domain to a subdomain in another domain). Please advise as to what would be my best option and how to accomplish it. Thanks a bunch.
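    For reference, one common way to keep the second domain's name in the address bar is a name-based virtual host on a front-end Apache httpd that proxies to Tomcat. This is only a hedged sketch under that assumption (hostnames and ports are copied from the question; whether a front-end Apache is available is not something the question confirms):

      <VirtualHost *:80>
          ServerName bobj.domain2.com
          ProxyRequests Off
          ProxyPreserveHost On
          # forward both applications to the existing Tomcat instance
          ProxyPass        /BO http://myservername.domain1:8080/BO
          ProxyPassReverse /BO http://myservername.domain1:8080/BO
      </VirtualHost>

    With mod_jk the ProxyPass lines would be replaced by JkMount directives, but the virtual-host idea is the same.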

    Read the article

  • Order of mod_rewrite rules in .htaccess not being followed

    - by user39461
    We're trying to enforce HTTPS on certain URLs and HTTP on others. We are also rewriting URLs so all requests go through our index.php. Here is our .htaccess file:

      # enable mod_rewrite
      RewriteEngine on

      # define the base url for accessing this folder
      RewriteBase /

      # Enforce http and https for certain pages
      RewriteCond %{HTTPS} on
      RewriteCond %{REQUEST_URI} !^/(en|fr)/(customer|checkout)(.*)$ [NC]
      RewriteRule ^(.*)$ http://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

      RewriteCond %{HTTPS} off
      RewriteCond %{REQUEST_URI} ^/(en|fr)/(customer|checkout)(.*)$ [NC]
      RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

      # rewrite all requests for file and folders that do not exists
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^(.*)$ index.php?query=$1 [L,QSA]

    If we don't include the last rule (RewriteRule ^(.*)$ index.php?query=$1 [L,QSA]), the HTTPS and HTTP rules work perfectly. However, when we add the last three lines our other rules stop working properly. For example, if we try to go to https://www.domain.com/en/customer/login, it redirects to http://www.domain.com/index.php?query=en/customer/login. It's as if the last rule is being applied before the redirection happens, even though the [L] flag should make the redirection the last rule applied.
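    This re-application is normal for per-directory (.htaccess) rewriting: after the internal rewrite to index.php?query=..., Apache restarts the rule set, and on that second pass REQUEST_URI is /index.php, so the "force HTTP" block matches and fires. A guard that is often used for this (a hedged sketch, not part of the original file) is to test %{THE_REQUEST}, which always holds the client's original request line and never the rewritten URI:

      RewriteCond %{HTTPS} off
      RewriteCond %{THE_REQUEST} \s/+(en|fr)/(customer|checkout) [NC]
      RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

    The matching "force HTTP" block would use the negated form of the same THE_REQUEST condition.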

    Read the article

  • Do CDNs work with POST operations?

    - by iddqd
    I'm using a CDN (Level3) for the first time and I'm a bit confused. I'm accessing dynamic URLs such as http://cdn.mysite.com?getItem=1234 that return text data. Do CDNs work with HTTP POST operations? When I issue an HTTP POST, my "real" origin server receives the request every time, so I'm wondering if the CDN has a problem with POST operations. With HTTP GET it seems to work: I call the URL once (from my application) and can see my server receiving the request; if I call it a second time, the CDN delivers it directly and my server doesn't get anything. However, if I open the same link manually from a second browser tab, my server is asked to deliver it again - shouldn't it be cached by now? Many thanks.
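    POST requests are generally not cacheable, so seeing every POST at the origin is expected; the surprise is the second browser tab. A quick way to see what the edge is actually being told to cache (a hedged check, not from the question) is to inspect the response headers, since a Vary header or a missing Cache-Control can turn each client into a separate cache entry:

      curl -sI 'http://cdn.mysite.com/?getItem=1234' | grep -iE 'cache-control|expires|vary|age|x-cache'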

    Read the article

  • iptables - quick safety eval & limit max conns over time

    - by Peter Hanneman
    Working on locking down a *nix server box with some fancy iptables (v1.4.4) rules. I'm approaching the matter with a "paranoid, everyone's out to get me" style, not necessarily because I expect the box to be a hacker magnet but rather just for the sake of learning iptables and *nix security more thoroughly. Everything is well commented - so if anyone sees something I missed please let me know! The *nat table's "--to-ports" point to the only ports with actively listening services (aside from pings). Layer 2 apps listen exclusively on chmod'ed sockets bridged by one of the layer 1 daemons. Layers 3+ inherit from layer 2 in a similar fashion. The two lines giving me grief are commented out at the very bottom of the *filter rules. The first line runs fine but it's all or nothing. :) Many thanks, Peter H.

      *nat
      #Flush previous rules, chains and counters for the 'nat' table
      -F
      -X
      -Z
      #Redirect traffic to alternate internal ports
      -I PREROUTING --src 0/0 -p tcp --dport 80 -j REDIRECT --to-ports 8080
      -I PREROUTING --src 0/0 -p tcp --dport 443 -j REDIRECT --to-ports 8443
      -I PREROUTING --src 0/0 -p udp --dport 53 -j REDIRECT --to-ports 8053
      -I PREROUTING --src 0/0 -p tcp --dport 9022 -j REDIRECT --to-ports 8022
      COMMIT

      *filter
      #Flush previous settings, chains and counters for the 'filter' table
      -F
      -X
      -Z
      #Set default behavior for all connections and protocols
      -P INPUT DROP
      -P OUTPUT DROP
      -A FORWARD -j DROP
      #Only accept loopback traffic originating from the local NIC
      -A INPUT -i lo -j ACCEPT
      -A INPUT ! -i lo -d 127.0.0.0/8 -j DROP
      #Accept all outgoing non-fragmented traffic having a valid state
      -A OUTPUT ! -f -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
      #Drop fragmented incoming packets (Not always malicious - acceptable for use now)
      -A INPUT -f -j DROP
      #Allow ping requests rate limited to one per second (burst ensures reliable results for high latency connections)
      -A INPUT -p icmp --icmp-type 8 -m limit --limit 1/sec --limit-burst 2 -j ACCEPT
      #Declaration of custom chains
      -N INSPECT_TCP_FLAGS
      -N INSPECT_STATE
      -N INSPECT
      #Drop incoming tcp connections with invalid tcp-flags
      -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL ALL -j DROP
      -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL NONE -j DROP
      -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ACK,FIN FIN -j DROP
      -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ACK,PSH PSH -j DROP
      -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ACK,URG URG -j DROP
      -A INSPECT_TCP_FLAGS -p tcp --tcp-flags SYN,FIN SYN,FIN -j DROP
      -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL FIN,PSH,URG -j DROP
      -A INSPECT_TCP_FLAGS -p tcp --tcp-flags FIN,RST FIN,RST -j DROP
      -A INSPECT_TCP_FLAGS -p tcp --tcp-flags SYN,RST SYN,RST -j DROP
      -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL SYN,FIN,PSH,URG -j DROP
      -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL SYN,RST,ACK,FIN,URG -j DROP
      #Accept incoming traffic having either an established or related state
      -A INSPECT_STATE -m state --state ESTABLISHED,RELATED -j ACCEPT
      #Drop new incoming tcp connections if they aren't SYN packets
      -A INSPECT_STATE -m state --state NEW -p tcp ! --syn -j DROP
      #Drop incoming traffic with invalid states
      -A INSPECT_STATE -m state --state INVALID -j DROP
      #INSPECT chain definition
      -A INSPECT -p tcp -j INSPECT_TCP_FLAGS
      -A INSPECT -j INSPECT_STATE
      #Route incoming traffic through the INSPECT chain
      -A INPUT -j INSPECT
      #Accept redirected HTTP traffic via HA reverse proxy
      -A INPUT -p tcp --dport 8080 -j ACCEPT
      #Accept redirected HTTPS traffic via STUNNEL SSH gateway (As well as tunneled HTTPS traffic destine for other services)
      -A INPUT -p tcp --dport 8443 -j ACCEPT
      #Accept redirected DNS traffic for NSD authoritative nameserver
      -A INPUT -p udp --dport 8053 -j ACCEPT
      #Accept redirected SSH traffic for OpenSSH server
      #Temp solution:
      -A INPUT -p tcp --dport 8022 -j ACCEPT
      #Ideal solution:
      #Limit new ssh connections to max 10 per 10 minutes while allowing an "unlimited" (or better reasonably limited?) number of established connections.
      #-A INPUT -p tcp --dport 8022 --state NEW,ESTABLISHED -m recent --set -j ACCEPT
      #-A INPUT -p tcp --dport 8022 --state NEW -m recent --update --seconds 600 --hitcount 11 -j DROP
      COMMIT

      *mangle
      #Flush previous rules, chains and counters in the 'mangle' table
      -F
      -X
      -Z
      COMMIT
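    On the two commented-out "ideal solution" lines: as written they would be rejected by iptables-restore, because --state is an option of the state match and needs "-m state" in front of it (the recent match is loaded with -m recent, but the state match never is). A hedged, corrected sketch of that rate limit, assuming the missing match module is the only problem:

      -A INPUT -p tcp --dport 8022 -m state --state ESTABLISHED,RELATED -j ACCEPT
      -A INPUT -p tcp --dport 8022 -m state --state NEW -m recent --set --name SSH
      -A INPUT -p tcp --dport 8022 -m state --state NEW -m recent --update --seconds 600 --hitcount 11 --name SSH -j DROP
      -A INPUT -p tcp --dport 8022 -m state --state NEW -j ACCEPT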

    Read the article

  • apt-get upgrade stuck at the same package

    - by decibyte
    Current status: I've started to suspect this is not an Ubuntu issue but something related to the internet connection here at my work. Until I'm sure, I'm leaving my original question below.

    Original question: I'm stuck and can't upgrade my system. Running sudo apt-get upgrade gives me the following:

      mmm@alalunga:~$ sudo apt-get upgrade
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      The following packages have been kept back:
        ginn libgrip0 linux-generic-pae linux-headers-generic-pae linux-image-generic-pae
      The following packages will be upgraded:
        apport apport-gtk bind9-host build-essential dhcp3-client dhcp3-common dnsutils eog evince evince-common firefox firefox-branding firefox-dbg firefox-globalmenu firefox-gnome-support firefox-locale-en gimp gimp-data gir1.2-totem-1.0 glib-networking glib-networking-common glib-networking-services gnupg gpgv icedtea-6-jre-cacao icedtea-6-jre-jamvm icedtea-6-plugin icedtea-netx icedtea-netx-common icedtea-plugin isc-dhcp-client isc-dhcp-common libapache2-mod-php5 libart-2.0-2 libbind9-80 libdns81 libevince3-3 libgimp2.0 libisc83 libisccc80 libisccfg82 liblwres80 libssl-dev libssl-doc libssl1.0.0 libtotem0 linux-firmware linux-libc-dev openjdk-6-jre openjdk-6-jre-headless openjdk-6-jre-lib openssl php-pear php5-cli php5-common php5-curl php5-dev php5-gd php5-mysql php5-xsl policykit-1-gnome python-apport python-django python-gst0.10 python-problem-report resolvconf thunderbird thunderbird-globalmenu thunderbird-gnome-support totem totem-common totem-mozilla totem-plugins xserver-xorg-input-synaptics
      74 upgraded, 0 newly installed, 0 to remove and 5 not upgraded.
      Need to get 317 MB/327 MB of archives.
      After this operation, 1.481 kB of additional disk space will be used.
      Do you want to continue [Y/n]?
      Get:1 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB]
      Get:2 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB]
      Get:3 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB]
      Get:4 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB]
      Get:5 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB]
      Get:6 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB]
      Get:7 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB]
      9% [7 openjdk-6-jre-headless 27,3 MB/27,3 MB 100%]

    It keeps downloading the package openjdk-6-jre-headless, then does nothing for a while (hanging on what is the last line above), then downloads the package again. It's at its 13th download attempt at the moment of writing. The actual downloads seem to complete just fine, but whatever apt does after downloading seems to be failing. I tried removing openjdk-6, but then it wanted to install openjdk-7 instead, with the same result, hanging at openjdk-7-jre-headless instead. I also tried changing servers from my local (Danish) mirror to the main server. No luck. It's also keeping me from upgrading all the other packages. What to do?

    Update: After following the instructions in the answer by @lpanebr, it is now stuck at the linux-firmware package. So maybe it's a more general problem rather than one related to specific packages? Although it did download some packages without problems before getting stuck at linux-firmware.
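    Repeated re-downloads of the same archive are a classic sign of a transparent proxy or caching appliance on the network serving corrupted or truncated files, which apt then rejects and fetches again. A hedged troubleshooting sketch under that assumption (the options below are standard apt settings, not something the question mentions):

      sudo apt-get clean
      sudo apt-get -o Acquire::http::No-Cache=True -o Acquire::http::Pipeline-Depth=0 update
      sudo apt-get -o Acquire::http::No-Cache=True -o Acquire::http::Pipeline-Depth=0 upgrade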

    Read the article

  • Storing deb packages on local media

    - by Saeid87
    Is there a way to store deb packages (all, or a specific version of a package) on local media (DVD, USB, etc.) so that later I would be able to install those packages on a PC which does not have an Internet connection? For example, these are the package sources for what I want to install on a PC without Internet access:

      # TinyOS MSP430 GCC Compiler Repository
      # Version 4.6.3
      deb http://tinyprod.net/repos/debian squeeze main
      deb http://tinyprod.net/repos/debian msp430-46 main

      # TinyOS version 2.1.2
      deb http://tinyos.stanford.edu/tinyos/dists/ubuntu lucid main
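    A minimal sketch of one common approach, assuming a second machine that is online and runs the same release and architecture (the package name below is a placeholder, not taken from the question):

      # on the connected machine: fetch the .debs plus dependencies without installing them
      sudo apt-get install --download-only some-tinyos-package
      cp /var/cache/apt/archives/*.deb /media/usb/debs/

      # on the offline machine: install everything from the copied directory
      cd /media/usb/debs && sudo dpkg -i *.deb

    Tools like apt-offline, or building a local repository with dpkg-scanpackages, cover the same ground in a more structured way.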

    Read the article

  • From Pocket to Instapaper

    - by Michael Freidgeim
    Some time ago I described the issues that I have had since a new version of Read It Later, named Pocket, was introduced. I waited with hope for a new upgrade, but I had a huge disappointment with the latest version of 16 June 2012. It didn't fix either of the two major problems that I have experienced since the new Pocket was introduced: 1. the iPad app still didn't show many of the saved links, and 2. the ability to rename articles on the iPad still wasn't restored.

    I posted a message to their forum. They did not show my comment on the forum (I would call it censorship, not moderation), but a few days ago I received an email recommending to "try logging out of the app on your iPad, and back in again." Their suggestion helped, but I don't understand why it is not posted as a recommendation on their support site.

    So I decided to try Instapaper on my iPad. Previously I had used it for Kindle. I never considered it before on the iPad, because there was no free demo and I was very satisfied with RIL Free and then RIL Pro. Currently Instapaper costs $3, so the price is not an issue. I checked that it has most of the features that I am using (e.g. renaming, folders) and I am quite happy with it now. Actually I am using Pocket (or RIL Free) for old bookmarks (I have 1000+ stored on my iPad) and Instapaper for new bookmarks.

    Having solid experience with RIL/Pocket, I've created a list of suggestions for Marco Arment to implement:

    1. Some pages stored in Instapaper have essential sections of the text removed. E.g. in many blogs the comments are not stored in Instapaper, and some pages lose almost all of their important links (e.g. http://www.lib.rus.ec/a/32416 - sorry, in Russian). RIL/Pocket has two modes for storing offline - Web view and Article view. Web view includes all links/images of the original page and is very reliable. Article view is supposed to strip unrelated information, but it often corrupts the content, so I prefer to use offline Web view. Instapaper should also support an offline Web view, for cases where the stripped view removes important parts of the content.

    2. The black full-screen "Saving" overlay in iPad Safari is very annoying. After the user presses the bookmarklet, the save has some delay and then for a few seconds prevents reading the text. It would be better to show it as a message in the top part of the screen (as in Pocket). I am surprised that a full-screen popup was implemented recently as a desired feature.

    3. There are no comments allowed on http://blog.instapaper.com/. I would prefer to post some of these notes as comments on http://blog.instapaper.com/ rather than write them in my blog and then send the link to Marco. (I found a recommendation on how to add support for comments on Tumblr at http://www.tumblr.com/help, but then realized that Marco was the lead developer of Tumblr.)

    4. There is also no support forum. I understand that maintaining a forum can be a hassle, but the Stack Exchange forums could be referred to on the http://www.instapaper.com/main/support page, i.e. http://webapps.stackexchange.com/search?q=Instapaper or http://apple.stackexchange.com/search?q=Instapaper

    5. Tags are more convenient than folders, i.e. the ability for the same article to have more than one tag. Also, creating new folders is not supported offline, which is an annoying limitation.

    6. I would like to have a narrow list - in addition to the existing list modes, a subject-only or subject+site list to show more items on one screen.

    7. The limit of 500 offline articles sounds quite big, but my RIL list exceeded 1000, so it could be an issue in the future.

    8. The Search button in the iPad version is visible, but doesn't work - it forces you to buy a Premium subscription. I think that is not correct. If a button in a paid version is visible and enabled, it should provide working functionality, e.g. search in article names only, leaving full-text search for the premium subscription.

    9. Copy URL is an important operation and deserves to be on the first level of the Action menu, rather than in the Share sub-menu.

    I also have a comment regarding the post http://www.marco.org/2011/04/28/removed-instapaper-free, where Marco Arment explained why he doesn't provide a free version of Instapaper. I believe that he is losing an essential part of his potential customers. When I decided which iPad application to choose, I selected RIL because I was able to play with the free version, and I liked it. I didn't have a chance to compare RIL and Instapaper on the iPad, so I bought RIL Pro. For a user there is no point in paying even $3 if there is a similar free product that the user can try and see whether it suits him/her.

    I also played with Readability. It doesn't have folders or tags (which are very important for me), but it nicely supports full-text search.

    Read the article

  • C++ Numerical Recipes &ndash; A New Adventure!

    - by JoshReuben
    I am about to embark on a great journey - over the next 6 weeks I plan to read through C++ Numerical Recipes 3rd edition http://amzn.to/YtdpkS. I'll be reading this with an eye to C++ AMP, thinking about implementing the suitable subset (non-recursive, additive, commutative) to run on the GPU. APIs supporting HPC, GPGPU or MapReduce are all useful - providing you have the ability to choose the correct algorithm to leverage on them. I really think this is the most fascinating area of programming - a lot more exciting than LOB CRUD!!! When you think about it, everything is a function - we categorize & we extrapolate. As abstractions get higher & less leaky, sooner or later information systems programming will become a non-programmer task - you will be using WYSIWYG designers to build: GUIs, MVVM, service mapping & virtualization, workflows, ORM entity relations in the data source. SharePoint / LightSwitch are not there yet, but every iteration gets closer. For information workers, managed code is a race to the bottom. As MS futures are a bit shaky right now, the provider-agnostic nature & higher barriers to entry of both C++ & numerical analysis seem like a rational choice to me. It's also fascinating - stepping outside the box.

    This is not the first time I've delved into numerical analysis. 6 months ago I read Numerical Methods with Applications, which can be found for free online: http://nm.mathforcollege.com/. 2 years ago I learned the .NET Extreme Optimization library www.extremeoptimization.com - not bad. 2.5 years ago I read Schaum's Numerical Analysis book http://amzn.to/V5yuLI - not an easy read, as topics jump back & forth across chapters. 3 years ago I read Practical Numerical Methods with C# http://amzn.to/V5yCL9 (C# is a toy learning language for this kind of stuff). I also read through AI: A Modern Approach 3rd edition end to end http://amzn.to/V5yQSp - this took me a few years but was the most rewarding experience. I'll post progress updates - see you on the other side!

    Read the article

  • what is the best way to create a blog

    - by jogesh_p
    As usual I have a query related to my blog, but I am a bit confused about one question: I want to know the best way to set up a blog. I have a domain, http://www.example.com, and I want to create my blog on this domain. The issue is whether I should use a subdomain for the blog, like http://blog.example.com, or put it under the main domain, like http://www.example.com/blog - and which is more beneficial for SEO, and why.

    Read the article

  • Installing latest Firefox beta, am I doing it wrong?

    - by xiaohouzi79
    I followed the instructions in this question to install the latest Firefox beta:

      sudo add-apt-repository ppa:mozillateam/firefox-next
      sudo apt-get update && sudo apt-get install firefox-4.0

    This is the error I'm getting when running the second set of commands:

      Err http://ppa.launchpad.net maverick/main Sources
        404  Not Found
      Err http://ppa.launchpad.net maverick/main i386 Packages
        404  Not Found
      Fetched 24.8kB in 4s (5,279B/s)
      W: Failed to fetch http://ppa.launchpad.net/mozillateam/firefoxt-next/ubuntu/dists/maverick/main/source/Sources.gz  404  Not Found
      W: Failed to fetch http://ppa.launchpad.net/mozillateam/firefoxt-next/ubuntu/dists/maverick/main/binary-i386/Packages.gz  404  Not Found
      E: Some index files failed to download, they have been ignored, or old ones used instead.
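    One thing worth noticing in that output (an observation, not part of the original question): the PPA added on the first line is mozillateam/firefox-next, while the URLs apt is failing on say mozillateam/firefoxt-next. A hedged check of what actually landed in the sources lists:

      grep -ri firefox /etc/apt/sources.list /etc/apt/sources.list.d/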

    Read the article

  • SQL Server 2008 R2 Cumulative Update 8 now available

    - by Greg Low
    CU8 is now available for SQL Server 2008 R2. You will find it here: http://support.microsoft.com/kb/2534352/en-us

    It includes the following fixes:

      VSTS bug 726734, KB 2522893 (http://support.microsoft.com/kb/2522893/) - FIX: A backup operation on a SQL Server 2008 or SQL Server 2008 R2 database fails if you enable change tracking on this database
      VSTS bug 730658, KB 2525665 (http://support.microsoft.com/kb/2525665/) - FIX: SQL Server 2008 BIDS stops responding when you stop debugging...(read more)

    Read the article

  • About MVVM

    - by samkea
    http://csharperimage.jeremylikness.com/

    How to make the Silverlight datagrid more user friendly: http://weblogs.asp.net/alexeyzakharov/archive/2009/06/06/silverlight-tips-amp-tricks-make-silverlight-datagrid-be-more-mvvm-friendly.aspx

    http://www.galasoft.ch/mvvm/sample1/

    Read the article

  • Nginx Subdomain Problem

    - by user292299
    I can't access my subdomain on localhost. My local domain is localhost.dev and it works, but I want automatic subdomains for a PHP script (username.localhost.dev). I tried this:

      server {
          listen 80 default_server;
          listen [::]:80 default_server ipv6only=on;

          access_log /var/www/access.log;
          error_log /var/www/error.log;

          root /var/www;
          index index.php index.html index.htm;

          # Make site accessible from http://localhost/
          server_name localhost.dev *.localhost.dev;

          location / {
              # First attempt to serve request as file, then
              # as directory, then fall back to displaying a 404.
              try_files $uri $uri/ /index.html;
              # Uncomment to enable naxsi on this location
              # include /etc/nginx/naxsi.rules
          }

          location /f2/public/ {
              try_files $uri $uri/ /f2/public/index.php?$args;
          }

          location /doc/ {
              alias /usr/share/doc/;
              autoindex on;
              allow 127.0.0.1;
              allow ::1;
              deny all;
          }

          # Only for nginx-naxsi used with nginx-naxsi-ui : process denied requests
          #location /RequestDenied {
          #    proxy_pass http://127.0.0.1:8080;
          #}

          #error_page 404 /404.html;

          # redirect server error pages to the static page /50x.html
          #
          #error_page 500 502 503 504 /50x.html;
          #location = /50x.html {
          #    root /usr/share/nginx/html;
          #}

          # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
          #
          location ~ \.php$ {
              #fastcgi_split_path_info ^(.+\.php)(/.+)$;
              ## NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
              #
              ## With php5-cgi alone:
              #fastcgi_pass 127.0.0.1:9000;
              ## With php5-fpm:
              #fastcgi_pass unix:/var/run/php5-fpm.sock;
              #fastcgi_index index.php;
              #include fastcgi_params;
              include /etc/nginx/fastcgi_params;
              try_files $uri =404;
              fastcgi_pass 127.0.0.1:9000;
              fastcgi_index index.php;
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          }

          # deny access to .htaccess files, if Apache's document root
          # concurs with nginx's one
          #
          #location ~ /\.ht {
          #    deny all;
          #}
      }

    It's not working. For testing I changed the server_name line to "server_name localhost.dev asd.localhost.dev;", but I still can't access asd.localhost.dev. I also tried a double server{} section:

      # You may add here your
      # server {
      #     ...
      # }
      # statements for each of your virtual hosts to this file

      ##
      # You should look at the following URL's in order to grasp a solid understanding
      # of Nginx configuration files in order to fully unleash the power of Nginx.
      # http://wiki.nginx.org/Pitfalls
      # http://wiki.nginx.org/QuickStart
      # http://wiki.nginx.org/Configuration
      #
      # Generally, you will want to move this file somewhere, and start with a clean
      # file but keep this around for reference. Or just disable in sites-enabled.
      #
      # Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples.
      ##

      server {
          listen 80 default_server;
          listen [::]:80 default_server ipv6only=on;

          access_log /var/www/access.log;
          error_log /var/www/error.log;

          root /var/www;
          index index.php index.html index.htm;

          # Make site accessible from http://localhost/
          server_name localhost.dev;

          location / {
              # First attempt to serve request as file, then
              # as directory, then fall back to displaying a 404.
              try_files $uri $uri/ /index.html;
              # Uncomment to enable naxsi on this location
              # include /etc/nginx/naxsi.rules
          }

          location /f2/public/ {
              try_files $uri $uri/ /f2/public/index.php?$args;
          }

          location /doc/ {
              alias /usr/share/doc/;
              autoindex on;
              allow 127.0.0.1;
              allow ::1;
              deny all;
          }

          # Only for nginx-naxsi used with nginx-naxsi-ui : process denied requests
          #location /RequestDenied {
          #    proxy_pass http://127.0.0.1:8080;
          #}

          #error_page 404 /404.html;

          # redirect server error pages to the static page /50x.html
          #
          #error_page 500 502 503 504 /50x.html;
          #location = /50x.html {
          #    root /usr/share/nginx/html;
          #}

          # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
          #
          location ~ \.php$ {
              #fastcgi_split_path_info ^(.+\.php)(/.+)$;
              ## NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
              #
              ## With php5-cgi alone:
              #fastcgi_pass 127.0.0.1:9000;
              ## With php5-fpm:
              #fastcgi_pass unix:/var/run/php5-fpm.sock;
              #fastcgi_index index.php;
              #include fastcgi_params;
              include /etc/nginx/fastcgi_params;
              try_files $uri =404;
              fastcgi_pass 127.0.0.1:9000;
              fastcgi_index index.php;
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          }

          # deny access to .htaccess files, if Apache's document root
          # concurs with nginx's one
          #
          #location ~ /\.ht {
          #    deny all;
          #}
      }

      ###############################

      server {
          access_log /var/www/access.log;
          error_log /var/www/error.log;

          root /var/www;
          index index.php index.html index.htm;

          # Make site accessible from http://localhost/
          server_name asd.localhost.dev;

          location / {
              # First attempt to serve request as file, then
              # as directory, then fall back to displaying a 404.
              try_files $uri $uri/ /index.html;
              # Uncomment to enable naxsi on this location
              # include /etc/nginx/naxsi.rules
          }

          location /f2/public/ {
              try_files $uri $uri/ /f2/public/index.php?$args;
          }

          location /doc/ {
              alias /usr/share/doc/;
              autoindex on;
              allow 127.0.0.1;
              allow ::1;
              deny all;
          }

          # Only for nginx-naxsi used with nginx-naxsi-ui : process denied requests
          #location /RequestDenied {
          #    proxy_pass http://127.0.0.1:8080;
          #}

          #error_page 404 /404.html;

          # redirect server error pages to the static page /50x.html
          #
          #error_page 500 502 503 504 /50x.html;
          #location = /50x.html {
          #    root /usr/share/nginx/html;
          #}

          # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
          #
          location ~ \.php$ {
              #fastcgi_split_path_info ^(.+\.php)(/.+)$;
              ## NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
              #
              ## With php5-cgi alone:
              #fastcgi_pass 127.0.0.1:9000;
              ## With php5-fpm:
              #fastcgi_pass unix:/var/run/php5-fpm.sock;
              #fastcgi_index index.php;
              #include fastcgi_params;
              include /etc/nginx/fastcgi_params;
              try_files $uri =404;
              fastcgi_pass 127.0.0.1:9000;
              fastcgi_index index.php;
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          }

          # deny access to .htaccess files, if Apache's document root
          # concurs with nginx's one
          #
          #location ~ /\.ht {
          #    deny all;
          #}
      }

      # another virtual host using mix of IP-, name-, and port-based configuration
      #
      #server {
      #    listen 8000;
      #    listen somename:8080;
      #    server_name somename alias another.alias;
      #    root html;
      #    index index.html index.htm;
      #
      #    location / {
      #        try_files $uri $uri/ =404;
      #    }
      #}

      # HTTPS server
      #
      #server {
      #    listen 443;
      #    server_name localhost;
      #
      #    root html;
      #    index index.html index.htm;
      #
      #    ssl on;
      #    ssl_certificate cert.pem;
      #    ssl_certificate_key cert.key;
      #
      #    ssl_session_timeout 5m;
      #
      #    ssl_protocols SSLv3 TLSv1;
      #    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP;
      #    ssl_prefer_server_ciphers on;
      #
      #    location / {
      #        try_files $uri $uri/ =404;
      #    }
      #}

    I still can't get it to work.
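    Two things commonly bite this kind of setup; both are offered only as hedged guesses, since the question doesn't show how asd.localhost.dev is resolved. First, /etc/hosts does not support wildcards, so every test subdomain has to be listed there explicitly (or a local resolver such as dnsmasq has to answer for *.localhost.dev). Second, a single server block can answer for all user subdomains with a wildcard or regex server_name, for example:

      server {
          listen 80;
          # a regex name such as ~^(?<user>.+)\.localhost\.dev$ also works here and
          # captures the username for use inside the block
          server_name localhost.dev *.localhost.dev;
          root /var/www;
          index index.php index.html index.htm;
      }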

    Read the article

  • CodePlex Daily Summary for Wednesday, August 20, 2014

    CodePlex Daily Summary for Wednesday, August 20, 2014

    Popular Releases

      MolGridCal & MolCal: MolGridCal tutorial v1.1: Update the contents for grid computing virtual screening.
      SharePoint 2013 Search Query Tool: SharePoint 2013 Search Query Tool v2.1: Layout improvements; bug fixes; stores auth method and user name; moved experimental settings to Advanced box.
      CtrlAltStudio Viewer: CtrlAltStudio Viewer 1.2.2.41183 Alpha: This alpha of the CtrlAltStudio Viewer provides some preliminary Oculus Rift DK2 support. For more details, see the release notes linked to below. Release notes: http://ctrlaltstudio.com/viewer/release-notes/1-2-2-41183-alpha Support info: http://ctrlaltstudio.com/viewer/support Privacy policy: http://ctrlaltstudio.com/viewer/privacy Disclaimer: This software is not provided or supported by Linden Lab, the makers of Second Life.
      SQL Scripts to Improve DNN Performance: Turbo Scripts 0.9.1 for DNN 7.1.2 - 7.3.2: Support for DNN 7.3.2 added; fixed a typo in GetVendor procedure; new goodies TurboDbConfig and TurboDNNSettings added (re-uploaded with fixed file name).
      The IRIS Toolbox Project: IRIS Toolbox Release 20140819: Regular updates. Issue 66 closed, bug fixed: calls to irisstartup and iriscleanup occasionally crashed when older IRIS versions were on the search path. Issue 65 closed, bug fixed: the function tseries/convert crashed when called with 'method=' 'first' or 'last'.
      ConEmu - Windows console with tabs: ConEmu 140819 [Alpha]: ConEmu - developer build, x86 and x64 versions. Written in C++, no additional packages required. Run "ConEmu.exe" or "ConEmu64.exe". Some useful information can be found at: http://superuser.com/questions/tagged/conemu http://code.google.com/p/conemu-maximus5/wiki/ConEmuFAQ http://code.google.com/p/conemu-maximus5/wiki/TableOfContents If you want to use ConEmu in portable mode, just create an empty "ConEmu.xml" file near "ConEmu.exe".
      HDD Guardian: HDD Guardian 0.6.1: New: package now includes smartctl 6.3. Removed: standard notification e-mail; now you have to set your mail server to send e-mail alerts. Bugfix: USB detection error; custom e-mail server settings issue; bottom panel displays a wrong ATA error count.
      VG-Ripper & PG-Ripper: VG-Ripper 2.9.62: Changes - NEW: added support for 'MadImage.org', 'ImgSpot.org', 'ImgClick.net', 'Imaaage.com', 'Image-Bugs.com', 'Pictomania.org', 'ImgDap.com' and 'FileSpit.com' links; FIXED: 'ImgSee.me' links.
      Linq 4 Javascript: Version 2.4: Minor changes made. Added Count() and Count(with where clause). Distinct will now use a dictionary instead of a custom dictionary object. Organized the unit tests; the variable names will actually make sense and won't be 2 letters. SelectMany will now use the queryable logic.
      Magick.NET: Magick.NET 7.0.0.0001: Magick.NET linked with ImageMagick 7-Beta.
      CMake Tools for Visual Studio: CMake Tools for Visual Studio 1.2: This release adds the following new features and bug fixes from CMake Tools for Visual Studio 1.1: added support for CMake 3.0; added support for word completion; added IntelliSense support for the CMAKEHOSTSYSTEM_INFORMATION command; fixed syntax highlighting for tokens beginning with escape sequences; fixed issue uninstalling CMake Tools for Visual Studio after Visual Studio has been uninstalled.
      GW2 Personal Assistant Overlay: GW2 Personal Assistant Overlay 1.1: Overview: 1.1 is the second 'stable' release of the GW2 Personal Assistant Overlay. This version includes just a couple of very minor features and some minor bug fixes. For details regarding installation, setup, and general use, see Documentation. Note: if you were using a previous version, you will probably want to copy over the following user settings files: GW2PAO.DungeonSettings.xml, GW2PAO.EventSettings.xml, GW2PAO.WvWSettings.xml, GW2PAO.ZoneCompletionSettings.xml. New features: added new "No...
      Fluentx: Fluentx v1.5.3: Added a few more extension methods.
      fastJSON: v2.1.2: 2.1.2 - bug fix for circular references.
      JPush.NET: JPush Server SDK 1.2.1 (For JPush V3): Assembly: 1.2.1.24728. JPush REST API version: v3 (JPush documentation reference). .NET framework: v4.0 or above. Sample class: JPushClientV3. 2014 August 15th.
      SEToolbox: SEToolbox 01.043.008 Release 1: Changed ship/station names to use new DisplayName instead of Beacon/Antenna. Fixed issue with updated SE binaries 01.043.018 using new Voxel Material definitions.
      Google .Net API: Drive.Sample: Google .NET Client API - Drive.Sample. Instructions for the Google .NET Client API - Drive.Sample: browse the source at http://code.google.com/p/google-api-dotnet-client/source/browse/?repo=samples#hg%2FDrive.Sample or the main file Program.cs at http://code.google.com/p/google-api-dotnet-client/source/browse/Drive.Sample/Program.cs?repo=samples 1. Checkout instructions - prerequisites: install Visual Studio and Mercurial (http://mercurial.selenic.com/). ...
      FineUI - jQuery / ExtJS based ASP.NET Controls: FineUI v4.1.1: -??Form??????????????(???-5929)。 -?TemplateField??ExpandOnDoubleClick、ExpandOnEnter、ExpandToSelectRow????(LZOM-5932)。 -BodyPadding???????,??“5”“5 10”,???????????“5px”“5px 10px”。 -??TriggerBox?EnableEdit=false????,??????????????(Jango_Jing-5450)。 -???????????DataKeyNames???????????(yygy-6002)。 -????????????????????????(Gnid-6018)。 -??PageManager???AutoSizePanelID????,??????????????????(yygy-6008)。 -?FState???????????????,????????????????(????-5925)。 -??????OnClientClick???return?????????(FineU...
      DNN CMS Platform: 07.03.02: Major highlights: fixed backwards compatibility issue with 3rd party control panels; fixed issue in the drag and drop functionality of the File Uploader in IE 11 and Safari; fixed issue where users were able to create pages with the same name; fixed issue that affected older versions of DNN that do not include the maxAllowedContentLength during upgrade; fixed issue that stopped some skins from being upgraded to newer versions; fixed issue that randomly showed an unexpected error during us...
      WordMat: WordMat for Mac: WordMat for Mac has a few limitations compared to the Windows version - Graph is not supported (Gnuplot, GeoGebra and Excel work) - units are not supported yet (coming up). The Mac version is not yet as tested as the Windows version.

    New Projects

      Cryptography Enumerations JavaScript Shell: JavaScript shell for CAPICOM enums.
      Famiglia Information Server: Famiglia Project.
      Hustle: Hustle is a small utility that can be used to manage threads when executing command line tools. The tool was built to take advantage of parallel loading in TM1.
      LuckyDraw: a lucky drawer for the 2nd Sudoku Competition, powered by Python.
      Machina Aurum WebDriver Powershell: PowerShell commands for the http://www.w3.org/TR/webdriver/ specification (works on Internet Explorer 11 and Chrome).
      ODBC Connect: ODBC Connect is a Windows user interface for 32bit and 64bit ODBC data sources. Generate SQL statements, export to file, run from the command line to automate.
      Post Publisher: Post Publisher is a post publishing platform. This is a work-in-progress.
      Samurai Framework: Samurai is a game framework written in C# using OpenGL for graphics. It aims to be a higher level abstraction rather than just a simple OpenGL binding.
      SharePoint 2013 tokenizing user/group autocomplete: SharePoint 2013 tokenizing user/group autocomplete plugin allows the user to search/select users and groups.
      SharePoint 2013 User Lync Presence using jQuery: SharePoint 2013 User Lync Presence jQuery plugin renders the presence for a particular user with or without a picture.
      SharePoint Search Box User/Group AutoComplete: SharePoint search box user/group autocomplete is a jQuery plugin to quickly search the user/group from the OOB search input control.
      TI Helper: TI Helper is an AutoHotkey script that can be used to enhance the options in the IBM Cognos TM1 Turbo Integrator (TI) process editor.
      trade-software: for trading education.
      WPF Graphing: custom graphing solution.
      YoutubeChannelDownloader: I love to watch videos on my favorite channel, and I want to download videos from this channel to watch later. So I made this app.

    Read the article

  • Directing from a 1und1 hosting solution, with URLs intact

    - by Jelmar
    I have done this before on GoDaddy without a hitch, but I cannot seem to figure out this particular case. I have a domain space with the temporary URL http://yogainun.mysubname.com/ and am hosting the domain name that is to be applied to it at 1und1.de. Right now I have set it up so that, from the 1und1 domain-name hosting, the address http://www.yoga-in-unternehmen.de/ is frame-redirected to the subdomain I just referred to. But this is not what I want: http://www.yoga-in-unternehmen.de/ is to be the domain itself. With the frame redirect, URLs like http://www.yoga-in-unternehmen.de/example-article do not show up - but that is exactly what I want. With GoDaddy in a similar case, I just turned on DNS and changed the name servers. That worked without a problem, but with 1und1 it does not. Is there something I am missing?
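    What worked on GoDaddy - pointing real DNS records at the host instead of using a frame redirect - is the usual way to keep deep URLs intact. Purely as a hedged illustration (record types and values are placeholders; the actual target IP or hostname has to come from the hosting provider), the zone for the domain would carry something like:

      www.yoga-in-unternehmen.de.   IN  CNAME  yogainun.mysubname.com.
      yoga-in-unternehmen.de.       IN  A      203.0.113.10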

    Read the article

  • Installing Cairo to get FastRWeb working for R gWidgetsWWW2 -pkg

    - by hhh
    I want to install FastRWeb for R, but it requires Cairo. How can I install Cairo? The compilation fails:

      compilation terminated.
      make: *** [xlib-backend.o] Error 1
      ERROR: compilation failed for package ‘Cairo’
      * removing ‘/home/xfz/R/i686-pc-linux-gnu-library/2.13/Cairo’
      ERROR: dependency ‘Cairo’ is not available for package ‘FastRWeb’
      * removing ‘/home/xfz/R/i686-pc-linux-gnu-library/2.13/FastRWeb’
      The downloaded packages are in
        ‘/tmp/Rtmpno8hhF/downloaded_packages’
      Warning messages:
      1: In install.packages("FastRWeb", , "http://rforge.net/", type = "source") :
        installation of package 'Cairo' had non-zero exit status
      2: In install.packages("FastRWeb", , "http://rforge.net/", type = "source") :
        installation of package 'FastRWeb' had non-zero exit status

    I cannot work out which Cairo is meant here; a search gives 16 entries for the term. It is apparently some library:

      $ apt-cache search libcairo | wc
      16 132 996

    Perhaps related:
      http://stackoverflow.com/questions/9826128/r-making-r-rook-program-into-rscript-program-r
      http://stackoverflow.com/questions/9812547/r-gui-vizualiser-with-command-line-access-browser-based-letting-users-to-s

    Some related packages: FastRWeb and RServe for the gWidgetsWWW2 package.
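    The failure is the R package Cairo not compiling because the system-level cairo (and X11) development headers are missing. A hedged sketch of the usual fix on a Debian/Ubuntu system (the package names are the standard ones, but this assumes that distribution family):

      sudo apt-get install libcairo2-dev libxt-dev
      # then, back inside R:
      #   install.packages("Cairo")
      #   install.packages("FastRWeb", , "http://rforge.net/", type = "source")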

    Read the article

  • Having a good domain name and using domain aliases (I use notlong.com)?

    - by Michal P.
    I use only free servers, and after creating my website http://pundaquit.republika.pl I decided to make it reachable under a simple domain name. I decided to use the domain-alias service http://notlong.com/ and now have the simple name http://pundaquit.notlong.com. The second advantage of using an alias here was to be independent of my file host, which I will have to change. I haven't found a better alias service than notlong, because notlong.com is easy to remember. After that I encountered many problems:
      * most forums and social services treat a notlong address as spam,
      * Bing so far hasn't accepted the http://pundaquit.notlong.com domain, and others.
    Is there another way to get a good free domain name? And what about the situation when your hosting server expires? Only a lasting layer of domain aliases keeps you independent of the actual file hosts.

    Read the article

  • Convert Google Video URL [on hold]

    - by user3716328
    When I download a video from YouTube (Google Docs or Google+) with a download manager, this is what I get:

      referer: http://www.youtube.com/watch?v=71zlOWbEoe8
      address: http://r4---sn-5hnezn76.googlevideo.com/videoplayback?id=o-AAyCp6dqoOj_sUwGtSwE9J27TU7-iKHf4d209xDnuCee&signature=2EB6338E125E84AAA3736DBCEF6AC35451A2104A.1C29A8A9B830CDC9E4493D93A25BA62FF3672E4A&key=cms1&ipbits=0&fexp=3300091%2C3300111%2C3300130%2C3300137%2C3300164%2C3310366%2C3310648%2C3310698%2C3312169%2C907050%2C908562%2C913434%2C923341%2C924203%2C930008%2C932617%2C935501%2C939936%2C939937&ratebypass=yes&ip=41.230.222.50&upn=DO_DxWpYrpg&expire=1402102715&sparams=expire,gcr,id,ip,ipbits,itag,ratebypass,source,upn&itag=18&source=youtube&gcr=tn&redirect_counter=1&req_id=8a4c0009cb74c822&cms_redirect=yes&ms=nxu&mt=1402081178&mv=m&mws=yes

    I want to know: if I have only the address, is there a way to get the referer? I mean, if I have this:

      http://r4---sn-5hnezn76.googlevideo.com/videoplayback?id......

    how do I get this?

      http://www.youtube.com/watch?v=.........

    Read the article

  • June 2012 Critical Patch Update for Java SE Released

    - by Eric P. Maurice
    Hi, this is Eric Maurice. Oracle just released the June 2012 Critical Patch Update for Java SE. This Critical Patch Update provides 14 new security fixes across Java SE products. As discussed in previous blog entries, Critical Patch Updates for Java SE will, for the foreseeable future, continue to be released on a separate schedule from that of other Oracle products due to previous commitments made to Java customers. 12 of the 14 Java SE vulnerabilities fixed in this Critical Patch Update may be remotely exploitable without authentication. 6 of these vulnerabilities have a CVSS Base Score of 10.0. In accordance with Oracle's policies, these CVSS 10 scores represent instances where a user running a Java applet or Java Web Start application has administrator privileges (as is typical on Windows XP). When the user does not run with administrator privileges (typical on the Solaris and Linux operating systems), the corresponding CVSS impact scores for Confidentiality, Integrity, and Availability for these vulnerabilities would be "Partial" instead of "Complete", thus lowering these CVSS Base Scores to 7.5.

    Due to the high severity of these vulnerabilities, Oracle recommends that customers obtain and apply these security fixes as soon as possible:
      Developers should download the latest release at http://www.oracle.com/technetwork/java/javase/downloads/index.html
      Java users should download the latest release of the JRE at http://java.com, and of course
      Windows users can take advantage of the Java Automatic Update to get the latest release.

    In addition, Oracle recommends removing old and unused versions of Java, as the latest version is always the recommended one: it contains the most recent enhancements and bug and security fixes.

    For more information:
      • Instructions on removing older (and less secure) versions of Java can be found at http://java.com/en/download/faq/remove_olderversions.xml
      • Users can verify that they're running the most recent version of Java by visiting: http://java.com/en/download/installed.jsp
      • The Advisory for the June 2012 Critical Patch Update for Java SE is located at http://www.oracle.com/technetwork/topics/security/javacpujun2012-1515912.html

    Read the article

  • How to add sitemap of a blogger blog to Google webmaster tools?

    - by Chankey Pathak
    I have two blogs. Let's say http://name.blogspot.com/ and the other one is http://name2.blogspot.com/

    For blog 1: while submitting the sitemap to Google Webmaster Tools I selected the option and appended rss.xml to the URL (http://name.blogspot.com/rss.xml). The sitemap was added successfully.

    For blog 2: I followed the same procedure but it didn't work. Then I tried to open the URL (http://name.blogspot.com/rss.xml); it showed the Atom feed. I tried the same with the other blog, but that URL redirects to the FeedBurner feed. I think this is the reason why the other blog's sitemap is not getting submitted. Help me with it.
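    A couple of URL patterns Blogger exposes that are often submitted instead when the main feed is redirected to FeedBurner (a hedged suggestion, not from the question; substitute the real blog name):

      http://name2.blogspot.com/atom.xml?redirect=false&start-index=1&max-results=500
      http://name2.blogspot.com/sitemap.xml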

    Read the article

  • Get external metadata with streamripper using python script

    - by user72379
    Hi, I like using streamripper to rip music from the web. I have a favorite radio station that doesn't have metadata for the songs, so I have to screen-scrape it from the station's website manually. I created this neat Python script in the format that the docs suggest, and linked its address in the GUI for streamripper, but it still doesn't work. Does anyone know how to make it work? I know it used to work. A sample is given here: http://streamripper.sourceforge.net/history.php

      import http
      import time
      import re

      u = 'SEE IMAGE FOR THIS URL'
      s = http.Session(0, 0)
      s.add_headers(h, persistent=1)  # note: h is not defined anywhere in the posted snippet
      while 1:
          c = unicode(s.get(u))
          pat = r'class="title"([^<]+)([^<]+)'
          m = re.search(pat, c)
          title = m.group(1)
          artist = m.group(2)
          print 'TITLE=' + title + '\n' + 'ARTIST=' + artist + '\n.\n'
          time.sleep(30)

    Screenshot of the URL used in the script: http://s11.postimage.org/sok928lsz/urlstream.png

    Screenshot of where I put the address of the script: http://s17.postimage.org/4bhmhi4yn/streamripper.png

    I've tried putting it in the root of the streamripper application and referring to it as lax.py. I've even compiled it to an EXE and tried linking to that - nothing. What am I doing wrong?

    Read the article

  • ViewModelLocators

    - by samkea
    Bobby's: http://blog.bmdiaz.com/archive/2010/03/31/kiss-and-tell---mvvm-and-the-viewmodellocator.aspx

    Kelly's: http://blog.kellybrownsberger.com/archive/2010/03/31/81.aspx

    John Papa and Glenn Block: http://johnpapa.net/silverlight/simple-viewmodel-locator-for-mvvm-the-patients-have-left-the-asylum/

    Read the article
