Search Results

Search found 14531 results on 582 pages for 'proxy pass'.

Page 7 of 582

  • What's the easiest way to create an HTTP proxy which adds basic authentication to requests?

    - by joshdoe
    I am trying to use a service provided by a server which requires basic HTTP authentication; however, the application I am using does not support authentication. What I'd like to do is create a proxy that will enable my auth-less application to connect via the proxy (which will add the authentication information) to the server requiring authentication. I'm sure this can be done, but I'm overwhelmed by the number of proxies out there and couldn't find an answer on how to do this. Basically, all I want is a proxy that serves this URL: http://username:password@remoteserver/path as this URL: http://proxyserver/path. I can run it on Linux, but it's a plus if I can run it on Windows as well. Open source, or at least free, is a must. A big plus is if it's fairly straightforward to set up.
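
    A minimal sketch of one way to do this with Apache (mod_proxy + mod_headers); the hostnames and the Base64 credential string are placeholders rather than a drop-in configuration:

        # Hypothetical front-end vhost on proxyserver: every request is forwarded
        # to remoteserver with a Basic auth header injected by mod_headers.
        # "dXNlcm5hbWU6cGFzc3dvcmQ=" is base64("username:password") -- substitute your own.
        <VirtualHost *:80>
            ServerName proxyserver
            ProxyRequests Off
            ProxyPass        / http://remoteserver/
            ProxyPassReverse / http://remoteserver/
            RequestHeader set Authorization "Basic dXNlcm5hbWU6cGFzc3dvcmQ="
        </VirtualHost>

    The same idea works in nginx with proxy_pass plus proxy_set_header Authorization, if nginx is easier to run on your Windows box.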

    Read the article

  • How can I use a Windows 2003 server as a HTTP proxy?

    - by Will
    I'd like to set up an HTTP proxy on a Windows 2003 server so that I can access blocked websites such as YouTube from behind a corporate firewall (DAMN THE MAN!). I've never done this before, so I'm not even sure if the picture I have in my head is valid or possible. So I'm stuck behind a firewall that blocks sites I need to access occasionally, sites that are blocked because of abuse by slackers. I've got a Windows 2003 server hosted out on the internet (i.e., outside of this odious firewall). I know I can configure my browser to use a proxy for my HTTP traffic, so why not use my server? What I'd like to know is: Is my concept valid? Can this be done, and will it work? How do I configure my server to act as a proxy? What applications might I have to install? Free is fine, but don't leave out commercial software. TIA
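
    The concept is valid: a browser pointed at a forward proxy on the remote server will fetch the blocked sites from there. One free option that runs on both Windows and Linux is Squid as a plain forward proxy. A minimal, hedged squid.conf sketch (the client address range is a placeholder, and you would want to lock it down or add authentication before exposing it to the internet):

        # Listen for proxy requests and allow only your own egress address(es)
        http_port 3128
        acl myclients src 203.0.113.0/24    # placeholder: the public IP range you browse from
        http_access allow myclients
        http_access deny all

    The browser on the restricted network is then pointed at yourserver:3128 as its HTTP proxy (assuming the corporate firewall lets outbound traffic to that port through).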

    Read the article

  • How to set up Nginx as a caching reverse proxy?

    - by Continuation
    I heard recently that Nginx has added caching to its reverse proxy feature. I looked around but couldn't find much info about it. I want to set up Nginx as a caching reverse proxy in front of Apache/Django: to have Nginx proxy requests for some (but not all) dynamic pages to Apache, then cache the generated pages and serve subsequent requests for those pages from cache. Ideally I'd want to invalidate the cache in two ways: (1) set an expiration date on the cached item; (2) explicitly invalidate the cached item, e.g. if my Django backend has updated certain data, I'd want to tell Nginx to invalidate the cache of the affected pages. Is it possible to set up Nginx to do that? How?
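
    This is possible with nginx's proxy_cache. A minimal sketch, assuming the Apache/Django backend sits on 127.0.0.1:8080 and /articles/ stands in for the pages you want cached (both are placeholders):

        # Declare a cache zone once, in the http context
        proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=django_cache:10m inactive=60m;

        server {
            listen 80;

            # Pages to cache: served from cache while fresh, fetched from Apache otherwise
            location /articles/ {
                proxy_pass        http://127.0.0.1:8080;
                proxy_cache       django_cache;
                proxy_cache_valid 200 10m;   # expiration-based invalidation
            }

            # Everything else is proxied with no caching
            location / {
                proxy_pass http://127.0.0.1:8080;
            }
        }

    proxy_cache_valid covers the expiration requirement; explicit per-page invalidation is not in stock nginx and is usually handled with the third-party ngx_cache_purge module or by versioning the cached URLs.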

    Read the article

  • 24 hours to pass until 24 Hours of PASS

    - by Rob Farley
    There’s a bunch of stuff going on at the moment in the SQL world, so if you’ve missed this particular piece of news, let me tell you a bit about it. Twice a year, the SQL community puts on its biggest virtual event – 24 Hours of PASS. And the next one is tomorrow – March 21st, 2012. Twenty-four sessions, back-to-back, featuring a selection of some of the best presenters in the SQL world, speakers from all over the world, coming together in an online collaboration that so far has well over thirty thousand registrations across the presentations. Some people are signed up for all 24 sessions, some only one. Traditionally, LiveMeeting has been used as the platform for this event, but this year we’re going with a new platform – IBTalk. It promises big, and we’re hoping it won’t let us down. LiveMeeting has been great, and we thank Microsoft for providing it as a platform for the past few years. However, as the event has grown, we’ve found that a new idea is necessary. Last year a search was done for a new platform, and IBTalk ticked the right boxes. The feedback from the presenters and moderators so far has been overwhelmingly positive, and we’re hoping that this is going to really enhance the user experience. One of my favourite features of the platform is the language side. It provides a pretty good translation service. Users who join a session will see a flag on the left of the screen. If they click it, they can change the language to one of 15 on offer. Picking this changes all the labels on everything. It even translates the text in the Q&A window. What this means is that someone from Brazil can ask their question in Portuguese, and the presenter will see it in English. Then if the answer is typed in English, the questioner will be able to see the answer, also in Portuguese. Or they can switch to English to see it as the answerer typed it. I know there’s always the risk of bad translations going on, but I’ve heard good things about this translation service. But there’s more – IBTalk are providing staff to type up closed captioning live during the event. So if English isn’t your first language, don’t worry! Picking your language will also let you see subtitles in your chosen language. I’m hoping that this event is the start of PASS being able to reach people from all corners of the world. Wouldn’t it be great to find that this event is successful, and that the next 24HOP (later in the year, our Summit Preview event) has just as many non-English speakers tuning in as English speakers? If you haven’t been planning which sessions you’re going to attend, you really should get over to sqlpass.org/24hours and have a look through what’s on offer. There’s some amazing material from some of the industry’s brightest, covering a wide range of topics, from classic SQL areas to the brand new SQL 2012 features. There really should be something for every SQL professional. Check the time zones though – if you’re in the US you might be on Summer time, and an hour closer to GMT than normal. Massive thanks must go to Microsoft, SQL Sentry and Idera for sponsoring this event. Without sponsors we wouldn’t be able to put any of this on. These companies are helping 24HOP continue to grow into an event for the whole world. See you tomorrow! @rob_farley | #24hop | #sqlpass

    Read the article

  • How to setup equivalent USVIDEO.ORG DNS-Proxy on Linux

    - by Gary
    I have a VPS in the USA running Ubuntu. I want to set up something similar to usvideo.org (http://www.usvideo.org). Basically, USVIDEO is a DNS service that allows Canadians to access American content like Hulu, Netflix, NBC, etc. (restricted by geographical IP). Here is how I think USVideo does it: (1) Clients (PS3, Xbox, PC) specify the DNS server(s) listed on USVIDEO.org's website. (2) If the DNS request is for a video/audio site such as Netflix or Pandora, forward the request to a proxy; otherwise, for all other requests, forward it to a different DNS server. (3) If a specific video/audio URL is requested, return the address of the proxy server, which in turn relays traffic to the destination video/audio domain via the U.S. gateway so that it appears that the access is coming from a U.S. IP address. (4) Once the DNS request has passed the U.S. IP address check, their proxy server steps out of the loop and lets the video streaming site contact you directly to start the video stream. This trick relies on the way that the video streaming sites check the country of your IP address once up front, but don't actually check the country of the destination IP address while the video is streaming. What is elegant about this solution is that a VPN tunnel is not required to bypass geographical IP checks from certain websites. All that is required on the client side is to specify the DNS server (the VPS). If a certain site is geographically locked, just forward the traffic to a proxy, and that's it. These sites can be specified in the DNS entries, or perhaps in the proxy service to redirect the DNS request to its own proxy. I believe what I need to set up something similar is Squid, iptables, and DNS. What I need help with is how exactly to approach this. Would Squid be set up as a transparent proxy?
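
    For the DNS half, here is a hedged sketch using dnsmasq on the VPS; the upstream resolver and domains are examples, and 198.51.100.10 is a placeholder for the VPS/proxy address:

        # /etc/dnsmasq.conf -- answer the geo-locked domains with the proxy's address,
        # forward everything else to a normal upstream resolver
        server=8.8.8.8
        address=/netflix.com/198.51.100.10
        address=/hulu.com/198.51.100.10

    The proxy listening on 198.51.100.10 then has to accept the web connections for those hostnames and relay them onward, which is where something like Squid running in transparent/intercept mode comes in; HTTPS streams are the tricky part, since they need a relay that can pass the encrypted traffic through untouched.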

    Read the article

  • Apache reverse proxy access control

    - by Steven
    I have an Apache reverse proxy that is currently reverse proxying for a few sites. However, I am now going to be adding a new site (let's call it newsite.com) that should only be accessible by certain IPs. Is this doable using Apache as a reverse proxy? I use VirtualHosts for the sites that are being proxied. I have tried using the Allow/Deny directives in combination with Location statements. For example: <VirtualHost *:80> Servername newsite.com <Location http://newsite.com> Order Deny,Allow Deny from all Allow from x.x.x.x </Location> <IfModule rewrite_module> RewriteRule ^/$ http://newsite.internal.com [proxy] </IfModule> I have also tried configuring allow/deny specifically for the site in the Proxy directives, for example: <Proxy http://newsite.com/> Order deny,allow Deny from all Allow from x.x.x.x </Proxy> I still have this definition for the rest of the proxied sites, however: <Proxy *> Order deny,allow Allow from all </Proxy> No matter what I do, it seems to be accessible from anywhere. Is this because of the definition for all the other proxied sites? Is there an order in which it applies Proxy directives? I have had the newsite one both before and after the * one, and also within the VirtualHost statement.
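
    One likely culprit: <Location> (and <Proxy> used this way) takes a URL path, not a full URL, so <Location http://newsite.com> never matches anything. A hedged sketch of the vhost with the restriction applied to the site root, in Apache 2.2 syntax since the question already uses Order/Deny/Allow:

        <VirtualHost *:80>
            ServerName newsite.com
            # Restrict the whole proxied site to the allowed address(es)
            <Location />
                Order Deny,Allow
                Deny from all
                Allow from x.x.x.x
            </Location>
            ProxyPass        / http://newsite.internal.com/
            ProxyPassReverse / http://newsite.internal.com/
        </VirtualHost>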

    Read the article

  • Connection refused in ssh tunnel to apache forward proxy setup

    - by arkascha
    I am trying to setup a private forward proxy in a small server. I mean to use it during a conference to tunnel my internet access through an ssh tunnel to the proxy server. So I created a virtual host inside apache-2.2 running the proxy, the proxy_http and the proxy_connect module. I use this configuration: <VirtualHost localhost:8080> ServerAdmin xxxxxxxxxxxxxxxxxxxx ServerName yyyyyyyyyyyyyyyyyyyy ErrorLog /var/log/apache2/proxy-error_log CustomLog /var/log/apache2/proxy-access_log combined <IfModule mod_proxy.c> ProxyRequests On <Proxy *> # deny access to all IP addresses except localhost Order deny,allow Deny from all Allow from 127.0.0.1 </Proxy> # The following is my preference. Your mileage may vary. ProxyVia Block ## allow SSL proxy AllowCONNECT 443 </IfModule> </VirtualHost> After restarting apache I create a tunnel from client to server: #> ssh -L8080:localhost:8080 <server address> and try to access the internet through that tunnel: #> links -http-proxy localhost:8080 http://www.linux.org I would expect to see the requested page. Instead a get a "connection refused" error. In the shell holding open the ssh tunnel I get this: channel 3: open failed: connect failed: Connection refused Anyone got an idea why this connection is refused ?
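
    A hedged guess: "connection refused" on the server end of the tunnel usually means nothing is listening on 127.0.0.1:8080 at all, i.e. the vhost was defined but the port was never bound. Something like the following belongs in the main configuration (ports.conf on Debian-style layouts) in addition to the <VirtualHost> block:

        # Bind the forward-proxy port; keep it on loopback so only the ssh tunnel can reach it
        Listen 127.0.0.1:8080
        NameVirtualHost 127.0.0.1:8080   # Apache 2.2 only, and only if several vhosts share the port

    Running netstat -tlnp on the server quickly shows whether Apache is actually listening on 8080.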

    Read the article

  • Apache and multiple tomcats proxy

    - by Sebb77
    I have one Apache server and two Tomcat servers with two different applications. I want to use Apache as a proxy so that users can access the applications from the same URL using different paths, e.g.: localhost/app1 --> localhost:8080/app1 and localhost/app2 --> localhost:8181/app2. I tried all three Apache proxying approaches (mod_jk, mod_proxy_http and mod_proxy_ajp), but only the first application works; the second is not accessible. This is the Apache configuration I'm using: ProxyPassMatch ^(/.*\.gif)$ ! ProxyPassMatch ^(/.*\.css)$ ! ProxyPassMatch ^(/.*\.png)$ ! ProxyPassMatch ^(/.*\.js)$ ! ProxyPassMatch ^(/.*\.jpeg)$ ! ProxyPassMatch ^(/.*\.jpg)$ ! ProxyRequests Off ProxyPass /app1 ajp://localhost:8009/ ProxyPassReverse /app1 ajp://localhost:8009/ ProxyPass /app2 ajp://localhost:8909/ ProxyPassReverse /app2 ajp://localhost:8909/ With the above, I manage to view the Tomcat root application using localhost/app1, but I get "Service Temporarily Unavailable" (an Apache error) when accessing app2. I need to keep the Tomcat servers separate because I need to restart one of the applications often, and it is not an option to put both apps on the same Tomcat. Can someone point out what I'm doing wrong? Thank you all.
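
    A hedged sketch of one common cause: with ProxyPass /app1 ajp://localhost:8009/, a request for /app1/foo is sent to the backend root as /foo rather than to the /app1 context, and only the first mapping happens to line up with a deployed webapp. Keeping the context path on the backend side usually fixes it (8909 is assumed to be the second Tomcat's AJP connector port, which is worth confirming in its server.xml):

        ProxyRequests Off
        # Map each path onto the matching context of its own Tomcat
        ProxyPass        /app1 ajp://localhost:8009/app1
        ProxyPassReverse /app1 ajp://localhost:8009/app1
        ProxyPass        /app2 ajp://localhost:8909/app2
        ProxyPassReverse /app2 ajp://localhost:8909/app2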

    Read the article

  • Back from PASS Europe 2010

    - by Davide Mauri
    PASS Europe 2010 is finished and I’m now finally back at home and will stay here for a while. I would like to thank all the people who came to my sessions for all their feedback, especially for the “Adaptive BI” session! Slides and demos should be available for download from the PASS European Conference website in a couple of days. Meanwhile, if you want to rate my sessions online, you can do it here: Adaptive BI http://speakerrate.com/talks/3136-adaptive-bi-best-practices Blazing Fast Queries http://speakerrate.com/talks/3135-blazing-fast-queries-when-indexes-are-not-enough

    Read the article

  • HTTPS is not working in transparent proxy with Squid

    - by Supratik
    Hi, I am using Squid proxy 3.1, and all systems in the LAN connect to the internet through the proxy. Direct connections are blocked using iptables on the gateway server. There are some devices which do not have options for an auto or manual proxy and can only connect to the internet directly. So I enabled transparent proxying in Squid and redirected packets for ports 80 and 443 to the Squid proxy using iptables. Now the problem is that it works fine for the HTTP port, but HTTPS is not working; it throws an "ssl_error_rx_record_too_long" error. If this is not possible through a transparent proxy, can you please suggest another solution? Warm regards, Supratik
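
    A hedged note on why this happens: redirecting port 443 into Squid's plain http_port means the browser's TLS handshake is answered with an HTTP response, which is exactly what Firefox reports as ssl_error_rx_record_too_long. Squid 3.1 cannot transparently intercept HTTPS without SSL interception, so a common compromise is to intercept port 80 only and let 443 out through ordinary NAT, roughly like this on the gateway (eth1/eth0 are placeholders for the LAN and WAN interfaces):

        # Intercept plain HTTP into Squid on 3128
        iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-port 3128
        # Let HTTPS bypass the proxy and go out via normal NAT instead
        iptables -t nat -A POSTROUTING -o eth0 -p tcp --dport 443 -j MASQUERADE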

    Read the article

  • Configuring Apache reverse proxy

    - by Martin
    I have a load balancer server and edge servers. I am trying to configure a reverse proxy in order to hide the backend servers PL1, PL2 and PL3. PL1, PL2 and PL3 are not in the same subnet; they are in different locations, behind the load balancer: Lb1 -> PL1, PL2, PL3. I tried to configure an Apache reverse proxy, but it is not sending requests to PL1, PL2 or PL3. The reverse proxy worked only when I configured Apache to send requests to a local server on another port. ProxyRequests Off <Proxy *> Order deny,allow Allow from all </Proxy> ProxyPass /PL1 http://PL1server.com/ ProxyPassReverse /PL1 http://PL1server.com/ The above configuration did not work. Could you help me solve the issue? Or are there other proxy types, like Squid or SOCKS5, that could solve it? Does the reverse proxy fail if we use an IP address or a domain URL in ProxyPass and ProxyPassReverse?
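
    A hedged sketch of the full mapping, with one ProxyPass/ProxyPassReverse pair per backend; PL2server.com and PL3server.com are placeholder names for the other two backends. Either IP addresses or hostnames work in ProxyPass, as long as the load balancer itself can resolve and reach them (a quick curl from Lb1 to each backend URL is the first thing to verify):

        ProxyRequests Off
        # One mapping per hidden backend; keep the trailing slashes consistent on both sides
        ProxyPass        /PL1/ http://PL1server.com/
        ProxyPassReverse /PL1/ http://PL1server.com/
        ProxyPass        /PL2/ http://PL2server.com/
        ProxyPassReverse /PL2/ http://PL2server.com/
        ProxyPass        /PL3/ http://PL3server.com/
        ProxyPassReverse /PL3/ http://PL3server.com/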

    Read the article

  • proxy.pac file performance optimization

    - by Tuinslak
    I reroute certain websites through a proxy with a proxy.pac file. It basically looks like this: if (shExpMatch(host, "www.youtube.com")) { return "PROXY proxy.domain.tld:8080; DIRECT" } if (shExpMatch(host, "youtube.com")) { return "PROXY proxy.domain.tld:8080; DIRECT" } At the moment about 125 sites are rerouted using this method. However, I plan on adding quite a few more domains to it, and I'm guessing it will eventually be a list of 500-1000 domains. It's important to not reroute all traffic through the proxy. What's the best way to keep this file optimized, performance-wise ? Thanks
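
    A hedged sketch of one way to keep a long list fast: replace the chain of shExpMatch() calls with a single object lookup keyed on the bare domain, so the per-request cost stays roughly constant as the list grows (the domains below are just examples):

        var proxied = {
          "youtube.com": 1,
          "example.org": 1
          // ...the rest of the domains go here
        };

        function FindProxyForURL(url, host) {
          host = host.toLowerCase();
          // fold "www.example.org" and "example.org" onto the same key
          var bare = (host.indexOf("www.") === 0) ? host.substring(4) : host;
          if (proxied[bare] === 1) {
            return "PROXY proxy.domain.tld:8080; DIRECT";
          }
          return "DIRECT";
        }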

    Read the article

  • Danger in running a proxy server? [closed]

    - by NessDan
    I currently have a home server that I'm using to learn more and more about servers. There's also the advantage of being able to run things like a Minecraft server (Yeah!). I recently installed and setup a proxy service known as Squid. The main reason was so that no matter where I was, I would be able to access sites without dealing with any network content filter (like at schools). I wanted to make this public but I had second thoughts on it. I thought last night that if people were using my proxy, couldn't they access illegal materials with it? What if someone used my proxy to download copyright material? Or launched an attack on another site via my proxy? What if someone actually looked up child pornography through the proxy? My question is, am I liable for what people use my proxy for? If someone does an illegal act and it leads to my proxy server, could I be held accountable for the actions done?

    Read the article

  • Ranking drop after using reverse proxy for blog subdirectory and robots.txt for old blog subdomain

    - by user40387
    We have a 3Dcart store and a WordPress blog hosted on a separate server. Originally, we had a CNAME set up to point the blog to http://blog.example.com/. However, in our attempt to boost link-based and traffic-based authority on the main site, we’ve opted to do a reverse proxy to http://www.example.com/blog/. It’s been about two months since we finished the reverse proxy migration. It appears that everything is technically working as intended, including some robots and sitemap changes; the new URLs are even generating some traffic, as indicated on Google Analytics. While Google has been indexing the new URL locations, they’re ranking very poorly, even for non-competitive, long-tail keywords. Meanwhile, the old subdomain URLs are still ranking mostly as well as they used to (even though they aren’t showing meta titles and descriptions due to being blocked by robots.txt). Our working theory is that Google has an old index of the subdomain URLs, and is considering the new URLs to be duplicate content, since it’s being told not to crawl the subdomain and therefore can’t see the rel canonicals we have in place. To resolve this, we’ve updated the subdomain’s robots.txt to no longer block crawling and indexing. Theoretically, seeing the canonical tag on the subdomain pages will resolve any perceived duplicate content issues. In the meantime, we were wondering if anyone would have any other ideas. We are very concerned that we’ll be losing valuable traffic, as we’re entering our on-season at the moment.

    Read the article

  • Apache SSL reverse proxy to a Embed Tomcat

    - by ggarcia24
    I'm trying to put in place a reverse proxy for an application that is running a tomcat embed server over SSL. The application needs to run over SSL on the port 9002 so I have no way of "disabling SSL" for this app. The current setup schema looks like this: [192.168.0.10:443 - Apache with mod_proxy] --> [192.168.0.10:9002 - Tomcat App] After googling on how to make such a setup (and testing) I came across this: https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/861137 Which lead to make my current configuration (to try to emulate the --secure-protocol=sslv3 option of wget) /etc/apache2/sites/enabled/default-ssl: <VirtualHost _default_:443> SSLEngine On SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key SSLProxyEngine On SSLProxyProtocol SSLv3 SSLProxyCipherSuite SSLv3 ProxyPass /test/ https://192.168.0.10:9002/ ProxyPassReverse /test/ https://192.168.0.10:9002/ LogLevel debug ErrorLog /var/log/apache2/error-ssl.log CustomLog /var/log/apache2/access-ssl.log combined </VirtualHost> The thing is that the error log is showing error:14077102:SSL routines:SSL23_GET_SERVER_HELLO:unsupported protocol Complete request log: [Wed Mar 13 20:05:57 2013] [debug] mod_proxy.c(1020): Running scheme https handler (attempt 0) [Wed Mar 13 20:05:57 2013] [debug] mod_proxy_http.c(1973): proxy: HTTP: serving URL https://192.168.0.10:9002/ [Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2011): proxy: HTTPS: has acquired connection for (192.168.0.10) [Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2067): proxy: connecting https://192.168.0.10:9002/ to 192.168.0.10:9002 [Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2193): proxy: connected / to 192.168.0.10:9002 [Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2444): proxy: HTTPS: fam 2 socket created to connect to 192.168.0.10 [Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2576): proxy: HTTPS: connection complete to 192.168.0.10:9002 (192.168.0.10) [Wed Mar 13 20:05:57 2013] [info] [client 192.168.0.10] Connection to child 0 established (server demo1agrubu01.demo.lab:443) [Wed Mar 13 20:05:57 2013] [info] Seeding PRNG with 656 bytes of entropy [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_kernel.c(1866): OpenSSL: Handshake: start [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: before/connect initialization [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: unknown state [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_io.c(1897): OpenSSL: read 7/7 bytes from BIO#7f122800a100 [mem: 7f1230018f60] (BIO dump follows) [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_io.c(1830): +-------------------------------------------------------------------------+ [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_io.c(1869): | 0000: 15 03 01 00 02 02 50 ......P | [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_io.c(1875): +-------------------------------------------------------------------------+ [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_kernel.c(1903): OpenSSL: Exit: error in unknown state [Wed Mar 13 20:05:57 2013] [info] [client 192.168.0.10] SSL Proxy connect failed [Wed Mar 13 20:05:57 2013] [info] SSL Library Error: 336032002 error:14077102:SSL routines:SSL23_GET_SERVER_HELLO:unsupported protocol [Wed Mar 13 20:05:57 2013] [info] [client 192.168.0.10] Connection closed to child 0 with abortive shutdown (server example1.domain.tld:443) [Wed Mar 13 20:05:57 2013] [error] (502)Unknown error 502: proxy: pass request body failed to 172.31.4.13:9002 
(192.168.0.10) [Wed Mar 13 20:05:57 2013] [error] [client 192.168.0.10] proxy: Error during SSL Handshake with remote server returned by /dsfe/ [Wed Mar 13 20:05:57 2013] [error] proxy: pass request body failed to 192.168.0.10:9002 (172.31.4.13) from 172.31.4.13 () [Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2029): proxy: HTTPS: has released connection for (172.31.4.13) [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_kernel.c(1884): OpenSSL: Write: SSL negotiation finished successfully [Wed Mar 13 20:05:57 2013] [info] [client 192.168.0.10] Connection closed to child 6 with standard shutdown (server example1.domain.tld:443) If I do a wget --secure-protocol=sslv3 --no-check-certificate https://192.168.0.10:9002/ it works perfectly, but from apache is not working. I'm on an Ubuntu Server with the latest updates running apache2 with mod_proxy and mod_ssl enabled: ~$ cat /etc/lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=12.04 DISTRIB_CODENAME=precise DISTRIB_DESCRIPTION="Ubuntu 12.04.2 LTS" ~# dpkg -s apache2 ... Version: 2.2.22-1ubuntu1.2 ... ~# dpkg -s openssl ... Version: 1.0.1-4ubuntu5.7 ... Hope that anyone may help

    Read the article

  • PASS Summit 2012: keynote and Mobile BI announcements #sqlpass

    - by Marco Russo (SQLBI)
    Today at PASS Summit 2012 there have been several announcements during the keynote. Moreover, other news have not been highlighted in the keynote but are equally if not more important for the BI community. Let’s start from the big news in the keynote (other details on SQL Server Blog): Hekaton: this is the codename for in-memory OLTP technology that will appear (I suppose) in the next release of the SQL Server relational engine. The improvement in performance and scalability is impressive and it enables new scenarios. I’m curious to see whether it can be used also to improve ETL performance and how it differs from using SSD technology. Updates on Columnstore: In the next major release of SQL Server the columnstore indexes will be updatable and it will be possible to create a clustered index with Columnstore index. This is really a great news for near real-time reporting needs! Polybase: in 2013 it will debut SQL Server 2012 Parallel Data Warehouse (PDW), which will include the Polybase technology. By using Polybase a single T-SQL query will run queries across relational data and Hadoop data. A single query language for both. Sounds really interesting for using BigData in a more integrated way with existing relational databases. And, of course, to load a data warehouse using BigData, which is the ultimate goal that we all BI Pro have, right? SQL Server 2012 SP1: the Service Pack 1 for SQL Server 2012 is available now and it enable the use of PowerPivot for SharePoint and Power View on a SharePoint 2013 installation with Excel 2013. Power View works with Multidimensional cube: the long-awaited feature of being able to use PowerPivot with Multidimensional cubes has been shown by Amir Netz in an amazing demonstration during the keynote. The interesting thing is that the data model behind was based on a many-to-many relationship (something that is not fully supported by Power View with Tabular models). Another interesting aspect is that it is Analysis Services 2012 that supports DAX queries run on a Multidimensional model, enabling the use of any future tool generating DAX queries on top of a Multidimensional model. There are still no info about availability by now, but this is *not* included in SQL Server 2012 SP1. So what about Mobile BI? Well, even if not announced during the keynote, there is a dedicated session on this topic and there are very important news in this area: iOS, Android and Microsoft mobile platforms: the commitment is to get data exploration and visualization capabilities working within June 2013. This should impact at least Power View and SharePoint/Excel Services. This is the type of UI experience we are all waiting for, in order to satisfy the requests coming from users and customers. The important news here is that native applications will be available for both iOS and Windows 8 so it seems that Android will be supported initially only through the web. Unfortunately we haven’t seen any demo, so it’s not clear what will be the offline navigation experience (and whether there will be one). But at least we know that Microsoft is working on native applications in this area. I’m not too surprised that HTML5 is not the magic bullet for all the platforms. The next PASS Business Analytics conference in 2013 seems a good place to see this in action, even if I hope we don’t have to wait other six months before seeing some demo of native BI applications on mobile platforms! 
Viewing Reporting Services reports on iPad is supported starting with SQL Server 2012 SP1, which has been released today. This is another good reason to install SP1 on SQL Server 2012. If you are at PASS Summit 2012, come and join me, Alberto Ferrari and Chris Webb at our book signing event tomorrow, Thursday 8 2012, at the bookstore between 12:00pm and 12:30pm, or follow one of our sessions!

    Read the article

  • Live from the #summit13 keynote : 2013-10-16

    - by AaronBertrand
    Early morning start here in Charlotte. I'm going to try and keep this post updated as I have new information from the keynote to share, so refresh often! 8:24 AM Bill Graziano takes the stage and welcomes us to the 15th PASS Summit. He mentions that PASS delivered over 700,000 hours of technical training in the previous fiscal year, and shows a Power BI Power Map video talking about all of the SQL Saturday accomplishments in the last few years. She introduces Amy Lewis, who wins this year's PASSion...(read more)

    Read the article

  • Finding an HTTP proxy that will intercept static resource requests

    - by pkh
    Background I develop a web application that lives on an embedded device. In order to make dev times sane, frontend development is done using apache serving static documents, with PHP proxying out to the embedded device for specifically configured dynamic resources. This requires that we keep various server-simulation scripts hanging around in source control, and it requires updating those scripts whenever we add a new dynamic resource. Problem I'd like to invert the logic: if the requested document is available in the static documents directory, serve it; otherwise, proxy the request to the embedded device. Optimally, I want a software package that will do this for me (for Windows or buildable on cygwin). I can deal with forcing apache to do it with PHP, but I'm unsure how to configure it to make it happen. I've looked at squid and privoxy, but neither of them seem to do what I want. Any ideas? I'd rather not have to roll my own.
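
    A hedged sketch of doing this directly in Apache with mod_rewrite and mod_proxy, so no extra package is needed: if a matching file exists under the docroot it is served as-is, otherwise the request is proxied to the device. embedded-device.local is a placeholder for the device's address, and the rules assume server/vhost context rather than .htaccess:

        RewriteEngine On
        # If no static file matches the request, pass it through to the embedded device
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
        RewriteRule ^ http://embedded-device.local%{REQUEST_URI} [P,L]
        ProxyPassReverse / http://embedded-device.local/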

    Read the article

  • Using Supermicro IPMI behind a Proxy?

    - by Stefan Lasiewski
    This is a SuperMicro server with an X8DT3 motherboard which contains an on-board IPMI BMC (in this case, a Winbond WPCM450). I believe many Dell servers use this or a similar BMC model. A common practice with IPMI is to isolate it on a private, non-routable network. In our case all IPMI cards are plugged into a private management LAN at 192.168.1.0/24 which has no route to the outside world. If I plug my laptop into the 192.168.1.0/24 network, I can verify that all IPMI features work as expected, including the remote console. I need to access all of the IPMI features from a different network, over some sort of encrypted connection. I tried SSH port forwarding. This works fine for a few servers; however, we have close to 100 of these servers, and maintaining an SSH client configuration to forward 6 ports on 100 servers is impractical. So I thought I would try a SOCKS proxy. This works, but it seems that the Remote Console application does not obey my systemwide proxy settings. I set up a SOCKS proxy. Verbose logging allows me to see network activity, and whether ports are being forwarded. ssh -v -D 3333 [email protected] I configure my system to use the SOCKS proxy. I confirm that Java is using the SOCKS proxy settings. The SOCKS proxy is working. I connect to the BMC at http://192.168.1.100/ using my web browser. I can log in, view the Server Health, power the machine on or off, etc. Since SSH verbose logging is enabled, I can see the progress. Here's where it gets tricky: I click on the "Launch Console" button, which downloads a file called jviewer.jnlp. JNLP files are opened with Java Web Start. A Java window opens. The title bar says "Redirection Viewer". There are menus for "Video", "Keyboard", "Mouse", etc. This confirms that Java is able to download the application through the proxy and start the application. 60 seconds later, the application times out and simply says "Error opening video socket". Here's a screenshot. If this worked, I would see a VNC-style window. My SSH logs show no connection attempts to ports 5900/5901. This suggests that the Java application started the VNC application, but that the VNC application ignores the systemwide proxy settings and is thus unable to connect to the remote host. Java seems to obey my systemwide proxy settings, but this VNC application seems to ignore them. Is there any way for me to force this VNC application to use my systemwide proxy settings?
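
    A hedged idea to try: the KVM viewer runs in its own JVM launched by Java Web Start, so handing the SOCKS settings to that JVM explicitly sometimes behaves differently from relying on the system-wide settings. For example, with the -D 3333 dynamic forward from above still open:

        # Keep the dynamic SOCKS forward running
        ssh -D 3333 [email protected]

        # Launch the downloaded JNLP by hand, passing the SOCKS properties to its JVM
        javaws -J-DsocksProxyHost=localhost -J-DsocksProxyPort=3333 jviewer.jnlp

    If the viewer still fails, it is probably opening raw sockets that ignore Java's proxy settings altogether, and script-generated LocalForward entries in ~/.ssh/config for the video ports (5900/5901 in this case) may be the pragmatic fallback.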

    Read the article

  • View a pdf with quick webview though apache proxy

    - by Musa
    I have a site(IIS) that is accessed via a proxy in apache(on an IBM i). This site serves PDFs which has quick web view and if I access a pdf directly from the IIS server the PDFs starts to display immediately but if I go through the proxy I have to wait until the entire pdf downloads before I can view it. In the apache config file I use ProxyPass /path/ http://xxx.xxx.xxx.xxx/ <LocationMatch "/path/"> Header set Cache-Control "no-cache" </LocationMatch> I tried adding SetEnv proxy-sendcl to LocationMatch directive this had no effect. The PDFs that view quickly makes a lot of partial requests This is the initial request and response headers GET http://xxx.xxx.xxx.xxx/xxx.PDF HTTP/1.1 Host: xxx.xxx.xxx.xxx Proxy-Connection: keep-alive Cache-Control: no-cache Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8 Pragma: no-cache User-Agent: Mozilla/5.0 (Windows NT 6.2; rv:9.0.1) Gecko/20100101 Firefox/9.0.1 Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en;q=0.8 Cookie: chocolatechip HTTP/1.1 200 OK Via: 1.1 xxxxxxxx Connection: Keep-Alive Proxy-Connection: Keep-Alive Content-Length: 15330238 Date: Mon, 25 Aug 2014 12:48:31 GMT Content-Type: application/pdf ETag: "b6262940bbecf1:0" Server: Microsoft-IIS/7.5 Last-Modified: Fri, 22 Aug 2014 13:16:14 GMT Accept-Ranges: bytes X-Powered-By: ASP.NET This is a partial request and response GET http://xxx.xxx.xxx.xxx/xxx.PDF HTTP/1.1 Host: xxx.xxx.xxx.xxx Proxy-Connection: keep-alive Cache-Control: no-cache Pragma: no-cache User-Agent: Mozilla/5.0 (Windows NT 6.2; rv:9.0.1) Gecko/20100101 Firefox/9.0.1 Accept: */* Referer: http://xxx.xxx.xxx.xxx/xxxx.PDF Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en;q=0.8 Cookie: chocolatechip Range: bytes=0-32767 HTTP/1.1 206 Partial Content Via: 1.1 xxxxxxxx Connection: Keep-Alive Proxy-Connection: Keep-Alive Content-Length: 32768 Date: Mon, 25 Aug 2014 12:48:31 GMT Content-Range: bytes 0-32767/15330238 Content-Type: application/pdf ETag: "b6262940bbecf1:0" Server: Microsoft-IIS/7.5 Last-Modified: Fri, 22 Aug 2014 13:16:14 GMT Accept-Ranges: bytes X-Powered-By: ASP.NET These are the headers I get if I go through he proxy GET /path/xxx.PDF HTTP/1.1 Host: domain:xxxx Connection: keep-alive Cache-Control: no-cache Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8 Pragma: no-cache User-Agent: Mozilla/5.0 (Windows NT 6.2; rv:9.0.1) Gecko/20100101 Firefox/9.0.1 Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en;q=0.8 HTTP/1.1 200 OK Date: Mon, 25 Aug 2014 13:28:42 GMT Server: Microsoft-IIS/7.5 Content-Type: application/pdf Last-Modified: Fri, 22 Aug 2014 13:16:14 GMT Accept-Ranges: bytes ETag: "b6262940bbecf1:0"-gzip X-Powered-By: ASP.NET Cache-Control: no-cache Expires: Thu, 24 Aug 2017 13:28:42 GMT Vary: Accept-Encoding Content-Encoding: gzip Keep-Alive: timeout=300, max=100 Connection: Keep-Alive Transfer-Encoding: chunked I'm guessing its because the proxy uses Transfer-Encoding: chunked but I'm not sure and wasn't able to turn it off to check. Browser Chrome 36.0.1985.143 m Using the native PDF viewer Any help to get the pdf quick web view through the proxy working would be appreciated.
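
    A hedged reading of those headers: the direct responses carry Content-Length and Accept-Ranges and the viewer issues Range requests, while the proxied response is gzip-compressed and chunked, so there is no Content-Length and byte-range requests no longer work; the viewer then has to download the whole file before rendering. If mod_deflate on the Apache proxy is adding that compression, excluding PDFs from it is a small change worth trying:

        # Don't gzip PDFs on the proxy, so the origin's Content-Length and
        # byte-range behaviour survive the trip through Apache
        SetEnvIfNoCase Request_URI "\.pdf$" no-gzip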

    Read the article

  • configuring and authenticating proxy on ubuntu 12.04 terminal

    - by awuni joshua
    I am very new to Ubuntu; I decided to try it out two days ago. I am using a proxy server [41.74.91.139, with a username and password]. I can't access any online resources from the terminal, Transmission (to download torrents), or the Ubuntu Software Center. For Firefox, I have configured it and can download .deb and other files, but need direct access. Please give me step-by-step instructions on how to configure and authenticate the proxy in the 12.04 terminal. Thank you
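
    A minimal sketch, assuming the proxy listens on port 8080; the port and the USERNAME:PASSWORD pair are placeholders you must substitute:

        # For the current terminal session (wget, curl, etc.):
        export http_proxy="http://USERNAME:[email protected]:8080/"
        export https_proxy="$http_proxy"

        # For apt and the Ubuntu Software Center, create /etc/apt/apt.conf.d/95proxy containing:
        #   Acquire::http::Proxy "http://USERNAME:[email protected]:8080/";

    Putting the two export lines in ~/.bashrc makes them apply to every new terminal you open.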

    Read the article

  • 502: proxy: pass request body failed

    - by Andrei Serdeliuc
    Sometimes I get the following error (in apache's error.log) when viewing my site over https: (502)Unknown error 502: proxy: pass request body failed to xxx.xxx.xxx.xxx:443 I'm not entirely sure what this is and why it happens, it's also not consistent. The request route is: Browser Proxy server (apache with mod_proxy + mod_ssl) Load balancer (aws) Web server (apache with mod_ssl) The configuration on the proxy server is as follows: <VirtualHost *:443> ProxyRequests Off ProxyVia On ServerName www.xxx.co.uk ServerAlias xxx.co.uk <Directory proxy:*> Order deny,allow Allow from all </Directory> <Proxy *> AddDefaultCharset off Order deny,allow Allow from all </Proxy> ProxyPass / balancer://cluster:443/ lbmethod=byrequests ProxyPassReverse / balancer://cluster:443/ ProxyPreserveHost off SSLProxyEngine On SSLEngine on SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL SSLCertificateFile /var/www/vhosts/xxx/ssl/www.xxx.co.uk.cert SSLCertificateKeyFile /var/www/vhosts/xxx/ssl/www.xxx.co.uk.key <Proxy balancer://cluster> BalancerMember https://xxx.eu-west-1.elb.amazonaws.com </Proxy> </VirtualHost> Any idea what the issue might be?
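
    A hedged guess rather than a confirmed diagnosis: intermittent "proxy: pass request body failed" errors often show up when the backend (here the ELB) closes an idle pooled connection just as Apache reuses it for a new request. mod_proxy_http has an environment switch that avoids handing such requests a pooled connection, which is a low-risk thing to try in the proxy vhost:

        # Don't reuse a pooled backend connection for the first request of a client
        # connection; helps when the backend drops idle keep-alive connections
        SetEnv proxy-initial-not-pooled 1

    If that is not enough, the disablereuse=On and ttl= connection options on the BalancerMember line are further knobs aimed at the same symptom.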

    Read the article

  • SSH multi-hop connections with netcat mode proxy

    - by aef
    Since OpenSSH 5.4 there is a new feature called natcat mode, which allows you to bind STDIN and STDOUT of local SSH client to a TCP port accessible through the remote SSH server. This mode is enabled by simply calling ssh -W [HOST]:[PORT] Theoretically this should be ideal for use in the ProxyCommand setting in per-host SSH configurations, which was previously often used with the nc (netcat) command. ProxyCommand allows you to configure a machine as proxy between you local machine and the target SSH server, for example if the target SSH server is hidden behind a firewall. The problem now is, that instead of working, it throws a cryptic error message in my face: Bad packet length 1397966893. Disconnecting: Packet corrupt Here is an excerpt from my ~/.ssh/config: Host * Protocol 2 ControlMaster auto ControlPath ~/.ssh/cm_socket/%r@%h:%p ControlPersist 4h Host proxy-host proxy-host.my-domain.tld HostName proxy-host.my-domain.tld ForwardAgent yes Host target-server target-server.my-domain.tld HostName target-server.my-domain.tld ProxyCommand ssh -W %h:%p proxy-host ForwardAgent yes As you can see here, I'm using the ControlMaster feature so I don't have to open more than one SSH connection per-host. The client machine I tested this with is an Ubuntu 11.10 (x86_64) and both proxy-host and target-server are Debian Wheezy Beta 3 (x86_64) machines. The error happens when I call ssh target-server. When I call it with the -v flag, here is what I get additionally: OpenSSH_5.8p1 Debian-7ubuntu1, OpenSSL 1.0.0e 6 Sep 2011 debug1: Reading configuration data /home/aef/.ssh/config debug1: Applying options for * debug1: Applying options for target-server.my-domain.tld debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug1: auto-mux: Trying existing master debug1: Control socket "/home/aef/.ssh/cm_socket/[email protected]:22" does not exist debug1: Executing proxy command: exec ssh -W target-server.my-domain.tld:22 proxy-host.my-domain.tld debug1: identity file /home/aef/.ssh/id_rsa type -1 debug1: identity file /home/aef/.ssh/id_rsa-cert type -1 debug1: identity file /home/aef/.ssh/id_dsa type -1 debug1: identity file /home/aef/.ssh/id_dsa-cert type -1 debug1: identity file /home/aef/.ssh/id_ecdsa type -1 debug1: identity file /home/aef/.ssh/id_ecdsa-cert type -1 debug1: permanently_drop_suid: 1000 debug1: Remote protocol version 2.0, remote software version OpenSSH_6.0p1 Debian-3 debug1: match: OpenSSH_6.0p1 Debian-3 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.8p1 Debian-7ubuntu1 debug1: SSH2_MSG_KEXINIT sent Bad packet length 1397966893. Disconnecting: Packet corrupt

    Read the article
