Search Results

Search found 5868 results on 235 pages for 'reverse proxy'.

Page 32 of 235

  • How to proxy a site on another port of the same domain as a subfolder with Apache?

    - by myWallJSON
    So I have a problem: my main site runs on an Apache web server on Debian on port 80, and I am developing a web server (in C++ or C#) that currently runs on port 6666. Some people are behind firewalls and can only access port 80. Is it possible to have Apache map all requests to, say, mysite.com:80/6666/url as if they were to mysite.com:6666/url, not via redirection, but by having Apache actually stream the content from my server to the user as if it lived in a subfolder?
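    What is being described is what Apache's mod_proxy calls a reverse proxy. A minimal sketch of the idea, assuming mod_proxy and mod_proxy_http are loaded and the backend really listens on localhost:6666 (the path prefix and port come from the question; this is not a tested config):

        ProxyRequests Off
        ProxyPass        /6666/ http://localhost:6666/
        ProxyPassReverse /6666/ http://localhost:6666/

    ProxyPass forwards matching requests to the backend and streams the response back, while ProxyPassReverse rewrites Location headers in redirects so they stay under /6666/.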

    Read the article

  • Using Tweepy API behind proxy

    - by user1505819
    I am using Tweepy, a Python wrapper for Twitter, to write a small GUI application in Python which updates my Twitter account. Currently I am just testing whether I can connect to Twitter, hence the test() call. I am behind a Squid proxy server. What changes should I make to this snippet to get it working? Setting http_proxy in the bash shell did not help me.

        def printTweet(self):
            # extract tweet string
            tweet_str = str(self.ui.tweet_txt.toPlainText())
            # tweet string extracted
            self.ui.tweet_txt.clear()
            self.tweet_on_twitter(tweet_str)

        def tweet_on_twitter(self, my_tweet):
            auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
            auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
            api = tweepy.API(auth)
            if api.test():
                print 'Test successful'
            else:
                print 'Test unsuccessful'
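    For what it's worth, newer Tweepy releases accept a proxy argument on the API constructor, which sidesteps http_proxy entirely. A minimal sketch, assuming a Tweepy version that supports this argument; the Squid address and the credential placeholders below are assumptions, not values from the question:

        import tweepy

        CONSUMER_KEY, CONSUMER_SECRET = "xxx", "xxx"   # hypothetical placeholders
        ACCESS_KEY, ACCESS_SECRET = "xxx", "xxx"       # hypothetical placeholders

        auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
        auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)

        # route all REST calls through the corporate Squid proxy
        api = tweepy.API(auth, proxy="http://squid.example.com:3128")
        print(api.verify_credentials().screen_name)

    Whether the proxy value should be "host:port" or a full URL may vary by Tweepy version, so check the installed version's documentation.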

    Read the article

  • How to configure web proxy for Jing

    - by Denis
    Jing is great. But it won't start unless it can phone the mother ship, and my internet access is via a corporate web proxy. Jing doesn't seem to offer any way of configuring its connection to the internet. You'd think it would just use the Windows internet connection settings that work for browsing, but no dice. Has anyone solved this problem? Maybe there are some configuration files or registry settings that would fix this? Thanks!

    Read the article

  • SQL Server Reporting Services proxy timeout (ASP.NET)

    - by Philip
    Morning, we are using SSRS (2005) and have an ASP.NET frontend using the SSRS WebControl. I've boiled the problem down to this: the time it takes for one particular report to be generated is greater than the timeout on the proxy server. The SSRS web control works by performing an HTTP request for the report, and the problem is that this request can time out before the report has been generated. Looking at the HTTP traffic, the response is a 504 (gateway timeout). Is there a way to increase the timeout, or to change the SSRS WebControl to use a more robust polling mechanism that isn't dependent on the timeout of the HTTP request? I could be wrong, but I don't think the ServerReport.Timeout property would resolve the issue we are seeing. Any thoughts? Philip

    Read the article

  • Reverse-engineer SharePoint fields, content types and list instance—Part3

    - by ybbest
    Reverse-engineer SharePoint fields, content types and list instance—Part1
    Reverse-engineer SharePoint fields, content types and list instance—Part2
    Reverse-engineer SharePoint fields, content types and list instance—Part3
    In Part 1 and Part 2 of this series, I demonstrated how to reverse-engineer SharePoint fields and content types. In this post I will cover how to include lookup fields in the content type and how to create a list instance using these content types. Firstly, I will cover how to create the list instance and bind the custom content type to the custom list.
    1. Create a custom list using the List Instance item in Visual Studio and select Custom List.
    2. In the feature receiver, add the Department content type to the Department list and remove the Item content type.

        // C#
        AddContentTypeToList(web, "Department", "Department");

        private void AddContentTypeToList(SPWeb web, string listName, string contentTypeName)
        {
            SPList list = web.Lists.TryGetList(listName);
            list.OnQuickLaunch = true;
            list.ContentTypesEnabled = true;
            list.Update();
            SPContentType employeeContentType = web.ContentTypes[contentTypeName];
            list.ContentTypes.Add(employeeContentType);
            list.ContentTypes["Item"].Delete();
            list.Update();
        }

    Next, I will cover how to create the lookup fields. The difference between creating a normal field and a lookup field is that you need to create lookup fields after the lists are created, because a lookup field references a field from the foreign list.
    1. In your solution, create a feature that deploys the list before deploying the lookup fields.
    2. Write the following code in the feature receiver to add the lookup columns to the content type.

        // C#
        // add the lookup field
        SPFieldLookup departmentField = EnsureLookupField(currentWeb, "YBBESTDepartment",
            currentWeb.Lists["DepartmentList"].ID, "Title");
        // add it to the content type
        SPContentType employeeContentType = currentWeb.ContentTypes["Employee"];
        // add the lookup field as an SPFieldLink
        employeeContentType.FieldLinks.Add(new SPFieldLink(departmentField));
        employeeContentType.Update(true);

        private static SPFieldLookup EnsureLookupField(SPWeb currentWeb, String sFieldName,
            Guid LookupListID, String sLookupField)
        {
            SPFieldLookup lookupField = null;
            try
            {
                lookupField = currentWeb.Fields[sFieldName] as SPFieldLookup;
            }
            catch (Exception e) { }
            if (lookupField == null)
            {
                currentWeb.Fields.AddLookup(sFieldName, LookupListID, true);
                currentWeb.Update();
                lookupField = currentWeb.Fields[sFieldName] as SPFieldLookup;
                lookupField.LookupField = sLookupField;
                lookupField.Group = "YBBEST";
                lookupField.Required = true;
                lookupField.Update();
            }
            return lookupField;
        }

    Read the article

  • Facts Concerning a Reverse Email Look-Up

    A reverse email look-up, commonly known as a reverse email trace investigation, is a very beneficial type of service performed by a knowledgeable professional investigator that is t... [Author: Ed Opperman - Computers and Internet - June 17, 2010]

    Read the article

  • How to access blocked sites?

    - by Muhammad AYUB Khan BALOUCH
    I'm in Pakistan, and YouTube is blocked here. I want to watch lecture videos from YouTube. On Windows I was using Hotspot Shield to bypass the proxy, but now on Ubuntu I don't know how to do it. I found somewhere that I can bypass the proxy using the PuTTY software; can you guide me on how to do that? I was not able to manage it myself. Please tell me some easy method to bypass the proxy. I don't want to use websites like accesstoblockedsites.com.
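    What the PuTTY trick usually refers to is an SSH tunnel used as a SOCKS proxy. A rough sketch, assuming you have shell access to some server outside the filtered network (the hostname below is a placeholder, not something from the question):

        # open a dynamic (SOCKS) tunnel on local port 8080; -N means "no remote command"
        ssh -D 8080 -N your_user@some-server.example.com

    Then point the browser's SOCKS proxy setting at localhost:8080. On Ubuntu, OpenSSH's ssh command plays the role that PuTTY plays on Windows.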

    Read the article

  • Can't ping external websites

    - by Frantumn
    I can't ping google.com from my virtual Ubuntu 12.04 server. I have set up a proxy URL in my /etc/apt/apt.conf file, and it says

        Aquire::http::proxy http://urlname.com:9999;

    Now, I don't know a lot about how the proxy works, but I do know that when we use it on Windows virtual machines it's a PAC script that we place in Internet Explorer's LAN settings; it automatically detects the script and gives internet access. I tried appending /proxy.pac after the 9999 in the apt.conf URL and it didn't seem to work any better. Would Ubuntu know how to handle a proxy.pac, assuming it was created for Windows? Should my URL include the .pac, or just end after the port number? I've tried both without success, but I would like to know. A quick test pinging a fellow co-worker's PC was successful, so I can see computers on the network, but not Google or other internet sources.
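    For reference, the apt.conf directive is normally spelled Acquire and the value is quoted; a minimal example, reusing the question's placeholder host and port:

        Acquire::http::Proxy "http://urlname.com:9999";

    Note that apt cannot consume a .pac file, and that this setting only affects apt. ping uses ICMP, which does not go through an HTTP proxy at all, so it can fail even when the proxy is configured correctly.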

    Read the article

  • IIS as proxy to rails/mongrel - force a proxied host to generate REMOTE_USER

    - by rbn
    Hello -- using Application Request Routing I have IIS 7.5 set up as a reverse proxy to a Mongrel service which is serving a Rails app. IIS is set up to use Windows authentication and is working, but I cannot access the REMOTE_USER variable in the Rails app to get at the current user's identity. I have inspected the request object in Rails and I don't have any other variables like LOGON_USER, HTTP_REMOTE_USER, AUTH_USER, etc. I am trying to find a way to inject the REMOTE_USER variable into Mongrel's server variables. This post describes what I am looking for using mod_rewrite on Apache, but I am having trouble recreating the rule for IIS. This is the rewrite rule from the post mentioned above, for Apache:

        RewriteEngine On
        RewriteCond %{LA-U:REMOTE_USER} (.+)
        RewriteRule . - [E=RU:%1]
        Header add X-Forwarded-User %{RU}e

    I tried reproducing the rule in IIS and got a URL Rewrite Module error ("The condition's expression "%{LA-U:REMOTE_USER}" is invalid."). I know I'm probably using Apache syntax where IIS syntax is needed, but I am not sure how to proceed at this point. Any help greatly appreciated.
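    One direction worth exploring (a sketch only, not a verified configuration): URL Rewrite rules can set server variables such as HTTP_X_FORWARDED_USER from {LOGON_USER}, provided the variable is first whitelisted under allowedServerVariables in applicationHost.config. Roughly, with the Mongrel port below being a placeholder:

        <rule name="Proxy to Mongrel" stopProcessing="true">
          <match url="(.*)" />
          <serverVariables>
            <set name="HTTP_X_FORWARDED_USER" value="{LOGON_USER}" />
          </serverVariables>
          <action type="Rewrite" url="http://localhost:3000/{R:1}" />
        </rule>

    The Rails app would then read the identity from the X-Forwarded-User request header rather than from REMOTE_USER.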

    Read the article

  • Does git clone work through NTLM proxies?

    - by AndreaG
    I've tried both using export http_proxy=http://[username]:[pwd]@[proxy] and git config --global http.proxy http://[username]:[pwd]@[proxy]. I couldn't make it work. It looks like git uses Basic authentication:

        Initialized empty Git repository in /home/.../.git/
        * Couldn't find host github.com in the .netrc file, using defaults
        * About to connect() to github.com port 8080 (#0)
        *   Trying 10.... * Connected to github.com (10....) port 8080 (#0)
        * Proxy auth using Basic with user '...'
        > GET http://github.com/sunlightlabs/fiftystates.git/info/refs HTTP/1.1
        Proxy-Authorization: Basic MD...
        User-Agent: git/1.6.1.2
        Host: github.com
        Pragma: no-cache
        Accept: */*
        Proxy-Connection: Keep-Alive
        < HTTP/1.1 407 Proxy Authentication Required ( The ISA Server requires authorization to fulfill the request. Access to the Web Proxy filter is denied. )
        < Via: 1.1 ...
        < Proxy-Authenticate: Negotiate
        < Proxy-Authenticate: Kerberos
        < Proxy-Authenticate: NTLM
        < Connection: Keep-Alive
        < Proxy-Connection: Keep-Alive
        < Pragma: no-cache
        < Cache-Control: no-cache
        < Content-Type: text/html
        < Content-Length: 4118
        * The requested URL returned error: 407
        * Closing connection #0
        fatal: http://github.com/sunlightlabs/fiftystates.git/info/refs download error - The requested URL returned error: 407

    A Google search returned mixed and probably outdated results. Somewhere it says that curl is (was?) used under the hood, but its options are (were?) hardwired into the code. For example, curl --proxy-ntlm --proxy ...:8080 google.com works, and I'd like to use the same option with git. I need some more definite answers here: has anybody succeeded in using git through Windows proxies? Which version? Thanks.
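    A workaround that comes up often for NTLM-only proxies (an assumption here, not something confirmed in the original thread) is to run a small local proxy such as cntlm that performs the NTLM handshake itself, and point git at it:

        # cntlm listens on localhost:3128 by default and speaks NTLM to the upstream proxy
        git config --global http.proxy http://127.0.0.1:3128

    More recent git/libcurl builds can also negotiate NTLM with the proxy directly, but with a git as old as 1.6.1.2 the local-proxy route is the safer bet.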

    Read the article

  • Is there a way, using Ruby's net/http, to post form data through an HTTP proxy?

    - by Derek P.
    I have a basic Squid server set up and I am trying to use Ruby's Net::HTTP::Proxy class to send a POST of form data to a specified HTTP endpoint. I assumed I could do the following:

        Net::HTTP::Proxy(my_host, my_port).start(url.host) do |h|
          req = Net::HTTP::Post.new(url.path)
          req.form_data = { "xml" => xml }
          h.request(req)
        end

    But, alas, the proxied Net::HTTP class doesn't seem to use the proxy IP address: my remote service responds telling me that it received a request from the wrong IP address, i.e. not the proxy. I am looking for a specific way to write the procedure so that I can successfully send a form post via a proxy. Help? :)

    Read the article

  • Weird result with Apache vs. lighttpd as a reverse proxy.

    - by northox
    I have an Apache server running in reverse proxy mode in front of a Tomcat Java server. It handles HTTP and HTTPS and passes those requests back and forth to the Tomcat server over an internal HTTP port. I'm trying to replace the Apache reverse proxy with Lighttpd. Here's the problem: when asking for the same HTTPS URL, with Apache as the reverse proxy the Tomcat server redirects (302) to an HTTPS page, but with Lighttpd it redirects to the same page over HTTP (not HTTPS). What could Lighttpd be doing differently to get a different result from the backend server? In theory, using Apache or Lighttpd as a reverse proxy should not change anything... but it does. Any idea? I'll try to find something by sniffing the traffic on the backend Tomcat server.

    Read the article

  • How do I route Watir through a proxy programmatically?

    - by feydr
    I'm trying to route Watir through a proxy programmatically; that is, within the script I'd like to change my proxy dynamically before launching the browser. Here's what I've tried so far (and so far am failing). I'm running Chrome on Lucid Lynx Ubuntu. I chose TREX because I thought Watir might be making use of PROXY or something. I rewrote /usr/bin/google-chrome as:

        #!/bin/bash
        /opt/google/chrome/chrome --proxy-server="$TREX" $@

    The reason I'm passing the environment variable to --proxy-server rather than using http_proxy is that I never could get http_proxy to work as-is anyway. Then I did a simple:

        require 'rubygems'
        require 'watir-webdriver'
        ENV['TREX'] = "XX.XX.XX.XX:YY"
        browser = Watir::Browser.new(:chrome)
        browser.goto("http://mysite.com")

    What is happening is that I end up at the login page of the proxy rather than having the request forwarded transparently. What am I missing here? I feel like I'm pretty close.

    Read the article

  • Using PHP cURL with an HTTP Debugging Proxy

    - by Kane
    I'm using the app "Fiddler" to debug a GET attempt to a website via PHP cURL. In order to see the cURL traffic I had to tell the cURL connection to use the Fiddler proxy (see code below).

        $ch = curl_init();
        curl_setopt($ch, CURLOPT_HTTPPROXYTUNNEL, 1);
        curl_setopt($ch, CURLOPT_PROXY, '127.0.0.1:8888');
        curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);
        curl_setopt($ch, CURLOPT_TIMEOUT, 10);
        curl_setopt($ch, CURLOPT_HEADERFUNCTION, 'read_header');
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_USERAGENT, $user_agent);
        curl_setopt($ch, CURLOPT_REFERER, "http://domain.com");
        curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
        curl_setopt($ch, CURLOPT_COOKIEJAR, "my_cookies.txt");
        curl_setopt($ch, CURLOPT_COOKIEFILE, "my_cookies.txt");
        curl_setopt($ch, CURLOPT_URL, "http://domain.com");
        $response = curl_exec($ch);

    The problem is that in Fiddler I can only see this (domain.com is just an alias):

        Request:  CONNECT domain.com:80 HTTP/1.1
        Response: HTTP/1.1 200 Blind-Connection Established

    If I manually load the website in a browser, Fiddler gives me way more information: I can see the cookies, the header information, and what I'm receiving via the GET. Any ideas why Fiddler can't see more useful information from PHP cURL? Edit: I tried turning on the "Enable HTTPS Decryption" option under Tools / Fiddler Options / HTTPS (which I'm not sure why I'd need, as I didn't tell cURL to use HTTPS). Unfortunately, after changing this setting I now get a response of:

        HTTP/1.1 502 Connection failed

    Edit: If it helps, the app "Charles" shows me way more information than Fiddler, but I really want to figure out Fiddler since I like it better.

    Read the article

  • httplib2 giving internal server error 500 with proxy

    - by NJTechie
    Following is the code and the error it throws. It works fine without the proxy, i.e. with http = httplib2.Http(). Any pointers are highly appreciated! Usage:

        http = httplib2.Http(proxy_info = httplib2.ProxyInfo(socks.PROXY_TYPE_HTTP, '74.115.1.11', 80))
        main_url = 'http://www.mywebsite.com'
        response, content = http.request(main_url, 'GET')

    Error:

        File "testproxy.py", line 17, in <module>
          response, content = http.request(main_url, 'GET')
        File "/home/kk/bin/pythonlib/httplib2/__init__.py", line 1129, in request
          (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
        File "/home/kk/bin/pythonlib/httplib2/__init__.py", line 901, in _request
          (response, content) = self._conn_request(conn, request_uri, method, body, headers)
        File "/home/kk/bin/pythonlib/httplib2/__init__.py", line 862, in _conn_request
          conn.request(method, request_uri, body, headers)
        File "/usr/lib/python2.5/httplib.py", line 866, in request
          self._send_request(method, url, body, headers)
        File "/usr/lib/python2.5/httplib.py", line 889, in _send_request
          self.endheaders()
        File "/usr/lib/python2.5/httplib.py", line 860, in endheaders
          self._send_output()
        File "/usr/lib/python2.5/httplib.py", line 732, in _send_output
          self.send(msg)
        File "/usr/lib/python2.5/httplib.py", line 699, in send
          self.connect()
        File "/home/kk/bin/pythonlib/httplib2/__init__.py", line 740, in connect
          self.sock.connect(sa)
        File "/home/kk/bin/pythonlib/socks.py", line 383, in connect
          self.__negotiatehttp(destpair[0],destpair[1])
        File "/home/kk/bin/pythonlib/socks.py", line 349, in __negotiatehttp
          raise HTTPError((statuscode,statusline[2]))
        socks.HTTPError: (500, 'Internal Server Error')

    Read the article

  • Can anyone help me make my reverse proxy actually cache?

    - by Lenary
    Hi folks, I'm trying to configure a reverse caching proxy but so far have had no luck. I would prefer to use Apache (that will be all it is used for), but am open to solutions using other software that can run on Mac OS X 10.6 (I have also tried Varnish and Squid, with no more luck). We're running a system with about 80 Mac mini clients that will be requesting lots of video from a server. To reduce load, we thought we could use Apache (which comes on the Macs by default) to cache this video forever (or at least as long as possible) on the Macs' disks. I have managed to get a reverse proxy set up with Apache using ProxyPass etc., but when I tried adding CacheEnable disk / to the configuration, nothing happened (I do have mod_disk_cache included). Can anyone help with my issue? The Apache config file is here. Thanks in advance. Edit: So far I have been testing it with smaller text files, and they haven't been cached properly either. This suggests the problem has nothing to do with actually downloading video; it is the cache configuration itself.
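    For comparison, a minimal disk-cache block for the Apache 2.2 that ships with OS X 10.6 usually looks something like the following (a sketch under those assumptions; the module paths, cache root and size limit are placeholders, and CacheIgnoreNoLastMod / CacheDefaultExpire matter only if the backend sends no caching headers):

        LoadModule cache_module libexec/apache2/mod_cache.so
        LoadModule disk_cache_module libexec/apache2/mod_disk_cache.so

        CacheEnable disk /
        CacheRoot /var/cache/apache2
        CacheDirLevels 2
        CacheDirLength 1
        CacheMaxFileSize 1000000000
        CacheIgnoreNoLastMod On
        CacheDefaultExpire 86400

    If the backend responses carry Cache-Control: no-cache, no-store or private, or lack validators entirely, mod_cache will refuse to store them regardless of this configuration, which is a common reason "nothing happens".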

    Read the article

  • How to setup IIS 7.5 Reverse Proxy for quite a few internal servers - Server Farm?

    - by Tim Murphree
    I have tried for a few days, but I'm lost. Here's what I'm trying to do: I want to set up IIS 7.5 as a reverse proxy for about 30 internal HTTP servers located on my internal LAN. Everything runs on port 80. The internal servers are really IP-based webcams. Here is the scenario:

        www.mycamserver.com/cam1   ->  192.168.1.101
        www.mycamserver.com/cam2   ->  192.168.1.102
        and so on, until...
        www.mycamserver.com/cam30  ->  192.168.1.130

    I have installed ARR and URL Rewrite. So far I have managed, at one point, to forward an incoming URL to an internal server, but the page would not fully load (error 404). I also set up a Server Farm, but then all traffic was sent to the first node in the farm (192.168.1.101); at least that way the page loads and runs correctly. I simply want an exact match, for example "cam14", to be reverse-proxied / rewritten to the corresponding internal server address, "192.168.1.114".
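    One way to express a one-to-one mapping like this without a server farm is a URL Rewrite rewrite map plus a single rule; a sketch only, assuming ARR proxying is enabled (the map would need all 30 entries, and the exact web.config placement may differ):

        <rewrite>
          <rewriteMaps>
            <rewriteMap name="Cams">
              <add key="cam1"  value="192.168.1.101" />
              <add key="cam2"  value="192.168.1.102" />
              <add key="cam30" value="192.168.1.130" />
            </rewriteMap>
          </rewriteMaps>
          <rules>
            <rule name="Cam proxy" stopProcessing="true">
              <match url="^(cam[0-9]+)(/.*)?$" />
              <conditions>
                <add input="{Cams:{R:1}}" pattern="(.+)" />
              </conditions>
              <action type="Rewrite" url="http://{C:1}{R:2}" />
            </rule>
          </rules>
        </rewrite>

    A Rewrite action pointing at another host only behaves as a reverse proxy when ARR is installed with proxying enabled; otherwise it is treated as a local rewrite.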

    Read the article

  • How to create a simple Proxy to access web servers in C

    - by jesusiniesta
    Hi. I'm trying to create a small web proxy in C. First, I'm trying to get a web page by sending a GET request to the server. I don't know what I have missed, but I am not receiving any response. I would really appreciate it if you could help me find what is missing in this code.

        int main (int argc, char** argv)
        {
            int cache_size,             // size of the cache in KiB
                port, port_google = 80,
                dir, mySocket, socket_google;
            char google[] = "www.google.es", ip[16];
            struct sockaddr_in socketAddr;
            char buffer[10000000];

            if (GetParameters(argc, argv, &cache_size, &port) != 0)
                return -1;
            GetIP(google, ip);
            printf("ip2 = %s\n", ip);
            dir = inet_addr(ip);
            printf("ip3 = %i\n", dir);

            /* Creation of a socket with Google */
            socket_google = conectClient(port_google, dir, &socketAddr);
            if (socket_google < 0)
                return -1;
            else
                printf("Socket created\n");

            sprintf(buffer, "GET /index.html HTTP/1.1\r\n\r\n");
            if (write(socket_google, (void*)buffer, LONGITUD_MSJ+1) < 0)
                return 1;
            else
                printf("GET frame sent\n");

            strcpy(buffer, "\n");
            read(socket_google, buffer, sizeof(buffer));
            // strcpy(message, buffer);
            printf("%s\n", buffer);
            return 0;
        }

    And this is the code I use to create the socket. I think this part is OK, but I copy it just in case.

        int conectClient (int puerto, int direccion, struct sockaddr_in *socketAddr)
        {
            int mySocket;
            char error[1000];

            if ((mySocket = socket(AF_INET, SOCK_STREAM, 0)) == -1) {
                printf("Error when creating the socket\n");
                return -2;
            }

            socketAddr->sin_family = AF_INET;
            socketAddr->sin_addr.s_addr = direccion;
            socketAddr->sin_port = htons(puerto);

            if (connect(mySocket, (struct sockaddr *)socketAddr, sizeof(*socketAddr)) == -1) {
                snprintf(error, sizeof(error), "Error in %s:%d\n", __FILE__, __LINE__);
                perror(error);
                printf("%s\n", error);
                printf("-- Error when stablishing a connection\n");
                return -1;
            }
            return mySocket;
        }

    Thanks!

    Read the article

  • Proxy settings with ivy...

    - by user315228
    Hi, I have an issue where I have defined dependencies in ivy.xml on our internal corporate SVN. I am able to access this SVN site without any proxy task in Ant. But my dependencies also reside on ibiblio, which is outside our corporate network and needs a proxy in order to download anything. I am facing a problem using Ivy here. I have the following in build.xml:

        <target name="proxy">
            <property name="proxy.host" value="xyz.proxy.net"/>
            <property name="proxy.port" value="8443"/>
            <setproxy proxyhost="${proxy.host}" proxyport="${proxy.port}"/>
        </target>

        <!-- resolve the dependencies of stratus -->
        <target name="resolveTestDependency" depends="testResolve, proxy" description="retrieve test dependencies with ivy">
            <ivy:settings file="stratus-ivysettings.xml" />
            <!-- the pattern specifies where to download the libs to -->
            <ivy:retrieve conf="test" pattern="${jars}/[artifact]-[revision].[ext]"/>
        </target>

        <target name="testResolve">
            <ivy:settings file="stratus-ivysettings.xml" />
            <ivy:resolve conf="test" file="stratus-ivy.xml"/>
        </target>

    The following is an excerpt from stratus-ivysettings.xml:

        <resolvers>
            <!-- here you define your file on a private machine, not on the repo (e.g. jPricer.jar or edgApi.jar) -->
            <url name="privateFS">
                <ivy pattern="http://xyz.svn.com/ivyRepository/[organisation]/ivy/ivy.xml"/>
            </url>
            ...
            <url name="public" m2compatible="true">
                <artifact pattern="http://www.ibiblio.org/maven2/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]"/>
            </url>
            ...
        </resolvers>

    So, as can be seen here, for getting ivy.xml I don't need any proxy, since it is within our own network and cannot be accessed once I set the proxy. But I am also using ibiblio, which is external to our network and works only with the proxy, so the build.xml above won't work in that case. Can somebody help here? I don't need the proxy while getting ivy.xml (if the proxy is set, Ivy won't be able to find the ivy file from within the network), and I only need it when my resolver goes to the public URL.

    Read the article

  • How to connect through a proxy using Remote Desktop?

    - by scottmarlowe
    So I've got a home server running Windows Server 2003. I use a dual network card setup and Routing and Remote Access to link the internal, private network to the external connection. The external connection hooks directly to my cable modem (so no routers or other devices sitting between). The problem I'm having is that I can't connect remotely from a location outside the house (so connecting to the server's external connection) to the server using either Remote Desktop or VNC. I have enabled both ports in Routing and Remote Access's firewall to allow access, and I have enabled Remote Desktop in Windows Server 2003. The odd thing is that I can access my home server's SVN repository and I can even ping the server's IP. I am using the IP to attempt to connect, though I use a dyndns.com provided name to connect to my SVN repository, so it shouldn't make a difference (I know the IP is getting resolved correctly). Any ideas on where to start diagnosing this one? I haven't seen anything in my server's event log. If any other info is needed, let me know. Thanks. UPDATE: One last piece of information: We use a proxy server at work, which I'm nearly 100% sure is the culprit. I have a workaround--if I connect to our VPN (even though I'm already inside the building) I am able to connect to my home server. This is with VNC. However, is there a way to connect through a proxy using Remote Desktop? ONE MORE UPDATE: Indeed, it was the http proxy I'm sitting behind at work that was causing the issue. An acceptable workaround is to use my VPN connection to bypass the proxy, and I'm in!

    Read the article

  • Spring Roo Database Reverse Engineer with Oracle

    - by kerry
    So you are trying to reverse engineer an Oracle database with Roo? Unfortunately, due to licensing restrictions on the Oracle JDBC drivers, this is a little difficult. There are a few blog posts and forum threads that address the problem, but I figured I would post what worked for me here. First, you need to download the appropriate Oracle drivers from Oracle. The required login, stringent password requirements, nosy registration form, and general system instability made this a pretty painful step for me. I'd also like to say that companies with password requirements that don't allow symbols (or any other non-standard requirement) have a special place in my heart. Having to recover my password every time I go to your site virtually guarantees I will only go there when I absolutely have to (not often). Anyway, once you have it downloaded you need to install it with Maven:

        mvn install:install-file -Dfile=~/Downloads/ojdbc6.jar -DgroupId=com.oracle -DartifactId=ojdbc6 -Dversion=11.2.0.3 -Dpackaging=jar -DgeneratePom=true

    Here comes the fun part. You need to create an OSGi wrapper for the driver in order to install it in Roo; otherwise, Roo cannot see the driver. Create a new folder and put in it the contents of the oracle roo addon pom gist I created. Now build it with Maven. You may want to change some of the artifact ids and dependencies for your particular situation.

        mvn package

    Now open a Roo shell and execute the following command:

        osgi install --url file:///Users/me/my-osgi-project/target/the-jar-it-built.jar

    Now run (in Roo):

        jpa setup --provider HIBERNATE --database ORACLE
        dependency remove --groupId com.oracle --artifactId ojdbc14 --version 10.2.0.2
        dependency add --groupId com.oracle --artifactId ojdbc6 --version 11.2.0.3
        database properties set --key database.driverClassName --value oracle.jdbc.OracleDriver
        database properties set --key database.url --value jdbc:oracle:thin:@%YOUR_CONNECTION_INFO%
        database properties set --key database.username --value %YOUR_USERNAME%
        database properties set --key database.password --value %YOUR_PASSWORD%
        database reverse engineer --schema %YOUR_SCHEMA% --package ~.domain

    If you get any package loading exceptions when running the reverse engineer command, you can uninstall the OSGi bundle, set the package to optional in the OSGi pom in the IncludedPackages tag (javax.some.package.*;resolution:=optional), rebuild, then reinstall in Roo.

    Read the article
