Search Results

Search found 3028 results on 122 pages for 'urls'.


  • How to combine twill and Python into one script that can be run on Google App Engine?

    - by brilliant
    Hello everybody! I have installed twill 0.9 on my computer (having previously installed Python 2.5) and have been using it recently. Python is installed at C:\Python25, and the twill folder ("twill-0.9") is located at E:\tmp\twill-0.9. Here is the script I've been using in twill:

        go "some website's sign-in page URL"
        formvalue 2 userid "my login"
        formvalue 2 pass "my password"
        submit
        go "URL of some other page from that website"
        save_html result.txt

    This script logs me in to a website where I have an account, records the HTML of another page on that site (one I can only access after logging in), and stores it in a file named result.txt. (Of course, before using it I first replace "my login" with my real login, "my password" with my real password, the two placeholder URLs with that site's real URLs, and the number 2 with the number of the form used as the sign-in form on that site's login page.) I store this script in a file at E:\tmp\twill-0.9\test.twill and run it from my command prompt:

        python twill-sh test.twill

    Now, I have also installed the Google App Engine SDK and have been using it for a while. For example, I've been using this code:

        import hashlib
        m = hashlib.md5()
        m.update("Nobody inspects")
        m.update(" the spammish repetition ")
        print m.hexdigest()

    This code transforms the phrase "Nobody inspects the spammish repetition" into its MD5 digest. Now, how can I put these two pieces of code together into one Python script that I could run on Google App Engine? Say I want my code to log in to a website from App Engine, go to another page on that website, record its HTML (that's what my twill script does), and then transform that HTML into its MD5 digest (that's what my second snippet does). How can I combine these two pieces of code into one Python script? I guess it should be done somehow by importing twill, but how? Can a Python script being run by Google App Engine import twill from somewhere on the internet? Or is twill perhaps already installed on Google App Engine?
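    For what it's worth, twill also exposes its commands as a plain Python API, so the same flow can be written as an importable script rather than a .twill file. Below is a minimal sketch under stated assumptions: the URLs, form number, and credentials are placeholders, and I'm assuming twill's commands module and get_browser() helper are importable as shown. Note that App Engine only provides the standard library plus whatever you upload with your application, so twill would have to be bundled with the app, and whether its networking survives the App Engine sandbox is a separate question.

        import hashlib
        from twill.commands import go, formvalue, submit
        from twill import get_browser

        def fetch_and_digest():
            # Same steps as the .twill script, via twill's Python API
            go("http://example.com/signin")         # placeholder sign-in URL
            formvalue("2", "userid", "my login")    # placeholder form number and credentials
            formvalue("2", "pass", "my password")
            submit()
            go("http://example.com/private-page")   # placeholder target page
            html = get_browser().get_html()         # the fetched page as a string
            m = hashlib.md5()
            m.update(html)
            return m.hexdigest()

        print fetch_and_digest()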


  • Django: Serving Media Behind Custom URL

    - by TheLizardKing
    So I of course know that serving static files through Django will send you straight to hell, but I am confused about how to use a custom URL to mask the true location of a file. I asked about this before (http://stackoverflow.com/questions/2681338/django-serving-a-download-in-a-generic-view), but the answer I accepted seems to be the "wrong" way of doing things.

    urls.py:

        url(r'^song/(?P<song_id>\d+)/download/$', song_download, name='song_download'),

    views.py:

        def song_download(request, song_id):
            song = Song.objects.get(id=song_id)
            fsock = open(os.path.join(song.path, song.filename))
            response = HttpResponse(fsock, mimetype='audio/mpeg')
            response['Content-Disposition'] = "attachment; filename=%s - %s.mp3" % (song.artist, song.title)
            return response

    This solution works, but not well enough, it turns out. How can I avoid having a direct link to the mp3 while still serving it through nginx/apache?

    EDIT 1 - ADDITIONAL INFO

    Currently I can get my files by using an address such as:

        http://www.example.com/music/song/1692/download/

    But the above-mentioned method is the devil's work. How can I accomplish what I get above while still making nginx/apache serve the media? Is this something that should be done at the webserver level? Some crazy mod_rewrite?

        http://static.example.com/music/Aphex%20Twin%20-%20Richard%20D.%20James%20(V0)/10%20Logon-Rock%20Witch.mp3
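    A note on the webserver-level idea: the approach most often suggested for this exact problem is nginx's X-Accel-Redirect header (X-Sendfile with Apache's mod_xsendfile). Django handles the URL and any permission checks, then hands the actual file transfer off to the webserver via an internal-only location. A rough sketch, assuming an nginx location named /protected_music/ that is marked internal and aliased to the real media directory (the location name is made up for the example; Song is the model from the question):

        import os
        from django.http import HttpResponse

        def song_download(request, song_id):
            song = Song.objects.get(id=song_id)  # Song: the model from the question
            # Empty body: nginx streams the file after the internal redirect.
            response = HttpResponse(mimetype='audio/mpeg')
            response['Content-Disposition'] = "attachment; filename=%s - %s.mp3" % (song.artist, song.title)
            # The real path never appears in any user-visible URL, and requests
            # made directly to /protected_music/... from outside get a 404.
            response['X-Accel-Redirect'] = os.path.join('/protected_music/', song.filename)
            return response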


  • Grails UrlMappings with .html

    - by Glennn
    I'm developing a Grails web application (mainly as a learning exercise). I have previously written some standard Grails apps, but in this case I wanted to try creating a controller that would intercept all requests (including static HTML) of the form:

        <a href="/testApp/testJsp.jsp">test 1</a>
        <a href="/testApp/testGsp.gsp">test 2</a>
        <a href="/testApp/testHtm.htm">test 3</a>
        <a href="/testApp/testHtml.html">test 4</a>

    The intent is to do some simple business logic (auditing) each time a user clicks a link. I know I could do this using a Filter (or a range of other methods), but I thought this should work too and wanted to do it within the Grails framework. I set up the Grails UrlMappings.groovy file to map all URLs of that form (/$myPathParam?) to a single controller:

        class UrlMappings {
            static mappings = {
                "/$controller/$action?/$id?" {
                    constraints {
                    }
                }
                "/$path?"(controller: 'auditRecord', action: 'showPage')
                "500"(view: '/error')
            }
        }

    In that controller (in the appropriate showPage action) I've been printing out the path information, for example:

        def showPage = {
            println "params.path = " + params.path
            ...
            render(view: resultingView)
        }

    The results of the println in the showPage action for each of my four links are:

        testJsp.jsp
        testGsp.gsp
        testHtm.htm
        testHtml

    Why is the last one "testHtml", not "testHtml.html"? In a previous Stack Overflow question, Olexandr encountered this issue and was advised to simply concatenate the value of request.format, which does indeed return "html". However, request.format returns "html" for all four links. I'm interested in gaining an understanding of what Grails is doing and why. Is there some way to configure Grails so the params.path variable in the controller shows "testHtml.html" rather than stripping off the ".html" extension? It doesn't seem to remove the extension for any other file type (including .htm). Is there a good reason it does this? I know that it is a bit unusual to use a controller for static HTML, but I would still like to understand what's going on.


  • Protecting grails melody with a Grails filter

    - by batmannavneet
    I have an application where I am using Spring Security along with grails melody. I plan to run grails melody in the production environment, but I don't want visitors to have access to it. How should I achieve that? I tried creating a filter in Grails (just a sample of what I am trying, not the actual code):

        def filters = {
            allURIs(uri: '/**') {
                before = {
                    //...
                    if (request.forwardURI.indexOf("admin") != -1
                            || request.forwardURI.indexOf("monitoring") != -1) {
                        response.sendError 404
                        return false
                    }
                }
            }
        }

    But this doesn't work, because a request for "monitoring" never hits this filter. I don't even want users to know that such a URL exists, so I want the filter to check whether "monitoring" is the URL and, if so, show the 404 error page. That's also the reason I don't want to protect this URL with Spring Security, which would show an "access denied" page. Basically, I want these URLs to exist but be invisible to users, with access open only to certain IP addresses. On another note, is it possible to write a Grails filter that acts before the Spring Security filter is hit? I want to be able to do some filtering before I forward requests to Spring Security. Writing a Grails filter like the one above doesn't help: the Spring Security filter gets hit first if I access a protected resource, and this filter never gets called. Thanks


  • How to customize RESTful Routes in Rails (basics)

    - by viatropos
    I have read through the Rails docs for routing, RESTful resources, and the UrlHelper, and I still don't understand best practices for creating complex/nested routes. The example I'm working on now is for events, where an event has_many rsvps. A user looks through a list of events, clicks register, goes through a registration process, and so on. I want the URLs to look like this:

        /events
        /events/123                             # possible without title, like SO
        /events/123/my-event-title              # canonical version
        /events/my-category/123/my-event-title  # also possible like this
        /events/123/my-event-title/registration/new

    ...plus all the RESTful nested resources. The question is, how do I accomplish this with the minimal amount of code? Here's what I currently have:

        map.resources :events do |event|
          event.resources :rsvps, :as => "registration"
        end

    That gets me this:

        /events/123/registration

    What's the best way to accomplish the other two routes?

        /events/123/my-event-title              # canonical version
        /events/my-category/123/my-event-title  # also possible like this

    Here my-category is just one of an array of 10 possible types the event can be. I've modified Event#to_param to return "#{self.id.to_s}-#{self.title.parameterize}", but I'd prefer to have /id/title with the whole canonical-ness.


  • Is there a good way of automatically generating JavaScript client code from server-side Python?

    - by tat.wright
    I basically want to be able to:

      - Write a few functions in Python (with the minimum amount of extra metadata)
      - Turn these functions into a web service (with the minimum of effort/boilerplate)
      - Automatically generate some JavaScript functions/objects for RPC (this should prevent me from doing as many stupid things as possible, like mistyping method names, forgetting the names of methods, or passing the wrong number of arguments)

    Example Python:

        def hello_world():
            return "Hello world"

    JavaScript:

        ...
        <!-- This file is automatically generated (either dynamically or statically) -->
        <script src="http://myurl.com/webservice/client_side_javascript"></script>
        ...
        <script>
          $('#button').click(function () {
            hello_world(function (data) {
              $('#label').text(data);
            });
          });
        </script>

    A bit of research has shown me some approaches that come close to this:

      - Automatic generation of JSON-RPC services from functions, with a little boilerplate code in Python, then using jQuery and JSON to do the calls (it's still easy to make mistakes with method names, you still need to be aware of URLs when calling, and it's very irritating to write these calls yourself in the Firebug shell).
      - Using a library like soaplib to generate WSDL from Python (by adding copious type information), and then somehow converting that into JavaScript (I'm not sure there is even a library to do this).

    But are there any approaches closer to what I want?
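    To illustrate how little boilerplate the first approach can need, here is a minimal sketch of a stub generator: a registry of plain Python functions plus a routine that emits one jQuery-based JavaScript wrapper per function, so method names are never typed by hand on the client. The endpoint URL and dispatch convention are made up for the example; a real service would still need a handler that routes /webservice/call/<name> to the registered function.

        import inspect

        EXPOSED = {}  # name -> function; registration is the only extra metadata

        def expose(func):
            """Decorator: mark a plain Python function as an RPC method."""
            EXPOSED[func.__name__] = func
            return func

        @expose
        def hello_world():
            return "Hello world"

        def generate_js(endpoint="/webservice/call"):
            """Emit one JS stub per exposed function, so the client calls
            hello_world(callback) instead of hand-writing $.post(...)."""
            stubs = []
            for name, func in EXPOSED.items():
                params = inspect.getargspec(func).args
                args = ", ".join(params + ["callback"])
                payload = "{" + ", ".join('"%s": %s' % (p, p) for p in params) + "}"
                stubs.append(
                    "function %s(%s) {\n"
                    "  $.post('%s/%s', JSON.stringify(%s), callback, 'json');\n"
                    "}" % (name, args, endpoint, name, payload))
            return "\n".join(stubs)

        print(generate_js())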


  • YouTube embedded player: change video link with JavaScript, dynamically

    - by Anthony Koval'
    Here is part of the HTML code (video URLs are marked with Django template-language variables):

        <div class="mainPlayer">
          <object width="580" height="326">
            <param name="movie" value="{{main_video.video_url}}"></param>
            <param name="allowFullScreen" value="true"></param>
            <param name="allowscriptaccess" value="always"></param>
            <embed src="{{main_video.video_url}}" type="application/x-shockwave-flash"
                   allowscriptaccess="always" allowfullscreen="true"
                   width="580" height="326"></embed>
          </object>
        </div>

    and the JS code (using jQuery 1.4.x):

        $(document).ready(function(){
            .....
            $(".activeMovie img").live("click", function(){
                video_url = $(this).parent().find('input').val();
                $('.mainPlayer').find('param:eq(0)').val(video_url);
                $('.mainPlayer').find('embed').attr('src', video_url);
            })
            ...
        })

    This algorithm works fine in Firefox 3.6.3, but I have no luck in Chrome 4 or Opera 10.x: src and value are changed, but the YouTube player still shows the old video.


  • Trouble with piping through sed

    - by Joel
    I am having trouble piping through sed. Once I have piped output to sed, I cannot pipe the output of sed elsewhere.

        wget -r -nv http://127.0.0.1:3000/test.html

    outputs:

        2010-03-12 04:41:48 URL:http://127.0.0.1:3000/test.html [99/99] -> "127.0.0.1:3000/test.html" [1]
        2010-03-12 04:41:48 URL:http://127.0.0.1:3000/robots.txt [83/83] -> "127.0.0.1:3000/robots.txt" [1]
        2010-03-12 04:41:48 URL:http://127.0.0.1:3000/shop [22818/22818] -> "127.0.0.1:3000/shop.29" [1]

    I pipe the output through sed to get a clean list of URLs:

        wget -r -nv http://127.0.0.1:3000/test.html 2>&1 | grep --line-buffered -v ERROR | sed 's/^.*URL:\([^ ]*\).*/\1/g'

    outputs:

        http://127.0.0.1:3000/test.html
        http://127.0.0.1:3000/robots.txt
        http://127.0.0.1:3000/shop

    I would then like to dump the output to a file, so I do this:

        wget -r -nv http://127.0.0.1:3000/test.html 2>&1 | grep --line-buffered -v ERROR | sed 's/^.*URL:\([^ ]*\).*/\1/g' > /tmp/DUMP_FILE

    I interrupt the process after a few seconds and check the file, yet it is empty. Interestingly, the following yields no output either (the same as above, but piping sed's output through cat):

        wget -r -nv http://127.0.0.1:3000/test.html 2>&1 | grep --line-buffered -v ERROR | sed 's/^.*URL:\([^ ]*\).*/\1/g' | cat

    Why can I not pipe the output of sed to another program like cat?


  • WIKI replacement solution for SharePoint?

    - by Jakub
    I'm trying to research a replacement for the pathetic wiki that comes with WSS (the only wiki markup it supports is for creating URL links). I have looked at a few, but most "replacements" I see are MOSS only (or at least state MOSS in their requirements). Has anyone faced this situation? What did you end up using? I would like something I can have all in one location (not different apps, hence WSS), with LDAP/AD integration like WSS. Thanks; I appreciate any input. I would like to see solutions around $3k or less (nothing super expensive, which is why we don't run MOSS).

    EDIT: Anyone else have any suggestions?

    EDIT 2: Since I haven't had much feedback (thanks to those who responded), I installed MediaWiki under IIS with PHP enabled, and enabled the IIS AD hack for authentication. IIS prompts for a username and password if you use a non-IE browser, then sets the $_SERVER["REMOTE_USER"] variable and grabs some AD info (groups etc.). It works rather well; the only issue so far is the ugly URLs. But it's fully working and seems like a good setup, other than having to rely on MySQL (my company strives to be mainly SQL Server).


  • AS3 URLRequest in for-loop problem

    - by Adrian
    Hi guys, I read some data from an XML file. Everything works great besides the URLs: I can't figure out what the problem is with the navigateURL function or with the event listener. Whichever square I click, it opens the last URL from the XML file.

        for (var i:Number = 0; i <= gamesInput.game.length() - 1; i++) {
            var square:square_mc = new square_mc();

            // xml values
            var tGame_name:String = gamesInput.game.name.text()[i]; // game name
            var tGame_id:Number = gamesInput.children()[i].attributes()[2].toXMLString(); // game id
            var tGame_thumbnail:String = thumbPath + gamesInput.game.thumbnail.text()[i]; // thumb path
            var tGame_url:String = gamesInput.game.url.text()[i]; // game url

            addChild(square);
            square.tgname_txt.text = tGame_name;
            square.tgurl_txt.text = tGame_url;

            // load & attach game thumb
            var getThumb:URLRequest = new URLRequest(tGame_thumbnail);
            var loadThumb:Loader = new Loader();
            loadThumb.load(getThumb);
            square.addChild(loadThumb);

            square.y = squareY;
            square.x = squareX;
            squareX += square.width + 10;
            square.buttonMode = true;
            this.addEventListener(MouseEvent.CLICK, navigateURL);
        }

        function navigateURL(event:MouseEvent):void {
            var url:URLRequest = new URLRequest(tGame_url);
            navigateToURL(url, "_blank");
            trace(tGame_url);
        }

    Many thanks!


  • Zend_Validate_Abstract custom validator not displaying correct error messages.

    - by Jeremy Dowell
    I have two text fields in a form, and I need to make sure that neither is empty and that they don't contain the same string. The custom validator I wrote extends Zend_Validate_Abstract and works correctly, in that it passes back the correct error codes: in this case either isEmpty or isMatch. However, the documentation says to use addErrorMessages to define the error messages to be displayed. I have attached this to the form field:

        ->addErrorMessages(array("isEmpty" => "foo", "isMatch" => "bar"));

    According to everything I've read, if I return the isEmpty code from isValid(), my error message should read "foo", and if I return isMatch, it should read "bar". That is not the case I'm running into, though. If I return false from isValid(), then no matter what I set $this->_error() to, my error message displays "foo", or whatever I have at index [0] of the error messages array. If I don't define error messages at all, I just get the error code I passed back, and I do get the proper one depending on what I returned. How do I catch the error code and display the correct error message in my form?

    The fix I have implemented, until I figure this out properly, is to pass back the full message as the error code from the custom validator. That works in this instance, but the error message is specific to this page and doesn't really allow for reuse of the code.

    Things I have already tried: validator chaining, so that my custom validator only checks for matches:

        ->setRequired("true")
        ->addValidator("NotEmpty")
        ->addErrorMessage("URL May Not Be Empty")
        ->addValidator($customValidator)
        ->addErrorMessage("X and Y urls may not be the same")

    But again, if either throws an error, the last error message to be set is displayed, regardless of what the error truly is. I'm not entirely sure where to go from here. Any suggestions?


  • Problem with TTWebController… alternative view controller template for UIWebView?

    - by prendio2
    I have a UIWebView with content populated from a Last.fm API call. This content contains links, many of which are handled by parsing info from the URL in:

        - (BOOL)webView:(UIWebView *)aWbView shouldStartLoadWithRequest:(NSURLRequest *)request navigationType:(UIWebViewNavigationType)navigationType;

    ...and passing that info on to appropriate view controllers. When I cannot handle a URL, I would like to push a new view controller on the stack and load the webpage into a UIWebView there. The TTWebController in Three20 looks very promising, since it has also implemented a navigation toolbar, loading indicators, etc. Somewhat naively, perhaps, I thought I would be able to use this controller to display web content in my app without implementing Three20 throughout the app. I followed the instructions on GitHub to add Three20 to my project and added the following code to deal with URLs in shouldStartLoadWithRequest:

        TTWebController* browser = [[TTWebController alloc] init];
        [browser openURL:@"http://three20.info"]; // initially test with this URL
        [self.navigationController pushViewController:navigator animated:YES];

    This crashes my app, however, with the following notice:

        *** -[TTNavigator setParentViewController:]: unrecognized selector sent to instance 0x3d5db70

    What else do I need to do to implement the TTWebController in my app? Or do you know of an alternative view controller template that I can use instead of the Three20 implementation?


  • How to host WCF service and TCP server on same socket?

    - by Ole Jak
    Today I use ServiceHost for self-hosting WCF services. Alongside my WCF services, I want to host my own TCP program for direct socket operations (such as listening to some sort of broadcasting TCP stream). I need control over namespaces, so that my clients are able to send TCP streams directly into my service using nice URLs like example.com:port/myserver/stream?id=1 or example.com:port/myserver/stream?id=anything, and so that I won't be bothered by the idea of one client per socket at any one moment. I really want to keep my WCF services on the same port as my own server, so clients can call www.example.com:port/myWCF/stream?id=222... Can anybody please help me with this?

    I am using just WCF now, and I do not enjoy how it works; that is one of many reasons why I want to start migrating to plain TCP. I cannot use the net.tcp binding or any sort of other cool WS-* binding (today I use the simplest one, so that clients like Flash, AJAX, etc. can connect to me with ease). I need a connection protocol that is fast and easy to implement, like the one I created for use with sockets, for transferring large amounts of data in real time. So... any ideas? Please - I need help.


  • ASP.NET MVC: How to create a usable UrlHelper instance?

    - by Marek
    I am using Quartz.NET to schedule regular events within an ASP.NET MVC application. The scheduled job should call a service-layer script that requires a UrlHelper instance, for creating URLs based on the correct routes (via urlHelper.Action(..)) contained in emails that the service will send. I do not want to hardcode the links into the emails; they should be resolved using the UrlHelper. The job:

        public class EvaluateRequestsJob : Quartz.IJob
        {
            public void Execute(JobExecutionContext context)
            {
                // where to get a usable urlHelper instance?
                ServiceFactory.GetRequestService(urlHelper).RunEvaluation();
            }
        }

    Please note that this is not run within the MVC pipeline. There is no current request being served; the code is run by the Quartz scheduler at defined times. How do I get a UrlHelper instance usable in the indicated place? If it is not possible to construct a UrlHelper, the other option I see is to make the job "self-call" a controller action by making an HTTP request; while executing the action I will of course have a UrlHelper instance available. But that seems a little bit hacky to me.


  • Python - Open default mail client using mailto, with multiple recipients

    - by victorhooi
    Hi, I'm attempting to write a Python function to send an email to a list of users using the default installed mail client. I want to open the email client and give the user the opportunity to edit the list of users or the email body. I did some searching, and according to http://www.sightspecific.com/~mosh/WWW_FAQ/multrec.html it's apparently against the RFC spec to put multiple comma-delimited recipients in a mailto link. However, that's the way everybody else seems to be doing it. What exactly is the modern stance on this? Anyhow, I found the following two sites:

        http://2ality.blogspot.com/2009/02/generate-emails-with-mailto-urls-and.html
        http://www.megasolutions.net/python/invoke-users-standard-mail-client-64348.aspx

    which seem to suggest solutions using urllib.parse (urllib.parse.quote for me) and webbrowser.open. I tried the sample code from the first link (2ality.blogspot.com), and that worked fine and opened my default mail client. However, when I try to use the code in my own module, it opens my default browser instead, for some weird reason. No funny text in the address bar; it just opens up the browser. The email_incorrect_phone_numbers() function is in the Employees class, which contains a dictionary (employee_dict) of Employee objects, which themselves have a number of employee attributes (sn, givenName, mail, etc.). Full code is actually here (http://stackoverflow.com/questions/2963975/python-converting-csv-to-objects-code-design).

        from urllib.parse import quote
        import webbrowser
        ....

        def email_incorrect_phone_numbers(self):
            email_list = []
            for employee in self.employee_dict.values():
                if not PhoneNumberFormats.standard_format.search(employee.telephoneNumber):
                    print(employee.telephoneNumber, employee.sn, employee.givenName, employee.mail)
                    email_list.append(employee.mail)
            recipients = ', '.join(email_list)
            webbrowser.open("mailto:%s?subject=%s&body=%s" % (recipients, quote("testing"), quote('testing')))

    Any suggestions? Cheers, Victor
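    One workaround to try for the browser-instead-of-mail-client symptom is to hand the mailto URL straight to the operating system rather than going through the webbrowser module, which on some platforms resolves everything through the default browser. A minimal sketch (the recipients and the comma-joined format are illustrative; on Windows, os.startfile() dispatches the URL to its registered mailto handler):

        import os
        import subprocess
        import sys
        from urllib.parse import quote

        def open_mail_client(recipients, subject, body):
            url = "mailto:%s?subject=%s&body=%s" % (
                ",".join(recipients), quote(subject), quote(body))
            if sys.platform.startswith("win"):
                os.startfile(url)                   # registered mailto handler, not the browser
            elif sys.platform == "darwin":
                subprocess.call(["open", url])      # macOS URL dispatcher
            else:
                subprocess.call(["xdg-open", url])  # freedesktop URL dispatcher

        open_mail_client(["a@example.com", "b@example.com"], "testing", "testing")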


  • Spring-MVC Problem using @Controller on controller implementing an interface

    - by layne
    I'm using Spring 2.5 and annotations to configure my Spring MVC web context. Unfortunately, I am unable to get the following to work. I'm not sure if this is a bug (it seems like it) or if there is a basic misunderstanding on my part of how the annotations and interface implementation subclassing work. For example, this works fine:

        @Controller
        @RequestMapping("url-mapping-here")
        public class Foo {

            @RequestMapping(method = RequestMethod.GET)
            public void showForm() { ... }

            @RequestMapping(method = RequestMethod.POST)
            public String processForm() { ... }
        }

    When the context starts up, the URLs this handler deals with are discovered, and everything works great. This, however, does not:

        @Controller
        @RequestMapping("url-mapping-here")
        public class Foo implements Bar {

            @RequestMapping(method = RequestMethod.GET)
            public void showForm() { ... }

            @RequestMapping(method = RequestMethod.POST)
            public String processForm() { ... }
        }

    When I try to pull up the URL, I get the following nasty stack trace:

        javax.servlet.ServletException: No adapter for handler [com.shaneleopard.web.controller.RegistrationController@e973e3]: Does your handler implement a supported interface like Controller?
            org.springframework.web.servlet.DispatcherServlet.getHandlerAdapter(DispatcherServlet.java:1091)
            org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:874)
            org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:809)
            org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:571)
            org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:501)
            javax.servlet.http.HttpServlet.service(HttpServlet.java:627)

    However, if I change Bar to be an abstract superclass and have Foo extend it, then it works again:

        @Controller
        @RequestMapping("url-mapping-here")
        public class Foo extends Bar {

            @RequestMapping(method = RequestMethod.GET)
            public void showForm() { ... }

            @RequestMapping(method = RequestMethod.POST)
            public String processForm() { ... }
        }

    This seems like a bug. The @Controller annotation should be sufficient to mark this as a controller, and I should be able to implement one or more interfaces in my controller without having to do anything else. Any ideas?


  • Regular expression either/or not matching everything

    - by dwatransit
    I'm trying to parse an HTTP GET request to determine if the URL contains any of a number of file types. If it does, I want to capture the entire request. There is something I don't understand about ORing. The following regular expression only captures part of the request, and only if .flv is first in the list of ORed values. (I've obscured the URLs with spaces because Stack Overflow limits hyperlinks.)

    Regex:

        GET.*?(.flv)|(.mp4)|(.avi).*?

    Test text:

        GET http: // foo.server.com/download/0/37/3000016511/.flv?mt=video/xy

    Match output:

        GET http: // foo.server.com/download/0/37/3000016511/.flv

    I don't understand why the .*? at the end of the regex isn't allowing it to capture the entire text. If I get rid of the ORing of file types, then it works. Here is the test code, in case my explanation doesn't make sense:

        public static void main(String[] args) {
            // TODO Auto-generated method stub
            String sourcestring = "GET http: // foo.server.com/download/0/37/3000016511/.flv?mt=video/xy";

            Pattern re = Pattern.compile("GET .*?\\.flv.*"); // this works
            // output:
            // [0][0] = GET http :// foo.server.com/download/0/37/3000016511/.flv?mt=video/xy

            // The match from the following ends with ".flv", not the entire url.
            // It also only works if .flv is the first of the 3 ORed options.
            // Pattern re = Pattern.compile("GET .*?(\\.flv)|(\\.mp4)|(\\.avi).*?");
            // output:
            // [0][0] = GET http: // foo.server.com/download/0/37/3000016511/.flv
            // [0][1] = .flv
            // [0][2] = null
            // [0][3] = null

            Matcher m = re.matcher(sourcestring);
            int mIdx = 0;
            while (m.find()) {
                for (int groupIdx = 0; groupIdx < m.groupCount() + 1; groupIdx++) {
                    System.out.println("[" + mIdx + "][" + groupIdx + "] = " + m.group(groupIdx));
                }
                mIdx++;
            }
        }


  • How do I specify an OpenID realm in Spring Security?

    - by Salvin Francis
    We are using Spring Security in our application, with support for username/password-based authentication as well as OpenID-based authentication. The issue is that Google gives a different OpenID for each return URL specified, and we have at least two different entry points into our application from which OpenID is configured into our system. Hence we decided to use an OpenID realm:

        http://blog.stackoverflow.com/2009/0...ue-per-domain/
        http://groups.google.com/group/googl...unts-api?pli=1

    How is it possible to integrate the realm into our Spring configuration/code? This is how we are doing it in traditional OpenID library code:

        AuthRequest authReq = consumerManager.authenticate(discovered, someReturnToUrl, "http://www.example.com");

    This works and gives the same OpenID for different URLs from our site. Our configuration:

        ...
        <http auto-config="false">
            <!-- <intercept-url> tags are here -->
            <remember-me user-service-ref="someRememberedService" key="some key" />
            <form-login login-page="/Login.html"
                        authentication-failure-url="/Login.html?error=true"
                        always-use-default-target="false"
                        default-target-url="/MainPage.html"/>
            <openid-login authentication-failure-url="/Login.html?error=true"
                          always-use-default-target="true"
                          default-target-url="/MainPage.html"
                          user-service-ref="someOpenIdUserService"/>
        </http>
        ...
        <beans:bean id="someOpenIdUserService" class="com.something.MyOpenIDUserDetailsService">
        </beans:bean>
        <beans:bean id="openIdAuthenticationProvider" class="com.something.MyOpenIDAuthenticationProvider">
            <custom-authentication-provider />
            <beans:property name="userDetailsService" ref="someOpenIdUserService"/>
        </beans:bean>
        ...


  • Webcrawler, feedback?

    - by Jan Kuboschek
    Hey folks, every once in a while I need to automate data-collection tasks from websites. Sometimes I need a bunch of URLs from a directory, sometimes I need an XML sitemap (yes, I know there is lots of software for that, and online services). Anyway, as a follow-up to my previous question, I've written a little webcrawler that can visit websites:

      - Basic crawler class to easily and quickly interact with one website.
      - Override "doAction(String URL, String content)" to process the content further (e.g. store it, parse it).
      - The concept allows for multi-threading of crawlers. All class instances share the processed and queued lists of links.
      - Instead of keeping track of processed and queued links within the object, a JDBC connection could be established to store links in a database.
      - Currently limited to one website at a time; however, it could be expanded upon by adding an externalLinks stack and adding to it as appropriate.
      - JCrawler is intended to be used to quickly generate XML sitemaps or parse websites for your desired information. It's lightweight.

    Is this a good/decent way to write the crawler, given the limitations above?

        http://pastebin.com/VtgC4qVE - Main.java
        http://pastebin.com/gF4sLHEW - JCrawler.java
        http://pastebin.com/VJ1grArt - HTMLUtils.java

    Thanks for your feedback in advance! :)


  • .NET client getting "not well formed" XML response from Axis web service

    - by Tex
    I have a simple .NET app that makes a SOAP call to a third-party Axis web service. When I trace the HTTP traffic, I see that the request looks fine; however, I'm getting an exception: "Response is not well-formed XML." The return object is null, as it seems the XML can't be deserialized. One question regarding the various namespace declarations inside the WSDL: several of these declarations point to URLs/domains that no longer exist. Could this cause any problems? From the WSDL document:

        <wsdl:definitions targetNamespace="http://domaindoesntexist.com/"
            xmlns:apachesoap="http://xml.apache.org/xml-soap"
            xmlns:impl="http://domaindoesntexist.com/"
            xmlns:intf="http://domaindoesntexist.com/"
            xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
            xmlns:wsdlsoap="http://schemas.xmlsoap.org/wsdl/soap/"
            xmlns:xsd="http://www.w3.org/2001/XMLSchema">

    A sample HTTP response, with incriminating data removed:

        HTTP/1.1 200 OK
        Server: Apache-Coyote/1.1
        Content-Type: text/xml;charset=utf-8
        Transfer-Encoding: chunked
        Date: Fri, 05 Jun 2009 13:54:59 GMT

        7cb
        <?xml version="1.0" encoding="utf-8"?>
        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                          xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
          <soapenv:Body>
            <someMethod xmlns="http://test.com/services/myservice/">
            </someMethod>
          </soapenv:Body>
        </soapenv:Envelope>
        0


  • Monitor web sites visited using Internet Explorer, Opera, Chrome, Firefox and Safari in Python

    - by Zachary Brown
    I am working on a project for work and seem to have run into a small problem. The project is a program similar to Web Nanny, but branded for my client's company. It will have features such as website blocking by URL or keyword, and web activity logs. I also need it to be able to "pause" downloads until an acceptable username and password are entered. I found a script to monitor the URLs visited in Internet Explorer (shown below), but it seems to slow the browser down considerably, and I have not found any support or ideas on how to implement this for other browsers. So, my questions are:

      1. How do I monitor other browsers' activity / visited URLs?
      2. How do I prevent downloading unless an acceptable username and password are entered?

        from win32com.client import Dispatch, WithEvents
        import time, threading, pythoncom, sys

        stopEvent = threading.Event()

        class EventSink(object):
            def OnNavigateComplete2(self, *args):
                print "complete", args
                stopEvent.set()

        def waitUntilReady(ie):
            if ie.ReadyState != 4:
                while 1:
                    print "waiting"
                    pythoncom.PumpWaitingMessages()
                    stopEvent.wait(.2)
                    if stopEvent.isSet() or ie.ReadyState == 4:
                        stopEvent.clear()
                        break

        time.clock()
        ie = Dispatch('InternetExplorer.Application', EventSink)
        ev = WithEvents(ie, EventSink)
        ie.Visible = 1
        ie.Navigate("http://www.google.com")
        waitUntilReady(ie)
        print "location", ie.LocationName
        ie.Navigate("http://www.aol.com")
        waitUntilReady(ie)
        print "location", ie.LocationName
        print ie.LocationName, time.clock()
        print ie.ReadyState
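    On question 1, one option that avoids hooking each browser's automation interface is to read the history databases the browsers already keep on disk: Firefox stores visited URLs in places.sqlite (table moz_places) and Chrome in its History file (table urls), both plain SQLite. A rough sketch, with the caveats that the profile paths below are placeholders that vary per machine and user, and that the browsers keep these files locked while running, so the usual trick is to copy the file before querying it:

        import os
        import shutil
        import sqlite3
        import tempfile

        def read_history(db_path, query):
            # Copy first: the browser holds a lock on the live database.
            tmp = os.path.join(tempfile.mkdtemp(), "history_copy.sqlite")
            shutil.copy2(db_path, tmp)
            conn = sqlite3.connect(tmp)
            try:
                return conn.execute(query).fetchall()
            finally:
                conn.close()

        # Placeholder paths; the real profile directories vary per machine/user.
        firefox_db = r"C:\path\to\profile\places.sqlite"
        chrome_db = r"C:\path\to\User Data\Default\History"

        for url, title in read_history(firefox_db,
                "SELECT url, title FROM moz_places ORDER BY last_visit_date DESC LIMIT 50"):
            print "firefox", url, title

        for url, title in read_history(chrome_db,
                "SELECT url, title FROM urls ORDER BY last_visit_time DESC LIMIT 50"):
            print "chrome", url, title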


  • Non-wiki CMS for an online user guide

    - by Russell Leggett
    For a large web application I'm building, I need to create an extensive user guide. The first thought was a wiki, but the wikis I've seen lack the ease of customization of a CMS and have a lot of extra features I don't need. The number of users editing the document is small and closed, but it needs to be editable by non-technical users. The number of pages will likely be between 50 and 100. It also needs to be searchable, and it would be a plus if it had nice readable URLs to link to from our web app. Right now my best guess is WordPress, but that seems geared more towards blogging with just a handful of pages than towards having many pages and possibly no blog. There isn't a language requirement, although we have the most experience with Java and PHP. We aren't looking to do any major coding other than customizing the visuals, so hopefully the language will not be too important.


  • Convert HTML + CSS to PDF with PHP?

    - by cletus
    OK, I'm now banging my head against a brick wall with this one. I have an HTML (not XHTML) document that renders fine in Firefox 3 and IE 7. It uses fairly basic CSS to style it and renders fine as HTML. I'm now after a way of converting it to PDF. I have tried:

      - DOMPDF: it had huge problems with tables. I factored out my large nested tables, which helped (before that it was just consuming up to 128M of memory and then dying; that's my memory limit in php.ini), but it makes a complete mess of tables and doesn't seem to get images. The tables were just basic stuff with some border styles to add lines at various points.
      - HTML2PDF and HTML2PS: I actually had better luck with these. They rendered some of the images (all the images are Google Chart URLs), and the table formatting was much better, but they seemed to have some complexity problem I haven't figured out yet and kept dying with unknown node_type() errors. Not sure where to go from here.
      - Htmldoc: this seems to work fine on basic HTML but has almost no support for CSS whatsoever, so you have to do everything in HTML (I didn't realize it was still 2001 in Htmldoc-land...), which makes it useless to me.

    I tried a Windows app called Html2Pdf Pilot that actually did a pretty decent job, but I need something that at a minimum runs on Linux and ideally runs on demand via PHP on the webserver. I really can't believe I'm this stuck. Am I missing something?


  • Auto-resolving a hostname in WCF Metadata Publishing

    - by Mike C
    I am running a self-hosted WCF service. In the service configuration, I am using localhost in the base addresses that I hook my endpoints to. When connecting to an endpoint using the WCF Test Client, I have no problem connecting and getting the metadata using the machine's name. The problem I run into is that the client generated from the metadata uses localhost in the endpoint URLs it wants to connect to. I'm assuming this is because localhost is the endpoint URL published by the metadata. As a result, any calls to the methods on the service fail, since localhost on the calling machine isn't running the service.

    What I would like to figure out is whether it is possible for the service metadata to publish the proper URL to a client depending on which client is calling it. For example, if I requested the service metadata from a machine on the same network as the server, the endpoint should be net.tcp://MYSERVER:1234/MyEndpoint. If I requested it from a machine outside the network, the URL should be net.tcp://MYSERVER.mydomain.com:1234/MyEndpoint. And obviously, if the client was on the same machine, the URL could be net.tcp://localhost:1234/MyEndpoint. Is this just a flaw in the default IMetadataExchange contract? Is there some reason the metadata needs to be published in a non-contextual way? Is there another way I should be configuring my base addresses in order to get the functionality I want? Thanks, Mike


  • Problem when getting pageContent of an unavailable URL in Java

    - by tiendv
    I have code to get the page content from a URL:

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.net.URL;
        import java.net.URLConnection;

        public class GetPageFromURLAction extends Thread {
            public String stringPageContent;
            public String targerURL;

            public String getPageContent(String targetURL) throws IOException {
                String returnString = "";
                URL urlString = new URL(targetURL);
                URLConnection openConnection = urlString.openConnection();
                String temp;
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(openConnection.getInputStream()));
                while ((temp = in.readLine()) != null) {
                    returnString += temp + "\n";
                }
                in.close();
                // String nohtml = sb.toString().replaceAll("\\<.*?>", "");
                return returnString;
            }

            public String getStringPageContent() {
                return stringPageContent;
            }

            public void setStringPageContent(String stringPageContent) {
                this.stringPageContent = stringPageContent;
            }

            public String getTargerURL() {
                return targerURL;
            }

            public void setTargerURL(String targerURL) {
                this.targerURL = targerURL;
            }

            @Override
            public void run() {
                try {
                    this.stringPageContent = this.getPageContent(targerURL);
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }

    Sometimes I receive an HTTP error of 405 or 403, and the result string is null. I have tried checking permission to connect to the URL with:

        URLConnection openConnection = urlString.openConnection();
        openConnection.getPermission();

    but it usually returns null. Does that mean I don't have permission to access the link? I have tried stripping off the query portion of the URL with:

        String nohtml = sb.toString().replaceAll("\\<.*?>", "");

    where sb is a StringBuilder, but it doesn't seem to strip off the whole query substring. In an unrelated question, I'd like to use threads here because I must retrieve many URLs; how can I create a multi-threaded client to improve the speed?

