Search Results

Search found 18210 results on 729 pages for 'website promotion'.

Page 122/729 | < Previous Page | 118 119 120 121 122 123 124 125 126 127 128 129  | Next Page >

  • What movie website allows people to scrape it?

    - by Sergio Tapia
    I've wanted to make a C# library to scrape movie information and return it to the application, but someone told me that it's against the TOS. RottenTomatoes seems to have no problem with it from what I've read on their licensing page, but I'm not quite sure. Where could I acquire movie information legally and without cost? It's for an open source application hosted here: LINK

    Read the article

  • Alternatives to CAT.NET for website security analysis

    - by Gavin Miller
    I'm looking for an alternative tool to CAT.NET for performing static security scans on .NET code. Currently the CAT.NET tooling/development is at a somewhat fragile stage and doesn't offer the reliability that I'm looking for. Are there any alternative static code analyzers that you use for detecting security issues?

    Read the article

  • Visualizing the SiteMap of a large (page number) website

    - by Michael
    I'm looking for a tool or service that can spider a web domain with a large number of pages, create a sitemap, and then visualize that map in a way that will help me see, understand, and group the content (I'm new to the site). Something like a tree view or other standard sitemap visualization would be great. So far I haven't been able to find a tool that does this (I've found plenty of things that spider the site and create an XML file, but nothing to visualize it). Thanks!
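    If no off-the-shelf tool turns up, one low-tech fallback is to crawl the domain yourself and emit a Graphviz DOT file, which can then be rendered as a tree/graph. A minimal sketch of that idea using the jsoup library; the root URL and the page cap are placeholders, and the link filtering is deliberately crude:

        import java.util.ArrayDeque;
        import java.util.Deque;
        import java.util.HashSet;
        import java.util.Set;

        import org.jsoup.Jsoup;
        import org.jsoup.nodes.Document;
        import org.jsoup.nodes.Element;

        public class SiteMapper {
            public static void main(String[] args) throws Exception {
                String root = "http://www.example.com/";          // placeholder domain
                Set<String> seen = new HashSet<String>();
                Deque<String> queue = new ArrayDeque<String>();
                seen.add(root);
                queue.add(root);

                System.out.println("digraph sitemap {");
                while (!queue.isEmpty() && seen.size() < 500) {   // crude page cap
                    String page = queue.poll();
                    Document doc;
                    try {
                        doc = Jsoup.connect(page).get();          // fetch and parse the page
                    } catch (Exception e) {
                        continue;                                 // skip pages that fail to load
                    }
                    for (Element link : doc.select("a[href]")) {
                        String target = link.absUrl("href");
                        if (!target.startsWith(root)) continue;   // stay inside the domain
                        System.out.printf("  \"%s\" -> \"%s\";%n", page, target);
                        if (seen.add(target)) queue.add(target);
                    }
                }
                System.out.println("}");
            }
        }

    The resulting file can be rendered with Graphviz (e.g. dot -Tsvg sitemap.dot -o sitemap.svg) to get a quick visual grouping of the pages.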

    Read the article

  • Hide *.inc.php from website visitors

    - by Ghostrider
    I have a script myscript.inc.php which handles all URLs that look like /script-blah. I accomplish this by using the following .htaccess:

        RewriteEngine On
        RewriteRule ^script-(.*)$ myscript.inc.php?s=$1 [QSA,L]

    However, users can also reach it directly by typing /myscript.inc.php?s=blah, and I would like to prevent that. I tried

        <Files ~ "\.inc\.php$">
            Order deny,allow
            Deny from all
        </Files>

    and

        RewriteCond %{REQUEST_URI} \.inc\.php
        RewriteRule .* - [F,L,NS]

    They both prevent users from viewing /myscript.inc.php?s=blah, but they also cause /script-blah to return 403... Is there a way to do this correctly?

    Read the article

  • FormsAuthentication redirecting to login page when visiting root of website

    - by Ryan Lattimer
    I wanted to use FormsAuthentication to secure the static files on my site as well, so I followed the instructions located at http://learn.iis.net/page.aspx/244/how-to-take-advantage-of-the-iis7-integrated-pipeline/ under the title "Enabling Forms Authentication for the Entire Application". Now, when I try to visit the site by going directly to http://www.mysite.com, I get redirected to http://www.mysite.com/Login.aspx?ReturnUrl=%2f instead of it using the default document I have set. I can go to my default document by just visiting http://www.mysite.com/Home.aspx without any issues, because it is set to allow anonymous access. Is there something I need to add to my web.config file to make IIS7 allow anonymous access to the root? I tried adding a location entry with anonymous access, but no such luck. Any help would be much appreciated. Both Home and the Login form allow anonymous access:

        <location path="Home.aspx">
          <system.web>
            <authorization>
              <allow users="*" />
            </authorization>
          </system.web>
        </location>
        <location path="Login.aspx">
          <system.web>
            <authorization>
              <allow users="*" />
            </authorization>
          </system.web>
        </location>

    The login form is set as the loginUrl:

        <authentication mode="Forms">
          <forms protection="All" loginUrl="Login.aspx">
          </forms>
        </authentication>

    The default document is set to Home.aspx:

        <defaultDocument>
          <files>
            <add value="Home.aspx" />
          </files>
        </defaultDocument>

    I have not removed any of the IIS7 default documents; however, Home.aspx is first in the priority.

    Read the article

  • Good programming website like Stack Overflow?

    - by hhafez
    What other good collaborative programming/software development/engineering websites do you know of? I'm not looking for language- or platform-specific websites, nor am I looking for something similar to the format of Stack Overflow. My main criteria are that the community is:

        - knowledgeable
        - helpful
        - active
        - friendly

    I know the question is open-ended/subjective, but I'd like to know as many places as possible where I can get the help of my peers. The accepted answer will:

        - contain links to your recommended sites
        - have a short description
        - be concise
        - be highly voted by your peers

    Read the article

  • Problem pulling data from website in .NET and C#

    - by Cptcecil
    I have written a web scraping program to go to a list of pages and write all the HTML to a file. The problem is that when I pull a block of text, some of the characters get written as '?'. How do I pull those characters into my text file? Here is my code:

        string baseUri = String.Format(
            "http://www.rogersmushrooms.com/gallery/loadimage.asp?did={0}&blockName={1}",
            id.ToString(), name.Trim());

        // our third request is for the actual webpage after the login.
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(baseUri);
        request.Method = "GET";
        request.UserAgent = "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1)";

        // get the response object, so that we may get the session cookie.
        HttpWebResponse response = (HttpWebResponse)request.GetResponse();
        StreamReader reader = new StreamReader(response.GetResponseStream());

        // and read the response
        string page = reader.ReadToEnd();

        StreamWriter SW;
        string filename = string.Format("{0}.txt", id.ToString());
        SW = File.AppendText("C:\\Share\\" + filename);
        SW.Write(page);

        reader.Close();
        response.Close();

    Read the article

  • How to parse a custom XML-style error code response from a website

    - by user1870127
    I'm developing a program that queries and prints open data from the local transit authority, which is returned as an XML response. Normally, when there are buses scheduled to run in the next few hours (and in other typical situations), the XML response generated by the page is handled correctly by the java.net.URLConnection.getInputStream() function, and I am able to print the individual results afterwards. The problem is when the buses are NOT running, or when some other problem with my query develops after it is sent to the transit authority's web server. When the authority developed their service, they came up with their own error response codes, which are also sent as XML. For example, one of these error messages might look like this:

        <Error xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
          <Code>3005</Code>
          <Message>Sorry, no stop estimates found for given values.</Message>
        </Error>

    (This code and similar is all that I receive from the transit authority in such situations.) However, it appears that URLConnection.getInputStream() and some of its siblings are unable to interpret this custom document as a "valid" response that I can handle and print out as an error message. Instead, they give me a more generic HTTP/1.1 404 Not Found error, which cascades into my program as a java.io.FileNotFoundException pointing to the offending input stream. My question is therefore two-fold:

        1. Is there a way to retrieve, parse, and print a custom XML-formatted error document sent by a web service using the tools available in standard Java?
        2. If the above is not possible, what other tools should I use or develop to handle such custom codes?
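    For what it's worth, the generic 404 described here is expected behaviour: HttpURLConnection throws FileNotFoundException from getInputStream() on a 404 status, but the error document the service sent back is still available through getErrorStream(). A minimal sketch of reading and parsing it with the standard DOM API (the endpoint URL is a placeholder, not the transit authority's real address):

        import java.io.InputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;

        import javax.xml.parsers.DocumentBuilderFactory;

        import org.w3c.dom.Document;

        public class TransitErrorDemo {
            public static void main(String[] args) throws Exception {
                URL url = new URL("http://api.example.com/estimates?stop=1234");  // placeholder
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();

                int status = conn.getResponseCode();
                // On 4xx/5xx responses the body lives on the error stream, not the input stream.
                InputStream body = (status >= 400) ? conn.getErrorStream() : conn.getInputStream();

                Document doc = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder()
                        .parse(body);

                if (doc.getElementsByTagName("Error").getLength() > 0) {
                    String code = doc.getElementsByTagName("Code").item(0).getTextContent();
                    String message = doc.getElementsByTagName("Message").item(0).getTextContent();
                    System.err.println("Service error " + code + ": " + message);
                } else {
                    System.out.println("Got a normal estimates document; handle it as usual.");
                }
            }
        }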

    Read the article

  • How to get non-Latin characters from a website?

    - by latata
    I'm trying to get data from latata.pl/pl.php and display all the characters (Polish, ISO-8859-2):

        final URL url = new URL("http://latata.pl/pl.php");
        final URLConnection urlConnection = url.openConnection();
        final BufferedReader in = new BufferedReader(
                new InputStreamReader(urlConnection.getInputStream()));
        String inputLine;
        while ((inputLine = in.readLine()) != null) {
            System.out.println(inputLine);
        }
        in.close();

    It doesn't work. :( Any ideas?
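    The snippet above decodes the response with the platform's default charset; since the page is served as ISO-8859-2, the reader has to be told that explicitly or the Polish characters get mangled. A sketch of the same loop with the charset named (assuming the server really does send ISO-8859-2):

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.URL;
        import java.net.URLConnection;

        public class PolishChars {
            public static void main(String[] args) throws Exception {
                URLConnection connection = new URL("http://latata.pl/pl.php").openConnection();
                // Decode the bytes as ISO-8859-2 instead of the platform default charset.
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(connection.getInputStream(), "ISO-8859-2"));
                String inputLine;
                while ((inputLine = in.readLine()) != null) {
                    System.out.println(inputLine);
                }
                in.close();
            }
        }

    Note that System.out also has to be able to encode those characters, so the console/terminal encoding matters just as much as the reader.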

    Read the article

  • Website settings for running visual web as root?

    - by Curtis White
    Scott Gu explains how to run Visual Web Developer using a root path here: http://weblogs.asp.net/scottgu/archive/2006/12/19/tip-trick-how-to-run-a-root-site-with-the-local-web-server-using-vs-2005-sp1.aspx. This worked exactly as he described in one instance for me, but today I do not see this option. Moreover, I do not think I have a solution file, and I think that has something to do with it. I'm aware there are web application projects and the web site model, and that the web site model is basically just a "directory". But can the web site model also have a solution file for this setting, or not have a solution file? What determines that? I am interested in using this method on a web site, i.e. the directory-only model.

    Read the article

  • I want to create an Android App that checks a website like Woot

    - by tim
    I'm new to Android and thought it would be fun to develop an app that goes out and checks woot.com. The idea I came up with is for the app to be a widget that refreshes woot.com once a day and displays a picture of the item and its price. If the widget is clicked, it would open the browser to woot.com. In theory this seems like it would be easy, but I'm having trouble figuring out where to begin. Any help would be appreciated. Thanks Tim
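    For orientation, the usual shape of such a widget is an AppWidgetProvider that fills a RemoteViews layout and attaches a click PendingIntent that opens the browser; the daily refresh would be scheduled separately (e.g. with AlarmManager or the widget's updatePeriodMillis). A rough sketch, where the layout and view IDs are placeholder resources:

        import android.app.PendingIntent;
        import android.appwidget.AppWidgetManager;
        import android.appwidget.AppWidgetProvider;
        import android.content.Context;
        import android.content.Intent;
        import android.net.Uri;
        import android.widget.RemoteViews;

        public class WootWidgetProvider extends AppWidgetProvider {
            @Override
            public void onUpdate(Context context, AppWidgetManager manager, int[] widgetIds) {
                for (int widgetId : widgetIds) {
                    // R.layout.woot_widget and the R.id.* values are placeholder resources.
                    RemoteViews views = new RemoteViews(context.getPackageName(), R.layout.woot_widget);
                    views.setTextViewText(R.id.price, "today's item and price go here");

                    // Tapping the widget opens woot.com in the browser.
                    Intent open = new Intent(Intent.ACTION_VIEW, Uri.parse("http://www.woot.com/"));
                    PendingIntent pending = PendingIntent.getActivity(context, 0, open, 0);
                    views.setOnClickPendingIntent(R.id.widget_root, pending);

                    manager.updateAppWidget(widgetId, views);
                }
            }
        }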

    Read the article

  • One-to-many relationships for a restaurants website?

    - by myaccount
    Each restaurant has branches; each branch must specify which days of the week it opens, and each of those days must specify (several) open_hour and close_hour values through that day. I created a one-to-many relationship chain using these tables: rest_names --- rest_branches --- open_days --- open_hours. Am I going about this the right way, or is there another, maybe less complicated, way to do it? And what would the query look like to get the hours of a restaurant on a specific day, say Sunday?
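    With that chain of tables, the lookup for a given day is a series of joins from restaurant to branch to day to hours. A sketch of what it might look like through JDBC; the connection details and all column names here are assumptions, since the question doesn't list them:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class OpeningHours {
            public static void main(String[] args) throws Exception {
                // Connection details are placeholders.
                Connection conn = DriverManager.getConnection(
                        "jdbc:mysql://localhost/restaurants", "user", "password");

                // Assumed columns: rest_names(id, name), rest_branches(id, rest_id),
                // open_days(id, branch_id, day_of_week), open_hours(day_id, open_hour, close_hour).
                String sql =
                        "SELECT h.open_hour, h.close_hour " +
                        "FROM rest_names r " +
                        "JOIN rest_branches b ON b.rest_id = r.id " +
                        "JOIN open_days d ON d.branch_id = b.id " +
                        "JOIN open_hours h ON h.day_id = d.id " +
                        "WHERE r.name = ? AND d.day_of_week = ?";

                PreparedStatement ps = conn.prepareStatement(sql);
                ps.setString(1, "Some Restaurant");
                ps.setString(2, "sunday");

                ResultSet rs = ps.executeQuery();
                while (rs.next()) {
                    System.out.println(rs.getString("open_hour") + " - " + rs.getString("close_hour"));
                }
                conn.close();
            }
        }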

    Read the article

  • Storing uploaded content on a website

    - by Matt
    For the past 5 years, my typical solution for storing uploaded files (images, videos, documents, etc.) has been to throw everything into an "upload" folder and give each file a unique name. I'm looking to refine my methods for storing uploaded content, and I'm just wondering what other methods are used or preferred. I've considered storing each item in its own folder (the folder name being the Id in the db) so I can preserve the uploaded file name. I've also considered uploading all media to a locked folder and then using a file handler: you pass it the Id of the file you want to download in the query string, and it reads the file and sends the bytes to the user. This is handy for checking access and restricting bandwidth for users.
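    The second option (a locked folder plus a download handler keyed by id) usually boils down to: look the record up, check access, then stream the bytes back while the original filename stays in the database. A language-agnostic sketch of that handler logic, written here in plain Java with hypothetical field and folder names:

        import java.io.IOException;
        import java.io.OutputStream;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.Map;

        public class UploadHandler {
            // Hypothetical upload record: the db keeps the original name,
            // the disk keeps an opaque, id-based name in a non-public folder.
            static class UploadRecord {
                final String originalName;
                final String storedName;
                final String ownerId;
                UploadRecord(String originalName, String storedName, String ownerId) {
                    this.originalName = originalName;
                    this.storedName = storedName;
                    this.ownerId = ownerId;
                }
            }

            private final Map<String, UploadRecord> db;   // stand-in for the real database
            private final Path uploadRoot;                // locked folder outside the web root

            UploadHandler(Map<String, UploadRecord> db, Path uploadRoot) {
                this.db = db;
                this.uploadRoot = uploadRoot;
            }

            void serve(String uploadId, String userId, OutputStream response) throws IOException {
                UploadRecord record = db.get(uploadId);
                if (record == null || !record.ownerId.equals(userId)) {
                    throw new SecurityException("not allowed");   // a web handler would return 403/404
                }
                // Stream the stored bytes; a web handler would also set
                // Content-Disposition to record.originalName here.
                Files.copy(uploadRoot.resolve(record.storedName), response);
            }
        }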

    Read the article

  • Website stress test in Python - Django

    - by RadiantHex
    Hi folks, I'm trying to build a small stress test script to test how quickly a set of requests gets done. I need to measure the speed of 100 requests. The problem is that I don't know how to implement it, as it would require parallel URL requests to be made. Any ideas?

    Read the article

  • Top techniques to avoid 'data scraping' from a website database

    - by Addsy
    I am setting up a site using PHP and MySQL that is essentially just a web front-end to an existing database. Understandably, my client is very keen to prevent anyone from being able to make a copy of the data in the database, yet at the same time wants everything publicly available and even a "view all" link to display every record in the db. Whilst I have put everything in place to prevent attacks such as SQL injection, there is nothing to prevent anyone from viewing all the records as HTML and running some sort of script to parse this data back into another database. Even if I were to remove the "view all" link, someone could still, in theory, use an automated process to go through each record one by one and compile them into a new database, essentially pinching all the information. Does anyone have any good tactics for preventing, or even just deterring, this that they could share? Thanks

    Read the article

  • Website in right-to-left languages (Arabic, Hebrew)

    - by jack
    I'm currently developing a multi-language interface for a Django project. But when I started to work on the Arabic and Hebrew languages, I noticed all pages got messed up after adding dir="rtl" to the html tag (following the instructions at http://www.w3.org/International/tutorials/bidi-xhtml/). Does that mean I need separate stylesheets for right-to-left languages?

    Read the article

  • Global resources can't be resolved after publishing Website in VS2008

    - by Scoregraphic
    Hi there. I have a web project running in VS 2008. We have some global resource files (*.resx) in the App_GlobalResources folder for internationalisation. All this works like a charm on my local IIS installation out of VS. But when I publish my web project to the local filesystem and/or another server, the resources can no longer be found. So I guess the pre-compilation is somehow corrupting things. When I call the pre-compiled web, I get an error that the resource object with key xyz cannot be found, although it could be found before. I checked with .NET Reflector whether the resource entries made it into the *.dlls. All those identifiers are there (bin/Web.dll, bin/<culture>/Web.resources.dll). The identifiers are loaded like this:

        <asp:MenuItem NavigateUrl="~/OrderNew.aspx"
                      Text="<%$ Resources:MyProject, MenuNewOrder %>"
                      Value="NewOrder">

    The resource files are called MyProject.resx and MyProject.<culture>.resx, where <culture> corresponds to the specific culture (i.e. MyProject.de-DE.resx). Any ideas how to solve this? I really appreciate any help. Thanks

    Edit: If I copy the App_GlobalResources folder manually to the output, the resources are loaded normally. So I really wonder what this pre-compilation is all about. I'm still interested in solving the issue "the right way".

    Read the article

  • Raw Video file from website

    - by Charlie
    I would like to make an app that will download video files that can be played later. At first I thought I could have a UIWebView and then try to access its cache and get the file that way, but unfortunately I don't have access to that. After trying that, my next thought was to get the direct link to the video file, essentially what DownloadHelper does on Firefox. Any idea where I could look for help, or does anybody have a better idea? Is there a stringByEvaluatingJavaScript call that might be of use, or is there a way to access the cache of a web view in your own app? Thanks for any help!

    Read the article

  • Share the same cookie between two websites using the PHP cURL extension

    - by powerboy
    I want to get the contents of some emails in my Gmail account, and I would like to use the PHP cURL extension to do this. I followed these steps in my first try:

        1. In the PHP code, output the contents of https://www.google.com/accounts/ServiceLoginAuth.
        2. In the browser, the user inputs a username and password to log in.
        3. In the PHP code, save the cookies in a file named cookie.txt.
        4. In the PHP code, send a request to https://mail.google.com/ along with the cookies retrieved from cookie.txt and output the contents.

    The following code does not work:

        $login_url = 'https://www.google.com/accounts/ServiceLoginAuth';
        $gmail_url = 'https://mail.google.com/';
        $cookie_file = dirname(__FILE__) . '/cookie.txt';

        $ch = curl_init();
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
        curl_setopt($ch, CURLOPT_HEADER, false);
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        curl_setopt($ch, CURLOPT_COOKIEJAR, $cookie_file);
        curl_setopt($ch, CURLOPT_URL, $login_url);
        $output = curl_exec($ch);
        echo $output;

        curl_setopt($ch, CURLOPT_URL, $gmail_url);
        curl_setopt($ch, CURLOPT_COOKIEFILE, $cookie_file);
        $output = curl_exec($ch);
        echo $output;

        curl_close($ch);

    Read the article

  • Getting JavaScript mouse position relative to the website, preferably without jQuery

    - by Constructor
    I've found this snippet on Ajaxian, but I can't seem to use cursor.y (or cursor.x) as a variable, and when the function is called as such it does not seem to work. Is there a syntax problem or something else?

        function getPosition(e) {
            e = e || window.event;
            var cursor = {x: 0, y: 0};
            if (e.pageX || e.pageY) {
                cursor.x = e.pageX;
                cursor.y = e.pageY;
            } else {
                cursor.x = e.clientX +
                    (document.documentElement.scrollLeft || document.body.scrollLeft) -
                    document.documentElement.clientLeft;
                cursor.y = e.clientY +
                    (document.documentElement.scrollTop || document.body.scrollTop) -
                    document.documentElement.clientTop;
            }
            return cursor;
        }

    I'd prefer not to use jQuery UI if possible, since I've always thought of jQuery and libraries as a bit of overkill for most JS programming.

    Read the article

  • ASP.NET website gives no response while a long process is running

    - by Ammar
    Dear Programmers, when my application faces a long-running process, e.g. fetching a query (SELECT a, b, c FROM d) that needs 10 seconds to complete in SQL Server Management Studio, the ASP.NET application refuses to return any response to any other requests made on that server while it is fetching the results. I am hosting my application on a VPS server with good specifications, and I am giving the (SELECT a, b, c FROM d) example just to illustrate the issue; it can be any long process, maybe processing a movie, fetching some data through an external API that is experiencing a slowdown, or whatever. Any help or suggestions would be highly appreciated.

    Read the article

  • Customer session is different in different parts of a Magento website

    - by Josh Pennington
    I have a function inside a Helper in Magento that returns whether or not a customer attribute equals one. Here is my Helper class:

        class Nie_Nie_Helper_Data extends Mage_Core_Helper_Abstract
        {
            public function isNieAdmin()
            {
                if (Mage::getSingleton('customer/session')->getCustomer()->getNieAdmin() == 1) {
                    return true;
                } else {
                    return false;
                }
            }
        }

    When I call this function from a class that extends Mage_Core_Block_Template, everything seems to work fine. However, when I try to use it inside one of my controllers, it does not work. In fact, when I do Mage::getSingleton('customer/session')->getCustomer()->debug(), the only variable that is returned is the website_id. Does anyone know what I have to do in order to get this to work? Thanks Josh Pennington

    Read the article
