Search Results

Search found 29495 results on 1180 pages for 'cross site scripting'.


  • VB6 Parser/Lexer/Scripter

    - by rlb.usa
    I've got a game in VB6 and it works great and all, but I have been toying with the idea of creating a scripting engine. I'm thinking I'd like VB6 to read in flat text script files and then lex/parse/execute them. I have good programming experience, and I've built a simple C compiler as well as a LOGO emulator before. My questions are: Are there any tools, like Lex/Yacc/Bison, that I can use to help me? How should I approach this problem with regard to lexing, parsing, and feeding the commands back to VB6 so I can handle them? Is this a bad idea in the sense that there are too many obstacles in the way (for example, building Minesweeper in assembly, though not impossible, is very difficult, and a bad idea)?
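
    As an illustration of the lexing stage only (not a tool recommendation), here is a minimal, hypothetical tokenizer sketch in VB6 itself: it splits one line of script text on spaces and returns the non-empty tokens, which a parser could then consume.

        ' Minimal whitespace tokenizer: returns the non-empty,
        ' space-separated tokens of one script line.
        Private Function Tokenize(ByVal sourceLine As String) As Collection
            Dim tokens As New Collection
            Dim parts() As String
            Dim i As Long
            parts = Split(sourceLine, " ")
            For i = LBound(parts) To UBound(parts)
                If Len(Trim$(parts(i))) > 0 Then
                    tokens.Add Trim$(parts(i))
                End If
            Next i
            Set Tokenize = tokens
        End Function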

    Read the article

  • Scoping in embedded Groovy scripts

    - by Aaron Digulla
    In my app, I use Groovy as a scripting language. To make things easier for my customers, I have a global scope where I define helper classes and constants. Currently, I need to run the script which builds the global scope every time a user script is executed:

        context = setupGroovy();
        runScript( context, "global.groovy" ); // Can I avoid doing this step every time?
        runScript( context, "user.groovy" );

    Is there a way to set up this global scope once and just tell the embedded script interpreter: "Look here if you can't find a variable"? That way, I could run the global script once. Note: security is not an issue here, but if you know a way to make sure the user can't modify the global scope, that's an additional plus.
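
    One possibility, sketched from the Java side and assuming the standard groovy.lang embedding API: evaluate global.groovy once against a Binding, then hand that same Binding to every user script. In a Groovy script, variables assigned without def land in the Binding and stay there; def-declared variables remain local to their script.

        import groovy.lang.Binding;
        import groovy.lang.GroovyShell;
        import java.io.File;
        import java.io.IOException;

        public class SharedScope {
            public static void main(String[] args) throws IOException {
                // One Binding shared by the global script and all user scripts.
                Binding shared = new Binding();
                GroovyShell shell = new GroovyShell(shared);

                // Run once: anything global.groovy assigns without 'def'
                // is stored in the shared Binding.
                shell.evaluate(new File("global.groovy"));

                // User scripts resolve unknown names against the same
                // Binding, so the globals are visible without re-running it.
                shell.evaluate(new File("user.groovy"));
            }
        }

    Note this does not protect the globals: a user script can still reassign anything in the shared Binding.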

    Read the article

  • Win32 script environment for testing http redirects?

    - by Anders Lindahl
    The past few days I've been working on setting up an Apache server on Windows. The server is supposed to host several .htaccess files, each redirecting (or, in some cases, proxying) to different hosts. I want to create tests for these redirections, and the solution I'm currently considering is a CGI script running on the same server, sending GET requests to it and verifying that it gets the correct redirection headers back. A scripting solution (VBScript/JScript) seems worth exploring, but so far I've only managed to rule out Microsoft.XMLHTTP because it follows the redirect "behind the scenes". Are there any libraries or other solutions already present on a reasonably standard Windows Server that can do this kind of low-level HTTP work? If not, any other suggestions for simple environments to set up for verifying redirects?
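
    One avenue worth sketching: the WinHttp.WinHttpRequest.5.1 COM object ships with recent Windows versions and, unlike Microsoft.XMLHTTP, can be told not to follow redirects. A minimal VBScript sketch (the URL is a stand-in):

        ' Option 6 is WinHttpRequestOption_EnableRedirects; turning it
        ' off lets the script inspect the raw 30x response itself.
        Dim http
        Set http = CreateObject("WinHttp.WinHttpRequest.5.1")
        http.Option(6) = False
        http.Open "GET", "http://localhost/some/old/path", False
        http.Send

        WScript.Echo "Status: " & http.Status
        If http.Status >= 300 And http.Status < 400 Then
            WScript.Echo "Location: " & http.GetResponseHeader("Location")
        End If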

    Read the article

  • Shell script to count files, then remove oldest files

    - by Nic Hubbard
    I am new to shell scripting, so I need some help here. I have a directory that fills up with backups. If I have more than 10 backup files, I would like to remove the oldest files, so that the 10 newest backup files are the only ones left. So far, I know how to count the files, which seems easy enough, but how do I then remove the oldest files if the count is over 10?

        if [ "$(ls /backups | wc -l)" -gt 10 ]; then
            echo "More than 10"
        fi
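
    A minimal sketch of the removal step, assuming the backups live in /backups, that the file names contain no newlines (the output of ls is parsed), and GNU xargs for the -r flag:

        #!/bin/bash
        cd /backups || exit 1

        # ls -1t lists newest first, one name per line; tail -n +11
        # keeps line 11 onward, i.e. everything but the 10 newest.
        ls -1t | tail -n +11 | xargs -r rm -f --

    The -r makes xargs do nothing when there are 10 or fewer files, and the -- guards against names that begin with a dash.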

    Read the article

  • Validating parameters to a bash script

    - by nickf
    I'm a total newbie to doing any bash scripting, but I came up with a basic one to help automate the process of removing a number of folders as they become unneeded.

        #!/bin/bash
        rm -rf ~/myfolder1/$1/anotherfolder
        rm -rf ~/myfolder2/$1/yetanotherfolder
        rm -rf ~/myfolder3/$1/thisisafolder

    This is invoked like so: ./myscript.sh <{id-number}> The problem is that if you forget to type in the id-number (as I did just then), it could potentially delete a lot of things that you really don't want deleted. Is there a way to add some form of validation to the command-line parameters? In my case, it would be good to check that a) there is one parameter, b) it's numerical, and c) the folder exists, before continuing with the script.
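
    A minimal sketch of those three checks, assuming the same folder layout as the script above:

        #!/bin/bash
        # a) exactly one parameter
        if [ "$#" -ne 1 ]; then
            echo "usage: $0 <id-number>" >&2
            exit 1
        fi
        # b) purely numerical
        if ! [[ $1 =~ ^[0-9]+$ ]]; then
            echo "error: id-number must be numeric" >&2
            exit 1
        fi
        # c) the folder actually exists
        if [ ! -d ~/myfolder1/"$1" ]; then
            echo "error: no folder for id $1" >&2
            exit 1
        fi
        rm -rf ~/myfolder1/"$1"/anotherfolder
        rm -rf ~/myfolder2/"$1"/yetanotherfolder
        rm -rf ~/myfolder3/"$1"/thisisafolder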

    Read the article

  • Embed external website inside a page

    - by jasongullickson
    I'd like to load something from website B into a page on website A and contain the functionality of website B within a container on website A. I tried doing this using a div and jQuery's load() method, but I run into cross-domain scripting issues (I think: it works with a local file but not with a remote URL). I also tried using an iframe, but strange things happen (for example, when a link is clicked in the "contained" website B, it reloads the entire browser window, losing the content of website A). I've read about some server-side ways of handling this (and it may just come to that), but ideally I want something completely client-side, JavaScript and HTML. Any ideas?

    Read the article

  • Using an ActiveX object from an Outlook-hosted webpage - possible?

    - by Nic Wise
    I'm trying to do the following: We have an Outlook plugin, written in .NET (and C++). It does various things, and is manually installed on the end users' machines (usually via AD deployment or similar). We are changing our search to use a webpage-based search, but from within Outlook. That part is OK; however, we want to communicate from the webpage to the surrounding Outlook application. We can call into Outlook by exposing an ActiveX object from our plugin, but we get security warnings, even if it's signed and marked as safe for scripting. Is this even possible? Has anyone done it? Does anyone have a better way of doing it? We only need to pass in a small amount of data (a message id), and only from the webpage to Outlook. [update]: This is the error: "Automation server can't create object". We can get around it a bit by turning things off in IE, but that's not a good way to do it! Thanks
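
    For illustration only, a hedged sketch of the web-page side in JScript; "MyCompany.OutlookBridge" and SendMessageId are hypothetical stand-ins, not a real API. Creation fails with "Automation server can't create object" when the zone's ActiveX settings block the control, which is why the control usually has to be both safe-for-scripting and served from a trusted zone.

        try {
            // Hypothetical ProgID exposed by the Outlook plugin.
            var bridge = new ActiveXObject("MyCompany.OutlookBridge");
            bridge.SendMessageId("the-message-id");
        } catch (e) {
            // Reached when the browser's security settings refuse to
            // instantiate the object.
            alert("Could not reach the Outlook plugin: " + e.message);
        }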

    Read the article

  • How do I remove a site from IIS7 using JavaScript?

    - by Petteri Vilmusenaho
    Hi, I've been searching the Internet for a way to remove a site from IIS7 using JavaScript. I've found, and used, a lot of examples on how to create a site and applications using JavaScript but not a single example on how to delete a site! For example, I've figured out how to create a site using the examples found at http://www.iis.net/ConfigReference/system.applicationHost/sites. Does anyone know how to remove a site from IIS7 using JavaScript? Please help! /Petteri
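
    A hedged sketch of the delete side, assuming the same Microsoft.ApplicationHost.WritableAdminManager COM API that the create examples on that page use; the site name "MySite" is a stand-in, and the script needs administrative rights:

        // Find the <site> element by name in system.applicationHost/sites,
        // delete it from the collection, then commit the change.
        var adminManager = new ActiveXObject(
            "Microsoft.ApplicationHost.WritableAdminManager");
        adminManager.CommitPath = "MACHINE/WEBROOT/APPHOST";

        var sites = adminManager.GetAdminSection(
            "system.applicationHost/sites", "MACHINE/WEBROOT/APPHOST").Collection;

        for (var i = 0; i < sites.Count; i++) {
            if (sites.Item(i).GetPropertyByName("name").Value == "MySite") {
                sites.DeleteElement(i);
                break;
            }
        }
        adminManager.CommitChanges();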

    Read the article

  • Use mod_rewrite to force users to homepage when entering a site?

    - by scotru
    Is it possible to use mod_rewrite to force all users entering a site (either through a link from another site, or by typing a URL in the address bar) to be redirected to the homepage? From the homepage (or any page within the site), users should then be able to access other pages in the site. But all users would be forced to enter the site through the homepage. Can this be done with mod_rewrite (or without using a scripting language)?
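
    A hedged sketch of one common .htaccess approach, assuming the site is example.com and the homepage is /index.html (both stand-ins): any request whose Referer is empty or foreign is treated as an entry into the site and redirected. Since browsers and proxies can strip the Referer header, this is approximate rather than airtight.

        RewriteEngine On
        # Let requests for the homepage itself through untouched.
        RewriteCond %{REQUEST_URI} !^/index\.html$
        # An empty or foreign Referer means the visitor is entering the site.
        RewriteCond %{HTTP_REFERER} !^https?://(www\.)?example\.com/ [NC]
        RewriteRule ^ /index.html [R=302,L]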

    Read the article

  • How to stop FeedReader from fetching content from my site using an iFrame?

    - by Wei Kai
    As you all can see from the picture below, my site's content is duplicated by FeedReader (using an iFrame) and indexed at Google. When I click the FeedReader link, it uses some sort of iFrame to draw content from my site live. In the meantime, my site traffic has dropped significantly, but I'm not sure if this is the reason. https://lh4.googleusercontent.com/-hc4pVwHvQoo/UGGcwVyRqYI/AAAAAAAAAIc/9m04UOwmfEk/s1600/1.PNG https://lh3.googleusercontent.com/-ljj6dV7xTik/UGGc0x4GiZI/AAAAAAAAAIk/3mZ6HiCiQ2w/s1600/2.PNG What can I do to prevent FeedReader from fetching my content into their site? Any help would be much appreciated. By the way, I'm using WordPress as my CMS. I raised this issue with FeedReader two days ago but have yet to get any reply from them.
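
    A minimal client-side sketch: classic frame-busting JavaScript on each page breaks it out of a foreign frame. (Server-side, an X-Frame-Options: SAMEORIGIN response header, which Apache or a WordPress plugin can add, stops compliant browsers from framing the pages at all.)

        // If this page is being rendered inside someone else's frame,
        // navigate the top-level window to the real page.
        if (window.top !== window.self) {
            window.top.location = window.self.location.href;
        }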

    Read the article

  • Wikileaks: the site's founder arrested and his assets frozen; cyber-activists organize and OVH speaks out in a new open letter

    Wikileaks: the site's founder arrested and his assets frozen; cyber-activists organize and OVH speaks out in a new open letter. Update of 07/12/10. Wikileaks is turning from a controversial site into a hunted one. A position that usually attracts broad sympathy across the Internet, and Wikileaks is no exception. Many Internet users, including those who don't particularly subscribe to the site's philosophy, have decided to react to the attempts to take it offline. For example, there are now numerous mirrors (and therefore hosts ...

    Read the article

  • What bots are really worth letting onto a site?

    - by blunders
    Having written a number of bots, and having seen the massive number of random bots that crawl a site, I am wondering: if the goal of allowing bots onto a site is the potential for them to send real traffic back, is there any reason to allow bots that are not known to send real traffic back? And how do you spot these "good" bots, based on how they identify themselves, the IPs they come from, their behavior, etc.?

    Read the article

  • How do I get Google to see keywords on a one-page web application site?

    - by David
    I'm going to have to link to the web site to explain this: http://www.diagram.ly. It's a free service, so I hope this doesn't break advertising rules. Basically, it's a one-page web application; I don't want to create a separate web site for it. Some background text loads, and if JavaScript is enabled, the web application itself then loads. The problem is that Google only seems to be picking up the title of the page and the text in the footer, so the site only appears in Google searches for very limited text (based mostly on the title and meta description). I was hoping that search engines would pick up on the background text and index it. The text is factual, not keyword-stuffed. Yahoo seems to pick up the text, just not Google. Does anyone have any experience of how Google would view such a site, and where I could put the text for a better result? Edit: I should mention that Google Webmaster Tools lists the site keywords as "Component, diagramly, feed, mxgraph, share and twitter": basically the footer and little else.

    Read the article

  • My parked domain was de-indexed by Google - what to do?

    - by Programmer Joe
    I have a question about how to handle my domain. In a nutshell, I bought a domain last year from Go Daddy. My intention was to launch a real site with this domain, and I have spent the last year working on my site. For the last year, I have been using the default Go Daddy page display for an up-and-coming site. When I first bought this site, it was indexed by Google: you could search for "alphabanter" and my site would show up on Google's search results page. Several months ago, Google seemed to de-index my domain, and if you type "alphabanter", my domain no longer shows up in the list of search results. However, if you search for "www.alphabanter.com", that's the only way it shows up in the search results for Google. Anyway, I am about to launch my site for real. However, I don't quite know if I can get my site back into Google's index. I have a few questions:

    1) Was my domain permanently penalized by Google and removed from their index just because it was a parked domain? I don't believe I have done anything abusive other than using the Go Daddy default page for almost a year because my site was not ready.
    2) Should I just launch my site, put a few backlinks to my site, and hope that Google indexes my site again?
    3) Should I submit my site to Google at Google submit your content?

    I assume getting Google to reconsider my site is the last option if none of the above works.

    Read the article

  • How to block FeedReader from fetching my content into their site?

    - by Wei Kai
    As you all can see from the picture below, my site's content is duplicated by FeedReader and indexed at Google. When I click the FeedReader link, it uses some sort of iFrame to draw content from my site live. This amounts to content duplication, and I believe it does harm to my site. (Stack Overflow doesn't allow me to post the image because my account is new; please click the link above to see the picture. Many thanks.) What can I do to prevent FeedReader from fetching my content into their site? I know robots.txt can perform such a function, but I don't know how to do it. Any help would be much appreciated. I raised this issue with FeedReader two days ago but have yet to get any reply from them.

    Read the article

  • Is WordPress a good CMS for an Event Site? [closed]

    - by Roland
    I plan on building a gallery/exhibition event site. Locations are usually the same and fall into 3 categories (gallery, offspace, institution); then there is the exhibition title, the date, and the participating artists. So I was wondering if WordPress could handle such a site. It should be very data-driven, though, so all the information is in a list view on one page and can be ordered and queried (which artists took part in which exhibitions, and so on). Please tell me the pros and cons of using WordPress for such a site, and the problems I could run into if I plan to broaden the scope later on. Thanks!

    Read the article

  • What if you've been asked to develop a site and the client later introduces Ts&Cs that you'll breach whilst doing your job?

    - by Matt Lacey
    Disclaimer: this is all made up. Honest. And it represents no clients or employers living or dead, blah blah blah, etc. [Allegedly] As part of a website I've built, I've now been provided with the Terms and Conditions of site usage to display on the site. These terms, which must be agreed to before accessing the site, include my (or any visitor's) compliance with a number of clauses. Many of these clauses refer to general computer use and are not tied specifically to use of the site. Some of them refer to things I have previously had to do as a legitimate part of my job and would expect to have to do again. When I've raised similar issues before, my line manager has said just to ignore it, but that doesn't seem to be the professional thing to do. So, what do I do? Abiding by the terms would mean that I could no longer work on the project, which would cause issues with my employer and the owner of the business the site is being created for. Ignoring them could lead to future issues with the business owner, and the deliberate breaking of a legal contract is not something I'm happy with. Neither option is one I'd choose, and either could have major consequences. Any thoughts?

    Read the article

  • How can I view localized versions of my site?

    - by Max Vernon
    We are adding internationalization to our site. We get the client's IP address from the headers and look it up in the IP2Location database to get the client's country. Several of our clients reported seeing a blank page over the weekend. We'd like to be able to get screenshots, or use a browser from many different countries on an ongoing basis, for testing code changes. I need to know what the site looks like when accessed from various countries, since several elements vary by country. I've used Tor and Vidalia, along with the Tor-customized Firefox browser; however, it appears the CSS is getting mangled. I have also used http://webpagetest.org to check the site, but the screenshot it gives is too small to be really useful. Is there a site or a service I can use to get screenshots of, or interact with, my website from various countries?

    Read the article

  • Should I give preferential treatment to proxy users on my ecommerce site?

    - by Question Overflow
    I am setting up an ecommerce site that caters to a worldwide audience. I would imagine that visitors will come from everywhere, and, for whatever reason, some will be connecting through proxy servers. My site uses a server that is configured to rate-limit connections from the same IP address to protect itself from a DoS attack. So, if a proxy server is heavily used by my visitors, it will look like a DoS. This is problematic in the sense that it is hard to tell whether the users are genuinely browsing my site or whether a DoS attack is taking place. So my question is: should I give preferential treatment to proxy users on my ecommerce site? If yes, how should this be done? If not, why not?

    Read the article

  • Google Analytics: Why is Avg Time on Site lower than Avg time on Page?

    - by Melanie
    I have the following Custom Report set up in Google Analytics:

    Metrics: Avg Time on Page, Avg Time on Site
    Dimensions: Page

    So a report looks like this:

        Page                 Avg Time on Page   Avg Time on Site
        /an-article          00:03:14           00:00:11
        /another-article     00:05:11           00:01:07
        /something-written   00:03:00           00:00:31

    Why is it that for each page, the Avg Time on Site is significantly lower than the Avg Time on Page?

    Read the article

  • Does SEO optimisation count on the responsive side of a site?

    - by Rick Donohoe
    I'm looking at making some SEO fixes, and at this point I'm sorting out the heading structure and keywords: H1s, H2s, etc. We have a site with a number of similar blocks, where one is always visible and one is hidden, depending on the screen size. This is our method of making a single site responsive. Firstly, how does this technique affect SEO, and in general does the responsive side of a site matter at all to search engines? What I mean is: if the site shows different content depending on the screen size, which content will the search spider crawl?

    Read the article

  • Learning Python, what to learn next to build a site?

    - by user1476145
    Hi everyone, thanks for your help. I am currently learning Python through Think Python: How to Think Like a Computer Scientist. It is a great book, and it is free online. I will learn more about Python through Udacity and books online. After I learn Python, what will I need to learn next to build a site? I want to build an interactive photo site. I will have to learn HTML. Does anyone know a good book or website for learning HTML? I also need to learn how to use Python with HTML. As you know, I am using an editor now, but I need to learn how to use Python to build a site. I am sorry that I am so ignorant; I just need some good books to read or courses to take. Thank you so much, Luke

    Read the article

  • How should I handle search engines auto-correcting the spelling of a site's name?

    - by Nathan G.
    A client's site and company is called 'Tranin Communications' (Tranin is her last name). It ranks well in searches for her name but rather poorly in searches for the name of her site/company. I realized that this is largely due to* search engines (Google especially) assuming that the query was misspelled and automatically including results for both 'train communications' and 'communications training'. Both of those queries yield many high-ranking sites that completely drown out hers. Sometimes Google even shows results for 'communications training' instead of 'tranin communications', hiding her site altogether. Is there a way to report an incorrect auto-correction to Google, or something I can do to discourage this behavior (e.g. a meta tag)? My searches have come up cold; any suggestions would be appreciated. *I've come to this conclusion because her site ranks very highly when the same queries are put in quotes.

    Read the article

  • What to do with a site that has multiple languages in Google Analytics...

    - by stephmoreland
    We have a site that has four "streams" for language, and each language has different content based on that language and location (US English, Spanish, Canadian English, and Canadian French). I'm wondering if I have to set up an account for each stream so that we can see the stats for each stream on its own, or can I use one account and somehow tell GA to separate the streams based on language? For example, the US English site starts at /en/ while the Canadian English site starts at /ca_en/, etc.

    Read the article

  • What technical details should a programmer of a web application consider before making the site public?

    - by Joel Coehoorn
    What things should a programmer implementing the technical details of a web application consider before making the site public? If Jeff Atwood can forget about HttpOnly cookies, sitemaps, and cross-site request forgeries all in the same site, what important thing could I be forgetting as well? I'm thinking about this from a web developer's perspective, such that someone else is creating the actual design and content for the site. So while usability and content may be more important than the platform, you the programmer have little say in that. What you do need to worry about is that your implementation of the platform is stable, performs well, is secure, and meets any other business goals (like not costing too much, not taking too long to build, and ranking as well with Google as the content supports). Think of this from the perspective of a developer who's done some work for intranet-type applications in a fairly trusted environment, and is about to have his first shot at putting out a potentially popular site for the entire big bad world wide web. Also, I'm looking for something more specific than just a vague "web standards" response. I mean, HTML, JavaScript, and CSS over HTTP are pretty much a given, especially when I've already specified that you're a professional web developer. So, going beyond that: Which standards? In what circumstances, and why? Provide a link to the standard's specification.

    Read the article
