Search Results

Search found 43114 results on 1725 pages for 'web farm'.

Page 19/1725 | < Previous Page | 15 16 17 18 19 20 21 22 23 24 25 26  | Next Page >

  • Detecting 'stealth' web-crawlers

    - by Jacco
    What options are there to detect web crawlers that do not want to be detected? (I know that listing detection techniques will let the smart stealth-crawler programmer build a better spider, but I do not think we will ever be able to block the smart stealth crawlers anyway, only the ones that make mistakes.) I'm not talking about the nice crawlers such as Googlebot and Yahoo! Slurp. I consider a bot nice if it identifies itself as a bot in the user agent string and reads robots.txt (and obeys it). I'm talking about the bad crawlers: hiding behind common user agents, using my bandwidth, and never giving me anything in return. There are some trapdoors that can be constructed (updated list, thanks Chris, gs):
    - adding a directory that is only listed (marked as disallowed) in robots.txt;
    - adding invisible links (possibly marked rel="nofollow"?), e.g. style="display: none;" on the link or a parent container, or placed underneath another element with a higher z-index;
    - detecting who doesn't understand CaPiTaLiSaTioN;
    - detecting who tries to post replies but always fails the Captcha;
    - detecting GET requests to POST-only resources;
    - detecting the interval between requests;
    - detecting the order in which pages are requested;
    - detecting who (consistently) requests https resources over http;
    - detecting who does not request image files (this, combined with a list of user agents of known image-capable browsers, works surprisingly well).
    Some traps would be triggered by both 'good' and 'bad' bots, so you could combine them with a whitelist: it triggers a trap, it requests robots.txt, and it does not trigger another trap because it obeyed robots.txt. One other important thing: please consider blind people using screen readers; give people a way to contact you, or a (non-image) Captcha to solve so they can continue browsing. So: what methods are there to automatically detect web crawlers trying to mask themselves as normal human visitors? Update: the question is not "how do I catch every crawler?" but "how can I maximize the chance of detecting a crawler?". Some spiders are really good and actually parse and understand HTML, XHTML, CSS, JavaScript, VBScript, etc. I have no illusions: I won't be able to beat them. You would, however, be surprised how stupid some crawlers are, the best example of stupidity (in my opinion) being crawlers that cast all URLs to lower case before requesting them. And then there is a whole bunch of crawlers that are simply not good enough to avoid the various trapdoors.
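    As a minimal sketch of the honeypot idea above (the log format is assumed to be the common access-log format, and the honeypot path and the POST-only endpoint are hypothetical names, not anything from the original question):

        import re
        from collections import defaultdict

        # Hypothetical honeypot path: listed as "Disallow" in robots.txt and reachable
        # only through an invisible link, so no human visitor should ever request it.
        HONEYPOT_PATH = "/private-not-linked/"

        # Common log format: ip ident user [time] "METHOD path HTTP/x.x" status ...
        LOG_LINE = re.compile(
            r'^(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3})'
        )

        def suspicious_ips(log_lines):
            """Map each client IP to the reasons it looks like a stealth crawler."""
            reasons = defaultdict(list)
            for line in log_lines:
                m = LOG_LINE.match(line)
                if not m:
                    continue
                ip, method, path = m.group("ip"), m.group("method"), m.group("path")
                if path.startswith(HONEYPOT_PATH):
                    reasons[ip].append("requested robots.txt-disallowed honeypot")
                if method == "GET" and path.startswith("/comments/submit"):
                    # Hypothetical POST-only endpoint: a GET here means the client
                    # blindly crawled a form action instead of submitting the form.
                    reasons[ip].append("GET request to POST-only resource")
            return dict(reasons)

        if __name__ == "__main__":
            sample = [
                '203.0.113.7 - - [01/Jan/2024:00:00:01 +0000] "GET /private-not-linked/x HTTP/1.1" 200 512',
                '198.51.100.9 - - [01/Jan/2024:00:00:02 +0000] "GET /index.html HTTP/1.1" 200 1024',
            ]
            print(suspicious_ips(sample))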

    Read the article

  • Testing an iPhone web app on Windows

    - by JoseMarmolejos
    When developing web apps for the iPhone on a Mac you can test your app in either iPhoney or the Apple-supplied simulator; both of them are excellent for the task but are only available for Macs. So I have to ask: are there Windows alternatives to these iPhone simulators? So far I could only find this one.

    Read the article

  • Best practices for sending automated daily emails from web service

    - by Tauren
    I am running a web service that currently sends confirmation emails out to new users via the Gmail SMTP servers. As I'm only getting a few new users each day, this hasn't been a problem. I've recently added new features to the webapp that will require a customized message to be sent out to each user every day. Think of this as similar to the regular messages LinkedIn sends out that give you a status report on the activity in your network. Every user's message will be different. With thousands of users, this means thousands of unique messages will be sent each day. Edit: I've since found that these types of email are called "transactional or relationship messages". Spamtacular has a good article on differentiating between marketing and transactional email. I don't think using Gmail's SMTP servers will cut it anymore, but I don't know that for sure. I don't know what Gmail's maximum number of outgoing messages per account is (it might be 100/day), but they limit outgoing mail to 500 recipients per message. I'm not sending a single message to 500 recipients, but I'm going to be sending thousands of customized messages, with each recipient getting one per day. I'm interested to learn any best practices for doing this (especially for Java-based webapps). Here are some of my thoughts and concerns:
    - Should I set up my own outgoing mail server? If I do, it seems like I'll have all sorts of other issues to worry about, such as preventing mail server abuse, monitoring bounces, allowing ways to opt out of emails, etc. Are there any tools or services to help with this?
    - Maybe something like OpenEMM or a service like MailChimp? But those seem focused more toward email marketing campaigns.
    - I don't think the webapp itself should handle sending emails, as it currently does for new user signups. I'm thinking I should set up a separate messaging server that can access the same backend/datastore as the webapp. Thoughts on this?
    - Should I consider setting up some sort of message queueing service to help with this, such as JMS, RabbitMQ, ActiveMQ, etc.?
    - Do I need to provide users a way to opt out? Do I need to flag these as bulk messages? I don't really consider these email marketing messages, but I'm unsure what is considered appropriate or proper netiquette.
    Any advice is appreciated. I'm also very interested in open source tools or web services that simplify things and could help me to ramp up as quickly as possible. Thanks!
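    A minimal sketch of a separate digest sender, in Python rather than Java purely to keep it short (the SMTP host, addresses, opt-out flag, and throttling value are all placeholder assumptions, not a recommendation for any particular provider):

        import smtplib
        import time
        from email.message import EmailMessage

        SMTP_HOST = "smtp.example.com"   # assumed dedicated outbound relay, not Gmail
        FROM_ADDR = "digest@example.com"

        def build_digest(user):
            """Render the per-user activity summary (placeholder implementation)."""
            return f"Hello {user['name']},\n\nYour daily activity summary:\n{user['activity']}\n"

        def send_daily_digests(users):
            with smtplib.SMTP(SMTP_HOST) as smtp:
                for user in users:
                    if user.get("opted_out"):          # honour unsubscribe before sending
                        continue
                    msg = EmailMessage()
                    msg["Subject"] = "Your daily activity summary"
                    msg["From"] = FROM_ADDR
                    msg["To"] = user["email"]
                    msg["List-Unsubscribe"] = "<mailto:unsubscribe@example.com>"
                    msg.set_content(build_digest(user))
                    smtp.send_message(msg)
                    time.sleep(0.1)                    # crude throttling to stay under relay limits

        if __name__ == "__main__":
            send_daily_digests([
                {"name": "Ada", "email": "ada@example.com",
                 "activity": "3 new connections", "opted_out": False},
            ])

    Running something like this from a scheduled job on a dedicated messaging box, rather than inside the webapp, matches the "separate messaging server" idea described above.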

    Read the article

  • How to create RoR-style RESTful routing in ASP.NET MVC Web API

    - by Jas
    How do I configure routing in ASP.NET Web API so that I can code for the following actions in my ApiController-derived class?
    | Http Verb | Path             | Action  | Used for                                     |
    |-----------|------------------|---------|----------------------------------------------|
    | GET       | /photos          | index   | display a list of all photos                 |
    | GET       | /photos/new      | new     | return an HTML form for creating a new photo |
    | POST      | /photos          | create  | create a new photo                           |
    | GET       | /photos/:id      | show    | display a specific photo                     |
    | GET       | /photos/:id/edit | edit    | return an HTML form for editing a photo      |
    | PUT       | /photos/:id      | update  | update a specific photo                      |
    | DELETE    | /photos/:id      | destroy | delete a specific photo                      |
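    This is not the ASP.NET Web API configuration itself, but as an illustration of the same verb/path/action convention the table describes, here is a minimal sketch in Python/Flask (handler names and return values are hypothetical):

        from flask import Flask, request

        app = Flask(__name__)

        # GET /photos -> index, POST /photos -> create
        @app.route("/photos", methods=["GET", "POST"])
        def photos_collection():
            if request.method == "POST":
                return ("created", 201)          # create a new photo
            return "list of all photos"          # index

        # GET /photos/new -> new (HTML form for creating a photo)
        @app.route("/photos/new", methods=["GET"])
        def photos_new():
            return "form for a new photo"

        # GET /photos/<id> -> show, PUT -> update, DELETE -> destroy
        @app.route("/photos/<int:photo_id>", methods=["GET", "PUT", "DELETE"])
        def photos_member(photo_id):
            if request.method == "PUT":
                return f"updated photo {photo_id}"
            if request.method == "DELETE":
                return ("", 204)
            return f"photo {photo_id}"

        # GET /photos/<id>/edit -> edit (HTML form for editing a photo)
        @app.route("/photos/<int:photo_id>/edit", methods=["GET"])
        def photos_edit(photo_id):
            return f"edit form for photo {photo_id}"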

    Read the article

  • How to create a setup for a web application with Web Platform Installer

    - by Nasser Hajloo
    I have a large web application (an ERP with 11 subsystems) and I want to create a setup for it with the Microsoft Web Platform Installer (Web PI). Currently we send our application to customers once a week (for weekly updates). We use the following tools in this application, so how can I create a setup for our project that configures it in the client's IIS automatically?
    - .NET Framework 3.5
    - SQL Server 2008
    - ASP.NET / C#
    - NHibernate
    - Log4net
    - Castle Proxy
    - SQL Server Reporting Services (RDL)
    - Visual Studio Client Reports (RDLC)
    - JavaScript / jQuery

    Read the article

  • Problem when creating Web Service

    - by Polaris
    I am using Visual Studio 2008 SP1 and .NET Framework 3.5 SP1 on Windows XP SP3. I have a Java service which I consume in my .NET application. When I attempt to add the web service I get the following error: "The operation could not be completed. An attempt was made to load a program with an incorrect format." On another machine everything works fine, and there is no error in the Java service, so I think the error is in VS. Has anybody seen this error before?

    Read the article

  • Technologies for scaling web applications in a distributed nature

    - by wik
    Hello, I am interested in the theory behind scaling web applications in a distributed nature, i.e. where some platform/stack can be extended by other applications running on different servers, etc. I am researching this field and feel the lack of the right keywords :) Interesting concepts found so far: OpenSocial through an API, like Shopify does (Shopify is a hosted e-commerce solution); the semantic web (not quite sure about this one). Am I on the right track, or am I missing anything? :) Thanks.

    Read the article

  • Making a Web Gui to design a Garden Layout

    - by paddydub
    I would like to design a web page GUI where users can design a simple interactive garden. The user would pick a template design and receive price estimates based on the design template and the dimensions entered. I'd like the user to be able to move items such as plants and stones, and to adjust the dimensions of the grass and paving. I'm thinking I could make it using Flash, but I would like to know whether there are any other ways I could implement this.

    Read the article

  • Stateful EJBs in a web application?

    - by Sebastien Lorber
    Hello, I have never used stateful EJBs. I understand that a stateful EJB can be useful with a Java client. But I wonder: in which cases would you use them in a web application? And how? Should we put these stateful beans in the HTTP session (because HTTP is stateless)? Is that a good practice? (Without debating too much about stateful vs stateless.)

    Read the article

  • Web based client for Amazon S3

    - by Dick Lebavo
    We are looking for a secure online solution to access our files stored on Amazon S3. We have about 3K files, mostly media and documents, that we need to make available to our employees on the move. We don't want to develop anything in-house if there is an existing solution. Please note that our employees are not technologically minded, so a simple web-based upload/download GUI would work best.

    Read the article

  • How to decide what hardware to deploy a web application on

    - by Yuval A
    Suppose you have a web application, no specific stack (Java/.NET/LAMP/Django/Rails, all good). How would you decide which hardware to deploy it on? What rules of thumb exist for determining how many machines you need? How would you turn parameters such as concurrent users, simultaneous connections, and DB read/write ratio into a decision on how much, and which, hardware you need? Any resources on this issue would be very helpful...
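    There is no universal formula, but one common back-of-envelope approach is to turn the traffic parameters into a peak request rate and divide by a measured per-server capacity, keeping headroom for failures and growth. A sketch (every number below is a made-up placeholder, not a benchmark):

        import math

        def estimate_app_servers(concurrent_users, requests_per_user_per_min,
                                 per_server_rps, target_utilization=0.5):
            """Rough sizing: peak request rate divided by what one server can
            sustain at the utilization you are willing to run it at."""
            peak_rps = concurrent_users * requests_per_user_per_min / 60.0
            needed = peak_rps / (per_server_rps * target_utilization)
            return max(2, math.ceil(needed))   # at least 2 machines for redundancy

        # Placeholder inputs: 5,000 concurrent users, 6 requests per user per minute,
        # one server benchmarked at 150 req/s, servers kept at 50% utilization.
        print(estimate_app_servers(5000, 6, 150))   # -> 7 with these numbers

    The per-server figure has to come from load-testing your own stack, and the DB read/write ratio feeds a similar calculation for the database tier.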

    Read the article

  • Back out plan for a Web App

    - by nobody
    We need a back-out plan for a web app whose first maintenance release is going to production soon. The issue we are facing is that even if we back out the new EAR and deploy the old one, the data that was keyed in using the new release would not satisfy the old (current) business rules, since there are enormous changes in the business rules. Can you suggest how we should tackle this issue?

    Read the article

  • File structure for a PHP-based website

    - by John Berryman
    I'm building a PHP-based web app for the first time and I haven't found anything to pattern it after. At this point I'm mostly curious about how the files should be arranged into directories so that development of the website stays manageable. This includes JavaScript files, images, stylesheets, CGI scripts, HTML files, pure PHP files that define common functions, etc. Question: can someone point me to an explanation of how such a website is typically organized on the server?
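    There is no single standard layout, but one common convention is to keep a small public document root and put everything else outside it. A sketch (the directory names are just an example, not a requirement):

        project/
        +-- public/              # the only directory exposed by the web server
        |   +-- index.php        # front controller / entry point
        |   +-- css/
        |   +-- js/
        |   +-- images/
        +-- app/                 # PHP code: common functions, classes, controllers
        |   +-- lib/
        |   +-- templates/       # HTML/PHP view templates
        +-- config/              # configuration such as DB credentials
        +-- vendor/              # third-party libraries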

    Read the article

  • Building an SSL server farm

    - by dan
    I'm interested in building the architecture in the article referenced below. I currently have a modestly-priced layer-4 load balancer and my application servers are the SSL endpoints. I want to put an SSL server farm in between my load balancer and my app servers. Then I will put another inexpensive load balancer between the SSL farm and my app servers, to do layer-7 routing. My web application has a fairly high amount of consumer traffic, which 6 servers can handle at about 50% capacity. Additionally, I have infrastructure traffic that is several orders of magnitude heavier than my consumer traffic. This is data coming in from all over the world that must integrate with my web application in real time. In total I have 18 app servers to handle all the traffic, plus 6 database servers. I will be adding 6 more app servers over the next 2 weeks and another 6 the 2 weeks after that. Conservatively, I estimate I will need to scale to 120 servers by the end of the year. My motivation right now is to separate the consumer traffic from the infrastructure traffic. The consumer traffic is higher priority than the infrastructure traffic and I cannot allow a stampede on the infrastructure side to take down my consumer-facing servers. Having a website that is always up is the top priority. However, if there is a failure in one of the consumer app servers, I want to route that traffic to the servers designated for infrastructure traffic. The complication is that all the traffic is addressed using the same hostname and is nearly 100% https. The only way in my case to distinguish infrastructure from consumer traffic is by URL (poor architecture I inherited), so I need a layer-7 load balancer to be able to route. However, for that to work I need either a fancy hardware-based SSL terminator or an SSL server farm as described above. Because my user base is rapidly scaling, I worry that if I go down the hardware path it will become very expensive very fast, especially since I will need 4 of everything for high availability (2 identical setups in 2 facilities). Meanwhile, the above diagram seems very flexible and more horizontally scalable. Has anyone built this before? Are there pre-built configurations? What considerations should I make and what software should I use (I've heard of people using Apache with mod_ssl, nginx, and stunnel)? Also, when does it make sense to buy an expensive load balancer vs building an SSL server farm? http://1wt.eu/articles/2006_lb/index_05.html
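    A minimal sketch of the software approach with nginx terminating SSL and doing the layer-7 split by URL (every hostname, path prefix, address, and certificate path below is a placeholder, and a deployment at this traffic level would need real tuning):

        upstream consumer_pool { server 10.0.0.11:8080; server 10.0.0.12:8080; }
        upstream infra_pool    { server 10.0.1.11:8080; server 10.0.1.12:8080; }

        server {
            listen 443 ssl;
            server_name www.example.com;            # placeholder hostname
            ssl_certificate     /etc/ssl/example.crt;
            ssl_certificate_key /etc/ssl/example.key;

            # Layer-7 routing: infrastructure traffic is distinguished only by URL
            location /feeds/ {                      # placeholder prefix for infrastructure traffic
                proxy_pass http://infra_pool;
            }
            location / {                            # everything else is consumer traffic
                proxy_pass http://consumer_pool;
            }
        }

    Several such nginx boxes behind the existing layer-4 balancer would form the SSL farm; the stunnel option mentioned in the question plays the same terminating role with a separate layer-7 balancer behind it.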

    Read the article

  • How to Deploy an ASP.NET Web API- and Browser-based Application to a Production Environment

    - by user69508
    (Please forgive if this is posted in an incorrect forum. We didn't know exactly where to post it.) We have an ASP.NET Web API single page application: a browser-based app running in IIS to serve up HTML5/CSS3/JavaScript, which talks to the ASP.NET Web API endpoint only to access a database and transfer JSON data. Everything is working great in our development environment; that is, we have one Visual Studio solution with an ASP.NET Web API project and two class library projects for data access. While developing and testing on development boxes, using IIS Express on a localhost:port to run the site and access the Web API, everything is fine. Now we need to move it to a production environment, and we're having problems, or just not understanding what needs to be done. The production environment is all internal (nothing will be exposed on the public Internet). There are two domains. One domain, the corporate domain, is where all users log in normally. The other domain, the process domain, contains the SQL Server instance that our app and Web API will need to access. The IT staff wants to put a DMZ between the two domains to house the IIS app and shield the users on the corporate domain from having direct access into the process domain. So, I guess what they want is: corp domain (end users) <- firewall (open port 80) <- DMZ (web server running IIS) <- firewall (open port 80 or 1433????) <- process domain (IIS for Web API and SQL Server). We're developers and don't really understand all the networking aspects, so we're wondering how to deploy our browser/Web API application in this scenario:
    - Do we need to break up our application so that all the client code (HTML5/CSS3/JavaScript/images/etc.) is on the IIS server in the DMZ, while the Web API gets installed on the server in the process domain? Or does the entire app (client code and Web API) stay together on the IIS server in the DMZ, which then somehow accesses the SQL Server instance to get data?
    - From the IIS server and app in the DMZ, would you simply access the Web API on the server in the process domain by going to "http://server/appname/api/getitmes"?
    - In the second firewall, between the DMZ and the process domain, would you have to open port 1433, or just port 80 since the Web API is an HTTP endpoint?
    - Or is there some better way of deployment (i.e., how are ASP.NET Web API single page applications written entirely in HTML5 and JavaScript supposed to be deployed to production environments)?
    I'm sure there are other questions, but we'll start with these. Thanks!!! (Note: the servers are Win2k8 R2, SQL Server 2k8 R2, and IIS 7.5.)

    Read the article

  • Architectural advice - web camera remote access

    - by Alan Hollis
    I'm looking for architectural advice. I have a client for whom I've built a website which essentially allows users to view their web cameras remotely. The current flow of data is as follows:
    - User opens page to view the web camera image.
    - JavaScript polls a URL on the server (appended with a unique timestamp) every 1000ms.
    - FTP is enabled for the camera's FTP user.
    - Web camera opens an FTP connection to the server.
    - Web camera begins taking photos.
    - Web camera sends each photo to the FTP server.
    - On each image URL request: the server reads the latest image uploaded via FTP for that camera from the hard drive and deletes any older images from the server.
    This is working okay at the moment for a small number of users/cameras (about 10 users and around the same number of cameras), but we're starting to worry about the scalability of this approach. My original plan was that instead of having the files read from the local disk, the web server would open an FTP connection to the FTP server and read the latest images directly from there, meaning we should have been able to scale horizontally fairly easily. But FTP connection establishment times were too slow (mainly due to the fact that PHP out of the box is unable to persist FTP connections), so we abandoned this approach and went straight for reading from the hard drive. The firmware provider for the cameras states they're able to build an HTTP client which, instead of using FTP to upload the image, could post the image to a web server. This seems plausible enough to me, but I'm looking for some architectural advice. My current thought is a simple Nginx/PHP/Redis stack. The web camera issues POST requests with the latest image to Nginx/PHP, and the latest image for that camera is stored in Redis. The clients can then pull the latest image from Redis, which should be extremely quick as the images will always be stored in memory. The data flow would then become:
    - User opens page to view the web camera image.
    - JavaScript polls a URL on the server (appended with a unique timestamp) every 1000ms.
    - Camera is sent an HTTP request to start posting images to a provided URL.
    - Web camera begins taking photos.
    - Web camera sends POST requests to the server as fast as it can.
    - On each image URL request: the server reads the latest image from Redis and tells Redis to delete the older image.
    My questions are:
    - Are there any greater overheads of transferring images via HTTP instead of FTP?
    - Is there a simple way to calculate how many potential cameras we could have streaming at once?
    - Is there any way to prevent potentially DOS'ing our own servers due to web camera requests?
    - Is Redis a good solution to this problem?
    - Should I abandon the PHP/Nginx combination and go for something else?
    - Is this proposed solution actually any good?
    - Will adding HTTPS to the mix cause posting the image to become too slow?
    Thanks in advance, Alan
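    A minimal sketch of the Redis-backed flow, written in Python/Flask rather than PHP purely to keep it short (camera IDs, URL paths, and the key naming are assumptions):

        from flask import Flask, Response, request
        import redis

        app = Flask(__name__)
        r = redis.Redis(host="localhost", port=6379)

        # Camera firmware POSTs its latest JPEG here; only the newest frame is kept,
        # so overwriting the key replaces the old image and nothing needs explicit deletion.
        @app.route("/cameras/<camera_id>/image", methods=["POST"])
        def upload_image(camera_id):
            r.set(f"camera:{camera_id}:latest", request.get_data())
            return ("", 204)

        # The browser polls this URL (with a cache-busting timestamp) every second.
        @app.route("/cameras/<camera_id>/image", methods=["GET"])
        def latest_image(camera_id):
            data = r.get(f"camera:{camera_id}:latest")
            if data is None:
                return ("no image yet", 404)
            return Response(data, mimetype="image/jpeg")

    A PHP equivalent behind Nginx would follow the same pattern; the important property is that each camera maps to a single in-memory key that is overwritten on every upload.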

    Read the article

  • The Mysterious ARR Server Farm to URL Rewrite link

    Application Request Routing (ARR) is a reverse proxy plug-in for IIS7+ that does many things, including functioning as a load balancer. For this post, I'm assuming that you already have an understanding of ARR. Today I wanted to find out how the mysterious link between ARR and URL Rewrite is maintained. Let me explain: ARR is unique in that it doesn't work by itself. It sits on top of IIS7 and uses URL Rewrite. As a result, ARR depends on URL Rewrite to catch the traffic...

    Read the article

  • How to introduce web development to non-programmers?

    - by Gulshan
    Once one of my non-programmer friends asked, "I have a cool website idea that I don't want to share. Rather, I want to develop it on my own. So, I want to learn web development. Tell me what to do." And many other people have asked how to start with web development as a profession. But they are non-programmers, or not from a Computer Science background. What should I suggest to them? Learning programming from scratch? Using CMS-like tools? Or anything else?

    Read the article

  • How to find web hosting that meets my requirements?

    - by John Conde
    This question is here so we can offer guidance to users who are looking for information on how to determine which web hosting solution is right for them. All future questions pertaining to finding web hosting should be closed as a duplicate of this question, as per this meta question. How do I find web hosting that meets my requirements? What we're looking for in answers to this question is the basics about web hosting:
    - What is web hosting?
    - What is the difference between shared, VPS, and dedicated hosting?
    - How does a content delivery network relate to web hosting?
    - Anything else you feel is helpful in finding a web host.
    What we do not want is:
    - Endorsements or recommendations for specific web hosts.
    - Your experience or other subjective information (just the facts, please).

    Read the article
