Search Results

Search found 7625 results on 305 pages for 'scraper sites'.

Page 143/305

  • Is there a way to make money from indie downloadable games? [closed]

    - by AShelly
    It appears that there are ways to make money with Flash games through portal and aggregator sites and embedded ads. But I do my programming in C and C++. I've started a prototype which relies on a few existing C++ SDKs. The game would have to be downloadable. Is this just a labor of love, or are there any ways to make money from this type of game? Does anyone pay for shareware anymore? What other options are there?

    Read the article

  • Going from webforms, VS 2008, 3.5 framework to the "next level" based on my goals

    - by Caveatrob
    I've got a few choices to make as I develop some business websites that will run for the next two to three years. Currently I run ASP.NET 3.5 with Visual Studio 2008. I do my development rather crudely in WebForms because that's what I learned and am most productive with. I don't use Membership or any other frameworks in my projects. I use a simple class that maintains a few session keys for each user based on basic database tables for users and roles. (I have about 3,000 users). So far I've kept the data simple, using ADO.NET against SQL Server and a data access class (Circa 2000, I know) to build my sites. My questions are as follows: Under what conditions would I be better off moving to MVC? Under what conditions would I find LINQ and ORM a better way to go than standard ADO.NET? Would I benefit, in my current state of development, from going from Studio 2008 to Studio 2010?

    Read the article

  • Strategy for image sizes

    - by MotiveKyle
    I run a site that has a lot of writers that generate quite a few articles a day. I require them to provide two image sizes (one for the big headline image and one as the thumbnail). I've been wanting to change up the site layout a bit, but I am becoming limited by the image sizes for the posts. I have considered just cropping images, but they still need to look nice, and cropping doesn't always generate what I'd want. I'd prefer to just scale down by percentage (as I do with thumbnails). Should I just make the writers provide more images? How do other sites handle this?
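    For reference, the difference between the two approaches mentioned above is just geometry: proportional scaling keeps the whole frame at the cost of a variable aspect ratio, while cropping fixes the aspect ratio at the cost of discarding part of the frame. Below is a small TypeScript sketch of both calculations; the function names and the centered-crop choice are illustrative assumptions, not anything from the post.

```typescript
interface Size { width: number; height: number; }

// Proportional scaling: fit the original inside a target box without distortion.
// The whole image survives, but the resulting aspect ratio follows the original.
function scaleToFit(original: Size, maxWidth: number, maxHeight: number): Size {
  const ratio = Math.min(maxWidth / original.width, maxHeight / original.height, 1);
  return {
    width: Math.round(original.width * ratio),
    height: Math.round(original.height * ratio),
  };
}

// Center crop: take the largest region that matches the target aspect ratio.
// The aspect ratio is guaranteed, but the edges of the frame are thrown away.
function centerCropRegion(original: Size, target: Size): Size & { x: number; y: number } {
  const targetRatio = target.width / target.height;
  let cropWidth = original.width;
  let cropHeight = Math.round(cropWidth / targetRatio);
  if (cropHeight > original.height) {
    cropHeight = original.height;
    cropWidth = Math.round(cropHeight * targetRatio);
  }
  return {
    x: Math.floor((original.width - cropWidth) / 2),
    y: Math.floor((original.height - cropHeight) / 2),
    width: cropWidth,
    height: cropHeight,
  };
}

// Example: a 1600x900 upload shown as a 300x300 thumbnail.
console.log(scaleToFit({ width: 1600, height: 900 }, 300, 300));                            // { width: 300, height: 169 }
console.log(centerCropRegion({ width: 1600, height: 900 }, { width: 300, height: 300 }));   // 900x900 region at x=350, y=0
```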

    Read the article

  • Will all IT companies invest in social networks? Microsoft offered Facebook $15 billion

    Will all the big IT companies invest in social networks? Microsoft reportedly tried to buy Facebook for 15 billion dollars. Facebook, the undisputed leader in social networking with more than 520 million members, is now one of the most important websites in the world, if not "the" most important according to some. Google is getting into it too, since the firm is in the middle of developing its community interface, Google Me. All over the Net, social features are flourishing. And, we have just learned, Microsoft tried to buy Mark Zuckerberg's company. Indeed, Steve Ballmer approached the youngest billionaire ...

    Read the article

  • Advertisement programs that allow "clickjacking" (earning advertisement revenues by popups generated by clicks on the website)?

    - by Tom
    Whether clickjacking is an ethically responsible way of earning advertisement revenue is a subjective discussion and should not be discussed here. However, it appears that quite a lot of popular sites generate "popups" when you click any of their links or buttons. An example is the Party Poker advertisement (I am sure many of you will have seen this one). I wonder, though, what kind of advertisement companies allow such techniques? Surely Google AdSense does not? But which do, and are they reliable partners?

    Read the article

  • Verifying that a user comes from a 'partner' site?

    - by matt_tm
    We're building a Drupal module that is going to be given to trusted 'corporate partners'. When a user clicks on a link, he should be redirected to our site as if he's a logged in user. How should I verify that the user is indeed coming from that site? It does not look like 'HTTP_REFERER' is enough because it appears it can be faked. We are providing these partner sites with API Keys. If I receive the API-key as a POST value, sent over https, would that be a sufficient indicator that the user is a genuine partner-site user?
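    A bare API key sent as a POST value over HTTPS does work as a shared secret, but it only proves that whoever sent the request knows the key; it does not bind the request to a particular user or moment, so a logged or intercepted request can be replayed. A common alternative is to have the partner sign a short-lived token with the shared key and verify the signature on your side. The following is a minimal TypeScript sketch of that idea using Node's built-in crypto module; the field names, the partner ID, and the 5-minute window are assumptions for illustration, not part of the original question.

```typescript
import { createHmac, timingSafeEqual } from "crypto";

// Hypothetical payload the partner site would POST when handing a user over.
interface PartnerToken {
  partnerId: string;   // which partner generated the token
  userEmail: string;   // the user the partner vouches for
  issuedAt: number;    // Unix time in seconds; limits the replay window
  signature: string;   // HMAC-SHA256 over the other fields, keyed with the shared API key
}

const PARTNER_KEYS: Record<string, string> = {
  "acme-corp": process.env.ACME_API_KEY ?? "", // shared secret, kept server-side on both ends
};

function sign(partnerId: string, userEmail: string, issuedAt: number, apiKey: string): string {
  return createHmac("sha256", apiKey)
    .update(`${partnerId}|${userEmail}|${issuedAt}`)
    .digest("hex");
}

function verifyPartnerToken(token: PartnerToken, maxAgeSeconds = 300): boolean {
  const apiKey = PARTNER_KEYS[token.partnerId];
  if (!apiKey) return false;

  // Reject stale tokens so a captured request cannot be replayed later.
  const age = Math.floor(Date.now() / 1000) - token.issuedAt;
  if (age < 0 || age > maxAgeSeconds) return false;

  const expected = Buffer.from(sign(token.partnerId, token.userEmail, token.issuedAt, apiKey), "hex");
  const provided = Buffer.from(token.signature, "hex");
  return expected.length === provided.length && timingSafeEqual(expected, provided);
}
```

    The signed-token variant avoids exposing the key itself on every request and bounds how long an intercepted handoff stays useful, which is the main weakness of sending the raw key.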

    Read the article

  • New <%: %> Syntax for HTML Encoding Output in ASP.NET 4 (and ASP.NET MVC 2)

    [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] This is the nineteenth in a series of blog posts I'm doing on the upcoming VS 2010 and .NET 4 release. Today's post covers a small, but very useful, new syntax feature being introduced with ASP.NET 4: the ability to automatically HTML encode output within code nuggets. This helps protect your applications and sites against cross-site script injection...
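    For readers outside the ASP.NET world, the point of <%: %> is that interpolated output is HTML-encoded by default, so a value containing markup is rendered as inert text rather than executed. The sketch below is not the ASP.NET implementation, just the same idea expressed as a plain TypeScript helper, to show what "HTML encoding output" means.

```typescript
// Minimal HTML-encoding helper: conceptually, <%: value %> is <%= encode(value) %>,
// applied automatically so it cannot be forgotten.
function htmlEncode(value: string): string {
  return value
    .replace(/&/g, "&amp;")   // must run first so later entities aren't double-escaped
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

const userInput = '<script>alert("xss")</script>';
console.log(htmlEncode(userInput));
// &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt; -- shown as text, never run
```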

    Read the article

  • Why does my system freeze?

    - by user93019
    It has been about two weeks now since my system started freezing while browsing the web, using apps, watching videos on websites, basically while using any part of my netbook; it happens even when I'm only using one program (like the web browser). It freezes so completely that the only option I have left is to press and hold the power button; nothing else works. I have seen an error message telling me about a "compiz" error, but not every time, and this happens every day, sometimes 2 or 3 times during the same day. Why does this happen? How can I fix it? Please help. Just to let you know, my system is up to date on 12.04, 32-bit version, running on an Acer Aspire AOD257, 1 GB RAM, Intel® Atom™ CPU N570 @ 1.66GHz × 4.

    Read the article

  • Are HTTP requests cached? [closed]

    - by nischayn22
    Many HTTP requests are sent repeatedly by browsers on almost every page load, such as the request for the jQuery .js file. Since these files are already used on so many sites, don't modern browsers keep a cache for them? I am thinking of a system where the browser has a cached copy of a .js file that is used very, very frequently. On a new request for the .js file, it sends the server a request for a hash of the .js file (provided the server can reply to that) and compares the returned hash with the cached copy's hash... the rest is intuitive.
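    The scheme described here is essentially what HTTP already provides with validators: the server labels the file with a hash-like ETag, the browser caches the file, and on the next request it sends If-None-Match; if the hash still matches, the server answers 304 and no body is transferred. Below is a minimal sketch of the server side in TypeScript using only Node's built-in modules; the file name and port are made up for illustration.

```typescript
import { createServer } from "http";
import { createHash } from "crypto";
import { readFileSync } from "fs";

const server = createServer((req, res) => {
  // Serve one hypothetical file synchronously; a real server would route, stream, and cache the hash.
  const body = readFileSync("jquery.min.js");
  const etag = '"' + createHash("sha256").update(body).digest("hex") + '"';

  if (req.headers["if-none-match"] === etag) {
    // The browser already holds this exact version: validation only, no body.
    res.writeHead(304, { ETag: etag });
    res.end();
    return;
  }

  res.writeHead(200, {
    "Content-Type": "application/javascript",
    ETag: etag,
    "Cache-Control": "public, max-age=31536000", // reuse from cache without asking, for up to a year
  });
  res.end(body);
});

server.listen(8080);
```

    Shared CDN URLs plus a long max-age get close to the "cached once, reused everywhere" idea, and Subresource Integrity covers the "compare a hash" part on the browser side.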

    Read the article

  • Is adding the license type in the header enough to say "my code is licensed"?

    - by silverfox
    I have read about licenses on various sites. I just put the license type in the header of the file (in my case a JavaScript file, open source):
        /*
         * "codeName" "version"
         * http://officialsite.com/
         *
         * Copyright 2012 "codeName"
         * Released under the "LICENSE NAME" license
         * http://officialsite.com/LICENSE NAME
         */
        javascript code ...
    In the same folder I leave a copy of the license. The listing of the folder looks like this:
        codeName.js
        LICENSE
    The file LICENSE contains the full text of the license my code uses. What I cannot find anywhere is whether this is enough to say my code is licensed (in the open-source case). Is something more required?

    Read the article

  • Does setting document.domain via script interfere with Google Analytics?

    - by Seth Petry-Johnson
    I have a site, www.example.com, that displays some secure content from forms.example.com in iframes. To enable cross-frame navigation, pages on both sites use JavaScript to set the document.domain to just "example.com". I am using Google Analytics on www.example.com, but the GA site is not showing any data. It indicates that the tracking code is found (the status icon is a green checkmark), but no data is reported. The GA profile lists the website as "www.example.com". Is this a supported scenario? Is my script interfering with the GA code in some way?
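    For context, the cross-frame part of this setup is a single assignment on each page; the open question is only whether it interferes with the GA snippet. Roughly, it looks like the sketch below. The page names are made up, and the ga.js lines are the commonly used classic-tracker calls, shown only as comments rather than asserted as the required configuration.

```typescript
// On a page served from www.example.com that embeds the iframe:
document.domain = "example.com";   // relax the origin to the registrable domain

// On the page served from forms.example.com that is loaded inside the iframe:
document.domain = "example.com";   // must be set on BOTH sides for cross-frame scripting

// Classic ga.js tracking on www.example.com. When frames/subdomains share a domain,
// the tracker's cookie domain is usually set to the same value:
//   _gaq.push(['_setAccount', 'UA-XXXXXX-X']);
//   _gaq.push(['_setDomainName', 'example.com']);
//   _gaq.push(['_trackPageview']);
```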

    Read the article

  • Backing up a dedicated server running Ubuntu 10.04 and Plesk 11.01 prior to updating the OS to Ubuntu 12.04

    - by timmob
    I would like to back up my dedicated server, which is my web server hosting various sites and email, so that I can update the OS to Ubuntu 12.04 and basically restore back to 10.04 if things go wrong. I have a local machine that I can install 12.04 onto, and then I was going to rsync between the two, but I am fairly clueless when it comes to Linux. I can SSH into the remote server and gain root access. Can anyone explain if I need to back up the whole server hard drive or just some of the files? Thanks, Timmo.

    Read the article

  • BleachBit: How to Completely Clear URL History in Firefox?

    - by tSquirrel
    14.04 / Firefox 29.0. I've been using BleachBit to clear usage/file history, and for the most part it works great. However, it doesn't seem to clear the website hostnames out of the URL bar at all. These addresses are not bookmarked. Also, the full URL isn't preserved, just the hostname.
    1. Visit http://www.bluesnews.com/some_random_URL_string
    2. Exit Firefox
    3. Run BleachBit with ALL Firefox options selected
    4. Restart Firefox
    5. Check history: completely empty, other than bookmarked sites (www.bluesnews is NOT bookmarked)
    6. Type "blue", which Firefox automatically completes as "http://www.bluesnews.com/"
    Alternate step #3: use Firefox's built-in "Clear History" and select ALL entries with a time frame of "Everything". Same result as above.
    My inquiry in the BleachBit forums hasn't been responded to. I found Dan's proposed solution; however, changing autocomplete in about:config only turns off the function, it doesn't actually stop storing URLs.

    Read the article

  • Naming interfaces for persistent values

    - by orip
    I have 2 distinct types of persistent values that I'm having trouble naming well. They're defined with the following Java-esque structure, borrowing Guava's Optional for the example and using generic names to avoid anchoring:
        interface Foo<T> {
            T get();
            void set(T value);
        }
        interface Bar<T> {
            Optional<T> get();
            void set(T value);
        }
    With Foo, if the value hasn't been set explicitly then there's some default value available or pre-set. With Bar, if the value hasn't been set explicitly then there's a distinct "no value" state. I'm trying to optimize the names for their call sites. For example, someone using Foo may not care whether there's a default value involved, only that they're guaranteed to always have a value. How would you go about naming these interfaces?

    Read the article

  • Website (X)HTML Code Change Detection [closed]

    - by 0pt1m1z3
    I am looking for an enterprise-grade service or tool that can be used to scan/fingerprint websites and send notifications when major XHTML code changes are detected. The tool should be able to continuously scan thousands of websites, determine the percentage of HTML code that has been modified since the last run, and then either save the data where it can be easily accessed or send periodic notifications. I know of services like ChangeDetect.com, but they don't detect markup-only changes and instead focus on everything, including content. We don't really care about the content itself, because a lot of the sites we need to cover are updated frequently with new content.
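    No particular product is assumed here, but the core of such a tool is small: reduce each page to its tag structure, then measure how much of that structure changed between two crawls. A rough TypeScript sketch of that fingerprint-and-compare step follows; the similarity metric and the 20% threshold are arbitrary illustrative choices, not from the original post.

```typescript
// Reduce a page to its markup skeleton: tag names plus attribute names,
// ignoring text content so copy updates alone don't register as changes.
function markupSkeleton(html: string): string[] {
  const tags = html.match(/<[^>]+>/g) ?? [];
  return tags.map(tag =>
    tag
      .replace(/=\s*("[^"]*"|'[^']*'|[^\s>]+)/g, "") // drop attribute values, keep attribute names
      .replace(/\s+/g, " ")
      .toLowerCase()
      .trim()
  );
}

// Percentage of the tag structure that differs between two crawls, using a
// simple multiset overlap; a real tool would likely use a proper diff/LCS.
function changePercent(oldHtml: string, newHtml: string): number {
  const a = markupSkeleton(oldHtml);
  const b = markupSkeleton(newHtml);
  const counts = new Map<string, number>();
  for (const t of a) counts.set(t, (counts.get(t) ?? 0) + 1);
  let shared = 0;
  for (const t of b) {
    const c = counts.get(t) ?? 0;
    if (c > 0) {
      shared++;
      counts.set(t, c - 1);
    }
  }
  const total = Math.max(a.length, b.length) || 1;
  return (1 - shared / total) * 100;
}

// Example: alert only when more than 20% of the markup skeleton changed.
const pct = changePercent("<div class='a'>old copy</div>", "<div class='a'>new copy</div>");
console.log(pct > 20 ? "major markup change" : "markup unchanged (content-only edit)");
```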

    Read the article

  • Photos - do I really need to look for the author and ask his permission when posting them on my site?

    - by user6456
    When I find a photo somewhere on the internet, without any explicit information about whether I can re-publish it on my own website and without any hint of who the owner/author of that photo is, can I still do it? I'm puzzled here because I've seen what seems like millions of websites, often very big ones, that repost photos, most probably found via Google, and it's VERY unlikely they bothered to look for and contact the authors of those photos. Is every one of those sites likely to be sued at any moment? What about the case of forums and content provided by users - there is virtually no way of prevention there.

    Read the article

  • Kaspersky discovers a "fileless" virus that installs itself in RAM and disables Windows User Account Control

    Following indications from an independent researcher who "wishes to remain anonymous", the Kaspersky lab has discovered a new virus. It was spreading through ads served by the Russian advertising network AdFox.ru, which appear on popular news sites. What makes this virus unusual is how it operates. While the inclusion of an iFrame pointing to a site containing malicious code (hosted on a .eu site) is classic, the exploitation strategy it uses is extremely rare. The virus exploits a critical vulnerability in the Java virtual machine (CVE-2011-3544, for which a patch has been available for six months); but unlike ...

    Read the article

  • Microsoft about to move into enterprise social networking for a billion dollars, according to Bloomberg

    Microsoft could spend 1 billion dollars to move into professional, enterprise social networking, according to Bloomberg. Social networks are a bit like the cloud or websites: there are all kinds of them. The label covers almost everything and its opposite: public or private tools, for leisure or for professionals. Among these networks we know Facebook, LinkedIn and Viadeo, or Google's second attempt to break into this market (Google+). Less well known are the players in "private networks", such as Atlassian, which publishes Confluence (a sort of Facebook for ...

    Read the article

  • How Web Optimization Services Work to Increase Your Online Reputation

    SEO stands for search engine optimization, and it is the key to success for a business online. No site carries much weight if it is not properly promoted. Whenever a web surfer is looking for a particular product, service, or piece of information, the easiest method is to search through a search engine, and it is the habit of many people to look only at the top five or six results for what they want. Nobody has time to look through 100 pages of search engine results, and there is no need to when they find what they are looking for on the first pages.

    Read the article

  • How to report abuse to website hosting company (GoDaddy)

    - by lgratian
    I'm not sure if this is the right place to ask such a question... Let's say that a website posted a picture of me without my consent, and I want it removed (it's something private and could compromise my career if it's seen by someone who shouldn't see it). I sent them an email asking nicely that they remove it, but they didn't respond and the picture is still there. Using 'Whois' I found that the website is hosted by GoDaddy. Is there a way (an email address, for example) to report to GoDaddy that one of the sites they're hosting is doing something illegal and to force them to remove the photo? I searched their site and found nothing about such a thing. Thanks in advance!

    Read the article

  • Domain in PENDINGDELETE, question about drop

    - by kcdwayne
    A domain I want is in the pendingDelete stage according to WHOIS. I have been monitoring it since the redemptionPeriod, and it entered pendingDelete 5 days ago today. A few services I checked (SnapNames, etc.) report that it is scheduled to drop on the 11th (7 days from now, by my calculations). I'm not quite sure what to believe. The domain isn't highly valuable, except to me and one other company. I can see no backorders placed on the big-name sites, so I'm thinking of trying to get it without a backorder service. Any insight as to when it will actually drop? I've read 11AM-2PM PST, but I'm unsure. Thanks.

    Read the article

  • How does delicious.com avoid being sued for copyright infringement?

    - by Stanish
    With the recent redesign of delicious.com, they've added a much more graphical home page. The site continues to be a service for people to bookmark and share websites they come across on the web. The delicious home page is now made up of images taken from those linked sites. See for yourself at http://delicious.com. I would like to know what in the law allows them to do this, considering that the images represent the main content of the page and they clearly do not own the copyright to those images. I know there is some leeway given to search engines, where it is considered fair use to use a small portion of the content if the aim is to lead people to the originating site. Does that apply here?

    Read the article

  • Are they asking too much for this job?

    - by user58404
    I am looking for a web developer job and this job description caught my eye. I am not sure how much they offer, but I was wondering if anyone here meets all of their requirements? To me, that's a lot of knowledge.
    - 2 to 4+ years experience building web sites and applications in a professional environment
    - Strong working knowledge of HTML5 and CSS3
    - Strong working knowledge of JavaScript, jQuery, AJAX
    - Working knowledge of Ruby on Rails or similar MVC framework
    - Working knowledge of ExpressionEngine, Wordpress or similar CMS
    - Experience administering a LAMP-based server
    - Experience with cross-platform and cross-browser website testing
    - Comfortable working with version control (preferably Git)
    - Proficient with Adobe Photoshop, Illustrator, and Fireworks
    - Comfortable working on a Mac
    - Self-starter with excellent time-management skills and the ability to meet challenging deadlines
    - Ability to work independently with minimal supervision
    - Desire to work on a small team
    Bonus skills:
    - Experience deploying to Heroku or similar PaaS provider
    - Experience developing Facebook applications
    - A strong sense of design
    - Cool open source projects (send us your Github account!)
    - Advanced working knowledge of server administration and website deployment
    - Java and/or .NET experience

    Read the article

  • How to pass information across domains so the newsletter signup is asked for only once?

    - by Michal Stefanow
    Let's assume the following scenario. I have two sites:
    example1.com
    example2.com
    When a user visits 1, there is a prompt: "please sign up to our newsletter". The same thing happens when the user visits 2. However, when navigating from 1 to 2, I don't want the signup form to be shown. My first thought was 3rd-party cookies, but it seems that they are blocked / not working:
    http://stackoverflow.com/questions/4701922/how-does-facebook-set-cross-domain-cookies-for-iframes-on-canvas-pages?rq=1
    http://stackoverflow.com/questions/172223/how-do-i-set-cookies-from-outside-domains-inside-iframes-in-safari?rq=1
    Another thought is to append #noshow to each URL, but that would require some work - for instance, a script that would intercept click / tap events and modify the URL structure depending on the address (but that seems hacky). I wonder if you know a robust, well-established solution to this issue? Thanks
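    Since third-party cookies are unreliable here, one workable pattern is close to the #noshow idea: when a link leaves one site for its sister site, tag the URL with a flag; on arrival, the destination reads the flag and remembers it in its own first-party storage so the prompt also stays hidden on later visits. The TypeScript sketch below is only one way to do this, not a drop-in solution; the parameter name and storage key are made-up placeholders.

```typescript
const SISTER_SITES = ["example1.com", "example2.com"]; // the two domains from the question
const FLAG_PARAM = "from_sister";                      // hypothetical query parameter
const STORAGE_KEY = "newsletterPromptSeen";            // first-party localStorage key

// Tag outbound links to the sister site once this visitor has seen the prompt,
// so the destination can skip it without any cross-domain cookie.
document.addEventListener("click", (event) => {
  const target = event.target as Element | null;
  const link = target?.closest("a");
  if (!link) return;
  const url = new URL(link.href, location.href);
  const isSister =
    SISTER_SITES.some(d => url.hostname.endsWith(d)) &&
    url.hostname !== location.hostname;
  if (isSister && localStorage.getItem(STORAGE_KEY) === "1") {
    url.searchParams.set(FLAG_PARAM, "1");
    link.href = url.toString();
  }
});

// On page load: honour the incoming flag, otherwise fall back to this site's own memory.
function shouldShowNewsletterPrompt(): boolean {
  const params = new URLSearchParams(location.search);
  if (params.get(FLAG_PARAM) === "1") {
    localStorage.setItem(STORAGE_KEY, "1"); // remember locally so later direct visits also skip it
    return false;
  }
  return localStorage.getItem(STORAGE_KEY) !== "1";
}

if (shouldShowNewsletterPrompt()) {
  // ...render the signup prompt here...
  localStorage.setItem(STORAGE_KEY, "1"); // mark as seen once it has been shown
}
```

    Using a query parameter rather than a #fragment keeps the flag visible to server-side code too, at the cost of slightly uglier URLs; either carrier works with the same store-on-arrival idea.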

    Read the article

  • What payment processors fulfill these requirements, and what are their pluses and minuses? [on hold]

    - by Sharen Eayrs
    - Accept credit cards
    - Allow me to automatically credit a customer's account. We're not selling an e-book; we're selling a credit to an account on our site, so it's important that customers do not get credited twice.
    - Easy to program and integrate with our sites
    - Have affiliate programs
    - Transfer money to bank accounts quickly
    - Accept merchants from many countries
    - No monthly fee is a plus
    I am thinking of using 2co.com, avangate.com, or clickbank.com. Some people recommend https://stripe.com/us/features (is it easy to implement?) and http://www.paymentwall.com/. There are so many payment processors that I am very confused. Can anyone tell me the pluses and minuses of those ones, or perhaps others? What would be the pluses and minuses of the three you know of?

    Read the article
