Search Results

Search found 25629 results on 1026 pages for 'site maintenance'.

Page 213/1026 | < Previous Page | 209 210 211 212 213 214 215 216 217 218 219 220  | Next Page >

  • You Can Manage Your Own Website

    It is necessary to manage your website properly so that visitors are attracted to your site. Website management and maintenance cover the range of activities performed to ensure that your website stays current and up to date. A website reflects the internal, day-to-day activities of the online business behind it.

    Read the article

  • How do spambots work?

    - by rlb.usa
    I have a forum that's getting hit a lot by forum spambots, and of course the best way to defeat something is to know thy enemy. I'll worry about defeating those spambots later; right now I'd like to know more about them. Reading around, I was surprised by the lack of thorough information on the subject (or perhaps by my ineptness at entering the right search terms for better Google results). I'm interested in learning all about spambots. I've asked on other forums and gotten brush-off answers like "Spambots are always users registering on your site."

    How do forum spambots work? How do they find the 'new user registration' page? (I'm especially surprised because some forums don't have a dedicated URL for this, e.g. www.forum.com/register.html, but instead use query strings or other methods invisible in the URL bar.) How do they know what to enter into each 'new user registration' field? How do they determine which pages they can spam or enter data into, and which they can't? Do they even 'view' the page at all? If not, then I'd assume they're communicating with the server directly. How is this possible? How do they do it?

    Can forum spambots break CAPTCHAs? Can they solve logic questions (how?)? Math questions? Do they reverse-engineer client-side anti-bot validation scripts? Server-side scripts? What techniques are still valid for preventing them?

    Where do spambots come from? Is someone sitting behind the computer, snickering as they watch their bot destroy site after site? Or are they snickering as they simply 'release' it onto the internet somehow? Are spambots run by infected computers somewhere? Do they replicate themselves? And so on.
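
    As an aside on the "communicating with the server directly" question above: that part is plain HTTP. A bot never needs to render or 'view' a page; it can replay the registration form's POST request with a scripted client. A minimal sketch follows (the URL and field names are invented for illustration, not taken from any real forum):

        # Hypothetical illustration of what a spambot's registration POST boils down to.
        # The target URL and the form field names are assumptions for this sketch.
        curl 'http://forum.example.com/register.php' \
             -A 'Mozilla/5.0' \
             --data 'username=cheapwatches42' \
             --data 'email=bot@example.net' \
             --data 'password=hunter2' \
             --data 'submit=Register'

    Registration pages without a dedicated URL are discovered the same way: the bot parses each fetched page for <form> tags and posts to whatever action attribute it finds, query strings included.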

    Read the article

  • How to properly document functionality in an agile project?

    - by RoboShop
    So recently, we've just finished the first phase of our project. We used agile with fortnightly sprints, and whilst the application turned out well, we're now turning our eyes to some of the maintenance tasks. One such task is that all of our documentation appears in the form of specs. These specs describe one or more stories and are generally a body of work that a few devs could knock over in a week. For development, that works really well: every two weeks the devs get handed a spec, and it's a nice discrete chunk of work that they can just do.

    From a documentation point of view, this has become a mess. The problem with writing specs focused on delivering just-in-time requirements to developers is that we haven't placed much emphasis on the big picture. Specs come from all different angles: one might describe a standard function, another parts of a workflow, another a particular screen. And now we have business rules about our application scattered across 120 documents. Finding the document that covers a particular business rule or function is quite hard, because you don't know which document holds that information, and making a change request is equally hard because, once again, we are unsure which spec to change.

    So we have maybe a couple of weeks of lull before it's back to speccing out functionality for the next phase, but in this time I'd like to revisit our processes. I think the way we have worked so far, delivering fortnightly specs, works well. But we also need a way to manage our documentation so that the business rules for a given function or workflow are easy to locate and change. I have two ideas.

    One is that we compile all of our specs into a series of master specs, broken up by a few broad functional areas. The specs describe the sprint; the master specs describe the system. The problems I can see are: 1) our existing 120 specs are not all neatly divided into broad functional areas, so some will require breaking up, merging, etc., which will take a lot of time; and 2) we'll be writing specs and updating master specs in each new sprint, which seems like double the work, and then do the devs look at the spec or the master spec?

    My other suggestion is to concede that our documentation is too big a mess, and to manage that mess going forward. We go through each spec and assign keywords to it, and when we want to find a function we search for the keyword. The problem I can see: business rules are still scattered everywhere, keywords just make them easier to find.

    Anyway, if anyone has any decent ideas or experience to share about how best to manage documentation, I would really appreciate it.

    Read the article

  • Jail user to home directory while still allowing permission to create and delete files/folders

    - by Sevenupcan
    I'm trying to give a client SFTP access to the root directory of their site on my server (Ubuntu 10.10) so they can manage their website themselves. While I have been successful in jailing a user to a directory and giving them SFTP access, they are only allowed to create and delete new files in subdirectories (the directories they own). This means that I must give them access to the parent of the directory that is the root of their site. How can I limit them to the root of their site (for example public_html) while still allowing them the ability to create and delete files? All the tutorials I have read suggest that root must be the owner of the user's home directory, which prevents them from writing inside that directory. I'm relatively new to managing my own server, so I'd be very grateful for any advice. Many thanks.
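
    A sketch of one common way to square this circle (not a tested recipe; the username and paths are placeholders): chroot the user to a root-owned directory one level above the site, and give them a writable site directory inside it. With OpenSSH's built-in SFTP server this looks roughly like:

        # /etc/ssh/sshd_config -- jail one user with the built-in SFTP server.
        # The ChrootDirectory must be owned by root and not writable by the user.
        Match User clientuser
            ChrootDirectory /home/clientuser
            ForceCommand internal-sftp
            AllowTcpForwarding no

        # one-time setup: root owns the jail, the client owns the site root inside it
        chown root:root /home/clientuser
        chmod 755 /home/clientuser
        mkdir -p /home/clientuser/public_html
        chown clientuser:clientuser /home/clientuser/public_html

    The user lands in a directory they cannot modify, but has full create/delete rights inside public_html, so the tutorials' root-ownership requirement is satisfied without blocking their work.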

    Read the article

  • Securing internal data accessed by a website on the big, bad internet

    - by aehiilrs
    A close relative of this question on Stack Overflow: when you have a web site in your DMZ that needs to access production data stored on an internal DB, what strategies do you recommend to lower the risks that come from accessing live data? Is it even considered acceptable to have a connection initiated from the DMZ come inside your network? An extra detail about the nature of the site that throws a monkey wrench into the machinery: people using the web site will be competing for "spots" on a first-come, first-served basis with others using the internal software. Because of this, as close to zero lag between the two applications as possible is ideal.

    Read the article

  • No Significant Fragmentation? Look Closer…

    If you rely on 'best-practice' percentage-based thresholds when creating an index maintenance plan that checks page fragmentation on a SQL Server, you may miss occasional 'edge' conditions on larger tables that cause severe performance degradation. It is worth being aware of the data-access patterns of particular tables when judging the best threshold figure to use.
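
    For concreteness, here is a rough sketch of the kind of fragmentation check such a maintenance plan performs (the threshold is illustrative, not a recommendation):

        -- Fragmentation per index in the current database.
        -- page_count matters: a percentage alone is misleading on small or
        -- unevenly accessed tables, which is exactly the 'edge' condition above.
        SELECT  OBJECT_NAME(ips.object_id)        AS table_name,
                i.name                            AS index_name,
                ips.avg_fragmentation_in_percent,
                ips.page_count
        FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
        JOIN    sys.indexes AS i
          ON    i.object_id = ips.object_id AND i.index_id = ips.index_id
        WHERE   ips.avg_fragmentation_in_percent > 10;  -- illustrative threshold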

    Read the article

  • Default document not working after installing SP1 on Windows 2008 R2 x64

    - by boredgeek
    We have a web site that should be available only to authorized users, so we deny anonymous access to the site. However, we do allow anonymous access to the default page and the login page. When we installed SP1, the behavior of the server changed: now if a user tries to access the root of the site, say http://mysite.com, she is redirected to the login page rather than the default page. Is there a hotfix to bring back the previous behavior?

    Read the article

  • Why is the Ubuntu App Developer website not showing content about desktop development?

    - by Zignd
    It looks like they removed all the content related to desktop development. For example, when you click on the "Get Started" tab there is only information about Ubuntu Touch and its SDK, and when you click on the "Resources" tab and then on "Programming languages" you only see C++, JavaScript and QML (no Python, Java, Mono, etc.). You also can't find any information about Quickly: try clicking on "Quickly" under "Resources" at the bottom of the website and you will see a "Page not found" error. Is the site under maintenance, or is it something else?

    Read the article

  • How to auto-update a website mirror with exceptions to certain pages?

    - by tomatosalad
    I'm currently mirroring a website on my server. The site itself is rarely updated, but it is updated often enough that info can become outdated quickly. I mirrored it first with wget, and this worked fine, but I made some changes: the original index.html used frames, but the site also provides a main.html, which is essentially index.html without frames, so I deleted index.html and renamed main.html. I did not want to mirror the webchat, blog or forum, so I deleted those files and directories, made directories "blogs", "forum" and "chat", and placed a PHP redirect in each of them, redirecting visitors to the original site. I'd like to auto-update the mirror (maybe once every 24-72 hours) but preserve the changes I made. Is this possible? How would I go about doing it? I am completely clueless as to how. Thanks for any and all help! :)
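
    One hedged way to script this (the URL, paths, excluded directory names and schedule are all illustrative; test against a copy first): re-mirror on a cron schedule with the unwanted sections excluded, then re-apply the local changes after each pass:

        #!/bin/sh
        # refresh-mirror.sh -- assumes the mirror lives in /var/www/mirror
        wget --mirror --no-host-directories \
             --exclude-directories=/blog,/forum,/chat \
             --directory-prefix=/var/www/mirror \
             http://original-site.example.com/

        # re-apply the local customisations: the frameless main page as the index
        rm -f /var/www/mirror/index.html
        cp /var/www/mirror/main.html /var/www/mirror/index.html

        # crontab entry, every second day at 03:00:
        # 0 3 */2 * * /usr/local/bin/refresh-mirror.sh

    The PHP redirect stubs in blogs/, forum/ and chat/ survive each run because those paths are excluded from the fetch.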

    Read the article

  • Looking for a wireless product

    - by Belinda
    Can anyone suggest some options for me? I would like to provide our maintenance department with a view-only monitor showing our common Outlook calendars. I'm guessing it would have to have some touch-screen capability, since they would need to be able to toggle between several calendars. We do have a cobbled-together, building-wide wireless network. I don't want anything that needs to be wired or cabled; it should just plug into a power source and mount on the wall.

    Read the article

  • Stop Apache from asking for the SSL password on each restart

    - by acidzombie24
    Using instructions from this site, but varying them just a little (note: I run /full/path/CA.sh -cmd and not sh CA.sh -cmd), I created a CA using -newca, copied cacert.pem to my machine and imported it as a trusted issuer in IE. I then did -newreq and -sign and moved the cert and key to Apache. I visited the site in IE and from .NET code, and it appears trusted, great (unless I write www. in front, which is expected). But every time I restart Apache I need to type in my password for the site(s?). How can I make it so I do NOT need to type in the password?
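
    The prompt appears because the private key is stored encrypted, and Apache has to decrypt it at every (re)start. The usual fix, sketched below (the file names are whatever your SSLCertificateKeyFile directive points at), is to strip the passphrase from the key and rely on file permissions instead:

        # write an unencrypted copy of the key (asks for the passphrase one last time)
        openssl rsa -in server.key -out server.key.insecure

        # lock it down and swap it in
        chmod 600 server.key.insecure
        mv server.key server.key.orig
        mv server.key.insecure server.key

    Alternatively, mod_ssl's SSLPassPhraseDialog directive can point at a program that prints the passphrase, but an unencrypted key readable only by root is the common route.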

    Read the article

  • PHP hosting: some info required [closed]

    - by mtk
    I have recently been given control of newly bought hosting space and the domain account. There is a technical team from the hosting site to help out with problems, but that is a long process: log a ticket, wait a long time, and I don't get the correct answer on the first shot. I was wondering if anyone has a helpful guide on how one should go about hosting a site. Any info that one must know with respect to cPanel? Any other useful material, if anyone has some or could point me to it? Just to give a few of the difficulties: the same PHP code that works well on my local machine gives a "File not found" error on the remote server, even though the file is indeed present, as I have FTP'ed all the files across correctly; session_start() errors are output to the HTML page with the warning "Headers already sent"; and many more technical things work well locally but not on the actual hosting server. So if anyone has any helpful material on this, as to what changes are required or what a programmer must be aware of from a hosting perspective, please let me know. Note: I am hosting a PHP site with a MySQL DB in a shared environment.
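
    The "Headers already sent" symptom in particular has a well-known cause that shows up differently across servers: session_start() sends a cookie header, so it must run before any byte of output, and hosts with output_buffering enabled in php.ini can mask the bug locally. A minimal sketch:

        <?php
        // session_start() must execute before ANY output reaches the browser.
        // Even a blank line or a BOM before this opening tag triggers the
        // "headers already sent" warning on servers without output buffering.
        session_start();
        ?>
        <!DOCTYPE html>
        <html><body>Session id: <?php echo htmlspecialchars(session_id()); ?></body></html>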

    Read the article

  • Serve up syntactic XHTML5 using the text/html MIME type?

    - by cboettig
    I have a site currently written with HTML5 tags. I'd like to be able to parse the site as XML, with support for namespaces, etc., to facilitate programmatic extraction of data. Currently I have <!DOCTYPE html> and <meta charset="utf-8">, which I gather is equivalent in HTML5 to explicitly setting the content type as <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> for my current setup. In order to serve XML, it sounds like the right thing to do is <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml">. Should I also change my content type to <meta http-equiv="content-type" content="application/xhtml+xml; charset=iso-8859-1" />, or is that not necessary? What is the advantage of having the content type be "application/xhtml+xml"? What is the disadvantage? (It sounds like it may break Internet Explorer's rendering of the site, but maybe that information is out of date now?) Many thanks!
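
    One point worth separating out: a meta http-equiv element cannot change the real content type; only the HTTP Content-Type header can, and sending application/xhtml+xml unconditionally does break older Internet Explorer versions (IE8 and earlier offer the page as a download). A common hedge, sketched here in PHP purely for illustration, is to negotiate on the request's Accept header:

        <?php
        // Serve XHTML only to clients that explicitly accept it (a sketch).
        $accept = isset($_SERVER['HTTP_ACCEPT']) ? $_SERVER['HTTP_ACCEPT'] : '';
        if (strpos($accept, 'application/xhtml+xml') !== false) {
            header('Content-Type: application/xhtml+xml; charset=utf-8');
        } else {
            header('Content-Type: text/html; charset=utf-8');
        }
        ?>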

    Read the article

  • Slow self hosted wordpress website

    - by Integrati Marketing
    Hi all, we have a great site which had been humming along nicely for about five months, and then in May it went from a page load time of 3-5 seconds to an agonising 15+ seconds! The host has been really helpful and has even shifted the site to a new, faster server. Since we don't have the insight or the expertise ourselves, we thought we would ask the Server Fault community and see what this crowd of experts could recommend. We appreciate any insight, thank you. The site is here: integrati.com.au Cheers. :)

    Read the article

  • Issues With IIS Hosting Two Domains From Same Folder

    - by Bob Mc
    I have two different domain names that resolve to the same ASP.NET site. Both domains are hosted on the same server, which runs Windows Server 2003 and IIS 6. The sites are differentiated in IIS Manager using host headers. However, both sites point to the same folder on the local drive for the site's page files. I am occasionally experiencing an ASP.NET error that says "The state information is invalid for this page and might be corrupted." I'm the site developer, so I've addressed all the relevant code-related causes for this issue. However, I was wondering whether having two domains/sites sharing the same folder for an ASP.NET application might be causing this intermittent error. Also, is this generally a bad practice? Should I make separate, duplicate folders for each of the domains? That seems like it could become a maintenance headache.
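
    One frequently suggested mitigation, on the assumption (and it is only an assumption) that the error stems from view-state validation, is to pin an explicit machineKey in the shared web.config. With the default auto-generated keys, each IIS application derives its own key, so the two host-header sites will not accept each other's view state; a fixed key makes them interchangeable:

        <configuration>
          <system.web>
            <!-- Placeholder values shown; generate your own random keys and
                 never copy keys from examples found online. -->
            <machineKey validationKey="[128 hex characters]"
                        decryptionKey="[48 hex characters]"
                        validation="SHA1"
                        decryption="AES" />
          </system.web>
        </configuration>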

    Read the article

  • How do I reduce RAM usage on my server?

    - by Abs
    I have recently launched a site that is very popular, but I am having trouble with scalability. My site makes heavy use of FFmpeg, and at peak times RAM usage quickly hits the 2 GB mark and the swap file starts getting used. CPU usage starts rising too. Users complain that the site is slow; this is because all the FFmpeg instances run very slowly due to the number running at the same time. Users make use of FFmpeg on my server in real time. Is there anything I can consider or do to reduce the load on the server and stop RAM usage from shooting up? Maybe there is something better than FFmpeg(!). Or is the only solution to throw some cash at a more powerful server? I have given little information; please ask for more so this problem can be solved.
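
    One direction worth sketching (not a drop-in fix; the instance cap and nice level are guesses to tune): bound the number of FFmpeg processes that can run at once, so each finishes quickly instead of all of them thrashing in swap together. A crude wrapper:

        #!/bin/bash
        # ffmpeg-queued: start ffmpeg only when fewer than MAX copies are running.
        MAX=4
        while [ "$(pgrep -x ffmpeg | wc -l)" -ge "$MAX" ]; do
            sleep 1   # wait for a free slot; coarse, but keeps RAM bounded
        done
        exec nice -n 10 ffmpeg "$@"

    The check-then-start is racy under bursts of requests; a real job queue with a fixed pool of workers is the robust version of the same idea, at the cost of users waiting briefly for a slot instead of everything slowing down at once.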

    Read the article

  • Certain websites do not open. Instead a 1x1 image gets displayed

    - by Ranjith - SR2GF
    When I try to visit certain websites, like www.bidvertiser.com or www.buysellads.com, a white page shows up and the title bar displays the site name followed by (1x1) in brackets. When I right-click, the 'View Source' option appears disabled, and the 'Save As...' option shows the file type to be GIF. However, when I preview the site in Google search results (by moving the mouse over it), the screenshot of the site displays fine. This happens in all three browsers on my computer: Chrome, Firefox and IE. What is the problem, and how can it be resolved? EDIT: At some point in time these sites probably did work on my computer! I think it is a more general problem; the same thing happens when I click on certain links in Google search results.

    Read the article

  • Bordeaux 2.0.4 for Linux Released

    Wine-Reviews: "The Bordeaux Technology Group released Bordeaux 2.0.4 for Linux today. Bordeaux 2.0.4 is a maintenance release that fixes a number of small bugs. With this release we have changed the Bordeaux UI from a GTKDialog to a GTKWindow, and the "OK" button has been renamed to "Install"."

    Read the article

  • What constitutes "commercial purposes"?

    - by RoboShop
    I'm looking at this license. It says that I can use it for "non-commercial purposes". What does that mean? I see that on Stack Exchange, under the network profile, there is a graph that tracks your points across your Stack Exchange accounts. It uses a control called HighCharts, which has a paid version and a Creative Commons-licensed version. So would Stack Overflow constitute a commercial site? We don't pay to use the site, but obviously the site makes money from ads, etc. Then again, there are a lot of sites with ads that won't necessarily make a profit; the ads may only be subsidising their costs. But even then, you could argue that even if the ads only subsidise costs, a lot of IT companies run at a loss in order to build a big enough customer base. So where is the line here? Is it any website on the internet? Any website that has ads? Any website that turns a profit?

    Read the article

  • Making a more reliable and flexible sp_MSforeachdb

    While the system procedure sp_MSforeachdb is neither documented nor officially supported, most SQL Server professionals have used it at one time or another. This is typically for ad hoc maintenance tasks, but many people (myself included) have used this type of looping activity in permanent routines. Sadly, I have discovered instances where, under heavy load and/or with a large number of databases, the procedure can actually skip multiple catalogs with no error or warning message. Since this situation is not easily reproducible, and since Microsoft typically has no interest in fixing unsupported objects, this may be happening in your environment right now.
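
    The article goes on to present its own replacement procedure; as a rough illustration of the underlying idea only (this is not the author's code), a cursor over sys.databases with dynamic SQL avoids depending on the undocumented procedure at all. The ONLINE filter is a simplifying assumption:

        -- Minimal sketch: run a command in every online database without sp_MSforeachdb.
        DECLARE @db sysname, @sql nvarchar(max);

        DECLARE dbs CURSOR LOCAL FAST_FORWARD FOR
            SELECT name FROM sys.databases
            WHERE state_desc = N'ONLINE';    -- skip offline/restoring catalogs

        OPEN dbs;
        FETCH NEXT FROM dbs INTO @db;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            -- QUOTENAME guards against unusual database names.
            SET @sql = N'USE ' + QUOTENAME(@db) + N'; SELECT DB_NAME() AS current_db;';
            EXEC sys.sp_executesql @sql;
            FETCH NEXT FROM dbs INTO @db;
        END

        CLOSE dbs;
        DEALLOCATE dbs;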

    Read the article

  • Sites now not responding on port 80 [closed]

    - by JohnMerlino
    Possible Duplicate: unable to connect site to different port

    I was trying to resolve an issue with getting a site running on a different port (unable to connect site to different port), but somehow that took out all my other sites. Now even the ones that were responding on port 80 no longer respond, even though I did not touch their virtual hosts. I now get this message: "Oops! Google Chrome could not connect to mysite.com". However, ping responds:

        ping mysite.com
        PING mysite.com (64.135.12.134): 56 data bytes
        64 bytes from 64.135.12.134: icmp_seq=0 ttl=49 time=20.839 ms
        64 bytes from 64.135.12.134: icmp_seq=1 ttl=49 time=20.489 ms

    The result of telnet:

        $ telnet guarddoggps.com 80
        Trying 64.135.12.134...
        telnet: connect to address 64.135.12.134: Connection refused
        telnet: Unable to connect to remote host
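
    'Connection refused' while ping works usually means nothing is listening on port 80 at all, most often because Apache refused to start after the recent config change. A few hedged diagnostics (the log path assumes a Debian/Ubuntu-style layout):

        # does the edited config even parse? (this does not restart anything)
        apachectl configtest

        # is any process listening on port 80?
        sudo netstat -tlnp | grep ':80 '

        # if Apache is down, the error log usually says why it would not start
        sudo tail -n 50 /var/log/apache2/error.log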

    Read the article

  • Linux kernel 3.3 released: Android code integration, networking improvements, Btrfs and support for a new architecture

    Linux kernel 3.3 released: Android code integration, networking improvements, Btrfs and support for a new architecture. Linus Torvalds has just announced the availability of version 3.3 of the Linux kernel. Chief among the new features is the reintegration of portions of the Android kernel code. As a reminder, in 2009 the Android drivers had been dropped from the kernel because they were not being sufficiently maintained. The integration of Android will let developers use the Linux kernel to run an Android system, develop a single driver for both, and will reduce the maintenance costs of out-of-tree patches...

    Read the article

< Previous Page | 209 210 211 212 213 214 215 216 217 218 219 220  | Next Page >