Search Results

Search found 25629 results on 1026 pages for 'site maintenance'.


  • How can I set up a dual-site Storage Daemon in Bacula (mirror the backup)

    - by Andy
    On site A, I have successfully set up a Bacula Director on one host, several File Daemons on the hosts I want to back up, and finally one Storage Daemon where the backup is actually stored. In case disaster strikes the building at Site A, I want a second Storage Daemon at another site, Site B. The FileSets, Director, etc. would stay the same, except that jobs would be stored on the other Storage Daemon as well. Are there any best practices for this? (A rough configuration sketch of one approach follows below.)

    Read the article
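
    One possible direction, sketched as a fragment of bacula-dir.conf: define the Site B Storage Daemon as a second Storage resource and use a Copy job to duplicate finished jobs to it. All names, addresses and passwords below are hypothetical, and required job directives (Client, FileSet, Schedule) are omitted for brevity.

        # bacula-dir.conf (sketch) -- a second SD at Site B
        Storage {
          Name       = siteB-sd
          Address    = sd.siteb.example.com    # hypothetical address of the Site B SD
          SDPort     = 9103
          Password   = "secret"
          Device     = FileStorageB            # must match a Device in Site B's bacula-sd.conf
          Media Type = FileB
        }

        Pool {
          Name      = SiteB-Copy
          Pool Type = Backup
          Storage   = siteB-sd
        }

        # Copy jobs from the primary pool that have not been copied yet;
        # the source Pool's "Next Pool" directive should point at SiteB-Copy.
        Job {
          Name           = "CopyToSiteB"
          Type           = Copy
          Selection Type = PoolUncopiedJobs
          Pool           = Default             # source pool at Site A
          Messages       = Standard
        }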

  • Copy CakePHP site under IIS webroot?

    - by dan giz
    I've got a CakePHP application that uses an MS SQL Server database, running on Windows Server 2008 R2 Enterprise with IIS 7.5. My application is the only website running; it is installed into wwwroot, with CakePHP itself installed to c:\inetpub (one level up from the site). I want to copy this site so I can have a development version to work on without conflicting with the live site. How do I go about doing this? I'm confused, since previous setups I've seen have a different folder in wwwroot for each site. (A rough sketch of one approach follows below.)

    Read the article
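
    One way to approach it, assuming a second IIS site bound to its own host header; the site name, host name and paths below are hypothetical, and the dev copy should point at its own database in its CakePHP database config:

        REM Copy the live site and register a separate IIS site for it (elevated prompt)
        xcopy C:\inetpub\wwwroot C:\inetpub\dev-wwwroot /E /I
        %windir%\system32\inetsrv\appcmd add site /name:"DevSite" ^
            /bindings:http/*:80:dev.example.local /physicalPath:"C:\inetpub\dev-wwwroot"
        REM Point dev.example.local at the server (DNS or hosts file), then edit
        REM the database config in the dev copy to use a separate database.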

  • Why is my site not on Google? [closed]

    - by RD
    I wanted to post a link here, but some people might see that as advertising. So instead I'm going to phrase my question like this: what can I do to make sure my site appears on Google? I have already done the following: submitted my sitemap, added my site at www.google.com/addurl, added Analytics to my site, and checked Webmaster Tools for crawl errors. But still, after about three or four days, the crawler hasn't crawled my site. What am I missing?

    Read the article

  • Using local DNS and public DNS during site development

    - by ChrisFM
    I'm a web designer and I often develop new sites for existing businesses. Sometimes I find it useful to point my own computer's DNS at the development server's local DNS (instead of Google's 8.8.8.8 or the default ISP resolver). I like this because it lets me check that the new site's internal links, etc. work before switching the authoritative DNS over from the old site. However, outgoing links (like a Google map) will not resolve with my local DNS. The first thing I thought was: oh, I just need to fill my DNS directory with Google and every other domain I might need to link to... wait, that sounds insane. I was wondering if anyone could give me insight into a better or more functional way to use local DNS records when they're available and publicly supplied DNS when they are not? Kind of like a DNS 'roll-over'? Or maybe a completely different approach to development altogether. (A rough sketch of one option follows below.) Thanks in advance for your insight. All the best!

    Read the article
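
    One common way to get this behavior is a small local resolver such as dnsmasq, which answers authoritatively for the development names and forwards everything else to a public resolver. A minimal sketch, with the domain and address as placeholders:

        # /etc/dnsmasq.conf (sketch)
        # Answer locally for the site under development...
        address=/newclientsite.example.com/192.168.10.50
        # ...and forward every other query to a public resolver
        server=8.8.8.8
        no-resolv

    Point the workstation's DNS at the machine running dnsmasq and the 'roll-over' happens per query rather than per resolver.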

  • World Class Training For Them, an Amazon Gift Certificate For You

    - by Adam Machanic
    We have just two weeks to go before Paul Randal and Kimberly Tripp touch down in the Boston area to deliver their famous SQL Server Immersions course. This is going to be a truly fantastic SQL Server learning experience and we're hoping a few more people will join in the fun. This is where you come in: we have a few vacant seats remaining and we need your help spreading the word. Simply tell your friends and colleagues about the course and e-mail me (adam [at] bostonsqltraining [dot] com) the names...(read more)

    Read the article

  • So Your Laptop’s Fan Has Stopped Working Then? [Humorous Image]

    - by Asian Angel
    There is such a thing as dust build-up and then there are the odd cases of dust-ball evolution… What is the worst case of dust build-up that you have dealt with? Make sure to share your stories with your fellow readers in the comments! Help, my laptop’s fan is not working! [via Reddit Tech Support Gore]

    Read the article

  • Google Webmasters Tools strange 404 errors referred from same site

    - by Out of Control
    Starting about a month ago, I noticed a sudden increase in 404 errors in Webmaster Tools for one of my sites (over 1400 errors so far). All the errors are being referred from my own site to non-existent pages. The 404 URLs are all of the same format: URL: http://www.helloneighbour.com/save/1347208508000 The number on the end appears to be a timestamp followed by 3 zeros. The referring page in this case is: Linked from http://www.helloneighbour.com/save/cmw-insurance-insurance-burnaby When I look at the source code of that page, or use Webmaster Tools to view the page as Google sees it, I can't find any link that comes close to the one above. I built the site, and I can't find any place that might be generating these false links either. The server logs (access and error) don't show Google or anyone else trying to access these links. I've marked all these pages as fixed and waited a couple of weeks, only to find the errors come back again over the last few days. I'm wondering if anyone else has seen anything strange like this, or might have a way for me to debug or replicate this error myself. (A speculative illustration follows below.)

    Read the article
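
    One speculative observation: the trailing number is the size of a JavaScript millisecond timestamp, so a script on the page that builds URLs from Date.getTime() and gets executed by Googlebot could produce exactly this pattern. A purely hypothetical illustration, not taken from the site in question:

        // Hypothetical example only. If a widget or tracking script builds a link
        // like this and Googlebot executes it, Webmaster Tools would report
        // /save/<timestamp> URLs as 404s even though no such link is in the HTML.
        var trackingUrl = '/save/' + new Date().getTime();
        var a = document.createElement('a');
        a.href = trackingUrl;
        document.body.appendChild(a);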

  • Maintaining Revision Levels

    - by kyle.hatlestad
    A question that came up on an earlier blog post was how to limit the number of revisions on a piece of content. UCM does not inherently enforce any sort of limit on how many revisions you can have; it's unlimited. In some cases, there may be content that goes through lots of changes, but there simply isn't a need to keep all of its revisions around. Deleting those revisions through the content information screen can be very cumbersome, and going through the Repository Manager applet can take time as well to filter and find the revisions to get rid of. But there is an easier way, through the Archiver. The Export Query criteria in Archiver include a very handy field called 'Revision Rank'. Revision labels typically go up as new revisions come in (e.g. 1, 2, 3, 4, etc.), but you can't really use that field to tell it to keep the top 5 revisions, because those top 5 revision numbers are always going up. Revision rank goes the opposite direction: the very latest revision is always 0, the previous revision to that is 1, the one before that is 2, and so on and so forth. With revision rank, you can set your query to look for any Revision Rank greater than or equal to 5. As older revisions move down the line, their revision rank gets higher and higher until they reach that threshold. Then, when you run that archive export, you can choose to delete and remove those revisions. Running that export in Archiver is normally a manual process, but with Idc Command you can script the process and have it run automatically from the server. Idc Command is a utility that allows you to run any of the content server services via the command line. You basically feed it a text file with the services and parameters defined, along with the user to run it as. The Idc Command executable is located within the \bin\ directory: $ ./IdcCommand -f DeleteOlderRevisions.txt -u sysadmin -l delete_revisions.log In this example, our IdcCommand file to run the export and do the deletions would look like: IdcService=EXPORT_ARCHIVE aArchiveName=DeleteOlderRevisions aDoDelete=1 IDC_Name=idc dataSource=RevisionIDs <<EOD>> You can then use automated scheduling routines in the OS to run the command and command file at the frequency needed (a cron sketch follows below). Remember that you are deleting the revisions from within UCM, but they are still getting placed within the archive, so you will need to delete those batches to have them fully removed (or re-import if you need to recover them). For more information about Idc Command, see the Idc Command Reference Guide.

    Read the article
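
    Since the post leaves the scheduling to the OS, a minimal sketch of a cron entry that runs the export nightly; the installation path and OS user are hypothetical:

        # /etc/crontab (sketch) -- run the Archiver export every night at 02:30
        30 2 * * * oracle  cd /u01/ucm/server/bin && ./IdcCommand -f DeleteOlderRevisions.txt -u sysadmin -l delete_revisions.log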

  • Why is vCenter 5.1u1 exiting hosts from maintenance mode?

    - by Shane Madden
    This vCenter server was just upgraded to 5.1 update 1. I'm going through hosts and bringing firmware up to date, then upgrading them from various versions of 5.0 to 5.1u1. vCenter 5.1u1 seems to have an interesting new behavior: it removes hosts from maintenance mode when they reconnect after being disconnected, but very inconsistently; I've seen it maybe 4 or 5 times across ~25-30 host reboots. I've only seen it happen on 5.0 hosts that have not yet been upgraded to 5.1. In the image, I placed the host in maint mode and rebooted it into the HP SPP DVD's automatic update mode. After its usual ~40 minute update process, the host came back online... and 7 seconds before even logging that the host had reconnected, vCenter had sent the host a task to exit maintenance mode. In my understanding, the only time vCenter should drop a host out of maintenance mode is when vCenter put it into maintenance mode itself (such as a VUM upgrade task). Why would this vCenter be unilaterally exiting a host from user-initiated maintenance mode? (A PowerCLI workaround sketch follows below.) Edit (additional info): I ran the firmware upgrades on 5 more hosts, all at the same time. Two of them exited maint mode after reconnecting, three did not. The common factor of those exiting maint mode seems to be how long they were offline; the two that took a few tries to boot to the virtual media are the two that got knocked out of maint mode: esx31 (image above): 45 minutes unresponsive; esx19 (exited maint): 87 minutes unresponsive; esx24 (stayed in maint): 32 minutes unresponsive; esx29 (stayed in maint): 39 minutes unresponsive; esx32 (stayed in maint): 30 minutes unresponsive; esx34 (exited maint): 70 minutes unresponsive. Edit: The disconnect time idea seems to have been a red herring, as it's not happening consistently. Additionally, in the vpxd.log the exit maint mode task initiation seems to always immediately follow this vim.EnvironmentBrowser.queryProvisioningPolicy SOAP call. Here are the lines, slightly trimmed for clarity: 15:27:49.535 [info 'vpxdvpxdVmomi'] [ClientAdapterBase::InvokeOnSoap] Invoke done (esx31, vim.EnvironmentBrowser.queryProvisioningPolicy) 15:27:49.560 [info 'commonvpxLro'] [VpxLRO] -- BEGIN task -- esx31 -- HostSystem.exitMaintenanceMode -- Note that on the nodes that don't get the exit task, the vim.EnvironmentBrowser.queryProvisioningPolicy event still occurs. I'm not seeing any other differences in events before or after this in the reconnect process, aside from the extra events caused by exiting maintenance mode. Given the log's mention of provisioning policies, looking for autodeploy-related maintenance mode issues turns up complaints about similar behavior (though I'm not using autodeploy at all).

    Read the article
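
    Not an explanation for the behavior, but as a stop-gap the intended state can be checked and restored from PowerCLI after each reconnect; a minimal sketch, with the vCenter and host names as placeholders:

        # PowerCLI sketch -- put a host back into maintenance mode if vCenter pulled it out
        Connect-VIServer vcenter.example.com
        $esx = Get-VMHost -Name "esx31.example.com"
        if ($esx.ConnectionState -ne "Maintenance") {
            Set-VMHost -VMHost $esx -State Maintenance -Confirm:$false
        }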

  • IFMR Conference – Global Procurement & Supply Chain Management for the Oil & Gas Industry

    - by Pam Petropoulos
    Dates: June 9-11, 2014. Location: JW Marriott, Houston, TX. This 2nd Global Procurement and Supply Chain Management Conference for the Oil & Gas Industry will cover key market challenges including: supplier/operator relationships; benchmarking strategic procurement and category management; capacity overload vs. demand; new frontiers and new procurement strategies; and sustainability in procurement and supply chain. With a one-track focus, this is a highly intensive, content-driven event that includes case studies, presentations and panel discussions over two full days. Plan to attend the Oracle presentation on day one, and the Oracle panel discussion on day two. Oil & Gas experts will be available in the Oracle booth to answer questions. Click here to learn more and register.

    Read the article

  • MS Bing web crawler out of control causing our site to go down

    - by akaDanPaul
    Here is a weird one that I am not sure what to do about. Today our company's e-commerce site went down. I tailed the production log and saw that we were receiving a ton of requests from the IP range 157.55.98.0 - 157.55.100.0. I googled around and came to find out that it is the MSN web crawler. So essentially the MS web crawler overloaded our site, causing it not to respond, even though our robots.txt file contains the following: Crawl-delay: 10 What I did was ban the IP range in iptables. But what I am not sure about is how to follow up. I can't find anywhere to contact Bing about this issue, and I don't want to keep those IPs blocked, because I am sure we will eventually get de-indexed from Bing. It doesn't really seem like this has happened to anyone else before. Any suggestions? (A rate-limiting sketch follows below.) Update - my server / web stats: Our web server is using Nginx, Rails 3, and 5 Unicorn workers. We have 4 GB of memory and 2 virtual cores. We have been running this setup for over 9 months now and never had an issue; 95% of the time our system is under very little load. On average we receive 800,000 page views a month, and this never comes close to bringing down or slowing our web server. Looking at the logs, we were receiving anywhere from 5 up to 40 requests/second from this IP range. In all my years of web development I have never seen a crawler hit a website so many times. Is this new with Bing?

    Read the article
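
    Since the stack already runs Nginx, one less drastic option than an iptables ban is to rate-limit just the crawler at the proxy and let everything else through. A minimal sketch; the zone name, rate and burst are hypothetical values to tune:

        # nginx.conf (sketch) -- throttle bingbot/msnbot instead of dropping it
        http {
            map $http_user_agent $bing_limit_key {
                default      "";                    # empty key = not rate limited
                ~*bingbot    $binary_remote_addr;
                ~*msnbot     $binary_remote_addr;
            }
            limit_req_zone $bing_limit_key zone=bing:10m rate=1r/s;

            server {
                location / {
                    limit_req zone=bing burst=5;
                    # ... existing proxy_pass to the Unicorn upstream ...
                }
            }
        }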

  • SQL Server 2005 Default Backup Plan

    - by tylerl
    I noticed that a newly imported database on SQL Server 2005 had configured itself (without my knowledge) to perform daily backups, but it's not deleting old files and is quickly filling up the disk. I don't know how the backup job got configured (maybe that's something that gets transferred when you move a database?), but I'm having trouble modifying it. The backup runs as part of a SQL Server Agent job called "Daily Backups". This job runs a package called "(SSIS Packages)\Maintenance Plans\Backup Plan" -- which I can't find. The "Management\Maintenance Plans" area for my server is empty. I imagine I could delete the existing plan and re-create it manually, but I was hoping to just modify what was already there, since all that's missing is deleting old files. (A query sketch for tracking down the plan follows below.)

    Read the article
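
    A minimal sketch of where to look for the plan's metadata in msdb (these views exist in SQL Server 2005 SP2 and later; if the subplan query returns nothing, the SSIS package probably lives only on the source server and just the Agent job was carried over):

        -- List Agent jobs and any maintenance plans / subplans registered in msdb
        SELECT name, enabled, date_created
        FROM   msdb.dbo.sysjobs;

        SELECT p.name AS plan_name, s.subplan_name, s.job_id
        FROM   msdb.dbo.sysmaintplan_plans    AS p
        JOIN   msdb.dbo.sysmaintplan_subplans AS s
               ON s.plan_id = p.id;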

  • Site in subdomain (MaraDNS + Nginx)

    - by Grzegorz
    Hello, I'm currently doing some experiments on my VPS with Ubuntu; I've installed MaraDNS with Nginx. At this point I've successfully launched a static site which is available from the Internet (maindomain.com). As the next step I want to add a new site which will be available on a subdomain, for example dev.maindomain.com. I've added this to the db.maindomain.com file (used by MaraDNS): maindomain.com. xxx.xxx.xxx.xxx www.maindomain.com. CNAME maindomain.com. dev.maindomain.com. xxx.xxx.xxx.xxx where xxx.xxx.xxx.xxx is the VPS IP address. In nginx.conf I have: server { listen 80; server_name maindomain.com; access_log /var/log/nginx/maindomain.com.log location / { root /var/www/maindomain.com; index index.html; } } server { listen 80; server_name dev.maindomain.com; access_log /var/log/nginx/dev.maindomain.com.log location / { root /var/www/dev.maindomain.com; index index.html; } } With this configuration maindomain.com works properly, but dev.maindomain.com isn't available. When I try ping dev.maindomain.com I get my xxx.xxx.xxx.xxx IP. Do you have any suggestions on how to resolve this problem? (A checklist sketch follows below.)

    Read the article
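
    Since ping already returns the right address, the DNS side looks fine and the next thing to rule out is Nginx itself. A minimal sketch of checks to run on the VPS (paths as in the question); note that nginx -t would also flag a missing semicolon after the access_log directives if the file really looks like the quoted text:

        # Validate the configuration and make sure the running nginx has loaded it
        sudo nginx -t
        sudo nginx -s reload

        # Ask the local server for the dev vhost explicitly, bypassing DNS
        curl -v -H 'Host: dev.maindomain.com' http://127.0.0.1/

        # Confirm the document root and index file actually exist
        ls -l /var/www/dev.maindomain.com/index.html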

  • Analyzing the errorlog

    - by TiborKaraszi
    How often do you do this? Look over each message (type) in the errorlog file and determine whether it is something you want to act on. Sure, some (but not all) of you have a monitoring solution in place, but are you 100% confident that it really will notify you of all messages that you might find interesting? That there isn't even one little message hiding in there that you would find valuable knowing about? Or how about messages that you typically don't care about, but knowing that you have a high...(read more) (A T-SQL sketch for skimming the log follows below.)

    Read the article
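
    A minimal sketch of one way to skim the log from T-SQL, using the undocumented but long-standing sp_readerrorlog procedure (arguments are log number, log type where 1 = SQL Server and 2 = Agent, and an optional search string):

        -- Read the current SQL Server errorlog
        EXEC sp_readerrorlog 0, 1;

        -- Only the lines containing a keyword, e.g. anything mentioning 'error'
        EXEC sp_readerrorlog 0, 1, N'error';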

  • What is the way to understand someone else's giant uncommented spaghetti code? [closed]

    - by Anisha Kaul
    Possible Duplicate: I’ve inherited 200K lines of spaghetti code — what now? I have recently been handed a giant multithreaded program with no comments and have been asked to understand what it does, and then to improve it (if possible). Are there techniques that should be followed when we need to understand someone else's code? Or do we start straight away from the first function call and keep tracking the subsequent calls? This is C++ (with multithreading) on Linux.

    Read the article

  • Updated article "Agent Alerts Management Pack"

    - by TiborKaraszi
    I've just updated the "Agent Alerts Management Pack" found here. I realize that some don't feel confident reading and executing T-SQL code and prefer to point & click in SSMS instead, so I added two tables with my suggestions on the severity levels and error numbers to define alerts for. (A T-SQL sketch of the underlying alert definitions follows below.) ...(read more)

    Read the article
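
    For reference, a minimal sketch of the kind of T-SQL such alert definitions boil down to (the alert and operator names are hypothetical, and the operator must already exist):

        -- Alert on severity 17 and notify an operator by e-mail
        EXEC msdb.dbo.sp_add_alert
             @name = N'Severity 17 - Insufficient Resources',
             @severity = 17,
             @include_event_description_in = 1;

        EXEC msdb.dbo.sp_add_notification
             @alert_name = N'Severity 17 - Insufficient Resources',
             @operator_name = N'DBA Team',      -- hypothetical operator
             @notification_method = 1;          -- 1 = e-mail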

  • Readability of || statements

    - by Devin G Rhode
    On HTML5 Boilerplate they use this code for jQuery: <!-- Load jQuery with a protocol relative URL; fall back to local if offline --> <script src="//ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script> <script>window.jQuery || document.write('<script src="js/libs/jquery-1.7.2.min.js"><\/script>')</script> The question is simple: which is more readable? if (!jQuery) document.write( -local jQuery- ); or window.jQuery || document.write( -local jQuery- );

    Read the article

  • Backup File Naming Convention

    - by Andrew Kelly
    I have been asked this many times before, and again just recently, so I figured why not blog about it. None of the information outlined here is rocket science or even new, but it is an area that I don't think people put enough thought into before implementing. Sure, everyone chooses some format, but it often doesn't go far enough, in my opinion, to get the most bang for the buck. This is the format I prefer to use: ServerName_InstanceName_BackupType_DBName_DateTimeStamp.xxx (a T-SQL sketch of building such a name follows below). ServerName_InstanceName...(read more)

    Read the article
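
    A minimal sketch of building a file name in that style from T-SQL (SQL Server 2005 compatible; the database name and backup path are hypothetical):

        -- Build ServerName_InstanceName_FULL_MyDB_YYYYMMDD_HHMMSS.bak and back up to it
        DECLARE @stamp char(15),
                @file  nvarchar(400);

        SET @stamp = REPLACE(CONVERT(char(8), GETDATE(), 112) + '_' +
                             CONVERT(char(8), GETDATE(), 108), ':', '');
        SET @file  = N'D:\Backups\' +                                        -- hypothetical path
                     REPLACE(CAST(SERVERPROPERTY('ServerName') AS nvarchar(128)), N'\', N'_') +
                     N'_FULL_MyDB_' + @stamp + N'.bak';

        BACKUP DATABASE MyDB TO DISK = @file WITH INIT;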

  • Adsense click bot is click bombing my site

    - by Graham
    I have a site that gets roughly 7,000 - 10,000 page views per day right now. Starting around 1 AM on 7/1/12 I noticed the CTR was rising dramatically. These clicks would be credited and then de-credited soon after, so they were obviously fraudulent clicks. The next day I had about 200 clicks in my account, with about 100 of them being fraudulent. It's about 3 - 8 per hour, evenly dispersed across each of the three ads, 24 hours a day. This leads me to believe that it's some sort of AdSense click bot. Also, I removed the ads last evening, then put them back up around 3 AM, and the invalid clicks started within 10 minutes. I signed up for statcounter.com to analyze the exit links on the AdSense ads. Then I conditionally blocked ads for the IP address of the person / bot I suspected of doing this. But I think the bot has several proxies to choose from and can refresh IP addresses. I've notified Google through the invalid click form / email 4 times over the past two days to let them know I'm aware of the situation and am working on a solution. I've also temporarily removed all ads on that site. How can I block a bot like this? Thank you.

    Read the article

  • How do you dive into large code bases?

    - by miku
    What tools and techniques do you use for exploring and learning an unknown code base? I am thinking of tools like grep, ctags, unit tests, functional tests, class-diagram generators, call graphs, code metrics like sloccount, and so on. I'd be interested in your experiences, the helpers you used or wrote yourself, and the size of the codebase you worked with. I realize that this is also a process (happening over time) and that "learning" can mean anything from "can give a ten-minute intro" to "can refactor and shrink this to 30% of the size". Let's leave that open for now. (A small command sketch follows below.)

    Read the article
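
    As a starting point, a minimal sketch of a first-contact session with the tools named above (the directory and symbol names are placeholders):

        # Size and language breakdown of the unknown code base
        sloccount src/

        # Build a tags index so an editor can jump straight to definitions
        ctags -R src/

        # Trace a symbol from its definition to its call sites
        grep -rn "SomeInterestingFunction" src/ | head -n 20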

  • Do programmers at non-software companies need the same things as at software companies?

    - by Michael
    There is a lot of evidence that things like offices, multiple screens, administration rights on your own computer, and being allowed to use whatever software you want are great for productivity while developing. However, the studies I've seen lean toward companies that sell software, where keeping the programmers productive is paramount to the company's profitability. At companies that produce software simply to support their primary function, programming is merely a support role. Do the same rules apply at a company that only uses the software it produces to support its business, where much of a programmer's work is maintenance?

    Read the article

  • MSSQL: Ultimate configuration for rebuilding indexes + statistics

    - by Niels Bosma
    My database is getting slower even though I have a bunch of indexes set up. Yesterday I figured out that I need to set up a maintenance plan to rebuild the indexes, etc. So my question is: what's the ultimate configuration for this? Do I need all of "Rebuild index task", "Reorganize index task" and "Update statistics task"? Anything else I need to set up - shrink database? (Today, the only maintenance plan I have is a backup.) Does it matter in what order I run them? Any configuration options I should be aware of? I've read of problems with the log growing wild; how do I fix that? My transaction log is quite small and is usually a problem for me. (A T-SQL sketch follows below.)

    Read the article
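
    A minimal sketch of the usual fragmentation-driven approach instead of blanket rebuilds; the thresholds are common rules of thumb, and the table/index names are placeholders:

        -- Find fragmented indexes in the current database
        SELECT OBJECT_NAME(ips.object_id) AS table_name,
               i.name                     AS index_name,
               ips.avg_fragmentation_in_percent
        FROM   sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
        JOIN   sys.indexes AS i
               ON i.object_id = ips.object_id AND i.index_id = ips.index_id
        WHERE  ips.avg_fragmentation_in_percent > 10;

        -- Reorganize lightly fragmented indexes, rebuild heavily fragmented ones
        ALTER INDEX IX_MyIndex ON dbo.MyTable REORGANIZE;   -- roughly 10-30% fragmentation
        ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD;      -- above roughly 30%
        UPDATE STATISTICS dbo.MyTable;                      -- refresh statistics afterwards

    Rebuilds are fully logged under the full recovery model, which is one common cause of the "log growing wild" problem; frequent log backups (or the simple recovery model on a non-critical box) keep it in check, and routine database shrinks are generally best avoided.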

  • What to choose for a multilingual site with support for Markdown and commenting

    - by Kent
    I want to publish articles on a multilingual site. I want to be able to write an article in two languages and have the versions available at separate URLs: thesite.foo/english-breakfast thesite.com/engelsk-frukost If the user's web browser is set to English, I'd like to show a small notice at the top of the Swedish version with a link to the English one. The link should have an appropriate rel attribute for a translation (search for hreflang at http://diveintohtml5.org/semantics.html); a markup sketch follows below. There should be a way to list all articles belonging to these sets: Swedish only, English only, Swedish versions + English only, English versions + Swedish only. I'd like to publish these as four RSS feeds. And I would like to have two versions of the main site, one in Swedish (showing Swedish versions + English only) and one in English (showing English versions). I shall be able to write the articles using Markdown, as that is the formatting language I find most convenient. There should be a way for users to comment, and some kind of way for me to protect myself against comment spam. I am leaning towards learning Drupal, and I suspect I'll have to code this behavior myself as a module. To be frank, I'd rather work with Java. Is Drupal the way to go? Or is there something more suitable for this project?

    Read the article
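
    For reference, a minimal sketch of the alternate-language markup being described, using the paths from the question; the <link> elements go in each page's <head>, and the visible notice can carry the same hints on its anchor:

        <!-- In the <head> of the Swedish version -->
        <link rel="alternate" hreflang="en" href="/english-breakfast" />

        <!-- In the <head> of the English version -->
        <link rel="alternate" hreflang="sv" href="/engelsk-frukost" />

        <!-- Visible notice on the Swedish page for English-speaking visitors -->
        <p lang="en">
          This article is also available in
          <a href="/english-breakfast" hreflang="en" rel="alternate">English</a>.
        </p>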

  • Video for an ads-driven web-site

    - by AntonAL
    I have a website which I will fill with a bunch of useful videos. I've implemented an ads-rotation engine for articles and will do so for videos. The next milestone is to decide how video will be integrated. There are two ways: Host the videos myself. Pros: complete freedom. Cons: need tens of gigabytes of storage, plus support for multiple formats to be cross-browser and cross-device. Use YouTube. Pros: very simple to use; nothing to do. What are the pros and cons of each way? Some questions for YouTube: Will I be able to control playback of YouTube-embedded video to make post-rolls? (A player-API sketch follows below.) What is the ranking impact on my website when most pages refer to YouTube? Will, say, an iPad play video embedded via YouTube's iframe? Does relying entirely on YouTube have a long-term perspective for a website that should bring in money?

    Read the article
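
    On the playback-control question, the YouTube IFrame Player API exposes state-change events that can drive a post-roll; a minimal sketch, with the element id, video id and the ad function as placeholders:

        <div id="player"></div>
        <script src="https://www.youtube.com/iframe_api"></script>
        <script>
          var player;
          // Called automatically by the IFrame API once it has finished loading
          function onYouTubeIframeAPIReady() {
            player = new YT.Player('player', {
              videoId: 'VIDEO_ID',                       // placeholder
              events: { onStateChange: onPlayerStateChange }
            });
          }
          function onPlayerStateChange(event) {
            if (event.data === YT.PlayerState.ENDED) {
              showPostRollAd();                          // hypothetical ad-rotation hook
            }
          }
        </script>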

  • Can .htaccess slow down a site?

    - by Cody Sharp
    I'm working with a client on an e-commerce website. I implemented clean URLs using .htaccess. I also used .htaccess to solve canonical issues such as redirecting www to non-www and removing index.php from the URL. The website recently began to slow down dramatically, sometimes not even loading. The site is hosted on GoDaddy, and when the client called GoDaddy they told him it was the .htaccess file slowing down the website. I find this highly unlikely because of my past experience, but I'm not 100% sure. My thinking is that the client's website is most likely on a shared server in a busy neighborhood, thus slowing down the site. It's not always slow, but rather sporadic throughout the day, loading fast at some points and slowly at others. Can the .htaccess file slow a website down to a crawl? If so, are there better ways to solve these problems with different rewrite rules and such? (A vhost-level sketch follows below.) Here is what the actual .htaccess file looks like: Options +FollowSymlinks RewriteEngine On RewriteBase / RewriteCond %{HTTP_HOST} ^www.example.net [NC] RewriteRule ^(.*)$ http://example.net/$1 [L,R=301] RewriteRule ^products/([0-9a-zA-Z\_\-]*)\.htm([l]?)$ index.php?p=product&product_code=$1 [L] RewriteRule ^catalog/([0-9a-zA-Z\_\-]*)\.htm([l]?)$ index.php?p=catalog&catalog_code=$1 [L] RewriteRule ^pages/([0-9a-zA-Z\_\-]*)\.htm([l]?)$ index.php?p=page&page_id=$1 [L] RewriteRule ^index\.htm([l]?)$ index.php?p=home [L] RewriteRule ^site_map\.htm([l]?)$ index.php?p=site_map [L] RewriteCond %{QUERY_STRING} ^p=home$ RewriteRule (.*) ? [R=permanent] I'm a .htaccess and regex novice, so any pointed-out mistakes would also help. Thank you.

    Read the article
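
    .htaccess files are re-read and re-evaluated on every request, so they do add some overhead, though rarely enough on their own to explain a site grinding to a halt. Where there is access to the server configuration (usually not the case on shared GoDaddy plans), the standard optimization is to move the rules into the vhost and disable per-directory overrides. A minimal sketch, with hypothetical paths:

        # httpd.conf / vhost sketch -- same rules, parsed once at startup
        <VirtualHost *:80>
            ServerName example.net
            ServerAlias www.example.net
            DocumentRoot /var/www/example.net          # hypothetical path

            <Directory /var/www/example.net>
                AllowOverride None                     # stop per-request .htaccess lookups
            </Directory>

            RewriteEngine On
            RewriteCond %{HTTP_HOST} ^www\.example\.net$ [NC]
            RewriteRule ^/?(.*)$ http://example.net/$1 [L,R=301]
            # ...remaining RewriteRule lines from the .htaccess, minus RewriteBase...
        </VirtualHost>

    Note that in server/vhost context the rewrite patterns match the full URL path including the leading slash, so the per-directory patterns need the same ^/? adjustment shown above.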
