Search Results

Search found 22900 results on 916 pages for 'pascal case'.


  • IIS: redirect everything to another URL, except for one Directory

    - by DrStalker
    I have an IIS server (IIS 6, Win 2003) that hosts the site http://www.foo.com. I want any request to http://foo.com (no matter what path/filename is used) to redirect to http://www.bar.org/AwesomePage.html UNLESS the request is for http://www.foo.com/specialdir, in which case the HTML files in the local directory specialdir should be used. The problem I have is once the redirect is set it also affects /specialdir - even if I right click on that directory and select "content should come from ... local directory" that change does not take effect, and the directory still shows as redirecting to http://www.bar.org/AwesomePage.html. The same thing happens if I try to set individual files to load from the local system instead of redirecting - IIS gives no error, but the change does not take effect and the files still show as being redirected. How can I set specialdir to override the redirection to the new URL?
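    One approach worth sketching: on IIS 6 the redirect lives in the HttpRedirect metabase property, which child nodes inherit, and clearing it on the subdirectory from the command line sometimes sticks when the MMC change does not. The site ID (1) and AdminScripts path below are assumptions, not taken from the question:

        REM Sketch only; run on the IIS server, adjusting site ID and paths.
        cd C:\Inetpub\AdminScripts

        REM Redirect the whole site...
        cscript adsutil.vbs SET W3SVC/1/ROOT/HttpRedirect "http://www.bar.org/AwesomePage.html"

        REM ...then remove the inherited redirect from the subdirectory.
        cscript adsutil.vbs DELETE W3SVC/1/ROOT/specialdir/HttpRedirect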

    Read the article

  • How to sync Ubuntu/software/configurations between N computers with free software and/or without a cloud?

    - by skanatek
    Note: this question is not about syncing data in a Dropbox-like way (files, folders); it is about syncing configurations. I would like to have exactly the same version of Ubuntu, with all the software installed and configured, both on my Desktop PC and on my Laptop PC (and maybe on my small netbook PC), without using Ubuntu Sync and with minimal maintenance effort (set up once, run for a long time). The use case is the following:

    1. I work on my Laptop PC and make some changes to software configuration, for example:
       - configure vim to have a new plugin
       - update the Search Tracker / Recoll file search index
       - configure Thunderbird to have an additional IMAP account ('remember password')
       - add some new bookmarks in Firefox/Chrome
       - change the desktop background image
       - install new software with apt-get install
       - build and install new software with checkinstall
       - etc.
    2. I do some 'sync' operation.
    3. I switch to my Desktop PC and get all the changes from (1) working on the Desktop PC.
    4. I work on my Desktop PC and make some changes to software configuration, for example:
       - add a new directory to the list of directories backed up by DejaDup
       - add a new spell-checking dictionary to LibreOffice Writer
       - configure the Terminator software to have colored fonts
       - install a new font into the Ubuntu system
       - configure Ekiga to make phone calls
       - etc.
    5. I do some 'sync' operation.
    6. I switch to my Laptop PC and get all the changes from (1) and (4) working on the Laptop PC.

    Question: what free/open-source software can I use to sync both machines' Ubuntu systems, installed software and configurations? Is it possible to do that without any cloud services? (One possible shape of the 'sync' step is sketched below.)

    Complementary question: it is obvious that the Desktop PC and the Laptop PC have different hardware configurations. How does the 'sync software' in question deal with video drivers, wlan drivers and their configurations?

    Note: I do not need all the PCs to be synced at the same time, because I work with only one machine at once. Note: I considered using Chef to solve the problem, but it seems that maintaining such a setup might be really cumbersome. Note: I also considered using a bootable USB with Ubuntu installed (portable Linux), but I am not sure that the video drivers would work then.
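    Not the integrated answer the poster is after, but a minimal sketch of what the 'sync' operation could look like with nothing more than git and dpkg, assuming a repo at ~/sync-repo (a hypothetical path); hardware-specific packages such as video/wlan drivers would have to be filtered out of the list by hand:

        # On the machine where changes were made:
        dpkg --get-selections > ~/sync-repo/packages.list
        cp -r ~/.vimrc ~/.mozilla ~/sync-repo/dotfiles/
        (cd ~/sync-repo && git add -A && git commit -m "sync" && git push)

        # On the other machine:
        (cd ~/sync-repo && git pull)
        sudo dpkg --set-selections < ~/sync-repo/packages.list
        sudo apt-get dselect-upgrade
        cp -r ~/sync-repo/dotfiles/. ~/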

    Read the article

  • nagios wrongly reports packet loss

    - by Alien Life Form
    Lately, my nagios 3.2.3 install (CentOS5, monitoring ~300 hosts, 1150 services) has started to occasionally report high packet loss on 50-60 hosts at a time. The problem is that it's bogus: manual runs of ping (or of nagios's own check_ping binary) find no fault with any of the affected hosts. The only cures I have found so far are to run all the checks manually (they succeed, but the problem may recur on the next scheduled check), or to acknowledge the alerts and wait for the problem to go away (which may take several hours). I suspect (with no evidence beyond single rescheduled checks succeeding) that the problem may lie in all the checks being mass-scheduled together, in which case introducing some jitter into the scheduling (how? one sketch of the relevant knobs is below) might help. Or it may be something completely different. Ideas, anyone?
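    If mass scheduling is indeed the cause, Nagios 3 already has knobs for spreading checks out; these nagios.cfg options exist in that version, though the values shown are starting points to tune, not recommendations:

        # /etc/nagios/nagios.cfg (sketch)
        service_inter_check_delay_method=s   # "smart" delay spreads checks across the interval
        service_interleave_factor=s          # interleave service checks across hosts
        max_concurrent_checks=30             # cap parallel checks so bursts don't pile up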

    Read the article

  • Gwibber only launches sometimes

    - by Stephen Judge
    I face a problem where Gwibber only launches sporadically: sometimes when I click to launch it, it launches, and other times it doesn't. I can't figure out what is preventing it from launching, what sort of information I need to collect for a bug report, or where to collect it from. I have killed the gwibber-service processes in the System Monitor (it loads three processes called gwibber-service; is this normal?) several times and tried to launch Gwibber again, but this doesn't seem to work. The process just called gwibber starts, then the three gwibber-service processes start, then the gwibber process ends and the three gwibber-service processes remain, but the application still does not launch. Generally, I want to know whether other people are facing the same problem. If someone can give me some guidance on how to triage this problem and get the information needed to make a bug report, I would be grateful (one way to capture debug output is sketched below). The upside, though, is that when it is not launching it prevents me from wasting endless hours reading my streams on Identi.ca and Twitter, so it is a bit like Workrave for microblogging. In which case maybe I shouldn't fix this problem :-)
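    One way to capture something reportable is to kill the service and relaunch it in the foreground with debug output; the -d/-o flags below are from memory of gwibber 2.x and may differ in other versions:

        # Sketch: restart gwibber-service in a terminal and keep a log
        killall gwibber-service
        gwibber-service -d -o 2>&1 | tee ~/gwibber-debug.log
        # then, in another terminal, try launching the client
        gwibber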

    Read the article

  • How to create or recover Windows Bootloader after deleting Ubuntu boot drive

    - by Kincaid
    I have a computer that dual-boots (or tri-boots) Windows 8 Release Preview, Windows 7, and Ubuntu 12.04, with Grub booting between Windows 8 and Ubuntu, which is what I primarily use. Recently I decided to remove Ubuntu, as I hardly used it, and I accidentally deleted the Ubuntu partition before replacing the Grub bootloader. Now, whenever I boot the machine, it drops to the "grub-rescue" prompt, and I am unable to boot into either Windows (8 or 7) or Ubuntu (except via USB, of course). I do not have any Windows 7/8 recovery media, so that isn't an option. Please note that after I deleted the Ubuntu partition, I put the PC into hibernate and then turned it on; this means the C:\ [Windows 8] drive cannot be mounted. I don't know if that makes things worse, but it definitely doesn't make them better. I am currently booting Ubuntu via USB in an effort to restore the Windows bootloader. I have looked into using boot-repair to solve the problem, following the instructions here, although after attempting to apply the changes it gave the error: "Please install the [mbr] packages. Then try again." I don't know why I'm getting this error; is there a way to install the 'mbr' packages? I honestly don't know what exactly they are, nor how to install them (a sketch of one attempt is below). Are there any other options I have not yet exhausted, in case there is a better way? I want to set the bootloader to boot into Windows 8, but booting into either Windows 7 or 8 is fine (I can use EasyBCD from there). Is there a simple solution to this? I've checked the BIOS, and I haven't found a way to boot into Windows.
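    On the specific "Please install the [mbr] packages" error: Ubuntu does ship a package literally named mbr, whose install-mbr tool writes a generic MBR that chainloads the active partition. A sketch from the live session, assuming /dev/sda is the boot disk (flags as commonly quoted alongside boot-repair, worth double-checking against the manpage):

        sudo apt-get install mbr
        sudo install-mbr -i n -p D -t 0 /dev/sda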

    Read the article

  • How to manage our own bots on the server?

    - by Nikolay Kuznetsov
    There is a game server, and people play in game rooms of 2, 3 or 4. When a client connects to the server he can send a request specifying the number of people, or a range, he wants to play with; the request must be one of: {2-4, 2-3, 3-4, 2, 3, 4}. The server therefore maintains three separate queues for game rooms of 2, 3 and 4 people, which we can denote #2, #3 and #4. It works the following way: if a client sends the request 3-4, two separate requests are added to queues #3 and #4. If queue #3 then holds 3 requests from different people, a game room with 3 players is created, and all other requests from those players are removed from all queues. Right now not many people are online simultaneously, so they apply for a game, wait for some time, and quit because the game does not start in a reasonable time. For a start, a simple bot has been developed, so there is a need to patch the server code to run a bot when someone requests a game but no humans are online. Input: a request from a human, one of {2-4, 2-3, 3-4, 2, 3, 4}. Output: the number of bots to run, and the time to wait before connecting each, depending on queue state. The problem is that I don't know how to manage the bots properly at the server (one possible policy is sketched below).
    Example: #3 has 1 request and #4 has 1 request. If the user's request is {3-4}, the server can add one bot to start a game of 3 people, or two bots to start a game of 4.
    Example: #3 has 1 request and #4 has 2 requests. If the user's request is {3-4}, just one bot is needed in either case, so the game with 4 players is preferable.
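    A sketch of one policy in C#, with invented names: for each room size the request allows, compute how many bots would be needed to fill that room right now, then pick the size needing the fewest bots, preferring the larger room on ties (which reproduces both examples above):

        // Hypothetical sketch; queueCounts holds the current depth of queues #2/#3/#4.
        using System;
        using System.Collections.Generic;

        static class BotPolicy
        {
            public static int ChooseRoom(IEnumerable<int> allowedSizes,
                                         IDictionary<int, int> queueCounts,
                                         out int botsNeeded)
            {
                int bestSize = 0;
                botsNeeded = int.MaxValue;
                foreach (int size in allowedSizes)
                {
                    // +1 for the human making the request; bots fill whatever remains.
                    int bots = Math.Max(0, size - (queueCounts[size] + 1));
                    // Fewest bots wins; on a tie, prefer the larger room.
                    if (bots < botsNeeded || (bots == botsNeeded && size > bestSize))
                    {
                        bestSize = size;
                        botsNeeded = bots;
                    }
                }
                return bestSize;
            }
        }

    Staggered join delays could then be drawn per bot (a few seconds apart, say) so they do not all connect at once.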

    Read the article

  • Allow access to WordPress site only by links in email newsletters

    - by Shane
    I send out a personal email newsletter, and have been looking into sending it via a service like MailChimp or sendy.co. Many of these email services suggest, or require, that the newsletter content be available online, in case the recipient's email app doesn't render it properly, or at all. The thing is, I don't want my newsletter contents visible to the whole world, nor do I want to require existing recipients to create (or be assigned) accounts with passwords. So the question is: how can my WordPress site content be viewable only by clicking the link to it in the email newsletter? It shouldn't turn up in a Google search, but once at the site the visitor should be able to view previous newsletter contents. It seems an .htaccess file would do the trick (one sketch is below), but I have been unable to figure out the syntax for this. Thanks for your help. I have copied below two other questions, and answers, which helped me word my question clearly. Similar to this request about allowing access to a certain group while still restricting access to the world: Is there a way to password protect directory only in cpanel. But the user should not be prompted the password, when they try to access it via web? This person's question is the closest I could find to my situation: Restrict direct folder access via .htaccess except via specific links
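    One .htaccess sketch matching the requirements: require a secret token that only the newsletter links carry, set a cookie on the first visit so the archive stays browsable, and tell crawlers not to index. The token name and value are placeholders, and mod_rewrite/mod_headers are assumed to be enabled:

        RewriteEngine On
        # First visit with the token from the newsletter link: set a cookie and pass through.
        RewriteCond %{QUERY_STRING} (^|&)key=SECRET(&|$)
        RewriteRule ^ - [CO=nl_access:yes:.example.com,L]
        # No token and no cookie: refuse.
        RewriteCond %{QUERY_STRING} !(^|&)key=SECRET(&|$)
        RewriteCond %{HTTP_COOKIE} !nl_access=yes
        RewriteRule ^ - [F]
        # Keep search engines out regardless.
        Header set X-Robots-Tag "noindex, nofollow"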

    Read the article

  • Fusion 3.1 and Parallels 6 for Win7 x64

    - by Ronnie
    I read in a recent article that Parallels Desktop 6 is faster almost everywhere than VMware Fusion. I was originally using Parallels 4 before switching to VMware due to frequent Parallels crashes. Since I now make heavy use of Fusion on my MacBook for a big Win7 x64 virtualized development machine that I find too slow, I am wondering whether the announced speed-up of Parallels 6 justifies moving back to it. As a test I converted my Fusion 3.1 VM to a trial of Parallels Desktop 6, and my Windows Experience Index dropped from 4.7 under Fusion to 4.5 under Parallels 6, so apparently the virtualized machine is not seeing the claimed speed benefit. Is there any optimization I can set up in Parallels to increase the WEI, or should I stay with Fusion (in which case articles of that kind are just marketing)?

    Read the article

  • Creating svn repo programmatically from a webpage and sudo

    - by Adriano Varoli Piazza
    We want to automate the creation of the svn repos and Trac environments for new projects. Basically, this means creating a web script that gets some info (like environment and repo name) from the user and then executes:

        sudo -u svn svnadmin create /var/svn/<projectname>
        trac-admin /var/trac/sites/<projectname> initenv [... all extra params ...]

    The second command is simple, as the script already runs as the www-data user, so I wouldn't have to use sudo. But for the first command, I'd have to use sudo and add www-data to the sudoers file. I was wondering if this is a good idea, and how to do it in that case (one sketch is below); reading the manpage has left me with more doubts than certainties. This webserver is only accessible from our internal network, by the way. The OS is Ubuntu Server 10.04.
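    A minimal sudoers sketch for that first command, restricting www-data to exactly one command run as the svn user (edit via visudo; paths assumed to match the question):

        # /etc/sudoers (via visudo)
        www-data ALL=(svn) NOPASSWD: /usr/bin/svnadmin create /var/svn/*

    One caveat worth knowing: sudo wildcards also match spaces, so /var/svn/* would permit extra arguments; the web script should validate the project name strictly before shelling out.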

    Read the article

  • Customizing the NUnit GUI for data-driven testing

    - by rwong
    My test project consists of a set of input data files which are fed into a piece of legacy third-party software. Since the input data files for this software are difficult to construct (not something that can be done intentionally), I am not going to add new input data files. Each input data file will be subject to a set of "test functions". Some of the test functions can be invoked independently; other test functions represent the stages of a sequential operation, where if an earlier stage fails, the subsequent stages do not need to be executed. I have experimented with NUnit's parametrized test cases (TestCaseAttribute and TestCaseSourceAttribute), passing in the list of data files as test cases, and I am generally satisfied with the ability to select the input data for testing. However, I would like to see if it is possible to customize the GUI's tree structure, so that the "test functions" become the children of the "input data". For example:

        File #1
            CheckFileTypeTest
            GetFileTopLevelStructureTest
            CompleteProcessTest
            StageOneTest
            StageTwoTest
            StageThreeTest
        File #2
            CheckFileTypeTest
            GetFileTopLevelStructureTest
            CompleteProcessTest
            StageOneTest
            StageTwoTest
            StageThreeTest

    This would be useful for identifying the stage that failed during the processing of a particular input file. Are there any tips and tricks that will enable the new tree layout (one partial workaround is sketched below), or do I need to customize NUnit itself to get it?
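    NUnit will not nest parametrized cases under the data file out of the box, but TestCaseData.SetName can at least stamp each generated case with a "stage.file" style name so related cases sort together in the GUI tree. A sketch, with the file list and test names invented:

        // Sketch: name generated cases so the GUI groups them visually.
        using System.Collections.Generic;
        using NUnit.Framework;

        public static class InputFileCases
        {
            static readonly string[] Files = { "File1.dat", "File2.dat" }; // hypothetical

            public static IEnumerable<TestCaseData> StageOneCases()
            {
                foreach (var file in Files)
                    yield return new TestCaseData(file).SetName("StageOneTest." + file);
            }
        }

        [TestFixture]
        public class LegacyProcessingTests
        {
            [TestCaseSource(typeof(InputFileCases), "StageOneCases")]
            public void StageOneTest(string inputFile)
            {
                Assert.That(inputFile, Is.Not.Empty); // placeholder for the real stage check
            }
        }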

    Read the article

  • Xml Serialization and the [Obsolete] Attribute

    - by PSteele
    I learned something new today: starting with .NET 3.5, the XmlSerializer no longer serializes properties that are marked with the Obsolete attribute. I can't say that I really agree with this. Marking something Obsolete is supposed to be something for a developer to deal with in source code. Once an object is serialized to XML, it becomes data. I think using the Obsolete attribute both as a compiler flag and as a control on XML serialization is a bad idea. In this post, I'll show you how I ran into this and how I got around it.

    The Setup

    Let's start with some make-believe code to demonstrate the issue. We have a simple data class for storing some information, and we use XML serialization to read and write the data:

        public class MyData
        {
            public int Age { get; set; }
            public string FirstName { get; set; }
            public string LastName { get; set; }
            public List<String> Hobbies { get; set; }

            public MyData()
            {
                this.Hobbies = new List<string>();
            }
        }

    Now a few simple lines of code to serialize it to XML:

        static void Main(string[] args)
        {
            var data = new MyData
            {
                FirstName = "Zachary",
                LastName = "Smith",
                Age = 50,
                Hobbies = { "Mischief", "Sabotage" },
            };

            var serializer = new XmlSerializer(typeof(MyData));
            serializer.Serialize(Console.Out, data);
            Console.ReadKey();
        }

    And this is what we see on the console:

        <?xml version="1.0" encoding="IBM437"?>
        <MyData xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <Age>50</Age>
          <FirstName>Zachary</FirstName>
          <LastName>Smith</LastName>
          <Hobbies>
            <string>Mischief</string>
            <string>Sabotage</string>
          </Hobbies>
        </MyData>

    The Change

    So we decided to track the hobbies as a list of strings. As always, things change, and now we have more information we need to store per hobby. We create a custom "Hobby" object, add a List<Hobby> to our MyData class, and we obsolete the old "Hobbies" list to let developers know they shouldn't use it going forward:

        public class Hobby
        {
            public string Name { get; set; }
            public int Frequency { get; set; }
            public int TimesCaught { get; set; }

            public override string ToString()
            {
                return this.Name;
            }
        }

        public class MyData
        {
            public int Age { get; set; }
            public string FirstName { get; set; }
            public string LastName { get; set; }

            [Obsolete("Use HobbyData collection instead.")]
            public List<String> Hobbies { get; set; }

            public List<Hobby> HobbyData { get; set; }

            public MyData()
            {
                this.Hobbies = new List<string>();
                this.HobbyData = new List<Hobby>();
            }
        }

    Here's the kicker: this serialization is done in another application. The consumers of the XML will be older clients (clients that expect only a "Hobbies" collection) as well as newer clients (that support the new "HobbyData" collection). This really shouldn't be a problem: the Obsolete attribute is metadata for .NET compilers. Unfortunately, the XmlSerializer also looks at the compiler attribute to determine what items to serialize/deserialize. Here's an example of our problem:

        static void Main(string[] args)
        {
            var xml = @"<?xml version=""1.0"" encoding=""IBM437""?>
        <MyData xmlns:xsi=""http://www.w3.org/2001/XMLSchema-instance"" xmlns:xsd=""http://www.w3.org/2001/XMLSchema"">
          <Age>50</Age>
          <FirstName>Zachary</FirstName>
          <LastName>Smith</LastName>
          <Hobbies>
            <string>Mischief</string>
            <string>Sabotage</string>
          </Hobbies>
        </MyData>";

            var serializer = new XmlSerializer(typeof(MyData));
            var stream = new StringReader(xml);
            var data = (MyData) serializer.Deserialize(stream);

            if (data.Hobbies.Count != 2)
            {
                throw new ApplicationException("Hobbies did not deserialize properly");
            }
        }

    If you run the code above, you'll hit the exception. Even though the XML contains a "<Hobbies>" node, the Obsolete attribute prevents the node from being processed. This will break old clients that use the new library but don't yet access the HobbyData collection.

    The Fix

    The fix, in this case, isn't too painful. The XmlSerializer exposes events for times when it runs into items (elements, attributes, nodes, etc.) it doesn't know what to do with. We can hook into those events and check whether we're getting something that we want to support (like our "Hobbies" node). Here's a way to read in the old XML data with full support of the new data structure (and keeping the Hobbies collection marked as Obsolete):

        static void Main(string[] args)
        {
            var xml = @"<?xml version=""1.0"" encoding=""IBM437""?>
        <MyData xmlns:xsi=""http://www.w3.org/2001/XMLSchema-instance"" xmlns:xsd=""http://www.w3.org/2001/XMLSchema"">
          <Age>50</Age>
          <FirstName>Zachary</FirstName>
          <LastName>Smith</LastName>
          <Hobbies>
            <string>Mischief</string>
            <string>Sabotage</string>
          </Hobbies>
        </MyData>";

            var serializer = new XmlSerializer(typeof(MyData));
            serializer.UnknownElement += serializer_UnknownElement;
            var stream = new StringReader(xml);
            var data = (MyData)serializer.Deserialize(stream);

            if (data.Hobbies.Count != 2)
            {
                throw new ApplicationException("Hobbies did not deserialize properly");
            }
        }

        static void serializer_UnknownElement(object sender, XmlElementEventArgs e)
        {
            if (e.Element.Name != "Hobbies")
            {
                return;
            }

            var target = (MyData) e.ObjectBeingDeserialized;
            foreach (XmlElement hobby in e.Element.ChildNodes)
            {
                target.Hobbies.Add(hobby.InnerText);
                target.HobbyData.Add(new Hobby { Name = hobby.InnerText });
            }
        }

    As you can see, we hook into the "UnknownElement" event. Once we determine it's our "Hobbies" node, we deserialize it ourselves, as well as populating the new HobbyData collection. In this case, we have a fairly simple solution to a small change in XML layout. If you make more extensive changes, it would probably be easier to do some custom serialization to support older data. A sample project with all of this code is available from my repository on bitbucket.

    Technorati Tags: XmlSerializer,Obsolete,.NET

    Read the article

  • Activesync/OWA Desktop Client

    - by prestomation
    At my company we have Exchange 2k3 with OWA public, serving up ActiveSync and webmail. There is no POP3 or IMAP support from our admins, and Outlook 2k3's RPC over HTTP is also disabled. Is there a desktop client that can connect to ActiveSync or OWA? If my iPod touch can connect to ActiveSync, why can't my PC? I'd prefer a Linux daemon that could simply forward emails to my Gmail address, but I guess I'll take what I can get. Thanks. EDIT: In case it was not clear, our Exchange server is hidden completely behind a firewall, and a second Exchange server has only the ActiveSync and HTTPS ports opened to the world.

    Read the article

  • MVC Validation with ModelState.isValid through a wizard

    - by Emmanuel TOPE
    I'm working on a small educational project in MVC 3, and I'm facing a small problem when trying to handle validation through a wizard. I tried to take advantage of MVC 3's ability to deliver the content of a different view from the same URL when handling an [HttpPost] method on a page. In my case, my main model class contains about ten [Required] properties that I would like to expose through a small wizard in 3 steps. I want the user to enter his personal information in the first step, answer some questions in the second step, and finally receive a confirmation mail from the web application with his credentials in the last step. I can't reach the last step because of the ModelState.IsValid check I use to handle validation, which can't pass if I mark properties as [Required] but don't include them in the first view. Since the replies to those questions come down to a couple of choices, I thought I might use nullable bool? properties to avoid the validation issues, but I know that's not the proper way. Can anyone help me find a way to extend my validation across these three steps (one common approach is sketched below)? Thanks in advance, and sorry for my English; I'm not a native speaker.
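    The usual fix, sketched below with invented names: give each wizard step its own view model carrying only that step's [Required] fields, validate those, and assemble the full domain object at the end. This keeps ModelState.IsValid meaningful per step without loosening the domain model:

        using System.ComponentModel.DataAnnotations;
        using System.Web.Mvc;

        public class StepOneViewModel
        {
            [Required] public string FirstName { get; set; }
            [Required] public string Email { get; set; }
        }

        public class StepTwoViewModel
        {
            [Required] public string AnswerOne { get; set; }
            [Required] public string AnswerTwo { get; set; }
        }

        public class WizardController : Controller
        {
            [HttpPost]
            public ActionResult StepOne(StepOneViewModel model)
            {
                if (!ModelState.IsValid)
                    return View(model);         // redisplay step 1 with its own errors
                TempData["StepOne"] = model;    // stash it (session would also work)
                return View("StepTwo", new StepTwoViewModel());
            }
        }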

    Read the article

  • You Can't Upload An Empty File To SharePoint 2007 Or SharePoint 2010

    - by Brian Jackett
    The title of this post is pretty self explanatory, but I thought it worth mentioning since I had never run across this rule until just recently.  A few weeks ago I was testing out a new workflow attached to a SharePoint 2007 document library.  I uploaded various file types to ensure all were handled properly.  One of the files I happened to test with was an empty .txt file to which I got the following error.      As you can see from the error message you aren’t allowed to upload a file that is empty.  Fast forward to this week when I was doing some research for my upcoming SharePoint 2010 beta exams.  I remembered that error I got a few weeks ago and decided to try out with SharePoint 2010 as well.  No surprises I got a similar error. Conclusion     Next time you are uploading files to a SharePoint 2007 or 2010 document library, make sure the file is not empty.  Coincidentally when I tweeted about this issue a few friends replied that they had also found this error recently.  I don’t know the internal reasoning why this is prevented but I assume it has something to do with how the blob for the file is stored in the database.  I assume that this would still be the case even if you had Remote Blob Storage (RBS) configured for your farm, but don’t have access to such a farm to confirm.  If anyone reading this does have access and wants to confirm that would be appreciated, just leave a comment.         -Frog Out

    Read the article

  • Error 502 in OpenOffice Spreadsheet formula

    - by cody
    The failing formula is the following: =IF(TIMEVALUE(C2 & ":00") > TIMEVALUE(B2 & ":00"); 0; C2-B2). I previously tried =IF(C2 > B2; 0; C2-B2), but this also gives me "Error 502". The cells it refers to contain data in the format "12:30" (I formatted the columns with the format "HH:MM"). I just want to calculate how much time lies between two times, respecting the special case where endtime < starttime (a hedged alternative is sketched below).
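    If B2 and C2 really hold time values rather than text, TIMEVALUE is the likely source of the 502, since it expects a text argument, and plain subtraction only needs a wrap-around guard. A hedged alternative, assuming B2 is the start and C2 the end, which adds a day whenever the end time falls before the start:

        =MOD(C2-B2; 1)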

    Read the article

  • Laptop battery life drastically decreased compared to Windows 7

    - by Aron Rotteveel
    I am running Ubuntu 10.10 on my Dell Studio XPS 1640 and get about one hour of battery life out of it, compared to about 2.5 hours running Windows 7. This is with wireless and Bluetooth on, but still, the difference seems incredible. What could be causing such a difference, and is there a way to close the gap without losing core functionality? EDIT: here's some output from powertop. This is with Bluetooth turned off and wifi turned on. The output seems pretty normal to me, but as indicated, this is about 1 hour of battery life on a full battery...

        Wakeups-from-idle per second : 476.2    interval: 10.0s
        Power usage (ACPI estimate): 2.5W (1.2 hours)

        Top causes for wakeups:
          30.0% (167.2)D  chrome
          21.0% (117.3)   [extra timer interrupt]
          13.9% ( 77.4)   [kernel scheduler] Load balancing tick
           3.4% ( 18.9)D  xchat
           7.1% ( 39.8)   [iwlagn] <interrupt>
           5.9% ( 32.9)   AptanaStudio3
           3.9% ( 21.6)D  java
           2.7% ( 14.9)   [TLB shootdowns] <kernel IPI>
           2.5% ( 14.1)   docky
           1.8% ( 10.0)   nautilus
           1.6% (  9.0)   thunderbird-bin
           1.0% (  5.5)   [ahci] <interrupt>
           0.9% (  5.0)   syndaemon
           0.8% (  4.3)   [kernel core] hrtimer_start (tick_sched_timer)

    EDIT: after changing /proc/sys/vm/laptop_mode to 5 (it was set to 0), wakeups seem to have decreased, although usage still seems far too high:

        Wakeups-from-idle per second : 263.8    interval: 10.0s
        Power usage (ACPI estimate): 2.6W (0.9 hours)

    EDIT: I seem to have discovered the main cause: I was using the open-source ATI drivers. I recently installed the official ATI drivers, and battery life seems to have doubled since. EDIT: last edit. The previous 'solution' of installing the official ATI drivers turns out to be a non-solution: although it does increase battery life, my laptop resolution is maxed out at 1200x800 after a reboot. (This does not need answering in this question, as it is a separate case.)

    Read the article

  • What tools should every programmer know?

    - by acidzombie24
    What are some tools every programmer should know about? Some examples I thought of: Source control (no explanation needed). A profiler: many could go without one, but it's good to know how to use one when the occasion arises. What else? I was thinking of bug-tracking software, but I haven't used one, so I wasn't sure. Should programmers know how to use Trac? I remember someone in the past telling me that if I was making a shared library I should know (Some Name), which generates docs from the source (in that case C++). What was that called, and what else should I know about? -edit- What about team-management software? Any software you could not live without in a specific project would be a good mention. I'll also mention that I use VMs frequently during the prototype or end phase, to see if there are any issues on a clean XP or Linux distro and whether I forgot anything in my redistribution. I can't imagine the end/release and testing phase of a project without a VM.

    Read the article

  • Web Development Environment: How to distribute edited hosts files over bunch of mac machines?

    - by Alex Reds
    I am doing some research to prepare a web development environment for our small (10 people and growing) new office. Use case: for each new web project we usually create a new alias on an Apache server, someproject.companywebsite. From my understanding, in order to see this website locally, everyone on the team (including managers and directors) needs to edit their hosts file (e.g. "192.168.1.10 someproject.companywebsite"), and again for each new project (which can be 2-5 each week). Solution: I am looking for a way to edit this hosts file only once and distribute it to all the Mac machines in our network at once, or at least much more smoothly than poking around on each machine over and over again (one push-based sketch is below). Is that possible, or is that a very wrong way of doing it? Perhaps we would be better off setting up our own local DNS server and pointing our router at it? Though running our own DNS server concerns me a bit because of possible network interruptions and other lags, if you know what I mean. Or are there other workflows for this? What's the best way to do such things? I'd be grateful to hear advice from experienced admins. I couldn't find this info on the internet, so if you know where to read about it, point me in the right direction. Thank you in advance. Alex
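    A sketch of the push approach, with host names and paths invented (Remote Login/SSH assumed enabled on each Mac); the local-DNS route mentioned above, for example a small dnsmasq instance handed out by the router, removes the per-machine editing entirely and is worth weighing against this:

        #!/bin/sh
        # Push one canonical hosts file to every Mac on the list.
        for mac in mac01.local mac02.local mac03.local; do
            scp ./hosts "admin@$mac:/tmp/hosts" && \
            ssh "admin@$mac" 'sudo mv /tmp/hosts /etc/hosts'
        done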

    Read the article

  • Laptop burned after heavy OpenGL usage. Is there hope?

    - by leladax
    After a couple of minutes of OpenGL programming and a 'slow OS' episode, the screen went blank. I forced a shutdown with the power key, and now there is no LED at all on battery or AC; it doesn't start at all, it's totally dead. It's almost certainly not the AC adapter, since the machine shows no LED even on AC, and with the AC connected it makes a very slight, faint clicking noise (one has to have an ear next to the AC connector to hear it). Is there any hope? I suppose it's a burned motherboard. I suspected a burned GPU, but that would still leave the LEDs lit, or at least the machine attempting to start up. Now it's totally dead. It's a TOSHIBA Satellite X200-219. It has no warranty, as it's more than 2 years since purchase.

    Read the article

  • Redhat cluster Vs Pacemaker Vs Gluster Vs Sheepdog

    - by chandank
    Changing the entire question, as the earlier one was very confusing. I have been exploring different clustering systems to run virtual machines on two different machines on a LAN with high availability. Currently I am already using a DRBD resource across two machines in Primary/Secondary mode: in case the primary fails, I manually promote the secondary to primary and start the VM (the manual steps are sketched below). I also explored Gluster, and it looks good; however, I would rather prefer clustering over Gluster (a userspace FS). So if anyone has an idea of which would be better from an ease-of-use perspective, I would be interested. Moreover, the Sheepdog project appears good, but I could not find much documentation or HOWTOs. I am using CentOS 6.
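    For reference, the manual failover described above is roughly this sequence (resource, mount point and VM names invented; Pacemaker's job would be to run the equivalent automatically):

        # On the surviving node
        drbdadm primary r0             # promote the secondary copy of resource "r0"
        mount /dev/drbd0 /var/lib/vm   # mount the replicated volume (path assumed)
        virsh start guest1             # start the VM (libvirt assumed)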

    Read the article

  • How can I decrease relevancy of Creative Commons footer text? (In Google Webmaster Tools)

    - by anonymous coward
    I know that I may just have to link the image to make this happen, but I figured it was worth asking, just in case there's some other semantic markup or tips I could use... I have a site that uses the textual Creative Commons blurb in the footer. The markup is like so:

        <div class="footer">
          <!-- snip -->
          <!-- Creative Commons License -->
          <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/3.0/us/"><img alt="Creative Commons License" style="border-width:0" src="http://i.creativecommons.org/l/by-nc-sa/3.0/us/80x15.png" /></a><br />This work by <a xmlns:cc="http://creativecommons.org/ns#" href="http://www.xmemphisx.com/" property="cc:attributionName" rel="cc:attributionURL">xMEMPHISx.com</a> is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/3.0/us/">Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License</a>.
          <!-- /Creative Commons License -->
        </div>

    Within Google Webmaster Tools, the list of relevant keywords is heavily saturated with the text from that blurb. For instance, 50% of my top-ten most relevant keywords (including the site name) come from it: [site name], license, [keyword], commons, creative, [keyword], alike, [keyword], attribution, [keyword]. I have not done any extensive testing to find out whether or not this list even matters, and so far this doesn't impact performance in any way. The site is well designed for humans, and it is as findable as it needs to be at the moment. But, mostly out of curiosity: do you have any tips for decreasing the relevancy of the text from the Creative Commons footer blurb (beyond the image-only variant sketched below)?
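    For completeness, the image-only variant the poster mentions, derived from the markup above (the license link survives, the crawlable sentence goes away):

        <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/3.0/us/">
          <img alt="Creative Commons License" style="border-width:0"
               src="http://i.creativecommons.org/l/by-nc-sa/3.0/us/80x15.png" />
        </a>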

    Read the article

  • How to word wrap and justify text like the output of man?

    - by cody
    What is the command that word-wraps and justifies a text file so that the output looks like that of a man page (one hedged sketch is below)?

        All of these system calls are used to wait for state changes in a child of
        the calling process, and obtain information about  the  child  whose  state
        has changed. A state change is considered to be: the child terminated;
        the child was stopped by a signal; or the child  was  resumed  by  a
        signal. In the case of a terminated child, performing a wait allows the
        system to release the resources associated with the child; if a wait is
        not performed, then the terminated child remains in a "zombie" state (see
        NOTES below).

    Thanks.
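    For plain wrapping, fmt or fold are the standard tools; full justification like man's output usually takes par. A hedged sketch (par's option syntax is from its manual and worth double-checking):

        fmt -w 72 notes.txt      # wrap at 72 columns, ragged right
        par 72j < notes.txt      # wrap and justify to 72 columns ('j' = justify)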

    Read the article

  • Strange robots.txt - how and why did it get there?

    - by Mick
    I recently created a very simple, pure HTML website which I have hosted with "hostmonster". Hostmonster had very good reviews on some comparison website, and in general so far they appear to be perfectly good in every way... at least I thought so until just now... I have been making lots of edits to my site on an almost daily basis. My site now appears on the first page (7th on the list) of a Google search for my most important keyphrase. But I did notice a problem with the snippet chosen by Google. I asked a question on this site about snippets and got some great answers. I then made some modifications to my metadata, and within 48 hours the Google snippet for my search was perfect. The odd thing, though, was that the "cached" version Google had still appeared to be very old, from three weeks previous. This seemed very odd: how could the Google robots have read my new metadata without updating the cache? This puzzled me greatly. Just now it occurred to me that maybe I had some goofy setting in my robots.txt file. I didn't actually remember even making one, but I thought I'd have a look just in case. Much to my horror, I saw that there was a robots.txt, and it contained the disturbing text below:

        sitemap: http://cdn.attracta.com/sitemap/728687.xml.gz

    Intuitively this looks like some kind of junk or spam trick, and I had indeed been getting some spam from "attracta". So my questions are: 1. Should I simply delete this robots.txt (a minimal replacement is sketched below)? 2. Was the file there all along, placed there because of some commercial tie-in between attracta and hostmonster? 3. Does the attracta robots file explain the lack of re-caching?
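    If the file is unwanted, replacing it with a deliberately minimal one (rather than just deleting it) makes it obvious if the host or some tool quietly recreates the old contents. A sketch of a do-nothing robots.txt:

        # Minimal robots.txt: allow everything, claim nothing
        User-agent: *
        Disallow: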

    Read the article

  • Mac internet problems

    - by Bradley Herman
    Our office is set up with mostly Macs (7 of them), but we also have a Windows laptop and a Windows desktop on the network. The network is configured with a modem going into a switch/router that feeds the office, along with a wireless router. Everything runs fine most of the time, but periodically, while using the web, certain sites will stop loading and time out repeatedly. This usually lasts 20 minutes or so and can be incredibly annoying. Resetting the modem/router and/or rebooting the computer never helps. The weirdest part is that in almost every case the websites load fine on our Windows machines. I frequently use GitHub, Google, Stack Overflow, and the jQuery reference, and I can count on the sites being unavailable to me at least once a day. While I can't get them to load, I can spin my chair around to the Windows server behind me and load the sites just fine (a quick diagnostic to run during an outage is sketched below). Any idea what the hell could be going on here?
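    One way to narrow it down during an outage, comparing an affected Mac with the working Windows box (the site name is just an example): if dig disagrees between the machines it points at DNS, while matching dig output plus a stalled curl points at the router/NAT path instead:

        dig github.com              # what does this machine resolve the name to?
        curl -Iv http://github.com  # does a TCP/HTTP connection actually complete?
        netstat -rn | head          # confirm the default gateway is what you expect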

    Read the article

  • Puppet master/agent basic setup

    - by lewap
    I'm trying to set up a basic Puppet agent/master use case with an agent server and a master. I've set up two servers, with puppet and puppetmaster respectively, and ran the following on them:

        puppet master --no-daemonize --verbose
        puppet agent --test
        puppet cert --list     # to get the list
        puppet cert --sign     # to sign it
        puppet agent --test

    After this, I get the message:

        err: Could not retrieve catalog from remote server: hostname was not match with the server certificate
        warning: Not using cache on failed catalog
        err: Could not retrieve catalog; skipping run
        err: Could not send report: hostname was not match with the server certificate

    What do I need to do in order to get the agent and master talking to each other? (One common fix is sketched below.)
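    That error usually means the agent is reaching the master under a name that is not in the master's certificate. A sketch of the common fix, with an example certname:

        # On the agent, /etc/puppet/puppet.conf: address the master by its cert name
        [agent]
        server = puppet.example.com

    Alternatively, regenerate the master's certificate with dns_alt_names covering every name the agents use, then clean and re-sign the certificates.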

    Read the article
