Search Results

Search found 25400 results on 1016 pages for 'enable manual correct'.

Page 555/1016

  • How do you pronounce Linux?

    - by Xerxes
    I'm tired of the old fart at work who keeps coming up to my desk and telling me all about his "years of experience in working with Unix and Lye-nix". I couldn't vent at him because that would be wrong, so I'm going to vent here - because obviously that's the right thing to do... Anyway, for all the people who practise this disgusting behaviour - the pronunciation is.... (Hmmm - anyone know phonetics?) - "Li-nix". Note: despite hating him for this, he is otherwise a very nice (but sometimes rather annoying) person. Now... to formally make this a "question": could someone write the phonetics for pronouncing "Linux", and also the notorious "Lye-nix", so I can make a note of it for future ventings? I think this is right... L?n?x, NOT L?n?x. ...or perhaps... L?n?x, NOT L?n?x* Can someone confirm the correct phonetics? (Listen to Linus on the matter.)

    Read the article

  • Best practice for scaling a single application source to multiple nodes

    - by Andrew Waters
    I have an application which needs to scale horizontally to cover web and service nodes (at the moment they're all on one) but interact with the same set of databases and source files (both application code and custom assets). The database is no problem; it's already handled with replication in MongoDB. The configuration of the servers is also identical (100% Linux). This question is literally about sharing a filesystem between machines so that its content is always correct, regardless of the node accessing it. My two thoughts so far have been NFS and SAN - SAN being prohibitively expensive and NFS showing some performance issues on the second node with regard to glob()ing in PHP. Does anyone have recommended strategies or other techniques that don't involve sharding data across nodes, or any potential gotchas in NFS that may cause slow disk seek times? To give you an idea of the scale, the main node initialises its application modules in ~0.01 seconds; the secondary takes ~2.2 seconds. They're VMs inside a local virtual network in ESXi and the ping time between them is ~0.3ms.
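
    If stale metadata lookups are the culprit, NFS attribute and lookup caching usually take most of the sting out of repeated stat()/glob() calls. A hedged /etc/fstab sketch - the server name, export path and mount point are placeholders, and the actimeo/lookupcache values are only a starting point:

        fileserver:/export/app  /var/www/app  nfs  noatime,vers=3,actimeo=60,lookupcache=all  0  0

    actimeo=60 lets each node trust cached file attributes for a minute, which is normally acceptable for code and assets that only change on deploys.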

    Read the article

  • How can I know I'm buying a heatsink that will work with my CPU?

    - by Mike Peshka
    Recently I've been using my CPU a lot more for gaming, and as of two days ago my computer has been shutting off suddenly with no warning. I'm inclined to believe I need a new heatsink and cooling fan. (Correct me if I am wrong.) I went to Best Buy and Staples to purchase a new one, but both places instructed me to look online. Now I am posed with a problem: I don't know how to shop for one online, because I want to make sure it will work with my unit. My CPU is a Pentium® Dual-Core CPU E2210 @ 2.20GHz.

    Read the article

  • Configuring Nagios BGP plugin on Ubuntu

    - by user141610
    I am trying to configure the Nagios check_bgp_neighbors plug-in on Ubuntu and have followed the plug-in's README file. I changed the command definition from

        define command{
            command_name check_bgp_all
            command_line $USER1$/check_bgp_neighbors -H $HOSTADDRESS$ -C $USER3$ -n $ARG1$ -n $ARG2$
        }

    to

        define command{
            command_name check_bgp_all
            command_line /usr/local/nagios/libexec/check_bgp_neighbors.sh -H xx.xx.xx.49 -C xx.xx.xx.50
        }

    and the service definition from

        define service{
            use server-service
            hostgroup_name svc-bgp1
            service_description BGP Check 1
            check_command check_bgp_all!10.0.0.1!172.16.0.2
        }

    to

        define service{
            use generic-service
            hostgroup_name svc-bgp1
            service_description BGP Check 1
            check_command check_bgp_all!xx.xx.xx.50
        }

    xx.xx.xx.49 is the IP of the host router and xx.xx.xx.50 is the IP of the eBGP neighbour. After that change the service shows a CRITICAL status. I know my command is not correct but cannot pin down the problem. I gather that this plug-in needs the username and password of the host router, but I don't know how or where to provide them. The Nagios log does not show any error message. Status information: Failed: status:0 prefixes:0 sent:0 received:0
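
    For reference, the README's command keeps the credential in a Nagios resource macro and passes the neighbours as arguments; a hedged sketch of how the pair of definitions usually ends up (the $USER3$ value and the second neighbour are placeholders - per the README's macro usage, -C appears to carry the router credential/community rather than a neighbour address):

        # resource.cfg
        $USER3$=router_password_or_community

        # commands.cfg - keep the macros so each service can fill them in
        define command{
            command_name check_bgp_all
            command_line $USER1$/check_bgp_neighbors -H $HOSTADDRESS$ -C $USER3$ -n $ARG1$ -n $ARG2$
        }

        # services.cfg - neighbours are supplied after the '!' separators
        define service{
            use                 generic-service
            hostgroup_name      svc-bgp1
            service_description BGP Check 1
            check_command       check_bgp_all!xx.xx.xx.50!<second-neighbour-if-any>
        }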

    Read the article

  • Outlook Express 2007: remove personal folders from mail folders

    - by ufk
    Hiya. I installed Outlook Express 2007 and configured it for my e-mail account. It seems that I cannot remove the Personal Folders data file, and I cannot set the Gmail mail file as the default. (It's translated from Hebrew, so I hope I got the menu words correct.) When I go to Tools - Manage Accounts - Data Files I see two files, one for Personal Folders and the other for my Gmail. The Personal Folders file is the default one; I can't remove it and I can't set my Gmail mail file to be the default. How can I resolve the issue? Thanks!

    Read the article

  • Trying to install Proprietary Nvidia Graphics Drivers

    - by Peter Snow
    After reading and trying many different suggestions for some hours, I returned to this how-to: https://help.ubuntu.com/community/BinaryDriverHowto/Nvidia The first problem I encounter is how to identify which of the listed drivers supports my Nvidia GeForce 630M graphics card. Following the links doesn't really help, since it is not stated there either (except where support for a new driver was added later, which is explicitly stated - but the devices covered originally are not listed). However, even if I knew, if it doesn't appear in the 'Additional Drivers' dialogue (see below), how would I install it? Second issue: the article goes on to say that available drivers for my hardware are usually listed in 'Additional Drivers'. In my case, they aren't, and unfortunately it doesn't tell me how to correct that or work around it. I've checked the BIOS and there is no way offered there to disable the integrated graphics, only the Nvidia graphics. I've also tried each available option in this: $ sudo update-alternatives --config i386-linux-gnu_gl_conf My system is an Acer Aspire 4752G bought May 2012. I'm running Ubuntu 12.04 LTS. uname -a: 3.2.0-38-generic-pae #61-Ubuntu SMP Tue Feb 19 12:39:51 UTC 2013 i686 i686 i386 GNU/Linux It's 64-bit hardware but I installed a 32-bit OS for greater software compatibility. Running $ sudo tail -fn 500 /var/log/Xorg.0.log | grep '(EE)' returns: (WW) warning, (EE) error, (NI) not implemented, (??) unknown. [ 28.886] (EE) Failed to initialize GLX extension (Compatible NVIDIA X driver not found) The reason for wanting the proprietary drivers is that my laptop comes with a 3D-accelerated graphics adaptor, and rather than confining myself to struggling with the on-board graphics, I would rather use it. I also want to experiment with using it for bitmining (which uses the GPUs for computing power).
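
    A minimal sketch of the usual 12.04 route, assuming the standard repositories are enabled. Note that on hybrid Intel/NVIDIA laptops like this one the packaged driver alone was often not enough at the time, and Bumblebee was the common workaround, so treat this as a starting point rather than a guaranteed fix:

        # Confirm exactly which GPU(s) the machine exposes, with PCI IDs
        lspci -nn | grep -iE 'vga|3d'

        # The packaged proprietary driver on 12.04
        sudo apt-get install nvidia-current
        sudo nvidia-xconfig        # writes an xorg.conf that loads the nvidia driver
        sudo reboot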

    Read the article

  • Accessing a shared folder using other credentials with Windows 7

    - by Nicolas Buduroi
    I was at a client's office trying to connect to a shared drive on their network, but was greeted with a "you do not have permission to access" error. I couldn't find any way to enter the required credentials, as the message offered no other options. I tried to map the drive and selected the option to enter the correct credentials (with \\HOST\user), but it wouldn't work at all. The worst thing in all of this is that my coworker, who is using OS X, was able to connect to that drive without any problem: he clicked on it, entered the credentials and could open the folder. The folder is shared by a Windows Small Business Server 2008 machine.
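
    One thing worth checking: in the Windows credential prompt the account is written HOST\user or DOMAIN\user, not \\HOST\user. A hedged command-line equivalent (drive letter, share and account names are placeholders; the trailing * makes it prompt for the password):

        net use Z: \\SBSSERVER\SharedFolder /user:SBSDOMAIN\username *

    If the mapping succeeds this way, the earlier failure was probably just the credential format rather than a permissions problem.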

    Read the article

  • Using a Vertex Buffer and DrawUserIndexedPrimitives?

    - by MattMcg
    Let's say I have a large but static world and only a single moving object in said world. To increase performance I wish to use a vertex and index buffer for the static part of the world. I set them up and they work fine; however, if I throw in another draw call to DrawUserIndexedPrimitives (to draw my one moving object) after the call to DrawIndexedPrimitives, it will error out saying a valid vertex buffer must be set. I can only assume the DrawUserIndexedPrimitives call destroyed/replaced the vertex buffer I set. In order to get around this I must call device.SetVertexBuffer(vertexBuffer) every frame. Something tells me that isn't correct, as that kind of defeats the point of a buffer? To shed some light, the large vertex buffer is the final merged mesh of many repeated cubes (think Minecraft) which I manually create to reduce the amount of vertices/indices needed (for example, two connected cubes become one cuboid and the connecting faces are cut out), and also the amount of matrix translations (as it would suck to do one per cube). The moving objects would be other items in the world which are dynamic and not fixed to the block grid, such as the NPCs who move constantly. How do I go about handling the large static world while also allowing objects to move about freely?
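
    A rough sketch of a Draw pass under XNA 4.0 (field names such as worldVertexBuffer, basicEffect and movingVertices are hypothetical). DrawUserIndexedPrimitives streams its arrays through the device's own internal buffer, which is consistent with the "valid vertex buffer must be set" error, so the static buffer does need re-binding each frame - but re-binding an existing buffer is cheap; the expensive part is creating or re-uploading the data, which this approach still avoids:

        // Static world: prebuilt buffers, bound once per frame.
        GraphicsDevice.SetVertexBuffer(worldVertexBuffer);
        GraphicsDevice.Indices = worldIndexBuffer;
        foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes)
        {
            pass.Apply();
            GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList,
                0, 0, worldVertexCount, 0, worldTriangleCount);
        }

        // Single moving object: small, rebuilt on the CPU, drawn straight from arrays.
        foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes)
        {
            pass.Apply();
            GraphicsDevice.DrawUserIndexedPrimitives(PrimitiveType.TriangleList,
                movingVertices, 0, movingVertices.Length,
                movingIndices, 0, movingIndices.Length / 3);
        }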

    Read the article

  • Object construction design

    - by James
    I recently started using C# to interface with a database, and one part of the process seemed odd to me. When creating a SqlCommand, the method I was led to took the form: SqlCommand myCommand = new SqlCommand("Command String", myConnection); Coming from a Java background, I was expecting something more like: SqlCommand myCommand = myConnection.createCommand("Command String"); I am asking, in terms of design, what is the difference between the two? The phrase "single responsibility" has been used to suggest that a connection should not be responsible for creating SqlCommands, but I would also say that, in my mind, the difference between the two is partly a mental one between a connection executing a command and a command acting on a connection, the latter of which seems less like what I have been led to believe OOP should be. There is also a part of me wondering if the two should be completely separate, and should only come together in some sort of connection.execute(command) method. Can anyone help clear up these differences? Are any of these methods "more correct" than the others from an OO point of view? (P.S. The fact that C# is used is completely irrelevant. It just highlighted to me that different approaches were used.)
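
    For what it's worth, ADO.NET offers both styles - the factory form just doesn't take the SQL text as a constructor argument. A small sketch (the connection string, method and table name are made up; it lives inside whatever class you like):

        using System.Data.SqlClient;   // both SqlConnection and SqlCommand live here

        static int CountWidgets(string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();

                // Constructor style: the command is handed the connection it will run on.
                var byCtor = new SqlCommand("SELECT COUNT(*) FROM Widgets", connection);

                // Factory style: the connection creates a command already wired to itself.
                SqlCommand byFactory = connection.CreateCommand();
                byFactory.CommandText = "SELECT COUNT(*) FROM Widgets";

                return (int)byFactory.ExecuteScalar();
            }
        }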

    Read the article

  • HAPROXY per domain redirection

    - by SecondThought
    I'm trying to redirect requests arriving at my load balancer to a separate backend by domain name, using an acl with hdr_dom. The redirection works for the first request - 'GET /' (the destination server is a WordPress site) - but when the client asks for the assets ('GET /blablabla/style.css' for example), haproxy no longer sends them to the right backend, but to the default one. In the haproxy log I can see the correct host that the request is for (the one I defined in hdr_dom), but it's as if, because the GET request line itself is relative (I mean it contains only the /blablabla path, not the domain), haproxy doesn't recognise it with hdr_dom. I'm just guessing here... Please help.
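
    A hedged sketch of the usual host-based split (names and addresses are placeholders). The Host: header is sent on every request, including the CSS/JS ones, so when assets land in the default backend the wiring is normally the suspect - the acl is not attached with use_backend, or was written as a one-shot redirect or a path-based acl instead of a header-based one:

        frontend http-in
            bind *:80
            acl is_blog  hdr_dom(host) -i blog.example.com
            use_backend wordpress_nodes if is_blog
            default_backend default_nodes

        backend wordpress_nodes
            server wp1 10.0.0.21:80 check

        backend default_nodes
            server web1 10.0.0.11:80 check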

    Read the article

  • Outlook 2007 disconnecting after sending email to specific email

    - by Michael
    When a user emails another tech in my domain, they get prompted for a username/password as though they got disconnected. I have not seen any issues in the event logs, tried deleting the email address from his auto-complete cache, ran Outlook in safe mode and searched online as well; I am kind of lost. I checked the event logs on the Exchange server too, including the security log, and still nothing. Exchange 2010, client OS: Vista x64, Outlook 2007. Thanks!

    Read the article

  • Launcher icons are invisible after upgrade from 11.10 to 12.04

    - by Clo Knibbe
    I am re-purposing an old laptop. I installed 11.10 on it and then immediately upgraded to 12.04. (I could not directly install 12.04 as my system does not support PAE.) When my system was (briefly) 11.10, the desktop appeared as expected. However, after the upgrade to 12.04, the icons in the launcher area are invisible. If I hover over the spot where an icon should be, the little popup showing the tool's name appears, and I can click to invoke the tool - I just cannot see the icons. (Screenshot omitted: invisible icons in the launcher.) The icons do appear as expected in other contexts, for example in the Home folder and in Dash Home. My theme is "Ambiance (default)". I do not have a ~/.icons folder. This is the top-level contents of /usr/share/icons: default DMZ-Black DMZ-White gnome handhelds hicolor HighContrast HighContrastInverse Humanity Humanity-Dark locolor LoginIcons LowContrast redglass ubuntu-mono-dark ubuntu-mono-light unity-icon-theme whiteglass (Sorry for the poor formatting, can't get it to show as a list.) I suspect that the launcher isn't looking for the icons in the right place, but I don't know how to confirm that, or how to correct it. This is my first foray into Linux, although I used to use Unix a few decades ago. This doesn't look much like my old Sun workstation, though! Does anyone have any suggestions or insights for me? Thanks.

    Read the article

  • Spreading incoming batched data into a real-time stream

    - by pr1001
    I would like to display some events in 'real time'. However, I must fetch the data from another source. I can request the last X minutes, though the source is updated approximately every 5 minutes. This means that there will be a delay between the most recent data retrieved and the point in time at which I make the request. Second, because I will be receiving a batch of data, I don't want to just fire all the events down a socket the moment my fetcher has retrieved them: I would like to spread the events out so that they are both accurately spaced amongst each other and in sync with their original occurrences (e.g. an event is always displayed 6 minutes after it actually happened). My thought is to fetch the data every 5 minutes from the source, knowing that I won't get the very latest data. The original data would then be queued to be sent down the socket 7.5 minutes from its original timestamp - that is, at least ~2.5 minutes from when its batch was fetched and at most 7.5 minutes since then. My question is this: is this the best way to approach the problem? Does this problem have any standard approaches or associated literature on implementation best practices and edge cases? I am a bit worried that the frequency of my fetches and the frequency at which the source is updated will get out of sync, leading to points where no data is retrieved from the source. However, since my socket delay is greater than my fetch frequency, the subsequent fetch should retrieve newer data before the socket queue is empty. Is that correct? Am I missing something? Thanks!
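
    A minimal sketch of the fetch-then-delay idea in C# (the type and method names are made up, and it assumes events arrive roughly in timestamp order). Each polled event is queued and only emitted once originalTimestamp + fixedDelay has passed, so the spacing between events is preserved no matter when their batch happened to arrive:

        using System;
        using System.Collections.Concurrent;
        using System.Threading;

        class DelayedRelay
        {
            static readonly TimeSpan FixedDelay = TimeSpan.FromMinutes(7.5);
            static readonly BlockingCollection<(DateTime Stamp, string Payload)> Queue
                = new BlockingCollection<(DateTime, string)>();

            // The poller calls this every ~5 minutes for each event in the batch.
            public static void Enqueue(DateTime originalTimestamp, string payload)
                => Queue.Add((originalTimestamp, payload));

            // A single emitter thread drains the queue at the events' scheduled times.
            public static void Run(Action<string> emit)
            {
                foreach (var item in Queue.GetConsumingEnumerable())
                {
                    TimeSpan wait = item.Stamp + FixedDelay - DateTime.UtcNow;
                    if (wait > TimeSpan.Zero) Thread.Sleep(wait);
                    emit(item.Payload);   // e.g. push down the websocket
                }
            }
        }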

    Read the article

  • Problem with wake after suspend using USB remote.

    - by Bod
    Hi, I'm a Linux newbie looking for some help. I'm currently setting up an XBMC HTPC using a laptop and 10.10, and everything works great except waking from suspend using the power button on the remote. Suspend from the remote works fine, as does resume using the power button on the laptop. I've checked /proc/acpi/wakeup, which initially showed the following:

        Device  S-state  Status     Sysfs node
        C096    S5       *disabled  pci:0000:00:1e.0
        C0F1    S3       *disabled  pci:0000:00:1d.0
        C0F8    S3       *disabled  pci:0000:00:1d.1
        C0F9    S3       *disabled  pci:0000:00:1d.2
        C0FA    S3       *disabled  pci:0000:00:1d.3
        C0FB    S3       *disabled  pci:0000:00:1d.7
        C102    S5       *disabled  pci:0000:00:1c.0
        C22B    S5       *disabled  pci:0000:08:00.0
        C115    S5       *disabled  pci:0000:00:1c.2
        C22C    S5       *disabled
        C118    S5       *disabled  pci:0000:00:1c.3
        C22C    S5       *disabled

    I've since configured the above so that the S3 devices are enabled. I've confirmed that they are the correct devices using lspci:

        00:1d.0 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #1 (rev 01)
        00:1d.1 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #2 (rev 01)
        00:1d.2 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #3 (rev 01)
        00:1d.3 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #4 (rev 01)
        00:1d.7 USB Controller: Intel Corporation N10/ICH 7 Family USB2 EHCI Controller (rev 01)

    None of this has worked, unfortunately, and I'm now stuck. It simply refuses to wake from the remote. The USB receiver shows no activity LED while suspended. Suspend/resume from the remote works fine from Windows 7, so I know the laptop is OK with it. Any ideas? I need to get this sorted to gain Wife Approval for this system. Thanks, Bod.
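
    For reference, a sketch of how those toggles are usually flipped (device names taken from the table above; writing a name to /proc/acpi/wakeup toggles its state, and the change is lost at reboot, so people often put the loop in /etc/rc.local). The USB receiver itself also has a per-device wakeup switch in sysfs that frequently needs enabling as well:

        for dev in C0F1 C0F8 C0F9 C0FA C0FB; do
            echo $dev | sudo tee /proc/acpi/wakeup > /dev/null
        done
        cat /proc/acpi/wakeup                      # confirm the controllers now say *enabled

        grep . /sys/bus/usb/devices/*/power/wakeup # the receiver's entry should read "enabled"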

    Read the article

  • BizTalk 2009 - How do I do t"HAT"?

    - by StuartBrierley
    In my previous life working with BizTalk Server 2004, I came to view HAT (the Health and Activity Tracking tool) as one of my first ports of call in the case of problems with any of our BizTalk solutions. When you move to BizTalk Server 2009 it is quickly apparent that HAT is no longer with us. HAT was useful in BizTalk 2004 mainly because it provided developers and administrators with a number of useful queries and views of what was going on inside BizTalk at runtime: when and what type of messages were received and sent, what messages had been suspended, what orchestrations were running or suspended; you could even follow the process flow of a message or orchestration to see what was going on. With BizTalk Server 2009 much of the functionality of HAT can now be found in the BizTalk Administration console. Select a BizTalk Group and you will be shown the Group Hub Overview page. This provides a number of default queries that replicate some of those found in the old HAT. You can also use the Group Hub page to create new queries. These can then be saved and loaded in other Group Hub instances - useful for creating queries in development for later use in Test, Pseudo-Live and Live environments. In the next few posts I am going to look at some of the common queries that we might miss from HAT and recreate them (or something close) using the new query option:

        Messages - last 100 received
        Messages - last 100 sent
        Messages - last 50 suspended
        Service instances - last 100

    I have yet to try the updated Admin-HAT-Console in anger, and after using old-HAT for so long it may take some getting used to, but so far I would say that moving the HAT functionality into the BizTalk Administration console was probably the correct way to go. Having one tool as the place to look for the combined functionality on offer certainly seems to be the sensible option.

    Read the article

  • How to determine the amount to spend per phrase on Adwords research?

    - by Anonymous -
    My company would like to start a PPC advertising campaign. Whilst I understand the concept and how to set everything up from a technical point of view, this is something I've never done before. Logically, we'd like to test out a wide range of keywords that we think would lead to conversions, which we've put together through brainstorming and with some help from Google's External Keyword Tool. Sub-question whilst I remember - am I correct in thinking that, in Google's keyword tool, keywords we expect to perform well that have low competition yet high monthly searches are good picks, since there will be fewer advertisers, meaning our bid per click will be lower? Is there a common benchmark or process for doing a round of tests with keywords? Should we wait for 100 clicks on each keyword, see which ones have led to the most sales (or rather, sales that are sustainable with the cost per click of that keyword), then drop the ones which aren't converting and put that budget onto the converting keywords? We realistically have a few hundred keywords/phrases we would like to test, but spending $100 per keyword/phrase is going to work out as quite an expensive test. It would be nice to be able to spend $5-10 per phrase, but I don't think the sample size would be great enough to determine anything usefully reliable. Another approach might be to set up all the keywords, and those that bring the most sales within x hours/days would be the ones we use. What is the common procedure with things like this? I know there are a plethora of companies that specialize in exactly this, but this is something we anticipate doing a lot in the future, so it would make sense to do it in house if at all possible.

    Read the article

  • Usual Suspects: Typical 3rd Party Entities in E-Commerce [closed]

    - by zharvey
    I am doing some requirements/analysis for a web app that I'd like to build (Ruby/Java developer here). This web app would have a store front, shopping cart and would need to be totally compliant with all e-com best practices. It's amazing how much non-technical info comes up when you search for phrases like "how does e-commerce work", but very little comes up in the way of technical details. As such, I'm having extreme frustration finding answers to what I consider pretty straight-forward questions. I came here because I believe this question is not off-topic; if it is, please leave a comment as to why this question does not belong here and I will happily remove it myself (upvotes if your comment can point me to the correct place for this question!). So then: What 3rd parties will I need to work with to have a modern, web-compliant e-com site? So far I can account for a payment gateway provider like Authorize.net and an SSL certificate provider like Trustwave. Any others? What other standards besides PCI compliance will I be held to (besides governing laws, of course!)? Vulnerability scans: PCI compliance requires quarterly scans; if I'm a "Level 4" (low-volume) merchant, does that still apply to me? Regardless, my backend architecture is quite huge, with web servers, app servers, database, message brokers and more. Does each of these servers need to be scanned?! If not, which servers do need these quarterly scans? I usually hate to ask micro-questions inside of one large one, but these are so closely related I just felt like asking them all separately would be spamming the site with too many petty questions. Thanks in advance!

    Read the article

  • license and copyright assignment

    - by corintiumrope
    I'm currently working on a WordPress plugin. My client gives me a specs doc (a PowerPoint presentation, if you can call that a specs doc), and I code the requested functionality. Every time I send him code, every file containing code starts with these lines: Author: My Name Copyright: The_client's_company.com License: MIT Expat (http://en.wikipedia.org/wiki/Expat_License) My intention is to give my client the complete right to relicense and distribute the code under any other license (as the TOS of the freelancing website requires, plus I know he intends to sell it under a proprietary license), but at the same time to keep for myself the right to expand and redistribute the plugin under the MIT license if I wish to (not that I do). The reason is that I am paid only 10 USD/hour (this is my first gig), so I want to at least keep the right to reuse parts of the code in other projects, expand it if I want to start a similar project myself when I finish the contract (unlikely, but who knows...), or show it to potential employers. The contract we agreed upon doesn't include any licensing specifications, but I've informed him in the emails we've exchanged that although all my work is licensed by default as MIT, I'm giving my clients the copyright of the code I produce so they can relicense it at will before distribution. Is this the correct way of achieving that?

    Read the article

  • Impersonation on IIS 7.0 passes the machine credentials for Crystal Reports

    - by pknox
    On a 32-bit Windows 2008 server running the Donor2 Application in the Classic .NET Managed Pipeline mode, configured for Windows Integrated Authentication and Impersonation, all of the .NET pages are passing the authenticated user’s credentials [DomainName\UserName]. This is the correct, expected behavior. The Crystal Reports pages, instead of passing the authenticated user’s credentials, are passing the IIS Server’s credentials [DomainName\MachineName$]. One of the very frustrating aspects of this situation is that I have another server which, as far as I can tell, is configured identically. That server, when loading Crystal Reports, is passing the authenticated user’s credentials [DomainName\UserName] as expected. I have obviously missed something, but I have no idea what it could be.
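
    For comparison, these are the ASP.NET settings that Classic-pipeline impersonation normally hinges on (a sketch, not the application's actual web.config). If the Crystal Reports viewer is served by a handler that runs outside this ASP.NET configuration, it would fall back to the application pool or machine identity, which would be consistent with the DomainName\MachineName$ credentials being observed:

        <system.web>
          <authentication mode="Windows" />
          <identity impersonate="true" />
        </system.web>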

    Read the article

  • Poor backlink profile - search rankings not updated for 2+ months

    - by fistameeny
    I am carrying out some work on a website that is a PR2 with a few good quality, relevant backlinks (PR4-6). It has a presence on Twitter that is updated regularly, a Google Places listing, and listings on some decent directories (Qype etc). The site was rebuilt in Drupal 7 two months ago, with all the basics done - URL rewriting, an XML sitemap submitted to Google, and most importantly good quality, structured content. I've noticed that Google is still showing "old" URLs from the previous version of the site that was ditched 8 weeks ago. I think the site may be penalised under the Penguin update, as a previous SEO company created many low quality links from link farms/directories. My question is: what is the correct way to deal with this? Bing Webmaster Tools can "disavow" links, and I guess I can attempt to contact the link farms to have them removed. I've already submitted a request to Google to have the penalty removed, as we're trying to tidy up a bad history. We submit updated sitemaps to Google and Bing daily, and have built some further decent quality, relevant links. Is there anything further I can do?

    Read the article

  • htaccess order Deny,Allow rule

    - by aspiringCodeArtisan
    I'd like to dynamically add IPs to a block list via .htaccess. I was hoping someone could tell me if the following will work in my case (I'm unsure how to test via localhost). My .htaccess file will have the following by default:

        order allow,deny
        allow from all

    IPs will be dynamically appended:

        Order Deny,Allow
        Allow from all
        Deny from 192.168.30.1

    The way I understand this is that it is allow-all by default, with an optional list of deny rules. If I'm not mistaken, Order Deny,Allow will look at the Deny list first - is this correct? And does the Allow from all rule need to be at the end?
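
    For what it's worth, a sketch of the blacklist pattern as it is usually written for Apache 2.2's mod_authz_host (the IPs are placeholders). With Order Allow,Deny the Allow directives are evaluated first and Deny last, so a host matching both is denied; with Order Deny,Allow plus Allow from all, the Allow wins and nothing ends up blocked. The position of the lines within the file does not matter, only the Order directive does:

        Order Allow,Deny
        Allow from all
        Deny from 192.168.30.1
        Deny from 203.0.113.7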

    Read the article

  • Exchange 2007 Email Address Policies

    - by Ryan Migita
    We have recently upgraded to Exchange 2007 (from 2003) and have noticed the change from recipient policies to email address policies. We have two separate domains (let's call them domaina.com and domainb.com) that we receive email for; each has an email address policy, and neither policy is applied. In our Exchange 2003 environment, domaina.com was the default email address when we created new mailboxes; due to the migration, domainb.com is now the default (and its email address policy has a higher priority). Now, when we create a new mailbox (or edit existing ones), the primary email address becomes domainb.com. So the question is: is this as simple as putting the email address policies in the correct order? Do I have to apply both policies? What effect will the above changes have on existing mailboxes? Since we do not have any conditions set on the policies, I assume that prior to making these changes I should force all domainb mailboxes not to automatically update their email addresses based on policy? Thanks in advance!
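
    A hedged Exchange Management Shell sketch of that sequence (the policy and mailbox names are placeholders): opt out any mailboxes that must keep their current primary address, raise the domaina.com policy's priority, then re-apply both policies:

        Set-Mailbox -Identity "some.user" -EmailAddressPolicyEnabled $false

        Set-EmailAddressPolicy -Identity "domaina.com policy" -Priority 1
        Update-EmailAddressPolicy -Identity "domaina.com policy"
        Update-EmailAddressPolicy -Identity "domainb.com policy"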

    Read the article

  • Can I improve my AdWords quality scores with better landing pages?

    - by Eric
    I noticed that I have some keywords in my AdWords that are totally applicable to my site but the quality score of the keyword is 4 or 5. I'd like to get it up higher by creating custom versions of my site's home page (landing page) targeted specifically for people searching on those keywords. So for example, if we pretend my site sells pet food, my current home page has the phrase "dog food." I have a specific AdWords campaign for people searching on cat food (with cat food-specific ads). I'm thinking about changing the URL on those ads to something like http://mysite.com/cat.html, so a different home page comes up with the phrase "cat food." My thinking is that will help Google see that this new landing page is appropriate for the keywords and will raise my quality score for the "cat food" keywords. (Note that none of what I'm doing is shady or misleading; nobody would disagree that all of the keywords and ads I've created are perfect and appropriate for what my site offers.) Question: is what I describe the correct way to raise poor quality scores on keywords, and will it help?

    Read the article

  • How to programmatically control slave computer on BIOS level?

    - by PovilasSid
    I want to run some tests at the hardware level. My goal is to create or find a way to control one computer from another, down to changing BIOS settings. For example: the master computer sends a signal for the slave to restart and open the BIOS settings dialog. The master sends a signal to the slave to change BIOS parameters and then restart. Once the slave has fully booted, the master starts up some software on the slave. When the software finishes, the cycle continues until certain conditions are met. I know that I am looking for a complex thing, but mainly what I need are the correct keywords, because right now I am being flooded with BIOS configuration tutorials. Main concerns: Is it possible without using any custom-tailored chip? How can the master monitor the slave's hardware activity? How can the master handle more than one slave? What connections (cables) are needed to create this kind of setup?

    Read the article

  • Are webhosts that require NS instead of a CNAME common?

    - by billpg
    I've just signed up with a webhost (which I prefer not to name) and I'm reasonably happy with it. The only nit was when I was ready to put a site online and asked the support line what name I should point my 'www' CNAME to. They responded that they don't do that and that I need to set my domain's NS records for the hosting to work. "Why would you ever want to do it that way? Our service to you includes DNS, and our servers are probably much better than the one your registrar provides." This was a bit of a surprise, as all of the other webhosts I've worked with happily support this. I've set up (e.g.) gallery.myfriend.example for friends by having them configure their DNS to CNAME 'gallery' to the name of a shared server at a webhost, and the webhost does name-based hosting for 'gallery.myfriend.example'. (Of course, if the webhost ever tells me I'm being moved from A.webhost.example to B.webhost.example, it would be my responsibility to change where the CNAME points. Really good webhosts would instead create myname.webhost.example pointing at the IP of whichever server my stuff happens to be on, so I'd never have to worry about keeping my CNAME up to date.) Is my impression correct that most webhosts will happily support a service that begins with a CNAME hosted elsewhere, or is it really more common that webhosts will only provide a service if they control the DNS too?
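
    To make the two models concrete, a sketch in BIND zone-file notation (all names are placeholders). The CNAME approach delegates a single host name to the webhost while the rest of the zone (MX records and so on) stays wherever it already lives; switching the NS records instead hands the webhost the entire zone:

        ; in the friend's existing zone, only one record changes
        gallery.myfriend.example.   IN  CNAME  shared-web-12.webhost.example.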

    Read the article
