Search Results

Search found 15820 results on 633 pages for 'domain transfer'.

Page 16 of 633

  • Tips for Domain Name Management?

    - by bofe
    Expired domain names = downtime for websites. Downtime = bad. How does your organization make sure domain names get renewed? I believe ICANN requires registrars to send notices at 60 and 30 days before expiry, but these can easily get ignored -- especially with a large number of domains. Does your solution work for a large number of domain names (100+)? Is it registrar-specific?
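
    A minimal monitoring sketch (assuming a whois client is installed; the exact field name reported -- "Expiry Date", "Expiration Date", etc. -- varies by TLD and registrar), suitable for a nightly cron job:

        for d in example.com example.net example.org; do
            exp=$(whois "$d" | grep -iE 'expir' | head -1)   # first line mentioning expiry
            echo "$d: ${exp:-no expiry field found}"
        done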

    Read the article

  • Domain Hosting, DNS/NS Management [closed]

    - by ledy
    What criteria do you use for choosing the right provider for registering a domain? (Not hosting, only the domain.) So far I can only compare pricing and the features of each DNS manager, but I have no idea what else I need to pay attention to. How can I judge the quality of a provider's nameservers, and could there be meaningful differences in speed? Background: the hosting itself (webspace, server, etc.) is separate from the domain, and I am not going to register the domain with the same host that provides the webspace/server.
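
    As one illustration of the speed question, nameserver response times can be compared directly with dig (ns1.provider-a.com and ns1.provider-b.com are placeholders for the candidate registrars' nameservers):

        for ns in ns1.provider-a.com ns1.provider-b.com; do
            echo -n "$ns: "
            dig @"$ns" example.com A +noall +stats | grep 'Query time'
        done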

    Read the article

  • one domain name, two servers

    - by MarcoKotrotsos
    Good morning. My client wants to keep an old server (running PHP-Fusion) running under the same domain name as a new server (running Drupal), so I have a question: is this possible, and most importantly, how would I do it? The old server's URL structure is projectname.domain.com and the new server's URL structure is domain.com/projectname. Can these two servers run side by side on the same domain name, but with different URL structures? Thank you, Marco
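
    One common arrangement (a sketch only, with hypothetical IPs): keep the old server reachable through its own A record while the apex points at the new server, then verify resolution with dig. Serving domain.com/projectname from the old machine as well would additionally require a reverse proxy or redirect on the new server.

        # projectname.domain.com.  A  203.0.113.10   (old PHP-Fusion server)
        # domain.com.              A  203.0.113.20   (new Drupal server)
        dig +short projectname.domain.com   # should print the old server's IP
        dig +short domain.com               # should print the new server's IP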

    Read the article

  • A unique identifier for a Domain

    - by jchoover
    I asked a question over on StackOverflow and was directed to ask a related one here to see if I could get any additional input. Basically, I am looking to have my application aware of what domain it's running under, if any at all. (I want to expose certain debugging facilities only in house, and due to our deployment model it isn't possible to have a different build.) Since I am overly paranoid, I didn't want to rely on the domain name alone to ensure we are in house. As such, I noted that the DOMAIN_CONTROLLER_INFO (http://msdn.microsoft.com/en-us/library/ms675912(v=vs.85).aspx) returned from DsGetDcName (http://msdn.microsoft.com/en-us/library/ms675983(v=vs.85).aspx) has a GUID associated with it, but I can find little, if any, information about it. I am assuming this GUID is generated when the first DC in a domain is created, and that it lives on for the life of the domain. Does anyone have inside knowledge and would be kind enough to confirm or deny my assumptions?

    Read the article

  • Domain transfer and New Hosting Management

    - by Anubhav Saini
    I wanted to migrate from my old registrar to GoDaddy, mainly because my current registrar/hosting provider doesn't support .NET. My old registrar gave me control over the domain and hosting account, so in theory I have everything I need. I applied for a domain transfer, bought a hosting package from GoDaddy and uploaded the new web site. So I am waiting for the domain transfer, and it tells me I have to wait 5-7 days for approval. Okay. But today my old registrar told/taunted me that I didn't really need to apply for a transfer at all. What could I have done differently? My domain expires on the 15th. I don't know much about how all of this really works, but I am guessing he meant, "you should have waited 15 days and let it expire, after which you could have bought the domain again once it expired". Is that really the case (I doubt it), or are there other ways I could have got the same result without transferring the domain (like changing DNS entries)? I have read just about all of the documentation available from Namecheap/GoDaddy/WHOIS about domain transfers, but maybe because I am new to this it is all confusing to me. I would also like to know what to do with the DNS settings after the transfer succeeds. I want to kill the old website, so which nameserver settings do I need to change -- the new ones, the old ones, or both? I have the old host + old domain registrar + old working site on one hand, and on the other hand the new site + pending domain transfer + new DNS settings.
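
    Once a transfer like this completes, a few quick checks can confirm which registrar and nameservers are now authoritative (a sketch; mydomain.com stands in for the real name):

        whois mydomain.com | grep -iE 'registrar|status|expir'   # new registrar, lock status, expiry
        dig NS mydomain.com +short                               # nameservers currently answering for the domain
        dig A  mydomain.com +short                               # where web traffic will actually land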

    Read the article

  • How do I prevent ISPs from killing downloads of files in mid-transfer?

    - by Gorchestopher H
    I run a small website with a few users and low traffic, mostly to share personal mp3 files with a small community. Depending on their ISP, my users can't always download or stream larger files -- by larger I mean larger than 1 MB. Essentially the host either stops sending or the client stops receiving; one of the links along the connection chain simply ends the connection before the transfer completes. Traceroute shows no connection issues. There are no problems with short transfers that take only a few seconds; it's the transfers lasting around ten seconds that end prematurely. Just doing a straight download from a direct link can trigger this error if you have the wrong ISP. Strangely enough, this is most common with users whose ISPs are essentially independent providers that buy service via a fiber link. Unfortunately these providers aren't very knowledgeable, are unable to do any testing, and insist it's a problem with the host. I have had my host move my site to different servers of theirs, to the same effect. Nearly identical sites (affiliate sites, actually) experience no such issue. What can I do to troubleshoot this further? How can I prove that someone is dropping the ball, and identify who that party is? Can I do a 5 MB traceroute? EDIT Maybe I can clear up some misconceptions about my question: The files are not very large; they are simply over 2 MB. The users do not have "slow" connections; they are at least 5 Mbps. This "time out" happens very quickly, in the realm of 5 seconds, so I don't know if it's a timeout or not. The user often gets 1 or 2 MB in this chunk of time. I have tried streaming with a Flash player. I have tried saving the target and forcing the download. I have tried allowing the browser to stream the file. I have tried different browsers (FF, IE, Chrome). Users are able to download identical files when on different hosts.
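
    A few client-side checks that could help pin down where the transfer dies (a sketch only; the URL is a placeholder for one of the affected mp3s):

        curl -o /dev/null -w 'code=%{http_code} bytes=%{size_download} speed=%{speed_download}\n' \
            http://example.com/files/test.mp3                     # how many bytes arrive before the stall?
        curl -C - -o test.mp3 http://example.com/files/test.mp3   # does a ranged resume pick up where it stopped?
        mtr -rwc 100 example.com                                  # per-hop loss over a sustained run, unlike a one-shot traceroute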

    Read the article

  • Domain Validation in a CQRS architecture

    - by Jupaol
    Basically I want to know if there is a better way to validate my domain entities. This is how I am planning to do it, but I would like your opinion. The first approach I considered was:

        class Customer : EntityBase<Customer>
        {
            public void ChangeEmail(string email)
            {
                if (string.IsNullOrWhiteSpace(email)) throw new DomainException("...");
                if (!email.IsEmail()) throw new DomainException();
                if (email.Contains("@mailinator.com")) throw new DomainException();
            }
        }

    I do not actually like this validation because, even though I am encapsulating the validation logic in the correct entity, it violates the Open/Closed principle (open for extension but closed for modification), and I have found that when this principle is violated, code maintenance becomes a real pain as the application grows in complexity. Why? Because domain rules change more often than we would like to admit, and if the rules are hidden and embedded in an entity like this, they are hard to test, hard to read and hard to maintain. But the real reason I do not like this approach is: if the validation rules change, I have to come back and edit my domain entity. This is a really simple example, but in real life the validation could be more complex. So, following Udi Dahan's philosophy of making roles explicit, and the recommendation from Eric Evans in the blue book, the next try was to implement the specification pattern, something like this:

        class EmailDomainIsAllowedSpecification : IDomainSpecification<Customer>
        {
            private INotAllowedEmailDomainsResolver invalidEmailDomainsResolver;

            public bool IsSatisfiedBy(Customer customer)
            {
                return !this.invalidEmailDomainsResolver.GetInvalidEmailDomains().Contains(customer.Email);
            }
        }

    But then I realized that in order to follow this approach I had to mutate my entities first in order to pass in the value being validated (in this case the email), and mutating them would cause my domain events to fire, which I would not want to happen until the new email is valid. So after considering these approaches, I came up with this one, since I am going to implement a CQRS architecture:

        class EmailDomainIsAllowedValidator : IDomainInvariantValidator<Customer, ChangeEmailCommand>
        {
            public void IsValid(Customer entity, ChangeEmailCommand command)
            {
                if (!command.Email.HasValidDomain()) throw new DomainException("...");
            }
        }

    That's the main idea: the entity is passed to the validator in case we need some value from the entity to perform the validation, the command contains the data coming from the user, and since the validators are injectable objects they can have external dependencies injected if the validation requires it. Now the dilemma. I am happy with a design like this because my validation is encapsulated in individual objects, which brings many advantages: easy unit testing, easy maintenance, domain invariants explicitly expressed in the Ubiquitous Language, easy extension, centralized validation logic, and validators that can be combined to enforce complex domain rules. Even though I know I am placing the validation of my entities outside of them (you could argue it is a code smell -- an Anemic Domain), I think the trade-off is acceptable. But there is one thing I have not figured out how to implement in a clean way: how should I use these components?
    Since they will be injected, they won't fit naturally inside my domain entities, so basically I see two options: pass the validators to each method of my entity, or validate my objects externally (from the command handler). I am not happy with option 1, so I will explain how I would do it with option 2:

        class ChangeEmailCommandHandler : ICommandHandler<ChangeEmailCommand>
        {
            // the validators required for this command would be injected here
            private IEnumerable<IDomainInvariantValidator> validators;

            public void Execute(ChangeEmailCommand command)
            {
                using (var t = this.unitOfWork.BeginTransaction())
                {
                    var customer = this.unitOfWork.Get<Customer>(command.CustomerId);
                    this.validators.ForEach(x => x.IsValid(customer, command));
                    // here I know the command is valid
                    // the call to ChangeEmail will fire domain events as needed
                    customer.ChangeEmail(command.Email);
                    t.Commit();
                }
            }
        }

    Well, this is it. Can you give me your thoughts about this, or share your experiences with domain entity validation? EDIT I think it is not clear from my question, but the real problem is: hiding the domain rules has serious implications for the future maintainability of the application, and domain rules change often during the life-cycle of the app, so implementing them with this in mind would let us extend them easily. Imagine a rules engine being introduced in the future: if the rules are encapsulated outside of the domain entities, that change would be easier to implement.

    Read the article

  • Why/How do expired domain names get bought so quickly?

    - by Alex Angas
    A relative let my wife's family .com domain name expire. Apart from that being annoying in itself, the domain now redirects to random spam blogs and is owned by someone with almost 5000 other domains, according to DomainTools. They also want a fortune to return it. The family name is pretty unusual and completely unrelated to the spam. So how did they manage to snap the domain name up so quickly, and what value does it have to them?

    Read the article

  • Create subdomains via C-Panel or via domain registrar?

    - by cybergeek654
    I am a novice, so excuse me if this sounds dumb. I read a few similar questions on this topic here, but they did not answer my question. I have a personal website hosted with a cPanel control panel. Via cPanel I can easily create new subdomains without having to apply any DNS settings, and the subdomains work immediately. I have an unlimited-subdomains option. When I check the DNS management control panel on my domain registrar's site, there is no record associated with my subdomains. Now I want to buy a new domain name from 1and1 and have it as an add-on domain on my previous host. 1and1 say they only allow 5 subdomains. What does this mean? Can't I create unlimited subdomains under my new domain name, just as I do for my old domain? How does cPanel create and manage subdomains when there is nothing about them in my registrar's DNS control panel? Thanks for your help
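
    A quick way to see why the registrar's panel shows nothing (a sketch; example.com is a placeholder): if the domain's NS records point at the hosting provider, the cPanel server owns the DNS zone and adds subdomain records there, not at the registrar.

        dig NS example.com +short       # your host's nameservers => cPanel manages the zone
        dig sub.example.com +short      # the A record cPanel created when the subdomain was added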

    Read the article

  • When choosing a domain does including your brand affect SEO performance?

    - by bpeterson76
    I've been asked to build a "landing page" for a local branch of an international corporation. While the corporation has a well-established domain name, the local office wants to use a unique, separate URL that will be easy for them to relay to clients. However, the corporation is considered a category leader, so the local office is also concerned about the importance of carrying the company's brand over to the URL. Questions that have arisen: From an SEO perspective, is there a benefit to including the brand name in the URL? Would it be more beneficial to buy a domain that relates generically to the INDUSTRY as opposed to the specific brand name? Would the benefits of an easy-to-remember, short domain outweigh any SEO benefits that might be gained by a longer, brand-specific domain?

    Read the article

  • Are there any services that sell 'packs' of domain names?

    - by DC_
    Are there any services that sell 'packs' of domain names? What I'm looking for is a service that will allow me to buy, for example, 50 random domain names. I don't care what the names are, they could be kjsgadkj286.mobi for all I care. Does anything like this exist? I am not interested in registering them myself. I thought that maybe domain squatters or someone similar might have leftover worthless names they want to get rid of.

    Read the article

  • The Message Queuing service failed to join the computer's domain 'DOMAIN' Error 0xc00e0025

    - by SimonGoldstone
    I'm struggling to get MSMQ installed in Domain Integration mode on Windows Server 2012 (Azure). So far, I've provisioned a brand new Windows Server 2012 (R2) machine on the Azure platform, installed the Active Directory role and promoted the machine to a domain controller. Once AD was in place, I added the MSMQ feature along with the Directory Integration add-on. However, it will not install in AD integration mode; it will only work in Workgroup mode. I can verify this by running the following PowerShell command:

        New-MSMQQueue -name Queue1 -queuetype Public

    When I run this command, I get the following error:

        New-MsmqQueue : A workgroup installation computer does not support the operation.

    The event viewer reveals two errors in the Application log:
    1. The Message Queuing service failed to join the computer's domain 'DOMAIN'. Error 0xc00e0025
    2. Message Queuing was unable to create the msmq (MSMQ Configuration) object in Active Directory Domain Services. Error c00e0025h
    I'm struggling here. Any advice?

    Read the article

  • Mod_rewrite display subdomain.domain.com and call domain.com/subdomain/ for SSL

    - by Jeff H.
    I have a website secured by a standard SSL certificate, securing a few different shops under different subdirectories, e.g. domain.com/shop1/. The shops are also accessible via a subdomain, e.g. shop1.domain.com. What I'm trying to accomplish: display shop1.domain.com to the user while keeping all of the actual server calls as domain.com/shop1, so that the secure pages continue to work properly. (Not sure if I'm using the proper language exactly; I hope my point is clear.) To be clear: my SSL is working fine, I don't need help with that, and I don't need or want to purchase a UCC cert. It can't be that difficult for anyone with experience with Apache. (I've spent 3 hours trying to learn about mod_rewrite. It's just not clicking.) I'm on a GoDaddy secure shared server, so please keep in mind that I'm not able to reset the server or anything.

    Read the article

  • Transfer video off iPhone 3GS

    - by Zman771
    I recorded a bunch of video on my iPhone 3GS and want to copy it all to my computer without emailing each one individually to myself (in addition, some of the videos are too big to email). Is there an easy way to do this? (jailbreak options ok as well)

    Read the article

  • Sending email from an alternative domain to protect my "core" domain from spam filters

    - by Jack7890
    I run a website (seatgeek.com) that sends a lot of transactional email to users -- account updates, alerts, etc. It's important to us that our domain remains clean in the eyes of spam filters. We'd like to roll out an email marketing campaign. It's nothing particularly spammy, but this would be the first time we ever emailed people who hadn't expressly asked to receive email from us; it's to market a new product we built to a specific niche of professionals. In order to protect our domain in the eyes of spam filters, we're considering sending the marketing email from an alternative domain -- one that hosts an alternative landing page we sometimes use for this new product. Is there any way this could backfire on us? Does it seem like a particularly poor idea?

    Read the article

  • Netcat file transfer problem

    - by thepurplepixel
    I have two custom scripts I just wrote to facilitate transferring files between my VPS and my home server. They are both written in bash (short & sweet). To send:

        #!/bin/bash
        SENDFILE=$1
        PORT=$2
        HOST='<my house>'
        HOSTIP=`host $HOST | grep "has address" | cut --delimiter=" " -f 4`
        echo Transferring file \"$SENDFILE\" to $HOST \($HOSTIP\).
        tar -c "$SENDFILE" | pv -c -N tar -i 0.5 | lzma -z -c -6 | pv -c -N lzma -i 0.5 | nc -q 1 $HOSTIP $PORT
        echo Done.

    To receive:

        #!/bin/bash
        SERVER='<myserver>'
        SERVERIP=`host $SERVER | grep "has address" | cut --delimiter=" " -f 4`
        PORT=$1
        echo Receiving file from $SERVER \($SERVERIP\) on port $PORT.
        nc -l $PORT | pv -c -N netcat -i 0.5 | lzma -d -c | pv -c -N lzma -i 0.5 | tar -xf -
        echo Done.

    The problem is that, for a very quick second, I see something flash along the lines of "Connection Refused" (before pv overwrites it), and no file is ever transferred. The port is forwarded through my router, and nmap confirms it:

        ~$ sudo nmap -sU -PN -p55515 -v <my house>
        Starting Nmap 5.00 ( http://nmap.org ) at 2010-04-21 18:10 EDT
        NSE: Loaded 0 scripts for scanning.
        Initiating Parallel DNS resolution of 1 host. at 18:10
        Completed Parallel DNS resolution of 1 host. at 18:10, 0.00s elapsed
        Initiating UDP Scan at 18:10
        Scanning 74.13.25.94 [1 port]
        Completed UDP Scan at 18:10, 2.02s elapsed (1 total ports)
        Host 74.13.25.94 is up.
        Interesting ports on 74.13.25.94:
        PORT      STATE         SERVICE
        55515/udp open|filtered unknown
        Read data files from: /usr/share/nmap
        Nmap done: 1 IP address (1 host up) scanned in 2.08 seconds
        Raw packets sent: 2 (56B) | Rcvd: 5 (260B)

    Also, running netcat normally doesn't work either:

        squircle@summit:~$ netcat <my house> 55515
        <my house> [<my IP>] 55515 (?) : Connection refused

    Both boxes are Ubuntu Karmic (9.10). The receiver has no firewall, and outbound traffic on that port is allowed on the sender. I have no idea what to troubleshoot next. Any ideas? P.S.: Feel free to move this to SO/SF if you feel it would fit better there.
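
    One thing worth noting when reading the scan above: -sU probes UDP, while plain netcat connects over TCP by default, so a TCP-side check (a sketch; <my house> stands in for the real hostname) may tell a different story:

        nmap -sT -PN -p55515 <my house>   # TCP connect scan of the same port
        nc -vz <my house> 55515           # zero-I/O TCP connect test with netcat itself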

    Read the article

  • HTTPS in sub domain redirects to main domain

    - by Amitabh
    We recently bought a wildcard certificate and installed it for a domain. It works fine for the main domain but does not seem to work at all for any subdomains. What's happening is that we can access the subdomains fine over HTTP, but whenever we try HTTPS for the same subdomain URL we are redirected back to the main domain. So if I put a test folder "httpstest" in a subdomain with an index.html file in it, the following happens: mysubdomain.mywebsite.com/httpstest/index.html or mysubdomain.mywebsite.com/httpstest/ works perfectly fine with http://, but mysubdomain.mywebsite.com/httpstest/ or mysubdomain.mywebsite.com/httpstest/index.html does not work with https:// and redirects to the main domain. Any help on this is greatly appreciated. The site is not the main site used for setting up the VPS; it was added from WHM. Environment: we are on a Linux VPS with cPanel 11.30.6, Apache 2.2.22 and PHP 5.3.13. The VirtualHost entry looks like:

        <VirtualHost xx.xx.xxx.xx:443>
            ServerName my-own-website.com
            ServerAlias www.my-own-website.com
            DocumentRoot /home/amitabh/public_html
            ServerAdmin [email protected]
            UseCanonicalName Off
            CustomLog /usr/local/apache/domlogs/my-own-website.com combined
            CustomLog /usr/local/apache/domlogs/my-own-website.com-bytes_log "%{%s}t %I .\n%{%s}t %O ."
            ## User amitabh # Needed for Cpanel::ApacheConf
            <IfModule mod_suphp.c>
                suPHP_UserGroup amitabh amitabh
            </IfModule>
            <IfModule !mod_disable_suexec.c>
                SuexecUserGroup amitabh amitabh
            </IfModule>
            ScriptAlias /cgi-bin/ /home/amitabh/public_html/cgi-bin/
            SSLEngine on
            SSLCertificateFile /etc/ssl/certs/my-own-website.com.crt
            SSLCertificateKeyFile /etc/ssl/private/my-own-website.com.key
            SSLCACertificateFile /etc/ssl/certs/my-own-website.com.cabundle
            CustomLog /usr/local/apache/domlogs/my-own-website.com-ssl_log combined
            SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
            <Directory "/home/amitabh/public_html/cgi-bin">
                SSLOptions +StdEnvVars
            </Directory>
            # To customize this VirtualHost use an include file at the following location
            # Include "/usr/local/apache/conf/userdata/ssl/2/amitabh/my-own-website.com/*.conf"
        </VirtualHost>

    Update: I originally could not get the formatting right here, so I also posted the same question in a Linux forum. I would really appreciate any pointers on the issue.

    Read the article

  • Secure copying (file transfer) between two Linux servers in the same datacenter (Linode)

    - by MountainX
    I have two Linodes in the same data center. I want to copy files from one to the other each night, or on demand, for about the next month until this project is finished, so I'm thinking about using rsync. My question is: how do I set up the two Linode servers to communicate securely via private IP addresses? Both servers are SSH-hardened, use DenyHosts and have a fairly restrictive iptables setup. I know I need to first assign private IP addresses to each server, then configure static networking according to this guide. What comes next? What SSH or iptables settings are needed to allow these two servers to communicate? What further info do I need to supply in this question? I'm looking for a basic step-by-step guide for how to do this.
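
    A minimal sketch of the end state (assuming the private addresses are already configured per that guide, an SSH key pair is in place, and a "backup" user exists on the receiving side; 192.168.134.5/192.168.134.6 and the paths are hypothetical):

        # on the receiving Linode: only accept SSH from the other Linode's private address
        iptables -A INPUT -p tcp -s 192.168.134.5 --dport 22 -j ACCEPT
        # on the sending Linode: nightly copy over SSH across the private network
        rsync -az --delete /srv/project/ backup@192.168.134.6:/srv/project-backup/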

    Read the article

  • Odd FTP ASCII/BINARY transfer behavior after migrating to a new server

    - by Incognita Mundi
    I recently got a dedicated server, and after migrating to the new machine I started noticing problems with file transfers. My FTP client, despite being set to auto, keeps uploading PHP files in binary mode, and the content of those files gets messed up. Since I mostly upload files of different kinds, it would be annoying to switch between binary and ASCII every single time; besides, I have never had such problems before. What could be the cause of this behavior? My dedicated server runs CentOS 6.4 and the FTP server is Pure-FTPd. I tried different FTP clients and they all have the same problem, so I assume it is some misconfiguration on the server side. Thanks
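
    One way to take client auto-detection out of the picture (a sketch with a hypothetical host and credentials) is to pin the mode explicitly in a scripted session with the stock ftp client and see whether the uploaded file still gets mangled:

        printf 'user myuser mypassword\nbinary\nput index.php\nbye\n' | ftp -inv ftp.example.com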

    Read the article

  • How long would this file transfer take?

    - by CT
    I have 12 hours to back up 2 TB of data. I would like to back up to a network share on a computer using consumer WD 2TB Black 7200rpm hard drives, over Gigabit Ethernet. What other variables do I need to consider to see if this is feasible? How would I set up this calculation?
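
    A back-of-the-envelope feasibility check (a sketch; it assumes 2 TB ~ 2,000,000 MB and ignores protocol overhead):

        DATA_MB=2000000             # ~2 TB expressed in MB
        WINDOW_S=$((12 * 3600))     # 12-hour window in seconds
        echo "Required throughput: $((DATA_MB / WINDOW_S)) MB/s"   # prints ~46 MB/s
        # Gigabit Ethernet peaks near 125 MB/s raw (~110 MB/s in practice) and a 7200rpm
        # SATA disk sustains very roughly 100-150 MB/s sequentially, so ~46 MB/s looks
        # feasible -- provided the destination disk and the transfer protocol keep up.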

    Read the article

  • Windows 7 File Transfer Speed over Gigabit is slow

    - by Adam Haile
    I've got Windows 7 Pro running on my file server and my main desktop. Each has a gigabit network connection and is plugged into a gigabit switch. However, when copying some large files, transfers run at a measly 12-15 MB/s. The data is coming from a 7200RPM SATA drive (which I think should be good for almost 150 MB/s) and going to a Drobo on the server connected via FireWire 800, so I can't think of any hardware bottlenecks. But TeraCopy still says it's only going at 12-15 MB/s. What else could be wrong here?
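
    One way to separate raw network throughput from the disk/FireWire side (a sketch; iperf3 would need to be installed on both machines, and 192.168.1.10 is a placeholder for the server's address):

        iperf3 -s                        # run on the file server
        iperf3 -c 192.168.1.10 -t 30     # run on the desktop; ~900+ Mbit/s here means the network
                                         # is fine and the Drobo/FireWire path is the likelier bottleneck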

    Read the article

  • Transfer file over ssh

    - by datasunny
    Hi all, does the SSH protocol have a mechanism for file transfer? I'm working on an existing code base that already has SSH facilities in it, and now I need to transfer files over the SSH connection. If the SSH protocol already supports this, I won't have to integrate scp functionality into it. Thanks.
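
    For what it's worth, the SSH family already ships with file transfer via scp and the SFTP subsystem; a couple of minimal examples (host, user and paths are placeholders):

        scp ./report.log user@server.example.com:/tmp/             # single file over SSH
        scp -r ./logs/ user@server.example.com:/var/backups/logs/  # whole directory, recursively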

    Read the article

  • 1 domain.. 2 server and 2 applications

    - by basit.
    I have a site like twitter.com on server one, and on server two I have a forum whose path is domain.com/forum. On server one I want to implement wildcard DNS and put the main domain on it, but on server two I want to keep the forum separate. I can't move the forum to a subdomain like forum.domain.com, because all of its links are already indexed by search engines and point back to domain.com/forum. So I was wondering: how can I put the domain and wildcard DNS on server one and still serve domain.com/forum (as a sub-folder) from server two? Any ideas? Do you think .htaccess can do the job? If yes, then how?

    Read the article
