Search Results

Search found 11897 results on 476 pages for 'dean rather'.

  • RSA keys - virtual hosts

    - by Bosworth99
    Pardon my noobness, but I just got started with VPS (Linux) hosting; setting up passwordless SSH for multiple users has proved to be kind of a pain. Currently I'm the single user of this Ubuntu 10.04 LTS VPS (linode.com). I was able to establish a single RSA key under my /home/user/.ssh/authorized_keys location. Fine. PuTTY works as expected, and FileZilla (SFTP) links up as required. I've been working on a single site that this user owns, and that's not been a problem. Now I want to set up some other sites, and I've chosen Webmin with the Virtualmin plugin to make this work. I made another user (or, rather, Virtualmin did), but I've been unable to get FileZilla to link up as this new user. Could anyone with experience here explain what the setup is supposed to look like? I.e., can I use a single RSA key pair for all accounts (if, for example, I give ownership of files to the original user)? Or is it standard practice to create a separate key pair for each user, and establish a separate PuTTY/FileZilla login for each? I've spent enough time dinking around with this to be frustrated. The "Server rejected the provided key" error sucks after the fifth hour. I'm about to set up an FTP server and call it a day. Any thoughts would be most welcome -
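
    A minimal sketch of the usual per-user setup, for reference ("newuser" and the key file name are placeholders, not taken from the question): each account gets its own authorized_keys file owned by that account, and PuTTY/FileZilla simply load whichever private key matches.

      # on the VPS, as root; "newuser" is a placeholder for the Virtualmin-created account
      install -d -m 700 -o newuser -g newuser /home/newuser/.ssh
      cat newuser_key.pub >> /home/newuser/.ssh/authorized_keys   # public key exported from PuTTYgen
      chown newuser:newuser /home/newuser/.ssh/authorized_keys
      chmod 600 /home/newuser/.ssh/authorized_keys

    The same public key can be appended to several accounts' authorized_keys files if one key pair for everything is preferred; FileZilla then only needs the username changed per site.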

  • How can I explain to dspam that the user "brandon" is the same as "brandon@mydomain"

    - by Brandon Craig Rhodes
    I am using dspam for spam filtering by running the "dspamd" daemon under Ubuntu 9.10 and then setting up a Postfix rule that says: smtpd_recipient_restrictions = ... check_client_access pcre:/etc/postfix/dspam_everything ... where that PCRE map looks like this: /./ FILTER lmtp:[127.0.0.1]:11124 This works well, and means that all users on my system get all of their email, whether "dspam" thinks it is innocent or not, and have the option of filtering on its decisions or ignoring them. The problem comes when I want to train dspam using my email archives. After reading about the "dspam" command, I tried this on the files in my Inbox and spam boxes (which date from when I was using another filtering solution): for file in Mail/Inbox/*; do cat $file | dspam --class=innocent --source=corpus; done for file in Mail/spam/*; do cat $file | dspam --class=spam --source=corpus; done The symptom I noticed after doing all of this was that dspam was horrible at classifying spam — it couldn't find any! The problem, when I tracked it down, was that I was training the user "brandon" with the above commands, but the incoming email was instead compared against the username "brandon@mydomain", so it was running against a completely empty training database! So, what can I do to make the above commands actually train my fully-qualified email address rather than my bare username? I would like to avoid having to run "dspam" as root with a "--user" option. I would have expected that the "dspam" configuration files would have had an "append_domain" attribute or something with which to decorate local usernames with an appropriate email domain, but I can't find any such thing. When I used to use the Berkeley DB backend to "dspam", I solved this problem by creating a symlink from one of the databases to the other. :-) But that solution eventually died because the BDB backend is not thread-safe, so now I have moved to the PostgreSQL back-end and need a way to solve the problem there. And, no, the table where it keeps usernames has a UNIQUE constraint that prevents me from listing both usernames as mapping to the same ID. :-)
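
    For reference, a sketch of the same training loop with dspam's --user switch (this is exactly the approach the question is trying to avoid, since it generally has to be run as root; the address is the question's own placeholder):

      for file in Mail/Inbox/*; do cat "$file" | dspam --user brandon@mydomain --class=innocent --source=corpus; done
      for file in Mail/spam/*;  do cat "$file" | dspam --user brandon@mydomain --class=spam     --source=corpus; done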

  • How to create an MST for silent install using Orca?

    - by Sanarothe
    Hi. I'm trying to deploy 7zip via GPO; I assigned the original MSI, but the package installation simply doesn't take place. What I've gathered is that I need to create an MST. In the spirit of trying to learn as much as possible about it, I've opted to use Orca rather than a third-party automagic tool, but I'm at a loss as to which fields to edit. So far the only change that I've made is to give the license-accepted checkbox a value of "1" instead of pointing to another key that, still, just gave it a value of "1". So, to give this some structure:
    1. How does creating an MST make the install non-interactive/silent (or what criteria should I consider)? Do you have to manually reconfigure the MSI to simply not perform the GUI aspects? Or do I have to execute the program in silent mode after defining the variables that the installer requests? (Though, of course, it seems that would defeat the purpose of the MST.)
    2. How do I determine which fields I need to edit? I've loaded the installer and it takes three inputs: license acceptance, feature set and installation location. I want all of the default values: I'm just trying to deploy it at all, not customize the installation. I BELIEVE that I should be messing with some values in the Registry table, but I really don't know.
    If I'm not asking the right questions, can someone point me to a THOROUGH resource or documentation for this process? I've already gone over the TechNet articles on basic Orca use and deployment, but I couldn't really find anything on creating an MST that didn't involve a third-party program in which one runs a 'dummy' installer to get the before and after snapshots. Thank you very much, Cameron
    UPDATE: After spending the day troubleshooting, I finally got my server to send out 7zip, but not until I had also assigned Firefox. Not sure why it didn't want to send out 7zip by itself, but I also had some domain naming problems. Thanks for the input (GPResult helped enormously).
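
    As an aside, a quick way to test locally whether a transform actually produces a silent install before pushing it through GPO is something along these lines (file names here are placeholders): GPO-assigned packages install with no UI in the machine context anyway, so the MST mostly needs to supply property values rather than suppress dialogs.

      msiexec /i 7z920-x64.msi TRANSFORMS=7zip-silent.mst /qn /norestart /l*v install.log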

  • What is optimal hardware configuration for heavy load LAMP application

    - by Piotr K.
    I need to run a Linux-Apache-PHP-MySQL application (the Moodle e-learning platform) for a large number of concurrent users - I am aiming at 5000 users. By concurrent I mean that 5000 people should be able to work with the application at the same time. "Work" means not only database reads but writes as well. The application is not very typical, since it is doing a lot of inserts/updates on the database, so caching techniques are not helping too much. We are using the InnoDB storage engine. In addition, the application is not written with performance in mind. For instance, one Apache thread usually occupies about 30-50 MB of RAM. I would be grateful for information on what hardware is needed to build a scalable configuration that is able to handle this kind of load. We are right now using two HP DL380 servers with two 4-core processors each, which are able to handle a much lower load (typically 300-500 concurrent users). Is it reasonable to invest in this kind of boxes and build a cluster using them, or is it better to go with some more high-end hardware? I am particularly curious about:
    - how many servers are needed and how powerful they must be (number of processors/cores, size of RAM)
    - what network equipment should be used (what kind of switches, network cards)
    - any other hardware, like particular disk storage solutions, etc., that is needed
    Another thing is how to put everything together, that is, what the most optimal architecture is. Clustering with MySQL is rather hard (people are complaining about MySQL Cluster, even here on Stack Overflow).
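
    A rough back-of-the-envelope check of the stated memory footprint (an assumption-laden estimate, not a sizing recommendation): if each concurrently active request pins an Apache worker at roughly 40 MB, then 5000 simultaneously busy workers would need on the order of 5000 x 40 MB ≈ 200 GB of RAM for Apache alone, before MySQL's buffer pool. In practice far fewer than 5000 workers are busy at any given instant, but the arithmetic shows why the per-thread footprint, rather than raw CPU, tends to dominate the hardware question here.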

  • SQLVDI error - attempt to release mutex not owned by caller

    - by Chris W
    I've started getting some errors in the App event log of one of our database servers (Windows 2003 & SQL Server 2005). The nightly full database backups are completing successfully; however, immediately after the job success is written to the event log there is a run of entries that say: SQLVDI: Loc=CVDS. Desc=Release(ClientAliveMutex). ErrorCode=(288)Attempt to release mutex not owned by caller. There are five of these logged - the server itself has more than 20 databases on it, which are all backed up successfully. The server is backed up by Bacula using a VSS backup. Has anyone got any ideas what would be causing the errors? They seem to have started after a reboot on Friday to install some patches, which included KB960089. Edit: After getting the errors for a few days they've now stopped, without any action on my part other than letting the backups continue as they were. It may be a coincidence, but they stopped after Bacula completed its weekly full rather than the daily incremental backup.

  • IDE/PATA high-speed hard drive dock

    - by wfaulk
    I frequently need to access bare drives for backups and need a quick, high-speed way to deal with them. There are a multitude of SATA hard drive docks (for example), but I have a lot of IDE/PATA (hereafter "IDE") drives that I would like to be able to use similarly. There are IDE-to-SATA adapters so you can plug your IDE hard drive into a SATA port, so I don't see any reason why you couldn't use the same technology to have a native dock, yet none seems to exist. Now, I'm aware that 3.5" IDE drives do not have a specification for the layout of the connector, and therefore can't be slapped into a dock the same way a SATA drive could, but 2.5" PATA drives do. In fact, I'm not terribly interested in supporting 3.5" drives. It would be nice, but I deal with them far less frequently than 2.5" drives. Also, I'd very much like for the connection to the computer to be faster than USB, preferably eSATA; I don't want to be spending time mounting a drive inside an enclosure, I don't want bare drives lying around with a cable hanging off of them, and I'd prefer a single dock rather than two. What seems like the ideal solution to me would be a regular SATA→eSATA dock and some sort of screwless adapter for IDE drives, but I'm open to any suggestions regardless of my stated preferences, which are, in some sort of order of preference:
    - high-speed (faster than USB, at least)
    - a holder for the drive (not just a cable)
    - no complicated enclosure
    - support for 3.5" IDE drives
    - a single dock
    Updates: Here's a 3.5" IDE to 3.5" SATA docking adapter that could be part of the solution. Weird - I figured that would be the impossible part. I was hoping to find something like this 2.5" to 3.5" SATA chassis that would take a 44-pin IDE drive internally. It looks like the Vantec EZ Swap EX comes awfully close. It has its own bay dock, but it looks like the SATA ports on the back are spaced properly, even if they're not aligned quite properly. Unfortunately, the proper position is at the very edge of the drive, which means that the dock's connectors are at the very edge of their recesses, which means there's no way to fit it in there.

  • Windows 32-bit and 64-bit and GPT

    - by MrLane
    I know similar questions have been asked before across several sites, but the answers, at least to me, have been confusing and conflicting. My understanding has always been that 64-bit Windows will create and use GPT disks just fine, but will not boot from them without a UEFI BIOS. Also my understanding WAS that 32-bit Windows could not use GPT at all and so is always restricted to 2.2 TB disks, which was another reason to move to 64-bit on top of the 4 GB memory limit. But I have now read that this isn't correct: 32-bit Windows will create and use GPT disks just as 64-bit does. The only restriction is that you can't boot 32-bit Windows from a GPT disk even if you DO have a UEFI BIOS? I don't think much of the literature has explained this well. There are several tools floating around for creating virtual disks or 2.2+0.8 TB partition schemes and such for 32-bit systems - why, when it seems you can use GPT in 32-bit Windows anyway? It also seems that people blame MS for lagging behind with respect to all of this, but it seems the issue is with BIOS manufacturers not supporting UEFI rather than MS not supporting GPT... Is my new understanding now correct?

  • Can't Connect To Local Mysql Using IP Address, but CAN connect from remote server

    - by user1782041
    Here's an interesting one that does not seem to fall into any of the MySQL connection issues I've read about or searched for: On an Ubuntu 12.04 box I had some system updates waiting to install, and I took care of that this evening. After the install, I started seeing some errors in my syslog complaining about a particular PHP script that could no longer connect to the MySQL instance on the box. Here is the specific error: PHP Warning: mysql_connect(): Can't connect to MySQL server on '192.168.0.40' (4) Now, the server's IP address is 192.168.0.40, and I've checked to make sure that I have MySQL listening on 0.0.0.0 so that I can connect using either "localhost" or "192.168.0.40". Here's where things get odd: From the local machine, if I try the following: mysql -uroot -p -h192.168.0.40 I get this error: ERROR 2003 (HY000): Can't connect to MySQL server on '192.168.0.40' (110) I've checked, and error 110 indicates an OS timeout, and error 2003 is the MySQL generic "can't connect" error. This indicates that it is not a permissions problem with the user. However, if I do the same thing from a remote machine (say, from 192.168.0.30), I log right in with no problems. Further, other scripts on the local machine that connect to MySQL using "localhost" for the host rather than "192.168.0.40" connect with no problems. Also, I can connect via the MySQL socket with no problems, both from the command line and from PHP scripts. So, this feels like a networking issue of some kind on the local box, but there are no iptables rules on this box (it is firewalled externally) and I can't figure out what else may be causing this. This problematic script worked perfectly prior to the latest system update. For now, I'll simply change the script to connect via localhost, but I'd really like to know why it broke, for 2 reasons:
    1. There may be other scripts that connect using 192.168.0.40 that don't run very often which are now broken. Auditing them all will take more time than I feel like devoting at the moment.
    2. I'm curious, and want to know why it broke so I can fix it correctly.
    Any help?
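
    A few quick checks that narrow down where the timeout is introduced (a sketch only; adjust port/interface to the actual setup):

      # confirm mysqld is really listening on 0.0.0.0:3306, not just 127.0.0.1
      sudo netstat -tlnp | grep 3306        # or: ss -tlnp | grep 3306
      # confirm the grants distinguish 'localhost' from the LAN address
      mysql -uroot -p -e "SELECT user, host FROM mysql.user;"
      # watch whether the connection attempt to 192.168.0.40 ever reaches mysqld
      sudo tcpdump -i any -n port 3306

    Since error 2003/(110) is a timeout rather than an access-denied error, the tcpdump trace is the most telling of the three: it shows whether the SYN to 192.168.0.40 is being answered at all.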

  • Virtual Machines and Automatic Software Updates

    - by Zian Choy
    It's obvious that one's main computer should always have all the latest security patches, and most people don't blink an eye when Microsoft Update installs non-security updates. In the land of virtual machines, I've run into 2 problems with automatic updates:
    1. The virtual machines are only run when needed.
    2. Only Windows virtual machines seem to patch themselves.
    To elaborate on #1, I generally make a virtual machine with a purpose in mind. For example, when I needed an old copy of Internet Explorer to reproduce a bug in RSS Bandit, I had a Virtual PC named RSS Bandit. The machine only stayed running for a few minutes at a time. Consequently, there is no downtime for the machine to download updates at 3 AM. To elaborate on #2, I've noticed that if I haven't run a Windows virtual machine in a while, then the moment I log in, the computer frantically downloads updates and within seconds, if I click the Start button, there is a little orange shield next to the "Shutdown" button. However, I ran a freshly created Ubuntu VM for several hours today with hundreds of updates pending and it seemed to never download or install any of them. Is there any reason to be concerned about running VMs with dozens of security holes? If I should be concerned, then is there any way to get Ubuntu to download and install updates rather than just advertising a long list of updates to download next century? I've already tried telling Ubuntu to automatically download and install updates.
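
    For the Ubuntu guest, a sketch of pushing the pending updates through from a terminal, and of enabling unattended upgrades so short-lived sessions still pick up security fixes (stock Ubuntu package names; whether this addresses the original GUI behaviour is a separate question):

      sudo apt-get update && sudo apt-get -y upgrade      # apply everything pending right now
      sudo apt-get install unattended-upgrades
      sudo dpkg-reconfigure -plow unattended-upgrades     # enables the daily security-update job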

  • email dropbox between two mutually untrusted sites

    - by user52874
    I've an interesting problem that I thought was straightforward, but it turns out I think I'm whistling down the wrong path. It has to do with (shudder) email. I thought I was done with needing to know about email guts ten years ago; I was wrong. Anyway. Simply put, I need to figure out how to relay outgoing email that is not addressed within our domain from our domain into a 'dropbox' in a DMZ, and the Other Guys can retrieve that email from their side of the DMZ and distribute it accordingly, even out to the public internet if need be. There will be no [un-established] traffic coming back to Our side from anywhere; any attempts to do so are dropped with malicious prejudice. Our side is Postfix running on Scientific Linux 6.1. The DMZ boxes are Red Hat 5.4. The Other Guys are M$ Exchange. The firewalls are set up such that data can go from Our side downsec to the DMZ, but not upsec from the DMZ into Our side. Same for the Other Guys. My first thought was simply to set up Postfix on a box in the DMZ and tell them to set up fetchmail or whatever the M$ equivalent is, but then I started remembering that Postfix wants to actively relay email onwards, rather than hold it and wait for someone to 'reach in' and retrieve it. I'm not sure I've explained this well, but hopefully it's clear enough that someone can point me in the right direction. I seem to remember having done this before, but it was a looong time ago. Thanks!
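
    A sketch of the Our-side half only, under the assumption that the DMZ box runs its own MTA that accepts and queues the mail (hostnames below are placeholders). The question is right that Postfix pushes mail onward rather than holding it for pickup, so the 'dropbox' behaviour would have to live on the DMZ box itself - e.g. delivering to a local mailbox there that the Other Guys poll over POP3/IMAP.

      # /etc/postfix/main.cf on the internal (Our Side) server
      relayhost = [dropbox.dmz.example]              # placeholder name for the DMZ box
      mydestination = ourdomain.example, localhost   # only our own domain is delivered locally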

  • For Australian audiences, would an uncached .com.au domain resolve faster than an uncached .com?

    - by thomasrutter
    Is there any speed benefit to using a .com.au domain rather than a .com if your customers, hosting and DNS services are in Australia, specifically in the worst typical case (the domain is not cached in any local DNS relay for the customer)? Assume that both domains point to the same nameservers in the end. I know this is mostly academic, because we are talking about a DNS lookup that would take at most a few hundred milliseconds and would only be relevant once at the beginning of a session. I just was curious. I know that an uncached .com lookup will involve consulting at least one ?.gtld-servers.net. server and an uncached .com.au will involve consulting at least one ?.au. server. Now, what I guess I'd need to know is: are the various ?.gtld-servers.net. servers using anycast technology that would have local fully authoritative nodes in Australia, making them just as fast to Australians as ?.au. and avoiding a 200ms+ overseas latency, or are some or all of them hosted only in the US or in the northern hemisphere?
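
    One way to answer this empirically from an Australian host rather than by reasoning about anycast deployments (a sketch; example.com / example.com.au are placeholders - any existing names in each TLD will do):

      dig com. NS +short                    # the ?.gtld-servers.net set
      dig au.  NS +short                    # the .au TLD servers
      AU_NS=$(dig au. NS +short | head -1)
      # query one TLD server from each set directly and compare the ";; Query time:" lines
      dig @a.gtld-servers.net example.com    NS +norecurse
      dig "@${AU_NS}"         example.com.au NS +norecurse

    If the .com TLD server answers in single-digit milliseconds from an Australian vantage point, it has an anycast node nearby and the worst-case difference largely disappears.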

  • Terminal server performance over high latency links

    - by holz
    Our datacenter and head office is currently in Brisbane, Australia, and we have a branch office in the UK. We have a private WAN with a 768k link to our UK office and the latency is at about 350ms. The terminal server performance is reeeeealy bad. Applications that don't have too much animation or any images seem to be okay, but as soon as they do, the session is almost unusable. PowerPoint and Internet Explorer are good examples of apps that make it run slow. And if there is an image in your email signature, Outlook will hang for about 10 seconds each time a new line is inserted, while the image gets moved down a few pixels. We are currently running Server 2003. I have tried Server 2008 R2 RDS, and also a third-party solution called Blaze by a company called Ericom, but it is still not too much better. We currently have a 5-level dynamic class of service with the priority in the following order:
    1. VoIP
    2. Video
    3. Terminal Services
    4. Printing
    5. Everything else
    When testing the terminal server performance, the link was monitored using NetFlow, and we have plenty of bandwidth available, so I believe that it is a latency issue rather than bandwidth. Is there anything that can be done to improve performance? Would Citrix help at all?

  • Does Hyper-V support SCSI Pass-through discs in a Server 2003 R2 VM?

    - by Peter Bernier
    I'm running into some difficulties getting pass-through disks to be accessible to a Server 2003 R2 virtual machine under Hyper-V.
    Host OS: Server 2008 R2 full w/Hyper-V role
    Guest OS: Server 2003 R2 (Windows Home Server)
    The guest's OS disk is a pass-through disk on the IDE controller (not the best solution, but I can live with it). My storage disks will be pass-through disks on the SCSI controller. I'm able to see all of the disks that I'll be using for the VM on the host without issue. The problem that I'm having is that I can't seem to get the guest OS to 'see' the storage drives (as pass-through disks on the SCSI controller). Here's what I'm doing:
    - On the host, the storage drive is set to 'Offline', just like the OS disk (this is required for pass-through to work).
    - In the VM, the storage drive is on the SCSI controller.
    - Hyper-V Integration Tools are installed in the guest.
    That's as far as I'm able to get. I don't see the drive in Computer Management, or in Windows Explorer (I've tried with an unformatted disk, as well as after formatting a partition). I am able to see a removable device that lists the disk's model number in the guest, but I can't seem to access the storage. (I get an entry in Device Manager that needs drivers, but nothing on the Integration Tools disc works.) Troubleshooting steps I've tried:
    - If I put the pass-through drive on the IDE controller, I can see it in the guest.
    - If I put the storage drive 'Online' in the host and create a VHD on it on the SCSI controller, I can see it in the guest.
    I suppose I could create a fixed-size VHD that consumes the entire disk, but I'd rather not have that overhead. I've also extracted the contents of the Integration Tools drivers (x86 and amd64) and tried pointing the disk controller to each of those, with no luck. Can anyone offer suggestions as to how I can get this to work properly?

  • Managing Apache to Compensate for WebDAV's Security Masking

    - by Tohuw
    When a user creates a file via WebDAV, the default behavior is that the file is owned by the user and group running the Apache process, with a umask of 022. Unfortunately, this makes it impossible for unprivileged users to write to the files by other means without being a member of the group Apache runs under (which strikes me as a particularly bad idea). My current solution is to set umask 000 in Apache's envvars and remove all world permissions from the webdav parent directory for the user. So, if the WebDAV share is /home/foo/www, then /home/foo/www is owned by www-data:foo with permissions of 770. This keeps other unprivileged users out, more or less, but it's hokey at best and a security disaster awaiting at worst. From my research and poking around at mod_dav and Apache, I cannot find a reasonable solution short of a cron job flipping all the permissions back (I'd rather not have the load and increased complexity on the server). SuExec won't work, either, because WebDAV operations are not going to execute as a different user. Any thoughts on this? Thank you.
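
    For comparison, a sketch of the common group-based compromise (assuming the Debian/Ubuntu envvars layout the question already uses; this does not change mod_dav's behaviour, it just avoids the 000 umask): keep the existing www-data owner on the tree, make the directory setgid to the user's own group, and relax Apache's umask only to the group level, so files created over WebDAV come out as www-data:foo with group write for foo and no world access.

      # /etc/apache2/envvars - files created via WebDAV become group-writable, not world-writable
      umask 002

      # one-time setup on the share (ownership stays www-data:foo as described in the question)
      chgrp -R foo /home/foo/www
      chmod 2770 /home/foo/www                         # setgid: new files inherit group 'foo'
      find /home/foo/www -type d -exec chmod g+s {} +  # same for existing subdirectories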

  • Postfix aliases and duplicate e-mails, how to fix?

    - by macke
    I have aliases set up in Postfix, such as the following: [email protected]: [email protected], [email protected] ... When an email is sent to [email protected], and any of the recipients in that alias is cc:ed, which is quite common (i.e. "Reply all"), the e-mail is delivered in duplicate. For instance, if an e-mail is sent to [email protected] and [email protected] is cc:ed, it'll get delivered twice. According to the Postfix FAQ, this is by design, as Postfix sends e-mail in parallel without expanding the groups, which makes it faster than sendmail. Now that's all fine and dandy, but is it possible to configure Postfix to actually remove duplicate recipients before sending the e-mail? I've found a lot of posts from people all over the net who have the same problem, but I have yet to find an answer. If this is not possible to do in Postfix, is it possible to do it somewhere along the way? I've tried educating my users, but it's rather futile, I'm afraid... I'm running Postfix on Mac OS X Server 10.6, amavis is set as content_filter and dovecot is set as mailbox_command. I've tried setting up procmail as a content_filter for smtp delivery (as per the suggestion below), but I can't seem to get it right. For various reasons, I can't replace the standard OS X configuration, meaning postfix, amavis and dovecot stay put. I can however add to it if I wish.
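
    For reference, the classic procmail duplicate filter looks like this (a sketch only - wiring procmail in ahead of dovecot on the stock OS X Server configuration is exactly the part the question is still struggling with): formail keeps a small cache of recent Message-IDs and the recipe silently drops any message whose ID has already been seen.

      # ~/.procmailrc
      :0 Wh: msgid.lock
      | formail -D 8192 .msgid.cache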

  • copSSH and cygwin - Can't use windows style paths

    - by DrFredEdison
    I set up copSSH on one of my Windows servers, and within the copSSH bash shell, I can't seem to use Windows-style paths to remove and copy files. If I do try, I get the following:
      $ /bin/cp -r C:/Domains/_temp/collage_push/* C:/Domains/collage/
      cygwin warning:
        MS-DOS style path detected: C:/Domains/_temp/collage_push/
        Preferred POSIX equivalent is: /cygdrive/c/Domains/_temp/collage_push/
        CYGWIN environment variable option "nodosfilewarning" turns off this warning.
        Consult the user's guide for more details about POSIX paths:
          http://cygwin.com/cygwin-ug-net/using.html#using-pathnames
    I have created a Windows environment variable CYGWIN set to nodosfilewarning. It has no effect. I added export CYGWIN=nodosfilewarning to my .bashrc, and doing an echo $CYGWIN in my ssh session confirms it is indeed getting set; yet again, it has no effect. Finally, I noted that when not doing my own export, CYGWIN contains "nontsec binmode" (no quotes), so I tried export CYGWIN="nodosfilewarning nontsec binmode" in my .bashrc, and still no dice. Older versions of copSSH didn't have this issue. How can I actually override this warning? I have a lot of scripts that already use Windows-style paths, and I'd rather not change them if possible.
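
    One way to sidestep the warning entirely rather than suppress it is to convert the paths at the call site with cygpath (a sketch using the paths from the question; existing scripts would still need this small change, which the question would prefer to avoid):

      src=$(cygpath -u 'C:/Domains/_temp/collage_push')
      dst=$(cygpath -u 'C:/Domains/collage')
      /bin/cp -r "$src"/* "$dst"/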

  • LAMP server VM issues

    - by nullArray
    After getting a recommendation to salvage a wiki by installing a LAMP server, I went on the prowl for a good virtualized one. I used the VMware Player version. Since the Windows box has Bonjour, I can, for example, go to http://lamp.local. and it works - I can see the web client. The problem is, I can't ssh in to scp the files I need, mount a USB thumbdrive (usbfs is unsupported), or get Samba working. I can't even update the Ubuntu installation; it fails. I've tried bridged, NAT and host-only networking settings in VMware Player. Bridged gives me an undefined IP, while the other two each have different IPs. All three settings allow me to access the web config, but none of them give me Samba access. Windows usually freezes, then reports that it cannot connect. I'd rather not wipe a box to do a dedicated install; is there a way I can get this VM working, or are there better LAMP VMs out there? This one came already working and set up with VMware Player, so I thought it would be perfect... Thanks,

  • Wiping Deleted Directory Entries and Defragmenting Directories

    - by Synetech inc.
    Hi, I have seen plenty of apps that wipe free space on a disk (usually by creating a file that is as big as the remaining space) or defragment a file (usually by using the MoveFile API to copy it to a new contiguous area). What I have not seen, however, is a program that wipes the deleted directory entries. That is, when a file is deleted, its information (name, dates, etc.) remains in the directory, but is simply marked as empty. That leaves all kinds of information in a directory entry, and also wastes space since (at least on FAT drives) the directory may be using several clusters. For example, if a directory once had a lot of files, it will have been expanded to use another cluster, which could be anywhere on the disk. This means that the directory is fragmented, and may be using more clusters than needed, possibly with hundreds of unused (ie, "deleted file") entries between active files. Does anyone know of a program that can defragment/consolidate directories (ie, wipe unused entries, and move active entries together)? (I would really rather not have to resort to writing my own yet again.) Thanks a lot. EDIT: Sorry, I should have said, Windows and/or DOS, for FAT*/NTFS.

  • SMTP Unreachable from Specific Networks

    - by Jason George
    I host my business site through a VPS account. The instance runs Ubuntu and I'm using Postfix+Dovecot as my mail server. For the most part, the mail server works fine. I have noticed, however, that I can not send mail from specific local networks. I noticed this at a client's office several months ago. I can receive email, but any time I tried to send mail when connected to their network, the connection would time out. Since I could send my mail after leaving, I chalked it up to improper network configuration and didn't worry about it. Unfortunately I've recently moved, switched service providers, and am forced to use the service provider's router due to the special set-up they put in place to give me DSL in the sticks - well beyond the typical range for a DSL run. Now I'm unable to send email from home, which is a problem. I have tried sending email through my phone (using cellular service rather than my DSL) just to confirm the server is currently working. I'm not even sure where to start debugging. Any ideas on how I might track down the issue would be greatly appreciated.
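
    A first debugging step from the affected network might look like this (a sketch; mail.example.com is a placeholder for the real VPS hostname): check whether outbound port 25 is reachable at all, since residential and office ISPs commonly block it, and compare with the submission port.

      # does port 25 ever answer from this network?
      nc -vz -w 10 mail.example.com 25
      # compare with the submission port (587), which ISPs rarely block
      openssl s_client -connect mail.example.com:587 -starttls smtp -quiet

    If 25 times out from home but 587 connects (assuming the submission service is enabled on the server), the symptom matches an ISP block rather than a fault on the VPS.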

  • Server 2008 R2 DNS Lockup / Stops Resolving Internet Names

    - by Richard Maynard
    We've deployed our first 2008 R2 server on a client site, which has replaced their existing 2003 DC. This server provides DNS resolution services to all client machines on that site for general internet usage. Since using the 2008 R2 DNS services we have noticed that every couple of days the DNS server starts timing out when requests to certain sites are made (Google is the only example I can provide at this time, although it seems to be larger sites with problems rather than small - CDN compatibility issue?). When you restart the DNS Server service, resolution returns to normal... just only for a day or so. Is anybody aware of any significant changes to the DNS server architecture or configuration out of the box in R2 that may explain this intermittent behaviour? I have already tried the fix listed here to no avail: http://weblogs.asp.net/owscott/archive/2009/09/15/windows-server-2008-r2-dns-issues.aspx The following PS command prompt info illustrates the issue:
      PS C:\Users\Administrator.UK> nslookup
      Default Server:  s8209001.uk.kingdomfaith.com
      Address:  10.1.3.4
      > www.google.com
      Server:  s8209001.uk.kingdomfaith.com
      Address:  10.1.3.4
      Non-authoritative answer:
      Name:    www.l.google.com
      Addresses:  66.102.9.99  66.102.9.104  66.102.9.105  66.102.9.103  66.102.9.147
      Aliases:  www.google.com
      > www.google.co.uk
      Server:  s8209001.uk.kingdomfaith.com
      Address:  10.1.3.4
      * s8209001.uk.kingdomfaith.com can't find www.google.co.uk: Server failed

  • SQL Server on Linux

    - by TimothyAWiseman
    For a particular project that is coming up, I am trying to expand my knowledge of Linux, so I am going to set up a Linux system at home. Rather than dual booting, I am thinking about putting SQL Server on a Windows Virtual Machine with Linux as the host at least until this project is over when I will probably switch back to Linux. So, I have a couple of different, but interrelated questions: How well does this work? This is only a test machine at home, so I can easily accept a fair bit of degradation, but if it is going to be a horrible reduction in performance I will dual boot instead. Is there a particular virtual machine manager I should look at to go this route? Since this is my personal machine, price is an issue but I am quite happy to pay a reasonable amount. And finally, given the choice of VMM, is there a particular Linux Distro I should be looking at? [This has been cross posted at Ask.SqlServerCentral.com . I think it may be appropriate at both sites. ]

  • Networking Home Office

    - by Matt
    I'm in the process of building an office in my garden. It's about 25m away from my house. I'd like to run a wired network connection to the office. I'd rather not go down the powerline route, as speeds don't seem great, and I'm likely to want to be moving a lot of data around on the internal network. I have an electrician who is running armoured electrical cable to the office, and is providing conduit for me to run network cable. My questions are:
    1) What type of cable to run
    2) How I terminate/connect it at both ends
    I could get something like armoured Cat6 UTP solid core (like this: http://www.netstoredirect.com/cat6-cable/289166-external-armoured-cat6-utp-solid-cable-price-per-metre.html) which seems fairly robust, but then I have to terminate it. Additionally, where the cable enters my house, there is about another 15m to where my router is situated. I also read this article: http://www.audioholics.com/audio-video-cables/bjc-cat-network-cable-quality-interview which scared me into realising I don't know what I'm doing!! Particularly with termination. Or I could get a "Cat6 external patch cable" (e.g. http://www.netstoredirect.com/rj45-network-cables/239231-external-cat6-utp-ldpe-rj45-patch-leads.html) and run that in the conduit, and work out how to terminate it at the house end. At the office end I guess I can just plug it into a switch. Any help? Thanks

  • Is there a way to "burn" audio to an ISO? (as an audio CD)

    - by Sootah
    I have an audiobook that I've downloaded via their download manager, and it's loaded into their cutesy little audio program that they force you to use. I can play the book just fine using their proprietary software, and while it's annoying when using my PC, it's utterly UNBEARABLE when I try to listen to it on my BlackBerry. The program is INSANELY slow; it literally takes around 30 seconds to switch between tracks, so if I've forgotten where I am in the book it takes me around 15 minutes to finally get to where I was at. I've looked everywhere on how to transcode the book to .MP3, but evidently with their current format it's extremely convoluted (and I have no desire to dick around with installing some older version of the codec, getting a different transcoding app, and then wrestling with getting it to actually work). Since I'm able to burn a copy of the book to an audio CD, I figure the best way to go about this is to just make the CDs and then rip them off of those to .MP3. In order to avoid wasting two hours, not to mention 14 CD-Rs, I was wondering if there's a way to "burn" to an .ISO instead of an actual CD-R. I currently have SlySoft's Virtual CloneDrive installed, so I can mount .ISOs easily enough, but now I want to actually create an ISO via the CD burning process. Just in case I've not explained myself very well, here is an overview of what I intend to do:
    1. "Burn" a set of audio CD .ISOs from the audiobook (hopefully I can do this using Windows Media Player, otherwise I'll be forced to use the audiobook app)
    2. Mount an .ISO in Virtual CloneDrive
    3. Rip the audio tracks on the mounted .ISO to .MP3s
    4. Repeat steps 2-3 until the entire book is in .MP3 format
    5. Copy the .MP3s to my BlackBerry so that I'm not driven insane every time I want to listen to the book in the car, and so I can use Winamp when listening on my computer
    EDIT: I suppose a rather concise way to put it is that I need something that will emulate a CD-R drive, so that you can select it as the output drive in whatever app you're burning the audio CD from. (I'd suppose that when you "insert a blank CD-R" the app would then ask you what file to save to.)

  • Openfire on EC2 with Jingle

    - by Bjorn Roche
    I would like to run Openfire (or another XMPP server) on EC2. At the moment this is just for testing, so easy setup and configuration are important, as is low cost. At some point, however, if things go well, it will be important to scale this. Ideally, it would be nice not to have to switch software when the scaling happens, but if a switch needs to happen later it certainly can. My requirements are:
    - Basic XMPP services, including MUC and pubsub.
    - Logins controlled from an external API. Preferably, when a user attempts to connect, the XMPP server checks with the API to see if their username and password are correct, but I can also have the API keep the XMPP server up to date on new users, deleted users, password changes and so on. I see Openfire has a "user service" API. Not ideal, but it looks workable.
    - Jingle, including relay and STUN. It's not at all clear to me if the Jingle Nodes plugin takes care of this. I'm a bit confused about what's required to set this up, and I'd rather know in advance than be confused along the way :). E.g. it seems like STUN servers require more than one IP address.
    Can Openfire do all this for me, including STUN and media relay on a single machine? Is this hard to configure on EC2 with Openfire? What are the basic steps? Would this be easier with something else like, say, Tigase? What about the database? Should I use Amazon's database service, or run a DB on the same machine? Would the server be compatible with a service like http://www.siteuptime.com/ ? Thanks!
