Search Results

Search found 22224 results on 889 pages for 'point of sale'.

Page 647/889

  • Generalized strategy for file server virtualization in Xenserver

    - by Jamie
    I'm not shopping as much as I'm looking for some guidance on good idea / bad idea strategies. I'm sure I'm not in the "best practices" budget range. Currently, I have 3 Dell PowerEdges running XenServer in a pool. Each node has an Ubuntu file server VM serving about 6TB. One is the primary; the other two are rsync targets for backup. The 6TB is stored on their respective local storage disks as an LVM of 3x2TB virtual disks. The file server VM disks are also stored on the node-local disks. Each node also runs a smattering of lightweight VMs for web, development, Windows VMs, and the like. Several of those VMs' disks reside on a QNAP NAS to play with live migration. These VMs are often clients of the primary file server (all the mail, web content, and user files are stored on the file server, not on the mail, web, and Samba VMs). This all works fine, and is a major step up for us. The downside is that the QNAP is a single point of failure, and the only thing the QNAP is doing is serving migratable VM images, not client data. Someday the PowerEdge local arrays will be full, and we will have to reinvent ourselves again. Is it wise to have heavyweight VMs (like the file server, with its 6+ TB of disks) on a SAN or NAS? Would it be better to keep the VMs lightweight, have the VM images on a SAN or NAS, and have 2 or more NAS units act as NFS-serving file appliances? A hybrid SAN/NAS that can serve iSCSI for images and NFS for the client VMs? It seems like live migration would be a misnomer if you have to migrate a file server with its entire 6+ TB disk. I recognize there are plenty of ways to skin the cat. We've already skinned it a few ways. What makes sense?
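
    A minimal sketch of the primary-to-backup replication described above, assuming the file server VMs can reach each other over SSH and the exported data lives under a hypothetical path such as /srv/share:

        # On the primary file server: push the share to one of the rsync targets
        # ("backup1" and /srv/share are placeholder names)
        rsync -aHAX --delete /srv/share/ backup1:/srv/share/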

    Read the article

  • Synchronize folders on different computers without cloud storage or a local network, just internet

    - by theimmortalbg
    I have two computers with Windows 7, one in my home town and one in another town, so they are not on a private network, but I have internet on both. They have exactly the same file structure. I am searching for a program that can keep the data identical. I know about Dropbox and Google Drive, but they are cloud services and I don't want to use them; they also rely on a special folder that you have to copy your data into. There are other programs that act like a server, where you upload something and can download it later, but I don't need those either. I just want to point at which folders should be synchronized and have the program do the synchronization. The sync could happen in real time while both computers are powered on, or shortly after they come online. Or it could lock the synced folder on the other computer until it has been updated. Basically these are my documents, and I want them synced across all my computers and editable from wherever I am. I can move the updates with a flash drive, but a program that saves the changes and applies them to the other computer with one click would make my work much easier.
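
    One possible approach, sketched below under the assumption that an SSH service (e.g. Cygwin sshd or copSSH) runs on the remote Windows 7 machine, rsync is available on both, and the folder lives at the placeholder path shown; run from Task Scheduler, it gives a crude two-way sync (it does not propagate deletions, which dedicated sync tools handle better):

        # Pull anything newer from the remote machine, then push anything newer back
        rsync -av --update -e ssh user@remote-host:/cygdrive/c/Documents/ /cygdrive/c/Documents/
        rsync -av --update -e ssh /cygdrive/c/Documents/ user@remote-host:/cygdrive/c/Documents/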

    Read the article

  • Things to check for an internet-facing email server.

    - by Shtééf
    I'm faced with the task of setting up a public-internet-facing email server that will be relaying mail for all of our other servers in the network. While the software itself is set up in a few keystrokes, what little experience I have with managing an email server has taught me that there are tons of awkward filtering techniques employed by other email systems; systems that my own server will inevitably interact with at some point. Hence, my questions: What things should be kept in mind and double-checked when setting up an email server? What resources are available for checking whether my email server is set up correctly? I'm specifically NOT looking for instructions for any given mail server, such as Exchange or Postfix. But it's okay to say: "you should have X and Y in your setup, because when talking to server software Z, it typically tries to weed out open relays by checking for these." Some things I've discovered myself: Make sure forward and reverse DNS are set up. Mail servers tend to do a reverse lookup for the peer IP address when receiving; matching the reverse lookup with a follow-up forward lookup is probably employed to weed out open relays run through malware on home networks. Make sure the user in the From address exists. The From address is easily spoofed, so a receiving mail server may try to contact the mail server in the From domain and see if the From user actually exists.
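
    A quick sketch of how the forward/reverse DNS point above can be verified from a shell, assuming dig is available and mail.example.com / 192.0.2.10 stand in for the real hostname and public IP:

        # The reverse lookup of the server's public IP should return its hostname...
        dig +short -x 192.0.2.10
        # ...and the forward lookup of that hostname should return the same IP
        dig +short mail.example.com A
        # The domain's MX record should also point at a name that resolves consistently
        dig +short example.com MX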

    Read the article

  • MySQL slave server from dumps

    - by HTF
    I've created a slave server from a live machine which is now acting as a master. I used the following procedure to create it:

        mysqldump --opt -Q -B --master-data=2 --all-databases > dump.sql

    Then I imported this dump on the new machine and applied the "CHANGE MASTER TO..." directive with the log file/position from the dump. Please note that I have around 8000 databases and I didn't stop the master while the dumps were running. The replication works fine, but is this a proper method for creating a slave server? I'm planning to promote this slave to a master (different location), so I would like to make sure that there is 100% data consistency between the servers. I've found this article where it says: The naive approach is just to use mysqldump to export a copy of the master and load it on the slave server. This works if you only have one database. With multiple databases, you'll end up with inconsistent data. Mysqldump will dump data from each database on the server in a different transaction. That means that your export will have data from a different point in time for each database. Thank you
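
    For comparison, a sketch of a dump that stays consistent across all databases, assuming the tables are InnoDB (--single-transaction does not help with MyISAM):

        # One consistent snapshot for every database, with binlog coordinates recorded as a comment
        mysqldump --single-transaction --master-data=2 --routines --triggers \
                  --all-databases > dump.sql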

    Read the article

  • Moved servers running Windows Server 2003

    - by Charles
    Our company has two locations, and each location has a Windows Server 2003 machine as the DC plus several servers, running on two different subnets. We are consolidating the locations. I changed the IP address on one of the web servers prior to moving it to the main location. I didn't change the IP address on either the DC or the other web servers prior to moving them. Now only the web server whose IP was changed is able to serve pages. The other web servers are not able to serve pages, cannot be pinged, and cannot be accessed via RDP. Since we don't need the second DC, it has been powered down. When I tried to ping it, the previous IP address was returned. My colleague changed the IP address in the DC's DNS, but when I ping it, I get a timeout. I know that I should have read a lot more before doing this. What can I do to fix it? Thanks, in advance, for your help! Update MarkM, thanks for the info on demoting a DC. That's one of the things I want to do after everything is working. Is there a good, clear article you recommend? Rusty, there are no DMZs involved at this point. I need to set up a DMZ, but that's another project.
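
    A few commands that are sometimes useful for chasing stale DNS records after an IP change, sketched here with a placeholder hostname; they will not fix the connectivity problem by themselves, but they show what the remaining DC's DNS is actually handing out:

        :: On a client: clear the local resolver cache, then re-query the server's name
        ipconfig /flushdns
        nslookup webserver2.example.local

        :: On the moved web server: re-register its new address in dynamic DNS
        ipconfig /registerdns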

    Read the article

  • Using wildcard domains to serve images without http blocking

    - by iopener
    I read that browsers sometimes block while waiting for multiple images from the same host, and I'm trying to do everything I can to speed up page load times. One caveat: I need to serve files over HTTPS. Any opinions about whether this is feasible: Set up a wildcard cert for *.domain.com. Whenever I need an image, generate a number from a hash (mod 5) of the filename and append it to an 'img' subdomain (e.g. img1.domain.com, img4.domain.com, img3.domain.com, etc.); the hash will make any given filename always use the same subdomain, so the browser should still be able to cache the images. Configure a dynamic virtualhost record to point all img#. subdomains to /var/www/img. I am looking for feedback about this plan. My concerns are: Will I get warnings when my page has https:// links to multiple subdomains? Is the dynamic virtualhost record I'm talking about even possible? Considering the amount of processing this would require, is it likely to even produce any kind of overall benefit? I'm probably averaging a half-dozen images per page, with only half changing on each page refresh. Thanks in advance for your feedback.
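
    On the "dynamic virtualhost" question: with Apache (assumed here from the /var/www path), a single SSL virtual host with a wildcard ServerAlias can map every img#.domain.com name onto the same directory; the certificate paths below are placeholders:

        <VirtualHost *:443>
            ServerName   img1.domain.com
            ServerAlias  img*.domain.com
            DocumentRoot /var/www/img
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/wildcard.domain.com.crt
            SSLCertificateKeyFile /etc/ssl/private/wildcard.domain.com.key
        </VirtualHost>

    As long as every image URL is itself https:// and covered by the wildcard certificate, the browser should not raise mixed-content warnings, though each extra subdomain does cost an additional DNS lookup and TLS handshake.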

    Read the article

  • Windows-to-Linux: PuTTY with SSH and private/public key pair

    - by Johnny Kauffman
    I spent about 3 hours trying to figure out how to connect to a Linux box from my Windows machine using PuTTY without having to send the password. This is connecting to an Ubuntu server that is running OpenSSH. The private key is SSH-2 RSA, 1024 bits, and I am connecting using SSH-2. I have run into the more common problems already: PuTTY generated the public key in the "wrong format". I have corrected this (as seen on this blog post). However, since I am not yet connected, I cannot absolutely confirm that this file is in the correct format. The key is all on a single line now, and I have tried adding/removing line breaks at the end of the file. I've also tried the public-key doctoring process a few times to ensure that I haven't flubbed up the manual conversion. Even so, I have no way to verify accuracy here. The permissions were at one point wrong as well, specifically meaning that the file had too many permissions. I had to solve this too, and I know it got past this because I no longer see a related error in /var/log/auth.log. I've tried both authorized_keys and authorized_keys2 in case the server has an old version of OpenSSH, but this changed nothing. I do have access as a user: after the keyfile stuff fails, I can enter my password instead. The only remaining nibble of information I have is that it claims I have the alleged password wrong:

        sshd[22288]: Failed password for zzzzzzz from zz.zz.zz.zz port 53620 ssh2

    Even so, as far as I can tell, this is just a lazy try/catch somewhere, since I don't think there's a password involved at all. I see nothing else of use in any of the /var/log files. What else could be wrong?
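
    A sketch of the server-side steps that usually resolve this, assuming the PuTTY-exported public key was copied to the Ubuntu box as ~/mykey.pub (a placeholder name):

        # Convert the SSH2/RFC4716 public key PuTTYgen saves into OpenSSH format and append it
        ssh-keygen -i -f ~/mykey.pub >> ~/.ssh/authorized_keys

        # Permissions OpenSSH insists on before it will trust the key
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys

        # Watch the server log while retrying the connection
        tail -f /var/log/auth.log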

    Read the article

  • CentOS Installation on a Cisco MCS 7800

    - by William
    Hello, I'm having some problems installing CentOS 5.5 Final (i386) onto my server, a Cisco MCS 7800. The problem comes very early in the installation. When the welcome screen comes up and gives you the option of how to boot from the DVD, I press Enter to go into the graphical installer. The screen then shows a blinking cursor in the top left and it never goes away (I thought it just might need time, but I let it sit for over 5 hours). I then booted again and tried "linux text", thinking it was a problem with the graphical installer. That didn't work: same problem. Then I tried a DVD of RHEL 5 and got the same problem, both graphical and linux text. At this point I think it's a hardware problem. The server has 2GB of ECC RAM, 1 Pentium 4 CPU @ 3.06GHz and 2 WD hard drives (80GB) configured for RAID 0. (Also, there is an option in the BIOS for OS type, and it is set to Linux.) ================Edit================== ooshro, typing "text" doesn't change a thing; still stuck at the blinking cursor. I looked it up and it's really the same thing as typing "linux text", which, as stated in the first part of my question, I've already done.
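
    For reference, a few boot parameters that are sometimes suggested when an installer hangs on a blank screen, typed at the installer's boot: prompt; no guarantee they help on this particular hardware:

        boot: linux text acpi=off noapic nolapic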

    Read the article

  • Wifi antenna extension with F-connector/RG-6(RG-59) cable?

    - by rjz2000
    In an older house, the wire mesh in the walls surrounding the furnace behaves like a Faraday cage and blocks wifi signals. It is also difficult to lay new cable; however, there is television cable running to multiple locations because there was once a roof-installed television antenna. It would be relatively trivial to install the wifi router at the central distribution point, then have the antenna that broadcasts/receives the signal plugged in at each of the old television outlets. I assume that it would not be too difficult to find an adapter for SMA <- F-type connectors. The cable is actually RG-59 rather than RG-6, but I assume that it still has relatively good RF isolation along its length, which is no more than a couple hundred feet in any direction. Does anyone know of a problem with this idea? Will a router get confused if there is too little interference between the two antennas? Is that length of cable (~100ft) too long for the signal a router broadcasts? I have seen that it is also possible to use old ~$30/each FiOS cable modems available on eBay to extend a network over television cable. However, that seems like a less elegant solution, and it might interfere with UPnP and DLNA services I'd like to have work on a single network. Thanks if anyone has answers or suggestions before I try this project!

    Read the article

  • Creating security permissions for a non-domain-member user in Windows Server 2008

    - by Overhed
    Hello everyone, I apologize in advance for incorrect use of terminology, as I'm not an IT person by trade. I'm doing some remote work via a VPN for a client and I need to add some DCOM service security permissions for my remote user. Even though I'm on the VPN, the request for access to the DCOM service is using my PC's native user (and since I'm running Vista Home Premium it looks something like PC-NAME\Username). The request for access comes back with access denied, and I cannot add this user to the security permissions as it "is not from a domain listed in the Select Location dialog box, and is therefore not valid". I'm pretty stuck and have no clue what kind of steps I need to take here. Any help would be appreciated; thanks in advance. EDIT: I have no control over what credentials are being passed to the server by my computer. This scenario occurs in an installation wizard that has a section which asks you to point it to the machine running the "server" version of the software I'm installing (it then tries to invoke the relevant COM service, but my user does not have "Remote Activation" permissions on that service, so the request is denied).
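
    One workaround that is commonly used in this situation, sketched here on the assumption that you are allowed to create local accounts on the target server: create a local user there whose name and password match the account on the non-domain machine, then grant that local user the DCOM permissions. "Username" and "Password" are placeholders:

        :: Run in an elevated prompt on the server hosting the DCOM service
        net user Username Password /add
        net localgroup "Distributed COM Users" Username /add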

    Read the article

  • ConfigMgr 2012 - How to automatically make updates available to computers without forcing them to be installed?

    - by Massimo
    I'm using System Center Configuration Manager 2012 with the Software Update Point feature; however, in this environment patching has to be strictly manual, because server reboots need to be approved and scheduled by different people. Thus, I need to use ConfigMgr's SUP like I would use a plain WSUS server with auto-approval but manual installation. I created some Automatic Deployment Rules to automatically download and deploy critical updates, with an installation deadline of "as soon as possible"; but I've also configured those rules to not do anything when the deadline is reached, and to not perform system restarts even if needed (see image). Also, I've configured the device collection those rules deploy updates to so that it has no valid maintenance window. However, I'm experiencing quite the opposite of what I was expecting: as soon as the new updates are processed by the ADRs, they get automatically installed on all systems by the Software Center, and the computers are subsequently restarted. Why is this happening? Am I getting something wrong, or is ConfigMgr 2012 just not behaving like it should?

    Read the article

  • How to configure traffic from a specific IP, hardcoded to a given IP, to forward to another IP:PORT using iptables

    - by cclark
    Unfortunately we have a client who has hardcoded a device to point at a specific IP and port. We'd like to redirect traffic from their IP to our load balancer, which will send the HTTP POSTs to a pool of servers able to handle that request. I would like existing traffic from all other IPs to be unaffected. I believe iptables is the best way to accomplish this, and I think this command should work:

        /sbin/iptables -t nat -A PREROUTING -s $CUST_IP -j DNAT -p tcp --dport 8080 -d $CURR_SERVER_IP --to-destination $NEW_SERVER_IP:8080

    Unfortunately it isn't working as expected. I'm not sure if I need to add another rule, potentially in the POSTROUTING chain? Below I've substituted the variables with real IPs and tried to replicate the layout in my test environment in incremental steps. $CURR_SERVER_IP = 192.168.2.11, $NEW_SERVER_IP = 192.168.2.12, $CUST_IP = 192.168.0.50

    Port forward on the same IP:

        /sbin/iptables -t nat -A PREROUTING -p tcp -d 192.168.2.11 --dport 16000 -j DNAT --to-destination 192.168.2.11:8080

    Works exactly as expected.

    IP and port forward to a different machine:

        /sbin/iptables -t nat -A PREROUTING -p tcp -d 192.168.2.11 --dport 16000 -j DNAT --to-destination 192.168.2.12:8080

    Connections seem to time out.

    Restrict the IP and port forward so it only applies to requests from a specific IP:

        /sbin/iptables -t nat -A PREROUTING -p tcp -s 192.168.0.50 -d 192.168.2.11 --dport 16000 -j DNAT --to-destination 192.168.2.12:8080

    Times out as well, probably for the same reason as the previous entry. Does anyone have any insights or suggestions? Thanks,
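
    A hedged guess at what is missing: when the DNAT target is a different machine, the replies have to come back through the host doing the translation, so IP forwarding and a matching SNAT/MASQUERADE rule in POSTROUTING are usually needed as well. A sketch using the test addresses above:

        # Allow the box to forward packets at all
        echo 1 > /proc/sys/net/ipv4/ip_forward

        # Rewrite the source of the forwarded traffic so 192.168.2.12 replies via this host
        /sbin/iptables -t nat -A POSTROUTING -p tcp -d 192.168.2.12 --dport 8080 -j MASQUERADE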

    Read the article

  • Multiple servers vs 1 big server performance

    - by pistacchio
    Hi to all! My team of developers has suggested a server structure for an upcoming project we are developing. Our structure is "logical", meaning that the various logical components of the application (it is a distributed one) rely on different servers. Some components are more critical than others and will be subjected to more load. Our proposal was to have 1 server per component, but the hardware guys suggested replacing the various machines with a single, bigger one running virtual servers. They're going to use blade servers. Now, I'm not an expert at all, but my question to them was: if we need, for example, three 2GHz CPU / 2GB RAM machines and you give me one machine with three 2GHz CPUs and 6GB of RAM, is it the same? They told me it is. Is this accurate? What are the advantages or disadvantages of the two solutions? What are the generally accepted best practices? Could you point out some URL reference dealing with the problem? Thank you in advance! EDIT: Some more info. The (internet / intranet) application is already layered. We have some servers in the DMZ that will expose pages to the internet, and the databases are on their own machines. What we want to split (and they want to join) are some web servers that mainly expose web services. One is a DAL that communicates with the database layer, one is our Single Sign-On / User Profile application that gets called once per page, and one is a clone of what is seen on the Internet, to be used on our LAN.

    Read the article

  • Certificate revocation check fails for non-domain guest in spite of accessible CRL

    - by 0xFE
    When we try to use certificates on computers that are not part of the domain, Windows complains that "The revocation function was unable to check revocation because the revocation server was offline." However, if I manually open the certificate and check the CRL Distribution Point property, I see an ldap:/// URL and an http:// URL that points to an externally accessible IIS site hosting the CRLs. Of course, the non-domain-joined client cannot access the ldap:/// URL, but it can download the CRL from the http:// link (at least in a browser). I enabled CAPI logging and I see the event that corresponds to this failed revocation check. The RevocationInfo section is:

        RevocationInfo
        [ freshnessTime]  PT11H27M4S
        RevocationResult  The revocation function was unable to check revocation because the revocation server was offline.
        [ value]          80092013
        CertificateRevocationList
        [ location]       UrlCache
        [ url]            http://the correct URL
        [ fileRef]        6E463C2583E17C63EF9EAC4EFBF2AEAFA04794EB.crl
        [ issuerName]     the name of the CA

    Furthermore, I can see the HTTP request to the correct URL and the server's response (HTTP 304 Not Modified) with Microsoft Network Monitor. I ran certutil -verify -urlfetch, and it seems to show the same thing: the computer recognizes both URLs, tries both, and even though the http:// link succeeds, it returns the same error. Is there a way to have non-domain-joined clients skip the ldap:/// link and only check the http:// one? Edit: The ldap:/// URL is ldap:///CN=<name of CA>,CN=<name of server that is running the CA>,CN=CDP,CN=Public Key Services,CN=Services,CN=Configuration,DC=<domain name>?certificateRevocationList?base?objectClass=cRLDistributionPoint The non-domain-joined clients may be on the domain network or on an external network. The http:// CDP is accessible from the public internet.
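
    Two certutil commands that can help confirm what the client is actually doing, sketched on the assumption that they are run on the non-domain-joined machine against an exported copy of the certificate (mycert.cer is a placeholder):

        :: Show and then clear any cached CRLs before re-testing
        certutil -urlcache crl
        certutil -urlcache crl delete

        :: Re-run the full chain and revocation check, fetching the CDP URLs again
        certutil -verify -urlfetch mycert.cer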

    Read the article

  • Mapped network drive missing from My Computer and Explorer

    - by matt wilkie
    On a Windows XP Pro SP3 machine, one network drive refuses to show up in My Computer or Explorer. The missing drive letter is G:, if that matters. Other mappings work fine, and other profiles on the same machine have no problem mapping G:. I can access G: just fine by typing it into the address bar or in a CMD shell. I've used TweakUI to toggle hide/show for G: with no difference; TweakUI says G: should be visible. I've logged off and on between toggles to make sure the settings take effect. I've looked at the reg key [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer] and made sure it's zeroed. [insert ref link here] We've limped along with this broken setup for some time, just working around it, but some applications do not allow typing in a path when choosing a place to save files, and it's reached the point where it's intolerable. So, does anyone have any idea why XP won't show this drive letter, or how to fix it?
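
    A quick way to double-check the policy value mentioned above from a command prompt (a sketch; the value may simply not exist, which is fine):

        reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer" /v NoDrives
        reg query "HKLM\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer" /v NoDrives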

    Read the article

  • Remove directory from URL IIS 7.5

    - by xalx
    I've tried to find a solution to this and found some guides out there, but none seem to work. I have the following URL: http://www.mysite.com/aboutus.html. However, there are some other sites which link to my old hosted site and point to http://www.mysite.com/nw/aboutus.html. My issue is trying to remove the 'nw' directory from the URLs. I have set up the following URL Rewrite rules in IIS, but they do not seem to do anything:

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="Redirect all to root folder" enabled="true" stopProcessing="true">
                  <match url="^nw$|^/nw/(.*)$" />
                  <conditions>
                  </conditions>
                  <action type="Redirect" url="nw/{R:1}" />
                </rule>
                <rule name="RewriteToFile">
                  <match url="^(?!nw/)(.*)" />
                  <conditions>
                    <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                    <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                  </conditions>
                  <action type="Rewrite" url="/{R:1}" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>

    Any insight would be appreciated.
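
    For comparison, a minimal redirect rule that strips the nw/ prefix (a sketch only; note that IIS URL Rewrite tests the pattern against the URL without its leading slash, and the redirect target deliberately omits nw/):

        <rule name="Redirect nw to root" stopProcessing="true">
          <match url="^nw/(.*)$" />
          <action type="Redirect" url="/{R:1}" redirectType="Permanent" />
        </rule>

    Placed above the existing RewriteToFile rule, a request for /nw/aboutus.html should then come back as a permanent redirect to /aboutus.html.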

    Read the article

  • How can I move linked Word/Excel files without breaking the links under Windows 7?

    - by DOUG NEEDHAM
    I currently operate under Windows XP and have multiple links between my Word and Excel files. I have to upgrade to Windows 7. When the .doc and .xls files are converted to .docm and .xlsm, respectively, the links no longer work. The Word document is still attempting to point back to the old .xls file rather than the new file. Also, creating new links between Word and Excel within Office 2010 doesn't seem to work. I create the new link, switch it from "Auto" to "Manual" and everything works fine. But when I copy the files to another folder, the Word document is still trying to link to the file in the previous folder rather than the new folder. This always worked in Windows XP. I've been using linked Word/Excel documents for 10+ years and have never really had a problem. I'm very careful to maintain Word and Excel filenames when moving the files to a new folder. The process has always been to 1.) move the files, 2.) update the links, 3.) rename the files, and 4.) update the links again. It's my understanding that under Windows XP, links between Word and Excel are relative. But under Windows 7 (and Office 2010?), those same links become fixed.

    Read the article

  • Upgrading only certain packages via the getdeb repo

    - by intuited
    I'm a bit confused about how getdeb.net works now. The last time I got a package from there was a while ago; at that point the procedure was that you would just download a .deb for each package that you wanted to install/upgrade and then install it using dpkg -i. However the inexorable march of progress has lent its trumpets to this system as well, and getdeb installs are now done via their repo, which is registered with apt in /etc/apt/sources.list.d, after you install a single package that makes the changes to the apt database. I've installed that package, and I've discovered that aptitude dist-upgrade now wants to upgrade a lot of packages on my system that weren't ready for upgrades prior to the installation of the getdeb package. If I rename the file /etc/apt/sources.list.d/getdeb.list to something with a different extension, then do aptitude update && aptitude dist-upgrade, it stops wanting to upgrade packages. So I gather that the default behaviour is now to upgrade all packages to the version available at getdeb. This is not particularly appropriate, since these packages are not as well tested as the officially released versions. Is there a config setting somewhere that will prevent upgrading packages to versions from the getdeb repo unless this action is specifically selected? I'd like to be able to pick and choose what packages are upgraded via getdeb.
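
    Yes, apt pinning does this. A sketch of an /etc/apt/preferences (or /etc/apt/preferences.d/getdeb) entry that keeps getdeb versions from being pulled in automatically; the origin host shown is the one normally found in getdeb.list and should be adjusted to match yours:

        Package: *
        Pin: origin archive.getdeb.net
        Pin-Priority: 50

    Individual packages can still be taken from getdeb on demand by pinning them higher or by installing an explicit version with apt-get install package=version.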

    Read the article

  • No boot device found. Press any key to continue

    - by Andrew Banks
    I took out the hard drive from my Dell Latitude E5420 notebook, put in an ADATA S599 solid state drive, and installed Ubuntu 11.10. When I boot, the Dell BIOS splash screen appears with a progress bar, which quickly fills up, and the screen goes black. All of this is like it was before. At this point, the OS splash screen should fade in. Instead, I was dismayed to see simply the following, in white text on a black screen: "No boot device found. Press any key to continue". After looking around for the Any key (just kidding) I press a key, and the Dell BIOS splash screen appears again with a progress bar, which quickly fills up, and the screen goes black. This time, however, the Ubuntu splash screen shows up, Ubuntu opens up, and all is normal. Every time I shut down, however, this happens again. It's like a game the computer and I play together. The computer has never started up without first saying "No boot device found. Press any key to continue", and it has always started up after I press any key to continue. It also starts up fine if I click Restart instead of Shut Down. Thoughts?

    Read the article

  • How can I remove all drivers and other files related to a USB Mass Storage device?

    - by Bob
    I have a flash drive here that does not work with one OS on one computer - let's call it the desktop Windows 7. It works fine on another computer - laptop Windows 7 - and it also works fine under Windows 8 on the same desktop computer. Other flash drives work fine under desktop Windows 7. So it's not a hardware issue and not a generic USB Mass Storage driver issue; it's something specific to this drive. On desktop Windows 7, I can connect the drive but no volume comes up in Windows Explorer. Ditto for Disk Management. With diskpart, loading hangs until I unplug the drive; if I replug it and try list disk, it hangs again. If I unplug the drive at that point, list disk prints out all attached drives - including the just-removed flash drive. The drive consistently appears under Device Manager, but uninstalling the drivers, restarting and reinstalling the drivers (by inserting the drive) only works for the first insertion. After that it fails again. I get the feeling that the driver files are not actually removed, and are corrupted, meaning every reinstall installs the same corrupted drivers. Is there any way to remove these drivers completely? Or perhaps some other setting Windows 7 retains? Formatting the drive through another computer/OS does not help. I've also tried a complete wipe and rebuild of the MBR and single partition. The allocation unit size makes no difference; neither does an NTFS format. This is a relatively small matter, and I would not like to reinstall the entire OS!
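
    A sketch of how to surface and remove the stale device entries from an elevated command prompt, assuming the goal is to make Windows redo the driver installation for this stick from scratch:

        :: Make Device Manager show devices that are not currently present, then uninstall
        :: the old entries for this stick under "Disk drives" and "USB controllers"
        set devmgr_show_nonpresent_devices=1
        start devmgmt.msc

        :: The per-device parameters live under this key (export it before deleting anything)
        reg query HKLM\SYSTEM\CurrentControlSet\Enum\USBSTOR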

    Read the article

  • Moving a site from IIS6 to IIS7.5

    - by Sukotto
    I need to move a site off of IIS6 (Windows Server 2003) and onto IIS7.5 (Windows Server 2008) as soon as possible. Preferably tomorrow. The site itself is a delightful mix of classic ASP (VBScript) and one-off ASP.NET (C#) applications (each ASP.NET app is in its own virtual directory and has a self-contained web.config). In case it's relevant, this is a sort of research site made up of 40 or 50 unconnected microsites. Each microsite is typically a simple form allowing a user to submit a request, which then runs a stored procedure on a SQL Server database and displays a chart and/or table of the results. There is very little security to worry about. The database connection info is in a central file (in the case of the classic ASP) or in each app's individual web.config (lots of duplication there). To add a little spice to the exercise... I have no idea how to admin IIS. The company no longer employs the sysadmin or the guys who set this thing up. (They're not going to employ me much longer either, but my sense of professional pride does not permit me to just walk away from this task.) The servers are on mutually firewalled networks and I have to perform a convoluted, multi-step process to copy anything from one to the other. Would someone please point me to a crash-course tutorial for accomplishing the above? I have: a complete copy of the site's filesystem on the new box; the 3rd-party charting tool installed on the new system; a config.xml file from the "all tasks - save configuration to a file" right-click menu (there doesn't seem to be a way to import it on the new system, however). The newer IIS Manager has a completely different UI and I'm totally lost. Please help.
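
    If scripting helps, appcmd on the 2008 box can recreate the site and each microsite as an application; a rough sketch with placeholder names and paths:

        %windir%\system32\inetsrv\appcmd add site /name:"ResearchSite" ^
            /bindings:http/*:80: /physicalPath:"D:\sites\research"
        %windir%\system32\inetsrv\appcmd add app /site.name:"ResearchSite" ^
            /path:/microsite1 /physicalPath:"D:\sites\research\microsite1"

    Note that classic ASP is not installed by default on IIS 7.5 and has to be added as a role feature, and the older ASP.NET apps may need application pools running in Classic pipeline mode.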

    Read the article

  • Ubuntu 9.04: Ripping CDs with grip?

    - by chris
    I tried to rip a CD tonight and couldn't figure out how to configure grip - /dev/cdrom doesn't seem to be the mount point for music CDs any more. How can I configure grip to find CDs? Update: /etc/fstab has

        /dev/scd0 /media/cdrom0 udf,iso9660 user,noauto,exec,utf8 0 0

    but there's nothing visible in /media/cdrom0 (or /media/cdrom, which is a symlink to cdrom0). There's an icon on the desktop labeled "Audio Disk" and opening it shows the .wav files on the CD. The location is cdda://sr0/, but grip doesn't like that either. Trying to manually mount /dev/sr0, I get:

        $ sudo mount -t auto /dev/sr0 foo/
        mount: block device /dev/sr0 is write-protected, mounting read-only
        mount: you must specify the filesystem type

    Update 2: Tried to change the media handling preferences (from a file browser: Edit - Preferences - Media - CD Audio) to "Do Nothing". The CD still doesn't mount. Update 3: With an audio CD in the drive:

        $ ls -l /dev/ | grep cd
        lrwxrwxrwx 1 root root 3 2009-09-15 22:13 cdrom1 -> sr0
        lrwxrwxrwx 1 root root 3 2009-09-15 22:13 cdrw1 -> sr0
        drwxr-xr-x 2 root root 60 2009-09-15 22:13 pktcdvd
        lrwxrwxrwx 1 root root 3 2009-09-15 22:13 scd0 -> sr0
        crw-rw----+ 1 root cdrom 21, 2 2009-09-15 22:13 sg2
        brw-rw----+ 1 root cdrom 11, 0 2009-09-15 22:13 sr0
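
    Audio CDs are never mounted as a filesystem, so grip needs the raw device rather than a mount point. A sketch of the usual fix, assuming the drive really is /dev/sr0 as the Update 3 listing suggests: point the CD-ROM device field under grip's Config tab at /dev/sr0, or recreate the legacy device name many rippers look for:

        sudo ln -s /dev/sr0 /dev/cdrom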

    Read the article

  • Can MySQL use multiple data directories on different physical storage devices

    - by sirlark
    I am running MySQL with its data dir on a 128GB SSD. I am dealing with large datasets (~20GB) that are loaded and processed weekly, each stored in a separate database for the purposes of time-point comparisons. Putting all the data into a single database is unfeasible because performance on such large databases is already a problem. However, I cannot keep more than 6 datasets on the SSD at a time. Right now I am manually dumping the oldest to a much larger 2TB spinning disk every week, and dropping the database to make space for the new one. But if I need one of the 'archived' databases (a semi-regular occurrence) I have to drop a current one (after dumping it), reload the archived one, do what I need to, then reverse the process. Is there a way to configure MySQL to use multiple data directories, say one on the SSD and one on the 2TB spinning disk, and 'merge' them transparently? If I could do this, then archiving would no longer mean "moved out of the database entirely", but instead would mean "moved onto the slow physical device". The time taken to run my queries on a spinning disk would be less than the time taken to completely dump, drop, load, drop, and reload two entire databases, so this is a win. I thought of using something like unionfs, but I can't think of a way to control which database gets stored on which physical drive, because it works by merging at a directory level (from what I understand), so I'm still stuck with using multiple directories. Any help appreciated, thanks in advance
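
    MySQL itself has a single datadir, but individual database directories can live elsewhere via symlinks, which gives roughly the "slow archive disk" behaviour described. A sketch, with /data/archive standing in for a directory on the 2TB disk and old_dataset as a placeholder database name; stop mysqld while moving files:

        service mysql stop
        mv /var/lib/mysql/old_dataset /data/archive/
        ln -s /data/archive/old_dataset /var/lib/mysql/old_dataset
        chown -R mysql:mysql /data/archive/old_dataset
        service mysql start

    If the host runs AppArmor, the mysqld profile also has to be allowed to read and write the new path.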

    Read the article

  • Mass-migrating from POP3 to Exchange 2010, how do I copy mailboxes?

    - by Erik P. Skaalerud
    I'm in the process of planning our migration from an internally hosted POP3 server (Dovecot) to Exchange 2010. We're using Outlook 2003 for the moment, but will soon upgrade to Outlook 2010. The big problem is that we have about 50 computers here in our HQ, plus ~30 clients in branch offices (which will get their Exchange migration sometime later). I'm the only IT person, and having to go around manually setting up Outlook and copying over everyone's PST contents is not an option I'm looking for. Some users have set Outlook to keep messages on the POP3 server for X number of days, others have not, so using a POP3 connector to transfer the mail over is not a viable option. Here is what I've done so far: created a transform for the Office 2003 administrative installation point; created a .PRF file to modify any existing e-mail account to switch over to Exchange (including the RPC-encrypt hotfix described in MSKB 2006508); tested both the transform and the PRF, and both work; created a test OU and GPO containing the Office 2003 installation with the transform applied, which also works. My big question is: how can I force Outlook to import any existing .PST into the new Exchange mailbox when the user starts Outlook for the first time after the MST/PRF have been applied? Is this possible?
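
    One server-side alternative to importing inside Outlook: Exchange 2010 SP1 can pull the PSTs in itself with New-MailboxImportRequest, assuming the files are first collected on a share readable by the Exchange Trusted Subsystem group and the admin account holds the Mailbox Import Export role; names and paths below are placeholders:

        New-ManagementRoleAssignment -Role "Mailbox Import Export" -User administrator
        New-MailboxImportRequest -Mailbox erik -FilePath \\fileserver\pst$\erik.pst
        Get-MailboxImportRequest | Get-MailboxImportRequestStatistics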

    Read the article
