Search Results

Search found 9117 results on 365 pages for 'systems analysis'.


  • Backup Exec tape rotation guidelines

    - by HannesFostie
    Hi, we use Backup Exec to take care of the backups for our data server, Exchange server, and one more set of systems. Each of these three is done on a separate "set" of tapes. Our goal is to be able to roll back a full two weeks, with one full backup each weekend and differential/incremental backups in between (the difference between the two isn't very big in our case, because the employees mostly work on a very similar set of files throughout the week).

    While playing around with the settings to achieve this, we set BE to keep each full backup for 14 days, but because we have too much data, this would require manual intervention each time to erase a particular tape and reuse it.

    What I would like to know is what kind of guidelines, tricks, tips and general "stuff to think about" you keep in mind when designing your backup schedule. The type of backups (full/diff/incr) isn't of much importance in our case, as it's more or less set in stone. Made this community wiki as it's not a very specific question. Thanks in advance!
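
    Purely as an illustration of the two-week cycle described above (not Backup Exec syntax), a tiny sketch that maps each day of a 14-day cycle to a job type and a hypothetical tape set; the "full on Saturday" assumption and the set names are made up:

        # Illustration only: which tape set a hypothetical 14-day cycle would write to.
        # Day 0 is assumed to be a Sunday; names and the weekend-full choice are made up.
        CYCLE_DAYS = 14

        def job_for_day(day):
            week = day // 7 + 1                 # week 1 or week 2 of the cycle
            if day % 7 == 6:                    # Saturday: weekly full backup
                return "full", "FULL-week%d" % week
            return "incremental", "INCR-week%d" % week

        for d in range(CYCLE_DAYS):
            print(d, *job_for_day(d))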

    Read the article

  • Enterprise class storage best practices

    - by churnd
    One thing that has always perplexed me is storage best practices. Filesystems brag about how they can be petabytes or exabytes in size, yet I don't know many sysadmins who are willing to let a single volume grow over several terabytes. I do know the primary reason behind this is how long it would take to rebuild the array should a drive fail: the more drives in a single LUN, the longer this takes and the greater your risk of losing another drive while the rebuild is taking place.

    Then there are usage reasons. Admins will carve out a LUN based on how much space they think needs to be allocated to the project. It seems more practical to me for the LUN to be one large array and to use quotas. I understand this wouldn't satisfy every requirement (iSCSI), but I see a lot of NAS systems (NFS) managed this way. I also understand that the underlying volumes can be grown/shrunk as needed quite easily, but wouldn't it be less "risky" to use quotas rather than manipulating volumes and bringing possible data loss into the equation?

    There may be some other reasons I'm missing, so please enlighten me. Can we not expect filesystems to ever be so large? Are we waiting for the hardware to get faster to cut down on rebuild times?
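
    As a concrete illustration of the quota-over-carving approach (ZFS is just an example here, not something the poster specified; pool and dataset names are made up), the per-project limits live on datasets inside one large pool:

        # One large pool, per-project datasets capped with quotas instead of per-project LUNs
        zfs create tank/projects/alpha
        zfs create tank/projects/beta
        zfs set quota=2T tank/projects/alpha      # cap this project at 2 TB
        zfs set quota=500G tank/projects/beta
        zfs get -r quota tank/projects            # review the current limits

    A quota can later be raised with another `zfs set`, which is exactly the "grow without touching the volume" property the question is after.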

    Read the article

  • CD/DVD Drive not detecting locally burned CD/DVDs, but works fine with Genuine discs.

    - by Rahul
    I've been using a Dell Inspiron 1420 (32-bit, Windows Vista) for 2.5 years, and I'm facing a strange problem with my CD/DVD drive. I cannot run/play a CD/DVD that I get burned by my friends, but when I insert a genuine CD I'm able to play/run it, and when I try to install the Vista package which came with my notebook, the CD/DVD loads fine. If I insert a CD/DVD that I got from a friend, the disc doesn't load and the system hangs. All these CDs/DVDs work on other systems; I've tested them on many of my friends' PCs. So right now I'm only able to run genuine CDs and a few genuine DVDs.

    My experience/experiments:

        I tried to install Windows Vista using a genuine DVD - it worked.
        I tried to install Ubuntu from a disc shipped by Canonical Ltd. - it worked.
        I tried to install openSUSE from an .iso file burned to a DVD on my friend's PC - it didn't work for me (but works perfectly fine on my friends' PCs; tested on 4 other PCs).
        I tried to play a DVD containing movies, burned on my friend's PC - it didn't work for me (but works perfectly fine on my friends' PCs).

    Any help/suggestions would be appreciated. Thanks.

    Read the article

  • Solution to easily share large files with non-tech-savvy users?

    - by Tim
    Hey all, we've got a server set up at work which we'd like to use to exchange large files with known clients easily. We're looking into software to facilitate this, but somehow typing "large file hosting" into Google gives questionable results. ;) We've come up with the following requirements, and I hope any of you can point us in the direction of a solution that offers this functionality, or is malleable to our needs:

        Synchronization / revision management is of no concern; it's mostly single large (up to 1+ GB) file uploads & downloads we'll need.
        We'd like to make the downloads expire & be removed after a certain number of days / downloads, to limit the amount of cleanup we'd have to do.
        The data files exchanged sometimes hold confidential information, so the URLs generated should be random and not publicly visible.
        Our users are of the less technically savvy variety, so a simple web form would be better than a desktop client (because we also have to support a mix of operating systems). As for use of the system, we'd either like to send out generated random URLs for them to upload their files, or have an easy way to manage & expire users.
        It has to work on a Linux (Ubuntu) server (so nothing .NET-related please).

    Does anyone know of software that fits the above criteria? We've already seen a few instances of this within the scientific community, but nothing we could use directly. Best regards, Tim
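
    A minimal sketch of the random-URL plus expiry requirement, just to make it concrete (the paths, hostname and 14-day window are hypothetical; this is not a product recommendation):

        # Create an unguessable per-client area and hand out its URL
        TOKEN=$(openssl rand -hex 16)
        mkdir -p /srv/exchange/"$TOKEN"
        echo "https://files.example.com/$TOKEN/"

        # Daily cron job: purge anything older than 14 days
        find /srv/exchange -mindepth 1 -mtime +14 -delete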

    Read the article

  • Execute remote shell commands on Windows XP Embedded

    - by BartD
    The situation is as follows: we have Windows XP Embedded clients that have all admin shares disabled and only have read-only shares (for security reasons). What we want to do is run remote shell (DOS) commands on these machines. At first we looked at PsExec & BeyondExec applications (and all sorts of variants), but all of them rely on having at least an admin$ share, which is disabled on our systems. Telnet is not secure enough, and neither are RSHD servers. So we looked at the next obvious solution: an SSH server. We also prefer an open-source or freeware solution that is still maintained.

    I looked at freeSSH server for Windows, but that didn't run stably. I tried installing copSSH, WinSSH & OpenSSH for Windows, but none of these applications seem to work on Windows XP Embedded: the services either cannot be installed or cannot be started. I don't know why - some kind of dependency that is missing.

    So are there any other solutions out there? I don't mind having to do a local agent installation of some kind on each system, as long as the size of the software is small enough. Can someone suggest some alternatives to what I've already mentioned? Thank you very much.
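
    For what it's worth, once any SSH daemon is running on the clients, single commands can be pushed from the admin side with PuTTY's command-line client plink; the host and account below are placeholders:

        rem Run one command on a remote XP Embedded box over SSH
        plink -ssh -batch admin@192.168.0.50 "ipconfig /all"

        rem Run a whole list of commands from a local file
        plink -ssh -batch admin@192.168.0.50 -m commands.txt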

    Read the article

  • What is the quickest and safest way to test new software and revert all changes, if needed?

    - by calbar
    I'm looking for Windows software that will allow me to quickly create a "checkpoint", do whatever I might need to do to my computer - install programs/drivers/updates, create/delete personal files, reboot the system multiple times, open questionable attachments - and then revert the entire system back to when the checkpoint was created. Essentially I want Windows Restore Points that save my personal files and partitions, too.

    It sounds like disk imaging might be the ticket, but creating images is much too slow and the restore process too involved... I'm hoping to sacrifice full disaster recovery for speed. Creating a checkpoint should be as close to one-click as possible, and rolling back should be a matter of selecting a restore point and rebooting. Ding!

    I'm familiar with Sandboxie, True Image Home "Try and Decide", Returnil, and a number of other "virtual system" apps that actively "catch" changes and allow you to commit or reject them. I'm not interested in these for a number of reasons - I prefer the "cut and dry" restore point approach.

    Finally, I'll note that I've just recently become aware of Comodo Time Machine. It sounds absolutely perfect; however, a quick skim through the user forums shows more than a few horror stories of corrupted, unbootable systems. Any positive personal experience with the software to suppress my superstitions, or suggestions for more established alternatives, would be greatly appreciated - Comodo Time Machine seems relatively new. Thanks for your help!

    Read the article

  • Block SMTP sessions whose sender domain doesn't itself accept SMTP connections

    - by bignose
    I'm administrating a mail service for a small business. Their mail host's internet connection is an ADSL service with a permanent IP address. Unfortunately, many misconfigured mail systems will happily deliver to this host, but, when the host attempts to send mail back (e.g. a bounce notice, or a normal response from someone), the declared sender's domain has an MX which refuses to receive connections from this host. That misconfiguration makes their system a one-way mail sender, which is a problem.

    How can I configure Postfix on this customer's mail host to refuse SMTP sessions that declare a sender domain which itself refuses SMTP from this host? That is, if the SMTP client declares a domain that we can't make SMTP connections back to, then there's not much point accepting the incoming connection in the first place.

    Note that I'm not, as some commenters have assumed, talking about checking whether the SMTP client will receive messages. The check I want is whether the declared sender's domain (regardless of who the current SMTP client is) will accept SMTP connections from here. In other words: when we get around to sending a message back, we'll need the sender's domain to accept SMTP connections; I want to do that check before accepting the incoming session.

    I'm imagining a late check (after the low-cost checks to winnow most of the rubbish connections) that keeps the client on the other end while it attempts an SMTP client connection back to the declared domain of the sender. If that connection is rejected, the incoming one is also rejected. I'm also open to other suggestions for how this problem might be addressed (short of not using this mail host at all, which isn't an option).
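
    Postfix's built-in sender address verification is close to the check described: before accepting the message it probes the declared sender address at that domain's own MX, and rejects the session if the probe fails. It verifies a full address rather than just the domain's willingness to talk to us, but the effect is similar. A minimal sketch for main.cf (the cache location and the 550 code are choices, not requirements):

        # main.cf -- sketch of sender address verification
        smtpd_sender_restrictions =
            permit_mynetworks,
            reject_unverified_sender
        address_verify_map = btree:$data_directory/verify_cache
        unverified_sender_reject_code = 550    # default is 450 (defer); 550 rejects outright

    As with any callback-style check, this generates probe connections to other people's mail servers, so it is worth keeping it late in the restriction list, after the cheap checks.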

    Read the article

  • "Network Error - 53" while trying to mount NFS share in Windows Server 2008 client

    - by Mike B
    CentOS | Windows 2008. I've got a CentOS 5.5 server running nfsd. On the Windows side, I'm running Windows Server 2008 R2 Enterprise. I have the "File Services" server role enabled and both Client for NFS and Server for NFS are on. I'm able to successfully connect/mount the CentOS NFS share from other Linux systems but am experiencing errors connecting to it from Windows. When I try to connect, I get the following:

        C:\Users\fooadmin>mount -o anon 10.10.10.10:/share/ z:
        Network Error - 53
        Type 'NET HELPMSG 53' for more information.

    (IP and share name have been changed to protect the innocent :-) )

    Additional information:

        I've verified low-level network connectivity between the Windows client and the NFS server with telnet (to the NFS port, TCP/2049), so I know the port is open.
        I've further confirmed that inbound and outbound firewall rules are present and enabled.
        I came across a Microsoft tech note that suggested changing the "Provider Order" so "NFS Network" is above other items like Microsoft Windows Network. I changed this and restarted the NFS client - no luck.
        I've confirmed that the share folder on the NFS server is readable/writable by all (777).
        I've tried other variations of the mount command like:

            mount 10.10.10.10:/share/ z:
            mount 10.10.10.10:/share z:
            mount -o anon mtype=hard \\10.10.10.10:/share *

    No luck. As per the command output, I tried typing NET HELPMSG 53 but that doesn't tell me much - just "The network path was not found". I'm lost on how to proceed with troubleshooting. Any ideas?
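
    A few checks worth running before digging further into the Windows side, using the placeholder IP from the question; these only confirm that the export and its RPC services are reachable:

        # On the CentOS server: confirm what is actually exported
        exportfs -v
        showmount -e localhost

        # From another Linux client: confirm the export and RPC services are visible remotely
        showmount -e 10.10.10.10
        rpcinfo -p 10.10.10.10     # portmapper, mountd and nfs should all be listed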

    Read the article

  • CC.NET + SVN : Server certificate issue

    - by MSI
    I am trying to set up continuous integration in our office. Being a puny little developer, I am facing this supposedly infamous problem: "Source control operation failed: svn: OPTIONS of 'https://trunkURL': Server certificate verification failed: issuer is not trusted".

    So I tried the following solution: run the CC.NET service (the server runs as a Windows service) under a domain account (rather than the default LOCAL SYSTEM) and accept the certificate permanently from a command prompt under that user by running svn log/list on the repo. Doesn't help :(. I am getting the following in my artifact/log files (or dashboard):

        ThoughtWorks.CruiseControl.Core.CruiseControlException: Source control operation failed:
        svn: OPTIONS of 'https://TrunkURL': Server certificate verification failed: issuer is not trusted (https://ServerAdd).
        Process command: E:\(svn.exe Path) log https://TrunkURL -r "{2010-11-08T02:12:20Z}:{2010-11-08T02:13:21Z}" --verbose --xml --no-auth-cache --non-interactive
           at ThoughtWorks.CruiseControl.Core.Sourcecontrol.ProcessSourceControl.Execute(ProcessInfo processInfo)
           at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Svn.GetModifications(IIntegrationResult from, IIntegrationResult to)
           at ThoughtWorks.CruiseControl.Core.Sourcecontrol.QuietPeriod.GetModificationsWithLogging(ISourceControl sc, IIntegrationResult from, IIntegrationResult to)
           at ThoughtWorks.CruiseControl.Core.Sourcecontrol.QuietPeriod.GetModifications(ISourceControl sourceControl, IIntegrationResult lastBuild, IIntegrationResult thisBuild)
           at ThoughtWorks.CruiseControl.Core.IntegrationRunner.GetModifications(IIntegrationResult from, IIntegrationResult to)
           at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Integrate(IntegrationRequest request)

    We are using VisualSVN Server and CC.NET for this adventure. Tips and suggestions will be highly appreciated. Thanks
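
    Two things that usually resolve this, assuming Subversion 1.6 or later on the build server (the URL is the placeholder from the post): either cache the certificate for the account the service actually runs as, or tell svn to trust it on every non-interactive call.

        rem 1) In a console running *as the CC.NET service account*, accept the cert permanently ('p')
        svn list https://TrunkURL

        rem 2) Or add the trust flag to the arguments CC.NET passes to svn (svn 1.6+)
        svn log https://TrunkURL --non-interactive --trust-server-cert

    Note that svn caches accepted certificates per Windows profile (under %APPDATA%\Subversion\auth), which is why accepting the certificate as your own user doesn't help a service running as someone else.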

    Read the article

  • Bad Mumble control channel performance in KVM guest

    - by aef
    I'm running a Mumble server (Murmur) on a Debian Wheezy Beta 4 KVM guest, which runs on a Debian Wheezy Beta 4 KVM hypervisor. The guest machines are attached to a bridge device on the hypervisor system through virtio network interfaces. The hypervisor is attached to a 100 Mbit/s uplink and does IP routing between the guest machines and the rest of the Internet.

    In this setup we're experiencing a clearly recognizable lag between double-clicking a channel in the client and the channel-joining action happening. This happens with a lot of different clients between 1.2.3 and 1.2.4 on Linux and Windows systems. Voice quality and latency seem to be completely unaffected by this. Most of the time the client's information dialog states a 16 ms latency for both the voice and control channel, but the deviation for the control channel is usually a lot higher than that of the voice channel. In some situations the control channel is displayed with a 100 ms ping and a deviation of about 1000. It seems TCP performance is the problem here.

    We had no problems on an earlier setup which was in principle quite like the new one: a Debian Lenny based Xen hypervisor with a software-virtualised guest machine and an earlier version of the Mumble 1.2.3 series. The current murmurd --version says: 1.2.3-349-g315b5f5-2.1

    Read the article

  • I've just set up FreeBSD 8.0 and can't login with ssh

    - by Matt
    /etc/hosts.allow is set to allow any protocol from anywhere. I can "ssh localhost" and it works, but I simply get "connection refused" from PuTTY on another machine. Any ideas? I will try to get a copy of the sshd_server.conf file as soon as I can find a flash disk to copy it to, but I thought someone might know what needs to be set initially to permit login.

    EDIT: I think I can see why it's not working now. If I telnet to the IP address of the server, I'm seeing "MGE UPS SYSTEMS SNMP Web/Agent configuration menu. Enter Password:". Doh. OK, so the IP address is assigned by DHCP, but it seems there is already a device statically assigned to that address. I'll put in a reservation and try again.

    OK, sorted now. It was an IP address conflict. Windows DHCP isn't smart enough to check if there is something listening on the address before assigning it.
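
    For anyone landing here with the same symptom, a few quick checks on the FreeBSD side before suspecting an address conflict (which is what it turned out to be here):

        # Is sshd enabled and actually listening?
        grep sshd_enable /etc/rc.conf       # should show sshd_enable="YES"
        /etc/rc.d/sshd status
        sockstat -4 -l | grep :22

        # From the client side, running 'arp -a' twice can reveal two MACs fighting over one IP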

    Read the article

  • Sane patch schedule for Windows 2003 cluster

    - by sixlettervariables
    We've got a cluster of 75 Win2k3 nodes at work in a coarse-grained compute cluster. The cluster is behind a mountain of firewalls and resides in its own VLAN. Jobs of all sizes and types run on the cluster and all of the executables running are custom-made. (ed: additional notes on our executables) The jobs range from 30 seconds to 7 days in duration, and may contain one executable or 2000 sub-jobs (of short duration).

    Obviously we are trying to avoid the situation where our IT schedules a reboot during a 7-day production job. We have scheduling software which accommodates all of the normal tasks for a coarse-grained cluster, and we can control which machines are active for submission, etc. If WSUS were in some way scriptable (or the client could state its availability for shutdown) we could coordinate the two systems and help out.

    Currently, the patch schedule is the Sunday after Patch Tuesday, regardless of what is running on the cluster. We have to ask for an exemption every time we want to delay patching a machine for a long-running production job. Basically, while our group is responsible for the machines, we have little control over IT's patch schedule.

    Is patching monthly on Microsoft's schedule sane for a production Windows cluster? Are there software hooks in WSUS where we could say, "please don't reboot just yet"?
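
    There is no supported "hold the reboot" hook in WSUS itself. One partial stopgap is the Automatic Updates reboot policy below, which only suppresses automatic reboots while someone is logged on, so it is far from a complete answer; preferably deploy it via GPO rather than raw registry edits:

        rem Suppress automatic reboot while a user is logged on
        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v NoAutoRebootWithLoggedOnUsers /t REG_DWORD /d 1 /f

        rem Ask the client to re-detect against WSUS once the long job has finished
        wuauclt /detectnow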

    Read the article

  • How to set up a bridge with 2 NICs and a few virtual machines

    - by Bond
    Here is my situation: I have a server with 2 NICs. I have installed VirtualBox and created a few guest operating systems on it. I want these virtual machines to use a bridge. NIC2 would be used to set up this bridge, and NIC1 would be connected to the corporate network. I am not clear on how I should go about doing this. /etc/network/interfaces is the file I am trying to modify, etc. My approach is the following:

        1) Define a configuration in /etc/network/interfaces
        2) Create iptables rules for how NIC1 will forward packets to the bridge on NIC2

    Now comes the problem: I do not understand the meaning of the following lines in the configuration file:

        auto lo
        iface lo inet loopback

        # The primary network interface
        auto eth2
        iface eth2 inet manual

        auto br0
        iface br0 inet static
            address 192.168.1.14
            netmask 255.255.255.0
            network 192.168.1.0
            broadcast 192.168.1.255
            gateway 192.168.1.10
            # dns-* options are implemented by the resolvconf package, if installed
            dns-nameservers 192.168.13.2
            dns-search myserver.net
            bridge_ports eth2
            bridge_fd 9
            bridge_hello 2
            bridge_maxage 12
            bridge_stp off

    So any pointers to what the entries of the /etc/network/interfaces file should be, and which parameter is used when and where, would help me.
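
    For reference, the lines that usually cause the confusion are the bridge-specific ones (options provided by the bridge-utils package); here they are again with a comment on what each does - only the comments are added, the values are the poster's own:

        iface eth2 inet manual      # the physical NIC gets no IP of its own; it is enslaved to the bridge
        iface br0 inet static       # the bridge device carries the host's IP configuration instead
            bridge_ports eth2       # which physical interface(s) to enslave to br0
            bridge_fd 9             # forwarding delay (seconds) before a port starts forwarding
            bridge_hello 2          # STP hello packet interval
            bridge_maxage 12        # maximum age of STP messages
            bridge_stp off          # spanning tree disabled on this bridge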

    Read the article

  • External storage for 2TB of backups and 4TB of data: RAID level? Hardware vs. software?

    - by Jerry Mayers
    I have a Mac Mini set up as a media center/file server. Currently I just have a hodgepodge mess of external drives for storage. I'm maxed out, and I have some new laptops on the way with much larger drives, so I need to work out a good storage solution for backing them up as well as storing media on the server. I need around 2 TB of storage for the Time Machine backups from my various systems and around 2 TB more for media. I would like to build this to handle around 6 TB total so I have some growing room.

    Since I'm using a Mac Mini as the server, I need to use external enclosure(s) that support USB 2, FireWire 800 (preferred) or gigabit Ethernet. Performance of the system isn't a huge concern since the majority of the access from other computers is done over 802.11n. I plan on using 2 TB drives for the final version, but initially I'll try to use my existing two 1 TB drives plus some new 2 TB drives, and swap the 1 TB ones out as I fill up.

    As to the actual questions:

        Should I use hardware RAID in some enclosure? Because if the enclosure dies, I have to find an identical one to get to my data, right? Wouldn't software RAID be better, as I can use any method of connecting the drives to the system? Remember, OS X Server is my OS. What if I had to reinstall OS X - can I restore the software RAID easily?

        What RAID level should I use? For the 2 TB used for the Time Machine disk I don't see why I need RAID - just a single 2 TB drive, since it's already the backup - but for the remaining 4 TB it would be the only copy of the data, so I should build in some redundancy. I had a RAID 5 setup using a cheap RAID PCI card years ago, running a 2 TB array, and when a drive died it wanted 48 hours to rebuild. Is this crazy slow for a setup of this size, or is that to be expected?

        Any suggestions as to drive enclosures?
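
    On OS X specifically, a software mirror can be built with diskutil's AppleRAID support; a sketch, assuming the two drives show up as disk2 and disk3 (the identifiers and set name are made up, and the command erases the member disks):

        # Create a software mirror from two whole disks (destroys existing data on them)
        diskutil appleRAID create mirror MediaMirror JHFS+ disk2 disk3

        # Check status and rebuild progress later
        diskutil appleRAID list

    Because the set lives in software, the members can be moved to any enclosure or connection type the Mac can see, which speaks to the "what if the enclosure dies" concern.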

    Read the article

  • MS Excel 2010 - Using DSN + 32 bit drivers

    - by Kristiaan
    I need some advice, as I'm running into a problem and so far I have been unable to find a solution. We have a set of reports developed in MS Excel that use DSN files to connect to data sources to retrieve data. These work fine on 32-bit and 64-bit systems; however, we are moving to a terminal server environment using Windows 2008 R2 64-bit. The reports fail to run using the DSNs within this environment if we only have the 32-bit drivers installed and configured in the ODBC settings; the minute we install the 64-bit drivers, the software works. Is there a way/method of getting Excel or the DSN file to NOT use the 64-bit driver, but force it to use the 32-bit driver?

    ANSWERED - but due to a low user score I cannot "answer" my own question... Sadly there is no way to do what I want to do without a lot of very nasty and not 100% reliable registry hacks. If you need to access 32-bit ODBC data sources, the application in question has to be 32-bit. Here is a link to just one forum post I found relating to this type of problem; it appears the only way I would be able to accomplish this is to remove the 64-bit version of Office and install the 32-bit version instead: http://social.msdn.microsoft.com/Forums/en-US/accessdev/thread/5108f337-f06a-4518-afe3-d3c1abd040ef/
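
    For reference, 64-bit Windows keeps two separate ODBC Administrators, and system DSNs and drivers are tracked per bitness, which is why 64-bit Excel can only use 64-bit drivers (matching the conclusion above):

        rem 32-bit ODBC Administrator (DSNs/drivers for 32-bit applications)
        %windir%\SysWOW64\odbcad32.exe

        rem 64-bit ODBC Administrator (DSNs/drivers for 64-bit applications)
        %windir%\System32\odbcad32.exe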

    Read the article

  • Partitioning & Linux

    - by Zac
    Every tutorial on Linux-based partitioning schemes (or just partitioning in general) will tell you that a PC can have either 4 primary partitions, or 3 primaries and 1 extended. They will all also tell you that Linux (in my case, Ubuntu) can be installed on either. It's also come to my attention that it is not atypical for FHS directories such as usr/, tmp/, etc/, home/ or var/ to be mounted separately on other partitions. Several questions I am unable to find the answers to, purely for my own edification:

        (1) By "PC", are we really talking about common PC disk types, like IDE or SATA? I guess I'm wondering why PC users are limited to 4 primaries or 3 primaries + 1 extended.

        (2) I'm choking on some basic OS concepts: it is said that a partition can be mounted by a file system or an OS. So I assume this means I can somehow instruct Ubuntu to mount to one partition, and then have any part of, say, ReiserFS, mounted to another partition? How?

        (3)(a) What about creating swap partitions? Is there too much of a good thing with swap partitioning? If I have 4 GB of RAM and a 320 GB disk, what should my swap partition size be, and why?

        (3)(b) Are swap files the only way to create swap partitions? Wouldn't a Linux partitioning utility allow me to define a partition as being for virtual memory only?

        (4) Why are partitions limited to being "mounted" by just OSes and file systems? Why couldn't I write a program to take up its own, say, 512 MB partition, and then have it invoked or used by an OS installed on another partition?

    Thanks for shedding any light here... it's not critical that I know this stuff, but it's got me thinking incessantly. And when I think incessantly, I...can't......sleep....
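
    To make the "separately mounted FHS directories" idea concrete, here is a sketch of an /etc/fstab with /home and /var on their own partitions plus a dedicated swap partition; the device names and filesystem choices are made up for the example:

        # /etc/fstab -- illustrative layout only
        /dev/sda1   /       ext4    defaults   0 1
        /dev/sda2   /home   ext4    defaults   0 2
        /dev/sda3   /var    ext4    defaults   0 2
        /dev/sda5   none    swap    sw         0 0

        # A swap partition needs no swap file; it is initialised and enabled with:
        #   mkswap /dev/sda5 && swapon /dev/sda5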

    Read the article

  • What to do when a device has no driver for Windows 7 but it has Vista, XP drivers

    - by Mehper C. Palavuzlar
    This has always been a bothersome matter for me. Some devices (printers, scanners, etc.) have drivers for older versions of Windows (Vista, XP, 2000, NT) but no driver for Windows 7. What are my chances to install such devices on Windows 7? Example case: I have a Sharp printer & scanner (Sharp AR-122E N) which I have used for my old Windows XP based PC. Now I want to install it on a Windows 7 x64 based PC. Windows 7 cannot load its driver. I used the original driver CD but when I run the setup.exe (which is included in AR122EN111.exe, 6713KB), it says Cannot install driver on this operating system. Supported operating systems are: Windows 2000, XP, Vista. I tried to install the driver using compatibility settings. I tried Windows Vista and Windows XP SP3, but to no avail. The setup gave the same error. I also googled for Windows 7 driver for "Sharp AR-122E N" but it only listed the original driver that I tried. The official site of Sharp does not even list the driver for this product. In the past, the compatibility setting workaround did work for some devices, but this time it failed. What else can I do to overcome this problem?

    Read the article

  • IPv6 seems to be enabled - How do I configure it without interfering with IPv4?

    - by Mister IT Guru
    I noticed that some of my CentOS boxes have IPv6 enabled and seem to have addresses. I have no problem with this, but I would like to get a handle on it, and even connect to them using IPv6. This would really help if for any reason DHCP has a hiccup. But I'm a bit lost as to where the configuration on my CentOS box is. (I am also on Google researching this, but I like Server Fault! :) )

    I am hoping that I would be able to log into these via the VPN, because every now and then that DHCP device has a bad morning and needs to be restarted. (I'm also looking into this issue, but someone else handles that - management separation gone mad!) It's a remote client, so it would be a lot easier for me to connect to these systems, which seem to self-configure, and use them as a pivot via SSH tunnels to get to other remote devices and continue to manage them while our main route is fixed.

    I guess my questions are: how can I configure IPv6 without interfering with IPv4, and, on CentOS, can I influence this autoconfiguration I seem to be seeing?
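
    On CentOS 5, static IPv6 settings sit alongside, and independent of, the IPv4 ones in the sysconfig files; a sketch using the documentation prefix (all addresses are placeholders):

        # /etc/sysconfig/network
        NETWORKING_IPV6=yes
        IPV6_DEFAULTGW=2001:db8:1::1

        # /etc/sysconfig/network-scripts/ifcfg-eth0  (leave the IPv4 lines as they are)
        IPV6INIT=yes
        IPV6ADDR=2001:db8:1::10/64

        # Apply and inspect (the 'inet6' lines show both static and autoconfigured addresses)
        service network restart
        ip -6 addr show eth0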

    Read the article

  • Sharing music on NAS with other zunes and ipods

    - by osij2is
    After being a long-time iPod owner, I'm switching to the new Zune with its subscription model. I haven't bought a Zune yet, but I'm planning on doing so within the next month or so. I have approximately 40 GB worth of music and my girlfriend has her iPod music library at around 30 GB. I've been trying to figure out how to migrate all our music off of our laptops/desktops and centralize everything on my NAS.

    Sharing iPod music isn't too bad: sharing from one machine to all is fairly easy within the iTunes player. As far as storing all the music on a NAS goes, again, iPods aren't too bad and I imagine other systems aren't difficult. But I'm really new to the Zune and I'm beginning to run into some issues. My questions are:

        Is it possible to store all music from our iPods and Zune subscriptions and share music between the iPod/Zune within the same file share on my NAS? I'm sure it's possible to store music on a share, but I'm not sure how iTunes and the Zune software differ.

        Is there third-party software, maybe something like DoubleTwist, that can sync from the NAS to multiple desktops/laptops? I've never used DoubleTwist, but it's something I found that looks close to being what I need.

    I've never quite done this myself, so I'm trying to find a solution that can: a) store music on a network share; b) sync between different devices (Zune/iPod) seamlessly.

    Read the article

  • Downmix surround to Dolby Pro-Logic at the OS/driver level in Windows 7?

    - by davr
    First off, I'm talking about Dolby Pro-Logic, a really old technology for encoding 4 audio channels (L/R/C/SR) into two analog outputs and then extracting them again. It was used in surround sound systems in the last century.

    I have a modern PC that can output 5.1 analog audio (three outputs on the back carry six channels of audio). But I have a really old surround sound receiver that only has a two-channel L/R input, from which it extracts 4 channels of audio and outputs them to 5.1 speakers.

    What I want is some way for the OS, Windows 7, to act as if I really had 5.1 audio channels available, so applications produce surround audio, but before outputting it out of the back of my PC, apply Dolby Pro-Logic matrix encoding so that it outputs over only two channels. These two channels would then get sent to my receiver via an RCA cable, which would decode them again and drive the surround speakers. Is anything like this possible? I'm pretty sure I could do it at an application/codec level, but I'm looking for something that I just have to set once.
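
    To make it concrete what a Pro-Logic-style downmix does, here is the passive matrix in its simplest form, as a sketch only; a real Dolby Surround encoder also applies a 90-degree phase shift to the surround channel, which is omitted here:

        import numpy as np

        def prologic_downmix(left, right, center, surround):
            """Simplified passive matrix encode: 4 channels -> Lt/Rt.
            Omits the +/-90 degree surround phase shift used by real encoders."""
            k = 0.7071  # -3 dB
            lt = left + k * center - k * surround
            rt = right + k * center + k * surround
            return lt, rt

        # Toy example on a block of random samples
        l, r, c, s = np.random.randn(4, 1024) * 0.1
        lt, rt = prologic_downmix(l, r, c, s)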

    Read the article

  • Does my Oracle DBA need root access?

    - by Dr I
    I'm currently in a discussion with my Oracle DBA colleague, who requests root access on our production servers. I'm not so keen to let him have root access there. He argues that he needs it to perform some operations, like restarting the server, and gives some other obscure arguments.

    The point is that I don't agree with him, because I've set up an oracle user/group for him and a dba group that the oracle user belongs to. Everything is running smoothly and without any root permissions for now. I also think that administrative tasks like scheduled server restarts and so on need to be handled by the proper administrator (the systems administrator in our case) to avoid any kind of issue caused by a misunderstanding of how the infrastructure pieces interact.

    So, I need the help of both sysadmins and Oracle DBAs to point me in the right direction. If my colleague really needs these rights I'll grant them, but I'm basically quite afraid of doing that because of security and integrity concerns. I know that my colleague is really good as an Oracle DBA and he knows his work very well, but I've also seen very few cases where a piece of software and its admin really need root access. Once again, I'm not looking for pros/cons but rather advice on the approach I should take to deal with this situation.
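
    The usual middle ground here is targeted sudo rather than full root: grant the oracle account exactly the commands it claims to need, and nothing else. A sketch (the command list and service name are hypothetical; edit with visudo):

        # /etc/sudoers fragment -- edit with visudo; commands and service names are examples only
        Cmnd_Alias ORACLE_OPS = /sbin/service oracle restart, /sbin/shutdown -r now
        oracle  ALL=(root)  NOPASSWD: ORACLE_OPS

    Every use is then logged by sudo, which also helps with the integrity concerns.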

    Read the article

  • Struggling with the proper way to set up permissions on a Linux/Apache web server

    - by Dr. DOT
    Your expert experience and assistance are greatly appreciated here. I have been running a LAMP server for a long time, yet I still struggle with the best way to set file & directory permissions for FTP and WWW activity. My control panel is WHM/cPanel (not that it makes a difference), and out of the box:

        files are owned by the user account set up in WHM (e.g., "abc")
        files have a group setting of "abc" as well
        file permissions are created with 644
        directories are owned by "abc"
        directories have a group setting of "abc"
        directory permissions are created with 0755

    Again, these are the default permission settings. Now everything is fine with FTP activity, but please advise me if any of these file/directory settings create issues, especially with security.

    Here's where my struggle comes into play. I have PHP apps that allow a visitor to create, edit, rename, delete, etc. sub-directories and files in certain selected directories. PHP runs as "nobody" on my server. So in order to get my PHP/web apps to work, I have had to run:

        chown nobody *
        chgrp nobody *
        chmod 0777 *

    on everything in these certain selected sub-directories. I know this is probably a huge security hole (so don't ask me for any links :) but how should I set all the permissions to allow my FTP user to do his thing while allowing the PHP apps to do their thing, while also minimizing any security risks and exposures? I know that big CMS systems like Drupal, Joomla, WordPress and so on handle this. Thanks ahead of time for reading through this and offering your expert advice!
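
    One common compromise, instead of 0777 with nobody ownership, is a shared group plus the setgid bit on just the directories PHP must write to; the sketch below uses hypothetical paths and assumes PHP keeps running as nobody (suEXEC/suPHP, which run PHP as the account owner, are the cleaner long-term fix):

        # Hypothetical writable area under the "abc" account
        groupadd webwrite
        usermod -a -G webwrite abc
        usermod -a -G webwrite nobody                     # the user Apache/PHP runs as here

        chown -R abc:webwrite /home/abc/public_html/uploads
        chmod -R 2775 /home/abc/public_html/uploads       # setgid: new files inherit the group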

    Read the article

  • How can one use online backup with large amounts of static data?

    - by Billy ONeal
    I'd like to set up an offsite backup solution for about 500 GB of data that's currently stored between my various machines. I don't care about data retention rates, as this is only a backup of my data, not primary storage for it. If the backup is stored on crappy non-redundant systems, that does not matter. The data set is almost entirely static, and mostly consists of things like installers for Visual Studio and installer disk images for all of my games.

    I have found two services which meet most of these requirements: Mozy and Carbonite. However, both services impose low bandwidth caps, on the order of 50 kb/s, which prevent me from backing up a data set of this size effectively (somewhere on the order of 6 weeks), despite the fact that I get multiple MB/s upload speeds everywhere else from this location. Carbonite has the additional problem that it ignores pretty much every file in my backup set by default, because the files are mostly .iso and .vmdk files, which aren't backed up by default.

    There are other services such as EC2 which don't have such bandwidth caps, but such services are typically stored on highly redundant servers, and therefore cost on the order of 10 cents/GB/month, which is insanely expensive for storage of this kind of data set. (At $50/month I could build my own NAS to hold the data, which would pay for itself after ~2-3 months.) To be fair, they're offering quite a bit more service than I'm looking for at that price, such as public HTTP access to the data.

    Does anything exist meeting those requirements, or am I basically hosed?
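
    Back-of-the-envelope math on why the cap dominates the schedule, with the units spelled out to avoid the kb/KB ambiguity:

        def days_to_upload(gigabytes, kilobytes_per_sec):
            # GB -> KB, divide by throughput, convert seconds to days
            return gigabytes * 1024 ** 2 / kilobytes_per_sec / 86400.0

        print(round(days_to_upload(500, 50)))     # ~121 days at a steady 50 KB/s
        print(round(days_to_upload(500, 1000)))   # ~6 days at roughly 1 MB/s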

    Read the article

  • Adding Windows 7 32 bit as dual boot option

    - by djerry
    A relative of mine bought a new laptop this year on which Windows 7 (64-bit) is installed. Aside from some standard programs he uses on that laptop, he also has some software for his bike that needs to run. The developers of that program still don't support 64-bit systems, and therefore I thought about making it dual-boot, so he can still use the power of 64-bit and just boot the 32-bit version for the bike program. My questions now are:

        What are the risks involved in this operation?
        What steps need to be taken to make this dual boot successful?
        Any other ideas besides dual booting?

    Thanks in advance.

    Edit: I might have forgotten/misphrased something. The software does run on 64-bit, but it cannot find the bike connected to the computer. So I think it's a matter of drivers which aren't compatible with the 64-bit system. That's why I wanted to install 32-bit Windows, so the drivers would work.

    Read the article
