Search Results

Search found 4625 results on 185 pages for 'split tunnel'.

  • Two audio streams - headphones and speakers

    - by Sylvester
    What I want (this is probably hard for most to answer, as this is a very unusual setup) is to have two different streams of audio: one through the headphones and one through the main speakers. This means an audio splitter is not an option, as it would still carry only one stream. I can do the audio rerouting using Virtual Audio Cables; the problem is that I cannot get both headphones AND speakers to play even just one stream, let alone two separate ones. Using "split front and back audio into separate streams" is not an option either, as the motherboard's F_PANEL header itself is faulty (nothing to do with the case's front panel, just so you know; that works fine).

    So, first things first: I need the system to recognise the headphones as a separate audio device, so that Virtual Audio Cables will detect them and allow me to route the necessary audio to the headphones only. I also need to be able to have sound play through speakers and headphones together. What I want to achieve overall is this: have the ENTIRE computer's sound picked up by VAC and streamed to Line1, then have Line1 stream to the headphones. That way, whatever is being streamed is heard through the headphones, while the entire system's sounds (including those not streamed) play through the speakers.

    Read the article

  • How to configure VirtualBox server for performance at home

    - by BluJai
    I currently have two physical Ubuntu Server 10.10 servers at home: one serves as our firewall/router/DHCP/VPN server, and the other performs double duty as a file server and a VirtualBox host for an Ubuntu Desktop 10.10 machine, which I use from remote connections (via NoMachine) for many thin-client purposes that are irrelevant to my question. What I'd like to accomplish is to consolidate the two physical machines into one dedicated VirtualBox host (most likely running Ubuntu Server 10.10). Note that I'd like to stick with VirtualBox (if possible) because I'm most comfortable with it and use it daily at both home and work. Specifically, I plan to have one VM set up as the file server, another as the firewall/router/DHCP/VPN (or possibly split those a bit), and a third, which is the only current VM (already VirtualBox), as the thin-client host.

    My question comes down to performance and/or recommendations for the file server VM. The file server hosts about 6 terabytes of data across 4 drives. What I'd like to do is use raw disk access from the VM directly to the existing disks. However, I'm curious what performance advantage/disadvantage that would have compared to using shared folders from the VM host, essentially serving the whole drive as a shared folder to the VM, which would then serve it to the other machines on the network. I don't know if virtual disks would even work in this scenario, and I certainly wouldn't want a drive to be filled with just a single 1.5 TB file (a disk image).

    For context, but not to solicit additional advice: I want to virtualize these machines because I intend to regularly use VirtualBox's snapshot capabilities on the system disks (which will be virtual drives) of the VMs, and I have some physical space/power needs to address (as I mentioned, this is at home).
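
    For reference, raw disk access in VirtualBox works by creating a small VMDK descriptor that points at the physical device. A minimal sketch, assuming the data disk shows up as /dev/sdb on the host (the device name, paths, VM name, and controller name here are all hypothetical):

        VBoxManage internalcommands createrawvmdk \
            -filename /home/vbox/disks/data-raw.vmdk \
            -rawdisk /dev/sdb
        # then attach the descriptor to the VM's storage controller
        VBoxManage storageattach "fileserver" --storagectl "SATA" \
            --port 1 --device 0 --type hdd \
            --medium /home/vbox/disks/data-raw.vmdk

    The user account running the VM needs read/write access to /dev/sdb, and the raw-disk path bypasses the host page cache behavior you would get with a shared folder, which is the main performance trade-off being asked about.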

    Read the article

  • How do I dp or do just the lines, not the entire block, in Vim diff?

    - by hobbes3
    I'm currently using MacVim (Snapshot 64) and its "Split Diff by..." menu option. The file is my Django settings.py, from version 1.3.1 diffed against a fresh file from version 1.4. (Open the image in a separate window/tab to enlarge.) I know the two basic commands: do to "obtain" (and replace) a block from the other side, and dp to "put" (and replace) a block to the other side. But those two commands write the entire block, which in MacVim is the purple highlight. If you look at the 2nd block, you can see that lines 2 and 3 differ by only 2 words: mysite and hobbes3. I just want to replace per line, not per block. So is there a command to do do and dp per line, as opposed to an entire block, or do I have to type it out manually? Bonus question: I noticed that once I manually edit a block, I lose the purple highlighting. How do I "refresh" the diff to bring the highlights back without reopening the file? Please try to keep the answers Vim-general as opposed to MacVim-specific. Thanks!
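
    A sketch of the usual line-wise workaround in stock Vim (no plugins assumed): the ex forms of the diff commands accept a range, so you can visually select just the lines you want; :diffupdate re-scans the buffers and restores the highlighting after manual edits.

        V                 (linewise-select the lines in question)
        :'<,'>diffget     (pull just those lines from the other buffer)
        :'<,'>diffput     (or push just those lines to the other buffer)
        :diffupdate       (re-run the diff and restore the highlights)

    Pressing : while the visual selection is active inserts the '<,'> range automatically.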

    Read the article

  • Windows Media-based video/audio converter?

    - by acidzombie24
    This may seem like an odd question. Right now ONLY Windows Media Player, VLC, and Media Player Classic open and play my audio/video correctly. VirtualDub plays it back with the wrong framerate and loses the audio; Avidemux 2.5 seems able to dump the audio/video, but the video (as in all other apps) either has a bad framerate or is wrong (glitches and a bad framerate, or a bad dump). Nothing recognizes the audio file, and when playing the video, Avidemux (and most other things) crash. FFmpeg can't seem to split out the video or audio (using copy, -an, etc.), and this is getting me very angry. VLC also dumps the video incorrectly when I try dumping it that way. What can I use to convert the video? It's streaming, so it starts at 26 minutes in and ends at 28 (this is where apps have the problem: they don't know this and fudge everything or crash). I managed to dump the audio with Avidemux, but VirtualDub and FFmpeg say "unrecognized codec". Even if I can't convert it (it seems compressed enough), I want to at least attach the audio back into an AVI.
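
    For comparison with what was tried, a sketch of the usual FFmpeg incantations for this kind of job (the input file name is hypothetical, and the -ss/-t pair compensates for the stream starting at the 26-minute mark):

        # remux everything without re-encoding
        ffmpeg -i capture.dump -vcodec copy -acodec copy output.avi
        # or cut out just the 26:00-28:00 window, copying both streams
        ffmpeg -ss 00:26:00 -t 00:02:00 -i capture.dump -vcodec copy -acodec copy output.avi
        # split the streams into separate files
        ffmpeg -i capture.dump -an -vcodec copy video-only.avi
        ffmpeg -i capture.dump -vn -acodec copy audio-only.mka

    If these fail with "unrecognized codec" as described, the container's timestamps are probably damaged by the mid-stream capture, and a tool that rebuilds the index (VLC's record function, or a re-capture) may be the only way in.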

    Read the article

  • DNS-Based Environment Determination

    - by zvolkov
    Found the following here. The question is: where can I find more details on how exactly to implement this on Windows? Any guide or how-to, anybody? Or maybe you can provide your invaluable suggestions? Specifically, how do I make it so that "all QA servers would first resolve entries in qa.example.com first and then if that lookup failed they would try example.com"? (I'm a dev, not a DNS specialist, but our IT support has refused to help on this. :()

    Use DNS-Based Environment Determination for your servers. Do this by initially splitting your top-level domain into a number of subdomains depending on their function, and then creating DNS service names in each of the subdomains pointing to the relevant server for that service. Based on the list above we would then have:

    * clientdb.prod.example.com for Production
    * clientdb.perf.example.com for Performance Testing
    * clientdb.qa.example.com for QA
    * clientdb.dev.example.com for Development

    Servers then resolve entries in their relevant subdomain by function. That is, all QA servers would first resolve entries in qa.example.com, and then, if that lookup failed, they would try example.com. This allows you to have a single configuration entry for your client database hostname (clientdb) that resolves correctly in all environments. This technique has the added advantage of still having global services defined in a common top-level domain.

    This seems to be related to providing "split horizon" DNS service. Reading that, I see that I will probably need a separate DNS server for each environment. Is this true, or does Windows support some form of "tagging" the records to be visible depending on the requestor's IP?
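
    On the client side, what the quoted text describes is just the DNS suffix search list, which Windows appends to unqualified names; no split-horizon server is needed for that part. A minimal sketch of setting it per machine (domain names as in the quote):

        rem on each QA server, via the registry (works on older Windows versions):
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" ^
            /v SearchList /t REG_SZ /d "qa.example.com,example.com" /f

        rem or with PowerShell on Windows 8 / Server 2012 and later:
        powershell -Command "Set-DnsClientGlobalSetting -SuffixSearchList @('qa.example.com','example.com')"

    With that in place, looking up the bare name clientdb on a QA box tries clientdb.qa.example.com first and falls back to clientdb.example.com. The same list can also be pushed centrally via Group Policy.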

    Read the article

  • Software mirroring (RAID1) versus "Fake Raid" for new Windows 7 install

    - by kquinn
    I've just ordered two new hard drives for my main desktop and a copy of Windows 7 Professional 64-bit. I'd like to do a clean install of Win7 onto the new drives (leaving my old XP Pro boot partition around for a while in case something goes disastrously wrong, etc.). I want to have them set up in mirrored (RAID-1) mode. My understanding is that Win7 Pro can do software mirroring, but can I set this up directly at install time? If so, how? Note that I'd like the disk to be split into three partitions (OS / Apps & Data / Bulk data), all of which should be mirrored.

    Would it be better (more reliable or faster) to use my motherboard's hardware RAID support? My motherboard is an older nVidia nForce 680i SLI, which is not the most stable of motherboards, and I'm not sure how trustworthy its RAID1 configuration might be (or if Win7 could even detect and install onto a hardware-mirrored volume). Also, the performance characteristics of RAID1 are rather different than RAID0 or RAID5, and I'm wondering if Win7's software mirroring might actually be faster than hardware RAID1. (For example, I'm more of a Unix admin when I have to wear the sysadmin hat, and I've had great success deploying ZFS; most hardware RAID1 implementations have to read both disks and compare results to look for data errors, but ZFS can read from only one disk in the mirror and just use the built-in checksum, meaning it can have up to 2x the number of reads in flight, as long as there's no data corruption.)

    Edit: Okay, my question about whether Windows 7 can do software mirroring has been answered, and it can. I'm still unsure whether Windows software RAID or my motherboard's hardware "fake RAID" function is a better choice, though. Remember, I'm only interested in mirroring, not the more complicated striping or parity operations that generally show the poor performance of crappy motherboard RAID solutions.
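
    On the "at install time" part: as far as I know the Windows 7 installer cannot create a mirror itself; the usual sequence is to install to the first drive, then mirror each volume afterwards from Disk Management or diskpart. A sketch (disk and volume numbers are illustrative):

        diskpart
        DISKPART> list disk
        DISKPART> select disk 0
        DISKPART> convert dynamic
        DISKPART> select disk 1
        DISKPART> convert dynamic
        DISKPART> select volume 1
        DISKPART> add disk=1

    Repeat the select volume / add pair for each of the three volumes; add disk=1 mirrors the currently selected simple volume onto disk 1. Note that converting to dynamic disks is required for Windows software mirroring, and the fake-RAID alternative would instead be configured in the nForce BIOS before installing.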

    Read the article

  • Can I get ethernet out of my Verizon FIOS set-top box?

    - by Tom Hughes
    Setup: my home network is long and skinny, and the FIOS-connected router is all the way at one end of the apartment. At the other end, far away (and a floor higher), is my HD TV, which gets a cable-TV signal from a Verizon set-top box that is coax-connected back to the FIOS on-premises equipment. Wi-Fi won't work; the apartment is too stretched out, with old, thick walls and floors.

    Goal: I think there are three ways to get ethernet back to where the HD TV is:

    1) Run a cable! This isn't crazy, but it isn't cheap either (my building won't let me do it myself; it involves hiring an electrician because the cable would run partly through the public hallway ceiling).
    2) Split the coax near the TV and put in... a MoCA device?
    3) Somehow tease the set-top box, which has an RJ-45 (ethernet) port on the back, into giving me network access.

    Question: any other choices? And is one choice better than the others? #3 is by far the most desirable because it would involve the least wiring, but I can't find any resources to help make it happen. #2 is a bit scary; I don't want to degrade service to the TV, or anywhere else for that matter.

    Read the article

  • mod_rewrite REQUEST_FILENAME doesn't contain absolute path

    - by Paul Dixon
    I have a problem with a file test operation in a mod_rewrite RewriteCond entry which tests whether %{REQUEST_FILENAME} exists. It seems that rather than %{REQUEST_FILENAME} being an absolute path, I'm getting a path rooted at the DocumentRoot instead.

    Configuration: I have this inside a <VirtualHost> block in my Apache 2.2.9 configuration:

    RewriteEngine on
    RewriteLog /tmp/rewrite.log
    RewriteLogLevel 5
    # push virtually everything through our dispatcher script
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule ^/([^/]*)/?([^/]*) /dispatch.php?_c=$1&_m=$2 [qsa,L]

    Diagnostics attempted: that rule is a common enough idiom for routing requests for non-existent files or directories through a script. Trouble is, it's firing even if a file does exist. If I remove the rule, I can request normal files just fine. But with the rule in place, these requests get directed to dispatch.php.

    Rewrite log trace: here's what I see in rewrite.log:

    init rewrite engine with requested uri /test.txt
    applying pattern '^/([^/]*)/?([^/]*)' to uri '/test.txt'
    RewriteCond: input='/test.txt' pattern='!-f' => matched
    RewriteCond: input='/test.txt' pattern='!-d' => matched
    rewrite '/test.txt' -> '/dispatch.php?_c=test.txt&_m='
    split uri=/dispatch.php?_c=test.txt&_m= -> uri=/dispatch.php, args=_c=test.txt&_m=
    local path result: /dispatch.php
    prefixed with document_root to /path/to/my/public_html/dispatch.php
    go-ahead with /path/to/my/public_html/dispatch.php [OK]

    So it looks to me like REQUEST_FILENAME is being presented as a path from the document root rather than the filesystem root, which is presumably why the file test operator fails. Any pointers for resolving this gratefully received...
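
    For readers hitting the same trace: in per-VirtualHost context, mod_rewrite runs before URL-to-filename translation, so REQUEST_FILENAME still holds the URI at that point. The usual workaround is to prepend the document root explicitly, e.g.:

        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-d
        RewriteRule ^/([^/]*)/?([^/]*) /dispatch.php?_c=$1&_m=$2 [qsa,L]

    Alternatively, the same rules placed in .htaccess or a <Directory> block run after translation, where REQUEST_FILENAME is already an absolute filesystem path.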

    Read the article

  • Architecture for highly available MySQL with automatic failover in physically diverse locations

    - by Warner
    I have been researching high-availability (HA) solutions for MySQL between data centers. For servers located in the same physical environment, I have preferred dual master with heartbeat (floating VIP) using an active/passive approach. The heartbeat runs over both a serial connection and an ethernet connection. Ultimately, my goal is to maintain this same level of availability, but between data centers. I want to dynamically fail over between both data centers without manual intervention and still maintain data integrity.

    There would be BGP on top, and web clusters in both locations, which would have the potential to route to the databases on both sides. If the internet connection went down at site 1, clients would route through site 2 to the web cluster, and then to the database in site 1 if the link between both sites were still up. With this scenario, due to the lack of a physical (serial) link, there is a greater chance of split brain. If the WAN went down between both sites, the VIP would end up at both sites, where a variety of unpleasant scenarios could introduce desync. Another potential issue I see is the difficulty of scaling this infrastructure to a third data center in the future.

    The network layer is not a focus, and the architecture is flexible at this stage. Again, my focus is a solution for maintaining data integrity as well as automatic failover with the MySQL databases. I would likely design the rest around this. Can you recommend a proven solution for MySQL HA between two physically diverse sites? Thank you for taking the time to read this; I look forward to reading your recommendations.

    Read the article

  • What are best practices on virtual lab/test bed architecture?

    - by WooYek
    I am currently preparing a new small virtual environment for development and testing with Windows Server + SQL Server + AD + SharePoint + Exchange + IIS (ASP.NET) + BizTalk + ?, for a small (up to 5) dev team. What are the pros and cons of different approaches, e.g. splitting services up over different machines versus packing everything onto one machine each? In your experience, what are the best practices I should follow in terms of architecture and the placement of the various systems/servers? What should be shared and what should be split per person?

    I would like to achieve some flexibility for the dev and testing process (so team members would not be stepping on each other's toes) and limit the administrative effort needed to propagate settings, integrate work items, and revert changes when something breaks. It's not supposed to be an everyday development working environment; it's more a tier-2 developer testing environment, and not yet an integration or QA testing environment with a formal change process.

    IMO the two borderline solutions are: creating one all-inclusive machine for each dev team member, giving them the freedom to manage it; or creating a shared environment managed by one person, with a somewhat formalized change-request process. What golden mean would you recommend, and why?

    Read the article

  • Campus VLAN Segmentation - By OS?

    - by Moduspwnens
    We've been thinking through rearranging our network and VLAN configuration. Here's the situation. We already have our servers, VoIP phones, and printers on their own VLANs, but our problem lies with end-user devices. There are just too many to lump on the same VLAN without being hammered with broadcasts! Our current segmentation strategy has them split into VLANs like this:

    * Student iPads
    * Staff iPads
    * Student MacBooks
    * Staff MacBooks
    * Gaming devices
    * Staff (other)
    * Student (other)

    (Note that our network has many more iPads and MacBooks than most.) Since the primary reason we're splitting them is just to put them in smaller groups, this has been working for us (for the most part). However, it requires our staff to maintain access control lists (MAC addresses) of all devices belonging to these groups. It also has the unfortunate side effect of illogically grouping broadcast traffic. For example, using this setup, students on opposite ends of campus using iPads will share broadcasts, but two devices belonging to the same user (in the same room) will likely be on completely separate VLANs.

    I feel like there must be a better way of doing this. I've done a lot of research and I'm having trouble finding instances of this kind of segmentation being recommended. The feedback on the most relevant SO question seems to point toward VLAN segmentation by building/physical location. I feel like that makes sense because, logically, at least among miscellaneous end users, broadcasts will typically be intended for nearby devices. Are there other campuses/large-scale networks out there segmenting VLANs based on end-system OS? Is this a typical configuration? Would VLAN segmentation based on physical location (or some other criterion) be more effective?

    EDIT: I've been told that we will soon be able to dynamically determine device OS without maintaining access lists, although I'm not sure how much that affects the answers to these questions.

    Read the article

  • Site-to-site VPN using MD5 instead of SHA and getting regular disconnection

    - by Steven
    We are experiencing some strange behavior with a site-to-site IPsec VPN that goes down about every week for 30 minutes (I am told 30 minutes exactly). I don't have access to the logs, so it's difficult to troubleshoot. What is also strange is that the two VPN devices are set to use the SHA hash algorithm but apparently end up agreeing to use MD5. Does anybody have a clue, or is this just insufficient information?

    Edit: here is an extract of the log of one of the two VPN devices, a Cisco 3000-series VPN concentrator:

    27981 03/08/2010 10:02:16.290 SEV=4 IKE/41 RPT=16120 xxxxxxxx IKE Initiator: New Phase 1, Intf 2, IKE Peer xxxxxxxx local Proxy Address xxxxxxxx, remote Proxy Address xxxxxxxx, SA (L2L: 1A)
    27983 03/08/2010 10:02:56.930 SEV=4 IKE/41 RPT=16121 xxxxxxxx IKE Initiator: New Phase 1, Intf 2, IKE Peer xxxxxxxx local Proxy Address xxxxxxxx, remote Proxy Address xxxxxxxx, SA (L2L: 1A)
    27986 03/08/2010 10:03:35.370 SEV=4 IKE/41 RPT=16122 xxxxxxxx IKE Initiator: New Phase 1, Intf 2, IKE Peer xxxxxxxx local Proxy Address xxxxxxxx, remote Proxy Address xxxxxxxx, SA (L2L: 1A)
    [… same continues for another 15 minutes …]
    28093 03/08/2010 10:19:46.710 SEV=4 IKE/41 RPT=16140 xxxxxxxx IKE Initiator: New Phase 1, Intf 2, IKE Peer xxxxxxxx local Proxy Address xxxxxxxx, remote Proxy Address xxxxxxxx, SA (L2L: 1A)
    28096 03/08/2010 10:20:17.720 SEV=5 IKE/172 RPT=1291 xxxxxxxx Group [xxxxxxxx] Automatic NAT Detection Status: Remote end is NOT behind a NAT device / This end IS behind a NAT device
    28100 03/08/2010 10:20:17.820 SEV=3 IKE/134 RPT=79 xxxxxxxx Group [xxxxxxxx] Mismatch: Configured LAN-to-LAN proposal differs from negotiated proposal. Verify local and remote LAN-to-LAN connection lists.
    28103 03/08/2010 10:20:17.820 SEV=4 IKE/119 RPT=1197 xxxxxxxx Group [xxxxxxxx] PHASE 1 COMPLETED
    28104 03/08/2010 10:20:17.820 SEV=4 AUTH/22 RPT=1031 xxxxxxxx User [xxxxxxxx] Group [xxxxxxxx] connected, Session Type: IPSec/LAN-to-LAN
    28106 03/08/2010 10:20:17.820 SEV=4 AUTH/84 RPT=39 LAN-to-LAN tunnel to headend device xxxxxxxx connected
    28110 03/08/2010 10:20:17.920 SEV=5 IKE/25 RPT=1291 xxxxxxxx Group [xxxxxxxx] Received remote Proxy Host data in ID Payload: Address xxxxxxxx, Protocol 0, Port 0
    28113 03/08/2010 10:20:17.920 SEV=5 IKE/24 RPT=88 xxxxxxxx Group [xxxxxxxx] Received local Proxy Host data in ID Payload: Address xxxxxxxx, Protocol 0, Port 0
    28116 03/08/2010 10:20:17.920 SEV=5 IKE/66 RPT=1290 xxxxxxxx Group [xxxxxxxx] IKE Remote Peer configured for SA: L2L: 1A
    28117 03/08/2010 10:20:17.930 SEV=5 IKE/25 RPT=1292 xxxxxxxx Group [xxxxxxxx] Received remote Proxy Host data in ID Payload: Address xxxxxxxx, Protocol 0, Port 0
    28120 03/08/2010 10:20:17.930 SEV=5 IKE/24 RPT=89 xxxxxxxx Group [xxxxxxxx] Received local Proxy Host data in ID Payload: Address xxxxxxxx, Protocol 0, Port 0
    28123 03/08/2010 10:20:17.930 SEV=5 IKE/66 RPT=1291 xxxxxxxx Group [xxxxxxxx] IKE Remote Peer configured for SA: L2L: 1A
    28124 03/08/2010 10:20:18.070 SEV=4 IKE/173 RPT=17330 xxxxxxxx Group [xxxxxxxx] NAT-Traversal successfully negotiated! IPSec traffic will be encapsulated to pass through NAT devices.
    28127 03/08/2010 10:20:18.070 SEV=4 IKE/49 RPT=17332 xxxxxxxx Group [xxxxxxxx] Security negotiation complete for LAN-to-LAN Group (xxxxxxxx) Responder, Inbound SPI = 0x56a4fe5c, Outbound SPI = 0xcdfc3892
    28130 03/08/2010 10:20:18.070 SEV=4 IKE/120 RPT=17332 xxxxxxxx Group [xxxxxxxx] PHASE 2 COMPLETED (msgid=37b3b298)
    28131 03/08/2010 10:20:18.750 SEV=4 IKE/41 RPT=16141 xxxxxxxx Group [xxxxxxxx] IKE Initiator: New Phase 2, Intf 2, IKE Peer xxxxxxxx local Proxy Address xxxxxxxx, remote Proxy Address xxxxxxxx, SA (L2L: 1A)
    28135 03/08/2010 10:20:18.870 SEV=4 IKE/173 RPT=17331 xxxxxxxx Group [xxxxxxxx] NAT-Traversal successfully negotiated! IPSec traffic will be encapsulated to pass through NAT devices.

    Read the article

  • Performance: Nginx SSL slowness or just SSL slowness in general?

    - by Mauvis Ledford
    I have an Amazon Web Services setup with an Apache instance behind Nginx, with Nginx handling SSL and serving everything but the .php pages. In my ApacheBench tests I'm seeing this for my most expensive API call (which caches via Memcached):

    100 concurrent calls to API call (http): 115 ms (median), 260 ms (max)
    100 concurrent calls to API call (https): 6.1 s (median), 11.9 s (max)

    I've done a bit of research, disabled the most expensive SSL ciphers, and enabled SSL caching (I know it doesn't help in this particular test). Can you tell me why my SSL is taking so long? I've set up a massive EC2 server with 8 CPUs, and even applying consistent load to it only brings it up to 50% total CPU. I have 8 Nginx workers set and a bunch of Apache workers. Currently this whole setup is on one EC2 box, but I plan to split it up and load balance it. There have been a few questions on this topic, but none of those answers (disable expensive ciphers, cache SSL) seem to do anything. Sample results below:

    $ ab -k -n 100 -c 100 https://URL
    This is ApacheBench, Version 2.3 <$Revision: 655654 $>
    Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
    Licensed to The Apache Software Foundation, http://www.apache.org/

    Benchmarking URL.com (be patient).....done

    Server Software:        nginx/1.0.15
    Server Hostname:        URL.com
    Server Port:            443
    SSL/TLS Protocol:       TLSv1/SSLv3,AES256-SHA,2048,256

    Document Path:          /PATH
    Document Length:        73142 bytes

    Concurrency Level:      100
    Time taken for tests:   12.204 seconds
    Complete requests:      100
    Failed requests:        0
    Write errors:           0
    Keep-Alive requests:    0
    Total transferred:      7351097 bytes
    HTML transferred:       7314200 bytes
    Requests per second:    8.19 [#/sec] (mean)
    Time per request:       12203.589 [ms] (mean)
    Time per request:       122.036 [ms] (mean, across all concurrent requests)
    Transfer rate:          588.25 [Kbytes/sec] received

    Connection Times (ms)
                  min  mean[+/-sd] median   max
    Connect:       65  168   64.1    162     268
    Processing:   385 6096 3438.6   6199   11928
    Waiting:      379 6091 3438.5   6194   11923
    Total:        449 6264 3476.4   6323   12196

    Percentage of the requests served within a certain time (ms)
      50%   6323
      66%   8244
      75%   9321
      80%   9919
      90%  11119
      95%  11720
      98%  12076
      99%  12196
     100%  12196 (longest request)
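
    One observation from the output itself: "Keep-Alive requests: 0" despite the -k flag, so every request may be paying a full RSA-2048 handshake. A sketch of the nginx directives usually tuned for this symptom (values illustrative, not a definitive fix):

        # inside the http or server block
        ssl_session_cache    shared:SSL:10m;   # share resumed sessions across workers
        ssl_session_timeout  10m;
        keepalive_timeout    70;               # keep TLS connections open for reuse
        keepalive_requests   100;
        ssl_prefer_server_ciphers on;

    If keep-alive and session resumption are both working, only the first request per client should carry the expensive handshake; if the medians stay in seconds after that, the bottleneck is likely elsewhere (e.g., the proxied PHP call rather than TLS).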

    Read the article

  • Tri-head Linux system with Xmonad: is it possible to have HW acceleration?

    - by progo
    What means exist to have three monitors, all controlled by Xmonad, and have hardware 3D acceleration as well? I had the pleasure of using three monitors earlier this year, and while Xmonad and Xinerama handle three monitors easily, I had to throw in an extra display driver and also let go of Nvidia's own TwinView (which is a hack on top of Xinerama). This left me with no HW acceleration and some flickering, as double buffering wouldn't work with certain applications. However, the three monitors handled so beautifully that I had a hard time coming back to two.

    I understand the easiest way to achieve a HW-accelerated tri-head combo is to split into two Xorg servers, but I wouldn't be able to move windows between them, so I'm not really into that solution. What's more, having a cheap old PCI card alongside an even slightly better PCIe card seemed to slow things down. Even when I occasionally disabled the third monitor in the Xorg configuration, I couldn't get HW acceleration to work; only after I physically disconnected the old PCI card could I get games back in business.

    Would a Matrox DualHead2Go/TripleHead2Go and a powerful Nvidia GPU do the trick? I understand Xmonad can be configured to "believe" that a "single" (as the DualHead2Go will merge it) 3360x1050 display is actually two different ones, so that Xmonad's Mod-w and Mod-e would work properly there?
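
    On that last point: the stock XMonad.Layout.LayoutScreens module does exactly that split. A minimal sketch of the two keybindings from its documentation, dropped into an existing xmonad.hs key list (the half-and-half geometry is just one choice):

        import XMonad
        import XMonad.Layout.LayoutScreens
        import XMonad.Layout.TwoPane

        -- in the keybinding list (modm is the modifier key in scope there):
        -- split the single merged screen into two side-by-side virtual screens
        , ((modm .|. shiftMask,                 xK_space), layoutScreens 2 (TwoPane 0.5 0.5))
        -- undo the split and return to the real screen layout
        , ((modm .|. controlMask .|. shiftMask, xK_space), rescreen)

    After the split, Mod-w and Mod-e address the two virtual halves just as they would two physical Xinerama screens.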

    Read the article

  • Can Subject Alternative Name accommodate multiple virtual mail domains?

    - by Lawrence
    I am currently running a postfix server with self-signed certificates, serving one mail domain, mycompany.com; the mail server is mail.mycompany.com, and so is the CN of the certificate. Now I need to add a new domain, mycompany.net, to the same server. Since the users already have the root of the old certificate, I'd like to reuse that root. However, I'd like to issue a new certificate so users pointing Outlook/Thunderbird at mail.mycompany.net for SMTP do not get warnings. If I understand correctly, if I issue a new certificate with CN=mail.mycompany.com and a subjectAltName=DNS:mail.mycompany.net, and have postfix serve this, the client will not complain either way about the CN not matching the target hostname. Am I correct in this assumption, or am I misunderstanding the concept of Subject Alternative Name? Just to head off that suggestion: I do not want users on mycompany.net addresses to use the mycompany.com server name, because I might have to split up into two different locations (for non-technical reasons), and I want an easily migratable setup.
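
    One detail worth knowing: when a SAN extension is present, clients match against the SAN list and generally ignore the CN, so both names should appear as DNS entries. A sketch of generating such a CSR with OpenSSL (the -addext flag needs OpenSSL 1.1.1 or later; on older versions the SAN goes in a config file; key and file names here are hypothetical):

        openssl req -new -key mail.key \
            -subj "/CN=mail.mycompany.com" \
            -addext "subjectAltName=DNS:mail.mycompany.com,DNS:mail.mycompany.net" \
            -out mail.csr
        # sign with the existing root as usual, making sure the signing step
        # copies the SAN extension into the issued certificate

    With that certificate served by postfix, connections to either hostname should validate cleanly against the already-distributed root.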

    Read the article

  • Backup plan for linux webserver in small business?

    - by radman
    Hi, I am currently in the process of writing a backup plan for the webserver used by my business. I am very new to this area and have a few ideas about how things should work, but am unsure what tools to use and what sort of restore process is appropriate. I'm looking for something relatively simple; it doesn't have to be 100% paranoid, just enough to give me a reliable backup. Speed is not of the essence, and there is not going to be a live fallback in place. The backup will be onto a single HDD that will be stored onsite (no option for offsite as yet). Backups will take place weekly. I am constrained by both time and money, which is why I'm aiming for a good-enough solution.

    Is taking an image of the webserver's system drive periodically and using that as the backup appropriate? Should I be testing that the backups restore correctly every time I perform one? This is a bit broad, but what setup would you use if you were in my place, given the services I am running? Should I add additional machines and split the services? Any advice is much appreciated! Server details below.

    Platform: Linux (Ubuntu Server)
    Running: mail server, SVN server, MediaWiki, WordPress, Apache web server
    Hardware: single 500 GB SATA drive
    Architecture: single machine behind a router (with firewall), accessible from the internet
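
    Not a recommendation so much as a starting sketch: for a weekly, good-enough backup of exactly these services, a cron-driven script along these lines covers the data (every path and credential arrangement here is an assumption to adapt):

        #!/bin/sh
        # weekly backup sketch; run from cron, e.g. 0 3 * * 0 /usr/local/sbin/backup.sh
        DEST="/mnt/backupdisk/$(date +%Y-%m-%d)"
        mkdir -p "$DEST"
        # databases behind MediaWiki/WordPress (assumes MySQL with stored credentials)
        mysqldump --all-databases | gzip > "$DEST/mysql.sql.gz"
        # SVN repository, dumped in a version-independent format
        svnadmin dump /var/svn/repo | gzip > "$DEST/svn.dump.gz"
        # mail spools, web roots, and system configuration
        tar czf "$DEST/mail.tar.gz" /var/mail
        tar czf "$DEST/www.tar.gz"  /var/www
        tar czf "$DEST/etc.tar.gz"  /etc

    Dump-based backups like these are easier to restore selectively than a raw disk image, and a periodic test restore onto a scratch directory is cheap insurance either way.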

    Read the article

  • Website Use Monitoring for 3 People

    - by linkedlinked
    I work in an IT startup with 2 partners, and I'm the programmer/IT guy; in other words, the workhorse. To make a long story short, I'm doing most of the work right now while they spend all day on Facebook. That's OK, because they're paying my salary, but if the project fails, I'm sure they'll blame me for it (I'm doing my best to make sure that doesn't happen!), and I want some sort of recourse. I already have an app that blocks time-wasters on my local PC and keeps logs of when the app is enabled (so I can say "I had Facebook blocked from 9am to 5pm today"). Is there any way I can get a brief summary of the most heavily visited sites, split up by client PC? At the end of the month, I want to be able to say "You both load Facebook, on average, every 10 minutes. You spend hours a day on YouTube, and haven't opened our bug tracker in weeks," and maybe have a nifty chart or graph to match it.

    We have a crappy D-Link router and no IT budget. They are both on Windows Vista; I run Ubuntu Linux. I don't want to install any monitoring software on their PCs, but I'm totally fine with, say, routing all the network traffic through my machine. I guess I can think of lots of ways to accomplish this (telnet into JSSH and list open tabs? log all the DNS requests, per domain? I've even thought of setting up a webcam on my desk and just keeping 5-minute snapshots...), I just don't really know where to start. Any advice is appreciated, thanks!
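
    The DNS-logging idea is probably the lightest-weight option. A sketch, assuming the Ubuntu machine runs dnsmasq and the other PCs point at it for DNS (set by hand or via the router's DHCP settings):

        # /etc/dnsmasq.conf
        log-queries
        log-facility=/var/log/dnsmasq.log

        # monthly summary: lookup counts per (client, domain) pair
        # (field positions depend on the log prefix; adjust as needed)
        grep 'query\[' /var/log/dnsmasq.log \
            | awk '{print $NF, $(NF-2)}' | sort | uniq -c | sort -rn | head -20

    This gives per-client, per-domain counts without touching the Vista machines, though it records lookups rather than time spent, so "every 10 minutes" claims should be phrased accordingly.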

    Read the article

  • Move 53,800+ files into 54 separate folders with ~1000 files each?

    - by ane
    Trying to import 53,800+ individual files (messages) using Gmail's POP fetcher. Gmail understandably refuses, giving the error: "Too many messages to download. There are too many messages on the other server." The folder in question looks like similar to: /usr/home/customer/Maildir/cur/1203672790.V57I586f04M867101.mail.net:2,S /usr/home/customer/Maildir/cur/1203676329.V57I586f22M520117.mail.net:2,S /usr/home/customer/Maildir/cur/1203677194.V57I586f26M688004.mail.net:2,S /usr/home/customer/Maildir/cur/1203679158.V57I586f2bM182864.mail.net:2,S /usr/home/customer/Maildir/cur/1203680493.V57I586f33M740378.mail.net:2,S /usr/home/customer/Maildir/cur/1203685837.V57I586f0bM835200.mail.net:2,S /usr/home/customer/Maildir/cur/1203687920.V57I586f65M995884.mail.net:2,S ... Using the shell (tcsh, sh, etc. on FreeBSD), what one-line command can I type to split this directory full of files into separate folders so Gmail only sees 1000 messages at a time? Something with find or ls | xargs mv maybe. Whatever is fastest. The desired output directory would now look something like: /usr/home/customer/Maildir/cur/1203672790.V57I586f04M867101.mail.net:2,S /usr/home/customer/Maildir/cur/1203676329.V57I586f22M520117.mail.net:2,S ... /usr/home/customer/set1/ (contains messages 1-1000) /usr/home/customer/set2/ (contains messages 1001-2000) /usr/home/customer/set3/ (etc.) Ideally, cron could run another command to automatically reverse the process in 1000 message increments every hour. So Gmail only sees & downloads 1000 at a time.

    Read the article

  • How to set up the jdbc driver to connect to hsqldb from libreoffice?

    - by rumtscho
    I am trying to "split" a LibreOffice .odb file into an HSQL database and an OpenOffice document containing the forms and macros. I am trying to follow the instructions from this thread:

    Within a few minutes you can convert your embedded HSQLDB to a stand-alone HSQLDB, which is just a very fine database engine.
    1) Download and extract the current version from http://hsqldb.org/ and point the Java class path in Tools > Options > Java to the new hsqldb.jar.
    2) Extract the database folder from your embedded database and rename the files data, properties, script to name.data, name.properties, name.script, where "name." is an arbitrary name prefix.
    3) Connect a Base document to an existing JDBC database such as jdbc:hsqldb:file:/home/chenier/hsqldb/name;default_schema=true;shutdown=true;hsqldb.default_table_type=cached;get_column_name=false (again, "name" refers to your own file name prefix). This local single-user connection gives you much more than the embedded HSQLDB.
    4) Copy queries, forms and reports from the old database over to the new one.

    The wizard presents me with a window expecting two inputs: a "Datasource URL" and a "JDBC driver class". As far as I can tell, the tutorial above only tells me what to put into the Datasource URL. As for the JDBC driver class, I have no idea what to write into this field. I tried the fully-qualified name of the Java class, org.hsqldb.jdbc.JDBCDriver, as given in the HSQLDB documentation. When that failed, I tried the physical path /var/lib/hsqldb/lib/hsqldb.jar (although that should have been unnecessary, because I had first pointed the class path to it as described under 1) and then restarted LibreOffice). In both cases, "Test class" failed with the message "The JDBC driver could not be loaded". OpenOffice's documentation doesn't say anything sensible about the field; it was something like "enter the JDBC driver in this box". Any ideas what I should enter there to get the connection working?
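
    For anyone hitting the same wizard: the driver class name depends on the HSQLDB generation, and the databases embedded in OpenOffice/LibreOffice are the 1.8.x series, whose class name differs from the 2.x one in the current docs. A sketch of the two pairings (URL as in the tutorial):

        Datasource URL:    jdbc:hsqldb:file:/home/chenier/hsqldb/name;default_schema=true;shutdown=true
        HSQLDB 1.8.x jar:  JDBC driver class = org.hsqldb.jdbcDriver
        HSQLDB 2.x jar:    JDBC driver class = org.hsqldb.jdbc.JDBCDriver

    "Test class" failing with either name usually means the jar set under Tools > Options > Java has not actually been picked up yet; a full restart of LibreOffice, including any quickstarter process, is needed after changing the class path.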

    Read the article

  • Migrating away from LVM

    - by Kye
    I have an Ubuntu home media server setup with 4.5 TB split across a few hard drives (1x3TB, 2x1TB), and I'm using LVM2 to manage the volumes. I have recently added a 60 GB SSD to my server, and I wish to use it to house the root partition of my server (which is currently inside the LVM group). I don't want to simply add it to the LVM volume group, because (AFAIK) there's no way to ensure that the SSD will be used for the root filesystem; if I just throw it at the VG, it may be used to house my media, which would defeat the purpose of having the SSD in the first place. I feel that my only solution is to somehow remove my root partition from the LVM setup and copy it across to the SSD. My boot partition is, of course, not part of the LVM group.

    My disk setup is as follows:

    60 GB SSD: empty
    1 TB HDD: /boot, LVM space
    1 TB HDD: LVM space
    3 TB HDD: LVM space

    I have a few logical volumes: my root (/), a 'media' volume for my media collection, a 'backup' volume for my network backups, etc. Does anyone have any advice on how to go about this? My end goal is to have the 60 GB SSD used for my boot and root partitions, with everything else on the 3TB/1TB/1TB hard drives.
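
    A sketch of the usual copy-and-repoint approach, done from a live CD/USB so the root filesystem is not in use (every device and volume-group name below is an assumption; yours will differ):

        # make a filesystem on the SSD and copy the root LV across
        mkfs.ext4 /dev/sda1                          # the SSD's new root partition
        mkdir -p /mnt/old /mnt/new
        mount -o ro /dev/mediaserver/root /mnt/old   # the existing root LV
        mount /dev/sda1 /mnt/new
        rsync -aAXH /mnt/old/ /mnt/new/
        # repoint /etc/fstab at the new root (by UUID), then reinstall GRUB
        blkid /dev/sda1                              # note the UUID for fstab
        # once the SSD boots cleanly, reclaim the space:
        # lvremove /dev/mediaserver/root

    Moving /boot onto the SSD is the same copy-and-repoint dance plus a grub-install against the SSD, so the machine no longer depends on the 1 TB drive to start.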

    Read the article

  • How to Eliminate Tape Backup and Off-site Storage Service?

    - by Daniel Lucas
    PLEASE READ THE UPDATES AT THE BOTTOM. THANKS! ;)

    Environment info (all Windows):
    * 2 sites
    * 30 servers at site #1 (3 TB of backup data)
    * 5 servers at site #2 (1 TB of backup data)
    * MPLS backbone tunnel connecting site #1 and site #2

    Current backup process:

    Online backup (disk-to-disk): site #1 has a server running Symantec Backup Exec 12.5 with four 1 TB USB 2.0 disks. BE jobs for full backups run nightly on all servers in site #1 to these disks. Site #2 backs up to a central file server there using software they already had when we purchased them. A BE job pulls that data nightly to site #1 and stores it on said disks.

    Off-site backup (tape): connected to our backup server is a tape drive. BE backs up the external disks to tape once a week, which gets picked up by our off-site storage company. Obviously we rotate two tape libraries: one is always here and one is always there.

    Requirements:
    * Eliminate the need for tape and the off-site storage service by doing disk-to-disk at each site and replicating site #1 to site #2 and vice versa.
    * Software-based solution, as hardware options have been too pricey (e.g., SonicWall, Arkeia).
    * Agents for Exchange, SharePoint, and SQL.

    Some ideas so far:

    Storage: a DroboPro at each site with an initial 8 TB of storage (these are expandable up to 16 TB at present). I like these because they are rackmountable, allow disparate drives, and have iSCSI interfaces. They are relatively cheap, too.

    Software: Symantec Backup Exec 12.5 already has all the agents and licenses we need. I'd like to keep using it unless there is a better solution, similarly priced, that does everything BE does plus deduplication and replication.

    Server: because there is no more need for a SCSI adapter (for the tape drive), we are going to virtualize our backup server, as it is currently the only physical machine save for the SQL boxes.

    Problems: when replicating between sites, we want as little data as possible to go across the pipe. There is no deduplication or compression in what I have laid out here so far. The files being replicated are BE's virtual tape libraries from our disk-to-disk backup. Because of this, each of those huge files will go across the wire every week, because they change every day.

    And finally, the question: is there any software out there that does deduplication, or at least compression, to handle just our site-to-site replication? Or, looking at our setup, is there any other solution that I am missing that might be cheaper, faster, better? Thanks. Sorry so long.

    UPDATE 1: I've set a bounty on this question to get it more attention. I'm looking for software that will handle replication of data between two sites using the least amount of data possible (either compression, deduplication, or some other method). Something similar to rsync would work, but it needs to be native to Windows and not a port involving shenanigans to get up and running. I prefer a GUI-based product, and I don't mind shelling out a few bones if it works. Please, answers that meet the above criteria only. If you don't think one exists, or if you think I'm being too restrictive, keep it to yourself. If after seven days there is no answer at all, so be it. Thanks again, everyone.

    UPDATE 2: I really appreciate everyone coming forward with suggestions. There is no way for me to try all of these before the bounty expires. For now I'm going to let this bounty run out, and whoever has the most votes will get the 100 rep points. Thanks again!

    Read the article

  • Excel 2007 - Adding line breaks in a cell and no line over 50 characters

    - by Richard Drew
    I have notes stored in an Excel cell. I add line breaks and dates every time I add a new note. I need to copy this to another program, but it has a line limit of 50 characters. I want a line break for each new date, and also wherever a date's comment goes over 50 characters. I'm able to do one or the other, but I can't figure out how to do both. I'd prefer words not to be split up, but at this point I don't care. Below is some sample input. If needed for a SUBSTITUTE or REPLACE formula, I could add a ~ before each date in my input as a delimiter.

    Sample input:

    07/03 - FU on query. Copies and history included. CC to Jane Doe and John Public
    06/29 - Cust claiming not to have these and wrong PO on query form. Responded with inv sent dates and locations, correct PO values, and copies.
    06/27 - New ticket opened using query form
    06/12 - Opened ticket with helpdesk asking status
    05/21 - Copy submitted to [email protected]
    05/14 - Copy sent to John Public and [email protected]

    Ideal output:

    07/03 - FU on query. Copies and history included.
    CC to Jane Doe and John Public
    06/29 - Cust claiming not to have these and wrong
    PO on query form. Responded with inv sent dates an
    d locations, correct PO values, and copies.
    06/27 - New ticket opened using query form
    06/12 - Opened ticket with helpdesk asking status
    05/21 - Copy submitted to [email protected]
    om
    05/14 - Copy sent to John Public and email@custome
    r.com
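
    Formulas get unwieldy for this; a sketch of a VBA user-defined function (my own helper, not built-in) that wraps each date's note at 50 characters, breaking at spaces where possible and mid-word otherwise:

        ' Hypothetical helper: wrap each dated note at 50 characters.
        Function WrapNotes(notes As String, Optional limit As Long = 50) As String
            Dim parts() As String, out As String, txt As String
            Dim i As Long, cut As Long
            parts = Split(notes, vbLf)          ' in-cell line breaks are Chr(10)
            For i = LBound(parts) To UBound(parts)
                txt = parts(i)
                Do While Len(txt) > limit
                    cut = InStrRev(txt, " ", limit)   ' last space within the limit
                    If cut = 0 Then cut = limit       ' no space found: hard break
                    out = out & RTrim$(Left$(txt, cut)) & vbLf
                    txt = Mid$(txt, cut + 1)
                Loop
                out = out & txt
                If i < UBound(parts) Then out = out & vbLf
            Next i
            WrapNotes = out
        End Function

    Used as =WrapNotes(A1) in a cell; since it prefers space boundaries, it avoids the mid-word splits shown in the sample output above, which matches the stated preference anyway.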

    Read the article

  • Loading dependencies for custom puppet functions

    - by Ben Smith
    I have written a custom puppet function, which is working fine, that depends on the cloudservers gem (a Rackspace client library). This is fine if I have pre-installed the gem on a server before running puppet, but it totally breaks if I have not installed the gem, as the function seems to be run during the 'compilation' sweep, well before my package definition is realised. Here's what my .pp looks like, with get_hosts being the function that requires the cloudservers gem:

    package { "rubygems":
      ensure   => installed,
      provider => "gem";
    }

    package { "cloudservers":
      ensure   => installed,
      provider => "gem",
      require  => Package["rubygems"];
    }

    class hosts::us {
      $hosts = get_hosts("us")
      hostentry { $hosts: }
    }

    define hostentry() {
      $parts   = split($name, ',')
      $address = $parts[0]
      $ip      = $parts[1]
      $aliases = $parts[2]
      host { $address:
        ip           => $ip,
        host_aliases => $aliases,
      }
    }

    Is there a way to stop the function being run so early, or at least have its run depend upon the library being installed? Alternatively, is there a way I can add dependencies somewhere in the functions folder that will be available to the function?
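
    One mitigating pattern, for reference: parser functions execute during catalog compilation (on the puppetmaster, or on the node with puppet apply), so package resources in the same catalog can never be installed in time; the gem has to be present before compilation. What can be done is deferring the require into the function body, so the library loads only when the function is actually invoked rather than when the function file is loaded. A sketch of the function file (lib/puppet/parser/functions/get_hosts.rb, using the old-style function API; the body is illustrative):

        module Puppet::Parser::Functions
          newfunction(:get_hosts, :type => :rvalue) do |args|
            require 'cloudservers'   # deferred: loaded at call time, not file-load time
            region = args[0]
            # ... query the Rackspace API and return "address,ip,aliases" strings ...
          end
        end

    In practice the gem still needs to be installed out-of-band on the machine doing the compiling; the manifest's own package resources can only guarantee it for subsequent runs.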

    Read the article

  • Wiring my internet

    - by u8sand
    I have Verizon internet service and am currently using Wi-Fi. My router is in the basement, and my desktop computer is two floors up and on the other side of the house: the worst possible positioning, but that's just how things worked out. My wireless is currently extremely unstable, so I've decided to correct the problem by wiring my computer directly. The problem lies here: when redoing the room next to it (while the wall was open), we went ahead and ran some coaxial cable from our attic to our basement (with plenty of slack on both ends; don't ask me why we didn't run a CAT6 cable while we were at it). The question is: can I use the coaxial cable to carry my internet connection? Naturally, the router (which needs to stay where it is) takes a coaxial input and has ethernet outputs. So maybe I would have to convert ethernet to coax at the router, run over the coax to my computer, and convert back to ethernet there. Is it even possible to convert between coax and ethernet? Or do I have to attempt to fish a CAT6 cable through my house? I cannot just split the signal, because that would require two routers and two networks (which I don't believe would work with one cable and one ISP; correct me if I'm wrong). Thanks

    Read the article

  • IPv6 Routing / Subnetting

    - by nappo
    Recently I installed Citrix XenServer 6.2 on a machine. My provider (Hetzner) gave me the IPv6 subnet 2a01:4f8:200:xxxx::/64. Following an article in the provider's wiki (1), I got it working and can assign IPs to my guests (CentOS). However, I can't assign a second IP to a single guest; it results in a timeout. I'm not very familiar with IPv6 routing/subnetting, so any help or tips for further troubleshooting are welcome!

    My setup:

    XenServer 6.2
      IPv6: 2a01:4f8:200:xxxx::2/112
      ip -6 route:
        2a01:4f8:200:xxxx::/112 dev xenbr0 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 0
        fe80::1 dev xenbr0 metric 1024 mtu 1500 advmss 1440 hoplimit 0
        default via fe80::1 dev xenbr0 metric 1024 mtu 1500 advmss 1440 hoplimit 0

    Guest 1
      IPv6: 2a01:4f8:200:xxxx::3/64
      IPv6: 2a01:4f8:200:xxxx::4/64
      ip -6 route:
        2a01:4f8:200:xxxx::/64 dev eth0 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 4294967295
        fe80::/64 dev eth0 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 4294967295
        default via fe80::1 dev eth0 metric 1 mtu 1500 advmss 1440 hoplimit 4294967295

    Guest 2
      IPv6: 2a01:4f8:200:xxxx::5/64

    Guest 1's first IPv6 address is working fine, and so is Guest 2's. As suggested by the wiki article (1), I split my /64 network into a /112. Is it right to set the host to /112 and the guests to /64? Why is that?
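
    For troubleshooting the second address, the manual equivalents are worth running on Guest 1 (addresses as above; the prefix stays redacted):

        # add the second address by hand and watch duplicate-address detection
        ip -6 addr add 2a01:4f8:200:xxxx::4/64 dev eth0
        ip -6 addr show dev eth0          # look for 'tentative' or 'dadfailed' flags
        ping6 -c 3 2a01:4f8:200:xxxx::2   # reachability to the XenServer host
        ip -6 neigh show                  # neighbour cache: is fe80::1 reachable?

    If the address sticks but traffic times out, the next place to look is whether the host-side /112 versus guest-side /64 prefix-length mismatch leaves the host without a route back to the second address.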

    Read the article
