Search Results

Search found 15875 results on 635 pages for 'border box'.

  • Wastage of resources in Virtualization

    - by Sabeen Malik
    I have asked this question on SO, but it was suggested that I ask it here on SF, so here it is: http://stackoverflow.com/questions/3010753/wastage-of-resources-in-virtualization

    I am not sure if this is the right place to ask the question, but I hope it is. When looking for a VPS earlier today, I was trying to understand how each container would work in the background. Keeping in mind that the operating system uses most of the memory and power on a system, wouldn't having multiple operating systems on the same machine mean more wasted resources? For instance, say I was running CentOS on a dedicated box and it was running 20 background OS-level processes. Then I go and install a virtualization platform and add 5 CentOS virtual machines to the same system, each exactly the same as the host operating system. Doesn't this mean duplicating those 20 processes 6 times, so that internally the context switching happens between 120 processes instead of 20?

    Further notes, as an example of what I am thinking: I have a master-slave configuration for a long-running, CPU- and memory-intensive process which can be distributed across 4 machines. Say that when the process runs on these 4 machines, each with a 1 GHz CPU and 1 GB of RAM, I get 400 results per hour from the cluster (assuming 100 results per machine). Now I get one bigger machine (say 4 GHz and 4 GB of RAM) and run 4 virtual hosts on it, each with a 1 GHz CPU and 1 GB of RAM. Will this configuration give me the same 400 results per hour from these 4 virtual hosts?

  • The Server Fault Wiki of recommended practices [migrated]

    - by Avery Payne
    So I've noticed that there are several recommendations on basic practices on Server Fault, but there doesn't seem to be a cohesive view of how those recommendations would all fit together. So I thought I would lump them together as a kind of mental exercise, to see what the "ServerFault Community IT Department" would look like if it were implemented. This would give a few things: it would make a reasonable wiki (in the true wiki spirit of many contributions), it would provide several links to well-vetted practices, and it would be kind of fun to see what the amalgamation would look like. And who knows, it may even point out some interesting issues between different forms of "best practices", although I would be stunned if there was a conflict hidden in there someplace...

    Add your favorites from Server Fault as answers, and I'll re-edit this section with the results. Here are a few categories to collect different ideas together:

        Hardware Configuration(s): server room configuration; server room temperature; firmware updates and scheduling
        Storage Configuration(s): selecting a NAS box; Linux: dealing with /tmp; Linux: install apps in /var or /opt?
        Network Configuration(s): checking DNS health and compliance
        Security Practice(s): password (general) best practices; password sharing methods; Windows Update; updating Windows Servers that are hosts for VMs
        Network Service(s)
        User Service(s): user naming & deletion
        Upgrade Process(es)
        Disaster Recovery: checking backups; documenting an outage for a post-mortem review

    Last edit: 2010-02-17

  • Compressing and copying large files on Windows Server?

    - by Aaron
    I've been having a hard time copying large database backups from the database server to a test box at another site. I'm open to any ideas that would help me get this database moved without having to resort to a USB hard drive and the mail.

    The database server is running Windows Server 2003 R2 Enterprise with 16 GB of RAM and two quad-core 3.0 GHz Xeon X5450s. The files are SQL Server 2005 backup files between 100 GB and 250 GB. The pipe is not the fastest, and SQL Server backup files typically compress down to 10-40% of the original, so it made sense to me to compress the files first. I've tried a number of tools, including:

        gzip 1.2.4 (UnxUtils) and 1.3.12 (GnuWin)
        bzip2 1.0.1 (UnxUtils) and 1.0.5 (Cygwin)
        WinRAR 3.90
        7-Zip 4.65 (7za.exe)

    I've attempted to use the WinRAR and 7-Zip options for splitting into multiple segments. 7za.exe has worked well for me for database backups on another server, which has ~50 GB backups. I've also tried splitting the .BAK file first with various utilities and compressing the resulting segments. No joy with that approach either: no matter which tool I've tried, it ends up choking on the size of the file. Especially frustrating is that I've transferred files of similar size on Unix boxes without problems using rsync+ssh. Installing an SSH server is not an option for the situation I'm in, unfortunately.

    For example, this is how 7-Zip dies:

        H:\dbatmp>7za.exe a -t7z -v250m -mx3 h:\dbatmp\zip\db-20100419_1228.7z h:\dbatmp\db-20100419_1228.bak

        7-Zip (A) 4.65  Copyright (c) 1999-2009 Igor Pavlov  2009-02-03
        Scanning
        Creating archive h:\dbatmp\zip\db-20100419_1228.7z
        Compressing  db-20100419_1228.bak
        System error: Unspecified error
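    For what it's worth, the split-then-verify approach I mean looks roughly like this (a sketch, untested at these file sizes; split and md5sum here are the UnxUtils/GnuWin ports):

        rem cut the backup into 250 MB pieces and fingerprint them
        split -b 250m db-20100419_1228.bak db-part-
        md5sum db-part-* > parts.md5

        rem after the transfer, verify and rejoin on the far side
        md5sum -c parts.md5
        copy /b db-part-aa+db-part-ab+db-part-ac db-20100419_1228.bak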

  • Offline cache copies in Windows file sharing

    - by netvope
    I frequently access media files (music and video) on a remote Windows file share. My Internet connection is not very fast, and I find it a waste of bandwidth when I repeatedly access the same files. For example, I may listen to the same song 30 times in a month. So I would like to cache the files I've used.

    I know Windows has an "Always available offline" feature, but I don't think it suits my needs. I don't want to make the whole share "available offline", as the remote Windows file share is huge (terabytes). Making individual files "available offline" is tedious, as the files are scattered in many different directories. It would be much more convenient if the system could simply cache those I've used. I could also manually make a local copy each time I use a file... but this is even more troublesome than making each file "available offline".

    Also:

        The files on the share seldom change.
        Many of the files are rarely used.
        Some of the files are frequently used.
        I don't have a list of the most frequently used files.

    It would be best if I could tell Windows to cache the last-accessed 10 GB, but apparently it doesn't have this feature. So I think the best way is to have an SMB/CIFS caching proxy. What do you think? I have a Linux box sitting around. Perhaps I should try to set up Samba 4?
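    If I go the Linux-box route, what I picture is something like this (a rough sketch, assuming the share is reachable over CIFS; hostnames and paths are made up):

        # mount the remote share read-only
        mount -t cifs //fileserver/media /mnt/remote -o ro,username=me

        # pull just the directories I actually use into a local cache
        rsync -av /mnt/remote/music/favourites/ /srv/cache/music/favourites/

        # then re-export /srv/cache with Samba for the Windows machines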

  • How do I improve my screen resolution in Windows Remote Desktop?

    - by Jeff
    I'm RDP'ing into a Win2K3 machine from a WinXP machine, and I cannot stand the low screen quality I get on the Win2K3 box: text is too large and the graphics/colors aren't very smooth. How do I improve this?

    If I right-click on the desktop of the remote machine and go to Properties - Settings, I see that the screen resolution is set to 1280x1024 (which should be okay, I would think) and the color quality is Medium (16 bit) (not optimal), and I don't have the option to change either setting (because they're set in the .rdp file for the session, right?). If I move over to the Appearance tab, I see that the font size is set to Normal, with no option to make it smaller. The thing is, these settings are close to what I have on the XP machine I'm RDP'ing in from. The only difference (in those settings) is the color quality, which is 32 bit.

    Any ideas on how I can improve the situation? Other tidbits:

        The graphics card in the Win2K3 machine is an ATI ES1000. I think I have the latest drivers for it.
        I'm running VMware Workstation on the Win2K3 machine, and if I create a Win2K3 VM and RDP into it from the XP machine, the resolution is just fine.
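    For reference, these are the .rdp file entries I believe control the display settings (recalled from memory, so treat them as a sketch to verify rather than something I've tested):

        screen mode id:i:2
        desktopwidth:i:1280
        desktopheight:i:1024
        session bpp:i:32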

  • My X server doesn't load a module called "glx", but my video drivers seem to be installed

    - by rumtscho
    I just got a new, very wide monitor (2560x1440), and there is no sense maximizing my applications. So I installed Compiz Config Settings Manager and enabled Grid. Nothing happened; the shortcuts don't move application windows.

    I went to System - Preferences - Appearance, the Visual Effects tab. It was at "disabled". When I try to set it to "normal" or "extra", a message box appears telling me that it's searching for video drivers, then disappears, and I get the error message "Desktop effects could not be enabled". I opened Xorg.0.log and found errors there:

        (EE) Failed to load /usr/lib/xorg/modules/extensions//libglx.so
        (II) UnloadModule: "glx"
        (EE) Failed to load module "glx" (loader failed, 7)
        (II) LoadModule: "extmod"

    Going to System - Administration - Hardware Drivers, it said that there are no available and/or installed hardware drivers. But apt-get said that it cannot install nvidia-glx-185, as it is already installed. Googling my error message suggested that I install and run something called EnvyNG. This let me install the NVIDIA drivers again, and now I can see in the Hardware Drivers window that they are installed and active. But the error message in Xorg.0.log remains, and I still cannot enable the Compiz effects or use Grid.

    Now, I don't have enough Linux experience to understand whether this is a single cause-effect chain of problems or three independent ones, but I'd appreciate help with any of them. I am running Ubuntu 9.10, and the video card is a GeForce 7600GS.
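    In case it helps, here is what I plan to check next (commands pieced together from my googling, so treat them as a sketch rather than a known fix):

        # which package owns the libglx.so that X is failing to load?
        dpkg -S /usr/lib/xorg/modules/extensions/libglx.so

        # reinstall the NVIDIA GL libraries in case the file is missing or diverted
        sudo apt-get install --reinstall nvidia-glx-185

        # confirm X is actually configured to use the nvidia driver
        grep -n 'Driver' /etc/X11/xorg.conf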

  • Performance: Nginx SSL slowness or just SSL slowness in general?

    - by Mauvis Ledford
    I have an Amazon Web Services setup with an Apache instance behind Nginx, with Nginx handling SSL and serving everything but the .php pages. In my ApacheBench tests I'm seeing this for my most expensive API call (which is cached via Memcached):

        100 concurrent calls to API call (http):  115 ms (median), 260 ms (max)
        100 concurrent calls to API call (https): 6.1 s (median), 11.9 s (max)

    I've done a bit of research, disabled the most expensive SSL ciphers, and enabled SSL session caching (I know it doesn't help in this particular test). Can you tell me why my SSL is taking so long? I've set up a massive EC2 server with 8 CPUs, and even applying consistent load to it only brings it up to 50% total CPU. I have 8 Nginx workers configured, and a bunch of Apache processes. Currently this whole setup is on one EC2 box, but I plan to split it up and load-balance it.

    There have been a few questions on this topic, but none of the answers (disable expensive ciphers, cache SSL sessions) seem to do anything. Sample results below:

        $ ab -k -n 100 -c 100 https://URL
        This is ApacheBench, Version 2.3 <$Revision: 655654 $>
        Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
        Licensed to The Apache Software Foundation, http://www.apache.org/

        Benchmarking URL.com (be patient).....done

        Server Software:        nginx/1.0.15
        Server Hostname:        URL.com
        Server Port:            443
        SSL/TLS Protocol:       TLSv1/SSLv3,AES256-SHA,2048,256

        Document Path:          /PATH
        Document Length:        73142 bytes

        Concurrency Level:      100
        Time taken for tests:   12.204 seconds
        Complete requests:      100
        Failed requests:        0
        Write errors:           0
        Keep-Alive requests:    0
        Total transferred:      7351097 bytes
        HTML transferred:       7314200 bytes
        Requests per second:    8.19 [#/sec] (mean)
        Time per request:       12203.589 [ms] (mean)
        Time per request:       122.036 [ms] (mean, across all concurrent requests)
        Transfer rate:          588.25 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:       65  168   64.1    162    268
        Processing:   385 6096 3438.6   6199  11928
        Waiting:      379 6091 3438.5   6194  11923
        Total:        449 6264 3476.4   6323  12196

        Percentage of the requests served within a certain time (ms)
          50%   6323
          66%   8244
          75%   9321
          80%   9919
          90%  11119
          95%  11720
          98%  12076
          99%  12196
         100%  12196 (longest request)
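    For completeness, my SSL tuning currently looks roughly like this (paraphrased from memory, so a sketch rather than my exact config):

        ssl_session_cache          shared:SSL:10m;
        ssl_session_timeout        10m;
        ssl_ciphers                HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers  on;
        keepalive_timeout          65;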

  • ssh without password does not work for some users

    - by joshxdr
    I have a new RHEL4 Linux box that I am using to copy data to old Solaris 2.6 and RHEL3 Linux boxes with scp. I have found that with the same setup, it works for some users but not for others. For user jane, this works fine:

        jane@host1$ ssh -v remhost
        debug1: Next authentication method: publickey
        debug1: Trying private key: /mnt/home/osborjo/.ssh/identity
        debug1: Offering public key: /mnt/home/osborjo/.ssh/id_rsa
        debug1: Server accepts key: pkalg ssh-rsa blen 277
        debug1: read PEM private key done: type RSA
        debug1: Authentication succeeded (publickey).

    For user jack it does not:

        jack@host1$ ssh -v remhost
        debug1: Next authentication method: publickey
        debug1: Trying private key: /mnt/home/oper1/.ssh/identity
        debug1: Offering public key: /mnt/home/oper1/.ssh/id_rsa
        debug1: Authentications that can continue: publickey,password,keyboard-interactive

    I have looked at the permissions for all the keys and files, and they look the same. Since I am using home directories mounted over NFS, the keys for both the remote host and the local host are in the same directory. This is how things look for jane:

        jane@host1$ ls -l $HOME/.ssh
        -rw-rw-r-- 1 jane operator  394 Jan 27 16:28 authorized_keys
        -rw------- 1 jane operator 1675 Jan 27 16:27 id_rsa
        -rw-r--r-- 1 jane operator  394 Jan 27 16:27 id_rsa.pub
        -rw-rw-r-- 1 jane operator 1205 Jan 27 16:46 known_hosts

    For user jack:

        jack@host1$ ls -l $HOME/.ssh
        -rw-rw-r-- 1 jack engineer  394 Jan 27 16:28 authorized_keys
        -rw------- 1 jack engineer 1675 Jan 27 16:27 id_rsa
        -rw-r--r-- 1 jack engineer  394 Jan 27 16:27 id_rsa.pub
        -rw-rw-r-- 1 jack engineer 1205 Jan 27 16:46 known_hosts

    As a last-ditch effort, I copied the authorized_keys, id_rsa, and id_rsa.pub from jane to jack and changed the username in authorized_keys and id_rsa.pub with vi. It still did not work. It seems there is something different between the two users, but I cannot figure out what it is.
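    Things I am planning to check next, on the theory that the remote sshd enforces StrictModes and dislikes group-writable files or directories (a guess on my part, not something I've confirmed):

        # on the remote host, as jack
        chmod go-w ~ ~/.ssh
        chmod 600 ~/.ssh/authorized_keys

        # then watch the server-side log while retrying (RHEL path; Solaris will differ)
        tail -f /var/log/secure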

  • Collect temperature and fan speed with munin from Windows 7 PC?

    - by mfn
    Hi, I'm quite fond of munin and use it at home to monitor my PCs as well. What was super-duper easy under Linux seems pretty much unsolvable for me under Windows: I'd like to monitor CPU and motherboard temperatures as well as fan speed. On Linux I'm using lm-sensors, and the munin plugin for it was basically already there.

    I already collect some information from my Windows machine via SNMP (disk space, CPU usage, memory usage); the graphs are simple, as is the information exposed via SNMP, but they do their job. But when it comes to temperature and fan speed I'm running against a wall. My research so far suggests that Windows does not, out of the box, provide any way to retrieve temperature/fan speed data; third-party applications are necessary which know how to communicate with the motherboard chips.

    The best I came up with is that SpeedFan exposes a shared-memory interface, and there exists a library which hooks into the Windows SNMP facility and bridges over to SpeedFan's shared-memory interface; it's called SFSNMP (site currently down). Unfortunately the library doesn't work; there's an open bug report about it at SpeedFan, but it's currently not moving (although the SFSNMP author is active there).

    So, unless that's going to work any time soon, are there any alternatives? I'm not fond of buying software to get this feature, given that I take it for granted that my system should expose the information needed to properly monitor it, but please don't let that alone stop you from answering.
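    For the record, this is roughly how I poll the values I do get today, so anything that plugs into the same SNMP mechanism would be ideal (the community string and hostname are placeholders; the OIDs are the standard HOST-RESOURCES-MIB tables, as far as I know):

        # disk and memory, from hrStorageTable
        snmpwalk -v 2c -c public winbox 1.3.6.1.2.1.25.2.3.1

        # CPU load, from hrProcessorLoad
        snmpwalk -v 2c -c public winbox 1.3.6.1.2.1.25.3.3.1.2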

  • Search inside of text files

    - by Matt
    So here is the situation. I currently run a mail server for my small non-profit company. My mail server (Merak Mail Server) keeps logs in .log files and mail in .tmp files; essentially these are just text files kept on the server. The problem is that when I put text into the "Containing text" field in Windows Explorer, it always misses the files and tells me no results were returned. Then when I search the files one by one (painful at best), I find the files I need.

    Do I not understand the search feature well enough, or do I maybe have the indexing set up wrong? I really don't care what I need to use to search the files; even a third-party app is fine with me. I just want to type an email address into a box, search all of my log files or email files, and find the one I am looking for. It can be Windows Search or something else; as long as I can find a way to get the job done I will be happy. Paid solutions are fine as well. Thanks everyone in advance.
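    To be concrete, what I'm after is essentially this (findstr is built into Windows; the path and address here are placeholders):

        rem search every .log and .tmp file under the mail store, case-insensitively,
        rem and print just the names of the files that match
        findstr /s /i /m "someone@example.com" "C:\MerakMail\*.log" "C:\MerakMail\*.tmp"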

  • ESXi 4.1 host not recognising existing VMFS datastore

    - by ThatGraemeGuy
    Existing setup: host1 and host2, ESX 4.0, 2 HBAs each; lun1 and lun2, 2 LUNs belonging to the same RAID set (my terminology might be sketchy here). This has been working just fine all along.

    I added host3: ESXi 4.1, 2 HBAs. If I view Configuration / Storage Adapters, I can see that both HBAs see both LUNs, but if I view Configuration / Storage, I only see 1 datastore. host1 and host2 can see both LUNs, and I have VMs running on both too. I have rescanned, refreshed and even rebooted, but host3 refuses to acknowledge one of the datastores. Does anyone know what's going on?

    Update: I re-installed the host with ESX (not i) 4.0, the same version as the existing hosts, and it's still not recognising the VMFS. I think I'm going to SVMotion everything off that datastore and then format it.

    Update 2: I've created the LUN from scratch, and the problem gets even weirder. I've presented the LUN to all 3 hosts, and I can see the LUN in the vSphere client's Configuration / Storage Adapters section on all 3 hosts. If I create a datastore on the LUN via the Configuration / Storage section on host1, it works fine and I can create an empty folder via the datastore browser, but the datastore is not seen by host2 and host3. I can use the Add Storage wizard on host2 and it will see the LUN; at this point the "VMFS Label" column has the label I gave it with "(head)" appended. If I try the Add Storage wizard's "Keep the existing signature" option, it fails with the error "Cannot change the host configuration." and a dialog box that says:

        Call "HostStorageSystem.ResolveMultipleUnresolvedVmfsVolumes" for object "storageSystem-17" on vCenter Server "vcenter.company.local" failed.

    If I try the Add Storage wizard's "Assign a new signature" option on host2, it will complete and the VMFS label will have "snap-(hexnumber)-" prepended. At this point it's also visible on host3, but not on host1. I have a similar setup in a different datacenter which didn't give me any of this trouble.
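    Since posting this I've been reading about the service-console resignaturing commands, which I plan to try next (syntax from memory, so double-check before running):

        # list VMFS volumes the host considers snapshots/replicas
        esxcfg-volume -l

        # either mount a volume persistently, keeping its existing signature...
        esxcfg-volume -M <VMFS-label|UUID>

        # ...or write a new signature to it
        esxcfg-volume -r <VMFS-label|UUID>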

  • Managing an application across multiple servers, or PXE vs cfEngine/Chef/Puppet

    - by matt
    We have an application that is running on a few boxes (5 or so, and growing). The hardware is identical in all the machines, and ideally the software would be as well. I have been managing them by hand up until now and don't want to anymore (static IP addresses, disabling all unnecessary services, installing required packages...). Can anyone balance the pros and cons of the following options, or suggest something more intelligent?

    1. Individually install CentOS on all the boxes and manage the configs with Chef/cfengine/Puppet. This would be good, as I have wanted an excuse to learn to use one of those applications, but I don't know if it's actually the best solution.

    2. Make one box perfect and image it. Serve the image over PXE, and whenever I want to make modifications I can just reboot the boxes from a new image. How do cluster guys normally handle things like having MAC addresses in the /etc/sysconfig/network-scripts/ifcfg* files? We use InfiniBand as well, and it also refuses to start if the hwaddr is wrong. Can these be correctly generated at boot (see the sketch below)?

    I'm leaning towards the PXE solution, but I think monitoring with munin or nagios will be a little more complicated with it. Does anyone have experience with this type of problem? All the servers have SSDs in them and are fast and powerful.

    Thanks, matt.
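    By "generated at boot" I mean something along these lines, run before the network starts (a sketch; the interface name and path are assumptions for illustration):

        #!/bin/sh
        # rewrite HWADDR in ifcfg-eth0 to match the actual hardware on this box
        MAC=$(cat /sys/class/net/eth0/address)
        sed -i "s/^HWADDR=.*/HWADDR=$MAC/" /etc/sysconfig/network-scripts/ifcfg-eth0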

  • Backing up Excel Files to a different Directory

    - by Joe Taylor
    In Excel 2007, in the Save As box, there is an option to 'Create a Backup', which simply backs up the file whenever it is saved. Unfortunately it backs up the file to the same directory as the original. Is there a simple way to change this directory to another drive/folder?

    I have messed about with macros to do this, coming up with:

        Private Sub Workbook_BeforeClose(Cancel As Boolean)
        'Private Sub Workbook_BeforeSave(ByVal SaveAsUI As Boolean, Cancel As Boolean)
            'Saves the current file to a backup folder and the default folder
            'Note that any backup is overwritten
            Application.DisplayAlerts = False
            ActiveWorkbook.SaveCopyAs Filename:="T:\TEC_SERV\Backup file folder - DO NOT DELETE\" & _
                ActiveWorkbook.Name
            ActiveWorkbook.Save
            Application.DisplayAlerts = True
        End Sub

    This creates a backup of the file OK the first time, but if it runs again I get:

        Run-time error '1004':
        Microsoft Office Excel cannot access the file 'T:\TEC_SERV\Backup file folder - DO NOT DELETE\Test Macro Sheet.xlsm'. There are several possible reasons:
        - The file name or path does not exist
        - The file is being used by another program
        - The workbook you are trying to save has the same name as a...

    I know the path is correct. I also know that the file is not open anywhere else. The workbook has the same name as the one I'm trying to save over, but it should just overwrite. I have posted the question about the code on Stack Overflow, but wondered if there is an easier way to do this. Any help would be much appreciated. Joe
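    One workaround I've been meaning to try is deleting the old copy before saving the new one (an untested sketch of the same macro):

        Private Sub Workbook_BeforeClose(Cancel As Boolean)
            Dim backupPath As String
            backupPath = "T:\TEC_SERV\Backup file folder - DO NOT DELETE\" & ActiveWorkbook.Name
            Application.DisplayAlerts = False
            ' remove the previous backup first, in case SaveCopyAs refuses to overwrite it
            If Dir(backupPath) <> "" Then Kill backupPath
            ActiveWorkbook.SaveCopyAs Filename:=backupPath
            ActiveWorkbook.Save
            Application.DisplayAlerts = True
        End Sub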

  • Installing WindowsAuthentication breaks authentication / web.config?

    - by Ian Quigley
    I have a clean Windows 2008 R2 box (on a VM) and have installed IIS 7.5 with the default options. I then copied a website to it (from Windows 7, IIS 7), and after a little tweaking the website is working fine. The website is currently using, and working with, Anonymous Authentication.

    I then went back to Windows Components / Server Manager, Roles - Security, and ticked and installed Windows Authentication. When I check my server in IIS (top level, above sites) - Authentication, I see:

        Anonymous Authentication (enabled)
        ASP.NET Impersonation (disabled)
        Forms Authentication (disabled)
        Windows Authentication (enabled)

    When I check my default website - Authentication, I see the same list, but also "Retrieving status" and an error dialog saying:

        There was an error while performing this operation.
        Details:
        Filename: c:\inetpub\wwwroot\screwturnwiki\web.config
        Line number: 96
        Error: This configuration section cannot be used at this path. This happens when the section is locked at a parent level. Locking is either by default (overrideModeDefault="Deny"), or set explicitly by a location tag with overrideMode="Deny" or the legacy allowOverride="false".

    I have tried hand-editing the web.config with no success. Un-installing Windows Authentication happily returns my site to working with Anonymous Authentication, and lets me enable/disable those options again.

    FYI, I am using ScrewTurn Wiki with the Active Directory plugin.
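    From what I've read, the section can be unlocked at the server level with appcmd; I haven't yet confirmed this fixes my case, so treat it as a sketch:

        %windir%\system32\inetsrv\appcmd.exe unlock config -section:system.webServer/security/authentication/windowsAuthentication
        %windir%\system32\inetsrv\appcmd.exe unlock config -section:system.webServer/security/authentication/anonymousAuthentication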

  • Reliable appliance for routing IT emergency calls (SIP and ISDN)

    - by chiborg
    We have a fairly big IT installation, and our IT staff needs to be reachable 24/7. At the moment we have the following setup for "emergency" calls to our IT staff on our main Asterisk box:

        An incoming emergency number (connected via a SIP trunk, with a BRI card in case the SIP trunk goes down).
        When the number is called during office hours, all the SIP phones of the IT staff ring simultaneously.
        When the number is called out of office hours, a list of mobile phone numbers is called, one after another, until someone picks up. The list can be changed by the IT staff via a command-line script.

    The setup works well, but the Asterisk box is heavily used in a call center and has experienced some outages and misconfigurations, each of them bringing down the IT emergency number. So we'd like to put the IT emergency-call functionality on a separate device. This does not need to be a big server, and it does not even need to be Asterisk; it only has one purpose and should do it reliably. It should be very low-maintenance. Any suggestions for hard- and software?
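    To illustrate what the device has to replicate, the current dialplan logic looks roughly like this (simplified from memory; extensions and numbers are placeholders, and the real version switches on office hours with GotoIfTime):

        [it-emergency]
        ; office hours: ring all IT desk phones at once
        exten => 100,1,Dial(SIP/it1&SIP/it2&SIP/it3,30)
        ; out of hours: try each mobile in turn until someone answers
        exten => 100,n,Dial(SIP/trunk/0043660000001,25)
        exten => 100,n,Dial(SIP/trunk/0043660000002,25)
        exten => 100,n,Dial(SIP/trunk/0043660000003,25)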

  • CentOS 5.x: Nagios sSMTP mail cannot be sent from the Nagios server, but works great from the console

    - by adam
    I've spent the last 3 hours researching how to get Nagios to work with email notifications. I need to send emails from work, where the only accessible SMTP server is the company's one. I managed to get it working from the console using mail [email protected], which works perfectly for the purpose. I set up ssmtp.conf as follows:

        root=[email protected]
        mailhub=smtp.company.com:587
        AuthUser=[email protected]
        AuthPass=mypassword
        FromLineOverride=YES
        useSTARTTLS=YES
        rewriteDomain=company.pl
        hostname=nagios
        UseTLS=YES

    I also edited /etc/ssmtp/revaliases as follows:

        root:[email protected]:smtp.company.com:587
        nagios:[email protected]:smtp.company.com:587
        nagiosadmin:[email protected]:smtp.company.com:587

    I also set the file permissions for /etc/ssmtp/* like this:

        -rwxrwxrwx 1 root nagios  371 lis 22 15:27 /etc/ssmtp/revaliases
        -rwxrwxrwx 1 root nagios 1569 lis 22 17:36 /etc/ssmtp/ssmtp.conf

    and I believe I assigned the nagios user to the proper groups:

        $ cat /etc/group | grep nagios
        mail:x:12:mail,postfix,nagios
        mailnull:x:47:nagios
        nagios:x:2106:nagios
        nagcmd:x:2107:nagios

    When I send mail manually, I receive it in my private mailbox, but when Nagios sends mail the mail log says:

        Nov 22 17:47:03 certa-vm2 sSMTP[9099]: MAIL FROM:<[email protected]>
        Nov 22 17:47:03 certa-vm2 sSMTP[9099]: 550 You are not allowed to send mail from this address

    So the envelope sender comes out as an address I am not allowed to send mail from, when it is supposed to be the company address I configured above. What am I doing wrong? I've run out of tricks...

    kind regards, Adam
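    The next thing I want to check is which envelope sender ssmtp actually picks when the nagios user invokes it, since revaliases is keyed on the calling user (a quick test I intend to run; the recipient address is a placeholder):

        su - nagios -s /bin/bash
        printf 'Subject: ssmtp test\n\ntest body\n' | /usr/sbin/ssmtp -v someone@example.com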

  • Windows Media Center doesn't see my movies

    - by DrJekyll
    I am trying to configure Windows Media Center (Windows 7 Ultimate). I selected the folder with my movies and added it to the library, but when I go to the movie library, it says "There are no items in this library yet - Windows Media Center is searching for media files in the background...". I have all the necessary codecs installed, and Windows Media Player opens the movies correctly. When I right-click on a file and choose Open with - Windows Media Center, it also plays them without any problem. Any ideas why they don't appear in the libraries?

    Edit: The movies are encoded with the DivX and Xvid codecs and have the .avi extension; Windows has no problem playing them. I told Media Center where the files are. I even pointed Windows Media Center at a folder with only one .avi file, and it still couldn't find anything there. (I gave it quite some time, even though searching a directory with only one file shouldn't take more than a few seconds.) When I add a folder with a lot of movies, I get a dialog box saying "You can wait while media is added or select OK to continue using Windows Media Center." At the end it says it added about 90 movies, but when I go to the libraries, they're still empty.

  • Windows Server 2008 R2 routing with a single NIC

    - by Fabian
    I'm trying to duplicate a Linux server configuration on a Windows Server 2008 R2 box. Basically, the Linux server acts as a router, but it does its job with only one interface (one NIC). Here is the network configuration in place (I cannot change it):

        INTERNET <== Router (local IP = 194.168.0.3) <== Linux server (IP: 194.168.0.2)

    The router is configured with a DMZ to 194.168.0.2 and only allows this IP to connect to the Internet (I cannot change this router configuration either). The Linux server is configured with a default gateway of 194.168.0.3 and the option "Act as router". All other computers on the LAN get this configuration (via DHCP):

        IP range:        194.168.0.x
        Mask:            255.255.255.0
        Default gateway: 194.168.0.2

    And everything works perfectly. I'm trying to reproduce this way of routing with only one NIC on Windows Server 2008 R2, but it seems that you cannot do it with only one NIC (all the examples I've seen refer to 2 NICs on 2 different networks). Does someone have an idea how to achieve this on Windows Server 2008 R2? Thanks for your help! Fabian.
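    The closest I've found so far is enabling IP forwarding on the single interface, but I haven't verified that this covers the whole "act as router" behaviour (commands as I understand them; the interface name is whatever yours is called):

        rem enable forwarding on the one NIC
        netsh interface ipv4 set interface "Local Area Connection" forwarding=enabled

        rem or enable routing globally in the registry and reboot
        reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v IPEnableRouter /t REG_DWORD /d 1 /f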

  • Scroll wheel causes browsers + Windows explorer to go back

    - by KaptajnKold
    I use Windows 7 in a VirtualBox VM on a Mac. Lately, in any browser (IE9, Chrome, Firefox, or even Windows Explorer), using the scroll wheel followed by any movement of the mouse cursor causes the browser to go back. Very annoying.

    This happens when I use the scroll wheel on a USB-connected mouse (brand/model unknown, since I don't have it in front of me as I write this) or when I use two-finger scrolling on the trackpad with no mouse connected. When I connect from my VM to a remote Windows box (Windows Server 2008), I experience the same problem. I have tried rebooting the VM to no avail. I am not sure when the problem started exactly; it may or may not have been after I connected the USB mouse for the first time, but unplugging it and then rebooting didn't help.

    I have tried to google for a solution, but all I've found are people who accidentally pressed the Shift key while scrolling, which causes the browser to go backward or forward in the browser history. That is not the problem I'm having. To be clear, in my case the browser only goes back when I move the mouse after I've used the scroll wheel. I'm at my wits' end :(

  • Ways to have audio output without wires

    - by viraptor
    I'm trying to find a way to use my home speakers/amp without actually having to wire them up. Two laptops use them normally (so I don't like changing the connection all the time), and I'd rather move the speakers to a place that's away from the couch. I'm not sure how to do this, though... The options I can think of are:

        some kind of wireless jack-to-jack connection
        finally getting a media server

    Unfortunately I can't find any good product for the first solution. I've seen headphones which have the receiver integrated and a separate transmitter, so the general idea is already out there, just not in the form I need ;) I've also seen http://www.miccus.com/products/blubridge-mini-jack, but I'd need a compatible receiver which I can't find on its own (maybe there's an application the media server could use?).

    As far as a media server goes... many of the plug servers look really interesting, but I'm not sure how to create an audio output and how to redirect the input to it. None of the plug servers I've seen so far advertises an audio output jack. I think this part could be fixed by getting one with a USB port and a separate cheap USB sound card. I hope the input can be sorted out in some rather simple way.

    I've got Linux running on both laptops, so I hope it would be possible to configure jack/pulse/whatever to use the remote endpoint, or even write a simple local-/dev/dsp:network:media-server-/dev/dsp forwarder.

    So the main question is... are there better ways? Are there any out-of-the-box solutions? Or maybe this has already been done by someone and described somewhere?
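    On the PulseAudio idea, I believe the pieces already exist; something like this is what I have in mind (module names taken from the PulseAudio docs, untested by me, and the hostname/subnet are placeholders):

        # on the box wired to the amp: accept audio streams from the LAN
        pactl load-module module-native-protocol-tcp auth-ip-acl=192.168.1.0/24

        # on each laptop: create a sink that tunnels to that box
        pactl load-module module-tunnel-sink server=mediabox.local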

  • Websockets Server with Fault-Tolerance and Durable Message Store

    - by smitchell360
    I am starting to experiment with WebSockets. Does anyone know of a WebSockets server (open source or paid) that provides a durable store of the WebSocket "channel"? All of the examples that I have found do not address durability: if a WebSockets server goes down, all "channel" data is lost. Services such as Pusher do not really discuss whether they address the durability issue (and I have not received a response from tech support yet). I'm happy to roll my own, but would rather not reinvent the wheel.

    EDIT: I'm not looking for WebSockets 101 information. That is readily available and understood. I'm looking for a server (open source or paid) that supports WebSockets and has a durable store for the WebSocket data, so that in the event a server fails, a new server can take over where the original one left off. Two main purposes:

        1. supporting the failover scenarios contemplated by the WebSockets Network Working Group, http://tools.ietf.org/html/draft-ibc-websocket-dns-srv-02#section-5.1 (most importantly, so that missed messages are sent when a client connects to a failover server);
        2. supporting scenarios where new subscribers must receive all past messages that were published. Of course this can be handled at the application layer... but that is not what I am looking for.

    EDIT: After some research, the following installed options seem to be the most robust:

        Kaazing
        Migratory (http://migratory.ro)

    Hosted services that seem "real":

        Pusher (great API, but no history feature yet)
        PubNub (has history)

    All of the above services gracefully fall back to other communication methods if WebSockets are not available. I was not able to find any open-source option that provides "out of the box" clustering, fail-over, and a durable message store to play back history. There are some projects that may serve as good starting points, but they are not exactly what I am looking for.

  • Windows Media Center showing Jerky Video on PC

    - by Kris Erickson
    I had to repave my Windows 7 x64 box last week due to a hard drive crash. For a while everything ran perfectly, but now all videos in Windows Media Center are jerky: the sound is fine, but they seem to skip a ton of frames all the time. This happens on the local machine, and the same thing happens when I stream to my Xbox. The videos all play fine in VLC and Windows Media Player.

    I guess I must have installed something recently (in the process of getting back all the apps I usually run on my PC) that caused this, but for the life of me I can't figure out what. I have updated to the latest video driver (and then rolled back to the standard Windows 7 driver), and I have rolled back all the other drivers that I have installed (I believe). I have uninstalled all the codec packs (I also run TVersity, so I had the TVersity codec pack installed), and I uninstalled TVersity itself. Nothing seems to help. I have even uninstalled Windows Media Center and reinstalled it from Programs and Features. I have basically run out of things to try and am almost thinking about reinstalling Windows again. Any suggestions?

  • How to use WPA2 client mode in the Linux-based Cisco WAP4410N access point

    - by joechip
    I have a Cisco WAP4410N access point that I want to use as a client to connect to a WPA2 wireless network (for WLAN service-monitoring purposes). Supposedly this access point supports a "Wireless Client/Repeater" mode that allows this. The Repeater function is optional (I have that box unchecked so that nobody can connect to this access point wirelessly). I have verified through SSH that the access point gets configured as a client and not as a master, but it never associates to the SSID I point it at. This is what iwconfig shows:

        ath04     IEEE 802.11ng  ESSID:"myownssid"
                  Mode:Managed  Channel:0  Access Point: Not-Associated
                  Bit Rate:0 kb/s   Tx-Power:14 dBm   Sensitivity=1/3
                  Retry:off   RTS thr:off   Fragment thr:off
                  Encryption key:off
                  Power Management:off
                  Link Quality=0/94  Signal level=161/162  Noise level=161/161
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:0  Invalid misc:0   Missed beacon:0

    Although I've never done this from the command line, I suppose I could use wpa_supplicant or wpa_cli to associate it, but I don't know how to do that without editing configuration files, and the filesystem is read-only. Besides, I would have to run those commands manually after every reboot. I'd like to know how to do this the Cisco way, if possible. If not, any trick to make this work would be useful.

    Edit: This is with the latest firmware, 2.0.4.2. And I found that not all of the filesystem is read-only, since /var and /tmp are mounted with type ramfs.
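    If I end up doing it by hand, I imagine it would look roughly like this, given that /var is writable (a sketch: the passphrase is a placeholder and the -D driver flag is a guess based on the ath* interface name):

        # /var/wpa.conf
        network={
            ssid="myownssid"
            psk="mypassphrase"
            key_mgmt=WPA-PSK
            proto=RSN
        }

        wpa_supplicant -B -D madwifi -i ath04 -c /var/wpa.conf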

  • RAID administration in Debian Lenny

    - by Siim K
    I've got an old box that I don't want to scrap yet, because it has a nice working 5-disk RAID assembly. I want to create 2 arrays: RAID 1 with 2 disks and RAID 5 with the other 3 disks. The RAID card is an Intel SRCU31L. I can create the RAID 1 volume in the console that you access with Ctrl+C at startup, but it only allows the creation of one volume, so I can't do anything with the 3 remaining disks. I installed Debian Lenny on the RAID 1 volume and it worked out nicely.

    What utilities could I now use to create/manage the RAID volumes from Debian Linux? I installed the raidutils package but get an error when trying to fetch a list (raidutil -L physical fails the same way):

        # raidutil -L controller
        osdOpenEngine : 11/08/110-18:16:08 Fatal error, no active controller device files found.
        Engine connect failed: Open

    What could I try to get this thing working? Can you suggest any other tools? The command lspci -vv gives me this for the controller:

        00:06.1 I2O: Intel Corporation Integrated RAID (rev 02) (prog-if 01)
                Subsystem: Intel Corporation Device 0001
                Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV+ VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
                Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
                Latency: 64, Cache Line Size: 32 bytes
                Interrupt: pin A routed to IRQ 26
                Region 0: Memory at f9800000 (32-bit, prefetchable) [size=8M]
                [virtual] Expansion ROM at 30020000 [disabled] [size=64K]
                Capabilities: <access denied>
                Kernel driver in use: PCI_I2O
                Kernel modules: i2o_core
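    One thing I haven't tried yet, prompted by the i2o_core module showing up in lspci: making sure the I2O configuration device node exists before running raidutil (a guess on my part; the node name may vary by kernel):

        # load the I2O configuration interface so the control node appears in /dev
        modprobe i2o_config
        ls -l /dev/i2o* 2>/dev/null
        raidutil -L controller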

  • freebsd-update failure

    - by ctuffli
    Some time ago I upgraded my FreeBSD box to 7.1-RC2, but now I'd like to move to 7.2-RELEASE. I tried running:

        # uname -mrsi
        FreeBSD 7.1-RC2 i386 GENERIC
        # freebsd-update upgrade -r 7.2-RELEASE
        Looking up update.FreeBSD.org mirrors... 3 mirrors found.
        Fetching metadata signature for 7.1-RC2 from update4.FreeBSD.org... failed.
        Fetching metadata signature for 7.1-RC2 from update5.FreeBSD.org... failed.
        Fetching metadata signature for 7.1-RC2 from update2.FreeBSD.org... failed.
        No mirrors remaining, giving up.

    Substituting 7.1 for 7.2 gives the same error. Adding the --debug option shows the failure as being:

        fetch: http://update4.FreeBSD.org/7.1-RC2/i386/latest.ssl: Not Found

    Is there any way to still do a binary upgrade of this system, since the 7.1-RC* directories no longer exist on http://update.freebsd.org? Upgrading from source is an option, but I wanted to see if there is some way to salvage this installation.
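    If the binary route really is a dead end for an RC build, the source upgrade I'd fall back to looks like this (condensed from the FreeBSD Handbook from memory, so double-check the details there):

        # fetch 7.2 sources (with tag=RELENG_7_2 set in the supfile)
        csup /root/standard-supfile
        cd /usr/src
        make buildworld && make buildkernel KERNCONF=GENERIC
        make installkernel KERNCONF=GENERIC
        shutdown -r now          # reboot to single-user mode, then:
        mergemaster -p
        make installworld
        mergemaster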
