Search Results

Search found 8673 results on 347 pages for 'kvm switch'.

  • VirtualBox: using physical partition as virtual drive

    - by Hamman Samuel
    Background: I am using VirtualBox installed on Windows 7. From within VirtualBox I am running Xubuntu as a guest OS. I chose this approach so that I don't have to keep shutting down Windows and rebooting into Xubuntu every time I need to switch OSes, and VirtualBox's seamless mode is pretty amazing, letting me see Xubuntu and Windows 7 all on one screen.

    Issue: Now I am thinking of a way to integrate Xubuntu more deeply into my system. By this I mean I want to have a physical partition for Xubuntu, but I still want the feel of seamless mode.

    Question: So finally, my question is: is it possible to load a partition in VirtualBox as a virtual OS?

    Case examples: The ideal scenario would be: I physically boot up and log in to Windows 7. Now I want to access Xubuntu, so I load VirtualBox and access my Xubuntu partition without rebooting. And the other way around too, i.e. I boot up the system, log in to Xubuntu, and can access the actual Windows 7 partition through VirtualBox.

    Other info: Please note that I am not talking about getting access to files, as I have a completely separate partition for my files, and am very familiar with VirtualBox's Shared Folders option.
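
    For what it's worth, VirtualBox can expose a raw host partition to a VM through a wrapper VMDK created with VBoxManage. A minimal sketch, assuming the Xubuntu partition is partition 2 on the first physical disk (the disk number, partition list and path here are hypothetical; run from an elevated prompt):

        VBoxManage internalcommands createrawvmdk -filename "C:\VMs\xubuntu-raw.vmdk" -rawdisk \\.\PhysicalDrive0 -partitions 2

    The resulting VMDK attaches to a VM like any other disk, though booting an OS that was installed on bare metal inside a VM (and the reverse) can be fragile, because the hardware the OS sees changes between the two environments.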

  • Hard crash when using bluetooth headset on MacBook Pro and Lion 10.7.2

    - by jtalarico
    I recently picked up a Bluetooth headset (Motorola S10-HD) and started using it with my MacBook Pro (17", purchased new in 2010) running Lion 10.7.2.

    Here's what works (stereo audio):

    - iTunes
    - Spotify
    - Pandora (via browser)
    - games (e.g. Minecraft, which is a Java app)
    - audio from YouTube
    - Plex, VLC, other video players

    Here's what doesn't work well (stereo audio fails and the headset seems to go into mono, i.e. tinny-as-hell, mode):

    - Google Hangout
    - Skype
    - GoTo Meeting

    Here's what's just downright catastrophic. If I'm listening to stereo audio and then decide to jump into a Skype call (or Google Hangout, or GoTo Meeting), Bluetooth often crashes and I can only get things working again by shutting down the device, disabling Bluetooth, and getting things back up and running again. But the audio is still horrible, and MUCH better using just a simple set of iPhone earbuds and mic. About 80% of the time during such a call, Bluetooth crashes. And about 90% of the time, after the call ends, Skype is shut down, or I try to switch back to playing stereo audio, I get a hard crash! The gray screen of death descends and I'm told I need to restart my machine. In one such instance, even after a reboot, I could not enable Bluetooth again ("Turn Bluetooth On" was grayed out in the taskbar).

    Is this just a weak implementation of Bluetooth by Apple, or is this a hardware issue? I've seen others posting similar issues, even on the Apple support site, indicating that Bluetooth headsets are failing left and right, but I haven't seen anyone mention hard crashes.

  • How to prevent Gnome-shell's Alt+Tab from grouping windows from similar apps?

    - by wleoncio
    I love pretty much everything about how Gnome Shell handles app-switching through Alt+Tab. My one gripe with it, though, is how it forces the user to use Alt+` to switch between windows of the same app. This is very annoying for me, because now I have to keep in mind whether the last window I was using belonged to the same app as the current window or not. Definitely a nuisance for power users who think in terms of "windows I'm working with" instead of "applications I'm working on". I've tried the AlternateTab extension ( https://extensions.gnome.org/extension/15/alternatetab/ ), but it looks way too ugly to me. Not to mention that in the end all I want is to remap Alt+(key above Tab) to Alt+Tab in this application. I guess one option would be to just tweak Gnome Shell. My guess is that I should tinker with the altTab.js file at /usr/share/gnome-shell/js/ui/, but the file is too long and overwhelming for someone like me, who doesn't know JavaScript. Does anyone know how I can make Gnome Shell stop grouping windows by application?
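
    If patching altTab.js feels too heavy, one smaller tweak worth trying is rebinding the switcher keys; a sketch, assuming a GNOME release recent enough (roughly 3.8 and later) that the shell honors the org.gnome.desktop.wm.keybindings schema for its own switcher:

        # make Alt+Tab cycle individual windows instead of application groups
        gsettings set org.gnome.desktop.wm.keybindings switch-windows "['<Alt>Tab']"
        # move the grouped application switcher elsewhere (or set it to [] to disable)
        gsettings set org.gnome.desktop.wm.keybindings switch-applications "['<Super>Tab']"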

  • Document Map in MS Word 2007 going bonkers

    - by rzlines
    I'm working on a large project report in Microsoft Word 2007 and have been using the Document Map to generate the index. I carefully selected the headers that needed to be added to the Document Map, but after saving the document and opening it up today to keep working, the Document Map has added whatever it pleases.

    This is a temporary fix, from a post I found after extensive searching, that works; but when I save, close, and reopen the document, I face the same dilemma:

    "I have noticed that when Word stuffs up the document map after opening the file, I can undo this by using the UNDO button. Word calls it 'Autoformat'. I have also fixed a file that had the document map screwed permanently (i.e. saved with it) by selecting all (Ctrl+A), selecting the PARAGRAPH drop-down menu in the HOME tab and, in the OUTLINE drop-down box, selecting 'Body Text'. This removed all the problems and did not seem to affect my outline-level paragraph headings."

    This is another temporary fix, but I have to be on my toes not to let Word autoformat at the start of the document, and I can't afford to turn autoformat off entirely, as I need it. The same post also notes: "I've solved this problem for me. When you open the file, a progress bar at the bottom first says 'Opening (ESC to Cancel)' and then it says 'Word is formatting the document (ESC to Cancel)'. If I cancel the second process, the TOC is fine. No cancelling, TOC screwed."

    Can anyone work out how to switch off the autoformatting? (The above is from the post in which I found the temporary fix.)

  • How can I move linked Word/Excel files without breaking the links under Windows 7?

    - by DOUG NEEDHAM
    I currently operate under Windows XP and have multiple links between my Word and Excel files. I have to upgrade to Windows 7. When the .doc and .xls files are converted to .docm and .xlsm, respectively, the links no longer work: the Word document is still attempting to point back to the old .xls file rather than the new file.

    Also, creating new links between Word and Excel within Office 2010 doesn't seem to work. I create the new link and switch it from "Auto" to "Manual", and everything works fine; but when I copy the files to another folder, the Word document is still trying to link to the file in the previous folder rather than the new one.

    This always worked in Windows XP. I've been using linked Word/Excel documents for 10+ years and have never really had a problem. I'm very careful to maintain Word and Excel filenames when moving the files to a new folder. The process has always been to 1) move the files, 2) update the links, 3) rename the files, and 4) update the links again. It's my understanding that under Windows XP, links between Word and Excel are relative, but under Windows 7 (and Office 2010?) those same links become fixed.

  • Cisco ASA user authentication options - OpenID, public RSA sig, others?

    - by Ryan
    My organization has a Cisco ASA 5510 which I have set up as the firewall/gateway for one of our offices. Most resources a remote user would come looking for live inside it. I've implemented the usual deal: basic inside networks with outbound NAT, one primary outside interface with some secondary public IPs in the PAT pool for public-facing services, a couple of site-to-site IPsec links to other branches, etc., and I'm now working on VPN. I have the WebVPN (clientless SSL VPN) working and even traversing the site-to-site links. At the moment I'm leaving a legacy OpenVPN AS in place for thick-client VPN. What I would like to do is standardize on an authentication method for all VPN, then switch to the Cisco's IPsec thick VPN server.

    I'm trying to figure out what's really possible for authentication for these VPN users (thick client and clientless). My organization uses Google Apps, and we already use dotnetopenauth to authenticate users for a couple of internal services. I'd like to be able to do the same thing for thin and thick VPN. Alternatively, a signature-based solution using RSA public keypairs (ssh-keygen type) would be useful to identify user@hardware.

    I'm trying to get away from legacy username/password auth, especially if it's internal to the Cisco (just another password set to manage and for users to forget). I know I can map against an existing LDAP server, but we have LDAP accounts created for only about 10% of the user base (mostly developers, for Linux shell access). I guess what I'm looking for is a piece of middleware which appears to the Cisco as an LDAP server but will interface with the user's existing OpenID identity. Nothing I've seen in the Cisco suggests it can do this natively. But RSA public keys would be a runner-up, and much, much better than standalone or even LDAP auth. What's really practical here?
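
    For reference, the hook such a middleware shim would have to answer is the ASA's LDAP aaa-server binds. A rough sketch of the ASA side of that wiring (server name, addresses and DNs are hypothetical, not a tested config):

        aaa-server SHIM protocol ldap
        aaa-server SHIM (inside) host 10.10.10.5
          ldap-base-dn dc=example,dc=com
          ldap-scope subtree
          ldap-naming-attribute uid
          ldap-login-dn cn=proxyuser,dc=example,dc=com
          ldap-login-password secret
        tunnel-group RemoteVPN general-attributes
          authentication-server-group SHIM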

  • Mass-migrating from POP3 to Exchange 2010, how do I copy mailboxes?

    - by Erik P. Skaalerud
    I'm in the process of planning our migration from an internally hosted POP3 server (Dovecot) to Exchange 2010. We're using Outlook 2003 for the moment, but will soon upgrade to Outlook 2010. The big problem is that we have about 50 computers here in our HQ, plus ~30 clients in branch offices (which will get their Exchange migration sometime later). I'm the only IT personnel, and going around manually setting up Outlook and copying over the PST contents is not an option I'm looking for. Some users have set Outlook to keep messages for X number of days on the POP3 server, others have not, so using a POP3 connector to transfer the mail is not viable.

    Here is what I've done so far:

    - Created a transform for the Office 2003 administrative installation point
    - Created a .PRF file to modify any existing e-mail account to switch over to Exchange (including the RPC-encrypt hotfix described in MSKB 2006508)
    - Tested both the transform and the PRF; both work
    - Created a test OU and GPO containing the Office 2003 installation with the transform applied, which also works

    My big question is: how can I force Outlook to import any existing .PST into the new Exchange mailbox when the user starts Outlook for the first time after the MST/PRF have been applied? Is this possible?
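
    One server-side alternative worth noting: if the .PST files can be collected onto a share, Exchange 2010 SP1 and later can import them centrally instead of at first Outlook start. A sketch, assuming SP1 (where these cmdlets exist); mailbox and path names are hypothetical:

        # grant the admin account import rights (one-time)
        New-ManagementRoleAssignment -Role "Mailbox Import Export" -User admin

        # queue an import of a user's PST into their new mailbox
        New-MailboxImportRequest -Mailbox jsmith -FilePath \\fileserver\psts\jsmith.pst

        # check progress
        Get-MailboxImportRequest | Get-MailboxImportRequestStatistics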

  • Hyper-V with multiple physical networks

    - by Yaman
    I have Hyper-V on my laptop, which has a wireless and an Ethernet card; sometimes I connect using the wireless card and sometimes using the Ethernet cable. I am trying to configure Hyper-V so that the virtual machine always has internet access. I have tried to create two virtual switches, one external on the wireless network card and the other external on the Ethernet card. What happens is that the wireless network creates a bridge object in Windows 8's Network and Sharing Center\Network Connections, while the Ethernet does not. Unfortunately, they do not work together as external: I have to set whichever one is currently connected as the external switch, and I also have to go into the properties of the bridge and the virtual Ethernet adapter to uncheck and recheck components like:

    - Client for Microsoft Networks
    - Deterministic Network Enhancer
    - VMware Bridge Protocol
    - QoS Packet Scheduler
    - File and Printer Sharing for Microsoft Networks

    in order to make things work. Sometimes I keep the wireless network switch external, go to another location (another wireless network), and it disconnects, so I have to reconfigure the switches. Is there a way to do the configuration once and have it keep working wherever I connect, whether over wireless or Ethernet, on any network with DHCP?
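
    One possible workaround (a sketch, not a tested recipe; the switch name is hypothetical): keep a single external switch and re-point it at whichever physical adapter is currently up, which client Hyper-V on Windows 8 allows from PowerShell:

        # re-point the external switch at the first connected physical adapter
        $up = Get-NetAdapter -Physical | Where-Object Status -eq 'Up' |
              Select-Object -First 1
        Set-VMSwitch -Name "External" -NetAdapterName $up.Name

    Run manually (or via a scheduled task on network-change events) after switching between wired and wireless.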

  • SQL Server Subscriber Migration

    - by SuperCoolMoss
    We currently have one-way transactional replication from a SQL Server 2005 OLTP publisher/distributor to two subscribers (one SQL Server 2005 and the other SQL Server 2008 R2). Replication security is via the SQL Agents' domain service account (the same account is used on all boxes). The SQL 2008 R2 subscriber is used for BI purposes and hosts a database that has a subset of the production publisher database's tables, with different security and indexes. We need to migrate this BI subscriber to a newer box with more performant hardware. The plan is as follows:

    1. Stop replicating to the BI box (continue replicating to the other subscriber).
    2. Back up all databases on the BI box (including system databases).
    3. Restore all databases (including master, in single-user mode) to the new BI box (this has SQL Server 2008 R2 already installed).
    4. Take the old BI box off the network and shut it down.
    5. Rename and re-IP the new BI box to be the same as the old box.
    6. Switch replication back on.

    Are there any flaws in this approach?
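
    For step 3, restoring master has to happen with the instance started in single-user mode; a minimal sketch of that sequence (instance name and backup path hypothetical):

        net stop MSSQLSERVER
        net start MSSQLSERVER /m
        sqlcmd -S . -E -Q "RESTORE DATABASE master FROM DISK='D:\backups\master.bak' WITH REPLACE"
        rem the instance shuts itself down after master is restored; start it normally
        net start MSSQLSERVER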

  • Postgres pgpass windows - not working

    - by Scott
    DB: Postgres 9.0. Client: Windows 7. Server: Windows 2008, 64-bit.

    I'm trying to connect remotely to a Postgres instance in order to perform a pg_dump to my local machine. Everything works from my client machine, except that I need to provide a password at the password prompt, and I'd ultimately like to batch this with a script. I've followed the instructions here: http://www.postgresql.org/docs/current/static/libpq-pgpass.html but it's not working.

    To recap, I've created a file on the client (and tried the server as well): C:/Users/postgres/AppData/postgresql/pgpass.conf, where postgres is the db user. The file has one line with the following data:

        *:5432:*postgres:[mypassword]

    (I also tried explicit ip/dbname values, all asterisks, and every combination in between; I've also tried replacing each '*' with [localhost|myip] and [mydatabasename] respectively.)

    From my client machine, I connect using:

        pg_dump -h [myip] -U postgres -w [mydbname] [mylocaldumpfile]

    I'm presuming that I need to provide the '-w' switch in order to ignore the password prompt, at which point it should look in the AppData directory on the server. It just comes back with "connection to database failed: fe_sendauth: no password supplied." Any insights are appreciated. As a hack workaround, if there was a way I could tell the Windows batch file on my client machine to inject the password at the Postgres prompt, that would work as well. Thanks.
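
    For comparison, the documented pgpass format is five colon-separated fields, and on Windows libpq looks for the file under %APPDATA% (the Roaming profile directory). A known-good file would look like this (password hypothetical):

        # location: %APPDATA%\postgresql\pgpass.conf
        # e.g. C:\Users\<user>\AppData\Roaming\postgresql\pgpass.conf
        # format: hostname:port:database:username:password
        *:5432:*:postgres:mypassword

    Note it is the client's pgpass file that matters for pg_dump, since libpq reads it on the machine where the command runs.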

  • Exchange 2007 with Android activesync

    - by lbanz
    A few of our users noticed that ActiveSync will stop working intermittently for them. I didn't believe it at first, until I changed my Android phone and it started occurring for me. It just stops syncing completely; it looks like the server is blocking the device entirely. This mainly occurs when they are using wifi.

    I've done some testing. If I switch off the wifi and use the phone's data plan, it works fine. When the phone is on the wifi network, I try to browse to the webmail/OWA page and it says page not found! I did a DNS lookup and the names resolve correctly. If I use another device on the same wifi network, it can access the Exchange servers fine. Sometimes the wifi network will just work without any issues. But when it fails, it looks like the phone checks the server every second to see if it is online, even though I've got it on manual sync. I was wondering whether it tries to sync too many times and Exchange thinks it's a denial-of-service attack.

    My old Android phone that works runs Froyo, and the new one runs Ice Cream Sandwich. The people who have reported issues seem to have newer phones. They also tested their own wifi networks at home and experience the same problem. We hadn't patched our Exchange recently before seeing this problem. Has anyone seen this issue?
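
    When narrowing this down, the Exchange side can at least confirm what the server last saw from the device; a sketch using the Exchange 2007 management shell (mailbox name hypothetical):

        Get-ActiveSyncDeviceStatistics -Mailbox jsmith |
            Select-Object DeviceType, LastSyncAttemptTime, LastSuccessSync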

  • Variable TTL inside a LAN

    - by user140783
    I recently discovered that pinging my local router returns different TTL values. The ping passes through 3 switches before reaching the router; could the problem be there? 192.168.1.99 is the IP of my router, a Cisco WRT120N. Thank you!

        Reply from 192.168.1.99: bytes=32 time<1ms TTL=190
        Reply from 192.168.1.99: bytes=32 time=29ms TTL=3
        Reply from 192.168.1.99: bytes=32 time<1ms TTL=117
        Reply from 192.168.1.99: bytes=32 time<1ms TTL=131
        Reply from 192.168.1.99: bytes=32 time<1ms TTL=66
        Reply from 192.168.1.99: bytes=32 time<1ms TTL=66
        Reply from 192.168.1.99: bytes=32 time<1ms TTL=66
        Reply from 192.168.1.99: bytes=32 time<1ms TTL=111
        Reply from 192.168.1.99: bytes=32 time<1ms TTL=240
        Reply from 192.168.1.99: bytes=32 time<1ms TTL=66
        Reply from 192.168.1.99: bytes=32 time<1ms TTL=66
        Reply from 192.168.1.99: bytes=32 time<1ms TTL=66
        Reply from 192.168.1.99: bytes=32 time<1ms TTL=51
        Reply from 192.168.1.99: bytes=32 time<1ms TTL=190
        Reply from 192.168.1.99: bytes=32 time<1ms TTL=66

    Traceroute:

        G:\Documents and Settings\Administrador>tracert 192.168.1.99
        Tracing route to maxi2011 [192.168.1.99] over a maximum of 30 hops:
          1    <1 ms    <1 ms    <1 ms  maxi2011 [192.168.1.99]
        Trace complete.

        G:\Documents and Settings\Administrador>ping 192.168.1.99
        Pinging 192.168.1.99 with 32 bytes of data:
        Reply from 192.168.1.99: bytes=32 time<1ms TTL=190
        Reply from 192.168.1.99: bytes=32 time<1ms TTL=190
        Reply from 192.168.1.99: bytes=32 time<1ms TTL=117
        Reply from 192.168.1.99: bytes=32 time<1ms TTL=117
        Ping statistics for 192.168.1.99:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 0ms, Maximum = 0ms, Average = 0ms

  • daily rsync backups with hard links, checksums, and a new computer

    - by user75058
    I back up my laptop to a Fedora desktop daily using rsync with hard links, and this has worked great for almost a year. I recently purchased a new computer, transferred over my data, and would like to continue backing up this computer daily. However, due to the data transfer from the old laptop to the new laptop, the timestamps have obviously changed, and this will cause my daily rsync backup to re-transfer all of the data.

    I thought that by adding the -c (checksum) switch to my rsync backup it would match files based on checksum, instead of timestamp and size, and only transfer those files that are different or not present. This appeared to work, but upon examining the new backup, hard links are not being created; the files that should be hard linked are simply being copied into the new backup directory from the previous backup directory on the backup server. This is very peculiar behavior to me, and I am having trouble figuring out why it is occurring. Checksums match for the files that I think should be hard linked. I have looked through the rsync man page and Googled around a bit, but have been unable to find anything that helps me understand this behavior.
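
    For reference, the kind of hard-link rotation in question typically looks like this (a sketch; host and directory names are hypothetical, with -c added per the experiment described above):

        rsync -a --delete -c \
            --link-dest=/backups/laptop/daily.1 \
            user@laptop:/home/user/ /backups/laptop/daily.0/

    One explanation consistent with rsync's documented behavior: --link-dest only hard-links a file when its attributes (including the modification time) also match, so content-identical files with new timestamps get re-created in the new directory instead of linked.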

  • DNS resolve .com domain on local domain

    - by Joost Verdaasdonk
    I'm building a local 2008 R2 domain as a test case, to be able to write a roadmap for the real new domain that needs to be created soon. What I would like to know is whether I can make a record in DNS that will point the domain names www.example.com and example.com to one of the servers in my network. I tried creating an A record for it, but that doesn't work. To be honest I'm not even sure this is possible? So can I do this? That way I would be able to fully test all our services (and web app) offline before I build the real domain and switch the DNS records at the provider. Some advice, and where to start, would be appreciated.

    The solution (thanks Brent):

    - Create a new forward lookup zone for example.com
    - Create an empty A record pointing to the IP of the web server you are targeting
    - If www is needed, create an A record with name www and the IP of your web server
    - For subdomains, repeat the process with names such as sub or www.sub (and the IP of your web server)

    Be aware of the DNS cache while you are in this process; things can take time. Or do the following: right-click the server and choose "Clear Cache", and in CMD run: ipconfig /flushdns (to flush the client cache).
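
    The same steps can be scripted with dnscmd on the 2008 R2 DNS server; a sketch, with the web server's address assumed to be 192.168.1.10:

        dnscmd /ZoneAdd example.com /Primary /file example.com.dns
        dnscmd /RecordAdd example.com @   A 192.168.1.10
        dnscmd /RecordAdd example.com www A 192.168.1.10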

  • Trouble connecting to a local SQL server instance from the web

    - by dfarney
    We have a small network behind a firewall (WatchGuard XTM 2 series) and a network switch. On our network we have multiple instances of SQL Server, but one in particular that I would like to be able to access remotely from our website. We have a static IP address from our ISP, and all the machines on the network have locally assigned dynamic IP addresses. When trying to connect to the database from outside our network, how do I get the request directed to the proper machine / SQL instance? Is it a parameter in my connection string or something in my firewall?

    A few things to rule out:

    1) The firewall is allowing access from the website to our network. I added the site's IP and opened up port 1433. Also, when trying to connect while monitoring the firewall, no exceptions come up as they did before I added the proper IP address.

    2) Remote connections on the SQL Server have been set up and enabled. I've done a lot of reading up on remote connections and I am sure they have been set up properly.

    I am currently getting this error message on my site:

    A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 - A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.)
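
    For what it's worth, the usual pattern here is a port-forward on the firewall (public IP and port mapped to the SQL host's internal IP) plus a connection string that names the public IP and port explicitly, since named-instance resolution via the SQL Browser service (UDP 1434) rarely crosses a firewall; a named instance would first be pinned to a static TCP port in SQL Server Configuration Manager. A sketch, with the address and credentials hypothetical:

        Server=203.0.113.10,1433;Database=MyDb;User Id=webuser;Password=secret;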

  • RPC Server Unavailable on Hyper-V cluster when moving resources after the host adapter has failed

    - by Doug Luxem
    On a Windows 2008 R2 SP1 cluster running Hyper-V, a node lost network connectivity on its primary host interface. The interface was rapidly flapping up and down, which was later determined to be caused by a faulty switch port. As this was a clustered server, the host interface was not fault tolerant (seeing as how the whole server was fault tolerant), so connectivity to the host was going up and down. The Hyper-V guests were completely unaffected by the network outage, as they used a dedicated trunk on the server separate from the host interface. Additionally, the dedicated interfaces for the cluster and live migration networks were fine.

    In order to diagnose the server, I tried to move all resources (Hyper-V guests) to other nodes through Failover Cluster Manager. These moves failed with the error "RPC Server Unavailable". The only way to move resources was by shutting down the guests, stopping the cluster service on node A, allowing other nodes to take ownership of the resources, and restarting the guests.

    A few other notes:

    - All nodes have Client for MS Networks and File & Printer Sharing enabled on the cluster and LM networks.
    - Node A was accessible over the cluster and LM networks from other nodes (these are private, cluster-only networks); pingable, CIFS, etc.
    - Accessing \\NODEA is done over the host adapters, as you would expect in this case, and is the reason for the "RPC Server Unavailable" error with that adapter being down.

    My questions here are:

    - Is there a way to still use live migration in a failure scenario such as this, to avoid shutting down the Hyper-V guests?
    - How can the network be reconfigured in the future so that the cluster service attempts to use the cluster and/or live migration networks to issue the RPC requests?
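
    For reference, the cluster's per-network preferences can at least be inspected and nudged from PowerShell; a sketch assuming the 2008 R2 FailoverClusters module (the network name is hypothetical, and this steers internal cluster traffic rather than host-level RPC, which follows the name registered in DNS):

        Import-Module FailoverClusters
        # lower metric = preferred for internal cluster traffic
        Get-ClusterNetwork | Format-Table Name, Role, Metric, AutoMetric
        (Get-ClusterNetwork "Live Migration").Metric = 100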

  • Windows Explorer and UAC: run elevated

    - by syneticon-dj
    I am profoundly annoyed by UAC and switch it off for my admin user wherever I can. Yet there are situations where I can't, especially on machines not under my continuous administration. In those cases I am always challenged with the task of traversing directories with my administrative user via Windows Explorer where regular users do not have "read" permissions. The two possible approaches to this problem so far:

    - Change the ACLs on the directory in question to include my user (Windows conveniently offers the Continue button in the "You don't currently have permissions to access this folder" dialog). This obviously sucks, since more often than not I do not want to change ACLs but just look into the folder's contents.
    - Use an elevated cmd.exe prompt along with a bunch of command-line utilities. This usually takes a lot of time when browsing through large and/or complex directory structures.

    What I would love to see would be a way to run Windows Explorer in elevated mode. I have yet to find out how to do so. But other suggestions solving this problem in an unobtrusive way, without changing the entire system's configuration (and preferably without the need to download or install anything), are very welcome too.

    I have seen this post with a suggestion for altering HKCR - interesting, but it changes the behavior for all users, which I am not allowed to do in most situations. Also, some folks have suggested using UNC paths to access the folders. Unfortunately this does not work when accessing the same machine (i.e. \\localhost\c$\path), as the "Administrators" group membership is still stripped from the token, and re-authentication (and thus the creation of a new token) does not happen when accessing localhost.

  • Combining AD permissions with FTP

    - by user64204
    We're using Windows Server 2008 with Active Directory controlling access to a network share. We've set up FTP so that people can access that share from outside (we used to use the PPTP VPN, but for various reasons we need to switch to FTP). So far, here is what we've managed to implement on the FTP:

    - The network share is used as the FTP root (defined as a UNC path), and that is working fine.
    - AD authentication is working fine (wrong password and you stay out, good password and you're in; password management in AD is correctly synced with the FTP).
    - AD permissions are failing: the AD permissions on the contents of the FTP root are ignored. Either a user has read access or write access, but it applies to the whole FTP root, which obviously isn't suitable, since that FTP root is our network share and files/folders have different AD permissions depending on people's groups. Whether we set the permissions through the share OR the FTP management interface, AD permissions are never enforced.

    Q1: Is that normal?
    Q2: If so, what solutions exist to combine AD permissions with FTP on MS Server 2008?
    Q3: If not, where should I look to fix the configuration?

  • Ubuntu server apt-get says "(-5 - No address associated with hostname)"

    - by Srini
    I have an Ubuntu 12.04 server. Running sudo apt-get update on it produces errors like this:

        W: Failed to fetch http://au.archive.ubuntu.com/ubuntu/dists/precise-backports/main/binary-i386/Packages Something wicked happened resolving 'au.archive.ubuntu.com:http' (-5 - No address associated with hostname)

    I am able to ping all the other hosts on the network and also Google's DNS at 8.8.8.8, but am unable to ping www.google.com. So I'm guessing something is wrong with my DNS setup, but I'm not sure what. I use a static IP and my /etc/network/interfaces looks like this:

        auto eth0
        iface eth0 inet static
            address 192.168.1.50
            netmask 255.255.255.0
            network 192.168.1.0
            broadcast 192.168.0.255
            gateway 192.168.1.1
            #dns-nameserver 203.12.160.35 203.12.160.36
            #nameserver 203.12.160.35 203.12.160.36

    My /etc/resolv.conf and /etc/resolvconf/resolv.conf.d/base are both empty, and my /etc/resolvconf/resolv.conf.d/original says:

        nameserver 192.168.1.1

    Any help would be greatly appreciated. P.S. I've googled it a bit and the common resolution is to switch to DHCP, which I don't want to do since this is my home server. Thanks, Srini
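
    A common fix worth trying, assuming the resolvconf package is in use (the 12.04 default): the interfaces stanza option is dns-nameservers (plural), and the interface needs a bounce after editing. A sketch using Google's resolvers:

        auto eth0
        iface eth0 inet static
            address 192.168.1.50
            netmask 255.255.255.0
            gateway 192.168.1.1
            dns-nameservers 8.8.8.8 8.8.4.4

        # then apply:
        sudo ifdown eth0 && sudo ifup eth0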

  • torrent downloads not showing on Squid log

    - by noobroot
    Hello, I have just a few months of working as a sysadmin, hence I still have lots to learn. The first thing I'd like to sort out is as follows. We have an OpenBSD 4.5 box acting as firewall, DNS, cache, etc.; the box has two network cards, one connected directly to the internet and the other to our switch. I used to work with sarg for log analysis but then changed to the much faster free-sa. I use a daily free-sa report to check the bandwidth usage and report our top 5 bandwidth consumers (3 days a week being #1 and you will be buying the pizzas :D; we are a small company, ~20 people, so we are very familiar).

    This was working really well until recently, when one of us needed to download some stuff via torrent (~3GB). Since the pizza rule is active only for non-work-related downloads, he told me (verified) that his download was indeed work related, so I would dismiss that 3GB from his quota. But to my surprise the log didn't show that 3GB; his IP's consumption was only around 290MB.

    More recently, since the FIFA World Cup started, we know that some of the employees are watching the matches' streams. We know it and we don't care, since, as already stated, we are a small company and don't have restrictive policies; we can all chat, watch YouTube, download anything we want, BUT we are only allowed 300MB a day, otherwise you end up on the top-5 pizza board. Anyway, that streaming consumption is also not showing up in the free-sa reports.

    So my question is: why is this data being excluded from the reports? I'm thinking the free-sa reports list only certain types of things, but I'm also wondering whether it's the Squid logs themselves that are not, erm, logging these connections. Any help, guide, advice or clarification is appreciated.

  • Very Slow DSL (ethernet) speed [New Interesting Update]

    - by Abhijit
    Very IMPORTANT and INTERESTING UPDATE: For some reason I decided to do a completely new setup, and this time I decided to again have openSUSE plus Ubuntu. So I first reinstalled Lubuntu and then installed openSUSE 12.2 (64-bit). Now my DSL speed is completely normal under openSUSE. So this is very scary: is it possible for an operating system to manipulate my NIC so that it works fine only on that operating system and not on another? Thinking positively and trying not to be paranoid: what is it that makes ONLY SUSE get my NIC to work at normal speed, while Ubuntu cannot? Not even Fedora? Not even Linux Mint? What are all these OSes lacking that enables SUSE to work great?

    == ORIGINAL QUESTION ==

    I was on openSUSE 12.2 when my DSL speed was normal. Yesterday I switched from openSUSE to Ubuntu 12.04 and the speed dropped to somewhere in the 7-25 kbps range. Then I switched to Linux Mint, and then to Fedora: still slow. While on Ubuntu I disabled IPv6, but still no luck. Now I am on Fedora, but this time with a DIFFERENT ISP, and I am still getting very slow speed. So my guess is this has nothing to do with the OS. What can be wrong? Is this a problem with the NIC? Does NIC speed decrease over time? Does a NIC's life end over time, as with a keyboard or mouse? Help please.

    All the OSes I used are 64-bit, and my laptop is a Compaq Presario A965TU, Intel Centrino Dual Core. An interesting thing to notice is that I get normal speed while downloading torrents inside torrent clients; the slow speed applies to downloads from any web browser and to installing software using the terminal.
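
    One way to rule the NIC in or out would be to compare the negotiated link parameters and driver under each OS; a sketch of the usual checks (the interface name eth0 is assumed):

        # negotiated speed/duplex and link state
        sudo ethtool eth0
        # driver and firmware in use
        sudo ethtool -i eth0
        # current offload settings (mismatched offloads can tank throughput)
        sudo ethtool -k eth0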

  • Windows network routing

    - by fabianvilers
    Hi! I'm working at my customer's premises, and they let me connect my private laptop to a dedicated Wi-Fi for internet access, which is nice for external consultants. The only issue is that we can't connect to any remote server on port 25. I suppose this policy is set up to avoid infected computers sending spam from their network. As you may have guessed, this means something as weird as not being able to send mail at all. Fortunately, I have a 3G cell phone that I can connect to my laptop by Bluetooth. So when I want to send an e-mail, I have to disconnect from Wi-Fi, connect my phone, send the e-mail, disconnect the phone and reconnect the Wi-Fi. Kind of a lot of overhead.

    My question is: how can I tell Windows 7 to use the Wi-Fi for every outbound connection, but if it's a connection on port 25, use the cell phone network? With this solution I could leave my phone connected all day without having to switch again and again. Thanks a lot for your answers. Fabian
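
    One caveat plus a partial workaround: the Windows routing table chooses routes by destination address, not by port, so a true per-port split isn't possible with routing alone. But if outbound mail always goes to one known SMTP host, a host route can pin just that destination to the 3G link (addresses hypothetical, and assuming the 3G connection exposes a gateway address):

        rem send traffic for the SMTP server (198.51.100.25) via the 3G gateway
        route add 198.51.100.25 mask 255.255.255.255 192.168.42.129
        rem or make it survive reboots with -p
        route -p add 198.51.100.25 mask 255.255.255.255 192.168.42.129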

  • Internet Explorer ignoring PHP setcookie() from CentOS server

    - by Hussein Sabbagh
    In the past we used a Windows XAMPP server for an internal website. It worked fine, apart from some intermittent issues, and we decided to move to a LAMP server on CentOS. We made the switch today, but it turns out Internet Explorer ignores every attempt I make at saving a cookie. There is no underscore in the URL being used; the URL is actually the same one the XAMPP server used, where I was able to save cookies without any problems. It really doesn't make any sense to me: all of the code is the same. The only things to change were the version of PHP and the server OS. The website works in all other browsers except IE. I can't even make a simple setcookie call work. On a blank test page I use:

        setcookie("test", "test", time()+36000, "/");
        sleep(5);
        print_r($_COOKIE);

    and there is nothing there. Our users can't log into the website because of this, and I have no idea what the issue is. If anyone can provide any clues or resolutions I would greatly appreciate it. Obviously the easy answer is to not use IE, but that is not an option in this case.
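
    Two things that commonly trip IE specifically, offered as guesses rather than a diagnosis: IE discards cookies whose expiry is already in the past from its point of view (so if the new CentOS box's clock is behind the client's, time()+36000 can compute a stale expiry), and IE requires a P3P compact policy before accepting cookies in third-party/framed contexts. Checking the server clock and, if framing is involved, sending a P3P header are both cheap to try:

        <?php
        // hypothetical test: emit a P3P compact policy before setting the cookie
        header('P3P: CP="CAO PSA OUR"');
        setcookie("test", "test", time() + 36000, "/");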

  • HAProxy crashes on all requests in 1.5-dev12

    - by Daniel Hough
    I'm having an issue where HAProxy crashes with no explanation when I switch from 1.4.12 to 1.5-dev12. The reason I'm switching is the SSL offloading. My config file doesn't have any errors; it's quite simple and works well with 1.4. But for some reason, when I run it with 1.5-dev12, I see the logs noting that my two backends have been set up, and then when I hit one of the frontends I get an HTTP 400 in the browser, and suddenly HAProxy isn't running anymore when I check.

    I understand that a common workaround for the lack of SSL support in HAProxy is to use stud, and I may go with that since I need an SSL solution for my service. But before I delve into that world, I thought I might see if anybody has experienced the same problem and knows how to fix it. The server is Ubuntu 10.04, and I followed the make instructions on the Exceliance blog here.

    EDIT: On the advice of Kyle Brandt, I did a bit more investigation. I attached gdb to the haproxy process, and when the crash occurred this is what I got:

        Program received signal SIGSEGV, Segmentation fault.
        0x0804e5c2 in dequeue_all_listeners (list=0x9e1a418) at src/protocols.c:184
        184    list_for_each_entry_safe(listener, l_back, list, wait_queue) {

    P.S. HAProxy is awesome, so thank you Exceliance for providing us with something so useful :)

  • Wireless signal changes from strong to weak after connecting

    - by gibberish
    The router (primary AP) is a WRVS4400N; the WAP (signal booster) is a WAP4410N.

    Problem: A user is physically located within ten feet of the WAP (200 feet from the main wireless router). The signal is at 5 bars as the user connects to the wireless network. Within seconds, the signal is at or below two bars and the connection is poor.

    Background:

    - We are trying to solve the problem of a weak wireless signal in the back offices. The desired result is for client laptops to automatically switch to the stronger signal.
    - The WAP is connected to the network via an Ethernet cable.
    - The WAP is set to AP mode (instead of Wireless Repeater mode).
    - The WAP does appear to boost the signal: using the Windows 7 system-tray "Connect To A Network" applet, we can observe the signal getting stronger as a laptop approaches the WAP.
    - The problem described above happens to users located near or beyond the WAP. It does not happen to users in close proximity to the router.

    Secondary question: when using the WAP in AP mode, do the WAP and the router (primary AP) need to be on the same channel?
