Search Results

Search found 22160 results on 887 pages for 'speed up computer'.


  • Hyper-V on 2012 R2: starting a Gen 1 VM causes the host to freeze up

    - by sputnik
    I've searched a lot to resolve the following issue, but nothing has helped. My problem is that starting a first-generation VM locks up the whole host; only a hard reset helps. A second-generation VM starts and runs perfectly. The freezes happened with three different guest OSes (FreeBSD, Ubuntu, Windows Server 2008 R2), while Windows 8.1 on a Gen 2 configuration works fine. I'm using this PC mainly as a workstation. No event log errors or dumps are generated.
    My system:
        Windows Server 2012 R2
        FX-8350, not overclocked
        ASRock 870 Extreme R2 (crappy board, IMHO)
        32GB DDR3 1866 running at 1600 (despite the motherboard's claimed "support", 1866 RAM won't run at full speed)
        120GB SSD
        4.5TB Storage Spaces device
    I don't think it's due to my system, because VMware Workstation was running without problems. Did I forget to configure something? Any help is appreciated.
    P.S.: Even deactivating C1E, C6 and Cool'n'Quiet didn't help.
    P.P.S.: With no virtual network adapter attached, the system still locks up. Creating a Gen 1 VM without any HDDs or network and launching it works; attaching a boot DVD causes the host to freeze. The host freezes as the Gen 1 VM begins to boot, no matter whether it boots from DVD or HDD.

  • What does this diagnostic output mean?

    - by ChrisF
    I recently had a fault with my broadband connection. It turned out to be a fault with the ISP's or telco's equipment. My ISP posted this diagnostic, but while I understand it in general, I'd like to know more about the details. I'm assuming that ATM means Asynchronous Transfer Mode and PPP means Point-to-Point Protocol; it was this that my router was indicating as the fault.

        xDSL Status Test Summary
        Sync Status: Circuit In Sync

        General Information
        NTE Status:
        NTE Power Status: Unknown
        Bypass Status:

                            Upstream DSL Link Information   Downstream DSL Link Information
        Loop Loss:          9.0                             17.0
        SNR Margin:         25                              15
        Errored Seconds:    0                               0
        HEC Errors:         0
        Cell Count:         0                               0
        Speed:              448                             8128

        TAM Status: Successfully executed operation
        Network Test: Sub-Test Results

        Layer   Name                              Value        Status
        Modem                                                  pass
                Transmitter Power (Upstream)      12.4 dBm
                Transmitter Power (Downstream)    8.8 dBm
                Upstream psd                      -38 dBm/Hz
                Downstream psd                    -51 dBm/Hz
        DSL                                                    pass
                Equipment Vendor Name             TSTC
                Equipment Vendor Id               n/a
                Equipment Vendor Revision         n/a
                Training Time                     8 s
                Num Syncs                         1
                Upstream bit rate                 448 kbps
                Downstream bit rate               8128 kbps
                Upstream maximum bit rate         1108 kbps
                Downstream maximum bit rate       11744 kbps
                Upstream Attenuation              3.5 dB
                Downstream Attenuation            0.0 dB
                Upstream Noise Margin             20.0 dB
                Downstream Noise Margin           19.0 dB
                Local CRC Errors                  0
                Remote CRC Errors                 0
                Up Data Path                      interleaved
                Down Data Path                    interleaved
                Standard Used                     G_DMT
                INP
                INP Upstream Symbols              n/a
                INP Upstream Delay                4 ms
                INP Upstream Depth                4
                INP Downstream Symbols            n/a
                INP Downstream Delay              5 ms
                INP Downstream Depth              32
        ATM     Reason: No ATM cells received                  fail
                Number of cells transmitted       30
                Number of cells received          0
                number of Near end HEC errors     0
                number of Far end HEC errors      n/a
        PPP     Reason: No response from peer                  fail
                PAP authentication                not tested
                CHAP authentication               not tested

    (I'm not sure that Super User is the best place to ask this, but two people have suggested I ask it here, so here I am.)

  • pfSense router on a LAN with two gateways

    - by JohnCC
    I have a LAN with an ADSL modem/router on it. We have just gained an alternative high-speed internet connection at our location, and I want to connect the LAN to it, eventually dropping the ADSL. I've chosen to use a small pfSense box to connect the LAN to the new WAN connection. Two servers on the LAN run services accessible to the outside via NAT using the single ADSL WAN IP, and we have DNS records which point to this IP. I want to do the same via the new connection, using the WAN IP there. That connection permits multiple IPs, so I have configured pfSense using virtual IPs, 1:1 NAT and appropriate firewall rules.
    When I change the servers' default gateway settings to the pfSense box, I can access the services via the new WAN IPs without a problem. However, I can no longer access them via the old WAN IP. If I set the servers' default gateway back to the ADSL router, then the opposite is true: I can access the services via the ADSL IP, but not via the new one.
    In the first case, I believe this is because an incoming SYN packet arrives at the ADSL WAN IP and is NAT'd and sent to the internal IP of the server. The server responds with a SYN/ACK which it sends via its default gateway, the pfSense box. The pfSense box sees a SYN/ACK for which it saw no SYN and drops the packet. Is there any sensible way around this? I would like the services to be accessible via both IPs for a short period at least, since once I change the DNS it will take a while before everyone picks up the new address.
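
    One possible workaround, sketched here on the assumption that the two servers run Linux (the gateway address 192.168.1.1 and MAC 00:11:22:33:44:55 below are placeholders for the ADSL router): mark each connection by the gateway its first packet arrived from, and route the replies for marked connections back through that gateway, so every flow returns the way it came in regardless of the default gateway.

        # tag flows whose first packet came from the ADSL router's MAC address
        iptables -t mangle -A PREROUTING -i eth0 -m mac --mac-source 00:11:22:33:44:55 \
            -j CONNMARK --set-mark 1
        # copy the connection mark onto locally generated reply packets
        iptables -t mangle -A OUTPUT -j CONNMARK --restore-mark
        # send marked replies out via a routing table that points at the ADSL router
        ip rule add fwmark 1 table 100
        ip route add default via 192.168.1.1 dev eth0 table 100

    With something like this in place, the default gateway can stay pointed at the pfSense box while connections arriving via the ADSL IP are still answered through the ADSL router; once the DNS change has propagated, the rules can simply be removed.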

  • How to get ISA 2006 Web Proxy to work with the Single Network Adapter template

    - by tronda
    I need to test an issue with running our application behind a proxy server with different types of configuration, so I installed ISA 2006 Enterprise on a desktop computer. Since this computer only has a single network card and I want to start out easy, I chose the "Single Network Adapter" template. We have an internal NAT'ed network in the 10 range, and I have defined the internal network on the ISA server as 10.XXX.YY.1 - 10.XXX.YY.255. I also have the Default rule which denies all traffic, but I've added the following rule:

        Policy - Protocols - From - To
        Accept  HTTP Internal External  HTTPS Local Host Internal  HTTPS Server Localhost

    Then I configured Internet Explorer on a virtual machine running XP within VirtualBox with bridged networking (it gets an address in the same range as the regular computers on our network), similar to this. Instead of the server name I used the IP address. When I try to access a web page, the request doesn't go through and I get the following log messages on the proxy server (column headers first, then one record per line):

        Original Client IP  Client Agent  Authenticated Client  Service  Referring Server  Destination Host Name  Transport  HTTP Method  MIME Type  Object Source  Source Proxy  Destination Proxy  Bidirectional  Client Host Name  Filter Information  Network Interface  Raw IP Header  Raw Payload  GMT Log Time  Source Port  Processing Time  Bytes Sent  Bytes Received  Cache Information  Error Information  Authentication Server  Log Time  Client IP  Destination IP  Destination Port  Protocol  Action  Rule  Result Code  HTTP Status Code  Client Username  Source Network  Destination Network  URL  Server Name  Log Record Type

        10.XXX.YY.174 - TCP - - - 24.08.2010 13:25:24 1080 0 0 0 0x0 0x0 - 24.08.2010 06:25:24 10.XXX.YY.174 10.XXX.YY.175 80 HTTP Initiated Connection MyHTTPAccess 0x0 ERROR_SUCCESS Internal Local Host - PROXYTEST Firewall
        10.XXX.YY.175 - TCP - - - 24.08.2010 13:25:24 2275 0 0 0 0x0 0x0 - 24.08.2010 06:25:24 10.XXX.YY.175 10.XXX.YY.175 80 HTTP Initiated Connection 0x0 ERROR_SUCCESS Local Host Local Host - PROXYTEST Firewall
        10.XXX.YY.175 - TCP - - - 24.08.2010 13:25:25 2275 0 0 0 0x0 0x0 - 24.08.2010 06:25:25 10.XXX.YY.175 10.XXX.YY.175 80 HTTP Closed Connection 0x80074e20 FWX_E_GRACEFUL_SHUTDOWN Local Host Local Host - PROXYTEST Firewall
        10.XXX.YY.175 - TCP - - - 24.08.2010 13:25:25 2276 0 0 0 0x0 0x0 - 24.08.2010 06:25:25 10.XXX.YY.175 10.XXX.YY.175 80 HTTP Initiated Connection 0x0 ERROR_SUCCESS Local Host Local Host - PROXYTEST Firewall
        10.XXX.YY.175 - TCP - - - 24.08.2010 13:25:26 2276 0 0 0 0x0 0x0 - 24.08.2010 06:25:26 10.XXX.YY.175 10.XXX.YY.175 80 HTTP Closed Connection 0x80074e20 FWX_E_GRACEFUL_SHUTDOWN Local Host Local Host - PROXYTEST Firewall
        10.XXX.YY.175 - TCP - - - 24.08.2010 13:25:26 2277 0 0 0 0x0 0x0 - 24.08.2010 06:25:26 10.XXX.YY.175 10.XXX.YY.175 80 HTTP Initiated Connection 0x0 ERROR_SUCCESS Local Host Local Host - PROXYTEST Firewall
        10.XXX.YY.159 - UDP - - - 24.08.2010 13:25:26 68 0 0 0 0x0 0x0 - 24.08.2010 06:25:26 10.XXX.YY.159 255.255.255.255 67 DHCP (request) Denied Connection [Enterprise] Default rule 0xc004000d FWX_E_POLICY_RULES_DENIED Internal Local Host - PROXYTEST Firewall
        10.XXX.YY.166 - UDP - - - 24.08.2010 13:25:26 68 0 0 0 0x0 0x0 - 24.08.2010 06:25:26 10.XXX.YY.166 255.255.255.255 67 DHCP (request) Denied Connection [Enterprise] Default rule 0xc004000d FWX_E_POLICY_RULES_DENIED Internal Local Host - PROXYTEST Firewall
        0.0.0.0 Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729) Yes Proxy 10.XXX.YY.175 TCP GET Internet - - - Req ID: 096c76ae; Compression: client=No, server=No, compress rate=0% decompress rate=0% - - - 24.08.2010 13:25:27 0 2945 2581 446 0x0 0x40 24.08.2010 06:25:27 10.XXX.YY.174 10.XXX.YY.175 80 http Failed Connection Attempt MyHTTPAccess 10061 anonymous Internal Local Host http://www.vg.no/ PROXYTEST Web Proxy Filter
        10.XXX.YY.175 - TCP - - - 24.08.2010 13:25:27 2277 0 0 0 0x0 0x0 - 24.08.2010 06:25:27 10.XXX.YY.175 10.XXX.YY.175 80 HTTP Closed Connection 0x80074e20 FWX_E_GRACEFUL_SHUTDOWN Local Host Local Host - PROXYTEST Firewall

  • Is it possible to change "working directory" of XeTeX?

    - by Herbert Sitz
    Using XeTeX, there are many working files that get created in the process of producing the PDF, and they litter the directory where my main .tex file is. Is it possible to change the working directory of XeTeX so that it stores all these scratch files in some other directory, out of the way? There is a previous question on superuser.com that discusses a utility that cleans up the working files by deleting them after they're produced: http://superuser.com/questions/95712/how-to-avoid-littering-ones-tex-directories-with-intermediate-files That solution doesn't work for me since I'm using XeTeX, but it also seems like it would be preferable to simply be able to designate a "scratch" directory where all working files are saved. I haven't been able to find any info on how to do it, though. Is there a way?
    (My question is prompted partly by the fact that I often work with files in a directory that is shared using Dropbox, so it creates a lot of unnecessary traffic if files are getting created and destroyed willy-nilly. I don't know if it affects speed in any way, but the idea of having a separate working directory that is not shared/replicated by Dropbox would be a cleaner solution, even if I could use the method suggested in the earlier thread.)
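
    One approach that may help, assuming a TeX Live/web2c-based XeTeX: the xelatex engine accepts an -output-directory option that sends the .aux/.log/.toc files (and the PDF itself) somewhere else; the directory and file names below are just illustrative.

        mkdir -p build
        xelatex -output-directory=build main.tex
        # copy only the finished PDF back next to the source, if desired
        cp build/main.pdf .

    If you build with latexmk, its -outdir option (combined with -xelatex) achieves the same separation.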

  • Internet Troubles - PPPoE vs PPPoA?

    - by AkkA
    I have been having some internet trouble at home (ADSL2+ connection in Australia). We get random drop-outs of the authentication connection: the router keeps its sync with the DSL service, but we lose authentication and either have to restart the router/modem (it's a combined unit, a Belkin, not sure of the model number) or unplug the phone cable, wait about 30 seconds and plug it in again.
    I've called the ISP (Telstra) a few times, but they only offer limited support when we don't use their supported hardware. Apparently something had happened on their side; they checked the box again (at least it sounded that simple) and told me it would be fine. It wasn't. I've replaced all the filters around the house, but that didn't help either. We do live a little way from the exchange (we get a sync speed of about 3000/900), so I thought it could be due to line noise, but chasing that hasn't helped.
    Telstra allow both PPPoE and PPPoA connections (which I'm configuring through my router; I don't have software on the PC side). I've been running PPPoA the whole time. Would it make any difference changing it to PPPoE? If not, are there any other theories as to why we would be experiencing these drop-outs? It had been fine for at least 12 months, then it suddenly started about 2 months ago.

  • Non-alphanumeric character folder name auto-completion problems

    - by viking
    I have been working with Windows 7's command line and have some folders that begin with non-alphanumeric characters. When I try to use tab completion to complete the folder name, the initial character is not included inside the quotation marks. Example: C:\Users\username\!example is the folder I want to get into, but when I type cd ! and press <Tab> to autocomplete, it completes to cd !"!example" instead of the expected cd "!example". Any ideas on how to fix this besides changing the folder names?
    EDIT: I realize I could just tab through the entire list after entering cd, but I'm looking for a way to speed up the process. I have been spending a significant amount of time navigating these folders.
    UPDATE: This also happens if there is a space in the directory name, for example "C:\Program Files". Typing C:\Program and pressing Tab completes to "C:\Program Files" (with a closing quote), so in order to keep tab-completing into a subdirectory, the quote after Program Files first has to be deleted before the next part of the path can be typed.

  • How to create a software raid5 array without a spare

    - by Yannick M.
    I am trying to create a software RAID 5 array using mdadm:

        $ linux # mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 --spare-devices=0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
        mdadm: layout defaults to left-symmetric
        mdadm: chunk size defaults to 64K
        mdadm: array /dev/md0 started.

    However, when inspecting /proc/mdstat:

        Personalities : [raid6] [raid5] [raid4]
        md0 : active raid5 sdd1[4] sdc1[2] sdb1[1] sda1[0]
              2930279808 blocks level 5, 64k chunk, algorithm 2 [4/3] [UUU_]
              [>....................]  recovery =  0.3% (2970496/976759936) finish=186.1min speed=87172K/sec
        unused devices: <none>

    It seems one drive isn't active, so I check the details of the array:

        /dev/md0:
                Version : 00.90.03
          Creation Time : Tue Jul 21 16:29:53 2009
             Raid Level : raid5
             Array Size : 2930279808 (2794.53 GiB 3000.61 GB)
          Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
           Raid Devices : 4
          Total Devices : 4
        Preferred Minor : 0
            Persistence : Superblock is persistent
            Update Time : Tue Jul 21 16:29:53 2009
                  State : clean, degraded, recovering
         Active Devices : 3
        Working Devices : 4
         Failed Devices : 0
          Spare Devices : 1
                 Layout : left-symmetric
             Chunk Size : 64K
         Rebuild Status : 0% complete
                   UUID : ce8b2f40:821d003c:0027688e:a70977ec
                 Events : 0.1

            Number   Major   Minor   RaidDevice State
               0       8        1        0      active sync   /dev/sda1
               1       8       17        1      active sync   /dev/sdb1
               2       8       33        2      active sync   /dev/sdc1
               4       8       49        3      spare rebuilding   /dev/sdd1

    And it seems there are only 3 active devices, with one spare. Is it just me, or is something wrong here?
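
    A hedged aside that may explain what is shown above: mdadm normally builds a new RAID 5 array by starting it degraded and reconstructing parity onto the last member, which shows up as "spare rebuilding" until the initial sync finishes. A couple of illustrative commands to confirm that is all that is happening:

        # watch the initial build progress
        watch -n 10 cat /proc/mdstat
        # once it finishes, all four members should report "active sync" and no spares
        mdadm --detail /dev/md0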

  • Connection problems via Wifi (multiple devices)

    - by Kelvin Farrell
    I'm having connection issues with my router (Linksys WRT610N) at home. A number of things are happening (there may be more; this is just what I've mainly noticed):
    1) Using my laptop (MacBook Pro, OS X Lion), I am unable to complete any operations on my external FTP server, hosted with FatCow. I can connect to it and navigate through all the files, but when I try to edit/delete/add a file the operation times out. EVERY time. I've used two other Wi-Fi connections on my laptop and neither has this issue.
    2) I am unable to upload photos/videos to Facebook or Twitter using my phone (Samsung Galaxy S2) or my tablet (HP TouchPad, CM9). Neither am I able to upload files to Dropbox from either device. The same thing happens in all cases: the upload begins and just hangs at 0% forever. After about 10 minutes I am always forced to disconnect the Wi-Fi to stop it.
    3) My laptop is getting slow internet speeds, even though we are on 20Mb broadband. Speed tests say I'm getting a good connection and my ping is good, but when using streaming services like Spotify it takes forever to load a page and the music frequently stops to buffer mid-song.
    Don't know if it's worth mentioning, but I have no issues with my Xbox (Ethernet), Apple TV (Wi-Fi) or my girlfriend's phone (Nokia Lumia 800, WP7.5) on the network. I'd really appreciate any help. This is driving me insane and is really affecting both my working and leisure use of the internet.

  • 10 GigE interfaces limit single-connection throughput to 1 Gb on a ProCurve 4208vl

    - by wazoox
    The setup is as follows: three Linux servers with Intel CX4 10 GigE controllers and an Xserve with a Myricom 10 GigE CX4 controller are connected to a ProCurve 4208vl switch, along with a myriad of other machines connected through good ol' 1000BASE-T. The interfaces are actually set up as 10 Gig, according to both the switch monitoring interface and the servers (ethtool, etc.).
    However, a single connection between two 10 GigE-equipped machines through the switch is limited to exactly 1 Gb. If I connect two of the 10 GigE machines directly with a CX4 cable, netperf reports the link bandwidth as 9000 Mb/s and NFS achieves about 550 MB/s transfers. But when I'm using the switch, the connection tops out at 950 Mb/s in netperf and 110 MB/s with NFS. When I open several connections from three of the machines to the fourth, I get 350 MB/s of NFS transfer speed. So each individual 10 GigE port can actually carry much more than 1 Gb, but individual connections are strictly limited to 1 Gb.
    Conclusion: the 10 GigE connection through the switch behaves exactly like a trunk of ten 1 Gb connections. That doesn't make any sense to me, unless HP planned these ports only for cascading switches or strictly for many-clients-to-single-server use. Unfortunately that is NOT the envisioned setup; we need big throughput from machine to machine. Is this a not-so-known (or carefully hidden...) limitation of this type of switch? Should I suggest seppuku to the HP representative? Does anyone have any idea how to enable proper behaviour? I upgraded at a hefty price from bonded 1 Gb links to 10 GigE and see exactly ZERO gain! That's absolutely unacceptable.
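
    A quick way to test the "trunk of 1 Gb links" theory, sketched with netperf and hypothetical addresses: compare one stream against several concurrent streams between the same pair of hosts through the switch. If a single stream caps near 950 Mb/s but four together approach 4 Gb/s, the limit really is per flow rather than per port.

        # single TCP stream through the switch
        netperf -H 192.168.10.20 -l 30 -t TCP_STREAM
        # four concurrent streams between the same two hosts
        for i in 1 2 3 4; do
            netperf -H 192.168.10.20 -l 30 -t TCP_STREAM &
        done
        wait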

  • Solaris 10 invalid ARP requests from 0.0.0.0? Link up/down every hour or 2

    - by JWD
    The guys at the data center where I'm hosting a server running Solaris 10 are telling me that my server is making a lot of invalid ARP requests. This is an example of a portion of what was sent to me from the logs (with MAC addresses and IP addresses changed):

        [mymacaddress]/0.0.0.0/0000.0000.0000/[myipaddress]/[Datestamp]

    It's being logged every hour. I don't see anything in the ARP tables (arp -a) or routing tables (netstat -r), and I don't see anything relating to 0.0.0.0 when snooping the ARP requests. The only place I see any reference to 0.0.0.0 is if I do netstat -a for SCTP:

        SCTP:
                Local Address                   Remote Address                  Swind  Send-Q  Rwind  Recv-Q  StrsI/O  State
        ------------------------------- ------------------------------- ------ ------ ------ ------ ------- -----------
        0.0.0.0                         0.0.0.0                              0      0  102400      0    32/32  CLOSED

    But I'm not really sure what that means, and it doesn't seem like I can disable SCTP. There are some tunable SCTP parameters, but it's not something I'm familiar with. Do I have to add changes to /etc/system? It looks like sctp_heartbeat_interval might be what I need to change?
    If it makes any difference, I have a few Solaris zones running on this server, each with its own IP address on a virtual interface (eth0:0, eth0:1, etc.).
    Does anyone have any idea what might be causing this and how to stop it? I think the switch I'm connected to doesn't like it and momentarily drops the connection. Is there any way to at least block those requests using ipfilter or something else?
    Update: This was happening more frequently, but now it seems to happen roughly every hour or every two hours; it's not consistent. I tried setting the link speed and duplex to match the switch port, and that seemed to stop it for a few hours, but then it started again.
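
    In case it helps to gather more evidence, a capture sketch (the interface name and output file are illustrative; on Solaris the NIC will be something like e1000g0 or bge0 rather than eth0): capturing only ARP traffic for an hour or two makes it easy to see exactly which frames carry the 0.0.0.0 sender address and when they are sent.

        # record ARP frames to a file, then inspect them verbosely
        snoop -d e1000g0 -o /var/tmp/arp.cap arp
        snoop -i /var/tmp/arp.cap -v | less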

  • Replacing Buffalo LinkStations with FreeNAS, overall backup strategy - am I on the right path?

    - by Shreko
    We've been using two Buffalo LinkStations of 320GB each for a shared directory and employees' server storage (around 20 employees): only documents (Word, Excel, CAD drawings, etc.) and database backups of the main application server (ERP, accounting). One Buffalo box serves as the main one, located in the server room next to the main application server; the other is located on the opposite side of the building (for fire protection) in a secure storage room and backs up the first one. We also have several external HDs that back up everything from the Buffalo box for an offsite backup.
    After 3.5 years of using these, capacity is the main limitation, so I'm planning a replacement and would like to use FreeNAS (we already use m0n0wall with great success). I would like to keep it simple and continue with a similar setup, building two low-power boxes with one 2TB HD each. Is a low-power Atom motherboard OK? I'm not sure about the HDs: I've read somebody on this site mentioning the Seagate ES.2 as more reliable and better performing. How would eco/green drives compare? We've been pretty happy with the speed of the Buffalo boxes and I don't want my users to notice any slowdown. Any suggestions?

  • Why do hosts prefer Linux to Windows Server?

    - by iconiK
    So far I see a HUGE majority of hosts providing only Linux shared hosting, offering Windows only on VPS (or even only on dedicated servers). Why is that? While Windows is a lot more expensive than Linux (though it depends on a lot of factors, not just the initial and support license cost), it also provides ASP.NET, IIS and, of course, Microsoft SQL Server. I know in the past it might have been because cPanel was Linux-only, but now they have a Windows version. But still, why is Linux predominantly used on shared hosting? PHP works on both systems. IIS can be (and probably is) faster. MySQL runs on both systems as well. cPanel has a Windows version. Python, Perl and Ruby all run on Windows as well. You even have MS SQL Server Express, which I find superior to MySQL in both speed and features. Access is there for low-usage requirements, as is SQLite (which is so great for quick small stuff). And with PowerShell you have a good alternative to the Unix shell.
    EDIT: I am looking for common reasons; I realize each hosting company (and/or its clients) may have different needs. This becomes very important when you get to VPS or cloud offerings, which give you a full operating system to use.

  • LG LW20 Express won't boot after HDD replacement

    - by Mika
    My old laptop (LG LW20 Express) had an HDD failure and I replaced the HDD. Now the laptop won't boot from CD or USB. I'm trying to install Ubuntu on it. When I turn the laptop on it shows me the startup screen, but when it should start loading the operating system it just gives a black screen and starts over. This loop continues until I shut down the laptop. I created the USB boot drive following this guide: https://help.ubuntu.com/community/Installation/FromUSBStick/ I used my boot CD to install Ubuntu on the machine I'm using right now, so at least the CD should work. In the BIOS I can see that my newly installed HDD is recognized and set as secondary master. Also, the CD and removable media come before the HDD in the boot order. The laptop runs pretty hot; the fan is at full speed soon after the laptop is turned on. Earlier I suspected the almost-broken HDD was producing that heat, but there is obviously something else going on as well. Any ideas what to check?

  • Terminal Server in Windows Server 2003

    - by Hemal
    I'm confused about what I am doing here. At present I have a Windows Server 2003 server with SP2. I have assigned the RAS/VPN server role to it (through the Manage Your Server wizard), and in my router I set the IP address of my RAS/VPN server as the PPTP server. Staff leave their workstations on all the time and access them from home through RDP: they first connect through the VPN, and in the Remote Desktop client they simply type their respective IP or computer name to reach the office network from home. Everything works fine so far, except that staff have to leave their computers on in the office all the time, and speed is very slow depending on how many staff members are using the VPN.
    I was told to install and configure Terminal Services to improve this situation. I have already added the Terminal Server role on the server, but I don't know how clients can access the TS server from home or another remote location. I would really appreciate any good links or guidance from the experts in this group. Thank you in advance for any replies!

  • NFS confusion - writing many small files

    - by Antonis Christofides
    I have a Debian squeeze amd64 box which is at the same time an NFS4 server and client (it mounts itself through NFS4). The local directory that leads directly to disk is /nfs4exports/mydir, whereas /nfs4mounts/mydir is the same thing mounted through NFS, using the machine's external IP address. Here is the line from fstab:

        176.9.116.102:/mydir /nfs4mounts/mydir nfs4 soft 0 0

    I have an application that writes many small files. If I write directly to /nfs4exports/mydir, it writes thousands of files per second; but if I write to /nfs4mounts/mydir, it writes only about 4 files per second. I can greatly increase speed if I add async to /etc/exports. (Writing a single large file to the NFS directory goes at more than 100 MB/s.)
    I am confused by the description of async in NFS. If my application accesses the local directory, system calls like write and close return even if caches have not been flushed to permanent storage. Apparently this is not true with NFS sync behaviour. However, with NFS async behaviour, even calls like fsync are ignored. Isn't it possible to work like local files, i.e. generally work asynchronously, but honour fsync and O_SYNC?
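
    For reference, a sketch of the two export variants being compared (the client network and option set are illustrative; see exports(5)). With sync the server must commit each write before replying, which is what makes thousands of small files so slow over NFS; with async it acknowledges writes - and potentially even commit requests - before data reaches stable storage, which is exactly the trade-off described above.

        # /etc/exports - illustrative entries, not a recommendation
        /nfs4exports/mydir 192.0.2.0/24(rw,sync,no_subtree_check)    # safe, slow for many small files
        #/nfs4exports/mydir 192.0.2.0/24(rw,async,no_subtree_check)  # fast, but data may be lost if the server crashes
        # re-export after editing:
        # exportfs -ra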

  • In an environment with multiple WiFi access points, do wireless clients sometimes connect to both at the same time?

    - by Bobby Burgess
    This is more of a curiosity than a problem. In this new office I have two D-Link DAP-2553s connected in a master/slave array (this just means the master keeps certain configuration options aligned with the slave). The network is set to 802.11n-only, and each AP has the same SSID and WPA2 key; the only difference is that they are on different channels (1 and 11). The WiFi network itself is working well: users can roam around and the signal/speed is fairly consistent. However, when I look at the 802.11 client list in the web admin page for each of the two APs, I see that certain clients are connected to both, for extended periods of time, though I assume they are only passing data through one of them. Not every client is seen on each AP, but at any given time the same MAC address of a WiFi adapter can be associated (and remain associated) with both APs. The client list auto-refreshes every few seconds, so I believe I'm looking at current rather than stale information. One of the WiFi adapters that consistently associates with both APs is an Intel Centrino Wireless-N 1030 (laptop chip). Is it part of the WiFi standard that more than one association per WiFi card can be established concurrently on separate APs?

  • How to handle sh: fetch: command not found

    - by Tyler Johnson
    Okay, I'm a newbie. I know how to build and compose a website, but I have no idea what I'm doing when it comes to servers, server commands, etc. I've recently had a problem with all of my sites on our servers going down at once, after which I have to go in and reboot the server for them to come up again. At first this was annoying, but now it is becoming agonizing, as it now takes 3-4 reboots for the websites to come back up. I contacted support for my hosting, but they are not being very helpful: they just keep telling me what the issue might be and that I'm going to have to look into it and figure it out myself, which really isn't possible since I know nothing. Anyway, here are the things they said were possible reasons:
    1. I have "strange logs" in my Apache webserver log, with the error: sh: fetch: command not found
    2. My php.ini memory limit is 256M, which is very high. It should be 32M or 64M.
    3. The server is reaching MaxClients, meaning we have more than 150 visitors at a time. (They supposedly "fixed" this, but the sites/server still go down.)
    4. I have some WordPress sites with plugins getting errors like:
        PHP Warning: pack(): Type H: illegal hex digit G in...
        PHP Fatal error: Cannot use object of type stdClass as array in...
        PHP Fatal error: Maximum execution time of 30 seconds exceeded in...
        PHP Fatal error: Call to undefined function file_exists() in...
        PHP Parse error: syntax error, unexpected '<'
    I know that's a lot, but I really am at wits' end and have no idea what to do now. If anyone could give me some advice or point me in the right direction I would greatly appreciate it! Thanks! Oh, and here are the specs for my server:
        RAM: 2048MB
        CPU Shares: 40
        Primary Disk: 50GB
        Data Transfer: 75GB
        Port Speed: 5Mbps
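
    On the "sh: fetch: command not found" message, a hedged diagnostic sketch: fetch is the BSD download utility, so something on the server (a cron job, a plugin, or a script written for FreeBSD) is probably trying to shell out to it. The paths below are just the usual places to start looking for whatever invokes it.

        # look for cron jobs or site code that call fetch
        grep -Rn "fetch " /etc/cron* /var/spool/cron 2>/dev/null
        grep -Rn "fetch " /var/www 2>/dev/null | head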

  • How to move or delete files from a folder containing 2 million files on an NTFS drive?

    - by Beau
    The issue is that any modification to the directory locks up Explorer indefinitely, though Samba access to other directories still works. I've tried moving files locally and over Samba. Even enumerating the directory to get the list of files locks up the computer indefinitely. I tried using Python's win32file.FindFilesIterator to iterate over the files, but that also hangs. My idea was to move each file to a different directory (in a directory above the one we're dealing with) based on its timestamp, so that we'd have at most a thousand or so files in each directory. But since I can't even enumerate the files, that's been a non-starter. If I have to give up and just nuke the directory I'm willing to do that, but a standard delete also hangs indefinitely.
    I have set these two parameters to increase speed and they did not help either:

        R:\>fsutil behavior query disablelastaccess
        disablelastaccess = 1
        R:\>fsutil behavior query disable8dot3
        disable8dot3 = 1

    These are all sequentially named images, which would have run into the known issue with 8.3 filenames whereby generating short names for many similarly named files in one directory can take a very long time. From what I understand this data is stored in the file system even after disable8dot3 is enabled, so it may still be contributing to the problem. Any ideas?

  • How to interrupt software raid resync?

    - by Adam5
    I want to interrupt a running resync operation on a Debian squeeze software RAID. (This is the regular scheduled compare resync; the RAID array is still clean in such a case. Do not confuse this with a rebuild after a disk has failed and been replaced.) How do I stop this scheduled resync operation while it is running? Another RAID array is "resync pending", because they all get checked on the same day (Sunday night), one after another. I want a complete stop of this Sunday-night resyncing.
    [Edit: sudo kill -9 1010 doesn't stop it; 1010 is the PID of the md2_resync process.]
    I would also like to know how I can control the intervals between resyncs and the remaining time until the next one.
    [Edit 2: What I did for now was to make the resync go very slowly, so it no longer disturbs anything:

        sudo sysctl -w dev.raid.speed_limit_max=1000

    (taken from http://www.cyberciti.biz/tips/linux-raid-increase-resync-rebuild-speed.html). During the night I will set it back to a high value, so the resync can finish. This workaround is fine for most situations; nonetheless it would be interesting to know whether what I asked is possible. For example, it does not seem to be possible to grow an array while it is resyncing or a resync is "pending".]
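
    A possible direct answer, assuming a reasonably recent md driver with the sysfs interface (md2 below matches the md2_resync process mentioned above): a scheduled check can be cancelled through sysfs, and on Debian the periodic check is normally driven by a cron job shipped with the mdadm package, so the schedule can be changed or disabled there.

        # what is the array doing right now? (idle, check, resync, recover, ...)
        cat /sys/block/md2/md/sync_action
        # cancel the running check
        echo idle | sudo tee /sys/block/md2/md/sync_action
        # the Debian schedule lives in a cron snippet that calls checkarray;
        # edit or disable it to change when (or whether) the periodic check runs
        cat /etc/cron.d/mdadm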

  • Desktop PC does not power up on power button

    - by hIpPy
    When I press the power button on my desktop, it does not power up completely. Before I press the power button I see lights on the motherboard; everything is normal. On pressing the power button, the fans on the CPU, graphics card and motherboard spin a little for a second or two and then stop. There are no beeps during this process. It has been doing this for a while now, but it used to start up after some tries. Once it starts up I have NO issues at all, such as random shutdowns, so it is not an OS issue.
    Update: I left the desktop off for a few days and it started. I'm just guessing here, but it seems as if the PSU (Antec TP2-550ATX) is dying and does not have enough power now - just a guess. It's an old desktop, assembled in 2005, but I have maintained it well.
    Update: I always keep the desktop running and never shut it down. During updates or manual restarts it powers up without issues. I wonder if this sheds light on the issue.
    Any idea how I can narrow down the problem, e.g. how to find out whether the PSU is dying? I'd really like to fix this. Please help. Thanks. Below is the complete configuration:
        DFI LAN-Party UT NF4 Ultra-D 6/23 {6.70}, Evercool EC-VC-RE 41/47C
        AMD Opteron 170 2.0GHz {1.3.2.16} 1.312V 36/41C, ThermalRight SI-120, Panaflo 120×38mm
        OCZ Platinum 2×1GB 200MHz 2.66V 3-3-2-7 1T
        XFX 7800GTX 256MB 475/1250MHz {91.31}, Zalman VF900 Cu led 41/56C
        WD Caviar 320GB 7200RPM 16MB SATA 3Gb/s
        Antec TP2-550ATX
        Antec P180
        WinXP SP3
        Logitech MX310
        Razer Mantis Speed
        BenQ FP91G+ 19" LCD 8ms DVI
        Creative Audigy2 ZS {4.42}
        BenQ DW1640
        Logitech z-5300e 5.1 280W
        Legend: Driver versions: {}   User settings: []   Voltage: V   Wattage: W   Temperature: C (Celsius) min/max

  • Windows Server 2012 Hyper-V very slow

    - by Matt Taylor
    I have been running several Hyper-V VMs on Windows Server 2008 R2 for the past couple of years, enjoying perfectly adequate performance for my testing/development/R&D environments. I'm a software developer, so my hardware knowledge is basic; however, I built the rig using:
        •Gigabyte GA-X58A-UD3R Intel X58 (Socket 1366) DDR3 motherboard
        •Intel Core i7 960 3.20GHz (Bloomfield) (Socket LGA1366)
        •24GB triple-channel RAM
    The host OS is running on an OCZ SSD, and all the VMs are running on a 2TB Marvell SATA3 RAID 0 array consisting of 2 Western Digital Caviar Black 7,200rpm drives. I have tested the speed of the 2TB drive and appear to be getting less than 3Mbs, but it can adequately run a 4-VM farm including a DC, a (SQL) database server and IIS application servers. I recently upgraded the SSD on which the host runs to a 256GB OCZ Vertex 4 and took the opportunity to upgrade to Windows Server 2012 and installed the Hyper-V role. I tried importing one of my existing Windows Server 2008 R2 VMs (converted to .vhdx), and I have also tried creating a brand-new Windows Server 2008 R2 VM, but both run extremely slowly and I can see nothing obvious using the host and guest Task Manager/Resource Monitor tools. In both cases the VM has 8GB RAM (fixed), 4 CPUs, a fixed-size (non-expanding) HD, and uses an external virtual network running on a separate NIC to the host. I have upgraded the BIOS to the latest available version and checked the virtualization settings. I have run out of "obvious" (to a developer) things to check/configure, and my next option will be to re-install the host OS, but before I do I would very much appreciate any advice from any experts out there. Thanks

  • Connect to Nonencrypted Wireless Network Using Ubuntu Commands

    - by Tim
    I failed to connect to an open (i.e. non-encrypted) wireless network using Ubuntu command-line tools. Here is what I did:

        $ sudo /etc/init.d/NetworkManager stop
         * Stopping network connection manager NetworkManager          [ OK ]
        $ sudo /sbin/ifconfig wlan0 up
        $ sudo iwconfig wlan0 essid "Cavalier High-Speed 866-4-CAVTEL"
        $ sudo dhclient wlan0
        There is already a pid file /var/run/dhclient.pid with pid 10812
        killed old client process, removed PID file
        Internet Systems Consortium DHCP Client V3.1.1
        Copyright 2004-2008 Internet Systems Consortium.
        All rights reserved.
        For info, please visit http://www.isc.org/sw/dhcp/
        wmaster0: unknown hardware address type 801
        wmaster0: unknown hardware address type 801
        Listening on LPF/wlan0/00:0e:9b:cd:4e:18
        Sending on   LPF/wlan0/00:0e:9b:cd:4e:18
        Sending on   Socket/fallback
        DHCPREQUEST of 192.168.1.67 on wlan0 to 255.255.255.255 port 67
        DHCPREQUEST of 192.168.1.67 on wlan0 to 255.255.255.255 port 67
        DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 7
        DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 7
        DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 8
        DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 12
        DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 21
        DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 6
        No DHCPOFFERS received.
        Trying recorded lease 192.168.1.67
        PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
        --- 192.168.1.1 ping statistics ---
        1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms
        Trying recorded lease 192.168.1.45
        PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
        --- 192.168.1.1 ping statistics ---
        1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms
        No working leases in persistent database - sleeping.
        $ sudo /sbin/iwconfig wlan0
        wlan0     IEEE 802.11bg  Mode:Managed  Frequency:2.422 GHz  Access Point: Not-Associated
                  Tx-Power=27 dBm
                  Retry min limit:7   RTS thr:off   Fragment thr=2352 B
                  Encryption key:off
                  Power Management:off
                  Link Quality:0  Signal level:0  Noise level:0
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:0  Invalid misc:0   Missed beacon:0

    I was wondering what the problem is and how I can do this right? Thanks and regards!
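
    The iwconfig output above shows the card never associated (Access Point: Not-Associated), so one diagnostic step, sketched with the same interface name, might be to confirm the network is actually visible and note its channel and signal before asking for a lease:

        # is the AP visible at all, and on which channel?
        sudo iwlist wlan0 scan | grep -E 'ESSID|Channel|Quality|Encryption'
        # after setting the ESSID, verify the association took hold before running dhclient
        iwconfig wlan0 | grep -E 'ESSID|Access Point'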

  • GlusterFS as elastic file storage?

    - by Christopher Vanderlinden
    Is there any way to run GlusterFS in replicated mode, but with the ability to dynamically scale the volume up and down? Say you have 3 servers, all running glusterd; your Gluster volume would have to be set up with replica 3:

        gluster volume create test-volume replica 3 192.168.0.150:/test-volume 192.168.0.151:/test-volume 192.168.0.152:/test-volume

    You would then mount it as, say, /mnt/gfs_test. What happens when I want to add 2 more servers to the storage pool and then also use them in this volume? Is there any easy way to expand the volume AND increase the replica count to 5? My end goal is to run this on EC2 instances: say 3 Apache front ends, with the webroot on the Gluster volume mount. My concern is that if I ever need to spin up a server, I would want it to be not only an additional Apache front end but also another server in the Gluster file system, adding to fault tolerance as well as possibly giving a slight boost in read speed. Maybe there are better options that would fit the bill here? Thanks.
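
    For what it's worth, a hedged sketch: recent GlusterFS releases let you raise the replica count while adding bricks, so growing from replica 3 to replica 5 might look like the following (the new peer addresses are hypothetical, and older releases may not accept a replica argument on add-brick):

        # bring the new peers into the trusted pool
        gluster peer probe 192.168.0.153
        gluster peer probe 192.168.0.154
        # add their bricks and raise the replica count in one step
        gluster volume add-brick test-volume replica 5 \
            192.168.0.153:/test-volume 192.168.0.154:/test-volume
        gluster volume info test-volume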

  • Alienware Aurora R2 Slow Boot Up

    - by James R
    I have an Aurora R2, bought a few years ago, and recently I decided a RAM upgrade and a new Samsung SSD would be good for speed. So now it's super fast, with the exception of booting up: it still takes a good 2 minutes to get past the first splash screen of the BIOS. It's only the BIOS; after that it's like lightning. I've Googled the issue, and the usual cause is the BIOS trying to boot from anything it can, with the fix being to change the boot order. However, I've changed it now and it's still slow. When I disconnect the USB devices it speeds up, but I can't do that every time I want to boot the PC! The only other option I can think of is upgrading the BIOS. However, A04 seems to be the recommended one for Aurora R2s, so I don't know if upgrading the BIOS could cause issues, especially if it doesn't solve the problem. Also, when I disable my original hard drive in the boot menu, the PC won't boot: despite the Samsung drive being fine to boot from, and the original not being needed (as far as I know) for starting Windows, it gives me an error message and makes me restart the PC with a new boot configuration (with the original drive as second choice). Any ideas on how to make the BIOS boot faster? And why do I need to have my original drive in the boot menu?
