Search Results

Search found 8275 results on 331 pages for 'bad appz'.


  • Seagate GoFlex NAS + Horrible Speed = Bad Experience

    - by Jon H
    I am having issues with transfer speeds from my desktop PC to my NAS. The NAS and the desktop are both connected to a gigabit gateway with Cat 5e. I see transfer rates of up to 4.0 MB/s, with about 2.5 MB/s being typical. There are three partitions on the NAS: Public, Private, and Backup. When I transfer from Private to Public I see the speeds above; a move within the same partition is almost instant. I am wondering whether the speeds I am seeing are due to my computer or to the NAS. I was looking into building my own media server because of these horrible speeds. Is there anything I can do in the meantime to speed this up? My hardware:
    Motherboard = M3970AM-HP (Angelica)
    Processor = AMD FX 6100
    RAM = 10GB PC3-10600
    Hard Drive (1) = 1.5TB SATA 3.0Gb/s 5400RPM
    Hard Drive (2) = 120GB SATA SSD
    NAS = Seagate 3TB GoFlex Home
    Connection (1) = 1000BASE-T
    Connection (2) = Wireless N
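
    One way to tell whether the bottleneck is the network path or the NAS itself is to measure raw TCP throughput between the desktop and another machine on the same gateway. The following is a minimal sketch (not from the original post; the port, transfer size, and hostnames are assumptions), run as a server on one machine and a client on the other:

      # throughput_probe.py -- minimal raw TCP throughput test (hypothetical example).
      # Run "python throughput_probe.py server" on one machine and
      # "python throughput_probe.py client <server-ip>" on the other.
      import socket, sys, time

      PORT = 5001                 # assumed free port
      CHUNK = 64 * 1024           # 64 KiB send buffer
      TOTAL = 256 * 1024 * 1024   # send 256 MiB in total

      def server():
          srv = socket.socket()
          srv.bind(("", PORT))
          srv.listen(1)
          conn, _ = srv.accept()
          received = 0
          start = time.time()
          while True:
              data = conn.recv(CHUNK)
              if not data:
                  break
              received += len(data)
          secs = time.time() - start
          print("received %.1f MiB at %.1f MiB/s" % (received / 2**20, received / 2**20 / secs))

      def client(host):
          sock = socket.create_connection((host, PORT))
          payload = b"\0" * CHUNK
          sent = 0
          start = time.time()
          while sent < TOTAL:
              sock.sendall(payload)
              sent += len(payload)
          sock.close()
          print("sent %.1f MiB in %.1f s" % (sent / 2**20, time.time() - start))

      if __name__ == "__main__":
          if sys.argv[1] == "server":
              server()
          else:
              client(sys.argv[2])

    If this reports something close to gigabit speed (around 100 MiB/s), the network path is fine and the NAS itself is the limit; if it also reports only a few MB/s, look at the cabling, the gateway, or the desktop's NIC instead.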

    Read the article

  • Interaction .asp with .swf flash - good on IIS6, bad on IIS7

    - by gial
    Hi, all! I can't make my app work after migrating from IIS6 to IIS7. The problem is described below. My app uses 'flash.swf'. This .swf calls an .asp page that contains only: Response.write "<myNode>test_is_ok</myNode>" The .swf should receive that and show "test_is_ok". It really is OK on IIS6 (Windows 2003), but on IIS7 (Windows 2008 R2) the .swf shows me "undefined". Situation: a separate request from IE to the .asp returns "test_is_ok" on both machines. If the .swf on one machine calls the .asp on another, nothing works. If I delete the .asp, the .swf still shows "undefined", so I think it never really reaches the .asp on IIS7. Please suggest anything that might help.

    Read the article

  • Bad Mumble control channel performance in KVM guest

    - by aef
    I'm running a Mumble server (Murmur) on a Debian Wheezy Beta 4 KVM guest, which runs on a Debian Wheezy Beta 4 KVM hypervisor. The guest machines are attached to a bridge device on the hypervisor system through virtio network interfaces. The hypervisor is attached to a 100Mbit/s uplink and does IP routing between the guest machines and the rest of the Internet. In this setup we're experiencing a clearly noticeable lag between double-clicking a channel in the client and the channel join actually happening. This happens with many different clients, between versions 1.2.3 and 1.2.4, on Linux and Windows systems. Voice quality and latency seem to be completely unaffected. Most of the time the client's information dialog reports a 16 ms latency for both the voice and the control channel, but the deviation for the control channel is usually much higher than that of the voice channel. In some situations the control channel is displayed with a 100 ms ping and a deviation of about 1000. It seems TCP performance is the problem here. We had no problems on an earlier setup which was in principle quite like the new one: a Debian Lenny based Xen hypervisor with a soft-virtualised guest machine and an earlier version of the Mumble 1.2.3 series. The current murmurd --version says: 1.2.3-349-g315b5f5-2.1
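
    One way to quantify the control-channel lag independently of Mumble is to repeatedly time TCP connection setup to the Murmur port and look at the spread, since the control channel runs over TCP while voice uses UDP. A rough sketch (the hostname is a placeholder and 64738 is assumed to be the default Murmur port):

      # tcp_latency_probe.py -- time repeated TCP connects to the Murmur port
      # (rough sketch; host and port are assumptions).
      import socket, statistics, time

      HOST = "mumble.example.org"   # hypothetical server name
      PORT = 64738                  # assumed default Murmur port
      SAMPLES = 50

      times = []
      for _ in range(SAMPLES):
          start = time.time()
          sock = socket.create_connection((HOST, PORT), timeout=5)
          sock.close()
          times.append((time.time() - start) * 1000.0)
          time.sleep(0.2)

      print("min %.1f ms  median %.1f ms  max %.1f ms  stdev %.1f ms" % (
          min(times), statistics.median(times), max(times), statistics.stdev(times)))

    A large maximum or standard deviation here, alongside a flat voice (UDP) latency, would point at TCP handling in the virtio/bridge/routing path rather than at Murmur itself.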

    Read the article

  • DVI to HDMI - displaying badly

    - by TutorialPoint
    I currently have a computer with an ASUS EAH5770 (ATI Radeon HD 5770) 1GB GDDR5 video card, 4GB RAM, and a 2.6 GHz i5 processor. I just switched from a DVI cable (the blue one) to an HDMI cable (Bigben flat HDMI 1.3c cable). I use a Samsung SyncMaster 2032MW, which has an HDMI input. The weird thing is that the picture runs off the edges of the screen (so it's too wide) at 1920x1080, and Windows icons and text are not displaying well, though 1080p YouTube videos look brilliant, as do pictures. So I think it has something to do with Windows. I already have the ATI Catalyst Control Center with the drivers I received with my video card. I don't currently know how to fix these problems. Do I have to reinstall Windows, or is there (hopefully) an easier fix?

    Read the article

  • D-LINK DIR-615 router keeps giving my wireless devices bad ip addresses

    - by mlsteeves
    I have a D-Link DIR-615 router. Wired devices have no problem getting an IP, but wireless devices end up with a 169.254.x.x address (and subsequently cannot access the internet through the router). I have removed all wired connections from the router, so there is no other DHCP server running. I've also gone back to the store and replaced the router with another, thinking that maybe it was defective. According to the router, it gave 192.168.0.101 to the wireless device. According to the wireless device, it got 169.254.67.71. I've tried both a laptop and an iPod Touch, and both exhibit the same behaviour. Has anyone seen this type of behaviour, or have any ideas of things to try? NEW INFORMATION: I looked at the logs on the router, and when a wireless device tries to connect, this is what is logged:
      Sep 10 18:13:39 UDHCPD sending OFFER of 192.168.0.111
      Sep 10 18:13:31 UDHCPD sending OFFER of 192.168.0.111
      Sep 10 18:13:26 UDHCPD sending OFFER of 192.168.0.111
      Sep 10 18:13:23 UDHCPD sending OFFER of 192.168.0.111
      Sep 10 18:13:21 UDHCPD sending OFFER of 192.168.0.111
    I connected a computer directly to the router, and here is what that looks like:
      Sep 10 18:14:18 UDHCPD Inform: add_lease 192.168.0.110
      Sep 10 18:14:14 UDHCPD sending ACK to 192.168.0.110
      Sep 10 18:14:14 UDHCPD sending OFFER of 192.168.0.110
    Not sure if that helps or not.

    Read the article

  • Nginx - Serve blank page on "Bad Gateway" error

    - by TheLittleCheeseburger
    Hello all. I want to use Nginx as a simple reverse proxy, but if the server behind Nginx is down I just want to display a blank page. For some reason this configuration isn't displaying a blank page on a 502 error and I can't figure out why. Thanks for your help!
      user www-data;
      worker_processes 1;
      error_log /var/log/nginx/error.log;
      pid /var/run/nginx.pid;
      events {
          worker_connections 1024;
          use epoll;
          # multi_accept on;
      }
      http {
          keepalive_timeout 65;
          proxy_read_timeout 200;
          upstream tornado {
              server 127.0.0.1:8001;
          }
          server {
              listen 80;
              server_name www.something.com;
              location / {
                  error_page 502 = @blank;
                  proxy_pass http://tornado;
              }
              location @blank {
                  index index.html;
                  root /web/blank;
              }
          }
      }

    Read the article

  • Clone a 2TB WD Green internal drive with bad sectors to a 3TB partitioned external

    - by ron
    I have a 2TB WD Black drive and would like to simply do a straight clone from a failing 3TB drive to it. Both are SATA. Will I be able to just install the new drive alongside the faulty one and then do the clone/rescue attempt with ddrescue, or is there a better method? The faulty internal drive has bad sectors, although I am usually still able to boot into Windows 7 Ultimate with it, navigate, and access all my programs. I have been attempting some trials with an Ubuntu live CD using ddrescue but am not sure I'm doing it right. I have a 3TB WD My Book Essential external, which is GPT, and have created a separate 2TB partition on it which I am trying to clone to. I assume I need to format the new drive to NTFS first? Can I do that via the Ubuntu Rescue Remix 12.04 live DVD I've been booting with?

    Read the article

  • Linux servers seeing bad download performance behind Sonicwall firewall

    - by Joshua Penix
    I'm working with a pair of co-located CentOS Linux servers sitting behind a Sonicwall PRO 2040 Enhanced firewall running in transparent bridge mode. These servers are having a strange problem downloading files more than a few megabytes in size. For example, if I try to wget or FTP a copy of the Linux kernel from kernel.org, the first ~1-2MB will download at 600+K/s, and then throughput will drop off a cliff to 1K/s. I've reviewed all the firewall configuration settings for anything suspicious, but found nothing. More interestingly, I performed the same download with a Windows server sitting behind the same firewall, and it sailed right through at 600+K/s the whole way. Has anyone seen this? Where should I start looking to troubleshoot this problem?
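
    To pin down exactly where the rate collapses, and whether it correlates with a byte count or with elapsed time, a per-second throughput log of a single download can help. A rough sketch (Python 3; the URL is only a placeholder for any large file reachable from the servers):

      # download_rate_log.py -- log per-second download throughput (rough sketch).
      # The URL is a placeholder; any large file reachable from the server will do.
      import time
      import urllib.request

      URL = "https://cdn.kernel.org/pub/linux/kernel/v3.x/linux-3.10.tar.gz"  # placeholder

      resp = urllib.request.urlopen(URL)
      total = 0
      window = 0
      last = time.time()
      while True:
          chunk = resp.read(64 * 1024)
          if not chunk:
              break
          total += len(chunk)
          window += len(chunk)
          now = time.time()
          if now - last >= 1.0:
              print("%8.1f KB/s  (total %.1f MB)" % (window / 1024.0 / (now - last), total / 1e6))
              window = 0
              last = now

    If the drop always happens at roughly the same byte offset, that suggests something stateful in the firewall path (for example content inspection or bandwidth management kicking in); if it depends on elapsed time instead, look at connection-timeout or rate-limiting settings.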

    Read the article

  • How to fix a bad case rattle

    - by C. Ross
    I have a full sized ATX case with several fans, including one on the door/removable side. This fan makes the "door" rattle or vibrate loudly when the fan runs at full speed, such as at startup. I can stop the rattle temporarily by placing my hand on the "door", or pushing an object next to it. Do you have any suggestions for a permanent solution? Note: The "door" in question is a slide out panel with two twist screws at the back to hold it in.

    Read the article

  • Why is cleartype so bad at high res?

    - by ULTRA_POROV
    ClearType is great when displaying small text (10-16px). However, when you display something above 20px it starts looking terrible. Just compare it to Photoshop: Photoshop's rendering at small sizes is not very impressive, too blurry, but at 20px and above Photoshop wins every time. ClearType looks jagged around the edges, almost as if there were no ClearType at all. Can this be fixed, or is that just the way ClearType is?

    Read the article

  • Size of bad data on a hard drive in Windows 7

    - by acidzombie24
    While using my external hard drive (NTFS) I got a CRC32 error. Now I would like to see how much data is corrupted. If it's a few KB I won't mind, but if it's a few MB I should consider getting a new hard drive. How can I check this using Windows 7?
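
    There is no built-in Windows 7 report of "how many bytes are corrupted", but one rough way to estimate it is to try to read every file on the drive and total up the sizes of the files that fail. A sketch of that idea (the drive letter is an assumption):

      # count_unreadable.py -- rough sketch: total up the size of files that can no
      # longer be read back from a drive (the drive letter is an assumption).
      import os

      ROOT = "E:\\"            # hypothetical drive letter of the external disk
      bad_bytes = 0
      bad_files = []

      for dirpath, dirnames, filenames in os.walk(ROOT):
          for name in filenames:
              path = os.path.join(dirpath, name)
              try:
                  with open(path, "rb") as f:
                      while f.read(1024 * 1024):   # force every block to be read
                          pass
              except OSError:
                  try:
                      bad_bytes += os.path.getsize(path)
                  except OSError:
                      pass
                  bad_files.append(path)

      print("%d unreadable files, roughly %.1f MB affected" % (len(bad_files), bad_bytes / 1e6))
      for path in bad_files:
          print("  " + path)

    This only catches corruption inside files; chkdsk is still needed to report damage to the file system structures themselves.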

    Read the article

  • S.M.A.R.T. broken sectors

    - by Jeffrey Vandenborne
    I recently received back the hard drive (a LaCie) that I had sent away for warranty. My disk had failed, and I had used the Palimpsest Disk Utility to check whether anything was wrong in the S.M.A.R.T. status; it said there were a few broken sectors. So the next day I went to the store and told them the story, and 4 weeks later I actually got my drive back. The first thing I did was plug it in and start Disk Utility, and weirdly it showed me pretty much the exact same thing: even the values of most attributes were the same as they were before the drive broke. The serial number is different, though, but it does show a very peculiar value. Now I'm wondering: I'm almost sure it's the exact same drive, and it still says I've got broken sectors. Does it just say that because the values are cached somewhere on the drive, while LaCie DID actually fix it? Or should I run the extended self-test (which seems to take hours) first? I've also tried the smartctl command-line tool: it says the drive has SMART support, but it doesn't show anything; it says SMART is enabled, but then it says it's disabled (screenshots of the smartctl output and of Disk Utility were attached to the original post). Thanks in advance

    Read the article

  • Bad HD and robocopy

    - by acidzombie24
    After my HD gave me CRC errors I wanted to copy one drive to another, so I picked up a new 1TB HD. I am using the command: robocopy G: J: /MIR /COPYALL /ZB First it tried copying a file a few times (I didn't count; it's not in my window anymore) and got an access denied error, error 5. Then it tried again and locked up. I tried copying that specific file (14MB) myself and Windows says "can't read from source file or disk". I started robocopy again; hopefully it will skip the file after a failed attempt or two, but what options can I use to tell it to continue to the next file if one doesn't work? It looked like that is what it was doing, but for this last file it retried more than four times and finally locked up. I'm open to other copy solutions, though I prefer built-in ones. I am using Windows 7. Also, how might I do this without the /MIR option? Is /S /E good enough? (A flag reference was linked in the original post.) Edit: I see I can control retries with /R:, but I am still open to alternative solutions.

    Read the article

  • How long does badblocks take on a 1TB drive?

    - by Steven Don
    I'm running badblocks (or rather "e2fsck -c") on a 1TB drive, and if the progress indicator is any indication (no pun intended), it's going to take almost forever to complete. Right now it says 0.01% done, 30:20 elapsed, which would mean the whole thing would take 17 weeks or so to complete, which seems rather excessive in my book. Is that a normal amount of time for such a check to take, or is it simply that my suspicions are correct and the drive is failing, causing the check to take only slightly shorter than eternity? I found this question here, but that pertains to the number of passes done.
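
    As a sanity check, the expected duration can be estimated from the drive size and a typical sequential read rate, and compared with what the progress indicator implies. A small sketch of that arithmetic (the 100 MB/s read rate is an assumption for a healthy drive):

      # badblocks_eta.py -- rough arithmetic for how long a read-only pass should take,
      # and what the progress indicator implies (the throughput figure is an assumption).
      DRIVE_BYTES = 1e12            # 1 TB
      READ_RATE = 100e6             # ~100 MB/s sustained sequential read (assumed)

      expected_hours = DRIVE_BYTES / READ_RATE / 3600
      print("expected single read-only pass: %.1f hours" % expected_hours)

      # Extrapolating from the reported progress: 0.01% done after 30m20s elapsed.
      elapsed_s = 30 * 60 + 20
      done_fraction = 0.0001
      eta_days = elapsed_s / done_fraction / 86400
      print("extrapolated total time at current rate: %.0f days" % eta_days)

    The gap between roughly three hours and several months is consistent with the drive spending most of its time retrying unreadable sectors, which supports the suspicion that it is failing rather than that such checks normally take this long.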

    Read the article

  • Lustre - is this bad form?

    - by ethrbunny
    I'm going to be consolidating several 'server rooms' into a single installation soon. Part of this effort will be finding a home for 5TB (and growing) of files and logs. To this end I'm looking at Lustre and appreciating its ability to scale. The big vendors want to sell me a $20K SAN to manage this, but I'm wondering about buying several iSCSI units (like this: http://www.asacomputers.com/3U-iSCSI-Solution.html) and using VMs for the OSS machines. This would let me fail over to cover problems and would not require a dedicated system for each OSS. Given articles like this (http://h30565.www3.hp.com/t5/Feature-Articles/RAID-Is-Dead-Long-Live-RAID/ba-p/1422) that talk about how RAID is not keeping up with drive density, I'm leaning towards more disks with lower capacity each; again, something akin to the iSCSI array above. Tell me why this is a terrible idea. Do I really need to invest in a PE710 for each OSS/OST?

    Read the article

  • Bad HD video deinterlacing processing

    - by Guy Fawkes
    I have Ubuntu 12.04 32-bit with Unity. My system configuration is: CPU: Core 2 Quad Q6600 (2.4 GHz); RAM: 8192 MB DDR2 Kingston; Video: Palit GeForce GTX 260 216 SP; screen resolution 1680x1050. I also have Windows 7 Ultimate installed, and there I can watch the same files in Media Player Classic without any horizontal lines. I've installed the vdpau driver, NVIDIA drivers 304.51, and MPlayer2 (within SMPlayer). I've disabled the "Sync to VBlank" option in CCSM (because otherwise the MPlayer process uses about 50-60 percent of my CPU), tried switching between the different deinterlace options in SMPlayer, used the "-vc ffh264vdpau,ffmpeg12vdpau" parameters (without quotes) for MPlayer, and switched to "Ubuntu 2D", but so far with no results. Any suggestions? How should I set up MPlayer?

    Read the article

  • Is it bad to redirect http to https?

    - by jasondavis
    I just installed an SSL certificate on my server. I use a web hosting panel called ZPanel, which is an open source project. It set up a redirect for all traffic on my domain on port 80 to send it to port 443. In other words, all my http://example.com traffic is now redirected to the appropriate https://example.com version of the page. The redirect is done in my Apache virtual hosts file with something like this:
      RewriteEngine on
      RewriteCond %{SERVER_PORT} !^443$
      RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [NC,R,L]
    My question is, are there any drawbacks to using SSL? Since this is not a 301 redirect, will I lose link juice/ranking in search engines by switching to https? I appreciate the help. I have always wanted to set up SSL on a server, just for the practice of doing it, and I finally decided to do it tonight. It seems to be working well so far, but I am not sure if it's a good idea to use this on every page. My site is not eCommerce and doesn't handle sensitive data; it's mainly for looks and the thrill of installing it for learning. UPDATED ISSUE: Strangely, Bing creates this screenshot from my site now that it is using HTTPS everywhere (screenshot attached in the original post).

    Read the article

  • Bad IIS 7.5 performance on webserver

    - by Robert P.
    I have a web page (ASP.NET 4.0 / MVC 4). On my development machine (i5-2500 @ 3.3 GHz, 8GB, Win7, VS2010 SP1, Fujitsu Esprimo P700) the page achieves 160 requests/sec on the devenv web server and 250 requests/sec on my local IIS 7.5 (uncompiled web). On a 16-core, 32GB RAM production server (Fujitsu RX-300, W2K8 R2, IIS 7.5) the page only manages 20 requests/sec (compiled web). Why? I think it's the IIS configuration, but I can't figure out what the problem is. The page runs with 1 worker process on both machines. A web garden is not an option (it helps, but the app isn't compatible with it).

    Read the article

  • Nginx Tornado Combination Causing 502 Bad Gateway Errors

    - by PlaidFan
    We are facing a problem with inconsistent 502 errors, and tracking down the reasons has been a very frustrating exercise. We can reproduce the problem by sending several simultaneous requests quickly. The problem is that "several" is only in the range of 10 to 20 within 5 seconds (not a typo), so clearly this type of load should be handled easily. We really like the Nginx + Tornado approach but are considering going to a more traditional (e.g. threading) approach because this problem has been very difficult to solve. I was wondering if you a) know how to fix this issue and b) can suggest how to track down the culprit(s). The log files simply report a connection refused. We have the same problem as this post: How do I debug a HTTP 502 error? But there is no answer provided on how to solve the problem, so I'm hoping you can help, because this may be a common issue with this type of setup. Thanks in advance, Paul
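
    A small reproduction script makes it easier to see at what burst size the 502s start and how many requests fail. The following is a rough sketch (Python 3; the URL and burst size are placeholders, not taken from the original post):

      # burst_requests.py -- fire a small burst of simultaneous requests and tally the
      # responses (rough reproduction sketch; the URL and burst size are assumptions).
      import concurrent.futures
      import urllib.error
      import urllib.request
      from collections import Counter

      URL = "http://example.com/"   # placeholder for the page behind nginx
      BURST = 20

      def fetch(_):
          try:
              with urllib.request.urlopen(URL, timeout=10) as resp:
                  return resp.status
          except urllib.error.HTTPError as exc:
              return exc.code           # e.g. 502 returned by nginx
          except Exception as exc:
              return type(exc).__name__  # e.g. connection refused / timeout

      with concurrent.futures.ThreadPoolExecutor(max_workers=BURST) as pool:
          results = Counter(pool.map(fetch, range(BURST)))

      print(results)   # e.g. Counter({200: 17, 502: 3})

    If the failures only appear above a particular burst size, compare that threshold against the number of Tornado processes behind the upstream and any connection limits configured on the nginx side.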

    Read the article

  • How to prevent bad queries from breaking replication?

    - by nulll
    In my personal experience, MySQL replication is fragile. I know there are many things not to do because they could break replication, but we are human and mistakes can always happen. So I was thinking: is there a way to protect MySQL replication? Something that prevents queries that are dangerous for replication from being run? In other words, I'm looking for something that keeps replication safe even if I accidentally run evil queries.

    Read the article

  • How to force a remap of sectors reported in S.M.A.R.T C5 (Current Pending Sector Count)?

    - by edgh
    The S.M.A.R.T. C5 value of my Samsung HM640JJ hard drive (in an HP Pavilion dv6 laptop) is at "yellow status = caution". C5 was 10 yesterday, and it's 21 today. C4 (Reallocation Event Count) = 0 and 05 (Reallocated Sectors Count) = 0. How can I force the firmware to reallocate them? So far I have: removed the partitions, recreated them, and formatted the entire drive; run chkdsk /r /f; and run the BIOS disk check utility and other diagnostic/repair tools.

    Read the article

  • File corruption (bad checksums) in large files copied to VMware guest

    - by AllanA
    In setting up a development lab, I've got a desktop system running ESXi 4.1.0 (free license) on SATA RAID 0 (already purchased and configured when I started this job; I'm open to hardware input as it pertains to my problem). Its guests so far include two Win2008 Server R2 64-bit VMs and one Ubuntu 10.04 64-bit VM. I'm installing onto the Windows servers. We've been copying over some fairly large files (over a gigabyte) for an installation, hoping to install more quickly from a (virtual) hard drive than from the network or from BD-ROM. The problem is that they keep coming up with different checksums from the originals. The file sizes are the same, but md5sum reports different numbers (and so does the installer, as it refuses to continue when the checksums don't match). I've tried copying directly from the BD-ROM (attaching the drive to the host system as a physical drive). I've tried copying the large files onto a co-worker's Windows machine from his Blu-ray drive; when I do that, the checksums match. But when I copy from his machine to the VM guest over a network share, the checksums no longer match. Thinking this meant a corrupt destination drive, I deleted it in vSphere and added another freshly created drive. The problem persists. I'm not sure what to try next.
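
    To narrow down whether the corruption is scattered or clustered, the two copies can be hashed chunk by chunk and the differing chunks reported. A rough sketch (the file paths and chunk size are placeholders):

      # compare_chunks.py -- hash two copies of a file in fixed-size chunks and report
      # which chunks differ (rough sketch; paths are placeholders).
      import hashlib

      ORIGINAL = r"D:\install\disc1.iso"       # hypothetical known-good source path
      COPY = r"E:\staging\disc1.iso"           # hypothetical copy inside the guest
      CHUNK = 4 * 1024 * 1024                  # 4 MiB chunks

      def chunk_hashes(path):
          with open(path, "rb") as f:
              while True:
                  data = f.read(CHUNK)
                  if not data:
                      break
                  yield hashlib.md5(data).hexdigest()

      mismatches = [i for i, (a, b) in enumerate(zip(chunk_hashes(ORIGINAL), chunk_hashes(COPY)))
                    if a != b]
      print("%d differing 4 MiB chunks: %s" % (len(mismatches), mismatches[:20]))

    A handful of widely scattered differing chunks tends to point at the storage or transport path (the RAID 0 set, the virtual disk layer, or the network share), while a single contiguous run looks more like a truncated or interrupted copy.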

    Read the article

  • Securing internal data accessed by a website on the big, bad internet

    - by aehiilrs
    A close relative of this question on Stack Overflow: When you have a web site in your DMZ that needs to access production data stored on an internal DB, what strategies do you recommend using to lower the risks that come from accessing live data? Is it even considered acceptable to have a connection initiated from the DMZ come inside of your network? An extra detail about the nature of the site that kind of throws a monkey wrench into the machinery is that people using the web site will be competing for "spots" on a first-come, first-serve basis with others using the internal software. Because of this, as close to zero lag time between the two applications as possible is ideal.

    Read the article

  • gzip compression good or bad?

    - by WarDoGG
    I have a server that currently does a lot of processing in my application, and the target users are those with a very good internet connection. The output sent from the server is always text/html; we do not use any media (audio/video), only images (static site images like the logo, etc.). We are experiencing severe performance issues, and I wonder whether turning off gzip/mod_deflate on the server, so that it avoids compressing the output, would help. Will this cause an improvement in performance?
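
    Whether mod_deflate is worth turning off can be estimated by measuring how much CPU time gzip actually spends on a typical text/html response compared with the bytes it saves. A rough sketch of that measurement (the sample payload and compression level are assumptions):

      # gzip_cost.py -- rough estimate of the CPU cost and byte savings of gzipping a
      # typical HTML response (the sample payload and level are assumptions).
      import gzip
      import time

      html = ("<html><body>" + "<p>Some representative page content.</p>" * 2000
              + "</body></html>").encode("utf-8")

      ITERATIONS = 200
      start = time.process_time()
      for _ in range(ITERATIONS):
          compressed = gzip.compress(html, compresslevel=6)
      cpu_ms = (time.process_time() - start) * 1000.0 / ITERATIONS

      print("payload %d bytes -> %d bytes (%.0f%% smaller), ~%.2f ms CPU per response"
            % (len(html), len(compressed), 100.0 * (1 - len(compressed) / len(html)), cpu_ms))

    If this comes out at a fraction of a millisecond of CPU per response, compression is very unlikely to be the cause of severe performance issues, and profiling the application itself is a better next step.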

    Read the article
