Search Results

Search found 9286 results on 372 pages for 'transfer speed'.


  • Determining the health of a Cisco switch port?

    - by ewwhite
    I've been chasing a packet-loss and network stability issue for a handful of end-users on an internal network for the past few days... These issues surfaced recently; however, the location was struck by lightning six weeks ago. I was seeing 5-10% packet loss between a stack of four Cisco 2960s and several PCs and phones on the other side of a 77-meter run. The PCs were run inline with the phones over a trunked link. We were seeing dropped calls and interruptions in client-server applications and Microsoft Exchange connectivity. I tried the usual troubleshooting steps remotely, having a local technician do the following during breaks in user and production activity:

      - Change cables between the wall jack and device.
      - Change patch cables between the patch panel and switch port(s).
      - Try different switch ports within the 2960 stack.
      - Change end-user devices with known-good equipment (new phones, different PCs).
      - Clear switch port interface counters and monitor incrementing errors closely. (Pastebin output of sh int)
      - Pore over the device logs and Observium RRD graphs. No link up/down issues from the switch side.
      - Change power strips on the end-user side.
      - Test cable runs from the Cisco 2960 using test cable-diagnostics tdr int Gi4/0/9. (clean)
      - Test cable runs with a Tripp-Lite cable tester. (clean)
      - Run diagnostics on the switch stack members. (clean)

    In the end, it took three changes of switch ports to find a stable solution. The only logical conclusion is that a few Cisco 2960 switch ports are bad or flaky... Not dead, but not consistent in behavior either. I'm not used to seeing individual ports die in this manner. What else can I test or check to determine if these devices are bad? Is it common for single ports to have problems, rather than a contiguous bank of ports? BTW - show cable-diagnostics tdr int Gi4/0/14 is very cool...

      Interface Speed Local pair Pair length        Remote pair Pair status
      --------- ----- ---------- ------------------ ----------- -----------
      Gi4/0/14  1000M Pair A     79 +/- 0 meters    Pair B      Normal
                      Pair B     75 +/- 0 meters    Pair A      Normal
                      Pair C     77 +/- 0 meters    Pair D      Normal
                      Pair D     79 +/- 0 meters    Pair C      Normal
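
    For anyone chasing similar symptoms, a minimal sketch of the IOS commands used here to assess a single 2960 port's health (interface names are taken from the question; adjust for your own stack):

      ! error counters broken out per interface (CRC, alignment, runts)
      show interfaces Gi4/0/9 counters errors
      ! overall input/output errors and drops on the port
      show interfaces Gi4/0/9
      ! reset the counters, generate traffic, then re-check for increments
      clear counters Gi4/0/9
      ! re-run the TDR test and read back the per-pair results
      test cable-diagnostics tdr interface Gi4/0/9
      show cable-diagnostics tdr interface Gi4/0/9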


  • How to correctly partition usb flash drive and which filesystem to choose considering wear leveling?

    - by random1
    Two problems.

    First one: how to partition the flash drive? I shouldn't need to do this, but I'm no longer sure my partition is properly aligned, since I was forced to delete and create a new partition table after gparted complained when I tried to format the drive from FAT to ext4. The naive answer would be to say "just use the defaults and everything is going to be alright". However, if you read the following links you'll know things are not that simple: https://lwn.net/Articles/428584/ and http://linux-howto-guide.blogspot.com/2009/10/increase-usb-flash-drive-write-speed.html

    Then there is also the issue of cylinders, heads and sectors. Currently I get this:

      $ sfdisk -l -uM /dev/sdd

      Disk /dev/sdd: 30147 cylinders, 64 heads, 32 sectors/track
      Warning: The partition table looks like it was made
        for C/H/S=*/255/63 (instead of 30147/64/32).
      For this listing I'll assume that geometry.
      Units = mebibytes of 1048576 bytes, blocks of 1024 bytes, counting from 0

         Device Boot Start   End     MiB    #blocks   Id  System
      /dev/sdd1          1   30146  30146  30869504   83  Linux

      $ fdisk -l /dev/sdd

      Disk /dev/sdd: 31.6 GB, 31611420672 bytes
      255 heads, 63 sectors/track, 3843 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00010c28

    So from my current understanding I should align partitions at 4 MiB (currently it's at 1 MiB). But I still don't know how to set the heads and sectors properly for my device.

    Second problem: file system. From the benchmarks I saw, ext4 provides the best performance. However, there is the issue of wear leveling. How can I know whether my Transcend JetFlash 700's microcontroller provides wear leveling? Or will I just be killing my drive faster? I've seen a lot of posts on the web saying "don't worry, the newer drives already take care of that", but I've never seen a single piece of evidence backing that up, and at some point people start mixing up SSD and USB flash drive technology. The safe option would be to go for ext2, but a series of tests that I performed showed horrible performance! These values are from a real scenario, not some synthetic test (42 files, 3,429,415,284 bytes copied to the flash drive):

      original FAT32:                 15.1 MiB/s
      ext4 after new partition table: 10.2 MiB/s
      ext2 after new partition table:  1.9 MiB/s

    Please read the links that I posted above before answering. I would also be interested in answers backed up with some references, because a lot is said and re-said but it lacks facts. Thank you for the help.
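
    As a hedged sketch of the alignment step only (the device name is from the question; the 4 MiB boundary is the asker's working assumption about the erase-block size, and this destroys the existing partition table):

      # recreate the table and start the single partition on a 4 MiB boundary
      sudo parted /dev/sdd mklabel msdos
      sudo parted /dev/sdd mkpart primary ext4 4MiB 100%
      sudo mkfs.ext4 /dev/sdd1

    When the start is given in MiB, parted places it on that byte boundary directly, so the legacy cylinder/head/sector geometry warnings from sfdisk can be ignored on flash media.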


  • Using an SSD with no AHCI [ICH7 base] - Windows 7 hangs frequently

    - by h4xnoodle
    I have a Shuttle Intel G31 + ICH7 (base -- not M/R etc.) system. I just bought an OCZ Vertex 3 120 GB [VTX3-25SAT3-120G], which includes the SandForce 2218 firmware. The ICH7 does not support AHCI. I understand that this can be a problem. What I don't understand is whether it's necessary for the proper performance of this drive. I know that without AHCI I may get a limited read/write speed -- this is fine. My concern is the constant freezing/hangs I'm getting with Windows 7 on any disk activity. The 'Highest Active Time' flip-flops from 0 to 100% every minute or so, regardless of large or small files.

    EDIT: The threads/processes with the highest response time belong to the kernel.

    I've been reading about other people with Shuttle SG31G2s, and they seem to be using SSDs no problem. Is this the controller's fault? The fact that I do not have AHCI enabled? It makes sense to me that if this SSD requires AHCI features it would cause Windows to hang, but I would like to fully determine my situation before returning things/reformatting. To get the BIOS to recognise the SSD at all, I had to change the SATA controller option from Auto to Force Gen II. I then installed Windows with no problem. There were no errors in the event log related to disk usage, but watching perfmon I could see the highest active time and the processes (usually pagefile.sys being written to, or chrome/firefox caching) correlated with the hanging.

    So now what I need answered is: should I be returning this SSD and getting one with a different controller, or returning the SSD altogether, as it will never work out and I will continue to get these hangs?

    Posts I've read:

      - Windows 7 New SSD SATA AHCI? -- suggests to use AHCI
      - http://forums.anandtech.com/showthread.php?t=2189868 -- Sandforce issues
      - Windows 7 freezes with SSD -- and attached posts
      - Why does my Windows 7 PC / SSD drive keep freezing? -- not the controller I have, but still a related issue
      - Windows 7 hangs after longer inactivity of user -- also tried messing with power settings, with no luck; it was already set to 'Never' for turning off HDDs


  • Coda 2 and SCP uploading files with the wrong permission

    - by Tom Black
    Currently I have a basic Ubuntu server running a website. The website is for a few students learning HTML/PHP, and each student has their own account with a symbolic link to the shared website folder. Since the students are working on the website together, each user needs to be able to modify all the files (index.html, for example). So I created a webdev group containing all of the students, with a default umask of 0002 set in their .bashrc (this allows newly created files to be 774). The shared folder is owned by the group webdev, with chmod g+s so that new files/folders also belong to the group webdev.

    The problem is that the students are using an IDE (Coda 2), and when they create a new file or folder using the IDE, the file has permissions of 644 on the server (not group-writable). However, when I make a new file by connecting with Cyberduck (an SFTP client), the file permissions are 664 (as they should be). So I don't understand why Coda would be any different. After some trial and error, I believe that Coda is first creating the file on the local disk and then uploading that file to the server. On a Mac, a newly created file is 644 by default. When the client uploads a file that's already 644, it stays 644 on the server side (the umask is useless in this situation). I've also tried creating ACL permissions for that folder, but a file uploaded from my Mac via SCP doesn't get the default ACL permissions.

    In Coda there is an option to change file permissions on a transfer. However, this option seems to apply a chmod to all files being uploaded or saved. When one of the students modifies a file created by someone else and tries to upload or save it, Coda tries to do that chmod as well, but fails because the user isn't the owner of the file.

    My current solution is using bindfs... I mount the shared web folder, and bindfs sets the permissions and group ownership of newly created files. However, bindfs seems to be a bit slow, and I'm sure there is a better solution. Even if the students ditched Coda 2 and used Mac vim with scp, newly created files on the server would behave the same (644), since that's the default on the Mac.

    Other options:

      1. I teach the students to use ssh/chmod with their IDE to change their own file permissions when uploading.
      2. I make all the students' Macs use a default umask of 0002, which would upload files with the right permissions.
      3. I write a cron script to fix the file permissions every 5 to 15 minutes... (I think this option is the worst if students are working together at the same time.)

    Is there any way to make all files uploaded via SCP get default file permissions of 664, even though the uploaded file has a lower permission? (After hours of searching, I don't think this is possible.) I guess a cron script is my best option for novice users. How do web developers work together on larger sites?

    Similar to this: http://serverfault.com/questions/283492/how-to-specify-file-permission-when-putting-a-file-using-openssh-sftp-command
    Also similar: http://serverfault.com/questions/395418/managing-linux-directory-permissions-sftp
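
    If Coda is actually speaking SFTP under the hood (as many "SCP" clients do), one server-side option worth sketching is OpenSSH's internal-sftp subsystem, which from OpenSSH 5.4 onwards can force a umask on uploads regardless of the mode bits the client sends. A minimal /etc/ssh/sshd_config fragment, with the group name taken from this question:

      # force group-writable uploads for the webdev group (OpenSSH >= 5.4)
      Subsystem sftp internal-sftp
      Match Group webdev
          ForceCommand internal-sftp -u 0002

    Restart sshd afterwards. Note this is an assumption-laden sketch: it does nothing for true scp transfers, which copy the client-side permissions verbatim.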


  • external hard drive is no longer recognized, gives buffer I/O errors

    - by BioGeek
    Hi all,

    The external hard drive which contains all my photos, and to which I backed up all my important documents, is no longer recognized. It is a three-month-old 500 GB Iomega Prestige Desktop Hard Drive. When I plug it in, it is recognised as a USB device, because it shows up when I type lsusb, but dmesg gives this error message:

      [19712.013250] usb 2-2: new high speed USB device using ehci_hcd and address 21
      [19712.145347] usb 2-2: configuration #1 chosen from 1 choice
      [19712.147214] scsi25 : SCSI emulation for USB Mass Storage devices
      [19712.147514] usb-storage: device found at 21
      [19712.147519] usb-storage: waiting for device to settle before scanning
      [19717.148978] usb-storage: device scan complete
      [19717.149527] scsi 25:0:0:0: Direct-Access ST350082 0AS PQ: 0 ANSI: 2 CCS
      [19717.151020] sd 25:0:0:0: Attached scsi generic sg2 type 0
      [19717.151685] sd 25:0:0:0: [sdb] 976773168 512-byte logical blocks: (500 GB/465 GiB)
      [19717.160402] sd 25:0:0:0: [sdb] Write Protect is off
      [19717.160412] sd 25:0:0:0: [sdb] Mode Sense: 34 00 00 00
      [19717.160418] sd 25:0:0:0: [sdb] Assuming drive cache: write through
      [19717.165685] sd 25:0:0:0: [sdb] Assuming drive cache: write through
      [19717.165691] sdb: sdb1
      [19719.171808] sd 25:0:0:0: [sdb] Assuming drive cache: write through
      [19719.171818] sd 25:0:0:0: [sdb] Attached SCSI disk
      [19737.430998] sd 25:0:0:0: [sdb] Unhandled sense code
      [19737.431007] sd 25:0:0:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
      [19737.431016] sd 25:0:0:0: [sdb] Sense Key : Medium Error [current]
      [19737.431027] sd 25:0:0:0: [sdb] Add. Sense: Unrecovered read error
      [19737.431038] end_request: I/O error, dev sdb, sector 6160463
      [19737.431050] Buffer I/O error on device sdb1, logical block 6160400
      [19737.431060] Buffer I/O error on device sdb1, logical block 6160401
      [19737.431067] Buffer I/O error on device sdb1, logical block 6160402
      [19737.431075] Buffer I/O error on device sdb1, logical block 6160403
      [19737.431082] Buffer I/O error on device sdb1, logical block 6160404
      [19737.431088] Buffer I/O error on device sdb1, logical block 6160405
      [19737.431096] Buffer I/O error on device sdb1, logical block 6160406
      [19737.431102] Buffer I/O error on device sdb1, logical block 6160407
      [19737.431114] Buffer I/O error on device sdb1, logical block 6160408
      [19737.431121] Buffer I/O error on device sdb1, logical block 6160409
      [19737.712183] sd 6:0:0:0: [sdb] Unhandled sense code
      [19737.712191] sd 6:0:0:0: [sdb] Result: hostbyte=DID_ERROR driverbyte=DRIVER_SENSE
      [19737.712200] sd 6:0:0:0: [sdb] Sense Key : Hardware Error [current]
      [19737.712210] sd 6:0:0:0: [sdb] Add. Sense: No additional sense information
      [19737.712222] end_request: I/O error, dev sdb, sector 0
      [19737.712232] Buffer I/O error on device sdb, logical block 0

    Neither does the external drive show up when I use fdisk:

      jeroen@phalacrocorax:~$ sudo fdisk -l
      [sudo] password for jeroen:

      Disk /dev/sda: 160.0 GB, 160041885696 bytes
      255 heads, 63 sectors/track, 19457 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Disk identifier: 0x000341ad

         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1   *           1       18714   150320173+  83  Linux
      /dev/sda2           18715       19457     5968147+   5  Extended
      /dev/sda5           18715       19457     5968116   82  Linux swap / Solaris

    I popped the disk out of its casing, put it on an internal SATA connector, and then tried the file recovery programs testdisk/photorec and SpinRite, but both failed because they couldn't recognize the external hard disk. Do I have any other options?
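
    Given the Medium Error / Unrecovered read error sense data above, a common next step is to image the failing drive read-only with GNU ddrescue and then point the recovery tools at the image rather than the dying disk. A hedged sketch (the device name is from the question, the destination path is a placeholder, and the target needs roughly 500 GB free):

      # first pass: copy everything readable, recording bad areas in a map file
      sudo ddrescue -n /dev/sdb /mnt/space/iomega.img /mnt/space/iomega.map
      # second pass: retry the bad areas a few times
      sudo ddrescue -r3 /dev/sdb /mnt/space/iomega.img /mnt/space/iomega.map
      # run recovery against the image, not the drive
      photorec /mnt/space/iomega.img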


  • What are some fast methods for navigating to frequently used folders in Windows 7?

    - by fostandy
    (This is a follow-up to my previous question.)

    In Windows XP I used to be able to quickly navigate to frequently used folders by making use of the 'Favorites' menu item and its hotkey behaviour. In certain conditions it could be set up so that getting to a particular folder was as easy as Alt-A X (and without a file explorer window open it was as fast as Win-E Alt-A X). I am struggling to get anywhere near this speed in Windows 7 and would like to solicit advice from others regarding fast folder navigation, to see if I am missing any methods. My current way to navigate quickly is basically:

      1. Move hand to mouse.
      2. Move cursor to the navigation pane.
      3. Scroll all the way to the top (because normally the panel is focused on whatever deep directory structure I am already in).
      4. Sift through my 50+ favorites to get the one I want, or click a link to a folder that contains further links in some sort of 'pseudo-tree' functionality.
      5. Select it.

    This is slower than my previous method by upwards of an order of magnitude. There are a couple of things I've contemplated:

      - Add expandable folders, not just direct links, to the favorites menu.
      - Add expandable folders, not just direct links, to the start menu.
      - Add links to my favorite folders in a submenu of the start menu so that they come up when I search for them. They do, but this is still rather cumbersome.
      - Started using 7stacks (http://www.alastria.com/index.php?p=software-7s; I cannot link it directly due to lack of reputation). This is about the closest I've gotten to some sort of compact, customizable, easy-to-access, tree-based navigation structure.

    How do you power users quickly navigate to your favorite folders? Are there keyboard shortcuts I am missing? Can someone recommend other apps, addons or extensions that can achieve this sort of functionality?

    The current solution (thanks to the answers below) I am going to use is a combination of AutoHotkey and 7stacks -- AutoHotkey to launch 7stacks, and 7stacks with the 'menu' stack type for fast, key-enabled navigation to folders organised in a tree structure. This solves about 90% of the issue; the only remaining niggles (and note that these are really minor, I am splitting hairs more than anything here):

      - Can't use this for existing folder navigation (i.e. I already have an explorer window open and want to go to another directory).
      - A bit more cumbersome to add/remove entries compared to XP favorites.
      - A little slower than XP favorites.

    Whatever. I'm happy. Thanks guys. I think the answer is a split between John T and Kelbizzle -- I've elected to give the answer to John T and +1 to Kelbizzle, as I had already mentioned 7stacks.


  • Troubleshooting an unstable internet connection

    - by Konrad Rudolph
    My MacBook Pro running OS X (10.9, but I had the same problem before) is connected to a Belkin router via WiFi and, using Virgin Media as the ISP, to the internet. The connection is extremely unstable -- on some days I get a ping timeout every few seconds. In addition, some domains seem to suffer general connectivity issues. For instance, I often find that while the youtube.com website loads, none of the videos (which are hosted on a separate domain) do. At other times videos load but always fail to buffer, even though the actual connection speed is OK and I've disabled DASH playback.

    Since I'm living in a rented room and the ISP contract isn't actually mine, I've got only limited possibilities for addressing the problem. In particular, I have no access to the router configuration, and my non-tech-savvy landlady, while sympathetic, is in no great hurry to hand the problem over to the ISP's customer support. What's more, I seem to be the only person in the house experiencing these problems -- but I can imagine that this is simply because I'm the only one using the internet continuously.

    I'm searching for specific tests that might be able to pinpoint -- and ideally solve -- the problem. So far all I've managed to do is establish that Virgin is routing my traffic in mysterious ways. Here's an excerpt from traceroute google.co.uk. It's worth mentioning that the host name doesn't seem to matter a lot; the trace route is always the same.

      traceroute: Warning: google.co.uk has multiple addresses; using 62.254.36.148
      traceroute to google.co.uk (62.254.36.148), 64 hops max, 52 byte packets
       1  (192.168.2.1)  1.112 ms  1.300 ms  2.359 ms
       2  10.100.32.1 (10.100.32.1)  11.926 ms  10.217 ms  24.987 ms
       3  cmbg-core-1a-ae3-610.network.virginmedia.net (80.1.202.93)  28.809 ms  *  66.653 ms
       4  popl-bb-1b-ae16-0.network.virginmedia.net (212.43.163.141)  13.759 ms  126.504 ms  20.472 ms
       5  nrth-bb-1b-et-010-0.network.virginmedia.net (62.253.175.57)  28.357 ms  16.398 ms  42.387 ms
       6  nrth-bb-1c-ae1-0.network.virginmedia.net (62.253.174.110)  27.441 ms  15.622 ms  12.044 ms
       7  lutn-icdn-1-ae0-0.network.virginmedia.net (62.253.175.82)  16.678 ms  28.463 ms  28.253 ms
       8  * * *
       9  * * *
      10  * * *
      ^C

    If I let it, this goes on until the end of time; it never seems to reach a destination. Is this normal? A friend living in the same town who is also with Virgin Media has more conventional traceroute output: 7 hops to google.co.uk, all of which send the ICMP TIME_EXCEEDED response.

    The obvious fix -- rebooting the router -- doesn't seem to help. As far as I can tell, the WiFi connection is stable (I can always ping the router), so the problem is further downstream. I've tried using an alternative DNS (OpenDNS) before, but if anything it made things worse. In fact, it made all Google services nigh unreachable.
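
    One way to turn the anecdotal timeouts into data is mtr, which combines ping and traceroute and reports per-hop packet loss over many probe cycles; a small sketch (hostname from the question, cycle count arbitrary):

      # 100 probe cycles, numeric output, loss and latency summarised per hop
      mtr --report --report-cycles 100 --no-dns google.co.uk

    Persistent loss that first appears at one hop and continues at every hop beyond it points at that link; loss at a middle hop only is usually just routers de-prioritising ICMP.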


  • Ubuntu Newbie Needs Assistance!!

    - by Steve Greene
    New Ubuntu user needs help! -- version 9.10 does not communicate with laptop.

    Hello folks,

    Several days ago, I installed Ubuntu 9.10 onto my Acer Aspire 3100 laptop, running it alongside Windows Vista as a dual-boot system. Creation of the Ubuntu boot CD went fine, and the installation onto my hard drive was flawless. Ubuntu opens and behaves as I would expect, except for one little problem. For reasons unknown to me, Ubuntu is not communicating with my laptop's networking hardware, and I have no internet connectivity, even when sitting directly under the wireless router at the local library (literally), which puts out a wickedly fast signal that my Windows Vista OS auto-detects and immediately connects to. Up in the right side of the Ubuntu desktop, I click on the network icon and it does not show a wireless connection at all, even though I am only a few feet from the router. At home, where I use a dialup modem, I also see no means of getting online. My modem is an HDAUDIO Soft Data Fax Modem with Smart CP, manufactured by CXT (Conexant Systems Inc., file version 4.0.13.0, driver version 7.58.0.0).

    I desperately wish to convert to Ubuntu. I used Mac for ten years, and then Windows for ten years. Now, after 20 years, I want to live out my days as an open-source Ubuntu fanatic. I am ready to give the old status quo the boot! I am an advanced computer user, but I am not a programmer. I seek a solution that is user-friendly for normal people, something equivalent to a driver that I can easily install or activate that will allow Ubuntu to see my hardware and get me connected. Can anyone help me over this hopefully-little glitch so that I can move on in total Ubuntu bliss?

    My processor is a Mobile AMD Sempron Processor 3500+ at 1.80 GHz, with 1.50 GB RAM and a 32-bit operating system. I am running Windows Vista Home Basic, Service Pack 2. My current email is [email protected] if you have a workable solution that does not require programmer status to implement. Surely this must be a simple fix that I am simply overlooking, but being the new guy on the block, I have yet to be enlightened. Thanks for your help in coming up to speed!!

    Steve
    Wanna' be Ubuntu Fanatic
    "If you're not living on the edge, you're taking up too much space."
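
    For what it's worth, a few terminal commands will usually reveal which wireless chip Ubuntu sees and whether a driver has claimed it -- a hedged sketch, assuming nothing about the Aspire 3100 beyond a PCI wireless card:

      # what network hardware exists, and which kernel driver (if any) claims it
      lspci -nnk | grep -iA3 net
      # is the radio soft- or hard-blocked by the wireless kill switch?
      rfkill list
      # does any wireless interface exist at all?
      iwconfig

    Posting that output alongside a question like this usually gets the right driver package identified quickly.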


  • How clean is deleting a computer object?

    - by Kevin
    Though quite skilled at software development, I'm a novice when it comes to Active Directory. I've noticed that AD seems to have a lot of stuff buried in the directory and schema which does not appear superficially when using simplified tools such as Active Directory Users and Computers. It kind of feels like the Windows registry, where COM classes have all kinds of intertwined references, many of which are purely by GUID, such that it's not enough to just search for anything referencing "GadgetXyz" by name in order to cleanly remove GadgetXyz. This occasionally leads to the uneasy feeling that I may have useless garbage building up in there which I have no idea how to weed out.

    For instance, I made the mistake a while back of trying to rename a DC, figuring I could just do it in the usual manner from Control Panel. I found references to the old name buried all over the place, which made it impossible to reuse that name without considerable manual cleanup. Even long after I got it all working, I've stumbled upon the old name hidden away in LDAP. (There were no other DCs left in the picture at that time, so I don't think it was a tombstone issue.)

    More specifically, I'm worried about the case of just outright deleting a computer from AD. I understand the cleanest way to do it is to log into the computer itself and tell it to leave the domain. (As an aside, doing this in Windows 8 seems to only disable the computer object, not delete it outright!) My concern is cases where this is not possible, for instance because the machine was on an already-deleted VM image. I can simply go into Active Directory Users and Computers, find the computer object, click it, and press Delete, and it seems to go away. My question is: is it totally, totally gone, or could this leave dangling references in any Active Directory nook or cranny I won't know to look in? (Excluding, of course, the expected tombstone records, which expire after a set time.) If so, is there any good way to clean up the mess?

    Thank you for any insight!
    Kevin

    P.S. It was over a year ago so I don't remember the exact details, but here's the gist of the DC renaming issue. I started with a single 2008 DC named ABC on a physical machine, and wanted to end up with a DC of the same name running in a vSphere VM. Not wanting to mess with imaging the physical machine, my plan instead was:

      1. Rename ABC to XYZ.
      2. Fresh-install 2008 on a VM, name it ABC, and join it to the domain. (I may have done the latter in the same step as promoting it to DC; I don't recall.)
      3. dcpromo the new ABC as a second DC, including GC.
      4. Make sure the new ABC replicated correctly from XYZ, then transfer the FSMO roles from XYZ to it.
      5. Once everything was confirmed to work with the new ABC alone, demote XYZ, remove the AD role, and remove it from the domain.

    Eventually I managed to do this, but it was a much bumpier ride than expected. In particular, I got errors trying to join the new ABC to the domain. These included "The pre-windows 2000 name is already in use" and "No mapping between account names and security IDs was done." I eventually found that the computer object for XYZ had attributes that still referred to it as ABC. Among these were servicePrincipalName, msDS-AdditionalDnsHostName, and msDS-AdditionalSamAccountName. The last one I could not edit via Attribute Editor; instead I had to run this against XYZ:

      NETDOM computername <simple-name> /add:<FQDN>

    There were some other hitches I don't remember exactly.
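
    As a way of auditing for exactly the kind of leftover references described above, a directory-wide search for the old name can be sketched with dsquery (which ships with the AD DS tools; the ABC placeholder is from the question):

      rem any object whose SPNs still mention the old computer name
      dsquery * forestroot -filter "(servicePrincipalName=*ABC*)" -attr distinguishedName servicePrincipalName
      rem any lingering alternate SAM account names
      dsquery * forestroot -filter "(msDS-AdditionalSamAccountName=*ABC*)" -attr distinguishedName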


  • I cut-to-move the DCIM folder to external SD when an automatic Android OS update popped up before I could choose the target - cannot recover 200+ photos

    - by ZeroG
    I was downloading my Exhibit II's DCIM camera folder (with months of photos inside) to its external SD card, in order to transfer them to my laptop. In my overconfidence, I hurriedly chose cut-to-move (rather than copy-to-move) when KABOOM! -- an automatic Android OS update popped up before I could choose the target!!!

    I figured everything was in cache and calmly tried to go through with the update. But that was not the typically seamless event. It showed the downloading icon, but hmm... since I rooted the phone, it brought up the command line and recovery sequence. But neither Android nor I had yet downloaded any alternate custom ROM files to the internal SD to update from! So were they trying to make me unroot my phone by giving me some bogus update on the fly, or just giving me a hard time by handing me down an unrooted ROM that I'd have to figure out how to root again? Yes, I know there was that blurb about overwriting a file of the same name, but I was trying to shake the darn stubborn update being forced on my phone during this precarious moment. I thought I had frozen or turned off all those auto-updates previously.

    Anyway, phones are small and fingers are big (sigh)... I tried to reboot into safe mode, but the resulting photo files were partially overwritten (200 files had names but zero bytes in them). I thought maybe they were still hung in cache or deposited somewhere else, but I have searched everywhere with file managers. Since I did not have Titanium backing up the camera, photo folder or gallery, I cannot recover 200+ photos. Dumb. You can understand my dilemma, as I am involved in the arts: although just camera-phone shots, most of these photos were historic and aesthetic, or at least interesting as to subject matter. Photo-ops don't reoccur.

    I have tried a couple of recovery apps from the market, like Search Duplicates & Recover, to no avail. I was only able to salvage stuff I'd sent out in messages. I've got several decades in computers, and this is such a miserable beginner's piece of bad luck I can't believe it happened to me. They were precious photos!

    Yes, I have turned on Titanium since, and yes, I even tried USB-to-laptop recoveries. Being on a MacBook Pro, I'm trying androidfiletransfer.dmg, but I'd have to upgrade to Peach Sunrise to get above Android 3.0 for that app to recognize the phone via USB, and the developer says installation zeroes your data, so that pretty much toasts any secret hidden places where these photos may have been deposited. I don't want to do that and am still trying to find them. They certainly didn't make it to my external SD card. If any of you techies out there know anything, please help, and thanks. Despite decades in computing, unfamiliar and ever-changing hard- and software can humble even the most seasoned veterans.
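
    For completeness, the usual last-ditch route on a rooted phone is to image the internal flash and let photorec carve deleted JPEGs out of the image. This is very much a hedged sketch: the block-device path varies per model (check /proc/partitions on the phone first), and every path below is a placeholder:

      # on a rooted phone: image internal flash onto the external SD card
      adb shell su -c "dd if=/dev/block/mmcblk0 of=/mnt/sdcard/external_sd/internal.img"
      # pull the image to the computer and carve recoverable photos from it
      adb pull /mnt/sdcard/external_sd/internal.img
      photorec internal.img

    The sooner this is done after the loss the better, since any further write to internal storage can overwrite the photo data.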


  • Performance of ClearCase servers on VMs?

    - by Garen
    Where I work, we need to upgrade our ClearCase servers, and it's been proposed that we move them into a new (yet-to-be-deployed) VMware system. In the past I've not noticed significant problems with performance for most applications when running in VMs, but given that ClearCase "speed" (i.e. dynamic-view response times) is so latency-sensitive, I am concerned that this will not be a good idea. VMware has numerous white papers detailing performance-related issues based on network traffic patterns that reinforce my hypothesis, but nothing particularly concrete for this particular use case that I can see. What I can find are various forum posts online, which are somewhat dated, e.g.:

      "ClearCase clients are supported on VMWare, but not for performance issues. I would never put a production server on VM. It will work but will be slower. The more complex the slower it gets. Accessing or building from a local snapshot view will be the fastest, building in a remote VM stored dynamic view using clearmake will be painful..... VMWare is best used for test environments."
      (via http://www.cmcrossroads.com/forums?func=view&catid=31&id=44094&limit=10&start=10)

    and:

      "VMware + ClearCase = works but SLUGGISH!!!!!! (windows) (not for production environment). My company tried to mandate that all new apps or app upgrades needed to be on/moved to VMware instances. The VMware instance could not handle the demands of ClearCase. (Come to find out that I was sharing a box with a database server.) Will you know what else would be on that box besides ClearCase? -- Karl"
      (via http://www.cmcrossroads.com/forums?func=view&id=44094&catid=31)

    and:

      "... are still finding we can't get the performance using dynamic views to below 2.5 times that of a physical machine. Interestingly, speaking to a few people with much VMWare experience and indeed from running builds, we are finding that typically, VMWare doesn't take that much longer for most applications and about 10-20% longer has been quoted."
      (via http://www.cmcrossroads.com/forums?func=view&catid=31&id=44094&limit=10&start=10)

    Which brings me to the more direct question: does anyone have any more recent experience with ClearCase servers on VMware (if not any specific, relevant performance advice)?


  • Troubleshooting major performance issue: Is culprit Intel RST, Hard drive, or something else?

    - by Sean Killeen
    The Setup

    I have the following components in play:

      - ASUS P8Z68-V/PRO motherboard
      - A RAID1 configuration (1x 1 TB drive, 1x 2 TB drive -- I explain below), accelerated with an SSD using Intel's RST software, plus a 1 TB drive standing by as a spare
      - Core i7 2600K
      - 32 GB RAM
      - Windows 8.1

    This box was designed to be a beast, and until just recently it was very good at being just that.

    What's Happening

      - The system has slowed to a crawl whenever it touches the disk. Things appear to work at normal speed when dealing with memory. For example, typing this is fine, but saving it to disk from Notepad gave me a 5-7 second pause when clicking Save.
      - The disks appear to be at 100% all the time (e.g. the disk-access light on the PC is solidly on -- not even any flashing).
      - In Process Explorer, the disk appears to be barely utilized at all.
      - Intel RST reports that everything is fine.

    Other Details

    Prior to this happening, RST had reported that my drives were failing (one went bad, one was throwing SMART events). This made sense; they were at the tail end of their warranty and the PC is on almost all the time. I RMA'd the drives via Seagate. In the meantime, I'd purchased a 2 TB drive because I didn't realize that the 1 TB drives were under warranty. I figured I'd replace the other 1 TB drive with another 2 TB when it died, but then discovered the warranty. AFAIK, I haven't done any major updates since 8.1, and it worked fine after those.

    Question(s)

      1. How can I troubleshoot this? What is the best way to figure out why the disks are maxed out despite the OS reporting barely any disk usage and that everything is OK?
      2. Given the failures, etc. that I describe above, is it possible that the culprit is the I/O on the motherboard itself? If so, how would I even diagnose that?
      3. I'm betting the drives that Seagate gave me are refurbished (I didn't think to look; that's dumb). Is it possible that the same model drive, refurbished, could somehow cause this?
      4. In terms of how RAID1 works, is it possible that one drive is "falling behind" somehow, and that the RAID1 is constantly trying to fix the mirroring? If so, it seems like Intel RST would report on it, but I wanted to consider it as an option.
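
    For question 1, two quick measurements can show what Windows itself thinks the disks are doing; a hedged sketch (winsat is built in; smartctl comes from smartmontools, and member disks behind an RST RAID sometimes need its -d csmi option to be addressed individually):

      :: measure disk throughput as Windows sees it
      winsat disk -drive c
      :: raw SMART data from the first physical disk (smartmontools)
      smartctl -a /dev/sda

    Comparing the winsat numbers against the drives' rated speeds at least separates "the disks are genuinely slow" from "something above the disks is stalling".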


  • Varnish + Plesk : vhost broken

    - by Raphaël
    I have an e-commerce site with 300,000 products and 20,000 categories. It is slow and currently in production. I decided to install Varnish to speed it up. The trouble is that during installation I got a Guru Meditation. Since the site is in production, I cannot allow this error to stay up for more than a second, and I worry I have done something enormously stupid. I followed the following tutorial: http://www.euperia.com/linux/setting-up-varnish-with-apache-tutorial

    I'm sure I followed it all without error. I suspect there may be a specific configuration needed with Plesk. Has anyone already installed Varnish on an Ubuntu 11.04 server with Plesk 10? Does anyone have a better resource? I know a Guru Meditation is "very vague" as an error, but maybe some of you have had this problem.

    Edit 24/11/2011: I continued to work on Varnish + Plesk ... but it still does not work.

    1. I changed the port for Apache in Plesk:

      # mysql -uadmin -p`cat /etc/psa/.psa.shadow` -D psa -e'replace into misc (param, val) values ("http_port", 8008)'

    1.1. I rebuilt the server conf:

      # /usr/local/psa/admin/bin/httpdmng --reconfigure-all

    2. I changed the Apache conf files (in case those were not fully managed by Plesk):

      vim /etc/apache2/ports.conf
        NameVirtualHost *:8008
        Listen 8008

    2.1. I did the same with /etc/apache2/sites-enabled/000-default.

    3. I changed the port of my vhost (a single server), replacing port 80 with the one I want:

      vim /var/www/vhosts/MYDOMAIN.COM/conf/XXXXXXXXX.http.include

    Then I rebuilt the vhost conf, with and without www (see my issue on Server Fault: "Edit vhost port in plesk 10.3"):

      /usr/local/psa/admin/sbin/websrvmng --reconfigure-vhost --vhost-name=<domain_name>

    4. I installed Varnish by following this tutorial: http://www.euperia.com/linux/setting-up-varnish-with-apache-tutorial

    5. I restarted Apache 2 and Varnish:

      service apache2 restart
      service varnish restart

    When I go to my site, I land on an Apache default page:

      It works! This is the default web page for this server. The web server software is running but no content has been added, yet.

    Can somebody help me? This means that my vhost does not point to the right place. Why? What to do? How?
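
    For reference, the shape the tutorial aims for is Varnish on port 80 forwarding to Apache on 8008. A minimal sketch of the two files involved (Debian/Ubuntu default paths; VCL 2.x/3.x syntax, which matches what Ubuntu 11.04 shipped):

      # /etc/default/varnish -- listen on 80 instead of the default 6081
      DAEMON_OPTS="-a :80 \
                   -T localhost:6082 \
                   -f /etc/varnish/default.vcl \
                   -s malloc,256m"

      # /etc/varnish/default.vcl -- point the backend at Apache
      backend default {
          .host = "127.0.0.1";
          .port = "8008";
      }

    A Guru Meditation (503) generally means the backend declaration doesn't match a port Apache is actually listening on, which is worth re-checking with netstat -lnp | grep apache. Landing on Apache's "It works!" page instead means the request reached Apache but no vhost matched the new port, so each Plesk vhost must really be listening on 8008, as attempted in step 3.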


  • Critique My Backup and Storage Plan

    - by MetaHyperBolic
    My current storage (RAID1 off a hardware RAID card) and backup (a spare drive) solutions for my home network are inadequate. I have too much data scattered across various one-off drives. It is time to evolve.

    Backups seem simple enough, at least: lots of big drives. However, I am bewildered by the number of choices for small home storage. The Drobo S looks appealing. So does the ReadyNAS. I am not looking for bunches of shiny features; I'm mostly interested in reliability. I am not interested in building Yet Another PC to create a file server, or doing something in the cloud, or whatever. I'm stupid, so I am keeping it simple.

    Requirements for the main volume:

      - Starting working space of roughly 2 TB, with options for growth up to 5 TB
      - RAID or something RAID-like with at least one parity drive
      - eSATA II for speed during backups
      - Ability to shut down gracefully when alerted of low power by a UPS
      - Optional but desirable: takes 2 TB drives now, with options for the larger 3 TB drives coming in 2010-2011
      - Optional but desirable: RAID-6 or something similar, with two parity drives
      - Optional but desirable: hot spare
      - Ethernet connection not required, as the volume will be shared via the same machine that runs my home print server

    Backups:

      - Backup performed via ROBOCOPY in mirror mode to an external hard drive over an eSATA II connection.
      - Start by rotating between two external 2 TB hard drives, eventually going up to six external 2 TB drives.
      - Start with a weekly backup, moving to a bi-weekly backup as more drives are added.
      - Move to 3 TB drives as the size of my main volume increases.
      - Backup drives will be stored at an off-site location.

    Hard drives: I plan on buying all of the same model, but different batches from different vendors. I found a "burn-in" utility with which I can pound away on the drives for a couple of weeks before adding them to the backup pool or the main volume.

    I estimate that I am looking at roughly $1,500 to start, once I start throwing in two 2 TB drives for backup and four for storage. So, are there any obvious flaws in my plan? What have I overlooked? Any suggestions for a storage device for my main volume that fits my requirements? Or do I just keep it simple with 2 drives in RAID1, then perform due diligence with my backups, accepting that I will have to buy a whole new unit when my data grows past 2 TB?
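
    Since the plan leans on ROBOCOPY's mirror mode, a hedged sketch of the weekly job (paths and drive letters are invented placeholders; /MIR makes the destination exactly match the source, including deletions, which is the desired behaviour for a rotating backup set -- and also why pointing it at the wrong drive letter is dangerous):

      robocopy D:\MainVolume E:\Backup /MIR /R:2 /W:5 /LOG:C:\Logs\backup.log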


  • IIS 7.5 on Windows Server 2008 R2 refusing to create PASSIVE MODE FTP connections

    - by Campbell
    I'm attempting to get an FTP client written in Perl to transfer files from an IIS 7.5 FTP server using passive mode. I've configured the FTP server as per the instructions and have also configured Windows Firewall to allow this type of traffic. I have validated that the firewall is behaving correctly by checking that there are no blocked packets in the logs. I have verified that the FTP control channel is being opened on port 21. I believe the client is being told by IIS which port to connect to for passive mode, and IIS is then refusing to allow that connection. The Perl log looks like:

      C:\cygwin\Perl\lib\FMT>perl FTPTest.pl
      Net::FTP>>> Net::FTP(2.77)
      Net::FTP>>> Exporter(5.64_01)
      Net::FTP>>> Net::Cmd(2.29)
      Net::FTP>>> IO::Socket::INET(1.31)
      Net::FTP>>> IO::Socket(1.31)
      Net::FTP>>> IO::Handle(1.28)
      Net::FTP=GLOB(0x20abac0)<<< 220 Microsoft FTP Service
      Net::FTP=GLOB(0x20abac0)>>> USER ftpuser
      Net::FTP=GLOB(0x20abac0)<<< 331 Password required for ftpuser.
      Net::FTP=GLOB(0x20abac0)>>> PASS ....
      Net::FTP=GLOB(0x20abac0)<<< 230 User logged in.
      Net::FTP=GLOB(0x20abac0)>>> CWD /Logs
      Net::FTP=GLOB(0x20abac0)<<< 250 CWD command successful.
      Net::FTP=GLOB(0x20abac0)>>> PASV
      Net::FTP=GLOB(0x20abac0)<<< 227 Entering Passive Mode (xx,xxx,xxx,xxx,160,41).
      Net::FTP=GLOB(0x20abac0)>>> RETR filename.txt
      Can't use an undefined value as a symbol reference at
      C:/Utilities/strawberryperl/perl/lib/Net/FTP/dataconn.pm line 54.

    The IIS logs look as follows:

      2010-10-02 17:40:06 xx.xxx.xx.xx - yy.y.yy.yy ControlChannelOpened - - 0 0 27a48c9b-9dce-4770-8bcf-fc89f2569b1a - -
      2010-10-02 17:40:06 xx.xxx.xx.xx - yy.y.yy.yy USER ftpuser 331 0 0 27a48c9b-9dce-4770-8bcf-fc89f2569b1a - -
      2010-10-02 17:40:06 xx.xxx.xx.xx MACHINENAME\ftpuser yy.y.yy.yy PASS *** 230 0 0 27a48c9b-9dce-4770-8bcf-fc89f2569b1a / -
      2010-10-02 17:40:06 xx.xxx.xx.xx MACHINENAME\ftpuser yy.y.yy.yy CWD /Logs 250 0 0 27a48c9b-9dce-4770-8bcf-fc89f2569b1a /Logs -
      2010-10-02 17:40:06 xx.xxx.xx.xx MACHINENAME\ftpuser yy.y.yy.yy PASV - 227 0 0 27a48c9b-9dce-4770-8bcf-fc89f2569b1a - -
      2010-10-02 17:40:27 - MACHINENAME\ftpuser zz.z.zz.zzz 41001 DataChannelClosed - - 64 0 27a48c9b-9dce-4770-8bcf-fc89f2569b1a - -
      2010-10-02 17:40:27 xx.xxx.xx.xx MACHINENAME\ftpuser yy.y.yy.yy ControlChannelClosed - - 64 0 27a48c9b-9dce-4770-8bcf-fc89f2569b1a - -
      2010-10-02 17:40:27 xx.xxx.xx.xx MACHINENAME\ftpuser yy.y.yy.yy RETR filename.txt 550 1236 0 27a48c9b-9dce-4770-8bcf-fc89f2569b1a filename.txt -

    We've managed to see this issue with other FTP clients too, so I don't think it's something funny in Perl. I've been informed that this works fine with the IIS 6 FTP server. I'm wondering if there is something we're missing here.
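
    One frequently reported culprit for exactly this pattern -- PASV accepted, then the data connection on the advertised high port (160*256+41 = 41001, matching the DataChannelClosed line above) never completing -- is Windows Firewall's stateful FTP inspection interfering with IIS 7.5's own data-channel handling. A hedged check and workaround, from an elevated prompt:

      :: show global settings, including the StatefulFTP state
      netsh advfirewall show global
      :: stop the firewall from inspecting/filtering passive FTP data ports
      netsh advfirewall set global StatefulFTP disable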


  • Very poor read performance compared to write performance on md(raid1) / crypt(luks) / lvm

    - by Android5360
    I'm experiencing very poor read performance over raid1/crypt/lvm. At the same time, write speeds on the same setup are about 2x+ faster. On another RAID1 setup on the same machine I get normal read speeds (maybe because I'm not using cryptsetup).

    OS-related disks: sda + sdb. I have a RAID1 configuration with two disks, both in place, with LVM over the RAID and no encryption. Both disks are WD Green, 5400 rpm. IO test results on this raid1:

      dd if=/dev/zero of=/tmp/output.img3 bs=8k count=256k conv=fsync
        2147483648 bytes (2.1 GB) copied, 22.3392 s, 96.1 MB/s
      sync
      echo 3 > /proc/sys/vm/drop_caches
      dd if=/tmp/output.img3 of=/dev/null bs=8k
        2147483648 bytes (2.1 GB) copied, 15.9 s, 135 MB/s

    And here is the problematic setup (on the same machine). Currently I have only one disk, sdc (WD Green, 5400 rpm), configured as software raid1 + crypt (LUKS, serpent-xts-plain) + LVM. Tomorrow I will attach another disk (sdd) to complete this two-disk RAID1 setup. IO test results on this raid1:

      dd if=/dev/zero of=output.img3 bs=8k count=256k conv=fsync
        2147483648 bytes (2.1 GB) copied, 17.7235 s, 121 MB/s
      sync
      echo 3 > /proc/sys/vm/drop_caches
      dd if=output.img3 of=/dev/null bs=8k
        2147483648 bytes (2.1 GB) copied, 36.2454 s, 59.2 MB/s

    We can see that the read performance is very, very bad (59 MB/s compared to 135 MB/s with no encryption). Nothing else is using the disks during the benchmark; I can confirm this because I checked with iostat and dstat.

    Details on the hardware:

      - disks: all WD Green, 5400 rpm, 64 MB cache
      - cpu: FX-8350 at stock speed
      - ram: 4x 4 GB at 1066 MHz

    Details on the software:

      - OS: Debian Wheezy 7, amd64
      - mdadm: v3.2.5 - 18th May 2012
      - LVM version: 2.02.95(2) (2012-03-06)
      - LVM library version: 1.02.74 (2012-03-06)
      - LVM driver version: 4.22.0
      - cryptsetup: 1.4.3

    Here is how I configured the slow raid1+crypt+lvm setup:

      parted /dev/sdc mklabel gpt
        (type: ext4, start: 2048s, end: -1)

    Then the raid, crypt and LVM configuration:

      mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdc
      cryptsetup --cipher serpent-xts-plain luksFormat /dev/md1
      cryptsetup luksOpen /dev/md1 md1_crypt
      vgcreate vg_sql /dev/mapper/md1_crypt
      lvcreate -l 100%VG vg_sql -n lv_sql
      mkfs.ext4 /dev/mapper/vg_sql-lv_sql
      mount /dev/mapper/vg_sql-lv_sql /sql

    So guys, can you help me identify the reason and fix it? It has to be something with cryptsetup, as there is no such read slowdown on the other setup (sda+sdb), where no encryption is present. But I have no idea what to do. Thanks!
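
    Two cheap measurements before blaming any single layer, offered as a sketch: serpent-xts is one of the slower LUKS ciphers in software (and the FX-8350's AES instructions only accelerate aes-xts), and read-ahead does not always propagate up through the md -> crypt -> LVM stack. Note that the benchmark subcommand needs cryptsetup >= 1.6, newer than the 1.4.3 listed above:

      # compare in-kernel cipher throughput (cryptsetup >= 1.6)
      cryptsetup benchmark
      # check read-ahead at each layer; a small value on top hurts sequential reads
      blockdev --getra /dev/md1
      blockdev --getra /dev/mapper/md1_crypt
      blockdev --getra /dev/mapper/vg_sql-lv_sql
      # experiment: raise read-ahead on the logical volume and re-run the dd test
      blockdev --setra 8192 /dev/mapper/vg_sql-lv_sql

    If cryptsetup benchmark shows serpent-xts topping out near the observed 59 MB/s, the cipher choice itself is the bottleneck, and aes-xts-plain64 would be the thing to test next.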


  • Looking for a fiber optic "switch" or "router" for home use

    - by Shrout1
    The gist of my question: What is a "fiber optic" switch called? I.e., a layer 2 Ethernet switch that uses fiber TX and RX connections and sends layer 2 network traffic between the fiber strands that are connected.

      - Can someone purchase a dedicated fiber switch that does not have copper Ethernet ports? What is the current average price of a device like this? (Not necessarily looking for product endorsements, just information; it might not make sense to go this route if it is too cost-prohibitive.)
      - What type of fiber connector is used for terminating a fiber strand into a jack on the wall?
      - Can fiber be "patched" using two jacks and a "patch" cable? Is signal loss a concern with the longest runs at 100-200 ft, a patch cable and media converters?

    The full story: My parents had unterminated fiber optic cable and terminated Cat5e run throughout their home when it was built in 2004. Ten years later the Cat5e isn't providing the throughput that my father needs for multiple streams of HD and fast system backups throughout the house. He can't reach gigabit speeds across the distance of the Cat5e runs. We are both interested in terminating the fiber connections and using them as high-speed "backbones" to copper switches in each room of the house. It would be easy to attain gigabit speeds (or better, eventually) using the fiber.

    I have searched and searched for a "fiber optic switch" or "fiber optic router" and cannot find the correct term for this piece of hardware. We could use fiber media converters at the endpoints of each connection, but it would be nice to have a "patch panel" set up in the network closet in the basement that has fiber connections on it and switches the Ethernet streams between the connections/systems in the house. Each fiber media converter costs between $50 and $100 apiece... After 10 or so terminated connections it might make sense to find a piece of hardware that does not require media converters, depending on the cost of that hardware.

    Somewhat unrelated: if we are able to route between these fiber strands successfully, what is the physical connector type used in a jack on the wall -- the fiber equivalent of an RJ45 wall outlet (the original post included a picture of one here)? In the interim, could we "patch" a couple of fiber strands together in the network closet? Would signal loss be a concern with a run length of 100-200 feet, a patch cable and two media converters? If that would work, it could be used until the funds are available for more.


  • Is real-time or synchronous replication possible over WAN link?

    - by johnnyb10
    The company I work for is looking to implement truly real-time file replication with file locking over a WAN link that spans over 2000 miles. We currently have a 16-drive SAN setup in our east coast office. We also have an office out in Colorado that will have the exact same SAN setup. The idea is to have those two SANs contain exactly the same data at all times, which will allow us to work with the same data pool, and which will also provide us with an offsite backup solution should a failure occur on either end. We're running Server 2008. The objective is to enable users in the east coast office to work on files and have those changes instantly updated on the Colorado SAN as well. We also need file locking so that there will be no conflicts or overwritten changes if users attempt to work on the same file.

    Is this scenario even possible, at speeds that would make the files usable? And if so, what software would we need to pull this off? As I understand it, DFS-R does not provide file locking, so if we used that, we would need to add a third-party product like PeerLock. But I don't even know if DFS-R is an option. Can it replicate quickly enough over a WAN link? Can any product?

    It seems that if we were to use synchronous replication, the programs would be unacceptably slow, as every write would have to wait for confirmation from the other end of the link. But if we used asynchronous replication, what kind of latency would we be looking at?

    There is a product from GlobalScape called WAFS that claims to provide "File coherence with real-time file locking, file release, and synchronization" and says that "As files are modified, changes are mirrored instantly using intelligent byte-level differencing to minimize the impact on network bandwidth". So this sounds like synchronous replication, but that doesn't even seem possible, given physical limitations such as the speed of light.

    If anyone has any experience with this kind of setup, or knows whether it's even possible, I'd appreciate your input and suggestions, including recommendations for software that we should check out.
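
    A back-of-the-envelope check on that speed-of-light point: 2000 miles is roughly 3200 km, and light in fiber propagates at about two-thirds of c, around 200,000 km/s, so one-way propagation alone is 3200 / 200000 ≈ 16 ms, and a synchronous write cannot be acknowledged in less than a ~32 ms round trip -- before adding any router, switch, or storage latency. That floor is why truly synchronous replication at this distance is generally considered unusable for interactive file work.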


  • mobile broadband recommendations for Lenovo T500

    - by Justin Grant
    I use a Lenovo T500 primarily in and around San Francisco, although I do some travel in the US on business and occasionally to Europe/Asia. I'm looking for a good mobile broadband option for my T500. I am admittedly baffled by the various mobile-broadband choices (3G vs. 4G, WiMax vs. LTE vs. MIMO vs. ..., etc.). My priorities are (in this order):

      1. Compatible with the Lenovo T500 and Windows 7. I realize only the AT&T accessory card is listed on Lenovo's site, but I've also heard that other cards will work in my T500 too, like the WiMax/WiFi combo card -- so I'm interested in what actually works, not necessarily only what Lenovo is promoting.
      2. Reliable coverage in US large cities, especially the SF Bay Area. My iPhone has lousy coverage in many spots, so I'd be nervous about an AT&T 3G option unless the problem is with the iPhone and not AT&T's network. I'm OK with less-than-great coverage outside major US cities, since I don't do much travel in those areas.
      3. Speed. Faster is better.
      4. Internal card. I'd slightly prefer something I could install inside my T500 instead of a dongle on the side that might break off, although this is my lowest priority, so it's not a big deal.
      5. Price. I don't want to pay over $100/month.

    I've tried lots of Googling and haven't come up with clear answers. I've seen lots of general overviews without recommendations, and lots of passionate opinions which don't feel objective (and don't help me understand compatibility with my hardware and geography). Can you recommend a good, objective guide online -- ideally for Lenovo, although a general guide is OK too -- which can help me figure out which option is best for me? I'd also be interested in your own personal experiences of using mobile broadband with a Lenovo T500. I'll accept the answer which gets me closest to making a decision.


  • How to unlock and remove a protected partition from Prestigio USB stick?

    - by mr.b
    Ok, so, I have one of those fancy schmancy devices, given to me by a frustrated friend of mine. The device is a Prestigio Leather 8GB, which identifies itself to a Linux host as:

      Bus 001 Device 006: ID 1307:0165 Transcend Information, Inc. 2GB/4GB Flash Drive

    Kernel messages as the USB device is plugged in:

      kernel: [ 2769.580042] usb 1-9: new high speed USB device using ehci_hcd and address 7
      kernel: [ 2769.714782] scsi8 : usb-storage 1-9:1.0
      kernel: [ 2770.713937] scsi 8:0:0:0: Direct-Access 8192MB flash drive 1.00 PQ: 0 ANSI: 2
      kernel: [ 2770.714535] scsi 8:0:0:1: Direct-Access 8192MB flash drive 1.00 PQ: 0 ANSI: 2
      kernel: [ 2770.715734] sd 8:0:0:0: Attached scsi generic sg3 type 0
      kernel: [ 2770.716108] sd 8:0:0:1: Attached scsi generic sg4 type 0
      kernel: [ 2770.722175] sd 8:0:0:0: [sdc] 962560 512-byte logical blocks: (492 MB/470 MiB)
      kernel: [ 2770.722657] sd 8:0:0:0: [sdc] Write Protect is on
      kernel: [ 2770.731078] sd 8:0:0:1: [sdd] 14012416 512-byte logical blocks: (7.17 GB/6.68 GiB)
      kernel: [ 2770.731215] sdc:
      kernel: [ 2770.738251] sd 8:0:0:1: [sdd] Write Protect is off
      kernel: [ 2770.880328]
      kernel: [ 2770.885876] sd 8:0:0:0: [sdc] Attached SCSI removable disk
      kernel: [ 2770.887442] sdd: unknown partition table
      kernel: [ 2771.049605] sd 8:0:0:1: [sdd] Attached SCSI removable disk

    So, the symptoms are typical of U3-like devices: two separate devices inside a single flash device. Windows also sees it as two identical USB devices and mounts two separate drives: the first presents itself as a CD-ROM device holding write-protected content, and the second is a regular flash-disk partition that "can" be written to. However, it seems to be broken in some weird way, since it won't let me write anything to it, format it, nothing -- but that's not the issue right now.

    Question: How can I unlock the entire USB stick so it appears to the system as a single 8 GB device which can be partitioned and used normally, without restrictions?

    Since it appeared to be a U3 device, I tried the standard utilities: both the U3 Uninstaller by u3.com (found on Softpedia) and the open-source u3_tool from SourceForge (on both Windows and Linux). The first utility failed to even detect the USB stick as a U3 device (it simply stood idle while I re-plugged the stick several times), while the second failed with an obscure error about a SCSI command:

      u3_tool -i /dev/sg3   (Display device info)

    fails with

      u3_partition_info() failed: Device reported command failed: status 1

    ...and every other option fails with the same error, minus the first part, which states which command precisely has failed. So, apparently, this isn't a U3 device. Or, if it is, it doesn't behave like one. I have read on a few occasions that this kind of protection is implemented by a special command sent to the device which tells it to lock itself, and so there should be an unlock command that would set the drive straight.

    Does anyone have any idea what I could do to this device to fix it?

    P.S. I also mentioned a problem with being unable to use the second "drive", but I'll tackle that problem when (and if) I manage to merge those two devices into one...


  • My facebook blocking ACL has stopped working

    - by Josh
    This is probably very simple. The setup was in place before I arrived, and had been working to block Facebook. I recently eliminated some static port forwarding on this 2691 (as in, I don't think anything else has changed), and now Facebook is once again accessible. Why is this list not doing what it seems like it should be doing (and was doing)? Would an extended outbound ACL be more appropriate? (I think that would have been my thought if I had been tasked with creating this in the first place.) Something different? I've included below what I believe are the relevant parts of the config.

      interface FastEthernet0/0
       ip address my.pub.ip.add my.ip.add.msk
       ip access-group 1 in
       ip nat outside
       ip virtual-reassembly
       duplex auto
       speed auto

      access-list 1 deny 69.171.224.0 0.0.31.255
      access-list 1 deny 74.119.76.0 0.0.3.255
      access-list 1 deny 204.15.20.0 0.0.3.255
      access-list 1 deny 66.220.144.0 0.0.15.255
      access-list 1 deny 69.63.176.0 0.0.15.255
      access-list 1 permit any

      ip nat inside source list 105 interface FastEthernet0/0 overload

      access-list 105 deny ip 192.168.0.0 0.0.0.255 192.168.8.0 0.0.0.255
      access-list 105 permit ip 192.168.0.0 0.0.0.255 any
      access-list 105 permit ip 192.168.1.0 0.0.0.255 any

    EDIT: The ACL is once again blocking Facebook. Here is the new definition, for those interested...

      access-list 1 deny 66.220.144.0 0.0.7.255
      access-list 1 deny 66.220.152.0 0.0.7.255
      access-list 1 deny 69.63.176.0 0.0.7.255
      access-list 1 deny 69.63.176.0 0.0.0.255
      access-list 1 deny 69.63.184.0 0.0.7.255
      access-list 1 deny 69.171.224.0 0.0.15.255
      access-list 1 deny 69.171.239.0 0.0.0.255
      access-list 1 deny 69.171.240.0 0.0.15.255
      access-list 1 deny 69.171.255.0 0.0.0.255
      access-list 1 deny 74.119.76.0 0.0.3.255
      access-list 1 deny 173.252.64.0 0.0.31.255
      access-list 1 deny 173.252.70.0 0.0.0.255
      access-list 1 deny 173.252.96.0 0.0.31.255
      access-list 1 deny 204.15.20.0 0.0.3.255
      access-list 1 permit any
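
    For the extended-ACL idea raised above, a hedged sketch of one way it could look -- matching destination addresses on traffic leaving the outside interface, instead of source addresses coming back in. The ACL name is invented, and the ranges would need to track Facebook's current allocations just as the standard list does:

      ! match destination instead of source, applied outbound
      ip access-list extended BLOCK-FB
       deny ip any 66.220.144.0 0.0.15.255
       deny ip any 69.63.176.0 0.0.15.255
       deny ip any 69.171.224.0 0.0.31.255
       deny ip any 173.252.64.0 0.0.63.255
       permit ip any any
      !
      interface FastEthernet0/0
       ip access-group BLOCK-FB out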


  • Win 8: Adding a boot volume to an MBR dynamic disk [NOT about changing to basic disks]

    - by Stilez
    (This is NOT about converting to basic disks. In this question, the disk stays dynamic but becomes bootable.)

    There doesn't seem to be a clear, well-stated answer that I can find for the question "What are the criteria for Windows 8 to successfully boot from an MBR dynamic disk?", or "How do I fix a dynamic MBR partition that's failing to boot?" I've tried to educate myself but can't find the crucial information to clear it all up.

    My existing HDD/SSD setup:

      DISK 0 ~ 60GB SSD/MBR/basic:    (350MB recovery) (60GB Windows 8 bootable)
      DISK 1 ~ 512GB SSD/MBR/dynamic: (350MB recovery) (60GB unallocated) (410GB mirrored data)
      DISK 2 ~ 512GB SSD/MBR/dynamic: (350MB recovery) (60GB unallocated) (410GB mirrored data)
      DISKS 3, 4, 5: (ignored for simplicity: 2x HDD RAID1 + caching SSD)

    I'm heavy-duty on data crunching and virtualisation; I just maxed out 32GB RAM @ 2133 and moved to a 4960X + 64GB. Disk 0 is a pure system disk of little value, and virtualisation runs off mirrored SSDs (Samsung 840 Pro 512 x 2) for double-speed reading, so snapshots complete in reasonable time. I'm using 4 SATA3 ports, and the board only has two decent Intel ports (the onboard Marvell ones are poorer quality). I'm wary of choosing between LSI, HighPoint and other third-party controllers, as I'm unfamiliar with the maze of decent RAID cards (that's a whole other issue!). I want to cut down my SSD needs by moving the boot volume and caching volume to the 840 Pros, giving a setup with two fewer SSDs:

      DISK 0 ~ 512GB SSD/MBR/dynamic: (350MB recovery) (60GB boot) (410GB mirrored data)
      DISK 1 ~ 512GB SSD/MBR/dynamic: (350MB recovery) (30GB cache for the ICH10R mirror) (30GB temp) (410GB mirrored data)
      DISKS 2, 3: (2x HDD RAID1)

    Intel's RST allows this, Win 8 allows booting off an MBR/dynamic disk, and the two 60GB SSDs are hardly the fastest SSDs anyway; they'll get repurposed. Moving the caching volume is easy. Moving the boot volume has me stumped.

    The difficulty is that I'm hitting a wall of knowledge here. I have a UEFI Asus motherboard with a previous traditional MBR/basic boot disk, and I want it to boot from a disk and volume that's MBR/dynamic. The disk copy is physically OK (Partition Wizard Server will copy to dynamic volumes), but it then hits a light blue 0xc000000e boot error. No real surprise; I expected to have some boot fixing to do, but had expected Windows to boot-fix it (all drivers exist), or the usual manual fixes to work. Specifically, I don't know enough to know what has to be manually checked and perhaps corrected for the disk to boot (legacy/UEFI/BIOS, odd partitions, boot tables, disk IDs, hidden boot files, oh my!), or whether I need to change any of the secure boot/UEFI/legacy settings in the BIOS, convert a 512GB SSD to basic and then back to dynamic once it works, or whether the issue is purely OS configuration using diskpart, bootsect and bootrec from the Win8 DVD. The old system disk still boots, but I don't know enough to figure out what to fix to make the system boot as I want. The answers probably aren't hard; the real issue is my confusion and missing information. Thanks for helping!
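
    For the 0xc000000e itself, the standard repair sequence from the Win8 DVD's recovery console is at least worth listing explicitly -- a hedged sketch, since none of it addresses dynamic-disk-specific boot criteria (X: stands for whatever letter the recovery environment assigns the copied Windows volume; check with diskpart first):

      :: rebuild MBR boot code and the BCD store
      bootrec /fixmbr
      bootrec /fixboot
      bootrec /rebuildbcd
      :: recreate the boot files for the copied Windows volume
      bcdboot X:\Windows /s X: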

    Read the article

  • Brand new Mac Pro tower fan suddenly runs full-tilt

    - by Caffeine Coma
    My Quad-Core Mac Pro tower is two days old. Initially, I was impressed with how quiet it was compared to my older MacBook Pro. Then on day two, for some reason, it started running very loudly. It's not just a "little" loud; my wife walked into the room and asked what the noise was. At first I thought this was just because I was hitting the CPU a bit (importing my iPhone library into iLife '09, and running Eclipse). But now that that's done, Activity Monitor shows a virtually idle CPU; there's nothing running that ought to be causing this, as far as I can tell. I tried powering it off and letting it cool down for a few minutes, to no avail; about 10 seconds after powering up, the box gets loud again. I took a look at it with the side cover off, and it seems to be the fan near the top middle, between the power supply and the disk drive. It can't be a dust issue, as the machine is only 2 days old (and I peeked inside anyway just to be sure: clean). I did do a software update over the past 24 hours or so, but I can't say the noise started immediately after that. I also migrated my old apps and data from my MBPro, for what it's worth. Why is it suddenly so loud? How can I monitor the fan speed and the various system temperatures? Here's a link to my temps and fan speeds.
UPDATE 1: Took it to the Apple Store. They took it in the back (where it's presumably quieter) and ran a fan diagnostic; no problems were found. The guy also told me that it was "a little loud", but normal. I don't buy it. It was virtually silent for the first 24 hours I was using it. They would not replace/service it in the store (grrr... that's why I went there, as directed by AppleCare) but said I could get a replacement from the online store, as it was just purchased. I think I will try that.
UPDATE 2: Apple is letting me send it back for a replacement. Glad to see so many responses to this question mentioning that the Mac Pros are usually silent; it's not all just in my head. :-)

    Read the article

  • why does nginx rewrite a POST request from /login to //login?

    - by jiangchengwu
    There is an if statement that rewrites the URL when the client is Android. Everything was OK, but then something strange happened: nginx rewrites the POST request /login to //login even when the body of the if block is blank, so I get a 404 page, since the Jetty server only accepts /login requests. Server conf:
location / {
    proxy_pass http://localhost:8785/;
    proxy_set_header Host $http_host;
    proxy_set_header Remote-Addr $http_remote_addr;
    proxy_set_header X-Real-IP $remote_addr;
    if ( $http_user_agent ~ Android ){
        # rewrite something; has been commented out
    }
}
Debug info, original log: https://gist.github.com/3799021
...
2012/09/28 16:29:49 [debug] 26416#0: *1 http script regex: "Android"
2012/09/28 16:29:49 [notice] 26416#0: *1 "Android" matches "Android/1.0", client: 106.187.97.22, server: ireedr.com, request: "POST /login HTTP/1.1", host: "ireedr.com"
...
2012/09/28 16:29:49 [debug] 26416#0: *1 http proxy header:
"POST //login HTTP/1.0
Host: ireedr.com
X-Real-IP: 106.187.97.22
Connection: close
Accept-Encoding: identity, deflate, compress, gzip
Accept: */*
User-Agent: Android/1.0
"
...
2012/09/28 16:29:49 [debug] 26416#0: *1 HTTP/1.1 404 Not Found
Server: nginx/1.2.1
Date: Fri, 28 Sep 2012 08:29:49 GMT
Content-Type: text/html;charset=ISO-8859-1
Transfer-Encoding: chunked
Connection: keep-alive
Cache-Control: must-revalidate,no-cache,no-store
Content-Encoding: gzip
...
Only when I comment out the if block in the configuration file:
location / {
    proxy_pass http://localhost:8785/;
    proxy_set_header Host $http_host;
    proxy_set_header Remote-Addr $http_remote_addr;
    proxy_set_header X-Real-IP $remote_addr;
    #if ( $http_user_agent ~ Android ){
    #
    #}
}
does the client get a 200 response. Debug info, original log: https://gist.github.com/3799023
...
"POST /login HTTP/1.0
Host: ireedr.com
X-Real-IP: 106.187.97.22
Connection: close
Accept-Encoding: identity, deflate, compress, gzip
Accept: */*
User-Agent: Android/1.0
"
...
2012/09/28 16:27:19 [debug] 26319#0: *1 HTTP/1.1 200 OK
Server: nginx/1.2.1
Date: Fri, 28 Sep 2012 08:27:19 GMT
Content-Type: application/json;charset=UTF-8
Content-Length: 17
Connection: keep-alive
...
As the log shows:
2012/09/28 16:29:49 [notice] 26416#0: *1 "Android" matches "Android/1.0", client: 106.187.97.22, server: ireedr.com, request: "POST /login HTTP/1.1", host: "ireedr.com"
2012/09/28 16:29:49 [debug] 26416#0: *1 http script if
2012/09/28 16:29:49 [debug] 26416#0: *1 post rewrite phase: 4
2012/09/28 16:29:49 [debug] 26416#0: *1 generic phase: 5
2012/09/28 16:29:49 [debug] 26416#0: *1 generic phase: 6
2012/09/28 16:29:49 [debug] 26416#0: *1 generic phase: 7
2012/09/28 16:29:49 [debug] 26416#0: *1 access phase: 8
2012/09/28 16:29:49 [debug] 26416#0: *1 access phase: 9
2012/09/28 16:29:49 [debug] 26416#0: *1 access phase: 10
2012/09/28 16:29:49 [debug] 26416#0: *1 post access phase: 11
2012/09/28 16:29:49 [debug] 26416#0: *1 try files phase: 12
2012/09/28 16:29:49 [debug] 26416#0: *1 posix_memalign: 0000000001E798F0:4096 @16
2012/09/28 16:29:49 [debug] 26416#0: *1 http init upstream, client timer: 0
2012/09/28 16:29:49 [debug] 26416#0: *1 epoll add event: fd:13 op:3 ev:80000005
2012/09/28 16:29:49 [debug] 26416#0: *1 http script copy: "Host: "
2012/09/28 16:29:49 [debug] 26416#0: *1 http script var: "ireedr.com"
2012/09/28 16:29:49 [debug] 26416#0: *1 http script copy: " "
2012/09/28 16:29:49 [debug] 26416#0: *1 http script copy: ""
2012/09/28 16:29:49 [debug] 26416#0: *1 http script copy: ""
2012/09/28 16:29:49 [debug] 26416#0: *1 http script copy: "X-Real-IP: "
2012/09/28 16:29:49 [debug] 26416#0: *1 http script var: "106.187.97.22"
2012/09/28 16:29:49 [debug] 26416#0: *1 http script copy: " "
2012/09/28 16:29:49 [debug] 26416#0: *1 http script copy: "Connection: close "
2012/09/28 16:29:49 [debug] 26416#0: *1 http proxy header: "Accept-Encoding: identity, deflate, compress, gzip"
2012/09/28 16:29:49 [debug] 26416#0: *1 http proxy header: "Accept: */*"
2012/09/28 16:29:49 [debug] 26416#0: *1 http proxy header: "User-Agent: Android/1.0"
2012/09/28 16:29:49 [debug] 26416#0: *1 http proxy header:
"POST //login HTTP/1.0
Host: ireedr.com
X-Real-IP: 106.187.97.22
Connection: close
Accept-Encoding: identity, deflate, compress, gzip
Accept: */*
User-Agent: Android/1.0
"
...
Maybe the post rewrite phase rewrote the request. Can anybody help me solve this problem, or explain why nginx does this? Much appreciated.
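For what it's worth, the commonly cited explanation for this symptom is that an if block inside a location behaves like an implicit nested location, and when proxy_pass carries a URI part (here, the trailing slash in http://localhost:8785/) nginx can no longer work out which part of the request URI to substitute, so the proxied URI gets mangled into //login. Assuming nothing depends on that trailing slash, a sketch of the usual workaround is to drop the URI part so the request URI is forwarded unchanged:
location / {
    # no URI part on proxy_pass: the request URI is passed through as-is,
    # so POST /login stays POST /login even with the if block present
    proxy_pass http://localhost:8785;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    if ( $http_user_agent ~ Android ){
        # Android-specific rewrite would go here
    }
}
A map on $http_user_agent declared at http level is another frequently recommended way to avoid if inside location blocks altogether.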

    Read the article

  • Set up Linux box for hosting a-z

    - by microchasm
    I am in the process of reinstalling the OS on a machine that will be used to host a couple of apps for our business. The apps will be local only; access from external clients will be via VPN only. The prior setup used a hosting control panel (Plesk) for most of the admin, and I was looking at using another similar piece of software for the reinstall, but I figured I should finally learn how it all works. I can do most of the things the software would do for me, but am unclear on how it all fits together. This is all an attempt to further distance myself from the land of Configuration Programmer/Programmer, if at all possible. I can't find a full walkthrough anywhere for what I'm looking for, so I thought I'd put up this question; if people can help me along the way, I will edit this with the answers and document my progress/pitfalls. Hopefully someday this will help someone down the line.
The details:
CentOS 5.5 x86_64
httpd: Apache/2.2.3
mysql: 5.0.77 (to be upgraded)
php: 5.1 (to be upgraded)
The requirements:
SECURITY!!
Secure file transfer
Secure client access (SSL certs and CA)
Secure data storage
Virtualhosts/multiple subdomains
Local email would be nice, but not critical
The steps:
1. Download the latest CentOS DVD ISO (a torrent worked great for me).
2. Install CentOS: while going through the install, I checked the Server Components option, thinking I was going to be using another Plesk-like admin panel. In hindsight, considering I've decided to try to go my own way, this probably wasn't the best idea.
3. Basic config: set up users, networking/IP address, etc. Yum update/upgrade.
4. Upgrade PHP: to upgrade PHP to the latest version, I had to look to a repo outside CentOS. IUS looks great and I'm happy I found it!
#cd /tmp
#wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/epel-release-1-1.ius.el5.noarch.rpm
#rpm -Uvh epel-release-1-1.ius.el5.noarch.rpm
#wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/ius-release-1-4.ius.el5.noarch.rpm
#rpm -Uvh ius-release-1-4.ius.el5.noarch.rpm
#yum list | grep -w \.ius\. [lists all packages available in the IUS repo]
#rpm -qa | grep php [lists the installed packages that need to be removed; they must be removed before installing the IUS packages, otherwise there will be conflicts]
#yum shell
>remove php-gd php-cli php-odbc php-mbstring php-pdo php php-xml php-common php-ldap php-mysql php-imap
Setting up Remove Process
>install php53 php53-mcrypt php53-mysql php53-cli php53-common php53-ldap php53-imap php53-devel
>transaction solve
>transaction run
Leaving Shell
#php -v
PHP 5.3.2 (cli) (built: Apr 6 2010 18:13:45)
This process removes the old version of PHP and installs the latest.
5. Upgrade MySQL: pretty much the same process as above with PHP.
#/etc/init.d/mysqld stop [OK]
#rpm -qa | grep mysql [installed mysql packages]
#yum shell
>remove mysql mysql-server
Setting up Remove Process
>install mysql51 mysql51-server mysql51-devel
>transaction solve
>transaction run
Leaving Shell
#service mysqld start [OK]
#mysql -v
Server version: 5.1.42-ius Distributed by The IUS Community Project
The above upgrade instructions courtesy of the IUS wiki: http://wiki.iuscommunity.org/Doc/ClientUsageGuide
6. Create a chroot jail to hold SFTP users via rssh. This forces SCP/SFTP and circumvents a traditional FTP server setup.
#cd /tmp
#wget http://dag.wieers.com/rpm/packages/rssh/rssh-2.3.2-1.2.el5.rf.x86_64.rpm
#rpm -ivh rssh-2.3.2-1.2.el5.rf.x86_64.rpm
#useradd -m -d /home/dev -s /usr/bin/rssh dev
#passwd dev
Edit /etc/rssh.conf to grant SFTP access to rssh users:
#vi /etc/rssh.conf
Uncomment the line "allowscp"
This allows me to connect to the machine via the SFTP protocol in Transmit (my FTP program of choice; I'm sure it's similar with other FTP apps). The above SFTP instructions were appropriated (with appreciation!) from http://www.cyberciti.biz/tips/linux-unix-restrict-shell-access-with-rssh.html
And this is where I'm at. I will keep editing this as I make progress. Any tips on how to configure virtual interfaces/IP-based virtual hosts for SSL, set up a CA, or anything else would be appreciated.
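Since the question asks about setting up a CA: a minimal private-CA sketch with openssl, sufficient for VPN-only clients, could look like the following (the file names and CN values are placeholders I've made up, not part of the original setup):
#openssl genrsa -out ca.key 4096 [private key for the CA]
#openssl req -new -x509 -days 3650 -key ca.key -out ca.crt -subj "/CN=Internal CA" [self-signed root cert, good for ~10 years]
#openssl genrsa -out server.key 2048 [private key for the web server]
#openssl req -new -key server.key -out server.csr -subj "/CN=apps.internal.example" [certificate signing request]
#openssl x509 -req -days 365 -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt [sign the CSR with the CA]
The server.key/server.crt pair would then be referenced from the Apache SSL vhost via SSLCertificateKeyFile and SSLCertificateFile, and ca.crt distributed to the VPN clients so the certificate is trusted.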

    Read the article

< Previous Page | 332 333 334 335 336 337 338 339 340 341 342 343  | Next Page >