Search Results

Search found 501 results on 21 pages for 'reliability'.

Page 15/21 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21  | Next Page >

  • The Problem Should Define the Process, Not the Tool

    - by thatjeffsmith
    All around awesome tool, but not the only gadget in your toolbox. I'm stepping down from my SQL Developer pulpit today and standing up on my philosophical soap box. I'm frequently asked to help folks transition from one set of database tools over to Oracle SQL Developer, which I'm MORE than happy to do. But I'm not looking to simply change the way people interact with the Oracle database. What I care about is your productivity. Is there a faster, more efficient way for you to connect the dots, get from A to B, or just get home to your kids or to the pub for happy hour? If you have defined a business process around a specific tool, what happens when that tool 'goes away?' Does the business stop? No, you feel immediate pain until you are able to re-implement the process using another mechanism. Where I get confused, or even frustrated, is when someone asks me to redesign our tool to match their problem. Tools are just tools. Saying you 'can't load your data anymore because XYZ' isn't valid when you could easily do that same task via SQL*Loader, Create Table As Select, or nine other mechanisms. Sometimes change brings an opportunity to improve the process. Don't be afraid to step back and re-evaluate a problem with a fresh set of eyes. Just trying to replicate your process in another tool exactly as it was done in the 'old tool' doesn't always make sense. Quick sidebar: scheduling a Windows program to kick off thousands if not millions of table inserts from Excel, versus using a 'proper' server process built on SQL*Loader and/or external tables, means sacrificing scalability and reliability for convenience. Don't let old habits blind you to new solutions and possibilities. Of course I'm not going to sit here and say that our tools aren't deficient in some areas or can't be improved upon. But I bet if we work together we can find something that's not only better for the business, but is also better for you. What do you 'miss' since you've started using SQL Developer as your primary Oracle database tool? I'd love to start a thread here and share ideas on how we can better serve you and your organization's needs. The end solution might not look exactly like what you had in mind starting out, but I had no idea I'd be a Product Manager when I started college either. What can you no longer 'do' since you picked up SQL Developer? What hurts more than it should? What keeps you from being great versus just good?
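
    As one illustration of the "other mechanisms" point: a Create Table As Select pushes the entire copy into the database engine rather than round-tripping rows through a client. Below is a minimal sketch using the python-oracledb driver; the connection details and the src_table/staging_copy names are invented for the example.

        # Sketch: server-side bulk copy via CTAS (names and credentials are examples)
        import oracledb

        conn = oracledb.connect(user="loader", password="secret", dsn="dbhost/orclpdb1")
        with conn.cursor() as cur:
            # the database performs the copy itself -- no per-row client traffic
            cur.execute("CREATE TABLE staging_copy AS SELECT * FROM src_table")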

    Read the article

  • Event-Driven Debugging

    - by Brian Donahue
    Most application troubleshooting involves getting an error, analyzing the error message, and at worst, attaching a debugger to work out the real cause. What is not really covered is how to troubleshoot an application that is not errant, but is having a performance issue, and more than likely in the middle of the night when you are snug in your bed, sawing logs. What you need is an ever-vigilant cyborg who never sleeps to sit in front of your server all night, but as SkyNet is not live yet, you can settle for the next-best thing. Windows provides performance counters and alerts that can tell you when an application reaches an unacceptable threshold of naughty behavior, but although it can tattle on your brainchild, it won't be the child psychiatrist that you need to tell you why he's pulling your server's pigtails and pulling faces at the teacher. What you need is to plug a debugger into Performance Monitor and have it tell you what's going on with your application at the time. For this purpose, I used Microsoft's MDbgEngine as the basis for an application that will dump a program's stacks; I call it Application Slicer Dicer Wonder Dumper Super Cyborg, or StackOMatic for short. StackOMatic can look at a program's behavior and tell you if the stacks are not moving, but it can also work on the command line to dump all managed methods on the stack at will. Now that there is a command you can use to dump the stacks, all you need to do is politely tell Windows to run it when you're displeased with your creation as it's trashing the CPU of your server at 3 AM. The first step is to create a scheduled task to tell StackOMatic to dump your application. Start Task Scheduler, right-click Task Scheduler Library, and then Create Task. For this exercise I'm creating a task that will dump the Red Gate SQL Monitor Base Monitor Service. In the Actions tab, I enter the path to StackOMatic and use the arguments to log the stack dump to a file: /PN:RedGate.Response.Engine.Alerting.Base.Service /OUT:c:\users\administrator\MonitorLog.txt Next, I go into Windows Server 2008's Reliability and Performance Monitor and add a new Data Collector Set. This set will produce an alert on the %Processor Time for the service. When the processor time breaches 50%, it will run the StackDumpBaseService task I created. Whenever the service misbehaves, it will append to the log file. Now when I go to work in the morning, I can see what the service was doing when it overloaded the processor and take action.
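
    The same trigger-plus-dump pattern can also be sketched outside Task Scheduler. The fragment below is an illustrative stand-in, assuming the psutil package; the StackOMatic path and process name are assumptions, while the /PN and /OUT arguments mirror the ones quoted above.

        # Sketch: sample a process's CPU and run a stack-dump command when it
        # breaches the threshold (paths and process name are hypothetical)
        import subprocess
        import psutil

        THRESHOLD = 50.0  # percent CPU, mirroring the Data Collector Set alert

        def watch(process_name, dump_cmd):
            # find the target process by executable name and sample its CPU use
            for proc in psutil.process_iter(["name"]):
                if proc.info["name"] == process_name:
                    if proc.cpu_percent(interval=5) > THRESHOLD:
                        subprocess.run(dump_cmd, check=False)

        watch("RedGate.Response.Engine.Alerting.Base.Service.exe",
              [r"C:\tools\StackOMatic.exe",
               "/PN:RedGate.Response.Engine.Alerting.Base.Service",
               r"/OUT:c:\users\administrator\MonitorLog.txt"])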

    Read the article

  • Redesigning an Information System - Part 1

    - by dbradley
    Through the next few weeks or months I'd like to run a small series of articles sharing my experiences from the largest of the projects I've worked on, and explore some of the real-world problems I've come across and how we went about solving them. I'm afraid I can't give too many specifics on the project right now as it's not yet complete, so you'll have to forgive me for being a little abstract in places! To start with I'm going to run through a little of the background of the problem and the motivations to re-design from scratch. Then I'll work through the approaches taken to understanding the requirements, designing, implementing, testing and migrating to the new system. Motivations for Re-designing a Large Information System The system is one that's been in place for a number of years and was originally designed to do a significantly different job from the one it's now being used for. This is mainly due to the product maturing as well as client requirements changing. As with most information systems, this one can be defined in four main areas of functionality: Input – adding information to the system; Storage – persisting information in an efficient, searchable structure; Output – delivering the information to the client; Control – management of the process. There can be a variety of reasons to re-design an existing system; a few of our own turned out to be factors such as: overall system reliability, system response time, failure isolation and recovery, maintainability of code and information, general extensibility to solve future problems, separation of business and product concerns, and new or improved features. The factor that started the thought process was the desire to improve the way in which information was entered into the system. However, this alone was not the entire reason for deciding to redesign. Business Drivers Typically all software engineers would prefer to do a project from scratch themselves. It generally means you don't have to deal with problems created by predecessors and you can create your own absolutely perfect solution. However, the reality of working within a business is that the bottom line comes down to return on investment. For a medium-sized business such as mine there must be actual value that can be delivered within a reasonable timeframe for any work to be started. As a result, any long-term project will generally take a lot of effort and consideration to be approved by those in charge, and therefore it might be better to break the project down into more manageable chunks which allow more frequent deliverables and also deliver value within a shorter timeframe. As the only thing of concern was the method for inputting information, this is where we started with requirements gathering and design. However, knowing that there might be more to the problem, and not limiting your design decisions before the requirements are understood, is key to finding the best solutions.

    Read the article

  • Wacom consumer tablet driver service may crash while opening Bamboo Preferences, often after resuming computer from sleep

    - by DragonLord
    One of the ExpressKeys on my Wacom Bamboo Capture graphics tablet is mapped to Bamboo Preferences, so that I can quickly access the tablet settings and view the battery level (I have the Wireless Accessory Kit installed). However, when I connect the tablet to the computer, in wired or wireless mode, and attempt to open Bamboo Preferences, the Wacom consumer tablet driver service may crash, most often when I try to do so after resuming the computer from sleep. There is usually no direct indication of the crash (although I once did get "Tablet Service for consumer driver stopped working and was closed"), only that the cursor shows that the system is busy for a split second. When this happens, the pen no longer tracks on the screen when in proximity of the tablet (even though it is detected by the tablet itself); however, touch continues to function correctly. To recover from this condition, I need to restart the tablet driver services. I got tired of having to go through Task Manager to restart the services every time this happens, so I ended up writing the following command script, with a shortcut on the desktop for running it with elevated privileges:

        net stop TabletServicePen
        net start TabletServicePen
        net stop TouchServicePen
        net start TouchServicePen

    Is there something I can do to prevent these crashes from happening in the first place, or do I have to deal with this issue until the driver is updated? Windows 7 Home Premium 64-bit; tablet drivers are up to date. Technical details: Action Center gives the following details about the crash in Reliability Monitor:

        Source: Tablet Service for consumer driver
        Summary: Stopped working
        Date: 10/15/2012 2:48 PM
        Status: Report sent
        Description:
          Faulting Application Path: C:\Program Files\Tablet\Pen\Pen_Tablet.exe
        Problem signature:
          Problem Event Name: APPCRASH
          Application Name: Pen_Tablet.exe
          Application Version: 5.2.5.5
          Application Timestamp: 4e694ecd
          Fault Module Name: Pen_Tablet.exe
          Fault Module Version: 5.2.5.5
          Fault Module Timestamp: 4e694ecd
          Exception Code: c0000005
          Exception Offset: 00000000002f6cde
          OS Version: 6.1.7601.2.1.0.768.3
          Locale ID: 1033
          Additional Information 1: 9d4f
          Additional Information 2: 9d4f1c8d2c16a5d47e28521ff719cfba
          Additional Information 3: 375e
          Additional Information 4: 375ebb9963823eb7e450696f2abb66cc
        Extra information about the problem:
          Bucket ID: 45598085

    Exception code 0xC0000005 means STATUS_ACCESS_VIOLATION. The event log contains essentially the same information.
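
    Until the driver is fixed, the restart could also be automated. Below is a hedged sketch of a small watchdog in the same spirit as the batch script, assuming the psutil package on Windows; the service names come from the script above, and the polling interval is arbitrary.

        # Sketch: restart the Wacom services only when one has actually stopped
        import subprocess
        import time
        import psutil

        SERVICES = ["TabletServicePen", "TouchServicePen"]

        while True:
            for name in SERVICES:
                try:
                    if psutil.win_service_get(name).status() != "running":
                        subprocess.run(["net", "start", name], check=False)
                except psutil.NoSuchProcess:
                    pass  # service not installed; nothing to restart
            time.sleep(30)  # arbitrary polling interval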

    Read the article

  • Server configuration advice for new site that could get lots of traffic within 6m

    - by alchemical
    We're setting up a new Web 2.0-type site with elements of e-commerce. Budget is kind of tight. Due to the nature of the site and promotions, etc., we expect traffic could ramp up fairly quickly. Looking for advice on a good configuration to start with; we're looking to co-lo with CalPop in downtown LA. We've looked at Dell and ABMX.com, and got a quote from CalPop (they make their own servers, as they also do managed hosting). The price range has been anywhere from about $1200-$3300 per server. We're thinking to start with a web server and a db server, both with mirrored drives. It would be nice to stay under about $2k per server if possible. Minimum configuration for each would probably be a quad-core with 8GB RAM. Thinking to run Windows Server 2008 R2 (Web Edition?) and SQL Server 2008. Looking for advice on the best server configurations and/or brands that fit the budget, yet will allow us to smoothly scale as traffic increases. Reliability is also pretty important. Also wondering if a switch/router is necessary or useful to connect the two servers.

    Read the article

  • Laptop choice for development: MacBook Pro 17 vs Dell Studio XPS 16 vs HP Envy 15

    - by Shalan
    Hey! First things first: let me state that I am not intending to play games on this. I have narrowed down to these 3 purely based on specs and each brand's reliability in the market. I intend to primarily use: Visual Studio 2008 Pro a lot (develop and deploy on Windows platforms), SQL Server 2005, Oracle 10g, Adobe Photoshop CS4, Microsoft Expression Studio, and Google SketchUp. I currently use a desktop PC (Core2Duo 2.66GHz with 3GB DDR2 memory) running Vista Business 32-bit, and I have to admit that, especially for Visual Studio, it's quite sluggish, to the point where it affects productivity. Furthermore, I intend to only use the notebook on a table, with a cooled surface, like granite :) , so I would appreciate people's input with regard to heat issues. I'm aware that the Dell's primary exhaust gets blocked by the lid when open, but some reviews don't seem to place extraordinary emphasis on heat issues resulting from this. My options for the Dell/Alienware: Core i7 720QM, 4GB DDR3 memory, ATI Mobility 3670 (512MB), 128GB solid state drive, 16-inch Full HD RGB-LED LCD display (1080p), 3-year next-business-day support. My configuration for the Apple MBP: Core2Duo 2.8GHz (I'm assuming the T9600), 4GB DDR3 memory, 128GB solid state drive, standard 1-year support. The one advantage I can think of with the MBP is the addition of OS X (though I'm unsure what I would use it for, other than purely to play around with a much-boasted-about OS). What are your thoughts on this, especially regarding build quality, heat, performance and battery life? Much thanks! ~shalan

    Read the article

  • Increasing SQL Server / Sage performance with SSD? (Dell PE T410)

    - by Anthony
    I have a client wanting better performance from their Sage (Accpac & CRM) server (v5.5, soon to be v7). It's running on 1 of 2 Hyper-V VMs (Svr2008) on a Dell PE T410 server with 24GB of RAM (1333MHz) & dual quad-cores, and both VMs (only their C: drives) are on a single RAID5 array. All clients connect via 1Gb Ethernet. The 2nd VM is SBS2008 with 9GB RAM (all SBS dbs & company data are on a separate RAID5 array), & 3GB RAM for the Svr2008 hypervisor. I've given the Sage/SQL Server VM all the RAM I can (12GB) & SQL Server RAM caching (~8GB, never exceeds ~7.5GB, e.g. the entire db can now be cached in RAM), and that's helped significantly. Upgrading the hypervisor to Svr2012 is an obvious step, but probably not a dramatic improvement? What about an SSD for this Sage/SQL Server VM (VM = 100GB, <10GB for the actual live DB)? Can SSDs be put into the SAS hot-swap bays? Or will I have to use the mobo SATA (3Gbps?) ports, or a PCI-E SSD card? Should SSDs be RAIDed for this situation? Or does the higher reliability of SSDs offset the need for RAID1/5/10? (I have nightly full-disk backups.) New territory for me; would appreciate some feedback. Thanks, Anthony.
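
    One way to check whether storage is actually the bottleneck before buying SSDs is to look at SQL Server's per-file I/O stall statistics. A hedged sketch via pyodbc follows; the server name and driver string are assumptions for the environment described.

        # Sketch: average read/write latency per database from SQL Server's
        # file-stats DMV (connection string is an example)
        import pyodbc

        conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                              "SERVER=sage-vm;Trusted_Connection=yes")
        rows = conn.execute("""
            SELECT DB_NAME(database_id) AS db,
                   num_of_reads,
                   1.0 * io_stall_read_ms  / NULLIF(num_of_reads, 0)  AS avg_read_ms,
                   1.0 * io_stall_write_ms / NULLIF(num_of_writes, 0) AS avg_write_ms
            FROM sys.dm_io_virtual_file_stats(NULL, NULL)
        """).fetchall()
        for r in rows:
            # sustained averages well above ~20 ms suggest the RAID5 array
            # is the bottleneck and an SSD would likely pay off
            print(r.db, r.num_of_reads, r.avg_read_ms, r.avg_write_ms)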

    Read the article

  • How does enterprise failover, such as with google.com, actually work?

    - by Alex Regan
    We have a few Fedora systems that are configured for web, FTP, and email services. We'd like to mirror these services so that we can provide near-100% reliability for our users. I'm a fairly experienced Linux administrator, but don't have much experience with redundant systems. What is the best way to do this? How do Google and Amazon do it? Google.com resolves to multiple IP addresses, but if my local desktop caches one of the IPs and that one becomes unreachable, I'm going to get a failed-connection message. How do they prevent that from happening? If one of their servers goes down, how is traffic automatically redirected to another system, without the end user ever knowing it? I understand there are failover devices, but they're only for failing over the system itself, not a complete network. Let's say we have the worst-case scenario, such as my primary system becoming inaccessible. What are the fundamental components that are used on Linux systems to provide this capability? I'm looking for concepts, or approaches, not answers like "check out OpenStack". What are the actual pieces that make up the solution? What has to be done to implement this capability? Hopefully my question is clear. I'd like to know what the pieces are that make up a failover system, and what approach is taken by successful organizations that implement it. Thanks again, Alex
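
    Whatever tooling ends up being used (VRRP via keepalived, heartbeat/pacemaker, DNS with low TTLs, or a load balancer), the core loop is the same: a health check plus an action that moves traffic. The sketch below only illustrates that loop; the addresses are examples, and promote_standby() is a placeholder for the real mechanism, not an API.

        # Conceptual sketch of the failover loop, not a production answer
        import socket
        import time

        PRIMARY = ("198.51.100.10", 80)   # example address (RFC 5737 range)
        FAILURES_BEFORE_FAILOVER = 3

        def healthy(addr, timeout=2.0):
            try:
                with socket.create_connection(addr, timeout=timeout):
                    return True
            except OSError:
                return False

        def promote_standby():
            # placeholder: in real setups this is a gratuitous ARP for a
            # virtual IP (VRRP), a DNS update with a low TTL, or a load
            # balancer pool change
            print("primary down -- promoting standby")

        misses = 0
        while True:
            misses = misses + 1 if not healthy(PRIMARY) else 0
            if misses >= FAILURES_BEFORE_FAILOVER:
                promote_standby()
                break
            time.sleep(5)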

    Read the article

  • How do I keep folders synced and backed up between two macs using a Linux NAS (rsync?)

    - by Hultner
    I've got two primary computers, a Mac Pro and a MacBook Pro for when I'm on the go. I've also got a Linux server which also acts as a NAS. Currently I back up the entire computers to an external drive with Time Machine, which is rather useless and doesn't sync anything. What I really want to do is to keep my important files synced between both computers and my NAS (which is running RAID 5); that way I'm not backing up easily replaceable system files, and I've got all my important files in 3 places, two of which are running RAID, so at least 5 drives would have to crash at the same time before actual data loss occurred. The folders I want to keep synced are basically my photo, documents, development, MAMP and work folders; I also want to keep the user Library folder backed up but not synced. I'm thinking that I'd have to use rsync, but don't know how. Before suggesting Dropbox and similar services: I don't want to use them for several reasons, some of them being security (Dropbox obviously proved this), speed (sometimes I'll sync gigabytes of data, and that will be significantly faster locally, and probably even through VPN, as I have a Gigabit pipe), space (space on my NAS is cheap and practically limited only by my needs), reliability (even if my internet were to go down I still need to be able to keep my files synced in case I need to go somewhere on the fly), and price (I already have all the hardware, and for the number of gigabytes and the bandwidth I'd need, I doubt there's any free or cheap service). Those are my main reasons for wanting to keep it local. I'm sorry for any spelling or grammatical mistakes I might have made; I'm writing this on my smartphone from a shaky train, and English isn't my mother tongue. I greatly appreciate any answers, even ones that only partly solve my problem.
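
    One possible shape for the jobs, driving rsync from a small script; the paths, NAS hostname and volume layout are placeholders. The -a flag preserves metadata, -z compresses over the wire, and --delete turns a copy into a true mirror, so it is used here only for the synced folders, not for the backup.

        # Sketch: mirror synced folders to the NAS, back up the Library folder
        # without deletions (all paths/hosts are examples)
        import subprocess

        NAS = "user@nas.local"
        SYNCED = ["Photos", "Documents", "Development", "MAMP", "Work"]

        def sync(folder):
            # one-way mirror to the NAS; for true two-way sync between the two
            # Macs, run the reverse direction too (or use a tool like unison)
            subprocess.run(["rsync", "-az", "--delete",
                            f"/Users/me/{folder}/", f"{NAS}:/volume1/sync/{folder}/"],
                           check=True)

        def backup_library():
            # backed up but not synced: no --delete, so files removed locally
            # are still kept on the NAS
            subprocess.run(["rsync", "-az",
                            "/Users/me/Library/", f"{NAS}:/volume1/backup/Library/"],
                           check=True)

        for f in SYNCED:
            sync(f)
        backup_library()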

    Read the article

  • Creating mdraid device on top of other existing mdraid devices

    - by Dmitriusan
    I'm considering creating something like a "hierarchical RAID" and wondering whether it is possible using pure mdraid. Moreover, I'm going to boot from this device. I'm using Ubuntu Server 12.04 LTS with the GRUB 2 bootloader. The motivation behind doing this is: I have 4 x 1TB 7200rpm disks. Two are newer and faster (up to 200MB/s) and the other two are slower (up to 140MB/s). I want to create a RAID-0 device from them. When creating such a RAID-0 directly from the 4 hard disks, I get a total speed of up to ~480MB/s. That is roughly 4*120MB/s, so RAID-0 works at the speed of the slowest device. My idea is to create a separate RAID-0 md0 device from 500GB partitions of the slower hard disks. Theoretically, this md0 device will have a speed of 2*140=280MB/s. After that, I'm going to add this md0 device to a RAID-0 with the faster disks, finishing with up to 3*200=600MB/s. The stripe width for this RAID will be twice as big as for the underlying RAID with the slow disks. Questions are: is it possible, or am I missing something? Will that work as expected? Can I boot from such a consolidated RAID device? Any better ideas? Any pitfalls? I don't want to use fakeraid for consolidating the slow disks, for multiple reasons (portability, the ability to customize parameters, and so on). PS: Speed is needed for a home virtualization server, and just for experience/fun. Reliability is provided via regular automatic backups to a separate device. PPS: I also considered using different stripe widths for hard disks with different speeds in a single RAID, but mdraid does not seem to support that.
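
    For what it's worth, mdraid does accept an existing md device as a member of another array, which is what makes the nesting work. A minimal sketch of the two mdadm calls follows; the device names are examples only, and the partitions must already exist. Booting adds its own wrinkles, since GRUB 2 and the initramfs must assemble both levels, so that part is worth testing before committing data.

        # Sketch of the two-level layout via Python's subprocess; run as root,
        # device names are examples
        import subprocess

        def mdadm(*args):
            subprocess.run(["mdadm", *args], check=True)

        # 1) RAID-0 across the 500 GB partitions of the two slower disks
        mdadm("--create", "/dev/md0", "--level=0", "--raid-devices=2",
              "/dev/sdc1", "/dev/sdd1")

        # 2) RAID-0 across the two fast disks plus the composite device;
        #    --chunk lets the outer stripe be tuned against the inner one
        mdadm("--create", "/dev/md1", "--level=0", "--raid-devices=3",
              "--chunk=512", "/dev/sda1", "/dev/sdb1", "/dev/md0")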

    Read the article

  • Laptop recommendation - Portable Gaming

    - by ivan
    So, I'm looking for a new laptop (http://superuser.com/questions/116869/toshiba-satellite-u500-totally-damaged-lcd). My requirements for a new laptop are: a good keyboard (illuminated) and touchpad (multimedia keys included; it should be better than the Toshiba U500's), and a good graphics card, with a system rating of 6.3 and up for gaming graphics (my Toshiba U500 has 6.3). I used to run some heavy games on my Toshiba U500 with its ATI Mobility Radeon 4570 with 512MB VRAM, but the framerates are not that nice on high settings. I also want a decent CPU, though I think all the new Core i3, i5 and i7 parts can run most recent resource-intensive games (my Toshiba U500 has a Core 2 Duo T6500, 2.13GHz). I'm also looking for long-term reliability, good sound quality, lots of fast RAM of course (4GB DDR3, 1066MHz and up), and a clear-looking LED screen with a decent resolution. (I can accommodate a laptop with a screen size of 13 inches up to 15.6 inches, and I don't want it to be heavy because I might be taking it outdoors.) I was actually impressed when I saw the HP Pavilion DV6t, but the screen resolution seems a little too low for 15.6 inches. The Pavilion DV3 is also good, but I want to know if there are other options. Looking for some opinions... Thanks. :D

    Read the article

  • New virtualization project and old SAN

    - by Chris
    Hi, we'll shortly start a partial virtualization of our infrastructure and consolidate a dozen servers into virtual instances. We'll also add some client application virtualization into the mix for good measure. Two HP DL380s with the new Xeon 56xx CPUs and 96GB of memory each, running XenServer + XenApp, will then take charge of most of our IT needs. So far, so good. One element that is missing from the picture is the storage part. We need some sort of shared storage to enable live motion and other HA features. We have an IBM DS4300 SAN that we can use for that. But since it's been in production since 2005, I'm not sure about giving such a critical role to a 5-year-old part. So my question is: what is the reliability of this kind of equipment after 5 years? Can it last 10 years with no or few problems? Since our budget is tight, not buying another SAN would be a big plus. This leads me to another question: FC disks cost an arm and a leg from IBM. When I type the replacement number into Google (for example, IBM 300GB 15K 4GBPS FC HDD 42D0410), I can find it at a fraction of the price at various sites. So am I stupid to buy from IBM, or naive to trust a 3rd-party reseller? Thanks, Chris

    Read the article

  • Cisco Multi-DMZ firewall

    - by BParker
    I need to find a firewall that will give me 1 LAN port and 5-7 DMZ ports. I have a requirement to replace some FreeBSD systems that are used to run some testing equipment. It is essential that the DMZ ports cannot communicate with each other, but the LAN port can communicate with everyone. That way a user on the LAN can connect to the test systems, but the test systems are isolated entirely and cannot interfere with each other. One of the DMZs will be connected to a VMware ESXi server, one to a standard server, and the rest to various types of equipment. The LAN port will be connected to the corporate LAN switch. Sorry if I am a little vague; I am just trying to work all this out myself! Currently we have a FreeBSD box configured, but the quad-port NICs are pretty expensive, and the PC itself is old, so I would prefer to replace it with a dedicated piece of kit which can do the same job, but more reliably! These test rigs are used all over the place, and get moved quite often, so I am aiming for Cisco kit for ease of configuration and reliability of the hardware itself. Thanks

    Read the article

  • Connecting to IPv6 hosts when mobile and on a Surface?

    - by Cerebrate
    Specifically, at my usual location, I have an IPv6 network which connects to the Internet via a static tunnel set up to Hurricane Electric's tunnel broker ( http://www.tunnelbroker.net/ ). This works essentially perfectly, allowing inbound and outbound connectivity. Now, however, I need to connect back to host(s) on that network over IPv6 from mobile tablet(s); meaning the conditions are such that there is no guarantee or even likelihood of native IPv6 support where it happens to be at any given time, and the IPv4 address of the tablet will change on a fairly regular basis. The native Teredo support, as configured by default, functions well enough to let me ping my target hosts, but appears to have neither the reliability nor the throughput to support anything else; I have been unable to make any actual connections (trying a number of TCP-based protocols) using it. I had considered setting up an independent tunnel for the tablet(s), and using scripts to update the client endpoint IP address when it changes, but since both (a) many of the locations will be behind NAT devices over which I have no control, and (b) the option over which I do have control is an AT&T Unite hotspot which does not offer protocol 41 forwarding or respond to ICMP on its public address, this approach does not seem viable. I am additionally constrained as the mobile tablet(s) in question are Surface RTs, and as such are incapable of running, for example, AICCU client software. What is my best option to pursue to obtain IPv6 connectivity in this scenario?
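
    Since ping succeeds but TCP connections fail, it can help to test the two separately from a laptop on the same network as the tablet (the Surface RT itself can't run arbitrary tools). A small sketch that attempts a real TCP connection over IPv6; the hostname is a placeholder.

        # Sketch: distinguish "ICMP works" from "TCP works end to end" over IPv6
        import socket

        def test_ipv6_tcp(host, port=22, timeout=5.0):
            try:
                infos = socket.getaddrinfo(host, port, socket.AF_INET6,
                                           socket.SOCK_STREAM)
                family, socktype, proto, _, sockaddr = infos[0]
                with socket.socket(family, socktype, proto) as s:
                    s.settimeout(timeout)
                    s.connect(sockaddr)  # succeeds only if TCP works end to end
                    return True
            except OSError:
                return False

        print(test_ipv6_tcp("host.example.net"))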

    Read the article

  • Which internet scenario would be better?

    - by JL
    I currently have an 8 Mbps (down) / 512 kbps (up) telephone ADSL solution. I must say the reliability is excellent, and up until now it's been the fastest connection I could get, because I don't live in a cable zone. The real speed of my connection is around 7 Mbps, but sometimes I manage to get the full 8 Mbps. I use my connection for work, so it needs to be at least 99% reliable. Recently I was told by a guy who lives up the road that he has a wireless connection with an external antenna, his speeds are 20 Mbps down / 512 kbps up, and he's also paying about half of what I pay for my wired telephone connection. My question is: is wireless internet good enough for a power user who uses his connection for work 8 hours a day, including VPNing into servers remotely? Besides this, I also enjoy playing the odd network game; I'm not a WoW freak, but sometimes I do pick up the odd MMORPG and at times indulge in some semi-heavy gaming sprees. Will the wireless latency drive me crazy and seem slow in comparison? Will it be reliable enough? I also live in an area that snows heavily in winter. I guess it's a question of: should I go wireless or not? I've only had 1 wireless connection before, and that was years ago using iBurst technology; I remember it was terrible for VPN, but I guess the technology might have improved since then? What do you guys think?
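
    Latency and jitter, more than raw bandwidth, are what make VPN sessions and games feel slow, and they can be measured on each link before committing. A rough sketch: time repeated TCP handshakes to a server that matters (the hostname is a placeholder).

        # Sketch: estimate latency and jitter by timing TCP handshakes
        import socket
        import statistics
        import time

        def handshake_ms(host, port=443):
            start = time.perf_counter()
            with socket.create_connection((host, port), timeout=5):
                pass
            return (time.perf_counter() - start) * 1000

        samples = [handshake_ms("vpn.example.com") for _ in range(20)]
        print(f"avg {statistics.mean(samples):.1f} ms, "
              f"jitter (stdev) {statistics.stdev(samples):.1f} ms")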

    Read the article

  • Trying to understand Wireless N vs Wireless AC

    - by EGHDK
    Whenever a new wireless standard gets approved, you expect faster speeds and longer range. From everything that I've read about it, it seems that AC will only transfer over the 5GHz band, at up to 3Gbps. Studying the new AC routers on the market, it seems they transfer over both 5GHz and 2.4GHz, and 5GHz only transfers at 1.3Gbps, which isn't what AC is supposed to be. I know there is a difference between what the standard actually says and what products will actually do, but is there any reason for this? Are there any other main differences between AC and N? I've heard people discussing AC and saying that it's finally "fixing" what N was supposed to fix; what do they mean by that? Any security benefits? I have seen this image online: Will AC really do that? Will it require an AC network card in my laptop for that to actually happen? Lastly, will the router only be able to communicate with AC devices if I have beamforming technology on? I know it's a ton of questions, but most articles online seem to be outdated and don't provide much reliability.

    Read the article

  • Abnormally high amount of Transmit discards reported by Solarwinds for multiple switches

    - by Jared
    I have several 3750X Cisco switches that, according to our SolarWinds NPM, are producing billions of transmit discards per day. I'm not sure why it's reporting these discards. Many of the ports on the 3750Xs have 2960s connected to them and are hardcoded as trunk ports. SolarWinds NPM version 10.3; Cisco IOS version 12.2(58)SE2. Total output drops: 29139431:

        GigabitEthernet1/0/43 is up, line protocol is up (connected)
          Hardware is Gigabit Ethernet, address is XXXX (bia XXXX)
          Description: XXXX
          MTU 1500 bytes, BW 100000 Kbit/sec, DLY 100 usec,
             reliability 255/255, txload 1/255, rxload 1/255
          Encapsulation ARPA, loopback not set
          Keepalive set (10 sec)
          Full-duplex, 100Mb/s, media type is 10/100/1000BaseTX
          input flow-control is off, output flow-control is unsupported
          ARP type: ARPA, ARP Timeout 04:00:00
          Last input 00:00:47, output 00:00:50, output hang never
          Last clearing of "show interface" counters 1w4d
          Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 29139431
          Queueing strategy: fifo
          Output queue: 0/40 (size/max)
          5 minute input rate 0 bits/sec, 0 packets/sec
          5 minute output rate 35000 bits/sec, 56 packets/sec
             51376 packets input, 9967594 bytes, 0 no buffer
             Received 51376 broadcasts (51376 multicasts)
             0 runts, 0 giants, 0 throttles
             0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
             0 watchdog, 51376 multicast, 0 pause input
             0 input packets with dribble condition detected
             115672302 packets output, 8673778028 bytes, 0 underruns
             0 output errors, 0 collisions, 0 interface resets
             0 unknown protocol drops
             0 babbles, 0 late collision, 0 deferred
             0 lost carrier, 0 no carrier, 0 pause output
             0 output buffer failures, 0 output buffers swapped out

        sh controllers gigabitEthernet 1/0/43 utilization:
          Receive Bandwidth Percentage Utilization : 0
          Transmit Bandwidth Percentage Utilization : 0
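
    As a cross-check on what NPM polls, the same counter can be scraped from saved "show interface" output. A minimal sketch, assuming output in the format shown above saved to a file:

        # Sketch: extract "Total output drops" per interface from saved output
        import re

        def output_drops(show_interface_text):
            drops = {}
            current = None
            for line in show_interface_text.splitlines():
                m = re.match(r"^(\S+) is up, line protocol", line)
                if m:
                    current = m.group(1)
                m = re.search(r"Total output drops: (\d+)", line)
                if m and current:
                    drops[current] = int(m.group(1))
            return drops

        with open("show_int.txt") as f:
            for intf, n in output_drops(f.read()).items():
                print(intf, n)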

    Read the article

  • PDU management interface has low availability - product flaw or isolated issue

    - by DeanB
    Our colocation provider has supplied us with APC AP7932 switched 0U PDUs as part of several cabinets they provide us. We have had a lot of trouble with the network management aspect of these PDUs, which I'll describe below. We are moving to cage space in the same datacenter, and plan to provide our own PDUs, so I'd like to determine which enterprise-grade PDUs have been reliable performers from a remote management perspective. Our colo-provided PDUs are configured to support management via an SSL web UI and via telnet. We updated the firmware on all of them to the current version as of NOV2011. They respond to pings reliably, and we have no reason to suspect a network layer issue. However, we experience frequent hangs, timeouts, disconnects, and general unavailability from the embedded management host in all of the PDUs. We occasionally have to restart the microcontroller on the PDU to recover from what appears to be an occasional hard fault. The outlets stay powered (thankfully), but the management aspect is so unreliable that it has become an ops liability - we can't be confident that we could get into the PDU to power cycle a host if we needed to. We have 3 PDUs that all exhibit identical behavior. There are many manufacturers of enterprise-grade 0U switched PDUs, all with comparable features. If I looked at the datasheet for our current PDUs, they would appear to be a good fit -- only with the benefit of suffering through using them do we know to avoid them. I'd like to avoid picking a PDU that looks fine on paper, but has similar reliability issues. What has been others' experience with switched PDUs? Is this level of flakiness normal?
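
    To put numbers on the flakiness while evaluating replacements, a crude availability probe against the management UI may help. A sketch assuming the Python requests library; the URL and interval are placeholders, and certificate verification is skipped only because these units typically ship with self-signed certificates.

        # Sketch: log how often (and how slowly) the PDU web UI responds
        import time
        import requests

        URL = "https://pdu1.example.net/"  # placeholder management address

        while True:
            t0 = time.time()
            try:
                r = requests.get(URL, timeout=10, verify=False)
                status = f"HTTP {r.status_code} in {time.time() - t0:.1f}s"
            except requests.RequestException as e:
                status = f"FAILED after {time.time() - t0:.1f}s: {e.__class__.__name__}"
            print(time.strftime("%Y-%m-%d %H:%M:%S"), status)
            time.sleep(60)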

    Read the article

  • Formula to calculate probability of unrecoverable read error during RAID rebuild

    - by OlafM
    I need to compare the reliability of different RAID systems with either consumer or enterprise drives. The formula for the probability of hitting an error during a rebuild, ignoring mechanical problems, is simple:

        error_probability = 1 - (1 - per_bit_error_rate)^bits_read

    With 3TB drives I get: a 38% probability of experiencing a URE (unrecoverable read error) during the rebuild of a 2+1-disk RAID5 (4.7% for enterprise drives); 21% for a RAID1 (2.4% for enterprise drives); and a 51% probability of error during recovery for the 3+1 RAID5 often used in SOHO products like Synology's. Most people don't know about this. Calculating the error for single-disk tolerance is easy; my question concerns systems tolerant of multiple disk failures (RAID6/Z2, RAIDZ3, and RAID1 with multiple disks). If only the first disk is used for the rebuild, and the second one is read again from the beginning in case of a URE, then the error probability is the one calculated above, squared (14.5% for consumer RAID5 2+1, 4.5% for consumer RAID1 1+2). However, I suppose (at least in ZFS, which has full checksums!) that the second parity/available disk is read only where needed, meaning that only a few sectors are needed: how many UREs can possibly happen on the first disk? Not many, otherwise the error probability for single-disk-tolerance systems would skyrocket even more than I calculated. If I'm correct, a second parity disk would practically lower the risk to extremely low values. Am I correct?
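
    The quoted figures can be reproduced directly from that formula, assuming the usual spec-sheet URE rates of 1e-14 errors per bit read for consumer drives and 1e-15 for enterprise ones, with each surviving 3TB disk read in full during the rebuild:

        # Worked check of the single-disk-tolerance numbers quoted above
        def rebuild_ure_probability(disks_read, tb_per_disk, per_bit_error_rate):
            bits_read = disks_read * tb_per_disk * 1e12 * 8
            return 1 - (1 - per_bit_error_rate) ** bits_read

        for label, disks in [("RAID1 (read 1 disk)", 1),
                             ("RAID5 2+1 (read 2 disks)", 2),
                             ("RAID5 3+1 (read 3 disks)", 3)]:
            consumer = rebuild_ure_probability(disks, 3, 1e-14)
            enterprise = rebuild_ure_probability(disks, 3, 1e-15)
            print(f"{label}: consumer {consumer:.0%}, enterprise {enterprise:.1%}")
        # prints roughly 21%, 38% and 51% for consumer drives and 2.4%, 4.7%
        # and 6.9% for enterprise drives, matching the figures above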

    Read the article

  • OpenWrt vs DD-WRT

    - by Ioan Paul Pirau
    I have a TP-Link WR1043ND router and I want to install one of these two firmwares: OpenWrt or DD-WRT. I read that I can install custom packages and do much more than I can with the original firmware. I would like to ask someone with experience in using both OpenWrt and DD-WRT which one they would recommend, and why. To give a few reference points, I'm interested in: reliability – network stability, both on cable and wireless and on the USB drive; performance – network speed (very important) and also USB drive speed; configurability – the possibility to add extensions such as a torrent client, or FTP, SSH, WWW and SVN servers directly; ease of use – the ease of installation and configuration of the router; support/docs – how much info there is if you stumble upon a problem and have to find some documentation, or whether there's any free support (but that's a longshot). Of course I don't imagine that I will find the perfect firmware, or that one is vastly superior to the other. Also, if there's anyone out there who uses one of these firmwares on a TP-Link WR1043ND, it would be great to get some feedback about the impact of the changes from the original firmware. P.S. I'm also open to Tomato if it's the better one.

    Read the article

  • Cannot get the Windows Snipping Tool to auto-run with AutoHotkey

    - by jasondavis
    I am trying to get the Windows 7 Snipping Tool to run when I hit my PRINTSCREEN keyboard button with AutoHotkey. I have been unsuccessful so far, though. Here is what I have for the AutoHotkey script. I have tried this

        PRINTSCREEN::Run, c:\windows\system32\SnippingTool.exe

    and this

        PRINTSCREEN::Run, SnippingTool.exe

    and this

        PRINTSCREEN::Run, SnippingTool

    and all of those give me this error when I hit the PRINTSCREEN button... It basically says it cannot find the file. However, the file path seems to be correct; I can copy-paste it into a window and it opens the Snipping Tool. Any ideas why it will not work? Here is the full code of my AHK file:

        ;
        ; AutoHotkey Version: 1.x
        ; Language:           English
        ; Platform:           Win7
        ; Author:             Jason Davis <friendproject@>
        ;
        ; Script Function:
        ;   Template script (you can customize this template by editing "ShellNew\Template.ahk" in your Windows folder)
        ;
        #NoEnv              ; Recommended for performance and compatibility with future AutoHotkey releases.
        SendMode Input      ; Recommended for new scripts due to its superior speed and reliability.
        SetWorkingDir %A_ScriptDir%  ; Ensures a consistent starting directory.

        /*
        PRINTSCREEN = Will run Windows 7 snipping tool
        */
        PRINTSCREEN::Run, c:\windows\system32\SnippingTool.exe
        return

    Read the article

  • How do I configure postfix to use SES, and still be able to forward email from unverified external addresses?

    - by Jeff
    We are using postfix for email group lists (e.g. "[email protected]" will go to all members) on Amazon EC2 systems. For a variety of reasons (scalability and reliability) we would like to use SES for all outgoing email. I was able to configure postfix to use SES as the SMTP relay for outgoing messages. This works fine for all verified addresses. But of course, when an outsider emails me at "[email protected]", it chokes: postfix is configured to forward to my Gmail account (via the virtual table), but SES rejects the message because the outside sender is not verified. So none of our mailing groups configured through postfix will work this way. I would be happy to rewrite all "From" addresses before sending (and simply leave the Reply-To as the original sender), but I cannot seem to find a working configuration. No matter what I set in canonical or generic regexps, SES seems to reject all forwarded emails. Surely somebody must have configured postfix with SES to handle virtual addresses? How does this work?
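
    For the rewrite-all-senders approach described, one hedged possibility (untested here, though the parameters are standard postfix ones) is to rewrite only sender addresses, leaving recipient addresses untouched; the verified address below is a placeholder in the same scrubbed style as the post:

        # main.cf -- rewrite envelope and header senders on outgoing mail
        sender_canonical_classes = envelope_sender, header_sender
        sender_canonical_maps = regexp:/etc/postfix/sender_canonical

        # /etc/postfix/sender_canonical -- map every sender to one SES-verified address
        /^.*$/    [email protected]

    Note this rewrites the From: header as well, so the original sender would need to be preserved separately (for example by adding a Reply-To header before the rewrite) for replies to keep working.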

    Read the article

  • What to look for in a reliable backup hard disk?

    - by Senthil
    I want to buy an internal hard disk and use a docking station along with it for backing up important data. The size will be around 500GB to 1TB. I have a budget and several models fit into it. So far, they only seem to vary in size, speed and brand. These are the only things I can compare from the specs. I guess asking for which brand is best is completely subjective so I won't do that. I want my disk to have long life and be reliable. Doesn't matter if it is somewhat slow. Size: Should I go for the one with highest size within my budget? Will higher density cause problems? Or should I go for a moderately sized one? Does the number of platters have an impact? Speed: I do not want high performance. I want it to be reliable and last long. I am definitely not going to choose the expensive 10,000 rpm ones. Should I go for 5400 or 7200? Do these numbers affect longevity and reliability? Are there any other technical and objective factors that I should look for?

    Read the article

  • Exchange 2013 attachments too big?

    - by KPS
    I am having the toughest time sending large attachments; everywhere I have checked, my file size limit for send/receive is 100MB, yet users are unable to receive files even at 14MB. I'm using a spam filter (AppRiver) and have worked with their support for a very long time; we see the following errors in the logs:

        13:32:40.260 4 SMTP-000036([myserverIP]) rsp: 354 Start mail input; end with <CRLF>.<CRLF>
        13:33:41.038 3 SMTP-000033([myserverIP]) write failed. Error Code=connection reset by peer
        13:33:41.038 3 SMTP-000033([myserverIP]) [659500] failed to send. Error Code=connection reset by peer
        13:33:41.038 4 SMTP([myserverIP]) [659500] batch reenqueued into tail

    The Windows firewall is disabled on the Exchange server, and all other, smaller emails come through just fine. Here is a printout of the size limits:

        ConnectorType   ConnectorName                     MaxReceiveMessageSize        MaxSendMessageSize
        -------------   -------------                     ---------------------        ------------------
        Send            InternetSendConnector             -                            35 MB (36,700,160 bytes)
        Send            Appriver-Smarthost                -                            35 MB (36,700,160 bytes)
        Receive         Default EXCHSRVR                  100 MB (104,857,600 bytes)   -
        Receive         Client Proxy EXCHSRVR             100 MB (104,857,600 bytes)   -
        Receive         Default Frontend EXCHSRVR         100 MB (104,857,600 bytes)   -
        Receive         Outbound Proxy Frontend EXCHSRVR  100 MB (104,857,600 bytes)   -
        Receive         Client Frontend EXCHSRVR          100 MB (104,857,600 bytes)   -
        Receive         ExchangeRelay                     100 MB (104,857,600 bytes)   -
        TransportConfig -                                 100 MB (104,857,600 bytes)   10 MB (10,485,760 bytes)
        ADSiteLink      DEFAULTIPSITELINK                 Unlimited                    Unlimited

    There is also no anti-virus on the server that could be interfering. I am out of ideas at this point :(

    EDIT 1: After running the BPA, it gives an error: "Exchange Organization: Check whether the incoming message (CN=MyDomain,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=WG,DC=local) size isn't set. The maximum incoming message size isn't set in organization 'CN=MyDomain,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=WG,DC=local'. This can cause reliability problems." Here are the sizes as of now:

        [PS] C:\Temp>Get-TransportConfig | ft MaxSendSize, MaxReceiveSize

        MaxSendSize MaxReceiveSize
        ----------- --------------
        Unlimited   Unlimited

        [PS] C:\Temp>Get-ReceiveConnector | ft name, MaxMessageSize

        Name                             MaxMessageSize
        ----                             --------------
        Default EXCHSRVR                 100 MB (104,857,600 bytes)
        Client Proxy EXCHSRVR            100 MB (104,857,600 bytes)
        Default Frontend EXCHSRVR        100 MB (104,857,600 bytes)
        Outbound Proxy Frontend EXCHSRVR 100 MB (104,857,600 bytes)
        Client Frontend EXCHSRVR         100 MB (104,857,600 bytes)
        ExchangeRelay                    100 MB (104,857,600 bytes)

    Again, smaller emails come through just fine. It seems like there is a 10MB receive limit somewhere that I cannot find.

    Read the article

< Previous Page | 11 12 13 14 15 16 17 18 19 20 21  | Next Page >