Search Results

Search found 3695 results on 148 pages for 'failure'.


  • Embracing Imperfection

    - by Johnm
    The pursuit of perfection is a road on which we often find ourselves traveling. It is an unpaved road filled with pot-holes and ruts that often destroy our stride. The shoulders of this road are lined with the bones and rotting carcasses of well-planned projects, solutions and dreams of others who have dared the journey. Often the choice to engage in this travel is a compulsive one. We can't help but pack our bags and make the trip. We justify it by equating it to the delivery of a quality product or service. We use our past travels as validation of our worthiness and value. Our shared experience, as tortured pilgrims of perfection, reveals that each odyssey that bewitched us resulted in a stark reminder of the very weaknesses and fears that we were attempting to mollify. The voice of the critic that berated us for the lack of craftsmanship was our own. At the end of the journey, though, our own critical voice was joined by the gnashing of teeth of those who could not reap the fruit of our labor due to its lack of timely delivery. There is another road on which to travel. It is the pursuit of embracing imperfection. The cost of traveling this route is your contribution to its eternal construction. Each segment is designed uniquely. At times it has the appearance of a patchwork quilt; at other times it is well organized and highly measured. In all cases, its construction has continually advanced and been utilized as each segment was delivered by its architect. Those who choose this spindle of the crossroads crack open the shells of their fears to reveal the vapor that is within. They construct their houses upon these shells. Through their hunger for mastery they wring every drop of nectar from failure and discard its husks to the ditches of this road. Through their efforts the thoroughfare begins to develop a personality of its own, a beautifully human one, rich with the strengths and weaknesses of all of its contributors. Like many of us, the pursuit of perfection has not served me well. In fact, I would say that it has been more damaging than it has been helpful. While the perfectionist in me occasionally makes its presence known, I consider myself a "recovering perfectionist". It is evident to me that there is immense beauty found in imperfection. I choose to embrace it. It is grounding. It is constructive. It is honest.

    Read the article

  • SQL 2014 does data the way developers want

    - by Rob Farley
    A post I’ve been meaning to write for a while, good that it fits with this month’s T-SQL Tuesday, hosted by Joey D’Antoni (@jdanton). Ever since I got into databases, I’ve been a fan. I studied Pure Maths at university (as well as Computer Science), and am very comfortable with Set Theory, which undergirds relational database concepts. But I’ve also spent a long time as a developer, and appreciate that databases don’t exactly fit within the stuff I learned in my first year of uni, particularly the “Algorithms and Data Structures” subject, in which we studied concepts like linked lists. Writing in languages like C, we used pointers to quickly move around data, without a database in sight. Of course, if we had a power failure, all this data was lost, as it was only persisted in RAM. Perhaps that’s why I’m a fan of database internals, of indexes, latches, execution plans, and so on – the developer in me wants to be reassured that we’re getting to the data as efficiently as possible. Back when SQL Server 2005 was approaching, one of the big stories was around CLR. Many were saying that T-SQL stored procedures would be a thing of the past because we now had CLR, which was obviously going to be much faster than using the abstracted T-SQL. Around the same time, we were seeing technologies like Linq-to-SQL produce poor T-SQL equivalents, and developers had had a gutful. They wanted to move away from T-SQL, having lost trust in it. I was never one of those developers, because I’d looked under the covers and knew that despite being abstracted, T-SQL was still a good way of getting to data. It worked for me, appealing to both my Set Theory side and my developer side. CLR hasn’t exactly become the default option for stored procedures, although there are plenty of situations where it can be useful for getting faster performance. SQL Server 2014 is different though, through Hekaton – its In-Memory OLTP environment. When you create a table using Hekaton (that is, a memory-optimized one), the table you create is the kind of thing you’d’ve made as a developer. It creates code in C leveraging structs, pointers and arrays, which it compiles into fast code. When you insert data into it, it creates a new instance of a struct in memory, and adds it to an array. When the insert is committed, a small write is made to the transaction log to make sure it’s durable, but none of the locking and latching behaviour that typifies transactional systems is needed. Indexes are done using hashes and Bw-trees (which avoid locking through the use of pointers), and by handling each update as a delete-and-insert. This is data the way that developers do it when they’re coding for performance – the way I was taught at university before I learned about databases. Being done in C, it compiles to very quick code, and although these tables don’t support every feature that regular SQL tables do, this is still an excellent direction that has been taken. @rob_farley
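
    As a rough illustration of that developer-style model (a toy, single-threaded Python sketch, not Hekaton's generated C and not its actual lock-free algorithms): rows live in an append-only array, a hash index maps keys to positions, and an update is handled as a delete-and-insert.

      class Row:
          __slots__ = ("key", "payload", "deleted")
          def __init__(self, key, payload):
              self.key = key
              self.payload = payload
              self.deleted = False   # row versions are never modified in place

      class InMemoryTable:
          def __init__(self):
              self.rows = []         # append-only array of row "structs"
              self.index = {}        # hash index: key -> position in rows

          def insert(self, key, payload):
              self.rows.append(Row(key, payload))
              self.index[key] = len(self.rows) - 1

          def update(self, key, payload):
              self.rows[self.index[key]].deleted = True  # delete the old version...
              self.insert(key, payload)                  # ...and insert the new one

          def get(self, key):
              return self.rows[self.index[key]].payload

      t = InMemoryTable()
      t.insert(1, "first")
      t.update(1, "second")
      assert t.get(1) == "second"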

    Read the article

  • Getting a virus is *very* annoying

    - by bconlon
    I spent most of yesterday removing an annoying virus from my PC. I feel slightly foolish for getting one in the first place, but after so many years I guess I was always going to eventually succumb. I was also a little surprised at the failure of various tools at removing it. The virus would redirect the browser to websites including ‘licosearch’, ‘hugosearch’ and ‘facebook’, and the disk would be thrashing away, infecting dlls in some way. I had the full, up-to-date version of McAfee installed. This identified that there was an issue in some dlls on the system and was able to ‘fix’ them. But they kept getting re-infected. So I installed Microsoft Security Essentials and this too was able to identify and ‘fix’ the infected dlls. The system scans take forever and I really expected better results. I also tried Malwarebytes, Hitman Pro, AVG and Sophos, to no avail. Eventually I thought I’d investigate myself. It turned out that on reboot, the virus would start 3 instances of Firefox.exe which I’m guessing would do bad things, including infecting as many dlls on the system as possible. I removed Firefox and the virus then cleverly launched 3 instances of Chrome! So I uninstalled Chrome and yes, it then started to launch 3 instances of iexplore.exe. If I’m honest, by this stage I was just seeing if it would be able to use any of the browsers! As it was starting these on reboot, I looked in my User Startup folder and there was a <randomly named>.exe and several log files. I deleted these and rebooted. When I looked they had been recreated. So I then looked in the registry Run and RunOnce entries: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run. Sure enough there were entries to run a file in C:\Program Files\<random name folder>\<random name file>.exe. I deleted this and rebooted and it was fixed. I also looked in the event log and found a warning that Winlogon had failed to start the file C:\Program Files\<random name folder>\<random name file>.exe. So I also checked HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon and this entry had also been changed. Finally I ran a full system scan to clean up any infected dlls. I hope it’s gone for good!
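
    For anyone following the same trail, both persistence points can be inspected from a command prompt (the virus's folder and file names are random, so look for anything unfamiliar rather than a specific value):

      reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run"
      reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon"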

    Read the article

  • What arguments can I use to "sell" the BDD concept to a team reluctant to adopt it?

    - by S.Robins
    I am a bit of a vocal proponent of the BDD methodology. I've been applying BDD for a couple of years now, and have adopted StoryQ as my framework of choice when developing DotNet applications. Even though I have been unit testing for many years, and had previously shifted to a test-first approach, I've found that I get much more value out of using a BDD framework, because my tests capture the intent of the requirements in relatively clear English within my code, and because my tests can execute multiple assertions without ending the test halfway through - meaning I can see which specific assertions pass/fail at a glance without debugging to prove it. This has really been the tip of the iceberg for me, as I've also noticed that I am able to debug both test and implementation code in a more targeted manner, with the result that my productivity has grown significantly, and that I can more easily determine where a failure occurs if a problem happens to make it all the way to the integration build, thanks to the output that makes its way into the build logs. Further, the StoryQ API has a lovely fluent syntax that is easy to learn and which can be applied in an extraordinary number of ways, requiring no external dependencies in order to use it. So with all of these benefits, you would think it would be easy to introduce the concept to the rest of the team. Unfortunately, the other team members are reluctant to even look at StoryQ to evaluate it properly (let alone entertain the idea of applying BDD), and have convinced each other to try and remove a number of StoryQ elements from our own core testing framework, even though they originally supported the use of StoryQ, and even though it doesn't impact on any other part of our testing system. Doing so would end up increasing my workload significantly overall and really goes against the grain, as I am convinced through practical experience that it is a better way to work in a test-first manner in our particular working environment, and can only lead to greater improvements in the quality of our software, given I've found it easier to stick with test-first using BDD. So the question really comes down to the following: What arguments can I use to really drive the point home that it would be better to use StoryQ, or at the very least apply the BDD methodology? Can you point me to any anecdotal evidence that I can use to support my argument to adopt BDD as our standard method of choice? What counter-arguments can you think of that could suggest that my wish to convert the team efforts to BDD might be in error? Yes, I'm happy to be proven wrong provided the argument is a sound one. NOTE: I am not advocating that we rewrite our tests in their entirety, but rather that we simply start working in a different manner for all future testing work.
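
    To make the "multiple assertions per scenario" point concrete, here is a framework-free sketch in Python (deliberately not StoryQ's actual API) of a test that records every assertion instead of aborting at the first failure, so each pass/fail is visible at a glance:

      failures = []

      def then(description, condition):
          # record the outcome instead of raising, so one failed assertion
          # doesn't hide the ones after it
          print("  Then %s: %s" % (description, "pass" if condition else "FAIL"))
          if not condition:
              failures.append(description)

      # Scenario: withdrawing from an account with sufficient funds
      balance, amount = 100, 30
      print("Given a balance of 100")
      print("When I withdraw 30")
      balance -= amount
      then("the balance is 70", balance == 70)
      then("the account is not overdrawn", balance >= 0)
      print("%d failing assertion(s)" % len(failures))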

    Read the article

  • Redesigning an Information System - Part 1

    - by dbradley
    Through the next few weeks or months I'd like to run a small series of articles sharing my experiences from the largest of the projects I've worked on, and explore some of the real-world problems I've come across and how we went about solving them. I'm afraid I can't give too many specifics on the project right now as it's not yet complete, so you'll have to forgive me for being a little abstract in places! To start with I'm going to run through a little of the background of the problem and the motivations to re-design from scratch. Then I'll work through the approaches taken to understanding the requirements, designing, implementing, testing and migrating to the new system. Motivations for re-designing a large information system: The system is one that's been in place for a number of years and was originally designed to do something significantly different from what it's now being used for. This is mainly due to the product maturing as well as client requirements changing. As with most information systems this one can be defined in four main areas of functionality: Input – adding information to the system; Storage – persisting information in an efficient, searchable structure; Output – delivering the information to the client; Control – management of the process. There can be a variety of reasons to re-design an existing system; a few of our own turned out to be factors such as: overall system reliability; system response time; failure isolation and recovery; maintainability of code and information; general extensibility to solve future problems; separation of business and product concerns; new or improved features. The factor that started the thought process was the desire to improve the way in which information was entered into the system. However, this alone was not the entire reason for deciding to redesign. Business drivers: Typically, software engineers would always prefer to do a project from scratch themselves. It generally means you don't have to deal with problems created by predecessors and you can create your own absolutely perfect solution. However, the reality of working within a business is that the bottom line comes down to return on investment. For a medium-sized business such as mine there must be actual value that can be delivered within a reasonable timeframe for any work to be started. As a result, any long-term project will generally take a lot of effort and consideration to be approved by those in charge, and therefore it might be better to break down the project into more manageable chunks which allow more frequent deliverables and also value within a shorter timeframe. As the only thing of concern was the methods for inputting information, this is where we started with requirements gathering and design. However, knowing that there might be more to the problem, and not limiting your design decisions before the requirements are in, is key to finding the best solutions.

    Read the article

  • Integrating with a payment provider; Proper and robust OOP approach

    - by ExternalUse
    History: We are currently using a so-called redirect model for our online payments (where you send the payer to a payment gateway, where he inputs his payment details - the gateway will then return him to a success/failure callback page). That's easy and straightforward, but unfortunately quite inconvenient and at times confusing for our customers (leaving the site, changing their credit card details with an additional login on another site, etc). Intention & problem description: We are now intending to switch to an integrated approach using an exchange of XML requests and responses. My problem is how to cater for all (or rather most) of the things that may happen during processing - bearing in mind that normally simplicity is robust whereas complexity is fragile. Examples: User abort: The user inputs credit card details and hits submit. An XML message is sent to the provider's gateway and we are waiting for a response. The user hits "stop" in his browser or closes the window. ignore_user_abort() in PHP may be an option - but is that reliable? Might it be better to redirect the user to a "please wait" page, that in turn opens an AJAX or other request to the actual processor that does not rely on the connection? Database goes away: sounds over-complicated, but with e.g. a webserver in the States and a DB in the UK, it has happened and will happen again: the user clicks together his order, the payment request has been sent to the provider, but the response cannot be stored in the database. What approach could I use, using PHP, to start an SQL-like "transaction" that only at the very end gets committed or rolled back, depending on the individual steps? Should neither commit nor rollback have happened, I could sort of "lock" the user to prevent him from paying again or to improperly account for payments - but how? And what else do I need to consider technically? None of the integration examples of e.g. Worldpay, Realex or SagePay offer any insight, and neither Google nor my search terms were good enough to find somebody else's thoughts on this. Thank you very much for any insight on how you would approach this!
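
    One shape that covers both failure examples, sketched here in Python for brevity (the same pattern ports to PHP; the gateway object and table are hypothetical), is to durably record a payment attempt before contacting the provider, make the gateway call server-to-server so a user abort cannot interrupt it, and mark the outcome "unknown" for later reconciliation when anything fails mid-flight:

      import uuid

      def take_payment(db, gateway, order_id, card):
          # 'db' and 'gateway' are hypothetical handles; the point is the ordering.
          attempt_id = str(uuid.uuid4())
          # 1. Durably record the intent BEFORE contacting the provider.
          db.execute("INSERT INTO payment_attempts (id, order_id, state) "
                     "VALUES (?, ?, 'pending')", (attempt_id, order_id))
          db.commit()
          try:
              # 2. Server-to-server call: the user closing the browser can no
              #    longer interrupt it, so ignore_user_abort() stops mattering.
              result = gateway.charge(reference=attempt_id, card=card)
              state = "succeeded" if result.ok else "declined"
          except Exception:
              # 3. Outcome unknown (timeout, DB/gateway gone): never retry
              #    blindly - the charge may have gone through. Flag it instead.
              state = "unknown"
          db.execute("UPDATE payment_attempts SET state = ? WHERE id = ?",
                     (state, attempt_id))
          db.commit()
          return attempt_id, state

    Rows left in 'pending' or 'unknown' are exactly the ones to reconcile against the provider's reports before allowing a second payment.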

    Read the article

  • Application stuck in TCP retransmit

    - by SandeepJ
    I am running Linux kernel 3.13 (Ubuntu 14.04) on two virtual machines, each of which runs inside a different server running ESXi 5.1. There is a zeromq client-server application running between the two VMs. After running for about 10-30 minutes, this application consistently hangs due to an inability to retransmit a lost packet. When I run the same setup over Ubuntu 12.04 (kernel 3.11), the application never fails. If you notice below, "ss" (socket statistics) shows 1 packet lost, sk_wmem_queued of 14110 (i.e. w14110) and a high rto (120000):

      State Recv-Q Send-Q Local Address:Port Peer Address:Port
      ESTAB 0 12350 192.168.2.122:41808 192.168.2.172:55550 timer:(on,16sec,10) uid:1000 ino:35042 sk:ffff880035bcb100
      skmem:(r0,rb648720,t0,tb1164800,f2274,w14110,o0,bl0) ts sack cubic wscale:7,7 rto:120000 rtt:7.5/3 ato:40 mss:8948 cwnd:1 ssthresh:21 send 9.5Mbps unacked:1 retrans:1/10 lost:1 rcv_rtt:1476 rcv_space:37621

    Since this has happened so consistently, I was able to capture the TCP log in wireshark. I found that the packet which is lost does get retransmitted and even acknowledged by the TCP stack in the other OS (the sequence number is seen in the ACK), but the sender doesn't seem to understand this ACK and continues retransmitting. MTU is 9000 on both virtual machines and throughout the route. The packets being sent are large in size. As I said earlier, this does not happen on Ubuntu 12.04 (kernel 3.11). So I did a diff on the TCP config options (seen via "sysctl -a | grep tcp") between 14.04 and 12.04 and found the following differences. I also noticed that net.ipv4.tcp_mtu_probing=0 in both configurations. Present only in 3.11:

      net.ipv4.tcp_abc = 0
      net.ipv4.tcp_cookie_size = 0
      net.ipv4.tcp_dma_copybreak = 4096
      net.ipv4.tcp_frto_response = 0
      net.ipv4.tcp_max_ssthresh = 0

    Different between 3.11 (first value) and 3.13 (second value):

      net.ipv4.tcp_early_retrans = 2 / 3
      net.ipv4.tcp_fastopen = 0 / 1
      net.ipv4.tcp_max_orphans = 16384 / 4096
      net.ipv4.tcp_max_tw_buckets = 16384 / 4096
      net.ipv4.tcp_mem = 94377 125837 188754 / 23352 31138 46704

    Present only in 3.13:

      net.ipv4.tcp_notsent_lowat = -1

    My questions to the networking experts on this forum: Are there any other debugging tools or options I can install/enable to dig further into why this TCP retransmit failure is occurring so consistently? Are there any configuration changes which might account for this weird behaviour?
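
    A few standard next steps, assuming stock tooling (the interface name and port are placeholders): capture the stalled flow on both VMs for a side-by-side wireshark comparison, watch per-socket retransmit state while the hang is in progress, and rule out hypervisor offload bugs by disabling segmentation offloads on the guest NIC:

      tcpdump -i eth0 -s 0 -w vm1.pcap host 192.168.2.172
      ss -tinm '( sport = :55550 or dport = :55550 )'
      ethtool -K eth0 tso off gso off gro off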

    Read the article

  • Windows Backup failed with error 0x807800C5

    - by Alexey Ivanov
    I set up backup of my Windows 8 laptop with Windows 7 File Recovery (known as Backup and Restore in Windows 7). Backup of files runs successfully. But if I try to create a system image, it fails with error 0x807800C5. Error details on the dialog: "The mounted backup volume is inaccessible." Error details in the system log: "There was a failure in preparing the backup image of one of the volumes in the backup set." I save the backup to a network location, a WD MyBookLive. Edit: I tried some of the steps suggested in the various threads about this issue: Cleaned up the backup location (removed MediaID.bin from the backup location and removed the folder <ComputerName> from WindowsImageBackup); restarting the backup resulted in the same error, although the error dialog shows a slightly different message: "The specified backup disk cannot be found." Performed a System File Check by running sfc /scannow; it showed no errors, but the backup nevertheless failed. I tried searching Google for the error code but have found no solution so far. Update: I submitted a technical support request to Microsoft. The first suggestion was a clean boot, but it didn't help. I pointed out that I had tried all the methods from the same problem on MS Answers, and nothing had helped.
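
    One way to take the GUI out of the loop is to drive the same system-image backup with wbadmin (the share path here is a placeholder); if this fails too, the problem lies in the VSS/backup engine rather than in the File Recovery applet:

      wbadmin start backup -backupTarget:\\mybooklive\backup -include:C: -allCritical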

    Read the article

  • SSTP client disconnects shortly after successfully connected to VPN

    - by Eran Betzalel
    I'm successfully authenticating and connecting to an SSTP VPN (on Windows Server 2008) from my Windows 7 machine, but for some reason the connection is disconnected about 1-2 seconds after it's established. I've done the following: Defined an SSTP VPN on my Windows Server 2008 machine. Defined the same machine as a CA. Issued the needed certificates and published them on the client. I'm currently testing this VPN inside my LAN so all the needed ports are open. Here are the event log entries when trying to connect: Error log (client): The user HOME\User dialed a connection named Home VPN which has terminated. The reason code returned on termination is 829. Error log (server VPN): The user HOME\User connected on port VPN0-0 on 7/27/2012 at 1:57 AM and disconnected on 7/27/2012 at 1:57 AM. The user was active for 0 minutes 0 seconds. 312 bytes were sent and 4528 bytes were received. The reason for disconnecting was user request. What would be the issue? How can I resolve or debug it? UPDATE: I've found an event log (Log=System, Source=RasSstp) message on the Windows 7 machine that tries to connect to the VPN: The SSTP-based VPN connection to the remote access server was terminated because of a security check failure. Security settings on the remote access server do not match settings on this computer. Contact the system administrator of the remote access server and relay the following information: SHA1 Certificate Hash: 065D681...520375552F SHA256 Certificate Hash: 18DED363...EEEE28CFD00
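
    From what I've read about that RasSstp message, it usually means the certificate hash SSTP recorded at setup time no longer matches the certificate actually bound to the server's HTTPS listener. A way to compare the two on the RRAS server (both values should refer to the same certificate):

      reg query "HKLM\SYSTEM\CurrentControlSet\Services\SstpSvc\Parameters"
      netsh http show sslcert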

    Read the article

  • SCCM 2012 - Windows 8 WSUS

    - by Owen
    We're using SCCM 2012 to deploy Windows Updates on our domain, and our Windows 8 clients have started failing with error 80240438 when they try to update. Windows 7 clients update fine, but Windows 8 clients refuse to do anything. I've done a search online and it seems to only reference Windows Intune. Has anyone seen any similar behavior on Windows 8 machines? If we don't get that error, we're getting 80244021, which seems to indicate that the server can't be found... but the clients can resolve it just fine and our exceptions are defined on the proxy too. A bit stuck here!

      2012-11-22 14:45:28:935 476 998 Agent *********
      2012-11-22 14:45:28:935 476 998 Agent ** END ** Agent: Finding updates [CallerId = AutomaticUpdates]
      2012-11-22 14:45:28:935 476 998 Agent *************
      2012-11-22 14:45:28:935 476 998 Agent WARNING: WU client failed Searching for update with error 0x80240438
      2012-11-22 14:45:28:935 476 c74 AU >>## RESUMED ## AU: Search for updates [CallId = {EAECB947-48AC-43BE-8F98-C44727E4A131} ServiceId = {3DA21691-E39D-4DA6-8A4B-B43877BCB1B7}]
      2012-11-22 14:45:28:935 476 c74 AU # WARNING: Search callback failed, result = 0x80240438
      2012-11-22 14:45:28:935 476 c74 AU #########
      2012-11-22 14:45:28:935 476 c74 AU ## END ## AU: Search for updates [CallId = {EAECB947-48AC-43BE-8F98-C44727E4A131} ServiceId = {3DA21691-E39D-4DA6-8A4B-B43877BCB1B7}]
      2012-11-22 14:45:28:935 476 c74 AU #############
      2012-11-22 14:45:28:935 476 c74 AU All AU searches complete.
      2012-11-22 14:45:28:935 476 c74 AU # WARNING: Failed to find updates with error code 80240438
      2012-11-22 14:45:28:935 476 c74 AU AU setting next detection timeout to 2012-11-22 04:12:23
      2012-11-22 14:45:33:936 476 c9c Report REPORT EVENT: {EE35CD79-FD2A-472D-BFC9-0420F5D60C04} 2012-11-22 14:45:28:935+1300 1 148 [AGENT_DETECTION_FAILED] 101 {00000000-0000-0000-0000-000000000000} 0 80240438 AutomaticUpdates Failure Software Synchronization Windows Update Client failed to detect with error 0x80240438.
      2012-11-22 14:45:33:938 476 c9c Report CWERReporter::HandleEvents - WER report upload completed with status 0x8
      2012-11-22 14:45:33:938 476 c9c Report WER Report sent: 7.8.9200.16420 0x80240438 00000000-0000-0000-0000-000000000000 Scan 101 Managed
      2012-11-22 14:45:33:938 476 c9c Report CWERReporter finishing event handling. (00000000)

    Thanks in advance
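
    Since the Windows Update agent talks through WinHTTP rather than the per-user IE/WinINET proxy settings a browser uses, it is worth confirming the machine-level proxy on an affected Windows 8 client, and importing the IE settings if they differ:

      netsh winhttp show proxy
      netsh winhttp import proxy source=ie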

    Read the article

  • Rebuilding array on 3ware 9690SA-8I

    - by Tim Jones
    I have a RAID10 (8x1TB) array on a 3ware 9690 card running on an Ubuntu 11.10 server. There was a kernel update so I scheduled a reboot, after which the array was inaccessible. I checked the status: a drive had died in the array, but the controller had thrown the entire array into an 'inoperable' state instead of simply degraded (what's the point of the RAID now ;-). After taking out the 'dead' drive I ran a quick test and found it completely functional, without a bad sector to be found. I tried to put the drive back in, but the array still marks the disk as degraded (remembering the serial number or something??) and the entire array as inoperable... So I swapped it out for a known working drive (not the same capacity but higher - should still work) and initiated a rebuild with the new drive as a replacement. This fails instantly with the error "(0x0B:0x0033): Unit busy : Failed to start Rebuild on Unit 0". The unit shouldn't be busy, as it is not mounted (the card itself is listed with lshw but the array it provides is not). I'm pretty much at an impasse now. I don't understand how I can have a single drive failure on a RAID10 that makes the entire array inaccessible; degraded I could understand, but inaccessible?? I don't think it's the controller, as prior to the reboot it was completely functional.
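
    If not already tried, 3ware's tw_cli tool tends to report more about controller, unit and port state than the BIOS screens do; something along these lines (controller/unit numbers assumed):

      tw_cli show
      tw_cli /c0 show all
      tw_cli /c0/u0 show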

    Read the article

  • IIS7Register failed with HRESULT 800700b7: 'Cannot create a file when that file already exists.'

    - by Optimax
    I am trying to re-install ASP.NET on IIS7 running in Win7/64, which magically stopped working all of a sudden. When I run aspnet_regiis -i, I get an error message that says: Finished installing ASP.NET (4.0.30319). Setup has detected some errors during the operation. For details, please read the setup log file C:\Users\username\AppData\Local\Temp\ASPNETSetup_00031.log. Looking at the log, it reports: Failure Changing IIS ApplicationHost.config: IIS7Register failed with HRESULT 800700b7: 'Cannot create a file when that file already exists.' The real problem surfaces when trying to access an ASP.NET web page from that server: HTTP Error 500.21 - Internal Server Error. Handler "PageHandlerFactory-Integrated" has a bad module "ManagedPipelineHandler" in its module list. Most likely causes: Managed handler is used; however, ASP.NET is not installed or is not installed completely. There is a typographical error in the configuration for the handler module list. So it seems ASP.NET has NOT been properly re-installed. Now, I am aware of the alleged one-and-only remedy for this, repeated all over the Web, and referenced for example here: http://blogs.msdn.com/b/dougste/archive/2010/09/06/errors-installing-asp-net-4-0.aspx. Except that the proposed solution does not work for me. I have expanded the %windir% macros within the isapiCgiRestriction section for .NET 4.0 - and aspnet_regiis still fails for me. Any other ideas?
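
    For what it's worth, 800700b7 is the Win32 error ERROR_ALREADY_EXISTS, which fits the duplicate-entry theory; one way to dump the section aspnet_regiis is tripping over and scan it for duplicates (assuming the default inetsrv location):

      %windir%\system32\inetsrv\appcmd.exe list config -section:system.webServer/security/isapiCgiRestriction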

    Read the article

  • Oracle VM repository creation seems contradictory to its server pool?

    - by Michael
    I found something contradictory in Oracle VM. Clustered server pool creation in Oracle VM formats my FC LUN as ocfs2 and starts the o2cb & ocfs2 services to build the cluster environment. After that, when I wanted to create a repository on the server pool, unexpectedly, it told me that the physical disk I chose, which is also my FC LUN, already contains a file system. What a contradiction! So what, delete the file system in the server pool? If so, why create it before?!

      OVM> list physicaldisk
      Command: list physicaldisk
      Status: Success
      Time: 2012-09-10 06:44:42.660
      Data: id:0004fb00001800007765e62381895f61 name:OVM_HDS

      OVM> create serverpool clusterenable=true virtualip=10.84.21.123 physicaldisk=OVM_HDS name=ovmserverpool

    Server pool creation took quite a long time since my FC LUN was big. When the creation completed, my FC LUN was formatted as ocfs2 and the o2cb & ocfs2 services were started on my OVM servers successfully. But then repository creation indeed threw me a big surprise...

      OVM> create repository serverpool=ovmserverpool physicaldisk=OVM_HDS name=ovmrepo
      Command: create repository serverpool=ovmserverpool physicaldisk=OVM_HDS name=ovmrepo
      Status: Failure
      Time: 2012-09-10 06:23:44.656
      Error Msg: com.oracle.ovm.mgr.api.exception.RuleException: OVMRU_002026E Cannot use or delete physical disk: OVM_HDS, it already contains a file system: [Pool filesystem for ovmserverpool] Mon Sep 10 06:23:44 CST 2012

    What should I do now? Delete the file system using the dd command? That would destroy the server pool, right? I'm really confused. My OVM Manager version is 3.1.1.399, which is the latest. Any tips are appreciated. Thanks.

    Read the article

  • Exchange 2010 DAG Automatic Failover Testing/Issue. Not always automatically failing over to healthy copy

    - by Richard
    Ok, I've got two Exchange 2010 servers that run the client access/hub transport/mailbox roles and one Exchange 2010 server running just the client access/hub transport roles, which acts as my bridgehead. The two mailbox servers host one database set up in a DAG. Server A shows the DB mounted and Server B shows healthy. If I reboot Server A via the Windows GUI, Server B switches from healthy to mounted and I see hardly any interruption in service using Outlook 2007. Server A shows "Service down", then "Failed", then "Healthy", and leaves the DB mounted on Server B. This is how it should work; so far so good. Now if I test Server A being shut down cold, or unplug both NICs to simulate failure, Server B switches from healthy to mounted and Server A switches to "Service Down", but my Outlook client never connects to the DB mounted on Server B! I can connect to Server C (client access/hub transport) and get to my email and even send new email out, but incoming email doesn't deliver until Server A is brought back online and its DB goes back to healthy status. So I don't understand why it fails over automatically when I reboot the server with the mounted DB copy, causing very little Outlook 2007 hiccup if any, but when I shut down or disconnect the mounted DB server, it DOES mount the healthy copy, yet Outlook 2007 clients can't connect. I hope the picture I'm trying to paint makes some sense; it's driving me a little batty. Any help would be appreciated!
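
    For anyone comparing notes, the copy and DAG state during such a test can be watched from the Exchange Management Shell (database and server names here are placeholders), which often shows whether the problem is the failover itself or client access afterwards:

      Get-MailboxDatabaseCopyStatus -Identity DB1 | Format-List Name,Status,ContentIndexState
      Test-ReplicationHealth -Identity ServerA
      Get-DatabaseAvailabilityGroup -Status | Format-List Name,WitnessServer,PrimaryActiveManager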

    Read the article

  • Linked server problem on SQL Server 2005

    - by BradyKelly
    I have a weird issue and I hope someone can steer me in the right direction for resolving it, please. When I execute the following query against a linked server, I get the error below. I can connect to the server in SSMS as a separate server, and execute a similar query against its Deposits table. The nn.nn is my own replacement to avoid broadcasting our server addresses. The query:

      select td.Batch, td.DateTimeDeposited
      from Deposits cd
      left join [172.nn.nn.32\sqlexpress].Terminal.dbo.Deposits td
        on cd.DateTimeDeposited = td.DateTimeDeposited

    The error:

      OLE DB provider "SQLNCLI" for linked server "172.nn.nn.11\sqlexpress" returned message "Login timeout expired".
      OLE DB provider "SQLNCLI" for linked server "172.nn.nn.11\sqlexpress" returned message "An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections.".
      Msg 65535, Level 16, State 1, Line 0
      SQL Network Interfaces: Error Locating Server/Instance Specified [xFFFFFFFF].

    Notice how the error is about server 172.nn.nn.11 and not 172.nn.nn.32. SOLVED (STUPID ME): Somebody had added an extra bit to my query that was scrolled off-screen and was querying the 172.nn.nn.11 server.

    Read the article

  • Failed Backup Job With Backup Exec 12 and AOFO

    - by Mort
    I am backing up a Windows 2003 Small Business Server with SP2. We are running Backup Exec 12 with SP4. Recently the backup job started failing on backing up the system state with the following error: V-79-57344-34110 - AOFO: Initialization failure on: "System State". Advanced Open File Option used: Microsoft Volume Shadow Copy Service (VSS). Snapshot provider error (0xE000FE7D): Access is denied. To back up or restore System State, administrator privileges are required. Check the Windows Event Viewer for details. According to Symantec's website, the error indicates a credential problem; however, when I test the credentials they come back with no failures. I have found another forum thread here referencing a similar error and have tried what was suggested there, without success. I have created new jobs based on new selection lists, also without success. I suspect a new update, possibly from Microsoft, may be causing this, but I have no idea which one. I am looking for feedback. Thanks.
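
    Since the AOFO snapshot is delegated to VSS, the state of the writers and providers on the SBS box is worth checking before touching Backup Exec itself; any writer in a failed state points at the OS side rather than the backup job:

      vssadmin list writers
      vssadmin list providers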

    Read the article

  • Event ID: 861 - The Windows Firewall has detected an application listening for incoming traffic

    - by Chris Marisic
    Firstly, my machines aren't compromised; any person suggesting such will be DV'd. The security logs on some of my network's client machines (all Windows XP SP3) get filled with these useless error messages:

      Security Failure Audit Detailed Tracking Event ID: 861
      User: NT AUTHORITY\NETWORK SERVICE
      The Windows Firewall has detected an application listening for incoming traffic.
      Name: -
      Path: C:\WINDOWS\system32\svchost.exe
      Process identifier: 976
      User account: NETWORK SERVICE
      User domain: NT AUTHORITY
      Service: Yes
      RPC server: No
      IP version: IPv4
      IP protocol: UDP
      Port number: 55035
      Allowed: No
      User notified: No

    It's always on various random UDP ports, so setting up a port exception isn't really an option. It's always from svchost or lsass, both of which are running services from DLLs. One of the most offending processes seems to be the DnsCache service. In my global policy, under Administrative Templates > Network > Network Connections > Windows Firewall > Domain Profile (I haven't changed any standard profile options; do both need to be configured?), I allow remote administration and desktop exceptions and have a custom program exception list containing: %SystemRoot%\system32\svchost.exe:*:enabled:svchost (Windows won't allow you to add this exception on a local machine, but it let me have it in the global policy; it just doesn't seem to do anything); %SystemRoot%\system32\lsass.exe:*:enabled:lsass (I think this one ended all of my LSASS messages); %SystemRoot%\system32\dnsrslvr.dll:*:enabled:dnscache (I tried adding the DLL itself to the exception list; this didn't seem to do anything). Are there really any options left other than disabling the Windows Firewall entirely, disabling auditing entirely, or just setting the event viewer to auto-overwrite when needed? I'd much rather fix the problem and stop these entries from ever being created instead of just trying to cover up the problem.

    Read the article

  • ADODB DB2 DSN using IBMDADB2 provider

    - by Eli Sand
    I have a very bizarre issue with trying to establish a working connection to an IBM DB2 server from Classic ASP using ADODB. On my development server I am running IIS and a local instance of DB2. When I create a system DSN on this server and try to connect to it with ADODB, I have to specify Provider=IBMDADB2; in my connection string along with the DSN name - if I fail to include the provider, my connection won't work. On my production setup, I have one server running IIS and a second running an instance of DB2. When I create a system DSN on the production IIS server and try to connect to it with ADODB, I cannot specify the provider; otherwise it throws an uncatchable error in an external module (I assume it's referring to the DB2 module) if I try to do anything beyond getting a connection (oddly, opening the connection itself doesn't throw an error, but running a query does). If I remove Provider=IBMDADB2; from the connection string (so I just have DSN=some_name), it works fine. On both systems I can verify through the ODBC connection manager that the DSNs work and can connect to the databases, and on both systems I have made sure to set the correct (only) instance of DB2 as the default. Can anyone tell me why I have to have different connection strings for the development and production servers? I would like to be able to use the same connection string for both environments if at all possible. If that means either specifying a provider for both, or for neither, I don't care which - I would just like to know what's going on and how to fix it.
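
    For comparison, the two connection-string shapes in play look roughly like this in classic ASP (DSN name and credentials are placeholders):

      ' development server (local DB2 instance) - the provider must be named:
      conn.Open "Provider=IBMDADB2;DSN=MYDB2DSN;Uid=db2user;Pwd=secret"
      ' production server (remote DB2) - works only without the provider:
      conn.Open "DSN=MYDB2DSN;Uid=db2user;Pwd=secret"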

    Read the article

  • Random software/shutdown problems Mac mini

    - by Oliver
    Hello, I have a Mac mini running OS X 10.5.8. Recently I've been having problems that are becoming increasingly severe. Particular symptoms:

      1. Firefox "quits" but the dot on the dock remains. If I try shutting down at this point I can't, and have to use the power button. I can't start Firefox either.
      2. Significantly higher than normal crash rate for CS4 Illustrator, Photoshop and Dreamweaver.
      3. If Firefox remains open when sleeping the Mac, on waking up Firefox is completely unresponsive and must be restarted. Can cause problem 1.
      4. Today I lost all my FTP settings for Dreamweaver CS4.
      5. Bluetooth switches on when booting despite being switched off.
      6. Network password retention is somewhat erratic.
      7. Tried running Apple Hardware Test as described in the manual; cannot access the install disk by pressing D at startup.
      8. My pen tablet (Bamboo) lost my preferences.
      9. Tried uninstalling software and now cannot re-install.

    I've never had problems of this level outside warranty before, so I don't know much about debugging Macs. Solutions attempted: disabled Bluetooth (see problem 5); disabled Time Machine; ran Apple Hardware Test (see problem 7); tried using Activity Monitor, but I don't know which processes are normal; tried uninstalling more recent software. I don't have any idea what to do. It seems like a hard disk failure, but I don't have the know-how to continue.

    Read the article

  • OpenLDAP mirror mode replication failing with TLS behind a load balancer

    - by Lynn Owens
    I have two OpenLDAP servers that are both running TLS: ldap1.mydomain.com and ldap2.mydomain.com. I also have a load balancer cluster with a DNS name of its own: ldap.mydomain.com. The SSL certificate has a CN of ldap.mydomain.com, with SANs of ldap1.mydomain.com and ldap2.mydomain.com. Everything works... except mirror mode replication. My mirror mode replication is set up like this:

      # ldap.conf
      TLS_REQCERT allow

      # cn=config.ldif
      olcServerID: 1 ldap://ldap1.mydomain.com
      olcServerID: 2 ldap://ldap2.mydomain.com

      # On ldap1, olcDatabase={1}hdb.ldif
      olcMirrorMode: TRUE
      olcSyncrepl: {0}rid=001 provider=ldap://ldap2.mydomain.com bindmethod=simple
        binddn="cn=me,dc=mydomain,dc=com" credentials="REDACTED" starttls=yes
        searchbase="dc=mydomain,dc=com" schemachecking=on type=refreshAndPersist retry="60 +"

      # On ldap2, olcDatabase={1}hdb.ldif
      olcMirrorMode: TRUE
      olcSyncrepl: {0}rid=001 provider=ldap://ldap1.mydomain.com bindmethod=simple
        binddn="cn=me,dc=mydomain,dc=com" credentials="REDACTED" starttls=yes
        searchbase="dc=mydomain,dc=com" schemachecking=on type=refreshAndPersist retry="60 +"

    Here are the errors I'm getting in syslog:

      Dec 1 21:05:01 ldap1 slapd[6800]: slap_client_connect: URI=ldap://ldap2.mydomain.com DN="cn=me,dc=mydomain,dc=com" ldap_sasl_bind_s failed (-1)
      Dec 1 21:05:01 ldap1 slapd[6800]: do_syncrepl: rid=001 rc -1 retrying
      Dec 1 21:05:08 ldap1 slapd[6800]: conn=1111 fd=20 ACCEPT from IP=ldap.mydomain.com:2295 (IP=ldap1.mydomain.com:636)
      Dec 1 21:05:08 ldap1 slapd[6800]: conn=1111 fd=20 closed (TLS negotiation failure)

    Any ideas? I've been working on OpenLDAP for way too long now.
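
    Two quick checks, assuming standard OpenLDAP and OpenSSL client tools: exercise StartTLS from one replica against the other exactly the way syncrepl does (the -ZZ flag makes a TLS failure fatal), and, if ldaps is enabled, inspect the certificate the peer actually presents:

      ldapsearch -ZZ -x -H ldap://ldap2.mydomain.com -b "dc=mydomain,dc=com" -D "cn=me,dc=mydomain,dc=com" -W
      openssl s_client -connect ldap2.mydomain.com:636

    The ACCEPT line showing a source of ldap.mydomain.com also suggests the replication traffic may be flowing through the load balancer rather than directly between the replicas, which is worth ruling out.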

    Read the article

  • 32-bit Ubuntu or 64-bit w/Intel Atom D510 w/4GB RAM?

    - by T.J. Crowder
    (I've seen this question and some related ones, and perhaps this is a duplicate, although part of my question is specific to the Atom D510.) I'm going to be installing Ubuntu on a new silent desktop as my latest (and hopefully last) attempt to switch from Windows to Linux for at least most everyday tasks. The new machine is entirely passively cooled, but as a consequence, not astonishingly powerful — an Atom D510 (dual-core, 1.6GHz, HT) on Intel's D510MO board. That's fine, I won't use it for gaming, (much) video editing, etc. It's a 64-bit processor and I'm maxing the board out at 4GB of RAM (hey, that 1.6GHz CPU needs all the help it can get), which naturally raises the question of whether to install Ubuntu 64-bit or 32-bit (and if the latter, either live with the missing RAM, or do the PAE kernel dance). Although I've used Linux on servers for years, I'm very nearly a Linux desktop newbie and am not currently in the mood to fight driver wars and such. So if I'm setting myself up for failure with 64-bit, I'll live with the missing ~0.8GB or fiddle with PAE. But if 64-bit is entirely "ready," great, I'm there. So: Do most mainstream apps (now) play nicely with 64-bit Linux? I can't help but notice the "AMD" in the ISO image filename ubuntu-10.04-desktop-amd64.iso, and I know AMD led the way on this stuff — does Ubuntu 64-bit play nicely with Intel processors? Just generally, would you recommend one or the other? (And if anyone has any experience with Ubuntu specifically on the D510 [32-bit or 64-bit] which might lead me one way or t'other, that would be useful.) Thanks in advance.

    Read the article

  • Windows Server 2008 R2, SQL Server 2008 R2, and Registry Backups

    - by charliedigital
    Hi folks! As a developer, I've installed various instances of SQL Server (2000, 2005, 2008, R2) going all the way back to 2003, and I've never had an install fail on me... until yesterday. I was installing SQL Server 2008 R2 onto a Windows Server 2008 R2 hosted virtual server and the install finished, but failed on every component. To make matters worse, it was in a state in which I could not uninstall it either, even using command-line options! I dug around a bit, but didn't get very far with it. The error is enigmatic and Google didn't turn up much hope. So today, I am going to try again after having the VM image wiped overnight. My question is: how can I guard against the same failure today? I don't mind if it fails, but then I know it may be something wrong with the base image I'm getting from the hosting company. I really don't feel like paying another $20 to wipe the VM, and I have no idea why it failed. Is it enough for me to back up the registry so that I can restore it in case it fails? What about the installation files? Do I need to have a tool to clean those out, too? Sorry, I'm no sysadmin, so no real experience with backup/restore aside from System Restore! Any advice would be appreciated!
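
    A registry export is cheap insurance, though it's a snapshot rather than a rollback (a failed install also leaves files and Windows Installer state behind, so don't rely on it alone); something like:

      reg export "HKLM\SOFTWARE\Microsoft\Microsoft SQL Server" sql-pre-install.reg /y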

    Read the article

  • Authenticating Windows 7 against MIT Kerberos 5

    - by tommed
    Hi there, I've been wracking my brains trying to get Windows 7 authenticating against an MIT Kerberos 5 realm (which is running on an Arch Linux server). I've done the following on the server (aka dc1): installed and configured an NTP time server; installed and configured DHCP and DNS (set up for the domain tnet.loc); installed Kerberos from source; set up the database; configured the keytab; set up the ACL file with: *@TNET.LOC *; and added a policy for my user and my machine:

      addpol users
      addpol admin
      addpol hosts
      ank -policy users [email protected]
      ank -policy admin tom/[email protected]
      ank -policy hosts host/wdesk3.tnet.loc -pw MYPASSWORDHERE

    I then did the following on the Windows 7 client (aka wdesk3): made sure the IP address was supplied by my DHCP server and that dc1.tnet.loc pings OK; set the Internet time server to my Linux server (aka dc1.tnet.loc); and used ksetup to configure the realm:

      ksetup /SetRealm TNET.LOC
      ksetup /AddKdc dc1.tnet.loc
      ksetup /SetComputerPassword MYPASSWORDHERE
      ksetup /MapUser * *

    After some googling I found that DES encryption was disabled by Windows 7 by default, so I turned on the policy to support DES encryption over Kerberos, then rebooted the Windows client. However, after doing all that I still cannot log in from my Windows client. :( Looking at the logs on the server, the request looks fine and everything works great; I think the issue is that the response from the KDC is not recognized by the Windows client, and a generic login error appears: "Login Failure: User name or password is invalid". The log file for the server looks like this (I tail'ed this so I know it's happening when the Windows machine attempts the login): Screen-shot: http://dl.dropbox.com/u/577250/email/login_attempt.png If I supply an invalid realm in the login window I get a completely different error message, so I don't think it's a connection problem from the client to the server. But I can't find any error logs on the Windows machine (anyone know where these are?). If I try:

      runas /netonly /user:[email protected] cmd.exe

    everything works (although I don't get anything in the server logs, so I'm wondering if it's not touching the server for this??), but if I run:

      runas /user:[email protected] cmd.exe

    I get the same authentication error. Any Kerberos gurus out there who can give me some ideas as to what to try next? Pretty please?
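
    Two built-in commands on the Windows 7 side that help narrow this down: ksetup with no arguments dumps the realm settings the client actually took, and klist shows whether the logon session ever obtained a ticket from the KDC at all:

      ksetup
      klist tickets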

    Read the article

  • Avoiding DNS timeouts when a dns server fails

    - by user65124
    Hi there. We have a small datacenter with about a hundred hosts pointing to 3 internal DNS servers (bind 9). Our problem comes when one of the internal DNS servers becomes unavailable. At that point, all the clients that point to that server start performing very slowly. The problem seems to be that the stock Linux resolver doesn't really have the concept of "failing over" to a different DNS server. You can adjust the timeout and number of retries it uses (and set rotate so it will work through the list), but no matter what settings one uses, our services perform much more slowly if a primary DNS server becomes unavailable. At the moment this is one of the largest sources of service disruptions for us. My ideal answer would be something like "RTFM: tweak /etc/resolv.conf like this...", but if that's an option I haven't seen it. I was wondering how other folks handle this issue? I can see 3 possible types of solutions:

      1. Use linux-ha/Pacemaker and failover IPs (so the DNS VIPs are "always" available). Alas, we don't have a good fencing infrastructure, and without fencing Pacemaker doesn't work very well (in my experience Pacemaker lowers availability without fencing).
      2. Run a local DNS server on each node, and have resolv.conf point to localhost. This would work, but it would give us a lot more services to monitor and manage.
      3. Run a local cache on each node. Folks seem to consider nscd "broken", but dnrd seems to have the right feature set: it marks DNS servers as up or down, and won't use 'down' DNS servers.

    Anycast seems to work only at the IP routing level, and depends on route updates for server failure. Multicast seemed like it would be a perfect answer, but bind does not support broadcasting or multicasting, and the docs I could find seem to suggest that multicast DNS is more aimed at service discovery and auto-configuration rather than regular DNS resolving. Am I missing an obvious solution?
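
    On the resolv.conf front, the closest thing to the "tweak it like this" answer looks roughly like the following (addresses are placeholders). It caps the worst-case stall at about a second per failed lookup rather than eliminating it, which is why options 2 and 3 above remain popular:

      nameserver 10.0.0.11
      nameserver 10.0.0.12
      nameserver 10.0.0.13
      options timeout:1 attempts:2 rotate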

    Read the article

  • iSCSI, failover and XenServer

    - by jemmille
    I have an iSCSI failover implementation set up so that if one of my storage units fails, the other takes over immediately (it also runs the NFS shares). When failover occurs, volumes are exported, the IP is switched to the other machine and the targets are reconfigured. The failover of the storage system itself works just fine. I use NexentaStor for my filer. When I do a test (manual) failover of my storage, the following occurs (note: I run the admin VMs on NFS and customer VMs on iSCSI): all NFS-based VMs remain up and working perfectly through the failover and after; all VMs running on iSCSI eventually report an error about not being able to write to a particular block, then an error about journaling not working, and then the file system goes read-only. To get the VMs working again I have to do the following:

      1. Force shutdown of the "broken" VMs.
      2. Detach the iSCSI SR.
      3. Re-attach the iSCSI SR.
      4. Boot the VM on a different server (5 in my pool).

    If I don't boot on a different server I get the error "Internal error: Failure("The VDI <uuid> is already attached in RW mode; it can't be attached in RO mode!")". The only way I have found to fix that error is to reboot the entire server it was running on previously, which is obviously a huge pain. Currently multipathing is NOT enabled (but it can be, and the same thing still occurs). I have edited much of the iscsid.conf file to work with the timeout settings, but to no avail. In short, my storage fails over properly but XenServer does not keep the connection alive. As a thought, the error that shows up in step 4 above might be the ultimate cause, and fixing that would fix everything? Any help would be appreciated more than you know.
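
    For reference, the open-iscsi knobs most commonly tuned for fast failover look like this in iscsid.conf (values are illustrative, and XenServer may manage this file itself); replacement_timeout in particular governs how long the initiator holds I/O before erroring it back to the file system, which is what sends the guests read-only:

      node.session.timeo.replacement_timeout = 15
      node.conn[0].timeo.noop_out_interval = 5
      node.conn[0].timeo.noop_out_timeout = 10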

    Read the article
