Search Results

Search found 10463 results on 419 pages for 'task tracking'.


  • Backing up my data causes my server to crash using Symantec Backup Exec 12, or How I Came to Loathe

    - by Kyle Noland
    I have a Dell PowerEdge 2850 running Windows Server 2003. It is the primary file server for one of my clients. I have another server, also running Windows Server 2003, that acts as the core media server for Symantec Backup Exec 12. I recently upgraded from Backup Exec 11d to 12. This upgrade was necessary because we also just upgraded from Exchange 2003 to Exchange 2007. After the upgrade I had to push-install the new version 12 Backup Exec Remote Agents to each of the servers I am backing up (about 6 total). 5 of my servers are doing just fine, faithfully completing backups every night. My file server routinely crashes.

    Observations: When the server crashes, it does not blue screen; it just locks up completely. Even the mouse is unresponsive. If you leave the server locked up long enough, it will eventually reboot itself and hang on the Windows splash screen. There is absolutely zero useful Event Viewer evidence of a problem: the logs go from routine logging straight to an Unexplained Shutdown event the next morning, when I have to hard reset the server to get it to boot. 90% of the time the server does not boot cleanly; it hangs on the Windows splash screen. I don't have any light to shed there - when the server hangs, all I can do is hard reset it and try again. Even after a successful boot and a chkdsk /r operation, if you reboot the machine, you have a 90% chance it won't come back up cleanly.

    The back story: This server started crashing during nightly backups about a month ago. I tried everything I could think of to troubleshoot the problem and eventually had to give up, because I could not keep coming to the office at 4 AM to try to get the server back online. One Friday I got lucky and the server stayed up for its entire full backup. I took this opportunity to restore the full backup to a temporary server I set up, and switched all my users to the temporary server. Then I reloaded the ailing file server. I kept all my users on the temporary file server for about 3 weeks. I installed the same Backup Exec Remote Agent and Trend Micro A/V client on the temporary server that I was using on the regular file server. During this time, I had absolutely no problems backing up the temporary server. I tested the reloaded file server extensively: I rebooted it once an hour, every day, for 3 weeks, trying to make it fail. It never did. I felt confident that the reload was the answer to my problems. I moved all of the data from the temporary server back to the regular server. I got 3 nightly backups out of it before it locked up again and started the familiar failure-to-boot-cleanly behavior.

    This weekend I decided to monitor the file server through the entire backup job. I RDPed into the file server and also into the server running Backup Exec. On the file server I opened Task Manager so I could view the processes and watch CPU and memory usage. Everything was running smoothly for about 60 GB worth of backup. Then I noticed that the byte count of the backup job in Backup Exec had stopped progressing. I looked back over at my RDP session into the file server, and I was still getting real-time updates about CPU and memory usage - both nearly 0%, which is unusual; backups usually hover around 40% usage for the duration of the backup job. Let me reiterate this point: the screen was refreshing and I was getting real-time Task Manager updates - until I clicked on the Start menu. The screen went black and the server locked up. In truth, I think the server had already locked up and the video card just hadn't figured it out yet. I went back into my bag of tricks: driving to the office and hard resetting the server over and over again when it hangs at the Windows splash screen. I did this for 2 hours without getting a successful boot. I started panicking, because I did not have a decent backup to use to get everything back onto the working temporary file server. Once I had exhausted everything I knew to do, I took a deep breath, booted to the Windows Server 2003 CD and performed a repair installation of Windows. The server came back up fine, with all of my data intact. I can now reboot the server at will and it will come back up cleanly. The problem is that I'm afraid that as soon as I try to back that data up again I will be back at square one.

    So let me sum things up. Here is what I've done so far to troubleshoot this server:
    - Deleted and recreated the RAID 5 sets. Initialized the drives.
    - Reloaded the server with a fresh Server 2003 install.
    - Confirmed with Dell that I have installed the latest, Dell-approved BIOS and NIC drivers.
    - Uninstalled / reinstalled the Backup Exec Remote Agent.
    - Uninstalled the Trend Micro A/V client.
    - Configured the server not to reboot itself after a blue screen so I can see any stop error. I used to think the server was blue screening, but since I enabled this setting I now know that the server just completely locks up.
    - Ran chkdsk /r from the Windows Recovery Console. Several errors were found and corrected, but this did not help my problem.

    Help confirm or deny the following assumptions:
    - There are two problems at work here: why the server is locking up in the first place, and why the server won't boot cleanly after a lockup.
    - This is ultimately a software problem. The server works fine and can be rebooted cleanly all day long - until the first lockup - following a fresh OS load or even a repair installation.
    - This is not a problem with Backup Exec in general. All of my other servers back up just fine. For the record, all of the other servers run Server 2003, and some of them house more data than the file server in question here.

    Any help is appreciated. The irony is almost too much to bear: backing up my data is what is jeopardizing it.

    Read the article

  • Subversion vision and roadmap

    - by gbjbaanb
    Recently C. Michael Pilato of the core Subversion team posted a mail to the Subversion dev mailing list suggesting a vision and roadmap for the future of Subversion. Naturally, he wanted as much feedback and response as possible, which is why I'm posting this here - to elicit some suggestions and contributions from you, the administrators of Subversion. Any comments are welcome, and I shall feed back a synopsis with a link to this question to the dev mailing list. Similarly, I've created a post on StackOverflow to get feedback from the programmer/user side of things too. So, without further ado:

    Vision. The first thing in his "vision statement" is: "Subversion has no future as a DVCS tool. Let's just get that out there. At least two very successful such tools exist already, and to squeeze another horse into that race would be a poor investment of energy and talent." There's no need to suggest distributed features for Subversion. If you want a DVCS, there should be no ill feeling if you migrate to Git, Mercurial or Bazaar. As he says, it's pointless trying to make SVN like them when they already exist, especially when there are different usage patterns that SVN should be targeting. The vision for Subversion is: "Subversion exists to be universally recognized and adopted as an open-source, centralized version control system characterized by its reliability as a safe haven for valuable data; the simplicity of its model and usage; and its ability to support the needs of a wide variety of users and projects, from individuals to large-scale enterprise operations."

    Roadmap. Several ideas were suggested as being "very nice to have" and are offered as the starting point of a future roadmap. These are:
    - Obliterate
    - Shelve/Checkpoint
    - Repository-dictated Configuration
    - Rename Tracking
    - Improved Merging
    - Improved Tree Conflict Handling
    - Enterprise Authentication Mechanisms
    - Forward History Searching
    - Log Message Templates
    If anyone has suggestions to add, or comments on these, the Subversion community would welcome all of them.

    Community. And lastly, there was a call for more people to become involved with Subversion development. As with most OSS projects it can be daunting to join, but there is now a push for more to be done to help. If you feel you can contribute, please do so.

    Read the article

  • How do I increase timeout for a cronjob/crontab?

    - by Mohit Ranka
    I have written a script that gets data from Solr for dates within a specified period, and I run the script as a daily cron job. The problem is that the cron job does not complete the task. If I manually run the script (for the same time period), it works well. If I reduce the specified time period, the script runs fine from cron as well. So my guess is that the cron job is timing out while running the script when there is too much data to process. How do I increase the timeout for a cron job? PS - The script I am running from cron is a bash script which runs a Python script.
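    For what it's worth, cron itself does not normally impose a time limit on jobs, so the first step is usually to capture the job's output and see how it actually dies. A minimal sketch of a crontab entry (paths and schedule are assumptions):

      # run at 02:00 daily, logging stdout/stderr so a silent failure leaves a trace
      0 2 * * * /home/mohit/bin/fetch_solr.sh >> /var/log/fetch_solr.log 2>&1

      # if an explicit runtime limit is actually wanted, coreutils' timeout can wrap the script
      0 2 * * * timeout 4h /home/mohit/bin/fetch_solr.sh >> /var/log/fetch_solr.log 2>&1

    Another common culprit is that cron runs with a minimal environment (PATH, HOME, etc.), so a script that works interactively can fail under cron for reasons unrelated to any timeout.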

    Read the article

  • SSIS - SharePoint list data transfer issue

    - by Vicky
    Hi, We are trying to transfer data from an Oracle database (about 60,0000 records) to a SharePoint list using SSIS, but we get the following error when the record count reaches around 19,000: The attempt to add a row to the Data Flow task buffer failed with error code 0xC0047020 and System.ServiceModel.ProtocolException: The remote server returned an unexpected response: (400) Bad Request. Earlier we thought it could be a SharePoint list limit, so we tried removing two of the columns, and then it went fine. That leaves us with one column, of datatype DT_STR and length 400 in Oracle, which might be causing the issue; it is mapped to a SharePoint custom list field of multiline type. We also checked whether the length of the field is the issue, but in the Oracle DB the max length of this column across all records is only 239, so a length issue is also ruled out. Has anyone faced this kind of issue, or knows its cause? Kindly let us know. Thanks and regards, Vicky

    Read the article

  • MSBuild Include Remote File 2008?

    - by ScSub
    TFS 2008, VS 2008. I have a tfsbuild.proj and tfsbuild.msp file in $/MyStuff/TeamBuildTypes/Dev folder. I have a targets file at $/MyStuff/TeamBuildTypes/IncludeFiles/Common/test.xml. test.xml contains an XML fragment that overrides the BeforeGet task. I tried to get the file into my tfsbuild.proj file like this: <Import Project="$/MyStuff/TeamBuildTypes/IncludeFiles/Common/test.xml" /> The build fails because it tries to get the file from a relative path that is way off. How can I specify external/include files from an explicit TFS "remote" path? Thanks.
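    As far as I understand, Team Build only downloads the build type folder itself, so a server path like $/... inside <Import> gets treated as a literal relative path on disk. One hedged workaround - assuming the IncludeFiles tree is also mapped into the build workspace - is to import via a path relative to where tfsbuild.proj lands locally:

      <!-- sketch: assumes $/MyStuff/TeamBuildTypes is mapped locally, so the
           IncludeFiles folder is a sibling of the Dev build type folder -->
      <Import Project="$(MSBuildProjectDirectory)\..\IncludeFiles\Common\test.xml" />

    Alternatively, some teams simply branch or copy the shared targets file into each build type folder so it is always downloaded with the build type.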

    Read the article

  • Get BSD file descriptor from OSX CoreServices objects.

    - by Inso Reiges
    Hello, I am new to OSX user-space development. I've read the documentation and googled before asking, but still have no clue about the following: if I am to use the CoreServices framework to work with files (FSRef, forks, URLs, etc.), will I be able to get a raw BSD file descriptor (a plain int)? If yes, then how can I do that? The thing is, I want to learn to program with the OSX frameworks, but the actual task at hand will require a BSD file descriptor later. Inso.
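    One plausible route - a sketch, not necessarily the only or best one - is to convert the FSRef into a POSIX path with FSRefMakePath and hand that path to open():

      #include <CoreServices/CoreServices.h>
      #include <fcntl.h>
      #include <limits.h>
      #include <unistd.h>

      /* returns an open BSD file descriptor for the file behind an FSRef,
         or -1 on failure; the caller is responsible for close()ing it */
      int fd_from_fsref(const FSRef *ref)
      {
          UInt8 path[PATH_MAX];

          if (FSRefMakePath(ref, path, sizeof(path)) != noErr)
              return -1;
          return open((const char *)path, O_RDONLY);
      }

    Note the assumption that a plain path-based open is acceptable; named forks and other CoreServices-only constructs would need different handling.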

    Read the article

  • Chrome plugin process - npapi plugin

    - by kambamsu
    Hi, I'm writing an NPAPI plugin in Qt. My plugin works perfectly in Firefox and Opera. The problem in Chrome, I guess, is related to the "process-per-plugin" setup. What happens is: when I first open a page, the plugin is injected and all works as expected. But when I navigate from that page to another one, on the new page the plugin seems to get injected, yet even its constructor isn't called. To examine the issue, I tried killing my plugin process via the Chrome task manager before navigating to the new page. When I do this, the plugin works as expected on the 2nd page too. I'm unable to comprehend what is happening here. Any help would be appreciated. Thanks

    Read the article

  • Sending binary data with Indy through TCP\IP, how?

    - by Wodzu
    Hello. How do I send binary data with the Indy components? Which of them is most suitable for this task? I've tried to use TIdTCPClient, but it only allows sending strings. I've found one response to that problem here, but I don't get it. It mentions a Write(TIdBytes) method, but the answer is not clear to me. Did he mean writing to some instance of TIdBytes, and how do I connect that instance with TIdTCPClient? Thanks for any help.
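    For reference, in Indy 10 the raw byte API lives on the connection's IOHandler rather than on TIdTCPClient itself. A minimal sketch (assuming Indy 10; the identifiers are illustrative):

      uses
        Classes, SysUtils, IdTCPClient, IdGlobal;

      // send a buffer of raw bytes over an already-connected client
      procedure SendBuffer(Client: TIdTCPClient; const Data: TIdBytes);
      begin
        Client.IOHandler.Write(Data);
      end;

      // or stream an entire file
      procedure SendFile(Client: TIdTCPClient; const FileName: string);
      var
        FS: TFileStream;
      begin
        FS := TFileStream.Create(FileName, fmOpenRead or fmShareDenyWrite);
        try
          // ASize = 0 sends to end of stream; True prefixes the byte count
          Client.IOHandler.Write(FS, 0, True);
        finally
          FS.Free;
        end;
      end;

    A TIdBytes is just a dynamic byte array, so RawToBytes or BytesOf (both in IdGlobal) can build one from existing memory.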

    Read the article

  • Eclipse uses 100 % CPU randomly

    - by Florian Gutmann
    Hi everyone! My Eclipse sometimes spontaneously starts using 100% of my CPU. I can't figure out why it needs that much CPU. There is no background task like "building workspace" running. After some time the CPU load drops to 0 and everything is back to normal. I can't find any information related to the problem in the workspace/.metadata/.log file. Does anybody have a tip on how I can figure out which part of Eclipse is using the CPU so heavily? Is there a way to get a thread dump of Eclipse? kill -3 on the Eclipse process doesn't do anything. Eclipse version: Galileo JavaEE. Operating system: Linux 2.6.31. Thanks in advance! Florian
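    One hedged approach, assuming a full JDK (not just a JRE) is available: take a thread dump with the JDK's own tools while the CPU is pegged. This sidesteps the kill -3 problem - that signal does produce a dump, but it goes to the console the JVM was started from, which a desktop-launched Eclipse usually doesn't have:

      jps -l                                # find the Eclipse JVM's PID
      jstack <pid> > eclipse-threads.txt    # dump all thread stacks to a file

    Taking two or three dumps a few seconds apart and looking for RUNNABLE threads stuck in the same stack is usually enough to name the busy plugin.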

    Read the article

  • how to precompile sass with gruntjs?

    - by chovy
    There seem to be a few plugins... and I'm using a WebStorm file watcher, which also precompiles individual files. I think this may not be the best way to set up a watcher. I'm running this command now: sass --no-cache --update --stop-on-error --trace ./app/sass:./app/css It seems to conflict with the WebStorm file watcher, which appears to be appending everything to base.css. Can someone tell me what exactly this command is doing vs. a Sass file watcher in WebStorm? What's the best way to work with Sass: precompile my Sass to CSS using a Grunt build task and have file watchers while developing? My base.sass looks like this:

      @charset "UTF-8";
      /* DO NOT EDIT FILES IN ./css. See ./sass instead */
      @import "page";
      @import "modal";
      @import "nav";
      @import "tables";
      @import "forms";
      @import "message";
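    One possible setup - a sketch using the grunt-contrib-sass and grunt-contrib-watch plugins, mirroring the command-line flags above - lets Grunt own both the build and the watching, so the WebStorm file watcher can be disabled entirely:

      module.exports = function (grunt) {
        grunt.initConfig({
          sass: {
            dev: {
              options: { noCache: true, trace: true },
              files: [{
                expand: true,          // compile each file separately,
                cwd: 'app/sass',       // preserving the directory mapping
                src: ['**/*.sass'],
                dest: 'app/css',
                ext: '.css'
              }]
            }
          },
          watch: {
            styles: {
              files: ['app/sass/**/*.sass'],
              tasks: ['sass:dev']
            }
          }
        });

        grunt.loadNpmTasks('grunt-contrib-sass');
        grunt.loadNpmTasks('grunt-contrib-watch');
        grunt.registerTask('default', ['sass:dev', 'watch']);
      };

    The expand/cwd/dest mapping is what keeps each .sass file compiling to its own .css file; two watchers (Grunt and WebStorm) running at once is a plausible cause of output landing in the wrong file.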

    Read the article

  • Value of text box disappears - binding viewmodel to a tab (content control)

    - by Eli Perpinyal
    Based on the MVVM example by Josh Smith, I have implemented the multi-tab option, which binds each tab to a different view model using a simple DataTemplate that maps a view model to a view. <DataTemplate DataType="{x:Type fixtureVM:SearchViewModel}"> <SearchVw:SearchView/> </DataTemplate> The issue that I'm having is that when I switch tabs and then switch back again, the value in the textbox disappears. When I bind the Text of the textbox to a value in the ViewModel, it does not disappear. This is fine, and I can overcome this, but I am having another issue: for example, the position of the scroll bar in a grid is lost once the tab loses focus. Why is the value disappearing? I'm assuming it is a WPF subsystem task that cleans up resources!? How can I avoid this? I also feel it might be slowing down my app.
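    A likely explanation, hedged since the full repro isn't shown: when a ContentControl's content changes, WPF discards the old visual tree built from the DataTemplate, so any state that lives only in the controls - uncommitted text, scroll offsets, selection - is destroyed with it and rebuilt from scratch on the way back. State survives a tab switch only if it round-trips through the view model. A minimal sketch, with SearchText as a hypothetical view-model property:

      <!-- the view may be recreated on every tab switch; the text survives
           because it is stored on the (long-lived) view model -->
      <TextBox Text="{Binding SearchText, UpdateSourceTrigger=PropertyChanged}" />

    The same idea applies to the scroll position: expose it as a view-model property and bind it (usually via an attached behavior, since ScrollViewer offsets are not directly bindable).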

    Read the article

  • How to marshal a COM-Parameter as VT_ARRAY of VT_RECORD

    - by Oliver Japes
    I've already done some extensive searching, but I can't seem to find anything matching my problem. The task I'm currently working on is to create a WCF wrapper for some DCOM objects. This already works great for the most part, but now I'm stuck with one invocation that expects a VT_ARRAY containing VT_RECORD objects. Marshalling as VT_ARRAY is not a problem, but how can I tell COM that the elements in this array are VT_RECORDs? This is the invocation as I currently use it: InitTestCase(testCaseName, parameterFileName, testCase, cellInfos.ToArray()); The parameter I'm talking about is the last one. It's defined as List<CellInfo>; CellInfo itself is already attributed with Guid("7D422961-331E-47E2-BC71-7839E9E77D39") and ComVisible(true). It's not a struct but a class. This is the condition failing on the native side: if (VT_RECORD == varCellConfig.vt)... Because old software uses these interfaces, changing the native side is not an option. Any ideas?
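    A hedged observation: the interop marshaler maps reference types in a SAFEARRAY to VT_DISPATCH or VT_UNKNOWN, so as long as CellInfo is a class it will likely never arrive as VT_RECORD. VT_RECORD needs a value type plus an explicit MarshalAs hint, so that the marshaler can attach the IRecordInfo describing the struct. A sketch (the interface shape is an assumption):

      using System;
      using System.Runtime.InteropServices;

      [StructLayout(LayoutKind.Sequential)]
      [Guid("7D422961-331E-47E2-BC71-7839E9E77D39")]
      [ComVisible(true)]
      public struct CellInfo
      {
          // plain data fields; reference-type members complicate VT_RECORD marshaling
      }

      public interface ITestRunner
      {
          void InitTestCase(
              string testCaseName,
              string parameterFileName,
              TestCase testCase,
              [MarshalAs(UnmanagedType.SafeArray, SafeArraySubType = VarEnum.VT_RECORD)]
              CellInfo[] cellInfos);
      }

    Turning CellInfo into a struct is of course a change on the managed side, but it leaves the native VT_RECORD check untouched.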

    Read the article

  • Block Google requests to 16k using pf firewall

    - by atmosx
    I'd like to block access to Google search using PF after a threshold of 17500 requests (connections established) in 24h, from a host running FreeBSD 9. What I came up with, after reading pf-faq, is this rule: pass out on $net proto tcp from any to 'www.google.com' port www flags S/SA keep state (max-src-conn 200, max-src-conn-rate 17500/86400) NOTE: 86400 is 24h in seconds. The rule should work, but PF is smart enough to know that www.google.com resolves to 5 different IPs. So my pfctl -sr output gives me this:

      pass out on vte0 inet proto tcp from any to 173.194.44.81 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
      pass out on vte0 inet proto tcp from any to 173.194.44.82 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
      pass out on vte0 inet proto tcp from any to 173.194.44.83 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
      pass out on vte0 inet proto tcp from any to 173.194.44.80 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
      pass out on vte0 inet proto tcp from any to 173.194.44.84 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)

    PF creates 5 different rules, one for each IP that www.google.com resolves to. However I have the sense - without being 100% sure; I didn't have the chance to test it - that the limit 17500/86400 applies to each IP separately. If that's the case - please confirm - then it's not what I want. In pf-faq there's another option called source-track:

    source-track: This option enables the tracking of number of states created per source IP address. This option has two formats:
    - source-track rule - The maximum number of states created by this rule is limited by the rule's max-src-nodes and max-src-states options. Only state entries created by this particular rule count toward the rule's limits.
    - source-track global - The number of states created by all rules that use this option is limited. Each rule can specify different max-src-nodes and max-src-states options, however state entries created by any participating rule count towards each individual rule's limits. The total number of source IP addresses tracked globally can be controlled via the src-nodes runtime option.

    I tried to apply source-track global in the above rule without success. How can I use this option in order to achieve my goal? Any thoughts or comments are more than welcome, since I'm an amateur and don't fully understand PF yet. Thanks
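    A hedged sketch of one way to share a single budget across all five expanded rules: put the hostname in a table and use source-track global, so that every rule drawing on the option shares the same per-source tracking entry. Untested, so treat it as a starting point rather than a confirmed answer:

      table <google> { www.google.com }

      pass out on $net proto tcp from any to <google> port www \
          flags S/SA keep state (source-track global, max-src-conn 200, \
          max-src-conn-rate 17500/86400)

    One caveat worth checking: max-src-conn-rate tracks by *source* address. With a single host behind the rule that is effectively one counter, which is what is wanted here, but the same rule on a gateway would meter each internal client separately.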

    Read the article

  • Ruby on rails - Radrails IDE - mysql issues.

    - by ThomasReggi
    I have been trying to get Ruby on Rails to migrate a database for the good part of today; the problems all seem to stem from the issue below. Can someone please help? If it's a RadRails-specific problem I guess I'll take this to their forums. Something tells me this is an easy fix.

      >rake db:migrate
      (in C:/Users/Thomas/My Documents/Aptana RadRails Workspace/rp)
      !!! The bundled mysql.rb driver has been removed from Rails 2.2. Please install the mysql gem and try again: gem install mysql.
      rake aborted!
      126: The specified module could not be found.   - C:/Ruby/lib/ruby/gems/1.8/gems/mysql-2.8.1-x86-mswin32/lib/1.8/mysql_api.so
      (See full trace by running task with --trace)
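    A hedged reading of the "126: The specified module could not be found" line: the mysql gem's native extension is present, but the MySQL client library it links against (libmysql.dll) isn't where Windows can find it. The usual fix is to copy that DLL next to ruby.exe (the source path below is an example, adjust for your MySQL version):

      gem install mysql
      :: copy the client library from your MySQL installation next to ruby.exe
      copy "C:\Program Files\MySQL\MySQL Server 5.1\bin\libmysql.dll" C:\Ruby\bin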

    Read the article

  • Generating Google Maps markers using Ruby

    - by ischnura
    I would like to do a very simple task: add some markers to a Google Map using a list of addresses from an array. I have been thinking about generating the Google Maps JavaScript API code using Ruby (printf), but this does not seem like a very clean and beautiful solution... I have read about YM4R for Ruby on Rails... my project is pretty simple and I have never worked with Ruby on Rails... I have also never used jQuery... but I am very willing to learn to use these tools :) What do you think would be the best approach to generating the markers?
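    For what it's worth, this doesn't need Rails or jQuery. One low-tech sketch: keep the addresses in a Ruby array and let ERB render the Maps API v3 JavaScript. It assumes a map variable already exists on the page, and that the geocoding quota allows one lookup per address:

      require 'erb'

      addresses = ['1600 Amphitheatre Parkway, Mountain View, CA',
                   '1 Infinite Loop, Cupertino, CA']   # sample data

      template = ERB.new(<<-'JS')
        var geocoder = new google.maps.Geocoder();
        <% addresses.each do |addr| %>
        geocoder.geocode({ address: '<%= addr %>' }, function (results, status) {
          if (status == google.maps.GeocoderStatus.OK) {
            new google.maps.Marker({ map: map,
                                     position: results[0].geometry.location });
          }
        });
        <% end %>
      JS

      puts template.result(binding)

    Templating the JavaScript this way keeps the Ruby side to a dozen lines while avoiding printf-style string assembly.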

    Read the article

  • Web framework for an application utilizing existing database?

    - by tputkonen
    A legacy web application written in PHP and using a MySQL database needs to be rewritten completely. However, the existing database structure must not be changed at all. I'm looking for suggestions on which framework would be most suitable for this task. Language candidates are Python, PHP, Ruby and Java. According to many sources it might be challenging to use Rails effectively with an existing database, and I have not found a way to automatically generate models out of the database. With Django it's very easy to generate models automatically. However, I'd appreciate first-hand experience of its suitability for working with legacy DBs. I'd also appreciate suggestions of other frameworks worth considering.
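    For reference, the Django model generation mentioned above is the inspectdb management command, which introspects an existing schema and prints draft model classes (the output usually needs some hand-tuning, e.g. for primary keys and relations):

      python manage.py inspectdb > myapp/models.py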

    Read the article

  • SharePoint 2007 Central Admin w3wp.exe process consuming 99% CPU

    - by Matrich
    Hi, I have been running an intranet using SharePoint 2007 for over a year and all has been working fine. However, after some time I realized that the intranet portal was slow. Accessing Central Admin from another computer (not the SharePoint server) also became an issue. So I logged onto the SharePoint server itself; it took ages to log in and then was very slow, even on the server, unlike other times. When I checked Task Manager, I found that w3wp.exe was consuming 99% of the CPU. When I restarted the Central Admin app pool, everything came back to normal and all ran well, but after a few minutes (15 or so) it became slow again. I have checked the event logs, and nothing conclusive was there to help me out. Has anyone had this experience, or does anyone have a good resource? Please help. Thanks in advance

    Read the article

  • iphone: the executable was signed with invalid entitlements

    - by numbernine
    I'm trying to install my iPhone app on my device for testing, and whenever I try to build it I get: "The executable was signed with invalid entitlements. The entitlements specified in your application's Code Signing Entitlements do not match those specified in your provisioning profile." Now I've tried adding an Entitlements.plist file and both checking and unchecking get-task-allow. I've added the file name under Code Signing Entitlements under the project and then under the target, both, neither, etc. I've deleted and re-created every app ID, provisioning profile, and certificate. Those all seem valid. This is not an ad-hoc distribution (it's development) and it's not a jailbroken phone. I don't even understand where in the provisioning profile any Code Signing Entitlements are specified..?
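    For a development (debug-on-device) build, the entitlements file generally needs get-task-allow set to true so that it matches a development provisioning profile. A minimal Entitlements.plist might look like this - a sketch, not a guaranteed fix:

      <?xml version="1.0" encoding="UTF-8"?>
      <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
      <plist version="1.0">
      <dict>
          <key>get-task-allow</key>
          <true/>
      </dict>
      </plist>

    (For ad-hoc distribution the same key would be false; a mismatch between this flag and the profile type is a common source of exactly this error.)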

    Read the article

  • What is "Virtual Size" in sysinternals process explorer

    - by robert
    Hi, My application runs for a few hours. There is no increase in any value (VM size, memory) in Task Manager, but after a few hours I get out-of-memory errors. In Sysinternals Process Explorer I see that "Virtual Size" is continuously increasing, and when it reaches around 2 GB I start getting memory errors. So what kind of memory leak is that? How can I demonstrate it with code? Is it possible to reproduce the same thing with a piece of code where none of the memory values increase, but only the Virtual Size in Sysinternals Process Explorer increases? Thanks for any suggestions
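    One way to reproduce exactly that signature - a sketch resting on the assumption that the leak is reserved-but-never-committed address space - is to keep reserving regions with VirtualAlloc. Reserved pages consume no physical memory and no page file, so Task Manager's counters stay flat while Virtual Size climbs toward the 2 GB user address space limit of a 32-bit process:

      #include <windows.h>

      int main(void)
      {
          for (;;) {
              /* MEM_RESERVE claims address space without committing memory */
              void *p = VirtualAlloc(NULL, 1 << 20, MEM_RESERVE, PAGE_NOACCESS);
              if (p == NULL)
                  break;      /* address space exhausted: allocations now fail */
              Sleep(10);
          }
          return 0;
      }

    In a real application the same effect typically comes from reserving memory (or leaking mapped views, thread stacks, or heap reservations) without ever committing or releasing it.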

    Read the article

  • AnkhSVN: moving a project to another repo

    - by pcampbell
    My task is to move this VS solution and projects to another SVN server. I'm working with Visual Studio 2010 RC1 and AnkhSVN 2.1.7819. Currently the files are all bound to a repo at C:\Repositories\foo. I'd like to move them to http://someSite/svn/foo. I'm presented with this error message: Repository UUID '152c39db-5799-4234-85f2-074004a6fcad' doesn't match expected UUID '6c83444d-7f93-d64a-b0a0-23283495cf17' Questions: How can I avoid this message in AnkhSVN? Are there better solutions for moving the source into a repo on the new target? How can I get Ankh to 'forget' the repo at C:\Repositories\ forever?
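    The UUID mismatch means the two repositories are genuinely different, so a plain relocate cannot work. One hedged route is to move the history with svnadmin dump/load - loading into a freshly created, empty repository normally adopts the dump's UUID - and then point the working copy at the new URL (paths below reuse the ones from the question; the server-side path is an assumption):

      svnadmin dump C:\Repositories\foo > foo.dump
      # on the new server, into a brand-new empty repository:
      svnadmin load /var/svn/foo < foo.dump
      # then relocate the working copy:
      svn switch --relocate file:///C:/Repositories/foo http://someSite/svn/foo

    If keeping history doesn't matter, the blunt alternative is to svn export the working copy, import it into the new repository, and check out fresh - after which Ankh has nothing left to remember about the old repo.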

    Read the article

  • project hours worked to be sum of tasks hours worked.

    - by silverkid
    I have a SharePoint list called Project. This list has a column called Hours Worked. I also have a list called Tasks; this list too has a column called Hours Worked. The Tasks list also has a lookup field where we select a Project ID from the Project list, so for each project we can have many tasks. Now, Tasks list items are created by individual users, and I have to create a mechanism such that Hours Worked in the Project list is always the sum of Hours Worked across that project's tasks. How can I achieve this?
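    One common WSS 3.0 approach - sketched below with assumed list and field names ("Tasks", "Project", "Hours Worked", "Project ID") - is an item event receiver bound to the Tasks list that recomputes the project total whenever a task changes:

      using System;
      using Microsoft.SharePoint;

      public class TaskHoursReceiver : SPItemEventReceiver
      {
          public override void ItemAdded(SPItemEventProperties properties)
          {
              UpdateProjectTotal(properties);
          }

          public override void ItemUpdated(SPItemEventProperties properties)
          {
              UpdateProjectTotal(properties);
          }

          private void UpdateProjectTotal(SPItemEventProperties properties)
          {
              using (SPWeb web = properties.OpenWeb())
              {
                  // which project does the changed task point at?
                  var projectRef = new SPFieldLookupValue(
                      properties.ListItem["Project ID"] as string);

                  // sum Hours Worked over all tasks of that project
                  double total = 0;
                  foreach (SPListItem task in web.Lists["Tasks"].Items)
                  {
                      var r = new SPFieldLookupValue(task["Project ID"] as string);
                      if (r.LookupId == projectRef.LookupId)
                          total += Convert.ToDouble(task["Hours Worked"] ?? 0);
                  }

                  // write the total back to the project item
                  SPListItem project =
                      web.Lists["Project"].GetItemById(projectRef.LookupId);
                  project["Hours Worked"] = total;
                  project.SystemUpdate();   // avoid bumping version/modified info
              }
          }
      }

    A production version would also handle ItemDeleted, use a CAML query instead of iterating every task, and disable event firing while writing to avoid loops; the sketch only shows the shape of the mechanism.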

    Read the article

  • Large file upload into WSS v3

    - by Rubens Farias
    I built a WSSv3 application which uploads files in small chunks; when each data piece arrives, I temporarily keep it in a SQL 2005 image data type field, for performance reasons**. The problem comes when the upload ends: I need to move the data from SQL Server to the SharePoint document library through the WSSv3 object model. Right now, I can think of two approaches:

      SPFileCollection.Add(string, (byte[])reader[0]); // OutOfMemoryException

    and

      SPFile file = folder.Files.Add("filename", new byte[]{ });
      using (Stream stream = file.OpenBinaryStream())
      {
          // ... init vars and stuff ...
          while ((bytes = reader.GetBytes(0, offset, buffer, 0, BUFFER_SIZE)) > 0)
          {
              stream.Write(buffer, 0, (int)bytes); // Timeout issues
          }
          file.SaveBinary(stream);
      }

    Are there any other ways to complete this task successfully?

    ** Performance reasons: if you try to write every chunk directly to SharePoint, you'll notice performance degradation as the file grows (over 100 MB).

    Read the article

  • Core Location Best Placement and User Interruption

    - by b.dot
    Hi All, My application uses Core Location in three different views, and it's working perfectly. In my first view, I subclass CLLocationManager and use protocol methods to deliver location updates to the calling class. Before I install the framework and code in my other classes, I was wondering: Is the protocol method the best way? What happens to the Core Location execution if the user exits the view or quits the app while it's trying to get a location fix? Is the location task terminated, with the GPS system turned off immediately? If the user simply switches to another view, is it OK to assume that I can start Core Location in the next view without regard to the last? Where should the first update-location call be placed? Should the application delegate instantiate the CLLocationManager class, using a protocol so that it can update any of the views chosen, or should each class instantiate its own manager? Any feedback would be appreciated. Thanks.

    Read the article

  • What is a Perl regex for finding the first non-consecutively-repeating character in a string?

    - by DVK
    Your task, should you choose to accept it, is to write a Perl regular expression that, for a given string, will return the first occurrence of a character that is not consecutively duplicated - in other words, one both preceded AND succeeded by characters different from itself (or the start/end of the string, respectively). Example: IN: aabbcdecc OUT: c Please note that "not consecutively duplicated" does not mean "anywhere in the string". NOTE: it must be a pure regex expression. E.g. the solution that obviously comes to mind (clone the string, delete all the duplicates, and print the first remaining character) does not count, although it solves the problem. The question is inspired by my somewhat off-topic answer to this: http://stackoverflow.com/questions/2548606/perl-function-to-find-first-non-repeating-character-in-a-string
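    One possible pure-regex take - a sketch, and by no means the only answer - skips maximal runs of 2 or more identical characters from the start of the string, then grabs a character that neither continues the previous run nor starts a new one:

      #!/usr/bin/perl
      use strict;
      use warnings;

      my $string = 'aabbcdecc';

      # (?:(.)\1+)*    greedily consume runs of >= 2 identical chars
      # (?:(?!\1)|^)   the next char must not continue the last run
      #                (the ^ branch covers the unset-\1 case at the start)
      # (.)(?!\2)      capture a char that is not followed by itself
      if ($string =~ /^(?:(.)\1+)*(?:(?!\1)|^)(.)(?!\2)/) {
          print "$2\n";    # prints "c"
      }

    The lookaheads do the "preceded AND succeeded by different characters" check in place, without ever deleting anything from the string.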

    Read the article

  • What is the best way to update digital certificates from a server to many clients when the certificate expires

    - by pramodc84
    A friend of mine is working on an issue related to updating expired digital certificates. He is working on a Java application (Swing, I guess) which has 4000 clients. All of them need a digital certificate to connect to the application, and this certificate expires every year. At the end of the year he needs to update the certificate credentials for all clients. Currently this is a manual process, done by connecting to each of the 4000 systems either locally or by remote connection. He has been given the task of converting this to an automated process. Please suggest some solutions.
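    If the clients keep their certificates in Java keystores, one building block for automation - hedged, since the storage format isn't stated in the question - is scripting keytool, which can replace a certificate non-interactively. The alias, paths and password below are placeholders:

      keytool -delete -alias clientcert \
              -keystore /path/to/client/keystore.jks -storepass changeit
      keytool -importcert -noprompt -alias clientcert \
              -file renewed-cert.cer \
              -keystore /path/to/client/keystore.jks -storepass changeit

    Pushed out via whatever remote-execution channel already reaches the 4000 machines (or run by the application itself at startup, fetching the renewed certificate from a central server), this would remove the per-machine manual step.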

    Read the article
