Search Results

Search found 22569 results on 903 pages for 'win32 process'.


  • IIS/ASP.NET performance incident - Perfmon Current Anonymous Users going through the roof but Requests/sec low

    - by Laurence
    Setup: ASP.NET 4.0 website on IIS 6.0 on Windows 2003 64-bit, 8 CPUs, 16GB memory, separate SQL 2005 DB server. We had a serious slowdown today with an otherwise fairly well-performing ASP.NET site. For a period of a couple of hours, all page requests were taking a very long time to be served - e.g. 30-60s compared to the usual 2s. The w3wp.exe's CPU and memory usage on the webserver was not much higher than normal. The application pool was not in the middle of recycling (and it hadn't recycled for several hours). Bottlenecks in the database were ruled out - no blocks were occurring and query results were being returned quickly. I couldn't make any sense of it and set up the following Perfmon counters:

        Current Anonymous Users (for the site in question)
        Get requests/sec (ditto)
        Requests/sec for the ASP.NET application running the site

    Get requests/sec was averaging 100-150. Requests/sec for ASP.NET was averaging 5-10. However, Current Anonymous Users was around 200. And then, as I was watching, Current Anonymous Users began to climb steeply, reaching about 500 within a few minutes. All this time, Get requests/sec and Requests/sec for ASP.NET were, if anything, going down. I did a whole load of things (in a panic!) to try to get the site working, like shutting it down, recycling the app pool, and adding another worker process to the pool. I also extended the expiration time for content (in IIS under HTTP Headers) in an attempt to lower the number of requests for static files (there are a lot of images on the site). The site is now back to normal, and the counters are fairly steady and reading (I added a Current Connections counter):

        Current Anonymous Users  : average 30
        Get requests/sec         : average 100
        Requests/sec for ASP.NET : 5
        Current Connections      : average 300

    I have also observed an inverse relationship between Get requests/sec and Current Anonymous Users. Usually both are fairly steady, but there will be short periods when Get requests/sec goes down dramatically and Current Anonymous Users goes up in a perfect mirror image. Then they flip back to their usual levels. So, my questions are:

    1. Thinking of the original performance issue - if w3wp.exe CPU and memory usage were normal and there was no DB bottleneck, what could explain page requests taking 20 times longer to be served than usual?
    2. What other counters should I be looking at if this happens again?
    3. What explains the inverse relationship between Get requests/sec and Current Anonymous Users?
    4. What could explain Current Anonymous Users going from 200 to 500 within a few minutes?

    Many thanks for any insight into this.

    Read the article

  • Exchange 2007 and migrating only some users under a shared domain name

    - by DomoDomo
    I'm in the process of moving two law firms to hosted Exchange 2007, a service that the consulting company I work for offers. Let's call these two firms Crane Law and Poole Law. These two firms were ONE firm just six months ago, but split. So they have three email domains:

        Old Firm:   craneandpoole.com
        New Firm 1: cranelaw.com
        New Firm 2: poolelaw.com

    Both Firm 1 & Firm 2 use craneandpoole.com email addresses; as for the other two domains, only people who work at the respective firm use that firm's domain name, natch. Currently these two firms are still using the same pre-split internal Exchange 2007 server, where the MX records for all three domains point. Here's the problem. I'm not moving both companies at the same time. I'm moving Crane Law two weeks before Poole Law. During these two weeks, both companies need to be able to:

    1. Continue to receive emails addressed to craneandpoole.com
    2. Send emails between firms, using cranelaw.com and poolelaw.com accounts

    I also have a third problem: I'd like to set up all three domains in my hosting infrastructure well ahead of time, to make my own life easier. What would solve all my problems would be if there were some way I could tell Exchange 2007: even though this domain exists locally, forward the message on to the outside world using the public MX record as the basis for where to send it (or if I could somehow create a static route for it, that would work too). If this doesn't work, then to address point #1 when I migrate Crane Law, I will delete all local references to cranelaw.com on their current Exchange server and set up individual forwards for each of their craneandpoole.com mailboxes to forward to our hosted Exchange server. This will also take care of point #2, since cranelaw.com won't be there locally: when poolelaw.com tries to send to cranelaw.com, public MX records will be used for mail routing decisions and mail will go to my hosted Exchange. The bummer of that, though, is that I won't be able to set up poolelaw.com ahead of time in hosted Exchange and will have to wait to do it on the day of the move :( Sorry for the long and confusing post. Just wondering if there is a better or simpler way to do what I want? Three-tier forests and that kind of thing are out; this is just a two-week window where they won't be in the same place.

    Read the article

  • When running a shell script, how can you protect it from overwriting or truncating files?

    - by Joseph Garvin
    If, while an application is running, one of the shared libraries it uses is written to or truncated, the application will crash. Moving the file or removing it wholesale with 'rm' will not cause a crash, because the OS (Solaris in this case, but I assume this is true on Linux and other *nix as well) is smart enough not to delete the inode associated with the file while any process has it open. I have a shell script that performs installation of shared libraries. Sometimes it may be used to reinstall versions of shared libraries that were already installed, without an uninstall first. Because applications may be using the already installed shared libraries, it's important that the script is smart enough to rm the files or move them out of the way (e.g. to a 'deleted' folder that cron could empty at a time when we know no applications will be running) before installing the new ones, so that they're not overwritten or truncated. Unfortunately, an application recently crashed just after an install. Coincidence? It's difficult to tell. The real solution here is to switch over to a more robust installation method than an old gigantic shell script, but it'd be nice to have some extra protection until the switch is made. Is there any way to wrap a shell script to protect it from overwriting or truncating files (and ideally have it fail loudly), while still allowing them to be moved or rm'd? Standard UNIX file permissions won't do the trick because you can't distinguish moving/removing from overwriting/truncating. Aliases could work, but I'm not sure what the full set of commands to alias would be. I imagine something like truss/strace, except that before each action it checks against a filter whether to actually do it. I don't need a perfect solution that would work even against an intentionally malicious script. Ideas I have so far (a rough sketch of the first and third ideas follows this list):

    1. Alias cp to GNU cp (not the default since I'm on Solaris) and use the --remove-destination option.
    2. Alias install to GNU install and use the --backup option. It might be smart enough to move the existing file to the backup file name rather than making a copy, thus preserving the inode.
    3. "set noclobber" in ~/.bashrc so that I/O redirection won't overwrite files.
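
    For reference, a minimal sketch of the replace-by-rename idea, assuming GNU coreutils is available (often installed as gcp on Solaris); the library paths below are placeholders:

        # Move the old library aside first - the inode survives, so running
        # processes keep a valid mapping - then drop the new file in place.
        set -o noclobber       # bash: make > fail instead of truncating
        mv /opt/app/lib/libfoo.so /opt/app/deleted/libfoo.so.$$
        cp new/libfoo.so /opt/app/lib/libfoo.so

        # Alternatively, GNU cp can unlink the destination before writing,
        # which also avoids truncating the mapped file:
        gcp --remove-destination new/libfoo.so /opt/app/lib/libfoo.so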

    Read the article

  • Proper Imaging Procedures to Restore and Deploy Image with Separate System Reserved Partition

    - by alharaka
    UPDATE: As per my experience here, no one responded. If I do not hear back from TechNet forum members about it, I will post a bounty here, if it makes a difference. I have banged my head against a wall for what seems like all week. I am going to explain my simple procedure, and how none of it, absolutely none, seems to work afterward, despite the few alternatives and everyone on the internet assuming this is how to do it.

    Diskpart commands to create the FS structure:

        REM Select the disk targeted for deployment.
        REM
        REM NOTE: Usually disk 0, but drive failure can make it external USB
        REM media. This will erase the drive regardless!
        select disk 0
        REM Remove previous formatting.
        clean
        REM Create System Reserved partition bootloader and files.
        create partition primary size=100
        REM Format the volume
        format fs=ntfs label="System Reserved" quick override noerr
        REM Assign the System Reserved partition the C: mount for now
        assign letter=C
        REM The main system partition, size not specified to occupy whole drive.
        create partition primary
        REM Format the volume
        format fs=ntfs quick override noerr
        REM Assign the OS partition the D: mount for now
        assign letter=D
        REM Make this the active/bootable partition.
        sel disk 0
        sel partition 1
        active
        REM Close out the diskpart session.
        exit

    Now, I thought this was madness, but it turns out the System Reserved partition and the standard "System Partition" (C:, commonly both the boot and system volumes, where you find the Windows directory AND the bootmgr/ntldr boot files - this is where Windows 7 diverges), as mounted in the Windows PE session where I run these commands, do not matter. See reference here. Since this needs to be BitLocker-ready, enter this crappy System Reserved partition: a separate 100MB of awesome that goes before the regular boot volume. I do this, then I proceed to the next step.

    Deploy the System Reserved and normal system images:

        REM C is still the "System Reserved Partition", and the image is just like it sounds.
        imagex /apply G:\images\systemreserved.wim 1 C:
        REM D is now what will be the C: system partition on reboot, supposedly.
        imagex /apply G:\images\testimage.wim 1 D:

    Reboot the system. Now, the images I just captured should look good. This is not even sysprepped, but reapplying the same fscking image I prepared on the same reference workstation hours before. Problem is, I get 0xc000000e "could not detect the accessible boot device \Windows\system32\winload.exe" or different kinds of nonsense revolving around being able to find the boot volume with all the right files. I tried different variations of things, and now none of them work. I tried repairs with bcdboot, with a fresh System Reserved partition or not, bootrec, and manually editing the damn BCD store with bcdedit. I tried finalizing the above process with and without bootsect /nt60 C: /force. I need to wrap up and automate this procedure. What am I doing wrong that does not make the image happy, but really just miserable?

    Read the article

  • Remotely from Chrome or IE page loads in ~60 seconds, from Firefox or IE on local machine - instantly

    - by Janis Veinbergs
    The problem: if I access SharePoint from Windows 7 with IE8 or Chrome 5, I must wait about a minute to get a response. If I use another Windows 7 machine with IE8, it's just the same - wait a MINUTE. If I use Firefox 3.6 on a W7 machine, the page opens up instantly. Switch to the IE rendering engine in Firefox, and you will have to wait just as with IE. I tried IE8 on XP SP3 - the page opens up instantly. I tried IE8 on Windows Server 2003 SP2 (the machine SharePoint is hosted on) - the page opens up instantly.

    IIS6 logs: I made a request almost instantly from all 3 browsers and this is what shows up in the IIS logs (first 2 entries for each browser).

    Chrome (IIS saw the first Chrome request when I hit Enter in the browser, but I had to wait a long time for things to move on):

        2010-06-01 05:46:04 W3SVC1794621940 192.168.0.9 GET /sapulces - 80 - 192.168.0.186 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/533.4+(KHTML,+like+Gecko)+Chrome/5.0.375.55+Safari/533.4 401 2 2148074254
        2010-06-01 05:47:07 W3SVC1794621940 192.168.0.9 GET /sapulces - 80 - 192.168.0.186 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/533.4+(KHTML,+like+Gecko)+Chrome/5.0.375.55+Safari/533.4 401 1 0
        ... etc...

    Firefox (all instant):

        2010-06-01 05:46:06 W3SVC1794621940 192.168.0.9 GET /sapulces - 80 - 192.168.0.186 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+lv;+rv:1.9.2.3)+Gecko/20100401+Firefox/3.6.3 401 2 2148074254
        2010-06-01 05:46:06 W3SVC1794621940 192.168.0.9 GET /sapulces - 80 - 192.168.0.186 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+lv;+rv:1.9.2.3)+Gecko/20100401+Firefox/3.6.3 401 1 0
        ... etc...

    IE (I hit Enter at 05:46:06, but these are the first entries in the IIS logs):

        2010-06-01 05:47:08 W3SVC1794621940 192.168.0.9 GET /sapulces - 80 - 192.168.0.186 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+6.1;+Trident/4.0;+SLCC2;+.NET+CLR+2.0.50727;+.NET+CLR+3.5.30729;+.NET+CLR+3.0.30729;+Media+Center+PC+6.0;+Tablet+PC+2.0;+.NET+CLR+1.1.4322;+.NET4.0C;+.NET4.0E) 401 1 0
        2010-06-01 05:47:08 W3SVC1794621940 192.168.0.9 GET /sapulces - 80 - 192.168.0.186 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+6.1;+Trident/4.0;+SLCC2;+.NET+CLR+2.0.50727;+.NET+CLR+3.5.30729;+.NET+CLR+3.0.30729;+Media+Center+PC+6.0;+Tablet+PC+2.0;+.NET+CLR+1.1.4322;+.NET4.0C;+.NET4.0E) 401 1 0
        ... etc...

    There is nothing to see in the event logs.

    The question: a similar question has been asked, but there is no response there, and I'm trying to access the page without SSL, and this happens even on GET requests. Where do I look? Where would the problem be? Browser? OS? I don't even know what to think.

    Just a note about Chrome's process isolation: I found it sad that while I was waiting that minute with Chrome, I could not use any other tab (I could switch, but I could not, for example, scroll or use any controls).
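
    One way to take the browsers out of the equation is to time the raw HTTP exchange with curl - a hedged sketch; the account, domain, and URL below are placeholders, and --ntlm assumes the site uses integrated Windows authentication:

        curl --ntlm -u 'DOMAIN\user' -o /dev/null -s \
             -w "dns %{time_namelookup}s connect %{time_connect}s first-byte %{time_starttransfer}s total %{time_total}s\n" \
             http://sharepoint-host/sapulces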

    Read the article

  • init never reaping zombie/defunct processes

    - by st9
    Hi, on my Fedora Core 9 webserver with kernel 2.6.18.8, init isn't reaping zombie processes. This would be bearable if it weren't for the process table eventually reaching an upper limit where no new processes can be allocated. Sample output of ps -el | grep 'Z':

        F S UID   PID  PPID C PRI NI ADDR SZ WCHAN TTY     TIME CMD
        5 Z  0   2648     1 0  75  0 -     0 exit  ?   00:00:00 sendmail <defunct>
        1 Z 51   2656     1 0  75  0 -     0 exit  ?   00:00:00 sendmail <defunct>
        1 Z  0   2670     1 0  75  0 -     0 exit  ?   00:00:02 crond <defunct>
        4 Z  0   2874     1 0  82  0 -     0 exit  ?   00:00:00 mysqld_safe <defunct>
        5 Z  0  28104     1 0  76  0 -     0 exit  ?   00:00:00 httpd <defunct>
        5 Z  0  28716     1 0  76  0 -     0 exit  ?   00:00:06 lfd <defunct>
        5 Z 74  10172     1 0  75  0 -     0 exit  ?   00:00:00 sshd <defunct>
        5 Z  0  11199     1 0  75  0 -     0 exit  ?   00:00:00 sendmail <defunct>
        5 Z  0  11202     1 0  75  0 -     0 exit  ?   00:00:00 sendmail <defunct>
        5 Z  0  11205     1 0  75  0 -     0 exit  ?   00:00:00 sendmail <defunct>
        5 Z  0  11208     1 0  75  0 -     0 exit  ?   00:00:00 sendmail <defunct>
        5 Z  0  11211     1 0  75  0 -     0 exit  ?   00:00:00 sendmail <defunct>
        5 Z  0  11240     1 0  75  0 -     0 exit  ?   00:00:00 sendmail <defunct>
        5 Z  0  11246     1 0  75  0 -     0 exit  ?   00:00:00 sendmail <defunct>
        5 Z  0  11249     1 0  75  0 -     0 exit  ?   00:00:00 sendmail <defunct>
        5 Z  0  11252     1 0  75  0 -     0 exit  ?   00:00:00 sendmail <defunct>
        1 Z  0  14106     1 0  80  0 -     0 exit  ?   00:00:00 anacron <defunct>
        5 Z  0  14631     1 0  75  0 -     0 exit  ?   00:00:00 sendmail <defunct>

    Is this an OS bug? A misconfiguration? I'm looking for inspiration as to the source of this problem. Thanks
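
    A quick hedged check, in case it helps others triaging the same thing - confirm the zombies really are parented by PID 1, then nudge init, which on a healthy system is a harmless no-op:

        # list every zombie with its parent PID
        ps -eo pid,ppid,stat,comm | awk '$3 ~ /^Z/'

        # SIGCHLD tells PID 1 a child has exited; a wedged init sometimes
        # resumes reaping after this (assumption - behavior varies by init)
        kill -s CHLD 1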

    Read the article

  • configuring slime in emacs

    - by CodeKingPlusPlus
    I am in the process of configuring SLIME for Emacs. So far I have read about basic functionality for Common Lisp, such as C-c C-q, which invokes the command slime-close-parens-at-point and places the proper number of parens at point. Another command that seemed cool is invoked by C-c C-c: it passes the code you are editing in a buffer to the REPL and "compiles" it. Why won't these commands work for me? Anyway, I have downloaded slime via M-x list-packages and do not seem to have this functionality (C-h w and then any of these commands tells me that the commands do not exist). So, I saw a bunch of other slime extensions such as 'slime-repl', 'slime-fuzzy' and 'hippie-expand-slime'. I again used M-x list-packages and downloaded them. Still I did not have these commands. Here is the content of my Emacs file relevant to slime:

        ;;; Common Lisp and Slime
        (add-to-list 'load-path "/home/s2s2/.emacs.d/elpa/slime-20130626.1151")
        (add-to-list 'load-path "/home/s2s2/.emacs.d/elpa/slime-repl-201000404")
        (add-to-list 'load-path "/home/s2s2/.emacs.d/elpa/hippie-expand-slime-20130226.1656")
        (add-to-list 'load-path "/home/s2s2/.emacs.d/elpa/slime-fuzzy-20100404")
        (require 'slime)
        (setq slime-lisp-implementations
              `((sbcl ("/usr/bin/sbcl"))
                (ecl ("/usr/bin/ecl"))
                (clisp ("/usr/bin/clisp" "-q -I"))))
        (require 'slime-repl)
        (require 'slime-fuzzy)
        (require 'hippie-expand-slime)

    When I execute M-x slime I get the following message in the inferior-lisp buffer, where I can execute Common Lisp code (however, shouldn't this be the slime-repl, since I required it?):

        STYLE-WARNING: redefining EMACS-INSPECT (#<BUILT-IN-CLASS T>) in DEFMETHOD
        STYLE-WARNING: Implicitly creating new generic function STREAM-READ-CHAR-WILL-HANG-P.
        WARNING: These Swank interfaces are unimplemented:
         (DISASSEMBLE-FRAME SLDB-BREAK-AT-START SLDB-BREAK-ON-RETURN)
        ;; Swank started at port: 46533.

    Then a slime-error buffer is created with the contents:

        Invalid protocol message:
        Symbol "CREATE-REPL" not found in the SWANK package.
        Line: 1, Column: 28, File-Position: 28
        Stream: #<SB-IMPL::STRING-INPUT-STREAM {10056B9C33}>
        (:emacs-rex (swank:create-repl nil) "COMMON-LISP-USER" t 5)

    How should I modify my Emacs file to give me the functionality of those commands? In my Emacs file, am I not loading the necessary files? Do I need to install an additional package? If you need more information let me know! All help is much appreciated!
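
    A hedged guess at the root cause: the "CREATE-REPL not found in the SWANK package" error is the classic symptom of the elisp side and the Common Lisp (swank) side coming from different SLIME releases - here a 2013 slime is mixed with 2010-era slime-repl and slime-fuzzy contribs. A sketch of one fix, assuming the ELPA directory names above are accurate, is to remove the stale contribs and let the single slime package provide everything:

        rm -rf ~/.emacs.d/elpa/slime-repl-201000404 \
               ~/.emacs.d/elpa/slime-fuzzy-20100404
        # then drop the matching (require 'slime-repl) / (require 'slime-fuzzy)
        # lines from the Emacs config and restart Emacs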

    Read the article

  • Intermittently, IIS7 requests get stuck in WindowsAuthenticationModule

    - by Richard Beier
    We're running an IIS7 server hosting several dozen websites. Several of these websites are all part of the same legacy app we've developed. These sites all run the same code and run in the same app pool. Roughly once a month over the past few months, we've found that all requests for this app pool start hanging indefinitely. When this happens, we receive an alert and we recycle the app pool. After that, the sites start working again. This only ever affects this one app pool - never any others on the same server. A couple of times, before recycling the pool, I've looked at the currently-executing requests in the worker process. They all show up as executing inside the WindowsAuthenticationModule. Which is strange, because the vast majority of the application does not require authentication. There is a small admin section which uses Windows auth... but all the other requests should be anonymous. Does anyone have any idea as to what might be causing this?

    There are several unusual things about the way these sites are set up. As I mentioned, they all run the same code - multiple sites point at the same physical directory. The only difference is the host header bindings. I'm not sure why there isn't just one site with all the host headers, but that's how it works. In several of these sites, the same physical directory is mapped at two levels - as the root of the site and again as an application within the site. So if a user goes to http://oursite.com/index.aspx, that maps to c:\files\oursite\index.aspx. If a user goes to http://oursite.com/foo/index.aspx, that also maps to c:\files\oursite\index.aspx. I think there is code which looks at the request URL and handles the two requests differently. This is strange because the same web.config ends up being interpreted as a site config file, and also as an application config file within the site. I don't know if this might be related to the authentication problem.

    If we can't find the cause, we're thinking of a few workarounds we could try:

    1. Move the admin section into a separate site, and give the client a new admin URL. Run that separate site in its own app pool. Then, in the web.config shared by all the other sites, remove the WindowsAuthenticationModule. That way there should be no possibility of a hang within the WindowsAuthenticationModule.
    2. Try running all these sites in the classic pipeline instead of the integrated pipeline. They were working fine on our old IIS6 server...
    3. (If we get desperate) Set up a watchdog script which monitors the sites and auto-recycles the app pool when it detects that requests are getting stuck.

    What do you think? Thanks for your help, Richard

    Read the article

  • Mod_rewrite with UTF-8 accents, multiviews, .htaccess

    - by GuruJR
    Problem: with mod_rewrite, MultiViews & the Apache config.

    Introduction: the website is in French, and I had problems with Unicode encoding and mod_rewrite within PHP without MultiViews. The old server was not handling UTF-8 correctly (somewhere between PHP, Apache mod_rewrite or MySQL). I updated the server to Ubuntu 11.04; the process was destructive:

    1. Lost all files in /var/www/ (the site was mainly 2 files, index.php & static.php)
    2. Lost the site-specific .htaccess file
    3. Lost the MySQL DBs
    4. Lost the old apache.conf

    What I have done so far - what works:

    1. Set up GnuTLS for SSL, Listen 443 (port.conf)
    2. Created 2 vhosts in one file for :80 and :443 (website.conf)
    3. Enforced SSL, redirecting :80 to :443 with a mod_rewrite redirect
    4. Tried to set UTF-8 everywhere: charset and collation, DB connection, mb_ settings, names utf-8 and utf8_unicode_ci, everywhere (PHP, MySQL, Apache)
    5. To be sure to serve files as UTF-8, I enabled MultiViews and renamed the files index.php.utf8.fr and static.php.utf8.fr
    6. With MultiViews enabled, multibyte accents in URLs work
    7. SSL TLS 1.0 works

    What doesn't work:

    1. With MultiViews enabled, mod_rewrite works for only one of my RewriteRules
    2. With MultiViews disabled, I lose access to the document root as "Forbidden"
    3. With MultiViews disabled, I lose multibyte (single-character accent) URLs
    4. The Apache default server config is full of settings (what can I safely remove?)

    These are my configuration files so far. The :80 vhost file (this one works; you can use it to force a redirect to https):

        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
        LanguagePriority fr

    The :443 vhost file (GnuTLS is working):

        DocumentRoot /var/www/x
        ServerName example.com
        ServerAlias www.example.com
        <Directory "/var/www/x">
            allow from all
            Options FollowSymLinks +MultiViews
            AddLanguage fr .fr
            AddCharset UTF-8 .utf8
            LanguagePriority fr
        </Directory>
        GnuTLSEnable on
        GnuTLSPriorities SECURE:+VERS-TLS1.1:+AES-256-CBC:+RSA:+SHA1:+COMP-NULL
        GnuTLSCertificateFile /path/to/certificate.crt
        GnuTLSKeyFile /path/to/certificate.key
        <Directory "/var/www/x/base">
        </Directory>

    The basic .htaccess file:

        AddDefaultCharset utf-8
        Options FollowSymLinks +MultiViews
        RewriteEngine on
        RewriteRule ^api/$ /index.php.utf8.fr?v=4 [L,NC,R]
        RewriteRule ^contrib/$ /index.php.utf8.fr?v=2 [L,NC,R]
        RewriteRule ^coop/$ /index.php.utf8.fr?v=3 [L,NC,R]
        RewriteRule ^crowd/$ /index.php.utf8.fr?v=2 [L,NC,R]
        RewriteRule ^([^/]*)/([^/]*)$ /static.php.utf8.fr?VALUEONE=$2&VALUETWO=$1 [L]

    So my question is: what's wrong, what am I missing? Are there extra settings I need to kill from the Apache defaults in order to be sure all parts are using UTF-8 at all times, and so that my mod_rewrite rules work with accents? Thank you all in advance for your help; I will follow this question closely to add any needed information.
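
    When debugging this kind of thing, it can help to take the browser out of the loop and hit the rewrite rules with explicit percent-encoded UTF-8 - a hedged sketch, with example.com standing in for the real host (%C3%A9 is the UTF-8 encoding of 'é'):

        curl -kv "https://example.com/coop/"
        curl -kv "https://example.com/cat%C3%A9gorie/page"
        # -k skips certificate checks, -v shows which rule (external
        # redirect vs internal rewrite) actually fired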

    Read the article

  • Windows Server 2008 Migration - Did I miss something?

    - by DevNULL
    I'm running into a few complications in my migration process. My main role has been as a Linux / Sun administrator for 15 years, so the Windows Server 2008 environment is a bit new to me, but understandable. Here's our situation and reason for migrating. We have a group of developers who develop VERY low-level software in Visual C with some inline assembler. All the workstations were separate from each other, which caused consistency problems with development libraries, versions, etc. Our goal was to throw them all onto a Windows domain where we can control workstation installations, hotfixes (which can cause enormous problems), software versions, etc. All development workstations are running Windows XP x32 (SP3) and x64 (SP2). I'm running into user permission problems, and I was wondering whether I missed one, two or a handful of things during my deployment. Here is what I have currently done:

    1. Installed and activated Windows Server 2008
    2. Added roles for DNS and Active Directory
    3. Configured DNS with WINS for NetBIOS name usage
    4. Added developers to AD and mapped their shared folders to their profile
    5. Added roles for IIS7 and configured the developers' SVN
    6. Installed MySQL Enterprise Edition for development usage

    Not having a firm understanding of Group Policy, I haven't delved deeply into that realm yet. Problems I'm encountering:

    1. When I configure any XP workstation to log on to our domain, once a user uses their new AD login, everything goes well, except that they have very restrictive permissions. (E.g.: if a user opens any existing file, they don't have write access, except in their documents folder.) Since these guys are working on low system-level events, they need to read/write all files. All I'm looking to restrict is software installations.
    2. Am I correct to assume that I can use WSUS to maintain the domain's hotfixes and updates pushed to the workstations?
    3. I need to map a centralized shared development drive upon the user's login. This is open to EVERYONE. Right now I have the users' folders mapped upon login through their AD profile. But how do I map a share if I've already defined one within their profile in AD?
    4. Do I have to configure and define a group policy for the domain users?
    5. Can I use volume mirroring to mirror/sync two drives on two separate servers, or should I just script an rsync or use an MS sync tool? The drives simply store nightly system images.

    Any responses would be greatly appreciated.

    Read the article

  • server 2008 r2 - wbadmin systemstatebackup - system writer not found in the backup

    - by TWood
    I am trying to manually run a systemstatebackup command on my Server 2008 R2 box, and I am getting error code '2155347997' when I view the backup event log details. The command line tells me that I have log files written to the C:\Windows\Logs\WindowsServerBackup\ path, but I have no files of the .log type there. The command window tells me "System Writer is not found in the backup". However, when I run vssadmin list writers, I find System Writer in the list, and it shows normal status with no last errors stored. I am running this from an elevated command prompt as well as from a logged-on administrator account. My backup target path has permission for NETWORK SERVICE to have full control, and it has plenty of free space. Looking in the event log, I have two VSS error 8194 entries that happen immediately before the Backup error 517, which has error code 2155347997 listed. All three of these errors are a result of trying to run the command for the systemstatebackup. It's my belief that some VSS-related permission is failing and exiting the backup process before it ever gets started. Because of this, the initial code that creates the log files must not be running, and this is why I have no files. When running the systemstatebackup command from the command prompt and watching the WindowsServerBackup directory, I do see that a Wbadmin.0.etl file gets created, but it is deleted when the backup errors out and stops. I have looked online and there are numerous opinions as to the cause of this error. These are the things I have corrected to try and fix this issue before posting here:

    1. The machine runs an HP 1410i smart array controller, but at one time also used an LSI SCSI card. I used networkadminkb.com's KB a467 to find one LSI_SCSI entry in HKLM\System\CurrentControlSet\Services whose start was set to 0x0, and I modified it to 0x3. No changes.
    2. In HKLM\System\CurrentControlSet\Services\VSS\Diag I gave NETWORK SERVICE full control where it previously only had "Special Permission". No changes.
    3. I followed KB2009272 to manually try to fix System Writer.

    These are all of the things I have tried. What else should I look at to resolve this issue? It may be important to note that I run Mozy Pro on this server; that was known in the past to use VSS for copying operations and occasionally threw an error. However, since an update last year those error event log entries have stopped.

    Read the article

  • Partition table corrupted (USB flash drive)

    - by 13ren
    It's an 8 GB Patriot thumb drive, which I've used extensively with lots of data. Today it is detected, but all data is gone. (EDIT: at least some data is still there, but the partition table is gone.)

    EDIT @Sathya (thanks) - here's the relevant output from sudo fdisk -l:

        Disk /dev/sdc: 8019 MB, 8019509248 bytes
        247 heads, 62 sectors/track, 1022 cylinders
        Units = cylinders of 15314 * 512 = 7840768 bytes
        Disk /dev/sdc doesn't contain a valid partition table

    It looks like it is /dev/sdc, with that 8 GB... and no partition table. I tried to mount /dev/sdc (and then ran dmesg | tail):

        /media> sudo mount /dev/sdc mytmp
        mount: wrong fs type, bad option, bad superblock on /dev/sdc,
               missing codepage or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so
        /media> dmesg | tail
        [   24.300000] sdc: unknown partition table
        [   24.320000] sd 2:0:0:0: Attached scsi removable disk sdc
        [   24.370000] usb-storage: device scan complete
        [   26.870000] EXT2-fs error (device sdc): ext2_check_descriptors: Block bitmap for group 1 not in group (block 0)!
        [   26.870000] EXT2-fs: group descriptors corrupted!
        [   50.420000] unhashed dentry being revalidated: .DCOPserver_eeepc-brendanma__0
        [   50.430000] unhashed dentry being revalidated: .DCOPserver_eeepc-brendanma__0
        [   50.430000] unhashed dentry being revalidated: .DCOPserver_eeepc-brendanma__0
        [ 5565.470000] EXT2-fs error (device sdc): ext2_check_descriptors: Block bitmap for group 1 not in group (block 0)!
        [ 5565.470000] EXT2-fs: group descriptors corrupted!

    EDIT @Col: results from testdisk:

        Disk /dev/sdc - 8013 MB / 7642 MiB - CHS 1022 247 62
        Current partition structure:
             Partition                  Start        End    Size in sectors
        Partition sector doesn't have the endmark 0xAA55

    After I hit [Proceed], it says:

        Structure: Ok.
        Keys A: add partition, L: load backup, Enter: to continue

    The "Structure: Ok." seems reassuring... will "A: add partition" make my old data accessible (if it's still there), or will it make a new, fresh partition? Another option is "[ MBR Code ] Write TestDisk MBR code to first sector" - would it be better to do this?

    EDIT: I found that at least some of my data is still on the flash drive, by using the command below and searching for English text in less (like " the "):

        cat /dev/sde | tr -cd '\11\12\40\1540-\176' | less

    (The drive changed from "/dev/sdb" to "/dev/sde" because I connected some extra drives today.) I've learnt that "/dev/sde1" would be the first partition and "/dev/sde" is the whole drive. Because Unix treats these devices just like files, you can use all the ordinary Unix file commands on them, like cat, and then process them like any other stream of data. The tr above removes non-printable characters ("\40" is space, which I wanted to preserve). In less, you can use "/" to search, similar to Vim.

    How can I get my data back (assuming it's still there)? If only the partition table is corrupted, is there a standard "partition recovery tool"? Is there a way to "repartition" without deleting everything?
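
    Before experimenting with any of testdisk's write options, a common precaution is to image the whole stick first so every attempt is reversible - a sketch, assuming roughly 8 GB of free space and that the stick is currently /dev/sde:

        sudo dd if=/dev/sde of=patriot8gb.img bs=1M conv=noerror,sync
        # recovery tools can then work on the copy instead of the live device:
        testdisk patriot8gb.img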

    Read the article

  • PTR Record Troubles

    - by Physikal
    I am having a hell of a time getting our PTR record right. Our current PTR zone looks like this:

        $ttl 38400
        @ IN SOA ns1.domain.com. admin.domain.com. (
                 1268669139
                 10800
                 3600
                 604800
                 38400 )
        xxx.xxx.xxx.in-addr.arpa. IN NS ns2.domain.com.
        xxx.xxx.xxx.in-addr.arpa. IN NS ns1.domain.com.
        97 IN PTR mail.domain.com.
        xxx.xxx.xxx.xxx.in-addr.arpa. IN PTR mail.domain.com.
        97.96/28. IN PTR mail.domain.com

    For some reason the only thing that works is the 97.96/28. line. When this line is in there, intodns.com actually reports that I have a PTR record. If I remove that line, it says I have no PTR. I have followed the instructions from http://www.philchen.com/2007/04/04/configuring-reverse-dns, and when I follow those instructions intodns.com says I have no PTR. When it does work with the 97.96/28. line, the PTR comes back as (from intodns.com):

        97.xxx.xxx.xxx.in-addr.arpa -> mail.domain.com.xxx.xxx.xxx.in-addr.arpa

    Which is, to my knowledge, an incorrect PTR. I want it to just come back as mail.domain.com, without the xxx.xxx.xxx.in-addr.arpa extension. I have tried everything I can think of, but I can't fix it. I can't help but think it's one of those things that is so stupid and simple that I'm going to do the ol' facepalm. Any help is greatly appreciated. Thanks! In the event that the domain zone is needed, here it is:

        $ttl 38400
        @ IN SOA domain.com. [email protected]. (
                 1265221037
                 10800
                 3600
                 604800
                 38400 )
        domain.com. IN A xxx.xxx.xxx.xxx
        www.domain.com. IN A xxx.xxx.xxx.xxx
        ftp.domain.com. IN A xxx.xxx.xxx.xxx
        m.domain.com. IN A xxx.xxx.xxx.xxx
        localhost.domain.com. IN A 127.0.0.1
        webmail.domain.com. IN A xxx.xxx.xxx.xxx
        admin.domain.com. IN A xxx.xxx.xxx.xxx
        mail.domain.com. IN A xxx.xxx.xxx.xxx
        domain.com. IN MX 5 mail.domain.com.
        domain.com. IN TXT "v=spf1 a mx a:domain.com ip4:xxx.xxx.xxx.xxx ?all"
        domain.com. IN NS ns1
        domain.com. IN NS ns2
        ns1 IN A xxx.xxx.xxx.xxx
        ns2 IN A xxx.xxx.xxx.xxx

    Any double entries in different formats were part of my troubleshooting process.
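
    To see what the outside world actually resolves, dig can query the reverse record directly - a sketch with a documentation placeholder address; substitute the mail server's real IP:

        dig +short -x 203.0.113.97
        # a correct delegation prints exactly:
        #   mail.domain.com.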

    Read the article

  • Command-line connect to wired network for Ubuntu

    - by Tim
    I'd like to know how to use the command line to connect to a wired network in general on Ubuntu 8.10. In my case, I connect a cable to my laptop but it doesn't work with WICD, so I'd like to try the command-line method. Here is the ifconfig output for my network adapters:

        $ ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:c0:9f:8d:23:74
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Interrupt:19 Base address:0x1800

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:4457 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:4457 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:493002 (493.0 KB)  TX bytes:493002 (493.0 KB)

        wlan0     Link encap:Ethernet  HWaddr 00:0e:9b:ab:56:19
                  UP BROADCAST NOTRAILERS PROMISC ALLMULTI  MTU:576  Metric:1
                  RX packets:1508929 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:768144 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:806027375 (806.0 MB)  TX bytes:78834873 (78.8 MB)

        wlan0:avahi Link encap:Ethernet  HWaddr 00:0e:9b:ab:56:19
                  inet addr:169.254.5.92  Bcast:169.254.255.255  Mask:255.255.0.0
                  UP BROADCAST NOTRAILERS PROMISC ALLMULTI  MTU:576  Metric:1

        wmaster0  Link encap:UNSPEC  HWaddr 00-0E-9B-AB-56-19-00-00-00-00-00-00-00-00-00-00
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    Thanks and regards!

    UPDATE: Tried what oyvindio suggested. Here is the failure message:

        $ sudo dhclient3 eth0
        There is already a pid file /var/run/dhclient.pid with pid 18279
        killed old client process, removed PID file
        Internet Systems Consortium DHCP Client V3.1.1
        Copyright 2004-2008 Internet Systems Consortium.
        All rights reserved.
        For info, please visit http://www.isc.org/sw/dhcp/

        mon0: unknown hardware address type 803
        wmaster0: unknown hardware address type 801
        mon0: unknown hardware address type 803
        wmaster0: unknown hardware address type 801
        Listening on LPF/eth0/00:c0:9f:8d:23:74
        Sending on   LPF/eth0/00:c0:9f:8d:23:74
        Sending on   Socket/fallback
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 5
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 10
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 12
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 15
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 11
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 8
        No DHCPOFFERS received.
        No working leases in persistent database - sleeping.
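
    If no DHCP server ever answers (as in the output above), a static configuration rules out the DHCP side entirely - a sketch with placeholder addresses; use values valid on your LAN:

        sudo ifconfig eth0 192.168.1.50 netmask 255.255.255.0 up
        sudo route add default gw 192.168.1.1
        echo "nameserver 192.168.1.1" | sudo tee /etc/resolv.conf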

    Read the article

  • How do I permanently delete e-mail messages in the sendmail queue and keep them from coming back?

    - by Steven Oxley
    I have a pretty annoying problem here. I have been testing an application and have created some test e-mails to bogus e-mail addresses (not to mention that my server isn't really set up to send e-mail anyway). Of course, sendmail is not able to send these messages, and they have been getting stuck in the sendmail queue. I want to manually delete the messages that have been building up in the queue instead of waiting the 5 days that sendmail usually takes to stop retrying. I am using Ubuntu 10.04, and /var/spool/mqueue/ is the directory in which every how-to I have read says the queued e-mails are kept. When I delete the files in this directory, sendmail stops trying to process the e-mails until what appears to be a cron script runs and re-populates this directory with the messages I don't want sent. Here are some lines from my syslog:

        Jun 2 17:35:19 sajo-laptop sm-mta[9367]: o530SlbK009365: to=, ctladdr= (33/33), delay=00:06:27, xdelay=00:06:22, mailer=esmtp, pri=120418, relay=e.mx.mail.yahoo.com. [67.195.168.230], dsn=4.0.0, stat=Deferred: Connection timed out with e.mx.mail.yahoo.com.
        Jun 2 17:35:48 sajo-laptop sm-mta[9149]: o4VHn3cw003597: to=, ctladdr= (33/33), delay=2+06:46:45, xdelay=00:34:12, mailer=esmtp, pri=3540649, relay=mx2.hotmail.com. [65.54.188.94], dsn=4.0.0, stat=Deferred: Connection timed out with mx2.hotmail.com.
        Jun 2 17:39:02 sajo-laptop CRON[9510]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -type f -cmin +$(/usr/lib/php5/maxlifetime) -print0 | xargs -n 200 -r -0 rm)
        Jun 2 17:39:43 sajo-laptop sm-mta[9372]: o52LHK4s007585: to=, ctladdr= (33/33), delay=03:22:18, xdelay=00:06:28, mailer=esmtp, pri=1470404, relay=c.mx.mail.yahoo.com. [206.190.54.127], dsn=4.0.0, stat=Deferred: Connection timed out with c.mx.mail.yahoo.com.
        Jun 2 17:39:50 sajo-laptop sm-mta[9149]: o51I8ieV004377: to=, ctladdr= (33/33), delay=1+06:31:06, xdelay=00:03:57, mailer=esmtp, pri=6601668, relay=alt4.gmail-smtp-in.l.google.com. [74.125.79.114], dsn=4.0.0, stat=Deferred: Connection timed out with alt4.gmail-smtp-in.l.google.com.
        Jun 2 17:40:01 sajo-laptop CRON[9523]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp)

    Does anyone know how I can get rid of these messages permanently? As a side note, I'd also like to know if there is a way to set up sendmail to "fake" sending e-mail. Is there?
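
    A hedged sketch of a purge that covers both queues: the last syslog line above (the smmsp cron-msp job) is the usual reason deleted messages reappear - the client/submission queue keeps re-feeding them into /var/spool/mqueue. On Debian/Ubuntu the client queue normally lives in /var/spool/mqueue-client; verify the path on your box first:

        sudo /etc/init.d/sendmail stop
        sudo rm -f /var/spool/mqueue/*
        sudo rm -f /var/spool/mqueue-client/*
        sudo /etc/init.d/sendmail start
        # mailq should now report an empty queue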

    Read the article

  • unable to join domain using virtualbox

    - by FreshPrinceOfSO
    I'm in the process of setting up a VM environment for an MS certification exam (70-462). Following the training kit's instructions, I've set up a domain controller (DC) and two members (SQL-A, SQL-B) so far. I can't figure out why I can't join the domain.

        DC
        IPv4 Address . . . : 10.10.10.10(Preferred)
        Subnet Mask. . . . : 255.0.0.0
        DNS Servers. . . . : ::1
                             127.0.0.1

        SQL-A
        IPv4 Address . . . : 10.10.10.20(Preferred)
        Subnet Mask. . . . : 255.0.0.0
        DNS Servers. . . . : 10.10.10.10

        SQL-B
        IPv4 Address . . . : 10.10.10.30(Preferred)
        Subnet Mask. . . . : 255.0.0.0
        DNS Servers. . . . : 10.10.10.10

    I've read how to do networking between virtual machines in VirtualBox, as well as the documentation. After trying various network adapter configurations, I can't get them to communicate in order to have the two members join the domain. When I ping from .30 to .10, I get:

        ping 10.10.10.10
        Pinging 10.10.10.10 with 32 bytes of data:
        Reply from 10.10.10.20: Destination host unreachable.
        Reply from 10.10.10.20: Destination host unreachable.
        Reply from 10.10.10.20: Destination host unreachable.
        Reply from 10.10.10.20: Destination host unreachable.

    Trying to join the domain:

        netdom join SQL-A /domain:contso.com
        The specified domain either does not exist or could not be contacted.
        The command failed to complete successfully.

    Within VirtualBox, I've tried the following combinations for the network adapter (a scripted alternative follows below):

        Attached to       - Promiscuous Mode
        -----------------------------------
        NAT
        Bridged Adapter   - Deny
        Bridged Adapter   - Allow VMs
        Bridged Adapter   - Allow All
        Internal Network  - Deny
        Internal Network  - Allow VMs
        Internal Network  - Allow All
        Host-only Adapter - Deny
        Host-only Adapter - Allow VMs
        Host-only Adapter - Allow All

    Edit: ipconfig /all of DC; ipconfig /all of SQL-A
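
    One hedged way to pin all three guests to the same virtual wire is an internal network configured from the host's command line (the VM names and the network label are placeholders; run with the VMs powered off):

        VBoxManage modifyvm "DC"    --nic1 intnet --intnet1 "labnet"
        VBoxManage modifyvm "SQL-A" --nic1 intnet --intnet1 "labnet"
        VBoxManage modifyvm "SQL-B" --nic1 intnet --intnet1 "labnet"
        # all three must share the exact same intnet name to see each other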

    Read the article

  • Generating/managing config files for hosted application

    - by mfinni
    I asked a question about config management and haven't seen a reply. It's possible my question was too vague, so let's get down to brass tacks. Here's the process we follow when onboarding a new customer instance into our hosted application: how would you manage this? I'm leaning towards a Perl script to populate templates that generate shell scripts, config files, XML config files, etc. Looking briefly at CFEngine and Chef, it seems like they're not going to reduce the amount of work, because I'd still have to manually specify all of the changes/edits within the tool. That doesn't seem to be much of a gain over touching the config files directly. (A rough sketch of the template idea follows the list.)

    1. We add a stanza to the main config file for the core (3rd-party) application. This stanza has values that define: the instance (customer) name; the TCP listener port for this instance (not one currently used); the DB2 database name (a serial numeric identifier that already exists - they get pre-staged for us by the DBAs); and three sub-config files, by name - these need to be created from 3 templates and be named after the instance.
    2. The sub-config files define: the filepath for the DB2 volumes; the filepath for the storage of objects; and the filepath for just one of the DB2 volumes (yes, redundant to the first item).
    3. We run some application commands and start the instance.
    4. We do some LDAP thingies (make an OU for the instance, etc.).
    5. We add a stanza to the config file for our security listener, which acts as a passthrough to LDAP: instance name, LDAP OU, TCP port for the instance, DB2 database name.
    6. We restart the security listener (off-hours), change the main config file from item 1, and stop and restart the instance. It is now authenticating via LDAP.
    7. We add the stop and start commands for this instance to the HA failover scripts.
    8. We import an XML config file into the instance that defines things for the actual application for the customer - user names, groups, permissions, and business rules. The XML is supplied by the implementation team.
    9. Now we configure the data-loading application. We add a stanza to the existing top-level config file that points to a new customer-level config file. The new customer-level config file includes: the instance (customer) name; the DB2 database name; and an arbitrary number of sub-config files, by name. Each of those sub-config files defines filepaths to the directories for ingestion, feedback, backup, and failure; those filepaths share a common path to a customer-specific folder, with one folder for each sub-config file. Each of those filepaths needs to be created.
    10. We add this customer instance to our monitoring scripts, which confirm the proper processes are running and can be logged into. Of course, those monitoring config files include the instance name, the TCP port, the DB2 database name, etc.
    11. There's also a reporting application that needs to be configured for the new instance.

    You get the idea. There's also XML that is loaded into WAS by the middleware team. We give them the values to plug into the XML - they could very easily hand us the template and we could give them back completed XML.
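
    A minimal sketch of the fill-in-the-blanks approach, in shell rather than Perl; the template names, token syntax, and values are all invented placeholders:

        instance=crane
        port=7801
        db=00042
        for t in core.conf.tpl listener.conf.tpl loader.conf.tpl; do
            sed -e "s/@INSTANCE@/$instance/g" \
                -e "s/@PORT@/$port/g" \
                -e "s/@DB@/$db/g" \
                "templates/$t" > "generated/$instance.${t%.tpl}"
        done
        # the same token table could also stamp out the XML handed to the
        # middleware team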

    Read the article

  • Missing startup screens and slow bootup/login after using WinClone to expand Bootcamp partition

    - by user26453
    I used WinClone to back up my Boot Camp partition, which was a Windows 7 Ultimate install, on my late-2006 MacBook Pro, because I wanted to expand the partition's size. It worked reasonably well, with some hiccups along the way and some remaining issues.

    The first issue I ran into was the Boot Camp Assistant utility - it would not recreate the partition. This was due to a lack of the contiguous space that is required for the Boot Camp partition. As a result I wiped the whole drive, reinstalled Snow Leopard, did the minimum amount of system updates, and created and formatted a new Boot Camp partition. WinClone restored the image without complaint, and the image was automatically resized to the new partition's size.

    The second issue I ran into was after the first boot into Windows. The first thing I noticed was that instead of the newer "slick" startup screen (4 colors wisping around, a Windows 7 title), there was more of an old-school startup screen (a progress bar with block increments, yellow/greenish color, nothing else really). The initial boot to a login screen was slow, perhaps as Windows dealt with the partition changes. After logging in, the screen goes blank and the computer seems to hang for a minute before completing the login. After subsequent restarts, the slick screen is still missing and booting to the login screen takes a normal amount of time, but the time from login to an active desktop is still very slow.

    As a side note, I've previously only seen this behavior - a long delay from login to a fully loaded desktop - when the computer tried to hibernate and failed (the battery is really bad): on the next startup I would see this behavior, but not subsequently. So a potential cause: I imaged the partition after hibernating out of Windows. From reading some posts/guides on the subject, this was not recommended, and perhaps shouldn't even have worked? Could the partition be stuck in some weird mode as a result, making the boot issues appear? I've attempted to disable hibernation and restart, and tried to delete the .sys file that hibernation uses. Other fixes I'm thinking of attempting are booting a Win7 disc and repairing the install/partition. I can't shake the nagging feeling that something isn't right, given the modified boot screens and the slow login process.

    Read the article

  • DELL DRAC & Ubuntu VPN Connection

    - by Mikunos
    I am trying, without success, to connect to a Dell DRAC card through Ubuntu's VPN connection manager. I have these data:

        Protocol:       PPTP
        PPTP server IP: 123.123.123.123
        DRAC IP:        192.168.10.25
        Subnet:         255.255.0.0
        User:           myuser
        Pass:           mypass

    Where do I have to enter these parameters? I have configured the PPTP connection using the graphical tool in Ubuntu 11.10, but in /var/log/syslog I get these messages:

        Apr 15 11:33:15 shinet NetworkManager[1035]: <info> Starting VPN service 'pptp'...
        Apr 15 11:33:15 shinet NetworkManager[1035]: <info> VPN service 'pptp' started (org.freedesktop.NetworkManager.pptp), PID 18180
        Apr 15 11:33:15 shinet NetworkManager[1035]: <info> VPN service 'pptp' appeared; activating connections
        Apr 15 11:33:15 shinet NetworkManager[1035]: <info> VPN plugin state changed: 3
        Apr 15 11:33:15 shinet NetworkManager[1035]: <info> VPN connection 'Connessione VPN 1' (Connect) reply received.
        Apr 15 11:33:15 shinet pppd[18182]: Plugin /usr/lib/pppd/2.4.5/nm-pptp-pppd-plugin.so loaded.
        Apr 15 11:33:15 shinet pppd[18182]: pppd 2.4.5 started by root, uid 0
        Apr 15 11:33:15 shinet pppd[18182]: Using interface ppp0
        Apr 15 11:33:15 shinet pppd[18182]: Connect: ppp0 <--> /dev/pts/1
        Apr 15 11:33:15 shinet NetworkManager[1035]: SCPlugin-Ifupdown: devices added (path: /sys/devices/virtual/net/ppp0, iface: ppp0)
        Apr 15 11:33:15 shinet NetworkManager[1035]: SCPlugin-Ifupdown: device added (path: /sys/devices/virtual/net/ppp0, iface: ppp0): no ifupdown configuration found.
        Apr 15 11:33:15 shinet pptp[18185]: nm-pptp-service-18180 log[main:pptp.c:314]: The synchronous pptp option is NOT activated
        Apr 15 11:33:46 shinet pppd[18182]: LCP: timeout sending Config-Requests
        Apr 15 11:33:46 shinet pppd[18182]: Connection terminated.
        Apr 15 11:33:46 shinet avahi-daemon[1081]: Withdrawing workstation service for ppp0.
        Apr 15 11:33:46 shinet NetworkManager[1035]: SCPlugin-Ifupdown: devices removed (path: /sys/devices/virtual/net/ppp0, iface: ppp0)
        Apr 15 11:33:46 shinet NetworkManager[1035]: <warn> VPN plugin failed: 1
        Apr 15 11:33:46 shinet pppd[18182]: Modem hangup
        Apr 15 11:33:46 shinet NetworkManager[1035]: <warn> VPN plugin failed: 1
        Apr 15 11:33:51 shinet pppd[18182]: Exit.
        Apr 15 11:33:51 shinet NetworkManager[1035]: <warn> VPN plugin failed: 1
        Apr 15 11:33:51 shinet NetworkManager[1035]: <info> VPN plugin state changed: 6
        Apr 15 11:33:51 shinet NetworkManager[1035]: <info> VPN plugin state change reason: 0
        Apr 15 11:33:51 shinet NetworkManager[1035]: <warn> error disconnecting VPN: Could not process the request because no VPN connection was active.
        Apr 15 11:33:51 shinet NetworkManager[1035]: <info> Policy set 'Wired connection 1' (eth0) as default for IPv4 routing and DNS.
        Apr 15 11:33:57 shinet NetworkManager[1035]: <info> VPN service 'pptp' disappeared

    Thanks
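
    The LCP timeout in that log is a classic signature of PPTP's GRE leg being blocked somewhere between client and server: the TCP 1723 handshake succeeds, but no GRE (IP protocol 47) packets come back. A hedged pair of checks, using the placeholder server address from above:

        nmap -p 1723 123.123.123.123        # is the control channel even reachable?
        sudo tcpdump -n -i eth0 proto gre   # run during a connection attempt;
                                            # silence here suggests GRE is filtered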

    Read the article

  • Backing up data stored on Amazon S3

    - by Fiver
    I have an EC2 instance running a web server that stores users' uploaded files on S3. The files are written once and never change, but are retrieved occasionally by the users. We will likely accumulate somewhere around 200-500GB of data per year. We would like to ensure this data is safe, particularly from accidental deletions, and would like to be able to restore files that were deleted, regardless of the reason.

    I have read about the versioning feature for S3 buckets, but I cannot seem to find whether recovery is possible for files with no modification history. See the AWS docs here on versioning: http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectVersioning.html. In those examples, they don't show the scenario where data is uploaded, never modified, and then deleted. Are files deleted in this scenario recoverable?

    Then, we thought we might just back up the S3 files to Glacier using object lifecycle management: http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html. But it seems this will not work for us, as the file object is not copied to Glacier but moved to Glacier (more accurately, it seems it is an object attribute that is changed, but anyway...). So it seems there is no direct way to back up S3 data, and transferring the data from S3 to local servers may be time-consuming and may incur significant transfer costs over time.

    Finally, we thought we would create a new bucket every month to serve as a monthly full backup, and copy the original bucket's data to the new one on Day 1. Then, using something like duplicity (http://duplicity.nongnu.org/), we would synchronize the backup bucket every night. At the end of the month we would put the backup bucket's contents in Glacier storage and create a new backup bucket using a new, current copy of the original bucket... and repeat this process. This seems like it would work and minimize the storage/transfer costs, but I'm not sure whether duplicity allows bucket-to-bucket transfers directly, without bringing data down to the controlling client first.

    So, I guess there are a couple of questions here:

    1. Does S3 versioning allow recovery of files that were never modified?
    2. Is there some way to "copy" files from S3 to Glacier that I have missed?
    3. Can duplicity or any other tool transfer files between S3 buckets directly, avoiding transfer costs?
    4. Am I way off the mark in my approach to backing up S3 data?

    Thanks in advance for any insight you could provide!
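
    On the bucket-to-bucket point, a hedged sketch with the AWS CLI: aws s3 sync issues server-side COPY requests, so objects move S3-to-S3 without flowing through the machine that runs the command (the bucket names are placeholders):

        aws s3 sync s3://prod-uploads s3://prod-uploads-backup-2014-06
        # repeated runs copy only keys that are new or changed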

    Read the article

  • configuration required for HIVE to be installed on a node

    - by ????? ????????
    I went through the process of manually installing ambari (not through SSH, because I couldnt get keyless to work) and everything installed OK, except for HIVE and GANGLIA. I got this message: stderr: None stdout: warning: Unrecognised escape sequence ‘\;’ in file /var/lib/ambari-agent/puppet/modules/hdp-hive/manifests/hive/service_check.pp at line 32 warning: Dynamic lookup of $configuration is deprecated. Support will be removed in Puppet 2.8. Use a fully-qualified variable name (e.g., $classname::variable) or parameterized classes. notice: /Stage[1]/Hdp::Snappy::Package/Hdp::Snappy::Package::Ln[32]/Hdp::Exec[hdp::snappy::package::ln 32]/Exec[hdp::snappy::package::ln 32]/returns: executed successfully notice: /Stage[2]/Hdp-hive::Hive::Service_check/File[/tmp/hiveserver2Smoke.sh]/ensure: defined content as ‘{md5}7f1d24221266a2330ec55ba620c015a9' notice: /Stage[2]/Hdp-hive::Hive::Service_check/File[/tmp/hiveserver2.sql]/ensure: defined content as ‘{md5}0c429dc9ae0867b5af74ef85b5530d84' notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/File[/tmp/hcatSmoke.sh]/ensure: defined content as ‘{md5}bae7742f7083db968cb6b2bd208874cb’ notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: 13/06/25 03:11:56 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore. notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: FAILED: SemanticException org.apache.hadoop.hive.ql.parse.SemanticException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: 13/06/25 03:12:06 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore. notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: FAILED: SemanticException [Error 10001]: Table not found hcatsmokeida8c07401_date102513 notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: 13/06/25 03:12:15 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore. notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: FAILED: SemanticException o When i go to the alerts and health checks i’m getting this: ive Metastore status check CRIT for 42 minutes CRITICAL: Error accessing hive-metaserver status [13/06/25 03:44:06 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. What am I doing wrong? I have already tried to do ambari-server reset on the the database without results.

    Read the article

  • pam_unix(sshd:session) session opened for user NOT ROOT by (uid=0), then closes immediately when using TortoiseSVN

    - by codewaggle
    I'm having problems accessing an SVN repository using TortoiseSVN 1.7.8. The SVN repository is on a CentOS 6.3 box and appears to be functioning correctly:

        # svnadmin --version
        # svnadmin, version 1.6.11 (r934486)

    I can access the repository from another CentOS box with this command:

        svn list svn+ssh://[email protected]/var/svn/joetest

    But when I attempt to browse the repository using TortoiseSVN from a Win 7 workstation, I'm unable to do so using the following path:

        svn+ssh://[email protected]/var/svn/joetest

    I am able to log in via SSH from the workstation using PuTTY. The results are the same if I attempt access as root. I've given ownership of the repository to USER:USER and run chmod 2700 -R /var/svn/. Because I can access the repository via ssh from another Linux box, permissions don't appear to be the problem. When I watch the log file using tail -fn 2000 /var/log/secure, I see the following each time TortoiseSVN asks for the password:

        Sep 26 17:34:31 dev sshd[30361]: Accepted password for USER from xx.xxx.xx.xxx port 59101 ssh2
        Sep 26 17:34:31 dev sshd[30361]: pam_unix(sshd:session): session opened for user USER by (uid=0)
        Sep 26 17:34:31 dev sshd[30361]: pam_unix(sshd:session): session closed for user USER

    I'm actually able to log in, but the session is then closed immediately. It caught my eye that the session is being opened for USER by root (uid=0), which may be correct, but I'll mention it in case it has something to do with the problem. I looked into modifying svnserve.conf, but as far as I can tell it's not used when accessing the repository via svn+ssh; a private svnserve instance is created for each login via this method. From the manual:

        There's still a third way to invoke svnserve, and that's in "tunnel mode", with the -t option. This mode assumes that a remote-service program such as RSH or SSH has successfully authenticated a user and is now invoking a private svnserve process as that user. The svnserve program behaves normally (communicating via stdin and stdout), and assumes that the traffic is being automatically redirected over some sort of tunnel back to the client. When svnserve is invoked by a tunnel agent like this, be sure that the authenticated user has full read and write access to the repository database files. (See Servers and Permissions: A Word of Warning.) It's essentially the same as a local user accessing the repository via file:/// URLs.

    The only non-default settings in sshd_config are:

        Protocol 2 # to disable Protocol 1
        SyslogFacility AUTHPRIV
        ChallengeResponseAuthentication no
        GSSAPIAuthentication yes
        GSSAPICleanupCredentials yes
        UsePAM yes
        AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
        AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
        AcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE
        AcceptEnv XMODIFIERS
        X11Forwarding no
        Subsystem sftp /usr/libexec/openssh/sftp-server

    Any thoughts?
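
    One hedged way to narrow this down is to run, from any machine that can SSH in, the same command the svn+ssh tunnel runs (substitute the real user and host):

        ssh USER@xx.xxx.xx.xxx svnserve -t
        # a healthy server answers with an svn protocol greeting such as
        # ( success ( 2 2 ( ) ( edit-pipeline ... ) ) ) and keeps the
        # session open; an instant close points at the client-side setup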

    Read the article

  • JNDI Datasource definition in Tomcat 6.0

    - by romaintaz
    I want to define a DataSource to an Oracle database on my Tomcat 6.0. So, in conf/server.xml (yes, I know that this DataSource will be available to all the webapps in Tomcat, but that's not a problem here), I've set this Resource:

        <GlobalNamingResources>
          <Resource name="hibernate/HibernateDS"
                    auth="Container"
                    type="javax.sql.DataSource"
                    url="jdbc:oracle:thin:@myserver:1542:foo"
                    username="foo"
                    password="bar"
                    driverClassName="oracle.jdbc.OracleDriver"
                    maxActive="50"
                    maxIdle="10"
                    validationQuery="select 1 from dual"/>

    Then, in the web.xml of my application, I set a resource-ref element:

        <resource-ref>
          <description>Hibernate Datasource</description>
          <res-ref-name>hibernate/HibernateDS</res-ref-name>
          <res-type>javax.sql.DataSource</res-type>
          <res-auth>Container</res-auth>
        </resource-ref>

    Finally, as Hibernate is used to manage the database connection, I have a webapps/mywebapp/WEB-INF/classes/hibernate.cfg.xml that creates a session-factory using the JNDI DataSource:

        <hibernate-configuration>
          <session-factory>
            <property name="connection.datasource">java:comp/env/hibernate/HibernateDS</property>
            ...

    However, when I start my Tomcat server, I get an error that says the JDBC driver could not be created:

        INFO [net.sf.hibernate.util.NamingHelper] JNDI InitialContext properties:{}
        INFO [net.sf.hibernate.connection.DatasourceConnectionProvider] Using datasource: java:comp/env/hibernate/HibernateDS
        INFO [net.sf.hibernate.transaction.TransactionFactoryFactory] Transaction strategy: net.sf.hibernate.transaction.JDBCTransactionFactory
        INFO [net.sf.hibernate.transaction.TransactionManagerLookupFactory] No TransactionManagerLookup configured (in JTA environment, use of process level read-write cache is not recommended)
        WARN [net.sf.hibernate.cfg.SettingsFactory] Could not obtain connection metadata
        org.apache.tomcat.dbcp.dbcp.SQLNestedException: Cannot create JDBC driver of class '' for connect URL 'null'
            at org.apache.tomcat.dbcp.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1150)
            at org.apache.tomcat.dbcp.dbcp.BasicDataSource.getConnection(BasicDataSource.java:880)
            at net.sf.hibernate.connection.DatasourceConnectionProvider.getConnection(DatasourceConnectionProvider.java:59)
            at net.sf.hibernate.cfg.SettingsFactory.buildSettings(SettingsFactory.java:84)
            at net.sf.hibernate.cfg.Configuration.buildSettings(Configuration.java:1172)
            ...
        Caused by: java.lang.NullPointerException
            at sun.jdbc.odbc.JdbcOdbcDriver.getProtocol(JdbcOdbcDriver.java:507)
            at sun.jdbc.odbc.JdbcOdbcDriver.knownURL(JdbcOdbcDriver.java:476)
            at sun.jdbc.odbc.JdbcOdbcDriver.acceptsURL(JdbcOdbcDriver.java:307)
            at java.sql.DriverManager.getDriver(DriverManager.java:253)
            at org.apache.tomcat.dbcp.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1143)
            ... 11 more

    Do you have any idea why Hibernate is not able to construct the session-factory? What is wrong in my configuration?
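
    One observation to go with this configuration: on Tomcat 6, a Resource declared under GlobalNamingResources is not automatically visible to web applications; it normally has to be exposed with a ResourceLink in the webapp's context. A sketch of what that could look like, reusing the names from the question and assuming a META-INF/context.xml inside the webapp:

        <!-- META-INF/context.xml: link the global resource into java:comp/env -->
        <Context>
          <ResourceLink name="hibernate/HibernateDS"
                        global="hibernate/HibernateDS"
                        type="javax.sql.DataSource"/>
        </Context>

    Without such a link, the JNDI lookup can resolve to an unconfigured BasicDataSource, which would be consistent with the "driver of class '' for connect URL 'null'" message above.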

    Read the article

  • Local Apache on Windows XP not finishing page requests

    - by asgeo1
    I have Apache 2.2.11 installed locally on my Windows XP (SP3) dev machine, which I set up about 3 months ago. I have just started having a strange problem in the last few weeks.

    Apache is serving some basic PHP applications like phpMyAdmin. When I make a page request, Apache appears to not finish serving all resources for that page. Firefox shows the "Transferring data from servername..." message, and the page never completes. The same problem happens in Internet Explorer too. I can sometimes tell which resource it is waiting on, because most of the page will render except for some image or similar resources. (Not sure why Firebug doesn't show this.)

    It doesn't have the problem on every page request: for pages where most of the resources are cached in my browser, the request will work with no problems, and so will pages that are very light. However, if I "hard" refresh the page, I will have this problem (probably because it is requesting all page resources).

    Does anyone know what this could be? It is so strange that it has only just started happening, and I did not make any changes to my system (that I am aware of). I tried playing with the Apache ThreadsPerChild setting, but it did not seem to make a difference.

    UPDATE: I have been doing some more tests. I have been serving the most basic of pages, just a plain HTML file:

        <html>
        <body>
        <h1>testing</h1>
        </body>
        </html>

    If I request this page multiple times in a row, AND each request occurs immediately after the previous has completed, then 50% of the time the request will time out. However, if I put a 1-2 second gap between requests, then there is no problem. This correlates to what I have observed when the browser requests a real application page: when the browser has nothing cached, all of the page resources are requested by the browser in a short amount of time, and this appears to trigger the problem.

    UPDATE 2: Nathan Long has helped me understand the issue a little better with the server-status page (see below). It is weird; it is like the server has a hiccup sending data to the client. The client sits there waiting forever for data that never arrives. Closing the client process does not terminate the connection on the server: the server still has active threads for each previously attempted connection, but they just sit there, not sending any data and never terminating (even though the client is now closed). Only a restart of the server seems to terminate them.
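
    For readers who want to watch the same thing, the server-status page referred to above comes from mod_status. A minimal sketch of enabling it in httpd.conf, assuming Apache 2.2 access-control syntax; the module path and the localhost-only restriction are assumptions, not taken from the question:

        # httpd.conf: enable mod_status so /server-status shows per-thread state
        LoadModule status_module modules/mod_status.so
        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>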

    Read the article

  • SCCM SP2 - OOB Management Certificates Problems

    - by Achinoam
    Hi experts,

    I have a vPro client computer with AMT 4.0. It was imported successfully via the Import OOB Computers wizard, and after sending a "Hello" packet it became provisioned (the SCCM GUI displays AMT Status: Provisioned). But when I try to perform power operations on this machine, they always fail with the following lines in the log:

        AMT Operation Worker: Wakes up to process instruction files 7/29/2009 10:59:29 AM 2176 (0x0880)
        AMT Operation Worker: Wait 20 seconds... 7/29/2009 10:59:29 AM 2176 (0x0880)
        Auto-worker Thread Pool: Work thread 3884 started 7/29/2009 10:59:29 AM 3884 (0x0F2C)
        session params : https://amt4.domaindemo.com:16993 , 11001 7/29/2009 10:59:29 AM 3884 (0x0F2C)
        ERROR: Invoke(invoke) failed: 80020009argNum = 0 7/29/2009 10:59:31 AM 3884 (0x0F2C)
        Description: A security error occurred 7/29/2009 10:59:31 AM 3884 (0x0F2C)
        Error: Failed to Invoke CIM_BootConfigSetting::ChangeBootOrder_INPUT action. 7/29/2009 10:59:31 AM 3884 (0x0F2C)
        AMT Operation Worker: AMT machine amt4.domaindemo.com can't be waken up. Error code: 0x80072F8F 7/29/2009 10:59:31 AM 3884 (0x0F2C)
        Auto-worker Thread Pool: Warning, Failed to run task this time. Will retry(1) it 7/29/2009 10:59:31 AM 3884 (0x0F2C)

    After investigation, I've seen that the problem occurs already in the 2nd stage of the provisioning:

        Start 2nd stage provision on AMT device amt4.domaindemo.com. 8/2/2009 4:55:12 PM 2944 (0x0B80)
        session params : https://amt4.domaindemo.com:16993 , 11001 8/2/2009 4:55:12 PM 2944 (0x0B80)
        Delete existing ACLs... 8/2/2009 4:55:12 PM 2944 (0x0B80)
        ERROR: Invoke(invoke) failed: 80020009argNum = 0 8/2/2009 4:55:14 PM 2944 (0x0B80)
        Description: A security error occurred 8/2/2009 4:55:14 PM 2944 (0x0B80)
        Error: Cannot Enumerate User Acl Entries. 8/2/2009 4:55:14 PM 2944 (0x0B80)
        Error: CSMSAMTProvTask::StartProvision Fail to call AMTWSManUtilities::DeleteACLs 8/2/2009 4:55:14 PM 2944 (0x0B80)
        Error: Can not finish WSMAN call with target device. 1. Check if there is a winhttp proxy to block connection. 2. Service point is trying to establish connection with wireless IP address of AMT firmware but wireless management has NOT enabled yet. AMT firmware doesn't support provision through wireless connection. 3. For greater than 3.x AMT, there is a known issue in AMT firmware that WSMAN will fail with FQDN longer than 44 bytes. (MachineId = 17) 8/2/2009 4:55:14 PM 2944 (0x0B80)
        STATMSG: ID=7208 SEV=E LEV=M SOURCE="SMS Server" COMP="SMS_AMT_OPERATION_MANAGER" SYS=JE-DEV-MS0 SITE=JR1 PID=1756 TID=2944 GMTDATE=Sun Aug 02 14:55:14.281 2009 ISTR0="amt4.domaindemo.com" ISTR1="amt4.domaindemo.com" ISTR2="" ISTR3="" ISTR4="" ISTR5="" ISTR6="" ISTR7="" ISTR8="" ISTR9="" NUMATTRS=0 8/2/2009 4:55:14 PM 2944 (0x0B80)

    This error is consistent with all the other 2nd-stage provisioning tasks (Add ACLs, Enable Web UI, etc.). I've opened the certification authority, and I see that the certificates were issued to the SCCM site server instead of the AMT client!

    What could be the reason for this failure? What is the problematic definition for the certificate? Thank you in advance!!!
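
    A side note on the log's own first suggestion (a WinHTTP proxy blocking the connection): this can be checked quickly on the site server. The commands below are a sketch; to my knowledge proxycfg is the tool on Windows Server 2003, while netsh winhttp replaces it on later versions:

        rem Windows Server 2003: show the machine-wide WinHTTP proxy settings
        proxycfg

        rem Windows Server 2008 and later: the equivalent check
        netsh winhttp show proxy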

    Read the article

< Previous Page | 821 822 823 824 825 826 827 828 829 830 831 832  | Next Page >