Search Results

Search found 14231 results on 570 pages for 'folder redirection'.

Page 490/570 | < Previous Page | 486 487 488 489 490 491 492 493 494 495 496 497  | Next Page >

  • SMTP error goes directly to Badmail directory after Queue

    - by Sergio López
    This is the error I got in the .BDR file: Unable to deliver this message because the following error was encountered: "This message is a delivery status notification that cannot be delivered.". The specific error code was 0xC00402C7. The message sender was <. The message was intended for the following recipients: [email protected]. Below is the .bad file I got in the Badmail folder. Can anyone help me? I'm getting this error for every mail I try to deliver from several PHP apps and other apps. The relay is allowed only for two IP addresses, 127.0.0.1 and the server's IP. I telnetted to the SMTP server and it seems to work fine, and the mail reaches the Queue folder... I'm stuck.

    From: postmaster@ALRSERVER02 To: [email protected] Date: Mon, 22 Aug 2011 18:39:38 -0500 MIME-Version: 1.0 Content-Type: multipart/report; report-type=delivery-status; boundary="9B095B5ADSN=_01CC61236DC6DEED00000001ALRSERVER02" X-DSNContext: 7ce717b1 - 1378 - 00000002 - C00402CF Message-ID: Subject: Delivery Status Notification (Failure) This is a MIME-formatted message. Portions of this message may be unreadable without a MIME-capable mail program. --9B095B5ADSN=_01CC61236DC6DEED00000001ALRSERVER02 Content-Type: text/plain; charset=unicode-1-1-utf-7 This is an automatically generated Delivery Status Notification. Delivery to the following recipients failed. [email protected] --9B095B5ADSN=_01CC61236DC6DEED00000001ALRSERVER02 Content-Type: message/delivery-status Reporting-MTA: dns;ALRSERVER02 Received-From-MTA: dns;ALRSERVER02 Arrival-Date: Mon, 22 Aug 2011 18:39:38 -0500 Final-Recipient: rfc822;[email protected] Action: failed Status: 5.3.5 --9B095B5ADSN=_01CC61236DC6DEED00000001ALRSERVER02 Content-Type: message/rfc822 Received: from ALRSERVER02 ([74.3.161.94]) by ALRSERVER02 with Microsoft SMTPSVC(7.0.6002.18264); Mon, 22 Aug 2011 18:39:38 -0500 Subject: =?utf-8?Q?[MantisBT]_Reinicializaci=C3=B3n_de_Contrase=C3=B1a?= To: [email protected] X-PHP-Originating-Script: 0:class.phpmailer.php Date: Mon, 22 Aug 2011 17:39:38 -0600 Return-Path: [email protected] From: Alr Tracker Message-ID: X-Priority: 3 X-Mailer: PHPMailer 5.1 (phpmailer.sourceforge.net) MIME-Version: 1.0 Content-Transfer-Encoding: 8bit Content-Type: text/plain; charset="utf-8" X-OriginalArrivalTime: 22 Aug 2011 23:39:38.0020 (UTC) FILETIME=[C182E640:01CC6124] Si solicitó este cambio, visite la siguiente URL para cambiar su contraseña: Usuario: slopez Dirección IP remota: 189.191.159.86 NO RESPONDA A ESTE MENSAJE --9B095B5ADSN=_01CC61236DC6DEED00000001ALRSERVER02--

    Read the article

  • Cannot get Windows snipping tool to auto run with AutoHotKey

    - by jasondavis
    I am trying to get the Windows 7 Snipping Tool to run when I hit my PRINTSCREEN keyboard button with AutoHotkey. I have been unsuccessful so far, though. Here is what I have tried in the AutoHotkey script. I have tried this PRINTSCREEN::Run, c:\windows\system32\SnippingTool.exe and this PRINTSCREEN::Run, SnippingTool.exe and this PRINTSCREEN::Run, SnippingTool All of those give me an error when I hit the PRINTSCREEN button... It basically says it cannot find the file. However, the file path seems to be correct: I can copy and paste it into a window and it opens the Snipping Tool. Any ideas why it will not work? Here is the full code of my AHK file... ; ; AutoHotkey Version: 1.x ; Language: English ; Platform: Win7 ; Author: Jason Davis <friendproject@> ; ; Script Function: ; Template script (you can customize this template by editing "ShellNew\Template.ahk" in your Windows folder) ; #NoEnv ; Recommended for performance and compatibility with future AutoHotkey releases. SendMode Input ; Recommended for new scripts due to its superior speed and reliability. SetWorkingDir %A_ScriptDir% ; Ensures a consistent starting directory. /* PRINTSCREEN = Will run Windows 7 snipping tool */ PRINTSCREEN::Run, c:\windows\system32\SnippingTool.exe return
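
    A guess I'm pursuing: if this is 64-bit Windows 7 with the 32-bit build of AutoHotkey, the script's view of c:\windows\system32 gets redirected to SysWOW64, which has no SnippingTool.exe, so "file not found" would make sense even though the path works when pasted elsewhere. A minimal sketch of the hotkey using the Sysnative alias to reach the real System32 (installing 64-bit AutoHotkey would be the alternative):

        ; reach the 64-bit System32 from a 32-bit process via the Sysnative alias
        PrintScreen::Run, %A_WinDir%\Sysnative\SnippingTool.exe
        return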

    Read the article

  • How can I debug solutions in Visual Studio 2010 from a network share?

    - by alastairs
    I've recently got a new Mac laptop and am running VS2010 in a Parallels virtual machine. It's mostly working out well for me, but I'm having some problems with debugging specific project types, related to the fact that the projects are being accessed via a network share. Test projects don't run because the test runner can't load the tests' DLL. Web projects fail to run in the Visual Studio mini web server, throwing the following exception: 'An error occurred loading a configuration file: Failed to start monitoring changes to path\to\web.config'. I've spent the evening trawling the web with little luck on this. After reading these two posts, I tried out the usual CasPol changes, but then found this post from one of the early VS2010 betas indicating that CasPol is no longer needed/supported in .NET 4.0 and VS2010. The network share is accessible via both a mapped drive and the UNC path. The virtual machine runs its applications under the administrator account, which appears to have all the necessary permissions on the network share to create, read, write and delete files and folders. I say "appears to have" as I can't view the Security Properties of the appropriate folder via Explorer: the Security tab just isn't present. Has anyone managed to successfully load and debug web and test projects from a network share in VS2010?
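
    One switch I plan to try, based on what I've read about how .NET 4 replaced CasPol: assemblies loaded from a UNC path only run with full trust if the hosting process opts in via loadFromRemoteSources. A sketch of the opt-in, added to the host's config file (devenv.exe.config for Visual Studio, or the test runner's .config); whether it also cures the web.config change-monitoring error I don't know:

        <configuration>
          <runtime>
            <!-- allow assemblies loaded from remote/UNC locations to run with full trust -->
            <loadFromRemoteSources enabled="true"/>
          </runtime>
        </configuration>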

    Read the article

  • Git fails to push with error 'out of memory'

    - by jwir3
    I'm using gitosis on a server that has a low amount of memory, specifically around 512 MB. When I try to push a large folder (happens to be a backup from an Android phone), I get: me@corellia:~/Configs/$ git push origin master Counting objects: 18, done. Delta compression using up to 8 threads. Compressing objects: 100% (14/14), done. fatal: Out of memory, malloc failed MiB | 685 KiB/s error: pack-objects died of signal 13 error: failed to push some refs to 'git@dagobah:Configs' I've been searching the web, and notably found: http://www.mail-archive.com/[email protected]/msg01747.html as well as http://git.661346.n2.nabble.com/Out-of-memory-error-during-git-push-td5443705.html but these don't seem to help me for two reasons: 1) I am not actually out of memory when I push. When I run 'top' during the push, I get: 24262 git 18 0 16204 6084 1096 S 2 1.2 0:00.12 git-unpack-obje Also, during the push if I run cat /proc/meminfo, I get: MemTotal: 524288 kB MemFree: 289408 kB Buffers: 0 kB Cached: 0 kB SwapCached: 0 kB Active: 0 kB Inactive: 0 kB HighTotal: 0 kB HighFree: 0 kB LowTotal: 524288 kB So, it seems that I have enough memory free, but it's actually still failing, and I'm not enough of a git guru to figure out what is happening. I would appreciate it if someone could give me a hand here and tell me what could be causing this problem, and what I can do to solve it. Thanks! EDIT: The output of running the ulimit -a command: scottj@dagobah:~$ ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 204800 max locked memory (kbytes, -l) 32 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 10240 cpu time (seconds, -t) unlimited max user processes (-u) 204800 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited
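
    One workaround I've seen suggested for low-memory git servers is to cap how much memory the (re)packing step is allowed to use on the gitosis side. A sketch, run inside the server-side repository (the values are guesses to tune, not recommendations):

        git config pack.threads 1
        git config pack.windowMemory 64m
        git config pack.packSizeLimit 64m
        git config pack.deltaCacheSize 1m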

    Read the article

  • Performance associated with storing millions of files on NTFS

    - by Tim Brigham
    Does anyone have a method/formula, etc., that I could use - hopefully based on both current and projected numbers of files - to project the 'right' length of the split and the number of nested folders? Please note that, although similar, it isn't quite the same as Storing a million images in the filesystem. I'm looking for a way to help make the theories outlined there more generic. Assumptions: I have 'some' initial number of files. This number would be arbitrary but large, say 500k to 10m+. I have considered the underlying physical hardware disk IO requirements that would be necessary to support such an endeavor. Put another way: as time progresses this store will grow. I want the best balance between current performance and performance as my needs increase, say if I double or triple my storage. I need to be able to address both current needs and projected future growth, and to plan ahead without sacrificing too much current performance. What I've come up with: I'm already thinking about using a hash split every so many characters to split things out across multiple directories and keep the trees even, very similar to what is outlined in the comments in the question above. It also avoids duplicate files, which would be critical over time. I'm sure that the initial folder structure would be different based on what I've outlined, and depending on the initial scale. As far as I can figure there isn't a one-size-fits-all solution here. It would be horrendously time-intensive to work something out experimentally.
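
    A rough worked example of the hash split I'm considering (the numbers are purely illustrative): splitting on the first two pairs of hex characters of a hash of the file name gives 256 x 256 = 65,536 leaf directories, so 10 million files spread evenly works out to roughly 150 files per directory, and tripling the store still leaves only about 450 per directory. Adding a third level (256^3, about 16.8 million leaves) would keep even a 100-million-file store near 6 files per directory, at the cost of deeper paths and a lot of mostly-empty directories up front.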

    Read the article

  • 403.4 won't redirect in IE7

    - by Jeremy Morgan
    I have a secured folder that requires SSL. I have set it up in IIS(6) to require SSL. We don't want the visitors to be greeted with the "must be secure connection" error, so I have modified the 403.4 error page to contain the following: function redirectToHttps() { var httpURL = window.location.hostname+window.location.pathname; var httpsURL = "https://" + httpURL ; window.location = httpsURL ; } redirectToHttps(); This solution works great for every browser but IE7. On any other browser, if you type in http://www.mysite.com/securedfolder it will automatically redirect you to https://www.mysite.com/securedfolder with no message or anything (the intended action). But in Internet Explorer 7 only, it brings up a page that says "The website declined to show this webpage - Most likely causes: This website requires you to log in". This is something we don't want, of course. I have verified that JavaScript is enabled, and the security settings have no effect: even when I set them to the lowest level I get the same error. I'm wondering, has anyone else seen this before?
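
    One explanation I've come across is that IE7's "friendly HTTP error messages" feature substitutes its own page whenever the server's error response body is smaller than roughly 512 bytes, so the script never gets a chance to run. A sketch of the workaround, assuming that's the cause: pad the custom 403.4 page past that threshold.

        <!-- custom 403.4 page, padded beyond IE's ~512-byte friendly-error threshold -->
        <html>
        <head><title>Redirecting to the secure site</title></head>
        <body>
        <script type="text/javascript">
          window.location = "https://" + window.location.hostname + window.location.pathname;
        </script>
        <!-- padding padding padding padding padding padding padding padding padding
             padding padding padding padding padding padding padding padding padding
             padding padding padding padding padding padding padding padding padding -->
        </body>
        </html>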

    Read the article

  • Edit-text-files-over-SSH using a local text editor

    - by Mikko Ohtamaa
    I am working in various Linux and UNIX environments. I'd like to elegantly solve the problem of editing remote configuration files over SSH. Instead of using terminal editors (nano), I'd like to open the file in a local text editor on my desktop (Sublime Text 2). CyberDuck, WinSCP and various other SFTP apps can do this. Using editors over X11 forwarding has also proven to be problematic. Archaic text editors like Vim or Emacs also do not serve my needs well. They could do this, but I prefer using other text editing software. SSH mounts (FUSE) are also problematic unless they can happen on demand, triggered from the remote side. So what I hope to achieve: have some kind of easily deployable shell script etc. which I can copy to the remote server (let's call it mooedit); I run the mooedit command on the remote server to which I have connected over SSH; mooedit sends some kind of signal (over SSH) to my local desktop; on my local desktop this signal is captured and it determines 'aha! moo wants to edit a file on server X in folder Y'; the file is SFTP-transferred to the local desktop (/tmp); the file is opened in a nice GUI text editor on the local desktop; when Save is pressed, the local desktop notices the change in the file and SFTP sends the resulting file back to the server. The question is: what signaling mechanisms does SSH provide for this? Any other methods to trigger a local text editor for a remote SSH file?
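
    The closest existing mechanism I've found to the mooedit idea is the rmate protocol from TextMate: the desktop editor listens on a local port (52698 by convention; Sublime Text has an "rsub" package that speaks the same protocol), an SSH reverse tunnel exposes that port on the server, and a small rmate script on the server pushes the file through the tunnel and writes it back on save. A sketch, with host names as placeholders:

        # on the desktop: connect with a reverse tunnel back to the editor's listener
        ssh -R 52698:localhost:52698 me@remote-server

        # on the server: rmate is a single script copied over, much like the mooedit idea
        rmate /etc/nginx/nginx.conf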

    Read the article

  • How to send mail with PHP [migrated]

    - by roth66
    My little problem is with the mail() function in PHP: it doesn't want to send emails, to my local server or anywhere else. I don't think that function alone was supposed to send mail to addresses like [email protected], so I've installed a mail server, hMailServer; I installed a client, Dream-Mail; and I installed sendmail.exe (actually unzipped it into a folder, then in php.ini set sendmail_path to point to it). After countless trials and errors, it still doesn't work. My system comprises an Apache 2.2 server and PHP (latest version, I think 5.3 or something), running on Windows. To head off the usual questions (did you make rules in your firewall, etc.), I should mention that there aren't any connectivity issues: everything is set to "local" (localhost); ports 25, 110 and 143 are all open; and after a few days of fiddling with my brand new mail server, I managed to make it work. The Dream-Mail client has a test through which it checks its connections, and according to it the SMTP and POP3 connections are both successful; it even sends a test email. So yes, that part works. The problem remains: the PHP mail() function. And I really need it, since my website has a contact form, which is useless right now. I've also checked the form itself, and it seems to be alright.
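
    A minimal test script I'm using to narrow things down, assuming PHP should talk SMTP directly to the local hMailServer (i.e. with sendmail_path left unset on Windows); the addresses are placeholders:

        <?php
        // point PHP's built-in Windows SMTP mailer at the local server
        ini_set('SMTP', 'localhost');
        ini_set('smtp_port', 25);
        ini_set('sendmail_from', 'test@localhost');

        $ok = mail('me@localhost', 'PHP mail() test', 'Test body',
                   'From: test@localhost');
        var_dump($ok);               // false usually means PHP could not talk to the SMTP server
        var_dump(error_get_last());  // shows the underlying warning, if any
        ?>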

    Read the article

  • What is the quickest reliable way to backup a NAS drive to a USB drive?

    - by Tim Murphy
    How would you back up 600+ GB of data on a NAS (Network-Attached Storage) drive to a USB external drive? The NAS drive does not contain mission-critical data; nonetheless I wish to make weekly copies of it just in case. The NAS drive is almost exclusively used as an archive dump and is rarely updated. However, the backup strategy used must have a simple restore procedure so I can confidently say the data now on the NAS drive is exactly how it was at the time of backup. I did try xcopy, but it seemed like it would take many, many hours and it eventually crashed with an insufficient-memory error. http://www.ctunion.com/node/114 suggests I would need to use xxcopy instead due to folder/file name lengths. My concern with xcopy/xxcopy is the length of time it takes; I'm hoping something else is faster. The NAS drive is a D-Link DNS-313 with a 1TB drive installed, connected to the router via an Ethernet cable. The USB drive is a Seagate 1TB and can be connected to Windows Vista (preferred) or Windows 7 PCs. Both PCs are usually connected wirelessly; however, an Ethernet cable can be used during the backup to speed up the process.
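
    One option I'm looking at is robocopy, which ships with both Vista and Windows 7 and handles long paths and restarts better than xcopy. A sketch of a weekly mirror, with the share name and drive letter as placeholders:

        robocopy \\DNS-313\Volume_1 F:\NAS-backup /MIR /FFT /R:1 /W:1 /NP /LOG:C:\logs\nas-backup.log

    /MIR mirrors the tree (so the USB copy matches the NAS exactly, including deletions), /FFT tolerates the 2-second timestamp granularity of the NAS's Linux filesystem, and /R:1 /W:1 stop it hanging on unreadable files. Restore is a plain copy in the other direction.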

    Read the article

  • SQL Server 2008 Cluster Installation - First network name always fails

    - by boflynn
    I'm testing failover clustering in Windows Server 2008 to host a SQL Server 2008 installation using this installation guide. My base cluster is installed and working properly, as well as clustering the DTC service. However, when it comes time to install SQL Server, my first attempt at installation always fails with the same message and seems to "taint" the network name. For example, with my previous cluster attempt, I was installing SQL Server as VSQL. After approximately 15 attempts of installation and trying to resolve the errors, e.g. changing domain accounts for SQL, setting SPNs, etc., I typoed the network name as VQSL and the installation worked. Similarly on my current cluster, I tried installing with the SQL service named PROD-C1-DB and got the same errors as last time until I tried changing the name to anything else, e.g. PROD-C1-DB1, SQL, TEST, etc., at which point the install works. It will even install to VSQL now. While testing, my install routine was: Run setup.exe from patched media, selecting appropriate options After the install fails, I'd chose "Remove node from a SQL Server failover cluster" and remove the single, failed, node Attempt to diagnose problem, inspect event logs, etc. Delete the computer account that was created for the SQL Service from Active Directory Delete the MSSQL10.MSSQLSERVER folder from the shared data drive The error message I receive from the SQL Server installer is: The following error has occurred: The cluster resource 'SQL Server' could not be brought online. Error: The group or resource is not in the correct state to perform the requested operation. (Exception from HRESULT: 0x8007139F) Along with hundreds of the following errors in the Application event log: [sqsrvres] checkODBCConnectError: sqlstate = 28000; native error = 4818; message = [Microsoft][SQL Server Native Client 10.0][SQL Server]Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. System configuration notes: Windows Server 2008 Enterprise Edition x64 SQL Server 2008 Enterprise Edition x64 using slipstreamed SP1+CU1 media Dell PowerEdge servers Fibre attached storage
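
    For reference, the sort of SPN registration I was attempting between installs looked like this (domain and service account names are placeholders; the 'NT AUTHORITY\ANONYMOUS LOGON' failures are what pushed me in that direction):

        setspn -L MYDOMAIN\svc-sql
        setspn -A MSSQLSvc/PROD-C1-DB.mydomain.local:1433 MYDOMAIN\svc-sql
        setspn -A MSSQLSvc/PROD-C1-DB:1433 MYDOMAIN\svc-sql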

    Read the article

  • Virtual Fileserver

    - by Sergei
    Hi, we are planning to move our production servers to the datacenter and virtualize the remaining servers in the process. The datacenter will have HP blades with vSphere on top. Currently we are using a Celerra NS20 as our fileserver. Since the datacenter is using HP kit and an EVA 4400 as the SAN, we cannot have the Celerra there, as EMC support for Celerra does not cover non-EMC arrays. I have searched for possible options, and one of them was to have an HP NAS blade X3800sb instead of the Celerra. However, this seems like overkill to me: we are only using the Celerra for about 100 users and 50 servers, and I think an X3800sb could be a waste of resources. The other option would be to have a virtual fileserver as part of the VMware environment in the datacenter. We only need CIFS to be provided. The only option I can think of is Windows Storage Server. We had a bad experience with Windows servers used as fileservers in the past (memory leaks, for one thing), and this was one of the reasons we moved to Celerra. What are the other options? We need something as reliable as Celerra with as many options as possible. For example, Celerra has per-folder quotas, deduplication, dynamic volume allocation, automatic failover, VTLU, and replication. We would also need to replicate the NAS data to the failover site. We could use block-level replication, SAN-to-SAN, but this would mean wasted bandwidth, as we only need a subset of folders to be replicated. We used CA XSoft for Windows servers in the past, and Celerra has its own replication option. Thank you very much in advance; please ask me if I missed any details!

    Read the article

  • Help me understand Ubuntu user/group permissions.

    - by Bartek
    I'm beginning to deal with more than one user on my system (it's a VPS serving some sites) and I need to make sure I understand how group permissions work. Here's my setup: I have an account named "admin" .. it's basically the primary account that is used for serving most of the sites that I control myself. Now, I added a second account named "ville", as one of my users wants to be able to administer that site. I can do this the easy way and just chown their domains folder to the ville user and voilà, they have permission to do whatever they need, and so forth. However, let's say I also want to give the admin user access to the files (modifying and all) .. how can I put both users into the same group and give them both permission? I've tried doing: sudo usermod -a -G admin ville to add ville to the admin group, but ville still cannot edit files owned by admin. Permissions on the primary directory for the ville user are read/write for both owner and group, and the current owner and group of the files are admin:admin .. but ville still can't write into the directory. So, what should I be doing here to get this right and secure at the same time? Thank you.
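
    A sketch of what I think the shared setup should look like, assuming the site lives under /home/ville/domains (the paths are guesses):

        sudo usermod -a -G admin ville          # ville must log out and back in (or run: newgrp admin)
        sudo chgrp -R admin /home/ville/domains # group-own the tree by admin
        sudo chmod -R g+rwX /home/ville/domains # group read/write, execute only on dirs and executables
        sudo chmod g+s /home/ville/domains      # setgid: new files/dirs inherit the admin group

    My understanding is that the new group membership only takes effect on a fresh login, which might be why ville still couldn't edit anything right after the usermod.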

    Read the article

  • XP SP2 Event log not logging events

    - by Weedfreer
    I have a problem whereby a terminal appears not to be logging events correctly and occasionally appears to have problems communicating across the network. The terminal has previously been infected with a virus which appears to have 'played' with the default group policy in the standard user profile. Although, outwardly, the terminal appears to be working normally, I still have a nagging feeling that it isn't quite back to the way it was. It was infected by a user plugging in a USB stick while the company was using the older version of the AV software... typically a week or so before it was updated. I have configured the event logs to overwrite as required and to be 5056KB in maximum size. I have also attempted: disabling the Event Log service and restarting; renewing the EVT files in the Windows\system32\config directory; restarting the Event Log service and restarting; clearing the event log in the Services MMC; resetting the filters to default in the Services MMC; and using the EVENTCREATE command remotely from a CMD window on the server to force an event creation. So far the only operation to have any sort of success is the remote EVENTCREATE command from a CMD window on the server. As it stands, the only other time that the computer has managed to create events is while it is being restarted. Has anyone got any ideas on how to proceed? I'm thinking possibly of a refresh of the 'Windows\system32\config\SystemProfile' folder. I'm also thinking about running a tool such as Malwarebytes, but this could be slightly controversial as the system needs to stay up for as long as possible. I'm also wondering whether anyone knows of any Windows admin tools that would allow me to control the event logging options or default security options so that I could get it back to some sort of standard. What I'm trying to avoid is a complete re-imaging of the terminal. Although this is an option, I don't really want to have to take it if I don't need to. Many thanks in advance for any suggestions anyone may be able to provide.
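
    For reference, the remote test that does work is along these lines (the computer name is a placeholder):

        eventcreate /s TERMINAL01 /t INFORMATION /id 100 /l APPLICATION /d "Remote event log write test"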

    Read the article

  • Cannot access server shares over VPN

    - by DuncanDavies
    I've set up a single hosted server to use as a development environment for a web-based application. The web app is served up fine on port 80, however I'm struggling to get my VPN to behave how I'd expect so the developers don't have the access they require. The VPN connects fine and I can access the back-end database (SQL Server) which resides on the server with the client tools from the laptops. However they cannot access any shared folders. The server's local IP address is 10.x.x.x, and I've assigned a static IP address pool to RRAS (of 192.168.100.1 - 20). The clients pick up a valid IP Address (i.e. 192.168.100.9) when they connect. There is no name resolution setup, DNS or WINS. When connected via VPN the clients can ping the server (192.168.100.1) by IP Address, but cannot map a drive to a shared folder (net use * \\192.168.100.1\xxxxx) - I get 'System error 53 has occurred. The network path was not found.' I don't understand why I can ping by the ip, but not map by it. Some details: Server OS is Windows 2008 (Datacenter) VPN is SSTP using RRAS Clients are all Windows 7 I've tried temporarily disabling the firewalls So, why can we not access the file system when everything else (ping, RDP, SQL Server clients tools) works? Thanks for your help Duncan
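
    One check I haven't fully ruled out: whether SMB itself is reachable over the tunnel, as opposed to just ICMP. Something along these lines from a connected client (assuming the Telnet client feature is enabled; share and account names are placeholders):

        ping 192.168.100.1
        telnet 192.168.100.1 445
        net use Z: \\192.168.100.1\SharedFolder /user:SERVERNAME\username

    If port 445 doesn't connect while ping does, that would point at filtering on the tunnel rather than name resolution.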

    Read the article

  • Why do my backups fail when I target a network share hosted by a Synology DS211 disk station?

    - by Larry
    My backups are failing when I try to use a network share hosted by a Synology DS211 disk station. They work fine if I target a different network share (i.e. \\server1\data\larry). When I run the following command: Wbadmin start backup -backupTarget:\\diskstation\backup-larry -include:C: This is what I get: wbadmin 1.0 - Backup command-line tool (C) Copyright 2004 Microsoft Corp. Note: The backed up data cannot be securely protected at this destination. Backups stored on a remote shared folder might be accessible by other people on the network. You should only save your backups to a location where you trust the other users who have access to the location or on a network that has additional security precautions in place. Retrieving volume information... This will back up volume WIN7(C:) to \\diskstation\backup-larry. Do you want to start the backup operation? [Y] Yes [N] No y Note: The list of volumes included for backup does not include all the volumes that contain operating system components. This backup cannot be used to perform a system recovery. However, you can recover other items if the destination media type supports it. The backup operation to \\diskstation\backup-larry is starting. Creating a shadow copy of the volumes specified for backup... Creating a shadow copy of the volumes specified for backup... The backup operation stopped before completing. Summary of the backup operation: ------------------ The backup operation stopped before completing. Detailed error: Access is denied. Windows Backup failed to write the file: '<backup location>\WindowsImageBackup\<Computer Name>\MediaId'. Access is denied. The backup creates the following path \\diskstation\backup-larry\WindowsImageBackup\LARRY-MYDOMAIN\ but it's empty. I definitely have read/write access on the target directory (\\diskstation\backup-larry). I have verified this by looking at the permissions and by actually copying files to this location. Any suggestions?
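
    One variation I haven't tried yet: passing explicit credentials for the share, since the backup engine may not be using my interactive credentials when it writes to the DiskStation. A sketch with a placeholder account that has write rights on the share:

        wbadmin start backup -backupTarget:\\diskstation\backup-larry -include:C: -user:diskstation\backupuser -password:********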

    Read the article

  • SkyDrive broken after upgrade to Windows 8.1: "This location can't be found, please try later"

    - by avo
    Upgrading from Windows 8 to Windows 8.1 via the Store upgrade path has screwed my SkyDrive. The C:\Users\<user name>\SkyDrive folder is empty (it only has a single file, desktop.ini). When I open the native (Store) SkyDrive app, I see "This location can't be found, please try later". I'm glad to still have my files alive online in my SkyDrive account. I tried disconnecting from / reconnecting to my Microsoft Account with no luck. Does anyone have an idea how to fix this without reinstalling/refreshing Windows 8.1? From Event Viewer: Faulting application name: skydrive.exe, version: 6.3.9600.16412, time stamp: 0x5243d370 Faulting module name: unknown, version: 0.0.0.0, time stamp: 0x00000000 Exception code: 0x00000000 Fault offset: 0x0000000000000000 Faulting process ID: 0x4e8 Faulting application start time: 0x01cece256589c7ee Faulting application path: C:\Windows\System32\skydrive.exe Faulting module path: unknown Report ID: {...} Faulting package full name: Faulting package-relative application ID: Also: The machine-default permission settings do not grant Local Activation permission for the COM Server application with CLSID {C2F03A33-21F5-47FA-B4BB-156362A2F239} and APPID {316CDED5-E4AE-4B15-9113-7055D84DCC97} to the user NT AUTHORITY\LOCAL SERVICE SID (S-1-5-19) from address LocalHost (Using LRPC) running in the application container Unavailable SID (Unavailable). This security permission can be modified using the Component Services administrative tool. I was never a big fan of in-place upgrades anyway, but this time it was a machine which I use for work, with a lot of stuff already installed on it. I shouldn't have tried to upgrade it in the first place, but I was convinced Windows 8.1 was a solid update. Another lesson learnt.

    Read the article

  • ESXi 4.1 host not recognising existing VMFS datastore

    - by ThatGraemeGuy
    Existing setup: host1 and host2, ESX 4.0, 2 HBAs each. lun1 and lun2, 2 LUNs belonging to the same RAID set (my terminology might be sketchy here). This has been working just fine all along. I added host3, ESXi 4.1, 2 HBAs. If I view Configuration / Storage Adapters, I can see that both HBAs see both LUNs, but if I view Configuration / Storage, I only see 1 datastore. host1/2 can see both LUNs and I have VMs running on both too. I have rescanned, refreshed and even rebooted, but host3 refuses to acknowledge 1 of the datastores. Does anyone know what's going on? Update: I re-installed the host with ESX (not i) 4.0, same version as the existing hosts and it's still not recognising the vmfs. I think I'm going to SVmotion everything off that datastore then format it. Update2: I've created the LUN from scratch and the problem gets even weirder. I've presented the LUN to all 3 hosts, and I can see the LUN in the vSphere client's Configuration / Storage Adapters section on all 3 hosts. If I create a datastore on the LUN via the Configuration / Storage section on host1, it works fine and I can create an empty folder via datastore browser, but the datastore is not seen by the host2 and host3. I can use the Add Storage wizard on host2 and it will see the LUN. At this point the "VMFS Label" column has the label I gave with "(head)" appended. If I try the Add Storage wizard's "Keep the existing signature" option, it fails with an error "Cannot change the host configuration." and a dialog box that says 'Call "HostStorageSystem.ResolveMultipleUnresolvedVmfsVolumes" for object "storageSystem-17" on vCenter Server "vcenter.company.local" failed.' If I try the Add Storage wizard's "Assign a new signature" option on host2, it will complete and the VMFS label will have "snap-(hexnumber)-" prepended. At this point its also visible on host3, but not host1. I have a similar setup in a different datacenter which didn't give me all this trouble.
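
    For what it's worth, the command-line view of the same thing on the ESX(i) hosts, which is how I've been poking at it (the UUID below is a placeholder):

        esxcfg-volume -l                 # list volumes detected as snapshots/replicas of an existing VMFS
        esxcfg-volume -M <VMFS-UUID>     # mount persistently, keeping the existing signature
        esxcfg-volume -r <VMFS-UUID>     # or resignature the copy instead

    The -M form is supposed to be the CLI equivalent of the "Keep the existing signature" wizard option that fails for me with the ResolveMultipleUnresolvedVmfsVolumes error.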

    Read the article

  • unreadable corrupted ntfs partition - lost clusters reported

    - by Eduardo Martinez
    Hi, Partition Magic is reporting multiple 'bad file record signature' and 'lost clusters' errors on my 250GB Samsung SATA disk (connected via USB to an XP SP3 machine). Unfortunately PM is unable to fix them. PM shows the drive as NTFS and detects the used space and the drive name correctly, but the PM browser (right-click on the partition, browse...) won't show anything, as if the disk were empty. Windows Explorer does not even pick up the drive name and reports 'the file or directory is corrupted and unreadable'. The PTDD Partition Table Doctor demo tells me the boot sector is fine, and I can see all the disk content in its browser - but crucially I cannot copy that content over to a new disk (the PTDD browser is pretty arid, to say the least). Also tried: photorec-6.11.3 - it actually started to extract files but wouldn't keep file names or any folder structure (maybe I missed something in the configuration options); Find and Mount - the Intellectual scan went well and the only partition on the disk was detected, but when I then tried to mount it as p: I got this error in Windows Explorer: 'p:\ is not accessible. The media is write protected'. Find and Mount allows you to create an image from the partition, but I don't have a disk big enough at hand. Does anyone know if this will keep the extracted files/folders structure intact? I'm starting to think the disk is pretty screwed and my chances of recovering this data are slim. Please, someone enlighten me with that marvellous piece of software I am missing :-) Thanks in advance

    Read the article

  • SQL Full-Text indexing not populating

    - by Sam
    We installed a clustered SQL 2005 instance on Windows 2008, reattached our SAN drives from another machine, and restored, to do a migration to new hardware. There have been a few minor issues, but this one has me stuck. Trying to populate Full-Text indexes is not working. I create a basic table with some simple text in a new database and get the same results as with the old indexes. 2010-09-27 10:30:46.85 spid19s Informational: Full-text Full population initialized for table or indexed view '[SQL_DBA].[dbo].[CIS_Report_Executions]' (table or indexed view ID '1767677345', database ID '5'). Population sub-tasks: 1. 2010-09-27 10:31:15.36 spid19s Error '0x80070003' occurred during full-text index population for table or indexed view '[SQL_DBA].[dbo].[CIS_Report_Executions]' (table or indexed view ID '1767677345', database ID '5'), full-text key value 0x000001DF. Attempt will be made to reindex it. 2010-09-27 10:31:15.37 spid19s The component 'MSFTE.DLL' reported error while indexing. Component path 'D:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Binn\MSFTE.DLL'. 2010-09-27 10:31:15.37 spid19s Error '0x80070003' occurred during full-text index population for table or indexed view '[SQL_DBA].[dbo].[CIS_Report_Executions]' (table or indexed view ID '1767677345', database ID '5'), full-text key value 0x000001E0. Attempt will be made to reindex it. The rebuild/repopulate procedure finishes, but I get zero rows in the index. The .dll in the message is present and the service accounts have access to it. My FTData folder also has data in it, so it doesn't seem to be a permission issue on that folder. The application throws this error: “PHP Warning: mssql_query() [function.mssql-query]: message: Full-text catalog 'ikm_PageIndex_FText' is in an unusable state. Drop and re-create this full-text catalog. (severity 16) in E:\Inetpub\knowledgebase_insidemesa\lib\database\mssql.php on line 154” A Microsoft discussion thread is the only post I found that claimed to fix this - it said it was registry related, but then didn't post the fix.
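
    Since 0x80070003 is the Win32 "path not found" error, my working theory is that the catalog still points at a filesystem path from the old hardware. One thing I plan to try next is dropping and recreating the catalog in a path that definitely exists on the new node, something like this (run in the database that owns the catalog; the path here is an assumption from my environment):

        DROP FULLTEXT CATALOG ikm_PageIndex_FText;
        CREATE FULLTEXT CATALOG ikm_PageIndex_FText
            IN PATH N'D:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\FTData';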

    Read the article

  • How to hide subfolder when using Web.config for subdomains?

    - by mc-kay
    I have FTP access to my ASP.NET webspace (IIS 7) and I route subdomains with a Web.config in the web root folder. It looks like this: <?xml version="1.0" encoding="UTF-8"?> <configuration> <system.webServer> <rewrite> <rules> <rule name="route www and emtpy requests" stopProcessing="true"> <match url=".*" /> <conditions logicalGrouping="MatchAll" trackAllCaptures="false"> <add input="{HTTP_HOST}" pattern="^(www.)?example.com" /> <add input="{PATH_INFO}" pattern="^/www/" negate="true" /> </conditions> <action type="Rewrite" url="\www\{R:0}" /> </rule> <rule name="route to blog" stopProcessing="true"> <match url=".*" /> <conditions logicalGrouping="MatchAll" trackAllCaptures="false"> <add input="{HTTP_HOST}" pattern="^blog.example.com$" /> <add input="{PATH_INFO}" pattern="^/blog/" negate="true" /> </conditions> <action type="Rewrite" url="\blog\{R:0}" /> </rule> </rules> </rewrite> </system.webServer> </configuration> As you can see, I have two folders in my root directory: "www" and "blog". When I enter "blog.example.com" everything works fine, but when I click a link I end up at "blog.example.com/blog". What can I do to prevent this behavior?
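
    One idea I want to try is bouncing any request that arrives with the folder in the URL back to the clean address, so even if the blog generates /blog links the visitor ends up on blog.example.com without it. A sketch of an extra rule, placed before the existing ones:

        <rule name="strip blog folder from URLs" stopProcessing="true">
          <match url="^blog/(.*)" />
          <conditions>
            <add input="{HTTP_HOST}" pattern="^blog.example.com$" />
          </conditions>
          <action type="Redirect" url="http://blog.example.com/{R:1}" redirectType="Permanent" />
        </rule>

    If the blog is something like WordPress, its configured site URL probably also needs to be blog.example.com rather than blog.example.com/blog, otherwise it will keep writing the folder into its links.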

    Read the article

  • Looking for suitable backup solution Mac OS X to offsite Centos 6 server 1TB of working data

    - by Brady
    I'll start by saying what we have in place currently: An on-site file server (Mac OS X Server) that is used by GFX designers; they have a working 1TB of data. An offsite server with 2TB of available storage (CentOS 6). The Mac OS X server rsyncs data to the offsite server every 6 hours (rsync -avz --delete --progress -e ssh ...). The Mac OS X server does a full data backup to LTO-4 tape on a 10-day recycle (Mon-Fri for 2 weeks). rsync pushes about 60GB of file changes a day. The problem: The onsite tape backup is failing, as 1TB of graphics files doesn't compress well enough to fit onto an 800GB LTO-4 tape. The full backup is incredibly slow. It's a pain in the backside getting people to remember to change the tape; it often gets forgotten, etc. The quick solution: Buy an LTO-5 drive and tapes. However, this has been turned down because of the cost... What I would like: Something that works in the same way rsync works. Only changed data is sent over the wire and can be scheduled to run multiple times during the day. Data that is sent is compressed and sent over SSH. Something that keeps a 14-day retention but doesn't keep duplicate data. So as an example, if I have 1TB of working data and 60GB of changes are made each day, then I expect around 1.84TB of data to be stored on the offsite server. It must work with the Mac OS X server and CentOS 6 server. It must not cost an arm and a leg; it must be a cheaper solution than buying an LTO-5 drive with tapes (around £1500). It must be able to be set up to run autonomously. It should have some sort of control panel that will allow an admin to easily restore a file/folder. Any recommendations?
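
    The closest match I've found so far to these requirements is snapshot-style rsync with hard links, where each day looks like a full tree but unchanged files are just hard-linked to the previous day, so the 14-day retention costs roughly 1TB plus the daily deltas rather than 14 full copies. A sketch (paths and host are placeholders; rsnapshot automates exactly this rotation):

        # rotate daily.13 ... daily.1 first, then:
        rsync -az --delete -e ssh \
              --link-dest=/backup/gfx/daily.1 \
              admin@macserver:/Volumes/Data/ /backup/gfx/daily.0/

    Restoring a file or folder is then just an scp/rsync back out of whichever daily.N directory holds the wanted version.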

    Read the article

  • Mercurial internal Setup on Windows 7 - Exception happened during processing of request from ...

    - by Sad0w1nL1ght
    Hi, I have one central repository and many local ones. On my machine I have a local and the central repository too. I can clone/commit/update/push/pull very easily between the local and central repository on my local machine, but when I try to clone from another machine I get an error. listening at http://MyLocalMachine:8000/ (bound to *:8000) ---------------------------------------- Exception happened during processing of request from ('192.168.0.194', 49319) Traceback (most recent call last): File "SocketServer.pyc", line 558, in process_request_thread File "SocketServer.pyc", line 320, in finish_request File "mercurial\hgweb\server.pyc", line 47, in __init__ File "SocketServer.pyc", line 615, in __init__ File "BaseHTTPServer.pyc", line 329, in handle File "BaseHTTPServer.pyc", line 323, in handle_one_request File "mercurial\hgweb\server.pyc", line 79, in do_GET File "mercurial\hgweb\server.pyc", line 70, in do_POST File "mercurial\hgweb\server.pyc", line 63, in do_write File "mercurial\hgweb\server.pyc", line 127, in do_hgweb File "mercurial\hgweb\hgweb_mod.pyc", line 86, in __call__ File "mercurial\hgweb\hgweb_mod.pyc", line 118, in run_wsgi ErrorResponse ---------------------------------------- The command line which started the central repo: hg serve -R TT -n TTZoli The command from the remote machine for cloning: hg clone --pull http://MyLocalMachine:8000/TT Config for the central repo: [ui] username = MyLocalUserName username = test <[email protected]> (this is the user I'm trying to access the central repo with) [web] push_ssl = false Config for the remote repo: [ui] username = test <[email protected]> [web] push_ssl = false I'm not sure if it's relevant, but my firewall is turned off on both machines, and the files in the /hg folder are not versioned on the server, except hgignore. Could you please suggest some ideas? What could be the problem? Thanks in advance!
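
    To get past the truncated ErrorResponse, I'm planning to rerun the server with logging switched on so the actual HTTP error is visible (the log file names are arbitrary):

        hg serve -R TT -n TTZoli -v --debug -E error.log -A access.log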

    Read the article

  • PHP-FPM issue on LEMP Stack and WordPress

    - by jw60660
    I'm very much an NGINX and server admin beginner. I used this tutorial to install NGINX / PHP / MySQL / WordPress: C3M Digital Tutorial In this tutorial the backend php-cgi setup is configured using fastcgi. php5-fpm was installed during this tutorial: apt-get install nginx-full php5-fpm php5 php5-mysql php5-apc php5-mysql php5-xsl php5-xmlrpc php5-sqlite php5-snmp php5-curl After reading that the NGINX configuration in the WordPress Codex was more secure than most tutorials, I decided to use the Codex configuration: WordPress NGINX configuration in Codex The Codex configuration uses php-fpm for the backend php-cgi. When opening the browser I got a 502 Bad Gateway error. The error log was: "2012/06/10 21:18:27 [crit] 14009#0: *4 connect() to unix:/tmp/php-fpm.sock failed (2: No such file or directory) while connecting to upstream, client: 12.3.456.789, server: mywebsite.com, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/tmp/php-fpm.sock:", host: "mywebsite.com"" In the main NGINX configuration file supplied by the Codex I noticed the line starting "server unix:" in the upstream php block, which points to a socket that doesn't exist: # Upstream to abstract backend connection(s) for PHP. upstream php { server unix:/tmp/php-fpm.sock; # server 127.0.0.1:9000; } I checked the /tmp folder and it was empty. It seems I missed configuring php-fpm to work with NGINX. Can someone point me in the right direction? Much appreciated!
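
    My current plan, since the upstream block expects a socket at /tmp/php-fpm.sock that nothing is creating: either point the upstream at wherever php5-fpm actually listens (127.0.0.1:9000 by default on this kind of setup), or make the pool create the socket nginx expects. A sketch of the latter, in the pool config (on Debian/Ubuntu typically /etc/php5/fpm/pool.d/www.conf), followed by a php5-fpm restart:

        ; make the www pool listen where the nginx upstream points
        listen = /tmp/php-fpm.sock
        listen.owner = www-data
        listen.group = www-data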

    Read the article

  • Oracle 10g for Windows does not start up on system boot

    - by Mike Dimmick
    We have an Oracle 10g Enterprise Edition installation (10.2.0.1.0) on a Windows Server 2003 virtual machine. It was initially created with Virtual Server 2005 R2 SP1 but has now been migrated to Windows Server 2008 Hyper-V. The services start on system boot, but the instance does not start up. This problem was actually occurring on Virtual Server after a migration from one server to another, but I managed to fix it then with: oradim -edit -sid ORCL -startmode auto However, this now has no effect. oradim.log (in %OracleHome%\database\oradim.log) says: Thu Jun 10 14:14:48 2010 C:\oracle\product\10.2.0\db_3\bin\oradim.exe -startup -sid orcl -usrpwd * -log oradim.log -nocheck 0 Thu Jun 10 14:14:48 2010 ORA-12560: TNS:protocol adapter error sqlnet.log in the same folder has: Fatal NI connect error 12560, connecting to: (DESCRIPTION=(ADDRESS=(PROTOCOL=BEQ)(PROGRAM=oracle)(ARGV0=oracleorcl)(ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))'))(CONNECT_DATA=(SID=orcl)(CID=(PROGRAM=C:\oracle\product\10.2.0\db_3\bin\oradim.exe)(HOST=ORACLE-VM)(USER=SYSTEM)))) VERSION INFORMATION: TNS for 32-bit Windows: Version 10.2.0.1.0 - Production Oracle Bequeath NT Protocol Adapter for 32-bit Windows: Version 10.2.0.1.0 - Production Time: 10-JUN-2010 14:14:48 Tracing not turned on. Tns error struct: ns main err code: 12560 TNS-12560: TNS:protocol adapter error ns secondary err code: 0 nt main err code: 530 TNS-00530: Protocol adapter error nt secondary err code: 2 nt OS err code: 0 The ORA_ORCL_AUTOSTART registry value is set to TRUE, so it should be auto-starting - and you can see that it's trying to. The problem also occurs when stopping and restarting the OracleServiceORCL service. I've enabled SQL*Net tracing which shows: [10-JUN-2010 15:09:33.919] snlpcss: entry [10-JUN-2010 15:09:34.419] snlpcss: Unable to spawn Oracle oracle (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq))) orcl, error 2. [10-JUN-2010 15:09:34.419] snlpcall: exit On a hunch that error 2 is Windows error 2 (file not found) I tried restarting the service with Process Monitor watching oradim.exe, but this appears to delay things just enough that it always works. Right now I have a horrible hack where I've created a Scheduled Task to run oradim -startup -sid ORCL when the Administrator account logs on, and set the VM to auto-logon. I'd still like to work out why it's not working.

    Read the article

  • Mac SMB connections to Windows 2003 server, leaving Open Files

    - by Bruce Garlock
    We have several Mac clients (both 10.5 and 10.6) mounting a share from a Windows 2003 server. At least once a day, our archivist will go into this share to archive items from it to the backup server. Most of the time she has no issues: she copies the folder to the archive server and, when it's done, deletes it from this share. Then she will come upon one, and it will say she doesn't have permission. When I go into the open sessions on the Windows 2003 server, it will say that a particular user has a READ lock on the file. Of course, this person does not have the file open, and the only way we can delete it is to close the open session on the file. My thoughts: (1) The Mac likes to sprinkle hidden resource-fork files on SMB servers, and possibly the Mac that last wrote to that share has closed the file but these hidden files still exist and hold the lock. (2) Windows 2003 has a bug that doesn't properly release the oplock on the file? (3) Steve Ballmer just doesn't like Macs, so he wants to annoy everyone by not releasing file locks :-) What can be done about this? It happens every day, and sometimes several times per day! Many thanks, Bruce
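
    A stopgap sketch for clearing the stale handles from the command line on the server instead of closing the session in the GUI each time (the share path and ID are placeholders):

        openfiles /query /fo table | findstr /i "Designs"
        openfiles /disconnect /id 4321

    And, assuming the hidden resource-fork files really are what's holding the read locks, periodically sweeping the Mac metadata out of the share:

        del /s "D:\Shares\Designs\._*"
        del /s "D:\Shares\Designs\.DS_Store"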

    Read the article

< Previous Page | 486 487 488 489 490 491 492 493 494 495 496 497  | Next Page >