Search Results

Search found 6069 results on 243 pages for 'ftv admin'.


  • .htaccess authorization requiring username/password for every resource

    - by webworm
    I am using Apache2 on Ubuntu and I am having some "weird" user authorization issues. I am using .htaccess to control access to my directories. I have many users and have grouped them into user groups which are defined in a "group" file. I then use .htaccess within each directory to define which users have access to the directory and which do not. Here is an example .htaccess file. AuthUserFile /var/local/.htpasswd AuthGroupFile /var/local/groups AuthName "Username and Password Required" AuthType Basic require group design admin Everything is working with one exception. I added a new user to one of my groups and though they can gain access to the directory they are prompted for a username and password for every resource (i.e. images, CSS). After a while I can just keep selecting "cancel" and I will get a page with just HTML with no images or CSS. I would think the browser would just cache the username/password. It seems to be working well for other users. Any thoughts?
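
    A minimal workaround sketch (assuming Apache 2.2 and that AllowOverride permits AuthConfig/Limit): keep Basic auth on the pages themselves but let static assets through without a prompt, so a misbehaving browser at least stops asking for every image and stylesheet. The extension list is an assumption, and this treats the symptom rather than explaining why that one user's browser is not caching the credentials.

      # .htaccess sketch: pages stay protected, static assets are exempt
      AuthUserFile /var/local/.htpasswd
      AuthGroupFile /var/local/groups
      AuthName "Username and Password Required"
      AuthType Basic
      Require group design admin

      <FilesMatch "\.(png|jpe?g|gif|css|js)$">
          Order allow,deny
          Allow from all
          Satisfy Any
      </FilesMatch>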

    Read the article

  • Reporting Services 2008: Virtual directories not visible in IIS7..

    - by Ryan Barrett
    I'm having some problems with Reporting Services on Windows Server 2008 Standard. I've installed Server 2008 as a standalone webserver (with the roles/features of a web application server). On top of that, I've installed Sql Server 2008 Standard with Reporting Services (and the rest of the BI tools). Problem is, I want to modify the rights on the virtual directories. However, the virtual directories aren't appearing in the IIS 7 management tool. I can connect to Reporting Services, albeit only with the local Windows admin account. I can download Report Builder fine from a session on the server (but not from any clients). I've tried removing the default website from IIS, and that stops the Reporting Services website from working. The machine (a VM) isn't for production use - it's used on a closed network internally for testing and development purposes. I need to be able to let my fellow developers log in without a password, and they must be able to install ReportBuilder 2.0. Must not be linked to a domain or Active Directory in any form. Google isn't much help, the results suggest I modify the virtual directory. Does anyone have any suggestions?
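
    For what it's worth, SSRS 2008 no longer hosts its virtual directories in IIS at all: it registers URLs directly with HTTP.sys and is configured through Reporting Services Configuration Manager, which is why nothing shows up in the IIS 7 manager, and item-level permissions are granted through the Report Manager web UI rather than IIS. A hedged way to see the URL reservations it has made (run from an elevated prompt; output will vary):

      netsh http show urlacl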

    Read the article

  • PsExec and Remote Environment Variables, Logging, Etc.

    - by alharaka
    When I run PsExec on a remote computer, I always fall short of what I want. What I would like ideally in most situations is a) a log on an admin server where each individual log has the name of the remote computer it was generated from (e.g. COMPNAME1.log, COMPNAME2.log, etc.) or b) a log file on each remote computer with whatever name I specify. When I try scenario (a), I use the following command. %SystemDrive%\path\to\psexec.exe @listofcomputers.txt -u DOMAIN\username cmd /c echo TEST >> \\server.company.tld\share\%computername%.log Problem is that it never works. All the computers just write to the log where %computername% is just the computer I execute PsExec from in my office. What I want are unique logs for each computer specified in listofcomputers.txt that will correctly use the hostname from the remote environment variable without issue. Is that even possible? It does not seem to work for me. I tried this, and the syntax is clearly wrong. %SystemDrive%\path\to\psexec.exe @listofcomputers.txt -u DOMAIN\username "cmd /c echo TEST >> \\server.company.tld\share\%computername%.log" PsExec just fails saying the system file cannot be found (read: syntax fail). As for scenario (b), it appears to be a variation of a similar problem. When I run a command like this, it does not work. %SystemDrive%\path\to\psexec.exe @listofcomputers.txt -u DOMAIN\username "cmd /c echo %computername% >> \\server.company.tld\share\aggregated.log" Is there something I do not understand about remote path and environment variables with PsExec on the cmd.exe console (I have not even tried the dreaded PowerShell yet)? I know such things work in a batch file (cmd /c \\server.company.tld\share\runthis.bat), but is there a reason it will not work when executing commands as arguments? I always need this, and can never get it!
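
    A hedged explanation and untested sketch: the local cmd.exe expands %computername% before PsExec ever launches, so every remote host writes to the same log. Quoting the whole command keeps the local shell from grabbing the >> redirect, and asking the remote cmd for delayed expansion (!computername!) defers the variable lookup to the remote side. The exact quoting is an assumption worth verifying against one host first.

      rem expand COMPUTERNAME remotely via delayed expansion
      %SystemDrive%\path\to\psexec.exe @listofcomputers.txt -u DOMAIN\username cmd /v:on /c "echo TEST >> \\server.company.tld\share\!computername!.log"

      rem from inside a batch file, doubled percents survive local parsing instead:
      rem   cmd /c "echo TEST >> \\server.company.tld\share\%%computername%%.log"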

    Read the article

  • Remote desktop connection to network printer

    - by andand
    I'm trying to print a document from a remote WinXP machine to a network printer I use on a local Win7 machine using Remote Desktop. The network printer does not appear in the list of those available on the WinXP box. In more detail, the local machine runs Windows 7 (no admin rights) and connects to a network printer managed by a print server (i.e. not using a local TCP/IP Port). I have access to a Windows XP host on a separate network which I access using Remote Desktop. I would like to have print requests from the remote XP box forwarded to the network printer I use on the Windows 7 machine. The XP machine cannot access the print server I use on the Win7 machine nor can it create a TCP/IP port to connect directly to the printer (network configuration issues). After consulting KB312135, I confirmed the "Printers" option was selected in the Remote Desktop Client, Local Resources tab, yet the network printer does not appear on the list of available printers on the XP box. Is this a lost cause or is there something else I haven't managed to locate yet?

    Read the article

  • Nexus functionality is limited after installation

    - by Dmitriy Sukharev
    I have a CentOS based server with Sonatype Nexus 2.0.4-1 installed. The issue is that there are no standard "Artifact Search", "Advanced Search", "Browse Index", "Refresh Index" Nexus features, nor an Artifact Information tab after selecting any artifact (only a Maven Information tab). I tried to Google, but was amazed that there is no information about this issue. Actually it looks like all the actions I've done are:
      wget http://www.sonatype.org/downloads/nexus-2.0.4-1-bundle.tar.gz
      tar -xvf nexus-2.0.4-1-bundle.tar.gz
      cp -r nexus-2.0.4-1 sonatype-work /opt/
      ln -s /opt/nexus-2.0.4-1/* /opt/nexus
      ln /opt/nexus/bin/nexus /etc/init.d/
      chmod 755 /etc/init.d/nexus
      vim /etc/init.d/nexus    (set NEXUS_HOME="/opt/nexus" and RUN_AS_USER="nexus")
      useradd -s /sbin/nologin -d /var/lib/nexus nexus
      chown -R nexus /opt/nexus/
      chown -R nexus /opt/nexus-2.0.4-1/
      sudo -u nexus cp /opt/nexus/conf/examples/proxy-https/jetty.xml /opt/nexus/conf/
    To force Nexus to be available over HTTPS I went to Administration - Server - Application Server Settings as admin and changed Base URL to https:// external IP/nexus and set Force Base URL to true. Any ideas how to get the missing Nexus features?

    Read the article

  • IIS 7.5 error 500 in fastcgi module after upgrading wordpress to 3.0.2

    - by Maniac13
    I am running multiple WordPress blogs on the following setup: Server 2008 R2; IIS 7.5; PHP 5.3.3; MySQL 5.0.7. I upgraded my WordPress install from 2.9.2 to 3.0.2 (on 2 different sites) today and the upgrade went fine. I can serve .php pages without errors, log into the admin system etc. I can browse my blog by going directly to mywebsite.com/index.php, but when I try to go to mywebsite.com (without the index.php) I get the below 500 error. I reset IIS, removed and re-attached the default document, but I am running out of ideas. Please, if anyone has a solution for this, that would be great. This is the 500 error I am getting:
      Error Summary: HTTP Error 500.0 - Internal Server Error. The page cannot be displayed because an internal server error has occurred.
      Detailed Error Information
      Module: FastCgiModule
      Notification: ExecuteRequestHandler
      Handler: PHP FastCGI
      Error Code: 0x00000000
      Requested URL: http://mywebsite.com:80/index.php
      Physical Path: D:\mywebsite.com\index.php
      Logon Method: Anonymous
      Logon User: Anonymous
    Thanks Stephan
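
    Since index.php renders fine but the extensionless URL fails in FastCgiModule, this looks more like a default-document/rewrite problem than PHP itself. A hedged web.config sketch for the site root, assuming the IIS URL Rewrite module is installed; the rule is the stock WordPress one and would need adjusting if the blogs live in subfolders:

      <configuration>
        <system.webServer>
          <defaultDocument>
            <files>
              <clear />
              <add value="index.php" />
            </files>
          </defaultDocument>
          <rewrite>
            <rules>
              <rule name="WordPress" patternSyntax="Wildcard">
                <match url="*" />
                <conditions>
                  <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                  <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                </conditions>
                <action type="Rewrite" url="index.php" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>
      </configuration>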

    Read the article

  • Windows XP mounting USB drive to same letter as previously mapped network drive

    - by GAThrawn
    Why does Windows always mount a USB drive as the next drive letter after the last physical drive, even when that letter is already taken by a mapped drive, and is there any way to improve this behaviour? What happens is I tend to use a few different flash drives on my PC, as well as having both a Blackberry and a personal phone that mount as USB drives when I plug them in to charge. Being on a corporate PC I also have a number of mapped network drives (some set by login script, some set as persistent mappings in my profile). When I first login I'll have drive letters like this:
      C: - Local Drive
      D: - DVD Drive
      G: - Login script mapped drive
      J: - Login script mapped drive
    When I plug the Blackberry in it'll mount two drives (one for onboard storage, one for the SD card) as E: and F:. If I then plug in another USB drive it will mount as G:, even though that's already taken by a network mapped drive. This leaves me with the following drives:
      C: - Local Drive
      D: - DVD Drive
      E: - USB drive (Blackberry)
      F: - USB drive (Blackberry)
      G: - Login script mapped drive
      [G: - USB drive - mounted but not visible in Explorer or command prompt]
      J: - Login script mapped drive
    I then have to go into Disk Management, find the new USB drive that's mounted to G: and re-assign it to another letter, e.g. Z:. Once this is done Auto-Play detects it and throws up its normal dialog, and it's browsable in Explorer. While this is OK to do if you only use one or two USB drives and have admin access to your PC with your login account, it's a total pain in the proverbial if you regularly use a whole load of different USB devices, and corporate policy means you have one account for your normal login (that only has User access to workstations), but have to use a different account for any privileged action. I realize that one possible reason for this is the difference between hardware, which is mounted and assigned drive letters at the system level, and mapped drives, which are done at the user level. For USB devices that are already plugged in before login, then obviously they're mounted before Windows knows what network drives may be mapped. However, if you plug the USB devices in after you're fully logged in and have drives mapped, then Windows must know which letters are available?
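
    Two hedged mitigations, neither of which changes how Windows XP picks the letter: map the network drives to letters far from the front of the alphabet (e.g. T: upwards), so new USB devices never collide with them; or script the re-assignment so it is a double-click instead of a trip through Disk Management. A diskpart sketch follows; it still needs admin rights, and the volume number is hypothetical, so check the "list volume" output first.

      rem save as fixusb.txt and run:  diskpart /s fixusb.txt
      list volume
      select volume 5
      assign letter=Z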

    Read the article

  • OSX Server 10.5 - Cannot log into Workgroup Manager - diradmin password is correct

    - by Mister IT Guru
    I've got a setup where I am trying to rescue a broken AD. We can no longer authenticate in Workgroup Manager, with the password being rejected every time, even though it is correct. I can connect using Workgroup Manager on another server and I get the user list as expected, but when I click the padlock to make changes, I get the following screen: The problem is, I know the password is correct, I just used it to connect to the server in the first place. I can log into the server using the local admin, and services such as AFP, VPN and SMB continue to serve users. I have about 300 or so users on this server, and I would very much like to avoid having a rebuild. As there is much configuration that has been done without my knowledge (it's a client machine), I'd like to attempt to fix it, then create another server and migrate OD off this broken machine, then decommission it "gently". Ultimately this would mean no disruption of services. What I'd like is some tips on how to fix the problem with authenticating to make changes in Workgroup Manager, and on maintaining Open Directory in general. Thanks

    Read the article

  • Virtual Lan on the Cloud -- Help Confirm my understanding?

    - by marfarma
    [Note: Tried to post this over at ServerFault, but I don't have enough 'points' for more than one link. Powers that be, move this question over there.] Please give this a quick read and let me know if I'm missing something before I start trying to make this work. I'm not a systems admin professional, and I'd hate to end up banging my head into the wall if I can avoid it. Goals: Create a 'road-warrior' capable star shaped virtual LAN for consultants who spend the majority of their time on client sites, and whose firm has no physical network or servers. Enable CIFS access to a cloud-server based installation of Alfresco. Allow eventual implementation of some form of single-sign-on (OpenLDAP server) access to Alfresco and other server applications implemented in the future. Given: All servers will live in the public internet cloud (Rackspace Cloud Servers). The OpenVPN server will be a Linux distro, probably Ubuntu 9.x, installed on the same server as Alfresco (at least to start). Staff will access server applications and resources from client sites, hotels, trains, planes, coffee shops or their homes over various ISPs, using their company laptops or personal home desktops. Based on my research thus far, to accomplish this, I'll need: OpenVPN with Bridging Enabled to create a star shaped "virtual" LAN http://openvpn.net/index.php/open-source/documentation/miscellaneous/76-ethernet-bridging.html A Road Warrior Network Configuration, as described in this Shorewall article (lower down the page) http://www.shorewall.net/OPENVPN.html Configure bridge addressing (probably DHCP) http://openvpn.net/index.php/open-source/faq.html#bridge-addressing Configure CIFS / Samba to accept VPN IP address http://serverfault.com/questions/137933/howto-access-samba-share-over-vpn-tunnel Set up Client software, with keys configured for access (potentially through an OpenVPN-AS client portal) http://www.openvpn.net/index.php/access-server/download-openvpn-as/221-installation-overview.html
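
    For concreteness, a minimal hedged sketch of the server side of a bridged ("tap") OpenVPN setup like the one described above; interface names, certificate file names, and the address pool are all assumptions, and the br0 bridge itself still has to be built with the distro's bridge scripts before OpenVPN starts:

      # server.conf sketch for ethernet bridging (values hypothetical)
      port 1194
      proto udp
      dev tap0                  # tap (bridged), not tun (routed)
      ca ca.crt
      cert server.crt
      key server.key
      dh dh1024.pem
      # bridge IP, netmask, and the address pool handed to road warriors
      server-bridge 10.8.0.4 255.255.255.0 10.8.0.50 10.8.0.100
      keepalive 10 120
      comp-lzo
      persist-key
      persist-tun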

    Read the article

  • Share Point ACL on OSX Lion Server - Posix group always takes over ACLs

    - by Ben
    Trying to configure a share point on a Lion Server machine. The directory is created by the local server admin (serveradmin) and has rwxr-x--- given to it. The serveradmin user belongs to the local staff group, so:
      serveradmin: read/write
      staff group: read
      Others: none
    We have an OD group for all the employees (Workers). Using the Server tool we've given Full Control to the share point:
      Workers: Full Control
      serveradmin: read/write
      staff group: read
      Others: none
    We would assume that Workers could then do what they want on the share, but that doesn't seem to be the case. It appears the POSIX permissions take over the ACL permissions for Workers. If I change the staff permission to read/write then the Workers can create a file or folder in the share point. I would think the ACL should take over but it doesn't; POSIX always wins, rendering the ACL useless. Furthermore, if I leave the read/write permission for staff and take Write permission away from the Workers group, then the POSIX group still wins. Essentially the Workers ACL does absolutely nothing. There are reports of similar problems in this Apple forum thread: https://discussions.apple.com/thread/3722901 The directory nesting fix suggested there doesn't work for us. Has anyone had similar issues and knows how to fix this? Edit: in Workgroup Manager the employee users are set to primary group staff and given the additional OD group Workers. Changing their primary group doesn't help; it only shifts the problem onto Others taking over rights (logically). Edit 2: OK, this is interesting: adding OD Users to the share's ACL works totally fine
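
    It can also help to check what ACL the share point actually carries on disk, since the Server tool sometimes writes less than the UI suggests. A hedged sketch with the built-in tools; the share path is an assumption, and the long rights list is the usual "full control" set:

      # show the ACL as stored on the directory itself
      ls -led /Shares/Projects

      # re-apply an inheriting allow ACE for the OD group directly
      sudo chmod -R +a "group:Workers allow list,add_file,search,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,file_inherit,directory_inherit" /Shares/Projects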

    Read the article

  • Permissions on mac for itunes library with multiple users - idea

    - by John
    I currently have a lot of music on an external drive and my itunes set up from there. However, periodically, when the external drive isn't connected, itunes will default back to the library location of my home directory user path. I don't want to mess with an external drive, as my mac HD is large enough to house the music collection. However, I have 4 family members - all with their own logins - using this same gob of music. I don't want 4 copies of the library, only one with all libraries referencing it. So, what I want to do is: 1 - move all music files to a shared directory at /Macintosh HD/users/music. I created this directory and adjusted permissions, so all four users can read and write to this directory. 2 - get all four accounts to reference this library instead of the external or local home locations I am hoping I can just check the box to keep library organized in my account, which is the admin and let itunes move it all. Then delete current libraries for each account and re-add from the new shared location. Will the itunes organization process cause permissions issues either by setting permissions to all the files access to my account only or write permissions or any other 'gotcha'? I am having a hard time coming up with a smooth solution that won't break everything and cause me to have mega duplicates or access issues. I would prefer not to do any xml library file editing if possible. Am I dreaming? Thanks for help.
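
    A hedged sketch of the permissions side only, assuming the shared folder ends up at /Users/Shared/Music (close to the /Macintosh HD/users/music idea) and that all four accounts belong to the built-in staff group; the inheriting ACE is there so files written or reorganized by any one account stay group-accessible. Paths and the group name are assumptions.

      sudo mkdir -p /Users/Shared/Music
      sudo chown -R :staff /Users/Shared/Music
      sudo chmod -R g+rwX /Users/Shared/Music
      sudo chmod -R +a "group:staff allow read,write,delete,add_file,add_subdirectory,file_inherit,directory_inherit" /Users/Shared/Music

    Each account would then point its library at that folder; leaving the "keep organized" option enabled in only one account is probably safest, so two libraries don't reshuffle the same files.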

    Read the article

  • Symantec Endpoint Protection Virus Definitions

    - by Gus Denton
    I have done some Googling but I cannot get a definitive answer, certainly not from the Symantec KB. I have a virtualised Win 2003 R2 server, 32-bit. It has been provisioned to me with Symantec Endpoint Protection 11.0.62xxx CLIENT (not a definitions server). The directory C:\Program Files\Common Files\Symantec Shared\VirusDefs is 750MB. It doesn't contain .tmp directories, so it is NOT a corrupt definitions server. It does contain directories named with a date pattern YYYYMMDD.xxx. Some of these folders are 12 months old and I would like to recover the space. The Symantec forums are full of this stuff, but a lot of the postings contain links back to documents that are not specific to the Endpoint Protection client. It appears that I should be able to delete the older folders and all will be OK with a service restart; however, there is a warning about having LiveUpdate Administrator installed. Firstly, I have no idea if I have this installed or how to check, and secondly, can I just ditch these old files and restart? Regards, Gus Denton, Learning and Teaching, Uni of New South Wales, Sydney, Australia. For those trying to assist me, I thank you. I have followed some instructions found on the Symantec site and assumed that the response from Nixphoe would resolve my issue. It appears that, as I am on a provisioned VM from a central IT unit, I cannot run the Symantec commands (smc -stop) from the Run prompt with the admin creds I have. Basically I need to claw back some disk space from the C: drive, which is being filled up with WSUS patches and Symantec files. I have managed to delete one Symantec cache through the LiveUpdate control panel and recovered 470Mb. I suppose my last question for those more experienced than myself is: can I simply remove, say, the two oldest virus definition folders without completely foobaring the Endpoint Protection client and the server? Regards Gus

    Read the article

  • How do I repair the corrupted files found by sfc /scannow? "Windows Resource Protection found corrupt files but was unable to fix some of them."

    - by galacticninja
    After running chkdsk C: /F /R and finding out that my hard disk has 24 KB in bad sectors (log is posted below), I decided to run Windows 7's System File Checker utility (sfc /scannow). SFC showed the ff. message after I ran it: "Windows Resource Protection found corrupt files but was unable to fix some of them. Details are included in the CBS.Log windir\Logs\CBS\CBS.log." Since the CBS.log file is too large, I ran findstr /c:"[SR]" %windir%\Logs\CBS\CBS.log >"%userprofile%\Desktop\sfcdetails.txt" (as per Microsoft's KB 928228 article) to only get the log text pertaining to the corrupt files. (log is also posted below) How do I troubleshoot and repair the corrupted files mentioned by sfc /scannow? My OS is Windows 7, 64-bit. chkdsk log Checking file system on C: The type of the file system is NTFS. A disk check has been scheduled. Windows will now check the disk. CHKDSK is verifying files (stage 1 of 5)... 936192 file records processed. File verification completed. 25238 large file records processed. 0 bad file records processed. 4 EA records processed. 44 reparse records processed. CHKDSK is verifying indexes (stage 2 of 5)... 1051640 index entries processed. Index verification completed. 0 unindexed files scanned. 0 unindexed files recovered. CHKDSK is verifying security descriptors (stage 3 of 5)... 936192 file SDs/SIDs processed. Cleaning up 24 unused index entries from index $SII of file 0x9. Cleaning up 24 unused index entries from index $SDH of file 0x9. Cleaning up 24 unused security descriptors. Security descriptor verification completed. 57725 data files processed. CHKDSK is verifying Usn Journal... 36994248 USN bytes processed. Usn Journal verification completed. CHKDSK is verifying file data (stage 4 of 5)... 936176 files processed. File data verification completed. CHKDSK is verifying free space (stage 5 of 5)... 306238 free clusters processed. Free space verification is complete. Adding 1 bad clusters to the Bad Clusters File. Correcting errors in the Volume Bitmap. Windows has made corrections to the file system. 488282111 KB total disk space. 485595420 KB in 766458 files. 401856 KB in 57726 indexes. 24 KB in bad sectors. 1059863 KB in use by the system. 65536 KB occupied by the log file. 1224948 KB available on disk. 4096 bytes in each allocation unit. 122070527 total allocation units on disk. 306237 allocation units available on disk. Internal Info: 00 49 0e 00 81 93 0c 00 34 01 17 00 00 00 00 00 .I......4....... 6b 29 00 00 2c 00 00 00 00 00 00 00 00 00 00 00 k)..,........... 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ sfc /scannow log (through findstr /c:"[SR]" %windir%\Logs\CBS\CBS.log >"%userprofile%\Desktop\sfcdetails.txt") Note: The full log is at http://pastebin.com/raw.php?i=gTEGZmWj . I've only quoted parts of the full log below (mostly from the last part), as the full log won't fit within the character limit for questions. I've added it to serve as a preview. ... 2013-12-28 19:37:50, Info CSI00000542 [SR] Beginning Verify and Repair transaction 2013-12-28 19:37:55, Info CSI00000544 [SR] Verify complete 2013-12-28 19:37:56, Info CSI00000545 [SR] Verifying 95 (0x000000000000005f) components 2013-12-28 19:37:56, Info CSI00000546 [SR] Beginning Verify and Repair transaction 2013-12-28 19:38:03, Info CSI00000548 [SR] Verify complete 2013-12-28 19:38:03, Info CSI00000549 [SR] Repairing 43 (0x000000000000002b) components 2013-12-28 19:38:03, Info CSI0000054a [SR] Beginning Verify and Repair transaction ... 
2013-12-28 19:38:15, Info CSI00000730 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:62{31}]"GroupPolicy-Admin-Gpedit-DL.man"; source file in store is also corrupted 2013-12-28 19:38:15, Info CSI00000733 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:30{15}]"frs-core-DL.man"; source file in store is also corrupted 2013-12-28 19:38:15, Info CSI00000736 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:26{13}]"gpmgmt-DL.man"; source file in store is also corrupted 2013-12-28 19:38:15, Info CSI00000739 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:74{37}]"MediaServer-ASPAdmin-Migration-DL.man"; source file in store is also corrupted 2013-12-28 19:38:15, Info CSI0000073c [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:36{18}]"Ldap-Client-DL.man"; source file in store is also corrupted 2013-12-28 19:38:15, Info CSI0000073f [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:38{19}]"iSNS_Service-DL.man"; source file in store is also corrupted 2013-12-28 19:38:15, Info CSI00000742 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:76{38}]"MediaServer-Multicast-Migration-DL.man"; source file in store is also corrupted 2013-12-28 19:38:15, Info CSI00000745 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:78{39}]"Kerberos-Key-Distribution-Center-DL.man"; source file in store is also corrupted 2013-12-28 19:38:15, Info CSI00000748 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:86{43}]"GroupPolicy-CSE-SoftwareInstallation-DL.man"; source file in store is also corrupted 2013-12-28 19:38:15, Info CSI0000074b [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:28{14}]"ieframe-dl.man"; source file in store is also corrupted 2013-12-28 19:38:15, Info CSI0000074e [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:76{38}]"GroupPolicy-Admin-Gpedit-Snapin-DL.man"; source file in store is also corrupted 2013-12-28 19:38:15, Info CSI00000751 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:32{16}]"IPSec-Svc-DL.man"; source file in store is also corrupted 2013-12-28 19:38:15, Info CSI00000754 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:22{11}]"HTTP-DL.man"; source file in store is also corrupted 2013-12-28 19:38:15, Info CSI00000757 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:56{28}]"MediaServer-Migration-DL.man"; source file in store is also corrupted 2013-12-28 19:38:15, Info CSI0000075a [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:26{13}]"GPBase-DL.man"; source file in store is also corrupted 2013-12-28 19:38:15, Info CSI0000075d [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:38{19}]"IasMigPlugin-DL.man"; source file in store is also 
corrupted 2013-12-28 19:38:15, Info CSI00000760 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:50{25}]"International-Core-DL.man"; source file in store is also corrupted 2013-12-28 19:38:16, Info CSI00000762 [SR] Cannot repair member file [l:24{12}]"wbemdisp.dll" of Microsoft-Windows-WMI-Scripting, Version = 6.1.7600.16385, pA = PROCESSOR_ARCHITECTURE_INTEL (0), Culture neutral, VersionScope = 1 nonSxS, PublicKeyToken = {l:8 b:31bf3856ad364e35}, Type neutral, TypeName neutral, PublicKey neutral in the store, hash mismatch 2013-12-28 19:38:16, Info CSI00000763 [SR] This component was referenced by [l:202{101}]"Microsoft-Windows-Foundation-Package~31bf3856ad364e35~amd64~~6.1.7601.17514.WindowsFoundationDelivery" 2013-12-28 19:38:16, Info CSI00000766 [SR] Could not reproject corrupted file [ml:58{29},l:56{28}]"\??\C:\Windows\SysWOW64\wbem"\[l:24{12}]"wbemdisp.dll"; source file in store is also corrupted 2013-12-28 19:38:16, Info CSI00000768 [SR] Cannot repair member file [l:56{28}]"Microsoft.MediaCenter.UI.dll" of Microsoft.MediaCenter.UI, Version = 6.1.7601.17514, pA = PROCESSOR_ARCHITECTURE_MSIL (8), Culture neutral, VersionScope = 1 nonSxS, PublicKeyToken = {l:8 b:31bf3856ad364e35}, Type neutral, TypeName neutral, PublicKey neutral in the store, hash mismatch 2013-12-28 19:38:16, Info CSI00000769 [SR] This component was referenced by [l:176{88}]"Microsoft-Windows-MediaCenter-Package~31bf3856ad364e35~amd64~~6.1.7601.17514.MediaCenter" 2013-12-28 19:38:16, Info CSI0000076c [SR] Could not reproject corrupted file [ml:520{260},l:40{20}]"\??\C:\Windows\ehome"\[l:56{28}]"Microsoft.MediaCenter.UI.dll"; source file in store is also corrupted 2013-12-28 19:38:16, Info CSI0000076e [SR] Cannot repair member file [l:24{12}]"ReAgentc.exe" of Microsoft-Windows-WinRE-RecoveryTools, Version = 6.1.7601.17514, pA = PROCESSOR_ARCHITECTURE_INTEL (0), Culture neutral, VersionScope = 1 nonSxS, PublicKeyToken = {l:8 b:31bf3856ad364e35}, Type neutral, TypeName neutral, PublicKey neutral in the store, hash mismatch 2013-12-28 19:38:16, Info CSI0000076f [SR] This component was referenced by [l:202{101}]"Microsoft-Windows-Foundation-Package~31bf3856ad364e35~amd64~~6.1.7601.17514.WindowsFoundationDelivery" 2013-12-28 19:38:16, Info CSI00000772 [SR] Could not reproject corrupted file [ml:48{24},l:46{23}]"\??\C:\Windows\SysWOW64"\[l:24{12}]"ReAgentc.exe"; source file in store is also corrupted 2013-12-28 19:38:16, Info CSI00000774 [SR] Cannot repair member file [l:82{41}]"System.Management.Automation.dll-Help.xml" of Microsoft-Windows-PowerShell-PreLoc.Resources, Version = 6.1.7600.16385, pA = PROCESSOR_ARCHITECTURE_AMD64 (9), Culture = [l:10{5}]"en-US", VersionScope = 1 nonSxS, PublicKeyToken = {l:8 b:31bf3856ad364e35}, Type neutral, TypeName neutral, PublicKey neutral in the store, hash mismatch 2013-12-28 19:38:16, Info CSI00000775 [SR] This component was referenced by [l:266{133}]"Microsoft-Windows-Client-Features-Package~31bf3856ad364e35~amd64~en-US~6.1.7601.17514.Microsoft-Windows-Client-Features-Language-Pack" 2013-12-28 19:38:16, Info CSI00000778 [SR] Could not reproject corrupted file [ml:520{260},l:104{52}]"\??\C:\Windows\System32\WindowsPowerShell\v1.0\en-US"\[l:82{41}]"System.Management.Automation.dll-Help.xml"; source file in store is also corrupted 2013-12-28 19:38:16, Info CSI0000077a [SR] Cannot repair member file [l:18{9}]"hlink.dll" of Microsoft-Windows-HLink, Version = 6.1.7600.16385, pA = PROCESSOR_ARCHITECTURE_INTEL (0), Culture 
neutral, VersionScope = 1 nonSxS, PublicKeyToken = {l:8 b:31bf3856ad364e35}, Type neutral, TypeName neutral, PublicKey neutral in the store, hash mismatch 2013-12-28 19:38:16, Info CSI0000077b [SR] This component was referenced by [l:202{101}]"Microsoft-Windows-Foundation-Package~31bf3856ad364e35~amd64~~6.1.7601.17514.WindowsFoundationDelivery" 2013-12-28 19:38:16, Info CSI0000077e [SR] Could not reproject corrupted file [ml:48{24},l:46{23}]"\??\C:\Windows\SysWOW64"\[l:18{9}]"hlink.dll"; source file in store is also corrupted 2013-12-28 19:38:16, Info CSI00000780 [SR] Repair complete 2013-12-28 19:38:16, Info CSI00000781 [SR] Committing transaction 2013-12-28 19:38:19, Info CSI00000785 [SR] Verify and Repair Transaction completed. All files and registry keys listed in this transaction have been successfully repaired

    Read the article

  • Should one have a separate user account for work use? [closed]

    - by Tyler Wayne
    This question examines the practice of using a separate OS-level user account to divide work use from personal use (specifically, in a creative profession and on a personal computer). I recently left my in-the-flesh job to go to school, but I'm carrying on with the work remotely. I do all of my work on my laptop, and I currently have a separate user account called "Work" where I do exactly that. However, I'm now starting to question that practice. Because my hobby is the same as my job, I want to save notes of the things I learn while working. Because ideas come at any moment, I often want to throw something into my personal task manager's inbox and look at it again later. That task manager is well-suited to handle both the work and personal aspects of my life. Only my personal account has admin rights, but work sometimes requires me to install programs. My employer has no preference regarding my choice, so that is a non-issue. My work is essentially freelance web development, so advice given with that in mind will be much appreciated. Back up all opinion with some personal experience, please. Ideally, give a list of pros and cons and then name reasons for your position.

    Read the article

  • Recovering Data from a Linkstation LS-WXL/R1

    - by kingkool68
    I've been running a Buffalo Linkstation LS-WXL/R1 in RAID1 for a few weeks. Two nights ago we had a brief power outage. Yesterday when I tried to mount the disk it couldn't be found. I logged in to the web admin and it couldn't see any storage attached and no arrays were available. Well this sucks. I ended up taking the disks out and attaching them to an Ubuntu virtual machine. They show up as an array but, to the best of my limited knowledge, I can't start the array. I could still see the 6 partitions, so I'm confident the data is there. I can use Photorec (http://www.cgsecurity.org/wiki/PhotoRec) to recover most of the files I want, it just takes some time. Could I make an image of the data partition and mount that in Ubuntu so I can get at the data I want through the file system? 95% of the data on my Linkstation LS-WXL/R1 is backup. I just put a few folders of images on there that I need to get back. I'm already preparing to format the disks when I'm done and re-building the RAID1 array from scratch. Any advice would be appreciated.
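
    Imaging the partition and working on the copy is a reasonable route. A hedged sketch, assuming one member disk shows up as /dev/sdb and the large data partition is /dev/sdb6 (these Linkstations generally keep the data on a Linux md RAID member, so starting the single member as a degraded array read-only may also work); the device names are assumptions, so check dmesg/fdisk output first:

      # image the data partition onto some roomy storage
      sudo dd if=/dev/sdb6 of=/mnt/space/ls-data.img bs=4M conv=noerror,sync

      # option 1: start the single member as a degraded array, read-only, and mount it
      sudo mdadm --assemble --run --readonly /dev/md0 /dev/sdb6
      sudo mount -o ro /dev/md0 /mnt/recovery

      # option 2: loop-mount the image directly if the filesystem starts at offset 0
      sudo mount -o loop,ro /mnt/space/ls-data.img /mnt/recovery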

    Read the article

  • Explorer.exe not starting after login on Windows Server 2003 (Terminal Services and console)

    - by Pepperoni Icecream
    When users login to a Windows Server 2003 R2 running Terminal Services they have a blank desktop. Upon inspection, explorer.exe is not running. When I login as administrator, using either RDP or to the console, I am having the same issue. I can pull up the taskman and start explorer.exe manually. I have another Terminal Server setup exactly the same way (same apps, settings, GPO, etc . . .) the only difference is we deployed Symantec Endpoint Client 11.0.5 on Friday. For some reason the working Terminal Server is still on 11.0.4, but the suspect server received the 11.0.5 client upgrade. I checked the eventviewer for any relevant explorer.exe entries to no avail. It seems that if SEP is preventing explorer.exe from starting at login it would do the same for the domain admin starting explorer.exe from the taskman. I disabled the SEP client and services on the server and issued smc -stop and tried logging in again. Still no explorer.exe. So I'm not sure if the client upgrade is relevant but it is worth mentioning since that was the last system change. The 2 servers are members of a NLB group. I took the bad terminal server out of the group until the issue is resolved. Actually stopped the host using NLB manager Any help is appreciated.
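
    Since SEP was the last change, one cheap thing to rule out is the Winlogon Shell value, which some security products and their cleanup routines rewrite; explorer.exe only starts automatically at logon if that value is intact. A hedged check (expected: "explorer.exe" under HKLM and normally no Shell value at all under HKCU):

      reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v Shell
      reg query "HKCU\Software\Microsoft\Windows NT\CurrentVersion\Winlogon" /v Shell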

    Read the article

  • Lotus Domino - DAOS not reducing file size?

    - by SydxPages
    I have implemented DAOS on a Lotus Domino Server (8.5.3 FP2) as follows: Lotus Domino Server Document: Store file attachments in DAOS: Enabled Minimum size of object before Domino will store in DAOS: 64000 bytes DAOS base path: E:\DAOS Defer object deletion for: 30 days Transaction logging is running, and the specific test database has the following advanced properties set: Domino Attachment and Object Service (ticked) Use LZ1 compression for atachments Compress Database Design Compress Data I have restarted the server. When I run a compact -c, it compacts the database, but does not reduce the size. I have checked the DB in Windows Explorer (60Gb) and the size is the same pre and post. I have checked the directory (E:\DAOS) and it is 35Gb in size. When I run the command 'Tell DAOSMgr Status tmp\test.nsf', I get the following response. From looking up on the net, I believe ticket count = 0 means that the db is not really DAOS'ed? Admin Process: Searching Administration Requests database DAOSMGR: Status tmptest.nsf started DAOS database status: Database: E:\Lotus\Domino\Data\tmp\test.nsf Database state = Synchronized Last resynchronized: 03/09/2012 02:49:13 PM Ticket count: 0 DAOSMGR: Status tmp\test.nsf completed I have run fixup on the database. When I have tried to run the DAOS estimator it has always crashed. This was a problem with larger databases on earlier versions of domino, but not anymore. Can anyone tell me why the size has not reduced? Am I missing anything?

    Read the article

  • SSL setup: UCC or wildcard certificates?

    - by quanza
    I've scoured the web for a clear and concise answer to my SSL question, but to no avail. So here goes: I have a web-service requiring SSL support for authentication pages. The root-level domain does not have the "www" - i.e., secure://domain.com - but localized pages use "language-code.domain.com", i.e. secure://ja.domain.com. So I need at least a wildcard SSL certificate that supports secure://*.domain.com. However, we also have a public sandbox environment at sandbox.domain.com, which we also need to support under localized domains - so secure://ja.sandbox.domain.com needs to also work. The previous admin managed to purchase a wildcard SSL certificate for *.domain.com, but with a Subject Alternative Name for "domain.com". So, I'm thinking of trying to get a wildcard certificate with SANs defined as "domain.com" and "*.*.domain.com". But now I'm getting confused because there seem to be separate SAN certificates, also called UCC certificates. Can someone clarify whether it's possible to get a wildcard certificate with additional SAN fields, and ultimately what the best way is to support: secure://domain.com, secure://*.domain.com and secure://*.*.domain.com with the fewest (and cheapest!) number of SSL certificates? Thanks!
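
    Wildcard and SAN are not mutually exclusive: many CAs will issue a single certificate whose SAN list mixes a bare name and wildcards, so SANs of domain.com, *.domain.com and *.sandbox.domain.com on one certificate would cover every case above (subject to the CA allowing wildcard SAN entries, which is the thing to confirm before buying). A hedged OpenSSL sketch for generating such a CSR; the names are from this question, the rest is boilerplate:

      # openssl.cnf sketch
      [ req ]
      distinguished_name = dn
      req_extensions     = san
      prompt             = no

      [ dn ]
      CN = domain.com

      [ san ]
      subjectAltName = DNS:domain.com, DNS:*.domain.com, DNS:*.sandbox.domain.com

      # openssl req -new -key domain.key -out domain.csr -config openssl.cnf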

    Read the article

  • Two hosted servers, one public - VPN?

    - by Aquitaine
    Hello there, Web developer here who has to occasionally wear a system & network admin hat (small company). We currently have a single hosted server running Windows Server 2003 that runs both our web server (IIS/Coldfusion) and our database server (SQL Server 2008). We lock down the SQL server by allowing only specific IPs to connect to it. Not ideal but it's worked thus far. We're moving up to two distinct servers and I want to take the opportunity to 'get things right' and make only the web server face the public. What I need to be able to do is to allow only a handful of people to connect to the database server. Rather than using an IP allow list, I'd prefer to use a VPN to let people through so that access is based on the user and not simply the user's location. I'm leaning toward something like OpenVPN, just so I can stick with Server 2008 Web edition. Do I: Use the web server as a VPN server and set up the database server to only accept connections from the web server? Is there an extra step required to make connections to, say, db.mycompany.com route through the VPN rather than through a different connection? I'm ignorant of this part of network infrastructure stuff. Or, Set up a VPN server on the database server as the only public-facing server connection so that there aren't any routing issues to deal with? I know this is Network 101 stuff but I thought I'd ask before just blundering through it since it could affect the company a bit. Thanks very much!
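
    Whichever VPN shape you choose, the database box can also be fenced at the firewall so that only the web server and the VPN address pool can reach SQL Server at all. A hedged sketch with Server 2008's built-in firewall; the web server address and VPN subnet below are placeholders:

      rem allow 1433 only from the web server and the VPN pool; the default inbound policy blocks the rest
      netsh advfirewall firewall add rule name="SQL from web and VPN" dir=in action=allow protocol=TCP localport=1433 remoteip=203.0.113.10,10.8.0.0/24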

    Read the article

  • how to correctly mount fat32 partition in Ubuntu in order to preserve case

    - by Dean
    I've found a couple of problems that might be related to how my FAT32 partition was mounted. I hope you can help me solve the problem. I also included the commands I used, to help others when they find this post; sorry to those who might feel I should use less space. I have the following file structure on my disk:
      dean@notebook:~$ sudo fdisk -l
      Disk /dev/sda: 160.0 GB, 160041885696 bytes
      255 heads, 63 sectors/track, 19457 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Disk identifier: 0x08860886
         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1   *           1          13      102400    7  HPFS/NTFS
      Partition 1 does not end on cylinder boundary.
      /dev/sda2              13        5737    45978624    7  HPFS/NTFS
      /dev/sda3            5738       10600    39062047+  83  Linux
      /dev/sda4           10601       19457    71143852+   5  Extended
      /dev/sda5           10601       11208     4883728+  82  Linux swap / Solaris
      /dev/sda6           11209       15033    30720000    b  W95 FAT32
      /dev/sda7           15033       19457    35537920    7  HPFS/NTFS
    In /etc/fstab I've got:
      UUID=91c57a65-dc53-476b-b219-28dac3682d31 / ext4 defaults 0 1
      UUID=BEA2A8AFA2A86D99 /media/NTFS ntfs-3g quiet,defaults,locale=en_US.utf8,umask=0 0 0
      UUID=0C0C-9BB3 /media/FAT32 vfat user,auto,utf8,fmask=0111,dmask=0000,uid=1000 0 0
      /dev/sda5 swap swap sw 0 0
      /dev/sda1 /media/sda1 ntfs nls=iso8859-1,ro,noauto,umask=000 0 0
      /dev/sda2 /media/sda2 ntfs nls=iso8859-1,ro,noauto,umask=000 0 0
    I checked my id using id and I've got:
      dean@notebook:~$ id
      uid=1000(dean) gid=1000(dean) groups=4(adm),20(dialout),24(cdrom),46(plugdev),103(fuse),104(lpadmin),115(admin),120(sambashare),1000(dean)
    I don't know why, with these settings, I still have the problem of using svn like in this one. Thank you for your help!
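
    If the case problems are with short (8.3-style) names, the vfat option that usually matters is shortname=; a hedged tweak to the existing FAT32 line, with everything else kept as-is. Note that FAT32 itself remains case-preserving but case-insensitive on lookup, which some tools (svn among them) can still trip over regardless of mount options.

      UUID=0C0C-9BB3 /media/FAT32 vfat user,auto,utf8,shortname=mixed,fmask=0111,dmask=0000,uid=1000 0 0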

    Read the article

  • ssh use with netcat to forward connections via bastion host to inside machine

    - by Registered User
    Hi, I have a server in a corporate data centre whose sys admin is me. There are some virtual machines running on it. The main server is accessible from the internet via SSH. There are some people who, within the LAN, access the virtual machines whose IPs on the LAN are 192.168.1.1 192.168.1.2 192.168.1.3 192.168.1.4 The main machine, which is a bastion host for the internet, has IP 192.168.1.50 and only I have access to it. I have to give people on the internet access to the internal machines whose IPs I mentioned above. I know a tunnel is a good way, but the people are fairly non-technical and do not want to get into tunnels and similar jargon. So I came across a solution as explained on this link. On the gateway machine, which is 192.168.1.50, in the .ssh/config file I add the following Host securehost.example.com ProxyCommand ssh [email protected] nc %h %p Now my question is do I need to create separate accounts on the bastion host (gateway) for those users who can SSH to the inside machines, and in each of the users' .ssh/config I need to make the above entry, or where exactly do I put the .ssh/config on the gateway? Also ssh [email protected] where user1 exists only on the inside machine 192.168.1.1 and not on the gateway, is that the right syntax? Because the internal machines are accessible to the outside world as site1.example.com site2.example.com site3.example.com site4.example.com But SSH is only for example.com and only one user. So how should I go about .ssh/config? 1) What is the correct syntax for ProxyCommand in the gateway's .ssh/config: should I use ProxyCommand ssh [email protected] nc %h %p or should I use ProxyCommand ssh [email protected] in nc %h %p 2) Should I create new user accounts on the gateway, or is adding them in AllowedUsers in ssh_config sufficient?
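
    A hedged sketch of the usual layout: the ProxyCommand stanza lives in each outside user's own ~/.ssh/config on their laptop (not on the gateway), and the two usernames are independent, the outer one being the account on the bastion and the inner one the account on the internal VM. The host alias and gateway account name below are assumptions:

      # ~/.ssh/config on the external user's machine
      Host site1.example.com
          HostName 192.168.1.1
          User user1
          ProxyCommand ssh gatewayuser@example.com nc %h %p

      # after which a plain "ssh site1.example.com" makes two hops:
      # gatewayuser authenticates on the bastion, user1 on 192.168.1.1

    On the second question: each outside person would then need some account on the gateway for the outer hop (it can be a restricted account allowed to do nothing but the forwarding), and limiting it via AllowUsers in sshd_config is a sensible extra.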

    Read the article

  • CentOS security for lazy admins

    - by Robby75
    I'm running CentOS 5.5 (basic LAMP with Parallels Power Panel and Plesk) and have thus far neglected security (because it's not my full-time job, there is always something more important on my to-do list). My server does not contain any secret data and also no lives depend on it - basically what I want is to make sure it does not become part of a botnet; that is "good enough" security in my case. Anyway, I don't want to become a full-time paranoid admin (like constantly watching and patching everything because of some obscure problem), and I also don't care about most security problems like DOS attacks or problems that only exist when using some arcane settings. I'm in search of a "happy medium", for example a list of known important problems in the default installation of CentOS 5.5 and/or a list of security problems that have actually been exploited - not the typical endless list of buffer overflows that are "maybe" a problem in some special case. The problem that I have with the usually recommended approaches (joining mailing lists, etc.) is that the really important problems (something where an exploit exists, that is exploitable in a common setup and where the attacker can do something really useful - i.e. not a DOS) are completely and utterly swamped by millions of tiny security alerts that surely are important for high-security servers, but not for me. Thanks for all suggestions!
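
    For the "just don't become a botnet node" bar, unattended updates plus a couple of sshd defaults cover most of the realistic risk on a LAMP box. A hedged sketch for CentOS 5; yum-cron is an assumption, and if it is not in your repos the stock yum-updatesd daemon does the same job:

      # automatic updates
      yum install yum-cron
      chkconfig yum-cron on
      service yum-cron start

      # /etc/ssh/sshd_config basics, then "service sshd restart":
      #   PermitRootLogin no
      #   PasswordAuthentication no    # keys only, if practical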

    Read the article

  • How to copy a bunch of pages? Is there a 3rd party tool?

    - by unknown (yahoo)
    (I asked the following question at the DNN forum, and also at snowcovered. Nobody knew of such an obvious time-saver being for sale. I'm posting here in case anybody knows of a freeware module that might do this.) By "groups of DNN pages", I mean pages that form a hierarchy (not necessarily a hierarchy that is headed by a page at the same level as the Home page). I know that I can copy web pages, one by one, using the admin login via the web-based DNN interface. But I'd prefer a script or wizard of some sort (that runs scripts behind the scenes) that can allow me to:
      1) specify a web page that I want to copy (along with the hierarchy of pages under it)
      2) specify the names and titles of the new top-level pages
      3) specify whether the contained modules of the top-level page that I want to copy are to be: ( ) New ( ) Copy ( ) Reference (as in the web-based interface)
      4) repeat 3) for each of the source pages in the hierarchy that I want to copy
    You might say that I am looking to do something similar to creating a portal web site based on a template, except that it's not an entirely new website - instead it's a section of the current web site. I might want to do this because I have an organization which is broken into chapters, and I want each chapter to have, say, its own General Information page (which acts like its home page), and underneath that, in its hierarchy, a Contact Info page and an Events page. So, going from:
      Home Page
        General Information Page
          Contact Info
          Events
    to:
      Home Page
        General Information Page
          Contact Info
          Events
        General Information Page Kiwanis - Bloomfield
          Contact Info
          Events
        General Information Page Kiwanis - Dayton
          Contact Info
          Events
    If I have 200 chapters, I certainly don't want to copy those 3 web pages using the web-based interface, as that would take a long time. (And imagine if each chapter's new sub-website had 30 pages!) I just want to specify the parameters of a copy process, then press a button, and let the system do the rest.

    Read the article

  • Total newb having SSH and remote MySQL access problems

    - by kscott
    I don't often work with linux or need to SSH into remote MySQL databases, so pardon my ignorance. For months I had been using the HeidiSQL client application to remotely access a MySQL database. Today two things happened: the DB moved to a new server and I updated HeidiSQL, now I cannot log in to the MySQL server, when attempting I get this message from Heidi: SQL Error (2003) in statement #0: Can't connect to MySQL server on 'localhost' (10061) If I use Putty, I can connect to the server and get MySQL access through command line, including fetching data from the DB. I assume this means my credentials and address are correct, but do not understand why putting those same details into HeidiSQL's SSH tunnel info won't work. I also downloaded the MySQL Workbench and attempted to set up a connection through that client and got this message: Cannot Connect to Database Server Your connection attempt failed for user 'myusername' from your host to server at localhost:3306: Lost connection to MySQL server at 'reading initial communication packet', system error: 0 Please: 1 Check that mysql is running on server localhost 2 Check that mysql is running on port 3306 (note: 3306 is the default, but this can be changed) 3 Check the myusername has rights to connect to localhost from your address (mysql rights define what clients can connect to the server and from which machines) 4 Make sure you are both providing a password if needed and using the correct password for localhost connecting from the host address you're connecting from From Googling around I see that it could be related to the MySQL bind-address, but I am a third party sub-contractor with no access to the MySQL settings of this box and the system admin is assuring me that I'm an idiot and need to figure it out on my end. This is completely possible but I don't know what else to try. Edit 1 - The client settings I am using In Heidi and MySQL Workbench I am using the following: SSH host + port: theHostnameOfTheRemoteServer.com:22 {this is the same host I can Putty to} SSH Username: mySSHusername {the same user name I use for my Putty connection} SSH Password: mySSHpassword {the same password for the Putty connection} Local port: 3307 MySQL host: theHostnameOfTheRemoteServer.com MySQL User: mySQLusername {which I can connect with once in with Putty} MySQL Password: mySQLpassword {which works once in with Putty} Port: 3306
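
    One hedged thing to check: when the connection goes through an SSH tunnel, the "MySQL host" field should be what the server calls itself, normally 127.0.0.1, not the public hostname, because a MySQL bound or firewalled to localhost will refuse anything else, which matches the 2003/10061 error. A quick way to prove the tunnel works outside of HeidiSQL, using the same credentials from the question:

      ssh -L 3307:127.0.0.1:3306 mySSHusername@theHostnameOfTheRemoteServer.com
      # in another window, point any MySQL client at the local end of the tunnel:
      mysql -h 127.0.0.1 -P 3307 -u mySQLusername -p

    If that works, setting HeidiSQL's MySQL host to 127.0.0.1 (with the SSH tunnel settings otherwise unchanged) is the likely fix.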

    Read the article
