Search Results

Search found 19981 results on 800 pages for 'sibling control'.

  • r1soft agent is failing with the error: "write error while sending code: Broken pipe"

    - by curiousguy
    I have an Ubuntu 10.04.4 LTS server with the r1soft agent installed. Recently the backups have been failing with the following error:

        write error while sending code: Broken pipe

    I have reinstalled buagent, but to no avail. Checking the server logs, I can see the following errors:

        # tail -f /var/log/messages | grep -i buagent
        Nov 17 03:35:06 microscope buagent: Need to back up 126 sectors
        Nov 17 03:35:06 microscope buagent: (Righteous Backup Linux Agent) 1.79.0 build 12433
        Nov 17 03:35:06 microscope buagent: allowing control from backup server (10.128.136.195) with valid RSA key
        Nov 17 03:35:06 microscope buagent: allowing control from backup server (10.128.136.201) with valid RSA key
        Nov 17 03:35:06 microscope buagent: sending auth challenge for allowed host at (10.128.136.201) port (47890)
        Nov 17 03:35:06 microscope buagent: host (10.128.136.201) port (47890) authentication successful
        Nov 17 03:35:06 microscope buagent: Backup request accepted. Starting backup.
        Nov 17 03:35:06 microscope buagent: Snapshot completed in 0.010 seconds.
        Nov 17 03:45:03 microscope buagent: Error reading blocks from snapshot.
        Nov 17 03:45:03 microscope buagent: Reading blocks failed
        Nov 17 03:45:03 microscope buagent: error backup aborted
        Nov 17 03:45:03 microscope buagent: backup failed on agent closing connection
        Nov 17 03:45:03 microscope buagent: Backup failed.
        Nov 17 03:45:03 microscope buagent: write error while sending code: Broken pipe (32)
        Nov 17 03:45:03 microscope buagent: tell child write failed

    I tried changing the 'Timeout' and 'DiskAsPartition' values in '/etc/buagent/agent_config', but no luck. I have also verified that a proper route to the backup server is in place, and the agent itself is running fine. Am I missing anything? Any help would be much appreciated. Note: CDP 2.0 is installed on the backup server.
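
    For reference, a sketch of what adjusting the two settings named above might look like; the key/value syntax, the values, and the init script name are all assumptions on my part (check the comments shipped in the file), not r1soft documentation:

        # /etc/buagent/agent_config (illustrative only)
        Timeout 600              # seconds to wait on slow snapshot reads
        DiskAsPartition false

        # restart the agent so any change takes effect
        sudo /etc/init.d/buagent restart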

  • Remote search system for samba shares

    - by fostandy
    I have several shares residing on a Samba server in a small business environment that I would like to provide search facilities for. Ideally this would be something like Google Desktop with some extra features (see below), but lacking that, the idea is to take what I can get, or at least get an idea of what is out there. Using Google Desktop Search as a reference model, the principal additional requirement is that it is usable from clients over the network. Some further notes (none of these are hard requirements):

    - The content is always files, residing on a single server, accessible from Samba shares.
    - Standard MS Office document fare, plus a lot of rars and zips which it is necessary to search inside.
    - Permissions support, allowing user-based control to reflect the current permission setup on the Samba shares. The userbase will remain fairly static, so manual management of users is fine.
    - The majority of users will be Windows-based.

    I know there are plenty of search indexers out there: Beagle and Tracker seem to be the most popular. Most do not seem to offer access control, and web-based/remote search does not seem to be a high priority. I've also seen a recent post on the Samba mailing list asking for pretty much the exact same thing. (They mention a product called IBM OmniFind Yahoo! Edition, and while their initial reception seems positive, I am pretty skeptical. RHEL 4? Firefox 2? Updated much?) What else is out there? Are you in a similar situation? What do you use?

  • iPhone Remote with iTunes Library via VPN

    - by sudo work
    Alright, so I'm currently behind a network router (not under my control). The router performs NAT and somehow prevents a computer from scanning other nodes; at least, you're unable, in this instance, to locate an iTunes library. You can, however, communicate with a node's open ports if its local IP address and the port are known. I haven't actually tried port scanning a specific IP using nmap or another tool yet. So I tried one solution that removes the router's contribution entirely (to verify that things work without its influence): I set up an access point using my iPhone and tethered my computer (with the library) to it. From there, I was able to pair my library with the iPhone Remote application, and control of the library worked normally. This solution is not ideal, however, because I am actively using bandwidth with my computer and cannot afford to stay tethered to my 3G connection. A viable solution for me is to use a common VPN connection, which I have set up on a remote Ubuntu (Intrepid) server. Both my computer and iPhone are able to access the VPN via PPTP. The server runs pptpd as the VPN server, and I'm using iptables to perform IP masquerading and traffic forwarding. However, I still cannot connect the library to the phone, even though I can see both devices on the VPN subnet (192.168.0.0/24); SSH'ing and such works fine. What settings on the VPN server must I change to get this to work? Also, how can I assign static IP addresses to various PPTP clients based on MAC addresses?
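
    On the static-address question: pppd reads per-user addresses from /etc/ppp/chap-secrets, and PPP carries no client MAC address across the tunnel, so assignment is per username rather than per MAC. A minimal sketch with placeholder names and secrets (the server column must match the "name" directive in /etc/ppp/pptpd-options):

        # /etc/ppp/chap-secrets   (format: client  server  secret  IP)
        # pptpd hands each authenticated user the fixed address in column 4
        iphone    pptpd    "changeme1"    192.168.0.10
        laptop    pptpd    "changeme2"    192.168.0.11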

  • DFSR NTFS Permissions Not Working!??!

    - by megadood
    I have two Windows 2008 Standard servers running DFSR okay: I can create a file on one server and it is replicated to the other, etc. The namespace shared folder on each server is shared with Full Control for Administrators and Change/Read permissions for Everyone. I then browse to the folder on server 1, e.g. \\server1\namespace\share\folder1, right-click the folder, and configure the NTFS permissions as I would like, for example: Administrators Full Control, one user with Read/Write access, and no other users in the list. I save this and then double-check the second server, e.g. \\server2\namespace\share\folder1: right-clicking the same folder name as before, I can see the NTFS permissions have replicated accordingly. I then go to Properties - Security - Advanced - Effective Permissions and select a user that shouldn't be able to get into that folder, e.g. testuser. It agrees with the NTFS permissions and shows that testuser has no ticks next to any permissions, so access should be denied. But when I log on to any network PC or the server as testuser and browse to \\server1\namespace\share\folder1, it lets me straight in with no access-denied message. The same applies to server2. It seems as though all my NTFS permissions are being ignored. I have one DFS share, and the subfolders are a mixture of private and public folders, so I need the NTFS permissions to work. Any idea what's going on? Is this normal? From my tests, all users can access any DFSR folder under namespace\share, which is quite worrying. Thanks
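
    A quick way to compare what each member actually enforces, and to re-apply a cascading entry, is icacls from an elevated prompt; a sketch in which the local path and account names are illustrative, not taken from the question:

        :: dump the ACL as each member serves it
        icacls \\server1\namespace\share\folder1
        icacls \\server2\namespace\share\folder1

        :: a grant on the member's local path that inherits to subfolders and files
        icacls D:\DFSRoots\share\folder1 /grant "DOMAIN\oneuser":(OI)(CI)M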

  • Why is my RapidSSL certificate chain not trusted on Ubuntu?

    - by olouv
    I have a website that works perfectly with Chrome and other browsers, but I get some errors with PHP in CLI mode, so I'm investigating by running:

        openssl s_client -showcerts -verify 32 -connect dev.carlipa-online.com:443

    Quite surprisingly, my HTTPS appears untrusted, with "Verify return code: 27 (certificate not trusted)". Here is the raw output:

        verify depth is 32
        CONNECTED(00000003)
        depth=2 C = US, O = GeoTrust Inc., CN = GeoTrust Global CA
        verify error:num=20:unable to get local issuer certificate
        verify return:1
        depth=2 C = US, O = GeoTrust Inc., CN = GeoTrust Global CA
        verify error:num=27:certificate not trusted
        verify return:1
        depth=1 C = US, O = "GeoTrust, Inc.", CN = RapidSSL CA
        verify return:1
        depth=0 serialNumber = khKDXfnS0WtB8DgV0CAdsmWrXl-Ia9wZ, C = FR, O = *.carlipa-online.com, OU = GT44535187, OU = See www.rapidssl.com/resources/cps (c)12, OU = Domain Control Validated - RapidSSL(R), CN = *.carlipa-online.com
        verify return:1

    So GeoTrust Global CA appears to be untrusted on the system (Ubuntu 11.10). I added Equifax_Secure_CA to try to solve this, but then I get "Verify return code: 19 (self signed certificate in certificate chain)"! Raw output:

        verify depth is 32
        CONNECTED(00000003)
        depth=3 C = US, O = Equifax, OU = Equifax Secure Certificate Authority
        verify error:num=19:self signed certificate in certificate chain
        verify return:1
        depth=3 C = US, O = Equifax, OU = Equifax Secure Certificate Authority
        verify return:1
        depth=2 C = US, O = GeoTrust Inc., CN = GeoTrust Global CA
        verify return:1
        depth=1 C = US, O = "GeoTrust, Inc.", CN = RapidSSL CA
        verify return:1
        depth=0 serialNumber = khKDXfnS0WtB8DgV0CAdsmWrXl-Ia9wZ, C = FR, O = *.carlipa-online.com, OU = GT44535187, OU = See www.rapidssl.com/resources/cps (c)12, OU = Domain Control Validated - RapidSSL(R), CN = *.carlipa-online.com
        verify return:1

    Edit: it looks like my server does not trust/provide the Equifax Root CA, even though I do have the file in /usr/share/ca-certificates/mozilla/Equifax...
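
    Worth noting: openssl s_client only trusts what it is explicitly pointed at. A hedged pair of checks (file names here are illustrative): verify against the system bundle, and, if a root really is missing from the store, install it the Ubuntu way so update-ca-certificates regenerates the bundle:

        # verify against the system-wide bundle explicitly
        openssl s_client -CAfile /etc/ssl/certs/ca-certificates.crt \
            -connect dev.carlipa-online.com:443

        # add a missing root: the file must be PEM with a .crt extension here
        sudo cp GeoTrust_Global_CA.pem /usr/local/share/ca-certificates/geotrust-global-ca.crt
        sudo update-ca-certificates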

  • Pushing updates to live server... FTP isn't cutting it... a better method?

    - by Jenkz
    I'm the lead developer in a team of 2. My partner has only just joined the project, and despite using Git for version control etc., we are still stuck in the dark ages when it comes to code deployment. Currently I make all site updates via FTP using FileZilla (this way I have control over, and responsibility for, everything that goes live). I've done this for years, but we now have some large PHP classes (300KB) and a lot of traffic. In short, every time I upload a key class, "general" for example, the site goes down until the file finishes uploading. This is only 5-6 seconds at a time, but it is increasingly unacceptable. I realise I can upload the file under a different name and then rename both files... but really, there must be a better way? I've heard about rsyncing code across from another server, but I don't see how that avoids the same mid-upload switch problem. We only have one server (for DB and Apache) but also use some cloud servers (for OpenX, as an example).
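
    One common pattern, sketched here with made-up paths rather than prescribed ones: rsync each release into its own directory, then atomically repoint a symlink that Apache's DocumentRoot resolves through. rsync itself also writes each file to a temporary name and renames it into place, so no request ever reads a half-written class:

        # push the new build into a fresh, timestamped release directory
        rsync -az --delete build/ deploy@server:/var/www/releases/2012-06-29/

        # swap the 'current' symlink in one atomic rename
        ssh deploy@server 'ln -s /var/www/releases/2012-06-29 /var/www/current.new \
            && mv -T /var/www/current.new /var/www/current'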

  • win8: access denied to external USB disk; update access rights fails

    - by Gerard
    I used to work with two laptops (Vista and Win7), my work being files on an external USB disk. My oldest laptop broke down, so I bought a new one, and had no option other than to take Win8.

    1. I suspect something changed with access rights, as my external disk suffered some "access denied" problems on Win8. I was prompted by Win8 to fix the access rights, which I tried to do via Properties - Security. This process was very slow and ended up saying "disk is not ready". Additionally, the USB disk was somehow no longer recognized.

    2. Back on Win7, I was warned that my disk needed to be verified, which I did. In this process some files were lost (most of them I could recover from the found.00x folders, and I have a backup anyway). Also, I don't know why, but under Win7 all the folders now show with a lock icon.

    3. Then back again to Win8: the same problem, access denied to my disk, plus no way to change access rights, as it gets stuck on "disk is not ready".

    Now I am pretty sure there is some kind of bug or inconsistency between Win8 and Win7. I repeated steps 2 and 3 a few times. At some point I also got an access denied error in Win7. I could restore access rights on the disk for SYSTEM (Properties - Security - Edit, granting Full Control to the SYSTEM group), but I still get the same access-rights problem on Win8, and get stuck in the process of restoring Full Control to the SYSTEM and Administrators groups. After trying for more than 3 days, I am losing my patience with Win8, which I did not want to buy but had no choice. I have applied all available Windows updates; that does not help. Can anybody help me?
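
    A command-line route around the stalling Security dialog, assuming the disk mounts as drive E: (the drive letter is illustrative; run from an elevated prompt, and note that /reset discards every custom ACL on the volume):

        takeown /F E:\ /R /D Y
        icacls E:\ /reset /T /C
        icacls E:\ /grant SYSTEM:(OI)(CI)F Administrators:(OI)(CI)F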

  • iSCSI SAN Implementation with several ESXi hosts and two Equallogic SANs

    - by Sergey
    I work for a small state college. We currently have 4 ESXi hosts (all made by Dell), 2 EqualLogic SANs (a PS4000 and a PS4100) and a bunch of old HP ProCurve switches. The current setup is very far from being redundant and fast, so we want to improve it. I have read several threads but got even more confused. The ProCurve switches are 2824s. I know they don't support jumbo frames and flow control at the same time, but we have plans to upgrade to something like the ProCurve 3500yl. Any suggestions? I hear the Dell PowerConnect 6xxx series is pretty good, but I'm not sure how it compares to the HPs. There will be a 4-port EtherChannel (link aggregation) between the switches, and all control modules on the SANs will be connected to different switches. Is there anything that would make the setup better? Are there better switches than the ProCurve 3500yl that cost less than $5k? What kind of bandwidth can I expect between the ESXi hosts (they will also be connected to the 2824s with multiple cables) and the SANs?

  • Change the Default Date setting in Word 2010

    - by Chris
    I am using Word 2010 and Windows 7. You know how, when you start typing a date in Word, it will automatically suggest what it thinks you want? If I start typing "6/29", a little grey bubble will display "6/29/13 (Press ENTER to Insert)". How do I get the bubble to display the year in a 4-digit format, such as "6/29/2013 (Press ENTER to Insert)"? (The original post included a screenshot of this bubble.) I have already gone to the Date & Time option under the Insert menu, and the date format I want is already selected; I think that only applies to Quick Parts anyway, so that the date automatically updates when you open a document. The Region and Language settings in the Control Panel are correct as well. I thought at one point I had found the setting somewhere under Options, but I have since looked through everything many times and can't find it. I posted this exact question on the Microsoft website and someone replied: "Go to the Windows Control Panel, click on Clock, Language and Region, then on Change the date, time, or number format, and modify the Short date format so that it is what you want used." So please don't suggest this again, because as I said above, I already tried it and it doesn't work, at least not for Word in this situation. Thanks.

  • How to monitor process hogging CPU on a remote server

    - by sergeb
    I have a remotely hosted (virtual, VMware) dedicated server (Windows Server 2008 Web edition with SP1) that I can only connect to over Remote Desktop. Lately, a process hogs the CPU for ~40 minutes almost every day (at a random hour) and brings all the web sites on the server down. While this is going on, I also cannot connect via Remote Desktop to investigate what that process is... Promptly after the 40 minutes I can RD in, and the first thing I see in Performance Monitor is that something was topping the CPU at 100% and stopped just before I was able to connect... I know when these episodes begin and end, because I have monitors set up that email me the up/down status of the web sites, but I'm locked out while it's happening: I can't RD to the server until it's over, and by then it's too late to see the Task Manager/Process Explorer picture. What is the best way/tool to set up on the server to continuously monitor all processes, so that when this happens I can log in and "replay" it to find the process causing the trouble? (I have no control over the virtual/VMware setup, which is hosted by a third party, but I have essentially full control over my dedicated machine.) Thanks in advance!
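
    One built-in option is a circular Performance Monitor data collector that records per-process CPU continuously; after an episode, the resulting .blg log can be opened in Perfmon and scrolled back to the window of the spike. A sketch (the collector name, interval, and sizes are arbitrary choices, not requirements):

        logman create counter CpuHog -c "\Process(*)\% Processor Time" ^
            -si 00:00:15 -f bincirc -max 256 -o C:\PerfLogs\CpuHog
        logman start CpuHog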

  • Ubuntu server PPTPD with OS X clients Problems

    - by Nakedsteve
    I'm trying to get a PPTP server running on an Ubuntu server, but I've run into some issues. I followed this guide on how to set up pptpd on my server and everything went smoothly, but when I try to connect with my Mac, it gives me an error (the original post showed screenshots of the Mac error dialog and of the configuration). Does anyone have any idea as to what I'm doing wrong here? Update: here's what pptpd.log has to say about it:

        steve@debian:~$ sudo tail /var/log/pptpd.log
        sudo: unable to resolve host debian
        Sep 3 21:46:43 debian pptpd[2485]: MGR: Manager process started
        Sep 3 21:46:43 debian pptpd[2485]: MGR: Maximum of 11 connections available
        Sep 3 21:46:43 debian pptpd[2485]: MGR: Couldn't create host socket
        Sep 3 21:46:43 debian pptpd[2485]: createHostSocket: Address already in use
        Sep 3 21:46:56 debian pptpd[2486]: CTRL: Client 192.168.1.101 control connection started
        Sep 3 21:46:56 debian pptpd[2486]: CTRL: Starting call (launching pppd, opening GRE)
        Sep 3 21:46:56 debian pptpd[2486]: GRE: read(fd=6,buffer=204d0,len=8196) from PTY failed: status = -1 error = Input/output error, usually caused by unexpected termination of pppd, check option syntax and pppd logs
        Sep 3 21:46:56 debian pptpd[2486]: CTRL: PTY read or GRE write failed (pty,gre)=(6,7)
        Sep 3 21:46:56 debian pptpd[2486]: CTRL: Reaping child PPP[2487]
        Sep 3 21:46:56 debian pptpd[2486]: CTRL: Client 192.168.1.101 control connection finished

    My pptpd options are:

        asyncmap 0
        noauth
        crtscts
        lock
        hide-password
        modem
        debug
        proxyarp
        lcp-echo-interval 30
        lcp-echo-failure 4
        nopix
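
    One hedged observation on the log above: "createHostSocket: Address already in use" usually means something is already bound to TCP port 1723, often a stale pptpd instance left over from a previous start. A quick check:

        # see which process owns TCP port 1723
        sudo netstat -tlnp | grep 1723

        # if it's a stale pptpd, restart it cleanly
        sudo /etc/init.d/pptpd restart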

  • How do I resolve BSOD: PAGE_FAULT_IN_NONPAGED_AREA?

    - by Burnzy
    I have been troubleshooting this for a few days and cannot fix it. Computer specifications:

    - Mobo: ASUS Sabertooth X58 LGA 1366 Intel X58 SATA 6Gb/s USB 3.0 ATX Intel Motherboard
    - CPU: Intel Core i7 920 (Bloomfield) @ 2.67 GHz (no OC)
    - RAM: 6144MB
    - GPU: 2x NVIDIA GeForce GTS 250 1GB in SLI (SLI is not enabled at the moment anyway)
    - Drives: OCZ RevoDrive OCZSSDPX-1RVD0120 PCI-E x4 120GB PCI Express MLC Internal SSD [RAID-0] (I know this could potentially cause trouble, but I had the BSOD before using this drive); Seagate Barracuda 7200.11 ST31500341AS 1.5TB 7200 RPM 32MB Cache SATA 3.0Gb/s 3.5" Internal Hard Drive

    (The original post linked to logs of two crashes, the second 30 minutes after the first; note that each names a different driver.)

    Some info:

    - Occurrence: it seems pretty random so far; I haven't noticed any pattern.
    - I ran the Windows Memory Diagnostic (it went smoothly at 1066 MHz).
    - As I said, it was still happening on my HDD, so when I bought the RevoDrive I installed a new OS on there and still got the error; I believe it happened when I had no drivers installed at that point (not 100% sure).
    - I changed the registry value HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\ClearPageFileAtShutdown to 1 (true).
    - I tried lowering the RAM clock even further, and made sure the RAM timings were set to the manufacturer's recommendations.
    - I verified that the motherboard is in good physical condition (it is, and it's brand new).

    One thing to note: when I got the new motherboard, I installed the new drivers WITHOUT formatting, and then removed the old motherboard drivers that I could remove from the Control Panel (pretty much the first thing I did). Could this cause an issue even ON THE OTHER drive (the RevoDrive)? Hopefully someone can help me; I am getting tired of spending so much money and not getting this to work correctly. If you need any other information, let me know. Thank you!
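
    Since the two dumps blame different drivers, one standard next step (my suggestion, not something from the post) is Driver Verifier, which stresses drivers so the next bugcheck names the faulting one directly; be prepared to boot into Safe Mode to turn it off if the machine crash-loops:

        :: stress all drivers with the standard checks; reboot to activate
        verifier /standard /all

        :: after the next crash has been captured, disable verification
        verifier /reset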

  • How can ICS in Windows 7 be managed via command line, scripts, config files, etc.?

    - by Skya
    I've been using ICS successfully for years, but now I'm looking for a way to control it through something other than the GUI in Control Panel\Network and Internet\Network Connections - Connection Properties: I want to do everything the ICS sharing checkbox does without touching the GUI. But what does the checkbox actually do? Microsoft doesn't provide specific information, and the most helpful forum post I've found is from 2003. Assuming some of that advice is still valid, I've come to the conclusion that ICS breaks down into 6 parts that have to be set up individually:

    - the SharedAccess service
    - interface settings
    - firewall rules
    - a static route
    - dnsproxy
    - autodhcp

    I've already learned that the service can be started/stopped with "net start/stop SharedAccess" and that netsh is a good tool for changing the interface settings and the firewall rules. But I don't understand how ICS handles routing and DNS. All hosts in my network are configured statically, so I don't care much about autodhcp. Thanks for your help! EDIT: I've spent the whole day scanning through ProcMon, and I've seen reads/writes to both the registry and the filesystem, but it is difficult to determine which of them actually make ICS work. I'm trying to look for an API instead. I'm looking into this right now, but I still want to know more about the inner workings.
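
    There is in fact a scriptable API behind that checkbox: the INetSharingManager COM interface, exposed via the HNetCfg.HNetShare ProgID. A PowerShell sketch; the connection name is an assumption, so substitute whatever ncpa.cpl shows:

        $mgr  = New-Object -ComObject HNetCfg.HNetShare
        # find the connection by its name in Network Connections
        $conn = $mgr.EnumEveryConnection | Where-Object {
            $mgr.NetConnectionProps.Invoke($_).Name -eq 'Local Area Connection'
        }
        $cfg = $mgr.INetSharingConfigurationForINetConnection.Invoke($conn)
        $cfg.EnableSharing(0)    # 0 = ICSSHARINGTYPE_PUBLIC, 1 = PRIVATE
        # $cfg.DisableSharing()  # equivalent to unchecking the box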

  • System fans connected to a Gigabyte Z77-D3H motherboard do not increase in speed

    - by Andrew
    The motherboard (Gigabyte Z77-D3H) controls my 3-pin CPU fan just fine. My system fans are a 3-pin fan (plugged into SYS_FAN1) and a 4-pin fan (plugged into SYS_FAN3). All 3 of the system fan headers are 4-pin, but the user manual states that SYS_FAN1 is really a 3-pin header (i.e. it can control the speed of a 3-pin fan) and the 4th pin is just a reserve. All my fans have a max speed of 2000 RPM. Normally all the fans run around 1000 RPM when I'm not doing anything intensive, which proves the motherboard can set the speed. However, when I run Folding@home and my temperatures increase (to around 70C), only the CPU fan ramps up to around 2000 RPM; the system fans stay around 1000 RPM. Through the BIOS I am able to disable system fan control, and the system fans then run at max RPM (meaning the motherboard was doing something). I've updated the BIOS to the latest version and tried out SpeedFan, but neither helped. What I'd like is for the system fans to ramp up their RPMs as needed. Any thoughts? Tl;dr: my system (case) fans, but not my CPU fan, are always stuck around 1000 RPM out of 2000, no matter the temperature.

  • Server Names Inside Private Network

    - by thyandrecardoso
    Our office has a private network, where any requests to a (pre-determined) public IP are forwarded to a private IP inside said network. On that private IP we've got a server running several services, including HTTP servers and SCM systems. We only control our private network and have no control over the public IP configuration. We bought a domain name and pointed it at that public IP so people can access our services from the outside. But when inside the office, people can't use that DNS name, because the server and every other host inside the network share the same public IP! For desktops inside the office network, dealing with names is really easy: one entry in the hosts file and we're done. However, for laptops that keep going in and out and need to access services inside the office, the naming is really annoying. I don't know the "standard" process for dealing with this kind of situation. I've considered installing BIND in the office and making people configure their wireless and wired connections to use that DNS server. What is the correct approach in this situation? If using BIND (or any other DNS server) is the answer, how should I configure it so that people inside the office can use it to resolve our custom names, and get forwarded to the ISP DNS when trying to reach the internet?
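
    A split-horizon BIND setup does exactly this: answer authoritatively for the office name with the private address, and forward everything else upstream. A minimal sketch with placeholder names, addresses, and file layout (distros split named.conf differently):

        // /etc/bind/named.conf (fragments)
        options {
            forwarders { 203.0.113.53; 203.0.113.54; };  // ISP resolvers
        };
        zone "example.com" {
            type master;
            file "/etc/bind/db.example.com";
        };

        ; /etc/bind/db.example.com -- point the public name at the private IP
        $TTL 3600
        @    IN SOA ns1.example.com. admin.example.com. (1 7200 900 1209600 3600)
        @    IN NS  ns1.example.com.
        ns1  IN A   192.168.1.10
        www  IN A   192.168.1.10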

  • Are web service handler chains possible under IIS / ASP.NET

    - by Mike
    I'm working with a client who wants me to implement a particular design in an IIS/ASP.NET environment. This design was already implemented in Java, but I am not sure it is possible using Microsoft technologies. In a Tomcat/Java environment one can create so-called handler chains. In essence, a handler runs on the server hosting the web service and intercepts the SOAP message coming to the web service. The handler can perform a number of tasks before passing control to the web service, some of which may relate to authentication and authorization. Moreover, handlers can be chained so that they run in a particular sequence before passing control to the web service. This is a very elegant solution, as certain aspects of authentication and authorization can be performed automatically, without the developers of the client application or of the web service having to invest anything in them: the code for the client application and the web service is not affected. You can find a number of articles on this subject by searching Google for "web service handler chain". I performed searches for web service handlers in IIS or ASP.NET; I get some hits, but apparently handlers in IIS have another meaning than that described above. My question therefore is: can handler chains (as available in Java and Tomcat) be created in IIS? If so, how (any article, book, forum...)? Either a negative or a positive answer will be greatly appreciated. Mike
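
    The closest IIS/ASP.NET equivalents are SoapExtension (for classic .asmx services) and the HTTP module pipeline: modules registered in web.config run in registration order, which yields the chain. A minimal sketch of one link in such a chain; the header check is purely illustrative:

        using System;
        using System.Web;

        // intercepts every request before the web service itself executes
        public class AuthCheckModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.BeginRequest += (sender, e) =>
                {
                    HttpContext ctx = ((HttpApplication)sender).Context;
                    if (ctx.Request.Headers["Authorization"] == null)
                    {
                        ctx.Response.StatusCode = 401;
                        ctx.Response.End();  // short-circuit: the service never runs
                    }
                };
            }

            public void Dispose() { }
        }

    Listing several such modules under <modules> in web.config gives the sequential, service-independent chain the Java design describes.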

  • AD server within another network - DNS issues

    - by Harry Muscle
    Here's a quick summary of the environment I support: we have a domain (domain A) with about 20 client computers. The domain server for domain A and all of its clients sit within the network infrastructure of a larger domain (domain B). All the computers get their network settings via DHCP from domain B's servers. I have no control over, and am unable to make changes to, anything to do with domain B. The problem is that currently, in order for my domain A clients to resolve the domain server and the shares on it, they have their DNS server address set to domain A's domain server (via the default GPO). Unfortunately, when a laptop (Windows or Mac) gets taken home, it is still looking for domain A's server as its DNS server and obviously can't access the internet correctly outside our environment. Ideally I need a solution where the machines use domain A's server for DNS when inside the office and whatever DNS server DHCP gives them when they are outside. However, since I have no control over the office DHCP server, I'm not sure how this can be accomplished. Any help and advice that anyone can offer is highly appreciated. Thanks, Harry. P.S. The solution I'm trying to find needs to require no involvement from the user.
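
    One half of the puzzle, sketched with placeholder addresses: make domain A's DNS server able to answer for everything while clients are inside, by forwarding anything it isn't authoritative for to domain B's (or the ISP's) resolvers, so pointing clients at it statically costs nothing in-office. The inside/outside switching for roaming laptops is a separate problem this does not solve:

        dnscmd DomainAServer /ResetForwarders 10.0.0.10 10.0.0.11
        dnscmd DomainAServer /Info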

  • Permissions won't cascade more than 1 level

    - by Jovin_
    Running Windows Small Business Server 2011, I have a file structure with a lot of subfolders (sometimes 5-6 levels deep). I have created access groups to grant access to my users, and also deny groups to deny access to others: "X Access" and "X Deny". These allow or deny access to a mapped network drive X:. On the server, I put in the groups with Full Control Allow for X Access and Full Control Deny for X Deny. I also ticked the box "Apply these permissions to objects and/or containers within this container only" and ensured that "Apply to:" is "This folder, subfolders and files". But for some reason the permissions only apply to the next level of folders and files. Example structure:

        X:
          Folder 1
            Folder 1a
          Folder 2
            Folder 2a

    If I apply the permissions to X:, they only reach Folder 1 and Folder 2, not 1a and 2a; I then need to manually apply the permissions to those too. Is this working as intended, or am I doing something wrong?
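
    icacls makes it easy to see how far an entry actually propagates: inherited entries are flagged (I), and container/object inheritance shows as (CI)/(OI). A sketch against the share's local path (the D:\Shares\X path is assumed; the group name is from the question):

        :: walk the tree and show every ACL
        icacls D:\Shares\X /t /c

        :: an allow entry that inherits to all subfolders and files
        icacls D:\Shares\X /grant "X Access":(OI)(CI)F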

  • WAMP starts Apache or Mysql, but not both?

    - by ladenedge
    When I install WAMP, the Apache and MySQL services are set to run as the LocalService user and all works well. However, because I need to access remote UNC paths in my PHP code, I need to run at least Apache as a user that exists on both the local host and the remote host; I'll call him WampUser. When both Apache and MySQL are set to start as WampUser, I cannot start both at the same time. If both are stopped, I can start either one successfully; when I then attempt to start the other, I get "Error 1053: The service did not respond to the start or control request in a timely fashion". This error appears immediately; there is no timeout. When at least one of the services is set to start as LocalService, both start fine. I can therefore solve my problem by setting Apache to WampUser and MySQL to LocalService, but I'm more interested in why this is happening in the first place, especially because the situation does not occur on other servers; something I've done to this server has made these two services exclusive when running as the same user. Some miscellaneous data points:

    - I am using Windows Server 2003.
    - I've granted WampUser recursive Full Control on the C:\wamp directory.
    - Nothing appears in the event log after the service fails.
    - No log entries appear in either the MySQL log or the Apache error log.
    - Neither application appears in the process list when the appropriate service is stopped.

    Any ideas?
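
    When a service dies before writing its own logs, running the same binaries in a console session as that user often surfaces the real error that the generic 1053 hides. A sketch; the binary paths vary by WAMP version, so treat them as placeholders:

        runas /user:WampUser cmd

        :: then, inside that shell:
        C:\wamp\bin\apache\apache2.2.22\bin\httpd.exe
        C:\wamp\bin\mysql\mysql5.5.24\bin\mysqld.exe --console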

  • Can't reset Windows 7 Registry permissions.

    - by n10i
    Hi all, I am trying to reset the Windows 7 registry permissions using:

        secedit /configure /cfg %windir%\inf\defltbase.inf /db defltbase.sdb /verbose /areas REGKEYS

    But I am receiving the following error: "An extended error has occurred. The task has completed with an error. See log %windir%\security\logs\scesrv.log for detail info." The content of the log file:

        -------------------------------------------
        Friday, April 16, 2010 1:50:43 PM
        ----Configuration engine was initialized successfully.----
        ----Reading Configuration Template info...
        ----Configure 64-bit Registry Keys...
        Configure users.default.
        Warning 5: Access is denied.
        Error taking ownership of users.default\software\SetID.
        Warning 5: Access is denied.
        Error opening users.default\software\SetID.
        Warning 5: Access is denied.
        Error setting security on users.default\software\SetID.
        Configure machine\software.
        Warning 5: Access is denied.
        Error setting security on machine\software.
        Warning 1336: The access control list (ACL) structure is invalid.
        Error setting security on machine\software\Macrovision.
        Configuration of Registry Keys was completed with one or more errors.
        ----Configure 32-bit Registry Keys...
        Configure machine\software.
        Warning 1336: The access control list (ACL) structure is invalid.
        Error setting security on machine\software\Audible.
        Configuration of Registry Keys was completed with one or more errors.
        ----Un-initialize configuration engine...

    Please help me, guys!
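
    The "Access is denied" warnings suggest that even an elevated administrator can't take ownership of those keys. One common workaround (my suggestion, not from the post) is to rerun the same command as SYSTEM using Sysinternals PsExec:

        :: open an interactive SYSTEM shell, then rerun secedit inside it
        psexec -s -i cmd.exe
        secedit /configure /cfg %windir%\inf\defltbase.inf /db defltbase.sdb /verbose /areas REGKEYS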

  • Prevent Network Printers Automatically being added to 'Devices and Printers' in Windows 7

    - by Ben
    I think my question is a duplicate of this one, but the original question was never properly answered (the steps described are for Windows XP). I am aware of the option to "Turn off network discovery" (under Control Panel > All Control Panel Items > Network and Sharing Center > Advanced sharing settings); I set this option (for both profiles) but it doesn't seem to stop the printers getting added, and it has the side effect of preventing me from browsing the list of machines on the network (which I need). I've tried the Windows XP registry option, but it doesn't seem to make any difference:

        HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced\NoNetCrawling

    I find it annoying having the printer list cluttered with printers from all over the office which I am never going to use (especially since many of them no longer physically exist; users just haven't deleted them from their machines). This must be a real problem for people in massive offices with large numbers of printers, but I can't seem to find a lot of people complaining about it, which makes me think I'm missing something obvious. I don't really want to hack the firewall or turn off sharing completely; I still want to select and use network printers and file shares. Any ideas?

  • How to set up QoS on ADSL router (terracom) for prioritizing browsing

    - by DBZ_A
    I want to configure the ADSL router which connects 10+ machines to the internet. I want to give maximum priority to browsing (ports 80, 443) and set low priority for BitTorrent etc. (port 42180). I have been experimenting with settings, but with no luck. There are three settings which confuse me; here is my understanding of each:

    - 802.1p priority: operates at the LAN level; possible values 0-7, with higher numbers meaning higher priority.
    - 'Mark traffic priority': clueless about this.
    - IPP/DS: IP Precedence takes values 0-7; 6 and 7 are reserved, so set 5 for highest priority. Or, when using DSCP, set 46 for highest priority.

    Please help me get this done... There is a similar question for another model of router, with fewer confusing options :) How to configure QoS on home router. Update: from a discussion in another thread, QoS can control only upstream traffic (from the router to the internet); while this may in turn affect the downstream rate, there is no direct control over data coming into the router.

  • mod_deflate doesn't work [closed]

    - by kikio
    I want to gzip my static files, so I put this in .htaccess:

        <IfModule mod_deflate.c>
            AddOutputFilterByType DEFLATE text/text text/html text/plain text/xml text/css application/x-javascript application/javascript
        </IfModule>

    I looked for mod_deflate in the Loaded Modules section of the phpinfo() output, and I found it. But when I track server responses with Firebug, no gzipped file can be found:

        HTTP/1.1 200 OK
        Date: Sat, 08 Sep 2012 21:41:21 GMT
        Last-Modified: Sat, 08 Sep 2012 21:26:04 GMT
        Accept-Ranges: bytes
        Cache-Control: max-age=604800
        Expires: Sat, 15 Sep 2012 21:41:21 GMT
        Vary: Accept-Encoding
        Keep-Alive: timeout=3, max=50
        Connection: Keep-Alive
        Content-Type: text/css
        Content-Length: 18206

    What's the problem? I'm sure I have mod_deflate enabled (according to PHP's apache_get_modules()). UPDATE: the request headers:

        GET /d/jquery-ui.css HTTP/1.1
        Host: 127.0.0.1
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip, deflate
        DNT: 1
        Connection: keep-alive
        Pragma: no-cache
        Cache-Control: no-cache
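
    Two hedged checks worth running here: confirm the module from Apache's side rather than through PHP, and try the blunter per-directory filter, which compresses every response regardless of MIME-type matching (note that "text/text" in the original list is not a standard type):

        # ask Apache itself which modules are loaded (httpd -M on Windows)
        apachectl -M | grep deflate

        # .htaccess alternative: compress all responses from this directory
        <IfModule mod_deflate.c>
            SetOutputFilter DEFLATE
        </IfModule>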

  • Connecting to IPv6 hosts when mobile and on a Surface?

    - by Cerebrate
    Specifically, at my usual location I have an IPv6 network which connects to the Internet via a static tunnel set up through Hurricane Electric's tunnel broker (http://www.tunnelbroker.net/). This works essentially perfectly, allowing inbound and outbound connectivity. Now, however, I need to connect back to host(s) on that network over IPv6 from mobile tablet(s), meaning the conditions are such that there is no guarantee, or even likelihood, of native IPv6 support wherever a tablet happens to be at any given time, and its IPv4 address will change on a fairly regular basis. The native Teredo support, as configured by default, functions well enough to let me ping my target hosts, but appears to have neither the reliability nor the throughput to support anything else; I have been unable to make any actual connections (trying a number of TCP-based protocols) using it. I had considered setting up an independent tunnel for the tablet(s) and using scripts to update the client endpoint IP address when it changes, but since (a) many of the locations will be behind NAT devices over which I have no control, and (b) the option over which I do have control is an AT&T Unite hotspot which does not offer protocol 41 forwarding or respond to ICMP on its public address, this approach does not seem viable. I am additionally constrained because the mobile tablets in question are Surface RTs, and as such are incapable of running, for example, the AICCU client software. What is my best option for obtaining IPv6 connectivity in this scenario?
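
    Windows RT does ship netsh, so one low-effort experiment (an assumption about what might help, not a guarantee) is nudging Teredo out of its default state and then watching its status; "enterpriseclient" keeps Teredo active even on networks it would otherwise treat as managed:

        netsh interface teredo set state type=enterpriseclient
        netsh interface teredo show state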

  • Share Point ACL on OSX Lion Server - Posix group always takes over ACLs

    - by Ben
    I'm trying to configure a share point on a Lion Server machine. The directory was created by the local server admin (serveradmin) and has rwxr-x--- on it. The serveradmin user belongs to the local staff group, so the POSIX permissions are: serveradmin read/write, staff group read, others none. We have an OD group for all the employees (Workers). Using the Server tool, we've given Full Control on the share point to Workers, so the permissions are: Workers Full Control (ACL); serveradmin read/write, staff group read, others none (POSIX). We would assume that Workers could then do what they want on the share, but that doesn't seem to be the case: the POSIX permissions appear to take precedence over the ACL for Workers. If I change the staff permission to read/write, then Workers can create a file or folder in the share point. I would expect the ACL to win, but POSIX always does, rendering the ACL useless. Furthermore, if I leave the read/write permission for staff and take the Write permission away from the Workers group, the POSIX group still wins. Essentially the Workers ACL does absolutely nothing. There are reports of similar problems in this Apple forum thread: https://discussions.apple.com/thread/3722901. The directory-nesting fix suggested there doesn't work for us. Has anyone had similar issues, and do you know how to fix this? Edit: in Workgroup Manager the employee users have their primary group set to staff and are given the additional OD group Workers. Changing their primary group doesn't help; it only shifts the problem onto Others taking over rights (logically). Edit 2: OK, this is interesting: adding OD users individually to the share's ACL works totally fine.
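
    To rule out the Server.app layer, it can help to inspect and set the ACL from the shell: ls -le shows what was actually stored, and chmod +a writes an entry with explicit inheritance bits. A sketch with an assumed share path:

        # show the POSIX mode plus any ACL entries on the share point
        ls -le /Volumes/Data/ShareName

        # grant the OD group an inheriting allow entry (path is illustrative)
        sudo chmod -R +a "Workers allow list,add_file,search,delete,add_subdirectory,delete_child,file_inherit,directory_inherit" /Volumes/Data/ShareName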
