Search Results

Search found 3815 results on 153 pages for 'compact policy'.


  • using pf for packet filtering and ipfw's dummynet for bandwidth limiting at the same time

    - by krdx
    I would like to ask if it's fine to use pf for all packet filtering (including using altq for traffic shaping) and ipfw's dummynet for bandwidth-limiting certain IPs or subnets at the same time. I am using FreeBSD 10 and I couldn't find a definitive answer to this. Googling returns such results as:

    - It works
    - It doesn't work
    - It might work, but it's not stable and not recommended
    - It can work as long as you load the kernel modules in the right order
    - It used to work, but with recent FreeBSD versions it doesn't
    - You can make it work provided you use a patch from pfSense

    Then there's a mention that this patch might have been merged back into FreeBSD, but I can't find it. One certain thing is that pfSense uses both firewalls simultaneously, so the question is: is it possible with stock FreeBSD 10 (and where do I obtain the patch if it's still necessary)? For reference, here's a sample of what I have for now and how I load things.

    /etc/rc.conf

        ifconfig_vtnet0="inet 80.224.45.100 netmask 255.255.255.0 -rxcsum -txcsum"
        ifconfig_vtnet1="inet 10.20.20.1 netmask 255.255.255.0 -rxcsum -txcsum"
        defaultrouter="80.224.45.1"
        gateway_enable="YES"
        firewall_enable="YES"
        firewall_script="/etc/ipfw.rules"
        pf_enable="YES"
        pf_rules="/etc/pf.conf"

    /etc/pf.conf

        WAN1="vtnet0"
        LAN1="vtnet1"
        set skip on lo0
        set block-policy return
        scrub on $WAN1 all fragment reassemble
        scrub on $LAN1 all fragment reassemble
        altq on $WAN1 hfsc bandwidth 30Mb queue { q_ssh, q_default }
        queue q_ssh bandwidth 10% priority 2 hfsc (upperlimit 99%)
        queue q_default bandwidth 90% priority 1 hfsc (default upperlimit 99%)
        nat on $WAN1 from $LAN1:network to any -> ($WAN1)
        block in all
        block out all
        antispoof quick for $WAN1
        antispoof quick for $LAN1
        pass in on $WAN1 inet proto icmp from any to $WAN1 keep state
        pass in on $WAN1 proto tcp from any to $WAN1 port www
        pass in on $WAN1 proto tcp from any to $WAN1 port ssh
        pass out quick on $WAN1 proto tcp from $WAN1 to any port ssh queue q_ssh keep state
        pass out on $WAN1 keep state
        pass in on $LAN1 from $LAN1:network to any keep state

    /etc/ipfw.rules

        ipfw -q -f flush
        ipfw -q add 65534 allow all from any to any
        ipfw -q pipe 1 config bw 2048KBit/s
        ipfw -q pipe 2 config bw 2048KBit/s
        ipfw -q add pipe 1 ip from any to 10.20.20.4 via vtnet1 out
        ipfw -q add pipe 2 ip from 10.20.20.4 to any via vtnet1 in
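
    One of the suggestions above concerns kernel module load order. If you want to experiment with that, a minimal sketch (assuming the stock FreeBSD module names; this is not a confirmed fix) is to load both firewalls explicitly from /boot/loader.conf instead of relying on the rc scripts, then confirm what actually got loaded:

        # /boot/loader.conf -- sketch: load both firewalls at boot
        pf_load="YES"
        ipfw_load="YES"
        dummynet_load="YES"

        # after a reboot, confirm the modules are present
        kldstat | grep -E 'pf|ipfw|dummynet'

    Note that ALTQ itself cannot be loaded as a module; the altq rules above only work if the kernel was built with the ALTQ options, which is worth verifying before blaming the pf/ipfw combination.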

    Read the article

  • Windows 7 libraries and folder redirection nightmare

    - by Lobuno
    Hello! In our Active Directory we deploy a policy to our clients where the personal directory (My Documents) is redirected to a file server of ours: \\server\share\username\Documents. On older systems everything worked fine. In Windows 7, some users are experiencing the following symptoms:

    - The Documents library is EMPTY.
    - Where the Documents library should be shown in Explorer, an empty white icon is displayed, with no caption.
    - Right-clicking the Documents library to edit the folders that are part of the library brings the dialog up; however, that dialog is unusable. No folder is present there, and clicking Add folder does nothing.
    - Deleting the library and auto-creating it doesn't solve the problem.
    - The shared directory can be accessed via UNC paths and it can be mounted as a shared drive as well. The library is still broken.
    - The shared drives are on an indexed Windows Server 2008 server.
    - Using the Windows Library tool utility doesn't solve the problem.

    What can the cause of this problem be, and how can it be solved?

    Read the article

  • Squid Authentication & streaming

    - by Steve Butler
    I've got Squid set up using Kerberos authentication. I'm also using squidGuard as a URL redirector to block out the usual nastiness of the web. There are some sites, though, that we allow certain users to reach and others not. This all works well, as long as no streaming is involved. From what I can determine from the Squid logs and the Wireshark traces I've done, when the initial request to stream is sent, everything is good: the authenticated username is sent with the request to squidGuard. The problem is that on subsequent traffic the username is not sent to squidGuard, causing it to be blocked based on the default policy. I've tried using Squid's built-in allow/deny stuff, but it's relatively clunky, and so far squidGuard has been pretty easy and fast. Here come the questions:

    - How do I get Squid to pass the username on all requests? (Something tells me this isn't the best way.)
    - How do I get squidGuard to see that traffic is authenticated to a specific user even when a username isn't passed?
    - Is there any other way of accomplishing this?

    A few details that may be of importance: I'm using a list of users stored in a text file for squidGuard to compare against, and I'm using full Kerberos auth with Squid. The setup is CentOS 6.0, Squid 3.1.4, squidGuard 1.3.
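
    For reference, the per-user exception described above ("a list of users stored in a text file") normally maps onto a squidGuard source group like the sketch below; the file names, group name and destination category are placeholders, and dbhome/logdir are omitted:

        # squidGuard.conf -- sketch only, names are placeholders
        src streamusers {
            userlist "/etc/squid/streaming-users.txt"   # one username per line, exactly as Squid reports it
        }

        dest streaming {
            domainlist "streaming/domains"
        }

        acl {
            streamusers {
                pass all
            }
            default {
                pass !streaming all
                redirect http://proxy.example.com/blocked.html
            }
        }

    The catch is exactly what the logs show: the src match only works on requests where Squid hands squidGuard a username, so any follow-up request that arrives without one falls through to the default acl.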

    Read the article

  • MOSS2007 tries to use ActiveDirectory when I have configured an alternative membership provider

    - by glenatron
    I've got a MOSS site that I am trying to configure using Forms authentication and absolutely any kind of membership provider whatsoever. Thus far Active Directory has proved obstructively difficult, so I've just whipped up a simple stub membership provider and put it in the GAC. It's a very basic and simple provider, but it works fine with an ASP.NET site; I just can't make it work with SharePoint. On SharePoint I get the following error when I look for StubProvider:Bob (or anything else, for that matter) from the "Policy for Web Application" people picker:

        Error in searching user 'StubProvider:bob' : System.ComponentModel.Win32Exception: Unable to contact the global catalog server
           at Microsoft.SharePoint.Utilities.SPActiveDirectoryDomain.GetDirectorySearcher()
           at Microsoft.SharePoint.WebControls.PeopleEditor.SearchFromGC(SPActiveDirectoryDomain domain, String strFilter, String[] rgstrProp, Int32 nTimeout, Int32 nSizeLimit, SPUserCollection spUsers, ArrayList& rgResults)
           at Microsoft.SharePoint.Utilities.SPUserUtility.SearchAgainstAD(String input, SPActiveDirectoryDomain domainController, SPPrincipalType scopes, SPUserCollection usersContainer, Int32 maxCount, String customQuery, TimeSpan searchTimeout, Boolean& reachMaxCount)
           at Microsoft.SharePoint.Utilities.SPActiveDirectoryPrincipalResolver.SearchPrincipals(String input, SPPrincipalType scopes, SPPrincipalSource sources, SPUserCollection usersContainer, Int32 maxCount, Boolean& reachMaxCount)
           at Microsoft.SharePoint.Utilities.SPUtility.SearchPrincipalFromResolvers(List`1 resolvers, String input, SPPrincipalType scopes, SPPrincipalSource sources, SPUserCollection usersContainer, Int32 maxCount, Boolean& reachMaxCount, Dictionary`2 usersDict)

    The provider is named as the Authentication Provider for the site collection in question. As far as I can tell, this is because SharePoint is still trying to access Active Directory rather than talking to the provider I'm asking it to use. My SharePoint Central Administration web.config includes this:

        <membership>
          <providers>
            <add name="StubProvider" type="StubMembershipProvider.Provider, StubMembershipProvider, Version=1.0.0.0, Culture=neutral, PublicKeyToken=5bd7e2498c3e1a03" />
          </providers>
        </membership>

    And also:

        <PeoplePickerWildcards>
          <clear />
          <add key="StubProvider" value="%" />
        </PeoplePickerWildcards>

    Is there a clear reason why this would not be accessible from the people picker, or why it is still trying to use Active Directory? I've made sure I reset IIS and even restarted the server to see if either of those helped, but they made no difference.
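
    One thing commonly checked with Forms authentication on MOSS 2007 (offered as a possibility rather than a confirmed diagnosis): the custom provider has to be registered in the web.config of every web application that needs to resolve those users - the content web application as well as Central Administration - otherwise the people picker in that context has little to query besides Active Directory, which would be consistent with the SearchAgainstAD stack trace above. A sketch of the equivalent entries for the content web application's web.config:

        <!-- inside <system.web> of the content web application -->
        <membership>
          <providers>
            <add name="StubProvider"
                 type="StubMembershipProvider.Provider, StubMembershipProvider, Version=1.0.0.0, Culture=neutral, PublicKeyToken=5bd7e2498c3e1a03" />
          </providers>
        </membership>

        <!-- inside <SharePoint> of the same web.config -->
        <PeoplePickerWildcards>
          <clear />
          <add key="StubProvider" value="%" />
        </PeoplePickerWildcards>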

    Read the article

  • Debian Linux bridging router intermittently dropping packets [migrated]

    - by nomen
    My old Asus router died a few weeks ago, so I thought I'd set up my Debian box to deal with routing my home network. I have a few complications, but I adapted my configuration from a previously working configuration, and I don't see why I am having intermittent problems. But I am having them! Every so often, my SSH connections to the router (and to the Xen virtual machines hosted by the router) just drop. I am unable to use the router's DNS server. I can't ping the router. Etc. (I can provide more details, but I'm not sure what will be helpful.)

    /etc/network/interfaces:

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # Gigabit ethernet, internal network
        auto eth0
        allow-hotplug eth0
        iface eth0 inet manual

        # USB ethernet, internet
        auto eth1
        allow-hotplug eth1
        iface eth1 inet dhcp

        # Xen Bridge
        auto xlan0
        iface xlan0 inet static
            bridge_ports eth0
            address 10.47.94.1
            netmask 255.255.255.0

    As I understand it, this is sufficient to create the network interfaces and even do some switching between Xen hosts and my eth0 interface. I installed and configured Shorewall to manage routing:

    /etc/shorewall/zones

        fw    firewall
        net   ipv4
        lan   ipv4

    /etc/shorewall/interfaces

        net   eth1    detect   dhcp,tcpflags,nosmurfs,routefilter,logmartians
        lan   xlan0   detect   dhcp,tcpflags,nosmurfs,routefilter,logmartians,routeback,bridge

    /etc/shorewall/policy

        net   all   DROP     info
        fw    net   ACCEPT   info
        all   all   REJECT   info

    /etc/shorewall/rules

        DNS(ACCEPT)   fw    net
        DNS(ACCEPT)   lan   fw

    ... and so on; these all work, when the router is accepting traffic at all.

    /etc/shorewall/masq

        eth1   10.47.94.0/24

    Can anybody help?
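
    Since every policy above logs at the info level, one low-effort way to narrow this down (a diagnostic sketch, not a fix) is to watch what Shorewall and the bridge are doing at the moment a drop-out happens:

        # Recent Shorewall log entries (rejected/dropped packets show up here)
        shorewall show log

        # Or pull them straight from the kernel log; the file name depends on your syslog setup
        grep 'Shorewall:' /var/log/kern.log | tail -20

        # Check that the bridge and its member port look healthy while the problem occurs
        brctl show xlan0
        ip -s link show eth0

    If the SSH drops coincide with logged REJECT/DROP entries, the ruleset is the likely culprit; if nothing is logged, that points more towards the bridge or the USB NIC.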

    Read the article

  • How to install a private user script in Chrome 21+?

    - by Mathias Bynens
    In Chrome 20 and older versions, you could simply open any .user.js file in Chrome and it would prompt you to install the user script. However, in Chrome 21 and up, it downloads the file instead, and displays a warning at the top saying "Extensions, apps, and user scripts can only be added from the Chrome Web Store". The "Learn More" link points to http://support.google.com/chrome_webstore/bin/answer.py?hl=en&answer=2664769, but that page doesn't say anything about user scripts, only about extensions in .crx format, apps, and themes. This part sounded interesting:

        Enterprise Administrators: You can specify URLs that are allowed to install extensions, apps, and themes directly through the ExtensionInstallSources policy.

    So, I ran the following commands, then restarted Chrome and Chrome Canary:

        defaults write com.google.Chrome ExtensionInstallSources -array "https://gist.github.com/*"
        defaults write com.google.Chrome.canary ExtensionInstallSources -array "https://gist.github.com/*"

    Sadly, these settings only seem to affect extensions, apps, and themes (as it says in the text), not user scripts. (I've filed a bug asking to make this setting affect user scripts as well.) Any ideas on how to install a private user script (that I don't want to add to the Chrome Web Store) in Chrome 21+?

    Update: The problem was that gist.github.com's raw URLs redirect to a different domain. So, use these commands instead:

        # Allow installing user scripts via GitHub or Userscripts.org
        defaults write com.google.Chrome ExtensionInstallSources -array "https://*.github.com/*" "http://userscripts.org/*"
        defaults write com.google.Chrome.canary ExtensionInstallSources -array "https://*.github.com/*" "http://userscripts.org/*"

    This works!

    Read the article

  • What breaks in a Windows domain if a member has a high time skew?

    - by Ryan Ries
    It's taken for granted by most IT people that in a Windows domain, if a member server's clock is off by more than 5 minutes (or however many minutes you've configured it for) from that of its domain controller, logons and authentications will fail. But that is not necessarily true - at least not for all authentication processes on all versions of Windows. For instance, I can set the time on my Windows 7 client to be skewed all to heck, and logoff/logon still works fine. What happens is that my client sends an AS_REQ (with its time stamp) to the domain controller, and the DC responds with KRB_AP_ERR_SKEW. But the magic is that when the DC responds with the aforementioned Kerberos error, the DC also includes its own time stamp, which the client in turn uses to adjust its own time before resubmitting the AS_REQ, which is then approved. This behavior is not considered a security threat because encryption and secrets are still being used in the communication. This is also not just a Microsoft thing; RFC 4120 describes this behavior.

    So my question is: does anyone know when this changed? And why is it that other things fail? For instance, Office Communicator kicks me off if my clock starts drifting too far out. I really wish to have more detail on this.

    Edit: Here's the bit from RFC 4120 that I'm talking about:

        If the server clock and the client clock are off by more than the policy-determined clock skew limit (usually 5 minutes), the server MUST return a KRB_AP_ERR_SKEW. The optional client's time in the KRB-ERROR SHOULD be filled out. If the server protects the error by adding the Cksum field and returning the correct client's time, the client SHOULD compute the difference (in seconds) between the two clocks based upon the client and server time contained in the KRB-ERROR message. The client SHOULD store this clock difference and use it to adjust its clock in subsequent messages. If the error is not protected, the client MUST NOT use the difference to adjust subsequent messages, because doing so would allow an attacker to construct authenticators that can be used to mount replay attacks.
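
    To observe the skew yourself, the client-side offset is easy to measure with w32tm (the DC hostname below is a placeholder), optionally alongside a network capture filtered on Kerberos to watch the KRB_AP_ERR_SKEW exchange described above:

        REM Offset between this member and a specific DC (hostname is a placeholder)
        w32tm /stripchart /computer:dc01.example.local /samples:3 /dataonly

        REM The Windows Time service's own view of its source and current offset
        w32tm /query /status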

    Read the article

  • Connect to Postgres remotely, open port 5432 for Postgres in iptables

    - by Victor
    I am trying to connect to Postgres remotely, but I need to open port 5432 in iptables. My current iptables configuration is as follows:

        *filter

        # Allows all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
        -A INPUT -i lo -j ACCEPT
        -A INPUT ! -i lo -d 127.0.0.0/8 -j REJECT

        # Accepts all established inbound connections
        -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Allows all outbound traffic
        # You can modify this to only allow certain traffic
        -A OUTPUT -j ACCEPT

        # Allows HTTP and HTTPS connections from anywhere (the normal ports for websites)
        -A INPUT -p tcp --dport 80 -j ACCEPT
        -A INPUT -p tcp --dport 443 -j ACCEPT

        # Allows SSH connections
        # THE --dport NUMBER IS THE SAME ONE YOU SET UP IN THE SSHD_CONFIG FILE
        -A INPUT -p tcp -m state --state NEW --dport 30000 -j ACCEPT

        # Allow ping
        -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT

        # Log iptables denied calls
        -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7

        # Reject all other inbound - default deny unless explicitly allowed policy
        -A INPUT -j REJECT
        -A FORWARD -j REJECT

        COMMIT

    What would I have to add in iptables to open the port? I'm trying to install phpPgAdmin on a different server to access the Postgres database. Thank you.
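
    For reference, the usual change for a ruleset like this is an ACCEPT rule placed before the final REJECT, ideally restricted to the host that runs phpPgAdmin (203.0.113.10 below is a placeholder), plus making PostgreSQL itself accept the remote connection:

        # iptables: allow the phpPgAdmin server to reach PostgreSQL
        -A INPUT -p tcp -s 203.0.113.10 --dport 5432 -m state --state NEW -j ACCEPT

        # postgresql.conf: listen on more than localhost
        listen_addresses = '*'

        # pg_hba.conf: let that host authenticate with a password
        host    all    all    203.0.113.10/32    md5

    Remember to reload both the firewall rules and PostgreSQL after the changes.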

    Read the article

  • Allow access from outside network with dmz and iptables

    - by Ivan
    I'm having a problem with my home network. My setup is like this: on my router (running Ubuntu Desktop 11.04) I installed Squid as my transparent proxy. I would like to use DynDNS for my home network so I can access my server from the internet, and I also installed a CCTV camera that I would like to be able to watch from the internet. The problem is that I cannot access either from outside the network. I have already set the DMZ in my modem to my router's IP. My first guess is that this is because I'm using iptables to redirect all inside traffic through Squid, and nothing allows traffic from outside into my internal network. Here is my iptables script:

        #!/bin/sh
        # squid server IP
        SQUID_SERVER="192.168.5.1"
        # Interface connected to Internet
        INTERNET="eth0"
        # Interface connected to LAN
        LAN_IN="eth1"
        # Squid port
        SQUID_PORT="3128"

        # Clean old firewall
        iptables -F
        iptables -X
        iptables -t nat -F
        iptables -t nat -X
        iptables -t mangle -F
        iptables -t mangle -X

        # Load IPTABLES modules for NAT and IP conntrack support
        modprobe ip_conntrack
        modprobe ip_conntrack_ftp
        # For win xp ftp client
        #modprobe ip_nat_ftp
        echo 1 > /proc/sys/net/ipv4/ip_forward

        # Setting default filter policy
        iptables -P INPUT DROP
        iptables -P OUTPUT ACCEPT

        # Unlimited access to loop back
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A OUTPUT -o lo -j ACCEPT

        # Allow UDP, DNS and Passive FTP
        iptables -A INPUT -i $INTERNET -m state --state ESTABLISHED,RELATED -j ACCEPT

        # set this system as a router for Rest of LAN
        iptables --table nat --append POSTROUTING --out-interface $INTERNET -j MASQUERADE
        iptables --append FORWARD --in-interface $LAN_IN -j ACCEPT

        # unlimited access to LAN
        iptables -A INPUT -i $LAN_IN -j ACCEPT
        iptables -A OUTPUT -o $LAN_IN -j ACCEPT

        # DNAT port 80 request comming from LAN systems to squid 3128 ($SQUID_PORT) aka transparent proxy
        iptables -t nat -A PREROUTING -i $LAN_IN -p tcp --dport 80 -j DNAT --to $SQUID_SERVER:$SQUID_PORT

        # if it is same system
        iptables -t nat -A PREROUTING -i $INTERNET -p tcp --dport 80 -j REDIRECT --to-port $SQUID_PORT

        # DROP everything and Log it
        iptables -A INPUT -j LOG
        iptables -A INPUT -j DROP

    If you know where I went wrong, please advise me. Thanks for all your help; I really appreciate it.
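
    As written, the ruleset has no DNAT for connections arriving on $INTERNET (apart from the port-80 REDIRECT to Squid), and the INPUT policy drops unsolicited WAN traffic aimed at the router itself, so connections initiated from the internet never reach the camera or server. A sketch of the kind of rules that would be missing - the camera address and port are placeholders for your actual device:

        # Sketch: forward an outside port to the CCTV camera on the LAN
        CCTV_IP="192.168.5.20"      # placeholder
        CCTV_PORT="8080"            # placeholder

        # Rewrite inbound connections on the WAN interface to the camera
        iptables -t nat -A PREROUTING -i $INTERNET -p tcp --dport $CCTV_PORT -j DNAT --to-destination $CCTV_IP:$CCTV_PORT

        # Let the forwarded traffic through in both directions
        iptables -A FORWARD -i $INTERNET -o $LAN_IN -p tcp -d $CCTV_IP --dport $CCTV_PORT -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
        iptables -A FORWARD -i $LAN_IN -o $INTERNET -m state --state ESTABLISHED,RELATED -j ACCEPT

    A similar DNAT/FORWARD pair would be needed for each service you want reachable via DynDNS, and note that inbound port 80 is already being grabbed by the Squid REDIRECT rule.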

    Read the article

  • Can't resolve offline file conflicts

    - by Bryan
    We use roaming profiles on our Server 2008 R2 domain, with folder redirection for Desktop, My Documents and Application Data. As our network is split across two sites, we have one file server at each site, configured to use domain-based DFS namespaces and DFS Replication to keep things in sync. The DFS path for the replicated folders is as follows:

        \\domain\folderredirection$\<username>\<redirected-folder-name>

    The real paths are:

        \\site-1-server\folderredirection$\<username>\<redirected-folder-name>
        \\site-2-server\folderredirection$\<username>\<redirected-folder-name>

    As our users all switch between sites (sometimes several times per day), our folder redirection policy has to redirect to the DFS roots rather than being hardcoded to a specific server. Both DFS and DFS-R have been proven to be working perfectly.

    On our laptops we use Offline Files for the redirected folders, and this also works fine. However, the problem is as follows: when conflicts occur in Offline Files, it is impossible to resolve them. I'm given the usual conflict resolution options (i.e. 'Ignore', 'Keep Both', 'Keep network' and 'Keep local'); however, not one of these options will resolve any conflict, yet no error is produced. We only use Offline Files on laptops, which have either Windows XP Professional or Windows 7 Professional installed. The problem is not specific to any one laptop; it affects every laptop and every conflicting file in exactly the same way. I would have thought the setup we have is common for companies that have multiple sites, so I'm hoping someone will have seen this before?

    Read the article

  • Windows XP mounting USB drive to same letter as previously mapped network drive

    - by GAThrawn
    Why does Windows always mount a USB drive as the next drive letter after the last physical drive, even when that letter is already taken by a mapped drive, and is there any way to improve this behaviour?

    I tend to use a few different flash drives on my PC, as well as having both a Blackberry and a personal phone that mount as USB drives when I plug them in to charge. Being on a corporate PC I also have a number of mapped network drives (some set by login script, some set as persistent mappings in my profile). When I first log in I'll have drive letters like this:

        C: - Local Drive
        D: - DVD Drive
        G: - Login script mapped drive
        J: - Login script mapped drive

    When I plug the Blackberry in, it mounts two drives (one for onboard storage, one for the SD card) as E: and F:. If I then plug in another USB drive it will mount as G:, even though that letter is already taken by a network mapped drive. This leaves me with the following drives:

        C: - Local Drive
        D: - DVD Drive
        E: - USB drive (Blackberry)
        F: - USB drive (Blackberry)
        G: - Login script mapped drive
        [G: - USB drive - mounted but not visible in Explorer or command prompt]
        J: - Login script mapped drive

    I then have to go into Disk Management, find the new USB drive that's mounted to G: and re-assign it to another letter, e.g. Z:. Once this is done, AutoPlay detects it and throws up its normal dialog, and it's browsable in Explorer.

    While this is OK to do if you only use one or two USB drives and have admin access to your PC with your login account, it's a total pain in the proverbial if you regularly use a whole load of different USB devices, and corporate policy means you have one account for your normal login (that only has User access to workstations) but have to use a different account for any privileged action.

    I realize that one possible reason for this is the difference between hardware, which is mounted and assigned drive letters at the system level, and mapped drives, which are done at the user level. For USB devices that are already plugged in before login, obviously they're mounted before Windows knows what network drives may be mapped. However, if you plug the USB devices in after you're fully logged in and have drives mapped, then Windows must know which letters are available?
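
    One workaround (it still needs elevation once per device) is to pin each USB volume to a high letter with diskpart; Windows stores the assignment against that specific volume, so it is reused the next time the device is plugged in rather than colliding with G: again. A sketch - the volume number is a placeholder taken from the "list volume" output:

        rem pin-usb.txt -- run elevated with: diskpart /s pin-usb.txt
        rem volume 5 is a placeholder; check "list volume" first
        select volume 5
        assign letter=Z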

    Read the article

  • Bounce backs from web-generated e-mails are missing

    - by JerSchneid
    We use Google Apps to host my company's mail. On our website, we send some e-mails on behalf of our users. In those e-mails we include headers like this:

        Return-Path: <[email protected]>
        Sender: <[email protected]>

    Sending the messages works great (they pass SPF tests), but in the case that a message is sent TO an invalid e-mail address, we expect to get a bounce-back message sent to "[email protected]". That message never arrives. (If we send an e-mail manually from within the Gmail interface to the same bad address, the bounce does arrive.)

    We used to receive the bounce-back messages as expected, but it seems like they are always quietly blocked now (they're not in spam or anything). Is there a new policy that blocks bounce-backs when the "From" does not match the "Return-Path", or something? We would really like to get these bounce-backs to verify the delivery of the messages. Is there any way to prevent them from being blocked? Thank you!

    Read the article

  • What is the latest on Microsoft Expression Studio licensing?

    - by DanM
    In the past, there's been an issue with Microsoft not allowing you to deactivate an Expression Studio key. Basically, you get two keys per license. If you assign both keys (say, one to a desktop and one to a laptop) and then upgrade to a new machine (say you replace your laptop or upgrade some of its hardware), you have to buy a new copy of Expression Studio ($600 for Ultimate). This seems ludicrous to me, and I'm wondering if anyone knows whether this policy is still in place. I can't seem to find a EULA online anywhere, so I don't know where to find this information. I know my laptop is due for replacement soon, and I want to know if I'm going to have to sink $600 into a software product I already purchased.

    For background, please refer to this thread on the Microsoft Expression forums: http://social.expression.microsoft.com/Forums/en-US/general/thread/da5587bc-b098-4c6a-9a56-af3608d940d0

    Note that this thread is locked. Microsoft doesn't seem to want people to discuss this, which is one reason I'm posting here rather than on that site.

    Read the article

  • SPF record for Gmail?

    - by Chris
    I have DNS, with an SPF TXT record, configured for a domain name. The primary user of the domain name now needs to be able to send both from our SMTP servers and from her Gmail account. I've seen all the information about adding "include:_spf.google.com" to the SPF TXT record, but as I look into it, it appears that record is outdated. In particular, I had the user send me a test message, and note that it was:

        Received: from mail-la0-f50.google.com (mail-la0-f50.google.com [209.85.215.50])

    However, _spf.google.com doesn't list that IP address:

        $ dig +short _spf.google.com txt
        "v=spf1 ip4:216.239.32.0/19 ip4:64.233.160.0/19 ip4:66.249.80.0/20 ip4:72.14.192.0/18 ip4:209.85.128.0/17 ip4:66.102.0.0/20 ip4:74.125.0.0/16 ip4:64.18.0.0/20 ip4:207.126.144.0/20 ip4:173.194.0.0/16 ?all"

    (Note that a 209.85.218.0 network is listed, but not 209.85.215.0.) Is there a better way to enable sending from Gmail? This user sends to at least one recipient with a strict SPF policy that bounces mail not from a designated host... Many thanks!
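
    For reference, the usual shape of a combined record is a single TXT record that lists your own SMTP servers and then includes Google's (the domain and ip4 address below are placeholders); including _spf.google.com by reference is generally preferred over copying its current contents, because Google changes the netblocks behind it over time:

        example.com.   IN   TXT   "v=spf1 mx ip4:198.51.100.25 include:_spf.google.com ~all"

    The mx mechanism covers hosts that are already listed as MX for the domain, and ~all (softfail) is a gentler choice than -all while testing.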

    Read the article

  • Optimum configuration of McAfee for Servers

    - by Wayne Arthurton
    Our corporate standard is McAfee Enterprise; unfortunately this is non-negotiable. On two types of servers I'm responsible for, SQL and web, we have noticed major performance issues with the corporate standard setup:

    - Max scan time 45 sec
    - One policy for all processes
    - Scan ALL files on write, read and open for backup
    - Heuristics: find unknown programs, trojans and macros
    - Detect unwanted programs
    - Exclude: EVT, LDF, LOG, MDF, VMD (and Windows File Protection)

    This of course still causes major slowdowns: IIS .NET recompiles are slow, especially with SharePoint, as are SQL backups and restores, SQL Analysis Services, Integration Services, and the temp data they generate.

    I have looked from time to time for some best practices on setting up McAfee for SQL Server, SQL Analysis Services, SQL Integration Services, Visual Studio, SharePoint, and .NET web servers in general. How do people set up McAfee Enterprise on their corporate servers, keeping security intact but affecting performance as minimally as possible? Has anyone run across white papers on these setups? Obviously some of it is case by case, but there must be some best practices out there somewhere.

    Read the article

  • Remote Desktop Printing With Color

    - by philibertperusse
    Our ACCPACC administration software runs on an off-site dedicated hosted computer, running Windows Server 2003 in a completely different NT domain. We have many users connecting to that computer remotely to perform administrative tasks such as printing cheques, invoices, POs, packing slips and so on. Basically the setup is that we are all connecting using Remote Desktop Protocol (the local computers are Mac OS X, XP SP3, Vista and Windows 7). At our office we have a DOCUCOLOR 242 printer. When printing from the ACCPACC software, it prints to the local printer in our office. This is because we are using RDP printer redirection to make the local printer available on the remote computer. This almost works now: I had to install the printer driver software on the remote 2003 server for the printer sharing to work. Now everyone is able to print black and white, but color is out.

    Notes: normal users on that Windows 2003 server run under a Group Policy Object that restricts what can be done.

    - I took one of these normal accounts and gave it all domain administrator rights - no effect, still B&W only.
    - I took the same account and moved it OUT of the GPO policies, as a normal account instead - no effect, still B&W only.
    - It seems only MY account (which is a domain administrator AND a normal account not part of the GPO objects) can actually print in color. This is the account that was used to install the printer driver software.

    How can I manage to get everyone to print in color? Any suggestions as to what to try next?

    Read the article

  • VNC error: "Could not connect to session bus: Failed to connect to socket"

    - by GJ
    I started a vncserver on display :1 on an Ubuntu machine. When I connect to it, I get a grey X window with an error message: "Could not connect to session bus: Failed to connect to socket." The VNC log is:

        Xvnc Free Edition 4.1.1 - built Apr 9 2010 15:59:33
        Copyright (C) 2002-2005 RealVNC Ltd.
        See http://www.realvnc.com for information on VNC.
        Underlying X server release 40300000, The XFree86 Project, Inc

        Sun Mar 20 15:33:59 2011
        vncext: VNC extension running!
        vncext: Listening for VNC connections on port 5901
        vncext: created VNC server for screen 0
        error opening security policy file /etc/X11/xserver/SecurityPolicy
        Could not init font path element /usr/X11R6/lib/X11/fonts/Type1/, removing from list!
        Could not init font path element /usr/X11R6/lib/X11/fonts/Speedo/, removing from list!
        Could not init font path element /usr/X11R6/lib/X11/fonts/misc/, removing from list!
        Could not init font path element /usr/X11R6/lib/X11/fonts/75dpi/, removing from list!
        Could not init font path element /usr/X11R6/lib/X11/fonts/100dpi/, removing from list!
        cat: /var/run/gdm/auth-for-link2-eGnVvf/database: No such file or directory
        gnome-session[24880]: WARNING: Could not make bus activated clients aware of DISPLAY=:1.0 environment variable: Failed to connect to socket /tmp/dbus-FhdHHIq8jt: Connection refused
        gnome-session[24880]: WARNING: Could not make bus activated clients aware of GNOME_DESKTOP_SESSION_ID=this-is-deprecated environment variable: Failed to connect to socket /tmp/dbus-FhdHHIq8jt: Connection refused
        gnome-session[24880]: WARNING: Could not make bus activated clients aware of SESSION_MANAGER=local/dell:@/tmp/.ICE-unix/24880,unix/dell:/tmp/.ICE-unix/24880 environment variable: Failed to connect to socket /tmp/dbus-FhdHHIq8jt: Connection refused

        Sun Mar 20 15:34:10 2011
        Connections: accepted: 0.0.0.0::51620
        SConnection: Client needs protocol version 3.8
        SConnection: Client requests security type VncAuth(2)
        VNCSConnST: Server default pixel format depth 16 (16bpp) little-endian rgb565
        VNCSConnST: Client pixel format depth 16 (16bpp) little-endian rgb565
        gnome-session[24880]: Gtk-CRITICAL: gtk_main_quit: assertion `main_loops != NULL' failed
        gnome-session[24880]: CRITICAL: dbus_g_proxy_new_for_name: assertion `connection != NULL' failed

    Any ideas how to fix it?
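
    A common cause of this particular symptom is the VNC session starting without its own D-Bus session bus. A minimal ~/.vnc/xstartup along these lines (a sketch, not taken from the machine in question) is usually the first thing to try, followed by making the file executable and restarting the vncserver instance:

        #!/bin/sh
        # ~/.vnc/xstartup -- sketch: give the VNC desktop its own D-Bus session bus
        unset SESSION_MANAGER
        unset DBUS_SESSION_BUS_ADDRESS
        [ -r "$HOME/.Xresources" ] && xrdb "$HOME/.Xresources"
        exec dbus-launch --exit-with-session gnome-session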

    Read the article

  • How to make Exchange 2003 non-authoritative

    - by Romski
    Background: We are a small company with an internally hosted Exchange 2003 server. It receives email for two domains (the company was renamed a few years back). For the sake of argument, the domains are:

    - oldname.com
    - newname.com

    We have moved newname.com to a hosted Exchange service, and our DNS record is correctly routing emails. Our internal server still receives email for oldname.com, although we have asked our hosting company to accept emails for that domain as well.

    Problem: My problem is that emails generated internally from monitoring software, printers, etc. are being caught by our (defunct) internal server and delivered to the old mailboxes. I believe that what is happening is that our internal Exchange server considers itself to be the authoritative server for newname.com. I think it must be looking in Active Directory for a mailbox and delivering the message internally without ever going outside.

    Attempted fix: I started to follow the article here: http://support.microsoft.com/kb/321721. I removed the SMTP recipient policy for newname.com, added a dummy address and made it primary, and answered yes to updating the associated email addresses. I then restarted the Microsoft Exchange Routing Engine and SMTP services, but emails are still being routed internally. Is there a way to force the Exchange server to route all emails for the domain newname.com to the new hosted service?

    Read the article

  • Password Manager that allows syncing across platforms

    - by lexu
    I use OS X, Linux, Solaris and Windows, both for work and from home. There are good tools that allow me to manage the many logins/passwords required, platform-independently. But mostly they expect me to carry a thumb drive around, or they require direct access to a central location (a sky drive in the cloud). The thumb drive is too easily lost (= synchronized backup needed), and the central location is not always reachable/mountable; besides, company policy rightly often prevents this. Is there a tool that allows me to add passwords locally and then syncs its DB with the "mother ship" later? Or is there another approach that you use that solves my problem?

    EDIT: My question is more about "synchronize" than cross-platform. I've evaluated (= read the feature list of) some good cross-platform tools, but I need one that does the synchronizing for me. By synchronize I mean "merge two versions", not "replace (hopefully) old file with new". I'm not sure I'm always disciplined/awake enough to prevent data loss.

    UPDATE: Lifehacker just posted that AgileSolutions now have a beta version of 1Password for Windows.

    Read the article

  • What web-based tool will allow a non-technical user to manage authorized_keys files on a Linux (fedora/centos/ubuntu/debian) server?

    - by Tom H
    (Edit: clarification below.)

    We have a number of groups of developers that change frequently, and a security policy that requires individual logins to servers using RSA or DSA public keys, which is achieved via the standard method of adding each developer's id_dsa.pub to their authorized_keys file. I am using Chef to sync the user accounts across machines; however, our previous method of using Webmin to manage the user passwords is not designed for key-based auth, and hence is not easy to use for non-technical users. The developers log in from the WAN using SSH; they can either provide their own key, or an administrator will send them a private key. The development machines are located in the cloud, and we have a single server available to host the master set of accounts. Obviously I could deploy LDAP or some other centralised authentication system, but that seems a bit overblown when Webmin worked well for the simple case.

    It is easy to achieve synchronised users, groups and passwords across a bunch of low-security development boxes using Webmin's clustered users and groups. However, looking at the currently installed Webmin, it is not as easy to create the authorized keys as it is to create user accounts and passwords (it's possible, but it's not easy - some functionality is in the Usermin module, or would require some tedious steps).

    Ideally I'd like a web interface that is pretty much dedicated to creating users and groups, can generate key pairs on the fly, and can accept pasted-in public keys to add to a user's authorized_keys file. If the tool synced the users and keys as well, that would be great, but I can use Chef to do that part if the accounts are created correctly on the "master" server.
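
    For context, the "standard method" mentioned above comes down to a few commands per developer, which is also roughly what any web tool or Chef recipe would have to end up doing; a minimal sketch, with the username and key path as placeholders:

        #!/bin/sh
        # Sketch: install a developer's public key for key-based SSH login
        USER_NAME=alice                     # placeholder
        PUBKEY=/tmp/alice_id_dsa.pub        # placeholder

        install -d -m 700 -o "$USER_NAME" -g "$USER_NAME" "/home/$USER_NAME/.ssh"
        cat "$PUBKEY" >> "/home/$USER_NAME/.ssh/authorized_keys"
        chown "$USER_NAME:$USER_NAME" "/home/$USER_NAME/.ssh/authorized_keys"
        chmod 600 "/home/$USER_NAME/.ssh/authorized_keys"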

    Read the article

  • 403 Forbidden When Using AuthzSVNAccessFile

    - by David Osborn
    I've had a nicely functioning SVN server running on Windows that uses Apache for access. In the original setup every user had access to all repositories, but I recently needed the ability to grant a user access to only one repository. I uncommented the AuthzSVNAccessFile line in my httpd.conf file, pointed it to an access file, and set that file up, but now I get a 403 Forbidden when I go to mydomain.com/svn. If I re-comment out this line then things work again. I also made sure I uncommented the LoadModule authz_svn_module line and verified that it points to the correct file. Below is the Location section of my httpd.conf and my access file.

    httpd.conf (Location section only):

        <Location /svn>
          DAV svn
          SVNParentPath C:\svn
          SVNListParentPath on
          AuthType Basic
          AuthName "Subversion repositories"
          AuthUserFile passwd
          Require valid-user
          AuthzSVNAccessFile svnaccessfile
        </Location>

    svnaccessfile (I want a more complex policy in the long run, but just did this to test the file out):

        [svn:/]
        * = rw

    I have also tried just the following for the svnaccessfile:

        [/]
        * = rw

    I restart the service after each change just to make sure it is taken.
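
    For the stated end goal (one user limited to a single repository under an SVNParentPath layout), the access file usually ends up looking something like the sketch below once the 403 is sorted out; the group, user and repository names are placeholders:

        [groups]
        devs = alice, bob

        # the devs group can use every repository under C:\svn
        [/]
        @devs = rw

        # carol only gets the one repository she needs
        [projectx:/]
        @devs = rw
        carol = rw

    With SVNParentPath, a plain [/] section applies to every repository, while [reponame:/] sections apply only to that repository.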

    Read the article

  • Terminal Server 2008: Remote App Issue

    - by JohnyD
    I have a FoxPro 2.6 (16-bit) application that I've installed on a Windows 2008 (32-bit) terminal server, and I then created a RemoteApp from it. It works fine. The problem is that from within this FoxPro application it calls out to a .NET application. I have the proper .NET Framework installed on the server (2.0) and I have run the Code Access Security Policy tool (caspol.exe). However, when I launch the .NET app from within the FoxPro application I get the following error:

        Description: Stopped working

        Problem signature:
          Problem Event Name:   CLR20r3
          Problem Signature 01: vector.exe
          Problem Signature 02: 1.0.0.3
          Problem Signature 03: 48b579f2
          Problem Signature 04: vector
          Problem Signature 05: 1.0.0.3
          Problem Signature 06: 48b579f2
          Problem Signature 07: f
          Problem Signature 08: 57
          Problem Signature 09: System.Security.Security
          OS Version:           6.0.6001.2.1.0.18.10
          Locale ID:            1033

    Vector.exe is our .NET application. In fact, it's an in-between application that checks that you have the latest version; when it's done, it calls out to another .NET executable. Does anyone believe this should be a problem? Thanks in advance.
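
    Problem Signature 09 being System.Security suggests a SecurityException, which on .NET 2.0 is the classic symptom of an executable being run from a location (typically a UNC path) that Code Access Security doesn't trust. If vector.exe is launched from a share, a caspol grant along these lines is the usual thing to check - the UNC path and group name are placeholders, and this is a sketch rather than a confirmed fix:

        REM Run the caspol.exe that matches the runtime in use (32-bit .NET 2.0 here)
        %windir%\Microsoft.NET\Framework\v2.0.50727\caspol.exe -m -ag 1.2 -url "file://\\server\apps\*" FullTrust -n "VectorShare"

    If the application actually runs from a local folder, this particular grant won't change anything, and the next step would be the Application event log or a debugger to capture the full exception.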

    Read the article

  • Easy GUI way to auto scale EC2 and RDS: aws console, scalr, ylastic...?

    - by Zillo
    I am managing all my instances with the AWS Management Console (the GUI web console), but now I want to use Auto Scaling, and it seems that this cannot be done with that console. Yes, there is CloudWatch, but I can only create alarms (e-mail notifications); it seems that CloudWatch needs you to add the auto scaling policy in some other place (via the command line tools?). I would like to use some easy GUI interface.

    Ylastic and Scalr seem to be good options. Which one do you think is better? Regarding Scalr, is there any difference between the open source software Scalr and the service Scalr.net? I mean, is the GUI interface the same? I like the idea of Scalr because I do not need to give my Secret Access Key to a third party (unlike with Ylastic or Scalr.net).

    One question about the Scalr software: does it have to be installed on the instances, or must it be installed on another machine? Do I need to set up all my security permissions, AMIs, snapshots, etc. again, or can I use the AWS Management Console for everything and Scalr just for auto scaling?
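
    For what it's worth, the part the console wouldn't do has to go through the Auto Scaling API one way or another; as a rough sketch of what any of these GUI tools does under the hood, here is the equivalent with the aws command line tools (all names, the AMI ID and the sizes are placeholders):

        aws autoscaling create-launch-configuration \
            --launch-configuration-name web-lc --image-id ami-12345678 --instance-type t2.micro

        aws autoscaling create-auto-scaling-group \
            --auto-scaling-group-name web-asg --launch-configuration-name web-lc \
            --min-size 1 --max-size 4 --availability-zones us-east-1a

        aws autoscaling put-scaling-policy \
            --auto-scaling-group-name web-asg --policy-name scale-out \
            --scaling-adjustment 1 --adjustment-type ChangeInCapacity

    The policy ARN returned by the last call is what a CloudWatch alarm's action is pointed at to trigger the scaling, which is the link the console's alarm screen was missing.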

    Read the article
