Search Results

Search found 4947 results on 198 pages for 'insurance policy administration'.


  • NGINX Configuration Error using Codex Example: Is This a Typo in Codex?

    - by jw60660
    I installed NGINX using this tutorial (C3M Digital NGINX Tutorial), but after reading Neal Poole's article on the security issues with "cut and paste" configuration tutorials, I decided to follow Poole's suggestion and use the configuration suggested in the WordPress codex (Codex on NGINX Configuration). I used the Codex configuration for a multisite installation using W3 Total Cache. When attempting to start NGINX I get an error saying that the /etc/nginx/nginx.conf test failed. The error message was:

        Restarting nginx: nginx: [emerg] unknown directive "//" in /etc/nginx/sites-enabled/teambrazil.com:18

    When I looked at my site-specific configuration at that path, I noticed the rewrite rule in the server block was:

        rewrite ^ $scheme://teambrazil.conf$request_uri redirect;

    That line in the Codex example was:

        rewrite ^ $scheme://mysite.conf$request_uri redirect;

    That looked like a mistake to me, so I changed my line to:

        rewrite ^ $scheme://teambrazil.com$request_uri redirect;

    I then attempted to restart NGINX but got the same error message. My question is: is that a mistake in the Codex example, and is there anything more I have to do besides restarting NGINX after making this change? As suggested by both tutorials, I set up the directories /etc/nginx/sites-enabled and /etc/nginx/sites-available and created the appropriate symbolic links using:

        touch /etc/nginx/sites-available/teambrazil.com
        ln -s /etc/nginx/sites-available/teambrazil.com /etc/nginx/sites-enabled/teambrazil.com

    Is there something else I need to consider after making this correction? Or was it not an error in the first place? I'm pretty stuck here. BTW, I am using Debian Squeeze as the OS on Amerinoc's VPS. I'm just getting familiar with VPS administration and am pretty much a noob. Thanks very much; I would appreciate any input.
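
    For reference, this is roughly how I've been checking the configuration after each change, just a quick sketch using the vhost path from the error message above:

        # Test the full configuration without restarting
        nginx -t

        # Look for stray "//" comment lines in the vhost (nginx only accepts "#" comments),
        # since the error points at line 18 of this file
        grep -n '//' /etc/nginx/sites-enabled/teambrazil.com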

    Read the article

  • IPtables: DNAT not working

    - by GetFree
    In a CentOS server I have, I want to forward port 8080 to a third-party webserver. So I added this rule:

        iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination thirdparty_server_ip:80

    But it doesn't seem to work. In an effort to debug the process, I added these two LOG rules:

        iptables -t mangle -A PREROUTING -p tcp --src my_laptop_ip --dport ! 22 -j LOG --log-level warning --log-prefix "[_REQUEST_COMING_FROM_CLIENT_] "
        iptables -t nat -A POSTROUTING -p tcp --dst thirdparty_server_ip -j LOG --log-level warning --log-prefix "[_REQUEST_BEING_FORWARDED_] "

    (The --dport ! 22 part is there just to filter out the SSH traffic so that my log file doesn't get flooded.) According to this page, the mangle/PREROUTING chain is the first one to process incoming packets and the nat/POSTROUTING chain is the last one to process outgoing packets. And since the nat/PREROUTING chain comes in the middle of the other two, the three rules should do this:

    1. The rule in mangle/PREROUTING logs the incoming packets.
    2. The rule in nat/PREROUTING modifies the packets (it changes the destination IP and port).
    3. The rule in nat/POSTROUTING logs the modified packets about to be forwarded.

    Although the first rule does log incoming packets coming from my laptop, the third rule doesn't log the packets which are supposed to be modified by the second rule. It does log, however, packets that are produced on the server, so I know the two LOG rules are working properly. Why are the packets not being forwarded, or at least why are they not being logged by the third rule?

    PS: there are no more rules than those three. All other chains in all tables are empty and with policy ACCEPT.
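
    For completeness, here is a sketch of the extra pieces I understand are usually needed alongside a DNAT rule when the target is another host - I haven't confirmed which of these apply to my setup, and the forwarding sysctl and explicit FORWARD rule are my own assumptions:

        # Make sure the kernel is allowed to route packets between interfaces
        sysctl -w net.ipv4.ip_forward=1

        # Let the rewritten packets pass the filter table
        # (already covered here, since the FORWARD policy is ACCEPT)
        iptables -A FORWARD -p tcp -d thirdparty_server_ip --dport 80 -j ACCEPT

        # Rewrite the source address as well, so the third-party server's replies
        # come back through this box instead of going straight to the client
        iptables -t nat -A POSTROUTING -p tcp -d thirdparty_server_ip --dport 80 -j MASQUERADE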

    Read the article

  • Picking a degree path...

    - by Chris
    I'll be going to the University of South Florida soon and have to choose between two degrees. I want to head into general server (IT) administration for a small/medium business: setting up computers, imaging, managing file servers / logon servers, etc. (* I had to change the http to hxxp in order to post.) The two degrees I'm currently choosing between are:

    - BSAS: hxxp://www.poly.usf.edu/Academics/AppliedAS/BSAS-IT/Program_of_Study.html
    - BSIT: hxxp://www.poly.usf.edu/IT/

    I like the idea of a BSAS because it'll get me out sooner, and then I can work on a few certifications to "match" the BSIT... I'm just worried companies will look at that as a "lesser" degree compared to a BSIT (or even a CS degree). What are your guys' thoughts on these two degrees? The BSIT has more math, for which I still have about two more classes to go through (I'll be heading to USF this August), while the BSAS doesn't require those two extra math classes. I keep hearing from people that when they hire you for your first job, they don't care which degree you have, as long as it's relevant and it's a 4-year degree. Is this true?

    Read the article

  • Too many concurrent connections Exchange 2010. What else is there to check?

    - by hydroparadise
    I thought that I had this under control before, but for some reason during our last email marketing promo I once again started receiving the following from our mass email client (built in house):

        The message could not be sent to the SMTP server. The transport error code is 0x800ccc67.
        The server response was 421 4.3.2 The maximum number of concurrent connections has exceeded a limit, closing transmission channel

    There are several places I've checked to make sure that wouldn't be an issue. First I checked that the receive connector was set to receive an adequate number of connections on our relay connector (1000 connections). Then I found out about Throttling Policies. I created one and set all the properties I knew to set to 1000: EWSMaxConcurrency, OWAMaxConcurrency, CPAMaxConcurrency, and CPAMaxConcurrency. Still, the email client starts receiving the error shortly after 100 messages have been sent, which takes about 15-30 seconds. The process is then repeatable, but the error still shows up at the same spot every time. Is there a rate setting that I am missing? Was there a Windows update that I missed looking at? Should the software have its own throttling feature?

    Read the article

  • Copy UNC network path (not drive letter) for paths on mapped drives from Windows Explorer

    - by Ernest Mueller
    I frequently want to share network paths to files with other folks on my team via email or chat. We have a lot of mapped drives here, both ones we set up ourselves and ones set up by our IT overlords. What I'd like to be able to do is to copy the full real path (not the drive letter) from Windows Explorer to send to folks. Example: I have a file in my "Q:" drive, \\cartman\users\emueller, and I want to send a link to file foo.doc to everyone. When I copy the file path (shift+right click, "copy as path") it gets the file name "Q:\foo.doc". This is unhelpful to others, who would like to see \\cartman\users\emueller\foo.doc, obviously. In Explorer it clearly knows it - in the address bar I see "Computer - emueller (\\cartman\users) (Q:) -". Is there a way to say "hey man, copy that path as text with the \\cartman\users\emueller, not the Q:, in it?" I know I could just set up mapped network locations instead of the mapped drives for the ones that I set up personally and avoid this problem, but most of the mapped drives like the "users" share come from our IT policy. I could just make a separate network location and then ignore my Q: drive, but that's inconvenient (and they do it so they can move accounts across servers). Sure, my emailed path might eventually break because I'm losing the drive letter indirection, but that's OK with me.

    Read the article

  • My Windows 7 has suddenly stopped displaying Unicode symbols

    - by Felix Dombek
    For some strange reason, my computer suddenly doesn't show certain Unicode characters anymore! I have no idea what happened. Affected applications include Windows Explorer (where Japanese characters should appear), Google Chrome (a heart symbol), and Winamp (stars). Russian, German, etc. characters are displayed normally. Chrome also displays Japanese script on websites, but not in the GUI. How can I fix it?

    Update: I have tried to use System Restore to fix it. I needed to go back in time quite a while because the most recent restore points didn't solve it, so I used one from the middle of November. After that restore, Unicode symbols were displayed again. Then I updated my system with Windows Update again, because those updates were removed during the restore. After that, the error occurred again! I then did a restore to a point before my new updates, but the error persists, the old restore point (which I used before) is gone, and there are currently no other snapshots of the system. Any suggestions on what to do now?

    Update 2: I did find a workaround: Control Panel → Region and Language → Administration → change the language for Unicode-incompatible programs to Japanese (Japan). All the mentioned programs display their symbols correctly again. However, I don't consider this a fix, because these programs are not actually Unicode-incompatible, and it also leads to some (non-serious) artifacts in some programs. I still welcome an answer that tells me what went wrong here and how to fix the issue.

    Read the article

  • Autologin 2 Windows users OR Login another user from the desktop

    - by fpdragon
    I'm using two Windows users on my HTPC at the same time. One is just for watching videos and one is for administration via remote. This setup is quite ideal for me, since Windows can handle multiple concurrent logins with the "RDP concurrent hack" (Google it). The problem is, I want both users to be logged in automatically when the PC is started. It should be possible to watch TV, and the admin user should also be logged in automatically to run my scripts and other tasks, even if I haven't logged in via Remote Desktop manually. Later, when I want to administer my HTPC, I can just connect to the admin user via RDP without interrupting the video playback on the HTPC's screen and check on my cleanup tasks, downloads, etc., which have already been running under that admin user. But right now I have found no solution to automatically log in user A from a user B desktop, and I also found no solution to auto-login both users immediately at startup. As a workaround I have to fire up my other notebook and log in once with the remote user via RDP. From that point on, the remote admin user runs concurrently with the main user in the background of the machine. The other workaround would be to switch users after startup from the main user to the admin user and then back again, but that also requires manual steps. I'm on a Windows 8 system right now, but any info for Win7 or XP would also be interesting. Thanks a lot for all ideas.

    PS: just to prevent useless posts... don't tell me that only one user can be logged in to Windows. ;)

    Read the article

  • Cannot remove storage account because of lease, but I already deleted the server [closed]

    - by djechelon
    I recently created a temporary virtual server on Azure, then deleted it. I wanted to delete the storage account associated with it because I didn't need it any more. The problem is that the VHD file is still associated with a non-existing virtual machine! If I try to delete the VHD from Virtual Machines\Disks, the Delete button is greyed out and the table tells me it's still associated with the old VM. If I go to storage administration and try to delete the blob from the vhds/ directory, I'm told there is an active lease. I've read on the Azure forums that, in this case, one should try to force-release the lease on the blob. I followed their instructions and downloaded their script, but running it failed: the script detected that the disk is associated with a Virtual Machine and can't be deleted. The problem is that I'm 1000000% sure that I already deleted the VM. In fact, I currently only have a single VM, which has its own HD and is up and running fine! What can I do to delete that storage account, which is probably sucking money from my pocket?

    Read the article

  • Overriding HOMEDRIVE and HOMEPATH as a Windows 7 user

    - by MikeC
    My employer has an Active Directory group policy which sets my Windows 7 laptop's HOMEDRIVE to "M:" (a mapped network drive) and my HOMEPATH to "\". Since I have read-only permissions for the root of that shared drive, I cannot create files or directories in my Windows home directory. My attempts to work with the IT department have been unsuccessful. Is there a way for me to globally change these envars at boot or login time? I need all applications to use alternate values (such as "C:" and "\Users\myname"). I have some installed utilities (like gvim and others) that store preference files in the user's home directory.

    IMPORTANT: Changing these envars under System Properties → Environment Variables does not work. I have tried setting them as both User and System variables (including a reboot). Typing SET HOME in a DOS window clearly shows that my settings are ignored. Also, using "Start in" in a Windows shortcut will not solve this either, as I need things like Explorer context menu items (like "Edit with Vim") to operate correctly. I do have admin rights on this company laptop, but I am not a Win7 guru. Back in the day, a boot script would have solved this in a minute. Is it even possible today? Thanks.

    Read the article

  • DVD Share on Vista Home Premium Failing

    - by hpunyon
    UPDATE:
    - I can't find any Local Policy Editor for Vista Home Premium, as suggested.
    - I did learn about the registry keys allocatecdroms, allocatefloppies, and allocatedasd, and tried adding these keys (individually and collectively) and setting them to both 0 and 1. There was no positive effect on read access to the DVD root folder - always Access Denied.

    ORIGINAL POST: Read access to the root folder of a DVD drive in a Vista Home Premium laptop fails when using the Guest account - Access Denied. The client is an XP Home PC that can see, but not access, the data in the share. I'm only trying to read the data DVD - not trying to write/burn anything. On the Vista laptop, I have:

    - All firewalls and antivirus disabled.
    - UAC disabled.
    - Password checking disabled.
    - "Advanced Shared" the DVD drive, with "Everyone" having full-access permissions to the share.
    - Tried adding Guest and Anonymous users with full-access permissions to the share.
    - RestrictAnonymous=0 set in the registry.
    - Both PCs in the same workgroup (MSHOME).

    The XP Home client sees the shared DVD in \\Vista_Hostname\ but when I double-click the drive icon on the client, I get a popup that access is denied, check with the administrator, etc. I can share other folders on the Vista PC and see and READ these from the XP Home client. If I enable password checking on the Vista side, I get a user/password popup, and I can authenticate (using my known Vista account, which happens to have admin rights) and then I can see and read the DVD data. I need to open this up so that the (default) Guest user can see and access the DVD data files.

    Read the article

  • Setting up a network where packets are traced

    - by Marcus
    My situation is the following: I have an internet connection which is shared between several people. More or less obviously, people are using it to download illegal stuff. Since I'm the owner of the connection, I want to avoid being sued. I don't want to prevent people from doing the things they want, but I want to be legally safe. Now, I have relatively little competence in network administration, so I was wondering:

    1. Is it possible to set up a network where the source and destination of the packets are logged? I would use this to prove, in case of a lawsuit, that the traffic was coming from a given machine.
    2. If the idea is feasible, is there any wireless router on which I can install Linux, where I can install the packet sniffer?
    3. How much space could the logs take (containing only the timestamp/source/destination), per GB of traffic? A very rough estimate would be very helpful.
    4. If a machine on my network is sending BitTorrent packets to a certain IP, would this log be able to reflect the time, source IP and destination IP? I assume that obviously the torrent data would be encrypted and un-decryptable.

    Am I missing something? Is there a better strategy? Any pointer to documentation would be helpful as well - in that case, I would use it as a starting point.
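
    To make the question more concrete, this is the kind of logging I had in mind - just a sketch, assuming a Linux-based router where the shared clients sit behind a LAN interface (the interface name and the ulogd suggestion are my own assumptions):

        # Log every new outbound connection that the router forwards for the LAN clients,
        # with a timestamp, source and destination written to the kernel log / syslog
        iptables -A FORWARD -i br-lan -m state --state NEW \
            -j LOG --log-prefix "FWD-NEW: " --log-level info

        # (Alternatively, the NFLOG target plus ulogd could write the same data to its
        #  own file instead of mixing it into the kernel log.)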

    Read the article

  • How to encourage Windows administrators to pick up scripting?

    - by icelava
    When I worked as an administrator in my first job, I was frustrated that our administration processes with Windows servers were a series of point-and-clicks; we could never match the level of efficiency of the Unix servers, which had a group of shell scripts to automate a lot of the work. I soon read about WSH and ADSI and wasted no time learning just how much automation I was able to achieve with scripting. There was a huge problem though - almost none of my Windows colleagues were really interested in learning scripting. They seemed happy with the manual mouse-clicking chores and were never excited at the prospect of using scripts to do the work on their behalf. I struggled to convince them to pick up scripting skills despite the evident increases in efficiency. I left that job in pursuit of a full-time software development career thereafter. Almost a decade on, working in various environments and with different customers, I still encounter Windows administrators mainly possessing this general "mood" where they avoid scripting as much as possible, despite the increasing accessibility Windows server technologies are opening up for scripting and automation. I am almost certain the majority of administrators are administrators precisely because they absolutely hate performing any kind of programming duties. What are some means to encourage and motivate administrators, and to show them that scripting can really help them in the long run?

    Read the article

  • Block users from Social networking websites while firewall is down

    - by SuperFurryToad
    We currently have a SonicWall firewall, which does a pretty good job of blocking social networking websites like Facebook and Bebo. The problem we are having is that sometimes we need to temporarily disable our firewall blocklist so we can update our company's page on Facebook, for example. Whenever we do this, we see an avalanche of users logging on to their Facebook pages during work time. So what we need is a way to block access while the firewall blocklist is down. For the sake of argument, we have two groups of users - "management" and "standard users". "Standard users" would have no access to Facebook, but "management" users would have access. Perhaps something like a hosts file redirect for non-management users. This could probably be enforced via group policy that would call a bat file to copy down the hosts file, depending on whether the user is management or not. I'm keen to hear any suggestions for what the best practice would be for this in a Windows/AD environment. Yes, I know what we're doing here is trying to solve an HR problem using IT. But this is the way management wants it, and we have a lot of semi-autonomous branch offices that we don't have a lot of day-to-day contact with, so an automated way of enforcing this would be the most preferable method.

    Read the article

  • How to access a port via OpenVPN only

    - by Andy M
    I've set up an OpenVPN server alongside an Apache website that can only be accessed on port 8100 on the same machine. My /etc/openvpn/server.conf file looks like this:

        port 1194
        proto tcp
        dev tun
        ca ./easy-rsa2/keys/ca.crt
        cert ./easy-rsa2/keys/server.crt
        key ./easy-rsa2/keys/server.key  # This file should be kept secret
        dh ./easy-rsa2/keys/dh1024.pem   # Diffie-Hellman parameter
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        # make sure clients can still connect to the internet
        push "redirect-gateway def1 bypass-dhcp"
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        status openvpn-status.log
        verb 3

    Now I tried to let only clients connected to the VPN network access the website on Apache via port 8100. So I defined a few iptables rules:

        #!/bin/sh
        # My system IP/set ip address of server
        SERVER_IP="192.168.0.2"
        # Flushing all rules
        iptables -F
        iptables -X
        # Setting default filter policy
        iptables -P INPUT DROP
        iptables -P OUTPUT DROP
        iptables -P FORWARD DROP
        # Allow incoming access to port 8100 from OpenVPN 10.8.0.1
        iptables -A INPUT -i tun0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o tun0 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT
        # outgoing http
        iptables -A OUTPUT -o tun0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A INPUT -i tun0 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT

    Now when I connect to the server from my client computer and try to access the website at 192.168.0.2:8100, my browser can't open it. Will I have to forward traffic from tun0 to eth0? Or is there anything else I'm missing?
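
    In case it helps, these are the quick checks I was planning to run from the server and from a connected client to narrow it down - just a sketch; the tun address 10.8.0.1 for the server end and the use of ss/curl are my own assumptions:

        # On the server: confirm Apache is really listening on 8100 and on which addresses
        ss -tlnp | grep 8100

        # From a connected VPN client: try reaching the site over the tunnel address
        curl -v http://10.8.0.1:8100/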

    Read the article

  • Under FreeBSD, can a VLAN interface have a smaller MTU than the primary interface?

    - by larsks
    I have a system with two physical interfaces, combined into a LACP aggregation group. That LACP channel has two VLANs, one untagged (the "native VLAN") and one using VLAN tagging. This gives us:

        lagg0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
                options=19b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4>
                ether 00:25:90:1d:fe:8e
                inet 10.243.24.23 netmask 0xffffff00 broadcast 10.243.24.255
                media: Ethernet autoselect
                status: active
                laggproto lacp
                laggport: em1 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>
                laggport: em0 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>
        vlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
                options=3<RXCSUM,TXCSUM>
                ether 00:25:90:1d:fe:8e
                inet 10.243.16.23 netmask 0xffffff80 broadcast 10.243.16.127
                media: Ethernet autoselect
                status: active
                vlan: 610 parent interface: lagg0

    Is it possible to set a 9K MTU on lagg0 while preserving the 1500-byte MTU on vlan0? Normally I would simply try this out, but this is actually on a vendor-supported platform and I am loath to make changes "behind the back" of their administration interface. This system is roughly FreeBSD 7.3.
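
    For what it's worth, if this weren't a vendor-managed box, my understanding is that the experiment itself would just be the following (a sketch, not something I have run on this system; the 9000-byte value is an assumption about what "9K MTU" means in practice):

        # Raise the MTU on the lagg parent (the physical em0/em1 ports must support jumbo frames)
        ifconfig lagg0 mtu 9000

        # Explicitly pin the VLAN interface back to 1500 if it inherited the larger value
        ifconfig vlan0 mtu 1500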

    Read the article

  • Legal IT documents

    - by TylerShads
    I have been wondering this past week, because my big boss told me to start keeping track of all the things I have fixed, how to fix them, etc., which is reasonable and which I have been doing anyway. But then a related question came to mind: what kind of documentation should I have on hand as far as users go? More specifically, I am talking in terms of a EULA, ToC, etc. (correct me please if I'm using the wrong terms), or more specifically a policy, so to speak, for the users. I can't say I'm a legal expert, otherwise I'd be a lawyer. The environment the users are in is pretty laid back, so I don't foresee a problem. But assuming a problem should ever arise, what should I have written up / have on hand?

    EDIT: I really should have noted that we are a medical transport facility and have patient records, so I know that something must be done to comply with HIPAA policies, I believe. I do like what anthonysomerset said about the "if I get hit by a bus" scenario, and I want to apply it not only to the documentation I am currently writing but also to edge cases, say, if an employee were to steal info from the server, theft, etc. As for our staff, it's relatively small: a single HR person, no legal department aside from the two owners' lawyers, and me being the only IT person on staff, plus a guy who is no more than a Mac superuser.

    Read the article

  • Preventing applications from performing run once tasks for multiple users

    - by JohnyV
    In our environment we have several installed applications that show a little first-run prompt the first time they run, e.g. Media Player, Google Earth, etc. The problem is we have many users on many different computers, and the computers have DeepFreeze running on them, which removes the user's profile once the computer is restarted. So the next time that user logs in they have to go through the whole run-once routine again. I have managed to prevent the IE run-once using group policy, and the Office run-once using the Office Customization Tool. Is there a way to make this happen for other applications? On Windows XP we used to copy the profile of a user who had run all the apps into the default profile, so that new users got that profile as a template. Now with Windows 7 the process of copying profiles is not as easy. Is there an easy way to copy profiles in Win 7, or is there a better way (e.g. modifying the registry or app data) to prevent apps from performing an initial run? Thanks

    Read the article

  • Anonymous file sharing without login window, from Windows 7 server to XP clients

    - by Niten
    I'm trying to provide machines on a small LAN with read-only, anonymous access to files shared from a Windows 7 workstation (let's call it WIN7SVR). In particular, I don't want clients to have to deal with a login window when they navigate to, e.g., \\WIN7SVR in Windows Explorer, but we do not have a domain and synchronizing accounts between the server and clients would be intractable. There are both Windows 7 and Windows XP clients that need access to these shares. I got this working for Windows 7 clients by just enabling the Guest account on WIN7SVR and setting appropriate share permissions. Other Windows 7 machines automatically try logging in as Guest, it seems, so their users don't have to deal with the login window. The problem is with the XP clients - they can access the server if the user enters "Guest" in the login window, but I don't want users to have to do that. So from what I gather, in my limited understanding of Windows file sharing, this boils down to granting null sessions access to file shares on WIN7SVR. But I've had no success so far on that front. I've tried all the following in the local group policy editor on the Windows 7 server:

    - Set "Network access: Let Everyone permissions apply to anonymous users" to Enabled
    - Set "Network access: Restrict anonymous access to Named Pipes and Shares" to Disabled
    - Added the names of corresponding shares to "Network access: Shares that can be accessed anonymously"
    - Added "ANONYMOUS LOGON" to "Access this computer from the network" under User Rights Assignment

    Any advice would be highly appreciated... I'm mostly a Unix guy, so I feel somewhat out of my league with Windows file sharing. I do understand that any sort of anonymous access to file shares isn't generally ideal from a security standpoint, but it's the most practical solution for us in this case, and access to our network is well enough controlled that share-level security isn't a concern.

    Read the article

  • Certificates required for WHQL-certified drivers

    - by Kasius
    The 64-bit Windows 7 image that we deploy to machines at our site does not contain all of the certificates included on a default Windows image. Automatic root certificate installation is also disabled per policy from higher in the organization. We have had a lot of trouble installing many WHQL-certified drivers from reputable companies (e.g. HP, Lexmark, Dell, etc.), and I hypothesize that a required certificate is missing from one of the certificate stores on the machine. The error we typically get is:

        The driver cannot be installed because it is either not digitally signed or not signed in the appropriate manner.

    I know that it is signed. A .CAT file is included, and it has the following tree from top to bottom:

    - Microsoft Root Authority (thumbprint a4 34 89 15 9a 52 0f 0d 93 d0 32 cc af 37 e7 fe 20 a8 b4 19)
    - Microsoft Windows Hardware Compatibility PCA (thumbprint 93 b8 d8 82 0a 32 db 20 a5 ea b6 8d 86 ad 67 8e fa 14 ea 41)
    - Microsoft Windows Hardware Compatibility Publisher (thumbprint b0 50 45 45 42 4e be 2c 16 2f 62 5b bf 5a e6 9b 96 bf 0b 0b)

    What certificates are required to install WHQL-certified drivers? Or is it possibly something other than certificates? Thanks!

    NOTE: I have posted this question on TechNet as well, but honestly, I've never had a lot of luck posting questions on the TechNet forums.

    Read the article

  • XP/Intel wireless only showing 'hpsetup' ad-hoc network that isn't there

    - by ewall
    Trying to help my friend with her work XP laptop, which recently stopped seeing any wireless SSIDs except the SSID 'hpsetup' (presumably from a wireless-enabled HP printer). Relevant information:

    - The laptop is a Lenovo T500 (Centrino 2 chipset) with XP SP3.
    - The network adapter is an Intel WiFi Link 5300 AGN (built-in).
    - Only the latest version (13.5) of the Intel drivers is installed, not the Intel config software, so XP is using the Wireless Zero-Config manager.
    - The wireless router is a NetGear WGR614 v7 with 802.11b/g.
    - The SSID is broadcasting, and all the other laptops in the house can see and connect to it.

    On the laptop, I have tried repairing the network connection, disabling power management, turning off the 802.11a & n radios, and more... but it didn't help. Some of the wireless settings are managed by Group Policy from her office (I get the "At least one of your changes was not applied successfully to your wireless configuration" message). It is enforced to connect to "Access point (infrastructure) networks only". The real kicker is that my laptop does not see an SSID named 'hpsetup' here, and it can see several broadcast SSIDs including the one we want, while my friend's laptop doesn't see any SSID except 'hpsetup'. Any suggestions?

    Read the article

  • Why can't I see all of the client certificates available when I visit my web site locally on Windows 7 IIS 7?

    - by Jay
    My team has recently moved to Windows 7 for our developer machines. We are attempting to configure IIS for application testing. Our application requires SSL and client certificates in order to authenticate.

    What I've done:

    - I have configured IIS to require SSL and require (and also tried accept) certificates under SSL Settings.
    - I have created the https binding and set it to the proper server certificate.
    - I've installed all the root and intermediate chain certificates for the soft certificates properly in the Current User and Local Machine stores.

    The problem: when I browse to the web site, the SSL connection is established and I am prompted to choose a certificate. The issue is that the only certificate offered is one created by my company that would be invalid for use in the application; I am not given the soft certificates that I have installed using MMC and IE. We are able to use the soft certs from our development machines against the Windows 2008 servers that host the application.

    What I did: I have attempted to copy the Root CA to every folder location in the Current User and Local Machine account stores that the company certificate's root is in.

    My questions:

    - Could I be mishandling the certs anywhere else?
    - Could there be a local/group policy that could be blocking the other certs from use?
    - What (if anything) should be done differently on Windows 7 compared to 2008 in regards to IIS?

    Thanks for your help.

    Read the article

  • Linux container bridge filters ARP replies

    - by Dani Camps
    I am using kernel 3.0, and I have configured a Linux container that is bridged to a tap interface on my host computer. This is the bridge configuration:

        :~$ brctl show bridge-1
        bridge name     bridge id               STP enabled     interfaces
        bridge-1        8000.9249c78a510b       no              ns3-mesh-tap-1
                                                                vethjUErij

    My problem is that this bridge is dropping ARP replies that come from the ns3-mesh-tap-1 interface. On the other hand, if I statically populate the ARP tables and ping directly, everything works, so it has to be something related to ARP. I have read about similar problems in related posts, and I have tried the solutions explained therein, but nothing seems to work. Specifically:

        ~$ grep net.bridge /etc/sysctl.conf
        net.bridge.bridge-nf-call-arptables = 0
        net.bridge.bridge-nf-call-iptables = 0
        net.bridge.bridge-nf-call-ip6tables = 0
        net.bridge.bridge-nf-filter-vlan-tagged = 0
        net.bridge.bridge-nf-filter-pppoe-tagged = 0

    arptables and ebtables are not installed. The iptables FORWARD chain is set to accept everything:

        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination

    The bridged interfaces are set to PROMISC:

        ~$ ifconfig
        ns3-mesh-tap-1 Link encap:Ethernet  HWaddr 1a:c7:24:ef:36:1a
                  ...
                  UP BROADCAST PROMISC MULTICAST  MTU:1500  Metric:1

        vethjUErij Link encap:Ethernet  HWaddr aa:b0:d1:3b:9a:0a
                  ...
                  UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1

    The MACs learned by the bridge are correct (checked with brctl showmacs). Any insight into what I am doing wrong would be greatly appreciated.

    Best regards,
    Daniel
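
    In case it is useful, this is how I have been watching the ARP traffic to confirm where the replies disappear - just a sketch of the capture commands I'm running, nothing exotic:

        # Watch ARP on the tap interface feeding the bridge (the replies show up here)
        tcpdump -ni ns3-mesh-tap-1 -e arp

        # Watch ARP on the bridge itself and on the container's veth end
        tcpdump -ni bridge-1 -e arp
        tcpdump -ni vethjUErij -e arp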

    Read the article

  • SFTP, ChrootDirectory and multiple users

    - by mdo
    I need a setup where I can put the contents of several user folders onto a DMZ server from where external clients can download them; protocol SFTP, Linux, OpenSSH. To ease administration, we want to use one single user for the upload. What does work is to define ChrootDirectory /home/sftp/ in sshd_config, set the appropriate ownership and modes, and define a home dir in passwd so that the working directory of the user fits. This is my structure:

        /home/sftp/uploader/user1/file1.txt
                           /user2/file2.txt

    The uploader user can write file1.txt and file2.txt to the corresponding folders, and by setting the user folders (user1, user2) to the users' primary group plus setting the setgid bit on the folders, the users are even able to delete the files (which is necessary). The only problem: because /home/sftp/ is the chroot base dir, the users can change up a directory and see other users' folders, though they are not able to change into them because of access rights.

    Requirement: we want to prevent users from changing to /home/sftp/uploader/ and seeing other users' folders. My requirements are to use SFTP, have one upload user, and every user must have write access to his home dir. Obviously it's not an option to use something like ChrootDirectory %h, because every path component of the chroot path needs to have limited access rights, so as far as I understand this does not work.
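
    To make the current setup easier to reproduce, here is roughly what it boils down to - a sketch from memory rather than a copy from the box, so the internal-sftp subsystem line and the exact modes are assumptions on my part:

        # sshd_config (relevant part)
        #   Subsystem sftp internal-sftp
        #   ChrootDirectory /home/sftp/

        # the chroot base must be root-owned and not group/world-writable
        chown root:root /home/sftp
        chmod 755 /home/sftp

        # per-user folders: group set to the user's primary group, setgid, group-writable,
        # so the single "uploader" account can write and the user can still delete
        chgrp user1 /home/sftp/uploader/user1
        chmod 2770 /home/sftp/uploader/user1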

    Read the article
