Search Results

Search found 15208 results on 609 pages for 'andys crazy questions'.

Page 457 of 609

  • outlook security alert after adding a second wireless access point to the network

    - by Mark
    Just added a Netgear WG103 Wireless Access Point in our conference room to allow visitors to access the internet through our internal network. When it is switched on, visitors can connect to the internet and everything works fine. Except, when the Access Point is switched on, normal users of the network get a Security Alert when they try to start Outlook 2007. The Security Alert is the same as the one shown in question 148526 asked by desiny back in June 2010 (http://serverfault.com/questions/148526/outlook-security-alert-following-exchange-2007-upgrade-to-sp2), except that rather than "autodiscover.ad.unc.edu", my security alert references our "Remote.server.org.uk". If I view the certificate it relates to "Netgear HTTPS:....", but the only Netgear equipment we have is the new Access Point installed in the conference room. If the Access Point is not switched on we do not get the Security Alert. At first I thought it was because we had selected "WPA-PSK & WPA2-PSK" Network Authentication Type, but it continues to occur even if we opt for "Shared Key" WEP Data Encryption. I do not understand why adding a Netgear Wireless Access Point would cause Outlook to issue a Security Alert when users try to read their email. Does anyone know what I have to do to get rid of the Security Alert? Thanks in advance for reading this and helping me out.
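
    A minimal diagnostic sketch (not from the original post, and assuming a machine with the standard OpenSSL and nslookup tools): compare the certificate presented for the Exchange hostname mentioned above with the access point powered on and then off. If the "Netgear HTTPS" certificate only appears while the AP is on, the AP is answering (or intercepting) that name, which would explain the Outlook alert.

        # which certificate is actually served for the name Outlook complains about?
        openssl s_client -connect remote.server.org.uk:443 -servername remote.server.org.uk </dev/null 2>/dev/null \
          | openssl x509 -noout -subject -issuer

        # and which address do clients resolve for it? Some access points run their
        # own DHCP/DNS and hand out their own address for local lookups.
        nslookup remote.server.org.uk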

    Read the article

  • Intermittent extremely long response times when downloading documents

    - by pap
    I have a Java web application running on Tomcat 7 with an Apache httpd 2.2 front end using mod_jk/AJP. One part of the application serves files (up to 4 MB in size). Normally this all runs very smoothly with stable, low response times. However, in rare instances (<0.1% of downloads), the download time will go beyond 1 minute. After activating the ThreadStuckValve in Tomcat, I can see that the long responses seem to be stuck at org.apache.tomcat.jni.Socket.sendbb(Native method), i.e. network I/O. At most, these long-running downloads take 5 minutes, which I strongly suspect is because of the default 300 second timeout in Apache 2.2 (http://httpd.apache.org/docs/2.2/mod/core.html, "TimeOut directive"). To me, this looks like network problems. The Apache timeout (if that is what is kicking in at the 5 minute mark) indicates that ACK packets are not being transmitted correctly. My questions are: what could be causing this? A closed browser at the receiving end, but the socket not signaled as closed properly? Packet loss or some other network failure in transit? Where would I start troubleshooting this? We're running Tomcat and Apache on Windows Server 2008 R2 in a VMware virtualized server.

    Read the article

  • Running HTTP and HTTPS connections for a single domain (say, www.example.com) through a Cisco ACE SS

    - by Paddu
    My web application config has a Cisco ACE load balancing across a server farm and I want to use the ACE as an SSL endpoint as well. To make this work, the network architect has come up with a design where all secure pages have to be served from secure.my-domain.com, while non-secure pages are served up from www.my-domain.com. The reason for this is apparently that configuring the Cisco ACE to accept HTTPS requests on port 443 for a particular public IP prevents the simultaneous acceptance of HTTP requests on port 80 for the same IP. While I'm not a networking (or Cisco) expert, this seems intuitively wrong, as it would prevent any website using the Cisco ACE from serving pages on http://www.my-domain.com and https://www.my-domain.com simultaneously. In this situation, my questions are: Is this truly a limitation of the Cisco ACE when used as an SSL endpoint? If not, then can I assume that we can set up the ACE to accept connections for a particular IP on ports 80 and 443, and function as an SSL endpoint for the incoming requests on 443? Links to appropriate documentation are most welcome here. Assuming the setup in the previous question, can I then redirect both sets of requests to the same server farm on the same port?

    Read the article

  • Move the uploads folder in Wordpress

    - by Victor Hurdugaci
    Currently, my WordPress uploads folder is located in \wp-content\uploads. Initially there was no structure, so all files were put straight into that folder. After a while this was changed so that files are uploaded to \wp-content\uploads\YEAR\MONTH. Now that folder contains a mix of files (those starting with + are folders) like:

        +wp-content
        | +2010
        | | +02
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | | +01
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | +2009
        | | +12
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | | +11
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | +..
        | | | ..
        | Unstructured-file-1
        | Unstructured-file-2
        | ...
        | Unstructured-file-n

    Based on the dates of the unstructured files, I would like to move them into a structured hierarchy (based on date, move each one to a folder \wp-content\uploads\YEAR\MONTH). Now, my questions are: Where do I write and execute a script to do the moving (I don't have full access to the server, just to cPanel and to the WordPress admin page)? What must be updated so that the links in posts that reference the unstructured files point to the new location of those files? Not fully related to the previous description: is it alright to move the whole uploads folder to another location, like \uploads? PS: Moving the files/updating the database manually is not an option :)
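
    An illustrative sketch of the general idea (an assumption, not from the original post): move each loose file into YEAR/MONTH based on its modification time, then rewrite references in the posts table. It assumes shell or cron access (e.g. a cPanel cron job), a standard wp_ table prefix, and that the files' modification times still match the original upload dates; attachment metadata such as _wp_attached_file in wp_postmeta may need the same treatment.

        cd /path/to/wp-content/uploads || exit 1        # placeholder path

        for f in *; do
          [ -f "$f" ] || continue                       # skip the existing YEAR folders
          y=$(date -r "$f" +%Y)                         # year from the file's mtime
          m=$(date -r "$f" +%m)                         # month from the file's mtime
          mkdir -p "$y/$m"
          mv -n "$f" "$y/$m/"
          # rewrite references to the old flat URL in post content and attachment GUIDs
          mysql -u dbuser -p'dbpass' wordpress -e "
            UPDATE wp_posts
               SET post_content = REPLACE(post_content, 'wp-content/uploads/$f', 'wp-content/uploads/$y/$m/$f'),
                   guid         = REPLACE(guid,         'wp-content/uploads/$f', 'wp-content/uploads/$y/$m/$f');"
        done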

    Read the article

  • Setting up MySQL Linux slave with a Windows master

    - by philwilks
    I'm running a MySQL 5.0 database server on Windows Server 2008. The total size of the database is about 1Gb. I make daily backups, but I'd like to step up to having a slave server for extra protection. My thinking was that I wouldn't need the expense of a Windows machine to do this, and a Linux "cloud server" from RackSpace would do the job well for quite a low cost. However I have little experience with Linux, so I have a few questions... Does this sound like a good idea? Is there anything wrong with linking Windows and Linux MySQL servers? Does Linux have the equivalent of Remote Desktop Connection? If so can I use this from a Windows machine? Would a particular Linux distro be well suited to this task? RackSpace offer ArchLinux, CentOS, Debian, Fedora and Ubuntu. My immediate thinking is to go with Ubuntu as I've heard it's more friendly for people coming from a Windows background. Any comments you have would be very appreciated! Phil
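
    For reference, a minimal sketch of what the standard MySQL replication setup looks like for this kind of Windows-master/Linux-slave pair (placeholders throughout, not taken from the post; check the MySQL 5.0 replication docs before relying on it). The slave would first be seeded with a mysqldump of the master's data.

        # --- on the Windows master (my.ini), then restart the MySQL service ---
        #   [mysqld]
        #   server-id = 1
        #   log-bin   = mysql-bin

        # --- on the master: replication account, then note File and Position ---
        mysql -u root -p -e "GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'secret';"
        mysql -u root -p -e "SHOW MASTER STATUS;"

        # --- on the Linux slave (my.cnf: server-id = 2), point it at the master ---
        $ mysql -u root -p
        mysql> CHANGE MASTER TO
            ->   MASTER_HOST='windows-master.example.com',
            ->   MASTER_USER='repl',
            ->   MASTER_PASSWORD='secret',
            ->   MASTER_LOG_FILE='mysql-bin.000001',
            ->   MASTER_LOG_POS=0;
        mysql> START SLAVE;
        mysql> SHOW SLAVE STATUS\G

    (On the "Remote Desktop" question: a headless Linux slave is normally administered over SSH rather than a graphical desktop, which a Windows SSH client such as PuTTY can provide.)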

    Read the article

  • What is the peak theoretical WiFi G user density? [closed]

    - by Bigbio2002
    I've seen a few WiFi capacity planning questions, and this one is related, but hopefully different enough not to be closed. Also, this is related specifically to 802.11g, but a similar question could be made for N. In order to squeeze more WiFi users into a space, the transmit power on the APs needs to be reduced and the APs squeezed closer together. My question is, how far can you practically take this before the network becomes unusable? There will come a point where the transmit power is so weak that nobody will actually be able to pick up a connection, or clients will be constantly roaming to/from APs spaced a few feet apart as they walk around. There are also only 3 non-overlapping channels to use, which is a factor to consider. After determining the peak AP density, you can then multiply by users-per-AP, which should be easier to find out. After factoring all of this in and running some back-of-the-envelope calculations, I'd like to be able to get a figure of "XX users per 10ft^2" or something. This can be considered the physical limit of WiFi, and will keep people from asking about getting 3,000 people in a ballroom conference on WiFi. Can anyone with WiFi experience chime in, or better yet, provide some calculations for a more accurate figure? Assumptions: Let's assume an ideal environment with no reflection (think of a big, square, open room, with the APs spaced out on a plane), APs are placed on the ceiling so humans won't absorb the waves, and the only interference is from the APs themselves and the devices. As for what devices specifically, that's irrelevant for the first part of the question (AP density, so only channel and transmit power should matter). User experience: Wikipedia states that Wireless G has about 22Mbps maximum effective throughput, or about 2.75MB/s. For the purpose of this question, anything below 100KB/s per user can be deemed a poor user experience. As for roaming, I'll assume the user is standing in the same place, so hopefully that will be a non-issue.
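
    As a hedged starting point (using only the figures already given above, not measured data), the per-cell arithmetic works out roughly like this:

        22 Mbps effective throughput   ≈ 2.75 MB/s shared per AP (per channel cell)
        2.75 MB/s ÷ 0.1 MB/s per user  ≈ 27 simultaneous users per AP
        3 non-overlapping channels     → up to 3 co-located cells → ≈ 80 users per contiguous coverage area

    Dividing that last figure by the smallest cell footprint that still gives usable signal and tolerable co-channel interference would yield the "users per 10ft^2" style number the question asks for; that footprint is the part that really needs measurement rather than back-of-the-envelope math.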

    Read the article

  • Laptop shuts down randomly without warning

    - by Robert P.
    My Asus Zenbook UX32V turns off randomly when I'm working on it. This happens both when the computer has recently been turned on (5 minutes) and after it has been on for several days.

    - I'm not running any heavy software
    - The laptop is not heating
    - The fan is not working at maximum capacity (it's not heating)
    - It happens when the laptop is lying still on the table
    - There is no warning, it simply goes black
    - It happens both when charging and on battery

    My guess is that it suddenly loses power somehow. What puzzles me is that I can flip the laptop upside down, sideways, shake it, etc. without it shutting off. This makes me think it's not something loose causing occasional short circuits. I realize that the laptop probably doesn't like flipping and shaking, but it was the best way I could troubleshoot. I rarely turn the computer off; I only have it in hibernate or sleep mode (most often hibernate). I've never experienced the laptop being off when I wake it up from sleep mode. I've had the problem for a few months and it happens 2-8 times a week. Specs:

    - Asus Zenbook UX32V
    - Windows 8.1 (it happened in Windows 8.0 too)
    - Intel i5-3317U CPU @ 1.70GHz

    The laptop is approx. 1.5 years old, but it has a small dent on one of the sides that probably voids the warranty. The dent has been there since week one and I don't think it's related to the problems I'm having now. Does anyone have a clue what might cause this, and how it might be fixed? I've read all the other questions that seem related to my issue (some of which are listed below), but none report the same behavior as I'm experiencing. Most report heavy games, heating etc.

    - Asus N53J Laptop randomly shuts down
    - Laptop is randomly shutting off
    - Computer shuts down without warning
    - My laptop acer aspire 5720 suddenly turn off randomly
    - Computer randomly shutting down
    - Windows 8.1 randomly shuts self down
    - ASUS K55VM Laptop unexpectedly shuts down

    Read the article

  • Attaching 3.5" desktop drive to MacBook SATA

    - by Kyle Cronin
    I have a mid-2007 MacBook that, according to the Apple Store, has suffered some liquid damage and requires a new logic board to operate correctly, a ~$750 repair I've been told (it would normally be around ~$300 were it not for the "liquid damage"). The unit itself works fine - the only problem I've been having is that the system does not recognize the battery and will not charge it. Curiously, the system can still be powered by the battery and even recognizes when the power cord is detached by dimming the backlight, but I digress. Now that this laptop will likely become a desktop, I'm wondering if it might be possible to attach a desktop drive. I recently purchased a 2TB SATA drive and I'm wondering if it's possible to somehow attach it where the current internal drive connects. Obviously the drive itself will not fit inside the device, but as the unit will spend the rest of its days on my desk, that's not really much of an issue. My main questions are: Is this possible? If so, how would I connect the drive? Would a SATA extender cable work? Is the SATA port on my MacBook capable of powering a desktop drive? Or should I just get a SATA male-to-female cable and see if I can power the drive through other means (a cheap power supply, for example)? The disk I'm referring to is the Hitachi Deskstar HD32000. Though I couldn't find that exact model on Hitachi's support site, these are the power requirements for a similar drive, the 7K2000 (2TB, 7200RPM, SATA II):

        Power requirement          +5 VDC (+/-5%), +12 VDC (+/-10%)
        Startup current (A, max.)  1.2 (+5V), 2.0 (+12V)
        Idle (W)                   7.5

    From what I've read, 2.5" drives require 5V, meaning that my MacBook obviously is capable of producing it. The specs seem to suggest that this drive is capable of accepting it instead of the typical 12V - is this an accurate interpretation of the power requirements? Or does it need both 12V and 5V?

    Read the article

  • VPN messes up DNS resolution

    - by user124114
    After connecting with the Kerio VPN client (OS X Leopard) to a server, the internet (~web browsing) stopped working for the client. After poking around, the issue seems to be a bad DNS server (i.e., entering IPs directly works). After disconnecting from the VPN, the invalid DNS server disappears from scutil --dns and all's well again. Now, I don't understand why OS X on the client even changes the DNS settings -- internet traffic should be routed through a different interface, through the default gateway, not through the VPN. Questions: By what mechanism does connecting the VPN client change the "default" DNS server? How can I stop the VPN client from changing routing/DNS rules? Where is this stuff stored/modified?

    Before VPN:

        $ scutil --dns
        DNS configuration
        resolver #1
          nameserver[0] : 10.66.77.1    # <---- default gateway = home router; all good
          order : 200000
        resolver #2
          domain  : local
          options : mdns
          timeout : 2
          order   : 300000
        ...

    VPN connected:

        $ scutil --dns
        DNS configuration
        resolver #1
          nameserver[0] : 192.168.1.1   # <--- rubbish
          nameserver[1] : 192.168.2.1
          order : 200000
        resolver #2
          domain  : local
          options : mdns
          timeout : 2
          order   : 300000
        ...

    The VPN doesn't appear among $ networksetup -listallnetworkservices.
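
    A hedged sketch of where to look and one possible workaround (service names and addresses are placeholders, not from the post): the VPN client publishes its DNS servers into the System Configuration dynamic store, which is what scutil is showing; statically pinning the DNS servers on the primary network service is worth testing as a way to keep the VPN-supplied resolvers from taking over general browsing.

        # see which network service the bogus resolvers are attached to
        scutil --dns
        networksetup -listallnetworkservices

        # pin DNS for the primary service (use the real service name and router IP)
        networksetup -setdnsservers "Ethernet" 10.66.77.1

        # revert to DHCP-supplied DNS later
        networksetup -setdnsservers "Ethernet" empty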

    Read the article

  • Someone used or hacked my computer to commit a crime? what defense do I have?

    - by srguws
    Hello, I need IMMEDIATE help on a computer crime that I was arrested for. It may involve my computer, my IP, and my ex-girlfriend being the true criminal. The police do not tell you much; they are very vague. I was charged though! So my questions are:

    - If someone did use my computer at my house and business to post a rude craigslist ad about a friend of my girlfriend at the time, from a fake email address, how can I be the ONLY suspect? Also, how can I be charged? I have noticed over the last few days that there are many ways to use other people's computers, connections, etc. Here are a few things I found: you can steal or illegally use an IP address or MAC address; a dynamic IP is less secure and more vulnerable than a static one; people can sidejack and spoof your MAC, IP, etc.; there is also something called ARP spoofing. I am sure there are more things, but how can I prove that this happened to me or didn't happen to me?

    - The police contacted Craigslist, the victim, AOL, and the two ISP companies. They say they traced the IPs to my business and my home. My ex, who I lived with and had a business with, has access to the computers and the keys to both buildings. My brother also lives and works with me. My business has many teenagers who use the computer and wifi. My brother is a college kid and also has friends over the house, and they use the computer freely. So how can they say it was me, rather than an angry ex-girlfriend?

    Read the article

  • Enable gzip on Nginx

    - by Rob Wilkerson
    Yes, I know that there are a lot of other questions that seem exactly like this out there. I think I must've looked at all of them. Twice. In desperation, I'm adding another in case my specific configuration is the issue. Bear with me. First, the question: What do I need to do to get gzip compression to work? I have an Ubuntu 12.04 server running nginx 1.1.19. Nginx was installed with the following packages: nginx, nginx-common, nginx-full. The http block of my nginx.conf looks like this:

        http {
            include /etc/nginx/mime.types;
            access_log /var/log/nginx/access.log;
            sendfile on;
            keepalive_timeout 65;
            tcp_nodelay on;
            gzip on;
            gzip_disable "msie6";
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    Both PageSpeed and YSlow are reporting that I need to enable compression. I can see that the request headers include Accept-Encoding: gzip,deflate,sdch, but the response headers do not have the corresponding Content-Encoding header. I've tried various other config values (gzip_vary on, gzip_http_version 1.0, etc.), but no joy. As far as I know, I can only assume that nginx was compiled with compression support, but if there's any way to verify that, I'd love to know. If anyone sees anything I'm missing or can suggest further debugging, please let me know. I'm no sysadmin and I'm new to Nginx, so I've exhausted everything I can think of or have read. Thanks.
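
    A hedged set of checks that usually narrows this down (the URL is a placeholder; the commands and directives are standard nginx/Ubuntu ones, but verify them against your own setup):

        # 1. confirm the running binary was built with the gzip module
        #    (it is compiled in by default unless --without-http_gzip_module appears)
        nginx -V 2>&1 | grep -- --without-http_gzip_module || echo "gzip module is compiled in"

        # 2. request a page the way a browser would and look for Content-Encoding
        curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' http://localhost/ | grep -i 'content-encoding'

        # 3. by default nginx only compresses text/html; if the reports are about
        #    CSS/JS/JSON, add something like this to the http block, then reload:
        #      gzip_types text/css application/javascript application/json text/plain;
        sudo nginx -t && sudo service nginx reload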

    Read the article

  • How to automatically restart Apache service after HTTP 503 error?

    - by Gnanam
    Our production server is running Apache v2.2.4 on CentOS 5.2. Mono v1.2.4 is integrated with Apache. Recently, we faced a problem on our production server. From Apache's access_log, I found an HTTP 500 internal server error for one HTTP request, and all subsequent HTTP requests also failed, but with an HTTP 503 service unavailable error. From then on, none of the requests were successful. We only realized some time later that our application was not working because of this error, and we then restarted the Apache service. My questions are: in this kind of situation, how do I automatically restart the Apache service when an HTTP 503 error is encountered? Is there any Apache directive available to set? In general, what would cause an HTTP 503 error in Apache? NOTE: Mono helps in running applications developed in .NET on a Linux-based OS. EDIT: I agree on finding the root cause of this problem. In fact, we've been analyzing that too. Until we resolve it, I am trying to find out whether Apache could be restarted immediately on its own, without any downtime/service disruption for application users.
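
    As a stop-gap only (this does not address the root cause the post is still investigating), a hypothetical cron-driven watchdog along the lines asked about might look like this; the URL, script path and schedule are placeholders, and on CentOS 5 the init script is /etc/init.d/httpd:

        #!/bin/bash
        # /usr/local/sbin/httpd-watchdog.sh -- run from cron, e.g.:
        #   */2 * * * * /usr/local/sbin/httpd-watchdog.sh
        URL="http://localhost/healthcheck"                       # any lightweight app page
        CODE=$(curl -s -o /dev/null -w '%{http_code}' --max-time 20 "$URL")

        if [ "$CODE" = "503" ] || [ "$CODE" = "000" ]; then
            logger -t httpd-watchdog "Got HTTP $CODE from $URL, restarting Apache"
            /etc/init.d/httpd restart
        fi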

    Read the article

  • Multiple *NIX Accounts with Identical UID

    - by Tim
    I am curious whether there is a standard expected behavior, and whether it is considered bad practice, when creating more than one account on Linux/Unix that share the same UID. I've done some testing on RHEL5 with this and it behaved as I expected, but I don't know if I'm tempting fate using this trick. As an example, let's say I have two accounts with the same IDs:

        a1:$1$4zIl1:5000:5000::/home/a1:/bin/bash
        a2:$1$bmh92:5000:5000::/home/a2:/bin/bash

    What this means is:

    - I can log in to each account using its own password.
    - Files I create will have the same UID.
    - Tools such as "ls -l" will list the UID as the first entry in the file (a1 in this case).
    - I avoid any permissions or ownership problems between the two accounts because they are really the same user.
    - I get login auditing for each account, so I have better granularity into tracking what is happening on the system.

    So my questions are:

    - Is this ability designed, or is it just the way it happens to work?
    - Is this going to be consistent across *nix variants?
    - Is this accepted practice?
    - Are there unintended consequences to this practice?

    Note, the idea here is to use this for system accounts and not normal user accounts.
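
    A small sketch (usernames taken from the question, everything else standard shadow-utils behaviour) illustrating that this is at least explicitly supported by the tooling: useradd refuses a duplicate UID unless you pass -o/--non-unique, which suggests the ability is designed rather than accidental, even if opinions differ on whether it is good practice.

        useradd -u 5000 -m a1
        useradd -u 5000 -o -m a2        # second account with the same UID needs -o
        passwd a1
        passwd a2

        # files from either account carry the same numeric owner; ls/stat map it
        # back to the first matching passwd entry (a1 here)
        stat -c '%u %U %n' /home/a1 /home/a2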

    Read the article

  • problems installing mysql and phpmyadmin to localhost

    - by Joel
    Hi guys, I know there have been many similar questions, but as far as I can tell, most of the other people have gotten further than I have... I'm trying to get a WAMP setup happening. I've got PHP and Apache running and talking to each other. PHP is in c:\PHP, Apache is in its default Program Files folder, and MySQL is in its default install location. I have localhost set up at D:\public_html\ and I'm able to navigate to localhost and see HTML and PHP files. But I have a simple MySQL test file:

        <?php
        // hostname or ip of server (for local testing, localhost should work)
        $dbServer='localhost';
        // username and password to log onto db server
        $dbUser='root';
        $dbPass='';
        // name of database
        $dbName='test';

        $link = mysql_connect("$dbServer", "$dbUser", "$dbPass") or die("Could not connect");
        print "Connected successfully<br>";
        mysql_select_db("$dbName") or die("Could not select database");
        print "Database selected successfully<br>";
        // close connection
        mysql_close($link);
        ?>

    When I try to open this, I get "Could not connect". Now, I haven't even created a database yet, because I can't log into MySQL with phpMyAdmin - so I think I've done something wrong in my MySQL install, because they aren't talking to each other. I guess my main question is: how do I first create a database in MySQL to be sure I have even installed it correctly?
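
    A hedged first check (the paths are placeholders for a default MySQL 5.x install; not from the original post): use the bundled command-line client with the same credentials the script uses. If this login fails, the problem is the MySQL service or root password rather than PHP or phpMyAdmin; if it succeeds, the test database can be created on the spot.

        C:\> "C:\Program Files\MySQL\MySQL Server 5.0\bin\mysql.exe" -u root -p
        Enter password: ********
        mysql> CREATE DATABASE test;
        mysql> SHOW DATABASES;
        mysql> exit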

    Read the article

  • Recommended offline on-demand virus scanners

    - by ashh
    I have never run full anti-virus on my Windows XP systems. Instead I use various anti-malware tools to manually perform scans every few weeks. This approach, combined with Windows updates and general care about what web-sites I visit and what files I download has kept me 99% free of problems. The remaining 1% has occurred when I download files that I know may contain malware, but still decide the risk is worth it. When on 2 occasions in 10 years I did get caught doing this, I realised that being able to easily scan them would most likely have avoided getting infected. I don't need, or want, to run a "stay resident" anti-virus. Also, the online scanners such as Kaspersky etc limit uploads to small files, so these are not always useful. In summary I would like to simply be able to download a file and then manually initiate an on demand anti-virus scan, on the downloaded file only. I'm sure some/most Anti-Virus do both, however once again I don't really want to pay for or need the stay resident part. Any recommendations (commercial or free)? UPDATE: This is not an exact duplicate, nor a possible duplicate. I searched for and read other questions on anti-virus here at SuperUser and found none that answered my question. I am specifically asking about anti-virus scanners that run ON-DEMAND locally on the computer, not online scanners.

    Read the article

  • correct file permissions for trac and git user to access gitolite server repos

    - by klemens
    Hi, this sounds like a stupid question (to me), but I couldn't find any info. On my server I host some git repositories via gitolite, and have a Trac instance for every repository. I have a user called git to push/pull from the server (git clone git@server:repo), and Trac is an Apache vhost with mod_wsgi. This runs as the www-data user. So what riddles me (maybe because I don't have much of a clue about file permissions at all) is: what's the best permissions setup (chown, chmod) for the git repositories (/home/git/repositories/...)? www-data (or Trac) needs at least read permissions (I think), and git (or gitolite) obviously needs read/write permissions to push changesets. I tried a few things (i.e. adding www-data and/or git to the www-data/git group), but didn't get it right. At least one of the two (git or Trac) doesn't work. Any suggestions are highly appreciated. Regards, klemens
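
    One common pattern, offered as an assumption rather than something from the post: leave the repositories owned by git, expose them read-only through the git group, and make the web server user a member of that group. Gitolite's umask setting (UMASK or $REPO_UMASK in .gitolite.rc, depending on the version) keeps newly pushed objects group-readable.

        usermod -a -G git www-data                    # let Apache/Trac read as a group member
        chgrp -R git  /home/git/repositories
        chmod -R g+rX /home/git/repositories          # group read + directory traversal, no group write
        find /home/git/repositories -type d -exec chmod g+s {} \;   # new dirs keep the git group

        # set the gitolite umask to 0027 so future pushes stay group-readable,
        # then restart Apache so www-data picks up its new group membership.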

    Read the article

  • Filesystem fragmentation on the level of set of files

    - by trismarck
    A file is stored in blocks by the file system. The block is the smallest amount of data the file system can assign to store a file. The classical definition of a fragmented file is that the file is stored in blocks that are 'scattered' (that are physically non-contiguous) around the hard drive. What I want to ask about is a second type of fragmentation I've come up with. Let's suppose we install a program. This program has very many files. When the program starts, it always loads the contents of those files sequentially. Now, even if the hard disk is defragmented, there is still a possibility that the files (but not the blocks that make up each file) will be scattered around the disk, and thus the program launch time will be longer. Actually, this time could even be longer after defragmentation of the disk, as the defragmentation process not only glues fragmented files together but also moves some files around to optimize the free space chunks. The questions: Is the type of fragmentation I mentioned relevant for the file system? Is it possible to remedy this kind of fragmentation, and if yes, how would you do it? Also, I'm not sure if this question should belong on Super User or on Server Fault (as I guess this kind of fragmentation is more important in the server environment).

    Read the article

  • How can I safely close this window and forever avoid seeing similar pop-ups from Mackeeper Zeobit's malware and spyware?

    - by Michael Prescott
    The attached image shows a window that just popped up and the only button available is the OK button. I could Force quit Safari, but I've got several sites open right now and don't want to try and find my place again. Besides, I've seen similar hacks in the past and I'd like to learn how to handle them in a way better than just a brute force-quit. I've never heard of MacKeeper or Zeobit, so I opened Firefox and did a few searches while Safari is obviously still stuck, waiting for me to click the sneaky OK button in the dialog window. Anyhow, at least the first few pages of most search results contain lots of blabbering from questionable witnesses about how MacKeeper saved them from some malware or spyware. However, any company that is hacking the browser to maliciously install their product is itself the criminal and not providing a true security application. So, there are three questions here: How can I close this window? Can I do something to Safari to avoid these hacks in the future? (Just curious) Is MacKeeper or Zeobit somehow loading the search results so that no information about their application being malware or spyware is listed (I can't be the only person in the world that is offended by their tactics, even though it appears I am)?

    Read the article

  • Why the VPN Network Shake-Up?

    - by Brent Arias
    I can RDP to another machine on my home network, only if I'm not also hooked up to my employer's VPN with the Cisco VPN client. Indeed, I can't even ping the other machine by name in this mode, because ICMP suddenly thinks that ( ping myMachine ) now means ( ping myMachine.myEmployer.com ). Of course there is no machine by that latter name, and so it fails. Even weirder, once I disconnect from the VPN I can again ping myMachine successfully, but ICMP reports the machine by its MAC address instead of its IP address. I don't think I've ever seen ping identify another machine by its MAC address. So two questions: How can I access via RDP/ping the other machine BY NAME on my local network while also connected to the VPN? Why is ping identifying a MAC address for the machine on my home network, instead of an IP address? And how can I change this so that an IP address is reported instead? For question #1, I can indeed access the other machine on my home network by IP address. I suspect if I put the name-IP pair into my HOSTS file, then I would be able to access it even when connected to the VPN. But I wonder if there is another (more elegant) solution?
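
    For question 1, the workaround already hinted at in the post is a static hosts entry, so the bare name resolves locally even while the VPN's DNS suffix and resolvers are in effect; the address below is a placeholder for the machine's real LAN IP.

        # C:\Windows\System32\drivers\etc\hosts  (edit as Administrator)
        192.168.1.50    myMachine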

    Read the article

  • Password Authentication Fails - NTLMv2

    - by JMeterX
    Environment:

    - Windows 2000 SP4 EDIT: Domain Controller with no trust set up with the Win2008 Server
    - Windows XP machines
    - Windows 2008 Server
    - NetApp NAS

    Problem: We have a shared folder that resides on a NAS, using a Windows 2008 AD for the authentication, with the proper permissions set up. When the Windows 2000 machine tries to open the share residing on the Win2008 machine, it is prompted for a username and password. Upon entering the credentials it continuously re-asks for credentials.

    Important details:

    - The Windows 2000 machine can ping both the XP machines and the Windows 2008 Server
    - The Windows 2008 machine is mandated to only use NTLMv2
    - The Windows 2000 machine was originally set to NTLM but was recently switched to "NTLMv2 if negotiated" for the purpose of trying to connect to the share
    - As I am sure it will come up, we are using Windows 2000 because of contractual obligations

    Questions:

    - Why is password authentication failing in this case?
    - After setting a GPO for the Win2000 machine for it to use NTLMv2, do we need to reboot the machine for the changes to take effect? We used SECEDIT to update the GPOs without rebooting.

    UPDATE: We checked both of the 2008 Domain Controllers to find an error code. We received: Microsoft_Auth_Package_V1_0, 0xc000006a, Event ID: 4776. I know this to be an authentication error via THIS article: "The value provided as the current password is not correct". We know this password to be correct, but since these two domains (Win2000 & Win2008) do not have a trust set up, what authentication account needs to be used? One that resides on the Win2000-hosted domain?

    Read the article

  • Shared configuration for Eclipse on Debian server

    - by Joris Meys
    I've manually installed the latest Eclipse on our debian server and wanted to configure it so all users share the same configuration. It turned out less obvious than I thought: I don't seem to be able to install packages for all users. If I run it myself, all configuration data is saved under my own home directory. If I run Eclipse using sudo, everything is saved under the root directory but is not accessible for other users when they run Eclipse. I've been browsing the manual of Eclipse and some forums, but apart from a "yes, you can" I couldn't find any information on how that should be done. The biggest problem is installing plugins for all users to be found. Any help is greatly appreciated. Eclipse : 3.6.1 classic, installed using this procedure. Server uname: GNU/Linux * 2.6.26-2-amd64 Server is accessed using Putty, and Gnome desktop through realVNC. Just mentioning it if that is of any importance. Our sysadmin is on "prolonged leave" (working in Spain and never replaced), so I'm stuck without help here. EDIT : -- I asked this question also on StackOverflow as I wasn't certain this is a genuine server-related question. Please feel free to merge both questions at the appropriate place. --
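
    One way this is commonly handled (an assumption, not taken from the post): keep the Eclipse install itself root-owned and world-readable in a shared location, and put site-wide plugins into the dropins folder, which Eclipse 3.6 scans at startup for every user; per-user preferences and workspaces then land in each home directory as usual.

        sudo mv eclipse /opt/eclipse                       # placeholder location
        sudo chown -R root:root /opt/eclipse
        sudo chmod -R a+rX /opt/eclipse                    # readable/traversable by all, writable by none

        # site-wide plugins: copy plugin JARs (or an eclipse/plugins tree) into dropins once, as root
        sudo cp some-plugin.jar /opt/eclipse/dropins/      # placeholder file name

        # each user launches the shared binary with a private workspace
        /opt/eclipse/eclipse -data ~/workspace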

    Read the article

  • What other tool is using my hotkey?

    - by Sammy
    I use Greenshot for screenshots, and it's been nagging about some other software tool using the same hotkey. I started receiving this warning message about two days ago. It shows up each time I reboot and log on to Windows:

        The hotkey(s) "Ctrl + Shift + PrintScreen" could not be registered.
        This problem is probably caused by another tool claiming usage of the same hotkey(s)!
        You could either change your hotkey settings or deactivate/change the software making use of the hotkey(s).

    What's this all about? The only software I recently installed is:

    - CPU-Z
    - Core Temp
    - Speed Fan
    - HD Tune
    - Epson Print CD
    - NetStress

    What I would like to know is how to find out what other tool is causing this conflict. Do I really have to uninstall each program, one by one, until there is no conflict anymore? I see no option for customizing any hotkeys in CPU-Z, and according to the docs there are only a few keyboard shortcuts. These are F5 through F9, but they are not hotkeys. There is nothing in Core Temp, and from what I can see... nothing in Speed Fan. Are any of these programs known to use the Ctrl + Shift + PrintScreen hotkey for screenshots? I am actually suspecting the Dropbox client. I think I saw a warning recently coming from the Dropbox program, something to do with hotkeys or keyboard shortcuts. I see that it has an option for sharing screenshots under the Preferences menu, but I see no option for hotkeys. Core Temp actually also has an option for taking screenshots (F9), but it's just that - a keyboard shortcut, not a hotkey. And again, there's no option for changing this setting in the Options/Settings menu. How do you resolve this type of conflict? Are there any general methods you can use to pinpoint the second conflicting software? Like... is there some Windows registry key that holds the hotkeys? Or is it just down to mere luck and trial and error?

    Addendum: I forgot to mention, when I do use the Ctrl + Shift + PrintScreen hotkey, what happens is that the Greenshot context menu shows up, asking me where I want to save the screenshot. So it appears to be working. But I am still getting the darn warning every time I reboot and log on to Windows?! I actually tried changing the key bindings in Greenshot preferences, but after a reboot it seems to have returned to the settings I had previously.

    Update: I can't see any hotkey conflicts in the Windows Hotkey Explorer. The aforementioned hotkey is reserved by Greenshot, and I don't see any other program using the same hotkey binding. But when I went into Greenshot preferences, this is what I discovered. As you can see, it's Greenshot itself that uses the same hotkey twice! I guess that's why no other program was listed above as using this hotkey. But how can Greenshot be so stupid as to use the same hotkey more than once? I didn't do this! It's not my fault... I'm not that stupid. This is what it's set to right now:

    - Capture full screen: Ctrl + Shift + PrintScreen
    - Capture window: Alt + PrintScreen
    - Capture region: Ctrl + PrintScreen
    - Capture last region: Shift + PrintScreen
    - Capture Internet Explorer: Ctrl + Shift + PrintScreen

    And this is my preferred setting:

    - Capture full screen: PrintScreen
    - Capture window: Alt + PrintScreen
    - Capture region: Ctrl + PrintScreen
    - Capture last region: (none)
    - Capture Internet Explorer: (none)

    I don't use any hotkey for "last region" and IE. But when I set this to my liking, as listed here, Greenshot gives me the same warning message, even as I tab through the hotkey entry fields. Sometimes it even gives me the warning when I just click the Cancel button. This is really crazy!
    On a side note... You might have noticed that I have "update check" set to 0 (zero). This is because, in my experience, Greenshot changes all or some of my preferences back to default settings whenever it automatically updates to a new version. So I opted out of updates to get rid of the problem. It has done so for the past three updates or so. I hoped to receive a new update that would fix the issue, but I think it still reverts to default settings after each update to a new version, including setting the default hotkeys.

    Update 2: I'll give you just one example of how Greenshot behaves. This is the dialog I have in front of me right now. As you can see, I have removed the last two hotkeys and changed the first one to my own liking. While I was clicking in the fields and removing the two hotkeys I was getting the warning message. So let's say I click in the "capture last region" field. Then I get this: note that none of the entries include the "Ctrl + Shift + PrintScreen" that it's warning about. Now I will change all the hotkeys so I get something like this: so now I'm using QWERTY letters for the bindings, like Ctrl+Alt+Q, Ctrl+Alt+W and so on. As far as I know, no Windows program is using these. While I was clicking through the different fields it was giving me the warning. Now when I try to click OK to save the changes, it once again gives me a warning about "ctrl + shift + printscreen".

    Update 3: After setting the above key bindings (QWERTY), saving the changes, and then rebooting, the conflict seems to have been resolved. I was then able to set the following key bindings:

    - Capture full screen: PrintScreen
    - Capture window: Alt + PrintScreen
    - Capture region: Ctrl + PrintScreen

    I was not prompted with the warning message this time. Perhaps changing the key bindings required a system reboot? It sounds far-fetched, but that appears to be the case. I'm still not sure what caused this conflict, but I know for sure that it started after installing the aforementioned programs. It might just have to do with Greenshot itself, and not some other program. Like I said, I know from experience that Greenshot likes to mess with users' settings after each update. I wouldn't be surprised if it was actually silently updated, even though I have specified not to check for updates, and then it changed the key bindings back to defaults and caused a conflict with the hotkeys that were registered with the operating system previously. I rarely reboot the system, so that could have added to the conflict. Next time I see this I will run Hotkey Explorer immediately and see if there is another program causing the conflict.

    Read the article

  • IP tables blocking access to most hosts but some accesses being logged

    - by epo
    What am I getting wrong? A while back I locked down my web hosting service while hardening it, or at least trying to. Apache listens on port 80 only, and I set up iptables using the following:

        IPS="list of IPs"

        iptables --new-chain webtest

        # Accept all established connections
        iptables -A INPUT --protocol tcp --dport 80 --jump webtest
        iptables -A INPUT --match state --state ESTABLISHED,RELATED --jump ACCEPT
        iptables -A webtest --match state --state ESTABLISHED,RELATED --jump ACCEPT

        for ip in $IPS; do
            iptables -A webtest --match state --state NEW --source $ip --jump ACCEPT
        done

        iptables -A webtest --jump DROP

    However, looking at my Apache logs I notice various log entries in access_log, e.g.

        221.192.199.35 - - [16/May/2010:13:04:31 +0100] "GET http://www.wantsfly.com/prx2.php?hash=926DE27C156B40E55E4CFC8F005053E2D81E6D688AF0 HTTP/1.0" 404 206 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)"
        201.228.144.124 - - [16/May/2010:11:54:16 +0100] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 226 "-" "-"
        207.46.195.224 - - [16/May/2010:04:06:48 +0100] "GET /robots.txt HTTP/1.1" 200 311 "-" "msnbot/2.0b (+http://search.msn.com/msnbot.htm)"

    How are these slipping through? I don't mind the indexing bots (though I am a little surprised to see them get through). I suppose they must be getting through using the ESTABLISHED,RELATED rules. And no, I can't for the life of me remember why the first match state rule is there. So 2 questions: Is there a better way to set up iptables to restrict access to specified hosts? How exactly are these 3 examples slipping through?
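
    A hedged way to investigate (the commands are standard iptables usage, but the suggestion itself is an assumption, not something from the post): check what is actually loaded and which rules the packets are hitting, since an earlier ACCEPT rule or a permissive default policy on INPUT would let requests reach Apache without ever traversing the webtest chain. A stricter default-deny variant is sketched for comparison.

        # inspect live rules, their order, and per-rule packet counters
        iptables -L INPUT -n -v --line-numbers
        iptables -L webtest -n -v --line-numbers

        # stricter sketch of the same intent (on a remote box, add an ACCEPT
        # for your own SSH traffic before switching the policy to DROP)
        iptables -P INPUT DROP
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        for ip in $IPS; do
            iptables -A INPUT -p tcp --dport 80 -m state --state NEW -s "$ip" -j ACCEPT
        done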

    Read the article

  • fwbuilder/iptables manually scripted + autogenerated rules at startup?

    - by Jakobud
    Fedora 11. Our previous IT guy set up iptables rules on our firewall in a way that is confusing me, and he didn't document any of it. I was hoping someone could help me make some sense of it. The iptables service is obviously starting at startup, but the /etc/sysconfig/iptables file was untouched (default values). I found that in /etc/rc.local he was doing this:

        # We have multiple ISP connections on our network.
        # The following is about 50+ rules to route incoming and outgoing
        # information. For example, certain internal hosts are specified here
        # to use ISP A connection while everyone else on the network uses
        # ISP B connection when access the internet.
        ip rule add from 99.99.99.99 table Whatever_0
        ip rule add from 99.99.99.98 table Whatever_0
        ip rule add from 99.99.99.97 table Whatever_0
        ip rule add from 99.99.99.96 table Whatever_0
        ip rule add from 99.99.99.95 table Whatever_0
        ip rule add from 192.168.1.103 table ISB_A
        ip rule add from 192.168.1.105 table ISB_A
        ip route add 192.168.0.0/24 dev eth0 table ISB_B
        # etc...

    and then near the end of the file, AFTER all the ip rules he just declared, he has this:

        /root/fw/firewall-rules.fw

    He's executing the firewall rules file that was auto-generated by fwbuilder. Some questions: Why is he declaring all these ip rules in rc.local instead of declaring them in fwbuilder like all the other rules? Any advantage or necessity to this? Or is this just a poorly organized way to implement firewall rules? Why is he declaring ip rules BEFORE executing the fwbuilder script? I would assume that one of the first things the fwbuilder script does is get rid of any existing rules before declaring all the new ones. Am I wrong about this? If that were the case, the fwbuilder script would basically just delete all the ip rules that were defined in rc.local. Does this make any sense? Why is he executing all this stuff at startup in rc.local instead of just using iptables-save to keep the firewall settings in /etc/sysconfig/iptables, where they will get applied at boot?

    Read the article

  • Setting the default permissions for files uploaded via FTP to a directory

    - by Kerri
    Disclaimer: I'm just a web designer/coder, and server admin stuff is my weakest point of them all. So be easy on me (and very specific). I'm using a simple CMS (Unify) on a site, where part of the functionality is that the client can upload files to a specified directory (using FTP). The permissions for the upload directory are set to 755. But when files are uploaded through the interface, they are uploaded with permissions set to 640 (instead of 644), so site visitors cannot access the files. When I emailed the CMS's support about this, they told me that it was a server setting, and that I need to make sure that files uploaded through FTP are set to 644. Makes perfect sense, but I have no idea how to do this. Any help would be greatly appreciated. This site is a shared site hosted by Network Solutions (Unix), so my access options are limited. I can edit .htaccess files and php.ini, but that's about all I have access to. It appears I can't even log on via shell. ETA: 11/11/2010 Thanks all. I was able to work around this problem by setting up the CMS's settings in a different way. I'd be interested in following up on Nick O'Niel's suggestions, because I think he's on the right track, but unfortunately I can't access the necessary files on this particular server. So, anyway, I'm leaving this open, since the original question isn't exactly resolved. Unfortunately, I probably can't put a correct answer to the test, since the shared server in question has nearly all of its config files tightly locked down.
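
    For context, a hedged illustration of where that 640-versus-644 behaviour normally comes from and the two usual fixes; none of this is from the original post, and on locked-down shared hosting neither option may be available. The mode of newly uploaded files is derived from the FTP daemon's umask, so the clean fix is a server-side setting; failing that, a scheduled permissions sweep works as a blunt workaround.

        # if you control the FTP daemon, set its umask (022 -> files created as 644), e.g.
        #   vsftpd:  local_umask=022        in /etc/vsftpd.conf
        #   proftpd: Umask 022              in /etc/proftpd.conf

        # if only cron (or a CMS hook) is available, repair permissions after upload:
        find /path/to/upload/dir -type f -perm 640 -exec chmod 644 {} \;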

    Read the article
