Search Results

Search found 19815 results on 793 pages for 'out of control'.


  • Can't set screen brightness in Gentoo system

    - by Real Yang
    My system: Linux gentoo 3.10.7-gentoo-r1
    VGA compatible controller: NVIDIA Corporation GT216M [GeForce GT 240M] (rev a2)

    Output of xbacklight:
      No outputs have backlight property
    Output of xrandr:
      xrandr: Failed to get size of gamma for output default
      Screen 0: minimum 640 x 480, current 1280 x 720, maximum 1280 x 768
      default connected 1280x720+0+0 0mm x 0mm
         1280x720   0.0*
         1024x768   61.0
         800x600    61.0
         640x480    60.0
         1280x768   0.0
    Output of ls /proc/acpi:
      button/  event

    When I'm on kernel 3.8.13 I can change my brightness using xbacklight. I compiled 3.10.7-r1 using genkernel all. Before the upgrade I did get a notice from emerge about "compatibility issues for Nvidia users", but I still don't know the details. Is there any way to let me set the brightness?

    I then found the ebuild app-laptop/nvidiabl-0.81, and when I tried to emerge nvidiabl I got this message:

      Your kernel does not support FB_BACKLIGHT. To enable it you can enable any
      frame buffer with backlight control or nouveau. Note that you cannot use
      FB_NVIDIA with nvidia's proprietary driver.
      Please check to make sure these options are set correctly. Failure to do so
      may cause unexpected problems. Once you have satisfied these options, please
      try merging this package again.
      ERROR: app-laptop/nvidiabl-0.81::gentoo failed (pretend phase):
        Incorrect kernel configuration options
      Call stack:
        ebuild.sh, line 93:  Called pkg_pretend
        nvidiabl-0.81.ebuild, line 31:  Called linux-mod_pkg_setup
        linux-mod.eclass, line 559:  Called linux-info_pkg_setup
        linux-info.eclass, line 911:  Called check_extra_config
        linux-info.eclass, line 805:  Called die
      The specific snippet of code:
        die "Incorrect kernel configuration options"

    [SOLVED] I entered menuconfig again and checked Device Drivers -> Graphics support -> Support for frame buffer devices, and found this:

      <*> nVidia Framebuffer Support
      [*]   Support for backlight control (NEW)

    What can I say. Recompiling...
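
    For anyone hitting the same pretend-phase failure, the kernel options nvidiabl complains about can be checked before re-running the emerge. This is only a sketch, assuming the configured kernel sources live at /usr/src/linux:

      # Check the currently configured kernel for the options nvidiabl needs
      cd /usr/src/linux
      grep -E 'CONFIG_FB_NVIDIA|CONFIG_FB_BACKLIGHT' .config

      # If they are missing, enable them under:
      #   Device Drivers -> Graphics support -> Support for frame buffer devices
      #     <*> nVidia Framebuffer Support
      #     [*]   Support for backlight control
      make menuconfig && make && make modules_install && make install

      # Then retry the module build
      emerge -av app-laptop/nvidiabl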

  • Help me please with this error

    - by Brandon
    I set up IIS and moved my folder with all the files into the IIS directory. Now when I go to http://localhost/thefolder I get:

      An attempt was made to load a program with an incorrect format. (Exception from HRESULT: 0x8007000B)

      Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

      Exception Details: System.BadImageFormatException: An attempt was made to load a program with an incorrect format. (Exception from HRESULT: 0x8007000B)

      Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

      Stack Trace:
      [BadImageFormatException: An attempt was made to load a program with an incorrect format. (Exception from HRESULT: 0x8007000B)]
        Luxand.FSDK.ActivateLibrary(String LicenseKey) +0
        FaceRecognition._Default.Page_Load(Object sender, EventArgs e) in D:\Project Details\Layne Projects\DotNet Project\FaceRecognition\FaceRecognition\Default.aspx.cs:60
        System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) +25
        System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) +42
        System.Web.UI.Control.OnLoad(EventArgs e) +132
        System.Web.UI.Control.LoadRecursive() +66
        System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +2428
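
    A BadImageFormatException thrown from a native wrapper such as Luxand.FSDK usually points to a bitness mismatch: a 32-bit FSDK DLL loaded into a 64-bit IIS worker process (or vice versa). One hedged first thing to try, assuming IIS 7 on 64-bit Windows and the site running in DefaultAppPool (the pool name is an assumption, substitute the real one), is to let the pool run 32-bit code:

      REM Allow the application pool to run 32-bit code on 64-bit Windows
      REM ("DefaultAppPool" is a placeholder - use the pool the site actually runs in)
      %windir%\system32\inetsrv\appcmd.exe set apppool "DefaultAppPool" /enable32BitAppOnWin64:true
      iisreset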

  • How do you use a Mac without a mouse?

    - by NitroxDM
    So I was using my Mac Mini and decided to set the application button on my Logic Click mouse to open Exposé. I had this set up at one time and found it useful. So I opened the Exposé control panel and took a look at the triggers for Exposé. There is a drop-down for a secondary trigger, and in that drop-down there were a ton of mouse buttons (100+). I selected #32 just to see what button #32 was on my 5-button mouse. Turns out it's all of them! So no matter what button I press, guess what happens... I get Exposé.

    So, using just the keyboard, how do I:

    1. Change the Exposé trigger? I can use the keyboard to bring up the Exposé control panel, but I can't get anything else to come into focus.
    2. Kill Exposé?

    Right now all I can say is D'oh! Any ideas?

  • LDAP: Extend database using referral

    - by ecapstone
    My company uses an off-site LDAP server to handle authentication. I'm currently working on a local VPN for my branch that needs to use the off-site LDAP to check users' usernames and passwords, but I don't want every employee to have access to the VPN - I need to be able to control whether users can authenticate with the off-site LDAP based on whether they're allowed to use the VPN.

    My current solution involves having our own local LDAP server, which has a referral to the off-site server (I got most of my information from http://www.zytrax.com/books/ldap/ch7/referrals.html). This means that when local users try to check their credentials with the local server, it redirects them to the off-site server, which checks the credentials. This works for authentication, but not for authorization. It would be easiest to add a vpn_users group or an is_vpn_user attribute on the off-site server, but, well, that's above my pay grade.

    Is there any way I can use the local server to control whether users have access to the VPN without needing to change the off-site server? If I could somehow use it to have a local vpn_users group without the users in it having to be located on the local server, that would probably work, but I have no idea how to set that up or if LDAP even supports such a configuration. For reference, I'm using the openvpn-auth-ldap plugin (https://code.google.com/p/openvpn-auth-ldap/).
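
    One way to keep the authorization decision local is a local group (e.g. groupOfNames) whose member values are the DNs of entries that actually live off-site: the password bind still ends up at the off-site server, while the group check is answered locally. Whether openvpn-auth-ldap follows that split in your setup needs testing; the sketch below only verifies the two halves independently, and every DN, hostname and uid in it is a placeholder.

      # Sketch: does the LOCAL directory have a vpn_users group listing the user's
      # (off-site) DN as a member?  All DNs and hosts here are placeholders.
      ldapsearch -x -H ldap://ldap.local.example.com \
        -b "cn=vpn_users,ou=groups,dc=example,dc=com" \
        "(member=uid=jdoe,ou=people,dc=example,dc=com)" dn

      # Sketch: can the user still bind (authenticate) against the off-site server?
      ldapsearch -x -H ldap://ldap.offsite.example.com \
        -D "uid=jdoe,ou=people,dc=example,dc=com" -W \
        -b "uid=jdoe,ou=people,dc=example,dc=com" dn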

  • Remote search system for samba shares

    - by fostandy
    I have several shares residing on a Samba server in a small business environment that I would like to provide search facilities for. Ideally this would be something like Google Desktop with some extra features (see below), but failing that the idea is to take what I can get, or at least get an idea of what is out there.

    Using Google Desktop Search as a reference model, the principal additional requirement is that it is usable from clients over the network. Some other notes (none of these are hard requirements):

    - The content is always files, residing on a single server, accessible from Samba shares.
    - Standard MS Office document fare, plus a lot of rars and zips which it is necessary to search inside.
    - Permissions support, allowing for user-based control to reflect current permission access in the Samba shares. The userbase will remain fairly static, so manual management of users is fine.
    - The majority of users will be Windows based.

    I know there are plenty of search indexers out there: Beagle and Tracker seem to be the most popular. Most do not seem to offer access control, and web-based/remote search does not seem to be a high priority. I've also seen a recent post on the Samba mailing list asking for pretty much the exact same thing. (They mention a product called IBM OmniFind Yahoo! Edition, and while their initial reception seems positive, I am pretty skeptical. RHEL 4? Firefox 2? Updated much?)

    Edit: similar question here.

    What else is out there? Are you in a similar situation? What do you use?

  • How to sync two computers using new MobileMe calendar

    - by CesarGon
    I have been using MobileMe for over a year with success. I use it to sync my Outlook calendars between my work and home computers, using Windows 7 and Outlook 2007. The main Outlook calendar folder on my work computer is replicated to MobileMe as "Work" and synced to my home computer, and the main calendar folder on my home computer is replicated to MobileMe as "Home" and synced to my work computer. This means that I can see both the "Work" and "Home" calendars from both computers (as well as from the web interface through me.com), which is very convenient.

    Yesterday I migrated to the new MobileMe calendar, accepting the suggestion that popped up on the me.com website. After the migration, the MobileMe control panel on each of the Windows computers asked me to re-configure my calendar setup, and everything fell apart. The "Home" and "Work" calendar folders in Outlook are now ignored by MobileMe, and new ones named "Home in MobileMe" and "Work in MobileMe" have been created and placed in a separate Outlook data file rather than the default. This means that now:

    - I have four folders, two of which are not replicated to MobileMe.
    - The two folders that are not replicated reside in a separate data file, so alarms and reminders don't work; they're basically useless to me as calendar folders.
    - In addition, the button in the MobileMe Control Panel that used to let me specify which MobileMe folder should be synced against the default Outlook folder has gone. MobileMe is now too smart.

    Do you have any idea how to undo this mess and go back to a situation where I have two folders, as described in the first paragraph, which stay synced? I don't want an extra data file. Thanks.

  • Win 7 dual monitor: Don't move application windows when the second monitor is turned off

    - by codewaggle
    The title is correct - it should say "Don't", not "Doesn't". I don't want the application windows to be moved to the main monitor when I turn off the monitors.

    On my Windows XP dual-monitor system, I can turn off the monitors and when I turn them back on, the application windows are in the same locations on the same monitors as when I turned the monitors off. On the Windows 7 system, every time I turn off the monitors (or just the second monitor), all of the application windows are moved to the "main" monitor.

    After experimenting with the settings, I've found one process that enables me to turn off the monitors and still keep my application windows laid out in my chosen locations on the two monitors:

    1. Switch the display setting to a single monitor.
    2. Turn off the monitors.
    3. Turn on the monitors.
    4. Switch the display setting back to "Extend these displays".

    After step 4, the application windows that I had laid out on the second monitor are moved back to their original locations on the second monitor. Is there a Windows or nVidia setting that would leave the application windows on the second monitor so that I don't need to switch the display settings every time I turn off the monitors?

    Specs:
    - Windows 7 64-bit
    - Dual monitors (1 DP, 1 DVI)
    - Desktop (most questions seem to be about laptops)
    - nVidia Quadro 2000
    - nVidia Control Panel
    - nVidia nView Control Panel
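
    If the four-step dance has to stay, it can at least be scripted: Windows 7 ships DisplaySwitch.exe, which changes the display mode from the command line. This is only a sketch of the existing workaround, not a fix for the underlying behaviour:

      REM Step 1: collapse to a single display before turning the monitors off
      %windir%\System32\DisplaySwitch.exe /internal
      REM Steps 2-3: turn the monitors off and back on, then press a key
      pause
      REM Step 4: restore the extended desktop
      %windir%\System32\DisplaySwitch.exe /extend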

  • How can I configure apache to cache the images that it is serving? Right now it is giving headers t

    - by Tchalvak
    Serving up images that don't seem to cache. There's a LAPP stack (PostgreSQL instead of MySQL) running over on http://ninjawars.net. I just recently noticed that images don't seem to be caching with any kind of good frequency as I was reloading a page with a few images on it here: http://www.ninjawars.net/attack_player.php

    Here is an example image (they're probably all being served exactly the same): http://www.ninjawars.net/images/characters/fighter.png

    Checking the headers, it seems that the caching is set to Cache-Control: max-age=0. The full header set for this image (like all the others) is:

      Request URL:    http://www.ninjawars.net/images/characters/fighter.png
      Request Method: GET
      Status Code:    200 OK

      Request Headers
        Accept:        application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,/;q=0.5
        Cache-Control: max-age=0
        Referer:       http://www.ninjawars.net/images/characters/fighter.png
        User-Agent:    Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/533.4 (KHTML, like Gecko) Chrome/5.0.375.3 Safari/533.4

      Response Headers
        Accept-Ranges:  bytes
        Content-Length: 938
        Content-Type:   image/png
        Date:           Thu, 13 May 2010 21:24:07 GMT
        ETag:           "ffd4d-3aa-4837efc120540"
        Last-Modified:  Mon, 05 Apr 2010 15:28:45 GMT
        Server:         Apache

    So what modules, config, or htaccess settings do I change to have it cache images, e.g. for 24 hours?
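
    For reference, the Cache-Control: max-age=0 shown above is a request header sent by the browser on reload; the responses carry no Expires or Cache-Control header at all, so nothing tells the browser it may cache. A minimal mod_expires sketch follows (the module-enabling commands are Debian/Ubuntu style and the .htaccess path is a placeholder; adjust for the actual distro and docroot):

      # Sketch: enable the modules (Debian/Ubuntu style; commands vary by distro)
      a2enmod expires headers && service apache2 restart

      # Then add to the vhost or .htaccess (mod_expires syntax):
      cat >> /var/www/ninjawars/.htaccess <<'EOF'
      ExpiresActive On
      ExpiresByType image/png  "access plus 24 hours"
      ExpiresByType image/jpeg "access plus 24 hours"
      ExpiresByType image/gif  "access plus 24 hours"
      EOF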

  • Debian - Secure system from current administrator

    - by netadmin
    Hello,

    I am the network and systems administrator in an organization of just under 500 users. We have a number of Windows servers, and that is certainly my area of expertise. We also have a very small handful of Debian servers. We are about to terminate the sysadmin of these Debian systems. Short of powering down the systems, I would like to know how I can ensure that the previous admin does not have control of these systems in the future, at least until we hire a replacement Linux sysadmin.

    I have physical/virtual-console access to each of the systems, so I can reboot them into various user modes. I just don't know what to do. Please assume that I do not currently have root access to all of these systems (an oversight on my part that I now recognize). I have some experience with Linux and use it on my desktop on a daily basis, but I must admit that I am a competent user of Linux, not a systems admin. I have no fear of the command line, however.

    Is there a list of steps that one should take to "secure" a system from somebody else? Again, I assure you that this is legit: I am re-taking control of my employer's systems, at the request of my employer. I hope to not have to shut the systems down permanently and still be reasonably certain that they are secure. Thanks for your time.
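
    As a rough first pass once root access is regained (for example by booting to single-user mode from the console), something along these lines is commonly suggested; "olduser" is a placeholder for the departing admin's account, and this is a sketch rather than a complete lockdown checklist:

      # Lock the departing admin out and take the root account back
      passwd -l olduser                       # lock the password
      usermod -s /usr/sbin/nologin olduser    # remove the login shell
      passwd root                             # set a new root password

      # Look for SSH keys and sudo rights the old admin may still hold
      grep -R "" /root/.ssh/authorized_keys /home/*/.ssh/authorized_keys 2>/dev/null
      grep -R olduser /etc/sudoers /etc/sudoers.d/ 2>/dev/null

      # Review scheduled jobs and listening services
      crontab -l -u olduser
      ls /etc/cron.* /var/spool/cron/crontabs 2>/dev/null
      netstat -tlnp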

  • Using Toshiba 22EL833 as PC display through HDMI input

    - by Oleg V. Volkov
    I had another Toshiba TV - a 19SL738 - connected to this same PC and video card (GTX 8800) through a DVI-to-HDMI cable (DVI on the PC side, HDMI on the TV) before, and it was working perfectly at its native resolution of 1360x768.

    Some time ago I had to change to the 22EL833 and immediately faced a problem: the Windows 7 control panel and the NVIDIA control panel both report the native resolution of the new TV as 1080i, 1920x1280, despite the TV documentation saying that it has the same 1360x768 as the previous one. Practical tests confirmed that the true native resolution is indeed 1360x768, because plugging in through DVI-to-VGA and setting a custom resolution through the NVIDIA panel showed clear colors and a crisp image, while setting anything different with either DVI-to-VGA or DVI-to-HDMI produced horribly distorted or squished images, with almost unreadable slim lines (in letters, for example).

    Now, my problem is that there are no drivers for this TV and I'm unable to get a good image while connecting it through DVI-to-HDMI directly. The best I've achieved is editing the EDID/driver manually to persuade the system that the native resolution should be 1360x768; while the image became mostly clear, the colors turned to a strange washed-out effect, with pools of pure yellow, cyan and magenta here and there filling the place of other colors. Gradients also became noticeably striped. It looks like dithering gone bad and makes me suspect that the image is still down/upscaled several times internally somewhere along the line.

    How can I connect this TV to the DVI output of my video card to get the best possible clear image, correct colors and the correct native resolution?

  • Load balanced IIS. Should I use NLB, or linux-based reverse proxy, or something else?

    - by growse
    What would be the best approach for load-balancing at least 2-3 Windows 2008 R2 IIS webservers running a multitude of .NET applications? My choices appear to be:

    1. A hardware network load balancer, like a Cisco CSS
    2. Windows NLB
    3. Some sort of Linux-based proxy, either HAProxy or something else

    The three servers sit as VMs on a vSphere farm, so I have the ability to clone to increase the instance count in times of high load. I control the switch that the vSphere hosts are plugged into (a Cisco 3750), but I don't control the switching/routing infrastructure beyond that to the clients.

    Option 1 is too expensive, and probably overkill for my needs. I've included it in case someone figures out a cunning way to do it on my existing network kit, which I doubt. Option 2 would seem to be the obvious "built-in" choice, but involves quite a lot of fiddly messing around with network interfaces, multicast, and other things that seem needlessly complex. It's also fairly stupid, in that it can't remove hosts from the pool if they start throwing 500 errors or otherwise go wrong. Option 3 is the most interesting, as it would appear to offer the most flexibility and customizability, but without having to mess around with the network. However, while I'm familiar with the reverse-proxy capabilities of lighttpd etc., I'm not that well read on other options like HAProxy, which might be able to offer a lot more.

    Which would you go for, and is there anything I've not thought of?
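
    For a sense of what option 3 might look like, here is a minimal hedged HAProxy sketch. It addresses the point about pulling a backend that starts misbehaving, since option httpchk marks a server down when the check URL stops returning a 2xx/3xx status. The IPs, port and check path are placeholders, not the real topology:

      # Minimal HAProxy sketch; all addresses and the check URL are placeholders
      cat > /etc/haproxy/haproxy.cfg <<'EOF'
      defaults
          mode http
          timeout connect 5s
          timeout client  30s
          timeout server  30s

      frontend www
          bind *:80
          default_backend iis_farm

      backend iis_farm
          balance roundrobin
          option httpchk GET /healthcheck.aspx
          server iis1 10.0.0.11:80 check
          server iis2 10.0.0.12:80 check
          server iis3 10.0.0.13:80 check
      EOF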

  • Performance-wise htaccess

    - by purpler
    Here's my htaccess template; I wonder if anything could be added to increase website performance:

      # Defaults
      AddDefaultCharset UTF-8
      DefaultLanguage en-US
      ServerSignature Off
      FileETag None
      Header unset ETag
      Options -MultiViews
      #Options All -Indexes

      # Force the latest IE version or ChromeFrame
      <IfModule mod_setenvif.c>
        <IfModule mod_headers.c>
          BrowserMatch MSIE ie
          Header set X-UA-Compatible "IE=Edge,chrome=1" env=ie
        </IfModule>
      </IfModule>

      # Proxy X-UA Setup
      <IfModule mod_headers.c>
        Header append Vary User-Agent
      </IfModule>

      # Rewrites
      Options +FollowSymlinks
      RewriteEngine On
      RewriteBase /

      # Redirect to non-WWW
      RewriteCond %{HTTPS} !=on
      RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
      RewriteRule ^(.*)$ http://%1/$1 [R=301,L]

      # Redirect to WWW
      RewriteCond %{HTTP_HOST} ^domain.com
      RewriteRule (.*) http://www.domain.com/$1 [R=301,L]

      # Redirect index to root
      RewriteRule ^(.*)index\.(php|html)$ /$1 [R=301,L]

      # Caching
      ExpiresActive On
      ExpiresDefault A0
      Header set Cache-Control "public"

      # 1 Year Long Cache
      <FilesMatch "\.(flv|fla|ico|pdf|avi|mov|ppt|doc|mp3|wmv|wav|png|jpg|jpeg|gif|swf|js|css|ttf|eot|woff|svg|svgz)$">
        ExpiresDefault A31622400
      </FilesMatch>

      # Proxy Caching
      <FilesMatch "\.(css|js|png)$">
        ExpiresDefault A31622400
        Header set Cache-Control "private"
      </FilesMatch>

      # Protect against DOS attacks by limiting file upload size
      LimitRequestBody 10240000

      # Proper SVG serving
      AddType image/svg+xml svg svgz
      AddEncoding gzip svgz

      # GZip Compression
      <IfModule mod_deflate.c>
        <FilesMatch "\.(php|html|css|js|xml|txt|ttf|otf|eot|svg)$" >
          SetOutputFilter DEFLATE
        </FilesMatch>
      </IfModule>

      # Error page
      ErrorDocument 404 /404.html

      # Deny access to sensitive files
      <FilesMatch "\.(htaccess|ini|log|psd)$">
        Order Allow,Deny
        Deny from all
      </FilesMatch>

  • Modifier key bug with Adobe Design Standard CS4 (Mac OS 10.6.6)

    - by Andy
    When I launch the Adobe apps, they ask me if I want to delete the preferences file. I say 'no'. Then, when using the apps, none of the tools seem to work properly; e.g. in Illustrator, I can't move objects without it duplicating the object, and Photoshop constantly displays the contextual menu icon that usually only appears when you hold down the Control key. I can't do any work in them when this happens, so I am forced to log out and then back in. They work for a while and then this annoying bug happens again.

    I think it must be something to do with the modifier keys (Command, Option and Control) - Adobe's apps must think I'm permanently pressing the modifier keys down. The thing is, no other app on my machine has this problem! Does anyone else experience this? Or have a fix for it? Any suggestions much appreciated.

    Running: Mac OS X 10.6.6 on an iMac, Adobe CS4 Design Standard installed.

  • ipmi - can't ping or remotely connect

    - by Fidel
    I've tried configuring the IPMI controller to accept remote connections, but I can't even ping it. Here is its status:

      # /usr/local/bin/ipmitool lan print 2
      Set in Progress         : Set Complete
      Auth Type Support       : NONE PASSWORD
      Auth Type Enable        : Callback :
                              : User     : NONE PASSWORD
                              : Operator : PASSWORD
                              : Admin    : PASSWORD
                              : OEM      :
      IP Address Source       : Static Address
      IP Address              : 192.168.1.112
      Subnet Mask             : 255.255.255.0
      MAC Address             : 00:a0:a5:67:45:25
      IP Header               : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10
      BMC ARP Control         : ARP Responses Enabled, Gratuitous ARP Enabled
      Gratituous ARP Intrvl   : 8.0 seconds
      Default Gateway IP      : 192.168.1.1
      Default Gateway MAC     : 00:00:00:00:00:00
      802.1q VLAN ID          : Disabled
      802.1q VLAN Priority    : 0
      RMCP+ Cipher Suites     : 0,1,2,3
      Cipher Suite Priv Max   : uaaaXXXXXXXXXXX
                              : X=Cipher Suite Unused
                              : c=CALLBACK
                              : u=USER
                              : o=OPERATOR
                              : a=ADMIN
                              : O=OEM

      # /usr/local/bin/ipmitool user list 2
      ID  Name    Enabled  Callin  Link Auth  IPMI Msg  Channel Priv Limit
      1           true     false   true       true      USER
      2   admin   true     false   true       true      ADMINISTRATOR

      # /usr/local/bin/ipmitool channel getaccess 2 2
      Maximum User IDs    : 5
      Enabled User IDs    : 2
      User ID             : 2
      User Name           : admin
      Fixed Name          : No
      Access Available    : callback
      Link Authentication : enabled
      IPMI Messaging      : enabled
      Privilege Level     : ADMINISTRATOR

      # /usr/local/bin/ipmitool channel info 2
      Channel 0x2 info:
        Channel Medium Type   : 802.3 LAN
        Channel Protocol Type : IPMB-1.0
        Session Support       : multi-session
        Active Session Count  : 0
        Protocol Vendor ID    : 7154
        Volatile(active) Settings
          Alerting           : disabled
          Per-message Auth   : disabled
          User Level Auth    : disabled
          Access Mode        : always available
        Non-Volatile Settings
          Alerting           : disabled
          Per-message Auth   : disabled
          User Level Auth    : disabled
          Access Mode        : always available

      # /usr/local/bin/ipmitool chassis status
      System Power         : on
      Power Overload       : false
      Power Interlock      : inactive
      Main Power Fault     : false
      Power Control Fault  : false
      Power Restore Policy : unknown
      Last Power Event     :
      Chassis Intrusion    : inactive
      Front-Panel Lockout  : inactive
      Drive Fault          : false
      Cooling/Fan Fault    : false

      # arp
      Address        HWtype  HWaddress          Flags Mask  Iface
      192.168.1.112  ether   00:A0:A5:67:45:25  C           bond0

      # /usr/local/bin/ipmitool -I lan -H 192.168.1.112 -U admin -P admin chassis power status
      Error: Unable to establish LAN session
      Unable to get Chassis Power Status

    In summary: it exists in the ARP list, so ARPs are being broadcast, but I can't ping it and I can't connect to it. Can anyone spot any glaring mistakes in the configuration?

    Many thanks,
    Fidel
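
    A few settings are worth re-applying from the local side before testing again; note in the output above that user 2 has Callin set to false and that the remote attempt used the plain "lan" interface. The commands below are standard ipmitool lan/user/channel subcommands with the channel and user numbers taken from the output, but whether the BMC honours them depends on the firmware, so treat this as a sketch:

      # Re-apply LAN access and authentication settings on channel 2 (run locally)
      /usr/local/bin/ipmitool lan set 2 access on
      /usr/local/bin/ipmitool lan set 2 auth ADMIN MD5,PASSWORD
      /usr/local/bin/ipmitool user enable 2
      /usr/local/bin/ipmitool channel setaccess 2 2 callin=on link=on ipmi=on privilege=4

      # Then retest from the remote host
      ping 192.168.1.112
      /usr/local/bin/ipmitool -I lanplus -H 192.168.1.112 -U admin chassis power status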

  • Why does hiberfil.sys come back from the dead on Windows 7?

    - by Corey White
    I have Windows 7 running on a small (40GB) partition, with 4GB RAM. This means that the hiberfil.sys file created by Hibernate takes up a significant portion of the available disk space, and I would like to remove it.

    I am aware that I can disable Hibernate and remove hiberfil.sys by entering "powercfg -h off" in an elevated command prompt. This works - the file is immediately removed, and after doing so, the HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Power\HibernateEnabled key is (correctly) set to 0. However, the next time I reboot the PC, hiberfil.sys returns from the dead, Hibernate is re-enabled, and that registry key has returned to 1.

    I'm pretty much at my wits' end with this. Almost everything I can find online about removing the hiberfil.sys file simply suggests using powercfg to turn off hibernation, and that appears to work for just about everyone - but it just keeps coming back for me! (Like a vampire, sucking up my disk space.) I did find one other thread from someone who seems to have had the same issue, but none of the suggestions there worked for the original poster (or for me). Still, I have tried everything listed there, including:

    - Disabling hybrid sleep
    - Disabling Hibernate through the command prompt, through the Power Options GUI, and through both (in both orders)
    - Manually changing the HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Power\HibernateEnabled key
    - Pretty much everything else I can think of!

    I do want to reiterate that I have no problem removing the file - that works great. It just comes back after every reboot. I'm about ready to throw in the towel and just run a script on login to disable Hibernate each time, even though that seems like a crazily hacky "solution"... but I was hoping someone here could suggest something else first. Thanks!
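
    If it really does come down to the hacky fallback mentioned above, the re-disable can at least run unattended at every boot rather than at login. A sketch using schtasks (the task name is arbitrary, and this treats the symptom, not whatever keeps re-enabling hibernation):

      REM Fallback sketch: re-run "powercfg -h off" at every boot
      schtasks /create /tn "DisableHibernate" /tr "powercfg -h off" /sc onstart /ru SYSTEM /rl highest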

  • Weird mouse behaviour on Debian wheezy

    - by DevNoob
    When I move my mouse slowly over the desktop, the pointer often jumps a few pixels (one or two) in the opposite direction to the one I'm moving in - horrible when trying to place the cursor around semicolons in Eclipse. I guess this is the result of a wrongly set resolution for it. The mouse was set really fast initially, and even if I do "xset m 1/2 3" it is just too fast and imprecise for me.

    I already tried to configure it in xorg.conf like this:

      Section "InputDevice"
        Identifier "Configured Mouse"
        Driver     "mouse"
        Option     "Device"     "/dev/mouse"
        Option     "Protocol"   "Auto"
        Option     "Name"       "Logitech G3"
        Option     "Resolution" "2000"
      EndSection

    But with no effect - maybe because there is no /dev/mouse. This is the content of /dev; maybe you can tell me which one is the mouse:

      autofs block bsg btrfs-control bus cdrom cdrw char console core cpu cpu_dma_latency
      disk dvd dvdrw fd fd0 full fuse fw0 hidraw0 hidraw1 hpet input kmsg log loop0 loop1
      loop2 loop3 loop4 loop5 loop6 loop7 loop-control MAKEDEV mapper mcelog mem net
      network_latency network_throughput null nvidia0 nvidiactl oldmem port ppp printer
      psaux ptmx pts random rfkill root rtc rtc0 sda sda1 sda2 sda3 sda5 sda6 sda7 sda8
      sdb sdb1 sg0 sg1 sg2 shm snapshot snd sndstat sr0 stderr stdin stdout tty tty0 tty1
      tty10 tty11 tty12 tty13 tty14 tty15 tty16 tty17 tty18 tty19 tty2 tty20 tty21 tty22
      tty23 tty24 tty25 tty26 tty27 tty28 tty29 tty3 tty30 tty31 tty32 tty33 tty34 tty35
      tty36 tty37 tty38 tty39 tty4 tty40 tty41 tty42 tty43 tty44 tty45 tty46 tty47 tty48
      tty49 tty5 tty50 tty51 tty52 tty53 tty54 tty55 tty56 tty57 tty58 tty59 tty6 tty60
      tty61 tty62 tty63 tty7 tty8 tty9 ttyS0 ttyS1 ttyS2 ttyS3 uinput urandom usb vcs
      vcs1 vcs2 vcs3 vcs4 vcs5 vcs6 vcs7 vcsa vcsa1 vcsa2 vcsa3 vcsa4 vcsa5 vcsa6 vcsa7
      vga_arbiter vmci vmmon vmnet0 vmnet1 vmnet8 vsock watchdog xconsole zero

    So my question is: how do I set up my mouse correctly in Debian wheezy?
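
    On wheezy the X server normally drives the mouse through the evdev driver via udev rather than a static InputDevice section, which is probably why the xorg.conf stanza has no effect. A runtime sketch with xinput; the device name "Logitech G3" is an assumption - use whatever `xinput list` actually reports, and tune the deceleration value to taste:

      # Find the device name/id X actually uses for the mouse
      xinput list

      # Slow the pointer down (evdev): a constant deceleration of 2 roughly halves the speed
      xinput set-prop "Logitech G3" "Device Accel Constant Deceleration" 2

      # Optionally disable the acceleration profile as well
      xinput set-prop "Logitech G3" "Device Accel Profile" -1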

  • Registry remotely hacked on Win 7, need help tracking the perp

    - by user577229
    I was writing some .VBS code at the office that would allow certain file extensions to be downloaded without a warning dialog on a Win7 x32 system. The system I was writing this on is in a lab on a segmented subnet. All web access is via a proxy server. The only means of accessing my machine is via the internet or from within the lab's MSFT AD domain.

    While writing and testing my code I found a message of sorts. Upon refreshing the registry to verify my code had changed a DWORD, instead the message "HELLO" was written and visible in regedit where the DWORD value was called for. I took a screenshot and proceeded to edit my code. This same weird behavior occurred the last time I was writing registry code, except on another internal server.

    I understand that remote registry access exists for Windows systems, and I will block this immediately once I return to the office. What I want to know is: can I trace who made this connection? How would I do this?

    I suspect the cause of this is the cause of other "odd" behaviors I'm experiencing at work, such as losing control of my Input Director master control for over an hour, and unchanged code that all of a sudden fails for no logical reason. These failures occur at funny times, whenever I'm about to give a demonstration of my test code. I know this sounds crazy, however knowledge of the registry component makes this believable. Once the registry can be accessed, the entire system is compromised. Any help or sanity checking is appreciated.
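
    For the two concrete asks (block it, and see who connected), a hedged sketch from an elevated prompt. The event query only lists network logons recorded in the Security log, so it depends on logon auditing having been enabled on that machine:

      REM Stop and disable the Remote Registry service
      sc stop RemoteRegistry
      sc config RemoteRegistry start= disabled

      REM Look for recent network logons (event ID 4624, logon type 3) in the Security log
      wevtutil qe Security /q:"*[System[(EventID=4624)]]" /c:20 /rd:true /f:text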

  • Ubuntu Pound reverse proxy load balancing based on active server load?

    - by Andrew
    I have Pound installed on a load balancer. It seems to work okay, except that it randomly assigns the backend server to forward each request to. I've put one backend machine under so much load that it went into swap, and I can't even ssh into it to test this scenario. I would like the load balancer to realize that the machine is overloaded and send requests to a different backend machine - however it doesn't. I've read the man page, and it seems like the directive "DynScale 1" is what should monitor this, but it still redirects to the overloaded server. I've also put "HAport 22" in the backend, figuring that since I can't ssh in, neither could the load balancer, and it would consider the backend server dead until it sheds the load and responds again - but that didn't help either. If anyone could help with this, I'd appreciate it. My current config is below.

      ######################################################################
      ## global options:
      User "www-data"
      Group "www-data"
      #RootJail "/chroot/pound"

      ## Logging: (goes to syslog by default)
      ##  0 no logging
      ##  1 normal
      ##  2 extended
      ##  3 Apache-style (common log format)
      LogLevel 3

      ## check backend every X secs:
      Alive 5
      DynScale 1
      Client 1200
      TimeOut 1500

      # poundctl control socket
      Control "/var/run/pound/poundctl.socket"

      ######################################################################
      ## listen, redirect and ... to:

      ## redirect all requests on port 80 to SSL
      ListenHTTP
          Address 192.168.1.XX
          Port 80
          Service
              Redirect "https://xxx.com/"
          End
      End

      ListenHTTPS
          Address 192.168.1.XX
          Port 443
          Cert "/files/www.xxx.com.pem"
          Service
              BackEnd
                  Address 192.168.1.1
                  Port 80
                  HAport 22
              End
              BackEnd
                  Address 192.168.1.2
                  Port 80
                  HAport 22
              End
          End
      End

  • How should I set up protection for the database against SQL injection when all the PHP scripts are flawed?

    - by Tchalvak
    I've inherited a PHP web app that is very insecure, with a history of SQL injection. I can't fix the scripts immediately - I need them running to keep the website running, and there are too many PHP scripts to deal with from the PHP end first. I do, however, have full control over the server and the software on it, including full control over the MySQL database and its users.

    Let's estimate it at something like 300 scripts overall, 40 semi-private scripts, and 20 private/secure scripts. So my question is how best to go about securing the data, with the implicit assumption that SQL injection from the PHP side (e.g. somewhere in that list of 300 scripts) is inevitable?

    My first-draft plan is to create multiple tiers of differently permissioned users in the MySQL database. In this way I can secure the data and scripts most in need of securing first (the "private/secure" category), then the second tier of database tables and scripts ("semi-private"), and finally deal with the security of the rest of the PHP app overall (with the result of finally securing the database tables that essentially deal with "public" information, e.g. stuff that even just viewing the homepage requires).

    So: three database users (public, semi-private, and secure), with a different user connecting for each of the three groups of scripts (the secure scripts, the semi-private scripts, and the public scripts). In this way, I can prevent all access to "secure" from "public" or from "semi-private", and to "semi-private" from "public".

    Are there other alternatives that I should look into? If a tiered access system is the way to go, what approaches are best?
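
    The tiered plan translates fairly directly into MySQL grants. A sketch of what that could look like - all database, table, user names and passwords here are placeholders, not the app's real schema:

      # Sketch of the tiered-user idea as MySQL grants (names are placeholders)
      mysql -u root -p <<'SQL'
      CREATE USER 'app_public'@'localhost' IDENTIFIED BY 'changeme1';
      CREATE USER 'app_semi'@'localhost'   IDENTIFIED BY 'changeme2';
      CREATE USER 'app_secure'@'localhost' IDENTIFIED BY 'changeme3';

      -- public scripts: read-only on public tables only
      GRANT SELECT ON webapp.news TO 'app_public'@'localhost';

      -- semi-private scripts: read/write on their own tables, nothing sensitive
      GRANT SELECT, INSERT, UPDATE ON webapp.comments TO 'app_semi'@'localhost';

      -- secure scripts: full access to the sensitive tables
      GRANT SELECT, INSERT, UPDATE, DELETE ON webapp.accounts TO 'app_secure'@'localhost';
      FLUSH PRIVILEGES;
      SQL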

  • Sending email with exim and external sender address

    - by Tronic
    I have the following problem: I want to send emails from a Rails webapp. I set up an Exim server, and looking at the logs the sending appears to work, but the emails aren't really sent. I had the same problem with another ISP. The sender address is hosted on another mail server, at another ISP. I think the problem is that sending doesn't work because the sender address isn't hosted on the same server. Do you have any advice on this?

    The Exim logs tell me the following:

      2011-01-01 14:38:06 1PZ1eo-0000Ga-38 <= <> R=1PZ1eo-0000GY-1p U=Debian-exim P=local S=1778
      2011-01-01 14:38:08 1PZ1eo-0000Ga-38 => [email protected] R=dnslookup T=remote_smtp H=mx1.emailsrvr.com [98.129.184.131] X=TLS1.0:RSA_AES_256_CBC_SHA1:32 DN="C=US,O=mx1.emailsrvr.com,OU=GT21850092,OU=See www.geotrust.com/resources/cps (c)08,OU=Domain Control Validated - QuickSSL(R),CN=mx1.emailsrvr.com"
      2011-01-01 14:38:08 1PZ1eo-0000Ga-38 Completed

    ([email protected] is the external sender address!)

    Edit with more details: when sending a mail from the command line with

      echo "Test" | mail -s Testmail [email protected]

    the log says

      2011-01-01 20:45:24 1PZ7OG-0001Vp-Rx <= root@gustav U=root P=local S=360
      2011-01-01 20:45:26 1PZ7OG-0001Vp-Rx => [email protected] R=dnslookup T=remote_smtp H=gmail-smtp-in.l.google.com [209.85.229.27] X=TLS1.0:RSA_ARCFOUR_MD5:16 DN="C=US,ST=California,L=Mountain View,O=Google Inc,CN=mx.google.com"
      2011-01-01 20:45:26 1PZ7OG-0001Vp-Rx Completed

    and I get the mail in my Gmail account. But when sending via the webapp (when testing locally with sendmail it works fine) I only get this log output:

      2011-01-01 20:50:08 1PZ7Sq-0001X9-L4 <= <> R=1PZ7Sq-0001X7-Jo U=Debian-exim P=local S=1780
      2011-01-01 20:50:11 1PZ7Sq-0001X9-L4 => [email protected] R=dnslookup T=remote_smtp H=mx1.emailsrvr.com [98.129.184.3] X=TLS1.0:RSA_AES_256_CBC_SHA1:32 DN="C=US,O=mx1.emailsrvr.com,OU=GT21850092,OU=See www.geotrust.com/resources/cps (c)08,OU=Domain Control Validated - QuickSSL(R),CN=mx1.emailsrvr.com"
      2011-01-01 20:50:11 1PZ7Sq-0001X9-L4 Completed
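
    One reading of those logs: the "<= <>" at the start of the webapp-related lines means Exim accepted a message with an empty envelope sender, which usually indicates a bounce generated in response to an earlier failed delivery rather than the Rails mail itself. So the original message is probably failing before or during submission, and the bounce is what gets delivered to the external address. A few standard Exim commands usually narrow this down (the recipient address below is a placeholder):

      # What is sitting in the queue, and why?
      exim -bp
      mailq

      # How would Exim route the recipient address?
      exim -bt user@example.com

      # Watch the main log (Debian path) while the webapp sends
      tail -f /var/log/exim4/mainlog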

  • DNS: how to get local server to superimpose results over authoritative server?

    - by growse
    I've got a domain whose DNS I control, hosted on the internet. I also have a NAT'd internal network (192.168.0.0/24) which has internet access, and which I also control. On this internal network I also have a DNS resolver. The DNS software on both is PowerDNS.

    What I want is for the DNS resolver on the internal network to be able to add to or change records in the queries and results that come down from the authoritative server. For example, the authoritative server might have a single record for animal.example.com:

      animal.example.com. IN AAAA 2001:140:283::1

    However, I'd like it so that when internal clients do a DNS lookup for animal.example.com, they might get back the following:

      animal.example.com. IN AAAA 2001:140:283::1
      animal.example.com. IN A    192.168.0.2

    Obviously, I could set up the internal DNS server to pretend to be authoritative for example.com, but that would require a fair bit of effort to keep the main DNS server and the internal DNS server in sync for the records which are the same on both. If the internal DNS server could somehow be made a slave of the main DNS server, but also have the provision to add its own results, that would be ideal. Is this possible?

  • Colorizing your terminal and shell environment?

    - by Stefan Lasiewski
    I spend most of my time working in Unix environments and using terminal emulators. I try to use color on the command line, because color makes the output more useful and intuitive.

    What are some good ways to add color to my terminal environment? What tricks do you do? What pitfalls have you encountered?

    Unfortunately, support for color is wildly variable depending on terminal type, OS, TERM setting, utility, buggy implementations, etc. Here's what I do currently, after a lot of experimentation:

    - I tend to set TERM=xterm-color, which is supported on most hosts (but not all).
    - I work on a number of different hosts and different OS versions, so I'm trying to keep things simple and generic, if possible.
    - Many OSs set things like 'dircolors' by default, and I don't want to modify this everywhere, so I try to stick with the defaults and instead tweak my terminal's color configuration.
    - I use color for some Unix commands (ls, grep, less, vim) and the Bash prompt. These commands seem to use the standard "ANSI escape sequences". I've managed to find some settings which are widely supported and which don't print gobbledygook characters in older environments (even FreeBSD 4!), for the most part.

    From my .bash_profile:

      ### Color support
      # The Terminal application typically does 'export TERM=xterm-color'
      # Some terminal types will print black, white & underlined with these settings.
      OS=`uname -s`
      case "$OS" in
        "SunOS" )
          # Solaris 9 ls doesn't allow color, so use special characters instead.
          LS_OPTS='-F'
          ;;
        "Linux" )
          # GNU tools support colors! See dircolors to customize colors
          export LS_OPTS='--color=auto'
          # Color support using 'less -R'
          alias less='less --RAW-CONTROL-CHARS'
          alias ls='ls ${LS_OPTS}'
          export GREP_OPTIONS="--color=auto"
          ;;
        "Darwin"|"FreeBSD")
          # Most FreeBSD & Apple Darwin supports colors
          # LS_OPTS="-G"
          export CLICOLOR=true
          alias less='less --RAW-CONTROL-CHARS'
          export GREP_OPTIONS="--color=auto"
          ;;
      esac
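
    Since the snippet above stops short of the Bash prompt it mentions, here is a small hedged example of a colored PS1 that could sit in the same .bash_profile; the colors are just one choice, and the \[ \] brackets keep Bash's line editing from miscounting the prompt width:

      # A sketch of a colored Bash prompt (not from the original snippet):
      # green user@host, blue working directory, reset afterwards.
      PS1='\[\e[0;32m\]\u@\h\[\e[0m\]:\[\e[0;34m\]\w\[\e[0m\]\$ '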

  • Moving my OpenID from Livejournal to... something else.

    - by T-Boy
    I've actually been an early user of OpenID, although there are still some questions about OpenID that I've never really had satisfactorily answered.

    Now, I understand that if I have full control over my domain, I can set it up so that I delegate the task of authenticating to another OpenID service provider. The problem is, what I'd like to do is get the Livejournal server to pass the authentication to someone else, instead of having LJ do it. Preferably, I'd like Livejournal, when asked by an authenticating party, to say, "No, I don't do it anymore - go to this address". The plan is that this address would be in a domain I fully control, which would then pass it on to whichever service provider I choose.

    I don't even know if I've got my understanding of OpenID right, if all these shenanigans are necessary, if my question makes sense, or if it's even possible with a service provider like Livejournal.

    (I tried tagging this with livejournal, and it told me I couldn't, because I don't have enough reputation. Oh well; one must start somewhere. Sorry for the inconvenience!)

  • Requesting better explanation for expires headers

    - by syn4k
    I have successfully implemented expires headers; however, for several days I have been stumped by one thing. This article, http://www.tipsandtricks-hq.com/how-to-add-far-future-expires-headers-to-your-wordpress-site-1533, states:

      Keep in mind that when you use expires header the files are cached in the browser until it expires so do not use this on files that changes frequently.

    Other sites I've read say the same. But this doesn't seem to be true. I have updated an image, using the same name, several times. Each time I update and refresh my browser, the new image (with the same name) displays. I understand from this article that the old image should display unless I use a new name. Do you happen to know where the misunderstanding is?

    I have verified that the image in question has expires headers set on it:

      Request Headers:
        Host               domain.com
        User-Agent         Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.28) Gecko/20120306 Firefox/3.6.28 FirePHP/0.5
        Accept             image/png,image/*;q=0.8,*/*;q=0.5
        Accept-Language    en-us,en;q=0.5
        Accept-Encoding    gzip,deflate
        Accept-Charset     ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive         115
        Connection         keep-alive
        Referer            http://domain.com/index.php
        Cookie             __utma=1.61479883.1332439113.1332783348.1332796726.4; __utmz=1.1332439113.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); PHPSESSID=lv2hun9klt2nhrdkdbqt8abug7; __utmb=1.33.10.1332796726; __utmc=1; ck_authorized=true
        x-insight          activate
        If-Modified-Since  Mon, 26 Mar 2012 21:55:33 GMT
        Cache-Control      max-age=0

      Response Headers:
        Date               Mon, 26 Mar 2012 22:06:50 GMT
        Server             Apache/2.2.3 (CentOS)
        Connection         close
        Expires            Wed, 25 Apr 2012 22:06:50 GMT
        Cache-Control      max-age=2592000
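
    The observed behaviour is actually consistent with the quoted advice: pressing Refresh makes the browser revalidate the resource - which is exactly what the If-Modified-Since and Cache-Control: max-age=0 request headers above show - so the server gets a chance to return the updated image. The far-future Expires header only suppresses requests during normal navigation and embedded page loads. A quick way to see both cases with curl (the URL here is a placeholder):

      # Unconditional request - expect 200 plus the Expires / Cache-Control headers
      curl -sI http://domain.com/images/example.png | grep -iE '^(HTTP|Expires|Cache-Control|Last-Modified)'

      # The conditional request a refresh would send - an unchanged file returns 304 Not Modified
      curl -sI -H 'If-Modified-Since: Mon, 26 Mar 2012 21:55:33 GMT' \
           http://domain.com/images/example.png | head -n 1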
