Search Results

Search found 13895 results on 556 pages for 'options'.


  • Securing phpmyadmin: non-standard port + https

    - by elect
    Trying to secure phpMyAdmin, we have already done the following:
    - Cookie Auth login
    - firewalled off TCP port 3306
    - running on a non-standard port
    Now we would like to implement HTTPS... but how would that work with phpMyAdmin already running on a non-standard port? This is the Apache config:
      # PHP MY ADMIN
      <VirtualHost *:$CUSTOMPORT>
        Alias /phpmyadmin /usr/share/phpmyadmin
        <Directory /usr/share/phpmyadmin>
          Options FollowSymLinks
          DirectoryIndex index.php
          <IfModule mod_php5.c>
            AddType application/x-httpd-php .php
            php_flag magic_quotes_gpc Off
            php_flag track_vars On
            php_flag register_globals Off
            php_value include_path .
          </IfModule>
        </Directory>
        # Disallow web access to directories that don't need it
        <Directory /usr/share/phpmyadmin/libraries>
          Order Deny,Allow
          Deny from All
        </Directory>
        <Directory /usr/share/phpmyadmin/setup/lib>
          Order Deny,Allow
          Deny from All
        </Directory>
        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn
        CustomLog ${APACHE_LOG_DIR}/phpmyadmin.log combined
      </VirtualHost>
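
    For what it's worth, HTTPS is independent of the port number: the same non-standard-port vhost can carry SSL, and clients then browse to https://host:$CUSTOMPORT. A minimal sketch, assuming mod_ssl is installed and enabled, with placeholder certificate paths (not a tested config):
      <VirtualHost *:$CUSTOMPORT>
        SSLEngine on
        # placeholders -- point these at the real certificate and key
        SSLCertificateFile    /etc/ssl/certs/phpmyadmin.example.crt
        SSLCertificateKeyFile /etc/ssl/private/phpmyadmin.example.key
        Alias /phpmyadmin /usr/share/phpmyadmin
        # ... rest of the existing phpMyAdmin configuration unchanged ...
      </VirtualHost>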

    Read the article

  • Windows 7 mouse multi selection on windows explorer with CTRL and point to select / single click to open

    - by bortao
    This is something that has annoyed me since I switched to Windows 7. I haven't double-clicked in a long time, since Windows introduced single-click selection (Folder Options > Click items as follows > Single-click to open an item). The problem is that when I hold Ctrl and hover over the items, they select/deselect automatically with every small mouse movement I make. What I want is for items to select/deselect only when I enter/leave them. So to deselect an item I'm hovering over, I have to move the mouse out of it and then back in. I'm using the official IntelliMouse Explorer drivers. (edit) Here's another related annoyance: when you are hovering over something and move the mouse into another item (holding Ctrl), the new item may or may not be selected, and if you continue moving the mouse it gets selected/deselected as you move. (more edit) I have found that the registry values HKCU\Control Panel\Mouse\MouseHoverHeight / MouseHoverWidth have some influence here. If set to 2, items select/deselect really quickly as you move the mouse; with larger numbers it is slower, but setting them to 20 or 200 doesn't seem to make much difference.
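
    For reference, those hover thresholds can also be changed from the command line; this is only a sketch of the registry tweak already described above (the value 200 is an arbitrary example, and a log-off/log-on is usually needed before it takes effect):
      reg add "HKCU\Control Panel\Mouse" /v MouseHoverWidth  /t REG_SZ /d 200 /f
      reg add "HKCU\Control Panel\Mouse" /v MouseHoverHeight /t REG_SZ /d 200 /f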

    Read the article

  • How to redirect or rewrite IIS site with port in URL to URL without port?

    - by user2573690
    I'm not 100% sure if this is the right part of StackOverflow to post this, but to me it made the most sense. Sorry if it's not! Currently I have a site in IIS configured on HTTPS with port 7500. I can access this site by using the URL https://portal.company.com:7500. What I would like to do is remove the port number at the end of the URL so users can access this site using https://portal.company.com... I am a complete beginner with IIS, but what I have tried is the HTTP Redirect feature, which, if I used it on this IIS site, would redirect a user that hits portal.company.com:7500 to some other site, which is not what I need. Another thing I have thought about is creating another IIS site which serves the purpose of being at the URL portal.company.com and, when it's hit, redirects to my portal.company.com:7500, but I don't know if this is the best approach. So my question is: what are my options for achieving the behavior mentioned above, and what is the best/recommended approach? I haven't played with URL Rewriting before but I will look into that now while I wait for a reply. Thanks!! Using IIS Manager on a Windows Server 2008 machine.
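
    One common approach (a sketch only - the site name "Portal" is a placeholder) is to add a second HTTPS binding on the default port 443 to the same site, so both URLs reach it; the existing certificate still has to be associated with the new binding, e.g. in IIS Manager:
      %windir%\system32\inetsrv\appcmd.exe set site /site.name:"Portal" /+bindings.[protocol='https',bindingInformation='*:443:']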

    Read the article

  • Stop Windows 7 from accessing or writing to hard drive unless "told" to by me? (More info inside...)

    - by Jeff
    A confusing question, perhaps, but bear with me. I have two internal HDDs set up in a RAID0 array which I use as mass storage. I access the drive very infrequently (once a day at most), so I have set Windows 7's power options to turn off idle disks after only 1 minute. This is fine, and the disks are off most of the time. However, I notice that Windows sometimes spins up the drives when I really, really don't want or need it to, which causes a 30-second delay as both drives spin up and locks up my system. Some examples of when this happens:
    1) When I'm installing something using Windows Installer or InstallShield; it seems as if they use the drive with the most free space as the installer cache location... so my big RAID drive has to spin up! Most annoying.
    2) Apparently, when I open a Java-based program which resides on my system drive and has nothing to do with my RAID drive!
    3) At boot-up and shut-down time. At shutdown the drives spin up only for the computer to immediately shut down! Incredibly frustrating!
    I've already tried changing the letter of the drive, and at some points removed the drive letter entirely, which solves the first two issues above. So my question (FINALLY!) is this: is there any way I can mark this drive as being for "storage only", so Windows basically does not see it at all until I actually invoke it somehow? Or is there any way I could set it up so that only specific programs have write access to it? For example, download managers, TeraCopy, etc.? Basically I want it to be a "ghost drive" until I'm ready to use it, and to stop Windows from spinning it up all the damn time! Thank you. :)

    Read the article

  • PCI configuration method error (Linux Kernel)

    - by user326580
    (I'm not sure if this is the best place for this question, so I'd be glad if anyone can suggest a more appropriate forum.) I'm trying to install Ubuntu 12.04.4 on a netbook (from a USB stick), but the kernel stops very early in the initialization process. After two days of research, I've found that it boots with the parameter pci=conf2 but not with the default conf1 method. Nevertheless, after the kernel boots, Ubuntu can't seem to find any USB devices and I'm not able to install the system. Trying Debian, which has a graphical installer, I found that the mouse isn't working either, so I think PCI devices are not working at all. I tried about 50% of the kernel PCI boot options listed in the kernel-parameters file (in conjunction with the implicit default conf1) without luck. Any suggestions? PS: The problem is the same with kernel 2.6 or 3.
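
    For anyone in a similar spot: in the live installer a parameter such as pci=conf2 is simply appended to the kernel line at the boot menu (F6 / "e" in most Ubuntu images), and once a system is actually installed it can be made permanent through GRUB. A sketch of the post-install step, assuming a standard /etc/default/grub:
      sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="/&pci=conf2 /' /etc/default/grub
      sudo update-grub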

    Read the article

  • Find which files an apache process is writing to?

    - by Haluk
    We have an Apache process which becomes I/O-bound from time to time. Using atop, we can see it is a write operation. Using lsof -p <PID> we can see the list of files open by the httpd process. First we thought the log files must be the problem, so we turned them off just to test; however, the write operations still continue. We will continue testing a few other things. For instance, we use PHP session variables a lot, so maybe the PHP session files are getting all the writes. But is there a way to quickly identify the files which get written to by the httpd process? That way we can focus our efforts on those files. UPDATE: We used the strace command as suggested. Here are two lines from the output:
      write(23, "\27\0\0\0\3SET CHARACTER SET utf8", 27) = 27
      write(23, "\17\0\0\0\3SET NAMES utf8", 19) = 19
    We do not have a MySQL process on this server, so is strace also showing what is being written to a network socket? UPDATE2: During high I/O load, the process which consumes most of the write resources gives the following output to strace -e trace=write -p <PID>:
      --- SIGCHLD (Child exited) @ 0 (0) ---
      write(9, "!", 1) = 1
      write(19, "OPTIONS * HTTP/1.0\r\nUser-Agent: Apache (internal dummy connection)\r\n\r\n", 70) = 70
    However, I cannot figure out where these are being written to.
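
    The descriptor numbers in that strace output (the 23, 19 and 9 above) can be resolved through /proc, which also answers whether they are files or sockets. A sketch, with <PID> standing in for the httpd worker's process id:
      # trace only write-type syscalls of one worker
      strace -f -e trace=write,writev -p <PID>
      # resolve a descriptor seen in the trace (e.g. fd 23) to a file or socket
      ls -l /proc/<PID>/fd/23
      # or list every file the process currently has open for writing
      lsof -p <PID> | awk '$4 ~ /[0-9]+[wu]/'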

    Read the article

  • Remote Desktop leaves host unresponsive

    - by Jeff Dalley
    I have my desktop PC at home set up to accept remote connections, and I often connect to it from work on my laptop via mstsc.exe. However, every time I remote into it, I find when I go home that despite the monitor being on, it's not receiving an image and it looks as though the computer is hibernating/asleep. I basically have to restart it whenever I get home, and I know there must be an answer for why it's doing this. More details: When exiting the remote session, I have tried both logging off the account and closing the RDP window without logging off; both give the same result. When I get home to the desktop I of course try moving the mouse, pressing Ctrl+Alt+Del to see if it's responsive enough to restart, and pressing multiple keys to see if I can get any audio out of it. It seems pretty obvious it's sleeping/hibernating in some way: nothing happens in any of these cases and a physical restart is necessary. Both desktop and laptop are running Windows 7 Ultimate. I'm thinking it really is sleeping/hibernating, and I'm not sure why, because left alone my desktop's power options are set to never turn off the HDD or change its state - I leave it on 24/7. This could be a stupid error on my part but I just can't see it! Thanks.
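
    If it really is dropping into sleep or hibernation, the active plan's timeouts can be checked and zeroed from an elevated prompt on the desktop; a sketch only (Windows 7 powercfg syntax, 0 = never):
      powercfg -query
      powercfg -change -standby-timeout-ac 0
      powercfg -change -hibernate-timeout-ac 0
      powercfg -h off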

    Read the article

  • Setting up Virtual Host in Fedora Core 15 using apache

    - by Roland
    I'm trying to set up a couple of virtual host files on my localhost PC running Fedora Core 15. I got this working, but now only one virtual host site works, and if I type in 127.0.0.1/test/testApp.php, which is not related to the virtual host site, I get redirected to the virtual host site. Here's what I did. I created a new folder called virtualhosts in /etc/httpd/ where all my host files are stored in the format site.conf. In /etc/conf/httpd.conf I enabled NameVirtualHost *:80 and included the host files at the bottom of the config like this: Include virtualhosts/*.conf. In /etc/hosts I added the line 127.0.0.1 website. Now when I run sudo httpd -t I get Syntax OK. I restart Apache and then the virtual host works, but as soon as I add other hosts and only use 127.0.0.1 as above, it still links to the original host. Am I doing anything wrong here, or did I leave something out? An example of my virtual host file looks like this:
      <VirtualHost *:80>
        ServerAdmin [email protected]
        DocumentRoot /var/www/html/website/
        ServerName website
        ServerAlias website
        ErrorLog logs/dev-error_log
        CustomLog logs/dev-access_log common
        Alias /blog /var/www/html/blog/
        <Directory /var/www/html/website/>
          Options FollowSymLinks
          AllowOverride All
          Order allow,deny
          allow from all
        </Directory>
        #php_value error_reporting E_ALL & ~E_NOTICE & ~E_DEPRECATED
        php_flag display_errors On
        php_value date.timezone Europe/London
      </VirtualHost>
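
    A note on the behaviour itself: with NameVirtualHost *:80, any request whose Host header matches no ServerName/ServerAlias is handed to the first vhost Apache loaded, which is why plain 127.0.0.1 URLs land on the named site. One sketch of a fix (the file name is arbitrary, chosen so the wildcard Include loads it first) is a catch-all default vhost that simply serves the main document root:
      # /etc/httpd/virtualhosts/000-default.conf
      <VirtualHost *:80>
        ServerName localhost
        DocumentRoot /var/www/html
      </VirtualHost>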

    Read the article

  • Upgrade or replace?

    - by Felix
    My current PC is about four years old, although I have made upgrades to it throughout its existence. The current specs are:
    - (old) Intel Pentium D 2.80GHz (32K L1 / 2M L2), Gigabyte 945GCMX-S2 motherboard
    - (old) 2.5GB DDR2 (slot0: 512MB @ 533MHz; slot1: 2GB @ 667MHz)
    - (new) HIS Radeon HD 4670 - I think this is limited by the motherboard not supporting PCIe 2.0 (?)
    - (old) WD Caviar 160GB - pretty slow
    - (new) WD Caviar Black 640GB
    (If any more specs are relevant, let me know and I'll add them.) Now, on to my question. I've been having performance issues lately, both in video games and in intensive applications. A couple of examples: Android application development (running Eclipse and the Android emulator) is painfully slow (on Linux); I only realized this when I saw that both tools are MUCH quicker at my new job as an Android dev (I'm not sure what CPU I have there). The guys at my new job got me NFS Hot Pursuit, in which I barely get 5-10 FPS, even with graphics options turned all the way down. My guess is that the bottleneck in my system is my CPU, so I'm thinking of upgrading to a quad-core i5 + new motherboard + 4GB DDR3 (or more, because I know you'll all jump in and say 8GB minimum). Now:
    - Is that a good idea? Is my CPU really a bottleneck, or is the whole system too old and I should replace it?
    - I run Windows 7 on the old 160GB HDD (which is on IDE, by the way). Could this slow down games as well? Should I get a new drive for Windows if I want to play new games?
    - I know nothing about power supplies. Could that be a problem / will it be a problem if I upgrade to an i5?
    - How come DiRT 2 works on full graphics settings (pretty amazing graphics, by the way) while NFS Hot Pursuit pulls only 5-10 FPS?

    Read the article

  • Slower/cached Linux file system required

    - by Chopper3
    I know it sounds odd, but I need a slower or cached filesystem. I have a lot of firewalls that are syslog'ing their data to a pair of Linux VMs which write these files to their 'local' (actually FC SAN attached) ext3-formatted disks and also forward the messages to our Splunk servers. The problem is that the syslog server is writing these syslog messages as hundreds, sometimes thousands, of tiny ~4k writes per second back to our FC SAN - which can handle this workload right now, but our FW traffic is going to grow by at least a factor of 5000% (really) in the coming months and that'll be a pain for the SAN, so I want to fix the root cause before it's a problem. So I need some help figuring out a way of getting these writes cached or held off in some way from the 'physical' disks, so that the VMs fire off larger but less frequent writes - there's no way of avoiding these writes, but there's no need to do so many tiny ones. I've looked at the various ext3 options and set noatime and nodiratime, but that hasn't made much of a dent in the problem. Obviously I'm investigating other filesystems, but thought I'd throw this out in case others have the same problem in the future. Oh, and I can't just forward these messages to Splunk; our firewall team insist they stay in their original format for diagnostic purposes.
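
    Two knobs that batch small writes without touching the SAN layer are the ext3 journal commit interval and the kernel's dirty-page flushing thresholds; both trade crash-safety (up to that many seconds of lost messages) for fewer, larger writes, so treat this strictly as a sketch to experiment with - the mount point and values are arbitrary examples:
      # remount with a 60-second journal commit interval instead of the 5-second default
      mount -o remount,noatime,nodiratime,commit=60 /var/log/remote
      # let dirty pages sit in the page cache longer before being flushed
      sysctl -w vm.dirty_writeback_centisecs=6000
      sysctl -w vm.dirty_expire_centisecs=6000
      sysctl -w vm.dirty_background_ratio=20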

    Read the article

  • How to install (old) packages for Ubuntu 9.04?

    - by wchrisjohnson
    Based on some excellent feedback by Mark here (http://serverfault.com/questions/285598/should-i-clone-a-physical-server-to-create-a-vm-for-a-staging-server), today I was able to use VMware Converter to clone my production server for a staging server. However, the NIC won't come up no matter what I do. I attempted to install VMware Tools, as I suspect that its absence might be what is keeping the NIC from working (I have the NIC set as a vmxnet3 card in the VM settings). The install failed because several dependencies, as well as the Linux headers, were missing. Given that Ubuntu 9.04 has been EOL'd, the packages I need in order to get VMware Tools to install are no longer available, and I doubt the Ubuntu 9.04 install CD has them on it. What are my options? I'd rather not upgrade the version of Ubuntu yet, as the point of the VM right now is to maintain parity with the production server. Might I have better luck resetting the driver to use vmxnet2 instead of vmxnet3? Thanks in advance! Chris
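
    Packages for EOL'd releases stay available on old-releases.ubuntu.com, so one sketch of a way forward (9.04 = jaunty) is to repoint apt there before pulling in the headers and build tools that the VMware Tools installer needs; the sed pattern assumes stock archive/security mirror hostnames in sources.list:
      sudo sed -i -r 's/([a-z]{2}\.)?archive\.ubuntu\.com|security\.ubuntu\.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
      sudo apt-get update
      sudo apt-get install build-essential linux-headers-$(uname -r)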

    Read the article

  • Configuring SQL Server Express 2005

    - by MrTognio
    What's the proper way to configure SQL Server Express 2005 so that a number of clients can connect to the server? I have my application running both on the server machine and on the client machines. Given the nature of my application, the clients are branches geographically distant from each other and from the server itself. Every operation a client records must be reported to the server, because the server needs total control over usage and production. But what should I consider when configuring the connection on both sides, the server and the client? I'm not very used to SQL Server - I'm a beginner - and although I have set the main options through SQL Server Configuration Manager, I've had no success. The problem seems to be related to trusted connections, even though I have set the server to support both Windows and SQL Server authentication. When the client tries to connect to the server using Windows authentication it displays no tables; when it tries to communicate using a password (SQL Server authentication), tables are successfully displayed but no access is allowed... Thanks in advance!
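
    For reference, remote access to Express 2005 usually needs three things besides mixed-mode authentication: the TCP/IP protocol enabled in SQL Server Configuration Manager, the SQL Browser service running (Express listens on a dynamic port by default), and a firewall opening. A sketch of the service and firewall part on the server (Windows Server 2003 netsh syntax; the fixed port 1433 and the sqlbrowser.exe path are assumptions about the install):
      sc config SQLBrowser start= auto
      net start SQLBrowser
      netsh firewall add portopening TCP 1433 "SQL Server"
      netsh firewall add allowedprogram "C:\Program Files\Microsoft SQL Server\90\Shared\sqlbrowser.exe" "SQL Browser" ENABLE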

    Read the article

  • How can I prevent a DDOS attack on Amazon EC2?

    - by cwd
    One of the servers I use is hosted on the Amazon EC2 cloud. Every few months we appear to suffer a DDoS attack on this server, which slows it down incredibly. After around 30 minutes, and sometimes a reboot later, everything is back to normal. Amazon has security groups and a firewall, but what else should I have in place on an EC2 server to mitigate or prevent an attack? From similar questions I've learned to:
    - Limit the rate of requests per minute (or second) from a particular IP address via something like iptables (or maybe UFW?)
    - Have enough resources to survive such an attack - or - possibly build the web application so it is elastic / has an elastic load balancer and can quickly scale up to meet such high demand
    - If using MySQL, set up connections so that they run sequentially so that slow queries won't bog down the system
    What else am I missing? I would love information about specific tools and configuration options (again, using Linux here), and/or anything that is specific to Amazon EC2. ps: Notes about monitoring for DDoS would also be welcomed - perhaps with Nagios? ;)
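
    As a concrete example of the per-IP rate-limiting idea (a sketch only - the port, window and threshold are arbitrary, and rules like these still need to be persisted with whatever mechanism the distro uses):
      # track new connections to port 80 and drop any source IP that opens
      # more than 20 of them within a 60-second window
      iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --name HTTP --set
      iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --name HTTP \
               --update --seconds 60 --hitcount 20 -j DROP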

    Read the article

  • Can I configure Thunderbird 3 to refresh the folder list for an Exchange IMAP account?

    - by Howiecamp
    Background: When used as an IMAP client against Gmail, Thunderbird 3 (this may be the case in v2 also, not sure) will refresh its list of folders (the folders correspond to Gmail labels) when you do "Download/Sync Now..." or restart the Thunderbird client. Any new folders (labels) created in Gmail will sync to the client, and any folders moved/changed/deleted in Gmail will move/change/delete on the client as well. (Note: Thunderbird has the concept of "subscribing" to IMAP folders, presumably allowing you to determine which folders you want rather than bringing all of them down and dragging loads of data across the wire. When used against Gmail, Thunderbird appears to automatically subscribe to all folders, including folders newly created in Gmail, so this might be why the refresh happens properly.) This behavior is what I want with Exchange. When using Thunderbird with Exchange (2007), the folder list doesn't refresh when folders are added/changed/deleted on the server and/or from a different mail client. When I look at the subscription options, some are checked and some are not (I'm not sure why Thunderbird picked some and not others). And when I add new folders on the server and/or from another client, they never even appear in Thunderbird's list of folders, preventing me from subscribing to them.

    Read the article

  • MySQL Memory Limit Windows Server 2003

    - by Matt
    I am running MySQL 5.0.51a on Windows Server 2003 Standard Edition on an HP DL580 G4 with 3GB installed. One of my database tables has grown to 5.3GB with an index file of 2.5GB, which I believe is causing MySQL to be slow due to having to constantly load and unload the index file when updates are made to the table. The server itself seems to be performing OK, because MySQL is only using about 500MB of memory (there are other apps running on the system, but MySQL uses the most memory). The table is fairly active, with new records getting added all through the day, but no deletes, ever. The MySQL server allows up to 600 connections, but only a small number (10 or 20) would actually be writing to this table. I increased the memory limits in MySQL, but since the max connections is so high I don't think I can give each connection 1GB without risking a problem. Is there some tuning that would let just certain connections get a lot of memory? So I have started to look for alternatives to avert the crisis I know is coming soon. Some of the options I have:
    - Upgrade to Server 2003 Enterprise to install 64GB of memory. Question: would 32-bit MySQL be able to access more than 2GB? Would that be 2GB per thread? That would still be smaller than the index file, so it might not solve the problem completely, but it would be better than now.
    - Upgrade to Server 200x 64-bit and MySQL 64-bit.
    - Switch to a *nix 64-bit server.
    If anybody has suggestions for things to do in the meantime, opinions on which way to go, or other things that I have overlooked, I would appreciate the help. Thanks
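
    One detail that may help in the meantime: if the big table is MyISAM (likely on 5.0), its index blocks are cached in the key buffer, which is a single server-wide setting shared by all connections - it does not have to be multiplied by 600. A hedged my.ini sketch for a 32-bit server with ~3GB of RAM (the numbers are illustrative, not tuned, and a 32-bit mysqld is still limited to roughly 2GB of address space in total):
      [mysqld]
      # MyISAM index cache, shared by every connection
      key_buffer_size = 1024M
      # keep per-connection buffers modest so many threads cannot exhaust the process
      sort_buffer_size = 2M
      read_buffer_size = 1M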

    Read the article

  • Relative path incorrect in the view layer when hosting a rails3 app in a subdirectory using passenger and apache

    - by Saifis
    I want to host multiple Rails apps on a single server using sub-directories, and have encountered some relative-path problems. I have made a symbolic link to the app's public directory and placed it in the /var/www/html directory:
      /var/www/html/
        /test_app (symbolic link to the public folder of test_app)
    and set up Apache like so:
      LoadModule passenger_module /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.12/ext/apache2/mod_passenger.so
      PassengerRoot /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.12
      PassengerRuby /usr/local/bin/ruby
      <VirtualHost *:80>
        ServerName test.com
        DocumentRoot /var/www/html
        Options Indexes FollowSymLinks -MultiViews
        RailsBaseURI /test_app
        </Location>
      </VirtualHost>
    The links in the app itself work just fine; all of them acknowledge the test_app/ directory. However, when it comes to showing images from the public directory in the view, the relative path goes wrong. Say I have /system/files/1/aaa.png: the app goes looking for it in /var/www/html/system/files/1/aaa.png rather than /var/www/html/test_app/system/files/1/aaa.png. As far as I understand, this is an Apache setting problem rather than something to be done in Rails; if it's possible, I would prefer to have it contained in the Apache conf file rather than having to alter the code.
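
    A purely Apache-side band-aid (a sketch, assuming the attachments really live under the app's public/system as the paths above suggest) is to alias the un-prefixed asset path back into the sub-directory deployment inside that vhost:
      # map /system/... requests onto the sub-directory app's public/system
      Alias /system /var/www/html/test_app/system
    The cleaner long-term fix is to have the app generate /test_app/... URLs itself (Rails' relative URL root setting), but that lives in the app rather than in the Apache conf.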

    Read the article

  • PXELinux and compressed kernels/images

    - by Yvan JANSSENS
    Is it possible to boot a compressed kernel with a compressed initrd using PXELinux? First, a little background: we created a custom Linux distro for diskless OpenCL computing nodes. We want those nodes to fetch their OS from the network. Our distro is composed of a kernel (duh) and a large initrd which is loaded into RAM, and everything is executed from there. We chose to run everything off the initrd for two reasons:
    - NFS was not an option for serving the filesystem's extra contents
    - fast file access from RAM; no persistent storage is needed, and data and config are pulled dynamically through a SOAP service
    Now, our initrd is about 450M in size. At our network speeds, it takes about two to three minutes to load a single client. Will compression speed up the download, and if yes, which one should be used? Is LZMA supported by PXELinux, or do we need to stick to bzip2 or gzip? Because of the 2-3 minute loading time, booting 15 nodes over the same network link takes quite a lot of time. We decided not to use hard drives or CD/DVD drives for financial reasons (the cheapest HDD @ €30, times 15, is a lot of money saved ;-) ). So, our question is: what compression options are available for this setup? And how do we do this? Thank you for your time! Yvan Janssens
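
    A note that may save some experimenting: PXELINUX just loads the initrd image into memory; it is the kernel that unpacks it, so the usable formats depend on the kernel's CONFIG_RD_* options (gzip is the safe default, xz/LZMA only if built in). A quick way to compare sizes, assuming the current image is the usual gzip-compressed cpio archive and the file names are examples:
      zcat initrd.img > initrd.cpio
      gzip -9 -c initrd.cpio > initrd.img.gz
      xz -9 --check=crc32 -c initrd.cpio > initrd.img.xz   # the kernel's xz decompressor wants crc32 or none
      ls -lh initrd.img.gz initrd.img.xz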

    Read the article

  • Windows 7 32bit resolution is limited for HDTV monitor

    - by Nick
    I have a small Magnavox HDTV that I am using to test a Frankenstein PC build. The goal is to eventually connect it to my old rear-projection HDTV, which supports 1080i via component input. The goal is also not to buy any more stuff; otherwise I will just buy a smart TV and be done. I have an ATI Radeon HD 3450 with a YPbPr component-out adapter. The monitor supports 1080p, but over analog component out it should only go up to 1080i. I have had this working with another setup. On this particular setup I have Windows 7 32-bit with the latest 12.8 Catalyst drivers installed. The Windows splash screen starts in 480p, then switches to 480i when the login prompt is shown. When I try to change the resolution, 720x480 is the maximum value on the slider. I have also tried "List all modes", and that also maxes out at 720x480. There are two options for this monitor in the devices section, Generic PNP Monitor and Generic non-PNP Monitor; neither setting fixes this. Any ideas on how to get 1080i?

    Read the article

  • ASA access lists and Egress Filtering

    - by Nate
    Hello. I'm trying to learn how to use a Cisco ASA firewall, and I don't really know what I'm doing. I'm trying to set up some egress filtering, with the goal of allowing only the minimal amount of traffic out of the network, even if it originated from within the inside interface. In other words, I'm trying to set up dmz_in and inside_in ACLs as if the inside interface is not too trustworthy. I haven't fully grasped all the concepts yet, so I have a few issues. Assume that we're working with three interfaces: inside, outside, and DMZ. Let's say I have a server (X.Y.Z.1) that has to respond to PING, HTTP, SSH, FTP, MySQL, and SMTP. My ACL looks something like this:
      access-list outside_in extended permit icmp any host X.Y.Z.1 echo-reply
      access-list outside_in extended permit tcp any host X.Y.Z.1 eq www
      access-list outside_in extended permit tcp any host X.Y.Z.1 eq ssh
      access-list outside_in extended permit tcp any host X.Y.Z.1 eq ftp
      access-list outside_in extended permit tcp any host X.Y.Z.1 eq ftp-data established
      access-list outside_in extended permit tcp any host X.Y.Z.1 eq 3306
      access-list outside_in extended permit tcp any host X.Y.Z.1 eq smtp
    and I apply it like this:
      access-group outside_in in interface outside
    My question is: what can I do for egress filtering? I want to allow only the minimal amount of traffic out. Do I just "reverse" the rules (i.e. the smtp rule becomes access-list inside_out extended permit tcp host X.Y.Z.1 any eq smtp) and call it a day, or can I further cull my options? What can I safely block? Furthermore, when doing egress filtering, is it enough to apply "inverted" rules to the outside interface, or should I also look into making dmz_in and inside_in ACLs? I've heard the term "egress filtering" thrown around a lot, but I don't really know what I'm doing. Any pointers towards good resources and reading would also be helpful; most of the ones I've found presume that I know a lot more than I do.

    Read the article

  • Apache2 403 permission denied on Ubuntu 12.04

    - by skeniver
    I have a sub-directory in my /var/www folder called prod, which is password protected. It was all working fine until I asked my server admin to help me set up allow-all access to one particular file. Now the entire folder just gives me a 403 error. This is the sites-enabled file:
      <VirtualHost *:80>
        ServerAdmin [email protected]
        # Server name
        ServerName prod.xxx.co.uk
        DocumentRoot /var/www/prod
        <Directory /var/www/prod>
          Options Indexes FollowSymLinks MultiViews +ExecCGI Includes
          AllowOverride None
          Order allow,deny
          AuthType Basic
          AuthName "Please log in"
          AuthUserFile /home/ubuntu/.htpasswd
          Require valid-user
        </Directory>
        <Directory /var/www/prod/xxx/cgi-bin/api.pl>
          Allow from All
          Satisfy Any
        </Directory>
        ScriptAlias /xxx/cgi-bin/ /var/www/prod/xxx/cgi-bin/
        ErrorLog ${APACHE_LOG_DIR}/prod.xxx.error.log
        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn
        CustomLog ${APACHE_LOG_DIR}/prod.xxx.access.log combined
      </VirtualHost>
    Now he's unsure why this is blocking me out completely. No permissions have been changed, and this is the /var/www/ folder:
      4 drwxr-xr-x 2 root root     4096 Jan  3 21:10 images
      4 drwxr-sr-x 4 root www-data 4096 Mar 31 14:47 jslib
      4 drwxr-xr-x 7 root root     4096 Jun  2 13:00 prod
    When I try to visit http://prod.xxx.co.uk, I don't get asked for the password; I just get 403'd. I hope I've given enough information... Anyone able to spot something he can't?
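
    For comparison, a sketch of how that kind of exception is usually expressed in Apache 2.2 (same paths as above, untested): the outer block needs an Allow line, because "Order allow,deny" with no Allow directive denies everything outright - which matches the straight-to-403 symptom - and a single script is matched with <Files> or <Location> rather than <Directory>, which only applies to directories:
      <Directory /var/www/prod>
        Options Indexes FollowSymLinks MultiViews +ExecCGI Includes
        AllowOverride None
        Order allow,deny
        # host check now passes; the password is still required (Satisfy defaults to All)
        Allow from all
        AuthType Basic
        AuthName "Please log in"
        AuthUserFile /home/ubuntu/.htpasswd
        Require valid-user
      </Directory>
      # the one URL that should skip the password prompt
      <Location /xxx/cgi-bin/api.pl>
        Order allow,deny
        Allow from all
        Satisfy Any
      </Location>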

    Read the article

  • Is it possible to get ESC to behave as an actual escape key?

    - by leftaroundabout
    So I have finally switched - not so much because I'm yet convinced Emacs in itself is the better editor, but because it certainly does have more powerful extensions. I am still using vim-mode, though; perhaps that's part of my problem... but I really don't intend to abandon the modal approach, so I'll probably stay with it. I'm getting along quite well, but one thing I find really unnerving is the behaviour of the Esc key (which I have in the Shift Lock position). I'm used to relying on it a lot as more or less a "panic key", which may not be nice, but I find it lets me care quite a bit less about the keystrokes themselves, and thus work faster. What I'd like this key to do is just get me out of any minibuffer or special editing mode into a well-defined normal state. Perhaps most importantly, I would like it to not do anything unrelated, such as:
    - simulate Meta (what do I have an Alt key for?)
    - close windows I'm not even in at the time
    - get interpreted as the final key in some key sequence
    - ...
    Is it possible to turn all that off and make Esc an actual escape key? Vim-mode does make it behave kind of as I like in some situations, but when other plugins are involved this often breaks. Alternatively, are there different options that might suit my kind of workflow?

    Read the article

  • New AD user request form and workflow

    - by user66390
    I'm wondering if anyone is providing a solid solution for creating New Network User Account Request forms and attaching workflows to them to automate account creation? I'm currently investigating a number of options, but am surprised that such a ubiquitous task hasn't been solved a dozen times over and thoroughly documented - or at least isn't integrated into current off-the-shelf change management and ticketing systems. Ideally, I'd like our current ticketing system, ServiceDesk+, to present a standard 'New User' form to department heads, which they can fill in with the required new-user details. This triggers a workflow that submits the request as a ticket that can be reviewed and actioned. Actioning the ticket triggers a workflow that creates a user in AD with the details provided and notifies the department head upon completion. All told, a pretty standard requirement that I'm sure most organizations have. What are other people doing to accomplish this? Edit: I should add, I'm looking more for "supported" methods. As is, I've submitted a number of scripted solutions, none of which have met with manager approval.
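
    As a point of reference for the account-creation end of such a workflow: most ticketing tools can shell out to a script when a ticket is approved, and on Server 2003 and later that script can be as small as a dsadd call. A sketch only - the OU, names and domain below are made-up examples:
      dsadd user "CN=Jane Example,OU=Staff,DC=corp,DC=example,DC=com" ^
          -samid jexample -upn jexample@corp.example.com ^
          -fn Jane -ln Example -display "Jane Example" ^
          -pwd * -mustchpwd yes -disabled no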

    Read the article

  • Cannot change power button or lid close action

    - by Mark Henderson
    I have a Samsung 900x laptop and I want to change it so that when I close the lid, nothing happens (I often close the lid to carry it somewhere 10 seconds away, and putting it into suspend cancels any active downloads, etc.). Easy, right? Go to Power Options and change it there, just like on every other laptop in the world. Not so fast: Say what?! That message only shows up for the Lid Close Action, Power Button and Sleep Button settings; I can change every other setting except those three. I'm definitely an Administrator on the computer, and I've googled the error and found dozens of hits on other crappy forums, but of course nothing on those worked (otherwise, I wouldn't be here). And as usual the "Why can't..." hyperlink gives no useful information whatsoever (just a generic Help document). So - how can I change what closing the lid does? I will modify the registry directly if I have to.
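
    If the GUI stays locked (a vendor power-management tool or a policy is usually what greys these out), the same settings can be reached from an elevated command prompt; a sketch using powercfg's documented aliases, where 0 = do nothing, 1 = sleep, 2 = hibernate, 3 = shut down:
      powercfg -setacvalueindex SCHEME_CURRENT SUB_BUTTONS LIDACTION 0
      powercfg -setdcvalueindex SCHEME_CURRENT SUB_BUTTONS LIDACTION 0
      powercfg -setactive SCHEME_CURRENT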

    Read the article

  • Fix stubborn 'Setting locale failed.'

    - by user60129
    I have a very stubborn, well-known locale error on Ubuntu 9.10:
      perl: warning: Setting locale failed.
      perl: warning: Please check that your locale settings:
        LANGUAGE = (unset),
        LC_ALL = (unset),
        LC_TIME = "custom.UTF-8",
        LANG = "en_US.UTF-8"
    I have tried the following:
    - Added LANG=en_US.UTF-8 and LC_ALL=en_US.UTF-8 to /etc/environment
    - Ran apt-get install --reinstall locales (error: perl: warning: Falling back to the standard locale ("C"). /usr/bin/mandb: can't set the locale; make sure $LC_* and $LANG are correct)
    - Ran sudo dpkg-reconfigure locales. Result: Cannot set LC_ALL to default locale: No such file or directory, and then it regenerates all locales, including en_US.UTF-8
    - sudo locale-gen updates all locales successfully, including en_US.UTF-8
    - sudo locale-gen un_US en_US.UTF-8 gives no error and no other output
    - /etc/default/locale says LANG="en_US.UTF-8"
    - echo $LANG gives en_US.UTF-8
    - /var/lib/locales/supported.d/local says en_US.UTF-8 UTF-8
    - locale -a gives me: C en_AG en_AU.utf8 en_BW.utf8 en_CA.utf8 en_DK.utf8 en_GB.utf8 en_HK.utf8 en_IE.utf8 en_IN en_NG en_NZ.utf8 en_PH.utf8 en_SG.utf8 en_US.utf8 en_ZA.utf8 en_ZW.utf8 POSIX
    So... I am pretty much out of options I can think of. Anybody have any ideas?? Thanks!
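
    One thing worth noting: the only variable the warning actually complains about is LC_TIME = "custom.UTF-8", a locale that locale-gen can never build, and that value usually comes from a per-user override rather than from /etc/default/locale. A sketch of hunting it down and overriding it (the grep targets are just the usual suspects):
      grep -R "custom" /etc/default/locale /etc/environment ~/.pam_environment ~/.profile ~/.bashrc 2>/dev/null
      # then either remove the offending line, or override it system-wide:
      sudo update-locale LC_TIME=en_US.UTF-8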

    Read the article

  • VMware guest pauses when the host is idle - how do I keep it running?

    - by EMP
    I'm running VMware Workstation 7 with Windows 7 x64 as the guest and Windows XP x64 as the host. Inside the guest I run a long-running console application which prints out progress messages with timestamps on them. Sometimes I leave it running for several hours while I lock the host OS and don't touch the computer at all. When I come back, I find that some time after I left it seems to have paused and automatically resumed: the console app hasn't made much progress and there's a large time gap in its progress messages. There's nothing relevant in the host event log, but in the guest Application event log I can see these messages around the time I left:
      A request to disable the Desktop Window Manager was made by process (VMware Tools Service)
      The Desktop Window Manager was unable to start because composition was disabled by a running application
    And later, around the time I returned, this shows up in the System log:
      The system time has changed to 2012-01-12T06:36:46.921000000Z from 2012-01-12T03:18:19.953079000Z.
    That seems to support my theory that it's VMware doing something and not Windows itself. The question is: how do I stop it doing that? I want my application to continue running. By the way, the power options are set to never sleep in both the guest and the host.

    Read the article
