Search Results

Search found 23265 results on 931 pages for 'justin case'.

  • Launching firefox on remote server causes local firefox to start instead

    - by terdon
    Right, this is strange. I am connecting from my laptop (LMDE) to a remote host (SUSE Linux Enterprise) using ssh -X. I want to launch a Firefox instance running on the remote server so I can have access to webpages on a private network.

        User@RemoteMachine $ which -a firefox
        /usr/bin/firefox
        User@RemoteMachine $ /usr/bin/firefox --version
        Mozilla Firefox 2.0.0.2, Copyright (c) 1998 - 2007 mozilla.org

        User@LocalMachine $ which -a firefox
        /usr/bin/firefox
        User@LocalMachine $ /usr/bin/firefox --version
        Mozilla Firefox 14.0.1

    Now, if Firefox is not running on the local machine, everything goes as expected, and executing firefox on the remote machine causes a Firefox (v2.0) window running on the remote machine to show up. However, if Firefox is running on the local machine, a second window of Firefox 14.0.1 running on the local machine appears instead. I have checked top on both machines: in the second case, a firefox process briefly appears on the remote machine and then disappears when the local version of Firefox is launched. My questions are the following: What gives? How/why can Firefox connect to its existing instance on the local machine? The remote machine appears to have access to the local machine; in fact, it appears to have the right to execute programs on my local machine. Am I missing something, or is this just weird? Is this not a security risk?
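
    For what it's worth, Firefox looks for an already-running instance owning the X display it is asked to draw on and, if it finds one, simply tells that instance to open a new window and exits, so nothing is being executed on the laptop by the remote host. A hedged sketch of the usual workaround (the -no-remote switch should exist in both builds involved here, but verify against your versions):

        # run the remote build and tell it to ignore any Firefox
        # instance already owning the local X display
        ssh -X User@RemoteMachine '/usr/bin/firefox -no-remote'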

  • Remote Desktop Mobile mangles barcodes coming from scanner

    - by sfonck
    We have an application here that uses handhelds to scan barcodes. The handhelds actually open a remote desktop session to a server where the application runs, and that works fine. Now we have bought some new Motorola MC55s running Windows Mobile 6.1 Classic, and when the application is used over remote desktop, the characters of the barcodes get mangled. I have already tried the following:

    - When scanning a barcode on the MC55 itself, it is displayed correctly.
    - When scanning a barcode via remote desktop into a Notepad session, it is incorrect.
    - Played with all the options of Remote Desktop Mobile - no result.
    - Disabled 'autocorrect' and 'suggest words when entering text' in the input settings - no result.

    The strange thing is:

    - A barcode that consists of only numbers gets scanned correctly.
    - The mangled characters come through in lower case.
    - For some codes a \t is mangled in between (it should normally be entered after the barcode), e.g.:

        'PERIN4' becomes 'ERINp4'
        'MGZB' becomes 'GZB m'
        'BAK664' becomes 'AK664 b'
        'MAGBFA01' becomes 'AGBFmA01'
        '5021879949500' gets scanned correctly

    Final solution: the supplier of the handhelds said the handheld was sending the characters too fast over the remote desktop connection. They changed the handheld to wait 50 ms between sending each character, which now produces correct results. Scanning a barcode became somewhat slower, but it is barely noticeable to end users.

  • DNS slow after losing DNS Server

    - by Tim
    We have set up a small Windows Server 2008 R2 network with a domain controller that also acts as the DNS server for the network (we opted to install DNS when setting up the domain). This network isn't connected to the Internet in any way, so all machines have been configured to use the IP address of the domain controller as their primary DNS, and no secondary DNS server has been configured. If we shut down the domain controller or unplug its network cable, DNS lookups become quite slow and the performance of the network suffers. For example, a ping command using a hostname takes around 5-6 seconds to resolve the name. I presume this is because the client first tries the DNS server and then, once that times out, falls back to some other method of resolving names. All the machines have static IP addresses, so we are considering just putting all entries in the HOSTS file of each machine. However, it would be nice to keep a centralised DNS in case we one day change the IP of one of the machines. Is there a better way to speed this up?
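
    If the HOSTS route is chosen, the format is one entry per line, IP address first; a minimal sketch (names and addresses are purely illustrative):

        # C:\Windows\System32\drivers\etc\hosts on each machine
        192.168.1.10    dc01.example.local      dc01
        192.168.1.11    files.example.local     files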

  • Confusion about Kerberos, delegation and SPNs.

    - by Vilx-
    I already posted this question on SO, but its nature sits between programming and server configuration, so I'll re-post it here as well. I'm trying to write a proof-of-concept application that performs Kerberos delegation. I've written all the code, and it seems to be working (I'm authenticating fine), but the resulting security context doesn't have the ISC_REQ_DELEGATE flag set. So I'm thinking that maybe one of the endpoints (client or server) is forbidden to delegate. However, I'm not authenticating against an SPN, just one domain user against another domain user. As the SPN for InitializeSecurityContext() I'm passing "[email protected]" (which is the user account under which the server application is running). As I understand it, domain users have delegation enabled by default. Anyway, I asked the admin to check, and the "account is sensitive and cannot be delegated" checkbox is off. I know that if my server were running as NETWORK SERVICE and I used an SPN to connect to it, then I'd need the computer account in AD to have the "Trust computer for delegation" checkbox checked (off by default), but... this is not the case, right? Or is it? Also, when the checkbox on the computer account is set, do the changes take effect immediately, or must I reboot the server PC or wait for a while?
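
    A hedged way to double-check both ends from a domain-joined box, assuming the Active Directory PowerShell module from RSAT is available (the account names are placeholders):

        # does the client account allow its credentials to be delegated?
        Get-ADUser clientuser -Properties AccountNotDelegated

        # is the account the server runs under trusted for delegation?
        Get-ADUser serveruser -Properties TrustedForDelegation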

  • Audio Static/Interference regardless of audio interface?

    - by Tom
    I currently run a media center/server on a Lubuntu machine. The machine specs:

    - Core 2 Duo Extreme
    - EVGA SLI 680i motherboard
    - 2 GB DDR2 RAM
    - 3 hard drives, no RAID: WD Caviar Black, Green, and Samsung Spinpoint
    - Galaxy GTX 220 1GB
    - External USB Creative X-Fi Extreme card
    - 550W power supply

    The machine is hooked up through an optical cable to an Onkyo HTR340 receiver via the X-Fi card. Whenever I play any audio, regardless of whether it is through XBMC, the default audio player, a flash video, etc., I get a horrible static sound that randomly gets louder. Here is a video of the sound: http://www.youtube.com/watch?v=SqKQkxYRVA4 The static comes in randomly, sometimes going away for short periods, but it always comes back eventually. So far I have tried everything I could think of:

    - Reinstalling the OS
    - Installing/upgrading/repairing PulseAudio/ALSA
    - Installing alternate OSes: straight Ubuntu, Lubuntu, Xubuntu, Arch, Mint, Windows 7
    - Switching audio from the external card to internal: optical, audio out through HDMI, audio out through headphones
    - Different ports on the receiver (my main desktop sounds fine on the same sound system)
    - Different optical cables
    - Unplugging everything unnecessary from the motherboard (1 HD, 1 stick of RAM, 1 keyboard)
    - Swapping out RAM
    - Swapping out the motherboard
    - Replacing the graphics card (it was replaced because the fan was noisy, not specifically for this problem)
    - Different hard drives
    - Swapping the power supply
    - Disabling onboard audio

    Pretty much everything short of swapping the CPU. I haven't been able to narrow down the problem and it is getting frustrating. Is it possible that the CPU is faulty and might cause a problem such as this, or that the PC case is shorting out the motherboard? Any kind of suggestions will be appreciated.
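
    One low-level test that can help separate the software mixing layers from the hardware is playing a tone straight through ALSA, bypassing PulseAudio; a sketch (the device name is a guess, list yours first):

        # list playback devices, then test one directly
        aplay -l
        speaker-test -D plughw:0,0 -c 2 -t sine -f 440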

  • Toshiba Portege M400 screen rotation not working under Windows 7 x64

    - by Christi
    I have installed Windows 7 on my Toshiba Portege M400. This in itself was relatively tricky.* However, the button utilities aren't quite working for me. One of the buttons tries to launch the Toshiba Assist program, which doesn't appear to be available under Windows 7 for the M400, but this I can live without. More importantly, the screen won't rotate as it is supposed to when you hold the "cancel" (X in a circle) button on the bezel. The PC is set to run "C:\Program Files (x86)\Toshiba\Toshiba Rotation Utility\phtrot.exe". There is a "trot.exe" file in the same directory (the former appears to cause slightly different behaviour when rotation is done by press and hold). Neither of these programs rotates the screen, either via the buttons or from the command line. The screen rotates normally when switching from tablet to laptop mode, so there does not appear to be an inherent problem with rotation; I'd just like to be able to use the buttons on the side of the screen to change the screen orientation. Windows XP used to have a "setrot" utility to do this, but that seems to have gone in Windows 7. Thanks for your help.

    *Just in case anyone comes looking for how to do this: you need to extract driver files from http://cdgenp01.csd.toshiba.com/content/support/downloads/util_raid_os2007252a.exe, which does not seem to be listed among the available files for the M400. This executable contains the SATA interface drivers that need to be loaded by the installer before it can see your hard disk drive. It needs to be unpacked and the files copied to a USB key, from which they can then be loaded during the install process. The utilities etc. for post-install use are all available from the Toshiba USA support website.

  • Explain why a folder's permissions differ depending on HOW the user is accessing the server (AFP vs SSH)

    - by Meltemi
    Hoping someone can explain what is probably fairly obvious... but confuses me. Imagine two users with admin privileges on our server (Mac OS X Server 10.5). Call them joe & bob. Both users are members of these groups:

        Staff      Group ID: 20
        Workgroup  Group ID: 1025

    The shared folder "devfolder" has sharing set as so:

        POSIX:  Owner: joe    read & write
                Group: admin  read & write
                Other         no access
        ACL:    Workgroup     Allow Read & write

    The question is why, when looking at the same folder, the ownership appears to change depending on who's doing the looking. Both are looking at the same folder on the server. From Joe's perspective:

        xserve:devfolder joe$ ls -l
        drwxrwxr-x 6 joe workgroup 204 May 20 19:32 app
        drwxrwxr-x 9 joe workgroup 306 May 20 19:32 config
        drwxrwxr-x 3 joe workgroup 102 May 20 19:32 db
        drwxrwxr-x 3 joe workgroup 102 May 20 19:32 doc
        drwxrwxr-x 3 joe workgroup 102 May 20 19:32 lib

    And from Bob's perspective (the folder mounted on his machine via AFP):

        bobmac:devfolder bob$ ls -l
        drwxrwxr-x 6 bob _bob 264 May 20 19:32 app
        drwxrwxr-x 9 bob _bob 264 May 20 19:32 config
        drwxrwxr-x 3 bob _bob 264 May 20 19:32 db
        drwxrwxr-x 3 bob _bob 264 May 20 19:32 doc
        drwxrwxr-x 3 bob _bob 264 May 20 19:32 lib

    Now, if Bob connects to the server via SSH, his output is identical to Joe's, as expected. Can anyone tell me what the client is doing in this case, and what should be expected when Bob creates or updates files in this folder? What tools do I have to better understand this from the command line? Is this normal? Is there perhaps a "cleaner" way that wouldn't be confusing with "bob _bob"?
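
    For poking at this from the command line, comparing numeric IDs and ACLs on each side usually makes the mapping visible; a sketch (paths are illustrative):

        ls -len /Volumes/devfolder   # on Bob's Mac, against the AFP mount (-n: numeric IDs, -e: ACLs)
        ls -len /Shared/devfolder    # on the server, over SSH
        id joe; id bob               # how each machine resolves the accounts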

  • How to set shmall, shmmax, shmni, etc ... in general and for postgresql

    - by jpic
    I've used the documentation from PostgreSQL to arrive at, for example, this config:

        >>> cat /proc/meminfo
        MemTotal:        16345480 kB
        MemFree:          1770128 kB
        Buffers:           382184 kB
        Cached:          10432632 kB
        SwapCached:             0 kB
        Active:           9228324 kB
        Inactive:         4621264 kB
        Active(anon):     7019996 kB
        Inactive(anon):    548528 kB
        Active(file):     2208328 kB
        Inactive(file):   4072736 kB
        Unevictable:            0 kB
        Mlocked:                0 kB
        SwapTotal:              0 kB
        SwapFree:               0 kB
        Dirty:               3432 kB
        Writeback:              0 kB
        AnonPages:        3034588 kB
        Mapped:           4243720 kB
        Shmem:            4533752 kB
        Slab:              481728 kB
        SReclaimable:      440712 kB
        SUnreclaim:         41016 kB
        KernelStack:         1776 kB
        PageTables:         39208 kB
        NFS_Unstable:           0 kB
        Bounce:                 0 kB
        WritebackTmp:           0 kB
        CommitLimit:      8172740 kB
        Committed_AS:    14935216 kB
        VmallocTotal:   34359738367 kB
        VmallocUsed:       399340 kB
        VmallocChunk:   34359334908 kB
        HardwareCorrupted:      0 kB
        AnonHugePages:     456704 kB
        HugePages_Total:        0
        HugePages_Free:         0
        HugePages_Rsvd:         0
        HugePages_Surp:         0
        Hugepagesize:        2048 kB
        DirectMap4k:        12288 kB
        DirectMap2M:     16680960 kB

        >>> ipcs -l

        ------ Shared Memory Limits --------
        max number of segments = 4096
        max seg size (kbytes) = 4316816
        max total shared memory (kbytes) = 4316816
        min seg size (bytes) = 1

        ------ Semaphore Limits --------
        max number of arrays = 128
        max semaphores per array = 250
        max semaphores system wide = 32000
        max ops per semop call = 32
        semaphore max value = 32767

        ------ Messages Limits --------
        max queues system wide = 31918
        max size of message (bytes) = 8192
        default max size of queue (bytes) = 16384

    sysctl.conf extract:

        kernel.shmall = 1079204
        kernel.shmmax = 4420419584

    postgresql.conf non-defaults:

        max_connections = 60                   # (change requires restart)
        shared_buffers = 4GB                   # min 128kB
        work_mem = 4MB                         # min 64kB
        wal_sync_method = open_sync            # the default is the first option
        checkpoint_segments = 16               # in logfile segments, min 1, 16MB each
        checkpoint_completion_target = 0.9     # checkpoint target duration, 0.0 - 1.0
        effective_cache_size = 6GB

    Is this appropriate? If not (or not necessarily), in which cases would it be appropriate? We did notice nice performance improvements with this config; how would you improve it? How should kernel memory management parameters be set? Can anybody explain how to really set them from the ground up?
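
    On the "from the ground up" part: kernel.shmmax is the largest single shared memory segment in bytes, while kernel.shmall is the system-wide total counted in pages, so the two are linked through the page size. A sketch of the usual derivation (the sizes are illustrative, allowing one segment of up to half the RAM on this 16 GB box):

        # page size, almost always 4096 on x86
        getconf PAGE_SIZE

        # largest single segment, in BYTES: 8 GiB = 8 * 1024^3
        sysctl -w kernel.shmmax=8589934592

        # system-wide total, in PAGES: 8589934592 / 4096
        sysctl -w kernel.shmall=2097152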

  • Too much memory consumed during TFS automated build

    - by Bernard Chen
    We're running TFS 2010 Standard Edition, and we've set up an automated build to run whenever someone checks in code. We run all of the automated tests (built with MSTest) as part of the build. We've configured the build to run the tests as a 64-bit process, but the QTAgent.exe that runs the tests grows in memory while the tests are running. It currently reaches 8GB for the ~650 tests we have, and the process slowed significantly when we went from 450 tests to 650 tests. When we run all of the tests in the local development environment, memory seems to be freed at least with each TestClass and never exceeds a certain level, and the time to run all tests has not increased significantly there. Is there a way to configure the build service to free up memory with each Test or each TestClass? With the way things currently run, the build process gets very slow when we start to run out of memory on the machine.

    Edit: I found the MSTest invocation in the build log, ran it manually, and saw the same runaway memory behavior. I removed the /publish, /publishbuild, /teamproject, /platform, and /flavor parameters from the invocation of MSTest, in case the test runner was holding onto results until the end, but the behavior didn't change. I ran the same command line on a dev box, separate from the build server, and the memory was freed up frequently. It seems there must be something wrong or different about the build server that causes it to behave differently, but I'm stumped as to where to look. I've looked at qtagent.exe.config, mstest.exe.config, and the versions of both executables. What else might affect this?

  • "Unable to open MRTG log file" error with nagios and mrtg

    - by Simone Magnaschi
    We have a strange issue with our setup of Icinga/Nagios and MRTG. Icinga is working great and has no problems; it can monitor basically everything without issues. We set up MRTG to gather bandwidth data from our routers and switches. MRTG is working fine: it stores the log data in the /var/www/mrtg/ directory and displays the graph data via web, so we assume MRTG is doing great. We tried to set up bandwidth checks in Nagios:

        define service{
            use                  generic-service  ; Inherit values from a template
            host_name            zywall-agora
            service_description  ZYWALL AGORA TRAFFICO
            check_command        check_local_mrtgtraf!/var/www/mrtg/x.x.x.x_2.log!AVG!1000000,2000000!5000000,5000000!1000
            check_interval       1  ; Check the service every 1 minute under normal conditions
            retry_interval       1  ; Re-check every minute until its final/hard state is determined
        }

    where /var/www/mrtg/x.x.x.x_2.log is the correct log file path. We keep getting an "Unable to open MRTG log file" error in the test result in the Icinga web interface. We tried everything:

    - give ownership of the log file to user nagios or icinga
    - give chmod 777 to the file
    - copy the file to another directory and give it full permissions

    Same error. The strange thing is that if we run the command Nagios generates in a bash session, it works like a charm:

        /usr/lib64/nagios/plugins/check_mrtgtraf -F /var/www/mrtg/x.x.x.x_2.log -a AVG -w 10,20 -c 5000000,5000000 -e 10

    Result:

        Traffic WARNING - Avg. In = 17.9 KB/s, Avg. Out = 5.0 KB/s|in=17.877930KB/s;10.000000;5000000.000000;0.000000 out=5.000000KB/s;20.000000;5000000.000000;0.000000

    We ran that command line as root, as user nagios and as user icinga, and all three worked fine. We thought the command that Nagios runs might have something wrong in it, so we debugged Nagios, but found that the generated command is the same as above. Searching Google for this kind of problem returns only systems where MRTG is not installed, or issues with a wrong path to the log file, but neither seems to be our case. We are stuck; can somebody help?
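
    When a plugin works from every interactive shell but fails under the daemon, the usual suspects are SELinux or a parent directory the daemon's user cannot traverse; a couple of hedged checks (assuming a distribution with SELinux and the audit tools installed):

        # is SELinux denying the daemon what an interactive shell is allowed to do?
        getenforce
        ausearch -m avc -ts recent | grep -i mrtg

        # can every path component be traversed by the nagios/icinga user?
        namei -l /var/www/mrtg/x.x.x.x_2.log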

  • Arch Linux drops me on my school network.

    - by Kravlin
    I'm running a Lenovo X61 which I carry around my college for getting on the internet at various points in the day. The network has always been finicky, but recently it's gotten worse. I'll connect using iwconfig, get an IP from dhcpcd, and log in to their system using vpnc. Sometimes I'll stay connected for hours, but most of the time my network traffic will drop to zero within 30 seconds and I'll be unable to do anything. My computer still believes it's connected; to try again, I need to bring my wireless interface down, bring it back up, and reconnect. It's gotten so bad that I keep a window on my computer pinging Yahoo or Google constantly just to know whether I'm still able to get online. I know other people who use Arch Linux and don't have the same problems, as well as people who use Ubuntu and haven't had any problems either. It seems like my computer is a special case. Does anyone have any suggestions on how to fix it? dmesg doesn't show anything out of the ordinary, and I don't know where else to look for errors or other things to try.
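
    For reference, the reconnect cycle described above, scripted (the interface name and ESSID are placeholders):

        #!/bin/sh
        # cycle the interface and re-associate
        ip link set wlan0 down
        ip link set wlan0 up
        iwconfig wlan0 essid "CollegeNet"
        dhcpcd wlan0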

  • Trouble installing/starting Redis via Resque

    - by Craig Flannagan
    Trying to complete the instructions for Resque/Redis installation here: https://github.com/defunkt/resque/blob/master/README.markdown

    I am stuck where I try to start up Redis via Resque with the following command:

        Craig:/usr/local/src/resque$ rake redis:start
        (in /usr/local/src/resque)
        Detach with Ctrl+\  Re-attach with rake redis:attach
        ../../bin/dtach -A /tmp/redis.dtach ../../bin/redis-server ../../../etc/redis.conf
        rake aborted!
        Command failed with status (127): [../../bin/dtach -A /tmp/redis.dtach ../../...]
        (See full trace by running task with --trace)

    Rerunning with --trace (showing only part of the trace):

        Craig:/usr/local/src/resque$ rake redis:start --trace
        (in /usr/local/src/resque)
        ** Invoke redis:start (first_time)
        ** Execute redis:start
        Detach with Ctrl+\  Re-attach with rake redis:attach
        ../../bin/dtach -A /tmp/redis.dtach ../../bin/redis-server ../../../etc/redis.conf
        rake aborted!
        Command failed with status (127): [../../bin/dtach -A /tmp/redis.dtach ../../...]
        /Users/craigflannagan/.rvm/gems/ruby-1.9.2-head@foo/gems/rake-0.8.7/lib/rake.rb:995:in `block in sh'

    Not sure what is wrong here. By the way, when I did those instructions:

        $ git clone git://github.com/defunkt/resque.git
        $ cd resque
        $ PREFIX=<your_prefix> rake redis:install dtach:install
        $ rake redis:start

    I wasn't sure whether I was supposed to do step #1 from within the Rails project, or whether the git clone was supposed to create a new folder outside the Rails project (in this case, I chose to have the folder created outside the project).
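
    Exit status 127 from a shell means "command not found", so the first thing to verify is that the relative paths the rake task uses actually resolve to dtach and redis-server binaries from wherever rake runs; a quick hedged check:

        cd /usr/local/src/resque
        # confirm the binaries exist and are executable where the task looks
        ls -l ../../bin/dtach ../../bin/redis-server
        which dtach redis-server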

  • Install GRUB bootloader without installing Linux.

    - by Kavitesh Singh
    I have Windows 7 installed on the system, and I want to create a separate WinPE bootable partition that the system can fall back to when things go wrong. Now, Win7 does give this option, and I could also edit the BCD store to make changes in the Win7 boot menu, or use EasyBCD. I don't want to use these options, as I need to customize the hiding/unhiding of partitions at boot time, etc. I searched and found that GRUB might be the tool I am looking for. I want to use the GRUB loader without any version of Linux installed on the system. Can someone guide me on how to install GRUB to the hard disk MBR and configure the boot menu? Searching the internet, I mostly came across commands that look for GRUB on the hard disk because of an existing Linux installation and then try to repair it; in my case there is no Linux at all. I have an Ubuntu 9.10 bootable CD, and an openSUSE 11.2 live CD and installation disc. Can I use them to install GRUB on my system?
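
    A hedged sketch of installing GRUB to the MBR from the Ubuntu 9.10 live CD (9.10 ships GRUB 2; the device names are illustrative, and GRUB still needs one small partition, e.g. ext2, to keep its images and menu on):

        # from the live CD: mount the partition that will hold GRUB's files
        sudo mount /dev/sda3 /mnt

        # write GRUB to the MBR of the first disk, storing its files under /mnt
        sudo grub-install --root-directory=/mnt /dev/sda

        # the boot menu then lives in /mnt/boot/grub/grub.cfg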

  • IP address spoofing using Source Routing

    - by iamrohitbanga
    With IP options we can specify the route we want an IP packet to take while connecting to a server. If we know that a particular server provides some extra functionality based on the source IP address, can we not exploit this by spoofing an IP packet so that the source IP address is the privileged IP address and one of the hosts on the source route is our own? So if the privileged IP address is x1, the server IP address is x2, and my own IP address is x3: I send a packet from x1 to x2 which is supposed to pass through x3. x1 does not actually send the packet; it is just that x2 thinks the packet came from x1 via x3. Now, if in its response x2 uses the same routing policy (as a matter of courtesy to x1), then all reply packets would be received by x3. Will the destination typically use the same IP address sequence as specified in the routing header, so that packets coming from the server pass through my IP, where I can get the required information? Can we not spoof a TCP connection this way? Is this attack used in practice?

  • Recovery of a Windows DFS partition with shadow-copy versioned files when overwritten with an older modified file

    - by patjbs
    I've noticed the following "bug" on a DFS volume with shadow copies. Pretend you have the following folder/file under shadow copy versioning, going back two weeks:

        MyDirectory
        + MyFile - Modified Date 8/1/2009

    The current date: 8/30/2009. You have another version of MyFile stored elsewhere, with a modified date of 7/1/2009. Copy your other version of MyFile into MyDirectory, overwriting the newest version. I expected that you could roll back to the version that was there when it was last imaged, say on the prior day, and recover your 8/1 version. Not the case. Now, when you go to look at previous versions for the past two weeks, the versioning of that file is entirely lost, and you're stuck with your older 7/1 version. Suckage. Questions: (1) Is this intentional, and if so, what's the rationale? I assume that DFS picks up on the versioning based on the current file, and that's what's wiping out prior versions, but it seems like a fairly stupid/naive way of handling versioning to me. (2) Is there a way to backtrack out of this, without resorting to restoration from other backup media? Thanks!

  • How to figure out how much RAM each prefork thread requires for maximum Wordpress performance on an EC2 small instance

    - by two7s_clash
    Just read "Making WordPress Stable on EC2-Micro". In the "Tuning Apache" section, I can't quite figure out how he comes up with the numbers for his prefork config. He explains how to get the numbers for an average process, which I get. But then:

        Or roughly 53MB per process... In this case, ten threads should be safe. This means that if we receive more than ten simultaneous requests, the other requests will be queued until a worker thread is available. In order to maximize performance, we will also configure the system to have this number of threads available all of the time.

    From 53MB per process, with 613MB of RAM, he somehow gets this config, which I don't get:

        <IfModule prefork.c>
            StartServers         10
            MinSpareServers      10
            MaxSpareServers      10
            MaxClients           10
            MaxRequestsPerChild  4000
        </IfModule>

    How exactly does he get this from 53MB per process, with a 613MB limit?

    Bonus question: from the output below, on a small instance (1.7 GB memory), what would good settings be?

        bitnami@ip-10-203-39-166:~$ ps xav | grep httpd
        1411 ? Ss 0:00  2 0 114928 15436 0.8 /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf
        1415 ? S  0:06 10 0 125860 55900 3.1 /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf
        1426 ? S  0:08 19 0 127000 62996 3.5 /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf
        1446 ? S  0:05 48 0 131932 72792 4.1 /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf
        1513 ? S  0:05  7 0 125672 54840 3.1 /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf
        1516 ? S  0:02  2 0 125228 48680 2.7 /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf
        1517 ? S  0:06  2 0 127004 55796 3.1 /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf
        1518 ? S  0:03  1 0 127196 54208 3.0 /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf
        1531 ? R  0:04  0 0 127500 54236 3.0 /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf
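
    The arithmetic appears to be nothing more than RAM available to Apache divided by per-process resident size, rounded down with some headroom; a sketch of both calculations (the 500 MB reserve for the small instance is an assumption, tune it to whatever MySQL and the OS actually use):

        # micro:  613 MB total, ~53 MB per process
        #         613 / 53 ≈ 11  -> kept at 10 for headroom, spares pinned to match
        # small:  1700 MB total, reserve ~500 MB for OS/MySQL,
        #         RSS in the ps output above is ~55-73 MB, call it 60:
        #         (1700 - 500) / 60 ≈ 20  -> MaxClients 20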

  • Vista logs into black screen, "Application Error 0xc0000022" for explorer.exe

    - by IMAPC
    Whenever I log into a Windows Vista computer (the only account on it), I'm presented with a black screen and a cursor. I can open Task Manager, and from there I can launch applications. It seems to be using Aero Basic (instead of the full Aero, which I had set as default before the problem started). When attempting to launch explorer.exe I get: "explorer.exe - Application Error: The application failed to initialize properly (0xc0000022). Click OK to terminate the application." Every now and then I also get an error along the lines of "the application has failed to start because its side-by-side configuration is incorrect; please see the application event log for more detail." I can boot into safe mode successfully, but I still get the black screen when I log in in regular mode. I've tried most of the suggestions here, but they did not work. I'm backing up everything right now in case the only fix is to reinstall Windows. Has anyone seen this before?

  • Why does Windows 7 always automatically change the input or keyboard language?

    - by B-Ball
    I am wondering why Windows 7 always automatically changes my input or keyboard language. I have a notebook with an integrated QWERTY keyboard, English (United States). When traveling I use that one but, additionally, I have my own much better keyboard at home, which is a QWERTZ keyboard, German (Germany). Thus, being at home, I'd like to use my QWERTZ keyboard. Unfortunately, Windows 7 does not play along. Every time I start up my notebook it is usually set to English (United States), but that's not the problem: in case I use my notebook's QWERTY keyboard, English (United States), that's fine. However, if I start up my notebook and want to use my QWERTZ keyboard, German (Germany), I press ALT + Left Shift to switch from English (United States) to German (Germany), and Windows 7 switches the input language, but only for the program that is currently open. If my input language is set to German (Germany) and I then, e.g., open Notepad, Windows 7 automatically switches my input language back to English (United States). This is very annoying, since I have to change the input language again every time I open a new program. Why doesn't Windows 7 stay with one input language after I change it manually with ALT + Left Shift? Why doesn't the manual change of the input or keyboard language apply to the whole of Windows 7 instead of only the currently open program? Since I have two keyboards with two different layouts, I seriously need both keyboard languages installed. I tried both of the settings below in order to find a solution; currently I am using the first option, two input languages.

    First option, two input languages: www.abload.de/img/19aie.jpg
    Second option, two keyboard languages: www.abload.de/img/2nb4x.jpg

    Thank you very much in advance.

  • Apache2 - mod_rewrite : RequestHeader and environment variables

    - by Guillaume
    I am trying to get the value of the request parameter "authorization" and store it in the request header "Authorization". The first rewrite rule works fine. In the second rewrite rule, the value of $2 does not seem to be stored in the environment variable; as a consequence, the request header "Authorization" is empty. Any idea? Thanks.

        <VirtualHost *:8010>
            RewriteLog "/var/apache2/logs/rewrite.log"
            RewriteLogLevel 9
            RewriteEngine On
            RewriteRule ^/(.*)&authorization=@(.*)@(.*) http://<ip>:<port>/$1&authorization=@$2@$3 [L,P]
            RewriteRule ^/(.*)&authorization=@(.*)@(.*) - [E=AUTHORIZATION:$2,NE]
            RequestHeader add "Authorization" "%{AUTHORIZATION}e"
        </VirtualHost>

    I need to handle several cases because sometimes the parameters are in the path and sometimes they are in the query string, depending on the user. This last case fails; the header value for AUTHORIZATION looks empty:

        # if the query string includes the authorization parameter
        RewriteCond %{QUERY_STRING} ^(.*)authorization=@(.*)@(.*)$
        # keep the value of the parameter in the AUTHORIZATION variable and redirect
        RewriteRule ^/(.*) http://<ip>:<port>/ [E=AUTHORIZATION:%2,NE,L,P]
        # add the value of AUTHORIZATION in the header
        RequestHeader add "Authorization" "%{AUTHORIZATION}e"
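
    One thing that stands out: the first rule carries [L,P], so for a URL that matches it, the second rule is never evaluated and the variable is never set. A hedged sketch that sets the variable in the same rule that proxies (untested; <ip>:<port> as in the original):

        RewriteEngine On
        # capture and proxy in one rule, so the E= flag actually fires
        RewriteRule ^/(.*)&authorization=@(.*)@(.*) http://<ip>:<port>/$1&authorization=@$2@$3 [E=AUTHORIZATION:$2,NE,L,P]
        # the env= guard keeps the header out of requests where nothing matched
        RequestHeader add Authorization "%{AUTHORIZATION}e" env=AUTHORIZATION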

  • Can compressing Program Files save space *and* give a significant boost to SSD performance?

    - by Christopher Galpin
    Considering solid-state disk space is still an expensive resource, compressing large folders has appeal. Thanks to VirtualStore, could Program Files be a case where it might even improve performance?

    Discovery

    In particular I have been reading:

    - SSD and NTFS Compression Speed Increase?
    - Does NTFS compression slow SSD/flash performance?
    - Will somebody benchmark whole disk compression (HD, SSD) please? (may have to scroll up)

    The first link is particularly dreamy, but maybe has its head a little too far in the clouds. The third link has this sexy semi-log graph (logarithmic scale!). Quote (with notes):

        Using highly compressible data (IOmeter), you get at most a 30x performance increase [for reads], and at least a 49x performance DECREASE [for writes].

    Assuming I interpreted and clarified that sentence correctly, this single user's benchmark has me incredibly interested. Although write performance tanks wretchedly, read performance still soars. It gave me an idea.

    Idea: VirtualStore

    It so happens that, thanks to the sanity-saving security features introduced in Windows Vista, write access to certain folders such as Program Files is virtualized for non-administrator processes. This means that in normal (non-elevated) usage, a program or game's attempt to write data to its install location in Program Files (which is perhaps a poor location anyway) is redirected to %UserProfile%\AppData\Local\VirtualStore, somewhere entirely different. Thus, to my understanding, writes to Program Files should primarily occur only when installing an application. This makes compressing it not only a huge source of space gain, but also a potential candidate for performance gain.

    Testing

    The beginning of this post has me a bit timid; it suggests that benchmarking NTFS compression on a whole drive is difficult because turning it off "doesn't decompress the objects". However, it seems to me the compact command is perfectly capable of doing so for both drives and individual folders. Could it be only marking them for decompression the next time the OS reads from them? I need to find the answer before I begin my own testing.
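
    For the record, a hedged sketch of compressing and later decompressing a tree with compact (run elevated; flags per compact /? on Windows 7):

        rem compress Program Files recursively, continuing past errors
        compact /c /s:"C:\Program Files" /i

        rem undo it later: /u decompresses the same tree
        compact /u /s:"C:\Program Files" /i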

  • How to configure postfix for per-sender SASL authentication

    - by Marwan
    I have two Gmail accounts, and I want to configure my local Postfix server as a client that does SASL authentication against smtp.gmail.com:587 with credentials that depend on the sender address. So, let's say my Gmail accounts are user1@gmail.com and user2@gmail.com. If I send a mail with user1@gmail.com in the From header field, Postfix should use the credentials user1@gmail.com:passwd1 to do SASL authentication with the Gmail SMTP server. Similarly with user2@gmail.com, it should use user2@gmail.com:passwd2. Sounds fairly simple. Well, I followed the official Postfix documentation at http://www.postfix.org/SASL_README.html, and I ended up with the following relevant configuration:

        /etc/postfix/main.cf

        smtp_sasl_auth_enable = yes
        smtp_sasl_security_options = noanonymous
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sender_dependent_authentication = yes
        sender_dependent_relayhost_maps = hash:/etc/postfix/sender_relay
        smtp_tls_security_level = secure
        smtp_tls_CAfile = /etc/ssl/certs/Equifax_Secure_CA.pem
        smtp_tls_CApath = /etc/ssl/certs
        smtp_tls_session_cache_database = btree:/etc/postfix/smtp_scache
        smtp_tls_session_cache_timeout = 3600s
        smtp_tls_loglevel = 1
        tls_random_source = dev:/dev/urandom
        relayhost = smtp.gmail.com:587

        /etc/postfix/sasl_passwd

        user1@gmail.com       user1@gmail.com:passwd1
        user2@gmail.com       user2@gmail.com:passwd2
        smtp.gmail.com:587    user1@gmail.com:passwd1

        /etc/postfix/sender_relay

        user1@gmail.com       smtp.gmail.com:587
        user2@gmail.com       smtp.gmail.com:587

    After I was done with the configuration, I ran:

        $ postmap /etc/postfix/sasl_passwd
        $ postmap /etc/postfix/sender_relay
        $ /etc/init.d/postfix restart

    The problem is that when I send a mail from user2@gmail.com, the message ends up at the destination with sender address user1@gmail.com and NOT user2@gmail.com, which means that Postfix always ignores the per-sender configuration and sends the mail using the default credentials (the third line in /etc/postfix/sasl_passwd above). I checked the configuration multiple times and even compared it to the configurations in various blog posts addressing the same issue, but found mine to be more or less the same. So, can anyone point me in the right direction, in case I'm missing something? Many thanks.
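
    A hedged first check is to ask Postfix's own lookup machinery what it resolves for each sender, since per-sender relay and credential selection only kick in when these lookups hit (addresses as in the placeholders above). Note also that Gmail itself rewrites the From address to the authenticated account unless the other address is a registered alias of it, so even a correct per-sender SASL setup can show this exact symptom:

        # should print smtp.gmail.com:587
        postmap -q "user2@gmail.com" hash:/etc/postfix/sender_relay

        # should print the per-sender credentials, not the default ones
        postmap -q "user2@gmail.com" hash:/etc/postfix/sasl_passwd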

  • Ubuntu software stack to mimic Active Directory auth

    - by WickedGrey
    I'm going to have an Ubuntu 11.10 box in a customer's data center running a custom webapp. The customer will not have ssh access to the box, but will need authentication and authorization to access the webapp. The customer needs the option of pointing the webapp either at something that we've installed locally on the machine or at an Active Directory server that they have. I plan on using a standard "users belong to groups; groups have sets of permissions; the webapp requires certain permissions to respond" auth setup. What software stack can I install locally that will allow an easy switch to and from an Active Directory server, while keeping the configuration as simple as possible (both for me and the end customer)? I would like to use as much off-the-shelf software for this as possible; I do not want to be in the business of keeping user passwords secure. I could see handling the user/group/permission relationships myself if there is not a good out-of-the-box solution (but that seems highly unlikely). I will accept answers in the form of links to "here is what you need" pages, but not "here is what Kerberos does", unless that page also tells me whether it's required for my use case (essentially, I know that AD can speak Kerberos, but I can't tell if I need it to, or if I can just use LDAP, or...).

  • Destination host unreachable - Windows Server 2008

    - by Doug
    Hi there, I'm working with a Windows 2008 domain controller, and I'm having issues connecting to internet resources. A small bit of background: this is a 2008 domain controller that has been added into an existing Win 2k domain, with the goal of replacing the older computers. Both of the older controllers can still access internet resources, and so can all the clients. When I ping google.ca from the new server, it does resolve to an IP address, but then says "Reply from 192.168.123.20: Destination host unreachable." I'm really at a loss now. I've checked and rechecked my IP configuration: the default gateway is my router, the primary DNS server is the DC itself, and the secondary DNS is also my router. The DNS server on the domain has a forwarder added for the router as well. Everything on my local network works just fine; all my internal resources can be resolved. For the time being, I've stopped the Firewall service. I'm not 100% used to Server 2008 yet, but it might be a case of just missing something simple. Thanks for your time.
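
    Since the name resolves before the error appears, this looks like routing rather than DNS. A couple of hedged checks to run on the new DC and compare against a working client on the same subnet:

        rem which gateway does the server actually route through?
        route print

        rem where does the path towards the outside stop?
        tracert -d google.ca

        rem compare adapter settings with a known-good client
        ipconfig /all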


  • Nginx dynamic upstream configuration / routing

    - by Dan Sosedoff
    I was experimenting with dynamic upstream configuration for nginx and can't find any good solution to implement upstream configuration from a third-party source like Redis or MySQL. The idea behind it is to have a single configuration file on the primary server and proxy requests to various app servers based on environment conditions. Think of dynamic deployments where you have X servers that are running Y workers on different ports. For instance, I create a new app and deploy. The app manager selects a server, rolls out a worker (Ruby/PHP/Python), and then reports the ip:port to the central database with status "up". At that point, when I go to the given URL, nginx should proxy all requests to the specified ip:port upstream. The whole thing is pretty similar to what Heroku does, except this proof of concept is not supposed to be production-ready; it is mostly for internal needs. The easiest solution I found was using a resolver with a Ruby-based DNS server. It works, and nginx gets the IP address correctly, but the only problem is that you can't define a port number for that IP. A second solution (which I haven't tried yet) is to roll something else as a proxy server, maybe written in Erlang; in this case we would need something else to serve static content. Any ideas how to implement this in a more flexible and stable way? P.S. Some research options: http://openresty.org/#DynamicRoutingBasedOnRedis https://github.com/nodejitsu/node-http-proxy
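
    Since OpenResty is already on the research list, a minimal sketch of per-request routing out of Redis, assuming ngx_lua plus lua-resty-redis and the illustrative key scheme backend:<hostname> -> "10.0.0.5:9001" (untested):

        location / {
            set $backend "";
            rewrite_by_lua '
                local redis = require "resty.redis"
                local red = redis:new()
                red:set_timeout(100)  -- ms
                local ok, err = red:connect("127.0.0.1", 6379)
                if not ok then return ngx.exit(502) end
                local addr, err = red:get("backend:" .. ngx.var.host)
                if not addr or addr == ngx.null then return ngx.exit(502) end
                ngx.var.backend = addr
            ';
            # a variable in proxy_pass is resolved per request;
            # ip:port values need no resolver directive
            proxy_pass http://$backend;
        }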
