Search Results

Search found 20869 results on 835 pages for 'things i hate'.

  • Should one have a separate user account for work use? [closed]

    - by Tyler Wayne
    This question examines the practice of using a separate OS-level user account to divide work use from personal use (specifically, in a creative profession and on a personal computer). I recently left my in-the-flesh job to go to school, but I'm carrying on with the work remotely. I do all of my work on my laptop, and I currently have a separate user account called "Work" where I do exactly that. However, I'm now starting to question that practice:

    - Because my hobby is the same as my job, I want to save notes of the things I learn while working.
    - Because ideas come at any moment, I often want to throw something into my personal task manager's inbox and look at it again later. That task manager is well-suited to handle both the work and personal aspects of my life.
    - Only my personal account has admin rights, but work sometimes requires me to install programs.

    My employer has no preference regarding my choice, so that is a non-issue. My work is essentially freelance web development, so advice given with that in mind will be much appreciated. Back up all opinion with some personal experience, please. Ideally, give a list of pros and cons and then name reasons for your position.

  • Midnight Commander Woes: Output while panels are active, and tab completion.

    - by Eddie Parker
    I'm trying out Midnight Commander (loved Norton back in the day!) and I'm finding two things hard to work out. I'm curious whether there are ways around them:

    1) If the panels are active and I issue a command that has a lot of output, it appears to be lost forever. I.e., if the panels are visible and I cat something (e.g., cat /proc/cpuinfo), that info is gone forever once the panels get redrawn. Is there any way to see the output? I've tried Ctrl-O, but it appears to just give me a fresh sub-shell and wipes the previous output away. Pausing after every invocation is a bit irritating, so I'd rather not use that option.

    2) Tab completion for commands: when mc is running, it consumes the tab character for switching panels. Is there any way to get around this so I can still type in paths and whatnot on the command line?

    I'm running Cygwin, if that matters at all.

  • Enable Soap for PHP 5.5.x on CentOS 6.5

    - by Chris Mancini
    Unfortunately I have to support SOAP on my server for the FedEx web service. I recompiled PHP enabling support, and it works via the CLI but not PHP-FPM. They both point to the same ini file and both show the module loaded, but only the CLI shows the configuration values.

    Output of php -i | grep -i soap:

        Configuration File (php.ini) Path => /usr/local/etc
        Loaded Configuration File => /usr/local/etc/php.ini
        Configure Command => './configure' '--prefix=/usr/local' '--with-config-file-path=/usr/local/etc' '--with-config-file-scan-dir=/usr/local/etc/php_user/' '--enable-fpm' '--enable-ftp' '--enable-libxml' '--enable-mbstring' '--enable-pdo' '--enable-soap' '--enable-sockets=shared' '--enable-zip' '--with-curl' '--with-fpm-group=nginx' '--with-fpm-user=nginx' '--with-freetype-dir=/usr/lib64/' '--with-gd' '--with-jpeg-dir=/usr/lib64/' '--with-libdir=lib64' '--with-mcrypt' '--with-openssl' '--with-pdo-mysql' '--with-pear' '--with-readline' '--with-tidy' '--with-xsl' '--with-zlib' '--without-pdo-sqlite' '--without-sqlite3'
        soap
        Soap Client => enabled
        Soap Server => enabled
        soap.wsdl_cache => 1 => 1
        soap.wsdl_cache_dir => /tmp => /tmp
        soap.wsdl_cache_enabled => 1 => 1
        soap.wsdl_cache_limit => 5 => 5
        soap.wsdl_cache_ttl => 86400 => 86400

    Output from php-fpm phpinfo():

        Configuration File (php.ini) Path    /usr/local/etc
        Loaded Configuration File            /usr/local/etc/php.ini
        SOAP                                 Brad Lafountain, Shane Caraveo, Dmitry Stogov

    Please help, I have tried so many things...
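
    A quick way to narrow this down is to confirm which binary the FPM service is actually running; after a recompile it is common for an init script to keep launching a stale build. A hedged sketch (paths assume the --prefix=/usr/local build above; php-fpm normally accepts the same -m module-listing switch as the CLI, though that is worth verifying on any 5.2-era FPM patch):

        # make sure CLI and FPM both come from the new build
        which php php-fpm
        php -m | grep -i soap                        # CLI modules
        /usr/local/sbin/php-fpm -m | grep -i soap    # FPM binary's modules

        # if the FPM binary does list soap, the running daemon is stale:
        kill -USR2 $(cat /var/run/php-fpm.pid)       # graceful FPM reload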

  • APC Smart UPS network shutdown issue

    - by Rob Clarke
    Here is a bit about our setup:

    - We have 2x Smart-UPS RT 6000 XL units with network management cards
    - We are running PowerChute from a network server
    - PowerChute is connected to the management cards of both UPSs
    - UPSs are set to do a graceful shutdown via PowerChute when the battery duration is under 20 minutes
    - We also have a command file that runs with PowerChute
    - Although our setup is redundant, we do not have an equal load on each server due to APC switches for single power devices

    The problem is that because we do not have an equal load on each server, the batteries drain at different rates. This means that the UPSs both get to the specified low battery duration at completely different times. The problem here is that UPS 1 may have run down to 5 minutes and is in desperate need of initiating a PowerChute shutdown - UPS 2 still has 25 minutes of runtime, so no shutdown is initiated. Consequently UPS 1 goes down, takes all the servers with it, and then shuts down UPS 2 as well! What we need to happen is one of two things:

    1. PowerChute initiates the shutdown as soon as either UPS reaches the 20-minute low battery duration setting, and doesn't wait for both.
    2. The UPS with the heavier load expends its entire battery but does not shut down both UPSs, and lets the load be switched across to the UPS that still has runtime remaining. That way, when the UPS that still has runtime reaches its low battery duration, it can proceed with the graceful shutdown via PowerChute.

    Hope that makes sense, any help is greatly appreciated!
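
    One hedged workaround for the "shut down on either UPS" requirement is a small watchdog, run from cron on each protected server (or any always-on box), that polls both management cards directly over SNMP. A sketch, assuming SNMP v1 with the default public community is enabled on the cards; the OID is the commonly documented upsAdvBatteryRunTimeRemaining from APC's PowerNet MIB and should be verified against your card's firmware before relying on it:

        #!/bin/sh
        # Poll each management card; halt as soon as EITHER one runs low.
        UPS_HOSTS="ups1.example.local ups2.example.local"  # hypothetical names
        THRESHOLD=$((20 * 60 * 100))  # 20 minutes, in TimeTicks (1/100 s)

        for ups in $UPS_HOSTS; do
            # -Oqvt: print the value only, with timeticks as a raw number
            ticks=$(snmpget -v1 -c public -Oqvt "$ups" \
                .1.3.6.1.4.1.318.1.1.1.2.2.3.0)
            case "$ticks" in
                *[!0-9]*|'') continue ;;  # skip unparseable replies
            esac
            if [ "$ticks" -lt "$THRESHOLD" ]; then
                logger "UPS $ups runtime below threshold, shutting down"
                /sbin/shutdown -h now
            fi
        done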

  • Citrix Metaframe/RD - screen refresh weirdness

    - by southof40
    I access a client's W2003 machine (Xen virtualization) using RD over Citrix Metaframe. Everything used to be fine; some weeks ago things turned bad! All is well initially, but after, say, 5 minutes the screen will stop refreshing. Rather weirdly, you can then still proceed in a way, as you can make the screen refresh by getting the RD window to go through a restore/maximise cycle (this is only possible using the ALT-BREAK shortcut, as everything else is locked up). This then allows you to proceed by typing something and hitting ALT-BREAK to see the results. Using menus is just not possible at all. There are some indications that clearing the Java cache between sessions helps, and also that the lockup happens more quickly if you make 'lots of stuff happen' on the screen - for instance, doing a directory listing of a big directory will often cause the lockup to occur. Similarly, opening a dense Excel workbook and then scrolling it will cause the lockup to occur. Any Metaframe veterans out there who recognise these symptoms? I'd be very grateful, as it's driving me nuts.

  • Comparing Nginx+PHP-FPM to Apache-mod_php

    - by Rushi
    I'm running Drupal and trying to figure out the best stack to serve it: Apache + mod_php or Nginx + PHP-FPM. I used ApacheBench (ab) and Siege to test both setups, and I'm seeing Apache performing better. This surprises me a little bit, since I've heard a lot of good things about Nginx + PHP-FPM. My current Nginx setup is pretty much out of the box, and the same goes for PHP-FPM. What optimizations can I make to speed up the Nginx + PHP-FPM combo over Apache and mod_php? In my tests using ab, Apache is outperforming Nginx significantly (higher requests/second and finishing tests much faster). I've googled around a bit, but since I've never used Nginx, PHP-FPM or FastCGI, I don't exactly know where to start. PHP v5.2.13, Drupal v6, latest PHP-FPM and Nginx compiled from source, Apache v2.0.63.

    ApacheBench, Nginx + PHP-FPM:

        Server Software:        nginx/0.7.67
        Server Hostname:        test2.com
        Server Port:            80
        Concurrency Level:      25
        Time taken for tests:   158.510008 seconds
        Complete requests:      1000
        Failed requests:        0
        Write errors:           0
        Requests per second:    6.31 [#/sec] (mean)
        Time per request:       3962.750 [ms] (mean)
        Time per request:       158.510 [ms] (mean, across all concurrent requests)
        Transfer rate:          181.38 [Kbytes/sec] received

    ApacheBench, Apache using mod_php:

        Server Software:        Apache/2.0.63
        Server Hostname:        test1.com
        Server Port:            80
        Concurrency Level:      25
        Time taken for tests:   63.556663 seconds
        Complete requests:      1000
        Failed requests:        0
        Write errors:           0
        Requests per second:    15.73 [#/sec] (mean)
        Time per request:       1588.917 [ms] (mean)
        Time per request:       63.557 [ms] (mean, across all concurrent requests)
        Transfer rate:          103.94 [Kbytes/sec] received
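
    For reference, a minimal sketch of the usual first-pass nginx tuning for a stock FastCGI setup (standard nginx directives; the socket path is an assumption about the FPM pool configuration):

        location ~ \.php$ {
            # a unix socket avoids TCP overhead vs. 127.0.0.1:9000
            fastcgi_pass   unix:/tmp/php-fpm.sock;
            fastcgi_index  index.php;
            fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include        fastcgi_params;
            # larger buffers keep Drupal's big pages out of temp files
            fastcgi_buffer_size 128k;
            fastcgi_buffers 16 16k;
        }

    On the FPM side, the knob that usually matters is the worker count (max_children in the XML config used by the PHP 5.2 FPM patch; pm.max_children in later ini-style releases): with too few workers, ab's 25 concurrent requests queue up and requests/second collapses. An opcode cache such as APC typically dwarfs either change for Drupal.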

  • No digital audio output with Asus Xonar DG

    - by Lunatik
    I've purchased an Asus Xonar DG as a replacement for faulty onboard audio in a Medion 8822, as it has an optical output, which is all I really need to feed my HTPC. I uninstalled the previous drivers/devices, switched the PC off, inserted the Asus card, powered up, disabled the onboard audio in the BIOS, then installed the driver that came on the CD (same version as on Asus' website as of today) and everything went perfectly - no errors. I set the audio devices up in Windows and in the Asus utility (SPDIF enabled, 6-ch audio) as I would expect to see them work, but the only thing is I have no digital audio at all: not from test tones within Windows/the Asus utility, not PCM audio, not Dolby Digital from DVD. Analogue audio is fine. I've uninstalled things and reinstalled a couple of times now, as well as trying almost all combinations of analogue/digital outputs, but can't get it sorted. Does anyone have any tips on how to get this working? This card has just been released, so there isn't much out there to go on. Notes:

    - The light on the toslink port is lit.
    - OS is Vista 32-bit SP2 and all up to date; pretty much a fresh install with almost no 3rd-party applications installed.
    - This page seems to suggest that a digital output device in Windows is not needed with Xonar cards as it was with the previous Realtek, so I have it set to Analog. The only other output device is S/PDIF pass-thru.

  • Vista gets stuck in an endless loop while booting

    - by Mason Wheeler
    I put my laptop to sleep last night, and when I woke up this morning... it didn't. So I tried to reboot, and everything went fine until it got to the Vista splash screen, where it's supposed to display the logon. Here, it hits an endless loop:

    1. Display the cursor with the blue spinny thing that replaced the hourglass, for 5-10 seconds
    2. Display "Please wait..." for about half a second
    3. Screen flashes to black, then quickly back to the Vista splash screen
    4. Go to step 1

    The whole time, my hard drive LED is on almost non-stop. I can boot into Safe Mode... sometimes. Sometimes it'll load all the drivers, then sit there for about 10 minutes, spinning the hard drive non-stop, then reboot with no warning. I tried booting to Last Known Good Configuration. Didn't fix anything. When I've managed to get into Safe Mode, I tried running CHKDSK. Didn't fix anything. I tried running System Restore to each of my last two restore points. Didn't fix anything either time. I ran a virus scan. Didn't find anything. I tried calling the manufacturer (Alienware), only to discover that my warranty expired last freaking week and now I can't get it fixed without paying exorbitant sums of money. I'm about at my wits' end here. Has anyone seen this problem before? Does anyone know how to fix it? Does anyone know a solution that does not involve reinstalling the OS and losing an entire year's worth of program installations, Windows Updates, and configuring and tweaking things until it's working just like I want it to?

  • Tool to modify properties/metadata of a PDF? i.e. Change "Title", "Author"? Sony Reader showing some books as "untitled"

    - by Chris W. Rea
    I own a Sony Reader PRS-600 ebook reader. I bought a ton of Manning Publications ebooks (DRM-free) recently. Many of the books are PDFs since not all the ones I wanted are available in epub format. The problem: Some of the PDF books I purchased have incorrect or missing metadata. Making things worse, the Sony Reader only displays the "Title" from the PDF metadata when displaying book titles in the reader's collection of books! The Reader doesn't display the filename. So, even though I have a PDF informatively named "Windows PowerShell In Action.pdf", it shows up as "untitled" in the Reader. Imagine how useful the Reader's list of book titles becomes when many are just "untitled" or "unnamed document" ! Yes, it is maddening. So – short of expecting the publisher to fix the files or Sony to add a filename-based list instead, I'm looking for a way to fix the PDF metadata. I can view the metadata with Adobe Reader, but it doesn't permit modification of the properties. Leading to: Question: Is there a tool – free, or cheap – and either for PC or Mac, that can modify the properties / metadata of a DRM-free PDF document? I want to correct "Title" and "Author" fields, specifically.
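
    For reference, exiftool (free, available for both PC and Mac) can rewrite a PDF's document-info fields in place; the title and author values below are placeholders, and the filename is the one from the question:

        exiftool -Title="Windows PowerShell in Action" -Author="Author Name" \
            "Windows PowerShell In Action.pdf"

    pdftk's update_info operation is a common free alternative, and either approach leaves a DRM-free PDF readable exactly as before.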

  • apache 2.4 redirect within virtualhost

    - by user129545
    I have a couple of http (port 80) vhosts, and I want any https request made to them to redirect to http. Apparently some things have changed with Apache 2.4 (NameVirtualHost is not used like it was in the past, etc.). Apache 2.4 on CentOS 5.5; this is all using a single IP for all the vhosts below - I don't have multiple IPs on this box. My /usr/local/apache2/conf/extra/httpd-vhosts.conf:

        <VirtualHost www.dom1.com:80>
            ServerName www.dom1.com
            ServerAlias dom1.com
            DocumentRoot /usr/local/apache2/htdocs/dom1/wordpress
        </VirtualHost>

        <VirtualHost webmail.dom2.com:443>
            ServerName webmail.dom2.com
            DocumentRoot /usr/local/apache2/htdocs/webmail
            SSLEngine On
            SSLCertificateFile /usr/local/apache2/webmail.crt
            SSLCertificateKeyFile /usr/local/apache2/webmail.key
        </VirtualHost>

    My /usr/local/apache2/conf/extra/httpd-ssl.conf:

        Listen 443
        SSLPassPhraseDialog builtin
        SSLSessionCache shmcb:/var/cache/mod_ssl/scache(512000)
        SSLSessionCacheTimeout 300
        Mutex default
        SSLRandomSeed startup file:/dev/urandom 512
        SSLRandomSeed connect builtin
        SSLCryptoDevice builtin

    webmail.dom2.com works fine. The problem is that I can connect to https://www.dom1.com, and it serves up the content from webmail.dom2.com. I want any https requests for www.dom1.com on port 443 to simply redirect to http://www.dom1.com on port 80. Thanks
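
    Since the only :443 vhost is webmail's, it catches every SSL request on the shared IP - that is why dom1 serves webmail's content. A sketch of the usual fix, adding a 443 vhost for dom1 that just redirects (standard directives; note the TLS handshake still needs a certificate, and one that doesn't match www.dom1.com will trigger a browser warning before the redirect fires):

        <VirtualHost *:443>
            ServerName www.dom1.com
            ServerAlias dom1.com
            SSLEngine On
            # reusing webmail's cert here is a placeholder/assumption
            SSLCertificateFile /usr/local/apache2/webmail.crt
            SSLCertificateKeyFile /usr/local/apache2/webmail.key
            Redirect permanent / http://www.dom1.com/
        </VirtualHost>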

  • Mac OSX: which folders should ClamXav Sentry watch?

    - by trolle3000
    I'm using ClamXav on my Mac. I've read this, and I am aware of the whole macs-need-no-AV-but-they-do-anyway discussion. I guess that's why I would feel like a real ass if I somehow managed to compromise my system! So ClamXav has been downloaded and ClamXav Sentry set up to start on log-in, but it doesn't really do anything before you tell it to. Specifically, you have to tell it which folders to watch for viruses/vira, so I'm wondering: where are good places to look? Currently it's been set up to watch the following places.

    In the home folder:

    - ~/Downloads
    - ~/Library/Caches
    - ~/Library/Contextual Menu Items
    - ~/Library/Cookies
    - ~/Library/Internet Plug-Ins
    - ~/Library/LaunchAgents

    In my system folder:

    - /Library/Application Support
    - /Library/Caches
    - /Library/Contextual Menu Items
    - /Library/Cookies
    - /Library/Internet Plug-Ins
    - /Library/LaunchAgents
    - /Library/LaunchDaemons
    - /Library/StartupItems

    Basically, this is 100% conjecture: all (most of) the folders have something to do with the internet and things that start up automatically, so I'm guessing that's where vira go. But still, the question: which folders should ClamXav Sentry watch, if any? FYI, I'm not using any mail apps, but please include that in your answer for anyone who might be interested. Cheers!

  • Current wisdom on SQL Server and Hyperthreading?

    - by BradC
    Lots of articles out there (see Slava Oks's original SQL 2000 article and Kevin Kline's SQL 2005 update) recommend disabling hyperthreading on SQL servers, or at least testing your specific workload before enabling it on your servers. This issue is gradually becoming less relevant as true multi-core processors replace hyperthreaded ones, but what's the current wisdom? Does this advice change at all with SQL 2005 64-bit, or SQL 2008, or Windows Server 2008? Ideally, this should be tested in advance in a staging environment, but what about servers that have already made it into production with HT enabled? How can I tell if performance issues we're experiencing might be related to HT? Is there some specific combination of perfmon counters that might point me in that direction, as opposed to all the other things I normally pursue when working on improving SQL performance? Edit: This is especially attractive because of the potential for an across-the-board improvement on some of my high-CPU servers, but the client is going to want to see something concrete that helps me identify which servers really could benefit from disabling hyperthreading. Of course, conventional performance troubleshooting is ongoing, but sometimes any little bit helps.

  • best-practices to block social sites

    - by adopilot
    In our company we have around 100 workstations with internet access, and day by day the situation is getting worse and worse from the perspective of internet access being used for private jobs and wasting time on social sites. Honestly, I am not in favor of blocking sites like Facebook, YouTube, and others like them, but day by day my colleagues are not finishing their tasks, and whenever I look at their monitors they are running IE or Mozilla and chatting and things like that. I'd also like to block YouTube at times when we have very poor internet access speed. Here are my questions:

    1. Do other companies block social sites?
    2. Do I need a dedicated device for that - a hardware firewall, a super expensive router - or can I do it with my existing FreeBSD 6.1 self-made router with two LAN cards and NAT configured to act like a router?

    I was trying to do that using ipfw as a router firewall, but without success. My code looks like:

        ipfw add 25 deny tcp from 192.168.0.0/20 to www.facebook.com
        ipfw add 25 deny udp from 192.168.0.0/20 to www.facebook.
        ipfw add 25 deny tcp from 192.168.0.0/20 to www.dernek.
        ipfw add 25 deny udp from 192.168.0.0/20 to www.dernek.
        ipfw add 25 deny tcp from 192.168.0.0/20 to www.youtube.
        ipfw add 25 deny udp from 192.168.0.0/20 to www.youtube.com
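
    One reason rules like the above misbehave: ipfw resolves a hostname to whatever IPs it has at the moment the rule is added, and large sites change and add addresses constantly. A hedged sketch that re-resolves the names from cron into an ipfw lookup table (tables and this syntax exist in FreeBSD 6.x ipfw; the 'host' output parsing is the fragile assumption):

        #!/bin/sh
        # refresh-blocklist.sh - rebuild ipfw table 1 with current IPs
        ipfw table 1 flush
        for name in www.facebook.com www.youtube.com; do
            for ip in $(host "$name" | awk '/has address/ {print $4}'); do
                ipfw table 1 add "$ip"
            done
        done

        # one-time rule that consults the table (quote to protect the parens):
        # ipfw add 25 deny ip from 192.168.0.0/20 to 'table(1)'

    Even then, IP blocking is whack-a-mole against CDN-hosted sites; a filtering proxy (e.g. Squid with a dstdomain ACL) is what most shops end up with.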

  • Question about Displaying Documents and the CQWP in MOSS 2007

    - by Psycho Bob
    My organization is in the process of converting our intranet over to a SharePoint solution. Part of this intranet will be the movement and organization of all our internal documents. Currently, we have 11 pages of document links, each with its own subheadings. So far I have it set up so that each document has a custom field called "Page" with a checkbox list of all the document pages on the intranet site. On each individual page, I have set up a Content Query Web Part that displays the documents that have the corresponding Page value set (i.e. if a document's Page value has been checked for "HR", it will appear on the HR page). The goal of this setup is to allow the nontechnical personnel who will be responsible for the maintenance of the documents to upload new documents to the document list and note which pages they should appear on, without having to manually update the pages themselves. The problem I am having is that I cannot seem to find a good way to sort the documents into their subheadings once they are on the appropriate page. I could create individual checkboxes for each page/subheading combination, but this would create a list of approximately 50-75 items. Does anyone have any ideas as to how I could accomplish this, either via CQWP or by different means? Goals/requirements of the installation:

    - Allow intranet documents to be maintained by nontechnical personnel
    - Display documents on the appropriate pages without the user having to edit the actual page or web part
    - Denote document page location using user-settable document attributes (if possible)
    - Maintain current intranet organization and workflow
    - Use only one document list without subdirectories

    NOTE: I am aware that this is not the most efficient or elegant way to do things, but these are the requirements I have been given for the project.

  • How can I set up Redmine => Active Directory authentication?

    - by Chris R
    First, I'm not an AD admin on site, but my manager has asked me to try to get my personal Redmine installation to integrate with Active Directory in order to test-drive it for a larger-scale rollout. Our AD server is at host:port ims.example.com:389 and I have a user IMS/me. Right now, I also have a user me in Redmine using local authentication. I have created an Active Directory LDAP authentication method in Redmine with the following parameters:

        Host: ims.example.com
        Port: 389
        Base DN: cn=Users,dc=ims,dc=example,dc=com
        On-The-Fly User Creation: YES
        Login: sAMAccountName
        Firstname: givenName
        Lastname: sN
        Email: mail

    Testing this connection works just fine. I have, however, not successfully authenticated with it. I've created a backup admin user so that I can get back into the me account if I break things, and then I've tried changing me to use the Active Directory credentials. However, once I do, nothing works to log in. I have tried all of these login name options: me, IMS/me, IMS\me. I've used my known domain password, but no joy. So, what setting do I have wrong, or what information do I need to acquire in order to make this work?
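
    Two hedged checks that usually narrow this down. First, Active Directory normally refuses anonymous searches, and Redmine's LDAP form has optional Account/Password fields for a bind user - leaving them blank is a common cause of "connection test works, login fails". Second, verify the search outside Redmine (ldapsearch is standard OpenLDAP tooling; the bind identity format is an assumption - AD accepts user@domain UPNs as well as DOMAIN\user):

        ldapsearch -x -H ldap://ims.example.com:389 \
            -D "me@ims.example.com" -w 'MyDomainPassword' \
            -b "cn=Users,dc=ims,dc=example,dc=com" \
            "(sAMAccountName=me)" sAMAccountName mail

    If that returns no entry, the Base DN or the attribute names (e.g. sN vs. the usual sn) are the place to look.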

  • 403 Forbidden When Using AuthzSVNAccessFile

    - by David Osborn
    I've had a nicely functioning SVN server running on Windows that uses Apache for access. In the original setup every user had access to all repositories, but I recently needed the ability to grant a user access to only one repository. I uncommented the AuthzSVNAccessFile line in my httpd.conf file, pointed it to an access file, and set up the access file, but I get a 403 Forbidden when I go to mydomain.com/svn. If I re-comment the line, things work again. I also made sure I uncommented the LoadModule authz_svn_module line and verified that it points to the correct file. Below are the Location section of my httpd.conf and my svnaccessfile.

    httpd.conf (Location section only):

        <Location /svn>
            DAV svn
            SVNParentPath C:\svn
            SVNListParentPath on
            AuthType Basic
            AuthName "Subversion repositories"
            AuthUserFile passwd
            Require valid-user
            AuthzSVNAccessFile svnaccessfile
        </Location>

    (I want a more complex policy in the long run; I just did this to test the file out.)

    svnaccessfile:

        [svn:/]
        * = rw

    I have also tried just the below for the svnaccessfile:

        [/]
        * = rw

    I also restart the service after each change, just to make sure it is taken.
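
    Two hedged observations. A relative AuthzSVNAccessFile path is resolved against Apache's ServerRoot rather than the conf directory, and an unreadable/missing access file denies everything - so an absolute path is worth trying first. Also, with SVNParentPath, section names are repository-relative: [svn:/] only matches a repository literally named "svn". A sketch of the usual layout (standard authz syntax; names are placeholders):

        [groups]
        team = alice, bob

        # applies to every repository under C:\svn
        [/]
        @team = rw

        # per-repository rule: only dave can touch 'private-repo'
        [private-repo:/]
        * =
        dave = rw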

  • Two hosted servers, one public - VPN?

    - by Aquitaine
    Hello there! Web developer here who has to occasionally wear a system & network admin hat (small company). We currently have a single hosted server running Windows Server 2003 that runs both our web server (IIS/ColdFusion) and our database server (SQL Server 2008). We lock down the SQL Server by allowing only specific IPs to connect to it. Not ideal, but it's worked thus far. We're moving up to two distinct servers, and I want to take the opportunity to 'get things right' and make only the web server face the public. What I need to be able to do is allow only a handful of people to connect to the database server. Rather than using an IP allow list, I'd prefer to use a VPN to let people through, so that access is based on the user and not simply the user's location. I'm leaning toward something like OpenVPN, just so I can stick with Server 2008 Web edition. Do I:

    1. Use the web server as a VPN server and set up the database server to only accept connections from the web server? Is there an extra step required to make connections to, say, db.mycompany.com route through the VPN rather than through a different connection? I'm ignorant of this part of network infrastructure stuff. Or,
    2. Set up a VPN server on the database server as the only public-facing connection on that server, so that there aren't any routing issues to deal with?

    I know this is Network 101 stuff, but I thought I'd ask before just blundering through it, since it could affect the company a bit. Thanks very much!
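
    For the second option, a minimal OpenVPN server sketch (all standard directives; the subnet and file names are assumptions) - the database server then accepts SQL connections only from the tunnel network:

        # server.conf on the database box
        port 1194
        proto udp
        dev tun
        ca   ca.crt
        cert server.crt
        key  server.key
        dh   dh1024.pem
        server 10.8.0.0 255.255.255.0   # VPN clients get 10.8.0.x
        keepalive 10 120
        persist-key
        persist-tun

    Authorized users then reach the database at its tunnel address (10.8.0.1 in this sketch) rather than a public name, which also answers the routing worry in option 1: clients route to whatever subnet the VPN hands out, with no DNS tricks required.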

  • Two audio streams - headphones and speakers

    - by Sylvester
    What I want (this is probably hard for most to answer, as this is a very unique setup) is to have two different streams of audio (this means an audio splitter is not an option, as it would still be only one stream) - one through the headphones and one through the main speakers. I can do the audio rerouting using Virtual Audio Cables; however, the problem is this: I cannot get both headphones AND speakers to play even just one stream, let alone two separate ones. Using "split front and back audio into separate streams" is not an option, as the actual MB F_PANEL is faulty (nothing to do with the case front panel, just so you know - that works fine). So, first things first: I need the system to recognise the headphones as a separate audio device so that Virtual Audio Cables will detect them and allow me to route the necessary audio to the headphones only. I also need to be able to have sound play through speakers and headphones together. What I want to achieve overall is this: have the ENTIRE computer's sounds picked up by VAC and streamed to Line1, then have Line1 stream to the headphones. That way, whatever's being streamed is heard through the headphones, while the entire system's sounds (including those not streamed) are played through the speakers.

  • How do I get a Wireless N PCI card to connect to a wireless G router?

    - by Andy
    I'm having some problems setting up a new wireless PCI card on a WinXP SP3 PC. I know that the router is configured correctly. It is a Linksys WRT54GL, using 802.11b/g. Security mode is WPA2 Personal with TKIP+AES encryption. I am able to connect to this fine using my laptop (first-gen MacBook with an 802.11b built-in card). The new PCI card is also Linksys, but it supports 802.11n. The card seems to be installed OK (Windows sees it fine, doesn't list any errors in Device Manager), however when it scans for available wireless networks it can't find my wireless network (the router is set to broadcast the SSID). I tried to enter the network SSID manually, but that didn't seem to help. I chose WPA2-PSK for network authentication. The only options for encryption are TKIP or AES - I've tried both, and neither worked. I am sure that I typed in my wireless key correctly. At this point, I don't think the problem is with encryption, but something else. It almost seems like I need to switch the wireless card into g mode, but I haven't found a way to do that (if that is even possible/necessary - I thought n was fully backwards compatible with g). Also, the PC is in the same room as the router and my laptop, so I don't think it is an interference issue. Any ideas what I'm doing wrong? I'm running out of things to try at this point. :(

  • Linux: prevent VNC from swapping like mad

    - by Weezy
    I'm accessing a MacMini (with Mac OS X 10.4) from my Linux machine using VNC, and there's an issue that is driving me crazy... My Linux machine has 4 GB of RAM, I run a lot of various apps on it, and I've got no issue at all: it's all snappy, and I don't hear the hard disk swapping/reading/writing too often. With VNC, though, the hard disk is swapping like mad when I'm moving things on the OS X desktop. So I was thinking of creating a ramdisk and forcing the temp VNC files to go into that ramdisk, but the problem is I can't find any temp files. I've attempted this:

        #!/bin/bash
        while [ true ]
        do
            lsof | grep vnc
        done

    and eyeball-parsed the output to try to find some temp file: no luck. The VNC version I'm using is this one:

        $ vncviewer -version
        VNC Viewer Free Edition 4.1.1 for X - built Jan 30 2009 19:33:16
        Copyright (C) 2002-2005 RealVNC Ltd.

    No matter how much data is coming from the Mac, there should be plenty of memory (4 GB of RAM), so there's really no reason to swap like crazy. This is driving me mad. Any help as to how I could solve this problem is most welcome, because this is literally driving me nuts.
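
    If the suspicion is temp-file traffic, one hedged experiment is to give the viewer a tmpfs-backed TMPDIR (the mount commands are standard Linux; whether this viewer honours TMPDIR is exactly the assumption being tested, and the hostname is a placeholder):

        sudo mkdir -p /mnt/vncram
        sudo mount -t tmpfs -o size=512m tmpfs /mnt/vncram
        TMPDIR=/mnt/vncram vncviewer macmini.local:0

    If swapping continues unchanged, the pressure is more likely the viewer's framebuffer allocations than temp files; running vmstat 1 during a desktop drag will show whether it is really swap I/O or just cache churn.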

  • How do I get the F1-F12 keys to switch screens in gnu screen in cygwin when connecting via SSH?

    - by Mikey
    I'm connecting to a desktop running Cygwin via SSH from the terminal app in Mac OS X. I have already started screen on the Cygwin side and can connect to it over the SSH session. Furthermore, I have the following in the .screenrc file:

        bindkey -k k1 select 1  # F1 = screen 1
        bindkey -k k2 select 2  # F2 = screen 2
        bindkey -k k3 select 3  # F3 = screen 3
        bindkey -k k4 select 4  # F4 = screen 4
        bindkey -k k5 select 5  # F5 = screen 5
        bindkey -k k6 select 6  # F6 = screen 6
        bindkey -k k7 select 7  # F7 = screen 7
        bindkey -k k8 select 8  # F8 = screen 8
        bindkey -k k9 select 9  # F9 = screen 9
        bindkey -k F1 prev      # F11 = prev
        bindkey -k F2 next      # F12 = next

    However, when I start multiple windows in screen and attempt to switch between them via the function keys, all I get is a beep. I have tried various settings for $TERM (e.g. ansi, cygwin, xterm-color, vt100) and they don't really seem to affect anything. I have verified that the terminal app is in fact sending the escape sequence for the function key that I'm expecting, and that my bash shell (running inside screen) is receiving it. For example, for F1 it sends the following (hexdump is a Perl script I wrote that takes STDIN in binmode and outputs it as a hexadecimal/ASCII dump):

        % hexdump
        [press F1 and then hit ^D to terminate input]
        00000000: 1b4f50    .OP

    If things were working correctly, I don't think bash should receive the escape sequence, because screen should have caught it and turned it into a command. How do I get the function keys to work?
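
    The -k k1 form depends on screen's terminfo entry for $TERM agreeing with what the terminal actually sends. Since the hexdump shows F1 arriving as ESC O P (1b 4f 50), a hedged workaround is to bind the raw sequences directly; screen accepts octal escapes in bindkey strings, and F1-F4 in this xterm-style encoding are \033OP through \033OS:

        bindkey "\033OP" select 1  # F1
        bindkey "\033OQ" select 2  # F2
        bindkey "\033OR" select 3  # F3
        bindkey "\033OS" select 4  # F4

    F5 and up use a different encoding (e.g. ESC [ 1 5 ~ for F5); repeat the hexdump trick per key and bind whatever actually arrives.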

  • Best way to 'harden' embedded ext4 file server against unexpected loss of power?

    - by Jeremy Friesner
    Hi all, First, a little background: my company makes an audio streaming device that is a headless, rack-mounted Linux box with a couple of SSDs attached. Each SSD is formatted with ext4. The users can connect to the system using Samba/CIFS to upload new audio files or access existing ones. There is also custom software for streaming out audio over the network. This is all fine. The only problem is that the users are audio people, not computer people, and see the system as a 'black box', not as a computer. Which means that at the end of the day, they aren't going to ssh in to the box and enter "/sbin/shutdown -h"; they are just going to cut power to the rack and leave, and expect things to still work properly the next day. Since ext4 has journalling, journal checksumming, etc, this mostly works. The only time it doesn't work is when someone uploads a new file via Samba and then cuts power to the system before the uploaded data has been fully flushed to the disk. In that case, they come in the next day and find that their new file has been truncated or is missing entirely, and are unhappy. My question is, what is the best way to avoid this problem? Is there a way to get smbd to call "sync" at the end of every upload? (Performance on uploads isn't so important, since they only happen occasionally). Or is there a way to tell ext4 to automatically flush within a few seconds of any change to a file? (Again, performance can be sacrificed for safety here) Should I set a particular write-ordering mode, activate barriers, etc?
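
    On the Samba side there are documented knobs for exactly this trade-off, and ext4's journal commit interval can be shortened at mount time. A sketch, assuming a share named [audio] on one SSD (device, mount point, and share name are placeholders; the options themselves are standard smb.conf and ext4 settings):

        # smb.conf - per share: honour client sync requests, and fsync
        # every write before acknowledging it (sync always requires
        # strict sync to be enabled)
        [audio]
            path = /mnt/ssd1
            strict sync = yes
            sync always = yes

        # /etc/fstab - commit the journal every 2 s instead of the 5 s default
        /dev/sda1  /mnt/ssd1  ext4  defaults,barrier=1,commit=2  0  2

    sync always costs throughput on every write, which matches the "performance can be sacrificed for safety" constraint; a heavier-handed alternative at the mount level is data=journal, which journals file data as well as metadata.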

  • Vmware Workstation downloads as a txt file?

    - by George Mauer
    I just went to the VMware website because I want to try Workstation over VirtualBox. I signed up for a Workstation trial and clicked download on the 64-bit Linux version. What downloaded is a 320-megabyte txt file: VMware-Workstation-Full-8.0.2-591240.x86_64.txt. What gives? Is anyone familiar with this pattern of delivering software? How do I run it? Here is the beginning of that file:

        #!/usr/bin/env bash
        #
        # VMware Installer Launcher
        #
        # This is the executable stub to check if the VMware Installer Service
        # is installed and if so, launch it. If it is not installed, the
        # attached payload is extracted, the VMIS is installed, and the VMIS
        # is launched to install the bundle as normal.

        # Architecture this bundle was built for (x86 or x64)
        ARCH=x64

        if [ -z "$BASH" ]; then
            # $- expands to the current options so things like -x get passed through
            if [ ! -z "$-" ]; then
                opts="-$-"
            fi
            # dash flips out if $opts is quoted, so don't.
            exec /usr/bin/env bash $opts "$0" "$@"
            echo "Unable to restart with bash shell"
            exit 1
        fi

        set -e

        ETCDIR=/etc/vmware-installer
        OLDETCDIR="/etc/vmware"

        ### Offsets ###
        # These are offsets that are later used relative to EOF.
        FOOTER_SIZE=52

        # This won't work with non-GNU stat.
        FILE_SIZE=`stat --format "%s" "$0"`
        offset=$(($FILE_SIZE - 4))
        MAGIC_OFFSET=$offset
        offset=$(($offset - 4))
        CHECKSUM_OFFSET=$offset
        offset=$(($offset - 4))
        VERSION_OFFSET=$offset
        offset=$(($offset - 4))
        PREPAYLOAD_OFFSET=$offset
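
    That header shows the download is really VMware's self-extracting .bundle installer that simply arrived with the wrong extension. The standard way to run these bundles is to rename and execute (sudo assumed for a system-wide install):

        f=VMware-Workstation-Full-8.0.2-591240.x86_64
        mv "$f.txt" "$f.bundle"
        chmod +x "$f.bundle"
        sudo ./"$f.bundle"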

  • Total newb having SSH and remote MySQL access problems

    - by kscott
    I don't often work with Linux or need to SSH into remote MySQL databases, so pardon my ignorance. For months I had been using the HeidiSQL client application to remotely access a MySQL database. Today two things happened: the DB moved to a new server, and I updated HeidiSQL. Now I cannot log in to the MySQL server; when attempting, I get this message from Heidi:

        SQL Error (2003) in statement #0: Can't connect to MySQL server on 'localhost' (10061)

    If I use PuTTY, I can connect to the server and get MySQL access through the command line, including fetching data from the DB. I assume this means my credentials and address are correct, but I do not understand why putting those same details into HeidiSQL's SSH tunnel info won't work. I also downloaded MySQL Workbench and attempted to set up a connection through that client, and got this message:

        Cannot Connect to Database Server
        Your connection attempt failed for user 'myusername' from your host to server at localhost:3306:
        Lost connection to MySQL server at 'reading initial communication packet', system error: 0
        Please:
        1 Check that mysql is running on server localhost
        2 Check that mysql is running on port 3306 (note: 3306 is the default, but this can be changed)
        3 Check the myusername has rights to connect to localhost from your address (mysql rights define what clients can connect to the server and from which machines)
        4 Make sure you are both providing a password if needed and using the correct password for localhost connecting from the host address you're connecting from

    From Googling around, I see that it could be related to the MySQL bind-address, but I am a third-party subcontractor with no access to the MySQL settings of this box, and the system admin is assuring me that I'm an idiot and need to figure it out on my end. This is completely possible, but I don't know what else to try.

    Edit 1 - The client settings I am using. In Heidi and MySQL Workbench I am using the following:

        SSH host + port: theHostnameOfTheRemoteServer.com:22  {the same host I can PuTTY to}
        SSH Username: mySSHusername  {the same user name I use for my PuTTY connection}
        SSH Password: mySSHpassword  {the same password for the PuTTY connection}
        Local port: 3307
        MySQL host: theHostnameOfTheRemoteServer.com
        MySQL User: mySQLusername  {which I can connect with once in with PuTTY}
        MySQL Password: mySQLpassword  {which works once in with PuTTY}
        Port: 3306
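
    One hedged observation about those settings: with an SSH tunnel, the "MySQL host" is resolved on the remote machine, so it usually needs to be 127.0.0.1 rather than the public hostname (MySQL on the new server may only listen on localhost, which would also explain why PuTTY plus the command line works while a hostname-based connection fails). The GUI tunnel is equivalent to this standard OpenSSH forward, which is easy to test by hand:

        # forward local port 3307 to MySQL as seen from the remote box
        ssh -N -L 3307:127.0.0.1:3306 mySSHusername@theHostnameOfTheRemoteServer.com

        # then, in another shell, connect through the tunnel
        mysql -h 127.0.0.1 -P 3307 -u mySQLusername -p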

  • Routing a single request through multiple nginx backend apps

    - by Jonathan Oliver
    I wanted to get an idea if anything like the following scenario was possible: Nginx handles a request and routes it to some kind of authentication application where cookies and/or other kinds of security identifiers are interpreted and verified. The app perhaps makes a few additions to the request (appending authenticated headers). Failing authentication returns an HTTP 401. Nginx then takes the request and routes it through an authorization application which determines, based upon identity and the HTTP verb (put, delete, get, etc.) and URL in question, whether the actor/agent/user has permission to performed the intended action. Perhaps the authorization application modifies the request somewhat by appending another header, for example. Failing authorization returns 403. (Wash, rinse, repeat the proxy pattern for any number of services that want to participate in the request in some fashion.) Finally, Nginx routes the request into the actual application code where the request is inspected and the requested operations are executed according to the URL in question and where the identity of the user can be captured and understood by the application by looking at the altered HTTP request. Ideally, Nginx could do this natively or with a plugin. Any ideas? The alternative that I've considered is having Nginx hand off the initial request to the authentication application and then have this application proxy the request back through to Nginx (whether on the same box or another box). I know there are a number of applications frameworks (Django, RoR, etc.) that can do a lot of this stuff "in process", but I was trying to make things a little more generic and self contained where different applications could "hook" the HTTP pipeline of Nginx and then participate in, short circuit, and even modify the request accordingly. If Nginx can't do this, is anyone aware of other web servers that will perform in the manner described above?
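
    Nginx can do one hop of this natively via its auth_request subrequest module (bundled in newer releases, historically available as an add-on); chaining several decision services generally means one auth subrequest per concern. A minimal sketch with standard directives (the upstream names are placeholders):

        location /api/ {
            auth_request /_auth;          # a 401/403 from the subrequest short-circuits
            proxy_pass http://app_backend;
        }

        location = /_auth {
            internal;
            proxy_pass http://auth_backend;
            proxy_pass_request_body off;  # the auth decision needs headers, not the body
            proxy_set_header Content-Length "";
            proxy_set_header X-Original-URI $request_uri;
        }

    The subrequest cannot rewrite the client request wholesale, but headers can be carried back via auth_request_set, which covers the "append an authenticated header" step described above.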
