Search Results

Search found 539 results on 22 pages for 'alan fietz'.


  • xkb layouts not working (in KDE?) after upgrade from Ubuntu 9.10 to 10.04

    - by Alan
    I customised my keyboard layout in 9.10 by editing the appropriate /usr/share/X11/xkb/symbols/ file. After upgrading to 10.04 I noticed the upgrade had overwritten all my modifications, so I recovered the layout and overwrote the symbol file's base entry. Sadly, KDE (and, presumably, the entire OS) seems to ignore the files altogether. The help files don't mention anything about modifying layouts anyway (and the layout switcher seems to be using setxkbmap, which uses the above path according to its man page), so I'm at a bit of a loss. Do I need to compile this into some other format somehow, or how else do I get it to work?
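
    As a general technique (not specific to the 10.04 upgrade), an edited symbols file can be compiled and loaded into the running session with setxkbmap and xkbcomp, which at least shows whether the edited file itself is valid; the layout name below is an assumption, made up for illustration. Some releases also cache compiled keymaps under /var/lib/xkb, which may be worth checking after an upgrade.

        # Sketch: load a modified symbols file for the current session only.
        mkdir -p ~/.xkb/symbols
        cp /usr/share/X11/xkb/symbols/mylayout ~/.xkb/symbols/
        setxkbmap -layout mylayout -print | xkbcomp -I"$HOME/.xkb" - "$DISPLAY"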

    Read the article

  • Laptop abruptly powers off after few seconds of booting

    - by Alan Mendelevich
    I have a 3-year-old HP Pavilion dv2208 laptop. Recently it started abruptly powering off about 20-30 seconds into the Windows boot sequence after almost every reboot/shutdown. Even if I leave it at the Repair/Start Windows Normally screen, it powers off anyway. The only way I've managed to work around this is to enter the BIOS setup screen and leave it there for at least 10 minutes. I don't know what happens there, but this helps every time. Any ideas for possible fixes that don't involve replacing the motherboard are highly appreciated. P.S.: I've tried resetting the BIOS to defaults, updating to the latest BIOS version, etc. This happens with both Vista and Windows 7.

    Read the article

  • Phpmyadmin mysql foreign key problem

    - by alan
    Hey guys, I'm using phpMyAdmin (PHP & MySQL) and I'm having a lot of trouble linking the tables using foreign keys. I'm getting negative values for the field CountyId (which is the foreign key). However, it is linking to my other table fine and it's cascading fine. So when I go to add data there is a drop-down box for CountyId, and the values look something like this: "-1 1-". Here is my ALTER statement: ALTER TABLE Baronies ADD FOREIGN KEY (CountyId) REFERENCES Counties (CountyId) ON DELETE CASCADE
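
    For context, here is a minimal sketch of the kind of schema that ALTER statement expects; the column lists are assumptions for illustration, not the asker's actual tables. The key detail is that both CountyId columns are indexed and share exactly the same type, e.g. INT UNSIGNED, which also rules out negative key values:

        CREATE TABLE Counties (
            CountyId   INT UNSIGNED NOT NULL AUTO_INCREMENT,
            CountyName VARCHAR(100) NOT NULL,        -- assumed column
            PRIMARY KEY (CountyId)
        ) ENGINE=InnoDB;

        CREATE TABLE Baronies (
            BaronyId   INT UNSIGNED NOT NULL AUTO_INCREMENT,
            BaronyName VARCHAR(100) NOT NULL,        -- assumed column
            CountyId   INT UNSIGNED NOT NULL,
            PRIMARY KEY (BaronyId),
            KEY (CountyId)                           -- index needed for the foreign key
        ) ENGINE=InnoDB;

        ALTER TABLE Baronies
            ADD FOREIGN KEY (CountyId) REFERENCES Counties (CountyId)
            ON DELETE CASCADE;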

    Read the article

  • Using dd command and running out of space while cloning drive to img

    - by Alan Kuras
    I have a problem with drive cloning. I'm using dd on a damaged disk with bad sectors, trying to make an image from it. I'm booting the computer with a live Linux CD. Damaged disk: sda, 146 GB (NTFS). External drive: sdb, 300 GB (NTFS). After running the command below, I'm running out of space on disk sdb.

        dd if=/dev/sda of=/dev/sdb/hdd.img bs=4096 conv=noerror,sync

    The question is: why am I running out of space on disk sdb?
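
    For comparison, a minimal sketch of the usual pattern when the image should end up as a file on the external drive: mount the drive's filesystem first and write the image to a path under that mount point, rather than under the raw /dev/sdb device node. The partition name and mount point below are assumptions:

        mkdir -p /mnt/backup
        mount -t ntfs-3g /dev/sdb1 /mnt/backup   # mount the external NTFS partition
        dd if=/dev/sda of=/mnt/backup/hdd.img bs=4096 conv=noerror,sync

    Note that conv=noerror,sync pads unreadable blocks, so the resulting image is roughly the full size of the source disk (about 146 GB here) regardless of how much of it is actually in use.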

    Read the article

  • How Can We Create Blackbox Logs for Nginx?

    - by Alan Gutierrez
    There's an article out there, Profiling LAMP Applications with Apache's Blackbox Logs, that describes how to create a log that records a lot of detailed information missing from the common and combined log formats. This information is supposed to help you resolve performance issues. As the author notes: "While the common log-file format (and the combined format) are great for hit tracking, they aren't suitable for getting hardcore performance data." The article describes a "blackbox" log format, like a blackbox flight recorder on an aircraft, that gathers information used to profile server performance which is missing from the hit-tracking log formats: keep-alive status, remote port, child processes, bytes sent, etc.

        LogFormat "%a/%S %X %t \"%r\" %s/%>s %{pid}P/%{tid}P %T/%D %I/%O/%B" blackbox

    I'm trying to recreate as much of the format as I can for Nginx, and would like help filling in the blanks. Here's what the Nginx blackbox format would look like; the unmapped Apache directives have question marks after their names.

        access_log blackbox '$remote_addr/$remote_port X? [$time_local] "$request"' 's?/$status $pid/0 T?/D? I?/$bytes_sent/$body_bytes_sent'

    Here's a table of the variables I've been able to map from the Nginx documentation.

        %a      = $remote_addr - The IP address of the remote client.
        %S      = $remote_port - The port of the remote client.
        %X      = ? - Keep-alive status.
        %t      = $time_local - The start time of the request.
        %r      = $request - The first line of the request, containing the method verb, path and protocol.
        %s      = ? - Status before any redirections.
        %>s     = $status - Status after any redirections.
        %{pid}P = $pid - The process ID.
        %{tid}P = N/A - The thread ID, which is not applicable to Nginx.
        %T      = ? - The time in seconds to handle the request.
        %D      = $request_time - The time in milliseconds to handle the request.
        %I      = ? - The count of bytes received including headers.
        %O      = $bytes_sent - The count of bytes sent including headers.
        %B      = $body_bytes_sent - The count of bytes sent excluding headers, but with a 0 for none instead of '-'.

    Looking for help filling in the missing variables, or confirmation that the missing variables are, in fact, unavailable in Nginx.
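
    As a starting point, here is a hedged sketch of how the variables mapped above could be declared as a named log format in the http block of nginx.conf. It uses only the variables already identified in the question, with a literal "-" wherever no Nginx equivalent has been found, and the log path is an assumption:

        log_format blackbox '$remote_addr/$remote_port - [$time_local] "$request" '
                            '-/$status $pid/0 -/$request_time '
                            '-/$bytes_sent/$body_bytes_sent';
        access_log /var/log/nginx/blackbox.log blackbox;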

    Read the article

  • What Are All the Variables Necessary to Create Blackbox Logs for Nginx?

    - by Alan Gutierrez
    There's an article out there, Profiling LAMP Applications with Apache's Blackbox Logs, that describes how to create a log that records a lot of detailed information missing from the common and combined log formats. This information is supposed to help you resolve performance issues. As the author notes: "While the common log-file format (and the combined format) are great for hit tracking, they aren't suitable for getting hardcore performance data." The article describes a "blackbox" log format, like a blackbox flight recorder on an aircraft, that gathers information used to profile server performance which is missing from the hit-tracking log formats: keep-alive status, remote port, child processes, bytes sent, etc.

        LogFormat "%a/%S %X %t \"%r\" %s/%>s %{pid}P/%{tid}P %T/%D %I/%O/%B" blackbox

    I'm trying to recreate as much of the format as I can for Nginx, and would like help filling in the blanks. Here's what the Nginx blackbox format would look like; the unmapped Apache directives have question marks after their names.

        access_log blackbox '$remote_addr/$remote_port X? [$time_local] "$request"' 's?/$status $pid/0 T?/D? I?/O?/B?'

    Here's a table of the variables I've been able to map from the Nginx documentation.

        %a      = $remote_addr - The IP address of the remote client.
        %S      = $remote_port - The port of the remote client.
        %X      = ? - Keep-alive status.
        %t      = $time_local - The start time of the request.
        %r      = $request - The first line of the request, containing the method verb, path and protocol.
        %s      = ? - Status before any redirections.
        %>s     = $status - Status after any redirections.
        %{pid}P = $pid - The process ID.
        %{tid}P = N/A - The thread ID, which is not applicable to Nginx.
        %T      = ? - The time in seconds to handle the request.
        %D      = ? - The time in milliseconds to handle the request.
        %I      = ? - The count of bytes received including headers.
        %O      = ? - The count of bytes sent including headers.
        %B      = ? - The count of bytes sent excluding headers, but with a 0 for none instead of '-'.

    Looking for help filling in the missing variables, or confirmation that the missing variables are, in fact, unavailable in Nginx.

    Read the article

  • tar - exclude certain files

    - by Alan
    I wish to tar all files in a directory and its subdirectories that do NOT end in .jpg, .bmp, .gif, or .png. So, given the following folders and files:

        foo/file.txt
        foo/file.gif
        foo/bar/file
        foo/bar/image.jpg

    I want to tar only the files file.txt and file; file.gif and image.jpg should be ignored. I would also like to maintain the folder structure. My first thought was to pipe the results of the find command, in conjunction with grep -v ".jpg|.gif|.bmp|.png", to a text file, and then use tar's include argument to feed it that list of files. However, the results of the grepped find command also contain directories (in the example above, it would be "foo" and "foo/bar"), and when a directory is fed to tar, it includes all files in that directory, so I would end up with a tar file containing all of the files, which is not what I want. Is there any way to prevent find from outputting directories? Is there a far easier way to approach this?
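
    A minimal sketch of the find-based approach described above, assuming GNU find and GNU tar are available (the archive name is made up): restricting find to regular files with -type f keeps directories out of the list, and a null-delimited pipe keeps filenames with spaces intact. The directory structure is preserved because find emits relative paths:

        find foo -type f \
            ! -iname '*.jpg' ! -iname '*.bmp' ! -iname '*.gif' ! -iname '*.png' \
            -print0 | tar -czf archive.tar.gz --null -T -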

    Read the article

  • Server Config on Github Security Considerations?

    - by Alan Griffith
    What are the security considerations of having my server configs in a repo on GitHub with world read-only access? I know not to include /etc/shadow and other password files. I'd like to share my good ideas and allow others to contribute, but I don't want to roll out a welcome mat for crackers.
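
    Beyond /etc/shadow, here is a sketch of the kind of ignore list such a repo usually starts with; the paths are the usual secret-bearing files under /etc, listed as examples rather than an exhaustive audit (and *.pem will also catch public certificates, which is deliberately conservative):

        # .gitignore (repo rooted at /etc)
        # password hashes
        shadow
        shadow-
        gshadow
        gshadow-
        # private halves of the SSH host keys
        ssh/ssh_host_*_key
        # TLS private key material
        ssl/private/
        *.pem
        *.key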

    Read the article

  • Tool for logging NIC link state events.

    - by Alan B
    Intel NICs have a driver option (in Windows) that will log link state events to the system log, so if the network drops out periodically you can determine that fact. Does anyone know of a simple, generic solution that does this, in other words one that is not part of the driver from a particular manufacturer? I know there are plenty of 'big iron' network monitoring tools out there, but surely there's something really simple that runs as a service in Windows with minimal setup? TIA

    Read the article

  • Windows command line ZIP extraction with checksum or similar?

    - by Alan B
    What I need to do, at the command line, is:

    1. Extract the contents of a ZIP archive.
    2. Change an arbitrary number of the extracted files.
    3. Repeat step 1, but because it is a huge archive, only extract the archived copies of the files changed in step 2, which is much faster.

    Ideally the extraction in step 3 would do something like a checksum on the files on disk and only extract those where the file in the archive has a different checksum. Or maybe compare the date-changed stamp on the disk file. At the minute I use pkzipc.exe, which is the command-line version of PKZIP. I can't see a way to do it with this, though. You can extract files from the archive that are newer than the disk files, but what I want is, in a sense, the opposite of that.

    Read the article

  • How do I analyze an Apache Bench result?

    - by Alan Hoffmeister
    I need some help analyzing a log from Apache Bench:

        Benchmarking texteli.com (be patient)
        Completed 100 requests
        Completed 200 requests
        Completed 300 requests
        Completed 400 requests
        Completed 500 requests
        Completed 600 requests
        Completed 700 requests
        Completed 800 requests
        Completed 900 requests
        Completed 1000 requests
        Finished 1000 requests

        Server Software:
        Server Hostname:        texteli.com
        Server Port:            80

        Document Path:          /4f84b59c557eb79321000dfa
        Document Length:        13400 bytes

        Concurrency Level:      200
        Time taken for tests:   37.030 seconds
        Complete requests:      1000
        Failed requests:        0
        Write errors:           0
        Total transferred:      13524000 bytes
        HTML transferred:       13400000 bytes
        Requests per second:    27.01 [#/sec] (mean)
        Time per request:       7406.024 [ms] (mean)
        Time per request:       37.030 [ms] (mean, across all concurrent requests)
        Transfer rate:          356.66 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:       27    37   19.5     34    319
        Processing:    80  6273 1673.7   6907   8987
        Waiting:       47  3436 2085.2   3345   8856
        Total:        115  6310 1675.8   6940   9022

        Percentage of the requests served within a certain time (ms)
          50%   6940
          66%   6968
          75%   6988
          80%   7007
          90%   7025
          95%   7078
          98%   8410
          99%   8876
         100%   9022 (longest request)

    What can these results tell me? Isn't 27 rps too slow?
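
    One way to read the headline figures, just from the arithmetic of the report itself: with a concurrency level of 200 and a mean of 27.01 requests per second, the mean time per request works out to roughly 200 / 27.01 * 1000 = 7,404 ms, which matches the reported 7,406 ms; and 1,000 requests completed in 37.030 seconds gives the 37.030 ms figure averaged across all concurrent requests. In other words, 27 rps is the server's overall throughput at this load level, while each individual client waited around 7 seconds per request.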

    Read the article

  • Multiple Graphics cards - Non Multi-Display Setup

    - by Alan Thomas
    Is it possible for me to utilize the processing capabilities of multiple graphics cards, even if I'm only using one monitor? I recently bought a new AMD graphics card, fairly top of the line. I also have a two year old, decent nVidia card. They're obviously very different cards. I don't really mind for gaming because my current card can handle most games fine by itself. I'm concerned about video editing programs such as Adobe Premiere and After Effects. Would the system be able to utilize the power of both cards to, say, render a video? I have both drivers still installed on my machine. And because there is only one monitor connected to my current graphics card (AMD) the nVidia would be connected to no display. So I am wondering whether it would or could be utilized in some way to help in processing video. Thanks!

    Read the article

  • Is there an FTP client for Windows 7 with a text-based UI? (not text-prompt based, like ftp.exe)

    - by Alan B
    Is there such a thing as a text-mode FTP client for Windows 7? By 'text-mode' I mean one that runs in a CMD.EXE window as opposed to a Windows GUI application. It also needs to be something along the lines of FileZilla, i.e. menu-driven, as opposed to command-entry clients like NcFTP or indeed the built-in one. Edit: To avoid confusion, what I mean is an application similar to that pictured (ZTreeWin File Manager), which runs from CMD.EXE and uses text characters for its UI, within the CMD.EXE window. The built-in FTP client, and things like NcFTP, offer a prompt at which you issue commands. That's not what I'm looking for.

    Read the article

  • email attachments [closed]

    - by Alan Doolan
    My company currently uses software on a local machine that will take an email from the email server, extract the attachment, rename it, and then add it to a folder on a webserver using FTP. This works well, but they are currently asking if it can be done 'in the cloud', or what they really mean: not locally. Is there anything that would do this on the server itself? I should clarify a bit. The attachments are various reports that are being sent to different email addresses (mostly Google corporate and free accounts). We need the reports to be in a folder on a webserver so that internal pages can take the information in the reports (CSV) and use it on the web pages or add it to a separate database. The key part is that the files need to end up in the particular folders. Though it does work to have a computer running software that takes the files, renames them to the required name, and uploads them to the folder, it relies too heavily on one computer working all the time. This is not something we can depend on at this point. I'll be honest, I'm a web developer and not strong with server systems beyond my standard requirements, so this is beyond me. Though yes, I am aware that my boss is not 100% sure what 'cloud' means but likes the word.
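
    As an illustration of the same fetch-rename-upload workflow running as a scheduled job on the webserver itself, here is a rough Python sketch; the hosts, credentials, folder name, and renaming rule are placeholders, not real settings:

        import email
        import ftplib
        import imaplib

        # Placeholder settings - replace with real hosts, credentials and paths.
        IMAP_HOST, IMAP_USER, IMAP_PASS = "imap.gmail.com", "[email protected]", "app-password"
        FTP_HOST, FTP_USER, FTP_PASS = "ftp.example.com", "ftpuser", "secret"
        REMOTE_DIR = "/reports"

        # Fetch unread messages over IMAP.
        imap = imaplib.IMAP4_SSL(IMAP_HOST)
        imap.login(IMAP_USER, IMAP_PASS)
        imap.select("INBOX")
        _, data = imap.search(None, "UNSEEN")

        # Upload any CSV attachments to the target folder over FTP.
        ftp = ftplib.FTP(FTP_HOST, FTP_USER, FTP_PASS)
        ftp.cwd(REMOTE_DIR)
        for num in data[0].split():
            _, msg_data = imap.fetch(num, "(RFC822)")
            msg = email.message_from_bytes(msg_data[0][1])
            for part in msg.walk():
                filename = part.get_filename()
                if filename and filename.lower().endswith(".csv"):
                    new_name = "report_" + filename      # assumed renaming rule
                    with open(new_name, "wb") as fh:
                        fh.write(part.get_payload(decode=True))
                    with open(new_name, "rb") as fh:
                        ftp.storbinary("STOR " + new_name, fh)
        ftp.quit()
        imap.logout()

    Run from cron (or any scheduler on the webserver), this removes the dependency on a separate desktop machine staying switched on.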

    Read the article

  • Reading email from Emacs VM using a secure server (Gmail)

    - by Alan Wehmann
    This is a question (see below) originally entered at https://answers.launchpad.net/vm/+question/108267; upon the recommendation of Uday Reddy, the question and answers are being moved here. The date of the original question was May 4, 2010. One subject of the question is the use of the program stunnel with View Mail (run within Emacs) on a PC running Microsoft Windows, in order to read email from a server that requires TLS/SSL (Gmail). See the related question, How to configure Emacs smtp for secure server, for using a secure server to send email. The programs discussed are Emacs, VM (ViewMail) and stunnel. The platform under discussion is MS Windows. The original question was asked by usr345 on 2010-04-24: I tried to install VM on Windows, but when I tried to get the mail from Gmail using SSL, an error emerges and Emacs hangs up. Here is the code from .emacs:

        (add-to-list 'load-path (expand-file-name "~/vm/lisp"))
        (add-to-list 'Info-default-directory-list (expand-file-name "~/vm/info"))
        (require 'vm-autoloads)
        (setq vm-primary-inbox "~/mail/inbox.mbox")
        (setq vm-crash-box "~/mail/inbox.crash.mbox")
        (setq vm-spool-files
              `((,vm-primary-inbox
                 "pop-ssl:pop.gmail.com:995:pass:usr345:PASSWORD"
                 ,vm-crash-box)))
        (setq vm-stunnel-program "g:/program files/stunnel/stunnel.exe")

    So, the question: how to configure pop-ssl on Windows?
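
    For reference, a hedged sketch of the standalone-stunnel alternative sometimes used on Windows: run stunnel as a local client tunnel and point the mail reader at the plain local port. The service name and local port below are made up, and whether VM should then use a plain "pop:" spool spec instead of "pop-ssl:" needs checking against the VM manual:

        ; stunnel.conf
        client = yes

        [gmail-pop3]
        accept  = 127.0.0.1:1109
        connect = pop.gmail.com:995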

    Read the article

  • Dedicated server for all network functions?

    - by Alan
    I want to set up a fictional network configuration for a school in my neighborhood. They have about 50 computers altogether: 2x20 in computer rooms for students and another 10 scattered around for various professors. They should all access the internet through a dedicated Linux router machine. What they would like is to have domain names for those three computer groups: Lab1, Lab2 and Professors. The computers in Lab1 and Lab2 should have static IPs and should all be named by number, so there should be 1@Lab1, 2@Lab1, etc. The Professors network should have DHCP, with authentication. Is it an OK solution to have all these functions on a single server (the one which will be used as a router)? Do I have to set up a local DNS for domain naming? Do the host names for the lab computers have to be set on the clients, or can they be assigned automatically?
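
    Running everything on the router box is a common setup. As one hedged example (not a recommendation of a specific tool for this network), dnsmasq can provide both the local DNS names and the DHCP leases from a single config file; the interface name, addresses and MAC addresses below are invented for illustration:

        # /etc/dnsmasq.conf
        interface=eth1                                  # LAN-facing interface
        domain=school.lan                               # local domain suffix
        expand-hosts                                    # append the domain to plain hostnames
        dhcp-range=192.168.1.100,192.168.1.200,12h      # dynamic pool for the Professors group
        # Fixed addresses and numeric names for lab machines:
        dhcp-host=00:11:22:33:44:01,lab1-01,192.168.1.11
        dhcp-host=00:11:22:33:44:02,lab1-02,192.168.1.12

    Hostnames cannot contain '@', so a naming scheme like lab1-01.school.lan stands in for 1@Lab1 here.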

    Read the article

  • How can I restore my "Unknown" partition type, back to NTFS?

    - by Alan
    Lately I've been having trouble restoring my PC after uninstalling GRUB and the Ubuntu install that went with it. Usually I don't encounter any problems when doing this, but this time is different. My Windows XP (NTFS) partition is listed as "Other" in Partition Magic, and as "Unknown" in GParted, rather than as NTFS. How can I gain access to the Windows partition once again? I am more than willing to provide any information, and to run any tests necessary to produce that information, in order to find out what's going on here. My apologies if this is the wrong place to ask such a question; I have heard nothing but good things about Super User and decided to give it a shot. Thanks!

    Read the article

  • raid 0 failure, drives look fine

    - by Alan
    Hello, after a lovely blue screen my Vista 64 machine decided to reconfigure one of my drives to no longer be part of my RAID volume. So now my RAID fails, as it only has one member disk. This happened to me about 6 months ago and I just changed the disk in question back to a RAID disk and all was well. However, I can't seem to find that option in my BIOS or RAID config anymore :( Any help would be appreciated.

    Read the article

  • Change Apache DocumentRoot path to a TrueCrypt partition

    - by Alan C
    Hello, I'm in the middle of moving from Windows to Linux, so I've set up my machine to dual boot Windows 7 and Ubuntu 10.04. I was able to successfully set up Apache on the Ubuntu partition, but I need to move the DocumentRoot, since my websites are on a TrueCrypt partition on another hard drive so that they are accessible from both operating systems. I followed some guides on how to change the path for the DocumentRoot, so I ended up modifying the default file at /etc/apache2/sites-available:

        DocumentRoot /media/truecrypt1/www
        <Directory />
            Options FollowSymLinks
            AllowOverride None
        </Directory>
        <Directory /media/truecrypt1/www/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>

    Those are the lines that I've changed, but now when I go to localhost I always get: Forbidden: You don't have permission to access / on this server. Apache/2.2.14 (Ubuntu) Server at localhost Port 80

    Read the article

  • Wireless range extender throughput extremely slow.

    - by Alan B
    I've got a Belkin 54G router connected to the internet, and a Belkin range extender, model F5D7132. I can get the range extender connected to the parent router's SSID no problem, in repeater mode as opposed to access point mode. My Windows 7 laptop connects to the extender, which has a different SSID, and it connects with the full 5 bars. The issue is that when going through the extender, internet performance is murderously slow; even loading the config pages of the extender or router is painful. When I connect directly to the router, all is well.

    Read the article

  • need assistance with my.cnf - 1500% CPU usage

    - by Alan Long
    I'm running into a few issues with our new database server. It is a HP G8 with 2 INTEL XEON E5-2650 processors and 32GB of ram. This server is dedicated as a MySQL server (5.1.69) for our intranet portal. I have been having issues with this server staying alive - I notice high CPU usage during certain times of day (8% ~ 1500%+) and see very low memory usage (7 ~ 15%) based on using the 'top' command. When the CPU usage passes 1000%, that is when the app usually dies. I'm trying to see what I'm doing wrong with the config file, hopefully one of the experts can chime in and let me know what they think. See below for my.cnf file: [mysqld] default-storage-engine=InnoDB datadir=/var/lib/mysql socket=/var/lib/mysql/mysql.sock #user=mysql large-pages # Disabling symbolic-links is recommended to prevent assorted security risks symbolic-links=0 max_connections=275 tmp_table_size=1G key_buffer_size=384M key_buffer=384M thread_cache_size=1024 long_query_time=5 low_priority_updates=1 max_heap_table_size=1G myisam_sort_buffer_size=8M concurrent_insert=2 table_cache=1024 sort_buffer_size=8M read_buffer_size=5M read_rnd_buffer_size=6M join_buffer_size=16M table_definition_cache=6k open_files_limit=8k slow_query_log #skip-name-resolve # Innodb Settings innodb_buffer_pool_size=18G innodb_thread_concurrency=0 innodb_log_file_size=1G innodb_log_buffer_size=16M innodb_flush_log_at_trx_commit=2 innodb_lock_wait_timeout=50 innodb_file_per_table #innodb_buffer_pool_instances=4 #eliminating double buffering innodb_flush_method = O_DIRECT flush_time=86400 innodb_additional_mem_pool_size=40M #innodb_io_capacity = 5000 #innodb_read_io_threads = 64 #innodb_write_io_threads = 64 # increase until threads_created doesnt grow anymore thread_cache=1024 query_cache_type=1 query_cache_limit=4M query_cache_size=256M # Try number of CPU's*2 for thread_concurrency thread_concurrency = 0 wait_timeout = 1800 connect_timeout = 10 interactive_timeout = 60 [mysqldump] max_allowed_packet=32M [mysqld_safe] log-error=/var/log/mysqld.log pid-file=/var/run/mysqld/mysqld.pid log-slow-queries=/var/log/mysql/slow-queries.log long_query_time = 1 log-queries-not-using-indexes we connect to one database with 75 tables, the largest table has 1,150,000 entries and the second largest has 128,036 entries. I have also verified that our PHP queries are optimized as best as possible. Reference - MySQLtuner: >> MySQLTuner 1.2.0 - Major Hayden <[email protected]> >> Bug reports, feature requests, and downloads at http://mysqltuner.com/ >> Run with '--help' for additional options and output filtering -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.1.69-log [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in InnoDB tables: 420M (Tables: 75) [!!] Total fragmented tables: 75 -------- Security Recommendations ------------------------------------------- [!!] User '[email protected]' has no password set. -------- Performance Metrics ------------------------------------------------- [--] Up for: 1h 14m 50s (8M q [1K qps], 705 conn, TX: 6B, RX: 892M) [--] Reads / Writes: 68% / 32% [--] Total buffers: 19.7G global + 35.2M per thread (275 max threads) [!!] 
    Maximum possible memory usage: 29.1G (93% of installed RAM) [OK] Slow queries: 0% (472/8M) [OK] Highest usage of available connections: 66% (183/275) [OK] Key buffer size / total MyISAM indexes: 384.0M/91.0K [OK] Key buffer hit rate: 100.0% (173 cached / 0 reads) [OK] Query cache efficiency: 96.2% (7M cached / 7M selects) [!!] Query cache prunes per day: 553614 [OK] Sorts requiring temporary tables: 0% (3 temp sorts / 1K sorts) [!!] Temporary tables created on disk: 49% (3K on disk / 7K total) [OK] Thread cache hit rate: 74% (183 created / 705 connections) [OK] Table cache hit rate: 97% (231 open / 238 opened) [OK] Open file limit used: 0% (17/8K) [OK] Table locks acquired immediately: 100% (432K immediate / 432K locks) [OK] InnoDB data size / buffer pool: 420.9M/18.0G -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance MySQL started within last 24 hours - recommendations may be inaccurate Reduce your overall MySQL memory footprint for system stability Increasing the query_cache size over 128M may reduce performance Temporary table size is already large - reduce result set size Reduce your SELECT DISTINCT queries without LIMIT clauses Variables to adjust: *** MySQL's maximum memory usage is dangerously high *** *** Add RAM before increasing MySQL buffer variables *** query_cache_size (> 256M) [see warning above] Thanks in advance for your help!
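
    For what it's worth, the 93% warning follows directly from the numbers MySQLTuner prints: roughly 19.7G of global buffers plus 35.2M of per-thread buffers times max_connections = 275 gives about 19.7G + 9.5G, which is the reported 29.1G "maximum possible memory usage" against 32GB of installed RAM. Shrinking the per-connection buffers (sort_buffer_size, read_buffer_size, join_buffer_size) or lowering max_connections is what moves that figure.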

    Read the article
