Search Results

Search found 36132 results on 1446 pages for 'line height'.


  • Unable to install mod_wsgi on CentOS 5.5 VPS...

    - by jasonaburton
    I am trying to install mod_wsgi on my VPS, but it won't work. This is what I am doing:

        wget http://modwsgi.googlecode.com/files/mod_wsgi-2.5.tar.gz
        tar xzvf mod_wsgi-2.5.tar.gz
        cd mod_wsgi-2.5
        ./configure --with-python=/opt/python2.5/bin/python

    After I run the above command, I get this error:

        checking for apxs2... no
        checking for apxs... no
        checking Apache version... ./configure: line 1298: apxs: command not found
        ./configure: line 1298: apxs: command not found
        ./configure: line 1299: /: is a directory
        ./configure: line 1461: apxs: command not found
        configure: creating ./config.status
        config.status: creating Makefile
        config.status: error: cannot find input file: Makefile.in

    Through some research I've discovered that I need to modify my command:

        ./configure --with-apxs=/usr/local/apache/bin/apxs \
                    --with-python=/usr/local/bin/python

    But /usr/local/apache/ doesn't exist, or so it tells me. If it doesn't exist, how do I create it with all the files needed? Or, if Apache is located elsewhere on my VPS, where would it be? I should also mention that I installed Apache before this entire deal with:

        yum install httpd

    so I assumed that was all I needed, but apparently not. (I am very new at all this server administration stuff, so please be gentle.)

    EDIT: This is the tutorial I have been using to get this all set up: http://binarysushi.com/blog/2009/aug/19/CentOS-5-3-python-2-5-virtualevn-mod-wsgi-and-mod-rpaf/ I got stuck at the heading "Installing mod_wsgi". Thanks for any help!
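
    A likely way forward, sketched below: on CentOS the apxs tool ships in the httpd-devel package rather than in httpd itself, so installing that package and pointing configure at the resulting apxs binary usually clears this error. The paths shown are common CentOS defaults and may differ on your VPS.

        # Install the Apache development tools that provide apxs
        yum install httpd-devel

        # Confirm where apxs landed (commonly /usr/sbin/apxs on CentOS)
        which apxs

        # Re-run configure pointing at the real apxs and your Python build
        ./configure --with-apxs=/usr/sbin/apxs \
                    --with-python=/opt/python2.5/bin/python
        make && make install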

    Read the article

  • Errors to do with modules when using net-snmp utils

    - by bob
    I was using the net-snmp packages that come with my Linux distro (version 5.3.2.2), but wanted to do some work with the latest version of net-snmp (5.7), so I tried compiling and installing the new source. It seemed to work OK, but now I'm getting a load of errors when I use the net-snmp utils (snmpget, snmpset, snmpwalk, etc.). For example:

        $ snmptranslate -On SNMPv2-MIB::system.sysDescr
        MIB search path: /home/me/.snmp/mibs:/usr/local/share/snmp/mibs
        Cannot find module (SNMPv2-SMI): At line 6 in /usr/local/share/snmp/mibs/SNMPv2-MIB.txt
        Cannot find module (SNMPv2-TC): At line 9 in /usr/local/share/snmp/mibs/SNMPv2-MIB.txt
        Cannot find module (SNMPv2-MIB): At line 9 in (none)
        : <a lot of similar lines> :
        Cannot find module (NET-SNMP-VACM-MIB): At line 9 in (none)
        .1.3.6.1.2.1.1.1

    From this I assumed I was perhaps missing MIBs from the 'MIB search path', so I looked at the first error, 'Cannot find module (SNMPv2-SMI)'. However, the module seems to be in the right directory:

        $ ls /usr/local/share/snmp/mibs/*SNMPv2-SMI*
        /usr/local/share/snmp/mibs/SNMPv2-SMI.txt

    And the same is true for the others in the list, so I'm wondering if anybody knows why it might not be finding the modules even though they seem to be in the search path?
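
    A couple of diagnostics that may narrow this down (a sketch, assuming the 5.7 build installed under /usr/local): the MIB parser will show what it actually reads when a debug token is enabled, and the MIBDIRS/MIBS environment variables override whatever search path and module list were compiled in or picked up from an old snmp.conf left over from the distro packages.

        # Trace which directories and MIB files the parser loads
        snmptranslate -Dparse-mibs -On SNMPv2-MIB::system.sysDescr 2>&1 | less

        # Force the search path and module list explicitly and retry
        export MIBDIRS=/usr/local/share/snmp/mibs
        export MIBS=ALL
        snmptranslate -On SNMPv2-MIB::system.sysDescr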

    Read the article

  • Pattern matching gnmap fields with SED

    - by Ovid
    I am testing the regex needed for creating field extraction with Splunk for nmap, and think I might be close... Example full line:

        Host: 10.0.0.1 (host) Ports: 21/open|filtered/tcp//ftp///, 22/open/tcp//ssh//OpenSSH 5.9p1 Debian 5ubuntu1 (protocol 2.0)/, 23/closed/tcp//telnet///, 80/open/tcp//http//Apache httpd 2.2.22 ((Ubuntu))/, 10000/closed/tcp//snet-sensor-mgmt/// OS: Linux 2.6.32 - 3.2 Seq Index: 257 IP ID Seq: All zeros

    I've used underscore "_" as the s-command delimiter because it makes the expression a little easier to read:

        root@host:/# sed -n -e 's_\([0-9]\{1,5\}\/[^/]*\/[^/]*\/\/[^/]*\/\/[^/]*\/.\)_\n\1_pg' filename

    The same regex with the escape characters removed:

        root@host:/# sed -n -e 's_\([0-9]\{1,5\}/[^/]*/[^/]*//[^/]*//[^/]*/.\)_\n\1_pg' filename

    Output:

        ...
        Host: 10.0.0.1 (host) Ports: 21/open|filtered/tcp//ftp///, 22/open/tcp//ssh//OpenSSH 2.0p1 Debian 2ubuntu1 (protocol 2.0)/, 23/closed/tcp//telnet///, 80/open/tcp//http//Apache httpd 5.4.32 ((Ubuntu))/, 10000/closed/tcp//snet-sensor-mgmt/// OS: Linux 9.8.76 - 7.3 Seq Index: 257 IPID Seq: All zeros
        ...

    As you can see, the pattern matching appears to be working, although I am unable to:

    1 - match on both of the field terminators (the comma and whitespace/tab), so the last field carries unwanted text (in this case, the OS and TCP timing info); and

    2 - remove any of the unnecessary data, i.e. print only the matching pattern. It is actually printing the whole line, and if I remove sed's -n flag the remaining file contents are also printed. I can't seem to find a way to print only the matched regex.

    Being fairly new to sed and regex, any help or pointers are greatly appreciated!
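
    One way to get just the matched fields, sketched below with a different toolchain than a single sed expression: pull out the Ports: section, trim the trailing " OS: ..." information, then split the entries on commas. Each port field then lands on its own line with nothing else around it (adjust if your gnmap lines vary).

        grep -oE 'Ports: .*' filename \
          | sed -e 's/ OS:.*$//' -e 's/^Ports: //' \
          | tr ',' '\n' \
          | sed 's/^ //'

    Alternatively, grep -oE with your existing pattern prints only the matching text, one match per line, which covers the "print only the matched regex" half of the problem on its own.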

    Read the article

  • Error importing large MySQL dump file which includes binary BLOBs in Windows

    - by Daniel Magliola
    I'm trying to import a MySQL dump file, which I got from my hosting company, into my Windows dev machine, and I'm running into problems. I'm importing it from the command line, and I'm getting a very weird error:

        ERROR 2005 (HY000) at line 3118: Unknown MySQL server host '+?*á±dÆ-N+Æ·h^ye"p-i+ Z+-$?P+Y.8+|?+l8/l¦¦î7æ¦X¦XE.ºG[ ;-ï?éµ?º+¦¦].?+f9d릦'+ÿG?-0à¡úè?-?ù??¥'+NÑ' (11004)

    I'm attaching the screenshot because I'm assuming the binary data will get lost. I'm not exactly sure what the problem is, but two potential issues are the size of the file (2 GB), which is not insanely large but not trivially small either, and the fact that many of these tables have JPG images in them (which is why the file is 2 GB, for the most part). Also, the dump was taken on a Linux machine and I'm importing it into Windows; I'm not sure if that could add to the problems (I understand it shouldn't).

    That binary garbage is why I think the images in the file might be a problem, but I've been able to import similar dumps from the same hosting company in the past, so I'm not sure what the issue might be. Also, trying to look into this file (and line 3118 in particular) is kind of impossible given its size (I'm not really handy with Linux command-line tools like grep, sed, etc.). The file might be corrupted, but I'm not exactly sure how to check it. What I downloaded was a .gz file, which I "tested" with WinRAR and it says it looks OK (I'm assuming gz has some kind of CRC). If you can think of a better way to test it, I'd love to try that.

    Any ideas what could be going on, or how to get past this error? I'm not very attached to the data in particular, since I just want this as a copy for dev, so if I have to lose a few records I'm fine with that, as long as the schema remains perfectly sound. Thanks! Daniel
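
    Two low-effort checks worth trying, sketched below (the packet size is just an example value): gzip itself can verify the archive's integrity, and a too-small max_allowed_packet is a common reason huge BLOB-bearing INSERT statements get mangled partway through an import, which can surface as garbage like the "host" string above.

        # Verify the downloaded archive before blaming its contents
        gzip -t dump.sql.gz

        # Import with a larger client packet limit and an explicit charset
        # (the server-side max_allowed_packet may need raising too, e.g. in my.ini)
        mysql --max_allowed_packet=512M --default-character-set=utf8 -u root -p mydb < dump.sql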

    Read the article

  • PHP include() through HTTP makes Apache time out

    - by Adam Interact
    I have a problem with ExpressionEngine 2 after moving from an old server to WHM/cPanel running on CentOS 6.4. Simple test code to reproduce the issue:

        <?php
        $protocol = strpos(strtolower($_SERVER['SERVER_PROTOCOL']),'https') === FALSE ? 'http' : 'https';
        $host = $_SERVER['HTTP_HOST'];
        include($protocol . '://' . $host . '/header.html');
        ?>
        <p> Main text...</p>
        <?php
        include($protocol . '://' . $host . '/footer.html');
        ?>

    where header.html looks like:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <title>Untitled Document</title>
        </head>
        <body>

    and footer.html looks like:

        </body>
        </html>

    This makes Apache time out:

        Warning: include(http://www.domain.com/header.html) [function.include]: failed to open stream: Connection timed out in /home/domain/public_html/test/index.php on line 5
        Warning: include() [function.include]: Failed opening 'http://www.domain.com/header.html' for inclusion (include_path='.:/usr/lib/php:/usr/local/lib/php') in /home/domain/public_html/test/index.php on line 5
        Main text...
        Warning: include(http://www.domain.com/footer.html) [function.include]: failed to open stream: Connection timed out in /home/domain/public_html/test/index.php on line 12
        Warning: include() [function.include]: Failed opening 'http://www.domain.com/footer.html' for inclusion (include_path='.:/usr/lib/php:/usr/local/lib/php') in /home/domain/public_html/test/index.php on line 12

    Any clue what could be wrong with the Apache or PHP configuration? Thanks
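
    A diagnostic sketch that may help isolate this (substitute your real domain): an include() over http:// only works if the web server can make an HTTP request back to its own public hostname, and that loop-back path is exactly what tends to break after a move to a WHM/cPanel box (firewall rules, or the hostname resolving to an address the server cannot reach from inside). If the commands below hang the same way the page does, the problem is network reachability rather than PHP; switching the includes to filesystem paths (e.g. via $_SERVER['DOCUMENT_ROOT']) sidesteps the round-trip entirely.

        # Run these on the new server itself
        curl -v http://www.domain.com/header.html   # hanging here = the same timeout PHP sees
        getent hosts www.domain.com                 # which address does the server itself resolve?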

    Read the article

  • Bandwidth Suggestion

    - by Campo
    I have been asked to analyze the bandwidth usage of a company and make a recommendation for upgrading their Internet connection(s). Here is the layout:

        3 DSL lines (6 down, 1 up each) into a load balancer, out to the office's network.
        30 VOIP phones running on a T1 (1.5 down, 1.5 up).

    The users at the company are heavy uploaders. My suspicion is that the slowdown is caused by multiple people uploading at once, leaving others unable to get requests out for even simple HTTP traffic.

    My initial idea is to get them a fiber line with 10 down and 10 up. What do others think of this plan? Will that be enough to carry their network traffic? What do I do about the VOIP line afterward? The fiber is expensive, and I know the T1 does a great job for their VOIP, so I do not want to suggest a DSL line, which may not be sufficient. I would also like to save them some money if I can. Maybe even get a faster fiber line and forgo the T1, though I know their load balancer/switch can only handle 20 MB/s of throughput.

    I am looking for confirmation of, or suggestions on, this plan. I am planning to go in and get some real diagnostic numbers. Any suggestions on software to use for that? Preferably Windows software.

    Read the article

  • Error related to pkg-config when installing frei0r as part of another package

    - by Anentropic
    I am trying to build https://github.com/mltframework/shotcut on OS X Lion (using their script in scripts/build_shotcut.sh) and after numerous hurdles I'm stuck on this error:

        ./configure: line 16062: syntax error near unexpected token `OPENCV,'
        ./configure: line 16062: `PKG_CHECK_MODULES(OPENCV, opencv >= 1.0.0, HAVE_OPENCV=true, true)'
        ERROR: Unable to configure frei0r

    From what I have already googled, this means that the PKG_CHECK_MODULES macro hasn't been defined, which probably means there's something wrong with my pkg-config, which I installed via Homebrew. It sounds like the pkg.m4 file isn't being found. When I brew install pkg-config I get the following warning:

        Warning: m4 macros were installed to "share/aclocal".
        Homebrew does not append "/usr/local/share/aclocal"
        to "/usr/share/aclocal/dirlist". If an autoconf script you use
        requires these m4 macros, you'll need to add this path manually.

    Well, I've appended that line to the dirlist file and it doesn't fix the problem above. Can anyone suggest a way forward here? I briefly tried building my own pkg-config from source, but (bizarrely) when I tried to ./configure I got the following error:

        checking for pkg-config... no
        ./configure: line 13540: --exists: command not found
        configure: error: pkg-config and glib-2.0 not found, please set GLIB_CFLAGS and GLIB_LIBS to the correct values

    If building pkg-config needs pkg-config, it seems like a weird catch-22 situation... I think this is probably an unnecessary sidetrack anyway.
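
    A sketch of the usual workaround when the literal PKG_CHECK_MODULES text survives into a generated configure script: regenerate frei0r's build files with aclocal explicitly pointed at Homebrew's macro directory so pkg.m4 actually gets expanded. The paths below are Homebrew defaults, and this assumes the shotcut script ultimately drives autoreconf; if it calls the build differently, exporting ACLOCAL before running it has the same effect.

        # Confirm where Homebrew's pkg-config put its autoconf macro
        ls /usr/local/share/aclocal/pkg.m4

        # Regenerate frei0r's configure with that directory on the macro search path
        cd frei0r
        ACLOCAL="aclocal -I /usr/local/share/aclocal" autoreconf -fiv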

    Read the article

  • gitolite on Mac doesn't add new users to authorized_keys

    - by crashbus
    I installed gitolite and everything works fine for me as admin. But when I add a new user, the new user can't connect to the server. After I looked into the authorized_keys file, I saw that the new user hadn't been added to it. During the commit of the new public key I get some warnings:

        WARNING: split conf not set, gl-conf present for 'gitolite-admin'
        Counting objects: 6, done.
        Delta compression using up to 8 threads.
        Compressing objects: 100% (4/4), done.
        Writing objects: 100% (4/4), 882 bytes, done.
        Total 4 (delta 1), reused 0 (delta 0)
        remote: WARNING: split conf not set, gl-conf present for 'gitolite-admin'
        remote: WARNING: ?? @staff christianwaldmann markwelch
        remote: sh: find: command not found
        remote: sh: find: command not found
        remote: sh: sort: command not found
        remote: sh: find: command not found
        remote: /usr/local/bin/triggers/post-compile/update-gitweb-access-list: line 26: cut: command not found
        remote: /usr/local/bin/triggers/post-compile/update-gitweb-access-list: line 23: grep: command not found
        remote: /usr/local/bin/triggers/post-compile/update-gitweb-access-list: line 26: sort: command not found
        remote: /usr/local/bin/triggers/post-compile/update-gitweb-access-list: line 26: sed: command not found
        remote: sh: find: command not found
        remote: sh: find: command not found

    How can I fix this so that gitolite automatically adds the new user to authorized_keys?
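
    For what it's worth, the "command not found" lines suggest the shell that runs gitolite's post-compile triggers on the server has a nearly empty PATH, so the scripts that rebuild authorized_keys and the gitweb access list die before finishing. A minimal sketch of the usual fix, assuming the hosting user's shell is bash and that it reads ~/.bashrc for remote commands:

        # On the server, as the gitolite hosting user
        cat >> ~/.bashrc <<'EOF'
        export PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:$PATH
        EOF

    Then push a trivial change to gitolite-admin (or re-run gitolite's setup/compile step) and check whether the warnings disappear and the key lands in ~/.ssh/authorized_keys.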

    Read the article

  • How do I setup JBoss 5.1.0.GA to run multiple instances?

    - by djangofan
    Does anyone have any experience with, or advice on, setting up multiple JBoss 5.1.x instances on the same machine with one network card? Here is what I did:

        1. Installed JBoss 5.1.0.GA into c:\myjboss
        1.5. Copied the server/default directory to server/ports-01 and server/ports-02 so they each have their own config (did I assume correctly?)
        2. Ran .\run.bat -c ports-01
        3. Ran .\run.bat -c ports-02

    At this point there are two instances, but the second instance doesn't load correctly because of what is probably a few port conflicts. For example, the HTTP port ends up being 8080 for both instances, which it gets from line 49 in the C:\myjboss\server\all\conf\bindingservice.beans\META-INF\bindings-jboss-beans.xml file. Earlier in the server load it clearly gets the value from line 63 in that same file. I don't know why it gets part of the port config from line 49 and the other part from line 63. Confused. I also tried:

        .\run.bat -Djboss.service.binding.set=ports-01 -c ports-01

    and it made little difference. Any ideas on what I am doing wrong?
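
    A sketch of the combination that usually works on 5.1.0.GA (the flags are standard JBoss system properties; adjust the config names to yours): give each instance its own -c config and its own binding set, so the second instance gets the +100 port offset (HTTP on 8180, and so on) instead of fighting over 8080. Distinct ServerPeerID values keep the JBoss Messaging instances from colliding as well.

        rem Instance 1: default port set (HTTP on 8080)
        .\run.bat -c ports-01 -Djboss.service.binding.set=ports-default -Djboss.messaging.ServerPeerID=1

        rem Instance 2: ports-01 binding set = every port shifted by 100 (HTTP on 8180)
        .\run.bat -c ports-02 -Djboss.service.binding.set=ports-01 -Djboss.messaging.ServerPeerID=2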

    Read the article

  • Problems configuring logstash for email output

    - by user2099762
    I'm trying to configure logstash to send email alerts and to log output to Elasticsearch/Kibana. I have the logs successfully syncing via rsyslog, but I get the following error when I run /opt/logstash-1.4.1/bin/logstash agent -f /opt/logstash-1.4.1/logstash.conf --configtest:

        Error: Expected one of #, {, ,, ] at line 23, column 12 (byte 387) after filter { if [program] == "nginx-access" { grok { match = [ "message" , "%{IPORHOST:remote_addr} - %{USERNAME:remote_user} [%{HTTPDATE:time_local}] %{QS:request} %{INT:status} %{INT:body_bytes_sent} %{QS:http_referer} %{QS:http_user_agent}” ] } } } output { stdout { } elasticsearch { embedded = false host = "

    Here is my logstash config file:

        input {
          syslog {
            type => syslog
            port => 5544
          }
        }

        filter {
          if [program] == "nginx-access" {
            grok {
              match => [ "message" , "%{IPORHOST:remote_addr} - %{USERNAME:remote_user} \[%{HTTPDATE:time_local}\] %{QS:request} %{INT:status} %{INT:body_bytes_sent} %{QS:http_referer} %{QS:http_user_agent}” ]
            }
          }
        }

        output {
          stdout { }
          elasticsearch {
            embedded => false
            host => "localhost"
            cluster => "cluster01"
          }
          email {
            from => "[email protected]"
            match => [ "Error 504 Gateway Timeout", "status,504", "Error 404 Not Found", "status,404" ]
            subject => "%{matchName}"
            to => "[email protected]"
            via => "smtp"
            body => "Here is the event line that occured: %{@message}"
            htmlbody => "<h2>%{matchName}</h2><br/><br/><h3>Full Event</h3><br/><br/><div align='center'>%{@message}</div>"
          }
        }

    I've checked line 23, which is referenced in the error, and it looks fine... I've tried taking out the filter and everything works, without changing that line. Please help.
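
    One detail worth checking before anything else (an observation from the pasted config, not a certainty): the grok match string ends with a typographic closing quote (”) rather than a plain ASCII double quote, which leaves the string unterminated and is exactly the kind of thing that produces "Expected one of #, {, ,, ]" at that line and column. A corrected version of that single line, with a straight quote at the end:

        match => [ "message", "%{IPORHOST:remote_addr} - %{USERNAME:remote_user} \[%{HTTPDATE:time_local}\] %{QS:request} %{INT:status} %{INT:body_bytes_sent} %{QS:http_referer} %{QS:http_user_agent}" ]

    This tends to happen when a snippet is copied from a blog post or a word processor that auto-converts quotes.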

    Read the article

  • How to display escaped characters in tmux status bar

    - by walrus
    I am running tmux from a tty on an embedded Linux device (NOT a terminal emulator). Because the screen is rather small, I want to add some "icons" to the tmux status bar. To achieve this, I have simply created a font with the appropriate glyphs for things like battery or wifi. I can load the font and display the characters with calls that escape into the line-drawing character set, like so:

        echo -e "\xe\234\xf"

    \xe escapes me into line-drawing character mode, \234 is my created character, and \xf returns me to normal character mode so my terminal doesn't start getting goofy. This works perfectly if I enter the command at the terminal, whether tmux is started or not. The issue arises when I try to use it in my ~/.tmux.conf file for the status bar. I currently have a line like this:

        set -g status-right "#(echo -e "\xe\234\xf") #(/script/to/output/powerlevel)

    This simply outputs:

        \xe\234\xf powerlevel

    It goes the same if I use printf instead of echo. This is the output I would expect on the terminal if I made the call without passing -e to echo, or without enclosing the statement in quotes. I then decided to wrap the calls to echo or printf in a shell script. Again, the script works when called from the terminal, but not in tmux's status bar. Now I get the unprintable character "?" instead of my icon, like this:

        ? powerlevel

    This is what I would expect if I did not use the line-drawing escapes mentioned above, or if I tried to copy and paste the character as text using tmux. In addition, calling these character scripts screws up the rest of my status-right: the clock shows about six digits for minutes when it is called (though it correctly updates only two of them). How can I make tmux respect the escape characters? Any help or insight is greatly appreciated.
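
    A small sketch that may be worth trying (assumptions: tmux runs #() commands through /bin/sh, and your script path stays as shown): echo -e is not portable across the /bin/sh implementations tmux may invoke, so printf with octal escapes is a safer way to emit the shift-out/shift-in codes (\016 and \017 are the octal forms of \xe and \xf). Note also that some tmux versions sanitize non-printable characters coming out of #() blocks, which would explain the "?"; in that case the glyph has to be placed in the status string directly rather than generated by a command.

        # in ~/.tmux.conf
        set -g status-right '#(printf "\016\234\017") #(/script/to/output/powerlevel)'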

    Read the article

  • What are these CPU cache settings? Snoop Filter, ACL prefetch, HW prefetch

    - by eater
    I was in my BIOS setup turning on VT-x support today and saw these other settings. A little googling indicates that each of them turns on some sort of optimization to do with the CPU's L2 cache. They were all turned off by default. The processor in question is an Intel Xeon quad-core 3.4 GHz (X5492). My OS is Linux 2.6.35.10-74.fc14.x86_64 #1 SMP Thu Dec 23 16:04:50 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux. I have 4 GB of RAM, if that matters. Here's what the BIOS manufacturer has to say:

        Snoop Filter
        Enabling the snoop filter typically improves performance by reducing snoop traffic on the frontside bus in dual processor configurations.

    Well, I like the sound of improved performance. Why would the BIOS have this off by default? Or by "dual processor" do they not mean multi-core? Regardless, is there a downside if this is on?

        ACL Prefetch
        When enabled, the Adjacent Cache Line Prefetcher fetches both cache lines that comprise a cache line pair when it determines required data is not currently in its cache. When disabled, the processor will only fetch the cache line required by the processor.

        HW Prefetch
        Fetches an extra line of data into L2 from external memory.

    Both of these sound like optimizations that have some drawbacks. What are the reasons to turn them on? What are the reasons to leave them off? Why is the default off?

    Read the article

  • Why is crontab giving "No such file or directory" error when the file DOES exist?

    - by fettereddingoskidney
    I am getting the following three lines in an error message in /var/mail/username after the following job runs in crontab:

        15 * * * * /Applications/MAMP/htdocs/iconimageryidx/includes/insertPropertyRESI.php

    Errors:

        /applications/mamp/htdocs/iconimageryidx/includes/insertpropertyRESI.php: line 1: ?php: No such file or directory
        /applications/mamp/htdocs/iconimageryidx/includes/insertpropertyRESI.php: line 3: syntax error near unexpected token `'initialize.php''
        /applications/mamp/htdocs/iconimageryidx/includes/insertpropertyRESI.php: line 3: `require_once('initialize.php');

    The PHP script I am trying to execute DOES in fact exist, and I have made absolutely sure the spelling is correct several times. I ran a crontab on another script before and it worked just fine... any ideas? The 2nd and 3rd errors refer to line 3 of the following script (the one I am trying to run from the crontab):

        <?php
        require_once('initialize.php');
        require_once('insertPropertyTypes.php');

        $sDate;
        if(isset($_GET['startDate'])) {
            $sDate = $_GET['startDate'];
        } else {
            $sDate = '';
        }

        $insertResi = new InsertPropertyTypes('Listing', $sDate, 'RESI');
        ?>

    When I run insertPropertyRESI.php in the browser, it runs just fine. Also, initialize.php and insertPropertyTypes.php are in the same directory as insertPropertyRESI.php. I am using MAMP with PHP 5.3.5. Thanks for the help!
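
    For context, those errors are what a shell (not PHP) prints when it is handed a PHP file: "?php: No such file or directory" means cron executed the script directly and the shell tripped over the <?php tag. Pointing the cron entry at the PHP CLI binary is the usual fix; a sketch below (the MAMP php path is a guess, so check which version directories exist under /Applications/MAMP/bin/php). Note also that $_GET is not populated when a script runs from cron, so the startDate branch will always fall through to the empty default unless you pass the value another way (e.g. as a CLI argument read from $argv).

        # crontab entry: run the script through PHP instead of the shell
        15 * * * * /Applications/MAMP/bin/php/php5.3.5/bin/php /Applications/MAMP/htdocs/iconimageryidx/includes/insertPropertyRESI.php

    Alternatively, add a shebang line pointing at that same php binary as line 1 of the script and mark the file executable with chmod +x.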

    Read the article

  • Why do moving lines become fuzzy on my monitor?

    - by CodeInChaos
    I recently got a new notebook. With moving images there are some graphical issues, and I'd like to know what causes them. None of my earlier monitors exhibited similar issues. Moving high-contrast lines become jagged, similar to interlaced video. When a horizontal line moves vertically the artifacts are colored; when a vertical line moves horizontally they aren't. The effect isn't observable in static images, and when things move faster the zone in which it occurs becomes wider.

    The effect is very visible if I move a window around: it shows up on the borders of the window and wherever high-contrast lines appear. It appears when watching videos too. The vertical line in that image moves to the right, the horizontal line upwards. The effect is most likely related to the fact that each physical pixel consists of different sub-pixels for the different color channels. But how are these causing the observed effect? Is the rate at which the different colors change to the target brightness different? The optical impression is that every second pixel, in a chessboard-like arrangement, adapts more slowly than its neighbors. But that doesn't make much sense.

    Read the article

  • Postfix : outgoing mail in TLS for a specific domain

    - by vercetty92
    I am trying to configure Postfix to send mail with TLS (STARTTLS, in fact), but only for a specific destination. I tried with smtp_tls_policy_maps. This is the only line in my main.cf file regarding TLS configuration, but it does not seem to be working. Here is my main.cf file:

        queue_directory = /opt/csw/var/spool/postfix
        command_directory = /opt/csw/sbin
        daemon_directory = /opt/csw/libexec/postfix
        html_directory = /opt/csw/share/doc/postfix/html
        manpage_directory = /opt/csw/share/man
        sample_directory = /opt/csw/share/doc/postfix/samples
        readme_directory = /opt/csw/share/doc/postfix/README_FILES
        mail_spool_directory = /var/spool/mail
        sendmail_path = /opt/csw/sbin/sendmail
        newaliases_path = /opt/csw/bin/newaliases
        mailq_path = /opt/csw/bin/mailq
        mail_owner = postfix
        setgid_group = postdrop
        mydomain = ullink.net
        myorigin = $myhostname
        mydestination = $myhostname, localhost.$mydomain, localhost
        masquerade_domains = vercetty92.net
        alias_maps = dbm:/etc/opt/csw/postfix/aliases
        alias_database = dbm:/etc/opt/csw/postfix/aliases
        transport_maps = dbm:/etc/opt/csw/postfix/transport
        smtp_tls_policy_maps = dbm:/etc/opt/csw/postfix/tls_policy
        inet_interfaces = all
        unknown_local_recipient_reject_code = 550
        relayhost =
        smtpd_banner = $myhostname ESMTP $mail_name
        debug_peer_level = 2
        debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin xxgdb $daemon_directory/$process_name $process_id & sleep 5

    And here is my tls_policy file:

        gmail.com encrypt protocols=SSLv3:TLSv1 ciphers=high

    I also tried:

        gmail.com encrypt

    My wish is to use TLS only for the gmail domain. With this configuration, I don't see any TLS line in the source of the mail. But if I tell Postfix to use TLS when possible for all destinations with this line, it works:

        smtp_tls_security_level = may

    because I can then see this line in the source of my mail:

        (version=TLSv1/SSLv3 cipher=OTHER);

    But I don't want to use TLS for the other domains, only for gmail. Am I missing something in my config? (I also tried hash:/etc/opt/csw/postfix/tls_policy, and it's the same.)
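
    Two things worth checking, sketched below: lookup tables have to be rebuilt with postmap after every edit (editing the flat file alone changes nothing Postfix can see), and postmap -q shows whether the key Postfix will use actually resolves. With an empty relayhost the policy table is keyed on the recipient domain, so gmail.com is the right key; just make sure the compiled table file sits next to the source file and is newer than it.

        # Rebuild the compiled table after editing the source file
        postmap dbm:/etc/opt/csw/postfix/tls_policy

        # Verify the lookup returns the policy you expect
        postmap -q gmail.com dbm:/etc/opt/csw/postfix/tls_policy

        # Reload so the smtp client re-reads main.cf
        postfix reload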

    Read the article

  • Google Drive terminates without error on startup

    - by Iszi
    I've used Google Drive for a while now, but it won't start up after installing on my latest system rebuild. I'm still using the same OS, hardware, and basic software load (antivirus, firewall, etc.) that I have for years, during which I never previously had problems with Drive.

        OS: Windows 7 Ultimate x64
        Google Drive version: 1.12.5329.1887

    Now, whenever I try to run Google Drive, it just spawns two instances of the executable which die shortly after. No error messages are posted to the desktop, and nothing indicating any problem is written to the Event Log. After some research, I've yet to find anyone having the same problem who's found an answer. I did find out how to run Google Drive in diagnostic mode, using the --vv parameter at the command line. After that, I opened up the sync log and got this:

        2013-10-31 17:11:24,039 INFO pid=3664 1892:MainThread logging:1600 OS: Windows/6.1.7601-SP1
        2013-10-31 17:11:24,039 INFO pid=3664 1892:MainThread logging:1600 Google Drive (build 1.12.5329.1887)
        2013-10-31 17:11:24,039 DEBUG pid=3664 1892:MainThread logging:1608 DEBUGGING DUMP is ON.
        2013-10-31 17:11:24,051 ERROR pid=3664 1892:MainThread logging:1575 ERROR, UNEXPECTED EXCEPTION
        2013-10-31 17:11:24,051 ERROR pid=3664 1892:MainThread logging:1575 [Error 5] Access is denied
        Traceback (most recent call last):
          File "<string>", line 232, in Main
          File "<string>", line 118, in RegisterCustomFileTypes
          File "P:\p\agents\hpal4.eem\recipes\353983091\base\b\drb\googleclient\apps\webdrive_sync\windows\build\pyi.win32\main\outPYZ1.pyz/windows.registry", line 62, in GetValue
        WindowsError: [Error 5] Access is denied
        2013-10-31 17:11:24,052 INFO pid=3664 1892:MainThread logging:1600 Crash reporting disabled. Ignoring report.
        2013-10-31 17:11:24,052 INFO pid=3664 1892:MainThread logging:1600 Exiting with error code: 0

    I'm running an account with Administrator-level permissions, and have even tried using "Run As Administrator" on the EXE. I'm not sure why it's looking for a P:\ drive, as no such volume has ever been mounted on this system. What should I do to further troubleshoot and resolve this issue?

    Read the article

  • Amavisd-new(2.6.4-3) failing to do "lookup_sql_dsn" when large number of emails are need to be accessed

    - by sandip
    Amavis is failing to do the SQL lookup when a large number of emails are sent to it. It throws an error after scanning 40 to 50 emails, like this:

        (!!)TROUBLE in process_request: sql exec: err=7, 57P01,DBD::Pg::st bind_param failed:FATAL: terminating connection due to administrator command\nSSL connection has been closed unexpectedly at (eval 103) line 164, <GEN50> line 5. at (eval 104) line 280, <GEN50> line 5.

    As soon as this error appears in the logs, Amavis stops and port 10024 is closed. Thinking it might be an error due to the SSL connection to the database (PostgreSQL 8.4), I disabled SSL in Postgres, but it made no difference. I have tried configuring Amavis on another server, but I got the same error again. This is happening on a production server, so I am not able to scan emails according to user settings. Does anybody have any idea what the source of this error may be? Please help. Thanks in advance.

    Read the article

  • How do I activate the F_LINE input in a transplanted HP chassis?

    - by admin
    I have an HP Pavilion Media Center PC chassis, vintage 2003 or so, and I replaced the motherboard in it with a newer (vintage 2009) HP motherboard, an M2N68-LA (Narra 5). I have scoured the internet trying to find pinouts for the motherboard, to no avail. My question concerns the front-panel audio, specifically Line In. The old chassis was built for AC'97, but the new board is built for the newer HD Audio standard. I figured out, by comparison and experiment, how to connect the Mic and Headphone jacks to the HD Audio header of the board by adding a manual switch to set the SENSE lines. Now all works fine for Mic and Headphone.

    The old chassis also has a front-panel Line In jack that the newer HP chassis does not have. However, the new board has a 4-pin white connector labeled F_LINE that I believe is a line input. Under Windows 7 I see two Line Inputs in the mixer, but I can't get one of them to become active. The 4-pin F_LINE connector uses the two middle pins for ground, and presumably the other two for left and right audio inputs. There are no pins for sensing on that connector. Can anyone tell me how to use that F_LINE input for the front panel, or how to activate it?

    Read the article

  • What may be wrong with String::ToIdentifier::EN tests?

    - by wk01
    I am trying to install the Perl module String::ToIdentifier::EN (as a dependency of DBIx::Class::Schema::Loader), but it fails its tests. I googled the errors but get no picture of where the problem is:

        Building and testing String-ToIdentifier-EN-0.07
        cp lib/String/ToIdentifier/EN.pm blib/lib/String/ToIdentifier/EN.pm
        cp lib/String/ToIdentifier/EN/Unicode.pm blib/lib/String/ToIdentifier/EN/Unicode.pm
        Manifying blib/man3/String::ToIdentifier::EN.3pm
        Manifying blib/man3/String::ToIdentifier::EN::Unicode.3pm
        PERL_DL_NONLAZY=1 /usr/bin/perl "-MExtUtils::Command::MM" "-e" "test_harness(0, 'inc', 'blib/lib', 'blib/arch')" t/00_basic.t t/10_ascii.t t/20_capitalization.t
        Byte order is not compatible at ../../lib/Storable.pm (autosplit into ../../lib/auto/Storable/_retrieve.al) line 380, at /home/wanradt/perl5/lib/perl5/Lingua/EN/Tagger.pm line 167
        # Looks like you planned 25 tests but ran 4.
        # Looks like your test exited with 25 just after 4.
        t/00_basic.t ........... Dubious, test returned 25 (wstat 6400, 0x1900)
        Failed 21/25 subtests
        Byte order is not compatible at ../../lib/Storable.pm (autosplit into ../../lib/auto/Storable/_retrieve.al) line 380, at /home/wanradt/perl5/lib/perl5/Lingua/EN/Tagger.pm line 167
        # Looks like you planned 768 tests but ran 512.
        # Looks like your test exited with 25 just after 512.
        t/10_ascii.t ........... Dubious, test returned 25 (wstat 6400, 0x1900)
        Failed 256/768 subtests
        t/20_capitalization.t .. ok

        Test Summary Report
        -------------------
        t/00_basic.t (Wstat: 6400 Tests: 4 Failed: 0)
          Non-zero exit status: 25
          Parse errors: Bad plan. You planned 25 tests but ran 4.
        t/10_ascii.t (Wstat: 6400 Tests: 512 Failed: 0)
          Non-zero exit status: 25
          Parse errors: Bad plan. You planned 768 tests but ran 512.
        Files=3, Tests=528, 1 wallclock secs ( 0.07 usr 0.02 sys + 0.42 cusr 0.04 csys = 0.55 CPU)
        Result: FAIL
        Failed 2/3 test programs. 0/528 subtests failed.
        make: *** [test_dynamic] Error 255
        -> FAIL Installing String::ToIdentifier::EN failed. See /home/wanradt/.cpanm/build.log for details.

    "Byte order is not compatible at..." seems to be the key, but to where?
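
    For what it's worth, that message comes from Storable refusing to load a frozen data file written by a perl with a different byte order/architecture, and the trace points at Lingua::EN::Tagger, which loads its lexicon data via Storable (that is what the Tagger.pm line in the error is doing). A sketch of the usual remedy, assuming you use cpanm and the module was carried over from an older (e.g. 32-bit) perl:

        # Rebuild Lingua::EN::Tagger so its Storable files match the current perl
        cpanm --reinstall Lingua::EN::Tagger

        # Then retry the original install
        cpanm String::ToIdentifier::EN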

    Read the article

  • How to import this data set into excel? (column headings on each row delimited by a colon)

    - by Anonymous
    I'm trying to import the following data set into Excel. I've had no luck with the Text Import Wizard. I'd like Excel to make id, name, street, etc. the column names and put each record on its own row.

        , id: sdfg:435-345, name: Some Name, type: , street: Address Line 1, Some Place, postalcode: DN2 5FF, city: Cityhere, telephoneNumber: 01234 567890, mobileNumber: 01234 567890, faxNumber: /, url: http://www.website.co.uk, email: [email protected], remark: , geocode: 526.2456;-0.8520, category: some, more, info
        , id: sdfg:435-345f, name: Some Name, type: , street: Address Line 1, Some Place, postalcode: DN2 5FF, city: Cityhere, telephoneNumber: 01234 567890, mobileNumber: 01234 567890, faxNumber: /, url: http://www.website.co.uk, email: [email protected], remark: , geocode: 526.2456;-0.8520, category: some, more, info

    Is there any easy way to do this with Excel? I'm struggling to think of a way to convert this to a conventional CSV easily. As far as I can tell, I'd have to remove the labels from each line, enclose each line in quotes, then delimit the fields with commas. Obviously that's made more difficult to script because some fields (the address, for instance) contain comma-delimited data themselves. I'm not good with regex at all. What's the best way to tackle this?
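
    Since the field labels are fixed, one workable approach (a sketch, assuming GNU sed and the column set shown in the sample) is to turn each ", label: " separator into a TAB and strip the leading ", id: ", then open the result in Excel as a tab-delimited file. Values keep their internal commas, because only a comma followed by a known label becomes a delimiter.

        sed -E 's/, (name|type|street|postalcode|city|telephoneNumber|mobileNumber|faxNumber|url|email|remark|geocode|category): /\t/g; s/^, id: //' data.txt > data.tsv

    Open data.tsv in Excel (or run the Text Import Wizard with Tab as the only delimiter) and type the column headings (id, name, type, street, and so on) into row 1.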

    Read the article

  • How to automatically start VM created by virt-manager?

    - by Jeff Shattock
    I have created a virtual machine with virt-manager that runs on KVM/QEMU. The machine works well when started through virt-manager. However, I would like to be able to start and stop the VM through a script in init.d, so that it comes up and down along with the host. I need virt-manager to show that the machine is running, and to be able to connect to its console through there.

    When I use the command line that is produced by running ps -eaf | grep kvm after starting the VM through virt-manager, I get some console messages about redirected character devices, but the machine does start and run properly. However, I do not get any indication from virt-manager that it has started. How can I modify the command line to get virt-manager to pick up the running VM? Is there anything else about the command line that should change when starting outside of virt-manager? The command line is (slightly reformatted for readability):

        /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 512 -smp 1 -name BORON \
          -uuid fa7e5fbd-7d8e-43c4-ebd9-1504a4383eb1 \
          -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/BORON.monitor,server,nowait \
          -monitor chardev:monitor -localtime -boot c \
          -drive file=/dev/FS1/BORON,if=ide,index=0,boot=on,format=raw \
          -net nic,macaddr=52:54:00:20:0b:fd,vlan=0,name=nic.0 \
          -net tap,fd=41,vlan=0,name=tap.0 -chardev pty,id=serial0 -serial chardev:serial0 \
          -parallel none -usb -usbdevice tablet -vnc 127.0.0.1:1 -k en-us -vga cirrus
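
    A sketch of the approach that avoids reproducing the qemu command line at all: because the guest is already defined in libvirt, virsh can start and stop it by name, and anything started that way is managed by libvirtd, so virt-manager shows it as running and its console stays reachable. virsh autostart additionally flags the domain to come up whenever libvirtd starts at host boot, which may remove the need for a custom init.d script entirely.

        # Bring the guest up with the host (handled by libvirtd at boot)
        virsh autostart BORON

        # Manual control, usable from an init.d or systemd script if preferred
        virsh start BORON
        virsh shutdown BORON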

    Read the article

  • Why just splitting an Ethernet cable does not work?

    - by Sin Jeong-hun
    I thought Ethernet was logically a one-line communication bus (for argument's sake, I am excluding hubs): all machines attached to the bus hear the same signals, and the machines themselves avoid collisions by randomly backing off (http://computer.howstuffworks.com/ethernet6.htm). If so, why does splitting one Ethernet line from my home router into two and connecting two computers not work? Why do I have to add a switch?

    What the Internet said would not work:

        [4 port home router] ------[one Ethernet cable]-----[simple splitter]======[two computers]

    What the Internet said I should do:

        [4 port home router] ------[one Ethernet cable]-----[switch]======[two computers]

    Is this because of signal degradation (reduced electric current)? Thank you for all the answers!

    The reason I did not just use two ports of my home router: the 4-port gigabit router is in my room, and I had put a computer in another room (also my room, though). Since wired networking is far more reliable and secure, I bought a long Ethernet cable and connected the computer to the router. Now I am thinking about adding another computer to that room. I could buy another long Ethernet cable, but then there would be two cables between the rooms. The one line is already a minor annoyance, so I wondered if I could share it between the two computers in that room. A switch would work, but it requires power and is a little bit pricey. That is why I wondered why it would not work to simply split the physical Ethernet cable. Apparently I do not completely understand how Ethernet and a switch work. I just have some bit of knowledge I heard in my college class.

    Read the article

  • Java warnings on Linux

    - by Geo Papas
    Hello, I am getting warnings after installing Java on Kubuntu 11.10. The Java programs run, but I always get these four warnings:

        $ java
        Warning: no leading - on line 1 of `/usr/lib/jvm/java-6-sun-1.6.0.26/jre/lib/amd64/jvm.cfg'
        Warning: missing VM type on line 1 of `/usr/lib/jvm/java-6-sun-1.6.0.26/jre/lib/amd64/jvm.cfg'
        Warning: no leading - on line 1 of `/usr/lib/jvm/java-6-sun-1.6.0.26/jre/lib/amd64/jvm.cfg'
        Warning: missing VM type on line 1 of `/usr/lib/jvm/java-6-sun-1.6.0.26/jre/lib/amd64/jvm.cfg'

    What am I missing? Thanks in advance! Here is the content of /usr/lib/jvm/java-6-sun-1.6.0.26/jre/lib/amd64/jvm.cfg:

        /usr/lib/jvm/java-6-sun
        #
        # %W% %E%
        #
        # Copyright (c) 2006, Oracle and/or its affiliates. All rights reserved.
        # ORACLE PROPRIETARY/CONFIDENTIAL. Use is subject to license terms.
        #
        # List of JVMs that can be used as an option to java, javac, etc.
        # Order is important -- first in this list is the default JVM.
        # NOTE that this both this file and its format are UNSUPPORTED and
        # WILL GO AWAY in a future release.
        #
        # You may also select a JVM in an arbitrary location with the
        # "-XXaltjvm=<jvm_dir>" option, but that too is unsupported
        # and may not be available in a future release.
        #
        -server KNOWN
        -client IGNORE
        -hotspot ERROR
        -classic WARN
        -native ERROR
        -green ERROR
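
    Worth noting: the warnings complain about line 1 of jvm.cfg, and the pasted file does indeed start with a stray "/usr/lib/jvm/java-6-sun" line before the comment header, while every legitimate entry in that file starts with a "-". Removing that first line (after a backup) should silence the warnings; a sketch:

        CFG=/usr/lib/jvm/java-6-sun-1.6.0.26/jre/lib/amd64/jvm.cfg

        # Confirm the stray line, keep a backup, then drop line 1
        head -1 "$CFG"
        sudo cp "$CFG" "$CFG.bak"
        sudo sed -i '1d' "$CFG"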

    Read the article

  • cisco asa + action drop issue

    - by ghp
    I have created a tunnel between the 10.x.y.z network and 122.a.b.c. The tunnel is up and active, but when I look at the packet-tracer output I get ACTION: drop. I have also enabled same-security-traffic permit intra-interface. Can someone help me understand what this drop means?

        Result:
        input-interface: inside
        input-status: up
        input-line-status: up
        output-interface: outside
        output-status: up
        output-line-status: up
        Action: drop
        Drop-reason: (acl-drop) Flow is denied by configured rule

    Packet-tracer output (@Shane Madden, please see below):

        CASA5K-A# config t
        CASA5K-A(config)# packet-tracer input inside tcp 10.x.y.112 0 122.a.b.c 0

        Phase: 1
        Type: ROUTE-LOOKUP
        Subtype: input
        Result: ALLOW
        Config:
        Additional Information:
        in 0.0.0.0 0.0.0.0 outside

        Phase: 2
        Type: ACCESS-LIST
        Subtype:
        Result: DROP
        Config: Implicit Rule
        Additional Information:

        Result:
        input-interface: inside
        input-status: up
        input-line-status: up
        output-interface: outside
        output-status: up
        output-line-status: up
        Action: drop
        Drop-reason: (acl-drop) Flow is denied by configured rule

    The access-groups are as follows:

        access-group acl-inbound in interface outside
        access-group acl-outbound in interface inside

    and the access-lists are:

        access-list acl-inbound extended permit tcp any any gt 1023
        access-list acl-outbound extended permit ip object-group net-Source object net-dest
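
    Two things that may explain the drop before blaming the tunnel (a sketch, keeping your masked addresses): packet-tracer with TCP port 0 on both ends is not a realistic flow and can be rejected before the interface ACL is evaluated in a meaningful way, and the inside ACL only permits traffic whose source and destination fall inside net-Source and net-dest, so the test addresses need to actually be members of those objects. Re-running with real ports and checking the object contents usually clarifies which rule is doing the dropping.

        packet-tracer input inside tcp 10.x.y.112 1025 122.a.b.c 80 detailed

        show running-config object-group id net-Source
        show running-config object id net-dest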

    Read the article

  • ASP.NET MVC 3: Razor’s @: and <text> syntax

    - by ScottGu
    This is another in a series of posts I'm doing that cover some of the new ASP.NET MVC 3 features:

        - New @model keyword in Razor (Oct 19th)
        - Layouts with Razor (Oct 22nd)
        - Server-Side Comments with Razor (Nov 12th)
        - Razor's @: and <text> syntax (today)

    In today's post I'm going to discuss two useful syntactical features of the new Razor view-engine: the @: and <text> syntax support.

    Fluid Coding with Razor

    ASP.NET MVC 3 ships with a new view-engine option called "Razor" (in addition to the existing .aspx view engine). You can learn more about Razor, why we are introducing it, and the syntax it supports from my Introducing Razor blog post. Razor minimizes the number of characters and keystrokes required when writing a view template, and enables a fast, fluid coding workflow. Unlike most template syntaxes, you do not need to interrupt your coding to explicitly denote the start and end of server blocks within your HTML. The Razor parser is smart enough to infer this from your code. This enables a compact and expressive syntax which is clean, fast and fun to type.

    For example, the Razor snippet below can be used to iterate a list of products:

        [code screenshot in the original post]

    When run, it generates output like:

        [output screenshot in the original post]

    One of the techniques that Razor uses to implicitly identify when a code block ends is to look for tag/element content to denote the beginning of a content region. For example, in the code snippet above Razor automatically treated the inner <li></li> block within our foreach loop as an HTML content block because it saw the opening <li> tag sequence and knew that it couldn't be valid C#. This particular technique, using tags to identify content blocks within code, is one of the key ingredients that makes Razor so clean and productive with scenarios involving HTML creation.

    Using @: to explicitly indicate the start of content

    Not all content container blocks start with a tag element, though, and there are scenarios where the Razor parser can't implicitly detect a content block. Razor addresses this by enabling you to explicitly indicate the beginning of a line of content by using the @: character sequence within a code block. The @: sequence indicates that the line of content that follows should be treated as a content block:

        [code screenshot in the original post]

    As a more practical example, the below snippet demonstrates how we could output a "(Out of Stock!)" message next to our product name if the product is out of stock:

        [code screenshot in the original post]

    Because I am not wrapping the (Out of Stock!) message in an HTML tag element, Razor can't implicitly determine that the content within the @if block is the start of a content block. We are using the @: character sequence to explicitly indicate that this line within our code block should be treated as content.

    Using Code Nuggets within @: content blocks

    In addition to outputting static content, you can also have code nuggets embedded within a content block that is initiated using a @: character sequence. For example, we have two @: sequences in the code snippet below:

        [code screenshot in the original post]

    Notice how within the second @: sequence we are emitting the number of units left within the content block (e.g. "(Only 3 left!)"). We are doing this by embedding a @p.UnitsInStock code nugget within the line of content.

    Multiple Lines of Content

    Razor makes it easy to have multiple lines of content wrapped in an HTML element. For example, below the inner content of our @if container is wrapped in an HTML <p> element, which will cause Razor to treat it as content:

        [code screenshot in the original post]

    For scenarios where the multiple lines of content are not wrapped by an outer HTML element, you can use multiple @: sequences:

        [code screenshot in the original post]

    Alternatively, Razor also allows you to use a <text> element to explicitly identify content:

        [code screenshot in the original post]

    The <text> tag is an element that is treated specially by Razor. It causes Razor to interpret the inner contents of the <text> block as content, and to not render the containing <text> tag element (meaning only the inner contents of the <text> element will be rendered; the tag itself will not). This makes it convenient when you want to render multi-line content blocks that are not wrapped by an HTML element. The <text> element can also optionally be used to denote single lines of content, if you prefer it to the more concise @: sequence:

        [code screenshot in the original post]

    The above code will render the same output as the @: version we looked at earlier. Razor will automatically omit the <text> wrapping element from the output and just render the content within it.

    Summary

    Razor enables a clean and concise templating syntax that supports a very fluid coding workflow. Razor's smart detection of <tag> elements to identify the beginning of content regions is one of the reasons that the Razor approach works so well with HTML generation scenarios, and it enables you to avoid having to explicitly mark the beginning/ending of content regions in about 95% of if/else and foreach scenarios. Razor's @: and <text> syntax can then be used for scenarios where you want to avoid using an HTML element within a code container block, and need to more explicitly denote a content region.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article
