Search Results

Search found 4146 results on 166 pages for 'sqlpass summit 2011'.


  • Installing PHP APC in Fedora - Unable to initialize module?

    - by sri
    I have been trying to install APC on my Fedora Apache server, to show a progress bar while uploading files, but I get the following PHP warning when starting XAMPP:

      Starting XAMPP for Linux 1.7.1...
      PHP Warning: PHP Startup: apc: Unable to initialize module
      Module compiled with module API=20090626, debug=0, thread-safety=0
      PHP compiled with module API=20060613, debug=0, thread-safety=0
      These options need to match in Unknown on line 0
      XAMPP: Starting Apache with SSL (and PHP5)...
      XAMPP: Starting MySQL...
      XAMPP: Another FTP daemon is already running.
      XAMPP for Linux started.

    My server details: OS: Fedora 12; XAMPP version: 1.7.1; PHP version: 5.2.9; APC version: 3.1.9. I have tried the process described here:
      1) http://2bits.com/articles/installing-php-apc-gnulinux-centos-5.html
      2) http://stevejenkins.com/blog/2011/08/how-to-install-apc-alternative-php-cache-on-centos-5-6/
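
    The warning means the APC binary was compiled against a different PHP (module API 20090626) than the PHP 5.2.9 bundled with XAMPP (module API 20060613), so the engine refuses to load it. A rough sketch of rebuilding APC against XAMPP's own PHP, assuming the stock /opt/lampp layout and that the XAMPP development add-on (which provides phpize and php-config) is installed:

      # build APC 3.1.9 with XAMPP's phpize/php-config so the module APIs match
      cd /tmp && tar xzf APC-3.1.9.tgz && cd APC-3.1.9   # assumes the PECL tarball is already downloaded
      /opt/lampp/bin/phpize
      ./configure --with-php-config=/opt/lampp/bin/php-config
      make && make install                 # installs into XAMPP's extension_dir
      # enable the module and restart the stack
      echo "extension=apc.so" >> /opt/lampp/etc/php.ini
      /opt/lampp/lampp restart
      # verify: the reported API numbers should now be identical
      /opt/lampp/bin/php -i | grep "PHP API\|PHP Extension"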

    Read the article

  • VNC connection from Linux to Windows CE

    - by JosiP
    I'm having trouble connecting from Linux to Windows CE via VNC viewer. Here is what I see in the log:

      /usr/bin/vncviewer 10.1.1.57
      VNC Viewer Free Edition 4.1.2 for X - built Apr 20 2011 12:04:25
      Copyright (C) 2002-2005 RealVNC Ltd.
      See http://www.realvnc.com for information on VNC.
      Tue Jul 2 12:15:04 2013
      CConn: connected to host 10.1.1.57 port 5900
      CConnection: Server supports RFB protocol version 3.5
      CConnection: Using RFB protocol version 3.3
      TXImage: Using default colormap and visual, TrueColor, depth 24.
      CConn: Using pixel format depth 6 (8bpp) rgb222
      CConn: Using ZRLE encoding

    I cannot see anything, only a black screen. Restarting the device does not help. The device is connected directly to the machine by a crossed Ethernet cable, and its IP is assigned by DHCP. Any clues or ideas on what I can do to get a normal view? Best regards, J.
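
    The log shows the viewer negotiating an 8bpp rgb222 pixel format with ZRLE, which some embedded Windows CE VNC servers render as a blank screen. A sketch of something worth trying (option names are from RealVNC's 4.1.x Unix viewer and are an assumption here; check your viewer's man page):

      # ask for the server's full colour depth and disable auto-selection
      vncviewer -FullColour=1 -AutoSelect=0 10.1.1.57
      # if it is still black, also try a simpler encoding than ZRLE
      vncviewer -FullColour=1 -AutoSelect=0 -PreferredEncoding=hextile 10.1.1.57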

    Read the article

  • Mac OS X read/write NTFS support

    - by Tiago Veloso
    I am trying to get read/write support for NTFS drives under Mac OS X 10.6. I have tried NTFS-3G, but it seems it does not support 64-bit kernels, and I was unable to switch my Mac's kernel to 32-bit. Is there a solution? I am running Snow Leopard on a 2011 MBP13 and am getting the following error. After running system_profiler | grep Kernel I get:

      ForkProBox:~ fork$ system_profiler | grep Kernel
      Kernel Version: Darwin 10.7.1
      64-bit Kernel and Extensions: Yes

    I have run the commands suggested; here is their output: Error tracking
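
    Since the machine is booting the 64-bit kernel and the NTFS-3G kext in question is 32-bit only, one option is to make Snow Leopard boot the 32-bit kernel (applications keep running 64-bit). A sketch, assuming a stock 10.6 install:

      # check which kernel architecture is configured and which is running
      sudo systemsetup -getkernelbootarchitecture
      uname -m        # x86_64 here means the 64-bit kernel is active
      # switch the default to the 32-bit kernel, then reboot
      sudo systemsetup -setkernelbootarchitecture i386
      # holding the "3" and "2" keys during startup does the same for a single boot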

    Read the article

  • Group policy issues

    - by Alex Berry
    We are having an issue with one of our clients' relatively new SBS installs. The domain consists of a single SBS 2011 server with 4 Windows 7 clients and 3 XP clients. Most of the time everything is fine; however, roughly every 3 days the Windows 7 clients start timing out when trying to receive computer group policy. This results in hour-long delays before getting to the login screen in the morning, accompanied by event ID 6006 Winlogon errors stating it took 3599 seconds to process policy. Once they've booted they can log in without issue, but gpupdate fails again on computer policy and gpresult comes back with access denied, even when run as domain admin... At this point, if we restart the server the network is fine for another 3 days. I thought perhaps it might be IPv6 or SMB2, but disabling IPv6 on the clients doesn't help, and the clients can browse the sysvol folder freely over SMB2 anyway. Does anyone have any ideas or routes I can take to further diagnose the issue? Thanks in advance :)

    Read the article

  • Most efficient way to set up a game server

    - by alex bowers
    I'm running a PHP-based game which is predicted to have over 45 million members by the end of this year (2011); we are currently at 7.5 million. The game runs on Facebook, and I am in desperate need of help to make this game server as efficient and as powerful as possible. It is a dedicated server with these specs:

      Processor: Intel i7 920 (frequency 4x 2x 2.66 GHz)
      NIC: GigaEthernet
      RAM: 12 GB
      Hard disk: 4 x 1 TB

    It has Apache, cPanel, phpMyAdmin, several Apache mods and MySQL installed. The game also makes 47 MySQL calls per second per user. Are there any alternatives to the above which could be faster, more efficient, etc.? I don't mind having to recode the game to fit, as long as it maximises our upper limit of members on the game. Also, is there a way to tell what our maximum limit of players, database calls, etc. is? Thank you, hope you guys can help :)
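
    On the last question, finding the current ceiling, one low-effort approach is to load-test a representative game request while watching MySQL throughput, and raise the concurrency until response times or error rates degrade. A sketch using ApacheBench and mysqladmin (the URL and the numbers are placeholders, not recommendations):

      # 10,000 keep-alive requests at 200 concurrent simulated players
      ab -n 10000 -c 200 -k "http://apps.example.com/game/play.php"
      # meanwhile, watch query rate and running threads on the database
      mysqladmin -u root -p -r -i 5 extended-status | egrep "Questions|Threads_running|Slow_queries"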

    Read the article

  • Files hidden on USB hard disk because of virus, how to clean?

    - by hammad
    I have a 1 GB hard drive. All of its contents disappeared after a virus got in (when I gave the drive to someone else). After some research I found that it was probably a virus. I ran Norton Antivirus and removed a certain virus, then ran Malwarebytes, and now Ad-Aware and AVG 2011 antivirus. At that point I could see the files on my other computer. But now I have hooked the drive up to my new MacBook Pro (Windows installation) and I can't see the files, although the antivirus does find all of them. Mac OS X wrote a 400 MB directory there which I can see, but nothing else. How do I fix this? Which program will fix it?

    Read the article

  • Apache on Win32: Slow Transfers of single, static files in HTTP, fast in HTTPS

    - by Michael Lackner
    I have a weird problem with Apache 2.2.15 on Windows 2000 Server SP4. Basically, I am trying to serve larger static files: images, videos, etc. The download seems to be capped at around 550kB/s, even over 100Mbit LAN. I tried other protocols (FTP/FTPS/FTP+ES/SCP/SMB), and they are all in the multi-megabyte range. The strangest thing is that when using Apache with HTTPS instead of HTTP, it serves very fast, around 2.7MByte/s! I also tried the AnalogX SimpleWWW server just to test its plain HTTP speed, and it gave me a healthy 3.3Mbyte/s. I am at a total loss here. I searched the web and tried to change the following Apache configuration directives in httpd.conf, one at a time, mostly to no avail at all:

      SendBufferSize 1048576 #(tried multiples of that too, up to 100Mbytes)
      EnableSendfile Off #(minor performance boost)
      EnableMMAP Off
      Win32DisableAcceptEx
      HostnameLookups Off #(default)

    I also tried to tune the following registry parameters, setting their values to 4194304 in decimal (they are REG_DWORD), and rebooting afterwards:

      HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters\DefaultReceiveWindow
      HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters\DefaultSendWindow

    Additionally, I tried to install mod_bw, which sets the event timer precision to 1ms and allows for bandwidth throttling. According to some people it boosts static file serving performance when set to unlimited bandwidth for everybody. Unfortunately, it did nothing for me. So:

      AnalogX HTTP: 3300kB/s
      Gene6 FTPD, plain: 3500kB/s
      Gene6 FTPD, Implicit and Explicit SSL, AES256 Cipher: 1800-2000kB/s
      freeSSHD: 1100kB/s
      SMB shared folder: about 3000kB/s
      Apache HTTP, plain: 550kB/s
      Apache HTTPS: 2700kB/s

    Clients that were used in the bandwidth testing:

      Internet Explorer 8 (HTTP, HTTPS)
      Firefox 8 (HTTP, HTTPS)
      Chrome 13 (HTTP, HTTPS)
      Opera 11.60 (HTTP, HTTPS)
      wget under CygWin (HTTP, HTTPS)
      FileZilla (FTP, FTPS, FTP+ES, SFTP)
      Windows Explorer (SMB)

    Generally, transfer speeds are not too high, but that's because the server machine is an old quad Pentium Pro 200MHz machine with 2GB RAM. However, I would like Apache to serve at least 2Mbyte/s instead of 550kB/s, and that already works with HTTPS easily, so I fail to see why plain HTTP is so crippled. I am using a Kerio Winroute Firewall, but with no throttling and no special filters peeking into HTTP traffic, just the plain firewall functionality for blocking/allowing connections. The Apache error.log (LogLevel info) shows no warnings and no errors. There is also nothing strange to be seen in access.log. I have already stripped my httpd.conf down to the bare minimum just to make sure nothing is interfering, but that didn't help either. If you have any idea, help would be greatly appreciated, since I am totally out of ideas! Thanks!

    Edit: I have now tried a newer Apache 2.2.21 to see if it makes any difference. However, the behaviour is exactly the same.

    Edit 2: KM01 has requested a sniff of the HTTP headers, so here comes the LiveHTTPHeaders output (an extension for Firefox). The output is generated by downloading a single file called "elephantsdream_source.264", which is an H.264/AVC elementary video stream under an open-source license. I have taken the liberty of editing the URL, removing folders and changing the actual server's domain name to www.mydomain.com. Here it is:

    LiveHTTPHeaders, plain HTTP:

      http://www.mydomain.com/elephantsdream_source.264
      GET /elephantsdream_source.264 HTTP/1.1
      Host: www.mydomain.com
      User-Agent: Mozilla/5.0 (Windows NT 5.2; WOW64; rv:6.0.2) Gecko/20100101 Firefox/6.0.2
      Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
      Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
      Accept-Encoding: gzip, deflate
      Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
      Connection: keep-alive

      HTTP/1.1 200 OK
      Date: Wed, 21 Dec 2011 20:55:16 GMT
      Server: Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/0.9.8r PHP/5.2.17
      Last-Modified: Thu, 28 Oct 2010 20:20:09 GMT
      Etag: "c000000013fa5-29cf10e9-493b311889d3c"
      Accept-Ranges: bytes
      Content-Length: 701436137
      Keep-Alive: timeout=15, max=100
      Connection: Keep-Alive
      Content-Type: text/plain

    LiveHTTPHeaders, HTTPS:

      https://www.mydomain.com/elephantsdream_source.264
      GET /elephantsdream_source.264 HTTP/1.1
      Host: www.mydomain.com
      User-Agent: Mozilla/5.0 (Windows NT 5.2; WOW64; rv:6.0.2) Gecko/20100101 Firefox/6.0.2
      Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
      Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
      Accept-Encoding: gzip, deflate
      Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
      Connection: keep-alive

      HTTP/1.1 200 OK
      Date: Wed, 21 Dec 2011 20:56:57 GMT
      Server: Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/0.9.8r PHP/5.2.17
      Last-Modified: Thu, 28 Oct 2010 20:20:09 GMT
      Etag: "c000000013fa5-29cf10e9-493b311889d3c"
      Accept-Ranges: bytes
      Content-Length: 701436137
      Keep-Alive: timeout=15, max=100
      Connection: Keep-Alive
      Content-Type: text/plain

    Read the article

  • disable "SSL 2.0+ upgrade support" in nginx

    - by Bhargava
    I evaluated the SSL configuration of my server with the Qualys SSL Labs page (https://www.ssllabs.com/ssldb/index.html) and found the entry "SSL 2.0+ upgrade support" marked as yes. I want to disable this SSLv2 handshake too. I searched around and found http://forum.nginx.org/read.php?2,104032m, which points to creating an openssl.cnf file. I have a naive question here: after creating the file, does one need to re-key the certificate for this to work? Are there any other steps to follow? I use nginx 1.0.11 and OpenSSL 1.0.0e-fips (6 Sep 2011), and I have set ssl_ciphers in nginx to SSLv3 TLSv1;
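
    For what it's worth, disabling SSLv2 is a protocol setting, not a key or certificate property, so no re-keying should be needed. In nginx the relevant directive is ssl_protocols (ssl_ciphers only selects cipher suites). A minimal sketch of checking, reloading and re-testing from outside (the hostname is a placeholder):

      # see what the ssl directives currently say
      grep -rn "ssl_protocols\|ssl_ciphers" /etc/nginx/
      # after setting "ssl_protocols SSLv3 TLSv1;" in the ssl server block:
      nginx -t && nginx -s reload
      # this should now fail, if your local openssl build still knows how to speak SSLv2
      openssl s_client -connect www.example.com:443 -ssl2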

    Read the article

  • Large concurrent user performance issues for Apache + mod_jk + GlassFish v3.1 clusters

    - by user10035
    I am running a Java EE 6 EAR application on GlassFish v3.1 (2 clusters with 2 instances each), load balanced by Apache v2.2 with mod_jk, all on the same server (Windows Server 2003 R2, Intel Xeon CPU X5670 @ 2.93GHz, 6GB RAM, 2 CPUs). The web application is accessed by around 100 users. When they all try to access it at the same time every morning, around 8am, the response is very slow while trying to access the main JSF home page. Apart from that, I have seen the CPU usage of the httpd process spike up to 99% frequently during the day, and I start seeing errors in the mod_jk.log file:

      [Wed Jun 08 08:25:43 2011] [9380:8216] [info] ajp_process_callback::jk_ajp_common.c (1885): Writing to client aborted or client network problems
      [Wed Jun 08 08:25:43 2011] [9380:8216] [info] ajp_service::jk_ajp_common.c (2543): (myAppLocalInstance4) sending request to tomcat failed (unrecoverable), because of client write error (attempt=1)

    Any suggestions on how I can go about improving this? The Apache configuration is mostly the default, as shown below:

      ServerRoot "C:/Program Files/Apache Software Foundation/Apache2.2"
      Listen 80
      LoadModule actions_module modules/mod_actions.so
      LoadModule alias_module modules/mod_alias.so
      LoadModule asis_module modules/mod_asis.so
      LoadModule auth_basic_module modules/mod_auth_basic.so
      LoadModule authn_default_module modules/mod_authn_default.so
      LoadModule authn_file_module modules/mod_authn_file.so
      LoadModule authz_default_module modules/mod_authz_default.so
      LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
      LoadModule authz_host_module modules/mod_authz_host.so
      LoadModule authz_user_module modules/mod_authz_user.so
      LoadModule autoindex_module modules/mod_autoindex.so
      LoadModule cgi_module modules/mod_cgi.so
      LoadModule dir_module modules/mod_dir.so
      LoadModule env_module modules/mod_env.so
      LoadModule include_module modules/mod_include.so
      LoadModule isapi_module modules/mod_isapi.so
      LoadModule log_config_module modules/mod_log_config.so
      LoadModule mime_module modules/mod_mime.so
      LoadModule negotiation_module modules/mod_negotiation.so
      LoadModule setenvif_module modules/mod_setenvif.so
      <IfModule !mpm_netware_module>
        <IfModule !mpm_winnt_module>
          User daemon
          Group daemon
        </IfModule>
      </IfModule>
      DocumentRoot "C:/Program Files/Apache Software Foundation/Apache2.2/htdocs"
      <Directory />
        Options FollowSymLinks
        AllowOverride None
        Order deny,allow
        Deny from all
      </Directory>
      <Directory "C:/Program Files/Apache Software Foundation/Apache2.2/htdocs">
        Options Indexes FollowSymLinks
        AllowOverride None
        Order allow,deny
        Allow from all
      </Directory>
      <IfModule dir_module>
        DirectoryIndex index.html
      </IfModule>
      <FilesMatch "^\.ht">
        Order allow,deny
        Deny from all
        Satisfy All
      </FilesMatch>
      ErrorLog "logs/error.log"
      LogLevel warn
      <IfModule log_config_module>
        LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
        LogFormat "%h %l %u %t \"%r\" %>s %b" common
        <IfModule logio_module>
          LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
        </IfModule>
        CustomLog "logs/access.log" common
      </IfModule>
      <IfModule alias_module>
        ScriptAlias /cgi-bin/ "C:/Program Files/Apache Software Foundation/Apache2.2/cgi-bin/"
      </IfModule>
      <Directory "C:/Program Files/Apache Software Foundation/Apache2.2/cgi-bin">
        AllowOverride None
        Options None
        Order allow,deny
        Allow from all
      </Directory>
      DefaultType text/plain
      <IfModule mime_module>
        TypesConfig conf/mime.types
        AddType application/x-compress .Z
        AddType application/x-gzip .gz .tgz
      </IfModule>
      Include conf/extra/httpd-mpm.conf
      <IfModule ssl_module>
        SSLRandomSeed startup builtin
        SSLRandomSeed connect builtin
      </IfModule>
      LoadModule jk_module modules/mod_jk.so
      JkWorkersFile conf/workers.properties
      JkLogFile logs/mod_jk.log
      JkLogLevel info
      JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "
      JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
      JkRequestLogFormat "%w %V %T"
      JkMount /myApp/* loadbalancerLocal
      JkMount /myAppRemote/* loadbalancerRemote
      JkMount /myApp loadbalancerLocal
      JkMount /myAppRemote loadbalancerRemote

    The workers.properties config file is:

      worker.list=loadbalancerLocal,loadbalancerRemote
      worker.myAppLocalInstance1.type=ajp13
      worker.myAppLocalInstance1.host=localhost
      worker.myAppLocalInstance1.port=8109
      worker.myAppLocalInstance1.lbfactor=1
      worker.myAppLocalInstance1.socket_keepalive=1
      worker.myAppLocalInstance1.socket_timeout=1000
      worker.myAppLocalInstance2.type=ajp13
      worker.myAppLocalInstance2.host=localhost
      worker.myAppLocalInstance2.port=8209
      worker.myAppLocalInstance2.lbfactor=1
      worker.myAppLocalInstance2.socket_keepalive=1
      worker.myAppLocalInstance2.socket_timeout=1000
      worker.myAppLocalInstance3.type=ajp13
      worker.myAppLocalInstance3.host=localhost
      worker.myAppLocalInstance3.port=8309
      worker.myAppLocalInstance3.lbfactor=1
      worker.myAppLocalInstance3.socket_keepalive=1
      worker.myAppLocalInstance3.socket_timeout=1000
      worker.myAppLocalInstance4.type=ajp13
      worker.myAppLocalInstance4.host=localhost
      worker.myAppLocalInstance4.port=8409
      worker.myAppLocalInstance4.lbfactor=1
      worker.myAppLocalInstance4.socket_keepalive=1
      worker.myAppLocalInstance4.socket_timeout=1000
      worker.myAppRemoteInstance1.type=ajp13
      worker.myAppRemoteInstance1.host=localhost
      worker.myAppRemoteInstance1.port=8509
      worker.myAppRemoteInstance1.lbfactor=1
      worker.myAppRemoteInstance1.socket_keepalive=1
      worker.myAppRemoteInstance1.socket_timeout=1000
      worker.myAppRemoteInstance2.type=ajp13
      worker.myAppRemoteInstance2.host=localhost
      worker.myAppRemoteInstance2.port=8609
      worker.myAppRemoteInstance2.lbfactor=1
      worker.myAppRemoteInstance2.socket_keepalive=1
      worker.myAppRemoteInstance2.socket_timeout=1000
      worker.myAppRemoteInstance3.type=ajp13
      worker.myAppRemoteInstance3.host=localhost
      worker.myAppRemoteInstance3.port=8709
      worker.myAppRemoteInstance3.lbfactor=1
      worker.myAppRemoteInstance3.socket_keepalive=1
      worker.myAppRemoteInstance3.socket_timeout=1000
      worker.myAppRemoteInstance4.type=ajp13
      worker.myAppRemoteInstance4.host=localhost
      worker.myAppRemoteInstance4.port=8809
      worker.myAppRemoteInstance4.lbfactor=1
      worker.myAppRemoteInstance4.socket_keepalive=1
      worker.myAppRemoteInstance4.socket_timeout=1000
      worker.loadbalancerLocal.type=lb
      worker.loadbalancerLocal.sticky_session=True
      worker.loadbalancerLocal.balance_workers=myAppLocalInstance1,myAppLocalInstance2,myAppLocalInstance3,myAppLocalInstance4
      worker.loadbalancerRemote.type=lb
      worker.loadbalancerRemote.balance_workers=myAppRemoteInstance1,myAppRemoteInstance2,myAppRemoteInstance3,myAppRemoteInstance4
      worker.loadbalancerRemote.sticky_session=True

    Read the article

  • SVN: Error validating server certificate for svn hook linux

    - by Dr Casper Black
    Hi, I managed to set up an SVN (over SSL) server and a TortoiseSVN client on Windows. I made a post-commit hook for a test project; the hook updates the web directory so the PHP app can run the newest version. It all works when done over the shell. The only problem is that when I commit a change from the client on Windows, the change is committed but the hook fails with post-commit hook failed (exit code 1) and this output:

      Error validating server certificate for 'https://SERVER_IP:443':
      - The certificate is not issued by a trusted authority. Use the fingerprint to validate the certificate manually!
      - The certificate hostname does not match.
      Certificate information:
      - Hostname: DEVSRVR
      - Valid: from Fri, 28 Jan 2011 09:22:45 GMT until Sat, 28 Jan 2012 09:22:45 GMT
      - Issuer: PHP, SS, SS, SRB
      - Fingerprint: 5f:d0:50:d6:dd:a6:d4:64:a5:ac:3a:4b:7c:7d:33:e3:75:dd:23:9f
      (R)eject, accept (t)emporarily or accept (p)ermanently?
      svn: OPTIONS of 'https://SERVER_IP/svn/myproject/trunk': Server certificate verification failed: certificate issued for a different hostname, issuer is not trusted (https://SERVER_IP)
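
    The hook runs as the repository/web-server user, which has never permanently accepted the self-signed certificate, so the non-interactive svn client gives up at the prompt. A common workaround is to make the update in the hook explicitly non-interactive; this is only a sketch, and the deploy path, credentials and the Subversion 1.6+ --trust-server-cert flag are assumptions:

      #!/bin/sh
      # post-commit: $1 = repository path, $2 = revision committed
      REPOS="$1"
      REV="$2"
      /usr/bin/svn update /var/www/myproject \
          --non-interactive --trust-server-cert \
          --username deploy --password 'secret' >> /var/log/svn-deploy.log 2>&1
      # alternative: run "svn info https://SERVER_IP/svn/myproject/trunk" once as the
      # hook's user and answer (p) to accept the certificate permanently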

    Read the article

  • Bad disks in ancient server

    - by Joel Coel
    I have a 1998-era Netware 3.12 server that runs everything on our campus: general ledger, purchasing, payroll, student information, grades, you name it. The server has an Adaptec RAID controller with two volumes: RAID 1, 2 17GB scsi disks, Seagate ST318417W RAID 5, 3 4GB scsi disks, 2 Seagate ST34573W and 1 ST34572W. We are currently in the early stages of a project to replace this system, but you don't just jump into a new system like that and so I need to keep this server running until at least November 2011. This week we had not one but two hard drives fail. Thankfully they are from different volumes and we're able to keep running for the moment, but given the close nature of these failures I have serious doubts that I'll be able to avoid catastrophic failure from this server through the November target as is without restoring the RAID redundancy — it'll only take one more drive failure anywhere and I'm completely hosed. We are fortunate enough to have exact match "spares" lying around for both drives, but the spares are in unknown condition. I tried swapping just them in, but the RAID controller isn't smart enough to handle this and it renders the system unbootable. As for the RAID controller itself, there is utility I can get into during POST via a Ctrl-A shortcut, but I can't do much useful from there. To actually manage volumes I must first boot in to Netware, at which point I can use CI/O Array Management Software Version 2.0 to actually look at volume information. I suspect that the normal way to manage things is to boot from a special floppy with the controller software on it, but that floppy is long gone. Going through the options in the RAID software, I think the only supported way to replace a disk in an existing RAID volume is to physically add the disk, boot up and configure it as a "spare" for a volume, force the volume to use the spare to replace an existing down disk (and at this point I'm only guessing) so that the down disk becomes the spare, repair the volume, remove the spare from the volume, and then shut down and remove the disk. Then start all over for the other failed disk. All this amounts to a lot of downtime, assuming I can even make it work and that my spares are any good. As for finding reliable spares, I have no clue where to even begin looking to find a new 4GB scsi drive, or even which exact scsi system I'm looking for, as it's gone through a few different iterations over time. Another option is to migrate this to a virtual machine (hyper-v), but all previous attempts we've made in this area have failed to get very far. When this machine was installed I was just graduating from high school, and so it requires lower level knowledge of netware and dos than I ever developed, or if I did have since forgotten (I'm not exactly a dos neophyte, either). Part of my problem is this is a high-use server, and taking it down for a few days to figure things out isn't gonna fly very well. As for the question, I'm looking for anything that might be helpful in this situation: a recommendation on a place to find good spares from this era, personal experience repairing RAID volumes using a similar controller or building a hyper-v vm from an old netware server, a line on a floppy with better software for the RAID controller, recommendation on a good Novell consultant in Nebraska that would be able to put things right, a whole other option I haven't considered yet, etc. 
    Update: For backups, we have good (recently verified via restore) backups of the data only -- nothing for the software that actually runs things.

    Update 2: Just a progress report that I currently have a working Netware 3.12 install in VMWare Virtual Server 2.0, thanks largely to the guide I found here: http://cerbulescubogdan.blogspot.com/2010/11/novell-netware-312-on-vmware.html The next steps are preparing empty netware volumes to match the additional volumes on my existing server, taking a dump of everything on the C:\ drive and netware volumes on my existing server, and figuring out from that information what modules need added to netware, installing my licenses (we do still have that disk, if it's any good), and moving data over. I have approval to bring the server down for a week after the first of the year (sadly not before), so, aside from creating empty volumes, the rest of the work will have to wait until then.

    Final Update (Jan 5, 2011): I was able to get spares working in both raid arrays without data loss this week. Both are now listed by the controller as "FAULT TOLLERANT" (yay!). I was also able to build on the progress from my last update and now have a functional "spare" server in VMWare Server 2.0. The spare can run and use our erp software, but I can't put it into production because I can't (yet) print from that box (and I have no idea why). Even so, this VM will do in a pinch if I have no other choice, and between it and the repaired RAID arrays I'm comfortable pushing on until I can junk the machine in November.

    Read the article

  • man: command not found in zsh (Mac OS X 10.5.8)

    - by Oscar
    I changed to zsh from the default shell by changing the "Shells open with" preference in Terminal to "Command (complete path)", set to /bin/zsh. While most things seem to work, when I tried to view the man page for a command I got a "permission denied" message, and when I tried sudo I got "man: command not found". I changed back to the default shell (/bin/tcsh), and this is what I get when I open a new shell:

      Last login: Fri Nov 18 13:53:50 on ttys000
      Fri Nov 18 13:55:21 CST 2011
      /usr/bin/manpath: Permission denied.

    If I try man, I get the same "command not found" message. I guess there is something wrong with my PATH, but I have no idea how to fix it. echo $PATH in tcsh gives:

      /sw/bin:/sw/sbin:/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/texbin

    In zsh, it gives:

      /usr/bin:/bin:/sw/bin:/usr/local/bin:/usr/local/teTeX/bin/powerpc-apple-darwin-current:/usr/sbin:/sbin:/usr/texbin:/usr/X11/bin

    Any ideas?
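
    The tcsh output points at a permission problem on /usr/bin/manpath rather than at $PATH itself (although Fink's /sw/bin sits early in the path and could be shadowing the system man). A quick diagnostic sketch:

      # which man/manpath does the shell find, and are the system ones executable?
      which -a man manpath
      ls -l /usr/bin/man /usr/bin/manpath /sw/bin/man* 2>/dev/null
      # if the system binaries have lost their execute bit, restore it
      sudo chmod a+rx /usr/bin/man /usr/bin/manpath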

    Read the article

  • Debian on Hyper-V

    - by Tobia
    I installed Debian with kernel 2.6.32-5-686 in a Hyper-V virtual machine and had to add a legacy network card. I followed this tutorial to add the Hyper-V drivers: http://www.microsofttranslator.com/bv.aspx?ref=Internal&from=ru&to=en&a=http://blogs.technet.com/b/abeshkov/archive/2011/03/17/hyperv_5f00_debian.aspx -- but when I reboot with the new kernel it crashes during boot. Is there any other way to load the Hyper-V drivers? I really need to replace that legacy network card because my Debian machine is going to be used as a proxy. Thank you.
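
    On the squeeze 2.6.32 kernel the Hyper-V drivers live in the staging tree, so before rebuilding anything it is worth checking whether the hv_* modules already exist and just need to be loaded and added to the initramfs. A sketch (module names are the 2.6.32 staging names; if modinfo cannot find them, you are back to the build route in the tutorial):

      # are the staging Hyper-V modules present in this kernel?
      modinfo hv_vmbus hv_netvsc hv_storvsc hv_blkvsc
      # if so, load them and make them available at boot
      modprobe hv_vmbus; modprobe hv_netvsc; modprobe hv_storvsc; modprobe hv_blkvsc
      printf "hv_vmbus\nhv_netvsc\nhv_storvsc\nhv_blkvsc\n" >> /etc/initramfs-tools/modules
      update-initramfs -u
      # the synthetic NIC should then appear (named seth0 on some builds), and the
      # legacy adapter can be removed from the VM settings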

    Read the article

  • Is there a browser addon that redirects a link to another, automatically modifying part of the address?

    - by kokbira
    Well, I'm looking for an addon that can redirect a link when I click on it, in the following ways:

      - Change https to http
      - Change twitter.com/xxxxxxxxx to, for example, dabr.co.uk/xxxxxxxxx
      - (added 2010-02-15, 20:30 GMT) Remove "?utm_source=twitterfeed&utm_medium=twitter" from the end of a URL
      - Generally, replace one string with another (e.g. youtube -> yt, so www.example.com/visitingyoutube would become www.example.com/visitingyt)

    PS (added 2010-02-15, 20:30 GMT): @oKtosiTe, a clearer use case:

      1. Suppose there is a link on Twitter that points to URL X, where URL X is http://www.newspapersite.com/2011-02-15_1304.html?utm_source=twitterfeed&utm_medium=twitter
      2. In that case, I want to open that URL only up to ".html", i.e. I want to open URL Y, which is http://www.newspapersite.com/2011-02-15_1304.html
      3. What happens when I click normally on that link: 3.1. the browser goes to URL X.
      4. What I want to happen when I click on that link: 4.1. the addon must transform URL X to URL Y (I must configure it beforehand to change the URL piece "?utm_source=twitterfeed&utm_medium=twitter" to ""); 4.2. the browser goes to URL Y.

    Read the article

  • racoon-tool doesn't generate full racoon.conf file in /var/lib/racoon/racoon.conf

    - by robthewolf
    I am using ipsec-tools/racoon to create my VPN, and racoon-tool to configure racoon.conf, but when I run racoon-tool reload it only generates the first section (Global items). When I run racoon-tool I get:

      # racoon-tool reload
      Loading SAD and SPD...
      SAD and SPD loaded.
      Configuring racoon...done.

    This is the entire file /var/lib/racoon/racoon.conf:

      #
      # Racoon configuration for Samuel
      # Generated on Wed Jan 5 21:31:49 2011 by racoon-tool
      #
      #
      # Global items
      #
      path pre_shared_key "/etc/racoon/psk.txt";
      path certificate "/etc/racoon/certs";
      log debug;

    I cannot find a solution anywhere as to why this is happening. Please help.

    Read the article

  • Problem with Xen, xvda and sda

    - by Javier J. Salmeron Garcia
    I am creating a cloud for my university using Eucalyptus with Xen (the PCs have Debian Squeeze 64-bit installed). I have a problem with the following guest configuration:

      #
      # Configuration file for the Xen instance evenmorefinalfoo, created
      # by xen-tools 4.2 on Thu May 26 11:03:06 2011.
      #
      #
      # Kernel + memory size
      #
      kernel  = '/boot/vmlinuz-2.6.32-5-xen-amd64'
      ramdisk = '/boot/initrd.img-2.6.32-5-xen-amd64'
      vcpus   = '1'
      memory  = '128'
      #
      # Disk device(s).
      #
      root = '/dev/sda2 ro'
      disk = [
          'file:/home/xen/domains/evenmorefinalfoo/disk.img,sda2,w',
          'file:/home/xen/domains/evenmorefinalfoo/swap.img,sda1,w',
      ]

    As you can see, the disk and swap images are meant to be mounted as sda1 and sda2. However, when I start the guest they are mounted as xvda1 and xvda2, which causes an error. Is there anything I can do about that? It seems like a Xen error. Thank you in advance.
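
    This is expected rather than a bug: with the pv_ops 2.6.32-5-xen kernel, the paravirtual block devices always appear as xvd* inside the guest, whatever the config calls them. The usual fix is to use xvda* names consistently in the guest config (and in the guest's /etc/fstab). A sketch of the change, assuming xen-tools' default config location and the xm toolstack:

      cp /etc/xen/evenmorefinalfoo.cfg /etc/xen/evenmorefinalfoo.cfg.bak
      # point root at the device the guest kernel actually creates, and export
      # the images under the same xvda* names
      sed -i \
          -e "s|/dev/sda2 ro|/dev/xvda2 ro|" \
          -e "s|disk.img,sda2,w|disk.img,xvda2,w|" \
          -e "s|swap.img,sda1,w|swap.img,xvda1,w|" \
          /etc/xen/evenmorefinalfoo.cfg
      xm create /etc/xen/evenmorefinalfoo.cfg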

    Read the article

  • rm command does not ask before deleting

    - by apis17
    I have a CentOS VPS created using Xen + OpenVZ virtualization:

      -bash-3.2# uname -a
      Linux host.domain.com 2.6.18-274.7.1.el5.028stab095.1xen #1 SMP Mon Oct 24 22:10:04 MSD 2011 i686 i686 i386 GNU/Linux

    No questions are asked when I want to delete files:

      -bash-3.2# vi test.txt
      -bash-3.2# rm test.txt
      -bash-3.2#

    The main server (not the virtualized one) asks me first before deleting any files:

      [root@main ~]# vi test.txt
      [root@main ~]# rm test.txt
      rm: remove regular file `test.txt'? y
      [root@main ~]#

    How do I configure the virtualized environment to prompt me before deleting any files? Thank you.
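
    The prompt on the main server almost certainly comes from the interactive aliases CentOS ships in root's ~/.bashrc (alias rm='rm -i' and friends); the OpenVZ template image simply lacks them. A minimal sketch to get the same behaviour inside the VPS:

      # recreate the stock CentOS root aliases
      echo "alias rm='rm -i'" >> /root/.bashrc
      echo "alias cp='cp -i'" >> /root/.bashrc
      echo "alias mv='mv -i'" >> /root/.bashrc
      # pick them up in the current shell
      source /root/.bashrc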

    Read the article

  • What is needed to invoke LibreOffice running just the macro without the GUI?

    - by C.W.Holeman II
    Invoking LibreOffice and running a macro via the GUI works as expected, producing three HTML files, one for each spreadsheet page:

      $ libreoffice x.ods
      Tools > Macros > Run Macros... > Library: LibreOffice Macros > ExportSheetsToHTML > Macro Names: exportsheetstohtml.js > Run

    When attempting to invoke just the macro, it simply hangs:

      $ libreoffice -invisible -nofirststartwizard -headless -norestore x.ods "macro:///LibreOffice Macros.ExportSheetsToHTML.exportsheetstohtml.js"
      $ ps x | grep libreoffice
      11286 pts/0 S+ 0:00 /bin/sh /opt/libreoffice/program/soffice -invisible -nofirststartwizard -headless -norestore x.ods macro:///LibreOffice Macros.ExportSheetsToHTML.exportsheetstohtml.js
      11296 pts/0 Sl+ 0:58 /opt/libreoffice/program/soffice.bin -invisible -nofirststartwizard -headless -norestore x.ods macro:///LibreOffice Macros.ExportSheetsToHTML.exportsheetstohtml.js

    Version info:

      Linux road 2.6.32-28-generic #55-Ubuntu SMP Mon Jan 10 21:21:01 UTC 2011 i686 GNU/Linux
      LibreOffice 3.3.0 OOO330m19 (Build:6) tag libreoffice-3.3.0.4
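
    A frequent cause of this kind of hang is the headless run competing for the same user profile as another (possibly crashed) soffice instance. A sketch worth trying, assuming LibreOffice 3.3's -env:UserInstallation option and that it is acceptable to kill stray instances (the ~/.libreoffice lock path is an assumption):

      # make sure no other instance is holding the profile lock
      pkill -f soffice.bin
      rm -f ~/.libreoffice/3/.lock
      # run the macro with a private, throw-away profile
      libreoffice -headless -invisible -norestore \
          "-env:UserInstallation=file:///tmp/lo-profile" \
          x.ods "macro:///LibreOffice Macros.ExportSheetsToHTML.exportsheetstohtml.js"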

    Read the article

  • Server freeze - how to debug

    - by Petr Peller
    I am running a Debian virtual server with Apache, PHP, and MySQL. There is just one website with very low traffic on it, but the server freezes very often (almost every day) and stops responding. When this happens the server is unreachable from a web browser or over SSH, and I have to go to my provider's administration panel and perform a hard reset, after which the server seems to work fine again. How can I find out what is causing the freezes?

      Linux vm2797 2.6.32-5-amd64 #1 SMP Tue Jun 14 09:42:28 UTC 2011 x86_64 GNU/Linux
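
    Since the box is unreachable once it hangs, the practical move is to make it record evidence continuously so there is something to read after each hard reset. A sketch of a minimal setup on Debian (package and log names as in squeeze):

      # keep CPU/memory/IO history on disk every 10 minutes
      apt-get install sysstat
      sed -i 's/ENABLED="false"/ENABLED="true"/' /etc/default/sysstat
      # after the next freeze and reset, look for OOM kills and hung tasks just before the gap
      grep -iE "oom|out of memory|blocked for more than" /var/log/kern.log /var/log/syslog
      # and review memory/load in the minutes leading up to the freeze
      sar -r -f /var/log/sysstat/sa$(date +%d)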

    Read the article

  • MBP becomes very hot after using Xcode

    - by Globalhawk
    Hardware: MBP, early 2011 model. OS: Mountain Lion. App: Xcode 4.5.2. Problem: every time I start Xcode, 2 or 3 processes called "git" start running, but when I quit Xcode the "git" processes don't quit and keep using a lot of CPU. The computer then becomes quite hot and the battery drains very quickly. If I manually kill these processes the problem is gone. I tried reinstalling Xcode several times, but the problem comes back every time. It drives me crazy. Any help will be appreciated!
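
    Until the underlying Xcode bug is fixed, you can at least see what the leftover git processes are doing and clean them up from Terminal. A small sketch:

      # what are they running, for how long, and at what CPU cost?
      ps -axo pid,etime,%cpu,command | grep "[g]it"
      # inspect one of them (replace <pid> with a number from the line above)
      lsof -p <pid> | grep -i cwd
      # once Xcode has quit, kill the strays
      pkill -x git    # -x matches the process name exactly, so Xcode itself is untouched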

    Read the article

  • R data.frame with stacked column titles for LaTeX output with xtable

    - by hhh
      > w <- data.frame(c(0,0,1,1.3,2.1), c(0,0.6,0.9,1.6091,1.6299), c(258,141,206.4,125.8,140.5), c(162,162.7,162.4,162,162))
      > colnames(w) <- c('Worst Cum', 'Best Cum', 'Worst Points', 'Best Points')

    Wrong (what the code gives):

        Worst Cum  Best Cum  Worst Points  Best Points
      1       0.0    0.0000         258.0        162.0
      2       0.0    0.6000         141.0        162.7
      3       1.0    0.9000         206.4        162.4
      4       1.3    1.6091         125.8        162.0
      5       2.1    1.6299         140.5        162.0

    Goal (how?):

              CUM              Points
        Worst    Best     Worst    Best
      1   0.0   0.0000    258.0   162.0
      2   0.0   0.6000    141.0   162.7
      3   1.0   0.9000    206.4   162.4
      4   1.3   1.6091    125.8   162.0
      5   2.1   1.6299    140.5   162.0

    Trial 1: fails with multiple data.frames

      > a <- data.frame(c(0,0,1,1.3,2.1), c(0,0.6,0.9,1.6091,1.6299))
      > b <- data.frame(c(258,141,206.4,125.8,140.5), c(162,162.7,162.4,162,162))
      > c <- data.frame(cbind(a,b))
      > colnames(c) <- c('Cum', 'Points')
      > colnames(a) <- c('Worst', 'Best')
      > colnames(b) <- c('Worst', 'Best')

    and

      > xtable(c)
      % latex table generated in R 2.13.1 by xtable 1.6-0 package
      % Thu Nov 24 03:43:34 2011
      \begin{table}[ht]
      \begin{center}
      \begin{tabular}{rrrrr}
      \hline
       & Cum & Points & NA & NA \\
      \hline
      1 & 0.00 & 0.00 & 258.00 & 162.00 \\
      2 & 0.00 & 0.60 & 141.00 & 162.70 \\
      3 & 1.00 & 0.90 & 206.40 & 162.40 \\
      4 & 1.30 & 1.61 & 125.80 & 162.00 \\
      5 & 2.10 & 1.63 & 140.50 & 162.00 \\
      \hline
      \end{tabular}
      \end{center}
      \end{table}

      > xtable(a)
      % latex table generated in R 2.13.1 by xtable 1.6-0 package
      % Thu Nov 24 03:45:06 2011
      \begin{table}[ht]
      \begin{center}
      \begin{tabular}{rrr}
      \hline
       & Worst & Best \\
      \hline
      1 & 0.00 & 0.00 \\
      2 & 0.00 & 0.60 \\
      3 & 1.00 & 0.90 \\
      4 & 1.30 & 1.61 \\
      5 & 2.10 & 1.63 \\
      \hline
      \end{tabular}
      \end{center}
      \end{table}

    This is wrong because it replaces the inner headers with the higher-level headers (note the "NA" values).

    Read the article

  • Issues connecting to WPA2 with user authentication on Mavericks?

    - by heinst
    I was on all the builds of the Mavericks beta and connecting to my university's network was fine. Then I upgraded to the public release and now I can't seem to connect to the Internet: I can connect to other networks, but not my school's. It's a WPA2 network with user authentication. My MacBook is a 2011 (?) 2.2 GHz first-generation quad-core i7 with 8 GB of RAM. Does anyone else have the same issue? Any tips on how to fix it? Thanks! heinst
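
    One thing that often clears up post-upgrade 802.1X trouble is making the Mac forget the network completely (the preferred-network entry plus the saved credentials) and joining it again from scratch. A sketch, assuming the Wi-Fi interface is en0 and using "CampusNet" as a stand-in for the real SSID:

      # confirm the interface name and list remembered networks
      networksetup -listallhardwareports
      networksetup -listpreferredwirelessnetworks en0
      # forget the school network, then rejoin it from the Wi-Fi menu and
      # re-enter the username/password when prompted
      sudo networksetup -removepreferredwirelessnetwork en0 "CampusNet"
      # the saved 802.1X password lives in the login keychain; delete the old
      # entry there too (Keychain Access, or the security command if you prefer)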

    Read the article

  • Is it possible to modify a color scheme and window decorations in Xfce4?

    - by Juhele
    I am just testing PCLinuxOS Phoenix XFCE Edition 2011-07, which I would like to install on my grandpa's PC instead of the older PCLOS 2009 with KDE3 (which is almost impossible to upgrade). The PC is relatively old (Sempron 2200+ CPU, MSI K7N2GM2 board with integrated GeForce 440MX-series graphics, 1 GB RAM and an 80 GB IDE HDD), and I thought it would be too weak for KDE4. I have used Xfce in the past (Sam Linux 2006 and 2007), but in the new Xfce4 I am not able to change the window colour scheme and the window decorations: the settings manager offers switching between preinstalled themes, but is it possible to modify them with some GUI?
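
    Xfce's settings manager only switches between installed themes, but a theme is just a directory of plain-text files, so you can copy one into your home directory and edit the colours and window borders by hand. A rough sketch (the theme name "Albatross" is only an example; start from whichever theme you currently use):

      # make a private, editable copy of an installed theme
      mkdir -p ~/.themes
      cp -r /usr/share/themes/Albatross ~/.themes/Albatross-custom
      # widget/panel colours live in:   ~/.themes/Albatross-custom/gtk-2.0/gtkrc
      # window decorations (xfwm4) in:  ~/.themes/Albatross-custom/xfwm4/themerc
      # after editing, pick "Albatross-custom" in Settings > Appearance and
      # Settings > Window Manager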

    Read the article

  • The "System" process has my BCD open (Windows 7)?

    - by Epic_orange
    I discovered this when I tried to use EasyBCD to, well, edit my BCD, but it said "The current file is in use and cannot be opened by EasyBCD..." So I tried to use handle.exe to close the handle, but it said:

      \Handle>handle -c 15C -p 4
      Handle v3.46
      Copyright (C) 1997-2011 Mark Russinovich
      Sysinternals - www.sysinternals.com
      15C: File (---) C:\Boot\BCD
      Close handle 15C in System (PID 4)? (y/n) y
      Error closing handle: The handle is invalid.

    Why does System have my BCD open, and how can I stop it? I have tried rebooting and Googling.

    Read the article

  • ionCube does not load after upgrading to PHP 5.4

    - by amir
    I am afraid I broke something on my VPS :/ I hope you can help me. I am on ubuntu-12.04-x86, and since I moved to a new VPS I tried to upgrade PHP from 5.3 to 5.4. After installing I get this message:

      Failed loading /usr/lib/php5/20090626+lfs/ioncube_loader_lin_5.3.so: /usr/php5/20090625+lfs/ioncube_loader_lin_5.3.so: undefined symbol: php_body_wri
      PHP 5.4.8-1~precise+1 (cli) (built: Oct 29 2012)

    I should mention that the server is working and PHP is also working, but when I run phpinfo there is no longer a "with the ionCube PHP Loader v4.0.14, Copyright (c) 2002-2011, by ionCube Ltd." line, which was there before :/ I installed following this guide: http://www.upubuntu.com/2012/03/how-to-upgrade-install-php-540-under.html
    Do I need to worry?
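
    The message itself is the diagnosis: the loader on disk was built for PHP 5.3 (extension directory 20090626), and a PHP 5.4 binary cannot load it. The fix is to install the 5.4 build of the loader and point the new PHP at it. A sketch, with the 5.4 extension directory and ini path as assumptions -- check both on your box first:

      # find the real extension dir of the new PHP
      php -i | grep "^extension_dir"
      # download ioncube_loaders_lin_x86.tar.gz from ioncube.com/loaders.php, then:
      tar xzf ioncube_loaders_lin_x86.tar.gz
      cp ioncube/ioncube_loader_lin_5.4.so /usr/lib/php5/20100525+lfs/
      # register the 5.4 loader and drop the stale 5.3 reference from your php.ini/conf.d
      echo "zend_extension=/usr/lib/php5/20100525+lfs/ioncube_loader_lin_5.4.so" \
          > /etc/php5/conf.d/00-ioncube.ini
      service apache2 restart
      php -v    # should show the ionCube PHP Loader banner again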

    Read the article
