Search Results

Search found 32520 results on 1301 pages for 'local machine'.


  • Unable to install ffmpeg-php

    - by matt_tm
    Hi, I followed the instructions on http://www.mysql-apache-php.com/ffmpeg-install.htm but ffmpeg-php does not show up in my phpinfo(). The commands I ran (in order):

      # yum install ffmpeg ffmpeg-devel
      ...
      Public key for faac-1.26-1.el5.rf.x86_64.rpm is not installed
      # rpm -Uhv http://apt.sw.be/redhat/el5/en/i386/rpmforge/RPMS/rpmforge-release-0.3.6-1.el5.rf.i386.rpm
      ...
      1:rpmforge-release ########################################### [100%]
      # yum install ffmpeg
      ...
      Complete!
      # wget http://space.dl.sourceforge.net/project/ffmpeg-php/ffmpeg-php/0.6.0/ffmpeg-php-0.6.0.tbz2
      ...
      # tar -xjf ffmpeg-php-0.6.0.tbz2
      # cd ffmpeg-php-0.6.0
      # phpize
      ...
      configure: error: ffmpeg headers not found. Make sure ffmpeg is compiled as shared libraries using the --enable-shared option
      # yum install ffmpeg-devel
      ...
      Complete!
      # ./configure
      ...
      config.status: creating config.h
      # make
      ...
      Build complete. Don't forget to run 'make test'.
      # make install
      Installing shared extensions: /usr/local/lib/php/extensions/no-debug-non-zts-20090626/
      # ls -al /usr/local/lib/php/extensions/no-debug-non-zts-20090626/
      ...
      -rwxr-xr-x 1 root root 185285 Sep 20 03:36 ffmpeg.so*
      ...
      # nano /usr/local/lib/php.ini

    In php.ini I put these two lines at the end of the file:

      [ffmpeg]
      extension=ffmpeg.so

    Then:

      # service httpd restart

    But phpinfo() still does not show any 'ffmpeg' section. This is the correct php.ini because:

      # php -i | grep php\.ini
      Configuration File (php.ini) Path => /usr/local/lib
      Loaded Configuration File => /usr/local/lib/php.ini
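
    One thing worth checking before anything else (a diagnostic sketch, not a confirmed fix): the directory make install wrote to is not necessarily the extension_dir this PHP build loads from, and the CLI and Apache's mod_php may even read different ini files. These commands use only paths quoted in the post:

      # php -i | grep extension_dir
      # php -m | grep -i ffmpeg
      # php -d extension=/usr/local/lib/php/extensions/no-debug-non-zts-20090626/ffmpeg.so -m | grep -i ffmpeg

    If the third command lists ffmpeg while the second does not, pointing extension= at that full path in php.ini should fix the CLI; if phpinfo() through Apache still shows nothing after a restart, compare the 'Loaded Configuration File' reported by a phpinfo() page in the browser against the CLI output above.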

    Read the article

  • squid ssl bump sslv3 enforce to allow old sites

    - by Shrey
    Important: I have this question on Stack Overflow, but somebody told me this is a more relevant place for it. Thanks.

    I have configured squid (3.4.2) as an ssl-bumped proxy. I am setting the proxy in Firefox (29) to use squid for https/http. Now it works for most sites, but some sites which only support the old SSL protocol (SSLv3) break, and I see squid not employing any workarounds for those like browsers do. Sites which should work: https://usc-excel.officeapps.live.com/ , https://www.mahaconnect.in

    As a workaround I have set sslproxy_version=3, which enforces SSLv3, and the above sites work. My question: is there a better way to do this which does not involve enforcing SSLv3 for servers supporting TLS1 or better? Now I know openssl doesn't automatically handle that, but I imagined squid would. My squid conf snippet:

      http_port 3128 ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/usr/local/squid/certs/SquidCA.pem
      always_direct allow all
      ssl_bump server-first all
      sslcrtd_program /usr/local/squid/libexec/ssl_crtd -s /usr/local/squid/var/lib/ssl_db -M 4MB
      client_persistent_connections on
      server_persistent_connections on
      sslproxy_version 3
      sslproxy_options ALL
      cache_dir aufs /usr/local/squid/var/cache/squid 100 16 256
      coredump_dir /usr/local/squid/var/cache/squid
      strip_query_terms off
      httpd_suppress_version_string on
      via off
      forwarded_for transparent
      vary_ignore_expire on
      refresh_pattern ^ftp: 1440 20% 10080
      refresh_pattern ^gopher: 1440 0% 1440
      refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
      refresh_pattern . 0 20% 4320

    UPDATE: I have tried compiling squid 3.4.5 with OpenSSL 1.0.1h. No improvement.
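
    For what it's worth, squid 3.4 has no per-destination protocol pinning, so there is no built-in "TLS where possible, SSLv3 only for the broken hosts" switch. The closest configuration-level compromise (a sketch, assuming the build exposes these directives; option tokens are passed through to OpenSSL) is to leave negotiation automatic and exclude only what must never be used:

      # 1 = automatic negotiation in squid 3.x
      sslproxy_version 1
      sslproxy_options NO_SSLv2

    If specific legacy hosts still refuse the handshake under automatic negotiation, the remaining workaround is splitting traffic, e.g. a second squid instance (or listening port) pinned with sslproxy_version 3 that only the legacy domains are routed through; squid itself will not retry a failed handshake at a lower protocol version.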

    Read the article

  • SQL 2000: Intermittent Error 7399 with OLE DB Provider for Microsoft Jet

    - by Tim Lara
    I am using SQL Server 2000 on Windows Server 2003 SP2 and have set up a linked server to point at an Access 97 database using the OLE DB Provider 4.0 for Microsoft Jet. The problem I am having sounds almost exactly like the one described in this Microsoft KB article, except that the error I am getting is intermittent: http://support.microsoft.com/kb/814398

    The SQL Server is running under the Local System account (which I don't have authority to change), and the Access 97 .mdb file that the linked server points to is on a Win XP Pro machine on the same LAN as the SQL Server machine, inside of a shared folder with permissions set to "Everyone" and "Full Control". Now, if the linked server connection never worked, it would make more sense that the problem is merely a permissions issue with the Local System account as the KB article above suggests, but the maddening thing is that sometimes the connection works just fine. When it fails, the error message is always the same:

      Error 7399: OLE DB provider 'Microsoft.Jet.OLEDB.4.0' reported an error.
      [OLE/DB provider returned message: Unspecified error]
      OLE DB error trace [OLE/DB Provider 'Microsoft.Jet.OLEDB.4.0' IDBInitialize::Initialize returned 0x80004005: ].

    Also, not only does the linked server setup occasionally work just fine on this one particular SQL Server, what is supposed to be exactly the same setup on 25 other servers works just fine EVERY TIME! Obviously, something in the non-working setup must not be exactly the same, but I'm having trouble figuring out where to look for the differences since the error message SQL Server returns is so vague. I know our sysadmins have had numerous issues with Active Directory replication across our domain, so my best guess is that there is some sort of odd group policy corruption going on, but I thought I'd ask here to see if I might be overlooking something more straightforward. Any ideas on how to further isolate the error would be greatly appreciated! For the record, here is a list of things I've already tried:

    - Rebooting the SQL Server machine. Fixes the issue temporarily, then the error returns within a minute or two of startup. (This is why I suspect a rogue group policy that is slow to apply fouling things up.)
    - Importing all database objects from the Access 97 mdb into a new, clean mdb file. Makes no difference.
    - Moving the Access 97 mdb file to a local directory on the SQL Server machine instead of accessing it via a share on the Win XP Pro LAN machine. This works, but does not solve the problem because the mdb needs to be on the client machine for performance reasons and the ability to work "stand alone". Plus, the same shared folder access works fine on all other servers / clients on my network.
    - Compared all the SQL Server, Windows Server, etc. versions to a known working setup and everything appears to be the same.
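
    One avenue that fits the intermittent, fails-soon-after-boot pattern (an assumption on my part, not something the error text proves): Jet writes its temporary files under the profile of the account hosting SQL Server, and Local System's temp directory can lose ACL entries or fill up between reboots. A quick look on the SQL Server box (default path assumed):

      cacls "C:\WINDOWS\Temp"
      dir "C:\WINDOWS\Temp"

    If the ACL there differs from one of the 25 working servers, that would be a concrete difference to chase; comparing applied policies with gpresult on a working server and the failing one is another cheap diff, given the AD replication history mentioned above.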

    Read the article

  • Supplementary Developer Laptop

    - by David Smith
    I'm looking to buy a laptop with the following specs for a developer. The goal will be to have a development machine supplementing the dev's desktop. During work hours the dev will be on a beefy desktop. For working while on the go (trains, client sites, code camps) it would be nice to have a machine which can run Visual Studio 2008 without needing to remote desktop into their primary machine. What do you think is the lowest-cost laptop meeting this need? Here are the specs I have in mind:

    - SSD drive, 64 GB (doesn't need to be huge; most data is stored on servers). Will need to fit Windows 7, IIS, SQL Server, and Visual Studio 2010.
    - RAM: 3 GB
    - Processor: Pentium Core 2 Duo
    - Screen size: 14 inches
    - OS doesn't matter; it will be paved with Windows 7 Ultimate.
    - Optical drive omitted would be a plus.
    - Weight and battery life aren't so important because the machine will be plugged in almost all the time.

    Read the article

  • macport selfupdate not working

    - by eistrati
    macbookpro:~ eistrati$ port -v
    MacPorts 2.1.2
    macbookpro:~ eistrati$ xcodebuild -version
    Xcode 4.5.2
    Build version 4G2008a
    macbookpro:~ eistrati$ sudo port -d selfupdate
    DEBUG: Copying /Users/eistrati/Library/Preferences/com.apple.dt.Xcode.plist to /opt/local/var/macports/home/Library/Preferences
    DEBUG: MacPorts sources location: /opt/local/var/macports/sources/rsync.macports.org/release/tarballs
    --->  Updating MacPorts base sources using rsync
    rsync: failed to connect to rsync.macports.org: Connection refused (61)
    rsync error: error in socket IO (code 10) at /SourceCache/rsync/rsync-42/rsync/clientserver.c(105) [receiver=2.6.9]
    Command failed: /usr/bin/rsync -rtzv --delete-after rsync://rsync.macports.org/release/tarballs/base.tar /opt/local/var/macports/sources/rsync.macports.org/release/tarballs
    Exit code: 10
    DEBUG: Error synchronizing MacPorts sources: command execution failed
        while executing
    "macports::selfupdate [array get global_options] base_updated"
    Error: /opt/local/bin/port: port selfupdate failed: Error synchronizing MacPorts sources: command execution failed

    Ideas? Please help!
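
    "Connection refused" on the rsync URL points at tcp/873 being blocked or the mirror being unreachable, not at a broken MacPorts install. A quick way to confirm from the same machine (pure diagnostics, nothing is changed):

      nc -z rsync.macports.org 873 && echo reachable || echo blocked

    If the port is blocked (common on corporate or hotel networks), the ports tree can be synced over HTTP instead by switching the [default] entry in /opt/local/etc/macports/sources.conf to a tarball URL and running sudo port sync; selfupdate of MacPorts base itself still wants rsync, and when that is permanently unavailable, reinstalling from the current MacPorts .dmg is the usual fallback.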

    Read the article

  • Disk2VHD image used in Win 7 as a bootable VPC

    - by John
    I have used Disk2VHD to create a VHD of my old XP laptop's boot drive. I would like to use it as an XP virtual machine on my new Win7 machine. I have tried doing it on a second XP machine, and the VPC boots properly using that VHD, but under Win7 I can't get it to act as the boot disk for the VPC. Any ideas? TIA J

    Read the article

  • Stack overflow in xp cmd console

    - by Dave
    I am using an older program whose source code I cannot see. I am using the cmd.exe console in Windows XP. The program ran with no problems on an XP machine last year, while a stack overflow code 2000 error was observed on a different XP machine (easy fix: use the machine that works). I tried running the program on the previously working machine lately, and now am getting the same error. No changes to the OS were made and I did not change the service pack version. Any ideas on how to get around this stack overflow error so I can use the program? DOSBox will at least open the program; however, it does not run to completion. Thanks!
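
    Since DOSBox at least launches it, tuning the DOSBox config before giving up is cheap. The settings below are guesses at what an old real-mode program might want (more memory, a steadier emulated CPU), not a known fix for this particular error:

      [dosbox]
      memsize=16

      [cpu]
      core=normal
      cycles=fixed 3000

    If the native cmd.exe route matters more, comparing %SystemRoot%\system32\config.nt and autoexec.nt between the two XP machines is another angle, since NTVDM takes its DOS-environment settings from those files.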

    Read the article

  • Linux pptp client stops working after several hours

    - by Aron Rotteveel
    Here's the situation. Setup:

    - 1 Windows Server 2008 machine acting as a Domain Controller and RRAS server
    - 1 CentOS machine in a datacentre located elsewhere
    - PPTP client running on the CentOS machine, connected to the DC

    When I connect to the DC, everything is working fine. I have set up a static IP for the dialup connection in my RRAS server so that the CentOS machine is automatically assigned the IP 192.168.1.240. Inside the VPN, it is now possible to access this machine on the local IP address. Perfect. However, after several hours it simply seems to stop working (i.e. I cannot ping to or from this machine on the local network). The strange thing is, however:

    - The DC shows the VPN client as still being connected
    - The CentOS machine shows the network interface as being up
    - There are no entries in my /var/log/messages that indicate a problem

    Output from ifconfig:

      ppp0 Link encap:Point-to-Point Protocol
      inet addr:192.168.1.240 P-t-P:192.168.1.160 Mask:255.255.255.255
      UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1396 Metric:1
      RX packets:43 errors:0 dropped:0 overruns:0 frame:0
      TX packets:58 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:3
      RX bytes:4511 (4.4 KiB) TX bytes:15071 (14.7 KiB)

    Output from route -n:

      192.168.1.160 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0
      192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 ppp0

    I have the following in my ip-up.local:

      route add -net 192.168.1.0 netmask 255.255.255.0 dev ppp0

    The situation can be easily fixed by issuing a killall pppd and re-connecting. However, I obviously do not want to do this every X hours or so. I have tried running pppd with both the debug and the kdebug flags but cannot find the cause of this problem. Currently, my ppp0 network interface seems to be running and the last log lines mentioning it are:

      Feb 19 14:10:40 graviton pppd[10934]: local IP address 192.168.1.240
      Feb 19 14:10:40 graviton pppd[10934]: remote IP address 192.168.1.160
      Feb 19 14:10:40 graviton pppd[10934]: Script /etc/ppp/ip-up started (pid 10952)
      Feb 19 14:10:40 graviton pppd[10934]: Script /etc/ppp/ip-up finished (pid 10952), status = 0x0
      Feb 19 14:11:27 graviton pptp[10935]: anon log[decaps_gre:pptp_gre.c:414]: buffering packet 190 (expecting 189, lost or reordered)
      Feb 19 14:11:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Request received.
      Feb 19 14:11:37 graviton pptp[10942]: anon log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 6 'Echo-Reply'
      Feb 19 14:12:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Request received.
      Feb 19 14:12:37 graviton pptp[10942]: anon log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 6 'Echo-Reply'
      Feb 19 14:12:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Reply received.
      Feb 19 14:13:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Reply received.
      Feb 19 14:14:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Reply received.
      Feb 19 14:15:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Reply received.
      Feb 19 14:16:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Reply received.
      Feb 19 14:19:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Reply received.
      Feb 19 14:19:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:679]: no more Echo Reply/Request packets will be reported.

    I have enabled the persist option. The network interface is still running, but it is still impossible to send data through the VPN. Any help is appreciated.
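
    Given that RRAS still shows the session as up while traffic has stopped, the tunnel is likely dying without LCP ever noticing, and persist alone only redials when pppd knows the link is dead. pppd's LCP echo options close that gap; a sketch (the option names are standard pppd, the file is whichever options/peers file this tunnel uses):

      persist
      maxfail 0
      lcp-echo-interval 30
      lcp-echo-failure 4

    With those, pppd sends an LCP echo every 30 seconds and tears the link down after 4 unanswered ones, and persist then redials automatically. That does not explain the underlying drop, but it replaces the manual killall pppd cycle.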

    Read the article

  • I want to hit Apex SQL with a big stick

    - by Michael Stephenson
    <Whinge> Thought I'd just have a little whinge about this product, which caused me a load of grief the other day..... So the background was that my development machine had a completely full hard disk which I needed to sort out. Upon investigation I found the issue was that the msdb database had managed to get very large. This was caused because a long time ago (and I can't even remember why) I tried out Apex SQL. After a few days I decided to uninstall it and thought nothing more of it. What I didn't realise was that uninstalling it doesn't actually uninstall it (and it doesn't inform you about this); there were still some assemblies left on my machine. Every time SQL Server was running it was starting the Apex SQL Connection Monitor, which was then running in the background and regularly recording information in the msdb database. Over time it had recorded enough to fill the disk. The below article advises how to remove this fully, so if you're having the problem then try this out: http://knowledgebase.apexsql.com/2007/08/how-to-uninstall-apexsqlconnectionmonit_09.htm Once this was sorted out, it's interesting to read the above article, because I just don't think the approach used by the vendor of this software is a very good one. So for the Apex team, I just wanted to pass on a thought: if I want to uninstall your product, you should tell me if stuff is left on the machine, especially if a process will be running which is going to fill my machine with useless data. </Whinge>
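
    If anyone hits the same full-disk symptom and wants to confirm what is eating msdb before hunting for leftover monitoring processes, the standard first probe is (plain system procedures, nothing Apex-specific):

      USE msdb;
      EXEC sp_spaceused;
      EXEC sp_MSforeachtable 'EXEC sp_spaceused [?]';

    The second call is an undocumented but long-standing helper that prints a per-table size breakdown, which makes the offending table obvious.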

    Read the article

  • Sign out of Windows Live Messenger remotely

    - by justinhj
    I just upgraded Windows Live Messenger at home. I'm logged into my machine at work, so I have a Live session active there too. Now the fun part: this new version of Messenger is signing me out after about 2 minutes, saying "You were signed out from here because you signed in to a version of Messenger that doesn't let you sign in at more than one place". OK, so I went into the options on my home machine and selected "Sign me out at all other locations". Is there another way I can force my office machine to log out remotely? Either this option does not work, or the machine in my office just keeps reconnecting. Version 2009 14.0.8089.726 EDIT: Actually this problem went away after a few hours; I guess some kind of server-side timeout kicked in.

    Read the article

  • Unknown protocol when trying to connect to remote host with stunnel

    - by RaYell
    I'm trying to set up a stunnel for WebDav on Windows. I want to connect port 80 on my local interface to port 443 on another machine in my network. I can ping the remote machine. However, when I use the tunnel, I'm getting this error all the time:

      SSL state (accept): before/accept initialization
      SSL_accept: 140760FC: error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol

    There is nothing in the logs on the other machine, and here's my stunnel connection config:

      [https]
      accept = 127.0.0.2:80
      connect = 10.0.0.60:443
      verify = 0

    I've set it up to accept all certificates, so this shouldn't be a problem with the self-signed certificate the remote host uses. Does anyone know what might be the problem that prevents this connection from being established?
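
    The error message itself is the clue: SSL23_GET_CLIENT_HELLO:unknown protocol is OpenSSL saying the bytes arriving on the accept side are not SSL at all, which is exactly what happens here, because stunnel sections default to server mode (SSL in, plaintext out) and the browser is sending plaintext HTTP to 127.0.0.2:80. For plaintext-in, SSL-out you need client mode; a sketch of the corrected section:

      [https]
      client = yes
      accept = 127.0.0.2:80
      connect = 10.0.0.60:443
      verify = 0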

    Read the article

  • Installing multiple php versions plus extensions on freebsd

    - by jgtumusiime
    I'm currently learning how to work with FreeBSD. Lately I have been trying to run multiple PHP versions along with their respective packages. However, I seem to be running into issues while making installations. The default location for my PHP installation is /usr/local/etc/, but I want to be able to install PHP 5.2, 5.3 and 5.4 in /usr/local/etc/php52, /usr/local/etc/php53 and /usr/local/etc/php54 respectively. Using ports I simply achieved this by doing:

      cd /usr/ports/lang/php5x && make PREFIX="/usr/local/etc/php5x" install clean

    The problem now is: how do I do the same for extensions of all my PHP versions? When I try installing php-extensions like so:

      cd /usr/ports/lang/php5x-extensions && make PREFIX="/usr/local/etc/php5x/lib/php" install clean

    I get this error:

      ===> PHPizing for php53-bcmath-5.3.17
      env: /usr/local/bin/phpize: No such file or directory
      *** Error code 127
      Stop in /usr/ports/math/php53-bcmath.
      *** Error code 1
      Stop in /usr/ports/lang/php53-extensions.

    My phpize is located in /usr/local/etc/php5x/bin/phpize. So how do I get make (or whatever) to look for phpize in the right path? Is there a cleaner, maybe simpler way of maintaining multiple PHP installations? I need to achieve this because of compatibility issues with some legacy code that runs on 5.2 and breaks on 5.3. Thank you.

    =================

    So I successfully installed and configured a FreeBSD jail, and I would like to install software within my jail, but I cannot connect to the network. Here is my rc.conf:

      jail_enable="YES"                              # Set to NO to disable starting of any jails
      jail_list="mambo2"                             # Space separated list of names of jails
      jail_mambo2_rootdir="/usr/jails/j01"           # jail's root directory
      jail_mambo2_hostname="mambo2.ug"               # jail's hostname
      jail_mambo2_ip="192.168.100.174"               # jail's IP address
      jail_mambo2_devfs_enable="YES"                 # mount devfs in the jail
      jail_mambo2_devfs_ruleset="mambo2_ruleset"     # devfs ruleset to apply to jail

    Here is my jail's ifconfig output:

      mambo2# ifconfig
      rl0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
          options=8<VLAN_MTU>
          ether 00:c1:28:00:48:db
          media: Ethernet autoselect (100baseTX <full-duplex>)
          status: active
      plip0: flags=108810<POINTOPOINT,SIMPLEX,MULTICAST,NEEDSGIANT> metric 0 mtu 1500
      lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
      mambo2#

    I created an /etc/resolv.conf for nameservers:

      mambo2# cat /etc/resolv.conf
      nameserver 192.168.100.251
      nameserver 8.8.8.8
      mambo2#

    Here is a list of jails running:

      [root@mambo /usr/home/jtumusiime]# jls
      JID IP Address Hostname Path
      5 192.168.100.174 mambo2.ug /usr/jails/j01

    My host has 4 IP addresses, 3 public and one private: 192.168.100.173. I tried creating a jail using ezjail and this does not work out:

      [root@mambo /usr/home/jtumusiime]# ezjail-admin update -p -i
      Error: Cannot find your copy of the FreeBSD source tree in .
      Consider using 'ezjail-admin install' to create the base jail from an ftp server.
      [root@mambo /usr/home/jtumusiime]#

    I have an updated copy of FreeBSD 7.1 source in /usr/src/ and I did #make buildworld while building the first jail mambo2. Here is an excerpt of the output of ezjail-admin install:

      ...
      221 Goodbye.
      Trying 193.162.146.4...
      Connected to ftp.freebsd.org.
      220 ftp.beastie.tdk.net FTP server (Version 6.00LS) ready.
      331 Guest login ok, send your email address as password.
      230 Guest login ok, access restrictions apply.
      Remote system type is UNIX.
      Using binary mode to transfer files.
      200 Type set to I.
      550 pub/FreeBSD-Archive/old-releases/i386/7.1-RELEASE/base: No such file or directory.
      221 Goodbye.
      Could not fetch base from ftp.freebsd.org.
      Maybe your release (7.1-RELEASE) is specified incorrectly or the host ftp.freebsd.org does not provide that release build.
      Use the -r option to specify an existing release or the -h option to specify an alternative ftp server.
      Querying your ftp-server... The ftp server you specified (ftp.freebsd.org) seems to provide the following builds:
      Trying 193.162.146.4...
      total 10
      drwxrwxr-x 13 1006 1006  512 Feb 20  2011 8.2-RELEASE
      drwxrwxr-x 13 1006 1006  512 Apr 10  2012 8.3-RELEASE
      lrwxr-xr-x  1 1006 1006   16 Jan  7  2012 9.0-RELEASE -> i386/9.0-RELEASE
      drwxrwxr-x  7 1006 1006 1024 Feb 19  2012 ISO-IMAGES
      -rw-rw-r--  1 1006 1006  637 Nov 23  2005 README.TXT
      drwxrwxr-x  5 1006 1006  512 Nov  2 02:59 i386

    I do not want to upgrade my FreeBSD installation. I have googled around, but all to no avail.
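
    Two hedged sketches for the two halves above. For the phpize failure, the extension ports appear to look for phpize at the fixed path /usr/local/bin/phpize, so the bluntest workaround (an assumption about the port's behavior; the link must be switched per PHP version being built) is:

      ln -sf /usr/local/etc/php53/bin/phpize /usr/local/bin/phpize

    For the jail networking, a jail's IP address has to exist as an alias on a host interface before the jail can source traffic from it; with the interface name from the post, that is either jail_mambo2_interface="rl0" in the jail settings (so the rc script adds the alias itself) or a host rc.conf line like:

      ifconfig_rl0_alias0="inet 192.168.100.174 netmask 255.255.255.255"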

    Read the article

  • mysqld_safe Can't log to error log and syslog at the same time. Remove all --log-error configuration options for --syslog to take effect

    - by photon
    While trying to install MySQL 5.5 Community Edition on my Ubuntu 10.04 by compiling the source code, I ran into the following problem:

      $ fg % 1
      sudo ../bin/mysqld_safe --basedir=/usr/local/mysql_community_5.5/data --user=mysql --defaults-file=/etc/my.cnf
      [sudo] password for linnan:
      Sorry, try again.
      [sudo] password for linnan:
      121023 09:26:21 mysqld_safe Can't log to error log and syslog at the same time. Remove all --log-error configuration options for --syslog to take effect.
      Internal program error (non-fatal): unknown logging method '/usr/local/mysql_community_5.5/log/mysql.log'
      121023 09:26:21 mysqld_safe Logging to '/var/log/mysql/error.log'.
      Internal program error (non-fatal): unknown logging method '/usr/local/mysql_community_5.5/log/mysql.log'
      121023 09:26:22 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
      121023 09:26:23 mysqld_safe mysqld from pid file /var/lib/mysql/ubuntu.pid ended

    It seems that the problem is related to the log configuration. I've noticed a bugfix related to this problem: http://bugs.mysql.com/bug.php?id=50083 But I still have no idea how to solve it. The relevant content in /etc/my.cnf:

      [mysqld]
      port = 3306
      socket = /tmp/mysql.sock
      skip-external-locking
      key_buffer_size = 384M
      max_allowed_packet = 1M
      table_open_cache = 512
      sort_buffer_size = 2M
      read_buffer_size = 2M
      read_rnd_buffer_size = 8M
      myisam_sort_buffer_size = 64M
      thread_cache_size = 8
      query_cache_size = 32M
      # Try number of CPU's*2 for thread_concurrency
      thread_concurrency = 8
      character-set-server=utf8

      [mysqld-safe]
      basedir=/usr/local/mysql_community_5.5
      datadir=/usr/local/mysql_community_5.5/data

    /etc/mysql/conf.d/mysqld_safe_syslog.cnf:

      [mysqld_safe]
      syslog
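
    The message points at two config sources fighting: Ubuntu's packaged /etc/mysql/conf.d/mysqld_safe_syslog.cnf turns syslog on, while the hand-built my.cnf supplies file-based log settings. A sketch of untangling it, assuming a plain error log is what you want:

      sudo mv /etc/mysql/conf.d/mysqld_safe_syslog.cnf /etc/mysql/conf.d/mysqld_safe_syslog.cnf.disabled

    Separately, note that the posted my.cnf uses a [mysqld-safe] section header with a hyphen; mysqld_safe reads [mysqld_safe] with an underscore, so its basedir/datadir lines are most likely being ignored, which would also explain the daemon starting against /var/lib/mysql instead of the intended data directory.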

    Read the article

  • Windows 7 Default Gateway problem

    - by Matt
    Hi, I have a strange problem (or at least it seems strange to me). Below are the IP configurations for two laptops on my home network, which consists of a main router at 192.168.11.1 and a connected wireless router (I know this can cause problems, but it has always worked until I got the Win7 machine) at 192.168.11.2 with DHCP disabled.

    Laptop 1 - Win XP
    IP: dynamically assigned by main router
    Default gateway: 192.168.11.1 (main router)
    This machine gets perfect connectivity.

    Laptop 2 - Win7
    IP: dynamically assigned by main router
    Default gateway: 192.168.11.2 (THIS IS THE PROBLEM...)

    I cannot seem to get this machine to default to the main router for the gateway UNLESS I go to a static configuration, which I would rather not do since I regularly move between my home and public networks. Why is my Win7 machine not finding the main gateway the same way that the other laptop is? I believe the rest of my setup is fine, as it has always worked, and it works perfectly when set with a static IP and gateway. Please help! Thanks
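
    The symptom reads like the Win7 box is getting its lease (or at least its gateway option) from the wireless router rather than the main one. That's an inference, not something the post proves, but it is cheap to check from the Win7 machine:

      ipconfig /all
      route print

    Look at the "DHCP Server" line for the active adapter in the first command. If it shows 192.168.11.2, then DHCP is not really disabled on the wireless router (some models only disable it on the wired side), and fixing it there beats hardcoding anything on the laptop.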

    Read the article

  • Why does bash sometimes think my $HOME isn't the correct directory?

    - by Adam Yanalunas
    Like the title says, it seems that bash sometimes misidentifies my $HOME. This cropped up after a seemingly unique series of events that I will now replay in broad strokes:

    - Running OS X 10.6 with a normal, local account
    - Work binds my account to Active Directory
    - Much time passes with no issues
    - Set up rvm to manage Ruby installs (this becomes important later)
    - Upgraded to OS X 10.7 a few days ago
    - After a successful install, attempted to log in, was presented with a "Must reset password" dialog that never allowed a password to be reset. Would simply shake the box after the new password was entered.
    - Much googling was done. Much more googling was done. Swearing was had.
    - Logged in as root, created a new account, set it as admin, deleted /Users/[new account], renamed /Users/[old account] to /Users/[new account]
    - Logged out of root, logged into the new account with no issues

    After OS X asked for my account password a few times to update Keychain and other system-level stuff, it was back to business as usual. Opened Terminal, cd to the project folder, tried "rails server" and was presented with:

      /usr/local/lib/ruby/1.9.1/rubygems/dependency.rb:247:in `to_specs': Could not find rails (>= 0) amongst [] (Gem::LoadError)
          from /usr/local/lib/ruby/1.9.1/rubygems/dependency.rb:256:in `to_spec'
          from /usr/local/lib/ruby/1.9.1/rubygems.rb:1210:in `gem'
          from /usr/local/bin/rails:18:in `<main>'

    Ran through a few exercises, decided to rm -rf ~/.rvm and reinstall. Running a --trace on the rvm installer shows it dies on this line:

      mkdir: /Users/[old account]: Permission denied

    Scrolling back through the --trace log I see many more mentions of /Users/[old account]. When I inspect the install script, the offending line is looking at "${HOME}/.rvm" as it tries to run the mkdir. To my confusion I also see mentions of /Users/[new account] in the log. I've tried exporting a new HOME in my .bash_profile with no luck. Can anyone guess why /Users/[old account] would still be kicking around?
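
    Because the account surgery was done by renaming folders as root, it's worth checking what Directory Services still records as the home directory; bash takes $HOME from that record at login. A sketch (account names are placeholders):

      dscl . -read /Users/youraccount NFSHomeDirectory
      # if it still points at the old path:
      sudo dscl . -change /Users/youraccount NFSHomeDirectory /Users/oldaccount /Users/youraccount

    rvm also bakes absolute paths into the scripts it installs, so a stale ~/.rvm combined with a stale directory-services record would produce exactly the mkdir /Users/[old account] failure in the trace.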

    Read the article

  • RPCSS kerberos issues on imaged Windows workstations

    - by sysadmin1138
    While doing some unrelated troubleshooting I came across a set of Event Log entries that have me concerned.

      Machine Name: labcomputer82
      Source: Security-Kerberos
      Event ID: 4
      Event Description: The Kerberos client received a KRB_AP_ERR_MODIFIED error from the server labcomputer143$. The target name used was RPCSS/imagemaster4.ad.domain.edu. This indicates that the target server failed to decrypt the ticket provided by the client. This can occur when the target server principal name (SPN) is registered on an account other than the account the target service is using. Please ensure that the target SPN is registered on, and only registered on, the account used by the server. This error can also happen when the target service is using a different password for the target service account than what the Kerberos Key Distribution Center (KDC) has for the target service account. Please ensure that the service on the server and the KDC are both updated to use the current password. If the server name is not fully qualified, and the target domain (AD.DOMAIN.EDU) is different from the client domain (AD.DOMAIN.EDU), check if there are identically named server accounts in these two domains, or use the fully-qualified name to identify the server.

    There are three machine names used in this message. It's generated on labcomputer82, it's attempting to talk to another lab workstation called labcomputer143, and the service in question (RPCSS) refers to the name of the machine that this machine was imaged from (and possibly also that of labcomputer143, I'm not sure). The thing that has me raising both eyebrows is that the machine named labcomputer82 is attempting to use an SPN of RPCSS/imagemaster4.ad.domain.edu. The SPN attribute on the computer object in AD looks just fine; it has all the names it should have. Of the over 3,000 computer objects in our AD domain, somewhere around 1,700 of them are computer-lab seats that are frequently imaged. If we're doing something wrong, I'd like to know in time to get our procedures modified (and people retrained) for fall quarter. But if this is normal for imaged machines, I'll just continue ignoring these.
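
    A quick test of the duplicate-identity theory, from any domain-joined box with the AD tools installed (names taken from the event text):

      setspn -Q RPCSS/imagemaster4.ad.domain.edu
      setspn -L labcomputer143

    If the -Q lookup resolves to a different computer account than the machine actually answering, or to more than one account, Kerberos mints tickets encrypted for whichever account owns the registration, and the target fails to decrypt them exactly as the event describes. That is the classic signature of images deployed without resetting machine identity (sysprep or a domain rejoin after imaging).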

    Read the article

  • Internet Forwarding With Qemu?

    - by ConfusedGuy
    I'm using KVM and QEMU to run a Windows virtual machine, but I'm trying to get internet access on that machine. I've been reading about all this bridging and stuff that is done to do that, but I was wondering if there was a simpler way to just forward my internet connection (since I'm connected on the host machine) through QEMU to the guest operating system. Is this possible? Thanks
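
    It is possible, and it's QEMU's default: user-mode (slirp) networking NATs the guest behind the host's own connection with no bridge, no TAP device, and no root-level network changes; the guest just picks up 10.0.2.15 via DHCP. A sketch of the invocation (image name and sizes are placeholders):

      qemu-system-x86_64 -enable-kvm -m 2048 -hda windows.img \
        -netdev user,id=net0 -device e1000,netdev=net0

    The e1000 model is a safe pick for Windows guests since the driver ships in-box. Trade-offs of slirp: the guest is not reachable from the LAN (hostfwd rules can expose individual ports), and ping behaves oddly because ICMP is not fully emulated.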

    Read the article

  • disk-to-disk backup without costly backup redundancy?

    - by AaronLS
    A good backup strategy involves a combination of 1) disconnected backups/snapshots that will not be affected by bugs, viruses, and/or security breaches, 2) geographically distributed backups to protect against local disasters, and 3) testing backups to ensure that they can be restored as needed.

    Generally I take an onsite backup daily and an offsite backup weekly, and do test restores periodically. In the rare circumstance that I need to restore files, I do so from the local backup. Should a catastrophic event destroy the servers and local backups, then the offsite weekly tape backup would be used to restore the files. I don't need multiple offsite backups with redundancy. I ALREADY HAVE REDUNDANCY THROUGH THE USE OF BOTH LOCAL AND REMOTE BACKUPS. I have recovery blocks and par files with the backups, so I already have protection against a small percentage of corrupt bits. I perform test restores to ensure the backups function properly. Should the remote backups experience a data loss, I can replace them with one of the local backups. There are historical offsite backups as well, so if a data loss was not noticed for a few weeks (such as a bug/security breach/virus), the data could be restored from an older backup. By doing this, the only scenario that poses a risk of complete data loss would be one where the local backups, the remote backups, and the servers all experienced a data loss in the same time period. I'm willing to risk that happening, since the odds of that trifecta are negligibly small and the data isn't THAT valuable to me. So I hope I have emphasized that I don't need redundancy in my offsite backups because I have covered all the bases. I know this exact technique is employed by numerous businesses. Of course there are some that take multiple offsite backups, because the data is so incredibly valuable that they don't even want to risk that trifecta disaster, but in the majority of cases the trifecta disaster is an accepted risk. I HAD TO COVER ALL THIS BECAUSE SOME PEOPLE DON'T READ!!! I think I have justified my backup strategy, and the majority of businesses who use offsite tape backups do not have any additional redundancy beyond what is mentioned above (recovery blocks, par files, historical snapshots).

    Now I would like to eliminate the use of tapes for offsite backups and instead use a backup service. Most, however, are extremely costly in $/GB/month of storage. I don't mind paying for transfer bandwidth, but the cost of storage is way too high. All of them advertise that they maintain backups of the data, and I imagine they use RAID as well. Obviously if you were using them to host servers this would all be necessary, but for my scenario I am simply replacing my offsite backups with such a service. So there is no need for RAID, and absolutely no value in another layer of backups of backups.

    My one and only question: "Are there online data-storage/backup services that do not use redundancy or offer backups (backups of my backups) as part of their packages, and thus are more reasonably priced?" NOT my question: "Is this a flawed strategy?" I don't care if you think this is a good strategy or not. I know it's pretty standard. Very few people make an extra copy of their offsite backups. They already have local backups that they can use to replace the remote backups if something catastrophic happens at the remote site. Please limit your responses to the question posed.

    Sorry if I seem a little abrasive, but I had some trolls in my last post who didn't read my requirements or my question, and were trying to go off answering a totally different question. I made it pretty clear, but didn't try to justify my strategy, because I didn't ask whether my strategy was justifiable. So I apologize if this was lengthy, as it really didn't need to be, but there are so many trolls here who try to sidetrack questions by responding without addressing the question at hand.
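
    For anyone reproducing the recovery-block arrangement described above, the usual tool is par2, and the redundancy percentage is the knob that buys protection against scattered corrupt bits (10% below is only an example; file names are placeholders):

      par2 create -r10 backup.tar.par2 backup.tar
      par2 verify backup.tar.par2
      par2 repair backup.tar.par2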

    Read the article

  • Backup program for Windows using non-proprietary format?

    - by Cristi Diaconescu
    I'm looking at the various local backup programs for Windows, and I was wondering which of them use a non-proprietary backup format. By non-proprietary, I mean I want to be able to access at least the latest version of the backed-up files either directly, or by using an open-standard format like zip/7z/rdiff... The other thing I'm looking for in a backup program is the ability to create incremental backups. What I have found so far:

    - SyncBack copies files as-is, using separate directories for versioning
    - Pretty much the same for all the 'roll your own' task scheduler + rsync/xcopy32/robocopy/MS SyncToy/etc. solutions
    - GFI Backup appears to be using Zip files, at least in their 'Business' version; not sure about the free 'Home' version. Didn't try it yet, but it's next on my list.
    - Mozy (!) supports local backup starting with v2.0 and basically provides a 2nd local copy on a separate partition. Subjectively, it feels slow and resource-intensive (I think it took more than a week to finish the first local backup of ~300 GB), and it does not appear to offer file versioning (arguably, you can get older file versions online). On the positive side, it looks like the local backup is integrated into the restore process, which has traditionally been a masochistic experience (and this goes for any online backup provider).

    Other suggestions? I favor ease of use over tons of options (e.g. SyncBack is very flexible but it offers sooo many ways to shoot yourself in the foot...)
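
    On the open-format side, rdiff-backup lines up with both stated requirements: the most recent version sits on disk as plain files you can copy straight back, and history is kept as reverse diffs. A sketch (paths are placeholders; it is primarily a Unix tool, though Windows builds exist):

      rdiff-backup /home/me /mnt/backup/me
      rdiff-backup --list-increments /mnt/backup/me
      rdiff-backup -r 3D /mnt/backup/me/doc.txt restored-doc.txt

    It is command-line driven, so it may lose on the ease-of-use axis, but it is one of the few tools where "non-proprietary" is literally the on-disk layout.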

    Read the article

  • Old operational master still thinks it is the "one"

    - by Doug
    Hi there, I have a domain with 3 AD servers; for now I'll just call them:

      AD01 (Win 2008 GC, Operations Master)
      AD02 (Win 2008 GC)
      AD03 (Win 2003 GC)

    A couple of months ago there were some hardware issues with AD01, so the Operations Master, PDC and Infrastructure Master roles were moved to AD02. All machines were on while this was happening.

      AD01 (Win 2008 GC)
      AD02 (Win 2008 GC, Operations Master)
      AD03 (Win 2003 GC)

    AD01 was then shut down for a month. Upon starting this machine up with replaced hardware (NIC and RAID card), I now have a weird problem:

    - AD01 thinks it is still the Operations Master, in AD on the local box
    - AD02 & AD03 think AD02 is the Operations Master, in AD on both boxes
    - When running DCDIAG on AD01 I get a number of issues (listed below)

    When running "dcdiag /test:advertising" on AD01:

      Doing primary tests
      Testing server: Default-First-Site-Name\AD01
      Starting test: Advertising
      Warning: DsGetDcName returned information for \\ad02.domain.local, when we were trying to reach AD01.
      SERVER IS NOT RESPONDING or IS NOT CONSIDERED SUITABLE.
      ......................... AD01 failed test Advertising
      Running partition tests on : ForestDnsZones
      Running partition tests on : DomainDnsZones
      Running partition tests on : Schema
      Running partition tests on : Configuration
      Running partition tests on : domain
      Running enterprise tests on : domain.local

    When running "dcdiag" on AD01 I get the following errors (excerpt of the final output):

      Testing server: Default-First-Site-Name\AD01
      Starting test: Advertising
      Warning: DsGetDcName returned information for \\ad02.domain.local, when we were trying to reach AD01.
      SERVER IS NOT RESPONDING or IS NOT CONSIDERED SUITABLE.
      ......................... AD01 failed test Advertising
      Starting test: FrsEvent
      There are warning or error events within the last 24 hours after the SYSVOL has been shared. Failing SYSVOL replication problems may cause Group Policy problems.
      Starting test: NCSecDesc
      Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have Replicating Directory Changes In Filtered Set access rights for the naming context: DC=ForestDnsZones,DC=domain,DC=local
      Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have Replicating Directory Changes In Filtered Set access rights for the naming context: DC=DomainDnsZones,DC=domain,DC=local
      Starting test: Replications
      [Replications Check,Replications Check] Inbound replication is disabled. To correct, run "repadmin /options AD01 -DISABLE_INBOUND_REPL"
      [Replications Check,AD01] Outbound replication is disabled. To correct, run "repadmin /options AD01 -DISABLE_OUTBOUND_REPL"

    So the problem appears to be that when I moved the Operations Master roles, AD01 never got the memo, and now that it's started up, all the other AD servers don't think it's the boss anymore when it tries to replicate etc. So I really need to manually update AD01 so that it knows who the Operations Master, Infrastructure Master and PDC are - but I'm not having any luck. I've been googling for nearly a day and all solutions lead to "the cake is a lie". Your ninja skills will be greatly appreciated.
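
    The dcdiag output actually hands over the first step: AD01 came back with replication switched off (a safeguard applied when a DC has been offline too long), so it can never learn that the roles moved. A sketch of the usual unsticking sequence, run on AD01, with the health warning that re-enabling replication like this is only safe if AD01 was offline for less than the forest's tombstone lifetime; otherwise demoting and re-promoting it is the cleaner path:

      netdom query fsmo
      repadmin /options AD01 -DISABLE_INBOUND_REPL
      repadmin /options AD01 -DISABLE_OUTBOUND_REPL
      repadmin /syncall /AdeP

    The first command confirms which DC really holds each role; the two repadmin lines are the exact commands dcdiag suggested (the minus sign clears the flag); the last forces a sync. Once replication flows again, AD01 should pick up AD02's role ownership on its own; if it still claims the roles afterwards, transfer them cleanly once more from AD02 (ntdsutil's "roles" menu or the AD snap-ins) rather than seizing.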

    Read the article

  • Accessing apache in ubuntu 10 virtualbox guest from ubuntu 10 host

    - by Francis L
    I did the following:

    - Installed VirtualBox 3.1.6 OSE on Ubuntu 10 desktop.
    - Installed Ubuntu 10 server on a virtual machine in VirtualBox.
    - Selected the "LAMP server" and "OpenSSH server" options during the Ubuntu server installation.
    - Left network "adapter 1" of the virtual machine as "NAT".
    - Used "VBoxManage" as described in the manual to set up port forwarding on the host (Protocol: TCP, GuestPort: 80, HostPort: 8080).
    - Verified the "ExtraDataItem" entries were added to "ubuntuServer1.xml" (my virtual machine's name) correctly.
    - Ran "pgrep apache" in the Ubuntu server in the virtual machine to ensure Apache is running.

    Everything went well. But when I try to access Apache from the browser on the host with "http://localhost:8080/", it just continues fetching with no response. Now I'm stuck! Please help! Many many thanks in advance!
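
    Two checks narrow this down quickly (standard commands; the VM name comes from the post). On the host, confirm the forwarding keys really got registered for this VM; in the guest, confirm Apache is bound to port 80 rather than merely running:

      VBoxManage getextradata "ubuntuServer1" enumerate
      # inside the guest:
      sudo netstat -tlnp | grep ':80'

    A classic gotcha with the old ExtraData-style NAT forwarding: the device name inside the key (e.g. pcnet vs e1000) must match the NIC type the VM actually emulates, and the keys only take effect after a full power-off and restart of the VM, not a reboot from inside the guest.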

    Read the article

  • Unable to start Apache on Ubuntu 12.10: no listening sockets available

    - by michalstanko
    I'm unable to start apache2, installed using apt-get. I'm getting the very same error on 2 separate Ubuntu 12.10 installations, one on my desktop PC, the other one running in VirtualBox:

      michal@michaltest:~$ sudo service apache2 start
      * Starting web server apache2
      apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
      no listening sockets available, shutting down
      Unable to open logs
      Action 'start' failed.
      The Apache error log may have more information.
      [fail]

    lsof says:

      michal@michaltest:/var/log/apache2$ sudo lsof -i :80
      COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
      ubuntu-ge 2074 michal 11u IPv4 23978 0t0 TCP michaltest.local:47578->mulberry.canonical.com:http (CLOSE_WAIT)
      firefox 25194 michal 71u IPv4 42477 0t0 TCP michaltest.local:59793->69.59.197.29:http (ESTABLISHED)
      firefox 25194 michal 76u IPv4 41834 0t0 TCP michaltest.local:59698->69.59.197.29:http (ESTABLISHED)
      gvfsd-htt 25320 michal 12u IPv4 42568 0t0 TCP michaltest.local:56203->lb260.amst.cotendo.net:http (CLOSE_WAIT)

    netstat says:

      michal@michaltest:/var/log/apache2$ sudo netstat -lnp | grep '80'
      unix 2 [ ACC ] STREAM LISTENING 8030 876/acpid /var/run/acpid.socket

    /var/log/apache2/error.log:

      [Thu Nov 08 11:13:30 2012] [notice] Apache/2.2.22 (Ubuntu) configured -- resuming normal operations
      [Thu Nov 08 11:17:32 2012] [notice] caught SIGTERM, shutting down

    /etc/apache2/ports.conf:

      NameVirtualHost *:80
      Listen 80
      <IfModule mod_ssl.c>
          Listen 443
      </IfModule>
      <IfModule mod_gnutls.c>
          Listen 443
      </IfModule>

    Thanks for your help.

    EDIT #1:

      michal@michaltest:~$ sudo netstat -ano | grep '443'
      tcp 54 0 10.0.2.15:58504 91.189.92.70:443 CLOSE_WAIT off (0.00/0/0)
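
    Since ports.conf clearly contains Listen 80, the first thing to verify is that Apache is actually reading it (a diagnostic sketch; default Debian/Ubuntu layout assumed):

      grep -n "ports.conf" /etc/apache2/apache2.conf
      sudo apache2ctl -S

    "no listening sockets available" means the fully parsed configuration ended up with no usable Listen directive at all: typically a missing or commented-out Include of ports.conf, an override elsewhere in conf.d or sites-enabled, or Listen lines wrapped in an <IfModule> whose module never loads. apache2ctl -S prints what was actually parsed, which settles it quickly.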

    Read the article

  • Compiling Mono on Fedora 7

    - by Gary
    Trying to install Mono from source on Fedora 7. Running

      # ./configure --prefix=/opt/mono

    works fine, but doing the make

      # make ; make install

    ends up with the following:

      Makefile:93: warning: overriding commands for target `csproj-local'
      ../build/executable.make:131: warning: ignoring old commands for target `csproj-local'
      make install-local
      make[6]: Entering directory `/opt/mono-2.6.4/mcs/mcs'
      Makefile:93: warning: overriding commands for target `csproj-local'
      ../build/executable.make:131: warning: ignoring old commands for target `csproj-local'
      MCS [basic] mcs.exe
      typemanager.cs(2047,40): error CS0103: The name `CultureInfo' does not exist in the context of `Mono.CSharp.TypeManager'
      Compilation failed: 1 error(s), 0 warnings
      make[6]: *** [../class/lib/basic/mcs.exe] Error 1
      make[6]: Leaving directory `/opt/mono-2.6.4/mcs/mcs'
      make[5]: *** [do-install] Error 2
      make[5]: Leaving directory `/opt/mono-2.6.4/mcs/mcs'
      make[4]: *** [install-recursive] Error 1
      make[4]: Leaving directory `/opt/mono-2.6.4/mcs'
      make[3]: *** [profile-do--basic--install] Error 2
      make[3]: Leaving directory `/opt/mono-2.6.4/mcs'
      make[2]: *** [profiles-do--install] Error 2
      make[2]: Leaving directory `/opt/mono-2.6.4/mcs'
      make[1]: *** [install-exec] Error 2
      make[1]: Leaving directory `/opt/mono-2.6.4/runtime'
      make: *** [install-recursive] Error 1

    I've been following the instructions at http://ruakuu.blogspot.com/2008/06/installing-and-configuring-opensim-on.html. This is all in an effort to get OpenSimulator running.
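
    Background that explains the failure mode: mcs, the compiler failing here, is itself written in C#, so building Mono 2.6.x from a tarball needs a working C# compiler already on the box to bootstrap with, and a missing or mismatched bootstrap compiler surfaces as exactly this kind of missing-symbol error (CultureInfo lives in System.Globalization). A hedged way around it on an RPM system; the package name is the stock Fedora one, and its availability for Fedora 7 is an assumption:

      # install a prebuilt mono to bootstrap, then build from source again
      yum install mono-core
      ./configure --prefix=/opt/mono && make && make install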

    Read the article

  • Vagrant: VirtualBox: Headless Ubuntu: How to set up bridged networking?

    - by Jay Godse
    I am trying to set up a Vagrant VirtualBox (v4.2.4) virtual machine with an Ubuntu "box" which I got from www.vagrantbox.es. I was able to use Vagrant to set it up as a headless box and start it, and then I was able to ssh locally into it (using 127.0.0.1:2222), connect to the internet and run a bunch of "sudo apt-get" commands to update it and install new software. I would like to be able to access this virtual machine on my network, so I need a bridged network adapter for the virtual box. When I went to the VirtualBox console for this device and tried to set up bridged networking, it said that I needed the "guest additions". I tried to install them and I couldn't get the .iso file for the guest additions. I went elsewhere on the 'net and it seems that I had to run "sudo apt-get install virtualbox-guest-additions-iso" from the virtual machine in order to get bridged networking. I tried this, and it installed fine after a couple of reboots. I then tried to set up bridged networking again (VirtualBox console: Devices -> Network Adapters...) but it didn't work. What, or what else, do I need to do to set up bridged networking in this virtual machine? I appreciate any help that I can get.
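
    One point of confusion worth flagging: bridged networking is a host-side setting on the VM and does not require the guest additions at all; the additions matter for shared folders, clipboard and display, not for NIC modes. It can be set from the host while the VM is powered off, or declared in the Vagrantfile so Vagrant applies it on vagrant up. Both shown as sketches; VM and adapter names are placeholders:

      VBoxManage modifyvm "vm_name" --nic2 bridged --bridgeadapter2 eth0

    or, in the Vagrant 1.0-era Vagrantfile syntax matching this post's vintage:

      config.vm.network :bridged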

    Read the article
