Search Results

Search found 15906 results on 637 pages for 'scott and the dev team'.

Page 534/637 | < Previous Page | 530 531 532 533 534 535 536 537 538 539 540 541  | Next Page >

  • Difficulty restoring a differential backup in SQL Server, 2 media families are expected or no files

    - by digiguru
    I have SQL backups copied from server A to server B on a nightly basis. We want to move the SQL Server instance from server A to server B without much downtime, but the files are very large. I assumed that performing a differential backup and restore would solve the problem. My steps:

    1. Copy the full backup from server A to server B (10+ GB).
    2. Open SQL Server Management Studio on server B.
    3. Right-click Databases and choose Restore Database.
    4. Type in the new DB name, choose "From Device", browse to the backup file, and click OK. This restores the original full backup.
    5. Test the new DB with the dev application - everything works :)
    6. On the original database, right-click the DB, then Tasks, Back Up... Backup type = Differential, back up to disk, add a new file, and remove the old one (it needs to be a small file to transfer for the smallest amount of outage).
    7. Copy the differential backup over to the new server.
    8. Right-click the DB, then Tasks, Restore Database.

    This is where I get stuck. If I add both the new differential file and the original backup to the restore process, I get an error:

        The media loaded on "M:\path\to\backup\full.bak" is formatted to support 1 media families, but 2 media families are expected according to the backup device specification. RESTORE HEADERONLY is terminating abnormally.

    But if I try to restore using just the differential file, I get:

        System.Data.SqlClient.SqlError: The log or differential backup cannot be restored because no files are ready to rollforward. (Microsoft.SqlServer.Smo)

    Any idea how to do it? Is there a better way of restoring backups with limited downtime?
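
    A sketch of what usually fixes both errors (database and path names below are placeholders): run the two files as two separate restore operations rather than one device list, restore the full backup WITH NORECOVERY so the database stays ready to roll forward, then apply the differential WITH RECOVERY. Via sqlcmd on server B:

        # restore the full backup first, leaving the DB waiting for further backups
        sqlcmd -S serverB -Q "RESTORE DATABASE NewDB FROM DISK = N'M:\path\to\backup\full.bak' WITH NORECOVERY, REPLACE"
        # then apply the differential and bring the DB online
        sqlcmd -S serverB -Q "RESTORE DATABASE NewDB FROM DISK = N'M:\path\to\backup\diff.bak' WITH RECOVERY"

    The "2 media families" message appears when both files are added to a single restore device list, which is interpreted as one striped backup set; the "no files are ready to rollforward" message is the earlier full restore having already recovered the database (the default), after which differentials can no longer be applied.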

    Read the article

  • The frame buffer layout of the current display cannot be made to match...

    - by adambox
    I get this error when I have a VM running in VMware Workstation on my work PC and try Remote Desktop-ing in from my iMac at home. I just upgraded VMware Workstation from 6.0 to 7.0, and now I'm getting it when I try to resume my VM at work. It then asks me the scary question of whether I want to preserve or discard the suspended state. I don't want to lose stuff! Ack!

    Update: I backed up my VM and tried hitting that "discard" button, and the result was a reboot of the VM. I then tried restoring to a snapshot, and none of my snapshots work! Is there any way to fix this so I can run 7 but still have my old snapshots?

        The frame buffer layout of the current display cannot be made to match the frame buffer layout stored in the snapshot.
        The dimensions of the frame buffer in the snapshot are: Max width 3200, Max height 1770, Max size 22659072.
        The dimensions of the frame buffer on the current display are: Max width 3200, Max height 1600, Max size 20512768.
        Error encountered while trying to restore the virtual machine state from file "C:\Documents and Settings\adam\My Documents\My Virtual Machines\dev\Windows XP Professional.vmss".

    What do I do so I don't break things horribly?
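
    One workaround worth trying before giving up on the snapshots (an assumption based on how Workstation sizes the guest frame buffer, not a verified fix): with the VM powered off, pin the frame buffer in the .vmx to at least the dimensions recorded in the snapshot, so the current display can match the saved layout:

        # lines appended to "Windows XP Professional.vmx" while the VM is off
        # (values taken from the error text above)
        svga.maxWidth = "3200"
        svga.maxHeight = "1770"
        svga.vramSize = "22659072"

    Remote Desktop sessions can report different maximum resolutions than the physical console, which is a plausible reason the saved state stopped matching when resuming over RDP.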

    Read the article

  • Replacing DropBox with: Amazon S3 + SSL + GPG/TrueCrypt + Mounting on OSX ??

    - by Matt Rogish
    So, right now we're using Dropbox to share various data files between approximately 10 Mac OS X systems. However, we already have an S3 account, and putting everyone on the lowest Dropbox plan at $10/mo seems too expensive. We'd also like to avoid any kind of local storage (sharing a disk on a desktop or something), since we're a geographically distributed team. So I am contemplating something that would let us replace Dropbox with our own home-grown solution. We are all fairly technical people and/or smart enough to follow some steps, so if it's not as "user friendly" as Dropbox, we're all comfortable with that. There are plenty of docs out there with bits and pieces of what I want, but some of the tools don't seem to fit the requirements:

    1. Transport security via SSL to the bucket
    2. Encryption of bucket contents
    3. Bi-directional syncing

    Most of the scripts I can find on the internet use "duplicity", which appears to fail #1 (it doesn't look like duplicity supports SSL to S3 - the docs don't say, but the protocol looks like plain old HTTP: http://www.nongnu.org/duplicity/duplicity.1.html#sect6). Many scripts use GPG to encrypt files. This seems like it could work, but I have to make sure that each OS X client is able to use the same key to encrypt and decrypt files (key management is left to me). FTP and other client-based apps don't seem to support this at all. Finally, most of the scripts use one-way replication, e.g. using Amazon S3 as a simple backup store; since we'd be using Amazon S3 as the "repository", they fail #3. Whew. So, I'd love a single tool that does all this, but after an exhaustive search I don't think one exists. In my mind, the magical tool would be some combination of TrueCrypt and rsync. I'd be happy just knowing which tools out there can fulfill my 3 requirements; after that I can stitch together the rest. Any thoughts? THANKS!
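
    For requirements 1 and 2 at least, a rough sketch of the home-grown direction (hedged: s3cmd speaks HTTPS when use_https = True is set in ~/.s3cfg; the bucket name and key file below are made up, and this has no conflict resolution at all):

        SRC=~/shared; ENC=~/shared.gpg
        mkdir -p "$ENC"
        for f in "$SRC"/*; do
            # symmetric GPG with one passphrase file distributed to every client,
            # so key management stays as simple as copying one file around
            gpg --batch --yes -c --passphrase-file ~/.teamkey \
                -o "$ENC/$(basename "$f").gpg" "$f"
        done
        s3cmd sync "$ENC/" s3://our-team-bucket/shared/    # push local changes
        s3cmd sync s3://our-team-bucket/shared/ "$ENC/"    # pull remote changes

    Two opposing sync passes give a crude form of bi-directional sync, but last-writer-wins conflicts are exactly the kind of thing Dropbox handles for you; that trade-off is worth weighing before committing to this route.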

    Read the article

  • How to import certificate for Apache + LDAPS?

    - by user101956
    I am trying to get LDAPS to work through Apache 2.2.17 (Windows Server 2008). If I use ldap (plain text), my configuration works great:

        LDAPTrustedGlobalCert CA_DER C:/wamp/certs/Trusted_Root_Certificate.cer
        LDAPVerifyServerCert Off
        <Location />
            AuthLDAPBindDN "CN=corpsvcatlas,OU=Service Accounts,OU=u00958,OU=00958,DC=hca,DC=corpad,DC=net"
            AuthLDAPBindPassword ..removed..
            AuthLDAPURL "ldaps://gc-hca.corpad.net:3269/dc=hca,dc=corpad,dc=net?sAMAccountName?sub"
            AuthType Basic
            AuthName "USE YOUR WINDOWS ACCOUNT"
            AuthBasicProvider ldap
            AuthUserFile /dev/null
            require valid-user
        </Location>

    I also tried the other encryption choices besides CA_DER just to be safe, with no luck. Finally, I also needed this with Apache Tomcat. For Tomcat I used the Tomcat JRE and ran a line like this:

        keytool -import -trustcacerts -keystore cacerts -storepass changeit -noprompt -alias mycert -file Trusted_Root_Certificate.cer

    After doing the above, LDAPS worked great via Tomcat. This tells me that my certificate is OK.

    Update: Both LDAP modules are turned on, since using ldap instead of ldaps works fine. When I run a git clone, this is the error returned:

        C:\Temp> git clone http://eqb9718@localhost/git/Liferay.git
        Cloning into Liferay...
        Password:
        error: The requested URL returned error: 500 while accessing http://eqb9718@localhost/git/Liferay.git/info/refs
        fatal: HTTP request failed

    access.log has this:

        127.0.0.1 - eqb9718 [23/Nov/2011:18:25:12 -0600] "GET /git/Liferay.git/info/refs service=git-upload-pack HTTP/1.1" 500 535
        127.0.0.1 - eqb9718 [23/Nov/2011:18:25:33 -0600] "GET /git/Liferay.git/info/refs HTTP/1.1" 500 535

    apache_error.log has nothing. Is there any more verbose logging I can turn on, or better tests to do?
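
    Two low-risk things to try (assumptions, not confirmed fixes for this setup): some Apache builds are happier with a PEM-encoded CA loaded as CA_BASE64, and raising the log level is the quickest way to surface whatever SSL handshake failure mod_ldap is hiding behind the 500:

        # convert the DER root certificate to PEM
        openssl x509 -inform DER -in Trusted_Root_Certificate.cer -out Trusted_Root_Certificate.pem
        # then in httpd.conf:
        #   LDAPTrustedGlobalCert CA_BASE64 C:/wamp/certs/Trusted_Root_Certificate.pem
        #   LogLevel debug

    With LogLevel debug, the failing requests on /git/.../info/refs should leave mod_ldap/mod_authnz_ldap lines in the error log explaining whether it's the LDAPS connection or the bind that fails.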

    Read the article

  • Ext3 fs: Block bitmap for group 1 not in group (block 0). is fs dead?

    - by ip
    Hi, my company has a server with one big partition holding a MySQL database and PHP files. This partition now seems to be corrupted, as reported by kernel messages when I tried to mount it manually:

        [329862.817837] EXT3-fs error (device loop1): ext3_check_descriptors: Block bitmap for group 1 not in group (block 0)!
        [329862.817846] EXT3-fs: group descriptors corrupted!

    I've tried to recover it by running tools from a PLD live CD. These are the tools I have tested, without any success:

    - e2retrieve
    - testdisk
    - photorec
    - dd_rescue/dd_rhelp
    - ddrescue
    - fsck.ext2
    - e2salvage

    dumpe2fs output:

        dumpe2fs 1.41.3 (12-Oct-2008)
        Filesystem volume name:   /dev/sda3
        Last mounted on:          <not available>
        Filesystem UUID:          dd51610b-6de0-4392-a6f3-67160dbc0343
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal filetype sparse_super
        Default mount options:    (none)
        Filesystem state:         not clean with errors
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              9502720
        Block count:              18987570
        Reserved block count:     949378
        Free blocks:              11555345
        Free inodes:              11858398
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         16384
        Inode blocks per group:   512
        Last mount time:          Wed Mar 24 09:31:03 2010
        Last write time:          Mon Apr 12 11:46:32 2010
        Mount count:              10
        Maximum mount count:      30
        Last checked:             Thu Jan  1 01:00:00 1970
        Check interval:           0 (<none>)
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               128
        Journal inode:            8
        Journal backup:           inode blocks
        dumpe2fs: A block group is missing an inode table while reading journal inode

    Are there any other tools I should test before considering this disk definitely unrecoverable? Many thanks, ip
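
    One avenue that isn't in the list above: ext3 keeps backup copies of the superblock and group descriptors, and e2fsck can be pointed at one of them. A sketch (first image the partition, so every attempt runs against a copy rather than the failing disk):

        dd if=/dev/sda3 of=/backup/sda3.img conv=noerror,sync   # work on an image from here on
        mke2fs -n /dev/sda3          # -n prints the backup superblock locations without writing anything
        e2fsck -b 32768 -B 4096 /backup/sda3.img   # 32768 is the usual first backup for 4K-block filesystems

    Since it's the primary group descriptors that are corrupted, a consistent backup set is exactly the case this mechanism is meant for.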

    Read the article

  • playsms sent sms to queue

    - by user512213
    I installed playSMS on my system following this guide: https://github.com/antonraharja/playSMS/wiki/Web-User-Interface-Installation I use Kannel as a gateway, and Kannel runs fine; I get the following status when I check Kannel in the browser:

        Kannel bearerbox version `1.4.3'. Build `Nov 24 2011 08:02:18', compiler `4.6.2'.
        System Linux, release 3.2.0-23-generic-pae, version #36-Ubuntu SMP Tue Apr 10 22:19:09 UTC 2012, machine i686.
        Hostname MSSRF-INCOIS, IP 127.0.1.1. Libxml version 2.7.8.
        Using OpenSSL 1.0.0e 6 Sep 2011. Compiled with MySQL 5.5.17, using MySQL 5.5.24.
        Using SQLite 3.7.9. Using native malloc.

        Status: running, uptime 0d 1h 8m 27s

        WDP: received 0 (0 queued), sent 0 (0 queued)
        SMS: received 0 (0 queued), sent 0 (0 queued), store size -1
        SMS: inbound (0.00,0.00,0.00) msg/sec, outbound (0.00,0.00,0.00) msg/sec
        DLR: 0 queued, using internal storage

        Box connections:
            wapbox, IP 127.0.0.1 (on-line 0d 1h 8m 2s)
            wapbox, IP 127.0.0.1 (on-line 0d 0h 18m 11s)

        SMSC connections:
            unknown AT2[/dev/ttyS0] (online 4106s, rcvd 0, sent 0, failed 0, queued 0 msgs)

    But when I try to send an SMS from playSMS, it shows "Your SMS has been delivered to queue", yet the SMS is never received. Is there anything we missed during configuration? Can anyone advise me on how to proceed?
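
    A quick isolation test (hedged: this assumes Kannel's sendsms interface is on its default port 13013 and that a sendsms-user exists in kannel.conf - the credentials and number below are placeholders): send a message to Kannel directly, bypassing playSMS:

        curl "http://localhost:13013/cgi-bin/sendsms?username=tester&password=foobar&to=%2B9199xxxxxxxx&text=hello"

    If this also only queues, the problem sits between Kannel and the modem rather than in playSMS. The status line "unknown AT2[/dev/ttyS0] ... sent 0" already hints the SMSC link has never actually sent anything, so /var/log/kannel/bearerbox.log and the AT2 modem settings (device, speed, PIN) are the next place to look.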

    Read the article

  • Too much memory consumed during TFS automated build

    - by Bernard Chen
    We're running TFS 2010 Standard Edition, and we've set up an automated build to run whenever someone checks in code. We run all of the automated tests (built with MSTest) as part of the build. We've configured the build to run the tests as a 64-bit process, but the QTAgent.exe that runs the tests grows in memory while the tests are running. It currently reaches 8 GB for the ~650 tests we have, and the process slowed significantly when we went from 450 tests to 650 tests. When we run all of the tests in the local development environment, memory seems to be freed at least with each TestClass and never exceeds a certain level, and the total run time has not increased significantly. Is there a way to configure the build service to free up memory with each Test or each TestClass? The way things currently run, the build process gets very slow when we start to run out of memory on the machine.

    Edit: I found the MSTest invocation in the build log, ran it manually, and saw the same runaway memory. I removed the /publish, /publishbuild, /teamproject, /platform, and /flavor parameters from the invocation, in case the test runner was holding onto results until the end, but the behavior didn't change. I ran the same command line on a dev box, separate from the build server, and the memory was freed up frequently. It seems there must be something wrong or different about the build server that makes it behave differently, but I'm stumped as to where to look. I've looked at qtagent.exe.config, mstest.exe.config, and the versions of both executables. What else might affect this?
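
    If no configuration difference turns up, one blunt workaround is to bound the damage by splitting the run so the test process is recycled between chunks. A sketch (the /testmetadata and /testlist switches are standard MSTest, but the metadata file and list names here are invented, and it's written as a shell loop only for brevity - on the build server these would be separate build steps):

        # run the suite in chunks; each MSTest invocation gets a fresh QTAgent
        for list in UnitTests IntegrationTests ServiceTests; do
            MSTest.exe /testmetadata:Tests.vsmdi /testlist:$list /resultsfile:$list.trx
        done

    This doesn't explain the leak, but it caps the per-process working set so the box stops swapping mid-build.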

    Read the article

  • AWStats consumes too much resource, how to disable temporarily

    - by trante
    For some days now, AWStats has been taking 10-20% of my CPU, using 400-550 MB of RAM, and running for hours. Maybe my site's traffic has grown so processing takes longer than before, or maybe it's a bug. Either way, I want to disable AWStats temporarily - I may want to reactivate it in the future. I found an answer for this, but it gives commands to remove AWStats; I only want to disable it temporarily. My system is CentOS 6.3, Plesk 11.5.30 Update #19.

    I tried to disable the cron jobs. I ran:

        # killall awstats.pl

    Then I opened /etc/cron.daily/awstats in vi and changed it to this:

        #!/bin/sh
        #/usr/share/awstats/awstats_updateall.pl now -awstatsprog=/var/www/cgi-bin/awstats/awstats.pl -configdir=/etc/awstats >/dev/null 2>&1
        exit 0

    After changing /etc/cron.daily/awstats, AWStats no longer starts during the day, but every night at 03:15 it starts again. Because Plesk auto-updates run at that time, I turned automatic updates off in Plesk, but it seems AWStats still started at 03:15 last night. Is there any way to stop AWStats temporarily, other than the solution below? That one deletes the AWStats configs permanently, and I don't know how I would revert it later.

    Turn off all AWStats for Plesk 11+ domains:

        #!/bin/bash
        for i in /var/www/vhosts/*; do
            echo "Turning off and deleting Stats for"
            echo `basename $i`
            /usr/local/psa/admin/bin/webstatmng --unset-configs --stat-prog=awstats --domain-name=`basename $i`
            /usr/local/psa/admin/bin/webstatmng --clean --stat-prog=awstats --domain-name=`basename $i`
        done
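
    The 03:15 run is most likely Plesk's own daily maintenance task invoking the statistics utility, not the cron.daily file (this is an assumption from the timing; the exact script name varies by Plesk version). A reversible way to stop both triggers without deleting any config:

        # park the cron job instead of editing it; restore later with mv back
        mv /etc/cron.daily/awstats /root/awstats.cron.disabled
        # make any Plesk-driven run a no-op by removing the execute bit; undo with chmod +x
        chmod -x /var/www/cgi-bin/awstats/awstats.pl

    Both steps leave every config file in place, so re-enabling is just reversing the two commands.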

    Read the article

  • Problems migrating software RAID 5 to new server (linux)

    - by leleu
    I have a CentOS setup with software RAID 5 that holds my data. Well, the server died, so I bought another box to migrate my drives to. The only thing is, I cannot get the RAID array rebuilt (I'm not even sure it needs rebuilding - it might just need the /dev/md0 mapping created... but I don't even know how to determine what I need!). Some details:

    - RAID 5, software (built using mdadm)
    - 4x 250 GB drives (2 are SATA, 2 are EIDE -- would this matter? It worked fine in the other box...)
    - latest CentOS distro

    I've got a decent amount of experience with standard Linux stuff, but the hardware-level stuff runs me in circles. I've spent some time googling and elsewhere here on SF, so please be kind about my newbie questions :). My question is this: how can I diagnose the problem? For all I know, I'm using the wrong device blocks when I try to rebuild the array, but I can't find the command to display only the devices that have some physical attachment. Is there some simple way for me to run mdadm, have it scan over all my physical drives, and say "hey, drives 2,5,6,7 are a software array, want me to mount it?" I basically just took the drives from my old box and put them into the new one. They show up in the BIOS. What steps do I need to take in order to get the array up, running, and mounted? Thanks in advance!
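
    mdadm can do almost exactly the "scan and offer to mount" described above. A sketch (device names depend on how the new box enumerates the drives, so treat /dev/sd[a-d]1 and the mount point as placeholders):

        mdadm --examine /dev/sd[a-d]1   # print each partition's RAID superblock, showing which belong together
        mdadm --examine --scan          # emit an ARRAY line suitable for /etc/mdadm.conf
        mdadm --assemble --scan         # assemble /dev/md0 from the superblocks it found
        cat /proc/mdstat                # confirm the array state before mounting
        mount /dev/md0 /mnt/data

    Because the array identity lives in the on-disk superblocks rather than in any controller, the SATA/EIDE mix and the changed device names shouldn't matter for assembly.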

    Read the article

  • SuperMicro BMC on OpenSuSE Linux --cannot access from LAN

    - by Kendall
    Hi, I have an (old) SMC-001 IPMI device on an (old) X6DVL-EG2 motherboard. My problem is that I cannot access the BMC over the LAN. I'm also getting some interesting output from ipmitool.

    First, the setup. I enabled Console Redirection in the BIOS and set BIOS Redirection after POST to "disabled". I then modprobe'd ipmi_msghandler, ipmi_devintf and ipmi_si, and found ipmi0 under /dev. So far so good. Since I want console redirection over serial, I modified /boot/grub/menu.lst (http://pastebin.com/YYJmhusQ) and then modified /etc/inittab as follows:

        S1:12345:respawn:/sbin/agetty -L 19200 ttyS1 ansi

    I set networking as follows, using ipmitool:

        ipaddr:  192.168.3.164
        netmask: 255.255.255.0
        defgw:   192.168.3.1

    The above are correct for my environment. To test it I do:

        ipmitool -I open chassis power off

    which responds by powering off the machine. When I try to access it from another computer on the network, however, I get an error:

        host# ipmitool -I lanplus -H 192.168.10.164 -U Admin -a chassis power status
        Error: Unable to establish LAN session
        Unable to get Chassis Power Status

    "Admin" seems to be a valid user name:

        host# ipmitool -I open user list 1
        2 Admin true false true USER

    The interesting output from ipmitool I initially mentioned:

        host# ipmitool -I open lan set 1 access on
        Set Channel Access for channel 1 failed: Request data field length limit exceeded

    Also:

        newload4:/home/gjones # ipmitool channel info 1
        Channel 0x1 info:
          Channel Medium Type   : 802.3 LAN
          Channel Protocol Type : IPMB-1.0
          Session Support       : session-less
          Active Session Count  : 0
          Protocol Vendor ID    : 7154
        Get Channel Access (volatile) failed: Request data field length limit exceeded

    The output of "ipmitool -I open lan print 1" is here: http://pastebin.com/UZyL6yyE Any help or suggestions are greatly appreciated; I've been working with this thing for a few hours now with no success.
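
    The "Session Support : session-less" line is suspicious: a LAN channel that accepts remote sessions should report session-based or multi-session support, so channel 1 may not actually be the LAN channel on this BMC. A hedged check (channel numbering varies by board; 0-7 below is just a sweep):

        for ch in 0 1 2 3 4 5 6 7; do
            echo "--- channel $ch ---"
            ipmitool -I open channel info $ch
        done

    If another channel shows Medium Type 802.3 LAN with session support, repeat the lan set / lan print commands against that channel number instead. Separately, note the remote test used -H 192.168.10.164 while the BMC was configured as 192.168.3.164; if that's not a transcription slip, it would explain the failure on its own.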

    Read the article

  • setting up synergy (multi-computer shared input devices) new to mac

    - by pedalpete
    I've just gotten a Mac, and the first thing I'm trying to do in setting it up for dev is to set up Synergy. I've run into countless issues and am making progress, but that "Macs just work" theory is getting tired really fast. I'd prefer running the Mac as the server, since it's a desktop - why use my PC (a laptop) as the server? So I downloaded Synergy and tried to edit the synergy.conf file, first with TextMate, then with AppleScript. Neither of these applications would open the file, even after switching TextMate to plain-text format and turning off a few other things to make it behave like a plain-text editor. Uh, isn't that what these script/text editors are for? Then I found SynergyKM, a package for running Synergy on OS X. It took LOTS of fiddling, but I think I finally, kinda, got it figured out: I've got my PC and Mac both running Synergy. However, my PC will only connect to Synergy using the IP address. No problem - except my Mac won't connect to itself as a Synergy server using the IP address. If I use the IP address as a screen in the server configuration file, I get "Error: unknown screen name mymac". I know this shouldn't be super complicated, and this is possibly not the place to find answers about Synergy, but I couldn't find a better place, and there has been some talk of Synergy here. Any chance somebody can help with this?
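
    Synergy matches screen names against hostnames, so a machine referenced by bare IP usually needs an alias. A hedged synergy.conf sketch ("mymac", "mypc" and the IP are placeholders for your actual names):

        section: screens
            mymac:
            mypc:
        end
        section: links
            mymac:
                right = mypc
            mypc:
                left = mymac
        end
        section: aliases
            mymac:
                192.168.1.10
        end

    With the aliases block, a connection that identifies itself as 192.168.1.10 is treated as the screen "mymac", which is exactly the "unknown screen name" case above.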

    Read the article

  • Can't compile CentOS 5, Ruby 1.9.2 and OpenSSL 1.0.0c

    - by pstinnett
    I'm trying to install Ruby 1.9.2 on CentOS 5.5. I get through most of the make process, but when it tries to compile OpenSSL I get an error. Below is the error output:

        compiling openssl
        make[1]: Entering directory `/sources/ruby-1.9.2-p136/ext/openssl'
        gcc -I. -I../../.ext/include/x86_64-linux -I../.././include -I../.././ext/openssl -DRUBY_EXTCONF_H=\"extconf.h\" -fPIC -O3 -ggdb -Wextra -Wno-unused-parameter -Wno-parentheses -Wpointer-arith -Wwrite-strings -Wno-missing-field-initializers -Wno-long-long -o ossl_x509.o -c ossl_x509.c
        In file included from ossl.h:201,
                         from ossl_x509.c:11:
        openssl_missing.h:71: error: conflicting types for 'HMAC_CTX_copy'
        /usr/include/openssl/hmac.h:102: error: previous declaration of 'HMAC_CTX_copy' was here
        openssl_missing.h:95: error: conflicting types for 'EVP_CIPHER_CTX_copy'
        /usr/include/openssl/evp.h:459: error: previous declaration of 'EVP_CIPHER_CTX_copy' was here
        make[1]: *** [ossl_x509.o] Error 1
        make[1]: Leaving directory `/sources/ruby-1.9.2-p136/ext/openssl'
        make: *** [mkmain.sh] Error 1

    Any help would be greatly appreciated! I'm not a master at Linux by any means, but I was able to successfully install this version of Ruby on our dev server. Our live server is running a newer version of OpenSSL, which I assume is why it's breaking. Just not sure what the fix is!
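
    That assumption about the OpenSSL version looks right: ruby 1.9.2-p136's ext/openssl ships compatibility shims (openssl_missing.h) for functions that didn't exist before OpenSSL 1.0, and on a box with 1.0.x headers the shims collide with the real declarations. Two hedged ways out (the version number and path below are examples, not requirements):

        # 1) use a later 1.9.2 patchlevel, which updated those compatibility probes
        wget http://ftp.ruby-lang.org/pub/ruby/1.9/ruby-1.9.2-p290.tar.gz
        # 2) or build Ruby against a specific OpenSSL install instead of the system headers
        ./configure --with-openssl-dir=/usr/local/openssl && make && make install

    Option 1 is the lower-effort path if nothing pins you to p136.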

    Read the article

  • OpenVPN and TomatoVPN

    - by Bill Johnson
    Wondering if someone can help me with the following. I have updated my Linksys router with TomatoVPN and used the following config:

        Interface Type:     TAP
        Protocol:           UDP
        Port:               1195
        Firewall:           Custom
        Authorization Mode: Static Key

    I then inserted the static key generated in OpenVPN, saved, and started the service. connect.ovpn:

        # Use the following to have your client computer send all traffic through your router
        # (remote gateway)
        remote (entered my DNS/DHCP server's external IP address here)
        port 1195
        dev tap
        secret static.key.txt
        proto udp
        comp-lzo
        route-gateway 192.168.1.1
        redirect-gateway
        float

    I've placed my static key in a file named static.key.txt in the same directory as connect.ovpn. OpenVPN is installed on a laptop that I use at home. I plugged the laptop into my home connection and started connect.ovpn. The local area connection is connected as "Home Network 3", and when I start OpenVPN it connects as "Local Area Connection 2", which shows as "Unidentified Network" with, apparently, no network access. The adapter is named "TAP-Win32 Adapter V9", and its IP and DNS properties are set to automatic. If I open the OpenVPN GUI, it shows an error message saying "Connecting to connect has failed". Looking at the error message behind the pop-up, one line says:

        TCP/UDP: Socket bind failed on local address [undef]:1195: Address already in use (WSAEADDRINUSE)

    Could anyone possibly help me further with this, please?
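
    The bind failure has a common, simple cause (an educated guess rather than a certainty for this setup): the client is trying to reserve local port 1195, which another process - often a second lingering OpenVPN instance - already holds. The client doesn't need a fixed local port at all, so letting it pick an ephemeral one usually clears WSAEADDRINUSE:

        # add to connect.ovpn: don't bind a fixed local port
        nobind

    To see what currently owns the port on the Windows laptop:

        netstat -ano | findstr :1195

    The PID in the last column can then be matched up in Task Manager.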

    Read the article

  • SAN with iSCSI-Target Performance Horrendous

    - by Justin
    We have a poor man's SAN set up on a 1U Ubuntu server running iSCSI-Target, with two 300 GB drives in RAID 0. We use it for block-level storage for virtual machines. The hypervisor is connected to the SAN via gigabit, on a dedicated VLAN and interfaces. We have only a single virtual machine set up and are doing some benchmarks. If we run hdparm -t /dev/sda1 from the virtual machine, we get 'OK' performance of 75 MB/s from the virtual machine to the SAN. Then we basically compile a package with ./configure and make. Things start OK, but then all of a sudden the load average on the SAN grows to 7+ and things slow to a crawl. When we SSH into the SAN and run top, the load is indeed 7+, but CPU usage is basically nothing, and the server has 1.5 GB of memory available. When we kill the compile on the virtual machine, the load on the SAN slowly drops back below 1. What in the world is causing this? How can we diagnose it further? Here are two screenshots from the SAN during high load, showing the output of iotop and top.
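
    Load climbing while the CPU stays idle usually means processes stuck in uninterruptible I/O wait, which counts toward the load average on Linux. Some standard checks to confirm (nothing here is specific to iSCSI-Target):

        iostat -x 2      # watch %util and await on the RAID-0 member disks during a compile
        vmstat 2         # "b" column = processes blocked on I/O, "wa" = iowait percentage
        ps -eo state,pid,cmd | awk '$1=="D"'   # list tasks in uninterruptible sleep

    A compile generates many small synchronous writes, a very different workload from hdparm's large sequential read; if %util pins near 100% with high await while throughput stays low, the target's disks are seek-bound, and the RAID-0 stripe won't help - write-back caching or more spindles would be the usual levers.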

    Read the article

  • How do I enable the confluence-users group?

    - by M. Joanis
    I've got an issue with Atlassian Confluence: normal users can't log in, but administrators can... details below!

    I manage users via an Apple Open Directory (LDAP) server. I created two groups, "confluence-administrators" and "confluence-users". I added team leaders and managers to both groups, and added some users to "confluence-users". Everyone in "confluence-administrators" can log in easily; people in "confluence-users" can't log in at all. When I look at the user list (in Confluence) and select a user to examine the groups he or she belongs to, I can see that the Confluence administrators are indeed members of the "confluence-administrators" group, but not a single user is a member of the "confluence-users" group - not even the Confluence administrators, who are members of both groups! So I tried having one of the "confluence-users" log in while watching the Confluence logs. Here's the result:

        2012-07-05 14:50:19,698 ERROR [http-8090-11] [core.event.listener.AutoGroupAdderListener] handleEvent Could not auto add user to group: Group <confluence-users> is read-only and cannot be updated
            at com.atlassian.crowd.directory.DbCachingRemoteDirectory.addUserToGroup(DbCachingRemoteDirectory.java:461)
        ...

    So it says the group is read-only... I'm not sure why that's a problem - "confluence-administrators" is read-only too, and it doesn't complain. Some things I don't think are part of the problem:

    - I've synchronized Confluence with LDAP many, many times.
    - I have verified many times that I didn't make a typo while setting up the groups on the LDAP server.
    - LDAP synchronization goes well; no errors in the logs (only INFO-level log messages).
    - The users exist; the errors in the logs are different when a user doesn't exist.

    Any help is most welcome!
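
    One comparison worth making (hedged - the server, base DN, and attribute names below are placeholders, and which membership attribute applies depends on the directory settings configured in Confluence): dump what Open Directory actually stores as members for each group and check whether the two groups use the same attribute:

        ldapsearch -x -H ldap://odserver.example.com \
            -b "cn=groups,dc=example,dc=com" "(cn=confluence-users)" \
            memberUid apple-group-memberguid

    If confluence-administrators carries its members in the attribute Confluence's group-membership setting points at, while confluence-users carries them somewhere else (or not at all), the sync would populate one group and leave the other empty - matching the symptom of administrators resolving and users not. The "read-only" error itself is just Confluence failing to auto-add a user to a remote LDAP group it can't write to; it looks like a side effect rather than the root cause.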

    Read the article

  • How to find out where or if MYSQL5 logs are stored on a machine WHM/Cpanel

    - by moi
    I have a WHM/cPanel reseller hosting account on a Linux virtual private server, and I have root access to the machine via SSH. I am trying to locate a file containing information that will help me determine which users have accessed which database, and from which hosts. I imagine this kind of data is stored in a log file somewhere. The MySQL documentation says: "The general query log - Established client connections and statements received from clients" (see http://dev.mysql.com/doc/refman/5.0/en/server-logs.html). It also says: "By default, all log files are created in the mysqld data directory." So I am NOT asking where the general query logs are stored (because I expect I'd get answers saying "it depends"). Please help me work out: how can I go about finding out where the MySQL general query log is stored on a Linux machine? A couple of things I've already tried: I looked at /etc/my.cnf; it was a tiny file that only contained the following:

        [mysqld]
        skip-bdb
        skip-innodb
        set-variable = max_connections=500
        safe-show-database

    I have also looked in /var/lib/mysql/, but I could not see any log-like file names in that directory. Any clues on this would be most welcome.
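
    The server itself can report whether the general log is enabled and where it writes. A sketch (on MySQL 5.1+ these are live variables; on 5.0 the log is controlled only by the log option in my.cnf and is off unless that option is present - which, given the my.cnf above, would mean no general log exists yet):

        mysql -u root -p -e "SHOW VARIABLES LIKE 'general_log%'; SHOW VARIABLES LIKE 'log_output';"
        # on 5.1+ you can switch it on without a restart:
        mysql -u root -p -e "SET GLOBAL general_log = 'ON';"
        # on 5.0, add to [mysqld] in /etc/my.cnf and restart mysqld:
        #   log = /var/lib/mysql/general-query.log

    Note the general log records every statement and grows fast on a busy shared server, so it's best enabled only for the window you need.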

    Read the article

  • MySQL remote access not working - Port Close?

    - by dave.zap
    I am not able to get a remote connection established to MySQL. From my PC I am able to telnet to port 3306 on the existing server, but when I try the same with the new server, it hangs for a few minutes and then returns:

        # mysql -utest3 -h [server ip] -p
        Enter password:
        ERROR 2003 (HY000): Can't connect to MySQL server on '[server ip]' (110)

    Here is some output from the server:

        # nmap -sT -O localhost -p 3306
        ...
        PORT     STATE  SERVICE
        3306/tcp closed mysql
        ...

        # netstat -anp | grep mysql
        tcp   0  0 [server ip]:3306  0.0.0.0:*  LISTEN  6349/mysqld
        unix  2  [ ACC ]  STREAM  LISTENING  12286  6349/mysqld  /DATA/mysql/mysql.sock

        # netstat -anp | grep 3306
        tcp   0  0 [server ip]:3306  0.0.0.0:*  LISTEN  6349/mysqld
        unix  3  [ ]  STREAM  CONNECTED  3306  1411/audispd

        # lsof -i TCP:3306
        COMMAND  PID   USER   FD   TYPE  DEVICE  SIZE/OFF  NODE  NAME
        mysqld   6349  mysql  10u  IPv4  12285   0t0       TCP   [domain]:mysql (LISTEN)

    I am running CentOS release 5.8 (Final) and MySQL 5.5.28 (Remi). Note: internal connections to MySQL work fine. I have disabled iptables, the box has no other firewall, and it runs Apache on port 80 and SSH with no problem. I followed this tutorial: http://www.cyberciti.biz/tips/how-do-i-enable-remote-access-to-mysql-database-server.html I have bound the IP address in my.cnf:

        user=mysql
        bind-address = [server ip]
        port=3306

    I even started over by deleting the mysql folder in my data store and running:

        mysql_install_db --datadir=/DATA/mysql --force

    Then I recreated all the users as per the manual (http://dev.mysql.com/doc/refman/5.5/en/adding-users.html). I created one test user:

        CREATE USER 'test'@'%' IDENTIFIED BY '[password]';
        GRANT ALL PRIVILEGES ON *.* TO 'test'@'%' WITH GRANT OPTION;
        FLUSH PRIVILEGES;

    So all I can see is that the port is not really open. Where else might I look? Thanks.
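
    Two readings of the evidence (hedged, but both follow from the output above). First, the nmap "closed" result is expected: scanning localhost probes 127.0.0.1, while mysqld is bound only to the external IP, so that test proves nothing. Second, error 110 is a connection timeout rather than a refusal, which points at packets being dropped before they reach mysqld. The quickest way to tell where:

        # on the server, while retrying the remote connection:
        tcpdump -i eth0 -n port 3306
        # no SYNs arriving  -> filtering upstream (provider firewall / security group)
        # SYNs but no reply -> local filtering after all; re-check:
        iptables -L -n -v

    Many VPS providers filter non-standard inbound ports by default, which would match "port 80 and SSH fine, 3306 times out".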

    Read the article

  • Dell 15Z. Trying to output image using Mini Displayport to a Yamasaki Q270. (2560 x 1440) rez monitor. Dual DVI-D to Mini Displayport

    - by michaelcku
    I have a Dell 15Z with a GT 525M video card and an updated driver from NVIDIA. The problem is that I just purchased a Yamasaki Catleap Q270 monitor - 2560x1440, with only a dual-link DVI-D output. The 15Z has only HDMI and a Mini DisplayPort, so I got an active dual-link DVI-D to Mini DisplayPort adapter (linked below):

        http://www.monoprice.com/products/product.asp?c_id=104&cp_id=10428&cs_id=1042802&p_id=6904&seq=1&format=2

    The setup works when I plug it into my 13.3" MacBook Air; however, when I plug it into the 15Z it doesn't work. I've tried everything I can think of - Windows+Ctrl+P, Fn+F1 - and it just doesn't work. I believe the issue lies with the Mini DisplayPort, but I can't be sure: no matter what I do, the Mini DisplayPort doesn't get recognized. I spent 3 hours on the phone with the XPS tech team, and they simply said it was not compatible... Any help or suggestions would be greatly appreciated.

    Read the article

  • OpenVPN via DD-WRT

    - by user140491
    I am using DD-WRT with my Buffalo G300NH. I notice this in my log files:

        Wed Oct 10 01:08:25 2012 us=343000 Cannot open /tmp/openvpn/dh.pem for DH parameters: error:02001003:system library:fopen:No such process: error:2006D080:BIO routines:BIO_new_file:no such file

    I have looked at other answers regarding this error and tried them, to no avail. The permissions on /tmp/openvpn are 755. At this point, I cannot connect from outside my LAN via OpenVPN. My server config looks like this:

        #mode server
        #tls-server
        push "route 192.168.11.1 255.255.255.0"
        push "dhcp-option DNS 10.8.0.1"
        server 10.8.0.0 255.255.255.0
        port 1194
        proto udp
        dev tun0
        ifconfig 10.8.0.1 10.8.0.2
        #secret /tmp/static.key
        ca /tmp/openvpn/ca.crt
        cert /tmp/openvpn/cert.pem
        key /tmp/openvpn/key.pem
        dh /tmp/openvpn/dh.pem
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        verb 5
        management localhost 5001

    Can someone knowledgeable about this error kindly help? I've been at it for several days now, trying to sort it out. I like all-nighters, though!!
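
    The log says the server can't open /tmp/openvpn/dh.pem, and the usual reason is simply that no DH parameter file was ever generated: the ca/cert/key files come out of the CA build, but dh.pem is a separate step. A sketch (run wherever openssl is available - e.g. a desktop - and copy the file over if the router lacks the binary):

        # generate Diffie-Hellman parameters and put them where the config expects
        openssl dhparam -out /tmp/openvpn/dh.pem 1024
        chmod 600 /tmp/openvpn/dh.pem

    One DD-WRT-specific caveat: /tmp is wiped at reboot, so the file has to be recreated (or copied from nvram/jffs) by a startup script, or it will vanish the next time the router restarts.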

    Read the article

  • Why the system information message when accessing an Ubuntu server doesn't match free -m?

    - by Andres
    Each time I SSH into my AWS Ubuntu servers, I see a system information message showing load, memory usage, and packages available to install, like this:

        Welcome to Ubuntu 12.04.3 LTS (GNU/Linux 3.2.0-51-virtual x86_64)

         * Documentation:  https://help.ubuntu.com/

          System information as of Sun Nov 10 18:06:43 EST 2013

          System load:  0.08              Processes:           127
          Usage of /:   4.9% of 98.43GB   Users logged in:     1
          Memory usage: 69%               IP address for eth0: 10.236.136.233
          Swap usage:   100%

          Graph this data and manage this system at https://landscape.canonical.com/

          13 packages can be updated.
          0 updates are security updates.

          Get cloud support with Ubuntu Advantage Cloud Guest
            http://www.ubuntu.com/business/services/cloud

        Use Juju to deploy your cloud instances and workloads.
          https://juju.ubuntu.com/#cloud-precise

        *** /dev/xvda1 will be checked for errors at next reboot ***
        *** System restart required ***

    My question is about the memory percentage shown. In this case it shows 69% memory usage, but since swap usage was 100%, I checked for myself. When I run free -m I get this:

                     total       used       free     shared    buffers     cached
        Mem:          1652       1635         17          0          4         29
        -/+ buffers/cache:       1601         51
        Swap:          895        895          0

    And that is, of course, closer to 100% than to 69%.
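
    Two things likely account for the gap (hedged: this describes the usual landscape-sysinfo behavior, not this exact build). First, the MOTD is generated when the session starts, so the 69% was a snapshot from login time, while free -m ran later against fuller memory. Second, the MOTD figure is derived from /proc/meminfo with buffers and cache excluded, roughly like this:

        # approximately the arithmetic behind "Memory usage" (a sketch, not the tool's exact code)
        awk '/^MemTotal/{t=$2} /^MemFree/{f=$2} /^Buffers/{b=$2} /^Cached/{c=$2}
             END{printf "Memory usage: %d%%\n", 100*(t-f-b-c)/t}' /proc/meminfo

    Applied to the free -m output above, that formula gives (1652-17-4-29)/1652 = ~97%, matching the "-/+ buffers/cache" line - so by the time free ran, usage really had climbed, and the 69% was simply stale. The "Swap usage: 100%" line is the stronger signal that the instance is memory-starved.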

    Read the article

  • Xen HVM networking wont work

    - by Nathan
    I'm trying to get Xen HVM networking working using route, but I am failing. Xen PV works fine using Ubuntu, but when I install Ubuntu on HVM it fails to pick up the network. I'll say up front that I'm not that experienced with Xen, so I would appreciate any help. vm104 is the HVM guest causing me problems; here are the configs that I believe are relevant:

        [root@eros vm104]# cat vm104.cfg
        import os, re
        arch = os.uname()[4]
        if re.search('64', arch):
            arch_libdir = 'lib64'
        else:
            arch_libdir = 'lib'
        kernel = '/usr/lib/xen/boot/hvmloader'
        builder = 'hvm'
        memory = 6000
        shadow_memory = '8'
        cpu_weight = 256
        name = 'vm104'
        vif = ['type=ioemu, ip=85.25.x.y, vifname=vifvm104.0, mac=00:16:3e:52:3d:fe, bridge=xenbr0']
        acpi = 1
        apic = 1
        vnc = 1
        vcpus = 4
        vncdisplay = 3
        vncviewer = 0
        vncconsole = 1
        vnclisten = '217.118.x.y'
        vncpasswd = 'kCfb5S4tE7'
        serial = 'pty'
        disk = ['phy:/dev/vpsvg/vm104_img,hda,w', 'file:/home/solusvm/xen/iso/Windows-Server-2008-RC2.iso,hdc:cdrom,r']
        device_model = '/usr/' + arch_libdir + '/xen/bin/qemu-dm'
        boot = 'cd'
        sdl = '0'
        usbdevice = 'tablet'
        pae=1

        [root@eros /]# cat /etc/xen/xend-config.sxp | egrep -v "(^#.*|^$)"
        (xend-unix-server yes)
        (xend-unix-path /var/lib/xend/xend-socket)
        (xend-relocation-hosts-allow '^localhost$ ^localhost\\.localdomain$')
        (network-script network-route)
        (vif-script vif-route)
        (network-script 'network-route netdev=eth0')
        (dom0-min-mem 256)
        (dom0-cpus 0)
        (vnc-listen '0.0.0.0')
        (vncpasswd '')
        (keymap 'en-us')

    The Windows install will not pick up the network. I've tried setting the IP manually, using the Xen server's IP as the gateway and setting the main IP in Windows, but no luck. If anyone needs more information, let me know - I appreciate any input!
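
    A few hedged observations and checks (I can't verify them against this box): the vif line still carries bridge=xenbr0 even though xend is configured for network-route/vif-route, and that mismatch alone can leave the guest's interface unplumbed. Also, with type=ioemu the emulated NIC is serviced through an extra tap interface in dom0, which the routing scripts must handle in addition to the vif. Once vm104 is running, these checks in dom0 narrow it down:

        cat /proc/sys/net/ipv4/ip_forward                  # must be 1 for routed networking
        echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp    # routed setups usually need proxy ARP
        ip route | grep 85.25                              # expect a host route for 85.25.x.y via vifvm104.0
        ifconfig | egrep 'vifvm104|tap'                    # both the vif and the HVM tap device should be up

    If the host route for the guest IP never appears against the vif, vif-route isn't being invoked for the device, and removing the bridge= parameter from the vif line would be the first thing I'd try.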

    Read the article

  • PPTP Client setup, Fedora 17

    - by Suarez Romina
    I am trying to connect to hidemyass.com's VPN service via PPTP, but I'm having trouble understanding why it isn't working: I get no warning or fatal error, yet my IP remains the same. This is how I create the connection:

        [root@lasvegas-nv-datacenter ~]# pptpsetup --create TUNNELNAME --server 199.58.165.20 --username MYUSERNAME --password MYPASSWORD --encrypt --start

    And this is the output:

        Using interface ppp0
        Connect: ppp0 <--> /dev/pts/1
        CHAP authentication succeeded
        MPPE 128-bit stateless compression enabled
        local IP address 10.200.21.14
        remote IP address 10.200.20.1

    After that, I check the log and this is what I get:

        [root@lasvegas-nv-datacenter ~]# tail -f /var/log/messages
        Aug 24 11:25:33 lasvegas-nv-datacenter pptp[3892]: anon log[ctrlp_rep:pptp_ctrl.c:254]: Sent control packet type is 1 'Start-Control-Connection-Request'
        Aug 24 11:25:33 lasvegas-nv-datacenter pptp[3892]: anon log[ctrlp_disp:pptp_ctrl.c:754]: Received Start Control Connection Reply
        Aug 24 11:25:33 lasvegas-nv-datacenter pptp[3892]: anon log[ctrlp_disp:pptp_ctrl.c:788]: Client connection established.
        Aug 24 11:25:34 lasvegas-nv-datacenter pptp[3892]: anon log[ctrlp_rep:pptp_ctrl.c:254]: Sent control packet type is 7 'Outgoing-Call-Request'
        Aug 24 11:25:34 lasvegas-nv-datacenter pptp[3892]: anon log[ctrlp_disp:pptp_ctrl.c:873]: Received Outgoing Call Reply.
        Aug 24 11:25:34 lasvegas-nv-datacenter pptp[3892]: anon log[ctrlp_disp:pptp_ctrl.c:912]: Outgoing call established (call ID 0, peer's call ID 20096).
        Aug 24 11:25:38 lasvegas-nv-datacenter pppd[3884]: CHAP authentication succeeded
        Aug 24 11:25:38 lasvegas-nv-datacenter pppd[3884]: MPPE 128-bit stateless compression enabled
        Aug 24 11:25:38 lasvegas-nv-datacenter pppd[3884]: local IP address 10.200.21.14
        Aug 24 11:25:38 lasvegas-nv-datacenter pppd[3884]: remote IP address 10.200.20.1

    Can someone help me? Basically, I need to connect to the VPN and have my IP changed after the connection. I have read a lot of guides but still cannot understand why I don't get a connection.
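
    The logs show the tunnel is actually up; what's missing is routing. pptpsetup brings up ppp0 but doesn't touch the default route, so ordinary traffic keeps leaving through the original interface and the public IP never changes. A hedged sketch of sending everything through the tunnel (the server IP comes from the command above; the gateway lookup assumes a single existing default route):

        # keep a host route to the VPN server via the existing gateway,
        # so the tunnel's own packets don't get routed into the tunnel
        ip route add 199.58.165.20 via $(ip route | awk '/^default/ {print $3; exit}')
        # then send everything else through ppp0
        ip route replace default dev ppp0

    Reverting is an ip route del on both entries; afterwards, something like curl ifconfig.me should show the VPN endpoint's address instead of the old one.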

    Read the article

  • Is it possible to shrink the size of an HP Smart Array logical drive?

    - by ewwhite
    I know extension is quite possible using the hpacucli utility, but is there an easy way to reduce the size of an existing logical drive (not the array)? The controller is a P410i in a ProLiant DL360 G6 server. I'd like to reduce logicaldrive 1 from 72 GB to 40 GB.

        => ctrl all show config detail

        Smart Array P410i in Slot 0 (Embedded)
           Bus Interface: PCI
           Slot: 0
           Serial Number: 5001438006FD9A50
           Cache Serial Number: PAAVP9VYFB8Y
           RAID 6 (ADG) Status: Disabled
           Controller Status: OK
           Chassis Slot:
           Hardware Revision: Rev C
           Firmware Version: 3.66
           Rebuild Priority: Medium
           Expand Priority: Medium
           Surface Scan Delay: 3 secs
           Surface Scan Mode: Idle
           Queue Depth: Automatic
           Monitor and Performance Delay: 60 min
           Elevator Sort: Enabled
           Degraded Performance Optimization: Disabled
           Inconsistency Repair Policy: Disabled
           Wait for Cache Room: Disabled
           Surface Analysis Inconsistency Notification: Disabled
           Post Prompt Timeout: 15 secs
           Cache Board Present: True
           Cache Status: OK
           Accelerator Ratio: 25% Read / 75% Write
           Drive Write Cache: Enabled
           Total Cache Size: 512 MB
           No-Battery Write Cache: Disabled
           Cache Backup Power Source: Batteries
           Battery/Capacitor Count: 1
           Battery/Capacitor Status: OK
           SATA NCQ Supported: True

           Array: A
              Interface Type: SAS
              Unused Space: 412476 MB
              Status: OK

              Logical Drive: 1
                 Size: 72.0 GB
                 Fault Tolerance: RAID 1+0
                 Heads: 255
                 Sectors Per Track: 32
                 Cylinders: 18504
                 Strip Size: 256 KB
                 Status: OK
                 Array Accelerator: Enabled
                 Unique Identifier: 600508B1001C132E4BBDFAA6DAD13DA3
                 Disk Name: /dev/cciss/c0d0
                 Mount Points: /boot 196 MB, / 12.0 GB, /usr 8.0 GB, /var 4.0 GB, /tmp 2.0 GB
                 OS Status: LOCKED
                 Logical Drive Label: AE438D6A5001438006FD9A50BE0A
                 Mirror Group 0:
                    physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 146 GB, OK)
                    physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 146 GB, OK)
                 Mirror Group 1:
                    physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SAS, 146 GB, OK)
                    physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SAS, 146 GB, OK)

           SEP (Vendor ID PMCSIERA, Model SRC 8x6G) 250
              Device Number: 250
              Firmware Version: RevC
              WWID: 5001438006FD9A5F
              Vendor ID: PMCSIERA
              Model: SRC 8x6G
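
    As far as I know, the Smart Array firmware and hpacucli only grow logical drives; there is no shrink operation. The usual workaround is destructive: back up, delete the logical drive, and recreate it smaller. A hedged sketch (this destroys logicaldrive 1, and since / lives on it, it would have to be done from rescue media after a verified backup):

        # DESTRUCTIVE - only after a full, tested backup
        hpacucli ctrl slot=0 logicaldrive 1 delete
        # recreate at 40 GB (size is given in MB) on the same four disks
        hpacucli ctrl slot=0 create type=ld drives=1I:1:1,1I:1:2,1I:1:3,1I:1:4 raid=1+0 size=40960

    Then restore the OS onto the new, smaller volume. Given the "OS Status: LOCKED" flag on the volume, I'm not aware of an in-place path.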

    Read the article

  • How much network latency is "typical" for east - west coast USA?

    - by Jeff Atwood
    At the moment we're trying to decide whether to move our datacenter from the west coast (Corvallis, OR) to the east coast (NY, NY). However, I am seeing some disturbing latency numbers from my location (Berkeley, CA) to the NYC host. Here's a sample result, retrieving a small .png logo file in Google Chrome and using the dev tools to see how long the request takes:

    - Berkeley to NYC server: 215 ms latency, 46 ms transfer time, 261 ms total
    - Berkeley to Corvallis server: 114 ms latency, 41 ms transfer time, 155 ms total

    Some URLs if you want to try it yourself:

    - http://careers.stackoverflow.com/content/cso/img/logo.png (NY, NY)
    - http://serverfault.com/cache/logo.png (Corvallis, OR)

    It makes sense that Corvallis, OR is geographically closer to Berkeley, CA, so I expect the connection to be a bit faster... but I'm seeing an increase in latency of +100 ms when I perform the same test against the NYC server. That seems excessive to me, particularly since the time spent transferring the actual data only went up 10%, yet the latency went up ten times as much! That feels... wrong... to me. I found a few links that were helpful (through Google, no less!):

    - http://serverfault.com/questions/63531/does-routing-distance-affect-performance-significantly
    - http://serverfault.com/questions/61719/how-does-geography-affect-network-latency
    - http://serverfault.com/questions/6210/latency-in-internet-connections-from-europe-to-usa

    ...but nothing authoritative. So, is this normal? It doesn't feel normal. What is the "typical" latency I should expect when moving network packets between the east and west coasts of the USA?
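
    A sanity check on the physics, with rough numbers: Berkeley to NYC is about 4,100 km great-circle, and signals in fiber travel at roughly two-thirds of c (~200,000 km/s), so the theoretical round-trip floor is about 2 x 4100 / 200000 = 41 ms. Real routes aren't great circles and add queuing and equipment delay, so double to triple that - roughly 80-120 ms RTT - is the commonly cited coast-to-coast range, which makes the observed 215 ms high but within the realm of a bad route. To separate route badness from distance:

        mtr --report --report-cycles 20 careers.stackoverflow.com   # per-hop latency to the NYC host

    Note that the Chrome dev tools "latency" figure also bundles DNS lookup, the TCP handshake, and server think time, so it will always exceed the raw ping RTT.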

    Read the article

  • Apache & SVN on Ubuntu - Post-commit hook fails silently, pre-commit hook “Permission Denied”

    - by 113169587962668775787
    I've been struggling for the past couple of days to get post-commit email notifications working on my SVN server (running over HTTP with Apache 2.2 on Ubuntu 9.10). SVN commits work fine, but for some reason the hooks are not being executed. Here are the configuration settings: users access the repo via HTTP with the Apache dav_svn module (I created users/passwords via htpasswd in a dav_svn.passwd file). dav_svn.conf:

        <Location /svn/repos>
            DAV svn
            SVNPath /home/svn/repos
            AuthType Basic
            AuthName "Subversion Repository"
            AuthUserFile /etc/apache2/dav_svn.passwd
            Require valid-user
        </Location>

    I created a post-commit hook that writes a simple message to a file in the repository root, /home/svn/repos/hooks/post-commit:

        #!/bin/sh
        REPOS="$1"
        REV="$2"
        /bin/echo 'worked' > ${REPOS}/postcommit.log

    I set the entire repository to be owned by www-data (the Apache user) and gave the post-commit script 755 permissions. When I test the post-commit script as the www-data user in an empty environment, it works:

        sudo -u www-data env - /home/svn/repos/hooks/post-commit /home/svn/repos 7

    But when I commit from a client machine, the commit succeeds, yet the post-commit script does not seem to be executed. I also tried a simple pre-commit hook, and I get an error even with an empty pre-commit script:

        Commit failed (details follow):
        Can't create null stdout for hook '/home/svn/repos/hooks/pre-commit': Permission denied

    I did a few searches on Google for this error, and I presume it's an issue with the Apache user (www-data) not having adequate permissions, specifically to open /dev/null. I also read that the reason post-commit fails silently is that it doesn't report via stdout. Anyway, I've also tried giving www-data ownership of the entire repository and edited the Apache virtual host to allow operations on the server root, and I'm still getting permission denied. /etc/apache2/sites-available/primarydomain.conf:

        <Directory />
            Options FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    Any ideas/suggestions would be greatly appreciated! Thanks
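
    The error names /dev/null itself, not the hook: Apache can't open /dev/null for the hook's stdout. /dev/null occasionally gets clobbered and recreated as a regular root-owned file (e.g. by a script redirecting to it as root after it was deleted), which breaks exactly this case. Worth a look before touching repository permissions:

        ls -l /dev/null          # healthy: crw-rw-rw- 1 root root 1, 3
        # if it's a regular file or missing the world-write bit, recreate it:
        rm -f /dev/null
        mknod /dev/null c 1 3
        chmod 666 /dev/null

    That would also explain the silent post-commit failure, since the same stdout redirection happens there without surfacing an error to the client.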

    Read the article
