Search Results

Search found 17944 results on 718 pages for 'size'.

  • Gluster bricks are offline and errors in logs

    - by Roman Newaza
    I have substituted all the IP addresses with hostnames and renamed the configs (IP to hostname) in /var/lib/glusterd with my shell script. After that I restarted the Gluster daemon and the volume. Then I checked that all the peers are connected:

        root@GlusterNode1a:~# gluster peer status
        Number of Peers: 3

        Hostname: gluster-1b
        Uuid: 47f469e2-907a-4518-b6a4-f44878761fd2
        State: Peer in Cluster (Connected)

        Hostname: gluster-2b
        Uuid: dc3a3ff7-9e30-44ac-9d15-00f9dab4d8b9
        State: Peer in Cluster (Connected)

        Hostname: gluster-2a
        Uuid: 72405811-15a0-456b-86bb-1589058ff89b
        State: Peer in Cluster (Connected)

    I can see the mounted volume's size change on all the nodes when I run df, so new data is coming in. But recently I noticed error messages in the application log:

        copy(/storage/152627/dat): failed to open stream: Structure needs cleaning
        readfile(/storage/1438227/dat): failed to open stream: Input/output error
        unlink(/storage/189457/23/dat): No such file or directory

    Finally, I found out some bricks are offline:

        root@GlusterNode1a:~# gluster volume status
        Status of volume: storage
        Gluster process                                 Port    Online  Pid
        ------------------------------------------------------------------------------
        Brick gluster-1a:/storage/1a                    24009   Y       1326
        Brick gluster-1b:/storage/1b                    24009   N       N/A
        Brick gluster-2a:/storage/2a                    24009   N       N/A
        Brick gluster-2b:/storage/2b                    24009   N       N/A
        Brick gluster-1a:/storage/3a                    24011   Y       1332
        Brick gluster-1b:/storage/3b                    24011   N       N/A
        Brick gluster-2a:/storage/4a                    24011   N       N/A
        Brick gluster-2b:/storage/4b                    24011   N       N/A
        NFS Server on localhost                         38467   Y       24670
        Self-heal Daemon on localhost                   N/A     Y       24676
        NFS Server on gluster-2b                        38467   Y       4339
        Self-heal Daemon on gluster-2b                  N/A     Y       4345
        NFS Server on gluster-2a                        38467   Y       1392
        Self-heal Daemon on gluster-2a                  N/A     Y       1402
        NFS Server on gluster-1b                        38467   Y       2435
        Self-heal Daemon on gluster-1b                  N/A     Y       2441

    What can I do about that? I need to fix it. Note: CPU and network usage on all four nodes is about the same.
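    A hedged first step (standard GlusterFS administration; the heal command needs GlusterFS 3.3 or later, so verify against your release): "gluster volume start ... force" re-spawns brick processes that belong to an already-started volume, and a full self-heal then reconciles the replicas:

        # on any node: restart the dead brick processes without taking the volume down
        gluster volume start storage force
        # ask the self-heal daemon to walk the whole volume, then watch progress
        gluster volume heal storage full
        gluster volume heal storage info

    The brick logs under /var/log/glusterfs/bricks/ should say why those processes died in the first place; after a hostname migration, a stale or mismatched entry left in /var/lib/glusterd is a plausible culprit.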

  • Backing up VMs to a tape drive

    - by Aljoscha Vollmerhaus
    I've got myself one of these fancy tape drives, an HP LTO2 with 200/400 GB cartridges. The st driver reports it like this:

        scsi 1:0:0:0: Sequential-Access HP Ultrium 2-SCSI T65D

    I can store and retrieve files like a charm using tar; both "tar cf /dev/st0 somedirectory" and "tar xf /dev/st0" work flawlessly. However, what I really want to back up are LVM LVs. They contain entire virtual machines with varying partition layouts, so using mount and tar is not an option. I've tried using something like

        dd if=/dev/VG/LV bs=64k of=/dev/st0

    to achieve this, but there seem to be various problems associated with this approach. Firstly, I would like to be able to store more than one LV on a single tape. I guess I could seek to concatenate the data on the tape, but I think this would not work very well in an automated scenario with many different LVs of various sizes. Secondly, I would like to store a small XML file along with the raw data that contains some information about the VM contained in the LV. I could dump everything to a directory and tar it up, but that is not very desirable: I would have to set aside huge amounts of scratch space. Is there an easier way to achieve this? Thirdly, from googling around it seems it would be wise to use something like mbuffer when writing to the tape, to prevent what Wikipedia calls "shoe-shining" the tape. However, I can't get anything useful done with mbuffer. The mbuffer man page suggests this for writing to a tape device:

        mbuffer -t -m 10M -p 80 -f -o $TAPE

    So I've tried this:

        dd if=/dev/VG/LV | mbuffer -t -m 10M -p 80 -f -d 64k -o /dev/st0

    Note the added "-d 64k" to account for the 64k block size of the tape. However, reading data back from a tape written in this way never seems to yield any useful results: dd has been running for ages now, and has managed to transfer only 361 MB of data from the tape. What's wrong here?
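    One layout worth sketching (assuming the mt-st tools and the non-rewinding device node /dev/nst0; check your mbuffer man page for its block-size flag, -s in the versions I know): treat the cartridge as a sequence of tape files, alternating a tiny tar archive of metadata with a raw LV image. The non-rewinding device leaves the head parked after each EOF marker, so several LVs fit on one tape and no scratch space is needed for the XML:

        mt -f /dev/nst0 rewind
        # tape file 0: metadata, tape file 1: image; repeat per LV
        tar cf /dev/nst0 vm1-metadata.xml
        dd if=/dev/VG/LV1 bs=64k | mbuffer -t -m 10M -p 80 -s 64k -o /dev/nst0
        tar cf /dev/nst0 vm2-metadata.xml
        dd if=/dev/VG/LV2 bs=64k | mbuffer -t -m 10M -p 80 -s 64k -o /dev/nst0
        mt -f /dev/nst0 rewind

        # restore the second LV later: skip three tape files (tar, image, tar)
        mt -f /dev/nst0 fsf 3
        dd if=/dev/nst0 bs=64k of=/dev/VG/LV2

    On the painfully slow read-back: dd's default block size is 512 bytes, and reading a tape with a smaller block size than it was written with is a classic cause of crawling transfers, so matching bs=64k on the reading side is worth trying before anything else.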

  • Windows DFS - file locking & replication?

    - by Adam Salkin
    I'm in a small company that has offices on the east and west coasts of America, plus various people working from their homes. There are Windows servers already in the offices. I think Microsoft Windows DFS will do what I want, but despite reading the web site I'm really not sure, so I'm hoping someone can confirm whether it will do all of the following. (For various personnel/political reasons I know that a proposal for a Microsoft Windows system has more chance of being accepted than any *nix system.)

    - Creation of a folder so that any files in this folder will automatically be available on the servers in all the offices.
    - When anyone opens one of these shared files on any of the servers, the copies on all the servers are automatically locked; when they close the file, the updates automatically get copied to the file on all the servers.
    - VPN access to these folders for people working outside the offices.

    Bandwidth at the main offices varies from 6 Mb/s to 20 Mb/s. Files are Excel/Word/AutoCAD, ranging in size from 100 KB to 4 MB. Thank you.

  • One nginx rule for lots of subdomains

    - by komase
    I have lots of subdomains on a server. Every subdomain has its own Drupal Boost rules, like the code below:

        server {
            server_name subdomain1.website.com;
            location / {
                root /var/www/html/subdomain/subdomain1.website.com;
                index index.php;
                set $boost "";
                set $boost_query "_";
                if ( $request_method = GET ) { set $boost G; }
                if ($http_cookie !~ "DRUPAL_UID") { set $boost "${boost}D"; }
                if ($query_string = "") { set $boost "${boost}Q"; }
                if ( -f $document_root/cache/normal/$host$request_uri$boost_query.html ) { set $boost "${boost}F"; }
                if ($boost = GDQF) { rewrite ^.*$ /cache/normal/$host/$request_uri$boost_query.html break; }
                if (!-e $request_filename) { rewrite ^/(.*)$ /index.php?q=$1 last; break; }
            }
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www/html/subdomain/subdomain1.website.com$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    I have been adding subdomain rules manually from time to time, and nginx.conf has become too big. So I need one nginx rule that does:

        subdomain1.website.com -> /var/www/html/subdomain/subdomain1.website.com
        subdomain2.website.com -> /var/www/html/subdomain/subdomain2.website.com
        subdomain3.website.com -> /var/www/html/subdomain/subdomain3.website.com

    ...and so on, so that I never need to add another rule for a subdomain of website.com in the future.
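    A minimal sketch of the catch-all approach (assuming every docroot really is /var/www/html/subdomain/<full host name> and the Boost rules are identical across all subdomains): a wildcard server_name lets $host pick the document root, so one server block covers every subdomain:

        server {
            server_name *.website.com;
            root /var/www/html/subdomain/$host;
            index index.php;
            location / {
                # the shared Boost rules go here unchanged; they already use
                # $document_root and $host, so they need no per-site edits
            }
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www/html/subdomain/$host$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    Putting root at server level (rather than inside location /) keeps $document_root correct for the cache-file test in the Boost rules.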

  • Is my HDD dead forever?

    - by Roberto
    Yesterday I turned on my computer and it couldn't boot. I found out the HD (a 320 GB SATA Seagate Momentus 7200.3 notebook drive) was broken and couldn't be recognized by the BIOS. I have another drive of the same model, so I exchanged the controller boards. My good hard drive didn't work with the broken drive's board, so that board has a problem. But the broken hard drive doesn't work with the good board either: it can be recognized, but when I insert a Windows installation DVD it says the hard drive is 0 GB. I put it in an enclosure and used it in another computer via USB, but it doesn't show up in "My Computer". I used a recovery tool called GetDataBack for NTFS; it recognized the hard drive, but with the wrong size (2 TB). When I tried to make it read the drive, it got an I/O error reading a sector. It does try to read, and the drive spins up. So, since I'm using a good board on it, the problem seems to be internal. Is there anything someone could do to recover the files from it?

  • How to use my computer as a Headset device for my phone with Bluetooth?

    - by TheJelly
    I want to extract the audio from my phone (mainly the analog TV and FM/AM receiver) and play it through my computer speakers. There is a headphone jack, but it is of non-standard size (probably a micro-jack) and I do not have access to a shop in my area that sells that kind of equipment, so doing this over Bluetooth is the only solution I can foresee.

    Both my laptop and my phone support A2DP, but for some reason the service (from the phone) does not show up while I add a new connection, and the phone does not let me initiate a connection with any profile except FTP (although it detects other services in the service list, like A2DP, and works perfectly fine with other profiles like DUN, HID, OPP and SPP if the connection is started from the computer).

    I am currently using the latest version of the Toshiba stack. I have tried WIDCOMM, but it refuses to install drivers for both the internal Bluetooth (which is a Broadcom device) and the USB Bluetooth adapter that I use on my desktop. The standard Microsoft stack (generic driver) does install, but it does not work with either of my devices: they do not detect any Bluetooth devices when scanning. With BlueSoleil (the default stack that came with the USB Bluetooth adapter) I could set my device type to "headset" instead of only "laptop/desktop", and this allowed both my phones to detect my laptop as a device they can use as a headset; the problems with that stack were that only the older phone could actually connect to my laptop and that the internal Bluetooth could not be used.

    Basically, I want to set the device type to "headset" for my phone using the Toshiba stack, like I did with BlueSoleil. Is there any way this can be done? Thanks.

    Image: Device type selection http://i.stack.imgur.com/drjC6.jpg

  • KVM Guest with NAT + Bridged networking

    - by Daniel
    I currently have a few KVM guests on a dedicated server with bridged networking (this works), and I can successfully ping the outside IPs I assign via ifconfig in the guest. However, because I only have 5 public IPv4 addresses, I would like to port-forward services like so: hostip:port -> kvm_guest:port.

    UPDATE: I found out KVM comes with a "default" NAT interface, so I added the virtual NIC to the guest's virsh configuration and then configured it in the guest; it has the IP address 192.168.122.112. I can successfully ping 192.168.122.112 and access all ports on 192.168.122.112 from the KVM host, so I tried to port-forward like so:

        iptables -t nat -I PREROUTING -p tcp --dport 5222 -j DNAT --to-destination 192.168.122.112:2521
        iptables -I FORWARD -m state -d 192.168.122.0/24 --state NEW,RELATED,ESTABLISHED -j ACCEPT

    "telnet KVM_HOST_IP 5222" just hangs on "Trying...", while "telnet 192.168.122.112 2521" works.

        [root@node1 ~]# tcpdump port 5222
        tcpdump: WARNING: eth0: no IPv4 address assigned
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
        23:43:47.216181 IP 1.152.245.247.51183 > null.xmpp-client: Flags [S], seq 1183303931, win 65535, options [mss 1400,nop,wscale 3,nop,nop,TS val 445777813 ecr 0,sackOK,eol], length 0
        23:43:48.315747 IP 1.152.245.247.51183 > null.xmpp-client: Flags [S], seq 1183303931, win 65535, options [mss 1400,nop,wscale 3,nop,nop,TS val 445778912 ecr 0,sackOK,eol], length 0
        23:43:49.415606 IP 1.152.245.247.51183 > null.xmpp-client: Flags [S], seq 1183303931, win 65535, options [mss 1400,nop,wscale 3,nop,nop,TS val 445780010 ecr 0,sackOK,eol], length 0
        7 packets received by filter
        0 packets dropped by kernel

        [root@node1 ~]# iptables -L
        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination

        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination
        ACCEPT     all  --  anywhere             192.168.122.0/24     state NEW,RELATED,ESTABLISHED

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination

    All help is appreciated. Thanks.
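    Two generic things worth checking first (a sketch of standard Linux NAT debugging, not a confirmed fix for this host): forwarding between interfaces must be enabled in the kernel, and the rule counters will show whether the DNAT rule is matching at all:

        # 1. forwarding must be on, or DNATed packets are dropped after PREROUTING
        sysctl -w net.ipv4.ip_forward=1
        # 2. watch the packet counters while retrying the telnet test
        iptables -t nat -L PREROUTING -n -v --line-numbers
        iptables -L FORWARD -n -v

    If the PREROUTING counter rises but FORWARD stays at zero, the packets are being dropped between the two chains; if neither moves, the traffic is arriving on an interface the rule doesn't cover.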

  • Why can an empty MAIL FROM address send out email?

    - by garconcn
    We are using the SmarterMail system. Recently we found that a hacker had compromised some user accounts and sent out lots of spam. We have a firewall to rate-limit the sender, but for the following email the firewall couldn't do this because of the empty FROM address. Why is an empty FROM address considered OK? Actually, in our MTA (SurgeMail) we can see the sender in the email header. Any idea? Thanks.

        11:17:06 [xx.xx.xx.xx][15459629] rsp: 220 mail30.server.com
        11:17:06 [xx.xx.xx.xx][15459629] connected at 6/16/2010 11:17:06 AM
        11:17:06 [xx.xx.xx.xx][15459629] cmd: EHLO ulix.geo.auth.gr
        11:17:06 [xx.xx.xx.xx][15459629] rsp: 250-mail30.server.com Hello [xx.xx.xx.xx]
                                              250-SIZE 31457280
                                              250-AUTH LOGIN CRAM-MD5
                                              250 OK
        11:17:06 [xx.xx.xx.xx][15459629] cmd: AUTH LOGIN
        11:17:06 [xx.xx.xx.xx][15459629] rsp: 334 VXNlcm5hbWU6
        11:17:07 [xx.xx.xx.xx][15459629] rsp: 334 UGFzc3dvcmQ6
        11:17:07 [xx.xx.xx.xx][15459629] rsp: 235 Authentication successful
        11:17:07 [xx.xx.xx.xx][15459629] Authenticated as [email protected]
        11:17:07 [xx.xx.xx.xx][15459629] cmd: MAIL FROM: <>
        11:17:07 [xx.xx.xx.xx][15459629] rsp: 250 OK <> Sender ok
        11:17:07 [xx.xx.xx.xx][15459629] cmd: RCPT TO:[email protected]
        11:17:07 [xx.xx.xx.xx][15459629] rsp: 250 OK Recipient ok
        11:17:08 [xx.xx.xx.xx][15459629] cmd: DATA
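    For context, this is standard SMTP rather than a SmarterMail quirk: RFC 5321 (and RFC 821 before it) requires servers to accept the null reverse-path <>, because bounces and other delivery status notifications are sent with it, precisely so that they can never generate bounces themselves. A minimal illustration of a bounce being delivered:

        C: MAIL FROM:<>
        S: 250 OK
        C: RCPT TO:<user@example.com>
        S: 250 OK
        C: DATA
        ... delivery status notification body ...

    The practical consequence is that rate-limiting has to key on the authenticated user (here, the compromised account shown in the "Authenticated as" line) rather than on the envelope sender.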

  • ReadyBoost in Windows 7

    - by Robert Koritnik
    I bought an SD card today for my photo frame, but when I inserted it into my notebook I saw I could use it for ReadyBoost.

    Some background: I'm a .NET developer, using VMs and developing web applications (and SharePoint). I use an HP notebook with a Core 2 Duo 2 GHz, 4 GB RAM and a 320 GB 7200 rpm HD. I simultaneously run:

    - Visual Studio 2010 with some plugins
    - SQL Server
    - Firefox with at least 10 tabs
    - Chrome with about 5 tabs
    - IIS
    - a VM with a Server 2008 machine and SharePoint

    and occasionally also Photoshop and some InDesign as well. So I don't give my machine a break. :D

    Question: if I buy a really fast SDHC card (like the SanDisk 16 GB Extreme 30 MB/s; is there anything faster?) and use it with Windows 7 ReadyBoost, will I see any performance gain? Is it going to work something like Seagate's Momentus hybrid drive with 4 GB of solid-state storage? What could I actually expect if I put this card into my machine? And what would be the recommended size?

    Observations: I guess redirecting the page file to it would speed up the system. Some VMs on it would probably run faster as well, because they could run in parallel to the HD-hosted system, I guess. Am I right or wrong?

  • Upgrading from SQL2000 database to SQL Express 2008 R2

    - by itwb
    Hi, we have a web application which uses an MSSQL 2000 backend database. We are currently paying a ridiculous amount for shared hosting, with the database costs alone costing us $150 per month (an extra 100 MB of MSSQL space is $40 per month). Our database size is 896.38 MB.

    I am looking at getting a virtual private server and upgrading the database to MSSQL 2008 Express. I am aware that the Express version is limited to a 10 GB database (with R2) and is constrained to a single CPU. I have also been offered SQL Server 2008 Web Edition for $19 per month, but I cannot find many details on the difference between Express and Web. Any suggestions here?

    What I would also like to know is: if we upgrade to an MSSQL 2008 database, are there any issues with possible data transformations in the future? I.e., is it possible to download the database and attach it with SQL Server 2008 Standard Edition? I'm more concerned about how to get data in and out of the database through SQL management tools. Are there any other issues I might face? Thanks, Mike
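    On the data-portability question, a sketch of the usual migration path (the logical file names below are hypothetical; take the real ones from sp_helpfile on the old server): a SQL 2000 backup restores onto 2008/2008 R2, which upgrades the database format in place, and a database created or upgraded by Express attaches to Standard Edition without conversion, since the editions share the same on-disk format:

        -- on the SQL 2000 host
        BACKUP DATABASE MyAppDb TO DISK = 'D:\backups\MyAppDb.bak'

        -- on the SQL 2008 Express host: RESTORE upgrades the on-disk format
        RESTORE DATABASE MyAppDb FROM DISK = 'D:\backups\MyAppDb.bak'
        WITH MOVE 'MyAppDb_Data' TO 'C:\Data\MyAppDb.mdf',
             MOVE 'MyAppDb_Log'  TO 'C:\Data\MyAppDb_log.ldf'

    Note that the upgrade is one-way: once restored on 2008, the database can no longer be taken back to a 2000 server.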

  • Why is mkfs overwriting the LUKS encryption header on LVM on RAID partitions on Ubuntu 12.04?

    - by Starchy
    I'm trying to set up a couple of LUKS-encrypted partitions to be mounted after boot time on a new Ubuntu server that was installed with LVM on top of software RAID. After running cryptsetup luksFormat, the LUKS header is clearly visible on the volume. After running any flavor of mkfs, the header is overwritten (which does not happen on other systems that were set up without LVM), and cryptsetup will no longer recognize the device as a LUKS device.

        # cryptsetup -y --cipher aes-cbc-essiv:sha256 --key-size 256 luksFormat /dev/dm-1

        WARNING!
        ========
        This will overwrite data on /dev/dm-1 irrevocably.

        Are you sure? (Type uppercase yes): YES
        Enter LUKS passphrase:
        Verify passphrase:

        # hexdump -C /dev/dm-1 | head -n5
        00000000  4c 55 4b 53 ba be 00 01  61 65 73 00 00 00 00 00  |LUKS....aes.....|
        00000010  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
        00000020  00 00 00 00 00 00 00 00  63 62 63 2d 65 73 73 69  |........cbc-essi|
        00000030  76 3a 73 68 61 32 35 36  00 00 00 00 00 00 00 00  |v:sha256........|
        00000040  00 00 00 00 00 00 00 00  73 68 61 31 00 00 00 00  |........sha1....|

        # cryptsetup luksOpen /dev/dm-1 web2-var
        # mkfs.ext4 /dev/mapper/web2-var
        [..snip..]
        Creating journal (32768 blocks): done
        Writing superblocks and filesystem accounting information: done

        # cryptsetup luksClose /dev/mapper/web2-var
        # hexdump -C /dev/dm-1 | head -n5
        00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
        *
        00000400  00 40 5d 00 00 88 74 01  66 a0 12 00 17 f2 6d 01  |.@]...t.f.....m.|
        00000410  f5 3f 5d 00 00 00 00 00  02 00 00 00 02 00 00 00  |.?].............|
        00000420  00 80 00 00 00 80 00 00  00 20 00 00 00 00 00 00  |......... ......|

        # cryptsetup luksOpen /dev/dm-1 web2-var
        Device /dev/dm-1 is not a valid LUKS device.

    I have also tried mkfs.ext2 with the same result. Based on setups I've done successfully on Debian and Ubuntu (but not LVM or Ubuntu 12.04), it's hard to see why this is failing.
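    One hedged suggestion, since the transcript above addresses the volume as /dev/dm-1: dm-N minor numbers are assigned in creation order and can shift as device-mapper targets (like the new crypt mapping) come and go, so the node that pointed at the LV during luksFormat may point at something else later. The stable names under /dev/mapper or /dev/<vg> avoid that ambiguity; the VG/LV names below are examples:

        # address the LV by its stable name at every step
        cryptsetup -y --cipher aes-cbc-essiv:sha256 --key-size 256 luksFormat /dev/myvg/web2-var-crypt
        cryptsetup luksOpen /dev/myvg/web2-var-crypt web2-var
        mkfs.ext4 /dev/mapper/web2-var
        # verify the header against the same stable name
        cryptsetup luksDump /dev/myvg/web2-var-crypt

    If the header survives when addressed this way, the original symptom was mkfs landing on a renumbered dm node rather than anything LUKS-specific.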

  • What is the optimal hardware configuration for a heavy-load LAMP application?

    - by Piotr K.
    I need to run a Linux-Apache-PHP-MySQL application (the Moodle e-learning platform) for a large number of concurrent users; I am aiming for 5,000. By concurrent I mean that 5,000 people should be able to work with the application at the same time, and "work" means not only database reads but writes as well. The application is not very typical, since it does a lot of inserts/updates on the database, so caching techniques do not help much. We are using the InnoDB storage engine. In addition, the application is not written with performance in mind; for instance, one Apache thread usually occupies about 30-50 MB of RAM.

    I would be grateful for information on what hardware is needed to build a scalable configuration that can handle this kind of load. We currently use two HP DL380s, each with two 4-core processors, which can handle a much lower load (typically 300-500 concurrent users). Is it reasonable to invest in this kind of box and build a cluster out of them, or is it better to go with more high-end hardware? I am particularly curious about:

    - how many servers are needed and how powerful they should be (number of processors/cores, amount of RAM)
    - what network equipment should be used (what kind of switches, network cards)
    - any other hardware, like particular disk storage solutions, that is needed

    Another question is how to put everything together, that is, what the most optimal architecture is. Clustering with MySQL is rather hard (people complain about MySQL Cluster, even here on Stack Overflow).
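    A back-of-the-envelope check (illustrative numbers only, taken from the 30-50 MB per Apache process figure above): the web tier's memory requirement dominates quickly if every in-flight request holds a full process.

        5,000 simultaneous requests x 40 MB/process = ~200 GB of RAM across the web tier
        at ~48 GB usable per box, that is 4-5 web servers before MySQL is counted at all

    In practice, 5,000 logged-in users rarely means 5,000 requests in flight; if roughly 10% have a request being processed at any instant, 500 x 40 MB = ~20 GB, which one or two of the existing DL380s could cover. Pinning down that concurrency ratio for Moodle's traffic pattern is worth doing before sizing any hardware.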

  • 1000Base-X layer 2/MAC address details

    - by user69971
    A layer-2 Ethernet frame is sent with a source and destination MAC address. Given a 100Base-TX (copper) trunk between two Cisco switches, I can do a "show interface fa 0/0" on S1 to see the MAC address assigned to the trunking interface, then go to S2 and do a "show mac address-table" and find the MAC address of the S1 fa 0/0 interface as a dynamically learned MAC in the table. Given a similar setup with a 1000Base-X (fiber GBIC) trunk, the MAC address shown by "show interface gi 0/0" on S1 does not show up in the MAC address table of S2.

    Everything I can find online indicates that 1000Base-X uses largely the same layer-2 format as copper connections. There are some slight alterations (the minimum frame size is slightly larger), but the fundamentals of the frame structure appear to be the same, including transmission with a source and destination L2 address. So why doesn't the address of gi 0/0 show up in the MAC address table of the connected switch?

    The only explanation that seems to make sense is that the GBIC has its own MAC address, almost as if it were acting as a mini two-port switch or hub, with the switch-assigned MAC address showing up on the interface connection and a different MAC address assigned to the fiber side. If this is the case, is there any way to see the GBIC's MAC address on the switch?

    (I've tried to look up the details in IEEE 802.3z, but it doesn't seem to be available without an IEEE membership or purchasing the standard. I can find the base 802.3 PDFs for download, but not 802.3z.)

  • Does Hyper-V support SCSI pass-through disks in a Server 2003 R2 VM?

    - by Peter Bernier
    I'm running into some difficulties getting pass-through disks to be accessible to a Server 2003 R2 virtual machine under Hyper-V.

    Host OS: Server 2008 R2 full with the Hyper-V role. Guest OS: Server 2003 R2 (Windows Home Server).

    The guest's OS disk is a pass-through disk on the IDE controller (not the best solution, but I can live with it). My storage disks will be pass-through disks on the SCSI controller. I'm able to see all of the disks that I'll be using for the VM on the host without issue. The problem is that I can't get the guest OS to see the storage drives (as pass-through disks on the SCSI controller). Here's what I'm doing:

    - On the host, the storage drive is set to "Offline", just like the OS disk (this is required for pass-through to work).
    - In the VM, the storage drive is on the SCSI controller.
    - Hyper-V Integration Services are installed in the guest.

    That's as far as I'm able to get. I don't see the drive in Computer Management or in Windows Explorer (I've tried with an unformatted disk, as well as after formatting a partition). I am able to see a removable device that lists the disk's model number in the guest, but I can't seem to access the storage. (I get an entry in Device Manager that needs drivers, but nothing on the Integration Services disc works.)

    Troubleshooting steps I've tried:

    - If I put the pass-through drive on the IDE controller, I can see it in the guest.
    - If I put the storage drive "Online" in the host and create a VHD on it on the SCSI controller, I can see it in the guest.

    I suppose I could create a fixed-size VHD that consumes the entire disk, but I'd rather not have that overhead. I've also extracted the contents of the Integration Services drivers (x86 and amd64) and tried pointing the disk controller to each of them, with no luck. Can anyone offer suggestions as to how I can get this working properly?

  • Windows Media Player 12 Library import keeps dying

    - by duckworth
    I cannot get WMP 12 to import my library. I have searched various forums and tried all the common solutions (disabling media sharing, deleting my %LOCALAPPDATA%\Microsoft\Media Player directory and re-importing, and so on), but nothing works. I have even removed the media features from Windows Setup and re-added them.

    I have a large MP3 collection shared on the network from another Windows box. I add the folder (tried both as a mapped drive and as a UNC path) and it begins importing. About 30 minutes into the import (when CurrentDatabase_372.wmdb hits just under 400 MB), WMP stops importing and all of the icons in WMP turn to red X's, and my library is gone. If I close and reopen WMP 12, the library is empty, CurrentDatabase_372.wmdb is small, and it starts importing again. Rinse, lather, repeat. I am going nuts, as WMP 11 on Vista handles this same setup perfectly. I am at my wits' end on what else to try. I am running a legit Windows 7 Ultimate x64 RTM install. Here is a screenshot of what WMP 12 looks like when the import dies. Any other ideas?

    Edit: OK, I just confirmed this is definitely a problem that is not specific to my computer or configuration. I did a clean installation of Windows 7 Ultimate x86 on an old test machine, opened WMP 12 and added the same network folder of MP3s, and it crashed about an hour into the import with the same appearance as the screenshot above, and the library disappears. So the problem has to be one of several things:

    - the large size of the library
    - the fact that the library is on the network
    - a specific file or files causing the player to crash

  • Setting up a Pagefile and Partition in Server 2008

    - by Brett Powell
    I am setting up 18 new machines for our company, and I have instructions from my new boss on setting up a pagefile and partitions. I have looked at the existing machines to base the new setups on, but there is no consistency between any two machines, which has left me extremely frustrated, to say the least. My instructions are:

    1) Set a static pagefile (use the recommended value as max/min); put it on the SSD if an SSD is available.

    2) Make 3 partitions:
    - C: is used for the OS and install files.
    - D: is used for backups on machines with an SSD. On machines without an SSD, create a D: partition for the pagefile (2x installed RAM for the partition size).
    - E: must be the partition hosting user files.

    I have never messed with pagefiles before, and looking at the existing machines offers no help. My questions are:

    1) As the machines I am setting up have no SSD (just two SATA drives), does it sound like the pagefile should be set up on the C: (primary) drive or on D:? The instructions are vague, so I have no idea.

    2) As C: and D: are both physical drives, does it sound like C: should be partitioned to create the E: drive, or D:?

    Thanks for any help I can get. I am extremely stressed out under a massive workload right now, and these vague instructions are quite infuriating.
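    For the static-pagefile requirement, a sketch of how it can be scripted rather than clicked through on 18 machines (assuming Server 2008-era wmic; the sizes are placeholders, so substitute the "recommended" value Windows reports in System Properties):

        rem turn off automatic management, then pin min = max on D:
        wmic computersystem set AutomaticManagedPagefile=False
        wmic pagefileset create name="D:\pagefile.sys"
        wmic pagefileset where name="D:\\pagefile.sys" set InitialSize=12288,MaximumSize=12288

    Setting InitialSize equal to MaximumSize is what makes the pagefile "static", which avoids fragmentation from growth and matches the instruction's min/max wording.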

  • Installing Mysql Ruby gem on 64-bit CentOS

    - by Jacek
    Hi, I have a problem installing the MySQL Ruby gem on a 64-bit CentOS machine.

        [jacekb@vitaidealn ~]$ uname -a
        Linux vitaidealn.local 2.6.18-92.el5 #1 SMP Tue Jun 10 18:51:06 EDT 2008 x86_64 x86_64 x86_64 GNU/Linux

    The mysql and mysql-devel packages are installed. mysql_config provides the following paths:

        Usage: /usr/lib64/mysql/mysql_config [OPTIONS]
        Options:
          --cflags         [-I/usr/include/mysql -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -fno-strict-aliasing -fwrapv]
          --include        [-I/usr/include/mysql]
          --libs           [-L/usr/lib64/mysql -lmysqlclient -lz -lcrypt -lnsl -lm -L/usr/lib64 -lssl -lcrypto]
          --libs_r         [-L/usr/lib64/mysql -lmysqlclient_r -lz -lpthread -lcrypt -lnsl -lm -lpthread -L/usr/lib64 -lssl -lcrypto]
          --socket         [/var/lib/mysql/mysql.sock]
          --port           [3306]
          --version        [5.0.45]
          --libmysqld-libs [-L/usr/lib64/mysql -lmysqld -lz -lpthread -lcrypt -lnsl -lm -lpthread -lrt -L/usr/lib64 -lssl -lcrypto]

    Trying to install:

        [jacekb@vitaidealn ~]$ gem install mysql -- --with-mysql-include=/usr/include/mysql --with-mysql-libs=/usr/lib64/mysql
        ...
        ERROR:  Error installing mysql:
                ERROR: Failed to build gem native extension.
        /usr/bin/ruby extconf.rb --with-mysql-include=/usr/include/mysql --with-mysql-libs=/usr/lib64/mysql
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lm... no
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lz... no
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lsocket... no
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lnsl... no
        checking for mysql_query() in -lmysqlclient... no
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of necessary
        libraries and/or headers. Check the mkmf.log file for more details. You may
        need configuration options.

    I would appreciate any help. Thanks for reading :).
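    A variant worth trying (a sketch; the path comes straight from the mysql_config output above): pointing the gem at mysql_config directly lets extconf.rb pick up all the correct 64-bit include and library flags in one go:

        gem install mysql -- --with-mysql-config=/usr/lib64/mysql/mysql_config

    If the checks still fail, mkmf.log names the exact compile or link error; a "no" even for "checking for main() in -lm" this early usually means the C toolchain itself (gcc, glibc headers) is missing or broken rather than anything MySQL-specific.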

  • Create Windows AMI with instance storage

    - by Jonathan Oliver
    I have a business use case and workflow where local/instance/ephemeral storage for an EC2 instance is ideal. Unfortunately, I'm coupled to a Windows platform for this particular task, and the EC2 Windows offering appears to have some deficiencies related to AMI creation. In essence, I'm trying to figure out whether there's a way to attach local instance storage to a Windows EC2 instance using the typical command-line interface (because the Amazon website GUI doesn't support it) and then to somehow create an AMI based upon that. I've tried creating a snapshot and then creating a Windows AMI based upon the snapshot, but of course the docs say this is unsupported and makes an unbootable AMI.

    In short, here's what I'm trying to do:

    - Be able to run a Windows instance (EBS/S3 instance type doesn't matter).
    - Attach local instance storage as drive D:.
    - Persist that configuration as an AMI such that I can start as many instances as necessary from the GUI, command line, or REST API.
    - Be able to take a launched instance, update software, shut down, and create another AMI based upon that. Wash, rinse, repeat.

    One other option, which isn't horrible but isn't ideal, is to create an AMI that has two EBS volumes already attached (system+apps and data). Essentially, every time I start an instance based upon the AMI, it'll create two new EBS volumes of a pre-determined size. I'm trying to avoid that scenario if possible.
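    For the attach-instance-storage part, a sketch using the classic ec2-api-tools (the AMI ID is a placeholder, and the device name Windows exposes can vary; the volume may also need to be brought online in Disk Management before it gets a drive letter):

        # map the first instance store volume into the instance at launch
        ec2-run-instances ami-12345678 --instance-type m1.large \
            --block-device-mapping "xvdb=ephemeral0"

    Block-device mappings can also be baked into the AMI itself via ec2-register's -b flag, so every instance launched from that AMI gets the ephemeral volume without repeating the mapping each time.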

  • nameserver spoiling avahi multicast name resolution of .local domain

    - by Doug Coburn
    After trying to ping a machine on my local network, I noticed that I was trying to hit the address 66.152.109.24. This is an external public address. Resolution should have occurred via avahi mDNS. I ran dig to see how the name resolution worked, and my Qwest/CenturyLink name server was returning results for my .local domain queries! I tried a random name and got the same IP address result.

        $ dig jakdafj.local

        ; <<>> DiG 9.8.1-P1-RedHat-9.8.1-3.P1.fc15 <<>> jakdafj.local
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58410
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;jakdafj.local.            IN    A

        ;; ANSWER SECTION:
        jakdafj.local.        10    IN    A    66.152.109.24
        jakdafj.local.        10    IN    A    204.232.231.46

        ;; Query time: 104 msec
        ;; SERVER: 205.171.3.25#53(205.171.3.25)
        ;; WHEN: Sat Mar 24 20:40:17 2012
        ;; MSG SIZE  rcvd: 63

    Am I missing something, or is my DNS name server at 205.171.3.25 corrupted?
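    Two observations, offered as a sketch rather than a confirmed diagnosis: answering A records for any nonexistent name (a random label, two fixed IPs, a 10-second TTL) is the signature of an ISP resolver doing NXDOMAIN redirection to a search page, not corruption. And on the host side, dig always queries DNS directly, while normal name resolution only stays on mDNS for .local if nsswitch is told to stop there:

        # /etc/nsswitch.conf: resolve .local via Avahi and never fall through to DNS
        hosts: files mdns4_minimal [NOTFOUND=return] dns

    With mdns4_minimal before dns, a .local name that mDNS cannot find returns "not found" instead of being forwarded to the ISP resolver.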

  • snort analysis of wireshark capture

    - by Ben Voigt
    I'm trying to identify trouble users on our network. ntop identifies high-traffic and high-connection users, but malware doesn't always need high bandwidth to really mess things up. So I am trying to do offline analysis with snort (I don't want to burden the router with inline analysis of 20 Mbps traffic). Apparently snort provides a -r option for this purpose, but I can't get the analysis to run. The analysis system is Gentoo, amd64, in case that makes any difference. I've already used oinkmaster to download the latest IDS signatures. But when I try to run snort, I keep getting the following error:

        % snort -V

           ,,_     -*> Snort! <*-
          o"  )~   Version 2.9.0.3 IPv6 GRE (Build 98) x86_64-linux
           ''''    By Martin Roesch & The Snort Team: http://www.snort.org/snort/snort-team
                   Copyright (C) 1998-2010 Sourcefire, Inc., et al.
                   Using libpcap version 1.1.1
                   Using PCRE version: 8.11 2010-12-10
                   Using ZLIB version: 1.2.5

        % snort -v -r jan21-for-snort.cap -c /etc/snort/snort.conf -l ~/snortlog/
        (snip)
        273 out of 1024 flowbits in use.
        [ Port Based Pattern Matching Memory ]
        +- [ Aho-Corasick Summary ] -------------------------------------
        | Storage Format    : Full-Q
        | Finite Automaton  : DFA
        | Alphabet Size     : 256 Chars
        | Sizeof State      : Variable (1,2,4 bytes)
        | Instances         : 314
        |     1 byte states : 304
        |     2 byte states : 10
        |     4 byte states : 0
        | Characters        : 69371
        | States            : 58631
        | Transitions       : 3471623
        | State Density     : 23.1%
        | Patterns          : 3020
        | Match States      : 2934
        | Memory (MB)       : 29.66
        |   Patterns        : 0.36
        |   Match Lists     : 0.77
        |   DFA
        |     1 byte states : 1.37
        |     2 byte states : 26.59
        |     4 byte states : 0.00
        +----------------------------------------------------------------
        [ Number of patterns truncated to 20 bytes: 563 ]
        ERROR: Can't find pcap DAQ!
        Fatal Error, Quitting..

    net-libs/daq is installed, but I don't even want to capture traffic; I just want to process the capture file. What configuration options should I set or unset in order to do offline analysis instead of real-time capture?
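    A sketch of the usual fix (paths are typical, not verified for this Gentoo box): Snort 2.9 loads its data-acquisition modules at runtime, so it has to be told where the DAQ modules live and which one to use, and read-file mode still goes through the pcap DAQ:

        # point snort at the DAQ modules and select the pcap module explicitly
        snort --daq-dir /usr/lib/daq --daq pcap \
              -r jan21-for-snort.cap -c /etc/snort/snort.conf -l ~/snortlog/

    If that still fails, "snort --daq-list --daq-dir /usr/lib/daq" shows which modules snort can actually see; an empty list means the daq package was built without the pcap module.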

  • Fedora 17 - Dropping into debug shell after attempted partitioning

    - by i.h4d35
    So I tried creating a new partition on Fedora 17 using fdisk, as follows:

        Command (m for help): n
        Command action
           e   extended
           p   primary partition (1-4)
        p
        Partition number (1-4): 1
        First cylinder (2048-823215039, default 2048):
        Using default value 2048
        Last cylinder or +size or +sizeM or +sizeK (1-9039, default 9039): +15G

    Once this was done, instead of formatting the partition I created, I ran the partprobe command to write the changes to the partition table. On rebooting, the computer drops into the debug shell and gives me the following error:

        dracut Warning: unable to process initqueue
        dracut Warning: /dev/disk/by-uuid/vg_mymachine does not exist
        Dropping to debug shell
        dracut:/#

    While trying to run fsck on the said partition from the debug shell, it says /etc/fstab is not found, and inside /etc I see an fstab.empty file. Is it still possible to retrieve what I have on the computer? Any help would be appreciated. Thanks in advance.

    Edit: I've also tried the following steps for additional troubleshooting:

    - I tried to boot from the Fedora disk in rescue mode; it says no Linux partition is detected.
    - I tried to create an fstab file by combining the entries from blkid and the /etc/mtab file, using the UUIDs from the mtab file (part of this solution). It didn't work: as soon as I rebooted the machine, it promptly dropped me into the debug shell, and the fstab file I created wasn't in /etc anymore.

  • Installing VirtualBox on BackTrack 5

    - by m0skit0
    I'm getting this error when running VirtualBox's installation script:

        $ sudo ~/Downloads/VirtualBox-4.1.14-77440-Linux_x86.run
        Verifying archive integrity... All good.
        Uncompressing VirtualBox for Linux installation...........
        VirtualBox Version 4.1.14 r77440 (2012-04-12T16:20:44Z) installer
        Removing previous installation of VirtualBox 4.1.14 r77440 from /opt/VirtualBox
        Installing VirtualBox to /opt/VirtualBox
        tar: Record size = 8 blocks
        Python found: python, installing bindings...
        Building the VirtualBox kernel modules
        Error! Bad return status for module build on kernel: 3.2.6 (i686)
        Consult the make.log in the build directory /var/lib/dkms/vboxhost/4.1.14/build/ for more information.
        ERROR: binary package for vboxhost: 4.1.14 not found

    Here's the log:

        $ cat /var/lib/dkms/vboxhost/4.1.14/build/make.log
        DKMS make.log for vboxhost-4.1.14 for kernel 3.2.6 (i686)
        Sun May 13 14:32:52 CEST 2012
        make: Entering directory `/usr/src/linux-headers-3.2.6'
        /usr/src/linux-headers-3.2.6/arch/x86/Makefile:39: /usr/src/linux-headers-3.2.6/arch/x86/Makefile_32.cpu: No such file or directory
        make: *** No rule to make target `/usr/src/linux-headers-3.2.6/arch/x86/Makefile_32.cpu'.  Stop.
        make: Leaving directory `/usr/src/linux-headers-3.2.6'

    The /usr/src/linux-headers-3.2.6/arch/x86/ directory:

        $ ls /usr/src/linux-headers-3.2.6/arch/x86/
        Kconfig        Makefile  ia32    lguest    mm        pci       tools  video
        Kconfig.cpu    boot      kernel  lib       net       platform  um     xen
        Kconfig.debug  crypto    kvm     math-emu  oprofile  power     vdso

    References to "cpu" in the Makefile:

        $ cat /usr/src/linux-headers-3.2.6/arch/x86/Makefile | grep cpu
        include $(srctree)/arch/x86/Makefile_32.cpu
        # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)

    Before upgrading to 3.x I didn't have this problem; the script would install VirtualBox correctly. Any ideas on what might be causing this? Thanks in advance!
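    The make.log points at a headers tree that is missing arch/x86/Makefile_32.cpu, a fragment that 32-bit builds include. A hedged workaround (assuming you can obtain the matching 3.2.6 kernel source; the source path below is an example): copy the missing fragment from the full source tree into the headers directory and re-run the installer:

        # after fetching and unpacking the 3.2.6 kernel source:
        cp /usr/src/linux-3.2.6/arch/x86/Makefile_32.cpu \
           /usr/src/linux-headers-3.2.6/arch/x86/
        sudo ~/Downloads/VirtualBox-4.1.14-77440-Linux_x86.run

    If the headers came from a package, reinstalling or rebuilding that package is the cleaner fix, since an incomplete headers tree will likely bite other DKMS modules too.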

  • High virtual memory usage in OpenVZ?

    - by freedrull
    We're having a lot of memory problems on a new OpenVZ box. It is supposed to have 1 GB of memory; I'm not sure how much of that is burstable or guaranteed memory. Programs in general seem to take up more virtual memory than they do on my box at home and on our other OpenVZ box. I wrote this simple C program:

        #include <stdio.h>
        #include <stdlib.h>

        int main(){
            char *thingy = malloc(500);
            getchar();
            return 0;
        }

    So it simply allocates 500 bytes and then waits on getchar() before returning. I ran the program on three computers. On my home machine and on our other OpenVZ box, it shows about 1k of virtual memory being used. On the new, problematic machine it's about 3k. I know this is just virtual memory and not resident memory, but why is this machine allocating so much virtual memory? Are there some settings I need to adjust in the OpenVZ memory configuration? I tried changing the stack size with "ulimit -s 256" and restarting some daemons, but I still saw the same results. I'm doing all of my monitoring with htop; is that even a good program to use with an OpenVZ VPS? I've read I should be parsing the output of /proc/user_beancounters instead.
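    On the monitoring question, a sketch (the standard OpenVZ interface, though the column layout can vary by kernel): inside a container, /proc/user_beancounters is the authoritative view, and the last column, failcnt, counts how often each resource has hit its limit:

        # show the two header lines plus any counter that has ever failed
        awk 'NR <= 2 || $NF > 0' /proc/user_beancounters

    A non-zero failcnt on privvmpages or kmemsize is the usual sign that the container's memory guarantees, rather than the programs themselves, are what need adjusting.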

  • How to recover deleted NTFS partitions?

    - by Frank
    Last night I made a terrible mistake. I was reinstalling Windows, and I accidentally deleted all the partitions on all my drives. I realized my mistake before I had created any new partitions, so nothing has been written to any of the disks. I'm currently at my wits' end about what I'll do if I don't manage to recover the data. I have two 1 TB drives and a 2 TB drive. One of the 1 TB drives was the one I was supposed to be reformatting, so there's nothing to recover there. I am currently in a Linux live CD.

    In this article, http://support.microsoft.com/kb/245725, Microsoft advises recreating the exact same partition but choosing not to format it, and then recovering the backup boot sector from the end of the NTFS volume. But none of the drives I want to recover are bootable drives. Does that mean I do not need to rewrite the boot sector? As in, if I simply recreate a partition of the same size, will it see all my data? Or would I be better off using the TestDisk utility (http://www.cgsecurity.org/wiki/TestDisk)? Please help, I'm desperate!!
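    For what it's worth, a sketch of the TestDisk route (the usual sequence, with menu names from TestDisk's documented workflow), which searches for the old NTFS boot sectors itself instead of requiring the partition geometry to be recreated by hand:

        # run against each affected disk from the live CD
        testdisk /dev/sdb
        # in the menus: [Proceed] -> partition table type (usually Intel)
        #   -> [Analyse] -> [Quick Search] -> mark the found partitions -> [Write]
        # use [Deeper Search] if Quick Search misses a partition

    Since nothing has been written to the disks, the old partition boundaries and boot sectors should still be intact, which is exactly the case TestDisk is designed for.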

  • PostgreSQL: No space left on device

    - by pstanton
    Postgres is reporting that it is out of disk space while performing a rather large aggregation query:

        Caused by: org.postgresql.util.PSQLException: ERROR: could not write block 31840050 of temporary file: No space left on device
            at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1592)
            at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1327)
            at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:192)
            at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:451)
            at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:350)
            at org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:304)
            at org.hibernate.engine.query.NativeSQLQueryPlan.performExecuteUpdate(NativeSQLQueryPlan.java:189)
            ... 8 more

    However, the disk has quite a lot of space:

        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda1             386G  123G  243G  34% /
        udev                  5.9G  172K  5.9G   1% /dev
        none                  5.9G     0  5.9G   0% /dev/shm
        none                  5.9G  628K  5.9G   1% /var/run
        none                  5.9G     0  5.9G   0% /var/lock
        none                  5.9G     0  5.9G   0% /lib/init/rw

    The query is doing the following:

        INSERT INTO summary_table
        SELECT t.a, t.b, SUM(t.c) AS c, COUNT(t.*) AS count, t.d, t.e,
               DATE_TRUNC('month', t.start) AS month, tt.type AS type, FALSE, tt.duration
        FROM detail_table_1 t, detail_table_2 tt
        WHERE t.trid=tt.id
          AND tt.type='a'
          AND DATE_PART('hour', t.start AT TIME ZONE 'Australia/Sydney' AT TIME ZONE 'America/New_York')>=23
          OR DATE_PART('hour', t.start AT TIME ZONE 'Australia/Sydney' AT TIME ZONE 'America/New_York')<13
        GROUP BY month, type, t.a, t.b, t.d, t.e, FALSE, tt.duration

    Any tips?
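    Two things stand out, offered as hedged observations rather than a confirmed diagnosis. First, the arithmetic: 31,840,050 blocks x 8 KB per block is roughly 243 GB, which matches the free space shown by df, so the temporary file really did fill the disk and was then cleaned up when the query aborted, which is why df looks fine afterwards. Second, AND binds tighter than OR in SQL, so the WHERE clause above effectively reads (t.trid=tt.id AND tt.type='a' AND hour>=23) OR (hour<13); the OR branch drops the join condition entirely and produces a near-cartesian product, which would explain the enormous temporary file. A sketch of the presumably intended predicate:

        WHERE t.trid = tt.id
          AND tt.type = 'a'
          AND ( DATE_PART('hour', t.start AT TIME ZONE 'Australia/Sydney' AT TIME ZONE 'America/New_York') >= 23
             OR DATE_PART('hour', t.start AT TIME ZONE 'Australia/Sydney' AT TIME ZONE 'America/New_York') < 13 )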
