Search Results

Search found 11280 results on 452 pages for 'zend newbie dev'.


  • cruisecontrol.net failing build with no errors

    - by John Hoge
    Hi, I've been using CCNet for some time now but just upgraded to .NET 4. I've installed the new framework on my dev box along with VS2010, and on my CC.NET server as well. I've just installed CruiseControl.NET version 1.5.6804.1 and changed my MSBuild tasks to point to the new v4.0.30319 framework directory. I've got two projects on .NET 4 now that just don't build. They build perfectly well in VS and run well on both Cassini and IIS. I just don't get any error messages from CC.NET, just this:

        BUILD FAILED
        Project: GoodBay
        Date of build: 2010-05-11 10:21:59
        Running time: 00:00:02
        Integration Request: Build (ForceBuild) triggered from SVN
        Modifications since last build (0)
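    One way to surface the errors CC.NET is swallowing is to run MSBuild by hand on the build server with the v4 tools and full verbosity (a sketch; the solution filename is hypothetical):

        "%WINDIR%\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe" GoodBay.sln /verbosity:detailed

    If that build succeeds, a common culprit is the logger configured in the <msbuild> task: a ThoughtWorks.CruiseControl.MSBuild.dll built for an older CC.NET/.NET combination may produce output the new server can't read, which tends to show up as exactly this kind of failed build with no errors.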

    Read the article

  • Why is hg fetch a deprecated extension? [closed]

    - by Jan
    Mercurial's fetch extension conveniently pulls and merges from a remote repository. Recently, this feature has been deprecated by the developers. They recommend avoiding it, and it is on the unloved features list. It is useful in many cases to be able to pull and initiate a merge with one command (which hg pull -u doesn't do). I assume there is a reason behind the deprecation, but I haven't been able to find one in the documentation or online. What is the reason for deprecating it? I'm not looking for opinions, but rather the factual reason behind the deprecation (which might be that the dev team's opinion is that it should not be used).
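    For context, fetch is essentially a convenience wrapper; without the extension, roughly the same workflow looks like this (a sketch, not an official replacement):

        hg pull
        hg merge                          # only needed when the pull creates a new head
        hg commit -m "Merge remote changes"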

    Read the article

  • Xen P2V for large physical hosts with much free space

    - by Sirex
    I need to P2V a RHEL 5 machine to Xen under RHEL 5. I know I can use dd if=/dev/sda and then virt-install --import on the host, but the downside is that the original machine has 80% free space on its drive. Does anyone know of (or can document) a quick, easy and reliable method to produce a bootable Xen image which can run under an HVM in such cases? I tried Clonezilla to make the image, to avoid the free-space problem, but it failed with "something went wrong" (useless info, I know). At the moment I'm looking at doing a dd of each partition, plus a file-level copy of the partition which is mostly empty, then creating a new virtual disk, copying the partitions over to it by mounting both the new image and the virtual drive on a second VM, then copying the boot sectors over, then restoring the file-level backup... there must be an easier way? Oh, and the budget is $0. :)
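    One commonly used workaround (a sketch, assuming ext3 filesystems with room for a temporary filler file) is to zero the free space inside the source OS first, so the dd image shrinks to roughly the used 20% when sparsified:

        # on the physical machine: fill free space with zeros, then remove the filler
        # (dd stopping with "no space left on device" is expected here)
        dd if=/dev/zero of=/zerofile bs=4M; rm -f /zerofile; sync
        # image the whole disk as before
        dd if=/dev/sda bs=4M of=/mnt/backup/rhel5.img
        # on the Xen host: rewrite the image, skipping zeroed blocks (output is sparse)
        qemu-img convert -O raw /mnt/backup/rhel5.img /var/lib/xen/images/rhel5.img

    The paths are placeholders; the sparse copy saves space on the Xen host, not inside the guest's filesystem.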

    Read the article

  • Best practice for ONLY allowing MySQL access to a server?

    - by Calvin Froedge
    Here's the use case: I have a SaaS system that was built (dev environment) on a single box. I've moved everything to a cloud environment running Ubuntu 10.10. One server runs the application, the other runs the database. The basic idea is that the database server should only be accessible by the application server and the administrator's machine, both of which have the correct RSA keys. My question: would it be better practice to use a firewall to block access to ALL ports except MySQL, or to skip the firewall / iptables and just disable all other services / ports completely? Furthermore, should I run MySQL on a non-standard port? This database will hold quite sensitive information and I want to make sure I'm doing everything possible to safeguard it properly. Thanks in advance. I've been reading here for a while but this is the first question I've asked. I'll try to answer some as well = )
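    A defense-in-depth setup is typical: disable or firewall everything you don't need, and also restrict 3306 to the application server and the admin machine. A minimal iptables sketch, with placeholder addresses:

        # allow MySQL only from the app server and the admin box, drop everything else
        iptables -A INPUT -p tcp --dport 3306 -s 10.0.0.10 -j ACCEPT
        iptables -A INPUT -p tcp --dport 3306 -s 203.0.113.5 -j ACCEPT
        iptables -A INPUT -p tcp --dport 3306 -j DROP

    Binding MySQL to the private interface (bind-address in my.cnf) adds another layer; a non-standard port hides the service from casual scans but is not a substitute for the rules above.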

    Read the article

  • Why do I have no TTY on a basic Ubuntu 9.10 server install?

    - by pr1001
    I have reinstalled Ubuntu 9.10 Server several times on a bog-standard 1RU server, and each time I finish the install and reboot I see GRUB run and am then presented with a black screen. The machine is running just fine, as I am able to SSH in, but I can't see anything on the attached monitor. I have a simple LCD screen connected via VGA and a signal is apparently being output to it, as it doesn't go to sleep. Looking at /var/log/syslog I see:

        Mar 24 14:57:44 bridge5 rsyslogd-2039: Could no open output file '/dev/xconsole' [try http://www.rsyslog.com/e/2039 ]

    However, I later see:

        Mar 24 14:57:44 bridge5 kernel: [    0.001368] console [tty0] enabled

    Any thoughts? Thanks!
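    Since the box is healthy over SSH, one hedged thing to try is taking kernel mode setting and the boot splash out of the picture so console output stays on the VGA text console (this assumes GRUB 2, which 9.10 uses, and that the video driver is the problem):

        # replace "quiet splash" with nomodeset, then regenerate grub.cfg
        sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT=.*/GRUB_CMDLINE_LINUX_DEFAULT="nomodeset"/' /etc/default/grub
        sudo update-grub
        sudo reboot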

    Read the article

  • How to pipe differently the body of the curl answer and the printed output?

    - by Antoine Lizée
    I would like to print on the command line some output of curl, like the HTTP headers or status code, followed by the body of the response processed by a stdin/stdout program. For instance, print the status code:

        curl -s -w "%{http_code} \\n" -o "/dev/null" http://myURL.com

    and then process the output with a JSON parsing tool:

        curl -s http://myURL.com | python -mjson.tool

    I would like to do both with one command, and I have the feeling that it may be possible thanks to the -o option, which separates curl's own output from the actual answer to the query. The problem is that -o writes directly to a file. Somebody's got a hack?
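    In bash this can be done in one invocation by sending the body into a process substitution while -w writes the status code to stdout (a sketch, not the only way):

        # status code on stdout, body fed to python's JSON tool
        curl -s -o >(python -mjson.tool) -w "%{http_code}\n" http://myURL.com
        # alternative: dump the headers to stderr and pipe the body as usual
        curl -s -D /dev/stderr http://myURL.com | python -mjson.tool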

    Read the article

  • Trouble connecting a UPS

    - by Jure1873
    I've got a Riello UPS connected to my server through USB. The output from dmesg is:

        [1362998.520035] usb 2-2: new low speed USB device using uhci_hcd and address 7
        [938715.763270] usb 2-2: configuration #1 chosen from 1 choice
        [1363008.726243] input: Ups Manufacturing RS232-USB converter as /class/input/input7
        [1363008.749408] input: USB HID v1.00 Gamepad [Ups Manufacturing RS232-USB converter] on usb-0000:00:1d.0-2

    Now the program for controlling the UPS expects me to enter the device path (/dev/ttyUSB0), but that device never gets created. What is /class/input/input7 and where is it? Do I have to install additional drivers?
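    The dmesg lines show the kernel binding the converter to the generic HID (gamepad) driver rather than creating a serial device, which is why /dev/ttyUSB0 never appears. A hedged way to probe this, with the vendor/product IDs as placeholders to be read from lsusb:

        lsusb                                     # note the converter's ID xxxx:yyyy
        sudo modprobe usbserial vendor=0xXXXX product=0xYYYY
        ls /dev/ttyUSB*                           # did a serial node appear?

    Alternatively, a UPS daemon such as NUT can often talk to USB HID UPSes directly, without any /dev/ttyUSB device at all.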

    Read the article

  • scp -q isn't quiet between different hosts

    - by pythonic metaphor
    So scp -q file host:file and scp -q host:file file are both quiet, i.e. they don't show the progress meter. But when I run scp -q host1:file host2:file, I still get the progress meter as well as a "Connection to host1 closed." message. The progress meter can be gotten rid of by redirecting stdout to /dev/null (although I'd rather not have to), but the connection-closed message comes on stderr, which I definitely want to keep in case there's a real error. How can I make scp quiet? Do I have to run ssh host1 "scp -q file host2:file"?
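    Two hedged options: initiate the copy on host1 yourself, or pass a quieter log level through to the underlying ssh session so the "Connection ... closed." notice is suppressed while genuine errors still reach stderr:

        # run the remote-to-remote copy from host1
        ssh host1 'scp -q file host2:file'
        # or lower the ssh LogLevel for this transfer
        scp -q -o LogLevel=QUIET host1:file host2:file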

    Read the article

  • How to set which IP to use for a HTTP request?

    - by GetFree
    This is probably a silly question. I'm doing some http requests using wget from the command line, and I want those connections to be made through one specific IP of the 4 IPs my server has. Those http requests go to one specific range of IPs so I only want those to be routed differently. The 4 interfaces in my server are eth0, eth0:0, eth0:1, eth0:2. I tried with the following command:

        route add -net 192.164.10.0/24 dev eth0:0

    But when I see the routing table it says:

        Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
        192.164.10.0    0.0.0.0         255.255.255.0   U         0 0          0 eth0

    The interface is set to eth0 not eth0:0 as my command says. What am I doing wrong?
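    Interface aliases such as eth0:0 are just extra addresses on eth0, so the kernel reports the route against eth0 either way; what actually matters is the source address used. Two hedged approaches, with 192.168.1.20 standing in for the alias's address:

        # tell wget which local address to bind to
        wget --bind-address=192.168.1.20 http://192.164.10.5/
        # or set the preferred source address for that destination range
        ip route add 192.164.10.0/24 dev eth0 src 192.168.1.20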

    Read the article

  • X58 RAID 10 - Am I forced to use Sata2?

    - by Avi
    I'm building a new dev computer. It will be running a few VMware Workstation virtual machines. I was advised on Server Fault to use RAID 10 for performance. RAID 10 uses 4 disks. I contacted my supplier, who suggested a Gigabyte X58A motherboard and 4 Western Digital Caviar Black 6Gb/s disks. I have checked the spec for the X58A board, however, and it says:

        SATA 3Gb/s: RAID 0, RAID 1, RAID 5, and RAID 10
        SATA 6Gb/s: RAID 0, and RAID 1

    I'm losing half the bandwidth because I'm forced to use SATA2! What should I do?

    Read the article

  • MongoDB REST interface not listening after update

    - by Ones and Zeroes
    I replaced the mongodb-10gen install with the Ubuntu packages (mongodb-server, mongodb-client and dev):

        apt-get install mongodb

    Since then, I have been unable to connect to the REST interface, where it worked before. Doing a wget to http://127.0.0.1:27018, I receive the following response:

        Connecting to 127.0.0.1:27018... failed: Connection refused.

    My previous /etc/mongodb.conf file had the following in it:

        # enable REST
        rest = true

    Adding it to the packaged conf file does not resolve the issue, not even after restarting. I also tried changing the following, with no effect:

        # Disable the HTTP interface (Defaults to localhost:27018).
        # nohttpinterface = true

    to

        # Disable the HTTP interface (Defaults to localhost:27018).
        nohttpinterface = false

    I have searched for days, and there doesn't seem to be anything on the Mongo site about a similar anomaly. If you have encountered a similar issue on Ubuntu Oneiric, please add your comments, even if you haven't found a solution.
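    A couple of hedged checks: confirm whether mongod is listening on 27018 at all, and whether the packaged init script is actually reading /etc/mongodb.conf (paths assumed from the Ubuntu package):

        sudo netstat -tlnp | grep mongod          # only 27017 means the HTTP/REST port never opened
        grep -Ev '^#|^$' /etc/mongodb.conf        # confirm rest = true survived the edit
        sudo service mongodb restart
        tail -n 50 /var/log/mongodb/mongodb.log   # look for option-parsing complaints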

    Read the article

  • Lost Page Write I/O Errors on CentOS LVM setup

    - by Gregg Leventhal
    I have a CentOS 6 box with an LVM setup in which one of the PVs is a USB disk (I know). That PV is getting the error:

        Oct 30 10:57:07 alpha01 kernel: lost page write due to I/O error on dm-3
        Oct 30 10:57:07 alpha01 kernel: Buffer I/O error on device dm-3, logical block 4

    which is causing problems with all of the LVs on it. pvs shows the PV as "unknown device". I can ls into the logical volumes and they show up in lvdisplay, but first I get a bunch of I/O errors. I made sure the cables between the USB drive and the server are secure. What should I do to get this back up and running in the meantime? Should I unmount each LV and run an fsck.ext4 on each one, like fsck.ext4 -y /dev/vg1/lv_logvolname?
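    Assuming the USB PV has simply dropped off the bus, a cautious sequence after re-seating the drive might look like this (unmount everything on the affected VG first; the LV name is the asker's example):

        pvscan                                    # does the missing PV reappear?
        vgchange -ay vg1                          # reactivate the volume group
        umount /mount/point                       # for each affected LV
        fsck.ext4 -fy /dev/vg1/lv_logvolname
        mount /mount/point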

    Read the article

  • Xen Disk Performance Issues

    - by user98651
    I'm currently using Xen PV on CentOS 5 with my domUs as flat files, running on a hardware RAID controller (write cache enabled) formatted with XFS. On the dom0 I can get about 500MB/s in a 2GB dd write from /dev/zero; however, on the domUs I'm lucky if I get 10MB/s (it is usually around half that). I've tried changing the disk scheduler to noop in the domUs, changed some mount parameters, and tweaked the resource allocations of both the dom0 (prioritize CPU) and the domUs (increase RAM and VCPU allocations). None of these steps have produced any noticeable change in performance. My instinct here is that it is not a hardware problem, given the solid performance of the dom0. Any ideas on what might be causing this problem? I'm considering moving to LVM-based domUs.
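    It may be worth ruling out caching effects before blaming Xen: the dom0 figure is largely measuring the RAID controller's write cache and the page cache, while file-backed domU writes go through the loopback/blktap path. A hedged comparison using direct I/O in both places:

        # run the same test in dom0 and in a domU; oflag=direct bypasses the page cache
        dd if=/dev/zero of=testfile bs=1M count=2048 oflag=direct
        rm -f testfile

    If a large gap remains with direct I/O, LVM- or phy-backed domUs generally fare better than flat files on this stack.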

    Read the article

  • Can spell checking be disabled by default on OS X?

    - by Lri
    Is there some way I could disable continuous spell checking or other settings in the substitutions menu by default? System Preferences only has an option to disable autocorrect.

        defaults write -g CheckSpellingWhileTyping -bool false

    would be overridden by keys on the property lists of applications. This would only apply to applications that have been used before:

        #!/bin/bash
        for d in $(defaults domains | tr -d ,); do
            osascript -e "app id \"$d\"" > /dev/null 2>&1
            [ $? == 1 ] && continue
            echo $d
            defaults write $d CheckSpellingWhileTyping -bool false
            defaults write $d SmartDashes -bool false
            defaults write $d SmartLinks -bool false
            defaults write $d SmartQuotes -bool false
            defaults write $d SmartCopyPaste -bool false
            defaults write $d TextReplacement -bool false
        done

    Read the article

  • Having problems with Grub2 booting Ubuntu from my External Hard Drive

    - by anonymous
    I installed Ubuntu on my external hard drive but it won't boot on my laptop. What do I do? I did some reading and traced the source of the problem to GRUB 2. Apparently, GRUB 2 isn't using the device's UUID and uses the Linux device name instead (/dev/sdf2). This means that whenever I plug my external HDD into a system that has a different number of drives connected to it, I won't be able to boot without editing the boot command. I don't understand it too well, but that's what I got from what I read. Is there any way to fix this?
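    GRUB 2 can reference the root filesystem by UUID, which sidesteps the /dev/sdX numbering problem. A hedged way to check and regenerate the configuration from the installed system (booted into it, or chrooted):

        sudo blkid /dev/sdf2                               # note the filesystem UUID
        grep GRUB_DISABLE_LINUX_UUID /etc/default/grub     # should be unset or "false"
        sudo update-grub                                   # rewrites grub.cfg with root=UUID=...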

    Read the article

  • Can't open Paypal.com with Google Chrome

    - by grunwald2.0
    Currently I always get an error message (for the past week!) when trying to open the PayPal website with Google Chrome, and I don't know why. FlashBlocker and AdBlockPlus are deactivated. Chrome v20.0.1132.11 dev. Error message:

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>400 Bad Request</title>
        </head><body>
        <h1>Bad Request</h1>
        <p>Your browser sent a request that this server could not understand.<br />
        Size of a request header field exceeds server limit.<br />
        <pre>
        Cookie: Apache=10.190.8.170.1302997118916547; (cookie body removed due to privacy reasons)
        </pre>
        </p>
        </body></html>

    Read the article

  • Kickstart Partitioning Configuration

    - by Flo
    I've been trying to run a kickstart script with the following partition configuration:

        # Clear the master boot record
        zerombr
        bootloader --location=mbr --driveorder=sda --append=" rhgb crashkernel=auto quiet"
        # Set up the partitions/logical volumes/volume groups
        clearpart --all
        part /boot --fstype=ext4 --asprimary --size=512 --ondisk=sda
        part swap --size=2048 --fstype=swap --ondisk=sda
        part pv.01 --fstype=ext4 --grow --size=200 --ondisk=sda
        part pv.02 --fstype=ext4 --grow --size=200 --ondisk=sdb
        volgroup VolGroup pv.01 pv.02 --pesize=32768
        logvol /opt --fstype=ext4 --name=opt.fs --vgname=VolGroup --size=40000
        logvol / --fstype=ext4 --name=root.fs --vgname=VolGroup --size=78000

    I have two hard drives, and it looks to me like a really simple configuration. When I run the kickstart I keep getting errors that have to do with the Python files that configure partitions. The only actually useful piece of information is KeyError /dev/sda/. I tried a number of variations of this configuration but nothing really worked. Any ideas?

    Read the article

  • SAMBA and Linux ACLs -- "Permission denied" on write to share but file written nevertheless

    - by MCH
    I set up a writable share directory "/home/net/share" with an ACL like this:

        sudo mkdir -p "/home/net/share"
        sudo setfacl -m "u:localuser:rwx,u:remoteuser:rwx,g:users:rwx" "/home/net/share"

    My /etc/samba/smb.conf looks like this:

        [global]
        workgroup = w
        server string = server
        security = user
        load printers = no
        log file = /var/log/samba/%m.log
        max log size = 50
        dns proxy = no
        printing = bsd
        printcap name = /dev/null
        disable spoolss = yes
        encrypt passwords = true
        invalid users = nobody root
        follow symlinks = yes
        wide links = yes

        [share]
        comment = Writable by localuser and remoteuser
        path = /home/net/share
        valid users = remoteuser
        read only = no
        public = no
        printable = no

    Locally, localuser and remoteuser have user accounts and smbpasswds and can both read, create and delete files in /home/net/share. But when I log on from a different machine (like this: sudo mount -t cifs //server/share mountpoint/ -o username=remoteuser), I get "Permission denied" both when trying to create directories and files; oddly though, it does create files (not directories!) despite these messages! How can I get this working?
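    Two hedged things to check: the Samba user needs search (x) permission on every parent directory of the share path, and newly created entries only pick up the ACL if a default ACL is set on the directory. For example:

        ls -ld /home /home/net /home/net/share    # remoteuser needs at least --x on the parents
        sudo setfacl -d -m "u:localuser:rwx,u:remoteuser:rwx,g:users:rwx" /home/net/share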

    Read the article

  • Cloning OpenVZ container

    - by Tiffany Walker
    I have an OpenVZ container on one host and I would like to clone it over to my server; both run SolusVM. I only have root access to my server and would like to host the container on my server now. Can I use rsync to clone the drive while the OS is running on both, using a command like this?

        rsync -uazPx --exclude='/boot' --exclude='/proc' --exclude='/dev' --exclude='/lib' --exclude='/tmp' --exclude='/var/lock' / [email protected]:/

    Are there any other areas I should probably not copy over?
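    If rsync across the two running systems is acceptable, a slightly fuller exclude list plus --numeric-ids (so ownership is copied by numeric UID/GID rather than remapped by user name) is a common variation; this is a sketch, not a SolusVM-specific procedure, and the destination address is a placeholder:

        rsync -aHAXx --numeric-ids \
            --exclude={'/proc/*','/sys/*','/dev/*','/tmp/*','/run/*','/var/lock/*'} \
            / root@destination:/

    Databases and other services with open files are best stopped, or rsynced a second time while stopped, to get a consistent copy.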

    Read the article

  • What is fastest way to backup a disk image over LAN?

    - by David Balažic
    Sometimes I boot SystemRescueCd or a similar live Linux on a PC to back up the hard drive over the local network to my server. I have noticed many times that the transfer speed is not optimal (slower than both the HDD and the network). Any rules of thumb on what to do and what to avoid? What I typically do is something like:

        dd bs=16M if=/dev/sda | nc ...                     # on the client
        nc ... | dd bs=16M of=/destination/disk/backup1    # on the server

    I also "throw in" lzop (others are way too slow) and sometimes an on-the-fly md5sum calculation (of both the uncompressed and compressed stream). I try to add (m)buffer (or other alternatives) to improve throughput (and get a progress indicator). I have noticed that even with enough free CPU, adding commands to the pipeline slows things down. Typically the destination is on an NTFS volume (accessed via ntfs-3g with the big_writes option).
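    One ordering that tends to keep the pipe full (a sketch; the port and paths are placeholders, and nc option syntax differs between netcat builds) is to compress before the network hop and buffer on both ends:

        # on the server (start the receiver first)
        nc -l -p 9000 | mbuffer -m 512M | lzop -d | dd bs=16M of=/destination/disk/backup1
        # on the client
        dd bs=16M if=/dev/sda | lzop | mbuffer -m 512M | nc server 9000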

    Read the article

  • Export 1 year of CVS to another repo?

    - by John Dibling
    We have a CVS repo with many years of history. It has become huge and unwieldy, so we would like to split this single repo into two repos: The main repo would have 1 year's worth of history, up to and including the present day. This is where all dev work would take place. An archive repo would have the complete history, up to the point where the main repo takes over. This would be read-only and only used to look at historical changes. Given that we are starting with one huge, monolithic CVS repo, is it possible to split it up in this way? How can this be accomplished?
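    The archive half is straightforward: take a byte-for-byte copy of the repository and make it read-only (a sketch; the paths are placeholders). Trimming the live repository down to one year is the hard part, since CVS has no built-in split; it generally means outdating old revisions file by file (cvs admin -o) or accepting full history in the live copy as well.

        # snapshot the ,v files for the read-only archive repo
        rsync -a /var/lib/cvsroot/ /var/lib/cvsroot-archive/
        chmod -R a-w /var/lib/cvsroot-archive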

    Read the article

  • Server downtime - are these APC warnings the cause?

    - by DisgruntledGoat
    Yesterday I had a problem with my dedicated server (Ubuntu 10.04, LAMP). It wasn't down per se, but it was running incredibly slowly, as if we had a massive overload of visitors (though I don't think we did). It's running smoothly again now. I've been checking through log files etc. to see if I can find any issues; the only strange thing is a bunch of these errors, occurring at about the same time as the downtime:

        [apc-warning] Unable to allocate memory for pool. in [file] on line 49.

    And a bit later on:

        [apc-warning] GC cache entry '[file1]' (dev=2056 ino=8988092) was on gc-list for 3601 seconds in [file2] on line 746.

    Could these errors indicate the cause of the server slowdown, or are they simply a result of the server being slow in the first place? What would be the solution?
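    Those two warnings are the classic signature of an APC shared-memory cache that has filled up and is thrashing its garbage collector, which can certainly drag a busy LAMP box down. A hedged first step is simply to give APC more room and restart Apache (the ini path and sizes are assumptions for Ubuntu 10.04, not tuned values):

        # apc.shm_size is in MB on older APC builds; newer ones also accept "128M"
        sudo sh -c 'printf "apc.shm_size=128\napc.ttl=7200\n" >> /etc/php5/conf.d/apc.ini'
        sudo service apache2 restart

    Watching the bundled apc.php status page afterwards shows whether the cache keeps filling up again.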

    Read the article

  • Solaris + EMC + PowerPath

    - by yael
    Please advise: when I run the powercf command on my Solaris machine, what changes does this command make on the EMC storage, or on the Solaris filesystem? From the manual page:

        DESCRIPTION
        During system boot on Solaris hosts, the powercf utility configures PowerPath devices
        by scanning the HBAs for both single-ported and multiported storage system logical
        devices. (A multiported logical device shows up on two or more HBAs with the same
        storage system subsystem/device identity. The identity comes from the serial number
        for the logical device.) For each storage system logical device found in the scan of
        the HBAs, powercf creates a corresponding emcpower device entry in the emcp.conf
        file, and it saves a primary path and an alternate primary path to that device.
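    Reading the excerpt above, powercf only rewrites the host-side emcp.conf device entries; it does not change anything on the EMC array itself. A hedged way to inspect the result of that configuration on the host:

        powermt display dev=all      # lists the emcpower pseudo devices and their native paths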

    Read the article

  • How do you back up 40+ CentOS 5.5 servers?

    - by John Little
    We are embarrassed to ask this question. Apologies for our lack of UNIX expertise. We have inherited 40+ CentOS 5.5 servers and don't know how to back them up. We need low-level, clone-type images so that we could restore the servers from scratch if we had to replace the HDs, etc.

    We have used the dd command, but we assume this only works if you want to back up one local disk to another, not 40 servers to one server with an external USB HD attached. All 40 servers have a pair of mirrored disks (we don't know if it's HW or SW RAID). Most only have 100MB used. The servers are running Apache, Zend, Tomcat, MySQL, etc. Ideally we don't want to have to shut them down to back up (but we could).

    We assume that standard UNIX commands like tar, cpio, rsync, scp etc. are of no use, as they only copy files, not partitions, attributes, groups etc.; i.e. they do not produce a result which can simply be re-imaged onto a new HD to bring a server back from the dead.

    We have a large SAN, a spare Windows box and spare UNIX boxes, but these are only visible to one layer in the network. We have an unused Dell DL2000 monster tape unit, but no software or documentation for it. We have a copy of Symantec Backup Exec, but we have no budget for UNIX client licenses. (The company has negative amounts of money.) We need to be able to initiate the backup remotely, as we can only access the servers in person in an emergency (i.e. to restore).

    Googling returns some applications to do this, e.g. Clonezilla, which looks difficult to install and invasive, and Mondo, which only seems to support backup if you are local to the machine. Amanda might be an option, but looks like days/weeks of work to learn and set up. Is there anything built into CentOS, or do we have to go the route of installing, learning and configuring a set of backup tools? Any ideas? This must be a pretty standard problem for which Googling doesn't give an obvious answer.
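    For a remotely initiated, restore-from-scratch image, one low-tech sketch is streaming a dd of each machine's disk over SSH to the backup host (assuming /dev/sda is the mirrored boot disk; mostly-empty disks compress well, and imaging a live system gives a crash-consistent copy at best):

        # run from the backup server, once per host
        ssh root@server01 'dd if=/dev/sda bs=4M | gzip -c' > /backup/server01-sda.img.gz
        # restore later by booting the target from a live CD and reversing the pipe
        gunzip -c /backup/server01-sda.img.gz | ssh root@target 'dd of=/dev/sda bs=4M'

    The hostnames and paths are placeholders.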

    Read the article

  • Entire filesystem restore from rdiff-backup snapshot

    - by atmosx
    I'm trying to do a complete system restore from an rdiff-backup. The backup command was:

        rdiff-backup --exclude-special-files --exclude /tmp --exclude /mnt --exclude /proc --exclude /sys / /mnt/backup/ebox/

    I created a new partition, mounted it at /mnt/gentoo, and did:

        rdiff-backup -r /mnt/vol2 /mnt/gentoo

    However, when I try to chroot into this system (following Gentoo's manual, which means mounting /dev and /proc), I get the following error:

        chroot: failed to run command '/bin/bash': No such file or directory

    All this takes place on a Parallels (virtual machine) Debian installation. Any ideas on how to proceed in order to fully restore the system? Best regards.

    PS: /mnt/gentoo/bin/bash works fine if I execute it. All files and permissions are in place, and rdiff-backup seems to work just fine. However, the system can neither boot (it exits with a kernel panic, cannot find init) nor be chrooted.
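    A "No such file or directory" from chroot usually means the dynamic linker or a library that the shell needs is missing inside the target, or the architectures don't match, rather than bash itself being absent. Two hedged checks before retrying the chroot:

        file /mnt/gentoo/bin/bash     # 32- vs 64-bit; does it match the running kernel?
        ldd /mnt/gentoo/bin/bash      # every listed library must also exist under /mnt/gentoo
        # then the usual bind mounts before chrooting
        mount -t proc proc /mnt/gentoo/proc
        mount --rbind /dev /mnt/gentoo/dev
        chroot /mnt/gentoo /bin/bash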

    Read the article
