Search Results

Search found 10245 results on 410 pages for 'quick guide to irm'.


  • Can I recover a rm -rf-ed Mercurial repository?

    - by WishCow
    I made the mistake of wiping out my entire project directory with a quick "rm -rf project". Of course, the .hg directory went with it. I had about 15-20 changesets that I have not pushed to anyone, and I would really, really like to get those back. The system is an Ubuntu machine, the partition where the delete happened is ext3, and the project consists mostly of PHP files. I know about the guideline to not write to the disk in question. The first idea was to use the tool named scalpel to get the PHP files back, diff them with the current version from the repo, and somehow carve the changes out. While it succeeded, it did not recover the file names (or there is a switch I'm missing), so I'm left with a few thousand sequentially named .php files; combing through them is not an option. Can a kind soul please save me and suggest a way to: a) get the repo back, or b) get the files back, with filenames? For those wondering how I did such a stupid thing: I was working on a file in Vim which I wanted to remove from the repository:
      :!hg rm %
    This complained that the file is in a subrepository, so I specified the following:
      :!hg rm % -R engine
    which complained that the file has modifications, use -f to force. And this is when, somehow, I made up the following command:
      :!rm -rf % -R engine
    Somehow, seeing "force" makes me do a rm -rf by reflex.
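    A tool worth trying before raw carving is extundelete, which reads the ext3 journal and can often restore deleted files together with their names and directory structure (unlike scalpel). A rough sketch, assuming the partition is /dev/sda3 (hypothetical) and a second disk is available to hold an image:
      # work on a read-only copy, never on the live partition
      sudo dd if=/dev/sda3 of=/mnt/otherdisk/partition.img bs=4M
      sudo extundelete /mnt/otherdisk/partition.img --restore-directory project
      # recovered files land in ./RECOVERED_FILES/, names included where the journal still has them
    If the directory entries are recoverable this way, the .hg store may come back intact and the unpushed changesets with it.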

    Read the article

  • How to install an SMTP/email server to work with a PHP script?

    - by jiexi
    I have this code:
      $mail->IsSMTP();
      $mail->SMTPAuth = true;
      $mail->SMTPSecure = "ssl";
      $mail->Host = "mail.craze.cc";
      $mail->Port = 465;
      $mail->Username = "username";
      $mail->Password = "pass";
      $mail->SetFrom("[email protected]", "craze.cc");
      $mail->AddReplyTo("[email protected]", "craze.cc");
      $mail->AddAddress($this->email, $this->username);
      $mail->IsHTML(false);
      $mail->Subject = "Activate Your Craze.cc Account";
      $mail->Body = $message;
    How do I configure my postfix/sendmail or whatever server to actually work and send the mail? This has been driving me insane! I've tried numerous times to configure these servers. I just want to be able to send emails via my PHP script... Can someone please link me to a guide to get this all going, or just provide help themselves? Maybe there is an alternative way I can use to send my email in the PHP script? Basically, I need help just getting the emails to send...
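    One low-friction approach - a sketch only, with package names assumed for a Debian/Ubuntu host and the test address purely a placeholder - is to run a local send-only Postfix and let the script hand mail to it on localhost instead of authenticating against mail.craze.cc:
      sudo apt-get install postfix mailutils     # pick "Internet Site" when the installer asks
      sudo postconf -e 'myhostname = craze.cc'
      sudo postconf -e 'inet_interfaces = loopback-only'
      sudo /etc/init.d/postfix restart
      echo "does this arrive?" | mail -s "postfix test" your-address@example.com
    If the test message arrives, point PHPMailer at it by dropping the SMTPAuth/SMTPSecure lines and setting $mail->Host = "localhost" and $mail->Port = 25 (or simply using $mail->IsMail()). Bear in mind that mail sent straight from a fresh server is often flagged as spam until SPF and reverse DNS are sorted out.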

    Read the article

  • Windows PE network setup

    - by microchasm
    I'm walking through the following step-by-step guide for deploying Windows 7 via AIK: http://technet.microsoft.com/en-us/library/dd349348%28WS.10%29.aspx On step 4 (Capturing the Installation onto a Network Share), I run into a bit of a snag: attempting to connect to a network drive repeatedly fails. I'm using/deploying Dell Optiplex 380 64-bit machines, and the network cards seem to be really wonky. On the machine that I'm using to run AIK etc., the network driver wasn't found automatically. I had to manually go in and install the driver (which was found on the OEM installation media). I've since copied this to the USB key that I'm using for the Autounattend.xml so it's handy. I think that because of this, the PE environment doesn't or can't instantiate the network device. Is there a way to install/configure the network device through the command prompt in PE? If not, I read about adding in the answer file path(s) to drivers, but if I did it this way, would I have to start the process all over again (i.e. create a new Autounattend.xml with the PnPcustomizations path included, re-run the installation on the reference machine, install all the applications, re-make the PE iso, reboot into the new PE iso)? Any shortcuts, direction, or advice would be appreciated. Thanks!
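    Drivers can be injected into a running PE session from the command prompt, so a full rebuild shouldn't be needed just to get the NIC up. A sketch, with the driver path on the USB key purely hypothetical:
      rem load the NIC driver .inf that was copied to the USB key
      drvload D:\Drivers\nic.inf
      rem restart WinPE networking so the newly loaded NIC is initialised
      wpeutil InitializeNetwork
      ipconfig
      net use Z: \\server\captureshare /user:DOMAIN\username
    Adding the driver to the answer file (the PnPCustomizations driver path) only affects future PE images, so that part can wait until the next rebuild.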

    Read the article

  • CPU-adaptive compression

    - by liori
    Hello, let's assume I need to send some data from one computer to another, over a pretty fast network... for example a standard 100Mbit connection (~10MB/s). My disk drives are standard HDDs, so their speed is somewhere between 30MB/s and 100MB/s. So I guess that compressing the data on the fly could help. But... I don't want to be limited by CPU. If I choose an algorithm that is intensive on CPU, the transfer will actually go slower than without compression. This is difficult with compressors like GZIP and BZIP2 because you usually set the compression strength once for the whole transfer, and my data streams are sometimes easy, sometimes hard to compress--this makes the process suboptimal because sometimes I do not use full CPU, and sometimes the bandwidth is underutilized. Is there a compression program that would adapt to current CPU/bandwidth and hit the sweet spot so that the transfer will be optimal? Ideally for Linux, but I am still curious about all solutions. I'd love to see something compatible with GZIP/BZIP2 decompressors, but this is not necessary. So I'd like to optimize total transfer time, not simply the amount of bytes to send. Also I don't need real-time decompression... real-time compression is enough. The destination host can process the data later in its spare time. I know this doesn't change much (compression is usually much more CPU-intensive than decompression), but if there's a solution that could use this fact, all the better. Each time I am transferring different data, and I really want to make these one-time transfers as quick as possible. So I won't benefit from getting multiple transfers faster due to stronger compression. Thanks,
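    One tool that does exactly this adaptation is zstd with its --adapt flag, which raises or lowers the compression level on the fly depending on whether the pipe is CPU-bound or network-bound. It is not gzip/bzip2-compatible, and it assumes a reasonably recent zstd on both ends, but as a sketch:
      tar -cf - ./data | zstd --adapt -c | ssh user@destination 'zstd -d -c | tar -xf - -C /incoming'
    If staying gzip-compatible matters more than the adaptive behaviour, the closest substitute is picking a fixed low level (gzip -1) and accepting that the CPU/bandwidth balance is only tuned once for the whole stream.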

    Read the article

  • SQL Server 2008 Cluster Installation - First network name always fails

    - by boflynn
    I'm testing failover clustering in Windows Server 2008 to host a SQL Server 2008 installation using this installation guide. My base cluster is installed and working properly, as well as clustering the DTC service. However, when it comes time to install SQL Server, my first attempt at installation always fails with the same message and seems to "taint" the network name. For example, with my previous cluster attempt, I was installing SQL Server as VSQL. After approximately 15 attempts at installation and trying to resolve the errors, e.g. changing domain accounts for SQL, setting SPNs, etc., I typoed the network name as VQSL and the installation worked. Similarly on my current cluster, I tried installing with the SQL service named PROD-C1-DB and got the same errors as last time until I tried changing the name to anything else, e.g. PROD-C1-DB1, SQL, TEST, etc., at which point the install works. It will even install to VSQL now. While testing, my install routine was:
      Run setup.exe from patched media, selecting appropriate options
      After the install fails, choose "Remove node from a SQL Server failover cluster" and remove the single, failed node
      Attempt to diagnose the problem, inspect event logs, etc.
      Delete the computer account that was created for the SQL service from Active Directory
      Delete the MSSQL10.MSSQLSERVER folder from the shared data drive
    The error message I receive from the SQL Server installer is:
      The following error has occurred: The cluster resource 'SQL Server' could not be brought online. Error: The group or resource is not in the correct state to perform the requested operation. (Exception from HRESULT: 0x8007139F)
    Along with hundreds of the following errors in the Application event log:
      [sqsrvres] checkODBCConnectError: sqlstate = 28000; native error = 4818; message = [Microsoft][SQL Server Native Client 10.0][SQL Server]Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.
    System configuration notes:
      Windows Server 2008 Enterprise Edition x64
      SQL Server 2008 Enterprise Edition x64 using slipstreamed SP1+CU1 media
      Dell PowerEdge servers
      Fibre attached storage

    Read the article

  • Sharing a folder between Linux and Windows over the internet

    - by valya
    Hello. Currently my job is to make websites with Django. I use many things like virtualenv, PIL, etc. The problem is, I can't stand Linux on my desktop. I like it on servers, and it's great to use over SSH. But for the desktop? No way. Yet for development, Linux is quite essential. Of course almost everything is ported to Windows, but it's not as simple to use as in Linux. For example, the Windows shell is awful in comparison with Linux. So I've tried Cygwin, but it's too damn slow. Every time the Django dev server reloads, it takes almost 20-30 seconds. In comparison, when using "native" Python on Windows or Linux, it reloads instantly. Even worse, Cygwin makes my whole system very slow. I've been thinking about it and have thought up a way to go. I can share a folder with my application with some Linux box. The dev server and everything will run on that box, while I'll be happy editing files and running the browser on my Windows 7. An SSH shell is much quicker and handier than Cygwin. Currently there are no Linux boxes in my home network (except for my android phone :) but I have several VDS boxes with Debian. So, how do I share a Windows folder with a VDS box? I can't rely on my desktop IP but I can rely on the VDS's one. I need the sharing to be as quick as possible (well, 2-3 seconds ping is OK) and "native" for both systems, so I could use the folder like a normal folder in both Windows and Linux.
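    Since the VDS has the stable address, one way to do this - a sketch only, with hostnames, share and credential names all hypothetical, and the package name possibly smbfs on older Debian releases - is to share the project folder from Windows as a normal SMB share, open a reverse SSH tunnel from the desktop to the VDS, and mount the share over that tunnel on the Debian side:
      # on the Windows 7 desktop (any SSH client that supports -R, e.g. OpenSSH or plink)
      ssh -N -R 13445:localhost:445 user@vds.example.com
      # on the Debian VDS
      apt-get install cifs-utils
      mount -t cifs //127.0.0.1/project /srv/project -o port=13445,username=winuser,password=secret,iocharset=utf8
    The Django dev server on the VDS then sees the files as a normal directory, and since the desktop initiates the tunnel, its changing IP doesn't matter. SMB latency over the internet is the obvious weak point, so it is worth testing whether the reverse layout (files living on the VDS, edited from Windows over SFTP) feels faster in practice.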

    Read the article

  • RDP problem with Vista and Windows 7 destination

    - by MadBison
    I use a server at home to host a bunch of concurrently running Hyper-V VMs with different OSes and software for testing. I have Vista on the laptop, all latest SPs and patches. The server is Server 2008 R2, fully patched. The guests are a mix of XP, Vista, Server 2008 and Windows 7. If I connect to the Win XP or Server 2008 guests using RDP, it is always good. Very quick, no speed issues. If I connect to the Vista or Win 7 guests, the response time is so slow it is unusable. Usually 6 or 8 seconds, and at times it is too long to measure! This happens from both the laptop running Vista and the server running Server 2008 R2. Does anyone know what the issue is with RDP on Vista and Windows 7 destinations? I did read this: http://blog.tmcnet.com/blog/tom-keating/microsoft/remote-desktop-slow-problem-solved.asp and that is not the problem I have; I applied that change to all PCs.
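    The fix in that blog post is the receive-window autotuning tweak; when it doesn't help, the other global TCP offload settings on the Vista/Win7 ends are the usual next suspects. A hedged sketch of what to check from an elevated prompt (these are standard netsh options, but whether they are the culprit here is a guess):
      netsh interface tcp show global
      netsh interface tcp set global autotuninglevel=disabled
      netsh interface tcp set global rss=disabled
      netsh interface tcp set global chimney=disabled
    Since only the Vista/Win7 guests are slow, it is also worth switching off the "desktop composition" and "font smoothing" experience options in the RDP client for those connections, as those features only exist on the newer guest OSes.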

    Read the article

  • How do I boot [embedded] Linux from an SD card?

    - by Brandon Yates
    I am hacking together a quick embedded Linux system on a DM816x EVM board. Previously I have been using TFTP and NFS to load my kernel and root filesystem to the board. I am now trying to switch over to loading everything from an SD card. I have my card partitioned such that U-Boot and my kernel image are in one partition, and my rootFS in another partition. At power-on, U-Boot starts correctly and successfully launches the kernel. However, the kernel is unable to mount the root file system. It appears that it doesn't recognize any SD (mmc) cards. It gives this error message:
      VFS: Cannot open root device "mmcblk0p2" or unknown-block(2,0)
      Please append a correct "root=" boot option; here are the available partitions:
      1f00 256 mtdblock0 (driver?)
      1f01 8 mtdblock1 (driver?)
      1f02 2560 mtdblock2 (driver?)
      1f03 1272 mtdblock3 (driver?)
      1f04 2432 mtdblock4 (driver?)
      1f05 128 mtdblock5 (driver?)
      1f06 4352 mtdblock6 (driver?)
      1f07 204928 mtdblock7 (driver?)
      1f08 50304 mtdblock8 (driver?)
      Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(2,0)
    I feel like I'm missing something fundamental here. Why does it not recognize the root device I am trying to load from? Here is the U-Boot boot script that is running:
      setenv bootargs console=ttyO2,115200n8 root=/dev/mmcblk0p2 rw mem=124M earlyprink vram=50M ti816xfb.vram=0:16M,1:16M,2:6M ip=off noinitrd
      mmc init
      fatload mmc 1 0x80009000 uImage
      bootm 0x80009000
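    The partition list only shows mtdblock devices, which suggests the kernel never registers the SD/MMC controller at all. A hedged checklist sketch: confirm MMC support is built into the kernel rather than built as modules (there is no initrd here), and give the card time to probe before the root mount by adding rootwait:
      # kernel config options to confirm (=y, not =m); the exact host-controller option for the DM816x is an assumption
      CONFIG_MMC=y
      CONFIG_MMC_BLOCK=y
      CONFIG_MMC_OMAP_HS=y
      # U-Boot: same bootargs as before, plus rootwait
      setenv bootargs console=ttyO2,115200n8 root=/dev/mmcblk0p2 rw rootwait mem=124M earlyprintk vram=50M ti816xfb.vram=0:16M,1:16M,2:6M ip=off noinitrd
    If the mmcblk devices still never appear in that partition list, the host-controller driver for the board is the piece to chase down.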

    Read the article

  • (ZyWall USG 300) NAT bypassed when accessing in-house server from LAN via domain name

    - by mschr
    My situation is like this: I host a number of websites from within our joint network solution. On the network there are basically 3 categories:
      the known public, registered via MAC, given a static DHCP lease
      the anonymous LAN connections, given a lease from a specific DHCP range
      switches, unix hosts, firewall
    Now, consider the following hosts which are of interest:
      111.111.111.111 (ZyWall USG 300 WAN)
      192.168.1.1 (ZyWall USG 300 LAN) - load balances and bandwidth monitors, plus handles NAT
      192.168.1.2 (Linux www) - serves mydomain1.tld and mydomain2.tld
      192.168.123.123 (Random LAN client) - accesses mydomain1.tld from LAN
      23.234.12.253 (Random External client) - accesses mydomain1.tld via WAN
    DNS A records are set up so that both mydomain1.tld and mydomain2.tld point to 111.111.111.111, and the Linux www host serves the http parts with VirtualHost configurations, setting up the document roots per ServerName; this is not so interesting though. A NAT rule translates 111.111.111.111:80 to 192.168.1.2:80 (1:1 NAT). Our problem follows: when accessing http://mydomain1.tld from outside the joint network (23.234.12.253 example host), everything is fine; the ZyWall receives requests via port 80 and maps them to the Linux host's httpd. However, once trying to go through the NAT from the LAN side (in-house, 192.168.123.123 example host), one gets filtered in the ZyWall's port 80 firewall. I know this only because port 443 is open for the administration interface and https://mydomain1.tld prompts for the ZyWall login. So my conclusion is that the LAN clients that access 111.111.111.111 are in fact routed to 192.168.1.1 whilst bypassing the NAT table. I need to know how to set up NAT / Policy Route so that LAN - WAN - LAN will function with proper network translations instead of doing the 'quick nameserver lookup' or whatever this might be.
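    What's described here is the classic NAT loopback (hairpin NAT) problem. If the ZyWall's NAT/Policy Route options won't cooperate, the usual workaround is split-horizon DNS: let LAN clients resolve the site names straight to the internal web server, so the traffic never has to hairpin through the WAN address. A sketch assuming the LAN's DNS is served by dnsmasq (purely an assumption; the equivalent records can be added on whatever DNS server the LAN actually uses):
      # /etc/dnsmasq.conf on the LAN DNS server
      address=/mydomain1.tld/192.168.1.2
      address=/mydomain2.tld/192.168.1.2
    External clients keep using the public A records and the existing 1:1 NAT, so nothing changes on the WAN side.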

    Read the article

  • ffmpeg: converting an avi to a reduced, shareable flv/mp4...

    - by meder
    I recently followed a guide and recompiled my ffmpeg so x264 is enabled. I used some generic settings to convert my 700 MB avi file to an mp4 file; the result was a 407 MB mp4 file. The original avi file's settings:
      Codec: DX50
      Resolution: 704x304
      Frame rate: 23.976023
      Stream 1 Codec: mpga
      Type: Audio
      Channels: 2
      Sample rate: 48000 Hz
      Bitrate: 179 kb/s
    Command I used:
      ffmpeg -i input.avi -acodec libfaac -ab 128k -ac 2 -vcodec libx264 -vpre hq -crf 22 -threads 0 output.mp4
    The settings of the output file (output.mp4):
      Codec: avc1
      Resolution: 704x304
      Display resolution: 704x304
      Frame rate: 11.988011
      Stream 1 Codec: mp4a
      Type: Audio
      Channels: 2
      Sample rate: 48000 Hz
      Bits per sample: 16
      Bitrate: 1536 kb/s
    The quality of the output mp4 is pretty nice; it seems as if it's pretty much the same as the original source. However, I'm trying to reduce the filesize and I'm not really sure whether I should go with an flv format or keep it mp4. The advantage the flv would have, obviously, is that it would be playable with a flash player (I have come across some swf players which take a flash parameter to play an flv file).. but maybe I could use the video element, as I'm only going to be displaying this video privately, so I don't have to worry about supporting legacy browsers such as IE. Can someone recommend some settings to specify in order for the filesize to be around ~100-150MB or so? I don't mind a reduction in quality, nor do I mind resizing it - I was going to do it initially but I wasn't sure what the guidelines were (if any) for dealing with resolution.. since this video's is 704x304, would it still be ok if I forced it into one that isn't a perfect fit for the aspect ratio? I have no clue about that part. I realize that I could have probably specified 28 instead of 22 for the CRF; I'm not sure if I should do that as opposed to maybe specifying a smaller resolution, which might make it smaller as well?
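    For a target of roughly 100-150 MB, raising the CRF and scaling down together is the usual lever; a sketch built on the same command style as above (CRF 28, width reduced to 480 with the height rounded to the nearest even value):
      ffmpeg -i input.avi -acodec libfaac -ab 96k -ac 2 -vcodec libx264 -vpre hq -crf 28 -s 480x208 -threads 0 output_small.mp4
    480x208 is within a fraction of a percent of the original 704x304 aspect ratio, so there is no visible distortion; and keeping mp4 (H.264/AAC) plays directly in the HTML5 video element, so there is no need to switch to flv for private viewing.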

    Read the article

  • Looking for a recommendation on measuring a high availability app that is using a CDN.

    - by T Reddy
    I work for a Fortune 500 company that struggles with accurately measuring performance and availability for high-availability applications (i.e., apps that are up 99.5% with 5-second page-to-page navigation). We factor in both scheduled and unscheduled downtime to determine this availability number. However, we recently added a CDN into the mix, which complicates our metrics a bit. The CDN now handles about 75% of our traffic, while sending the remainder to our own servers. We attempt to measure what we call a "true user experience" (i.e., our testing scripts emulate a typical user clicking through the application). These monitoring scripts sit outside of our network, which means we're hitting the CDN about 75% of the time. Management has decided that we take the worst-case scenario to measure availability. So if our origin servers are having problems, but the CDN is serving content just fine, we still take a hit on availability. The same is true the other way around. My thought is that as long as the "user experience" is successful, we should not unnecessarily punish ourselves. After all, a CDN is there to improve performance and availability! I'm just wondering if anyone has any knowledge of how other Fortune 500 companies calculate their availability numbers? I look at apple.com, for instance, as an example of a storefront that uses a CDN and never seems to be down (unless there is about to be a major product announcement). It would be great to have some hard, factual data because I don't believe that we need to unnecessarily hurt ourselves on these metrics. We are making business decisions based on these numbers. I can say, however, given that these metrics are visible to management, issues get addressed and resolved pretty fast (read: we cut through the red tape pretty quickly). Unfortunately, as a developer, I don't want management to think that the application is up or down because some external factor (i.e., the CDN) is influencing the numbers. Thoughts? (I mistakenly posted this question on StackOverflow, sorry in advance for the cross-post.)

    Read the article

  • USB Device Not Recognized

    - by Franky Chanyau
    Ok, this one gets a little bit complicated, but bear with me :D A client brought her computer in to be fixed about a week ago. She says she tried charging a new phone she bought from China, and immediately after, her USB keyboard and mouse stopped working (typical). I had a quick look at it, but because I did not have time, I did a simple system restore and it seemed as if the issue was fixed. I promptly sent it back to her, but a few days back she called saying that the issue has returned. Turns out the computer was riddled with some virus that also corrupted her XP install, so I had to format the whole thing (yes, I tried repairing). I hoped that the format would fix the keyboard and mouse issue, but the whole thing has escalated and the computer now throws the "USB Device not recognized" error when I plug anything into the many USB ports it has. I have installed all the drivers (including the chipset drivers) for the PC and even tried the unplug-from-power-for-a-while trick; still no luck. I am sure it is not a hardware issue, but I may be wrong. This is way over my head. Can anyone help? Computer: HP Compaq DC7100, Intel Pentium 4, 512MB RAM. OS: Windows XP Professional SP2.

    Read the article

  • Lucid Lynx login issue

    - by Bart Silverstrim
    Recently upgraded from Karmic to Lynx. The upgrade seemed to go well, no noticeable issues. I logged in, and my window manager wasn't starting. An application would appear, but sans control buttons and border, so I figured the window manager needed to be given a swift kick. Opened a web browser, and a quick google had me run "metacity --replace &", and everything popped up. I re-ran the Compiz configuration tool to enable my rotating desktop cube the way I liked it, and had to reconfigure my desktop switcher to the right number of desktops (although the first time I ran it, it crashed on the panel and reloaded... odd, but once it relaunched it seemed fine). Today I installed updates, rebooted and logged in for the second time since my upgrade. Again, the window manager was dead, my compiz settings were gone, and the workspaces were set back to four (and when I clicked on the preferences to change them, it crashed on the panel and reloaded again). Resetting everything made things look somewhat normal again. I'm guessing it'll work until I reboot again. Googling around isn't turning up similar complaints about Lucid Lynx and the window manager. Before I go deleting preference files, does anyone else know of this kind of issue and what could be done about it? Or should I start taking the stab-in-the-dark approach of deleting preference files, hoping one of them is corrupt or has something unsupported in it that's throwing LL for a loop?
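    Before resorting to deleting dotfiles by hand, the settings that Compiz, Metacity and the panel keep in gconf can be reset from the command line, which is at least targeted and reversible on the next login. A sketch, assuming the stock GNOME/gconf backend that Lucid uses:
      gconftool-2 --recursive-unset /apps/compiz
      gconftool-2 --recursive-unset /apps/metacity
      gconftool-2 --recursive-unset /apps/panel     # only if the panel itself keeps crashing
    Log out and back in afterwards; if the window manager then survives a reboot, one of the settings migrated from Karmic was the likely culprit.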

    Read the article

  • iptables-restore: line 1 failed

    - by Doug
    Hello, I am new to servers, and I was following this guide and it failed on the first command instructed. Could anyone give me a hand? http://wiki.debian.org/iptables
      ~ZORO~:/etc# iptables-restore < /etc/iptables.test.rules
      iptables-restore: line 1 failed
    Edit: iptables.test.rules
      ~ZORO~:/etc# cat /etc/iptables.test.rules
      *filter
      # Allows all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
      -A INPUT -i lo -j ACCEPT
      -A INPUT -i ! lo -d 127.0.0.0/8 -j REJECT
      # Accepts all established inbound connections
      -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
      # Allows all outbound traffic
      # You could modify this to only allow certain traffic
      -A OUTPUT -j ACCEPT
      # Allows HTTP and HTTPS connections from anywhere (the normal ports for websites)
      -A INPUT -p tcp --dport 80 -j ACCEPT
      -A INPUT -p tcp --dport 443 -j ACCEPT
      # Allows SSH connections for script kiddies
      # THE -dport NUMBER IS THE SAME ONE YOU SET UP IN THE SSHD_CONFIG FILE
      -A INPUT -p tcp -m state --state NEW --dport 30000 -j ACCEPT
      # Now you should read up on iptables rules and consider whether ssh access
      # for everyone is really desired. Most likely you will only allow access from certain IPs.
      # Allow ping
      -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
      # log iptables denied calls (access via 'dmesg' command)
      -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7
      # Reject all other inbound - default deny unless explicitly allowed policy:
      -A INPUT -j REJECT
      -A FORWARD -j REJECT
      COMMIT
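    Since line 1 of that file is just "*filter", the failure is usually less about the rules themselves than about the environment the command runs in. A few hedged checks, assuming a stock Debian kernel:
      cat -A /etc/iptables.test.rules | head -3       # look for hidden characters (a BOM or ^M line endings) before *filter
      lsmod | grep ip_tables                          # confirm the netfilter modules are present
      modprobe -a ip_tables iptable_filter xt_state   # load them if missing (module names can vary by kernel)
      iptables-restore -v < /etc/iptables.test.rules  # verbose output shows how far the restore gets
    If the file was edited on a Windows machine or pasted through a web page, re-saving it with plain Unix line endings is often the whole fix.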

    Read the article

  • Looking for a suitable backup solution: Mac OS X to offsite CentOS 6 server, 1TB of working data

    - by Brady
    I'll start by saying what we have in place currently:
      On-site file server (Mac OS X Server) that is used by GFX designers; they have a working 1TB of data.
      Offsite server with 2TB available storage (CentOS 6).
      The Mac OS X server rsyncs data to the offsite server every 6 hours (rsync -avz --delete --progress -e ssh ...).
      The Mac OS X server does a full data backup to LTO 4 tape on a 10-day recycle (Mon-Fri for 2 weeks).
      rsync pushes about 60GB of file changes a day.
    The problem:
      The onsite tape backup is failing, as 1TB of graphics files doesn't compress well enough to fit onto an 800GB LTO4 tape.
      Backup is incredibly slow when doing a full backup.
      It's a pain in the backside getting people to remember to change the tape; it often gets forgotten, etc.
    The quick solution: buy an LTO5 drive and tapes. However, this has been turned down because of the cost...
    What I would like:
      Something that works in the same way rsync works: only changed data is sent over the wire, and it can be scheduled to run multiple times during the day.
      Data that is sent is compressed and sent over SSH.
      Something that keeps a 14-day retention but doesn't keep duplicate data. So, as an example, if I have 1TB of working data and 60GB of changes are made each day, then I expect around 1.84TB of data to be stored on the offsite server.
      To work with the Mac OS X server and the CentOS 6 server.
      Not cost an arm and a leg; it must be a cheaper solution than buying an LTO5 drive with tapes (around £1500).
      Be able to be set up to run autonomously.
      Have some sort of control panel that will allow an admin to easily restore a file/folder.
    Any recommendations?
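    One way to get rsync-style transfers plus a 14-day, deduplicated retention with nothing more than the tools already in use is rsync with --link-dest: each run gets its own dated directory, but unchanged files are hard links into the previous snapshot, so only changed data consumes space or bandwidth. A rough sketch, with all paths and hostnames hypothetical:
      #!/bin/sh
      SRC="/Volumes/Work/"
      HOST="backup@offsite.example.com"
      BASE="/backup/work"
      TODAY=$(date +%Y-%m-%d)
      YESTERDAY=$(date -v-1d +%Y-%m-%d)      # BSD date syntax, as found on OS X
      rsync -az --delete -e ssh --link-dest="$BASE/$YESTERDAY" "$SRC" "$HOST:$BASE/$TODAY"
      # prune snapshots older than 14 days on the CentOS side
      ssh "$HOST" "find $BASE -maxdepth 1 -mindepth 1 -type d -mtime +14 -exec rm -rf {} +"
    Restores are just directory copies, which covers the easy-restore requirement without extra software; rsnapshot and rdiff-backup wrap the same idea up with configuration files and reporting if a ready-made tool with a front end is preferred.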

    Read the article

  • Being a more attractive job candidate - Certs XOR Degree

    - by Zephyr Pellerin
    I'm currently working in an IT position, where I do helpdesk stuff and predominantly security-related issues/consulting (in the loosest sense of the term), in-house and for service-contract clients (as the only/acting CCSP [I guess I should say the only person with Cisco experience] in my organization). I've professionally written kernel-mode drivers for a gaming company, among other things that I'm proud to put on a resume. I think of myself as very reasonably qualified as a system administrator, with excellent Cisco experience, among other things I think would make a good addition to almost any IT staff in need of a new employee. However, something has always tripped me up - Human Resources. Let me explain: I decided to skip the university route, and I'm immensely glad that I did. The computer science graduates that I've met and work with rarely know much of anything about computers (until they gain some 'real' experience); even when asked about theoretical computing fundamentals, they can rattle something off about Turing completeness, but rarely do they understand the mathematical underpinnings. In short, I thought that instead of going to college, I'd rather pick up some real-world experience. However, apparently, employers rarely think the same way. A quick perusal of jobs through the standard job search engines yields nothing short of a conspiracy to exclude anyone without 'A Bachelors Degree in Computer Science or Equivalent'. Interviews I've had in the past have almost always been entangled with 1. my age (which I can't really change) and 2. the lack of a degree. Employers frequently disregard the CCNA/CCSP, the experience I've gained through internships, and my extensive experience in x86 assembly and C, among so many other things I like to think are valuable to employers, in light of the fact that I don't have a piece of paper. So, AS AN EMPLOYER - is it even worth working on my CCIE? Or should I pad my resume with certifications that are easier to acquire (like CISSP, MCSE, Network+, etc.)? Or should I ditch the whole idea and head back to get a Mathematics or CS degree?

    Read the article

  • Connection refused after installing vsftp on Ubuntu 8.04 with fail2ban

    - by Patrick
    I have been using an Ubuntu 8.04 server with fail2ban for a while now (12+ months) and using ftp over SSH without any problems. I have a new user that needs to put files onto the server from an IP modem. I have installed vsftp (sudo apt-get install vsftp) and everything installed correctly. I have created an ftp user on the server following this guide. Whenever I try to connect to the server with my ftp program (filezilla) I get an immediate response of: Connection attempt failed with "ECONNREFUSED - Connection refused by server". I have looked into fail2ban and cannot find any problems. The iptables setup is:
      Chain INPUT (policy ACCEPT)
      target       prot opt source    destination
      fail2ban-ssh tcp  --  anywhere  anywhere     multiport dports ssh
      Chain FORWARD (policy ACCEPT)
      target       prot opt source    destination
      Chain OUTPUT (policy ACCEPT)
      target       prot opt source    destination
      Chain fail2ban-ssh (1 references)
      target       prot opt source    destination
      RETURN       all  --  anywhere  anywhere
    VSFTP config file (commented lines removed):
      listen=YES
      anonymous_enable=NO
      local_enable=YES
      write_enable=YES
      dirmessage_enable=YES
      xferlog_enable=YES
      connect_from_port_20=YES
      chown_uploads=YES
      chown_username=[username]
      secure_chroot_dir=/var/run/vsftpd
      pam_service_name=vsftpd
      rsa_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
      rsa_private_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
    Any ideas on what is preventing access to the server?
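    ECONNREFUSED normally means nothing is listening on port 21 at all (and the INPUT policy above is ACCEPT, so the firewall is not the obvious suspect). A few hedged checks - note that on Ubuntu the package and init script are actually named vsftpd, with the trailing d:
      sudo netstat -tlnp | grep :21        # is anything listening on the FTP port?
      sudo /etc/init.d/vsftpd restart      # (re)start the daemon and watch for errors
      tail -n 50 /var/log/syslog           # vsftpd reports startup failures here
    If the daemon refuses to start with this config, common culprits are listen=YES clashing with an inetd/xinetd ftp entry, or the secure_chroot_dir directory not existing.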

    Read the article

  • OpenVZ: Can't initialize containers after install

    - by Tonino Jankov
    I have installed OpenVZ on CentOS 6 on a dedicated server. I followed the quick installation guide on the OpenVZ wiki. After installing through yum, I don't know why, but grub.conf wasn't automatically updated to accommodate the new kernel, so I had to do it manually. I edited grub.conf, added the OpenVZ kernel and rebooted - it went fine. The server came up on the OpenVZ kernel and it worked; it started the openvz service by itself. But after I created a container, added an IP to it and attempted to start it, I couldn't. Here is the output from the shell:
      [root@cloud2 ~]# vzctl start 86
      Starting container ...
      Container is mounted
      Container start failed (try to check kernel messages, e.g. "dmesg | tail")
      Container is unmounted
      [root@cloud2 ~]# dmesg | tail
      [ 1973.401596] CT: 86: failed to start with err=-105
      [ 2107.113850] Failed to initialize the ICMP6 control socket (err -105).
      [ 2107.155523] CT: 86: stopped
      [ 2107.155543] CT: 86: failed to start with err=-105
      [ 6348.282184] Failed to initialize the ICMP6 control socket (err -105).
      [ 6348.330348] CT: 86: stopped
      [ 6348.330361] CT: 86: failed to start with err=-105
      [45184.024002] Failed to initialize the ICMP6 control socket (err -105).
      [45184.072086] CT: 86: stopped
      [45184.072099] CT: 86: failed to start with err=-105
      [root@cloud2 ~]#
    I don't know what is wrong. I tried different templates - Debian 6, CentOS 6, i386, amd64 - but the issue is the same. What is the problem?
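    The repeated "Failed to initialize the ICMP6 control socket" line points at IPv6 being unavailable on the host, which is a commonly reported cause of containers failing with err=-105. A hedged sketch of what to check on the CentOS 6 host:
      grep -ri "ipv6" /etc/modprobe.d/      # look for "options ipv6 disable=1" or a blacklist line
      lsmod | grep ipv6                     # is the ipv6 module loaded at all?
      modprobe ipv6                         # load it for this boot, then try again
      vzctl start 86
    If IPv6 genuinely has to stay off on the host, the alternative is usually to disable IPv6 for containers in the vz configuration instead of at the kernel level.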

    Read the article

  • My computer hangs for a few minutes just after startup, and then is fine

    - by EvilChookie
    So I just built myself a reasonably beefy computer, and I installed Windows 7 on it. However, I start the machine up each morning and within a few minutes, the computer will semi-hang. That is, the mouse is responsive, and most of the time I can open Task Manager or a new tab in Chrome. Occasionally windows will be labelled as 'Not responding'. Then the machine will get over its problem, and will be nice and quick until I turn it off. Here are my specs:
      CPU: AMD Phenom-II X4 955 Black (Quad Core, 3.2GHz)
      RAM: 4GB of DDR3 1300
      MOBO: ASUS M4A785T-M (Latest BIOS)
      HARD DRIVES: 2x1TB Western Digital Caviar Blacks in RAID-0
      OS: Windows 7 Ultimate x64
      GPU: ASUS GT240 1GB
    I believe this issue relates to the RAID array, as I didn't have the lockup problem before I created the array. I purchased a second drive and reformatted after creating the RAID array, since the single drive was a little on the pokey side (compared to the rest of the computer). What I have tried:
      Updated RAID drivers
      Malware checks
      Windows Updates
      Unnecessary services
      CPU and disk activity appears to be low (via Resource Monitor)
      No strange errors in the error log
    Any thoughts?

    Read the article

  • Cannot install git-core using macports

    - by robUK
    Hello. Snow Leopard 10.6.4, MacPorts 1.9.1. I have just installed MacPorts and I want to install git-core. However, I get the following errors:
      --->  Computing dependencies for git-core
      --->  Dependencies to be installed: python26 db46 gdbm readline sqlite3 rsync popt
      --->  Building db46
      Error: Target org.macports.build returned: shell command failed
      Log for db46 is at: /opt/local/var/macports/logs/_opt_local_var_macports_sources_rsync.macports.org_release_ports_databases_db46/main.log
      Error: The following dependencies failed to build: python26 db46 gdbm readline sqlite3 rsync popt
      Error: Status 1 encountered during processing.
      To report a bug, see <http://guide.macports.org/#project.tickets>
    I have tried doing a port selfupdate and a port clean all and then trying to install again, but I still get the same problem. I have also tried installing the dependency db46 on its own. Here is the log message:
      :error:build Target org.macports.build returned: shell command failed
      :debug:build Backtrace: shell command failed while executing "command_exec build" (procedure "portbuild::build_main" line 8) invoked from within "$procedure $targetname"
      :info:build Warning: the following items did not execute (for db46): org.macports.activate org.macports.build org.macports.destroot org.macports.install
    This is my first time using MacPorts. Many thanks for any suggestions.
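    The interesting part is almost always in that db46 main.log rather than in the summary output. A hedged set of next steps on Snow Leopard - a frequently reported cause of early db46/python26 build failures is Xcode (with its command-line tools) being missing or older than the MacPorts release expects:
      xcodebuild -version                    # confirm Xcode is installed and reasonably current
      sudo port selfupdate
      sudo port clean --all db46
      sudo port -d install db46              # -d (debug) prints the actual compiler error inline
      less /opt/local/var/macports/logs/_opt_local_var_macports_sources_rsync.macports.org_release_ports_databases_db46/main.log
    Once db46 builds by hand, a plain sudo port install git-core should pick up from there.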

    Read the article

  • Ubuntu Postfix email account with forward

    - by Mika
    I have an Ubuntu 12.04 server with Postfix installed. For the Postfix installation I used this guide: https://help.ubuntu.com/community/Postfix. I didn't go through all of that, just the sudo dpkg-reconfigure postfix part. I have created user accounts on my server, and the users' home directories contain a .forward file which has only one row: the email address to forward to. I have defined DNS A records for the names www.mydomain.com and mydomain.com. But if I send an email to [email protected] it doesn't get forwarded. Actually, I can't see any sign of any email ever visiting my server. My firewall is defined to allow incoming traffic for ports 80, 443 and 22. For outgoing traffic it allows ports 587 and 22. The exact definitions are below. Should I also allow outgoing HTTP (port 80)? Or maybe port 25?
      # Allow ssh in
      iptables -A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
      iptables -A OUTPUT -o eth0 -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT
      # Allow incoming HTTP
      iptables -A INPUT -i eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
      iptables -A OUTPUT -o eth0 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT
      # Allow incoming HTTPS
      iptables -A INPUT -i eth0 -p tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT
      iptables -A OUTPUT -o eth0 -p tcp --sport 443 -m state --state ESTABLISHED -j ACCEPT
      # Allow outgoing SSH
      iptables -A OUTPUT -o eth0 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
      iptables -A INPUT -i eth0 -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT
      # Allow outgoing emails
      iptables -A OUTPUT -o eth0 -p tcp --dport 587 -m state --state NEW,ESTABLISHED -j ACCEPT
      iptables -A INPUT -i eth0 -p tcp --sport 587 -m state --state ESTABLISHED -j ACCEPT
    Edits: I found lines in my syslog telling me that there was incoming traffic for port 25 which was blocked. The sender IPs for those packets were trustworthy, so I also opened port 25. Now I can see some Postfix logging in my syslog. It looks like it is at least trying to forward emails. I haven't yet received any forwarded emails in my Gmail mailbox.
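    For the record, the port-25 rules that make forwarding possible look like the existing ones (this is a sketch in the same style, not a claim about what was actually added): incoming 25 so other mail servers can deliver to this host, and outgoing 25 so Postfix can pass the forwarded copy on to Gmail's servers:
      # Allow incoming SMTP (mail delivered to this server)
      iptables -A INPUT -i eth0 -p tcp --dport 25 -m state --state NEW,ESTABLISHED -j ACCEPT
      iptables -A OUTPUT -o eth0 -p tcp --sport 25 -m state --state ESTABLISHED -j ACCEPT
      # Allow outgoing SMTP (forwarded mail leaving this server)
      iptables -A OUTPUT -o eth0 -p tcp --dport 25 -m state --state NEW,ESTABLISHED -j ACCEPT
      iptables -A INPUT -i eth0 -p tcp --sport 25 -m state --state ESTABLISHED -j ACCEPT
    If mail still doesn't arrive after that, /var/log/mail.log will show whether Gmail is rejecting the forwarded messages; forwarded mail from a fresh IP without SPF or reverse DNS is a frequent reason.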

    Read the article

  • Write Fedora.iso to USB and boot it from a MacBook

    - by MTilsted
    I have an .iso image of the full Fedora 16 install (downloaded from http://fedoraproject.org/en/get-fedora-options#formats as "Fedora 16 DVD") and the question now is: how do I write it to a USB stick, so I can install it on my MacBook? I tried using dd as the install guide said, and that gave me a USB stick which can boot on my PC, but it can't boot on the Mac (the Mac start menu doesn't show it as a boot option). Edit: I downloaded a live install image and did this (sdd is my 4GB USB stick):
      /sbin/mkdosfs -F 32 -n usbdisk /dev/dev/sdd1
      sudo livecd-iso-to-disk --format --reset-mbr --efi /tmp/download/Fedora-16-i686-Live-KDE.iso /dev/sdd1
    And this produced an image which can boot on my PC but not on my Mac. This seems to indicate that the --efi is not working, because if it really were EFI it would not boot on a normal PC, would it? I then tried this (the difference being that I write the image directly to /dev/sdd instead of /dev/sdd1), but it still will not boot on the Mac (it never shows up on the Mac's startup screen):
      sudo livecd-iso-to-disk --format --reset-mbr --efi /tmp/download/Fedora-
    PS: My host Linux is Fedora 13.

    Read the article

  • Computer hangs at boot screen with new RAID card

    - by shanethehat
    I am trying to build a new server around a Biostar TH61 motherboard and an Adaptec 6405E RAID controller card. The machine booted fine from USB before the RAID card and drives were installed. After installing, on the first boot the card was detected, but then it started to spit out the following message every 10 seconds:
      Error: Controller Kernel Stopped Running << Press any key to continue ...
    Following the troubleshooting guide, I unplugged everything, reseated the card, and reattached all the drives. This time the machine is sitting on the boot screen without any error messages and a flashing cursor, but after 15 minutes of this, nothing seems to be happening. Given that there are no error messages, I'm hesitant to reboot again. Is it normal for a RAID card to sit without a status message when it first boots, maybe to initialise the drives or something? The current screen output looks a bit like this:
      Controller #00 found at PCI Slot:01, Bus:01, Dev:00, Func:00
      Controller Model: Adaptec 6405E
      Firmware Version: 5.2-0[18512]
      Memory Size: 128MB
      Serial number: 111111111111111
      SAS WWN: 50000D1104AE9180
      _
    Update: So after waiting 30 minutes I've rebooted back to the Kernel Stopped Running error. Maybe it's time to update the RAID BIOS.

    Read the article

  • Can I format a Veritas cluster shared volume from Windows?

    - by spaghettidba
    We have a Microsoft Failover Cluster with dynamic disks managed by Veritas Storage Foundation. Today the sysadmins added a new disk for SQL Server, but the cluster size on the volume was wrong, so I issued a quick format to change it. The disk volume failed, the SQL Server group failed as well, and the cluster became unresponsive. After some minutes I managed to fail over to a passive node. The SAN admins say it's my fault because I shouldn't have formatted the disk from the Windows format applet; I should have used Veritas Enterprise Administrator instead. Can a format operation bring a whole cluster group offline this way? Relevant error messages - from the event log:
      The cluster resource host subsystem (RHS) stopped unexpectedly. An attempt will be made to restart it. This is usually due to a problem in a resource DLL. Please determine which resource DLL is causing the issue and report the problem to the resource vendor.
    From the cluster.log:
      ERR [RCM] rcm::RcmResControl::DoResourceControl: ERROR_RESOURCE_CALL_TIMED_OUT(5910)' because of 'Control(STORAGE_GET_DISK_INFO_EX) to resource 'NameOfTheDiskGroup' timed out.'
    Veritas documentation - excerpt from Symantec's documentation:
      Note: Before manually creating the resource, you must format the cluster-shared volume with NTFS using the VEA GUI and mount it on the node where you are trying to create the resource.
    Does this mean the disk cannot be formatted from Windows? I don't read it that way. For the record, I formatted many disks using the Windows applet in the past and nothing bad happened.

    Read the article

  • SBS 2003 boot stalls at acpitabl.dat

    - by John
    I have an SBS 2003 server that has been running for 3 years without any problems, and a few days ago it started freezing during boot. The system is using two 500 GB drives in RAID1 (Intel Matrix 7.5). After trying to load in safe mode, the boot stops on acpitabl.dat. My first idea was that there is a problem with the RAID, although the disk status was OK and the RAID status was Rebuild. I tried to boot with each drive on its own; one gives me the same problem, and the other drive fails to load at all. I took both drives out and checked them on a different machine: one drive is dead, the other is without any problems. I returned the good drive to the SBS 2003 box, where the RAID status changed to Degraded, but the problem is still the same. I also have a clean SBS 2003 copy installed on this drive (a previous installation), which loads smoothly and quickly. So I believe the main problem is this installed version of SBS 2003. I did not make any hardware changes and did not make any updates (not sure about any automatic Windows updates lately). Since there are tons of posts about this problem, and no clear solution, I am trying to figure out how to repair the SBS 2003 installation, since there are some installed programs on this installation which I cannot re-install without additional issues.

    Read the article
