Search Results

Search found 19279 results on 772 pages for 'everything'.

  • Hylafax: Encountering "No font metric information" when trying to send a fax

    - by Chau Chee Yang
    I am using HylaFAX 6.0.5 on Fedora 13 x86_64. As there are no rpm packages available for Fedora 13, I installed HylaFAX myself from the source tarball. Everything seemed fine during compile and install. When I try to send a fax with sendfax, I get this error:

        # sendfax -n -d <fax-number> /etc/passwd
        /usr/local/sbin/textfmt: No font metric information found for "Courier-Bold".
        Usage: /usr/local/sbin/textfmt [-1] [-2] [-B] [-c] [-D] [-f fontname] [-F fontdir(s)] [-m N] [-o #] [-p #] [-r] [-U] [-Ml=#,r=#,t=#,b=#] [-V #] files... >out.ps
        Default options: -f Courier -1 -p 11bp -o 0
        Error converting document; command was "/usr/local/sbin/textfmt -B -f Courier-Bold -Ml=0.4in -p 11 -s default >'/tmp//sndfaxp5GdJ9' <'/etc/passwd'"

    It seems there is a problem with the fonts, although I have ghostscript-fonts installed. I can't find hyla.conf in /etc/hylafax; there is no /etc/hylafax path in my file system at all. All configuration files seem to be located in /var/spool/hylafax/etc. Please advise. Thank you.
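    One thing worth trying, based only on the usage text quoted above: textfmt accepts a -F option naming the directory that holds the font metric (.afm) files, so you can test whether the metrics simply aren't being found where textfmt expects them. The directory below is an assumption (a common location for the ghostscript-fonts AFM files), so adjust it to wherever the .afm files actually live on your system:

        # find the AFM files installed by ghostscript-fonts (path is a guess)
        ls /usr/share/fonts/default/ghostscript/*.afm

        # re-run the failing conversion, pointing textfmt at that directory explicitly
        /usr/local/sbin/textfmt -B -f Courier-Bold -F /usr/share/fonts/default/ghostscript \
            -Ml=0.4in -p 11 -s default >/tmp/test.ps </etc/passwd

    If that works, the same font directory can be made the default when building or configuring HylaFAX instead of being passed on every call.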

  • likewise-open and samba as pdc

    - by Knight Samar
    Hi, we have successfully implemented a Samba Primary Domain Controller for a hybrid Windows-Linux environment, and now I am setting up dual-boot clients with Windows XP and Ubuntu 9.10. Windows XP can easily be added to the Samba domain; everything is manageable, no worries. But when I try using likewise-open 4.1 to add Ubuntu 9.10 to the Samba domain, it cannot locate the domain controller:

        domainjoin-cli --loglevel verbose join MYDOMAIN root
        Error: Unable to resolve DC name [code 0x00080026]
        Resolving 'MYDOMAIN' failed. Check that the domain name is correctly entered.
        Also check that your DNS server is reachable, and that your system is configured to use DNS in nsswitch.

    I even tried mydomain.com variations, but to no avail. What am I missing? I read a document on MSDN which says that the domain controller creates some SRV records in the DNS server. I guess I don't have them in my BIND zone. Do you think that is the problem? If so, can anyone point out which SRV records need to be added, and how? Thanks.

  • How do I make a Data Validation drop-down exclude blanks?

    - by Iszi
    Related: How can I use non-adjacent cells on another sheet for a Data Validation drop-down, and only show non-blank values?

    For now, I've worked around the above problem by re-arranging my sheet so all the Data Validation Source cells are in one range. I'm leaving the above question open though, because I think it still poses an interesting problem. However, the issue now is that the Data Validation drop-down isn't working in the way I expected it to (and how I believe others are telling me it should). Even though I've got everything into one named range, Excel still shows blanks in a drop-down that references that range.

    Setup, Sheet1:

        A1= (blank)   B1= Header
        A2= 1         B2= Value1
        A3= 2         B3= Value2
        A4= 3         B4= Value3
        A5= 4         B5= (empty)
        A6= 5         B6= (empty)
        A7= 6         B7= (empty)

    Sheet1!B2:B7 is named Validation. Sheet2!A1 is set to use Data Validation with a Source of =Validation and an in-cell drop-down. The drop-down in Sheet2!A1 shows:

        Value1
        Value2
        Value3
        .
        .
        .

    (Dots represent blank lines.) How can I get rid of these blank lines in the in-cell drop-down, while still including Sheet1!B5:B7 in the Data Validation Source?

    Note: I nuked the sheet and tried it again without column A from Sheet1 (putting the values from column B in the above example into column A), and it worked fine. Adding column A back, though, brought the blanks back into the Data Validation drop-down. What do I need to do to keep column A as I want it and keep the in-cell drop-down clean?
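    A common workaround for trailing blanks in a validation list (not from the original question, so treat it as a sketch) is to define the named range with a formula that only spans the non-blank cells instead of a fixed range. In Name Manager, the name Validation could refer to:

        =OFFSET(Sheet1!$B$2,0,0,COUNTA(Sheet1!$B$2:$B$7),1)

    COUNTA counts the non-empty cells in B2:B7, so the range grows and shrinks with the data; the assumption is that the values are contiguous from B2 downward, with blanks only at the bottom.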

  • How to generate customized sudoers files in puppet depending on the environment they're deployed to?

    - by gozu
    The sysadmins are present in the sudoers files of all environments, but other sudoers are not. Different environments all have slightly different sudoers: most of the time 90% of users are the same and 10% vary, so we cannot have only one sudoers file for everything. Right now we are using Puppet with 10 different files, with names like sudoers.production1, sudoers.production2, sudoers.production3, sudoers.testing1, sudoers.staging1 and so forth. Puppet then picks the file to deploy based on the server's $domain (e.g. dbserver.staging1.acme.com) or $hardwaremodel. It works fine, but it's a nightmare to maintain so many files. I'd like to autogenerate the sudoers files based on the server's domain and have only one big file with all the sudoers permissions for all users and all environments. Something that looks like:

        User_Alias ADMINS = abe, bob, carol, dave
        case $domain {
          "staging1.acme.com" { #add dev1,dev2,tester1,tester2 to sudoers file }
          "testing2.acme.com" { #add tester1, tester3, tester4 to sudoers file }
        }

    What's the best way to go about this? Suggestions for alternatives are welcome; I'd appreciate any tips.

    Update 1: For security reasons, we'd rather not concatenate a bunch of files from a folder located on a Puppet client, in case someone puts a file in there (maliciously or not) and either breaks the combined file or inserts something into it. Most importantly, for usability, we'd like to keep the number of sudoers-related files (fragment or complete) on the Puppet server to either 3 (prod/stage/test) or preferably 1 file. This file would (somehow) generate the sudoers files on the Puppet server and send one customized file to each Puppet client. The purpose is that searching for a username and removing it is quicker in a single file than across 11 files. When adding a user to a bunch of environments it won't be as quick, but only one file needs to be opened and looked at, greatly reducing the chances of an omission. Our sudo version is 1.6.9p8, so we can't use a sudoers.d folder, only a single sudoers file.
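    A minimal sketch of one way this is commonly done, using a single ERB template rendered per node (the class, user, and path names here are assumptions, not taken from the original setup):

        # modules/sudo/templates/sudoers.erb
        User_Alias ADMINS = abe, bob, carol, dave
        ADMINS    ALL = (ALL) ALL
        <% if domain == "staging1.acme.com" -%>
        dev1, dev2, tester1, tester2    ALL = (ALL) ALL
        <% elsif domain == "testing2.acme.com" -%>
        tester1, tester3, tester4       ALL = (ALL) ALL
        <% end -%>

        # in the manifest
        file { "/etc/sudoers":
          owner   => root,
          group   => root,
          mode    => "0440",
          content => template("sudo/sudoers.erb"),
        }

    The template lives only on the puppetmaster, each client receives just its own rendered file, and a username can be searched for or removed in one place. Whether a bare top-level variable like domain is visible inside templates depends on the Puppet version (newer releases want @domain or a scope lookup), so treat this as a sketch rather than drop-in code.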

  • Windows 7 alt-tab window disappears to back when aero peek is enabled

    - by nhinkle
    There is a strange bug with Aero Peek and Alt-Tab where the Alt-Tab window itself is sent all of the way to the back, and is obscured by whatever window may be being previewed in front of it. While it still works for switching tabs, it's extremely annoying to not be able to see what other windows are there. One solution I've found is to just disable Aero Peek, but I like the Aero Peek feature when it works, and want it enabled. Some users of Lenovo products have found that uninstalling the "Thinkvantage communications" VoIP suite fixes it for them, but my laptop is an HP, not a Lenovo. The only VoIP software I have installed is Skype, which I removed to see if it had something to do with VoIP, but that didn't have any effect. I had seen this issue before on my old laptop, but it usually went away after a while, and always after a reboot. On my new computer (HP dm4t) it always occurs, and is driving me nuts. If anybody can actually pinpoint what the problem is and more importantly how to fix it, I will be extremely thankful. Window being previewed is in front of the Alt-Tab window Update: The issue seems to have randomly resolved itself, at least for now. I have no idea what changed, since I haven't made any modifications to the system between when it was broken and when it started working. Attached is another screenshot of it working properly, with the alt-tab window in front of everything else. I'd still like to know what causes this if anybody can determine it. Update 2: And now it's broken again...

  • Endian Destination NAT

    - by Ben Swinburne
    I have installed Endian Community Firewall 2.3 and am clearly misunderstanding or doing something wrong with it. I'm trying to create some destination NAT rules to allow incoming connections to various services within the network.

        Router - RED I/F   - x.x.x.x
        Router - GREEN I/F - 192.168.11.253
        ECF    - RED I/F   - 192.168.11.254/24
        ECF    - GREEN I/F - 192.168.12.254/24
        Target server      - 192.168.12.1

    Please ignore the haphazard choice of subnets and addresses; I'm trying to quickly plop Endian into an existing network ahead of a complete rework in 6-12 months. Everything works except destination NAT: outgoing connections are fine, the routes between the two subnets are OK, etc. I want to create various incoming NATs, but let's take, for the sake of argument, SMTP port 25 from the Internet to the target server 192.168.12.1. I've tried almost every combination of options in the Destination NAT section to achieve this and am clearly doing something wrong. I suspect my confusion is somewhere in the Access From and/or Target section; the rest seems OK:

        Filter Policy     = Allow
        Service           = SMTP
        Protocol          = TCP
        Port              = 25
        Translate to type = IP
        DNAT Policy       = NAT
        Insert IP         = 192.168.12.1
        Port Range        = 25
        Enabled           = Checked
        Position          = First

    I can't work out what I'm doing wrong, or am I doing it right and it's just not working!? Any help would be greatly appreciated.

  • How to get the permissions right for /dev/raw1394

    - by Mark0978
    I recently upgraded one of my Ubuntu machines to Karmic and I'm having trouble getting the permissions of /dev/raw1394 set to 0666. The only thing this machine is used for is recording audio from a FirePod, which uses /dev/raw1394 via jackd, and there are no other FireWire devices connected, so security around this device is not really an issue. If I run as root, everything works as expected, but some of the folks who run the recorder shouldn't have root access. However, I can't figure out which lines set up the permissions. I've tried this:

        /etc/udev/permissions.d/raw1394.rules:raw1394:root:root:0666

    And I have this setup (default install):

        /lib/udev/rules.d/75-persistent-net-generator.rules:SUBSYSTEMS=="ieee1394", ENV{COMMENT}="Firewire device $attr{host_id})"
        /lib/udev/rules.d/75-cd-aliases-generator.rules:# the "path" of usb/ieee1394 devices changes frequently, use "id"
        /lib/udev/rules.d/75-cd-aliases-generator.rules:ACTION=="add", SUBSYSTEM=="block", SUBSYSTEMS=="usb|ieee1394", ENV{ID_CDROM}=="?*", ENV{GENERATED}!="?*", \
        /lib/udev/rules.d/60-persistent-storage-tape.rules:KERNEL=="st*[0-9]|nst*[0-9]", ATTRS{ieee1394_id}=="?*", ENV{ID_SERIAL}="$attr{ieee1394_id}", ENV{ID_BUS}="ieee1394"
        /lib/udev/rules.d/50-udev-default.rules:# FireWire (deprecated dv1394 and video1394 drivers)
        /lib/udev/rules.d/50-udev-default.rules:KERNEL=="dv1394-[0-9]*", NAME="dv1394/%n", GROUP="video"
        /lib/udev/rules.d/50-udev-default.rules:KERNEL=="video1394-[0-9]*", NAME="video1394/%n", GROUP="video"
        /lib/udev/rules.d/60-persistent-storage.rules:KERNEL=="sd*[!0-9]|sr*", ATTRS{ieee1394_id}=="?*", SYMLINK+="disk/by-id/ieee1394-$attr{ieee1394_id}"
        /lib/udev/rules.d/60-persistent-storage.rules:KERNEL=="sd*[0-9]", ATTRS{ieee1394_id}=="?*", SYMLINK+="disk/by-id/ieee1394-$attr{ieee1394_id}-part%n"

    And I find these lines in /var/log/syslog:

        Apr 30 09:11:30 record kernel: [ 3.284010] ieee1394: Node added: ID:BUS[0-00:1023] GUID[000a9200c7062266]
        Apr 30 09:11:30 record kernel: [ 3.284195] ieee1394: Host added: ID:BUS[0-01:1023] GUID[00d0035600a97b9f]
        Apr 30 09:11:30 record kernel: [ 18.372791] ieee1394: raw1394: /dev/raw1394 device initialized

    What I can't figure out is which line actually creates that raw1394 device in the first place. How do you get /dev/raw1394 to have permissions 0666?
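    A minimal sketch of the usual approach on udev of that era (the filename and group are only examples): as far as I know the permissions.d mechanism is no longer consulted, so the mode has to come from a rules file that matches the raw1394 node when the module creates it:

        # /etc/udev/rules.d/99-raw1394.rules   (filename is an example)
        KERNEL=="raw1394", MODE="0666"

        # or, slightly tighter: keep root ownership but open it to one group
        # KERNEL=="raw1394", GROUP="audio", MODE="0660"

    After adding the rule, reload udev (udevadm control --reload-rules) and reload the raw1394 module or replug/retrigger so the device node is recreated with the new mode.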

  • Amazon AWS EC2 + Puppet, get Puppet to know AWS instance tags

    - by Piotr Jasiulewicz
    I am having a problem with my AWS deployment; I'm fairly new to AWS and Puppet. So, coming to my question: can you distinguish Puppet nodes by AWS machine tags or CNAME domains? A little background about the plan:

        - have multiple clusters of machines: one php cluster, one legacy php cluster, one java cluster, one perl cluster
        - control configuration with Puppet - still pretty new to Puppet, but as a developer I like the idea of being able to version control the configuration of servers
        - have autoscaling enabled on those clusters - obviously the main benefit of the cloud, and what makes its rather high cost worth it once you need any reasonable performance (those Amazon machines are slower than my phone...)
        - deployment controlled by Capistrano, which makes things a lot easier

    In AWS you get those super nasty public/private machine DNS names... there is no way you can identify machines by those. To ease the problem, it seems AWS wants you to tag everything, so I did. I found a script that creates a CNAME record for each machine from its "ShortName" tag, thanks to the Route53 API. Every machine has a ShortName tag that becomes its CNAME; unfortunately Puppet still resolves the private DNS name. I'd like to have node 'perl-cluster'{} in Puppet. Anyone any clue how to achieve this? Thanks
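    One pattern sometimes used here (a sketch, with assumed fact and class names, not something from the original setup) is to stop relying on node names entirely and branch on a custom fact instead. If each machine can expose its ShortName tag as a fact, for example a $cluster fact populated at boot or by a small Facter plugin, the manifest can select the role from it:

        # site.pp -- sketch; $cluster is an assumed custom fact holding the ShortName tag
        node default {
          case $cluster {
            "perl-cluster": { include role::perl }
            "php-cluster":  { include role::php }
            "java-cluster": { include role::java }
            default:        { include role::base }
          }
        }

    This sidesteps the private-DNS problem because every autoscaled instance matches node default and the fact decides what it becomes. The alternative is to set the agent's certname to the ShortName so that node 'perl-cluster' matches directly, but that requires controlling certname at instance launch.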

  • Macbook Pro 2010 Ethernet Jumbo Frame(9k MTU) Support?

    - by Troggy
    Has anyone been able to use jumbo frame support on their 2010 Macbook Pros? This is kind of negative news here, but I am seeing many reports that this is not available anymore due to Apple's choice of network card in the new machines. I cannot set my MTU speed over 1500 on my new 2010 MBP i7, but my old early 2008 MBP (Core2) has the 9000 MTU setting for use. Everything I have is setup to use jumbo frames and I thought apple kept that feature in their "pro" lineup. It sounds like the Mac Pro still has it. Did they decide to use a chipset that doesn't support it? I am trying to pinpoint some solid chipset numbers and the feature support. Maybe they just need to update the drivers? Is there some more official information about this feature? This might seem minor, but this is really frustrating if apple removed this feature from their pro laptop line. From what I have read so far, it sounds like I am not alone in my frustrations with this. http://discussions.info.apple.com/message.jspa?messageID=12258067 http://discussions.apple.com/thread.jspa?messageID=12130158 Anyone have any experience or further knowledge about this issue ... beyond typing my question into google and giving the top 5 results as answers? :)

  • Solr error; JNDI not configured for solr; Anybody know what this means?

    - by Camran
    I am installing Solr on my VPS (Ubuntu 9.10) via PuTTY. First I thought about installing Solr with Tomcat, but after installing Tomcat I changed my mind and went for the Jetty that comes with Solr. Now that I have set up everything on my server and try to start the "start.jar" file, I get some errors... Here is some text from the log file:

        2010-05-29 00:22:42.074::INFO: jetty-6.1.3
        2010-05-29 00:22:42.134::INFO: Extract jar:file:/var/www/webapps/solr.war!/ to /var/www/work/Jetty_0_0_0_0_8983_solr.war__solr__k1kf17/webapp
        May 29, 2010 12:22:42 AM org.apache.solr.core.SolrResourceLoader locateSolrHome
        INFO: JNDI not configured for solr (NoInitialContextEx)
        May 29, 2010 12:22:42 AM org.apache.solr.core.SolrResourceLoader locateSolrHome
        INFO: solr home defaulted to 'solr/' (could not find system property or JNDI)
        May 29, 2010 12:22:42 AM org.apache.solr.core.SolrResourceLoader <init>
        INFO: Solr home set to 'solr/'
        May 29, 2010 12:22:42 AM org.apache.solr.servlet.SolrDispatchFilter init
        INFO: SolrDispatchFilter.init()
        May 29, 2010 12:22:42 AM org.apache.solr.core.SolrResourceLoader locateSolrHome
        INFO: JNDI not configured for solr (NoInitialContextEx)
        May 29, 2010 12:22:42 AM org.apache.solr.core.SolrResourceLoader locateSolrHome
        INFO: solr home defaulted to 'solr/' (could not find system property or JNDI)
        May 29, 2010 12:22:42 AM org.apache.solr.core.CoreContainer$Initializer initialize
        INFO: looking for solr.xml: /var/www/solr/solr.xml
        May 29, 2010 12:22:42 AM org.apache.solr.core.SolrResourceLoader <init>
        INFO: Solr home set to 'solr/'

    Does anybody know what this is? Thanks
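    For what it's worth, those JNDI lines are informational rather than fatal: as the log itself says, Solr looks for its home directory first in JNDI, then in the solr.solr.home system property, and otherwise falls back to solr/ relative to the working directory. A quick way to test whether the home directory is the actual problem is to pass the property explicitly; the path below is taken from the "looking for solr.xml: /var/www/solr/solr.xml" line above, so adjust it if your layout differs:

        cd /var/www
        java -Dsolr.solr.home=/var/www/solr -jar start.jar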

  • Cygwin, ssh, and git on Windows Server 2008

    - by Paul
    Hi everyone. I'm trying to set up a git repository on an existing Windows 2008 (R2) server. I have successfully installed Cygwin and added git and ssh to the packages, and everything works perfectly (thanks to Mark for his article on it). I can ssh to localhost on the server, and I can do git operations locally on the server. When I try to do either from the client, however, I get the "port 22, Bad file number" error. Detailed SSH output is limited to this:

        OpenSSH_4.6p1, OpenSSL 0.9.8e 23 Feb 2007
        debug1: Connecting to {myserver} [{myserver}] port 22.
        debug1: connect to address {myserver} port 22: Attempt to connect timed out without establishing a connection
        ssh: connect to host {myserver} port 22: Bad file number

    Google tells me that this usually means I'm being blocked by a firewall. So I double-checked the firewall settings on the server; the rule allowing port 22 traffic is there. I even tried turning off the firewall briefly, with no change in behavior. I can ssh just fine from that client to other servers. The hosting company swears there are no other firewalls blocking that server on port 22 (or any other port, they claim, but I find that hard to believe). I have another trouble ticket in with them, just in case the first support person was full of it, but meanwhile I wanted to see if anyone could think of anything else it could be. Thanks, Paul

  • 2 Server FC SAN Configuration

    - by BSte
    I have 2 identical servers:

        - 48GB RAM
        - 8 GigE NICs
        - 2 FC NICs
        - 2x72GB RAID1 hard drives
        - Server 2008 R2 host

    I also have a Fibre Channel SAN:

        - 16x146GB RAID10 hard drives
        - 2x dual-port FC controllers (controllers A and B both have ports 1 and 2)
        - Server 1 has fibre to ports A1 and B1
        - Server 2 has fibre to ports A2 and B2
        - I kept the default config with 1 virtual disk and 1 volume
        - The default mappings show ports A1, A2, B1, B2 on LUN 0 with read-write

    My goal is:

        - 2x VMs with IIS and guest-level failover
        - 2x VMs with SQL 2008 Enterprise using a single DB and guest-level failover
        - 1x VM that is an application server, preferably with host failover. From what I read, this will also need AD for clustering to work.
        - I need at least 1 VM always running for IIS and the SQL DB. This includes hardware failover and application failover (i.e. rebooting a VM for critical updates).

    I was told I could install the VMs and run them from the SAN, and this is what I've tried:

        1. Installed MPIO and Hyper-V on Server 1 and Server 2
        2. Added the SAN as disk E: on both servers, made it GPT and formatted it NTFS
        3. Configured Hyper-V on both servers to use E:\VD and E:\VHD
        4. On Server 1, I was able to install 3 VMs on the SAN and all worked well.
        5. On Server 2, I would start installing the other 2 VMs, but always at some point the VMs would get a corrupt .VHD message (on either server).

    Everything I found about that message typically related to antivirus, so I removed all antivirus on both host servers (now only running 2008 R2). I reformatted drive E: (the SAN), recreated the VHD and VD directories, installed 3 VMs on Server 1, and then had the same issue when installing VMs on Server 2. Obviously something is wrong, but I'm not certain what exactly. My questions:

        1. Are my goals possible with this hardware setup? I've read that 2008 R2 supports FC SANs, but a lot of articles seem to only give examples with iSCSI setups.
        2. What would be the suggested route on setting up the SAN (disks, volumes, LUNs)?

    I've worked with Hyper-V on a single machine before and never had issues. Actual experience working with SANs and clustering is new to me. Any suggestions or recommendations to get me in the right direction would be much appreciated.

  • Word documents very slow to open over network, but fine when opened locally - on one machine

    - by Craig H
    Windows XP, Word 2003, patched. The issue is happening with several Word documents stored on a network drive. The Word documents are clearly a bit wonky (e.g. one is 675k, but if you copy everything except the last paragraph marker into a new document, the new document is only 30k). But that's only part of the problem. On one weird machine, and one machine only, it takes ~20 seconds to open these Word documents from the network drive.

        - Copy the file to C: on that weird machine? Opens immediately.
        - Go to other machines (that are very similar: same patch level, etc.) and open the same document from the network? Opens immediately.
        - Delete normal.dot? 20 seconds.
        - Log in with a different user on the weird machine? 20 seconds.
        - Plug the wonky machine into a different network port? 20 seconds.

    So the problem appears to be hardware related (i.e. a wonky internal NIC) or related to a setting that is not profile specific. Any ideas? "Scrubbing" all the documents isn't ideal for several reasons. This is driving me nuts because I swear I ran into this many years ago and eventually figured it out, but I appear to have lost my notes.

  • Viewing a large-resolution VNC server through a small-resolution viewer in Ubuntu

    - by Madiyaan Damha
    I have two Ubuntu computers, one with a large screen resolution (1920x1600) that is running the default Ubuntu VNC server. I have another computer, with a resolution of about 1200x1024, that I use to VNC into the server (I use the default Ubuntu VNC viewer). Everything works fine, except that there are annoying scrollbars in the viewer because the server's desktop resolution is so much higher than the viewer's. Is there a way to:

        1) Scale the server's desktop down to the viewer's resolution? I know there will be a loss of image quality, but I am willing to try it out. This should be something like how Windows Media Player or VLC scales down the window (and does some interpolation of pixels).
        2) Automatically shrink the resolution of the server to the client's when I connect, and scale the resolution back when I disconnect? This seems like a less attractive solution.
        3) Any other solution that gurus out there use?

    I am sure someone has experienced this before (annoying scrollbars), so there must be a solution out there. Thanks,

  • Explorer and open file dialog not responding (Vista)

    - by rohancragg
    Any Explorer window opened for the first time on my machine displays the folder tree and the folder path in the address bar immediately, but the file/folder list pane is blank and the window shows 'Not Responding' in the title bar; this hangs for up to a minute or more. Any file dialog displays 'Not Responding' in the title bar, and the file list is eventually displayed after a few seconds or more.

    Steps to repro:

        - Close all open instances of Explorer
        - Windows Key | Run | [enter a folder path such as 'c:\temp']
        - Or, within any app, use a file open/save dialog

    Once there is at least one open instance of Explorer, performance is still fairly poor but not nearly so bad, and file lists are displayed in a timely fashion.

    What I've tried:

        - Cleaned up the registry with CCleaner, and uninstalled all other unused software
        - Checked with Autoruns that nothing unwanted is running at startup
        - Removed any ISO burner/recorder/mount software

    Still to try:

        - Get the latest version of everything, especially stuff with shell extension behaviour such as TortoiseSVN

    Anyone have any other suggestions? Thanks a lot.

    Update: I'm wondering if this is related; I'll try the hotfix when I get home and report back: KB972685 - FIX: Explorer.exe hangs when using a shell extension written using MFC.

    Update 2: Before I got a chance to try the hotfix, it seems one of the above actions fixed this for me; either the removal of IsoRecorder or TortoiseHg (which I was no longer using anyway).

    Update 3: A similar issue with Explorer.exe has come back since installing TortoiseHg 1.01 :-(

  • Exim rejects recipient address on my domain

    - by Nicolas
    Hi, I have a dedicated server (Debian) on which I have installed Exim and Dovecot. Everything worked fine until around a month ago. I have tried to reinstall and reconfigure Exim, but all incoming emails keep getting rejected.

    Outlook says:

        A message that you sent could not be delivered to one or more of its recipients. This is a permanent error. The following address(es) failed:
        [email protected]
        SMTP error from remote mail server after RCPT TO:: host mail.mydomain.com [94.76.##.##]: 550 relay not permitted

    Gmail says:

        Delivery to the following recipient failed permanently: [email protected]
        Technical details of permanent failure: Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 550 550 relay not permitted (state 14).

    On the server side, my rejectlog file shows:

        2011-01-04 17:09:21 H=mail-qw0-f53.google.com [209.85.216.53] F=<####@gmail.com rejected RCPT : relay not permitted

    ... and the mainlog file:

        2011-01-04 17:00:01 1PaAEr-0007vN-DX <= root@ETC_MAILNAME U=root P=local S=869
        2011-01-04 17:00:01 1PaAEr-0007vN-DX ** root@etc_mailname: Unrouteable address
        2011-01-04 17:00:01 1PaAEr-0007vY-Kn Error while reading message with no usable sender address (R=1PaAEr-0007vN-DX): at least one malformed recipient address: root@ETC_MAILNAME - malformed address: _MAILNAME may not follow root@ETC
        2011-01-04 17:00:01 1PaAEr-0007vN-DX Process failed (1) when writing error message to root@ETC_MAILNAME (frozen)
        2011-01-04 17:09:21 no IP address found for host MAIN_RELAY_NETS (during SMTP connection from mail-qw0-f53.google.com [209.85.216.53])
        2011-01-04 17:09:21 H=mail-qw0-f53.google.com [209.85.216.53] F=<####@gmail.com rejected RCPT : relay not permitted

    Then, after the message becomes frozen:

        2011-01-04 17:28:44 1PaAEr-0007vN-DX Message is frozen

    Thank you for your help; any idea or comment is welcome, as I am really running out of ideas to fix this issue. Nicolas. Oh, and the PHP mail() function does not do anything either; could that be related? I think mail() uses sendmail from my php.ini.
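    The "relay not permitted" wording suggests Exim does not consider mydomain.com one of its local domains, so it treats incoming mail for it as an attempt to relay. If this is the standard Debian exim4 packaging, the quick thing to check is the domain list in /etc/exim4/update-exim4.conf.conf; the values below are a sketch against an assumed setup, not your actual file:

        # /etc/exim4/update-exim4.conf.conf (relevant lines only)
        dc_eximconfig_configtype='internet'
        dc_other_hostnames='mydomain.com'
        dc_local_interfaces=''      # empty = listen on all interfaces

        # regenerate the config and restart
        update-exim4.conf
        /etc/init.d/exim4 restart

    If Exim was instead configured by hand, the equivalent is whatever populates domainlist local_domains in the main configuration.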

  • Can't install NPM after installing Node on EC2 Linux instance?

    - by frequent
    This is my first attempt at getting a Node server set up on an Amazon EC2 Linux instance, and I think I made it quite far. The first problem I ran into was that while trying to make Node the connection timed out after a while, so I needed three attempts until I got this:

        LINK(target) /home/ec2-user/node/out/Release/node: Finished
        touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_header.stamp
        touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_provider.stamp
        touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_ustack.stamp
        touch /home/ec2-user/node/out/Release/obj.target/node_etw.stamp
        make[1]: Leaving directory `/home/ec2-user/node/out'
        ln -fs out/Release/node node

    which tells me "Node is done", although I'm not sure it is also working as it should. Following this, this and this tutorial, I'm now stuck at installing npm. I think I first cloned into the wrong folder, which always gave me error 127, but even if I do this:

        cd ~
        git clone git://github.com/isaacs/npm.git
        cd npm
        sudo -s
        PATH=/usr/local/bin:$PATH make install

    I'm still getting this:

        #after cloning#
        make[1]: Entering directory `/root/npm'
        node cli.js install
        bash: node: command not found
        make[1]: *** [node_modules/.bin/ronn] Error 127
        make[1]: Leaving directory `/root/npm'
        make: *** [man/man3/start.3] Error 2

    Question: Since I'm pretty much a newbie at everything I'm trying here, can someone please tell me what I'm doing wrong and how to get npm to install? Also, in case I cloned into the wrong folder, is there a way to remove the "false clone", or is nothing written to disk until I call make install, so I don't need to worry? Thanks for helping out!
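    For context, a sketch of what the "node: command not found" line implies: npm's build invokes a plain node, so the freshly built binary has to be on the PATH of the user running make (here root, because of sudo -s). Assuming Node was built but never installed, something along these lines would make it visible; the paths are taken from the build output above and may need adjusting:

        # install node system-wide from the source tree
        cd /home/ec2-user/node
        sudo make install              # installs into /usr/local by default

        # or, without installing, just symlink the built binary
        sudo ln -s /home/ec2-user/node/node /usr/local/bin/node

        node -v                        # should now print a version
        # then retry "make install" inside the npm checkout

    As for the half-finished clone: git clone writes the working copy to disk immediately, so removing the directory it created is enough to start over.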

  • What are the right reverse PTR, domain keys, and SPF settings for two domains running the same application?

    - by James A. Rosen
    I just read Jeff Atwood's recent post on DNS configuration for email and decided to give it a go on my application. I have a web app that runs on one server under two different IPs and domain names, on both HTTP and HTTPS for each:

        <VirtualHost *:80>
          ServerName foo.org
          ServerAlias www.foo.org
          ...
        </VirtualHost>
        <VirtualHost 1.2.3.4:443>
          ServerName foo.org
          ServerAlias www.foo.org
        </VirtualHost>

        <VirtualHost *:80>
          ServerName bar.org
          ServerAlias www.bar.org
          ...
        </VirtualHost>
        <VirtualHost 2.3.4.5:443>
          ServerName bar.org
          ServerAlias www.bar.org
        </VirtualHost>

    I'm using GMail as my SMTP server. Do I need the reverse PTR and SenderID records? If so, do I put the same ones on all of my records (foo.org, www.foo.org, bar.org, www.bar.org, ASPMX.L.GOOGLE.COM, ASPMX2.GOOGLEMAIL.COM, ...)? I'm pretty sure I want the domain-keys records, but I'm not sure which domains to attach them to. The Google mail servers? foo.org and bar.org? Everything?
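    A hedged note on the SPF part only, since that is the most mechanical piece: SPF (and SenderID) records belong on the domains that appear as the sender of your mail, not on Google's MX hostnames, and since the mail goes out through Google's SMTP servers the record normally just delegates to Google's published ranges. A sketch of the TXT records, assuming mail is sent as @foo.org and @bar.org:

        foo.org.    IN TXT "v=spf1 include:_spf.google.com ~all"
        bar.org.    IN TXT "v=spf1 include:_spf.google.com ~all"

    Reverse PTR records are a different matter: they are set by whoever owns the sending IP address, which in this case is Google, so there is nothing to configure in your own zones for that part.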

  • Is there a way to know what the Windows Disk Cleanup utility will delete?

    - by Cam Jackson
    When I run the Disk Cleanup utility that's built into Windows 8, it tells me that it can free up 53GB by deleting 'Temporary Files'. However, a CCleaner analysis on default settings only finds about 300MB worth of space to free up, so I'm wondering what Disk Cleanup has found that CCleaner does not. Note that this question appears to be similar to what I'm asking, but the accepted answer says that 'Temporary Files' refers to %TEMP%. I've already cleared out most of C:\Users\Cam\AppData\Local\Temp, and it now has only 230MB of stuff in it, even with system files showing. So where is this 53GB located? Is there a way to find out what it is? Edit: I should note that this is on a 110GB SSD, so it's almost half the drive. And in fact I'm only using 86GB, so if it's really going to clear out 53GB, that would be more than 60% of the stuff on my C drive. I'm starting to think that Disk Cleanup caches its analysis, and hasn't updated since I started cleaning up the drive earlier today. Although when I run it it says that it's 'Calculating' how much space can be saved, and it takes about 5-10 seconds to do so. Hmmm... Edit2: Here is what my hard drive looks like, according to SpaceMonger (Right click-Open image in new tab, so you can see it properly): You can see why I was starting to think that the 53GB figure is actually wrong. Even if 'Temporary Files' includes my hiberfil and everything in WinSxS (about 13GB total), that would be 26GB, which is only halfway there. Hard to see where there's 53GB of stuff to delete.

  • MAC addresses on dual-NIC mainboards

    - by Tom O'Connor
    Here's a weird problem. We've got a number of devices with dual-NIC mainboards. Some are Realtek NICs, which suck; some are Intel e1000s, which don't. I've just noticed on 2 machines (one with an Intel NIC, one with a Realtek) that when I put the MAC address of one machine into the dhcpd.conf file on our DHCP server, to get it to PXE boot the machine into a rebuild environment, initially everything is fine. The server gets a DHCP allocation and PXE boots into the Ubuntu preseed environment. On one or two machines it gets as far as Ubuntu's DHCP network configuration, and fails. If I pull up a busybox shell (on tty2 on the installing machine) and run ip link, I can see that the UP flag is set on the other NIC. Here's what's in dhcpd.conf:

        host xeon16-ghz240-gb48-node1 {
          hardware ethernet BC:AE:C5:07:1F:18;
          filename "pxelinux.0";
          next-server 192.168.123.80;
        }

    This is what ip link on the evil machine looks like. Only one NIC is actually connected (deliberately). As you can see, the NIC that's in the dhcpd config is not marked as UP, and the link that is UP isn't the one in DHCP. So far I've seen this on two brands of dual-NIC configuration. Does anyone know a) what's causing it, and b) what we can do about it?
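    For the installer-side symptom specifically, a small sketch of a workaround (the interface name is an assumption): the Debian/Ubuntu installer can be told which NIC to use for its own DHCP step via a preseed boot parameter appended to the kernel line in the PXE config, instead of letting it guess:

        # appended to the APPEND line in pxelinux.cfg/default (sketch)
        netcfg/choose_interface=eth0
        # or let the installer pick the first interface that has link:
        netcfg/choose_interface=auto

    This doesn't explain why the wrong NIC reports UP in the first place, but it keeps the preseed install on the interface that actually got the PXE lease.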

  • How do I effectively use WinSCP on my GoDaddy Dedicated Hosting

    - by Scott
    After being told that Virtual Private Servers would not fit the scope of my project, I have timidly entered the world of dedicated hosting. Unfortunately, this is forcing me to learn the basics of being a Linux server admin. GoDaddy has a master account for the server; when you use SSH, they want you to use "su" to switch to the root user. So far I have been able to do everything I needed via the command line as this root user. However, now I need to upload files to my server, and I'm used to using WinSCP for that. I can use my general server account to view the files, but when I try to drag or create files it says I cannot because I do not have permission to do so. I have researched the WinSCP documentation, and it seems that this "su" step is beyond the scope of the program. How do I grant myself access to upload these files over SSH? Should I create a user with the proper permissions? I'm happy to do this, but so far I have not been able to make sense of what I have found online. I'm going to try to move forward, but any help and/or insight is appreciated.
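    One common arrangement (a sketch only; the user name and web root below are assumptions, not details from the post) is to keep using the unprivileged account in WinSCP and simply give that account ownership of the directories it needs to upload into, so su is never required for file transfer:

        # as root, once
        adduser deploy                         # or reuse the existing non-root account
        chown -R deploy:deploy /var/www/html   # hand the web root to that user

    WinSCP then connects as deploy over SFTP and can write freely inside /var/www/html, while everything else on the box stays root-owned.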

  • Virtualbox HTTP load testing, host CPU overload issues

    - by aschuler
    I'm doing HTTP load testing benchmarks (using Apache Benchmark and Siege) on a small Java EE 1.7.0 / Tomcat 7.0.26 application running on a Debian Squeeze 6.0.4 x64 guest virtualized with VirtualBox 4.1.8. The host is Ubuntu 11.10 x64. I've modified these parameters in the Tomcat server.xml:

        <Connector port="8080" protocol="HTTP/1.1"
                   connectionTimeout="200000"
                   redirectPort="8443"
                   acceptCount="2000"
                   maxThreads="150"
                   minSpareThreads="50" />

    The application executed on the server takes around 300ms. The app runs well until a certain number of concurrent connections, like these:

        ab -n 500 -c 150 http://xx.xx.xx.xx:8080/myapp/
        ab -n 1000 -c 50 http://xx.xx.xx.xx:8080/myapp/
        siege -b -c 100 -r 20 http://xx.xx.xx.xx:8080/myapp/

    A lot of socket connection timeouts happen, and this completely overloads the host processor (while the CPU load inside the VM is normal). Doing an htop on the host, I can see that the VirtualBox process is running under 300% CPU and never comes down, even after the load test is finished. (I've allocated 4 processors to the VM; if I allocate only one processor, CPU load goes under 100%.) Restarting Tomcat doesn't do anything; I'm forced to restart the whole VM. I tried to launch those ab/siege commands locally on the VM and everything goes well. I first thought it was related to a Linux network limit, as explained here: Running some benchmarks using ab, and tomcat starts to really slow down. So I've modified these TCP parameters:

        echo 15 > /proc/sys/net/ipv4/tcp_fin_timeout
        echo 30 > /proc/sys/net/ipv4/tcp_keepalive_intvl
        echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
        echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse

    It seems to be better, but it continues to overload the host CPU and produce socket connection timeouts at a certain number of concurrent connections. I'm wondering if this is related to how VirtualBox handles external concurrent connections.

  • Unable to access Windows share

    - by mbnoimi
    I've installed Alfresco 4.2.d under Ubuntu 12.04 LTS. Everything went fine, except that I can't access it from a Windows share, although I got the link from Alfresco Explorer, which is:

        file:///%5C%5CECSA%5CAlfresco%5CSites%5Cswsdp%5CdocumentLibrary%5CAgency%20Files%5CImages%5Ccoins.JPG

    I tried to access it from \\ECSA but that failed too, so I ran a ping (192.168.0.70 is the server IP):

        C:\Users\user>ping 192.168.0.70

        Pinging 192.168.0.70 with 32 bytes of data:
        Reply from 192.168.0.70: bytes=32 time<1ms TTL=64
        Reply from 192.168.0.70: bytes=32 time<1ms TTL=64
        Reply from 192.168.0.70: bytes=32 time<1ms TTL=64
        Reply from 192.168.0.70: bytes=32 time<1ms TTL=64

        Ping statistics for 192.168.0.70:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 0ms, Maximum = 0ms, Average = 0ms

        C:\Users\user>ping ECSA
        Ping request could not find host ECSA. Please check the name and try
        C:\Users\user>

    Some logs of what's going on:

        C:\Users\user>net view ECSA
        System error 1707 has occurred.
        The network address is invalid.

        C:\Users\user>nbtstat -a 192.168.0.70

        Local Area Connection:
        Node IpAddress: [192.168.0.84] Scope Id: []

            NetBIOS Remote Machine Name Table

            Name            Type        Status
            ---------------------------------------------
            ECSA      <20>  UNIQUE      Registered
            ECSA      <00>  UNIQUE      Registered
            WORKGROUP <00>  GROUP       Registered

            MAC Address = 00-00-00-00-00-00

    CIFS server configuration in file-servers.properties:

        ### CIFS Server Configuration - file-servers.properties ###
        cifs.enabled=true
        cifs.serverName=${localname}A
        cifs.domain=
        cifs.broadcast=255.255.255.255
        cifs.bindto=192.168.0.70
        cifs.ipv6.enabled=false
        cifs.hostannounce=true
        cifs.disableNIO=false
        cifs.disableNativeCode=false
        cifs.sessionTimeout=900
        cifs.maximumVirtualCircuitsPerSession=16
        cifs.tcpipSMB.port=445
        cifs.netBIOSSMB.sessionPort=139
        cifs.netBIOSSMB.namePort=137
        cifs.netBIOSSMB.datagramPort=138
        cifs.WINS.autoDetectEnabled=true
        cifs.WINS.primary=192.168.0.70
        cifs.WINS.secondary=192.168.0.1
        cifs.sessionDebug=
        cifs.pseudoFiles.enabled=true
        cifs.pseudoFiles.explorerURL.enabled=true
        cifs.pseudoFiles.explorerURL.fileName=__Alfresco.url
        cifs.pseudoFiles.shareURL.enabled=false
        cifs.pseudoFiles.shareURL.fileName=__Share.url

    How can I fix this issue?
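    One quick isolation test, since the ping by IP succeeds while ping and net view by name fail: map the name to the IP locally on the Windows client and see whether the share then opens. The entry below is a sketch of that test, not a permanent fix:

        # C:\Windows\System32\drivers\etc\hosts   (on the Windows client)
        192.168.0.70    ECSA

    If \\ECSA works after that, the underlying problem is NetBIOS/WINS or DNS name resolution on the client side rather than the Alfresco CIFS server itself.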

  • Vmware Player 3.0 - cannot ping 32 bits guest from 64 bits (guest or host)

    - by npmj
    I'm stuck with what seems to be a bug in VMware Player (build 203739). I'm using Windows 7 Ultimate 64-bit as the host, with CentOS 5.4 (64-bit) as one guest and Windows XP Professional SP3 (32-bit) as another guest. From the 64-bit machines (the host and the Linux guest) I cannot ping the Windows XP guest. Of course, I have already turned off the Windows firewall in the guest and also in the host. The network is pretty basic: I'm using vmnet8 (NAT), with DHCP and port forwarding (to the Windows XP guest's IP). Everything is working OK; I have internet access from the host and from both guests, and port forwarding to the XP guest is working too. The only problem is that I cannot access the XP guest through vmnet8. I monitored the traffic using Wireshark (on the host and in the Windows guest). If I try to ping the XP guest from the host, I see the ARP request leaving the host and being answered by the guest, but after that no echo request leaves the host. The same occurs if I try to ping XP from the CentOS guest. From the Windows XP guest I can ping both the host and the CentOS guest, and from the XP guest I can access the host shares. Obviously, from the host I cannot see the XP shares (as I cannot even ping the guest). I want to keep this setup (using NAT to share the host's internet connection). Any suggestions?

  • pread() read only xxxx of yyyy

    - by Alix Axel
    Occasionally, nginx doesn't send any data back to the browser (ERR_EMPTY_RESPONSE in Chrome). Upon checking the server error.log, I find these weird messages:

        2013/10/20 23:57:40 [alert] 29146#0: *35 pread() read only 4653 of 4656 from "~/htdocs/index.html" while sending response to client, client: 127.0.0.1, server: localhost, request: "GET / HTTP/1.1", host: "localhost"
        2013/10/20 23:57:45 [alert] 29146#0: *36 pread() read only 4653 of 4656 from "~/htdocs/index.html" while sending response to client, client: 127.0.0.1, server: localhost, request: "GET / HTTP/1.1", host: "localhost"
        2013/10/20 23:58:18 [alert] 29146#0: *38 pread() read only 4650 of 4653 from "~/htdocs/index.html" while sending response to client, client: 127.0.0.1, server: localhost, request: "GET / HTTP/1.1", host: "localhost"
        2013/10/20 23:58:18 [alert] 29146#0: *39 pread() read only 4650 of 4653 from "~/htdocs/index.html" while sending response to client, client: 127.0.0.1, server: localhost, request: "GET / HTTP/1.1", host: "localhost"
        2013/10/20 23:58:19 [alert] 29146#0: *40 pread() read only 4650 of 4653 from "~/htdocs/index.html" while sending response to client, client: 127.0.0.1, server: localhost, request: "GET / HTTP/1.1", host: "localhost"
        2013/10/21 00:02:21 [alert] 29146#0: *41 pread() read only 4629 of 4641 from "~/htdocs/index.html" while sending response to client, client: 127.0.0.1, server: localhost, request: "GET / HTTP/1.1", host: "localhost"
        2013/10/21 00:02:21 [alert] 29146#0: *42 pread() read only 4629 of 4641 from "~/htdocs/index.html" while sending response to client, client: 127.0.0.1, server: localhost, request: "GET / HTTP/1.1", host: "localhost"
        2013/10/21 00:02:23 [alert] 29146#0: *43 pread() read only 4629 of 4641 from "~/htdocs/index.html" while sending response to client, client: 127.0.0.1, server: localhost, request: "GET / HTTP/1.1", host: "localhost"
        2013/10/21 00:02:31 [alert] 29146#0: *44 pread() read only 4629 of 4641 from "~/htdocs/index.html" while sending response to client, client: 127.0.0.1, server: localhost, request: "GET / HTTP/1.1", host: "localhost"
        2013/10/21 00:02:46 [alert] 29146#0: *45 pread() read only 4629 of 4641 from "~/htdocs/index.html" while sending response to client, client: 127.0.0.1, server: localhost, request: "GET / HTTP/1.1", host: "localhost"

    Does anyone have any idea why this happens? After a while everything is served correctly again.
