Search Results

Search found 27800 results on 1112 pages for 'state machine'.


  • Cloudera Manager agent deploy failing to receive heartbeat from agent

    - by user150341
    All, I am getting this error on the console at the last phase of the installation: "Installation failed. Failed to receive heartbeat from agent."
    Server log:
        2012-12-19 00:32:12,132 INFO [NodeConfiguratorThread-4-0:node.NodeConfiguratorProgress@503] 192.168.1.100: Setting WAIT_FOR_HEARTBEAT as failed and done state
    All nodes (the name node and two client nodes) are VMs running 64-bit CentOS. sshd is enabled on all nodes, and the VMs use bridged networking. Any clue on how to fix this error?
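
    A hedged first round of checks, run as root on each node that fails the heartbeat (the service name, config path, log path, and port below are the Cloudera Manager defaults and are assumptions if your install was customised):
        service cloudera-scm-agent status                   # is the agent running at all?
        grep -E '^(server_host|server_port)' /etc/cloudera-scm-agent/config.ini
                                                             # must point at the CM server (default port 7182)
        tail -n 50 /var/log/cloudera-scm-agent/cloudera-scm-agent.log   # look for connection or hostname errors
        hostname -f && getent hosts "$(hostname -f)"         # forward and reverse name resolution must agree
    If the agent cannot reach the server on the heartbeat port, the wizard fails at exactly the WAIT_FOR_HEARTBEAT step shown in the server log above.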

    Read the article

  • The Message Queuing service failed to join the computer's domain 'DOMAIN' Error 0xc00e0025

    - by SimonGoldstone
    Struggling to get MSMQ installed in Domain Integration mode on Windows Server 2012 (Azure). So far, I've provisioned a brand new Windows Server 2012 (R2) machine on the Azure platform, installed the Active Directory role, and promoted the machine to a domain controller. Once AD was in place, I added the MSMQ feature along with the Directory Integration add-on. However, it will not install in AD integration mode; it will only work in Workgroup mode. I can verify this by running the following PowerShell command:
        New-MsmqQueue -Name Queue1 -QueueType Public
    When I run this command, I get the following error: "New-MsmqQueue : A workgroup installation computer does not support the operation." The Event Viewer reveals two errors in the Application log:
    1. The Message Queuing service failed to join the computer's domain 'DOMAIN'. Error 0xc00e0025.
    2. Message Queuing was unable to create the msmq (MSMQ Configuration) object in Active Directory Domain Services. Error c00e0025h.
    I'm struggling here. Any advice?

    Read the article

  • Configuring default gateway returned by dhcp server

    - by comp1mp
    Hello, I have a machine which connects via Ethernet to a private LAN, and via wireless to a network which provides internet connectivity. The private LAN uses a wireless router to perform DHCP. The problem is that the wireless and wired (NIC) adapters have different default gateways. The default gateway for the private LAN has a lower adapter metric, and is thus chosen by the routing algorithm, so I am unable to browse the internet when connected to both adapters. The following link has a solution for manually setting the adapter metric to a high number: http://superuser.com/questions/77822/how-to-tell-windows-7-to-ignore-a-default-gateway
    I was hoping to find a different solution. Does anyone know of a router that allows you to configure its DHCP server to return an empty default gateway? I cannot find such an option on my Linksys WRT300N. Configuring a static IP address with no default gateway does work, but I would like to use DHCP if possible. Does anyone know of another way to specify a default gateway for a Windows 7 machine with multiple network adapters without mucking with the adapter metric?
    Thanks, Matthew

    Read the article

  • UNC Path fails by IP "no network provider accepted the given network path", but works using hostname

    - by BoyMars
    I have an unusual problem with a Windows Server 2003 (Standard x86) box. It appears the machine will not accept connections to its shares (locally or from other domain member servers) when its IP address is used in a UNC path. The error returned is: "no network provider accepted the given network path". This is the case with the machine's IP address (\\10.0.8.x) and even the loopback address (\\127.0.0.1); \\localhost does not work either. But using the hostname (FQDN or not) works: \\server and \\server.domain.local.
    The local Windows firewall for this server is off, and ping/RDP/other services respond fine using the IP address. The following services are running and have been restarted: Computer Browser, Workstation, Server. The server itself has been rebooted too. Event 8032 in the System log indicates that: "The browser service has failed to retrieve the backup list too many times on transport \Device\NetBT_Tcpip_{29A6A925-AFB3-47E2-BA59-DDA086DEAE7A}. The backup browser is stopping." The domain controller has not been restarted, and no other servers have experienced this problem, yet there are a number of browser-related (8021) errors in the logs on this server. Does anyone have any suggestions? I would like to avoid rejoining this server to the domain if possible.

    Read the article

  • GhettoVCB.sh log is wrong

    - by Michael
        2010-02-25 16:03:02 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 2
        2010-02-25 16:03:02 -- info: CONFIG - DISK_BACKUP_FORMAT = thin
        2010-02-25 16:03:02 -- info: ============================== ghettoVCB LOG START ==============================
        2010-02-25 16:03:02 -- info: CONFIG - ADAPTER_FORMAT = buslogic
        2010-02-25 16:03:02 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0
        2010-02-25 16:03:02 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0
        2010-02-25 16:03:02 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/nfs_storage_backup/vm1
        2010-02-25 16:03:02 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 3
        2010-02-25 16:03:02 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 2
        2010-02-25 16:03:02 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5
        2010-02-25 16:03:02 -- info: CONFIG - DISK_BACKUP_FORMAT = thin
        2010-02-25 16:03:02 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15
        2010-02-25 16:03:02 -- info: CONFIG - ADAPTER_FORMAT = buslogic
        2010-02-25 16:03:02 -- info: CONFIG - LOG_LEVEL = info
        2010-02-25 16:03:02 -- info: CONFIG - BACKUP_LOG_OUTPUT = /tmp/ghettoVCB.log
        2010-02-25 16:03:02 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0
        2010-02-25 16:03:02 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0
        2010-02-25 16:03:02 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 3
        2010-02-25 16:03:02 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0
        2010-02-25 16:03:02 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0
        2010-02-25 16:03:02 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5
        2010-02-25 16:03:02 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all
        2010-02-25 16:03:02 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15
        2010-02-25 16:03:02 -- info: CONFIG - LOG_LEVEL = info
        2010-02-25 16:03:02 -- info: CONFIG - BACKUP_LOG_OUTPUT = /tmp/ghettoVCB.log
        2010-02-25 16:03:02 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0
        2010-02-25 16:03:02 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0
        2010-02-25 16:03:02 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all
        2010-02-25 16:03:13 -- info: Initiate backup for VM1
        2010-02-25 16:03:13 -- info: Initiate backup for VM1
        2010-02-25 16:03:13 -- info: Creating Snapshot "ghettoVCB-snapshot-2010-02-25" for VM1
        2010-02-25 16:03:13 -- info: Creating Snapshot "ghettoVCB-snapshot-2010-02-25" for VM1
        Failed to clone disk : The file already exists (39).
        Destination disk format: VMFS thin-provisioned
        Cloning disk '/vmfs/volumes/datastore1/machine/VM1.vmdk'...
        2010-02-25 16:04:16 -- info: Removing snapshot from VM1 ...
        Destination disk format: VMFS thin-provisioned
        Cloning disk '/vmfs/volumes/datastore1/machine/VM1.vmdk'...
    How can I fix this? The backup itself is working, but the log shows every entry twice, as if two backups ran at the exact same time.
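
    The "Failed to clone disk : The file already exists (39)" line suggests a destination VMDK left behind by an earlier, interrupted run, and the doubled entries may simply be the console output and the BACKUP_LOG_OUTPUT file both being captured. A hedged cleanup sketch for the ESXi console (the backup path is taken from the log above; the timestamp and VM id are placeholders):
        ls -l /vmfs/volumes/nfs_storage_backup/vm1/          # look for a partially written backup directory
        # rm -rf "/vmfs/volumes/nfs_storage_backup/vm1/VM1-<old-timestamp>"   # remove the stale copy first
        vim-cmd vmsvc/getallvms | grep -i VM1                # note the VM id, then check for leftover snapshots:
        # vim-cmd vmsvc/snapshot.get <vmid>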

    Read the article

  • pfsense peer-to-peer OpenVPN not connecting

    - by John P
    I'm trying to set up a peer-to-peer OpenVPN between two pfSense servers running 2.0.1-RELEASE, but the client keeps getting the connection dropped, with a status of "reconnecting; ping-restart", and nothing appears to be routing between them. Both these firewalls are also doing PPTP VPNs that are working correctly.
    FW01 ("server"):
        LAN: 10.1.1.2/24
        WAN: xx.xx.126.34/27
        ServerMode: Peer to Peer (Shared Key)
        Protocol: UDP
        DeviceMode: tun
        Interface: WAN
        Port: 1194
        Tunnel: 10.0.8.1/30
        Local Network: 10.1.1.0/24
        Remote Network: 192.168.1.0/24
        Firewall Rule in OpenVPN tab: UDP * * * * * none
    FW03 (client):
        LAN: 192.168.1.2/24
        WAN: xx.xx.9.66/27
        ServerMode: Peer to Peer (Shared Key)
        Protocol: UDP
        DeviceMode: tun
        Interface: WAN
        Server Host: xx.xx.126.34
        Tunnel: -- also tried 10.1.8.0/24
        Remote Network: 10.1.1.0/24
    Client system log:
        Apr 6 18:00:08 kernel: ... Restarting packages.
        Apr 6 18:00:13 check_reload_status: Starting packages
        Apr 6 18:00:19 php: : Restarting/Starting all packages.
        Apr 6 18:00:56 kernel: ovpnc1: link state changed to DOWN
        Apr 6 18:00:56 check_reload_status: Reloading filter
        Apr 6 18:00:57 check_reload_status: Reloading filter
        Apr 6 18:00:57 kernel: ovpnc1: link state changed to UP
        Apr 6 18:00:57 check_reload_status: rc.newwanip starting ovpnc1
        Apr 6 18:00:57 check_reload_status: Syncing firewall
        Apr 6 18:01:02 php: : rc.newwanip: Informational is starting ovpnc1.
        Apr 6 18:01:02 php: : rc.newwanip: on (IP address: ) (interface: ) (real interface: ovpnc1).
        Apr 6 18:01:02 php: : rc.newwanip: Failed to update IP, restarting...
        Apr 6 18:01:02 php: : send_event: sent interface reconfigure got ERROR: incomplete command. all reload reconfigure restart newip linkup sync
    Client OpenVPN log:
        Apr 6 18:39:14 openvpn[12177]: Inactivity timeout (--ping-restart), restarting
        Apr 6 18:39:14 openvpn[12177]: SIGUSR1[soft,ping-restart] received, process restarting
        Apr 6 18:39:16 openvpn[12177]: NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
        Apr 6 18:39:16 openvpn[12177]: Re-using pre-shared static key
        Apr 6 18:39:16 openvpn[12177]: Preserving previous TUN/TAP instance: ovpnc1
        Apr 6 18:39:16 openvpn[12177]: UDPv4 link local (bound): [AF_INET]64.94.9.66
        Apr 6 18:39:16 openvpn[12177]: UDPv4 link remote: [AF_INET]64.74.126.34:1194
    Server OpenVPN log:
        Apr 6 14:40:36 openvpn[22117]: UDPv4 link remote: [undef]
        Apr 6 14:40:36 openvpn[22117]: UDPv4 link local (bound): [AF_INET]xx.xx.126.34:1194
        Apr 6 14:40:36 openvpn[21006]: /usr/local/sbin/ovpn-linkup ovpns1 1500 1557 10.1.8.1 10.1.8.2 init
        Apr 6 14:40:36 openvpn[21006]: /sbin/ifconfig ovpns1 10.1.8.1 10.1.8.2 mtu 1500 netmask 255.255.255.255 up
        Apr 6 14:40:36 openvpn[21006]: do_ifconfig, tt-ipv6=0, tt-did_ifconfig_ipv6_setup=0
        Apr 6 14:40:36 openvpn[21006]: TUN/TAP device /dev/tun1 opened
        Apr 6 14:40:36 openvpn[21006]: Control Channel Authentication: using '/var/etc/openvpn/server1.tls-auth' as a OpenVPN static key file
        Apr 6 14:40:36 openvpn[21006]: NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
        Apr 6 14:40:36 openvpn[21006]: OpenVPN 2.2.0 amd64-portbld-freebsd8.1 [SSL] [LZO2] [eurephia] [MH] [PF_INET6] [IPv6 payload 20110424-2 (2.2RC2)] built on Aug 11 2011
        Apr 6 14:40:36 openvpn[17171]: SIGTERM[hard,] received, process exiting
        Apr 6 14:40:36 openvpn[17171]: /usr/local/sbin/ovpn-linkdown ovpns1 1500 1557 10.1.8.1 10.1.8.2 init
        Apr 6 14:40:36 openvpn[17171]: ERROR: FreeBSD route delete command failed: external program exited with error status: 1
        Apr 6 14:40:36 openvpn[17171]: event_wait : Interrupted system call (code=4)
        Apr 6 14:06:32 openvpn[17171]: Initialization Sequence Completed
        Apr 6 14:06:32 openvpn[17171]: UDPv4 link remote: [undef]
        Apr 6 14:06:32 openvpn[17171]: UDPv4 link local (bound): [AF_INET]xx.xx.126.34:1194
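
    One thing worth noticing in the logs above: the server brings its tunnel up as 10.1.8.1 -> 10.1.8.2 even though its GUI setting says 10.0.8.1/30, so it is worth re-checking that both ends use the identical "Tunnel Network". A few hedged diagnostics from each pfSense shell (the interface names are assumptions; substitute your own WAN and tunnel devices):
        tcpdump -ni em0 udp port 1194     # on the server's WAN: do the client's packets arrive at all?
        ifconfig ovpns1                   # server tunnel device: note the local/remote tunnel addresses
        ifconfig ovpnc1                   # client tunnel device: must sit in the same tunnel subnet
        ping -c 3 10.1.8.1                # from the client, ping the server's end of the tunnel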

    Read the article

  • SQL Server 2008 Restore from Backup fails with error 3241 'cannot process this media family'

    - by pearcewg
    I am attempting to back up a database from a SQL Server instance on one machine and restore it to another, and I am encountering the frequently encountered 'SQL Server cannot process this media family' error. Both instances are SQL Server 2008, but with different patch levels:
        Restore: 10.0.2531.0
        Backup: 10.0.1600.22 ((SQL_PreRelease).080709-1414)
    The restore instance is Express; I am not sure about the edition of the backup instance, which is on a virtual private server. The restore is on my development box. When I restore to a different database on the source (backup) server, it restores fine. There is lots of material on Google and some on Stack Overflow about this issue, but nothing that matches this exact situation. Any thoughts? It should be straightforward to do a backup and restore from one machine to another (having done this thousands of times with SQL 6.5, 7, 2000, and 2005). Any ideas how to restore a database in this situation, which gives this error when attempting the restore?
    PARTIAL RESOLUTION: When I restored to a different box, running SQL 2008 Express on Windows Server 2003, all worked well. It just wouldn't work on the Windows 7 box. Not sure why. If anyone else has a similar experience, please let me know (there are many similar issues in different forums out there).

    Read the article

  • Windows Vista DHCP bug, arp authorize, isc dhcp, workaround

    - by jinanwow
    I am trying to find a workaround for the Windows Vista force-broadcast bug with ISC DHCP and a Cisco router. The problem is not Windows Vista obtaining an IP address from us; that works fine (with or without the flag enabled). The problem is that we are using a Cisco router and the command 'arp authorized' to prevent users from using static IP addresses on the network. If Windows Vista sets the broadcast flag to true, 'arp authorized' will not work, because the router looks for the IP address and destination MAC address in the DHCP offer packet to add the entry to its ARP table. The machine will DHCP just fine, but since the ARP table is not aware of the machine, it is unable to access the internet. If I disable the broadcast flag in Vista, the next time it DHCPs an ARP entry gets created, since the DHCP offer is unicast instead of broadcast. The thing is, we cannot tell 500 to 1000 people to edit their registry, so we need a workaround for this issue. I have not had much success in finding one. The question is: is there a way to force or trick ISC DHCP into unicasting a response back to the user? Either on the Cisco side, on the ISC DHCP side, or by intercepting and rewriting the DHCP discover UDP packet to turn off the flag before it reaches ISC DHCP?
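
    Before changing anything, a hedged capture on the ISC DHCP server (the interface name is an assumption) will confirm both the client's broadcast flag and whether the OFFER leaves as unicast or broadcast:
        tcpdump -nvvv -e -i eth0 'udp and (port 67 or port 68)'
        # A Vista client with the flag set should show something like "Flags [Broadcast]" in the decoded
        # DISCOVER, and the server's OFFER/ACK then goes to ff:ff:ff:ff:ff:ff instead of the client's MAC,
        # which is what keeps the router's 'arp authorized' table from learning the IP-to-MAC binding.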

    Read the article

  • 3Ware 9650SE RAID-6, two degraded drives, one ECC, rebuild stuck

    - by cswingle
    This morning I came into the office to discover that two of the drives on a RAID-6, 3ware 9650SE controller were marked as degraded and the array was rebuilding. After getting to about 4%, it got ECC errors on a third drive (this may have happened when I attempted to access the filesystem on this RAID and got I/O errors from the controller). Now I'm in this state:
        > /c2/u1 show
        Unit     UnitType  Status         %RCmpl  %V/I/M  Port  Stripe  Size(GB)
        ------------------------------------------------------------------------
        u1       RAID-6    REBUILDING     4%(A)   -       -     64K     7450.5
        u1-0     DISK      OK             -       -       p5    -       931.312
        u1-1     DISK      OK             -       -       p2    -       931.312
        u1-2     DISK      OK             -       -       p1    -       931.312
        u1-3     DISK      OK             -       -       p4    -       931.312
        u1-4     DISK      OK             -       -       p11   -       931.312
        u1-5     DISK      DEGRADED       -       -       p6    -       931.312
        u1-6     DISK      OK             -       -       p7    -       931.312
        u1-7     DISK      DEGRADED       -       -       p3    -       931.312
        u1-8     DISK      WARNING        -       -       p9    -       931.312
        u1-9     DISK      OK             -       -       p10   -       931.312
        u1/v0    Volume    -              -       -       -     -       7450.5
    Examining the SMART data on the three drives in question, the two that are DEGRADED are in good shape (PASSED without any Current_Pending_Sector or Offline_Uncorrectable errors), but the drive listed as WARNING has 24 uncorrectable sectors. And the "rebuild" has been stuck at 4% for ten hours now. So:
    1. How do I get it to start actually rebuilding? This particular controller doesn't appear to support /c2/u1 resume rebuild, and the only rebuild command that appears to be an option is one that wants to know what disk to add (/c2/u1 start rebuild disk=<p:-p...> [ignoreECC], according to the help). I have two hot spares in the server, and I'm happy to engage them, but I don't understand what it would do with that information in the current state it's in.
    2. Can I pull out the drive that is demonstrably failing (the WARNING drive) when I have two DEGRADED drives in a RAID-6? It seems to me that the best scenario would be for me to pull the WARNING drive and tell it to use one of my hot spares in the rebuild. But won't I kill the thing by pulling a "good" drive in a RAID-6 with two DEGRADED drives?
    3. Finally, I've seen reference in other posts to a bad bug in this controller that causes good drives to be marked as bad, and that upgrading the firmware may help. Is flashing the firmware a risky operation given the situation? Is it likely to help or hurt the rebuilding-but-stuck-at-4% RAID? Am I experiencing this bug in action?
    Advice outside the spiritual would be much appreciated. Thanks.
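
    A hypothetical sketch only: the port numbers below are placeholders rather than a recommendation for this particular array, and the controller device path varies by OS, so treat these as illustrations of the syntax:
        smartctl -a -d 3ware,9 /dev/twa0 | egrep -i 'overall|pending|uncorrect'   # re-check the WARNING drive (p9)
        tw_cli /c2/u1 show                                                        # current unit state
        # The controller's own help text (quoted above) implies a stalled rebuild has to be restarted with an
        # explicit disk argument, e.g. the port of one of the hot spares, optionally ignoring the ECC errors:
        # tw_cli /c2/u1 start rebuild disk=p8 ignoreECC     # p8 is a placeholder for a spare's port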

    Read the article

  • The file STDOLE2.TLB cannot be found or contains a Visual Basic for Applications library that is not

    - by Jim Birchall
    In Microsoft Project 2007 Professional, if I select Tools > Macro > Security, the following message appears:
        The file "STDOLE.TLB" cannot be found or contains a Visual Basic for Applications library that is not valid. Verify that the file name is correct, and try again. If the Visual Basic for Applications library is invalid, reinstall Project.
    I am a software developer and I have developed an add-in for MS Project using Visual Studio 2005, with all the problems that entails. As such, my machine is configured with both Project 2003 Professional and Project 2007 Professional. I only noticed this error when trying to debug my add-in. The add-in loads and draws the menu, but when I click on the menu option I receive this error (which is also generated by the Tools > Macro > Security option mentioned above). I have tried repairing the Office installations and uninstalling everything and then re-installing it all over again, but after several hours I still get the same problem.
    Does anyone have any idea how to resolve this? Some method of finding out what type libraries are registered with STDOLE2.TLB may help if I can identify what is causing the problem. Also, a way of manually unregistering the nasty library may be helpful. My machine is configured as follows:
        Windows 7 Ultimate x64
        Project 2003 Professional
        Office 2007 Ultimate
        Project 2007 Professional
        Visio 2007 Professional
        Visual Studio 2005 Team Edition for Developers
        Visual Studio 2005 Tools for Office Second Edition
        Visual Studio 2008 Professional
    I also installed the MS Project 2010 Beta to test my add-in against, but have since uninstalled it. I have a suspicion that this may have caused the problem, but I cannot be sure.

    Read the article

  • Android SDK emulator freezes on a Mac running OS X 10.6 Snow Leopard

    - by Donald Burr
    I'm having trouble running the Android SDK on both of my Macs running OS X 10.6.2 Snow Leopard. This appears to be a 64 bit vs. 32 bit issue, as Snow Leopard now defaults to 64-bit everything, including the Java virtual machine. I found this webpage with instructions on how to get the Android tools to run in the 32-bit Java VM, and I am now able to run the Android GUI tool to download SDK files, create AVM's, etc. However, when I try the Hello World tutorial and get to the point where I run my application under the Android emulator, everything goes south. The emulator appears to start but it hangs (spinning beachball of death cursor) without displaying anything. (This only hangs the emulator; the rest of the system still works fine.) If I follow the exact same steps (minus the 32-bit java hack) in a Windows virtual machine, everything works fine. Googling didn't yield anything useful (except for the 32-bit java hack I spoke of earlier). This occurs on both my Mac Pro tower and 13" MacBook Pro. Does anyone have any suggestions?

    Read the article

  • Server 2008 RAID5 resynching

    - by benpage
    I built a Windows Server 2008 R2 box last weekend and created a RAID-5 array using Disk Management, with 5x 1TB drives. For the next 60 hours the status said 'Resynching (X%)' as it built the array. During this time I did read from and write to the drive (slowly), and once the rebuild was complete speeds were quite fast. I was copying over some data from a USB hard drive overnight, and the machine crashed (looking at the error, I believe it was the driver for the USB drive), so the RAID-5 went back to resynching, since the machine was not shut down properly. The issue is, it's been about 48 hours since that started and I'm happy to say it will take roughly 60 hours again to resynch; however, all this time in Disk Management the status has only said 'Resynching', not 'Resynching (X%)'. The hard drive light is going nuts, and reads/writes are slow again, so I'm assuming it is actually resynching. The question is: is that a correct assumption? And why is Disk Management not telling me what percentage it's done? Is this normal behaviour for a not-properly-shut-down RAID-5 array?

    Read the article

  • Adding a CLI for PHP5 on live server

    - by Josua Pedersen
    I want to add command-line support for PHP5 on my server. When I run aptitude install php5-cli I get a message saying that my PHP modules/packages have unmet dependencies. Here is a list of packages that suffer from these unmet dependencies and need an upgrade: php5-gd, php5-curl, php5-mysql, php5-cgi. They all depend on php5-common. Can I upgrade the packages just as aptitude suggests without causing any disruption to the live site?
    Output from aptitude:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Reading extended state information
        Initialising package states... Done
        The following packages are BROKEN:
          libapache2-mod-php5 php5-cgi php5-curl php5-gd php5-mysql
        The following NEW packages will be installed:
          php5-cli
        The following packages will be upgraded:
          php5-common
        1 packages upgraded, 1 newly installed, 0 to remove and 123 not upgraded.
        Need to get 3,511kB of archives. After unpacking 7,803kB will be used.
        The following packages have unmet dependencies:
          php5-gd: Depends: php5-common (= 5.3.3-1ubuntu12~lucid) but 5.3.5-1ubuntu7.2ppa1~lucid is to be installed.
          php5-curl: Depends: php5-common (= 5.3.3-1ubuntu12~lucid) but 5.3.5-1ubuntu7.2ppa1~lucid is to be installed.
          php5-mysql: Depends: php5-common (= 5.3.3-1ubuntu12~lucid) but 5.3.5-1ubuntu7.2ppa1~lucid is to be installed.
          php5-cgi: Depends: php5-common (= 5.3.3-1ubuntu12~lucid) but 5.3.5-1ubuntu7.2ppa1~lucid is to be installed.
          libapache2-mod-php5: Depends: php5-common (= 5.3.3-1ubuntu12~lucid) but 5.3.5-1ubuntu7.2ppa1~lucid is to be installed.
        The following actions will resolve these dependencies:
        Upgrade the following packages:
          libapache2-mod-php5 [5.3.3-1ubuntu12~lucid (now) -> 5.3.5-1ubuntu7.2ppa1~lucid (lucid)]
          php5-cgi [5.3.3-1ubuntu12~lucid (now) -> 5.3.5-1ubuntu7.2ppa1~lucid (lucid)]
          php5-curl [5.3.3-1ubuntu12~lucid (now) -> 5.3.5-1ubuntu7.2ppa1~lucid (lucid)]
          php5-gd [5.3.3-1ubuntu12~lucid (now) -> 5.3.5-1ubuntu7.2ppa1~lucid (lucid)]
          php5-mysql [5.3.3-1ubuntu12~lucid (now) -> 5.3.5-1ubuntu7.2ppa1~lucid (lucid)]
        Score is 340
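
    A hedged sketch of what accepting aptitude's suggestion amounts to; the package names are the ones from the output above. Strictly speaking it is not zero-disruption, because Apache has to be restarted for the upgraded mod_php to load:
        sudo aptitude update
        sudo aptitude install php5-cli php5-common libapache2-mod-php5 php5-cgi php5-curl php5-gd php5-mysql
        sudo /etc/init.d/apache2 restart    # brief interruption while the upgraded modules load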

    Read the article

  • Windows 7 Folder Redirection (GPO)

    - by Kev
    Hi - I have been fighting this issue for a day or two now, so I am looking for some insight. I am taking over admin duties in a domain of 800 users, and the previous admins really did not employ many GPO settings for the clients of the domain. In each site, there is a location on the file server where "Home" folders were manually created, e.g. \\server\home\enduser. Whenever a user got a machine, the admin would manually right-click on the "My Documents" folder and manually enter the path to the home folder. We are planning to start putting Windows 7 machines on the network, and I want to automate as much as I can of everything that was not done in the past. Since everyone has existing "Home" folders, I have been trying to get Folder Redirection to work with a new Windows 7 machine (in a test OU). I am getting all kinds of errors and I can't get the Windows 7 "Documents" folder to redirect to the users' existing home folders. As I stated earlier, all of the home folders were (and still are) manually created on the file server and are set with the following security permissions:
        Domain Admins - Full Control
        euser (end user) - Modify (everything but Full)
    Can someone point me in the right direction on the proper settings to put in the Folder Redirection GPO to get this to work with the existing home folders? I can't seem to make it work, but I will keep trying. Thanks in advance!

    Read the article

  • Windows7 - “The specified network password is not correct.” when the password is in fact correct.

    - by Win7 Home User
    I have had a Samba server set up for some time now. It is a hardware NAS which unfortunately does not provide access to the Samba logs (the exact model is the Addonics NAS Adapter). I also have a Windows Vista and a Windows XP machine, and from both I am able to map \\192.168.0.20\Smd with no errors (net use l: \\192.168.0.20\Smd works, after asking for my username and password). I also bought a brand new computer with Windows 7, and when I try to execute the same exact net use command on it - using the exact same username/password pair - I get a "The specified network password is not correct." message. I also tried mapping from the Windows Explorer menu and got the same error. I synchronized the clocks of the two machines, tried again... and yet the same error persists. So what is really surprising here is that mapping works from the Windows XP and Windows Vista machines, but fails from a Windows 7 machine using the exact same command and username/password. Does anyone have any idea of what could be causing this or how to solve the problem? Thanks

    Read the article

  • Intel 1.83 Mac Mini upgraded to 1.5 gig for Snow Leopard

    - by Paula
    Even though I've upgraded countless video cards, RAM sticks, hard drives, and motherboards in PCs... this will be my first Mac Mini RAM upgrade. I've watched the classic "putty knife" video. (Absurd method... but I guess it's what I'm stuck with.) I have a 1.83 Intel-based Mac Mini from 2007-2008, with 1 gig of RAM (two 512 MB sticks). Can I install 1 gig + 512 MB? (Or do I have to throw away my existing sticks and buy two 1 gig sticks?) This old machine is rarely used... so I want to spend the absolute minimum on this RAM upgrade. We ONLY use it to run Xcode... nothing more. But I wanted to increase the RAM so we can install Snow Leopard. I have no idea how many pins the memory has. I printed out over FORTY pages of specs about this machine from "About This Mac"... but didn't find what I needed. Does this sound right:
        DDR2 SDRAM (but no mention of SO-DIMM)
        667MHz (but don't know if I can use faster also)
        Pin count: Unknown
        Computer model number: Unknown (but I "think" it's an MB138/A)
        PC2 RAM (unknown... not mentioned)
        5300 (unknown... not mentioned)
        Mfg date: (unknown... not mentioned)
        Number of slots: (unknown... not mentioned)
        Laptop or desktop RAM: (unknown... not mentioned)

    Read the article

  • Does "Noctua NH-U12-DX 1366" mount on Asus p6t 1366?

    - by Andrea Ambu
    On Noctua site they state: Caution: The NH-U12DX 1366 can only be used on mainboards that have a backplate with screw threads for CPU cooler installation (such as the Intel reference backplate for Xeon 5500). The cooler is thus incompatible with Xeon 3500 and Core i7 mainboards that don’t have such a backplate. How do I know if Asus p6t has it?

    Read the article

  • Having trouble using psservice and sc.exe between Windows Server 2008 machines

    - by Teflon Mac
    I'm trying to control services on one W2K8 machine from another; no domain, just a workgroup. The user account I'm logged in as is an administrator on both machines. I've tried both psservice and sc.exe. These work in a Windows Server 2003 environment, but it looks like I need an extra step or two due to the changed security model in 2008. Any ideas as to how to grant permission to the Service Control Manager (psservice) or OpenService (sc)? I tried running the DOS window with "Run As Administrator" and it made no difference. With psservice I get the following:
        D:\mydir>psservice \\REMOTESERVER -u "adminid" -p "adminpassword" start "Display Name of Service"
        PsService v2.22 - Service information and configuration utility
        Copyright (C) 2001-2008 Mark Russinovich
        Sysinternals - www.sysinternals.com
        Unable to access Service Control Manager on \\REMOTESERVER: Access is denied.
    On the remote server, I get the following message in the Security log, so I know I connect and log in to the remote machine; I assume it then fails on a subsequent authorization step. (The logoff message in the Security log is just that, "An account was logged off.", so no extra info there.)
        Special privileges assigned to new logon.
        Subject:
            Security ID: REMOTESERVER\adminid
            Account Name: adminid
            Account Domain: REMOTESERVER
            Logon ID: 0xxxxxxxx
        Privileges: SeSecurityPrivilege SeBackupPrivilege SeRestorePrivilege SeTakeOwnershipPrivilege SeDebugPrivilege SeSystemEnvironmentPrivilege SeLoadDriverPrivilege SeImpersonatePrivilege
    sc.exe is similar. The command syntax and error differ as below, but I also see the same login message in the remote server's Security log.
        D:\mydir>sc \\REMOTESERVER start "Registry Name of Service"
        [SC] StartService: OpenService FAILED 5: Access is denied.

    Read the article

  • SCVMM 2008 R2 problems migrating VM from VS2005 to Hyper-V host

    - by Scott Ivey
    I have System Center Virtual Machine Manager 2008 R2 installed, and have a Hyper-V R2 host and a Virtual Server 2005 host. I'm trying to migrate my machines from the VS2005 host to the Hyper-V host, and keep getting the following error:
        VMM is unable to complete the requested file transfer. The connection to the HTTP server myserver.mydomain.local could not be established. (Unknown error (0x80072efd))
        Recommended Action: Ensure that the HTTP service and/or the agent on the machine myserver.mydomain.local are installed and running and that a firewall is not blocking HTTPS traffic.
    (Note - migrations between Hyper-V hosts managed by the VMM server work fine; my problem is only going from the VS2005 host to a Hyper-V host.) I have no firewalls turned on on either of the servers, and no firewalls in the middle. I've looked all over for answers to this problem and am getting nowhere. All the articles I find when searching are talking about either V2V or P2V, and I'm just trying to do a straight Migrate VM. I've tried rebooting the boxes, changing the BITS SSL port number, restarting services, triple-checking firewalls, etc. Does anyone have any good suggestions as to how I can resolve this problem?

    Read the article

  • VirtualBox Issue: virtualbox changed my Computer Name's ip address in Windows

    - by suud
    I have installed VirtualBox 4.2.2 on Windows 7. My computer name is MY-PC. My IP address (using the ipconfig /all command) is 192.168.1.101. My IP is dynamic and I have set DNS to Google DNS (8.8.8.8). When I ping MY-PC, I get this result:
        Pinging MY-PC [192.168.56.1] with 32 bytes of data:
        Reply from 192.168.56.1: bytes=32 time<1ms TTL=128
        Reply from 192.168.56.1: bytes=32 time<1ms TTL=128
        Reply from 192.168.56.1: bytes=32 time<1ms TTL=128
        Reply from 192.168.56.1: bytes=32 time<1ms TTL=128
    My VirtualBox was not running, and I expected the IP address of MY-PC to be 192.168.1.101, not 192.168.56.1. Then I ran the command nbtstat -a MY-PC and got this result:
        VirtualBox Host-Only Network:
        Node IpAddress: [192.168.56.1] Scope Id: []
            NetBIOS Remote Machine Name Table
            Name        Type      Status
            ---------------------------------------------
            MY-PC       <00>      UNIQUE      Registered
            WORKGROUP   <00>      GROUP       Registered
            MY-PC       <20>      UNIQUE      Registered
            MAC Address = 08-00-27-00-60-B3
        Local Area Connection:
        Node IpAddress: [0.0.0.0] Scope Id: []
            Host not found.
        Wireless Network Connection:
        Node IpAddress: [192.168.1.101] Scope Id: []
            NetBIOS Remote Machine Name Table
            Name        Type      Status
            ---------------------------------------------
            MY-PC       <00>      UNIQUE      Registered
            WORKGROUP   <00>      GROUP       Registered
            MY-PC       <20>      UNIQUE      Registered
            MAC Address = 94-0C-6D-E5-6D-5D
    So it seems VirtualBox caused this problem. I want to know how to change my computer name's IP address back to 192.168.1.101 (or whatever IP address is set by my internet connection).
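
    If nothing on this machine actually uses host-only networking, one hedged option is to drop the host-only interface with VBoxManage so Windows stops registering MY-PC against 192.168.56.1; the adapter name below is the VirtualBox default on Windows and may differ on your machine:
        VBoxManage list hostonlyifs
        VBoxManage hostonlyif remove "VirtualBox Host-Only Ethernet Adapter"
    Any VM configured with a host-only adapter will lose that network until the interface is recreated (VBoxManage hostonlyif create), so check your VM settings before removing it.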

    Read the article

  • VMRC equivalent for Hyper-V?

    - by Ian Boyd
    VMRC was the client tool used to connect to virtual machines running on Virtual Server. Upgrading to Windows Server 2008 R2 with the Hyper-V role, I need a way for people to be able to use the virtual machines. Note:
        - not all virtual machines will have network connectivity
        - not all virtual machines will be running Windows
        - some people needing to connect to a virtual machine will be running Windows XP
        - Hyper-V Manager, allowing management of the Hyper-V server, is less desirable (since it allows management of the Hyper-V server, and doesn't work on all operating systems)
    What is the Windows Server 2008 R2 equivalent of VMRC; a way to "VNC" to a virtual server?
    Update: I think Tatas was suggesting Microsoft System Center Virtual Machine Manager Self-Service Portal 2.0 (?), which requires:
        - SQL Server
        - IIS
    Installing those would unfortunately violate our Windows Server 2008 R2 license. I might be looking at the wrong product link, since a commenter said there is a version that doesn't require "System Center".
    Update 2: The Windows Server 2008 R2 running Hyper-V is being licensed with the understanding that it only be used to host Hyper-V. From the Windows Server 2008 R2 Licensing FAQ:
        Q. If I have one license for Windows Server 2008 R2 Standard and want to run it in a virtual operating system environment, can I continue running it in the physical operating system environment?
        A. Yes, with Windows Server 2008 R2 Standard, you may run one instance in the physical operating system environment and one instance in the virtual operating system environment; however, the instance running in the physical operating system environment may be used only to run hardware virtualization software, provide hardware virtualization services, or to run software to manage and service operating system environments on the licensed server.
    This is why I'm wary about installing IIS or SQL Server.

    Read the article

  • Ubuntu 9.10 RSA authentication: ssh fails, filezilla runs fine

    - by MariusPontmercy
    This is quite a mystery for me. I usually use passwordless RSA authentication to log in to my remote *nix servers with ssh and sftp, and have never had any problem until now. I cannot connect to an Ubuntu 9.10 machine:
        user@myclient$ ssh -i .ssh/Ganymede_key [email protected]
        [...]
        debug1: Host 'ganymede.server.com' is known and matches the RSA host key.
        debug1: Found key in /home/user/.ssh/known_hosts:14
        debug2: bits set: 494/1024
        debug1: ssh_rsa_verify: signature correct
        debug2: kex_derive_keys
        debug2: set_newkeys: mode 1
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug2: set_newkeys: mode 0
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug2: service_accept: ssh-userauth
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug2: key: .ssh/Ganymede_key (0xb96a0ef8)
        debug2: key: .ssh/Ganymede_key ((nil))
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        debug1: Next authentication method: publickey
        debug1: Offering public key: .ssh/Ganymede_key
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        debug1: Trying private key: .ssh/Ganymede_key
        debug1: read PEM private key done: type RSA
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        debug2: we did not send a packet, disable method
        debug1: Next authentication method: keyboard-interactive
        debug2: userauth_kbdint
        debug2: we sent a keyboard-interactive packet, wait for reply
        debug2: input_userauth_info_req
        debug2: input_userauth_info_req: num_prompts 1
    Then it falls back to password authentication. If I disable password authentication on the remote machine, my connection attempt just fails with "Permission denied (publickey)." Same thing for sftp from the command line. The "funny" thing is that the exact same RSA key works like a charm with a FileZilla sftp session instead:
        12:08:00 Trace: Offered public key from "/home/user/.filezilla/keys/Ganymede_key"
        12:08:00 Trace: Offer of public key accepted, trying to authenticate using it.
        12:08:01 Trace: Access granted
        12:08:01 Trace: Opened channel for session
        12:08:01 Trace: Started a shell/command
        12:08:01 Status: Connected to ganymede.server.com
        12:08:02 Trace: CSftpControlSocket::ConnectParseResponse()
        12:08:02 Trace: CSftpControlSocket::ResetOperation(0)
        12:08:02 Trace: CControlSocket::ResetOperation(0)
        12:08:02 Status: Retrieving directory listing...
        12:08:02 Trace: CSftpControlSocket::SendNextCommand()
        12:08:02 Trace: CSftpControlSocket::ChangeDirSend()
        12:08:02 Command: pwd
        12:08:02 Response: Current directory is: "/root"
        12:08:02 Trace: CSftpControlSocket::ResetOperation(0)
        12:08:02 Trace: CControlSocket::ResetOperation(0)
        12:08:02 Trace: CSftpControlSocket::ParseSubcommandResult(0)
        12:08:02 Trace: CSftpControlSocket::ListSubcommandResult()
        12:08:02 Trace: CSftpControlSocket::ResetOperation(0)
        12:08:02 Trace: CControlSocket::ResetOperation(0)
        12:08:02 Status: Directory listing successful
    Any thoughts? M
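
    Since the server accepts the same key from FileZilla but rejects it in the ssh debug trace above, the usual suspects are client-side file permissions or the server-side account setup rather than the key itself. A hedged checklist (the user name is a placeholder):
        # on the client
        chmod 700 ~/.ssh && chmod 600 ~/.ssh/Ganymede_key
        ssh -vvv -i ~/.ssh/Ganymede_key <user>@ganymede.server.com 2>&1 | grep -iA2 'Offering public key'
        # on the Ubuntu 9.10 server, watch why the key is refused while you connect
        sudo tail -f /var/log/auth.log
        # and confirm the permissions sshd insists on for the login account
        ls -ld ~ ~/.ssh ~/.ssh/authorized_keys    # home, .ssh and authorized_keys must not be group/world-writable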

    Read the article

  • Can't install fastcgi on Ubuntu Server: Package libapache2-mod-fastcgi is not available

    - by BlueDragon
    When I try to install fastcgi on Ubuntu Server 12.04, I get the following error:
        sudo apt-get install libapache2-mod-fastcgi
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package libapache2-mod-fastcgi is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or is only available from another source
        E: Package 'libapache2-mod-fastcgi' has no installation candidate
    Any solution?
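
    A hedged sketch: on Ubuntu 12.04 libapache2-mod-fastcgi sits in the multiverse component (it is non-free), which server installs often leave disabled, so enabling multiverse and refreshing the package lists is usually what is missing. The sed expression is an assumption about how your sources.list comments out those lines; check the file by hand if unsure:
        sudo sed -i 's/^# *\(deb .*multiverse\)/\1/' /etc/apt/sources.list    # uncomment any multiverse lines
        sudo apt-get update
        sudo apt-get install libapache2-mod-fastcgi
        sudo a2enmod fastcgi && sudo service apache2 restart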

    Read the article

  • Why do I get "unsupported architecture" errors trying to install a Python library in OSX?

    - by Emma518
    I am trying to install a Python library in the Presto package, source http://www.cv.nrao.edu/~sransom/presto/ Using 'gmake fftfit' I get the following error: cd fftfit_src ; f2py-2.7 -c fftfit.pyf *.f running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src build_src building extension "fftfit" sources creating /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx-10.9-x86_64-2.7 f2py options: [] f2py: fftfit.pyf Reading fortran codes... Reading file 'fftfit.pyf' (format:free) Post-processing... Block: fftfit Block: cprof Block: fftfit Post-processing (stage 2)... Building modules... Building module "fftfit"... Constructing wrapper function "cprof"... c,amp,pha = cprof(y,[nmax,nh]) Constructing wrapper function "fftfit"... shift,eshift,snr,esnr,b,errb,ngood = fftfit(prof,s,phi,[nmax]) Wrote C/API module "fftfit" to file "/var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx-10.9-x86_64- 2.7/fftfitmodule.c" adding '/var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx-10.9-x86_64-2.7/fortranobject.c' to sources. adding '/var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx-10.9-x86_64-2.7' to include_dirs. copying /Library/Python/2.7/site-packages/numpy-1.8.2-py2.7-macosx-10.9- intel.egg/numpy/f2py/src/fortranobject.c -> /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx-10.9-x86_64-2.7 copying /Library/Python/2.7/site-packages/numpy-1.8.2-py2.7-macosx-10.9-intel.egg/numpy/f2py/src/fortranobject.h -> /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx-10.9-x86_64-2.7 build_src: building npy-pkg config files running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext customize Gnu95FCompiler Found executable /usr/local/bin/gfortran customize Gnu95FCompiler customize Gnu95FCompiler using build_ext building 'fftfit' extension compiling C sources C compiler: /usr/bin/clang -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -arch ppc -arch i386 -arch x86_64 -g -O2 creating /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/var creating /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/var/folders creating /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/var/folders/sx creating /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp creating /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/var/folders/sx/j_l_qvys4bv00_38pfvy3m8h00 00gp/T creating /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/var/folders/sx/j_l_qvys4bv00_38pfvy3m8h00 00gp/T/tmp9MmLz8 creating /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/var/folders/sx/j_l_qvys4bv00_38pfvy3m8h00 00gp/T/tmp9MmLz8/src.macosx-10.9-x86_64-2.7 compile options: '-I/var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx-10.9- x86_64-2.7 -I/Library/Python/2.7/site-packages/numpy-1.8.2-py2.7-macosx-10.9- intel.egg/numpy/core/include - I/opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c' clang: /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx-10.9-x86_64- 2.7/fftfitmodule.c In file included from /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx- 10.9-x86_64-2.7/fftfitmodule.c:16: In file included from 
/opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/Python.h:19: In file included from /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/ 5.1/include/limits.h:38: In file included from /usr/include/limits.h:63: /usr/include/sys/cdefs.h:658:2: error: Unsupported architecture #error Unsupported architecture ^ In file included from /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx- 10.9-x86_64-2.7/fftfitmodule.c:16: In file included from /opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/Python.h:19: In file included from /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/ 5.1/include/limits.h:38: In file included from /usr/include/limits.h:64: /usr/include/machine/limits.h:8:2: error: architecture not supported #error architecture not supported ^ In file included from /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx- 10.9-x86_64-2.7/fftfitmodule.c:16: In file included from /opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/Python.h:33: In file included from /usr/include/stdio.h:67: In file included from /usr/include/_types.h:27: In file included from /usr/include/sys/_types.h:33: /usr/include/machine/_types.h:34:2: error: architecture not supported #error architecture not supported ^ In file included from /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx- 10.9-x86_64-2.7/fftfitmodule.c:16: In file included from /opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/Python.h:33: In file included from /usr/include/stdio.h:67: In file included from /usr/include/_types.h:27: /usr/include/sys/_types.h:94:9: error: unknown type name '__int64_t' typedef __int64_t __darwin_blkcnt_t; /* total blocks */ ^ /usr/include/sys/_types.h:95:9: error: unknown type name '__int32_t' typedef __int32_t __darwin_blksize_t; /* preferred block size */ ^ /usr/include/sys/_types.h:96:9: error: unknown type name '__int32_t' typedef __int32_t __darwin_dev_t; /* dev_t */ ^ /usr/include/sys/_types.h:99:9: error: unknown type name '__uint32_t' typedef __uint32_t __darwin_gid_t; /* [???] process and group IDs */ ^ /usr/include/sys/_types.h:100:9: error: unknown type name '__uint32_t' typedef __uint32_t __darwin_id_t; /* [XSI] pid_t, uid_t, or gid_t*/ ^ /usr/include/sys/_types.h:101:9: error: unknown type name '__uint64_t' typedef __uint64_t __darwin_ino64_t; /* [???] Used for 64 bit inodes */ ^ /usr/include/sys/_types.h:107:9: error: unknown type name '__darwin_natural_t' typedef __darwin_natural_t __darwin_mach_port_name_t; /* Used by mach */ ^ /usr/include/sys/_types.h:109:9: error: unknown type name '__uint16_t' typedef __uint16_t __darwin_mode_t; /* [???] Some file attributes */ ^ /usr/include/sys/_types.h:110:9: error: unknown type name '__int64_t' typedef __int64_t __darwin_off_t; /* [???] Used for file sizes */ ^ /usr/include/sys/_types.h:111:9: error: unknown type name '__int32_t' typedef __int32_t __darwin_pid_t; /* [???] process and group IDs */ ^ /usr/include/sys/_types.h:131:9: error: unknown type name '__uint32_t' typedef __uint32_t __darwin_sigset_t; /* [???] signal set */ ^ /usr/include/sys/_types.h:132:9: error: unknown type name '__int32_t' typedef __int32_t __darwin_suseconds_t; /* [???] microseconds */ ^ /usr/include/sys/_types.h:133:9: error: unknown type name '__uint32_t' typedef __uint32_t __darwin_uid_t; /* [???] 
user IDs */ ^ /usr/include/sys/_types.h:134:9: error: unknown type name '__uint32_t' typedef __uint32_t __darwin_useconds_t; /* [???] microseconds */ ^ In file included from /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx- 10.9-x86_64-2.7/fftfitmodule.c:16: In file included from /opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/Python.h:33: In file included from /usr/include/stdio.h:71: /usr/include/sys/_types/_va_list.h:31:9: error: unknown type name '__darwin_va_list'; did you mean '__builtin_va_list'? typedef __darwin_va_list va_list; ^ note: '__builtin_va_list' declared here In file included from /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx- 10.9-x86_64-2.7/fftfitmodule.c:16: In file included from /opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/Python.h:33: In file included from /usr/include/stdio.h:72: /usr/include/sys/_types/_size_t.h:30:9: error: unknown type name '__darwin_size_t'; did you mean '__darwin_ino_t'? typedef __darwin_size_t size_t; ^ /usr/include/sys/_types.h:103:26: note: '__darwin_ino_t' declared here typedef __darwin_ino64_t __darwin_ino_t; /* [???] Used for inodes */ ^ fatal error: too many errors emitted, stopping now [-ferror-limit=] 20 errors generated. In file included from /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx- 10.9-x86_64-2.7/fftfitmodule.c:16: In file included from /opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/Python.h:19: In file included from /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/5.1/include/limits.h:38: In file included from /usr/include/limits.h:63: /usr/include/sys/cdefs.h:658:2: error: Unsupported architecture #error Unsupported architecture ^ In file included from /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx- 10.9-x86_64-2.7/fftfitmodule.c:16: In file included from /opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/Python.h:19: In file included from /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/ 5.1/include/limits.h:38: In file included from /usr/include/limits.h:64: /usr/include/machine/limits.h:8:2: error: architecture not supported #error architecture not supported ^ In file included from /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx- 10.9-x86_64-2.7/fftfitmodule.c:16: In file included from /opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/Python.h:33: In file included from /usr/include/stdio.h:67: In file included from /usr/include/_types.h:27: In file included from /usr/include/sys/_types.h:33: /usr/include/machine/_types.h:34:2: error: architecture not supported #error architecture not supported ^ In file included from /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx- 10.9-x86_64-2.7/fftfitmodule.c:16: In file included from /opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/Python.h:33: In file included from /usr/include/stdio.h:67: In file included from /usr/include/_types.h:27: /usr/include/sys/_types.h:94:9: error: unknown type name '__int64_t' typedef __int64_t __darwin_blkcnt_t; /* total blocks */ ^ /usr/include/sys/_types.h:95:9: error: unknown type name '__int32_t' typedef __int32_t __darwin_blksize_t; /* preferred block size */ ^ /usr/include/sys/_types.h:96:9: error: unknown type name '__int32_t' typedef 
__int32_t __darwin_dev_t; /* dev_t */ ^ /usr/include/sys/_types.h:99:9: error: unknown type name '__uint32_t' typedef __uint32_t __darwin_gid_t; /* [???] process and group IDs */ ^ /usr/include/sys/_types.h:100:9: error: unknown type name '__uint32_t' typedef __uint32_t __darwin_id_t; /* [XSI] pid_t, uid_t, or gid_t*/ ^ /usr/include/sys/_types.h:101:9: error: unknown type name '__uint64_t' typedef __uint64_t __darwin_ino64_t; /* [???] Used for 64 bit inodes */ ^ /usr/include/sys/_types.h:107:9: error: unknown type name '__darwin_natural_t' typedef __darwin_natural_t __darwin_mach_port_name_t; /* Used by mach */ ^ /usr/include/sys/_types.h:109:9: error: unknown type name '__uint16_t' typedef __uint16_t __darwin_mode_t; /* [???] Some file attributes */ ^ /usr/include/sys/_types.h:110:9: error: unknown type name '__int64_t' typedef __int64_t __darwin_off_t; /* [???] Used for file sizes */ ^ /usr/include/sys/_types.h:111:9: error: unknown type name '__int32_t' typedef __int32_t __darwin_pid_t; /* [???] process and group IDs */ ^ /usr/include/sys/_types.h:131:9: error: unknown type name '__uint32_t' typedef __uint32_t __darwin_sigset_t; /* [???] signal set */ ^ /usr/include/sys/_types.h:132:9: error: unknown type name '__int32_t' typedef __int32_t __darwin_suseconds_t; /* [???] microseconds */ ^ /usr/include/sys/_types.h:133:9: error: unknown type name '__uint32_t' typedef __uint32_t __darwin_uid_t; /* [???] user IDs */ ^ /usr/include/sys/_types.h:134:9: error: unknown type name '__uint32_t' typedef __uint32_t __darwin_useconds_t; /* [???] microseconds */ ^ In file included from /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx-10.9-x86_64-2.7/fftfitmodule.c:16: In file included from /opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/Python.h:33: In file included from /usr/include/stdio.h:71: /usr/include/sys/_types/_va_list.h:31:9: error: unknown type name '__darwin_va_list'; did you mean '__builtin_va_list'? typedef __darwin_va_list va_list; ^ note: '__builtin_va_list' declared here In file included from /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx-10.9-x86_64-2.7/fftfitmodule.c:16: In file included from /opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/Python.h:33: In file included from /usr/include/stdio.h:72: /usr/include/sys/_types/_size_t.h:30:9: error: unknown type name '__darwin_size_t'; did you mean '__darwin_ino_t'? typedef __darwin_size_t size_t; ^ /usr/include/sys/_types.h:103:26: note: '__darwin_ino_t' declared here typedef __darwin_ino64_t __darwin_ino_t; /* [???] Used for inodes */ ^ fatal error: too many errors emitted, stopping now [-ferror-limit=] 20 errors generated. error: Command "/usr/bin/clang -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -arch ppc -arch i386 -arch x86_64 -g -O2 -I/var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx- 10.9-x86_64-2.7 -I/Library/Python/2.7/site-packages/numpy-1.8.2-py2.7-macosx-10.9- intel.egg/numpy/core/include - I/opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/src.macosx-10.9-x86_64-2.7/fftfitmodule.c -o /var/folders/sx/j_l_qvys4bv00_38pfvy3m8h0000gp/T/tmp9MmLz8/var/folders/sx/j_l_qvys4bv00_38pfvy3m8h00 00gp/T/tmp9MmLz8/src.macosx-10.9-x86_64-2.7/fftfitmodule.o" failed with exit status 1 Makefile:5: recipe for target 'fftfit' failed gmake: *** [fftfit] Error 1 How can I solve this architecture problem?
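
    The compile line quoted above passes "-arch ppc", which the clang shipped with Xcode 5.1 no longer supports, so every system header stops at "#error Unsupported architecture". A hedged workaround is to restrict the build to an architecture the toolchain still knows about; distutils on OS X honours the ARCHFLAGS environment variable for this, though with numpy.distutils your mileage may vary:
        export ARCHFLAGS="-arch x86_64"
        cd fftfit_src && f2py-2.7 -c fftfit.pyf *.f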

    Read the article

  • Header set Access-Control-Allow-Origin not working with mod_rewrite + mod_jk

    - by tharant
    My first question on here on SF, so please forgive me if I manage to bork the post. :) Anyway, I'm using mod_rewrite on one of my machines with a simple rule that redirects to a webapp on another machine. I'm also setting the header 'Access-Control-Allow-Origin' on both machines. The problem is that when I hit the rewrite rule, I lose the 'Access-Control-Allow-Origin' header setting. Here's an example of the Apache config for the first machine:
        NameVirtualHost 10.0.0.2:80
        <VirtualHost 10.0.0.2:80>
            DocumentRoot /var/www/host.example.com
            ServerName host.example.com
            JkMount /webapp/* jkworker
            Header set Access-Control-Allow-Origin "*"
            RewriteEngine on
            RewriteRule ^/otherhost http://otherhost.example.com/webapp [R,L]
        </VirtualHost>
    And here's an example of the Apache config for the second:
        NameVirtualHost 10.0.1.2:80
        <VirtualHost 10.0.1.2:80>
            DocumentRoot /var/www/otherhost.example.com
            ServerName otherhost.example.com
            JkMount /webapp/* jkworker
            Header set Access-Control-Allow-Origin "*"
        </VirtualHost>
    When I hit host.example.com we see that the header is set:
        $ curl -i http://host.example.com/
        HTTP/1.1 302 Moved Temporarily
        Server: Apache/2.2.11 (FreeBSD) mod_ssl/2.2.11 OpenSSL/0.9.7e-p1 DAV/2 mod_jk/1.2.26
        Content-Length: 0
        Access-Control-Allow-Origin: *
        Content-Type: text/html;charset=ISO-8859-1
    And when I hit otherhost.example.com we see that it too is setting the header:
        $ curl -i http://otherhost.example.com
        HTTP/1.1 200 OK
        Server: Apache/2.0.46 (Red Hat)
        Location: http://otherhost.example.com/index.htm
        Content-Length: 0
        Access-Control-Allow-Origin: *
        Content-Type: text/html;charset=UTF-8
    But when I try to hit the rewrite rule at host.example.com/otherhost we get no love:
        $ curl -i http://host.example.com/otherhost/
        HTTP/1.1 302 Found
        Server: Apache/2.2.11 (FreeBSD) mod_ssl/2.2.11 OpenSSL/0.9.7e-p1 DAV/2 mod_jk/1.2.26
        Location: http://otherhost.example.com/
        Content-Length: 0
        Content-Type: text/html; charset=iso-8859-1
    Can anybody point out what I'm doing wrong here? Could mod_jk be part of the problem?

    Read the article
