Search Results

Search found 5643 results on 226 pages for 'machines'.

Page 181 of 226

  • PHP crashing during OAuth scripts

    - by FunkyChicken
    I just installed Nginx 1.2.4 and PHP 5.4.0 (from svn, php-fpm) on CentOS 5.8 64-bit. The problem I have is that PHP crashes the moment I run any social OAuth scripts. I have tried to log into Facebook, Twitter and Google with various scripts that I know work on my other servers. When I load the scripts I get a 502 error from Nginx, and I find these errors in the logs:

        php-fpm log: WARNING: [pool www] child 23821 exited on signal 11 (SIGSEGV) after 1132.862984 seconds from start
        nginx log: ERROR: recv() failed (104: Connection reset by peer) while reading response header from upstream

    From what I can see, it goes wrong when PHP tries to make a request to any of the OAuth servers. For example, https://github.com/mahmudahsan/PHP-SDK-3.0---Graph-API-base-Facebook-Connect-Tutorial-Source works perfectly on my other machines but causes PHP to crash here. I found http://stackoverflow.com/questions/3616191/nginx-php-fpm-502-bad-gateway which seems to be a similar problem, but I cannot find a way to solve it.

    UPDATE: I have been doing some debugging in one of the scripts that is playing up. Line 808 (http://pastebin.com/gSnzRtXb) runs the curl_exec() command, and that is where it crashes: if I put echo 'test'; exit; just above that line it prints correctly, but below that line PHP crashes, so line 808 is what causes the crash. I then made a very simple test script (http://pastebin.com/Rshnyhcm) which also uses curl_exec, and that runs just fine. So I dug deeper into the request from the Facebook script to see what the $opts array from line 806 contains; its output is http://pastebin.com/Cq9ffd3R. What the problem is, I still have no clue.
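
    A segfault inside curl_exec() on a hand-built CentOS 5 PHP is often a library mismatch (for example libcurl linked against a different SSL stack than PHP expects), but that is only a guess here. A rough diagnostic sketch, assuming gdb is installed; the php-fpm binary and core file paths are illustrative and need adjusting:

        # check which curl/ssl libraries this PHP build actually links against
        php -i | grep -iE 'curl|openssl'
        ldd "$(command -v php)" | grep -iE 'curl|ssl|nss'

        # allow core dumps, reproduce the crash, then pull a backtrace from the worker's core file
        ulimit -c unlimited
        gdb --batch -ex bt /usr/local/sbin/php-fpm /path/to/core.23821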

  • Deploying Windows Service through group policy fails with Event ID 102

    - by Sören Kuklau
    I'm trying to deploy a custom Windows Service (written in C#, installed through a VS setup project) using a group policy. To help debug this, I also have two additional MSIs in the same policy. All three packages are deployed as a machine policy, not a user one.

    On one machine (running Windows Server 2008, no UAC), all three deploy fine and the service is set to Automatic, as expected. On two other machines (running Windows 7, with UAC), the two other MSIs deploy fine, but my service fails to install. The event log gives event ID 102, which appears to be a permissions problem: "The install of application 'Package Name' from policy 'Policy Name' failed. The error was: The installation source for this product is not available. Verify that the source exists and that you can access it." However, all three packages come from the same share, linked through a UNC path, so this seems unlikely. My guess is that UAC is the problem and that the service requires additional permissions. Do I need to alter the MSI somehow?

  • How do I set up a Windows NFS share so that I can view its contents on Linux?

    - by hewhocutsdown
    My NFS server is a Windows XP SP3 box with Microsoft Windows Services for UNIX installed. I have a share configured under C:\NFS with the share name NFS and ANSI encoding. Anonymous access is enabled, with the anonymous UID/GID set to 0/0. Additionally, I've set ALL MACHINES to Read-Write and checked the checkbox to allow root access.

    My first NFS client is an Ubuntu 10.04 box with nfs-common installed. Running sudo mount -t nfs 1.1.1.1:/NFS /home/user/NFS succeeds, but when I attempt to view the folder (even as root), it tells me that I do not have the permissions necessary to view its contents.

    My second NFS client is an IBM iSeries box running OS/400 V5R3. I used the mount command below:

        MOUNT TYPE(*NFS) MFS('1.1.1.1:/NFS') MNTOVRDIR('/PARENT/NFS') OPTIONS('rw,nosuid,retry=5,rsize=8096,wsize=8096,timeo=20,retrans=2,acregmin=30,acregmax=60,acdirmin=30,acdirmax=60,soft') CODEPAGE(*BINARY *ASCII)

    which also mounts successfully. Attempting to WRKLNK '/PARENT/NFS' and use Option 5 to enter the directory yields a "Not authorized to object" error, even though I am a security officer with the *ALLOBJ special authority. My gut says that it's a problem with the Windows share, but I don't know what it could be. Do you have any suggestions?
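
    One quick check from the Ubuntu side is to see what ownership and mode bits the Services for UNIX export actually presents after mounting; a small sketch, keeping the server address and mount point from the question:

        sudo mount -t nfs -o vers=3,nolock 1.1.1.1:/NFS /home/user/NFS
        ls -ld /home/user/NFS    # owner, group and mode bits the server maps for the anonymous/root user
        stat /home/user/NFS
        sudo umount /home/user/NFS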

  • DAS vs SAN storage for serving 2 to 4 nodes

    - by Luke404
    We currently have 4 Linux nodes with local storage, arranged in two active/passive pairs with storage mirrored using DRBD, running virtual machines (using the Xen hypervisor) for typical hosting workloads (mail, web, a couple of VPS, etc.). We're approaching the (presumed) maximum IOPS of those servers, and we're planning to migrate to an external storage solution with two active nodes, with capacity for up to four active nodes.

    Since we're an all-Dell shop, I've done some research and found that the MD3200 / MD3200i products should be the ones we're looking for. We are pretty sure we won't be attaching more than 4 hosts to a single storage array, and I'm wondering if there is any clear advantage to one or the other. In theory I should be able to attach 4 SAS hosts to a single MD3200 (single links to a single-controller MD3200, or dual redundant SAS links from each host to a dual-controller MD3200), or 4 iSCSI hosts to a single MD3200i (directly on its 4 GigE ports without any switch, again with dual links for the dual-controller option). Both setups should let us implement live VM migration, since all hosts can access all the LUNs at the same time, along with a shared filesystem like GFS2 or OCFS2. Both setups should also allow full redundancy of the whole system (assuming dual controllers in the storage array).

    One difference I can see is that the DAS solution is limited to 4 hosts, while the iSCSI one should be able to grow to more hosts (by adding two GigE switches to the mix). Another point for the iSCSI solution is that it would allow us to start out with our current nodes and upgrade them at a later time (we can't add more SAS controllers, but they already have 4 GigE ports each). With the right (iSCSI or SAS) controllers I should be able to connect diskless nodes and boot them off the external storage, which I think is a good thing (it gets rid of any local storage). On the other hand, I would have expected the SAS option to be cheaper, but it seems like an MD3200 actually costs a little less than an MD3200i (?). (Please note: I've used Dell gear in my examples since that's what we're looking for, but I assume the same goes for other vendors.)

    I would like to know if my assumptions above are correct, and if I'm missing any important difference between the two setups.
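
    For the MD3200i option, host-side setup is plain iSCSI plus multipathing; a minimal sketch from one Linux node, assuming open-iscsi and multipath-tools are installed, with an illustrative controller port address:

        sudo iscsiadm -m discovery -t sendtargets -p 192.168.130.101   # one of the array's iSCSI host ports
        sudo iscsiadm -m node -p 192.168.130.101 --login
        sudo multipath -ll                                             # confirm both controller paths to each LUN are visible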

  • I want to add a Quality Assurance domain. How do I handle DNS servers?

    - by Tim
    I'm advising a large client on how to isolate their dev and testing from their production. They already have one domain, let's say xyz.net, with the Active Directory domain "XYZ01". I want to add a second domain, say QAxyz.net, and make its Active Directory domain "QA01". All development and QA servers would be moved to the QAxyz.net domain, and those machines would be members of the QA01 domain. Note: some of these servers will have the same names as the production servers, for testing purposes.

    I believe we would have separate DNS servers for each domain. If I am logged into the QA01 domain, to access the production domain I would qualify my access like so: \\PRODSERVER.xyz.net, login: XYZ01\username. Do I need to add a forwarder to my QAxyz.net DNS server so that it can see xyz.net? Would I need to do the same on the xyz.net DNS server to see QAxyz.net? I don't know how to advise them on this. Does anyone have any other recommendations for isolating a QA domain? Many thanks in advance! Tim

  • Windows 2008 R2 Servers Sending ARP Requests for IPs Outside Their Subnet

    - by Kyle Brandt
    By running a packet capture on my routers, I see some of my servers sending ARP requests for IPs that exist outside of their network. For example, if my network is:

        Network: 8.8.8.0/24
        Gateway: 8.8.8.1 (MAC: 00:21:9b:aa:aa:aa)
        Example server: 8.8.8.20 (MAC: 00:21:9b:bb:bb:bb)

    then by running a capture on the interface that has 8.8.8.1, I see requests like:

        Sender MAC: 00:21:9b:bb:bb:bb  Sender IP: 8.8.8.20
        Target MAC: 00:21:9b:aa:aa:aa  Target IP: 69.63.181.58

    Has anyone seen this behavior before? My understanding of ARP is that requests should only go out for IPs within the subnet; am I confused in my understanding of ARP? Also, these seem to happen in bursts, and it doesn't happen when I do something like ping an IP outside of the network.

    Update, in response to Ian's questions: I am not running anything like Hyper-V. I have multiple interfaces, but only one is active (using BACS failover teaming). The subnet mask is 255.255.255.0, and even if it were something different, it wouldn't explain an IP like 69.63.181.58. When I run MS Network Monitor or Wireshark on the host, I do not see these ARP requests. What happens is that in the capture on the router I see a burst of about 10 requests for IPs outside of the network coming from the host machine, while on the machine itself, using Wireshark or NetMon, I see a flood of ARP responses for all the machines on the network; however, I don't see any requests in that capture asking for those responses. So it seems like the host is maybe refreshing its ARP cache but including IPs that are outside of the network. Why doesn't NetMon show the ARP requests when it does this?
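
    To see whether the bursts line up with some periodic task, one option is to capture just that server's ARP traffic with inter-packet timestamps, on the router side or a mirror/span of the server's switch port; a sketch, with an illustrative interface name:

        sudo tcpdump -i eth0 -n -e -ttt 'arp and ether src 00:21:9b:bb:bb:bb'
        # -e prints MAC addresses, -ttt prints the time delta between packets, which makes bursts obvious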

  • nmap installation issue

    - by daasf
    A vanilla CentOS install with the latest updates; I installed gcc, and ./configure finished with "Configuration complete. Type make (or gmake on some *BSD machines) to compile." Then:

        [root@winxp nmap-5.51]# make
        Makefile:375: makefile.dep: No such file or directory
        g++ -MM -I./liblua -I./libdnet-stripped/include -I./libpcre -I./libpcap -I./nbase -I./nsock/include -DHAVE_CONFIG_H -DNMAP_NAME=\"Nmap\" -DNMAP_URL=\"http://nmap.org\" -DNMAP_PLATFORM=\"x86_64-unknown-linux-gnu\" -DNMAPDATADIR=\"/usr/local/share/nmap\" -D_FORTIFY_SOURCE=2 main.cc nmap.cc targets.cc tcpip.cc nmap_error.cc utils.cc idle_scan.cc osscan.cc osscan2.cc output.cc payload.cc scan_engine.cc timing.cc charpool.cc services.cc protocols.cc nmap_rpc.cc portlist.cc NmapOps.cc TargetGroup.cc Target.cc FingerPrintResults.cc service_scan.cc NmapOutputTable.cc MACLookup.cc nmap_tty.cc nmap_dns.cc traceroute.cc portreasons.cc xml.cc nse_main.cc nse_utility.cc nse_nsock.cc nse_dnet.cc nse_fs.cc nse_nmaplib.cc nse_debug.cc nse_pcrelib.cc nse_binlib.cc nse_bit.cc > makefile.dep
        /bin/sh: g++: command not found
        make: *** [makefile.dep] Error 127

        [root@winxp nmap-5.51]# yum install g++ -y
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * addons: mirror.ash.fastserv.com
         * base: centos.mirror.choopa.net
         * extras: mirror.trouble-free.net
         * updates: mirror.nexcess.net
        Setting up Install Process
        No package g++ available.
        Nothing to do
        [root@winxp nmap-5.51]#
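
    The build needs the C++ compiler, and on CentOS the yum package is named gcc-c++ rather than g++; a short sketch of the likely fix:

        sudo yum install -y gcc-c++
        make
        sudo make install    # optional, once the build completes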

  • chkconfig creating service symlinks with the wrong order

    - by Robert
    On RHEL 6.3, I have a system service that should be starting after postgresql and httpd (order 64 and 85, respectively), but chkconfig always places it at order 50. I tried an experiment on a CentOS 6.0 virtual machine to make sure I understood the LSB stanza syntax. I created /etc/init.d/foo, owner root, permissions 755, with this text:

        ### BEGIN INIT INFO
        # Provides:       foo
        # Required-Start: postgresql httpd
        # Default-Start:  2 3 4 5
        # Default-Stop:   0 1 6
        # Description:    Foo init script
        ### END INIT INFO

    and then ran chkconfig --add foo. Result: /etc/rc5.d/S86foo is created, as expected (the other runlevels are also as expected). I repeated the exact same experiment on the RHEL machine, and it created /etc/rc5.d/S50foo instead. I can't see anything different between the two that would lead to different results. Both machines have postgresql and httpd starting at the same orders and runlevels.

    Any thoughts? I could just use # chkconfig: 2345 86 50, or manually rename the service symlinks to the correct order, but I'm trying to document an install process for later users, and I want to know how to do it right and understand why it's not working as expected.
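
    One thing worth checking on the RHEL box is how the dependencies themselves are registered: if the postgresql or httpd init scripts there lack LSB "Provides:" headers (or use different provide names), chkconfig cannot resolve Required-Start and may fall back to a default order. A diagnostic sketch:

        chkconfig --list | grep -E 'foo|postgresql|httpd'
        ls -l /etc/rc5.d/ | grep -E 'foo|postgresql|httpd'
        grep -E 'Provides|chkconfig' /etc/init.d/postgresql /etc/init.d/httpd   # do they declare LSB headers at all?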

  • How can I use the `eject` command on a computer I have SSH'd into?

    - by will
    If I run eject on my machine, it works exactly as expected; however, if I SSH into the machine next to me and do the same thing, it does not work.

    My computer:

        eject: using default device `cdrom'
        eject: device name is `cdrom'
        eject: expanded name is `/dev/cdrom'
        eject: `/dev/cdrom' is a link to `/dev/sr0'
        eject: `/dev/sr0' is not mounted
        eject: `/dev/sr0' is not a mount point
        eject: checking if device "/dev/sr0" has a removable or hotpluggable flag
        eject: `/dev/sr0' is not a multipartition device
        eject: trying to eject `/dev/sr0' using CD-ROM eject command
        eject: CD-ROM eject command succeeded

    Other computer:

        eject: using default device `cdrom'
        eject: device name is `cdrom'
        eject: expanded name is `/dev/cdrom'
        eject: `/dev/cdrom' is a link to `/dev/sr0'
        eject: `/dev/sr0' is not mounted
        eject: `/dev/sr0' is not a mount point
        eject: checking if device "/dev/sr0" has a removable or hotpluggable flag
        eject: `/dev/sr0' is not a multipartition device
        eject: unable to open `/dev/sr0'

    If I look in the /dev/ directory, I find cdrom, which is a symlink to sr0, as mentioned by the verbose output of eject -v. On my machine, if the drive is open, trying to look at the device closes it, and then:

        $ less sr0
        sr0 is not a regular file (use -f to see it)
        $ less -f sr0
        sr0: No medium found

    but if I do the same on the other computer:

        $ less -f sr0
        sr0: Permission denied

    Looking at the device file more closely, I get this on both machines:

        $ ls -la sr0
        brw-rw----+ 1 root cdrom 11, 0 Nov 12 10:13 sr0

    Does anyone know a way around this? I do not have root access.
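
    The trailing '+' in that ls output means the device carries an ACL; many desktop distros grant the locally logged-in console user access to removable drives dynamically (via ConsoleKit/udev ACLs), and an SSH session never receives that grant. A quick check, assuming the acl tools are installed on the remote machine:

        getfacl /dev/sr0    # shows which user the console session has been granted access for
        groups              # are you in the 'cdrom' group on the remote machine?

    If the group is the only route, being added to the cdrom group on the remote host (by an admin) would likely be enough, since the device is group-readable and writable.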

  • Restoring Windows 2008 Server X86 and X64

    - by rihatum
    Restoring a Windows 2008 Server domain controller: we are using Backup Exec System Recovery 2010 to image our DC, and this software has a feature to convert the backup into a VMware or Hyper-V VM. I have also used disk2vhd to convert one of our DCs to a VHD, and when I attached it to Hyper-V it booted fine and I can log in - BUT: as soon as I log in, I get an activation error telling me to change the product key because this product key isn't valid for this machine, and so on.

    The question is: in a real recovery situation, what would be the procedure to restore it, either virtual or onto a physical box, and still be able to log in, change the product key, etc.? In this scenario it's just locked down and I can't do anything; if this is the case, how would I replicate my production environment via these tools? Any ideas? I would be grateful for some real-world examples here. The same thing happens with our Exchange backup / test restore, whether physical or virtual: I can log in but do nothing else.

    We don't have the keys, as they are OEM keys, and I am wondering what would happen in a real scenario - would we be purchasing another key, or using the OEM key on our new server? This is a test environment I am trying to create by restoring our backups onto either Hyper-V or physical test machines.

    Also, if I build up a machine (Server 2008) in a VM (Hyper-V), how can I restore just the system state backup of my DC into it? Will that give me the activation error too, even though I would use the trial ISOs provided by Microsoft? Kind regards

  • Splitting a build across the network?

    - by Dandikas
    Is there a known solution for splitting a build process across networked machines?

    Use case: we are an average software development company. We own around 50 development workstations (Quad Core 2.66 GHz, 4 GB RAM, 200 GB RAID). Needless to say, at any given moment not every machine is loaded to the max. There are 5 to 15 projects running simultaneously at any single moment. Obviously all of them are continuously built on the build server, then deployed to the proper environment. A single project build takes from 3 to 15 minutes.

    The problem: whenever we build 5 projects in a row, the last project is ready after around 25-50 minutes. Building in parallel does not solve the problem (the build is only part of the game; then you need to deploy, run tests, etc.). Yes, the correct solution is to add another build server, but "that involves buying new expensive hardware, and we already spent a lot!" Yeah, right (damn them)!

    Anyway, what about splitting the build among developer workstations? Let's say that whenever we need to build project "A", we check 5 workstations and start the build on all that are not overloaded. The build can be cancelled by a developer if he really needs all the power of his machine, as long as there is at least one machine still building. After the build is finished, deployment can be performed to the proper environment (hosted on some server, not on a workstation :) ). The bigger the company, the more this makes sense to me. Has anyone tried something like this? Are there any good practices? Any helpful software?
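
    For the compile step of C/C++ projects, distcc is one existing tool that does roughly this: it farms compilation jobs out to idle workstations while linking and everything else stays on the build server. It does not help with deploys or test runs. A minimal sketch, with hypothetical host names and an illustrative subnet:

        # on each helper workstation: accept compile jobs from the build subnet
        distccd --daemon --allow 10.0.0.0/24

        # on the build server: list the helpers and raise the parallelism of the compile step
        export DISTCC_HOSTS="localhost workstation1 workstation2 workstation3"
        make -j12 CC="distcc gcc" CXX="distcc g++"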

  • Recommendation on remote access setup for accessing customer systems

    - by gregmac
    I'm looking for a product recommendation (open source or commercial) that will allow remote access to customer sites for tech support purposes. We need to be able to gain access to help troubleshoot problems on servers. Currently we end up using anything from RDP on a public IP, to whatever VPNs the clients happen to have, to WebEx-type sessions that require lots of interaction from both sides to get working. This often means a problem that could take 10 minutes to solve takes an extra 30+ minutes of messing around trying to get a connection up.

    There are multiple customer sites, which should NOT have access to each other. At each site there are anywhere from 1 to 8 servers (Windows 2003 or 2008) that need to be accessed. Requirements:

    - Support connections to machines even if they're behind a firewall/router with no public IP
    - Be able to selectively allow/deny access from the customer site
    - The customer site should not be able to connect outbound to anywhere else (our systems, or other customer sites)
    - Support multiple users from our end
    - If not a VPN connection (where RDP could be used over top), it should support remote desktop access (including copy/paste) and file transfers
    - Preferably some way to list all remote systems, showing online/offline status

    Anyone have any suggestions?

  • No Homegroup Computers, Network Troubleshooter Fails

    - by Mokubai
    I have a problem with my Windows 7 Homegroup, between two Windows 7 Home Premium machines. On one machine I get the problem shown in a screenshot (not reproduced here): no Homegroup computers are listed. The other machine in the Homegroup is perfectly happy and is able to see and browse this faulty machine as if there is nothing wrong. The Network and Sharing Center shows that I am joined to a Homegroup on my "Home" network and nothing is out of the ordinary. I have tried leaving the Homegroup and rejoining/recreating it several times, and that does nothing at all. Normal browsing by machine name and looking through folders seems to work, but it's a much clunkier way to get at things compared to the convenience of the Homegroup facilities.

    Starting the troubleshooter detects a problem with a "Peer Networking" (PNRP or similar) service not starting, but fails to fix anything. Sure enough, when I go to view the services via Control Panel - Administrative Tools - Services, I see that both the "Peer Name Resolution Protocol" and "Peer Networking Grouping" services are stopped. Attempting to start "Peer Networking Grouping" gives an error that a dependency service will not start; the only service it depends on is "Peer Name Resolution Protocol", and when I try to start that I get an error saying that the "service could not start due to error 0x80630801".

    This has happened before, and I fixed it then by using System Restore to roll the machine back to a week earlier, when I knew it had all worked. This time, though, I cannot remember when I last used the Homegroup from this machine, and I've installed quite a bit since, so I don't want to go fumbling through restore points trying to find one that works. Can anyone tell me if there is a way to reset things so that this machine is able to use the Homegroup again?

  • Server 2012, Jumbo Frames - should I expect problems?

    - by TomTom
    OK, this might sound stupid, but is there any downside in practice to just enabling jumbo frames? From what I understand: any switch or Ethernet adapter that sees a jumbo frame it cannot handle will just drop it. TCP is not a problem, as the maximum segment size is negotiated during connection setup. UDP is a theoretical problem, as a server may just send a large UDP packet that gets dropped on the way. Practically though, as UDP is packet based, I do not really think any software would send a UDP packet larger than 1500 bytes on the wire without application-level configuration changes - at least this is how I do my own programming, as it is quite hard to pick a decent MTU size without testing it yourself, so you fall back to packets of at most 1500 bytes.

    The network in question is a standard small-business network. We have just upgraded from an unmanaged 24-port switch to a 52-port switch with four 10G ports (Netgear, quite cheap) and will move a file server to 10G, which will also serve iSCSI. All my equipment at the Ethernet level can handle at least 9000 bytes, and because of local firewalls I really want to use larger packets (less firewall processing), but the network is also NATed to the Internet. On top of that, different machines quite often move around (download) large files (multi-gigabyte range) for processing.

    The question is: can I expect problems if I just enable jumbo frames? Again, this is not total ignorance - I just don't see programs sending UDP packets larger than 1500 bytes (if that is a practical problem, please tell me), and for TCP the maximum segment size is negotiated anyway. If there is a problem I can move to a dedicated VLAN, but that has its own share of problems, as basically most workstations would then have to be on both VLANs.
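
    One practical way to validate the change is a don't-fragment ping at the jumbo size between two hosts that should support it: 8972 bytes of payload plus 28 bytes of IP/ICMP headers makes a 9000-byte packet. From a Linux host it would look like the sketch below (on Windows the equivalent flags are ping -f -l 8972 <host>); the host name is illustrative:

        ping -M do -s 8972 fileserver    # must succeed end to end if every hop honours 9000-byte frames
        ping -M do -s 1472 8.8.8.8       # a standard-MTU path to the Internet should still pass 1500-byte packets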

  • Printing on Windows 8 with a PDF viewer (Adobe Reader) run from a network drive

    - by Bongo
    I have a problem with Adobe Reader 8, and the problem seems to be equally bad with other PDF viewers. Here is the configuration: my PDF viewer is located on the network drive "Z:", which is the network address \\dgs-main\progs. I start Adobe Reader from \\dgs-main\progs\Adobe\Reader 8.0\Reader\AcroRd32.exe and open the PDF from C:\Users\ServiceDesk\AppData\Local\Temp\GeneratedPDF.pdf.

    The problem is as follows: if I open the PDF with a locally installed PDF viewer, everything works fine and I can print the document. If I open the PDF with the viewer on the network drive, it opens, but printing is impossible. The error message states: "Unable to start print job. Is printer available?" As mentioned above, it works with a local PDF viewer, and in both cases I use the same printer. The printer is a network printer, but it fails even with a local printer. The error occurs only on Windows 8 machines; on Windows 7 it works fine. I hope somebody can tell me what the problem is. Thanks in advance and have a fine day.

  • How can I cause Task Scheduler to "fail" if a dialog box returns a certain result?

    - by Roger
    I'm working on a VBScript to do a weekly reboot of all machines on our network. I want to run this script via Task Scheduler. The script runs at 3:00 AM, but there is a small chance that users may still be on the network at that time, and I need to give them the option to terminate the reboot. If they do so, I would like the reboot to occur the next night at 3:00 AM, and I've set Task Scheduler up to repeat in this way. So far, so good.

    The problem is that if the user selects "Cancel" in my script, Task Scheduler does not see the task as failed, and won't run it again the next night. Any ideas? Can I pass an error code to Task Scheduler, or otherwise abort the task from VBScript? My code is below:

        Option Explicit
        Dim objShell, intShutdown
        Dim strShutdown, strAbort

        ' -r = restart, -t 600 = 10 minutes, -f = force programs to close
        strShutdown = "shutdown.exe -r -t 600 -f"
        Set objShell = CreateObject("WScript.Shell")
        objShell.Run strShutdown, 0, False

        ' Go to sleep so the message box appears on top
        WScript.Sleep 100

        ' Message box giving the user the chance to abort the shutdown
        intShutdown = (MsgBox("Computer will restart in 10 minutes. Do you want to cancel computer restart?", vbYesNo + vbExclamation + vbApplicationModal, "Cancel Restart"))
        If intShutdown = vbYes Then
            ' Abort shutdown
            strAbort = "shutdown.exe -a"
            Set objShell = CreateObject("WScript.Shell")
            objShell.Run strAbort, 0, False
        End If

        WScript.Quit

    Appreciate any thoughts.

  • Joining Samba to Active Directory with local user authentication

    - by Ansel Pol
    I apologise that this is somewhat incoherent, but hopefully someone will be able to make enough sense of it to understand what I'm trying to achieve and provide pointers.

    I have a machine with two network interfaces connected to two different networks (one of which it provides several other services for, such as DNS), running two separate instances of Samba, one bound to each interface. One of the instances is just a workgroup-style setup using share-level authentication, which is all working fine. The problem is that I'm looking to join the other instance to an MS Active Directory domain (provided by MS Windows Small Business Server 2003) to enable a subset of the domain users to access the shares from Windows machines on the other network. The users who need access from the domain environment have accounts (whose names are all-lowercase versions of their domain usernames) on the machine running Samba, but I'm not sure how to map the UIDs, and everything I've read concerns authenticating accounts on that machine against either AD or another LDAP server.

    To clarify: I only want the credentials for AD users accessing the non-workgroup Samba instance to be authenticated against AD, not the accounts on the machine running Samba. I hope this is sufficiently clear.

    EDIT: In addition to being able to access the Samba shares from AD, I also need to be able to access a share on the domain from the machine running Samba, but would still like everything non-Samba-related to authenticate locally.
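
    A common shape for this kind of setup is to run the AD-facing instance with security = ads, join it to the domain, and map DOMAIN\User onto the existing lowercase local accounts with a "username map" file; the details vary, so treat the following as a sketch only, with a hypothetical per-instance configuration path:

        sudo net ads join -U Administrator -s /etc/samba/smb-ad.conf    # join only the AD-facing instance
        sudo net ads testjoin -s /etc/samba/smb-ad.conf                 # verify the machine account works
        wbinfo -u                                                       # if winbind is used, list the domain users it can see
        getent passwd someuser                                          # confirm the local lowercase account exists to map onto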

  • Unable to force Debian to do an unattended install... libc6 wants interactive confirmation

    - by JD Long
    I'm trying to create a script that forces a Debian Lenny install to install the latest version of CRAN R. During the install it appears libc6 is upgraded, and the installer wants interactive confirmation that it's OK to restart three services (mysql, exim4, cron). This process HAS to be unattended, as it runs on Amazon's Elastic MapReduce (EMR) machines, and I'm running out of options. Here are a few things I've tried.

    This previous question appears to be exactly what I'm looking for, so I set up my install script as follows:

        # set my CRAN repos... yes, I know there's a new convention on where to put these.
        echo "deb http://cran.r-project.org/bin/linux/debian lenny-cran/" | sudo tee -a /etc/apt/sources.list
        echo "deb-src http://cran.r-project.org/bin/linux/debian lenny-cran/" | sudo tee -a /etc/apt/sources.list

        # set the dpkg.cfg options per the previous SuperUser question
        echo "force-confold" | sudo tee -a /etc/dpkg/dpkg.cfg
        echo "force-confdef" | sudo tee -a /etc/dpkg/dpkg.cfg
        export DEBIAN_FRONTEND=noninteractive

        # add key to keyring so it doesn't complain
        gpg --keyserver pgp.mit.edu --recv-key 381BA480
        gpg -a --export 381BA480 > jranke_cran.asc
        sudo apt-key add jranke_cran.asc
        sudo apt-get update

        # install the latest R
        sudo apt-get install --yes --force-yes r-base

    But this script hangs with a request for input (the dialog asking which services to restart; screenshot not reproduced here). So I tried stopping the services first:

        sudo /etc/init.d/mysql stop
        sudo /etc/init.d/exim4 stop
        sudo /etc/init.d/cron stop
        sudo apt-get install --yes --force-yes libc6

    This does not work either: the interactive screen comes back, this time with only cron listed as a service that must be restarted. So is there a way to make libc6 just restart these services with no user input? Or is there a way to stop cron so it does not cause an interactive prompt? Maybe a creative option I've never thought of? Keep in mind that this system is brought up, some Hadoop code is run, and then it's torn down, so I can put up with side effects and bad behavior that we might not want on a production desktop machine or web server.
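
    Two things may be worth noting here. First, an exported DEBIAN_FRONTEND is not passed through sudo by default, so it is safer to set it on the sudo command line itself. Second, the restart prompt is a debconf question owned by libc6, so it can be pre-answered with debconf-set-selections; the exact question name below is an assumption and should be confirmed on the target release (debconf-get-selections comes from the debconf-utils package). A sketch:

        sudo apt-get install -y debconf-utils
        sudo debconf-get-selections | grep -i restart        # confirm the real question name on this release
        echo 'libc6 libraries/restart-without-asking boolean true' | sudo debconf-set-selections
        sudo DEBIAN_FRONTEND=noninteractive apt-get install -y --force-yes r-base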

  • Getting SMB file shares working over a PPTP VPN

    - by Ben Scott
    I'm having issues getting SMB file shares working over a PPTP VPN. The server setup consists of a security device (DrayTek V3300) which passes the PPTP authentication to an SBS 2003 server running RRAS. The server is the DC and provides DNS and WINS, its single NIC's name server is set to 127.0.0.1, and DHCP on the DrayTek hands out the server's IP as the DNS server.

    If I create a new VPN connection in Windows 7, leaving everything as default apart from the server, username, password and domain, I can:

    - ping everything by IP address
    - resolve IPs with nslookup using their fully qualified names, as in nslookup fileserver.mydomain.local
    - ping machines by fully qualified name, as in ping fileserver.mydomain.local

    However, if I try to access a file share:

    - within Explorer, I get "Windows cannot access ..." with "Error code: 0x80004005 Unspecified Error"
    - using net use z: \\fileserver.mydomain.local\share, I get "System error 53 has occurred. The network path was not found."

    If I add the machine name to my HOSTS file I can use the file share, which is my last-ditch workaround, but I have a number of VPN users and would rather a solution that doesn't involve me trying to hand-edit system files on computers half a country away. If I set the WINS server explicitly in the connection's IPv4 settings, I don't have to use the fully qualified name to ping the machine, but that doesn't change anything else.

  • Can I get a domain controller not to act as DNS for the members?

    - by rsw
    Hi. Let me try to explain my current setup. I have one Linux machine acting as DHCP and DNS server (dhcpd3 and BIND) on my network. This works fine: all computers I hook up to the network get an IP address and the proper DNS servers set. Let's call it 10.12.0.10. However, we also have a Windows Server 2003 domain controller on our network to which we add our Windows computers (running XP); let's call it 10.12.0.20.

    I noticed that when I run nslookup on one of the Windows machines, it says that the primary DNS is 10.12.0.20. This has not been much of a problem so far, since the Windows clients are stationary and the Windows server itself points at my real DHCP/DNS, so I can reach everything specified in it.

    However, this turns out to be a problem when we use laptops. They connect to the domain here and get a DNS server, but when a user travels or connects the computer from home, we hit a problem: they are connected to their Internet connection, but their DNS is 10.12.0.20, which they can't reach since they're at home and not on the office network. I solved this by removing the registry key called "NameServer" with the value 10.12.0.20, but it gets set again whenever they log on to the domain the next time (when they get back to the office). Can I somehow make the computers take whatever DNS server they are handed when connecting to the Internet or a home network, instead of always trying to reach the domain controller?

  • Firefox not using Kerberos despite being configured to

    - by Nicolas Raoul
    I am deploying Linux/Firefox on a corporate Kerberos network. I followed this Kerberos-on-Firefox procedure, but Firefox still does not authenticate via the company's Kerberos. I am using Firefox 3.0.18 on Red Hat EL Server 5.5. Here is what I did:

    - Ran kinit on the command line to create a Kerberos ticket.
    - Checked with klist: the ticket is valid until tomorrow, and the service principal is krbtgt/[email protected].
    - In Firefox, set network.negotiate-auth.trusted-uris and network.negotiate-auth.delegation-uris to .dc.thecompany.com.
    - Loaded the company's portal page via its full hostname: http://server37.thecompany.com/alfresco. (Note: server37 is actually the machine I am running Firefox on, but that should not be a problem, I guess.)

    PROBLEM: the company's intranet portal still serves me the login/password page. The same portal correctly uses Kerberos on Internet Explorer/Windows 7 machines with the same settings, and shows the user's personal page. The server does not see any Kerberos request coming in. Did I do something wrong? I enabled NSPR_LOG_MODULES=negotiateauth:5 as explained here, but the log file stays empty.
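
    A few checks outside the browser can split the problem between the ticket, the SPN and Firefox itself; a sketch, assuming the MIT Kerberos client tools are installed and that curl was built with GSSAPI support, and noting that the NSPR variables only take effect for a Firefox instance started from the shell that exported them:

        klist                                              # is the TGT really there for this user?
        kvno HTTP/server37.thecompany.com                  # does the domain hold an SPN for the web server at all?
        curl --negotiate -u : -I http://server37.thecompany.com/alfresco    # does SPNEGO work outside Firefox?

        export NSPR_LOG_MODULES=negotiateauth:5
        export NSPR_LOG_FILE=/tmp/negotiateauth.log
        firefox http://server37.thecompany.com/alfresco &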

  • SQL Server 2008: Getting Login failed for user "Domain\User". Failed to open the explicitly specified database [CLIENT: IP.ADD.RR.ESS]

    - by GodEater
    This is a very similar issue to "SQL Server 2008 login problem with ASP.NET application: Failed to open the explicitly specified database", which unfortunately seems to have gone unsolved. My issue here is subtly different. Firstly, the account failing login is not 'NT AUTHORITY\NETWORK SERVICE' but an actual domain account. Secondly, there are two machines involved: I gathered from the first question that it was a single machine running both the IIS and SQL instances, whereas here the application trying to connect to the database is an ASP.NET one running on another server (if that makes any difference; I'm not sure it does).

    The connection string used in the web.config for the application is:

        data source=MySQLServer;initial catalog=MyDatabase;integrated security=sspi;

    and the application pool is set to NetworkService for its identity. In the web app I get the following error:

        Cannot open database "MyDatabase" requested by the login. The login failed.
        Login failed for user 'MyDomain\WebServerMachineName$'

    In the SQL Server logs I see:

        Login failed for user 'MyDomain\WebServerMachineName$'. Reason: Failed to open the explicitly specified database. [CLIENT: Web.Server.IP.Address]

    Running this bit of SQL against the database in question:

        USE [MyDatabase]
        GO
        SELECT SDP.name AS [User Name], SDP.type_desc AS [User Type], UPPER(SDPS.name) AS [Database Role]
        FROM sys.database_principals SDP
        INNER JOIN sys.database_role_members SDRM ON SDP.principal_id = SDRM.member_principal_id
        INNER JOIN sys.database_principals SDPS ON SDRM.role_principal_id = SDPS.principal_id

    gets me this result:

        MyDomain\WebServerMachineName$   WINDOWS_USER   DB_DDLADMIN
        MyDomain\WebServerMachineName$   WINDOWS_USER   DB_DATAREADER
        MyDomain\WebServerMachineName$   WINDOWS_USER   DB_DATAWRITER

    which appears to me to indicate that I've got the permissions right. Does anyone have any idea why it's not working, or how I can narrow the issue down some more?

  • Failing Windows Updates with Error Code 800719e4

    - by Kev
    On a number of Vista machines I have now come across the same error: when installing updates everything works fine until after the reboot, when the updates roll back during step 3. On every occasion where a simple retry hasn't worked, the error code has been 800719e4. On my own laptop I have so far tried the following:

    - Installed the updates one by one manually, starting with the smallest and working towards the largest, which has left me with "Security Update for Windows (KB2286198)" refusing to install.
    - Renamed all the files in C:\Windows\Logs\CBS to xxx.old (where xxx was the original name) with Windows Update turned off - no change.
    - Renamed all the folders in C:\Windows\SoftwareDistribution in the same manner - no change.
    - Attempted to install it manually via Windows6.0-KB2286198-x86.msu - no change.
    - Tried to uninstall IE8 - doesn't work; it rolls back at the end. (Installing the IE9 beta when it launched was what alerted me to the issue on this laptop.)
    - Ran a "Fix It" tool from the Microsoft website - no help (can't find the link now).
    - Tried to recover from disc - but alas, my laptop only has a recovery partition (and originally shipped without a service pack).
    - Ran with nothing on startup and only Microsoft services - again, no change.

    Google is being useless, with a load of posts trying to get me to call a telephone number with letters in it (presumably an American number). The error code appears to mean "error log full", but no one has any idea which log! The Windows Update log does indicate the following as the error point, though:

        2010-10-23 13:54:48:230 1240 738 Handler WARNING: Got extended error: "POQ Operation SetKeyValue OperationData \Registry\machine\Schema\wcm://Microsoft-Windows-shell32?version=6.0.6002.18287&language=neutral&processorArchitecture=x86&publicKeyToken=31bf3856ad364e35&versionScope=nonSxS&scope=allUsers\metadata\elements\HKEY_CLASSES_ROOT_lnkfile_shellex_DropHandler_defaultValue, @default, , ewAwADAAMAAyADEANAAwADEALQAwADAAMAAwAC0AMAAwADAAMAAtAEMAMAAwADAALQAwADAAMAAwADAAMAAwADAAMAAwADQANgB9AAAA"

    Has anyone any idea how to fix this once and for all? Reinstalling laptop after laptop from scratch is mildly annoying at work, where Office and Firefox are the only extras, but even more annoying at home - I don't fancy going through the palaver of reinstalling everything yet again.

  • SSH into remote server using Public-private keys

    - by maria
    Hi. I have recently set up SSH on two Linux machines (let's call them server-a and client-b). I generated an SSH key pair on client-b using ssh-keygen and can see both the public and private files in the .ssh directory; I named them 'example' and 'example.pub'. I then added example.pub to server-a's authorized_keys file.

    When I try to SSH into server-a, it still asks for password authentication, whereas I want a passwordless login (the private key on client-b was created without a passphrase). When I run ssh with '-v' I get the following output:

        debug1: Next authentication method: publickey
        debug1: Trying private key: /Users/abc/.ssh/identity
        debug1: Offering public key: /Users/abc/.ssh/id_rsa
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Offering public key: /Users/abc/.ssh/id_dsa
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug2: we did not send a packet, disable method
        debug1: Next authentication method: keyboard-interactive
        debug2: userauth_kbdint
        debug2: we sent a keyboard-interactive packet, wait for reply
        debug2: input_userauth_info_req
        debug2: input_userauth_info_req: num_prompts 1
        Password:

    Please help.
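
    The -v output shows ssh offering only the default identities (identity, id_rsa, id_dsa), so a key saved under the non-default name 'example' is never tried. A sketch of the usual fixes, with placeholder user and host names; note that key auth also fails silently if the server-side permissions are too open:

        # offer the key explicitly for a one-off test
        ssh -i ~/.ssh/example -v user@server-a

        # or make it permanent for this host on client-b
        cat >> ~/.ssh/config <<'EOF'
        Host server-a
            IdentityFile ~/.ssh/example
        EOF

        # on server-a: sshd ignores authorized_keys if these are group- or world-writable
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys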

  • Notebook Operating System with extreme support cycles/security updates

    - by leto
    Hello there. After reading the announcements about Mac OS X "Lion" and Apple's policy decisions, I've had enough. I've been a long-time Apple user since 1992 and have always felt at home there, but I have been trying to switch to an alternative operating system for about a year. I've also been working with Unix machines since 2001, so I'm looking at one of the free Unices or a Linux. The last time I looked at the desktop side was 2002, and (choke) much has changed since, it seems, so I'm lost once more in the war between desktop environments and software. To be honest, I don't care what it's called; I want to get my job done.

    Here is the bar an operating system has to clear to be considered:

    - Has to be at least four years old
    - Has to supply security updates for the current release for at least a year
    - Production-quality stability for the whole desktop environment (!)
    - No f****g commercial stuff that tends to push a privacy-invading app store or cloud storage on me

    So far I'm running a MacBook from 2007 (4 GB memory, 250 GB disk) and I need:

    - IMAPS for mail (which I've used since 1995)
    - A web browser
    - A shell
    - Keeping current with updates/upgrades with no more than 5 minutes spent entering commands (which makes it hard for OpenBSD ;-) )
    - A desktop file manager would be nice, but is a bonus

    What can you suggest as an operating system? The one with the longest support cycles and the best chance of surviving the next 10 years will win a new user, who will even send patches when needed :-)
