Search Results

Search found 21319 results on 853 pages for 'state management'.

Page 687 of 853

  • Validating signature trust with gpg?

    - by larsks
    We would like to use gpg signatures to verify some aspects of our system configuration management tools. Additionally, we would like to use a "trust" model where individual sysadmin keys are signed with a master signing key, and then our systems trust that master key (and use the "web of trust" to validate signatures by our sysadmins). This gives us a lot of flexibility, such as the ability to easily revoke the trust on a key when someone leaves, but we've run into a problem. While the gpg command will tell you if a key is untrusted, it doesn't appear to return an exit code indicating this fact. For example: # gpg -v < foo.asc Version: GnuPG v1.4.11 (GNU/Linux) gpg: armor header: gpg: original file name='' this is a test gpg: Signature made Fri 22 Jul 2011 11:34:02 AM EDT using RSA key ID ABCD00B0 gpg: using PGP trust model gpg: Good signature from "Testing Key <[email protected]>" gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: ABCD 1234 0527 9D0C 3C4A CAFE BABE DEAD BEEF 00B0 gpg: binary signature, digest algorithm SHA1 The part we care about is this: gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. The exit code returned by gpg in this case is 0, despite the trust failure: # echo $? 0 How do we get gpg to fail in the event that something is signed with an untrusted signature? I've seen some suggestions that the gpgv command will return a proper exit code, but unfortunately gpgv doesn't know how to fetch keys from keyservers. I guess we can parse the status output (using --status-fd) from gpg, but is there a better way?
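
    Since parsing the --status-fd output seems to be the practical route, here is a minimal sketch of that approach (assuming GnuPG 1.x status keywords such as GOODSIG and TRUST_FULLY/TRUST_ULTIMATE; the file name is a placeholder):

        #!/bin/sh
        # Verify the signature, then additionally require that the signing key
        # is fully or ultimately trusted before reporting success.
        status=$(gpg --status-fd 1 --verify foo.asc 2>/dev/null)
        echo "$status" | grep -q '^\[GNUPG:\] GOODSIG ' || { echo "bad signature" >&2; exit 1; }
        echo "$status" | grep -Eq '^\[GNUPG:\] TRUST_(FULLY|ULTIMATE)' || { echo "key not trusted" >&2; exit 2; }
        echo "signature good and signing key trusted"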

    Read the article

  • Still about SSD potentials...write and read speed

    - by Macroideal
    I have been working with SSDs (solid-state disks) for several months, and questions keep coming up because I am new to them. Lately I have been testing the read and write speed of an SSD, something I care a lot about, but the results turned out worse than I expected. Three kinds of read/write tests were implemented: (1) reading and writing the SSD directly, opening it as a whole device (in Windows: _open("\\:g", ***)); this is tricky because both the transfer size and the disk offset must be multiples of 512 bytes, so to write even one byte or 4 bytes you have to write at least a whole sector at a time; (2) reading and writing files located on the SSD; (3) reading and writing files on a mechanical disk. Comparing these, the SSD performed worse than the mechanical disk. So I am wondering how to get at the potential performance of the SSD, since SSDs are said to be the future replacement for mechanical disks. And yet, when I test the SSD with a professional hard-disk benchmarking tool, it is about twice as fast as the mechanical disk. So, why?

    Read the article

  • Problems installing Trac using apt-get on Ubuntu Jaunty

    - by Ben Waine
    Hi, I'm having some issues getting apt to install Trac correctly on my Ubuntu Jaunty box. Using the command 'apt-get install trac' I get the following output: root@myserver:~# apt-get install trac Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. Since you only requested a single operation it is extremely likely that the package is simply not installable and a bug report against that package should be filed. The following information may help to resolve the situation: The following packages have unmet dependencies: trac: Depends: python-setuptools (> 0.5) but it is not installable Depends: python-pysqlite2 (>= 2.3.2) but it is not going to be installed Depends: python-subversion but it is not installable Depends: libjs-jquery but it is not installable Recommends: python-pygments (= 0.6) but it is not installable or enscript but it is not installable Recommends: python-tz but it is not installable E: Broken packages I have successfully used the command on my Karmic Koala desktop machine and am able to create new projects etc. I thought I might be able to solve the problem by installing all Python-related packages, but this produced very similar output. I have the main, universe and multiverse repositories enabled. It's a remote machine and I have no access to the GUI. I hope someone can help; googling failed to turn up a solution! Thanks, Ben
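
    "not installable" (as opposed to "not going to be installed") usually means apt cannot see the package in any enabled repository, so a first step is to confirm what apt actually knows about. A small diagnostic sketch (package names taken from the error output above):

        sudo apt-get update
        apt-cache policy python-setuptools python-subversion libjs-jquery python-pygments
        grep -r '^deb ' /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null
        sudo apt-get install -s trac    # -s simulates the install and prints the resolver's plan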

    Read the article

  • PBS batch jobs - the qalter command

    - by Ryan Budney
    I've got a giant computation running on a Scientific Linux cluster. At present I have over 600 jobs parked in the queue, waiting for processor time, while a few are running. I'm trying to use the qalter command on some of the idle but scheduled jobs. I'd like to schedule them for a later time, so that other users can jump part of the queue, sort of as an act of politeness. Is this doable? For example, JOBNAME 292399 is currently idle, scheduled to be run whenever a spot in the queue opens up. But if I run qalter -a 10051000 292398 followed by qrerun 292398 I get qrerun: Request invalid for state of job 292398.euler. From the qalter documentation, I thought 10051000 refers to tomorrow (Oct 5th, 10am), but perhaps I'm misunderstanding something? If I'm going about this the wrong way, please let me know. The main thing I'm looking for is a command that's easily scriptable, so that I can modify when my queued tasks get run. qalter seems good for those purposes if I can get it working. I'd rather avoid running qdel and re-qsubbing the computations, as that creates a bookkeeping problem of which tasks to restart (vs. which ones not to), and I want to avoid that kind of bookkeeping. From googling around I notice some qalter commands have rather different date formats, but the above appears to be correct, as far as I can tell from the man docs. Any help would be appreciated.
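
    One detail worth checking: qrerun is meant for jobs that have already started (it requeues a running job), which would explain "Request invalid for state of job" on a job that is still queued. For a queued job, a hold/alter/release sequence may be all that is needed. A sketch, assuming Torque/PBS-style syntax where -a takes [[[[CC]YY]MM]DD]hhmm[.SS] and using the job ID from the question:

        qhold 292398                                                # keep the scheduler from starting it
        qalter -a "$(date -d 'tomorrow 10:00' +%m%d%H%M)" 292398    # eligible no earlier than 10:00 tomorrow
        qrls 292398                                                 # release the hold; -a now gates the start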

    Read the article

  • Cisco SG200 vlan issue in ESXi VSA cluster

    - by George
    I have three Cisco SG200-26 switches, and two ESXi hosts that I have connected as shown in the VMware "best practice" map below: http://communities.vmware.com/servlet/JiveServlet/previewBody/17393-102-1-22458/VSA_networking_map.pdf Even though I created the VLANs on the SG200 and set the two VLANs (508 and 608) as allowed for these untagged ports (where my ESXi NICs are connected), I cannot ping from host 1 to host 2 when the NICs are configured to use VLAN 608. Am I missing something? My IPs are all in the 192.168. range, and the only reason I need the VLANs is to isolate the VSA back-end traffic internally; only the two hosts will be using the VLANs. So I think I do not have to create virtual interfaces on my router in that case; is my understanding correct? I am also including a screenshot of my switch config below. All three switches have the latest firmware (it seems these were originally Linksys and got rebranded as Cisco after the acquisition). http://img31.imageshack.us/img31/2503/switch.gif Any ideas on what to change on the Cisco SG200 to make this work would be appreciated! The second VLAN (608) only needs two IPs: 192.168.0.1 and 192.168.0.2. The first VLAN (508) will have about 15 IPs for ESXi management and the VSA cluster service; I could use either 192.168.1.xx or 10.0.1.xx. The rest of my network (about 50 clients) is in the 192.168.1.xx range. VMware also states that the VLAN protocol on the physical switch must be 802.1Q, not ISL; does anyone know which of the two my SG200-26 uses? In addition, the only requirements from VSA are that my two hosts are in the same subnet, have static IP addresses set, and have the same default gateway configured. If I need inter-VLAN routing for this, I suppose I have to create virtual interfaces on my SonicWall, assign an IP for each VLAN, and then set routes between them? Thank you for your time!

    Read the article

  • Solaris 10 invalid ARP requests from 0.0.0.0?

    - by JWD
    The guys at the data center where I'm hosting a server running Solaris 10 are telling me that my server is making a lot of invalid ARP requests. This is an example of a portion of what was sent to me from the logs (with MAC addresses and IP addresses changed): xxxx:xxxx:xxxx/0.0.0.0/0000.0000.0000/[myipaddress]/[Datestamp]) I don't see anything in the ARP table (arp -a) or routing tables (netstat -r), and I don't see anything relating to 0.0.0.0 when snooping the ARP requests. The only place I see any reference to 0.0.0.0 is in the SCTP section of netstat -a: SCTP: Local Address Remote Address Swind Send-Q Rwind Recv-Q StrsI/O State ------------------------------- ------------------------------- ------ ------ ------ ------ ------- ----------- 0.0.0.0 0.0.0.0 0 0 102400 0 32/32 CLOSED But I'm not really sure what that means, and it doesn't seem like I can disable SCTP. Does anyone have any idea what might be causing this and how to stop it? I think the switch I'm connected to doesn't like it and momentarily drops the connection. Is there any way to at least block those requests, using ipfilter or something else?

    Read the article

  • Azure can't ping or telnet VM from client

    - by Raif
    I have a VM on Azure with an instance of SQL Server 2012 running on it. From my work computer and my home computer I can't get SQL Server Management Studio to connect to it. I have looked at ALL the settings recommended in numerous articles, and everything is set up correctly: the 1433 endpoint (private and public), SQL Server TCP enabled, SQL Server TCP listening on the right port, SQL Server using mixed authentication, Windows Firewall holes poked and then disabled entirely on both client and VM, and I can log in from the VM itself using the credentials that I'm trying to use remotely. Furthermore, I can't ping the DNS name or IP, or telnet to the address, from my local machines. I can, however, hit IIS from a browser using the IP. Strange. CS asked me to download MS Network Monitor, which I did, and pinged and telnetted again. I have the results saved but can't really make heads or tails of them. CS hasn't responded yet. I can post some info here that would help. EDIT: Never one to shrink from a challenge, I deleted my VM and re-did everything. Now it works, although my confidence in Azure is somewhat shaken.

    Read the article

  • Drop database on DB2 9.5 - SQL1035N The database is currently in use

    - by Tommy
    I've never got this working on the first try, but now I can't seem to do it at all. There is a connection pool somewhere using the database, so trying to drop the database while an application is using it should give this error. The problem is that there are no connections to the database when I issue these commands: db2 connect to mydatabase db2 quiesce database immediate force connections db2 connect reset db2 drop database mydatabase This always gives: SQL1035N The database is currently in use. SQLSTATE=57019 Running db2 list applications shows no connections/applications. I can even deactivate the database, but still can't drop it: db2 => deactivate database mydatabase DB20000I The DEACTIVATE DATABASE command completed successfully. db2 => drop database mydatabase SQL1035N The database is currently in use. SQLSTATE=57019 db2 => Anyone got any clues? I'm running the command windows as the local administrator (Windows 2008), who is also the DB2 administrator. The connection-pool user cannot connect while the database is quiesced.
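
    One thing that sometimes clears this is forcing all applications off the instance immediately before the drop, since a pooled client can reconnect in the window between "list applications" and "drop database". A sketch (note that force application all is asynchronous and affects every database on the instance):

        db2 force application all          # kicks off asynchronously; affects the whole instance
        sleep 5                            # give the force a moment to complete
        db2 list applications              # should now report no applications
        db2 drop database mydatabase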

    Read the article

  • Configuration for a two machine ESXi cluster using VSA to present local storage to VMs

    - by MDMarra
    I'm designing a little vSphere 5 cluster for one of our remote sites. We have some IBM x3650s that have 6x 300GB 10K RPM drives in them, along with dual quad core CPUs and 24GB RAM. Because we use HP P4500 G2s at our primary site, we have licenses available for HP P4000 VSAs. I thought that this would be the perfect opportunity to use them. Below is a basic drawing of what I want to accomplish: I want to run a P4000 VSA on each server and run them in a Network RAID-10 (Lefthand speak for network mirroring, think of it as RAID 1 across nodes or as an active/active storage cluster). I will then present this storage to guests that will run on this mini-cluster. It will be managed by a vCenter Server on our main site. All connections will be GbE with two dedicated to storage. Management and Data will share a pair of connections, since I don't expect there to be high load. These servers are just there to provide directory services, dhcp, printing, etc. Does anyone see anything potentially wrong with this approach? Is this the best way to do this without adding additional dedicated storage heads? Are there any pitfalls in this design, besides the lack of dedicated Data/Mgmt interfaces?

    Read the article

  • Resizing a LUKS encrypted volume

    - by mgorven
    I have a 500GiB ext4 filesystem on top of LUKS on top of an LVM LV. I want to resize the LV to 100GiB. I know how to resize ext4 on top of an LVM LV, but how do I deal with the LUKS volume? mgorven@moab:~% sudo lvdisplay /dev/moab/backup --- Logical volume --- LV Name /dev/moab/backup VG Name moab LV UUID nQ3z1J-Pemd-uTEB-fazN-yEux-nOxP-QQair5 LV Write Access read/write LV Status available # open 1 LV Size 500.00 GiB Current LE 128000 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 2048 Block device 252:3 mgorven@moab:~% sudo cryptsetup status backup /dev/mapper/backup is active and is in use. type: LUKS1 cipher: aes-cbc-essiv:sha256 keysize: 256 bits device: /dev/mapper/moab-backup offset: 3072 sectors size: 1048572928 sectors mode: read/write mgorven@moab:~% sudo tune2fs -l /dev/mapper/backup tune2fs 1.42 (29-Nov-2011) Filesystem volume name: backup Last mounted on: /srv/backup Filesystem UUID: 63877e0e-0549-4c73-8535-b7a81eb363ed Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize Filesystem flags: signed_directory_hash Default mount options: (none) Filesystem state: clean with errors Errors behavior: Continue Filesystem OS type: Linux Inode count: 32768000 Block count: 131071616 Reserved block count: 0 Free blocks: 112894078 Free inodes: 32044830 First block: 0 Block size: 4096 Fragment size: 4096 Reserved GDT blocks: 992 Blocks per group: 32768 Fragments per group: 32768 Inodes per group: 8192 Inode blocks per group: 512 RAID stride: 128 RAID stripe width: 128 Flex block group size: 16 Filesystem created: Sun Mar 11 19:24:53 2012 Last mount time: Sat May 19 13:29:27 2012 Last write time: Fri Jun 1 11:07:22 2012 Mount count: 0 Maximum mount count: 100 Last checked: Fri Jun 1 11:03:50 2012 Check interval: 31104000 (12 months) Next check after: Mon May 27 11:03:50 2013 Lifetime writes: 118 GB Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 256 Required extra isize: 28 Desired extra isize: 28 Journal inode: 8 Default directory hash: half_md4 Directory Hash Seed: 383bcbc5-fde9-4720-b98e-2d6224713ecf Journal backup: inode blocks
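
    For what it's worth, here is a sketch of one shrink sequence (filesystem first, then the dm-crypt mapping, then the LV), using the names from the question. The sizes are illustrative: the ext4 filesystem is first shrunk well below the final size and only grown back to fill the mapping at the end. Take a backup first and double-check the arithmetic, since the LUKS1 header occupies the first 3072 sectors of the LV.

        umount /srv/backup
        e2fsck -f /dev/mapper/backup
        resize2fs /dev/mapper/backup 90G                    # shrink the fs below the 100GiB target
        cryptsetup resize backup --size $((90 * 2097152))   # shrink the mapping (size in 512-byte sectors)
        lvreduce -L 100G /dev/moab/backup                   # shrink the LV itself
        cryptsetup resize backup                            # grow the mapping to fill the smaller LV
        resize2fs /dev/mapper/backup                        # grow the fs to fill the mapping
        e2fsck -f /dev/mapper/backup
        mount /srv/backup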

    Read the article

  • Apache2/Shibboleth TCP connections stuck in CLOSE_WAIT

    - by RJT
    I run an Apache2 server which uses the Shibboleth daemon (shibd) as a federated authentication module. Certain server connections using Shibboleth seem to stick permanently in the CLOSE_WAIT state. tcp 38 0 blah.blah:57346 shib.server.:8443 CLOSE_WAIT tcp 38 0 blah.blah:45601 shib.server2:8443 CLOSE_WAIT tcp 38 0 blah.blah:41737 shib.server3:5057 CLOSE_WAIT From what I can find out, CLOSE_WAIT means that when the remote server disconnects, the local application is failing to close the connection as it should. I suspect shibd is responsible somehow. Needless to say, if enough CLOSE_WAIT connections accumulate, I have a problem. Trying to get rid of the CLOSE_WAIT connections by simply using /etc/init.d/networking restart does not work. In fact networking seems to refuse to shut down and restart, and I get a SIOCADDRT: File exists error (i.e. networking is trying to start without having stopped first). The same problem occurs with ifup -a. So I have two questions, one perhaps easy and one harder: what's a good way to force networking to restart, and to force whatever connections are stuck in CLOSE_WAIT to clear? And any ideas about how to fix Shibboleth and force the shibd module to behave?
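
    A note on the networking restart: sockets in CLOSE_WAIT belong to a process that has not yet called close() on them, so they only go away when that process closes them or exits; restarting the network stack underneath them will not clear them. A sketch for confirming which process owns the sockets and bouncing it (assuming shibd turns out to be the owner):

        netstat -tanp | awk '$6 == "CLOSE_WAIT"'   # last column shows the owning PID/program
        ss -tnp state close-wait                   # newer equivalent, if ss is available
        /etc/init.d/shibd restart                  # restarting the owner releases its sockets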

    Read the article

  • Can’t connect to SQL Server 2008 - looks like Shared Memory problem

    - by user38556
    I am unable to connect to my local instance of SQL Server 2008 Express using SQL Server Management Studio. I believe the problem is related to a change I made to the connection protocols. Before the error occurred, I had Shared Memory enabled and Named Pipes and TCP/IP disabled. I then enabled both Named Pipes and TCP/IP, and this is when I started experiencing the problem. When I try to connect to the server with SSMS (with either my SQL Server sysadmin login or with Windows authentication), I get the following error message: A connection was successfully established with the server, but then an error occurred during the login process. (provider: Named Pipes Provider, error: 0 - No process is on the other end of the pipe.) (Microsoft SQL Server, Error: 233) Why is it returning a Named Pipes error? Why would it not just use Shared Memory, as this has a higher priority order in the list of connection protocols? It seems like it is not listening on Shared Memory for some reason? When I set Named Pipes to enabled and try to connect, I get the same error message. My Windows account does not have administrator privileges on my computer; perhaps this is making a difference in some way (as some of the discussions in this post about a "SuperSocketNetLib\Lpc" registry key seem to suggest). I have tried restarting the SQL Server service, by the way, and also tried to get someone to log onto the machine with an admin account to restart the SQL Server service. Still no luck.
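
    One way to take the protocol guesswork out of the picture is to force a specific protocol with a prefix on the server name, either in the SSMS connection dialog or with sqlcmd. A sketch (the instance name .\SQLEXPRESS is an assumption; adjust to the actual instance):

        :: lpc: forces shared memory, tcp: forces TCP/IP, np: forces named pipes
        sqlcmd -S lpc:.\SQLEXPRESS -E
        sqlcmd -S tcp:localhost\SQLEXPRESS -E
        sqlcmd -S np:.\SQLEXPRESS -E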

    Read the article

  • Exim rejects recipient address on my domain

    - by Nicolas
    Hi, I have a dedicated server (Debian) on which I have installed Exim and Dovecot. Everything worked fine until around a month ago. I have tried reinstalling and reconfiguring Exim, but all incoming email keeps being rejected. Outlook says: A message that you sent could not be delivered to one or more of its recipients. This is a permanent error. The following address(es) failed: [email protected] SMTP error from remote mail server after RCPT TO:: host mail.mydomain.com [94.76.##.##]: 550 relay not permitted GMAIL: Delivery to the following recipient failed permanently: [email protected] Technical details of permanent failure: Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 550 550 relay not permitted (state 14). On the server side, my rejectlog file shows: 2011-01-04 17:09:21 H=mail-qw0-f53.google.com [209.85.216.53] F=<####@gmail.com rejected RCPT : relay not permitted ... and the mainlog file: 2011-01-04 17:00:01 1PaAEr-0007vN-DX <= root@ETC_MAILNAME U=root P=local S=869 2011-01-04 17:00:01 1PaAEr-0007vN-DX ** root@etc_mailname: Unrouteable address 2011-01-04 17:00:01 1PaAEr-0007vY-Kn Error while reading message with no usable sender address (R=1PaAEr-0007vN-DX): at least one malformed recipient address: root@ETC_MAILNAME - malformed address: _MAILNAME may not follow root@ETC 2011-01-04 17:00:01 1PaAEr-0007vN-DX Process failed (1) when writing error message to root@ETC_MAILNAME (frozen) 2011-01-04 17:09:21 no IP address found for host MAIN_RELAY_NETS (during SMTP connection from mail-qw0-f53.google.com [209.85.216.53]) 2011-01-04 17:09:21 H=mail-qw0-f53.google.com [209.85.216.53] F=<####@gmail.com rejected RCPT : relay not permitted then after the message becomes frozen: 2011-01-04 17:28:44 1PaAEr-0007vN-DX Message is frozen Thank you for your help; any idea or comment is welcome, as I am really running out of ideas to fix this issue. Nicolas. Oh, and the PHP mail() function does not do anything either; could that be related? I think mail() uses sendmail, according to my php.ini.
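
    The literal ETC_MAILNAME and MAIN_RELAY_NETS strings in mainlog look like Debian configuration macros that were never expanded, which would also explain why Exim no longer treats the domain as local and answers "relay not permitted". Assuming the server uses Debian's exim4-config packaging, a sketch of regenerating the configuration (the domain name is a placeholder):

        echo mydomain.com > /etc/mailname
        # declare the domain as local in /etc/exim4/update-exim4.conf.conf, e.g.:
        #   dc_other_hostnames='mydomain.com'
        update-exim4.conf && invoke-rc.d exim4 restart
        exim4 -bt user@mydomain.com    # verify the address now routes to a local mailbox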

    Read the article

  • Solaris 10 zlogin logs in, logs out immediately

    - by Spelevink
    On a SPARC V445 running Solaris 10 9/10, I had to rebuild the rpool and reattach the three existing mirrored zpools on the other existing disks, with their ZFS filesystems and non-global zones intact. The zones had been configured with zonecfg -z ZONENAME create etc. ... and are now online, using zoneadm -z ZONENAME attach -U and then simply booting from the installed state, but I cannot zlogin to any of the zones except one. It shows that I am logged in, then a blank line, then that I am immediately logged out again. When I try to log in using zlogin -C ZONENAME I cannot; the error message is: May 15 15:43:46 <hostname> login: open_module: stat(/usr/lib/security/pam_mkhomedir.so.1) failed: no such file or directory. May 15 15:43:46 <hostname> login: load_modules: cannot open module /usr/lib/security/pam_mkhomedir.so.1 But /usr/lib/pam_mkhomedir.so.1 does not exist, and it does not exist on my other servers either, yet those zones are accessible using zlogin. I can only zlogin to the zones with zlogin -S ZONENAME. What should I do next? Thank you.
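
    Since the error says PAM cannot load /usr/lib/security/pam_mkhomedir.so.1, one thing to check is whether the affected zones' pam.conf files reference that module; it is not part of stock Solaris 10, so it may have come across with the reattached zone filesystems. A sketch for checking from the global zone (the zone path lookup and edit step are illustrative; back up pam.conf before changing it):

        zonepath=$(zonecfg -z ZONENAME info zonepath | awk '{print $2}')
        grep -n pam_mkhomedir "$zonepath/root/etc/pam.conf"
        # either install the module at that path, or comment out the lines found above,
        # then retry:  zlogin ZONENAME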

    Read the article

  • Windows Server 2008 Create Symbolic Link, updated Security Policy still gives privilege error

    - by Matt
    Windows Server 2008, RC2. I am trying to create a symbolic/soft link using the mklink command: mklink /D LinkName TargetDir e.g. c:\temp\>mklink /D foo bar This works fine if I run the command line as Administrator. However, I need it to work for regular users as well, because ultimately I need another program (executing as a user) to be able to do this. So, I updated the Local Security Policy via secpol.msc. Under "Local Policies" "User Rights Management" "Create symbolic links", I added "Users" to the security setting. I rebooted the machine. It still didn't work. So I added "Everyone" to the policy. Rebooted. And STILL it didn't work. What on earth am I doing wrong here? I think my user is even an Administrator on this box, and running plain command line even with this updated policy in place still gives me: You do not have sufficient privilege to perform this operation.
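
    One thing worth checking: if the account is also a member of Administrators, UAC hands a non-elevated process a filtered token, and SeCreateSymbolicLinkPrivilege is one of the privileges stripped from that token, so granting the right to Users or Everyone appears to do nothing until the process is elevated. A quick sketch to see what the current, non-elevated token actually holds:

        :: Run in a normal (non-elevated) command prompt
        whoami /priv | findstr /i SymbolicLink
        whoami /groups | findstr /i Administrators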

    Read the article

  • How to restrict file system when logged into terminal services

    - by pghcpa
    What I need to accomplish: with one login, when the user is physically in the building they need to see everything; when they are using Terminal Services with the same login they should not be able to see the file system on the network. I can lock down the PC running Terminal Services, as that is its only use. Details: Windows Server 2003 with Terminal Services. One login per user (e.g., johndoe). When johndoe logs into the network at his desk in the office, he can see the network files according to group policy. When johndoe logs into Terminal Services from outside the building, we do not want to allow him to see the network. We are using 2X to publish an application, but that app has a "feature" that allows the user to see the network. The published application on Terminal Services (only) is a document management system that is tied to the Windows login, so I can't give users two logins.

    Read the article

  • Problem with creation of a scheduled task from IIS 6 on Server 2003

    - by Morten Louw Nielsen
    Hi, I have also posted this question on Stack Overflow, but will also try here, since it might be more system-related. I am writing a web application using .NET. The web app creates scheduled tasks using the System.Diagnostics.Process class, calling SCHTASKS.EXE with parameters. I have changed the identity on the app pool to a specific domain user. The domain user is a local administrator on all four web servers. From webserver01 I am creating tasks on webserver01 through webserver04. It works perfectly for 3-5 days, but then it breaks. It gives me the following error message in a message box: "The application failed to initialize properly (0xc0000142). Click on OK to terminate the application." If the system is in the broken state and I change the identity of the app pool to the domain administrator, it works. As soon as I change it back to my domain user, it breaks again. If I reboot the server, it works again for the same number of days, but will break again. It seems like a permission-related problem; I just don't understand why it works sometimes and sometimes doesn't. I hope someone out there has seen this problem! Looking forward to hearing from you! Kind regards, Morten, Denmark

    Read the article

  • Any ideas out there as to how the data can be recovered from an SSD?

    - by ben
    A friend had some form of catastrophic failure on an HP Mini 1000, leaving it unbootable. Of course there was data that wasn't backed up. I've removed the SSD and hooked it up to a ZIF 40 enclosure, but cannot seem to get the drive to be recognized in Windows 7. In Disk Management it displays as present, but uninitialized. Attempting to initialize it produces an error: Virtual Disk Manager - "The device is not ready". There is scant information on MIE (the custom OS), so I'm not even sure what kind of file system I'm dealing with. In any case, if the filesystem is indeed some flavor other than FAT or NTFS, is this error consistent with that? Are there any creative ideas out there as to how the data can be recovered? Update: Thanks for all the suggestions! I hadn't even considered running a live CD. Unfortunately, no luck with Ubuntu (live CD) or explore2fs. The ZIF connection seems OK (color-coded: green LED for a proper connection, orange otherwise). The drive can't be initialized and therefore can't be formatted, so I guess there may be some real damage. It probably needs to go to a specialist. Thanks again for the feedback, much appreciated.

    Read the article

  • Lion server profile manager, device enrollment doesn't work

    - by user964406
    I am in the process of setting up Lion Server's Profile Manager to manage iPads on our local school network. I don't need to manage them while they are outside the network. I have successfully had it working on my personal network. The school network is behind a proxy which we have no control over. I can get the iPads to view the mydevices page and install a trust cert. I have managed to get an iPad to successfully install the remote management profile. After this, Profile Manager bugs out. It will list the active task 'new device (sending)' but is unable to complete it. If I click on the device in Profile Manager and try any of the actions, they all fail to complete. I am using the auto-generated certificates, and this works if I bring the server and iPad outside of the school network. Shortly after device enrollment, the system log on the Lion server reports the following (I have replaced the actual IP address with INTERNALIP): Jun 4 08:40:53 mini sandboxd[760] ([778]): applepushservice(778) deny network-outbound INTERNALIP:8080 Jun 4 08:40:53 mini sandboxd[760] ([778]): applepushservice(778) deny network-outbound INTERNALIP:8080 Jun 4 08:40:53 mini applepushserviced[778]: Got connection error Error Domain=NSPOSIXErrorDomain Code=1 "The operation couldn\u2019t be completed. Operation not permitted" UserInfo=0x7fa483b1a340 {NSErrorFailingURLStringKey=https://albert.apple.com/WebObjects/ALUnbrick.woa/wa/deviceActivation?device=Mac OS, NSErrorFailingURLKey=https://albert.apple.com/WebObjects/ALUnbrick.woa/wa/deviceActivation?device=Mac OS} Jun 4 08:40:53 mini applepushserviced[778]: Failed to get client cert on attempt 2, will retry in 15 seconds Does anyone have any ideas on how to get past this stage? Thanks in advance.

    Read the article

  • SQL Server 2008 cluster freezing

    - by Ed Leighton-Dick
    We have run into a strange situation in which a SQL Server 2008 single-node cluster hangs. As background, we are rebuilding a Windows Server 2003/SQL Server 2005 two-node cluster using Windows 2008 and SQL Server 2008. Here's the timeline: Evicted the passive node (server B) from the Windows 2003/SQL 2005 cluster. The active node now functions as a single-node cluster with no problems. Wiped server B's disks and installed Windows 2008 and SQL Server 2008 as a single-node cluster. Since we do not want to the two clusters to communicate yet, we left the cluster's private network "heartbeat" adapter unconfigured. The cluster comes up and functions normally. Moved all databases to the new cluster. Cluster continues to function normally. Turned off server A (old cluster) in preparation for rebuilding as the second node of the new cluster. SQL Server instance on server B (new cluster) locks up, even though it should have no knowledge of or interaction with server A. Restarted server A. SQL Server instance on server B (new cluster) immediately begins working again. Things we have tried: The new cluster's name responds to ping and NETBIOS requests, even while the SQL Server is hung. We have confirmed that no IP address is assigned to the old heartbeat adapter, and it is not pulling an IP address from DHCP. Disabling the heartbeat's network card has the same effect. No errors were generated in any logs - Windows or SQL. When the error first occurred, it sat in the hung state for quite some time (well over 10 minutes) before anyone figured out what was going on. This would seem to eliminate any sort of normal cluster timeout in which it would have been searching for the other node (even if one had been configured). Server B is running Windows 2008 SP2, fully patched, and SQL Server 2008 SP1 CU7 (10.0.2775).

    Read the article

  • Windows 7 Backup - Does the "system image" include all the files on my drive?

    - by Vaccano
    I have a new Dell laptop that I have set up the way I like it. I want to use Windows 7 to make a backup and then restore that backup onto a different hard drive (solid state). When I set up the backup (manually) in Windows 7 Backup, there is a little checkbox at the bottom that says: Include a system image of drives: RECOVERY, OS (C:) I can also select to back up all my data on the C: drive (the only hard drive I have anything on) as well as some libraries (which are on my C: drive, so no point in selecting those). The question I have is: does Windows 7 Backup just somehow know what needs to be restored (i.e. Program Files, Windows, the registry, ...)? Or is it really making a full restorable copy of the C: drive? (If the latter is true, then I don't need to select the C: drive to be "backed up" if I don't plan to access the files except by restoring them, right? Because the system image will already have it all.) So, which way is it? What is saved in the system image?

    Read the article

  • switchless Infiniband between two servers on RHEL 6.3

    - by exfizik
    I have two servers running RHEL 6.3 which have two-port InfiniBand cards: >lspci | grep -i infini 07:00.0 InfiniBand: QLogic Corp. IBA7322 QDR InfiniBand HCA (rev 02) I'm interested in connecting them directly to each other, bypassing an InfiniBand switch (which I don't have). Quick googling showed that at least in some configurations it's possible. I installed all the Red Hat InfiniBand packages with yum groupinstall "Infiniband Support". However, ibv_devinfo shows that both ports in each card are down, which would indicate that the cables are not connected. But the cable is connected, although the LEDs are off on the cards (not a good sign). Another source of confusion for me is that, according to this, Red Hat doesn't ship the OFED packages, and I'm slightly hesitant to install them from source due to the lack of Red Hat support for them... So where am I going with this? The questions I have are: Is it possible to have a switchless/direct InfiniBand connection between two servers the way I described above? If it's possible, do I have to use the OFED packages, or can I configure everything with just the packages coming with RHEL? Why are the LEDs off on my servers even though the cable is connected? Any additional input/advice/pointers would be appreciated. P.S. I followed this guide for installation instructions. The InfiniBand cards are clearly recognized by my OS and the rdma service is running. Update: I have opensm installed. When I run it, it says: OpenSM 3.3.13 Command Line Arguments: Log File: /var/log/opensm.log ------------------------------------------------- OpenSM 3.3.13 Entering DISCOVERING state Using default GUID 0x1175000076e4c8 SM port is down and it stays at that point.
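
    On the specific questions: back-to-back InfiniBand links do work, but something on the fabric has to run a subnet manager, and with no switch that means opensm on one of the two hosts; the stock RHEL 6 packages are generally enough for this without building OFED from source. A sketch of the basic checks (package and service names assume RHEL 6.3):

        yum install -y opensm infiniband-diags libibverbs-utils
        service rdma start && chkconfig rdma on
        service opensm start && chkconfig opensm on    # run the subnet manager on one host only
        ibstat                                         # look for "Physical state: LinkUp"
        ibv_devinfo | grep -E 'port|state'

    If the physical state never reaches LinkUp and the LEDs stay off, the problem is below the software layer (cable, port, or the HCA driver not loaded), and opensm will keep reporting that its port is down.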

    Read the article

  • ESXi 4.0 Guests Locking up

    - by Brendan Sherwin
    I installed ESXi 4.0 on an HP ProLiant G5 with a 64-bit Xeon processor and took advantage of the free license, as I work for a public school. I created two instances of Server 2003 from scratch, one to be the DC and DHCP server, the other to be a file server and DNS/DHCP backup. I had both guests up and running fine, set up my user accounts, transferred the data, etc. Once I joined a client machine to the domain, I would find that both of my Windows guests would lock up. Sometimes it would be for five or so minutes; once it was overnight. The "locked up" state means that, as far as I could tell, all services were stopped: DHCP no longer handed out IPs, DNS stopped working, and I couldn't RDP into the server. The ESXi host, my HP server, was still running fine. vSphere was working, and I could look at the performance of the individual guests. I would try powering off the guests from inside vSphere, and they would start powering off but get stuck at 95%, and stay that way, sometimes only for 10 minutes, other times for hours. Several times I had to restart ESXi from its console in order to restart my machines. Now, can anyone tell me what is happening, and how I can fix it or take steps to prevent it? I hired a consultant to take a look at it, someone whose experience and knowledge I trust, and he told me he had never seen anything like this before. He spoke to a friend of his who is VMware certified, and he also said he had never heard of this issue. Thanks for your replies, and I'll do my best to respond ASAP. Currently, the server is powered off, I've reinstituted my nine-year-old Server 2000 boxes, and I'm considering installing ESXi 3.5. Does anyone know whether a guest created on 4.0 will work on 3.5? I'd really like to avoid having to rebuild those accounts! I know 4.0 works on this server, as I have another server in another school with the exact same hardware running 4.0 fine. Brendan

    Read the article

  • PXE booting LACP hosts on Force10 S50N with FTOS

    - by lolwutreddit
    Hardware: S50N. Firmware: FTOS 8.4.2.6. Problem: we're trying to PXE boot some servers that are connected via port-channel interfaces with LACP. Current work-around: we PXE boot a server with a single interface (eth0), and then use a Perl script to turn up the port-channel interfaces after the server is built. Details: is anyone doing anything similar on Force10 S50 switches with FTOS? If not, is anyone doing this on another S series, or a larger chassis-based Force10? I'm wondering if a native VLAN will solve this, since ports in a port-channel cannot explicitly have a VLAN set, and they don't seem to use the tagged or untagged VLAN that the port channel is in. I will confirm this next (I think it's the only thing I haven't tried). Juniper example: http://broken.net/openindiana/how-to-pxe-boot-systems-on-lacp-using-juniper-switches/ Cisco: there are plenty of documented ways to solve this issue on IOS and Nexus. Update/Edit: since there seems to be no way to use interface or port-channel mode commands to get the individual interfaces to show up in spanning tree (RSTP in this case), the ports should never go into a forwarding state. I'm not going to mess with it any more unless a) someone who has experience passes it on, or b) Force10 comes up with a solution for this (I'm guessing it would only be introduced on the other S platforms (S55, S60), since the S50 seems to be near EOL). I'm basing that on the fact that the Open Automation-type features are only being supported on the newer switches.

    Read the article

  • PHP 5.3 on IIS gives 404 error in CGI mode

    - by reinier
    Slowly losing my mind here. I had PHP 5.2 working fine (ISAPI) under IIS, but for a particular extension I needed 5.3. So, no worries, I installed it, but it turns out the ISAPI build is not supplied any more. I followed the install tutorials for FastCGI and ended up with a 500 Internal Server Error for every PHP page served. So my current situation is: I have removed FastCGI. In my websites I have added a PHP mapping (HEAD, GET, POST) and routed it to c:\php\php-cgi.exe. Result: every PHP page I try (even ones containing just text) gives a 404 Not Found error. Any HTML file I put in the same folder serves without a hitch. Can anyone help me, please... how hard can something like this be, right? For me, apparently, very hard. Extra information: I ran the installer as suggested below and set it to use FastCGI. My fcgiext.ini file looks like this now: [types] php=c:\php\php-cgi.exe [c:\php\php-cgi.exe] exepath=c:\php\php-cgi.exe From the command line, a three-line PHP file with just phpinfo(); works fine. From the server, the same PHP file results in the 500 internal server error. From the server, a PHP file with just text works fine. Changing the document types in the IIS management console and pointing the PHP extension directly to c:\php\php-cgi.exe results in 404 for every PHP file. The php.ini is the php.ini-production file which came with the distribution; no edits were made. Setting the IIS PHP handler directly to PHP (not via FastCGI), c:\php\php-cgi.exe, gives the following results: a PHP page with only text works fine; a page with only phpinfo(); results in 404 Not Found.
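
    For comparison, assuming this is IIS 6 with the FastCGI extension (which is what the presence of fcgiext.ini implies), the documented layout of that file maps the extension to a named section rather than straight to the executable. A sketch:

        [Types]
        php=PHP

        [PHP]
        ExePath=c:\php\php-cgi.exe
        InstanceMaxRequests=10000
        EnvironmentVars=PHP_FCGI_MAX_REQUESTS:10000

    With that in place, the .php extension mapping in the website's properties would point at %windir%\system32\inetsrv\fcgiext.dll (GET, HEAD, POST), not at php-cgi.exe directly, and "Verify that file exists" is worth unchecking if the 404s persist.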

    Read the article
