Search Results

Search found 22481 results on 900 pages for 'andy may'.


  • IIS6 intranet site using integrated authentication fails to load when accessed externally

    - by maik
    I've developed a couple of internal sites for my organization that use integrated authentication. Ultimately we want these sites to be accessible externally to users with domain-joined computers. The sites work as expected on domain computers while on the internal network. The problem comes when I take my laptop home and try to access those sites. IIS only has integrated authentication enabled for the two sites. When I browse to the site using IE8 I get a username/password prompt asking for domain credentials. I can put those in and it will work, but the goal is to use the cached token for integrated authentication. Next I reasoned that IE wouldn't respond to an integrated auth request (is NTLM the right term for this?) unless the site was trusted. I tried adding the site to Trusted Sites but I get the same behavior as before. I then added the site to Local Intranet sites and that is where things get weird. I get a generic error page from IE, no error code or anything. Just for funsies I loaded up Firefox (which I had previously set up to use integrated authentication) and I added this new site to network.automatic-ntlm-auth.trusted-uris. Much to my surprise I was able to load the pages up with no problem at all and saw exactly what I was expecting (including verification that the integrated authentication worked). My mind is a bit boggled at the moment as I'm not really sure where to go from here. I was hoping some of you may be able to provide some insight.
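
    For reference, the zone assignment can also be pushed through the registry rather than the IE dialog (a sketch only, using a hypothetical internal hostname; value data 1 means Local Intranet, 2 means Trusted Sites):

      rem per-user example; the hostname is a placeholder, deploy via Group Policy for many machines
      reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\intranet.example.com" /v http /t REG_DWORD /d 1

    The Firefox behaviour described above corresponds to the network.automatic-ntlm-auth.trusted-uris preference, which is why it worked there once the URI was listed.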

    Read the article

  • Kerberos & localhost

    - by Alex Leach
    I've got a Kerberos v5 server set up on a Linux machine, and it's working very well when connecting to other hosts (using samba, ldap or ssh), for which there are principals in my kerberos database. Can I use kerberos to authenticate against localhost though? And if I can, are there reasons why I shouldn't? I haven't made a kerberos principal for localhost. I don't think I should; instead I think the principal should resolve to the machine's full hostname. Is that possible? I'd ideally like a way to configure this on just one server (whether kerberos, DNS, or ssh), but if each machine needs some custom configuration, that'd work too. e.g. $ ssh -v localhost ... debug1: Unspecified GSS failure. Minor code may provide more information Server host/[email protected] not found in Kerberos database ... EDIT: So I had a bad /etc/hosts file. If I remember correctly, the original version I got with Ubuntu had two 127.0. IP addresses, something like: 127.0.0.1 localhost 127.0.*1*.1 hostname For no good reason, I'd changed mine a long time ago to: 127.0.0.1 localhost 127.0.*0*.1 hostname.example.com hostname This seemed to work fine with everything until I tried out ssh with kerberos (a recent endeavour). Somehow this configuration led to sshd resolving the machine's kerberos principal to "host/localhost@\n", which I suppose makes sense if it uses /etc/hosts for forward and reverse DNS lookups in preference to external DNS. So I commented out the latter line, and sshd magically started authenticating with gssapi-with-mic. Awesome. (Then I investigated localhost and asked the question)
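
    For the record, the working layout ends up looking like the stock Debian/Ubuntu one (a sketch with placeholder names), with the extra 127.0.0.1 FQDN mapping removed so the principal resolves through DNS:

      127.0.0.1   localhost
      # 127.0.0.1   hostname.example.com hostname    <- commented out; the FQDN now resolves via DNS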

    Read the article

  • Split Tunnel VPN using incorrect Tunnel

    - by Brian Schmeltz
    Our company has a handful of field offices that have recently been set up with a regular internet connection after we removed the T1 and router that connected them directly to our network. Now, when the users are in the office, they log in to the VPN to be able to connect to the network. For the sake of them being able to print and scan from the local multi-function we have set up a split-tunnel VPN. We currently have about 15-20 users using this setup around the country without any problems. Recently one of our users started having problems accessing internal programs/sites when connecting from both home and the office. There are three other users in the same office and they do not have this problem. I assumed that it was something with the computer and went ahead and replaced it with another of the same model. The computer worked fine in our home office; however, when the user received it, she had the exact same problem both at home and in the field office. Thinking it may be a NIC driver issue I sent her another computer, this time a different model; the same problem occurred. If I update the hosts file to point to the correct paths, things will work, and if I connect via a normal VPN connection everything works, but then the user cannot scan or print - which is a problem. I have tried to find ways to create another tunnel on a normal VPN and to force the correct tunnel on the split-tunnel VPN. It appears that there is something related to the ISP, because if I connect to Comcast or Verizon it is fine, but once she connects to Insite she has problems. I have been unable to get any support from Insite as they don't feel the issue is with them. We use a Nortel VPN client. Any thoughts or ideas would be appreciated.
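
    As a stopgap, the hosts-file workaround mentioned above amounts to entries like these (purely hypothetical internal names and addresses):

      # C:\Windows\System32\drivers\etc\hosts
      10.1.10.25   intranet.example.local
      10.1.10.26   erp.example.local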

    Read the article

  • Secondary backup server

    - by verdy
    I've been given a task to implement a backup solution in the event that our website goes down. It is a dedicated server running CentOS 6. From what I've experienced on our server, it may go down because of a PHP application crash or a hardware failure. I have a couple of questions: In the first case, is it possible to have the server restart PHP automatically, and how can I do that? Because in my mind, if it is only the application that goes down, I can probably still make use of the server itself. In the second case, can I redirect a request to a secondary server? How can I do that? What do I need other than another server? For now it is going to be a simple server which shows the user a static landing page and notifies us via email that the primary server went down, so that we can restart the primary manually. Is it possible to set up just a VPS or even a shared host for the secondary server, since it will only serve a static page? Thanks. Any help would be much appreciated.
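
    For the first case, a minimal watchdog run from cron is one option (a sketch only; it assumes Apache with mod_php on CentOS 6, and the URL and paths are placeholders):

      #!/bin/sh
      # restart the web stack if the local site stops answering
      if ! curl -sf http://localhost/ > /dev/null; then
          /sbin/service httpd restart
      fi

    A tool such as monit does the same job with less scripting; either way the idea is just "probe locally, restart on failure".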

    Read the article

  • It seems Windows 8.1 killed my two T60 laptop batteries

    - by rstock
    Upgraded Windows 7 to Win8 earlier this year, and last week upgraded to Windows 8.1 (Lenovo T60). Had no problem with battery usage on Win7 or Win8. After about a week of Win8.1 on my system, the battery stopped working while the system was on. The orange battery indicator just keeps flashing. The system does not charge the battery (even though I know there was life in it). I installed a known-good, fully charged battery from another T60; it worked for about 40 mins and then instantly died in front of my eyes. The system now shows the same orange flashing battery light, but it is not charging. I know both these batteries are still good, they just appear to be dead. My research suggests that Win8.1 may not have updated the battery driver from Win8, so I have since done that. Same problem. Research is also pointing me to some 'smart chip' on the batteries that needs a reset. Is this possible? Does anyone know a process to reset the 'smart chip' on these batteries (FRU 92P1139)?

    Read the article

  • Defeating the RAID5 write hole with ZFS (but not RAID-Z) [closed]

    - by Michael Shick
    I'm setting up a long-term storage system for keeping personal backups and archives. I plan to have RAID5 starting with a relatively small array and adding devices over time to expand storage. I may also want to convert to RAID6 down the road when the array gets large. Linux md is a perfect fit for this use case since it allows both of the changes I want on a live array and performance isn't at all important. Low cost is also great. Now, I also want to defend against file corruption, so it looked like a RAID-Z1 would be a good fit, but evidently I would only be able to add additional RAID5 (RAID-Z1) sets at a time rather than individual drives. I want to be able to add drives one at a time, and I don't want to have to give up another device for parity with every expansion. So at this point, it looks like I'll be using a plain ZFS filesystem on top of an md RAID5 array. That brings me to my primary question: Will ZFS be able to correct or at least detect corruption resulting from the RAID5 write hole? Additionally, any other caveats or advice for such a setup are welcome. I'll probably be using Debian, but I'll definitely be using Linux since I'm familiar with it, so that means only as new a version of ZFS as is available for Linux (via ZFS-FUSE or so).
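
    For reference, the layering being described is roughly this (a sketch with hypothetical device names; without redundancy of its own, ZFS can detect checksum errors but can only repair them if extra copies exist, e.g. via copies=2):

      mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
      zpool create tank /dev/md0
      zfs set copies=2 tank    # optional: stores two copies of each block so detected corruption can be healed, at twice the space cost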

    Read the article

  • Postfix Relay to Office365

    - by woodsbw
    I am trying to set up a Postfix server on a Linux box to relay all mail to our Office 365 (Exchange, hosted by Microsoft) mail server, but I keep getting an error regarding the sending address:

      BB338140DC1: to= relay=pod51010.outlook.com[157.56.234.118]:587, delay=7.6, delays=0.01/0/2.5/5.1, dsn=5.7.1,
      status=bounced (host pod51010.outlook.com[157.56.234.118] said: 550 5.7.1 Client does not have permissions to
      send as this sender (in reply to end of DATA command))

    Office 365 requires that the sending address in the MAIL FROM and the From: header be the same as the address used to authenticate. I have tried everything I can think of in the config to get this working. My postconf -n:

      append_dot_mydomain = no
      biff = no
      config_directory = /etc/postfix
      debug_peer_list = 127.0.0.1
      inet_interfaces = loopback-only
      inet_protocols = all
      mailbox_size_limit = 0
      mydestination = xxxxx, localhost.localdomain, localhost
      myhostname = localhost
      mynetworks = 127.0.0.0/8
      recipient_delimiter = +
      relay_domains = our.doamin
      relayhost = [pod51010.outlook.com]:587
      sender_canonical_classes = envelope_sender
      sender_canonical_maps = hash:/etc/postfix/sender_canonical
      smtp_always_send_ehlo = yes
      smtp_sasl_auth_enable = yes
      smtp_sasl_mechanism_filter = login
      smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
      smtp_sasl_security_options =
      smtp_tls_CAfile = /etc/postfix/cacert.pem
      smtp_tls_loglevel = 1
      smtp_tls_security_level = may
      smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
      smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
      smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
      smtpd_use_tls = yes

    sender_canonical:

      www-data [email protected]
      root [email protected]
      www-data@localhost [email protected]
      root@localhost [email protected]

    Also, sasl_passwd is set to the correct credentials (tested them using swaks multiple times). Authentication works, and the message is sent when the From headers are correct (also tested using swaks, which works). The emails are coming from PHP, so I have also tried altering the sendmail path in php.ini to pass the correct from address via -f. So, for some reason, mail coming from www-data and root is not having the From fields rewritten to Office 365's satisfaction, and it won't send the message. Any Postfix gurus out there that can help me set up this relay?
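
    One commonly suggested direction is to rewrite the header sender as well as the envelope sender, since sender_canonical_classes = envelope_sender leaves the From: header untouched. A sketch only, with a hypothetical mailbox address standing in for the authenticated account:

      # main.cf additions
      sender_canonical_classes = envelope_sender, header_sender
      sender_canonical_maps = regexp:/etc/postfix/sender_canonical
      smtp_header_checks = regexp:/etc/postfix/smtp_header_checks

      # /etc/postfix/sender_canonical -- force every envelope sender to the authenticated mailbox
      /.+/    [email protected]

      # /etc/postfix/smtp_header_checks -- rewrite the From: header on the way out to Office 365
      /^From:.*/    REPLACE From: [email protected]

    Regexp tables don't need postmap, but a postfix reload is still required after the change.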

    Read the article

  • Accidentally mounted a ReiserFS drive as MBR on my windows box - how do I recover?

    - by Ryan
    I had a WD Netcenter with a 160GB drive that kept dropping off the network. I opened up the enclosure and removed the hard drive, then connected it to a Windows box without knowing the drive used ReiserFS. When mounting on the Windows box, I chose "MBR" as the filesystem. 70GB of data corrupted: 90% of the data is Word documents, Excel spreadsheets, and JPGs - all mission critical. Attempted recovery on a Linux box (Ubuntu) using TestDisk: I could see the container, but couldn't get anything out - according to TestDisk this was because I chose "none" as the filesystem. Attempted recovery using Nucleus Kernel Recovery for Windows: 98% of what was recovered is incomplete and/or unusable. I need to know if a way exists to recover or rebuild the original ReiserFS MBR, or what tools/techniques might give me the best results in recovering the data. I found a Windows version of TestDisk and ran it yesterday - here are the results:

      TestDisk 6.14-WIP, Data Recovery Utility, May 2012
      Christophe GRENIER <[email protected]>
      http://www.cgsecurity.org

      Disk /dev/sda - 160 GB / 149 GiB - CHS 19457 255 63
      The harddisk (160 GB / 149 GiB) seems too small! (< 519 GB / 483 GiB)
      Check the harddisk size: HD jumpers settings, BIOS detection...

      The following partitions can't be recovered:
           Partition              Start          End       Size in sectors
      >  ReiserFS 3.6          62 241  8    19458  0 18      311581568
         ReiserFS 3.6          62 248 55    19458  8  2      311581568
         ReiserFS 3.6          62 254 37    19458 13 47      311581568
         ReiserFS 3.6          63   6 28    19458 20 38      311581568
         ReiserFS 3.6          63  13 11    19458 27 21      311581568
         ReiserFS 3.6          63  21 43    19458 35 53      311581568
         ReiserFS 3.6          63  27 41    19458 41 51      311581568
         ReiserFS 3.6          63  37 35    19458 51 45      311581568
         ReiserFS 3.6          63  54 20    19458 68 30      311581568
         ReiserFS 3.6          63  76 26    19458 90 36      311581568

    Read the article

  • Debian software RAID 1: boot from both disks

    - by bsreekanth
    I recently installed Debian Squeeze with software RAID, following the approach given in this thread. I have 2 HDDs of 500 GB each. For each of them, I created 3 partitions (/boot, / and swap): I selected the hard drive and created a new partition table. I created a new partition of 1GB, specified to use it as a physical volume for RAID, used it for /boot and enabled the bootable flag. I created another partition of 480 GB, specified to use it as a physical volume for RAID, and used it for /. I created another partition and used it for swap. Then the RAID configuration: through the Configure RAID menu - create MD device - (2 for the number of drives, 0 for spare devices). Next select the partitions you want to be members of /dev/MD0; I selected /dev/sda1 and /dev/sdb1 (for /boot). Next select the partitions you want to be members of /dev/MD1; I selected /dev/sda6 and /dev/sdb6 (for /). And no RAID for the swap partitions. 'Finish Partitioning and write changes to disk' - then finish the rest of the install like normal. Everything is OK now, except I am not sure how to test my RAID config. When I pull the power from one HDD, it only boots from one disk. I read in some forum that I may have to install GRUB manually on the other. In Debian Squeeze, there is no grub command, and I am not sure how to make my software RAID bootable from both disks. Also, please comment on my steps above - anything unusual? I configured the /boot partitions of both disks to be boot=yes. Not sure whether that is OK. Thanks, Bsr
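
    For what it's worth, the usual suggestion for making the second disk bootable is to install GRUB 2 onto its MBR as well (a sketch; it assumes the two disks are /dev/sda and /dev/sdb):

      grub-install /dev/sdb
      # or re-run the package configuration and tick both disks as install targets:
      dpkg-reconfigure grub-pc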

    Read the article

  • How to send T.38 from a mac?

    - by Brian Postow
    I'm trying to set up a fax server on a Macintosh. I have HylaFAX, and we're going to use an internet FoIP fax provider (haven't decided who yet; that may be another question). The problem is how to get from HylaFAX to T.38. I know of two options, but I'm not sure how to decide between them: T38modem - Advantages: it's only one extra program, and I know that I can compile it for the Mac (well, at least I can get the H.323 version working on a Mac). Disadvantages: it is mostly undocumented and seems to be supported only by one guy in Russia. IAXmodem/Asterisk - Advantages: it's well known and well supported. We can pay for support. It presumably does T.38 with SIP correctly, so we don't have to worry about it. Disadvantages: it's two separate programs. While I know how to get Asterisk on a Mac, I'm not sure about IAXmodem (it's on SourceForge, and Linux, but compiling things for a Mac isn't always straightforward...). It's also mostly undocumented. Do these seem like an accurate listing of the pros/cons? Anyone have any other suggestions? Thanks.

    Read the article

  • LSI MegaRAID LINUX got Optimal after degradation but strange POST message

    - by kesrut
    A Linux server box with an LSI MegaRAID controller got degraded, but after some time the RAID status changed back to Optimal:

      Adapter 0 -- Virtual Drive Information:
      Virtual Drive: 0 (Target Id: 0)
      Name                :
      RAID Level          : Primary-1, Secondary-0, RAID Level Qualifier-0
      Size                : 2.727 TB
      Mirror Data         : 2.727 TB
      State               : Optimal
      Strip Size          : 256 KB
      Number Of Drives per span: 2
      Span Depth          : 3
      Default Cache Policy: WriteBack, ReadAdaptive, Cached, No Write Cache if Bad BBU
      Current Cache Policy: WriteThrough, ReadAdaptive, Cached, No Write Cache if Bad BBU
      Default Access Policy: Read/Write
      Current Access Policy: Read/Write
      Disk Cache Policy   : Disk's Default
      Encryption Type     : None
      Is VD Cached: No

    But now I'm getting this RAID BIOS POST message: "Your battery is either charging, bad or missing, and you have VDs configured for write-back mode. Because the battery is not currently usable, these VDs will actually run in write-through mode until the battery is fully charged or replaced if it is bad or missing." (Image: http://cl.ly/image/1h1O093b1i2d) So could a battery issue have caused the problem? The battery information I get is:

      BatteryType: iBBU
      Voltage: 4001 mV
      Current: 0 mA
      Temperature: 22 C
      Battery State : Operational
      BBU Firmware Status:
        Charging Status              : None
        Voltage                      : OK
        Temperature                  : OK
        Learn Cycle Requested        : No
        Learn Cycle Active           : No
        Learn Cycle Status           : OK
        Learn Cycle Timeout          : No
        I2c Errors Detected          : No
        Battery Pack Missing         : No
        Battery Replacement required : No
        Remaining Capacity Low       : No
        Periodic Learn Required      : No
        Transparent Learn            : No
        No space to cache offload    : No
        Pack is about to fail & should be replaced : No
        Cache Offload premium feature required     : No
        Module microcode update required           : No

    Where could the problem be? I've disabled the alarms, but I get them when enabled, and I don't know how to find the root of the problem.
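
    If it helps, the BBU state and a manual relearn can be driven from the command line with MegaCli (a sketch; the binary name and install path vary by distribution and package):

      /opt/MegaRAID/MegaCli/MegaCli64 -AdpBbuCmd -GetBbuStatus -aALL              # detailed battery status
      /opt/MegaRAID/MegaCli/MegaCli64 -AdpBbuCmd -BbuLearn -a0                    # start a manual learn cycle
      /opt/MegaRAID/MegaCli/MegaCli64 -AdpEventLog -GetEvents -f events.log -aALL # dump the controller event log to see when and why it degraded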

    Read the article

  • Sun-JRE on CentOS-4.8 RPM error: post-install scriptlet failed, exit status 5

    - by Emyr
    I have a server with CentOS 4.8 installed. The provider is rubbish, but there are only a few months left, and they're busy being sued by Chase bank, so I doubt I can get CentOS 5. I wiped the server clean using Virtuozzo, and found that the default image is VERY empty. I even had to install yum myself. I've reached the point where I want to install Tomcat. I downloaded the Sun JRE as a .rpm.bin file, did chmod a+x and ran it. That produced a .rpm file, which I tried installing:

      [root@host java]# rpm -Uvh jre-6u20-linux-i586.rpm
      Preparing...                ########################################### [100%]
         1:jre                    ########################################### [100%]
      Unpacking JAR files...
              rt.jar...
              jsse.jar...
              charsets.jar...
              localedata.jar...
              plugin.jar...
              javaws.jar...
              deploy.jar...
      error: %post(jre-1.6.0_20-fcs.i586) scriptlet failed, exit status 5

      [root@host java]# rpm -qi jre
      Name        : jre                        Relocations: /usr/java
      Version     : 1.6.0_20                   Vendor: Sun Microsystems, Inc.
      Release     : fcs                        Build Date: Mon Apr 12 19:34:13 2010
      Install Date: Thu May 6 06:36:17 2010    Build Host: jdk-lin-1586
      Group       : Development/Tools          Source RPM: jre-1.6.0_20-fcs.src.rpm
      Size        : 50708634                   License: Sun Microsystems Binary Code License (BCL)
      Signature   : (none)
      Packager    : Java Software <[email protected]>
      URL         : http://java.sun.com/
      Summary     : Java(TM) Platform Standard Edition Runtime Environment
      Description :
      The Java Platform Standard Edition Runtime Environment (JRE) contains
      everything necessary to run applets and applications designed for the
      Java platform. This includes the Java virtual machine, plus the Java
      platform classes and supporting files. The JRE is freely redistributable,
      per the terms of the included license.
      [root@host java]#

    I couldn't find any results on Google for any parts of that error message, and I have very little experience with rpm (I usually use Debian). Is this a broken package, or am I missing something or some setting?
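
    Two things that are sometimes suggested for narrowing this down (a sketch only; skipping scriptlets means any symlinking the %post step would have done has to be finished by hand):

      rpm -qp --scripts jre-6u20-linux-i586.rpm      # show the %post scriptlet that is failing
      rpm -Uvh --noscripts jre-6u20-linux-i586.rpm   # install the payload while skipping the scriptlets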

    Read the article

  • Why can't I install Java?

    - by Patrick
    I've been searching high and low for someone who can help me. I've been trying for a month - a month - to install Java on my new PC, to no avail. No tech support forum seems able to help me. It all started while playing Tekkit one day. I kept running out of memory (using the 32-bit JRE 7 u45), so I decided to install the 64-bit version. I uninstalled the 32-bit version first, for some reason, and downloaded the 64-bit runtime. In the installer, I go through all the normal screens until the installation progress bar appears. Then it just sits there. No progress is made. No CPU is used by the installer or any of its dependencies. The installer will stay like this for hours, days, and in one case a whole week without doing anything at all. I've tried installing older versions, the 32-bit version, even Java 6, and none of them will install. UAC is disabled, and I've run regedit, CCleaner, and any other "fix-it" program there is. It's getting to the point where I may just have to wipe my hard drives and start over. I have several applications that require Java, so this is an absolute necessity. Please, please, someone have the answer. Here are my system specs: Intel i7-3770k, ASRock Z77 Extreme3, Samsung 840 Pro SSD, WD Caviar Black 1TB.

    Read the article

  • MySQL Server Is Slow

    - by user2853965746
    I have two MySQL servers and one was just recently set up. The one I just set up is a bit slower than my older one, which kind of bothers me because I don't want my clients to be upset with the speed difference when I launch the new one. The older server runs on Ubuntu (~13.04 I believe) and the new one is on Debian 6. Both servers have 2GB of RAM, but my newer server has an SSD, so I thought it might be the same speed if not faster. Anyway, the speed difference isn't too much (both are still under a second, but still noticeable). Whenever I select 50 rows from the user table on my older server (SELECT * FROM users LIMIT 50), I get the results in 0.003 s. There are 100,000+ accounts in that table. Whenever I run the same command on the same table with only six dev accounts, it takes 0.069 s. It may not seem like a lot, but it's noticeable when you're used to a fast response. I added skip-name-resolve to the config and it didn't seem to help. Basically I'm asking if anyone knows what can cause a MySQL server to be slow on Debian 6. Should I just drop it and switch to Ubuntu like the older server (I don't think the OS is the problem, but you never know)? The older server is under a lot of use too; it's used a lot for web APIs on my website. A lot of connections and stuff, and it still remains fast.
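
    In case it's useful for comparing the two boxes, these are the kinds of settings usually lined up first (a sketch for /etc/mysql/my.cnf; the values are placeholders to tune for a 2GB machine, not recommendations):

      [mysqld]
      skip-name-resolve
      innodb_buffer_pool_size = 1G
      query_cache_size        = 64M
      slow_query_log          = 1
      long_query_time         = 0.5

    Running the same query on both servers with profiling enabled (SET profiling = 1; ... SHOW PROFILES;) also shows where the extra milliseconds are being spent.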

    Read the article

  • Proper Network Infrastructure Setup DMZ, VPN, Routing Hardware Question

    - by NickToyota
    Greetings Server Fault Universe, So here's a quick background. Two weeks ago I started a new position as the systems administrator for an expanding health services company of just over 100 persons. The individual I was replacing left the company with little to no notice. Basically, I have inherited a network of one main HQ (where I am situated), which has existed for over 10 years, with five smaller offices (fewer than 20 persons each). I am trying to make sense of the current setup. The network at the HQ includes: a Linksys RV082 router providing internet access for employees and site-to-site VPN connecting the smaller offices (using an RV042 each). We have both cable and DSL lines connected to balance traffic (however this does not work at all and is not my main concern right now). A Cisco IronPort appliance. This is the main gateway for our incoming and outgoing emails. This also has an external IP and internal IP. Lotus Domino in and out email servers connected to the mentioned Cisco gateway. These also have an external IP and internal IP. Two Windows 2003 and 2008 boxes running as domain controllers with DNS of course. These also have both an external IP and internal IP. Website and web mail servers, also on both external and internal IPs. I am still confused as to why there are so many servers connected directly to the internet. I am seriously looking to redesign this setup with proper security practices in mind (my highest concern) and am in need of a proper firewall setup for the external/internal servers along with a VPN solution for about 50 employees. Budget is not a concern as I have been given some flexibility to purchase necessary solutions. I have been told a Cisco ASA appliance may help. Does anyone out in the Server Fault Universe have some recommendations? Thank you all in advance.

    Read the article

  • APC Smart UPS network shutdown issue

    - by Rob Clarke
    Here is a bit about our setup: we have 2x Smart-UPS RT 6000 XL units with network management cards, and we are running PowerChute from a network server. PowerChute is connected to the management cards of both UPSs. The UPSs are set to do a graceful shutdown via PowerChute when the battery duration is under 20 minutes. We also have a command file that runs with PowerChute. Although our setup is redundant, we do not have an equal load on each server due to APC switches for single power devices. The problem is that, because we do not have an equal load on each server, the batteries drain at different rates. This means that the UPSs both get to the specified low battery duration at completely different times. The problem here is that UPS 1 may have run down to 5 minutes and is in desperate need of initiating a PowerChute shutdown - but UPS 2 still has 25 minutes of runtime, so no shutdown is initiated. Consequently UPS 1 goes down and takes all the servers with it, and then shuts down UPS 2 as well! What we need to happen is one of two things: either PowerChute initiates the shutdown as soon as either UPS reaches the 20-minute low battery duration setting, and doesn't wait for both; or the UPS with the heavier load expends its entire battery but does not shut down both UPSs, and lets the load be switched across to the UPS that still has runtime remaining. That way, when the UPS that still has runtime reaches its low battery duration, it can proceed with the graceful shutdown via PowerChute. Hope that makes sense; any help is greatly appreciated!

    Read the article

  • SQL Server Installation: Is it 32 or 64 bit?

    - by CapBBeard
    Recently I was performing an OS upgrade on one of our DB servers, moving from Server 2003 to Server 2008. The DBMS is SQL Server 2005. While reinstalling SQL on the new Windows installation, I went to another of our DB servers to verify a couple of settings. Now, I always thought this second server was Server 2003 x64 + SQL 2005 x64 (from what I'd been told), but I now have my doubts. I now suspect that it is in fact only 32-bit SQL; however, I'd like to verify this. Here are some details: The OS is definitely 64-bit. xp_msver shows Platform as NT INTEL X86. SELECT @@VERSION shows Microsoft SQL Server 2005 - 9.00.4035.00 (Intel X86)... However sqlservr.exe is not shown with '*32' in Task Manager; does anyone know why this is the case, if it is in fact 32-bit as claimed? Despite this, it does seem to be running out of the x86 program files folder. If I do the same checks on a confirmed 64-bit installation, they give back the expected 64-bit readings, which suggests that this server really is running only 32-bit SQL. Now, that being the case, the question arises about how much memory this 32-bit install can use. Task Manager reports about 3.5GB memory usage for sqlservr.exe (the server has 16GB physical). I suspect that AWE has not been configured at all, and therefore the server will be significantly under-utilised (remembering that the OS is 64-bit) if SQL is simply using a 32-bit address space. Is this assumption correct? I feel the server should have SQL reinstalled as 64-bit in order to fully utilise the hardware platform; however, it is currently heavily in production, so this will be no easy task. I suspect we may just have to configure AWE correctly and let it be for the time being (unless this is a bad idea?). I apologise that this question is a little vague/lost; I'm no SQL expert, just trying to get a handle on what's going on here.
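
    If the stopgap route is taken, enabling AWE on 32-bit SQL 2005 is normally two sp_configure changes plus granting the service account the "Lock pages in memory" right in Local Security Policy (a sketch; the server name and memory figure are placeholders):

      sqlcmd -S YOURSERVER -E -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'awe enabled', 1; RECONFIGURE;"
      sqlcmd -S YOURSERVER -E -Q "EXEC sp_configure 'max server memory', 12288; RECONFIGURE;"

    A restart of the SQL Server service is needed before the 'awe enabled' setting takes effect.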

    Read the article

  • Intel D2500HN Atom D2500 Doesn't turn on

    - by David W
    I recently bought parts from Amazon to build an embedded PC, and have assembled everything. I have: an Intel D2500HN Mini-ITX motherboard, a Mini-Box Pico-PSU 80, an M350 Universal Mini-ITX enclosure, 2GB of DDR3 memory, and a Kinamax AD-LCD12 LCD-monitor 12V 6A 72W AC adapter power supply. The motherboard gets a light (on the motherboard, not on the Pico-PSU) when I plug it into the power adapter. Furthermore, I see the power switch light come on when I press the power button. However, the display doesn't turn on, and it doesn't seem that the PC actually turns on. Since I'm seeing these lights, I know that the motherboard is getting power. Furthermore, the VGA port is built into the motherboard, so the display connection isn't the issue. I'm just trying to figure out what COULD be the issue aside from a faulty motherboard. I have a diagram of the D2500HN motherboard which labels everything, and have ensured that the power LED and On/Off cables are plugged into the right spots, although to be sure I've tried flipping these two cables around, and also plugging one cable into the other cable's spot and vice versa. Is there anything else you folks think I may be missing, or anything else I can do to troubleshoot this issue before sending the motherboard back?

    Read the article

  • Backup strategy for developer-focused Apple environments?

    - by ewwhite
    It's interesting to see the technological split between structured corporate environments and more developer-driven/startup environments. Some of the Microsoft technologies I take for granted (VSS, Folder Redirection, etc.) simply are not available when managing the increasing number of Apple laptops I see in DevOps shops. I'm interested in centralized and automated backup strategies for a group of 30-40 Apple laptops... How is this typically done safely and securely, assuming these are company-owned machines (versus BYOD)? While Apple has Time Machine, it's geared toward individual computer backups and doesn't seem to work reliably in a group setting. Another issue with these workstations is the presence of Vagrant/VirtualBox VMs on the developers' systems. Time Machine and virtual machines typically don't work well unless the VMs are excluded from the backup set. I'd like a push-based backup process with some flexible scheduling options. I know how to handle the backend storage, but I'm not sure what needs to be presented to the client systems. Due to the nature of the data here, cloud-based backup may not be a viable option. Any suggestions about how you handle this in your environment would be appreciated. Edit: The virtual machine backups are no longer important. They can be excluded from the process and planning.
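
    On the per-machine side, excluding the VM directories from Time Machine can at least be scripted (a sketch; the paths are just the default VirtualBox location and a hypothetical shared folder, and would need adjusting per user):

      tmutil addexclusion "$HOME/VirtualBox VMs"        # sticky exclusion that follows the folder
      sudo tmutil addexclusion -p "/Users/Shared/VMs"   # fixed-path exclusion for a hypothetical shared VM location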

    Read the article

  • Slow performance on VMWare Linux server after Tomcat install

    - by Loftx
    We have a VMware ESXi 4.1 server hosting a number of Linux and Windows guests. Recently a new Linux guest was added to this server and seemed to be performing well. Tomcat and some other applications were then installed on this server, which seems to have caused it to run really slowly without any obvious resource issues. The slow performance includes: the time taken to bring up the password prompt over ssh is now a few seconds when it was previously instantaneous; the time taken to unzip a zip file, previously a few seconds, is now around 30 seconds; the time taken to compile VMware Tools has increased by similar factors. Neither the VMware console nor the monitoring commands report any issues with high CPU or memory usage, but something is obviously slowing the server down somehow. Does anyone have any ideas what may be causing this issue and how it can be resolved? Thanks, Tom. Edit: As per your questions I've looked at some of the performance indicators on both the VM host and VM guest. Firstly I tried reserving the full amount of memory (3GB) for this VM - no other machines on this server have any memory reservation. The swap-in rate and swap-out rate for the VM host and guest are now both zero. Balloon memory on the guest is zero and on the host is 3.5GB (total memory on the host is 12GB). The swap rate for the guest is also zero. Swap used by the host is 200MB on average. Compression and decompression rates for the host and guest are zero. Command aborts for the host are zero. Read latency is very low - maximum 10ms, average 0.8ms. Write latency is higher - a few spikes to 170ms but mostly around 25ms - is this bad? Queue command latency is zero. Physical disk read latency averages 5ms but is often 10ms. Physical disk write latency averages 15ms but is often 20ms. I hope this helps - let me know if you need any more information.
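
    For cross-checking some of the same counters from inside the Linux guest, VMware Tools exposes a couple of them directly (a sketch; assumes VMware Tools or open-vm-tools is installed in the guest):

      vmware-toolbox-cmd stat balloon   # MB of guest memory currently reclaimed by the balloon driver
      vmware-toolbox-cmd stat swap      # MB of guest memory swapped out by the host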

    Read the article

  • Network Explorer Intermittently Fails to Display all Computers in Work Group

    - by graf_ignotiev
    I run a small computer lab of 10 computers and occasionally, when using the network explorer (a.k.a. the Network Browser), some or all of the remote computers fail to appear. If I try to access a remote computer by its name I get an unspecified error (code 0x80004005), but I am still able to access it by the computer's IP address. The strangest part is that the problem will inexplicably go away after waiting a while. Each computer is running Windows 7 x64 Enterprise and has identical hardware, software and configuration. They are all on the same subnet and in the same workgroup. I've spent days researching the problem and have tried the following solutions:
    1. Updated the BIOS, chipset and network adapter drivers
    2. Changed the power settings in the network adapter properties so that the computer will not turn the adapter off
    3. Disabled the Computer Browser service
    4. Changed the DHCP node type to broadcast
    5. Reviewed the Event Viewer logs
    Steps 3 and 4 seem to have helped the problem a little bit, but not completely. I'm beginning to suspect that the problem might lie with our router, a ZyXEL ZyWALL 2WG, as the packets sent by Network Discovery may not be returning in time, but I wanted to get some perspective on the issue before I went any further.
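
    For reference, the node-type change from step 4 typically ends up as a registry value like this (a sketch; 1 = B-node/broadcast, 8 = hybrid, and the DHCP server can also hand the value out as option 046):

      reg add "HKLM\SYSTEM\CurrentControlSet\Services\NetBT\Parameters" /v NodeType /t REG_DWORD /d 1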

    Read the article

  • Increasing Java's heapspace in Tomcat startup script

    - by Ankur
    I want to increase my heap size when using Tomcat. I was told to add the line export CATALINA_OPTS=-Xms16m -Xmx256m; to the startup.sh script. I did so (at the beginning) but got the error export: 24: -Xmx256m: bad variable name. Where am I supposed to add it, and am I doing something else wrong?

      export CATALINA_OPTS=-Xms16m -Xmx256m;

      # Better OS/400 detection: see Bugzilla 31132
      os400=false
      darwin=false
      case "`uname`" in
      CYGWIN*) cygwin=true;;
      OS400*) os400=true;;
      Darwin*) darwin=true;;
      esac

      # resolve links - $0 may be a softlink
      PRG="$0"

      while [ -h "$PRG" ] ; do
        ls=`ls -ld "$PRG"`
        link=`expr "$ls" : '.*-> \(.*\)$'`
        if expr "$link" : '/.*' > /dev/null; then
          PRG="$link"
        else
          PRG=`dirname "$PRG"`/"$link"
        fi
      done

      PRGDIR=`dirname "$PRG"`
      EXECUTABLE=catalina.sh

      # Check that target executable exists
      if $os400; then
        # -x will Only work on the os400 if the files are:
        # 1. owned by the user
        # 2. owned by the PRIMARY group of the user
        # this will not work if the user belongs in secondary groups
        eval
      else
        if [ ! -x "$PRGDIR"/"$EXECUTABLE" ]; then
          echo "Cannot find $PRGDIR/$EXECUTABLE"
          echo "This file is needed to run this program"
          exit 1
        fi
      fi

      exec "$PRGDIR"/"$EXECUTABLE" start "$@"
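
    For what it's worth, that exact error usually comes from the missing quotes: without them, the shell treats -Xmx256m as a second variable name to export. A sketch of the commonly suggested form, placed in bin/setenv.sh (which catalina.sh sources if the file exists, on recent Tomcat versions) rather than edited into startup.sh:

      # $CATALINA_HOME/bin/setenv.sh
      export CATALINA_OPTS="-Xms16m -Xmx256m"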

    Read the article

  • What folders to encrypt with EFS on Windows 7 laptop?

    - by Joe Schmoe
    Since I've been using my laptop more as a laptop recently (carrying it around) I am now evaluating my strategy to protect confidential information in case it is stolen. Keep in mind that my laptop is 6 years old (Lenovo T61 with 8 GB of RAM, 2GHz dual core CPU). It runs Windows 7 fine but it is no speed demon, and it doesn't support the AES instruction set. I've been using a TrueCrypt volume mounted on demand for really important stuff like financial statements forever. Nothing else is encrypted. I just finished my evaluation of EFS and Bitlocker, and took a closer look at TrueCrypt again. I've come to the conclusion that boot partition encryption via Bitlocker or TrueCrypt is not worth the hassle. I may decide in the future to use Bitlocker or TrueCrypt to encrypt one of the data volumes, but at this point I intend to use EFS to encrypt the parts of my hard drive that contain data I wouldn't want exposed. The purpose of this post is to get your feedback about what folders should be encrypted from the general point of view (of course everyone will have something specific in addition). Here is what I thought of so far (will update if I think of something else):
    1) AppData\Local\Microsoft\Outlook - Outlook files
    2) AppData\Local\Thunderbird\Profiles and AppData\Roaming\Thunderbird\Profiles - Thunderbird profiles; not sure yet where exactly the data is stored.
    3) AppData\Roaming\Mozilla\Firefox\Profiles\djdsakdjh.default\bookmarkbackups - Firefox bookmark backup. Is there a separate location for the "main" Firefox bookmark file? I haven't figured it out yet.
    4) Bookmarks for Chrome (don't know where its bookmarks are) and Internet Explorer ($Username\Favorites) - I don't really use them, but why not secure those as well.
    5) The Downloads\, My Documents\ and My Pictures\ folders I don't think I need to encrypt wholesale - say, the latest service pack for Visual Studio doesn't need it. So I will probably create a subfolder called "Secure" in each of these folders and set it to "Encrypted". Anything sensitive I will save in that folder.
    Any other suggestions? Again, this is from the point of view of your "regular office user".
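
    For the mechanics, marking a folder for EFS can be scripted as well as done through Explorer, and backing up the EFS certificate is worth doing at the same time (a sketch; the folder names are hypothetical):

      cipher /e /s:"%USERPROFILE%\Documents\Secure"           # encrypt the folder and everything under it
      cipher /c "%USERPROFILE%\Documents\Secure"              # list the encryption status afterwards
      cipher /x "%USERPROFILE%\Documents\efs-cert-backup"     # back up the EFS certificate and private key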

    Read the article

  • Talk on multiple IRC channels at once?

    - by TwoPixelGrid
    I seem to remember, back in '91 or so, that the console-based ircII implementation on the Solaris box that first got me on the net would let me /join multiple channels on a given network such that, as new channels were joined, they would all start scrolling in the single console view. Let's call it the 'interleaved conversation' chat paradigm. Am I remembering this correctly? More importantly, is there a modern way of doing this in any of the GUI-based clients? I'm surprised this isn't a common desire/feature, because I think it would greatly improve the experience, especially on channels with high SNR. For example, if I'm working on a project I may connect to Freenode and join: #Qt, #OpenGL, #C++. As it is now, with mIRC or XChat, I have to manually flip between tabs just to see what's being said and to reply. What I envision would go more like this (using only 2 channels for simplicity):
    /join #QT #OpenGL
    < [QT] QtChannelUser: Hello TwoPixelGrid.
    < [OpenGL] OpenGLChannelUser: Hi there TwoPixelGrid.
    @QT: Hi QtChannelUser
    @OpenGL: Hello again OpenGLChannelUser
    And this message is going out to all my channels. Do I have to write a new client or is this already out there?

    Read the article

  • Win8/7/XP print spooler not getting along with Zebra ZT230 via WIFI

    - by Jonathan M
    I have a graphics-intensive 4"x6" label I'm printing to the ZT230, and I'm printing multiple (10) copies. When connected via USB, all goes well. However, when connected via wifi, I only get 2 of the labels. A Wireshark capture shows that at some point in the process my computer (presumably my Windows spooler) is sending a reset packet, which, I believe, would pretty much kill the print job. I'm getting the same results on Win8, Win7 and WinXP. The print job was originally generated with Zebra's ZebraDesigner2 software. For easier diagnosis, I captured it to a .prn file. The .prn file can be found here: https://drive.google.com/file/d/0BwxF_9SAkKzLLTF5bUJVT0lESUU/edit?usp=sharing And the Wireshark capture file can be found here: https://drive.google.com/file/d/0BwxF_9SAkKzLTGpSS0ktZW1xV28/edit?usp=sharing And the printer configuration listing: https://docs.google.com/document/d/1zh1Tw4D4yNa2uljOIL1kO2z8se9HK859irpUEwyxlyY/edit?usp=sharing I've started a discussion with Zebra Tech Support, and they're working on it, but I thought I'd toss it out here for more ideas since we're getting kind of stumped. Any ideas why this may be happening?

    Read the article
