Search Results

Search found 9667 results on 387 pages for 'hardware monitoring'.

  • JBoss 5 on AIX 5.3

    - by jess
    I am very new to AIX and system monitoring. Our application currently runs in production on JBoss 5.1 on AIX 5.3. Please see the configuration and system settings below.

    AIX system configuration:
    - OS level: 5.3.9.0 (oslevel -g)
    - Physical memory: 24 GB (svmon -G)
    - Page space: 4 GB (lsps -s)
    - Processors: 3 cores, Processor Type: PowerPC_POWER6, Processor Clock Speed: 4704 MHz (prtconf | grep Processor)
    - Java version: JRE 1.6.0 IBM AIX build pap6460sr10fp1-20120321_01 (SR10 FP1) (java -fullversion)

    JBoss configuration:
    - JBoss 5.1 / JBoss ESB 4.11
    - HornetQ messaging with consumer flow control
    - Java opts: -d64 -Xms2g -Xmx4g -XX:MaxPermSize=1024m

    Sometimes we observe very strange behaviour: JBoss freezes without any error logs, and the server log stops without any further trace. We are also unable to get a thread dump (kill -3); nothing is generated at that point, although kill -3 xxxxx works under normal circumstances. The only option available to us is to restart JBoss, and all messages that were queued during the freeze are processed after the restart. We tried tweaking some of the HornetQ settings in JBoss, thinking the issue was there (see "Hornetq Stuck By Default"), but we had no luck and have been unable to isolate the issue. We are looking at a tool like nmon for monitoring, but have no idea whether it is good enough for this. Please provide some pointers for investigating this issue. Thanks
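
    (For completeness, one way to get thread stacks when kill -3 yields nothing is to capture them from inside the JVM on a schedule, so that stacks from just before a freeze are already on disk. The sketch below only illustrates that idea using the standard java.lang.management API; the class name and the 30-second interval are assumptions, not part of the setup above.)

        import java.lang.management.ManagementFactory;
        import java.lang.management.ThreadInfo;
        import java.lang.management.ThreadMXBean;
        import java.util.Timer;
        import java.util.TimerTask;

        // Hypothetical in-process dumper: writes a full thread dump to stderr
        // every 30 seconds from a daemon timer thread.
        public class PeriodicThreadDumper {
            public static void start() {
                Timer timer = new Timer("thread-dumper", true); // daemon thread
                timer.schedule(new TimerTask() {
                    @Override
                    public void run() {
                        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
                        // Arguments: also report locked monitors and synchronizers.
                        for (ThreadInfo info : mx.dumpAllThreads(true, true)) {
                            // ThreadInfo.toString() truncates very deep stacks, but it
                            // is usually enough to spot BLOCKED or WAITING threads.
                            System.err.print(info);
                        }
                    }
                }, 30000L, 30000L);
            }
        }

    If the dumper itself stops producing output during a freeze, that is useful information in its own right, since it points at something below the application threads.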

  • What speed are Wi-Fi management and control frames sent at?

    - by Bryce Thomas
    There are a bunch of different 802.11 Wi-Fi standards, e.g. 802.11a, 802.11b, 802.11g, 802.11n, etc., that all support different speeds. Wi-Fi frames are generally categorised as one of the following:

    - Data frames - carry the actual application data
    - Control frames - coordinate when it's safe to send / reduce collisions
    - Management frames - handle connection discovery, setup and tear-down (e.g. AP discovery, association, disassociation)

    My question is whether all of these frames, and specifically management frames, are transmitted at the fastest supported speed available, or whether certain classes of frames are transmitted at some lowest-common-denominator speed. I have noticed that when I put an 802.11b/g-only device into monitor mode and capture traffic over the air, I still see management frames (e.g. association/disassociation) being transmitted between my phone and AP, which are both 802.11n, even though 802.11n has a higher transfer rate. So I am imagining one of two possibilities:

    1. My 802.11n phone/AP had to negotiate a slower speed for some reason, and that's why I can see their frames on my 802.11b/g monitoring device.
    2. Management frames (and perhaps control frames too?) are sent at a lower speed, and it's only data frames that are transmitted faster with newer 802.11 standards.

    The reason I would like to know which of these two possibilities (or perhaps a third) is the case is that I want to capture management frames, and I need to know whether using an 802.11b/g card will lead to me missing frames sent at higher speeds than the monitoring card can observe. If management frames are indeed sent at a slower rate, then it's all good. If I just happen to be seeing the management frames because my phone/AP have negotiated a slower rate, then I need to reconsider what card I use for packet capture.

  • How can I parse/transform text log data before it gets captured in SCOM 2007 R2?

    - by Abs
    I'm pretty much a noob with System Center Operations Manager 2007, and I'm probably missing something pretty basic, but I'm stumped anyway. We're setting up monitoring on some of our servers, and we'd like to capture data from some plain text log files (e.g. DNS debug logs, DHCP logs). It looks to me like I can set up a generic text file monitoring rule and get events captured into the main Ops Manager database, but my understanding is that the whole line of text from the plain text log gets captured as one field. In an ideal world, we'd be able to parse or transform that log file data to make it easier to query later. Is this possible? Is it easy? Do I have to buy expensive 3rd-party software to do it? One more thing: it would be even better if there was a way to stuff this data into the Audit Collection Services (ACS) database instead of the main one, but I'll take what I can get. Any help would be greatly appreciated.

  • Virtual memory committed

    - by vinu
    After a server bounce, and after a period of around 40-45 days, we receive continuous "Committed Virtual Memory" alerts indicating swap-space usage on the order of 4 GB. This also causes the application to perform very slowly and to experience a number of stalled transactions.

    Server setup: 4 Tomcat servers (version 7.0.22) that are load balanced (not clustered) by 2 Apache servers. The Apache servers themselves serve static content and route requests to the 4 Tomcat servers.

    Java runtime version: java version "1.6.0_30", Java(TM) SE Runtime Environment (build 1.6.0_30-b12), Java HotSpot(TM) 64-Bit Server VM (build 20.5-b03, mixed mode)

    Memory startup parameters: MEMORY_OPTIONS="-Xms1024m -Xmx1024m -Xss192k -XX:MaxGCPauseMillis=500 -XX:+HeapDumpOnOutOfMemoryError -XX:MaxPermSize=256m -XX:+CMSClassUnloadingEnabled"

    Monitoring: Wily monitoring is available on all production servers; it monitors key server parameters and sends out configurable alert emails based on predefined settings.

    Note: each of the servers also hosts two other separate Tomcat domains that run different applications.

    Areas investigated:
    - There is no heap memory leak, and GC runs fine without any issues over any period of time.
    - The current busy thread count corresponds directly to application usage - weekends and nights have fewer busy threads than business hours.
    - ThreadLocal uses a WeakReference internally. If the ThreadLocal is not strongly referenced, it will be garbage-collected, even though various threads have values stored via that ThreadLocal. Additionally, ThreadLocal values are actually stored in the Thread; if a thread dies, all of the values associated with that thread through a ThreadLocal are collected. If you have a ThreadLocal as a final class member, that's a strong reference, and it cannot be collected until the class is unloaded. But this is how any class member works, and isn't considered a memory leak. The cited problem only comes into play when the value stored in a ThreadLocal strongly references that ThreadLocal - sort of a circular reference. In this case, the value (a SimpleDateFormat) has no backwards reference to the ThreadLocal. There's no memory leak in this code.

    Can anyone please let me know what could be the cause of this and what should be monitored?
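
    (For reference, the ThreadLocal pattern described above typically looks like the sketch below; the class and field names are illustrative, not taken from the application. The SimpleDateFormat stored as the value holds no reference back to the ThreadLocal, which is why this pattern on its own is not a leak.)

        import java.text.SimpleDateFormat;
        import java.util.Date;

        public class DateFormatter {

            // One SimpleDateFormat per thread: SimpleDateFormat is not thread-safe,
            // so each thread lazily creates and then reuses its own instance.
            private static final ThreadLocal<SimpleDateFormat> FORMAT =
                    new ThreadLocal<SimpleDateFormat>() {
                        @Override
                        protected SimpleDateFormat initialValue() {
                            return new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
                        }
                    };

            public static String format(Date date) {
                // The value (a SimpleDateFormat) never references the ThreadLocal,
                // so when a worker thread dies its copy becomes collectable.
                return FORMAT.get().format(date);
            }
        }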

  • Reading log files from web application

    - by Egorinsk
    I want to write a small PHP application for monitoring logs on a Debian server, including syslog logs and Apache/PHP messages. The problem is that the Apache user (www-data) has no access to the /var/log directory. What would be the best way to grant the PHP application access to the logs? Let's assume that the log files can be really large, like hundreds of megabytes. I have some ideas:

    - Write a shell script that is run via sudo and tails the last 512 KB of the log into a separate file that the application can read - that's inefficient, because it forks a new process and reads the data twice.
    - Add www-data to the adm group (which can read the logs) - that's insecure.
    - Start a PHP process via cron every minute to read the logs - that's not great, because it doesn't allow real-time monitoring. Also, the script will run even when I'm not reading logs, and consume CPU time (the server is in the cloud, and I'll have to pay for it).
    - Create a hard link to each log file with lowered permissions - I guess that won't work, because logrotate could recreate the log files and they would change inode number.
    - Start a separate nginx/Apache server under a privileged user that may read the logs.

    Does anyone have a better solution?

  • Windows Small Business Server 2003: SQL timeout in Server Performance Report

    - by tetranz
    I'm the volunteer IT admin at a small school. We have SBS 2003 with about ten desktops. The server performance report is emailed to me daily; it is set up with a wizard in the Monitoring and Performance part of the "Server Management" console. It often fails with a "The page cannot be displayed" error. The event log shows:

        Event Type: Error
        Event Source: ServerStatusReports
        Event Category: None
        Event ID: 1
        Date: 1/16/2011
        Time: 6:03:14 AM
        User: N/A
        Computer: ALPHA
        Description: Server Status Report:
        URL: http://localhost/monitoring/perf.aspx?reportMode=1&allHours=1
        Error Message: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
        Stack Trace:
        at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, TdsParserState state)
        at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, TdsParserState state)
        at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning()
        at System.Data.SqlClient.TdsParser.ReadNetlib(Int32 bytesExpected)
        [plus lots more stack trace]

    This has been happening for years :) and I've never really solved it. It seems to be related to WSUS. When it happens, I run the Update Services "Server Cleanup Wizard". That takes a long time to run; if I haven't run it for a while, it can take 10 hours. I also run the WsusDBMaintenance.sql script (from TechNet, I think), which reindexes the database, etc. Those two things seem to get it working again for a while; recently the "while" has become a couple of weeks. My searching online has revealed lots of people having this problem but no real solution. Does anyone have any good ideas about this? I have to wonder whether something in the WSUS SQL schema is not indexed properly - the time the Server Cleanup Wizard takes seems ridiculous. Thanks

  • How to Monitor Network in Medium-Sized Company?

    - by Kyle Lowry
    I work at a medium-sized company (100+ employees). An issue that has been cropping up is network performance, internet access in particular. We have about 70 or more computers, a mix of Mac OS X and Windows XP & 7 machines. We have several servers (Exchange server, PC file servers, MS SQL, BlackBerry, FTP, Mac server, etc.). There are four main switches, a SonicWall firewall, and probably a couple of routers in the server room, with a dozen or so more scattered around the building. The network structure has grown organically over a number of years, and as far as I know there really isn't a monitoring solution in place. When we experience network issues (slow connections, dropped packets, and so on), our general solution is to power-cycle some hardware or go around to each employee and ask whether they are uploading/downloading any large files. This is really inefficient and time consuming, and it does not let us monitor the network or tackle potential problems proactively. I would like to find a solution that lets me monitor network usage company-wide in real time, ideally with detail down to the individual computer. Given the hodgepodge of equipment and operating systems, what would be the best way to set up some kind of monitoring solution? Hardware, software, restructuring our network architecture?

  • Server Intermittently Inaccessible Externally (but Accessible Internally Continuously)

    - by nicorellius
    I have a CRM on a server on a network. We have a static IP and another, outward-facing server. We use port forwarding to map to the CRM, so that when you go to the IP or the FQDN you get to the CRM: xxx.xxx.xxx.xxx / crm.example.com. Internally, we can access the CRM by going to crm or crm.example.com. Lately, I've been noticing that accessing the server from outside the network times out or gives 503, bad gateway. During that time, I can still SSH (different port, so this works) into the outward-facing computer and access the server just fine. I have a robot monitoring the site, and indeed, via HTTP monitoring, the site is going down periodically. I looked through the Apache access and error logs and nothing stuck out at me, so I'm a bit confused as to what could be going on. I also searched the access logs for 503 and found nothing. When I run tracert from outside the network, it appears the packets make it through the wide-area servers (Comcast city and county servers) and end up dropping at the CRM server's front step. I'm tempted to replace the server because it is older and underpowered, but it would be nice to know what is going on. Any ideas what to do next?

  • Android USB driver for Xperia X10a

    - by mlohbihler
    I've been trying for a couple of hours now, and have hit all of the sites Google found, but I cannot get the Android USB driver on my XP box to talk to my new Xperia X10a. I found the lines that some kind soul posted (and that have been syndicated and repeated widely), but they don't work for me. The idea is to add them to the Google.NTx86 and Google.NTamd64 sections of the android_winusb.inf file:

        ;Xperia X10
        %SingleAdbInterface% = USB_Install, USB\VID_0FCE&PID_E12E
        %CompositeAdbInterface% = USB_Install, USB\VID_0FCE&PID_E12E&MI_01

    I tried a number of variations of the first line, including "Sony Ericsson X10a", which is what XP shows me in both the "Found New Hardware" wizard and the device manager, but no luck. The result is always the same. Here are my steps:

    1. Plug in the phone via USB.
    2. The "Found New Hardware" wizard appears.
    3. Choose "No, not this time" and "Next".
    4. Choose "Install from a list or specific location..." and "Next".
    5. Choose "Search for best driver...", check "Include this location...", and browse to the "usb_driver" folder in the Android SDK installation. Click "Next".
    6. It does a quick search and then says "Cannot install this hardware", "... because the wizard cannot find the necessary software".

    I've tried more things than I can recall now, including deleting registry entries, but it just won't work. Any help would be appreciated at this point. Regards, m@

  • Practical size limitations for RDBMS

    - by grenade
    I am working on a project that must store very large datasets and associated reference data. I have never come across a project that required tables quite this large. I have proved that at least one development environment cannot cope at the database tier with the processing required by the complex queries against views that the application layer generates (views with multiple inner and outer joins, grouping, summing and averaging against tables with 90 million rows). The RDBMS that I have tested against is DB2 on AIX. The dev environment that failed was loaded with 1/20th of the volume that will be processed in production. I am assured that the production hardware is superior to the dev and staging hardware but I just don't believe that it will cope with the sheer volume of data and complexity of queries. Before the dev environment failed, it was taking in excess of 5 minutes to return a small dataset (several hundred rows) that was produced by a complex query (many joins, lots of grouping, summing and averaging) against the large tables. My gut feeling is that the db architecture must change so that the aggregations currently provided by the views are performed as part of an off-peak batch process. Now for my question. I am assured by people who claim to have experience of this sort of thing (which I do not) that my fears are unfounded. Are they? Can a modern RDBMS (SQL Server 2008, Oracle, DB2) cope with the volume and complexity I have described (given an appropriate amount of hardware) or are we in the realm of technologies like Google's BigTable? I'm hoping for answers from folks who have actually had to work with this sort of volume at a non-theoretical level.

  • One-page filter results in a new page in JavaScript

    - by Jake
    I have links set up on one page, and the relationship between the links is parent/child (for example, parent: All; children: Software, Hardware). These links lead the user to a new page that shows results from a populated table. Currently the links all go to similar destinations, differing only by a filter in the URL. The problem is that the destination page also has a JavaScript filter that lets the user choose between All, Software, or Hardware. Understand, basically, that if the URL still says they are on the Software page but they have filtered the page to Hardware, that doesn't look good IMO. So what I was trying to do was make all the links on the initial page go to exactly the same destination and still know, on the new page, which link was clicked, so I can run the JavaScript filter based on that. Is there a way to find that out from JavaScript? I guess I need a way to pass that value to the new page and retrieve it in JavaScript without showing it in the URL, so I can filter the table for the user based on that value.

  • How to TDD Asynchronous Events?

    - by Padu Merloti
    The fundamental question is: how do I create a unit test that needs to call a method, wait for an event to be raised by the class under test, and then call another method (the one that we actually want to test)? Here's the scenario if you have time to read further: I'm developing an application that has to control a piece of hardware. To avoid a dependency on hardware availability, when I create my object I specify that we are running in test mode. When that happens, the class being tested creates the appropriate driver hierarchy (in this case a thin mock layer of hardware drivers). Imagine that the class in question is an Elevator and I want to test the method that gives me the floor number the elevator is on. Here is how my fictitious test looks right now:

        [TestMethod]
        public void TestGetCurrentFloor()
        {
            var elevator = new Elevator(Elevator.Environment.Offline);
            elevator.ElevatorArrivedOnFloor += TestElevatorArrived;
            elevator.GoToFloor(5);
            // Here's where I'm getting lost... I could block
            // until TestElevatorArrived gives me a signal, but
            // I'm not sure it's the best way
            int floor = elevator.GetCurrentFloor();
            Assert.AreEqual(floor, 5);
        }

    Edit: Thanks for all the answers. This is how I ended up implementing it:

        [TestMethod]
        public void TestGetCurrentFloor()
        {
            var elevator = new Elevator(Elevator.Environment.Offline);
            // The handler must hold the lock before pulsing the monitor.
            elevator.ElevatorArrivedOnFloor += (s, e) => { lock (this) { Monitor.Pulse(this); } };
            lock (this)
            {
                elevator.GoToFloor(5);
                if (!Monitor.Wait(this, Timeout))
                    Assert.Fail("Elevator did not reach destination in time");
                int floor = elevator.GetCurrentFloor();
                Assert.AreEqual(floor, 5);
            }
        }

  • Anonymous union definition/declaration in a macro GNU vs VS2008

    - by Alan_m
    I am attempting to alter an IAR-specific header file for an LPC2138 so it can compile with Visual Studio 2008 (to enable compatible unit testing). My problem involves converting register definitions to be hardware independent (i.e. not tied to a memory address). The "IAR-safe" macro is:

        #define __IO_REG32_BIT(NAME, ADDRESS, ATTRIBUTE, BIT_STRUCT) \
            volatile __no_init ATTRIBUTE union \
            { \
                unsigned long NAME; \
                BIT_STRUCT NAME ## _bit; \
            } @ ADDRESS

        // declaration (where __gpio0_bits is a structure that names
        // each of the 32 bits as P0_0, P0_1, etc.)
        __IO_REG32_BIT(IO0PIN, 0xE0028000, __READ_WRITE, __gpio0_bits);

        // usage
        IO0PIN = 0xAA55AA55;
        IO0PIN_bit.P0_5 = 0;

    This is my comparable "hardware-independent" code:

        #define __IO_REG32_BIT(NAME, BIT_STRUCT) \
            volatile union \
            { \
                unsigned long NAME; \
                BIT_STRUCT NAME##_bit; \
            } NAME;

        // declaration
        __IO_REG32_BIT(IO0PIN, __gpio0_bits);

        // usage
        IO0PIN.IO0PIN = 0xAA55AA55;
        IO0PIN.IO0PIN_bit.P0_5 = 1;

    This compiles and works, but quite obviously my "hardware-independent" usage does not match the IAR-safe usage. How do I alter my macro so I can use IO0PIN the same way I do in IAR? I feel this is a simple anonymous-union matter, but multiple attempts and variants have been unsuccessful. Maybe the IAR/GNU compiler supports anonymous unions and VS2008 does not. Thank you.

  • Communicating with all network computers regardless of IP address

    - by Stephen Jennings
    I'm interested in finding a way to enumerate all accessible devices on the local network, regardless of their IP address. For example, on a 192.168.1.X network, if there is a computer with a 10.0.0.X IP address plugged into the network, I want to be able to detect that rogue computer and preferably communicate with it as well. Both computers will be running this custom software. I realize that's a vague description, and a full solution to the problem would be lengthy, so I'm really looking for help finding the right direction to go in ("Look into using class XYZ and ABC in this manner") rather than a full implementation. The reason I want this is that our company ships imaged computers to thousands of customers, each of which has different network settings (most use the same IP scheme, but a large percentage do not, and most do not have DHCP enabled on their networks). Once the hardware arrives, we have a hard time getting it up on the network, especially if the IP scheme doesn't match, since there is no one technically oriented on site. Ideally, I want to design some kind of console to be used from their main workstation which looks out on the network, finds all computers running our software, displays their current IP address, and allows you to change the IP. I know it's possible to do this because we sell a couple of pieces of custom hardware which have exactly this capability (plug the hardware in anywhere and view it from another computer regardless of IP). I'm hoping it's possible in .NET 2.0, but I'm open to using .NET 3.5 or P/Invoke if I have to.

  • How are external memory, internal memory, and cache organized?

    - by goldenmean
    Consider the following system: a hardware board with, say, an ARM Cortex-A8 and a NEON vector coprocessor, running an embedded Linux OS on the Cortex-A8. In this environment, if some application - say, a video decoder - is executing, then:

    1. How is it decided which buffers live in external memory and which are allocated in internal SRAM, etc.?
    2. When one calls calloc/malloc on such a system, is the returned pointer to internal or external memory?
    3. Can a user have buffers allocated in the memory of his choice (internal/external)?
    4. In ARM architectures there is another memory called tightly coupled memory (TCM). What is it, and how can a user enable and use it? Can I declare buffers in this memory?
    5. Do I need to look at the memory map (if any) of the hardware board to understand all the different physical memories present on a typical board?
    6. How much of a role does the OS play in distinguishing these different memories?

    Sorry for the multiple questions, but I think they are all interlinked.

  • Does the Java Memory Model (JSR-133) imply that entering a monitor flushes the CPU data cache(s)?

    - by Durandal
    There is something that bugs me about the Java memory model (if I even understand everything correctly). If there are two threads A and B, there are no guarantees that B will ever see a value written by A unless both A and B synchronize on the same monitor.

    For any system architecture that guarantees cache coherency between threads, there is no problem. But if the architecture does not support cache coherency in hardware, this essentially means that whenever a thread enters a monitor, all memory changes made before must be committed to main memory, and the cache must be invalidated. And it needs to be the entire data cache, not just a few lines, since the monitor has no information about which variables in memory it guards. That would surely impact the performance of any application that needs to synchronize frequently (especially things like job queues with short-running jobs).

    So can Java work reasonably well on architectures without hardware cache coherency? If not, why doesn't the memory model make stronger guarantees about visibility? Wouldn't it be more efficient if the language required information about what is guarded by a monitor? As I see it, the memory model gives us the worst of both worlds: the absolute need to synchronize even if cache coherency is guaranteed in hardware, and on the other hand bad performance on incoherent architectures (full cache flushes). So shouldn't it be either more strict (require information about what is guarded by a monitor) or more lose, restricting potential platforms to cache-coherent architectures? As it is now, it doesn't make too much sense to me. Can somebody clear up why this specific memory model was chosen?

    EDIT: My use of "strict" and "lose" was a bad choice in retrospect. I used "strict" for the case where fewer guarantees are made and "lose" for the opposite. To avoid confusion, it's probably better to speak in terms of stronger or weaker guarantees.
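
    (To make the visibility rule concrete: purely as an illustration, and not taken from the question, the guarantee works like this - a write made by thread A before releasing a monitor is visible to thread B once B acquires that same monitor, as in this minimal sketch.)

        public class SharedCounter {

            private final Object monitor = new Object();
            private int value; // deliberately not volatile

            // Thread A: the write happens-before the release of 'monitor'.
            public void set(int newValue) {
                synchronized (monitor) {
                    value = newValue;
                }
            }

            // Thread B: acquiring the same monitor guarantees it sees A's write.
            public int get() {
                synchronized (monitor) {
                    return value;
                }
            }

            // A plain, unsynchronized read of 'value' from another thread has no
            // visibility guarantee under JSR-133, regardless of the hardware.
        }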

  • Binding the selected value from a combobox to a member of a class.

    - by CM
    I have a combobox that is bound to an instance of a class. I need to get the ID of the user's combobox selection and set a class property equal to it. For example, here is the class:

        public class robot
        {
            private string _ID;
            private string _name;
            private string _configFile;

            [XmlElement("hardware")]
            public hardware[] hardware;

            public string ID
            {
                get { return _ID; }
                set { _ID = value; }
            }

            public string name
            {
                get { return _name; }
                set { _name = value; }
            }

            public string configFile
            {
                get { return _configFile; }
                set { _configFile = value; }
            }
        }

    Now here is the code that binds the combobox to an array of that class; it displays the name of each robot in the array in the combobox:

        private void SetupDevicesComboBox()
        {
            robot[] robot = CommConfig.robot;
            cmbDevices.DataSource = robot;
            cmbDevices.DisplayMember = "name";
            cmbDevices.ValueMember = "ID";
        }

    But now I can't seem to take what the user selects and use it. How do I use the "ID" of what the user selects from the combobox?

        Settings.selectedRobotID = cmbDevices.ValueMember;
        // This just returns "ID", regardless of what is selected.

    I also tried:

        Settings.selectedRobotID = cmbDevices.SelectedItem.ToString();
        // This just returns "CommConfig.robot"

    Thanks

  • Running Firewall (IPCop) on Hyper-V

    - by Loren Charnley
    I currently use IPCop for our corporate firewall & VPN. I am looking to consolidate a number of servers, and am considering including the firewall server in the consolidation. I currently plan on using Server 2008 with Hyper-V for the virtualization. Has anyone out there tried virtualizing IPCop? Is there anything that I should be aware of? In particular, IPCop has somewhat limited hardware support for NICs - what hardware will the VM see for the network card?

  • How is the HP D2700 disk enclosure monitored for alarms via SNMP?

    - by VSAC
    We have an HP D2700 disk enclosure and would like to monitor it (connected to an HP ProLiant DL360G8) for alarms. I have the following questions:

    1. What options are available for reporting D2700 hardware alarms (disk failure, power failure) via SNMP? We understand the D2700 has an Ethernet interface for controllers A and B, and that alarms are available via SNMP through this interface. Can anyone list the actual alarms available via this interface (MIB and alarm list)?
    2. As we have a number of D2700s and would like to minimize the number of physical connections to the switch and the associated IP addresses: is there a mechanism to monitor the D2700 from the SCSI-connected DL360 and raise SNMP alarms from the DL360 for hardware failures on the D2700? If so, can anyone provide details, the actual alarms, and the MIB available via this mechanism?

    Thanks!

  • HP-UX (PA-RISC|Itanium) virtualisation on (x86-64|x86)

    - by Oleksandr Bolotov
    I'm looking for a way to run HP-UX (for educational purposes), but I don't have HP hardware right now. These options are not very suitable for me:

    - HP TestDrive program - it looks like it was discontinued two years ago.
    - Ski - looks like it is only a CPU emulator. Is it worth trying?
    - HPPAQEMU - a patch for an old QEMU, for HPPA-Linux guest OSes only. Is it worth trying?
    - HP-UX Aries - I don't need to virtualize HP-PA on HP Itanium.

    In short, this question is about using HP-UX without HP hardware.

  • Mac OS X Server 10.6 - Apple's software mirrored RAID worth it?

    - by Arko
    Hi, I am installing an Intel Xserve (quad-core Xeon) with Snow Leopard Server (10.6) on two 80 GB 7200 rpm SATA HDs. I created a mirrored RAID set from those two drives using Disk Utility, and all went fine. I then asked myself whether this is really a good idea. I know that a hardware RAID system would be better, but what about this software RAID? Do you have any feedback on it? Will it work fine if one HD breaks down? Does it affect performance?

    [UPDATE] In short: hardware RAID is better than software RAID, which is better than none. Thank you all for the answers, they were very helpful - especially Gordon's script to monitor failures, as Apple's software RAID is pretty silent about a drive failure.

  • Very High Interrupt CPU usage in Win2k3 VM on vSphere

    - by Darragh
    Hi, I've been testing some software in a server virtual environment and I've noticed a huge amount of CPU usage on the Interrupts process. My question is: how does this relate to the virtual hardware platform, given that the rate is a lot lower on a real system? Somehow the hypervisor scheduler works hard to overcome this problem, but not as well as real hardware does. The obvious suspects are high I/O and disk access, but this application mostly just sits and works in memory. If anyone has experienced the same, please let me know. Thanks in advance.

    (Screenshot: Process Explorer)

  • Suggestions for SOHO networking gear

    - by jakemcgraw
    I'm a software developer in my day-to-day job but have landed a contract position to spec out and install the computer equipment for a small office. Ease of use (easy installation, low maintenance and good support) is priority number one; it supersedes price by a wide margin. The installation we have in mind would support up to ten workstations. I was originally going to go with Netgear hardware for firewall and switch duties:

        Firewall: NETGEAR UTM25-100NAS
        Switch: NETGEAR GS724T

    but I have been told that SonicWall firewalls are easier to configure. So, sysadmins, if ease of use were priority number one, what hardware would you purchase for firewall and switch duties?

  • Getting USB boot to work in SmartOS on HP ProLiant N40L

    - by user126579
    I recently downloaded SmartOS and tried running it on my HP ProLiant N40L, but it always fails on boot. After dd'ing the image to the USB stick, I plug it into the internal USB header and turn the machine on. After selecting from GRUB, it displays the following:

        , bss=0x0

    It sits there for 2-4 minutes, then finally boots the OS and displays the following:

        WARNING: Couldn't read ACPI SRAT table from BIOS. lgrp support will be limited to one group.
        SunOS Release 5.11 Version joyent_20120614T184600Z 64-bit
        Copyright (c) 2010-2012, Joyent Inc. All rights reserved.
        WARNING: kvm: no hardware support

    After that, it hangs. I've tried this with two different USB sticks. I've seen some mentions on the SmartOS website of people running it on an N40L and booting from USB, so maybe it's just broken hardware? Has anyone gotten this working?
