Search Results

Search found 8094 results on 324 pages for 'campus solutions'.


  • C# Different class objects in one list

    - by jeah_wicer
    I have looked around for a while now to find a solution to this problem. I found several ways that could solve it, but to be honest I didn't know which of them would be considered the "right" C# or OOP way of solving it. My goal is not only to solve the problem but also to develop a good set of code standards, and I'm fairly sure there's a standard way to handle this. Let's say I have 2 types of printer hardware with their respective classes and ways of communicating: PrinterType1 and PrinterType2. I would also like to be able to add another type later on if necessary. One step up in abstraction, they have much in common. It should be possible to send a string to each one of them, for example. They both have variables in common and variables unique to each class (one, for instance, communicates via COM port and has such an object, while the other one communicates via TCP and has such an object). I would, however, like to just implement a List of all those printers and be able to go through the list and perform things like "Send(string message)" on all printers regardless of type. I would also like to access variables like "PrinterList[0].Name" that are the same for both objects; however, I would also, in some places, like to access data that is specific to the object itself (for instance in the settings window of the application, where the COM port name is set for one object and the IP/port number for another). So, in short, something like: In common: Name, Send(). Specific to PrinterType1: Port. Specific to PrinterType2: IP. And I wish to, for instance, call Send() on all objects regardless of type and of the number of objects present. I've read about polymorphism, generics, interfaces and such, but I would like to know how this, in my eyes basic, problem would typically be dealt with in C# (and/or OOP in general). I actually did try to make a base class, but it didn't quite seem right to me. For instance, I have no use for a "string Send(string Message)" function in the base class itself. So why would I define one there that needs to be overridden in the derived classes when I would never use the function in the base class in the first place? I'm really thankful for any answers. People around here seem very knowledgeable and this place has provided me with many solutions earlier. Now I finally have an account to answer and vote with too. EDIT: To explain further, I would also like to be able to access the members of the actual printer type. For instance the Port variable in PrinterType1, which is a SerialPort object. I would like to access it like PrinterList[0].Port.Open() and have access to the full range of functionality of the underlying port. At the same time I would like to call generic functions that work in the same way for the different objects (but with different implementations): foreach (printer in Printers) printer.Send(message)
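
    A minimal sketch of the usual approach (the class and member names below are illustrative, not taken from the question): put the shared members in an abstract base class, keep the hardware-specific objects on the derived classes, and cast only where the specifics are needed, e.g. in a settings dialog. Declaring Send() as abstract in the base is normal even if the base never uses it itself; it only states the contract that every printer can be sent a string.

        using System;
        using System.Collections.Generic;
        using System.IO.Ports;

        // Common contract: everything callers need regardless of printer type.
        public abstract class Printer
        {
            public string Name { get; set; }
            public abstract void Send(string message); // each type supplies its own transport
        }

        public class SerialPrinter : Printer
        {
            public SerialPort Port = new SerialPort("COM1"); // serial-specific object stays here

            public override void Send(string message)
            {
                if (!Port.IsOpen) Port.Open();
                Port.WriteLine(message);
            }
        }

        public class NetworkPrinter : Printer
        {
            public string Ip { get; set; } // TCP-specific data stays here

            public override void Send(string message)
            {
                // open a TCP connection to Ip and write the message here
            }
        }

        // Usage: shared calls work on the list; specifics need a cast.
        // var printers = new List<Printer> { new SerialPrinter(), new NetworkPrinter() };
        // foreach (var p in printers) p.Send("hello");
        // var serial = printers[0] as SerialPrinter;
        // if (serial != null) serial.Port.Open();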

    Read the article

  • Parsing IP addresses in PHP

    - by user2938780
    I am trying to get the number of active connections (in real time) from a log file, by counting each connected IP that currently has a "play" status, but instead it's giving me the total number of IPs that ever had a play status. The number doesn't decrease at all; it keeps increasing as soon as a new IP is added. How can I fix it? Here is my code: $stringToParse = file_get_contents('wowzamediaserver_access.log'); preg_match_all('/\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}/', $stringToParse, $matchOP); echo "Number of connections: ".sizeof(array_unique($matchOP[0])); HERE IS THE LOG: 2013-10-30 14:54:36 CET stop stream INFO 200 account1 - _defaultVHost_ account1 _definst_ 149.21 streamURL 1935 fullStreamURL IP_ADDRESS_1 http (cupertino) - 2013-10-30 14:56:12 CET play stream INFO 200 account2 - _defaultVHost_ account1 _definst_ 149.21 streamURL 1935 fullStreamURL IP_ADDRESS_2 rtmp (cupertino) - 2013-10-30 14:58:23 CET stop stream INFO 200 account2 - _defaultVHost_ account1 _definst_ 149.21 streamURL 1935 fullStreamURL IP_ADDRESS_2 rtmp (cupertino) - 2013-10-30 14:58:39 CET play stream INFO 200 account1 - _defaultVHost_ account1 _definst_ 149.21 streamURL 1935 fullStreamURL IP_ADDRESS_1 http (cupertino) - 2013-10-30 14:59:12 CET play stream INFO 200 account2 - _defaultVHost_ account1 _definst_ 149.21 streamURL 1935 fullStreamURL IP_ADDRESS_2 rtmp (cupertino) - I want to count an IP whenever it has a "PLAY" status and not count it whenever it's "STOP": 2013-10-30 14:59:00 CET play stream INFO 200 tv2vielive - _defaultVHost_ tv2vielive _definst_ 0.315 [any] 1935 rtmp://tv2vie.zion3cloud.com:1935/tv2vielive 78.247.255.186 rtmp http://www.tv2vie.org/swf/flowplayer-3.2.16.swf WIN 11,7,700,202 92565864 3576 3455 1 0 0 0 tv2vielive - - - - - rtmp://tv2vie.zion3cloud.com:1935/tv2vielive/tv2vielive rtmp://tv2vie.zion3cloud.com:1935/tv2vielive/tv2vielive - rtmp://tv2vie.zion3cloud.com:1935/tv2vielive - 2013-10-30 14:59:04 CET stop stream INFO 200 tv2vielive - _defaultVHost_ tv2vielive _definst_ 4.75 [any] 1935 rtmp://tv2vie.zion3cloud.com:1935/tv2vielive 78.247.255.186 rtmp http://www.tv2vie.org/swf/flowplayer-3.2.16.swf WIN 11,7,700,202 92565864 3576 512571 1 7222 0 503766 tv2vielive - - - - - rtmp://tv2vie.zion3cloud.com:1935/tv2vielive/tv2vielive rtmp://tv2vie.zion3cloud.com:1935/tv2vielive/tv2vielive - rtmp://tv2vie.zion3cloud.com:1935/tv2vielive - Any solutions? I have even tried the solution from the first answer but I am getting "0" play connections: $stringToParse = file_get_contents('wowzamediaserver_access.log'); $pattern = '~^.* play.* ( ([0-9]{1,3}+\.){3,3}[0-9]{1,3}).*$~m'; preg_match_all($pattern, $stringToParse, $matches); echo count($matches[1]) . ' play actions'; Whenever I use my own code, I get "Number of connections: xxxxx" (the total count of IPs). My concern is that I only need the count of IPs with PLAY status; if an IP changes to STOP, it should not be counted.
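
    A minimal sketch of one way to get a live count, assuming the log format shown above (the event name appears early on each line and the client address appears later on the same line; the regular expression and field assumptions may need adjusting for the real Wowza log): remember only the most recent event per IP, then count the IPs whose last event is "play".

        <?php
        $lastStatus = array(); // IP address => last seen event ("play" or "stop")
        $lines = file('wowzamediaserver_access.log', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

        foreach ($lines as $line) {
            // capture the event name and the first IPv4 address that follows it on the line
            if (preg_match('/\b(play|stop)\b.*?(\d{1,3}(?:\.\d{1,3}){3})/', $line, $m)) {
                $lastStatus[$m[2]] = $m[1]; // later lines overwrite earlier ones
            }
        }

        // an IP counts only while its most recent event is "play"
        $active = count(array_keys($lastStatus, 'play', true));
        echo "Number of connections: " . $active;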

    Read the article

  • VB.Net 2008 IDE hanging - MSVB7.dll eating 100% CPU when editing code

    - by Andrew Backer
    I am having a problem with msvb7.dll eating 50%+ CPU on my dual-core system. This usually lasts 10-30 seconds or so, during which time the IDE is non-responsive. It occurs when I do pretty much anything in the text editor, and can be replicated by simply adding blank lines to a function and then deleting them. Or pasting some code. Or... lots of things. SP1 is installed. I had DevExpress' Refactor/CodeRush, components, and CodeIt.Right installed, but have removed all 3 of them. (I had installed the latest version of Refactor Pro! (9.3.4), perhaps the day before.) I have tried a VS.NET repair. There is a KB that referenced a CPU problem with VB, but that fix was included in SP1. Also: The solution consists of ~30 VB projects and 2 C# projects. 8 other developers aren't having any issues with this (or at least not the SAME issues; we all have 'em). A clean get from TFS was done. The project builds properly, and I can even debug. This doesn't seem to happen on really small solutions, but perhaps it does and it just goes away super quick. Any clues at all as to what might be causing this, or how to fix it? I REALLY don't want to lose another day uninstalling and reinstalling and patching and so on =) If that even fixes it. Here is the stack trace (Process Explorer) that I get from the threads window while msvb7.dll is churning. --- title in process explorer [threads] tab for process -------- cpu:49.28% cswitch delta: 300 to 3500 startaddress: [msvb7.dll+0x4218c] msvb7.dll version: 9.0.30729.1 --- actual stack trace ------- ntkrnlpa.exe!KiUnexpectedInterrupt+0x121 ntkrnlpa.exe!ZwYieldExecution+0x1c56 ntkrnlpa.exe!KiDispatchInterrupt+0x72e NDIS.sys!NdisFreeToBlockPool+0x15e1 // shortened stack trace. all of these are from msvb7, msvb7.dll+0x46ce7 <- 0x2676a <- 0x2698e <- 0x38031 <- 0x2659f <- 0x26644 msvb7.dll+0x25f29 <- 0x2ac7a <- 0x27522 <- 0x274a0 <- 0x2b5ce <- 0x2b6e4 msvb7.dll+0x67d0a <- 0x68551 <- 0x6817b <- 0x681f0 <- 0x67c38 <- 0x65fa8 msvb7.dll+0x666c6 <- 0x6672c <- 0x6673d <- 0x6677c <- 0x667b4 <- 0x63c77 msvb7.dll+0x63e97 <- 0x42c3a <- 0x42bc1 <- 0x41bd7 kernel32.dll!GetModuleFileNameA+0x1b4 This is the list of stuff from "copy info" in Help-About, shortened to a reasonable length. Microsoft Visual Studio 2008 | Version 9.0.30729.1 SP Microsoft Visual Studio 2008 Professional Edition - ENU Service Pack 1 (KB945140) KB945140 Microsoft .NET Framework | Version 3.5 SP1 Microsoft Visual Basic 2008 Microsoft Visual C# 2008 Microsoft Visual F# for Visual Studio 2008 Microsoft Visual Studio 2008 Team Explorer | Version 9.0.30729.1 Microsoft Visual Studio 2008 Tools for Office Microsoft Visual Web Developer 2008 Hotfix for Microsoft Visual Studio 2008 Professional Edition - ENU KB944899, KB945282, KB946040, KB946308, KB946344, KB946581, KB947171 KB947173, KB947180, KB947540, KB947789, KB948127, KB946260, KB946458, KB948816 Microsoft Recipe Framework Package 8.0 Process Editor WIT Designer 1.4.0.0 Process Editor for Microsoft Visual Studio Team Foundation Server, Version 1.4.0.0 tangible T4 Editor 9.0 tangible T4 Text Template Editor - T4 Editor tangibleprojectsystem 1.0 Team Foundation Server Power Tools October 2008 SQL Prompt 4.0 (disabled)

    Read the article

  • SUSE EC2 Problem - zypper - Permission denied

    - by phuu
    I'm trying to use zypper to install gcc on my Amazon EC2 instance running SUSE. When I try zypper in gcc I get: Retrieving repository 'SLE11-SDK-SP1' metadata [] Permission to access 'http://eu-west-1-ec2-update.susecloud.net/repo/install/SLE11-SDK-SP1/sle-11-i586/media.1/media' denied. Abort, retry, ignore? [a/r/i/?] (a): i Retrieving repository 'SLE11-SDK-SP1' metadata [error] Repository 'SLE11-SDK-SP1' is invalid. Can't provide /media.1/media : User-requested skipping of a file Please check if the URIs defined for this repository are pointing to a valid repository. Warning: Disabling repository 'SLE11-SDK-SP1' because of the above error. Retrieving repository 'SLE11-SDK-SP1-Updates' metadata [|] Permission to access 'http://eu-west-1-ec2-update.susecloud.net/repo/update/SLE11-SDK-SP1-Updates/sle-11-i586/repodata/repomd.xml' denied. Abort, retry, ignore? [a/r/i/?] (a): i Retrieving repository 'SLE11-SDK-SP1-Updates' metadata [error] Repository 'SLE11-SDK-SP1-Updates' is invalid. Can't provide /repodata/repomd.xml : User-requested skipping of a file Please check if the URIs defined for this repository are pointing to a valid repository. Warning: Disabling repository 'SLE11-SDK-SP1-Updates' because of the above error. Retrieving repository 'SLES11-Extras' metadata [/] Permission to access 'http://eu-west-1-ec2-update.susecloud.net/repo/update/SLES11-Extras/sle-11-i586/repodata/repomd.xml' denied. Abort, retry, ignore? [a/r/i/?] (a): r Permission to access 'http://eu-west-1-ec2-update.susecloud.net/repo/update/SLES11-Extras/sle-11-i586/repodata/repomd.xml' denied. Abort, retry, ignore? [a/r/i/?] (a): zypper in gcc Invalid answer 'zypper in gcc'. [a/r/i/?] (a): a Retrieving repository 'SLES11-Extras' metadata [error] Repository 'SLES11-Extras' is invalid. Can't provide /repodata/repomd.xml : Please check if the URIs defined for this repository are pointing to a valid repository. Warning: Disabling repository 'SLES11-Extras' because of the above error. Retrieving repository 'SLES11-SP1' metadata [-] Permission to access 'http://eu-west-1-ec2-update.susecloud.net/repo/install/SLES11-SP1/sle-11-i586/media.1/media' denied. Abort, retry, ignore? [a/r/i/?] (a): a Retrieving repository 'SLES11-SP1' metadata [error] Repository 'SLES11-SP1' is invalid. Can't provide /media.1/media : Please check if the URIs defined for this repository are pointing to a valid repository. Warning: Disabling repository 'SLES11-SP1' because of the above error. Retrieving repository 'SLES11-SP1-Updates' metadata [] Permission to access 'http://eu-west-1-ec2-update.susecloud.net/repo/update/SLES11-SP1-Updates/sle-11-i586/repodata/repomd.xml' denied. I've searched for the problem and this thread came up, but it offered no solutions. I've tried sces-activate. Am I doing something wrong? I should say I'm very new to this, and I admit I don't really know what I'm doing, but I'm trying to learn about setting up and running a server and so I thought I'd throw myself in at the deep(ish) end. Thanks for reading.
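
    For later readers, a hedged diagnostic sketch (nothing here is specific to this instance): repeated "permission denied" responses like these usually mean the update server is refusing the instance rather than zypper being misconfigured, so it is worth checking what the repositories point at and what the server actually returns before touching zypper itself.

        # show the configured repositories and the URIs they point at
        zypper lr -u

        # ask the update server directly; an HTTP 403 here means the instance
        # is not being recognised/authorised, which is a registration problem,
        # not a package-manager problem
        curl -I http://eu-west-1-ec2-update.susecloud.net/repo/update/SLES11-SP1-Updates/sle-11-i586/repodata/repomd.xml

        # refresh the repositories and retry once access is restored
        zypper ref
        zypper in gcc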

    Read the article

  • Dealing with HTTP w00tw00t attacks

    - by Saif Bechan
    I have a server with Apache and I recently installed mod_security2 because I get attacked a lot by this: My Apache version is Apache v2.2.3 and I use mod_security2.c. These were the entries from the error log: [Wed Mar 24 02:35:41 2010] [error] [client 88.191.109.38] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:) [Wed Mar 24 02:47:31 2010] [error] [client 202.75.211.90] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:) [Wed Mar 24 02:47:49 2010] [error] [client 95.228.153.177] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:) [Wed Mar 24 02:48:03 2010] [error] [client 88.191.109.38] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:) Here are the errors from the access_log: 202.75.211.90 - - [29/Mar/2010:10:43:15 +0200] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 392 "-" "-" 211.155.228.169 - - [29/Mar/2010:11:40:41 +0200] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 392 "-" "-" 211.155.228.169 - - [29/Mar/2010:12:37:19 +0200] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 392 "-" "-" I tried configuring mod_security2 like this: SecFilterSelective REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind" SecFilterSelective REQUEST_URI "\w00tw00t\.at\.ISC\.SANS" SecFilterSelective REQUEST_URI "w00tw00t\.at\.ISC\.SANS" SecFilterSelective REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind:" SecFilterSelective REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind:\)" The thing in mod_security2 is that SecFilterSelective cannot be used; it gives me errors. Instead I use rules like this: SecRule REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind" SecRule REQUEST_URI "\w00tw00t\.at\.ISC\.SANS" SecRule REQUEST_URI "w00tw00t\.at\.ISC\.SANS" SecRule REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind:" SecRule REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind:\)" Even this does not work. I don't know what to do anymore. Does anyone have any advice? Update 1: I see that nobody can solve this problem using mod_security. So far, using iptables seems like the best option, but I think the file will become extremely large because the IP changes several times a day. I came up with 2 other solutions; can someone comment on whether they are good or not? The first solution that comes to mind is excluding these attacks from my Apache error logs. This would make it easier for me to spot other urgent errors as they occur, without having to sift through a long log. The second option is better, I think: blocking hosts whose requests are not sent in the correct way. In this example the w00tw00t attack is sent without a hostname, so I think I can block hosts whose requests are not in the correct form. Update 2: After going through the answers I came to the following conclusions. Custom logging for Apache will consume some unnecessary resources, and if there really is a problem you will probably want to look at the full log with nothing missing. It is better to just ignore the hits and concentrate on a better way of analyzing your error logs; using filters for your logs is a good approach for this. Final thoughts on the subject: The attack mentioned above will not reach your machine if you at least have an up-to-date system, so there are basically no worries. It can be hard to filter out all the bogus attacks from the real ones after a while, because both the error logs and access logs get extremely large. Preventing this from happening in any way will cost you resources, and it is good practice not to waste your resources on unimportant stuff. The solution I use now is Linux logwatch. It sends me summaries of the logs, and they are filtered and grouped. This way you can easily separate the important from the unimportant. Thank you all for the help, and I hope this post can be helpful to someone else too.
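
    For later readers, two hedged sketches of the approaches discussed above (the rule ID, status code and log path are placeholders, and ModSecurity 2 syntax is assumed). Note that Apache itself may reject these malformed requests with a 400 before ModSecurity ever sees them, which would be consistent with the rules above appearing to have no effect.

        # mod_security2: a SecRule needs an action list, otherwise it only matches
        SecRuleEngine On
        SecRule REQUEST_URI "w00tw00t\.at\.ISC\.SANS" "id:1000001,phase:1,t:none,deny,status:403,nolog"

        # plain Apache alternative: keep the noise out of the access log
        # (the error log cannot be filtered with env= conditions)
        SetEnvIf Request_URI "w00tw00t" dontlog
        CustomLog /var/log/apache2/access.log combined env=!dontlog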

    Read the article

  • Bypass BIOS password set by faulty Toshiba firmware on Satellite A55 laptop?

    - by Brian
    How can the CMOS be cleared on the Toshiba Satellite A55-S1065? I have this 7-year-old laptop that has been crippled by a glitch in its BIOS: 'A "Password =" prompt may be displayed when the computer is turned on, even though no power-on password has been set. If this happens, there is no password that will satisfy the password request. The computer will be unusable until this problem is resolved. [..] The occurrence of this problem on any particular computer is unpredictable -- it may never happen, but it could happen any time that the computer is turned on. [..] Toshiba will cover the cost of this repair under warranty until Dec 31, 2010.' -Toshiba As they stated, this machine is "unusable." The escape key does not bypass the prompt (nor does any other key), so no operating system can be booted and no firmware updates can be installed. After doing some research, I found solutions that have been suggested for various Toshiba Satellite models afflicted by this glitch: "Make arrangements with a Toshiba Authorized Service Provider to have this problem resolved." -Toshiba (same link). Even prior to the expiration of Toshiba's support ("repair under warranty until Dec 31, 2010"), there were reports that this solution is prohibitively expensive, with labor charges accruing even when the laptop is still under warranty, and other reports that are generally discouraging: "They were unable to fix it and the guy who worked on it said he couldn’t find the jumpers on the motherboard to clear the BIOS. I paid $39 for my troubles and still have the password problem." -Steve. Since the cost of the repair can now exceed the value of the hardware, it would seem this is a DIY solution, or a non-solution (i.e. the hardware is trash). Build a Toshiba parallel loopback by stripping and soldering the wires on a DB25 plug to connect these pins: 1-5-10, 2-11, 3-17, 4-12, 6-16, 7-13, 8-14, 9-15, 18-25. -CGSecurity. According to a list of supported models on pwcrack, this will likely not work for my Satellite A55-1065 (as well as many other models of similar age). -pwcrack Disconnect the laptop battery for an extended period of time. Doesn't work: the laptop sat in a closet for several years without the battery connected, and I forgot about the whole thing for a while. The poor thing. Clear the CMOS by setting the proper jumper, by removing the CMOS (RTC) battery, or by short-circuiting a (hidden?) jumper that looks like a pair of solder marks -various sources for various Satellite models: Satellite A105: "you will see C88 clearly labeled right next the jack that the wireless card plugs into. There are two little solder squares (approx 1/16") at this location" -kerneltrap Satellite 1800: "Underneath the RAM there is black sticker, peel off the black sticker and you will reveal two little solder marks which are actually 'jumpers'. Very carefully hold a flat-head screwdriver touching both points and power on the unit briefly, effectively 'shorting' this circuit." -shadowfax2020 Satellite L300: "Short the B500 solder pads on the system board." -Lester Escobar Satellite A215: "Short the B500 solder pads on the system board." -fixya Clearing the CMOS could resolve the issue, but I cannot locate a jumper or a battery on this board. Nothing that looks remotely like a battery can be removed (everything is soldered). I have looked closely at the area around the memory and do not see any obvious solder pads that could be a secret jumper. Here are pictures (click for full resolution): Where is the jumper (or the solder pads) to short-circuit and wipe the CMOS on this board? Possibly related questions: Remove Toshiba laptop BIOS password? Password Problem Toshiba Satellite.

    Read the article

  • Internal drives vs USB-3 with external SSD or eSata with External SSD

    - by normstorm
    I have a need to carry VMWare virtual machines with me for work. These are very large files (each VM is 20GB or more) and I carry around about 40 to 50 VM's to simulate different software configurations for different client needs. Key point: they won't fit on the internal hard drive of my current laptop. I currently execute the VM's from an external 7200 RPM 2.5" USB-2 drive. I keep copies of the VM's on other 5400 RPM external USB-2 drives. The VM's work from this drive, but they are slow, costing me much time and frustration. It can take upwards of 30 minutes just to make a copy of one of the VM's. They can take upwards of 10-15 minutes to fully launch and then they operate sluggishly. I am buying a new laptop (Core i7, 8GB RAM and other high-end specs). I intend to buy an SSD for the O/S volume (C:). This SSD will not be large enough to hold the VM's. I have always wanted a second internal hard drive to operate the VM's from. To have two hard drives, though, I am finding that I will have to go to a 17" laptop, which would be bulky/heavy. I am instead considering purchasing a 15" laptop with either an eSATA port or USB-3 ports and then purchasing two external drives. One of the drives might be an external SSD (maybe OCZ brand) for operating the VM's and the other a 7200 RPM 1TB hard drive for carrying around the VM's not currently in use. The question is which options would give me the biggest bang for the buck and for the weight: 1) 2nd internal SSD. This would mean buying a 17" laptop with two drive "bays". The first bay would hold an SSD drive for the C: drive. I would leave the second bay empty from the manufacturer and then purchase/install an aftermarket SSD drive. This second SSD drive would have to be very large (256 GB), which would be expensive. I would still also need another external hard drive for carrying around the VM's not in use. 2) 2nd internal hard drive - 7200 RPM. Again, a 17" laptop would be required, but there are models available with one SSD drive for the C: drive and a second 7200 RPM hard drive. The second drive could probably be large enough to hold the VM's in use as well as those not in use. But would it be fast enough to drive the VM's? 3) USB-3 with external SSD. I could buy a 15" laptop with an SSD drive for the C: drive and a second hard drive for general files. I would operate the VM's from an external USB-3 SSD drive and have a third USB-3 external 7200 RPM drive for holding the VM's not in use. 4) eSATA with external SSD. Ditto, just eSATA instead of USB-3. 5) USB-3 with external 7200 RPM drive. Ditto, but the drive running the VM's would be a USB-3 attached 7200 RPM drive rather than an SSD. 6) eSATA with external 7200 RPM drive. Ditto, but the drive running the VM's would be an eSATA attached 7200 RPM drive rather than an SSD. Any thoughts on this and any creative solutions?

    Read the article

  • How Do I Enable My Ubuntu Server To Host Various SSL-Enabled Websites?

    - by Andy Ibanez
    I have looked around for a few hours now, but I can't get this to work. The main problem I'm having is that only one out of two sites works. I have my website, which will mostly be used for an app. It's called atajosapp.com. atajosapp.com will have three main sites: www.atajosapp.com <- homepage for the app; auth.atajosapp.com <- login endpoint for my API (needs SSL); api.atajosapp.com <- main endpoint for my API (needs SSL). If you attempt to access api.atajosapp.com, it works. It will throw you a 403 error and a JSON output, but that's fully intentional. If you try to access auth.atajosapp.com, however, the site simply doesn't load. Chrome complains with: The webpage at https://auth.atajosapp.com/ might be temporarily down or it may have moved permanently to a new web address. Error code: ERR_TUNNEL_CONNECTION_FAILED But the website IS there. If you try to access www.atajosapp.com or any other HTTP site, it connects fine. It just doesn't like dealing with more than one HTTPS website, it seems. The VirtualHost for api.atajosapp.com looks like this: <VirtualHost *:443> DocumentRoot /var/www/api.atajosapp.com ServerName api.atajosapp.com SSLEngine on SSLCertificateFile /certificates/STAR_atajosapp_com.crt SSLCertificateKeyFile /certificates/star_atajosapp_com.key SSLCertificateChainFile /certificates/PositiveSSLCA2.crt </VirtualHost> auth.atajosapp.com looks very similar: <VirtualHost *:443> DocumentRoot /var/www/auth.atajosapp.com ServerName auth.atajosapp.com SSLEngine on SSLCertificateFile /certificates/STAR_atajosapp_com.crt SSLCertificateKeyFile /certificates/star_atajosapp_com.key SSLCertificateChainFile /certificates/PositiveSSLCA2.crt </VirtualHost> Now, I have found many websites that talk about possible solutions. At first, I was getting a message like this: _default_ VirtualHost overlap on port 443, the first has precedence But after googling for hours, I managed to solve it by editing both apache2.conf and ports.conf. This is the last thing I added to ports.conf: <IfModule mod_ssl.c> NameVirtualHost *:443 # SSL name based virtual hosts are not yet supported, therefore no # NameVirtualHost statement here NameVirtualHost *:443 Listen 443 </IfModule> Still, right now only api.atajosapp.com and www.atajosapp.com are working. I still can't access auth.atajosapp.com. When I check the error log, I see this: Init: Name-based SSL virtual hosts only work for clients with TLS server name indication support (RFC 4366) I don't know what else to do to make both sites work fine on this. I purchased a wildcard SSL certificate from Comodo that supposedly secures *.atajosapp.com, so after hours of trying and googling, I don't know what's wrong anymore. Any help will be really appreciated. EDIT: I just ran the apachectl -t -D DUMP_VHOSTS command and this is the output. I can't make much sense of it: root@atajosapp:/# apachectl -t -D DUMP_VHOSTS apache2: Could not reliably determine the server's fully qualified domain name, using atajosapp.com for ServerName [Thu Nov 07 02:01:24 2013] [warn] NameVirtualHost *:443 has no VirtualHosts VirtualHost configuration: wildcard NameVirtualHosts and _default_ servers: *:443 is a NameVirtualHost default server api.atajosapp.com (/etc/apache2/sites-enabled/api.atajosapp.com:1) port 443 namevhost api.atajosapp.com (/etc/apache2/sites-enabled/api.atajosapp.com:1) port 443 namevhost auth.atajosapp.com (/etc/apache2/sites-enabled/auth.atajosapp.com:1) *:80 is a NameVirtualHost default server atajosapp.com (/etc/apache2/sites-enabled/000-default:1) port 80 namevhost atajosapp.com (/etc/apache2/sites-enabled/000-default:1)
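
    A hedged sketch of how this is commonly arranged on Apache 2.2 with SNI; the duplicated NameVirtualHost *:443 lines and the "NameVirtualHost *:443 has no VirtualHosts" warning in the dump suggest the directive is being declared more than once. The domain names and certificate paths below are reused from the post; everything else is an assumption to adapt.

        # /etc/apache2/ports.conf -- declare the name-based SSL host exactly once
        <IfModule mod_ssl.c>
            NameVirtualHost *:443
            Listen 443
        </IfModule>

        # each file in sites-enabled/ then contains only its own vhost, e.g.
        <VirtualHost *:443>
            ServerName auth.atajosapp.com
            DocumentRoot /var/www/auth.atajosapp.com
            SSLEngine on
            SSLCertificateFile      /certificates/STAR_atajosapp_com.crt
            SSLCertificateKeyFile   /certificates/star_atajosapp_com.key
            SSLCertificateChainFile /certificates/PositiveSSLCA2.crt
        </VirtualHost>

    After removing the duplicates and reloading, apachectl -t -D DUMP_VHOSTS should list both port 443 namevhosts under a single *:443 entry; clients without SNI support will still be served the first *:443 vhost, which is what the RFC 4366 message in the error log warns about.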

    Read the article

  • Network authentication + roaming home directory - which technology should I look into using?

    - by Brian
    I'm looking into software which provides a user with a single identity across multiple computers. That is, a user should have the same permissions on each computer, and the user should have access to all of his or her files (roaming home directory) on each computer. There seem to be many solutions for this general idea, but I'm trying to determine the best one for me. Here are some details along with requirements: The network of machines are Amazon EC2 instances running Ubuntu. We access the machines with SSH. Some machines on this LAN may have different uses, but I am only discussing machines for a certain use (running a multi-tenancy platform). The system will not necessarily have a constant amount of machines. We may have to permanently or temporarily alter the amount of machines running. This is the reason why I'm looking into centralized authentication/storage. The implementation of this effect should be a secure one. We're unsure if users will have direct shell access, but their software will potentially be running (under restricted Linux user names, of course) on our systems, which is as good as direct shell access. Let's assume that their software could potentially be malicious for the sake of security. I have heard of several technologies/combinations to achieve my goal, but I'm unsure of the ramifications of each. An older ServerFault post recommended NFS & NIS, though the combination has security problems according to this old article by Symantec. The article suggests moving to NIS+, but, as it is old, this Wikipedia article has cited statements suggesting a trending away from NIS+ by Sun. The recommended replacement is another thing I have heard of... LDAP. It looks like LDAP can be used to save user information in a centralized location on a network. NFS would still need to be used to cover the 'roaming home folder' requirement, but I see references of them being used together. Since the Symantec article pointed out security problems in both NIS and NFS, is there software to replace NFS, or should I heed that article's suggestions for locking it down? I'm tending toward LDAP because another fundamental piece of our architecture, RabbitMQ, has an authentication/authorization plugin for LDAP. RabbitMQ will be accessible in a restricted manner to users on the system, so I would like to tie the security systems together if possible. Kerberos is another secure authentication protocol that I have heard of. I learned a bit about it some years ago in a cryptography class but don't remember much about it. I have seen suggestions online that it can be combined with LDAP in several ways. Is this necessary? What are the security risks of LDAP without Kerberos? I also remember Kerberos being used in another piece of software developed by Carnegie Mellon University... Andrew File System, or AFS. OpenAFS is available for use, though its setup seems a bit complicated. At my university, AFS provides both requirements... I can log in to any machine, and my "AFS folder" is always available (at least when I acquire an AFS token). Along with suggestions for which path I should look into, does anybody have any guides which were particularly helpful? As the bold text pointed out, LDAP looks to be the best choice, but I'm particularly interested in the implementation details (Kerberos? NFS?) with respect to security.
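
    A minimal sketch of the LDAP-for-identity plus NFS-for-home-directories combination discussed above, using sssd on the Ubuntu clients. The host names, base DN and export path are placeholders, and Kerberos-secured NFSv4 (sec=krb5) is shown as one way to address the NFS security concerns raised in the question.

        # /etc/sssd/sssd.conf on each client (placeholder hosts and base DN)
        [sssd]
        config_file_version = 2
        services = nss, pam
        domains = EXAMPLE

        [domain/EXAMPLE]
        id_provider = ldap
        auth_provider = ldap
        ldap_uri = ldaps://ldap.example.internal
        ldap_search_base = dc=example,dc=internal
        # lets cached users log in while a machine is offline
        cache_credentials = true

        # /etc/fstab entry for the roaming home directories over Kerberos-secured NFSv4
        nfs.example.internal:/export/home  /home  nfs4  sec=krb5,rw  0  0

    The design point is that LDAP only answers "who is this user and what groups are they in", while NFS carries the files; wrapping the NFS mount in Kerberos (sec=krb5, or krb5p for encryption) is what removes the classic NFS trust-the-client weakness the Symantec article describes.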

    Read the article

  • I want to change DPI with Imagemagick without changing the actual byte-size of the image data

    - by user1694803
    I feel so horribly sorry that I have to ask this question here, but after hours of researching how to do an actually very simple task I'm still failing... In Gimp there is a very simple way to do what I want. I only have the German dialog installed, but I'll try to translate it. I'm talking about going to "Picture-PrintingSize" and then adjusting the values "X-Resolution" and "Y-Resolution", which are known to me as so-called DPI values. You can also choose the format, which by default is "Pixel/Inch". (In German the dialog is "Bild-Druckgröße" and there "X-Auflösung" and "Y-Auflösung".) Ok, the values there are often "72" by default. When I change them to e.g. "300", the image stays the same on the computer, but if I print it, it will look smaller, yet all the details are still there, just smaller - it has a higher resolution on the printed paper (but a smaller size... which is fine for me). I often do that when I am working with LaTeX, or to be exact with the command "pdflatex" on a recent Ubuntu machine. When I do the above process with Gimp manually, everything works just fine: the images appear smaller in the resulting PDF but with high printing quality. What I am trying to do is to automate the process of going into Gimp and adjusting the DPI values. Since ImageMagick is known to be superb and I have used it for many other tasks, I tried to achieve my goal with this tool. But it just does not do what I want. After trying a lot of things, I think this is actually the command that should be my friend: convert input.png -density 300 output.png This should set the DPI to 300, as I can read everywhere on the web. It seems to work. When I check the file it stays the same. file input.png output.png input.png: PNG image data, 611 x 453, 8-bit grayscale, non-interlaced output.png: PNG image data, 611 x 453, 8-bit grayscale, non-interlaced When I use this command, it seems like it did what I wanted: identify -verbose output.png | grep 300 Resolution: 300x300 PNG:pHYs : x_res=300, y_res=300, units=0 (Funnily enough, the same output comes for input.png, which confuses me... so these might be the wrong parameters to watch?) But when I now render my TeX with "pdflatex", the image is still big and blurry. Also, when I open the image with Gimp again, the DPI values are set to "72" instead of "300". So there actually was no effect at all. Now, what is the problem here? Am I getting something completely wrong? I can't be that wrong, since everything works just fine with Gimp... Thanks for any help with this. I am also open to other automated solutions which are easily done on a Linux system...
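
    A hedged sketch of the usual fix: PNG stores its resolution in metric units internally, so convert has to be told which unit the -density number is in; without -units the value can end up being written (and later re-read by Gimp) in the wrong unit.

        # tell ImageMagick the density is in pixels per inch before setting it
        convert input.png -units PixelsPerInch -density 300 output.png

        # check what was actually written (resolution and its units)
        identify -format "%x x %y %U\n" output.png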

    Read the article

  • SQL Server Issue: Could not allocate space for object ... primary filegroup is full

    - by Luke
    Trying to figure out a problem at an office that has SQL Server 2005 installed on Windows SBS Server 2008. Here's the setup: It's an office, and the person who set this all up is nowhere to be found. I'm the best hope they have... One of the programs they use on a workstation gives them an error of "Could not allocate space for object 'Billing' in database "MyDatabase" because primary filegroup is full" when trying to save an entry in their software. I searched around for hours, looking for possible solutions. One was to check for available disk space, and another was to defrag. I checked the hard drives on the server, and there is plenty of space free. I also defragged, which may have helped the problem somewhat. It's hard to say, because it seems like with the nature of the error, if you try over and over you might get it to actually save. My next step was to try to see if autogrowth was enabled on the database. This would seem to be a likely / possible solution, but I can't access the database! If I run the SQL Management Studio, I can log in as my Windows user and view the list of databases. However, if I try to do anything (actually view the database, view the properties, add or edit users), I get errors that I don't have permission. For what it's worth, I also tried running Management Studio as Administrator, in case that would help. No difference, though. Now, what I'm guessing is going on -- from my limited knowledge of SQL and from reading online -- is that though I'm logged in as a Windows administrator, that account does NOT have SQL access. I do see a list of SQL users, including SA, but I again don't have permission to add one or to change the password on an existing one. And nobody at the office has any idea what the SQL passwords could be. So... here's my thinking thus far: 1 - The "Could not allocate" error likely points to a database that needs to be allowed to autogrow. Especially since I verified there is plenty of free space and the HD has been defragmented. 2 - Enabling autogrow would be very easy to do if I had the proper access within SQL Management Studio. That leads me to this link: http://blogs.technet.com/b/sqlman/archive/2011/06/14/tips-amp-tricks-you-have-lost-access-to-sql-server-now-what.aspx It sounds like it's a step-by-step guide for giving me the access I need to SQL. I'm guessing that if I followed this guide, I would be able to then log in to the SQL server via Management Studio with the proper permissions, and would be able to enable autogrow (or simply view the status of the existing database), and hopefully solve the "Could not allocate space" problem! So I guess I have a few questions: 1 - Would you guys agree with my "diagnosis"? Think I'm barking up the right tree? 2 - Is there any risk at all in hurting / disabling / wrecking the current SQL database or setup with me going through the guide to regain SQL access? I understand that per the guide, I would have to temporarily shut down SQL, so obviously it wouldn't be accessible during that time. But it wouldn't be worth the risk if there's a chance I could mess anything up... Like I said, the workstations ARE currently accessing the database somehow, but nobody knows with what login info or anything. Basically, it's set up, it works (usually), but if they had to reload the software, nobody would know how. Any feedback would be appreciated!! The problem is such that it's not an emergency for them, but an annoyance. If I could fix it, it would be wonderful. But if not, I think they'll manage, especially as they are going to eventually stop using this software. Thank you so much for your time! Luke
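
    Once sysadmin access is regained, a minimal T-SQL sketch for checking and enabling autogrowth; the logical file name below is a placeholder, so use whatever sp_helpfile reports.

        -- run against the database once you have sysadmin access
        USE MyDatabase;
        EXEC sp_helpfile;  -- lists logical file names, sizes and current growth settings

        -- enable autogrowth on the primary data file
        -- (replace 'MyDatabase_Data' with the logical name sp_helpfile reported)
        ALTER DATABASE MyDatabase
        MODIFY FILE (NAME = N'MyDatabase_Data', FILEGROWTH = 256MB, MAXSIZE = UNLIMITED);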

    Read the article

  • Prevent auto mounting Android sdcard under Linux Mint

    - by BullShark
    I recently obtained an older Android phone so that I could test Android apps on it. I've needed it because I have a Nexus 7 but no older Android versions, hardware, etc. to test on. I'm having a problem with it under Linux Mint with Cinnamon. When I plug the phone in, or remove the sdcard from the phone and plug it back in while the phone is connected, Linux automatically mounts the sdcard. This is a problem because once it is mounted under Linux, it dismounts from the phone running Android 2.3.5, and I can no longer test Android apps I write that require the sdcard to be present and writable. I went to Menu > System Tools > System Settings > System Details > Removable Media, and it brings up this window. I have changed the settings to always "Ask what to do" on "Select how media should be handled". However, the sdcard still gets mounted and then I am asked how I want to open these files (media players, photo importers, file browser, etc.). If I tick the checkbox for "Never prompt or start programs on media insertion", then the sdcard is mounted and I am not asked how to open these files. Eject is just a noob word for Ubuntu users that means umount (unmount), like "Administrator" is another Ubuntu noob word for the root user. And if I unmount the sdcard, the phone doesn't recognize it again until I take the sdcard out and plug it back in. The phone sees it for a brief moment until Linux Mint takes it over. There are 2 possible solutions, and maybe more: 1) Prevent Linux from automounting sdcards somehow. 2) Tell Android not to allow the computer it is plugged into to take over the sdcard - how? Edit: I found out how to prevent the sdcard from being automatically mounted. Now it gets recognized by Linux: bullshark@beastlinux ~ $ dmesg | tail -n 25 [597212.218323] sd 21:0:0:0: [sde] Attached SCSI removable disk [597212.218639] sr 21:0:0:1: Attached scsi CD-ROM sr2 [597212.218910] sr 21:0:0:1: Attached scsi generic sg7 type 5 [597217.139373] sd 21:0:0:0: [sde] 3862528 512-byte logical blocks: (1.97 GB/1.84 GiB) [597217.140726] sd 21:0:0:0: [sde] No Caching mode page present [597217.140735] sd 21:0:0:0: [sde] Assuming drive cache: write through [597217.143595] sd 21:0:0:0: [sde] No Caching mode page present [597217.143602] sd 21:0:0:0: [sde] Assuming drive cache: write through [597217.152240] sde: sde1 [597389.751008] 4:2:1: cannot get freq at ep 0x84 [597390.238742] 4:2:1: cannot get freq at ep 0x84 [597624.903132] sde: detected capacity change from 1977614336 to 0 [597637.677763] sd 21:0:0:0: [sde] 3862528 512-byte logical blocks: (1.97 GB/1.84 GiB) [597637.679616] sd 21:0:0:0: [sde] No Caching mode page present [597637.679626] sd 21:0:0:0: [sde] Assuming drive cache: write through [597637.682508] sd 21:0:0:0: [sde] No Caching mode page present [597637.682515] sd 21:0:0:0: [sde] Assuming drive cache: write through [597637.692758] sde: sde1 [597661.857979] sde: detected capacity change from 1977614336 to 0 [597688.775455] sd 21:0:0:0: [sde] 3862528 512-byte logical blocks: (1.97 GB/1.84 GiB) [597688.776814] sd 21:0:0:0: [sde] No Caching mode page present [597688.776823] sd 21:0:0:0: [sde] Assuming drive cache: write through [597688.780055] sd 21:0:0:0: [sde] No Caching mode page present [597688.780062] sd 21:0:0:0: [sde] Assuming drive cache: write through [597688.788639] sde: sde1 bullshark@beastlinux ~ $ However, the phone still unmounts the sdcard upon being detected by Linux. Linux detects but does not mount, and a few seconds later the card drops off the phone. Edit #2 (Solution): I solved this one by changing the USB connection type on the phone (it was set to USB mass storage).
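
    For anyone who cannot change the connection mode on the phone, a heavily hedged udev sketch of option 1 (prevent the desktop from auto-mounting just this device): the vendor/product IDs are placeholders to be read from lsusb, and which of the two properties matters depends on whether the desktop uses udisks or udisks2.

        # /etc/udev/rules.d/99-ignore-android-sdcard.rules  (placeholder IDs from lsusb)
        SUBSYSTEM=="block", SUBSYSTEMS=="usb", ATTRS{idVendor}=="18d1", ATTRS{idProduct}=="4e22", \
            ENV{UDISKS_IGNORE}="1", ENV{UDISKS_PRESENTATION_HIDE}="1"

        # reload the rules, then replug the phone
        sudo udevadm control --reload-rules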

    Read the article

  • Unable to Manage DNS via MMC

    - by IT Helpdesk Team Manager
    When trying to access the DNS service on Microsoft Windows Server 2003 (Build 3790) domain controller/schema master via the MMC DNS snap in or locally via the DNS MMC from Administrative tools I'm getting a red "X" through the icon for the DNS Server. The inability to access DNS management via MMC happens on all domain controllers as well. We've looked at items such as the DHCP client not being started, incorrect DNS setup ( the machine points at itself and another DC ), the DNS service not running ( it is and all DNS queries via NSLOOKUP work correctly ), dslint returns the correct information and functions as expected. There is the following entry in the DNS event log: The DNS server could not initialize the remote procedure call (RPC) service. If it is not running, start the RPC service or reboot the computer. The event data is the error code. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp. 0000: 0000051b dnscmd fails with RPC server unavailable yet RPC is started: C:\Documents and Settings\Administrator.DOMAIN>dnscmd /Info Info query failed status = 1722 (0x000006ba) Command failed: RPC_S_SERVER_UNAVAILABLE 1722 (000006ba) DCDIAG /TEST:DNS /V /E produces the following errors: Warning: no DNS RPC connectivity (error or non Microsoft DNS server is running) [Error details: 1753 (Type: Win32 - Description: There are no more endpoints available from the endpoint mapper.)] Warning: no DNS RPC connectivity (error or non Microsoft DNS server is running) [Error details: 1722 (Type: Win32 - Description: The RPC server is unavailable.)] The DNS server could not initialize the remote procedure call (RPC) service. If it is not running, start the RPC service or reboot the computer. The event data is the error code. A DNS query for _ldap._tcp.dc._msdcs. returns the correct results. All domain and ADS related activities are working except that I can't manage my DNS via MMC or dnscmd. Any thoughts or solutions would be greatly appreciated. EDIT: Adding Registry export per request: Key Name: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc Class Name: <NO CLASS> Last Write Time: 10/18/2012 - 2:29 PM Value 0 Name: DCOM Protocols Type: REG_MULTI_SZ Data: ncacn_ip_tcp Value 1 Name: UuidSequenceNumber Type: REG_DWORD Data: 0xb19bd0f Key Name: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc\ClientProtocols Class Name: <NO CLASS> Last Write Time: 3/9/2007 - 12:11 PM Value 0 Name: ncacn_np Type: REG_SZ Data: rpcrt4.dll Value 1 Name: ncacn_ip_tcp Type: REG_SZ Data: rpcrt4.dll Value 2 Name: ncadg_ip_udp Type: REG_SZ Data: rpcrt4.dll Value 3 Name: ncacn_http Type: REG_SZ Data: rpcrt4.dll Value 4 Name: ncacn_at_dsp Type: REG_SZ Data: rpcrt4.dll Key Name: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc\NameService Class Name: <NO CLASS> Last Write Time: 2/20/2006 - 4:48 PM Value 0 Name: DefaultSyntax Type: REG_SZ Data: 3 Value 1 Name: Endpoint Type: REG_SZ Data: \pipe\locator Value 2 Name: NetworkAddress Type: REG_SZ Data: \\. Value 3 Name: Protocol Type: REG_SZ Data: ncacn_np Value 4 Name: ServerNetworkAddress Type: REG_SZ Data: \\. 
Key Name: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc\NetBios Class Name: <NO CLASS> Last Write Time: 2/20/2006 - 4:48 PM Key Name: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc\RpcProxy Class Name: <NO CLASS> Last Write Time: 3/9/2007 - 12:11 PM Value 0 Name: Enabled Type: REG_DWORD Data: 0x1 Value 1 Name: ValidPorts Type: REG_SZ Data: pdc:100-5000 Key Name: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc\SecurityService Class Name: <NO CLASS> Last Write Time: 2/20/2006 - 4:48 PM Value 0 Name: 9 Type: REG_SZ Data: secur32.dll Value 1 Name: 10 Type: REG_SZ Data: secur32.dll Value 2 Name: 14 Type: REG_SZ Data: schannel.dll Value 3 Name: 16 Type: REG_SZ Data: secur32.dll Value 4 Name: 1 Type: REG_SZ Data: secur32.dll Value 5 Name: 18 Type: REG_SZ Data: secur32.dll Value 6 Name: 68 Type: REG_SZ Data: netlogon.dll

    Read the article

  • Random Slow Response

    - by ARehman
    We have an ASP.NET MVC 1.0 application running on Windows Server 2008 Standard (32-bit), dual-core Xeon (3.0 GHz), 2 GB RAM. Most of the time the application renders a response in 3-4 seconds, but sometimes users get a very late response, with delays of up to 40 seconds or more than a minute. It happens in the following way: a user browsed a page, was idle for 5, 10 or 15 minutes, then tried to browse the same page or some other page. Now there is a chance that he will see a late response even though the app pool is still up and running. This can happen with any arbitrary page. We have tried the following, with these observations: Moved the application to a stand-alone web server. The app pool idle shutdown time is 60 minutes. There are no abrupt shutdowns/restarts. CPU and memory don't spike. No delays in SQL queries. Modified the app pool setting to run in classic mode; it didn't help. Plugged in a custom module to log all requests which took more than 5 seconds to complete; it didn't pick up any request of interest. Enabled 'Failed Request Tracing' to log all requests which take 20 or more seconds to complete; it didn't log anything. Event Viewer, HTTPERR log, W3SVC logs and WAS logs don't indicate anything. HTTPERR only has '_ _ Timer_ConnectionIdle _ _' entries. There is not much traffic to the server. This can happen even if only two users are active. Next we captured TCP/IP traffic on both the user and server end with Wireshark, and below are brief details of this slowness: The browser sends a request for ~/User/Home/ (GET request) by setting up a receiving endpoint using port 'wlbs (port 2504)'. I'm not sure if this could be a problem in some way: the browser didn't handshake with the server first and assumed the last connection was still open, whereas I had browsed the same page 4 minutes ago and hadn't performed any activity with the site after that. If I look at the HTTPERR log, it has a '_ _ Timer_ConnectionIdle _ _ _' entry for my last activity with the server. The browser (I was using Chrome) waits for any response from the server, doesn't find any, then starts retransmitting the same request on the same endpoint after increasing wait intervals, e.g. after 8, 18, 29, 40, 62, and 92 seconds. All these GET requests were received by the server as well, but the server didn't send any packet to the client. The browser didn't see any response on the endpoint it set up in point 1, so it opened a new endpoint, 'optiwave-lm (port 2524)', did a handshake with the server and transmitted the same request again. The server received it, processed it, and returned a successful response. What happened to the earlier 6-7 requests? Were they passed on to HTTP.SYS or not? Why Failed Request Tracing did not log anything, we have not yet found any clue. The server had served the same page successfully just 4 minutes ago. Looking forward to more suggestions/solutions. -- Thanks

    Read the article

  • Failed Administrator login on WSO2 IS with external OpenLDAP

    - by Marco Rivadeneyra
    I have an installation of WSO2 Identity Server and I'm trying to make it work with an external OpenLDAP instance. I have followed this guide: http://wso2.org/project/solutions/identity/3.2.3/docs/user-core/admin_guide.html#LDAP for the read-only mode. But when I try to log in, I get a failed login and the following error on the console: TID: [0] [WSO2 Identity Server] [2012-08-10 17:10:25,493] WARN {org.wso2.carbon.core.services.util.CarbonAuthenticationUtil} - Failed Administrator login attempt 'john[0]' at [2012-08-10 17:10:25,0493] from IP address 127.0.0.1 {org.wso2.carbon.core.services.util.CarbonAuthenticationUtil} Full log: http://pastebin.com/pHUGXBqv My configuration file looks as follows: <UserManager> <Realm> <Configuration> <AdminRole>admin</AdminRole> <AdminUser> <UserName>john</UserName> <Password>johnldap</Password> </AdminUser> <EveryOneRoleName>everyone</EveryOneRoleName> <!-- By default users in this role sees the registry root --> <ReadOnly>true</ReadOnly> <MaxUserNameListLength>500</MaxUserNameListLength> <Property name="url">jdbc:h2:repository/database/WSO2CARBON_DB</Property> <Property name="userName">wso2carbon</Property> <Property name="password">wso2carbon</Property> <Property name="driverName">org.h2.Driver</Property> <Property name="maxActive">50</Property> <Property name="maxWait">60000</Property> <Property name="minIdle">5</Property> </Configuration> <UserStoreManager class="org.wso2.carbon.user.core.ldap.LDAPUserStoreManager"> <Property name="ReadOnly">true</Property> <Property name="MaxUserNameListLength">100</Property> <Property name="ConnectionURL">ldap://192.168.81.144:389</Property> <Property name="ConnectionName">cn=admin,dc=example,dc=com</Property> <Property name="ConnectionPassword">admin</Property> <Property name="UserSearchBase">ou=People,dc=example,dc=com</Property> <Property name="UserNameListFilter">(objectClass=inetOrgPerson)</Property> <Property name="UserNameAttribute">uid</Property> <Property name="ReadLDAPGroups">false</Property> <Property name="GroupSearchBase">ou=Groups,dc=example,dc=com</Property> <Property name="GroupSearchFilter">(objectClass=groupOfNames)</Property> <Property name="GroupNameAttribute">uid</Property> <Property name="MembershipAttribute">member</Property> </UserStoreManager> <AuthorizationManager class="org.wso2.carbon.user.core.authorization.JDBCAuthorizationManager"></AuthorizationManager> </Realm> I followed this guide to configure my LDAP server, up to the Logging section: https://help.ubuntu.com/12.04/serverguide/openldap-server.html Could you suggest what might be wrong? The LDAP log is available at: http://pastebin.com/T9rFYEAW

    Read the article

  • How to set up a centralized backup server with lots of offsite workstations, intermittent internet connectivity, and stubborn users?

    - by Zac B
    This might be an impossible question. Context: We have a bunch of computers across around 1000 users. We have a centralized office where 900 of the users work, most of the time. Most of the computers are laptops. They are very frequently coming on and off the network for hours at a time. Users often take their computers home and do lots of work from home. In addition, there are a handful of users who work elsewhere in the country, who are offline (no internet connection whatsoever) for more than half of the time they use their machines. All of the machines are Windows 7/XP. Problem: People are always losing data. One day someone accidentally deletes a bunch of files. The next day someone else installs a bad driver or tries to mess with something in system32 and needs a personal data backup/reinstall of Windows. Because of how many of our business operations are done without an internet connection, and how frequently computers come on- and offline, it's unfeasible to make users use network storage for all of their data. We tried giving them Dropboxes, and they stored their files elsewhere. We bought and deployed Altiris, and they uninstalled it and blamed us when they couldn't get files back that they accidentally deleted while they were offline and hadn't taken a backup in months. We tried teaching them backup best-practices, and using scheduled sync tools to upload things to the network drives, and they turned them off because they "looked like viruses". It doesn't help that many of these users are pretty high up in the business and are not amicable to any sort of "you need to do something regularly because we say so" solution. Question: Other than finding another job where IT is treated differently and users are willing to follow best practices, how would people recommend I implement a file backup solution that supports the following: Backs up to a centralized server over LAN or WAN whenever a network link becomes available, or on a schedule. Supports interrupted/resumed backups (and hopefully file-delta only backups), since connections to the network (WAN or LAN) are often slow and only open for half an hour or so. Supports relatively rapid, "I accidentally deleted the TPS reports! Oh no!" single-file recovery, ideally administered from the central backup server rather than the client PC. Supports local-to-local file delta backup on a schedule, so that users without a network connection for a few days can still retrieve accidental deletions or whatnot. Ideally, the local stored backups would be pushed up to the server whenever network link is available. Isn't configurable on the clients without certain credentials. Because the CFOs (who won't give up their admin rights on the domain) will disable it if they can. Backs up the entire hard drive. There are people who are self-righteous about storing things in C:\, or in the recycle bin, or in the C:\Windows dir (yes, I know). I'm fine integrating multiple products/solutions, or scripting different programs together myself (I'm a somewhat competent programmer), but I've been drawing a blank on where to start. Dropbox is folder-specific, Altiris doesn't cope with LAN outages or interrupted/resumed backups, Volume Shadow Copy is awesome for a local-to-local solution, but I don't know how to push days of stored shadow copies up to a server in a 2 hour window of network access. The company is fine with spending decent money on this, thousands (USD) on a server, and hundreds on clients, if necessary. 
I want to emphasize that this isn't a shopping list request. While I wish there was a program out there that did what I want, I've looked pretty hard, and not found anything that fits the bill. Instead, I'm hoping for ideas on where to start hacking things together from scratch/from different technologies to make something stable that works. Cheers!
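    For concreteness, the rough kind of glue I'm imagining for the "push when a link appears" piece is sketched below. It's untested and assumes an rsync port on the Windows clients (cwRsync or similar) plus an rsync daemon on the backup server; the hostname backup01.example.com, the paths, and the timeouts are all placeholders rather than anything I've settled on:

        # Run every 15 minutes from Task Scheduler under a service account the users can't touch.
        # Pushes the local staging/backup area up to the central server whenever it is reachable.
        SERVER=backup01.example.com                # placeholder hostname
        SRC=/cygdrive/c/LocalBackups/              # wherever the local-to-local backups land
        DEST=rsync://$SERVER/backups/$(hostname)/

        # --partial keeps half-transferred files so an interrupted push resumes on the next run;
        # --contimeout makes the run fail fast when no link to the server is available.
        rsync -a --partial --timeout=300 --contimeout=10 "$SRC" "$DEST"

    The local-to-local side could stay on Volume Shadow Copy or a scheduled mirror job; the sketch only covers the opportunistic, resumable push.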

    Read the article

  • Expert iptables help needed?

    - by Asad Moeen
    After a detailed analysis, I collected these details. I am under a UDP flood that is application-dependent. I run a game server, and an attacker is flooding me with "getstatus" queries, which make the GameServer respond with replies that push output to the attacker's IP as high as 30 Mb/s and cause server lag. Here are the packet details: the packet starts with 4 bytes of 0xff and then getstatus, so theoretically the packet looks like "\xff\xff\xff\xffgetstatus ".
    I've tried a lot of iptables variations, including state matching and rate limiting, but they didn't work. Rate limiting works well, but only when the server is not started; as soon as the server starts, no iptables rule seems to block it. Anyone else got more solutions? Someone asked me to contact the provider and get it done at the network/router level, but that looks very odd and I believe they might not do it, since it would also affect other clients.
    Responding to all those answers, I'd say: Firstly, it's a VPS, so they can't do it for me. Secondly, I don't care if something is coming in, but since the traffic is application-generated there has to be an OS-level solution to block the outgoing packets; at the very least the outgoing ones must be stopped. It's also not a classic DDoS, since just 400 kb/s of input generates 30 Mb/s of output from my GameServer, and that never happens in a DDoS; asking for a provider/hardware-level solution would make sense in that case, but this one is different. And yes, banning his IP stops the flood of outgoing packets, but he has many more IP addresses and spoofs his original one, so I need something that blocks him automatically. I've also tried a lot of firewalls, but as you know they are just front-ends to iptables, so if something doesn't work in iptables, what would the firewalls do?
    These were the rules I tried:
    iptables -A INPUT -p udp -m state --state NEW -m recent --set --name DDOS --rsource
    iptables -A INPUT -p udp -m state --state NEW -m recent --update --seconds 1 --hitcount 5 --name DDOS --rsource -j DROP
    They work for attacks on unused ports, but when the server is listening and responding to the attacker's incoming queries, they never work.
    Okay Tom.H, your rules were working when I modified them somehow like this:
    iptables -A INPUT -p udp -m length --length 1:1024 -m recent --set --name XXXX --rsource
    iptables -A INPUT -p udp -m string --string "xxxxxxxxxx" --algo bm --to 65535 -m recent --update --seconds 1 --hitcount 15 --name XXXX --rsource -j DROP
    They worked very well for about 3 days: the string "xxxxxxxxx" would be rate-limited and blocked if someone flooded, and the clients weren't affected. But just today I tried updating the chain to remove a previously blocked IP, so I had to flush the chain and restore the rules (iptables -X and iptables -F) while some clients, including me, were already connected to servers. Restoring the rules now blocks some clients' strings completely while others are not affected. So does this mean I need to restart the server, or why else would this happen, given that the last time the rules were applied no one was connected?
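    In case it helps frame answers, the direction I'm currently experimenting with is matching the query string itself and rate-limiting it per source with the hashlimit match, something like the untested sketch below. It assumes a reasonably recent iptables with hashlimit available; port 27960 and the numeric limits are placeholders to tune, and "statusResponse" is my assumption for the reply marker, to be confirmed with a packet capture first.

        # Rate-limit the "getstatus" query itself, per source IP, instead of all new UDP traffic.
        # 27960 is a placeholder game port; the limits are starting guesses.
        iptables -A INPUT -p udp --dport 27960 -m string --algo bm --string "getstatus" \
            -m hashlimit --hashlimit-mode srcip --hashlimit-above 3/second --hashlimit-burst 5 \
            --hashlimit-name getstatus --hashlimit-htable-expire 30000 -j DROP

        # Belt and braces: also cap the outgoing replies, so a spoofed query flood can't turn
        # the server into a 30 Mb/s amplifier even if a source slips past the input rule.
        # "statusResponse" is assumed to be the reply marker; verify it with tcpdump.
        iptables -A OUTPUT -p udp --sport 27960 -m string --algo bm --string "statusResponse" \
            -m hashlimit --hashlimit-mode dstip --hashlimit-above 10/second --hashlimit-burst 20 \
            --hashlimit-name statusreply --hashlimit-htable-expire 30000 -j DROP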

    Read the article

  • Cheapest way to connect 20-24 Sata II HDDs in a budget storage server?

    - by Joe Hopfgartner
    I need to assemble a high-density storage server as cheaply as possible. It's been a while for me, and the last systems I integrated didn't even have SATA yet... During my research I of course stumbled across the Nexsan SATA Beast and the BackBlaze storage pods, as well as some ridiculously overpriced HP ProLiant and Dell storage solutions. Finally I chose Norco cases as the way to go.
    My eye is set on the RPC-4020, a 4U 19" rackmount case with 20 hot-swap 3.5" SATA/SAS HDD trays (backplanes included) and room for two 2.5" OS drives as well as a slimline CD-ROM. The backplanes connect with a single SATA port for each drive, so there are 20 internal SATA ports to be connected. They also have redundant power ports, which I think is quite nice. The cheapest price I have found is $290 + $40 shipping. In Europe the cheapest is unfortunately 370 € (500 $) + 40 € shipping... A nice alternative would be the RPC-4224, which has SFF-8087 mini-SAS connectors that bundle 4 SATA trays each, but it doesn't seem to be available anywhere in Europe (where I am).
    So here comes my problem: what mainboard/controller should I choose to connect the drives as cheaply as possible while still getting decent data rates? The server is intended as a storage server with 1 Gbps connectivity, and the data transfer will be distributed very evenly across all drives. I also don't require any RAID functionality; that is all done at application level, I just need JBOD.
    So, for example, if I go for the RPC-4020 model I need to connect 20 storage + 1 OS + 1 CD-ROM SATA ports. I searched a bit and stumbled across this very low-priced controller: http://www.intel.com/products/server/raid-controllers/SASWT4I/SASWT4I-overview.htm They sell it for 115 € here, and the specs say it can control up to 122 hard disks and has 4 mini-SAS connectors. So I would use 4 mini-SAS 36-pin to 4x SATA 7-pin cables to connect 4 SATA drives to each port, choose a mainboard that has 6 SATA ports on board (for example this one), and hurray, I can connect my 22 SATA devices for as little as about 220 € (CPU, RAM, PSU and case not counted). Question: WOULD THAT WORK? And if not, why?
    2nd question: If I go for the 4220 or 4224 model, I have internal mini-SAS connectors. Am I right in assuming that the backplane then acts as a "SAS expander"? And can I just plug these SAS connectors into any SAS port I can find on my controller/mainboard, or are there certain requirements? I know that SATA port multipliers only work with controllers that are ready for that, but isn't this expansion already part of the SAS standard? I am sorry that this is a very broad question, but I really spent the last week reading up and it seems not to be so clear, especially all the controller hardware specifications!
    3rd question: A lot of hardware specs list "internal channels" and "internal connectors". The connectors are the physical number of places where I can plug a cable in; I got that. But are the "internal channels" always the maximum number of physical drives that can be used in the end, or can I increase this further with expanders/fanouts?
    4th and last question: What do you think about the setup so far? Do you know any good alternatives? Maybe I am going completely the wrong way and some DAS would be way better? Are there any comparable chassis available in Europe? Please feel free to say whatever you think is relevant to the subject!

    Read the article

  • Outlook 2013 keeps freezing, semi-consistently

    - by AviD
    I have an odd problem with my Outlook's stability. It keeps freezing up, not at random intervals, but based on a seemingly strange combination of configurations. I have been trying many different combinations; I've even devolved to "cargo cult" debugging, since I have no clue what is causing this... Here is my setup - since I don't know for sure which settings are causing the lockup, I'll probably mention irrelevant things:
    - (Relatively) clean install of Windows 8 (on Hyper-V, if that matters)
    - Clean install of Outlook 2013, fully updated
    - 3 accounts configured:
      - Hotmail account, configured with ActiveSync
      - Gmail account: large-ish (several GB), connected with IMAP; only a few folders are IMAP-subscribed; Outlook is set to display only subscribed folders; configured to keep messages permanently
      - Google Apps account: small, connected with IMAP; all folders IMAP-subscribed; Outlook is set to display only subscribed folders; configured to keep messages permanently
    - Several Send/Receive groups configured, to try different combinations of enabled/disabled/partially enabled accounts, with send/receive intervals from 60 minutes down to 5 minutes.
    The problem is that at certain points Outlook completely freezes up and I have to kill it. This is not consistent: some things cause it almost immediately and consistently, and sometimes it just happens by itself after some period of time (sometimes a few moments, sometimes a few hours; sometimes while using it, sometimes after I've been away from it for a few hours). I have searched all over, and there seem to be many people with (apparently) similar problems, and I have found numerous "solutions" (some even more cargo-cultish than mine), but so far none of them worked.
    Here is what I've tried so far:
    - I've removed all the accounts, both all together and one at a time, and re-configured them - eventually it freezes up.
    - I've uninstalled Outlook, cleaned it up completely (removing files, app settings, registry keys, etc.), then reinstalled - eventually it freezes up.
    - I've enabled only the Hotmail account, disabling (but not removing) the Google accounts - this apparently does not lock up.
    - I've enabled the Hotmail and Gmail accounts, leaving the Apps one disabled - this seems not to lock up.
    - With all accounts enabled, it locks up almost immediately after doing a send/receive.
    - With only the Apps account enabled, it seems not to lock up.
    - With the Hotmail and Apps accounts enabled (Gmail disabled), it seems to lock up after a random amount of time.
    - With Hotmail enabled, and Gmail and Apps both enabled but set to download only custom folders (not all subscribed folders), it sometimes locks up right after a send/receive, sometimes goes for hours without locking up, and sometimes only locks up when I send an email.
    - I've tried switching the ports for the Google accounts (SSL/465 vs TLS/587), though I have no idea whether this should matter; no real difference either way.
    In short, I honestly have no idea what is actually causing Outlook to lock up; I might be completely barking up the wrong tree. At this point I don't really know what else to try - I'm flipping switches at random here. I would like to have all 3 accounts enabled, ideally in several groups (e.g. pull down only important folders in a group with a short interval, and all other folders at a longer interval) - obviously without freezing up at all.
I've tried to include all the important details; if there is anything else important to add, please let me know. Another issue occurred to me that might also be connected: the Google accounts don't always synchronize properly, even after a send/receive or "update folder" - at least not consistently... though I haven't been able to find a significant connection between that and the freezing.

    Read the article

  • What NAS setup for two-way syncing over the internet?

    - by Jamse
    I have family living a few hours away and a lot of files that I would like to share - especially lots of folders of digital photos, but also documents etc. - partly so they can see them, partly so I have access when I visit them, and partly for backup/redundancy purposes. The hard drives on my main machine are getting pretty full anyway, and I have a MythTV box where my music is currently stored, so I was thinking of getting a NAS in any case. And at the other end my family have a few computers, so they would probably benefit from a NAS too.
    My general idea (though I'm willing to shift on this if there are any bright ideas about other ways of achieving my objectives) is to get a matching pair of NASs and have them sync over the internet. (To cut down on bandwidth use I would get them in sync locally to start with.) Having read around as best I can, it seems that syncing over the internet is generally only a feature on quite high-end units. However, QNAP seem to offer it on their TS-110 and TS-210 units (they call it "remote replication"), which might work. They seem pretty reasonably priced for what they are, but of course with buying two of them and then adding the drives (say 1 TB or 2 TB each) I'd be looking at about £400 total.
    So, I'm really looking for recommendations. I don't want to spend more than the QNAPs would cost me, but any other ideas would be most appreciated. I am comfortable with technology and tinkering around, but I don't have as much time for that as I would like, so I would favour solutions that require less tinkering rather than more (even though that's less fun!). Any thoughts would be welcome, as would comments from people who have used the QNAP boxes for this. Thanks in advance. Some specifications:
    - Two-way syncing. Changes made at either end should be synced to the other. There shouldn't be one unit that is effectively a read-only mirror of the other.
    - Not real time. The syncing doesn't need to be real time - if it updated, say, daily overnight that would be fine.
    - Set and forget. I would prefer minimal user interaction once set up - it would be great if syncs were scheduled and automatic.
    - OS independence. I am running Windows XP plus an Ubuntu-based MythTV box. At the other end there are Windows 7 and Windows XP machines, plus a networked TV set-top box which I think can play files off the network.
    - Machine independence. I would favour a system that is self-contained, i.e. not reliant on any particular PC being switched on. If the system had enough else going for it I could perhaps work around that at this end, where I only have one PC used as such, but it would be harder at the other end, where there are at least two PCs that might be accessing the files.
    - Notifications. Something like an email notification if the syncing fell over for any reason would be useful, though it's not a deal breaker.
    Update: I've been digging some more, and it looks like QNAP's remote replication function is actually just rsync, so it is only really suitable for one-way syncing. I've posted on their forum to double-check, but I think that's the case. In which case, the focus of my question is now either: do any reasonably priced NASs support bidirectional syncing over the internet, or has anyone had any luck installing third-party sync software onto NASs for this purpose? (Also, I've updated the question to clarify that I'm after two-way syncing; there's a rough sketch of the kind of thing I mean below.)
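    The kind of "install something on the NAS" approach I'm imagining is roughly the following untested sketch, assuming both boxes can run Unison (e.g. via an Optware/Entware-style package feed) and can reach each other over SSH; the hostname, share paths, and schedule are placeholders:

        # Cron entry on one NAS only (say 3am daily), so the two ends never sync simultaneously.
        # Unison propagates changes in both directions and flags conflicting edits for review.
        unison /share/photos ssh://family-nas.example.com//share/photos \
            -batch -auto -times -prefer newer -logfile /var/log/unison-photos.log

    If Unison turned out not to be installable, a pair of opposing rsync jobs without --delete would be a cruder fallback, at the cost of deletions never propagating.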

    Read the article

  • File copying utility like rsync with error handling like ddrescue, for data recovery from a hard drive with bad sectors or hardware failure

    - by purefusion
    I have a hard drive with either bad blocks or sectors that fail to read, due to potential mechanical issues such as a bad disk head, a bad motor, or something else that is causing the drive to read data excruciatingly slowly and with lots of read errors. I'm seeing an average of 50 KB/sec, with some reads dropping below 10 KB/sec, and frequently it gets stuck on a file or sector altogether, usually for quite a long time (from 2-10 minutes or more when using rsync, before it times out). Speed seems to vary wildly, it gets stuck on files a lot, and when it finally gets "unstuck" it only seems to last for a short burst before getting stuck again. The drive is also very quiet, with only an occasional sound of files copying (usually when it gets stuck/unstuck for a brief time, before getting stuck again). Thus, there are none of those evil sounds normally associated with HDD death. Someone suggested the problems sound like they might be caused by a misaligned disk head, which requires a lot of re-reads before it finally reads data successfully. Sounds plausible, but I digress...
    Anyway, the problem with rsync is that it has no decent error-handling support for this situation. Obviously it wasn't meant for recovering data from failing hard drives, but all the so-called "data recovery" utilities out there that are meant for such use usually focus on recovering deleted files or messed-up partitions, rather than copying files off dying hard drives. Deleted-file recovery is not what I need, obviously, so perhaps you can understand my disappointment in not yet finding what I'm after.
    Naturally, this is where you'd probably say "You should use ddrescue!" Well, that's all fine and dandy, but I've already got most of the data backed up, so I just want to recover certain files. I'm not concerned with trying to recover a full partition block-by-block as ddrescue does; I am only interested in rescuing specific files and directories.
    Ideally, what I'd like is some sort of cross between rsync and ddrescue: something that lets me specify source and destination as directories of normal files like rsync (rather than two full partitions as ddrescue requires), with a way to skip files with errors in an initial run, and then lets me attempt recovery of those files in a later run (with a slightly altered command, of course), perhaps even with an option to specify the number of retry attempts - just like how ddrescue works with blocks, only I want a utility that works with specific files/directories the way rsync does.
    So am I daydreaming here, or does something exist that can do this? Or is there maybe a way to make rsync or ddrescue work in such a way? I'm really open to whatever solutions might work, so long as they let me choose which files I want to "rescue", can skip files with errors in the initial run, and can try/retry those errors again later. So far I've tried rsync with the following options, but it often gets stuck on a file for longer than the timeout, and ideally I'd just like it to move on to the next file and come back later to the files it gets stuck on. I don't think that's possible, though. Anyway, here's what I've been using up till now:
    rsync -avP --stats --block-size=512 --timeout=600 /path/to/source/* /path/to/destination/
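    In case it clarifies what I mean by "skip now, retry later", here is an untested sketch of the kind of wrapper I'm picturing around rsync, using GNU timeout to abandon any single file that stalls and logging it for a second pass; the paths and the per-file limits are placeholders:

        SRC=/path/to/source
        DEST=/path/to/destination/
        FAILED=/tmp/rescue-failed.txt
        : > "$FAILED"

        cd "$SRC" || exit 1
        # First pass: copy file by file, giving each one at most 2 minutes before moving on.
        # --relative rebuilds the directory structure under $DEST from the ./ prefix.
        find . -type f -print0 | while IFS= read -r -d '' f; do
            timeout 120 rsync -a --relative --partial "$f" "$DEST" </dev/null \
                || printf '%s\n' "$f" >> "$FAILED"
        done

        # Later passes: retry only the stragglers, with a longer per-file limit.
        while IFS= read -r f; do
            timeout 600 rsync -a --relative --partial "$f" "$DEST" </dev/null
        done < "$FAILED"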

    Read the article

  • Forwarding rsyslog to syslog-ng, with FQDN and facility separation

    - by Joshua Miller
    I'm attempting to configure my rsyslog clients to forward messages to my syslog-ng log repository systems. Forwarding messages works "out of the box", but my clients log their short names, not FQDNs. As a result the messages on the syslog repo use short names as well, which is a problem because one can't easily determine which system a message originated from. My clients get their names through DHCP/DNS. I've tried a number of solutions to get this working, but without success. I'm using rsyslog 4.6.2 and syslog-ng 3.2.5.
    I've tried setting $PreserveFQDN on as the first directive in /etc/rsyslog.conf (and restarting rsyslog, of course). It seems to have no effect. hostname --fqdn on the client returns the proper FQDN, so the problem isn't whether the system can actually figure out its own FQDN. $LocalHostName <fqdn> looked promising, but this directive isn't available in my version of rsyslog (available since 4.7.4+, 5.7.3+, 6.1.3+), and upgrading isn't an option at the moment. Configuring the syslog-ng server to populate names based on reverse DNS lookups isn't an option either; there are complexities with reverse DNS and the public cloud.
    Specifying a custom template for the forwarder seems like a viable option at first glance. I can specify the following, which causes logging on the syslog-ng repo to start using the FQDN:
    $template MyTemplate, "%timestamp% <FQDN> %syslogtag%%msg%"
    $ActionForwardDefaultTemplate MyTemplate
    However, when I put this in place, syslog-ng seems to be unable to categorize messages by facility or priority. Messages come in with the FQDN, but everything is put into user.log. When I don't use the custom template, messages are properly categorized under facility and priority, but with the short name. So, in summary, if I manually trick rsyslog into including the FQDN, the priority and facility details are lost to syslog-ng. How can I get rsyslog to log the FQDN in a way that works properly with a syslog-ng repository?
    rsyslog client config:
    $ModLoad imuxsock.so # provides support for local system logging (e.g. via logger command)
    $ModLoad imklog.so # provides kernel logging support (previously done by rklogd)
    $ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
    *.info;mail.none;authpriv.none;cron.none /var/log/messages
    authpriv.* /var/log/secure
    mail.* -/var/log/maillog
    cron.* /var/log/cron
    *.emerg *
    uucp,news.crit /var/log/spooler
    local7.* /var/log/boot.log
    $WorkDirectory /var/spool/rsyslog # where to place spool files
    $ActionQueueFileName fwdRule1 # unique name prefix for spool files
    $ActionQueueMaxDiskSpace 1g # 1gb space limit (use as much as possible)
    $ActionQueueSaveOnShutdown on # save messages to disk on shutdown
    $ActionQueueType LinkedList # run asynchronously
    $ActionResumeRetryCount -1 # infinite retries if host is down
    *.* @syslog-ng1.example.com
    *.* @syslog-ng2.example.com
    syslog-ng configuration (abridged for brevity):
    options { flush_lines (0); time_reopen (10); log_fifo_size (1000); long_hostnames (off); use_dns (no); use_fqdn (yes); create_dirs (no); keep_hostname (yes); };
    source src { unix-stream("/dev/log"); internal(); udp(ip(0.0.0.0) port(514)); };
    destination per_host_destination {
        file("/var/log/syslog-ng/devices/$HOST/$FACILITY.log"
             owner("root") group("root") perm(0644)
             dir_owner(root) dir_group(root) dir_perm(0775) create_dirs(yes));
    };
    log { source(src); destination(per_host_destination); };
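    One direction I'm considering but haven't verified (so treat it as a sketch rather than a working config): keep the hard-coded-FQDN template trick, but put the syslog PRI header back into the template, since syslog-ng derives facility and severity from that header and my current template drops it. Here client01.example.com stands in for the real client FQDN, and the template is attached per-action rather than via $ActionForwardDefaultTemplate:

    $template MyTemplateWithPri,"<%PRI%>%timestamp% client01.example.com %syslogtag%%msg%"
    *.* @syslog-ng1.example.com;MyTemplateWithPri
    *.* @syslog-ng2.example.com;MyTemplateWithPri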

    Read the article

  • Supermicro IPMI on MBD-X8DAH+-F-O motherboard. Keyboard and mouse do not work after booting Windows Server 2008 R2

    - by LDelgado
    Hello everyone, I built a server with the mentioned motherboard and installed Windows Server 2008 R2 Enterprise on it. IPMI is integrated on the motherboard with its own dedicated NIC, which I've configured with its own IP address. I can remote into the server using IPMI, and I can remotely control the server settings before booting the OS (BIOS, RAID configuration, etc.). But when the OS boots, I lose the mouse and keyboard; I cannot use the keyboard or mouse when installing the OS either. So the keyboard and mouse only work when no OS is loaded - once the OS loads I lose them. That is my problem.
    I've been doing some research and trying a few things, but I have not been successful in fixing this issue. I may be wrong, but based on what I've found online, it seems the problem could be caused by the way the OS handles USB. The server is headless: there is no keyboard, mouse, or monitor plugged into it. When I boot up the OS and remote into it, I cannot see a mouse or keyboard listed in Device Manager. Based on what I've read, the OS should detect a mouse and a keyboard when connecting remotely via IPMI.
    The following are the solutions I've tried; nothing has worked so far:
    - I've updated the firmware of the IPMI component to the latest version, 1.33.
    - I made sure that the mouse mode was set to Absolute (Windows OS).
    - I've loaded the factory defaults several times.
    - I've enabled Port64h/60h Emulation under the USB settings in the BIOS.
    - I've disabled USB legacy support in the BIOS.
    - I made sure the firewall wasn't blocking IPMI (disabled the firewall).
    I've found threads in some forums from people having the same issue as me, but they were not running the same OS - they were running either Linux or FreeBSD. Most of them fixed the problem by selecting the right mouse mode (Linux in their case). One other solved it by disabling USB mass storage mode, stating: "When I set it to disable USB Mass Storage when no image is loaded, the ukbd came alive, and I'm typing this on the IPMI Console." Source: http://freebsd.1045724.n5.nabble.com/IPMI-Console-No-luck-once-OS-is-booted-td3967868.html
    I suspect the solution described in the previous paragraph is somehow related to my problem. I've found several threads on the internet describing the same problem, but none of them involved Windows Server 2008 R2. Again, I may be wrong, but it seems like that could be the issue; I just don't know how to go about applying such a solution in Windows Server 2008 R2. In any case, I could use your expertise. Maybe I am missing something, or maybe I'm on the right track. Your help is much appreciated. Thank you in advance.

    Read the article

  • Why are USB 2.0 devices crashing my system?

    - by BenAlabaster
    Background on the machine I'm having a problem with: The machine was inherited and appears to be circa 2003 (there's a date stamp on the power supply which leads me to this conclusion). I've got it set up as a Skype terminal for my 2-year-old to keep in touch with her grandparents and other members of the family - which everyone loves. It has a generic baby-ATX motherboard with no identifying markings; CPU-Z identifies the motherboard model as VT8601 but doesn't provide a manufacturer name, and there's one stamp on the motherboard that says "Rev.B". On board it has 10/100 LAN, 2 x USB 1.0, VGA, PS/2 for keyboard and mouse, a parallel port, 2 x serial ports, 2 x IDE, 1 x floppy, 2 x SDRAM slots, 1 x CPU socket seating a 1.3GHz Intel Celeron, 3 x PCI and 1 x AGP - although you can only use 2 of the PCI slots if you use the AGP slot, due to the physical layout of the board. It's got 768Mb of PC133 SDRAM (1 x 512Mb and 1 x 256Mb) installed, as well as a D-LINK WDA-2320 54G Wi-Fi network card and a generic USB 2.0 expansion board with 3 external + 1 internal USB connectors. All this is sitting in a slimline case. I don't know the wattage of the PSU, but can post it later if that proves to be helpful. The motherboard is running a version of Award BIOS; I don't have the version number to hand but can again post it later if it would be helpful. It has an 80Gb Western Digital hard drive freshly formatted and built with Windows XP Professional with Service Pack 3 and all current patches. In addition to Windows XP, the only other software it's running is Skype 4.1 (4.2 crashes the machine as soon as it starts up). It's got a Daytek MV150 15" touch screen running through the VGA and COM1 with the most current drivers from the Daytek website and the most current version of the ELO Touchsystems drivers for the touch component. The webcam is a Logitech Webcam C200 with the latest drivers from the Logitech website.
    The problem: If I hook any USB 2.0 devices up to this machine, it hangs the whole machine and I have to hard boot it to get it back up.
    Workarounds found: I can plug the same devices into the on-board USB 1.0 connectors and everything works fine, albeit at reduced performance. I've tried 3 different kinds of USB thumb drives, 3 different makes/models of webcams and my iPhone, all with the same effect. They're recognized and don't hang the machine when I hook them to the USB 1.0 ports, but if I hook them to the USB 2.0 ports, the machine hangs within a couple of seconds of recognizing that the devices were connected.
    Attempted solutions: I've tried disabling all the on-board devices that I'm not using - such as the on-board LAN, the second COM port, the AGP connector etc. - through the BIOS in a (perhaps misguided or futile) attempt to reduce power consumption. I don't think it had any effect, but it didn't do any harm either. I was wondering if the PSU wattage just isn't enough to drive the USB 2.0 devices; I've seen this suggested but haven't found any confirmation that it could really be an issue - nor have I found a way to work around it, if indeed it is one. Any ideas? The only thing I haven't done, which I only just thought of while writing this essay, is trying the USB 2.0 card in a different PCI slot, or re-ordering the Wi-Fi and USB cards in the slots - although I'm not sure if this will make any difference. I've installed the USB card in another machine and it works without issue, so it's not a problem with the USB card itself.
    Other thoughts: Perhaps this is an incompatibility between the USB 2.0 card and the BIOS - would re-flashing the BIOS with a newer version help? Do I need to be able to identify the manufacturer of the motherboard in order to find a BIOS edition specific to this motherboard, or will any version of Award BIOS function in its place?
    Question: Does anyone have any ideas that could help me get my USB 2.0 devices hooked up to this machine?

    Read the article

< Previous Page | 301 302 303 304 305 306 307 308 309 310 311 312  | Next Page >