Search Results

Search found 16587 results on 664 pages for 'virtual hardware'.

  • Making the iPhone work with stripped down Windows XP

    - by Gabriel
    Hi, this is my first time posting here and I have a really specific question. I have an ASUS Eee 901 running Windows XP Home. I had everything working well, but then I decided to improve performance by moving Windows to the smaller but faster internal SSD. I used nLite to strip down Windows, following the instructions here: http://wiki.eeeuser.com/howto:nlitexp I now have a very lightweight installation of XP Home with SP3 and all the current updates, and almost everything is working really well. I have installed iTunes and I CAN sync with no problems. However, each time I plug in my iPhone 3GS (latest firmware), Windows tries and fails to install drivers. The Found New Hardware Wizard launches, but nothing I do will make it complete successfully, with the result that the iPhone does not show up in Windows as removable storage or as a camera. When I launch the Camera and Scanner Wizard, it shows only my webcam, not the iPhone. I have verified that the following files are in place:

    Windows\System32\ptpusb.dll (regsvr32 successful)
    Windows\System32\ptpusd.dll (entry point not found, cannot be registered)
    Windows\System32\usbaaplrc.dll (entry point not found, cannot be registered)
    Windows\System32\drivers\usbaapl.sys
    Windows\System32\drivers\usbscan.sys
    Windows\System32\drivers\usbstor.sys

    Does anyone know if some other file is required, or if there's some other element preventing this from working? Edit (from posted answer): I did select Cameras & Camcorders in nLite, and my webcam is working fine for video and still capture.
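
    One cheap check, hedged since nLite may have removed more than just files: restart the imaging service that hosts PTP support and re-register the one DLL that does export a registration entry point (ptpusd.dll and usbaaplrc.dll are not COM-registrable, so the "entry point not found" from regsvr32 is expected for them):

        rem stisvc is the Windows Image Acquisition service on XP
        net stop stisvc
        regsvr32 %SystemRoot%\System32\ptpusb.dll
        net start stisvc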

  • How to access programs on one PC using another PC

    - by darkstar13
    Hi, I was recently given an old PC for my remote access at work. The tower that came with it has Windows XP installed, 400+ MB of RAM, and all USB devices disabled. I access my work applications using VPN / Citrix. Basically, it's very slow, plus it's bulky and just occupies space, so I am now hoping to find a way to integrate this work PC into my home PC. I tried putting its hard drive in my home PC's tower, set as the slave drive. However, when I boot my PC from this hard drive, I get stuck at the screen where Windows prompts me to select how to boot (e.g. Safe Mode, Safe Mode with Command Prompt, Last Known Good Configuration, etc.); whatever option I select, I am back at this screen after the reboot. I am thinking maybe I can clone the drive, mount the cloned drive, and access the system as a virtual machine, but I don't know if that will work (see the sketch below). I would like to know if there's something I can do so I can work at home on my home PC, where I can access my work programs to connect to VPN / Citrix. My home PC's OS is Windows 7 Ultimate x64.
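
    On the clone-and-VM idea: it can work, and it sidesteps the boot loop (which is usually XP objecting to the new machine's storage controller). A hedged sketch using VirtualBox, with the work drive imaged from a live CD first; device and file names here are illustrative:

        # from a Linux live CD, with the work drive showing up as /dev/sdb
        dd if=/dev/sdb of=workpc.img bs=1M
        # on the Windows 7 host, convert the raw image to a VirtualBox disk
        VBoxManage convertfromraw workpc.img workpc.vdi --format VDI
        # then attach workpc.vdi to a new XP virtual machine

    Expect Windows XP to demand reactivation once it boots on virtual hardware.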

  • Not recognizing second monitor after hibernate (Windows 7, Dell D630 laptop)

    - by Brooks Moses
    I have a Dell Latitude D630 laptop which I've recently updated to Windows 7 64-bit. (The Dell site confirms that it's Windows-7-compatible.) Normally it lives in a docking station with a second monitor connected to the DVI port on the docking station, and I use the second monitor in a multi-monitor configuration with the laptop screen. Sometimes I undock the laptop and use it separately. Here's the problem: If I hibernate the laptop while undocked, and then power it back up in the docking station, it does not recognize the second monitor. By which I mean that not only does it not share the desktop onto the second monitor, but if I go into the control panel for display settings and press "Detect", it does not even detect the existence of the second monitor. I can tell it to "use the VGA port anyway" for a second monitor, but the monitor is connected to a DVI port on the docking station, so that doesn't do anything useful. If I entirely reboot the laptop while it's connected to the docking station, it has no problem recognizing the second monitor and using it. But then, if I hibernate, undock, de-hibernate while undocked and rehibernate, and then re-dock and de-hibernate, it's back to not recognizing the second monitor again. I'm reasonably certain that this is not a limitation of the hardware; this worked fine on Windows XP. I'm currently using the Windows 7 driver for my video card. I attempted to use the video driver from the Dell website for this laptop, but Dell only provides Vista 64-bit drivers, not Windows 7 64-bit drivers. Their "Windows 7 compatibility" page suggests that the Vista drivers should work, but when I attempted to install the driver, it gave me a "this operating system not supported" error and refused to install. Any further ideas?

  • Wifi Drops Connections with WPA2-PSK

    - by graf_ignotiev
    I run a small computer lab made up of 10 computers with identical hardware and software (Dell Latitudes with Windows 7 x64 Enterprise), and I use a ZyWALL 2WG as a router/firewall. Nine of the computers connect to the router over wifi using WPA2-PSK encryption, while the last one is connected by ethernet cable. I'm having a problem where any computer connected to the wifi occasionally drops off the network (it cannot be pinged and the client cannot ping the gateway). It only happens on the wifi side, and only when the encryption is WPA2-PSK or WPA-PSK. I tried using another router of a different make and model and had no problems. Thinking it could be a software error, I reset the router to factory defaults and installed the newest firmware (V4.04(AQI.8) | 04/09/2010), but still have the problem. The 802.1X log gives the following error:

    User logout because of user disassociation.

    with this note, where 00242c582ece is the MAC address of the device:

    WPA2-PSK:00242c582ece:logout

    At this point I'm out of things to try and leads to follow. It looks like another user had the same or similar problem, but none of the proposed solutions work for me.

  • Has anyone used the sharedband connection bonding product?

    - by John Rennie
    See http://www.sharedband.com/ for details on the product. Obviously Sharedband aren't too keen on giving away their technical secrets, but I would guess that it bonds the connections at the IP layer, i.e. their routers send the IP packets to the Sharedband routers over all available lines, and the Sharedband routers handle all the virtual circuitry and provide the NATing to whatever IP address(es) they've assigned you. It looks like a clever idea, and a good way to provide some resilience over ADSL links. You can even use ADSL links from different ISPs and Sharedband will still bond them for you. But I find myself wondering how well it really works, and whether it's worth it. The Draytek routers can already load balance (though not bond) up to four ADSL lines, so the Sharedband product really only offers an advantage if you're hosting servers, i.e. you can have one IP address that accepts incoming connections through all your (working) ADSL lines. But should you really try to host servers on ADSL lines, especially since ADSL upload performance isn't stellar? Wouldn't it be better to use a hosted server, or maybe pay up for a leased line with an SLA? So I'm asking: is anyone using Sharedband, and if so, what do you think of it? JR

  • Word documents very slow to open over network, but fine when opened locally - on one machine

    - by Craig H
    Windows XP, Word 2003, patched. The issue is happening with several Word documents stored on a network drive. The Word documents are clearly a bit wonky (i.e. one is 675k, but if you copy everything except the last paragraph marker into a new document, the new document is only 30k). But that's only part of the problem. On one weird machine, and one machine only, it takes ~20 seconds to open these Word documents from the network drive. Copy the file to C: on that weird machine? Opens immediately. Go to other machines (that are very similar - same patch level, etc.) and open the same document from the network? Opens immediately. Delete normal.dot? 20 seconds. Log in with a different user on the weird machine? 20 seconds. Plug the weird machine into a different network port? 20 seconds. So the problem appears to be hardware-related (i.e. a wonky internal NIC) or related to a setting that is not profile-specific. Any ideas? "Scrubbing" all the documents isn't ideal for several reasons. This is driving me nuts because I swear I ran into this before many years ago and eventually figured it out, but I appear to have lost my notes.

  • Short POST data in HTTP

    - by Matt
    We're hosting a customer's Debian Linux web server. It's running a PHP-based web application. The server is sitting behind our firewall with its own virtual interface, and port 80 is forwarded internally to a machine sitting in the DMZ. The issue we're having is that when data is posted to the server, it seems to be cut short for some users. It's reproducible for some users on the same box, but the same user sending the same data on the same LAN from another PC works. The data gets cut to around 1140 bytes, I'm told. Any idea why this might be happening? The customer is blaming our firewall, but then surely we'd have issues with other services. I suspect it's a problem with the website itself. Suggestions on how to isolate the problem would be of help. Our firewall is an Astaro. EDIT: A customer has temporarily set the ethernet frame size on the server to 500 bytes. This made it work for now! I know some of the customers are using an internet provider that runs PPPoE.
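
    The 500-byte frame workaround plus the PPPoE users point strongly at an MTU / path-MTU-discovery problem: POSTs bigger than one segment lose the follow-on packets, so only ~1140 bytes arrive. The usual fix is to clamp TCP MSS on a box in the path; on a Linux gateway it would look like this (a sketch, interface specifics aside - the Astaro has its own MTU/MSS settings):

        # clamp TCP MSS to the discovered path MTU so PPPoE clients (MTU 1492)
        # never send segments the path cannot carry
        iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
            -j TCPMSS --clamp-mss-to-pmtu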

  • Microsoft Remote Desktop Services - Android

    - by Matt Rogers
    We have recently started testing Remote Desktop Services. We have deployed the environment using the latest server OS, Windows Server 2012 R2, with the Web Access, RD Gateway, Connection Broker, Virtualization Host and Session Host roles. We are running both virtual machine-based and session-based deployments. All of these are working as expected internally and externally when using a Windows workstation as the RDS client; however, the Android client is unable to launch applications. Once you install the app from Google Play you are given a screen to add Remote Resources, and after entering the appropriate URL, username and password we see the applications that have been published. Unfortunately, when we attempt to launch an app we get the following error:

    Connection Error
    Host not found. Please provide the fully-qualified name or the IP address of the host.

    We have already entered this information, otherwise I don't believe we would be able to see the published applications. I think the error is related to the certificate and how it is used to connect to the applications. Since this is our lab environment, we have not configured a valid external certificate on the servers, and the trusted certificate installed on the Android tablet points to our internal server / domain name. What I would like to know: has anyone configured RDS Web Access on Server 2012 R2 and attempted to externally connect an Android or iOS device using the Microsoft-supported Remote Desktop client? Are others experiencing the same problem we are? Were you able to resolve the issue? Was it related to the external cert / host name?
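
    If it does turn out to be the certificate/name mismatch, one approach is to put a certificate whose subject matches the externally published FQDN on all four RDS roles. A hedged PowerShell sketch using the 2012 R2 Set-RDCertificate cmdlet (path, password and broker name are placeholders):

        $pw = Read-Host -AsSecureString "PFX password"
        foreach ($role in "RDGateway","RDWebAccess","RDRedirector","RDPublishing") {
            Set-RDCertificate -Role $role -ImportPath C:\certs\external.pfx `
                -Password $pw -ConnectionBroker broker.lab.local -Force
        }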

  • Reloading NAT configuration on a running VMWare Server 2.0.2

    - by Jonathan Clarke
    I have a server running VMware Server 2.0.2. The host is Debian Lenny. I have 15-20 virtual machines running, all attached to a single NAT network (named vmnet8). I have configured VMware's NAT (the vmnet-natd daemon) to forward some incoming ports to one of the VMs, since it hosts some publicly accessible services. I did this via the file /etc/vmware/vmnet8/nat/nat.conf by adding lines like the following:

    80 = 192.168.100.100:80

    This works great: I can reach the web server on the VM at 192.168.100.100 by connecting to the host's IP address. Sometimes I need to add port redirections to this NAT configuration, so I add a line to the configuration file. Now for the question: how do I make the natd process take this new configuration into account? Clearly, restarting the host machine does take it into account, and the newly added port is forwarded. However, that is not an option on this server, so how should one do this without restarting the whole host? Thanks for any ideas!
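
    A hedged suggestion, since I haven't seen a reload switch documented for Server 2.0.x: many daemons reread their config on SIGHUP, and it costs nothing to try that on vmnet-natd before reaching for anything heavier:

        # ask natd to reread /etc/vmware/vmnet8/nat/nat.conf (if it honours SIGHUP)
        kill -HUP $(pidof vmnet-natd)
        # heavier fallback: restarts all the host-side VMware networking, so
        # expect the guests to lose connectivity while it runs
        /etc/init.d/vmware restart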

  • Cacti not working for SNMP data sources

    - by lorenzo-s
    I installed the cacti and snmpd packages on a Debian server. I'm able to display common graphs in Cacti (such as memory usage, load average, logged-in users, etc.) using the data templates listed as Unix. Now I want to replace these graphs with new ones using SNMP data sources, because those include CPU usage, and because it's possible I'll have to manage multiple hosts in the future. So I installed snmpd on the machine and left snmpd.conf as it is. In Cacti, I created three new data sources from SNMP templates for the host 127.0.0.1:

    ucd/net - CPU Usage - Nice
    ucd/net - CPU Usage - System
    ucd/net - CPU Usage - User

    Then I created a new graph from the ucd/net - CPU Usage template and selected the three data sources in the Graph Item Fields section. The graph is now enabled and running, but empty: no data has been collected. Under Console - Devices my SNMP host is listed as up and running:

    System: Linux ip-xx-xx-xxx-xxx 3.2.0-23-virtual #36-Ubuntu SMP Tue Apr 10 22:29:03 UTC 2012 x86_64
    Uptime: 929267 (0 days, 2 hours, 34 minutes)
    Hostname: ip-xx-xx-xxx-xxx
    Location: Sitting on the Dock of the Bay
    Contact: Me [email protected]

    In SNMP Options I left everything as it is:

    SNMP Version: Version 1
    SNMP Community: public
    SNMP Timeout: 500 ms
    Maximum OID's Per Get Request: 10

    In Console - Utilities - Cacti Log I have multiple warnings (two for each data source) every 5 minutes:

    10/29/2012 01:45:01 PM - CMDPHP: Poller[0] Host[2] DS[18] WARNING: Result from SNMP not valid. Partial Result: U
    10/29/2012 01:45:01 PM - CMDPHP: Poller[0] WARNING: SNMP Get Timeout for Host:'127.0.0.1', and OID:'.1.3.6.1.4.1.2021.4.15.0'
    10/29/2012 01:45:01 PM - CMDPHP: Poller[0] Host[1] DS[9] WARNING: Result from SNMP not valid. Partial Result: U
    10/29/2012 01:45:01 PM - CMDPHP: Poller[0] WARNING: SNMP Get Timeout for Host:'127.0.0.1', and OID:'.1.3.6.1.4.1.2021.11.52.0'
    10/29/2012 01:40:01 PM - CMDPHP: Poller[0] Host[2] DS[19] WARNING: Result from SNMP not valid. Partial Result: U
    10/29/2012 01:40:01 PM - CMDPHP: Poller[0] WARNING: SNMP Get Timeout for Host:'127.0.0.1', and OID:'.1.3.6.1.4.1.2021.4.6.0'
    [...]

    I have the feeling I'm missing something, but I cannot get it...
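
    Two checks outside Cacti are worth a look. First, reproduce the poller's query by hand; a timeout there proves the problem is snmpd, not Cacti. Second, Debian's stock snmpd.conf restricts the default community to the system subtree (a line along the lines of rocommunity public default -V systemonly), which would explain exactly this pattern: sysDescr and uptime answer fine while everything under .1.3.6.1.4.1.2021 times out.

        # same OID the Cacti log shows timing out
        snmpget -v1 -c public 127.0.0.1 .1.3.6.1.4.1.2021.4.15.0
        # after widening the view in /etc/snmp/snmpd.conf, e.g.
        #   rocommunity public 127.0.0.1
        /etc/init.d/snmpd restart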

  • Mac Management and Security

    - by Bart Silverstrim
    I was going through some literature on managing OS X laptops and asked someone some questions about usage scenarios for the MacBooks. I asked someone more knowledgeable than I whether my Mac could be taken over while visiting another site for a conference, or on the wifi network at a local coffee house, by policies from an OS X Server running Workgroup Manager (either legitimately belonging to the site, or someone running a copy of OS X Server on hardware hidden somewhere on the network). Apparently such a server could be set up to do things like limit my access to the Finder or impose other neat whiz-bang management features. He said that this is indeed possible: the server would be assigned via DHCP, the OS X Server would treat my Mac as a guest and could hand out restrictions, and apparently my Mac will happily accept them without notifying me or giving me an option - unlike Windows, which I believe would need to be joined to a domain before it becomes "managed" by Active Directory. So my question, as network admins and sysadmins with users traveling with MacBooks: is there a way to reasonably protect your users from having their machines hijacked, without resorting to just turning off networking all the time? Or isn't this much of a security hazard? What threat does this pose to the road warriors in your businesses?

  • RTL8192SU Linux Issue Installing Driver

    - by s32ialx
    OK, I've read tons of forums about getting the onboard RTL8192SE and the USB RTL8192SU working (the difference being that U = USB; both are 802.11n). I have both in a Toshiba L500D-00T that came pre-installed with Windows Vista x64 Home Premium, and I have obtained the free Windows 7 x64 Home Premium upgrade. The onboard wifi card can't hold a stable connection for more than 20 minutes in Windows, but the USB one is amazing. The problem is that I've tried both Ubuntu and Mandriva without success: the onboard card is detected and actually SHOWS that it's there, but no wireless networks are found, so it claims no SSIDs are broadcasting - which I know is a lie, since I'm running a 2Wire Bell DSL modem with built-in wifi and a Linksys WRT54G with DD-WRT firmware, and both are broadcasting fine. Why don't I use the USB adapter? In the hardware device manager in Mandriva it shows up as unknown, but shows that it's Realtek and that it's an 8192 chipset - yet there is no option to install a driver, and when I do a make in the terminal I get this error, with no clue what it means:

    [root@John-PC rtl8192se_linux_2.6.0010.1020.2009_64bit]# make
    make: *** /lib/modules/2.6.31.12-desktop-3mnb/build: No such file or directory.  Stop.
    make: *** [all] Error 2
    [root@John-PC rtl8192se_linux_2.6.0010.1020.2009_64bit]#

    Any help appreciated. In case it matters, I'm currently running Mandriva Spring 2010 Free.
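
    That make error only means the kernel build tree is absent: the driver can't compile without headers matching the running kernel (2.6.31.12-desktop-3mnb here). On Mandriva, something along these lines should populate /lib/modules/.../build (the exact package name is an assumption; it varies by kernel flavour):

        urpmi kernel-desktop-devel-latest
        # then retry, from the driver directory
        make && make install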

  • How do I fix this Windows 7 wireless connectivity issue?

    - by Charles Randall
    I have a laptop with an Intel Wireless Centrino 6300 module. Recently, the machine has stopped properly connecting to my wireless router: it gets stuck in a loop of connecting, then disconnecting and reconnecting, and while connected it simply says "No Internet Access." Running inSSIDer 2.0 shows my network jumping around between two channels - I know this isn't the case, because I've set my router to sit on one single channel. My MacBook Pro, Boxee Box, PS3, and Xbox 360 all connect fine to the wireless and have no problems at all. I know it's not the wireless module, as I bought a second one recently assuming the first had died, but I get the same behavior with both. Sometimes I can fix the issue temporarily by deleting the network (using the Manage Wireless Networks page) and then re-adding it via standard wireless methods. Then it will work for a few days. But inevitably the problem comes back, and now the laptop simply won't connect to the wireless at all, even if I take the steps that usually work. Since I've ruled out the hardware, and it's unlikely to be an interference issue (because I would expect to see it on a multitude of other devices), I would think at this point that it's a problem with Windows itself. One thing that might be a hint: even though I delete the network, when I add it again it's always listed as "Wireless Network Connection 2", even though there isn't another in the list.
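
    When the delete/re-add trick stops working, clearing the profile from an elevated command prompt (plus a stack reset) removes more state than the Manage Wireless Networks page does; the profile name below is whatever the first command lists:

        netsh wlan show profiles
        netsh wlan delete profile name="YourSSID"
        rem optional deeper reset; reboot afterwards
        netsh int ip reset
        netsh winsock reset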

  • Always failing to connect to Outlook Anywhere through TMG 2010 with certificate?

    - by Albert Widjaja
    Hi, I have successfully published Exchange ActiveSync and OWA (internally only) using TMG 2010, but somehow publishing Outlook Anywhere failed (as can be seen from https://www.testexchangeconnectivity.com).

    Settings: in IIS 7, I have unchecked "require SSL" and set the client certificate setting to "Ignore".

    Exchange CAS settings:

    ServerName : ExCAS02-VM
    SSLOffloading : True
    ExternalHostname : activesync.domain.com
    ClientAuthenticationMethod : Basic
    IISAuthenticationMethods : {Basic}
    MetabasePath : IIS://ExCAS02-VM.domainad.com/W3SVC/1/ROOT/Rpc
    Path : C:\Windows\System32\RpcProxy
    Server : ExCAS02-VM
    AdminDisplayName :
    ExchangeVersion : 0.1 (8.0.535.0)
    Name : Rpc (Default Web Site)
    DistinguishedName : CN=Rpc (Default Web Site),CN=HTTP,CN=Protocols,CN=ExCAS02-VM,CN=Servers,CN=Exchange Administrative.......
    Identity : ExCAS02-VM\Rpc (Default Web Site)
    Guid : 59873fe5-3e09-456e-9540-f67abc893f5e
    ObjectCategory : domainad.com/Configuration/Schema/ms-Exch-Rpc-Http-Virtual-Directory
    ObjectClass : {top, msExchVirtualDirectory, msExchRpcHttpVirtualDirectory}
    WhenChanged : 18/02/2011 4:31:54 PM
    WhenCreated : 18/02/2011 4:30:27 PM
    OriginatingServer : ADDC01.domainad.com
    IsValid : True

    Test-OutlookWebServices results:

    1013 Error: When contacting https://activesync.domain.com/Rpc received the error: The remote server returned an error: (500) Internal Server Error.
    1017 Error: [EXPR]-Error when contacting the RPC/HTTP service at https://activesync.domain.com/Rpc. The elapsed time was 0 milliseconds.

    https://www.testexchangeconnectivity.com testing result:

    Checking the IIS configuration for client certificate authentication. Client certificate authentication was detected.
    Additional Details: Accept/Require client certificates were found. Set the IIS configuration to Ignore Client Certificates if you aren't using this type of authentication.

    Environment: Windows Server 2008 (HT+CAS), Exchange Server 2007 SP1, TMG 2010 Standard, Outlook 2007 SP2 clients. Any kind of help would be greatly appreciated. Thanks.
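
    Given that ActiveSync and OWA publish fine, one thing worth trying before digging deeper into TMG is tearing down and re-creating Outlook Anywhere on the CAS, so the RpcProxy settings are rebuilt cleanly. A hedged Exchange 2007 Management Shell sketch, reusing the values shown above:

        Disable-OutlookAnywhere -Server ExCAS02-VM
        Enable-OutlookAnywhere -Server ExCAS02-VM -ExternalHostname activesync.domain.com -ClientAuthenticationMethod Basic -SSLOffloading $true
        # give the config time to propagate to IIS, then re-run
        Test-OutlookWebServices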

  • Linksys/Cisco Small Business SRW-Series (i.e. SRW248G4) - Overcoming the Limitations

    - by Warren P
    We just purchased a Cisco/Linksys SRW248G4 switch to try it out. We have always had unmanaged switches before, and this is our first "somewhat managed" switch. So far the major limitations are:

    Only Internet Explorer 6 (the manual says IE 5.5!) works for the web interface.
    SSH exists but is not practically usable, because the only supported key length is no longer even used by most modern SSH installs (I get the error "RSA modulus too small" in OpenSSH 4.x/5.x).

    This is with the latest firmware revision, I believe, although Cisco's website does not actually tell you what version you're downloading. All in all, I think they must be trying to tell me that if I want a good-quality switch, I shouldn't buy these SRWs and should buy a Dell or an HP ProCurve, or save up my pennies and buy a Catalyst. The question here, then, at long last: has anyone gotten the web interface to work via some IE 7 or IE 8 compatibility-mode settings, or used another browser (Opera? KDE/Safari/WebKit?) spoofing IE6? Is there any way to get the SSH key length upgraded? I'm guessing a 0% chance of a yes on that last one. I found an XP machine and used telnet (via PuttyTel.exe) and IE6 to set this up, and I doubt we'll have to touch it again, which is fine with us. But it would be nice if I could administer this thing from either (a) a Linux box, or (b) my primary desktop, which is Windows 7. It looks like XP Mode with IE6 on the virtual XP machine may be my only way to administer this type of switch via the web.

  • Can I change the user id of a user on one Linux server to match another server in /etc/passwd?

    - by user76177
    I have a Rails application that is on a virtual machine (RHEL 6), and its database is on dedicated hardware (also RHEL 6). The app server has an NFS directory from the db server mounted and accessible. It needs to write images to that server that are uploaded via the app. Background processes on the db server need to read and write to the same directory, as they perform resizing operations on the uploaded files. Right now none of this is working, because the user ids are different between the two systems. I only need this to work for this one application, so it is way too much overhead to put an LDAP system in place. Can I simply change the user id of this one user on one of the systems, or will that cause mass chaos? UPDATE: The fix worked, at least on local devices (see the sketch below). Unfortunately the device I have mounted from the main db server still thinks my user id is 502 instead of 506. Do I need to remount that device, or is there an NFS daemon I can stop and restart to refresh it?
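
    A sketch of the single-UID change, assuming the app-server account (here called appuser, an illustrative name) should move from 502 to 506 to match the db server; usermod fixes the home directory, and the find pass catches everything else that still carries the old number:

        usermod -u 506 appuser
        find / -xdev -user 502 -exec chown -h appuser {} \;

    For the mount that still reports 502, the cheap things to try are a fresh login (so your session picks up the new UID) and a plain remount:

        umount /path/to/mount && mount /path/to/mount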

  • Can't install py-subversion on freebsd 8.2

    - by max taldykin
    I'm trying to install the Python bindings for Subversion:

    # cd /usr/ports/devel/py-subversion
    # make
    ===> Patching for py26-subversion-1.6.15
    ===> Applying extra patch /usr/ports/devel/py-subversion/../../devel/subversion/files/bindings-patch-subversion--bindings--swig--perl--native--Makefile.PL.in
    cannot open /usr/ports/devel/py-subversion/../../devel/subversion/files/bindings-patch-subversion--bindings--swig--perl--native--Makefile.PL.in: No such file or directory
    *** Error code 2

    Indeed, there is no such file in subversion/files, but there is a file patch-subversion::bindings::swig::perl::natives::Makefle.PL.in (with colons instead of hyphens). After renaming it and rerunning make I got the same error again, but now there is nothing like bindings-* in subversion/files. So, the question is: why is this so, and how can I install py-subversion? PS: FreeBSD is running on a virtual private server, so I think it is somehow patched.

    # uname -a
    FreeBSD mskhug.ru 8.2-PRERELEASE FreeBSD 8.2-PRERELEASE #0 r50: Thu Feb 24 10:15:34 IRKT 2011 [email protected]:/root/src/sys/amd64/compile/DEBUG amd64
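
    The hyphen/colon mismatch usually means devel/py-subversion and devel/subversion come from different snapshots of the ports tree: one port looks for a patch by a name the other port has since renamed. Syncing the whole tree and rebuilding is the standard cure; a sketch:

        portsnap fetch update    # on a first-ever run: portsnap fetch extract
        cd /usr/ports/devel/py-subversion
        make clean && make install clean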

  • Can't remotely connect through SQL Server Management Studio

    - by FAtBalloon
    I have set up a SQL Server 2008 Express instance on a dedicated Windows 2008 Server hosted by 1and1.com. I cannot connect remotely to the server through Management Studio. I have taken the steps below and am beyond any further ideas. I have researched the site and cannot figure anything else out, so please forgive me if I missed something obvious, but I'm going crazy. Here's the lowdown:

    The SQL Server instance is running and works perfectly when working locally.
    In SQL Server Management Studio, I have checked the box "Allow Remote Connections to this Server".
    I have removed any external hardware firewall settings from the 1and1 admin panel.
    Windows Firewall on the server has been disabled, but just for kicks I added an inbound rule that allows all connections on port 1433.
    In SQL Native Client configuration, TCP/IP is enabled. I also made sure the "IP1" entry with the server's IP address had 0 for the dynamic port, but I then deleted that and entered 1433 in the regular TCP Port field. I also set the "IPALL" TCP Port to 1433.
    SQL Server Browser is also running, and I also tried adding an ALIAS. I restarted SQL Server after setting each of these values.
    Doing a "netstat -ano" on the server machine returns:
    TCP 0.0.0.0:1433 LISTENING
    UDP 0.0.0.0:1434 LISTENING
    A port scan from my local computer says that the port is FILTERED instead of LISTENING. I also tried to connect from Management Studio on my local machine, and it throws a connection error. I tried the following server names, with both SQL Server and Windows Authentication marked in the database security:
    ipaddress\SQLEXPRESS,1433
    ipaddress\SQLEXPRESS
    ipaddress
    ipaddress,1433
    tcp:ipaddress\SQLEXPRESS
    tcp:ipaddress\SQLEXPRESS,1433
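
    FILTERED from outside while netstat shows LISTENING is the signature of something upstream of Windows dropping the packets, which with a hosted dedicated box usually means the provider's network-level firewall. Two hedged checks, one per side:

        rem on the server: list any firewall rule mentioning 1433
        netsh advfirewall firewall show rule name=all | findstr /i 1433
        rem from the home machine: a raw TCP test, separating reachability
        rem from SQL authentication problems
        telnet your.server.ip 1433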

  • Any ideas out there as to how the data can be recovered from an SSD?

    - by ben
    A friend had some form of catastrophic failure on an HP Mini 1000, leaving it unbootable. Of course there was data that wasn't backed up. I've removed the SSD and hooked it up to a ZIF 40 enclosure, but cannot seem to get the drive recognized in Windows 7. In Disk Management it displays as present but uninitialized, and attempting to initialize it presents an error: Virtual Disk Manager - "The device is not ready". There is scant information on MIE (the custom OS), so I'm not even sure what kind of filesystem I'm dealing with. In any case, if the filesystem is indeed some flavor other than FAT or NTFS, is this error consistent with that? Are there any creative ideas out there as to how the data can be recovered? Update: Thanks for all the suggestions! I hadn't even considered running a live CD. Unfortunately no luck with Ubuntu (live CD) or explore2fs. The ZIF connection seems OK (color-coded green LED for proper connect, orange for not). The drive can't be initialized and therefore can't be formatted, so I guess there may be some real damage. It probably needs to head to a specialist. Thanks again for the feedback, much appreciated.
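
    If Windows refuses even to initialize it, the next step I'd try is a read-only rescue image from a Linux live environment, then work on the image; nothing below writes to the SSD. The device name is illustrative - use whatever dmesg shows for the ZIF enclosure:

        # copy whatever is readable, keeping a map of bad areas for later retries
        ddrescue -d /dev/sdb mini1000.img mini1000.map
        # identify the partitioning/filesystem, then carve files if mounting fails
        file mini1000.img
        photorec mini1000.img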

  • monitoring a /21 for potential bad guys with snort and port mirroring

    - by Adeodatus
    Hi all, I want/need to start monitoring our network a bit better. It's an odd network in that it comprises two public /22 ranges and a slew of private admin IPs. I do have one point in the network where it all comes together, and I can turn on port mirroring on the Catalyst there. From that port, I'd like to turn up a box running various utilities. Snort is high on my list, but it'd be nice to also get some networking statistics with something like NetFlow. So, what are people's thoughts? I can turn up a box for this with a bit of ease; we have the hardware available. What should I run? I'd love to know what kind of nasty things are potentially going on, but I'd also like to see statistics on what people are doing on the network, so I can better tweak our systems to handle it and improve performance. I'm open, so please give me some ideas to go along with what I've got.
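
    A minimal sketch of the capture box, assuming eth1 faces the mirror port and a NetFlow collector (nfdump/nfsen, address illustrative) sits elsewhere; Snort covers the nasty-things side, softflowd the statistics side:

        # capture interface: up, promiscuous, no address assigned
        ip link set eth1 up
        ip link set eth1 promisc on
        # IDS on the mirrored traffic, daemonized
        snort -D -i eth1 -c /etc/snort/snort.conf
        # NetFlow export from the same interface
        softflowd -i eth1 -n 192.0.2.10:9995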

  • MAC addresses on dual-NIC mainboards

    - by Tom O'Connor
    Here's a weird problem. We've got a number of devices with dual-NIC mainboards. Some are Realtek NICs, which suck. Some are Intel e1000s, which don't. I've just noticed on 2 machines, one with an Intel NIC and one with a Realtek, that when I put the MAC address of one machine into the dhcpd.conf file on our DHCP server to get it to PXE boot into a rebuild environment, initially everything is fine: the server gets a DHCP allocation and PXE boots into the Ubuntu preseed environment. On one or two machines, it gets as far as Ubuntu's DHCP network configuration, and fails. If I pull up a busybox shell (on tty2 on the installing machine) and run ip link, I can see that the UP flag is set on the other NIC. Here's the dhcpd.conf entry:

    host xeon16-ghz240-gb48-node1 {
      hardware ethernet BC:AE:C5:07:1F:18;
      filename "pxelinux.0";
      next-server 192.168.123.80;
    }

    Only one NIC is actually connected (deliberately), and on the misbehaving machine the ip link output shows that the NIC that's in the dhcpd config is not marked as UP, while the link that is UP isn't the one in DHCP. So far I've seen this on two brands of dual-NIC configuration. Does anyone know a) what's causing it, and b) what we can do about it?
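
    One workaround, since dhcpd matches exactly one hardware address per host block: declare each NIC as its own host with the same boot parameters, so whichever interface the installer brings UP gets the right answer (the second MAC below is a placeholder):

        host xeon16-ghz240-gb48-node1-nic0 {
          hardware ethernet BC:AE:C5:07:1F:18;
          filename "pxelinux.0";
          next-server 192.168.123.80;
        }
        host xeon16-ghz240-gb48-node1-nic1 {
          # the other onboard NIC's MAC goes here
          hardware ethernet 00:11:22:33:44:55;
          filename "pxelinux.0";
          next-server 192.168.123.80;
        }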

  • Setting up Apache 2.2 + FastCGI + SuExec + PHP-FPM on Centos 6

    - by mr1031011
    I'm trying to follow this very detailed instruction here; I simply changed from the www-data user to the apache user, and am using /var/www/hosts/sitename/public_html instead of /home/user/public_html. However, I spent the whole day trying to figure out why the PHP file content is displayed without being parsed. I just can't seem to figure this out. Below is my current config.

    /etc/httpd/conf.d/fastcgi.conf:

    User apache
    Group apache
    LoadModule fastcgi_module modules/mod_fastcgi.so
    # dir for IPC socket files
    FastCgiIpcDir /var/run/mod_fastcgi
    # wrap all fastcgi script calls in suexec
    FastCgiWrapper On
    # global FastCgiConfig can be overridden by FastCgiServer options in vhost config
    FastCgiConfig -idle-timeout 20 -maxClassProcesses 1
    # sample PHP config
    # see /usr/share/doc/mod_fastcgi-2.4.6 for php-wrapper script
    # don't forget to disable mod_php in /etc/httpd/conf.d/php.conf!
    #
    # to enable privilege separation, add a "SuexecUserGroup" directive
    # and chown the php-wrapper script and parent directory accordingly
    # see also http://www.brandonturner.net/blog/2009/07/fastcgi_with_php_opcode_cache/
    # FastCgiServer /var/www/www-data/php5-fcgi
    #AddType application/x-httpd-php .php
    AddHandler php-fcgi .php
    Action php-fcgi /fcgi-bin/php5-fcgi
    Alias /fcgi-bin/ /var/www/www-data/
    #FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -socket /tmp/php5-fpm.sock -pass-header Authorization
    #DirectoryIndex index.php
    # <Location /fcgi-bin/>
    #   Order Deny,Allow
    #   Deny from All
    #   Allow from env=REDIRECT_STATUS
    SetHandler fcgid-script
    Options +ExecCGI
    </Location>

    /etc/httpd/conf.d/vhost.conf:

    <VirtualHost>
        DirectoryIndex index.php index.html index.shtml index.cgi
        SuexecUserGroup www.mysite.com mygroup
        Alias /fcgi-bin/ /var/www/www-data/www.mysite.com/
        DocumentRoot /var/www/hosts/mysite.com/w/w/w/www/
        <Directory /var/www/hosts/mysite.com/w/w/w/www/>
            Options -Indexes FollowSymLinks
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>
    </VirtualHost>

    PS:
    1. With PHP 5.5, do I even need FPM, or is it already included?
    2. I'm using mod_fastcgi; I'm not sure if this is the problem and whether I should switch to mod_fcgid? There seem to be conflicting reports on the internet about which one is better. I have many virtual hosts running on the machine and hope to be able to provide each user with their own opcache.
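
    For reference, the classic suexec layout (under either fastcgi module) starts php-cgi through a per-site wrapper owned by the SuexecUserGroup user, which is also what gives each vhost its own pool of PHP processes and its own opcache. A hedged sketch of such a wrapper (path and tuning values are illustrative):

        #!/bin/sh
        # e.g. /var/www/hosts/mysite.com/fcgi-bin/php-wrapper
        PHP_FCGI_CHILDREN=4         # php-cgi processes for this site
        PHP_FCGI_MAX_REQUESTS=1000  # recycle workers to contain leaks
        export PHP_FCGI_CHILDREN PHP_FCGI_MAX_REQUESTS
        exec /usr/bin/php-cgi

    On the PS questions: PHP has shipped the FPM SAPI since 5.3.3, but it runs as a separate daemon from php-cgi; mod_fastcgi vs. mod_fcgid only changes how Apache reaches whichever SAPI you run.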

  • How do I effectively use WinSCP on my GoDaddy Dedicated Hosting

    - by Scott
    After being told that a Virtual Private Server would not fit the scope of my project, I have timidly entered the world of dedicated hosting, which is forcing me to learn the basics of being a Linux server admin. GoDaddy has a master account for the server, and when you use SSH, they want you to use "su" to switch to the root user. So far I have been able to do everything I needed via the command line as this root user. However, now I need to upload files to my server, and I'm used to using WinSCP for that. I can use my general server account to view the files, but when I try to drag or create files it says that I cannot because I do not have permission to do so. I have researched the WinSCP documentation and it seems that this "su" step is beyond the scope of the program. How am I to grant myself access to upload these files using SSH? Should I create a user with the proper permissions? I'm happy to do this, but so far I have not been able to make sense of what I have found online. I'm going to try to move forward, but any help and/or insight is appreciated.
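
    One conventional setup, sketched with illustrative names: as root (via su), give your everyday account group write access to the web tree, after which WinSCP can log in directly as that account and upload normally:

        groupadd webadmins
        usermod -a -G webadmins scott
        chown -R root:webadmins /var/www/html
        chmod -R g+w /var/www/html
        # make new subdirectories keep the group
        find /var/www/html -type d -exec chmod g+s {} \;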

  • Windows Terminal Server: occasional memory violation for applications

    - by syneticon-dj
    On a virtualized (ESXi 4.1) Windows Server 2008 SP2 32-bit machine which is used as a terminal server, I occasionally (approximately 1-3 event log entries a day) see applications fail with an 0xc0000005 error - apparently a memory access violation. The problem seems quite random and is only poorly reproducible: applications may run for hours, fail with 0xc0000005 and restart quite happily, or throw the access violation at startup and then start flawlessly on the second attempt. The names of executables, modules and offset addresses vary, although a single executable tends to fail with the same modules and the same memory offset addresses (like "OUTLOOK.EXE" repeatedly failing on module "olmapi32.dll" with the offset "0x00044b7a") - even across multiple users' logons and with several days passing without a single failure in between. The offset addresses seem to change across reboots, however. Only certain executables seem affected by the problem, although I may simply not be seeing a sufficient number of runs of the other ones. I first suspected a problem with the physical machine's RAM, but ruled this out as rather unlikely: the memory has ECC, and I've already moved the virtual machine across hosts several times without any perceptible change. I've seen that DEP was enabled in "OptOut" mode on this machine:

    C:\Users\administrator>wmic OS Get DataExecutionPrevention_SupportPolicy
    DataExecutionPrevention_SupportPolicy
    3

    and tried changing the policy to OptIn via startup options:

    bcdedit.exe /set {current} nx OptIn

    but have yet to see any effect - I would also expect Outlook 2007 or Adobe Reader 9 (both affected applications) to play well with DEP. Any other ideas why the apps may be failing?
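
    After the bcdedit change it's worth confirming (post-reboot) that the policy actually took; the same WMI query should now report 2 (OptIn) rather than 3 (OptOut):

        C:\Users\administrator>wmic OS Get DataExecutionPrevention_SupportPolicy
        DataExecutionPrevention_SupportPolicy
        2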

  • Server Performance

    - by sb12
    I know very little about performance tuning of servers, so I thought I'd put this up here as I start some research, just to get some direction. I am in the process of migrating from my old server to a new one; both are 64-bit machines. One is a few years old, the other brand new (a PowerEdge R410).

    The old server: 2 CPUs, 3.4 GHz Pentiums, 8 GB of RAM, Fedora 11.
    The new server: 16 CPUs, 3.2 GHz Xeons, 16 GB of RAM, CentOS 6.2, with RAID10 (no RAID on the old one).

    Both servers currently have the same MySQL database with the same data migrated. I wrote a Perl script that simply steps through each row of a table (about 18000 rows) and updates a value in that row; every row in the table is updated. Out of curiosity I ran this script on both machines, just to see how the new server would perform vs. the old one, and it produced interesting results: the old server was twice as fast as the new one. Looking at the databases, both are configured exactly the same (the new one being a dump of the old one). Anyone any ideas why this would be, given the hardware gap between the two? As I said, I'm about to start some digging, but thought I'd put this up here to maybe get some good direction. Many thanks in advance.
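
    One hedged experiment before digging deeper, assuming the table is InnoDB: 18000 autocommitted UPDATEs cost one log flush each, so the run mostly measures fsync latency, and an old disk that caches writes will beat a new RAID10 whose controller (if it lacks a battery-backed write cache) honours every flush. Relax durability temporarily and re-time the script (the script name is a placeholder):

        mysql -e "SET GLOBAL innodb_flush_log_at_trx_commit = 2"
        time perl update_rows.pl
        mysql -e "SET GLOBAL innodb_flush_log_at_trx_commit = 1"

    If the gap collapses, the answer is the write path, not MySQL configuration; wrapping the script's updates in a single transaction gives much the same effect without touching server settings.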
